Add a functional WebRTC VP9 codec
https://bugs.webkit.org/show_bug.cgi?id=213778

Reviewed by Eric Carlson.

Source/ThirdParty/libwebrtc:

Update libvpx to synchronize VP9 support with the rest of libwebrtc.
Disable vp9_noop in the build system and use the functional VP9 encoder/decoder instead.
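
With the noop sources compiled in, the VP9 factory entry points return stubs, so VP9 never shows up in codec capabilities; swapping in the functional encoder/decoder sources makes the same entry points return real instances. A minimal self-contained sketch of that pattern (the type and function names below are illustrative, not libwebrtc's actual API; only the build-switch idea is taken from this change):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Illustrative stand-ins for the encoder types; not libwebrtc's classes.
struct VideoEncoder { virtual ~VideoEncoder() = default; };
struct Vp9Encoder : VideoEncoder {};

// Build-time switch, analogous in spirit to building the functional VP9
// sources instead of vp9_noop.
#define BUILD_WITH_FUNCTIONAL_VP9 1

std::vector<std::string> SupportedVideoCodecs() {
  std::vector<std::string> codecs = {"VP8"};
#if BUILD_WITH_FUNCTIONAL_VP9
  codecs.push_back("VP9");  // functional build: VP9 is advertised
#endif
  return codecs;
}

std::unique_ptr<VideoEncoder> CreateVp9Encoder() {
#if BUILD_WITH_FUNCTIONAL_VP9
  return std::make_unique<Vp9Encoder>();  // real encoder instance
#else
  return nullptr;  // noop-style stub: the codec exists in name only
#endif
}
```

With the switch off, capability queries omit VP9 and encoder creation fails, which is why the layout test below can only pass against the functional build.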

* Configurations/libvpx.xcconfig:
* Source/third_party/libvpx/BUILD.gn:
* Source/third_party/libvpx/OWNERS:
* Source/third_party/libvpx/README.chromium:
* Source/third_party/libvpx/generate_gni.sh:
* Source/third_party/libvpx/libvpx_srcs.gni:
* Source/third_party/libvpx/lint_config.sh:
* Source/third_party/libvpx/run_perl.py:
* Source/third_party/libvpx/source/config/ios/arm-neon/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/ios/arm-neon/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.asm:
* Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.c:
* Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.h:
* Source/third_party/libvpx/source/config/ios/arm-neon/vpx_dsp_rtcd.h:
* Source/third_party/libvpx/source/config/ios/arm64/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/ios/arm64/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/ios/arm64/vpx_config.asm:
* Source/third_party/libvpx/source/config/ios/arm64/vpx_config.c:
* Source/third_party/libvpx/source/config/ios/arm64/vpx_config.h:
* Source/third_party/libvpx/source/config/ios/arm64/vpx_dsp_rtcd.h:
* Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.asm:
* Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.c:
* Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.h:
* Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_dsp_rtcd.h:
* Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vp8_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vp8_rtcd.h.
* Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vp9_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vp9_rtcd.h.
* Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_config.asm: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.asm.
* Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_config.c: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.c.
(vpx_codec_build_config):
* Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_config.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.h.
* Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_dsp_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_dsp_rtcd.h.
* Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_scale_rtcd.h: Added.
* Source/third_party/libvpx/source/config/linux/arm-neon/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/linux/arm-neon/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.asm:
* Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.c:
* Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.h:
* Source/third_party/libvpx/source/config/linux/arm-neon/vpx_dsp_rtcd.h:
* Source/third_party/libvpx/source/config/linux/arm/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/linux/arm/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/linux/arm/vpx_config.asm:
* Source/third_party/libvpx/source/config/linux/arm/vpx_config.c:
* Source/third_party/libvpx/source/config/linux/arm/vpx_config.h:
* Source/third_party/libvpx/source/config/linux/arm/vpx_dsp_rtcd.h:
* Source/third_party/libvpx/source/config/linux/arm64-highbd/vp8_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vp8_rtcd.h.
* Source/third_party/libvpx/source/config/linux/arm64-highbd/vp9_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vp9_rtcd.h.
* Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_config.asm: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.asm.
* Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_config.c: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.c.
(vpx_codec_build_config):
* Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_config.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.h.
* Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_dsp_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_dsp_rtcd.h.
* Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_scale_rtcd.h: Added.
* Source/third_party/libvpx/source/config/linux/arm64/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/linux/arm64/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/linux/arm64/vpx_config.asm:
* Source/third_party/libvpx/source/config/linux/arm64/vpx_config.c:
* Source/third_party/libvpx/source/config/linux/arm64/vpx_config.h:
* Source/third_party/libvpx/source/config/linux/arm64/vpx_dsp_rtcd.h:
* Source/third_party/libvpx/source/config/linux/generic/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/linux/generic/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/linux/generic/vpx_config.asm:
* Source/third_party/libvpx/source/config/linux/generic/vpx_config.c:
* Source/third_party/libvpx/source/config/linux/generic/vpx_config.h:
* Source/third_party/libvpx/source/config/linux/generic/vpx_dsp_rtcd.h:
* Source/third_party/libvpx/source/config/linux/ia32/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/linux/ia32/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/linux/ia32/vpx_config.asm:
* Source/third_party/libvpx/source/config/linux/ia32/vpx_config.c:
* Source/third_party/libvpx/source/config/linux/ia32/vpx_config.h:
* Source/third_party/libvpx/source/config/linux/ia32/vpx_dsp_rtcd.h:
* Source/third_party/libvpx/source/config/linux/mips64el/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/linux/mips64el/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/linux/mips64el/vpx_config.c:
* Source/third_party/libvpx/source/config/linux/mips64el/vpx_config.h:
* Source/third_party/libvpx/source/config/linux/mips64el/vpx_dsp_rtcd.h:
* Source/third_party/libvpx/source/config/linux/mipsel/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/linux/mipsel/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/linux/mipsel/vpx_config.c:
* Source/third_party/libvpx/source/config/linux/mipsel/vpx_config.h:
* Source/third_party/libvpx/source/config/linux/mipsel/vpx_dsp_rtcd.h:
* Source/third_party/libvpx/source/config/linux/x64/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/linux/x64/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/linux/x64/vpx_config.asm:
* Source/third_party/libvpx/source/config/linux/x64/vpx_config.c:
* Source/third_party/libvpx/source/config/linux/x64/vpx_config.h:
* Source/third_party/libvpx/source/config/linux/x64/vpx_dsp_rtcd.h:
* Source/third_party/libvpx/source/config/mac/ia32/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/mac/ia32/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/mac/ia32/vpx_config.asm:
* Source/third_party/libvpx/source/config/mac/ia32/vpx_config.c:
* Source/third_party/libvpx/source/config/mac/ia32/vpx_config.h:
* Source/third_party/libvpx/source/config/mac/ia32/vpx_dsp_rtcd.h:
* Source/third_party/libvpx/source/config/mac/x64/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/mac/x64/vp8_rtcd_no_acceleration.h:
* Source/third_party/libvpx/source/config/mac/x64/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/mac/x64/vpx_config.asm:
* Source/third_party/libvpx/source/config/mac/x64/vpx_config.c:
* Source/third_party/libvpx/source/config/mac/x64/vpx_config.h:
* Source/third_party/libvpx/source/config/mac/x64/vpx_dsp_rtcd.h:
* Source/third_party/libvpx/source/config/mac/x64/vpx_dsp_rtcd_no_acceleration.h:
* Source/third_party/libvpx/source/config/nacl/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/nacl/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/nacl/vpx_config.c:
* Source/third_party/libvpx/source/config/nacl/vpx_config.h:
* Source/third_party/libvpx/source/config/nacl/vpx_dsp_rtcd.h:
* Source/third_party/libvpx/source/config/vpx_version.h:
* Source/third_party/libvpx/source/config/win/arm64/vp8_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vp8_rtcd.h.
* Source/third_party/libvpx/source/config/win/arm64/vp9_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vp9_rtcd.h.
* Source/third_party/libvpx/source/config/win/arm64/vpx_config.asm: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.asm.
* Source/third_party/libvpx/source/config/win/arm64/vpx_config.c: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vpx_config.c.
(vpx_codec_build_config):
* Source/third_party/libvpx/source/config/win/arm64/vpx_config.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.h.
* Source/third_party/libvpx/source/config/win/arm64/vpx_dsp_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_dsp_rtcd.h.
* Source/third_party/libvpx/source/config/win/arm64/vpx_scale_rtcd.h: Added.
* Source/third_party/libvpx/source/config/win/ia32/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/win/ia32/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/win/ia32/vpx_config.asm:
* Source/third_party/libvpx/source/config/win/ia32/vpx_config.c:
* Source/third_party/libvpx/source/config/win/ia32/vpx_config.h:
* Source/third_party/libvpx/source/config/win/ia32/vpx_dsp_rtcd.h:
* Source/third_party/libvpx/source/config/win/x64/vp8_rtcd.h:
* Source/third_party/libvpx/source/config/win/x64/vp9_rtcd.h:
* Source/third_party/libvpx/source/config/win/x64/vpx_config.asm:
* Source/third_party/libvpx/source/config/win/x64/vpx_config.c:
* Source/third_party/libvpx/source/config/win/x64/vpx_config.h:
* Source/third_party/libvpx/source/config/win/x64/vpx_dsp_rtcd.h:
* libwebrtc.xcodeproj/project.pbxproj:

LayoutTests:

* webrtc/vp9-expected.txt:
* webrtc/vp9.html:
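
The updated test forces VP9 negotiation by rewriting the offer SDP before it is applied; the real munging is done by the setCodec() helper in routines.js. The rewriting idea can be sketched as follows (a simplified, illustrative implementation, not the helper's exact behavior):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Rewrite the m=video line of an SDP blob so that only the named codec's
// payload type is negotiated. Two passes: find the payload type bound to the
// codec, then drop every other payload type from the media line.
std::string preferOnlyCodec(const std::string& sdp, const std::string& codec) {
  // Pass 1: "a=rtpmap:<pt> <codec>/<clock-rate>" gives the payload type.
  std::string payloadType;
  const std::string rtpmap = "a=rtpmap:";
  std::istringstream scan(sdp);
  std::string line;
  while (std::getline(scan, line)) {
    auto space = line.find(' ');
    if (line.rfind(rtpmap, 0) == 0 && space != std::string::npos
        && line.compare(space + 1, codec.size() + 1, codec + "/") == 0) {
      payloadType = line.substr(rtpmap.size(), space - rtpmap.size());
      break;
    }
  }
  if (payloadType.empty())
    return sdp;  // codec not offered; leave the SDP untouched

  // Pass 2: keep "m=video <port> <proto>" and emit a single payload type.
  std::ostringstream out;
  std::istringstream rewrite(sdp);
  while (std::getline(rewrite, line)) {
    if (line.rfind("m=video ", 0) == 0) {
      auto afterPort = line.find(' ', 8);
      auto afterProto = line.find(' ', afterPort + 1);
      line = line.substr(0, afterProto + 1) + payloadType;
    }
    out << line << '\n';
  }
  return out.str();
}
```

Once the answerer sees only the VP9 payload type on the media line, the connection must use the VP9 encoder/decoder end to end, which is what the black-frame checks then exercise.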


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@263820 268f45cc-cd09-0410-ab3c-d52691b4dbfc
diff --git a/LayoutTests/ChangeLog b/LayoutTests/ChangeLog
index 4f20476..2158f4f 100644
--- a/LayoutTests/ChangeLog
+++ b/LayoutTests/ChangeLog
@@ -1,3 +1,13 @@
+2020-07-01  Youenn Fablet  <youenn@apple.com>
+
+        Add a functional WebRTC VP9 codec
+        https://bugs.webkit.org/show_bug.cgi?id=213778
+
+        Reviewed by Eric Carlson.
+
+        * webrtc/vp9-expected.txt:
+        * webrtc/vp9.html:
+
 2020-07-01  Karl Rackler  <rackler@apple.com>
 
         Remove expectation for fast/events/before-input-delete-empty-list-target-ranges.html and fast/events/before-input-events-prevent-insert-composition.html and fast/events/before-input-events-prevent-recomposition.html as they are passing. 
diff --git a/LayoutTests/webrtc/vp9-expected.txt b/LayoutTests/webrtc/vp9-expected.txt
index 6ca93b7..45ad832 100644
--- a/LayoutTests/webrtc/vp9-expected.txt
+++ b/LayoutTests/webrtc/vp9-expected.txt
@@ -1,4 +1,10 @@
+  
 
 PASS VP9 in getCapabilities 
 PASS Verify VP9 activation 
+PASS Setting video exchange 
+PASS Ensuring connection state is connected 
+PASS Track is enabled, video should not be black 
+PASS Track is disabled, video should be black 
+PASS Track is enabled, video should not be black 2 
 
diff --git a/LayoutTests/webrtc/vp9.html b/LayoutTests/webrtc/vp9.html
index a99d276..ad928b4 100644
--- a/LayoutTests/webrtc/vp9.html
+++ b/LayoutTests/webrtc/vp9.html
@@ -7,6 +7,11 @@
         <script src="../resources/testharnessreport.js"></script>
     </head>
     <body>
+        <video id="video" autoplay playsInline width="320" height="240"></video>
+        <canvas id="canvas1" width="320" height="240"></canvas>
+        <canvas id="canvas2" width="320" height="240"></canvas>
+        <canvas id="canvas3" width="320" height="240"></canvas>
+        <script src="routines.js"></script>
         <script>
 let hasVP9;
 test(() => {
@@ -33,6 +38,61 @@
         pc.close();
         assert_true(description.sdp.indexOf("VP9") !== -1, "VP9 codec is in the SDP");
     }, "Verify VP9 activation")
+
+    var track;
+    var remoteTrack;
+    var receivingConnection;
+    promise_test((test) => {
+        return navigator.mediaDevices.getUserMedia({video: {width: 320, height: 240, facingMode: "environment"}}).then((localStream) => {
+            return new Promise((resolve, reject) => {
+                track = localStream.getVideoTracks()[0];
+
+                createConnections((firstConnection) => {
+                    firstConnection.addTrack(track, localStream);
+                }, (secondConnection) => {
+                    receivingConnection = secondConnection;
+                    secondConnection.ontrack = (trackEvent) => {
+                        remoteTrack = trackEvent.track;
+                        resolve(trackEvent.streams[0]);
+                    };
+                }, { observeOffer : (offer) => {
+                    offer.sdp = setCodec(offer.sdp, "VP9");
+                    return offer;
+                }
+                });
+                setTimeout(() => reject("Test timed out"), 5000);
+            });
+        }).then((remoteStream) => {
+            video.srcObject = remoteStream;
+            return video.play();
+        });
+    }, "Setting video exchange");
+
+    promise_test(() => {
+        if (receivingConnection.connectionState === "connected")
+            return Promise.resolve();
+        return new Promise((resolve, reject) => {
+            receivingConnection.onconnectionstatechange = () => {
+                if (receivingConnection.connectionState === "connected")
+                    resolve();
+            };
+            setTimeout(() => reject("Test timed out"), 5000);
+        });
+    }, "Ensuring connection state is connected");
+
+    promise_test((test) => {
+        return checkVideoBlack(false, canvas1, video);
+    }, "Track is enabled, video should not be black");
+
+    promise_test((test) => {
+        track.enabled = false;
+        return checkVideoBlack(true, canvas2, video);
+    }, "Track is disabled, video should be black");
+
+    promise_test((test) => {
+        track.enabled = true;
+        return checkVideoBlack(false, canvas3, video);
+    }, "Track is enabled, video should not be black 2");
 }
         </script>
     </body>
diff --git a/Source/ThirdParty/libwebrtc/ChangeLog b/Source/ThirdParty/libwebrtc/ChangeLog
index 057b5ba..41acb33 100644
--- a/Source/ThirdParty/libwebrtc/ChangeLog
+++ b/Source/ThirdParty/libwebrtc/ChangeLog
@@ -1,3 +1,143 @@
+2020-07-01  Youenn Fablet  <youenn@apple.com>
+
+        Add a functional WebRTC VP9 codec
+        https://bugs.webkit.org/show_bug.cgi?id=213778
+
+        Reviewed by Eric Carlson.
+
+        Update libvpx to synchronize VP9 support with the rest of libwebrtc.
+        Disable vp9_noop in the build system and use the functional VP9 encoder/decoder instead.
+
+        * Configurations/libvpx.xcconfig:
+        * Source/third_party/libvpx/BUILD.gn:
+        * Source/third_party/libvpx/OWNERS:
+        * Source/third_party/libvpx/README.chromium:
+        * Source/third_party/libvpx/generate_gni.sh:
+        * Source/third_party/libvpx/libvpx_srcs.gni:
+        * Source/third_party/libvpx/lint_config.sh:
+        * Source/third_party/libvpx/run_perl.py:
+        * Source/third_party/libvpx/source/config/ios/arm-neon/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/ios/arm-neon/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.asm:
+        * Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.c:
+        * Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.h:
+        * Source/third_party/libvpx/source/config/ios/arm-neon/vpx_dsp_rtcd.h:
+        * Source/third_party/libvpx/source/config/ios/arm64/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/ios/arm64/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/ios/arm64/vpx_config.asm:
+        * Source/third_party/libvpx/source/config/ios/arm64/vpx_config.c:
+        * Source/third_party/libvpx/source/config/ios/arm64/vpx_config.h:
+        * Source/third_party/libvpx/source/config/ios/arm64/vpx_dsp_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.asm:
+        * Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.c:
+        * Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.h:
+        * Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_dsp_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vp8_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vp8_rtcd.h.
+        * Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vp9_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vp9_rtcd.h.
+        * Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_config.asm: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.asm.
+        * Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_config.c: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.c.
+        (vpx_codec_build_config):
+        * Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_config.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.h.
+        * Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_dsp_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_dsp_rtcd.h.
+        * Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_scale_rtcd.h: Added.
+        * Source/third_party/libvpx/source/config/linux/arm-neon/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/arm-neon/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.asm:
+        * Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.c:
+        * Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.h:
+        * Source/third_party/libvpx/source/config/linux/arm-neon/vpx_dsp_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/arm/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/arm/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/arm/vpx_config.asm:
+        * Source/third_party/libvpx/source/config/linux/arm/vpx_config.c:
+        * Source/third_party/libvpx/source/config/linux/arm/vpx_config.h:
+        * Source/third_party/libvpx/source/config/linux/arm/vpx_dsp_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/arm64-highbd/vp8_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vp8_rtcd.h.
+        * Source/third_party/libvpx/source/config/linux/arm64-highbd/vp9_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vp9_rtcd.h.
+        * Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_config.asm: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.asm.
+        * Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_config.c: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.c.
+        (vpx_codec_build_config):
+        * Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_config.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.h.
+        * Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_dsp_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_dsp_rtcd.h.
+        * Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_scale_rtcd.h: Added.
+        * Source/third_party/libvpx/source/config/linux/arm64/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/arm64/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/arm64/vpx_config.asm:
+        * Source/third_party/libvpx/source/config/linux/arm64/vpx_config.c:
+        * Source/third_party/libvpx/source/config/linux/arm64/vpx_config.h:
+        * Source/third_party/libvpx/source/config/linux/arm64/vpx_dsp_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/generic/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/generic/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/generic/vpx_config.asm:
+        * Source/third_party/libvpx/source/config/linux/generic/vpx_config.c:
+        * Source/third_party/libvpx/source/config/linux/generic/vpx_config.h:
+        * Source/third_party/libvpx/source/config/linux/generic/vpx_dsp_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/ia32/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/ia32/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/ia32/vpx_config.asm:
+        * Source/third_party/libvpx/source/config/linux/ia32/vpx_config.c:
+        * Source/third_party/libvpx/source/config/linux/ia32/vpx_config.h:
+        * Source/third_party/libvpx/source/config/linux/ia32/vpx_dsp_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/mips64el/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/mips64el/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/mips64el/vpx_config.c:
+        * Source/third_party/libvpx/source/config/linux/mips64el/vpx_config.h:
+        * Source/third_party/libvpx/source/config/linux/mips64el/vpx_dsp_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/mipsel/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/mipsel/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/mipsel/vpx_config.c:
+        * Source/third_party/libvpx/source/config/linux/mipsel/vpx_config.h:
+        * Source/third_party/libvpx/source/config/linux/mipsel/vpx_dsp_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/x64/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/x64/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/linux/x64/vpx_config.asm:
+        * Source/third_party/libvpx/source/config/linux/x64/vpx_config.c:
+        * Source/third_party/libvpx/source/config/linux/x64/vpx_config.h:
+        * Source/third_party/libvpx/source/config/linux/x64/vpx_dsp_rtcd.h:
+        * Source/third_party/libvpx/source/config/mac/ia32/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/mac/ia32/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/mac/ia32/vpx_config.asm:
+        * Source/third_party/libvpx/source/config/mac/ia32/vpx_config.c:
+        * Source/third_party/libvpx/source/config/mac/ia32/vpx_config.h:
+        * Source/third_party/libvpx/source/config/mac/ia32/vpx_dsp_rtcd.h:
+        * Source/third_party/libvpx/source/config/mac/x64/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/mac/x64/vp8_rtcd_no_acceleration.h:
+        * Source/third_party/libvpx/source/config/mac/x64/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/mac/x64/vpx_config.asm:
+        * Source/third_party/libvpx/source/config/mac/x64/vpx_config.c:
+        * Source/third_party/libvpx/source/config/mac/x64/vpx_config.h:
+        * Source/third_party/libvpx/source/config/mac/x64/vpx_dsp_rtcd.h:
+        * Source/third_party/libvpx/source/config/mac/x64/vpx_dsp_rtcd_no_acceleration.h:
+        * Source/third_party/libvpx/source/config/nacl/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/nacl/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/nacl/vpx_config.c:
+        * Source/third_party/libvpx/source/config/nacl/vpx_config.h:
+        * Source/third_party/libvpx/source/config/nacl/vpx_dsp_rtcd.h:
+        * Source/third_party/libvpx/source/config/vpx_version.h:
+        * Source/third_party/libvpx/source/config/win/arm64/vp8_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vp8_rtcd.h.
+        * Source/third_party/libvpx/source/config/win/arm64/vp9_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vp9_rtcd.h.
+        * Source/third_party/libvpx/source/config/win/arm64/vpx_config.asm: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.asm.
+        * Source/third_party/libvpx/source/config/win/arm64/vpx_config.c: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vpx_config.c.
+        (vpx_codec_build_config):
+        * Source/third_party/libvpx/source/config/win/arm64/vpx_config.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.h.
+        * Source/third_party/libvpx/source/config/win/arm64/vpx_dsp_rtcd.h: Copied from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_dsp_rtcd.h.
+        * Source/third_party/libvpx/source/config/win/arm64/vpx_scale_rtcd.h: Added.
+        * Source/third_party/libvpx/source/config/win/ia32/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/win/ia32/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/win/ia32/vpx_config.asm:
+        * Source/third_party/libvpx/source/config/win/ia32/vpx_config.c:
+        * Source/third_party/libvpx/source/config/win/ia32/vpx_config.h:
+        * Source/third_party/libvpx/source/config/win/ia32/vpx_dsp_rtcd.h:
+        * Source/third_party/libvpx/source/config/win/x64/vp8_rtcd.h:
+        * Source/third_party/libvpx/source/config/win/x64/vp9_rtcd.h:
+        * Source/third_party/libvpx/source/config/win/x64/vpx_config.asm:
+        * Source/third_party/libvpx/source/config/win/x64/vpx_config.c:
+        * Source/third_party/libvpx/source/config/win/x64/vpx_config.h:
+        * Source/third_party/libvpx/source/config/win/x64/vpx_dsp_rtcd.h:
+        * libwebrtc.xcodeproj/project.pbxproj:
+
 2020-06-30  Youenn Fablet  <youenn@apple.com>
 
         Add VP9 WebRTC codec runtime flag
diff --git a/Source/ThirdParty/libwebrtc/Configurations/libvpx.xcconfig b/Source/ThirdParty/libwebrtc/Configurations/libvpx.xcconfig
index 61f2509..e12497a 100644
--- a/Source/ThirdParty/libwebrtc/Configurations/libvpx.xcconfig
+++ b/Source/ThirdParty/libwebrtc/Configurations/libvpx.xcconfig
@@ -19,7 +19,7 @@
 ARM_FILES = *_neon.c arm_cpudetect.c *_arm.c
 X86_FILES = *_sse2.c *_ssse3.c *_sse4.c *_avx2.c *_avx.c *.asm
 
-EXCLUDED_SOURCE_FILE_NAMES[sdk=macos*][arch=x86_64] = $(ARM_FILES) sad.c
-EXCLUDED_SOURCE_FILE_NAMES[sdk=macos*][arch=arm64*] = $(X86_FILES)
-EXCLUDED_SOURCE_FILE_NAMES[sdk=iphoneos*][arch=arm64*] = $(X86_FILES)
+EXCLUDED_SOURCE_FILE_NAMES[sdk=macos*][arch=x86_64] = $(ARM_FILES)
+EXCLUDED_SOURCE_FILE_NAMES[sdk=macos*][arch=arm64*] = $(X86_FILES) *_mmx.c
+EXCLUDED_SOURCE_FILE_NAMES[sdk=iphoneos*][arch=arm64*] = $(X86_FILES) *_mmx.c
 EXCLUDED_SOURCE_FILE_NAMES[sdk=iphonesimulator*] = $(X86_FILES) $(ARM_FILES)
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/BUILD.gn b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/BUILD.gn
index 8fb6904..31a26e5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/BUILD.gn
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/BUILD.gn
@@ -2,8 +2,8 @@
 # Use of this source code is governed by a BSD-style license that can be
 # found in the LICENSE file.
 
-import("//build/config/arm.gni")
 import("//build/config/android/config.gni")
+import("//build/config/arm.gni")
 import("//build/config/sanitizers/sanitizers.gni")
 import("//third_party/libvpx/libvpx_srcs.gni")
 import("//third_party/yasm/yasm_assemble.gni")
@@ -18,7 +18,10 @@
     cpu_arch_full = "x64"
   }
 } else if (current_cpu == "arm") {
-  if (arm_use_neon) {
+  if (is_chromeos) {
+    # ChromeOS gets highbd vp9 but other arm targets do not.
+    cpu_arch_full = "arm-neon-highbd"
+  } else if (arm_use_neon) {
     cpu_arch_full = "arm-neon"
   } else if (is_android) {
     cpu_arch_full = "arm-neon-cpu-detect"
@@ -26,7 +29,11 @@
     cpu_arch_full = "arm"
   }
 } else {
-  cpu_arch_full = current_cpu
+  if (is_chromeos) {
+    cpu_arch_full = "arm64-highbd"
+  } else {
+    cpu_arch_full = current_cpu
+  }
 }
 
 if (is_nacl) {
@@ -243,6 +250,8 @@
 if (current_cpu == "arm") {
   if (cpu_arch_full == "arm-neon") {
     arm_assembly_sources = libvpx_srcs_arm_neon_assembly
+  } else if (cpu_arch_full == "arm-neon-highbd") {
+    arm_assembly_sources = libvpx_srcs_arm_neon_highbd_assembly
   } else if (cpu_arch_full == "arm-neon-cpu-detect") {
     arm_assembly_sources = libvpx_srcs_arm_neon_cpu_detect_assembly
   } else {
@@ -258,9 +267,7 @@
     gen_file =
         get_label_info("//third_party/libvpx/source/libvpx", "root_gen_dir") +
         "/{{source_root_relative_dir}}/{{source_file_part}}.S"
-    outputs = [
-      gen_file,
-    ]
+    outputs = [ gen_file ]
     if (is_ios) {
       ads2gas_script =
           "//third_party/libvpx/source/libvpx/build/make/ads2gas_apple.pl"
@@ -282,7 +289,8 @@
     sources = get_target_outputs(":convert_arm_assembly")
     configs -= [ "//build/config/compiler:compiler_arm_fpu" ]
     configs += [ ":libvpx_config" ]
-    if (cpu_arch_full == "arm-neon" || cpu_arch_full == "arm-neon-cpu-detect") {
+    if (cpu_arch_full == "arm-neon" || cpu_arch_full == "arm-neon-cpu-detect" ||
+        cpu_arch_full == "arm-neon-highbd") {
       asmflags = [ "-mfpu=neon" ]
 
       # allow asm files to include generated sources which match the source
@@ -290,9 +298,7 @@
       include_dirs = [ get_label_info("//third_party/libvpx/source/libvpx",
                                       "target_gen_dir") ]
     }
-    deps = [
-      ":convert_arm_assembly",
-    ]
+    deps = [ ":convert_arm_assembly" ]
   }
 }
 
@@ -315,7 +321,9 @@
   } else if (current_cpu == "mipsel" || current_cpu == "mips64el") {
     sources = libvpx_srcs_mips
   } else if (current_cpu == "arm") {
-    if (arm_use_neon) {
+    if (is_chromeos) {
+      sources = libvpx_srcs_arm_neon_highbd
+    } else if (arm_use_neon) {
       sources = libvpx_srcs_arm_neon
     } else if (is_android) {
       sources = libvpx_srcs_arm_neon_cpu_detect
@@ -323,7 +331,11 @@
       sources = libvpx_srcs_arm
     }
   } else if (current_cpu == "arm64") {
-    sources = libvpx_srcs_arm64
+    if (is_chromeos || is_win) {
+      sources = libvpx_srcs_arm64_highbd
+    } else {
+      sources = libvpx_srcs_arm64
+    }
   }
 
   configs -= [ "//build/config/compiler:chromium_code" ]
@@ -346,7 +358,7 @@
     deps += [ ":libvpx_intrinsics_neon" ]
   }
   if (is_android) {
-    deps += [ "//third_party/android_tools:cpu_features" ]
+    deps += [ "//third_party/android_ndk:cpu_features" ]
   }
   if (current_cpu == "arm" && arm_assembly_sources != []) {
     deps += [ ":libvpx_assembly_arm" ]
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/OWNERS b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/OWNERS
index 6524678..94c2415 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/OWNERS
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/OWNERS
@@ -2,3 +2,4 @@
 johannkoenig@google.com
 jzern@chromium.org
 jzern@google.com
+# COMPONENT: Internals>Media>Video
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/README.chromium b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/README.chromium
index 90bdcb4..e7e8bd2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/README.chromium
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/README.chromium
@@ -1,13 +1,14 @@
 Name: libvpx
 URL: http://www.webmproject.org
-Version: v1.7.0
+Version: v1.8.2
+CPEPrefix: cpe:/a:john_koleszar:libvpx:1.8.2
 License: BSD
 License File: source/libvpx/LICENSE
 Security Critical: yes
 
-Date: Thursday August 16 2018
+Date: Wednesday March 04 2020
 Branch: master
-Commit: 6c62530c666fc0bcf4385a35a7c49e44c9f38cf5
+Commit: 5532775efe808cb0942e7b99bf2f232c6ce99fee
 
 Description:
 Contains the sources used to compile libvpx binaries used by Google Chrome and
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/generate_gni.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/generate_gni.sh
old mode 100644
new mode 100755
index 2c94f6f..bcf84b0
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/generate_gni.sh
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/generate_gni.sh
@@ -56,9 +56,6 @@
 # $1 - Output base name
 function write_license {
   echo "# This file is generated. Do not edit." >> $1
-  echo "# Copyright (c) 2014 The Chromium Authors. All rights reserved." >> $1
-  echo "# Use of this source code is governed by a BSD-style license that can be" >> $1
-  echo "# found in the LICENSE file." >> $1
   echo "" >> $1
 }
 
@@ -287,6 +284,10 @@
   # Disable HAVE_UNISTD_H as it causes vp8 to try to detect how many cpus
   # available, which doesn't work from inside a sandbox on linux.
   sed -i.bak -e 's/\(HAVE_UNISTD_H[[:space:]]*\)1/\10/' vpx_config.h
+  # Maintain old ARCH_ defines to avoid build errors because assembly file
+  # dependencies are incorrect.
+  sed -i.bak -e 's/\(#define \)VPX_\(ARCH_[_0-9A-Z]\+ [01]\)/&\n\1\2/' vpx_config.h
+
   rm vpx_config.h.bak
 
   # Use the correct ads2gas script.
@@ -305,6 +306,7 @@
     fi
   fi
 
+  mkdir -p $BASE_DIR/$LIBVPX_CONFIG_DIR/$1
   cp vpx_config.* $BASE_DIR/$LIBVPX_CONFIG_DIR/$1
   make_clean
   rm -rf vpx_config.*
@@ -338,9 +340,16 @@
 cd $TEMP_DIR
 
 echo "Generate config files."
-all_platforms="--enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising"
-all_platforms="${all_platforms} --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384"
-all_platforms="${all_platforms} --enable-realtime-only --disable-install-docs"
+all_platforms="--enable-external-build"
+all_platforms+=" --enable-postproc"
+all_platforms+=" --enable-multi-res-encoding"
+all_platforms+=" --enable-temporal-denoising"
+all_platforms+=" --enable-vp9-temporal-denoising"
+all_platforms+=" --enable-vp9-postproc"
+all_platforms+=" --size-limit=16384x16384"
+all_platforms+=" --enable-realtime-only"
+all_platforms+=" --disable-install-docs"
+all_platforms+=" --disable-libyuv"
 x86_platforms="--enable-pic --as=yasm $DISABLE_AVX512 $HIGHBD"
 gen_config_files linux/ia32 "--target=x86-linux-gcc ${all_platforms} ${x86_platforms}"
 gen_config_files linux/x64 "--target=x86_64-linux-gcc ${all_platforms} ${x86_platforms}"
@@ -348,11 +357,14 @@
 gen_config_files linux/arm-neon "--target=armv7-linux-gcc ${all_platforms}"
 gen_config_files linux/arm-neon-cpu-detect "--target=armv7-linux-gcc --enable-runtime-cpu-detect ${all_platforms}"
 gen_config_files linux/arm64 "--target=armv8-linux-gcc ${all_platforms}"
+gen_config_files linux/arm-neon-highbd "--target=armv7-linux-gcc ${all_platforms} ${HIGHBD}"
+gen_config_files linux/arm64-highbd "--target=armv8-linux-gcc ${all_platforms} ${HIGHBD}"
 gen_config_files linux/mipsel "--target=mips32-linux-gcc ${all_platforms}"
 gen_config_files linux/mips64el "--target=mips64-linux-gcc ${all_platforms}"
 gen_config_files linux/generic "--target=generic-gnu $HIGHBD ${all_platforms}"
-gen_config_files win/ia32 "--target=x86-win32-vs12 ${all_platforms} ${x86_platforms}"
-gen_config_files win/x64 "--target=x86_64-win64-vs12 ${all_platforms} ${x86_platforms}"
+gen_config_files win/arm64 "--target=arm64-win64-vs15 ${all_platforms} ${HIGHBD}"
+gen_config_files win/ia32 "--target=x86-win32-vs14 ${all_platforms} ${x86_platforms}"
+gen_config_files win/x64 "--target=x86_64-win64-vs14 ${all_platforms} ${x86_platforms}"
 gen_config_files mac/ia32 "--target=x86-darwin9-gcc ${all_platforms} ${x86_platforms}"
 gen_config_files mac/x64 "--target=x86_64-darwin9-gcc ${all_platforms} ${x86_platforms}"
 gen_config_files ios/arm-neon "--target=armv7-linux-gcc ${all_platforms}"
@@ -370,9 +382,12 @@
 lint_config linux/arm-neon
 lint_config linux/arm-neon-cpu-detect
 lint_config linux/arm64
+lint_config linux/arm-neon-highbd
+lint_config linux/arm64-highbd
 lint_config linux/mipsel
 lint_config linux/mips64el
 lint_config linux/generic
+lint_config win/arm64
 lint_config win/ia32
 lint_config win/x64
 lint_config mac/ia32
@@ -387,18 +402,24 @@
 cp -R $LIBVPX_SRC_DIR $TEMP_DIR
 cd $TEMP_DIR
 
-gen_rtcd_header linux/ia32 x86
+# chromium has required sse2 for x86 since 2014
+require_sse2="--require-mmx --require-sse --require-sse2"
+
+gen_rtcd_header linux/ia32 x86 "${require_sse2}"
 gen_rtcd_header linux/x64 x86_64
 gen_rtcd_header linux/arm armv7 "--disable-neon --disable-neon_asm"
 gen_rtcd_header linux/arm-neon armv7
 gen_rtcd_header linux/arm-neon-cpu-detect armv7
 gen_rtcd_header linux/arm64 armv8
+gen_rtcd_header linux/arm-neon-highbd armv7
+gen_rtcd_header linux/arm64-highbd armv8
 gen_rtcd_header linux/mipsel mipsel
 gen_rtcd_header linux/mips64el mips64el
 gen_rtcd_header linux/generic generic
-gen_rtcd_header win/ia32 x86
+gen_rtcd_header win/arm64 armv8
+gen_rtcd_header win/ia32 x86 "${require_sse2}"
 gen_rtcd_header win/x64 x86_64
-gen_rtcd_header mac/ia32 x86
+gen_rtcd_header mac/ia32 x86 "${require_sse2}"
 gen_rtcd_header mac/x64 x86_64
 gen_rtcd_header ios/arm-neon armv7
 gen_rtcd_header ios/arm64 armv8
@@ -420,6 +441,7 @@
   convert_srcs_to_project_files libvpx_srcs.txt libvpx_srcs_x86
 
   # Copy vpx_version.h. The file should be the same for all platforms.
+  clang-format -i -style=Chromium vpx_version.h
   cp vpx_version.h $BASE_DIR/$LIBVPX_CONFIG_DIR
 
   echo "Generate X86_64 source list."
@@ -456,6 +478,20 @@
   make libvpx_srcs.txt target=libs $config > /dev/null
   convert_srcs_to_project_files libvpx_srcs.txt libvpx_srcs_arm64
 
+  echo "Generate ARM NEON HighBD source list."
+  config=$(print_config linux/arm-neon-highbd)
+  make_clean
+  make libvpx_srcs.txt target=libs $config > /dev/null
+  convert_srcs_to_project_files libvpx_srcs.txt libvpx_srcs_arm_neon_highbd
+
+  echo "Generate ARM64 HighBD source list."
+  config=$(print_config linux/arm64-highbd)
+  make_clean
+  make libvpx_srcs.txt target=libs $config > /dev/null
+  convert_srcs_to_project_files libvpx_srcs.txt libvpx_srcs_arm64_highbd
+
+  echo "ARM64 Windows uses the ARM64 Linux HighBD source list. No need to generate it."
+
   echo "Generate MIPS source list."
   config=$(print_config_basic linux/mipsel)
   make_clean
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/libvpx_srcs.gni b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/libvpx_srcs.gni
index e3a4d39..be796cb 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/libvpx_srcs.gni
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/libvpx_srcs.gni
@@ -1,7 +1,4 @@
 # This file is generated. Do not edit.
-# Copyright (c) 2014 The Chromium Authors. All rights reserved.
-# Use of this source code is governed by a BSD-style license that can be
-# found in the LICENSE file.
 
 libvpx_srcs_x86 = [
   "//third_party/libvpx/source/libvpx/vp8/common/alloccommon.c",
@@ -10,7 +7,6 @@
   "//third_party/libvpx/source/libvpx/vp8/common/blockd.h",
   "//third_party/libvpx/source/libvpx/vp8/common/coefupdateprobs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/common.h",
-  "//third_party/libvpx/source/libvpx/vp8/common/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/common/default_coef_probs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/dequantize.c",
   "//third_party/libvpx/source/libvpx/vp8/common/entropy.c",
@@ -64,8 +60,6 @@
   "//third_party/libvpx/source/libvpx/vp8/common/vp8_loopfilter.c",
   "//third_party/libvpx/source/libvpx/vp8/common/vp8_skin_detection.c",
   "//third_party/libvpx/source/libvpx/vp8/common/vp8_skin_detection.h",
-  "//third_party/libvpx/source/libvpx/vp8/common/x86/filter_x86.c",
-  "//third_party/libvpx/source/libvpx/vp8/common/x86/filter_x86.h",
   "//third_party/libvpx/source/libvpx/vp8/common/x86/loopfilter_x86.c",
   "//third_party/libvpx/source/libvpx/vp8/common/x86/vp8_asm_stubs.c",
   "//third_party/libvpx/source/libvpx/vp8/decoder/dboolhuff.c",
@@ -85,6 +79,7 @@
   "//third_party/libvpx/source/libvpx/vp8/encoder/block.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_cost.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_tokens.h",
@@ -190,16 +185,10 @@
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.h",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.c",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_block.h",
@@ -234,6 +223,7 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_partition_models.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.c",
@@ -264,16 +254,16 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.h",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.h",
   "//third_party/libvpx/source/libvpx/vpx/internal/vpx_codec_internal.h",
-  "//third_party/libvpx/source/libvpx/vpx/src/svc_encodeframe.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_codec.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_decoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_encoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_image.c",
-  "//third_party/libvpx/source/libvpx/vpx/svc_context.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8cx.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8dx.h",
@@ -324,6 +314,7 @@
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/bitdepth_conversion_sse2.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_avx2.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_sse2.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_ssse3.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_dct32x32_impl_avx2.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_dct32x32_impl_sse2.h",
@@ -334,19 +325,21 @@
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_sse2.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_ssse3.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/mem_sse2.h",
-  "//third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_x86.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_sse2.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_ssse3.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/transpose_sse2.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/txfm_common_sse2.h",
-  "//third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_asm_stubs.c",
   "//third_party/libvpx/source/libvpx/vpx_mem/include/vpx_mem_intrnl.h",
   "//third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.c",
   "//third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/bitops.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/compiler_attributes.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/emmintrin_compat.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops_aligned.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/msvc.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/static_assert.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/system_state.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_once.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_timer.h",
@@ -362,12 +355,11 @@
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_atomics.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_timestamp.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.h",
 ]
 libvpx_srcs_x86_assembly = [
-  "//third_party/libvpx/source/libvpx/vp8/common/x86/copy_sse2.asm",
-  "//third_party/libvpx/source/libvpx/vp8/common/x86/copy_sse3.asm",
   "//third_party/libvpx/source/libvpx/vp8/common/x86/dequantize_mmx.asm",
   "//third_party/libvpx/source/libvpx/vp8/common/x86/idctllm_mmx.asm",
   "//third_party/libvpx/source/libvpx/vp8/common/x86/idctllm_sse2.asm",
@@ -380,6 +372,8 @@
   "//third_party/libvpx/source/libvpx/vp8/common/x86/subpixel_sse2.asm",
   "//third_party/libvpx/source/libvpx/vp8/common/x86/subpixel_ssse3.asm",
   "//third_party/libvpx/source/libvpx/vp8/encoder/x86/block_error_sse2.asm",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/x86/copy_sse2.asm",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/x86/copy_sse3.asm",
   "//third_party/libvpx/source/libvpx/vp8/encoder/x86/dct_sse2.asm",
   "//third_party/libvpx/source/libvpx/vp8/encoder/x86/fwalsh_sse2.asm",
   "//third_party/libvpx/source/libvpx/vp9/common/x86/vp9_mfqe_sse2.asm",
@@ -415,6 +409,7 @@
   "//third_party/libvpx/source/libvpx/vpx_ports/emms_mmx.c",
 ]
 libvpx_srcs_x86_sse2 = [
+  "//third_party/libvpx/source/libvpx/vp8/common/x86/bilinear_filter_sse2.c",
   "//third_party/libvpx/source/libvpx/vp8/common/x86/idct_blk_sse2.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/x86/denoising_sse2.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/x86/vp8_enc_stubs_sse2.c",
@@ -437,14 +432,15 @@
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_variance_sse2.c",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_sse2.c",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/loopfilter_sse2.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/x86/post_proc_sse2.c",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_sse2.c",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/sum_squares_sse2.c",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/variance_sse2.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_4t_intrin_sse2.c",
 ]
 libvpx_srcs_x86_sse3 = []
 libvpx_srcs_x86_ssse3 = [
   "//third_party/libvpx/source/libvpx/vp8/encoder/x86/vp8_quantize_ssse3.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_dct_ssse3.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_frame_scale_ssse3.c",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_intrapred_intrin_ssse3.c",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_ssse3.c",
@@ -485,7 +481,6 @@
   "//third_party/libvpx/source/libvpx/vp8/common/blockd.h",
   "//third_party/libvpx/source/libvpx/vp8/common/coefupdateprobs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/common.h",
-  "//third_party/libvpx/source/libvpx/vp8/common/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/common/default_coef_probs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/dequantize.c",
   "//third_party/libvpx/source/libvpx/vp8/common/entropy.c",
@@ -539,8 +534,6 @@
   "//third_party/libvpx/source/libvpx/vp8/common/vp8_loopfilter.c",
   "//third_party/libvpx/source/libvpx/vp8/common/vp8_skin_detection.c",
   "//third_party/libvpx/source/libvpx/vp8/common/vp8_skin_detection.h",
-  "//third_party/libvpx/source/libvpx/vp8/common/x86/filter_x86.c",
-  "//third_party/libvpx/source/libvpx/vp8/common/x86/filter_x86.h",
   "//third_party/libvpx/source/libvpx/vp8/common/x86/loopfilter_x86.c",
   "//third_party/libvpx/source/libvpx/vp8/common/x86/vp8_asm_stubs.c",
   "//third_party/libvpx/source/libvpx/vp8/decoder/dboolhuff.c",
@@ -560,6 +553,7 @@
   "//third_party/libvpx/source/libvpx/vp8/encoder/block.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_cost.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_tokens.h",
@@ -665,16 +659,10 @@
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.h",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.c",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_block.h",
@@ -709,6 +697,7 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_partition_models.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.c",
@@ -739,16 +728,16 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.h",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.h",
   "//third_party/libvpx/source/libvpx/vpx/internal/vpx_codec_internal.h",
-  "//third_party/libvpx/source/libvpx/vpx/src/svc_encodeframe.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_codec.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_decoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_encoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_image.c",
-  "//third_party/libvpx/source/libvpx/vpx/svc_context.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8cx.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8dx.h",
@@ -799,6 +788,7 @@
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/bitdepth_conversion_sse2.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_avx2.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_sse2.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_ssse3.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_dct32x32_impl_avx2.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_dct32x32_impl_sse2.h",
@@ -809,19 +799,21 @@
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_sse2.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_ssse3.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/mem_sse2.h",
-  "//third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_x86.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_sse2.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_ssse3.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/transpose_sse2.h",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/txfm_common_sse2.h",
-  "//third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_asm_stubs.c",
   "//third_party/libvpx/source/libvpx/vpx_mem/include/vpx_mem_intrnl.h",
   "//third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.c",
   "//third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/bitops.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/compiler_attributes.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/emmintrin_compat.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops_aligned.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/msvc.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/static_assert.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/system_state.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_once.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_timer.h",
@@ -837,12 +829,11 @@
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_atomics.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_timestamp.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.h",
 ]
 libvpx_srcs_x86_64_assembly = [
-  "//third_party/libvpx/source/libvpx/vp8/common/x86/copy_sse2.asm",
-  "//third_party/libvpx/source/libvpx/vp8/common/x86/copy_sse3.asm",
   "//third_party/libvpx/source/libvpx/vp8/common/x86/dequantize_mmx.asm",
   "//third_party/libvpx/source/libvpx/vp8/common/x86/idctllm_mmx.asm",
   "//third_party/libvpx/source/libvpx/vp8/common/x86/idctllm_sse2.asm",
@@ -856,6 +847,8 @@
   "//third_party/libvpx/source/libvpx/vp8/common/x86/subpixel_sse2.asm",
   "//third_party/libvpx/source/libvpx/vp8/common/x86/subpixel_ssse3.asm",
   "//third_party/libvpx/source/libvpx/vp8/encoder/x86/block_error_sse2.asm",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/x86/copy_sse2.asm",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/x86/copy_sse3.asm",
   "//third_party/libvpx/source/libvpx/vp8/encoder/x86/dct_sse2.asm",
   "//third_party/libvpx/source/libvpx/vp8/encoder/x86/fwalsh_sse2.asm",
   "//third_party/libvpx/source/libvpx/vp9/common/x86/vp9_mfqe_sse2.asm",
@@ -895,6 +888,7 @@
 libvpx_srcs_x86_64_mmx =
     [ "//third_party/libvpx/source/libvpx/vp8/common/x86/idct_blk_mmx.c" ]
 libvpx_srcs_x86_64_sse2 = [
+  "//third_party/libvpx/source/libvpx/vp8/common/x86/bilinear_filter_sse2.c",
   "//third_party/libvpx/source/libvpx/vp8/common/x86/idct_blk_sse2.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/x86/denoising_sse2.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/x86/vp8_enc_stubs_sse2.c",
@@ -917,14 +911,15 @@
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_variance_sse2.c",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_sse2.c",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/loopfilter_sse2.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/x86/post_proc_sse2.c",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_sse2.c",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/sum_squares_sse2.c",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/variance_sse2.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_4t_intrin_sse2.c",
 ]
 libvpx_srcs_x86_64_sse3 = []
 libvpx_srcs_x86_64_ssse3 = [
   "//third_party/libvpx/source/libvpx/vp8/encoder/x86/vp8_quantize_ssse3.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_dct_ssse3.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_frame_scale_ssse3.c",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_intrapred_intrin_ssse3.c",
   "//third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_ssse3.c",
@@ -965,7 +960,6 @@
   "//third_party/libvpx/source/libvpx/vp8/common/blockd.h",
   "//third_party/libvpx/source/libvpx/vp8/common/coefupdateprobs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/common.h",
-  "//third_party/libvpx/source/libvpx/vp8/common/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/common/default_coef_probs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/dequantize.c",
   "//third_party/libvpx/source/libvpx/vp8/common/entropy.c",
@@ -1036,6 +1030,7 @@
   "//third_party/libvpx/source/libvpx/vp8/encoder/block.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_cost.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_tokens.h",
@@ -1141,16 +1136,10 @@
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.h",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.c",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_block.h",
@@ -1185,6 +1174,7 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_partition_models.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.c",
@@ -1215,16 +1205,16 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.h",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.h",
   "//third_party/libvpx/source/libvpx/vpx/internal/vpx_codec_internal.h",
-  "//third_party/libvpx/source/libvpx/vpx/src/svc_encodeframe.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_codec.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_decoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_encoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_image.c",
-  "//third_party/libvpx/source/libvpx/vpx/svc_context.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8cx.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8dx.h",
@@ -1277,11 +1267,13 @@
   "//third_party/libvpx/source/libvpx/vpx_ports/arm.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/arm_cpudetect.c",
   "//third_party/libvpx/source/libvpx/vpx_ports/bitops.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/compiler_attributes.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/emmintrin_compat.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops_aligned.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/msvc.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/static_assert.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/system_state.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_once.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_timer.h",
@@ -1296,6 +1288,7 @@
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_atomics.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_timestamp.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.h",
 ]
@@ -1304,14 +1297,13 @@
   "//third_party/libvpx/source/libvpx/vp8/common/alloccommon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/alloccommon.h",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.h",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/bilinearpredict_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/copymem_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/dc_only_idct_add_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/dequant_idct_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/dequantizeb_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_blk_neon.c",
-  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_dequant_0_2x_neon.c",
-  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_dequant_full_2x_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/iwalsh_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimplehorizontaledge_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimpleverticaledge_neon.c",
@@ -1323,7 +1315,6 @@
   "//third_party/libvpx/source/libvpx/vp8/common/blockd.h",
   "//third_party/libvpx/source/libvpx/vp8/common/coefupdateprobs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/common.h",
-  "//third_party/libvpx/source/libvpx/vp8/common/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/common/default_coef_probs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/dequantize.c",
   "//third_party/libvpx/source/libvpx/vp8/common/entropy.c",
@@ -1398,6 +1389,7 @@
   "//third_party/libvpx/source/libvpx/vp8/encoder/block.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_cost.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_tokens.h",
@@ -1507,21 +1499,14 @@
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.h",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.c",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_dct_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_denoiser_neon.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_error_neon.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_frame_scale_neon.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_quantize_neon.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_block.h",
@@ -1556,6 +1541,7 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_partition_models.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.c",
@@ -1586,16 +1572,16 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.h",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.h",
   "//third_party/libvpx/source/libvpx/vpx/internal/vpx_codec_internal.h",
-  "//third_party/libvpx/source/libvpx/vpx/src/svc_encodeframe.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_codec.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_decoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_encoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_image.c",
-  "//third_party/libvpx/source/libvpx/vpx/svc_context.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8cx.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8dx.h",
@@ -1682,11 +1668,13 @@
   "//third_party/libvpx/source/libvpx/vpx_ports/arm.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/arm_cpudetect.c",
   "//third_party/libvpx/source/libvpx/vpx_ports/bitops.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/compiler_attributes.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/emmintrin_compat.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops_aligned.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/msvc.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/static_assert.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/system_state.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_once.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_timer.h",
@@ -1701,6 +1689,7 @@
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_atomics.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_timestamp.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.h",
 ]
@@ -1728,11 +1717,11 @@
   "//third_party/libvpx/source/libvpx/vp8/common/alloccommon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/alloccommon.h",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.h",
   "//third_party/libvpx/source/libvpx/vp8/common/blockd.c",
   "//third_party/libvpx/source/libvpx/vp8/common/blockd.h",
   "//third_party/libvpx/source/libvpx/vp8/common/coefupdateprobs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/common.h",
-  "//third_party/libvpx/source/libvpx/vp8/common/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/common/default_coef_probs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/dequantize.c",
   "//third_party/libvpx/source/libvpx/vp8/common/entropy.c",
@@ -1803,6 +1792,7 @@
   "//third_party/libvpx/source/libvpx/vp8/encoder/block.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_cost.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_tokens.h",
@@ -1909,16 +1899,10 @@
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.h",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.c",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_block.h",
@@ -1953,6 +1937,7 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_partition_models.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.c",
@@ -1983,16 +1968,16 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.h",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.h",
   "//third_party/libvpx/source/libvpx/vpx/internal/vpx_codec_internal.h",
-  "//third_party/libvpx/source/libvpx/vpx/src/svc_encodeframe.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_codec.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_decoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_encoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_image.c",
-  "//third_party/libvpx/source/libvpx/vpx/svc_context.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8cx.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8dx.h",
@@ -2051,11 +2036,13 @@
   "//third_party/libvpx/source/libvpx/vpx_ports/arm.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/arm_cpudetect.c",
   "//third_party/libvpx/source/libvpx/vpx_ports/bitops.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/compiler_attributes.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/emmintrin_compat.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops_aligned.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/msvc.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/static_assert.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/system_state.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_once.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_timer.h",
@@ -2070,6 +2057,7 @@
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_atomics.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_timestamp.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.h",
 ]
@@ -2100,8 +2088,6 @@
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/dequant_idct_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/dequantizeb_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_blk_neon.c",
-  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_dequant_0_2x_neon.c",
-  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_dequant_full_2x_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/iwalsh_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimplehorizontaledge_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimpleverticaledge_neon.c",
@@ -2116,7 +2102,6 @@
   "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht16x16_add_neon.c",
   "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht4x4_add_neon.c",
   "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht8x8_add_neon.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_dct_neon.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_denoiser_neon.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_error_neon.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_frame_scale_neon.c",
@@ -2153,14 +2138,13 @@
   "//third_party/libvpx/source/libvpx/vp8/common/alloccommon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/alloccommon.h",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.h",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/bilinearpredict_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/copymem_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/dc_only_idct_add_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/dequant_idct_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/dequantizeb_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_blk_neon.c",
-  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_dequant_0_2x_neon.c",
-  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_dequant_full_2x_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/iwalsh_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimplehorizontaledge_neon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimpleverticaledge_neon.c",
@@ -2172,7 +2156,6 @@
   "//third_party/libvpx/source/libvpx/vp8/common/blockd.h",
   "//third_party/libvpx/source/libvpx/vp8/common/coefupdateprobs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/common.h",
-  "//third_party/libvpx/source/libvpx/vp8/common/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/common/default_coef_probs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/dequantize.c",
   "//third_party/libvpx/source/libvpx/vp8/common/entropy.c",
@@ -2247,6 +2230,7 @@
   "//third_party/libvpx/source/libvpx/vp8/encoder/block.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_cost.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_tokens.h",
@@ -2356,21 +2340,14 @@
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.h",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.c",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_dct_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_denoiser_neon.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_error_neon.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_frame_scale_neon.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_quantize_neon.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_block.h",
@@ -2405,6 +2382,7 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_partition_models.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.c",
@@ -2435,16 +2413,16 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.h",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.h",
   "//third_party/libvpx/source/libvpx/vpx/internal/vpx_codec_internal.h",
-  "//third_party/libvpx/source/libvpx/vpx/src/svc_encodeframe.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_codec.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_decoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_encoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_image.c",
-  "//third_party/libvpx/source/libvpx/vpx/svc_context.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8cx.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8dx.h",
@@ -2535,11 +2513,13 @@
   "//third_party/libvpx/source/libvpx/vpx_ports/arm.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/arm_cpudetect.c",
   "//third_party/libvpx/source/libvpx/vpx_ports/bitops.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/compiler_attributes.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/emmintrin_compat.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops_aligned.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/msvc.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/static_assert.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/system_state.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_once.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_timer.h",
@@ -2554,10 +2534,868 @@
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_atomics.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_timestamp.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.h",
 ]
 libvpx_srcs_arm64_assembly = []
+libvpx_srcs_arm_neon_highbd = [
+  "//third_party/libvpx/source/libvpx/vp8/common/alloccommon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/alloccommon.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/bilinearpredict_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/copymem_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/dc_only_idct_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/dequant_idct_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/dequantizeb_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_blk_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/iwalsh_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimplehorizontaledge_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimpleverticaledge_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/mbloopfilter_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/shortidct4x4llm_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/sixtappredict_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/vp8_loopfilter_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/blockd.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/blockd.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/coefupdateprobs.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/common.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/default_coef_probs.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/dequantize.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/entropy.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/entropy.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/entropymode.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/entropymode.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/entropymv.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/entropymv.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/extend.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/extend.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/filter.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/filter.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/findnearmv.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/findnearmv.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/generic/systemdependent.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/header.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/idct_blk.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/idctllm.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/invtrans.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/loopfilter.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/loopfilter_filters.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/mbpitch.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/mfqe.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/modecont.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/modecont.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/mv.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/onyx.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/onyxc_int.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/onyxd.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/postproc.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/postproc.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/ppflags.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/quant_common.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/quant_common.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/reconinter.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/reconinter.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/reconintra.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/reconintra.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/reconintra4x4.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/reconintra4x4.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/rtcd.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/setupintrarecon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/setupintrarecon.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/swapyv12buffer.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/swapyv12buffer.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/systemdependent.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/threading.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/treecoder.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/treecoder.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/vp8_entropymodedata.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/vp8_loopfilter.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/vp8_skin_detection.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/vp8_skin_detection.h",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/dboolhuff.c",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/dboolhuff.h",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/decodeframe.c",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/decodemv.c",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/decodemv.h",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/decoderthreading.h",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/detokenize.c",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/detokenize.h",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/onyxd_if.c",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/onyxd_int.h",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/threading.c",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/treereader.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/denoising_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/fastquantizeb_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/shortfdct_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/vp8_shortwalsh4x4_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/bitstream.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/bitstream.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/block.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/copy_c.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/dct.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_cost.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_tokens.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/defaultcoefcounts.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/denoising.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/denoising.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodeframe.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodeframe.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodeintra.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodeintra.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodemb.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodemb.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodemv.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodemv.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/ethreading.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/ethreading.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/firstpass.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/lookahead.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/lookahead.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/mcomp.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/mcomp.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/modecosts.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/modecosts.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/mr_dissim.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/mr_dissim.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/onyx_if.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/onyx_int.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/pickinter.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/pickinter.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/picklpf.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/picklpf.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/quantize.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/ratectrl.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/ratectrl.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/rdopt.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/rdopt.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/segmentation.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/segmentation.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/tokenize.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/tokenize.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/treewriter.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/treewriter.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/vp8_quantize.c",
+  "//third_party/libvpx/source/libvpx/vp8/vp8_cx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp8/vp8_dx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht16x16_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht4x4_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht8x8_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht16x16_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht4x4_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht8x8_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht_neon.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_alloccommon.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_alloccommon.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_blockd.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_blockd.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_common.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_common_data.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_common_data.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_entropy.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_entropy.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_entropymode.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_entropymode.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_entropymv.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_entropymv.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_enums.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_filter.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_filter.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_frame_buffers.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_frame_buffers.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_idct.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_idct.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_loopfilter.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_loopfilter.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_mfqe.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_mfqe.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_mv.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_mvref_common.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_mvref_common.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_onyxc_int.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_postproc.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_postproc.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_ppflags.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_pred_common.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_pred_common.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_quant_common.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_quant_common.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_reconinter.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_reconinter.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_reconintra.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_reconintra.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_rtcd.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_scale.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_scale.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_scan.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_scan.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_seg_common.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_seg_common.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_thread_common.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_thread_common.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_tile_common.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_tile_common.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodeframe.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodeframe.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodemv.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodemv.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_decoder.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_decoder.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_denoiser_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_frame_scale_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_quantize_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_block.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_context_tree.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_context_tree.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_cost.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_cost.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_dct.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_denoiser.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_denoiser.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodeframe.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodeframe.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemb.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemb.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemv.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemv.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encoder.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encoder.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_ethread.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_ethread.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_extend.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_extend.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_firstpass.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_frame_scale.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_job_queue.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_lookahead.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_lookahead.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_mbgraph.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_mcomp.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_mcomp.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_partition_models.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_quantize.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_quantize.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_ratectrl.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_ratectrl.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_rd.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_rd.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_rdopt.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_rdopt.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_resize.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_resize.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_segmentation.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_segmentation.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_skin_detection.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_skin_detection.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_speed_features.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_speed_features.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_subexp.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_subexp.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_svc_layercontext.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_svc_layercontext.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_temporal_filter.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_tokenize.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_tokenize.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.h",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.h",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.h",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.c",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.h",
+  "//third_party/libvpx/source/libvpx/vpx/internal/vpx_codec_internal.h",
+  "//third_party/libvpx/source/libvpx/vpx/src/vpx_codec.c",
+  "//third_party/libvpx/source/libvpx/vpx/src/vpx_decoder.c",
+  "//third_party/libvpx/source/libvpx/vpx/src/vpx_encoder.c",
+  "//third_party/libvpx/source/libvpx/vpx/src/vpx_image.c",
+  "//third_party/libvpx/source/libvpx/vpx/vp8.h",
+  "//third_party/libvpx/source/libvpx/vpx/vp8cx.h",
+  "//third_party/libvpx/source/libvpx/vpx/vp8dx.h",
+  "//third_party/libvpx/source/libvpx/vpx/vpx_codec.h",
+  "//third_party/libvpx/source/libvpx/vpx/vpx_decoder.h",
+  "//third_party/libvpx/source/libvpx/vpx/vpx_encoder.h",
+  "//third_party/libvpx/source/libvpx/vpx/vpx_frame_buffer.h",
+  "//third_party/libvpx/source/libvpx/vpx/vpx_image.h",
+  "//third_party/libvpx/source/libvpx/vpx/vpx_integer.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/add_noise.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/avg_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/avg_pred_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/deblock_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/fdct16x16_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/fdct32x32_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/fdct_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/fdct_partial_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/fwd_txfm_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/hadamard_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct16x16_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_1024_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_135_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_34_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct4x4_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct8x8_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct_neon.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_intrapred_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_loopfilter_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_vpx_convolve8_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_vpx_convolve_avg_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_vpx_convolve_copy_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_vpx_convolve_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct16x16_1_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct16x16_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct32x32_135_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct32x32_1_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct32x32_34_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct32x32_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct8x8_1_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct8x8_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct_neon.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/intrapred_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/mem_neon.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/quantize_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/sad4d_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/sad_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/subpel_variance_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/subtract_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/sum_neon.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/sum_squares_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/transpose_neon.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/variance_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon_asm.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon_asm.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_scaled_convolve8_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/avg.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitreader.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitreader.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitreader_buffer.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitreader_buffer.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitwriter.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitwriter.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitwriter_buffer.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitwriter_buffer.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/deblock.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/fwd_txfm.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/fwd_txfm.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/intrapred.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/inv_txfm.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/inv_txfm.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/loopfilter.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/postproc.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/prob.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/prob.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/psnr.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/psnr.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/quantize.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/quantize.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/sad.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/skin_detection.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/skin_detection.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/subtract.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/sum_squares.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/txfm_common.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/variance.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/variance.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/vpx_convolve.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/vpx_convolve.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp_common.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp_rtcd.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/vpx_filter.h",
+  "//third_party/libvpx/source/libvpx/vpx_mem/include/vpx_mem_intrnl.h",
+  "//third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.c",
+  "//third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/arm.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/arm_cpudetect.c",
+  "//third_party/libvpx/source/libvpx/vpx_ports/bitops.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/compiler_attributes.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/emmintrin_compat.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/mem.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops_aligned.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/msvc.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/static_assert.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/system_state.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/vpx_once.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/vpx_timer.h",
+  "//third_party/libvpx/source/libvpx/vpx_scale/generic/gen_scalers.c",
+  "//third_party/libvpx/source/libvpx/vpx_scale/generic/vpx_scale.c",
+  "//third_party/libvpx/source/libvpx/vpx_scale/generic/yv12config.c",
+  "//third_party/libvpx/source/libvpx/vpx_scale/generic/yv12extend.c",
+  "//third_party/libvpx/source/libvpx/vpx_scale/vpx_scale.h",
+  "//third_party/libvpx/source/libvpx/vpx_scale/vpx_scale_rtcd.c",
+  "//third_party/libvpx/source/libvpx/vpx_scale/yv12config.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/endian_inl.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_atomics.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.c",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_timestamp.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.c",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.h",
+]
+libvpx_srcs_arm_neon_highbd_assembly = [
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct4x4_1_add_neon.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct4x4_add_neon.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct_neon.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/intrapred_neon_asm.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/loopfilter_16_neon.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/loopfilter_4_neon.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/loopfilter_8_neon.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/save_reg_neon.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_horiz_filter_type1_neon.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_horiz_filter_type2_neon.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_vert_filter_type1_neon.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_vert_filter_type2_neon.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_horiz_filter_type1_neon.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_horiz_filter_type2_neon.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_vert_filter_type1_neon.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_vert_filter_type2_neon.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve_avg_neon_asm.asm",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve_copy_neon_asm.asm",
+]
+libvpx_srcs_arm64_highbd = [
+  "//third_party/libvpx/source/libvpx/vp8/common/alloccommon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/alloccommon.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/bilinearpredict_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/copymem_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/dc_only_idct_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/dequant_idct_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/dequantizeb_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_blk_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/iwalsh_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimplehorizontaledge_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimpleverticaledge_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/mbloopfilter_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/shortidct4x4llm_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/sixtappredict_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/arm/neon/vp8_loopfilter_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/blockd.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/blockd.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/coefupdateprobs.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/common.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/default_coef_probs.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/dequantize.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/entropy.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/entropy.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/entropymode.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/entropymode.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/entropymv.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/entropymv.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/extend.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/extend.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/filter.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/filter.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/findnearmv.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/findnearmv.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/generic/systemdependent.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/header.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/idct_blk.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/idctllm.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/invtrans.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/loopfilter.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/loopfilter_filters.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/mbpitch.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/mfqe.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/modecont.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/modecont.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/mv.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/onyx.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/onyxc_int.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/onyxd.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/postproc.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/postproc.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/ppflags.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/quant_common.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/quant_common.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/reconinter.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/reconinter.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/reconintra.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/reconintra.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/reconintra4x4.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/reconintra4x4.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/rtcd.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/setupintrarecon.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/setupintrarecon.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/swapyv12buffer.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/swapyv12buffer.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/systemdependent.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/threading.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/treecoder.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/treecoder.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/vp8_entropymodedata.h",
+  "//third_party/libvpx/source/libvpx/vp8/common/vp8_loopfilter.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/vp8_skin_detection.c",
+  "//third_party/libvpx/source/libvpx/vp8/common/vp8_skin_detection.h",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/dboolhuff.c",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/dboolhuff.h",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/decodeframe.c",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/decodemv.c",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/decodemv.h",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/decoderthreading.h",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/detokenize.c",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/detokenize.h",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/onyxd_if.c",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/onyxd_int.h",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/threading.c",
+  "//third_party/libvpx/source/libvpx/vp8/decoder/treereader.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/denoising_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/fastquantizeb_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/shortfdct_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/vp8_shortwalsh4x4_neon.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/bitstream.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/bitstream.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/block.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/copy_c.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/dct.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_cost.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_tokens.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/defaultcoefcounts.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/denoising.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/denoising.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodeframe.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodeframe.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodeintra.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodeintra.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodemb.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodemb.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodemv.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/encodemv.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/ethreading.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/ethreading.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/firstpass.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/lookahead.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/lookahead.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/mcomp.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/mcomp.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/modecosts.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/modecosts.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/mr_dissim.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/mr_dissim.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/onyx_if.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/onyx_int.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/pickinter.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/pickinter.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/picklpf.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/picklpf.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/quantize.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/ratectrl.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/ratectrl.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/rdopt.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/rdopt.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/segmentation.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/segmentation.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/tokenize.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/tokenize.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/treewriter.c",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/treewriter.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/vp8_quantize.c",
+  "//third_party/libvpx/source/libvpx/vp8/vp8_cx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp8/vp8_dx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht16x16_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht4x4_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht8x8_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht16x16_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht4x4_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht8x8_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht_neon.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_alloccommon.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_alloccommon.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_blockd.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_blockd.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_common.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_common_data.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_common_data.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_entropy.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_entropy.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_entropymode.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_entropymode.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_entropymv.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_entropymv.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_enums.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_filter.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_filter.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_frame_buffers.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_frame_buffers.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_idct.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_idct.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_loopfilter.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_loopfilter.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_mfqe.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_mfqe.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_mv.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_mvref_common.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_mvref_common.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_onyxc_int.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_postproc.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_postproc.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_ppflags.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_pred_common.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_pred_common.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_quant_common.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_quant_common.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_reconinter.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_reconinter.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_reconintra.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_reconintra.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_rtcd.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_scale.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_scale.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_scan.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_scan.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_seg_common.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_seg_common.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_thread_common.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_thread_common.h",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_tile_common.c",
+  "//third_party/libvpx/source/libvpx/vp9/common/vp9_tile_common.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodeframe.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodeframe.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodemv.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodemv.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_decoder.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_decoder.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_denoiser_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_frame_scale_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_quantize_neon.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_block.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_context_tree.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_context_tree.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_cost.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_cost.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_dct.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_denoiser.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_denoiser.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodeframe.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodeframe.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemb.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemb.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemv.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemv.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encoder.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_encoder.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_ethread.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_ethread.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_extend.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_extend.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_firstpass.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_frame_scale.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_job_queue.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_lookahead.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_lookahead.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_mbgraph.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_mcomp.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_mcomp.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_partition_models.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_quantize.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_quantize.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_ratectrl.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_ratectrl.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_rd.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_rd.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_rdopt.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_rdopt.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_resize.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_resize.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_segmentation.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_segmentation.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_skin_detection.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_skin_detection.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_speed_features.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_speed_features.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_subexp.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_subexp.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_svc_layercontext.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_svc_layercontext.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_temporal_filter.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_tokenize.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_tokenize.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.c",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.h",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.h",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.h",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.c",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.h",
+  "//third_party/libvpx/source/libvpx/vpx/internal/vpx_codec_internal.h",
+  "//third_party/libvpx/source/libvpx/vpx/src/vpx_codec.c",
+  "//third_party/libvpx/source/libvpx/vpx/src/vpx_decoder.c",
+  "//third_party/libvpx/source/libvpx/vpx/src/vpx_encoder.c",
+  "//third_party/libvpx/source/libvpx/vpx/src/vpx_image.c",
+  "//third_party/libvpx/source/libvpx/vpx/vp8.h",
+  "//third_party/libvpx/source/libvpx/vpx/vp8cx.h",
+  "//third_party/libvpx/source/libvpx/vpx/vp8dx.h",
+  "//third_party/libvpx/source/libvpx/vpx/vpx_codec.h",
+  "//third_party/libvpx/source/libvpx/vpx/vpx_decoder.h",
+  "//third_party/libvpx/source/libvpx/vpx/vpx_encoder.h",
+  "//third_party/libvpx/source/libvpx/vpx/vpx_frame_buffer.h",
+  "//third_party/libvpx/source/libvpx/vpx/vpx_image.h",
+  "//third_party/libvpx/source/libvpx/vpx/vpx_integer.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/add_noise.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/avg_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/avg_pred_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/deblock_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/fdct16x16_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/fdct32x32_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/fdct_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/fdct_partial_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/fwd_txfm_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/hadamard_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct16x16_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_1024_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_135_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_34_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct4x4_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct8x8_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct_neon.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_intrapred_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_loopfilter_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_vpx_convolve8_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_vpx_convolve_avg_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_vpx_convolve_copy_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_vpx_convolve_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct16x16_1_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct16x16_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct32x32_135_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct32x32_1_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct32x32_34_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct32x32_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct4x4_1_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct4x4_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct8x8_1_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct8x8_add_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/idct_neon.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/intrapred_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/loopfilter_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/mem_neon.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/quantize_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/sad4d_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/sad_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/subpel_variance_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/subtract_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/sum_neon.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/sum_squares_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/transpose_neon.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/variance_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve_avg_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve_copy_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_scaled_convolve8_neon.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/avg.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitreader.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitreader.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitreader_buffer.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitreader_buffer.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitwriter.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitwriter.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitwriter_buffer.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/bitwriter_buffer.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/deblock.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/fwd_txfm.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/fwd_txfm.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/intrapred.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/inv_txfm.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/inv_txfm.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/loopfilter.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/postproc.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/prob.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/prob.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/psnr.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/psnr.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/quantize.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/quantize.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/sad.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/skin_detection.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/skin_detection.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/subtract.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/sum_squares.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/txfm_common.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/variance.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/variance.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/vpx_convolve.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/vpx_convolve.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp_common.h",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp_rtcd.c",
+  "//third_party/libvpx/source/libvpx/vpx_dsp/vpx_filter.h",
+  "//third_party/libvpx/source/libvpx/vpx_mem/include/vpx_mem_intrnl.h",
+  "//third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.c",
+  "//third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/arm.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/arm_cpudetect.c",
+  "//third_party/libvpx/source/libvpx/vpx_ports/bitops.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/compiler_attributes.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/emmintrin_compat.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/mem.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops_aligned.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/msvc.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/static_assert.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/system_state.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/vpx_once.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/vpx_timer.h",
+  "//third_party/libvpx/source/libvpx/vpx_scale/generic/gen_scalers.c",
+  "//third_party/libvpx/source/libvpx/vpx_scale/generic/vpx_scale.c",
+  "//third_party/libvpx/source/libvpx/vpx_scale/generic/yv12config.c",
+  "//third_party/libvpx/source/libvpx/vpx_scale/generic/yv12extend.c",
+  "//third_party/libvpx/source/libvpx/vpx_scale/vpx_scale.h",
+  "//third_party/libvpx/source/libvpx/vpx_scale/vpx_scale_rtcd.c",
+  "//third_party/libvpx/source/libvpx/vpx_scale/yv12config.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/endian_inl.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_atomics.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.c",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_timestamp.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.c",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.h",
+]
+libvpx_srcs_arm64_highbd_assembly = []
 libvpx_srcs_mips = [
   "//third_party/libvpx/source/libvpx/vp8/common/alloccommon.c",
   "//third_party/libvpx/source/libvpx/vp8/common/alloccommon.h",
@@ -2565,7 +3403,6 @@
   "//third_party/libvpx/source/libvpx/vp8/common/blockd.h",
   "//third_party/libvpx/source/libvpx/vp8/common/coefupdateprobs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/common.h",
-  "//third_party/libvpx/source/libvpx/vp8/common/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/common/default_coef_probs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/dequantize.c",
   "//third_party/libvpx/source/libvpx/vp8/common/entropy.c",
@@ -2636,6 +3473,7 @@
   "//third_party/libvpx/source/libvpx/vp8/encoder/block.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_cost.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_tokens.h",
@@ -2741,16 +3579,10 @@
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.h",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.c",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_block.h",
@@ -2785,6 +3617,7 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_partition_models.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.c",
@@ -2815,16 +3648,16 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.h",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.h",
   "//third_party/libvpx/source/libvpx/vpx/internal/vpx_codec_internal.h",
-  "//third_party/libvpx/source/libvpx/vpx/src/svc_encodeframe.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_codec.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_decoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_encoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_image.c",
-  "//third_party/libvpx/source/libvpx/vpx/svc_context.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8cx.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8dx.h",
@@ -2876,11 +3709,13 @@
   "//third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/asmdefs_mmi.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/bitops.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/compiler_attributes.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/emmintrin_compat.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops_aligned.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/msvc.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/static_assert.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/system_state.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_once.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_timer.h",
@@ -2895,6 +3730,7 @@
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_atomics.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_timestamp.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.h",
 ]
@@ -2906,7 +3742,6 @@
   "//third_party/libvpx/source/libvpx/vp8/common/blockd.h",
   "//third_party/libvpx/source/libvpx/vp8/common/coefupdateprobs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/common.h",
-  "//third_party/libvpx/source/libvpx/vp8/common/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/common/default_coef_probs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/dequantize.c",
   "//third_party/libvpx/source/libvpx/vp8/common/entropy.c",
@@ -2977,6 +3812,7 @@
   "//third_party/libvpx/source/libvpx/vp8/encoder/block.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_cost.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_tokens.h",
@@ -3082,16 +3918,10 @@
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.h",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.c",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_block.h",
@@ -3126,6 +3956,7 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_partition_models.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.c",
@@ -3156,16 +3987,16 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.h",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.h",
   "//third_party/libvpx/source/libvpx/vpx/internal/vpx_codec_internal.h",
-  "//third_party/libvpx/source/libvpx/vpx/src/svc_encodeframe.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_codec.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_decoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_encoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_image.c",
-  "//third_party/libvpx/source/libvpx/vpx/svc_context.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8cx.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8dx.h",
@@ -3216,11 +4047,13 @@
   "//third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.c",
   "//third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/bitops.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/compiler_attributes.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/emmintrin_compat.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops_aligned.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/msvc.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/static_assert.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/system_state.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_once.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_timer.h",
@@ -3235,6 +4068,7 @@
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_atomics.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_timestamp.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.h",
 ]
@@ -3246,7 +4080,6 @@
   "//third_party/libvpx/source/libvpx/vp8/common/blockd.h",
   "//third_party/libvpx/source/libvpx/vp8/common/coefupdateprobs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/common.h",
-  "//third_party/libvpx/source/libvpx/vp8/common/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/common/default_coef_probs.h",
   "//third_party/libvpx/source/libvpx/vp8/common/dequantize.c",
   "//third_party/libvpx/source/libvpx/vp8/common/entropy.c",
@@ -3317,6 +4150,7 @@
   "//third_party/libvpx/source/libvpx/vp8/encoder/block.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.h",
+  "//third_party/libvpx/source/libvpx/vp8/encoder/copy_c.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct.c",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_cost.h",
   "//third_party/libvpx/source/libvpx/vp8/encoder/dct_value_tokens.h",
@@ -3422,16 +4256,10 @@
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.h",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.c",
   "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.h",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.c",
+  "//third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.h",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.c",
-  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_block.h",
@@ -3466,6 +4294,7 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.h",
+  "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_partition_models.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.h",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.c",
@@ -3496,16 +4325,16 @@
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.c",
   "//third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.c",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.h",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.h",
+  "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.c",
   "//third_party/libvpx/source/libvpx/vp9/vp9_iface_common.h",
   "//third_party/libvpx/source/libvpx/vpx/internal/vpx_codec_internal.h",
-  "//third_party/libvpx/source/libvpx/vpx/src/svc_encodeframe.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_codec.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_decoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_encoder.c",
   "//third_party/libvpx/source/libvpx/vpx/src/vpx_image.c",
-  "//third_party/libvpx/source/libvpx/vpx/svc_context.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8cx.h",
   "//third_party/libvpx/source/libvpx/vpx/vp8dx.h",
@@ -3556,11 +4385,13 @@
   "//third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.c",
   "//third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/bitops.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/compiler_attributes.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/emmintrin_compat.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/mem_ops_aligned.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/msvc.h",
+  "//third_party/libvpx/source/libvpx/vpx_ports/static_assert.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/system_state.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_once.h",
   "//third_party/libvpx/source/libvpx/vpx_ports/vpx_timer.h",
@@ -3575,6 +4406,7 @@
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_atomics.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_thread.h",
+  "//third_party/libvpx/source/libvpx/vpx_util/vpx_timestamp.h",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.c",
   "//third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.h",
 ]
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/lint_config.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/lint_config.sh
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/run_perl.py b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/run_perl.py
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vp8_rtcd.h
index 737afd5..c4bb41b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vp8_rtcd.h
@@ -27,68 +27,68 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-void vp8_bilinear_predict16x16_neon(unsigned char* src,
-                                    int src_pitch,
-                                    int xofst,
-                                    int yofst,
-                                    unsigned char* dst,
+void vp8_bilinear_predict16x16_neon(unsigned char* src_ptr,
+                                    int src_pixels_per_line,
+                                    int xoffset,
+                                    int yoffset,
+                                    unsigned char* dst_ptr,
                                     int dst_pitch);
 #define vp8_bilinear_predict16x16 vp8_bilinear_predict16x16_neon
 
-void vp8_bilinear_predict4x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict4x4_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict4x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_neon
 
-void vp8_bilinear_predict8x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict8x4_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict8x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_neon
 
-void vp8_bilinear_predict8x8_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict8x8_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict8x8_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_bilinear_predict8x8 vp8_bilinear_predict8x8_neon
 
 void vp8_blend_b_c(unsigned char* y,
                    unsigned char* u,
                    unsigned char* v,
-                   int y1,
-                   int u1,
-                   int v1,
+                   int y_1,
+                   int u_1,
+                   int v_1,
                    int alpha,
                    int stride);
 #define vp8_blend_b vp8_blend_b_c
@@ -96,9 +96,9 @@
 void vp8_blend_mb_inner_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
@@ -106,9 +106,9 @@
 void vp8_blend_mb_outer_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
@@ -116,45 +116,52 @@
 int vp8_block_error_c(short* coeff, short* dqcoeff);
 #define vp8_block_error vp8_block_error_c
 
+void vp8_copy32xn_c(const unsigned char* src_ptr,
+                    int src_stride,
+                    unsigned char* dst_ptr,
+                    int dst_stride,
+                    int height);
+#define vp8_copy32xn vp8_copy32xn_c
+
 void vp8_copy_mem16x16_c(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 void vp8_copy_mem16x16_neon(unsigned char* src,
-                            int src_pitch,
+                            int src_stride,
                             unsigned char* dst,
-                            int dst_pitch);
+                            int dst_stride);
 #define vp8_copy_mem16x16 vp8_copy_mem16x16_neon
 
 void vp8_copy_mem8x4_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 void vp8_copy_mem8x4_neon(unsigned char* src,
-                          int src_pitch,
+                          int src_stride,
                           unsigned char* dst,
-                          int dst_pitch);
+                          int dst_stride);
 #define vp8_copy_mem8x4 vp8_copy_mem8x4_neon
 
 void vp8_copy_mem8x8_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 void vp8_copy_mem8x8_neon(unsigned char* src,
-                          int src_pitch,
+                          int src_stride,
                           unsigned char* dst,
-                          int dst_pitch);
+                          int dst_stride);
 #define vp8_copy_mem8x8 vp8_copy_mem8x8_neon
 
-void vp8_dc_only_idct_add_c(short input,
-                            unsigned char* pred,
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
                             int pred_stride,
-                            unsigned char* dst,
+                            unsigned char* dst_ptr,
                             int dst_stride);
-void vp8_dc_only_idct_add_neon(short input,
-                               unsigned char* pred,
+void vp8_dc_only_idct_add_neon(short input_dc,
+                               unsigned char* pred_ptr,
                                int pred_stride,
-                               unsigned char* dst,
+                               unsigned char* dst_ptr,
                                int dst_stride);
 #define vp8_dc_only_idct_add vp8_dc_only_idct_add_neon
 
@@ -196,11 +203,11 @@
 
 void vp8_dequant_idct_add_c(short* input,
                             short* dq,
-                            unsigned char* output,
+                            unsigned char* dest,
                             int stride);
 void vp8_dequant_idct_add_neon(short* input,
                                short* dq,
-                               unsigned char* output,
+                               unsigned char* dest,
                                int stride);
 #define vp8_dequant_idct_add vp8_dequant_idct_add_neon
 
@@ -230,8 +237,8 @@
                                        char* eobs);
 #define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_neon
 
-void vp8_dequantize_b_c(struct blockd*, short* dqc);
-void vp8_dequantize_b_neon(struct blockd*, short* dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
+void vp8_dequantize_b_neon(struct blockd*, short* DQC);
 #define vp8_dequantize_b vp8_dequantize_b_neon
 
 int vp8_diamond_search_sad_c(struct macroblock* x,
@@ -283,91 +290,91 @@
                           union int_mv* center_mv);
 #define vp8_full_search_sad vp8_full_search_sad_c
 
-void vp8_loop_filter_bh_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bh_neon(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bh_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
 #define vp8_loop_filter_bh vp8_loop_filter_bh_neon
 
-void vp8_loop_filter_bv_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bv_neon(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bv_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
 #define vp8_loop_filter_bv vp8_loop_filter_bv_neon
 
-void vp8_loop_filter_mbh_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbh_neon(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbh_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbh vp8_loop_filter_mbh_neon
 
-void vp8_loop_filter_mbv_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbv_neon(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbv_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbv vp8_loop_filter_mbv_neon
 
-void vp8_loop_filter_bhs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bhs_neon(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bhs_neon(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_neon
 
-void vp8_loop_filter_bvs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bvs_neon(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bvs_neon(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_neon
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y,
-                                              int ystride,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
-void vp8_loop_filter_mbhs_neon(unsigned char* y,
-                               int ystride,
+void vp8_loop_filter_mbhs_neon(unsigned char* y_ptr,
+                               int y_stride,
                                const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbh vp8_loop_filter_mbhs_neon
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y,
-                                            int ystride,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
                                             const unsigned char* blimit);
-void vp8_loop_filter_mbvs_neon(unsigned char* y,
-                               int ystride,
+void vp8_loop_filter_mbvs_neon(unsigned char* y_ptr,
+                               int y_stride,
                                const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbv vp8_loop_filter_mbvs_neon
 
@@ -381,8 +388,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -400,81 +407,81 @@
 #define vp8_short_fdct8x4 vp8_short_fdct8x4_neon
 
 void vp8_short_idct4x4llm_c(short* input,
-                            unsigned char* pred,
-                            int pitch,
-                            unsigned char* dst,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 void vp8_short_idct4x4llm_neon(short* input,
-                               unsigned char* pred,
-                               int pitch,
-                               unsigned char* dst,
+                               unsigned char* pred_ptr,
+                               int pred_stride,
+                               unsigned char* dst_ptr,
                                int dst_stride);
 #define vp8_short_idct4x4llm vp8_short_idct4x4llm_neon
 
-void vp8_short_inv_walsh4x4_c(short* input, short* output);
-void vp8_short_inv_walsh4x4_neon(short* input, short* output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
+void vp8_short_inv_walsh4x4_neon(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_neon
 
-void vp8_short_inv_walsh4x4_1_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
 void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
 void vp8_short_walsh4x4_neon(short* input, short* output, int pitch);
 #define vp8_short_walsh4x4 vp8_short_walsh4x4_neon
 
-void vp8_sixtap_predict16x16_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_sixtap_predict16x16_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_sixtap_predict16x16_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_sixtap_predict16x16 vp8_sixtap_predict16x16_neon
 
-void vp8_sixtap_predict4x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict4x4_neon(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict4x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
 #define vp8_sixtap_predict4x4 vp8_sixtap_predict4x4_neon
 
-void vp8_sixtap_predict8x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x4_neon(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
 #define vp8_sixtap_predict8x4 vp8_sixtap_predict8x4_neon
 
-void vp8_sixtap_predict8x8_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x8_neon(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x8_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
 #define vp8_sixtap_predict8x8 vp8_sixtap_predict8x8_neon
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vp9_rtcd.h
index 309c780..e17954d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vp9_rtcd.h
@@ -76,34 +76,6 @@
                              const struct mv* center_mv);
 #define vp9_diamond_search_sad vp9_diamond_search_sad_c
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-void vp9_fdct8x8_quant_neon(const int16_t* input,
-                            int stride,
-                            tran_low_t* coeff_ptr,
-                            intptr_t n_coeffs,
-                            int skip_block,
-                            const int16_t* round_ptr,
-                            const int16_t* quant_ptr,
-                            tran_low_t* qcoeff_ptr,
-                            tran_low_t* dqcoeff_ptr,
-                            const int16_t* dequant_ptr,
-                            uint16_t* eob_ptr,
-                            const int16_t* scan,
-                            const int16_t* iscan);
-#define vp9_fdct8x8_quant vp9_fdct8x8_quant_neon
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -140,12 +112,12 @@
 #define vp9_fwht4x4 vp9_fwht4x4_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 void vp9_iht16x16_256_add_neon(const tran_low_t* input,
-                               uint8_t* output,
-                               int pitch,
+                               uint8_t* dest,
+                               int stride,
                                int tx_type);
 #define vp9_iht16x16_256_add vp9_iht16x16_256_add_neon
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.asm
index 2f1107d..0322157 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.asm
@@ -1,13 +1,16 @@
 @ This file was created from a .asm file
 @  using the ads2gas_apple.pl script.
 
-	.set WIDE_REFERENCE, 0
-	.set ARCHITECTURE, 5
-	.set DO1STROUNDING, 0
+	.syntax unified
+.set VPX_ARCH_ARM ,  1
 .set ARCH_ARM ,  1
+.set VPX_ARCH_MIPS ,  0
 .set ARCH_MIPS ,  0
+.set VPX_ARCH_X86 ,  0
 .set ARCH_X86 ,  0
+.set VPX_ARCH_X86_64 ,  0
 .set ARCH_X86_64 ,  0
+.set VPX_ARCH_PPC ,  0
 .set ARCH_PPC ,  0
 .set HAVE_NEON ,  1
 .set HAVE_NEON_ASM ,  1
@@ -72,20 +75,24 @@
 .set CONFIG_OS_SUPPORT ,  1
 .set CONFIG_UNIT_TESTS ,  1
 .set CONFIG_WEBM_IO ,  1
-.set CONFIG_LIBYUV ,  1
+.set CONFIG_LIBYUV ,  0
 .set CONFIG_DECODE_PERF_TESTS ,  0
 .set CONFIG_ENCODE_PERF_TESTS ,  0
 .set CONFIG_MULTI_RES_ENCODING ,  1
 .set CONFIG_TEMPORAL_DENOISING ,  1
 .set CONFIG_VP9_TEMPORAL_DENOISING ,  1
+.set CONFIG_CONSISTENT_RECODE ,  0
 .set CONFIG_COEFFICIENT_RANGE_CHECKING ,  0
 .set CONFIG_VP9_HIGHBITDEPTH ,  0
 .set CONFIG_BETTER_HW_COMPATIBILITY ,  0
 .set CONFIG_EXPERIMENTAL ,  0
 .set CONFIG_SIZE_LIMIT ,  1
 .set CONFIG_ALWAYS_ADJUST_BPM ,  0
-.set CONFIG_SPATIAL_SVC ,  0
+.set CONFIG_BITSTREAM_DEBUG ,  0
+.set CONFIG_MISMATCH_DEBUG ,  0
 .set CONFIG_FP_MB_STATS ,  0
 .set CONFIG_EMULATE_HARDWARE ,  0
+.set CONFIG_NON_GREEDY_MV ,  0
+.set CONFIG_RATE_CTRL ,  0
 .set DECODE_WIDTH_LIMIT ,  16384
 .set DECODE_HEIGHT_LIMIT ,  16384
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.c
index 22b0617..f4f0c11 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=armv7-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs";
+static const char* const cfg = "--target=armv7-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.h
index 3dbebf9..969f186 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      inline
+#define VPX_ARCH_ARM 1
 #define ARCH_ARM 1
+#define VPX_ARCH_MIPS 0
 #define ARCH_MIPS 0
+#define VPX_ARCH_X86 0
 #define ARCH_X86 0
+#define VPX_ARCH_X86_64 0
 #define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 1
 #define HAVE_NEON_ASM 1
@@ -78,21 +83,25 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
 #define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 0
 #define CONFIG_BETTER_HW_COMPATIBILITY 0
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
-#define CONFIG_SPATIAL_SVC 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_dsp_rtcd.h
index 453e90e..9c31497 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm-neon/vpx_dsp_rtcd.h
@@ -235,349 +235,349 @@
 #define vpx_convolve_copy vpx_convolve_copy_neon
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d135_predictor_16x16_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_neon
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d135_predictor_32x32_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_neon
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d135_predictor_4x4_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_neon
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d135_predictor_8x8_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_neon
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_16x16 vpx_d153_predictor_16x16_c
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_32x32 vpx_d153_predictor_32x32_c
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_4x4 vpx_d153_predictor_4x4_c
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_8x8 vpx_d153_predictor_8x8_c
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_16x16 vpx_d207_predictor_16x16_c
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_32x32 vpx_d207_predictor_32x32_c
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_c
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_8x8 vpx_d207_predictor_8x8_c
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_16x16_neon(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_d45_predictor_16x16 vpx_d45_predictor_16x16_neon
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_32x32_neon(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_d45_predictor_32x32 vpx_d45_predictor_32x32_neon
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_4x4_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_neon
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_8x8_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_neon
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_16x16 vpx_d63_predictor_16x16_c
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_32x32 vpx_d63_predictor_32x32_c
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_4x4 vpx_d63_predictor_4x4_c
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_8x8 vpx_d63_predictor_8x8_c
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_16x16_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_neon
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_32x32_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_neon
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_4x4_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_neon
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_8x8_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_neon
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_16x16_neon(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 #define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_neon
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_32x32_neon(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 #define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_neon
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_4x4_neon(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 #define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_neon
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_8x8_neon(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 #define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_neon
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_16x16_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_neon
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_32x32_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_neon
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_4x4_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_neon
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_8x8_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_neon
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_16x16_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_neon
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_32x32_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_neon
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_4x4_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_neon
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_8x8_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_neon
@@ -621,13 +621,13 @@
 #define vpx_fdct8x8_1 vpx_fdct8x8_1_neon
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
                        int* sum);
 void vpx_get16x16var_neon(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
@@ -635,23 +635,23 @@
 #define vpx_get16x16var vpx_get16x16var_neon
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 unsigned int vpx_get4x4sse_cs_neon(const unsigned char* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const unsigned char* ref_ptr,
                                    int ref_stride);
 #define vpx_get4x4sse_cs vpx_get4x4sse_cs_neon
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
                      int* sum);
 void vpx_get8x8var_neon(const uint8_t* src_ptr,
-                        int source_stride,
+                        int src_stride,
                         const uint8_t* ref_ptr,
                         int ref_stride,
                         unsigned int* sse,
@@ -662,41 +662,41 @@
 #define vpx_get_mb_ss vpx_get_mb_ss_c
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_16x16_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_h_predictor_16x16 vpx_h_predictor_16x16_neon
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_32x32_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_h_predictor_32x32 vpx_h_predictor_32x32_neon
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_4x4_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_h_predictor_4x4 vpx_h_predictor_4x4_neon
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_8x8_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_h_predictor_8x8 vpx_h_predictor_8x8_neon
@@ -723,7 +723,7 @@
 #define vpx_hadamard_8x8 vpx_hadamard_8x8_neon
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
@@ -996,12 +996,12 @@
                                   const uint8_t* thresh1);
 #define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_neon
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
                                  int flimit);
-void vpx_mbpost_proc_across_ip_neon(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_neon(unsigned char* src,
                                     int pitch,
                                     int rows,
                                     int cols,
@@ -1035,35 +1035,35 @@
 #define vpx_minmax_8x8 vpx_minmax_8x8_neon
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 unsigned int vpx_mse16x16_neon(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 #define vpx_mse16x16 vpx_mse16x16_neon
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse16x8 vpx_mse16x8_c
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse8x16 vpx_mse8x16_c
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 #define vpx_mse8x8 vpx_mse8x8_c
 
@@ -1180,12 +1180,12 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x16x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad16x16x4d vpx_sad16x16x4d_neon
@@ -1221,12 +1221,12 @@
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x32x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad16x32x4d vpx_sad16x32x4d_neon
@@ -1262,12 +1262,12 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad16x8x4d_neon(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 #define vpx_sad16x8x4d vpx_sad16x8x4d_neon
@@ -1303,12 +1303,12 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x16x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x16x4d vpx_sad32x16x4d_neon
@@ -1337,16 +1337,23 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x32x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x32x4d vpx_sad32x32x4d_neon
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad32x32x8 vpx_sad32x32x8_c
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -1371,12 +1378,12 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x64x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x64x4d vpx_sad32x64x4d_neon
@@ -1412,12 +1419,12 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x4x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad4x4x4d vpx_sad4x4x4d_neon
@@ -1453,12 +1460,12 @@
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x8x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad4x8x4d vpx_sad4x8x4d_neon
@@ -1487,12 +1494,12 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x32x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad64x32x4d vpx_sad64x32x4d_neon
@@ -1521,12 +1528,12 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x64x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad64x64x4d vpx_sad64x64x4d_neon
@@ -1562,12 +1569,12 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad8x16x4d_neon(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 #define vpx_sad8x16x4d vpx_sad8x16x4d_neon
@@ -1603,12 +1610,12 @@
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x4x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad8x4x4d vpx_sad8x4x4d_neon
@@ -1644,12 +1651,12 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x8x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad8x8x4d vpx_sad8x8x4d_neon
@@ -1755,17 +1762,17 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1773,17 +1780,17 @@
 #define vpx_sub_pixel_avg_variance16x16 vpx_sub_pixel_avg_variance16x16_neon
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1791,17 +1798,17 @@
 #define vpx_sub_pixel_avg_variance16x32 vpx_sub_pixel_avg_variance16x32_neon
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_neon(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
@@ -1809,17 +1816,17 @@
 #define vpx_sub_pixel_avg_variance16x8 vpx_sub_pixel_avg_variance16x8_neon
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1827,17 +1834,17 @@
 #define vpx_sub_pixel_avg_variance32x16 vpx_sub_pixel_avg_variance32x16_neon
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1845,17 +1852,17 @@
 #define vpx_sub_pixel_avg_variance32x32 vpx_sub_pixel_avg_variance32x32_neon
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1863,17 +1870,17 @@
 #define vpx_sub_pixel_avg_variance32x64 vpx_sub_pixel_avg_variance32x64_neon
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1881,17 +1888,17 @@
 #define vpx_sub_pixel_avg_variance4x4 vpx_sub_pixel_avg_variance4x4_neon
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1899,17 +1906,17 @@
 #define vpx_sub_pixel_avg_variance4x8 vpx_sub_pixel_avg_variance4x8_neon
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1917,17 +1924,17 @@
 #define vpx_sub_pixel_avg_variance64x32 vpx_sub_pixel_avg_variance64x32_neon
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1935,17 +1942,17 @@
 #define vpx_sub_pixel_avg_variance64x64 vpx_sub_pixel_avg_variance64x64_neon
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_neon(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
@@ -1953,17 +1960,17 @@
 #define vpx_sub_pixel_avg_variance8x16 vpx_sub_pixel_avg_variance8x16_neon
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1971,17 +1978,17 @@
 #define vpx_sub_pixel_avg_variance8x4 vpx_sub_pixel_avg_variance8x4_neon
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1989,208 +1996,208 @@
 #define vpx_sub_pixel_avg_variance8x8 vpx_sub_pixel_avg_variance8x8_neon
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance16x16 vpx_sub_pixel_variance16x16_neon
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance16x32 vpx_sub_pixel_variance16x32_neon
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_neon(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 #define vpx_sub_pixel_variance16x8 vpx_sub_pixel_variance16x8_neon
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance32x16 vpx_sub_pixel_variance32x16_neon
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance32x32 vpx_sub_pixel_variance32x32_neon
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance32x64 vpx_sub_pixel_variance32x64_neon
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 #define vpx_sub_pixel_variance4x4 vpx_sub_pixel_variance4x4_neon
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 #define vpx_sub_pixel_variance4x8 vpx_sub_pixel_variance4x8_neon
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance64x32 vpx_sub_pixel_variance64x32_neon
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance64x64 vpx_sub_pixel_variance64x64_neon
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_neon(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 #define vpx_sub_pixel_variance8x16 vpx_sub_pixel_variance8x16_neon
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 #define vpx_sub_pixel_variance8x4 vpx_sub_pixel_variance8x4_neon
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
@@ -2219,243 +2226,243 @@
 #define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_neon
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_16x16_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_neon
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_32x32_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_neon
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_4x4_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_neon
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_8x8_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_neon
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_16x16_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_v_predictor_16x16 vpx_v_predictor_16x16_neon
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_32x32_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_v_predictor_32x32 vpx_v_predictor_32x32_neon
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_4x4_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_v_predictor_4x4 vpx_v_predictor_4x4_neon
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_8x8_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_v_predictor_8x8 vpx_v_predictor_8x8_neon
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x16_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance16x16 vpx_variance16x16_neon
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x32_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance16x32 vpx_variance16x32_neon
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance16x8_neon(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 #define vpx_variance16x8 vpx_variance16x8_neon
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x16_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance32x16 vpx_variance32x16_neon
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x32_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance32x32 vpx_variance32x32_neon
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x64_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance32x64 vpx_variance32x64_neon
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x4_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance4x4 vpx_variance4x4_neon
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x8_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance4x8 vpx_variance4x8_neon
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x32_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance64x32 vpx_variance64x32_neon
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x64_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance64x64 vpx_variance64x64_neon
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance8x16_neon(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 #define vpx_variance8x16 vpx_variance8x16_neon
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x4_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance8x4 vpx_variance8x4_neon
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x8_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance8x8 vpx_variance8x8_neon
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vp8_rtcd.h
index 737afd5..c4bb41b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vp8_rtcd.h
@@ -27,68 +27,68 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-void vp8_bilinear_predict16x16_neon(unsigned char* src,
-                                    int src_pitch,
-                                    int xofst,
-                                    int yofst,
-                                    unsigned char* dst,
+void vp8_bilinear_predict16x16_neon(unsigned char* src_ptr,
+                                    int src_pixels_per_line,
+                                    int xoffset,
+                                    int yoffset,
+                                    unsigned char* dst_ptr,
                                     int dst_pitch);
 #define vp8_bilinear_predict16x16 vp8_bilinear_predict16x16_neon
 
-void vp8_bilinear_predict4x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict4x4_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict4x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_neon
 
-void vp8_bilinear_predict8x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict8x4_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict8x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_neon
 
-void vp8_bilinear_predict8x8_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict8x8_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict8x8_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_bilinear_predict8x8 vp8_bilinear_predict8x8_neon
 
 void vp8_blend_b_c(unsigned char* y,
                    unsigned char* u,
                    unsigned char* v,
-                   int y1,
-                   int u1,
-                   int v1,
+                   int y_1,
+                   int u_1,
+                   int v_1,
                    int alpha,
                    int stride);
 #define vp8_blend_b vp8_blend_b_c
@@ -96,9 +96,9 @@
 void vp8_blend_mb_inner_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
@@ -106,9 +106,9 @@
 void vp8_blend_mb_outer_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
@@ -116,45 +116,52 @@
 int vp8_block_error_c(short* coeff, short* dqcoeff);
 #define vp8_block_error vp8_block_error_c
 
+void vp8_copy32xn_c(const unsigned char* src_ptr,
+                    int src_stride,
+                    unsigned char* dst_ptr,
+                    int dst_stride,
+                    int height);
+#define vp8_copy32xn vp8_copy32xn_c
+
 void vp8_copy_mem16x16_c(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 void vp8_copy_mem16x16_neon(unsigned char* src,
-                            int src_pitch,
+                            int src_stride,
                             unsigned char* dst,
-                            int dst_pitch);
+                            int dst_stride);
 #define vp8_copy_mem16x16 vp8_copy_mem16x16_neon
 
 void vp8_copy_mem8x4_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 void vp8_copy_mem8x4_neon(unsigned char* src,
-                          int src_pitch,
+                          int src_stride,
                           unsigned char* dst,
-                          int dst_pitch);
+                          int dst_stride);
 #define vp8_copy_mem8x4 vp8_copy_mem8x4_neon
 
 void vp8_copy_mem8x8_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 void vp8_copy_mem8x8_neon(unsigned char* src,
-                          int src_pitch,
+                          int src_stride,
                           unsigned char* dst,
-                          int dst_pitch);
+                          int dst_stride);
 #define vp8_copy_mem8x8 vp8_copy_mem8x8_neon
 
-void vp8_dc_only_idct_add_c(short input,
-                            unsigned char* pred,
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
                             int pred_stride,
-                            unsigned char* dst,
+                            unsigned char* dst_ptr,
                             int dst_stride);
-void vp8_dc_only_idct_add_neon(short input,
-                               unsigned char* pred,
+void vp8_dc_only_idct_add_neon(short input_dc,
+                               unsigned char* pred_ptr,
                                int pred_stride,
-                               unsigned char* dst,
+                               unsigned char* dst_ptr,
                                int dst_stride);
 #define vp8_dc_only_idct_add vp8_dc_only_idct_add_neon
 
@@ -196,11 +203,11 @@
 
 void vp8_dequant_idct_add_c(short* input,
                             short* dq,
-                            unsigned char* output,
+                            unsigned char* dest,
                             int stride);
 void vp8_dequant_idct_add_neon(short* input,
                                short* dq,
-                               unsigned char* output,
+                               unsigned char* dest,
                                int stride);
 #define vp8_dequant_idct_add vp8_dequant_idct_add_neon
 
@@ -230,8 +237,8 @@
                                        char* eobs);
 #define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_neon
 
-void vp8_dequantize_b_c(struct blockd*, short* dqc);
-void vp8_dequantize_b_neon(struct blockd*, short* dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
+void vp8_dequantize_b_neon(struct blockd*, short* DQC);
 #define vp8_dequantize_b vp8_dequantize_b_neon
 
 int vp8_diamond_search_sad_c(struct macroblock* x,
@@ -283,91 +290,91 @@
                           union int_mv* center_mv);
 #define vp8_full_search_sad vp8_full_search_sad_c
 
-void vp8_loop_filter_bh_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bh_neon(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bh_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
 #define vp8_loop_filter_bh vp8_loop_filter_bh_neon
 
-void vp8_loop_filter_bv_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bv_neon(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bv_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
 #define vp8_loop_filter_bv vp8_loop_filter_bv_neon
 
-void vp8_loop_filter_mbh_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbh_neon(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbh_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbh vp8_loop_filter_mbh_neon
 
-void vp8_loop_filter_mbv_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbv_neon(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbv_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbv vp8_loop_filter_mbv_neon
 
-void vp8_loop_filter_bhs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bhs_neon(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bhs_neon(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_neon
 
-void vp8_loop_filter_bvs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bvs_neon(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bvs_neon(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_neon
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y,
-                                              int ystride,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
-void vp8_loop_filter_mbhs_neon(unsigned char* y,
-                               int ystride,
+void vp8_loop_filter_mbhs_neon(unsigned char* y_ptr,
+                               int y_stride,
                                const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbh vp8_loop_filter_mbhs_neon
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y,
-                                            int ystride,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
                                             const unsigned char* blimit);
-void vp8_loop_filter_mbvs_neon(unsigned char* y,
-                               int ystride,
+void vp8_loop_filter_mbvs_neon(unsigned char* y_ptr,
+                               int y_stride,
                                const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbv vp8_loop_filter_mbvs_neon
 
@@ -381,8 +388,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -400,81 +407,81 @@
 #define vp8_short_fdct8x4 vp8_short_fdct8x4_neon
 
 void vp8_short_idct4x4llm_c(short* input,
-                            unsigned char* pred,
-                            int pitch,
-                            unsigned char* dst,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 void vp8_short_idct4x4llm_neon(short* input,
-                               unsigned char* pred,
-                               int pitch,
-                               unsigned char* dst,
+                               unsigned char* pred_ptr,
+                               int pred_stride,
+                               unsigned char* dst_ptr,
                                int dst_stride);
 #define vp8_short_idct4x4llm vp8_short_idct4x4llm_neon
 
-void vp8_short_inv_walsh4x4_c(short* input, short* output);
-void vp8_short_inv_walsh4x4_neon(short* input, short* output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
+void vp8_short_inv_walsh4x4_neon(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_neon
 
-void vp8_short_inv_walsh4x4_1_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
 void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
 void vp8_short_walsh4x4_neon(short* input, short* output, int pitch);
 #define vp8_short_walsh4x4 vp8_short_walsh4x4_neon
 
-void vp8_sixtap_predict16x16_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_sixtap_predict16x16_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_sixtap_predict16x16_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_sixtap_predict16x16 vp8_sixtap_predict16x16_neon
 
-void vp8_sixtap_predict4x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict4x4_neon(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict4x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
 #define vp8_sixtap_predict4x4 vp8_sixtap_predict4x4_neon
 
-void vp8_sixtap_predict8x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x4_neon(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
 #define vp8_sixtap_predict8x4 vp8_sixtap_predict8x4_neon
 
-void vp8_sixtap_predict8x8_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x8_neon(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x8_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
 #define vp8_sixtap_predict8x8 vp8_sixtap_predict8x8_neon
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vp9_rtcd.h
index 309c780..dfe9bc8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vp9_rtcd.h
@@ -31,6 +31,9 @@
 extern "C" {
 #endif
 
+void vp9_apply_temporal_filter_c(const uint8_t *y_src, int y_src_stride, const uint8_t *y_pre, int y_pre_stride, const uint8_t *u_src, const uint8_t *v_src, int uv_src_stride, const uint8_t *u_pre, const uint8_t *v_pre, int uv_pre_stride, unsigned int block_width, unsigned int block_height, int ss_x, int ss_y, int strength, const int *const blk_fw, int use_32x32, uint32_t *y_accumulator, uint16_t *y_count, uint32_t *u_accumulator, uint16_t *u_count, uint32_t *v_accumulator, uint16_t *v_count);
+#define vp9_apply_temporal_filter vp9_apply_temporal_filter_c
+
 int64_t vp9_block_error_c(const tran_low_t* coeff,
                           const tran_low_t* dqcoeff,
                           intptr_t block_size,
@@ -76,34 +79,6 @@
                              const struct mv* center_mv);
 #define vp9_diamond_search_sad vp9_diamond_search_sad_c
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-void vp9_fdct8x8_quant_neon(const int16_t* input,
-                            int stride,
-                            tran_low_t* coeff_ptr,
-                            intptr_t n_coeffs,
-                            int skip_block,
-                            const int16_t* round_ptr,
-                            const int16_t* quant_ptr,
-                            tran_low_t* qcoeff_ptr,
-                            tran_low_t* dqcoeff_ptr,
-                            const int16_t* dequant_ptr,
-                            uint16_t* eob_ptr,
-                            const int16_t* scan,
-                            const int16_t* iscan);
-#define vp9_fdct8x8_quant vp9_fdct8x8_quant_neon
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -140,12 +115,12 @@
 #define vp9_fwht4x4 vp9_fwht4x4_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 void vp9_iht16x16_256_add_neon(const tran_low_t* input,
-                               uint8_t* output,
-                               int pitch,
+                               uint8_t* dest,
+                               int stride,
                                int tx_type);
 #define vp9_iht16x16_256_add vp9_iht16x16_256_add_neon
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.asm
index 0a0b36e..806d43d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.asm
@@ -1,13 +1,16 @@
 @ This file was created from a .asm file
 @  using the ads2gas_apple.pl script.
 
-	.set WIDE_REFERENCE, 0
-	.set ARCHITECTURE, 5
-	.set DO1STROUNDING, 0
+	.syntax unified
+.set VPX_ARCH_ARM ,  1
 .set ARCH_ARM ,  1
+.set VPX_ARCH_MIPS ,  0
 .set ARCH_MIPS ,  0
+.set VPX_ARCH_X86 ,  0
 .set ARCH_X86 ,  0
+.set VPX_ARCH_X86_64 ,  0
 .set ARCH_X86_64 ,  0
+.set VPX_ARCH_PPC ,  0
 .set ARCH_PPC ,  0
 .set HAVE_NEON ,  1
 .set HAVE_NEON_ASM ,  0
@@ -72,20 +75,24 @@
 .set CONFIG_OS_SUPPORT ,  1
 .set CONFIG_UNIT_TESTS ,  1
 .set CONFIG_WEBM_IO ,  1
-.set CONFIG_LIBYUV ,  1
+.set CONFIG_LIBYUV ,  0
 .set CONFIG_DECODE_PERF_TESTS ,  0
 .set CONFIG_ENCODE_PERF_TESTS ,  0
 .set CONFIG_MULTI_RES_ENCODING ,  1
 .set CONFIG_TEMPORAL_DENOISING ,  1
 .set CONFIG_VP9_TEMPORAL_DENOISING ,  1
+.set CONFIG_CONSISTENT_RECODE ,  0
 .set CONFIG_COEFFICIENT_RANGE_CHECKING ,  0
 .set CONFIG_VP9_HIGHBITDEPTH ,  0
 .set CONFIG_BETTER_HW_COMPATIBILITY ,  0
 .set CONFIG_EXPERIMENTAL ,  0
 .set CONFIG_SIZE_LIMIT ,  1
 .set CONFIG_ALWAYS_ADJUST_BPM ,  0
-.set CONFIG_SPATIAL_SVC ,  0
+.set CONFIG_BITSTREAM_DEBUG ,  0
+.set CONFIG_MISMATCH_DEBUG ,  0
 .set CONFIG_FP_MB_STATS ,  0
 .set CONFIG_EMULATE_HARDWARE ,  0
+.set CONFIG_NON_GREEDY_MV ,  0
+.set CONFIG_RATE_CTRL ,  0
 .set DECODE_WIDTH_LIMIT ,  16384
 .set DECODE_HEIGHT_LIMIT ,  16384
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.c
index 56a5348..273c20e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=armv8-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs";
+static const char* const cfg = "--target=armv8-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.h
index c65638d..2c1933e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      inline
+#define VPX_ARCH_ARM 1
 #define ARCH_ARM 1
+#define VPX_ARCH_MIPS 0
 #define ARCH_MIPS 0
+#define VPX_ARCH_X86 0
 #define ARCH_X86 0
+#define VPX_ARCH_X86_64 0
 #define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 1
 #define HAVE_NEON_ASM 0
@@ -78,21 +83,25 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
 #define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 0
 #define CONFIG_BETTER_HW_COMPATIBILITY 0
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
-#define CONFIG_SPATIAL_SVC 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_dsp_rtcd.h
index 453e90e..9c31497 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/ios/arm64/vpx_dsp_rtcd.h
@@ -235,349 +235,349 @@
 #define vpx_convolve_copy vpx_convolve_copy_neon
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d135_predictor_16x16_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_neon
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d135_predictor_32x32_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_neon
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d135_predictor_4x4_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_neon
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d135_predictor_8x8_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_neon
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_16x16 vpx_d153_predictor_16x16_c
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_32x32 vpx_d153_predictor_32x32_c
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_4x4 vpx_d153_predictor_4x4_c
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_8x8 vpx_d153_predictor_8x8_c
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_16x16 vpx_d207_predictor_16x16_c
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_32x32 vpx_d207_predictor_32x32_c
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_c
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_8x8 vpx_d207_predictor_8x8_c
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_16x16_neon(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_d45_predictor_16x16 vpx_d45_predictor_16x16_neon
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_32x32_neon(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_d45_predictor_32x32 vpx_d45_predictor_32x32_neon
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_4x4_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_neon
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_8x8_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_neon
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_16x16 vpx_d63_predictor_16x16_c
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_32x32 vpx_d63_predictor_32x32_c
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_4x4 vpx_d63_predictor_4x4_c
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_8x8 vpx_d63_predictor_8x8_c
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_16x16_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_neon
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_32x32_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_neon
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_4x4_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_neon
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_8x8_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_neon
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_16x16_neon(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 #define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_neon
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_32x32_neon(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 #define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_neon
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_4x4_neon(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 #define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_neon
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_8x8_neon(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 #define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_neon
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_16x16_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_neon
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_32x32_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_neon
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_4x4_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_neon
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_8x8_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_neon
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_16x16_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_neon
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_32x32_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_neon
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_4x4_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_neon
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_8x8_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_neon
@@ -621,13 +621,13 @@
 #define vpx_fdct8x8_1 vpx_fdct8x8_1_neon
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
                        int* sum);
 void vpx_get16x16var_neon(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
@@ -635,23 +635,23 @@
 #define vpx_get16x16var vpx_get16x16var_neon
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 unsigned int vpx_get4x4sse_cs_neon(const unsigned char* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const unsigned char* ref_ptr,
                                    int ref_stride);
 #define vpx_get4x4sse_cs vpx_get4x4sse_cs_neon
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
                      int* sum);
 void vpx_get8x8var_neon(const uint8_t* src_ptr,
-                        int source_stride,
+                        int src_stride,
                         const uint8_t* ref_ptr,
                         int ref_stride,
                         unsigned int* sse,
@@ -662,41 +662,41 @@
 #define vpx_get_mb_ss vpx_get_mb_ss_c
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_16x16_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_h_predictor_16x16 vpx_h_predictor_16x16_neon
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_32x32_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_h_predictor_32x32 vpx_h_predictor_32x32_neon
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_4x4_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_h_predictor_4x4 vpx_h_predictor_4x4_neon
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_8x8_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_h_predictor_8x8 vpx_h_predictor_8x8_neon
@@ -723,7 +723,7 @@
 #define vpx_hadamard_8x8 vpx_hadamard_8x8_neon
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
@@ -996,12 +996,12 @@
                                   const uint8_t* thresh1);
 #define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_neon
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
                                  int flimit);
-void vpx_mbpost_proc_across_ip_neon(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_neon(unsigned char* src,
                                     int pitch,
                                     int rows,
                                     int cols,
@@ -1035,35 +1035,35 @@
 #define vpx_minmax_8x8 vpx_minmax_8x8_neon
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 unsigned int vpx_mse16x16_neon(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 #define vpx_mse16x16 vpx_mse16x16_neon
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse16x8 vpx_mse16x8_c
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse8x16 vpx_mse8x16_c
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 #define vpx_mse8x8 vpx_mse8x8_c
 
@@ -1180,12 +1180,12 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x16x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad16x16x4d vpx_sad16x16x4d_neon
@@ -1221,12 +1221,12 @@
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x32x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad16x32x4d vpx_sad16x32x4d_neon
@@ -1262,12 +1262,12 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad16x8x4d_neon(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 #define vpx_sad16x8x4d vpx_sad16x8x4d_neon
@@ -1303,12 +1303,12 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x16x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x16x4d vpx_sad32x16x4d_neon
@@ -1337,16 +1337,23 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x32x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x32x4d vpx_sad32x32x4d_neon
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad32x32x8 vpx_sad32x32x8_c
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -1371,12 +1378,12 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x64x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x64x4d vpx_sad32x64x4d_neon
@@ -1412,12 +1419,12 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x4x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad4x4x4d vpx_sad4x4x4d_neon
@@ -1453,12 +1460,12 @@
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x8x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad4x8x4d vpx_sad4x8x4d_neon
@@ -1487,12 +1494,12 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x32x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad64x32x4d vpx_sad64x32x4d_neon
@@ -1521,12 +1528,12 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x64x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad64x64x4d vpx_sad64x64x4d_neon
@@ -1562,12 +1569,12 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad8x16x4d_neon(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 #define vpx_sad8x16x4d vpx_sad8x16x4d_neon
@@ -1603,12 +1610,12 @@
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x4x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad8x4x4d vpx_sad8x4x4d_neon
@@ -1644,12 +1651,12 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x8x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad8x8x4d vpx_sad8x8x4d_neon
@@ -1755,17 +1762,17 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1773,17 +1780,17 @@
 #define vpx_sub_pixel_avg_variance16x16 vpx_sub_pixel_avg_variance16x16_neon
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1791,17 +1798,17 @@
 #define vpx_sub_pixel_avg_variance16x32 vpx_sub_pixel_avg_variance16x32_neon
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_neon(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
@@ -1809,17 +1816,17 @@
 #define vpx_sub_pixel_avg_variance16x8 vpx_sub_pixel_avg_variance16x8_neon
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1827,17 +1834,17 @@
 #define vpx_sub_pixel_avg_variance32x16 vpx_sub_pixel_avg_variance32x16_neon
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1845,17 +1852,17 @@
 #define vpx_sub_pixel_avg_variance32x32 vpx_sub_pixel_avg_variance32x32_neon
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1863,17 +1870,17 @@
 #define vpx_sub_pixel_avg_variance32x64 vpx_sub_pixel_avg_variance32x64_neon
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1881,17 +1888,17 @@
 #define vpx_sub_pixel_avg_variance4x4 vpx_sub_pixel_avg_variance4x4_neon
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1899,17 +1906,17 @@
 #define vpx_sub_pixel_avg_variance4x8 vpx_sub_pixel_avg_variance4x8_neon
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1917,17 +1924,17 @@
 #define vpx_sub_pixel_avg_variance64x32 vpx_sub_pixel_avg_variance64x32_neon
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1935,17 +1942,17 @@
 #define vpx_sub_pixel_avg_variance64x64 vpx_sub_pixel_avg_variance64x64_neon
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_neon(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
@@ -1953,17 +1960,17 @@
 #define vpx_sub_pixel_avg_variance8x16 vpx_sub_pixel_avg_variance8x16_neon
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1971,17 +1978,17 @@
 #define vpx_sub_pixel_avg_variance8x4 vpx_sub_pixel_avg_variance8x4_neon
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1989,208 +1996,208 @@
 #define vpx_sub_pixel_avg_variance8x8 vpx_sub_pixel_avg_variance8x8_neon
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance16x16 vpx_sub_pixel_variance16x16_neon
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance16x32 vpx_sub_pixel_variance16x32_neon
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_neon(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 #define vpx_sub_pixel_variance16x8 vpx_sub_pixel_variance16x8_neon
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance32x16 vpx_sub_pixel_variance32x16_neon
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance32x32 vpx_sub_pixel_variance32x32_neon
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance32x64 vpx_sub_pixel_variance32x64_neon
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 #define vpx_sub_pixel_variance4x4 vpx_sub_pixel_variance4x4_neon
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 #define vpx_sub_pixel_variance4x8 vpx_sub_pixel_variance4x8_neon
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance64x32 vpx_sub_pixel_variance64x32_neon
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance64x64 vpx_sub_pixel_variance64x64_neon
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_neon(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 #define vpx_sub_pixel_variance8x16 vpx_sub_pixel_variance8x16_neon
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 #define vpx_sub_pixel_variance8x4 vpx_sub_pixel_variance8x4_neon
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
@@ -2219,243 +2226,243 @@
 #define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_neon
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_16x16_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_neon
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_32x32_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_neon
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_4x4_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_neon
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_8x8_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_neon
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_16x16_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_v_predictor_16x16 vpx_v_predictor_16x16_neon
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_32x32_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_v_predictor_32x32 vpx_v_predictor_32x32_neon
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_4x4_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_v_predictor_4x4 vpx_v_predictor_4x4_neon
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_8x8_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_v_predictor_8x8 vpx_v_predictor_8x8_neon
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x16_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance16x16 vpx_variance16x16_neon
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x32_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance16x32 vpx_variance16x32_neon
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance16x8_neon(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 #define vpx_variance16x8 vpx_variance16x8_neon
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x16_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance32x16 vpx_variance32x16_neon
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x32_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance32x32 vpx_variance32x32_neon
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x64_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance32x64 vpx_variance32x64_neon
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x4_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance4x4 vpx_variance4x4_neon
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x8_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance4x8 vpx_variance4x8_neon
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x32_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance64x32 vpx_variance64x32_neon
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x64_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance64x64 vpx_variance64x64_neon
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance8x16_neon(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 #define vpx_variance8x16 vpx_variance8x16_neon
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x4_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance8x4 vpx_variance8x4_neon
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x8_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance8x8 vpx_variance8x8_neon
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vp8_rtcd.h
index ca242d0..80d71bb 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vp8_rtcd.h
@@ -27,88 +27,88 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-void vp8_bilinear_predict16x16_neon(unsigned char* src,
-                                    int src_pitch,
-                                    int xofst,
-                                    int yofst,
-                                    unsigned char* dst,
+void vp8_bilinear_predict16x16_neon(unsigned char* src_ptr,
+                                    int src_pixels_per_line,
+                                    int xoffset,
+                                    int yoffset,
+                                    unsigned char* dst_ptr,
                                     int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict16x16)(unsigned char* src,
-                                              int src_pitch,
-                                              int xofst,
-                                              int yofst,
-                                              unsigned char* dst,
+RTCD_EXTERN void (*vp8_bilinear_predict16x16)(unsigned char* src_ptr,
+                                              int src_pixels_per_line,
+                                              int xoffset,
+                                              int yoffset,
+                                              unsigned char* dst_ptr,
                                               int dst_pitch);
 
-void vp8_bilinear_predict4x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict4x4_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict4x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict4x4)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
+RTCD_EXTERN void (*vp8_bilinear_predict4x4)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
                                             int dst_pitch);
 
-void vp8_bilinear_predict8x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict8x4_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict8x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict8x4)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
+RTCD_EXTERN void (*vp8_bilinear_predict8x4)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
                                             int dst_pitch);
 
-void vp8_bilinear_predict8x8_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict8x8_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict8x8_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict8x8)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
+RTCD_EXTERN void (*vp8_bilinear_predict8x8)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
                                             int dst_pitch);
 
 void vp8_blend_b_c(unsigned char* y,
                    unsigned char* u,
                    unsigned char* v,
-                   int y1,
-                   int u1,
-                   int v1,
+                   int y_1,
+                   int u_1,
+                   int v_1,
                    int alpha,
                    int stride);
 #define vp8_blend_b vp8_blend_b_c
@@ -116,9 +116,9 @@
 void vp8_blend_mb_inner_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
@@ -126,9 +126,9 @@
 void vp8_blend_mb_outer_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
@@ -136,59 +136,66 @@
 int vp8_block_error_c(short* coeff, short* dqcoeff);
 #define vp8_block_error vp8_block_error_c
 
+void vp8_copy32xn_c(const unsigned char* src_ptr,
+                    int src_stride,
+                    unsigned char* dst_ptr,
+                    int dst_stride,
+                    int height);
+#define vp8_copy32xn vp8_copy32xn_c
+
 void vp8_copy_mem16x16_c(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 void vp8_copy_mem16x16_neon(unsigned char* src,
-                            int src_pitch,
-                            unsigned char* dst,
-                            int dst_pitch);
-RTCD_EXTERN void (*vp8_copy_mem16x16)(unsigned char* src,
-                                      int src_pitch,
-                                      unsigned char* dst,
-                                      int dst_pitch);
-
-void vp8_copy_mem8x4_c(unsigned char* src,
-                       int src_pitch,
-                       unsigned char* dst,
-                       int dst_pitch);
-void vp8_copy_mem8x4_neon(unsigned char* src,
-                          int src_pitch,
-                          unsigned char* dst,
-                          int dst_pitch);
-RTCD_EXTERN void (*vp8_copy_mem8x4)(unsigned char* src,
-                                    int src_pitch,
-                                    unsigned char* dst,
-                                    int dst_pitch);
-
-void vp8_copy_mem8x8_c(unsigned char* src,
-                       int src_pitch,
-                       unsigned char* dst,
-                       int dst_pitch);
-void vp8_copy_mem8x8_neon(unsigned char* src,
-                          int src_pitch,
-                          unsigned char* dst,
-                          int dst_pitch);
-RTCD_EXTERN void (*vp8_copy_mem8x8)(unsigned char* src,
-                                    int src_pitch,
-                                    unsigned char* dst,
-                                    int dst_pitch);
-
-void vp8_dc_only_idct_add_c(short input,
-                            unsigned char* pred,
-                            int pred_stride,
+                            int src_stride,
                             unsigned char* dst,
                             int dst_stride);
-void vp8_dc_only_idct_add_neon(short input,
-                               unsigned char* pred,
+RTCD_EXTERN void (*vp8_copy_mem16x16)(unsigned char* src,
+                                      int src_stride,
+                                      unsigned char* dst,
+                                      int dst_stride);
+
+void vp8_copy_mem8x4_c(unsigned char* src,
+                       int src_stride,
+                       unsigned char* dst,
+                       int dst_stride);
+void vp8_copy_mem8x4_neon(unsigned char* src,
+                          int src_stride,
+                          unsigned char* dst,
+                          int dst_stride);
+RTCD_EXTERN void (*vp8_copy_mem8x4)(unsigned char* src,
+                                    int src_stride,
+                                    unsigned char* dst,
+                                    int dst_stride);
+
+void vp8_copy_mem8x8_c(unsigned char* src,
+                       int src_stride,
+                       unsigned char* dst,
+                       int dst_stride);
+void vp8_copy_mem8x8_neon(unsigned char* src,
+                          int src_stride,
+                          unsigned char* dst,
+                          int dst_stride);
+RTCD_EXTERN void (*vp8_copy_mem8x8)(unsigned char* src,
+                                    int src_stride,
+                                    unsigned char* dst,
+                                    int dst_stride);
+
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
+                            int dst_stride);
+void vp8_dc_only_idct_add_neon(short input_dc,
+                               unsigned char* pred_ptr,
                                int pred_stride,
-                               unsigned char* dst,
+                               unsigned char* dst_ptr,
                                int dst_stride);
-RTCD_EXTERN void (*vp8_dc_only_idct_add)(short input,
-                                         unsigned char* pred,
+RTCD_EXTERN void (*vp8_dc_only_idct_add)(short input_dc,
+                                         unsigned char* pred_ptr,
                                          int pred_stride,
-                                         unsigned char* dst,
+                                         unsigned char* dst_ptr,
                                          int dst_stride);
 
 int vp8_denoiser_filter_c(unsigned char* mc_running_avg_y,
@@ -243,15 +250,15 @@
 
 void vp8_dequant_idct_add_c(short* input,
                             short* dq,
-                            unsigned char* output,
+                            unsigned char* dest,
                             int stride);
 void vp8_dequant_idct_add_neon(short* input,
                                short* dq,
-                               unsigned char* output,
+                               unsigned char* dest,
                                int stride);
 RTCD_EXTERN void (*vp8_dequant_idct_add)(short* input,
                                          short* dq,
-                                         unsigned char* output,
+                                         unsigned char* dest,
                                          int stride);
 
 void vp8_dequant_idct_add_uv_block_c(short* q,
@@ -289,9 +296,9 @@
                                                  int stride,
                                                  char* eobs);
 
-void vp8_dequantize_b_c(struct blockd*, short* dqc);
-void vp8_dequantize_b_neon(struct blockd*, short* dqc);
-RTCD_EXTERN void (*vp8_dequantize_b)(struct blockd*, short* dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
+void vp8_dequantize_b_neon(struct blockd*, short* DQC);
+RTCD_EXTERN void (*vp8_dequantize_b)(struct blockd*, short* DQC);
 
 int vp8_diamond_search_sad_c(struct macroblock* x,
                              struct block* b,
@@ -342,120 +349,120 @@
                           union int_mv* center_mv);
 #define vp8_full_search_sad vp8_full_search_sad_c
 
-void vp8_loop_filter_bh_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bh_neon(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bh_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
-RTCD_EXTERN void (*vp8_loop_filter_bh)(unsigned char* y,
-                                       unsigned char* u,
-                                       unsigned char* v,
-                                       int ystride,
+RTCD_EXTERN void (*vp8_loop_filter_bh)(unsigned char* y_ptr,
+                                       unsigned char* u_ptr,
+                                       unsigned char* v_ptr,
+                                       int y_stride,
                                        int uv_stride,
                                        struct loop_filter_info* lfi);
 
-void vp8_loop_filter_bv_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bv_neon(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bv_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
-RTCD_EXTERN void (*vp8_loop_filter_bv)(unsigned char* y,
-                                       unsigned char* u,
-                                       unsigned char* v,
-                                       int ystride,
+RTCD_EXTERN void (*vp8_loop_filter_bv)(unsigned char* y_ptr,
+                                       unsigned char* u_ptr,
+                                       unsigned char* v_ptr,
+                                       int y_stride,
                                        int uv_stride,
                                        struct loop_filter_info* lfi);
 
-void vp8_loop_filter_mbh_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbh_neon(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbh_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
-RTCD_EXTERN void (*vp8_loop_filter_mbh)(unsigned char* y,
-                                        unsigned char* u,
-                                        unsigned char* v,
-                                        int ystride,
+RTCD_EXTERN void (*vp8_loop_filter_mbh)(unsigned char* y_ptr,
+                                        unsigned char* u_ptr,
+                                        unsigned char* v_ptr,
+                                        int y_stride,
                                         int uv_stride,
                                         struct loop_filter_info* lfi);
 
-void vp8_loop_filter_mbv_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbv_neon(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbv_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
-RTCD_EXTERN void (*vp8_loop_filter_mbv)(unsigned char* y,
-                                        unsigned char* u,
-                                        unsigned char* v,
-                                        int ystride,
+RTCD_EXTERN void (*vp8_loop_filter_mbv)(unsigned char* y_ptr,
+                                        unsigned char* u_ptr,
+                                        unsigned char* v_ptr,
+                                        int y_stride,
                                         int uv_stride,
                                         struct loop_filter_info* lfi);
 
-void vp8_loop_filter_bhs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bhs_neon(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bhs_neon(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_bh)(unsigned char* y,
-                                              int ystride,
+RTCD_EXTERN void (*vp8_loop_filter_simple_bh)(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
 
-void vp8_loop_filter_bvs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bvs_neon(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bvs_neon(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_bv)(unsigned char* y,
-                                              int ystride,
+RTCD_EXTERN void (*vp8_loop_filter_simple_bv)(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y,
-                                              int ystride,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
-void vp8_loop_filter_mbhs_neon(unsigned char* y,
-                               int ystride,
+void vp8_loop_filter_mbhs_neon(unsigned char* y_ptr,
+                               int y_stride,
                                const unsigned char* blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_mbh)(unsigned char* y,
-                                               int ystride,
+RTCD_EXTERN void (*vp8_loop_filter_simple_mbh)(unsigned char* y_ptr,
+                                               int y_stride,
                                                const unsigned char* blimit);
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y,
-                                            int ystride,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
                                             const unsigned char* blimit);
-void vp8_loop_filter_mbvs_neon(unsigned char* y,
-                               int ystride,
+void vp8_loop_filter_mbvs_neon(unsigned char* y_ptr,
+                               int y_stride,
                                const unsigned char* blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_mbv)(unsigned char* y,
-                                               int ystride,
+RTCD_EXTERN void (*vp8_loop_filter_simple_mbv)(unsigned char* y_ptr,
+                                               int y_stride,
                                                const unsigned char* blimit);
 
 int vp8_mbblock_error_c(struct macroblock* mb, int dc);
@@ -468,8 +475,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -487,106 +494,106 @@
 RTCD_EXTERN void (*vp8_short_fdct8x4)(short* input, short* output, int pitch);
 
 void vp8_short_idct4x4llm_c(short* input,
-                            unsigned char* pred,
-                            int pitch,
-                            unsigned char* dst,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 void vp8_short_idct4x4llm_neon(short* input,
-                               unsigned char* pred,
-                               int pitch,
-                               unsigned char* dst,
+                               unsigned char* pred_ptr,
+                               int pred_stride,
+                               unsigned char* dst_ptr,
                                int dst_stride);
 RTCD_EXTERN void (*vp8_short_idct4x4llm)(short* input,
-                                         unsigned char* pred,
-                                         int pitch,
-                                         unsigned char* dst,
+                                         unsigned char* pred_ptr,
+                                         int pred_stride,
+                                         unsigned char* dst_ptr,
                                          int dst_stride);
 
-void vp8_short_inv_walsh4x4_c(short* input, short* output);
-void vp8_short_inv_walsh4x4_neon(short* input, short* output);
-RTCD_EXTERN void (*vp8_short_inv_walsh4x4)(short* input, short* output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
+void vp8_short_inv_walsh4x4_neon(short* input, short* mb_dqcoeff);
+RTCD_EXTERN void (*vp8_short_inv_walsh4x4)(short* input, short* mb_dqcoeff);
 
-void vp8_short_inv_walsh4x4_1_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
 void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
 void vp8_short_walsh4x4_neon(short* input, short* output, int pitch);
 RTCD_EXTERN void (*vp8_short_walsh4x4)(short* input, short* output, int pitch);
 
-void vp8_sixtap_predict16x16_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_sixtap_predict16x16_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_sixtap_predict16x16_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict16x16)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict16x16)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
                                             int dst_pitch);
 
-void vp8_sixtap_predict4x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict4x4_neon(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict4x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict4x4)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict4x4)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
-void vp8_sixtap_predict8x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x4_neon(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict8x4)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict8x4)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
-void vp8_sixtap_predict8x8_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x8_neon(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x8_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict8x8)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict8x8)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
 void vp8_rtcd(void);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vp9_rtcd.h
index 28fa6b5..f5fa140 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vp9_rtcd.h
@@ -86,46 +86,6 @@
                              const struct mv* center_mv);
 #define vp9_diamond_search_sad vp9_diamond_search_sad_c
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-void vp9_fdct8x8_quant_neon(const int16_t* input,
-                            int stride,
-                            tran_low_t* coeff_ptr,
-                            intptr_t n_coeffs,
-                            int skip_block,
-                            const int16_t* round_ptr,
-                            const int16_t* quant_ptr,
-                            tran_low_t* qcoeff_ptr,
-                            tran_low_t* dqcoeff_ptr,
-                            const int16_t* dequant_ptr,
-                            uint16_t* eob_ptr,
-                            const int16_t* scan,
-                            const int16_t* iscan);
-RTCD_EXTERN void (*vp9_fdct8x8_quant)(const int16_t* input,
-                                      int stride,
-                                      tran_low_t* coeff_ptr,
-                                      intptr_t n_coeffs,
-                                      int skip_block,
-                                      const int16_t* round_ptr,
-                                      const int16_t* quant_ptr,
-                                      tran_low_t* qcoeff_ptr,
-                                      tran_low_t* dqcoeff_ptr,
-                                      const int16_t* dequant_ptr,
-                                      uint16_t* eob_ptr,
-                                      const int16_t* scan,
-                                      const int16_t* iscan);
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -162,16 +122,16 @@
 #define vp9_fwht4x4 vp9_fwht4x4_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 void vp9_iht16x16_256_add_neon(const tran_low_t* input,
-                               uint8_t* output,
-                               int pitch,
+                               uint8_t* dest,
+                               int stride,
                                int tx_type);
 RTCD_EXTERN void (*vp9_iht16x16_256_add)(const tran_low_t* input,
-                                         uint8_t* output,
-                                         int pitch,
+                                         uint8_t* dest,
+                                         int stride,
                                          int tx_type);
 
 void vp9_iht4x4_16_add_c(const tran_low_t* input,
@@ -299,9 +259,6 @@
   vp9_denoiser_filter = vp9_denoiser_filter_c;
   if (flags & HAS_NEON)
     vp9_denoiser_filter = vp9_denoiser_filter_neon;
-  vp9_fdct8x8_quant = vp9_fdct8x8_quant_c;
-  if (flags & HAS_NEON)
-    vp9_fdct8x8_quant = vp9_fdct8x8_quant_neon;
   vp9_iht16x16_256_add = vp9_iht16x16_256_add_c;
   if (flags & HAS_NEON)
     vp9_iht16x16_256_add = vp9_iht16x16_256_add_neon;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.asm
index 0263863..db796ea 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.asm
@@ -1,10 +1,15 @@
 @ This file was created from a .asm file
 @  using the ads2gas.pl script.
-	.equ DO1STROUNDING, 0
+	.syntax unified
+.equ VPX_ARCH_ARM ,  1
 .equ ARCH_ARM ,  1
+.equ VPX_ARCH_MIPS ,  0
 .equ ARCH_MIPS ,  0
+.equ VPX_ARCH_X86 ,  0
 .equ ARCH_X86 ,  0
+.equ VPX_ARCH_X86_64 ,  0
 .equ ARCH_X86_64 ,  0
+.equ VPX_ARCH_PPC ,  0
 .equ ARCH_PPC ,  0
 .equ HAVE_NEON ,  1
 .equ HAVE_NEON_ASM ,  1
@@ -69,21 +74,25 @@
 .equ CONFIG_OS_SUPPORT ,  1
 .equ CONFIG_UNIT_TESTS ,  1
 .equ CONFIG_WEBM_IO ,  1
-.equ CONFIG_LIBYUV ,  1
+.equ CONFIG_LIBYUV ,  0
 .equ CONFIG_DECODE_PERF_TESTS ,  0
 .equ CONFIG_ENCODE_PERF_TESTS ,  0
 .equ CONFIG_MULTI_RES_ENCODING ,  1
 .equ CONFIG_TEMPORAL_DENOISING ,  1
 .equ CONFIG_VP9_TEMPORAL_DENOISING ,  1
+.equ CONFIG_CONSISTENT_RECODE ,  0
 .equ CONFIG_COEFFICIENT_RANGE_CHECKING ,  0
 .equ CONFIG_VP9_HIGHBITDEPTH ,  0
 .equ CONFIG_BETTER_HW_COMPATIBILITY ,  0
 .equ CONFIG_EXPERIMENTAL ,  0
 .equ CONFIG_SIZE_LIMIT ,  1
 .equ CONFIG_ALWAYS_ADJUST_BPM ,  0
-.equ CONFIG_SPATIAL_SVC ,  0
+.equ CONFIG_BITSTREAM_DEBUG ,  0
+.equ CONFIG_MISMATCH_DEBUG ,  0
 .equ CONFIG_FP_MB_STATS ,  0
 .equ CONFIG_EMULATE_HARDWARE ,  0
+.equ CONFIG_NON_GREEDY_MV ,  0
+.equ CONFIG_RATE_CTRL ,  0
 .equ DECODE_WIDTH_LIMIT ,  16384
 .equ DECODE_HEIGHT_LIMIT ,  16384
 	.section	.note.GNU-stack,"",%progbits
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.c
index c52697f..0c81021 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=armv7-linux-gcc --enable-runtime-cpu-detect --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs";
+static const char* const cfg = "--target=armv7-linux-gcc --enable-runtime-cpu-detect --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.h
index 40332e4..a5c309a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      inline
+#define VPX_ARCH_ARM 1
 #define ARCH_ARM 1
+#define VPX_ARCH_MIPS 0
 #define ARCH_MIPS 0
+#define VPX_ARCH_X86 0
 #define ARCH_X86 0
+#define VPX_ARCH_X86_64 0
 #define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 1
 #define HAVE_NEON_ASM 1
@@ -78,21 +83,25 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
 #define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 0
 #define CONFIG_BETTER_HW_COMPATIBILITY 0
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
-#define CONFIG_SPATIAL_SVC 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_dsp_rtcd.h
index aea7215..95df12a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-cpu-detect/vpx_dsp_rtcd.h
@@ -320,422 +320,422 @@
                                       int h);
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d135_predictor_16x16_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d135_predictor_16x16)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d135_predictor_32x32_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d135_predictor_32x32)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d135_predictor_4x4_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_d135_predictor_4x4)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d135_predictor_8x8_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_d135_predictor_8x8)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_16x16 vpx_d153_predictor_16x16_c
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_32x32 vpx_d153_predictor_32x32_c
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_4x4 vpx_d153_predictor_4x4_c
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_8x8 vpx_d153_predictor_8x8_c
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_16x16 vpx_d207_predictor_16x16_c
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_32x32 vpx_d207_predictor_32x32_c
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_c
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_8x8 vpx_d207_predictor_8x8_c
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_16x16_neon(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_16x16)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_32x32_neon(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_32x32)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_4x4_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_4x4)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_8x8_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_8x8)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_16x16 vpx_d63_predictor_16x16_c
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_32x32 vpx_d63_predictor_32x32_c
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_4x4 vpx_d63_predictor_4x4_c
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_8x8 vpx_d63_predictor_8x8_c
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_16x16_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_128_predictor_16x16)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
+                                               ptrdiff_t stride,
                                                const uint8_t* above,
                                                const uint8_t* left);
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_32x32_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_128_predictor_32x32)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
+                                               ptrdiff_t stride,
                                                const uint8_t* above,
                                                const uint8_t* left);
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_4x4_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_128_predictor_4x4)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_8x8_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_128_predictor_8x8)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_16x16_neon(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_left_predictor_16x16)(uint8_t* dst,
-                                                ptrdiff_t y_stride,
+                                                ptrdiff_t stride,
                                                 const uint8_t* above,
                                                 const uint8_t* left);
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_32x32_neon(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_left_predictor_32x32)(uint8_t* dst,
-                                                ptrdiff_t y_stride,
+                                                ptrdiff_t stride,
                                                 const uint8_t* above,
                                                 const uint8_t* left);
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_4x4_neon(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_left_predictor_4x4)(uint8_t* dst,
-                                              ptrdiff_t y_stride,
+                                              ptrdiff_t stride,
                                               const uint8_t* above,
                                               const uint8_t* left);
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_8x8_neon(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_left_predictor_8x8)(uint8_t* dst,
-                                              ptrdiff_t y_stride,
+                                              ptrdiff_t stride,
                                               const uint8_t* above,
                                               const uint8_t* left);
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_16x16_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_predictor_16x16)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_32x32_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_predictor_32x32)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_4x4_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_predictor_4x4)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint8_t* above,
                                          const uint8_t* left);
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_8x8_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_predictor_8x8)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint8_t* above,
                                          const uint8_t* left);
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_16x16_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_top_predictor_16x16)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
+                                               ptrdiff_t stride,
                                                const uint8_t* above,
                                                const uint8_t* left);
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_32x32_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_top_predictor_32x32)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
+                                               ptrdiff_t stride,
                                                const uint8_t* above,
                                                const uint8_t* left);
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_4x4_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_top_predictor_4x4)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_8x8_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_dc_top_predictor_8x8)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
@@ -796,51 +796,51 @@
                                   int stride);
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
                        int* sum);
 void vpx_get16x16var_neon(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
                           int* sum);
 RTCD_EXTERN void (*vpx_get16x16var)(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse,
                                     int* sum);
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 unsigned int vpx_get4x4sse_cs_neon(const unsigned char* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const unsigned char* ref_ptr,
                                    int ref_stride);
 RTCD_EXTERN unsigned int (*vpx_get4x4sse_cs)(const unsigned char* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const unsigned char* ref_ptr,
                                              int ref_stride);
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
                      int* sum);
 void vpx_get8x8var_neon(const uint8_t* src_ptr,
-                        int source_stride,
+                        int src_stride,
                         const uint8_t* ref_ptr,
                         int ref_stride,
                         unsigned int* sse,
                         int* sum);
 RTCD_EXTERN void (*vpx_get8x8var)(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse,
@@ -850,54 +850,54 @@
 #define vpx_get_mb_ss vpx_get_mb_ss_c
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_16x16_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 RTCD_EXTERN void (*vpx_h_predictor_16x16)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_32x32_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 RTCD_EXTERN void (*vpx_h_predictor_32x32)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_4x4_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 RTCD_EXTERN void (*vpx_h_predictor_4x4)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint8_t* above,
                                         const uint8_t* left);
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_8x8_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 RTCD_EXTERN void (*vpx_h_predictor_8x8)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint8_t* above,
                                         const uint8_t* left);
 
@@ -927,7 +927,7 @@
                                      int16_t* coeff);
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
@@ -1289,17 +1289,17 @@
                                             const uint8_t* limit1,
                                             const uint8_t* thresh1);
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
                                  int flimit);
-void vpx_mbpost_proc_across_ip_neon(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_neon(unsigned char* src,
                                     int pitch,
                                     int rows,
                                     int cols,
                                     int flimit);
-RTCD_EXTERN void (*vpx_mbpost_proc_across_ip)(unsigned char* dst,
+RTCD_EXTERN void (*vpx_mbpost_proc_across_ip)(unsigned char* src,
                                               int pitch,
                                               int rows,
                                               int cols,
@@ -1341,39 +1341,39 @@
                                    int* max);
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 unsigned int vpx_mse16x16_neon(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_mse16x16)(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse16x8 vpx_mse16x8_c
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse8x16 vpx_mse8x16_c
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 #define vpx_mse8x8 vpx_mse8x8_c
 
@@ -1526,17 +1526,17 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x16x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad16x16x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
@@ -1578,17 +1578,17 @@
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x32x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad16x32x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
@@ -1630,17 +1630,17 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad16x8x4d_neon(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad16x8x4d)(const uint8_t* src_ptr,
                                    int src_stride,
-                                   const uint8_t* const ref_ptr[],
+                                   const uint8_t* const ref_array[],
                                    int ref_stride,
                                    uint32_t* sad_array);
 
@@ -1682,17 +1682,17 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x16x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad32x16x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
@@ -1727,20 +1727,27 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x32x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad32x32x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad32x32x8 vpx_sad32x32x8_c
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -1772,17 +1779,17 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x64x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad32x64x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
@@ -1824,17 +1831,17 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x4x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad4x4x4d)(const uint8_t* src_ptr,
                                   int src_stride,
-                                  const uint8_t* const ref_ptr[],
+                                  const uint8_t* const ref_array[],
                                   int ref_stride,
                                   uint32_t* sad_array);
 
@@ -1876,17 +1883,17 @@
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x8x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad4x8x4d)(const uint8_t* src_ptr,
                                   int src_stride,
-                                  const uint8_t* const ref_ptr[],
+                                  const uint8_t* const ref_array[],
                                   int ref_stride,
                                   uint32_t* sad_array);
 
@@ -1921,17 +1928,17 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x32x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad64x32x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
@@ -1966,17 +1973,17 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x64x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad64x64x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
@@ -2018,17 +2025,17 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad8x16x4d_neon(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad8x16x4d)(const uint8_t* src_ptr,
                                    int src_stride,
-                                   const uint8_t* const ref_ptr[],
+                                   const uint8_t* const ref_array[],
                                    int ref_stride,
                                    uint32_t* sad_array);
 
@@ -2070,17 +2077,17 @@
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x4x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad8x4x4d)(const uint8_t* src_ptr,
                                   int src_stride,
-                                  const uint8_t* const ref_ptr[],
+                                  const uint8_t* const ref_array[],
                                   int ref_stride,
                                   uint32_t* sad_array);
 
@@ -2122,17 +2129,17 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x8x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad8x8x4d)(const uint8_t* src_ptr,
                                   int src_stride,
-                                  const uint8_t* const ref_ptr[],
+                                  const uint8_t* const ref_array[],
                                   int ref_stride,
                                   uint32_t* sad_array);
 
@@ -2247,625 +2254,625 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_neon(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x64)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance4x4)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance4x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance64x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance64x64)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_neon(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x4)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x16)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_neon(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x8)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x16)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x64)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance4x4)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance4x8)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance64x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance64x64)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_neon(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x16)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x4)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x8)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -2902,319 +2909,319 @@
                                                int size);
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_16x16_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_tm_predictor_16x16)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_32x32_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_tm_predictor_32x32)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_4x4_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 RTCD_EXTERN void (*vpx_tm_predictor_4x4)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint8_t* above,
                                          const uint8_t* left);
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_8x8_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 RTCD_EXTERN void (*vpx_tm_predictor_8x8)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint8_t* above,
                                          const uint8_t* left);
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_16x16_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 RTCD_EXTERN void (*vpx_v_predictor_16x16)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_32x32_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 RTCD_EXTERN void (*vpx_v_predictor_32x32)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_4x4_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 RTCD_EXTERN void (*vpx_v_predictor_4x4)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint8_t* above,
                                         const uint8_t* left);
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_8x8_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 RTCD_EXTERN void (*vpx_v_predictor_8x8)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint8_t* above,
                                         const uint8_t* left);
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x16_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x16)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x32_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance16x8_neon(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x8)(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x16_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x16)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x32_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x64_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x64)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x4_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance4x4)(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x8_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance4x8)(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x32_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance64x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x64_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance64x64)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance8x16_neon(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance8x16)(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x4_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance8x4)(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x8_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance8x8)(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vp8_rtcd.h
new file mode 100644
index 0000000..c4bb41b
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vp8_rtcd.h
@@ -0,0 +1,505 @@
+// This file is generated. Do not edit.
+#ifndef VP8_RTCD_H_
+#define VP8_RTCD_H_
+
+#ifdef RTCD_C
+#define RTCD_EXTERN
+#else
+#define RTCD_EXTERN extern
+#endif
+
+/*
+ * VP8
+ */
+
+struct blockd;
+struct macroblockd;
+struct loop_filter_info;
+
+/* Encoder forward decls */
+struct block;
+struct macroblock;
+struct variance_vtable;
+union int_mv;
+struct yv12_buffer_config;
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
+                                 int dst_pitch);
+void vp8_bilinear_predict16x16_neon(unsigned char* src_ptr,
+                                    int src_pixels_per_line,
+                                    int xoffset,
+                                    int yoffset,
+                                    unsigned char* dst_ptr,
+                                    int dst_pitch);
+#define vp8_bilinear_predict16x16 vp8_bilinear_predict16x16_neon
+
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict4x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_neon
+
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_neon
+
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x8_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict8x8 vp8_bilinear_predict8x8_neon
+
+void vp8_blend_b_c(unsigned char* y,
+                   unsigned char* u,
+                   unsigned char* v,
+                   int y_1,
+                   int u_1,
+                   int v_1,
+                   int alpha,
+                   int stride);
+#define vp8_blend_b vp8_blend_b_c
+
+void vp8_blend_mb_inner_c(unsigned char* y,
+                          unsigned char* u,
+                          unsigned char* v,
+                          int y_1,
+                          int u_1,
+                          int v_1,
+                          int alpha,
+                          int stride);
+#define vp8_blend_mb_inner vp8_blend_mb_inner_c
+
+void vp8_blend_mb_outer_c(unsigned char* y,
+                          unsigned char* u,
+                          unsigned char* v,
+                          int y_1,
+                          int u_1,
+                          int v_1,
+                          int alpha,
+                          int stride);
+#define vp8_blend_mb_outer vp8_blend_mb_outer_c
+
+int vp8_block_error_c(short* coeff, short* dqcoeff);
+#define vp8_block_error vp8_block_error_c
+
+void vp8_copy32xn_c(const unsigned char* src_ptr,
+                    int src_stride,
+                    unsigned char* dst_ptr,
+                    int dst_stride,
+                    int height);
+#define vp8_copy32xn vp8_copy32xn_c
+
+void vp8_copy_mem16x16_c(unsigned char* src,
+                         int src_stride,
+                         unsigned char* dst,
+                         int dst_stride);
+void vp8_copy_mem16x16_neon(unsigned char* src,
+                            int src_stride,
+                            unsigned char* dst,
+                            int dst_stride);
+#define vp8_copy_mem16x16 vp8_copy_mem16x16_neon
+
+void vp8_copy_mem8x4_c(unsigned char* src,
+                       int src_stride,
+                       unsigned char* dst,
+                       int dst_stride);
+void vp8_copy_mem8x4_neon(unsigned char* src,
+                          int src_stride,
+                          unsigned char* dst,
+                          int dst_stride);
+#define vp8_copy_mem8x4 vp8_copy_mem8x4_neon
+
+void vp8_copy_mem8x8_c(unsigned char* src,
+                       int src_stride,
+                       unsigned char* dst,
+                       int dst_stride);
+void vp8_copy_mem8x8_neon(unsigned char* src,
+                          int src_stride,
+                          unsigned char* dst,
+                          int dst_stride);
+#define vp8_copy_mem8x8 vp8_copy_mem8x8_neon
+
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
+                            int dst_stride);
+void vp8_dc_only_idct_add_neon(short input_dc,
+                               unsigned char* pred_ptr,
+                               int pred_stride,
+                               unsigned char* dst_ptr,
+                               int dst_stride);
+#define vp8_dc_only_idct_add vp8_dc_only_idct_add_neon
+
+int vp8_denoiser_filter_c(unsigned char* mc_running_avg_y,
+                          int mc_avg_y_stride,
+                          unsigned char* running_avg_y,
+                          int avg_y_stride,
+                          unsigned char* sig,
+                          int sig_stride,
+                          unsigned int motion_magnitude,
+                          int increase_denoising);
+int vp8_denoiser_filter_neon(unsigned char* mc_running_avg_y,
+                             int mc_avg_y_stride,
+                             unsigned char* running_avg_y,
+                             int avg_y_stride,
+                             unsigned char* sig,
+                             int sig_stride,
+                             unsigned int motion_magnitude,
+                             int increase_denoising);
+#define vp8_denoiser_filter vp8_denoiser_filter_neon
+
+int vp8_denoiser_filter_uv_c(unsigned char* mc_running_avg,
+                             int mc_avg_stride,
+                             unsigned char* running_avg,
+                             int avg_stride,
+                             unsigned char* sig,
+                             int sig_stride,
+                             unsigned int motion_magnitude,
+                             int increase_denoising);
+int vp8_denoiser_filter_uv_neon(unsigned char* mc_running_avg,
+                                int mc_avg_stride,
+                                unsigned char* running_avg,
+                                int avg_stride,
+                                unsigned char* sig,
+                                int sig_stride,
+                                unsigned int motion_magnitude,
+                                int increase_denoising);
+#define vp8_denoiser_filter_uv vp8_denoiser_filter_uv_neon
+
+void vp8_dequant_idct_add_c(short* input,
+                            short* dq,
+                            unsigned char* dest,
+                            int stride);
+void vp8_dequant_idct_add_neon(short* input,
+                               short* dq,
+                               unsigned char* dest,
+                               int stride);
+#define vp8_dequant_idct_add vp8_dequant_idct_add_neon
+
+void vp8_dequant_idct_add_uv_block_c(short* q,
+                                     short* dq,
+                                     unsigned char* dst_u,
+                                     unsigned char* dst_v,
+                                     int stride,
+                                     char* eobs);
+void vp8_dequant_idct_add_uv_block_neon(short* q,
+                                        short* dq,
+                                        unsigned char* dst_u,
+                                        unsigned char* dst_v,
+                                        int stride,
+                                        char* eobs);
+#define vp8_dequant_idct_add_uv_block vp8_dequant_idct_add_uv_block_neon
+
+void vp8_dequant_idct_add_y_block_c(short* q,
+                                    short* dq,
+                                    unsigned char* dst,
+                                    int stride,
+                                    char* eobs);
+void vp8_dequant_idct_add_y_block_neon(short* q,
+                                       short* dq,
+                                       unsigned char* dst,
+                                       int stride,
+                                       char* eobs);
+#define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_neon
+
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
+void vp8_dequantize_b_neon(struct blockd*, short* DQC);
+#define vp8_dequantize_b vp8_dequantize_b_neon
+
+int vp8_diamond_search_sad_c(struct macroblock* x,
+                             struct block* b,
+                             struct blockd* d,
+                             union int_mv* ref_mv,
+                             union int_mv* best_mv,
+                             int search_param,
+                             int sad_per_bit,
+                             int* num00,
+                             struct variance_vtable* fn_ptr,
+                             int* mvcost[2],
+                             union int_mv* center_mv);
+#define vp8_diamond_search_sad vp8_diamond_search_sad_c
+
+void vp8_fast_quantize_b_c(struct block*, struct blockd*);
+void vp8_fast_quantize_b_neon(struct block*, struct blockd*);
+#define vp8_fast_quantize_b vp8_fast_quantize_b_neon
+
+void vp8_filter_by_weight16x16_c(unsigned char* src,
+                                 int src_stride,
+                                 unsigned char* dst,
+                                 int dst_stride,
+                                 int src_weight);
+#define vp8_filter_by_weight16x16 vp8_filter_by_weight16x16_c
+
+void vp8_filter_by_weight4x4_c(unsigned char* src,
+                               int src_stride,
+                               unsigned char* dst,
+                               int dst_stride,
+                               int src_weight);
+#define vp8_filter_by_weight4x4 vp8_filter_by_weight4x4_c
+
+void vp8_filter_by_weight8x8_c(unsigned char* src,
+                               int src_stride,
+                               unsigned char* dst,
+                               int dst_stride,
+                               int src_weight);
+#define vp8_filter_by_weight8x8 vp8_filter_by_weight8x8_c
+
+int vp8_full_search_sad_c(struct macroblock* x,
+                          struct block* b,
+                          struct blockd* d,
+                          union int_mv* ref_mv,
+                          int sad_per_bit,
+                          int distance,
+                          struct variance_vtable* fn_ptr,
+                          int* mvcost[2],
+                          union int_mv* center_mv);
+#define vp8_full_search_sad vp8_full_search_sad_c
+
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
+                          int uv_stride,
+                          struct loop_filter_info* lfi);
+void vp8_loop_filter_bh_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
+                             int uv_stride,
+                             struct loop_filter_info* lfi);
+#define vp8_loop_filter_bh vp8_loop_filter_bh_neon
+
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
+                          int uv_stride,
+                          struct loop_filter_info* lfi);
+void vp8_loop_filter_bv_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
+                             int uv_stride,
+                             struct loop_filter_info* lfi);
+#define vp8_loop_filter_bv vp8_loop_filter_bv_neon
+
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
+                           int uv_stride,
+                           struct loop_filter_info* lfi);
+void vp8_loop_filter_mbh_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
+                              int uv_stride,
+                              struct loop_filter_info* lfi);
+#define vp8_loop_filter_mbh vp8_loop_filter_mbh_neon
+
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
+                           int uv_stride,
+                           struct loop_filter_info* lfi);
+void vp8_loop_filter_mbv_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
+                              int uv_stride,
+                              struct loop_filter_info* lfi);
+#define vp8_loop_filter_mbv vp8_loop_filter_mbv_neon
+
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
+                           const unsigned char* blimit);
+void vp8_loop_filter_bhs_neon(unsigned char* y_ptr,
+                              int y_stride,
+                              const unsigned char* blimit);
+#define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_neon
+
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
+                           const unsigned char* blimit);
+void vp8_loop_filter_bvs_neon(unsigned char* y_ptr,
+                              int y_stride,
+                              const unsigned char* blimit);
+#define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_neon
+
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
+                                              const unsigned char* blimit);
+void vp8_loop_filter_mbhs_neon(unsigned char* y_ptr,
+                               int y_stride,
+                               const unsigned char* blimit);
+#define vp8_loop_filter_simple_mbh vp8_loop_filter_mbhs_neon
+
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
+                                            const unsigned char* blimit);
+void vp8_loop_filter_mbvs_neon(unsigned char* y_ptr,
+                               int y_stride,
+                               const unsigned char* blimit);
+#define vp8_loop_filter_simple_mbv vp8_loop_filter_mbvs_neon
+
+int vp8_mbblock_error_c(struct macroblock* mb, int dc);
+#define vp8_mbblock_error vp8_mbblock_error_c
+
+int vp8_mbuverror_c(struct macroblock* mb);
+#define vp8_mbuverror vp8_mbuverror_c
+
+int vp8_refining_search_sad_c(struct macroblock* x,
+                              struct block* b,
+                              struct blockd* d,
+                              union int_mv* ref_mv,
+                              int error_per_bit,
+                              int search_range,
+                              struct variance_vtable* fn_ptr,
+                              int* mvcost[2],
+                              union int_mv* center_mv);
+#define vp8_refining_search_sad vp8_refining_search_sad_c
+
+void vp8_regular_quantize_b_c(struct block*, struct blockd*);
+#define vp8_regular_quantize_b vp8_regular_quantize_b_c
+
+void vp8_short_fdct4x4_c(short* input, short* output, int pitch);
+void vp8_short_fdct4x4_neon(short* input, short* output, int pitch);
+#define vp8_short_fdct4x4 vp8_short_fdct4x4_neon
+
+void vp8_short_fdct8x4_c(short* input, short* output, int pitch);
+void vp8_short_fdct8x4_neon(short* input, short* output, int pitch);
+#define vp8_short_fdct8x4 vp8_short_fdct8x4_neon
+
+void vp8_short_idct4x4llm_c(short* input,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
+                            int dst_stride);
+void vp8_short_idct4x4llm_neon(short* input,
+                               unsigned char* pred_ptr,
+                               int pred_stride,
+                               unsigned char* dst_ptr,
+                               int dst_stride);
+#define vp8_short_idct4x4llm vp8_short_idct4x4llm_neon
+
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
+void vp8_short_inv_walsh4x4_neon(short* input, short* mb_dqcoeff);
+#define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_neon
+
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
+#define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
+
+void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
+void vp8_short_walsh4x4_neon(short* input, short* output, int pitch);
+#define vp8_short_walsh4x4 vp8_short_walsh4x4_neon
+
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_sixtap_predict16x16_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_sixtap_predict16x16 vp8_sixtap_predict16x16_neon
+
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
+                             int dst_pitch);
+void vp8_sixtap_predict4x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
+                                int dst_pitch);
+#define vp8_sixtap_predict4x4 vp8_sixtap_predict4x4_neon
+
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
+                             int dst_pitch);
+void vp8_sixtap_predict8x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
+                                int dst_pitch);
+#define vp8_sixtap_predict8x4 vp8_sixtap_predict8x4_neon
+
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
+                             int dst_pitch);
+void vp8_sixtap_predict8x8_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
+                                int dst_pitch);
+#define vp8_sixtap_predict8x8 vp8_sixtap_predict8x8_neon
+
+void vp8_rtcd(void);
+
+#include "vpx_config.h"
+
+#ifdef RTCD_C
+#include "vpx_ports/arm.h"
+static void setup_rtcd_internal(void) {
+  int flags = arm_cpu_caps();
+
+  (void)flags;
+}
+#endif
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vp9_rtcd.h
new file mode 100644
index 0000000..ca740f4
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vp9_rtcd.h
@@ -0,0 +1,342 @@
+// This file is generated. Do not edit.
+#ifndef VP9_RTCD_H_
+#define VP9_RTCD_H_
+
+#ifdef RTCD_C
+#define RTCD_EXTERN
+#else
+#define RTCD_EXTERN extern
+#endif
+
+/*
+ * VP9
+ */
+
+#include "vp9/common/vp9_common.h"
+#include "vp9/common/vp9_enums.h"
+#include "vp9/common/vp9_filter.h"
+#include "vpx/vpx_integer.h"
+
+struct macroblockd;
+
+/* Encoder forward decls */
+struct macroblock;
+struct vp9_variance_vtable;
+struct search_site_config;
+struct mv;
+union int_mv;
+struct yv12_buffer_config;
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+int64_t vp9_block_error_c(const tran_low_t* coeff,
+                          const tran_low_t* dqcoeff,
+                          intptr_t block_size,
+                          int64_t* ssz);
+#define vp9_block_error vp9_block_error_c
+
+int64_t vp9_block_error_fp_c(const tran_low_t* coeff,
+                             const tran_low_t* dqcoeff,
+                             int block_size);
+#define vp9_block_error_fp vp9_block_error_fp_c
+
+int vp9_denoiser_filter_c(const uint8_t* sig,
+                          int sig_stride,
+                          const uint8_t* mc_avg,
+                          int mc_avg_stride,
+                          uint8_t* avg,
+                          int avg_stride,
+                          int increase_denoising,
+                          BLOCK_SIZE bs,
+                          int motion_magnitude);
+int vp9_denoiser_filter_neon(const uint8_t* sig,
+                             int sig_stride,
+                             const uint8_t* mc_avg,
+                             int mc_avg_stride,
+                             uint8_t* avg,
+                             int avg_stride,
+                             int increase_denoising,
+                             BLOCK_SIZE bs,
+                             int motion_magnitude);
+#define vp9_denoiser_filter vp9_denoiser_filter_neon
+
+int vp9_diamond_search_sad_c(const struct macroblock* x,
+                             const struct search_site_config* cfg,
+                             struct mv* ref_mv,
+                             struct mv* best_mv,
+                             int search_param,
+                             int sad_per_bit,
+                             int* num00,
+                             const struct vp9_variance_vtable* fn_ptr,
+                             const struct mv* center_mv);
+#define vp9_diamond_search_sad vp9_diamond_search_sad_c
+
+void vp9_fht16x16_c(const int16_t* input,
+                    tran_low_t* output,
+                    int stride,
+                    int tx_type);
+#define vp9_fht16x16 vp9_fht16x16_c
+
+void vp9_fht4x4_c(const int16_t* input,
+                  tran_low_t* output,
+                  int stride,
+                  int tx_type);
+#define vp9_fht4x4 vp9_fht4x4_c
+
+void vp9_fht8x8_c(const int16_t* input,
+                  tran_low_t* output,
+                  int stride,
+                  int tx_type);
+#define vp9_fht8x8 vp9_fht8x8_c
+
+void vp9_filter_by_weight16x16_c(const uint8_t* src,
+                                 int src_stride,
+                                 uint8_t* dst,
+                                 int dst_stride,
+                                 int src_weight);
+#define vp9_filter_by_weight16x16 vp9_filter_by_weight16x16_c
+
+void vp9_filter_by_weight8x8_c(const uint8_t* src,
+                               int src_stride,
+                               uint8_t* dst,
+                               int dst_stride,
+                               int src_weight);
+#define vp9_filter_by_weight8x8 vp9_filter_by_weight8x8_c
+
+void vp9_fwht4x4_c(const int16_t* input, tran_low_t* output, int stride);
+#define vp9_fwht4x4 vp9_fwht4x4_c
+
+int64_t vp9_highbd_block_error_c(const tran_low_t* coeff,
+                                 const tran_low_t* dqcoeff,
+                                 intptr_t block_size,
+                                 int64_t* ssz,
+                                 int bd);
+#define vp9_highbd_block_error vp9_highbd_block_error_c
+
+void vp9_highbd_fht16x16_c(const int16_t* input,
+                           tran_low_t* output,
+                           int stride,
+                           int tx_type);
+#define vp9_highbd_fht16x16 vp9_highbd_fht16x16_c
+
+void vp9_highbd_fht4x4_c(const int16_t* input,
+                         tran_low_t* output,
+                         int stride,
+                         int tx_type);
+#define vp9_highbd_fht4x4 vp9_highbd_fht4x4_c
+
+void vp9_highbd_fht8x8_c(const int16_t* input,
+                         tran_low_t* output,
+                         int stride,
+                         int tx_type);
+#define vp9_highbd_fht8x8 vp9_highbd_fht8x8_c
+
+void vp9_highbd_fwht4x4_c(const int16_t* input, tran_low_t* output, int stride);
+#define vp9_highbd_fwht4x4 vp9_highbd_fwht4x4_c
+
+void vp9_highbd_iht16x16_256_add_c(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int tx_type,
+                                   int bd);
+void vp9_highbd_iht16x16_256_add_neon(const tran_low_t* input,
+                                      uint16_t* dest,
+                                      int stride,
+                                      int tx_type,
+                                      int bd);
+#define vp9_highbd_iht16x16_256_add vp9_highbd_iht16x16_256_add_neon
+
+void vp9_highbd_iht4x4_16_add_c(const tran_low_t* input,
+                                uint16_t* dest,
+                                int stride,
+                                int tx_type,
+                                int bd);
+void vp9_highbd_iht4x4_16_add_neon(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int tx_type,
+                                   int bd);
+#define vp9_highbd_iht4x4_16_add vp9_highbd_iht4x4_16_add_neon
+
+void vp9_highbd_iht8x8_64_add_c(const tran_low_t* input,
+                                uint16_t* dest,
+                                int stride,
+                                int tx_type,
+                                int bd);
+void vp9_highbd_iht8x8_64_add_neon(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int tx_type,
+                                   int bd);
+#define vp9_highbd_iht8x8_64_add vp9_highbd_iht8x8_64_add_neon
+
+void vp9_highbd_mbpost_proc_across_ip_c(uint16_t* src,
+                                        int pitch,
+                                        int rows,
+                                        int cols,
+                                        int flimit);
+#define vp9_highbd_mbpost_proc_across_ip vp9_highbd_mbpost_proc_across_ip_c
+
+void vp9_highbd_mbpost_proc_down_c(uint16_t* dst,
+                                   int pitch,
+                                   int rows,
+                                   int cols,
+                                   int flimit);
+#define vp9_highbd_mbpost_proc_down vp9_highbd_mbpost_proc_down_c
+
+void vp9_highbd_post_proc_down_and_across_c(const uint16_t* src_ptr,
+                                            uint16_t* dst_ptr,
+                                            int src_pixels_per_line,
+                                            int dst_pixels_per_line,
+                                            int rows,
+                                            int cols,
+                                            int flimit);
+#define vp9_highbd_post_proc_down_and_across \
+  vp9_highbd_post_proc_down_and_across_c
+
+void vp9_highbd_quantize_fp_c(const tran_low_t* coeff_ptr,
+                              intptr_t n_coeffs,
+                              int skip_block,
+                              const int16_t* round_ptr,
+                              const int16_t* quant_ptr,
+                              tran_low_t* qcoeff_ptr,
+                              tran_low_t* dqcoeff_ptr,
+                              const int16_t* dequant_ptr,
+                              uint16_t* eob_ptr,
+                              const int16_t* scan,
+                              const int16_t* iscan);
+#define vp9_highbd_quantize_fp vp9_highbd_quantize_fp_c
+
+void vp9_highbd_quantize_fp_32x32_c(const tran_low_t* coeff_ptr,
+                                    intptr_t n_coeffs,
+                                    int skip_block,
+                                    const int16_t* round_ptr,
+                                    const int16_t* quant_ptr,
+                                    tran_low_t* qcoeff_ptr,
+                                    tran_low_t* dqcoeff_ptr,
+                                    const int16_t* dequant_ptr,
+                                    uint16_t* eob_ptr,
+                                    const int16_t* scan,
+                                    const int16_t* iscan);
+#define vp9_highbd_quantize_fp_32x32 vp9_highbd_quantize_fp_32x32_c
+
+void vp9_highbd_temporal_filter_apply_c(const uint8_t* frame1,
+                                        unsigned int stride,
+                                        const uint8_t* frame2,
+                                        unsigned int block_width,
+                                        unsigned int block_height,
+                                        int strength,
+                                        int* blk_fw,
+                                        int use_32x32,
+                                        uint32_t* accumulator,
+                                        uint16_t* count);
+#define vp9_highbd_temporal_filter_apply vp9_highbd_temporal_filter_apply_c
+
+void vp9_iht16x16_256_add_c(const tran_low_t* input,
+                            uint8_t* dest,
+                            int stride,
+                            int tx_type);
+void vp9_iht16x16_256_add_neon(const tran_low_t* input,
+                               uint8_t* dest,
+                               int stride,
+                               int tx_type);
+#define vp9_iht16x16_256_add vp9_iht16x16_256_add_neon
+
+void vp9_iht4x4_16_add_c(const tran_low_t* input,
+                         uint8_t* dest,
+                         int stride,
+                         int tx_type);
+void vp9_iht4x4_16_add_neon(const tran_low_t* input,
+                            uint8_t* dest,
+                            int stride,
+                            int tx_type);
+#define vp9_iht4x4_16_add vp9_iht4x4_16_add_neon
+
+void vp9_iht8x8_64_add_c(const tran_low_t* input,
+                         uint8_t* dest,
+                         int stride,
+                         int tx_type);
+void vp9_iht8x8_64_add_neon(const tran_low_t* input,
+                            uint8_t* dest,
+                            int stride,
+                            int tx_type);
+#define vp9_iht8x8_64_add vp9_iht8x8_64_add_neon
+
+void vp9_quantize_fp_c(const tran_low_t* coeff_ptr,
+                       intptr_t n_coeffs,
+                       int skip_block,
+                       const int16_t* round_ptr,
+                       const int16_t* quant_ptr,
+                       tran_low_t* qcoeff_ptr,
+                       tran_low_t* dqcoeff_ptr,
+                       const int16_t* dequant_ptr,
+                       uint16_t* eob_ptr,
+                       const int16_t* scan,
+                       const int16_t* iscan);
+void vp9_quantize_fp_neon(const tran_low_t* coeff_ptr,
+                          intptr_t n_coeffs,
+                          int skip_block,
+                          const int16_t* round_ptr,
+                          const int16_t* quant_ptr,
+                          tran_low_t* qcoeff_ptr,
+                          tran_low_t* dqcoeff_ptr,
+                          const int16_t* dequant_ptr,
+                          uint16_t* eob_ptr,
+                          const int16_t* scan,
+                          const int16_t* iscan);
+#define vp9_quantize_fp vp9_quantize_fp_neon
+
+void vp9_quantize_fp_32x32_c(const tran_low_t* coeff_ptr,
+                             intptr_t n_coeffs,
+                             int skip_block,
+                             const int16_t* round_ptr,
+                             const int16_t* quant_ptr,
+                             tran_low_t* qcoeff_ptr,
+                             tran_low_t* dqcoeff_ptr,
+                             const int16_t* dequant_ptr,
+                             uint16_t* eob_ptr,
+                             const int16_t* scan,
+                             const int16_t* iscan);
+void vp9_quantize_fp_32x32_neon(const tran_low_t* coeff_ptr,
+                                intptr_t n_coeffs,
+                                int skip_block,
+                                const int16_t* round_ptr,
+                                const int16_t* quant_ptr,
+                                tran_low_t* qcoeff_ptr,
+                                tran_low_t* dqcoeff_ptr,
+                                const int16_t* dequant_ptr,
+                                uint16_t* eob_ptr,
+                                const int16_t* scan,
+                                const int16_t* iscan);
+#define vp9_quantize_fp_32x32 vp9_quantize_fp_32x32_neon
+
+void vp9_scale_and_extend_frame_c(const struct yv12_buffer_config* src,
+                                  struct yv12_buffer_config* dst,
+                                  INTERP_FILTER filter_type,
+                                  int phase_scaler);
+void vp9_scale_and_extend_frame_neon(const struct yv12_buffer_config* src,
+                                     struct yv12_buffer_config* dst,
+                                     INTERP_FILTER filter_type,
+                                     int phase_scaler);
+#define vp9_scale_and_extend_frame vp9_scale_and_extend_frame_neon
+
+void vp9_rtcd(void);
+
+#include "vpx_config.h"
+
+#ifdef RTCD_C
+#include "vpx_ports/arm.h"
+static void setup_rtcd_internal(void) {
+  int flags = arm_cpu_caps();
+
+  (void)flags;
+}
+#endif
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_config.asm
new file mode 100644
index 0000000..88e5c40
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_config.asm
@@ -0,0 +1,98 @@
+@ This file was created from a .asm file
+@  using the ads2gas.pl script.
+	.syntax unified
+.equ VPX_ARCH_ARM ,  1
+.equ ARCH_ARM ,  1
+.equ VPX_ARCH_MIPS ,  0
+.equ ARCH_MIPS ,  0
+.equ VPX_ARCH_X86 ,  0
+.equ ARCH_X86 ,  0
+.equ VPX_ARCH_X86_64 ,  0
+.equ ARCH_X86_64 ,  0
+.equ VPX_ARCH_PPC ,  0
+.equ ARCH_PPC ,  0
+.equ HAVE_NEON ,  1
+.equ HAVE_NEON_ASM ,  1
+.equ HAVE_MIPS32 ,  0
+.equ HAVE_DSPR2 ,  0
+.equ HAVE_MSA ,  0
+.equ HAVE_MIPS64 ,  0
+.equ HAVE_MMX ,  0
+.equ HAVE_SSE ,  0
+.equ HAVE_SSE2 ,  0
+.equ HAVE_SSE3 ,  0
+.equ HAVE_SSSE3 ,  0
+.equ HAVE_SSE4_1 ,  0
+.equ HAVE_AVX ,  0
+.equ HAVE_AVX2 ,  0
+.equ HAVE_AVX512 ,  0
+.equ HAVE_VSX ,  0
+.equ HAVE_MMI ,  0
+.equ HAVE_VPX_PORTS ,  1
+.equ HAVE_PTHREAD_H ,  1
+.equ HAVE_UNISTD_H ,  0
+.equ CONFIG_DEPENDENCY_TRACKING ,  1
+.equ CONFIG_EXTERNAL_BUILD ,  1
+.equ CONFIG_INSTALL_DOCS ,  0
+.equ CONFIG_INSTALL_BINS ,  1
+.equ CONFIG_INSTALL_LIBS ,  1
+.equ CONFIG_INSTALL_SRCS ,  0
+.equ CONFIG_DEBUG ,  0
+.equ CONFIG_GPROF ,  0
+.equ CONFIG_GCOV ,  0
+.equ CONFIG_RVCT ,  0
+.equ CONFIG_GCC ,  1
+.equ CONFIG_MSVS ,  0
+.equ CONFIG_PIC ,  0
+.equ CONFIG_BIG_ENDIAN ,  0
+.equ CONFIG_CODEC_SRCS ,  0
+.equ CONFIG_DEBUG_LIBS ,  0
+.equ CONFIG_DEQUANT_TOKENS ,  0
+.equ CONFIG_DC_RECON ,  0
+.equ CONFIG_RUNTIME_CPU_DETECT ,  0
+.equ CONFIG_POSTPROC ,  1
+.equ CONFIG_VP9_POSTPROC ,  1
+.equ CONFIG_MULTITHREAD ,  1
+.equ CONFIG_INTERNAL_STATS ,  0
+.equ CONFIG_VP8_ENCODER ,  1
+.equ CONFIG_VP8_DECODER ,  1
+.equ CONFIG_VP9_ENCODER ,  1
+.equ CONFIG_VP9_DECODER ,  1
+.equ CONFIG_VP8 ,  1
+.equ CONFIG_VP9 ,  1
+.equ CONFIG_ENCODERS ,  1
+.equ CONFIG_DECODERS ,  1
+.equ CONFIG_STATIC_MSVCRT ,  0
+.equ CONFIG_SPATIAL_RESAMPLING ,  1
+.equ CONFIG_REALTIME_ONLY ,  1
+.equ CONFIG_ONTHEFLY_BITPACKING ,  0
+.equ CONFIG_ERROR_CONCEALMENT ,  0
+.equ CONFIG_SHARED ,  0
+.equ CONFIG_STATIC ,  1
+.equ CONFIG_SMALL ,  0
+.equ CONFIG_POSTPROC_VISUALIZER ,  0
+.equ CONFIG_OS_SUPPORT ,  1
+.equ CONFIG_UNIT_TESTS ,  1
+.equ CONFIG_WEBM_IO ,  1
+.equ CONFIG_LIBYUV ,  0
+.equ CONFIG_DECODE_PERF_TESTS ,  0
+.equ CONFIG_ENCODE_PERF_TESTS ,  0
+.equ CONFIG_MULTI_RES_ENCODING ,  1
+.equ CONFIG_TEMPORAL_DENOISING ,  1
+.equ CONFIG_VP9_TEMPORAL_DENOISING ,  1
+.equ CONFIG_CONSISTENT_RECODE ,  0
+.equ CONFIG_COEFFICIENT_RANGE_CHECKING ,  0
+.equ CONFIG_VP9_HIGHBITDEPTH ,  1
+.equ CONFIG_BETTER_HW_COMPATIBILITY ,  0
+.equ CONFIG_EXPERIMENTAL ,  0
+.equ CONFIG_SIZE_LIMIT ,  1
+.equ CONFIG_ALWAYS_ADJUST_BPM ,  0
+.equ CONFIG_BITSTREAM_DEBUG ,  0
+.equ CONFIG_MISMATCH_DEBUG ,  0
+.equ CONFIG_FP_MB_STATS ,  0
+.equ CONFIG_EMULATE_HARDWARE ,  0
+.equ CONFIG_NON_GREEDY_MV ,  0
+.equ CONFIG_RATE_CTRL ,  0
+.equ DECODE_WIDTH_LIMIT ,  16384
+.equ DECODE_HEIGHT_LIMIT ,  16384
+	.section	.note.GNU-stack,"",%progbits
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_config.c
new file mode 100644
index 0000000..86db589
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_config.c
@@ -0,0 +1,10 @@
+/* Copyright (c) 2011 The WebM project authors. All Rights Reserved. */
+/*  */
+/* Use of this source code is governed by a BSD-style license */
+/* that can be found in the LICENSE file in the root of the source */
+/* tree. An additional intellectual property rights grant can be found */
+/* in the file PATENTS.  All contributing project authors may */
+/* be found in the AUTHORS file in the root of the source tree. */
+#include "vpx/vpx_codec.h"
+static const char* const cfg = "--target=armv7-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv --enable-vp9-highbitdepth";
+const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_config.h
new file mode 100644
index 0000000..75556cc
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_config.h
@@ -0,0 +1,107 @@
+/* Copyright (c) 2011 The WebM project authors. All Rights Reserved. */
+/*  */
+/* Use of this source code is governed by a BSD-style license */
+/* that can be found in the LICENSE file in the root of the source */
+/* tree. An additional intellectual property rights grant can be found */
+/* in the file PATENTS.  All contributing project authors may */
+/* be found in the AUTHORS file in the root of the source tree. */
+/* This file automatically generated by configure. Do not edit! */
+#ifndef VPX_CONFIG_H
+#define VPX_CONFIG_H
+#define RESTRICT    
+#define INLINE      inline
+#define VPX_ARCH_ARM 1
+#define ARCH_ARM 1
+#define VPX_ARCH_MIPS 0
+#define ARCH_MIPS 0
+#define VPX_ARCH_X86 0
+#define ARCH_X86 0
+#define VPX_ARCH_X86_64 0
+#define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
+#define ARCH_PPC 0
+#define HAVE_NEON 1
+#define HAVE_NEON_ASM 1
+#define HAVE_MIPS32 0
+#define HAVE_DSPR2 0
+#define HAVE_MSA 0
+#define HAVE_MIPS64 0
+#define HAVE_MMX 0
+#define HAVE_SSE 0
+#define HAVE_SSE2 0
+#define HAVE_SSE3 0
+#define HAVE_SSSE3 0
+#define HAVE_SSE4_1 0
+#define HAVE_AVX 0
+#define HAVE_AVX2 0
+#define HAVE_AVX512 0
+#define HAVE_VSX 0
+#define HAVE_MMI 0
+#define HAVE_VPX_PORTS 1
+#define HAVE_PTHREAD_H 1
+#define HAVE_UNISTD_H 0
+#define CONFIG_DEPENDENCY_TRACKING 1
+#define CONFIG_EXTERNAL_BUILD 1
+#define CONFIG_INSTALL_DOCS 0
+#define CONFIG_INSTALL_BINS 1
+#define CONFIG_INSTALL_LIBS 1
+#define CONFIG_INSTALL_SRCS 0
+#define CONFIG_DEBUG 0
+#define CONFIG_GPROF 0
+#define CONFIG_GCOV 0
+#define CONFIG_RVCT 0
+#define CONFIG_GCC 1
+#define CONFIG_MSVS 0
+#define CONFIG_PIC 0
+#define CONFIG_BIG_ENDIAN 0
+#define CONFIG_CODEC_SRCS 0
+#define CONFIG_DEBUG_LIBS 0
+#define CONFIG_DEQUANT_TOKENS 0
+#define CONFIG_DC_RECON 0
+#define CONFIG_RUNTIME_CPU_DETECT 0
+#define CONFIG_POSTPROC 1
+#define CONFIG_VP9_POSTPROC 1
+#define CONFIG_MULTITHREAD 1
+#define CONFIG_INTERNAL_STATS 0
+#define CONFIG_VP8_ENCODER 1
+#define CONFIG_VP8_DECODER 1
+#define CONFIG_VP9_ENCODER 1
+#define CONFIG_VP9_DECODER 1
+#define CONFIG_VP8 1
+#define CONFIG_VP9 1
+#define CONFIG_ENCODERS 1
+#define CONFIG_DECODERS 1
+#define CONFIG_STATIC_MSVCRT 0
+#define CONFIG_SPATIAL_RESAMPLING 1
+#define CONFIG_REALTIME_ONLY 1
+#define CONFIG_ONTHEFLY_BITPACKING 0
+#define CONFIG_ERROR_CONCEALMENT 0
+#define CONFIG_SHARED 0
+#define CONFIG_STATIC 1
+#define CONFIG_SMALL 0
+#define CONFIG_POSTPROC_VISUALIZER 0
+#define CONFIG_OS_SUPPORT 1
+#define CONFIG_UNIT_TESTS 1
+#define CONFIG_WEBM_IO 1
+#define CONFIG_LIBYUV 0
+#define CONFIG_DECODE_PERF_TESTS 0
+#define CONFIG_ENCODE_PERF_TESTS 0
+#define CONFIG_MULTI_RES_ENCODING 1
+#define CONFIG_TEMPORAL_DENOISING 1
+#define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
+#define CONFIG_COEFFICIENT_RANGE_CHECKING 0
+#define CONFIG_VP9_HIGHBITDEPTH 1
+#define CONFIG_BETTER_HW_COMPATIBILITY 0
+#define CONFIG_EXPERIMENTAL 0
+#define CONFIG_SIZE_LIMIT 1
+#define CONFIG_ALWAYS_ADJUST_BPM 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
+#define CONFIG_FP_MB_STATS 0
+#define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
+#define DECODE_WIDTH_LIMIT 16384
+#define DECODE_HEIGHT_LIMIT 16384
+#endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_dsp_rtcd.h
new file mode 100644
index 0000000..71eb333
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_dsp_rtcd.h
@@ -0,0 +1,5191 @@
+// This file is generated. Do not edit.
+#ifndef VPX_DSP_RTCD_H_
+#define VPX_DSP_RTCD_H_
+
+#ifdef RTCD_C
+#define RTCD_EXTERN
+#else
+#define RTCD_EXTERN extern
+#endif
+
+/*
+ * DSP
+ */
+
+#include "vpx/vpx_integer.h"
+#include "vpx_dsp/vpx_dsp_common.h"
+#include "vpx_dsp/vpx_filter.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+unsigned int vpx_avg_4x4_c(const uint8_t*, int p);
+unsigned int vpx_avg_4x4_neon(const uint8_t*, int p);
+#define vpx_avg_4x4 vpx_avg_4x4_neon
+
+unsigned int vpx_avg_8x8_c(const uint8_t*, int p);
+unsigned int vpx_avg_8x8_neon(const uint8_t*, int p);
+#define vpx_avg_8x8 vpx_avg_8x8_neon
+
+void vpx_comp_avg_pred_c(uint8_t* comp_pred,
+                         const uint8_t* pred,
+                         int width,
+                         int height,
+                         const uint8_t* ref,
+                         int ref_stride);
+void vpx_comp_avg_pred_neon(uint8_t* comp_pred,
+                            const uint8_t* pred,
+                            int width,
+                            int height,
+                            const uint8_t* ref,
+                            int ref_stride);
+#define vpx_comp_avg_pred vpx_comp_avg_pred_neon
+
+void vpx_convolve8_c(const uint8_t* src,
+                     ptrdiff_t src_stride,
+                     uint8_t* dst,
+                     ptrdiff_t dst_stride,
+                     const InterpKernel* filter,
+                     int x0_q4,
+                     int x_step_q4,
+                     int y0_q4,
+                     int y_step_q4,
+                     int w,
+                     int h);
+void vpx_convolve8_neon(const uint8_t* src,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        ptrdiff_t dst_stride,
+                        const InterpKernel* filter,
+                        int x0_q4,
+                        int x_step_q4,
+                        int y0_q4,
+                        int y_step_q4,
+                        int w,
+                        int h);
+#define vpx_convolve8 vpx_convolve8_neon
+
+void vpx_convolve8_avg_c(const uint8_t* src,
+                         ptrdiff_t src_stride,
+                         uint8_t* dst,
+                         ptrdiff_t dst_stride,
+                         const InterpKernel* filter,
+                         int x0_q4,
+                         int x_step_q4,
+                         int y0_q4,
+                         int y_step_q4,
+                         int w,
+                         int h);
+void vpx_convolve8_avg_neon(const uint8_t* src,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst,
+                            ptrdiff_t dst_stride,
+                            const InterpKernel* filter,
+                            int x0_q4,
+                            int x_step_q4,
+                            int y0_q4,
+                            int y_step_q4,
+                            int w,
+                            int h);
+#define vpx_convolve8_avg vpx_convolve8_avg_neon
+
+void vpx_convolve8_avg_horiz_c(const uint8_t* src,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst,
+                               ptrdiff_t dst_stride,
+                               const InterpKernel* filter,
+                               int x0_q4,
+                               int x_step_q4,
+                               int y0_q4,
+                               int y_step_q4,
+                               int w,
+                               int h);
+void vpx_convolve8_avg_horiz_neon(const uint8_t* src,
+                                  ptrdiff_t src_stride,
+                                  uint8_t* dst,
+                                  ptrdiff_t dst_stride,
+                                  const InterpKernel* filter,
+                                  int x0_q4,
+                                  int x_step_q4,
+                                  int y0_q4,
+                                  int y_step_q4,
+                                  int w,
+                                  int h);
+#define vpx_convolve8_avg_horiz vpx_convolve8_avg_horiz_neon
+
+void vpx_convolve8_avg_vert_c(const uint8_t* src,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst,
+                              ptrdiff_t dst_stride,
+                              const InterpKernel* filter,
+                              int x0_q4,
+                              int x_step_q4,
+                              int y0_q4,
+                              int y_step_q4,
+                              int w,
+                              int h);
+void vpx_convolve8_avg_vert_neon(const uint8_t* src,
+                                 ptrdiff_t src_stride,
+                                 uint8_t* dst,
+                                 ptrdiff_t dst_stride,
+                                 const InterpKernel* filter,
+                                 int x0_q4,
+                                 int x_step_q4,
+                                 int y0_q4,
+                                 int y_step_q4,
+                                 int w,
+                                 int h);
+#define vpx_convolve8_avg_vert vpx_convolve8_avg_vert_neon
+
+void vpx_convolve8_horiz_c(const uint8_t* src,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst,
+                           ptrdiff_t dst_stride,
+                           const InterpKernel* filter,
+                           int x0_q4,
+                           int x_step_q4,
+                           int y0_q4,
+                           int y_step_q4,
+                           int w,
+                           int h);
+void vpx_convolve8_horiz_neon(const uint8_t* src,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst,
+                              ptrdiff_t dst_stride,
+                              const InterpKernel* filter,
+                              int x0_q4,
+                              int x_step_q4,
+                              int y0_q4,
+                              int y_step_q4,
+                              int w,
+                              int h);
+#define vpx_convolve8_horiz vpx_convolve8_horiz_neon
+
+void vpx_convolve8_vert_c(const uint8_t* src,
+                          ptrdiff_t src_stride,
+                          uint8_t* dst,
+                          ptrdiff_t dst_stride,
+                          const InterpKernel* filter,
+                          int x0_q4,
+                          int x_step_q4,
+                          int y0_q4,
+                          int y_step_q4,
+                          int w,
+                          int h);
+void vpx_convolve8_vert_neon(const uint8_t* src,
+                             ptrdiff_t src_stride,
+                             uint8_t* dst,
+                             ptrdiff_t dst_stride,
+                             const InterpKernel* filter,
+                             int x0_q4,
+                             int x_step_q4,
+                             int y0_q4,
+                             int y_step_q4,
+                             int w,
+                             int h);
+#define vpx_convolve8_vert vpx_convolve8_vert_neon
+
+void vpx_convolve_avg_c(const uint8_t* src,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        ptrdiff_t dst_stride,
+                        const InterpKernel* filter,
+                        int x0_q4,
+                        int x_step_q4,
+                        int y0_q4,
+                        int y_step_q4,
+                        int w,
+                        int h);
+void vpx_convolve_avg_neon(const uint8_t* src,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst,
+                           ptrdiff_t dst_stride,
+                           const InterpKernel* filter,
+                           int x0_q4,
+                           int x_step_q4,
+                           int y0_q4,
+                           int y_step_q4,
+                           int w,
+                           int h);
+#define vpx_convolve_avg vpx_convolve_avg_neon
+
+void vpx_convolve_copy_c(const uint8_t* src,
+                         ptrdiff_t src_stride,
+                         uint8_t* dst,
+                         ptrdiff_t dst_stride,
+                         const InterpKernel* filter,
+                         int x0_q4,
+                         int x_step_q4,
+                         int y0_q4,
+                         int y_step_q4,
+                         int w,
+                         int h);
+void vpx_convolve_copy_neon(const uint8_t* src,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst,
+                            ptrdiff_t dst_stride,
+                            const InterpKernel* filter,
+                            int x0_q4,
+                            int x_step_q4,
+                            int y0_q4,
+                            int y_step_q4,
+                            int w,
+                            int h);
+#define vpx_convolve_copy vpx_convolve_copy_neon
+
+void vpx_d117_predictor_16x16_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
+
+void vpx_d117_predictor_32x32_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
+
+void vpx_d117_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
+
+void vpx_d117_predictor_8x8_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
+
+void vpx_d135_predictor_16x16_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_d135_predictor_16x16_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_neon
+
+void vpx_d135_predictor_32x32_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_d135_predictor_32x32_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_neon
+
+void vpx_d135_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_d135_predictor_4x4_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_neon
+
+void vpx_d135_predictor_8x8_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_d135_predictor_8x8_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_neon
+
+void vpx_d153_predictor_16x16_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d153_predictor_16x16 vpx_d153_predictor_16x16_c
+
+void vpx_d153_predictor_32x32_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d153_predictor_32x32 vpx_d153_predictor_32x32_c
+
+void vpx_d153_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d153_predictor_4x4 vpx_d153_predictor_4x4_c
+
+void vpx_d153_predictor_8x8_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d153_predictor_8x8 vpx_d153_predictor_8x8_c
+
+void vpx_d207_predictor_16x16_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d207_predictor_16x16 vpx_d207_predictor_16x16_c
+
+void vpx_d207_predictor_32x32_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d207_predictor_32x32 vpx_d207_predictor_32x32_c
+
+void vpx_d207_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_c
+
+void vpx_d207_predictor_8x8_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d207_predictor_8x8 vpx_d207_predictor_8x8_c
+
+void vpx_d45_predictor_16x16_c(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+void vpx_d45_predictor_16x16_neon(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+#define vpx_d45_predictor_16x16 vpx_d45_predictor_16x16_neon
+
+void vpx_d45_predictor_32x32_c(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+void vpx_d45_predictor_32x32_neon(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+#define vpx_d45_predictor_32x32 vpx_d45_predictor_32x32_neon
+
+void vpx_d45_predictor_4x4_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_d45_predictor_4x4_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_neon
+
+void vpx_d45_predictor_8x8_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_d45_predictor_8x8_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_neon
+
+void vpx_d45e_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
+
+void vpx_d63_predictor_16x16_c(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_d63_predictor_16x16 vpx_d63_predictor_16x16_c
+
+void vpx_d63_predictor_32x32_c(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_d63_predictor_32x32 vpx_d63_predictor_32x32_c
+
+void vpx_d63_predictor_4x4_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+#define vpx_d63_predictor_4x4 vpx_d63_predictor_4x4_c
+
+void vpx_d63_predictor_8x8_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+#define vpx_d63_predictor_8x8 vpx_d63_predictor_8x8_c
+
+void vpx_d63e_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
+
+void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+void vpx_dc_128_predictor_16x16_neon(uint8_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint8_t* above,
+                                     const uint8_t* left);
+#define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_neon
+
+void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+void vpx_dc_128_predictor_32x32_neon(uint8_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint8_t* above,
+                                     const uint8_t* left);
+#define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_neon
+
+void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_dc_128_predictor_4x4_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_neon
+
+void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_dc_128_predictor_8x8_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_neon
+
+void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+void vpx_dc_left_predictor_16x16_neon(uint8_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint8_t* above,
+                                      const uint8_t* left);
+#define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_neon
+
+void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+void vpx_dc_left_predictor_32x32_neon(uint8_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint8_t* above,
+                                      const uint8_t* left);
+#define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_neon
+
+void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+void vpx_dc_left_predictor_4x4_neon(uint8_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint8_t* above,
+                                    const uint8_t* left);
+#define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_neon
+
+void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+void vpx_dc_left_predictor_8x8_neon(uint8_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint8_t* above,
+                                    const uint8_t* left);
+#define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_neon
+
+void vpx_dc_predictor_16x16_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_dc_predictor_16x16_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_neon
+
+void vpx_dc_predictor_32x32_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_dc_predictor_32x32_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_neon
+
+void vpx_dc_predictor_4x4_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+void vpx_dc_predictor_4x4_neon(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_neon
+
+void vpx_dc_predictor_8x8_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+void vpx_dc_predictor_8x8_neon(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_neon
+
+void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+void vpx_dc_top_predictor_16x16_neon(uint8_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint8_t* above,
+                                     const uint8_t* left);
+#define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_neon
+
+void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+void vpx_dc_top_predictor_32x32_neon(uint8_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint8_t* above,
+                                     const uint8_t* left);
+#define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_neon
+
+void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_dc_top_predictor_4x4_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_neon
+
+void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_dc_top_predictor_8x8_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_neon
+
+void vpx_fdct16x16_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct16x16_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct16x16 vpx_fdct16x16_neon
+
+void vpx_fdct16x16_1_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct16x16_1_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct16x16_1 vpx_fdct16x16_1_neon
+
+void vpx_fdct32x32_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct32x32_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct32x32 vpx_fdct32x32_neon
+
+void vpx_fdct32x32_1_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct32x32_1_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct32x32_1 vpx_fdct32x32_1_neon
+
+void vpx_fdct32x32_rd_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct32x32_rd_neon(const int16_t* input,
+                           tran_low_t* output,
+                           int stride);
+#define vpx_fdct32x32_rd vpx_fdct32x32_rd_neon
+
+void vpx_fdct4x4_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct4x4_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct4x4 vpx_fdct4x4_neon
+
+void vpx_fdct4x4_1_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct4x4_1_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct4x4_1 vpx_fdct4x4_1_neon
+
+void vpx_fdct8x8_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct8x8_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct8x8 vpx_fdct8x8_neon
+
+void vpx_fdct8x8_1_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct8x8_1_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct8x8_1 vpx_fdct8x8_1_neon
+
+void vpx_get16x16var_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* ref_ptr,
+                       int ref_stride,
+                       unsigned int* sse,
+                       int* sum);
+void vpx_get16x16var_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride,
+                          unsigned int* sse,
+                          int* sum);
+#define vpx_get16x16var vpx_get16x16var_neon
+
+unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
+                                int src_stride,
+                                const unsigned char* ref_ptr,
+                                int ref_stride);
+unsigned int vpx_get4x4sse_cs_neon(const unsigned char* src_ptr,
+                                   int src_stride,
+                                   const unsigned char* ref_ptr,
+                                   int ref_stride);
+#define vpx_get4x4sse_cs vpx_get4x4sse_cs_neon
+
+void vpx_get8x8var_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* ref_ptr,
+                     int ref_stride,
+                     unsigned int* sse,
+                     int* sum);
+void vpx_get8x8var_neon(const uint8_t* src_ptr,
+                        int src_stride,
+                        const uint8_t* ref_ptr,
+                        int ref_stride,
+                        unsigned int* sse,
+                        int* sum);
+#define vpx_get8x8var vpx_get8x8var_neon
+
+unsigned int vpx_get_mb_ss_c(const int16_t*);
+#define vpx_get_mb_ss vpx_get_mb_ss_c
+
+void vpx_h_predictor_16x16_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_h_predictor_16x16_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_h_predictor_16x16 vpx_h_predictor_16x16_neon
+
+void vpx_h_predictor_32x32_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_h_predictor_32x32_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_h_predictor_32x32 vpx_h_predictor_32x32_neon
+
+void vpx_h_predictor_4x4_c(uint8_t* dst,
+                           ptrdiff_t stride,
+                           const uint8_t* above,
+                           const uint8_t* left);
+void vpx_h_predictor_4x4_neon(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_h_predictor_4x4 vpx_h_predictor_4x4_neon
+
+void vpx_h_predictor_8x8_c(uint8_t* dst,
+                           ptrdiff_t stride,
+                           const uint8_t* above,
+                           const uint8_t* left);
+void vpx_h_predictor_8x8_neon(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_h_predictor_8x8 vpx_h_predictor_8x8_neon
+
+void vpx_hadamard_16x16_c(const int16_t* src_diff,
+                          ptrdiff_t src_stride,
+                          tran_low_t* coeff);
+void vpx_hadamard_16x16_neon(const int16_t* src_diff,
+                             ptrdiff_t src_stride,
+                             tran_low_t* coeff);
+#define vpx_hadamard_16x16 vpx_hadamard_16x16_neon
+
+void vpx_hadamard_32x32_c(const int16_t* src_diff,
+                          ptrdiff_t src_stride,
+                          tran_low_t* coeff);
+#define vpx_hadamard_32x32 vpx_hadamard_32x32_c
+
+void vpx_hadamard_8x8_c(const int16_t* src_diff,
+                        ptrdiff_t src_stride,
+                        tran_low_t* coeff);
+void vpx_hadamard_8x8_neon(const int16_t* src_diff,
+                           ptrdiff_t src_stride,
+                           tran_low_t* coeff);
+#define vpx_hadamard_8x8 vpx_hadamard_8x8_neon
+
+void vpx_he_predictor_4x4_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+#define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
+
+void vpx_highbd_10_get16x16var_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse,
+                                 int* sum);
+#define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_c
+
+void vpx_highbd_10_get8x8var_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse,
+                               int* sum);
+#define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_c
+
+unsigned int vpx_highbd_10_mse16x16_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      unsigned int* sse);
+#define vpx_highbd_10_mse16x16 vpx_highbd_10_mse16x16_c
+
+unsigned int vpx_highbd_10_mse16x8_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     unsigned int* sse);
+#define vpx_highbd_10_mse16x8 vpx_highbd_10_mse16x8_c
+
+unsigned int vpx_highbd_10_mse8x16_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     unsigned int* sse);
+#define vpx_highbd_10_mse8x16 vpx_highbd_10_mse8x16_c
+
+unsigned int vpx_highbd_10_mse8x8_c(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_highbd_10_mse8x8 vpx_highbd_10_mse8x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x16 \
+  vpx_highbd_10_sub_pixel_avg_variance16x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x32 \
+  vpx_highbd_10_sub_pixel_avg_variance16x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x8 \
+  vpx_highbd_10_sub_pixel_avg_variance16x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x16 \
+  vpx_highbd_10_sub_pixel_avg_variance32x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x32 \
+  vpx_highbd_10_sub_pixel_avg_variance32x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x64 \
+  vpx_highbd_10_sub_pixel_avg_variance32x64_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance4x4 \
+  vpx_highbd_10_sub_pixel_avg_variance4x4_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance4x8 \
+  vpx_highbd_10_sub_pixel_avg_variance4x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance64x32 \
+  vpx_highbd_10_sub_pixel_avg_variance64x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance64x64 \
+  vpx_highbd_10_sub_pixel_avg_variance64x64_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x16 \
+  vpx_highbd_10_sub_pixel_avg_variance8x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x4 \
+  vpx_highbd_10_sub_pixel_avg_variance8x4_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x8 \
+  vpx_highbd_10_sub_pixel_avg_variance8x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x16 \
+  vpx_highbd_10_sub_pixel_variance16x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x32 \
+  vpx_highbd_10_sub_pixel_variance16x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x8 \
+  vpx_highbd_10_sub_pixel_variance16x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x16 \
+  vpx_highbd_10_sub_pixel_variance32x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x32 \
+  vpx_highbd_10_sub_pixel_variance32x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x64 \
+  vpx_highbd_10_sub_pixel_variance32x64_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance4x4 \
+  vpx_highbd_10_sub_pixel_variance4x4_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance4x8 \
+  vpx_highbd_10_sub_pixel_variance4x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance64x32 \
+  vpx_highbd_10_sub_pixel_variance64x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance64x64 \
+  vpx_highbd_10_sub_pixel_variance64x64_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x16 \
+  vpx_highbd_10_sub_pixel_variance8x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x4 \
+  vpx_highbd_10_sub_pixel_variance8x4_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x8 \
+  vpx_highbd_10_sub_pixel_variance8x8_c
+
+unsigned int vpx_highbd_10_variance16x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance16x16 vpx_highbd_10_variance16x16_c
+
+unsigned int vpx_highbd_10_variance16x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance16x32 vpx_highbd_10_variance16x32_c
+
+unsigned int vpx_highbd_10_variance16x8_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_10_variance16x8 vpx_highbd_10_variance16x8_c
+
+unsigned int vpx_highbd_10_variance32x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance32x16 vpx_highbd_10_variance32x16_c
+
+unsigned int vpx_highbd_10_variance32x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance32x32 vpx_highbd_10_variance32x32_c
+
+unsigned int vpx_highbd_10_variance32x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance32x64 vpx_highbd_10_variance32x64_c
+
+unsigned int vpx_highbd_10_variance4x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_10_variance4x4 vpx_highbd_10_variance4x4_c
+
+unsigned int vpx_highbd_10_variance4x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_10_variance4x8 vpx_highbd_10_variance4x8_c
+
+unsigned int vpx_highbd_10_variance64x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance64x32 vpx_highbd_10_variance64x32_c
+
+unsigned int vpx_highbd_10_variance64x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance64x64 vpx_highbd_10_variance64x64_c
+
+unsigned int vpx_highbd_10_variance8x16_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_10_variance8x16 vpx_highbd_10_variance8x16_c
+
+unsigned int vpx_highbd_10_variance8x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_10_variance8x4 vpx_highbd_10_variance8x4_c
+
+unsigned int vpx_highbd_10_variance8x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_10_variance8x8 vpx_highbd_10_variance8x8_c
+
+void vpx_highbd_12_get16x16var_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse,
+                                 int* sum);
+#define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_c
+
+void vpx_highbd_12_get8x8var_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse,
+                               int* sum);
+#define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_c
+
+unsigned int vpx_highbd_12_mse16x16_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      unsigned int* sse);
+#define vpx_highbd_12_mse16x16 vpx_highbd_12_mse16x16_c
+
+unsigned int vpx_highbd_12_mse16x8_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     unsigned int* sse);
+#define vpx_highbd_12_mse16x8 vpx_highbd_12_mse16x8_c
+
+unsigned int vpx_highbd_12_mse8x16_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     unsigned int* sse);
+#define vpx_highbd_12_mse8x16 vpx_highbd_12_mse8x16_c
+
+unsigned int vpx_highbd_12_mse8x8_c(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_highbd_12_mse8x8 vpx_highbd_12_mse8x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x16 \
+  vpx_highbd_12_sub_pixel_avg_variance16x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x32 \
+  vpx_highbd_12_sub_pixel_avg_variance16x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x8 \
+  vpx_highbd_12_sub_pixel_avg_variance16x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x16 \
+  vpx_highbd_12_sub_pixel_avg_variance32x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x32 \
+  vpx_highbd_12_sub_pixel_avg_variance32x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x64 \
+  vpx_highbd_12_sub_pixel_avg_variance32x64_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance4x4 \
+  vpx_highbd_12_sub_pixel_avg_variance4x4_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance4x8 \
+  vpx_highbd_12_sub_pixel_avg_variance4x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance64x32 \
+  vpx_highbd_12_sub_pixel_avg_variance64x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance64x64 \
+  vpx_highbd_12_sub_pixel_avg_variance64x64_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x16 \
+  vpx_highbd_12_sub_pixel_avg_variance8x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x4 \
+  vpx_highbd_12_sub_pixel_avg_variance8x4_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x8 \
+  vpx_highbd_12_sub_pixel_avg_variance8x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x16 \
+  vpx_highbd_12_sub_pixel_variance16x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x32 \
+  vpx_highbd_12_sub_pixel_variance16x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x8 \
+  vpx_highbd_12_sub_pixel_variance16x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x16 \
+  vpx_highbd_12_sub_pixel_variance32x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x32 \
+  vpx_highbd_12_sub_pixel_variance32x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x64 \
+  vpx_highbd_12_sub_pixel_variance32x64_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance4x4 \
+  vpx_highbd_12_sub_pixel_variance4x4_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance4x8 \
+  vpx_highbd_12_sub_pixel_variance4x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance64x32 \
+  vpx_highbd_12_sub_pixel_variance64x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance64x64 \
+  vpx_highbd_12_sub_pixel_variance64x64_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x16 \
+  vpx_highbd_12_sub_pixel_variance8x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x4 \
+  vpx_highbd_12_sub_pixel_variance8x4_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x8 \
+  vpx_highbd_12_sub_pixel_variance8x8_c
+
+unsigned int vpx_highbd_12_variance16x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance16x16 vpx_highbd_12_variance16x16_c
+
+unsigned int vpx_highbd_12_variance16x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance16x32 vpx_highbd_12_variance16x32_c
+
+unsigned int vpx_highbd_12_variance16x8_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_12_variance16x8 vpx_highbd_12_variance16x8_c
+
+unsigned int vpx_highbd_12_variance32x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance32x16 vpx_highbd_12_variance32x16_c
+
+unsigned int vpx_highbd_12_variance32x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance32x32 vpx_highbd_12_variance32x32_c
+
+unsigned int vpx_highbd_12_variance32x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance32x64 vpx_highbd_12_variance32x64_c
+
+unsigned int vpx_highbd_12_variance4x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_12_variance4x4 vpx_highbd_12_variance4x4_c
+
+unsigned int vpx_highbd_12_variance4x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_12_variance4x8 vpx_highbd_12_variance4x8_c
+
+unsigned int vpx_highbd_12_variance64x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance64x32 vpx_highbd_12_variance64x32_c
+
+unsigned int vpx_highbd_12_variance64x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance64x64 vpx_highbd_12_variance64x64_c
+
+unsigned int vpx_highbd_12_variance8x16_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_12_variance8x16 vpx_highbd_12_variance8x16_c
+
+unsigned int vpx_highbd_12_variance8x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_12_variance8x4 vpx_highbd_12_variance8x4_c
+
+unsigned int vpx_highbd_12_variance8x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_12_variance8x8 vpx_highbd_12_variance8x8_c
+
+void vpx_highbd_8_get16x16var_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                unsigned int* sse,
+                                int* sum);
+#define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_c
+
+void vpx_highbd_8_get8x8var_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride,
+                              unsigned int* sse,
+                              int* sum);
+#define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_c
+
+unsigned int vpx_highbd_8_mse16x16_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     unsigned int* sse);
+#define vpx_highbd_8_mse16x16 vpx_highbd_8_mse16x16_c
+
+unsigned int vpx_highbd_8_mse16x8_c(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_highbd_8_mse16x8 vpx_highbd_8_mse16x8_c
+
+unsigned int vpx_highbd_8_mse8x16_c(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_highbd_8_mse8x16 vpx_highbd_8_mse8x16_c
+
+unsigned int vpx_highbd_8_mse8x8_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   unsigned int* sse);
+#define vpx_highbd_8_mse8x8 vpx_highbd_8_mse8x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x16 \
+  vpx_highbd_8_sub_pixel_avg_variance16x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x32 \
+  vpx_highbd_8_sub_pixel_avg_variance16x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x8 \
+  vpx_highbd_8_sub_pixel_avg_variance16x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x16 \
+  vpx_highbd_8_sub_pixel_avg_variance32x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x32 \
+  vpx_highbd_8_sub_pixel_avg_variance32x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x64 \
+  vpx_highbd_8_sub_pixel_avg_variance32x64_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
+                                                  const uint8_t* ref_ptr,
+                                                  int ref_stride,
+                                                  uint32_t* sse,
+                                                  const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance4x4 \
+  vpx_highbd_8_sub_pixel_avg_variance4x4_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
+                                                  const uint8_t* ref_ptr,
+                                                  int ref_stride,
+                                                  uint32_t* sse,
+                                                  const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance4x8 \
+  vpx_highbd_8_sub_pixel_avg_variance4x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance64x32 \
+  vpx_highbd_8_sub_pixel_avg_variance64x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance64x64 \
+  vpx_highbd_8_sub_pixel_avg_variance64x64_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x16 \
+  vpx_highbd_8_sub_pixel_avg_variance8x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
+                                                  const uint8_t* ref_ptr,
+                                                  int ref_stride,
+                                                  uint32_t* sse,
+                                                  const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x4 \
+  vpx_highbd_8_sub_pixel_avg_variance8x4_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
+                                                  const uint8_t* ref_ptr,
+                                                  int ref_stride,
+                                                  uint32_t* sse,
+                                                  const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x8 \
+  vpx_highbd_8_sub_pixel_avg_variance8x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x16 \
+  vpx_highbd_8_sub_pixel_variance16x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x32 \
+  vpx_highbd_8_sub_pixel_variance16x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x8 \
+  vpx_highbd_8_sub_pixel_variance16x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x16 \
+  vpx_highbd_8_sub_pixel_variance32x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x32 \
+  vpx_highbd_8_sub_pixel_variance32x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x64 \
+  vpx_highbd_8_sub_pixel_variance32x64_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance4x4 vpx_highbd_8_sub_pixel_variance4x4_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance4x8 vpx_highbd_8_sub_pixel_variance4x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance64x32 \
+  vpx_highbd_8_sub_pixel_variance64x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance64x64 \
+  vpx_highbd_8_sub_pixel_variance64x64_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x16 \
+  vpx_highbd_8_sub_pixel_variance8x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x4 vpx_highbd_8_sub_pixel_variance8x4_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x8 vpx_highbd_8_sub_pixel_variance8x8_c
+
+unsigned int vpx_highbd_8_variance16x16_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance16x16 vpx_highbd_8_variance16x16_c
+
+unsigned int vpx_highbd_8_variance16x32_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance16x32 vpx_highbd_8_variance16x32_c
+
+unsigned int vpx_highbd_8_variance16x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_8_variance16x8 vpx_highbd_8_variance16x8_c
+
+unsigned int vpx_highbd_8_variance32x16_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance32x16 vpx_highbd_8_variance32x16_c
+
+unsigned int vpx_highbd_8_variance32x32_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance32x32 vpx_highbd_8_variance32x32_c
+
+unsigned int vpx_highbd_8_variance32x64_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance32x64 vpx_highbd_8_variance32x64_c
+
+unsigned int vpx_highbd_8_variance4x4_c(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        unsigned int* sse);
+#define vpx_highbd_8_variance4x4 vpx_highbd_8_variance4x4_c
+
+unsigned int vpx_highbd_8_variance4x8_c(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        unsigned int* sse);
+#define vpx_highbd_8_variance4x8 vpx_highbd_8_variance4x8_c
+
+unsigned int vpx_highbd_8_variance64x32_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance64x32 vpx_highbd_8_variance64x32_c
+
+unsigned int vpx_highbd_8_variance64x64_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance64x64 vpx_highbd_8_variance64x64_c
+
+unsigned int vpx_highbd_8_variance8x16_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_8_variance8x16 vpx_highbd_8_variance8x16_c
+
+unsigned int vpx_highbd_8_variance8x4_c(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        unsigned int* sse);
+#define vpx_highbd_8_variance8x4 vpx_highbd_8_variance8x4_c
+
+unsigned int vpx_highbd_8_variance8x8_c(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        unsigned int* sse);
+#define vpx_highbd_8_variance8x8 vpx_highbd_8_variance8x8_c
+
+unsigned int vpx_highbd_avg_4x4_c(const uint8_t* s8, int p);
+#define vpx_highbd_avg_4x4 vpx_highbd_avg_4x4_c
+
+unsigned int vpx_highbd_avg_8x8_c(const uint8_t* s8, int p);
+#define vpx_highbd_avg_8x8 vpx_highbd_avg_8x8_c
+
+void vpx_highbd_comp_avg_pred_c(uint16_t* comp_pred,
+                                const uint16_t* pred,
+                                int width,
+                                int height,
+                                const uint16_t* ref,
+                                int ref_stride);
+#define vpx_highbd_comp_avg_pred vpx_highbd_comp_avg_pred_c
+
+void vpx_highbd_convolve8_c(const uint16_t* src,
+                            ptrdiff_t src_stride,
+                            uint16_t* dst,
+                            ptrdiff_t dst_stride,
+                            const InterpKernel* filter,
+                            int x0_q4,
+                            int x_step_q4,
+                            int y0_q4,
+                            int y_step_q4,
+                            int w,
+                            int h,
+                            int bd);
+void vpx_highbd_convolve8_neon(const uint16_t* src,
+                               ptrdiff_t src_stride,
+                               uint16_t* dst,
+                               ptrdiff_t dst_stride,
+                               const InterpKernel* filter,
+                               int x0_q4,
+                               int x_step_q4,
+                               int y0_q4,
+                               int y_step_q4,
+                               int w,
+                               int h,
+                               int bd);
+#define vpx_highbd_convolve8 vpx_highbd_convolve8_neon
+
+void vpx_highbd_convolve8_avg_c(const uint16_t* src,
+                                ptrdiff_t src_stride,
+                                uint16_t* dst,
+                                ptrdiff_t dst_stride,
+                                const InterpKernel* filter,
+                                int x0_q4,
+                                int x_step_q4,
+                                int y0_q4,
+                                int y_step_q4,
+                                int w,
+                                int h,
+                                int bd);
+void vpx_highbd_convolve8_avg_neon(const uint16_t* src,
+                                   ptrdiff_t src_stride,
+                                   uint16_t* dst,
+                                   ptrdiff_t dst_stride,
+                                   const InterpKernel* filter,
+                                   int x0_q4,
+                                   int x_step_q4,
+                                   int y0_q4,
+                                   int y_step_q4,
+                                   int w,
+                                   int h,
+                                   int bd);
+#define vpx_highbd_convolve8_avg vpx_highbd_convolve8_avg_neon
+
+void vpx_highbd_convolve8_avg_horiz_c(const uint16_t* src,
+                                      ptrdiff_t src_stride,
+                                      uint16_t* dst,
+                                      ptrdiff_t dst_stride,
+                                      const InterpKernel* filter,
+                                      int x0_q4,
+                                      int x_step_q4,
+                                      int y0_q4,
+                                      int y_step_q4,
+                                      int w,
+                                      int h,
+                                      int bd);
+void vpx_highbd_convolve8_avg_horiz_neon(const uint16_t* src,
+                                         ptrdiff_t src_stride,
+                                         uint16_t* dst,
+                                         ptrdiff_t dst_stride,
+                                         const InterpKernel* filter,
+                                         int x0_q4,
+                                         int x_step_q4,
+                                         int y0_q4,
+                                         int y_step_q4,
+                                         int w,
+                                         int h,
+                                         int bd);
+#define vpx_highbd_convolve8_avg_horiz vpx_highbd_convolve8_avg_horiz_neon
+
+void vpx_highbd_convolve8_avg_vert_c(const uint16_t* src,
+                                     ptrdiff_t src_stride,
+                                     uint16_t* dst,
+                                     ptrdiff_t dst_stride,
+                                     const InterpKernel* filter,
+                                     int x0_q4,
+                                     int x_step_q4,
+                                     int y0_q4,
+                                     int y_step_q4,
+                                     int w,
+                                     int h,
+                                     int bd);
+void vpx_highbd_convolve8_avg_vert_neon(const uint16_t* src,
+                                        ptrdiff_t src_stride,
+                                        uint16_t* dst,
+                                        ptrdiff_t dst_stride,
+                                        const InterpKernel* filter,
+                                        int x0_q4,
+                                        int x_step_q4,
+                                        int y0_q4,
+                                        int y_step_q4,
+                                        int w,
+                                        int h,
+                                        int bd);
+#define vpx_highbd_convolve8_avg_vert vpx_highbd_convolve8_avg_vert_neon
+
+void vpx_highbd_convolve8_horiz_c(const uint16_t* src,
+                                  ptrdiff_t src_stride,
+                                  uint16_t* dst,
+                                  ptrdiff_t dst_stride,
+                                  const InterpKernel* filter,
+                                  int x0_q4,
+                                  int x_step_q4,
+                                  int y0_q4,
+                                  int y_step_q4,
+                                  int w,
+                                  int h,
+                                  int bd);
+void vpx_highbd_convolve8_horiz_neon(const uint16_t* src,
+                                     ptrdiff_t src_stride,
+                                     uint16_t* dst,
+                                     ptrdiff_t dst_stride,
+                                     const InterpKernel* filter,
+                                     int x0_q4,
+                                     int x_step_q4,
+                                     int y0_q4,
+                                     int y_step_q4,
+                                     int w,
+                                     int h,
+                                     int bd);
+#define vpx_highbd_convolve8_horiz vpx_highbd_convolve8_horiz_neon
+
+void vpx_highbd_convolve8_vert_c(const uint16_t* src,
+                                 ptrdiff_t src_stride,
+                                 uint16_t* dst,
+                                 ptrdiff_t dst_stride,
+                                 const InterpKernel* filter,
+                                 int x0_q4,
+                                 int x_step_q4,
+                                 int y0_q4,
+                                 int y_step_q4,
+                                 int w,
+                                 int h,
+                                 int bd);
+void vpx_highbd_convolve8_vert_neon(const uint16_t* src,
+                                    ptrdiff_t src_stride,
+                                    uint16_t* dst,
+                                    ptrdiff_t dst_stride,
+                                    const InterpKernel* filter,
+                                    int x0_q4,
+                                    int x_step_q4,
+                                    int y0_q4,
+                                    int y_step_q4,
+                                    int w,
+                                    int h,
+                                    int bd);
+#define vpx_highbd_convolve8_vert vpx_highbd_convolve8_vert_neon
+
+void vpx_highbd_convolve_avg_c(const uint16_t* src,
+                               ptrdiff_t src_stride,
+                               uint16_t* dst,
+                               ptrdiff_t dst_stride,
+                               const InterpKernel* filter,
+                               int x0_q4,
+                               int x_step_q4,
+                               int y0_q4,
+                               int y_step_q4,
+                               int w,
+                               int h,
+                               int bd);
+void vpx_highbd_convolve_avg_neon(const uint16_t* src,
+                                  ptrdiff_t src_stride,
+                                  uint16_t* dst,
+                                  ptrdiff_t dst_stride,
+                                  const InterpKernel* filter,
+                                  int x0_q4,
+                                  int x_step_q4,
+                                  int y0_q4,
+                                  int y_step_q4,
+                                  int w,
+                                  int h,
+                                  int bd);
+#define vpx_highbd_convolve_avg vpx_highbd_convolve_avg_neon
+
+void vpx_highbd_convolve_copy_c(const uint16_t* src,
+                                ptrdiff_t src_stride,
+                                uint16_t* dst,
+                                ptrdiff_t dst_stride,
+                                const InterpKernel* filter,
+                                int x0_q4,
+                                int x_step_q4,
+                                int y0_q4,
+                                int y_step_q4,
+                                int w,
+                                int h,
+                                int bd);
+void vpx_highbd_convolve_copy_neon(const uint16_t* src,
+                                   ptrdiff_t src_stride,
+                                   uint16_t* dst,
+                                   ptrdiff_t dst_stride,
+                                   const InterpKernel* filter,
+                                   int x0_q4,
+                                   int x_step_q4,
+                                   int y0_q4,
+                                   int y_step_q4,
+                                   int w,
+                                   int h,
+                                   int bd);
+#define vpx_highbd_convolve_copy vpx_highbd_convolve_copy_neon
+
+void vpx_highbd_d117_predictor_16x16_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d117_predictor_16x16 vpx_highbd_d117_predictor_16x16_c
+
+void vpx_highbd_d117_predictor_32x32_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d117_predictor_32x32 vpx_highbd_d117_predictor_32x32_c
+
+void vpx_highbd_d117_predictor_4x4_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d117_predictor_4x4 vpx_highbd_d117_predictor_4x4_c
+
+void vpx_highbd_d117_predictor_8x8_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d117_predictor_8x8 vpx_highbd_d117_predictor_8x8_c
+
+void vpx_highbd_d135_predictor_16x16_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_d135_predictor_16x16_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_d135_predictor_16x16 vpx_highbd_d135_predictor_16x16_neon
+
+void vpx_highbd_d135_predictor_32x32_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_d135_predictor_32x32_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_d135_predictor_32x32 vpx_highbd_d135_predictor_32x32_neon
+
+void vpx_highbd_d135_predictor_4x4_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_d135_predictor_4x4_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_d135_predictor_4x4 vpx_highbd_d135_predictor_4x4_neon
+
+void vpx_highbd_d135_predictor_8x8_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_d135_predictor_8x8_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_d135_predictor_8x8 vpx_highbd_d135_predictor_8x8_neon
+
+void vpx_highbd_d153_predictor_16x16_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d153_predictor_16x16 vpx_highbd_d153_predictor_16x16_c
+
+void vpx_highbd_d153_predictor_32x32_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d153_predictor_32x32 vpx_highbd_d153_predictor_32x32_c
+
+void vpx_highbd_d153_predictor_4x4_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d153_predictor_4x4 vpx_highbd_d153_predictor_4x4_c
+
+void vpx_highbd_d153_predictor_8x8_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d153_predictor_8x8 vpx_highbd_d153_predictor_8x8_c
+
+void vpx_highbd_d207_predictor_16x16_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d207_predictor_16x16 vpx_highbd_d207_predictor_16x16_c
+
+void vpx_highbd_d207_predictor_32x32_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d207_predictor_32x32 vpx_highbd_d207_predictor_32x32_c
+
+void vpx_highbd_d207_predictor_4x4_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d207_predictor_4x4 vpx_highbd_d207_predictor_4x4_c
+
+void vpx_highbd_d207_predictor_8x8_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d207_predictor_8x8 vpx_highbd_d207_predictor_8x8_c
+
+void vpx_highbd_d45_predictor_16x16_c(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+void vpx_highbd_d45_predictor_16x16_neon(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+#define vpx_highbd_d45_predictor_16x16 vpx_highbd_d45_predictor_16x16_neon
+
+void vpx_highbd_d45_predictor_32x32_c(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+void vpx_highbd_d45_predictor_32x32_neon(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+#define vpx_highbd_d45_predictor_32x32 vpx_highbd_d45_predictor_32x32_neon
+
+void vpx_highbd_d45_predictor_4x4_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_d45_predictor_4x4_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d45_predictor_4x4 vpx_highbd_d45_predictor_4x4_neon
+
+void vpx_highbd_d45_predictor_8x8_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_d45_predictor_8x8_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d45_predictor_8x8 vpx_highbd_d45_predictor_8x8_neon
+
+void vpx_highbd_d63_predictor_16x16_c(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_d63_predictor_16x16 vpx_highbd_d63_predictor_16x16_c
+
+void vpx_highbd_d63_predictor_32x32_c(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_d63_predictor_32x32 vpx_highbd_d63_predictor_32x32_c
+
+void vpx_highbd_d63_predictor_4x4_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+#define vpx_highbd_d63_predictor_4x4 vpx_highbd_d63_predictor_4x4_c
+
+void vpx_highbd_d63_predictor_8x8_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+#define vpx_highbd_d63_predictor_8x8 vpx_highbd_d63_predictor_8x8_c
+
+void vpx_highbd_dc_128_predictor_16x16_c(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+void vpx_highbd_dc_128_predictor_16x16_neon(uint16_t* dst,
+                                            ptrdiff_t stride,
+                                            const uint16_t* above,
+                                            const uint16_t* left,
+                                            int bd);
+#define vpx_highbd_dc_128_predictor_16x16 vpx_highbd_dc_128_predictor_16x16_neon
+
+void vpx_highbd_dc_128_predictor_32x32_c(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+void vpx_highbd_dc_128_predictor_32x32_neon(uint16_t* dst,
+                                            ptrdiff_t stride,
+                                            const uint16_t* above,
+                                            const uint16_t* left,
+                                            int bd);
+#define vpx_highbd_dc_128_predictor_32x32 vpx_highbd_dc_128_predictor_32x32_neon
+
+void vpx_highbd_dc_128_predictor_4x4_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_dc_128_predictor_4x4_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_dc_128_predictor_4x4 vpx_highbd_dc_128_predictor_4x4_neon
+
+void vpx_highbd_dc_128_predictor_8x8_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_dc_128_predictor_8x8_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_dc_128_predictor_8x8 vpx_highbd_dc_128_predictor_8x8_neon
+
+void vpx_highbd_dc_left_predictor_16x16_c(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+void vpx_highbd_dc_left_predictor_16x16_neon(uint16_t* dst,
+                                             ptrdiff_t stride,
+                                             const uint16_t* above,
+                                             const uint16_t* left,
+                                             int bd);
+#define vpx_highbd_dc_left_predictor_16x16 \
+  vpx_highbd_dc_left_predictor_16x16_neon
+
+void vpx_highbd_dc_left_predictor_32x32_c(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+void vpx_highbd_dc_left_predictor_32x32_neon(uint16_t* dst,
+                                             ptrdiff_t stride,
+                                             const uint16_t* above,
+                                             const uint16_t* left,
+                                             int bd);
+#define vpx_highbd_dc_left_predictor_32x32 \
+  vpx_highbd_dc_left_predictor_32x32_neon
+
+void vpx_highbd_dc_left_predictor_4x4_c(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+void vpx_highbd_dc_left_predictor_4x4_neon(uint16_t* dst,
+                                           ptrdiff_t stride,
+                                           const uint16_t* above,
+                                           const uint16_t* left,
+                                           int bd);
+#define vpx_highbd_dc_left_predictor_4x4 vpx_highbd_dc_left_predictor_4x4_neon
+
+void vpx_highbd_dc_left_predictor_8x8_c(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+void vpx_highbd_dc_left_predictor_8x8_neon(uint16_t* dst,
+                                           ptrdiff_t stride,
+                                           const uint16_t* above,
+                                           const uint16_t* left,
+                                           int bd);
+#define vpx_highbd_dc_left_predictor_8x8 vpx_highbd_dc_left_predictor_8x8_neon
+
+void vpx_highbd_dc_predictor_16x16_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_dc_predictor_16x16_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_dc_predictor_16x16 vpx_highbd_dc_predictor_16x16_neon
+
+void vpx_highbd_dc_predictor_32x32_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_dc_predictor_32x32_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_dc_predictor_32x32 vpx_highbd_dc_predictor_32x32_neon
+
+void vpx_highbd_dc_predictor_4x4_c(uint16_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint16_t* above,
+                                   const uint16_t* left,
+                                   int bd);
+void vpx_highbd_dc_predictor_4x4_neon(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_dc_predictor_4x4 vpx_highbd_dc_predictor_4x4_neon
+
+void vpx_highbd_dc_predictor_8x8_c(uint16_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint16_t* above,
+                                   const uint16_t* left,
+                                   int bd);
+void vpx_highbd_dc_predictor_8x8_neon(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_dc_predictor_8x8 vpx_highbd_dc_predictor_8x8_neon
+
+void vpx_highbd_dc_top_predictor_16x16_c(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+void vpx_highbd_dc_top_predictor_16x16_neon(uint16_t* dst,
+                                            ptrdiff_t stride,
+                                            const uint16_t* above,
+                                            const uint16_t* left,
+                                            int bd);
+#define vpx_highbd_dc_top_predictor_16x16 vpx_highbd_dc_top_predictor_16x16_neon
+
+void vpx_highbd_dc_top_predictor_32x32_c(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+void vpx_highbd_dc_top_predictor_32x32_neon(uint16_t* dst,
+                                            ptrdiff_t stride,
+                                            const uint16_t* above,
+                                            const uint16_t* left,
+                                            int bd);
+#define vpx_highbd_dc_top_predictor_32x32 vpx_highbd_dc_top_predictor_32x32_neon
+
+void vpx_highbd_dc_top_predictor_4x4_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_dc_top_predictor_4x4_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_dc_top_predictor_4x4 vpx_highbd_dc_top_predictor_4x4_neon
+
+void vpx_highbd_dc_top_predictor_8x8_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_dc_top_predictor_8x8_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_dc_top_predictor_8x8 vpx_highbd_dc_top_predictor_8x8_neon
+
+void vpx_highbd_fdct16x16_c(const int16_t* input,
+                            tran_low_t* output,
+                            int stride);
+#define vpx_highbd_fdct16x16 vpx_highbd_fdct16x16_c
+
+void vpx_highbd_fdct16x16_1_c(const int16_t* input,
+                              tran_low_t* output,
+                              int stride);
+#define vpx_highbd_fdct16x16_1 vpx_highbd_fdct16x16_1_c
+
+void vpx_highbd_fdct32x32_c(const int16_t* input,
+                            tran_low_t* output,
+                            int stride);
+#define vpx_highbd_fdct32x32 vpx_highbd_fdct32x32_c
+
+void vpx_highbd_fdct32x32_1_c(const int16_t* input,
+                              tran_low_t* output,
+                              int stride);
+#define vpx_highbd_fdct32x32_1 vpx_highbd_fdct32x32_1_c
+
+void vpx_highbd_fdct32x32_rd_c(const int16_t* input,
+                               tran_low_t* output,
+                               int stride);
+#define vpx_highbd_fdct32x32_rd vpx_highbd_fdct32x32_rd_c
+
+void vpx_highbd_fdct4x4_c(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_highbd_fdct4x4 vpx_highbd_fdct4x4_c
+
+void vpx_highbd_fdct8x8_c(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_highbd_fdct8x8 vpx_highbd_fdct8x8_c
+
+void vpx_highbd_fdct8x8_1_c(const int16_t* input,
+                            tran_low_t* output,
+                            int stride);
+void vpx_fdct8x8_1_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_highbd_fdct8x8_1 vpx_fdct8x8_1_neon
+
+void vpx_highbd_h_predictor_16x16_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_h_predictor_16x16_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_h_predictor_16x16 vpx_highbd_h_predictor_16x16_neon
+
+void vpx_highbd_h_predictor_32x32_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_h_predictor_32x32_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_h_predictor_32x32 vpx_highbd_h_predictor_32x32_neon
+
+void vpx_highbd_h_predictor_4x4_c(uint16_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint16_t* above,
+                                  const uint16_t* left,
+                                  int bd);
+void vpx_highbd_h_predictor_4x4_neon(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_h_predictor_4x4 vpx_highbd_h_predictor_4x4_neon
+
+void vpx_highbd_h_predictor_8x8_c(uint16_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint16_t* above,
+                                  const uint16_t* left,
+                                  int bd);
+void vpx_highbd_h_predictor_8x8_neon(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_h_predictor_8x8 vpx_highbd_h_predictor_8x8_neon
+
+void vpx_highbd_hadamard_16x16_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+#define vpx_highbd_hadamard_16x16 vpx_highbd_hadamard_16x16_c
+
+void vpx_highbd_hadamard_32x32_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+#define vpx_highbd_hadamard_32x32 vpx_highbd_hadamard_32x32_c
+
+void vpx_highbd_hadamard_8x8_c(const int16_t* src_diff,
+                               ptrdiff_t src_stride,
+                               tran_low_t* coeff);
+#define vpx_highbd_hadamard_8x8 vpx_highbd_hadamard_8x8_c
+
+void vpx_highbd_idct16x16_10_add_c(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int bd);
+void vpx_highbd_idct16x16_10_add_neon(const tran_low_t* input,
+                                      uint16_t* dest,
+                                      int stride,
+                                      int bd);
+#define vpx_highbd_idct16x16_10_add vpx_highbd_idct16x16_10_add_neon
+
+void vpx_highbd_idct16x16_1_add_c(const tran_low_t* input,
+                                  uint16_t* dest,
+                                  int stride,
+                                  int bd);
+void vpx_highbd_idct16x16_1_add_neon(const tran_low_t* input,
+                                     uint16_t* dest,
+                                     int stride,
+                                     int bd);
+#define vpx_highbd_idct16x16_1_add vpx_highbd_idct16x16_1_add_neon
+
+void vpx_highbd_idct16x16_256_add_c(const tran_low_t* input,
+                                    uint16_t* dest,
+                                    int stride,
+                                    int bd);
+void vpx_highbd_idct16x16_256_add_neon(const tran_low_t* input,
+                                       uint16_t* dest,
+                                       int stride,
+                                       int bd);
+#define vpx_highbd_idct16x16_256_add vpx_highbd_idct16x16_256_add_neon
+
+void vpx_highbd_idct16x16_38_add_c(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int bd);
+void vpx_highbd_idct16x16_38_add_neon(const tran_low_t* input,
+                                      uint16_t* dest,
+                                      int stride,
+                                      int bd);
+#define vpx_highbd_idct16x16_38_add vpx_highbd_idct16x16_38_add_neon
+
+void vpx_highbd_idct32x32_1024_add_c(const tran_low_t* input,
+                                     uint16_t* dest,
+                                     int stride,
+                                     int bd);
+void vpx_highbd_idct32x32_1024_add_neon(const tran_low_t* input,
+                                        uint16_t* dest,
+                                        int stride,
+                                        int bd);
+#define vpx_highbd_idct32x32_1024_add vpx_highbd_idct32x32_1024_add_neon
+
+void vpx_highbd_idct32x32_135_add_c(const tran_low_t* input,
+                                    uint16_t* dest,
+                                    int stride,
+                                    int bd);
+void vpx_highbd_idct32x32_135_add_neon(const tran_low_t* input,
+                                       uint16_t* dest,
+                                       int stride,
+                                       int bd);
+#define vpx_highbd_idct32x32_135_add vpx_highbd_idct32x32_135_add_neon
+
+void vpx_highbd_idct32x32_1_add_c(const tran_low_t* input,
+                                  uint16_t* dest,
+                                  int stride,
+                                  int bd);
+void vpx_highbd_idct32x32_1_add_neon(const tran_low_t* input,
+                                     uint16_t* dest,
+                                     int stride,
+                                     int bd);
+#define vpx_highbd_idct32x32_1_add vpx_highbd_idct32x32_1_add_neon
+
+void vpx_highbd_idct32x32_34_add_c(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int bd);
+void vpx_highbd_idct32x32_34_add_neon(const tran_low_t* input,
+                                      uint16_t* dest,
+                                      int stride,
+                                      int bd);
+#define vpx_highbd_idct32x32_34_add vpx_highbd_idct32x32_34_add_neon
+
+void vpx_highbd_idct4x4_16_add_c(const tran_low_t* input,
+                                 uint16_t* dest,
+                                 int stride,
+                                 int bd);
+void vpx_highbd_idct4x4_16_add_neon(const tran_low_t* input,
+                                    uint16_t* dest,
+                                    int stride,
+                                    int bd);
+#define vpx_highbd_idct4x4_16_add vpx_highbd_idct4x4_16_add_neon
+
+void vpx_highbd_idct4x4_1_add_c(const tran_low_t* input,
+                                uint16_t* dest,
+                                int stride,
+                                int bd);
+void vpx_highbd_idct4x4_1_add_neon(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int bd);
+#define vpx_highbd_idct4x4_1_add vpx_highbd_idct4x4_1_add_neon
+
+void vpx_highbd_idct8x8_12_add_c(const tran_low_t* input,
+                                 uint16_t* dest,
+                                 int stride,
+                                 int bd);
+void vpx_highbd_idct8x8_12_add_neon(const tran_low_t* input,
+                                    uint16_t* dest,
+                                    int stride,
+                                    int bd);
+#define vpx_highbd_idct8x8_12_add vpx_highbd_idct8x8_12_add_neon
+
+void vpx_highbd_idct8x8_1_add_c(const tran_low_t* input,
+                                uint16_t* dest,
+                                int stride,
+                                int bd);
+void vpx_highbd_idct8x8_1_add_neon(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int bd);
+#define vpx_highbd_idct8x8_1_add vpx_highbd_idct8x8_1_add_neon
+
+void vpx_highbd_idct8x8_64_add_c(const tran_low_t* input,
+                                 uint16_t* dest,
+                                 int stride,
+                                 int bd);
+void vpx_highbd_idct8x8_64_add_neon(const tran_low_t* input,
+                                    uint16_t* dest,
+                                    int stride,
+                                    int bd);
+#define vpx_highbd_idct8x8_64_add vpx_highbd_idct8x8_64_add_neon
+
+void vpx_highbd_iwht4x4_16_add_c(const tran_low_t* input,
+                                 uint16_t* dest,
+                                 int stride,
+                                 int bd);
+#define vpx_highbd_iwht4x4_16_add vpx_highbd_iwht4x4_16_add_c
+
+void vpx_highbd_iwht4x4_1_add_c(const tran_low_t* input,
+                                uint16_t* dest,
+                                int stride,
+                                int bd);
+#define vpx_highbd_iwht4x4_1_add vpx_highbd_iwht4x4_1_add_c
+
+void vpx_highbd_lpf_horizontal_16_c(uint16_t* s,
+                                    int pitch,
+                                    const uint8_t* blimit,
+                                    const uint8_t* limit,
+                                    const uint8_t* thresh,
+                                    int bd);
+void vpx_highbd_lpf_horizontal_16_neon(uint16_t* s,
+                                       int pitch,
+                                       const uint8_t* blimit,
+                                       const uint8_t* limit,
+                                       const uint8_t* thresh,
+                                       int bd);
+#define vpx_highbd_lpf_horizontal_16 vpx_highbd_lpf_horizontal_16_neon
+
+void vpx_highbd_lpf_horizontal_16_dual_c(uint16_t* s,
+                                         int pitch,
+                                         const uint8_t* blimit,
+                                         const uint8_t* limit,
+                                         const uint8_t* thresh,
+                                         int bd);
+void vpx_highbd_lpf_horizontal_16_dual_neon(uint16_t* s,
+                                            int pitch,
+                                            const uint8_t* blimit,
+                                            const uint8_t* limit,
+                                            const uint8_t* thresh,
+                                            int bd);
+#define vpx_highbd_lpf_horizontal_16_dual vpx_highbd_lpf_horizontal_16_dual_neon
+
+void vpx_highbd_lpf_horizontal_4_c(uint16_t* s,
+                                   int pitch,
+                                   const uint8_t* blimit,
+                                   const uint8_t* limit,
+                                   const uint8_t* thresh,
+                                   int bd);
+void vpx_highbd_lpf_horizontal_4_neon(uint16_t* s,
+                                      int pitch,
+                                      const uint8_t* blimit,
+                                      const uint8_t* limit,
+                                      const uint8_t* thresh,
+                                      int bd);
+#define vpx_highbd_lpf_horizontal_4 vpx_highbd_lpf_horizontal_4_neon
+
+void vpx_highbd_lpf_horizontal_4_dual_c(uint16_t* s,
+                                        int pitch,
+                                        const uint8_t* blimit0,
+                                        const uint8_t* limit0,
+                                        const uint8_t* thresh0,
+                                        const uint8_t* blimit1,
+                                        const uint8_t* limit1,
+                                        const uint8_t* thresh1,
+                                        int bd);
+void vpx_highbd_lpf_horizontal_4_dual_neon(uint16_t* s,
+                                           int pitch,
+                                           const uint8_t* blimit0,
+                                           const uint8_t* limit0,
+                                           const uint8_t* thresh0,
+                                           const uint8_t* blimit1,
+                                           const uint8_t* limit1,
+                                           const uint8_t* thresh1,
+                                           int bd);
+#define vpx_highbd_lpf_horizontal_4_dual vpx_highbd_lpf_horizontal_4_dual_neon
+
+void vpx_highbd_lpf_horizontal_8_c(uint16_t* s,
+                                   int pitch,
+                                   const uint8_t* blimit,
+                                   const uint8_t* limit,
+                                   const uint8_t* thresh,
+                                   int bd);
+void vpx_highbd_lpf_horizontal_8_neon(uint16_t* s,
+                                      int pitch,
+                                      const uint8_t* blimit,
+                                      const uint8_t* limit,
+                                      const uint8_t* thresh,
+                                      int bd);
+#define vpx_highbd_lpf_horizontal_8 vpx_highbd_lpf_horizontal_8_neon
+
+void vpx_highbd_lpf_horizontal_8_dual_c(uint16_t* s,
+                                        int pitch,
+                                        const uint8_t* blimit0,
+                                        const uint8_t* limit0,
+                                        const uint8_t* thresh0,
+                                        const uint8_t* blimit1,
+                                        const uint8_t* limit1,
+                                        const uint8_t* thresh1,
+                                        int bd);
+void vpx_highbd_lpf_horizontal_8_dual_neon(uint16_t* s,
+                                           int pitch,
+                                           const uint8_t* blimit0,
+                                           const uint8_t* limit0,
+                                           const uint8_t* thresh0,
+                                           const uint8_t* blimit1,
+                                           const uint8_t* limit1,
+                                           const uint8_t* thresh1,
+                                           int bd);
+#define vpx_highbd_lpf_horizontal_8_dual vpx_highbd_lpf_horizontal_8_dual_neon
+
+void vpx_highbd_lpf_vertical_16_c(uint16_t* s,
+                                  int pitch,
+                                  const uint8_t* blimit,
+                                  const uint8_t* limit,
+                                  const uint8_t* thresh,
+                                  int bd);
+void vpx_highbd_lpf_vertical_16_neon(uint16_t* s,
+                                     int pitch,
+                                     const uint8_t* blimit,
+                                     const uint8_t* limit,
+                                     const uint8_t* thresh,
+                                     int bd);
+#define vpx_highbd_lpf_vertical_16 vpx_highbd_lpf_vertical_16_neon
+
+void vpx_highbd_lpf_vertical_16_dual_c(uint16_t* s,
+                                       int pitch,
+                                       const uint8_t* blimit,
+                                       const uint8_t* limit,
+                                       const uint8_t* thresh,
+                                       int bd);
+void vpx_highbd_lpf_vertical_16_dual_neon(uint16_t* s,
+                                          int pitch,
+                                          const uint8_t* blimit,
+                                          const uint8_t* limit,
+                                          const uint8_t* thresh,
+                                          int bd);
+#define vpx_highbd_lpf_vertical_16_dual vpx_highbd_lpf_vertical_16_dual_neon
+
+void vpx_highbd_lpf_vertical_4_c(uint16_t* s,
+                                 int pitch,
+                                 const uint8_t* blimit,
+                                 const uint8_t* limit,
+                                 const uint8_t* thresh,
+                                 int bd);
+void vpx_highbd_lpf_vertical_4_neon(uint16_t* s,
+                                    int pitch,
+                                    const uint8_t* blimit,
+                                    const uint8_t* limit,
+                                    const uint8_t* thresh,
+                                    int bd);
+#define vpx_highbd_lpf_vertical_4 vpx_highbd_lpf_vertical_4_neon
+
+void vpx_highbd_lpf_vertical_4_dual_c(uint16_t* s,
+                                      int pitch,
+                                      const uint8_t* blimit0,
+                                      const uint8_t* limit0,
+                                      const uint8_t* thresh0,
+                                      const uint8_t* blimit1,
+                                      const uint8_t* limit1,
+                                      const uint8_t* thresh1,
+                                      int bd);
+void vpx_highbd_lpf_vertical_4_dual_neon(uint16_t* s,
+                                         int pitch,
+                                         const uint8_t* blimit0,
+                                         const uint8_t* limit0,
+                                         const uint8_t* thresh0,
+                                         const uint8_t* blimit1,
+                                         const uint8_t* limit1,
+                                         const uint8_t* thresh1,
+                                         int bd);
+#define vpx_highbd_lpf_vertical_4_dual vpx_highbd_lpf_vertical_4_dual_neon
+
+void vpx_highbd_lpf_vertical_8_c(uint16_t* s,
+                                 int pitch,
+                                 const uint8_t* blimit,
+                                 const uint8_t* limit,
+                                 const uint8_t* thresh,
+                                 int bd);
+void vpx_highbd_lpf_vertical_8_neon(uint16_t* s,
+                                    int pitch,
+                                    const uint8_t* blimit,
+                                    const uint8_t* limit,
+                                    const uint8_t* thresh,
+                                    int bd);
+#define vpx_highbd_lpf_vertical_8 vpx_highbd_lpf_vertical_8_neon
+
+void vpx_highbd_lpf_vertical_8_dual_c(uint16_t* s,
+                                      int pitch,
+                                      const uint8_t* blimit0,
+                                      const uint8_t* limit0,
+                                      const uint8_t* thresh0,
+                                      const uint8_t* blimit1,
+                                      const uint8_t* limit1,
+                                      const uint8_t* thresh1,
+                                      int bd);
+void vpx_highbd_lpf_vertical_8_dual_neon(uint16_t* s,
+                                         int pitch,
+                                         const uint8_t* blimit0,
+                                         const uint8_t* limit0,
+                                         const uint8_t* thresh0,
+                                         const uint8_t* blimit1,
+                                         const uint8_t* limit1,
+                                         const uint8_t* thresh1,
+                                         int bd);
+#define vpx_highbd_lpf_vertical_8_dual vpx_highbd_lpf_vertical_8_dual_neon
+
+void vpx_highbd_minmax_8x8_c(const uint8_t* s8,
+                             int p,
+                             const uint8_t* d8,
+                             int dp,
+                             int* min,
+                             int* max);
+#define vpx_highbd_minmax_8x8 vpx_highbd_minmax_8x8_c
+
+void vpx_highbd_quantize_b_c(const tran_low_t* coeff_ptr,
+                             intptr_t n_coeffs,
+                             int skip_block,
+                             const int16_t* zbin_ptr,
+                             const int16_t* round_ptr,
+                             const int16_t* quant_ptr,
+                             const int16_t* quant_shift_ptr,
+                             tran_low_t* qcoeff_ptr,
+                             tran_low_t* dqcoeff_ptr,
+                             const int16_t* dequant_ptr,
+                             uint16_t* eob_ptr,
+                             const int16_t* scan,
+                             const int16_t* iscan);
+#define vpx_highbd_quantize_b vpx_highbd_quantize_b_c
+
+void vpx_highbd_quantize_b_32x32_c(const tran_low_t* coeff_ptr,
+                                   intptr_t n_coeffs,
+                                   int skip_block,
+                                   const int16_t* zbin_ptr,
+                                   const int16_t* round_ptr,
+                                   const int16_t* quant_ptr,
+                                   const int16_t* quant_shift_ptr,
+                                   tran_low_t* qcoeff_ptr,
+                                   tran_low_t* dqcoeff_ptr,
+                                   const int16_t* dequant_ptr,
+                                   uint16_t* eob_ptr,
+                                   const int16_t* scan,
+                                   const int16_t* iscan);
+#define vpx_highbd_quantize_b_32x32 vpx_highbd_quantize_b_32x32_c
+
+unsigned int vpx_highbd_sad16x16_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad16x16 vpx_highbd_sad16x16_c
+
+unsigned int vpx_highbd_sad16x16_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad16x16_avg vpx_highbd_sad16x16_avg_c
+
+void vpx_highbd_sad16x16x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad16x16x4d vpx_highbd_sad16x16x4d_c
+
+unsigned int vpx_highbd_sad16x32_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad16x32 vpx_highbd_sad16x32_c
+
+unsigned int vpx_highbd_sad16x32_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad16x32_avg vpx_highbd_sad16x32_avg_c
+
+void vpx_highbd_sad16x32x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad16x32x4d vpx_highbd_sad16x32x4d_c
+
+unsigned int vpx_highbd_sad16x8_c(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride);
+#define vpx_highbd_sad16x8 vpx_highbd_sad16x8_c
+
+unsigned int vpx_highbd_sad16x8_avg_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      const uint8_t* second_pred);
+#define vpx_highbd_sad16x8_avg vpx_highbd_sad16x8_avg_c
+
+void vpx_highbd_sad16x8x4d_c(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* const ref_array[],
+                             int ref_stride,
+                             uint32_t* sad_array);
+#define vpx_highbd_sad16x8x4d vpx_highbd_sad16x8x4d_c
+
+unsigned int vpx_highbd_sad32x16_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad32x16 vpx_highbd_sad32x16_c
+
+unsigned int vpx_highbd_sad32x16_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad32x16_avg vpx_highbd_sad32x16_avg_c
+
+void vpx_highbd_sad32x16x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad32x16x4d vpx_highbd_sad32x16x4d_c
+
+unsigned int vpx_highbd_sad32x32_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad32x32 vpx_highbd_sad32x32_c
+
+unsigned int vpx_highbd_sad32x32_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad32x32_avg vpx_highbd_sad32x32_avg_c
+
+void vpx_highbd_sad32x32x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad32x32x4d vpx_highbd_sad32x32x4d_c
+
+unsigned int vpx_highbd_sad32x64_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad32x64 vpx_highbd_sad32x64_c
+
+unsigned int vpx_highbd_sad32x64_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad32x64_avg vpx_highbd_sad32x64_avg_c
+
+void vpx_highbd_sad32x64x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad32x64x4d vpx_highbd_sad32x64x4d_c
+
+unsigned int vpx_highbd_sad4x4_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride);
+#define vpx_highbd_sad4x4 vpx_highbd_sad4x4_c
+
+unsigned int vpx_highbd_sad4x4_avg_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     const uint8_t* second_pred);
+#define vpx_highbd_sad4x4_avg vpx_highbd_sad4x4_avg_c
+
+void vpx_highbd_sad4x4x4d_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* const ref_array[],
+                            int ref_stride,
+                            uint32_t* sad_array);
+#define vpx_highbd_sad4x4x4d vpx_highbd_sad4x4x4d_c
+
+unsigned int vpx_highbd_sad4x8_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride);
+#define vpx_highbd_sad4x8 vpx_highbd_sad4x8_c
+
+unsigned int vpx_highbd_sad4x8_avg_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     const uint8_t* second_pred);
+#define vpx_highbd_sad4x8_avg vpx_highbd_sad4x8_avg_c
+
+void vpx_highbd_sad4x8x4d_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* const ref_array[],
+                            int ref_stride,
+                            uint32_t* sad_array);
+#define vpx_highbd_sad4x8x4d vpx_highbd_sad4x8x4d_c
+
+unsigned int vpx_highbd_sad64x32_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad64x32 vpx_highbd_sad64x32_c
+
+unsigned int vpx_highbd_sad64x32_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad64x32_avg vpx_highbd_sad64x32_avg_c
+
+void vpx_highbd_sad64x32x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad64x32x4d vpx_highbd_sad64x32x4d_c
+
+unsigned int vpx_highbd_sad64x64_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad64x64 vpx_highbd_sad64x64_c
+
+unsigned int vpx_highbd_sad64x64_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad64x64_avg vpx_highbd_sad64x64_avg_c
+
+void vpx_highbd_sad64x64x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad64x64x4d vpx_highbd_sad64x64x4d_c
+
+unsigned int vpx_highbd_sad8x16_c(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride);
+#define vpx_highbd_sad8x16 vpx_highbd_sad8x16_c
+
+unsigned int vpx_highbd_sad8x16_avg_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      const uint8_t* second_pred);
+#define vpx_highbd_sad8x16_avg vpx_highbd_sad8x16_avg_c
+
+void vpx_highbd_sad8x16x4d_c(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* const ref_array[],
+                             int ref_stride,
+                             uint32_t* sad_array);
+#define vpx_highbd_sad8x16x4d vpx_highbd_sad8x16x4d_c
+
+unsigned int vpx_highbd_sad8x4_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride);
+#define vpx_highbd_sad8x4 vpx_highbd_sad8x4_c
+
+unsigned int vpx_highbd_sad8x4_avg_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     const uint8_t* second_pred);
+#define vpx_highbd_sad8x4_avg vpx_highbd_sad8x4_avg_c
+
+void vpx_highbd_sad8x4x4d_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* const ref_array[],
+                            int ref_stride,
+                            uint32_t* sad_array);
+#define vpx_highbd_sad8x4x4d vpx_highbd_sad8x4x4d_c
+
+unsigned int vpx_highbd_sad8x8_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride);
+#define vpx_highbd_sad8x8 vpx_highbd_sad8x8_c
+
+unsigned int vpx_highbd_sad8x8_avg_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     const uint8_t* second_pred);
+#define vpx_highbd_sad8x8_avg vpx_highbd_sad8x8_avg_c
+
+void vpx_highbd_sad8x8x4d_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* const ref_array[],
+                            int ref_stride,
+                            uint32_t* sad_array);
+#define vpx_highbd_sad8x8x4d vpx_highbd_sad8x8x4d_c
+
+int vpx_highbd_satd_c(const tran_low_t* coeff, int length);
+#define vpx_highbd_satd vpx_highbd_satd_c
+
+void vpx_highbd_subtract_block_c(int rows,
+                                 int cols,
+                                 int16_t* diff_ptr,
+                                 ptrdiff_t diff_stride,
+                                 const uint8_t* src8_ptr,
+                                 ptrdiff_t src_stride,
+                                 const uint8_t* pred8_ptr,
+                                 ptrdiff_t pred_stride,
+                                 int bd);
+#define vpx_highbd_subtract_block vpx_highbd_subtract_block_c
+
+void vpx_highbd_tm_predictor_16x16_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_tm_predictor_16x16_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_tm_predictor_16x16 vpx_highbd_tm_predictor_16x16_neon
+
+void vpx_highbd_tm_predictor_32x32_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_tm_predictor_32x32_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_tm_predictor_32x32 vpx_highbd_tm_predictor_32x32_neon
+
+void vpx_highbd_tm_predictor_4x4_c(uint16_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint16_t* above,
+                                   const uint16_t* left,
+                                   int bd);
+void vpx_highbd_tm_predictor_4x4_neon(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_tm_predictor_4x4 vpx_highbd_tm_predictor_4x4_neon
+
+void vpx_highbd_tm_predictor_8x8_c(uint16_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint16_t* above,
+                                   const uint16_t* left,
+                                   int bd);
+void vpx_highbd_tm_predictor_8x8_neon(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_tm_predictor_8x8 vpx_highbd_tm_predictor_8x8_neon
+
+void vpx_highbd_v_predictor_16x16_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_v_predictor_16x16_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_v_predictor_16x16 vpx_highbd_v_predictor_16x16_neon
+
+void vpx_highbd_v_predictor_32x32_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_v_predictor_32x32_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_v_predictor_32x32 vpx_highbd_v_predictor_32x32_neon
+
+void vpx_highbd_v_predictor_4x4_c(uint16_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint16_t* above,
+                                  const uint16_t* left,
+                                  int bd);
+void vpx_highbd_v_predictor_4x4_neon(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_v_predictor_4x4 vpx_highbd_v_predictor_4x4_neon
+
+void vpx_highbd_v_predictor_8x8_c(uint16_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint16_t* above,
+                                  const uint16_t* left,
+                                  int bd);
+void vpx_highbd_v_predictor_8x8_neon(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_v_predictor_8x8 vpx_highbd_v_predictor_8x8_neon
+
+void vpx_idct16x16_10_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct16x16_10_add_neon(const tran_low_t* input,
+                               uint8_t* dest,
+                               int stride);
+#define vpx_idct16x16_10_add vpx_idct16x16_10_add_neon
+
+void vpx_idct16x16_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct16x16_1_add_neon(const tran_low_t* input,
+                              uint8_t* dest,
+                              int stride);
+#define vpx_idct16x16_1_add vpx_idct16x16_1_add_neon
+
+void vpx_idct16x16_256_add_c(const tran_low_t* input,
+                             uint8_t* dest,
+                             int stride);
+void vpx_idct16x16_256_add_neon(const tran_low_t* input,
+                                uint8_t* dest,
+                                int stride);
+#define vpx_idct16x16_256_add vpx_idct16x16_256_add_neon
+
+void vpx_idct16x16_38_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct16x16_38_add_neon(const tran_low_t* input,
+                               uint8_t* dest,
+                               int stride);
+#define vpx_idct16x16_38_add vpx_idct16x16_38_add_neon
+
+void vpx_idct32x32_1024_add_c(const tran_low_t* input,
+                              uint8_t* dest,
+                              int stride);
+void vpx_idct32x32_1024_add_neon(const tran_low_t* input,
+                                 uint8_t* dest,
+                                 int stride);
+#define vpx_idct32x32_1024_add vpx_idct32x32_1024_add_neon
+
+void vpx_idct32x32_135_add_c(const tran_low_t* input,
+                             uint8_t* dest,
+                             int stride);
+void vpx_idct32x32_135_add_neon(const tran_low_t* input,
+                                uint8_t* dest,
+                                int stride);
+#define vpx_idct32x32_135_add vpx_idct32x32_135_add_neon
+
+void vpx_idct32x32_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct32x32_1_add_neon(const tran_low_t* input,
+                              uint8_t* dest,
+                              int stride);
+#define vpx_idct32x32_1_add vpx_idct32x32_1_add_neon
+
+void vpx_idct32x32_34_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct32x32_34_add_neon(const tran_low_t* input,
+                               uint8_t* dest,
+                               int stride);
+#define vpx_idct32x32_34_add vpx_idct32x32_34_add_neon
+
+void vpx_idct4x4_16_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct4x4_16_add_neon(const tran_low_t* input,
+                             uint8_t* dest,
+                             int stride);
+#define vpx_idct4x4_16_add vpx_idct4x4_16_add_neon
+
+void vpx_idct4x4_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct4x4_1_add_neon(const tran_low_t* input, uint8_t* dest, int stride);
+#define vpx_idct4x4_1_add vpx_idct4x4_1_add_neon
+
+void vpx_idct8x8_12_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct8x8_12_add_neon(const tran_low_t* input,
+                             uint8_t* dest,
+                             int stride);
+#define vpx_idct8x8_12_add vpx_idct8x8_12_add_neon
+
+void vpx_idct8x8_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct8x8_1_add_neon(const tran_low_t* input, uint8_t* dest, int stride);
+#define vpx_idct8x8_1_add vpx_idct8x8_1_add_neon
+
+void vpx_idct8x8_64_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct8x8_64_add_neon(const tran_low_t* input,
+                             uint8_t* dest,
+                             int stride);
+#define vpx_idct8x8_64_add vpx_idct8x8_64_add_neon
+
+int16_t vpx_int_pro_col_c(const uint8_t* ref, const int width);
+int16_t vpx_int_pro_col_neon(const uint8_t* ref, const int width);
+#define vpx_int_pro_col vpx_int_pro_col_neon
+
+void vpx_int_pro_row_c(int16_t* hbuf,
+                       const uint8_t* ref,
+                       const int ref_stride,
+                       const int height);
+void vpx_int_pro_row_neon(int16_t* hbuf,
+                          const uint8_t* ref,
+                          const int ref_stride,
+                          const int height);
+#define vpx_int_pro_row vpx_int_pro_row_neon
+
+void vpx_iwht4x4_16_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+#define vpx_iwht4x4_16_add vpx_iwht4x4_16_add_c
+
+void vpx_iwht4x4_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+#define vpx_iwht4x4_1_add vpx_iwht4x4_1_add_c
+
+void vpx_lpf_horizontal_16_c(uint8_t* s,
+                             int pitch,
+                             const uint8_t* blimit,
+                             const uint8_t* limit,
+                             const uint8_t* thresh);
+void vpx_lpf_horizontal_16_neon(uint8_t* s,
+                                int pitch,
+                                const uint8_t* blimit,
+                                const uint8_t* limit,
+                                const uint8_t* thresh);
+#define vpx_lpf_horizontal_16 vpx_lpf_horizontal_16_neon
+
+void vpx_lpf_horizontal_16_dual_c(uint8_t* s,
+                                  int pitch,
+                                  const uint8_t* blimit,
+                                  const uint8_t* limit,
+                                  const uint8_t* thresh);
+void vpx_lpf_horizontal_16_dual_neon(uint8_t* s,
+                                     int pitch,
+                                     const uint8_t* blimit,
+                                     const uint8_t* limit,
+                                     const uint8_t* thresh);
+#define vpx_lpf_horizontal_16_dual vpx_lpf_horizontal_16_dual_neon
+
+void vpx_lpf_horizontal_4_c(uint8_t* s,
+                            int pitch,
+                            const uint8_t* blimit,
+                            const uint8_t* limit,
+                            const uint8_t* thresh);
+void vpx_lpf_horizontal_4_neon(uint8_t* s,
+                               int pitch,
+                               const uint8_t* blimit,
+                               const uint8_t* limit,
+                               const uint8_t* thresh);
+#define vpx_lpf_horizontal_4 vpx_lpf_horizontal_4_neon
+
+void vpx_lpf_horizontal_4_dual_c(uint8_t* s,
+                                 int pitch,
+                                 const uint8_t* blimit0,
+                                 const uint8_t* limit0,
+                                 const uint8_t* thresh0,
+                                 const uint8_t* blimit1,
+                                 const uint8_t* limit1,
+                                 const uint8_t* thresh1);
+void vpx_lpf_horizontal_4_dual_neon(uint8_t* s,
+                                    int pitch,
+                                    const uint8_t* blimit0,
+                                    const uint8_t* limit0,
+                                    const uint8_t* thresh0,
+                                    const uint8_t* blimit1,
+                                    const uint8_t* limit1,
+                                    const uint8_t* thresh1);
+#define vpx_lpf_horizontal_4_dual vpx_lpf_horizontal_4_dual_neon
+
+void vpx_lpf_horizontal_8_c(uint8_t* s,
+                            int pitch,
+                            const uint8_t* blimit,
+                            const uint8_t* limit,
+                            const uint8_t* thresh);
+void vpx_lpf_horizontal_8_neon(uint8_t* s,
+                               int pitch,
+                               const uint8_t* blimit,
+                               const uint8_t* limit,
+                               const uint8_t* thresh);
+#define vpx_lpf_horizontal_8 vpx_lpf_horizontal_8_neon
+
+void vpx_lpf_horizontal_8_dual_c(uint8_t* s,
+                                 int pitch,
+                                 const uint8_t* blimit0,
+                                 const uint8_t* limit0,
+                                 const uint8_t* thresh0,
+                                 const uint8_t* blimit1,
+                                 const uint8_t* limit1,
+                                 const uint8_t* thresh1);
+void vpx_lpf_horizontal_8_dual_neon(uint8_t* s,
+                                    int pitch,
+                                    const uint8_t* blimit0,
+                                    const uint8_t* limit0,
+                                    const uint8_t* thresh0,
+                                    const uint8_t* blimit1,
+                                    const uint8_t* limit1,
+                                    const uint8_t* thresh1);
+#define vpx_lpf_horizontal_8_dual vpx_lpf_horizontal_8_dual_neon
+
+void vpx_lpf_vertical_16_c(uint8_t* s,
+                           int pitch,
+                           const uint8_t* blimit,
+                           const uint8_t* limit,
+                           const uint8_t* thresh);
+void vpx_lpf_vertical_16_neon(uint8_t* s,
+                              int pitch,
+                              const uint8_t* blimit,
+                              const uint8_t* limit,
+                              const uint8_t* thresh);
+#define vpx_lpf_vertical_16 vpx_lpf_vertical_16_neon
+
+void vpx_lpf_vertical_16_dual_c(uint8_t* s,
+                                int pitch,
+                                const uint8_t* blimit,
+                                const uint8_t* limit,
+                                const uint8_t* thresh);
+void vpx_lpf_vertical_16_dual_neon(uint8_t* s,
+                                   int pitch,
+                                   const uint8_t* blimit,
+                                   const uint8_t* limit,
+                                   const uint8_t* thresh);
+#define vpx_lpf_vertical_16_dual vpx_lpf_vertical_16_dual_neon
+
+void vpx_lpf_vertical_4_c(uint8_t* s,
+                          int pitch,
+                          const uint8_t* blimit,
+                          const uint8_t* limit,
+                          const uint8_t* thresh);
+void vpx_lpf_vertical_4_neon(uint8_t* s,
+                             int pitch,
+                             const uint8_t* blimit,
+                             const uint8_t* limit,
+                             const uint8_t* thresh);
+#define vpx_lpf_vertical_4 vpx_lpf_vertical_4_neon
+
+void vpx_lpf_vertical_4_dual_c(uint8_t* s,
+                               int pitch,
+                               const uint8_t* blimit0,
+                               const uint8_t* limit0,
+                               const uint8_t* thresh0,
+                               const uint8_t* blimit1,
+                               const uint8_t* limit1,
+                               const uint8_t* thresh1);
+void vpx_lpf_vertical_4_dual_neon(uint8_t* s,
+                                  int pitch,
+                                  const uint8_t* blimit0,
+                                  const uint8_t* limit0,
+                                  const uint8_t* thresh0,
+                                  const uint8_t* blimit1,
+                                  const uint8_t* limit1,
+                                  const uint8_t* thresh1);
+#define vpx_lpf_vertical_4_dual vpx_lpf_vertical_4_dual_neon
+
+void vpx_lpf_vertical_8_c(uint8_t* s,
+                          int pitch,
+                          const uint8_t* blimit,
+                          const uint8_t* limit,
+                          const uint8_t* thresh);
+void vpx_lpf_vertical_8_neon(uint8_t* s,
+                             int pitch,
+                             const uint8_t* blimit,
+                             const uint8_t* limit,
+                             const uint8_t* thresh);
+#define vpx_lpf_vertical_8 vpx_lpf_vertical_8_neon
+
+void vpx_lpf_vertical_8_dual_c(uint8_t* s,
+                               int pitch,
+                               const uint8_t* blimit0,
+                               const uint8_t* limit0,
+                               const uint8_t* thresh0,
+                               const uint8_t* blimit1,
+                               const uint8_t* limit1,
+                               const uint8_t* thresh1);
+void vpx_lpf_vertical_8_dual_neon(uint8_t* s,
+                                  int pitch,
+                                  const uint8_t* blimit0,
+                                  const uint8_t* limit0,
+                                  const uint8_t* thresh0,
+                                  const uint8_t* blimit1,
+                                  const uint8_t* limit1,
+                                  const uint8_t* thresh1);
+#define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_neon
+
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
+                                 int pitch,
+                                 int rows,
+                                 int cols,
+                                 int flimit);
+void vpx_mbpost_proc_across_ip_neon(unsigned char* src,
+                                    int pitch,
+                                    int rows,
+                                    int cols,
+                                    int flimit);
+#define vpx_mbpost_proc_across_ip vpx_mbpost_proc_across_ip_neon
+
+void vpx_mbpost_proc_down_c(unsigned char* dst,
+                            int pitch,
+                            int rows,
+                            int cols,
+                            int flimit);
+void vpx_mbpost_proc_down_neon(unsigned char* dst,
+                               int pitch,
+                               int rows,
+                               int cols,
+                               int flimit);
+#define vpx_mbpost_proc_down vpx_mbpost_proc_down_neon
+
+void vpx_minmax_8x8_c(const uint8_t* s,
+                      int p,
+                      const uint8_t* d,
+                      int dp,
+                      int* min,
+                      int* max);
+void vpx_minmax_8x8_neon(const uint8_t* s,
+                         int p,
+                         const uint8_t* d,
+                         int dp,
+                         int* min,
+                         int* max);
+#define vpx_minmax_8x8 vpx_minmax_8x8_neon
+
+unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride,
+                            unsigned int* sse);
+unsigned int vpx_mse16x16_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse);
+#define vpx_mse16x16 vpx_mse16x16_neon
+
+unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
+                           int src_stride,
+                           const uint8_t* ref_ptr,
+                           int ref_stride,
+                           unsigned int* sse);
+#define vpx_mse16x8 vpx_mse16x8_c
+
+unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
+                           int src_stride,
+                           const uint8_t* ref_ptr,
+                           int ref_stride,
+                           unsigned int* sse);
+#define vpx_mse8x16 vpx_mse8x16_c
+
+unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride,
+                          unsigned int* sse);
+#define vpx_mse8x8 vpx_mse8x8_c
+
+void vpx_plane_add_noise_c(uint8_t* start,
+                           const int8_t* noise,
+                           int blackclamp,
+                           int whiteclamp,
+                           int width,
+                           int height,
+                           int pitch);
+#define vpx_plane_add_noise vpx_plane_add_noise_c
+
+void vpx_post_proc_down_and_across_mb_row_c(unsigned char* src,
+                                            unsigned char* dst,
+                                            int src_pitch,
+                                            int dst_pitch,
+                                            int cols,
+                                            unsigned char* flimits,
+                                            int size);
+void vpx_post_proc_down_and_across_mb_row_neon(unsigned char* src,
+                                               unsigned char* dst,
+                                               int src_pitch,
+                                               int dst_pitch,
+                                               int cols,
+                                               unsigned char* flimits,
+                                               int size);
+#define vpx_post_proc_down_and_across_mb_row \
+  vpx_post_proc_down_and_across_mb_row_neon
+
+void vpx_quantize_b_c(const tran_low_t* coeff_ptr,
+                      intptr_t n_coeffs,
+                      int skip_block,
+                      const int16_t* zbin_ptr,
+                      const int16_t* round_ptr,
+                      const int16_t* quant_ptr,
+                      const int16_t* quant_shift_ptr,
+                      tran_low_t* qcoeff_ptr,
+                      tran_low_t* dqcoeff_ptr,
+                      const int16_t* dequant_ptr,
+                      uint16_t* eob_ptr,
+                      const int16_t* scan,
+                      const int16_t* iscan);
+void vpx_quantize_b_neon(const tran_low_t* coeff_ptr,
+                         intptr_t n_coeffs,
+                         int skip_block,
+                         const int16_t* zbin_ptr,
+                         const int16_t* round_ptr,
+                         const int16_t* quant_ptr,
+                         const int16_t* quant_shift_ptr,
+                         tran_low_t* qcoeff_ptr,
+                         tran_low_t* dqcoeff_ptr,
+                         const int16_t* dequant_ptr,
+                         uint16_t* eob_ptr,
+                         const int16_t* scan,
+                         const int16_t* iscan);
+#define vpx_quantize_b vpx_quantize_b_neon
+
+void vpx_quantize_b_32x32_c(const tran_low_t* coeff_ptr,
+                            intptr_t n_coeffs,
+                            int skip_block,
+                            const int16_t* zbin_ptr,
+                            const int16_t* round_ptr,
+                            const int16_t* quant_ptr,
+                            const int16_t* quant_shift_ptr,
+                            tran_low_t* qcoeff_ptr,
+                            tran_low_t* dqcoeff_ptr,
+                            const int16_t* dequant_ptr,
+                            uint16_t* eob_ptr,
+                            const int16_t* scan,
+                            const int16_t* iscan);
+void vpx_quantize_b_32x32_neon(const tran_low_t* coeff_ptr,
+                               intptr_t n_coeffs,
+                               int skip_block,
+                               const int16_t* zbin_ptr,
+                               const int16_t* round_ptr,
+                               const int16_t* quant_ptr,
+                               const int16_t* quant_shift_ptr,
+                               tran_low_t* qcoeff_ptr,
+                               tran_low_t* dqcoeff_ptr,
+                               const int16_t* dequant_ptr,
+                               uint16_t* eob_ptr,
+                               const int16_t* scan,
+                               const int16_t* iscan);
+#define vpx_quantize_b_32x32 vpx_quantize_b_32x32_neon
+
+unsigned int vpx_sad16x16_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad16x16_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad16x16 vpx_sad16x16_neon
+
+unsigned int vpx_sad16x16_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad16x16_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad16x16_avg vpx_sad16x16_avg_neon
+
+void vpx_sad16x16x3_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad16x16x3 vpx_sad16x16x3_c
+
+void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad16x16x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad16x16x4d vpx_sad16x16x4d_neon
+
+void vpx_sad16x16x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad16x16x8 vpx_sad16x16x8_c
+
+unsigned int vpx_sad16x32_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad16x32_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad16x32 vpx_sad16x32_neon
+
+unsigned int vpx_sad16x32_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad16x32_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad16x32_avg vpx_sad16x32_avg_neon
+
+void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad16x32x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad16x32x4d vpx_sad16x32x4d_neon
+
+unsigned int vpx_sad16x8_c(const uint8_t* src_ptr,
+                           int src_stride,
+                           const uint8_t* ref_ptr,
+                           int ref_stride);
+unsigned int vpx_sad16x8_neon(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride);
+#define vpx_sad16x8 vpx_sad16x8_neon
+
+unsigned int vpx_sad16x8_avg_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               const uint8_t* second_pred);
+unsigned int vpx_sad16x8_avg_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  const uint8_t* second_pred);
+#define vpx_sad16x8_avg vpx_sad16x8_avg_neon
+
+void vpx_sad16x8x3_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* ref_ptr,
+                     int ref_stride,
+                     uint32_t* sad_array);
+#define vpx_sad16x8x3 vpx_sad16x8x3_c
+
+void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* const ref_array[],
+                      int ref_stride,
+                      uint32_t* sad_array);
+void vpx_sad16x8x4d_neon(const uint8_t* src_ptr,
+                         int src_stride,
+                         const uint8_t* const ref_array[],
+                         int ref_stride,
+                         uint32_t* sad_array);
+#define vpx_sad16x8x4d vpx_sad16x8x4d_neon
+
+void vpx_sad16x8x8_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* ref_ptr,
+                     int ref_stride,
+                     uint32_t* sad_array);
+#define vpx_sad16x8x8 vpx_sad16x8x8_c
+
+unsigned int vpx_sad32x16_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad32x16_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad32x16 vpx_sad32x16_neon
+
+unsigned int vpx_sad32x16_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad32x16_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad32x16_avg vpx_sad32x16_avg_neon
+
+void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad32x16x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad32x16x4d vpx_sad32x16x4d_neon
+
+unsigned int vpx_sad32x32_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad32x32_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad32x32 vpx_sad32x32_neon
+
+unsigned int vpx_sad32x32_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad32x32_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad32x32_avg vpx_sad32x32_avg_neon
+
+void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad32x32x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad32x32x4d vpx_sad32x32x4d_neon
+
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad32x32x8 vpx_sad32x32x8_c
+
+unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad32x64_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad32x64 vpx_sad32x64_neon
+
+unsigned int vpx_sad32x64_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad32x64_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad32x64_avg vpx_sad32x64_avg_neon
+
+void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad32x64x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad32x64x4d vpx_sad32x64x4d_neon
+
+unsigned int vpx_sad4x4_c(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride);
+unsigned int vpx_sad4x4_neon(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* ref_ptr,
+                             int ref_stride);
+#define vpx_sad4x4 vpx_sad4x4_neon
+
+unsigned int vpx_sad4x4_avg_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride,
+                              const uint8_t* second_pred);
+unsigned int vpx_sad4x4_avg_neon(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 const uint8_t* second_pred);
+#define vpx_sad4x4_avg vpx_sad4x4_avg_neon
+
+void vpx_sad4x4x3_c(const uint8_t* src_ptr,
+                    int src_stride,
+                    const uint8_t* ref_ptr,
+                    int ref_stride,
+                    uint32_t* sad_array);
+#define vpx_sad4x4x3 vpx_sad4x4x3_c
+
+void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* const ref_array[],
+                     int ref_stride,
+                     uint32_t* sad_array);
+void vpx_sad4x4x4d_neon(const uint8_t* src_ptr,
+                        int src_stride,
+                        const uint8_t* const ref_array[],
+                        int ref_stride,
+                        uint32_t* sad_array);
+#define vpx_sad4x4x4d vpx_sad4x4x4d_neon
+
+void vpx_sad4x4x8_c(const uint8_t* src_ptr,
+                    int src_stride,
+                    const uint8_t* ref_ptr,
+                    int ref_stride,
+                    uint32_t* sad_array);
+#define vpx_sad4x4x8 vpx_sad4x4x8_c
+
+unsigned int vpx_sad4x8_c(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride);
+unsigned int vpx_sad4x8_neon(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* ref_ptr,
+                             int ref_stride);
+#define vpx_sad4x8 vpx_sad4x8_neon
+
+unsigned int vpx_sad4x8_avg_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride,
+                              const uint8_t* second_pred);
+unsigned int vpx_sad4x8_avg_neon(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 const uint8_t* second_pred);
+#define vpx_sad4x8_avg vpx_sad4x8_avg_neon
+
+void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* const ref_array[],
+                     int ref_stride,
+                     uint32_t* sad_array);
+void vpx_sad4x8x4d_neon(const uint8_t* src_ptr,
+                        int src_stride,
+                        const uint8_t* const ref_array[],
+                        int ref_stride,
+                        uint32_t* sad_array);
+#define vpx_sad4x8x4d vpx_sad4x8x4d_neon
+
+unsigned int vpx_sad64x32_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad64x32_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad64x32 vpx_sad64x32_neon
+
+unsigned int vpx_sad64x32_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad64x32_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad64x32_avg vpx_sad64x32_avg_neon
+
+void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad64x32x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad64x32x4d vpx_sad64x32x4d_neon
+
+unsigned int vpx_sad64x64_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad64x64_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad64x64 vpx_sad64x64_neon
+
+unsigned int vpx_sad64x64_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad64x64_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad64x64_avg vpx_sad64x64_avg_neon
+
+void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad64x64x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad64x64x4d vpx_sad64x64x4d_neon
+
+unsigned int vpx_sad8x16_c(const uint8_t* src_ptr,
+                           int src_stride,
+                           const uint8_t* ref_ptr,
+                           int ref_stride);
+unsigned int vpx_sad8x16_neon(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride);
+#define vpx_sad8x16 vpx_sad8x16_neon
+
+unsigned int vpx_sad8x16_avg_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               const uint8_t* second_pred);
+unsigned int vpx_sad8x16_avg_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  const uint8_t* second_pred);
+#define vpx_sad8x16_avg vpx_sad8x16_avg_neon
+
+void vpx_sad8x16x3_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* ref_ptr,
+                     int ref_stride,
+                     uint32_t* sad_array);
+#define vpx_sad8x16x3 vpx_sad8x16x3_c
+
+void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* const ref_array[],
+                      int ref_stride,
+                      uint32_t* sad_array);
+void vpx_sad8x16x4d_neon(const uint8_t* src_ptr,
+                         int src_stride,
+                         const uint8_t* const ref_array[],
+                         int ref_stride,
+                         uint32_t* sad_array);
+#define vpx_sad8x16x4d vpx_sad8x16x4d_neon
+
+void vpx_sad8x16x8_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* ref_ptr,
+                     int ref_stride,
+                     uint32_t* sad_array);
+#define vpx_sad8x16x8 vpx_sad8x16x8_c
+
+unsigned int vpx_sad8x4_c(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride);
+unsigned int vpx_sad8x4_neon(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* ref_ptr,
+                             int ref_stride);
+#define vpx_sad8x4 vpx_sad8x4_neon
+
+unsigned int vpx_sad8x4_avg_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride,
+                              const uint8_t* second_pred);
+unsigned int vpx_sad8x4_avg_neon(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 const uint8_t* second_pred);
+#define vpx_sad8x4_avg vpx_sad8x4_avg_neon
+
+void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* const ref_array[],
+                     int ref_stride,
+                     uint32_t* sad_array);
+void vpx_sad8x4x4d_neon(const uint8_t* src_ptr,
+                        int src_stride,
+                        const uint8_t* const ref_array[],
+                        int ref_stride,
+                        uint32_t* sad_array);
+#define vpx_sad8x4x4d vpx_sad8x4x4d_neon
+
+unsigned int vpx_sad8x8_c(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride);
+unsigned int vpx_sad8x8_neon(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* ref_ptr,
+                             int ref_stride);
+#define vpx_sad8x8 vpx_sad8x8_neon
+
+unsigned int vpx_sad8x8_avg_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride,
+                              const uint8_t* second_pred);
+unsigned int vpx_sad8x8_avg_neon(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 const uint8_t* second_pred);
+#define vpx_sad8x8_avg vpx_sad8x8_avg_neon
+
+void vpx_sad8x8x3_c(const uint8_t* src_ptr,
+                    int src_stride,
+                    const uint8_t* ref_ptr,
+                    int ref_stride,
+                    uint32_t* sad_array);
+#define vpx_sad8x8x3 vpx_sad8x8x3_c
+
+void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* const ref_array[],
+                     int ref_stride,
+                     uint32_t* sad_array);
+void vpx_sad8x8x4d_neon(const uint8_t* src_ptr,
+                        int src_stride,
+                        const uint8_t* const ref_array[],
+                        int ref_stride,
+                        uint32_t* sad_array);
+#define vpx_sad8x8x4d vpx_sad8x8x4d_neon
+
+void vpx_sad8x8x8_c(const uint8_t* src_ptr,
+                    int src_stride,
+                    const uint8_t* ref_ptr,
+                    int ref_stride,
+                    uint32_t* sad_array);
+#define vpx_sad8x8x8 vpx_sad8x8x8_c
+
+int vpx_satd_c(const tran_low_t* coeff, int length);
+int vpx_satd_neon(const tran_low_t* coeff, int length);
+#define vpx_satd vpx_satd_neon
+
+void vpx_scaled_2d_c(const uint8_t* src,
+                     ptrdiff_t src_stride,
+                     uint8_t* dst,
+                     ptrdiff_t dst_stride,
+                     const InterpKernel* filter,
+                     int x0_q4,
+                     int x_step_q4,
+                     int y0_q4,
+                     int y_step_q4,
+                     int w,
+                     int h);
+void vpx_scaled_2d_neon(const uint8_t* src,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        ptrdiff_t dst_stride,
+                        const InterpKernel* filter,
+                        int x0_q4,
+                        int x_step_q4,
+                        int y0_q4,
+                        int y_step_q4,
+                        int w,
+                        int h);
+#define vpx_scaled_2d vpx_scaled_2d_neon
+
+void vpx_scaled_avg_2d_c(const uint8_t* src,
+                         ptrdiff_t src_stride,
+                         uint8_t* dst,
+                         ptrdiff_t dst_stride,
+                         const InterpKernel* filter,
+                         int x0_q4,
+                         int x_step_q4,
+                         int y0_q4,
+                         int y_step_q4,
+                         int w,
+                         int h);
+#define vpx_scaled_avg_2d vpx_scaled_avg_2d_c
+
+void vpx_scaled_avg_horiz_c(const uint8_t* src,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst,
+                            ptrdiff_t dst_stride,
+                            const InterpKernel* filter,
+                            int x0_q4,
+                            int x_step_q4,
+                            int y0_q4,
+                            int y_step_q4,
+                            int w,
+                            int h);
+#define vpx_scaled_avg_horiz vpx_scaled_avg_horiz_c
+
+void vpx_scaled_avg_vert_c(const uint8_t* src,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst,
+                           ptrdiff_t dst_stride,
+                           const InterpKernel* filter,
+                           int x0_q4,
+                           int x_step_q4,
+                           int y0_q4,
+                           int y_step_q4,
+                           int w,
+                           int h);
+#define vpx_scaled_avg_vert vpx_scaled_avg_vert_c
+
+void vpx_scaled_horiz_c(const uint8_t* src,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        ptrdiff_t dst_stride,
+                        const InterpKernel* filter,
+                        int x0_q4,
+                        int x_step_q4,
+                        int y0_q4,
+                        int y_step_q4,
+                        int w,
+                        int h);
+#define vpx_scaled_horiz vpx_scaled_horiz_c
+
+void vpx_scaled_vert_c(const uint8_t* src,
+                       ptrdiff_t src_stride,
+                       uint8_t* dst,
+                       ptrdiff_t dst_stride,
+                       const InterpKernel* filter,
+                       int x0_q4,
+                       int x_step_q4,
+                       int y0_q4,
+                       int y_step_q4,
+                       int w,
+                       int h);
+#define vpx_scaled_vert vpx_scaled_vert_c
+
+uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance16x16_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance16x16 vpx_sub_pixel_avg_variance16x16_neon
+
+uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance16x32_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance16x32 vpx_sub_pixel_avg_variance16x32_neon
+
+uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse,
+                                          const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance16x8_neon(const uint8_t* src_ptr,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
+                                             const uint8_t* ref_ptr,
+                                             int ref_stride,
+                                             uint32_t* sse,
+                                             const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance16x8 vpx_sub_pixel_avg_variance16x8_neon
+
+uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance32x16_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance32x16 vpx_sub_pixel_avg_variance32x16_neon
+
+uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance32x32_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance32x32 vpx_sub_pixel_avg_variance32x32_neon
+
+uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance32x64_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance32x64 vpx_sub_pixel_avg_variance32x64_neon
+
+uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse,
+                                         const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance4x4_neon(const uint8_t* src_ptr,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
+                                            const uint8_t* ref_ptr,
+                                            int ref_stride,
+                                            uint32_t* sse,
+                                            const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance4x4 vpx_sub_pixel_avg_variance4x4_neon
+
+uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse,
+                                         const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance4x8_neon(const uint8_t* src_ptr,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
+                                            const uint8_t* ref_ptr,
+                                            int ref_stride,
+                                            uint32_t* sse,
+                                            const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance4x8 vpx_sub_pixel_avg_variance4x8_neon
+
+uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance64x32_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance64x32 vpx_sub_pixel_avg_variance64x32_neon
+
+uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance64x64_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance64x64 vpx_sub_pixel_avg_variance64x64_neon
+
+uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse,
+                                          const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance8x16_neon(const uint8_t* src_ptr,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
+                                             const uint8_t* ref_ptr,
+                                             int ref_stride,
+                                             uint32_t* sse,
+                                             const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance8x16 vpx_sub_pixel_avg_variance8x16_neon
+
+uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse,
+                                         const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance8x4_neon(const uint8_t* src_ptr,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
+                                            const uint8_t* ref_ptr,
+                                            int ref_stride,
+                                            uint32_t* sse,
+                                            const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance8x4 vpx_sub_pixel_avg_variance8x4_neon
+
+uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse,
+                                         const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance8x8_neon(const uint8_t* src_ptr,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
+                                            const uint8_t* ref_ptr,
+                                            int ref_stride,
+                                            uint32_t* sse,
+                                            const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance8x8 vpx_sub_pixel_avg_variance8x8_neon
+
+uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance16x16_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance16x16 vpx_sub_pixel_variance16x16_neon
+
+uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance16x32_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance16x32 vpx_sub_pixel_variance16x32_neon
+
+uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      uint32_t* sse);
+uint32_t vpx_sub_pixel_variance16x8_neon(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse);
+#define vpx_sub_pixel_variance16x8 vpx_sub_pixel_variance16x8_neon
+
+uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance32x16_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance32x16 vpx_sub_pixel_variance32x16_neon
+
+uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance32x32_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance32x32 vpx_sub_pixel_variance32x32_neon
+
+uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance32x64_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance32x64 vpx_sub_pixel_variance32x64_neon
+
+uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     uint32_t* sse);
+uint32_t vpx_sub_pixel_variance4x4_neon(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        uint32_t* sse);
+#define vpx_sub_pixel_variance4x4 vpx_sub_pixel_variance4x4_neon
+
+uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     uint32_t* sse);
+uint32_t vpx_sub_pixel_variance4x8_neon(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        uint32_t* sse);
+#define vpx_sub_pixel_variance4x8 vpx_sub_pixel_variance4x8_neon
+
+uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance64x32_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance64x32 vpx_sub_pixel_variance64x32_neon
+
+uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance64x64_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance64x64 vpx_sub_pixel_variance64x64_neon
+
+uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      uint32_t* sse);
+uint32_t vpx_sub_pixel_variance8x16_neon(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse);
+#define vpx_sub_pixel_variance8x16 vpx_sub_pixel_variance8x16_neon
+
+uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     uint32_t* sse);
+uint32_t vpx_sub_pixel_variance8x4_neon(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        uint32_t* sse);
+#define vpx_sub_pixel_variance8x4 vpx_sub_pixel_variance8x4_neon
+
+uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     uint32_t* sse);
+uint32_t vpx_sub_pixel_variance8x8_neon(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        uint32_t* sse);
+#define vpx_sub_pixel_variance8x8 vpx_sub_pixel_variance8x8_neon
+
+void vpx_subtract_block_c(int rows,
+                          int cols,
+                          int16_t* diff_ptr,
+                          ptrdiff_t diff_stride,
+                          const uint8_t* src_ptr,
+                          ptrdiff_t src_stride,
+                          const uint8_t* pred_ptr,
+                          ptrdiff_t pred_stride);
+void vpx_subtract_block_neon(int rows,
+                             int cols,
+                             int16_t* diff_ptr,
+                             ptrdiff_t diff_stride,
+                             const uint8_t* src_ptr,
+                             ptrdiff_t src_stride,
+                             const uint8_t* pred_ptr,
+                             ptrdiff_t pred_stride);
+#define vpx_subtract_block vpx_subtract_block_neon
+
+uint64_t vpx_sum_squares_2d_i16_c(const int16_t* src, int stride, int size);
+uint64_t vpx_sum_squares_2d_i16_neon(const int16_t* src, int stride, int size);
+#define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_neon
+
+void vpx_tm_predictor_16x16_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_tm_predictor_16x16_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_neon
+
+void vpx_tm_predictor_32x32_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_tm_predictor_32x32_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_neon
+
+void vpx_tm_predictor_4x4_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+void vpx_tm_predictor_4x4_neon(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_neon
+
+void vpx_tm_predictor_8x8_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+void vpx_tm_predictor_8x8_neon(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_neon
+
+void vpx_v_predictor_16x16_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_v_predictor_16x16_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_v_predictor_16x16 vpx_v_predictor_16x16_neon
+
+void vpx_v_predictor_32x32_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_v_predictor_32x32_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_v_predictor_32x32 vpx_v_predictor_32x32_neon
+
+void vpx_v_predictor_4x4_c(uint8_t* dst,
+                           ptrdiff_t stride,
+                           const uint8_t* above,
+                           const uint8_t* left);
+void vpx_v_predictor_4x4_neon(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_v_predictor_4x4 vpx_v_predictor_4x4_neon
+
+void vpx_v_predictor_8x8_c(uint8_t* dst,
+                           ptrdiff_t stride,
+                           const uint8_t* above,
+                           const uint8_t* left);
+void vpx_v_predictor_8x8_neon(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_v_predictor_8x8 vpx_v_predictor_8x8_neon
+
+unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance16x16_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance16x16 vpx_variance16x16_neon
+
+unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance16x32_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance16x32 vpx_variance16x32_neon
+
+unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                unsigned int* sse);
+unsigned int vpx_variance16x8_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   unsigned int* sse);
+#define vpx_variance16x8 vpx_variance16x8_neon
+
+unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance32x16_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance32x16 vpx_variance32x16_neon
+
+unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance32x32_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance32x32 vpx_variance32x32_neon
+
+unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance32x64_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance32x64 vpx_variance32x64_neon
+
+unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse);
+unsigned int vpx_variance4x4_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse);
+#define vpx_variance4x4 vpx_variance4x4_neon
+
+unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse);
+unsigned int vpx_variance4x8_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse);
+#define vpx_variance4x8 vpx_variance4x8_neon
+
+unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance64x32_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance64x32 vpx_variance64x32_neon
+
+unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance64x64_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance64x64 vpx_variance64x64_neon
+
+unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                unsigned int* sse);
+unsigned int vpx_variance8x16_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   unsigned int* sse);
+#define vpx_variance8x16 vpx_variance8x16_neon
+
+unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse);
+unsigned int vpx_variance8x4_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse);
+#define vpx_variance8x4 vpx_variance8x4_neon
+
+unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse);
+unsigned int vpx_variance8x8_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse);
+#define vpx_variance8x8 vpx_variance8x8_neon
+
+void vpx_ve_predictor_4x4_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+#define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
+
+int vpx_vector_var_c(const int16_t* ref, const int16_t* src, const int bwl);
+int vpx_vector_var_neon(const int16_t* ref, const int16_t* src, const int bwl);
+#define vpx_vector_var vpx_vector_var_neon
+
+void vpx_dsp_rtcd(void);
+
+#include "vpx_config.h"
+
+#ifdef RTCD_C
+#include "vpx_ports/arm.h"
+static void setup_rtcd_internal(void) {
+  int flags = arm_cpu_caps();
+
+  (void)flags;
+}
+#endif
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_scale_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_scale_rtcd.h
new file mode 100644
index 0000000..555013e
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon-highbd/vpx_scale_rtcd.h
@@ -0,0 +1,101 @@
+// This file is generated. Do not edit.
+#ifndef VPX_SCALE_RTCD_H_
+#define VPX_SCALE_RTCD_H_
+
+#ifdef RTCD_C
+#define RTCD_EXTERN
+#else
+#define RTCD_EXTERN extern
+#endif
+
+struct yv12_buffer_config;
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+void vp8_horizontal_line_2_1_scale_c(const unsigned char* source,
+                                     unsigned int source_width,
+                                     unsigned char* dest,
+                                     unsigned int dest_width);
+#define vp8_horizontal_line_2_1_scale vp8_horizontal_line_2_1_scale_c
+
+void vp8_horizontal_line_5_3_scale_c(const unsigned char* source,
+                                     unsigned int source_width,
+                                     unsigned char* dest,
+                                     unsigned int dest_width);
+#define vp8_horizontal_line_5_3_scale vp8_horizontal_line_5_3_scale_c
+
+void vp8_horizontal_line_5_4_scale_c(const unsigned char* source,
+                                     unsigned int source_width,
+                                     unsigned char* dest,
+                                     unsigned int dest_width);
+#define vp8_horizontal_line_5_4_scale vp8_horizontal_line_5_4_scale_c
+
+void vp8_vertical_band_2_1_scale_c(unsigned char* source,
+                                   unsigned int src_pitch,
+                                   unsigned char* dest,
+                                   unsigned int dest_pitch,
+                                   unsigned int dest_width);
+#define vp8_vertical_band_2_1_scale vp8_vertical_band_2_1_scale_c
+
+void vp8_vertical_band_2_1_scale_i_c(unsigned char* source,
+                                     unsigned int src_pitch,
+                                     unsigned char* dest,
+                                     unsigned int dest_pitch,
+                                     unsigned int dest_width);
+#define vp8_vertical_band_2_1_scale_i vp8_vertical_band_2_1_scale_i_c
+
+void vp8_vertical_band_5_3_scale_c(unsigned char* source,
+                                   unsigned int src_pitch,
+                                   unsigned char* dest,
+                                   unsigned int dest_pitch,
+                                   unsigned int dest_width);
+#define vp8_vertical_band_5_3_scale vp8_vertical_band_5_3_scale_c
+
+void vp8_vertical_band_5_4_scale_c(unsigned char* source,
+                                   unsigned int src_pitch,
+                                   unsigned char* dest,
+                                   unsigned int dest_pitch,
+                                   unsigned int dest_width);
+#define vp8_vertical_band_5_4_scale vp8_vertical_band_5_4_scale_c
+
+void vp8_yv12_copy_frame_c(const struct yv12_buffer_config* src_ybc,
+                           struct yv12_buffer_config* dst_ybc);
+#define vp8_yv12_copy_frame vp8_yv12_copy_frame_c
+
+void vp8_yv12_extend_frame_borders_c(struct yv12_buffer_config* ybf);
+#define vp8_yv12_extend_frame_borders vp8_yv12_extend_frame_borders_c
+
+void vpx_extend_frame_borders_c(struct yv12_buffer_config* ybf);
+#define vpx_extend_frame_borders vpx_extend_frame_borders_c
+
+void vpx_extend_frame_inner_borders_c(struct yv12_buffer_config* ybf);
+#define vpx_extend_frame_inner_borders vpx_extend_frame_inner_borders_c
+
+void vpx_yv12_copy_frame_c(const struct yv12_buffer_config* src_ybc,
+                           struct yv12_buffer_config* dst_ybc);
+#define vpx_yv12_copy_frame vpx_yv12_copy_frame_c
+
+void vpx_yv12_copy_y_c(const struct yv12_buffer_config* src_ybc,
+                       struct yv12_buffer_config* dst_ybc);
+#define vpx_yv12_copy_y vpx_yv12_copy_y_c
+
+void vpx_scale_rtcd(void);
+
+#include "vpx_config.h"
+
+#ifdef RTCD_C
+#include "vpx_ports/arm.h"
+static void setup_rtcd_internal(void) {
+  int flags = arm_cpu_caps();
+
+  (void)flags;
+}
+#endif
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vp8_rtcd.h
index 737afd5..c4bb41b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vp8_rtcd.h
@@ -27,68 +27,68 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-void vp8_bilinear_predict16x16_neon(unsigned char* src,
-                                    int src_pitch,
-                                    int xofst,
-                                    int yofst,
-                                    unsigned char* dst,
+void vp8_bilinear_predict16x16_neon(unsigned char* src_ptr,
+                                    int src_pixels_per_line,
+                                    int xoffset,
+                                    int yoffset,
+                                    unsigned char* dst_ptr,
                                     int dst_pitch);
 #define vp8_bilinear_predict16x16 vp8_bilinear_predict16x16_neon
 
-void vp8_bilinear_predict4x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict4x4_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict4x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_neon
 
-void vp8_bilinear_predict8x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict8x4_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict8x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_neon
 
-void vp8_bilinear_predict8x8_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict8x8_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict8x8_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_bilinear_predict8x8 vp8_bilinear_predict8x8_neon
 
 void vp8_blend_b_c(unsigned char* y,
                    unsigned char* u,
                    unsigned char* v,
-                   int y1,
-                   int u1,
-                   int v1,
+                   int y_1,
+                   int u_1,
+                   int v_1,
                    int alpha,
                    int stride);
 #define vp8_blend_b vp8_blend_b_c
@@ -96,9 +96,9 @@
 void vp8_blend_mb_inner_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
@@ -106,9 +106,9 @@
 void vp8_blend_mb_outer_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
@@ -116,45 +116,52 @@
 int vp8_block_error_c(short* coeff, short* dqcoeff);
 #define vp8_block_error vp8_block_error_c
 
+void vp8_copy32xn_c(const unsigned char* src_ptr,
+                    int src_stride,
+                    unsigned char* dst_ptr,
+                    int dst_stride,
+                    int height);
+#define vp8_copy32xn vp8_copy32xn_c
+
 void vp8_copy_mem16x16_c(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 void vp8_copy_mem16x16_neon(unsigned char* src,
-                            int src_pitch,
+                            int src_stride,
                             unsigned char* dst,
-                            int dst_pitch);
+                            int dst_stride);
 #define vp8_copy_mem16x16 vp8_copy_mem16x16_neon
 
 void vp8_copy_mem8x4_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 void vp8_copy_mem8x4_neon(unsigned char* src,
-                          int src_pitch,
+                          int src_stride,
                           unsigned char* dst,
-                          int dst_pitch);
+                          int dst_stride);
 #define vp8_copy_mem8x4 vp8_copy_mem8x4_neon
 
 void vp8_copy_mem8x8_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 void vp8_copy_mem8x8_neon(unsigned char* src,
-                          int src_pitch,
+                          int src_stride,
                           unsigned char* dst,
-                          int dst_pitch);
+                          int dst_stride);
 #define vp8_copy_mem8x8 vp8_copy_mem8x8_neon
 
-void vp8_dc_only_idct_add_c(short input,
-                            unsigned char* pred,
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
                             int pred_stride,
-                            unsigned char* dst,
+                            unsigned char* dst_ptr,
                             int dst_stride);
-void vp8_dc_only_idct_add_neon(short input,
-                               unsigned char* pred,
+void vp8_dc_only_idct_add_neon(short input_dc,
+                               unsigned char* pred_ptr,
                                int pred_stride,
-                               unsigned char* dst,
+                               unsigned char* dst_ptr,
                                int dst_stride);
 #define vp8_dc_only_idct_add vp8_dc_only_idct_add_neon
 
@@ -196,11 +203,11 @@
 
 void vp8_dequant_idct_add_c(short* input,
                             short* dq,
-                            unsigned char* output,
+                            unsigned char* dest,
                             int stride);
 void vp8_dequant_idct_add_neon(short* input,
                                short* dq,
-                               unsigned char* output,
+                               unsigned char* dest,
                                int stride);
 #define vp8_dequant_idct_add vp8_dequant_idct_add_neon
 
@@ -230,8 +237,8 @@
                                        char* eobs);
 #define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_neon
 
-void vp8_dequantize_b_c(struct blockd*, short* dqc);
-void vp8_dequantize_b_neon(struct blockd*, short* dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
+void vp8_dequantize_b_neon(struct blockd*, short* DQC);
 #define vp8_dequantize_b vp8_dequantize_b_neon
 
 int vp8_diamond_search_sad_c(struct macroblock* x,
@@ -283,91 +290,91 @@
                           union int_mv* center_mv);
 #define vp8_full_search_sad vp8_full_search_sad_c
 
-void vp8_loop_filter_bh_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bh_neon(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bh_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
 #define vp8_loop_filter_bh vp8_loop_filter_bh_neon
 
-void vp8_loop_filter_bv_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bv_neon(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bv_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
 #define vp8_loop_filter_bv vp8_loop_filter_bv_neon
 
-void vp8_loop_filter_mbh_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbh_neon(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbh_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbh vp8_loop_filter_mbh_neon
 
-void vp8_loop_filter_mbv_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbv_neon(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbv_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbv vp8_loop_filter_mbv_neon
 
-void vp8_loop_filter_bhs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bhs_neon(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bhs_neon(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_neon
 
-void vp8_loop_filter_bvs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bvs_neon(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bvs_neon(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_neon
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y,
-                                              int ystride,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
-void vp8_loop_filter_mbhs_neon(unsigned char* y,
-                               int ystride,
+void vp8_loop_filter_mbhs_neon(unsigned char* y_ptr,
+                               int y_stride,
                                const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbh vp8_loop_filter_mbhs_neon
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y,
-                                            int ystride,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
                                             const unsigned char* blimit);
-void vp8_loop_filter_mbvs_neon(unsigned char* y,
-                               int ystride,
+void vp8_loop_filter_mbvs_neon(unsigned char* y_ptr,
+                               int y_stride,
                                const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbv vp8_loop_filter_mbvs_neon
 
@@ -381,8 +388,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -400,81 +407,81 @@
 #define vp8_short_fdct8x4 vp8_short_fdct8x4_neon
 
 void vp8_short_idct4x4llm_c(short* input,
-                            unsigned char* pred,
-                            int pitch,
-                            unsigned char* dst,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 void vp8_short_idct4x4llm_neon(short* input,
-                               unsigned char* pred,
-                               int pitch,
-                               unsigned char* dst,
+                               unsigned char* pred_ptr,
+                               int pred_stride,
+                               unsigned char* dst_ptr,
                                int dst_stride);
 #define vp8_short_idct4x4llm vp8_short_idct4x4llm_neon
 
-void vp8_short_inv_walsh4x4_c(short* input, short* output);
-void vp8_short_inv_walsh4x4_neon(short* input, short* output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
+void vp8_short_inv_walsh4x4_neon(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_neon
 
-void vp8_short_inv_walsh4x4_1_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
 void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
 void vp8_short_walsh4x4_neon(short* input, short* output, int pitch);
 #define vp8_short_walsh4x4 vp8_short_walsh4x4_neon
 
-void vp8_sixtap_predict16x16_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_sixtap_predict16x16_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_sixtap_predict16x16_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_sixtap_predict16x16 vp8_sixtap_predict16x16_neon
 
-void vp8_sixtap_predict4x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict4x4_neon(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict4x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
 #define vp8_sixtap_predict4x4 vp8_sixtap_predict4x4_neon
 
-void vp8_sixtap_predict8x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x4_neon(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
 #define vp8_sixtap_predict8x4 vp8_sixtap_predict8x4_neon
 
-void vp8_sixtap_predict8x8_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x8_neon(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x8_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
 #define vp8_sixtap_predict8x8 vp8_sixtap_predict8x8_neon
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vp9_rtcd.h
index 309c780..e17954d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vp9_rtcd.h
@@ -76,34 +76,6 @@
                              const struct mv* center_mv);
 #define vp9_diamond_search_sad vp9_diamond_search_sad_c
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-void vp9_fdct8x8_quant_neon(const int16_t* input,
-                            int stride,
-                            tran_low_t* coeff_ptr,
-                            intptr_t n_coeffs,
-                            int skip_block,
-                            const int16_t* round_ptr,
-                            const int16_t* quant_ptr,
-                            tran_low_t* qcoeff_ptr,
-                            tran_low_t* dqcoeff_ptr,
-                            const int16_t* dequant_ptr,
-                            uint16_t* eob_ptr,
-                            const int16_t* scan,
-                            const int16_t* iscan);
-#define vp9_fdct8x8_quant vp9_fdct8x8_quant_neon
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -140,12 +112,12 @@
 #define vp9_fwht4x4 vp9_fwht4x4_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 void vp9_iht16x16_256_add_neon(const tran_low_t* input,
-                               uint8_t* output,
-                               int pitch,
+                               uint8_t* dest,
+                               int stride,
                                int tx_type);
 #define vp9_iht16x16_256_add vp9_iht16x16_256_add_neon
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.asm
index 7eb1d08..54dae08 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.asm
@@ -1,10 +1,15 @@
 @ This file was created from a .asm file
 @  using the ads2gas.pl script.
-	.equ DO1STROUNDING, 0
+	.syntax unified
+.equ VPX_ARCH_ARM ,  1
 .equ ARCH_ARM ,  1
+.equ VPX_ARCH_MIPS ,  0
 .equ ARCH_MIPS ,  0
+.equ VPX_ARCH_X86 ,  0
 .equ ARCH_X86 ,  0
+.equ VPX_ARCH_X86_64 ,  0
 .equ ARCH_X86_64 ,  0
+.equ VPX_ARCH_PPC ,  0
 .equ ARCH_PPC ,  0
 .equ HAVE_NEON ,  1
 .equ HAVE_NEON_ASM ,  1
@@ -69,21 +74,25 @@
 .equ CONFIG_OS_SUPPORT ,  1
 .equ CONFIG_UNIT_TESTS ,  1
 .equ CONFIG_WEBM_IO ,  1
-.equ CONFIG_LIBYUV ,  1
+.equ CONFIG_LIBYUV ,  0
 .equ CONFIG_DECODE_PERF_TESTS ,  0
 .equ CONFIG_ENCODE_PERF_TESTS ,  0
 .equ CONFIG_MULTI_RES_ENCODING ,  1
 .equ CONFIG_TEMPORAL_DENOISING ,  1
 .equ CONFIG_VP9_TEMPORAL_DENOISING ,  1
+.equ CONFIG_CONSISTENT_RECODE ,  0
 .equ CONFIG_COEFFICIENT_RANGE_CHECKING ,  0
 .equ CONFIG_VP9_HIGHBITDEPTH ,  0
 .equ CONFIG_BETTER_HW_COMPATIBILITY ,  0
 .equ CONFIG_EXPERIMENTAL ,  0
 .equ CONFIG_SIZE_LIMIT ,  1
 .equ CONFIG_ALWAYS_ADJUST_BPM ,  0
-.equ CONFIG_SPATIAL_SVC ,  0
+.equ CONFIG_BITSTREAM_DEBUG ,  0
+.equ CONFIG_MISMATCH_DEBUG ,  0
 .equ CONFIG_FP_MB_STATS ,  0
 .equ CONFIG_EMULATE_HARDWARE ,  0
+.equ CONFIG_NON_GREEDY_MV ,  0
+.equ CONFIG_RATE_CTRL ,  0
 .equ DECODE_WIDTH_LIMIT ,  16384
 .equ DECODE_HEIGHT_LIMIT ,  16384
 	.section	.note.GNU-stack,"",%progbits
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.c
index 22b0617..f4f0c11 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=armv7-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs";
+static const char* const cfg = "--target=armv7-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.h
index 3dbebf9..969f186 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      inline
+#define VPX_ARCH_ARM 1
 #define ARCH_ARM 1
+#define VPX_ARCH_MIPS 0
 #define ARCH_MIPS 0
+#define VPX_ARCH_X86 0
 #define ARCH_X86 0
+#define VPX_ARCH_X86_64 0
 #define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 1
 #define HAVE_NEON_ASM 1
@@ -78,21 +83,25 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
 #define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 0
 #define CONFIG_BETTER_HW_COMPATIBILITY 0
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
-#define CONFIG_SPATIAL_SVC 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_dsp_rtcd.h
index 453e90e..9c31497 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm-neon/vpx_dsp_rtcd.h
@@ -235,349 +235,349 @@
 #define vpx_convolve_copy vpx_convolve_copy_neon
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d135_predictor_16x16_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_neon
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d135_predictor_32x32_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_neon
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d135_predictor_4x4_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_neon
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d135_predictor_8x8_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_neon
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_16x16 vpx_d153_predictor_16x16_c
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_32x32 vpx_d153_predictor_32x32_c
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_4x4 vpx_d153_predictor_4x4_c
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_8x8 vpx_d153_predictor_8x8_c
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_16x16 vpx_d207_predictor_16x16_c
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_32x32 vpx_d207_predictor_32x32_c
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_c
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_8x8 vpx_d207_predictor_8x8_c
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_16x16_neon(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_d45_predictor_16x16 vpx_d45_predictor_16x16_neon
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_32x32_neon(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_d45_predictor_32x32 vpx_d45_predictor_32x32_neon
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_4x4_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_neon
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_8x8_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_neon
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_16x16 vpx_d63_predictor_16x16_c
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_32x32 vpx_d63_predictor_32x32_c
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_4x4 vpx_d63_predictor_4x4_c
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_8x8 vpx_d63_predictor_8x8_c
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_16x16_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_neon
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_32x32_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_neon
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_4x4_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_neon
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_8x8_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_neon
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_16x16_neon(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 #define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_neon
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_32x32_neon(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 #define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_neon
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_4x4_neon(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 #define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_neon
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_8x8_neon(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 #define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_neon
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_16x16_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_neon
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_32x32_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_neon
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_4x4_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_neon
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_8x8_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_neon
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_16x16_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_neon
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_32x32_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_neon
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_4x4_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_neon
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_8x8_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_neon
@@ -621,13 +621,13 @@
 #define vpx_fdct8x8_1 vpx_fdct8x8_1_neon
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
                        int* sum);
 void vpx_get16x16var_neon(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
@@ -635,23 +635,23 @@
 #define vpx_get16x16var vpx_get16x16var_neon
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 unsigned int vpx_get4x4sse_cs_neon(const unsigned char* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const unsigned char* ref_ptr,
                                    int ref_stride);
 #define vpx_get4x4sse_cs vpx_get4x4sse_cs_neon
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
                      int* sum);
 void vpx_get8x8var_neon(const uint8_t* src_ptr,
-                        int source_stride,
+                        int src_stride,
                         const uint8_t* ref_ptr,
                         int ref_stride,
                         unsigned int* sse,
@@ -662,41 +662,41 @@
 #define vpx_get_mb_ss vpx_get_mb_ss_c
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_16x16_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_h_predictor_16x16 vpx_h_predictor_16x16_neon
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_32x32_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_h_predictor_32x32 vpx_h_predictor_32x32_neon
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_4x4_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_h_predictor_4x4 vpx_h_predictor_4x4_neon
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_8x8_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_h_predictor_8x8 vpx_h_predictor_8x8_neon
@@ -723,7 +723,7 @@
 #define vpx_hadamard_8x8 vpx_hadamard_8x8_neon
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
@@ -996,12 +996,12 @@
                                   const uint8_t* thresh1);
 #define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_neon
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
                                  int flimit);
-void vpx_mbpost_proc_across_ip_neon(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_neon(unsigned char* src,
                                     int pitch,
                                     int rows,
                                     int cols,
@@ -1035,35 +1035,35 @@
 #define vpx_minmax_8x8 vpx_minmax_8x8_neon
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 unsigned int vpx_mse16x16_neon(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 #define vpx_mse16x16 vpx_mse16x16_neon
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse16x8 vpx_mse16x8_c
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse8x16 vpx_mse8x16_c
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 #define vpx_mse8x8 vpx_mse8x8_c
 
@@ -1180,12 +1180,12 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x16x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad16x16x4d vpx_sad16x16x4d_neon
@@ -1221,12 +1221,12 @@
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x32x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad16x32x4d vpx_sad16x32x4d_neon
@@ -1262,12 +1262,12 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad16x8x4d_neon(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 #define vpx_sad16x8x4d vpx_sad16x8x4d_neon
@@ -1303,12 +1303,12 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x16x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x16x4d vpx_sad32x16x4d_neon
@@ -1337,16 +1337,23 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x32x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x32x4d vpx_sad32x32x4d_neon
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad32x32x8 vpx_sad32x32x8_c
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -1371,12 +1378,12 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x64x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x64x4d vpx_sad32x64x4d_neon
@@ -1412,12 +1419,12 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x4x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad4x4x4d vpx_sad4x4x4d_neon
@@ -1453,12 +1460,12 @@
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x8x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad4x8x4d vpx_sad4x8x4d_neon
@@ -1487,12 +1494,12 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x32x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad64x32x4d vpx_sad64x32x4d_neon
@@ -1521,12 +1528,12 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x64x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad64x64x4d vpx_sad64x64x4d_neon
@@ -1562,12 +1569,12 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad8x16x4d_neon(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 #define vpx_sad8x16x4d vpx_sad8x16x4d_neon
@@ -1603,12 +1610,12 @@
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x4x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad8x4x4d vpx_sad8x4x4d_neon
@@ -1644,12 +1651,12 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x8x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad8x8x4d vpx_sad8x8x4d_neon
@@ -1755,17 +1762,17 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1773,17 +1780,17 @@
 #define vpx_sub_pixel_avg_variance16x16 vpx_sub_pixel_avg_variance16x16_neon
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1791,17 +1798,17 @@
 #define vpx_sub_pixel_avg_variance16x32 vpx_sub_pixel_avg_variance16x32_neon
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_neon(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
@@ -1809,17 +1816,17 @@
 #define vpx_sub_pixel_avg_variance16x8 vpx_sub_pixel_avg_variance16x8_neon
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1827,17 +1834,17 @@
 #define vpx_sub_pixel_avg_variance32x16 vpx_sub_pixel_avg_variance32x16_neon
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1845,17 +1852,17 @@
 #define vpx_sub_pixel_avg_variance32x32 vpx_sub_pixel_avg_variance32x32_neon
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1863,17 +1870,17 @@
 #define vpx_sub_pixel_avg_variance32x64 vpx_sub_pixel_avg_variance32x64_neon
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1881,17 +1888,17 @@
 #define vpx_sub_pixel_avg_variance4x4 vpx_sub_pixel_avg_variance4x4_neon
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1899,17 +1906,17 @@
 #define vpx_sub_pixel_avg_variance4x8 vpx_sub_pixel_avg_variance4x8_neon
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1917,17 +1924,17 @@
 #define vpx_sub_pixel_avg_variance64x32 vpx_sub_pixel_avg_variance64x32_neon
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1935,17 +1942,17 @@
 #define vpx_sub_pixel_avg_variance64x64 vpx_sub_pixel_avg_variance64x64_neon
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_neon(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
@@ -1953,17 +1960,17 @@
 #define vpx_sub_pixel_avg_variance8x16 vpx_sub_pixel_avg_variance8x16_neon
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1971,17 +1978,17 @@
 #define vpx_sub_pixel_avg_variance8x4 vpx_sub_pixel_avg_variance8x4_neon
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1989,208 +1996,208 @@
 #define vpx_sub_pixel_avg_variance8x8 vpx_sub_pixel_avg_variance8x8_neon
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance16x16 vpx_sub_pixel_variance16x16_neon
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance16x32 vpx_sub_pixel_variance16x32_neon
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_neon(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 #define vpx_sub_pixel_variance16x8 vpx_sub_pixel_variance16x8_neon
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance32x16 vpx_sub_pixel_variance32x16_neon
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance32x32 vpx_sub_pixel_variance32x32_neon
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance32x64 vpx_sub_pixel_variance32x64_neon
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 #define vpx_sub_pixel_variance4x4 vpx_sub_pixel_variance4x4_neon
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 #define vpx_sub_pixel_variance4x8 vpx_sub_pixel_variance4x8_neon
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance64x32 vpx_sub_pixel_variance64x32_neon
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance64x64 vpx_sub_pixel_variance64x64_neon
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_neon(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 #define vpx_sub_pixel_variance8x16 vpx_sub_pixel_variance8x16_neon
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 #define vpx_sub_pixel_variance8x4 vpx_sub_pixel_variance8x4_neon
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
@@ -2219,243 +2226,243 @@
 #define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_neon
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_16x16_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_neon
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_32x32_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_neon
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_4x4_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_neon
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_8x8_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_neon
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_16x16_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_v_predictor_16x16 vpx_v_predictor_16x16_neon
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_32x32_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_v_predictor_32x32 vpx_v_predictor_32x32_neon
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_4x4_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_v_predictor_4x4 vpx_v_predictor_4x4_neon
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_8x8_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_v_predictor_8x8 vpx_v_predictor_8x8_neon
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x16_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance16x16 vpx_variance16x16_neon
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x32_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance16x32 vpx_variance16x32_neon
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance16x8_neon(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 #define vpx_variance16x8 vpx_variance16x8_neon
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x16_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance32x16 vpx_variance32x16_neon
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x32_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance32x32 vpx_variance32x32_neon
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x64_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance32x64 vpx_variance32x64_neon
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x4_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance4x4 vpx_variance4x4_neon
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x8_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance4x8 vpx_variance4x8_neon
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x32_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance64x32 vpx_variance64x32_neon
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x64_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance64x64 vpx_variance64x64_neon
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance8x16_neon(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 #define vpx_variance8x16 vpx_variance8x16_neon
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x4_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance8x4 vpx_variance8x4_neon
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x8_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance8x8 vpx_variance8x8_neon
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vp8_rtcd.h
index a84e3b3..e95dec5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vp8_rtcd.h
@@ -27,44 +27,44 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
 #define vp8_bilinear_predict16x16 vp8_bilinear_predict16x16_c
 
-void vp8_bilinear_predict4x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_c
 
-void vp8_bilinear_predict8x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_c
 
-void vp8_bilinear_predict8x8_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_bilinear_predict8x8 vp8_bilinear_predict8x8_c
 
 void vp8_blend_b_c(unsigned char* y,
                    unsigned char* u,
                    unsigned char* v,
-                   int y1,
-                   int u1,
-                   int v1,
+                   int y_1,
+                   int u_1,
+                   int v_1,
                    int alpha,
                    int stride);
 #define vp8_blend_b vp8_blend_b_c
@@ -72,9 +72,9 @@
 void vp8_blend_mb_inner_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
@@ -82,9 +82,9 @@
 void vp8_blend_mb_outer_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
@@ -92,28 +92,35 @@
 int vp8_block_error_c(short* coeff, short* dqcoeff);
 #define vp8_block_error vp8_block_error_c
 
+void vp8_copy32xn_c(const unsigned char* src_ptr,
+                    int src_stride,
+                    unsigned char* dst_ptr,
+                    int dst_stride,
+                    int height);
+#define vp8_copy32xn vp8_copy32xn_c
+
 void vp8_copy_mem16x16_c(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 #define vp8_copy_mem16x16 vp8_copy_mem16x16_c
 
 void vp8_copy_mem8x4_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 #define vp8_copy_mem8x4 vp8_copy_mem8x4_c
 
 void vp8_copy_mem8x8_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 #define vp8_copy_mem8x8 vp8_copy_mem8x8_c
 
-void vp8_dc_only_idct_add_c(short input,
-                            unsigned char* pred,
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
                             int pred_stride,
-                            unsigned char* dst,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 #define vp8_dc_only_idct_add vp8_dc_only_idct_add_c
 
@@ -139,7 +146,7 @@
 
 void vp8_dequant_idct_add_c(short* input,
                             short* dq,
-                            unsigned char* output,
+                            unsigned char* dest,
                             int stride);
 #define vp8_dequant_idct_add vp8_dequant_idct_add_c
 
@@ -158,7 +165,7 @@
                                     char* eobs);
 #define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_c
 
-void vp8_dequantize_b_c(struct blockd*, short* dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
 #define vp8_dequantize_b vp8_dequantize_b_c
 
 int vp8_diamond_search_sad_c(struct macroblock* x,
@@ -209,55 +216,55 @@
                           union int_mv* center_mv);
 #define vp8_full_search_sad vp8_full_search_sad_c
 
-void vp8_loop_filter_bh_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
 #define vp8_loop_filter_bh vp8_loop_filter_bh_c
 
-void vp8_loop_filter_bv_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
 #define vp8_loop_filter_bv vp8_loop_filter_bv_c
 
-void vp8_loop_filter_mbh_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbh vp8_loop_filter_mbh_c
 
-void vp8_loop_filter_mbv_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbv vp8_loop_filter_mbv_c
 
-void vp8_loop_filter_bhs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
 #define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_c
 
-void vp8_loop_filter_bvs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
 #define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_c
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y,
-                                              int ystride,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbh vp8_loop_filter_simple_horizontal_edge_c
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y,
-                                            int ystride,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
                                             const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbv vp8_loop_filter_simple_vertical_edge_c
 
@@ -271,8 +278,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -288,50 +295,50 @@
 #define vp8_short_fdct8x4 vp8_short_fdct8x4_c
 
 void vp8_short_idct4x4llm_c(short* input,
-                            unsigned char* pred,
-                            int pitch,
-                            unsigned char* dst,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 #define vp8_short_idct4x4llm vp8_short_idct4x4llm_c
 
-void vp8_short_inv_walsh4x4_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_c
 
-void vp8_short_inv_walsh4x4_1_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
 void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
 #define vp8_short_walsh4x4 vp8_short_walsh4x4_c
 
-void vp8_sixtap_predict16x16_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_sixtap_predict16x16 vp8_sixtap_predict16x16_c
 
-void vp8_sixtap_predict4x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
 #define vp8_sixtap_predict4x4 vp8_sixtap_predict4x4_c
 
-void vp8_sixtap_predict8x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
 #define vp8_sixtap_predict8x4 vp8_sixtap_predict8x4_c
 
-void vp8_sixtap_predict8x8_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
 #define vp8_sixtap_predict8x8 vp8_sixtap_predict8x8_c
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vp9_rtcd.h
index 478101f..5465fc0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vp9_rtcd.h
@@ -64,21 +64,6 @@
                              const struct mv* center_mv);
 #define vp9_diamond_search_sad vp9_diamond_search_sad_c
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-#define vp9_fdct8x8_quant vp9_fdct8x8_quant_c
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -115,8 +100,8 @@
 #define vp9_fwht4x4 vp9_fwht4x4_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 #define vp9_iht16x16_256_add vp9_iht16x16_256_add_c
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_config.asm
index b06082f..451803d4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_config.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_config.asm
@@ -1,10 +1,15 @@
 @ This file was created from a .asm file
 @  using the ads2gas.pl script.
-	.equ DO1STROUNDING, 0
+	.syntax unified
+.equ VPX_ARCH_ARM ,  1
 .equ ARCH_ARM ,  1
+.equ VPX_ARCH_MIPS ,  0
 .equ ARCH_MIPS ,  0
+.equ VPX_ARCH_X86 ,  0
 .equ ARCH_X86 ,  0
+.equ VPX_ARCH_X86_64 ,  0
 .equ ARCH_X86_64 ,  0
+.equ VPX_ARCH_PPC ,  0
 .equ ARCH_PPC ,  0
 .equ HAVE_NEON ,  0
 .equ HAVE_NEON_ASM ,  0
@@ -69,21 +74,25 @@
 .equ CONFIG_OS_SUPPORT ,  1
 .equ CONFIG_UNIT_TESTS ,  1
 .equ CONFIG_WEBM_IO ,  1
-.equ CONFIG_LIBYUV ,  1
+.equ CONFIG_LIBYUV ,  0
 .equ CONFIG_DECODE_PERF_TESTS ,  0
 .equ CONFIG_ENCODE_PERF_TESTS ,  0
 .equ CONFIG_MULTI_RES_ENCODING ,  1
 .equ CONFIG_TEMPORAL_DENOISING ,  1
 .equ CONFIG_VP9_TEMPORAL_DENOISING ,  1
+.equ CONFIG_CONSISTENT_RECODE ,  0
 .equ CONFIG_COEFFICIENT_RANGE_CHECKING ,  0
 .equ CONFIG_VP9_HIGHBITDEPTH ,  0
 .equ CONFIG_BETTER_HW_COMPATIBILITY ,  0
 .equ CONFIG_EXPERIMENTAL ,  0
 .equ CONFIG_SIZE_LIMIT ,  1
 .equ CONFIG_ALWAYS_ADJUST_BPM ,  0
-.equ CONFIG_SPATIAL_SVC ,  0
+.equ CONFIG_BITSTREAM_DEBUG ,  0
+.equ CONFIG_MISMATCH_DEBUG ,  0
 .equ CONFIG_FP_MB_STATS ,  0
 .equ CONFIG_EMULATE_HARDWARE ,  0
+.equ CONFIG_NON_GREEDY_MV ,  0
+.equ CONFIG_RATE_CTRL ,  0
 .equ DECODE_WIDTH_LIMIT ,  16384
 .equ DECODE_HEIGHT_LIMIT ,  16384
 	.section	.note.GNU-stack,"",%progbits
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_config.c
index 64e2612..b0d23e7 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=armv7-linux-gcc --disable-neon --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs";
+static const char* const cfg = "--target=armv7-linux-gcc --disable-neon --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_config.h
index 4afe67a..ee3225f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      inline
+#define VPX_ARCH_ARM 1
 #define ARCH_ARM 1
+#define VPX_ARCH_MIPS 0
 #define ARCH_MIPS 0
+#define VPX_ARCH_X86 0
 #define ARCH_X86 0
+#define VPX_ARCH_X86_64 0
 #define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 0
 #define HAVE_NEON_ASM 0
@@ -78,21 +83,25 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
 #define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 0
 #define CONFIG_BETTER_HW_COMPATIBILITY 0
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
-#define CONFIG_SPATIAL_SVC 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_dsp_rtcd.h
index 063a8c7..792d50f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm/vpx_dsp_rtcd.h
@@ -139,253 +139,253 @@
 #define vpx_convolve_copy vpx_convolve_copy_c
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_c
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_c
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_c
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_c
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_16x16 vpx_d153_predictor_16x16_c
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_32x32 vpx_d153_predictor_32x32_c
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_4x4 vpx_d153_predictor_4x4_c
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_8x8 vpx_d153_predictor_8x8_c
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_16x16 vpx_d207_predictor_16x16_c
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_32x32 vpx_d207_predictor_32x32_c
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_c
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_8x8 vpx_d207_predictor_8x8_c
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d45_predictor_16x16 vpx_d45_predictor_16x16_c
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d45_predictor_32x32 vpx_d45_predictor_32x32_c
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_c
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_c
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_16x16 vpx_d63_predictor_16x16_c
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_32x32 vpx_d63_predictor_32x32_c
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_4x4 vpx_d63_predictor_4x4_c
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_8x8 vpx_d63_predictor_8x8_c
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_c
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_c
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_c
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_c
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_c
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_c
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_c
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_c
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_c
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_c
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_c
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_c
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_c
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_c
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_c
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_c
@@ -418,7 +418,7 @@
 #define vpx_fdct8x8_1 vpx_fdct8x8_1_c
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
@@ -426,13 +426,13 @@
 #define vpx_get16x16var vpx_get16x16var_c
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 #define vpx_get4x4sse_cs vpx_get4x4sse_cs_c
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
@@ -443,25 +443,25 @@
 #define vpx_get_mb_ss vpx_get_mb_ss_c
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_h_predictor_16x16 vpx_h_predictor_16x16_c
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_h_predictor_32x32 vpx_h_predictor_32x32_c
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_h_predictor_4x4 vpx_h_predictor_4x4_c
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_h_predictor_8x8 vpx_h_predictor_8x8_c
@@ -482,7 +482,7 @@
 #define vpx_hadamard_8x8 vpx_hadamard_8x8_c
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
@@ -643,7 +643,7 @@
                                const uint8_t* thresh1);
 #define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_c
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
@@ -666,30 +666,30 @@
 #define vpx_minmax_8x8 vpx_minmax_8x8_c
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 #define vpx_mse16x16 vpx_mse16x16_c
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse16x8 vpx_mse16x8_c
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse8x16 vpx_mse8x16_c
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 #define vpx_mse8x8 vpx_mse8x8_c
 
@@ -764,7 +764,7 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad16x16x4d vpx_sad16x16x4d_c
@@ -791,7 +791,7 @@
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad16x32x4d vpx_sad16x32x4d_c
@@ -818,7 +818,7 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 #define vpx_sad16x8x4d vpx_sad16x8x4d_c
@@ -845,7 +845,7 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad32x16x4d vpx_sad32x16x4d_c
@@ -865,11 +865,18 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad32x32x4d vpx_sad32x32x4d_c
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad32x32x8 vpx_sad32x32x8_c
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -885,7 +892,7 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad32x64x4d vpx_sad32x64x4d_c
@@ -912,7 +919,7 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad4x4x4d vpx_sad4x4x4d_c
@@ -939,7 +946,7 @@
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad4x8x4d vpx_sad4x8x4d_c
@@ -959,7 +966,7 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad64x32x4d vpx_sad64x32x4d_c
@@ -979,7 +986,7 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad64x64x4d vpx_sad64x64x4d_c
@@ -1006,7 +1013,7 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 #define vpx_sad8x16x4d vpx_sad8x16x4d_c
@@ -1033,7 +1040,7 @@
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad8x4x4d vpx_sad8x4x4d_c
@@ -1060,7 +1067,7 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad8x8x4d vpx_sad8x8x4d_c
@@ -1154,9 +1161,9 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1164,9 +1171,9 @@
 #define vpx_sub_pixel_avg_variance16x16 vpx_sub_pixel_avg_variance16x16_c
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1174,9 +1181,9 @@
 #define vpx_sub_pixel_avg_variance16x32 vpx_sub_pixel_avg_variance16x32_c
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
@@ -1184,9 +1191,9 @@
 #define vpx_sub_pixel_avg_variance16x8 vpx_sub_pixel_avg_variance16x8_c
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1194,9 +1201,9 @@
 #define vpx_sub_pixel_avg_variance32x16 vpx_sub_pixel_avg_variance32x16_c
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1204,9 +1211,9 @@
 #define vpx_sub_pixel_avg_variance32x32 vpx_sub_pixel_avg_variance32x32_c
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1214,9 +1221,9 @@
 #define vpx_sub_pixel_avg_variance32x64 vpx_sub_pixel_avg_variance32x64_c
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -1224,9 +1231,9 @@
 #define vpx_sub_pixel_avg_variance4x4 vpx_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -1234,9 +1241,9 @@
 #define vpx_sub_pixel_avg_variance4x8 vpx_sub_pixel_avg_variance4x8_c
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1244,9 +1251,9 @@
 #define vpx_sub_pixel_avg_variance64x32 vpx_sub_pixel_avg_variance64x32_c
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1254,9 +1261,9 @@
 #define vpx_sub_pixel_avg_variance64x64 vpx_sub_pixel_avg_variance64x64_c
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
@@ -1264,9 +1271,9 @@
 #define vpx_sub_pixel_avg_variance8x16 vpx_sub_pixel_avg_variance8x16_c
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -1274,9 +1281,9 @@
 #define vpx_sub_pixel_avg_variance8x4 vpx_sub_pixel_avg_variance8x4_c
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -1284,117 +1291,117 @@
 #define vpx_sub_pixel_avg_variance8x8 vpx_sub_pixel_avg_variance8x8_c
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance16x16 vpx_sub_pixel_variance16x16_c
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance16x32 vpx_sub_pixel_variance16x32_c
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 #define vpx_sub_pixel_variance16x8 vpx_sub_pixel_variance16x8_c
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance32x16 vpx_sub_pixel_variance32x16_c
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance32x32 vpx_sub_pixel_variance32x32_c
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance32x64 vpx_sub_pixel_variance32x64_c
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 #define vpx_sub_pixel_variance4x4 vpx_sub_pixel_variance4x4_c
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 #define vpx_sub_pixel_variance4x8 vpx_sub_pixel_variance4x8_c
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance64x32 vpx_sub_pixel_variance64x32_c
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance64x64 vpx_sub_pixel_variance64x64_c
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 #define vpx_sub_pixel_variance8x16 vpx_sub_pixel_variance8x16_c
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 #define vpx_sub_pixel_variance8x4 vpx_sub_pixel_variance8x4_c
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
@@ -1414,146 +1421,146 @@
 #define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_c
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_c
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_c
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_c
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_c
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_v_predictor_16x16 vpx_v_predictor_16x16_c
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_v_predictor_32x32 vpx_v_predictor_32x32_c
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_v_predictor_4x4 vpx_v_predictor_4x4_c
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_v_predictor_8x8 vpx_v_predictor_8x8_c
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance16x16 vpx_variance16x16_c
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance16x32 vpx_variance16x32_c
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 #define vpx_variance16x8 vpx_variance16x8_c
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance32x16 vpx_variance32x16_c
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance32x32 vpx_variance32x32_c
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance32x64 vpx_variance32x64_c
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance4x4 vpx_variance4x4_c
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance4x8 vpx_variance4x8_c
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance64x32 vpx_variance64x32_c
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance64x64 vpx_variance64x64_c
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 #define vpx_variance8x16 vpx_variance8x16_c
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance8x4 vpx_variance8x4_c
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance8x8 vpx_variance8x8_c
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vp8_rtcd.h
new file mode 100644
index 0000000..c4bb41b
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vp8_rtcd.h
@@ -0,0 +1,505 @@
+// This file is generated. Do not edit.
+#ifndef VP8_RTCD_H_
+#define VP8_RTCD_H_
+
+#ifdef RTCD_C
+#define RTCD_EXTERN
+#else
+#define RTCD_EXTERN extern
+#endif
+
+/*
+ * VP8
+ */
+
+struct blockd;
+struct macroblockd;
+struct loop_filter_info;
+
+/* Encoder forward decls */
+struct block;
+struct macroblock;
+struct variance_vtable;
+union int_mv;
+struct yv12_buffer_config;
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
+                                 int dst_pitch);
+void vp8_bilinear_predict16x16_neon(unsigned char* src_ptr,
+                                    int src_pixels_per_line,
+                                    int xoffset,
+                                    int yoffset,
+                                    unsigned char* dst_ptr,
+                                    int dst_pitch);
+#define vp8_bilinear_predict16x16 vp8_bilinear_predict16x16_neon
+
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict4x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_neon
+
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_neon
+
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x8_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict8x8 vp8_bilinear_predict8x8_neon
+
+void vp8_blend_b_c(unsigned char* y,
+                   unsigned char* u,
+                   unsigned char* v,
+                   int y_1,
+                   int u_1,
+                   int v_1,
+                   int alpha,
+                   int stride);
+#define vp8_blend_b vp8_blend_b_c
+
+void vp8_blend_mb_inner_c(unsigned char* y,
+                          unsigned char* u,
+                          unsigned char* v,
+                          int y_1,
+                          int u_1,
+                          int v_1,
+                          int alpha,
+                          int stride);
+#define vp8_blend_mb_inner vp8_blend_mb_inner_c
+
+void vp8_blend_mb_outer_c(unsigned char* y,
+                          unsigned char* u,
+                          unsigned char* v,
+                          int y_1,
+                          int u_1,
+                          int v_1,
+                          int alpha,
+                          int stride);
+#define vp8_blend_mb_outer vp8_blend_mb_outer_c
+
+int vp8_block_error_c(short* coeff, short* dqcoeff);
+#define vp8_block_error vp8_block_error_c
+
+void vp8_copy32xn_c(const unsigned char* src_ptr,
+                    int src_stride,
+                    unsigned char* dst_ptr,
+                    int dst_stride,
+                    int height);
+#define vp8_copy32xn vp8_copy32xn_c
+
+void vp8_copy_mem16x16_c(unsigned char* src,
+                         int src_stride,
+                         unsigned char* dst,
+                         int dst_stride);
+void vp8_copy_mem16x16_neon(unsigned char* src,
+                            int src_stride,
+                            unsigned char* dst,
+                            int dst_stride);
+#define vp8_copy_mem16x16 vp8_copy_mem16x16_neon
+
+void vp8_copy_mem8x4_c(unsigned char* src,
+                       int src_stride,
+                       unsigned char* dst,
+                       int dst_stride);
+void vp8_copy_mem8x4_neon(unsigned char* src,
+                          int src_stride,
+                          unsigned char* dst,
+                          int dst_stride);
+#define vp8_copy_mem8x4 vp8_copy_mem8x4_neon
+
+void vp8_copy_mem8x8_c(unsigned char* src,
+                       int src_stride,
+                       unsigned char* dst,
+                       int dst_stride);
+void vp8_copy_mem8x8_neon(unsigned char* src,
+                          int src_stride,
+                          unsigned char* dst,
+                          int dst_stride);
+#define vp8_copy_mem8x8 vp8_copy_mem8x8_neon
+
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
+                            int dst_stride);
+void vp8_dc_only_idct_add_neon(short input_dc,
+                               unsigned char* pred_ptr,
+                               int pred_stride,
+                               unsigned char* dst_ptr,
+                               int dst_stride);
+#define vp8_dc_only_idct_add vp8_dc_only_idct_add_neon
+
+int vp8_denoiser_filter_c(unsigned char* mc_running_avg_y,
+                          int mc_avg_y_stride,
+                          unsigned char* running_avg_y,
+                          int avg_y_stride,
+                          unsigned char* sig,
+                          int sig_stride,
+                          unsigned int motion_magnitude,
+                          int increase_denoising);
+int vp8_denoiser_filter_neon(unsigned char* mc_running_avg_y,
+                             int mc_avg_y_stride,
+                             unsigned char* running_avg_y,
+                             int avg_y_stride,
+                             unsigned char* sig,
+                             int sig_stride,
+                             unsigned int motion_magnitude,
+                             int increase_denoising);
+#define vp8_denoiser_filter vp8_denoiser_filter_neon
+
+int vp8_denoiser_filter_uv_c(unsigned char* mc_running_avg,
+                             int mc_avg_stride,
+                             unsigned char* running_avg,
+                             int avg_stride,
+                             unsigned char* sig,
+                             int sig_stride,
+                             unsigned int motion_magnitude,
+                             int increase_denoising);
+int vp8_denoiser_filter_uv_neon(unsigned char* mc_running_avg,
+                                int mc_avg_stride,
+                                unsigned char* running_avg,
+                                int avg_stride,
+                                unsigned char* sig,
+                                int sig_stride,
+                                unsigned int motion_magnitude,
+                                int increase_denoising);
+#define vp8_denoiser_filter_uv vp8_denoiser_filter_uv_neon
+
+void vp8_dequant_idct_add_c(short* input,
+                            short* dq,
+                            unsigned char* dest,
+                            int stride);
+void vp8_dequant_idct_add_neon(short* input,
+                               short* dq,
+                               unsigned char* dest,
+                               int stride);
+#define vp8_dequant_idct_add vp8_dequant_idct_add_neon
+
+void vp8_dequant_idct_add_uv_block_c(short* q,
+                                     short* dq,
+                                     unsigned char* dst_u,
+                                     unsigned char* dst_v,
+                                     int stride,
+                                     char* eobs);
+void vp8_dequant_idct_add_uv_block_neon(short* q,
+                                        short* dq,
+                                        unsigned char* dst_u,
+                                        unsigned char* dst_v,
+                                        int stride,
+                                        char* eobs);
+#define vp8_dequant_idct_add_uv_block vp8_dequant_idct_add_uv_block_neon
+
+void vp8_dequant_idct_add_y_block_c(short* q,
+                                    short* dq,
+                                    unsigned char* dst,
+                                    int stride,
+                                    char* eobs);
+void vp8_dequant_idct_add_y_block_neon(short* q,
+                                       short* dq,
+                                       unsigned char* dst,
+                                       int stride,
+                                       char* eobs);
+#define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_neon
+
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
+void vp8_dequantize_b_neon(struct blockd*, short* DQC);
+#define vp8_dequantize_b vp8_dequantize_b_neon
+
+int vp8_diamond_search_sad_c(struct macroblock* x,
+                             struct block* b,
+                             struct blockd* d,
+                             union int_mv* ref_mv,
+                             union int_mv* best_mv,
+                             int search_param,
+                             int sad_per_bit,
+                             int* num00,
+                             struct variance_vtable* fn_ptr,
+                             int* mvcost[2],
+                             union int_mv* center_mv);
+#define vp8_diamond_search_sad vp8_diamond_search_sad_c
+
+void vp8_fast_quantize_b_c(struct block*, struct blockd*);
+void vp8_fast_quantize_b_neon(struct block*, struct blockd*);
+#define vp8_fast_quantize_b vp8_fast_quantize_b_neon
+
+void vp8_filter_by_weight16x16_c(unsigned char* src,
+                                 int src_stride,
+                                 unsigned char* dst,
+                                 int dst_stride,
+                                 int src_weight);
+#define vp8_filter_by_weight16x16 vp8_filter_by_weight16x16_c
+
+void vp8_filter_by_weight4x4_c(unsigned char* src,
+                               int src_stride,
+                               unsigned char* dst,
+                               int dst_stride,
+                               int src_weight);
+#define vp8_filter_by_weight4x4 vp8_filter_by_weight4x4_c
+
+void vp8_filter_by_weight8x8_c(unsigned char* src,
+                               int src_stride,
+                               unsigned char* dst,
+                               int dst_stride,
+                               int src_weight);
+#define vp8_filter_by_weight8x8 vp8_filter_by_weight8x8_c
+
+int vp8_full_search_sad_c(struct macroblock* x,
+                          struct block* b,
+                          struct blockd* d,
+                          union int_mv* ref_mv,
+                          int sad_per_bit,
+                          int distance,
+                          struct variance_vtable* fn_ptr,
+                          int* mvcost[2],
+                          union int_mv* center_mv);
+#define vp8_full_search_sad vp8_full_search_sad_c
+
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
+                          int uv_stride,
+                          struct loop_filter_info* lfi);
+void vp8_loop_filter_bh_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
+                             int uv_stride,
+                             struct loop_filter_info* lfi);
+#define vp8_loop_filter_bh vp8_loop_filter_bh_neon
+
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
+                          int uv_stride,
+                          struct loop_filter_info* lfi);
+void vp8_loop_filter_bv_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
+                             int uv_stride,
+                             struct loop_filter_info* lfi);
+#define vp8_loop_filter_bv vp8_loop_filter_bv_neon
+
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
+                           int uv_stride,
+                           struct loop_filter_info* lfi);
+void vp8_loop_filter_mbh_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
+                              int uv_stride,
+                              struct loop_filter_info* lfi);
+#define vp8_loop_filter_mbh vp8_loop_filter_mbh_neon
+
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
+                           int uv_stride,
+                           struct loop_filter_info* lfi);
+void vp8_loop_filter_mbv_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
+                              int uv_stride,
+                              struct loop_filter_info* lfi);
+#define vp8_loop_filter_mbv vp8_loop_filter_mbv_neon
+
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
+                           const unsigned char* blimit);
+void vp8_loop_filter_bhs_neon(unsigned char* y_ptr,
+                              int y_stride,
+                              const unsigned char* blimit);
+#define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_neon
+
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
+                           const unsigned char* blimit);
+void vp8_loop_filter_bvs_neon(unsigned char* y_ptr,
+                              int y_stride,
+                              const unsigned char* blimit);
+#define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_neon
+
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
+                                              const unsigned char* blimit);
+void vp8_loop_filter_mbhs_neon(unsigned char* y_ptr,
+                               int y_stride,
+                               const unsigned char* blimit);
+#define vp8_loop_filter_simple_mbh vp8_loop_filter_mbhs_neon
+
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
+                                            const unsigned char* blimit);
+void vp8_loop_filter_mbvs_neon(unsigned char* y_ptr,
+                               int y_stride,
+                               const unsigned char* blimit);
+#define vp8_loop_filter_simple_mbv vp8_loop_filter_mbvs_neon
+
+int vp8_mbblock_error_c(struct macroblock* mb, int dc);
+#define vp8_mbblock_error vp8_mbblock_error_c
+
+int vp8_mbuverror_c(struct macroblock* mb);
+#define vp8_mbuverror vp8_mbuverror_c
+
+int vp8_refining_search_sad_c(struct macroblock* x,
+                              struct block* b,
+                              struct blockd* d,
+                              union int_mv* ref_mv,
+                              int error_per_bit,
+                              int search_range,
+                              struct variance_vtable* fn_ptr,
+                              int* mvcost[2],
+                              union int_mv* center_mv);
+#define vp8_refining_search_sad vp8_refining_search_sad_c
+
+void vp8_regular_quantize_b_c(struct block*, struct blockd*);
+#define vp8_regular_quantize_b vp8_regular_quantize_b_c
+
+void vp8_short_fdct4x4_c(short* input, short* output, int pitch);
+void vp8_short_fdct4x4_neon(short* input, short* output, int pitch);
+#define vp8_short_fdct4x4 vp8_short_fdct4x4_neon
+
+void vp8_short_fdct8x4_c(short* input, short* output, int pitch);
+void vp8_short_fdct8x4_neon(short* input, short* output, int pitch);
+#define vp8_short_fdct8x4 vp8_short_fdct8x4_neon
+
+void vp8_short_idct4x4llm_c(short* input,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
+                            int dst_stride);
+void vp8_short_idct4x4llm_neon(short* input,
+                               unsigned char* pred_ptr,
+                               int pred_stride,
+                               unsigned char* dst_ptr,
+                               int dst_stride);
+#define vp8_short_idct4x4llm vp8_short_idct4x4llm_neon
+
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
+void vp8_short_inv_walsh4x4_neon(short* input, short* mb_dqcoeff);
+#define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_neon
+
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
+#define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
+
+void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
+void vp8_short_walsh4x4_neon(short* input, short* output, int pitch);
+#define vp8_short_walsh4x4 vp8_short_walsh4x4_neon
+
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_sixtap_predict16x16_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_sixtap_predict16x16 vp8_sixtap_predict16x16_neon
+
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
+                             int dst_pitch);
+void vp8_sixtap_predict4x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
+                                int dst_pitch);
+#define vp8_sixtap_predict4x4 vp8_sixtap_predict4x4_neon
+
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
+                             int dst_pitch);
+void vp8_sixtap_predict8x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
+                                int dst_pitch);
+#define vp8_sixtap_predict8x4 vp8_sixtap_predict8x4_neon
+
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
+                             int dst_pitch);
+void vp8_sixtap_predict8x8_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
+                                int dst_pitch);
+#define vp8_sixtap_predict8x8 vp8_sixtap_predict8x8_neon
+
+void vp8_rtcd(void);
+
+#include "vpx_config.h"
+
+#ifdef RTCD_C
+#include "vpx_ports/arm.h"
+static void setup_rtcd_internal(void) {
+  int flags = arm_cpu_caps();
+
+  (void)flags;
+}
+#endif
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vp9_rtcd.h
new file mode 100644
index 0000000..ca740f4
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vp9_rtcd.h
@@ -0,0 +1,342 @@
+// This file is generated. Do not edit.
+#ifndef VP9_RTCD_H_
+#define VP9_RTCD_H_
+
+#ifdef RTCD_C
+#define RTCD_EXTERN
+#else
+#define RTCD_EXTERN extern
+#endif
+
+/*
+ * VP9
+ */
+
+#include "vp9/common/vp9_common.h"
+#include "vp9/common/vp9_enums.h"
+#include "vp9/common/vp9_filter.h"
+#include "vpx/vpx_integer.h"
+
+struct macroblockd;
+
+/* Encoder forward decls */
+struct macroblock;
+struct vp9_variance_vtable;
+struct search_site_config;
+struct mv;
+union int_mv;
+struct yv12_buffer_config;
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+int64_t vp9_block_error_c(const tran_low_t* coeff,
+                          const tran_low_t* dqcoeff,
+                          intptr_t block_size,
+                          int64_t* ssz);
+#define vp9_block_error vp9_block_error_c
+
+int64_t vp9_block_error_fp_c(const tran_low_t* coeff,
+                             const tran_low_t* dqcoeff,
+                             int block_size);
+#define vp9_block_error_fp vp9_block_error_fp_c
+
+int vp9_denoiser_filter_c(const uint8_t* sig,
+                          int sig_stride,
+                          const uint8_t* mc_avg,
+                          int mc_avg_stride,
+                          uint8_t* avg,
+                          int avg_stride,
+                          int increase_denoising,
+                          BLOCK_SIZE bs,
+                          int motion_magnitude);
+int vp9_denoiser_filter_neon(const uint8_t* sig,
+                             int sig_stride,
+                             const uint8_t* mc_avg,
+                             int mc_avg_stride,
+                             uint8_t* avg,
+                             int avg_stride,
+                             int increase_denoising,
+                             BLOCK_SIZE bs,
+                             int motion_magnitude);
+#define vp9_denoiser_filter vp9_denoiser_filter_neon
+
+int vp9_diamond_search_sad_c(const struct macroblock* x,
+                             const struct search_site_config* cfg,
+                             struct mv* ref_mv,
+                             struct mv* best_mv,
+                             int search_param,
+                             int sad_per_bit,
+                             int* num00,
+                             const struct vp9_variance_vtable* fn_ptr,
+                             const struct mv* center_mv);
+#define vp9_diamond_search_sad vp9_diamond_search_sad_c
+
+void vp9_fht16x16_c(const int16_t* input,
+                    tran_low_t* output,
+                    int stride,
+                    int tx_type);
+#define vp9_fht16x16 vp9_fht16x16_c
+
+void vp9_fht4x4_c(const int16_t* input,
+                  tran_low_t* output,
+                  int stride,
+                  int tx_type);
+#define vp9_fht4x4 vp9_fht4x4_c
+
+void vp9_fht8x8_c(const int16_t* input,
+                  tran_low_t* output,
+                  int stride,
+                  int tx_type);
+#define vp9_fht8x8 vp9_fht8x8_c
+
+void vp9_filter_by_weight16x16_c(const uint8_t* src,
+                                 int src_stride,
+                                 uint8_t* dst,
+                                 int dst_stride,
+                                 int src_weight);
+#define vp9_filter_by_weight16x16 vp9_filter_by_weight16x16_c
+
+void vp9_filter_by_weight8x8_c(const uint8_t* src,
+                               int src_stride,
+                               uint8_t* dst,
+                               int dst_stride,
+                               int src_weight);
+#define vp9_filter_by_weight8x8 vp9_filter_by_weight8x8_c
+
+void vp9_fwht4x4_c(const int16_t* input, tran_low_t* output, int stride);
+#define vp9_fwht4x4 vp9_fwht4x4_c
+
+int64_t vp9_highbd_block_error_c(const tran_low_t* coeff,
+                                 const tran_low_t* dqcoeff,
+                                 intptr_t block_size,
+                                 int64_t* ssz,
+                                 int bd);
+#define vp9_highbd_block_error vp9_highbd_block_error_c
+
+void vp9_highbd_fht16x16_c(const int16_t* input,
+                           tran_low_t* output,
+                           int stride,
+                           int tx_type);
+#define vp9_highbd_fht16x16 vp9_highbd_fht16x16_c
+
+void vp9_highbd_fht4x4_c(const int16_t* input,
+                         tran_low_t* output,
+                         int stride,
+                         int tx_type);
+#define vp9_highbd_fht4x4 vp9_highbd_fht4x4_c
+
+void vp9_highbd_fht8x8_c(const int16_t* input,
+                         tran_low_t* output,
+                         int stride,
+                         int tx_type);
+#define vp9_highbd_fht8x8 vp9_highbd_fht8x8_c
+
+void vp9_highbd_fwht4x4_c(const int16_t* input, tran_low_t* output, int stride);
+#define vp9_highbd_fwht4x4 vp9_highbd_fwht4x4_c
+
+void vp9_highbd_iht16x16_256_add_c(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int tx_type,
+                                   int bd);
+void vp9_highbd_iht16x16_256_add_neon(const tran_low_t* input,
+                                      uint16_t* dest,
+                                      int stride,
+                                      int tx_type,
+                                      int bd);
+#define vp9_highbd_iht16x16_256_add vp9_highbd_iht16x16_256_add_neon
+
+void vp9_highbd_iht4x4_16_add_c(const tran_low_t* input,
+                                uint16_t* dest,
+                                int stride,
+                                int tx_type,
+                                int bd);
+void vp9_highbd_iht4x4_16_add_neon(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int tx_type,
+                                   int bd);
+#define vp9_highbd_iht4x4_16_add vp9_highbd_iht4x4_16_add_neon
+
+void vp9_highbd_iht8x8_64_add_c(const tran_low_t* input,
+                                uint16_t* dest,
+                                int stride,
+                                int tx_type,
+                                int bd);
+void vp9_highbd_iht8x8_64_add_neon(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int tx_type,
+                                   int bd);
+#define vp9_highbd_iht8x8_64_add vp9_highbd_iht8x8_64_add_neon
+
+void vp9_highbd_mbpost_proc_across_ip_c(uint16_t* src,
+                                        int pitch,
+                                        int rows,
+                                        int cols,
+                                        int flimit);
+#define vp9_highbd_mbpost_proc_across_ip vp9_highbd_mbpost_proc_across_ip_c
+
+void vp9_highbd_mbpost_proc_down_c(uint16_t* dst,
+                                   int pitch,
+                                   int rows,
+                                   int cols,
+                                   int flimit);
+#define vp9_highbd_mbpost_proc_down vp9_highbd_mbpost_proc_down_c
+
+void vp9_highbd_post_proc_down_and_across_c(const uint16_t* src_ptr,
+                                            uint16_t* dst_ptr,
+                                            int src_pixels_per_line,
+                                            int dst_pixels_per_line,
+                                            int rows,
+                                            int cols,
+                                            int flimit);
+#define vp9_highbd_post_proc_down_and_across \
+  vp9_highbd_post_proc_down_and_across_c
+
+void vp9_highbd_quantize_fp_c(const tran_low_t* coeff_ptr,
+                              intptr_t n_coeffs,
+                              int skip_block,
+                              const int16_t* round_ptr,
+                              const int16_t* quant_ptr,
+                              tran_low_t* qcoeff_ptr,
+                              tran_low_t* dqcoeff_ptr,
+                              const int16_t* dequant_ptr,
+                              uint16_t* eob_ptr,
+                              const int16_t* scan,
+                              const int16_t* iscan);
+#define vp9_highbd_quantize_fp vp9_highbd_quantize_fp_c
+
+void vp9_highbd_quantize_fp_32x32_c(const tran_low_t* coeff_ptr,
+                                    intptr_t n_coeffs,
+                                    int skip_block,
+                                    const int16_t* round_ptr,
+                                    const int16_t* quant_ptr,
+                                    tran_low_t* qcoeff_ptr,
+                                    tran_low_t* dqcoeff_ptr,
+                                    const int16_t* dequant_ptr,
+                                    uint16_t* eob_ptr,
+                                    const int16_t* scan,
+                                    const int16_t* iscan);
+#define vp9_highbd_quantize_fp_32x32 vp9_highbd_quantize_fp_32x32_c
+
+void vp9_highbd_temporal_filter_apply_c(const uint8_t* frame1,
+                                        unsigned int stride,
+                                        const uint8_t* frame2,
+                                        unsigned int block_width,
+                                        unsigned int block_height,
+                                        int strength,
+                                        int* blk_fw,
+                                        int use_32x32,
+                                        uint32_t* accumulator,
+                                        uint16_t* count);
+#define vp9_highbd_temporal_filter_apply vp9_highbd_temporal_filter_apply_c
+
+void vp9_iht16x16_256_add_c(const tran_low_t* input,
+                            uint8_t* dest,
+                            int stride,
+                            int tx_type);
+void vp9_iht16x16_256_add_neon(const tran_low_t* input,
+                               uint8_t* dest,
+                               int stride,
+                               int tx_type);
+#define vp9_iht16x16_256_add vp9_iht16x16_256_add_neon
+
+void vp9_iht4x4_16_add_c(const tran_low_t* input,
+                         uint8_t* dest,
+                         int stride,
+                         int tx_type);
+void vp9_iht4x4_16_add_neon(const tran_low_t* input,
+                            uint8_t* dest,
+                            int stride,
+                            int tx_type);
+#define vp9_iht4x4_16_add vp9_iht4x4_16_add_neon
+
+void vp9_iht8x8_64_add_c(const tran_low_t* input,
+                         uint8_t* dest,
+                         int stride,
+                         int tx_type);
+void vp9_iht8x8_64_add_neon(const tran_low_t* input,
+                            uint8_t* dest,
+                            int stride,
+                            int tx_type);
+#define vp9_iht8x8_64_add vp9_iht8x8_64_add_neon
+
+void vp9_quantize_fp_c(const tran_low_t* coeff_ptr,
+                       intptr_t n_coeffs,
+                       int skip_block,
+                       const int16_t* round_ptr,
+                       const int16_t* quant_ptr,
+                       tran_low_t* qcoeff_ptr,
+                       tran_low_t* dqcoeff_ptr,
+                       const int16_t* dequant_ptr,
+                       uint16_t* eob_ptr,
+                       const int16_t* scan,
+                       const int16_t* iscan);
+void vp9_quantize_fp_neon(const tran_low_t* coeff_ptr,
+                          intptr_t n_coeffs,
+                          int skip_block,
+                          const int16_t* round_ptr,
+                          const int16_t* quant_ptr,
+                          tran_low_t* qcoeff_ptr,
+                          tran_low_t* dqcoeff_ptr,
+                          const int16_t* dequant_ptr,
+                          uint16_t* eob_ptr,
+                          const int16_t* scan,
+                          const int16_t* iscan);
+#define vp9_quantize_fp vp9_quantize_fp_neon
+
+void vp9_quantize_fp_32x32_c(const tran_low_t* coeff_ptr,
+                             intptr_t n_coeffs,
+                             int skip_block,
+                             const int16_t* round_ptr,
+                             const int16_t* quant_ptr,
+                             tran_low_t* qcoeff_ptr,
+                             tran_low_t* dqcoeff_ptr,
+                             const int16_t* dequant_ptr,
+                             uint16_t* eob_ptr,
+                             const int16_t* scan,
+                             const int16_t* iscan);
+void vp9_quantize_fp_32x32_neon(const tran_low_t* coeff_ptr,
+                                intptr_t n_coeffs,
+                                int skip_block,
+                                const int16_t* round_ptr,
+                                const int16_t* quant_ptr,
+                                tran_low_t* qcoeff_ptr,
+                                tran_low_t* dqcoeff_ptr,
+                                const int16_t* dequant_ptr,
+                                uint16_t* eob_ptr,
+                                const int16_t* scan,
+                                const int16_t* iscan);
+#define vp9_quantize_fp_32x32 vp9_quantize_fp_32x32_neon
+
+void vp9_scale_and_extend_frame_c(const struct yv12_buffer_config* src,
+                                  struct yv12_buffer_config* dst,
+                                  INTERP_FILTER filter_type,
+                                  int phase_scaler);
+void vp9_scale_and_extend_frame_neon(const struct yv12_buffer_config* src,
+                                     struct yv12_buffer_config* dst,
+                                     INTERP_FILTER filter_type,
+                                     int phase_scaler);
+#define vp9_scale_and_extend_frame vp9_scale_and_extend_frame_neon
+
+void vp9_rtcd(void);
+
+#include "vpx_config.h"
+
+#ifdef RTCD_C
+#include "vpx_ports/arm.h"
+static void setup_rtcd_internal(void) {
+  int flags = arm_cpu_caps();
+
+  (void)flags;
+}
+#endif
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_config.asm
new file mode 100644
index 0000000..ea77627
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_config.asm
@@ -0,0 +1,98 @@
+@ This file was created from a .asm file
+@  using the ads2gas.pl script.
+	.syntax unified
+.equ VPX_ARCH_ARM ,  1
+.equ ARCH_ARM ,  1
+.equ VPX_ARCH_MIPS ,  0
+.equ ARCH_MIPS ,  0
+.equ VPX_ARCH_X86 ,  0
+.equ ARCH_X86 ,  0
+.equ VPX_ARCH_X86_64 ,  0
+.equ ARCH_X86_64 ,  0
+.equ VPX_ARCH_PPC ,  0
+.equ ARCH_PPC ,  0
+.equ HAVE_NEON ,  1
+.equ HAVE_NEON_ASM ,  0
+.equ HAVE_MIPS32 ,  0
+.equ HAVE_DSPR2 ,  0
+.equ HAVE_MSA ,  0
+.equ HAVE_MIPS64 ,  0
+.equ HAVE_MMX ,  0
+.equ HAVE_SSE ,  0
+.equ HAVE_SSE2 ,  0
+.equ HAVE_SSE3 ,  0
+.equ HAVE_SSSE3 ,  0
+.equ HAVE_SSE4_1 ,  0
+.equ HAVE_AVX ,  0
+.equ HAVE_AVX2 ,  0
+.equ HAVE_AVX512 ,  0
+.equ HAVE_VSX ,  0
+.equ HAVE_MMI ,  0
+.equ HAVE_VPX_PORTS ,  1
+.equ HAVE_PTHREAD_H ,  1
+.equ HAVE_UNISTD_H ,  0
+.equ CONFIG_DEPENDENCY_TRACKING ,  1
+.equ CONFIG_EXTERNAL_BUILD ,  1
+.equ CONFIG_INSTALL_DOCS ,  0
+.equ CONFIG_INSTALL_BINS ,  1
+.equ CONFIG_INSTALL_LIBS ,  1
+.equ CONFIG_INSTALL_SRCS ,  0
+.equ CONFIG_DEBUG ,  0
+.equ CONFIG_GPROF ,  0
+.equ CONFIG_GCOV ,  0
+.equ CONFIG_RVCT ,  0
+.equ CONFIG_GCC ,  1
+.equ CONFIG_MSVS ,  0
+.equ CONFIG_PIC ,  0
+.equ CONFIG_BIG_ENDIAN ,  0
+.equ CONFIG_CODEC_SRCS ,  0
+.equ CONFIG_DEBUG_LIBS ,  0
+.equ CONFIG_DEQUANT_TOKENS ,  0
+.equ CONFIG_DC_RECON ,  0
+.equ CONFIG_RUNTIME_CPU_DETECT ,  0
+.equ CONFIG_POSTPROC ,  1
+.equ CONFIG_VP9_POSTPROC ,  1
+.equ CONFIG_MULTITHREAD ,  1
+.equ CONFIG_INTERNAL_STATS ,  0
+.equ CONFIG_VP8_ENCODER ,  1
+.equ CONFIG_VP8_DECODER ,  1
+.equ CONFIG_VP9_ENCODER ,  1
+.equ CONFIG_VP9_DECODER ,  1
+.equ CONFIG_VP8 ,  1
+.equ CONFIG_VP9 ,  1
+.equ CONFIG_ENCODERS ,  1
+.equ CONFIG_DECODERS ,  1
+.equ CONFIG_STATIC_MSVCRT ,  0
+.equ CONFIG_SPATIAL_RESAMPLING ,  1
+.equ CONFIG_REALTIME_ONLY ,  1
+.equ CONFIG_ONTHEFLY_BITPACKING ,  0
+.equ CONFIG_ERROR_CONCEALMENT ,  0
+.equ CONFIG_SHARED ,  0
+.equ CONFIG_STATIC ,  1
+.equ CONFIG_SMALL ,  0
+.equ CONFIG_POSTPROC_VISUALIZER ,  0
+.equ CONFIG_OS_SUPPORT ,  1
+.equ CONFIG_UNIT_TESTS ,  1
+.equ CONFIG_WEBM_IO ,  1
+.equ CONFIG_LIBYUV ,  0
+.equ CONFIG_DECODE_PERF_TESTS ,  0
+.equ CONFIG_ENCODE_PERF_TESTS ,  0
+.equ CONFIG_MULTI_RES_ENCODING ,  1
+.equ CONFIG_TEMPORAL_DENOISING ,  1
+.equ CONFIG_VP9_TEMPORAL_DENOISING ,  1
+.equ CONFIG_CONSISTENT_RECODE ,  0
+.equ CONFIG_COEFFICIENT_RANGE_CHECKING ,  0
+.equ CONFIG_VP9_HIGHBITDEPTH ,  1
+.equ CONFIG_BETTER_HW_COMPATIBILITY ,  0
+.equ CONFIG_EXPERIMENTAL ,  0
+.equ CONFIG_SIZE_LIMIT ,  1
+.equ CONFIG_ALWAYS_ADJUST_BPM ,  0
+.equ CONFIG_BITSTREAM_DEBUG ,  0
+.equ CONFIG_MISMATCH_DEBUG ,  0
+.equ CONFIG_FP_MB_STATS ,  0
+.equ CONFIG_EMULATE_HARDWARE ,  0
+.equ CONFIG_NON_GREEDY_MV ,  0
+.equ CONFIG_RATE_CTRL ,  0
+.equ DECODE_WIDTH_LIMIT ,  16384
+.equ DECODE_HEIGHT_LIMIT ,  16384
+	.section	.note.GNU-stack,"",%progbits
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_config.c
new file mode 100644
index 0000000..3bdac65
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_config.c
@@ -0,0 +1,10 @@
+/* Copyright (c) 2011 The WebM project authors. All Rights Reserved. */
+/*  */
+/* Use of this source code is governed by a BSD-style license */
+/* that can be found in the LICENSE file in the root of the source */
+/* tree. An additional intellectual property rights grant can be found */
+/* in the file PATENTS.  All contributing project authors may */
+/* be found in the AUTHORS file in the root of the source tree. */
+#include "vpx/vpx_codec.h"
+static const char* const cfg = "--target=armv8-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv --enable-vp9-highbitdepth";
+const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_config.h
new file mode 100644
index 0000000..e0e39fa
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_config.h
@@ -0,0 +1,107 @@
+/* Copyright (c) 2011 The WebM project authors. All Rights Reserved. */
+/*  */
+/* Use of this source code is governed by a BSD-style license */
+/* that can be found in the LICENSE file in the root of the source */
+/* tree. An additional intellectual property rights grant can be found */
+/* in the file PATENTS.  All contributing project authors may */
+/* be found in the AUTHORS file in the root of the source tree. */
+/* This file automatically generated by configure. Do not edit! */
+#ifndef VPX_CONFIG_H
+#define VPX_CONFIG_H
+#define RESTRICT    
+#define INLINE      inline
+#define VPX_ARCH_ARM 1
+#define ARCH_ARM 1
+#define VPX_ARCH_MIPS 0
+#define ARCH_MIPS 0
+#define VPX_ARCH_X86 0
+#define ARCH_X86 0
+#define VPX_ARCH_X86_64 0
+#define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
+#define ARCH_PPC 0
+#define HAVE_NEON 1
+#define HAVE_NEON_ASM 0
+#define HAVE_MIPS32 0
+#define HAVE_DSPR2 0
+#define HAVE_MSA 0
+#define HAVE_MIPS64 0
+#define HAVE_MMX 0
+#define HAVE_SSE 0
+#define HAVE_SSE2 0
+#define HAVE_SSE3 0
+#define HAVE_SSSE3 0
+#define HAVE_SSE4_1 0
+#define HAVE_AVX 0
+#define HAVE_AVX2 0
+#define HAVE_AVX512 0
+#define HAVE_VSX 0
+#define HAVE_MMI 0
+#define HAVE_VPX_PORTS 1
+#define HAVE_PTHREAD_H 1
+#define HAVE_UNISTD_H 0
+#define CONFIG_DEPENDENCY_TRACKING 1
+#define CONFIG_EXTERNAL_BUILD 1
+#define CONFIG_INSTALL_DOCS 0
+#define CONFIG_INSTALL_BINS 1
+#define CONFIG_INSTALL_LIBS 1
+#define CONFIG_INSTALL_SRCS 0
+#define CONFIG_DEBUG 0
+#define CONFIG_GPROF 0
+#define CONFIG_GCOV 0
+#define CONFIG_RVCT 0
+#define CONFIG_GCC 1
+#define CONFIG_MSVS 0
+#define CONFIG_PIC 0
+#define CONFIG_BIG_ENDIAN 0
+#define CONFIG_CODEC_SRCS 0
+#define CONFIG_DEBUG_LIBS 0
+#define CONFIG_DEQUANT_TOKENS 0
+#define CONFIG_DC_RECON 0
+#define CONFIG_RUNTIME_CPU_DETECT 0
+#define CONFIG_POSTPROC 1
+#define CONFIG_VP9_POSTPROC 1
+#define CONFIG_MULTITHREAD 1
+#define CONFIG_INTERNAL_STATS 0
+#define CONFIG_VP8_ENCODER 1
+#define CONFIG_VP8_DECODER 1
+#define CONFIG_VP9_ENCODER 1
+#define CONFIG_VP9_DECODER 1
+#define CONFIG_VP8 1
+#define CONFIG_VP9 1
+#define CONFIG_ENCODERS 1
+#define CONFIG_DECODERS 1
+#define CONFIG_STATIC_MSVCRT 0
+#define CONFIG_SPATIAL_RESAMPLING 1
+#define CONFIG_REALTIME_ONLY 1
+#define CONFIG_ONTHEFLY_BITPACKING 0
+#define CONFIG_ERROR_CONCEALMENT 0
+#define CONFIG_SHARED 0
+#define CONFIG_STATIC 1
+#define CONFIG_SMALL 0
+#define CONFIG_POSTPROC_VISUALIZER 0
+#define CONFIG_OS_SUPPORT 1
+#define CONFIG_UNIT_TESTS 1
+#define CONFIG_WEBM_IO 1
+#define CONFIG_LIBYUV 0
+#define CONFIG_DECODE_PERF_TESTS 0
+#define CONFIG_ENCODE_PERF_TESTS 0
+#define CONFIG_MULTI_RES_ENCODING 1
+#define CONFIG_TEMPORAL_DENOISING 1
+#define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
+#define CONFIG_COEFFICIENT_RANGE_CHECKING 0
+#define CONFIG_VP9_HIGHBITDEPTH 1
+#define CONFIG_BETTER_HW_COMPATIBILITY 0
+#define CONFIG_EXPERIMENTAL 0
+#define CONFIG_SIZE_LIMIT 1
+#define CONFIG_ALWAYS_ADJUST_BPM 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
+#define CONFIG_FP_MB_STATS 0
+#define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
+#define DECODE_WIDTH_LIMIT 16384
+#define DECODE_HEIGHT_LIMIT 16384
+#endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_dsp_rtcd.h
new file mode 100644
index 0000000..71eb333
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_dsp_rtcd.h
@@ -0,0 +1,5191 @@
+// This file is generated. Do not edit.
+#ifndef VPX_DSP_RTCD_H_
+#define VPX_DSP_RTCD_H_
+
+#ifdef RTCD_C
+#define RTCD_EXTERN
+#else
+#define RTCD_EXTERN extern
+#endif
+
+/*
+ * DSP
+ */
+
+#include "vpx/vpx_integer.h"
+#include "vpx_dsp/vpx_dsp_common.h"
+#include "vpx_dsp/vpx_filter.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+unsigned int vpx_avg_4x4_c(const uint8_t*, int p);
+unsigned int vpx_avg_4x4_neon(const uint8_t*, int p);
+#define vpx_avg_4x4 vpx_avg_4x4_neon
+
+unsigned int vpx_avg_8x8_c(const uint8_t*, int p);
+unsigned int vpx_avg_8x8_neon(const uint8_t*, int p);
+#define vpx_avg_8x8 vpx_avg_8x8_neon
+
+void vpx_comp_avg_pred_c(uint8_t* comp_pred,
+                         const uint8_t* pred,
+                         int width,
+                         int height,
+                         const uint8_t* ref,
+                         int ref_stride);
+void vpx_comp_avg_pred_neon(uint8_t* comp_pred,
+                            const uint8_t* pred,
+                            int width,
+                            int height,
+                            const uint8_t* ref,
+                            int ref_stride);
+#define vpx_comp_avg_pred vpx_comp_avg_pred_neon
+
+void vpx_convolve8_c(const uint8_t* src,
+                     ptrdiff_t src_stride,
+                     uint8_t* dst,
+                     ptrdiff_t dst_stride,
+                     const InterpKernel* filter,
+                     int x0_q4,
+                     int x_step_q4,
+                     int y0_q4,
+                     int y_step_q4,
+                     int w,
+                     int h);
+void vpx_convolve8_neon(const uint8_t* src,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        ptrdiff_t dst_stride,
+                        const InterpKernel* filter,
+                        int x0_q4,
+                        int x_step_q4,
+                        int y0_q4,
+                        int y_step_q4,
+                        int w,
+                        int h);
+#define vpx_convolve8 vpx_convolve8_neon
+
+void vpx_convolve8_avg_c(const uint8_t* src,
+                         ptrdiff_t src_stride,
+                         uint8_t* dst,
+                         ptrdiff_t dst_stride,
+                         const InterpKernel* filter,
+                         int x0_q4,
+                         int x_step_q4,
+                         int y0_q4,
+                         int y_step_q4,
+                         int w,
+                         int h);
+void vpx_convolve8_avg_neon(const uint8_t* src,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst,
+                            ptrdiff_t dst_stride,
+                            const InterpKernel* filter,
+                            int x0_q4,
+                            int x_step_q4,
+                            int y0_q4,
+                            int y_step_q4,
+                            int w,
+                            int h);
+#define vpx_convolve8_avg vpx_convolve8_avg_neon
+
+void vpx_convolve8_avg_horiz_c(const uint8_t* src,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst,
+                               ptrdiff_t dst_stride,
+                               const InterpKernel* filter,
+                               int x0_q4,
+                               int x_step_q4,
+                               int y0_q4,
+                               int y_step_q4,
+                               int w,
+                               int h);
+void vpx_convolve8_avg_horiz_neon(const uint8_t* src,
+                                  ptrdiff_t src_stride,
+                                  uint8_t* dst,
+                                  ptrdiff_t dst_stride,
+                                  const InterpKernel* filter,
+                                  int x0_q4,
+                                  int x_step_q4,
+                                  int y0_q4,
+                                  int y_step_q4,
+                                  int w,
+                                  int h);
+#define vpx_convolve8_avg_horiz vpx_convolve8_avg_horiz_neon
+
+void vpx_convolve8_avg_vert_c(const uint8_t* src,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst,
+                              ptrdiff_t dst_stride,
+                              const InterpKernel* filter,
+                              int x0_q4,
+                              int x_step_q4,
+                              int y0_q4,
+                              int y_step_q4,
+                              int w,
+                              int h);
+void vpx_convolve8_avg_vert_neon(const uint8_t* src,
+                                 ptrdiff_t src_stride,
+                                 uint8_t* dst,
+                                 ptrdiff_t dst_stride,
+                                 const InterpKernel* filter,
+                                 int x0_q4,
+                                 int x_step_q4,
+                                 int y0_q4,
+                                 int y_step_q4,
+                                 int w,
+                                 int h);
+#define vpx_convolve8_avg_vert vpx_convolve8_avg_vert_neon
+
+void vpx_convolve8_horiz_c(const uint8_t* src,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst,
+                           ptrdiff_t dst_stride,
+                           const InterpKernel* filter,
+                           int x0_q4,
+                           int x_step_q4,
+                           int y0_q4,
+                           int y_step_q4,
+                           int w,
+                           int h);
+void vpx_convolve8_horiz_neon(const uint8_t* src,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst,
+                              ptrdiff_t dst_stride,
+                              const InterpKernel* filter,
+                              int x0_q4,
+                              int x_step_q4,
+                              int y0_q4,
+                              int y_step_q4,
+                              int w,
+                              int h);
+#define vpx_convolve8_horiz vpx_convolve8_horiz_neon
+
+void vpx_convolve8_vert_c(const uint8_t* src,
+                          ptrdiff_t src_stride,
+                          uint8_t* dst,
+                          ptrdiff_t dst_stride,
+                          const InterpKernel* filter,
+                          int x0_q4,
+                          int x_step_q4,
+                          int y0_q4,
+                          int y_step_q4,
+                          int w,
+                          int h);
+void vpx_convolve8_vert_neon(const uint8_t* src,
+                             ptrdiff_t src_stride,
+                             uint8_t* dst,
+                             ptrdiff_t dst_stride,
+                             const InterpKernel* filter,
+                             int x0_q4,
+                             int x_step_q4,
+                             int y0_q4,
+                             int y_step_q4,
+                             int w,
+                             int h);
+#define vpx_convolve8_vert vpx_convolve8_vert_neon
+
+void vpx_convolve_avg_c(const uint8_t* src,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        ptrdiff_t dst_stride,
+                        const InterpKernel* filter,
+                        int x0_q4,
+                        int x_step_q4,
+                        int y0_q4,
+                        int y_step_q4,
+                        int w,
+                        int h);
+void vpx_convolve_avg_neon(const uint8_t* src,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst,
+                           ptrdiff_t dst_stride,
+                           const InterpKernel* filter,
+                           int x0_q4,
+                           int x_step_q4,
+                           int y0_q4,
+                           int y_step_q4,
+                           int w,
+                           int h);
+#define vpx_convolve_avg vpx_convolve_avg_neon
+
+void vpx_convolve_copy_c(const uint8_t* src,
+                         ptrdiff_t src_stride,
+                         uint8_t* dst,
+                         ptrdiff_t dst_stride,
+                         const InterpKernel* filter,
+                         int x0_q4,
+                         int x_step_q4,
+                         int y0_q4,
+                         int y_step_q4,
+                         int w,
+                         int h);
+void vpx_convolve_copy_neon(const uint8_t* src,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst,
+                            ptrdiff_t dst_stride,
+                            const InterpKernel* filter,
+                            int x0_q4,
+                            int x_step_q4,
+                            int y0_q4,
+                            int y_step_q4,
+                            int w,
+                            int h);
+#define vpx_convolve_copy vpx_convolve_copy_neon
+
+void vpx_d117_predictor_16x16_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
+
+void vpx_d117_predictor_32x32_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
+
+void vpx_d117_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
+
+void vpx_d117_predictor_8x8_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
+
+void vpx_d135_predictor_16x16_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_d135_predictor_16x16_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_neon
+
+void vpx_d135_predictor_32x32_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_d135_predictor_32x32_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_neon
+
+void vpx_d135_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_d135_predictor_4x4_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_neon
+
+void vpx_d135_predictor_8x8_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_d135_predictor_8x8_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_neon
+
+void vpx_d153_predictor_16x16_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d153_predictor_16x16 vpx_d153_predictor_16x16_c
+
+void vpx_d153_predictor_32x32_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d153_predictor_32x32 vpx_d153_predictor_32x32_c
+
+void vpx_d153_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d153_predictor_4x4 vpx_d153_predictor_4x4_c
+
+void vpx_d153_predictor_8x8_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d153_predictor_8x8 vpx_d153_predictor_8x8_c
+
+void vpx_d207_predictor_16x16_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d207_predictor_16x16 vpx_d207_predictor_16x16_c
+
+void vpx_d207_predictor_32x32_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d207_predictor_32x32 vpx_d207_predictor_32x32_c
+
+void vpx_d207_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_c
+
+void vpx_d207_predictor_8x8_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d207_predictor_8x8 vpx_d207_predictor_8x8_c
+
+void vpx_d45_predictor_16x16_c(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+void vpx_d45_predictor_16x16_neon(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+#define vpx_d45_predictor_16x16 vpx_d45_predictor_16x16_neon
+
+void vpx_d45_predictor_32x32_c(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+void vpx_d45_predictor_32x32_neon(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+#define vpx_d45_predictor_32x32 vpx_d45_predictor_32x32_neon
+
+void vpx_d45_predictor_4x4_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_d45_predictor_4x4_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_neon
+
+void vpx_d45_predictor_8x8_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_d45_predictor_8x8_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_neon
+
+void vpx_d45e_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
+
+void vpx_d63_predictor_16x16_c(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_d63_predictor_16x16 vpx_d63_predictor_16x16_c
+
+void vpx_d63_predictor_32x32_c(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_d63_predictor_32x32 vpx_d63_predictor_32x32_c
+
+void vpx_d63_predictor_4x4_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+#define vpx_d63_predictor_4x4 vpx_d63_predictor_4x4_c
+
+void vpx_d63_predictor_8x8_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+#define vpx_d63_predictor_8x8 vpx_d63_predictor_8x8_c
+
+void vpx_d63e_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
+
+void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+void vpx_dc_128_predictor_16x16_neon(uint8_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint8_t* above,
+                                     const uint8_t* left);
+#define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_neon
+
+void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+void vpx_dc_128_predictor_32x32_neon(uint8_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint8_t* above,
+                                     const uint8_t* left);
+#define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_neon
+
+void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_dc_128_predictor_4x4_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_neon
+
+void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_dc_128_predictor_8x8_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_neon
+
+void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+void vpx_dc_left_predictor_16x16_neon(uint8_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint8_t* above,
+                                      const uint8_t* left);
+#define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_neon
+
+void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+void vpx_dc_left_predictor_32x32_neon(uint8_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint8_t* above,
+                                      const uint8_t* left);
+#define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_neon
+
+void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+void vpx_dc_left_predictor_4x4_neon(uint8_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint8_t* above,
+                                    const uint8_t* left);
+#define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_neon
+
+void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+void vpx_dc_left_predictor_8x8_neon(uint8_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint8_t* above,
+                                    const uint8_t* left);
+#define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_neon
+
+void vpx_dc_predictor_16x16_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_dc_predictor_16x16_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_neon
+
+void vpx_dc_predictor_32x32_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_dc_predictor_32x32_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_neon
+
+void vpx_dc_predictor_4x4_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+void vpx_dc_predictor_4x4_neon(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_neon
+
+void vpx_dc_predictor_8x8_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+void vpx_dc_predictor_8x8_neon(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_neon
+
+void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+void vpx_dc_top_predictor_16x16_neon(uint8_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint8_t* above,
+                                     const uint8_t* left);
+#define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_neon
+
+void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+void vpx_dc_top_predictor_32x32_neon(uint8_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint8_t* above,
+                                     const uint8_t* left);
+#define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_neon
+
+void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_dc_top_predictor_4x4_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_neon
+
+void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_dc_top_predictor_8x8_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_neon
+
+void vpx_fdct16x16_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct16x16_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct16x16 vpx_fdct16x16_neon
+
+void vpx_fdct16x16_1_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct16x16_1_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct16x16_1 vpx_fdct16x16_1_neon
+
+void vpx_fdct32x32_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct32x32_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct32x32 vpx_fdct32x32_neon
+
+void vpx_fdct32x32_1_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct32x32_1_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct32x32_1 vpx_fdct32x32_1_neon
+
+void vpx_fdct32x32_rd_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct32x32_rd_neon(const int16_t* input,
+                           tran_low_t* output,
+                           int stride);
+#define vpx_fdct32x32_rd vpx_fdct32x32_rd_neon
+
+void vpx_fdct4x4_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct4x4_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct4x4 vpx_fdct4x4_neon
+
+void vpx_fdct4x4_1_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct4x4_1_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct4x4_1 vpx_fdct4x4_1_neon
+
+void vpx_fdct8x8_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct8x8_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct8x8 vpx_fdct8x8_neon
+
+void vpx_fdct8x8_1_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct8x8_1_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct8x8_1 vpx_fdct8x8_1_neon
+
+void vpx_get16x16var_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* ref_ptr,
+                       int ref_stride,
+                       unsigned int* sse,
+                       int* sum);
+void vpx_get16x16var_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride,
+                          unsigned int* sse,
+                          int* sum);
+#define vpx_get16x16var vpx_get16x16var_neon
+
+unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
+                                int src_stride,
+                                const unsigned char* ref_ptr,
+                                int ref_stride);
+unsigned int vpx_get4x4sse_cs_neon(const unsigned char* src_ptr,
+                                   int src_stride,
+                                   const unsigned char* ref_ptr,
+                                   int ref_stride);
+#define vpx_get4x4sse_cs vpx_get4x4sse_cs_neon
+
+void vpx_get8x8var_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* ref_ptr,
+                     int ref_stride,
+                     unsigned int* sse,
+                     int* sum);
+void vpx_get8x8var_neon(const uint8_t* src_ptr,
+                        int src_stride,
+                        const uint8_t* ref_ptr,
+                        int ref_stride,
+                        unsigned int* sse,
+                        int* sum);
+#define vpx_get8x8var vpx_get8x8var_neon
+
+unsigned int vpx_get_mb_ss_c(const int16_t*);
+#define vpx_get_mb_ss vpx_get_mb_ss_c
+
+void vpx_h_predictor_16x16_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_h_predictor_16x16_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_h_predictor_16x16 vpx_h_predictor_16x16_neon
+
+void vpx_h_predictor_32x32_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_h_predictor_32x32_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_h_predictor_32x32 vpx_h_predictor_32x32_neon
+
+void vpx_h_predictor_4x4_c(uint8_t* dst,
+                           ptrdiff_t stride,
+                           const uint8_t* above,
+                           const uint8_t* left);
+void vpx_h_predictor_4x4_neon(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_h_predictor_4x4 vpx_h_predictor_4x4_neon
+
+void vpx_h_predictor_8x8_c(uint8_t* dst,
+                           ptrdiff_t stride,
+                           const uint8_t* above,
+                           const uint8_t* left);
+void vpx_h_predictor_8x8_neon(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_h_predictor_8x8 vpx_h_predictor_8x8_neon
+
+void vpx_hadamard_16x16_c(const int16_t* src_diff,
+                          ptrdiff_t src_stride,
+                          tran_low_t* coeff);
+void vpx_hadamard_16x16_neon(const int16_t* src_diff,
+                             ptrdiff_t src_stride,
+                             tran_low_t* coeff);
+#define vpx_hadamard_16x16 vpx_hadamard_16x16_neon
+
+void vpx_hadamard_32x32_c(const int16_t* src_diff,
+                          ptrdiff_t src_stride,
+                          tran_low_t* coeff);
+#define vpx_hadamard_32x32 vpx_hadamard_32x32_c
+
+void vpx_hadamard_8x8_c(const int16_t* src_diff,
+                        ptrdiff_t src_stride,
+                        tran_low_t* coeff);
+void vpx_hadamard_8x8_neon(const int16_t* src_diff,
+                           ptrdiff_t src_stride,
+                           tran_low_t* coeff);
+#define vpx_hadamard_8x8 vpx_hadamard_8x8_neon
+
+void vpx_he_predictor_4x4_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+#define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
+
+void vpx_highbd_10_get16x16var_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse,
+                                 int* sum);
+#define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_c
+
+void vpx_highbd_10_get8x8var_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse,
+                               int* sum);
+#define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_c
+
+unsigned int vpx_highbd_10_mse16x16_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      unsigned int* sse);
+#define vpx_highbd_10_mse16x16 vpx_highbd_10_mse16x16_c
+
+unsigned int vpx_highbd_10_mse16x8_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     unsigned int* sse);
+#define vpx_highbd_10_mse16x8 vpx_highbd_10_mse16x8_c
+
+unsigned int vpx_highbd_10_mse8x16_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     unsigned int* sse);
+#define vpx_highbd_10_mse8x16 vpx_highbd_10_mse8x16_c
+
+unsigned int vpx_highbd_10_mse8x8_c(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_highbd_10_mse8x8 vpx_highbd_10_mse8x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x16 \
+  vpx_highbd_10_sub_pixel_avg_variance16x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x32 \
+  vpx_highbd_10_sub_pixel_avg_variance16x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x8 \
+  vpx_highbd_10_sub_pixel_avg_variance16x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x16 \
+  vpx_highbd_10_sub_pixel_avg_variance32x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x32 \
+  vpx_highbd_10_sub_pixel_avg_variance32x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x64 \
+  vpx_highbd_10_sub_pixel_avg_variance32x64_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance4x4 \
+  vpx_highbd_10_sub_pixel_avg_variance4x4_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance4x8 \
+  vpx_highbd_10_sub_pixel_avg_variance4x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance64x32 \
+  vpx_highbd_10_sub_pixel_avg_variance64x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance64x64 \
+  vpx_highbd_10_sub_pixel_avg_variance64x64_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x16 \
+  vpx_highbd_10_sub_pixel_avg_variance8x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x4 \
+  vpx_highbd_10_sub_pixel_avg_variance8x4_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x8 \
+  vpx_highbd_10_sub_pixel_avg_variance8x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x16 \
+  vpx_highbd_10_sub_pixel_variance16x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x32 \
+  vpx_highbd_10_sub_pixel_variance16x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x8 \
+  vpx_highbd_10_sub_pixel_variance16x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x16 \
+  vpx_highbd_10_sub_pixel_variance32x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x32 \
+  vpx_highbd_10_sub_pixel_variance32x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x64 \
+  vpx_highbd_10_sub_pixel_variance32x64_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance4x4 \
+  vpx_highbd_10_sub_pixel_variance4x4_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance4x8 \
+  vpx_highbd_10_sub_pixel_variance4x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance64x32 \
+  vpx_highbd_10_sub_pixel_variance64x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance64x64 \
+  vpx_highbd_10_sub_pixel_variance64x64_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x16 \
+  vpx_highbd_10_sub_pixel_variance8x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x4 \
+  vpx_highbd_10_sub_pixel_variance8x4_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x8 \
+  vpx_highbd_10_sub_pixel_variance8x8_c
+
+unsigned int vpx_highbd_10_variance16x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance16x16 vpx_highbd_10_variance16x16_c
+
+unsigned int vpx_highbd_10_variance16x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance16x32 vpx_highbd_10_variance16x32_c
+
+unsigned int vpx_highbd_10_variance16x8_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_10_variance16x8 vpx_highbd_10_variance16x8_c
+
+unsigned int vpx_highbd_10_variance32x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance32x16 vpx_highbd_10_variance32x16_c
+
+unsigned int vpx_highbd_10_variance32x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance32x32 vpx_highbd_10_variance32x32_c
+
+unsigned int vpx_highbd_10_variance32x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance32x64 vpx_highbd_10_variance32x64_c
+
+unsigned int vpx_highbd_10_variance4x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_10_variance4x4 vpx_highbd_10_variance4x4_c
+
+unsigned int vpx_highbd_10_variance4x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_10_variance4x8 vpx_highbd_10_variance4x8_c
+
+unsigned int vpx_highbd_10_variance64x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance64x32 vpx_highbd_10_variance64x32_c
+
+unsigned int vpx_highbd_10_variance64x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance64x64 vpx_highbd_10_variance64x64_c
+
+unsigned int vpx_highbd_10_variance8x16_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_10_variance8x16 vpx_highbd_10_variance8x16_c
+
+unsigned int vpx_highbd_10_variance8x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_10_variance8x4 vpx_highbd_10_variance8x4_c
+
+unsigned int vpx_highbd_10_variance8x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_10_variance8x8 vpx_highbd_10_variance8x8_c
+
+void vpx_highbd_12_get16x16var_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse,
+                                 int* sum);
+#define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_c
+
+void vpx_highbd_12_get8x8var_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse,
+                               int* sum);
+#define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_c
+
+unsigned int vpx_highbd_12_mse16x16_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      unsigned int* sse);
+#define vpx_highbd_12_mse16x16 vpx_highbd_12_mse16x16_c
+
+unsigned int vpx_highbd_12_mse16x8_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     unsigned int* sse);
+#define vpx_highbd_12_mse16x8 vpx_highbd_12_mse16x8_c
+
+unsigned int vpx_highbd_12_mse8x16_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     unsigned int* sse);
+#define vpx_highbd_12_mse8x16 vpx_highbd_12_mse8x16_c
+
+unsigned int vpx_highbd_12_mse8x8_c(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_highbd_12_mse8x8 vpx_highbd_12_mse8x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x16 \
+  vpx_highbd_12_sub_pixel_avg_variance16x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x32 \
+  vpx_highbd_12_sub_pixel_avg_variance16x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x8 \
+  vpx_highbd_12_sub_pixel_avg_variance16x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x16 \
+  vpx_highbd_12_sub_pixel_avg_variance32x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x32 \
+  vpx_highbd_12_sub_pixel_avg_variance32x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x64 \
+  vpx_highbd_12_sub_pixel_avg_variance32x64_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance4x4 \
+  vpx_highbd_12_sub_pixel_avg_variance4x4_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance4x8 \
+  vpx_highbd_12_sub_pixel_avg_variance4x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance64x32 \
+  vpx_highbd_12_sub_pixel_avg_variance64x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance64x64 \
+  vpx_highbd_12_sub_pixel_avg_variance64x64_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x16 \
+  vpx_highbd_12_sub_pixel_avg_variance8x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x4 \
+  vpx_highbd_12_sub_pixel_avg_variance8x4_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x8 \
+  vpx_highbd_12_sub_pixel_avg_variance8x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x16 \
+  vpx_highbd_12_sub_pixel_variance16x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x32 \
+  vpx_highbd_12_sub_pixel_variance16x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x8 \
+  vpx_highbd_12_sub_pixel_variance16x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x16 \
+  vpx_highbd_12_sub_pixel_variance32x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x32 \
+  vpx_highbd_12_sub_pixel_variance32x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x64 \
+  vpx_highbd_12_sub_pixel_variance32x64_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance4x4 \
+  vpx_highbd_12_sub_pixel_variance4x4_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance4x8 \
+  vpx_highbd_12_sub_pixel_variance4x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance64x32 \
+  vpx_highbd_12_sub_pixel_variance64x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance64x64 \
+  vpx_highbd_12_sub_pixel_variance64x64_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x16 \
+  vpx_highbd_12_sub_pixel_variance8x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x4 \
+  vpx_highbd_12_sub_pixel_variance8x4_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x8 \
+  vpx_highbd_12_sub_pixel_variance8x8_c
+
+unsigned int vpx_highbd_12_variance16x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance16x16 vpx_highbd_12_variance16x16_c
+
+unsigned int vpx_highbd_12_variance16x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance16x32 vpx_highbd_12_variance16x32_c
+
+unsigned int vpx_highbd_12_variance16x8_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_12_variance16x8 vpx_highbd_12_variance16x8_c
+
+unsigned int vpx_highbd_12_variance32x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance32x16 vpx_highbd_12_variance32x16_c
+
+unsigned int vpx_highbd_12_variance32x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance32x32 vpx_highbd_12_variance32x32_c
+
+unsigned int vpx_highbd_12_variance32x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance32x64 vpx_highbd_12_variance32x64_c
+
+unsigned int vpx_highbd_12_variance4x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_12_variance4x4 vpx_highbd_12_variance4x4_c
+
+unsigned int vpx_highbd_12_variance4x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_12_variance4x8 vpx_highbd_12_variance4x8_c
+
+unsigned int vpx_highbd_12_variance64x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance64x32 vpx_highbd_12_variance64x32_c
+
+unsigned int vpx_highbd_12_variance64x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance64x64 vpx_highbd_12_variance64x64_c
+
+unsigned int vpx_highbd_12_variance8x16_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_12_variance8x16 vpx_highbd_12_variance8x16_c
+
+unsigned int vpx_highbd_12_variance8x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_12_variance8x4 vpx_highbd_12_variance8x4_c
+
+unsigned int vpx_highbd_12_variance8x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_12_variance8x8 vpx_highbd_12_variance8x8_c
+
+void vpx_highbd_8_get16x16var_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                unsigned int* sse,
+                                int* sum);
+#define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_c
+
+void vpx_highbd_8_get8x8var_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride,
+                              unsigned int* sse,
+                              int* sum);
+#define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_c
+
+unsigned int vpx_highbd_8_mse16x16_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     unsigned int* sse);
+#define vpx_highbd_8_mse16x16 vpx_highbd_8_mse16x16_c
+
+unsigned int vpx_highbd_8_mse16x8_c(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_highbd_8_mse16x8 vpx_highbd_8_mse16x8_c
+
+unsigned int vpx_highbd_8_mse8x16_c(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_highbd_8_mse8x16 vpx_highbd_8_mse8x16_c
+
+unsigned int vpx_highbd_8_mse8x8_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   unsigned int* sse);
+#define vpx_highbd_8_mse8x8 vpx_highbd_8_mse8x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x16 \
+  vpx_highbd_8_sub_pixel_avg_variance16x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x32 \
+  vpx_highbd_8_sub_pixel_avg_variance16x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x8 \
+  vpx_highbd_8_sub_pixel_avg_variance16x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x16 \
+  vpx_highbd_8_sub_pixel_avg_variance32x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x32 \
+  vpx_highbd_8_sub_pixel_avg_variance32x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x64 \
+  vpx_highbd_8_sub_pixel_avg_variance32x64_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
+                                                  const uint8_t* ref_ptr,
+                                                  int ref_stride,
+                                                  uint32_t* sse,
+                                                  const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance4x4 \
+  vpx_highbd_8_sub_pixel_avg_variance4x4_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
+                                                  const uint8_t* ref_ptr,
+                                                  int ref_stride,
+                                                  uint32_t* sse,
+                                                  const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance4x8 \
+  vpx_highbd_8_sub_pixel_avg_variance4x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance64x32 \
+  vpx_highbd_8_sub_pixel_avg_variance64x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance64x64 \
+  vpx_highbd_8_sub_pixel_avg_variance64x64_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x16 \
+  vpx_highbd_8_sub_pixel_avg_variance8x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
+                                                  const uint8_t* ref_ptr,
+                                                  int ref_stride,
+                                                  uint32_t* sse,
+                                                  const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x4 \
+  vpx_highbd_8_sub_pixel_avg_variance8x4_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
+                                                  const uint8_t* ref_ptr,
+                                                  int ref_stride,
+                                                  uint32_t* sse,
+                                                  const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x8 \
+  vpx_highbd_8_sub_pixel_avg_variance8x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x16 \
+  vpx_highbd_8_sub_pixel_variance16x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x32 \
+  vpx_highbd_8_sub_pixel_variance16x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x8 \
+  vpx_highbd_8_sub_pixel_variance16x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x16 \
+  vpx_highbd_8_sub_pixel_variance32x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x32 \
+  vpx_highbd_8_sub_pixel_variance32x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x64 \
+  vpx_highbd_8_sub_pixel_variance32x64_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance4x4 vpx_highbd_8_sub_pixel_variance4x4_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance4x8 vpx_highbd_8_sub_pixel_variance4x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance64x32 \
+  vpx_highbd_8_sub_pixel_variance64x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance64x64 \
+  vpx_highbd_8_sub_pixel_variance64x64_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x16 \
+  vpx_highbd_8_sub_pixel_variance8x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x4 vpx_highbd_8_sub_pixel_variance8x4_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x8 vpx_highbd_8_sub_pixel_variance8x8_c
+
+unsigned int vpx_highbd_8_variance16x16_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance16x16 vpx_highbd_8_variance16x16_c
+
+unsigned int vpx_highbd_8_variance16x32_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance16x32 vpx_highbd_8_variance16x32_c
+
+unsigned int vpx_highbd_8_variance16x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_8_variance16x8 vpx_highbd_8_variance16x8_c
+
+unsigned int vpx_highbd_8_variance32x16_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance32x16 vpx_highbd_8_variance32x16_c
+
+unsigned int vpx_highbd_8_variance32x32_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance32x32 vpx_highbd_8_variance32x32_c
+
+unsigned int vpx_highbd_8_variance32x64_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance32x64 vpx_highbd_8_variance32x64_c
+
+unsigned int vpx_highbd_8_variance4x4_c(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        unsigned int* sse);
+#define vpx_highbd_8_variance4x4 vpx_highbd_8_variance4x4_c
+
+unsigned int vpx_highbd_8_variance4x8_c(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        unsigned int* sse);
+#define vpx_highbd_8_variance4x8 vpx_highbd_8_variance4x8_c
+
+unsigned int vpx_highbd_8_variance64x32_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance64x32 vpx_highbd_8_variance64x32_c
+
+unsigned int vpx_highbd_8_variance64x64_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance64x64 vpx_highbd_8_variance64x64_c
+
+unsigned int vpx_highbd_8_variance8x16_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_8_variance8x16 vpx_highbd_8_variance8x16_c
+
+unsigned int vpx_highbd_8_variance8x4_c(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        unsigned int* sse);
+#define vpx_highbd_8_variance8x4 vpx_highbd_8_variance8x4_c
+
+unsigned int vpx_highbd_8_variance8x8_c(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        unsigned int* sse);
+#define vpx_highbd_8_variance8x8 vpx_highbd_8_variance8x8_c
+
+unsigned int vpx_highbd_avg_4x4_c(const uint8_t* s8, int p);
+#define vpx_highbd_avg_4x4 vpx_highbd_avg_4x4_c
+
+unsigned int vpx_highbd_avg_8x8_c(const uint8_t* s8, int p);
+#define vpx_highbd_avg_8x8 vpx_highbd_avg_8x8_c
+
+void vpx_highbd_comp_avg_pred_c(uint16_t* comp_pred,
+                                const uint16_t* pred,
+                                int width,
+                                int height,
+                                const uint16_t* ref,
+                                int ref_stride);
+#define vpx_highbd_comp_avg_pred vpx_highbd_comp_avg_pred_c
+
+void vpx_highbd_convolve8_c(const uint16_t* src,
+                            ptrdiff_t src_stride,
+                            uint16_t* dst,
+                            ptrdiff_t dst_stride,
+                            const InterpKernel* filter,
+                            int x0_q4,
+                            int x_step_q4,
+                            int y0_q4,
+                            int y_step_q4,
+                            int w,
+                            int h,
+                            int bd);
+void vpx_highbd_convolve8_neon(const uint16_t* src,
+                               ptrdiff_t src_stride,
+                               uint16_t* dst,
+                               ptrdiff_t dst_stride,
+                               const InterpKernel* filter,
+                               int x0_q4,
+                               int x_step_q4,
+                               int y0_q4,
+                               int y_step_q4,
+                               int w,
+                               int h,
+                               int bd);
+#define vpx_highbd_convolve8 vpx_highbd_convolve8_neon
+
+void vpx_highbd_convolve8_avg_c(const uint16_t* src,
+                                ptrdiff_t src_stride,
+                                uint16_t* dst,
+                                ptrdiff_t dst_stride,
+                                const InterpKernel* filter,
+                                int x0_q4,
+                                int x_step_q4,
+                                int y0_q4,
+                                int y_step_q4,
+                                int w,
+                                int h,
+                                int bd);
+void vpx_highbd_convolve8_avg_neon(const uint16_t* src,
+                                   ptrdiff_t src_stride,
+                                   uint16_t* dst,
+                                   ptrdiff_t dst_stride,
+                                   const InterpKernel* filter,
+                                   int x0_q4,
+                                   int x_step_q4,
+                                   int y0_q4,
+                                   int y_step_q4,
+                                   int w,
+                                   int h,
+                                   int bd);
+#define vpx_highbd_convolve8_avg vpx_highbd_convolve8_avg_neon
+
+void vpx_highbd_convolve8_avg_horiz_c(const uint16_t* src,
+                                      ptrdiff_t src_stride,
+                                      uint16_t* dst,
+                                      ptrdiff_t dst_stride,
+                                      const InterpKernel* filter,
+                                      int x0_q4,
+                                      int x_step_q4,
+                                      int y0_q4,
+                                      int y_step_q4,
+                                      int w,
+                                      int h,
+                                      int bd);
+void vpx_highbd_convolve8_avg_horiz_neon(const uint16_t* src,
+                                         ptrdiff_t src_stride,
+                                         uint16_t* dst,
+                                         ptrdiff_t dst_stride,
+                                         const InterpKernel* filter,
+                                         int x0_q4,
+                                         int x_step_q4,
+                                         int y0_q4,
+                                         int y_step_q4,
+                                         int w,
+                                         int h,
+                                         int bd);
+#define vpx_highbd_convolve8_avg_horiz vpx_highbd_convolve8_avg_horiz_neon
+
+void vpx_highbd_convolve8_avg_vert_c(const uint16_t* src,
+                                     ptrdiff_t src_stride,
+                                     uint16_t* dst,
+                                     ptrdiff_t dst_stride,
+                                     const InterpKernel* filter,
+                                     int x0_q4,
+                                     int x_step_q4,
+                                     int y0_q4,
+                                     int y_step_q4,
+                                     int w,
+                                     int h,
+                                     int bd);
+void vpx_highbd_convolve8_avg_vert_neon(const uint16_t* src,
+                                        ptrdiff_t src_stride,
+                                        uint16_t* dst,
+                                        ptrdiff_t dst_stride,
+                                        const InterpKernel* filter,
+                                        int x0_q4,
+                                        int x_step_q4,
+                                        int y0_q4,
+                                        int y_step_q4,
+                                        int w,
+                                        int h,
+                                        int bd);
+#define vpx_highbd_convolve8_avg_vert vpx_highbd_convolve8_avg_vert_neon
+
+void vpx_highbd_convolve8_horiz_c(const uint16_t* src,
+                                  ptrdiff_t src_stride,
+                                  uint16_t* dst,
+                                  ptrdiff_t dst_stride,
+                                  const InterpKernel* filter,
+                                  int x0_q4,
+                                  int x_step_q4,
+                                  int y0_q4,
+                                  int y_step_q4,
+                                  int w,
+                                  int h,
+                                  int bd);
+void vpx_highbd_convolve8_horiz_neon(const uint16_t* src,
+                                     ptrdiff_t src_stride,
+                                     uint16_t* dst,
+                                     ptrdiff_t dst_stride,
+                                     const InterpKernel* filter,
+                                     int x0_q4,
+                                     int x_step_q4,
+                                     int y0_q4,
+                                     int y_step_q4,
+                                     int w,
+                                     int h,
+                                     int bd);
+#define vpx_highbd_convolve8_horiz vpx_highbd_convolve8_horiz_neon
+
+void vpx_highbd_convolve8_vert_c(const uint16_t* src,
+                                 ptrdiff_t src_stride,
+                                 uint16_t* dst,
+                                 ptrdiff_t dst_stride,
+                                 const InterpKernel* filter,
+                                 int x0_q4,
+                                 int x_step_q4,
+                                 int y0_q4,
+                                 int y_step_q4,
+                                 int w,
+                                 int h,
+                                 int bd);
+void vpx_highbd_convolve8_vert_neon(const uint16_t* src,
+                                    ptrdiff_t src_stride,
+                                    uint16_t* dst,
+                                    ptrdiff_t dst_stride,
+                                    const InterpKernel* filter,
+                                    int x0_q4,
+                                    int x_step_q4,
+                                    int y0_q4,
+                                    int y_step_q4,
+                                    int w,
+                                    int h,
+                                    int bd);
+#define vpx_highbd_convolve8_vert vpx_highbd_convolve8_vert_neon
+
+void vpx_highbd_convolve_avg_c(const uint16_t* src,
+                               ptrdiff_t src_stride,
+                               uint16_t* dst,
+                               ptrdiff_t dst_stride,
+                               const InterpKernel* filter,
+                               int x0_q4,
+                               int x_step_q4,
+                               int y0_q4,
+                               int y_step_q4,
+                               int w,
+                               int h,
+                               int bd);
+void vpx_highbd_convolve_avg_neon(const uint16_t* src,
+                                  ptrdiff_t src_stride,
+                                  uint16_t* dst,
+                                  ptrdiff_t dst_stride,
+                                  const InterpKernel* filter,
+                                  int x0_q4,
+                                  int x_step_q4,
+                                  int y0_q4,
+                                  int y_step_q4,
+                                  int w,
+                                  int h,
+                                  int bd);
+#define vpx_highbd_convolve_avg vpx_highbd_convolve_avg_neon
+
+void vpx_highbd_convolve_copy_c(const uint16_t* src,
+                                ptrdiff_t src_stride,
+                                uint16_t* dst,
+                                ptrdiff_t dst_stride,
+                                const InterpKernel* filter,
+                                int x0_q4,
+                                int x_step_q4,
+                                int y0_q4,
+                                int y_step_q4,
+                                int w,
+                                int h,
+                                int bd);
+void vpx_highbd_convolve_copy_neon(const uint16_t* src,
+                                   ptrdiff_t src_stride,
+                                   uint16_t* dst,
+                                   ptrdiff_t dst_stride,
+                                   const InterpKernel* filter,
+                                   int x0_q4,
+                                   int x_step_q4,
+                                   int y0_q4,
+                                   int y_step_q4,
+                                   int w,
+                                   int h,
+                                   int bd);
+#define vpx_highbd_convolve_copy vpx_highbd_convolve_copy_neon
+
+void vpx_highbd_d117_predictor_16x16_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d117_predictor_16x16 vpx_highbd_d117_predictor_16x16_c
+
+void vpx_highbd_d117_predictor_32x32_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d117_predictor_32x32 vpx_highbd_d117_predictor_32x32_c
+
+void vpx_highbd_d117_predictor_4x4_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d117_predictor_4x4 vpx_highbd_d117_predictor_4x4_c
+
+void vpx_highbd_d117_predictor_8x8_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d117_predictor_8x8 vpx_highbd_d117_predictor_8x8_c
+
+void vpx_highbd_d135_predictor_16x16_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_d135_predictor_16x16_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_d135_predictor_16x16 vpx_highbd_d135_predictor_16x16_neon
+
+void vpx_highbd_d135_predictor_32x32_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_d135_predictor_32x32_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_d135_predictor_32x32 vpx_highbd_d135_predictor_32x32_neon
+
+void vpx_highbd_d135_predictor_4x4_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_d135_predictor_4x4_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_d135_predictor_4x4 vpx_highbd_d135_predictor_4x4_neon
+
+void vpx_highbd_d135_predictor_8x8_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_d135_predictor_8x8_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_d135_predictor_8x8 vpx_highbd_d135_predictor_8x8_neon
+
+void vpx_highbd_d153_predictor_16x16_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d153_predictor_16x16 vpx_highbd_d153_predictor_16x16_c
+
+void vpx_highbd_d153_predictor_32x32_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d153_predictor_32x32 vpx_highbd_d153_predictor_32x32_c
+
+void vpx_highbd_d153_predictor_4x4_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d153_predictor_4x4 vpx_highbd_d153_predictor_4x4_c
+
+void vpx_highbd_d153_predictor_8x8_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d153_predictor_8x8 vpx_highbd_d153_predictor_8x8_c
+
+void vpx_highbd_d207_predictor_16x16_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d207_predictor_16x16 vpx_highbd_d207_predictor_16x16_c
+
+void vpx_highbd_d207_predictor_32x32_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d207_predictor_32x32 vpx_highbd_d207_predictor_32x32_c
+
+void vpx_highbd_d207_predictor_4x4_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d207_predictor_4x4 vpx_highbd_d207_predictor_4x4_c
+
+void vpx_highbd_d207_predictor_8x8_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d207_predictor_8x8 vpx_highbd_d207_predictor_8x8_c
+
+void vpx_highbd_d45_predictor_16x16_c(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+void vpx_highbd_d45_predictor_16x16_neon(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+#define vpx_highbd_d45_predictor_16x16 vpx_highbd_d45_predictor_16x16_neon
+
+void vpx_highbd_d45_predictor_32x32_c(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+void vpx_highbd_d45_predictor_32x32_neon(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+#define vpx_highbd_d45_predictor_32x32 vpx_highbd_d45_predictor_32x32_neon
+
+void vpx_highbd_d45_predictor_4x4_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_d45_predictor_4x4_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d45_predictor_4x4 vpx_highbd_d45_predictor_4x4_neon
+
+void vpx_highbd_d45_predictor_8x8_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_d45_predictor_8x8_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d45_predictor_8x8 vpx_highbd_d45_predictor_8x8_neon
+
+void vpx_highbd_d63_predictor_16x16_c(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_d63_predictor_16x16 vpx_highbd_d63_predictor_16x16_c
+
+void vpx_highbd_d63_predictor_32x32_c(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_d63_predictor_32x32 vpx_highbd_d63_predictor_32x32_c
+
+void vpx_highbd_d63_predictor_4x4_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+#define vpx_highbd_d63_predictor_4x4 vpx_highbd_d63_predictor_4x4_c
+
+void vpx_highbd_d63_predictor_8x8_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+#define vpx_highbd_d63_predictor_8x8 vpx_highbd_d63_predictor_8x8_c
+
+void vpx_highbd_dc_128_predictor_16x16_c(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+void vpx_highbd_dc_128_predictor_16x16_neon(uint16_t* dst,
+                                            ptrdiff_t stride,
+                                            const uint16_t* above,
+                                            const uint16_t* left,
+                                            int bd);
+#define vpx_highbd_dc_128_predictor_16x16 vpx_highbd_dc_128_predictor_16x16_neon
+
+void vpx_highbd_dc_128_predictor_32x32_c(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+void vpx_highbd_dc_128_predictor_32x32_neon(uint16_t* dst,
+                                            ptrdiff_t stride,
+                                            const uint16_t* above,
+                                            const uint16_t* left,
+                                            int bd);
+#define vpx_highbd_dc_128_predictor_32x32 vpx_highbd_dc_128_predictor_32x32_neon
+
+void vpx_highbd_dc_128_predictor_4x4_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_dc_128_predictor_4x4_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_dc_128_predictor_4x4 vpx_highbd_dc_128_predictor_4x4_neon
+
+void vpx_highbd_dc_128_predictor_8x8_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_dc_128_predictor_8x8_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_dc_128_predictor_8x8 vpx_highbd_dc_128_predictor_8x8_neon
+
+void vpx_highbd_dc_left_predictor_16x16_c(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+void vpx_highbd_dc_left_predictor_16x16_neon(uint16_t* dst,
+                                             ptrdiff_t stride,
+                                             const uint16_t* above,
+                                             const uint16_t* left,
+                                             int bd);
+#define vpx_highbd_dc_left_predictor_16x16 \
+  vpx_highbd_dc_left_predictor_16x16_neon
+
+void vpx_highbd_dc_left_predictor_32x32_c(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+void vpx_highbd_dc_left_predictor_32x32_neon(uint16_t* dst,
+                                             ptrdiff_t stride,
+                                             const uint16_t* above,
+                                             const uint16_t* left,
+                                             int bd);
+#define vpx_highbd_dc_left_predictor_32x32 \
+  vpx_highbd_dc_left_predictor_32x32_neon
+
+void vpx_highbd_dc_left_predictor_4x4_c(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+void vpx_highbd_dc_left_predictor_4x4_neon(uint16_t* dst,
+                                           ptrdiff_t stride,
+                                           const uint16_t* above,
+                                           const uint16_t* left,
+                                           int bd);
+#define vpx_highbd_dc_left_predictor_4x4 vpx_highbd_dc_left_predictor_4x4_neon
+
+void vpx_highbd_dc_left_predictor_8x8_c(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+void vpx_highbd_dc_left_predictor_8x8_neon(uint16_t* dst,
+                                           ptrdiff_t stride,
+                                           const uint16_t* above,
+                                           const uint16_t* left,
+                                           int bd);
+#define vpx_highbd_dc_left_predictor_8x8 vpx_highbd_dc_left_predictor_8x8_neon
+
+void vpx_highbd_dc_predictor_16x16_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_dc_predictor_16x16_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_dc_predictor_16x16 vpx_highbd_dc_predictor_16x16_neon
+
+void vpx_highbd_dc_predictor_32x32_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_dc_predictor_32x32_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_dc_predictor_32x32 vpx_highbd_dc_predictor_32x32_neon
+
+void vpx_highbd_dc_predictor_4x4_c(uint16_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint16_t* above,
+                                   const uint16_t* left,
+                                   int bd);
+void vpx_highbd_dc_predictor_4x4_neon(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_dc_predictor_4x4 vpx_highbd_dc_predictor_4x4_neon
+
+void vpx_highbd_dc_predictor_8x8_c(uint16_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint16_t* above,
+                                   const uint16_t* left,
+                                   int bd);
+void vpx_highbd_dc_predictor_8x8_neon(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_dc_predictor_8x8 vpx_highbd_dc_predictor_8x8_neon
+
+void vpx_highbd_dc_top_predictor_16x16_c(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+void vpx_highbd_dc_top_predictor_16x16_neon(uint16_t* dst,
+                                            ptrdiff_t stride,
+                                            const uint16_t* above,
+                                            const uint16_t* left,
+                                            int bd);
+#define vpx_highbd_dc_top_predictor_16x16 vpx_highbd_dc_top_predictor_16x16_neon
+
+void vpx_highbd_dc_top_predictor_32x32_c(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+void vpx_highbd_dc_top_predictor_32x32_neon(uint16_t* dst,
+                                            ptrdiff_t stride,
+                                            const uint16_t* above,
+                                            const uint16_t* left,
+                                            int bd);
+#define vpx_highbd_dc_top_predictor_32x32 vpx_highbd_dc_top_predictor_32x32_neon
+
+void vpx_highbd_dc_top_predictor_4x4_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_dc_top_predictor_4x4_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_dc_top_predictor_4x4 vpx_highbd_dc_top_predictor_4x4_neon
+
+void vpx_highbd_dc_top_predictor_8x8_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_dc_top_predictor_8x8_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_dc_top_predictor_8x8 vpx_highbd_dc_top_predictor_8x8_neon
+
+void vpx_highbd_fdct16x16_c(const int16_t* input,
+                            tran_low_t* output,
+                            int stride);
+#define vpx_highbd_fdct16x16 vpx_highbd_fdct16x16_c
+
+void vpx_highbd_fdct16x16_1_c(const int16_t* input,
+                              tran_low_t* output,
+                              int stride);
+#define vpx_highbd_fdct16x16_1 vpx_highbd_fdct16x16_1_c
+
+void vpx_highbd_fdct32x32_c(const int16_t* input,
+                            tran_low_t* output,
+                            int stride);
+#define vpx_highbd_fdct32x32 vpx_highbd_fdct32x32_c
+
+void vpx_highbd_fdct32x32_1_c(const int16_t* input,
+                              tran_low_t* output,
+                              int stride);
+#define vpx_highbd_fdct32x32_1 vpx_highbd_fdct32x32_1_c
+
+void vpx_highbd_fdct32x32_rd_c(const int16_t* input,
+                               tran_low_t* output,
+                               int stride);
+#define vpx_highbd_fdct32x32_rd vpx_highbd_fdct32x32_rd_c
+
+void vpx_highbd_fdct4x4_c(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_highbd_fdct4x4 vpx_highbd_fdct4x4_c
+
+void vpx_highbd_fdct8x8_c(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_highbd_fdct8x8 vpx_highbd_fdct8x8_c
+
+void vpx_highbd_fdct8x8_1_c(const int16_t* input,
+                            tran_low_t* output,
+                            int stride);
+void vpx_fdct8x8_1_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_highbd_fdct8x8_1 vpx_fdct8x8_1_neon
+
+void vpx_highbd_h_predictor_16x16_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_h_predictor_16x16_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_h_predictor_16x16 vpx_highbd_h_predictor_16x16_neon
+
+void vpx_highbd_h_predictor_32x32_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_h_predictor_32x32_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_h_predictor_32x32 vpx_highbd_h_predictor_32x32_neon
+
+void vpx_highbd_h_predictor_4x4_c(uint16_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint16_t* above,
+                                  const uint16_t* left,
+                                  int bd);
+void vpx_highbd_h_predictor_4x4_neon(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_h_predictor_4x4 vpx_highbd_h_predictor_4x4_neon
+
+void vpx_highbd_h_predictor_8x8_c(uint16_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint16_t* above,
+                                  const uint16_t* left,
+                                  int bd);
+void vpx_highbd_h_predictor_8x8_neon(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_h_predictor_8x8 vpx_highbd_h_predictor_8x8_neon
+
+void vpx_highbd_hadamard_16x16_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+#define vpx_highbd_hadamard_16x16 vpx_highbd_hadamard_16x16_c
+
+void vpx_highbd_hadamard_32x32_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+#define vpx_highbd_hadamard_32x32 vpx_highbd_hadamard_32x32_c
+
+void vpx_highbd_hadamard_8x8_c(const int16_t* src_diff,
+                               ptrdiff_t src_stride,
+                               tran_low_t* coeff);
+#define vpx_highbd_hadamard_8x8 vpx_highbd_hadamard_8x8_c
+
+void vpx_highbd_idct16x16_10_add_c(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int bd);
+void vpx_highbd_idct16x16_10_add_neon(const tran_low_t* input,
+                                      uint16_t* dest,
+                                      int stride,
+                                      int bd);
+#define vpx_highbd_idct16x16_10_add vpx_highbd_idct16x16_10_add_neon
+
+void vpx_highbd_idct16x16_1_add_c(const tran_low_t* input,
+                                  uint16_t* dest,
+                                  int stride,
+                                  int bd);
+void vpx_highbd_idct16x16_1_add_neon(const tran_low_t* input,
+                                     uint16_t* dest,
+                                     int stride,
+                                     int bd);
+#define vpx_highbd_idct16x16_1_add vpx_highbd_idct16x16_1_add_neon
+
+void vpx_highbd_idct16x16_256_add_c(const tran_low_t* input,
+                                    uint16_t* dest,
+                                    int stride,
+                                    int bd);
+void vpx_highbd_idct16x16_256_add_neon(const tran_low_t* input,
+                                       uint16_t* dest,
+                                       int stride,
+                                       int bd);
+#define vpx_highbd_idct16x16_256_add vpx_highbd_idct16x16_256_add_neon
+
+void vpx_highbd_idct16x16_38_add_c(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int bd);
+void vpx_highbd_idct16x16_38_add_neon(const tran_low_t* input,
+                                      uint16_t* dest,
+                                      int stride,
+                                      int bd);
+#define vpx_highbd_idct16x16_38_add vpx_highbd_idct16x16_38_add_neon
+
+void vpx_highbd_idct32x32_1024_add_c(const tran_low_t* input,
+                                     uint16_t* dest,
+                                     int stride,
+                                     int bd);
+void vpx_highbd_idct32x32_1024_add_neon(const tran_low_t* input,
+                                        uint16_t* dest,
+                                        int stride,
+                                        int bd);
+#define vpx_highbd_idct32x32_1024_add vpx_highbd_idct32x32_1024_add_neon
+
+void vpx_highbd_idct32x32_135_add_c(const tran_low_t* input,
+                                    uint16_t* dest,
+                                    int stride,
+                                    int bd);
+void vpx_highbd_idct32x32_135_add_neon(const tran_low_t* input,
+                                       uint16_t* dest,
+                                       int stride,
+                                       int bd);
+#define vpx_highbd_idct32x32_135_add vpx_highbd_idct32x32_135_add_neon
+
+void vpx_highbd_idct32x32_1_add_c(const tran_low_t* input,
+                                  uint16_t* dest,
+                                  int stride,
+                                  int bd);
+void vpx_highbd_idct32x32_1_add_neon(const tran_low_t* input,
+                                     uint16_t* dest,
+                                     int stride,
+                                     int bd);
+#define vpx_highbd_idct32x32_1_add vpx_highbd_idct32x32_1_add_neon
+
+void vpx_highbd_idct32x32_34_add_c(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int bd);
+void vpx_highbd_idct32x32_34_add_neon(const tran_low_t* input,
+                                      uint16_t* dest,
+                                      int stride,
+                                      int bd);
+#define vpx_highbd_idct32x32_34_add vpx_highbd_idct32x32_34_add_neon
+
+void vpx_highbd_idct4x4_16_add_c(const tran_low_t* input,
+                                 uint16_t* dest,
+                                 int stride,
+                                 int bd);
+void vpx_highbd_idct4x4_16_add_neon(const tran_low_t* input,
+                                    uint16_t* dest,
+                                    int stride,
+                                    int bd);
+#define vpx_highbd_idct4x4_16_add vpx_highbd_idct4x4_16_add_neon
+
+void vpx_highbd_idct4x4_1_add_c(const tran_low_t* input,
+                                uint16_t* dest,
+                                int stride,
+                                int bd);
+void vpx_highbd_idct4x4_1_add_neon(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int bd);
+#define vpx_highbd_idct4x4_1_add vpx_highbd_idct4x4_1_add_neon
+
+void vpx_highbd_idct8x8_12_add_c(const tran_low_t* input,
+                                 uint16_t* dest,
+                                 int stride,
+                                 int bd);
+void vpx_highbd_idct8x8_12_add_neon(const tran_low_t* input,
+                                    uint16_t* dest,
+                                    int stride,
+                                    int bd);
+#define vpx_highbd_idct8x8_12_add vpx_highbd_idct8x8_12_add_neon
+
+void vpx_highbd_idct8x8_1_add_c(const tran_low_t* input,
+                                uint16_t* dest,
+                                int stride,
+                                int bd);
+void vpx_highbd_idct8x8_1_add_neon(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int bd);
+#define vpx_highbd_idct8x8_1_add vpx_highbd_idct8x8_1_add_neon
+
+void vpx_highbd_idct8x8_64_add_c(const tran_low_t* input,
+                                 uint16_t* dest,
+                                 int stride,
+                                 int bd);
+void vpx_highbd_idct8x8_64_add_neon(const tran_low_t* input,
+                                    uint16_t* dest,
+                                    int stride,
+                                    int bd);
+#define vpx_highbd_idct8x8_64_add vpx_highbd_idct8x8_64_add_neon
+
+void vpx_highbd_iwht4x4_16_add_c(const tran_low_t* input,
+                                 uint16_t* dest,
+                                 int stride,
+                                 int bd);
+#define vpx_highbd_iwht4x4_16_add vpx_highbd_iwht4x4_16_add_c
+
+void vpx_highbd_iwht4x4_1_add_c(const tran_low_t* input,
+                                uint16_t* dest,
+                                int stride,
+                                int bd);
+#define vpx_highbd_iwht4x4_1_add vpx_highbd_iwht4x4_1_add_c
+
+void vpx_highbd_lpf_horizontal_16_c(uint16_t* s,
+                                    int pitch,
+                                    const uint8_t* blimit,
+                                    const uint8_t* limit,
+                                    const uint8_t* thresh,
+                                    int bd);
+void vpx_highbd_lpf_horizontal_16_neon(uint16_t* s,
+                                       int pitch,
+                                       const uint8_t* blimit,
+                                       const uint8_t* limit,
+                                       const uint8_t* thresh,
+                                       int bd);
+#define vpx_highbd_lpf_horizontal_16 vpx_highbd_lpf_horizontal_16_neon
+
+void vpx_highbd_lpf_horizontal_16_dual_c(uint16_t* s,
+                                         int pitch,
+                                         const uint8_t* blimit,
+                                         const uint8_t* limit,
+                                         const uint8_t* thresh,
+                                         int bd);
+void vpx_highbd_lpf_horizontal_16_dual_neon(uint16_t* s,
+                                            int pitch,
+                                            const uint8_t* blimit,
+                                            const uint8_t* limit,
+                                            const uint8_t* thresh,
+                                            int bd);
+#define vpx_highbd_lpf_horizontal_16_dual vpx_highbd_lpf_horizontal_16_dual_neon
+
+void vpx_highbd_lpf_horizontal_4_c(uint16_t* s,
+                                   int pitch,
+                                   const uint8_t* blimit,
+                                   const uint8_t* limit,
+                                   const uint8_t* thresh,
+                                   int bd);
+void vpx_highbd_lpf_horizontal_4_neon(uint16_t* s,
+                                      int pitch,
+                                      const uint8_t* blimit,
+                                      const uint8_t* limit,
+                                      const uint8_t* thresh,
+                                      int bd);
+#define vpx_highbd_lpf_horizontal_4 vpx_highbd_lpf_horizontal_4_neon
+
+void vpx_highbd_lpf_horizontal_4_dual_c(uint16_t* s,
+                                        int pitch,
+                                        const uint8_t* blimit0,
+                                        const uint8_t* limit0,
+                                        const uint8_t* thresh0,
+                                        const uint8_t* blimit1,
+                                        const uint8_t* limit1,
+                                        const uint8_t* thresh1,
+                                        int bd);
+void vpx_highbd_lpf_horizontal_4_dual_neon(uint16_t* s,
+                                           int pitch,
+                                           const uint8_t* blimit0,
+                                           const uint8_t* limit0,
+                                           const uint8_t* thresh0,
+                                           const uint8_t* blimit1,
+                                           const uint8_t* limit1,
+                                           const uint8_t* thresh1,
+                                           int bd);
+#define vpx_highbd_lpf_horizontal_4_dual vpx_highbd_lpf_horizontal_4_dual_neon
+
+void vpx_highbd_lpf_horizontal_8_c(uint16_t* s,
+                                   int pitch,
+                                   const uint8_t* blimit,
+                                   const uint8_t* limit,
+                                   const uint8_t* thresh,
+                                   int bd);
+void vpx_highbd_lpf_horizontal_8_neon(uint16_t* s,
+                                      int pitch,
+                                      const uint8_t* blimit,
+                                      const uint8_t* limit,
+                                      const uint8_t* thresh,
+                                      int bd);
+#define vpx_highbd_lpf_horizontal_8 vpx_highbd_lpf_horizontal_8_neon
+
+void vpx_highbd_lpf_horizontal_8_dual_c(uint16_t* s,
+                                        int pitch,
+                                        const uint8_t* blimit0,
+                                        const uint8_t* limit0,
+                                        const uint8_t* thresh0,
+                                        const uint8_t* blimit1,
+                                        const uint8_t* limit1,
+                                        const uint8_t* thresh1,
+                                        int bd);
+void vpx_highbd_lpf_horizontal_8_dual_neon(uint16_t* s,
+                                           int pitch,
+                                           const uint8_t* blimit0,
+                                           const uint8_t* limit0,
+                                           const uint8_t* thresh0,
+                                           const uint8_t* blimit1,
+                                           const uint8_t* limit1,
+                                           const uint8_t* thresh1,
+                                           int bd);
+#define vpx_highbd_lpf_horizontal_8_dual vpx_highbd_lpf_horizontal_8_dual_neon
+
+void vpx_highbd_lpf_vertical_16_c(uint16_t* s,
+                                  int pitch,
+                                  const uint8_t* blimit,
+                                  const uint8_t* limit,
+                                  const uint8_t* thresh,
+                                  int bd);
+void vpx_highbd_lpf_vertical_16_neon(uint16_t* s,
+                                     int pitch,
+                                     const uint8_t* blimit,
+                                     const uint8_t* limit,
+                                     const uint8_t* thresh,
+                                     int bd);
+#define vpx_highbd_lpf_vertical_16 vpx_highbd_lpf_vertical_16_neon
+
+void vpx_highbd_lpf_vertical_16_dual_c(uint16_t* s,
+                                       int pitch,
+                                       const uint8_t* blimit,
+                                       const uint8_t* limit,
+                                       const uint8_t* thresh,
+                                       int bd);
+void vpx_highbd_lpf_vertical_16_dual_neon(uint16_t* s,
+                                          int pitch,
+                                          const uint8_t* blimit,
+                                          const uint8_t* limit,
+                                          const uint8_t* thresh,
+                                          int bd);
+#define vpx_highbd_lpf_vertical_16_dual vpx_highbd_lpf_vertical_16_dual_neon
+
+void vpx_highbd_lpf_vertical_4_c(uint16_t* s,
+                                 int pitch,
+                                 const uint8_t* blimit,
+                                 const uint8_t* limit,
+                                 const uint8_t* thresh,
+                                 int bd);
+void vpx_highbd_lpf_vertical_4_neon(uint16_t* s,
+                                    int pitch,
+                                    const uint8_t* blimit,
+                                    const uint8_t* limit,
+                                    const uint8_t* thresh,
+                                    int bd);
+#define vpx_highbd_lpf_vertical_4 vpx_highbd_lpf_vertical_4_neon
+
+void vpx_highbd_lpf_vertical_4_dual_c(uint16_t* s,
+                                      int pitch,
+                                      const uint8_t* blimit0,
+                                      const uint8_t* limit0,
+                                      const uint8_t* thresh0,
+                                      const uint8_t* blimit1,
+                                      const uint8_t* limit1,
+                                      const uint8_t* thresh1,
+                                      int bd);
+void vpx_highbd_lpf_vertical_4_dual_neon(uint16_t* s,
+                                         int pitch,
+                                         const uint8_t* blimit0,
+                                         const uint8_t* limit0,
+                                         const uint8_t* thresh0,
+                                         const uint8_t* blimit1,
+                                         const uint8_t* limit1,
+                                         const uint8_t* thresh1,
+                                         int bd);
+#define vpx_highbd_lpf_vertical_4_dual vpx_highbd_lpf_vertical_4_dual_neon
+
+void vpx_highbd_lpf_vertical_8_c(uint16_t* s,
+                                 int pitch,
+                                 const uint8_t* blimit,
+                                 const uint8_t* limit,
+                                 const uint8_t* thresh,
+                                 int bd);
+void vpx_highbd_lpf_vertical_8_neon(uint16_t* s,
+                                    int pitch,
+                                    const uint8_t* blimit,
+                                    const uint8_t* limit,
+                                    const uint8_t* thresh,
+                                    int bd);
+#define vpx_highbd_lpf_vertical_8 vpx_highbd_lpf_vertical_8_neon
+
+void vpx_highbd_lpf_vertical_8_dual_c(uint16_t* s,
+                                      int pitch,
+                                      const uint8_t* blimit0,
+                                      const uint8_t* limit0,
+                                      const uint8_t* thresh0,
+                                      const uint8_t* blimit1,
+                                      const uint8_t* limit1,
+                                      const uint8_t* thresh1,
+                                      int bd);
+void vpx_highbd_lpf_vertical_8_dual_neon(uint16_t* s,
+                                         int pitch,
+                                         const uint8_t* blimit0,
+                                         const uint8_t* limit0,
+                                         const uint8_t* thresh0,
+                                         const uint8_t* blimit1,
+                                         const uint8_t* limit1,
+                                         const uint8_t* thresh1,
+                                         int bd);
+#define vpx_highbd_lpf_vertical_8_dual vpx_highbd_lpf_vertical_8_dual_neon
+
+void vpx_highbd_minmax_8x8_c(const uint8_t* s8,
+                             int p,
+                             const uint8_t* d8,
+                             int dp,
+                             int* min,
+                             int* max);
+#define vpx_highbd_minmax_8x8 vpx_highbd_minmax_8x8_c
+
+void vpx_highbd_quantize_b_c(const tran_low_t* coeff_ptr,
+                             intptr_t n_coeffs,
+                             int skip_block,
+                             const int16_t* zbin_ptr,
+                             const int16_t* round_ptr,
+                             const int16_t* quant_ptr,
+                             const int16_t* quant_shift_ptr,
+                             tran_low_t* qcoeff_ptr,
+                             tran_low_t* dqcoeff_ptr,
+                             const int16_t* dequant_ptr,
+                             uint16_t* eob_ptr,
+                             const int16_t* scan,
+                             const int16_t* iscan);
+#define vpx_highbd_quantize_b vpx_highbd_quantize_b_c
+
+void vpx_highbd_quantize_b_32x32_c(const tran_low_t* coeff_ptr,
+                                   intptr_t n_coeffs,
+                                   int skip_block,
+                                   const int16_t* zbin_ptr,
+                                   const int16_t* round_ptr,
+                                   const int16_t* quant_ptr,
+                                   const int16_t* quant_shift_ptr,
+                                   tran_low_t* qcoeff_ptr,
+                                   tran_low_t* dqcoeff_ptr,
+                                   const int16_t* dequant_ptr,
+                                   uint16_t* eob_ptr,
+                                   const int16_t* scan,
+                                   const int16_t* iscan);
+#define vpx_highbd_quantize_b_32x32 vpx_highbd_quantize_b_32x32_c
+
+unsigned int vpx_highbd_sad16x16_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad16x16 vpx_highbd_sad16x16_c
+
+unsigned int vpx_highbd_sad16x16_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad16x16_avg vpx_highbd_sad16x16_avg_c
+
+void vpx_highbd_sad16x16x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad16x16x4d vpx_highbd_sad16x16x4d_c
+
+unsigned int vpx_highbd_sad16x32_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad16x32 vpx_highbd_sad16x32_c
+
+unsigned int vpx_highbd_sad16x32_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad16x32_avg vpx_highbd_sad16x32_avg_c
+
+void vpx_highbd_sad16x32x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad16x32x4d vpx_highbd_sad16x32x4d_c
+
+unsigned int vpx_highbd_sad16x8_c(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride);
+#define vpx_highbd_sad16x8 vpx_highbd_sad16x8_c
+
+unsigned int vpx_highbd_sad16x8_avg_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      const uint8_t* second_pred);
+#define vpx_highbd_sad16x8_avg vpx_highbd_sad16x8_avg_c
+
+void vpx_highbd_sad16x8x4d_c(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* const ref_array[],
+                             int ref_stride,
+                             uint32_t* sad_array);
+#define vpx_highbd_sad16x8x4d vpx_highbd_sad16x8x4d_c
+
+unsigned int vpx_highbd_sad32x16_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad32x16 vpx_highbd_sad32x16_c
+
+unsigned int vpx_highbd_sad32x16_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad32x16_avg vpx_highbd_sad32x16_avg_c
+
+void vpx_highbd_sad32x16x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad32x16x4d vpx_highbd_sad32x16x4d_c
+
+unsigned int vpx_highbd_sad32x32_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad32x32 vpx_highbd_sad32x32_c
+
+unsigned int vpx_highbd_sad32x32_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad32x32_avg vpx_highbd_sad32x32_avg_c
+
+void vpx_highbd_sad32x32x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad32x32x4d vpx_highbd_sad32x32x4d_c
+
+unsigned int vpx_highbd_sad32x64_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad32x64 vpx_highbd_sad32x64_c
+
+unsigned int vpx_highbd_sad32x64_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad32x64_avg vpx_highbd_sad32x64_avg_c
+
+void vpx_highbd_sad32x64x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad32x64x4d vpx_highbd_sad32x64x4d_c
+
+unsigned int vpx_highbd_sad4x4_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride);
+#define vpx_highbd_sad4x4 vpx_highbd_sad4x4_c
+
+unsigned int vpx_highbd_sad4x4_avg_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     const uint8_t* second_pred);
+#define vpx_highbd_sad4x4_avg vpx_highbd_sad4x4_avg_c
+
+void vpx_highbd_sad4x4x4d_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* const ref_array[],
+                            int ref_stride,
+                            uint32_t* sad_array);
+#define vpx_highbd_sad4x4x4d vpx_highbd_sad4x4x4d_c
+
+unsigned int vpx_highbd_sad4x8_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride);
+#define vpx_highbd_sad4x8 vpx_highbd_sad4x8_c
+
+unsigned int vpx_highbd_sad4x8_avg_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     const uint8_t* second_pred);
+#define vpx_highbd_sad4x8_avg vpx_highbd_sad4x8_avg_c
+
+void vpx_highbd_sad4x8x4d_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* const ref_array[],
+                            int ref_stride,
+                            uint32_t* sad_array);
+#define vpx_highbd_sad4x8x4d vpx_highbd_sad4x8x4d_c
+
+unsigned int vpx_highbd_sad64x32_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad64x32 vpx_highbd_sad64x32_c
+
+unsigned int vpx_highbd_sad64x32_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad64x32_avg vpx_highbd_sad64x32_avg_c
+
+void vpx_highbd_sad64x32x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad64x32x4d vpx_highbd_sad64x32x4d_c
+
+unsigned int vpx_highbd_sad64x64_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad64x64 vpx_highbd_sad64x64_c
+
+unsigned int vpx_highbd_sad64x64_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad64x64_avg vpx_highbd_sad64x64_avg_c
+
+void vpx_highbd_sad64x64x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad64x64x4d vpx_highbd_sad64x64x4d_c
+
+unsigned int vpx_highbd_sad8x16_c(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride);
+#define vpx_highbd_sad8x16 vpx_highbd_sad8x16_c
+
+unsigned int vpx_highbd_sad8x16_avg_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      const uint8_t* second_pred);
+#define vpx_highbd_sad8x16_avg vpx_highbd_sad8x16_avg_c
+
+void vpx_highbd_sad8x16x4d_c(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* const ref_array[],
+                             int ref_stride,
+                             uint32_t* sad_array);
+#define vpx_highbd_sad8x16x4d vpx_highbd_sad8x16x4d_c
+
+unsigned int vpx_highbd_sad8x4_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride);
+#define vpx_highbd_sad8x4 vpx_highbd_sad8x4_c
+
+unsigned int vpx_highbd_sad8x4_avg_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     const uint8_t* second_pred);
+#define vpx_highbd_sad8x4_avg vpx_highbd_sad8x4_avg_c
+
+void vpx_highbd_sad8x4x4d_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* const ref_array[],
+                            int ref_stride,
+                            uint32_t* sad_array);
+#define vpx_highbd_sad8x4x4d vpx_highbd_sad8x4x4d_c
+
+unsigned int vpx_highbd_sad8x8_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride);
+#define vpx_highbd_sad8x8 vpx_highbd_sad8x8_c
+
+unsigned int vpx_highbd_sad8x8_avg_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     const uint8_t* second_pred);
+#define vpx_highbd_sad8x8_avg vpx_highbd_sad8x8_avg_c
+
+void vpx_highbd_sad8x8x4d_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* const ref_array[],
+                            int ref_stride,
+                            uint32_t* sad_array);
+#define vpx_highbd_sad8x8x4d vpx_highbd_sad8x8x4d_c
+
+int vpx_highbd_satd_c(const tran_low_t* coeff, int length);
+#define vpx_highbd_satd vpx_highbd_satd_c
+
+void vpx_highbd_subtract_block_c(int rows,
+                                 int cols,
+                                 int16_t* diff_ptr,
+                                 ptrdiff_t diff_stride,
+                                 const uint8_t* src8_ptr,
+                                 ptrdiff_t src_stride,
+                                 const uint8_t* pred8_ptr,
+                                 ptrdiff_t pred_stride,
+                                 int bd);
+#define vpx_highbd_subtract_block vpx_highbd_subtract_block_c
+
+void vpx_highbd_tm_predictor_16x16_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_tm_predictor_16x16_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_tm_predictor_16x16 vpx_highbd_tm_predictor_16x16_neon
+
+void vpx_highbd_tm_predictor_32x32_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_tm_predictor_32x32_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_tm_predictor_32x32 vpx_highbd_tm_predictor_32x32_neon
+
+void vpx_highbd_tm_predictor_4x4_c(uint16_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint16_t* above,
+                                   const uint16_t* left,
+                                   int bd);
+void vpx_highbd_tm_predictor_4x4_neon(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_tm_predictor_4x4 vpx_highbd_tm_predictor_4x4_neon
+
+void vpx_highbd_tm_predictor_8x8_c(uint16_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint16_t* above,
+                                   const uint16_t* left,
+                                   int bd);
+void vpx_highbd_tm_predictor_8x8_neon(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_tm_predictor_8x8 vpx_highbd_tm_predictor_8x8_neon
+
+void vpx_highbd_v_predictor_16x16_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_v_predictor_16x16_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_v_predictor_16x16 vpx_highbd_v_predictor_16x16_neon
+
+void vpx_highbd_v_predictor_32x32_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_v_predictor_32x32_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_v_predictor_32x32 vpx_highbd_v_predictor_32x32_neon
+
+void vpx_highbd_v_predictor_4x4_c(uint16_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint16_t* above,
+                                  const uint16_t* left,
+                                  int bd);
+void vpx_highbd_v_predictor_4x4_neon(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_v_predictor_4x4 vpx_highbd_v_predictor_4x4_neon
+
+void vpx_highbd_v_predictor_8x8_c(uint16_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint16_t* above,
+                                  const uint16_t* left,
+                                  int bd);
+void vpx_highbd_v_predictor_8x8_neon(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_v_predictor_8x8 vpx_highbd_v_predictor_8x8_neon
+
+void vpx_idct16x16_10_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct16x16_10_add_neon(const tran_low_t* input,
+                               uint8_t* dest,
+                               int stride);
+#define vpx_idct16x16_10_add vpx_idct16x16_10_add_neon
+
+void vpx_idct16x16_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct16x16_1_add_neon(const tran_low_t* input,
+                              uint8_t* dest,
+                              int stride);
+#define vpx_idct16x16_1_add vpx_idct16x16_1_add_neon
+
+void vpx_idct16x16_256_add_c(const tran_low_t* input,
+                             uint8_t* dest,
+                             int stride);
+void vpx_idct16x16_256_add_neon(const tran_low_t* input,
+                                uint8_t* dest,
+                                int stride);
+#define vpx_idct16x16_256_add vpx_idct16x16_256_add_neon
+
+void vpx_idct16x16_38_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct16x16_38_add_neon(const tran_low_t* input,
+                               uint8_t* dest,
+                               int stride);
+#define vpx_idct16x16_38_add vpx_idct16x16_38_add_neon
+
+void vpx_idct32x32_1024_add_c(const tran_low_t* input,
+                              uint8_t* dest,
+                              int stride);
+void vpx_idct32x32_1024_add_neon(const tran_low_t* input,
+                                 uint8_t* dest,
+                                 int stride);
+#define vpx_idct32x32_1024_add vpx_idct32x32_1024_add_neon
+
+void vpx_idct32x32_135_add_c(const tran_low_t* input,
+                             uint8_t* dest,
+                             int stride);
+void vpx_idct32x32_135_add_neon(const tran_low_t* input,
+                                uint8_t* dest,
+                                int stride);
+#define vpx_idct32x32_135_add vpx_idct32x32_135_add_neon
+
+void vpx_idct32x32_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct32x32_1_add_neon(const tran_low_t* input,
+                              uint8_t* dest,
+                              int stride);
+#define vpx_idct32x32_1_add vpx_idct32x32_1_add_neon
+
+void vpx_idct32x32_34_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct32x32_34_add_neon(const tran_low_t* input,
+                               uint8_t* dest,
+                               int stride);
+#define vpx_idct32x32_34_add vpx_idct32x32_34_add_neon
+
+void vpx_idct4x4_16_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct4x4_16_add_neon(const tran_low_t* input,
+                             uint8_t* dest,
+                             int stride);
+#define vpx_idct4x4_16_add vpx_idct4x4_16_add_neon
+
+void vpx_idct4x4_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct4x4_1_add_neon(const tran_low_t* input, uint8_t* dest, int stride);
+#define vpx_idct4x4_1_add vpx_idct4x4_1_add_neon
+
+void vpx_idct8x8_12_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct8x8_12_add_neon(const tran_low_t* input,
+                             uint8_t* dest,
+                             int stride);
+#define vpx_idct8x8_12_add vpx_idct8x8_12_add_neon
+
+void vpx_idct8x8_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct8x8_1_add_neon(const tran_low_t* input, uint8_t* dest, int stride);
+#define vpx_idct8x8_1_add vpx_idct8x8_1_add_neon
+
+void vpx_idct8x8_64_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct8x8_64_add_neon(const tran_low_t* input,
+                             uint8_t* dest,
+                             int stride);
+#define vpx_idct8x8_64_add vpx_idct8x8_64_add_neon
+
+int16_t vpx_int_pro_col_c(const uint8_t* ref, const int width);
+int16_t vpx_int_pro_col_neon(const uint8_t* ref, const int width);
+#define vpx_int_pro_col vpx_int_pro_col_neon
+
+void vpx_int_pro_row_c(int16_t* hbuf,
+                       const uint8_t* ref,
+                       const int ref_stride,
+                       const int height);
+void vpx_int_pro_row_neon(int16_t* hbuf,
+                          const uint8_t* ref,
+                          const int ref_stride,
+                          const int height);
+#define vpx_int_pro_row vpx_int_pro_row_neon
+
+void vpx_iwht4x4_16_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+#define vpx_iwht4x4_16_add vpx_iwht4x4_16_add_c
+
+void vpx_iwht4x4_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+#define vpx_iwht4x4_1_add vpx_iwht4x4_1_add_c
+
+void vpx_lpf_horizontal_16_c(uint8_t* s,
+                             int pitch,
+                             const uint8_t* blimit,
+                             const uint8_t* limit,
+                             const uint8_t* thresh);
+void vpx_lpf_horizontal_16_neon(uint8_t* s,
+                                int pitch,
+                                const uint8_t* blimit,
+                                const uint8_t* limit,
+                                const uint8_t* thresh);
+#define vpx_lpf_horizontal_16 vpx_lpf_horizontal_16_neon
+
+void vpx_lpf_horizontal_16_dual_c(uint8_t* s,
+                                  int pitch,
+                                  const uint8_t* blimit,
+                                  const uint8_t* limit,
+                                  const uint8_t* thresh);
+void vpx_lpf_horizontal_16_dual_neon(uint8_t* s,
+                                     int pitch,
+                                     const uint8_t* blimit,
+                                     const uint8_t* limit,
+                                     const uint8_t* thresh);
+#define vpx_lpf_horizontal_16_dual vpx_lpf_horizontal_16_dual_neon
+
+void vpx_lpf_horizontal_4_c(uint8_t* s,
+                            int pitch,
+                            const uint8_t* blimit,
+                            const uint8_t* limit,
+                            const uint8_t* thresh);
+void vpx_lpf_horizontal_4_neon(uint8_t* s,
+                               int pitch,
+                               const uint8_t* blimit,
+                               const uint8_t* limit,
+                               const uint8_t* thresh);
+#define vpx_lpf_horizontal_4 vpx_lpf_horizontal_4_neon
+
+void vpx_lpf_horizontal_4_dual_c(uint8_t* s,
+                                 int pitch,
+                                 const uint8_t* blimit0,
+                                 const uint8_t* limit0,
+                                 const uint8_t* thresh0,
+                                 const uint8_t* blimit1,
+                                 const uint8_t* limit1,
+                                 const uint8_t* thresh1);
+void vpx_lpf_horizontal_4_dual_neon(uint8_t* s,
+                                    int pitch,
+                                    const uint8_t* blimit0,
+                                    const uint8_t* limit0,
+                                    const uint8_t* thresh0,
+                                    const uint8_t* blimit1,
+                                    const uint8_t* limit1,
+                                    const uint8_t* thresh1);
+#define vpx_lpf_horizontal_4_dual vpx_lpf_horizontal_4_dual_neon
+
+void vpx_lpf_horizontal_8_c(uint8_t* s,
+                            int pitch,
+                            const uint8_t* blimit,
+                            const uint8_t* limit,
+                            const uint8_t* thresh);
+void vpx_lpf_horizontal_8_neon(uint8_t* s,
+                               int pitch,
+                               const uint8_t* blimit,
+                               const uint8_t* limit,
+                               const uint8_t* thresh);
+#define vpx_lpf_horizontal_8 vpx_lpf_horizontal_8_neon
+
+void vpx_lpf_horizontal_8_dual_c(uint8_t* s,
+                                 int pitch,
+                                 const uint8_t* blimit0,
+                                 const uint8_t* limit0,
+                                 const uint8_t* thresh0,
+                                 const uint8_t* blimit1,
+                                 const uint8_t* limit1,
+                                 const uint8_t* thresh1);
+void vpx_lpf_horizontal_8_dual_neon(uint8_t* s,
+                                    int pitch,
+                                    const uint8_t* blimit0,
+                                    const uint8_t* limit0,
+                                    const uint8_t* thresh0,
+                                    const uint8_t* blimit1,
+                                    const uint8_t* limit1,
+                                    const uint8_t* thresh1);
+#define vpx_lpf_horizontal_8_dual vpx_lpf_horizontal_8_dual_neon
+
+void vpx_lpf_vertical_16_c(uint8_t* s,
+                           int pitch,
+                           const uint8_t* blimit,
+                           const uint8_t* limit,
+                           const uint8_t* thresh);
+void vpx_lpf_vertical_16_neon(uint8_t* s,
+                              int pitch,
+                              const uint8_t* blimit,
+                              const uint8_t* limit,
+                              const uint8_t* thresh);
+#define vpx_lpf_vertical_16 vpx_lpf_vertical_16_neon
+
+void vpx_lpf_vertical_16_dual_c(uint8_t* s,
+                                int pitch,
+                                const uint8_t* blimit,
+                                const uint8_t* limit,
+                                const uint8_t* thresh);
+void vpx_lpf_vertical_16_dual_neon(uint8_t* s,
+                                   int pitch,
+                                   const uint8_t* blimit,
+                                   const uint8_t* limit,
+                                   const uint8_t* thresh);
+#define vpx_lpf_vertical_16_dual vpx_lpf_vertical_16_dual_neon
+
+void vpx_lpf_vertical_4_c(uint8_t* s,
+                          int pitch,
+                          const uint8_t* blimit,
+                          const uint8_t* limit,
+                          const uint8_t* thresh);
+void vpx_lpf_vertical_4_neon(uint8_t* s,
+                             int pitch,
+                             const uint8_t* blimit,
+                             const uint8_t* limit,
+                             const uint8_t* thresh);
+#define vpx_lpf_vertical_4 vpx_lpf_vertical_4_neon
+
+void vpx_lpf_vertical_4_dual_c(uint8_t* s,
+                               int pitch,
+                               const uint8_t* blimit0,
+                               const uint8_t* limit0,
+                               const uint8_t* thresh0,
+                               const uint8_t* blimit1,
+                               const uint8_t* limit1,
+                               const uint8_t* thresh1);
+void vpx_lpf_vertical_4_dual_neon(uint8_t* s,
+                                  int pitch,
+                                  const uint8_t* blimit0,
+                                  const uint8_t* limit0,
+                                  const uint8_t* thresh0,
+                                  const uint8_t* blimit1,
+                                  const uint8_t* limit1,
+                                  const uint8_t* thresh1);
+#define vpx_lpf_vertical_4_dual vpx_lpf_vertical_4_dual_neon
+
+void vpx_lpf_vertical_8_c(uint8_t* s,
+                          int pitch,
+                          const uint8_t* blimit,
+                          const uint8_t* limit,
+                          const uint8_t* thresh);
+void vpx_lpf_vertical_8_neon(uint8_t* s,
+                             int pitch,
+                             const uint8_t* blimit,
+                             const uint8_t* limit,
+                             const uint8_t* thresh);
+#define vpx_lpf_vertical_8 vpx_lpf_vertical_8_neon
+
+void vpx_lpf_vertical_8_dual_c(uint8_t* s,
+                               int pitch,
+                               const uint8_t* blimit0,
+                               const uint8_t* limit0,
+                               const uint8_t* thresh0,
+                               const uint8_t* blimit1,
+                               const uint8_t* limit1,
+                               const uint8_t* thresh1);
+void vpx_lpf_vertical_8_dual_neon(uint8_t* s,
+                                  int pitch,
+                                  const uint8_t* blimit0,
+                                  const uint8_t* limit0,
+                                  const uint8_t* thresh0,
+                                  const uint8_t* blimit1,
+                                  const uint8_t* limit1,
+                                  const uint8_t* thresh1);
+#define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_neon
+
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
+                                 int pitch,
+                                 int rows,
+                                 int cols,
+                                 int flimit);
+void vpx_mbpost_proc_across_ip_neon(unsigned char* src,
+                                    int pitch,
+                                    int rows,
+                                    int cols,
+                                    int flimit);
+#define vpx_mbpost_proc_across_ip vpx_mbpost_proc_across_ip_neon
+
+void vpx_mbpost_proc_down_c(unsigned char* dst,
+                            int pitch,
+                            int rows,
+                            int cols,
+                            int flimit);
+void vpx_mbpost_proc_down_neon(unsigned char* dst,
+                               int pitch,
+                               int rows,
+                               int cols,
+                               int flimit);
+#define vpx_mbpost_proc_down vpx_mbpost_proc_down_neon
+
+void vpx_minmax_8x8_c(const uint8_t* s,
+                      int p,
+                      const uint8_t* d,
+                      int dp,
+                      int* min,
+                      int* max);
+void vpx_minmax_8x8_neon(const uint8_t* s,
+                         int p,
+                         const uint8_t* d,
+                         int dp,
+                         int* min,
+                         int* max);
+#define vpx_minmax_8x8 vpx_minmax_8x8_neon
+
+unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride,
+                            unsigned int* sse);
+unsigned int vpx_mse16x16_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse);
+#define vpx_mse16x16 vpx_mse16x16_neon
+
+unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
+                           int src_stride,
+                           const uint8_t* ref_ptr,
+                           int ref_stride,
+                           unsigned int* sse);
+#define vpx_mse16x8 vpx_mse16x8_c
+
+unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
+                           int src_stride,
+                           const uint8_t* ref_ptr,
+                           int ref_stride,
+                           unsigned int* sse);
+#define vpx_mse8x16 vpx_mse8x16_c
+
+unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride,
+                          unsigned int* sse);
+#define vpx_mse8x8 vpx_mse8x8_c
+
+void vpx_plane_add_noise_c(uint8_t* start,
+                           const int8_t* noise,
+                           int blackclamp,
+                           int whiteclamp,
+                           int width,
+                           int height,
+                           int pitch);
+#define vpx_plane_add_noise vpx_plane_add_noise_c
+
+void vpx_post_proc_down_and_across_mb_row_c(unsigned char* src,
+                                            unsigned char* dst,
+                                            int src_pitch,
+                                            int dst_pitch,
+                                            int cols,
+                                            unsigned char* flimits,
+                                            int size);
+void vpx_post_proc_down_and_across_mb_row_neon(unsigned char* src,
+                                               unsigned char* dst,
+                                               int src_pitch,
+                                               int dst_pitch,
+                                               int cols,
+                                               unsigned char* flimits,
+                                               int size);
+#define vpx_post_proc_down_and_across_mb_row \
+  vpx_post_proc_down_and_across_mb_row_neon
+
+void vpx_quantize_b_c(const tran_low_t* coeff_ptr,
+                      intptr_t n_coeffs,
+                      int skip_block,
+                      const int16_t* zbin_ptr,
+                      const int16_t* round_ptr,
+                      const int16_t* quant_ptr,
+                      const int16_t* quant_shift_ptr,
+                      tran_low_t* qcoeff_ptr,
+                      tran_low_t* dqcoeff_ptr,
+                      const int16_t* dequant_ptr,
+                      uint16_t* eob_ptr,
+                      const int16_t* scan,
+                      const int16_t* iscan);
+void vpx_quantize_b_neon(const tran_low_t* coeff_ptr,
+                         intptr_t n_coeffs,
+                         int skip_block,
+                         const int16_t* zbin_ptr,
+                         const int16_t* round_ptr,
+                         const int16_t* quant_ptr,
+                         const int16_t* quant_shift_ptr,
+                         tran_low_t* qcoeff_ptr,
+                         tran_low_t* dqcoeff_ptr,
+                         const int16_t* dequant_ptr,
+                         uint16_t* eob_ptr,
+                         const int16_t* scan,
+                         const int16_t* iscan);
+#define vpx_quantize_b vpx_quantize_b_neon
+
+void vpx_quantize_b_32x32_c(const tran_low_t* coeff_ptr,
+                            intptr_t n_coeffs,
+                            int skip_block,
+                            const int16_t* zbin_ptr,
+                            const int16_t* round_ptr,
+                            const int16_t* quant_ptr,
+                            const int16_t* quant_shift_ptr,
+                            tran_low_t* qcoeff_ptr,
+                            tran_low_t* dqcoeff_ptr,
+                            const int16_t* dequant_ptr,
+                            uint16_t* eob_ptr,
+                            const int16_t* scan,
+                            const int16_t* iscan);
+void vpx_quantize_b_32x32_neon(const tran_low_t* coeff_ptr,
+                               intptr_t n_coeffs,
+                               int skip_block,
+                               const int16_t* zbin_ptr,
+                               const int16_t* round_ptr,
+                               const int16_t* quant_ptr,
+                               const int16_t* quant_shift_ptr,
+                               tran_low_t* qcoeff_ptr,
+                               tran_low_t* dqcoeff_ptr,
+                               const int16_t* dequant_ptr,
+                               uint16_t* eob_ptr,
+                               const int16_t* scan,
+                               const int16_t* iscan);
+#define vpx_quantize_b_32x32 vpx_quantize_b_32x32_neon
+
+unsigned int vpx_sad16x16_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad16x16_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad16x16 vpx_sad16x16_neon
+
+unsigned int vpx_sad16x16_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad16x16_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad16x16_avg vpx_sad16x16_avg_neon
+
+void vpx_sad16x16x3_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad16x16x3 vpx_sad16x16x3_c
+
+void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad16x16x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad16x16x4d vpx_sad16x16x4d_neon
+
+void vpx_sad16x16x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad16x16x8 vpx_sad16x16x8_c
+
+unsigned int vpx_sad16x32_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad16x32_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad16x32 vpx_sad16x32_neon
+
+unsigned int vpx_sad16x32_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad16x32_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad16x32_avg vpx_sad16x32_avg_neon
+
+void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad16x32x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad16x32x4d vpx_sad16x32x4d_neon
+
+unsigned int vpx_sad16x8_c(const uint8_t* src_ptr,
+                           int src_stride,
+                           const uint8_t* ref_ptr,
+                           int ref_stride);
+unsigned int vpx_sad16x8_neon(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride);
+#define vpx_sad16x8 vpx_sad16x8_neon
+
+unsigned int vpx_sad16x8_avg_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               const uint8_t* second_pred);
+unsigned int vpx_sad16x8_avg_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  const uint8_t* second_pred);
+#define vpx_sad16x8_avg vpx_sad16x8_avg_neon
+
+void vpx_sad16x8x3_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* ref_ptr,
+                     int ref_stride,
+                     uint32_t* sad_array);
+#define vpx_sad16x8x3 vpx_sad16x8x3_c
+
+void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* const ref_array[],
+                      int ref_stride,
+                      uint32_t* sad_array);
+void vpx_sad16x8x4d_neon(const uint8_t* src_ptr,
+                         int src_stride,
+                         const uint8_t* const ref_array[],
+                         int ref_stride,
+                         uint32_t* sad_array);
+#define vpx_sad16x8x4d vpx_sad16x8x4d_neon
+
+void vpx_sad16x8x8_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* ref_ptr,
+                     int ref_stride,
+                     uint32_t* sad_array);
+#define vpx_sad16x8x8 vpx_sad16x8x8_c
+
+unsigned int vpx_sad32x16_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad32x16_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad32x16 vpx_sad32x16_neon
+
+unsigned int vpx_sad32x16_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad32x16_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad32x16_avg vpx_sad32x16_avg_neon
+
+void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad32x16x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad32x16x4d vpx_sad32x16x4d_neon
+
+unsigned int vpx_sad32x32_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad32x32_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad32x32 vpx_sad32x32_neon
+
+unsigned int vpx_sad32x32_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad32x32_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad32x32_avg vpx_sad32x32_avg_neon
+
+void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad32x32x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad32x32x4d vpx_sad32x32x4d_neon
+
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad32x32x8 vpx_sad32x32x8_c
+
+unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad32x64_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad32x64 vpx_sad32x64_neon
+
+unsigned int vpx_sad32x64_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad32x64_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad32x64_avg vpx_sad32x64_avg_neon
+
+void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad32x64x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad32x64x4d vpx_sad32x64x4d_neon
+
+unsigned int vpx_sad4x4_c(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride);
+unsigned int vpx_sad4x4_neon(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* ref_ptr,
+                             int ref_stride);
+#define vpx_sad4x4 vpx_sad4x4_neon
+
+unsigned int vpx_sad4x4_avg_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride,
+                              const uint8_t* second_pred);
+unsigned int vpx_sad4x4_avg_neon(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 const uint8_t* second_pred);
+#define vpx_sad4x4_avg vpx_sad4x4_avg_neon
+
+void vpx_sad4x4x3_c(const uint8_t* src_ptr,
+                    int src_stride,
+                    const uint8_t* ref_ptr,
+                    int ref_stride,
+                    uint32_t* sad_array);
+#define vpx_sad4x4x3 vpx_sad4x4x3_c
+
+void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* const ref_array[],
+                     int ref_stride,
+                     uint32_t* sad_array);
+void vpx_sad4x4x4d_neon(const uint8_t* src_ptr,
+                        int src_stride,
+                        const uint8_t* const ref_array[],
+                        int ref_stride,
+                        uint32_t* sad_array);
+#define vpx_sad4x4x4d vpx_sad4x4x4d_neon
+
+void vpx_sad4x4x8_c(const uint8_t* src_ptr,
+                    int src_stride,
+                    const uint8_t* ref_ptr,
+                    int ref_stride,
+                    uint32_t* sad_array);
+#define vpx_sad4x4x8 vpx_sad4x4x8_c
+
+unsigned int vpx_sad4x8_c(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride);
+unsigned int vpx_sad4x8_neon(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* ref_ptr,
+                             int ref_stride);
+#define vpx_sad4x8 vpx_sad4x8_neon
+
+unsigned int vpx_sad4x8_avg_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride,
+                              const uint8_t* second_pred);
+unsigned int vpx_sad4x8_avg_neon(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 const uint8_t* second_pred);
+#define vpx_sad4x8_avg vpx_sad4x8_avg_neon
+
+void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* const ref_array[],
+                     int ref_stride,
+                     uint32_t* sad_array);
+void vpx_sad4x8x4d_neon(const uint8_t* src_ptr,
+                        int src_stride,
+                        const uint8_t* const ref_array[],
+                        int ref_stride,
+                        uint32_t* sad_array);
+#define vpx_sad4x8x4d vpx_sad4x8x4d_neon
+
+unsigned int vpx_sad64x32_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad64x32_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad64x32 vpx_sad64x32_neon
+
+unsigned int vpx_sad64x32_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad64x32_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad64x32_avg vpx_sad64x32_avg_neon
+
+void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad64x32x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad64x32x4d vpx_sad64x32x4d_neon
+
+unsigned int vpx_sad64x64_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad64x64_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad64x64 vpx_sad64x64_neon
+
+unsigned int vpx_sad64x64_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad64x64_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad64x64_avg vpx_sad64x64_avg_neon
+
+void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad64x64x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad64x64x4d vpx_sad64x64x4d_neon
+
+unsigned int vpx_sad8x16_c(const uint8_t* src_ptr,
+                           int src_stride,
+                           const uint8_t* ref_ptr,
+                           int ref_stride);
+unsigned int vpx_sad8x16_neon(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride);
+#define vpx_sad8x16 vpx_sad8x16_neon
+
+unsigned int vpx_sad8x16_avg_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               const uint8_t* second_pred);
+unsigned int vpx_sad8x16_avg_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  const uint8_t* second_pred);
+#define vpx_sad8x16_avg vpx_sad8x16_avg_neon
+
+void vpx_sad8x16x3_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* ref_ptr,
+                     int ref_stride,
+                     uint32_t* sad_array);
+#define vpx_sad8x16x3 vpx_sad8x16x3_c
+
+void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* const ref_array[],
+                      int ref_stride,
+                      uint32_t* sad_array);
+void vpx_sad8x16x4d_neon(const uint8_t* src_ptr,
+                         int src_stride,
+                         const uint8_t* const ref_array[],
+                         int ref_stride,
+                         uint32_t* sad_array);
+#define vpx_sad8x16x4d vpx_sad8x16x4d_neon
+
+void vpx_sad8x16x8_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* ref_ptr,
+                     int ref_stride,
+                     uint32_t* sad_array);
+#define vpx_sad8x16x8 vpx_sad8x16x8_c
+
+unsigned int vpx_sad8x4_c(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride);
+unsigned int vpx_sad8x4_neon(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* ref_ptr,
+                             int ref_stride);
+#define vpx_sad8x4 vpx_sad8x4_neon
+
+unsigned int vpx_sad8x4_avg_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride,
+                              const uint8_t* second_pred);
+unsigned int vpx_sad8x4_avg_neon(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 const uint8_t* second_pred);
+#define vpx_sad8x4_avg vpx_sad8x4_avg_neon
+
+void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* const ref_array[],
+                     int ref_stride,
+                     uint32_t* sad_array);
+void vpx_sad8x4x4d_neon(const uint8_t* src_ptr,
+                        int src_stride,
+                        const uint8_t* const ref_array[],
+                        int ref_stride,
+                        uint32_t* sad_array);
+#define vpx_sad8x4x4d vpx_sad8x4x4d_neon
+
+unsigned int vpx_sad8x8_c(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride);
+unsigned int vpx_sad8x8_neon(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* ref_ptr,
+                             int ref_stride);
+#define vpx_sad8x8 vpx_sad8x8_neon
+
+unsigned int vpx_sad8x8_avg_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride,
+                              const uint8_t* second_pred);
+unsigned int vpx_sad8x8_avg_neon(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 const uint8_t* second_pred);
+#define vpx_sad8x8_avg vpx_sad8x8_avg_neon
+
+void vpx_sad8x8x3_c(const uint8_t* src_ptr,
+                    int src_stride,
+                    const uint8_t* ref_ptr,
+                    int ref_stride,
+                    uint32_t* sad_array);
+#define vpx_sad8x8x3 vpx_sad8x8x3_c
+
+void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* const ref_array[],
+                     int ref_stride,
+                     uint32_t* sad_array);
+void vpx_sad8x8x4d_neon(const uint8_t* src_ptr,
+                        int src_stride,
+                        const uint8_t* const ref_array[],
+                        int ref_stride,
+                        uint32_t* sad_array);
+#define vpx_sad8x8x4d vpx_sad8x8x4d_neon
+
+void vpx_sad8x8x8_c(const uint8_t* src_ptr,
+                    int src_stride,
+                    const uint8_t* ref_ptr,
+                    int ref_stride,
+                    uint32_t* sad_array);
+#define vpx_sad8x8x8 vpx_sad8x8x8_c
+
+int vpx_satd_c(const tran_low_t* coeff, int length);
+int vpx_satd_neon(const tran_low_t* coeff, int length);
+#define vpx_satd vpx_satd_neon
+
+void vpx_scaled_2d_c(const uint8_t* src,
+                     ptrdiff_t src_stride,
+                     uint8_t* dst,
+                     ptrdiff_t dst_stride,
+                     const InterpKernel* filter,
+                     int x0_q4,
+                     int x_step_q4,
+                     int y0_q4,
+                     int y_step_q4,
+                     int w,
+                     int h);
+void vpx_scaled_2d_neon(const uint8_t* src,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        ptrdiff_t dst_stride,
+                        const InterpKernel* filter,
+                        int x0_q4,
+                        int x_step_q4,
+                        int y0_q4,
+                        int y_step_q4,
+                        int w,
+                        int h);
+#define vpx_scaled_2d vpx_scaled_2d_neon
+
+void vpx_scaled_avg_2d_c(const uint8_t* src,
+                         ptrdiff_t src_stride,
+                         uint8_t* dst,
+                         ptrdiff_t dst_stride,
+                         const InterpKernel* filter,
+                         int x0_q4,
+                         int x_step_q4,
+                         int y0_q4,
+                         int y_step_q4,
+                         int w,
+                         int h);
+#define vpx_scaled_avg_2d vpx_scaled_avg_2d_c
+
+void vpx_scaled_avg_horiz_c(const uint8_t* src,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst,
+                            ptrdiff_t dst_stride,
+                            const InterpKernel* filter,
+                            int x0_q4,
+                            int x_step_q4,
+                            int y0_q4,
+                            int y_step_q4,
+                            int w,
+                            int h);
+#define vpx_scaled_avg_horiz vpx_scaled_avg_horiz_c
+
+void vpx_scaled_avg_vert_c(const uint8_t* src,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst,
+                           ptrdiff_t dst_stride,
+                           const InterpKernel* filter,
+                           int x0_q4,
+                           int x_step_q4,
+                           int y0_q4,
+                           int y_step_q4,
+                           int w,
+                           int h);
+#define vpx_scaled_avg_vert vpx_scaled_avg_vert_c
+
+void vpx_scaled_horiz_c(const uint8_t* src,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        ptrdiff_t dst_stride,
+                        const InterpKernel* filter,
+                        int x0_q4,
+                        int x_step_q4,
+                        int y0_q4,
+                        int y_step_q4,
+                        int w,
+                        int h);
+#define vpx_scaled_horiz vpx_scaled_horiz_c
+
+void vpx_scaled_vert_c(const uint8_t* src,
+                       ptrdiff_t src_stride,
+                       uint8_t* dst,
+                       ptrdiff_t dst_stride,
+                       const InterpKernel* filter,
+                       int x0_q4,
+                       int x_step_q4,
+                       int y0_q4,
+                       int y_step_q4,
+                       int w,
+                       int h);
+#define vpx_scaled_vert vpx_scaled_vert_c
+
+uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance16x16_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance16x16 vpx_sub_pixel_avg_variance16x16_neon
+
+uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance16x32_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance16x32 vpx_sub_pixel_avg_variance16x32_neon
+
+uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse,
+                                          const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance16x8_neon(const uint8_t* src_ptr,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
+                                             const uint8_t* ref_ptr,
+                                             int ref_stride,
+                                             uint32_t* sse,
+                                             const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance16x8 vpx_sub_pixel_avg_variance16x8_neon
+
+uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance32x16_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance32x16 vpx_sub_pixel_avg_variance32x16_neon
+
+uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance32x32_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance32x32 vpx_sub_pixel_avg_variance32x32_neon
+
+uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance32x64_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance32x64 vpx_sub_pixel_avg_variance32x64_neon
+
+uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse,
+                                         const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance4x4_neon(const uint8_t* src_ptr,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
+                                            const uint8_t* ref_ptr,
+                                            int ref_stride,
+                                            uint32_t* sse,
+                                            const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance4x4 vpx_sub_pixel_avg_variance4x4_neon
+
+uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse,
+                                         const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance4x8_neon(const uint8_t* src_ptr,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
+                                            const uint8_t* ref_ptr,
+                                            int ref_stride,
+                                            uint32_t* sse,
+                                            const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance4x8 vpx_sub_pixel_avg_variance4x8_neon
+
+uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance64x32_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance64x32 vpx_sub_pixel_avg_variance64x32_neon
+
+uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance64x64_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance64x64 vpx_sub_pixel_avg_variance64x64_neon
+
+uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse,
+                                          const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance8x16_neon(const uint8_t* src_ptr,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
+                                             const uint8_t* ref_ptr,
+                                             int ref_stride,
+                                             uint32_t* sse,
+                                             const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance8x16 vpx_sub_pixel_avg_variance8x16_neon
+
+uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse,
+                                         const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance8x4_neon(const uint8_t* src_ptr,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
+                                            const uint8_t* ref_ptr,
+                                            int ref_stride,
+                                            uint32_t* sse,
+                                            const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance8x4 vpx_sub_pixel_avg_variance8x4_neon
+
+uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse,
+                                         const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance8x8_neon(const uint8_t* src_ptr,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
+                                            const uint8_t* ref_ptr,
+                                            int ref_stride,
+                                            uint32_t* sse,
+                                            const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance8x8 vpx_sub_pixel_avg_variance8x8_neon
+
+uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance16x16_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance16x16 vpx_sub_pixel_variance16x16_neon
+
+uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance16x32_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance16x32 vpx_sub_pixel_variance16x32_neon
+
+uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      uint32_t* sse);
+uint32_t vpx_sub_pixel_variance16x8_neon(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse);
+#define vpx_sub_pixel_variance16x8 vpx_sub_pixel_variance16x8_neon
+
+uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance32x16_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance32x16 vpx_sub_pixel_variance32x16_neon
+
+uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance32x32_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance32x32 vpx_sub_pixel_variance32x32_neon
+
+uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance32x64_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance32x64 vpx_sub_pixel_variance32x64_neon
+
+uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     uint32_t* sse);
+uint32_t vpx_sub_pixel_variance4x4_neon(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        uint32_t* sse);
+#define vpx_sub_pixel_variance4x4 vpx_sub_pixel_variance4x4_neon
+
+uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     uint32_t* sse);
+uint32_t vpx_sub_pixel_variance4x8_neon(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        uint32_t* sse);
+#define vpx_sub_pixel_variance4x8 vpx_sub_pixel_variance4x8_neon
+
+uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance64x32_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance64x32 vpx_sub_pixel_variance64x32_neon
+
+uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance64x64_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance64x64 vpx_sub_pixel_variance64x64_neon
+
+uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      uint32_t* sse);
+uint32_t vpx_sub_pixel_variance8x16_neon(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse);
+#define vpx_sub_pixel_variance8x16 vpx_sub_pixel_variance8x16_neon
+
+uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     uint32_t* sse);
+uint32_t vpx_sub_pixel_variance8x4_neon(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        uint32_t* sse);
+#define vpx_sub_pixel_variance8x4 vpx_sub_pixel_variance8x4_neon
+
+uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     uint32_t* sse);
+uint32_t vpx_sub_pixel_variance8x8_neon(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        uint32_t* sse);
+#define vpx_sub_pixel_variance8x8 vpx_sub_pixel_variance8x8_neon
+
+void vpx_subtract_block_c(int rows,
+                          int cols,
+                          int16_t* diff_ptr,
+                          ptrdiff_t diff_stride,
+                          const uint8_t* src_ptr,
+                          ptrdiff_t src_stride,
+                          const uint8_t* pred_ptr,
+                          ptrdiff_t pred_stride);
+void vpx_subtract_block_neon(int rows,
+                             int cols,
+                             int16_t* diff_ptr,
+                             ptrdiff_t diff_stride,
+                             const uint8_t* src_ptr,
+                             ptrdiff_t src_stride,
+                             const uint8_t* pred_ptr,
+                             ptrdiff_t pred_stride);
+#define vpx_subtract_block vpx_subtract_block_neon
+
+uint64_t vpx_sum_squares_2d_i16_c(const int16_t* src, int stride, int size);
+uint64_t vpx_sum_squares_2d_i16_neon(const int16_t* src, int stride, int size);
+#define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_neon
+
+void vpx_tm_predictor_16x16_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_tm_predictor_16x16_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_neon
+
+void vpx_tm_predictor_32x32_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_tm_predictor_32x32_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_neon
+
+void vpx_tm_predictor_4x4_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+void vpx_tm_predictor_4x4_neon(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_neon
+
+void vpx_tm_predictor_8x8_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+void vpx_tm_predictor_8x8_neon(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_neon
+
+void vpx_v_predictor_16x16_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_v_predictor_16x16_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_v_predictor_16x16 vpx_v_predictor_16x16_neon
+
+void vpx_v_predictor_32x32_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_v_predictor_32x32_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_v_predictor_32x32 vpx_v_predictor_32x32_neon
+
+void vpx_v_predictor_4x4_c(uint8_t* dst,
+                           ptrdiff_t stride,
+                           const uint8_t* above,
+                           const uint8_t* left);
+void vpx_v_predictor_4x4_neon(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_v_predictor_4x4 vpx_v_predictor_4x4_neon
+
+void vpx_v_predictor_8x8_c(uint8_t* dst,
+                           ptrdiff_t stride,
+                           const uint8_t* above,
+                           const uint8_t* left);
+void vpx_v_predictor_8x8_neon(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_v_predictor_8x8 vpx_v_predictor_8x8_neon
+
+unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance16x16_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance16x16 vpx_variance16x16_neon
+
+unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance16x32_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance16x32 vpx_variance16x32_neon
+
+unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                unsigned int* sse);
+unsigned int vpx_variance16x8_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   unsigned int* sse);
+#define vpx_variance16x8 vpx_variance16x8_neon
+
+unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance32x16_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance32x16 vpx_variance32x16_neon
+
+unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance32x32_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance32x32 vpx_variance32x32_neon
+
+unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance32x64_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance32x64 vpx_variance32x64_neon
+
+unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse);
+unsigned int vpx_variance4x4_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse);
+#define vpx_variance4x4 vpx_variance4x4_neon
+
+unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse);
+unsigned int vpx_variance4x8_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse);
+#define vpx_variance4x8 vpx_variance4x8_neon
+
+unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance64x32_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance64x32 vpx_variance64x32_neon
+
+unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance64x64_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance64x64 vpx_variance64x64_neon
+
+unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                unsigned int* sse);
+unsigned int vpx_variance8x16_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   unsigned int* sse);
+#define vpx_variance8x16 vpx_variance8x16_neon
+
+unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse);
+unsigned int vpx_variance8x4_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse);
+#define vpx_variance8x4 vpx_variance8x4_neon
+
+unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse);
+unsigned int vpx_variance8x8_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse);
+#define vpx_variance8x8 vpx_variance8x8_neon
+
+void vpx_ve_predictor_4x4_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+#define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
+
+int vpx_vector_var_c(const int16_t* ref, const int16_t* src, const int bwl);
+int vpx_vector_var_neon(const int16_t* ref, const int16_t* src, const int bwl);
+#define vpx_vector_var vpx_vector_var_neon
+
+void vpx_dsp_rtcd(void);
+
+#include "vpx_config.h"
+
+#ifdef RTCD_C
+#include "vpx_ports/arm.h"
+static void setup_rtcd_internal(void) {
+  int flags = arm_cpu_caps();
+
+  (void)flags;
+}
+#endif
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_scale_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_scale_rtcd.h
new file mode 100644
index 0000000..555013e
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64-highbd/vpx_scale_rtcd.h
@@ -0,0 +1,101 @@
+// This file is generated. Do not edit.
+#ifndef VPX_SCALE_RTCD_H_
+#define VPX_SCALE_RTCD_H_
+
+#ifdef RTCD_C
+#define RTCD_EXTERN
+#else
+#define RTCD_EXTERN extern
+#endif
+
+struct yv12_buffer_config;
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+void vp8_horizontal_line_2_1_scale_c(const unsigned char* source,
+                                     unsigned int source_width,
+                                     unsigned char* dest,
+                                     unsigned int dest_width);
+#define vp8_horizontal_line_2_1_scale vp8_horizontal_line_2_1_scale_c
+
+void vp8_horizontal_line_5_3_scale_c(const unsigned char* source,
+                                     unsigned int source_width,
+                                     unsigned char* dest,
+                                     unsigned int dest_width);
+#define vp8_horizontal_line_5_3_scale vp8_horizontal_line_5_3_scale_c
+
+void vp8_horizontal_line_5_4_scale_c(const unsigned char* source,
+                                     unsigned int source_width,
+                                     unsigned char* dest,
+                                     unsigned int dest_width);
+#define vp8_horizontal_line_5_4_scale vp8_horizontal_line_5_4_scale_c
+
+void vp8_vertical_band_2_1_scale_c(unsigned char* source,
+                                   unsigned int src_pitch,
+                                   unsigned char* dest,
+                                   unsigned int dest_pitch,
+                                   unsigned int dest_width);
+#define vp8_vertical_band_2_1_scale vp8_vertical_band_2_1_scale_c
+
+void vp8_vertical_band_2_1_scale_i_c(unsigned char* source,
+                                     unsigned int src_pitch,
+                                     unsigned char* dest,
+                                     unsigned int dest_pitch,
+                                     unsigned int dest_width);
+#define vp8_vertical_band_2_1_scale_i vp8_vertical_band_2_1_scale_i_c
+
+void vp8_vertical_band_5_3_scale_c(unsigned char* source,
+                                   unsigned int src_pitch,
+                                   unsigned char* dest,
+                                   unsigned int dest_pitch,
+                                   unsigned int dest_width);
+#define vp8_vertical_band_5_3_scale vp8_vertical_band_5_3_scale_c
+
+void vp8_vertical_band_5_4_scale_c(unsigned char* source,
+                                   unsigned int src_pitch,
+                                   unsigned char* dest,
+                                   unsigned int dest_pitch,
+                                   unsigned int dest_width);
+#define vp8_vertical_band_5_4_scale vp8_vertical_band_5_4_scale_c
+
+void vp8_yv12_copy_frame_c(const struct yv12_buffer_config* src_ybc,
+                           struct yv12_buffer_config* dst_ybc);
+#define vp8_yv12_copy_frame vp8_yv12_copy_frame_c
+
+void vp8_yv12_extend_frame_borders_c(struct yv12_buffer_config* ybf);
+#define vp8_yv12_extend_frame_borders vp8_yv12_extend_frame_borders_c
+
+void vpx_extend_frame_borders_c(struct yv12_buffer_config* ybf);
+#define vpx_extend_frame_borders vpx_extend_frame_borders_c
+
+void vpx_extend_frame_inner_borders_c(struct yv12_buffer_config* ybf);
+#define vpx_extend_frame_inner_borders vpx_extend_frame_inner_borders_c
+
+void vpx_yv12_copy_frame_c(const struct yv12_buffer_config* src_ybc,
+                           struct yv12_buffer_config* dst_ybc);
+#define vpx_yv12_copy_frame vpx_yv12_copy_frame_c
+
+void vpx_yv12_copy_y_c(const struct yv12_buffer_config* src_ybc,
+                       struct yv12_buffer_config* dst_ybc);
+#define vpx_yv12_copy_y vpx_yv12_copy_y_c
+
+void vpx_scale_rtcd(void);
+
+#include "vpx_config.h"
+
+#ifdef RTCD_C
+#include "vpx_ports/arm.h"
+static void setup_rtcd_internal(void) {
+  int flags = arm_cpu_caps();
+
+  (void)flags;
+}
+#endif
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vp8_rtcd.h
index 737afd5..c4bb41b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vp8_rtcd.h
@@ -27,68 +27,68 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-void vp8_bilinear_predict16x16_neon(unsigned char* src,
-                                    int src_pitch,
-                                    int xofst,
-                                    int yofst,
-                                    unsigned char* dst,
+void vp8_bilinear_predict16x16_neon(unsigned char* src_ptr,
+                                    int src_pixels_per_line,
+                                    int xoffset,
+                                    int yoffset,
+                                    unsigned char* dst_ptr,
                                     int dst_pitch);
 #define vp8_bilinear_predict16x16 vp8_bilinear_predict16x16_neon
 
-void vp8_bilinear_predict4x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict4x4_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict4x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_neon
 
-void vp8_bilinear_predict8x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict8x4_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict8x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_neon
 
-void vp8_bilinear_predict8x8_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict8x8_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict8x8_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_bilinear_predict8x8 vp8_bilinear_predict8x8_neon
 
 void vp8_blend_b_c(unsigned char* y,
                    unsigned char* u,
                    unsigned char* v,
-                   int y1,
-                   int u1,
-                   int v1,
+                   int y_1,
+                   int u_1,
+                   int v_1,
                    int alpha,
                    int stride);
 #define vp8_blend_b vp8_blend_b_c
@@ -96,9 +96,9 @@
 void vp8_blend_mb_inner_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
@@ -106,9 +106,9 @@
 void vp8_blend_mb_outer_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
@@ -116,45 +116,52 @@
 int vp8_block_error_c(short* coeff, short* dqcoeff);
 #define vp8_block_error vp8_block_error_c
 
+void vp8_copy32xn_c(const unsigned char* src_ptr,
+                    int src_stride,
+                    unsigned char* dst_ptr,
+                    int dst_stride,
+                    int height);
+#define vp8_copy32xn vp8_copy32xn_c
+
 void vp8_copy_mem16x16_c(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 void vp8_copy_mem16x16_neon(unsigned char* src,
-                            int src_pitch,
+                            int src_stride,
                             unsigned char* dst,
-                            int dst_pitch);
+                            int dst_stride);
 #define vp8_copy_mem16x16 vp8_copy_mem16x16_neon
 
 void vp8_copy_mem8x4_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 void vp8_copy_mem8x4_neon(unsigned char* src,
-                          int src_pitch,
+                          int src_stride,
                           unsigned char* dst,
-                          int dst_pitch);
+                          int dst_stride);
 #define vp8_copy_mem8x4 vp8_copy_mem8x4_neon
 
 void vp8_copy_mem8x8_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 void vp8_copy_mem8x8_neon(unsigned char* src,
-                          int src_pitch,
+                          int src_stride,
                           unsigned char* dst,
-                          int dst_pitch);
+                          int dst_stride);
 #define vp8_copy_mem8x8 vp8_copy_mem8x8_neon
 
-void vp8_dc_only_idct_add_c(short input,
-                            unsigned char* pred,
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
                             int pred_stride,
-                            unsigned char* dst,
+                            unsigned char* dst_ptr,
                             int dst_stride);
-void vp8_dc_only_idct_add_neon(short input,
-                               unsigned char* pred,
+void vp8_dc_only_idct_add_neon(short input_dc,
+                               unsigned char* pred_ptr,
                                int pred_stride,
-                               unsigned char* dst,
+                               unsigned char* dst_ptr,
                                int dst_stride);
 #define vp8_dc_only_idct_add vp8_dc_only_idct_add_neon
 
@@ -196,11 +203,11 @@
 
 void vp8_dequant_idct_add_c(short* input,
                             short* dq,
-                            unsigned char* output,
+                            unsigned char* dest,
                             int stride);
 void vp8_dequant_idct_add_neon(short* input,
                                short* dq,
-                               unsigned char* output,
+                               unsigned char* dest,
                                int stride);
 #define vp8_dequant_idct_add vp8_dequant_idct_add_neon
 
@@ -230,8 +237,8 @@
                                        char* eobs);
 #define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_neon
 
-void vp8_dequantize_b_c(struct blockd*, short* dqc);
-void vp8_dequantize_b_neon(struct blockd*, short* dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
+void vp8_dequantize_b_neon(struct blockd*, short* DQC);
 #define vp8_dequantize_b vp8_dequantize_b_neon
 
 int vp8_diamond_search_sad_c(struct macroblock* x,
@@ -283,91 +290,91 @@
                           union int_mv* center_mv);
 #define vp8_full_search_sad vp8_full_search_sad_c
 
-void vp8_loop_filter_bh_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bh_neon(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bh_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
 #define vp8_loop_filter_bh vp8_loop_filter_bh_neon
 
-void vp8_loop_filter_bv_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bv_neon(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bv_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
 #define vp8_loop_filter_bv vp8_loop_filter_bv_neon
 
-void vp8_loop_filter_mbh_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbh_neon(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbh_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbh vp8_loop_filter_mbh_neon
 
-void vp8_loop_filter_mbv_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbv_neon(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbv_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbv vp8_loop_filter_mbv_neon
 
-void vp8_loop_filter_bhs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bhs_neon(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bhs_neon(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_neon
 
-void vp8_loop_filter_bvs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bvs_neon(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bvs_neon(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_neon
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y,
-                                              int ystride,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
-void vp8_loop_filter_mbhs_neon(unsigned char* y,
-                               int ystride,
+void vp8_loop_filter_mbhs_neon(unsigned char* y_ptr,
+                               int y_stride,
                                const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbh vp8_loop_filter_mbhs_neon
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y,
-                                            int ystride,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
                                             const unsigned char* blimit);
-void vp8_loop_filter_mbvs_neon(unsigned char* y,
-                               int ystride,
+void vp8_loop_filter_mbvs_neon(unsigned char* y_ptr,
+                               int y_stride,
                                const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbv vp8_loop_filter_mbvs_neon
 
@@ -381,8 +388,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -400,81 +407,81 @@
 #define vp8_short_fdct8x4 vp8_short_fdct8x4_neon
 
 void vp8_short_idct4x4llm_c(short* input,
-                            unsigned char* pred,
-                            int pitch,
-                            unsigned char* dst,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 void vp8_short_idct4x4llm_neon(short* input,
-                               unsigned char* pred,
-                               int pitch,
-                               unsigned char* dst,
+                               unsigned char* pred_ptr,
+                               int pred_stride,
+                               unsigned char* dst_ptr,
                                int dst_stride);
 #define vp8_short_idct4x4llm vp8_short_idct4x4llm_neon
 
-void vp8_short_inv_walsh4x4_c(short* input, short* output);
-void vp8_short_inv_walsh4x4_neon(short* input, short* output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
+void vp8_short_inv_walsh4x4_neon(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_neon
 
-void vp8_short_inv_walsh4x4_1_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
 void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
 void vp8_short_walsh4x4_neon(short* input, short* output, int pitch);
 #define vp8_short_walsh4x4 vp8_short_walsh4x4_neon
 
-void vp8_sixtap_predict16x16_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_sixtap_predict16x16_neon(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_sixtap_predict16x16_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
 #define vp8_sixtap_predict16x16 vp8_sixtap_predict16x16_neon
 
-void vp8_sixtap_predict4x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict4x4_neon(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict4x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
 #define vp8_sixtap_predict4x4 vp8_sixtap_predict4x4_neon
 
-void vp8_sixtap_predict8x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x4_neon(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
 #define vp8_sixtap_predict8x4 vp8_sixtap_predict8x4_neon
 
-void vp8_sixtap_predict8x8_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x8_neon(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x8_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
 #define vp8_sixtap_predict8x8 vp8_sixtap_predict8x8_neon
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vp9_rtcd.h
index 309c780..e17954d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vp9_rtcd.h
@@ -76,34 +76,6 @@
                              const struct mv* center_mv);
 #define vp9_diamond_search_sad vp9_diamond_search_sad_c
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-void vp9_fdct8x8_quant_neon(const int16_t* input,
-                            int stride,
-                            tran_low_t* coeff_ptr,
-                            intptr_t n_coeffs,
-                            int skip_block,
-                            const int16_t* round_ptr,
-                            const int16_t* quant_ptr,
-                            tran_low_t* qcoeff_ptr,
-                            tran_low_t* dqcoeff_ptr,
-                            const int16_t* dequant_ptr,
-                            uint16_t* eob_ptr,
-                            const int16_t* scan,
-                            const int16_t* iscan);
-#define vp9_fdct8x8_quant vp9_fdct8x8_quant_neon
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -140,12 +112,12 @@
 #define vp9_fwht4x4 vp9_fwht4x4_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 void vp9_iht16x16_256_add_neon(const tran_low_t* input,
-                               uint8_t* output,
-                               int pitch,
+                               uint8_t* dest,
+                               int stride,
                                int tx_type);
 #define vp9_iht16x16_256_add vp9_iht16x16_256_add_neon
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.asm
index f8ed0b6..b909ad6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.asm
@@ -1,10 +1,15 @@
 @ This file was created from a .asm file
 @  using the ads2gas.pl script.
-	.equ DO1STROUNDING, 0
+	.syntax unified
+.equ VPX_ARCH_ARM ,  1
 .equ ARCH_ARM ,  1
+.equ VPX_ARCH_MIPS ,  0
 .equ ARCH_MIPS ,  0
+.equ VPX_ARCH_X86 ,  0
 .equ ARCH_X86 ,  0
+.equ VPX_ARCH_X86_64 ,  0
 .equ ARCH_X86_64 ,  0
+.equ VPX_ARCH_PPC ,  0
 .equ ARCH_PPC ,  0
 .equ HAVE_NEON ,  1
 .equ HAVE_NEON_ASM ,  0
@@ -69,21 +74,25 @@
 .equ CONFIG_OS_SUPPORT ,  1
 .equ CONFIG_UNIT_TESTS ,  1
 .equ CONFIG_WEBM_IO ,  1
-.equ CONFIG_LIBYUV ,  1
+.equ CONFIG_LIBYUV ,  0
 .equ CONFIG_DECODE_PERF_TESTS ,  0
 .equ CONFIG_ENCODE_PERF_TESTS ,  0
 .equ CONFIG_MULTI_RES_ENCODING ,  1
 .equ CONFIG_TEMPORAL_DENOISING ,  1
 .equ CONFIG_VP9_TEMPORAL_DENOISING ,  1
+.equ CONFIG_CONSISTENT_RECODE ,  0
 .equ CONFIG_COEFFICIENT_RANGE_CHECKING ,  0
 .equ CONFIG_VP9_HIGHBITDEPTH ,  0
 .equ CONFIG_BETTER_HW_COMPATIBILITY ,  0
 .equ CONFIG_EXPERIMENTAL ,  0
 .equ CONFIG_SIZE_LIMIT ,  1
 .equ CONFIG_ALWAYS_ADJUST_BPM ,  0
-.equ CONFIG_SPATIAL_SVC ,  0
+.equ CONFIG_BITSTREAM_DEBUG ,  0
+.equ CONFIG_MISMATCH_DEBUG ,  0
 .equ CONFIG_FP_MB_STATS ,  0
 .equ CONFIG_EMULATE_HARDWARE ,  0
+.equ CONFIG_NON_GREEDY_MV ,  0
+.equ CONFIG_RATE_CTRL ,  0
 .equ DECODE_WIDTH_LIMIT ,  16384
 .equ DECODE_HEIGHT_LIMIT ,  16384
 	.section	.note.GNU-stack,"",%progbits
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.c
index 56a5348..273c20e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=armv8-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs";
+static const char* const cfg = "--target=armv8-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.h
index c65638d..2c1933e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      inline
+#define VPX_ARCH_ARM 1
 #define ARCH_ARM 1
+#define VPX_ARCH_MIPS 0
 #define ARCH_MIPS 0
+#define VPX_ARCH_X86 0
 #define ARCH_X86 0
+#define VPX_ARCH_X86_64 0
 #define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 1
 #define HAVE_NEON_ASM 0
@@ -78,21 +83,25 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
 #define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 0
 #define CONFIG_BETTER_HW_COMPATIBILITY 0
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
-#define CONFIG_SPATIAL_SVC 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_dsp_rtcd.h
index 453e90e..9c31497 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/arm64/vpx_dsp_rtcd.h
@@ -235,349 +235,349 @@
 #define vpx_convolve_copy vpx_convolve_copy_neon
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d135_predictor_16x16_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_neon
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d135_predictor_32x32_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_neon
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d135_predictor_4x4_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_neon
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d135_predictor_8x8_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_neon
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_16x16 vpx_d153_predictor_16x16_c
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_32x32 vpx_d153_predictor_32x32_c
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_4x4 vpx_d153_predictor_4x4_c
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_8x8 vpx_d153_predictor_8x8_c
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_16x16 vpx_d207_predictor_16x16_c
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_32x32 vpx_d207_predictor_32x32_c
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_c
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_8x8 vpx_d207_predictor_8x8_c
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_16x16_neon(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_d45_predictor_16x16 vpx_d45_predictor_16x16_neon
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_32x32_neon(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_d45_predictor_32x32 vpx_d45_predictor_32x32_neon
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_4x4_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_neon
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_8x8_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_neon
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_16x16 vpx_d63_predictor_16x16_c
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_32x32 vpx_d63_predictor_32x32_c
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_4x4 vpx_d63_predictor_4x4_c
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_8x8 vpx_d63_predictor_8x8_c
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_16x16_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_neon
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_32x32_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_neon
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_4x4_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_neon
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_8x8_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_neon
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_16x16_neon(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 #define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_neon
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_32x32_neon(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 #define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_neon
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_4x4_neon(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 #define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_neon
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_8x8_neon(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 #define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_neon
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_16x16_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_neon
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_32x32_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_neon
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_4x4_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_neon
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_8x8_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_neon
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_16x16_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_neon
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_32x32_neon(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_neon
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_4x4_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_neon
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_8x8_neon(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_neon
@@ -621,13 +621,13 @@
 #define vpx_fdct8x8_1 vpx_fdct8x8_1_neon
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
                        int* sum);
 void vpx_get16x16var_neon(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
@@ -635,23 +635,23 @@
 #define vpx_get16x16var vpx_get16x16var_neon
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 unsigned int vpx_get4x4sse_cs_neon(const unsigned char* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const unsigned char* ref_ptr,
                                    int ref_stride);
 #define vpx_get4x4sse_cs vpx_get4x4sse_cs_neon
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
                      int* sum);
 void vpx_get8x8var_neon(const uint8_t* src_ptr,
-                        int source_stride,
+                        int src_stride,
                         const uint8_t* ref_ptr,
                         int ref_stride,
                         unsigned int* sse,
@@ -662,41 +662,41 @@
 #define vpx_get_mb_ss vpx_get_mb_ss_c
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_16x16_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_h_predictor_16x16 vpx_h_predictor_16x16_neon
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_32x32_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_h_predictor_32x32 vpx_h_predictor_32x32_neon
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_4x4_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_h_predictor_4x4 vpx_h_predictor_4x4_neon
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_8x8_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_h_predictor_8x8 vpx_h_predictor_8x8_neon
@@ -723,7 +723,7 @@
 #define vpx_hadamard_8x8 vpx_hadamard_8x8_neon
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
@@ -996,12 +996,12 @@
                                   const uint8_t* thresh1);
 #define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_neon
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
                                  int flimit);
-void vpx_mbpost_proc_across_ip_neon(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_neon(unsigned char* src,
                                     int pitch,
                                     int rows,
                                     int cols,
@@ -1035,35 +1035,35 @@
 #define vpx_minmax_8x8 vpx_minmax_8x8_neon
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 unsigned int vpx_mse16x16_neon(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 #define vpx_mse16x16 vpx_mse16x16_neon
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse16x8 vpx_mse16x8_c
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse8x16 vpx_mse8x16_c
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 #define vpx_mse8x8 vpx_mse8x8_c
 
@@ -1180,12 +1180,12 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x16x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad16x16x4d vpx_sad16x16x4d_neon
@@ -1221,12 +1221,12 @@
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x32x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad16x32x4d vpx_sad16x32x4d_neon
@@ -1262,12 +1262,12 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad16x8x4d_neon(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 #define vpx_sad16x8x4d vpx_sad16x8x4d_neon
@@ -1303,12 +1303,12 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x16x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x16x4d vpx_sad32x16x4d_neon
@@ -1337,16 +1337,23 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x32x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x32x4d vpx_sad32x32x4d_neon
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad32x32x8 vpx_sad32x32x8_c
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -1371,12 +1378,12 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x64x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x64x4d vpx_sad32x64x4d_neon
@@ -1412,12 +1419,12 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x4x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad4x4x4d vpx_sad4x4x4d_neon
@@ -1453,12 +1460,12 @@
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x8x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad4x8x4d vpx_sad4x8x4d_neon
@@ -1487,12 +1494,12 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x32x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad64x32x4d vpx_sad64x32x4d_neon
@@ -1521,12 +1528,12 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x64x4d_neon(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad64x64x4d vpx_sad64x64x4d_neon
@@ -1562,12 +1569,12 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad8x16x4d_neon(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 #define vpx_sad8x16x4d vpx_sad8x16x4d_neon
@@ -1603,12 +1610,12 @@
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x4x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad8x4x4d vpx_sad8x4x4d_neon
@@ -1644,12 +1651,12 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x8x4d_neon(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad8x8x4d vpx_sad8x8x4d_neon
@@ -1755,17 +1762,17 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1773,17 +1780,17 @@
 #define vpx_sub_pixel_avg_variance16x16 vpx_sub_pixel_avg_variance16x16_neon
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1791,17 +1798,17 @@
 #define vpx_sub_pixel_avg_variance16x32 vpx_sub_pixel_avg_variance16x32_neon
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_neon(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
@@ -1809,17 +1816,17 @@
 #define vpx_sub_pixel_avg_variance16x8 vpx_sub_pixel_avg_variance16x8_neon
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1827,17 +1834,17 @@
 #define vpx_sub_pixel_avg_variance32x16 vpx_sub_pixel_avg_variance32x16_neon
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1845,17 +1852,17 @@
 #define vpx_sub_pixel_avg_variance32x32 vpx_sub_pixel_avg_variance32x32_neon
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1863,17 +1870,17 @@
 #define vpx_sub_pixel_avg_variance32x64 vpx_sub_pixel_avg_variance32x64_neon
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1881,17 +1888,17 @@
 #define vpx_sub_pixel_avg_variance4x4 vpx_sub_pixel_avg_variance4x4_neon
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1899,17 +1906,17 @@
 #define vpx_sub_pixel_avg_variance4x8 vpx_sub_pixel_avg_variance4x8_neon
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1917,17 +1924,17 @@
 #define vpx_sub_pixel_avg_variance64x32 vpx_sub_pixel_avg_variance64x32_neon
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_neon(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
@@ -1935,17 +1942,17 @@
 #define vpx_sub_pixel_avg_variance64x64 vpx_sub_pixel_avg_variance64x64_neon
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_neon(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
@@ -1953,17 +1960,17 @@
 #define vpx_sub_pixel_avg_variance8x16 vpx_sub_pixel_avg_variance8x16_neon
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1971,17 +1978,17 @@
 #define vpx_sub_pixel_avg_variance8x4 vpx_sub_pixel_avg_variance8x4_neon
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_neon(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
@@ -1989,208 +1996,208 @@
 #define vpx_sub_pixel_avg_variance8x8 vpx_sub_pixel_avg_variance8x8_neon
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance16x16 vpx_sub_pixel_variance16x16_neon
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance16x32 vpx_sub_pixel_variance16x32_neon
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_neon(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 #define vpx_sub_pixel_variance16x8 vpx_sub_pixel_variance16x8_neon
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance32x16 vpx_sub_pixel_variance32x16_neon
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance32x32 vpx_sub_pixel_variance32x32_neon
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance32x64 vpx_sub_pixel_variance32x64_neon
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 #define vpx_sub_pixel_variance4x4 vpx_sub_pixel_variance4x4_neon
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 #define vpx_sub_pixel_variance4x8 vpx_sub_pixel_variance4x8_neon
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance64x32 vpx_sub_pixel_variance64x32_neon
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_neon(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 #define vpx_sub_pixel_variance64x64 vpx_sub_pixel_variance64x64_neon
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_neon(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 #define vpx_sub_pixel_variance8x16 vpx_sub_pixel_variance8x16_neon
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 #define vpx_sub_pixel_variance8x4 vpx_sub_pixel_variance8x4_neon
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_neon(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
@@ -2219,243 +2226,243 @@
 #define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_neon
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_16x16_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_neon
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_32x32_neon(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_neon
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_4x4_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_neon
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_8x8_neon(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_neon
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_16x16_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_v_predictor_16x16 vpx_v_predictor_16x16_neon
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_32x32_neon(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_v_predictor_32x32 vpx_v_predictor_32x32_neon
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_4x4_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_v_predictor_4x4 vpx_v_predictor_4x4_neon
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_8x8_neon(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_v_predictor_8x8 vpx_v_predictor_8x8_neon
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x16_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance16x16 vpx_variance16x16_neon
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x32_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance16x32 vpx_variance16x32_neon
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance16x8_neon(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 #define vpx_variance16x8 vpx_variance16x8_neon
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x16_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance32x16 vpx_variance32x16_neon
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x32_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance32x32 vpx_variance32x32_neon
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x64_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance32x64 vpx_variance32x64_neon
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x4_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance4x4 vpx_variance4x4_neon
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x8_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance4x8 vpx_variance4x8_neon
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x32_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance64x32 vpx_variance64x32_neon
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x64_neon(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 #define vpx_variance64x64 vpx_variance64x64_neon
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance8x16_neon(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 #define vpx_variance8x16 vpx_variance8x16_neon
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x4_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance8x4 vpx_variance8x4_neon
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x8_neon(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance8x8 vpx_variance8x8_neon
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vp8_rtcd.h
index dc054d6..aa475b5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vp8_rtcd.h
@@ -27,44 +27,44 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
 #define vp8_bilinear_predict16x16 vp8_bilinear_predict16x16_c
 
-void vp8_bilinear_predict4x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_c
 
-void vp8_bilinear_predict8x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_c
 
-void vp8_bilinear_predict8x8_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_bilinear_predict8x8 vp8_bilinear_predict8x8_c
 
 void vp8_blend_b_c(unsigned char* y,
                    unsigned char* u,
                    unsigned char* v,
-                   int y1,
-                   int u1,
-                   int v1,
+                   int y_1,
+                   int u_1,
+                   int v_1,
                    int alpha,
                    int stride);
 #define vp8_blend_b vp8_blend_b_c
@@ -72,9 +72,9 @@
 void vp8_blend_mb_inner_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
@@ -82,9 +82,9 @@
 void vp8_blend_mb_outer_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
@@ -92,28 +92,35 @@
 int vp8_block_error_c(short* coeff, short* dqcoeff);
 #define vp8_block_error vp8_block_error_c
 
+void vp8_copy32xn_c(const unsigned char* src_ptr,
+                    int src_stride,
+                    unsigned char* dst_ptr,
+                    int dst_stride,
+                    int height);
+#define vp8_copy32xn vp8_copy32xn_c
+
 void vp8_copy_mem16x16_c(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 #define vp8_copy_mem16x16 vp8_copy_mem16x16_c
 
 void vp8_copy_mem8x4_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 #define vp8_copy_mem8x4 vp8_copy_mem8x4_c
 
 void vp8_copy_mem8x8_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 #define vp8_copy_mem8x8 vp8_copy_mem8x8_c
 
-void vp8_dc_only_idct_add_c(short input,
-                            unsigned char* pred,
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
                             int pred_stride,
-                            unsigned char* dst,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 #define vp8_dc_only_idct_add vp8_dc_only_idct_add_c
 
@@ -139,7 +146,7 @@
 
 void vp8_dequant_idct_add_c(short* input,
                             short* dq,
-                            unsigned char* output,
+                            unsigned char* dest,
                             int stride);
 #define vp8_dequant_idct_add vp8_dequant_idct_add_c
 
@@ -158,7 +165,7 @@
                                     char* eobs);
 #define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_c
 
-void vp8_dequantize_b_c(struct blockd*, short* dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
 #define vp8_dequantize_b vp8_dequantize_b_c
 
 int vp8_diamond_search_sad_c(struct macroblock* x,
@@ -209,55 +216,55 @@
                           union int_mv* center_mv);
 #define vp8_full_search_sad vp8_full_search_sad_c
 
-void vp8_loop_filter_bh_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
 #define vp8_loop_filter_bh vp8_loop_filter_bh_c
 
-void vp8_loop_filter_bv_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
 #define vp8_loop_filter_bv vp8_loop_filter_bv_c
 
-void vp8_loop_filter_mbh_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbh vp8_loop_filter_mbh_c
 
-void vp8_loop_filter_mbv_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbv vp8_loop_filter_mbv_c
 
-void vp8_loop_filter_bhs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
 #define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_c
 
-void vp8_loop_filter_bvs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
 #define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_c
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y,
-                                              int ystride,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbh vp8_loop_filter_simple_horizontal_edge_c
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y,
-                                            int ystride,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
                                             const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbv vp8_loop_filter_simple_vertical_edge_c
 
@@ -271,8 +278,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -288,50 +295,50 @@
 #define vp8_short_fdct8x4 vp8_short_fdct8x4_c
 
 void vp8_short_idct4x4llm_c(short* input,
-                            unsigned char* pred,
-                            int pitch,
-                            unsigned char* dst,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 #define vp8_short_idct4x4llm vp8_short_idct4x4llm_c
 
-void vp8_short_inv_walsh4x4_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_c
 
-void vp8_short_inv_walsh4x4_1_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
 void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
 #define vp8_short_walsh4x4 vp8_short_walsh4x4_c
 
-void vp8_sixtap_predict16x16_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_sixtap_predict16x16 vp8_sixtap_predict16x16_c
 
-void vp8_sixtap_predict4x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
 #define vp8_sixtap_predict4x4 vp8_sixtap_predict4x4_c
 
-void vp8_sixtap_predict8x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
 #define vp8_sixtap_predict8x4 vp8_sixtap_predict8x4_c
 
-void vp8_sixtap_predict8x8_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
 #define vp8_sixtap_predict8x8 vp8_sixtap_predict8x8_c
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vp9_rtcd.h
index 289a739..0091393 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vp9_rtcd.h
@@ -64,21 +64,6 @@
                              const struct mv* center_mv);
 #define vp9_diamond_search_sad vp9_diamond_search_sad_c
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-#define vp9_fdct8x8_quant vp9_fdct8x8_quant_c
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -143,8 +128,8 @@
 #define vp9_highbd_fwht4x4 vp9_highbd_fwht4x4_c
 
 void vp9_highbd_iht16x16_256_add_c(const tran_low_t* input,
-                                   uint16_t* output,
-                                   int pitch,
+                                   uint16_t* dest,
+                                   int stride,
                                    int tx_type,
                                    int bd);
 #define vp9_highbd_iht16x16_256_add vp9_highbd_iht16x16_256_add_c
@@ -219,14 +204,15 @@
                                         unsigned int block_width,
                                         unsigned int block_height,
                                         int strength,
-                                        int filter_weight,
+                                        int* blk_fw,
+                                        int use_32x32,
                                         uint32_t* accumulator,
                                         uint16_t* count);
 #define vp9_highbd_temporal_filter_apply vp9_highbd_temporal_filter_apply_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 #define vp9_iht16x16_256_add vp9_iht16x16_256_add_c
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_config.asm
index 160e31d0..00712e5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_config.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_config.asm
@@ -1,10 +1,15 @@
 @ This file was created from a .asm file
 @  using the ads2gas.pl script.
-	.equ DO1STROUNDING, 0
+	.syntax unified
+.equ VPX_ARCH_ARM ,  0
 .equ ARCH_ARM ,  0
+.equ VPX_ARCH_MIPS ,  0
 .equ ARCH_MIPS ,  0
+.equ VPX_ARCH_X86 ,  0
 .equ ARCH_X86 ,  0
+.equ VPX_ARCH_X86_64 ,  0
 .equ ARCH_X86_64 ,  0
+.equ VPX_ARCH_PPC ,  0
 .equ ARCH_PPC ,  0
 .equ HAVE_NEON ,  0
 .equ HAVE_NEON_ASM ,  0
@@ -69,21 +74,25 @@
 .equ CONFIG_OS_SUPPORT ,  1
 .equ CONFIG_UNIT_TESTS ,  1
 .equ CONFIG_WEBM_IO ,  1
-.equ CONFIG_LIBYUV ,  1
+.equ CONFIG_LIBYUV ,  0
 .equ CONFIG_DECODE_PERF_TESTS ,  0
 .equ CONFIG_ENCODE_PERF_TESTS ,  0
 .equ CONFIG_MULTI_RES_ENCODING ,  1
 .equ CONFIG_TEMPORAL_DENOISING ,  1
 .equ CONFIG_VP9_TEMPORAL_DENOISING ,  1
+.equ CONFIG_CONSISTENT_RECODE ,  0
 .equ CONFIG_COEFFICIENT_RANGE_CHECKING ,  0
 .equ CONFIG_VP9_HIGHBITDEPTH ,  1
 .equ CONFIG_BETTER_HW_COMPATIBILITY ,  0
 .equ CONFIG_EXPERIMENTAL ,  0
 .equ CONFIG_SIZE_LIMIT ,  1
 .equ CONFIG_ALWAYS_ADJUST_BPM ,  0
-.equ CONFIG_SPATIAL_SVC ,  0
+.equ CONFIG_BITSTREAM_DEBUG ,  0
+.equ CONFIG_MISMATCH_DEBUG ,  0
 .equ CONFIG_FP_MB_STATS ,  0
 .equ CONFIG_EMULATE_HARDWARE ,  0
+.equ CONFIG_NON_GREEDY_MV ,  0
+.equ CONFIG_RATE_CTRL ,  0
 .equ DECODE_WIDTH_LIMIT ,  16384
 .equ DECODE_HEIGHT_LIMIT ,  16384
 	.section	.note.GNU-stack,"",%progbits
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_config.c
index 0eacb8b..8aad25f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=generic-gnu --enable-vp9-highbitdepth --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs";
+static const char* const cfg = "--target=generic-gnu --enable-vp9-highbitdepth --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_config.h
index 180ab1b..fddb76b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      inline
+#define VPX_ARCH_ARM 0
 #define ARCH_ARM 0
+#define VPX_ARCH_MIPS 0
 #define ARCH_MIPS 0
+#define VPX_ARCH_X86 0
 #define ARCH_X86 0
+#define VPX_ARCH_X86_64 0
 #define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 0
 #define HAVE_NEON_ASM 0
@@ -78,21 +83,25 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
 #define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 1
 #define CONFIG_BETTER_HW_COMPATIBILITY 0
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
-#define CONFIG_SPATIAL_SVC 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_dsp_rtcd.h
index 29706b5..8ba4d880 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/generic/vpx_dsp_rtcd.h
@@ -139,253 +139,253 @@
 #define vpx_convolve_copy vpx_convolve_copy_c
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_c
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_c
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_c
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_c
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_16x16 vpx_d153_predictor_16x16_c
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_32x32 vpx_d153_predictor_32x32_c
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_4x4 vpx_d153_predictor_4x4_c
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_8x8 vpx_d153_predictor_8x8_c
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_16x16 vpx_d207_predictor_16x16_c
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_32x32 vpx_d207_predictor_32x32_c
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_c
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_8x8 vpx_d207_predictor_8x8_c
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d45_predictor_16x16 vpx_d45_predictor_16x16_c
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d45_predictor_32x32 vpx_d45_predictor_32x32_c
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_c
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_c
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_16x16 vpx_d63_predictor_16x16_c
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_32x32 vpx_d63_predictor_32x32_c
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_4x4 vpx_d63_predictor_4x4_c
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_8x8 vpx_d63_predictor_8x8_c
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_c
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_c
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_c
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_c
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_c
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_c
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_c
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_c
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_c
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_c
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_c
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_c
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_c
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_c
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_c
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_c
@@ -418,7 +418,7 @@
 #define vpx_fdct8x8_1 vpx_fdct8x8_1_c
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
@@ -426,13 +426,13 @@
 #define vpx_get16x16var vpx_get16x16var_c
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 #define vpx_get4x4sse_cs vpx_get4x4sse_cs_c
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
@@ -443,25 +443,25 @@
 #define vpx_get_mb_ss vpx_get_mb_ss_c
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_h_predictor_16x16 vpx_h_predictor_16x16_c
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_h_predictor_32x32 vpx_h_predictor_32x32_c
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_h_predictor_4x4 vpx_h_predictor_4x4_c
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_h_predictor_8x8 vpx_h_predictor_8x8_c
@@ -482,13 +482,13 @@
 #define vpx_hadamard_8x8 vpx_hadamard_8x8_c
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
 
 void vpx_highbd_10_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
@@ -496,7 +496,7 @@
 #define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_c
 
 void vpx_highbd_10_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
@@ -504,38 +504,38 @@
 #define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_c
 
 unsigned int vpx_highbd_10_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 #define vpx_highbd_10_mse16x16 vpx_highbd_10_mse16x16_c
 
 unsigned int vpx_highbd_10_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse16x8 vpx_highbd_10_mse16x8_c
 
 unsigned int vpx_highbd_10_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse8x16 vpx_highbd_10_mse8x16_c
 
 unsigned int vpx_highbd_10_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_10_mse8x8 vpx_highbd_10_mse8x8_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -545,9 +545,9 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -556,9 +556,9 @@
   vpx_highbd_10_sub_pixel_avg_variance16x32_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -568,9 +568,9 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -580,9 +580,9 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -592,9 +592,9 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -603,9 +603,9 @@
   vpx_highbd_10_sub_pixel_avg_variance32x64_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -614,9 +614,9 @@
   vpx_highbd_10_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -626,9 +626,9 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -638,9 +638,9 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -649,9 +649,9 @@
   vpx_highbd_10_sub_pixel_avg_variance64x64_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -660,9 +660,9 @@
   vpx_highbd_10_sub_pixel_avg_variance8x16_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -671,9 +671,9 @@
   vpx_highbd_10_sub_pixel_avg_variance8x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -682,9 +682,9 @@
   vpx_highbd_10_sub_pixel_avg_variance8x8_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -692,9 +692,9 @@
   vpx_highbd_10_sub_pixel_variance16x16_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -702,9 +702,9 @@
   vpx_highbd_10_sub_pixel_variance16x32_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -712,9 +712,9 @@
   vpx_highbd_10_sub_pixel_variance16x8_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -722,9 +722,9 @@
   vpx_highbd_10_sub_pixel_variance32x16_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -732,9 +732,9 @@
   vpx_highbd_10_sub_pixel_variance32x32_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -742,9 +742,9 @@
   vpx_highbd_10_sub_pixel_variance32x64_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -752,9 +752,9 @@
   vpx_highbd_10_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -762,9 +762,9 @@
   vpx_highbd_10_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -772,9 +772,9 @@
   vpx_highbd_10_sub_pixel_variance64x32_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -782,9 +782,9 @@
   vpx_highbd_10_sub_pixel_variance64x64_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -792,9 +792,9 @@
   vpx_highbd_10_sub_pixel_variance8x16_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -802,9 +802,9 @@
   vpx_highbd_10_sub_pixel_variance8x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -812,98 +812,98 @@
   vpx_highbd_10_sub_pixel_variance8x8_c
 
 unsigned int vpx_highbd_10_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_10_variance16x16 vpx_highbd_10_variance16x16_c
 
 unsigned int vpx_highbd_10_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_10_variance16x32 vpx_highbd_10_variance16x32_c
 
 unsigned int vpx_highbd_10_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_10_variance16x8 vpx_highbd_10_variance16x8_c
 
 unsigned int vpx_highbd_10_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_10_variance32x16 vpx_highbd_10_variance32x16_c
 
 unsigned int vpx_highbd_10_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_10_variance32x32 vpx_highbd_10_variance32x32_c
 
 unsigned int vpx_highbd_10_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_10_variance32x64 vpx_highbd_10_variance32x64_c
 
 unsigned int vpx_highbd_10_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x4 vpx_highbd_10_variance4x4_c
 
 unsigned int vpx_highbd_10_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x8 vpx_highbd_10_variance4x8_c
 
 unsigned int vpx_highbd_10_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_10_variance64x32 vpx_highbd_10_variance64x32_c
 
 unsigned int vpx_highbd_10_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_10_variance64x64 vpx_highbd_10_variance64x64_c
 
 unsigned int vpx_highbd_10_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_10_variance8x16 vpx_highbd_10_variance8x16_c
 
 unsigned int vpx_highbd_10_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance8x4 vpx_highbd_10_variance8x4_c
 
 unsigned int vpx_highbd_10_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance8x8 vpx_highbd_10_variance8x8_c
 
 void vpx_highbd_12_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
@@ -911,7 +911,7 @@
 #define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_c
 
 void vpx_highbd_12_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
@@ -919,38 +919,38 @@
 #define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_c
 
 unsigned int vpx_highbd_12_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 #define vpx_highbd_12_mse16x16 vpx_highbd_12_mse16x16_c
 
 unsigned int vpx_highbd_12_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse16x8 vpx_highbd_12_mse16x8_c
 
 unsigned int vpx_highbd_12_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse8x16 vpx_highbd_12_mse8x16_c
 
 unsigned int vpx_highbd_12_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_12_mse8x8 vpx_highbd_12_mse8x8_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -960,9 +960,9 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -971,9 +971,9 @@
   vpx_highbd_12_sub_pixel_avg_variance16x32_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -983,9 +983,9 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -995,9 +995,9 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1007,9 +1007,9 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1018,9 +1018,9 @@
   vpx_highbd_12_sub_pixel_avg_variance32x64_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1029,9 +1029,9 @@
   vpx_highbd_12_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1041,9 +1041,9 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1053,9 +1053,9 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1064,9 +1064,9 @@
   vpx_highbd_12_sub_pixel_avg_variance64x64_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1075,9 +1075,9 @@
   vpx_highbd_12_sub_pixel_avg_variance8x16_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1086,9 +1086,9 @@
   vpx_highbd_12_sub_pixel_avg_variance8x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1097,9 +1097,9 @@
   vpx_highbd_12_sub_pixel_avg_variance8x8_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -1107,9 +1107,9 @@
   vpx_highbd_12_sub_pixel_variance16x16_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -1117,9 +1117,9 @@
   vpx_highbd_12_sub_pixel_variance16x32_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1127,9 +1127,9 @@
   vpx_highbd_12_sub_pixel_variance16x8_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -1137,9 +1137,9 @@
   vpx_highbd_12_sub_pixel_variance32x16_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -1147,9 +1147,9 @@
   vpx_highbd_12_sub_pixel_variance32x32_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -1157,9 +1157,9 @@
   vpx_highbd_12_sub_pixel_variance32x64_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1167,9 +1167,9 @@
   vpx_highbd_12_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1177,9 +1177,9 @@
   vpx_highbd_12_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -1187,9 +1187,9 @@
   vpx_highbd_12_sub_pixel_variance64x32_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -1197,9 +1197,9 @@
   vpx_highbd_12_sub_pixel_variance64x64_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1207,9 +1207,9 @@
   vpx_highbd_12_sub_pixel_variance8x16_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1217,9 +1217,9 @@
   vpx_highbd_12_sub_pixel_variance8x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1227,98 +1227,98 @@
   vpx_highbd_12_sub_pixel_variance8x8_c
 
 unsigned int vpx_highbd_12_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_12_variance16x16 vpx_highbd_12_variance16x16_c
 
 unsigned int vpx_highbd_12_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_12_variance16x32 vpx_highbd_12_variance16x32_c
 
 unsigned int vpx_highbd_12_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_12_variance16x8 vpx_highbd_12_variance16x8_c
 
 unsigned int vpx_highbd_12_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_12_variance32x16 vpx_highbd_12_variance32x16_c
 
 unsigned int vpx_highbd_12_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_12_variance32x32 vpx_highbd_12_variance32x32_c
 
 unsigned int vpx_highbd_12_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_12_variance32x64 vpx_highbd_12_variance32x64_c
 
 unsigned int vpx_highbd_12_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x4 vpx_highbd_12_variance4x4_c
 
 unsigned int vpx_highbd_12_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x8 vpx_highbd_12_variance4x8_c
 
 unsigned int vpx_highbd_12_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_12_variance64x32 vpx_highbd_12_variance64x32_c
 
 unsigned int vpx_highbd_12_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_12_variance64x64 vpx_highbd_12_variance64x64_c
 
 unsigned int vpx_highbd_12_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_12_variance8x16 vpx_highbd_12_variance8x16_c
 
 unsigned int vpx_highbd_12_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance8x4 vpx_highbd_12_variance8x4_c
 
 unsigned int vpx_highbd_12_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance8x8 vpx_highbd_12_variance8x8_c
 
 void vpx_highbd_8_get16x16var_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse,
@@ -1326,7 +1326,7 @@
 #define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_c
 
 void vpx_highbd_8_get8x8var_c(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
                               int ref_stride,
                               unsigned int* sse,
@@ -1334,37 +1334,37 @@
 #define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_c
 
 unsigned int vpx_highbd_8_mse16x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_8_mse16x16 vpx_highbd_8_mse16x16_c
 
 unsigned int vpx_highbd_8_mse16x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse16x8 vpx_highbd_8_mse16x8_c
 
 unsigned int vpx_highbd_8_mse8x16_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse8x16 vpx_highbd_8_mse8x16_c
 
 unsigned int vpx_highbd_8_mse8x8_c(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
-                                   int recon_stride,
+                                   int ref_stride,
                                    unsigned int* sse);
 #define vpx_highbd_8_mse8x8 vpx_highbd_8_mse8x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1373,9 +1373,9 @@
   vpx_highbd_8_sub_pixel_avg_variance16x16_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1384,9 +1384,9 @@
   vpx_highbd_8_sub_pixel_avg_variance16x32_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1395,9 +1395,9 @@
   vpx_highbd_8_sub_pixel_avg_variance16x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1406,9 +1406,9 @@
   vpx_highbd_8_sub_pixel_avg_variance32x16_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1417,9 +1417,9 @@
   vpx_highbd_8_sub_pixel_avg_variance32x32_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1428,9 +1428,9 @@
   vpx_highbd_8_sub_pixel_avg_variance32x64_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -1439,9 +1439,9 @@
   vpx_highbd_8_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -1450,9 +1450,9 @@
   vpx_highbd_8_sub_pixel_avg_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1461,9 +1461,9 @@
   vpx_highbd_8_sub_pixel_avg_variance64x32_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1472,9 +1472,9 @@
   vpx_highbd_8_sub_pixel_avg_variance64x64_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1483,9 +1483,9 @@
   vpx_highbd_8_sub_pixel_avg_variance8x16_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -1494,9 +1494,9 @@
   vpx_highbd_8_sub_pixel_avg_variance8x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -1505,9 +1505,9 @@
   vpx_highbd_8_sub_pixel_avg_variance8x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1515,9 +1515,9 @@
   vpx_highbd_8_sub_pixel_variance16x16_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1525,9 +1525,9 @@
   vpx_highbd_8_sub_pixel_variance16x32_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1535,9 +1535,9 @@
   vpx_highbd_8_sub_pixel_variance16x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1545,9 +1545,9 @@
   vpx_highbd_8_sub_pixel_variance32x16_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1555,9 +1555,9 @@
   vpx_highbd_8_sub_pixel_variance32x32_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1565,27 +1565,27 @@
   vpx_highbd_8_sub_pixel_variance32x64_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x4 vpx_highbd_8_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x8 vpx_highbd_8_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1593,9 +1593,9 @@
   vpx_highbd_8_sub_pixel_variance64x32_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1603,9 +1603,9 @@
   vpx_highbd_8_sub_pixel_variance64x64_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1613,118 +1613,118 @@
   vpx_highbd_8_sub_pixel_variance8x16_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance8x4 vpx_highbd_8_sub_pixel_variance8x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance8x8 vpx_highbd_8_sub_pixel_variance8x8_c
 
 unsigned int vpx_highbd_8_variance16x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_8_variance16x16 vpx_highbd_8_variance16x16_c
 
 unsigned int vpx_highbd_8_variance16x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_8_variance16x32 vpx_highbd_8_variance16x32_c
 
 unsigned int vpx_highbd_8_variance16x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_8_variance16x8 vpx_highbd_8_variance16x8_c
 
 unsigned int vpx_highbd_8_variance32x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_8_variance32x16 vpx_highbd_8_variance32x16_c
 
 unsigned int vpx_highbd_8_variance32x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_8_variance32x32 vpx_highbd_8_variance32x32_c
 
 unsigned int vpx_highbd_8_variance32x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_8_variance32x64 vpx_highbd_8_variance32x64_c
 
 unsigned int vpx_highbd_8_variance4x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x4 vpx_highbd_8_variance4x4_c
 
 unsigned int vpx_highbd_8_variance4x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x8 vpx_highbd_8_variance4x8_c
 
 unsigned int vpx_highbd_8_variance64x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_8_variance64x32 vpx_highbd_8_variance64x32_c
 
 unsigned int vpx_highbd_8_variance64x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_8_variance64x64 vpx_highbd_8_variance64x64_c
 
 unsigned int vpx_highbd_8_variance8x16_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_8_variance8x16 vpx_highbd_8_variance8x16_c
 
 unsigned int vpx_highbd_8_variance8x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance8x4 vpx_highbd_8_variance8x4_c
 
 unsigned int vpx_highbd_8_variance8x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance8x8 vpx_highbd_8_variance8x8_c
 
-unsigned int vpx_highbd_avg_4x4_c(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_4x4_c(const uint8_t* s8, int p);
 #define vpx_highbd_avg_4x4 vpx_highbd_avg_4x4_c
 
-unsigned int vpx_highbd_avg_8x8_c(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_8x8_c(const uint8_t* s8, int p);
 #define vpx_highbd_avg_8x8 vpx_highbd_avg_8x8_c
 
 void vpx_highbd_comp_avg_pred_c(uint16_t* comp_pred,
@@ -1746,7 +1746,7 @@
                             int y_step_q4,
                             int w,
                             int h,
-                            int bps);
+                            int bd);
 #define vpx_highbd_convolve8 vpx_highbd_convolve8_c
 
 void vpx_highbd_convolve8_avg_c(const uint16_t* src,
@@ -1760,7 +1760,7 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 #define vpx_highbd_convolve8_avg vpx_highbd_convolve8_avg_c
 
 void vpx_highbd_convolve8_avg_horiz_c(const uint16_t* src,
@@ -1774,7 +1774,7 @@
                                       int y_step_q4,
                                       int w,
                                       int h,
-                                      int bps);
+                                      int bd);
 #define vpx_highbd_convolve8_avg_horiz vpx_highbd_convolve8_avg_horiz_c
 
 void vpx_highbd_convolve8_avg_vert_c(const uint16_t* src,
@@ -1788,7 +1788,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 #define vpx_highbd_convolve8_avg_vert vpx_highbd_convolve8_avg_vert_c
 
 void vpx_highbd_convolve8_horiz_c(const uint16_t* src,
@@ -1802,7 +1802,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 #define vpx_highbd_convolve8_horiz vpx_highbd_convolve8_horiz_c
 
 void vpx_highbd_convolve8_vert_c(const uint16_t* src,
@@ -1816,7 +1816,7 @@
                                  int y_step_q4,
                                  int w,
                                  int h,
-                                 int bps);
+                                 int bd);
 #define vpx_highbd_convolve8_vert vpx_highbd_convolve8_vert_c
 
 void vpx_highbd_convolve_avg_c(const uint16_t* src,
@@ -1830,7 +1830,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 #define vpx_highbd_convolve_avg vpx_highbd_convolve_avg_c
 
 void vpx_highbd_convolve_copy_c(const uint16_t* src,
@@ -1844,284 +1844,284 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 #define vpx_highbd_convolve_copy vpx_highbd_convolve_copy_c
 
 void vpx_highbd_d117_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d117_predictor_16x16 vpx_highbd_d117_predictor_16x16_c
 
 void vpx_highbd_d117_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d117_predictor_32x32 vpx_highbd_d117_predictor_32x32_c
 
 void vpx_highbd_d117_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d117_predictor_4x4 vpx_highbd_d117_predictor_4x4_c
 
 void vpx_highbd_d117_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d117_predictor_8x8 vpx_highbd_d117_predictor_8x8_c
 
 void vpx_highbd_d135_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d135_predictor_16x16 vpx_highbd_d135_predictor_16x16_c
 
 void vpx_highbd_d135_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d135_predictor_32x32 vpx_highbd_d135_predictor_32x32_c
 
 void vpx_highbd_d135_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d135_predictor_4x4 vpx_highbd_d135_predictor_4x4_c
 
 void vpx_highbd_d135_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d135_predictor_8x8 vpx_highbd_d135_predictor_8x8_c
 
 void vpx_highbd_d153_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d153_predictor_16x16 vpx_highbd_d153_predictor_16x16_c
 
 void vpx_highbd_d153_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d153_predictor_32x32 vpx_highbd_d153_predictor_32x32_c
 
 void vpx_highbd_d153_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d153_predictor_4x4 vpx_highbd_d153_predictor_4x4_c
 
 void vpx_highbd_d153_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d153_predictor_8x8 vpx_highbd_d153_predictor_8x8_c
 
 void vpx_highbd_d207_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d207_predictor_16x16 vpx_highbd_d207_predictor_16x16_c
 
 void vpx_highbd_d207_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d207_predictor_32x32 vpx_highbd_d207_predictor_32x32_c
 
 void vpx_highbd_d207_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d207_predictor_4x4 vpx_highbd_d207_predictor_4x4_c
 
 void vpx_highbd_d207_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d207_predictor_8x8 vpx_highbd_d207_predictor_8x8_c
 
 void vpx_highbd_d45_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_d45_predictor_16x16 vpx_highbd_d45_predictor_16x16_c
 
 void vpx_highbd_d45_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_d45_predictor_32x32 vpx_highbd_d45_predictor_32x32_c
 
 void vpx_highbd_d45_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_d45_predictor_4x4 vpx_highbd_d45_predictor_4x4_c
 
 void vpx_highbd_d45_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_d45_predictor_8x8 vpx_highbd_d45_predictor_8x8_c
 
 void vpx_highbd_d63_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_d63_predictor_16x16 vpx_highbd_d63_predictor_16x16_c
 
 void vpx_highbd_d63_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_d63_predictor_32x32 vpx_highbd_d63_predictor_32x32_c
 
 void vpx_highbd_d63_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_d63_predictor_4x4 vpx_highbd_d63_predictor_4x4_c
 
 void vpx_highbd_d63_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_d63_predictor_8x8 vpx_highbd_d63_predictor_8x8_c
 
 void vpx_highbd_dc_128_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 #define vpx_highbd_dc_128_predictor_16x16 vpx_highbd_dc_128_predictor_16x16_c
 
 void vpx_highbd_dc_128_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 #define vpx_highbd_dc_128_predictor_32x32 vpx_highbd_dc_128_predictor_32x32_c
 
 void vpx_highbd_dc_128_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_dc_128_predictor_4x4 vpx_highbd_dc_128_predictor_4x4_c
 
 void vpx_highbd_dc_128_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_dc_128_predictor_8x8 vpx_highbd_dc_128_predictor_8x8_c
 
 void vpx_highbd_dc_left_predictor_16x16_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 #define vpx_highbd_dc_left_predictor_16x16 vpx_highbd_dc_left_predictor_16x16_c
 
 void vpx_highbd_dc_left_predictor_32x32_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 #define vpx_highbd_dc_left_predictor_32x32 vpx_highbd_dc_left_predictor_32x32_c
 
 void vpx_highbd_dc_left_predictor_4x4_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_dc_left_predictor_4x4 vpx_highbd_dc_left_predictor_4x4_c
 
 void vpx_highbd_dc_left_predictor_8x8_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_dc_left_predictor_8x8 vpx_highbd_dc_left_predictor_8x8_c
 
 void vpx_highbd_dc_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_dc_predictor_16x16 vpx_highbd_dc_predictor_16x16_c
 
 void vpx_highbd_dc_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_dc_predictor_32x32 vpx_highbd_dc_predictor_32x32_c
 
 void vpx_highbd_dc_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 #define vpx_highbd_dc_predictor_4x4 vpx_highbd_dc_predictor_4x4_c
 
 void vpx_highbd_dc_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 #define vpx_highbd_dc_predictor_8x8 vpx_highbd_dc_predictor_8x8_c
 
 void vpx_highbd_dc_top_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 #define vpx_highbd_dc_top_predictor_16x16 vpx_highbd_dc_top_predictor_16x16_c
 
 void vpx_highbd_dc_top_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 #define vpx_highbd_dc_top_predictor_32x32 vpx_highbd_dc_top_predictor_32x32_c
 
 void vpx_highbd_dc_top_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_dc_top_predictor_4x4 vpx_highbd_dc_top_predictor_4x4_c
 
 void vpx_highbd_dc_top_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
@@ -2164,33 +2164,48 @@
 #define vpx_highbd_fdct8x8_1 vpx_highbd_fdct8x8_1_c
 
 void vpx_highbd_h_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_h_predictor_16x16 vpx_highbd_h_predictor_16x16_c
 
 void vpx_highbd_h_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_h_predictor_32x32 vpx_highbd_h_predictor_32x32_c
 
 void vpx_highbd_h_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 #define vpx_highbd_h_predictor_4x4 vpx_highbd_h_predictor_4x4_c
 
 void vpx_highbd_h_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 #define vpx_highbd_h_predictor_8x8 vpx_highbd_h_predictor_8x8_c
 
+void vpx_highbd_hadamard_16x16_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+#define vpx_highbd_hadamard_16x16 vpx_highbd_hadamard_16x16_c
+
+void vpx_highbd_hadamard_32x32_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+#define vpx_highbd_hadamard_32x32 vpx_highbd_hadamard_32x32_c
+
+void vpx_highbd_hadamard_8x8_c(const int16_t* src_diff,
+                               ptrdiff_t src_stride,
+                               tran_low_t* coeff);
+#define vpx_highbd_hadamard_8x8 vpx_highbd_hadamard_8x8_c
+
 void vpx_highbd_idct16x16_10_add_c(const tran_low_t* input,
                                    uint16_t* dest,
                                    int stride,
@@ -2389,9 +2404,9 @@
                                       int bd);
 #define vpx_highbd_lpf_vertical_8_dual vpx_highbd_lpf_vertical_8_dual_c
 
-void vpx_highbd_minmax_8x8_c(const uint8_t* s,
+void vpx_highbd_minmax_8x8_c(const uint8_t* s8,
                              int p,
-                             const uint8_t* d,
+                             const uint8_t* d8,
                              int dp,
                              int* min,
                              int* max);
@@ -2442,7 +2457,7 @@
 
 void vpx_highbd_sad16x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 #define vpx_highbd_sad16x16x4d vpx_highbd_sad16x16x4d_c
@@ -2462,7 +2477,7 @@
 
 void vpx_highbd_sad16x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 #define vpx_highbd_sad16x32x4d vpx_highbd_sad16x32x4d_c
@@ -2482,7 +2497,7 @@
 
 void vpx_highbd_sad16x8x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 #define vpx_highbd_sad16x8x4d vpx_highbd_sad16x8x4d_c
@@ -2502,7 +2517,7 @@
 
 void vpx_highbd_sad32x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 #define vpx_highbd_sad32x16x4d vpx_highbd_sad32x16x4d_c
@@ -2522,7 +2537,7 @@
 
 void vpx_highbd_sad32x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 #define vpx_highbd_sad32x32x4d vpx_highbd_sad32x32x4d_c
@@ -2542,7 +2557,7 @@
 
 void vpx_highbd_sad32x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 #define vpx_highbd_sad32x64x4d vpx_highbd_sad32x64x4d_c
@@ -2562,7 +2577,7 @@
 
 void vpx_highbd_sad4x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 #define vpx_highbd_sad4x4x4d vpx_highbd_sad4x4x4d_c
@@ -2582,7 +2597,7 @@
 
 void vpx_highbd_sad4x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 #define vpx_highbd_sad4x8x4d vpx_highbd_sad4x8x4d_c
@@ -2602,7 +2617,7 @@
 
 void vpx_highbd_sad64x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 #define vpx_highbd_sad64x32x4d vpx_highbd_sad64x32x4d_c
@@ -2622,7 +2637,7 @@
 
 void vpx_highbd_sad64x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 #define vpx_highbd_sad64x64x4d vpx_highbd_sad64x64x4d_c
@@ -2642,7 +2657,7 @@
 
 void vpx_highbd_sad8x16x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 #define vpx_highbd_sad8x16x4d vpx_highbd_sad8x16x4d_c
@@ -2662,7 +2677,7 @@
 
 void vpx_highbd_sad8x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 #define vpx_highbd_sad8x4x4d vpx_highbd_sad8x4x4d_c
@@ -2682,73 +2697,76 @@
 
 void vpx_highbd_sad8x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 #define vpx_highbd_sad8x8x4d vpx_highbd_sad8x8x4d_c
 
+int vpx_highbd_satd_c(const tran_low_t* coeff, int length);
+#define vpx_highbd_satd vpx_highbd_satd_c
+
 void vpx_highbd_subtract_block_c(int rows,
                                  int cols,
                                  int16_t* diff_ptr,
                                  ptrdiff_t diff_stride,
-                                 const uint8_t* src_ptr,
+                                 const uint8_t* src8_ptr,
                                  ptrdiff_t src_stride,
-                                 const uint8_t* pred_ptr,
+                                 const uint8_t* pred8_ptr,
                                  ptrdiff_t pred_stride,
                                  int bd);
 #define vpx_highbd_subtract_block vpx_highbd_subtract_block_c
 
 void vpx_highbd_tm_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_tm_predictor_16x16 vpx_highbd_tm_predictor_16x16_c
 
 void vpx_highbd_tm_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_tm_predictor_32x32 vpx_highbd_tm_predictor_32x32_c
 
 void vpx_highbd_tm_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 #define vpx_highbd_tm_predictor_4x4 vpx_highbd_tm_predictor_4x4_c
 
 void vpx_highbd_tm_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 #define vpx_highbd_tm_predictor_8x8 vpx_highbd_tm_predictor_8x8_c
 
 void vpx_highbd_v_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_v_predictor_16x16 vpx_highbd_v_predictor_16x16_c
 
 void vpx_highbd_v_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_v_predictor_32x32 vpx_highbd_v_predictor_32x32_c
 
 void vpx_highbd_v_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 #define vpx_highbd_v_predictor_4x4 vpx_highbd_v_predictor_4x4_c
 
 void vpx_highbd_v_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
@@ -2910,7 +2928,7 @@
                                const uint8_t* thresh1);
 #define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_c
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
@@ -2933,30 +2951,30 @@
 #define vpx_minmax_8x8 vpx_minmax_8x8_c
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 #define vpx_mse16x16 vpx_mse16x16_c
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse16x8 vpx_mse16x8_c
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse8x16 vpx_mse8x16_c
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 #define vpx_mse8x8 vpx_mse8x8_c
 
@@ -3031,7 +3049,7 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad16x16x4d vpx_sad16x16x4d_c
@@ -3058,7 +3076,7 @@
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad16x32x4d vpx_sad16x32x4d_c
@@ -3085,7 +3103,7 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 #define vpx_sad16x8x4d vpx_sad16x8x4d_c
@@ -3112,7 +3130,7 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad32x16x4d vpx_sad32x16x4d_c
@@ -3132,11 +3150,18 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad32x32x4d vpx_sad32x32x4d_c
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad32x32x8 vpx_sad32x32x8_c
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -3152,7 +3177,7 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad32x64x4d vpx_sad32x64x4d_c
@@ -3179,7 +3204,7 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad4x4x4d vpx_sad4x4x4d_c
@@ -3206,7 +3231,7 @@
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad4x8x4d vpx_sad4x8x4d_c
@@ -3226,7 +3251,7 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad64x32x4d vpx_sad64x32x4d_c
@@ -3246,7 +3271,7 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad64x64x4d vpx_sad64x64x4d_c
@@ -3273,7 +3298,7 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 #define vpx_sad8x16x4d vpx_sad8x16x4d_c
@@ -3300,7 +3325,7 @@
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad8x4x4d vpx_sad8x4x4d_c
@@ -3327,7 +3352,7 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad8x8x4d vpx_sad8x8x4d_c
@@ -3421,9 +3446,9 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -3431,9 +3456,9 @@
 #define vpx_sub_pixel_avg_variance16x16 vpx_sub_pixel_avg_variance16x16_c
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -3441,9 +3466,9 @@
 #define vpx_sub_pixel_avg_variance16x32 vpx_sub_pixel_avg_variance16x32_c
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
@@ -3451,9 +3476,9 @@
 #define vpx_sub_pixel_avg_variance16x8 vpx_sub_pixel_avg_variance16x8_c
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -3461,9 +3486,9 @@
 #define vpx_sub_pixel_avg_variance32x16 vpx_sub_pixel_avg_variance32x16_c
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -3471,9 +3496,9 @@
 #define vpx_sub_pixel_avg_variance32x32 vpx_sub_pixel_avg_variance32x32_c
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -3481,9 +3506,9 @@
 #define vpx_sub_pixel_avg_variance32x64 vpx_sub_pixel_avg_variance32x64_c
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -3491,9 +3516,9 @@
 #define vpx_sub_pixel_avg_variance4x4 vpx_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -3501,9 +3526,9 @@
 #define vpx_sub_pixel_avg_variance4x8 vpx_sub_pixel_avg_variance4x8_c
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -3511,9 +3536,9 @@
 #define vpx_sub_pixel_avg_variance64x32 vpx_sub_pixel_avg_variance64x32_c
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -3521,9 +3546,9 @@
 #define vpx_sub_pixel_avg_variance64x64 vpx_sub_pixel_avg_variance64x64_c
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
@@ -3531,9 +3556,9 @@
 #define vpx_sub_pixel_avg_variance8x16 vpx_sub_pixel_avg_variance8x16_c
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -3541,9 +3566,9 @@
 #define vpx_sub_pixel_avg_variance8x4 vpx_sub_pixel_avg_variance8x4_c
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -3551,117 +3576,117 @@
 #define vpx_sub_pixel_avg_variance8x8 vpx_sub_pixel_avg_variance8x8_c
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance16x16 vpx_sub_pixel_variance16x16_c
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance16x32 vpx_sub_pixel_variance16x32_c
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 #define vpx_sub_pixel_variance16x8 vpx_sub_pixel_variance16x8_c
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance32x16 vpx_sub_pixel_variance32x16_c
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance32x32 vpx_sub_pixel_variance32x32_c
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance32x64 vpx_sub_pixel_variance32x64_c
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 #define vpx_sub_pixel_variance4x4 vpx_sub_pixel_variance4x4_c
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 #define vpx_sub_pixel_variance4x8 vpx_sub_pixel_variance4x8_c
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance64x32 vpx_sub_pixel_variance64x32_c
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance64x64 vpx_sub_pixel_variance64x64_c
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 #define vpx_sub_pixel_variance8x16 vpx_sub_pixel_variance8x16_c
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 #define vpx_sub_pixel_variance8x4 vpx_sub_pixel_variance8x4_c
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
@@ -3681,146 +3706,146 @@
 #define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_c
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_c
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_c
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_c
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_c
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_v_predictor_16x16 vpx_v_predictor_16x16_c
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_v_predictor_32x32 vpx_v_predictor_32x32_c
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_v_predictor_4x4 vpx_v_predictor_4x4_c
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_v_predictor_8x8 vpx_v_predictor_8x8_c
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance16x16 vpx_variance16x16_c
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance16x32 vpx_variance16x32_c
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 #define vpx_variance16x8 vpx_variance16x8_c
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance32x16 vpx_variance32x16_c
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance32x32 vpx_variance32x32_c
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance32x64 vpx_variance32x64_c
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance4x4 vpx_variance4x4_c
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance4x8 vpx_variance4x8_c
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance64x32 vpx_variance64x32_c
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance64x64 vpx_variance64x64_c
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 #define vpx_variance8x16 vpx_variance8x16_c
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance8x4 vpx_variance8x4_c
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance8x8 vpx_variance8x8_c
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vp8_rtcd.h
index 49fc245..c46bfe5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vp8_rtcd.h
@@ -27,303 +27,647 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-void vp8_bilinear_predict16x16_sse2(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-void vp8_bilinear_predict16x16_ssse3(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict16x16)(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
+                                 int dst_pitch);
+void vp8_bilinear_predict16x16_sse2(unsigned char* src_ptr,
+                                    int src_pixels_per_line,
+                                    int xoffset,
+                                    int yoffset,
+                                    unsigned char* dst_ptr,
+                                    int dst_pitch);
+void vp8_bilinear_predict16x16_ssse3(unsigned char* src_ptr,
+                                     int src_pixels_per_line,
+                                     int xoffset,
+                                     int yoffset,
+                                     unsigned char* dst_ptr,
+                                     int dst_pitch);
+RTCD_EXTERN void (*vp8_bilinear_predict16x16)(unsigned char* src_ptr,
+                                              int src_pixels_per_line,
+                                              int xoffset,
+                                              int yoffset,
+                                              unsigned char* dst_ptr,
+                                              int dst_pitch);
 
-void vp8_bilinear_predict4x4_c(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-void vp8_bilinear_predict4x4_mmx(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict4x4)(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict4x4_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_sse2
 
-void vp8_bilinear_predict8x4_c(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-void vp8_bilinear_predict8x4_mmx(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict8x4)(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x4_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_sse2
 
-void vp8_bilinear_predict8x8_c(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-void vp8_bilinear_predict8x8_sse2(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-void vp8_bilinear_predict8x8_ssse3(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict8x8)(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x8_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+void vp8_bilinear_predict8x8_ssse3(unsigned char* src_ptr,
+                                   int src_pixels_per_line,
+                                   int xoffset,
+                                   int yoffset,
+                                   unsigned char* dst_ptr,
+                                   int dst_pitch);
+RTCD_EXTERN void (*vp8_bilinear_predict8x8)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
+                                            int dst_pitch);
 
-void vp8_blend_b_c(unsigned char *y, unsigned char *u, unsigned char *v, int y1, int u1, int v1, int alpha, int stride);
+void vp8_blend_b_c(unsigned char* y,
+                   unsigned char* u,
+                   unsigned char* v,
+                   int y_1,
+                   int u_1,
+                   int v_1,
+                   int alpha,
+                   int stride);
 #define vp8_blend_b vp8_blend_b_c
 
-void vp8_blend_mb_inner_c(unsigned char *y, unsigned char *u, unsigned char *v, int y1, int u1, int v1, int alpha, int stride);
+void vp8_blend_mb_inner_c(unsigned char* y,
+                          unsigned char* u,
+                          unsigned char* v,
+                          int y_1,
+                          int u_1,
+                          int v_1,
+                          int alpha,
+                          int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
 
-void vp8_blend_mb_outer_c(unsigned char *y, unsigned char *u, unsigned char *v, int y1, int u1, int v1, int alpha, int stride);
+void vp8_blend_mb_outer_c(unsigned char* y,
+                          unsigned char* u,
+                          unsigned char* v,
+                          int y_1,
+                          int u_1,
+                          int v_1,
+                          int alpha,
+                          int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
 
-int vp8_block_error_c(short *coeff, short *dqcoeff);
-int vp8_block_error_sse2(short *coeff, short *dqcoeff);
-RTCD_EXTERN int (*vp8_block_error)(short *coeff, short *dqcoeff);
+int vp8_block_error_c(short* coeff, short* dqcoeff);
+int vp8_block_error_sse2(short* coeff, short* dqcoeff);
+#define vp8_block_error vp8_block_error_sse2
 
-void vp8_copy32xn_c(const unsigned char *src_ptr, int source_stride, unsigned char *dst_ptr, int dst_stride, int n);
-void vp8_copy32xn_sse2(const unsigned char *src_ptr, int source_stride, unsigned char *dst_ptr, int dst_stride, int n);
-void vp8_copy32xn_sse3(const unsigned char *src_ptr, int source_stride, unsigned char *dst_ptr, int dst_stride, int n);
-RTCD_EXTERN void (*vp8_copy32xn)(const unsigned char *src_ptr, int source_stride, unsigned char *dst_ptr, int dst_stride, int n);
+void vp8_copy32xn_c(const unsigned char* src_ptr,
+                    int src_stride,
+                    unsigned char* dst_ptr,
+                    int dst_stride,
+                    int height);
+void vp8_copy32xn_sse2(const unsigned char* src_ptr,
+                       int src_stride,
+                       unsigned char* dst_ptr,
+                       int dst_stride,
+                       int height);
+void vp8_copy32xn_sse3(const unsigned char* src_ptr,
+                       int src_stride,
+                       unsigned char* dst_ptr,
+                       int dst_stride,
+                       int height);
+RTCD_EXTERN void (*vp8_copy32xn)(const unsigned char* src_ptr,
+                                 int src_stride,
+                                 unsigned char* dst_ptr,
+                                 int dst_stride,
+                                 int height);
 
-void vp8_copy_mem16x16_c(unsigned char *src, int src_pitch, unsigned char *dst, int dst_pitch);
-void vp8_copy_mem16x16_sse2(unsigned char *src, int src_pitch, unsigned char *dst, int dst_pitch);
-RTCD_EXTERN void (*vp8_copy_mem16x16)(unsigned char *src, int src_pitch, unsigned char *dst, int dst_pitch);
+void vp8_copy_mem16x16_c(unsigned char* src,
+                         int src_stride,
+                         unsigned char* dst,
+                         int dst_stride);
+void vp8_copy_mem16x16_sse2(unsigned char* src,
+                            int src_stride,
+                            unsigned char* dst,
+                            int dst_stride);
+#define vp8_copy_mem16x16 vp8_copy_mem16x16_sse2
 
-void vp8_copy_mem8x4_c(unsigned char *src, int src_pitch, unsigned char *dst, int dst_pitch);
-void vp8_copy_mem8x4_mmx(unsigned char *src, int src_pitch, unsigned char *dst, int dst_pitch);
-RTCD_EXTERN void (*vp8_copy_mem8x4)(unsigned char *src, int src_pitch, unsigned char *dst, int dst_pitch);
+void vp8_copy_mem8x4_c(unsigned char* src,
+                       int src_stride,
+                       unsigned char* dst,
+                       int dst_stride);
+void vp8_copy_mem8x4_mmx(unsigned char* src,
+                         int src_stride,
+                         unsigned char* dst,
+                         int dst_stride);
+#define vp8_copy_mem8x4 vp8_copy_mem8x4_mmx
 
-void vp8_copy_mem8x8_c(unsigned char *src, int src_pitch, unsigned char *dst, int dst_pitch);
-void vp8_copy_mem8x8_mmx(unsigned char *src, int src_pitch, unsigned char *dst, int dst_pitch);
-RTCD_EXTERN void (*vp8_copy_mem8x8)(unsigned char *src, int src_pitch, unsigned char *dst, int dst_pitch);
+void vp8_copy_mem8x8_c(unsigned char* src,
+                       int src_stride,
+                       unsigned char* dst,
+                       int dst_stride);
+void vp8_copy_mem8x8_mmx(unsigned char* src,
+                         int src_stride,
+                         unsigned char* dst,
+                         int dst_stride);
+#define vp8_copy_mem8x8 vp8_copy_mem8x8_mmx
 
-void vp8_dc_only_idct_add_c(short input, unsigned char *pred, int pred_stride, unsigned char *dst, int dst_stride);
-void vp8_dc_only_idct_add_mmx(short input, unsigned char *pred, int pred_stride, unsigned char *dst, int dst_stride);
-RTCD_EXTERN void (*vp8_dc_only_idct_add)(short input, unsigned char *pred, int pred_stride, unsigned char *dst, int dst_stride);
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
+                            int dst_stride);
+void vp8_dc_only_idct_add_mmx(short input_dc,
+                              unsigned char* pred_ptr,
+                              int pred_stride,
+                              unsigned char* dst_ptr,
+                              int dst_stride);
+#define vp8_dc_only_idct_add vp8_dc_only_idct_add_mmx
 
-int vp8_denoiser_filter_c(unsigned char *mc_running_avg_y, int mc_avg_y_stride, unsigned char *running_avg_y, int avg_y_stride, unsigned char *sig, int sig_stride, unsigned int motion_magnitude, int increase_denoising);
-int vp8_denoiser_filter_sse2(unsigned char *mc_running_avg_y, int mc_avg_y_stride, unsigned char *running_avg_y, int avg_y_stride, unsigned char *sig, int sig_stride, unsigned int motion_magnitude, int increase_denoising);
-RTCD_EXTERN int (*vp8_denoiser_filter)(unsigned char *mc_running_avg_y, int mc_avg_y_stride, unsigned char *running_avg_y, int avg_y_stride, unsigned char *sig, int sig_stride, unsigned int motion_magnitude, int increase_denoising);
+int vp8_denoiser_filter_c(unsigned char* mc_running_avg_y,
+                          int mc_avg_y_stride,
+                          unsigned char* running_avg_y,
+                          int avg_y_stride,
+                          unsigned char* sig,
+                          int sig_stride,
+                          unsigned int motion_magnitude,
+                          int increase_denoising);
+int vp8_denoiser_filter_sse2(unsigned char* mc_running_avg_y,
+                             int mc_avg_y_stride,
+                             unsigned char* running_avg_y,
+                             int avg_y_stride,
+                             unsigned char* sig,
+                             int sig_stride,
+                             unsigned int motion_magnitude,
+                             int increase_denoising);
+#define vp8_denoiser_filter vp8_denoiser_filter_sse2
 
-int vp8_denoiser_filter_uv_c(unsigned char *mc_running_avg, int mc_avg_stride, unsigned char *running_avg, int avg_stride, unsigned char *sig, int sig_stride, unsigned int motion_magnitude, int increase_denoising);
-int vp8_denoiser_filter_uv_sse2(unsigned char *mc_running_avg, int mc_avg_stride, unsigned char *running_avg, int avg_stride, unsigned char *sig, int sig_stride, unsigned int motion_magnitude, int increase_denoising);
-RTCD_EXTERN int (*vp8_denoiser_filter_uv)(unsigned char *mc_running_avg, int mc_avg_stride, unsigned char *running_avg, int avg_stride, unsigned char *sig, int sig_stride, unsigned int motion_magnitude, int increase_denoising);
+int vp8_denoiser_filter_uv_c(unsigned char* mc_running_avg,
+                             int mc_avg_stride,
+                             unsigned char* running_avg,
+                             int avg_stride,
+                             unsigned char* sig,
+                             int sig_stride,
+                             unsigned int motion_magnitude,
+                             int increase_denoising);
+int vp8_denoiser_filter_uv_sse2(unsigned char* mc_running_avg,
+                                int mc_avg_stride,
+                                unsigned char* running_avg,
+                                int avg_stride,
+                                unsigned char* sig,
+                                int sig_stride,
+                                unsigned int motion_magnitude,
+                                int increase_denoising);
+#define vp8_denoiser_filter_uv vp8_denoiser_filter_uv_sse2
 
-void vp8_dequant_idct_add_c(short *input, short *dq, unsigned char *output, int stride);
-void vp8_dequant_idct_add_mmx(short *input, short *dq, unsigned char *output, int stride);
-RTCD_EXTERN void (*vp8_dequant_idct_add)(short *input, short *dq, unsigned char *output, int stride);
+void vp8_dequant_idct_add_c(short* input,
+                            short* dq,
+                            unsigned char* dest,
+                            int stride);
+void vp8_dequant_idct_add_mmx(short* input,
+                              short* dq,
+                              unsigned char* dest,
+                              int stride);
+#define vp8_dequant_idct_add vp8_dequant_idct_add_mmx
 
-void vp8_dequant_idct_add_uv_block_c(short *q, short *dq, unsigned char *dst_u, unsigned char *dst_v, int stride, char *eobs);
-void vp8_dequant_idct_add_uv_block_sse2(short *q, short *dq, unsigned char *dst_u, unsigned char *dst_v, int stride, char *eobs);
-RTCD_EXTERN void (*vp8_dequant_idct_add_uv_block)(short *q, short *dq, unsigned char *dst_u, unsigned char *dst_v, int stride, char *eobs);
+void vp8_dequant_idct_add_uv_block_c(short* q,
+                                     short* dq,
+                                     unsigned char* dst_u,
+                                     unsigned char* dst_v,
+                                     int stride,
+                                     char* eobs);
+void vp8_dequant_idct_add_uv_block_sse2(short* q,
+                                        short* dq,
+                                        unsigned char* dst_u,
+                                        unsigned char* dst_v,
+                                        int stride,
+                                        char* eobs);
+#define vp8_dequant_idct_add_uv_block vp8_dequant_idct_add_uv_block_sse2
 
-void vp8_dequant_idct_add_y_block_c(short *q, short *dq, unsigned char *dst, int stride, char *eobs);
-void vp8_dequant_idct_add_y_block_sse2(short *q, short *dq, unsigned char *dst, int stride, char *eobs);
-RTCD_EXTERN void (*vp8_dequant_idct_add_y_block)(short *q, short *dq, unsigned char *dst, int stride, char *eobs);
+void vp8_dequant_idct_add_y_block_c(short* q,
+                                    short* dq,
+                                    unsigned char* dst,
+                                    int stride,
+                                    char* eobs);
+void vp8_dequant_idct_add_y_block_sse2(short* q,
+                                       short* dq,
+                                       unsigned char* dst,
+                                       int stride,
+                                       char* eobs);
+#define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_sse2
 
-void vp8_dequantize_b_c(struct blockd*, short *dqc);
-void vp8_dequantize_b_mmx(struct blockd*, short *dqc);
-RTCD_EXTERN void (*vp8_dequantize_b)(struct blockd*, short *dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
+void vp8_dequantize_b_mmx(struct blockd*, short* DQC);
+#define vp8_dequantize_b vp8_dequantize_b_mmx
 
-int vp8_diamond_search_sad_c(struct macroblock *x, struct block *b, struct blockd *d, union int_mv *ref_mv, union int_mv *best_mv, int search_param, int sad_per_bit, int *num00, struct variance_vtable *fn_ptr, int *mvcost[2], union int_mv *center_mv);
-int vp8_diamond_search_sadx4(struct macroblock *x, struct block *b, struct blockd *d, union int_mv *ref_mv, union int_mv *best_mv, int search_param, int sad_per_bit, int *num00, struct variance_vtable *fn_ptr, int *mvcost[2], union int_mv *center_mv);
-RTCD_EXTERN int (*vp8_diamond_search_sad)(struct macroblock *x, struct block *b, struct blockd *d, union int_mv *ref_mv, union int_mv *best_mv, int search_param, int sad_per_bit, int *num00, struct variance_vtable *fn_ptr, int *mvcost[2], union int_mv *center_mv);
+int vp8_diamond_search_sad_c(struct macroblock* x,
+                             struct block* b,
+                             struct blockd* d,
+                             union int_mv* ref_mv,
+                             union int_mv* best_mv,
+                             int search_param,
+                             int sad_per_bit,
+                             int* num00,
+                             struct variance_vtable* fn_ptr,
+                             int* mvcost[2],
+                             union int_mv* center_mv);
+int vp8_diamond_search_sadx4(struct macroblock* x,
+                             struct block* b,
+                             struct blockd* d,
+                             union int_mv* ref_mv,
+                             union int_mv* best_mv,
+                             int search_param,
+                             int sad_per_bit,
+                             int* num00,
+                             struct variance_vtable* fn_ptr,
+                             int* mvcost[2],
+                             union int_mv* center_mv);
+#define vp8_diamond_search_sad vp8_diamond_search_sadx4
 
-void vp8_fast_quantize_b_c(struct block *, struct blockd *);
-void vp8_fast_quantize_b_sse2(struct block *, struct blockd *);
-void vp8_fast_quantize_b_ssse3(struct block *, struct blockd *);
-RTCD_EXTERN void (*vp8_fast_quantize_b)(struct block *, struct blockd *);
+void vp8_fast_quantize_b_c(struct block*, struct blockd*);
+void vp8_fast_quantize_b_sse2(struct block*, struct blockd*);
+void vp8_fast_quantize_b_ssse3(struct block*, struct blockd*);
+RTCD_EXTERN void (*vp8_fast_quantize_b)(struct block*, struct blockd*);
 
-void vp8_filter_by_weight16x16_c(unsigned char *src, int src_stride, unsigned char *dst, int dst_stride, int src_weight);
-void vp8_filter_by_weight16x16_sse2(unsigned char *src, int src_stride, unsigned char *dst, int dst_stride, int src_weight);
-RTCD_EXTERN void (*vp8_filter_by_weight16x16)(unsigned char *src, int src_stride, unsigned char *dst, int dst_stride, int src_weight);
+void vp8_filter_by_weight16x16_c(unsigned char* src,
+                                 int src_stride,
+                                 unsigned char* dst,
+                                 int dst_stride,
+                                 int src_weight);
+void vp8_filter_by_weight16x16_sse2(unsigned char* src,
+                                    int src_stride,
+                                    unsigned char* dst,
+                                    int dst_stride,
+                                    int src_weight);
+#define vp8_filter_by_weight16x16 vp8_filter_by_weight16x16_sse2
 
-void vp8_filter_by_weight4x4_c(unsigned char *src, int src_stride, unsigned char *dst, int dst_stride, int src_weight);
+void vp8_filter_by_weight4x4_c(unsigned char* src,
+                               int src_stride,
+                               unsigned char* dst,
+                               int dst_stride,
+                               int src_weight);
 #define vp8_filter_by_weight4x4 vp8_filter_by_weight4x4_c
 
-void vp8_filter_by_weight8x8_c(unsigned char *src, int src_stride, unsigned char *dst, int dst_stride, int src_weight);
-void vp8_filter_by_weight8x8_sse2(unsigned char *src, int src_stride, unsigned char *dst, int dst_stride, int src_weight);
-RTCD_EXTERN void (*vp8_filter_by_weight8x8)(unsigned char *src, int src_stride, unsigned char *dst, int dst_stride, int src_weight);
+void vp8_filter_by_weight8x8_c(unsigned char* src,
+                               int src_stride,
+                               unsigned char* dst,
+                               int dst_stride,
+                               int src_weight);
+void vp8_filter_by_weight8x8_sse2(unsigned char* src,
+                                  int src_stride,
+                                  unsigned char* dst,
+                                  int dst_stride,
+                                  int src_weight);
+#define vp8_filter_by_weight8x8 vp8_filter_by_weight8x8_sse2
 
-int vp8_full_search_sad_c(struct macroblock *x, struct block *b, struct blockd *d, union int_mv *ref_mv, int sad_per_bit, int distance, struct variance_vtable *fn_ptr, int *mvcost[2], union int_mv *center_mv);
-int vp8_full_search_sadx3(struct macroblock *x, struct block *b, struct blockd *d, union int_mv *ref_mv, int sad_per_bit, int distance, struct variance_vtable *fn_ptr, int *mvcost[2], union int_mv *center_mv);
-int vp8_full_search_sadx8(struct macroblock *x, struct block *b, struct blockd *d, union int_mv *ref_mv, int sad_per_bit, int distance, struct variance_vtable *fn_ptr, int *mvcost[2], union int_mv *center_mv);
-RTCD_EXTERN int (*vp8_full_search_sad)(struct macroblock *x, struct block *b, struct blockd *d, union int_mv *ref_mv, int sad_per_bit, int distance, struct variance_vtable *fn_ptr, int *mvcost[2], union int_mv *center_mv);
+int vp8_full_search_sad_c(struct macroblock* x,
+                          struct block* b,
+                          struct blockd* d,
+                          union int_mv* ref_mv,
+                          int sad_per_bit,
+                          int distance,
+                          struct variance_vtable* fn_ptr,
+                          int* mvcost[2],
+                          union int_mv* center_mv);
+int vp8_full_search_sadx3(struct macroblock* x,
+                          struct block* b,
+                          struct blockd* d,
+                          union int_mv* ref_mv,
+                          int sad_per_bit,
+                          int distance,
+                          struct variance_vtable* fn_ptr,
+                          int* mvcost[2],
+                          union int_mv* center_mv);
+int vp8_full_search_sadx8(struct macroblock* x,
+                          struct block* b,
+                          struct blockd* d,
+                          union int_mv* ref_mv,
+                          int sad_per_bit,
+                          int distance,
+                          struct variance_vtable* fn_ptr,
+                          int* mvcost[2],
+                          union int_mv* center_mv);
+RTCD_EXTERN int (*vp8_full_search_sad)(struct macroblock* x,
+                                       struct block* b,
+                                       struct blockd* d,
+                                       union int_mv* ref_mv,
+                                       int sad_per_bit,
+                                       int distance,
+                                       struct variance_vtable* fn_ptr,
+                                       int* mvcost[2],
+                                       union int_mv* center_mv);
 
-void vp8_loop_filter_bh_c(unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi);
-void vp8_loop_filter_bh_sse2(unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi);
-RTCD_EXTERN void (*vp8_loop_filter_bh)(unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi);
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
+                          int uv_stride,
+                          struct loop_filter_info* lfi);
+void vp8_loop_filter_bh_sse2(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
+                             int uv_stride,
+                             struct loop_filter_info* lfi);
+#define vp8_loop_filter_bh vp8_loop_filter_bh_sse2
 
-void vp8_loop_filter_bv_c(unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi);
-void vp8_loop_filter_bv_sse2(unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi);
-RTCD_EXTERN void (*vp8_loop_filter_bv)(unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi);
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
+                          int uv_stride,
+                          struct loop_filter_info* lfi);
+void vp8_loop_filter_bv_sse2(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
+                             int uv_stride,
+                             struct loop_filter_info* lfi);
+#define vp8_loop_filter_bv vp8_loop_filter_bv_sse2
 
-void vp8_loop_filter_mbh_c(unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi);
-void vp8_loop_filter_mbh_sse2(unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi);
-RTCD_EXTERN void (*vp8_loop_filter_mbh)(unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi);
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
+                           int uv_stride,
+                           struct loop_filter_info* lfi);
+void vp8_loop_filter_mbh_sse2(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
+                              int uv_stride,
+                              struct loop_filter_info* lfi);
+#define vp8_loop_filter_mbh vp8_loop_filter_mbh_sse2
 
-void vp8_loop_filter_mbv_c(unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi);
-void vp8_loop_filter_mbv_sse2(unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi);
-RTCD_EXTERN void (*vp8_loop_filter_mbv)(unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi);
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
+                           int uv_stride,
+                           struct loop_filter_info* lfi);
+void vp8_loop_filter_mbv_sse2(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
+                              int uv_stride,
+                              struct loop_filter_info* lfi);
+#define vp8_loop_filter_mbv vp8_loop_filter_mbv_sse2
 
-void vp8_loop_filter_bhs_c(unsigned char *y, int ystride, const unsigned char *blimit);
-void vp8_loop_filter_bhs_sse2(unsigned char *y, int ystride, const unsigned char *blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_bh)(unsigned char *y, int ystride, const unsigned char *blimit);
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
+                           const unsigned char* blimit);
+void vp8_loop_filter_bhs_sse2(unsigned char* y_ptr,
+                              int y_stride,
+                              const unsigned char* blimit);
+#define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_sse2
 
-void vp8_loop_filter_bvs_c(unsigned char *y, int ystride, const unsigned char *blimit);
-void vp8_loop_filter_bvs_sse2(unsigned char *y, int ystride, const unsigned char *blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_bv)(unsigned char *y, int ystride, const unsigned char *blimit);
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
+                           const unsigned char* blimit);
+void vp8_loop_filter_bvs_sse2(unsigned char* y_ptr,
+                              int y_stride,
+                              const unsigned char* blimit);
+#define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_sse2
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char *y, int ystride, const unsigned char *blimit);
-void vp8_loop_filter_simple_horizontal_edge_sse2(unsigned char *y, int ystride, const unsigned char *blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_mbh)(unsigned char *y, int ystride, const unsigned char *blimit);
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
+                                              const unsigned char* blimit);
+void vp8_loop_filter_simple_horizontal_edge_sse2(unsigned char* y_ptr,
+                                                 int y_stride,
+                                                 const unsigned char* blimit);
+#define vp8_loop_filter_simple_mbh vp8_loop_filter_simple_horizontal_edge_sse2
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char *y, int ystride, const unsigned char *blimit);
-void vp8_loop_filter_simple_vertical_edge_sse2(unsigned char *y, int ystride, const unsigned char *blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_mbv)(unsigned char *y, int ystride, const unsigned char *blimit);
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
+                                            const unsigned char* blimit);
+void vp8_loop_filter_simple_vertical_edge_sse2(unsigned char* y_ptr,
+                                               int y_stride,
+                                               const unsigned char* blimit);
+#define vp8_loop_filter_simple_mbv vp8_loop_filter_simple_vertical_edge_sse2
 
-int vp8_mbblock_error_c(struct macroblock *mb, int dc);
-int vp8_mbblock_error_sse2(struct macroblock *mb, int dc);
-RTCD_EXTERN int (*vp8_mbblock_error)(struct macroblock *mb, int dc);
+int vp8_mbblock_error_c(struct macroblock* mb, int dc);
+int vp8_mbblock_error_sse2(struct macroblock* mb, int dc);
+#define vp8_mbblock_error vp8_mbblock_error_sse2
 
-int vp8_mbuverror_c(struct macroblock *mb);
-int vp8_mbuverror_sse2(struct macroblock *mb);
-RTCD_EXTERN int (*vp8_mbuverror)(struct macroblock *mb);
+int vp8_mbuverror_c(struct macroblock* mb);
+int vp8_mbuverror_sse2(struct macroblock* mb);
+#define vp8_mbuverror vp8_mbuverror_sse2
 
-int vp8_refining_search_sad_c(struct macroblock *x, struct block *b, struct blockd *d, union int_mv *ref_mv, int sad_per_bit, int distance, struct variance_vtable *fn_ptr, int *mvcost[2], union int_mv *center_mv);
-int vp8_refining_search_sadx4(struct macroblock *x, struct block *b, struct blockd *d, union int_mv *ref_mv, int sad_per_bit, int distance, struct variance_vtable *fn_ptr, int *mvcost[2], union int_mv *center_mv);
-RTCD_EXTERN int (*vp8_refining_search_sad)(struct macroblock *x, struct block *b, struct blockd *d, union int_mv *ref_mv, int sad_per_bit, int distance, struct variance_vtable *fn_ptr, int *mvcost[2], union int_mv *center_mv);
+int vp8_refining_search_sad_c(struct macroblock* x,
+                              struct block* b,
+                              struct blockd* d,
+                              union int_mv* ref_mv,
+                              int error_per_bit,
+                              int search_range,
+                              struct variance_vtable* fn_ptr,
+                              int* mvcost[2],
+                              union int_mv* center_mv);
+int vp8_refining_search_sadx4(struct macroblock* x,
+                              struct block* b,
+                              struct blockd* d,
+                              union int_mv* ref_mv,
+                              int error_per_bit,
+                              int search_range,
+                              struct variance_vtable* fn_ptr,
+                              int* mvcost[2],
+                              union int_mv* center_mv);
+#define vp8_refining_search_sad vp8_refining_search_sadx4
 
-void vp8_regular_quantize_b_c(struct block *, struct blockd *);
-void vp8_regular_quantize_b_sse2(struct block *, struct blockd *);
-void vp8_regular_quantize_b_sse4_1(struct block *, struct blockd *);
-RTCD_EXTERN void (*vp8_regular_quantize_b)(struct block *, struct blockd *);
+void vp8_regular_quantize_b_c(struct block*, struct blockd*);
+void vp8_regular_quantize_b_sse2(struct block*, struct blockd*);
+void vp8_regular_quantize_b_sse4_1(struct block*, struct blockd*);
+RTCD_EXTERN void (*vp8_regular_quantize_b)(struct block*, struct blockd*);
 
-void vp8_short_fdct4x4_c(short *input, short *output, int pitch);
-void vp8_short_fdct4x4_sse2(short *input, short *output, int pitch);
-RTCD_EXTERN void (*vp8_short_fdct4x4)(short *input, short *output, int pitch);
+void vp8_short_fdct4x4_c(short* input, short* output, int pitch);
+void vp8_short_fdct4x4_sse2(short* input, short* output, int pitch);
+#define vp8_short_fdct4x4 vp8_short_fdct4x4_sse2
 
-void vp8_short_fdct8x4_c(short *input, short *output, int pitch);
-void vp8_short_fdct8x4_sse2(short *input, short *output, int pitch);
-RTCD_EXTERN void (*vp8_short_fdct8x4)(short *input, short *output, int pitch);
+void vp8_short_fdct8x4_c(short* input, short* output, int pitch);
+void vp8_short_fdct8x4_sse2(short* input, short* output, int pitch);
+#define vp8_short_fdct8x4 vp8_short_fdct8x4_sse2
 
-void vp8_short_idct4x4llm_c(short *input, unsigned char *pred, int pitch, unsigned char *dst, int dst_stride);
-void vp8_short_idct4x4llm_mmx(short *input, unsigned char *pred, int pitch, unsigned char *dst, int dst_stride);
-RTCD_EXTERN void (*vp8_short_idct4x4llm)(short *input, unsigned char *pred, int pitch, unsigned char *dst, int dst_stride);
+void vp8_short_idct4x4llm_c(short* input,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
+                            int dst_stride);
+void vp8_short_idct4x4llm_mmx(short* input,
+                              unsigned char* pred_ptr,
+                              int pred_stride,
+                              unsigned char* dst_ptr,
+                              int dst_stride);
+#define vp8_short_idct4x4llm vp8_short_idct4x4llm_mmx
 
-void vp8_short_inv_walsh4x4_c(short *input, short *output);
-void vp8_short_inv_walsh4x4_sse2(short *input, short *output);
-RTCD_EXTERN void (*vp8_short_inv_walsh4x4)(short *input, short *output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
+void vp8_short_inv_walsh4x4_sse2(short* input, short* mb_dqcoeff);
+#define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_sse2
 
-void vp8_short_inv_walsh4x4_1_c(short *input, short *output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
-void vp8_short_walsh4x4_c(short *input, short *output, int pitch);
-void vp8_short_walsh4x4_sse2(short *input, short *output, int pitch);
-RTCD_EXTERN void (*vp8_short_walsh4x4)(short *input, short *output, int pitch);
+void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
+void vp8_short_walsh4x4_sse2(short* input, short* output, int pitch);
+#define vp8_short_walsh4x4 vp8_short_walsh4x4_sse2
 
-void vp8_sixtap_predict16x16_c(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-void vp8_sixtap_predict16x16_sse2(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-void vp8_sixtap_predict16x16_ssse3(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict16x16)(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_sixtap_predict16x16_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+void vp8_sixtap_predict16x16_ssse3(unsigned char* src_ptr,
+                                   int src_pixels_per_line,
+                                   int xoffset,
+                                   int yoffset,
+                                   unsigned char* dst_ptr,
+                                   int dst_pitch);
+RTCD_EXTERN void (*vp8_sixtap_predict16x16)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
+                                            int dst_pitch);
 
-void vp8_sixtap_predict4x4_c(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-void vp8_sixtap_predict4x4_mmx(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-void vp8_sixtap_predict4x4_ssse3(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict4x4)(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
+                             int dst_pitch);
+void vp8_sixtap_predict4x4_mmx(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_sixtap_predict4x4_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
+                                 int dst_pitch);
+RTCD_EXTERN void (*vp8_sixtap_predict4x4)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
+                                          int dst_pitch);
 
-void vp8_sixtap_predict8x4_c(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-void vp8_sixtap_predict8x4_sse2(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-void vp8_sixtap_predict8x4_ssse3(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict8x4)(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
+                             int dst_pitch);
+void vp8_sixtap_predict8x4_sse2(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
+                                int dst_pitch);
+void vp8_sixtap_predict8x4_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
+                                 int dst_pitch);
+RTCD_EXTERN void (*vp8_sixtap_predict8x4)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
+                                          int dst_pitch);
 
-void vp8_sixtap_predict8x8_c(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-void vp8_sixtap_predict8x8_sse2(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-void vp8_sixtap_predict8x8_ssse3(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict8x8)(unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch);
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
+                             int dst_pitch);
+void vp8_sixtap_predict8x8_sse2(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
+                                int dst_pitch);
+void vp8_sixtap_predict8x8_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
+                                 int dst_pitch);
+RTCD_EXTERN void (*vp8_sixtap_predict8x8)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
+                                          int dst_pitch);
 
 void vp8_rtcd(void);
 
 #ifdef RTCD_C
 #include "vpx_ports/x86.h"
-static void setup_rtcd_internal(void)
-{
-    int flags = x86_simd_caps();
+static void setup_rtcd_internal(void) {
+  int flags = x86_simd_caps();
 
-    (void)flags;
+  (void)flags;
 
-    vp8_bilinear_predict16x16 = vp8_bilinear_predict16x16_c;
-    if (flags & HAS_SSE2) vp8_bilinear_predict16x16 = vp8_bilinear_predict16x16_sse2;
-    if (flags & HAS_SSSE3) vp8_bilinear_predict16x16 = vp8_bilinear_predict16x16_ssse3;
-    vp8_bilinear_predict4x4 = vp8_bilinear_predict4x4_c;
-    if (flags & HAS_MMX) vp8_bilinear_predict4x4 = vp8_bilinear_predict4x4_mmx;
-    vp8_bilinear_predict8x4 = vp8_bilinear_predict8x4_c;
-    if (flags & HAS_MMX) vp8_bilinear_predict8x4 = vp8_bilinear_predict8x4_mmx;
-    vp8_bilinear_predict8x8 = vp8_bilinear_predict8x8_c;
-    if (flags & HAS_SSE2) vp8_bilinear_predict8x8 = vp8_bilinear_predict8x8_sse2;
-    if (flags & HAS_SSSE3) vp8_bilinear_predict8x8 = vp8_bilinear_predict8x8_ssse3;
-    vp8_block_error = vp8_block_error_c;
-    if (flags & HAS_SSE2) vp8_block_error = vp8_block_error_sse2;
-    vp8_copy32xn = vp8_copy32xn_c;
-    if (flags & HAS_SSE2) vp8_copy32xn = vp8_copy32xn_sse2;
-    if (flags & HAS_SSE3) vp8_copy32xn = vp8_copy32xn_sse3;
-    vp8_copy_mem16x16 = vp8_copy_mem16x16_c;
-    if (flags & HAS_SSE2) vp8_copy_mem16x16 = vp8_copy_mem16x16_sse2;
-    vp8_copy_mem8x4 = vp8_copy_mem8x4_c;
-    if (flags & HAS_MMX) vp8_copy_mem8x4 = vp8_copy_mem8x4_mmx;
-    vp8_copy_mem8x8 = vp8_copy_mem8x8_c;
-    if (flags & HAS_MMX) vp8_copy_mem8x8 = vp8_copy_mem8x8_mmx;
-    vp8_dc_only_idct_add = vp8_dc_only_idct_add_c;
-    if (flags & HAS_MMX) vp8_dc_only_idct_add = vp8_dc_only_idct_add_mmx;
-    vp8_denoiser_filter = vp8_denoiser_filter_c;
-    if (flags & HAS_SSE2) vp8_denoiser_filter = vp8_denoiser_filter_sse2;
-    vp8_denoiser_filter_uv = vp8_denoiser_filter_uv_c;
-    if (flags & HAS_SSE2) vp8_denoiser_filter_uv = vp8_denoiser_filter_uv_sse2;
-    vp8_dequant_idct_add = vp8_dequant_idct_add_c;
-    if (flags & HAS_MMX) vp8_dequant_idct_add = vp8_dequant_idct_add_mmx;
-    vp8_dequant_idct_add_uv_block = vp8_dequant_idct_add_uv_block_c;
-    if (flags & HAS_SSE2) vp8_dequant_idct_add_uv_block = vp8_dequant_idct_add_uv_block_sse2;
-    vp8_dequant_idct_add_y_block = vp8_dequant_idct_add_y_block_c;
-    if (flags & HAS_SSE2) vp8_dequant_idct_add_y_block = vp8_dequant_idct_add_y_block_sse2;
-    vp8_dequantize_b = vp8_dequantize_b_c;
-    if (flags & HAS_MMX) vp8_dequantize_b = vp8_dequantize_b_mmx;
-    vp8_diamond_search_sad = vp8_diamond_search_sad_c;
-    if (flags & HAS_SSE2) vp8_diamond_search_sad = vp8_diamond_search_sadx4;
-    vp8_fast_quantize_b = vp8_fast_quantize_b_c;
-    if (flags & HAS_SSE2) vp8_fast_quantize_b = vp8_fast_quantize_b_sse2;
-    if (flags & HAS_SSSE3) vp8_fast_quantize_b = vp8_fast_quantize_b_ssse3;
-    vp8_filter_by_weight16x16 = vp8_filter_by_weight16x16_c;
-    if (flags & HAS_SSE2) vp8_filter_by_weight16x16 = vp8_filter_by_weight16x16_sse2;
-    vp8_filter_by_weight8x8 = vp8_filter_by_weight8x8_c;
-    if (flags & HAS_SSE2) vp8_filter_by_weight8x8 = vp8_filter_by_weight8x8_sse2;
-    vp8_full_search_sad = vp8_full_search_sad_c;
-    if (flags & HAS_SSE3) vp8_full_search_sad = vp8_full_search_sadx3;
-    if (flags & HAS_SSE4_1) vp8_full_search_sad = vp8_full_search_sadx8;
-    vp8_loop_filter_bh = vp8_loop_filter_bh_c;
-    if (flags & HAS_SSE2) vp8_loop_filter_bh = vp8_loop_filter_bh_sse2;
-    vp8_loop_filter_bv = vp8_loop_filter_bv_c;
-    if (flags & HAS_SSE2) vp8_loop_filter_bv = vp8_loop_filter_bv_sse2;
-    vp8_loop_filter_mbh = vp8_loop_filter_mbh_c;
-    if (flags & HAS_SSE2) vp8_loop_filter_mbh = vp8_loop_filter_mbh_sse2;
-    vp8_loop_filter_mbv = vp8_loop_filter_mbv_c;
-    if (flags & HAS_SSE2) vp8_loop_filter_mbv = vp8_loop_filter_mbv_sse2;
-    vp8_loop_filter_simple_bh = vp8_loop_filter_bhs_c;
-    if (flags & HAS_SSE2) vp8_loop_filter_simple_bh = vp8_loop_filter_bhs_sse2;
-    vp8_loop_filter_simple_bv = vp8_loop_filter_bvs_c;
-    if (flags & HAS_SSE2) vp8_loop_filter_simple_bv = vp8_loop_filter_bvs_sse2;
-    vp8_loop_filter_simple_mbh = vp8_loop_filter_simple_horizontal_edge_c;
-    if (flags & HAS_SSE2) vp8_loop_filter_simple_mbh = vp8_loop_filter_simple_horizontal_edge_sse2;
-    vp8_loop_filter_simple_mbv = vp8_loop_filter_simple_vertical_edge_c;
-    if (flags & HAS_SSE2) vp8_loop_filter_simple_mbv = vp8_loop_filter_simple_vertical_edge_sse2;
-    vp8_mbblock_error = vp8_mbblock_error_c;
-    if (flags & HAS_SSE2) vp8_mbblock_error = vp8_mbblock_error_sse2;
-    vp8_mbuverror = vp8_mbuverror_c;
-    if (flags & HAS_SSE2) vp8_mbuverror = vp8_mbuverror_sse2;
-    vp8_refining_search_sad = vp8_refining_search_sad_c;
-    if (flags & HAS_SSE2) vp8_refining_search_sad = vp8_refining_search_sadx4;
-    vp8_regular_quantize_b = vp8_regular_quantize_b_c;
-    if (flags & HAS_SSE2) vp8_regular_quantize_b = vp8_regular_quantize_b_sse2;
-    if (flags & HAS_SSE4_1) vp8_regular_quantize_b = vp8_regular_quantize_b_sse4_1;
-    vp8_short_fdct4x4 = vp8_short_fdct4x4_c;
-    if (flags & HAS_SSE2) vp8_short_fdct4x4 = vp8_short_fdct4x4_sse2;
-    vp8_short_fdct8x4 = vp8_short_fdct8x4_c;
-    if (flags & HAS_SSE2) vp8_short_fdct8x4 = vp8_short_fdct8x4_sse2;
-    vp8_short_idct4x4llm = vp8_short_idct4x4llm_c;
-    if (flags & HAS_MMX) vp8_short_idct4x4llm = vp8_short_idct4x4llm_mmx;
-    vp8_short_inv_walsh4x4 = vp8_short_inv_walsh4x4_c;
-    if (flags & HAS_SSE2) vp8_short_inv_walsh4x4 = vp8_short_inv_walsh4x4_sse2;
-    vp8_short_walsh4x4 = vp8_short_walsh4x4_c;
-    if (flags & HAS_SSE2) vp8_short_walsh4x4 = vp8_short_walsh4x4_sse2;
-    vp8_sixtap_predict16x16 = vp8_sixtap_predict16x16_c;
-    if (flags & HAS_SSE2) vp8_sixtap_predict16x16 = vp8_sixtap_predict16x16_sse2;
-    if (flags & HAS_SSSE3) vp8_sixtap_predict16x16 = vp8_sixtap_predict16x16_ssse3;
-    vp8_sixtap_predict4x4 = vp8_sixtap_predict4x4_c;
-    if (flags & HAS_MMX) vp8_sixtap_predict4x4 = vp8_sixtap_predict4x4_mmx;
-    if (flags & HAS_SSSE3) vp8_sixtap_predict4x4 = vp8_sixtap_predict4x4_ssse3;
-    vp8_sixtap_predict8x4 = vp8_sixtap_predict8x4_c;
-    if (flags & HAS_SSE2) vp8_sixtap_predict8x4 = vp8_sixtap_predict8x4_sse2;
-    if (flags & HAS_SSSE3) vp8_sixtap_predict8x4 = vp8_sixtap_predict8x4_ssse3;
-    vp8_sixtap_predict8x8 = vp8_sixtap_predict8x8_c;
-    if (flags & HAS_SSE2) vp8_sixtap_predict8x8 = vp8_sixtap_predict8x8_sse2;
-    if (flags & HAS_SSSE3) vp8_sixtap_predict8x8 = vp8_sixtap_predict8x8_ssse3;
+  vp8_bilinear_predict16x16 = vp8_bilinear_predict16x16_sse2;
+  if (flags & HAS_SSSE3)
+    vp8_bilinear_predict16x16 = vp8_bilinear_predict16x16_ssse3;
+  vp8_bilinear_predict8x8 = vp8_bilinear_predict8x8_sse2;
+  if (flags & HAS_SSSE3)
+    vp8_bilinear_predict8x8 = vp8_bilinear_predict8x8_ssse3;
+  vp8_copy32xn = vp8_copy32xn_sse2;
+  if (flags & HAS_SSE3)
+    vp8_copy32xn = vp8_copy32xn_sse3;
+  vp8_fast_quantize_b = vp8_fast_quantize_b_sse2;
+  if (flags & HAS_SSSE3)
+    vp8_fast_quantize_b = vp8_fast_quantize_b_ssse3;
+  vp8_full_search_sad = vp8_full_search_sad_c;
+  if (flags & HAS_SSE3)
+    vp8_full_search_sad = vp8_full_search_sadx3;
+  if (flags & HAS_SSE4_1)
+    vp8_full_search_sad = vp8_full_search_sadx8;
+  vp8_regular_quantize_b = vp8_regular_quantize_b_sse2;
+  if (flags & HAS_SSE4_1)
+    vp8_regular_quantize_b = vp8_regular_quantize_b_sse4_1;
+  vp8_sixtap_predict16x16 = vp8_sixtap_predict16x16_sse2;
+  if (flags & HAS_SSSE3)
+    vp8_sixtap_predict16x16 = vp8_sixtap_predict16x16_ssse3;
+  vp8_sixtap_predict4x4 = vp8_sixtap_predict4x4_mmx;
+  if (flags & HAS_SSSE3)
+    vp8_sixtap_predict4x4 = vp8_sixtap_predict4x4_ssse3;
+  vp8_sixtap_predict8x4 = vp8_sixtap_predict8x4_sse2;
+  if (flags & HAS_SSSE3)
+    vp8_sixtap_predict8x4 = vp8_sixtap_predict8x4_ssse3;
+  vp8_sixtap_predict8x8 = vp8_sixtap_predict8x8_sse2;
+  if (flags & HAS_SSSE3)
+    vp8_sixtap_predict8x8 = vp8_sixtap_predict8x8_ssse3;
 }
 #endif
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vp9_rtcd.h
index c8f69b9..e079a2d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vp9_rtcd.h
@@ -79,15 +79,7 @@
                              int increase_denoising,
                              BLOCK_SIZE bs,
                              int motion_magnitude);
-RTCD_EXTERN int (*vp9_denoiser_filter)(const uint8_t* sig,
-                                       int sig_stride,
-                                       const uint8_t* mc_avg,
-                                       int mc_avg_stride,
-                                       uint8_t* avg,
-                                       int avg_stride,
-                                       int increase_denoising,
-                                       BLOCK_SIZE bs,
-                                       int motion_magnitude);
+#define vp9_denoiser_filter vp9_denoiser_filter_sse2
 
 int vp9_diamond_search_sad_c(const struct macroblock* x,
                              const struct search_site_config* cfg,
@@ -118,46 +110,6 @@
     const struct vp9_variance_vtable* fn_ptr,
     const struct mv* center_mv);
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-void vp9_fdct8x8_quant_ssse3(const int16_t* input,
-                             int stride,
-                             tran_low_t* coeff_ptr,
-                             intptr_t n_coeffs,
-                             int skip_block,
-                             const int16_t* round_ptr,
-                             const int16_t* quant_ptr,
-                             tran_low_t* qcoeff_ptr,
-                             tran_low_t* dqcoeff_ptr,
-                             const int16_t* dequant_ptr,
-                             uint16_t* eob_ptr,
-                             const int16_t* scan,
-                             const int16_t* iscan);
-RTCD_EXTERN void (*vp9_fdct8x8_quant)(const int16_t* input,
-                                      int stride,
-                                      tran_low_t* coeff_ptr,
-                                      intptr_t n_coeffs,
-                                      int skip_block,
-                                      const int16_t* round_ptr,
-                                      const int16_t* quant_ptr,
-                                      tran_low_t* qcoeff_ptr,
-                                      tran_low_t* dqcoeff_ptr,
-                                      const int16_t* dequant_ptr,
-                                      uint16_t* eob_ptr,
-                                      const int16_t* scan,
-                                      const int16_t* iscan);
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -166,10 +118,7 @@
                        tran_low_t* output,
                        int stride,
                        int tx_type);
-RTCD_EXTERN void (*vp9_fht16x16)(const int16_t* input,
-                                 tran_low_t* output,
-                                 int stride,
-                                 int tx_type);
+#define vp9_fht16x16 vp9_fht16x16_sse2
 
 void vp9_fht4x4_c(const int16_t* input,
                   tran_low_t* output,
@@ -179,10 +128,7 @@
                      tran_low_t* output,
                      int stride,
                      int tx_type);
-RTCD_EXTERN void (*vp9_fht4x4)(const int16_t* input,
-                               tran_low_t* output,
-                               int stride,
-                               int tx_type);
+#define vp9_fht4x4 vp9_fht4x4_sse2
 
 void vp9_fht8x8_c(const int16_t* input,
                   tran_low_t* output,
@@ -192,10 +138,7 @@
                      tran_low_t* output,
                      int stride,
                      int tx_type);
-RTCD_EXTERN void (*vp9_fht8x8)(const int16_t* input,
-                               tran_low_t* output,
-                               int stride,
-                               int tx_type);
+#define vp9_fht8x8 vp9_fht8x8_sse2
 
 void vp9_filter_by_weight16x16_c(const uint8_t* src,
                                  int src_stride,
@@ -207,11 +150,7 @@
                                     uint8_t* dst,
                                     int dst_stride,
                                     int src_weight);
-RTCD_EXTERN void (*vp9_filter_by_weight16x16)(const uint8_t* src,
-                                              int src_stride,
-                                              uint8_t* dst,
-                                              int dst_stride,
-                                              int src_weight);
+#define vp9_filter_by_weight16x16 vp9_filter_by_weight16x16_sse2
 
 void vp9_filter_by_weight8x8_c(const uint8_t* src,
                                int src_stride,
@@ -223,17 +162,11 @@
                                   uint8_t* dst,
                                   int dst_stride,
                                   int src_weight);
-RTCD_EXTERN void (*vp9_filter_by_weight8x8)(const uint8_t* src,
-                                            int src_stride,
-                                            uint8_t* dst,
-                                            int dst_stride,
-                                            int src_weight);
+#define vp9_filter_by_weight8x8 vp9_filter_by_weight8x8_sse2
 
 void vp9_fwht4x4_c(const int16_t* input, tran_low_t* output, int stride);
 void vp9_fwht4x4_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vp9_fwht4x4)(const int16_t* input,
-                                tran_low_t* output,
-                                int stride);
+#define vp9_fwht4x4 vp9_fwht4x4_sse2
 
 int64_t vp9_highbd_block_error_c(const tran_low_t* coeff,
                                  const tran_low_t* dqcoeff,
@@ -245,11 +178,7 @@
                                     intptr_t block_size,
                                     int64_t* ssz,
                                     int bd);
-RTCD_EXTERN int64_t (*vp9_highbd_block_error)(const tran_low_t* coeff,
-                                              const tran_low_t* dqcoeff,
-                                              intptr_t block_size,
-                                              int64_t* ssz,
-                                              int bd);
+#define vp9_highbd_block_error vp9_highbd_block_error_sse2
 
 void vp9_highbd_fht16x16_c(const int16_t* input,
                            tran_low_t* output,
@@ -273,18 +202,18 @@
 #define vp9_highbd_fwht4x4 vp9_highbd_fwht4x4_c
 
 void vp9_highbd_iht16x16_256_add_c(const tran_low_t* input,
-                                   uint16_t* output,
-                                   int pitch,
+                                   uint16_t* dest,
+                                   int stride,
                                    int tx_type,
                                    int bd);
 void vp9_highbd_iht16x16_256_add_sse4_1(const tran_low_t* input,
-                                        uint16_t* output,
-                                        int pitch,
+                                        uint16_t* dest,
+                                        int stride,
                                         int tx_type,
                                         int bd);
 RTCD_EXTERN void (*vp9_highbd_iht16x16_256_add)(const tran_low_t* input,
-                                                uint16_t* output,
-                                                int pitch,
+                                                uint16_t* dest,
+                                                int stride,
                                                 int tx_type,
                                                 int bd);
 
@@ -376,23 +305,21 @@
                                         unsigned int block_width,
                                         unsigned int block_height,
                                         int strength,
-                                        int filter_weight,
+                                        int* blk_fw,
+                                        int use_32x32,
                                         uint32_t* accumulator,
                                         uint16_t* count);
 #define vp9_highbd_temporal_filter_apply vp9_highbd_temporal_filter_apply_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 void vp9_iht16x16_256_add_sse2(const tran_low_t* input,
-                               uint8_t* output,
-                               int pitch,
+                               uint8_t* dest,
+                               int stride,
                                int tx_type);
-RTCD_EXTERN void (*vp9_iht16x16_256_add)(const tran_low_t* input,
-                                         uint8_t* output,
-                                         int pitch,
-                                         int tx_type);
+#define vp9_iht16x16_256_add vp9_iht16x16_256_add_sse2
 
 void vp9_iht4x4_16_add_c(const tran_low_t* input,
                          uint8_t* dest,
@@ -402,10 +329,7 @@
                             uint8_t* dest,
                             int stride,
                             int tx_type);
-RTCD_EXTERN void (*vp9_iht4x4_16_add)(const tran_low_t* input,
-                                      uint8_t* dest,
-                                      int stride,
-                                      int tx_type);
+#define vp9_iht4x4_16_add vp9_iht4x4_16_add_sse2
 
 void vp9_iht8x8_64_add_c(const tran_low_t* input,
                          uint8_t* dest,
@@ -415,10 +339,7 @@
                             uint8_t* dest,
                             int stride,
                             int tx_type);
-RTCD_EXTERN void (*vp9_iht8x8_64_add)(const tran_low_t* input,
-                                      uint8_t* dest,
-                                      int stride,
-                                      int tx_type);
+#define vp9_iht8x8_64_add vp9_iht8x8_64_add_sse2
 
 void vp9_quantize_fp_c(const tran_low_t* coeff_ptr,
                        intptr_t n_coeffs,
@@ -501,46 +422,15 @@
 
   (void)flags;
 
-  vp9_block_error = vp9_block_error_c;
-  if (flags & HAS_SSE2)
-    vp9_block_error = vp9_block_error_sse2;
+  vp9_block_error = vp9_block_error_sse2;
   if (flags & HAS_AVX2)
     vp9_block_error = vp9_block_error_avx2;
-  vp9_block_error_fp = vp9_block_error_fp_c;
-  if (flags & HAS_SSE2)
-    vp9_block_error_fp = vp9_block_error_fp_sse2;
+  vp9_block_error_fp = vp9_block_error_fp_sse2;
   if (flags & HAS_AVX2)
     vp9_block_error_fp = vp9_block_error_fp_avx2;
-  vp9_denoiser_filter = vp9_denoiser_filter_c;
-  if (flags & HAS_SSE2)
-    vp9_denoiser_filter = vp9_denoiser_filter_sse2;
   vp9_diamond_search_sad = vp9_diamond_search_sad_c;
   if (flags & HAS_AVX)
     vp9_diamond_search_sad = vp9_diamond_search_sad_avx;
-  vp9_fdct8x8_quant = vp9_fdct8x8_quant_c;
-  if (flags & HAS_SSSE3)
-    vp9_fdct8x8_quant = vp9_fdct8x8_quant_ssse3;
-  vp9_fht16x16 = vp9_fht16x16_c;
-  if (flags & HAS_SSE2)
-    vp9_fht16x16 = vp9_fht16x16_sse2;
-  vp9_fht4x4 = vp9_fht4x4_c;
-  if (flags & HAS_SSE2)
-    vp9_fht4x4 = vp9_fht4x4_sse2;
-  vp9_fht8x8 = vp9_fht8x8_c;
-  if (flags & HAS_SSE2)
-    vp9_fht8x8 = vp9_fht8x8_sse2;
-  vp9_filter_by_weight16x16 = vp9_filter_by_weight16x16_c;
-  if (flags & HAS_SSE2)
-    vp9_filter_by_weight16x16 = vp9_filter_by_weight16x16_sse2;
-  vp9_filter_by_weight8x8 = vp9_filter_by_weight8x8_c;
-  if (flags & HAS_SSE2)
-    vp9_filter_by_weight8x8 = vp9_filter_by_weight8x8_sse2;
-  vp9_fwht4x4 = vp9_fwht4x4_c;
-  if (flags & HAS_SSE2)
-    vp9_fwht4x4 = vp9_fwht4x4_sse2;
-  vp9_highbd_block_error = vp9_highbd_block_error_c;
-  if (flags & HAS_SSE2)
-    vp9_highbd_block_error = vp9_highbd_block_error_sse2;
   vp9_highbd_iht16x16_256_add = vp9_highbd_iht16x16_256_add_c;
   if (flags & HAS_SSE4_1)
     vp9_highbd_iht16x16_256_add = vp9_highbd_iht16x16_256_add_sse4_1;
@@ -550,18 +440,7 @@
   vp9_highbd_iht8x8_64_add = vp9_highbd_iht8x8_64_add_c;
   if (flags & HAS_SSE4_1)
     vp9_highbd_iht8x8_64_add = vp9_highbd_iht8x8_64_add_sse4_1;
-  vp9_iht16x16_256_add = vp9_iht16x16_256_add_c;
-  if (flags & HAS_SSE2)
-    vp9_iht16x16_256_add = vp9_iht16x16_256_add_sse2;
-  vp9_iht4x4_16_add = vp9_iht4x4_16_add_c;
-  if (flags & HAS_SSE2)
-    vp9_iht4x4_16_add = vp9_iht4x4_16_add_sse2;
-  vp9_iht8x8_64_add = vp9_iht8x8_64_add_c;
-  if (flags & HAS_SSE2)
-    vp9_iht8x8_64_add = vp9_iht8x8_64_add_sse2;
-  vp9_quantize_fp = vp9_quantize_fp_c;
-  if (flags & HAS_SSE2)
-    vp9_quantize_fp = vp9_quantize_fp_sse2;
+  vp9_quantize_fp = vp9_quantize_fp_sse2;
   if (flags & HAS_AVX2)
     vp9_quantize_fp = vp9_quantize_fp_avx2;
   vp9_scale_and_extend_frame = vp9_scale_and_extend_frame_c;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_config.asm
index 4537261..a5bda29 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_config.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_config.asm
@@ -1,7 +1,12 @@
+%define VPX_ARCH_ARM 0
 %define ARCH_ARM 0
+%define VPX_ARCH_MIPS 0
 %define ARCH_MIPS 0
+%define VPX_ARCH_X86 1
 %define ARCH_X86 1
+%define VPX_ARCH_X86_64 0
 %define ARCH_X86_64 0
+%define VPX_ARCH_PPC 0
 %define ARCH_PPC 0
 %define HAVE_NEON 0
 %define HAVE_NEON_ASM 0
@@ -66,20 +71,24 @@
 %define CONFIG_OS_SUPPORT 1
 %define CONFIG_UNIT_TESTS 1
 %define CONFIG_WEBM_IO 1
-%define CONFIG_LIBYUV 1
+%define CONFIG_LIBYUV 0
 %define CONFIG_DECODE_PERF_TESTS 0
 %define CONFIG_ENCODE_PERF_TESTS 0
 %define CONFIG_MULTI_RES_ENCODING 1
 %define CONFIG_TEMPORAL_DENOISING 1
 %define CONFIG_VP9_TEMPORAL_DENOISING 1
+%define CONFIG_CONSISTENT_RECODE 0
 %define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 %define CONFIG_VP9_HIGHBITDEPTH 1
 %define CONFIG_BETTER_HW_COMPATIBILITY 0
 %define CONFIG_EXPERIMENTAL 0
 %define CONFIG_SIZE_LIMIT 1
 %define CONFIG_ALWAYS_ADJUST_BPM 0
-%define CONFIG_SPATIAL_SVC 0
+%define CONFIG_BITSTREAM_DEBUG 0
+%define CONFIG_MISMATCH_DEBUG 0
 %define CONFIG_FP_MB_STATS 0
 %define CONFIG_EMULATE_HARDWARE 0
+%define CONFIG_NON_GREEDY_MV 0
+%define CONFIG_RATE_CTRL 0
 %define DECODE_WIDTH_LIMIT 16384
 %define DECODE_HEIGHT_LIMIT 16384
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_config.c
index 039d479..513b7b2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=x86-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --enable-pic --as=yasm --disable-avx512 --enable-vp9-highbitdepth";
+static const char* const cfg = "--target=x86-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv --enable-pic --as=yasm --disable-avx512 --enable-vp9-highbitdepth";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_config.h
index 12c4f24..11c14fe 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      inline
+#define VPX_ARCH_ARM 0
 #define ARCH_ARM 0
+#define VPX_ARCH_MIPS 0
 #define ARCH_MIPS 0
+#define VPX_ARCH_X86 1
 #define ARCH_X86 1
+#define VPX_ARCH_X86_64 0
 #define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 0
 #define HAVE_NEON_ASM 0
@@ -78,21 +83,25 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
 #define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 1
 #define CONFIG_BETTER_HW_COMPATIBILITY 0
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
-#define CONFIG_SPATIAL_SVC 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_dsp_rtcd.h
index d820a2b..096e3a8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/ia32/vpx_dsp_rtcd.h
@@ -22,11 +22,11 @@
 
 unsigned int vpx_avg_4x4_c(const uint8_t*, int p);
 unsigned int vpx_avg_4x4_sse2(const uint8_t*, int p);
-RTCD_EXTERN unsigned int (*vpx_avg_4x4)(const uint8_t*, int p);
+#define vpx_avg_4x4 vpx_avg_4x4_sse2
 
 unsigned int vpx_avg_8x8_c(const uint8_t*, int p);
 unsigned int vpx_avg_8x8_sse2(const uint8_t*, int p);
-RTCD_EXTERN unsigned int (*vpx_avg_8x8)(const uint8_t*, int p);
+#define vpx_avg_8x8 vpx_avg_8x8_sse2
 
 void vpx_comp_avg_pred_c(uint8_t* comp_pred,
                          const uint8_t* pred,
@@ -40,12 +40,7 @@
                             int height,
                             const uint8_t* ref,
                             int ref_stride);
-RTCD_EXTERN void (*vpx_comp_avg_pred)(uint8_t* comp_pred,
-                                      const uint8_t* pred,
-                                      int width,
-                                      int height,
-                                      const uint8_t* ref,
-                                      int ref_stride);
+#define vpx_comp_avg_pred vpx_comp_avg_pred_sse2
 
 void vpx_convolve8_c(const uint8_t* src,
                      ptrdiff_t src_stride,
@@ -405,17 +400,7 @@
                            int y_step_q4,
                            int w,
                            int h);
-RTCD_EXTERN void (*vpx_convolve_avg)(const uint8_t* src,
-                                     ptrdiff_t src_stride,
-                                     uint8_t* dst,
-                                     ptrdiff_t dst_stride,
-                                     const InterpKernel* filter,
-                                     int x0_q4,
-                                     int x_step_q4,
-                                     int y0_q4,
-                                     int y_step_q4,
-                                     int w,
-                                     int h);
+#define vpx_convolve_avg vpx_convolve_avg_sse2
 
 void vpx_convolve_copy_c(const uint8_t* src,
                          ptrdiff_t src_stride,
@@ -439,655 +424,553 @@
                             int y_step_q4,
                             int w,
                             int h);
-RTCD_EXTERN void (*vpx_convolve_copy)(const uint8_t* src,
-                                      ptrdiff_t src_stride,
-                                      uint8_t* dst,
-                                      ptrdiff_t dst_stride,
-                                      const InterpKernel* filter,
-                                      int x0_q4,
-                                      int x_step_q4,
-                                      int y0_q4,
-                                      int y_step_q4,
-                                      int w,
-                                      int h);
+#define vpx_convolve_copy vpx_convolve_copy_sse2
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_c
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_c
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_c
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_c
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d153_predictor_16x16_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_16x16)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d153_predictor_32x32_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_32x32)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d153_predictor_4x4_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_4x4)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d153_predictor_8x8_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_8x8)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d207_predictor_16x16_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_16x16)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d207_predictor_32x32_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_32x32)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d207_predictor_4x4_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
-RTCD_EXTERN void (*vpx_d207_predictor_4x4)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
-                                           const uint8_t* above,
-                                           const uint8_t* left);
+#define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_sse2
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d207_predictor_8x8_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_8x8)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_16x16_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_16x16)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_32x32_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_32x32)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_4x4_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_d45_predictor_4x4)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_sse2
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_8x8_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_d45_predictor_8x8)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_sse2
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d63_predictor_16x16_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_16x16)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d63_predictor_32x32_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_32x32)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d63_predictor_4x4_ssse3(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_4x4)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d63_predictor_8x8_ssse3(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_8x8)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_16x16_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_128_predictor_16x16)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint8_t* above,
-                                               const uint8_t* left);
+#define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_sse2
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_32x32_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_128_predictor_32x32)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint8_t* above,
-                                               const uint8_t* left);
+#define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_sse2
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_4x4_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_128_predictor_4x4)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
-                                             const uint8_t* above,
-                                             const uint8_t* left);
+#define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_sse2
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_8x8_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_128_predictor_8x8)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
-                                             const uint8_t* above,
-                                             const uint8_t* left);
+#define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_sse2
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_16x16_sse2(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_left_predictor_16x16)(uint8_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint8_t* above,
-                                                const uint8_t* left);
+#define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_sse2
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_32x32_sse2(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_left_predictor_32x32)(uint8_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint8_t* above,
-                                                const uint8_t* left);
+#define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_sse2
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_4x4_sse2(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_left_predictor_4x4)(uint8_t* dst,
-                                              ptrdiff_t y_stride,
-                                              const uint8_t* above,
-                                              const uint8_t* left);
+#define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_sse2
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_8x8_sse2(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_left_predictor_8x8)(uint8_t* dst,
-                                              ptrdiff_t y_stride,
-                                              const uint8_t* above,
-                                              const uint8_t* left);
+#define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_sse2
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_16x16_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_predictor_16x16)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
-                                           const uint8_t* above,
-                                           const uint8_t* left);
+#define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_sse2
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_32x32_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_predictor_32x32)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
-                                           const uint8_t* above,
-                                           const uint8_t* left);
+#define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_sse2
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_4x4_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_predictor_4x4)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
-                                         const uint8_t* above,
-                                         const uint8_t* left);
+#define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_sse2
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_8x8_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_predictor_8x8)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
-                                         const uint8_t* above,
-                                         const uint8_t* left);
+#define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_sse2
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_16x16_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_top_predictor_16x16)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint8_t* above,
-                                               const uint8_t* left);
+#define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_sse2
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_32x32_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_top_predictor_32x32)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint8_t* above,
-                                               const uint8_t* left);
+#define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_sse2
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_4x4_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_top_predictor_4x4)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
-                                             const uint8_t* above,
-                                             const uint8_t* left);
+#define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_sse2
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_8x8_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_top_predictor_8x8)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
-                                             const uint8_t* above,
-                                             const uint8_t* left);
+#define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_sse2
 
 void vpx_fdct16x16_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct16x16_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct16x16)(const int16_t* input,
-                                  tran_low_t* output,
-                                  int stride);
+#define vpx_fdct16x16 vpx_fdct16x16_sse2
 
 void vpx_fdct16x16_1_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct16x16_1_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct16x16_1)(const int16_t* input,
-                                    tran_low_t* output,
-                                    int stride);
+#define vpx_fdct16x16_1 vpx_fdct16x16_1_sse2
 
 void vpx_fdct32x32_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct32x32_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct32x32)(const int16_t* input,
-                                  tran_low_t* output,
-                                  int stride);
+#define vpx_fdct32x32 vpx_fdct32x32_sse2
 
 void vpx_fdct32x32_1_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct32x32_1_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct32x32_1)(const int16_t* input,
-                                    tran_low_t* output,
-                                    int stride);
+#define vpx_fdct32x32_1 vpx_fdct32x32_1_sse2
 
 void vpx_fdct32x32_rd_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct32x32_rd_sse2(const int16_t* input,
                            tran_low_t* output,
                            int stride);
-RTCD_EXTERN void (*vpx_fdct32x32_rd)(const int16_t* input,
-                                     tran_low_t* output,
-                                     int stride);
+#define vpx_fdct32x32_rd vpx_fdct32x32_rd_sse2
 
 void vpx_fdct4x4_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct4x4_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct4x4)(const int16_t* input,
-                                tran_low_t* output,
-                                int stride);
+#define vpx_fdct4x4 vpx_fdct4x4_sse2
 
 void vpx_fdct4x4_1_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct4x4_1_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct4x4_1)(const int16_t* input,
-                                  tran_low_t* output,
-                                  int stride);
+#define vpx_fdct4x4_1 vpx_fdct4x4_1_sse2
 
 void vpx_fdct8x8_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct8x8_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct8x8)(const int16_t* input,
-                                tran_low_t* output,
-                                int stride);
+#define vpx_fdct8x8 vpx_fdct8x8_sse2
 
 void vpx_fdct8x8_1_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct8x8_1_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct8x8_1)(const int16_t* input,
-                                  tran_low_t* output,
-                                  int stride);
+#define vpx_fdct8x8_1 vpx_fdct8x8_1_sse2
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
                        int* sum);
 void vpx_get16x16var_sse2(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
                           int* sum);
 void vpx_get16x16var_avx2(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
                           int* sum);
 RTCD_EXTERN void (*vpx_get16x16var)(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse,
                                     int* sum);
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 #define vpx_get4x4sse_cs vpx_get4x4sse_cs_c
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
                      int* sum);
 void vpx_get8x8var_sse2(const uint8_t* src_ptr,
-                        int source_stride,
+                        int src_stride,
                         const uint8_t* ref_ptr,
                         int ref_stride,
                         unsigned int* sse,
                         int* sum);
-RTCD_EXTERN void (*vpx_get8x8var)(const uint8_t* src_ptr,
-                                  int source_stride,
-                                  const uint8_t* ref_ptr,
-                                  int ref_stride,
-                                  unsigned int* sse,
-                                  int* sum);
+#define vpx_get8x8var vpx_get8x8var_sse2
 
 unsigned int vpx_get_mb_ss_c(const int16_t*);
 unsigned int vpx_get_mb_ss_sse2(const int16_t*);
-RTCD_EXTERN unsigned int (*vpx_get_mb_ss)(const int16_t*);
+#define vpx_get_mb_ss vpx_get_mb_ss_sse2
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_16x16_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_h_predictor_16x16)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_h_predictor_16x16 vpx_h_predictor_16x16_sse2
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_32x32_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_h_predictor_32x32)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_h_predictor_32x32 vpx_h_predictor_32x32_sse2
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_4x4_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
-RTCD_EXTERN void (*vpx_h_predictor_4x4)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
-                                        const uint8_t* above,
-                                        const uint8_t* left);
+#define vpx_h_predictor_4x4 vpx_h_predictor_4x4_sse2
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_8x8_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
-RTCD_EXTERN void (*vpx_h_predictor_8x8)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
-                                        const uint8_t* above,
-                                        const uint8_t* left);
+#define vpx_h_predictor_8x8 vpx_h_predictor_8x8_sse2
 
 void vpx_hadamard_16x16_c(const int16_t* src_diff,
                           ptrdiff_t src_stride,
@@ -1121,249 +1004,209 @@
 void vpx_hadamard_8x8_sse2(const int16_t* src_diff,
                            ptrdiff_t src_stride,
                            tran_low_t* coeff);
-RTCD_EXTERN void (*vpx_hadamard_8x8)(const int16_t* src_diff,
-                                     ptrdiff_t src_stride,
-                                     tran_low_t* coeff);
+#define vpx_hadamard_8x8 vpx_hadamard_8x8_sse2
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
 
 void vpx_highbd_10_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
                                  int* sum);
-#define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_c
+void vpx_highbd_10_get16x16var_sse2(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse,
+                                    int* sum);
+#define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_sse2
 
 void vpx_highbd_10_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
                                int* sum);
-#define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_c
+void vpx_highbd_10_get8x8var_sse2(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse,
+                                  int* sum);
+#define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_sse2
 
 unsigned int vpx_highbd_10_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 unsigned int vpx_highbd_10_mse16x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_mse16x16)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   const uint8_t* ref_ptr,
-                                                   int recon_stride,
-                                                   unsigned int* sse);
+#define vpx_highbd_10_mse16x16 vpx_highbd_10_mse16x16_sse2
 
 unsigned int vpx_highbd_10_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse16x8 vpx_highbd_10_mse16x8_c
 
 unsigned int vpx_highbd_10_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse8x16 vpx_highbd_10_mse8x16_c
 
 unsigned int vpx_highbd_10_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_highbd_10_mse8x8_sse2(const uint8_t* src_ptr,
-                                       int source_stride,
+                                       int src_stride,
                                        const uint8_t* ref_ptr,
-                                       int recon_stride,
+                                       int ref_stride,
                                        unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_mse8x8)(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 const uint8_t* ref_ptr,
-                                                 int recon_stride,
-                                                 unsigned int* sse);
+#define vpx_highbd_10_mse8x8 vpx_highbd_10_mse8x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x16 \
+  vpx_highbd_10_sub_pixel_avg_variance16x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x32 \
+  vpx_highbd_10_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x8 \
+  vpx_highbd_10_sub_pixel_avg_variance16x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x16 \
+  vpx_highbd_10_sub_pixel_avg_variance32x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x32 \
+  vpx_highbd_10_sub_pixel_avg_variance32x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x64 \
+  vpx_highbd_10_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1372,9 +1215,9 @@
   vpx_highbd_10_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1384,283 +1227,212 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance64x32 \
+  vpx_highbd_10_sub_pixel_avg_variance64x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance64x64 \
+  vpx_highbd_10_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x16 \
+  vpx_highbd_10_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x4 \
+  vpx_highbd_10_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x8 \
+  vpx_highbd_10_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x16 \
+  vpx_highbd_10_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x32 \
+  vpx_highbd_10_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x8 \
+  vpx_highbd_10_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x16 \
+  vpx_highbd_10_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x32 \
+  vpx_highbd_10_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x64 \
+  vpx_highbd_10_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1668,9 +1440,9 @@
   vpx_highbd_10_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1678,534 +1450,426 @@
   vpx_highbd_10_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance64x32 \
+  vpx_highbd_10_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance64x64 \
+  vpx_highbd_10_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x16 \
+  vpx_highbd_10_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x4 \
+  vpx_highbd_10_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x8 \
+  vpx_highbd_10_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_10_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance16x16)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance16x16 vpx_highbd_10_variance16x16_sse2
 
 unsigned int vpx_highbd_10_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance16x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance16x32 vpx_highbd_10_variance16x32_sse2
 
 unsigned int vpx_highbd_10_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance16x8)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_10_variance16x8 vpx_highbd_10_variance16x8_sse2
 
 unsigned int vpx_highbd_10_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance32x16)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance32x16 vpx_highbd_10_variance32x16_sse2
 
 unsigned int vpx_highbd_10_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance32x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance32x32 vpx_highbd_10_variance32x32_sse2
 
 unsigned int vpx_highbd_10_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance32x64)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance32x64 vpx_highbd_10_variance32x64_sse2
 
 unsigned int vpx_highbd_10_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x4 vpx_highbd_10_variance4x4_c
 
 unsigned int vpx_highbd_10_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x8 vpx_highbd_10_variance4x8_c
 
 unsigned int vpx_highbd_10_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance64x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance64x32 vpx_highbd_10_variance64x32_sse2
 
 unsigned int vpx_highbd_10_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance64x64)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance64x64 vpx_highbd_10_variance64x64_sse2
 
 unsigned int vpx_highbd_10_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_10_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance8x16)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_10_variance8x16 vpx_highbd_10_variance8x16_sse2
 
 unsigned int vpx_highbd_10_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance8x4 vpx_highbd_10_variance8x4_c
 
 unsigned int vpx_highbd_10_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_10_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance8x8)(const uint8_t* src_ptr,
-                                                      int source_stride,
-                                                      const uint8_t* ref_ptr,
-                                                      int ref_stride,
-                                                      unsigned int* sse);
+#define vpx_highbd_10_variance8x8 vpx_highbd_10_variance8x8_sse2
 
 void vpx_highbd_12_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
                                  int* sum);
-#define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_c
+void vpx_highbd_12_get16x16var_sse2(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse,
+                                    int* sum);
+#define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_sse2
 
 void vpx_highbd_12_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
                                int* sum);
-#define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_c
+void vpx_highbd_12_get8x8var_sse2(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse,
+                                  int* sum);
+#define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_sse2
 
 unsigned int vpx_highbd_12_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 unsigned int vpx_highbd_12_mse16x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_mse16x16)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   const uint8_t* ref_ptr,
-                                                   int recon_stride,
-                                                   unsigned int* sse);
+#define vpx_highbd_12_mse16x16 vpx_highbd_12_mse16x16_sse2
 
 unsigned int vpx_highbd_12_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse16x8 vpx_highbd_12_mse16x8_c
 
 unsigned int vpx_highbd_12_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse8x16 vpx_highbd_12_mse8x16_c
 
 unsigned int vpx_highbd_12_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_highbd_12_mse8x8_sse2(const uint8_t* src_ptr,
-                                       int source_stride,
+                                       int src_stride,
                                        const uint8_t* ref_ptr,
-                                       int recon_stride,
+                                       int ref_stride,
                                        unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_mse8x8)(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 const uint8_t* ref_ptr,
-                                                 int recon_stride,
-                                                 unsigned int* sse);
+#define vpx_highbd_12_mse8x8 vpx_highbd_12_mse8x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x16 \
+  vpx_highbd_12_sub_pixel_avg_variance16x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x32 \
+  vpx_highbd_12_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x8 \
+  vpx_highbd_12_sub_pixel_avg_variance16x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x16 \
+  vpx_highbd_12_sub_pixel_avg_variance32x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x32 \
+  vpx_highbd_12_sub_pixel_avg_variance32x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x64 \
+  vpx_highbd_12_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -2214,9 +1878,9 @@
   vpx_highbd_12_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -2226,283 +1890,212 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance64x32 \
+  vpx_highbd_12_sub_pixel_avg_variance64x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance64x64 \
+  vpx_highbd_12_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x16 \
+  vpx_highbd_12_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x4 \
+  vpx_highbd_12_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x8 \
+  vpx_highbd_12_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x16 \
+  vpx_highbd_12_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x32 \
+  vpx_highbd_12_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x8 \
+  vpx_highbd_12_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x16 \
+  vpx_highbd_12_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x32 \
+  vpx_highbd_12_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x64 \
+  vpx_highbd_12_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -2510,9 +2103,9 @@
   vpx_highbd_12_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -2520,529 +2113,421 @@
   vpx_highbd_12_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance64x32 \
+  vpx_highbd_12_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance64x64 \
+  vpx_highbd_12_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x16 \
+  vpx_highbd_12_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x4 \
+  vpx_highbd_12_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x8 \
+  vpx_highbd_12_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_12_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance16x16)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance16x16 vpx_highbd_12_variance16x16_sse2
 
 unsigned int vpx_highbd_12_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance16x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance16x32 vpx_highbd_12_variance16x32_sse2
 
 unsigned int vpx_highbd_12_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance16x8)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_12_variance16x8 vpx_highbd_12_variance16x8_sse2
 
 unsigned int vpx_highbd_12_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance32x16)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance32x16 vpx_highbd_12_variance32x16_sse2
 
 unsigned int vpx_highbd_12_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance32x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance32x32 vpx_highbd_12_variance32x32_sse2
 
 unsigned int vpx_highbd_12_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance32x64)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance32x64 vpx_highbd_12_variance32x64_sse2
 
 unsigned int vpx_highbd_12_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x4 vpx_highbd_12_variance4x4_c
 
 unsigned int vpx_highbd_12_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x8 vpx_highbd_12_variance4x8_c
 
 unsigned int vpx_highbd_12_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance64x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance64x32 vpx_highbd_12_variance64x32_sse2
 
 unsigned int vpx_highbd_12_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance64x64)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance64x64 vpx_highbd_12_variance64x64_sse2
 
 unsigned int vpx_highbd_12_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_12_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance8x16)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_12_variance8x16 vpx_highbd_12_variance8x16_sse2
 
 unsigned int vpx_highbd_12_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance8x4 vpx_highbd_12_variance8x4_c
 
 unsigned int vpx_highbd_12_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_12_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance8x8)(const uint8_t* src_ptr,
-                                                      int source_stride,
-                                                      const uint8_t* ref_ptr,
-                                                      int ref_stride,
-                                                      unsigned int* sse);
+#define vpx_highbd_12_variance8x8 vpx_highbd_12_variance8x8_sse2
 
 void vpx_highbd_8_get16x16var_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse,
                                 int* sum);
-#define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_c
+void vpx_highbd_8_get16x16var_sse2(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   unsigned int* sse,
+                                   int* sum);
+#define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_sse2
 
 void vpx_highbd_8_get8x8var_c(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
                               int ref_stride,
                               unsigned int* sse,
                               int* sum);
-#define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_c
+void vpx_highbd_8_get8x8var_sse2(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse,
+                                 int* sum);
+#define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_sse2
 
 unsigned int vpx_highbd_8_mse16x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 unsigned int vpx_highbd_8_mse16x16_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
-                                        int recon_stride,
+                                        int ref_stride,
                                         unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_mse16x16)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  const uint8_t* ref_ptr,
-                                                  int recon_stride,
-                                                  unsigned int* sse);
+#define vpx_highbd_8_mse16x16 vpx_highbd_8_mse16x16_sse2
 
 unsigned int vpx_highbd_8_mse16x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse16x8 vpx_highbd_8_mse16x8_c
 
 unsigned int vpx_highbd_8_mse8x16_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse8x16 vpx_highbd_8_mse8x16_c
 
 unsigned int vpx_highbd_8_mse8x8_c(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
-                                   int recon_stride,
+                                   int ref_stride,
                                    unsigned int* sse);
 unsigned int vpx_highbd_8_mse8x8_sse2(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_mse8x8)(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                const uint8_t* ref_ptr,
-                                                int recon_stride,
-                                                unsigned int* sse);
+#define vpx_highbd_8_mse8x8 vpx_highbd_8_mse8x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x16 \
+  vpx_highbd_8_sub_pixel_avg_variance16x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x32 \
+  vpx_highbd_8_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x8 \
+  vpx_highbd_8_sub_pixel_avg_variance16x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x16 \
+  vpx_highbd_8_sub_pixel_avg_variance32x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x32 \
+  vpx_highbd_8_sub_pixel_avg_variance32x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x64 \
+  vpx_highbd_8_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -3051,9 +2536,9 @@
   vpx_highbd_8_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -3062,599 +2547,458 @@
   vpx_highbd_8_sub_pixel_avg_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance64x32 \
+  vpx_highbd_8_sub_pixel_avg_variance64x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance64x64 \
+  vpx_highbd_8_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x16 \
+  vpx_highbd_8_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
                                                   const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x4 \
+  vpx_highbd_8_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
                                                   const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x8 \
+  vpx_highbd_8_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x16 \
+  vpx_highbd_8_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x32 \
+  vpx_highbd_8_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x8 \
+  vpx_highbd_8_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x16 \
+  vpx_highbd_8_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x32 \
+  vpx_highbd_8_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x64 \
+  vpx_highbd_8_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x4 vpx_highbd_8_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x8 vpx_highbd_8_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance64x32 \
+  vpx_highbd_8_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance64x64 \
+  vpx_highbd_8_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x16 \
+  vpx_highbd_8_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x4 \
+  vpx_highbd_8_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x8 \
+  vpx_highbd_8_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_8_variance16x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance16x16)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance16x16 vpx_highbd_8_variance16x16_sse2
 
 unsigned int vpx_highbd_8_variance16x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance16x32)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance16x32 vpx_highbd_8_variance16x32_sse2
 
 unsigned int vpx_highbd_8_variance16x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance16x8)(const uint8_t* src_ptr,
-                                                      int source_stride,
-                                                      const uint8_t* ref_ptr,
-                                                      int ref_stride,
-                                                      unsigned int* sse);
+#define vpx_highbd_8_variance16x8 vpx_highbd_8_variance16x8_sse2
 
 unsigned int vpx_highbd_8_variance32x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance32x16)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance32x16 vpx_highbd_8_variance32x16_sse2
 
 unsigned int vpx_highbd_8_variance32x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance32x32)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance32x32 vpx_highbd_8_variance32x32_sse2
 
 unsigned int vpx_highbd_8_variance32x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x64_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance32x64)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance32x64 vpx_highbd_8_variance32x64_sse2
 
 unsigned int vpx_highbd_8_variance4x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x4 vpx_highbd_8_variance4x4_c
 
 unsigned int vpx_highbd_8_variance4x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x8 vpx_highbd_8_variance4x8_c
 
 unsigned int vpx_highbd_8_variance64x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance64x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance64x32)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance64x32 vpx_highbd_8_variance64x32_sse2
 
 unsigned int vpx_highbd_8_variance64x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance64x64_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance64x64)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance64x64 vpx_highbd_8_variance64x64_sse2
 
 unsigned int vpx_highbd_8_variance8x16_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_8_variance8x16_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance8x16)(const uint8_t* src_ptr,
-                                                      int source_stride,
-                                                      const uint8_t* ref_ptr,
-                                                      int ref_stride,
-                                                      unsigned int* sse);
+#define vpx_highbd_8_variance8x16 vpx_highbd_8_variance8x16_sse2
 
 unsigned int vpx_highbd_8_variance8x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance8x4 vpx_highbd_8_variance8x4_c
 
 unsigned int vpx_highbd_8_variance8x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 unsigned int vpx_highbd_8_variance8x8_sse2(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance8x8)(const uint8_t* src_ptr,
-                                                     int source_stride,
-                                                     const uint8_t* ref_ptr,
-                                                     int ref_stride,
-                                                     unsigned int* sse);
+#define vpx_highbd_8_variance8x8 vpx_highbd_8_variance8x8_sse2
 
-unsigned int vpx_highbd_avg_4x4_c(const uint8_t*, int p);
-unsigned int vpx_highbd_avg_4x4_sse2(const uint8_t*, int p);
-RTCD_EXTERN unsigned int (*vpx_highbd_avg_4x4)(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_4x4_c(const uint8_t* s8, int p);
+unsigned int vpx_highbd_avg_4x4_sse2(const uint8_t* s8, int p);
+#define vpx_highbd_avg_4x4 vpx_highbd_avg_4x4_sse2
 
-unsigned int vpx_highbd_avg_8x8_c(const uint8_t*, int p);
-unsigned int vpx_highbd_avg_8x8_sse2(const uint8_t*, int p);
-RTCD_EXTERN unsigned int (*vpx_highbd_avg_8x8)(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_8x8_c(const uint8_t* s8, int p);
+unsigned int vpx_highbd_avg_8x8_sse2(const uint8_t* s8, int p);
+#define vpx_highbd_avg_8x8 vpx_highbd_avg_8x8_sse2
 
 void vpx_highbd_comp_avg_pred_c(uint16_t* comp_pred,
                                 const uint16_t* pred,
@@ -3675,7 +3019,7 @@
                             int y_step_q4,
                             int w,
                             int h,
-                            int bps);
+                            int bd);
 void vpx_highbd_convolve8_avx2(const uint16_t* src,
                                ptrdiff_t src_stride,
                                uint16_t* dst,
@@ -3687,7 +3031,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8)(const uint16_t* src,
                                          ptrdiff_t src_stride,
                                          uint16_t* dst,
@@ -3699,7 +3043,7 @@
                                          int y_step_q4,
                                          int w,
                                          int h,
-                                         int bps);
+                                         int bd);
 
 void vpx_highbd_convolve8_avg_c(const uint16_t* src,
                                 ptrdiff_t src_stride,
@@ -3712,7 +3056,7 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 void vpx_highbd_convolve8_avg_avx2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3724,7 +3068,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg)(const uint16_t* src,
                                              ptrdiff_t src_stride,
                                              uint16_t* dst,
@@ -3736,7 +3080,7 @@
                                              int y_step_q4,
                                              int w,
                                              int h,
-                                             int bps);
+                                             int bd);
 
 void vpx_highbd_convolve8_avg_horiz_c(const uint16_t* src,
                                       ptrdiff_t src_stride,
@@ -3749,7 +3093,7 @@
                                       int y_step_q4,
                                       int w,
                                       int h,
-                                      int bps);
+                                      int bd);
 void vpx_highbd_convolve8_avg_horiz_avx2(const uint16_t* src,
                                          ptrdiff_t src_stride,
                                          uint16_t* dst,
@@ -3761,7 +3105,7 @@
                                          int y_step_q4,
                                          int w,
                                          int h,
-                                         int bps);
+                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg_horiz)(const uint16_t* src,
                                                    ptrdiff_t src_stride,
                                                    uint16_t* dst,
@@ -3773,7 +3117,7 @@
                                                    int y_step_q4,
                                                    int w,
                                                    int h,
-                                                   int bps);
+                                                   int bd);
 
 void vpx_highbd_convolve8_avg_vert_c(const uint16_t* src,
                                      ptrdiff_t src_stride,
@@ -3786,7 +3130,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 void vpx_highbd_convolve8_avg_vert_avx2(const uint16_t* src,
                                         ptrdiff_t src_stride,
                                         uint16_t* dst,
@@ -3798,7 +3142,7 @@
                                         int y_step_q4,
                                         int w,
                                         int h,
-                                        int bps);
+                                        int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg_vert)(const uint16_t* src,
                                                   ptrdiff_t src_stride,
                                                   uint16_t* dst,
@@ -3810,7 +3154,7 @@
                                                   int y_step_q4,
                                                   int w,
                                                   int h,
-                                                  int bps);
+                                                  int bd);
 
 void vpx_highbd_convolve8_horiz_c(const uint16_t* src,
                                   ptrdiff_t src_stride,
@@ -3823,7 +3167,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 void vpx_highbd_convolve8_horiz_avx2(const uint16_t* src,
                                      ptrdiff_t src_stride,
                                      uint16_t* dst,
@@ -3835,7 +3179,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_horiz)(const uint16_t* src,
                                                ptrdiff_t src_stride,
                                                uint16_t* dst,
@@ -3847,7 +3191,7 @@
                                                int y_step_q4,
                                                int w,
                                                int h,
-                                               int bps);
+                                               int bd);
 
 void vpx_highbd_convolve8_vert_c(const uint16_t* src,
                                  ptrdiff_t src_stride,
@@ -3860,7 +3204,7 @@
                                  int y_step_q4,
                                  int w,
                                  int h,
-                                 int bps);
+                                 int bd);
 void vpx_highbd_convolve8_vert_avx2(const uint16_t* src,
                                     ptrdiff_t src_stride,
                                     uint16_t* dst,
@@ -3872,7 +3216,7 @@
                                     int y_step_q4,
                                     int w,
                                     int h,
-                                    int bps);
+                                    int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_vert)(const uint16_t* src,
                                               ptrdiff_t src_stride,
                                               uint16_t* dst,
@@ -3884,7 +3228,7 @@
                                               int y_step_q4,
                                               int w,
                                               int h,
-                                              int bps);
+                                              int bd);
 
 void vpx_highbd_convolve_avg_c(const uint16_t* src,
                                ptrdiff_t src_stride,
@@ -3897,7 +3241,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 void vpx_highbd_convolve_avg_sse2(const uint16_t* src,
                                   ptrdiff_t src_stride,
                                   uint16_t* dst,
@@ -3909,7 +3253,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 void vpx_highbd_convolve_avg_avx2(const uint16_t* src,
                                   ptrdiff_t src_stride,
                                   uint16_t* dst,
@@ -3921,7 +3265,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve_avg)(const uint16_t* src,
                                             ptrdiff_t src_stride,
                                             uint16_t* dst,
@@ -3933,7 +3277,7 @@
                                             int y_step_q4,
                                             int w,
                                             int h,
-                                            int bps);
+                                            int bd);
 
 void vpx_highbd_convolve_copy_c(const uint16_t* src,
                                 ptrdiff_t src_stride,
@@ -3946,7 +3290,7 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 void vpx_highbd_convolve_copy_sse2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3958,7 +3302,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 void vpx_highbd_convolve_copy_avx2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3970,7 +3314,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve_copy)(const uint16_t* src,
                                              ptrdiff_t src_stride,
                                              uint16_t* dst,
@@ -3982,647 +3326,565 @@
                                              int y_step_q4,
                                              int w,
                                              int h,
-                                             int bps);
+                                             int bd);
 
 void vpx_highbd_d117_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d117_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d117_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d117_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d117_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d117_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_d117_predictor_4x4)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_d117_predictor_4x4 vpx_highbd_d117_predictor_4x4_sse2
 
 void vpx_highbd_d117_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d117_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d135_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d135_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d135_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d135_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d135_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d135_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_d135_predictor_4x4)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_d135_predictor_4x4 vpx_highbd_d135_predictor_4x4_sse2
 
 void vpx_highbd_d135_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d135_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d153_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d153_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d153_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d153_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d153_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d153_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_d153_predictor_4x4)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_d153_predictor_4x4 vpx_highbd_d153_predictor_4x4_sse2
 
 void vpx_highbd_d153_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d153_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d207_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d207_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d207_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d207_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d207_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d207_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_d207_predictor_4x4)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_d207_predictor_4x4 vpx_highbd_d207_predictor_4x4_sse2
 
 void vpx_highbd_d207_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d207_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d45_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d45_predictor_16x16_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_16x16)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d45_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d45_predictor_32x32_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_32x32)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d45_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d45_predictor_4x4_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_4x4)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_d45_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d45_predictor_8x8_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_8x8)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_d63_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d63_predictor_16x16_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_16x16)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d63_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d63_predictor_32x32_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_32x32)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d63_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d63_predictor_4x4_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_d63_predictor_4x4)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
-                                                 const uint16_t* above,
-                                                 const uint16_t* left,
-                                                 int bd);
+#define vpx_highbd_d63_predictor_4x4 vpx_highbd_d63_predictor_4x4_sse2
 
 void vpx_highbd_d63_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d63_predictor_8x8_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_8x8)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_dc_128_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_128_predictor_16x16_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_128_predictor_16x16)(uint16_t* dst,
-                                                      ptrdiff_t y_stride,
-                                                      const uint16_t* above,
-                                                      const uint16_t* left,
-                                                      int bd);
+#define vpx_highbd_dc_128_predictor_16x16 vpx_highbd_dc_128_predictor_16x16_sse2
 
 void vpx_highbd_dc_128_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_128_predictor_32x32_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_128_predictor_32x32)(uint16_t* dst,
-                                                      ptrdiff_t y_stride,
-                                                      const uint16_t* above,
-                                                      const uint16_t* left,
-                                                      int bd);
+#define vpx_highbd_dc_128_predictor_32x32 vpx_highbd_dc_128_predictor_32x32_sse2
 
 void vpx_highbd_dc_128_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_128_predictor_4x4_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_128_predictor_4x4)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
-                                                    const uint16_t* above,
-                                                    const uint16_t* left,
-                                                    int bd);
+#define vpx_highbd_dc_128_predictor_4x4 vpx_highbd_dc_128_predictor_4x4_sse2
 
 void vpx_highbd_dc_128_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_128_predictor_8x8_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_128_predictor_8x8)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
-                                                    const uint16_t* above,
-                                                    const uint16_t* left,
-                                                    int bd);
+#define vpx_highbd_dc_128_predictor_8x8 vpx_highbd_dc_128_predictor_8x8_sse2
 
 void vpx_highbd_dc_left_predictor_16x16_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 void vpx_highbd_dc_left_predictor_16x16_sse2(uint16_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint16_t* above,
                                              const uint16_t* left,
                                              int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_left_predictor_16x16)(uint16_t* dst,
-                                                       ptrdiff_t y_stride,
-                                                       const uint16_t* above,
-                                                       const uint16_t* left,
-                                                       int bd);
+#define vpx_highbd_dc_left_predictor_16x16 \
+  vpx_highbd_dc_left_predictor_16x16_sse2
 
 void vpx_highbd_dc_left_predictor_32x32_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 void vpx_highbd_dc_left_predictor_32x32_sse2(uint16_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint16_t* above,
                                              const uint16_t* left,
                                              int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_left_predictor_32x32)(uint16_t* dst,
-                                                       ptrdiff_t y_stride,
-                                                       const uint16_t* above,
-                                                       const uint16_t* left,
-                                                       int bd);
+#define vpx_highbd_dc_left_predictor_32x32 \
+  vpx_highbd_dc_left_predictor_32x32_sse2
 
 void vpx_highbd_dc_left_predictor_4x4_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 void vpx_highbd_dc_left_predictor_4x4_sse2(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_left_predictor_4x4)(uint16_t* dst,
-                                                     ptrdiff_t y_stride,
-                                                     const uint16_t* above,
-                                                     const uint16_t* left,
-                                                     int bd);
+#define vpx_highbd_dc_left_predictor_4x4 vpx_highbd_dc_left_predictor_4x4_sse2
 
 void vpx_highbd_dc_left_predictor_8x8_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 void vpx_highbd_dc_left_predictor_8x8_sse2(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_left_predictor_8x8)(uint16_t* dst,
-                                                     ptrdiff_t y_stride,
-                                                     const uint16_t* above,
-                                                     const uint16_t* left,
-                                                     int bd);
+#define vpx_highbd_dc_left_predictor_8x8 vpx_highbd_dc_left_predictor_8x8_sse2
 
 void vpx_highbd_dc_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_dc_predictor_16x16_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_predictor_16x16)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_dc_predictor_16x16 vpx_highbd_dc_predictor_16x16_sse2
 
 void vpx_highbd_dc_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_dc_predictor_32x32_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_predictor_32x32)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_dc_predictor_32x32 vpx_highbd_dc_predictor_32x32_sse2
 
 void vpx_highbd_dc_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_dc_predictor_4x4_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_predictor_4x4)(uint16_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint16_t* above,
-                                                const uint16_t* left,
-                                                int bd);
+#define vpx_highbd_dc_predictor_4x4 vpx_highbd_dc_predictor_4x4_sse2
 
 void vpx_highbd_dc_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_dc_predictor_8x8_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_predictor_8x8)(uint16_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint16_t* above,
-                                                const uint16_t* left,
-                                                int bd);
+#define vpx_highbd_dc_predictor_8x8 vpx_highbd_dc_predictor_8x8_sse2
 
 void vpx_highbd_dc_top_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_top_predictor_16x16_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_top_predictor_16x16)(uint16_t* dst,
-                                                      ptrdiff_t y_stride,
-                                                      const uint16_t* above,
-                                                      const uint16_t* left,
-                                                      int bd);
+#define vpx_highbd_dc_top_predictor_16x16 vpx_highbd_dc_top_predictor_16x16_sse2
 
 void vpx_highbd_dc_top_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_top_predictor_32x32_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_top_predictor_32x32)(uint16_t* dst,
-                                                      ptrdiff_t y_stride,
-                                                      const uint16_t* above,
-                                                      const uint16_t* left,
-                                                      int bd);
+#define vpx_highbd_dc_top_predictor_32x32 vpx_highbd_dc_top_predictor_32x32_sse2
 
 void vpx_highbd_dc_top_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_top_predictor_4x4_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_top_predictor_4x4)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
-                                                    const uint16_t* above,
-                                                    const uint16_t* left,
-                                                    int bd);
+#define vpx_highbd_dc_top_predictor_4x4 vpx_highbd_dc_top_predictor_4x4_sse2
 
 void vpx_highbd_dc_top_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_top_predictor_8x8_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_top_predictor_8x8)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
-                                                    const uint16_t* above,
-                                                    const uint16_t* left,
-                                                    int bd);
+#define vpx_highbd_dc_top_predictor_8x8 vpx_highbd_dc_top_predictor_8x8_sse2
 
 void vpx_highbd_fdct16x16_c(const int16_t* input,
                             tran_low_t* output,
@@ -4630,9 +3892,7 @@
 void vpx_highbd_fdct16x16_sse2(const int16_t* input,
                                tran_low_t* output,
                                int stride);
-RTCD_EXTERN void (*vpx_highbd_fdct16x16)(const int16_t* input,
-                                         tran_low_t* output,
-                                         int stride);
+#define vpx_highbd_fdct16x16 vpx_highbd_fdct16x16_sse2
 
 void vpx_highbd_fdct16x16_1_c(const int16_t* input,
                               tran_low_t* output,
@@ -4645,9 +3905,7 @@
 void vpx_highbd_fdct32x32_sse2(const int16_t* input,
                                tran_low_t* output,
                                int stride);
-RTCD_EXTERN void (*vpx_highbd_fdct32x32)(const int16_t* input,
-                                         tran_low_t* output,
-                                         int stride);
+#define vpx_highbd_fdct32x32 vpx_highbd_fdct32x32_sse2
 
 void vpx_highbd_fdct32x32_1_c(const int16_t* input,
                               tran_low_t* output,
@@ -4660,25 +3918,19 @@
 void vpx_highbd_fdct32x32_rd_sse2(const int16_t* input,
                                   tran_low_t* output,
                                   int stride);
-RTCD_EXTERN void (*vpx_highbd_fdct32x32_rd)(const int16_t* input,
-                                            tran_low_t* output,
-                                            int stride);
+#define vpx_highbd_fdct32x32_rd vpx_highbd_fdct32x32_rd_sse2
 
 void vpx_highbd_fdct4x4_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_highbd_fdct4x4_sse2(const int16_t* input,
                              tran_low_t* output,
                              int stride);
-RTCD_EXTERN void (*vpx_highbd_fdct4x4)(const int16_t* input,
-                                       tran_low_t* output,
-                                       int stride);
+#define vpx_highbd_fdct4x4 vpx_highbd_fdct4x4_sse2
 
 void vpx_highbd_fdct8x8_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_highbd_fdct8x8_sse2(const int16_t* input,
                              tran_low_t* output,
                              int stride);
-RTCD_EXTERN void (*vpx_highbd_fdct8x8)(const int16_t* input,
-                                       tran_low_t* output,
-                                       int stride);
+#define vpx_highbd_fdct8x8 vpx_highbd_fdct8x8_sse2
 
 void vpx_highbd_fdct8x8_1_c(const int16_t* input,
                             tran_low_t* output,
@@ -4686,68 +3938,82 @@
 #define vpx_highbd_fdct8x8_1 vpx_highbd_fdct8x8_1_c
 
 void vpx_highbd_h_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_h_predictor_16x16_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_h_predictor_16x16)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
-                                                 const uint16_t* above,
-                                                 const uint16_t* left,
-                                                 int bd);
+#define vpx_highbd_h_predictor_16x16 vpx_highbd_h_predictor_16x16_sse2
 
 void vpx_highbd_h_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_h_predictor_32x32_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_h_predictor_32x32)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
-                                                 const uint16_t* above,
-                                                 const uint16_t* left,
-                                                 int bd);
+#define vpx_highbd_h_predictor_32x32 vpx_highbd_h_predictor_32x32_sse2
 
 void vpx_highbd_h_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_h_predictor_4x4_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_h_predictor_4x4)(uint16_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint16_t* above,
-                                               const uint16_t* left,
-                                               int bd);
+#define vpx_highbd_h_predictor_4x4 vpx_highbd_h_predictor_4x4_sse2
 
 void vpx_highbd_h_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_h_predictor_8x8_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_h_predictor_8x8)(uint16_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint16_t* above,
-                                               const uint16_t* left,
-                                               int bd);
+#define vpx_highbd_h_predictor_8x8 vpx_highbd_h_predictor_8x8_sse2
+
+void vpx_highbd_hadamard_16x16_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+void vpx_highbd_hadamard_16x16_avx2(const int16_t* src_diff,
+                                    ptrdiff_t src_stride,
+                                    tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_16x16)(const int16_t* src_diff,
+                                              ptrdiff_t src_stride,
+                                              tran_low_t* coeff);
+
+void vpx_highbd_hadamard_32x32_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+void vpx_highbd_hadamard_32x32_avx2(const int16_t* src_diff,
+                                    ptrdiff_t src_stride,
+                                    tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_32x32)(const int16_t* src_diff,
+                                              ptrdiff_t src_stride,
+                                              tran_low_t* coeff);
+
+void vpx_highbd_hadamard_8x8_c(const int16_t* src_diff,
+                               ptrdiff_t src_stride,
+                               tran_low_t* coeff);
+void vpx_highbd_hadamard_8x8_avx2(const int16_t* src_diff,
+                                  ptrdiff_t src_stride,
+                                  tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_8x8)(const int16_t* src_diff,
+                                            ptrdiff_t src_stride,
+                                            tran_low_t* coeff);
 
 void vpx_highbd_idct16x16_10_add_c(const tran_low_t* input,
                                    uint16_t* dest,
@@ -4774,10 +4040,7 @@
                                      uint16_t* dest,
                                      int stride,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_idct16x16_1_add)(const tran_low_t* input,
-                                               uint16_t* dest,
-                                               int stride,
-                                               int bd);
+#define vpx_highbd_idct16x16_1_add vpx_highbd_idct16x16_1_add_sse2
 
 void vpx_highbd_idct16x16_256_add_c(const tran_low_t* input,
                                     uint16_t* dest,
@@ -4855,10 +4118,7 @@
                                      uint16_t* dest,
                                      int stride,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_idct32x32_1_add)(const tran_low_t* input,
-                                               uint16_t* dest,
-                                               int stride,
-                                               int bd);
+#define vpx_highbd_idct32x32_1_add vpx_highbd_idct32x32_1_add_sse2
 
 void vpx_highbd_idct32x32_34_add_c(const tran_low_t* input,
                                    uint16_t* dest,
@@ -4902,10 +4162,7 @@
                                    uint16_t* dest,
                                    int stride,
                                    int bd);
-RTCD_EXTERN void (*vpx_highbd_idct4x4_1_add)(const tran_low_t* input,
-                                             uint16_t* dest,
-                                             int stride,
-                                             int bd);
+#define vpx_highbd_idct4x4_1_add vpx_highbd_idct4x4_1_add_sse2
 
 void vpx_highbd_idct8x8_12_add_c(const tran_low_t* input,
                                  uint16_t* dest,
@@ -4932,10 +4189,7 @@
                                    uint16_t* dest,
                                    int stride,
                                    int bd);
-RTCD_EXTERN void (*vpx_highbd_idct8x8_1_add)(const tran_low_t* input,
-                                             uint16_t* dest,
-                                             int stride,
-                                             int bd);
+#define vpx_highbd_idct8x8_1_add vpx_highbd_idct8x8_1_add_sse2
 
 void vpx_highbd_idct8x8_64_add_c(const tran_low_t* input,
                                  uint16_t* dest,
@@ -4978,12 +4232,7 @@
                                        const uint8_t* limit,
                                        const uint8_t* thresh,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_16)(uint16_t* s,
-                                                 int pitch,
-                                                 const uint8_t* blimit,
-                                                 const uint8_t* limit,
-                                                 const uint8_t* thresh,
-                                                 int bd);
+#define vpx_highbd_lpf_horizontal_16 vpx_highbd_lpf_horizontal_16_sse2
 
 void vpx_highbd_lpf_horizontal_16_dual_c(uint16_t* s,
                                          int pitch,
@@ -4997,12 +4246,7 @@
                                             const uint8_t* limit,
                                             const uint8_t* thresh,
                                             int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_16_dual)(uint16_t* s,
-                                                      int pitch,
-                                                      const uint8_t* blimit,
-                                                      const uint8_t* limit,
-                                                      const uint8_t* thresh,
-                                                      int bd);
+#define vpx_highbd_lpf_horizontal_16_dual vpx_highbd_lpf_horizontal_16_dual_sse2
 
 void vpx_highbd_lpf_horizontal_4_c(uint16_t* s,
                                    int pitch,
@@ -5016,12 +4260,7 @@
                                       const uint8_t* limit,
                                       const uint8_t* thresh,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_4)(uint16_t* s,
-                                                int pitch,
-                                                const uint8_t* blimit,
-                                                const uint8_t* limit,
-                                                const uint8_t* thresh,
-                                                int bd);
+#define vpx_highbd_lpf_horizontal_4 vpx_highbd_lpf_horizontal_4_sse2
 
 void vpx_highbd_lpf_horizontal_4_dual_c(uint16_t* s,
                                         int pitch,
@@ -5041,15 +4280,7 @@
                                            const uint8_t* limit1,
                                            const uint8_t* thresh1,
                                            int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_4_dual)(uint16_t* s,
-                                                     int pitch,
-                                                     const uint8_t* blimit0,
-                                                     const uint8_t* limit0,
-                                                     const uint8_t* thresh0,
-                                                     const uint8_t* blimit1,
-                                                     const uint8_t* limit1,
-                                                     const uint8_t* thresh1,
-                                                     int bd);
+#define vpx_highbd_lpf_horizontal_4_dual vpx_highbd_lpf_horizontal_4_dual_sse2
 
 void vpx_highbd_lpf_horizontal_8_c(uint16_t* s,
                                    int pitch,
@@ -5063,12 +4294,7 @@
                                       const uint8_t* limit,
                                       const uint8_t* thresh,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_8)(uint16_t* s,
-                                                int pitch,
-                                                const uint8_t* blimit,
-                                                const uint8_t* limit,
-                                                const uint8_t* thresh,
-                                                int bd);
+#define vpx_highbd_lpf_horizontal_8 vpx_highbd_lpf_horizontal_8_sse2
 
 void vpx_highbd_lpf_horizontal_8_dual_c(uint16_t* s,
                                         int pitch,
@@ -5088,15 +4314,7 @@
                                            const uint8_t* limit1,
                                            const uint8_t* thresh1,
                                            int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_8_dual)(uint16_t* s,
-                                                     int pitch,
-                                                     const uint8_t* blimit0,
-                                                     const uint8_t* limit0,
-                                                     const uint8_t* thresh0,
-                                                     const uint8_t* blimit1,
-                                                     const uint8_t* limit1,
-                                                     const uint8_t* thresh1,
-                                                     int bd);
+#define vpx_highbd_lpf_horizontal_8_dual vpx_highbd_lpf_horizontal_8_dual_sse2
 
 void vpx_highbd_lpf_vertical_16_c(uint16_t* s,
                                   int pitch,
@@ -5110,12 +4328,7 @@
                                      const uint8_t* limit,
                                      const uint8_t* thresh,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_16)(uint16_t* s,
-                                               int pitch,
-                                               const uint8_t* blimit,
-                                               const uint8_t* limit,
-                                               const uint8_t* thresh,
-                                               int bd);
+#define vpx_highbd_lpf_vertical_16 vpx_highbd_lpf_vertical_16_sse2
 
 void vpx_highbd_lpf_vertical_16_dual_c(uint16_t* s,
                                        int pitch,
@@ -5129,12 +4342,7 @@
                                           const uint8_t* limit,
                                           const uint8_t* thresh,
                                           int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_16_dual)(uint16_t* s,
-                                                    int pitch,
-                                                    const uint8_t* blimit,
-                                                    const uint8_t* limit,
-                                                    const uint8_t* thresh,
-                                                    int bd);
+#define vpx_highbd_lpf_vertical_16_dual vpx_highbd_lpf_vertical_16_dual_sse2
 
 void vpx_highbd_lpf_vertical_4_c(uint16_t* s,
                                  int pitch,
@@ -5148,12 +4356,7 @@
                                     const uint8_t* limit,
                                     const uint8_t* thresh,
                                     int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_4)(uint16_t* s,
-                                              int pitch,
-                                              const uint8_t* blimit,
-                                              const uint8_t* limit,
-                                              const uint8_t* thresh,
-                                              int bd);
+#define vpx_highbd_lpf_vertical_4 vpx_highbd_lpf_vertical_4_sse2
 
 void vpx_highbd_lpf_vertical_4_dual_c(uint16_t* s,
                                       int pitch,
@@ -5173,15 +4376,7 @@
                                          const uint8_t* limit1,
                                          const uint8_t* thresh1,
                                          int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_4_dual)(uint16_t* s,
-                                                   int pitch,
-                                                   const uint8_t* blimit0,
-                                                   const uint8_t* limit0,
-                                                   const uint8_t* thresh0,
-                                                   const uint8_t* blimit1,
-                                                   const uint8_t* limit1,
-                                                   const uint8_t* thresh1,
-                                                   int bd);
+#define vpx_highbd_lpf_vertical_4_dual vpx_highbd_lpf_vertical_4_dual_sse2
 
 void vpx_highbd_lpf_vertical_8_c(uint16_t* s,
                                  int pitch,
@@ -5195,12 +4390,7 @@
                                     const uint8_t* limit,
                                     const uint8_t* thresh,
                                     int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_8)(uint16_t* s,
-                                              int pitch,
-                                              const uint8_t* blimit,
-                                              const uint8_t* limit,
-                                              const uint8_t* thresh,
-                                              int bd);
+#define vpx_highbd_lpf_vertical_8 vpx_highbd_lpf_vertical_8_sse2
 
 void vpx_highbd_lpf_vertical_8_dual_c(uint16_t* s,
                                       int pitch,
@@ -5220,19 +4410,11 @@
                                          const uint8_t* limit1,
                                          const uint8_t* thresh1,
                                          int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_8_dual)(uint16_t* s,
-                                                   int pitch,
-                                                   const uint8_t* blimit0,
-                                                   const uint8_t* limit0,
-                                                   const uint8_t* thresh0,
-                                                   const uint8_t* blimit1,
-                                                   const uint8_t* limit1,
-                                                   const uint8_t* thresh1,
-                                                   int bd);
+#define vpx_highbd_lpf_vertical_8_dual vpx_highbd_lpf_vertical_8_dual_sse2
 
-void vpx_highbd_minmax_8x8_c(const uint8_t* s,
+void vpx_highbd_minmax_8x8_c(const uint8_t* s8,
                              int p,
-                             const uint8_t* d,
+                             const uint8_t* d8,
                              int dp,
                              int* min,
                              int* max);
@@ -5264,19 +4446,7 @@
                                 uint16_t* eob_ptr,
                                 const int16_t* scan,
                                 const int16_t* iscan);
-RTCD_EXTERN void (*vpx_highbd_quantize_b)(const tran_low_t* coeff_ptr,
-                                          intptr_t n_coeffs,
-                                          int skip_block,
-                                          const int16_t* zbin_ptr,
-                                          const int16_t* round_ptr,
-                                          const int16_t* quant_ptr,
-                                          const int16_t* quant_shift_ptr,
-                                          tran_low_t* qcoeff_ptr,
-                                          tran_low_t* dqcoeff_ptr,
-                                          const int16_t* dequant_ptr,
-                                          uint16_t* eob_ptr,
-                                          const int16_t* scan,
-                                          const int16_t* iscan);
+#define vpx_highbd_quantize_b vpx_highbd_quantize_b_sse2
 
 void vpx_highbd_quantize_b_32x32_c(const tran_low_t* coeff_ptr,
                                    intptr_t n_coeffs,
@@ -5304,19 +4474,7 @@
                                       uint16_t* eob_ptr,
                                       const int16_t* scan,
                                       const int16_t* iscan);
-RTCD_EXTERN void (*vpx_highbd_quantize_b_32x32)(const tran_low_t* coeff_ptr,
-                                                intptr_t n_coeffs,
-                                                int skip_block,
-                                                const int16_t* zbin_ptr,
-                                                const int16_t* round_ptr,
-                                                const int16_t* quant_ptr,
-                                                const int16_t* quant_shift_ptr,
-                                                tran_low_t* qcoeff_ptr,
-                                                tran_low_t* dqcoeff_ptr,
-                                                const int16_t* dequant_ptr,
-                                                uint16_t* eob_ptr,
-                                                const int16_t* scan,
-                                                const int16_t* iscan);
+#define vpx_highbd_quantize_b_32x32 vpx_highbd_quantize_b_32x32_sse2
 
 unsigned int vpx_highbd_sad16x16_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5326,10 +4484,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x16)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad16x16 vpx_highbd_sad16x16_sse2
 
 unsigned int vpx_highbd_sad16x16_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5341,27 +4496,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x16_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad16x16_avg vpx_highbd_sad16x16_avg_sse2
 
 void vpx_highbd_sad16x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad16x16x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad16x16x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad16x16x4d vpx_highbd_sad16x16x4d_sse2
 
 unsigned int vpx_highbd_sad16x32_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5371,10 +4518,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x32)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad16x32 vpx_highbd_sad16x32_sse2
 
 unsigned int vpx_highbd_sad16x32_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5386,27 +4530,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x32_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad16x32_avg vpx_highbd_sad16x32_avg_sse2
 
 void vpx_highbd_sad16x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad16x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad16x32x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad16x32x4d vpx_highbd_sad16x32x4d_sse2
 
 unsigned int vpx_highbd_sad16x8_c(const uint8_t* src_ptr,
                                   int src_stride,
@@ -5416,10 +4552,7 @@
                                      int src_stride,
                                      const uint8_t* ref_ptr,
                                      int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x8)(const uint8_t* src_ptr,
-                                               int src_stride,
-                                               const uint8_t* ref_ptr,
-                                               int ref_stride);
+#define vpx_highbd_sad16x8 vpx_highbd_sad16x8_sse2
 
 unsigned int vpx_highbd_sad16x8_avg_c(const uint8_t* src_ptr,
                                       int src_stride,
@@ -5431,27 +4564,19 @@
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x8_avg)(const uint8_t* src_ptr,
-                                                   int src_stride,
-                                                   const uint8_t* ref_ptr,
-                                                   int ref_stride,
-                                                   const uint8_t* second_pred);
+#define vpx_highbd_sad16x8_avg vpx_highbd_sad16x8_avg_sse2
 
 void vpx_highbd_sad16x8x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 void vpx_highbd_sad16x8x4d_sse2(const uint8_t* src_ptr,
                                 int src_stride,
-                                const uint8_t* const ref_ptr[],
+                                const uint8_t* const ref_array[],
                                 int ref_stride,
                                 uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad16x8x4d)(const uint8_t* src_ptr,
-                                          int src_stride,
-                                          const uint8_t* const ref_ptr[],
-                                          int ref_stride,
-                                          uint32_t* sad_array);
+#define vpx_highbd_sad16x8x4d vpx_highbd_sad16x8x4d_sse2
 
 unsigned int vpx_highbd_sad32x16_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5461,10 +4586,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x16)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad32x16 vpx_highbd_sad32x16_sse2
 
 unsigned int vpx_highbd_sad32x16_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5476,27 +4598,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x16_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad32x16_avg vpx_highbd_sad32x16_avg_sse2
 
 void vpx_highbd_sad32x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x16x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad32x16x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad32x16x4d vpx_highbd_sad32x16x4d_sse2
 
 unsigned int vpx_highbd_sad32x32_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5506,10 +4620,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x32)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad32x32 vpx_highbd_sad32x32_sse2
 
 unsigned int vpx_highbd_sad32x32_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5521,27 +4632,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x32_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad32x32_avg vpx_highbd_sad32x32_avg_sse2
 
 void vpx_highbd_sad32x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad32x32x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad32x32x4d vpx_highbd_sad32x32x4d_sse2
 
 unsigned int vpx_highbd_sad32x64_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5551,10 +4654,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x64)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad32x64 vpx_highbd_sad32x64_sse2
 
 unsigned int vpx_highbd_sad32x64_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5566,27 +4666,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x64_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad32x64_avg vpx_highbd_sad32x64_avg_sse2
 
 void vpx_highbd_sad32x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x64x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad32x64x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad32x64x4d vpx_highbd_sad32x64x4d_sse2
 
 unsigned int vpx_highbd_sad4x4_c(const uint8_t* src_ptr,
                                  int src_stride,
@@ -5603,19 +4695,15 @@
 
 void vpx_highbd_sad4x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad4x4x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad4x4x4d)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* const ref_ptr[],
-                                         int ref_stride,
-                                         uint32_t* sad_array);
+#define vpx_highbd_sad4x4x4d vpx_highbd_sad4x4x4d_sse2
 
 unsigned int vpx_highbd_sad4x8_c(const uint8_t* src_ptr,
                                  int src_stride,
@@ -5632,19 +4720,15 @@
 
 void vpx_highbd_sad4x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad4x8x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad4x8x4d)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* const ref_ptr[],
-                                         int ref_stride,
-                                         uint32_t* sad_array);
+#define vpx_highbd_sad4x8x4d vpx_highbd_sad4x8x4d_sse2
 
 unsigned int vpx_highbd_sad64x32_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5654,10 +4738,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad64x32)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad64x32 vpx_highbd_sad64x32_sse2
 
 unsigned int vpx_highbd_sad64x32_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5669,27 +4750,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad64x32_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad64x32_avg vpx_highbd_sad64x32_avg_sse2
 
 void vpx_highbd_sad64x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad64x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad64x32x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad64x32x4d vpx_highbd_sad64x32x4d_sse2
 
 unsigned int vpx_highbd_sad64x64_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5699,10 +4772,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad64x64)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad64x64 vpx_highbd_sad64x64_sse2
 
 unsigned int vpx_highbd_sad64x64_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5714,27 +4784,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad64x64_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad64x64_avg vpx_highbd_sad64x64_avg_sse2
 
 void vpx_highbd_sad64x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad64x64x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad64x64x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad64x64x4d vpx_highbd_sad64x64x4d_sse2
 
 unsigned int vpx_highbd_sad8x16_c(const uint8_t* src_ptr,
                                   int src_stride,
@@ -5744,10 +4806,7 @@
                                      int src_stride,
                                      const uint8_t* ref_ptr,
                                      int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x16)(const uint8_t* src_ptr,
-                                               int src_stride,
-                                               const uint8_t* ref_ptr,
-                                               int ref_stride);
+#define vpx_highbd_sad8x16 vpx_highbd_sad8x16_sse2
 
 unsigned int vpx_highbd_sad8x16_avg_c(const uint8_t* src_ptr,
                                       int src_stride,
@@ -5759,27 +4818,19 @@
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x16_avg)(const uint8_t* src_ptr,
-                                                   int src_stride,
-                                                   const uint8_t* ref_ptr,
-                                                   int ref_stride,
-                                                   const uint8_t* second_pred);
+#define vpx_highbd_sad8x16_avg vpx_highbd_sad8x16_avg_sse2
 
 void vpx_highbd_sad8x16x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 void vpx_highbd_sad8x16x4d_sse2(const uint8_t* src_ptr,
                                 int src_stride,
-                                const uint8_t* const ref_ptr[],
+                                const uint8_t* const ref_array[],
                                 int ref_stride,
                                 uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad8x16x4d)(const uint8_t* src_ptr,
-                                          int src_stride,
-                                          const uint8_t* const ref_ptr[],
-                                          int ref_stride,
-                                          uint32_t* sad_array);
+#define vpx_highbd_sad8x16x4d vpx_highbd_sad8x16x4d_sse2
 
 unsigned int vpx_highbd_sad8x4_c(const uint8_t* src_ptr,
                                  int src_stride,
@@ -5789,10 +4840,7 @@
                                     int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x4)(const uint8_t* src_ptr,
-                                              int src_stride,
-                                              const uint8_t* ref_ptr,
-                                              int ref_stride);
+#define vpx_highbd_sad8x4 vpx_highbd_sad8x4_sse2
 
 unsigned int vpx_highbd_sad8x4_avg_c(const uint8_t* src_ptr,
                                      int src_stride,
@@ -5804,27 +4852,19 @@
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x4_avg)(const uint8_t* src_ptr,
-                                                  int src_stride,
-                                                  const uint8_t* ref_ptr,
-                                                  int ref_stride,
-                                                  const uint8_t* second_pred);
+#define vpx_highbd_sad8x4_avg vpx_highbd_sad8x4_avg_sse2
 
 void vpx_highbd_sad8x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad8x4x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad8x4x4d)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* const ref_ptr[],
-                                         int ref_stride,
-                                         uint32_t* sad_array);
+#define vpx_highbd_sad8x4x4d vpx_highbd_sad8x4x4d_sse2
 
 unsigned int vpx_highbd_sad8x8_c(const uint8_t* src_ptr,
                                  int src_stride,
@@ -5834,10 +4874,7 @@
                                     int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x8)(const uint8_t* src_ptr,
-                                              int src_stride,
-                                              const uint8_t* ref_ptr,
-                                              int ref_stride);
+#define vpx_highbd_sad8x8 vpx_highbd_sad8x8_sse2
 
 unsigned int vpx_highbd_sad8x8_avg_c(const uint8_t* src_ptr,
                                      int src_stride,
@@ -5849,182 +4886,142 @@
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x8_avg)(const uint8_t* src_ptr,
-                                                  int src_stride,
-                                                  const uint8_t* ref_ptr,
-                                                  int ref_stride,
-                                                  const uint8_t* second_pred);
+#define vpx_highbd_sad8x8_avg vpx_highbd_sad8x8_avg_sse2
 
 void vpx_highbd_sad8x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad8x8x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad8x8x4d)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* const ref_ptr[],
-                                         int ref_stride,
-                                         uint32_t* sad_array);
+#define vpx_highbd_sad8x8x4d vpx_highbd_sad8x8x4d_sse2
+
+int vpx_highbd_satd_c(const tran_low_t* coeff, int length);
+int vpx_highbd_satd_avx2(const tran_low_t* coeff, int length);
+RTCD_EXTERN int (*vpx_highbd_satd)(const tran_low_t* coeff, int length);
 
 void vpx_highbd_subtract_block_c(int rows,
                                  int cols,
                                  int16_t* diff_ptr,
                                  ptrdiff_t diff_stride,
-                                 const uint8_t* src_ptr,
+                                 const uint8_t* src8_ptr,
                                  ptrdiff_t src_stride,
-                                 const uint8_t* pred_ptr,
+                                 const uint8_t* pred8_ptr,
                                  ptrdiff_t pred_stride,
                                  int bd);
 #define vpx_highbd_subtract_block vpx_highbd_subtract_block_c
 
 void vpx_highbd_tm_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_tm_predictor_16x16_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_tm_predictor_16x16)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_tm_predictor_16x16 vpx_highbd_tm_predictor_16x16_sse2
 
 void vpx_highbd_tm_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_tm_predictor_32x32_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_tm_predictor_32x32)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_tm_predictor_32x32 vpx_highbd_tm_predictor_32x32_sse2
 
 void vpx_highbd_tm_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_tm_predictor_4x4_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_tm_predictor_4x4)(uint16_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint16_t* above,
-                                                const uint16_t* left,
-                                                int bd);
+#define vpx_highbd_tm_predictor_4x4 vpx_highbd_tm_predictor_4x4_sse2
 
 void vpx_highbd_tm_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_tm_predictor_8x8_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_tm_predictor_8x8)(uint16_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint16_t* above,
-                                                const uint16_t* left,
-                                                int bd);
+#define vpx_highbd_tm_predictor_8x8 vpx_highbd_tm_predictor_8x8_sse2
 
 void vpx_highbd_v_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_v_predictor_16x16_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_v_predictor_16x16)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
-                                                 const uint16_t* above,
-                                                 const uint16_t* left,
-                                                 int bd);
+#define vpx_highbd_v_predictor_16x16 vpx_highbd_v_predictor_16x16_sse2
 
 void vpx_highbd_v_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_v_predictor_32x32_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_v_predictor_32x32)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
-                                                 const uint16_t* above,
-                                                 const uint16_t* left,
-                                                 int bd);
+#define vpx_highbd_v_predictor_32x32 vpx_highbd_v_predictor_32x32_sse2
 
 void vpx_highbd_v_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_v_predictor_4x4_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_v_predictor_4x4)(uint16_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint16_t* above,
-                                               const uint16_t* left,
-                                               int bd);
+#define vpx_highbd_v_predictor_4x4 vpx_highbd_v_predictor_4x4_sse2
 
 void vpx_highbd_v_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_v_predictor_8x8_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_v_predictor_8x8)(uint16_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint16_t* above,
-                                               const uint16_t* left,
-                                               int bd);
+#define vpx_highbd_v_predictor_8x8 vpx_highbd_v_predictor_8x8_sse2
 
 void vpx_idct16x16_10_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct16x16_10_add_sse2(const tran_low_t* input,
                                uint8_t* dest,
                                int stride);
-RTCD_EXTERN void (*vpx_idct16x16_10_add)(const tran_low_t* input,
-                                         uint8_t* dest,
-                                         int stride);
+#define vpx_idct16x16_10_add vpx_idct16x16_10_add_sse2
 
 void vpx_idct16x16_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct16x16_1_add_sse2(const tran_low_t* input,
                               uint8_t* dest,
                               int stride);
-RTCD_EXTERN void (*vpx_idct16x16_1_add)(const tran_low_t* input,
-                                        uint8_t* dest,
-                                        int stride);
+#define vpx_idct16x16_1_add vpx_idct16x16_1_add_sse2
 
 void vpx_idct16x16_256_add_c(const tran_low_t* input,
                              uint8_t* dest,
@@ -6032,17 +5029,13 @@
 void vpx_idct16x16_256_add_sse2(const tran_low_t* input,
                                 uint8_t* dest,
                                 int stride);
-RTCD_EXTERN void (*vpx_idct16x16_256_add)(const tran_low_t* input,
-                                          uint8_t* dest,
-                                          int stride);
+#define vpx_idct16x16_256_add vpx_idct16x16_256_add_sse2
 
 void vpx_idct16x16_38_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct16x16_38_add_sse2(const tran_low_t* input,
                                uint8_t* dest,
                                int stride);
-RTCD_EXTERN void (*vpx_idct16x16_38_add)(const tran_low_t* input,
-                                         uint8_t* dest,
-                                         int stride);
+#define vpx_idct16x16_38_add vpx_idct16x16_38_add_sse2
 
 void vpx_idct32x32_1024_add_c(const tran_low_t* input,
                               uint8_t* dest,
@@ -6050,9 +5043,7 @@
 void vpx_idct32x32_1024_add_sse2(const tran_low_t* input,
                                  uint8_t* dest,
                                  int stride);
-RTCD_EXTERN void (*vpx_idct32x32_1024_add)(const tran_low_t* input,
-                                           uint8_t* dest,
-                                           int stride);
+#define vpx_idct32x32_1024_add vpx_idct32x32_1024_add_sse2
 
 void vpx_idct32x32_135_add_c(const tran_low_t* input,
                              uint8_t* dest,
@@ -6071,9 +5062,7 @@
 void vpx_idct32x32_1_add_sse2(const tran_low_t* input,
                               uint8_t* dest,
                               int stride);
-RTCD_EXTERN void (*vpx_idct32x32_1_add)(const tran_low_t* input,
-                                        uint8_t* dest,
-                                        int stride);
+#define vpx_idct32x32_1_add vpx_idct32x32_1_add_sse2
 
 void vpx_idct32x32_34_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct32x32_34_add_sse2(const tran_low_t* input,
@@ -6090,15 +5079,11 @@
 void vpx_idct4x4_16_add_sse2(const tran_low_t* input,
                              uint8_t* dest,
                              int stride);
-RTCD_EXTERN void (*vpx_idct4x4_16_add)(const tran_low_t* input,
-                                       uint8_t* dest,
-                                       int stride);
+#define vpx_idct4x4_16_add vpx_idct4x4_16_add_sse2
 
 void vpx_idct4x4_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct4x4_1_add_sse2(const tran_low_t* input, uint8_t* dest, int stride);
-RTCD_EXTERN void (*vpx_idct4x4_1_add)(const tran_low_t* input,
-                                      uint8_t* dest,
-                                      int stride);
+#define vpx_idct4x4_1_add vpx_idct4x4_1_add_sse2
 
 void vpx_idct8x8_12_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct8x8_12_add_sse2(const tran_low_t* input,
@@ -6113,21 +5098,17 @@
 
 void vpx_idct8x8_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct8x8_1_add_sse2(const tran_low_t* input, uint8_t* dest, int stride);
-RTCD_EXTERN void (*vpx_idct8x8_1_add)(const tran_low_t* input,
-                                      uint8_t* dest,
-                                      int stride);
+#define vpx_idct8x8_1_add vpx_idct8x8_1_add_sse2
 
 void vpx_idct8x8_64_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct8x8_64_add_sse2(const tran_low_t* input,
                              uint8_t* dest,
                              int stride);
-RTCD_EXTERN void (*vpx_idct8x8_64_add)(const tran_low_t* input,
-                                       uint8_t* dest,
-                                       int stride);
+#define vpx_idct8x8_64_add vpx_idct8x8_64_add_sse2
 
 int16_t vpx_int_pro_col_c(const uint8_t* ref, const int width);
 int16_t vpx_int_pro_col_sse2(const uint8_t* ref, const int width);
-RTCD_EXTERN int16_t (*vpx_int_pro_col)(const uint8_t* ref, const int width);
+#define vpx_int_pro_col vpx_int_pro_col_sse2
 
 void vpx_int_pro_row_c(int16_t* hbuf,
                        const uint8_t* ref,
@@ -6137,18 +5118,13 @@
                           const uint8_t* ref,
                           const int ref_stride,
                           const int height);
-RTCD_EXTERN void (*vpx_int_pro_row)(int16_t* hbuf,
-                                    const uint8_t* ref,
-                                    const int ref_stride,
-                                    const int height);
+#define vpx_int_pro_row vpx_int_pro_row_sse2
 
 void vpx_iwht4x4_16_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_iwht4x4_16_add_sse2(const tran_low_t* input,
                              uint8_t* dest,
                              int stride);
-RTCD_EXTERN void (*vpx_iwht4x4_16_add)(const tran_low_t* input,
-                                       uint8_t* dest,
-                                       int stride);
+#define vpx_iwht4x4_16_add vpx_iwht4x4_16_add_sse2
 
 void vpx_iwht4x4_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 #define vpx_iwht4x4_1_add vpx_iwht4x4_1_add_c
@@ -6205,11 +5181,7 @@
                                const uint8_t* blimit,
                                const uint8_t* limit,
                                const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_horizontal_4)(uint8_t* s,
-                                         int pitch,
-                                         const uint8_t* blimit,
-                                         const uint8_t* limit,
-                                         const uint8_t* thresh);
+#define vpx_lpf_horizontal_4 vpx_lpf_horizontal_4_sse2
 
 void vpx_lpf_horizontal_4_dual_c(uint8_t* s,
                                  int pitch,
@@ -6227,14 +5199,7 @@
                                     const uint8_t* blimit1,
                                     const uint8_t* limit1,
                                     const uint8_t* thresh1);
-RTCD_EXTERN void (*vpx_lpf_horizontal_4_dual)(uint8_t* s,
-                                              int pitch,
-                                              const uint8_t* blimit0,
-                                              const uint8_t* limit0,
-                                              const uint8_t* thresh0,
-                                              const uint8_t* blimit1,
-                                              const uint8_t* limit1,
-                                              const uint8_t* thresh1);
+#define vpx_lpf_horizontal_4_dual vpx_lpf_horizontal_4_dual_sse2
 
 void vpx_lpf_horizontal_8_c(uint8_t* s,
                             int pitch,
@@ -6246,11 +5211,7 @@
                                const uint8_t* blimit,
                                const uint8_t* limit,
                                const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_horizontal_8)(uint8_t* s,
-                                         int pitch,
-                                         const uint8_t* blimit,
-                                         const uint8_t* limit,
-                                         const uint8_t* thresh);
+#define vpx_lpf_horizontal_8 vpx_lpf_horizontal_8_sse2
 
 void vpx_lpf_horizontal_8_dual_c(uint8_t* s,
                                  int pitch,
@@ -6268,14 +5229,7 @@
                                     const uint8_t* blimit1,
                                     const uint8_t* limit1,
                                     const uint8_t* thresh1);
-RTCD_EXTERN void (*vpx_lpf_horizontal_8_dual)(uint8_t* s,
-                                              int pitch,
-                                              const uint8_t* blimit0,
-                                              const uint8_t* limit0,
-                                              const uint8_t* thresh0,
-                                              const uint8_t* blimit1,
-                                              const uint8_t* limit1,
-                                              const uint8_t* thresh1);
+#define vpx_lpf_horizontal_8_dual vpx_lpf_horizontal_8_dual_sse2
 
 void vpx_lpf_vertical_16_c(uint8_t* s,
                            int pitch,
@@ -6287,11 +5241,7 @@
                               const uint8_t* blimit,
                               const uint8_t* limit,
                               const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_vertical_16)(uint8_t* s,
-                                        int pitch,
-                                        const uint8_t* blimit,
-                                        const uint8_t* limit,
-                                        const uint8_t* thresh);
+#define vpx_lpf_vertical_16 vpx_lpf_vertical_16_sse2
 
 void vpx_lpf_vertical_16_dual_c(uint8_t* s,
                                 int pitch,
@@ -6303,11 +5253,7 @@
                                    const uint8_t* blimit,
                                    const uint8_t* limit,
                                    const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_vertical_16_dual)(uint8_t* s,
-                                             int pitch,
-                                             const uint8_t* blimit,
-                                             const uint8_t* limit,
-                                             const uint8_t* thresh);
+#define vpx_lpf_vertical_16_dual vpx_lpf_vertical_16_dual_sse2
 
 void vpx_lpf_vertical_4_c(uint8_t* s,
                           int pitch,
@@ -6319,11 +5265,7 @@
                              const uint8_t* blimit,
                              const uint8_t* limit,
                              const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_vertical_4)(uint8_t* s,
-                                       int pitch,
-                                       const uint8_t* blimit,
-                                       const uint8_t* limit,
-                                       const uint8_t* thresh);
+#define vpx_lpf_vertical_4 vpx_lpf_vertical_4_sse2
 
 void vpx_lpf_vertical_4_dual_c(uint8_t* s,
                                int pitch,
@@ -6341,14 +5283,7 @@
                                   const uint8_t* blimit1,
                                   const uint8_t* limit1,
                                   const uint8_t* thresh1);
-RTCD_EXTERN void (*vpx_lpf_vertical_4_dual)(uint8_t* s,
-                                            int pitch,
-                                            const uint8_t* blimit0,
-                                            const uint8_t* limit0,
-                                            const uint8_t* thresh0,
-                                            const uint8_t* blimit1,
-                                            const uint8_t* limit1,
-                                            const uint8_t* thresh1);
+#define vpx_lpf_vertical_4_dual vpx_lpf_vertical_4_dual_sse2
 
 void vpx_lpf_vertical_8_c(uint8_t* s,
                           int pitch,
@@ -6360,11 +5295,7 @@
                              const uint8_t* blimit,
                              const uint8_t* limit,
                              const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_vertical_8)(uint8_t* s,
-                                       int pitch,
-                                       const uint8_t* blimit,
-                                       const uint8_t* limit,
-                                       const uint8_t* thresh);
+#define vpx_lpf_vertical_8 vpx_lpf_vertical_8_sse2
 
 void vpx_lpf_vertical_8_dual_c(uint8_t* s,
                                int pitch,
@@ -6382,30 +5313,19 @@
                                   const uint8_t* blimit1,
                                   const uint8_t* limit1,
                                   const uint8_t* thresh1);
-RTCD_EXTERN void (*vpx_lpf_vertical_8_dual)(uint8_t* s,
-                                            int pitch,
-                                            const uint8_t* blimit0,
-                                            const uint8_t* limit0,
-                                            const uint8_t* thresh0,
-                                            const uint8_t* blimit1,
-                                            const uint8_t* limit1,
-                                            const uint8_t* thresh1);
+#define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_sse2
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
                                  int flimit);
-void vpx_mbpost_proc_across_ip_sse2(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_sse2(unsigned char* src,
                                     int pitch,
                                     int rows,
                                     int cols,
                                     int flimit);
-RTCD_EXTERN void (*vpx_mbpost_proc_across_ip)(unsigned char* dst,
-                                              int pitch,
-                                              int rows,
-                                              int cols,
-                                              int flimit);
+#define vpx_mbpost_proc_across_ip vpx_mbpost_proc_across_ip_sse2
 
 void vpx_mbpost_proc_down_c(unsigned char* dst,
                             int pitch,
@@ -6417,11 +5337,7 @@
                                int rows,
                                int cols,
                                int flimit);
-RTCD_EXTERN void (*vpx_mbpost_proc_down)(unsigned char* dst,
-                                         int pitch,
-                                         int rows,
-                                         int cols,
-                                         int flimit);
+#define vpx_mbpost_proc_down vpx_mbpost_proc_down_sse2
 
 void vpx_minmax_8x8_c(const uint8_t* s,
                       int p,
@@ -6435,86 +5351,73 @@
                          int dp,
                          int* min,
                          int* max);
-RTCD_EXTERN void (*vpx_minmax_8x8)(const uint8_t* s,
-                                   int p,
-                                   const uint8_t* d,
-                                   int dp,
-                                   int* min,
-                                   int* max);
+#define vpx_minmax_8x8 vpx_minmax_8x8_sse2
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 unsigned int vpx_mse16x16_sse2(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_mse16x16_avx2(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_mse16x16)(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 unsigned int vpx_mse16x8_sse2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
 unsigned int vpx_mse16x8_avx2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_mse16x8)(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
-                                        int recon_stride,
+                                        int ref_stride,
                                         unsigned int* sse);
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 unsigned int vpx_mse8x16_sse2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_mse8x16)(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        const uint8_t* ref_ptr,
-                                        int recon_stride,
-                                        unsigned int* sse);
+#define vpx_mse8x16 vpx_mse8x16_sse2
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 unsigned int vpx_mse8x8_sse2(const uint8_t* src_ptr,
-                             int source_stride,
+                             int src_stride,
                              const uint8_t* ref_ptr,
-                             int recon_stride,
+                             int ref_stride,
                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_mse8x8)(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       const uint8_t* ref_ptr,
-                                       int recon_stride,
-                                       unsigned int* sse);
+#define vpx_mse8x8 vpx_mse8x8_sse2
 
 void vpx_plane_add_noise_c(uint8_t* start,
                            const int8_t* noise,
@@ -6530,13 +5433,7 @@
                               int width,
                               int height,
                               int pitch);
-RTCD_EXTERN void (*vpx_plane_add_noise)(uint8_t* start,
-                                        const int8_t* noise,
-                                        int blackclamp,
-                                        int whiteclamp,
-                                        int width,
-                                        int height,
-                                        int pitch);
+#define vpx_plane_add_noise vpx_plane_add_noise_sse2
 
 void vpx_post_proc_down_and_across_mb_row_c(unsigned char* src,
                                             unsigned char* dst,
@@ -6552,13 +5449,8 @@
                                                int cols,
                                                unsigned char* flimits,
                                                int size);
-RTCD_EXTERN void (*vpx_post_proc_down_and_across_mb_row)(unsigned char* src,
-                                                         unsigned char* dst,
-                                                         int src_pitch,
-                                                         int dst_pitch,
-                                                         int cols,
-                                                         unsigned char* flimits,
-                                                         int size);
+#define vpx_post_proc_down_and_across_mb_row \
+  vpx_post_proc_down_and_across_mb_row_sse2
 
 void vpx_quantize_b_c(const tran_low_t* coeff_ptr,
                       intptr_t n_coeffs,
@@ -6687,10 +5579,7 @@
                                int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad16x16)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* ref_ptr,
-                                         int ref_stride);
+#define vpx_sad16x16 vpx_sad16x16_sse2
 
 unsigned int vpx_sad16x16_avg_c(const uint8_t* src_ptr,
                                 int src_stride,
@@ -6702,11 +5591,7 @@
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad16x16_avg)(const uint8_t* src_ptr,
-                                             int src_stride,
-                                             const uint8_t* ref_ptr,
-                                             int ref_stride,
-                                             const uint8_t* second_pred);
+#define vpx_sad16x16_avg vpx_sad16x16_avg_sse2
 
 void vpx_sad16x16x3_c(const uint8_t* src_ptr,
                       int src_stride,
@@ -6731,19 +5616,15 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x16x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad16x16x4d)(const uint8_t* src_ptr,
-                                    int src_stride,
-                                    const uint8_t* const ref_ptr[],
-                                    int ref_stride,
-                                    uint32_t* sad_array);
+#define vpx_sad16x16x4d vpx_sad16x16x4d_sse2
 
 void vpx_sad16x16x8_c(const uint8_t* src_ptr,
                       int src_stride,
@@ -6769,10 +5650,7 @@
                                int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad16x32)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* ref_ptr,
-                                         int ref_stride);
+#define vpx_sad16x32 vpx_sad16x32_sse2
 
 unsigned int vpx_sad16x32_avg_c(const uint8_t* src_ptr,
                                 int src_stride,
@@ -6784,27 +5662,19 @@
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad16x32_avg)(const uint8_t* src_ptr,
-                                             int src_stride,
-                                             const uint8_t* ref_ptr,
-                                             int ref_stride,
-                                             const uint8_t* second_pred);
+#define vpx_sad16x32_avg vpx_sad16x32_avg_sse2
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad16x32x4d)(const uint8_t* src_ptr,
-                                    int src_stride,
-                                    const uint8_t* const ref_ptr[],
-                                    int ref_stride,
-                                    uint32_t* sad_array);
+#define vpx_sad16x32x4d vpx_sad16x32x4d_sse2
 
 unsigned int vpx_sad16x8_c(const uint8_t* src_ptr,
                            int src_stride,
@@ -6814,10 +5684,7 @@
                               int src_stride,
                               const uint8_t* ref_ptr,
                               int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad16x8)(const uint8_t* src_ptr,
-                                        int src_stride,
-                                        const uint8_t* ref_ptr,
-                                        int ref_stride);
+#define vpx_sad16x8 vpx_sad16x8_sse2
 
 unsigned int vpx_sad16x8_avg_c(const uint8_t* src_ptr,
                                int src_stride,
@@ -6829,11 +5696,7 @@
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad16x8_avg)(const uint8_t* src_ptr,
-                                            int src_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            const uint8_t* second_pred);
+#define vpx_sad16x8_avg vpx_sad16x8_avg_sse2
 
 void vpx_sad16x8x3_c(const uint8_t* src_ptr,
                      int src_stride,
@@ -6858,19 +5721,15 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad16x8x4d_sse2(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad16x8x4d)(const uint8_t* src_ptr,
-                                   int src_stride,
-                                   const uint8_t* const ref_ptr[],
-                                   int ref_stride,
-                                   uint32_t* sad_array);
+#define vpx_sad16x8x4d vpx_sad16x8x4d_sse2
 
 void vpx_sad16x8x8_c(const uint8_t* src_ptr,
                      int src_stride,
@@ -6928,19 +5787,15 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x16x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad32x16x4d)(const uint8_t* src_ptr,
-                                    int src_stride,
-                                    const uint8_t* const ref_ptr[],
-                                    int ref_stride,
-                                    uint32_t* sad_array);
+#define vpx_sad32x16x4d vpx_sad32x16x4d_sse2
 
 unsigned int vpx_sad32x32_c(const uint8_t* src_ptr,
                             int src_stride,
@@ -6982,25 +5837,41 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 void vpx_sad32x32x4d_avx2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad32x32x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+void vpx_sad32x32x8_avx2(const uint8_t* src_ptr,
+                         int src_stride,
+                         const uint8_t* ref_ptr,
+                         int ref_stride,
+                         uint32_t* sad_array);
+RTCD_EXTERN void (*vpx_sad32x32x8)(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   uint32_t* sad_array);
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -7041,19 +5912,15 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x64x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad32x64x4d)(const uint8_t* src_ptr,
-                                    int src_stride,
-                                    const uint8_t* const ref_ptr[],
-                                    int ref_stride,
-                                    uint32_t* sad_array);
+#define vpx_sad32x64x4d vpx_sad32x64x4d_sse2
 
 unsigned int vpx_sad4x4_c(const uint8_t* src_ptr,
                           int src_stride,
@@ -7063,10 +5930,7 @@
                              int src_stride,
                              const uint8_t* ref_ptr,
                              int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad4x4)(const uint8_t* src_ptr,
-                                       int src_stride,
-                                       const uint8_t* ref_ptr,
-                                       int ref_stride);
+#define vpx_sad4x4 vpx_sad4x4_sse2
 
 unsigned int vpx_sad4x4_avg_c(const uint8_t* src_ptr,
                               int src_stride,
@@ -7078,11 +5942,7 @@
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad4x4_avg)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* ref_ptr,
-                                           int ref_stride,
-                                           const uint8_t* second_pred);
+#define vpx_sad4x4_avg vpx_sad4x4_avg_sse2
 
 void vpx_sad4x4x3_c(const uint8_t* src_ptr,
                     int src_stride,
@@ -7102,19 +5962,15 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x4x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad4x4x4d)(const uint8_t* src_ptr,
-                                  int src_stride,
-                                  const uint8_t* const ref_ptr[],
-                                  int ref_stride,
-                                  uint32_t* sad_array);
+#define vpx_sad4x4x4d vpx_sad4x4x4d_sse2
 
 void vpx_sad4x4x8_c(const uint8_t* src_ptr,
                     int src_stride,
@@ -7140,10 +5996,7 @@
                              int src_stride,
                              const uint8_t* ref_ptr,
                              int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad4x8)(const uint8_t* src_ptr,
-                                       int src_stride,
-                                       const uint8_t* ref_ptr,
-                                       int ref_stride);
+#define vpx_sad4x8 vpx_sad4x8_sse2
 
 unsigned int vpx_sad4x8_avg_c(const uint8_t* src_ptr,
                               int src_stride,
@@ -7155,27 +6008,19 @@
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad4x8_avg)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* ref_ptr,
-                                           int ref_stride,
-                                           const uint8_t* second_pred);
+#define vpx_sad4x8_avg vpx_sad4x8_avg_sse2
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x8x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad4x8x4d)(const uint8_t* src_ptr,
-                                  int src_stride,
-                                  const uint8_t* const ref_ptr[],
-                                  int ref_stride,
-                                  uint32_t* sad_array);
+#define vpx_sad4x8x4d vpx_sad4x8x4d_sse2
 
 unsigned int vpx_sad64x32_c(const uint8_t* src_ptr,
                             int src_stride,
@@ -7217,19 +6062,15 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad64x32x4d)(const uint8_t* src_ptr,
-                                    int src_stride,
-                                    const uint8_t* const ref_ptr[],
-                                    int ref_stride,
-                                    uint32_t* sad_array);
+#define vpx_sad64x32x4d vpx_sad64x32x4d_sse2
 
 unsigned int vpx_sad64x64_c(const uint8_t* src_ptr,
                             int src_stride,
@@ -7271,22 +6112,22 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x64x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 void vpx_sad64x64x4d_avx2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad64x64x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
@@ -7298,10 +6139,7 @@
                               int src_stride,
                               const uint8_t* ref_ptr,
                               int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad8x16)(const uint8_t* src_ptr,
-                                        int src_stride,
-                                        const uint8_t* ref_ptr,
-                                        int ref_stride);
+#define vpx_sad8x16 vpx_sad8x16_sse2
 
 unsigned int vpx_sad8x16_avg_c(const uint8_t* src_ptr,
                                int src_stride,
@@ -7313,11 +6151,7 @@
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad8x16_avg)(const uint8_t* src_ptr,
-                                            int src_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            const uint8_t* second_pred);
+#define vpx_sad8x16_avg vpx_sad8x16_avg_sse2
 
 void vpx_sad8x16x3_c(const uint8_t* src_ptr,
                      int src_stride,
@@ -7337,19 +6171,15 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad8x16x4d_sse2(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad8x16x4d)(const uint8_t* src_ptr,
-                                   int src_stride,
-                                   const uint8_t* const ref_ptr[],
-                                   int ref_stride,
-                                   uint32_t* sad_array);
+#define vpx_sad8x16x4d vpx_sad8x16x4d_sse2
 
 void vpx_sad8x16x8_c(const uint8_t* src_ptr,
                      int src_stride,
@@ -7375,10 +6205,7 @@
                              int src_stride,
                              const uint8_t* ref_ptr,
                              int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad8x4)(const uint8_t* src_ptr,
-                                       int src_stride,
-                                       const uint8_t* ref_ptr,
-                                       int ref_stride);
+#define vpx_sad8x4 vpx_sad8x4_sse2
 
 unsigned int vpx_sad8x4_avg_c(const uint8_t* src_ptr,
                               int src_stride,
@@ -7390,27 +6217,19 @@
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad8x4_avg)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* ref_ptr,
-                                           int ref_stride,
-                                           const uint8_t* second_pred);
+#define vpx_sad8x4_avg vpx_sad8x4_avg_sse2
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x4x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad8x4x4d)(const uint8_t* src_ptr,
-                                  int src_stride,
-                                  const uint8_t* const ref_ptr[],
-                                  int ref_stride,
-                                  uint32_t* sad_array);
+#define vpx_sad8x4x4d vpx_sad8x4x4d_sse2
 
 unsigned int vpx_sad8x8_c(const uint8_t* src_ptr,
                           int src_stride,
@@ -7420,10 +6239,7 @@
                              int src_stride,
                              const uint8_t* ref_ptr,
                              int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad8x8)(const uint8_t* src_ptr,
-                                       int src_stride,
-                                       const uint8_t* ref_ptr,
-                                       int ref_stride);
+#define vpx_sad8x8 vpx_sad8x8_sse2
 
 unsigned int vpx_sad8x8_avg_c(const uint8_t* src_ptr,
                               int src_stride,
@@ -7435,11 +6251,7 @@
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad8x8_avg)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* ref_ptr,
-                                           int ref_stride,
-                                           const uint8_t* second_pred);
+#define vpx_sad8x8_avg vpx_sad8x8_avg_sse2
 
 void vpx_sad8x8x3_c(const uint8_t* src_ptr,
                     int src_stride,
@@ -7459,19 +6271,15 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x8x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad8x8x4d)(const uint8_t* src_ptr,
-                                  int src_stride,
-                                  const uint8_t* const ref_ptr[],
-                                  int ref_stride,
-                                  uint32_t* sad_array);
+#define vpx_sad8x8x4d vpx_sad8x8x4d_sse2
 
 void vpx_sad8x8x8_c(const uint8_t* src_ptr,
                     int src_stride,
@@ -7594,850 +6402,850 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_ssse3(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_avx2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x64)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance4x4)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance4x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance64x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_avx2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance64x64)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_ssse3(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x4)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x16)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_ssse3(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x8)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x16)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_avx2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x64)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance4x4)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance4x8)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance64x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_avx2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance64x64)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_ssse3(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x16)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x4)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x8)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -8458,384 +7266,329 @@
                              ptrdiff_t src_stride,
                              const uint8_t* pred_ptr,
                              ptrdiff_t pred_stride);
-RTCD_EXTERN void (*vpx_subtract_block)(int rows,
-                                       int cols,
-                                       int16_t* diff_ptr,
-                                       ptrdiff_t diff_stride,
-                                       const uint8_t* src_ptr,
-                                       ptrdiff_t src_stride,
-                                       const uint8_t* pred_ptr,
-                                       ptrdiff_t pred_stride);
+#define vpx_subtract_block vpx_subtract_block_sse2
 
 uint64_t vpx_sum_squares_2d_i16_c(const int16_t* src, int stride, int size);
 uint64_t vpx_sum_squares_2d_i16_sse2(const int16_t* src, int stride, int size);
-RTCD_EXTERN uint64_t (*vpx_sum_squares_2d_i16)(const int16_t* src,
-                                               int stride,
-                                               int size);
+#define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_sse2
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_16x16_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
-RTCD_EXTERN void (*vpx_tm_predictor_16x16)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
-                                           const uint8_t* above,
-                                           const uint8_t* left);
+#define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_sse2
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_32x32_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
-RTCD_EXTERN void (*vpx_tm_predictor_32x32)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
-                                           const uint8_t* above,
-                                           const uint8_t* left);
+#define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_sse2
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_4x4_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
-RTCD_EXTERN void (*vpx_tm_predictor_4x4)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
-                                         const uint8_t* above,
-                                         const uint8_t* left);
+#define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_sse2
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_8x8_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
-RTCD_EXTERN void (*vpx_tm_predictor_8x8)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
-                                         const uint8_t* above,
-                                         const uint8_t* left);
+#define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_sse2
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_16x16_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_v_predictor_16x16)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_v_predictor_16x16 vpx_v_predictor_16x16_sse2
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_32x32_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_v_predictor_32x32)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_v_predictor_32x32 vpx_v_predictor_32x32_sse2
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_4x4_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
-RTCD_EXTERN void (*vpx_v_predictor_4x4)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
-                                        const uint8_t* above,
-                                        const uint8_t* left);
+#define vpx_v_predictor_4x4 vpx_v_predictor_4x4_sse2
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_8x8_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
-RTCD_EXTERN void (*vpx_v_predictor_8x8)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
-                                        const uint8_t* above,
-                                        const uint8_t* left);
+#define vpx_v_predictor_8x8 vpx_v_predictor_8x8_sse2
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x16_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance16x16_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x16)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance16x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance16x8_sse2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 unsigned int vpx_variance16x8_avx2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x8)(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x16_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x16_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x16)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x64_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x64_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x64)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x4_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_variance4x4)(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            unsigned int* sse);
+#define vpx_variance4x4 vpx_variance4x4_sse2
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x8_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_variance4x8)(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            unsigned int* sse);
+#define vpx_variance4x8 vpx_variance4x8_sse2
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance64x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance64x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x64_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance64x64_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance64x64)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance8x16_sse2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_variance8x16)(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             const uint8_t* ref_ptr,
-                                             int ref_stride,
-                                             unsigned int* sse);
+#define vpx_variance8x16 vpx_variance8x16_sse2
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x4_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_variance8x4)(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            unsigned int* sse);
+#define vpx_variance8x4 vpx_variance8x4_sse2
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x8_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_variance8x8)(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            unsigned int* sse);
+#define vpx_variance8x8 vpx_variance8x8_sse2
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
 
 int vpx_vector_var_c(const int16_t* ref, const int16_t* src, const int bwl);
 int vpx_vector_var_sse2(const int16_t* ref, const int16_t* src, const int bwl);
-RTCD_EXTERN int (*vpx_vector_var)(const int16_t* ref,
-                                  const int16_t* src,
-                                  const int bwl);
+#define vpx_vector_var vpx_vector_var_sse2
 
 void vpx_dsp_rtcd(void);
 
@@ -8846,63 +7599,36 @@
 
   (void)flags;
 
-  vpx_avg_4x4 = vpx_avg_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_avg_4x4 = vpx_avg_4x4_sse2;
-  vpx_avg_8x8 = vpx_avg_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_avg_8x8 = vpx_avg_8x8_sse2;
-  vpx_comp_avg_pred = vpx_comp_avg_pred_c;
-  if (flags & HAS_SSE2)
-    vpx_comp_avg_pred = vpx_comp_avg_pred_sse2;
-  vpx_convolve8 = vpx_convolve8_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8 = vpx_convolve8_sse2;
+  vpx_convolve8 = vpx_convolve8_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8 = vpx_convolve8_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8 = vpx_convolve8_avx2;
-  vpx_convolve8_avg = vpx_convolve8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8_avg = vpx_convolve8_avg_sse2;
+  vpx_convolve8_avg = vpx_convolve8_avg_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8_avg = vpx_convolve8_avg_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8_avg = vpx_convolve8_avg_avx2;
-  vpx_convolve8_avg_horiz = vpx_convolve8_avg_horiz_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8_avg_horiz = vpx_convolve8_avg_horiz_sse2;
+  vpx_convolve8_avg_horiz = vpx_convolve8_avg_horiz_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8_avg_horiz = vpx_convolve8_avg_horiz_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8_avg_horiz = vpx_convolve8_avg_horiz_avx2;
-  vpx_convolve8_avg_vert = vpx_convolve8_avg_vert_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8_avg_vert = vpx_convolve8_avg_vert_sse2;
+  vpx_convolve8_avg_vert = vpx_convolve8_avg_vert_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8_avg_vert = vpx_convolve8_avg_vert_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8_avg_vert = vpx_convolve8_avg_vert_avx2;
-  vpx_convolve8_horiz = vpx_convolve8_horiz_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8_horiz = vpx_convolve8_horiz_sse2;
+  vpx_convolve8_horiz = vpx_convolve8_horiz_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8_horiz = vpx_convolve8_horiz_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8_horiz = vpx_convolve8_horiz_avx2;
-  vpx_convolve8_vert = vpx_convolve8_vert_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8_vert = vpx_convolve8_vert_sse2;
+  vpx_convolve8_vert = vpx_convolve8_vert_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8_vert = vpx_convolve8_vert_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8_vert = vpx_convolve8_vert_avx2;
-  vpx_convolve_avg = vpx_convolve_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve_avg = vpx_convolve_avg_sse2;
-  vpx_convolve_copy = vpx_convolve_copy_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve_copy = vpx_convolve_copy_sse2;
   vpx_d153_predictor_16x16 = vpx_d153_predictor_16x16_c;
   if (flags & HAS_SSSE3)
     vpx_d153_predictor_16x16 = vpx_d153_predictor_16x16_ssse3;
@@ -8921,9 +7647,6 @@
   vpx_d207_predictor_32x32 = vpx_d207_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_d207_predictor_32x32 = vpx_d207_predictor_32x32_ssse3;
-  vpx_d207_predictor_4x4 = vpx_d207_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_d207_predictor_4x4 = vpx_d207_predictor_4x4_sse2;
   vpx_d207_predictor_8x8 = vpx_d207_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_d207_predictor_8x8 = vpx_d207_predictor_8x8_ssse3;
@@ -8933,12 +7656,6 @@
   vpx_d45_predictor_32x32 = vpx_d45_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_d45_predictor_32x32 = vpx_d45_predictor_32x32_ssse3;
-  vpx_d45_predictor_4x4 = vpx_d45_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_d45_predictor_4x4 = vpx_d45_predictor_4x4_sse2;
-  vpx_d45_predictor_8x8 = vpx_d45_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_d45_predictor_8x8 = vpx_d45_predictor_8x8_sse2;
   vpx_d63_predictor_16x16 = vpx_d63_predictor_16x16_c;
   if (flags & HAS_SSSE3)
     vpx_d63_predictor_16x16 = vpx_d63_predictor_16x16_ssse3;
@@ -8951,542 +7668,15 @@
   vpx_d63_predictor_8x8 = vpx_d63_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_d63_predictor_8x8 = vpx_d63_predictor_8x8_ssse3;
-  vpx_dc_128_predictor_16x16 = vpx_dc_128_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_128_predictor_16x16 = vpx_dc_128_predictor_16x16_sse2;
-  vpx_dc_128_predictor_32x32 = vpx_dc_128_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_128_predictor_32x32 = vpx_dc_128_predictor_32x32_sse2;
-  vpx_dc_128_predictor_4x4 = vpx_dc_128_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_128_predictor_4x4 = vpx_dc_128_predictor_4x4_sse2;
-  vpx_dc_128_predictor_8x8 = vpx_dc_128_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_128_predictor_8x8 = vpx_dc_128_predictor_8x8_sse2;
-  vpx_dc_left_predictor_16x16 = vpx_dc_left_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_left_predictor_16x16 = vpx_dc_left_predictor_16x16_sse2;
-  vpx_dc_left_predictor_32x32 = vpx_dc_left_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_left_predictor_32x32 = vpx_dc_left_predictor_32x32_sse2;
-  vpx_dc_left_predictor_4x4 = vpx_dc_left_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_left_predictor_4x4 = vpx_dc_left_predictor_4x4_sse2;
-  vpx_dc_left_predictor_8x8 = vpx_dc_left_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_left_predictor_8x8 = vpx_dc_left_predictor_8x8_sse2;
-  vpx_dc_predictor_16x16 = vpx_dc_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_predictor_16x16 = vpx_dc_predictor_16x16_sse2;
-  vpx_dc_predictor_32x32 = vpx_dc_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_predictor_32x32 = vpx_dc_predictor_32x32_sse2;
-  vpx_dc_predictor_4x4 = vpx_dc_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_predictor_4x4 = vpx_dc_predictor_4x4_sse2;
-  vpx_dc_predictor_8x8 = vpx_dc_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_predictor_8x8 = vpx_dc_predictor_8x8_sse2;
-  vpx_dc_top_predictor_16x16 = vpx_dc_top_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_top_predictor_16x16 = vpx_dc_top_predictor_16x16_sse2;
-  vpx_dc_top_predictor_32x32 = vpx_dc_top_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_top_predictor_32x32 = vpx_dc_top_predictor_32x32_sse2;
-  vpx_dc_top_predictor_4x4 = vpx_dc_top_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_top_predictor_4x4 = vpx_dc_top_predictor_4x4_sse2;
-  vpx_dc_top_predictor_8x8 = vpx_dc_top_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_top_predictor_8x8 = vpx_dc_top_predictor_8x8_sse2;
-  vpx_fdct16x16 = vpx_fdct16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct16x16 = vpx_fdct16x16_sse2;
-  vpx_fdct16x16_1 = vpx_fdct16x16_1_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct16x16_1 = vpx_fdct16x16_1_sse2;
-  vpx_fdct32x32 = vpx_fdct32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct32x32 = vpx_fdct32x32_sse2;
-  vpx_fdct32x32_1 = vpx_fdct32x32_1_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct32x32_1 = vpx_fdct32x32_1_sse2;
-  vpx_fdct32x32_rd = vpx_fdct32x32_rd_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct32x32_rd = vpx_fdct32x32_rd_sse2;
-  vpx_fdct4x4 = vpx_fdct4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct4x4 = vpx_fdct4x4_sse2;
-  vpx_fdct4x4_1 = vpx_fdct4x4_1_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct4x4_1 = vpx_fdct4x4_1_sse2;
-  vpx_fdct8x8 = vpx_fdct8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct8x8 = vpx_fdct8x8_sse2;
-  vpx_fdct8x8_1 = vpx_fdct8x8_1_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct8x8_1 = vpx_fdct8x8_1_sse2;
-  vpx_get16x16var = vpx_get16x16var_c;
-  if (flags & HAS_SSE2)
-    vpx_get16x16var = vpx_get16x16var_sse2;
+  vpx_get16x16var = vpx_get16x16var_sse2;
   if (flags & HAS_AVX2)
     vpx_get16x16var = vpx_get16x16var_avx2;
-  vpx_get8x8var = vpx_get8x8var_c;
-  if (flags & HAS_SSE2)
-    vpx_get8x8var = vpx_get8x8var_sse2;
-  vpx_get_mb_ss = vpx_get_mb_ss_c;
-  if (flags & HAS_SSE2)
-    vpx_get_mb_ss = vpx_get_mb_ss_sse2;
-  vpx_h_predictor_16x16 = vpx_h_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_h_predictor_16x16 = vpx_h_predictor_16x16_sse2;
-  vpx_h_predictor_32x32 = vpx_h_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_h_predictor_32x32 = vpx_h_predictor_32x32_sse2;
-  vpx_h_predictor_4x4 = vpx_h_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_h_predictor_4x4 = vpx_h_predictor_4x4_sse2;
-  vpx_h_predictor_8x8 = vpx_h_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_h_predictor_8x8 = vpx_h_predictor_8x8_sse2;
-  vpx_hadamard_16x16 = vpx_hadamard_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_hadamard_16x16 = vpx_hadamard_16x16_sse2;
+  vpx_hadamard_16x16 = vpx_hadamard_16x16_sse2;
   if (flags & HAS_AVX2)
     vpx_hadamard_16x16 = vpx_hadamard_16x16_avx2;
-  vpx_hadamard_32x32 = vpx_hadamard_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_hadamard_32x32 = vpx_hadamard_32x32_sse2;
+  vpx_hadamard_32x32 = vpx_hadamard_32x32_sse2;
   if (flags & HAS_AVX2)
     vpx_hadamard_32x32 = vpx_hadamard_32x32_avx2;
-  vpx_hadamard_8x8 = vpx_hadamard_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_hadamard_8x8 = vpx_hadamard_8x8_sse2;
-  vpx_highbd_10_mse16x16 = vpx_highbd_10_mse16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_mse16x16 = vpx_highbd_10_mse16x16_sse2;
-  vpx_highbd_10_mse8x8 = vpx_highbd_10_mse8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_mse8x8 = vpx_highbd_10_mse8x8_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance16x16 =
-      vpx_highbd_10_sub_pixel_avg_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance16x16 =
-        vpx_highbd_10_sub_pixel_avg_variance16x16_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance16x32 =
-      vpx_highbd_10_sub_pixel_avg_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance16x32 =
-        vpx_highbd_10_sub_pixel_avg_variance16x32_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance16x8 =
-      vpx_highbd_10_sub_pixel_avg_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance16x8 =
-        vpx_highbd_10_sub_pixel_avg_variance16x8_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance32x16 =
-      vpx_highbd_10_sub_pixel_avg_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance32x16 =
-        vpx_highbd_10_sub_pixel_avg_variance32x16_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance32x32 =
-      vpx_highbd_10_sub_pixel_avg_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance32x32 =
-        vpx_highbd_10_sub_pixel_avg_variance32x32_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance32x64 =
-      vpx_highbd_10_sub_pixel_avg_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance32x64 =
-        vpx_highbd_10_sub_pixel_avg_variance32x64_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance64x32 =
-      vpx_highbd_10_sub_pixel_avg_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance64x32 =
-        vpx_highbd_10_sub_pixel_avg_variance64x32_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance64x64 =
-      vpx_highbd_10_sub_pixel_avg_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance64x64 =
-        vpx_highbd_10_sub_pixel_avg_variance64x64_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance8x16 =
-      vpx_highbd_10_sub_pixel_avg_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance8x16 =
-        vpx_highbd_10_sub_pixel_avg_variance8x16_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance8x4 =
-      vpx_highbd_10_sub_pixel_avg_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance8x4 =
-        vpx_highbd_10_sub_pixel_avg_variance8x4_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance8x8 =
-      vpx_highbd_10_sub_pixel_avg_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance8x8 =
-        vpx_highbd_10_sub_pixel_avg_variance8x8_sse2;
-  vpx_highbd_10_sub_pixel_variance16x16 =
-      vpx_highbd_10_sub_pixel_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance16x16 =
-        vpx_highbd_10_sub_pixel_variance16x16_sse2;
-  vpx_highbd_10_sub_pixel_variance16x32 =
-      vpx_highbd_10_sub_pixel_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance16x32 =
-        vpx_highbd_10_sub_pixel_variance16x32_sse2;
-  vpx_highbd_10_sub_pixel_variance16x8 = vpx_highbd_10_sub_pixel_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance16x8 =
-        vpx_highbd_10_sub_pixel_variance16x8_sse2;
-  vpx_highbd_10_sub_pixel_variance32x16 =
-      vpx_highbd_10_sub_pixel_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance32x16 =
-        vpx_highbd_10_sub_pixel_variance32x16_sse2;
-  vpx_highbd_10_sub_pixel_variance32x32 =
-      vpx_highbd_10_sub_pixel_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance32x32 =
-        vpx_highbd_10_sub_pixel_variance32x32_sse2;
-  vpx_highbd_10_sub_pixel_variance32x64 =
-      vpx_highbd_10_sub_pixel_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance32x64 =
-        vpx_highbd_10_sub_pixel_variance32x64_sse2;
-  vpx_highbd_10_sub_pixel_variance64x32 =
-      vpx_highbd_10_sub_pixel_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance64x32 =
-        vpx_highbd_10_sub_pixel_variance64x32_sse2;
-  vpx_highbd_10_sub_pixel_variance64x64 =
-      vpx_highbd_10_sub_pixel_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance64x64 =
-        vpx_highbd_10_sub_pixel_variance64x64_sse2;
-  vpx_highbd_10_sub_pixel_variance8x16 = vpx_highbd_10_sub_pixel_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance8x16 =
-        vpx_highbd_10_sub_pixel_variance8x16_sse2;
-  vpx_highbd_10_sub_pixel_variance8x4 = vpx_highbd_10_sub_pixel_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance8x4 =
-        vpx_highbd_10_sub_pixel_variance8x4_sse2;
-  vpx_highbd_10_sub_pixel_variance8x8 = vpx_highbd_10_sub_pixel_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance8x8 =
-        vpx_highbd_10_sub_pixel_variance8x8_sse2;
-  vpx_highbd_10_variance16x16 = vpx_highbd_10_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance16x16 = vpx_highbd_10_variance16x16_sse2;
-  vpx_highbd_10_variance16x32 = vpx_highbd_10_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance16x32 = vpx_highbd_10_variance16x32_sse2;
-  vpx_highbd_10_variance16x8 = vpx_highbd_10_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance16x8 = vpx_highbd_10_variance16x8_sse2;
-  vpx_highbd_10_variance32x16 = vpx_highbd_10_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance32x16 = vpx_highbd_10_variance32x16_sse2;
-  vpx_highbd_10_variance32x32 = vpx_highbd_10_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance32x32 = vpx_highbd_10_variance32x32_sse2;
-  vpx_highbd_10_variance32x64 = vpx_highbd_10_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance32x64 = vpx_highbd_10_variance32x64_sse2;
-  vpx_highbd_10_variance64x32 = vpx_highbd_10_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance64x32 = vpx_highbd_10_variance64x32_sse2;
-  vpx_highbd_10_variance64x64 = vpx_highbd_10_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance64x64 = vpx_highbd_10_variance64x64_sse2;
-  vpx_highbd_10_variance8x16 = vpx_highbd_10_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance8x16 = vpx_highbd_10_variance8x16_sse2;
-  vpx_highbd_10_variance8x8 = vpx_highbd_10_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance8x8 = vpx_highbd_10_variance8x8_sse2;
-  vpx_highbd_12_mse16x16 = vpx_highbd_12_mse16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_mse16x16 = vpx_highbd_12_mse16x16_sse2;
-  vpx_highbd_12_mse8x8 = vpx_highbd_12_mse8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_mse8x8 = vpx_highbd_12_mse8x8_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance16x16 =
-      vpx_highbd_12_sub_pixel_avg_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance16x16 =
-        vpx_highbd_12_sub_pixel_avg_variance16x16_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance16x32 =
-      vpx_highbd_12_sub_pixel_avg_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance16x32 =
-        vpx_highbd_12_sub_pixel_avg_variance16x32_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance16x8 =
-      vpx_highbd_12_sub_pixel_avg_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance16x8 =
-        vpx_highbd_12_sub_pixel_avg_variance16x8_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance32x16 =
-      vpx_highbd_12_sub_pixel_avg_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance32x16 =
-        vpx_highbd_12_sub_pixel_avg_variance32x16_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance32x32 =
-      vpx_highbd_12_sub_pixel_avg_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance32x32 =
-        vpx_highbd_12_sub_pixel_avg_variance32x32_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance32x64 =
-      vpx_highbd_12_sub_pixel_avg_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance32x64 =
-        vpx_highbd_12_sub_pixel_avg_variance32x64_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance64x32 =
-      vpx_highbd_12_sub_pixel_avg_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance64x32 =
-        vpx_highbd_12_sub_pixel_avg_variance64x32_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance64x64 =
-      vpx_highbd_12_sub_pixel_avg_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance64x64 =
-        vpx_highbd_12_sub_pixel_avg_variance64x64_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance8x16 =
-      vpx_highbd_12_sub_pixel_avg_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance8x16 =
-        vpx_highbd_12_sub_pixel_avg_variance8x16_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance8x4 =
-      vpx_highbd_12_sub_pixel_avg_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance8x4 =
-        vpx_highbd_12_sub_pixel_avg_variance8x4_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance8x8 =
-      vpx_highbd_12_sub_pixel_avg_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance8x8 =
-        vpx_highbd_12_sub_pixel_avg_variance8x8_sse2;
-  vpx_highbd_12_sub_pixel_variance16x16 =
-      vpx_highbd_12_sub_pixel_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance16x16 =
-        vpx_highbd_12_sub_pixel_variance16x16_sse2;
-  vpx_highbd_12_sub_pixel_variance16x32 =
-      vpx_highbd_12_sub_pixel_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance16x32 =
-        vpx_highbd_12_sub_pixel_variance16x32_sse2;
-  vpx_highbd_12_sub_pixel_variance16x8 = vpx_highbd_12_sub_pixel_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance16x8 =
-        vpx_highbd_12_sub_pixel_variance16x8_sse2;
-  vpx_highbd_12_sub_pixel_variance32x16 =
-      vpx_highbd_12_sub_pixel_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance32x16 =
-        vpx_highbd_12_sub_pixel_variance32x16_sse2;
-  vpx_highbd_12_sub_pixel_variance32x32 =
-      vpx_highbd_12_sub_pixel_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance32x32 =
-        vpx_highbd_12_sub_pixel_variance32x32_sse2;
-  vpx_highbd_12_sub_pixel_variance32x64 =
-      vpx_highbd_12_sub_pixel_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance32x64 =
-        vpx_highbd_12_sub_pixel_variance32x64_sse2;
-  vpx_highbd_12_sub_pixel_variance64x32 =
-      vpx_highbd_12_sub_pixel_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance64x32 =
-        vpx_highbd_12_sub_pixel_variance64x32_sse2;
-  vpx_highbd_12_sub_pixel_variance64x64 =
-      vpx_highbd_12_sub_pixel_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance64x64 =
-        vpx_highbd_12_sub_pixel_variance64x64_sse2;
-  vpx_highbd_12_sub_pixel_variance8x16 = vpx_highbd_12_sub_pixel_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance8x16 =
-        vpx_highbd_12_sub_pixel_variance8x16_sse2;
-  vpx_highbd_12_sub_pixel_variance8x4 = vpx_highbd_12_sub_pixel_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance8x4 =
-        vpx_highbd_12_sub_pixel_variance8x4_sse2;
-  vpx_highbd_12_sub_pixel_variance8x8 = vpx_highbd_12_sub_pixel_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance8x8 =
-        vpx_highbd_12_sub_pixel_variance8x8_sse2;
-  vpx_highbd_12_variance16x16 = vpx_highbd_12_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance16x16 = vpx_highbd_12_variance16x16_sse2;
-  vpx_highbd_12_variance16x32 = vpx_highbd_12_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance16x32 = vpx_highbd_12_variance16x32_sse2;
-  vpx_highbd_12_variance16x8 = vpx_highbd_12_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance16x8 = vpx_highbd_12_variance16x8_sse2;
-  vpx_highbd_12_variance32x16 = vpx_highbd_12_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance32x16 = vpx_highbd_12_variance32x16_sse2;
-  vpx_highbd_12_variance32x32 = vpx_highbd_12_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance32x32 = vpx_highbd_12_variance32x32_sse2;
-  vpx_highbd_12_variance32x64 = vpx_highbd_12_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance32x64 = vpx_highbd_12_variance32x64_sse2;
-  vpx_highbd_12_variance64x32 = vpx_highbd_12_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance64x32 = vpx_highbd_12_variance64x32_sse2;
-  vpx_highbd_12_variance64x64 = vpx_highbd_12_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance64x64 = vpx_highbd_12_variance64x64_sse2;
-  vpx_highbd_12_variance8x16 = vpx_highbd_12_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance8x16 = vpx_highbd_12_variance8x16_sse2;
-  vpx_highbd_12_variance8x8 = vpx_highbd_12_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance8x8 = vpx_highbd_12_variance8x8_sse2;
-  vpx_highbd_8_mse16x16 = vpx_highbd_8_mse16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_mse16x16 = vpx_highbd_8_mse16x16_sse2;
-  vpx_highbd_8_mse8x8 = vpx_highbd_8_mse8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_mse8x8 = vpx_highbd_8_mse8x8_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance16x16 =
-      vpx_highbd_8_sub_pixel_avg_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance16x16 =
-        vpx_highbd_8_sub_pixel_avg_variance16x16_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance16x32 =
-      vpx_highbd_8_sub_pixel_avg_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance16x32 =
-        vpx_highbd_8_sub_pixel_avg_variance16x32_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance16x8 =
-      vpx_highbd_8_sub_pixel_avg_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance16x8 =
-        vpx_highbd_8_sub_pixel_avg_variance16x8_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance32x16 =
-      vpx_highbd_8_sub_pixel_avg_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance32x16 =
-        vpx_highbd_8_sub_pixel_avg_variance32x16_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance32x32 =
-      vpx_highbd_8_sub_pixel_avg_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance32x32 =
-        vpx_highbd_8_sub_pixel_avg_variance32x32_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance32x64 =
-      vpx_highbd_8_sub_pixel_avg_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance32x64 =
-        vpx_highbd_8_sub_pixel_avg_variance32x64_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance64x32 =
-      vpx_highbd_8_sub_pixel_avg_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance64x32 =
-        vpx_highbd_8_sub_pixel_avg_variance64x32_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance64x64 =
-      vpx_highbd_8_sub_pixel_avg_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance64x64 =
-        vpx_highbd_8_sub_pixel_avg_variance64x64_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance8x16 =
-      vpx_highbd_8_sub_pixel_avg_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance8x16 =
-        vpx_highbd_8_sub_pixel_avg_variance8x16_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance8x4 =
-      vpx_highbd_8_sub_pixel_avg_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance8x4 =
-        vpx_highbd_8_sub_pixel_avg_variance8x4_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance8x8 =
-      vpx_highbd_8_sub_pixel_avg_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance8x8 =
-        vpx_highbd_8_sub_pixel_avg_variance8x8_sse2;
-  vpx_highbd_8_sub_pixel_variance16x16 = vpx_highbd_8_sub_pixel_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance16x16 =
-        vpx_highbd_8_sub_pixel_variance16x16_sse2;
-  vpx_highbd_8_sub_pixel_variance16x32 = vpx_highbd_8_sub_pixel_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance16x32 =
-        vpx_highbd_8_sub_pixel_variance16x32_sse2;
-  vpx_highbd_8_sub_pixel_variance16x8 = vpx_highbd_8_sub_pixel_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance16x8 =
-        vpx_highbd_8_sub_pixel_variance16x8_sse2;
-  vpx_highbd_8_sub_pixel_variance32x16 = vpx_highbd_8_sub_pixel_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance32x16 =
-        vpx_highbd_8_sub_pixel_variance32x16_sse2;
-  vpx_highbd_8_sub_pixel_variance32x32 = vpx_highbd_8_sub_pixel_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance32x32 =
-        vpx_highbd_8_sub_pixel_variance32x32_sse2;
-  vpx_highbd_8_sub_pixel_variance32x64 = vpx_highbd_8_sub_pixel_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance32x64 =
-        vpx_highbd_8_sub_pixel_variance32x64_sse2;
-  vpx_highbd_8_sub_pixel_variance64x32 = vpx_highbd_8_sub_pixel_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance64x32 =
-        vpx_highbd_8_sub_pixel_variance64x32_sse2;
-  vpx_highbd_8_sub_pixel_variance64x64 = vpx_highbd_8_sub_pixel_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance64x64 =
-        vpx_highbd_8_sub_pixel_variance64x64_sse2;
-  vpx_highbd_8_sub_pixel_variance8x16 = vpx_highbd_8_sub_pixel_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance8x16 =
-        vpx_highbd_8_sub_pixel_variance8x16_sse2;
-  vpx_highbd_8_sub_pixel_variance8x4 = vpx_highbd_8_sub_pixel_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance8x4 =
-        vpx_highbd_8_sub_pixel_variance8x4_sse2;
-  vpx_highbd_8_sub_pixel_variance8x8 = vpx_highbd_8_sub_pixel_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance8x8 =
-        vpx_highbd_8_sub_pixel_variance8x8_sse2;
-  vpx_highbd_8_variance16x16 = vpx_highbd_8_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance16x16 = vpx_highbd_8_variance16x16_sse2;
-  vpx_highbd_8_variance16x32 = vpx_highbd_8_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance16x32 = vpx_highbd_8_variance16x32_sse2;
-  vpx_highbd_8_variance16x8 = vpx_highbd_8_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance16x8 = vpx_highbd_8_variance16x8_sse2;
-  vpx_highbd_8_variance32x16 = vpx_highbd_8_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance32x16 = vpx_highbd_8_variance32x16_sse2;
-  vpx_highbd_8_variance32x32 = vpx_highbd_8_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance32x32 = vpx_highbd_8_variance32x32_sse2;
-  vpx_highbd_8_variance32x64 = vpx_highbd_8_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance32x64 = vpx_highbd_8_variance32x64_sse2;
-  vpx_highbd_8_variance64x32 = vpx_highbd_8_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance64x32 = vpx_highbd_8_variance64x32_sse2;
-  vpx_highbd_8_variance64x64 = vpx_highbd_8_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance64x64 = vpx_highbd_8_variance64x64_sse2;
-  vpx_highbd_8_variance8x16 = vpx_highbd_8_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance8x16 = vpx_highbd_8_variance8x16_sse2;
-  vpx_highbd_8_variance8x8 = vpx_highbd_8_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance8x8 = vpx_highbd_8_variance8x8_sse2;
-  vpx_highbd_avg_4x4 = vpx_highbd_avg_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_avg_4x4 = vpx_highbd_avg_4x4_sse2;
-  vpx_highbd_avg_8x8 = vpx_highbd_avg_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_avg_8x8 = vpx_highbd_avg_8x8_sse2;
   vpx_highbd_convolve8 = vpx_highbd_convolve8_c;
   if (flags & HAS_AVX2)
     vpx_highbd_convolve8 = vpx_highbd_convolve8_avx2;
@@ -9505,14 +7695,10 @@
   vpx_highbd_convolve8_vert = vpx_highbd_convolve8_vert_c;
   if (flags & HAS_AVX2)
     vpx_highbd_convolve8_vert = vpx_highbd_convolve8_vert_avx2;
-  vpx_highbd_convolve_avg = vpx_highbd_convolve_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_convolve_avg = vpx_highbd_convolve_avg_sse2;
+  vpx_highbd_convolve_avg = vpx_highbd_convolve_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_highbd_convolve_avg = vpx_highbd_convolve_avg_avx2;
-  vpx_highbd_convolve_copy = vpx_highbd_convolve_copy_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_convolve_copy = vpx_highbd_convolve_copy_sse2;
+  vpx_highbd_convolve_copy = vpx_highbd_convolve_copy_sse2;
   if (flags & HAS_AVX2)
     vpx_highbd_convolve_copy = vpx_highbd_convolve_copy_avx2;
   vpx_highbd_d117_predictor_16x16 = vpx_highbd_d117_predictor_16x16_c;
@@ -9521,9 +7707,6 @@
   vpx_highbd_d117_predictor_32x32 = vpx_highbd_d117_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d117_predictor_32x32 = vpx_highbd_d117_predictor_32x32_ssse3;
-  vpx_highbd_d117_predictor_4x4 = vpx_highbd_d117_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_d117_predictor_4x4 = vpx_highbd_d117_predictor_4x4_sse2;
   vpx_highbd_d117_predictor_8x8 = vpx_highbd_d117_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d117_predictor_8x8 = vpx_highbd_d117_predictor_8x8_ssse3;
@@ -9533,9 +7716,6 @@
   vpx_highbd_d135_predictor_32x32 = vpx_highbd_d135_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d135_predictor_32x32 = vpx_highbd_d135_predictor_32x32_ssse3;
-  vpx_highbd_d135_predictor_4x4 = vpx_highbd_d135_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_d135_predictor_4x4 = vpx_highbd_d135_predictor_4x4_sse2;
   vpx_highbd_d135_predictor_8x8 = vpx_highbd_d135_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d135_predictor_8x8 = vpx_highbd_d135_predictor_8x8_ssse3;
@@ -9545,9 +7725,6 @@
   vpx_highbd_d153_predictor_32x32 = vpx_highbd_d153_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d153_predictor_32x32 = vpx_highbd_d153_predictor_32x32_ssse3;
-  vpx_highbd_d153_predictor_4x4 = vpx_highbd_d153_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_d153_predictor_4x4 = vpx_highbd_d153_predictor_4x4_sse2;
   vpx_highbd_d153_predictor_8x8 = vpx_highbd_d153_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d153_predictor_8x8 = vpx_highbd_d153_predictor_8x8_ssse3;
@@ -9557,9 +7734,6 @@
   vpx_highbd_d207_predictor_32x32 = vpx_highbd_d207_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d207_predictor_32x32 = vpx_highbd_d207_predictor_32x32_ssse3;
-  vpx_highbd_d207_predictor_4x4 = vpx_highbd_d207_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_d207_predictor_4x4 = vpx_highbd_d207_predictor_4x4_sse2;
   vpx_highbd_d207_predictor_8x8 = vpx_highbd_d207_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d207_predictor_8x8 = vpx_highbd_d207_predictor_8x8_ssse3;
@@ -9581,446 +7755,70 @@
   vpx_highbd_d63_predictor_32x32 = vpx_highbd_d63_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d63_predictor_32x32 = vpx_highbd_d63_predictor_32x32_ssse3;
-  vpx_highbd_d63_predictor_4x4 = vpx_highbd_d63_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_d63_predictor_4x4 = vpx_highbd_d63_predictor_4x4_sse2;
   vpx_highbd_d63_predictor_8x8 = vpx_highbd_d63_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d63_predictor_8x8 = vpx_highbd_d63_predictor_8x8_ssse3;
-  vpx_highbd_dc_128_predictor_16x16 = vpx_highbd_dc_128_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_128_predictor_16x16 = vpx_highbd_dc_128_predictor_16x16_sse2;
-  vpx_highbd_dc_128_predictor_32x32 = vpx_highbd_dc_128_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_128_predictor_32x32 = vpx_highbd_dc_128_predictor_32x32_sse2;
-  vpx_highbd_dc_128_predictor_4x4 = vpx_highbd_dc_128_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_128_predictor_4x4 = vpx_highbd_dc_128_predictor_4x4_sse2;
-  vpx_highbd_dc_128_predictor_8x8 = vpx_highbd_dc_128_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_128_predictor_8x8 = vpx_highbd_dc_128_predictor_8x8_sse2;
-  vpx_highbd_dc_left_predictor_16x16 = vpx_highbd_dc_left_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_left_predictor_16x16 =
-        vpx_highbd_dc_left_predictor_16x16_sse2;
-  vpx_highbd_dc_left_predictor_32x32 = vpx_highbd_dc_left_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_left_predictor_32x32 =
-        vpx_highbd_dc_left_predictor_32x32_sse2;
-  vpx_highbd_dc_left_predictor_4x4 = vpx_highbd_dc_left_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_left_predictor_4x4 = vpx_highbd_dc_left_predictor_4x4_sse2;
-  vpx_highbd_dc_left_predictor_8x8 = vpx_highbd_dc_left_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_left_predictor_8x8 = vpx_highbd_dc_left_predictor_8x8_sse2;
-  vpx_highbd_dc_predictor_16x16 = vpx_highbd_dc_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_predictor_16x16 = vpx_highbd_dc_predictor_16x16_sse2;
-  vpx_highbd_dc_predictor_32x32 = vpx_highbd_dc_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_predictor_32x32 = vpx_highbd_dc_predictor_32x32_sse2;
-  vpx_highbd_dc_predictor_4x4 = vpx_highbd_dc_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_predictor_4x4 = vpx_highbd_dc_predictor_4x4_sse2;
-  vpx_highbd_dc_predictor_8x8 = vpx_highbd_dc_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_predictor_8x8 = vpx_highbd_dc_predictor_8x8_sse2;
-  vpx_highbd_dc_top_predictor_16x16 = vpx_highbd_dc_top_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_top_predictor_16x16 = vpx_highbd_dc_top_predictor_16x16_sse2;
-  vpx_highbd_dc_top_predictor_32x32 = vpx_highbd_dc_top_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_top_predictor_32x32 = vpx_highbd_dc_top_predictor_32x32_sse2;
-  vpx_highbd_dc_top_predictor_4x4 = vpx_highbd_dc_top_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_top_predictor_4x4 = vpx_highbd_dc_top_predictor_4x4_sse2;
-  vpx_highbd_dc_top_predictor_8x8 = vpx_highbd_dc_top_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_top_predictor_8x8 = vpx_highbd_dc_top_predictor_8x8_sse2;
-  vpx_highbd_fdct16x16 = vpx_highbd_fdct16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_fdct16x16 = vpx_highbd_fdct16x16_sse2;
-  vpx_highbd_fdct32x32 = vpx_highbd_fdct32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_fdct32x32 = vpx_highbd_fdct32x32_sse2;
-  vpx_highbd_fdct32x32_rd = vpx_highbd_fdct32x32_rd_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_fdct32x32_rd = vpx_highbd_fdct32x32_rd_sse2;
-  vpx_highbd_fdct4x4 = vpx_highbd_fdct4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_fdct4x4 = vpx_highbd_fdct4x4_sse2;
-  vpx_highbd_fdct8x8 = vpx_highbd_fdct8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_fdct8x8 = vpx_highbd_fdct8x8_sse2;
-  vpx_highbd_h_predictor_16x16 = vpx_highbd_h_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_h_predictor_16x16 = vpx_highbd_h_predictor_16x16_sse2;
-  vpx_highbd_h_predictor_32x32 = vpx_highbd_h_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_h_predictor_32x32 = vpx_highbd_h_predictor_32x32_sse2;
-  vpx_highbd_h_predictor_4x4 = vpx_highbd_h_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_h_predictor_4x4 = vpx_highbd_h_predictor_4x4_sse2;
-  vpx_highbd_h_predictor_8x8 = vpx_highbd_h_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_h_predictor_8x8 = vpx_highbd_h_predictor_8x8_sse2;
-  vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_sse2;
+  vpx_highbd_hadamard_16x16 = vpx_highbd_hadamard_16x16_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_hadamard_16x16 = vpx_highbd_hadamard_16x16_avx2;
+  vpx_highbd_hadamard_32x32 = vpx_highbd_hadamard_32x32_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_hadamard_32x32 = vpx_highbd_hadamard_32x32_avx2;
+  vpx_highbd_hadamard_8x8 = vpx_highbd_hadamard_8x8_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_hadamard_8x8 = vpx_highbd_hadamard_8x8_avx2;
+  vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_sse4_1;
-  vpx_highbd_idct16x16_1_add = vpx_highbd_idct16x16_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct16x16_1_add = vpx_highbd_idct16x16_1_add_sse2;
-  vpx_highbd_idct16x16_256_add = vpx_highbd_idct16x16_256_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct16x16_256_add = vpx_highbd_idct16x16_256_add_sse2;
+  vpx_highbd_idct16x16_256_add = vpx_highbd_idct16x16_256_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct16x16_256_add = vpx_highbd_idct16x16_256_add_sse4_1;
-  vpx_highbd_idct16x16_38_add = vpx_highbd_idct16x16_38_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct16x16_38_add = vpx_highbd_idct16x16_38_add_sse2;
+  vpx_highbd_idct16x16_38_add = vpx_highbd_idct16x16_38_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct16x16_38_add = vpx_highbd_idct16x16_38_add_sse4_1;
-  vpx_highbd_idct32x32_1024_add = vpx_highbd_idct32x32_1024_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct32x32_1024_add = vpx_highbd_idct32x32_1024_add_sse2;
+  vpx_highbd_idct32x32_1024_add = vpx_highbd_idct32x32_1024_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct32x32_1024_add = vpx_highbd_idct32x32_1024_add_sse4_1;
-  vpx_highbd_idct32x32_135_add = vpx_highbd_idct32x32_135_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct32x32_135_add = vpx_highbd_idct32x32_135_add_sse2;
+  vpx_highbd_idct32x32_135_add = vpx_highbd_idct32x32_135_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct32x32_135_add = vpx_highbd_idct32x32_135_add_sse4_1;
-  vpx_highbd_idct32x32_1_add = vpx_highbd_idct32x32_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct32x32_1_add = vpx_highbd_idct32x32_1_add_sse2;
-  vpx_highbd_idct32x32_34_add = vpx_highbd_idct32x32_34_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct32x32_34_add = vpx_highbd_idct32x32_34_add_sse2;
+  vpx_highbd_idct32x32_34_add = vpx_highbd_idct32x32_34_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct32x32_34_add = vpx_highbd_idct32x32_34_add_sse4_1;
-  vpx_highbd_idct4x4_16_add = vpx_highbd_idct4x4_16_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct4x4_16_add = vpx_highbd_idct4x4_16_add_sse2;
+  vpx_highbd_idct4x4_16_add = vpx_highbd_idct4x4_16_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct4x4_16_add = vpx_highbd_idct4x4_16_add_sse4_1;
-  vpx_highbd_idct4x4_1_add = vpx_highbd_idct4x4_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct4x4_1_add = vpx_highbd_idct4x4_1_add_sse2;
-  vpx_highbd_idct8x8_12_add = vpx_highbd_idct8x8_12_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct8x8_12_add = vpx_highbd_idct8x8_12_add_sse2;
+  vpx_highbd_idct8x8_12_add = vpx_highbd_idct8x8_12_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct8x8_12_add = vpx_highbd_idct8x8_12_add_sse4_1;
-  vpx_highbd_idct8x8_1_add = vpx_highbd_idct8x8_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct8x8_1_add = vpx_highbd_idct8x8_1_add_sse2;
-  vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_sse2;
+  vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_sse4_1;
-  vpx_highbd_lpf_horizontal_16 = vpx_highbd_lpf_horizontal_16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_16 = vpx_highbd_lpf_horizontal_16_sse2;
-  vpx_highbd_lpf_horizontal_16_dual = vpx_highbd_lpf_horizontal_16_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_16_dual = vpx_highbd_lpf_horizontal_16_dual_sse2;
-  vpx_highbd_lpf_horizontal_4 = vpx_highbd_lpf_horizontal_4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_4 = vpx_highbd_lpf_horizontal_4_sse2;
-  vpx_highbd_lpf_horizontal_4_dual = vpx_highbd_lpf_horizontal_4_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_4_dual = vpx_highbd_lpf_horizontal_4_dual_sse2;
-  vpx_highbd_lpf_horizontal_8 = vpx_highbd_lpf_horizontal_8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_8 = vpx_highbd_lpf_horizontal_8_sse2;
-  vpx_highbd_lpf_horizontal_8_dual = vpx_highbd_lpf_horizontal_8_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_8_dual = vpx_highbd_lpf_horizontal_8_dual_sse2;
-  vpx_highbd_lpf_vertical_16 = vpx_highbd_lpf_vertical_16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_16 = vpx_highbd_lpf_vertical_16_sse2;
-  vpx_highbd_lpf_vertical_16_dual = vpx_highbd_lpf_vertical_16_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_16_dual = vpx_highbd_lpf_vertical_16_dual_sse2;
-  vpx_highbd_lpf_vertical_4 = vpx_highbd_lpf_vertical_4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_4 = vpx_highbd_lpf_vertical_4_sse2;
-  vpx_highbd_lpf_vertical_4_dual = vpx_highbd_lpf_vertical_4_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_4_dual = vpx_highbd_lpf_vertical_4_dual_sse2;
-  vpx_highbd_lpf_vertical_8 = vpx_highbd_lpf_vertical_8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_8 = vpx_highbd_lpf_vertical_8_sse2;
-  vpx_highbd_lpf_vertical_8_dual = vpx_highbd_lpf_vertical_8_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_8_dual = vpx_highbd_lpf_vertical_8_dual_sse2;
-  vpx_highbd_quantize_b = vpx_highbd_quantize_b_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_quantize_b = vpx_highbd_quantize_b_sse2;
-  vpx_highbd_quantize_b_32x32 = vpx_highbd_quantize_b_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_quantize_b_32x32 = vpx_highbd_quantize_b_32x32_sse2;
-  vpx_highbd_sad16x16 = vpx_highbd_sad16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x16 = vpx_highbd_sad16x16_sse2;
-  vpx_highbd_sad16x16_avg = vpx_highbd_sad16x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x16_avg = vpx_highbd_sad16x16_avg_sse2;
-  vpx_highbd_sad16x16x4d = vpx_highbd_sad16x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x16x4d = vpx_highbd_sad16x16x4d_sse2;
-  vpx_highbd_sad16x32 = vpx_highbd_sad16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x32 = vpx_highbd_sad16x32_sse2;
-  vpx_highbd_sad16x32_avg = vpx_highbd_sad16x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x32_avg = vpx_highbd_sad16x32_avg_sse2;
-  vpx_highbd_sad16x32x4d = vpx_highbd_sad16x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x32x4d = vpx_highbd_sad16x32x4d_sse2;
-  vpx_highbd_sad16x8 = vpx_highbd_sad16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x8 = vpx_highbd_sad16x8_sse2;
-  vpx_highbd_sad16x8_avg = vpx_highbd_sad16x8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x8_avg = vpx_highbd_sad16x8_avg_sse2;
-  vpx_highbd_sad16x8x4d = vpx_highbd_sad16x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x8x4d = vpx_highbd_sad16x8x4d_sse2;
-  vpx_highbd_sad32x16 = vpx_highbd_sad32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x16 = vpx_highbd_sad32x16_sse2;
-  vpx_highbd_sad32x16_avg = vpx_highbd_sad32x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x16_avg = vpx_highbd_sad32x16_avg_sse2;
-  vpx_highbd_sad32x16x4d = vpx_highbd_sad32x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x16x4d = vpx_highbd_sad32x16x4d_sse2;
-  vpx_highbd_sad32x32 = vpx_highbd_sad32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x32 = vpx_highbd_sad32x32_sse2;
-  vpx_highbd_sad32x32_avg = vpx_highbd_sad32x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x32_avg = vpx_highbd_sad32x32_avg_sse2;
-  vpx_highbd_sad32x32x4d = vpx_highbd_sad32x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x32x4d = vpx_highbd_sad32x32x4d_sse2;
-  vpx_highbd_sad32x64 = vpx_highbd_sad32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x64 = vpx_highbd_sad32x64_sse2;
-  vpx_highbd_sad32x64_avg = vpx_highbd_sad32x64_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x64_avg = vpx_highbd_sad32x64_avg_sse2;
-  vpx_highbd_sad32x64x4d = vpx_highbd_sad32x64x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x64x4d = vpx_highbd_sad32x64x4d_sse2;
-  vpx_highbd_sad4x4x4d = vpx_highbd_sad4x4x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad4x4x4d = vpx_highbd_sad4x4x4d_sse2;
-  vpx_highbd_sad4x8x4d = vpx_highbd_sad4x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad4x8x4d = vpx_highbd_sad4x8x4d_sse2;
-  vpx_highbd_sad64x32 = vpx_highbd_sad64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x32 = vpx_highbd_sad64x32_sse2;
-  vpx_highbd_sad64x32_avg = vpx_highbd_sad64x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x32_avg = vpx_highbd_sad64x32_avg_sse2;
-  vpx_highbd_sad64x32x4d = vpx_highbd_sad64x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x32x4d = vpx_highbd_sad64x32x4d_sse2;
-  vpx_highbd_sad64x64 = vpx_highbd_sad64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x64 = vpx_highbd_sad64x64_sse2;
-  vpx_highbd_sad64x64_avg = vpx_highbd_sad64x64_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x64_avg = vpx_highbd_sad64x64_avg_sse2;
-  vpx_highbd_sad64x64x4d = vpx_highbd_sad64x64x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x64x4d = vpx_highbd_sad64x64x4d_sse2;
-  vpx_highbd_sad8x16 = vpx_highbd_sad8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x16 = vpx_highbd_sad8x16_sse2;
-  vpx_highbd_sad8x16_avg = vpx_highbd_sad8x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x16_avg = vpx_highbd_sad8x16_avg_sse2;
-  vpx_highbd_sad8x16x4d = vpx_highbd_sad8x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x16x4d = vpx_highbd_sad8x16x4d_sse2;
-  vpx_highbd_sad8x4 = vpx_highbd_sad8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x4 = vpx_highbd_sad8x4_sse2;
-  vpx_highbd_sad8x4_avg = vpx_highbd_sad8x4_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x4_avg = vpx_highbd_sad8x4_avg_sse2;
-  vpx_highbd_sad8x4x4d = vpx_highbd_sad8x4x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x4x4d = vpx_highbd_sad8x4x4d_sse2;
-  vpx_highbd_sad8x8 = vpx_highbd_sad8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x8 = vpx_highbd_sad8x8_sse2;
-  vpx_highbd_sad8x8_avg = vpx_highbd_sad8x8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x8_avg = vpx_highbd_sad8x8_avg_sse2;
-  vpx_highbd_sad8x8x4d = vpx_highbd_sad8x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x8x4d = vpx_highbd_sad8x8x4d_sse2;
-  vpx_highbd_tm_predictor_16x16 = vpx_highbd_tm_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_tm_predictor_16x16 = vpx_highbd_tm_predictor_16x16_sse2;
-  vpx_highbd_tm_predictor_32x32 = vpx_highbd_tm_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_tm_predictor_32x32 = vpx_highbd_tm_predictor_32x32_sse2;
-  vpx_highbd_tm_predictor_4x4 = vpx_highbd_tm_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_tm_predictor_4x4 = vpx_highbd_tm_predictor_4x4_sse2;
-  vpx_highbd_tm_predictor_8x8 = vpx_highbd_tm_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_tm_predictor_8x8 = vpx_highbd_tm_predictor_8x8_sse2;
-  vpx_highbd_v_predictor_16x16 = vpx_highbd_v_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_v_predictor_16x16 = vpx_highbd_v_predictor_16x16_sse2;
-  vpx_highbd_v_predictor_32x32 = vpx_highbd_v_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_v_predictor_32x32 = vpx_highbd_v_predictor_32x32_sse2;
-  vpx_highbd_v_predictor_4x4 = vpx_highbd_v_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_v_predictor_4x4 = vpx_highbd_v_predictor_4x4_sse2;
-  vpx_highbd_v_predictor_8x8 = vpx_highbd_v_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_v_predictor_8x8 = vpx_highbd_v_predictor_8x8_sse2;
-  vpx_idct16x16_10_add = vpx_idct16x16_10_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct16x16_10_add = vpx_idct16x16_10_add_sse2;
-  vpx_idct16x16_1_add = vpx_idct16x16_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct16x16_1_add = vpx_idct16x16_1_add_sse2;
-  vpx_idct16x16_256_add = vpx_idct16x16_256_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct16x16_256_add = vpx_idct16x16_256_add_sse2;
-  vpx_idct16x16_38_add = vpx_idct16x16_38_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct16x16_38_add = vpx_idct16x16_38_add_sse2;
-  vpx_idct32x32_1024_add = vpx_idct32x32_1024_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct32x32_1024_add = vpx_idct32x32_1024_add_sse2;
-  vpx_idct32x32_135_add = vpx_idct32x32_135_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct32x32_135_add = vpx_idct32x32_135_add_sse2;
+  vpx_highbd_satd = vpx_highbd_satd_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_satd = vpx_highbd_satd_avx2;
+  vpx_idct32x32_135_add = vpx_idct32x32_135_add_sse2;
   if (flags & HAS_SSSE3)
     vpx_idct32x32_135_add = vpx_idct32x32_135_add_ssse3;
-  vpx_idct32x32_1_add = vpx_idct32x32_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct32x32_1_add = vpx_idct32x32_1_add_sse2;
-  vpx_idct32x32_34_add = vpx_idct32x32_34_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct32x32_34_add = vpx_idct32x32_34_add_sse2;
+  vpx_idct32x32_34_add = vpx_idct32x32_34_add_sse2;
   if (flags & HAS_SSSE3)
     vpx_idct32x32_34_add = vpx_idct32x32_34_add_ssse3;
-  vpx_idct4x4_16_add = vpx_idct4x4_16_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct4x4_16_add = vpx_idct4x4_16_add_sse2;
-  vpx_idct4x4_1_add = vpx_idct4x4_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct4x4_1_add = vpx_idct4x4_1_add_sse2;
-  vpx_idct8x8_12_add = vpx_idct8x8_12_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct8x8_12_add = vpx_idct8x8_12_add_sse2;
+  vpx_idct8x8_12_add = vpx_idct8x8_12_add_sse2;
   if (flags & HAS_SSSE3)
     vpx_idct8x8_12_add = vpx_idct8x8_12_add_ssse3;
-  vpx_idct8x8_1_add = vpx_idct8x8_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct8x8_1_add = vpx_idct8x8_1_add_sse2;
-  vpx_idct8x8_64_add = vpx_idct8x8_64_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct8x8_64_add = vpx_idct8x8_64_add_sse2;
-  vpx_int_pro_col = vpx_int_pro_col_c;
-  if (flags & HAS_SSE2)
-    vpx_int_pro_col = vpx_int_pro_col_sse2;
-  vpx_int_pro_row = vpx_int_pro_row_c;
-  if (flags & HAS_SSE2)
-    vpx_int_pro_row = vpx_int_pro_row_sse2;
-  vpx_iwht4x4_16_add = vpx_iwht4x4_16_add_c;
-  if (flags & HAS_SSE2)
-    vpx_iwht4x4_16_add = vpx_iwht4x4_16_add_sse2;
-  vpx_lpf_horizontal_16 = vpx_lpf_horizontal_16_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_16 = vpx_lpf_horizontal_16_sse2;
+  vpx_lpf_horizontal_16 = vpx_lpf_horizontal_16_sse2;
   if (flags & HAS_AVX2)
     vpx_lpf_horizontal_16 = vpx_lpf_horizontal_16_avx2;
-  vpx_lpf_horizontal_16_dual = vpx_lpf_horizontal_16_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_16_dual = vpx_lpf_horizontal_16_dual_sse2;
+  vpx_lpf_horizontal_16_dual = vpx_lpf_horizontal_16_dual_sse2;
   if (flags & HAS_AVX2)
     vpx_lpf_horizontal_16_dual = vpx_lpf_horizontal_16_dual_avx2;
-  vpx_lpf_horizontal_4 = vpx_lpf_horizontal_4_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_4 = vpx_lpf_horizontal_4_sse2;
-  vpx_lpf_horizontal_4_dual = vpx_lpf_horizontal_4_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_4_dual = vpx_lpf_horizontal_4_dual_sse2;
-  vpx_lpf_horizontal_8 = vpx_lpf_horizontal_8_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_8 = vpx_lpf_horizontal_8_sse2;
-  vpx_lpf_horizontal_8_dual = vpx_lpf_horizontal_8_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_8_dual = vpx_lpf_horizontal_8_dual_sse2;
-  vpx_lpf_vertical_16 = vpx_lpf_vertical_16_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_16 = vpx_lpf_vertical_16_sse2;
-  vpx_lpf_vertical_16_dual = vpx_lpf_vertical_16_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_16_dual = vpx_lpf_vertical_16_dual_sse2;
-  vpx_lpf_vertical_4 = vpx_lpf_vertical_4_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_4 = vpx_lpf_vertical_4_sse2;
-  vpx_lpf_vertical_4_dual = vpx_lpf_vertical_4_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_4_dual = vpx_lpf_vertical_4_dual_sse2;
-  vpx_lpf_vertical_8 = vpx_lpf_vertical_8_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_8 = vpx_lpf_vertical_8_sse2;
-  vpx_lpf_vertical_8_dual = vpx_lpf_vertical_8_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_8_dual = vpx_lpf_vertical_8_dual_sse2;
-  vpx_mbpost_proc_across_ip = vpx_mbpost_proc_across_ip_c;
-  if (flags & HAS_SSE2)
-    vpx_mbpost_proc_across_ip = vpx_mbpost_proc_across_ip_sse2;
-  vpx_mbpost_proc_down = vpx_mbpost_proc_down_c;
-  if (flags & HAS_SSE2)
-    vpx_mbpost_proc_down = vpx_mbpost_proc_down_sse2;
-  vpx_minmax_8x8 = vpx_minmax_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_minmax_8x8 = vpx_minmax_8x8_sse2;
-  vpx_mse16x16 = vpx_mse16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_mse16x16 = vpx_mse16x16_sse2;
+  vpx_mse16x16 = vpx_mse16x16_sse2;
   if (flags & HAS_AVX2)
     vpx_mse16x16 = vpx_mse16x16_avx2;
-  vpx_mse16x8 = vpx_mse16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_mse16x8 = vpx_mse16x8_sse2;
+  vpx_mse16x8 = vpx_mse16x8_sse2;
   if (flags & HAS_AVX2)
     vpx_mse16x8 = vpx_mse16x8_avx2;
-  vpx_mse8x16 = vpx_mse8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_mse8x16 = vpx_mse8x16_sse2;
-  vpx_mse8x8 = vpx_mse8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_mse8x8 = vpx_mse8x8_sse2;
-  vpx_plane_add_noise = vpx_plane_add_noise_c;
-  if (flags & HAS_SSE2)
-    vpx_plane_add_noise = vpx_plane_add_noise_sse2;
-  vpx_post_proc_down_and_across_mb_row = vpx_post_proc_down_and_across_mb_row_c;
-  if (flags & HAS_SSE2)
-    vpx_post_proc_down_and_across_mb_row =
-        vpx_post_proc_down_and_across_mb_row_sse2;
-  vpx_quantize_b = vpx_quantize_b_c;
-  if (flags & HAS_SSE2)
-    vpx_quantize_b = vpx_quantize_b_sse2;
+  vpx_quantize_b = vpx_quantize_b_sse2;
   if (flags & HAS_SSSE3)
     vpx_quantize_b = vpx_quantize_b_ssse3;
   if (flags & HAS_AVX)
@@ -10030,415 +7828,195 @@
     vpx_quantize_b_32x32 = vpx_quantize_b_32x32_ssse3;
   if (flags & HAS_AVX)
     vpx_quantize_b_32x32 = vpx_quantize_b_32x32_avx;
-  vpx_sad16x16 = vpx_sad16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x16 = vpx_sad16x16_sse2;
-  vpx_sad16x16_avg = vpx_sad16x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x16_avg = vpx_sad16x16_avg_sse2;
   vpx_sad16x16x3 = vpx_sad16x16x3_c;
   if (flags & HAS_SSE3)
     vpx_sad16x16x3 = vpx_sad16x16x3_sse3;
   if (flags & HAS_SSSE3)
     vpx_sad16x16x3 = vpx_sad16x16x3_ssse3;
-  vpx_sad16x16x4d = vpx_sad16x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x16x4d = vpx_sad16x16x4d_sse2;
   vpx_sad16x16x8 = vpx_sad16x16x8_c;
   if (flags & HAS_SSE4_1)
     vpx_sad16x16x8 = vpx_sad16x16x8_sse4_1;
-  vpx_sad16x32 = vpx_sad16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x32 = vpx_sad16x32_sse2;
-  vpx_sad16x32_avg = vpx_sad16x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x32_avg = vpx_sad16x32_avg_sse2;
-  vpx_sad16x32x4d = vpx_sad16x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x32x4d = vpx_sad16x32x4d_sse2;
-  vpx_sad16x8 = vpx_sad16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x8 = vpx_sad16x8_sse2;
-  vpx_sad16x8_avg = vpx_sad16x8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x8_avg = vpx_sad16x8_avg_sse2;
   vpx_sad16x8x3 = vpx_sad16x8x3_c;
   if (flags & HAS_SSE3)
     vpx_sad16x8x3 = vpx_sad16x8x3_sse3;
   if (flags & HAS_SSSE3)
     vpx_sad16x8x3 = vpx_sad16x8x3_ssse3;
-  vpx_sad16x8x4d = vpx_sad16x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x8x4d = vpx_sad16x8x4d_sse2;
   vpx_sad16x8x8 = vpx_sad16x8x8_c;
   if (flags & HAS_SSE4_1)
     vpx_sad16x8x8 = vpx_sad16x8x8_sse4_1;
-  vpx_sad32x16 = vpx_sad32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x16 = vpx_sad32x16_sse2;
+  vpx_sad32x16 = vpx_sad32x16_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x16 = vpx_sad32x16_avx2;
-  vpx_sad32x16_avg = vpx_sad32x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x16_avg = vpx_sad32x16_avg_sse2;
+  vpx_sad32x16_avg = vpx_sad32x16_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x16_avg = vpx_sad32x16_avg_avx2;
-  vpx_sad32x16x4d = vpx_sad32x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x16x4d = vpx_sad32x16x4d_sse2;
-  vpx_sad32x32 = vpx_sad32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x32 = vpx_sad32x32_sse2;
+  vpx_sad32x32 = vpx_sad32x32_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x32 = vpx_sad32x32_avx2;
-  vpx_sad32x32_avg = vpx_sad32x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x32_avg = vpx_sad32x32_avg_sse2;
+  vpx_sad32x32_avg = vpx_sad32x32_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x32_avg = vpx_sad32x32_avg_avx2;
-  vpx_sad32x32x4d = vpx_sad32x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x32x4d = vpx_sad32x32x4d_sse2;
+  vpx_sad32x32x4d = vpx_sad32x32x4d_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x32x4d = vpx_sad32x32x4d_avx2;
-  vpx_sad32x64 = vpx_sad32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x64 = vpx_sad32x64_sse2;
+  vpx_sad32x32x8 = vpx_sad32x32x8_c;
+  if (flags & HAS_AVX2)
+    vpx_sad32x32x8 = vpx_sad32x32x8_avx2;
+  vpx_sad32x64 = vpx_sad32x64_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x64 = vpx_sad32x64_avx2;
-  vpx_sad32x64_avg = vpx_sad32x64_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x64_avg = vpx_sad32x64_avg_sse2;
+  vpx_sad32x64_avg = vpx_sad32x64_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x64_avg = vpx_sad32x64_avg_avx2;
-  vpx_sad32x64x4d = vpx_sad32x64x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x64x4d = vpx_sad32x64x4d_sse2;
-  vpx_sad4x4 = vpx_sad4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x4 = vpx_sad4x4_sse2;
-  vpx_sad4x4_avg = vpx_sad4x4_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x4_avg = vpx_sad4x4_avg_sse2;
   vpx_sad4x4x3 = vpx_sad4x4x3_c;
   if (flags & HAS_SSE3)
     vpx_sad4x4x3 = vpx_sad4x4x3_sse3;
-  vpx_sad4x4x4d = vpx_sad4x4x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x4x4d = vpx_sad4x4x4d_sse2;
   vpx_sad4x4x8 = vpx_sad4x4x8_c;
   if (flags & HAS_SSE4_1)
     vpx_sad4x4x8 = vpx_sad4x4x8_sse4_1;
-  vpx_sad4x8 = vpx_sad4x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x8 = vpx_sad4x8_sse2;
-  vpx_sad4x8_avg = vpx_sad4x8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x8_avg = vpx_sad4x8_avg_sse2;
-  vpx_sad4x8x4d = vpx_sad4x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x8x4d = vpx_sad4x8x4d_sse2;
-  vpx_sad64x32 = vpx_sad64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x32 = vpx_sad64x32_sse2;
+  vpx_sad64x32 = vpx_sad64x32_sse2;
   if (flags & HAS_AVX2)
     vpx_sad64x32 = vpx_sad64x32_avx2;
-  vpx_sad64x32_avg = vpx_sad64x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x32_avg = vpx_sad64x32_avg_sse2;
+  vpx_sad64x32_avg = vpx_sad64x32_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_sad64x32_avg = vpx_sad64x32_avg_avx2;
-  vpx_sad64x32x4d = vpx_sad64x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x32x4d = vpx_sad64x32x4d_sse2;
-  vpx_sad64x64 = vpx_sad64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x64 = vpx_sad64x64_sse2;
+  vpx_sad64x64 = vpx_sad64x64_sse2;
   if (flags & HAS_AVX2)
     vpx_sad64x64 = vpx_sad64x64_avx2;
-  vpx_sad64x64_avg = vpx_sad64x64_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x64_avg = vpx_sad64x64_avg_sse2;
+  vpx_sad64x64_avg = vpx_sad64x64_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_sad64x64_avg = vpx_sad64x64_avg_avx2;
-  vpx_sad64x64x4d = vpx_sad64x64x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x64x4d = vpx_sad64x64x4d_sse2;
+  vpx_sad64x64x4d = vpx_sad64x64x4d_sse2;
   if (flags & HAS_AVX2)
     vpx_sad64x64x4d = vpx_sad64x64x4d_avx2;
-  vpx_sad8x16 = vpx_sad8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x16 = vpx_sad8x16_sse2;
-  vpx_sad8x16_avg = vpx_sad8x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x16_avg = vpx_sad8x16_avg_sse2;
   vpx_sad8x16x3 = vpx_sad8x16x3_c;
   if (flags & HAS_SSE3)
     vpx_sad8x16x3 = vpx_sad8x16x3_sse3;
-  vpx_sad8x16x4d = vpx_sad8x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x16x4d = vpx_sad8x16x4d_sse2;
   vpx_sad8x16x8 = vpx_sad8x16x8_c;
   if (flags & HAS_SSE4_1)
     vpx_sad8x16x8 = vpx_sad8x16x8_sse4_1;
-  vpx_sad8x4 = vpx_sad8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x4 = vpx_sad8x4_sse2;
-  vpx_sad8x4_avg = vpx_sad8x4_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x4_avg = vpx_sad8x4_avg_sse2;
-  vpx_sad8x4x4d = vpx_sad8x4x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x4x4d = vpx_sad8x4x4d_sse2;
-  vpx_sad8x8 = vpx_sad8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x8 = vpx_sad8x8_sse2;
-  vpx_sad8x8_avg = vpx_sad8x8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x8_avg = vpx_sad8x8_avg_sse2;
   vpx_sad8x8x3 = vpx_sad8x8x3_c;
   if (flags & HAS_SSE3)
     vpx_sad8x8x3 = vpx_sad8x8x3_sse3;
-  vpx_sad8x8x4d = vpx_sad8x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x8x4d = vpx_sad8x8x4d_sse2;
   vpx_sad8x8x8 = vpx_sad8x8x8_c;
   if (flags & HAS_SSE4_1)
     vpx_sad8x8x8 = vpx_sad8x8x8_sse4_1;
-  vpx_satd = vpx_satd_c;
-  if (flags & HAS_SSE2)
-    vpx_satd = vpx_satd_sse2;
+  vpx_satd = vpx_satd_sse2;
   if (flags & HAS_AVX2)
     vpx_satd = vpx_satd_avx2;
   vpx_scaled_2d = vpx_scaled_2d_c;
   if (flags & HAS_SSSE3)
     vpx_scaled_2d = vpx_scaled_2d_ssse3;
-  vpx_sub_pixel_avg_variance16x16 = vpx_sub_pixel_avg_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance16x16 = vpx_sub_pixel_avg_variance16x16_sse2;
+  vpx_sub_pixel_avg_variance16x16 = vpx_sub_pixel_avg_variance16x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance16x16 = vpx_sub_pixel_avg_variance16x16_ssse3;
-  vpx_sub_pixel_avg_variance16x32 = vpx_sub_pixel_avg_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance16x32 = vpx_sub_pixel_avg_variance16x32_sse2;
+  vpx_sub_pixel_avg_variance16x32 = vpx_sub_pixel_avg_variance16x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance16x32 = vpx_sub_pixel_avg_variance16x32_ssse3;
-  vpx_sub_pixel_avg_variance16x8 = vpx_sub_pixel_avg_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance16x8 = vpx_sub_pixel_avg_variance16x8_sse2;
+  vpx_sub_pixel_avg_variance16x8 = vpx_sub_pixel_avg_variance16x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance16x8 = vpx_sub_pixel_avg_variance16x8_ssse3;
-  vpx_sub_pixel_avg_variance32x16 = vpx_sub_pixel_avg_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance32x16 = vpx_sub_pixel_avg_variance32x16_sse2;
+  vpx_sub_pixel_avg_variance32x16 = vpx_sub_pixel_avg_variance32x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance32x16 = vpx_sub_pixel_avg_variance32x16_ssse3;
-  vpx_sub_pixel_avg_variance32x32 = vpx_sub_pixel_avg_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance32x32 = vpx_sub_pixel_avg_variance32x32_sse2;
+  vpx_sub_pixel_avg_variance32x32 = vpx_sub_pixel_avg_variance32x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance32x32 = vpx_sub_pixel_avg_variance32x32_ssse3;
   if (flags & HAS_AVX2)
     vpx_sub_pixel_avg_variance32x32 = vpx_sub_pixel_avg_variance32x32_avx2;
-  vpx_sub_pixel_avg_variance32x64 = vpx_sub_pixel_avg_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance32x64 = vpx_sub_pixel_avg_variance32x64_sse2;
+  vpx_sub_pixel_avg_variance32x64 = vpx_sub_pixel_avg_variance32x64_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance32x64 = vpx_sub_pixel_avg_variance32x64_ssse3;
-  vpx_sub_pixel_avg_variance4x4 = vpx_sub_pixel_avg_variance4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance4x4 = vpx_sub_pixel_avg_variance4x4_sse2;
+  vpx_sub_pixel_avg_variance4x4 = vpx_sub_pixel_avg_variance4x4_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance4x4 = vpx_sub_pixel_avg_variance4x4_ssse3;
-  vpx_sub_pixel_avg_variance4x8 = vpx_sub_pixel_avg_variance4x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance4x8 = vpx_sub_pixel_avg_variance4x8_sse2;
+  vpx_sub_pixel_avg_variance4x8 = vpx_sub_pixel_avg_variance4x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance4x8 = vpx_sub_pixel_avg_variance4x8_ssse3;
-  vpx_sub_pixel_avg_variance64x32 = vpx_sub_pixel_avg_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance64x32 = vpx_sub_pixel_avg_variance64x32_sse2;
+  vpx_sub_pixel_avg_variance64x32 = vpx_sub_pixel_avg_variance64x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance64x32 = vpx_sub_pixel_avg_variance64x32_ssse3;
-  vpx_sub_pixel_avg_variance64x64 = vpx_sub_pixel_avg_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance64x64 = vpx_sub_pixel_avg_variance64x64_sse2;
+  vpx_sub_pixel_avg_variance64x64 = vpx_sub_pixel_avg_variance64x64_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance64x64 = vpx_sub_pixel_avg_variance64x64_ssse3;
   if (flags & HAS_AVX2)
     vpx_sub_pixel_avg_variance64x64 = vpx_sub_pixel_avg_variance64x64_avx2;
-  vpx_sub_pixel_avg_variance8x16 = vpx_sub_pixel_avg_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance8x16 = vpx_sub_pixel_avg_variance8x16_sse2;
+  vpx_sub_pixel_avg_variance8x16 = vpx_sub_pixel_avg_variance8x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance8x16 = vpx_sub_pixel_avg_variance8x16_ssse3;
-  vpx_sub_pixel_avg_variance8x4 = vpx_sub_pixel_avg_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance8x4 = vpx_sub_pixel_avg_variance8x4_sse2;
+  vpx_sub_pixel_avg_variance8x4 = vpx_sub_pixel_avg_variance8x4_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance8x4 = vpx_sub_pixel_avg_variance8x4_ssse3;
-  vpx_sub_pixel_avg_variance8x8 = vpx_sub_pixel_avg_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance8x8 = vpx_sub_pixel_avg_variance8x8_sse2;
+  vpx_sub_pixel_avg_variance8x8 = vpx_sub_pixel_avg_variance8x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance8x8 = vpx_sub_pixel_avg_variance8x8_ssse3;
-  vpx_sub_pixel_variance16x16 = vpx_sub_pixel_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance16x16 = vpx_sub_pixel_variance16x16_sse2;
+  vpx_sub_pixel_variance16x16 = vpx_sub_pixel_variance16x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance16x16 = vpx_sub_pixel_variance16x16_ssse3;
-  vpx_sub_pixel_variance16x32 = vpx_sub_pixel_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance16x32 = vpx_sub_pixel_variance16x32_sse2;
+  vpx_sub_pixel_variance16x32 = vpx_sub_pixel_variance16x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance16x32 = vpx_sub_pixel_variance16x32_ssse3;
-  vpx_sub_pixel_variance16x8 = vpx_sub_pixel_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance16x8 = vpx_sub_pixel_variance16x8_sse2;
+  vpx_sub_pixel_variance16x8 = vpx_sub_pixel_variance16x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance16x8 = vpx_sub_pixel_variance16x8_ssse3;
-  vpx_sub_pixel_variance32x16 = vpx_sub_pixel_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance32x16 = vpx_sub_pixel_variance32x16_sse2;
+  vpx_sub_pixel_variance32x16 = vpx_sub_pixel_variance32x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance32x16 = vpx_sub_pixel_variance32x16_ssse3;
-  vpx_sub_pixel_variance32x32 = vpx_sub_pixel_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance32x32 = vpx_sub_pixel_variance32x32_sse2;
+  vpx_sub_pixel_variance32x32 = vpx_sub_pixel_variance32x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance32x32 = vpx_sub_pixel_variance32x32_ssse3;
   if (flags & HAS_AVX2)
     vpx_sub_pixel_variance32x32 = vpx_sub_pixel_variance32x32_avx2;
-  vpx_sub_pixel_variance32x64 = vpx_sub_pixel_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance32x64 = vpx_sub_pixel_variance32x64_sse2;
+  vpx_sub_pixel_variance32x64 = vpx_sub_pixel_variance32x64_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance32x64 = vpx_sub_pixel_variance32x64_ssse3;
-  vpx_sub_pixel_variance4x4 = vpx_sub_pixel_variance4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance4x4 = vpx_sub_pixel_variance4x4_sse2;
+  vpx_sub_pixel_variance4x4 = vpx_sub_pixel_variance4x4_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance4x4 = vpx_sub_pixel_variance4x4_ssse3;
-  vpx_sub_pixel_variance4x8 = vpx_sub_pixel_variance4x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance4x8 = vpx_sub_pixel_variance4x8_sse2;
+  vpx_sub_pixel_variance4x8 = vpx_sub_pixel_variance4x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance4x8 = vpx_sub_pixel_variance4x8_ssse3;
-  vpx_sub_pixel_variance64x32 = vpx_sub_pixel_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance64x32 = vpx_sub_pixel_variance64x32_sse2;
+  vpx_sub_pixel_variance64x32 = vpx_sub_pixel_variance64x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance64x32 = vpx_sub_pixel_variance64x32_ssse3;
-  vpx_sub_pixel_variance64x64 = vpx_sub_pixel_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance64x64 = vpx_sub_pixel_variance64x64_sse2;
+  vpx_sub_pixel_variance64x64 = vpx_sub_pixel_variance64x64_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance64x64 = vpx_sub_pixel_variance64x64_ssse3;
   if (flags & HAS_AVX2)
     vpx_sub_pixel_variance64x64 = vpx_sub_pixel_variance64x64_avx2;
-  vpx_sub_pixel_variance8x16 = vpx_sub_pixel_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance8x16 = vpx_sub_pixel_variance8x16_sse2;
+  vpx_sub_pixel_variance8x16 = vpx_sub_pixel_variance8x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance8x16 = vpx_sub_pixel_variance8x16_ssse3;
-  vpx_sub_pixel_variance8x4 = vpx_sub_pixel_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance8x4 = vpx_sub_pixel_variance8x4_sse2;
+  vpx_sub_pixel_variance8x4 = vpx_sub_pixel_variance8x4_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance8x4 = vpx_sub_pixel_variance8x4_ssse3;
-  vpx_sub_pixel_variance8x8 = vpx_sub_pixel_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance8x8 = vpx_sub_pixel_variance8x8_sse2;
+  vpx_sub_pixel_variance8x8 = vpx_sub_pixel_variance8x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance8x8 = vpx_sub_pixel_variance8x8_ssse3;
-  vpx_subtract_block = vpx_subtract_block_c;
-  if (flags & HAS_SSE2)
-    vpx_subtract_block = vpx_subtract_block_sse2;
-  vpx_sum_squares_2d_i16 = vpx_sum_squares_2d_i16_c;
-  if (flags & HAS_SSE2)
-    vpx_sum_squares_2d_i16 = vpx_sum_squares_2d_i16_sse2;
-  vpx_tm_predictor_16x16 = vpx_tm_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_tm_predictor_16x16 = vpx_tm_predictor_16x16_sse2;
-  vpx_tm_predictor_32x32 = vpx_tm_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_tm_predictor_32x32 = vpx_tm_predictor_32x32_sse2;
-  vpx_tm_predictor_4x4 = vpx_tm_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_tm_predictor_4x4 = vpx_tm_predictor_4x4_sse2;
-  vpx_tm_predictor_8x8 = vpx_tm_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_tm_predictor_8x8 = vpx_tm_predictor_8x8_sse2;
-  vpx_v_predictor_16x16 = vpx_v_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_v_predictor_16x16 = vpx_v_predictor_16x16_sse2;
-  vpx_v_predictor_32x32 = vpx_v_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_v_predictor_32x32 = vpx_v_predictor_32x32_sse2;
-  vpx_v_predictor_4x4 = vpx_v_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_v_predictor_4x4 = vpx_v_predictor_4x4_sse2;
-  vpx_v_predictor_8x8 = vpx_v_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_v_predictor_8x8 = vpx_v_predictor_8x8_sse2;
-  vpx_variance16x16 = vpx_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_variance16x16 = vpx_variance16x16_sse2;
+  vpx_variance16x16 = vpx_variance16x16_sse2;
   if (flags & HAS_AVX2)
     vpx_variance16x16 = vpx_variance16x16_avx2;
-  vpx_variance16x32 = vpx_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_variance16x32 = vpx_variance16x32_sse2;
+  vpx_variance16x32 = vpx_variance16x32_sse2;
   if (flags & HAS_AVX2)
     vpx_variance16x32 = vpx_variance16x32_avx2;
-  vpx_variance16x8 = vpx_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_variance16x8 = vpx_variance16x8_sse2;
+  vpx_variance16x8 = vpx_variance16x8_sse2;
   if (flags & HAS_AVX2)
     vpx_variance16x8 = vpx_variance16x8_avx2;
-  vpx_variance32x16 = vpx_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_variance32x16 = vpx_variance32x16_sse2;
+  vpx_variance32x16 = vpx_variance32x16_sse2;
   if (flags & HAS_AVX2)
     vpx_variance32x16 = vpx_variance32x16_avx2;
-  vpx_variance32x32 = vpx_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_variance32x32 = vpx_variance32x32_sse2;
+  vpx_variance32x32 = vpx_variance32x32_sse2;
   if (flags & HAS_AVX2)
     vpx_variance32x32 = vpx_variance32x32_avx2;
-  vpx_variance32x64 = vpx_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_variance32x64 = vpx_variance32x64_sse2;
+  vpx_variance32x64 = vpx_variance32x64_sse2;
   if (flags & HAS_AVX2)
     vpx_variance32x64 = vpx_variance32x64_avx2;
-  vpx_variance4x4 = vpx_variance4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_variance4x4 = vpx_variance4x4_sse2;
-  vpx_variance4x8 = vpx_variance4x8_c;
-  if (flags & HAS_SSE2)
-    vpx_variance4x8 = vpx_variance4x8_sse2;
-  vpx_variance64x32 = vpx_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_variance64x32 = vpx_variance64x32_sse2;
+  vpx_variance64x32 = vpx_variance64x32_sse2;
   if (flags & HAS_AVX2)
     vpx_variance64x32 = vpx_variance64x32_avx2;
-  vpx_variance64x64 = vpx_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_variance64x64 = vpx_variance64x64_sse2;
+  vpx_variance64x64 = vpx_variance64x64_sse2;
   if (flags & HAS_AVX2)
     vpx_variance64x64 = vpx_variance64x64_avx2;
-  vpx_variance8x16 = vpx_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_variance8x16 = vpx_variance8x16_sse2;
-  vpx_variance8x4 = vpx_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_variance8x4 = vpx_variance8x4_sse2;
-  vpx_variance8x8 = vpx_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_variance8x8 = vpx_variance8x8_sse2;
-  vpx_vector_var = vpx_vector_var_c;
-  if (flags & HAS_SSE2)
-    vpx_vector_var = vpx_vector_var_sse2;
 }
 #endif
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vp8_rtcd.h
index dc054d6..aa475b5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vp8_rtcd.h
@@ -27,44 +27,44 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
 #define vp8_bilinear_predict16x16 vp8_bilinear_predict16x16_c
 
-void vp8_bilinear_predict4x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_c
 
-void vp8_bilinear_predict8x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_c
 
-void vp8_bilinear_predict8x8_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_bilinear_predict8x8 vp8_bilinear_predict8x8_c
 
 void vp8_blend_b_c(unsigned char* y,
                    unsigned char* u,
                    unsigned char* v,
-                   int y1,
-                   int u1,
-                   int v1,
+                   int y_1,
+                   int u_1,
+                   int v_1,
                    int alpha,
                    int stride);
 #define vp8_blend_b vp8_blend_b_c
@@ -72,9 +72,9 @@
 void vp8_blend_mb_inner_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
@@ -82,9 +82,9 @@
 void vp8_blend_mb_outer_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
@@ -92,28 +92,35 @@
 int vp8_block_error_c(short* coeff, short* dqcoeff);
 #define vp8_block_error vp8_block_error_c
 
+void vp8_copy32xn_c(const unsigned char* src_ptr,
+                    int src_stride,
+                    unsigned char* dst_ptr,
+                    int dst_stride,
+                    int height);
+#define vp8_copy32xn vp8_copy32xn_c
+
 void vp8_copy_mem16x16_c(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 #define vp8_copy_mem16x16 vp8_copy_mem16x16_c
 
 void vp8_copy_mem8x4_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 #define vp8_copy_mem8x4 vp8_copy_mem8x4_c
 
 void vp8_copy_mem8x8_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 #define vp8_copy_mem8x8 vp8_copy_mem8x8_c
 
-void vp8_dc_only_idct_add_c(short input,
-                            unsigned char* pred,
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
                             int pred_stride,
-                            unsigned char* dst,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 #define vp8_dc_only_idct_add vp8_dc_only_idct_add_c
 
@@ -139,7 +146,7 @@
 
 void vp8_dequant_idct_add_c(short* input,
                             short* dq,
-                            unsigned char* output,
+                            unsigned char* dest,
                             int stride);
 #define vp8_dequant_idct_add vp8_dequant_idct_add_c
 
@@ -158,7 +165,7 @@
                                     char* eobs);
 #define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_c
 
-void vp8_dequantize_b_c(struct blockd*, short* dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
 #define vp8_dequantize_b vp8_dequantize_b_c
 
 int vp8_diamond_search_sad_c(struct macroblock* x,
@@ -209,55 +216,55 @@
                           union int_mv* center_mv);
 #define vp8_full_search_sad vp8_full_search_sad_c
 
-void vp8_loop_filter_bh_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
 #define vp8_loop_filter_bh vp8_loop_filter_bh_c
 
-void vp8_loop_filter_bv_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
 #define vp8_loop_filter_bv vp8_loop_filter_bv_c
 
-void vp8_loop_filter_mbh_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbh vp8_loop_filter_mbh_c
 
-void vp8_loop_filter_mbv_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbv vp8_loop_filter_mbv_c
 
-void vp8_loop_filter_bhs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
 #define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_c
 
-void vp8_loop_filter_bvs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
 #define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_c
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y,
-                                              int ystride,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbh vp8_loop_filter_simple_horizontal_edge_c
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y,
-                                            int ystride,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
                                             const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbv vp8_loop_filter_simple_vertical_edge_c
 
@@ -271,8 +278,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -288,50 +295,50 @@
 #define vp8_short_fdct8x4 vp8_short_fdct8x4_c
 
 void vp8_short_idct4x4llm_c(short* input,
-                            unsigned char* pred,
-                            int pitch,
-                            unsigned char* dst,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 #define vp8_short_idct4x4llm vp8_short_idct4x4llm_c
 
-void vp8_short_inv_walsh4x4_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_c
 
-void vp8_short_inv_walsh4x4_1_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
 void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
 #define vp8_short_walsh4x4 vp8_short_walsh4x4_c
 
-void vp8_sixtap_predict16x16_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_sixtap_predict16x16 vp8_sixtap_predict16x16_c
 
-void vp8_sixtap_predict4x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
 #define vp8_sixtap_predict4x4 vp8_sixtap_predict4x4_c
 
-void vp8_sixtap_predict8x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
 #define vp8_sixtap_predict8x4 vp8_sixtap_predict8x4_c
 
-void vp8_sixtap_predict8x8_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
 #define vp8_sixtap_predict8x8 vp8_sixtap_predict8x8_c
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vp9_rtcd.h
index 1eb9db5..9e956fb 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vp9_rtcd.h
@@ -64,21 +64,6 @@
                              const struct mv* center_mv);
 #define vp9_diamond_search_sad vp9_diamond_search_sad_c
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-#define vp9_fdct8x8_quant vp9_fdct8x8_quant_c
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -115,8 +100,8 @@
 #define vp9_fwht4x4 vp9_fwht4x4_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 #define vp9_iht16x16_256_add vp9_iht16x16_256_add_c
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vpx_config.c
index e250383..7ef6d84 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=mips64-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs";
+static const char* const cfg = "--target=mips64-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vpx_config.h
index a7a2938..f79adef 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      inline
+#define VPX_ARCH_ARM 0
 #define ARCH_ARM 0
+#define VPX_ARCH_MIPS 1
 #define ARCH_MIPS 1
+#define VPX_ARCH_X86 0
 #define ARCH_X86 0
+#define VPX_ARCH_X86_64 0
 #define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 0
 #define HAVE_NEON_ASM 0
@@ -78,21 +83,25 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
 #define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 0
 #define CONFIG_BETTER_HW_COMPATIBILITY 0
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
-#define CONFIG_SPATIAL_SVC 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vpx_dsp_rtcd.h
index 8703d05..63a0f75 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mips64el/vpx_dsp_rtcd.h
@@ -139,253 +139,253 @@
 #define vpx_convolve_copy vpx_convolve_copy_c
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_c
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_c
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_c
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_c
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_16x16 vpx_d153_predictor_16x16_c
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_32x32 vpx_d153_predictor_32x32_c
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_4x4 vpx_d153_predictor_4x4_c
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_8x8 vpx_d153_predictor_8x8_c
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_16x16 vpx_d207_predictor_16x16_c
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_32x32 vpx_d207_predictor_32x32_c
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_c
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_8x8 vpx_d207_predictor_8x8_c
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d45_predictor_16x16 vpx_d45_predictor_16x16_c
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d45_predictor_32x32 vpx_d45_predictor_32x32_c
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_c
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_c
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_16x16 vpx_d63_predictor_16x16_c
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_32x32 vpx_d63_predictor_32x32_c
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_4x4 vpx_d63_predictor_4x4_c
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_8x8 vpx_d63_predictor_8x8_c
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_c
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_c
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_c
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_c
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_c
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_c
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_c
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_c
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_c
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_c
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_c
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_c
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_c
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_c
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_c
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_c
@@ -418,7 +418,7 @@
 #define vpx_fdct8x8_1 vpx_fdct8x8_1_c
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
@@ -426,13 +426,13 @@
 #define vpx_get16x16var vpx_get16x16var_c
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 #define vpx_get4x4sse_cs vpx_get4x4sse_cs_c
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
@@ -443,25 +443,25 @@
 #define vpx_get_mb_ss vpx_get_mb_ss_c
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_h_predictor_16x16 vpx_h_predictor_16x16_c
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_h_predictor_32x32 vpx_h_predictor_32x32_c
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_h_predictor_4x4 vpx_h_predictor_4x4_c
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_h_predictor_8x8 vpx_h_predictor_8x8_c
@@ -482,7 +482,7 @@
 #define vpx_hadamard_8x8 vpx_hadamard_8x8_c
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
@@ -643,7 +643,7 @@
                                const uint8_t* thresh1);
 #define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_c
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
@@ -666,30 +666,30 @@
 #define vpx_minmax_8x8 vpx_minmax_8x8_c
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 #define vpx_mse16x16 vpx_mse16x16_c
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse16x8 vpx_mse16x8_c
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse8x16 vpx_mse8x16_c
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 #define vpx_mse8x8 vpx_mse8x8_c
 
@@ -764,7 +764,7 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad16x16x4d vpx_sad16x16x4d_c
@@ -791,7 +791,7 @@
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad16x32x4d vpx_sad16x32x4d_c
@@ -818,7 +818,7 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 #define vpx_sad16x8x4d vpx_sad16x8x4d_c
@@ -845,7 +845,7 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad32x16x4d vpx_sad32x16x4d_c
@@ -865,11 +865,18 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad32x32x4d vpx_sad32x32x4d_c
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad32x32x8 vpx_sad32x32x8_c
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -885,7 +892,7 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad32x64x4d vpx_sad32x64x4d_c
@@ -912,7 +919,7 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad4x4x4d vpx_sad4x4x4d_c
@@ -939,7 +946,7 @@
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad4x8x4d vpx_sad4x8x4d_c
@@ -959,7 +966,7 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad64x32x4d vpx_sad64x32x4d_c
@@ -979,7 +986,7 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad64x64x4d vpx_sad64x64x4d_c
@@ -1006,7 +1013,7 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 #define vpx_sad8x16x4d vpx_sad8x16x4d_c
@@ -1033,7 +1040,7 @@
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad8x4x4d vpx_sad8x4x4d_c
@@ -1060,7 +1067,7 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad8x8x4d vpx_sad8x8x4d_c
@@ -1154,9 +1161,9 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1164,9 +1171,9 @@
 #define vpx_sub_pixel_avg_variance16x16 vpx_sub_pixel_avg_variance16x16_c
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1174,9 +1181,9 @@
 #define vpx_sub_pixel_avg_variance16x32 vpx_sub_pixel_avg_variance16x32_c
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
@@ -1184,9 +1191,9 @@
 #define vpx_sub_pixel_avg_variance16x8 vpx_sub_pixel_avg_variance16x8_c
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1194,9 +1201,9 @@
 #define vpx_sub_pixel_avg_variance32x16 vpx_sub_pixel_avg_variance32x16_c
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1204,9 +1211,9 @@
 #define vpx_sub_pixel_avg_variance32x32 vpx_sub_pixel_avg_variance32x32_c
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1214,9 +1221,9 @@
 #define vpx_sub_pixel_avg_variance32x64 vpx_sub_pixel_avg_variance32x64_c
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -1224,9 +1231,9 @@
 #define vpx_sub_pixel_avg_variance4x4 vpx_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -1234,9 +1241,9 @@
 #define vpx_sub_pixel_avg_variance4x8 vpx_sub_pixel_avg_variance4x8_c
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1244,9 +1251,9 @@
 #define vpx_sub_pixel_avg_variance64x32 vpx_sub_pixel_avg_variance64x32_c
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1254,9 +1261,9 @@
 #define vpx_sub_pixel_avg_variance64x64 vpx_sub_pixel_avg_variance64x64_c
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
@@ -1264,9 +1271,9 @@
 #define vpx_sub_pixel_avg_variance8x16 vpx_sub_pixel_avg_variance8x16_c
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -1274,9 +1281,9 @@
 #define vpx_sub_pixel_avg_variance8x4 vpx_sub_pixel_avg_variance8x4_c
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -1284,117 +1291,117 @@
 #define vpx_sub_pixel_avg_variance8x8 vpx_sub_pixel_avg_variance8x8_c
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance16x16 vpx_sub_pixel_variance16x16_c
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance16x32 vpx_sub_pixel_variance16x32_c
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 #define vpx_sub_pixel_variance16x8 vpx_sub_pixel_variance16x8_c
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance32x16 vpx_sub_pixel_variance32x16_c
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance32x32 vpx_sub_pixel_variance32x32_c
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance32x64 vpx_sub_pixel_variance32x64_c
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 #define vpx_sub_pixel_variance4x4 vpx_sub_pixel_variance4x4_c
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 #define vpx_sub_pixel_variance4x8 vpx_sub_pixel_variance4x8_c
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance64x32 vpx_sub_pixel_variance64x32_c
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance64x64 vpx_sub_pixel_variance64x64_c
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 #define vpx_sub_pixel_variance8x16 vpx_sub_pixel_variance8x16_c
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 #define vpx_sub_pixel_variance8x4 vpx_sub_pixel_variance8x4_c
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
@@ -1414,146 +1421,146 @@
 #define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_c
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_c
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_c
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_c
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_c
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_v_predictor_16x16 vpx_v_predictor_16x16_c
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_v_predictor_32x32 vpx_v_predictor_32x32_c
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_v_predictor_4x4 vpx_v_predictor_4x4_c
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_v_predictor_8x8 vpx_v_predictor_8x8_c
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance16x16 vpx_variance16x16_c
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance16x32 vpx_variance16x32_c
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 #define vpx_variance16x8 vpx_variance16x8_c
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance32x16 vpx_variance32x16_c
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance32x32 vpx_variance32x32_c
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance32x64 vpx_variance32x64_c
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance4x4 vpx_variance4x4_c
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance4x8 vpx_variance4x8_c
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance64x32 vpx_variance64x32_c
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance64x64 vpx_variance64x64_c
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 #define vpx_variance8x16 vpx_variance8x16_c
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance8x4 vpx_variance8x4_c
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance8x8 vpx_variance8x8_c
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vp8_rtcd.h
index dc054d6..aa475b5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vp8_rtcd.h
@@ -27,44 +27,44 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
 #define vp8_bilinear_predict16x16 vp8_bilinear_predict16x16_c
 
-void vp8_bilinear_predict4x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_c
 
-void vp8_bilinear_predict8x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_c
 
-void vp8_bilinear_predict8x8_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_bilinear_predict8x8 vp8_bilinear_predict8x8_c
 
 void vp8_blend_b_c(unsigned char* y,
                    unsigned char* u,
                    unsigned char* v,
-                   int y1,
-                   int u1,
-                   int v1,
+                   int y_1,
+                   int u_1,
+                   int v_1,
                    int alpha,
                    int stride);
 #define vp8_blend_b vp8_blend_b_c
@@ -72,9 +72,9 @@
 void vp8_blend_mb_inner_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
@@ -82,9 +82,9 @@
 void vp8_blend_mb_outer_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
@@ -92,28 +92,35 @@
 int vp8_block_error_c(short* coeff, short* dqcoeff);
 #define vp8_block_error vp8_block_error_c
 
+void vp8_copy32xn_c(const unsigned char* src_ptr,
+                    int src_stride,
+                    unsigned char* dst_ptr,
+                    int dst_stride,
+                    int height);
+#define vp8_copy32xn vp8_copy32xn_c
+
 void vp8_copy_mem16x16_c(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 #define vp8_copy_mem16x16 vp8_copy_mem16x16_c
 
 void vp8_copy_mem8x4_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 #define vp8_copy_mem8x4 vp8_copy_mem8x4_c
 
 void vp8_copy_mem8x8_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 #define vp8_copy_mem8x8 vp8_copy_mem8x8_c
 
-void vp8_dc_only_idct_add_c(short input,
-                            unsigned char* pred,
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
                             int pred_stride,
-                            unsigned char* dst,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 #define vp8_dc_only_idct_add vp8_dc_only_idct_add_c
 
@@ -139,7 +146,7 @@
 
 void vp8_dequant_idct_add_c(short* input,
                             short* dq,
-                            unsigned char* output,
+                            unsigned char* dest,
                             int stride);
 #define vp8_dequant_idct_add vp8_dequant_idct_add_c
 
@@ -158,7 +165,7 @@
                                     char* eobs);
 #define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_c
 
-void vp8_dequantize_b_c(struct blockd*, short* dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
 #define vp8_dequantize_b vp8_dequantize_b_c
 
 int vp8_diamond_search_sad_c(struct macroblock* x,
@@ -209,55 +216,55 @@
                           union int_mv* center_mv);
 #define vp8_full_search_sad vp8_full_search_sad_c
 
-void vp8_loop_filter_bh_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
 #define vp8_loop_filter_bh vp8_loop_filter_bh_c
 
-void vp8_loop_filter_bv_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
 #define vp8_loop_filter_bv vp8_loop_filter_bv_c
 
-void vp8_loop_filter_mbh_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbh vp8_loop_filter_mbh_c
 
-void vp8_loop_filter_mbv_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbv vp8_loop_filter_mbv_c
 
-void vp8_loop_filter_bhs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
 #define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_c
 
-void vp8_loop_filter_bvs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
 #define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_c
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y,
-                                              int ystride,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbh vp8_loop_filter_simple_horizontal_edge_c
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y,
-                                            int ystride,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
                                             const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbv vp8_loop_filter_simple_vertical_edge_c
 
@@ -271,8 +278,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -288,50 +295,50 @@
 #define vp8_short_fdct8x4 vp8_short_fdct8x4_c
 
 void vp8_short_idct4x4llm_c(short* input,
-                            unsigned char* pred,
-                            int pitch,
-                            unsigned char* dst,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 #define vp8_short_idct4x4llm vp8_short_idct4x4llm_c
 
-void vp8_short_inv_walsh4x4_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_c
 
-void vp8_short_inv_walsh4x4_1_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
 void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
 #define vp8_short_walsh4x4 vp8_short_walsh4x4_c
 
-void vp8_sixtap_predict16x16_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_sixtap_predict16x16 vp8_sixtap_predict16x16_c
 
-void vp8_sixtap_predict4x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
 #define vp8_sixtap_predict4x4 vp8_sixtap_predict4x4_c
 
-void vp8_sixtap_predict8x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
 #define vp8_sixtap_predict8x4 vp8_sixtap_predict8x4_c
 
-void vp8_sixtap_predict8x8_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
 #define vp8_sixtap_predict8x8 vp8_sixtap_predict8x8_c
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vp9_rtcd.h
index 1eb9db5..9e956fb 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vp9_rtcd.h
@@ -64,21 +64,6 @@
                              const struct mv* center_mv);
 #define vp9_diamond_search_sad vp9_diamond_search_sad_c
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-#define vp9_fdct8x8_quant vp9_fdct8x8_quant_c
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -115,8 +100,8 @@
 #define vp9_fwht4x4 vp9_fwht4x4_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 #define vp9_iht16x16_256_add vp9_iht16x16_256_add_c
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vpx_config.c
index 62b8e79..fe1bbf7 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=mips32-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs";
+static const char* const cfg = "--target=mips32-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vpx_config.h
index 533077f..a0cdb81 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      inline
+#define VPX_ARCH_ARM 0
 #define ARCH_ARM 0
+#define VPX_ARCH_MIPS 1
 #define ARCH_MIPS 1
+#define VPX_ARCH_X86 0
 #define ARCH_X86 0
+#define VPX_ARCH_X86_64 0
 #define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 0
 #define HAVE_NEON_ASM 0
@@ -78,21 +83,25 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
 #define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 0
 #define CONFIG_BETTER_HW_COMPATIBILITY 0
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
-#define CONFIG_SPATIAL_SVC 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vpx_dsp_rtcd.h
index 8703d05..63a0f75 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/mipsel/vpx_dsp_rtcd.h
@@ -139,253 +139,253 @@
 #define vpx_convolve_copy vpx_convolve_copy_c
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_c
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_c
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_c
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_c
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_16x16 vpx_d153_predictor_16x16_c
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_32x32 vpx_d153_predictor_32x32_c
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_4x4 vpx_d153_predictor_4x4_c
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_8x8 vpx_d153_predictor_8x8_c
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_16x16 vpx_d207_predictor_16x16_c
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_32x32 vpx_d207_predictor_32x32_c
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_c
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_8x8 vpx_d207_predictor_8x8_c
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d45_predictor_16x16 vpx_d45_predictor_16x16_c
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d45_predictor_32x32 vpx_d45_predictor_32x32_c
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_c
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_c
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_16x16 vpx_d63_predictor_16x16_c
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_32x32 vpx_d63_predictor_32x32_c
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_4x4 vpx_d63_predictor_4x4_c
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_8x8 vpx_d63_predictor_8x8_c
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_c
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_c
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_c
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_c
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_c
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_c
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_c
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_c
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_c
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_c
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_c
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_c
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_c
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_c
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_c
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_c
@@ -418,7 +418,7 @@
 #define vpx_fdct8x8_1 vpx_fdct8x8_1_c
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
@@ -426,13 +426,13 @@
 #define vpx_get16x16var vpx_get16x16var_c
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 #define vpx_get4x4sse_cs vpx_get4x4sse_cs_c
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
@@ -443,25 +443,25 @@
 #define vpx_get_mb_ss vpx_get_mb_ss_c
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_h_predictor_16x16 vpx_h_predictor_16x16_c
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_h_predictor_32x32 vpx_h_predictor_32x32_c
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_h_predictor_4x4 vpx_h_predictor_4x4_c
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_h_predictor_8x8 vpx_h_predictor_8x8_c
@@ -482,7 +482,7 @@
 #define vpx_hadamard_8x8 vpx_hadamard_8x8_c
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
@@ -643,7 +643,7 @@
                                const uint8_t* thresh1);
 #define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_c
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
@@ -666,30 +666,30 @@
 #define vpx_minmax_8x8 vpx_minmax_8x8_c
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 #define vpx_mse16x16 vpx_mse16x16_c
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse16x8 vpx_mse16x8_c
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse8x16 vpx_mse8x16_c
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 #define vpx_mse8x8 vpx_mse8x8_c
 
@@ -764,7 +764,7 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad16x16x4d vpx_sad16x16x4d_c
@@ -791,7 +791,7 @@
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad16x32x4d vpx_sad16x32x4d_c
@@ -818,7 +818,7 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 #define vpx_sad16x8x4d vpx_sad16x8x4d_c
@@ -845,7 +845,7 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad32x16x4d vpx_sad32x16x4d_c
@@ -865,11 +865,18 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad32x32x4d vpx_sad32x32x4d_c
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad32x32x8 vpx_sad32x32x8_c
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -885,7 +892,7 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad32x64x4d vpx_sad32x64x4d_c
@@ -912,7 +919,7 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad4x4x4d vpx_sad4x4x4d_c
@@ -939,7 +946,7 @@
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad4x8x4d vpx_sad4x8x4d_c
@@ -959,7 +966,7 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad64x32x4d vpx_sad64x32x4d_c
@@ -979,7 +986,7 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad64x64x4d vpx_sad64x64x4d_c
@@ -1006,7 +1013,7 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 #define vpx_sad8x16x4d vpx_sad8x16x4d_c
@@ -1033,7 +1040,7 @@
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad8x4x4d vpx_sad8x4x4d_c
@@ -1060,7 +1067,7 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad8x8x4d vpx_sad8x8x4d_c
@@ -1154,9 +1161,9 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1164,9 +1171,9 @@
 #define vpx_sub_pixel_avg_variance16x16 vpx_sub_pixel_avg_variance16x16_c
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1174,9 +1181,9 @@
 #define vpx_sub_pixel_avg_variance16x32 vpx_sub_pixel_avg_variance16x32_c
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
@@ -1184,9 +1191,9 @@
 #define vpx_sub_pixel_avg_variance16x8 vpx_sub_pixel_avg_variance16x8_c
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1194,9 +1201,9 @@
 #define vpx_sub_pixel_avg_variance32x16 vpx_sub_pixel_avg_variance32x16_c
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1204,9 +1211,9 @@
 #define vpx_sub_pixel_avg_variance32x32 vpx_sub_pixel_avg_variance32x32_c
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1214,9 +1221,9 @@
 #define vpx_sub_pixel_avg_variance32x64 vpx_sub_pixel_avg_variance32x64_c
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -1224,9 +1231,9 @@
 #define vpx_sub_pixel_avg_variance4x4 vpx_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -1234,9 +1241,9 @@
 #define vpx_sub_pixel_avg_variance4x8 vpx_sub_pixel_avg_variance4x8_c
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1244,9 +1251,9 @@
 #define vpx_sub_pixel_avg_variance64x32 vpx_sub_pixel_avg_variance64x32_c
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -1254,9 +1261,9 @@
 #define vpx_sub_pixel_avg_variance64x64 vpx_sub_pixel_avg_variance64x64_c
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
@@ -1264,9 +1271,9 @@
 #define vpx_sub_pixel_avg_variance8x16 vpx_sub_pixel_avg_variance8x16_c
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -1274,9 +1281,9 @@
 #define vpx_sub_pixel_avg_variance8x4 vpx_sub_pixel_avg_variance8x4_c
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -1284,117 +1291,117 @@
 #define vpx_sub_pixel_avg_variance8x8 vpx_sub_pixel_avg_variance8x8_c
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance16x16 vpx_sub_pixel_variance16x16_c
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance16x32 vpx_sub_pixel_variance16x32_c
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 #define vpx_sub_pixel_variance16x8 vpx_sub_pixel_variance16x8_c
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance32x16 vpx_sub_pixel_variance32x16_c
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance32x32 vpx_sub_pixel_variance32x32_c
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance32x64 vpx_sub_pixel_variance32x64_c
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 #define vpx_sub_pixel_variance4x4 vpx_sub_pixel_variance4x4_c
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 #define vpx_sub_pixel_variance4x8 vpx_sub_pixel_variance4x8_c
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance64x32 vpx_sub_pixel_variance64x32_c
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance64x64 vpx_sub_pixel_variance64x64_c
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 #define vpx_sub_pixel_variance8x16 vpx_sub_pixel_variance8x16_c
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 #define vpx_sub_pixel_variance8x4 vpx_sub_pixel_variance8x4_c
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
@@ -1414,146 +1421,146 @@
 #define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_c
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_c
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_c
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_c
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_c
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_v_predictor_16x16 vpx_v_predictor_16x16_c
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_v_predictor_32x32 vpx_v_predictor_32x32_c
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_v_predictor_4x4 vpx_v_predictor_4x4_c
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_v_predictor_8x8 vpx_v_predictor_8x8_c
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance16x16 vpx_variance16x16_c
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance16x32 vpx_variance16x32_c
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 #define vpx_variance16x8 vpx_variance16x8_c
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance32x16 vpx_variance32x16_c
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance32x32 vpx_variance32x32_c
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance32x64 vpx_variance32x64_c
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance4x4 vpx_variance4x4_c
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance4x8 vpx_variance4x8_c
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance64x32 vpx_variance64x32_c
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance64x64 vpx_variance64x64_c
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 #define vpx_variance8x16 vpx_variance8x16_c
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance8x4 vpx_variance8x4_c
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance8x8 vpx_variance8x8_c
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vp8_rtcd.h
index 4e9d062..c46bfe5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vp8_rtcd.h
@@ -27,90 +27,90 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-void vp8_bilinear_predict16x16_sse2(unsigned char* src,
-                                    int src_pitch,
-                                    int xofst,
-                                    int yofst,
-                                    unsigned char* dst,
+void vp8_bilinear_predict16x16_sse2(unsigned char* src_ptr,
+                                    int src_pixels_per_line,
+                                    int xoffset,
+                                    int yoffset,
+                                    unsigned char* dst_ptr,
                                     int dst_pitch);
-void vp8_bilinear_predict16x16_ssse3(unsigned char* src,
-                                     int src_pitch,
-                                     int xofst,
-                                     int yofst,
-                                     unsigned char* dst,
+void vp8_bilinear_predict16x16_ssse3(unsigned char* src_ptr,
+                                     int src_pixels_per_line,
+                                     int xoffset,
+                                     int yoffset,
+                                     unsigned char* dst_ptr,
                                      int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict16x16)(unsigned char* src,
-                                              int src_pitch,
-                                              int xofst,
-                                              int yofst,
-                                              unsigned char* dst,
+RTCD_EXTERN void (*vp8_bilinear_predict16x16)(unsigned char* src_ptr,
+                                              int src_pixels_per_line,
+                                              int xoffset,
+                                              int yoffset,
+                                              unsigned char* dst_ptr,
                                               int dst_pitch);
 
-void vp8_bilinear_predict4x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict4x4_mmx(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
-                                 int dst_pitch);
-#define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_mmx
-
-void vp8_bilinear_predict8x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
-                               int dst_pitch);
-void vp8_bilinear_predict8x4_mmx(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
-                                 int dst_pitch);
-#define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_mmx
-
-void vp8_bilinear_predict8x8_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
-                               int dst_pitch);
-void vp8_bilinear_predict8x8_sse2(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict4x4_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
-void vp8_bilinear_predict8x8_ssse3(unsigned char* src,
-                                   int src_pitch,
-                                   int xofst,
-                                   int yofst,
-                                   unsigned char* dst,
+#define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_sse2
+
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x4_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_sse2
+
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x8_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+void vp8_bilinear_predict8x8_ssse3(unsigned char* src_ptr,
+                                   int src_pixels_per_line,
+                                   int xoffset,
+                                   int yoffset,
+                                   unsigned char* dst_ptr,
                                    int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict8x8)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
+RTCD_EXTERN void (*vp8_bilinear_predict8x8)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
                                             int dst_pitch);
 
 void vp8_blend_b_c(unsigned char* y,
                    unsigned char* u,
                    unsigned char* v,
-                   int y1,
-                   int u1,
-                   int v1,
+                   int y_1,
+                   int u_1,
+                   int v_1,
                    int alpha,
                    int stride);
 #define vp8_blend_b vp8_blend_b_c
@@ -118,9 +118,9 @@
 void vp8_blend_mb_inner_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
@@ -128,9 +128,9 @@
 void vp8_blend_mb_outer_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
@@ -140,65 +140,65 @@
 #define vp8_block_error vp8_block_error_sse2
 
 void vp8_copy32xn_c(const unsigned char* src_ptr,
-                    int source_stride,
+                    int src_stride,
                     unsigned char* dst_ptr,
                     int dst_stride,
-                    int n);
+                    int height);
 void vp8_copy32xn_sse2(const unsigned char* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        unsigned char* dst_ptr,
                        int dst_stride,
-                       int n);
+                       int height);
 void vp8_copy32xn_sse3(const unsigned char* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        unsigned char* dst_ptr,
                        int dst_stride,
-                       int n);
+                       int height);
 RTCD_EXTERN void (*vp8_copy32xn)(const unsigned char* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  unsigned char* dst_ptr,
                                  int dst_stride,
-                                 int n);
+                                 int height);
 
 void vp8_copy_mem16x16_c(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 void vp8_copy_mem16x16_sse2(unsigned char* src,
-                            int src_pitch,
+                            int src_stride,
                             unsigned char* dst,
-                            int dst_pitch);
+                            int dst_stride);
 #define vp8_copy_mem16x16 vp8_copy_mem16x16_sse2
 
 void vp8_copy_mem8x4_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 void vp8_copy_mem8x4_mmx(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 #define vp8_copy_mem8x4 vp8_copy_mem8x4_mmx
 
 void vp8_copy_mem8x8_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 void vp8_copy_mem8x8_mmx(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 #define vp8_copy_mem8x8 vp8_copy_mem8x8_mmx
 
-void vp8_dc_only_idct_add_c(short input,
-                            unsigned char* pred,
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
                             int pred_stride,
-                            unsigned char* dst,
+                            unsigned char* dst_ptr,
                             int dst_stride);
-void vp8_dc_only_idct_add_mmx(short input,
-                              unsigned char* pred,
+void vp8_dc_only_idct_add_mmx(short input_dc,
+                              unsigned char* pred_ptr,
                               int pred_stride,
-                              unsigned char* dst,
+                              unsigned char* dst_ptr,
                               int dst_stride);
 #define vp8_dc_only_idct_add vp8_dc_only_idct_add_mmx
 
@@ -240,11 +240,11 @@
 
 void vp8_dequant_idct_add_c(short* input,
                             short* dq,
-                            unsigned char* output,
+                            unsigned char* dest,
                             int stride);
 void vp8_dequant_idct_add_mmx(short* input,
                               short* dq,
-                              unsigned char* output,
+                              unsigned char* dest,
                               int stride);
 #define vp8_dequant_idct_add vp8_dequant_idct_add_mmx
 
@@ -274,8 +274,8 @@
                                        char* eobs);
 #define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_sse2
 
-void vp8_dequantize_b_c(struct blockd*, short* dqc);
-void vp8_dequantize_b_mmx(struct blockd*, short* dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
+void vp8_dequantize_b_mmx(struct blockd*, short* DQC);
 #define vp8_dequantize_b vp8_dequantize_b_mmx
 
 int vp8_diamond_search_sad_c(struct macroblock* x,
@@ -375,91 +375,91 @@
                                        int* mvcost[2],
                                        union int_mv* center_mv);
 
-void vp8_loop_filter_bh_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bh_sse2(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bh_sse2(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
 #define vp8_loop_filter_bh vp8_loop_filter_bh_sse2
 
-void vp8_loop_filter_bv_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bv_sse2(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bv_sse2(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
 #define vp8_loop_filter_bv vp8_loop_filter_bv_sse2
 
-void vp8_loop_filter_mbh_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbh_sse2(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbh_sse2(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbh vp8_loop_filter_mbh_sse2
 
-void vp8_loop_filter_mbv_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbv_sse2(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbv_sse2(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbv vp8_loop_filter_mbv_sse2
 
-void vp8_loop_filter_bhs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bhs_sse2(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bhs_sse2(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_sse2
 
-void vp8_loop_filter_bvs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bvs_sse2(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bvs_sse2(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_sse2
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y,
-                                              int ystride,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
-void vp8_loop_filter_simple_horizontal_edge_sse2(unsigned char* y,
-                                                 int ystride,
+void vp8_loop_filter_simple_horizontal_edge_sse2(unsigned char* y_ptr,
+                                                 int y_stride,
                                                  const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbh vp8_loop_filter_simple_horizontal_edge_sse2
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y,
-                                            int ystride,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
                                             const unsigned char* blimit);
-void vp8_loop_filter_simple_vertical_edge_sse2(unsigned char* y,
-                                               int ystride,
+void vp8_loop_filter_simple_vertical_edge_sse2(unsigned char* y_ptr,
+                                               int y_stride,
                                                const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbv vp8_loop_filter_simple_vertical_edge_sse2
 
@@ -475,8 +475,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -484,8 +484,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -505,126 +505,126 @@
 #define vp8_short_fdct8x4 vp8_short_fdct8x4_sse2
 
 void vp8_short_idct4x4llm_c(short* input,
-                            unsigned char* pred,
-                            int pitch,
-                            unsigned char* dst,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 void vp8_short_idct4x4llm_mmx(short* input,
-                              unsigned char* pred,
-                              int pitch,
-                              unsigned char* dst,
+                              unsigned char* pred_ptr,
+                              int pred_stride,
+                              unsigned char* dst_ptr,
                               int dst_stride);
 #define vp8_short_idct4x4llm vp8_short_idct4x4llm_mmx
 
-void vp8_short_inv_walsh4x4_c(short* input, short* output);
-void vp8_short_inv_walsh4x4_sse2(short* input, short* output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
+void vp8_short_inv_walsh4x4_sse2(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_sse2
 
-void vp8_short_inv_walsh4x4_1_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
 void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
 void vp8_short_walsh4x4_sse2(short* input, short* output, int pitch);
 #define vp8_short_walsh4x4 vp8_short_walsh4x4_sse2
 
-void vp8_sixtap_predict16x16_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_sixtap_predict16x16_sse2(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_sixtap_predict16x16_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
-void vp8_sixtap_predict16x16_ssse3(unsigned char* src,
-                                   int src_pitch,
-                                   int xofst,
-                                   int yofst,
-                                   unsigned char* dst,
+void vp8_sixtap_predict16x16_ssse3(unsigned char* src_ptr,
+                                   int src_pixels_per_line,
+                                   int xoffset,
+                                   int yoffset,
+                                   unsigned char* dst_ptr,
                                    int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict16x16)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict16x16)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
                                             int dst_pitch);
 
-void vp8_sixtap_predict4x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict4x4_mmx(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict4x4_mmx(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_sixtap_predict4x4_ssse3(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_sixtap_predict4x4_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict4x4)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict4x4)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
-void vp8_sixtap_predict8x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x4_sse2(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x4_sse2(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
-void vp8_sixtap_predict8x4_ssse3(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_sixtap_predict8x4_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict8x4)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict8x4)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
-void vp8_sixtap_predict8x8_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x8_sse2(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x8_sse2(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
-void vp8_sixtap_predict8x8_ssse3(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_sixtap_predict8x8_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict8x8)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict8x8)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
 void vp8_rtcd(void);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vp9_rtcd.h
index 6f00c78..12b9110 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vp9_rtcd.h
@@ -110,46 +110,6 @@
     const struct vp9_variance_vtable* fn_ptr,
     const struct mv* center_mv);
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-void vp9_fdct8x8_quant_ssse3(const int16_t* input,
-                             int stride,
-                             tran_low_t* coeff_ptr,
-                             intptr_t n_coeffs,
-                             int skip_block,
-                             const int16_t* round_ptr,
-                             const int16_t* quant_ptr,
-                             tran_low_t* qcoeff_ptr,
-                             tran_low_t* dqcoeff_ptr,
-                             const int16_t* dequant_ptr,
-                             uint16_t* eob_ptr,
-                             const int16_t* scan,
-                             const int16_t* iscan);
-RTCD_EXTERN void (*vp9_fdct8x8_quant)(const int16_t* input,
-                                      int stride,
-                                      tran_low_t* coeff_ptr,
-                                      intptr_t n_coeffs,
-                                      int skip_block,
-                                      const int16_t* round_ptr,
-                                      const int16_t* quant_ptr,
-                                      tran_low_t* qcoeff_ptr,
-                                      tran_low_t* dqcoeff_ptr,
-                                      const int16_t* dequant_ptr,
-                                      uint16_t* eob_ptr,
-                                      const int16_t* scan,
-                                      const int16_t* iscan);
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -242,18 +202,18 @@
 #define vp9_highbd_fwht4x4 vp9_highbd_fwht4x4_c
 
 void vp9_highbd_iht16x16_256_add_c(const tran_low_t* input,
-                                   uint16_t* output,
-                                   int pitch,
+                                   uint16_t* dest,
+                                   int stride,
                                    int tx_type,
                                    int bd);
 void vp9_highbd_iht16x16_256_add_sse4_1(const tran_low_t* input,
-                                        uint16_t* output,
-                                        int pitch,
+                                        uint16_t* dest,
+                                        int stride,
                                         int tx_type,
                                         int bd);
 RTCD_EXTERN void (*vp9_highbd_iht16x16_256_add)(const tran_low_t* input,
-                                                uint16_t* output,
-                                                int pitch,
+                                                uint16_t* dest,
+                                                int stride,
                                                 int tx_type,
                                                 int bd);
 
@@ -345,18 +305,19 @@
                                         unsigned int block_width,
                                         unsigned int block_height,
                                         int strength,
-                                        int filter_weight,
+                                        int* blk_fw,
+                                        int use_32x32,
                                         uint32_t* accumulator,
                                         uint16_t* count);
 #define vp9_highbd_temporal_filter_apply vp9_highbd_temporal_filter_apply_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 void vp9_iht16x16_256_add_sse2(const tran_low_t* input,
-                               uint8_t* output,
-                               int pitch,
+                               uint8_t* dest,
+                               int stride,
                                int tx_type);
 #define vp9_iht16x16_256_add vp9_iht16x16_256_add_sse2
 
@@ -502,9 +463,6 @@
   vp9_diamond_search_sad = vp9_diamond_search_sad_c;
   if (flags & HAS_AVX)
     vp9_diamond_search_sad = vp9_diamond_search_sad_avx;
-  vp9_fdct8x8_quant = vp9_fdct8x8_quant_c;
-  if (flags & HAS_SSSE3)
-    vp9_fdct8x8_quant = vp9_fdct8x8_quant_ssse3;
   vp9_highbd_iht16x16_256_add = vp9_highbd_iht16x16_256_add_c;
   if (flags & HAS_SSE4_1)
     vp9_highbd_iht16x16_256_add = vp9_highbd_iht16x16_256_add_sse4_1;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_config.asm
index 032dfdb..b9d7810 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_config.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_config.asm
@@ -1,7 +1,12 @@
+%define VPX_ARCH_ARM 0
 %define ARCH_ARM 0
+%define VPX_ARCH_MIPS 0
 %define ARCH_MIPS 0
+%define VPX_ARCH_X86 0
 %define ARCH_X86 0
+%define VPX_ARCH_X86_64 1
 %define ARCH_X86_64 1
+%define VPX_ARCH_PPC 0
 %define ARCH_PPC 0
 %define HAVE_NEON 0
 %define HAVE_NEON_ASM 0
@@ -66,20 +71,24 @@
 %define CONFIG_OS_SUPPORT 1
 %define CONFIG_UNIT_TESTS 1
 %define CONFIG_WEBM_IO 1
-%define CONFIG_LIBYUV 1
+%define CONFIG_LIBYUV 0
 %define CONFIG_DECODE_PERF_TESTS 0
 %define CONFIG_ENCODE_PERF_TESTS 0
 %define CONFIG_MULTI_RES_ENCODING 1
 %define CONFIG_TEMPORAL_DENOISING 1
 %define CONFIG_VP9_TEMPORAL_DENOISING 1
+%define CONFIG_CONSISTENT_RECODE 0
 %define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 %define CONFIG_VP9_HIGHBITDEPTH 1
 %define CONFIG_BETTER_HW_COMPATIBILITY 0
 %define CONFIG_EXPERIMENTAL 0
 %define CONFIG_SIZE_LIMIT 1
 %define CONFIG_ALWAYS_ADJUST_BPM 0
-%define CONFIG_SPATIAL_SVC 0
+%define CONFIG_BITSTREAM_DEBUG 0
+%define CONFIG_MISMATCH_DEBUG 0
 %define CONFIG_FP_MB_STATS 0
 %define CONFIG_EMULATE_HARDWARE 0
+%define CONFIG_NON_GREEDY_MV 0
+%define CONFIG_RATE_CTRL 0
 %define DECODE_WIDTH_LIMIT 16384
 %define DECODE_HEIGHT_LIMIT 16384
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_config.c
index a62935f..52aaac5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=x86_64-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --enable-pic --as=yasm --disable-avx512 --enable-vp9-highbitdepth";
+static const char* const cfg = "--target=x86_64-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv --enable-pic --as=yasm --disable-avx512 --enable-vp9-highbitdepth";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_config.h
index 84f1c01..7c3c338 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      inline
+#define VPX_ARCH_ARM 0
 #define ARCH_ARM 0
+#define VPX_ARCH_MIPS 0
 #define ARCH_MIPS 0
+#define VPX_ARCH_X86 0
 #define ARCH_X86 0
+#define VPX_ARCH_X86_64 1
 #define ARCH_X86_64 1
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 0
 #define HAVE_NEON_ASM 0
@@ -78,21 +83,25 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
 #define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 1
 #define CONFIG_BETTER_HW_COMPATIBILITY 0
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
-#define CONFIG_SPATIAL_SVC 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_dsp_rtcd.h
index 9970fae..ed63233 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/linux/x64/vpx_dsp_rtcd.h
@@ -427,420 +427,420 @@
 #define vpx_convolve_copy vpx_convolve_copy_sse2
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_c
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_c
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_c
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_c
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d153_predictor_16x16_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_16x16)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d153_predictor_32x32_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_32x32)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d153_predictor_4x4_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_4x4)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d153_predictor_8x8_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_8x8)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d207_predictor_16x16_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_16x16)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d207_predictor_32x32_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_32x32)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d207_predictor_4x4_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_sse2
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d207_predictor_8x8_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_8x8)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_16x16_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_16x16)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_32x32_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_32x32)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_4x4_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_sse2
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_8x8_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_sse2
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d63_predictor_16x16_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_16x16)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d63_predictor_32x32_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_32x32)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d63_predictor_4x4_ssse3(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_4x4)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d63_predictor_8x8_ssse3(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_8x8)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_16x16_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_sse2
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_32x32_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_sse2
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_4x4_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_sse2
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_8x8_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_sse2
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_16x16_sse2(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 #define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_sse2
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_32x32_sse2(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 #define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_sse2
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_4x4_sse2(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 #define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_sse2
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_8x8_sse2(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 #define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_sse2
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_16x16_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_sse2
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_32x32_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_sse2
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_4x4_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_sse2
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_8x8_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_sse2
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_16x16_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_sse2
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_32x32_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_sse2
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_4x4_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_sse2
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_8x8_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_sse2
@@ -884,44 +884,44 @@
 #define vpx_fdct8x8_1 vpx_fdct8x8_1_sse2
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
                        int* sum);
 void vpx_get16x16var_sse2(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
                           int* sum);
 void vpx_get16x16var_avx2(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
                           int* sum);
 RTCD_EXTERN void (*vpx_get16x16var)(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse,
                                     int* sum);
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 #define vpx_get4x4sse_cs vpx_get4x4sse_cs_c
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
                      int* sum);
 void vpx_get8x8var_sse2(const uint8_t* src_ptr,
-                        int source_stride,
+                        int src_stride,
                         const uint8_t* ref_ptr,
                         int ref_stride,
                         unsigned int* sse,
@@ -933,41 +933,41 @@
 #define vpx_get_mb_ss vpx_get_mb_ss_sse2
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_16x16_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_h_predictor_16x16 vpx_h_predictor_16x16_sse2
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_32x32_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_h_predictor_32x32 vpx_h_predictor_32x32_sse2
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_4x4_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_h_predictor_4x4 vpx_h_predictor_4x4_sse2
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_8x8_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_h_predictor_8x8 vpx_h_predictor_8x8_sse2
@@ -1012,79 +1012,91 @@
                                      tran_low_t* coeff);
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
 
 void vpx_highbd_10_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
                                  int* sum);
-#define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_c
+void vpx_highbd_10_get16x16var_sse2(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse,
+                                    int* sum);
+#define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_sse2
 
 void vpx_highbd_10_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
                                int* sum);
-#define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_c
+void vpx_highbd_10_get8x8var_sse2(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse,
+                                  int* sum);
+#define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_sse2
 
 unsigned int vpx_highbd_10_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 unsigned int vpx_highbd_10_mse16x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_mse16x16 vpx_highbd_10_mse16x16_sse2
 
 unsigned int vpx_highbd_10_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse16x8 vpx_highbd_10_mse16x8_c
 
 unsigned int vpx_highbd_10_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse8x16 vpx_highbd_10_mse8x16_c
 
 unsigned int vpx_highbd_10_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_highbd_10_mse8x8_sse2(const uint8_t* src_ptr,
-                                       int source_stride,
+                                       int src_stride,
                                        const uint8_t* ref_ptr,
-                                       int recon_stride,
+                                       int ref_stride,
                                        unsigned int* sse);
 #define vpx_highbd_10_mse8x8 vpx_highbd_10_mse8x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1094,18 +1106,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1114,18 +1126,18 @@
   vpx_highbd_10_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1135,18 +1147,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1156,18 +1168,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1177,18 +1189,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1197,9 +1209,9 @@
   vpx_highbd_10_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1208,9 +1220,9 @@
   vpx_highbd_10_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1220,18 +1232,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1241,18 +1253,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1261,18 +1273,18 @@
   vpx_highbd_10_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1281,18 +1293,18 @@
   vpx_highbd_10_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1301,18 +1313,18 @@
   vpx_highbd_10_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1321,16 +1333,16 @@
   vpx_highbd_10_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1338,16 +1350,16 @@
   vpx_highbd_10_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1355,16 +1367,16 @@
   vpx_highbd_10_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -1372,16 +1384,16 @@
   vpx_highbd_10_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1389,16 +1401,16 @@
   vpx_highbd_10_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1406,16 +1418,16 @@
   vpx_highbd_10_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1423,9 +1435,9 @@
   vpx_highbd_10_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1433,9 +1445,9 @@
   vpx_highbd_10_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1443,16 +1455,16 @@
   vpx_highbd_10_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1460,16 +1472,16 @@
   vpx_highbd_10_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1477,16 +1489,16 @@
   vpx_highbd_10_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -1494,16 +1506,16 @@
   vpx_highbd_10_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -1511,16 +1523,16 @@
   vpx_highbd_10_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -1528,214 +1540,226 @@
   vpx_highbd_10_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_10_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance16x16 vpx_highbd_10_variance16x16_sse2
 
 unsigned int vpx_highbd_10_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance16x32 vpx_highbd_10_variance16x32_sse2
 
 unsigned int vpx_highbd_10_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_10_variance16x8 vpx_highbd_10_variance16x8_sse2
 
 unsigned int vpx_highbd_10_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance32x16 vpx_highbd_10_variance32x16_sse2
 
 unsigned int vpx_highbd_10_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance32x32 vpx_highbd_10_variance32x32_sse2
 
 unsigned int vpx_highbd_10_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance32x64 vpx_highbd_10_variance32x64_sse2
 
 unsigned int vpx_highbd_10_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x4 vpx_highbd_10_variance4x4_c
 
 unsigned int vpx_highbd_10_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x8 vpx_highbd_10_variance4x8_c
 
 unsigned int vpx_highbd_10_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance64x32 vpx_highbd_10_variance64x32_sse2
 
 unsigned int vpx_highbd_10_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance64x64 vpx_highbd_10_variance64x64_sse2
 
 unsigned int vpx_highbd_10_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_10_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_10_variance8x16 vpx_highbd_10_variance8x16_sse2
 
 unsigned int vpx_highbd_10_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance8x4 vpx_highbd_10_variance8x4_c
 
 unsigned int vpx_highbd_10_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_10_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 #define vpx_highbd_10_variance8x8 vpx_highbd_10_variance8x8_sse2
 
 void vpx_highbd_12_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
                                  int* sum);
-#define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_c
+void vpx_highbd_12_get16x16var_sse2(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse,
+                                    int* sum);
+#define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_sse2
 
 void vpx_highbd_12_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
                                int* sum);
-#define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_c
+void vpx_highbd_12_get8x8var_sse2(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse,
+                                  int* sum);
+#define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_sse2
 
 unsigned int vpx_highbd_12_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 unsigned int vpx_highbd_12_mse16x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_mse16x16 vpx_highbd_12_mse16x16_sse2
 
 unsigned int vpx_highbd_12_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse16x8 vpx_highbd_12_mse16x8_c
 
 unsigned int vpx_highbd_12_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse8x16 vpx_highbd_12_mse8x16_c
 
 unsigned int vpx_highbd_12_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_highbd_12_mse8x8_sse2(const uint8_t* src_ptr,
-                                       int source_stride,
+                                       int src_stride,
                                        const uint8_t* ref_ptr,
-                                       int recon_stride,
+                                       int ref_stride,
                                        unsigned int* sse);
 #define vpx_highbd_12_mse8x8 vpx_highbd_12_mse8x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1745,18 +1769,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1765,18 +1789,18 @@
   vpx_highbd_12_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1786,18 +1810,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1807,18 +1831,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1828,18 +1852,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1848,9 +1872,9 @@
   vpx_highbd_12_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1859,9 +1883,9 @@
   vpx_highbd_12_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1871,18 +1895,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1892,18 +1916,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1912,18 +1936,18 @@
   vpx_highbd_12_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1932,18 +1956,18 @@
   vpx_highbd_12_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1952,18 +1976,18 @@
   vpx_highbd_12_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1972,16 +1996,16 @@
   vpx_highbd_12_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1989,16 +2013,16 @@
   vpx_highbd_12_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2006,16 +2030,16 @@
   vpx_highbd_12_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2023,16 +2047,16 @@
   vpx_highbd_12_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2040,16 +2064,16 @@
   vpx_highbd_12_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2057,16 +2081,16 @@
   vpx_highbd_12_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2074,9 +2098,9 @@
   vpx_highbd_12_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -2084,9 +2108,9 @@
   vpx_highbd_12_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -2094,16 +2118,16 @@
   vpx_highbd_12_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2111,16 +2135,16 @@
   vpx_highbd_12_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2128,16 +2152,16 @@
   vpx_highbd_12_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2145,16 +2169,16 @@
   vpx_highbd_12_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -2162,16 +2186,16 @@
   vpx_highbd_12_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -2179,213 +2203,225 @@
   vpx_highbd_12_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_12_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance16x16 vpx_highbd_12_variance16x16_sse2
 
 unsigned int vpx_highbd_12_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance16x32 vpx_highbd_12_variance16x32_sse2
 
 unsigned int vpx_highbd_12_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_12_variance16x8 vpx_highbd_12_variance16x8_sse2
 
 unsigned int vpx_highbd_12_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance32x16 vpx_highbd_12_variance32x16_sse2
 
 unsigned int vpx_highbd_12_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance32x32 vpx_highbd_12_variance32x32_sse2
 
 unsigned int vpx_highbd_12_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance32x64 vpx_highbd_12_variance32x64_sse2
 
 unsigned int vpx_highbd_12_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x4 vpx_highbd_12_variance4x4_c
 
 unsigned int vpx_highbd_12_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x8 vpx_highbd_12_variance4x8_c
 
 unsigned int vpx_highbd_12_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance64x32 vpx_highbd_12_variance64x32_sse2
 
 unsigned int vpx_highbd_12_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance64x64 vpx_highbd_12_variance64x64_sse2
 
 unsigned int vpx_highbd_12_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_12_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_12_variance8x16 vpx_highbd_12_variance8x16_sse2
 
 unsigned int vpx_highbd_12_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance8x4 vpx_highbd_12_variance8x4_c
 
 unsigned int vpx_highbd_12_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_12_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 #define vpx_highbd_12_variance8x8 vpx_highbd_12_variance8x8_sse2
 
 void vpx_highbd_8_get16x16var_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse,
                                 int* sum);
-#define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_c
+void vpx_highbd_8_get16x16var_sse2(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   unsigned int* sse,
+                                   int* sum);
+#define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_sse2
 
 void vpx_highbd_8_get8x8var_c(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
                               int ref_stride,
                               unsigned int* sse,
                               int* sum);
-#define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_c
+void vpx_highbd_8_get8x8var_sse2(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse,
+                                 int* sum);
+#define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_sse2
 
 unsigned int vpx_highbd_8_mse16x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 unsigned int vpx_highbd_8_mse16x16_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
-                                        int recon_stride,
+                                        int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_mse16x16 vpx_highbd_8_mse16x16_sse2
 
 unsigned int vpx_highbd_8_mse16x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse16x8 vpx_highbd_8_mse16x8_c
 
 unsigned int vpx_highbd_8_mse8x16_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse8x16 vpx_highbd_8_mse8x16_c
 
 unsigned int vpx_highbd_8_mse8x8_c(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
-                                   int recon_stride,
+                                   int ref_stride,
                                    unsigned int* sse);
 unsigned int vpx_highbd_8_mse8x8_sse2(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 #define vpx_highbd_8_mse8x8 vpx_highbd_8_mse8x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2394,18 +2430,18 @@
   vpx_highbd_8_sub_pixel_avg_variance16x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2414,18 +2450,18 @@
   vpx_highbd_8_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2434,18 +2470,18 @@
   vpx_highbd_8_sub_pixel_avg_variance16x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2454,18 +2490,18 @@
   vpx_highbd_8_sub_pixel_avg_variance32x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2474,18 +2510,18 @@
   vpx_highbd_8_sub_pixel_avg_variance32x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2494,9 +2530,9 @@
   vpx_highbd_8_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -2505,9 +2541,9 @@
   vpx_highbd_8_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -2516,18 +2552,18 @@
   vpx_highbd_8_sub_pixel_avg_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2536,18 +2572,18 @@
   vpx_highbd_8_sub_pixel_avg_variance64x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2556,18 +2592,18 @@
   vpx_highbd_8_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2576,18 +2612,18 @@
   vpx_highbd_8_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
                                                   const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2596,18 +2632,18 @@
   vpx_highbd_8_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
                                                   const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2616,16 +2652,16 @@
   vpx_highbd_8_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2633,16 +2669,16 @@
   vpx_highbd_8_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2650,16 +2686,16 @@
   vpx_highbd_8_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -2667,16 +2703,16 @@
   vpx_highbd_8_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2684,16 +2720,16 @@
   vpx_highbd_8_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2701,16 +2737,16 @@
   vpx_highbd_8_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2718,34 +2754,34 @@
   vpx_highbd_8_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x4 vpx_highbd_8_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x8 vpx_highbd_8_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2753,16 +2789,16 @@
   vpx_highbd_8_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2770,16 +2806,16 @@
   vpx_highbd_8_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -2787,16 +2823,16 @@
   vpx_highbd_8_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -2804,16 +2840,16 @@
   vpx_highbd_8_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -2821,152 +2857,152 @@
   vpx_highbd_8_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_8_variance16x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance16x16 vpx_highbd_8_variance16x16_sse2
 
 unsigned int vpx_highbd_8_variance16x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance16x32 vpx_highbd_8_variance16x32_sse2
 
 unsigned int vpx_highbd_8_variance16x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 #define vpx_highbd_8_variance16x8 vpx_highbd_8_variance16x8_sse2
 
 unsigned int vpx_highbd_8_variance32x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance32x16 vpx_highbd_8_variance32x16_sse2
 
 unsigned int vpx_highbd_8_variance32x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance32x32 vpx_highbd_8_variance32x32_sse2
 
 unsigned int vpx_highbd_8_variance32x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x64_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance32x64 vpx_highbd_8_variance32x64_sse2
 
 unsigned int vpx_highbd_8_variance4x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x4 vpx_highbd_8_variance4x4_c
 
 unsigned int vpx_highbd_8_variance4x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x8 vpx_highbd_8_variance4x8_c
 
 unsigned int vpx_highbd_8_variance64x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance64x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance64x32 vpx_highbd_8_variance64x32_sse2
 
 unsigned int vpx_highbd_8_variance64x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance64x64_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance64x64 vpx_highbd_8_variance64x64_sse2
 
 unsigned int vpx_highbd_8_variance8x16_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_8_variance8x16_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 #define vpx_highbd_8_variance8x16 vpx_highbd_8_variance8x16_sse2
 
 unsigned int vpx_highbd_8_variance8x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance8x4 vpx_highbd_8_variance8x4_c
 
 unsigned int vpx_highbd_8_variance8x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 unsigned int vpx_highbd_8_variance8x8_sse2(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_8_variance8x8 vpx_highbd_8_variance8x8_sse2
 
-unsigned int vpx_highbd_avg_4x4_c(const uint8_t*, int p);
-unsigned int vpx_highbd_avg_4x4_sse2(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_4x4_c(const uint8_t* s8, int p);
+unsigned int vpx_highbd_avg_4x4_sse2(const uint8_t* s8, int p);
 #define vpx_highbd_avg_4x4 vpx_highbd_avg_4x4_sse2
 
-unsigned int vpx_highbd_avg_8x8_c(const uint8_t*, int p);
-unsigned int vpx_highbd_avg_8x8_sse2(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_8x8_c(const uint8_t* s8, int p);
+unsigned int vpx_highbd_avg_8x8_sse2(const uint8_t* s8, int p);
 #define vpx_highbd_avg_8x8 vpx_highbd_avg_8x8_sse2
 
 void vpx_highbd_comp_avg_pred_c(uint16_t* comp_pred,
@@ -2988,7 +3024,7 @@
                             int y_step_q4,
                             int w,
                             int h,
-                            int bps);
+                            int bd);
 void vpx_highbd_convolve8_sse2(const uint16_t* src,
                                ptrdiff_t src_stride,
                                uint16_t* dst,
@@ -3000,7 +3036,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 void vpx_highbd_convolve8_avx2(const uint16_t* src,
                                ptrdiff_t src_stride,
                                uint16_t* dst,
@@ -3012,7 +3048,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8)(const uint16_t* src,
                                          ptrdiff_t src_stride,
                                          uint16_t* dst,
@@ -3024,7 +3060,7 @@
                                          int y_step_q4,
                                          int w,
                                          int h,
-                                         int bps);
+                                         int bd);
 
 void vpx_highbd_convolve8_avg_c(const uint16_t* src,
                                 ptrdiff_t src_stride,
@@ -3037,7 +3073,7 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 void vpx_highbd_convolve8_avg_sse2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3049,7 +3085,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 void vpx_highbd_convolve8_avg_avx2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3061,7 +3097,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg)(const uint16_t* src,
                                              ptrdiff_t src_stride,
                                              uint16_t* dst,
@@ -3073,7 +3109,7 @@
                                              int y_step_q4,
                                              int w,
                                              int h,
-                                             int bps);
+                                             int bd);
 
 void vpx_highbd_convolve8_avg_horiz_c(const uint16_t* src,
                                       ptrdiff_t src_stride,
@@ -3086,7 +3122,7 @@
                                       int y_step_q4,
                                       int w,
                                       int h,
-                                      int bps);
+                                      int bd);
 void vpx_highbd_convolve8_avg_horiz_sse2(const uint16_t* src,
                                          ptrdiff_t src_stride,
                                          uint16_t* dst,
@@ -3098,7 +3134,7 @@
                                          int y_step_q4,
                                          int w,
                                          int h,
-                                         int bps);
+                                         int bd);
 void vpx_highbd_convolve8_avg_horiz_avx2(const uint16_t* src,
                                          ptrdiff_t src_stride,
                                          uint16_t* dst,
@@ -3110,7 +3146,7 @@
                                          int y_step_q4,
                                          int w,
                                          int h,
-                                         int bps);
+                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg_horiz)(const uint16_t* src,
                                                    ptrdiff_t src_stride,
                                                    uint16_t* dst,
@@ -3122,7 +3158,7 @@
                                                    int y_step_q4,
                                                    int w,
                                                    int h,
-                                                   int bps);
+                                                   int bd);
 
 void vpx_highbd_convolve8_avg_vert_c(const uint16_t* src,
                                      ptrdiff_t src_stride,
@@ -3135,7 +3171,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 void vpx_highbd_convolve8_avg_vert_sse2(const uint16_t* src,
                                         ptrdiff_t src_stride,
                                         uint16_t* dst,
@@ -3147,7 +3183,7 @@
                                         int y_step_q4,
                                         int w,
                                         int h,
-                                        int bps);
+                                        int bd);
 void vpx_highbd_convolve8_avg_vert_avx2(const uint16_t* src,
                                         ptrdiff_t src_stride,
                                         uint16_t* dst,
@@ -3159,7 +3195,7 @@
                                         int y_step_q4,
                                         int w,
                                         int h,
-                                        int bps);
+                                        int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg_vert)(const uint16_t* src,
                                                   ptrdiff_t src_stride,
                                                   uint16_t* dst,
@@ -3171,7 +3207,7 @@
                                                   int y_step_q4,
                                                   int w,
                                                   int h,
-                                                  int bps);
+                                                  int bd);
 
 void vpx_highbd_convolve8_horiz_c(const uint16_t* src,
                                   ptrdiff_t src_stride,
@@ -3184,7 +3220,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 void vpx_highbd_convolve8_horiz_sse2(const uint16_t* src,
                                      ptrdiff_t src_stride,
                                      uint16_t* dst,
@@ -3196,7 +3232,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 void vpx_highbd_convolve8_horiz_avx2(const uint16_t* src,
                                      ptrdiff_t src_stride,
                                      uint16_t* dst,
@@ -3208,7 +3244,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_horiz)(const uint16_t* src,
                                                ptrdiff_t src_stride,
                                                uint16_t* dst,
@@ -3220,7 +3256,7 @@
                                                int y_step_q4,
                                                int w,
                                                int h,
-                                               int bps);
+                                               int bd);
 
 void vpx_highbd_convolve8_vert_c(const uint16_t* src,
                                  ptrdiff_t src_stride,
@@ -3233,7 +3269,7 @@
                                  int y_step_q4,
                                  int w,
                                  int h,
-                                 int bps);
+                                 int bd);
 void vpx_highbd_convolve8_vert_sse2(const uint16_t* src,
                                     ptrdiff_t src_stride,
                                     uint16_t* dst,
@@ -3245,7 +3281,7 @@
                                     int y_step_q4,
                                     int w,
                                     int h,
-                                    int bps);
+                                    int bd);
 void vpx_highbd_convolve8_vert_avx2(const uint16_t* src,
                                     ptrdiff_t src_stride,
                                     uint16_t* dst,
@@ -3257,7 +3293,7 @@
                                     int y_step_q4,
                                     int w,
                                     int h,
-                                    int bps);
+                                    int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_vert)(const uint16_t* src,
                                               ptrdiff_t src_stride,
                                               uint16_t* dst,
@@ -3269,7 +3305,7 @@
                                               int y_step_q4,
                                               int w,
                                               int h,
-                                              int bps);
+                                              int bd);
 
 void vpx_highbd_convolve_avg_c(const uint16_t* src,
                                ptrdiff_t src_stride,
@@ -3282,7 +3318,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 void vpx_highbd_convolve_avg_sse2(const uint16_t* src,
                                   ptrdiff_t src_stride,
                                   uint16_t* dst,
@@ -3294,7 +3330,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 void vpx_highbd_convolve_avg_avx2(const uint16_t* src,
                                   ptrdiff_t src_stride,
                                   uint16_t* dst,
@@ -3306,7 +3342,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve_avg)(const uint16_t* src,
                                             ptrdiff_t src_stride,
                                             uint16_t* dst,
@@ -3318,7 +3354,7 @@
                                             int y_step_q4,
                                             int w,
                                             int h,
-                                            int bps);
+                                            int bd);
 
 void vpx_highbd_convolve_copy_c(const uint16_t* src,
                                 ptrdiff_t src_stride,
@@ -3331,7 +3367,7 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 void vpx_highbd_convolve_copy_sse2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3343,7 +3379,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 void vpx_highbd_convolve_copy_avx2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3355,7 +3391,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve_copy)(const uint16_t* src,
                                              ptrdiff_t src_stride,
                                              uint16_t* dst,
@@ -3367,427 +3403,427 @@
                                              int y_step_q4,
                                              int w,
                                              int h,
-                                             int bps);
+                                             int bd);
 
 void vpx_highbd_d117_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d117_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d117_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d117_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d117_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d117_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_d117_predictor_4x4 vpx_highbd_d117_predictor_4x4_sse2
 
 void vpx_highbd_d117_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d117_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d135_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d135_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d135_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d135_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d135_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d135_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_d135_predictor_4x4 vpx_highbd_d135_predictor_4x4_sse2
 
 void vpx_highbd_d135_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d135_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d153_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d153_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d153_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d153_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d153_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d153_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_d153_predictor_4x4 vpx_highbd_d153_predictor_4x4_sse2
 
 void vpx_highbd_d153_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d153_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d207_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d207_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d207_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d207_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d207_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d207_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_d207_predictor_4x4 vpx_highbd_d207_predictor_4x4_sse2
 
 void vpx_highbd_d207_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d207_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d45_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d45_predictor_16x16_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_16x16)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d45_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d45_predictor_32x32_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_32x32)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d45_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d45_predictor_4x4_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_4x4)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_d45_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d45_predictor_8x8_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_8x8)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_d63_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d63_predictor_16x16_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_16x16)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d63_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d63_predictor_32x32_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_32x32)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d63_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d63_predictor_4x4_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d63_predictor_4x4 vpx_highbd_d63_predictor_4x4_sse2
 
 void vpx_highbd_d63_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d63_predictor_8x8_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_8x8)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_dc_128_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_128_predictor_16x16_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
 #define vpx_highbd_dc_128_predictor_16x16 vpx_highbd_dc_128_predictor_16x16_sse2
 
 void vpx_highbd_dc_128_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_128_predictor_32x32_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
 #define vpx_highbd_dc_128_predictor_32x32 vpx_highbd_dc_128_predictor_32x32_sse2
 
 void vpx_highbd_dc_128_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_128_predictor_4x4_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 #define vpx_highbd_dc_128_predictor_4x4 vpx_highbd_dc_128_predictor_4x4_sse2
 
 void vpx_highbd_dc_128_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_128_predictor_8x8_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 #define vpx_highbd_dc_128_predictor_8x8 vpx_highbd_dc_128_predictor_8x8_sse2
 
 void vpx_highbd_dc_left_predictor_16x16_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 void vpx_highbd_dc_left_predictor_16x16_sse2(uint16_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint16_t* above,
                                              const uint16_t* left,
                                              int bd);
@@ -3795,12 +3831,12 @@
   vpx_highbd_dc_left_predictor_16x16_sse2
 
 void vpx_highbd_dc_left_predictor_32x32_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 void vpx_highbd_dc_left_predictor_32x32_sse2(uint16_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint16_t* above,
                                              const uint16_t* left,
                                              int bd);
@@ -3808,120 +3844,120 @@
   vpx_highbd_dc_left_predictor_32x32_sse2
 
 void vpx_highbd_dc_left_predictor_4x4_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 void vpx_highbd_dc_left_predictor_4x4_sse2(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 #define vpx_highbd_dc_left_predictor_4x4 vpx_highbd_dc_left_predictor_4x4_sse2
 
 void vpx_highbd_dc_left_predictor_8x8_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 void vpx_highbd_dc_left_predictor_8x8_sse2(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 #define vpx_highbd_dc_left_predictor_8x8 vpx_highbd_dc_left_predictor_8x8_sse2
 
 void vpx_highbd_dc_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_dc_predictor_16x16_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_dc_predictor_16x16 vpx_highbd_dc_predictor_16x16_sse2
 
 void vpx_highbd_dc_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_dc_predictor_32x32_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_dc_predictor_32x32 vpx_highbd_dc_predictor_32x32_sse2
 
 void vpx_highbd_dc_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_dc_predictor_4x4_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_dc_predictor_4x4 vpx_highbd_dc_predictor_4x4_sse2
 
 void vpx_highbd_dc_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_dc_predictor_8x8_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_dc_predictor_8x8 vpx_highbd_dc_predictor_8x8_sse2
 
 void vpx_highbd_dc_top_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_top_predictor_16x16_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
 #define vpx_highbd_dc_top_predictor_16x16 vpx_highbd_dc_top_predictor_16x16_sse2
 
 void vpx_highbd_dc_top_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_top_predictor_32x32_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
 #define vpx_highbd_dc_top_predictor_32x32 vpx_highbd_dc_top_predictor_32x32_sse2
 
 void vpx_highbd_dc_top_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_top_predictor_4x4_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 #define vpx_highbd_dc_top_predictor_4x4 vpx_highbd_dc_top_predictor_4x4_sse2
 
 void vpx_highbd_dc_top_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_top_predictor_8x8_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
@@ -3979,53 +4015,83 @@
 #define vpx_highbd_fdct8x8_1 vpx_highbd_fdct8x8_1_c
 
 void vpx_highbd_h_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_h_predictor_16x16_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_h_predictor_16x16 vpx_highbd_h_predictor_16x16_sse2
 
 void vpx_highbd_h_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_h_predictor_32x32_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_h_predictor_32x32 vpx_highbd_h_predictor_32x32_sse2
 
 void vpx_highbd_h_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_h_predictor_4x4_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_h_predictor_4x4 vpx_highbd_h_predictor_4x4_sse2
 
 void vpx_highbd_h_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_h_predictor_8x8_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_h_predictor_8x8 vpx_highbd_h_predictor_8x8_sse2
 
+void vpx_highbd_hadamard_16x16_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+void vpx_highbd_hadamard_16x16_avx2(const int16_t* src_diff,
+                                    ptrdiff_t src_stride,
+                                    tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_16x16)(const int16_t* src_diff,
+                                              ptrdiff_t src_stride,
+                                              tran_low_t* coeff);
+
+void vpx_highbd_hadamard_32x32_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+void vpx_highbd_hadamard_32x32_avx2(const int16_t* src_diff,
+                                    ptrdiff_t src_stride,
+                                    tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_32x32)(const int16_t* src_diff,
+                                              ptrdiff_t src_stride,
+                                              tran_low_t* coeff);
+
+void vpx_highbd_hadamard_8x8_c(const int16_t* src_diff,
+                               ptrdiff_t src_stride,
+                               tran_low_t* coeff);
+void vpx_highbd_hadamard_8x8_avx2(const int16_t* src_diff,
+                                  ptrdiff_t src_stride,
+                                  tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_8x8)(const int16_t* src_diff,
+                                            ptrdiff_t src_stride,
+                                            tran_low_t* coeff);
+
 void vpx_highbd_idct16x16_10_add_c(const tran_low_t* input,
                                    uint16_t* dest,
                                    int stride,
@@ -4423,9 +4489,9 @@
                                          int bd);
 #define vpx_highbd_lpf_vertical_8_dual vpx_highbd_lpf_vertical_8_dual_sse2
 
-void vpx_highbd_minmax_8x8_c(const uint8_t* s,
+void vpx_highbd_minmax_8x8_c(const uint8_t* s8,
                              int p,
-                             const uint8_t* d,
+                             const uint8_t* d8,
                              int dp,
                              int* min,
                              int* max);
@@ -4511,12 +4577,12 @@
 
 void vpx_highbd_sad16x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad16x16x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad16x16x4d vpx_highbd_sad16x16x4d_sse2
@@ -4545,12 +4611,12 @@
 
 void vpx_highbd_sad16x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad16x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad16x32x4d vpx_highbd_sad16x32x4d_sse2
@@ -4579,12 +4645,12 @@
 
 void vpx_highbd_sad16x8x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 void vpx_highbd_sad16x8x4d_sse2(const uint8_t* src_ptr,
                                 int src_stride,
-                                const uint8_t* const ref_ptr[],
+                                const uint8_t* const ref_array[],
                                 int ref_stride,
                                 uint32_t* sad_array);
 #define vpx_highbd_sad16x8x4d vpx_highbd_sad16x8x4d_sse2
@@ -4613,12 +4679,12 @@
 
 void vpx_highbd_sad32x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x16x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad32x16x4d vpx_highbd_sad32x16x4d_sse2
@@ -4647,12 +4713,12 @@
 
 void vpx_highbd_sad32x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad32x32x4d vpx_highbd_sad32x32x4d_sse2
@@ -4681,12 +4747,12 @@
 
 void vpx_highbd_sad32x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x64x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad32x64x4d vpx_highbd_sad32x64x4d_sse2
@@ -4706,12 +4772,12 @@
 
 void vpx_highbd_sad4x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad4x4x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
 #define vpx_highbd_sad4x4x4d vpx_highbd_sad4x4x4d_sse2
@@ -4731,12 +4797,12 @@
 
 void vpx_highbd_sad4x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad4x8x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
 #define vpx_highbd_sad4x8x4d vpx_highbd_sad4x8x4d_sse2
@@ -4765,12 +4831,12 @@
 
 void vpx_highbd_sad64x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad64x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad64x32x4d vpx_highbd_sad64x32x4d_sse2
@@ -4799,12 +4865,12 @@
 
 void vpx_highbd_sad64x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad64x64x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad64x64x4d vpx_highbd_sad64x64x4d_sse2
@@ -4833,12 +4899,12 @@
 
 void vpx_highbd_sad8x16x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 void vpx_highbd_sad8x16x4d_sse2(const uint8_t* src_ptr,
                                 int src_stride,
-                                const uint8_t* const ref_ptr[],
+                                const uint8_t* const ref_array[],
                                 int ref_stride,
                                 uint32_t* sad_array);
 #define vpx_highbd_sad8x16x4d vpx_highbd_sad8x16x4d_sse2
@@ -4867,12 +4933,12 @@
 
 void vpx_highbd_sad8x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad8x4x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
 #define vpx_highbd_sad8x4x4d vpx_highbd_sad8x4x4d_sse2
@@ -4901,118 +4967,122 @@
 
 void vpx_highbd_sad8x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad8x8x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
 #define vpx_highbd_sad8x8x4d vpx_highbd_sad8x8x4d_sse2
 
+int vpx_highbd_satd_c(const tran_low_t* coeff, int length);
+int vpx_highbd_satd_avx2(const tran_low_t* coeff, int length);
+RTCD_EXTERN int (*vpx_highbd_satd)(const tran_low_t* coeff, int length);
+
 void vpx_highbd_subtract_block_c(int rows,
                                  int cols,
                                  int16_t* diff_ptr,
                                  ptrdiff_t diff_stride,
-                                 const uint8_t* src_ptr,
+                                 const uint8_t* src8_ptr,
                                  ptrdiff_t src_stride,
-                                 const uint8_t* pred_ptr,
+                                 const uint8_t* pred8_ptr,
                                  ptrdiff_t pred_stride,
                                  int bd);
 #define vpx_highbd_subtract_block vpx_highbd_subtract_block_c
 
 void vpx_highbd_tm_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_tm_predictor_16x16_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_tm_predictor_16x16 vpx_highbd_tm_predictor_16x16_sse2
 
 void vpx_highbd_tm_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_tm_predictor_32x32_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_tm_predictor_32x32 vpx_highbd_tm_predictor_32x32_sse2
 
 void vpx_highbd_tm_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_tm_predictor_4x4_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_tm_predictor_4x4 vpx_highbd_tm_predictor_4x4_sse2
 
 void vpx_highbd_tm_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_tm_predictor_8x8_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_tm_predictor_8x8 vpx_highbd_tm_predictor_8x8_sse2
 
 void vpx_highbd_v_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_v_predictor_16x16_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_v_predictor_16x16 vpx_highbd_v_predictor_16x16_sse2
 
 void vpx_highbd_v_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_v_predictor_32x32_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_v_predictor_32x32 vpx_highbd_v_predictor_32x32_sse2
 
 void vpx_highbd_v_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_v_predictor_4x4_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_v_predictor_4x4 vpx_highbd_v_predictor_4x4_sse2
 
 void vpx_highbd_v_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_v_predictor_8x8_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
@@ -5322,12 +5392,12 @@
                                   const uint8_t* thresh1);
 #define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_sse2
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
                                  int flimit);
-void vpx_mbpost_proc_across_ip_sse2(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_sse2(unsigned char* src,
                                     int pitch,
                                     int rows,
                                     int cols,
@@ -5361,68 +5431,68 @@
 #define vpx_minmax_8x8 vpx_minmax_8x8_sse2
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 unsigned int vpx_mse16x16_sse2(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_mse16x16_avx2(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_mse16x16)(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 unsigned int vpx_mse16x8_sse2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
 unsigned int vpx_mse16x8_avx2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_mse16x8)(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
-                                        int recon_stride,
+                                        int ref_stride,
                                         unsigned int* sse);
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 unsigned int vpx_mse8x16_sse2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
 #define vpx_mse8x16 vpx_mse8x16_sse2
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 unsigned int vpx_mse8x8_sse2(const uint8_t* src_ptr,
-                             int source_stride,
+                             int src_stride,
                              const uint8_t* ref_ptr,
-                             int recon_stride,
+                             int ref_stride,
                              unsigned int* sse);
 #define vpx_mse8x8 vpx_mse8x8_sse2
 
@@ -5623,12 +5693,12 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x16x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad16x16x4d vpx_sad16x16x4d_sse2
@@ -5673,12 +5743,12 @@
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad16x32x4d vpx_sad16x32x4d_sse2
@@ -5728,12 +5798,12 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad16x8x4d_sse2(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 #define vpx_sad16x8x4d vpx_sad16x8x4d_sse2
@@ -5794,12 +5864,12 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x16x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x16x4d vpx_sad32x16x4d_sse2
@@ -5844,25 +5914,41 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 void vpx_sad32x32x4d_avx2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad32x32x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+void vpx_sad32x32x8_avx2(const uint8_t* src_ptr,
+                         int src_stride,
+                         const uint8_t* ref_ptr,
+                         int ref_stride,
+                         uint32_t* sad_array);
+RTCD_EXTERN void (*vpx_sad32x32x8)(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   uint32_t* sad_array);
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -5903,12 +5989,12 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x64x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x64x4d vpx_sad32x64x4d_sse2
@@ -5953,12 +6039,12 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x4x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad4x4x4d vpx_sad4x4x4d_sse2
@@ -6003,12 +6089,12 @@
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x8x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad4x8x4d vpx_sad4x8x4d_sse2
@@ -6053,12 +6139,12 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad64x32x4d vpx_sad64x32x4d_sse2
@@ -6103,22 +6189,22 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x64x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 void vpx_sad64x64x4d_avx2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad64x64x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
@@ -6162,12 +6248,12 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad8x16x4d_sse2(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 #define vpx_sad8x16x4d vpx_sad8x16x4d_sse2
@@ -6212,12 +6298,12 @@
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x4x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad8x4x4d vpx_sad8x4x4d_sse2
@@ -6262,12 +6348,12 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x8x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad8x8x4d vpx_sad8x8x4d_sse2
@@ -6393,850 +6479,850 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_ssse3(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_avx2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x64)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance4x4)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance4x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance64x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_avx2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance64x64)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_ssse3(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x4)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x16)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_ssse3(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x8)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x16)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_avx2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x64)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance4x4)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance4x8)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance64x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_avx2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance64x64)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_ssse3(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x16)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x4)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x8)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -7264,315 +7350,315 @@
 #define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_sse2
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_16x16_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_sse2
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_32x32_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_sse2
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_4x4_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_sse2
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_8x8_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_sse2
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_16x16_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_v_predictor_16x16 vpx_v_predictor_16x16_sse2
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_32x32_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_v_predictor_32x32 vpx_v_predictor_32x32_sse2
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_4x4_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_v_predictor_4x4 vpx_v_predictor_4x4_sse2
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_8x8_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_v_predictor_8x8 vpx_v_predictor_8x8_sse2
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x16_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance16x16_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x16)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance16x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance16x8_sse2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 unsigned int vpx_variance16x8_avx2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x8)(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x16_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x16_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x16)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x64_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x64_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x64)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x4_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance4x4 vpx_variance4x4_sse2
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x8_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance4x8 vpx_variance4x8_sse2
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance64x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance64x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x64_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance64x64_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance64x64)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance8x16_sse2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 #define vpx_variance8x16 vpx_variance8x16_sse2
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x4_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance8x4 vpx_variance8x4_sse2
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x8_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance8x8 vpx_variance8x8_sse2
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
@@ -7752,6 +7838,15 @@
   vpx_highbd_d63_predictor_8x8 = vpx_highbd_d63_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d63_predictor_8x8 = vpx_highbd_d63_predictor_8x8_ssse3;
+  vpx_highbd_hadamard_16x16 = vpx_highbd_hadamard_16x16_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_hadamard_16x16 = vpx_highbd_hadamard_16x16_avx2;
+  vpx_highbd_hadamard_32x32 = vpx_highbd_hadamard_32x32_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_hadamard_32x32 = vpx_highbd_hadamard_32x32_avx2;
+  vpx_highbd_hadamard_8x8 = vpx_highbd_hadamard_8x8_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_hadamard_8x8 = vpx_highbd_hadamard_8x8_avx2;
   vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_sse4_1;
@@ -7779,6 +7874,9 @@
   vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_sse4_1;
+  vpx_highbd_satd = vpx_highbd_satd_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_satd = vpx_highbd_satd_avx2;
   vpx_idct32x32_135_add = vpx_idct32x32_135_add_sse2;
   if (flags & HAS_SSSE3)
     vpx_idct32x32_135_add = vpx_idct32x32_135_add_ssse3;
@@ -7841,6 +7939,9 @@
   vpx_sad32x32x4d = vpx_sad32x32x4d_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x32x4d = vpx_sad32x32x4d_avx2;
+  vpx_sad32x32x8 = vpx_sad32x32x8_c;
+  if (flags & HAS_AVX2)
+    vpx_sad32x32x8 = vpx_sad32x32x8_avx2;
   vpx_sad32x64 = vpx_sad32x64_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x64 = vpx_sad32x64_avx2;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vp8_rtcd.h
index 9472db1..c46bfe5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vp8_rtcd.h
@@ -27,100 +27,90 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-void vp8_bilinear_predict16x16_sse2(unsigned char* src,
-                                    int src_pitch,
-                                    int xofst,
-                                    int yofst,
-                                    unsigned char* dst,
+void vp8_bilinear_predict16x16_sse2(unsigned char* src_ptr,
+                                    int src_pixels_per_line,
+                                    int xoffset,
+                                    int yoffset,
+                                    unsigned char* dst_ptr,
                                     int dst_pitch);
-void vp8_bilinear_predict16x16_ssse3(unsigned char* src,
-                                     int src_pitch,
-                                     int xofst,
-                                     int yofst,
-                                     unsigned char* dst,
+void vp8_bilinear_predict16x16_ssse3(unsigned char* src_ptr,
+                                     int src_pixels_per_line,
+                                     int xoffset,
+                                     int yoffset,
+                                     unsigned char* dst_ptr,
                                      int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict16x16)(unsigned char* src,
-                                              int src_pitch,
-                                              int xofst,
-                                              int yofst,
-                                              unsigned char* dst,
+RTCD_EXTERN void (*vp8_bilinear_predict16x16)(unsigned char* src_ptr,
+                                              int src_pixels_per_line,
+                                              int xoffset,
+                                              int yoffset,
+                                              unsigned char* dst_ptr,
                                               int dst_pitch);
 
-void vp8_bilinear_predict4x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict4x4_mmx(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
-                                 int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict4x4)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
-                                            int dst_pitch);
-
-void vp8_bilinear_predict8x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
-                               int dst_pitch);
-void vp8_bilinear_predict8x4_mmx(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
-                                 int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict8x4)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
-                                            int dst_pitch);
-
-void vp8_bilinear_predict8x8_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
-                               int dst_pitch);
-void vp8_bilinear_predict8x8_sse2(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict4x4_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
-void vp8_bilinear_predict8x8_ssse3(unsigned char* src,
-                                   int src_pitch,
-                                   int xofst,
-                                   int yofst,
-                                   unsigned char* dst,
+#define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_sse2
+
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x4_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_sse2
+
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x8_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+void vp8_bilinear_predict8x8_ssse3(unsigned char* src_ptr,
+                                   int src_pixels_per_line,
+                                   int xoffset,
+                                   int yoffset,
+                                   unsigned char* dst_ptr,
                                    int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict8x8)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
+RTCD_EXTERN void (*vp8_bilinear_predict8x8)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
                                             int dst_pitch);
 
 void vp8_blend_b_c(unsigned char* y,
                    unsigned char* u,
                    unsigned char* v,
-                   int y1,
-                   int u1,
-                   int v1,
+                   int y_1,
+                   int u_1,
+                   int v_1,
                    int alpha,
                    int stride);
 #define vp8_blend_b vp8_blend_b_c
@@ -128,9 +118,9 @@
 void vp8_blend_mb_inner_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
@@ -138,92 +128,79 @@
 void vp8_blend_mb_outer_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
 
 int vp8_block_error_c(short* coeff, short* dqcoeff);
 int vp8_block_error_sse2(short* coeff, short* dqcoeff);
-RTCD_EXTERN int (*vp8_block_error)(short* coeff, short* dqcoeff);
+#define vp8_block_error vp8_block_error_sse2
 
 void vp8_copy32xn_c(const unsigned char* src_ptr,
-                    int source_stride,
+                    int src_stride,
                     unsigned char* dst_ptr,
                     int dst_stride,
-                    int n);
+                    int height);
 void vp8_copy32xn_sse2(const unsigned char* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        unsigned char* dst_ptr,
                        int dst_stride,
-                       int n);
+                       int height);
 void vp8_copy32xn_sse3(const unsigned char* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        unsigned char* dst_ptr,
                        int dst_stride,
-                       int n);
+                       int height);
 RTCD_EXTERN void (*vp8_copy32xn)(const unsigned char* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  unsigned char* dst_ptr,
                                  int dst_stride,
-                                 int n);
+                                 int height);
 
 void vp8_copy_mem16x16_c(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 void vp8_copy_mem16x16_sse2(unsigned char* src,
-                            int src_pitch,
-                            unsigned char* dst,
-                            int dst_pitch);
-RTCD_EXTERN void (*vp8_copy_mem16x16)(unsigned char* src,
-                                      int src_pitch,
-                                      unsigned char* dst,
-                                      int dst_pitch);
-
-void vp8_copy_mem8x4_c(unsigned char* src,
-                       int src_pitch,
-                       unsigned char* dst,
-                       int dst_pitch);
-void vp8_copy_mem8x4_mmx(unsigned char* src,
-                         int src_pitch,
-                         unsigned char* dst,
-                         int dst_pitch);
-RTCD_EXTERN void (*vp8_copy_mem8x4)(unsigned char* src,
-                                    int src_pitch,
-                                    unsigned char* dst,
-                                    int dst_pitch);
-
-void vp8_copy_mem8x8_c(unsigned char* src,
-                       int src_pitch,
-                       unsigned char* dst,
-                       int dst_pitch);
-void vp8_copy_mem8x8_mmx(unsigned char* src,
-                         int src_pitch,
-                         unsigned char* dst,
-                         int dst_pitch);
-RTCD_EXTERN void (*vp8_copy_mem8x8)(unsigned char* src,
-                                    int src_pitch,
-                                    unsigned char* dst,
-                                    int dst_pitch);
-
-void vp8_dc_only_idct_add_c(short input,
-                            unsigned char* pred,
-                            int pred_stride,
+                            int src_stride,
                             unsigned char* dst,
                             int dst_stride);
-void vp8_dc_only_idct_add_mmx(short input,
-                              unsigned char* pred,
+#define vp8_copy_mem16x16 vp8_copy_mem16x16_sse2
+
+void vp8_copy_mem8x4_c(unsigned char* src,
+                       int src_stride,
+                       unsigned char* dst,
+                       int dst_stride);
+void vp8_copy_mem8x4_mmx(unsigned char* src,
+                         int src_stride,
+                         unsigned char* dst,
+                         int dst_stride);
+#define vp8_copy_mem8x4 vp8_copy_mem8x4_mmx
+
+void vp8_copy_mem8x8_c(unsigned char* src,
+                       int src_stride,
+                       unsigned char* dst,
+                       int dst_stride);
+void vp8_copy_mem8x8_mmx(unsigned char* src,
+                         int src_stride,
+                         unsigned char* dst,
+                         int dst_stride);
+#define vp8_copy_mem8x8 vp8_copy_mem8x8_mmx
+
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
+                            int dst_stride);
+void vp8_dc_only_idct_add_mmx(short input_dc,
+                              unsigned char* pred_ptr,
                               int pred_stride,
-                              unsigned char* dst,
+                              unsigned char* dst_ptr,
                               int dst_stride);
-RTCD_EXTERN void (*vp8_dc_only_idct_add)(short input,
-                                         unsigned char* pred,
-                                         int pred_stride,
-                                         unsigned char* dst,
-                                         int dst_stride);
+#define vp8_dc_only_idct_add vp8_dc_only_idct_add_mmx
 
 int vp8_denoiser_filter_c(unsigned char* mc_running_avg_y,
                           int mc_avg_y_stride,
@@ -241,14 +218,7 @@
                              int sig_stride,
                              unsigned int motion_magnitude,
                              int increase_denoising);
-RTCD_EXTERN int (*vp8_denoiser_filter)(unsigned char* mc_running_avg_y,
-                                       int mc_avg_y_stride,
-                                       unsigned char* running_avg_y,
-                                       int avg_y_stride,
-                                       unsigned char* sig,
-                                       int sig_stride,
-                                       unsigned int motion_magnitude,
-                                       int increase_denoising);
+#define vp8_denoiser_filter vp8_denoiser_filter_sse2
 
 int vp8_denoiser_filter_uv_c(unsigned char* mc_running_avg,
                              int mc_avg_stride,
@@ -266,27 +236,17 @@
                                 int sig_stride,
                                 unsigned int motion_magnitude,
                                 int increase_denoising);
-RTCD_EXTERN int (*vp8_denoiser_filter_uv)(unsigned char* mc_running_avg,
-                                          int mc_avg_stride,
-                                          unsigned char* running_avg,
-                                          int avg_stride,
-                                          unsigned char* sig,
-                                          int sig_stride,
-                                          unsigned int motion_magnitude,
-                                          int increase_denoising);
+#define vp8_denoiser_filter_uv vp8_denoiser_filter_uv_sse2
 
 void vp8_dequant_idct_add_c(short* input,
                             short* dq,
-                            unsigned char* output,
+                            unsigned char* dest,
                             int stride);
 void vp8_dequant_idct_add_mmx(short* input,
                               short* dq,
-                              unsigned char* output,
+                              unsigned char* dest,
                               int stride);
-RTCD_EXTERN void (*vp8_dequant_idct_add)(short* input,
-                                         short* dq,
-                                         unsigned char* output,
-                                         int stride);
+#define vp8_dequant_idct_add vp8_dequant_idct_add_mmx
 
 void vp8_dequant_idct_add_uv_block_c(short* q,
                                      short* dq,
@@ -300,12 +260,7 @@
                                         unsigned char* dst_v,
                                         int stride,
                                         char* eobs);
-RTCD_EXTERN void (*vp8_dequant_idct_add_uv_block)(short* q,
-                                                  short* dq,
-                                                  unsigned char* dst_u,
-                                                  unsigned char* dst_v,
-                                                  int stride,
-                                                  char* eobs);
+#define vp8_dequant_idct_add_uv_block vp8_dequant_idct_add_uv_block_sse2
 
 void vp8_dequant_idct_add_y_block_c(short* q,
                                     short* dq,
@@ -317,15 +272,11 @@
                                        unsigned char* dst,
                                        int stride,
                                        char* eobs);
-RTCD_EXTERN void (*vp8_dequant_idct_add_y_block)(short* q,
-                                                 short* dq,
-                                                 unsigned char* dst,
-                                                 int stride,
-                                                 char* eobs);
+#define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_sse2
 
-void vp8_dequantize_b_c(struct blockd*, short* dqc);
-void vp8_dequantize_b_mmx(struct blockd*, short* dqc);
-RTCD_EXTERN void (*vp8_dequantize_b)(struct blockd*, short* dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
+void vp8_dequantize_b_mmx(struct blockd*, short* DQC);
+#define vp8_dequantize_b vp8_dequantize_b_mmx
 
 int vp8_diamond_search_sad_c(struct macroblock* x,
                              struct block* b,
@@ -349,17 +300,7 @@
                              struct variance_vtable* fn_ptr,
                              int* mvcost[2],
                              union int_mv* center_mv);
-RTCD_EXTERN int (*vp8_diamond_search_sad)(struct macroblock* x,
-                                          struct block* b,
-                                          struct blockd* d,
-                                          union int_mv* ref_mv,
-                                          union int_mv* best_mv,
-                                          int search_param,
-                                          int sad_per_bit,
-                                          int* num00,
-                                          struct variance_vtable* fn_ptr,
-                                          int* mvcost[2],
-                                          union int_mv* center_mv);
+#define vp8_diamond_search_sad vp8_diamond_search_sadx4
 
 void vp8_fast_quantize_b_c(struct block*, struct blockd*);
 void vp8_fast_quantize_b_sse2(struct block*, struct blockd*);
@@ -376,11 +317,7 @@
                                     unsigned char* dst,
                                     int dst_stride,
                                     int src_weight);
-RTCD_EXTERN void (*vp8_filter_by_weight16x16)(unsigned char* src,
-                                              int src_stride,
-                                              unsigned char* dst,
-                                              int dst_stride,
-                                              int src_weight);
+#define vp8_filter_by_weight16x16 vp8_filter_by_weight16x16_sse2
 
 void vp8_filter_by_weight4x4_c(unsigned char* src,
                                int src_stride,
@@ -399,11 +336,7 @@
                                   unsigned char* dst,
                                   int dst_stride,
                                   int src_weight);
-RTCD_EXTERN void (*vp8_filter_by_weight8x8)(unsigned char* src,
-                                            int src_stride,
-                                            unsigned char* dst,
-                                            int dst_stride,
-                                            int src_weight);
+#define vp8_filter_by_weight8x8 vp8_filter_by_weight8x8_sse2
 
 int vp8_full_search_sad_c(struct macroblock* x,
                           struct block* b,
@@ -442,136 +375,108 @@
                                        int* mvcost[2],
                                        union int_mv* center_mv);
 
-void vp8_loop_filter_bh_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bh_sse2(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bh_sse2(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
-RTCD_EXTERN void (*vp8_loop_filter_bh)(unsigned char* y,
-                                       unsigned char* u,
-                                       unsigned char* v,
-                                       int ystride,
-                                       int uv_stride,
-                                       struct loop_filter_info* lfi);
+#define vp8_loop_filter_bh vp8_loop_filter_bh_sse2
 
-void vp8_loop_filter_bv_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bv_sse2(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bv_sse2(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
-RTCD_EXTERN void (*vp8_loop_filter_bv)(unsigned char* y,
-                                       unsigned char* u,
-                                       unsigned char* v,
-                                       int ystride,
-                                       int uv_stride,
-                                       struct loop_filter_info* lfi);
+#define vp8_loop_filter_bv vp8_loop_filter_bv_sse2
 
-void vp8_loop_filter_mbh_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbh_sse2(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbh_sse2(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
-RTCD_EXTERN void (*vp8_loop_filter_mbh)(unsigned char* y,
-                                        unsigned char* u,
-                                        unsigned char* v,
-                                        int ystride,
-                                        int uv_stride,
-                                        struct loop_filter_info* lfi);
+#define vp8_loop_filter_mbh vp8_loop_filter_mbh_sse2
 
-void vp8_loop_filter_mbv_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbv_sse2(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbv_sse2(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
-RTCD_EXTERN void (*vp8_loop_filter_mbv)(unsigned char* y,
-                                        unsigned char* u,
-                                        unsigned char* v,
-                                        int ystride,
-                                        int uv_stride,
-                                        struct loop_filter_info* lfi);
+#define vp8_loop_filter_mbv vp8_loop_filter_mbv_sse2
 
-void vp8_loop_filter_bhs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bhs_sse2(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bhs_sse2(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_bh)(unsigned char* y,
-                                              int ystride,
-                                              const unsigned char* blimit);
+#define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_sse2
 
-void vp8_loop_filter_bvs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bvs_sse2(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bvs_sse2(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_bv)(unsigned char* y,
-                                              int ystride,
-                                              const unsigned char* blimit);
+#define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_sse2
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y,
-                                              int ystride,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
-void vp8_loop_filter_simple_horizontal_edge_sse2(unsigned char* y,
-                                                 int ystride,
+void vp8_loop_filter_simple_horizontal_edge_sse2(unsigned char* y_ptr,
+                                                 int y_stride,
                                                  const unsigned char* blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_mbh)(unsigned char* y,
-                                               int ystride,
-                                               const unsigned char* blimit);
+#define vp8_loop_filter_simple_mbh vp8_loop_filter_simple_horizontal_edge_sse2
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y,
-                                            int ystride,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
                                             const unsigned char* blimit);
-void vp8_loop_filter_simple_vertical_edge_sse2(unsigned char* y,
-                                               int ystride,
+void vp8_loop_filter_simple_vertical_edge_sse2(unsigned char* y_ptr,
+                                               int y_stride,
                                                const unsigned char* blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_mbv)(unsigned char* y,
-                                               int ystride,
-                                               const unsigned char* blimit);
+#define vp8_loop_filter_simple_mbv vp8_loop_filter_simple_vertical_edge_sse2
 
 int vp8_mbblock_error_c(struct macroblock* mb, int dc);
 int vp8_mbblock_error_sse2(struct macroblock* mb, int dc);
-RTCD_EXTERN int (*vp8_mbblock_error)(struct macroblock* mb, int dc);
+#define vp8_mbblock_error vp8_mbblock_error_sse2
 
 int vp8_mbuverror_c(struct macroblock* mb);
 int vp8_mbuverror_sse2(struct macroblock* mb);
-RTCD_EXTERN int (*vp8_mbuverror)(struct macroblock* mb);
+#define vp8_mbuverror vp8_mbuverror_sse2
 
 int vp8_refining_search_sad_c(struct macroblock* x,
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -579,20 +484,12 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
-RTCD_EXTERN int (*vp8_refining_search_sad)(struct macroblock* x,
-                                           struct block* b,
-                                           struct blockd* d,
-                                           union int_mv* ref_mv,
-                                           int sad_per_bit,
-                                           int distance,
-                                           struct variance_vtable* fn_ptr,
-                                           int* mvcost[2],
-                                           union int_mv* center_mv);
+#define vp8_refining_search_sad vp8_refining_search_sadx4
 
 void vp8_regular_quantize_b_c(struct block*, struct blockd*);
 void vp8_regular_quantize_b_sse2(struct block*, struct blockd*);
@@ -601,137 +498,133 @@
 
 void vp8_short_fdct4x4_c(short* input, short* output, int pitch);
 void vp8_short_fdct4x4_sse2(short* input, short* output, int pitch);
-RTCD_EXTERN void (*vp8_short_fdct4x4)(short* input, short* output, int pitch);
+#define vp8_short_fdct4x4 vp8_short_fdct4x4_sse2
 
 void vp8_short_fdct8x4_c(short* input, short* output, int pitch);
 void vp8_short_fdct8x4_sse2(short* input, short* output, int pitch);
-RTCD_EXTERN void (*vp8_short_fdct8x4)(short* input, short* output, int pitch);
+#define vp8_short_fdct8x4 vp8_short_fdct8x4_sse2
 
 void vp8_short_idct4x4llm_c(short* input,
-                            unsigned char* pred,
-                            int pitch,
-                            unsigned char* dst,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 void vp8_short_idct4x4llm_mmx(short* input,
-                              unsigned char* pred,
-                              int pitch,
-                              unsigned char* dst,
+                              unsigned char* pred_ptr,
+                              int pred_stride,
+                              unsigned char* dst_ptr,
                               int dst_stride);
-RTCD_EXTERN void (*vp8_short_idct4x4llm)(short* input,
-                                         unsigned char* pred,
-                                         int pitch,
-                                         unsigned char* dst,
-                                         int dst_stride);
+#define vp8_short_idct4x4llm vp8_short_idct4x4llm_mmx
 
-void vp8_short_inv_walsh4x4_c(short* input, short* output);
-void vp8_short_inv_walsh4x4_sse2(short* input, short* output);
-RTCD_EXTERN void (*vp8_short_inv_walsh4x4)(short* input, short* output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
+void vp8_short_inv_walsh4x4_sse2(short* input, short* mb_dqcoeff);
+#define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_sse2
 
-void vp8_short_inv_walsh4x4_1_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
 void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
 void vp8_short_walsh4x4_sse2(short* input, short* output, int pitch);
-RTCD_EXTERN void (*vp8_short_walsh4x4)(short* input, short* output, int pitch);
+#define vp8_short_walsh4x4 vp8_short_walsh4x4_sse2
 
-void vp8_sixtap_predict16x16_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_sixtap_predict16x16_sse2(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_sixtap_predict16x16_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
-void vp8_sixtap_predict16x16_ssse3(unsigned char* src,
-                                   int src_pitch,
-                                   int xofst,
-                                   int yofst,
-                                   unsigned char* dst,
+void vp8_sixtap_predict16x16_ssse3(unsigned char* src_ptr,
+                                   int src_pixels_per_line,
+                                   int xoffset,
+                                   int yoffset,
+                                   unsigned char* dst_ptr,
                                    int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict16x16)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict16x16)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
                                             int dst_pitch);
 
-void vp8_sixtap_predict4x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict4x4_mmx(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict4x4_mmx(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_sixtap_predict4x4_ssse3(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_sixtap_predict4x4_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict4x4)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict4x4)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
-void vp8_sixtap_predict8x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x4_sse2(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x4_sse2(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
-void vp8_sixtap_predict8x4_ssse3(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_sixtap_predict8x4_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict8x4)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict8x4)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
-void vp8_sixtap_predict8x8_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x8_sse2(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x8_sse2(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
-void vp8_sixtap_predict8x8_ssse3(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_sixtap_predict8x8_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict8x8)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict8x8)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
 void vp8_rtcd(void);
@@ -743,150 +636,36 @@
 
   (void)flags;
 
-  vp8_bilinear_predict16x16 = vp8_bilinear_predict16x16_c;
-  if (flags & HAS_SSE2)
-    vp8_bilinear_predict16x16 = vp8_bilinear_predict16x16_sse2;
+  vp8_bilinear_predict16x16 = vp8_bilinear_predict16x16_sse2;
   if (flags & HAS_SSSE3)
     vp8_bilinear_predict16x16 = vp8_bilinear_predict16x16_ssse3;
-  vp8_bilinear_predict4x4 = vp8_bilinear_predict4x4_c;
-  if (flags & HAS_MMX)
-    vp8_bilinear_predict4x4 = vp8_bilinear_predict4x4_mmx;
-  vp8_bilinear_predict8x4 = vp8_bilinear_predict8x4_c;
-  if (flags & HAS_MMX)
-    vp8_bilinear_predict8x4 = vp8_bilinear_predict8x4_mmx;
-  vp8_bilinear_predict8x8 = vp8_bilinear_predict8x8_c;
-  if (flags & HAS_SSE2)
-    vp8_bilinear_predict8x8 = vp8_bilinear_predict8x8_sse2;
+  vp8_bilinear_predict8x8 = vp8_bilinear_predict8x8_sse2;
   if (flags & HAS_SSSE3)
     vp8_bilinear_predict8x8 = vp8_bilinear_predict8x8_ssse3;
-  vp8_block_error = vp8_block_error_c;
-  if (flags & HAS_SSE2)
-    vp8_block_error = vp8_block_error_sse2;
-  vp8_copy32xn = vp8_copy32xn_c;
-  if (flags & HAS_SSE2)
-    vp8_copy32xn = vp8_copy32xn_sse2;
+  vp8_copy32xn = vp8_copy32xn_sse2;
   if (flags & HAS_SSE3)
     vp8_copy32xn = vp8_copy32xn_sse3;
-  vp8_copy_mem16x16 = vp8_copy_mem16x16_c;
-  if (flags & HAS_SSE2)
-    vp8_copy_mem16x16 = vp8_copy_mem16x16_sse2;
-  vp8_copy_mem8x4 = vp8_copy_mem8x4_c;
-  if (flags & HAS_MMX)
-    vp8_copy_mem8x4 = vp8_copy_mem8x4_mmx;
-  vp8_copy_mem8x8 = vp8_copy_mem8x8_c;
-  if (flags & HAS_MMX)
-    vp8_copy_mem8x8 = vp8_copy_mem8x8_mmx;
-  vp8_dc_only_idct_add = vp8_dc_only_idct_add_c;
-  if (flags & HAS_MMX)
-    vp8_dc_only_idct_add = vp8_dc_only_idct_add_mmx;
-  vp8_denoiser_filter = vp8_denoiser_filter_c;
-  if (flags & HAS_SSE2)
-    vp8_denoiser_filter = vp8_denoiser_filter_sse2;
-  vp8_denoiser_filter_uv = vp8_denoiser_filter_uv_c;
-  if (flags & HAS_SSE2)
-    vp8_denoiser_filter_uv = vp8_denoiser_filter_uv_sse2;
-  vp8_dequant_idct_add = vp8_dequant_idct_add_c;
-  if (flags & HAS_MMX)
-    vp8_dequant_idct_add = vp8_dequant_idct_add_mmx;
-  vp8_dequant_idct_add_uv_block = vp8_dequant_idct_add_uv_block_c;
-  if (flags & HAS_SSE2)
-    vp8_dequant_idct_add_uv_block = vp8_dequant_idct_add_uv_block_sse2;
-  vp8_dequant_idct_add_y_block = vp8_dequant_idct_add_y_block_c;
-  if (flags & HAS_SSE2)
-    vp8_dequant_idct_add_y_block = vp8_dequant_idct_add_y_block_sse2;
-  vp8_dequantize_b = vp8_dequantize_b_c;
-  if (flags & HAS_MMX)
-    vp8_dequantize_b = vp8_dequantize_b_mmx;
-  vp8_diamond_search_sad = vp8_diamond_search_sad_c;
-  if (flags & HAS_SSE2)
-    vp8_diamond_search_sad = vp8_diamond_search_sadx4;
-  vp8_fast_quantize_b = vp8_fast_quantize_b_c;
-  if (flags & HAS_SSE2)
-    vp8_fast_quantize_b = vp8_fast_quantize_b_sse2;
+  vp8_fast_quantize_b = vp8_fast_quantize_b_sse2;
   if (flags & HAS_SSSE3)
     vp8_fast_quantize_b = vp8_fast_quantize_b_ssse3;
-  vp8_filter_by_weight16x16 = vp8_filter_by_weight16x16_c;
-  if (flags & HAS_SSE2)
-    vp8_filter_by_weight16x16 = vp8_filter_by_weight16x16_sse2;
-  vp8_filter_by_weight8x8 = vp8_filter_by_weight8x8_c;
-  if (flags & HAS_SSE2)
-    vp8_filter_by_weight8x8 = vp8_filter_by_weight8x8_sse2;
   vp8_full_search_sad = vp8_full_search_sad_c;
   if (flags & HAS_SSE3)
     vp8_full_search_sad = vp8_full_search_sadx3;
   if (flags & HAS_SSE4_1)
     vp8_full_search_sad = vp8_full_search_sadx8;
-  vp8_loop_filter_bh = vp8_loop_filter_bh_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_bh = vp8_loop_filter_bh_sse2;
-  vp8_loop_filter_bv = vp8_loop_filter_bv_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_bv = vp8_loop_filter_bv_sse2;
-  vp8_loop_filter_mbh = vp8_loop_filter_mbh_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_mbh = vp8_loop_filter_mbh_sse2;
-  vp8_loop_filter_mbv = vp8_loop_filter_mbv_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_mbv = vp8_loop_filter_mbv_sse2;
-  vp8_loop_filter_simple_bh = vp8_loop_filter_bhs_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_simple_bh = vp8_loop_filter_bhs_sse2;
-  vp8_loop_filter_simple_bv = vp8_loop_filter_bvs_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_simple_bv = vp8_loop_filter_bvs_sse2;
-  vp8_loop_filter_simple_mbh = vp8_loop_filter_simple_horizontal_edge_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_simple_mbh = vp8_loop_filter_simple_horizontal_edge_sse2;
-  vp8_loop_filter_simple_mbv = vp8_loop_filter_simple_vertical_edge_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_simple_mbv = vp8_loop_filter_simple_vertical_edge_sse2;
-  vp8_mbblock_error = vp8_mbblock_error_c;
-  if (flags & HAS_SSE2)
-    vp8_mbblock_error = vp8_mbblock_error_sse2;
-  vp8_mbuverror = vp8_mbuverror_c;
-  if (flags & HAS_SSE2)
-    vp8_mbuverror = vp8_mbuverror_sse2;
-  vp8_refining_search_sad = vp8_refining_search_sad_c;
-  if (flags & HAS_SSE2)
-    vp8_refining_search_sad = vp8_refining_search_sadx4;
-  vp8_regular_quantize_b = vp8_regular_quantize_b_c;
-  if (flags & HAS_SSE2)
-    vp8_regular_quantize_b = vp8_regular_quantize_b_sse2;
+  vp8_regular_quantize_b = vp8_regular_quantize_b_sse2;
   if (flags & HAS_SSE4_1)
     vp8_regular_quantize_b = vp8_regular_quantize_b_sse4_1;
-  vp8_short_fdct4x4 = vp8_short_fdct4x4_c;
-  if (flags & HAS_SSE2)
-    vp8_short_fdct4x4 = vp8_short_fdct4x4_sse2;
-  vp8_short_fdct8x4 = vp8_short_fdct8x4_c;
-  if (flags & HAS_SSE2)
-    vp8_short_fdct8x4 = vp8_short_fdct8x4_sse2;
-  vp8_short_idct4x4llm = vp8_short_idct4x4llm_c;
-  if (flags & HAS_MMX)
-    vp8_short_idct4x4llm = vp8_short_idct4x4llm_mmx;
-  vp8_short_inv_walsh4x4 = vp8_short_inv_walsh4x4_c;
-  if (flags & HAS_SSE2)
-    vp8_short_inv_walsh4x4 = vp8_short_inv_walsh4x4_sse2;
-  vp8_short_walsh4x4 = vp8_short_walsh4x4_c;
-  if (flags & HAS_SSE2)
-    vp8_short_walsh4x4 = vp8_short_walsh4x4_sse2;
-  vp8_sixtap_predict16x16 = vp8_sixtap_predict16x16_c;
-  if (flags & HAS_SSE2)
-    vp8_sixtap_predict16x16 = vp8_sixtap_predict16x16_sse2;
+  vp8_sixtap_predict16x16 = vp8_sixtap_predict16x16_sse2;
   if (flags & HAS_SSSE3)
     vp8_sixtap_predict16x16 = vp8_sixtap_predict16x16_ssse3;
-  vp8_sixtap_predict4x4 = vp8_sixtap_predict4x4_c;
-  if (flags & HAS_MMX)
-    vp8_sixtap_predict4x4 = vp8_sixtap_predict4x4_mmx;
+  vp8_sixtap_predict4x4 = vp8_sixtap_predict4x4_mmx;
   if (flags & HAS_SSSE3)
     vp8_sixtap_predict4x4 = vp8_sixtap_predict4x4_ssse3;
-  vp8_sixtap_predict8x4 = vp8_sixtap_predict8x4_c;
-  if (flags & HAS_SSE2)
-    vp8_sixtap_predict8x4 = vp8_sixtap_predict8x4_sse2;
+  vp8_sixtap_predict8x4 = vp8_sixtap_predict8x4_sse2;
   if (flags & HAS_SSSE3)
     vp8_sixtap_predict8x4 = vp8_sixtap_predict8x4_ssse3;
-  vp8_sixtap_predict8x8 = vp8_sixtap_predict8x8_c;
-  if (flags & HAS_SSE2)
-    vp8_sixtap_predict8x8 = vp8_sixtap_predict8x8_sse2;
+  vp8_sixtap_predict8x8 = vp8_sixtap_predict8x8_sse2;
   if (flags & HAS_SSSE3)
     vp8_sixtap_predict8x8 = vp8_sixtap_predict8x8_ssse3;
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vp9_rtcd.h
index c8f69b9..e079a2d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vp9_rtcd.h
@@ -79,15 +79,7 @@
                              int increase_denoising,
                              BLOCK_SIZE bs,
                              int motion_magnitude);
-RTCD_EXTERN int (*vp9_denoiser_filter)(const uint8_t* sig,
-                                       int sig_stride,
-                                       const uint8_t* mc_avg,
-                                       int mc_avg_stride,
-                                       uint8_t* avg,
-                                       int avg_stride,
-                                       int increase_denoising,
-                                       BLOCK_SIZE bs,
-                                       int motion_magnitude);
+#define vp9_denoiser_filter vp9_denoiser_filter_sse2
 
 int vp9_diamond_search_sad_c(const struct macroblock* x,
                              const struct search_site_config* cfg,
@@ -118,46 +110,6 @@
     const struct vp9_variance_vtable* fn_ptr,
     const struct mv* center_mv);
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-void vp9_fdct8x8_quant_ssse3(const int16_t* input,
-                             int stride,
-                             tran_low_t* coeff_ptr,
-                             intptr_t n_coeffs,
-                             int skip_block,
-                             const int16_t* round_ptr,
-                             const int16_t* quant_ptr,
-                             tran_low_t* qcoeff_ptr,
-                             tran_low_t* dqcoeff_ptr,
-                             const int16_t* dequant_ptr,
-                             uint16_t* eob_ptr,
-                             const int16_t* scan,
-                             const int16_t* iscan);
-RTCD_EXTERN void (*vp9_fdct8x8_quant)(const int16_t* input,
-                                      int stride,
-                                      tran_low_t* coeff_ptr,
-                                      intptr_t n_coeffs,
-                                      int skip_block,
-                                      const int16_t* round_ptr,
-                                      const int16_t* quant_ptr,
-                                      tran_low_t* qcoeff_ptr,
-                                      tran_low_t* dqcoeff_ptr,
-                                      const int16_t* dequant_ptr,
-                                      uint16_t* eob_ptr,
-                                      const int16_t* scan,
-                                      const int16_t* iscan);
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -166,10 +118,7 @@
                        tran_low_t* output,
                        int stride,
                        int tx_type);
-RTCD_EXTERN void (*vp9_fht16x16)(const int16_t* input,
-                                 tran_low_t* output,
-                                 int stride,
-                                 int tx_type);
+#define vp9_fht16x16 vp9_fht16x16_sse2
 
 void vp9_fht4x4_c(const int16_t* input,
                   tran_low_t* output,
@@ -179,10 +128,7 @@
                      tran_low_t* output,
                      int stride,
                      int tx_type);
-RTCD_EXTERN void (*vp9_fht4x4)(const int16_t* input,
-                               tran_low_t* output,
-                               int stride,
-                               int tx_type);
+#define vp9_fht4x4 vp9_fht4x4_sse2
 
 void vp9_fht8x8_c(const int16_t* input,
                   tran_low_t* output,
@@ -192,10 +138,7 @@
                      tran_low_t* output,
                      int stride,
                      int tx_type);
-RTCD_EXTERN void (*vp9_fht8x8)(const int16_t* input,
-                               tran_low_t* output,
-                               int stride,
-                               int tx_type);
+#define vp9_fht8x8 vp9_fht8x8_sse2
 
 void vp9_filter_by_weight16x16_c(const uint8_t* src,
                                  int src_stride,
@@ -207,11 +150,7 @@
                                     uint8_t* dst,
                                     int dst_stride,
                                     int src_weight);
-RTCD_EXTERN void (*vp9_filter_by_weight16x16)(const uint8_t* src,
-                                              int src_stride,
-                                              uint8_t* dst,
-                                              int dst_stride,
-                                              int src_weight);
+#define vp9_filter_by_weight16x16 vp9_filter_by_weight16x16_sse2
 
 void vp9_filter_by_weight8x8_c(const uint8_t* src,
                                int src_stride,
@@ -223,17 +162,11 @@
                                   uint8_t* dst,
                                   int dst_stride,
                                   int src_weight);
-RTCD_EXTERN void (*vp9_filter_by_weight8x8)(const uint8_t* src,
-                                            int src_stride,
-                                            uint8_t* dst,
-                                            int dst_stride,
-                                            int src_weight);
+#define vp9_filter_by_weight8x8 vp9_filter_by_weight8x8_sse2
 
 void vp9_fwht4x4_c(const int16_t* input, tran_low_t* output, int stride);
 void vp9_fwht4x4_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vp9_fwht4x4)(const int16_t* input,
-                                tran_low_t* output,
-                                int stride);
+#define vp9_fwht4x4 vp9_fwht4x4_sse2
 
 int64_t vp9_highbd_block_error_c(const tran_low_t* coeff,
                                  const tran_low_t* dqcoeff,
@@ -245,11 +178,7 @@
                                     intptr_t block_size,
                                     int64_t* ssz,
                                     int bd);
-RTCD_EXTERN int64_t (*vp9_highbd_block_error)(const tran_low_t* coeff,
-                                              const tran_low_t* dqcoeff,
-                                              intptr_t block_size,
-                                              int64_t* ssz,
-                                              int bd);
+#define vp9_highbd_block_error vp9_highbd_block_error_sse2
 
 void vp9_highbd_fht16x16_c(const int16_t* input,
                            tran_low_t* output,
@@ -273,18 +202,18 @@
 #define vp9_highbd_fwht4x4 vp9_highbd_fwht4x4_c
 
 void vp9_highbd_iht16x16_256_add_c(const tran_low_t* input,
-                                   uint16_t* output,
-                                   int pitch,
+                                   uint16_t* dest,
+                                   int stride,
                                    int tx_type,
                                    int bd);
 void vp9_highbd_iht16x16_256_add_sse4_1(const tran_low_t* input,
-                                        uint16_t* output,
-                                        int pitch,
+                                        uint16_t* dest,
+                                        int stride,
                                         int tx_type,
                                         int bd);
 RTCD_EXTERN void (*vp9_highbd_iht16x16_256_add)(const tran_low_t* input,
-                                                uint16_t* output,
-                                                int pitch,
+                                                uint16_t* dest,
+                                                int stride,
                                                 int tx_type,
                                                 int bd);
 
@@ -376,23 +305,21 @@
                                         unsigned int block_width,
                                         unsigned int block_height,
                                         int strength,
-                                        int filter_weight,
+                                        int* blk_fw,
+                                        int use_32x32,
                                         uint32_t* accumulator,
                                         uint16_t* count);
 #define vp9_highbd_temporal_filter_apply vp9_highbd_temporal_filter_apply_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 void vp9_iht16x16_256_add_sse2(const tran_low_t* input,
-                               uint8_t* output,
-                               int pitch,
+                               uint8_t* dest,
+                               int stride,
                                int tx_type);
-RTCD_EXTERN void (*vp9_iht16x16_256_add)(const tran_low_t* input,
-                                         uint8_t* output,
-                                         int pitch,
-                                         int tx_type);
+#define vp9_iht16x16_256_add vp9_iht16x16_256_add_sse2
 
 void vp9_iht4x4_16_add_c(const tran_low_t* input,
                          uint8_t* dest,
@@ -402,10 +329,7 @@
                             uint8_t* dest,
                             int stride,
                             int tx_type);
-RTCD_EXTERN void (*vp9_iht4x4_16_add)(const tran_low_t* input,
-                                      uint8_t* dest,
-                                      int stride,
-                                      int tx_type);
+#define vp9_iht4x4_16_add vp9_iht4x4_16_add_sse2
 
 void vp9_iht8x8_64_add_c(const tran_low_t* input,
                          uint8_t* dest,
@@ -415,10 +339,7 @@
                             uint8_t* dest,
                             int stride,
                             int tx_type);
-RTCD_EXTERN void (*vp9_iht8x8_64_add)(const tran_low_t* input,
-                                      uint8_t* dest,
-                                      int stride,
-                                      int tx_type);
+#define vp9_iht8x8_64_add vp9_iht8x8_64_add_sse2
 
 void vp9_quantize_fp_c(const tran_low_t* coeff_ptr,
                        intptr_t n_coeffs,
@@ -501,46 +422,15 @@
 
   (void)flags;
 
-  vp9_block_error = vp9_block_error_c;
-  if (flags & HAS_SSE2)
-    vp9_block_error = vp9_block_error_sse2;
+  vp9_block_error = vp9_block_error_sse2;
   if (flags & HAS_AVX2)
     vp9_block_error = vp9_block_error_avx2;
-  vp9_block_error_fp = vp9_block_error_fp_c;
-  if (flags & HAS_SSE2)
-    vp9_block_error_fp = vp9_block_error_fp_sse2;
+  vp9_block_error_fp = vp9_block_error_fp_sse2;
   if (flags & HAS_AVX2)
     vp9_block_error_fp = vp9_block_error_fp_avx2;
-  vp9_denoiser_filter = vp9_denoiser_filter_c;
-  if (flags & HAS_SSE2)
-    vp9_denoiser_filter = vp9_denoiser_filter_sse2;
   vp9_diamond_search_sad = vp9_diamond_search_sad_c;
   if (flags & HAS_AVX)
     vp9_diamond_search_sad = vp9_diamond_search_sad_avx;
-  vp9_fdct8x8_quant = vp9_fdct8x8_quant_c;
-  if (flags & HAS_SSSE3)
-    vp9_fdct8x8_quant = vp9_fdct8x8_quant_ssse3;
-  vp9_fht16x16 = vp9_fht16x16_c;
-  if (flags & HAS_SSE2)
-    vp9_fht16x16 = vp9_fht16x16_sse2;
-  vp9_fht4x4 = vp9_fht4x4_c;
-  if (flags & HAS_SSE2)
-    vp9_fht4x4 = vp9_fht4x4_sse2;
-  vp9_fht8x8 = vp9_fht8x8_c;
-  if (flags & HAS_SSE2)
-    vp9_fht8x8 = vp9_fht8x8_sse2;
-  vp9_filter_by_weight16x16 = vp9_filter_by_weight16x16_c;
-  if (flags & HAS_SSE2)
-    vp9_filter_by_weight16x16 = vp9_filter_by_weight16x16_sse2;
-  vp9_filter_by_weight8x8 = vp9_filter_by_weight8x8_c;
-  if (flags & HAS_SSE2)
-    vp9_filter_by_weight8x8 = vp9_filter_by_weight8x8_sse2;
-  vp9_fwht4x4 = vp9_fwht4x4_c;
-  if (flags & HAS_SSE2)
-    vp9_fwht4x4 = vp9_fwht4x4_sse2;
-  vp9_highbd_block_error = vp9_highbd_block_error_c;
-  if (flags & HAS_SSE2)
-    vp9_highbd_block_error = vp9_highbd_block_error_sse2;
   vp9_highbd_iht16x16_256_add = vp9_highbd_iht16x16_256_add_c;
   if (flags & HAS_SSE4_1)
     vp9_highbd_iht16x16_256_add = vp9_highbd_iht16x16_256_add_sse4_1;
@@ -550,18 +440,7 @@
   vp9_highbd_iht8x8_64_add = vp9_highbd_iht8x8_64_add_c;
   if (flags & HAS_SSE4_1)
     vp9_highbd_iht8x8_64_add = vp9_highbd_iht8x8_64_add_sse4_1;
-  vp9_iht16x16_256_add = vp9_iht16x16_256_add_c;
-  if (flags & HAS_SSE2)
-    vp9_iht16x16_256_add = vp9_iht16x16_256_add_sse2;
-  vp9_iht4x4_16_add = vp9_iht4x4_16_add_c;
-  if (flags & HAS_SSE2)
-    vp9_iht4x4_16_add = vp9_iht4x4_16_add_sse2;
-  vp9_iht8x8_64_add = vp9_iht8x8_64_add_c;
-  if (flags & HAS_SSE2)
-    vp9_iht8x8_64_add = vp9_iht8x8_64_add_sse2;
-  vp9_quantize_fp = vp9_quantize_fp_c;
-  if (flags & HAS_SSE2)
-    vp9_quantize_fp = vp9_quantize_fp_sse2;
+  vp9_quantize_fp = vp9_quantize_fp_sse2;
   if (flags & HAS_AVX2)
     vp9_quantize_fp = vp9_quantize_fp_avx2;
   vp9_scale_and_extend_frame = vp9_scale_and_extend_frame_c;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_config.asm
index 4537261..a5bda29 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_config.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_config.asm
@@ -1,7 +1,12 @@
+%define VPX_ARCH_ARM 0
 %define ARCH_ARM 0
+%define VPX_ARCH_MIPS 0
 %define ARCH_MIPS 0
+%define VPX_ARCH_X86 1
 %define ARCH_X86 1
+%define VPX_ARCH_X86_64 0
 %define ARCH_X86_64 0
+%define VPX_ARCH_PPC 0
 %define ARCH_PPC 0
 %define HAVE_NEON 0
 %define HAVE_NEON_ASM 0
@@ -66,20 +71,24 @@
 %define CONFIG_OS_SUPPORT 1
 %define CONFIG_UNIT_TESTS 1
 %define CONFIG_WEBM_IO 1
-%define CONFIG_LIBYUV 1
+%define CONFIG_LIBYUV 0
 %define CONFIG_DECODE_PERF_TESTS 0
 %define CONFIG_ENCODE_PERF_TESTS 0
 %define CONFIG_MULTI_RES_ENCODING 1
 %define CONFIG_TEMPORAL_DENOISING 1
 %define CONFIG_VP9_TEMPORAL_DENOISING 1
+%define CONFIG_CONSISTENT_RECODE 0
 %define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 %define CONFIG_VP9_HIGHBITDEPTH 1
 %define CONFIG_BETTER_HW_COMPATIBILITY 0
 %define CONFIG_EXPERIMENTAL 0
 %define CONFIG_SIZE_LIMIT 1
 %define CONFIG_ALWAYS_ADJUST_BPM 0
-%define CONFIG_SPATIAL_SVC 0
+%define CONFIG_BITSTREAM_DEBUG 0
+%define CONFIG_MISMATCH_DEBUG 0
 %define CONFIG_FP_MB_STATS 0
 %define CONFIG_EMULATE_HARDWARE 0
+%define CONFIG_NON_GREEDY_MV 0
+%define CONFIG_RATE_CTRL 0
 %define DECODE_WIDTH_LIMIT 16384
 %define DECODE_HEIGHT_LIMIT 16384
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_config.c
index 965a7aa..98fd447 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=x86-darwin9-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --enable-pic --as=yasm --disable-avx512 --enable-vp9-highbitdepth";
+static const char* const cfg = "--target=x86-darwin9-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv --enable-pic --as=yasm --disable-avx512 --enable-vp9-highbitdepth";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_config.h
index 12c4f24..11c14fe 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      inline
+#define VPX_ARCH_ARM 0
 #define ARCH_ARM 0
+#define VPX_ARCH_MIPS 0
 #define ARCH_MIPS 0
+#define VPX_ARCH_X86 1
 #define ARCH_X86 1
+#define VPX_ARCH_X86_64 0
 #define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 0
 #define HAVE_NEON_ASM 0
@@ -78,21 +83,25 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
 #define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 1
 #define CONFIG_BETTER_HW_COMPATIBILITY 0
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
-#define CONFIG_SPATIAL_SVC 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
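
The `vpx_config.{h,asm,c}` trio above is generated per target by libvpx's configure step; each `CONFIG_*` define tracks a configure switch, and the string embedded in `vpx_config.c` records the exact invocation. A sketch of the correspondence for this mac/ia32 target (the command line is the one recorded in the hunk above; only the annotation is added here):

```shell
# --disable-libyuv        -> CONFIG_LIBYUV 0   (was 1 before this patch)
# --enable-vp9-highbitdepth -> CONFIG_VP9_HIGHBITDEPTH 1
# --size-limit=16384x16384  -> DECODE_WIDTH_LIMIT / DECODE_HEIGHT_LIMIT 16384
./configure --target=x86-darwin9-gcc --enable-external-build \
  --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising \
  --enable-vp9-temporal-denoising --enable-vp9-postproc \
  --size-limit=16384x16384 --enable-realtime-only --disable-install-docs \
  --disable-libyuv --enable-pic --as=yasm --disable-avx512 \
  --enable-vp9-highbitdepth
```

New defines such as `CONFIG_NON_GREEDY_MV 0` and `CONFIG_RATE_CTRL 0` come from the newer libvpx revision's defaults rather than from explicit switches, which is why they appear in the config files without a matching flag in the recorded command line.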
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_dsp_rtcd.h
index d820a2b..096e3a8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/ia32/vpx_dsp_rtcd.h
@@ -22,11 +22,11 @@
 
 unsigned int vpx_avg_4x4_c(const uint8_t*, int p);
 unsigned int vpx_avg_4x4_sse2(const uint8_t*, int p);
-RTCD_EXTERN unsigned int (*vpx_avg_4x4)(const uint8_t*, int p);
+#define vpx_avg_4x4 vpx_avg_4x4_sse2
 
 unsigned int vpx_avg_8x8_c(const uint8_t*, int p);
 unsigned int vpx_avg_8x8_sse2(const uint8_t*, int p);
-RTCD_EXTERN unsigned int (*vpx_avg_8x8)(const uint8_t*, int p);
+#define vpx_avg_8x8 vpx_avg_8x8_sse2
 
 void vpx_comp_avg_pred_c(uint8_t* comp_pred,
                          const uint8_t* pred,
@@ -40,12 +40,7 @@
                             int height,
                             const uint8_t* ref,
                             int ref_stride);
-RTCD_EXTERN void (*vpx_comp_avg_pred)(uint8_t* comp_pred,
-                                      const uint8_t* pred,
-                                      int width,
-                                      int height,
-                                      const uint8_t* ref,
-                                      int ref_stride);
+#define vpx_comp_avg_pred vpx_comp_avg_pred_sse2
 
 void vpx_convolve8_c(const uint8_t* src,
                      ptrdiff_t src_stride,
@@ -405,17 +400,7 @@
                            int y_step_q4,
                            int w,
                            int h);
-RTCD_EXTERN void (*vpx_convolve_avg)(const uint8_t* src,
-                                     ptrdiff_t src_stride,
-                                     uint8_t* dst,
-                                     ptrdiff_t dst_stride,
-                                     const InterpKernel* filter,
-                                     int x0_q4,
-                                     int x_step_q4,
-                                     int y0_q4,
-                                     int y_step_q4,
-                                     int w,
-                                     int h);
+#define vpx_convolve_avg vpx_convolve_avg_sse2
 
 void vpx_convolve_copy_c(const uint8_t* src,
                          ptrdiff_t src_stride,
@@ -439,655 +424,553 @@
                             int y_step_q4,
                             int w,
                             int h);
-RTCD_EXTERN void (*vpx_convolve_copy)(const uint8_t* src,
-                                      ptrdiff_t src_stride,
-                                      uint8_t* dst,
-                                      ptrdiff_t dst_stride,
-                                      const InterpKernel* filter,
-                                      int x0_q4,
-                                      int x_step_q4,
-                                      int y0_q4,
-                                      int y_step_q4,
-                                      int w,
-                                      int h);
+#define vpx_convolve_copy vpx_convolve_copy_sse2
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_c
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_c
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_c
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_c
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d153_predictor_16x16_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_16x16)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d153_predictor_32x32_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_32x32)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d153_predictor_4x4_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_4x4)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d153_predictor_8x8_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_8x8)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d207_predictor_16x16_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_16x16)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d207_predictor_32x32_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_32x32)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d207_predictor_4x4_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
-RTCD_EXTERN void (*vpx_d207_predictor_4x4)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
-                                           const uint8_t* above,
-                                           const uint8_t* left);
+#define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_sse2
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d207_predictor_8x8_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_8x8)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_16x16_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_16x16)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_32x32_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_32x32)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_4x4_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_d45_predictor_4x4)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_sse2
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_8x8_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_d45_predictor_8x8)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_sse2
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d63_predictor_16x16_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_16x16)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d63_predictor_32x32_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_32x32)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d63_predictor_4x4_ssse3(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_4x4)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d63_predictor_8x8_ssse3(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_8x8)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_16x16_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_128_predictor_16x16)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint8_t* above,
-                                               const uint8_t* left);
+#define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_sse2
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_32x32_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_128_predictor_32x32)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint8_t* above,
-                                               const uint8_t* left);
+#define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_sse2
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_4x4_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_128_predictor_4x4)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
-                                             const uint8_t* above,
-                                             const uint8_t* left);
+#define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_sse2
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_8x8_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_128_predictor_8x8)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
-                                             const uint8_t* above,
-                                             const uint8_t* left);
+#define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_sse2
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_16x16_sse2(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_left_predictor_16x16)(uint8_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint8_t* above,
-                                                const uint8_t* left);
+#define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_sse2
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_32x32_sse2(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_left_predictor_32x32)(uint8_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint8_t* above,
-                                                const uint8_t* left);
+#define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_sse2
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_4x4_sse2(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_left_predictor_4x4)(uint8_t* dst,
-                                              ptrdiff_t y_stride,
-                                              const uint8_t* above,
-                                              const uint8_t* left);
+#define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_sse2
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_8x8_sse2(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_left_predictor_8x8)(uint8_t* dst,
-                                              ptrdiff_t y_stride,
-                                              const uint8_t* above,
-                                              const uint8_t* left);
+#define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_sse2
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_16x16_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_predictor_16x16)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
-                                           const uint8_t* above,
-                                           const uint8_t* left);
+#define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_sse2
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_32x32_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_predictor_32x32)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
-                                           const uint8_t* above,
-                                           const uint8_t* left);
+#define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_sse2
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_4x4_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_predictor_4x4)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
-                                         const uint8_t* above,
-                                         const uint8_t* left);
+#define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_sse2
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_8x8_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_predictor_8x8)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
-                                         const uint8_t* above,
-                                         const uint8_t* left);
+#define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_sse2
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_16x16_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_top_predictor_16x16)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint8_t* above,
-                                               const uint8_t* left);
+#define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_sse2
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_32x32_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_top_predictor_32x32)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint8_t* above,
-                                               const uint8_t* left);
+#define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_sse2
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_4x4_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_top_predictor_4x4)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
-                                             const uint8_t* above,
-                                             const uint8_t* left);
+#define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_sse2
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_8x8_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_top_predictor_8x8)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
-                                             const uint8_t* above,
-                                             const uint8_t* left);
+#define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_sse2
 
 void vpx_fdct16x16_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct16x16_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct16x16)(const int16_t* input,
-                                  tran_low_t* output,
-                                  int stride);
+#define vpx_fdct16x16 vpx_fdct16x16_sse2
 
 void vpx_fdct16x16_1_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct16x16_1_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct16x16_1)(const int16_t* input,
-                                    tran_low_t* output,
-                                    int stride);
+#define vpx_fdct16x16_1 vpx_fdct16x16_1_sse2
 
 void vpx_fdct32x32_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct32x32_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct32x32)(const int16_t* input,
-                                  tran_low_t* output,
-                                  int stride);
+#define vpx_fdct32x32 vpx_fdct32x32_sse2
 
 void vpx_fdct32x32_1_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct32x32_1_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct32x32_1)(const int16_t* input,
-                                    tran_low_t* output,
-                                    int stride);
+#define vpx_fdct32x32_1 vpx_fdct32x32_1_sse2
 
 void vpx_fdct32x32_rd_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct32x32_rd_sse2(const int16_t* input,
                            tran_low_t* output,
                            int stride);
-RTCD_EXTERN void (*vpx_fdct32x32_rd)(const int16_t* input,
-                                     tran_low_t* output,
-                                     int stride);
+#define vpx_fdct32x32_rd vpx_fdct32x32_rd_sse2
 
 void vpx_fdct4x4_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct4x4_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct4x4)(const int16_t* input,
-                                tran_low_t* output,
-                                int stride);
+#define vpx_fdct4x4 vpx_fdct4x4_sse2
 
 void vpx_fdct4x4_1_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct4x4_1_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct4x4_1)(const int16_t* input,
-                                  tran_low_t* output,
-                                  int stride);
+#define vpx_fdct4x4_1 vpx_fdct4x4_1_sse2
 
 void vpx_fdct8x8_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct8x8_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct8x8)(const int16_t* input,
-                                tran_low_t* output,
-                                int stride);
+#define vpx_fdct8x8 vpx_fdct8x8_sse2
 
 void vpx_fdct8x8_1_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct8x8_1_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct8x8_1)(const int16_t* input,
-                                  tran_low_t* output,
-                                  int stride);
+#define vpx_fdct8x8_1 vpx_fdct8x8_1_sse2
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
                        int* sum);
 void vpx_get16x16var_sse2(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
                           int* sum);
 void vpx_get16x16var_avx2(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
                           int* sum);
 RTCD_EXTERN void (*vpx_get16x16var)(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse,
                                     int* sum);
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 #define vpx_get4x4sse_cs vpx_get4x4sse_cs_c
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
                      int* sum);
 void vpx_get8x8var_sse2(const uint8_t* src_ptr,
-                        int source_stride,
+                        int src_stride,
                         const uint8_t* ref_ptr,
                         int ref_stride,
                         unsigned int* sse,
                         int* sum);
-RTCD_EXTERN void (*vpx_get8x8var)(const uint8_t* src_ptr,
-                                  int source_stride,
-                                  const uint8_t* ref_ptr,
-                                  int ref_stride,
-                                  unsigned int* sse,
-                                  int* sum);
+#define vpx_get8x8var vpx_get8x8var_sse2
 
 unsigned int vpx_get_mb_ss_c(const int16_t*);
 unsigned int vpx_get_mb_ss_sse2(const int16_t*);
-RTCD_EXTERN unsigned int (*vpx_get_mb_ss)(const int16_t*);
+#define vpx_get_mb_ss vpx_get_mb_ss_sse2
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_16x16_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_h_predictor_16x16)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_h_predictor_16x16 vpx_h_predictor_16x16_sse2
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_32x32_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_h_predictor_32x32)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_h_predictor_32x32 vpx_h_predictor_32x32_sse2
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_4x4_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
-RTCD_EXTERN void (*vpx_h_predictor_4x4)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
-                                        const uint8_t* above,
-                                        const uint8_t* left);
+#define vpx_h_predictor_4x4 vpx_h_predictor_4x4_sse2
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_8x8_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
-RTCD_EXTERN void (*vpx_h_predictor_8x8)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
-                                        const uint8_t* above,
-                                        const uint8_t* left);
+#define vpx_h_predictor_8x8 vpx_h_predictor_8x8_sse2
 
 void vpx_hadamard_16x16_c(const int16_t* src_diff,
                           ptrdiff_t src_stride,
@@ -1121,249 +1004,209 @@
 void vpx_hadamard_8x8_sse2(const int16_t* src_diff,
                            ptrdiff_t src_stride,
                            tran_low_t* coeff);
-RTCD_EXTERN void (*vpx_hadamard_8x8)(const int16_t* src_diff,
-                                     ptrdiff_t src_stride,
-                                     tran_low_t* coeff);
+#define vpx_hadamard_8x8 vpx_hadamard_8x8_sse2
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
 
 void vpx_highbd_10_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
                                  int* sum);
-#define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_c
+void vpx_highbd_10_get16x16var_sse2(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse,
+                                    int* sum);
+#define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_sse2
 
 void vpx_highbd_10_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
                                int* sum);
-#define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_c
+void vpx_highbd_10_get8x8var_sse2(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse,
+                                  int* sum);
+#define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_sse2
 
 unsigned int vpx_highbd_10_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 unsigned int vpx_highbd_10_mse16x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_mse16x16)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   const uint8_t* ref_ptr,
-                                                   int recon_stride,
-                                                   unsigned int* sse);
+#define vpx_highbd_10_mse16x16 vpx_highbd_10_mse16x16_sse2
 
 unsigned int vpx_highbd_10_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse16x8 vpx_highbd_10_mse16x8_c
 
 unsigned int vpx_highbd_10_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse8x16 vpx_highbd_10_mse8x16_c
 
 unsigned int vpx_highbd_10_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_highbd_10_mse8x8_sse2(const uint8_t* src_ptr,
-                                       int source_stride,
+                                       int src_stride,
                                        const uint8_t* ref_ptr,
-                                       int recon_stride,
+                                       int ref_stride,
                                        unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_mse8x8)(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 const uint8_t* ref_ptr,
-                                                 int recon_stride,
-                                                 unsigned int* sse);
+#define vpx_highbd_10_mse8x8 vpx_highbd_10_mse8x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x16 \
+  vpx_highbd_10_sub_pixel_avg_variance16x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x32 \
+  vpx_highbd_10_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x8 \
+  vpx_highbd_10_sub_pixel_avg_variance16x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x16 \
+  vpx_highbd_10_sub_pixel_avg_variance32x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x32 \
+  vpx_highbd_10_sub_pixel_avg_variance32x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x64 \
+  vpx_highbd_10_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1372,9 +1215,9 @@
   vpx_highbd_10_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1384,283 +1227,212 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance64x32 \
+  vpx_highbd_10_sub_pixel_avg_variance64x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance64x64 \
+  vpx_highbd_10_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x16 \
+  vpx_highbd_10_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x4 \
+  vpx_highbd_10_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x8 \
+  vpx_highbd_10_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x16 \
+  vpx_highbd_10_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x32 \
+  vpx_highbd_10_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x8 \
+  vpx_highbd_10_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x16 \
+  vpx_highbd_10_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x32 \
+  vpx_highbd_10_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x64 \
+  vpx_highbd_10_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1668,9 +1440,9 @@
   vpx_highbd_10_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1678,534 +1450,426 @@
   vpx_highbd_10_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance64x32 \
+  vpx_highbd_10_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance64x64 \
+  vpx_highbd_10_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x16 \
+  vpx_highbd_10_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x4 \
+  vpx_highbd_10_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x8 \
+  vpx_highbd_10_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_10_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance16x16)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance16x16 vpx_highbd_10_variance16x16_sse2
 
 unsigned int vpx_highbd_10_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance16x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance16x32 vpx_highbd_10_variance16x32_sse2
 
 unsigned int vpx_highbd_10_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance16x8)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_10_variance16x8 vpx_highbd_10_variance16x8_sse2
 
 unsigned int vpx_highbd_10_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance32x16)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance32x16 vpx_highbd_10_variance32x16_sse2
 
 unsigned int vpx_highbd_10_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance32x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance32x32 vpx_highbd_10_variance32x32_sse2
 
 unsigned int vpx_highbd_10_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance32x64)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance32x64 vpx_highbd_10_variance32x64_sse2
 
 unsigned int vpx_highbd_10_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x4 vpx_highbd_10_variance4x4_c
 
 unsigned int vpx_highbd_10_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x8 vpx_highbd_10_variance4x8_c
 
 unsigned int vpx_highbd_10_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance64x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance64x32 vpx_highbd_10_variance64x32_sse2
 
 unsigned int vpx_highbd_10_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance64x64)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance64x64 vpx_highbd_10_variance64x64_sse2
 
 unsigned int vpx_highbd_10_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_10_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance8x16)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_10_variance8x16 vpx_highbd_10_variance8x16_sse2
 
 unsigned int vpx_highbd_10_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance8x4 vpx_highbd_10_variance8x4_c
 
 unsigned int vpx_highbd_10_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_10_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance8x8)(const uint8_t* src_ptr,
-                                                      int source_stride,
-                                                      const uint8_t* ref_ptr,
-                                                      int ref_stride,
-                                                      unsigned int* sse);
+#define vpx_highbd_10_variance8x8 vpx_highbd_10_variance8x8_sse2
 
 void vpx_highbd_12_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
                                  int* sum);
-#define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_c
+void vpx_highbd_12_get16x16var_sse2(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse,
+                                    int* sum);
+#define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_sse2
 
 void vpx_highbd_12_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
                                int* sum);
-#define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_c
+void vpx_highbd_12_get8x8var_sse2(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse,
+                                  int* sum);
+#define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_sse2
 
 unsigned int vpx_highbd_12_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 unsigned int vpx_highbd_12_mse16x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_mse16x16)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   const uint8_t* ref_ptr,
-                                                   int recon_stride,
-                                                   unsigned int* sse);
+#define vpx_highbd_12_mse16x16 vpx_highbd_12_mse16x16_sse2
 
 unsigned int vpx_highbd_12_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse16x8 vpx_highbd_12_mse16x8_c
 
 unsigned int vpx_highbd_12_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse8x16 vpx_highbd_12_mse8x16_c
 
 unsigned int vpx_highbd_12_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_highbd_12_mse8x8_sse2(const uint8_t* src_ptr,
-                                       int source_stride,
+                                       int src_stride,
                                        const uint8_t* ref_ptr,
-                                       int recon_stride,
+                                       int ref_stride,
                                        unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_mse8x8)(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 const uint8_t* ref_ptr,
-                                                 int recon_stride,
-                                                 unsigned int* sse);
+#define vpx_highbd_12_mse8x8 vpx_highbd_12_mse8x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x16 \
+  vpx_highbd_12_sub_pixel_avg_variance16x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x32 \
+  vpx_highbd_12_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x8 \
+  vpx_highbd_12_sub_pixel_avg_variance16x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x16 \
+  vpx_highbd_12_sub_pixel_avg_variance32x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x32 \
+  vpx_highbd_12_sub_pixel_avg_variance32x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x64 \
+  vpx_highbd_12_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -2214,9 +1878,9 @@
   vpx_highbd_12_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -2226,283 +1890,212 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance64x32 \
+  vpx_highbd_12_sub_pixel_avg_variance64x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance64x64 \
+  vpx_highbd_12_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x16 \
+  vpx_highbd_12_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x4 \
+  vpx_highbd_12_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x8 \
+  vpx_highbd_12_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x16 \
+  vpx_highbd_12_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x32 \
+  vpx_highbd_12_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x8 \
+  vpx_highbd_12_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x16 \
+  vpx_highbd_12_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x32 \
+  vpx_highbd_12_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x64 \
+  vpx_highbd_12_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -2510,9 +2103,9 @@
   vpx_highbd_12_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -2520,529 +2113,421 @@
   vpx_highbd_12_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance64x32 \
+  vpx_highbd_12_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance64x64 \
+  vpx_highbd_12_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x16 \
+  vpx_highbd_12_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x4 \
+  vpx_highbd_12_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x8 \
+  vpx_highbd_12_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_12_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance16x16)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance16x16 vpx_highbd_12_variance16x16_sse2
 
 unsigned int vpx_highbd_12_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance16x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance16x32 vpx_highbd_12_variance16x32_sse2
 
 unsigned int vpx_highbd_12_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance16x8)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_12_variance16x8 vpx_highbd_12_variance16x8_sse2
 
 unsigned int vpx_highbd_12_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance32x16)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance32x16 vpx_highbd_12_variance32x16_sse2
 
 unsigned int vpx_highbd_12_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance32x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance32x32 vpx_highbd_12_variance32x32_sse2
 
 unsigned int vpx_highbd_12_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance32x64)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance32x64 vpx_highbd_12_variance32x64_sse2
 
 unsigned int vpx_highbd_12_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x4 vpx_highbd_12_variance4x4_c
 
 unsigned int vpx_highbd_12_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x8 vpx_highbd_12_variance4x8_c
 
 unsigned int vpx_highbd_12_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance64x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance64x32 vpx_highbd_12_variance64x32_sse2
 
 unsigned int vpx_highbd_12_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance64x64)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance64x64 vpx_highbd_12_variance64x64_sse2
 
 unsigned int vpx_highbd_12_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_12_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance8x16)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_12_variance8x16 vpx_highbd_12_variance8x16_sse2
 
 unsigned int vpx_highbd_12_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance8x4 vpx_highbd_12_variance8x4_c
 
 unsigned int vpx_highbd_12_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_12_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance8x8)(const uint8_t* src_ptr,
-                                                      int source_stride,
-                                                      const uint8_t* ref_ptr,
-                                                      int ref_stride,
-                                                      unsigned int* sse);
+#define vpx_highbd_12_variance8x8 vpx_highbd_12_variance8x8_sse2
 
 void vpx_highbd_8_get16x16var_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse,
                                 int* sum);
-#define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_c
+void vpx_highbd_8_get16x16var_sse2(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   unsigned int* sse,
+                                   int* sum);
+#define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_sse2
 
 void vpx_highbd_8_get8x8var_c(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
                               int ref_stride,
                               unsigned int* sse,
                               int* sum);
-#define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_c
+void vpx_highbd_8_get8x8var_sse2(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse,
+                                 int* sum);
+#define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_sse2
 
 unsigned int vpx_highbd_8_mse16x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 unsigned int vpx_highbd_8_mse16x16_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
-                                        int recon_stride,
+                                        int ref_stride,
                                         unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_mse16x16)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  const uint8_t* ref_ptr,
-                                                  int recon_stride,
-                                                  unsigned int* sse);
+#define vpx_highbd_8_mse16x16 vpx_highbd_8_mse16x16_sse2
 
 unsigned int vpx_highbd_8_mse16x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse16x8 vpx_highbd_8_mse16x8_c
 
 unsigned int vpx_highbd_8_mse8x16_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse8x16 vpx_highbd_8_mse8x16_c
 
 unsigned int vpx_highbd_8_mse8x8_c(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
-                                   int recon_stride,
+                                   int ref_stride,
                                    unsigned int* sse);
 unsigned int vpx_highbd_8_mse8x8_sse2(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_mse8x8)(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                const uint8_t* ref_ptr,
-                                                int recon_stride,
-                                                unsigned int* sse);
+#define vpx_highbd_8_mse8x8 vpx_highbd_8_mse8x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x16 \
+  vpx_highbd_8_sub_pixel_avg_variance16x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x32 \
+  vpx_highbd_8_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x8 \
+  vpx_highbd_8_sub_pixel_avg_variance16x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x16 \
+  vpx_highbd_8_sub_pixel_avg_variance32x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x32 \
+  vpx_highbd_8_sub_pixel_avg_variance32x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x64 \
+  vpx_highbd_8_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -3051,9 +2536,9 @@
   vpx_highbd_8_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -3062,599 +2547,458 @@
   vpx_highbd_8_sub_pixel_avg_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance64x32 \
+  vpx_highbd_8_sub_pixel_avg_variance64x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance64x64 \
+  vpx_highbd_8_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x16 \
+  vpx_highbd_8_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
                                                   const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x4 \
+  vpx_highbd_8_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
                                                   const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x8 \
+  vpx_highbd_8_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x16 \
+  vpx_highbd_8_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x32 \
+  vpx_highbd_8_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x8 \
+  vpx_highbd_8_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x16 \
+  vpx_highbd_8_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x32 \
+  vpx_highbd_8_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x64 \
+  vpx_highbd_8_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x4 vpx_highbd_8_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x8 vpx_highbd_8_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance64x32 \
+  vpx_highbd_8_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance64x64 \
+  vpx_highbd_8_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x16 \
+  vpx_highbd_8_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x4 \
+  vpx_highbd_8_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x8 \
+  vpx_highbd_8_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_8_variance16x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance16x16)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance16x16 vpx_highbd_8_variance16x16_sse2
 
 unsigned int vpx_highbd_8_variance16x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance16x32)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance16x32 vpx_highbd_8_variance16x32_sse2
 
 unsigned int vpx_highbd_8_variance16x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance16x8)(const uint8_t* src_ptr,
-                                                      int source_stride,
-                                                      const uint8_t* ref_ptr,
-                                                      int ref_stride,
-                                                      unsigned int* sse);
+#define vpx_highbd_8_variance16x8 vpx_highbd_8_variance16x8_sse2
 
 unsigned int vpx_highbd_8_variance32x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance32x16)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance32x16 vpx_highbd_8_variance32x16_sse2
 
 unsigned int vpx_highbd_8_variance32x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance32x32)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance32x32 vpx_highbd_8_variance32x32_sse2
 
 unsigned int vpx_highbd_8_variance32x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x64_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance32x64)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance32x64 vpx_highbd_8_variance32x64_sse2
 
 unsigned int vpx_highbd_8_variance4x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x4 vpx_highbd_8_variance4x4_c
 
 unsigned int vpx_highbd_8_variance4x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x8 vpx_highbd_8_variance4x8_c
 
 unsigned int vpx_highbd_8_variance64x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance64x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance64x32)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance64x32 vpx_highbd_8_variance64x32_sse2
 
 unsigned int vpx_highbd_8_variance64x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance64x64_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance64x64)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance64x64 vpx_highbd_8_variance64x64_sse2
 
 unsigned int vpx_highbd_8_variance8x16_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_8_variance8x16_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance8x16)(const uint8_t* src_ptr,
-                                                      int source_stride,
-                                                      const uint8_t* ref_ptr,
-                                                      int ref_stride,
-                                                      unsigned int* sse);
+#define vpx_highbd_8_variance8x16 vpx_highbd_8_variance8x16_sse2
 
 unsigned int vpx_highbd_8_variance8x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance8x4 vpx_highbd_8_variance8x4_c
 
 unsigned int vpx_highbd_8_variance8x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 unsigned int vpx_highbd_8_variance8x8_sse2(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance8x8)(const uint8_t* src_ptr,
-                                                     int source_stride,
-                                                     const uint8_t* ref_ptr,
-                                                     int ref_stride,
-                                                     unsigned int* sse);
+#define vpx_highbd_8_variance8x8 vpx_highbd_8_variance8x8_sse2
 
-unsigned int vpx_highbd_avg_4x4_c(const uint8_t*, int p);
-unsigned int vpx_highbd_avg_4x4_sse2(const uint8_t*, int p);
-RTCD_EXTERN unsigned int (*vpx_highbd_avg_4x4)(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_4x4_c(const uint8_t* s8, int p);
+unsigned int vpx_highbd_avg_4x4_sse2(const uint8_t* s8, int p);
+#define vpx_highbd_avg_4x4 vpx_highbd_avg_4x4_sse2
 
-unsigned int vpx_highbd_avg_8x8_c(const uint8_t*, int p);
-unsigned int vpx_highbd_avg_8x8_sse2(const uint8_t*, int p);
-RTCD_EXTERN unsigned int (*vpx_highbd_avg_8x8)(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_8x8_c(const uint8_t* s8, int p);
+unsigned int vpx_highbd_avg_8x8_sse2(const uint8_t* s8, int p);
+#define vpx_highbd_avg_8x8 vpx_highbd_avg_8x8_sse2
 
 void vpx_highbd_comp_avg_pred_c(uint16_t* comp_pred,
                                 const uint16_t* pred,
@@ -3675,7 +3019,7 @@
                             int y_step_q4,
                             int w,
                             int h,
-                            int bps);
+                            int bd);
 void vpx_highbd_convolve8_avx2(const uint16_t* src,
                                ptrdiff_t src_stride,
                                uint16_t* dst,
@@ -3687,7 +3031,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8)(const uint16_t* src,
                                          ptrdiff_t src_stride,
                                          uint16_t* dst,
@@ -3699,7 +3043,7 @@
                                          int y_step_q4,
                                          int w,
                                          int h,
-                                         int bps);
+                                         int bd);
 
 void vpx_highbd_convolve8_avg_c(const uint16_t* src,
                                 ptrdiff_t src_stride,
@@ -3712,7 +3056,7 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 void vpx_highbd_convolve8_avg_avx2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3724,7 +3068,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg)(const uint16_t* src,
                                              ptrdiff_t src_stride,
                                              uint16_t* dst,
@@ -3736,7 +3080,7 @@
                                              int y_step_q4,
                                              int w,
                                              int h,
-                                             int bps);
+                                             int bd);
 
 void vpx_highbd_convolve8_avg_horiz_c(const uint16_t* src,
                                       ptrdiff_t src_stride,
@@ -3749,7 +3093,7 @@
                                       int y_step_q4,
                                       int w,
                                       int h,
-                                      int bps);
+                                      int bd);
 void vpx_highbd_convolve8_avg_horiz_avx2(const uint16_t* src,
                                          ptrdiff_t src_stride,
                                          uint16_t* dst,
@@ -3761,7 +3105,7 @@
                                          int y_step_q4,
                                          int w,
                                          int h,
-                                         int bps);
+                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg_horiz)(const uint16_t* src,
                                                    ptrdiff_t src_stride,
                                                    uint16_t* dst,
@@ -3773,7 +3117,7 @@
                                                    int y_step_q4,
                                                    int w,
                                                    int h,
-                                                   int bps);
+                                                   int bd);
 
 void vpx_highbd_convolve8_avg_vert_c(const uint16_t* src,
                                      ptrdiff_t src_stride,
@@ -3786,7 +3130,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 void vpx_highbd_convolve8_avg_vert_avx2(const uint16_t* src,
                                         ptrdiff_t src_stride,
                                         uint16_t* dst,
@@ -3798,7 +3142,7 @@
                                         int y_step_q4,
                                         int w,
                                         int h,
-                                        int bps);
+                                        int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg_vert)(const uint16_t* src,
                                                   ptrdiff_t src_stride,
                                                   uint16_t* dst,
@@ -3810,7 +3154,7 @@
                                                   int y_step_q4,
                                                   int w,
                                                   int h,
-                                                  int bps);
+                                                  int bd);
 
 void vpx_highbd_convolve8_horiz_c(const uint16_t* src,
                                   ptrdiff_t src_stride,
@@ -3823,7 +3167,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 void vpx_highbd_convolve8_horiz_avx2(const uint16_t* src,
                                      ptrdiff_t src_stride,
                                      uint16_t* dst,
@@ -3835,7 +3179,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_horiz)(const uint16_t* src,
                                                ptrdiff_t src_stride,
                                                uint16_t* dst,
@@ -3847,7 +3191,7 @@
                                                int y_step_q4,
                                                int w,
                                                int h,
-                                               int bps);
+                                               int bd);
 
 void vpx_highbd_convolve8_vert_c(const uint16_t* src,
                                  ptrdiff_t src_stride,
@@ -3860,7 +3204,7 @@
                                  int y_step_q4,
                                  int w,
                                  int h,
-                                 int bps);
+                                 int bd);
 void vpx_highbd_convolve8_vert_avx2(const uint16_t* src,
                                     ptrdiff_t src_stride,
                                     uint16_t* dst,
@@ -3872,7 +3216,7 @@
                                     int y_step_q4,
                                     int w,
                                     int h,
-                                    int bps);
+                                    int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_vert)(const uint16_t* src,
                                               ptrdiff_t src_stride,
                                               uint16_t* dst,
@@ -3884,7 +3228,7 @@
                                               int y_step_q4,
                                               int w,
                                               int h,
-                                              int bps);
+                                              int bd);
 
 void vpx_highbd_convolve_avg_c(const uint16_t* src,
                                ptrdiff_t src_stride,
@@ -3897,7 +3241,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 void vpx_highbd_convolve_avg_sse2(const uint16_t* src,
                                   ptrdiff_t src_stride,
                                   uint16_t* dst,
@@ -3909,7 +3253,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 void vpx_highbd_convolve_avg_avx2(const uint16_t* src,
                                   ptrdiff_t src_stride,
                                   uint16_t* dst,
@@ -3921,7 +3265,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve_avg)(const uint16_t* src,
                                             ptrdiff_t src_stride,
                                             uint16_t* dst,
@@ -3933,7 +3277,7 @@
                                             int y_step_q4,
                                             int w,
                                             int h,
-                                            int bps);
+                                            int bd);
 
 void vpx_highbd_convolve_copy_c(const uint16_t* src,
                                 ptrdiff_t src_stride,
@@ -3946,7 +3290,7 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 void vpx_highbd_convolve_copy_sse2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3958,7 +3302,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 void vpx_highbd_convolve_copy_avx2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3970,7 +3314,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve_copy)(const uint16_t* src,
                                              ptrdiff_t src_stride,
                                              uint16_t* dst,
@@ -3982,647 +3326,565 @@
                                              int y_step_q4,
                                              int w,
                                              int h,
-                                             int bps);
+                                             int bd);
 
 void vpx_highbd_d117_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d117_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d117_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d117_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d117_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d117_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_d117_predictor_4x4)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_d117_predictor_4x4 vpx_highbd_d117_predictor_4x4_sse2
 
 void vpx_highbd_d117_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d117_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d135_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d135_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d135_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d135_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d135_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d135_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_d135_predictor_4x4)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_d135_predictor_4x4 vpx_highbd_d135_predictor_4x4_sse2
 
 void vpx_highbd_d135_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d135_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d153_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d153_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d153_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d153_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d153_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d153_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_d153_predictor_4x4)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_d153_predictor_4x4 vpx_highbd_d153_predictor_4x4_sse2
 
 void vpx_highbd_d153_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d153_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d207_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d207_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d207_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d207_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d207_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d207_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_d207_predictor_4x4)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_d207_predictor_4x4 vpx_highbd_d207_predictor_4x4_sse2
 
 void vpx_highbd_d207_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d207_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d45_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d45_predictor_16x16_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_16x16)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d45_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d45_predictor_32x32_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_32x32)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d45_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d45_predictor_4x4_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_4x4)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_d45_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d45_predictor_8x8_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_8x8)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_d63_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d63_predictor_16x16_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_16x16)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d63_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d63_predictor_32x32_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_32x32)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d63_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d63_predictor_4x4_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_d63_predictor_4x4)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
-                                                 const uint16_t* above,
-                                                 const uint16_t* left,
-                                                 int bd);
+#define vpx_highbd_d63_predictor_4x4 vpx_highbd_d63_predictor_4x4_sse2
 
 void vpx_highbd_d63_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d63_predictor_8x8_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_8x8)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_dc_128_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_128_predictor_16x16_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_128_predictor_16x16)(uint16_t* dst,
-                                                      ptrdiff_t y_stride,
-                                                      const uint16_t* above,
-                                                      const uint16_t* left,
-                                                      int bd);
+#define vpx_highbd_dc_128_predictor_16x16 vpx_highbd_dc_128_predictor_16x16_sse2
 
 void vpx_highbd_dc_128_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_128_predictor_32x32_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_128_predictor_32x32)(uint16_t* dst,
-                                                      ptrdiff_t y_stride,
-                                                      const uint16_t* above,
-                                                      const uint16_t* left,
-                                                      int bd);
+#define vpx_highbd_dc_128_predictor_32x32 vpx_highbd_dc_128_predictor_32x32_sse2
 
 void vpx_highbd_dc_128_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_128_predictor_4x4_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_128_predictor_4x4)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
-                                                    const uint16_t* above,
-                                                    const uint16_t* left,
-                                                    int bd);
+#define vpx_highbd_dc_128_predictor_4x4 vpx_highbd_dc_128_predictor_4x4_sse2
 
 void vpx_highbd_dc_128_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_128_predictor_8x8_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_128_predictor_8x8)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
-                                                    const uint16_t* above,
-                                                    const uint16_t* left,
-                                                    int bd);
+#define vpx_highbd_dc_128_predictor_8x8 vpx_highbd_dc_128_predictor_8x8_sse2
 
 void vpx_highbd_dc_left_predictor_16x16_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 void vpx_highbd_dc_left_predictor_16x16_sse2(uint16_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint16_t* above,
                                              const uint16_t* left,
                                              int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_left_predictor_16x16)(uint16_t* dst,
-                                                       ptrdiff_t y_stride,
-                                                       const uint16_t* above,
-                                                       const uint16_t* left,
-                                                       int bd);
+#define vpx_highbd_dc_left_predictor_16x16 \
+  vpx_highbd_dc_left_predictor_16x16_sse2
 
 void vpx_highbd_dc_left_predictor_32x32_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 void vpx_highbd_dc_left_predictor_32x32_sse2(uint16_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint16_t* above,
                                              const uint16_t* left,
                                              int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_left_predictor_32x32)(uint16_t* dst,
-                                                       ptrdiff_t y_stride,
-                                                       const uint16_t* above,
-                                                       const uint16_t* left,
-                                                       int bd);
+#define vpx_highbd_dc_left_predictor_32x32 \
+  vpx_highbd_dc_left_predictor_32x32_sse2
 
 void vpx_highbd_dc_left_predictor_4x4_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 void vpx_highbd_dc_left_predictor_4x4_sse2(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_left_predictor_4x4)(uint16_t* dst,
-                                                     ptrdiff_t y_stride,
-                                                     const uint16_t* above,
-                                                     const uint16_t* left,
-                                                     int bd);
+#define vpx_highbd_dc_left_predictor_4x4 vpx_highbd_dc_left_predictor_4x4_sse2
 
 void vpx_highbd_dc_left_predictor_8x8_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 void vpx_highbd_dc_left_predictor_8x8_sse2(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_left_predictor_8x8)(uint16_t* dst,
-                                                     ptrdiff_t y_stride,
-                                                     const uint16_t* above,
-                                                     const uint16_t* left,
-                                                     int bd);
+#define vpx_highbd_dc_left_predictor_8x8 vpx_highbd_dc_left_predictor_8x8_sse2
 
 void vpx_highbd_dc_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_dc_predictor_16x16_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_predictor_16x16)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_dc_predictor_16x16 vpx_highbd_dc_predictor_16x16_sse2
 
 void vpx_highbd_dc_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_dc_predictor_32x32_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_predictor_32x32)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_dc_predictor_32x32 vpx_highbd_dc_predictor_32x32_sse2
 
 void vpx_highbd_dc_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_dc_predictor_4x4_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_predictor_4x4)(uint16_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint16_t* above,
-                                                const uint16_t* left,
-                                                int bd);
+#define vpx_highbd_dc_predictor_4x4 vpx_highbd_dc_predictor_4x4_sse2
 
 void vpx_highbd_dc_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_dc_predictor_8x8_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_predictor_8x8)(uint16_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint16_t* above,
-                                                const uint16_t* left,
-                                                int bd);
+#define vpx_highbd_dc_predictor_8x8 vpx_highbd_dc_predictor_8x8_sse2
 
 void vpx_highbd_dc_top_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_top_predictor_16x16_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_top_predictor_16x16)(uint16_t* dst,
-                                                      ptrdiff_t y_stride,
-                                                      const uint16_t* above,
-                                                      const uint16_t* left,
-                                                      int bd);
+#define vpx_highbd_dc_top_predictor_16x16 vpx_highbd_dc_top_predictor_16x16_sse2
 
 void vpx_highbd_dc_top_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_top_predictor_32x32_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_top_predictor_32x32)(uint16_t* dst,
-                                                      ptrdiff_t y_stride,
-                                                      const uint16_t* above,
-                                                      const uint16_t* left,
-                                                      int bd);
+#define vpx_highbd_dc_top_predictor_32x32 vpx_highbd_dc_top_predictor_32x32_sse2
 
 void vpx_highbd_dc_top_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_top_predictor_4x4_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_top_predictor_4x4)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
-                                                    const uint16_t* above,
-                                                    const uint16_t* left,
-                                                    int bd);
+#define vpx_highbd_dc_top_predictor_4x4 vpx_highbd_dc_top_predictor_4x4_sse2
 
 void vpx_highbd_dc_top_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_top_predictor_8x8_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_top_predictor_8x8)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
-                                                    const uint16_t* above,
-                                                    const uint16_t* left,
-                                                    int bd);
+#define vpx_highbd_dc_top_predictor_8x8 vpx_highbd_dc_top_predictor_8x8_sse2
 
 void vpx_highbd_fdct16x16_c(const int16_t* input,
                             tran_low_t* output,
@@ -4630,9 +3892,7 @@
 void vpx_highbd_fdct16x16_sse2(const int16_t* input,
                                tran_low_t* output,
                                int stride);
-RTCD_EXTERN void (*vpx_highbd_fdct16x16)(const int16_t* input,
-                                         tran_low_t* output,
-                                         int stride);
+#define vpx_highbd_fdct16x16 vpx_highbd_fdct16x16_sse2
 
 void vpx_highbd_fdct16x16_1_c(const int16_t* input,
                               tran_low_t* output,
@@ -4645,9 +3905,7 @@
 void vpx_highbd_fdct32x32_sse2(const int16_t* input,
                                tran_low_t* output,
                                int stride);
-RTCD_EXTERN void (*vpx_highbd_fdct32x32)(const int16_t* input,
-                                         tran_low_t* output,
-                                         int stride);
+#define vpx_highbd_fdct32x32 vpx_highbd_fdct32x32_sse2
 
 void vpx_highbd_fdct32x32_1_c(const int16_t* input,
                               tran_low_t* output,
@@ -4660,25 +3918,19 @@
 void vpx_highbd_fdct32x32_rd_sse2(const int16_t* input,
                                   tran_low_t* output,
                                   int stride);
-RTCD_EXTERN void (*vpx_highbd_fdct32x32_rd)(const int16_t* input,
-                                            tran_low_t* output,
-                                            int stride);
+#define vpx_highbd_fdct32x32_rd vpx_highbd_fdct32x32_rd_sse2
 
 void vpx_highbd_fdct4x4_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_highbd_fdct4x4_sse2(const int16_t* input,
                              tran_low_t* output,
                              int stride);
-RTCD_EXTERN void (*vpx_highbd_fdct4x4)(const int16_t* input,
-                                       tran_low_t* output,
-                                       int stride);
+#define vpx_highbd_fdct4x4 vpx_highbd_fdct4x4_sse2
 
 void vpx_highbd_fdct8x8_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_highbd_fdct8x8_sse2(const int16_t* input,
                              tran_low_t* output,
                              int stride);
-RTCD_EXTERN void (*vpx_highbd_fdct8x8)(const int16_t* input,
-                                       tran_low_t* output,
-                                       int stride);
+#define vpx_highbd_fdct8x8 vpx_highbd_fdct8x8_sse2
 
 void vpx_highbd_fdct8x8_1_c(const int16_t* input,
                             tran_low_t* output,
@@ -4686,68 +3938,82 @@
 #define vpx_highbd_fdct8x8_1 vpx_highbd_fdct8x8_1_c
 
 void vpx_highbd_h_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_h_predictor_16x16_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_h_predictor_16x16)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
-                                                 const uint16_t* above,
-                                                 const uint16_t* left,
-                                                 int bd);
+#define vpx_highbd_h_predictor_16x16 vpx_highbd_h_predictor_16x16_sse2
 
 void vpx_highbd_h_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_h_predictor_32x32_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_h_predictor_32x32)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
-                                                 const uint16_t* above,
-                                                 const uint16_t* left,
-                                                 int bd);
+#define vpx_highbd_h_predictor_32x32 vpx_highbd_h_predictor_32x32_sse2
 
 void vpx_highbd_h_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_h_predictor_4x4_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_h_predictor_4x4)(uint16_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint16_t* above,
-                                               const uint16_t* left,
-                                               int bd);
+#define vpx_highbd_h_predictor_4x4 vpx_highbd_h_predictor_4x4_sse2
 
 void vpx_highbd_h_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_h_predictor_8x8_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_h_predictor_8x8)(uint16_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint16_t* above,
-                                               const uint16_t* left,
-                                               int bd);
+#define vpx_highbd_h_predictor_8x8 vpx_highbd_h_predictor_8x8_sse2
+
+void vpx_highbd_hadamard_16x16_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+void vpx_highbd_hadamard_16x16_avx2(const int16_t* src_diff,
+                                    ptrdiff_t src_stride,
+                                    tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_16x16)(const int16_t* src_diff,
+                                              ptrdiff_t src_stride,
+                                              tran_low_t* coeff);
+
+void vpx_highbd_hadamard_32x32_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+void vpx_highbd_hadamard_32x32_avx2(const int16_t* src_diff,
+                                    ptrdiff_t src_stride,
+                                    tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_32x32)(const int16_t* src_diff,
+                                              ptrdiff_t src_stride,
+                                              tran_low_t* coeff);
+
+void vpx_highbd_hadamard_8x8_c(const int16_t* src_diff,
+                               ptrdiff_t src_stride,
+                               tran_low_t* coeff);
+void vpx_highbd_hadamard_8x8_avx2(const int16_t* src_diff,
+                                  ptrdiff_t src_stride,
+                                  tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_8x8)(const int16_t* src_diff,
+                                            ptrdiff_t src_stride,
+                                            tran_low_t* coeff);
 
 void vpx_highbd_idct16x16_10_add_c(const tran_low_t* input,
                                    uint16_t* dest,
@@ -4774,10 +4040,7 @@
                                      uint16_t* dest,
                                      int stride,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_idct16x16_1_add)(const tran_low_t* input,
-                                               uint16_t* dest,
-                                               int stride,
-                                               int bd);
+#define vpx_highbd_idct16x16_1_add vpx_highbd_idct16x16_1_add_sse2
 
 void vpx_highbd_idct16x16_256_add_c(const tran_low_t* input,
                                     uint16_t* dest,
@@ -4855,10 +4118,7 @@
                                      uint16_t* dest,
                                      int stride,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_idct32x32_1_add)(const tran_low_t* input,
-                                               uint16_t* dest,
-                                               int stride,
-                                               int bd);
+#define vpx_highbd_idct32x32_1_add vpx_highbd_idct32x32_1_add_sse2
 
 void vpx_highbd_idct32x32_34_add_c(const tran_low_t* input,
                                    uint16_t* dest,
@@ -4902,10 +4162,7 @@
                                    uint16_t* dest,
                                    int stride,
                                    int bd);
-RTCD_EXTERN void (*vpx_highbd_idct4x4_1_add)(const tran_low_t* input,
-                                             uint16_t* dest,
-                                             int stride,
-                                             int bd);
+#define vpx_highbd_idct4x4_1_add vpx_highbd_idct4x4_1_add_sse2
 
 void vpx_highbd_idct8x8_12_add_c(const tran_low_t* input,
                                  uint16_t* dest,
@@ -4932,10 +4189,7 @@
                                    uint16_t* dest,
                                    int stride,
                                    int bd);
-RTCD_EXTERN void (*vpx_highbd_idct8x8_1_add)(const tran_low_t* input,
-                                             uint16_t* dest,
-                                             int stride,
-                                             int bd);
+#define vpx_highbd_idct8x8_1_add vpx_highbd_idct8x8_1_add_sse2
 
 void vpx_highbd_idct8x8_64_add_c(const tran_low_t* input,
                                  uint16_t* dest,
@@ -4978,12 +4232,7 @@
                                        const uint8_t* limit,
                                        const uint8_t* thresh,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_16)(uint16_t* s,
-                                                 int pitch,
-                                                 const uint8_t* blimit,
-                                                 const uint8_t* limit,
-                                                 const uint8_t* thresh,
-                                                 int bd);
+#define vpx_highbd_lpf_horizontal_16 vpx_highbd_lpf_horizontal_16_sse2
 
 void vpx_highbd_lpf_horizontal_16_dual_c(uint16_t* s,
                                          int pitch,
@@ -4997,12 +4246,7 @@
                                             const uint8_t* limit,
                                             const uint8_t* thresh,
                                             int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_16_dual)(uint16_t* s,
-                                                      int pitch,
-                                                      const uint8_t* blimit,
-                                                      const uint8_t* limit,
-                                                      const uint8_t* thresh,
-                                                      int bd);
+#define vpx_highbd_lpf_horizontal_16_dual vpx_highbd_lpf_horizontal_16_dual_sse2
 
 void vpx_highbd_lpf_horizontal_4_c(uint16_t* s,
                                    int pitch,
@@ -5016,12 +4260,7 @@
                                       const uint8_t* limit,
                                       const uint8_t* thresh,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_4)(uint16_t* s,
-                                                int pitch,
-                                                const uint8_t* blimit,
-                                                const uint8_t* limit,
-                                                const uint8_t* thresh,
-                                                int bd);
+#define vpx_highbd_lpf_horizontal_4 vpx_highbd_lpf_horizontal_4_sse2
 
 void vpx_highbd_lpf_horizontal_4_dual_c(uint16_t* s,
                                         int pitch,
@@ -5041,15 +4280,7 @@
                                            const uint8_t* limit1,
                                            const uint8_t* thresh1,
                                            int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_4_dual)(uint16_t* s,
-                                                     int pitch,
-                                                     const uint8_t* blimit0,
-                                                     const uint8_t* limit0,
-                                                     const uint8_t* thresh0,
-                                                     const uint8_t* blimit1,
-                                                     const uint8_t* limit1,
-                                                     const uint8_t* thresh1,
-                                                     int bd);
+#define vpx_highbd_lpf_horizontal_4_dual vpx_highbd_lpf_horizontal_4_dual_sse2
 
 void vpx_highbd_lpf_horizontal_8_c(uint16_t* s,
                                    int pitch,
@@ -5063,12 +4294,7 @@
                                       const uint8_t* limit,
                                       const uint8_t* thresh,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_8)(uint16_t* s,
-                                                int pitch,
-                                                const uint8_t* blimit,
-                                                const uint8_t* limit,
-                                                const uint8_t* thresh,
-                                                int bd);
+#define vpx_highbd_lpf_horizontal_8 vpx_highbd_lpf_horizontal_8_sse2
 
 void vpx_highbd_lpf_horizontal_8_dual_c(uint16_t* s,
                                         int pitch,
@@ -5088,15 +4314,7 @@
                                            const uint8_t* limit1,
                                            const uint8_t* thresh1,
                                            int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_8_dual)(uint16_t* s,
-                                                     int pitch,
-                                                     const uint8_t* blimit0,
-                                                     const uint8_t* limit0,
-                                                     const uint8_t* thresh0,
-                                                     const uint8_t* blimit1,
-                                                     const uint8_t* limit1,
-                                                     const uint8_t* thresh1,
-                                                     int bd);
+#define vpx_highbd_lpf_horizontal_8_dual vpx_highbd_lpf_horizontal_8_dual_sse2
 
 void vpx_highbd_lpf_vertical_16_c(uint16_t* s,
                                   int pitch,
@@ -5110,12 +4328,7 @@
                                      const uint8_t* limit,
                                      const uint8_t* thresh,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_16)(uint16_t* s,
-                                               int pitch,
-                                               const uint8_t* blimit,
-                                               const uint8_t* limit,
-                                               const uint8_t* thresh,
-                                               int bd);
+#define vpx_highbd_lpf_vertical_16 vpx_highbd_lpf_vertical_16_sse2
 
 void vpx_highbd_lpf_vertical_16_dual_c(uint16_t* s,
                                        int pitch,
@@ -5129,12 +4342,7 @@
                                           const uint8_t* limit,
                                           const uint8_t* thresh,
                                           int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_16_dual)(uint16_t* s,
-                                                    int pitch,
-                                                    const uint8_t* blimit,
-                                                    const uint8_t* limit,
-                                                    const uint8_t* thresh,
-                                                    int bd);
+#define vpx_highbd_lpf_vertical_16_dual vpx_highbd_lpf_vertical_16_dual_sse2
 
 void vpx_highbd_lpf_vertical_4_c(uint16_t* s,
                                  int pitch,
@@ -5148,12 +4356,7 @@
                                     const uint8_t* limit,
                                     const uint8_t* thresh,
                                     int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_4)(uint16_t* s,
-                                              int pitch,
-                                              const uint8_t* blimit,
-                                              const uint8_t* limit,
-                                              const uint8_t* thresh,
-                                              int bd);
+#define vpx_highbd_lpf_vertical_4 vpx_highbd_lpf_vertical_4_sse2
 
 void vpx_highbd_lpf_vertical_4_dual_c(uint16_t* s,
                                       int pitch,
@@ -5173,15 +4376,7 @@
                                          const uint8_t* limit1,
                                          const uint8_t* thresh1,
                                          int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_4_dual)(uint16_t* s,
-                                                   int pitch,
-                                                   const uint8_t* blimit0,
-                                                   const uint8_t* limit0,
-                                                   const uint8_t* thresh0,
-                                                   const uint8_t* blimit1,
-                                                   const uint8_t* limit1,
-                                                   const uint8_t* thresh1,
-                                                   int bd);
+#define vpx_highbd_lpf_vertical_4_dual vpx_highbd_lpf_vertical_4_dual_sse2
 
 void vpx_highbd_lpf_vertical_8_c(uint16_t* s,
                                  int pitch,
@@ -5195,12 +4390,7 @@
                                     const uint8_t* limit,
                                     const uint8_t* thresh,
                                     int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_8)(uint16_t* s,
-                                              int pitch,
-                                              const uint8_t* blimit,
-                                              const uint8_t* limit,
-                                              const uint8_t* thresh,
-                                              int bd);
+#define vpx_highbd_lpf_vertical_8 vpx_highbd_lpf_vertical_8_sse2
 
 void vpx_highbd_lpf_vertical_8_dual_c(uint16_t* s,
                                       int pitch,
@@ -5220,19 +4410,11 @@
                                          const uint8_t* limit1,
                                          const uint8_t* thresh1,
                                          int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_8_dual)(uint16_t* s,
-                                                   int pitch,
-                                                   const uint8_t* blimit0,
-                                                   const uint8_t* limit0,
-                                                   const uint8_t* thresh0,
-                                                   const uint8_t* blimit1,
-                                                   const uint8_t* limit1,
-                                                   const uint8_t* thresh1,
-                                                   int bd);
+#define vpx_highbd_lpf_vertical_8_dual vpx_highbd_lpf_vertical_8_dual_sse2
 
-void vpx_highbd_minmax_8x8_c(const uint8_t* s,
+void vpx_highbd_minmax_8x8_c(const uint8_t* s8,
                              int p,
-                             const uint8_t* d,
+                             const uint8_t* d8,
                              int dp,
                              int* min,
                              int* max);
@@ -5264,19 +4446,7 @@
                                 uint16_t* eob_ptr,
                                 const int16_t* scan,
                                 const int16_t* iscan);
-RTCD_EXTERN void (*vpx_highbd_quantize_b)(const tran_low_t* coeff_ptr,
-                                          intptr_t n_coeffs,
-                                          int skip_block,
-                                          const int16_t* zbin_ptr,
-                                          const int16_t* round_ptr,
-                                          const int16_t* quant_ptr,
-                                          const int16_t* quant_shift_ptr,
-                                          tran_low_t* qcoeff_ptr,
-                                          tran_low_t* dqcoeff_ptr,
-                                          const int16_t* dequant_ptr,
-                                          uint16_t* eob_ptr,
-                                          const int16_t* scan,
-                                          const int16_t* iscan);
+#define vpx_highbd_quantize_b vpx_highbd_quantize_b_sse2
 
 void vpx_highbd_quantize_b_32x32_c(const tran_low_t* coeff_ptr,
                                    intptr_t n_coeffs,
@@ -5304,19 +4474,7 @@
                                       uint16_t* eob_ptr,
                                       const int16_t* scan,
                                       const int16_t* iscan);
-RTCD_EXTERN void (*vpx_highbd_quantize_b_32x32)(const tran_low_t* coeff_ptr,
-                                                intptr_t n_coeffs,
-                                                int skip_block,
-                                                const int16_t* zbin_ptr,
-                                                const int16_t* round_ptr,
-                                                const int16_t* quant_ptr,
-                                                const int16_t* quant_shift_ptr,
-                                                tran_low_t* qcoeff_ptr,
-                                                tran_low_t* dqcoeff_ptr,
-                                                const int16_t* dequant_ptr,
-                                                uint16_t* eob_ptr,
-                                                const int16_t* scan,
-                                                const int16_t* iscan);
+#define vpx_highbd_quantize_b_32x32 vpx_highbd_quantize_b_32x32_sse2
 
 unsigned int vpx_highbd_sad16x16_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5326,10 +4484,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x16)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad16x16 vpx_highbd_sad16x16_sse2
 
 unsigned int vpx_highbd_sad16x16_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5341,27 +4496,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x16_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad16x16_avg vpx_highbd_sad16x16_avg_sse2
 
 void vpx_highbd_sad16x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad16x16x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad16x16x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad16x16x4d vpx_highbd_sad16x16x4d_sse2
 
 unsigned int vpx_highbd_sad16x32_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5371,10 +4518,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x32)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad16x32 vpx_highbd_sad16x32_sse2
 
 unsigned int vpx_highbd_sad16x32_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5386,27 +4530,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x32_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad16x32_avg vpx_highbd_sad16x32_avg_sse2
 
 void vpx_highbd_sad16x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad16x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad16x32x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad16x32x4d vpx_highbd_sad16x32x4d_sse2
 
 unsigned int vpx_highbd_sad16x8_c(const uint8_t* src_ptr,
                                   int src_stride,
@@ -5416,10 +4552,7 @@
                                      int src_stride,
                                      const uint8_t* ref_ptr,
                                      int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x8)(const uint8_t* src_ptr,
-                                               int src_stride,
-                                               const uint8_t* ref_ptr,
-                                               int ref_stride);
+#define vpx_highbd_sad16x8 vpx_highbd_sad16x8_sse2
 
 unsigned int vpx_highbd_sad16x8_avg_c(const uint8_t* src_ptr,
                                       int src_stride,
@@ -5431,27 +4564,19 @@
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x8_avg)(const uint8_t* src_ptr,
-                                                   int src_stride,
-                                                   const uint8_t* ref_ptr,
-                                                   int ref_stride,
-                                                   const uint8_t* second_pred);
+#define vpx_highbd_sad16x8_avg vpx_highbd_sad16x8_avg_sse2
 
 void vpx_highbd_sad16x8x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 void vpx_highbd_sad16x8x4d_sse2(const uint8_t* src_ptr,
                                 int src_stride,
-                                const uint8_t* const ref_ptr[],
+                                const uint8_t* const ref_array[],
                                 int ref_stride,
                                 uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad16x8x4d)(const uint8_t* src_ptr,
-                                          int src_stride,
-                                          const uint8_t* const ref_ptr[],
-                                          int ref_stride,
-                                          uint32_t* sad_array);
+#define vpx_highbd_sad16x8x4d vpx_highbd_sad16x8x4d_sse2
 
 unsigned int vpx_highbd_sad32x16_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5461,10 +4586,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x16)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad32x16 vpx_highbd_sad32x16_sse2
 
 unsigned int vpx_highbd_sad32x16_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5476,27 +4598,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x16_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad32x16_avg vpx_highbd_sad32x16_avg_sse2
 
 void vpx_highbd_sad32x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x16x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad32x16x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad32x16x4d vpx_highbd_sad32x16x4d_sse2
 
 unsigned int vpx_highbd_sad32x32_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5506,10 +4620,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x32)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad32x32 vpx_highbd_sad32x32_sse2
 
 unsigned int vpx_highbd_sad32x32_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5521,27 +4632,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x32_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad32x32_avg vpx_highbd_sad32x32_avg_sse2
 
 void vpx_highbd_sad32x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad32x32x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad32x32x4d vpx_highbd_sad32x32x4d_sse2
 
 unsigned int vpx_highbd_sad32x64_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5551,10 +4654,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x64)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad32x64 vpx_highbd_sad32x64_sse2
 
 unsigned int vpx_highbd_sad32x64_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5566,27 +4666,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x64_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad32x64_avg vpx_highbd_sad32x64_avg_sse2
 
 void vpx_highbd_sad32x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x64x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad32x64x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad32x64x4d vpx_highbd_sad32x64x4d_sse2
 
 unsigned int vpx_highbd_sad4x4_c(const uint8_t* src_ptr,
                                  int src_stride,
@@ -5603,19 +4695,15 @@
 
 void vpx_highbd_sad4x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad4x4x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad4x4x4d)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* const ref_ptr[],
-                                         int ref_stride,
-                                         uint32_t* sad_array);
+#define vpx_highbd_sad4x4x4d vpx_highbd_sad4x4x4d_sse2
 
 unsigned int vpx_highbd_sad4x8_c(const uint8_t* src_ptr,
                                  int src_stride,
@@ -5632,19 +4720,15 @@
 
 void vpx_highbd_sad4x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad4x8x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad4x8x4d)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* const ref_ptr[],
-                                         int ref_stride,
-                                         uint32_t* sad_array);
+#define vpx_highbd_sad4x8x4d vpx_highbd_sad4x8x4d_sse2
 
 unsigned int vpx_highbd_sad64x32_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5654,10 +4738,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad64x32)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad64x32 vpx_highbd_sad64x32_sse2
 
 unsigned int vpx_highbd_sad64x32_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5669,27 +4750,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad64x32_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad64x32_avg vpx_highbd_sad64x32_avg_sse2
 
 void vpx_highbd_sad64x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad64x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad64x32x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad64x32x4d vpx_highbd_sad64x32x4d_sse2
 
 unsigned int vpx_highbd_sad64x64_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5699,10 +4772,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad64x64)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad64x64 vpx_highbd_sad64x64_sse2
 
 unsigned int vpx_highbd_sad64x64_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5714,27 +4784,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad64x64_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad64x64_avg vpx_highbd_sad64x64_avg_sse2
 
 void vpx_highbd_sad64x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad64x64x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad64x64x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad64x64x4d vpx_highbd_sad64x64x4d_sse2
 
 unsigned int vpx_highbd_sad8x16_c(const uint8_t* src_ptr,
                                   int src_stride,
@@ -5744,10 +4806,7 @@
                                      int src_stride,
                                      const uint8_t* ref_ptr,
                                      int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x16)(const uint8_t* src_ptr,
-                                               int src_stride,
-                                               const uint8_t* ref_ptr,
-                                               int ref_stride);
+#define vpx_highbd_sad8x16 vpx_highbd_sad8x16_sse2
 
 unsigned int vpx_highbd_sad8x16_avg_c(const uint8_t* src_ptr,
                                       int src_stride,
@@ -5759,27 +4818,19 @@
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x16_avg)(const uint8_t* src_ptr,
-                                                   int src_stride,
-                                                   const uint8_t* ref_ptr,
-                                                   int ref_stride,
-                                                   const uint8_t* second_pred);
+#define vpx_highbd_sad8x16_avg vpx_highbd_sad8x16_avg_sse2
 
 void vpx_highbd_sad8x16x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 void vpx_highbd_sad8x16x4d_sse2(const uint8_t* src_ptr,
                                 int src_stride,
-                                const uint8_t* const ref_ptr[],
+                                const uint8_t* const ref_array[],
                                 int ref_stride,
                                 uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad8x16x4d)(const uint8_t* src_ptr,
-                                          int src_stride,
-                                          const uint8_t* const ref_ptr[],
-                                          int ref_stride,
-                                          uint32_t* sad_array);
+#define vpx_highbd_sad8x16x4d vpx_highbd_sad8x16x4d_sse2
 
 unsigned int vpx_highbd_sad8x4_c(const uint8_t* src_ptr,
                                  int src_stride,
@@ -5789,10 +4840,7 @@
                                     int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x4)(const uint8_t* src_ptr,
-                                              int src_stride,
-                                              const uint8_t* ref_ptr,
-                                              int ref_stride);
+#define vpx_highbd_sad8x4 vpx_highbd_sad8x4_sse2
 
 unsigned int vpx_highbd_sad8x4_avg_c(const uint8_t* src_ptr,
                                      int src_stride,
@@ -5804,27 +4852,19 @@
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x4_avg)(const uint8_t* src_ptr,
-                                                  int src_stride,
-                                                  const uint8_t* ref_ptr,
-                                                  int ref_stride,
-                                                  const uint8_t* second_pred);
+#define vpx_highbd_sad8x4_avg vpx_highbd_sad8x4_avg_sse2
 
 void vpx_highbd_sad8x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad8x4x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad8x4x4d)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* const ref_ptr[],
-                                         int ref_stride,
-                                         uint32_t* sad_array);
+#define vpx_highbd_sad8x4x4d vpx_highbd_sad8x4x4d_sse2
 
 unsigned int vpx_highbd_sad8x8_c(const uint8_t* src_ptr,
                                  int src_stride,
@@ -5834,10 +4874,7 @@
                                     int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x8)(const uint8_t* src_ptr,
-                                              int src_stride,
-                                              const uint8_t* ref_ptr,
-                                              int ref_stride);
+#define vpx_highbd_sad8x8 vpx_highbd_sad8x8_sse2
 
 unsigned int vpx_highbd_sad8x8_avg_c(const uint8_t* src_ptr,
                                      int src_stride,
@@ -5849,182 +4886,142 @@
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x8_avg)(const uint8_t* src_ptr,
-                                                  int src_stride,
-                                                  const uint8_t* ref_ptr,
-                                                  int ref_stride,
-                                                  const uint8_t* second_pred);
+#define vpx_highbd_sad8x8_avg vpx_highbd_sad8x8_avg_sse2
 
 void vpx_highbd_sad8x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad8x8x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad8x8x4d)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* const ref_ptr[],
-                                         int ref_stride,
-                                         uint32_t* sad_array);
+#define vpx_highbd_sad8x8x4d vpx_highbd_sad8x8x4d_sse2
+
+int vpx_highbd_satd_c(const tran_low_t* coeff, int length);
+int vpx_highbd_satd_avx2(const tran_low_t* coeff, int length);
+RTCD_EXTERN int (*vpx_highbd_satd)(const tran_low_t* coeff, int length);
 
 void vpx_highbd_subtract_block_c(int rows,
                                  int cols,
                                  int16_t* diff_ptr,
                                  ptrdiff_t diff_stride,
-                                 const uint8_t* src_ptr,
+                                 const uint8_t* src8_ptr,
                                  ptrdiff_t src_stride,
-                                 const uint8_t* pred_ptr,
+                                 const uint8_t* pred8_ptr,
                                  ptrdiff_t pred_stride,
                                  int bd);
 #define vpx_highbd_subtract_block vpx_highbd_subtract_block_c
 
 void vpx_highbd_tm_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_tm_predictor_16x16_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_tm_predictor_16x16)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_tm_predictor_16x16 vpx_highbd_tm_predictor_16x16_sse2
 
 void vpx_highbd_tm_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_tm_predictor_32x32_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_tm_predictor_32x32)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_tm_predictor_32x32 vpx_highbd_tm_predictor_32x32_sse2
 
 void vpx_highbd_tm_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_tm_predictor_4x4_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_tm_predictor_4x4)(uint16_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint16_t* above,
-                                                const uint16_t* left,
-                                                int bd);
+#define vpx_highbd_tm_predictor_4x4 vpx_highbd_tm_predictor_4x4_sse2
 
 void vpx_highbd_tm_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_tm_predictor_8x8_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_tm_predictor_8x8)(uint16_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint16_t* above,
-                                                const uint16_t* left,
-                                                int bd);
+#define vpx_highbd_tm_predictor_8x8 vpx_highbd_tm_predictor_8x8_sse2
 
 void vpx_highbd_v_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_v_predictor_16x16_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_v_predictor_16x16)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
-                                                 const uint16_t* above,
-                                                 const uint16_t* left,
-                                                 int bd);
+#define vpx_highbd_v_predictor_16x16 vpx_highbd_v_predictor_16x16_sse2
 
 void vpx_highbd_v_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_v_predictor_32x32_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_v_predictor_32x32)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
-                                                 const uint16_t* above,
-                                                 const uint16_t* left,
-                                                 int bd);
+#define vpx_highbd_v_predictor_32x32 vpx_highbd_v_predictor_32x32_sse2
 
 void vpx_highbd_v_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_v_predictor_4x4_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_v_predictor_4x4)(uint16_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint16_t* above,
-                                               const uint16_t* left,
-                                               int bd);
+#define vpx_highbd_v_predictor_4x4 vpx_highbd_v_predictor_4x4_sse2
 
 void vpx_highbd_v_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_v_predictor_8x8_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_v_predictor_8x8)(uint16_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint16_t* above,
-                                               const uint16_t* left,
-                                               int bd);
+#define vpx_highbd_v_predictor_8x8 vpx_highbd_v_predictor_8x8_sse2
 
 void vpx_idct16x16_10_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct16x16_10_add_sse2(const tran_low_t* input,
                                uint8_t* dest,
                                int stride);
-RTCD_EXTERN void (*vpx_idct16x16_10_add)(const tran_low_t* input,
-                                         uint8_t* dest,
-                                         int stride);
+#define vpx_idct16x16_10_add vpx_idct16x16_10_add_sse2
 
 void vpx_idct16x16_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct16x16_1_add_sse2(const tran_low_t* input,
                               uint8_t* dest,
                               int stride);
-RTCD_EXTERN void (*vpx_idct16x16_1_add)(const tran_low_t* input,
-                                        uint8_t* dest,
-                                        int stride);
+#define vpx_idct16x16_1_add vpx_idct16x16_1_add_sse2
 
 void vpx_idct16x16_256_add_c(const tran_low_t* input,
                              uint8_t* dest,
@@ -6032,17 +5029,13 @@
 void vpx_idct16x16_256_add_sse2(const tran_low_t* input,
                                 uint8_t* dest,
                                 int stride);
-RTCD_EXTERN void (*vpx_idct16x16_256_add)(const tran_low_t* input,
-                                          uint8_t* dest,
-                                          int stride);
+#define vpx_idct16x16_256_add vpx_idct16x16_256_add_sse2
 
 void vpx_idct16x16_38_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct16x16_38_add_sse2(const tran_low_t* input,
                                uint8_t* dest,
                                int stride);
-RTCD_EXTERN void (*vpx_idct16x16_38_add)(const tran_low_t* input,
-                                         uint8_t* dest,
-                                         int stride);
+#define vpx_idct16x16_38_add vpx_idct16x16_38_add_sse2
 
 void vpx_idct32x32_1024_add_c(const tran_low_t* input,
                               uint8_t* dest,
@@ -6050,9 +5043,7 @@
 void vpx_idct32x32_1024_add_sse2(const tran_low_t* input,
                                  uint8_t* dest,
                                  int stride);
-RTCD_EXTERN void (*vpx_idct32x32_1024_add)(const tran_low_t* input,
-                                           uint8_t* dest,
-                                           int stride);
+#define vpx_idct32x32_1024_add vpx_idct32x32_1024_add_sse2
 
 void vpx_idct32x32_135_add_c(const tran_low_t* input,
                              uint8_t* dest,
@@ -6071,9 +5062,7 @@
 void vpx_idct32x32_1_add_sse2(const tran_low_t* input,
                               uint8_t* dest,
                               int stride);
-RTCD_EXTERN void (*vpx_idct32x32_1_add)(const tran_low_t* input,
-                                        uint8_t* dest,
-                                        int stride);
+#define vpx_idct32x32_1_add vpx_idct32x32_1_add_sse2
 
 void vpx_idct32x32_34_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct32x32_34_add_sse2(const tran_low_t* input,
@@ -6090,15 +5079,11 @@
 void vpx_idct4x4_16_add_sse2(const tran_low_t* input,
                              uint8_t* dest,
                              int stride);
-RTCD_EXTERN void (*vpx_idct4x4_16_add)(const tran_low_t* input,
-                                       uint8_t* dest,
-                                       int stride);
+#define vpx_idct4x4_16_add vpx_idct4x4_16_add_sse2
 
 void vpx_idct4x4_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct4x4_1_add_sse2(const tran_low_t* input, uint8_t* dest, int stride);
-RTCD_EXTERN void (*vpx_idct4x4_1_add)(const tran_low_t* input,
-                                      uint8_t* dest,
-                                      int stride);
+#define vpx_idct4x4_1_add vpx_idct4x4_1_add_sse2
 
 void vpx_idct8x8_12_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct8x8_12_add_sse2(const tran_low_t* input,
@@ -6113,21 +5098,17 @@
 
 void vpx_idct8x8_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct8x8_1_add_sse2(const tran_low_t* input, uint8_t* dest, int stride);
-RTCD_EXTERN void (*vpx_idct8x8_1_add)(const tran_low_t* input,
-                                      uint8_t* dest,
-                                      int stride);
+#define vpx_idct8x8_1_add vpx_idct8x8_1_add_sse2
 
 void vpx_idct8x8_64_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct8x8_64_add_sse2(const tran_low_t* input,
                              uint8_t* dest,
                              int stride);
-RTCD_EXTERN void (*vpx_idct8x8_64_add)(const tran_low_t* input,
-                                       uint8_t* dest,
-                                       int stride);
+#define vpx_idct8x8_64_add vpx_idct8x8_64_add_sse2
 
 int16_t vpx_int_pro_col_c(const uint8_t* ref, const int width);
 int16_t vpx_int_pro_col_sse2(const uint8_t* ref, const int width);
-RTCD_EXTERN int16_t (*vpx_int_pro_col)(const uint8_t* ref, const int width);
+#define vpx_int_pro_col vpx_int_pro_col_sse2
 
 void vpx_int_pro_row_c(int16_t* hbuf,
                        const uint8_t* ref,
@@ -6137,18 +5118,13 @@
                           const uint8_t* ref,
                           const int ref_stride,
                           const int height);
-RTCD_EXTERN void (*vpx_int_pro_row)(int16_t* hbuf,
-                                    const uint8_t* ref,
-                                    const int ref_stride,
-                                    const int height);
+#define vpx_int_pro_row vpx_int_pro_row_sse2
 
 void vpx_iwht4x4_16_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_iwht4x4_16_add_sse2(const tran_low_t* input,
                              uint8_t* dest,
                              int stride);
-RTCD_EXTERN void (*vpx_iwht4x4_16_add)(const tran_low_t* input,
-                                       uint8_t* dest,
-                                       int stride);
+#define vpx_iwht4x4_16_add vpx_iwht4x4_16_add_sse2
 
 void vpx_iwht4x4_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 #define vpx_iwht4x4_1_add vpx_iwht4x4_1_add_c
@@ -6205,11 +5181,7 @@
                                const uint8_t* blimit,
                                const uint8_t* limit,
                                const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_horizontal_4)(uint8_t* s,
-                                         int pitch,
-                                         const uint8_t* blimit,
-                                         const uint8_t* limit,
-                                         const uint8_t* thresh);
+#define vpx_lpf_horizontal_4 vpx_lpf_horizontal_4_sse2
 
 void vpx_lpf_horizontal_4_dual_c(uint8_t* s,
                                  int pitch,
@@ -6227,14 +5199,7 @@
                                     const uint8_t* blimit1,
                                     const uint8_t* limit1,
                                     const uint8_t* thresh1);
-RTCD_EXTERN void (*vpx_lpf_horizontal_4_dual)(uint8_t* s,
-                                              int pitch,
-                                              const uint8_t* blimit0,
-                                              const uint8_t* limit0,
-                                              const uint8_t* thresh0,
-                                              const uint8_t* blimit1,
-                                              const uint8_t* limit1,
-                                              const uint8_t* thresh1);
+#define vpx_lpf_horizontal_4_dual vpx_lpf_horizontal_4_dual_sse2
 
 void vpx_lpf_horizontal_8_c(uint8_t* s,
                             int pitch,
@@ -6246,11 +5211,7 @@
                                const uint8_t* blimit,
                                const uint8_t* limit,
                                const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_horizontal_8)(uint8_t* s,
-                                         int pitch,
-                                         const uint8_t* blimit,
-                                         const uint8_t* limit,
-                                         const uint8_t* thresh);
+#define vpx_lpf_horizontal_8 vpx_lpf_horizontal_8_sse2
 
 void vpx_lpf_horizontal_8_dual_c(uint8_t* s,
                                  int pitch,
@@ -6268,14 +5229,7 @@
                                     const uint8_t* blimit1,
                                     const uint8_t* limit1,
                                     const uint8_t* thresh1);
-RTCD_EXTERN void (*vpx_lpf_horizontal_8_dual)(uint8_t* s,
-                                              int pitch,
-                                              const uint8_t* blimit0,
-                                              const uint8_t* limit0,
-                                              const uint8_t* thresh0,
-                                              const uint8_t* blimit1,
-                                              const uint8_t* limit1,
-                                              const uint8_t* thresh1);
+#define vpx_lpf_horizontal_8_dual vpx_lpf_horizontal_8_dual_sse2
 
 void vpx_lpf_vertical_16_c(uint8_t* s,
                            int pitch,
@@ -6287,11 +5241,7 @@
                               const uint8_t* blimit,
                               const uint8_t* limit,
                               const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_vertical_16)(uint8_t* s,
-                                        int pitch,
-                                        const uint8_t* blimit,
-                                        const uint8_t* limit,
-                                        const uint8_t* thresh);
+#define vpx_lpf_vertical_16 vpx_lpf_vertical_16_sse2
 
 void vpx_lpf_vertical_16_dual_c(uint8_t* s,
                                 int pitch,
@@ -6303,11 +5253,7 @@
                                    const uint8_t* blimit,
                                    const uint8_t* limit,
                                    const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_vertical_16_dual)(uint8_t* s,
-                                             int pitch,
-                                             const uint8_t* blimit,
-                                             const uint8_t* limit,
-                                             const uint8_t* thresh);
+#define vpx_lpf_vertical_16_dual vpx_lpf_vertical_16_dual_sse2
 
 void vpx_lpf_vertical_4_c(uint8_t* s,
                           int pitch,
@@ -6319,11 +5265,7 @@
                              const uint8_t* blimit,
                              const uint8_t* limit,
                              const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_vertical_4)(uint8_t* s,
-                                       int pitch,
-                                       const uint8_t* blimit,
-                                       const uint8_t* limit,
-                                       const uint8_t* thresh);
+#define vpx_lpf_vertical_4 vpx_lpf_vertical_4_sse2
 
 void vpx_lpf_vertical_4_dual_c(uint8_t* s,
                                int pitch,
@@ -6341,14 +5283,7 @@
                                   const uint8_t* blimit1,
                                   const uint8_t* limit1,
                                   const uint8_t* thresh1);
-RTCD_EXTERN void (*vpx_lpf_vertical_4_dual)(uint8_t* s,
-                                            int pitch,
-                                            const uint8_t* blimit0,
-                                            const uint8_t* limit0,
-                                            const uint8_t* thresh0,
-                                            const uint8_t* blimit1,
-                                            const uint8_t* limit1,
-                                            const uint8_t* thresh1);
+#define vpx_lpf_vertical_4_dual vpx_lpf_vertical_4_dual_sse2
 
 void vpx_lpf_vertical_8_c(uint8_t* s,
                           int pitch,
@@ -6360,11 +5295,7 @@
                              const uint8_t* blimit,
                              const uint8_t* limit,
                              const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_vertical_8)(uint8_t* s,
-                                       int pitch,
-                                       const uint8_t* blimit,
-                                       const uint8_t* limit,
-                                       const uint8_t* thresh);
+#define vpx_lpf_vertical_8 vpx_lpf_vertical_8_sse2
 
 void vpx_lpf_vertical_8_dual_c(uint8_t* s,
                                int pitch,
@@ -6382,30 +5313,19 @@
                                   const uint8_t* blimit1,
                                   const uint8_t* limit1,
                                   const uint8_t* thresh1);
-RTCD_EXTERN void (*vpx_lpf_vertical_8_dual)(uint8_t* s,
-                                            int pitch,
-                                            const uint8_t* blimit0,
-                                            const uint8_t* limit0,
-                                            const uint8_t* thresh0,
-                                            const uint8_t* blimit1,
-                                            const uint8_t* limit1,
-                                            const uint8_t* thresh1);
+#define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_sse2
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
                                  int flimit);
-void vpx_mbpost_proc_across_ip_sse2(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_sse2(unsigned char* src,
                                     int pitch,
                                     int rows,
                                     int cols,
                                     int flimit);
-RTCD_EXTERN void (*vpx_mbpost_proc_across_ip)(unsigned char* dst,
-                                              int pitch,
-                                              int rows,
-                                              int cols,
-                                              int flimit);
+#define vpx_mbpost_proc_across_ip vpx_mbpost_proc_across_ip_sse2
 
 void vpx_mbpost_proc_down_c(unsigned char* dst,
                             int pitch,
@@ -6417,11 +5337,7 @@
                                int rows,
                                int cols,
                                int flimit);
-RTCD_EXTERN void (*vpx_mbpost_proc_down)(unsigned char* dst,
-                                         int pitch,
-                                         int rows,
-                                         int cols,
-                                         int flimit);
+#define vpx_mbpost_proc_down vpx_mbpost_proc_down_sse2
 
 void vpx_minmax_8x8_c(const uint8_t* s,
                       int p,
@@ -6435,86 +5351,73 @@
                          int dp,
                          int* min,
                          int* max);
-RTCD_EXTERN void (*vpx_minmax_8x8)(const uint8_t* s,
-                                   int p,
-                                   const uint8_t* d,
-                                   int dp,
-                                   int* min,
-                                   int* max);
+#define vpx_minmax_8x8 vpx_minmax_8x8_sse2
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 unsigned int vpx_mse16x16_sse2(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_mse16x16_avx2(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_mse16x16)(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 unsigned int vpx_mse16x8_sse2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
 unsigned int vpx_mse16x8_avx2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_mse16x8)(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
-                                        int recon_stride,
+                                        int ref_stride,
                                         unsigned int* sse);
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 unsigned int vpx_mse8x16_sse2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_mse8x16)(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        const uint8_t* ref_ptr,
-                                        int recon_stride,
-                                        unsigned int* sse);
+#define vpx_mse8x16 vpx_mse8x16_sse2
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 unsigned int vpx_mse8x8_sse2(const uint8_t* src_ptr,
-                             int source_stride,
+                             int src_stride,
                              const uint8_t* ref_ptr,
-                             int recon_stride,
+                             int ref_stride,
                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_mse8x8)(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       const uint8_t* ref_ptr,
-                                       int recon_stride,
-                                       unsigned int* sse);
+#define vpx_mse8x8 vpx_mse8x8_sse2
 
 void vpx_plane_add_noise_c(uint8_t* start,
                            const int8_t* noise,
@@ -6530,13 +5433,7 @@
                               int width,
                               int height,
                               int pitch);
-RTCD_EXTERN void (*vpx_plane_add_noise)(uint8_t* start,
-                                        const int8_t* noise,
-                                        int blackclamp,
-                                        int whiteclamp,
-                                        int width,
-                                        int height,
-                                        int pitch);
+#define vpx_plane_add_noise vpx_plane_add_noise_sse2
 
 void vpx_post_proc_down_and_across_mb_row_c(unsigned char* src,
                                             unsigned char* dst,
@@ -6552,13 +5449,8 @@
                                                int cols,
                                                unsigned char* flimits,
                                                int size);
-RTCD_EXTERN void (*vpx_post_proc_down_and_across_mb_row)(unsigned char* src,
-                                                         unsigned char* dst,
-                                                         int src_pitch,
-                                                         int dst_pitch,
-                                                         int cols,
-                                                         unsigned char* flimits,
-                                                         int size);
+#define vpx_post_proc_down_and_across_mb_row \
+  vpx_post_proc_down_and_across_mb_row_sse2
 
 void vpx_quantize_b_c(const tran_low_t* coeff_ptr,
                       intptr_t n_coeffs,
@@ -6687,10 +5579,7 @@
                                int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad16x16)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* ref_ptr,
-                                         int ref_stride);
+#define vpx_sad16x16 vpx_sad16x16_sse2
 
 unsigned int vpx_sad16x16_avg_c(const uint8_t* src_ptr,
                                 int src_stride,
@@ -6702,11 +5591,7 @@
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad16x16_avg)(const uint8_t* src_ptr,
-                                             int src_stride,
-                                             const uint8_t* ref_ptr,
-                                             int ref_stride,
-                                             const uint8_t* second_pred);
+#define vpx_sad16x16_avg vpx_sad16x16_avg_sse2
 
 void vpx_sad16x16x3_c(const uint8_t* src_ptr,
                       int src_stride,
@@ -6731,19 +5616,15 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x16x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad16x16x4d)(const uint8_t* src_ptr,
-                                    int src_stride,
-                                    const uint8_t* const ref_ptr[],
-                                    int ref_stride,
-                                    uint32_t* sad_array);
+#define vpx_sad16x16x4d vpx_sad16x16x4d_sse2
 
 void vpx_sad16x16x8_c(const uint8_t* src_ptr,
                       int src_stride,
@@ -6769,10 +5650,7 @@
                                int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad16x32)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* ref_ptr,
-                                         int ref_stride);
+#define vpx_sad16x32 vpx_sad16x32_sse2
 
 unsigned int vpx_sad16x32_avg_c(const uint8_t* src_ptr,
                                 int src_stride,
@@ -6784,27 +5662,19 @@
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad16x32_avg)(const uint8_t* src_ptr,
-                                             int src_stride,
-                                             const uint8_t* ref_ptr,
-                                             int ref_stride,
-                                             const uint8_t* second_pred);
+#define vpx_sad16x32_avg vpx_sad16x32_avg_sse2
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad16x32x4d)(const uint8_t* src_ptr,
-                                    int src_stride,
-                                    const uint8_t* const ref_ptr[],
-                                    int ref_stride,
-                                    uint32_t* sad_array);
+#define vpx_sad16x32x4d vpx_sad16x32x4d_sse2
 
 unsigned int vpx_sad16x8_c(const uint8_t* src_ptr,
                            int src_stride,
@@ -6814,10 +5684,7 @@
                               int src_stride,
                               const uint8_t* ref_ptr,
                               int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad16x8)(const uint8_t* src_ptr,
-                                        int src_stride,
-                                        const uint8_t* ref_ptr,
-                                        int ref_stride);
+#define vpx_sad16x8 vpx_sad16x8_sse2
 
 unsigned int vpx_sad16x8_avg_c(const uint8_t* src_ptr,
                                int src_stride,
@@ -6829,11 +5696,7 @@
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad16x8_avg)(const uint8_t* src_ptr,
-                                            int src_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            const uint8_t* second_pred);
+#define vpx_sad16x8_avg vpx_sad16x8_avg_sse2
 
 void vpx_sad16x8x3_c(const uint8_t* src_ptr,
                      int src_stride,
@@ -6858,19 +5721,15 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad16x8x4d_sse2(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad16x8x4d)(const uint8_t* src_ptr,
-                                   int src_stride,
-                                   const uint8_t* const ref_ptr[],
-                                   int ref_stride,
-                                   uint32_t* sad_array);
+#define vpx_sad16x8x4d vpx_sad16x8x4d_sse2
 
 void vpx_sad16x8x8_c(const uint8_t* src_ptr,
                      int src_stride,
@@ -6928,19 +5787,15 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x16x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad32x16x4d)(const uint8_t* src_ptr,
-                                    int src_stride,
-                                    const uint8_t* const ref_ptr[],
-                                    int ref_stride,
-                                    uint32_t* sad_array);
+#define vpx_sad32x16x4d vpx_sad32x16x4d_sse2
 
 unsigned int vpx_sad32x32_c(const uint8_t* src_ptr,
                             int src_stride,
@@ -6982,25 +5837,41 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 void vpx_sad32x32x4d_avx2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad32x32x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+void vpx_sad32x32x8_avx2(const uint8_t* src_ptr,
+                         int src_stride,
+                         const uint8_t* ref_ptr,
+                         int ref_stride,
+                         uint32_t* sad_array);
+RTCD_EXTERN void (*vpx_sad32x32x8)(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   uint32_t* sad_array);
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -7041,19 +5912,15 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x64x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad32x64x4d)(const uint8_t* src_ptr,
-                                    int src_stride,
-                                    const uint8_t* const ref_ptr[],
-                                    int ref_stride,
-                                    uint32_t* sad_array);
+#define vpx_sad32x64x4d vpx_sad32x64x4d_sse2
 
 unsigned int vpx_sad4x4_c(const uint8_t* src_ptr,
                           int src_stride,
@@ -7063,10 +5930,7 @@
                              int src_stride,
                              const uint8_t* ref_ptr,
                              int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad4x4)(const uint8_t* src_ptr,
-                                       int src_stride,
-                                       const uint8_t* ref_ptr,
-                                       int ref_stride);
+#define vpx_sad4x4 vpx_sad4x4_sse2
 
 unsigned int vpx_sad4x4_avg_c(const uint8_t* src_ptr,
                               int src_stride,
@@ -7078,11 +5942,7 @@
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad4x4_avg)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* ref_ptr,
-                                           int ref_stride,
-                                           const uint8_t* second_pred);
+#define vpx_sad4x4_avg vpx_sad4x4_avg_sse2
 
 void vpx_sad4x4x3_c(const uint8_t* src_ptr,
                     int src_stride,
@@ -7102,19 +5962,15 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x4x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad4x4x4d)(const uint8_t* src_ptr,
-                                  int src_stride,
-                                  const uint8_t* const ref_ptr[],
-                                  int ref_stride,
-                                  uint32_t* sad_array);
+#define vpx_sad4x4x4d vpx_sad4x4x4d_sse2
 
 void vpx_sad4x4x8_c(const uint8_t* src_ptr,
                     int src_stride,
@@ -7140,10 +5996,7 @@
                              int src_stride,
                              const uint8_t* ref_ptr,
                              int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad4x8)(const uint8_t* src_ptr,
-                                       int src_stride,
-                                       const uint8_t* ref_ptr,
-                                       int ref_stride);
+#define vpx_sad4x8 vpx_sad4x8_sse2
 
 unsigned int vpx_sad4x8_avg_c(const uint8_t* src_ptr,
                               int src_stride,
@@ -7155,27 +6008,19 @@
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad4x8_avg)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* ref_ptr,
-                                           int ref_stride,
-                                           const uint8_t* second_pred);
+#define vpx_sad4x8_avg vpx_sad4x8_avg_sse2
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x8x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad4x8x4d)(const uint8_t* src_ptr,
-                                  int src_stride,
-                                  const uint8_t* const ref_ptr[],
-                                  int ref_stride,
-                                  uint32_t* sad_array);
+#define vpx_sad4x8x4d vpx_sad4x8x4d_sse2
 
 unsigned int vpx_sad64x32_c(const uint8_t* src_ptr,
                             int src_stride,
@@ -7217,19 +6062,15 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad64x32x4d)(const uint8_t* src_ptr,
-                                    int src_stride,
-                                    const uint8_t* const ref_ptr[],
-                                    int ref_stride,
-                                    uint32_t* sad_array);
+#define vpx_sad64x32x4d vpx_sad64x32x4d_sse2
 
 unsigned int vpx_sad64x64_c(const uint8_t* src_ptr,
                             int src_stride,
@@ -7271,22 +6112,22 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x64x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 void vpx_sad64x64x4d_avx2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad64x64x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
@@ -7298,10 +6139,7 @@
                               int src_stride,
                               const uint8_t* ref_ptr,
                               int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad8x16)(const uint8_t* src_ptr,
-                                        int src_stride,
-                                        const uint8_t* ref_ptr,
-                                        int ref_stride);
+#define vpx_sad8x16 vpx_sad8x16_sse2
 
 unsigned int vpx_sad8x16_avg_c(const uint8_t* src_ptr,
                                int src_stride,
@@ -7313,11 +6151,7 @@
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad8x16_avg)(const uint8_t* src_ptr,
-                                            int src_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            const uint8_t* second_pred);
+#define vpx_sad8x16_avg vpx_sad8x16_avg_sse2
 
 void vpx_sad8x16x3_c(const uint8_t* src_ptr,
                      int src_stride,
@@ -7337,19 +6171,15 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad8x16x4d_sse2(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad8x16x4d)(const uint8_t* src_ptr,
-                                   int src_stride,
-                                   const uint8_t* const ref_ptr[],
-                                   int ref_stride,
-                                   uint32_t* sad_array);
+#define vpx_sad8x16x4d vpx_sad8x16x4d_sse2
 
 void vpx_sad8x16x8_c(const uint8_t* src_ptr,
                      int src_stride,
@@ -7375,10 +6205,7 @@
                              int src_stride,
                              const uint8_t* ref_ptr,
                              int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad8x4)(const uint8_t* src_ptr,
-                                       int src_stride,
-                                       const uint8_t* ref_ptr,
-                                       int ref_stride);
+#define vpx_sad8x4 vpx_sad8x4_sse2
 
 unsigned int vpx_sad8x4_avg_c(const uint8_t* src_ptr,
                               int src_stride,
@@ -7390,27 +6217,19 @@
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad8x4_avg)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* ref_ptr,
-                                           int ref_stride,
-                                           const uint8_t* second_pred);
+#define vpx_sad8x4_avg vpx_sad8x4_avg_sse2
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x4x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad8x4x4d)(const uint8_t* src_ptr,
-                                  int src_stride,
-                                  const uint8_t* const ref_ptr[],
-                                  int ref_stride,
-                                  uint32_t* sad_array);
+#define vpx_sad8x4x4d vpx_sad8x4x4d_sse2
 
 unsigned int vpx_sad8x8_c(const uint8_t* src_ptr,
                           int src_stride,
@@ -7420,10 +6239,7 @@
                              int src_stride,
                              const uint8_t* ref_ptr,
                              int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad8x8)(const uint8_t* src_ptr,
-                                       int src_stride,
-                                       const uint8_t* ref_ptr,
-                                       int ref_stride);
+#define vpx_sad8x8 vpx_sad8x8_sse2
 
 unsigned int vpx_sad8x8_avg_c(const uint8_t* src_ptr,
                               int src_stride,
@@ -7435,11 +6251,7 @@
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad8x8_avg)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* ref_ptr,
-                                           int ref_stride,
-                                           const uint8_t* second_pred);
+#define vpx_sad8x8_avg vpx_sad8x8_avg_sse2
 
 void vpx_sad8x8x3_c(const uint8_t* src_ptr,
                     int src_stride,
@@ -7459,19 +6271,15 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x8x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad8x8x4d)(const uint8_t* src_ptr,
-                                  int src_stride,
-                                  const uint8_t* const ref_ptr[],
-                                  int ref_stride,
-                                  uint32_t* sad_array);
+#define vpx_sad8x8x4d vpx_sad8x8x4d_sse2
 
 void vpx_sad8x8x8_c(const uint8_t* src_ptr,
                     int src_stride,
@@ -7594,850 +6402,850 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_ssse3(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_avx2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x64)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance4x4)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance4x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance64x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_avx2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance64x64)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_ssse3(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x4)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x16)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_ssse3(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x8)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x16)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_avx2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x64)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance4x4)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance4x8)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance64x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_avx2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance64x64)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_ssse3(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x16)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x4)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x8)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -8458,384 +7266,329 @@
                              ptrdiff_t src_stride,
                              const uint8_t* pred_ptr,
                              ptrdiff_t pred_stride);
-RTCD_EXTERN void (*vpx_subtract_block)(int rows,
-                                       int cols,
-                                       int16_t* diff_ptr,
-                                       ptrdiff_t diff_stride,
-                                       const uint8_t* src_ptr,
-                                       ptrdiff_t src_stride,
-                                       const uint8_t* pred_ptr,
-                                       ptrdiff_t pred_stride);
+#define vpx_subtract_block vpx_subtract_block_sse2
 
 uint64_t vpx_sum_squares_2d_i16_c(const int16_t* src, int stride, int size);
 uint64_t vpx_sum_squares_2d_i16_sse2(const int16_t* src, int stride, int size);
-RTCD_EXTERN uint64_t (*vpx_sum_squares_2d_i16)(const int16_t* src,
-                                               int stride,
-                                               int size);
+#define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_sse2
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_16x16_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
-RTCD_EXTERN void (*vpx_tm_predictor_16x16)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
-                                           const uint8_t* above,
-                                           const uint8_t* left);
+#define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_sse2
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_32x32_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
-RTCD_EXTERN void (*vpx_tm_predictor_32x32)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
-                                           const uint8_t* above,
-                                           const uint8_t* left);
+#define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_sse2
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_4x4_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
-RTCD_EXTERN void (*vpx_tm_predictor_4x4)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
-                                         const uint8_t* above,
-                                         const uint8_t* left);
+#define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_sse2
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_8x8_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
-RTCD_EXTERN void (*vpx_tm_predictor_8x8)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
-                                         const uint8_t* above,
-                                         const uint8_t* left);
+#define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_sse2
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_16x16_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_v_predictor_16x16)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_v_predictor_16x16 vpx_v_predictor_16x16_sse2
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_32x32_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_v_predictor_32x32)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_v_predictor_32x32 vpx_v_predictor_32x32_sse2
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_4x4_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
-RTCD_EXTERN void (*vpx_v_predictor_4x4)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
-                                        const uint8_t* above,
-                                        const uint8_t* left);
+#define vpx_v_predictor_4x4 vpx_v_predictor_4x4_sse2
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_8x8_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
-RTCD_EXTERN void (*vpx_v_predictor_8x8)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
-                                        const uint8_t* above,
-                                        const uint8_t* left);
+#define vpx_v_predictor_8x8 vpx_v_predictor_8x8_sse2
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x16_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance16x16_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x16)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance16x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance16x8_sse2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 unsigned int vpx_variance16x8_avx2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x8)(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x16_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x16_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x16)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x64_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x64_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x64)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x4_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_variance4x4)(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            unsigned int* sse);
+#define vpx_variance4x4 vpx_variance4x4_sse2
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x8_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_variance4x8)(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            unsigned int* sse);
+#define vpx_variance4x8 vpx_variance4x8_sse2
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance64x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance64x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x64_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance64x64_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance64x64)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance8x16_sse2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_variance8x16)(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             const uint8_t* ref_ptr,
-                                             int ref_stride,
-                                             unsigned int* sse);
+#define vpx_variance8x16 vpx_variance8x16_sse2
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x4_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_variance8x4)(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            unsigned int* sse);
+#define vpx_variance8x4 vpx_variance8x4_sse2
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x8_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_variance8x8)(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            unsigned int* sse);
+#define vpx_variance8x8 vpx_variance8x8_sse2
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
 
 int vpx_vector_var_c(const int16_t* ref, const int16_t* src, const int bwl);
 int vpx_vector_var_sse2(const int16_t* ref, const int16_t* src, const int bwl);
-RTCD_EXTERN int (*vpx_vector_var)(const int16_t* ref,
-                                  const int16_t* src,
-                                  const int bwl);
+#define vpx_vector_var vpx_vector_var_sse2
 
 void vpx_dsp_rtcd(void);
 
@@ -8846,63 +7599,36 @@
 
   (void)flags;
 
-  vpx_avg_4x4 = vpx_avg_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_avg_4x4 = vpx_avg_4x4_sse2;
-  vpx_avg_8x8 = vpx_avg_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_avg_8x8 = vpx_avg_8x8_sse2;
-  vpx_comp_avg_pred = vpx_comp_avg_pred_c;
-  if (flags & HAS_SSE2)
-    vpx_comp_avg_pred = vpx_comp_avg_pred_sse2;
-  vpx_convolve8 = vpx_convolve8_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8 = vpx_convolve8_sse2;
+  vpx_convolve8 = vpx_convolve8_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8 = vpx_convolve8_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8 = vpx_convolve8_avx2;
-  vpx_convolve8_avg = vpx_convolve8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8_avg = vpx_convolve8_avg_sse2;
+  vpx_convolve8_avg = vpx_convolve8_avg_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8_avg = vpx_convolve8_avg_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8_avg = vpx_convolve8_avg_avx2;
-  vpx_convolve8_avg_horiz = vpx_convolve8_avg_horiz_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8_avg_horiz = vpx_convolve8_avg_horiz_sse2;
+  vpx_convolve8_avg_horiz = vpx_convolve8_avg_horiz_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8_avg_horiz = vpx_convolve8_avg_horiz_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8_avg_horiz = vpx_convolve8_avg_horiz_avx2;
-  vpx_convolve8_avg_vert = vpx_convolve8_avg_vert_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8_avg_vert = vpx_convolve8_avg_vert_sse2;
+  vpx_convolve8_avg_vert = vpx_convolve8_avg_vert_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8_avg_vert = vpx_convolve8_avg_vert_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8_avg_vert = vpx_convolve8_avg_vert_avx2;
-  vpx_convolve8_horiz = vpx_convolve8_horiz_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8_horiz = vpx_convolve8_horiz_sse2;
+  vpx_convolve8_horiz = vpx_convolve8_horiz_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8_horiz = vpx_convolve8_horiz_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8_horiz = vpx_convolve8_horiz_avx2;
-  vpx_convolve8_vert = vpx_convolve8_vert_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8_vert = vpx_convolve8_vert_sse2;
+  vpx_convolve8_vert = vpx_convolve8_vert_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8_vert = vpx_convolve8_vert_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8_vert = vpx_convolve8_vert_avx2;
-  vpx_convolve_avg = vpx_convolve_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve_avg = vpx_convolve_avg_sse2;
-  vpx_convolve_copy = vpx_convolve_copy_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve_copy = vpx_convolve_copy_sse2;
   vpx_d153_predictor_16x16 = vpx_d153_predictor_16x16_c;
   if (flags & HAS_SSSE3)
     vpx_d153_predictor_16x16 = vpx_d153_predictor_16x16_ssse3;
@@ -8921,9 +7647,6 @@
   vpx_d207_predictor_32x32 = vpx_d207_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_d207_predictor_32x32 = vpx_d207_predictor_32x32_ssse3;
-  vpx_d207_predictor_4x4 = vpx_d207_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_d207_predictor_4x4 = vpx_d207_predictor_4x4_sse2;
   vpx_d207_predictor_8x8 = vpx_d207_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_d207_predictor_8x8 = vpx_d207_predictor_8x8_ssse3;
@@ -8933,12 +7656,6 @@
   vpx_d45_predictor_32x32 = vpx_d45_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_d45_predictor_32x32 = vpx_d45_predictor_32x32_ssse3;
-  vpx_d45_predictor_4x4 = vpx_d45_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_d45_predictor_4x4 = vpx_d45_predictor_4x4_sse2;
-  vpx_d45_predictor_8x8 = vpx_d45_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_d45_predictor_8x8 = vpx_d45_predictor_8x8_sse2;
   vpx_d63_predictor_16x16 = vpx_d63_predictor_16x16_c;
   if (flags & HAS_SSSE3)
     vpx_d63_predictor_16x16 = vpx_d63_predictor_16x16_ssse3;
@@ -8951,542 +7668,15 @@
   vpx_d63_predictor_8x8 = vpx_d63_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_d63_predictor_8x8 = vpx_d63_predictor_8x8_ssse3;
-  vpx_dc_128_predictor_16x16 = vpx_dc_128_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_128_predictor_16x16 = vpx_dc_128_predictor_16x16_sse2;
-  vpx_dc_128_predictor_32x32 = vpx_dc_128_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_128_predictor_32x32 = vpx_dc_128_predictor_32x32_sse2;
-  vpx_dc_128_predictor_4x4 = vpx_dc_128_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_128_predictor_4x4 = vpx_dc_128_predictor_4x4_sse2;
-  vpx_dc_128_predictor_8x8 = vpx_dc_128_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_128_predictor_8x8 = vpx_dc_128_predictor_8x8_sse2;
-  vpx_dc_left_predictor_16x16 = vpx_dc_left_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_left_predictor_16x16 = vpx_dc_left_predictor_16x16_sse2;
-  vpx_dc_left_predictor_32x32 = vpx_dc_left_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_left_predictor_32x32 = vpx_dc_left_predictor_32x32_sse2;
-  vpx_dc_left_predictor_4x4 = vpx_dc_left_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_left_predictor_4x4 = vpx_dc_left_predictor_4x4_sse2;
-  vpx_dc_left_predictor_8x8 = vpx_dc_left_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_left_predictor_8x8 = vpx_dc_left_predictor_8x8_sse2;
-  vpx_dc_predictor_16x16 = vpx_dc_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_predictor_16x16 = vpx_dc_predictor_16x16_sse2;
-  vpx_dc_predictor_32x32 = vpx_dc_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_predictor_32x32 = vpx_dc_predictor_32x32_sse2;
-  vpx_dc_predictor_4x4 = vpx_dc_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_predictor_4x4 = vpx_dc_predictor_4x4_sse2;
-  vpx_dc_predictor_8x8 = vpx_dc_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_predictor_8x8 = vpx_dc_predictor_8x8_sse2;
-  vpx_dc_top_predictor_16x16 = vpx_dc_top_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_top_predictor_16x16 = vpx_dc_top_predictor_16x16_sse2;
-  vpx_dc_top_predictor_32x32 = vpx_dc_top_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_top_predictor_32x32 = vpx_dc_top_predictor_32x32_sse2;
-  vpx_dc_top_predictor_4x4 = vpx_dc_top_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_top_predictor_4x4 = vpx_dc_top_predictor_4x4_sse2;
-  vpx_dc_top_predictor_8x8 = vpx_dc_top_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_top_predictor_8x8 = vpx_dc_top_predictor_8x8_sse2;
-  vpx_fdct16x16 = vpx_fdct16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct16x16 = vpx_fdct16x16_sse2;
-  vpx_fdct16x16_1 = vpx_fdct16x16_1_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct16x16_1 = vpx_fdct16x16_1_sse2;
-  vpx_fdct32x32 = vpx_fdct32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct32x32 = vpx_fdct32x32_sse2;
-  vpx_fdct32x32_1 = vpx_fdct32x32_1_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct32x32_1 = vpx_fdct32x32_1_sse2;
-  vpx_fdct32x32_rd = vpx_fdct32x32_rd_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct32x32_rd = vpx_fdct32x32_rd_sse2;
-  vpx_fdct4x4 = vpx_fdct4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct4x4 = vpx_fdct4x4_sse2;
-  vpx_fdct4x4_1 = vpx_fdct4x4_1_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct4x4_1 = vpx_fdct4x4_1_sse2;
-  vpx_fdct8x8 = vpx_fdct8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct8x8 = vpx_fdct8x8_sse2;
-  vpx_fdct8x8_1 = vpx_fdct8x8_1_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct8x8_1 = vpx_fdct8x8_1_sse2;
-  vpx_get16x16var = vpx_get16x16var_c;
-  if (flags & HAS_SSE2)
-    vpx_get16x16var = vpx_get16x16var_sse2;
+  vpx_get16x16var = vpx_get16x16var_sse2;
   if (flags & HAS_AVX2)
     vpx_get16x16var = vpx_get16x16var_avx2;
-  vpx_get8x8var = vpx_get8x8var_c;
-  if (flags & HAS_SSE2)
-    vpx_get8x8var = vpx_get8x8var_sse2;
-  vpx_get_mb_ss = vpx_get_mb_ss_c;
-  if (flags & HAS_SSE2)
-    vpx_get_mb_ss = vpx_get_mb_ss_sse2;
-  vpx_h_predictor_16x16 = vpx_h_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_h_predictor_16x16 = vpx_h_predictor_16x16_sse2;
-  vpx_h_predictor_32x32 = vpx_h_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_h_predictor_32x32 = vpx_h_predictor_32x32_sse2;
-  vpx_h_predictor_4x4 = vpx_h_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_h_predictor_4x4 = vpx_h_predictor_4x4_sse2;
-  vpx_h_predictor_8x8 = vpx_h_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_h_predictor_8x8 = vpx_h_predictor_8x8_sse2;
-  vpx_hadamard_16x16 = vpx_hadamard_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_hadamard_16x16 = vpx_hadamard_16x16_sse2;
+  vpx_hadamard_16x16 = vpx_hadamard_16x16_sse2;
   if (flags & HAS_AVX2)
     vpx_hadamard_16x16 = vpx_hadamard_16x16_avx2;
-  vpx_hadamard_32x32 = vpx_hadamard_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_hadamard_32x32 = vpx_hadamard_32x32_sse2;
+  vpx_hadamard_32x32 = vpx_hadamard_32x32_sse2;
   if (flags & HAS_AVX2)
     vpx_hadamard_32x32 = vpx_hadamard_32x32_avx2;
-  vpx_hadamard_8x8 = vpx_hadamard_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_hadamard_8x8 = vpx_hadamard_8x8_sse2;
-  vpx_highbd_10_mse16x16 = vpx_highbd_10_mse16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_mse16x16 = vpx_highbd_10_mse16x16_sse2;
-  vpx_highbd_10_mse8x8 = vpx_highbd_10_mse8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_mse8x8 = vpx_highbd_10_mse8x8_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance16x16 =
-      vpx_highbd_10_sub_pixel_avg_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance16x16 =
-        vpx_highbd_10_sub_pixel_avg_variance16x16_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance16x32 =
-      vpx_highbd_10_sub_pixel_avg_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance16x32 =
-        vpx_highbd_10_sub_pixel_avg_variance16x32_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance16x8 =
-      vpx_highbd_10_sub_pixel_avg_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance16x8 =
-        vpx_highbd_10_sub_pixel_avg_variance16x8_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance32x16 =
-      vpx_highbd_10_sub_pixel_avg_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance32x16 =
-        vpx_highbd_10_sub_pixel_avg_variance32x16_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance32x32 =
-      vpx_highbd_10_sub_pixel_avg_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance32x32 =
-        vpx_highbd_10_sub_pixel_avg_variance32x32_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance32x64 =
-      vpx_highbd_10_sub_pixel_avg_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance32x64 =
-        vpx_highbd_10_sub_pixel_avg_variance32x64_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance64x32 =
-      vpx_highbd_10_sub_pixel_avg_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance64x32 =
-        vpx_highbd_10_sub_pixel_avg_variance64x32_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance64x64 =
-      vpx_highbd_10_sub_pixel_avg_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance64x64 =
-        vpx_highbd_10_sub_pixel_avg_variance64x64_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance8x16 =
-      vpx_highbd_10_sub_pixel_avg_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance8x16 =
-        vpx_highbd_10_sub_pixel_avg_variance8x16_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance8x4 =
-      vpx_highbd_10_sub_pixel_avg_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance8x4 =
-        vpx_highbd_10_sub_pixel_avg_variance8x4_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance8x8 =
-      vpx_highbd_10_sub_pixel_avg_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance8x8 =
-        vpx_highbd_10_sub_pixel_avg_variance8x8_sse2;
-  vpx_highbd_10_sub_pixel_variance16x16 =
-      vpx_highbd_10_sub_pixel_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance16x16 =
-        vpx_highbd_10_sub_pixel_variance16x16_sse2;
-  vpx_highbd_10_sub_pixel_variance16x32 =
-      vpx_highbd_10_sub_pixel_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance16x32 =
-        vpx_highbd_10_sub_pixel_variance16x32_sse2;
-  vpx_highbd_10_sub_pixel_variance16x8 = vpx_highbd_10_sub_pixel_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance16x8 =
-        vpx_highbd_10_sub_pixel_variance16x8_sse2;
-  vpx_highbd_10_sub_pixel_variance32x16 =
-      vpx_highbd_10_sub_pixel_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance32x16 =
-        vpx_highbd_10_sub_pixel_variance32x16_sse2;
-  vpx_highbd_10_sub_pixel_variance32x32 =
-      vpx_highbd_10_sub_pixel_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance32x32 =
-        vpx_highbd_10_sub_pixel_variance32x32_sse2;
-  vpx_highbd_10_sub_pixel_variance32x64 =
-      vpx_highbd_10_sub_pixel_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance32x64 =
-        vpx_highbd_10_sub_pixel_variance32x64_sse2;
-  vpx_highbd_10_sub_pixel_variance64x32 =
-      vpx_highbd_10_sub_pixel_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance64x32 =
-        vpx_highbd_10_sub_pixel_variance64x32_sse2;
-  vpx_highbd_10_sub_pixel_variance64x64 =
-      vpx_highbd_10_sub_pixel_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance64x64 =
-        vpx_highbd_10_sub_pixel_variance64x64_sse2;
-  vpx_highbd_10_sub_pixel_variance8x16 = vpx_highbd_10_sub_pixel_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance8x16 =
-        vpx_highbd_10_sub_pixel_variance8x16_sse2;
-  vpx_highbd_10_sub_pixel_variance8x4 = vpx_highbd_10_sub_pixel_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance8x4 =
-        vpx_highbd_10_sub_pixel_variance8x4_sse2;
-  vpx_highbd_10_sub_pixel_variance8x8 = vpx_highbd_10_sub_pixel_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance8x8 =
-        vpx_highbd_10_sub_pixel_variance8x8_sse2;
-  vpx_highbd_10_variance16x16 = vpx_highbd_10_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance16x16 = vpx_highbd_10_variance16x16_sse2;
-  vpx_highbd_10_variance16x32 = vpx_highbd_10_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance16x32 = vpx_highbd_10_variance16x32_sse2;
-  vpx_highbd_10_variance16x8 = vpx_highbd_10_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance16x8 = vpx_highbd_10_variance16x8_sse2;
-  vpx_highbd_10_variance32x16 = vpx_highbd_10_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance32x16 = vpx_highbd_10_variance32x16_sse2;
-  vpx_highbd_10_variance32x32 = vpx_highbd_10_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance32x32 = vpx_highbd_10_variance32x32_sse2;
-  vpx_highbd_10_variance32x64 = vpx_highbd_10_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance32x64 = vpx_highbd_10_variance32x64_sse2;
-  vpx_highbd_10_variance64x32 = vpx_highbd_10_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance64x32 = vpx_highbd_10_variance64x32_sse2;
-  vpx_highbd_10_variance64x64 = vpx_highbd_10_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance64x64 = vpx_highbd_10_variance64x64_sse2;
-  vpx_highbd_10_variance8x16 = vpx_highbd_10_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance8x16 = vpx_highbd_10_variance8x16_sse2;
-  vpx_highbd_10_variance8x8 = vpx_highbd_10_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance8x8 = vpx_highbd_10_variance8x8_sse2;
-  vpx_highbd_12_mse16x16 = vpx_highbd_12_mse16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_mse16x16 = vpx_highbd_12_mse16x16_sse2;
-  vpx_highbd_12_mse8x8 = vpx_highbd_12_mse8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_mse8x8 = vpx_highbd_12_mse8x8_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance16x16 =
-      vpx_highbd_12_sub_pixel_avg_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance16x16 =
-        vpx_highbd_12_sub_pixel_avg_variance16x16_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance16x32 =
-      vpx_highbd_12_sub_pixel_avg_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance16x32 =
-        vpx_highbd_12_sub_pixel_avg_variance16x32_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance16x8 =
-      vpx_highbd_12_sub_pixel_avg_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance16x8 =
-        vpx_highbd_12_sub_pixel_avg_variance16x8_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance32x16 =
-      vpx_highbd_12_sub_pixel_avg_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance32x16 =
-        vpx_highbd_12_sub_pixel_avg_variance32x16_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance32x32 =
-      vpx_highbd_12_sub_pixel_avg_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance32x32 =
-        vpx_highbd_12_sub_pixel_avg_variance32x32_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance32x64 =
-      vpx_highbd_12_sub_pixel_avg_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance32x64 =
-        vpx_highbd_12_sub_pixel_avg_variance32x64_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance64x32 =
-      vpx_highbd_12_sub_pixel_avg_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance64x32 =
-        vpx_highbd_12_sub_pixel_avg_variance64x32_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance64x64 =
-      vpx_highbd_12_sub_pixel_avg_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance64x64 =
-        vpx_highbd_12_sub_pixel_avg_variance64x64_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance8x16 =
-      vpx_highbd_12_sub_pixel_avg_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance8x16 =
-        vpx_highbd_12_sub_pixel_avg_variance8x16_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance8x4 =
-      vpx_highbd_12_sub_pixel_avg_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance8x4 =
-        vpx_highbd_12_sub_pixel_avg_variance8x4_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance8x8 =
-      vpx_highbd_12_sub_pixel_avg_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance8x8 =
-        vpx_highbd_12_sub_pixel_avg_variance8x8_sse2;
-  vpx_highbd_12_sub_pixel_variance16x16 =
-      vpx_highbd_12_sub_pixel_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance16x16 =
-        vpx_highbd_12_sub_pixel_variance16x16_sse2;
-  vpx_highbd_12_sub_pixel_variance16x32 =
-      vpx_highbd_12_sub_pixel_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance16x32 =
-        vpx_highbd_12_sub_pixel_variance16x32_sse2;
-  vpx_highbd_12_sub_pixel_variance16x8 = vpx_highbd_12_sub_pixel_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance16x8 =
-        vpx_highbd_12_sub_pixel_variance16x8_sse2;
-  vpx_highbd_12_sub_pixel_variance32x16 =
-      vpx_highbd_12_sub_pixel_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance32x16 =
-        vpx_highbd_12_sub_pixel_variance32x16_sse2;
-  vpx_highbd_12_sub_pixel_variance32x32 =
-      vpx_highbd_12_sub_pixel_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance32x32 =
-        vpx_highbd_12_sub_pixel_variance32x32_sse2;
-  vpx_highbd_12_sub_pixel_variance32x64 =
-      vpx_highbd_12_sub_pixel_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance32x64 =
-        vpx_highbd_12_sub_pixel_variance32x64_sse2;
-  vpx_highbd_12_sub_pixel_variance64x32 =
-      vpx_highbd_12_sub_pixel_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance64x32 =
-        vpx_highbd_12_sub_pixel_variance64x32_sse2;
-  vpx_highbd_12_sub_pixel_variance64x64 =
-      vpx_highbd_12_sub_pixel_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance64x64 =
-        vpx_highbd_12_sub_pixel_variance64x64_sse2;
-  vpx_highbd_12_sub_pixel_variance8x16 = vpx_highbd_12_sub_pixel_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance8x16 =
-        vpx_highbd_12_sub_pixel_variance8x16_sse2;
-  vpx_highbd_12_sub_pixel_variance8x4 = vpx_highbd_12_sub_pixel_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance8x4 =
-        vpx_highbd_12_sub_pixel_variance8x4_sse2;
-  vpx_highbd_12_sub_pixel_variance8x8 = vpx_highbd_12_sub_pixel_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance8x8 =
-        vpx_highbd_12_sub_pixel_variance8x8_sse2;
-  vpx_highbd_12_variance16x16 = vpx_highbd_12_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance16x16 = vpx_highbd_12_variance16x16_sse2;
-  vpx_highbd_12_variance16x32 = vpx_highbd_12_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance16x32 = vpx_highbd_12_variance16x32_sse2;
-  vpx_highbd_12_variance16x8 = vpx_highbd_12_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance16x8 = vpx_highbd_12_variance16x8_sse2;
-  vpx_highbd_12_variance32x16 = vpx_highbd_12_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance32x16 = vpx_highbd_12_variance32x16_sse2;
-  vpx_highbd_12_variance32x32 = vpx_highbd_12_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance32x32 = vpx_highbd_12_variance32x32_sse2;
-  vpx_highbd_12_variance32x64 = vpx_highbd_12_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance32x64 = vpx_highbd_12_variance32x64_sse2;
-  vpx_highbd_12_variance64x32 = vpx_highbd_12_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance64x32 = vpx_highbd_12_variance64x32_sse2;
-  vpx_highbd_12_variance64x64 = vpx_highbd_12_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance64x64 = vpx_highbd_12_variance64x64_sse2;
-  vpx_highbd_12_variance8x16 = vpx_highbd_12_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance8x16 = vpx_highbd_12_variance8x16_sse2;
-  vpx_highbd_12_variance8x8 = vpx_highbd_12_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance8x8 = vpx_highbd_12_variance8x8_sse2;
-  vpx_highbd_8_mse16x16 = vpx_highbd_8_mse16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_mse16x16 = vpx_highbd_8_mse16x16_sse2;
-  vpx_highbd_8_mse8x8 = vpx_highbd_8_mse8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_mse8x8 = vpx_highbd_8_mse8x8_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance16x16 =
-      vpx_highbd_8_sub_pixel_avg_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance16x16 =
-        vpx_highbd_8_sub_pixel_avg_variance16x16_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance16x32 =
-      vpx_highbd_8_sub_pixel_avg_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance16x32 =
-        vpx_highbd_8_sub_pixel_avg_variance16x32_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance16x8 =
-      vpx_highbd_8_sub_pixel_avg_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance16x8 =
-        vpx_highbd_8_sub_pixel_avg_variance16x8_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance32x16 =
-      vpx_highbd_8_sub_pixel_avg_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance32x16 =
-        vpx_highbd_8_sub_pixel_avg_variance32x16_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance32x32 =
-      vpx_highbd_8_sub_pixel_avg_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance32x32 =
-        vpx_highbd_8_sub_pixel_avg_variance32x32_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance32x64 =
-      vpx_highbd_8_sub_pixel_avg_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance32x64 =
-        vpx_highbd_8_sub_pixel_avg_variance32x64_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance64x32 =
-      vpx_highbd_8_sub_pixel_avg_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance64x32 =
-        vpx_highbd_8_sub_pixel_avg_variance64x32_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance64x64 =
-      vpx_highbd_8_sub_pixel_avg_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance64x64 =
-        vpx_highbd_8_sub_pixel_avg_variance64x64_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance8x16 =
-      vpx_highbd_8_sub_pixel_avg_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance8x16 =
-        vpx_highbd_8_sub_pixel_avg_variance8x16_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance8x4 =
-      vpx_highbd_8_sub_pixel_avg_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance8x4 =
-        vpx_highbd_8_sub_pixel_avg_variance8x4_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance8x8 =
-      vpx_highbd_8_sub_pixel_avg_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance8x8 =
-        vpx_highbd_8_sub_pixel_avg_variance8x8_sse2;
-  vpx_highbd_8_sub_pixel_variance16x16 = vpx_highbd_8_sub_pixel_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance16x16 =
-        vpx_highbd_8_sub_pixel_variance16x16_sse2;
-  vpx_highbd_8_sub_pixel_variance16x32 = vpx_highbd_8_sub_pixel_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance16x32 =
-        vpx_highbd_8_sub_pixel_variance16x32_sse2;
-  vpx_highbd_8_sub_pixel_variance16x8 = vpx_highbd_8_sub_pixel_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance16x8 =
-        vpx_highbd_8_sub_pixel_variance16x8_sse2;
-  vpx_highbd_8_sub_pixel_variance32x16 = vpx_highbd_8_sub_pixel_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance32x16 =
-        vpx_highbd_8_sub_pixel_variance32x16_sse2;
-  vpx_highbd_8_sub_pixel_variance32x32 = vpx_highbd_8_sub_pixel_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance32x32 =
-        vpx_highbd_8_sub_pixel_variance32x32_sse2;
-  vpx_highbd_8_sub_pixel_variance32x64 = vpx_highbd_8_sub_pixel_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance32x64 =
-        vpx_highbd_8_sub_pixel_variance32x64_sse2;
-  vpx_highbd_8_sub_pixel_variance64x32 = vpx_highbd_8_sub_pixel_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance64x32 =
-        vpx_highbd_8_sub_pixel_variance64x32_sse2;
-  vpx_highbd_8_sub_pixel_variance64x64 = vpx_highbd_8_sub_pixel_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance64x64 =
-        vpx_highbd_8_sub_pixel_variance64x64_sse2;
-  vpx_highbd_8_sub_pixel_variance8x16 = vpx_highbd_8_sub_pixel_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance8x16 =
-        vpx_highbd_8_sub_pixel_variance8x16_sse2;
-  vpx_highbd_8_sub_pixel_variance8x4 = vpx_highbd_8_sub_pixel_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance8x4 =
-        vpx_highbd_8_sub_pixel_variance8x4_sse2;
-  vpx_highbd_8_sub_pixel_variance8x8 = vpx_highbd_8_sub_pixel_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance8x8 =
-        vpx_highbd_8_sub_pixel_variance8x8_sse2;
-  vpx_highbd_8_variance16x16 = vpx_highbd_8_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance16x16 = vpx_highbd_8_variance16x16_sse2;
-  vpx_highbd_8_variance16x32 = vpx_highbd_8_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance16x32 = vpx_highbd_8_variance16x32_sse2;
-  vpx_highbd_8_variance16x8 = vpx_highbd_8_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance16x8 = vpx_highbd_8_variance16x8_sse2;
-  vpx_highbd_8_variance32x16 = vpx_highbd_8_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance32x16 = vpx_highbd_8_variance32x16_sse2;
-  vpx_highbd_8_variance32x32 = vpx_highbd_8_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance32x32 = vpx_highbd_8_variance32x32_sse2;
-  vpx_highbd_8_variance32x64 = vpx_highbd_8_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance32x64 = vpx_highbd_8_variance32x64_sse2;
-  vpx_highbd_8_variance64x32 = vpx_highbd_8_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance64x32 = vpx_highbd_8_variance64x32_sse2;
-  vpx_highbd_8_variance64x64 = vpx_highbd_8_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance64x64 = vpx_highbd_8_variance64x64_sse2;
-  vpx_highbd_8_variance8x16 = vpx_highbd_8_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance8x16 = vpx_highbd_8_variance8x16_sse2;
-  vpx_highbd_8_variance8x8 = vpx_highbd_8_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance8x8 = vpx_highbd_8_variance8x8_sse2;
-  vpx_highbd_avg_4x4 = vpx_highbd_avg_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_avg_4x4 = vpx_highbd_avg_4x4_sse2;
-  vpx_highbd_avg_8x8 = vpx_highbd_avg_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_avg_8x8 = vpx_highbd_avg_8x8_sse2;
   vpx_highbd_convolve8 = vpx_highbd_convolve8_c;
   if (flags & HAS_AVX2)
     vpx_highbd_convolve8 = vpx_highbd_convolve8_avx2;
@@ -9505,14 +7695,10 @@
   vpx_highbd_convolve8_vert = vpx_highbd_convolve8_vert_c;
   if (flags & HAS_AVX2)
     vpx_highbd_convolve8_vert = vpx_highbd_convolve8_vert_avx2;
-  vpx_highbd_convolve_avg = vpx_highbd_convolve_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_convolve_avg = vpx_highbd_convolve_avg_sse2;
+  vpx_highbd_convolve_avg = vpx_highbd_convolve_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_highbd_convolve_avg = vpx_highbd_convolve_avg_avx2;
-  vpx_highbd_convolve_copy = vpx_highbd_convolve_copy_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_convolve_copy = vpx_highbd_convolve_copy_sse2;
+  vpx_highbd_convolve_copy = vpx_highbd_convolve_copy_sse2;
   if (flags & HAS_AVX2)
     vpx_highbd_convolve_copy = vpx_highbd_convolve_copy_avx2;
   vpx_highbd_d117_predictor_16x16 = vpx_highbd_d117_predictor_16x16_c;
@@ -9521,9 +7707,6 @@
   vpx_highbd_d117_predictor_32x32 = vpx_highbd_d117_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d117_predictor_32x32 = vpx_highbd_d117_predictor_32x32_ssse3;
-  vpx_highbd_d117_predictor_4x4 = vpx_highbd_d117_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_d117_predictor_4x4 = vpx_highbd_d117_predictor_4x4_sse2;
   vpx_highbd_d117_predictor_8x8 = vpx_highbd_d117_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d117_predictor_8x8 = vpx_highbd_d117_predictor_8x8_ssse3;
@@ -9533,9 +7716,6 @@
   vpx_highbd_d135_predictor_32x32 = vpx_highbd_d135_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d135_predictor_32x32 = vpx_highbd_d135_predictor_32x32_ssse3;
-  vpx_highbd_d135_predictor_4x4 = vpx_highbd_d135_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_d135_predictor_4x4 = vpx_highbd_d135_predictor_4x4_sse2;
   vpx_highbd_d135_predictor_8x8 = vpx_highbd_d135_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d135_predictor_8x8 = vpx_highbd_d135_predictor_8x8_ssse3;
@@ -9545,9 +7725,6 @@
   vpx_highbd_d153_predictor_32x32 = vpx_highbd_d153_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d153_predictor_32x32 = vpx_highbd_d153_predictor_32x32_ssse3;
-  vpx_highbd_d153_predictor_4x4 = vpx_highbd_d153_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_d153_predictor_4x4 = vpx_highbd_d153_predictor_4x4_sse2;
   vpx_highbd_d153_predictor_8x8 = vpx_highbd_d153_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d153_predictor_8x8 = vpx_highbd_d153_predictor_8x8_ssse3;
@@ -9557,9 +7734,6 @@
   vpx_highbd_d207_predictor_32x32 = vpx_highbd_d207_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d207_predictor_32x32 = vpx_highbd_d207_predictor_32x32_ssse3;
-  vpx_highbd_d207_predictor_4x4 = vpx_highbd_d207_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_d207_predictor_4x4 = vpx_highbd_d207_predictor_4x4_sse2;
   vpx_highbd_d207_predictor_8x8 = vpx_highbd_d207_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d207_predictor_8x8 = vpx_highbd_d207_predictor_8x8_ssse3;
@@ -9581,446 +7755,70 @@
   vpx_highbd_d63_predictor_32x32 = vpx_highbd_d63_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d63_predictor_32x32 = vpx_highbd_d63_predictor_32x32_ssse3;
-  vpx_highbd_d63_predictor_4x4 = vpx_highbd_d63_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_d63_predictor_4x4 = vpx_highbd_d63_predictor_4x4_sse2;
   vpx_highbd_d63_predictor_8x8 = vpx_highbd_d63_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d63_predictor_8x8 = vpx_highbd_d63_predictor_8x8_ssse3;
-  vpx_highbd_dc_128_predictor_16x16 = vpx_highbd_dc_128_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_128_predictor_16x16 = vpx_highbd_dc_128_predictor_16x16_sse2;
-  vpx_highbd_dc_128_predictor_32x32 = vpx_highbd_dc_128_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_128_predictor_32x32 = vpx_highbd_dc_128_predictor_32x32_sse2;
-  vpx_highbd_dc_128_predictor_4x4 = vpx_highbd_dc_128_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_128_predictor_4x4 = vpx_highbd_dc_128_predictor_4x4_sse2;
-  vpx_highbd_dc_128_predictor_8x8 = vpx_highbd_dc_128_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_128_predictor_8x8 = vpx_highbd_dc_128_predictor_8x8_sse2;
-  vpx_highbd_dc_left_predictor_16x16 = vpx_highbd_dc_left_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_left_predictor_16x16 =
-        vpx_highbd_dc_left_predictor_16x16_sse2;
-  vpx_highbd_dc_left_predictor_32x32 = vpx_highbd_dc_left_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_left_predictor_32x32 =
-        vpx_highbd_dc_left_predictor_32x32_sse2;
-  vpx_highbd_dc_left_predictor_4x4 = vpx_highbd_dc_left_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_left_predictor_4x4 = vpx_highbd_dc_left_predictor_4x4_sse2;
-  vpx_highbd_dc_left_predictor_8x8 = vpx_highbd_dc_left_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_left_predictor_8x8 = vpx_highbd_dc_left_predictor_8x8_sse2;
-  vpx_highbd_dc_predictor_16x16 = vpx_highbd_dc_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_predictor_16x16 = vpx_highbd_dc_predictor_16x16_sse2;
-  vpx_highbd_dc_predictor_32x32 = vpx_highbd_dc_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_predictor_32x32 = vpx_highbd_dc_predictor_32x32_sse2;
-  vpx_highbd_dc_predictor_4x4 = vpx_highbd_dc_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_predictor_4x4 = vpx_highbd_dc_predictor_4x4_sse2;
-  vpx_highbd_dc_predictor_8x8 = vpx_highbd_dc_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_predictor_8x8 = vpx_highbd_dc_predictor_8x8_sse2;
-  vpx_highbd_dc_top_predictor_16x16 = vpx_highbd_dc_top_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_top_predictor_16x16 = vpx_highbd_dc_top_predictor_16x16_sse2;
-  vpx_highbd_dc_top_predictor_32x32 = vpx_highbd_dc_top_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_top_predictor_32x32 = vpx_highbd_dc_top_predictor_32x32_sse2;
-  vpx_highbd_dc_top_predictor_4x4 = vpx_highbd_dc_top_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_top_predictor_4x4 = vpx_highbd_dc_top_predictor_4x4_sse2;
-  vpx_highbd_dc_top_predictor_8x8 = vpx_highbd_dc_top_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_top_predictor_8x8 = vpx_highbd_dc_top_predictor_8x8_sse2;
-  vpx_highbd_fdct16x16 = vpx_highbd_fdct16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_fdct16x16 = vpx_highbd_fdct16x16_sse2;
-  vpx_highbd_fdct32x32 = vpx_highbd_fdct32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_fdct32x32 = vpx_highbd_fdct32x32_sse2;
-  vpx_highbd_fdct32x32_rd = vpx_highbd_fdct32x32_rd_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_fdct32x32_rd = vpx_highbd_fdct32x32_rd_sse2;
-  vpx_highbd_fdct4x4 = vpx_highbd_fdct4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_fdct4x4 = vpx_highbd_fdct4x4_sse2;
-  vpx_highbd_fdct8x8 = vpx_highbd_fdct8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_fdct8x8 = vpx_highbd_fdct8x8_sse2;
-  vpx_highbd_h_predictor_16x16 = vpx_highbd_h_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_h_predictor_16x16 = vpx_highbd_h_predictor_16x16_sse2;
-  vpx_highbd_h_predictor_32x32 = vpx_highbd_h_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_h_predictor_32x32 = vpx_highbd_h_predictor_32x32_sse2;
-  vpx_highbd_h_predictor_4x4 = vpx_highbd_h_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_h_predictor_4x4 = vpx_highbd_h_predictor_4x4_sse2;
-  vpx_highbd_h_predictor_8x8 = vpx_highbd_h_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_h_predictor_8x8 = vpx_highbd_h_predictor_8x8_sse2;
-  vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_sse2;
+  vpx_highbd_hadamard_16x16 = vpx_highbd_hadamard_16x16_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_hadamard_16x16 = vpx_highbd_hadamard_16x16_avx2;
+  vpx_highbd_hadamard_32x32 = vpx_highbd_hadamard_32x32_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_hadamard_32x32 = vpx_highbd_hadamard_32x32_avx2;
+  vpx_highbd_hadamard_8x8 = vpx_highbd_hadamard_8x8_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_hadamard_8x8 = vpx_highbd_hadamard_8x8_avx2;
+  vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_sse4_1;
-  vpx_highbd_idct16x16_1_add = vpx_highbd_idct16x16_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct16x16_1_add = vpx_highbd_idct16x16_1_add_sse2;
-  vpx_highbd_idct16x16_256_add = vpx_highbd_idct16x16_256_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct16x16_256_add = vpx_highbd_idct16x16_256_add_sse2;
+  vpx_highbd_idct16x16_256_add = vpx_highbd_idct16x16_256_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct16x16_256_add = vpx_highbd_idct16x16_256_add_sse4_1;
-  vpx_highbd_idct16x16_38_add = vpx_highbd_idct16x16_38_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct16x16_38_add = vpx_highbd_idct16x16_38_add_sse2;
+  vpx_highbd_idct16x16_38_add = vpx_highbd_idct16x16_38_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct16x16_38_add = vpx_highbd_idct16x16_38_add_sse4_1;
-  vpx_highbd_idct32x32_1024_add = vpx_highbd_idct32x32_1024_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct32x32_1024_add = vpx_highbd_idct32x32_1024_add_sse2;
+  vpx_highbd_idct32x32_1024_add = vpx_highbd_idct32x32_1024_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct32x32_1024_add = vpx_highbd_idct32x32_1024_add_sse4_1;
-  vpx_highbd_idct32x32_135_add = vpx_highbd_idct32x32_135_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct32x32_135_add = vpx_highbd_idct32x32_135_add_sse2;
+  vpx_highbd_idct32x32_135_add = vpx_highbd_idct32x32_135_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct32x32_135_add = vpx_highbd_idct32x32_135_add_sse4_1;
-  vpx_highbd_idct32x32_1_add = vpx_highbd_idct32x32_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct32x32_1_add = vpx_highbd_idct32x32_1_add_sse2;
-  vpx_highbd_idct32x32_34_add = vpx_highbd_idct32x32_34_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct32x32_34_add = vpx_highbd_idct32x32_34_add_sse2;
+  vpx_highbd_idct32x32_34_add = vpx_highbd_idct32x32_34_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct32x32_34_add = vpx_highbd_idct32x32_34_add_sse4_1;
-  vpx_highbd_idct4x4_16_add = vpx_highbd_idct4x4_16_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct4x4_16_add = vpx_highbd_idct4x4_16_add_sse2;
+  vpx_highbd_idct4x4_16_add = vpx_highbd_idct4x4_16_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct4x4_16_add = vpx_highbd_idct4x4_16_add_sse4_1;
-  vpx_highbd_idct4x4_1_add = vpx_highbd_idct4x4_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct4x4_1_add = vpx_highbd_idct4x4_1_add_sse2;
-  vpx_highbd_idct8x8_12_add = vpx_highbd_idct8x8_12_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct8x8_12_add = vpx_highbd_idct8x8_12_add_sse2;
+  vpx_highbd_idct8x8_12_add = vpx_highbd_idct8x8_12_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct8x8_12_add = vpx_highbd_idct8x8_12_add_sse4_1;
-  vpx_highbd_idct8x8_1_add = vpx_highbd_idct8x8_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct8x8_1_add = vpx_highbd_idct8x8_1_add_sse2;
-  vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_sse2;
+  vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_sse4_1;
-  vpx_highbd_lpf_horizontal_16 = vpx_highbd_lpf_horizontal_16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_16 = vpx_highbd_lpf_horizontal_16_sse2;
-  vpx_highbd_lpf_horizontal_16_dual = vpx_highbd_lpf_horizontal_16_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_16_dual = vpx_highbd_lpf_horizontal_16_dual_sse2;
-  vpx_highbd_lpf_horizontal_4 = vpx_highbd_lpf_horizontal_4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_4 = vpx_highbd_lpf_horizontal_4_sse2;
-  vpx_highbd_lpf_horizontal_4_dual = vpx_highbd_lpf_horizontal_4_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_4_dual = vpx_highbd_lpf_horizontal_4_dual_sse2;
-  vpx_highbd_lpf_horizontal_8 = vpx_highbd_lpf_horizontal_8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_8 = vpx_highbd_lpf_horizontal_8_sse2;
-  vpx_highbd_lpf_horizontal_8_dual = vpx_highbd_lpf_horizontal_8_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_8_dual = vpx_highbd_lpf_horizontal_8_dual_sse2;
-  vpx_highbd_lpf_vertical_16 = vpx_highbd_lpf_vertical_16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_16 = vpx_highbd_lpf_vertical_16_sse2;
-  vpx_highbd_lpf_vertical_16_dual = vpx_highbd_lpf_vertical_16_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_16_dual = vpx_highbd_lpf_vertical_16_dual_sse2;
-  vpx_highbd_lpf_vertical_4 = vpx_highbd_lpf_vertical_4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_4 = vpx_highbd_lpf_vertical_4_sse2;
-  vpx_highbd_lpf_vertical_4_dual = vpx_highbd_lpf_vertical_4_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_4_dual = vpx_highbd_lpf_vertical_4_dual_sse2;
-  vpx_highbd_lpf_vertical_8 = vpx_highbd_lpf_vertical_8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_8 = vpx_highbd_lpf_vertical_8_sse2;
-  vpx_highbd_lpf_vertical_8_dual = vpx_highbd_lpf_vertical_8_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_8_dual = vpx_highbd_lpf_vertical_8_dual_sse2;
-  vpx_highbd_quantize_b = vpx_highbd_quantize_b_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_quantize_b = vpx_highbd_quantize_b_sse2;
-  vpx_highbd_quantize_b_32x32 = vpx_highbd_quantize_b_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_quantize_b_32x32 = vpx_highbd_quantize_b_32x32_sse2;
-  vpx_highbd_sad16x16 = vpx_highbd_sad16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x16 = vpx_highbd_sad16x16_sse2;
-  vpx_highbd_sad16x16_avg = vpx_highbd_sad16x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x16_avg = vpx_highbd_sad16x16_avg_sse2;
-  vpx_highbd_sad16x16x4d = vpx_highbd_sad16x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x16x4d = vpx_highbd_sad16x16x4d_sse2;
-  vpx_highbd_sad16x32 = vpx_highbd_sad16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x32 = vpx_highbd_sad16x32_sse2;
-  vpx_highbd_sad16x32_avg = vpx_highbd_sad16x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x32_avg = vpx_highbd_sad16x32_avg_sse2;
-  vpx_highbd_sad16x32x4d = vpx_highbd_sad16x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x32x4d = vpx_highbd_sad16x32x4d_sse2;
-  vpx_highbd_sad16x8 = vpx_highbd_sad16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x8 = vpx_highbd_sad16x8_sse2;
-  vpx_highbd_sad16x8_avg = vpx_highbd_sad16x8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x8_avg = vpx_highbd_sad16x8_avg_sse2;
-  vpx_highbd_sad16x8x4d = vpx_highbd_sad16x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x8x4d = vpx_highbd_sad16x8x4d_sse2;
-  vpx_highbd_sad32x16 = vpx_highbd_sad32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x16 = vpx_highbd_sad32x16_sse2;
-  vpx_highbd_sad32x16_avg = vpx_highbd_sad32x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x16_avg = vpx_highbd_sad32x16_avg_sse2;
-  vpx_highbd_sad32x16x4d = vpx_highbd_sad32x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x16x4d = vpx_highbd_sad32x16x4d_sse2;
-  vpx_highbd_sad32x32 = vpx_highbd_sad32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x32 = vpx_highbd_sad32x32_sse2;
-  vpx_highbd_sad32x32_avg = vpx_highbd_sad32x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x32_avg = vpx_highbd_sad32x32_avg_sse2;
-  vpx_highbd_sad32x32x4d = vpx_highbd_sad32x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x32x4d = vpx_highbd_sad32x32x4d_sse2;
-  vpx_highbd_sad32x64 = vpx_highbd_sad32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x64 = vpx_highbd_sad32x64_sse2;
-  vpx_highbd_sad32x64_avg = vpx_highbd_sad32x64_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x64_avg = vpx_highbd_sad32x64_avg_sse2;
-  vpx_highbd_sad32x64x4d = vpx_highbd_sad32x64x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x64x4d = vpx_highbd_sad32x64x4d_sse2;
-  vpx_highbd_sad4x4x4d = vpx_highbd_sad4x4x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad4x4x4d = vpx_highbd_sad4x4x4d_sse2;
-  vpx_highbd_sad4x8x4d = vpx_highbd_sad4x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad4x8x4d = vpx_highbd_sad4x8x4d_sse2;
-  vpx_highbd_sad64x32 = vpx_highbd_sad64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x32 = vpx_highbd_sad64x32_sse2;
-  vpx_highbd_sad64x32_avg = vpx_highbd_sad64x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x32_avg = vpx_highbd_sad64x32_avg_sse2;
-  vpx_highbd_sad64x32x4d = vpx_highbd_sad64x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x32x4d = vpx_highbd_sad64x32x4d_sse2;
-  vpx_highbd_sad64x64 = vpx_highbd_sad64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x64 = vpx_highbd_sad64x64_sse2;
-  vpx_highbd_sad64x64_avg = vpx_highbd_sad64x64_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x64_avg = vpx_highbd_sad64x64_avg_sse2;
-  vpx_highbd_sad64x64x4d = vpx_highbd_sad64x64x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x64x4d = vpx_highbd_sad64x64x4d_sse2;
-  vpx_highbd_sad8x16 = vpx_highbd_sad8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x16 = vpx_highbd_sad8x16_sse2;
-  vpx_highbd_sad8x16_avg = vpx_highbd_sad8x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x16_avg = vpx_highbd_sad8x16_avg_sse2;
-  vpx_highbd_sad8x16x4d = vpx_highbd_sad8x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x16x4d = vpx_highbd_sad8x16x4d_sse2;
-  vpx_highbd_sad8x4 = vpx_highbd_sad8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x4 = vpx_highbd_sad8x4_sse2;
-  vpx_highbd_sad8x4_avg = vpx_highbd_sad8x4_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x4_avg = vpx_highbd_sad8x4_avg_sse2;
-  vpx_highbd_sad8x4x4d = vpx_highbd_sad8x4x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x4x4d = vpx_highbd_sad8x4x4d_sse2;
-  vpx_highbd_sad8x8 = vpx_highbd_sad8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x8 = vpx_highbd_sad8x8_sse2;
-  vpx_highbd_sad8x8_avg = vpx_highbd_sad8x8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x8_avg = vpx_highbd_sad8x8_avg_sse2;
-  vpx_highbd_sad8x8x4d = vpx_highbd_sad8x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x8x4d = vpx_highbd_sad8x8x4d_sse2;
-  vpx_highbd_tm_predictor_16x16 = vpx_highbd_tm_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_tm_predictor_16x16 = vpx_highbd_tm_predictor_16x16_sse2;
-  vpx_highbd_tm_predictor_32x32 = vpx_highbd_tm_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_tm_predictor_32x32 = vpx_highbd_tm_predictor_32x32_sse2;
-  vpx_highbd_tm_predictor_4x4 = vpx_highbd_tm_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_tm_predictor_4x4 = vpx_highbd_tm_predictor_4x4_sse2;
-  vpx_highbd_tm_predictor_8x8 = vpx_highbd_tm_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_tm_predictor_8x8 = vpx_highbd_tm_predictor_8x8_sse2;
-  vpx_highbd_v_predictor_16x16 = vpx_highbd_v_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_v_predictor_16x16 = vpx_highbd_v_predictor_16x16_sse2;
-  vpx_highbd_v_predictor_32x32 = vpx_highbd_v_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_v_predictor_32x32 = vpx_highbd_v_predictor_32x32_sse2;
-  vpx_highbd_v_predictor_4x4 = vpx_highbd_v_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_v_predictor_4x4 = vpx_highbd_v_predictor_4x4_sse2;
-  vpx_highbd_v_predictor_8x8 = vpx_highbd_v_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_v_predictor_8x8 = vpx_highbd_v_predictor_8x8_sse2;
-  vpx_idct16x16_10_add = vpx_idct16x16_10_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct16x16_10_add = vpx_idct16x16_10_add_sse2;
-  vpx_idct16x16_1_add = vpx_idct16x16_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct16x16_1_add = vpx_idct16x16_1_add_sse2;
-  vpx_idct16x16_256_add = vpx_idct16x16_256_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct16x16_256_add = vpx_idct16x16_256_add_sse2;
-  vpx_idct16x16_38_add = vpx_idct16x16_38_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct16x16_38_add = vpx_idct16x16_38_add_sse2;
-  vpx_idct32x32_1024_add = vpx_idct32x32_1024_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct32x32_1024_add = vpx_idct32x32_1024_add_sse2;
-  vpx_idct32x32_135_add = vpx_idct32x32_135_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct32x32_135_add = vpx_idct32x32_135_add_sse2;
+  vpx_highbd_satd = vpx_highbd_satd_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_satd = vpx_highbd_satd_avx2;
+  vpx_idct32x32_135_add = vpx_idct32x32_135_add_sse2;
   if (flags & HAS_SSSE3)
     vpx_idct32x32_135_add = vpx_idct32x32_135_add_ssse3;
-  vpx_idct32x32_1_add = vpx_idct32x32_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct32x32_1_add = vpx_idct32x32_1_add_sse2;
-  vpx_idct32x32_34_add = vpx_idct32x32_34_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct32x32_34_add = vpx_idct32x32_34_add_sse2;
+  vpx_idct32x32_34_add = vpx_idct32x32_34_add_sse2;
   if (flags & HAS_SSSE3)
     vpx_idct32x32_34_add = vpx_idct32x32_34_add_ssse3;
-  vpx_idct4x4_16_add = vpx_idct4x4_16_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct4x4_16_add = vpx_idct4x4_16_add_sse2;
-  vpx_idct4x4_1_add = vpx_idct4x4_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct4x4_1_add = vpx_idct4x4_1_add_sse2;
-  vpx_idct8x8_12_add = vpx_idct8x8_12_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct8x8_12_add = vpx_idct8x8_12_add_sse2;
+  vpx_idct8x8_12_add = vpx_idct8x8_12_add_sse2;
   if (flags & HAS_SSSE3)
     vpx_idct8x8_12_add = vpx_idct8x8_12_add_ssse3;
-  vpx_idct8x8_1_add = vpx_idct8x8_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct8x8_1_add = vpx_idct8x8_1_add_sse2;
-  vpx_idct8x8_64_add = vpx_idct8x8_64_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct8x8_64_add = vpx_idct8x8_64_add_sse2;
-  vpx_int_pro_col = vpx_int_pro_col_c;
-  if (flags & HAS_SSE2)
-    vpx_int_pro_col = vpx_int_pro_col_sse2;
-  vpx_int_pro_row = vpx_int_pro_row_c;
-  if (flags & HAS_SSE2)
-    vpx_int_pro_row = vpx_int_pro_row_sse2;
-  vpx_iwht4x4_16_add = vpx_iwht4x4_16_add_c;
-  if (flags & HAS_SSE2)
-    vpx_iwht4x4_16_add = vpx_iwht4x4_16_add_sse2;
-  vpx_lpf_horizontal_16 = vpx_lpf_horizontal_16_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_16 = vpx_lpf_horizontal_16_sse2;
+  vpx_lpf_horizontal_16 = vpx_lpf_horizontal_16_sse2;
   if (flags & HAS_AVX2)
     vpx_lpf_horizontal_16 = vpx_lpf_horizontal_16_avx2;
-  vpx_lpf_horizontal_16_dual = vpx_lpf_horizontal_16_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_16_dual = vpx_lpf_horizontal_16_dual_sse2;
+  vpx_lpf_horizontal_16_dual = vpx_lpf_horizontal_16_dual_sse2;
   if (flags & HAS_AVX2)
     vpx_lpf_horizontal_16_dual = vpx_lpf_horizontal_16_dual_avx2;
-  vpx_lpf_horizontal_4 = vpx_lpf_horizontal_4_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_4 = vpx_lpf_horizontal_4_sse2;
-  vpx_lpf_horizontal_4_dual = vpx_lpf_horizontal_4_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_4_dual = vpx_lpf_horizontal_4_dual_sse2;
-  vpx_lpf_horizontal_8 = vpx_lpf_horizontal_8_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_8 = vpx_lpf_horizontal_8_sse2;
-  vpx_lpf_horizontal_8_dual = vpx_lpf_horizontal_8_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_8_dual = vpx_lpf_horizontal_8_dual_sse2;
-  vpx_lpf_vertical_16 = vpx_lpf_vertical_16_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_16 = vpx_lpf_vertical_16_sse2;
-  vpx_lpf_vertical_16_dual = vpx_lpf_vertical_16_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_16_dual = vpx_lpf_vertical_16_dual_sse2;
-  vpx_lpf_vertical_4 = vpx_lpf_vertical_4_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_4 = vpx_lpf_vertical_4_sse2;
-  vpx_lpf_vertical_4_dual = vpx_lpf_vertical_4_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_4_dual = vpx_lpf_vertical_4_dual_sse2;
-  vpx_lpf_vertical_8 = vpx_lpf_vertical_8_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_8 = vpx_lpf_vertical_8_sse2;
-  vpx_lpf_vertical_8_dual = vpx_lpf_vertical_8_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_8_dual = vpx_lpf_vertical_8_dual_sse2;
-  vpx_mbpost_proc_across_ip = vpx_mbpost_proc_across_ip_c;
-  if (flags & HAS_SSE2)
-    vpx_mbpost_proc_across_ip = vpx_mbpost_proc_across_ip_sse2;
-  vpx_mbpost_proc_down = vpx_mbpost_proc_down_c;
-  if (flags & HAS_SSE2)
-    vpx_mbpost_proc_down = vpx_mbpost_proc_down_sse2;
-  vpx_minmax_8x8 = vpx_minmax_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_minmax_8x8 = vpx_minmax_8x8_sse2;
-  vpx_mse16x16 = vpx_mse16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_mse16x16 = vpx_mse16x16_sse2;
+  vpx_mse16x16 = vpx_mse16x16_sse2;
   if (flags & HAS_AVX2)
     vpx_mse16x16 = vpx_mse16x16_avx2;
-  vpx_mse16x8 = vpx_mse16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_mse16x8 = vpx_mse16x8_sse2;
+  vpx_mse16x8 = vpx_mse16x8_sse2;
   if (flags & HAS_AVX2)
     vpx_mse16x8 = vpx_mse16x8_avx2;
-  vpx_mse8x16 = vpx_mse8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_mse8x16 = vpx_mse8x16_sse2;
-  vpx_mse8x8 = vpx_mse8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_mse8x8 = vpx_mse8x8_sse2;
-  vpx_plane_add_noise = vpx_plane_add_noise_c;
-  if (flags & HAS_SSE2)
-    vpx_plane_add_noise = vpx_plane_add_noise_sse2;
-  vpx_post_proc_down_and_across_mb_row = vpx_post_proc_down_and_across_mb_row_c;
-  if (flags & HAS_SSE2)
-    vpx_post_proc_down_and_across_mb_row =
-        vpx_post_proc_down_and_across_mb_row_sse2;
-  vpx_quantize_b = vpx_quantize_b_c;
-  if (flags & HAS_SSE2)
-    vpx_quantize_b = vpx_quantize_b_sse2;
+  vpx_quantize_b = vpx_quantize_b_sse2;
   if (flags & HAS_SSSE3)
     vpx_quantize_b = vpx_quantize_b_ssse3;
   if (flags & HAS_AVX)
@@ -10030,415 +7828,195 @@
     vpx_quantize_b_32x32 = vpx_quantize_b_32x32_ssse3;
   if (flags & HAS_AVX)
     vpx_quantize_b_32x32 = vpx_quantize_b_32x32_avx;
-  vpx_sad16x16 = vpx_sad16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x16 = vpx_sad16x16_sse2;
-  vpx_sad16x16_avg = vpx_sad16x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x16_avg = vpx_sad16x16_avg_sse2;
   vpx_sad16x16x3 = vpx_sad16x16x3_c;
   if (flags & HAS_SSE3)
     vpx_sad16x16x3 = vpx_sad16x16x3_sse3;
   if (flags & HAS_SSSE3)
     vpx_sad16x16x3 = vpx_sad16x16x3_ssse3;
-  vpx_sad16x16x4d = vpx_sad16x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x16x4d = vpx_sad16x16x4d_sse2;
   vpx_sad16x16x8 = vpx_sad16x16x8_c;
   if (flags & HAS_SSE4_1)
     vpx_sad16x16x8 = vpx_sad16x16x8_sse4_1;
-  vpx_sad16x32 = vpx_sad16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x32 = vpx_sad16x32_sse2;
-  vpx_sad16x32_avg = vpx_sad16x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x32_avg = vpx_sad16x32_avg_sse2;
-  vpx_sad16x32x4d = vpx_sad16x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x32x4d = vpx_sad16x32x4d_sse2;
-  vpx_sad16x8 = vpx_sad16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x8 = vpx_sad16x8_sse2;
-  vpx_sad16x8_avg = vpx_sad16x8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x8_avg = vpx_sad16x8_avg_sse2;
   vpx_sad16x8x3 = vpx_sad16x8x3_c;
   if (flags & HAS_SSE3)
     vpx_sad16x8x3 = vpx_sad16x8x3_sse3;
   if (flags & HAS_SSSE3)
     vpx_sad16x8x3 = vpx_sad16x8x3_ssse3;
-  vpx_sad16x8x4d = vpx_sad16x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x8x4d = vpx_sad16x8x4d_sse2;
   vpx_sad16x8x8 = vpx_sad16x8x8_c;
   if (flags & HAS_SSE4_1)
     vpx_sad16x8x8 = vpx_sad16x8x8_sse4_1;
-  vpx_sad32x16 = vpx_sad32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x16 = vpx_sad32x16_sse2;
+  vpx_sad32x16 = vpx_sad32x16_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x16 = vpx_sad32x16_avx2;
-  vpx_sad32x16_avg = vpx_sad32x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x16_avg = vpx_sad32x16_avg_sse2;
+  vpx_sad32x16_avg = vpx_sad32x16_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x16_avg = vpx_sad32x16_avg_avx2;
-  vpx_sad32x16x4d = vpx_sad32x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x16x4d = vpx_sad32x16x4d_sse2;
-  vpx_sad32x32 = vpx_sad32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x32 = vpx_sad32x32_sse2;
+  vpx_sad32x32 = vpx_sad32x32_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x32 = vpx_sad32x32_avx2;
-  vpx_sad32x32_avg = vpx_sad32x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x32_avg = vpx_sad32x32_avg_sse2;
+  vpx_sad32x32_avg = vpx_sad32x32_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x32_avg = vpx_sad32x32_avg_avx2;
-  vpx_sad32x32x4d = vpx_sad32x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x32x4d = vpx_sad32x32x4d_sse2;
+  vpx_sad32x32x4d = vpx_sad32x32x4d_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x32x4d = vpx_sad32x32x4d_avx2;
-  vpx_sad32x64 = vpx_sad32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x64 = vpx_sad32x64_sse2;
+  vpx_sad32x32x8 = vpx_sad32x32x8_c;
+  if (flags & HAS_AVX2)
+    vpx_sad32x32x8 = vpx_sad32x32x8_avx2;
+  vpx_sad32x64 = vpx_sad32x64_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x64 = vpx_sad32x64_avx2;
-  vpx_sad32x64_avg = vpx_sad32x64_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x64_avg = vpx_sad32x64_avg_sse2;
+  vpx_sad32x64_avg = vpx_sad32x64_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x64_avg = vpx_sad32x64_avg_avx2;
-  vpx_sad32x64x4d = vpx_sad32x64x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x64x4d = vpx_sad32x64x4d_sse2;
-  vpx_sad4x4 = vpx_sad4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x4 = vpx_sad4x4_sse2;
-  vpx_sad4x4_avg = vpx_sad4x4_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x4_avg = vpx_sad4x4_avg_sse2;
   vpx_sad4x4x3 = vpx_sad4x4x3_c;
   if (flags & HAS_SSE3)
     vpx_sad4x4x3 = vpx_sad4x4x3_sse3;
-  vpx_sad4x4x4d = vpx_sad4x4x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x4x4d = vpx_sad4x4x4d_sse2;
   vpx_sad4x4x8 = vpx_sad4x4x8_c;
   if (flags & HAS_SSE4_1)
     vpx_sad4x4x8 = vpx_sad4x4x8_sse4_1;
-  vpx_sad4x8 = vpx_sad4x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x8 = vpx_sad4x8_sse2;
-  vpx_sad4x8_avg = vpx_sad4x8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x8_avg = vpx_sad4x8_avg_sse2;
-  vpx_sad4x8x4d = vpx_sad4x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x8x4d = vpx_sad4x8x4d_sse2;
-  vpx_sad64x32 = vpx_sad64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x32 = vpx_sad64x32_sse2;
+  vpx_sad64x32 = vpx_sad64x32_sse2;
   if (flags & HAS_AVX2)
     vpx_sad64x32 = vpx_sad64x32_avx2;
-  vpx_sad64x32_avg = vpx_sad64x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x32_avg = vpx_sad64x32_avg_sse2;
+  vpx_sad64x32_avg = vpx_sad64x32_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_sad64x32_avg = vpx_sad64x32_avg_avx2;
-  vpx_sad64x32x4d = vpx_sad64x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x32x4d = vpx_sad64x32x4d_sse2;
-  vpx_sad64x64 = vpx_sad64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x64 = vpx_sad64x64_sse2;
+  vpx_sad64x64 = vpx_sad64x64_sse2;
   if (flags & HAS_AVX2)
     vpx_sad64x64 = vpx_sad64x64_avx2;
-  vpx_sad64x64_avg = vpx_sad64x64_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x64_avg = vpx_sad64x64_avg_sse2;
+  vpx_sad64x64_avg = vpx_sad64x64_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_sad64x64_avg = vpx_sad64x64_avg_avx2;
-  vpx_sad64x64x4d = vpx_sad64x64x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x64x4d = vpx_sad64x64x4d_sse2;
+  vpx_sad64x64x4d = vpx_sad64x64x4d_sse2;
   if (flags & HAS_AVX2)
     vpx_sad64x64x4d = vpx_sad64x64x4d_avx2;
-  vpx_sad8x16 = vpx_sad8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x16 = vpx_sad8x16_sse2;
-  vpx_sad8x16_avg = vpx_sad8x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x16_avg = vpx_sad8x16_avg_sse2;
   vpx_sad8x16x3 = vpx_sad8x16x3_c;
   if (flags & HAS_SSE3)
     vpx_sad8x16x3 = vpx_sad8x16x3_sse3;
-  vpx_sad8x16x4d = vpx_sad8x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x16x4d = vpx_sad8x16x4d_sse2;
   vpx_sad8x16x8 = vpx_sad8x16x8_c;
   if (flags & HAS_SSE4_1)
     vpx_sad8x16x8 = vpx_sad8x16x8_sse4_1;
-  vpx_sad8x4 = vpx_sad8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x4 = vpx_sad8x4_sse2;
-  vpx_sad8x4_avg = vpx_sad8x4_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x4_avg = vpx_sad8x4_avg_sse2;
-  vpx_sad8x4x4d = vpx_sad8x4x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x4x4d = vpx_sad8x4x4d_sse2;
-  vpx_sad8x8 = vpx_sad8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x8 = vpx_sad8x8_sse2;
-  vpx_sad8x8_avg = vpx_sad8x8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x8_avg = vpx_sad8x8_avg_sse2;
   vpx_sad8x8x3 = vpx_sad8x8x3_c;
   if (flags & HAS_SSE3)
     vpx_sad8x8x3 = vpx_sad8x8x3_sse3;
-  vpx_sad8x8x4d = vpx_sad8x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x8x4d = vpx_sad8x8x4d_sse2;
   vpx_sad8x8x8 = vpx_sad8x8x8_c;
   if (flags & HAS_SSE4_1)
     vpx_sad8x8x8 = vpx_sad8x8x8_sse4_1;
-  vpx_satd = vpx_satd_c;
-  if (flags & HAS_SSE2)
-    vpx_satd = vpx_satd_sse2;
+  vpx_satd = vpx_satd_sse2;
   if (flags & HAS_AVX2)
     vpx_satd = vpx_satd_avx2;
   vpx_scaled_2d = vpx_scaled_2d_c;
   if (flags & HAS_SSSE3)
     vpx_scaled_2d = vpx_scaled_2d_ssse3;
-  vpx_sub_pixel_avg_variance16x16 = vpx_sub_pixel_avg_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance16x16 = vpx_sub_pixel_avg_variance16x16_sse2;
+  vpx_sub_pixel_avg_variance16x16 = vpx_sub_pixel_avg_variance16x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance16x16 = vpx_sub_pixel_avg_variance16x16_ssse3;
-  vpx_sub_pixel_avg_variance16x32 = vpx_sub_pixel_avg_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance16x32 = vpx_sub_pixel_avg_variance16x32_sse2;
+  vpx_sub_pixel_avg_variance16x32 = vpx_sub_pixel_avg_variance16x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance16x32 = vpx_sub_pixel_avg_variance16x32_ssse3;
-  vpx_sub_pixel_avg_variance16x8 = vpx_sub_pixel_avg_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance16x8 = vpx_sub_pixel_avg_variance16x8_sse2;
+  vpx_sub_pixel_avg_variance16x8 = vpx_sub_pixel_avg_variance16x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance16x8 = vpx_sub_pixel_avg_variance16x8_ssse3;
-  vpx_sub_pixel_avg_variance32x16 = vpx_sub_pixel_avg_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance32x16 = vpx_sub_pixel_avg_variance32x16_sse2;
+  vpx_sub_pixel_avg_variance32x16 = vpx_sub_pixel_avg_variance32x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance32x16 = vpx_sub_pixel_avg_variance32x16_ssse3;
-  vpx_sub_pixel_avg_variance32x32 = vpx_sub_pixel_avg_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance32x32 = vpx_sub_pixel_avg_variance32x32_sse2;
+  vpx_sub_pixel_avg_variance32x32 = vpx_sub_pixel_avg_variance32x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance32x32 = vpx_sub_pixel_avg_variance32x32_ssse3;
   if (flags & HAS_AVX2)
     vpx_sub_pixel_avg_variance32x32 = vpx_sub_pixel_avg_variance32x32_avx2;
-  vpx_sub_pixel_avg_variance32x64 = vpx_sub_pixel_avg_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance32x64 = vpx_sub_pixel_avg_variance32x64_sse2;
+  vpx_sub_pixel_avg_variance32x64 = vpx_sub_pixel_avg_variance32x64_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance32x64 = vpx_sub_pixel_avg_variance32x64_ssse3;
-  vpx_sub_pixel_avg_variance4x4 = vpx_sub_pixel_avg_variance4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance4x4 = vpx_sub_pixel_avg_variance4x4_sse2;
+  vpx_sub_pixel_avg_variance4x4 = vpx_sub_pixel_avg_variance4x4_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance4x4 = vpx_sub_pixel_avg_variance4x4_ssse3;
-  vpx_sub_pixel_avg_variance4x8 = vpx_sub_pixel_avg_variance4x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance4x8 = vpx_sub_pixel_avg_variance4x8_sse2;
+  vpx_sub_pixel_avg_variance4x8 = vpx_sub_pixel_avg_variance4x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance4x8 = vpx_sub_pixel_avg_variance4x8_ssse3;
-  vpx_sub_pixel_avg_variance64x32 = vpx_sub_pixel_avg_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance64x32 = vpx_sub_pixel_avg_variance64x32_sse2;
+  vpx_sub_pixel_avg_variance64x32 = vpx_sub_pixel_avg_variance64x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance64x32 = vpx_sub_pixel_avg_variance64x32_ssse3;
-  vpx_sub_pixel_avg_variance64x64 = vpx_sub_pixel_avg_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance64x64 = vpx_sub_pixel_avg_variance64x64_sse2;
+  vpx_sub_pixel_avg_variance64x64 = vpx_sub_pixel_avg_variance64x64_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance64x64 = vpx_sub_pixel_avg_variance64x64_ssse3;
   if (flags & HAS_AVX2)
     vpx_sub_pixel_avg_variance64x64 = vpx_sub_pixel_avg_variance64x64_avx2;
-  vpx_sub_pixel_avg_variance8x16 = vpx_sub_pixel_avg_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance8x16 = vpx_sub_pixel_avg_variance8x16_sse2;
+  vpx_sub_pixel_avg_variance8x16 = vpx_sub_pixel_avg_variance8x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance8x16 = vpx_sub_pixel_avg_variance8x16_ssse3;
-  vpx_sub_pixel_avg_variance8x4 = vpx_sub_pixel_avg_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance8x4 = vpx_sub_pixel_avg_variance8x4_sse2;
+  vpx_sub_pixel_avg_variance8x4 = vpx_sub_pixel_avg_variance8x4_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance8x4 = vpx_sub_pixel_avg_variance8x4_ssse3;
-  vpx_sub_pixel_avg_variance8x8 = vpx_sub_pixel_avg_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance8x8 = vpx_sub_pixel_avg_variance8x8_sse2;
+  vpx_sub_pixel_avg_variance8x8 = vpx_sub_pixel_avg_variance8x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance8x8 = vpx_sub_pixel_avg_variance8x8_ssse3;
-  vpx_sub_pixel_variance16x16 = vpx_sub_pixel_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance16x16 = vpx_sub_pixel_variance16x16_sse2;
+  vpx_sub_pixel_variance16x16 = vpx_sub_pixel_variance16x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance16x16 = vpx_sub_pixel_variance16x16_ssse3;
-  vpx_sub_pixel_variance16x32 = vpx_sub_pixel_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance16x32 = vpx_sub_pixel_variance16x32_sse2;
+  vpx_sub_pixel_variance16x32 = vpx_sub_pixel_variance16x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance16x32 = vpx_sub_pixel_variance16x32_ssse3;
-  vpx_sub_pixel_variance16x8 = vpx_sub_pixel_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance16x8 = vpx_sub_pixel_variance16x8_sse2;
+  vpx_sub_pixel_variance16x8 = vpx_sub_pixel_variance16x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance16x8 = vpx_sub_pixel_variance16x8_ssse3;
-  vpx_sub_pixel_variance32x16 = vpx_sub_pixel_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance32x16 = vpx_sub_pixel_variance32x16_sse2;
+  vpx_sub_pixel_variance32x16 = vpx_sub_pixel_variance32x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance32x16 = vpx_sub_pixel_variance32x16_ssse3;
-  vpx_sub_pixel_variance32x32 = vpx_sub_pixel_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance32x32 = vpx_sub_pixel_variance32x32_sse2;
+  vpx_sub_pixel_variance32x32 = vpx_sub_pixel_variance32x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance32x32 = vpx_sub_pixel_variance32x32_ssse3;
   if (flags & HAS_AVX2)
     vpx_sub_pixel_variance32x32 = vpx_sub_pixel_variance32x32_avx2;
-  vpx_sub_pixel_variance32x64 = vpx_sub_pixel_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance32x64 = vpx_sub_pixel_variance32x64_sse2;
+  vpx_sub_pixel_variance32x64 = vpx_sub_pixel_variance32x64_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance32x64 = vpx_sub_pixel_variance32x64_ssse3;
-  vpx_sub_pixel_variance4x4 = vpx_sub_pixel_variance4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance4x4 = vpx_sub_pixel_variance4x4_sse2;
+  vpx_sub_pixel_variance4x4 = vpx_sub_pixel_variance4x4_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance4x4 = vpx_sub_pixel_variance4x4_ssse3;
-  vpx_sub_pixel_variance4x8 = vpx_sub_pixel_variance4x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance4x8 = vpx_sub_pixel_variance4x8_sse2;
+  vpx_sub_pixel_variance4x8 = vpx_sub_pixel_variance4x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance4x8 = vpx_sub_pixel_variance4x8_ssse3;
-  vpx_sub_pixel_variance64x32 = vpx_sub_pixel_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance64x32 = vpx_sub_pixel_variance64x32_sse2;
+  vpx_sub_pixel_variance64x32 = vpx_sub_pixel_variance64x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance64x32 = vpx_sub_pixel_variance64x32_ssse3;
-  vpx_sub_pixel_variance64x64 = vpx_sub_pixel_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance64x64 = vpx_sub_pixel_variance64x64_sse2;
+  vpx_sub_pixel_variance64x64 = vpx_sub_pixel_variance64x64_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance64x64 = vpx_sub_pixel_variance64x64_ssse3;
   if (flags & HAS_AVX2)
     vpx_sub_pixel_variance64x64 = vpx_sub_pixel_variance64x64_avx2;
-  vpx_sub_pixel_variance8x16 = vpx_sub_pixel_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance8x16 = vpx_sub_pixel_variance8x16_sse2;
+  vpx_sub_pixel_variance8x16 = vpx_sub_pixel_variance8x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance8x16 = vpx_sub_pixel_variance8x16_ssse3;
-  vpx_sub_pixel_variance8x4 = vpx_sub_pixel_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance8x4 = vpx_sub_pixel_variance8x4_sse2;
+  vpx_sub_pixel_variance8x4 = vpx_sub_pixel_variance8x4_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance8x4 = vpx_sub_pixel_variance8x4_ssse3;
-  vpx_sub_pixel_variance8x8 = vpx_sub_pixel_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance8x8 = vpx_sub_pixel_variance8x8_sse2;
+  vpx_sub_pixel_variance8x8 = vpx_sub_pixel_variance8x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance8x8 = vpx_sub_pixel_variance8x8_ssse3;
-  vpx_subtract_block = vpx_subtract_block_c;
-  if (flags & HAS_SSE2)
-    vpx_subtract_block = vpx_subtract_block_sse2;
-  vpx_sum_squares_2d_i16 = vpx_sum_squares_2d_i16_c;
-  if (flags & HAS_SSE2)
-    vpx_sum_squares_2d_i16 = vpx_sum_squares_2d_i16_sse2;
-  vpx_tm_predictor_16x16 = vpx_tm_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_tm_predictor_16x16 = vpx_tm_predictor_16x16_sse2;
-  vpx_tm_predictor_32x32 = vpx_tm_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_tm_predictor_32x32 = vpx_tm_predictor_32x32_sse2;
-  vpx_tm_predictor_4x4 = vpx_tm_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_tm_predictor_4x4 = vpx_tm_predictor_4x4_sse2;
-  vpx_tm_predictor_8x8 = vpx_tm_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_tm_predictor_8x8 = vpx_tm_predictor_8x8_sse2;
-  vpx_v_predictor_16x16 = vpx_v_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_v_predictor_16x16 = vpx_v_predictor_16x16_sse2;
-  vpx_v_predictor_32x32 = vpx_v_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_v_predictor_32x32 = vpx_v_predictor_32x32_sse2;
-  vpx_v_predictor_4x4 = vpx_v_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_v_predictor_4x4 = vpx_v_predictor_4x4_sse2;
-  vpx_v_predictor_8x8 = vpx_v_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_v_predictor_8x8 = vpx_v_predictor_8x8_sse2;
-  vpx_variance16x16 = vpx_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_variance16x16 = vpx_variance16x16_sse2;
+  vpx_variance16x16 = vpx_variance16x16_sse2;
   if (flags & HAS_AVX2)
     vpx_variance16x16 = vpx_variance16x16_avx2;
-  vpx_variance16x32 = vpx_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_variance16x32 = vpx_variance16x32_sse2;
+  vpx_variance16x32 = vpx_variance16x32_sse2;
   if (flags & HAS_AVX2)
     vpx_variance16x32 = vpx_variance16x32_avx2;
-  vpx_variance16x8 = vpx_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_variance16x8 = vpx_variance16x8_sse2;
+  vpx_variance16x8 = vpx_variance16x8_sse2;
   if (flags & HAS_AVX2)
     vpx_variance16x8 = vpx_variance16x8_avx2;
-  vpx_variance32x16 = vpx_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_variance32x16 = vpx_variance32x16_sse2;
+  vpx_variance32x16 = vpx_variance32x16_sse2;
   if (flags & HAS_AVX2)
     vpx_variance32x16 = vpx_variance32x16_avx2;
-  vpx_variance32x32 = vpx_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_variance32x32 = vpx_variance32x32_sse2;
+  vpx_variance32x32 = vpx_variance32x32_sse2;
   if (flags & HAS_AVX2)
     vpx_variance32x32 = vpx_variance32x32_avx2;
-  vpx_variance32x64 = vpx_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_variance32x64 = vpx_variance32x64_sse2;
+  vpx_variance32x64 = vpx_variance32x64_sse2;
   if (flags & HAS_AVX2)
     vpx_variance32x64 = vpx_variance32x64_avx2;
-  vpx_variance4x4 = vpx_variance4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_variance4x4 = vpx_variance4x4_sse2;
-  vpx_variance4x8 = vpx_variance4x8_c;
-  if (flags & HAS_SSE2)
-    vpx_variance4x8 = vpx_variance4x8_sse2;
-  vpx_variance64x32 = vpx_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_variance64x32 = vpx_variance64x32_sse2;
+  vpx_variance64x32 = vpx_variance64x32_sse2;
   if (flags & HAS_AVX2)
     vpx_variance64x32 = vpx_variance64x32_avx2;
-  vpx_variance64x64 = vpx_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_variance64x64 = vpx_variance64x64_sse2;
+  vpx_variance64x64 = vpx_variance64x64_sse2;
   if (flags & HAS_AVX2)
     vpx_variance64x64 = vpx_variance64x64_avx2;
-  vpx_variance8x16 = vpx_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_variance8x16 = vpx_variance8x16_sse2;
-  vpx_variance8x4 = vpx_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_variance8x4 = vpx_variance8x4_sse2;
-  vpx_variance8x8 = vpx_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_variance8x8 = vpx_variance8x8_sse2;
-  vpx_vector_var = vpx_vector_var_c;
-  if (flags & HAS_SSE2)
-    vpx_vector_var = vpx_vector_var_sse2;
 }
 #endif
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vp8_rtcd.h
index 2e121e6..df3a995 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vp8_rtcd.h
@@ -1,4 +1,4 @@
-#ifdef WK_DISABLE_HARDWARE_ACCELERATION
+#if WK_DISABLE_HARDWARE_ACCELERATION
 #include "vp8_rtcd_no_acceleration.h"
 #else
 
@@ -31,90 +31,90 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-void vp8_bilinear_predict16x16_sse2(unsigned char* src,
-                                    int src_pitch,
-                                    int xofst,
-                                    int yofst,
-                                    unsigned char* dst,
+void vp8_bilinear_predict16x16_sse2(unsigned char* src_ptr,
+                                    int src_pixels_per_line,
+                                    int xoffset,
+                                    int yoffset,
+                                    unsigned char* dst_ptr,
                                     int dst_pitch);
-void vp8_bilinear_predict16x16_ssse3(unsigned char* src,
-                                     int src_pitch,
-                                     int xofst,
-                                     int yofst,
-                                     unsigned char* dst,
+void vp8_bilinear_predict16x16_ssse3(unsigned char* src_ptr,
+                                     int src_pixels_per_line,
+                                     int xoffset,
+                                     int yoffset,
+                                     unsigned char* dst_ptr,
                                      int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict16x16)(unsigned char* src,
-                                              int src_pitch,
-                                              int xofst,
-                                              int yofst,
-                                              unsigned char* dst,
+RTCD_EXTERN void (*vp8_bilinear_predict16x16)(unsigned char* src_ptr,
+                                              int src_pixels_per_line,
+                                              int xoffset,
+                                              int yoffset,
+                                              unsigned char* dst_ptr,
                                               int dst_pitch);
 
-void vp8_bilinear_predict4x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict4x4_mmx(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
-                                 int dst_pitch);
-#define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_mmx
-
-void vp8_bilinear_predict8x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
-                               int dst_pitch);
-void vp8_bilinear_predict8x4_mmx(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
-                                 int dst_pitch);
-#define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_mmx
-
-void vp8_bilinear_predict8x8_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
-                               int dst_pitch);
-void vp8_bilinear_predict8x8_sse2(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict4x4_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
-void vp8_bilinear_predict8x8_ssse3(unsigned char* src,
-                                   int src_pitch,
-                                   int xofst,
-                                   int yofst,
-                                   unsigned char* dst,
+#define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_sse2
+
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x4_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_sse2
+
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x8_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+void vp8_bilinear_predict8x8_ssse3(unsigned char* src_ptr,
+                                   int src_pixels_per_line,
+                                   int xoffset,
+                                   int yoffset,
+                                   unsigned char* dst_ptr,
                                    int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict8x8)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
+RTCD_EXTERN void (*vp8_bilinear_predict8x8)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
                                             int dst_pitch);
 
 void vp8_blend_b_c(unsigned char* y,
                    unsigned char* u,
                    unsigned char* v,
-                   int y1,
-                   int u1,
-                   int v1,
+                   int y_1,
+                   int u_1,
+                   int v_1,
                    int alpha,
                    int stride);
 #define vp8_blend_b vp8_blend_b_c
@@ -122,9 +122,9 @@
 void vp8_blend_mb_inner_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
@@ -132,9 +132,9 @@
 void vp8_blend_mb_outer_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
@@ -144,65 +144,65 @@
 #define vp8_block_error vp8_block_error_sse2
 
 void vp8_copy32xn_c(const unsigned char* src_ptr,
-                    int source_stride,
+                    int src_stride,
                     unsigned char* dst_ptr,
                     int dst_stride,
-                    int n);
+                    int height);
 void vp8_copy32xn_sse2(const unsigned char* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        unsigned char* dst_ptr,
                        int dst_stride,
-                       int n);
+                       int height);
 void vp8_copy32xn_sse3(const unsigned char* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        unsigned char* dst_ptr,
                        int dst_stride,
-                       int n);
+                       int height);
 RTCD_EXTERN void (*vp8_copy32xn)(const unsigned char* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  unsigned char* dst_ptr,
                                  int dst_stride,
-                                 int n);
+                                 int height);
 
 void vp8_copy_mem16x16_c(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 void vp8_copy_mem16x16_sse2(unsigned char* src,
-                            int src_pitch,
+                            int src_stride,
                             unsigned char* dst,
-                            int dst_pitch);
+                            int dst_stride);
 #define vp8_copy_mem16x16 vp8_copy_mem16x16_sse2
 
 void vp8_copy_mem8x4_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 void vp8_copy_mem8x4_mmx(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 #define vp8_copy_mem8x4 vp8_copy_mem8x4_mmx
 
 void vp8_copy_mem8x8_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 void vp8_copy_mem8x8_mmx(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 #define vp8_copy_mem8x8 vp8_copy_mem8x8_mmx
 
-void vp8_dc_only_idct_add_c(short input,
-                            unsigned char* pred,
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
                             int pred_stride,
-                            unsigned char* dst,
+                            unsigned char* dst_ptr,
                             int dst_stride);
-void vp8_dc_only_idct_add_mmx(short input,
-                              unsigned char* pred,
+void vp8_dc_only_idct_add_mmx(short input_dc,
+                              unsigned char* pred_ptr,
                               int pred_stride,
-                              unsigned char* dst,
+                              unsigned char* dst_ptr,
                               int dst_stride);
 #define vp8_dc_only_idct_add vp8_dc_only_idct_add_mmx
 
@@ -244,11 +244,11 @@
 
 void vp8_dequant_idct_add_c(short* input,
                             short* dq,
-                            unsigned char* output,
+                            unsigned char* dest,
                             int stride);
 void vp8_dequant_idct_add_mmx(short* input,
                               short* dq,
-                              unsigned char* output,
+                              unsigned char* dest,
                               int stride);
 #define vp8_dequant_idct_add vp8_dequant_idct_add_mmx
 
@@ -278,8 +278,8 @@
                                        char* eobs);
 #define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_sse2
 
-void vp8_dequantize_b_c(struct blockd*, short* dqc);
-void vp8_dequantize_b_mmx(struct blockd*, short* dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
+void vp8_dequantize_b_mmx(struct blockd*, short* DQC);
 #define vp8_dequantize_b vp8_dequantize_b_mmx
 
 int vp8_diamond_search_sad_c(struct macroblock* x,
@@ -379,91 +379,91 @@
                                        int* mvcost[2],
                                        union int_mv* center_mv);
 
-void vp8_loop_filter_bh_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bh_sse2(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bh_sse2(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
 #define vp8_loop_filter_bh vp8_loop_filter_bh_sse2
 
-void vp8_loop_filter_bv_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bv_sse2(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bv_sse2(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
 #define vp8_loop_filter_bv vp8_loop_filter_bv_sse2
 
-void vp8_loop_filter_mbh_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbh_sse2(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbh_sse2(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbh vp8_loop_filter_mbh_sse2
 
-void vp8_loop_filter_mbv_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbv_sse2(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbv_sse2(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbv vp8_loop_filter_mbv_sse2
 
-void vp8_loop_filter_bhs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bhs_sse2(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bhs_sse2(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_sse2
 
-void vp8_loop_filter_bvs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bvs_sse2(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bvs_sse2(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_sse2
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y,
-                                              int ystride,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
-void vp8_loop_filter_simple_horizontal_edge_sse2(unsigned char* y,
-                                                 int ystride,
+void vp8_loop_filter_simple_horizontal_edge_sse2(unsigned char* y_ptr,
+                                                 int y_stride,
                                                  const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbh vp8_loop_filter_simple_horizontal_edge_sse2
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y,
-                                            int ystride,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
                                             const unsigned char* blimit);
-void vp8_loop_filter_simple_vertical_edge_sse2(unsigned char* y,
-                                               int ystride,
+void vp8_loop_filter_simple_vertical_edge_sse2(unsigned char* y_ptr,
+                                               int y_stride,
                                                const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbv vp8_loop_filter_simple_vertical_edge_sse2
 
@@ -479,8 +479,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -488,8 +488,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -509,126 +509,126 @@
 #define vp8_short_fdct8x4 vp8_short_fdct8x4_sse2
 
 void vp8_short_idct4x4llm_c(short* input,
-                            unsigned char* pred,
-                            int pitch,
-                            unsigned char* dst,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 void vp8_short_idct4x4llm_mmx(short* input,
-                              unsigned char* pred,
-                              int pitch,
-                              unsigned char* dst,
+                              unsigned char* pred_ptr,
+                              int pred_stride,
+                              unsigned char* dst_ptr,
                               int dst_stride);
 #define vp8_short_idct4x4llm vp8_short_idct4x4llm_mmx
 
-void vp8_short_inv_walsh4x4_c(short* input, short* output);
-void vp8_short_inv_walsh4x4_sse2(short* input, short* output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
+void vp8_short_inv_walsh4x4_sse2(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_sse2
 
-void vp8_short_inv_walsh4x4_1_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
 void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
 void vp8_short_walsh4x4_sse2(short* input, short* output, int pitch);
 #define vp8_short_walsh4x4 vp8_short_walsh4x4_sse2
 
-void vp8_sixtap_predict16x16_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_sixtap_predict16x16_sse2(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_sixtap_predict16x16_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
-void vp8_sixtap_predict16x16_ssse3(unsigned char* src,
-                                   int src_pitch,
-                                   int xofst,
-                                   int yofst,
-                                   unsigned char* dst,
+void vp8_sixtap_predict16x16_ssse3(unsigned char* src_ptr,
+                                   int src_pixels_per_line,
+                                   int xoffset,
+                                   int yoffset,
+                                   unsigned char* dst_ptr,
                                    int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict16x16)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict16x16)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
                                             int dst_pitch);
 
-void vp8_sixtap_predict4x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict4x4_mmx(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict4x4_mmx(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_sixtap_predict4x4_ssse3(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_sixtap_predict4x4_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict4x4)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict4x4)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
-void vp8_sixtap_predict8x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x4_sse2(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x4_sse2(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
-void vp8_sixtap_predict8x4_ssse3(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_sixtap_predict8x4_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict8x4)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict8x4)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
-void vp8_sixtap_predict8x8_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x8_sse2(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x8_sse2(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
-void vp8_sixtap_predict8x8_ssse3(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_sixtap_predict8x8_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict8x8)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict8x8)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
 void vp8_rtcd(void);
@@ -655,7 +655,11 @@
   vp8_full_search_sad = vp8_full_search_sad_c;
   if (flags & HAS_SSE3)
     vp8_full_search_sad = vp8_full_search_sadx3;
+  if (flags & HAS_SSE4_1)
+    vp8_full_search_sad = vp8_full_search_sadx8;
   vp8_regular_quantize_b = vp8_regular_quantize_b_sse2;
+  if (flags & HAS_SSE4_1)
+    vp8_regular_quantize_b = vp8_regular_quantize_b_sse4_1;
   vp8_sixtap_predict16x16 = vp8_sixtap_predict16x16_sse2;
   if (flags & HAS_SSSE3)
     vp8_sixtap_predict16x16 = vp8_sixtap_predict16x16_ssse3;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vp8_rtcd_no_acceleration.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vp8_rtcd_no_acceleration.h
index 44524cc..e98ef65 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vp8_rtcd_no_acceleration.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vp8_rtcd_no_acceleration.h
@@ -144,21 +144,7 @@
                     unsigned char* dst_ptr,
                     int dst_stride,
                     int n);
-void vp8_copy32xn_sse2(const unsigned char* src_ptr,
-                       int source_stride,
-                       unsigned char* dst_ptr,
-                       int dst_stride,
-                       int n);
-void vp8_copy32xn_sse3(const unsigned char* src_ptr,
-                       int source_stride,
-                       unsigned char* dst_ptr,
-                       int dst_stride,
-                       int n);
-RTCD_EXTERN void (*vp8_copy32xn)(const unsigned char* src_ptr,
-                                 int source_stride,
-                                 unsigned char* dst_ptr,
-                                 int dst_stride,
-                                 int n);
+#define vp8_copy32xn vp8_copy32xn_c
 
 void vp8_copy_mem16x16_c(unsigned char* src,
                          int src_pitch,
@@ -638,7 +624,6 @@
 
   vp8_bilinear_predict16x16 = vp8_bilinear_predict16x16_c;
   vp8_bilinear_predict8x8 = vp8_bilinear_predict8x8_c;
-  vp8_copy32xn = vp8_copy32xn_c;
   vp8_fast_quantize_b = vp8_fast_quantize_b_c;
   vp8_full_search_sad = vp8_full_search_sad_c;
   vp8_regular_quantize_b = vp8_regular_quantize_b_c;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vp9_rtcd.h
index 6f00c78..ce1528e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vp9_rtcd.h
@@ -79,7 +79,11 @@
                              int increase_denoising,
                              BLOCK_SIZE bs,
                              int motion_magnitude);
+#ifdef WK_DISABLE_HARDWARE_ACCELERATION
+#define vp9_denoiser_filter vp9_denoiser_filter_c
+#else
 #define vp9_denoiser_filter vp9_denoiser_filter_sse2
+#endif
 
 int vp9_diamond_search_sad_c(const struct macroblock* x,
                              const struct search_site_config* cfg,
@@ -110,46 +114,6 @@
     const struct vp9_variance_vtable* fn_ptr,
     const struct mv* center_mv);
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-void vp9_fdct8x8_quant_ssse3(const int16_t* input,
-                             int stride,
-                             tran_low_t* coeff_ptr,
-                             intptr_t n_coeffs,
-                             int skip_block,
-                             const int16_t* round_ptr,
-                             const int16_t* quant_ptr,
-                             tran_low_t* qcoeff_ptr,
-                             tran_low_t* dqcoeff_ptr,
-                             const int16_t* dequant_ptr,
-                             uint16_t* eob_ptr,
-                             const int16_t* scan,
-                             const int16_t* iscan);
-RTCD_EXTERN void (*vp9_fdct8x8_quant)(const int16_t* input,
-                                      int stride,
-                                      tran_low_t* coeff_ptr,
-                                      intptr_t n_coeffs,
-                                      int skip_block,
-                                      const int16_t* round_ptr,
-                                      const int16_t* quant_ptr,
-                                      tran_low_t* qcoeff_ptr,
-                                      tran_low_t* dqcoeff_ptr,
-                                      const int16_t* dequant_ptr,
-                                      uint16_t* eob_ptr,
-                                      const int16_t* scan,
-                                      const int16_t* iscan);
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -158,7 +122,11 @@
                        tran_low_t* output,
                        int stride,
                        int tx_type);
+#ifdef WK_DISABLE_HARDWARE_ACCELERATION
+#define vp9_fht16x16 vp9_fht16x16_c
+#else
 #define vp9_fht16x16 vp9_fht16x16_sse2
+#endif
 
 void vp9_fht4x4_c(const int16_t* input,
                   tran_low_t* output,
@@ -168,7 +136,11 @@
                      tran_low_t* output,
                      int stride,
                      int tx_type);
+#ifdef WK_DISABLE_HARDWARE_ACCELERATION
+#define vp9_fht4x4 vp9_fht4x4_c
+#else
 #define vp9_fht4x4 vp9_fht4x4_sse2
+#endif
 
 void vp9_fht8x8_c(const int16_t* input,
                   tran_low_t* output,
@@ -178,7 +150,11 @@
                      tran_low_t* output,
                      int stride,
                      int tx_type);
+#ifdef WK_DISABLE_HARDWARE_ACCELERATION
+#define vp9_fht8x8 vp9_fht8x8_c
+#else
 #define vp9_fht8x8 vp9_fht8x8_sse2
+#endif
 
 void vp9_filter_by_weight16x16_c(const uint8_t* src,
                                  int src_stride,
@@ -190,7 +166,11 @@
                                     uint8_t* dst,
                                     int dst_stride,
                                     int src_weight);
+#ifdef WK_DISABLE_HARDWARE_ACCELERATION
+#define vp9_filter_by_weight16x16 vp9_filter_by_weight16x16_c
+#else
 #define vp9_filter_by_weight16x16 vp9_filter_by_weight16x16_sse2
+#endif
 
 void vp9_filter_by_weight8x8_c(const uint8_t* src,
                                int src_stride,
@@ -202,11 +182,19 @@
                                   uint8_t* dst,
                                   int dst_stride,
                                   int src_weight);
+#ifdef WK_DISABLE_HARDWARE_ACCELERATION
+#define vp9_filter_by_weight8x8 vp9_filter_by_weight8x8_c
+#else
 #define vp9_filter_by_weight8x8 vp9_filter_by_weight8x8_sse2
+#endif
 
 void vp9_fwht4x4_c(const int16_t* input, tran_low_t* output, int stride);
 void vp9_fwht4x4_sse2(const int16_t* input, tran_low_t* output, int stride);
+#ifdef WK_DISABLE_HARDWARE_ACCELERATION
+#define vp9_fwht4x4 vp9_fwht4x4_c
+#else
 #define vp9_fwht4x4 vp9_fwht4x4_sse2
+#endif
 
 int64_t vp9_highbd_block_error_c(const tran_low_t* coeff,
                                  const tran_low_t* dqcoeff,
@@ -218,7 +206,11 @@
                                     intptr_t block_size,
                                     int64_t* ssz,
                                     int bd);
+#ifdef WK_DISABLE_HARDWARE_ACCELERATION
+#define vp9_highbd_block_error vp9_highbd_block_error_c
+#else
 #define vp9_highbd_block_error vp9_highbd_block_error_sse2
+#endif
 
 void vp9_highbd_fht16x16_c(const int16_t* input,
                            tran_low_t* output,
@@ -242,18 +234,18 @@
 #define vp9_highbd_fwht4x4 vp9_highbd_fwht4x4_c
 
 void vp9_highbd_iht16x16_256_add_c(const tran_low_t* input,
-                                   uint16_t* output,
-                                   int pitch,
+                                   uint16_t* dest,
+                                   int stride,
                                    int tx_type,
                                    int bd);
 void vp9_highbd_iht16x16_256_add_sse4_1(const tran_low_t* input,
-                                        uint16_t* output,
-                                        int pitch,
+                                        uint16_t* dest,
+                                        int stride,
                                         int tx_type,
                                         int bd);
 RTCD_EXTERN void (*vp9_highbd_iht16x16_256_add)(const tran_low_t* input,
-                                                uint16_t* output,
-                                                int pitch,
+                                                uint16_t* dest,
+                                                int stride,
                                                 int tx_type,
                                                 int bd);
 
@@ -345,20 +337,25 @@
                                         unsigned int block_width,
                                         unsigned int block_height,
                                         int strength,
-                                        int filter_weight,
+                                        int* blk_fw,
+                                        int use_32x32,
                                         uint32_t* accumulator,
                                         uint16_t* count);
 #define vp9_highbd_temporal_filter_apply vp9_highbd_temporal_filter_apply_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 void vp9_iht16x16_256_add_sse2(const tran_low_t* input,
-                               uint8_t* output,
-                               int pitch,
+                               uint8_t* dest,
+                               int stride,
                                int tx_type);
+#ifdef WK_DISABLE_HARDWARE_ACCELERATION
+#define vp9_iht16x16_256_add vp9_iht16x16_256_add_c
+#else
 #define vp9_iht16x16_256_add vp9_iht16x16_256_add_sse2
+#endif
 
 void vp9_iht4x4_16_add_c(const tran_low_t* input,
                          uint8_t* dest,
@@ -368,7 +365,11 @@
                             uint8_t* dest,
                             int stride,
                             int tx_type);
+#ifdef WK_DISABLE_HARDWARE_ACCELERATION
+#define vp9_iht4x4_16_add vp9_iht4x4_16_add_c
+#else
 #define vp9_iht4x4_16_add vp9_iht4x4_16_add_sse2
+#endif
 
 void vp9_iht8x8_64_add_c(const tran_low_t* input,
                          uint8_t* dest,
@@ -378,7 +379,11 @@
                             uint8_t* dest,
                             int stride,
                             int tx_type);
+#ifdef WK_DISABLE_HARDWARE_ACCELERATION
+#define vp9_iht8x8_64_add vp9_iht8x8_64_add_c
+#else
 #define vp9_iht8x8_64_add vp9_iht8x8_64_add_sse2
+#endif
 
 void vp9_quantize_fp_c(const tran_low_t* coeff_ptr,
                        intptr_t n_coeffs,
@@ -484,47 +489,78 @@
     INTERP_FILTER filter_type,
     int phase_scaler);
 
+void vp9_apply_temporal_filter_c(const uint8_t *y_src, int y_src_stride, const uint8_t *y_pre, int y_pre_stride, const uint8_t *u_src, const uint8_t *v_src, int uv_src_stride, const uint8_t *u_pre, const uint8_t *v_pre, int uv_pre_stride, unsigned int block_width, unsigned int block_height, int ss_x, int ss_y, int strength, const int *const blk_fw, int use_32x32, uint32_t *y_accumulator, uint16_t *y_count, uint32_t *u_accumulator, uint16_t *u_count, uint32_t *v_accumulator, uint16_t *v_count);
+void vp9_apply_temporal_filter_sse4_1(const uint8_t *y_src, int y_src_stride, const uint8_t *y_pre, int y_pre_stride, const uint8_t *u_src, const uint8_t *v_src, int uv_src_stride, const uint8_t *u_pre, const uint8_t *v_pre, int uv_pre_stride, unsigned int block_width, unsigned int block_height, int ss_x, int ss_y, int strength, const int *const blk_fw, int use_32x32, uint32_t *y_accumulator, uint16_t *y_count, uint32_t *u_accumulator, uint16_t *u_count, uint32_t *v_accumulator, uint16_t *v_count);
+RTCD_EXTERN void (*vp9_apply_temporal_filter)(const uint8_t *y_src, int y_src_stride, const uint8_t *y_pre, int y_pre_stride, const uint8_t *u_src, const uint8_t *v_src, int uv_src_stride, const uint8_t *u_pre, const uint8_t *v_pre, int uv_pre_stride, unsigned int block_width, unsigned int block_height, int ss_x, int ss_y, int strength, const int *const blk_fw, int use_32x32, uint32_t *y_accumulator, uint16_t *y_count, uint32_t *u_accumulator, uint16_t *u_count, uint32_t *v_accumulator, uint16_t *v_count);
+
+void vp9_highbd_apply_temporal_filter_c(const uint16_t *y_src, int y_src_stride, const uint16_t *y_pre, int y_pre_stride, const uint16_t *u_src, const uint16_t *v_src, int uv_src_stride, const uint16_t *u_pre, const uint16_t *v_pre, int uv_pre_stride, unsigned int block_width, unsigned int block_height, int ss_x, int ss_y, int strength, const int *const blk_fw, int use_32x32, uint32_t *y_accum, uint16_t *y_count, uint32_t *u_accum, uint16_t *u_count, uint32_t *v_accum, uint16_t *v_count);
+void vp9_highbd_apply_temporal_filter_sse4_1(const uint16_t *y_src, int y_src_stride, const uint16_t *y_pre, int y_pre_stride, const uint16_t *u_src, const uint16_t *v_src, int uv_src_stride, const uint16_t *u_pre, const uint16_t *v_pre, int uv_pre_stride, unsigned int block_width, unsigned int block_height, int ss_x, int ss_y, int strength, const int *const blk_fw, int use_32x32, uint32_t *y_accum, uint16_t *y_count, uint32_t *u_accum, uint16_t *u_count, uint32_t *v_accum, uint16_t *v_count);
+RTCD_EXTERN void (*vp9_highbd_apply_temporal_filter)(const uint16_t *y_src, int y_src_stride, const uint16_t *y_pre, int y_pre_stride, const uint16_t *u_src, const uint16_t *v_src, int uv_src_stride, const uint16_t *u_pre, const uint16_t *v_pre, int uv_pre_stride, unsigned int block_width, unsigned int block_height, int ss_x, int ss_y, int strength, const int *const blk_fw, int use_32x32, uint32_t *y_accum, uint16_t *y_count, uint32_t *u_accum, uint16_t *u_count, uint32_t *v_accum, uint16_t *v_count);
+
 void vp9_rtcd(void);
 
 #ifdef RTCD_C
 #include "vpx_ports/x86.h"
+
 static void setup_rtcd_internal(void) {
   int flags = x86_simd_caps();
 
   (void)flags;
 
+#ifdef WK_DISABLE_HARDWARE_ACCELERATION
+  vp9_block_error = vp9_block_error_c;
+  vp9_block_error_fp = vp9_block_error_fp_c;
+#else
   vp9_block_error = vp9_block_error_sse2;
-  if (flags & HAS_AVX2)
-    vp9_block_error = vp9_block_error_avx2;
   vp9_block_error_fp = vp9_block_error_fp_sse2;
-  if (flags & HAS_AVX2)
-    vp9_block_error_fp = vp9_block_error_fp_avx2;
+#endif
   vp9_diamond_search_sad = vp9_diamond_search_sad_c;
+#ifndef WK_DISABLE_HARDWARE_ACCELERATION
   if (flags & HAS_AVX)
     vp9_diamond_search_sad = vp9_diamond_search_sad_avx;
-  vp9_fdct8x8_quant = vp9_fdct8x8_quant_c;
-  if (flags & HAS_SSSE3)
-    vp9_fdct8x8_quant = vp9_fdct8x8_quant_ssse3;
+#endif
   vp9_highbd_iht16x16_256_add = vp9_highbd_iht16x16_256_add_c;
+#ifndef WK_DISABLE_HARDWARE_ACCELERATION
   if (flags & HAS_SSE4_1)
     vp9_highbd_iht16x16_256_add = vp9_highbd_iht16x16_256_add_sse4_1;
+#endif
   vp9_highbd_iht4x4_16_add = vp9_highbd_iht4x4_16_add_c;
+#ifndef WK_DISABLE_HARDWARE_ACCELERATION
   if (flags & HAS_SSE4_1)
     vp9_highbd_iht4x4_16_add = vp9_highbd_iht4x4_16_add_sse4_1;
+#endif
   vp9_highbd_iht8x8_64_add = vp9_highbd_iht8x8_64_add_c;
+#ifndef WK_DISABLE_HARDWARE_ACCELERATION
   if (flags & HAS_SSE4_1)
     vp9_highbd_iht8x8_64_add = vp9_highbd_iht8x8_64_add_sse4_1;
+#endif
+#ifdef WK_DISABLE_HARDWARE_ACCELERATION
+  vp9_quantize_fp = vp9_quantize_fp_c;
+#else
   vp9_quantize_fp = vp9_quantize_fp_sse2;
   if (flags & HAS_SSSE3)
     vp9_quantize_fp = vp9_quantize_fp_ssse3;
-  if (flags & HAS_AVX2)
-    vp9_quantize_fp = vp9_quantize_fp_avx2;
+#endif
   vp9_quantize_fp_32x32 = vp9_quantize_fp_32x32_c;
+#ifndef WK_DISABLE_HARDWARE_ACCELERATION
   if (flags & HAS_SSSE3)
     vp9_quantize_fp_32x32 = vp9_quantize_fp_32x32_ssse3;
+#endif
   vp9_scale_and_extend_frame = vp9_scale_and_extend_frame_c;
+#ifndef WK_DISABLE_HARDWARE_ACCELERATION
   if (flags & HAS_SSSE3)
     vp9_scale_and_extend_frame = vp9_scale_and_extend_frame_ssse3;
+#endif
+  vp9_apply_temporal_filter = vp9_apply_temporal_filter_c;
+#ifndef WK_DISABLE_HARDWARE_ACCELERATION
+  if (flags & HAS_SSE4_1)
+    vp9_apply_temporal_filter = vp9_apply_temporal_filter_sse4_1;
+#endif
+  vp9_highbd_apply_temporal_filter = vp9_highbd_apply_temporal_filter_c;
+#ifndef WK_DISABLE_HARDWARE_ACCELERATION
+  if (flags & HAS_SSE4_1)
+    vp9_highbd_apply_temporal_filter = vp9_highbd_apply_temporal_filter_sse4_1;
+#endif
 }
 #endif
 
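The hunk above follows libvpx's RTCD (run-time CPU detection) pattern: each hot function is a global function pointer that `setup_rtcd_internal()` first points at the portable C implementation, then upgrades to an SSE/AVX variant when `x86_simd_caps()` reports support, unless `WK_DISABLE_HARDWARE_ACCELERATION` pins it to the C path at compile time. A minimal sketch of that dispatch scheme (the names below are illustrative, not libvpx's own):

```c
#include <assert.h>

/* Portable fallback and a stand-in for a SIMD-optimized variant. */
static int sum_c(const int *v, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += v[i];
    return s;
}
static int sum_simd(const int *v, int n) {
    /* In libvpx this would be an SSE2/AVX2 kernel; same result, faster. */
    return sum_c(v, n);
}

/* RTCD-style global dispatch pointer, defaulting to the C version. */
static int (*sum_dispatch)(const int *, int) = sum_c;

#define HAS_SIMD 0x1

/* Mirrors setup_rtcd_internal(): reset to C, then opt in to SIMD
 * only when the CPU supports it and acceleration is not disabled. */
static void setup_rtcd(int cpu_flags) {
    sum_dispatch = sum_c;
#ifndef WK_DISABLE_HARDWARE_ACCELERATION
    if (cpu_flags & HAS_SIMD)
        sum_dispatch = sum_simd;
#endif
}
```

Defining `WK_DISABLE_HARDWARE_ACCELERATION` at build time compiles the SIMD selection out entirely, which is why the patch wraps each `if (flags & HAS_...)` upgrade in `#ifndef` rather than branching at run time.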
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_config.asm
index ffaf2d9..b9d7810 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_config.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_config.asm
@@ -1,7 +1,12 @@
+%define VPX_ARCH_ARM 0
 %define ARCH_ARM 0
+%define VPX_ARCH_MIPS 0
 %define ARCH_MIPS 0
+%define VPX_ARCH_X86 0
 %define ARCH_X86 0
+%define VPX_ARCH_X86_64 1
 %define ARCH_X86_64 1
+%define VPX_ARCH_PPC 0
 %define ARCH_PPC 0
 %define HAVE_NEON 0
 %define HAVE_NEON_ASM 0
@@ -66,7 +71,7 @@
 %define CONFIG_OS_SUPPORT 1
 %define CONFIG_UNIT_TESTS 1
 %define CONFIG_WEBM_IO 1
-%define CONFIG_LIBYUV 1
+%define CONFIG_LIBYUV 0
 %define CONFIG_DECODE_PERF_TESTS 0
 %define CONFIG_ENCODE_PERF_TESTS 0
 %define CONFIG_MULTI_RES_ENCODING 1
@@ -79,7 +84,11 @@
 %define CONFIG_EXPERIMENTAL 0
 %define CONFIG_SIZE_LIMIT 1
 %define CONFIG_ALWAYS_ADJUST_BPM 0
+%define CONFIG_BITSTREAM_DEBUG 0
+%define CONFIG_MISMATCH_DEBUG 0
 %define CONFIG_FP_MB_STATS 0
 %define CONFIG_EMULATE_HARDWARE 0
+%define CONFIG_NON_GREEDY_MV 0
+%define CONFIG_RATE_CTRL 0
 %define DECODE_WIDTH_LIMIT 16384
 %define DECODE_HEIGHT_LIMIT 16384
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_config.c
index 8d9ce54..52aaac5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=x86_64-darwin9-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --enable-pic --as=yasm --disable-avx512 --enable-vp9-highbitdepth";
+static const char* const cfg = "--target=x86_64-linux-gcc --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv --enable-pic --as=yasm --disable-avx512 --enable-vp9-highbitdepth";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_config.h
index 5c1c92f..7c3c338 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      inline
+#define VPX_ARCH_ARM 0
 #define ARCH_ARM 0
+#define VPX_ARCH_MIPS 0
 #define ARCH_MIPS 0
+#define VPX_ARCH_X86 0
 #define ARCH_X86 0
+#define VPX_ARCH_X86_64 1
 #define ARCH_X86_64 1
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 0
 #define HAVE_NEON_ASM 0
@@ -21,23 +26,12 @@
 #define HAVE_DSPR2 0
 #define HAVE_MSA 0
 #define HAVE_MIPS64 0
-
-#ifdef WK_DISABLE_HARDWARE_ACCELERATION
-#define HAVE_MMX 0
-#define HAVE_SSE 0
-#define HAVE_SSE2 0
-#define HAVE_SSE3 0
-#define HAVE_SSSE3 0
-#define HAVE_SSE4_1 0
-#else
 #define HAVE_MMX 1
 #define HAVE_SSE 1
 #define HAVE_SSE2 1
 #define HAVE_SSE3 1
 #define HAVE_SSSE3 1
 #define HAVE_SSE4_1 1
-#endif
-
 #define HAVE_AVX 1
 #define HAVE_AVX2 1
 #define HAVE_AVX512 0
@@ -66,15 +60,15 @@
 #define CONFIG_DC_RECON 0
 #define CONFIG_RUNTIME_CPU_DETECT 1
 #define CONFIG_POSTPROC 1
-#define CONFIG_VP9_POSTPROC 0
+#define CONFIG_VP9_POSTPROC 1
 #define CONFIG_MULTITHREAD 1
 #define CONFIG_INTERNAL_STATS 0
 #define CONFIG_VP8_ENCODER 1
 #define CONFIG_VP8_DECODER 1
-#define CONFIG_VP9_ENCODER 0
-#define CONFIG_VP9_DECODER 0
+#define CONFIG_VP9_ENCODER 1
+#define CONFIG_VP9_DECODER 1
 #define CONFIG_VP8 1
-#define CONFIG_VP9 0
+#define CONFIG_VP9 1
 #define CONFIG_ENCODERS 1
 #define CONFIG_DECODERS 1
 #define CONFIG_STATIC_MSVCRT 0
@@ -89,12 +83,12 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
-#define CONFIG_VP9_TEMPORAL_DENOISING 0
+#define CONFIG_VP9_TEMPORAL_DENOISING 1
 #define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 1
@@ -102,8 +96,12 @@
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_dsp_rtcd.h
index cb7e2a4..8c232db 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_dsp_rtcd.h
@@ -1,4 +1,4 @@
-#ifdef WK_DISABLE_HARDWARE_ACCELERATION
+#if WK_DISABLE_HARDWARE_ACCELERATION
 #include "vpx_dsp_rtcd_no_acceleration.h"
 #else
 
@@ -431,420 +431,420 @@
 #define vpx_convolve_copy vpx_convolve_copy_sse2
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_c
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_c
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_c
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_c
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d153_predictor_16x16_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_16x16)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d153_predictor_32x32_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_32x32)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d153_predictor_4x4_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_4x4)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d153_predictor_8x8_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_8x8)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d207_predictor_16x16_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_16x16)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d207_predictor_32x32_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_32x32)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d207_predictor_4x4_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_sse2
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d207_predictor_8x8_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_8x8)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_16x16_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_16x16)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_32x32_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_32x32)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_4x4_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_sse2
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_8x8_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_sse2
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d63_predictor_16x16_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_16x16)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d63_predictor_32x32_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_32x32)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d63_predictor_4x4_ssse3(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_4x4)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d63_predictor_8x8_ssse3(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_8x8)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_16x16_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_sse2
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_32x32_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_sse2
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_4x4_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_sse2
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_8x8_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_sse2
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_16x16_sse2(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 #define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_sse2
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_32x32_sse2(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 #define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_sse2
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_4x4_sse2(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 #define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_sse2
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_8x8_sse2(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 #define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_sse2
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_16x16_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_sse2
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_32x32_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_sse2
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_4x4_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_sse2
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_8x8_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_sse2
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_16x16_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_sse2
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_32x32_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_sse2
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_4x4_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_sse2
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_8x8_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_sse2
@@ -888,44 +888,44 @@
 #define vpx_fdct8x8_1 vpx_fdct8x8_1_sse2
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
                        int* sum);
 void vpx_get16x16var_sse2(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
                           int* sum);
 void vpx_get16x16var_avx2(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
                           int* sum);
 RTCD_EXTERN void (*vpx_get16x16var)(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse,
                                     int* sum);
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 #define vpx_get4x4sse_cs vpx_get4x4sse_cs_c
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
                      int* sum);
 void vpx_get8x8var_sse2(const uint8_t* src_ptr,
-                        int source_stride,
+                        int src_stride,
                         const uint8_t* ref_ptr,
                         int ref_stride,
                         unsigned int* sse,
@@ -937,41 +937,41 @@
 #define vpx_get_mb_ss vpx_get_mb_ss_sse2
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_16x16_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_h_predictor_16x16 vpx_h_predictor_16x16_sse2
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_32x32_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_h_predictor_32x32 vpx_h_predictor_32x32_sse2
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_4x4_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_h_predictor_4x4 vpx_h_predictor_4x4_sse2
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_8x8_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_h_predictor_8x8 vpx_h_predictor_8x8_sse2
@@ -998,6 +998,9 @@
 void vpx_hadamard_32x32_avx2(const int16_t* src_diff,
                              ptrdiff_t src_stride,
                              tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_hadamard_32x32)(const int16_t* src_diff,
+                                       ptrdiff_t src_stride,
+                                       tran_low_t* coeff);
 
 void vpx_hadamard_8x8_c(const int16_t* src_diff,
                         ptrdiff_t src_stride,
@@ -1013,79 +1016,91 @@
                                      tran_low_t* coeff);
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
 
 void vpx_highbd_10_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
                                  int* sum);
-#define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_c
+void vpx_highbd_10_get16x16var_sse2(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse,
+                                    int* sum);
+#define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_sse2
 
 void vpx_highbd_10_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
                                int* sum);
-#define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_c
+void vpx_highbd_10_get8x8var_sse2(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse,
+                                  int* sum);
+#define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_sse2
 
 unsigned int vpx_highbd_10_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 unsigned int vpx_highbd_10_mse16x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_mse16x16 vpx_highbd_10_mse16x16_sse2
 
 unsigned int vpx_highbd_10_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse16x8 vpx_highbd_10_mse16x8_c
 
 unsigned int vpx_highbd_10_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse8x16 vpx_highbd_10_mse8x16_c
 
 unsigned int vpx_highbd_10_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_highbd_10_mse8x8_sse2(const uint8_t* src_ptr,
-                                       int source_stride,
+                                       int src_stride,
                                        const uint8_t* ref_ptr,
-                                       int recon_stride,
+                                       int ref_stride,
                                        unsigned int* sse);
 #define vpx_highbd_10_mse8x8 vpx_highbd_10_mse8x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1095,18 +1110,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1115,18 +1130,18 @@
   vpx_highbd_10_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1136,18 +1151,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1157,18 +1172,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1178,18 +1193,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1198,9 +1213,9 @@
   vpx_highbd_10_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1209,9 +1224,9 @@
   vpx_highbd_10_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1221,18 +1236,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1242,18 +1257,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1262,18 +1277,18 @@
   vpx_highbd_10_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1282,18 +1297,18 @@
   vpx_highbd_10_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1302,18 +1317,18 @@
   vpx_highbd_10_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1322,16 +1337,16 @@
   vpx_highbd_10_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1339,16 +1354,16 @@
   vpx_highbd_10_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1356,16 +1371,16 @@
   vpx_highbd_10_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -1373,16 +1388,16 @@
   vpx_highbd_10_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1390,16 +1405,16 @@
   vpx_highbd_10_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1407,16 +1422,16 @@
   vpx_highbd_10_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1424,9 +1439,9 @@
   vpx_highbd_10_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1434,9 +1449,9 @@
   vpx_highbd_10_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1444,16 +1459,16 @@
   vpx_highbd_10_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1461,16 +1476,16 @@
   vpx_highbd_10_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1478,16 +1493,16 @@
   vpx_highbd_10_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -1495,16 +1510,16 @@
   vpx_highbd_10_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -1512,16 +1527,16 @@
   vpx_highbd_10_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -1529,214 +1544,226 @@
   vpx_highbd_10_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_10_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance16x16 vpx_highbd_10_variance16x16_sse2
 
 unsigned int vpx_highbd_10_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance16x32 vpx_highbd_10_variance16x32_sse2
 
 unsigned int vpx_highbd_10_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_10_variance16x8 vpx_highbd_10_variance16x8_sse2
 
 unsigned int vpx_highbd_10_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance32x16 vpx_highbd_10_variance32x16_sse2
 
 unsigned int vpx_highbd_10_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance32x32 vpx_highbd_10_variance32x32_sse2
 
 unsigned int vpx_highbd_10_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance32x64 vpx_highbd_10_variance32x64_sse2
 
 unsigned int vpx_highbd_10_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x4 vpx_highbd_10_variance4x4_c
 
 unsigned int vpx_highbd_10_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x8 vpx_highbd_10_variance4x8_c
 
 unsigned int vpx_highbd_10_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance64x32 vpx_highbd_10_variance64x32_sse2
 
 unsigned int vpx_highbd_10_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance64x64 vpx_highbd_10_variance64x64_sse2
 
 unsigned int vpx_highbd_10_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_10_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_10_variance8x16 vpx_highbd_10_variance8x16_sse2
 
 unsigned int vpx_highbd_10_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance8x4 vpx_highbd_10_variance8x4_c
 
 unsigned int vpx_highbd_10_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_10_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 #define vpx_highbd_10_variance8x8 vpx_highbd_10_variance8x8_sse2
 
 void vpx_highbd_12_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
                                  int* sum);
-#define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_c
+void vpx_highbd_12_get16x16var_sse2(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse,
+                                    int* sum);
+#define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_sse2
 
 void vpx_highbd_12_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
                                int* sum);
-#define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_c
+void vpx_highbd_12_get8x8var_sse2(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse,
+                                  int* sum);
+#define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_sse2
 
 unsigned int vpx_highbd_12_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 unsigned int vpx_highbd_12_mse16x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_mse16x16 vpx_highbd_12_mse16x16_sse2
 
 unsigned int vpx_highbd_12_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse16x8 vpx_highbd_12_mse16x8_c
 
 unsigned int vpx_highbd_12_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse8x16 vpx_highbd_12_mse8x16_c
 
 unsigned int vpx_highbd_12_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_highbd_12_mse8x8_sse2(const uint8_t* src_ptr,
-                                       int source_stride,
+                                       int src_stride,
                                        const uint8_t* ref_ptr,
-                                       int recon_stride,
+                                       int ref_stride,
                                        unsigned int* sse);
 #define vpx_highbd_12_mse8x8 vpx_highbd_12_mse8x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1746,18 +1773,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1766,18 +1793,18 @@
   vpx_highbd_12_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1787,18 +1814,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1808,18 +1835,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1829,18 +1856,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1849,9 +1876,9 @@
   vpx_highbd_12_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1860,9 +1887,9 @@
   vpx_highbd_12_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1872,18 +1899,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1893,18 +1920,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1913,18 +1940,18 @@
   vpx_highbd_12_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1933,18 +1960,18 @@
   vpx_highbd_12_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1953,18 +1980,18 @@
   vpx_highbd_12_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1973,16 +2000,16 @@
   vpx_highbd_12_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1990,16 +2017,16 @@
   vpx_highbd_12_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2007,16 +2034,16 @@
   vpx_highbd_12_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2024,16 +2051,16 @@
   vpx_highbd_12_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2041,16 +2068,16 @@
   vpx_highbd_12_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2058,16 +2085,16 @@
   vpx_highbd_12_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2075,9 +2102,9 @@
   vpx_highbd_12_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -2085,9 +2112,9 @@
   vpx_highbd_12_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -2095,16 +2122,16 @@
   vpx_highbd_12_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2112,16 +2139,16 @@
   vpx_highbd_12_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2129,16 +2156,16 @@
   vpx_highbd_12_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2146,16 +2173,16 @@
   vpx_highbd_12_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -2163,16 +2190,16 @@
   vpx_highbd_12_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -2180,213 +2207,225 @@
   vpx_highbd_12_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_12_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance16x16 vpx_highbd_12_variance16x16_sse2
 
 unsigned int vpx_highbd_12_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance16x32 vpx_highbd_12_variance16x32_sse2
 
 unsigned int vpx_highbd_12_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_12_variance16x8 vpx_highbd_12_variance16x8_sse2
 
 unsigned int vpx_highbd_12_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance32x16 vpx_highbd_12_variance32x16_sse2
 
 unsigned int vpx_highbd_12_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance32x32 vpx_highbd_12_variance32x32_sse2
 
 unsigned int vpx_highbd_12_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance32x64 vpx_highbd_12_variance32x64_sse2
 
 unsigned int vpx_highbd_12_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x4 vpx_highbd_12_variance4x4_c
 
 unsigned int vpx_highbd_12_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x8 vpx_highbd_12_variance4x8_c
 
 unsigned int vpx_highbd_12_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance64x32 vpx_highbd_12_variance64x32_sse2
 
 unsigned int vpx_highbd_12_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance64x64 vpx_highbd_12_variance64x64_sse2
 
 unsigned int vpx_highbd_12_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_12_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_12_variance8x16 vpx_highbd_12_variance8x16_sse2
 
 unsigned int vpx_highbd_12_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance8x4 vpx_highbd_12_variance8x4_c
 
 unsigned int vpx_highbd_12_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_12_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 #define vpx_highbd_12_variance8x8 vpx_highbd_12_variance8x8_sse2
 
 void vpx_highbd_8_get16x16var_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse,
                                 int* sum);
-#define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_c
+void vpx_highbd_8_get16x16var_sse2(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   unsigned int* sse,
+                                   int* sum);
+#define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_sse2
 
 void vpx_highbd_8_get8x8var_c(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
                               int ref_stride,
                               unsigned int* sse,
                               int* sum);
-#define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_c
+void vpx_highbd_8_get8x8var_sse2(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse,
+                                 int* sum);
+#define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_sse2
 
 unsigned int vpx_highbd_8_mse16x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 unsigned int vpx_highbd_8_mse16x16_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
-                                        int recon_stride,
+                                        int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_mse16x16 vpx_highbd_8_mse16x16_sse2
 
 unsigned int vpx_highbd_8_mse16x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse16x8 vpx_highbd_8_mse16x8_c
 
 unsigned int vpx_highbd_8_mse8x16_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse8x16 vpx_highbd_8_mse8x16_c
 
 unsigned int vpx_highbd_8_mse8x8_c(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
-                                   int recon_stride,
+                                   int ref_stride,
                                    unsigned int* sse);
 unsigned int vpx_highbd_8_mse8x8_sse2(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 #define vpx_highbd_8_mse8x8 vpx_highbd_8_mse8x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2395,18 +2434,18 @@
   vpx_highbd_8_sub_pixel_avg_variance16x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2415,18 +2454,18 @@
   vpx_highbd_8_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2435,18 +2474,18 @@
   vpx_highbd_8_sub_pixel_avg_variance16x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2455,18 +2494,18 @@
   vpx_highbd_8_sub_pixel_avg_variance32x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2475,18 +2514,18 @@
   vpx_highbd_8_sub_pixel_avg_variance32x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2495,9 +2534,9 @@
   vpx_highbd_8_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -2506,9 +2545,9 @@
   vpx_highbd_8_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -2517,18 +2556,18 @@
   vpx_highbd_8_sub_pixel_avg_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2537,18 +2576,18 @@
   vpx_highbd_8_sub_pixel_avg_variance64x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2557,18 +2596,18 @@
   vpx_highbd_8_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2577,18 +2616,18 @@
   vpx_highbd_8_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
                                                   const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2597,18 +2636,18 @@
   vpx_highbd_8_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
                                                   const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2617,16 +2656,16 @@
   vpx_highbd_8_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2634,16 +2673,16 @@
   vpx_highbd_8_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2651,16 +2690,16 @@
   vpx_highbd_8_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -2668,16 +2707,16 @@
   vpx_highbd_8_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2685,16 +2724,16 @@
   vpx_highbd_8_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2702,16 +2741,16 @@
   vpx_highbd_8_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2719,34 +2758,34 @@
   vpx_highbd_8_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x4 vpx_highbd_8_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x8 vpx_highbd_8_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2754,16 +2793,16 @@
   vpx_highbd_8_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2771,16 +2810,16 @@
   vpx_highbd_8_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -2788,16 +2827,16 @@
   vpx_highbd_8_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -2805,16 +2844,16 @@
   vpx_highbd_8_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -2822,152 +2861,152 @@
   vpx_highbd_8_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_8_variance16x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance16x16 vpx_highbd_8_variance16x16_sse2
 
 unsigned int vpx_highbd_8_variance16x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance16x32 vpx_highbd_8_variance16x32_sse2
 
 unsigned int vpx_highbd_8_variance16x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 #define vpx_highbd_8_variance16x8 vpx_highbd_8_variance16x8_sse2
 
 unsigned int vpx_highbd_8_variance32x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance32x16 vpx_highbd_8_variance32x16_sse2
 
 unsigned int vpx_highbd_8_variance32x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance32x32 vpx_highbd_8_variance32x32_sse2
 
 unsigned int vpx_highbd_8_variance32x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x64_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance32x64 vpx_highbd_8_variance32x64_sse2
 
 unsigned int vpx_highbd_8_variance4x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x4 vpx_highbd_8_variance4x4_c
 
 unsigned int vpx_highbd_8_variance4x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x8 vpx_highbd_8_variance4x8_c
 
 unsigned int vpx_highbd_8_variance64x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance64x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance64x32 vpx_highbd_8_variance64x32_sse2
 
 unsigned int vpx_highbd_8_variance64x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance64x64_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance64x64 vpx_highbd_8_variance64x64_sse2
 
 unsigned int vpx_highbd_8_variance8x16_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_8_variance8x16_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 #define vpx_highbd_8_variance8x16 vpx_highbd_8_variance8x16_sse2
 
 unsigned int vpx_highbd_8_variance8x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance8x4 vpx_highbd_8_variance8x4_c
 
 unsigned int vpx_highbd_8_variance8x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 unsigned int vpx_highbd_8_variance8x8_sse2(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_8_variance8x8 vpx_highbd_8_variance8x8_sse2
 
-unsigned int vpx_highbd_avg_4x4_c(const uint8_t*, int p);
-unsigned int vpx_highbd_avg_4x4_sse2(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_4x4_c(const uint8_t* s8, int p);
+unsigned int vpx_highbd_avg_4x4_sse2(const uint8_t* s8, int p);
 #define vpx_highbd_avg_4x4 vpx_highbd_avg_4x4_sse2
 
-unsigned int vpx_highbd_avg_8x8_c(const uint8_t*, int p);
-unsigned int vpx_highbd_avg_8x8_sse2(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_8x8_c(const uint8_t* s8, int p);
+unsigned int vpx_highbd_avg_8x8_sse2(const uint8_t* s8, int p);
 #define vpx_highbd_avg_8x8 vpx_highbd_avg_8x8_sse2
 
 void vpx_highbd_comp_avg_pred_c(uint16_t* comp_pred,
@@ -2989,7 +3028,7 @@
                             int y_step_q4,
                             int w,
                             int h,
-                            int bps);
+                            int bd);
 void vpx_highbd_convolve8_sse2(const uint16_t* src,
                                ptrdiff_t src_stride,
                                uint16_t* dst,
@@ -3001,7 +3040,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 void vpx_highbd_convolve8_avx2(const uint16_t* src,
                                ptrdiff_t src_stride,
                                uint16_t* dst,
@@ -3013,7 +3052,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8)(const uint16_t* src,
                                          ptrdiff_t src_stride,
                                          uint16_t* dst,
@@ -3025,7 +3064,7 @@
                                          int y_step_q4,
                                          int w,
                                          int h,
-                                         int bps);
+                                         int bd);
 
 void vpx_highbd_convolve8_avg_c(const uint16_t* src,
                                 ptrdiff_t src_stride,
@@ -3038,7 +3077,7 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 void vpx_highbd_convolve8_avg_sse2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3050,7 +3089,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 void vpx_highbd_convolve8_avg_avx2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3062,7 +3101,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg)(const uint16_t* src,
                                              ptrdiff_t src_stride,
                                              uint16_t* dst,
@@ -3074,7 +3113,7 @@
                                              int y_step_q4,
                                              int w,
                                              int h,
-                                             int bps);
+                                             int bd);
 
 void vpx_highbd_convolve8_avg_horiz_c(const uint16_t* src,
                                       ptrdiff_t src_stride,
@@ -3087,7 +3126,7 @@
                                       int y_step_q4,
                                       int w,
                                       int h,
-                                      int bps);
+                                      int bd);
 void vpx_highbd_convolve8_avg_horiz_sse2(const uint16_t* src,
                                          ptrdiff_t src_stride,
                                          uint16_t* dst,
@@ -3099,7 +3138,7 @@
                                          int y_step_q4,
                                          int w,
                                          int h,
-                                         int bps);
+                                         int bd);
 void vpx_highbd_convolve8_avg_horiz_avx2(const uint16_t* src,
                                          ptrdiff_t src_stride,
                                          uint16_t* dst,
@@ -3111,7 +3150,7 @@
                                          int y_step_q4,
                                          int w,
                                          int h,
-                                         int bps);
+                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg_horiz)(const uint16_t* src,
                                                    ptrdiff_t src_stride,
                                                    uint16_t* dst,
@@ -3123,7 +3162,7 @@
                                                    int y_step_q4,
                                                    int w,
                                                    int h,
-                                                   int bps);
+                                                   int bd);
 
 void vpx_highbd_convolve8_avg_vert_c(const uint16_t* src,
                                      ptrdiff_t src_stride,
@@ -3136,7 +3175,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 void vpx_highbd_convolve8_avg_vert_sse2(const uint16_t* src,
                                         ptrdiff_t src_stride,
                                         uint16_t* dst,
@@ -3148,7 +3187,7 @@
                                         int y_step_q4,
                                         int w,
                                         int h,
-                                        int bps);
+                                        int bd);
 void vpx_highbd_convolve8_avg_vert_avx2(const uint16_t* src,
                                         ptrdiff_t src_stride,
                                         uint16_t* dst,
@@ -3160,7 +3199,7 @@
                                         int y_step_q4,
                                         int w,
                                         int h,
-                                        int bps);
+                                        int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg_vert)(const uint16_t* src,
                                                   ptrdiff_t src_stride,
                                                   uint16_t* dst,
@@ -3172,7 +3211,7 @@
                                                   int y_step_q4,
                                                   int w,
                                                   int h,
-                                                  int bps);
+                                                  int bd);
 
 void vpx_highbd_convolve8_horiz_c(const uint16_t* src,
                                   ptrdiff_t src_stride,
@@ -3185,7 +3224,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 void vpx_highbd_convolve8_horiz_sse2(const uint16_t* src,
                                      ptrdiff_t src_stride,
                                      uint16_t* dst,
@@ -3197,7 +3236,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 void vpx_highbd_convolve8_horiz_avx2(const uint16_t* src,
                                      ptrdiff_t src_stride,
                                      uint16_t* dst,
@@ -3209,7 +3248,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_horiz)(const uint16_t* src,
                                                ptrdiff_t src_stride,
                                                uint16_t* dst,
@@ -3221,7 +3260,7 @@
                                                int y_step_q4,
                                                int w,
                                                int h,
-                                               int bps);
+                                               int bd);
 
 void vpx_highbd_convolve8_vert_c(const uint16_t* src,
                                  ptrdiff_t src_stride,
@@ -3234,7 +3273,7 @@
                                  int y_step_q4,
                                  int w,
                                  int h,
-                                 int bps);
+                                 int bd);
 void vpx_highbd_convolve8_vert_sse2(const uint16_t* src,
                                     ptrdiff_t src_stride,
                                     uint16_t* dst,
@@ -3246,7 +3285,7 @@
                                     int y_step_q4,
                                     int w,
                                     int h,
-                                    int bps);
+                                    int bd);
 void vpx_highbd_convolve8_vert_avx2(const uint16_t* src,
                                     ptrdiff_t src_stride,
                                     uint16_t* dst,
@@ -3258,7 +3297,7 @@
                                     int y_step_q4,
                                     int w,
                                     int h,
-                                    int bps);
+                                    int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_vert)(const uint16_t* src,
                                               ptrdiff_t src_stride,
                                               uint16_t* dst,
@@ -3270,7 +3309,7 @@
                                               int y_step_q4,
                                               int w,
                                               int h,
-                                              int bps);
+                                              int bd);
 
 void vpx_highbd_convolve_avg_c(const uint16_t* src,
                                ptrdiff_t src_stride,
@@ -3283,7 +3322,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 void vpx_highbd_convolve_avg_sse2(const uint16_t* src,
                                   ptrdiff_t src_stride,
                                   uint16_t* dst,
@@ -3295,7 +3334,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 void vpx_highbd_convolve_avg_avx2(const uint16_t* src,
                                   ptrdiff_t src_stride,
                                   uint16_t* dst,
@@ -3307,7 +3346,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve_avg)(const uint16_t* src,
                                             ptrdiff_t src_stride,
                                             uint16_t* dst,
@@ -3319,7 +3358,7 @@
                                             int y_step_q4,
                                             int w,
                                             int h,
-                                            int bps);
+                                            int bd);
 
 void vpx_highbd_convolve_copy_c(const uint16_t* src,
                                 ptrdiff_t src_stride,
@@ -3332,7 +3371,7 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 void vpx_highbd_convolve_copy_sse2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3344,7 +3383,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 void vpx_highbd_convolve_copy_avx2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3356,7 +3395,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve_copy)(const uint16_t* src,
                                              ptrdiff_t src_stride,
                                              uint16_t* dst,
@@ -3368,427 +3407,427 @@
                                              int y_step_q4,
                                              int w,
                                              int h,
-                                             int bps);
+                                             int bd);
 
 void vpx_highbd_d117_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d117_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d117_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d117_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d117_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d117_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_d117_predictor_4x4 vpx_highbd_d117_predictor_4x4_sse2
 
 void vpx_highbd_d117_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d117_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d135_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d135_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d135_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d135_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d135_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d135_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_d135_predictor_4x4 vpx_highbd_d135_predictor_4x4_sse2
 
 void vpx_highbd_d135_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d135_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d153_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d153_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d153_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d153_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d153_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d153_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_d153_predictor_4x4 vpx_highbd_d153_predictor_4x4_sse2
 
 void vpx_highbd_d153_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d153_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d207_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d207_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d207_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d207_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d207_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d207_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_d207_predictor_4x4 vpx_highbd_d207_predictor_4x4_sse2
 
 void vpx_highbd_d207_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d207_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d45_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d45_predictor_16x16_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_16x16)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d45_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d45_predictor_32x32_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_32x32)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d45_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d45_predictor_4x4_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_4x4)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_d45_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d45_predictor_8x8_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_8x8)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_d63_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d63_predictor_16x16_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_16x16)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d63_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d63_predictor_32x32_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_32x32)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d63_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d63_predictor_4x4_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d63_predictor_4x4 vpx_highbd_d63_predictor_4x4_sse2
 
 void vpx_highbd_d63_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d63_predictor_8x8_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_8x8)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_dc_128_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_128_predictor_16x16_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
 #define vpx_highbd_dc_128_predictor_16x16 vpx_highbd_dc_128_predictor_16x16_sse2
 
 void vpx_highbd_dc_128_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_128_predictor_32x32_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
 #define vpx_highbd_dc_128_predictor_32x32 vpx_highbd_dc_128_predictor_32x32_sse2
 
 void vpx_highbd_dc_128_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_128_predictor_4x4_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 #define vpx_highbd_dc_128_predictor_4x4 vpx_highbd_dc_128_predictor_4x4_sse2
 
 void vpx_highbd_dc_128_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_128_predictor_8x8_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 #define vpx_highbd_dc_128_predictor_8x8 vpx_highbd_dc_128_predictor_8x8_sse2
 
 void vpx_highbd_dc_left_predictor_16x16_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 void vpx_highbd_dc_left_predictor_16x16_sse2(uint16_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint16_t* above,
                                              const uint16_t* left,
                                              int bd);
@@ -3796,12 +3835,12 @@
   vpx_highbd_dc_left_predictor_16x16_sse2
 
 void vpx_highbd_dc_left_predictor_32x32_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 void vpx_highbd_dc_left_predictor_32x32_sse2(uint16_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint16_t* above,
                                              const uint16_t* left,
                                              int bd);
@@ -3809,120 +3848,120 @@
   vpx_highbd_dc_left_predictor_32x32_sse2
 
 void vpx_highbd_dc_left_predictor_4x4_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 void vpx_highbd_dc_left_predictor_4x4_sse2(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 #define vpx_highbd_dc_left_predictor_4x4 vpx_highbd_dc_left_predictor_4x4_sse2
 
 void vpx_highbd_dc_left_predictor_8x8_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 void vpx_highbd_dc_left_predictor_8x8_sse2(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 #define vpx_highbd_dc_left_predictor_8x8 vpx_highbd_dc_left_predictor_8x8_sse2
 
 void vpx_highbd_dc_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_dc_predictor_16x16_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_dc_predictor_16x16 vpx_highbd_dc_predictor_16x16_sse2
 
 void vpx_highbd_dc_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_dc_predictor_32x32_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_dc_predictor_32x32 vpx_highbd_dc_predictor_32x32_sse2
 
 void vpx_highbd_dc_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_dc_predictor_4x4_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_dc_predictor_4x4 vpx_highbd_dc_predictor_4x4_sse2
 
 void vpx_highbd_dc_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_dc_predictor_8x8_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_dc_predictor_8x8 vpx_highbd_dc_predictor_8x8_sse2
 
 void vpx_highbd_dc_top_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_top_predictor_16x16_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
 #define vpx_highbd_dc_top_predictor_16x16 vpx_highbd_dc_top_predictor_16x16_sse2
 
 void vpx_highbd_dc_top_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_top_predictor_32x32_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
 #define vpx_highbd_dc_top_predictor_32x32 vpx_highbd_dc_top_predictor_32x32_sse2
 
 void vpx_highbd_dc_top_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_top_predictor_4x4_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 #define vpx_highbd_dc_top_predictor_4x4 vpx_highbd_dc_top_predictor_4x4_sse2
 
 void vpx_highbd_dc_top_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_top_predictor_8x8_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
@@ -3980,53 +4019,83 @@
 #define vpx_highbd_fdct8x8_1 vpx_highbd_fdct8x8_1_c
 
 void vpx_highbd_h_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_h_predictor_16x16_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_h_predictor_16x16 vpx_highbd_h_predictor_16x16_sse2
 
 void vpx_highbd_h_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_h_predictor_32x32_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_h_predictor_32x32 vpx_highbd_h_predictor_32x32_sse2
 
 void vpx_highbd_h_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_h_predictor_4x4_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_h_predictor_4x4 vpx_highbd_h_predictor_4x4_sse2
 
 void vpx_highbd_h_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_h_predictor_8x8_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_h_predictor_8x8 vpx_highbd_h_predictor_8x8_sse2
 
+void vpx_highbd_hadamard_16x16_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+void vpx_highbd_hadamard_16x16_avx2(const int16_t* src_diff,
+                                    ptrdiff_t src_stride,
+                                    tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_16x16)(const int16_t* src_diff,
+                                              ptrdiff_t src_stride,
+                                              tran_low_t* coeff);
+
+void vpx_highbd_hadamard_32x32_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+void vpx_highbd_hadamard_32x32_avx2(const int16_t* src_diff,
+                                    ptrdiff_t src_stride,
+                                    tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_32x32)(const int16_t* src_diff,
+                                              ptrdiff_t src_stride,
+                                              tran_low_t* coeff);
+
+void vpx_highbd_hadamard_8x8_c(const int16_t* src_diff,
+                               ptrdiff_t src_stride,
+                               tran_low_t* coeff);
+void vpx_highbd_hadamard_8x8_avx2(const int16_t* src_diff,
+                                  ptrdiff_t src_stride,
+                                  tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_8x8)(const int16_t* src_diff,
+                                            ptrdiff_t src_stride,
+                                            tran_low_t* coeff);
+
 void vpx_highbd_idct16x16_10_add_c(const tran_low_t* input,
                                    uint16_t* dest,
                                    int stride,
@@ -4424,9 +4493,9 @@
                                          int bd);
 #define vpx_highbd_lpf_vertical_8_dual vpx_highbd_lpf_vertical_8_dual_sse2
 
-void vpx_highbd_minmax_8x8_c(const uint8_t* s,
+void vpx_highbd_minmax_8x8_c(const uint8_t* s8,
                              int p,
-                             const uint8_t* d,
+                             const uint8_t* d8,
                              int dp,
                              int* min,
                              int* max);
@@ -4512,12 +4581,12 @@
 
 void vpx_highbd_sad16x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad16x16x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad16x16x4d vpx_highbd_sad16x16x4d_sse2
@@ -4546,12 +4615,12 @@
 
 void vpx_highbd_sad16x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad16x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad16x32x4d vpx_highbd_sad16x32x4d_sse2
@@ -4580,12 +4649,12 @@
 
 void vpx_highbd_sad16x8x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 void vpx_highbd_sad16x8x4d_sse2(const uint8_t* src_ptr,
                                 int src_stride,
-                                const uint8_t* const ref_ptr[],
+                                const uint8_t* const ref_array[],
                                 int ref_stride,
                                 uint32_t* sad_array);
 #define vpx_highbd_sad16x8x4d vpx_highbd_sad16x8x4d_sse2
@@ -4614,12 +4683,12 @@
 
 void vpx_highbd_sad32x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x16x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad32x16x4d vpx_highbd_sad32x16x4d_sse2
@@ -4648,12 +4717,12 @@
 
 void vpx_highbd_sad32x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad32x32x4d vpx_highbd_sad32x32x4d_sse2
@@ -4682,12 +4751,12 @@
 
 void vpx_highbd_sad32x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x64x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad32x64x4d vpx_highbd_sad32x64x4d_sse2
@@ -4707,12 +4776,12 @@
 
 void vpx_highbd_sad4x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad4x4x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
 #define vpx_highbd_sad4x4x4d vpx_highbd_sad4x4x4d_sse2
@@ -4732,12 +4801,12 @@
 
 void vpx_highbd_sad4x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad4x8x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
 #define vpx_highbd_sad4x8x4d vpx_highbd_sad4x8x4d_sse2
@@ -4766,12 +4835,12 @@
 
 void vpx_highbd_sad64x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad64x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad64x32x4d vpx_highbd_sad64x32x4d_sse2
@@ -4800,12 +4869,12 @@
 
 void vpx_highbd_sad64x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad64x64x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad64x64x4d vpx_highbd_sad64x64x4d_sse2
@@ -4834,12 +4903,12 @@
 
 void vpx_highbd_sad8x16x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 void vpx_highbd_sad8x16x4d_sse2(const uint8_t* src_ptr,
                                 int src_stride,
-                                const uint8_t* const ref_ptr[],
+                                const uint8_t* const ref_array[],
                                 int ref_stride,
                                 uint32_t* sad_array);
 #define vpx_highbd_sad8x16x4d vpx_highbd_sad8x16x4d_sse2
@@ -4868,12 +4937,12 @@
 
 void vpx_highbd_sad8x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad8x4x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
 #define vpx_highbd_sad8x4x4d vpx_highbd_sad8x4x4d_sse2
@@ -4902,118 +4971,122 @@
 
 void vpx_highbd_sad8x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad8x8x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
 #define vpx_highbd_sad8x8x4d vpx_highbd_sad8x8x4d_sse2
 
+int vpx_highbd_satd_c(const tran_low_t* coeff, int length);
+int vpx_highbd_satd_avx2(const tran_low_t* coeff, int length);
+RTCD_EXTERN int (*vpx_highbd_satd)(const tran_low_t* coeff, int length);
+
 void vpx_highbd_subtract_block_c(int rows,
                                  int cols,
                                  int16_t* diff_ptr,
                                  ptrdiff_t diff_stride,
-                                 const uint8_t* src_ptr,
+                                 const uint8_t* src8_ptr,
                                  ptrdiff_t src_stride,
-                                 const uint8_t* pred_ptr,
+                                 const uint8_t* pred8_ptr,
                                  ptrdiff_t pred_stride,
                                  int bd);
 #define vpx_highbd_subtract_block vpx_highbd_subtract_block_c
 
 void vpx_highbd_tm_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_tm_predictor_16x16_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_tm_predictor_16x16 vpx_highbd_tm_predictor_16x16_sse2
 
 void vpx_highbd_tm_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_tm_predictor_32x32_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_tm_predictor_32x32 vpx_highbd_tm_predictor_32x32_sse2
 
 void vpx_highbd_tm_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_tm_predictor_4x4_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_tm_predictor_4x4 vpx_highbd_tm_predictor_4x4_sse2
 
 void vpx_highbd_tm_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_tm_predictor_8x8_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_tm_predictor_8x8 vpx_highbd_tm_predictor_8x8_sse2
 
 void vpx_highbd_v_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_v_predictor_16x16_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_v_predictor_16x16 vpx_highbd_v_predictor_16x16_sse2
 
 void vpx_highbd_v_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_v_predictor_32x32_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_v_predictor_32x32 vpx_highbd_v_predictor_32x32_sse2
 
 void vpx_highbd_v_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_v_predictor_4x4_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_v_predictor_4x4 vpx_highbd_v_predictor_4x4_sse2
 
 void vpx_highbd_v_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_v_predictor_8x8_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
@@ -5323,12 +5396,12 @@
                                   const uint8_t* thresh1);
 #define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_sse2
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
                                  int flimit);
-void vpx_mbpost_proc_across_ip_sse2(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_sse2(unsigned char* src,
                                     int pitch,
                                     int rows,
                                     int cols,
@@ -5362,68 +5435,68 @@
 #define vpx_minmax_8x8 vpx_minmax_8x8_sse2
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 unsigned int vpx_mse16x16_sse2(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_mse16x16_avx2(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_mse16x16)(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 unsigned int vpx_mse16x8_sse2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
 unsigned int vpx_mse16x8_avx2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_mse16x8)(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
-                                        int recon_stride,
+                                        int ref_stride,
                                         unsigned int* sse);
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 unsigned int vpx_mse8x16_sse2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
 #define vpx_mse8x16 vpx_mse8x16_sse2
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 unsigned int vpx_mse8x8_sse2(const uint8_t* src_ptr,
-                             int source_stride,
+                             int src_stride,
                              const uint8_t* ref_ptr,
-                             int recon_stride,
+                             int ref_stride,
                              unsigned int* sse);
 #define vpx_mse8x8 vpx_mse8x8_sse2
 
@@ -5624,12 +5697,12 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x16x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad16x16x4d vpx_sad16x16x4d_sse2
@@ -5674,12 +5747,12 @@
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad16x32x4d vpx_sad16x32x4d_sse2
@@ -5729,12 +5802,12 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad16x8x4d_sse2(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 #define vpx_sad16x8x4d vpx_sad16x8x4d_sse2
@@ -5795,12 +5868,12 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x16x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x16x4d vpx_sad32x16x4d_sse2
@@ -5845,25 +5918,41 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 void vpx_sad32x32x4d_avx2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad32x32x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+void vpx_sad32x32x8_avx2(const uint8_t* src_ptr,
+                         int src_stride,
+                         const uint8_t* ref_ptr,
+                         int ref_stride,
+                         uint32_t* sad_array);
+RTCD_EXTERN void (*vpx_sad32x32x8)(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   uint32_t* sad_array);
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -5904,12 +5993,12 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x64x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x64x4d vpx_sad32x64x4d_sse2
@@ -5954,12 +6043,12 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x4x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad4x4x4d vpx_sad4x4x4d_sse2
@@ -6004,12 +6093,12 @@
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x8x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad4x8x4d vpx_sad4x8x4d_sse2
@@ -6054,12 +6143,12 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad64x32x4d vpx_sad64x32x4d_sse2
@@ -6104,22 +6193,22 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x64x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 void vpx_sad64x64x4d_avx2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad64x64x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
@@ -6163,12 +6252,12 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad8x16x4d_sse2(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 #define vpx_sad8x16x4d vpx_sad8x16x4d_sse2
@@ -6213,12 +6302,12 @@
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x4x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad8x4x4d vpx_sad8x4x4d_sse2
@@ -6263,12 +6352,12 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x8x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad8x8x4d vpx_sad8x8x4d_sse2
@@ -6394,850 +6483,850 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_ssse3(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_avx2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x64)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance4x4)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance4x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance64x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_avx2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance64x64)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_ssse3(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x4)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x16)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_ssse3(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x8)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x16)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_avx2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x64)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance4x4)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance4x8)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance64x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_avx2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance64x64)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_ssse3(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x16)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x4)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x8)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -7265,315 +7354,315 @@
 #define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_sse2
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_16x16_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_sse2
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_32x32_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_sse2
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_4x4_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_sse2
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_8x8_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_sse2
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_16x16_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_v_predictor_16x16 vpx_v_predictor_16x16_sse2
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_32x32_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_v_predictor_32x32 vpx_v_predictor_32x32_sse2
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_4x4_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_v_predictor_4x4 vpx_v_predictor_4x4_sse2
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_8x8_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_v_predictor_8x8 vpx_v_predictor_8x8_sse2
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x16_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance16x16_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x16)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance16x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance16x8_sse2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 unsigned int vpx_variance16x8_avx2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x8)(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x16_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x16_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x16)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x64_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x64_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x64)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x4_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance4x4 vpx_variance4x4_sse2
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x8_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance4x8 vpx_variance4x8_sse2
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance64x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance64x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x64_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance64x64_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance64x64)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance8x16_sse2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 #define vpx_variance8x16 vpx_variance8x16_sse2
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x4_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance8x4 vpx_variance8x4_sse2
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x8_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance8x8 vpx_variance8x8_sse2
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
@@ -7650,6 +7739,7 @@
     vpx_d63_predictor_8x8 = vpx_d63_predictor_8x8_ssse3;
   vpx_get16x16var = vpx_get16x16var_sse2;
   vpx_hadamard_16x16 = vpx_hadamard_16x16_sse2;
+  vpx_hadamard_32x32 = vpx_hadamard_32x32_sse2;
   vpx_hadamard_8x8 = vpx_hadamard_8x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_hadamard_8x8 = vpx_hadamard_8x8_ssse3;
@@ -7718,15 +7808,37 @@
   vpx_highbd_d63_predictor_8x8 = vpx_highbd_d63_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d63_predictor_8x8 = vpx_highbd_d63_predictor_8x8_ssse3;
+  vpx_highbd_hadamard_16x16 = vpx_highbd_hadamard_16x16_c;
+  vpx_highbd_hadamard_32x32 = vpx_highbd_hadamard_32x32_c;
+  vpx_highbd_hadamard_8x8 = vpx_highbd_hadamard_8x8_c;
   vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_sse2;
+  if (flags & HAS_SSE4_1)
+    vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_sse4_1;
   vpx_highbd_idct16x16_256_add = vpx_highbd_idct16x16_256_add_sse2;
+  if (flags & HAS_SSE4_1)
+    vpx_highbd_idct16x16_256_add = vpx_highbd_idct16x16_256_add_sse4_1;
   vpx_highbd_idct16x16_38_add = vpx_highbd_idct16x16_38_add_sse2;
+  if (flags & HAS_SSE4_1)
+    vpx_highbd_idct16x16_38_add = vpx_highbd_idct16x16_38_add_sse4_1;
   vpx_highbd_idct32x32_1024_add = vpx_highbd_idct32x32_1024_add_sse2;
+  if (flags & HAS_SSE4_1)
+    vpx_highbd_idct32x32_1024_add = vpx_highbd_idct32x32_1024_add_sse4_1;
   vpx_highbd_idct32x32_135_add = vpx_highbd_idct32x32_135_add_sse2;
+  if (flags & HAS_SSE4_1)
+    vpx_highbd_idct32x32_135_add = vpx_highbd_idct32x32_135_add_sse4_1;
   vpx_highbd_idct32x32_34_add = vpx_highbd_idct32x32_34_add_sse2;
+  if (flags & HAS_SSE4_1)
+    vpx_highbd_idct32x32_34_add = vpx_highbd_idct32x32_34_add_sse4_1;
   vpx_highbd_idct4x4_16_add = vpx_highbd_idct4x4_16_add_sse2;
+  if (flags & HAS_SSE4_1)
+    vpx_highbd_idct4x4_16_add = vpx_highbd_idct4x4_16_add_sse4_1;
   vpx_highbd_idct8x8_12_add = vpx_highbd_idct8x8_12_add_sse2;
+  if (flags & HAS_SSE4_1)
+    vpx_highbd_idct8x8_12_add = vpx_highbd_idct8x8_12_add_sse4_1;
   vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_sse2;
+  if (flags & HAS_SSE4_1)
+    vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_sse4_1;
+  vpx_highbd_satd = vpx_highbd_satd_c;
   vpx_idct32x32_135_add = vpx_idct32x32_135_add_sse2;
   if (flags & HAS_SSSE3)
     vpx_idct32x32_135_add = vpx_idct32x32_135_add_ssse3;
@@ -7746,32 +7858,53 @@
   vpx_quantize_b_32x32 = vpx_quantize_b_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_quantize_b_32x32 = vpx_quantize_b_32x32_ssse3;
-  vpx_sad16x16x3 = vpx_sad16x16x3_sse3;
+  vpx_sad16x16x3 = vpx_sad16x16x3_c;
+  if (flags & HAS_SSE3)
+    vpx_sad16x16x3 = vpx_sad16x16x3_sse3;
   if (flags & HAS_SSSE3)
     vpx_sad16x16x3 = vpx_sad16x16x3_ssse3;
-  vpx_sad16x16x8 = vpx_sad16x16x8_sse4_1;
-  vpx_sad16x8x3 = vpx_sad16x8x3_sse3;
+  vpx_sad16x16x8 = vpx_sad16x16x8_c;
+  if (flags & HAS_SSE4_1)
+    vpx_sad16x16x8 = vpx_sad16x16x8_sse4_1;
+  vpx_sad16x8x3 = vpx_sad16x8x3_c;
+  if (flags & HAS_SSE3)
+    vpx_sad16x8x3 = vpx_sad16x8x3_sse3;
   if (flags & HAS_SSSE3)
     vpx_sad16x8x3 = vpx_sad16x8x3_ssse3;
-  vpx_sad16x8x8 = vpx_sad16x8x8_sse4_1;
+  vpx_sad16x8x8 = vpx_sad16x8x8_c;
+  if (flags & HAS_SSE4_1)
+    vpx_sad16x8x8 = vpx_sad16x8x8_sse4_1;
   vpx_sad32x16 = vpx_sad32x16_sse2;
   vpx_sad32x16_avg = vpx_sad32x16_avg_sse2;
   vpx_sad32x32 = vpx_sad32x32_sse2;
   vpx_sad32x32_avg = vpx_sad32x32_avg_sse2;
   vpx_sad32x32x4d = vpx_sad32x32x4d_sse2;
+  vpx_sad32x32x8 = vpx_sad32x32x8_c;
   vpx_sad32x64 = vpx_sad32x64_sse2;
   vpx_sad32x64_avg = vpx_sad32x64_avg_sse2;
-  vpx_sad4x4x3 = vpx_sad4x4x3_sse3;
-  vpx_sad4x4x8 = vpx_sad4x4x8_sse4_1;
+  vpx_sad4x4x3 = vpx_sad4x4x3_c;
+  if (flags & HAS_SSE3)
+    vpx_sad4x4x3 = vpx_sad4x4x3_sse3;
+  vpx_sad4x4x8 = vpx_sad4x4x8_c;
+  if (flags & HAS_SSE4_1)
+    vpx_sad4x4x8 = vpx_sad4x4x8_sse4_1;
   vpx_sad64x32 = vpx_sad64x32_sse2;
   vpx_sad64x32_avg = vpx_sad64x32_avg_sse2;
   vpx_sad64x64 = vpx_sad64x64_sse2;
   vpx_sad64x64_avg = vpx_sad64x64_avg_sse2;
   vpx_sad64x64x4d = vpx_sad64x64x4d_sse2;
-  vpx_sad8x16x3 = vpx_sad8x16x3_sse3;
-  vpx_sad8x16x8 = vpx_sad8x16x8_sse4_1;
-  vpx_sad8x8x3 = vpx_sad8x8x3_sse3;
-  vpx_sad8x8x8 = vpx_sad8x8x8_sse4_1;
+  vpx_sad8x16x3 = vpx_sad8x16x3_c;
+  if (flags & HAS_SSE3)
+    vpx_sad8x16x3 = vpx_sad8x16x3_sse3;
+  vpx_sad8x16x8 = vpx_sad8x16x8_c;
+  if (flags & HAS_SSE4_1)
+    vpx_sad8x16x8 = vpx_sad8x16x8_sse4_1;
+  vpx_sad8x8x3 = vpx_sad8x8x3_c;
+  if (flags & HAS_SSE3)
+    vpx_sad8x8x3 = vpx_sad8x8x3_sse3;
+  vpx_sad8x8x8 = vpx_sad8x8x8_c;
+  if (flags & HAS_SSE4_1)
+    vpx_sad8x8x8 = vpx_sad8x8x8_sse4_1;
   vpx_satd = vpx_satd_sse2;
   vpx_scaled_2d = vpx_scaled_2d_c;
   if (flags & HAS_SSSE3)
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_dsp_rtcd_no_acceleration.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_dsp_rtcd_no_acceleration.h
index c13159f..5e5fb66 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_dsp_rtcd_no_acceleration.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/mac/x64/vpx_dsp_rtcd_no_acceleration.h
@@ -975,38 +975,17 @@
 void vpx_hadamard_16x16_c(const int16_t* src_diff,
                           ptrdiff_t src_stride,
                           tran_low_t* coeff);
-void vpx_hadamard_16x16_c(const int16_t* src_diff,
-                             ptrdiff_t src_stride,
-                             tran_low_t* coeff);
-void vpx_hadamard_16x16_avx2(const int16_t* src_diff,
-                             ptrdiff_t src_stride,
-                             tran_low_t* coeff);
-RTCD_EXTERN void (*vpx_hadamard_16x16)(const int16_t* src_diff,
-                                       ptrdiff_t src_stride,
-                                       tran_low_t* coeff);
+#define vpx_hadamard_16x16 vpx_hadamard_16x16_c
 
 void vpx_hadamard_32x32_c(const int16_t* src_diff,
                           ptrdiff_t src_stride,
                           tran_low_t* coeff);
-void vpx_hadamard_32x32_c(const int16_t* src_diff,
-                             ptrdiff_t src_stride,
-                             tran_low_t* coeff);
-void vpx_hadamard_32x32_avx2(const int16_t* src_diff,
-                             ptrdiff_t src_stride,
-                             tran_low_t* coeff);
+#define vpx_hadamard_32x32 vpx_hadamard_32x32_c
 
 void vpx_hadamard_8x8_c(const int16_t* src_diff,
                         ptrdiff_t src_stride,
                         tran_low_t* coeff);
-void vpx_hadamard_8x8_c(const int16_t* src_diff,
-                           ptrdiff_t src_stride,
-                           tran_low_t* coeff);
-void vpx_hadamard_8x8_ssse3(const int16_t* src_diff,
-                            ptrdiff_t src_stride,
-                            tran_low_t* coeff);
-RTCD_EXTERN void (*vpx_hadamard_8x8)(const int16_t* src_diff,
-                                     ptrdiff_t src_stride,
-                                     tran_low_t* coeff);
+#define vpx_hadamard_8x8 vpx_hadamard_8x8_c
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
                             ptrdiff_t y_stride,
@@ -3924,6 +3903,27 @@
                                           int bd);
 #define vpx_highbd_dc_top_predictor_8x8 vpx_highbd_dc_top_predictor_8x8_c
 
+void vpx_highbd_hadamard_16x16_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_16x16)(const int16_t* src_diff,
+                                              ptrdiff_t src_stride,
+                                              tran_low_t* coeff);
+
+void vpx_highbd_hadamard_32x32_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_32x32)(const int16_t* src_diff,
+                                              ptrdiff_t src_stride,
+                                              tran_low_t* coeff);
+
+void vpx_highbd_hadamard_8x8_c(const int16_t* src_diff,
+                               ptrdiff_t src_stride,
+                               tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_8x8)(const int16_t* src_diff,
+                                            ptrdiff_t src_stride,
+                                            tran_low_t* coeff);
+
 void vpx_highbd_fdct16x16_c(const int16_t* input,
                             tran_low_t* output,
                             int stride);
@@ -4991,6 +4991,9 @@
                                        int bd);
 #define vpx_highbd_v_predictor_32x32 vpx_highbd_v_predictor_32x32_c
 
+int vpx_highbd_satd_c(const tran_low_t* coeff, int length);
+#define vpx_highbd_satd vpx_highbd_satd_c
+
 void vpx_highbd_v_predictor_4x4_c(uint16_t* dst,
                                   ptrdiff_t y_stride,
                                   const uint16_t* above,
@@ -5860,6 +5863,22 @@
                                     int ref_stride,
                                     uint32_t* sad_array);
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+void vpx_sad32x32x8_avx2(const uint8_t* src_ptr,
+                         int src_stride,
+                         const uint8_t* ref_ptr,
+                         int ref_stride,
+                         uint32_t* sad_array);
+RTCD_EXTERN void (*vpx_sad32x32x8)(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   uint32_t* sad_array);
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -7607,8 +7626,6 @@
   vpx_d63_predictor_4x4 = vpx_d63_predictor_4x4_c;
   vpx_d63_predictor_8x8 = vpx_d63_predictor_8x8_c;
   vpx_get16x16var = vpx_get16x16var_c;
-  vpx_hadamard_16x16 = vpx_hadamard_16x16_c;
-  vpx_hadamard_8x8 = vpx_hadamard_8x8_c;
   vpx_highbd_convolve8 = vpx_highbd_convolve8_c;
   vpx_highbd_convolve8_avg = vpx_highbd_convolve8_avg_c;
   vpx_highbd_convolve8_avg_horiz = vpx_highbd_convolve8_avg_horiz_c;
@@ -7636,6 +7653,9 @@
   vpx_highbd_d63_predictor_16x16 = vpx_highbd_d63_predictor_16x16_c;
   vpx_highbd_d63_predictor_32x32 = vpx_highbd_d63_predictor_32x32_c;
   vpx_highbd_d63_predictor_8x8 = vpx_highbd_d63_predictor_8x8_c;
+  vpx_highbd_hadamard_16x16 = vpx_highbd_hadamard_16x16_c;
+  vpx_highbd_hadamard_32x32 = vpx_highbd_hadamard_32x32_c;
+  vpx_highbd_hadamard_8x8 = vpx_highbd_hadamard_8x8_c;
   vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_c;
   vpx_highbd_idct16x16_256_add = vpx_highbd_idct16x16_256_add_c;
   vpx_highbd_idct16x16_38_add = vpx_highbd_idct16x16_38_add_c;
@@ -7663,6 +7683,7 @@
   vpx_sad32x32 = vpx_sad32x32_c;
   vpx_sad32x32_avg = vpx_sad32x32_avg_c;
   vpx_sad32x32x4d = vpx_sad32x32x4d_c;
+  vpx_sad32x32x8 = vpx_sad32x32x8_c;
   vpx_sad32x64 = vpx_sad32x64_c;
   vpx_sad32x64_avg = vpx_sad32x64_avg_c;
   vpx_sad4x4x3 = vpx_sad4x4x3_c;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vp8_rtcd.h
index dc054d6..aa475b5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vp8_rtcd.h
@@ -27,44 +27,44 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
 #define vp8_bilinear_predict16x16 vp8_bilinear_predict16x16_c
 
-void vp8_bilinear_predict4x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_c
 
-void vp8_bilinear_predict8x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_c
 
-void vp8_bilinear_predict8x8_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_bilinear_predict8x8 vp8_bilinear_predict8x8_c
 
 void vp8_blend_b_c(unsigned char* y,
                    unsigned char* u,
                    unsigned char* v,
-                   int y1,
-                   int u1,
-                   int v1,
+                   int y_1,
+                   int u_1,
+                   int v_1,
                    int alpha,
                    int stride);
 #define vp8_blend_b vp8_blend_b_c
@@ -72,9 +72,9 @@
 void vp8_blend_mb_inner_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
@@ -82,9 +82,9 @@
 void vp8_blend_mb_outer_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
@@ -92,28 +92,35 @@
 int vp8_block_error_c(short* coeff, short* dqcoeff);
 #define vp8_block_error vp8_block_error_c
 
+void vp8_copy32xn_c(const unsigned char* src_ptr,
+                    int src_stride,
+                    unsigned char* dst_ptr,
+                    int dst_stride,
+                    int height);
+#define vp8_copy32xn vp8_copy32xn_c
+
 void vp8_copy_mem16x16_c(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 #define vp8_copy_mem16x16 vp8_copy_mem16x16_c
 
 void vp8_copy_mem8x4_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 #define vp8_copy_mem8x4 vp8_copy_mem8x4_c
 
 void vp8_copy_mem8x8_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 #define vp8_copy_mem8x8 vp8_copy_mem8x8_c
 
-void vp8_dc_only_idct_add_c(short input,
-                            unsigned char* pred,
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
                             int pred_stride,
-                            unsigned char* dst,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 #define vp8_dc_only_idct_add vp8_dc_only_idct_add_c
 
@@ -139,7 +146,7 @@
 
 void vp8_dequant_idct_add_c(short* input,
                             short* dq,
-                            unsigned char* output,
+                            unsigned char* dest,
                             int stride);
 #define vp8_dequant_idct_add vp8_dequant_idct_add_c
 
@@ -158,7 +165,7 @@
                                     char* eobs);
 #define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_c
 
-void vp8_dequantize_b_c(struct blockd*, short* dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
 #define vp8_dequantize_b vp8_dequantize_b_c
 
 int vp8_diamond_search_sad_c(struct macroblock* x,
@@ -209,55 +216,55 @@
                           union int_mv* center_mv);
 #define vp8_full_search_sad vp8_full_search_sad_c
 
-void vp8_loop_filter_bh_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
 #define vp8_loop_filter_bh vp8_loop_filter_bh_c
 
-void vp8_loop_filter_bv_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
 #define vp8_loop_filter_bv vp8_loop_filter_bv_c
 
-void vp8_loop_filter_mbh_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbh vp8_loop_filter_mbh_c
 
-void vp8_loop_filter_mbv_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbv vp8_loop_filter_mbv_c
 
-void vp8_loop_filter_bhs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
 #define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_c
 
-void vp8_loop_filter_bvs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
 #define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_c
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y,
-                                              int ystride,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbh vp8_loop_filter_simple_horizontal_edge_c
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y,
-                                            int ystride,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
                                             const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbv vp8_loop_filter_simple_vertical_edge_c
 
@@ -271,8 +278,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -288,50 +295,50 @@
 #define vp8_short_fdct8x4 vp8_short_fdct8x4_c
 
 void vp8_short_idct4x4llm_c(short* input,
-                            unsigned char* pred,
-                            int pitch,
-                            unsigned char* dst,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 #define vp8_short_idct4x4llm vp8_short_idct4x4llm_c
 
-void vp8_short_inv_walsh4x4_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_c
 
-void vp8_short_inv_walsh4x4_1_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
 void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
 #define vp8_short_walsh4x4 vp8_short_walsh4x4_c
 
-void vp8_sixtap_predict16x16_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
 #define vp8_sixtap_predict16x16 vp8_sixtap_predict16x16_c
 
-void vp8_sixtap_predict4x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
 #define vp8_sixtap_predict4x4 vp8_sixtap_predict4x4_c
 
-void vp8_sixtap_predict8x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
 #define vp8_sixtap_predict8x4 vp8_sixtap_predict8x4_c
 
-void vp8_sixtap_predict8x8_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
 #define vp8_sixtap_predict8x8 vp8_sixtap_predict8x8_c
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vp9_rtcd.h
index 289a739..0091393 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vp9_rtcd.h
@@ -64,21 +64,6 @@
                              const struct mv* center_mv);
 #define vp9_diamond_search_sad vp9_diamond_search_sad_c
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-#define vp9_fdct8x8_quant vp9_fdct8x8_quant_c
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -143,8 +128,8 @@
 #define vp9_highbd_fwht4x4 vp9_highbd_fwht4x4_c
 
 void vp9_highbd_iht16x16_256_add_c(const tran_low_t* input,
-                                   uint16_t* output,
-                                   int pitch,
+                                   uint16_t* dest,
+                                   int stride,
                                    int tx_type,
                                    int bd);
 #define vp9_highbd_iht16x16_256_add vp9_highbd_iht16x16_256_add_c
@@ -219,14 +204,15 @@
                                         unsigned int block_width,
                                         unsigned int block_height,
                                         int strength,
-                                        int filter_weight,
+                                        int* blk_fw,
+                                        int use_32x32,
                                         uint32_t* accumulator,
                                         uint16_t* count);
 #define vp9_highbd_temporal_filter_apply vp9_highbd_temporal_filter_apply_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 #define vp9_iht16x16_256_add vp9_iht16x16_256_add_c
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vpx_config.c
index 0eacb8b..8aad25f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=generic-gnu --enable-vp9-highbitdepth --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs";
+static const char* const cfg = "--target=generic-gnu --enable-vp9-highbitdepth --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vpx_config.h
index 180ab1b..fddb76b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      inline
+#define VPX_ARCH_ARM 0
 #define ARCH_ARM 0
+#define VPX_ARCH_MIPS 0
 #define ARCH_MIPS 0
+#define VPX_ARCH_X86 0
 #define ARCH_X86 0
+#define VPX_ARCH_X86_64 0
 #define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 0
 #define HAVE_NEON_ASM 0
@@ -78,21 +83,25 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
 #define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 1
 #define CONFIG_BETTER_HW_COMPATIBILITY 0
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
-#define CONFIG_SPATIAL_SVC 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vpx_dsp_rtcd.h
index 29706b5..8ba4d880 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/nacl/vpx_dsp_rtcd.h
@@ -139,253 +139,253 @@
 #define vpx_convolve_copy vpx_convolve_copy_c
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_c
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_c
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_c
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_c
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_16x16 vpx_d153_predictor_16x16_c
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d153_predictor_32x32 vpx_d153_predictor_32x32_c
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_4x4 vpx_d153_predictor_4x4_c
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d153_predictor_8x8 vpx_d153_predictor_8x8_c
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_16x16 vpx_d207_predictor_16x16_c
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d207_predictor_32x32 vpx_d207_predictor_32x32_c
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_c
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d207_predictor_8x8 vpx_d207_predictor_8x8_c
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d45_predictor_16x16 vpx_d45_predictor_16x16_c
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d45_predictor_32x32 vpx_d45_predictor_32x32_c
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_c
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_c
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_16x16 vpx_d63_predictor_16x16_c
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_d63_predictor_32x32 vpx_d63_predictor_32x32_c
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_4x4 vpx_d63_predictor_4x4_c
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_d63_predictor_8x8 vpx_d63_predictor_8x8_c
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_c
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_c
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_c
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_c
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_c
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_c
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_c
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_c
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_c
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_c
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_c
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_c
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_c
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 #define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_c
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_c
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_c
@@ -418,7 +418,7 @@
 #define vpx_fdct8x8_1 vpx_fdct8x8_1_c
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
@@ -426,13 +426,13 @@
 #define vpx_get16x16var vpx_get16x16var_c
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 #define vpx_get4x4sse_cs vpx_get4x4sse_cs_c
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
@@ -443,25 +443,25 @@
 #define vpx_get_mb_ss vpx_get_mb_ss_c
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_h_predictor_16x16 vpx_h_predictor_16x16_c
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_h_predictor_32x32 vpx_h_predictor_32x32_c
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_h_predictor_4x4 vpx_h_predictor_4x4_c
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_h_predictor_8x8 vpx_h_predictor_8x8_c
@@ -482,13 +482,13 @@
 #define vpx_hadamard_8x8 vpx_hadamard_8x8_c
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
 
 void vpx_highbd_10_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
@@ -496,7 +496,7 @@
 #define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_c
 
 void vpx_highbd_10_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
@@ -504,38 +504,38 @@
 #define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_c
 
 unsigned int vpx_highbd_10_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 #define vpx_highbd_10_mse16x16 vpx_highbd_10_mse16x16_c
 
 unsigned int vpx_highbd_10_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse16x8 vpx_highbd_10_mse16x8_c
 
 unsigned int vpx_highbd_10_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse8x16 vpx_highbd_10_mse8x16_c
 
 unsigned int vpx_highbd_10_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_10_mse8x8 vpx_highbd_10_mse8x8_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -545,9 +545,9 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -556,9 +556,9 @@
   vpx_highbd_10_sub_pixel_avg_variance16x32_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -568,9 +568,9 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -580,9 +580,9 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -592,9 +592,9 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -603,9 +603,9 @@
   vpx_highbd_10_sub_pixel_avg_variance32x64_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -614,9 +614,9 @@
   vpx_highbd_10_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -626,9 +626,9 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -638,9 +638,9 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -649,9 +649,9 @@
   vpx_highbd_10_sub_pixel_avg_variance64x64_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -660,9 +660,9 @@
   vpx_highbd_10_sub_pixel_avg_variance8x16_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -671,9 +671,9 @@
   vpx_highbd_10_sub_pixel_avg_variance8x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -682,9 +682,9 @@
   vpx_highbd_10_sub_pixel_avg_variance8x8_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -692,9 +692,9 @@
   vpx_highbd_10_sub_pixel_variance16x16_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -702,9 +702,9 @@
   vpx_highbd_10_sub_pixel_variance16x32_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -712,9 +712,9 @@
   vpx_highbd_10_sub_pixel_variance16x8_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -722,9 +722,9 @@
   vpx_highbd_10_sub_pixel_variance32x16_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -732,9 +732,9 @@
   vpx_highbd_10_sub_pixel_variance32x32_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -742,9 +742,9 @@
   vpx_highbd_10_sub_pixel_variance32x64_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -752,9 +752,9 @@
   vpx_highbd_10_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -762,9 +762,9 @@
   vpx_highbd_10_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -772,9 +772,9 @@
   vpx_highbd_10_sub_pixel_variance64x32_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -782,9 +782,9 @@
   vpx_highbd_10_sub_pixel_variance64x64_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -792,9 +792,9 @@
   vpx_highbd_10_sub_pixel_variance8x16_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -802,9 +802,9 @@
   vpx_highbd_10_sub_pixel_variance8x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -812,98 +812,98 @@
   vpx_highbd_10_sub_pixel_variance8x8_c
 
 unsigned int vpx_highbd_10_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_10_variance16x16 vpx_highbd_10_variance16x16_c
 
 unsigned int vpx_highbd_10_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_10_variance16x32 vpx_highbd_10_variance16x32_c
 
 unsigned int vpx_highbd_10_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_10_variance16x8 vpx_highbd_10_variance16x8_c
 
 unsigned int vpx_highbd_10_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_10_variance32x16 vpx_highbd_10_variance32x16_c
 
 unsigned int vpx_highbd_10_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_10_variance32x32 vpx_highbd_10_variance32x32_c
 
 unsigned int vpx_highbd_10_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_10_variance32x64 vpx_highbd_10_variance32x64_c
 
 unsigned int vpx_highbd_10_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x4 vpx_highbd_10_variance4x4_c
 
 unsigned int vpx_highbd_10_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x8 vpx_highbd_10_variance4x8_c
 
 unsigned int vpx_highbd_10_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_10_variance64x32 vpx_highbd_10_variance64x32_c
 
 unsigned int vpx_highbd_10_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_10_variance64x64 vpx_highbd_10_variance64x64_c
 
 unsigned int vpx_highbd_10_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_10_variance8x16 vpx_highbd_10_variance8x16_c
 
 unsigned int vpx_highbd_10_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance8x4 vpx_highbd_10_variance8x4_c
 
 unsigned int vpx_highbd_10_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance8x8 vpx_highbd_10_variance8x8_c
 
 void vpx_highbd_12_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
@@ -911,7 +911,7 @@
 #define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_c
 
 void vpx_highbd_12_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
@@ -919,38 +919,38 @@
 #define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_c
 
 unsigned int vpx_highbd_12_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 #define vpx_highbd_12_mse16x16 vpx_highbd_12_mse16x16_c
 
 unsigned int vpx_highbd_12_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse16x8 vpx_highbd_12_mse16x8_c
 
 unsigned int vpx_highbd_12_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse8x16 vpx_highbd_12_mse8x16_c
 
 unsigned int vpx_highbd_12_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_12_mse8x8 vpx_highbd_12_mse8x8_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -960,9 +960,9 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -971,9 +971,9 @@
   vpx_highbd_12_sub_pixel_avg_variance16x32_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -983,9 +983,9 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -995,9 +995,9 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1007,9 +1007,9 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1018,9 +1018,9 @@
   vpx_highbd_12_sub_pixel_avg_variance32x64_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1029,9 +1029,9 @@
   vpx_highbd_12_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1041,9 +1041,9 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1053,9 +1053,9 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1064,9 +1064,9 @@
   vpx_highbd_12_sub_pixel_avg_variance64x64_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1075,9 +1075,9 @@
   vpx_highbd_12_sub_pixel_avg_variance8x16_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1086,9 +1086,9 @@
   vpx_highbd_12_sub_pixel_avg_variance8x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1097,9 +1097,9 @@
   vpx_highbd_12_sub_pixel_avg_variance8x8_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -1107,9 +1107,9 @@
   vpx_highbd_12_sub_pixel_variance16x16_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -1117,9 +1117,9 @@
   vpx_highbd_12_sub_pixel_variance16x32_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1127,9 +1127,9 @@
   vpx_highbd_12_sub_pixel_variance16x8_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -1137,9 +1137,9 @@
   vpx_highbd_12_sub_pixel_variance32x16_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -1147,9 +1147,9 @@
   vpx_highbd_12_sub_pixel_variance32x32_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -1157,9 +1157,9 @@
   vpx_highbd_12_sub_pixel_variance32x64_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1167,9 +1167,9 @@
   vpx_highbd_12_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1177,9 +1177,9 @@
   vpx_highbd_12_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -1187,9 +1187,9 @@
   vpx_highbd_12_sub_pixel_variance64x32_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -1197,9 +1197,9 @@
   vpx_highbd_12_sub_pixel_variance64x64_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1207,9 +1207,9 @@
   vpx_highbd_12_sub_pixel_variance8x16_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1217,9 +1217,9 @@
   vpx_highbd_12_sub_pixel_variance8x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1227,98 +1227,98 @@
   vpx_highbd_12_sub_pixel_variance8x8_c
 
 unsigned int vpx_highbd_12_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_12_variance16x16 vpx_highbd_12_variance16x16_c
 
 unsigned int vpx_highbd_12_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_12_variance16x32 vpx_highbd_12_variance16x32_c
 
 unsigned int vpx_highbd_12_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_12_variance16x8 vpx_highbd_12_variance16x8_c
 
 unsigned int vpx_highbd_12_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_12_variance32x16 vpx_highbd_12_variance32x16_c
 
 unsigned int vpx_highbd_12_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_12_variance32x32 vpx_highbd_12_variance32x32_c
 
 unsigned int vpx_highbd_12_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_12_variance32x64 vpx_highbd_12_variance32x64_c
 
 unsigned int vpx_highbd_12_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x4 vpx_highbd_12_variance4x4_c
 
 unsigned int vpx_highbd_12_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x8 vpx_highbd_12_variance4x8_c
 
 unsigned int vpx_highbd_12_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_12_variance64x32 vpx_highbd_12_variance64x32_c
 
 unsigned int vpx_highbd_12_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_12_variance64x64 vpx_highbd_12_variance64x64_c
 
 unsigned int vpx_highbd_12_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_12_variance8x16 vpx_highbd_12_variance8x16_c
 
 unsigned int vpx_highbd_12_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance8x4 vpx_highbd_12_variance8x4_c
 
 unsigned int vpx_highbd_12_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance8x8 vpx_highbd_12_variance8x8_c
 
 void vpx_highbd_8_get16x16var_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse,
@@ -1326,7 +1326,7 @@
 #define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_c
 
 void vpx_highbd_8_get8x8var_c(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
                               int ref_stride,
                               unsigned int* sse,
@@ -1334,37 +1334,37 @@
 #define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_c
 
 unsigned int vpx_highbd_8_mse16x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_8_mse16x16 vpx_highbd_8_mse16x16_c
 
 unsigned int vpx_highbd_8_mse16x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse16x8 vpx_highbd_8_mse16x8_c
 
 unsigned int vpx_highbd_8_mse8x16_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse8x16 vpx_highbd_8_mse8x16_c
 
 unsigned int vpx_highbd_8_mse8x8_c(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
-                                   int recon_stride,
+                                   int ref_stride,
                                    unsigned int* sse);
 #define vpx_highbd_8_mse8x8 vpx_highbd_8_mse8x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1373,9 +1373,9 @@
   vpx_highbd_8_sub_pixel_avg_variance16x16_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1384,9 +1384,9 @@
   vpx_highbd_8_sub_pixel_avg_variance16x32_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1395,9 +1395,9 @@
   vpx_highbd_8_sub_pixel_avg_variance16x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1406,9 +1406,9 @@
   vpx_highbd_8_sub_pixel_avg_variance32x16_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1417,9 +1417,9 @@
   vpx_highbd_8_sub_pixel_avg_variance32x32_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1428,9 +1428,9 @@
   vpx_highbd_8_sub_pixel_avg_variance32x64_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -1439,9 +1439,9 @@
   vpx_highbd_8_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -1450,9 +1450,9 @@
   vpx_highbd_8_sub_pixel_avg_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1461,9 +1461,9 @@
   vpx_highbd_8_sub_pixel_avg_variance64x32_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
@@ -1472,9 +1472,9 @@
   vpx_highbd_8_sub_pixel_avg_variance64x64_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1483,9 +1483,9 @@
   vpx_highbd_8_sub_pixel_avg_variance8x16_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -1494,9 +1494,9 @@
   vpx_highbd_8_sub_pixel_avg_variance8x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -1505,9 +1505,9 @@
   vpx_highbd_8_sub_pixel_avg_variance8x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1515,9 +1515,9 @@
   vpx_highbd_8_sub_pixel_variance16x16_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1525,9 +1525,9 @@
   vpx_highbd_8_sub_pixel_variance16x32_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1535,9 +1535,9 @@
   vpx_highbd_8_sub_pixel_variance16x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1545,9 +1545,9 @@
   vpx_highbd_8_sub_pixel_variance32x16_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1555,9 +1555,9 @@
   vpx_highbd_8_sub_pixel_variance32x32_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1565,27 +1565,27 @@
   vpx_highbd_8_sub_pixel_variance32x64_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x4 vpx_highbd_8_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x8 vpx_highbd_8_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1593,9 +1593,9 @@
   vpx_highbd_8_sub_pixel_variance64x32_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
@@ -1603,9 +1603,9 @@
   vpx_highbd_8_sub_pixel_variance64x64_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1613,118 +1613,118 @@
   vpx_highbd_8_sub_pixel_variance8x16_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance8x4 vpx_highbd_8_sub_pixel_variance8x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance8x8 vpx_highbd_8_sub_pixel_variance8x8_c
 
 unsigned int vpx_highbd_8_variance16x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_8_variance16x16 vpx_highbd_8_variance16x16_c
 
 unsigned int vpx_highbd_8_variance16x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_8_variance16x32 vpx_highbd_8_variance16x32_c
 
 unsigned int vpx_highbd_8_variance16x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_8_variance16x8 vpx_highbd_8_variance16x8_c
 
 unsigned int vpx_highbd_8_variance32x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_8_variance32x16 vpx_highbd_8_variance32x16_c
 
 unsigned int vpx_highbd_8_variance32x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_8_variance32x32 vpx_highbd_8_variance32x32_c
 
 unsigned int vpx_highbd_8_variance32x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_8_variance32x64 vpx_highbd_8_variance32x64_c
 
 unsigned int vpx_highbd_8_variance4x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x4 vpx_highbd_8_variance4x4_c
 
 unsigned int vpx_highbd_8_variance4x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x8 vpx_highbd_8_variance4x8_c
 
 unsigned int vpx_highbd_8_variance64x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_8_variance64x32 vpx_highbd_8_variance64x32_c
 
 unsigned int vpx_highbd_8_variance64x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 #define vpx_highbd_8_variance64x64 vpx_highbd_8_variance64x64_c
 
 unsigned int vpx_highbd_8_variance8x16_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_8_variance8x16 vpx_highbd_8_variance8x16_c
 
 unsigned int vpx_highbd_8_variance8x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance8x4 vpx_highbd_8_variance8x4_c
 
 unsigned int vpx_highbd_8_variance8x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance8x8 vpx_highbd_8_variance8x8_c
 
-unsigned int vpx_highbd_avg_4x4_c(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_4x4_c(const uint8_t* s8, int p);
 #define vpx_highbd_avg_4x4 vpx_highbd_avg_4x4_c
 
-unsigned int vpx_highbd_avg_8x8_c(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_8x8_c(const uint8_t* s8, int p);
 #define vpx_highbd_avg_8x8 vpx_highbd_avg_8x8_c
 
 void vpx_highbd_comp_avg_pred_c(uint16_t* comp_pred,
@@ -1746,7 +1746,7 @@
                             int y_step_q4,
                             int w,
                             int h,
-                            int bps);
+                            int bd);
 #define vpx_highbd_convolve8 vpx_highbd_convolve8_c
 
 void vpx_highbd_convolve8_avg_c(const uint16_t* src,
@@ -1760,7 +1760,7 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 #define vpx_highbd_convolve8_avg vpx_highbd_convolve8_avg_c
 
 void vpx_highbd_convolve8_avg_horiz_c(const uint16_t* src,
@@ -1774,7 +1774,7 @@
                                       int y_step_q4,
                                       int w,
                                       int h,
-                                      int bps);
+                                      int bd);
 #define vpx_highbd_convolve8_avg_horiz vpx_highbd_convolve8_avg_horiz_c
 
 void vpx_highbd_convolve8_avg_vert_c(const uint16_t* src,
@@ -1788,7 +1788,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 #define vpx_highbd_convolve8_avg_vert vpx_highbd_convolve8_avg_vert_c
 
 void vpx_highbd_convolve8_horiz_c(const uint16_t* src,
@@ -1802,7 +1802,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 #define vpx_highbd_convolve8_horiz vpx_highbd_convolve8_horiz_c
 
 void vpx_highbd_convolve8_vert_c(const uint16_t* src,
@@ -1816,7 +1816,7 @@
                                  int y_step_q4,
                                  int w,
                                  int h,
-                                 int bps);
+                                 int bd);
 #define vpx_highbd_convolve8_vert vpx_highbd_convolve8_vert_c
 
 void vpx_highbd_convolve_avg_c(const uint16_t* src,
@@ -1830,7 +1830,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 #define vpx_highbd_convolve_avg vpx_highbd_convolve_avg_c
 
 void vpx_highbd_convolve_copy_c(const uint16_t* src,
@@ -1844,284 +1844,284 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 #define vpx_highbd_convolve_copy vpx_highbd_convolve_copy_c
 
 void vpx_highbd_d117_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d117_predictor_16x16 vpx_highbd_d117_predictor_16x16_c
 
 void vpx_highbd_d117_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d117_predictor_32x32 vpx_highbd_d117_predictor_32x32_c
 
 void vpx_highbd_d117_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d117_predictor_4x4 vpx_highbd_d117_predictor_4x4_c
 
 void vpx_highbd_d117_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d117_predictor_8x8 vpx_highbd_d117_predictor_8x8_c
 
 void vpx_highbd_d135_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d135_predictor_16x16 vpx_highbd_d135_predictor_16x16_c
 
 void vpx_highbd_d135_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d135_predictor_32x32 vpx_highbd_d135_predictor_32x32_c
 
 void vpx_highbd_d135_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d135_predictor_4x4 vpx_highbd_d135_predictor_4x4_c
 
 void vpx_highbd_d135_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d135_predictor_8x8 vpx_highbd_d135_predictor_8x8_c
 
 void vpx_highbd_d153_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d153_predictor_16x16 vpx_highbd_d153_predictor_16x16_c
 
 void vpx_highbd_d153_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d153_predictor_32x32 vpx_highbd_d153_predictor_32x32_c
 
 void vpx_highbd_d153_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d153_predictor_4x4 vpx_highbd_d153_predictor_4x4_c
 
 void vpx_highbd_d153_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d153_predictor_8x8 vpx_highbd_d153_predictor_8x8_c
 
 void vpx_highbd_d207_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d207_predictor_16x16 vpx_highbd_d207_predictor_16x16_c
 
 void vpx_highbd_d207_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d207_predictor_32x32 vpx_highbd_d207_predictor_32x32_c
 
 void vpx_highbd_d207_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d207_predictor_4x4 vpx_highbd_d207_predictor_4x4_c
 
 void vpx_highbd_d207_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_d207_predictor_8x8 vpx_highbd_d207_predictor_8x8_c
 
 void vpx_highbd_d45_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_d45_predictor_16x16 vpx_highbd_d45_predictor_16x16_c
 
 void vpx_highbd_d45_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_d45_predictor_32x32 vpx_highbd_d45_predictor_32x32_c
 
 void vpx_highbd_d45_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_d45_predictor_4x4 vpx_highbd_d45_predictor_4x4_c
 
 void vpx_highbd_d45_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_d45_predictor_8x8 vpx_highbd_d45_predictor_8x8_c
 
 void vpx_highbd_d63_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_d63_predictor_16x16 vpx_highbd_d63_predictor_16x16_c
 
 void vpx_highbd_d63_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_d63_predictor_32x32 vpx_highbd_d63_predictor_32x32_c
 
 void vpx_highbd_d63_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_d63_predictor_4x4 vpx_highbd_d63_predictor_4x4_c
 
 void vpx_highbd_d63_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_d63_predictor_8x8 vpx_highbd_d63_predictor_8x8_c
 
 void vpx_highbd_dc_128_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 #define vpx_highbd_dc_128_predictor_16x16 vpx_highbd_dc_128_predictor_16x16_c
 
 void vpx_highbd_dc_128_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 #define vpx_highbd_dc_128_predictor_32x32 vpx_highbd_dc_128_predictor_32x32_c
 
 void vpx_highbd_dc_128_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_dc_128_predictor_4x4 vpx_highbd_dc_128_predictor_4x4_c
 
 void vpx_highbd_dc_128_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_dc_128_predictor_8x8 vpx_highbd_dc_128_predictor_8x8_c
 
 void vpx_highbd_dc_left_predictor_16x16_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 #define vpx_highbd_dc_left_predictor_16x16 vpx_highbd_dc_left_predictor_16x16_c
 
 void vpx_highbd_dc_left_predictor_32x32_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 #define vpx_highbd_dc_left_predictor_32x32 vpx_highbd_dc_left_predictor_32x32_c
 
 void vpx_highbd_dc_left_predictor_4x4_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_dc_left_predictor_4x4 vpx_highbd_dc_left_predictor_4x4_c
 
 void vpx_highbd_dc_left_predictor_8x8_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_dc_left_predictor_8x8 vpx_highbd_dc_left_predictor_8x8_c
 
 void vpx_highbd_dc_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_dc_predictor_16x16 vpx_highbd_dc_predictor_16x16_c
 
 void vpx_highbd_dc_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_dc_predictor_32x32 vpx_highbd_dc_predictor_32x32_c
 
 void vpx_highbd_dc_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 #define vpx_highbd_dc_predictor_4x4 vpx_highbd_dc_predictor_4x4_c
 
 void vpx_highbd_dc_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 #define vpx_highbd_dc_predictor_8x8 vpx_highbd_dc_predictor_8x8_c
 
 void vpx_highbd_dc_top_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 #define vpx_highbd_dc_top_predictor_16x16 vpx_highbd_dc_top_predictor_16x16_c
 
 void vpx_highbd_dc_top_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 #define vpx_highbd_dc_top_predictor_32x32 vpx_highbd_dc_top_predictor_32x32_c
 
 void vpx_highbd_dc_top_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_dc_top_predictor_4x4 vpx_highbd_dc_top_predictor_4x4_c
 
 void vpx_highbd_dc_top_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
@@ -2164,33 +2164,48 @@
 #define vpx_highbd_fdct8x8_1 vpx_highbd_fdct8x8_1_c
 
 void vpx_highbd_h_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_h_predictor_16x16 vpx_highbd_h_predictor_16x16_c
 
 void vpx_highbd_h_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_h_predictor_32x32 vpx_highbd_h_predictor_32x32_c
 
 void vpx_highbd_h_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 #define vpx_highbd_h_predictor_4x4 vpx_highbd_h_predictor_4x4_c
 
 void vpx_highbd_h_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 #define vpx_highbd_h_predictor_8x8 vpx_highbd_h_predictor_8x8_c
 
+void vpx_highbd_hadamard_16x16_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+#define vpx_highbd_hadamard_16x16 vpx_highbd_hadamard_16x16_c
+
+void vpx_highbd_hadamard_32x32_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+#define vpx_highbd_hadamard_32x32 vpx_highbd_hadamard_32x32_c
+
+void vpx_highbd_hadamard_8x8_c(const int16_t* src_diff,
+                               ptrdiff_t src_stride,
+                               tran_low_t* coeff);
+#define vpx_highbd_hadamard_8x8 vpx_highbd_hadamard_8x8_c
+
 void vpx_highbd_idct16x16_10_add_c(const tran_low_t* input,
                                    uint16_t* dest,
                                    int stride,
@@ -2389,9 +2404,9 @@
                                       int bd);
 #define vpx_highbd_lpf_vertical_8_dual vpx_highbd_lpf_vertical_8_dual_c
 
-void vpx_highbd_minmax_8x8_c(const uint8_t* s,
+void vpx_highbd_minmax_8x8_c(const uint8_t* s8,
                              int p,
-                             const uint8_t* d,
+                             const uint8_t* d8,
                              int dp,
                              int* min,
                              int* max);
@@ -2442,7 +2457,7 @@
 
 void vpx_highbd_sad16x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 #define vpx_highbd_sad16x16x4d vpx_highbd_sad16x16x4d_c
@@ -2462,7 +2477,7 @@
 
 void vpx_highbd_sad16x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 #define vpx_highbd_sad16x32x4d vpx_highbd_sad16x32x4d_c
@@ -2482,7 +2497,7 @@
 
 void vpx_highbd_sad16x8x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 #define vpx_highbd_sad16x8x4d vpx_highbd_sad16x8x4d_c
@@ -2502,7 +2517,7 @@
 
 void vpx_highbd_sad32x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 #define vpx_highbd_sad32x16x4d vpx_highbd_sad32x16x4d_c
@@ -2522,7 +2537,7 @@
 
 void vpx_highbd_sad32x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 #define vpx_highbd_sad32x32x4d vpx_highbd_sad32x32x4d_c
@@ -2542,7 +2557,7 @@
 
 void vpx_highbd_sad32x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 #define vpx_highbd_sad32x64x4d vpx_highbd_sad32x64x4d_c
@@ -2562,7 +2577,7 @@
 
 void vpx_highbd_sad4x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 #define vpx_highbd_sad4x4x4d vpx_highbd_sad4x4x4d_c
@@ -2582,7 +2597,7 @@
 
 void vpx_highbd_sad4x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 #define vpx_highbd_sad4x8x4d vpx_highbd_sad4x8x4d_c
@@ -2602,7 +2617,7 @@
 
 void vpx_highbd_sad64x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 #define vpx_highbd_sad64x32x4d vpx_highbd_sad64x32x4d_c
@@ -2622,7 +2637,7 @@
 
 void vpx_highbd_sad64x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 #define vpx_highbd_sad64x64x4d vpx_highbd_sad64x64x4d_c
@@ -2642,7 +2657,7 @@
 
 void vpx_highbd_sad8x16x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 #define vpx_highbd_sad8x16x4d vpx_highbd_sad8x16x4d_c
@@ -2662,7 +2677,7 @@
 
 void vpx_highbd_sad8x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 #define vpx_highbd_sad8x4x4d vpx_highbd_sad8x4x4d_c
@@ -2682,73 +2697,76 @@
 
 void vpx_highbd_sad8x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 #define vpx_highbd_sad8x8x4d vpx_highbd_sad8x8x4d_c
 
+int vpx_highbd_satd_c(const tran_low_t* coeff, int length);
+#define vpx_highbd_satd vpx_highbd_satd_c
+
 void vpx_highbd_subtract_block_c(int rows,
                                  int cols,
                                  int16_t* diff_ptr,
                                  ptrdiff_t diff_stride,
-                                 const uint8_t* src_ptr,
+                                 const uint8_t* src8_ptr,
                                  ptrdiff_t src_stride,
-                                 const uint8_t* pred_ptr,
+                                 const uint8_t* pred8_ptr,
                                  ptrdiff_t pred_stride,
                                  int bd);
 #define vpx_highbd_subtract_block vpx_highbd_subtract_block_c
 
 void vpx_highbd_tm_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_tm_predictor_16x16 vpx_highbd_tm_predictor_16x16_c
 
 void vpx_highbd_tm_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_tm_predictor_32x32 vpx_highbd_tm_predictor_32x32_c
 
 void vpx_highbd_tm_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 #define vpx_highbd_tm_predictor_4x4 vpx_highbd_tm_predictor_4x4_c
 
 void vpx_highbd_tm_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 #define vpx_highbd_tm_predictor_8x8 vpx_highbd_tm_predictor_8x8_c
 
 void vpx_highbd_v_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_v_predictor_16x16 vpx_highbd_v_predictor_16x16_c
 
 void vpx_highbd_v_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 #define vpx_highbd_v_predictor_32x32 vpx_highbd_v_predictor_32x32_c
 
 void vpx_highbd_v_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 #define vpx_highbd_v_predictor_4x4 vpx_highbd_v_predictor_4x4_c
 
 void vpx_highbd_v_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
@@ -2910,7 +2928,7 @@
                                const uint8_t* thresh1);
 #define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_c
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
@@ -2933,30 +2951,30 @@
 #define vpx_minmax_8x8 vpx_minmax_8x8_c
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 #define vpx_mse16x16 vpx_mse16x16_c
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse16x8 vpx_mse16x8_c
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 #define vpx_mse8x16 vpx_mse8x16_c
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 #define vpx_mse8x8 vpx_mse8x8_c
 
@@ -3031,7 +3049,7 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad16x16x4d vpx_sad16x16x4d_c
@@ -3058,7 +3076,7 @@
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad16x32x4d vpx_sad16x32x4d_c
@@ -3085,7 +3103,7 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 #define vpx_sad16x8x4d vpx_sad16x8x4d_c
@@ -3112,7 +3130,7 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad32x16x4d vpx_sad32x16x4d_c
@@ -3132,11 +3150,18 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad32x32x4d vpx_sad32x32x4d_c
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad32x32x8 vpx_sad32x32x8_c
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -3152,7 +3177,7 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad32x64x4d vpx_sad32x64x4d_c
@@ -3179,7 +3204,7 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad4x4x4d vpx_sad4x4x4d_c
@@ -3206,7 +3231,7 @@
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad4x8x4d vpx_sad4x8x4d_c
@@ -3226,7 +3251,7 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad64x32x4d vpx_sad64x32x4d_c
@@ -3246,7 +3271,7 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 #define vpx_sad64x64x4d vpx_sad64x64x4d_c
@@ -3273,7 +3298,7 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 #define vpx_sad8x16x4d vpx_sad8x16x4d_c
@@ -3300,7 +3325,7 @@
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad8x4x4d vpx_sad8x4x4d_c
@@ -3327,7 +3352,7 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 #define vpx_sad8x8x4d vpx_sad8x8x4d_c
@@ -3421,9 +3446,9 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -3431,9 +3456,9 @@
 #define vpx_sub_pixel_avg_variance16x16 vpx_sub_pixel_avg_variance16x16_c
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -3441,9 +3466,9 @@
 #define vpx_sub_pixel_avg_variance16x32 vpx_sub_pixel_avg_variance16x32_c
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
@@ -3451,9 +3476,9 @@
 #define vpx_sub_pixel_avg_variance16x8 vpx_sub_pixel_avg_variance16x8_c
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -3461,9 +3486,9 @@
 #define vpx_sub_pixel_avg_variance32x16 vpx_sub_pixel_avg_variance32x16_c
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -3471,9 +3496,9 @@
 #define vpx_sub_pixel_avg_variance32x32 vpx_sub_pixel_avg_variance32x32_c
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -3481,9 +3506,9 @@
 #define vpx_sub_pixel_avg_variance32x64 vpx_sub_pixel_avg_variance32x64_c
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -3491,9 +3516,9 @@
 #define vpx_sub_pixel_avg_variance4x4 vpx_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -3501,9 +3526,9 @@
 #define vpx_sub_pixel_avg_variance4x8 vpx_sub_pixel_avg_variance4x8_c
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -3511,9 +3536,9 @@
 #define vpx_sub_pixel_avg_variance64x32 vpx_sub_pixel_avg_variance64x32_c
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
@@ -3521,9 +3546,9 @@
 #define vpx_sub_pixel_avg_variance64x64 vpx_sub_pixel_avg_variance64x64_c
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
@@ -3531,9 +3556,9 @@
 #define vpx_sub_pixel_avg_variance8x16 vpx_sub_pixel_avg_variance8x16_c
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -3541,9 +3566,9 @@
 #define vpx_sub_pixel_avg_variance8x4 vpx_sub_pixel_avg_variance8x4_c
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
@@ -3551,117 +3576,117 @@
 #define vpx_sub_pixel_avg_variance8x8 vpx_sub_pixel_avg_variance8x8_c
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance16x16 vpx_sub_pixel_variance16x16_c
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance16x32 vpx_sub_pixel_variance16x32_c
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 #define vpx_sub_pixel_variance16x8 vpx_sub_pixel_variance16x8_c
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance32x16 vpx_sub_pixel_variance32x16_c
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance32x32 vpx_sub_pixel_variance32x32_c
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance32x64 vpx_sub_pixel_variance32x64_c
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 #define vpx_sub_pixel_variance4x4 vpx_sub_pixel_variance4x4_c
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 #define vpx_sub_pixel_variance4x8 vpx_sub_pixel_variance4x8_c
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance64x32 vpx_sub_pixel_variance64x32_c
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 #define vpx_sub_pixel_variance64x64 vpx_sub_pixel_variance64x64_c
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 #define vpx_sub_pixel_variance8x16 vpx_sub_pixel_variance8x16_c
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 #define vpx_sub_pixel_variance8x4 vpx_sub_pixel_variance8x4_c
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
@@ -3681,146 +3706,146 @@
 #define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_c
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_c
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_c
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_c
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_c
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_v_predictor_16x16 vpx_v_predictor_16x16_c
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 #define vpx_v_predictor_32x32 vpx_v_predictor_32x32_c
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_v_predictor_4x4 vpx_v_predictor_4x4_c
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 #define vpx_v_predictor_8x8 vpx_v_predictor_8x8_c
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance16x16 vpx_variance16x16_c
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance16x32 vpx_variance16x32_c
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 #define vpx_variance16x8 vpx_variance16x8_c
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance32x16 vpx_variance32x16_c
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance32x32 vpx_variance32x32_c
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance32x64 vpx_variance32x64_c
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance4x4 vpx_variance4x4_c
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance4x8 vpx_variance4x8_c
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance64x32 vpx_variance64x32_c
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 #define vpx_variance64x64 vpx_variance64x64_c
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 #define vpx_variance8x16 vpx_variance8x16_c
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance8x4 vpx_variance8x4_c
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 #define vpx_variance8x8 vpx_variance8x8_c
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/vpx_version.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/vpx_version.h
index d8f4d31..cc6a83c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/vpx_version.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/vpx_version.h
@@ -1,8 +1,9 @@
 // This file is generated. Do not edit.
-#define VERSION_MAJOR  1
-#define VERSION_MINOR  7
-#define VERSION_PATCH  0
-#define VERSION_EXTRA  "834-g6c62530c6"
-#define VERSION_PACKED ((VERSION_MAJOR<<16)|(VERSION_MINOR<<8)|(VERSION_PATCH))
-#define VERSION_STRING_NOSP "v1.7.0-834-g6c62530c6"
-#define VERSION_STRING      " v1.7.0-834-g6c62530c6"
+#define VERSION_MAJOR 1
+#define VERSION_MINOR 8
+#define VERSION_PATCH 2
+#define VERSION_EXTRA "100-g5532775ef"
+#define VERSION_PACKED \
+  ((VERSION_MAJOR << 16) | (VERSION_MINOR << 8) | (VERSION_PATCH))
+#define VERSION_STRING_NOSP "v1.8.2-100-g5532775ef"
+#define VERSION_STRING " v1.8.2-100-g5532775ef"
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vp8_rtcd.h
new file mode 100644
index 0000000..c4bb41b
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vp8_rtcd.h
@@ -0,0 +1,505 @@
+// This file is generated. Do not edit.
+#ifndef VP8_RTCD_H_
+#define VP8_RTCD_H_
+
+#ifdef RTCD_C
+#define RTCD_EXTERN
+#else
+#define RTCD_EXTERN extern
+#endif
+
+/*
+ * VP8
+ */
+
+struct blockd;
+struct macroblockd;
+struct loop_filter_info;
+
+/* Encoder forward decls */
+struct block;
+struct macroblock;
+struct variance_vtable;
+union int_mv;
+struct yv12_buffer_config;
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
+                                 int dst_pitch);
+void vp8_bilinear_predict16x16_neon(unsigned char* src_ptr,
+                                    int src_pixels_per_line,
+                                    int xoffset,
+                                    int yoffset,
+                                    unsigned char* dst_ptr,
+                                    int dst_pitch);
+#define vp8_bilinear_predict16x16 vp8_bilinear_predict16x16_neon
+
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict4x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_neon
+
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x4_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_neon
+
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x8_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict8x8 vp8_bilinear_predict8x8_neon
+
+void vp8_blend_b_c(unsigned char* y,
+                   unsigned char* u,
+                   unsigned char* v,
+                   int y_1,
+                   int u_1,
+                   int v_1,
+                   int alpha,
+                   int stride);
+#define vp8_blend_b vp8_blend_b_c
+
+void vp8_blend_mb_inner_c(unsigned char* y,
+                          unsigned char* u,
+                          unsigned char* v,
+                          int y_1,
+                          int u_1,
+                          int v_1,
+                          int alpha,
+                          int stride);
+#define vp8_blend_mb_inner vp8_blend_mb_inner_c
+
+void vp8_blend_mb_outer_c(unsigned char* y,
+                          unsigned char* u,
+                          unsigned char* v,
+                          int y_1,
+                          int u_1,
+                          int v_1,
+                          int alpha,
+                          int stride);
+#define vp8_blend_mb_outer vp8_blend_mb_outer_c
+
+int vp8_block_error_c(short* coeff, short* dqcoeff);
+#define vp8_block_error vp8_block_error_c
+
+void vp8_copy32xn_c(const unsigned char* src_ptr,
+                    int src_stride,
+                    unsigned char* dst_ptr,
+                    int dst_stride,
+                    int height);
+#define vp8_copy32xn vp8_copy32xn_c
+
+void vp8_copy_mem16x16_c(unsigned char* src,
+                         int src_stride,
+                         unsigned char* dst,
+                         int dst_stride);
+void vp8_copy_mem16x16_neon(unsigned char* src,
+                            int src_stride,
+                            unsigned char* dst,
+                            int dst_stride);
+#define vp8_copy_mem16x16 vp8_copy_mem16x16_neon
+
+void vp8_copy_mem8x4_c(unsigned char* src,
+                       int src_stride,
+                       unsigned char* dst,
+                       int dst_stride);
+void vp8_copy_mem8x4_neon(unsigned char* src,
+                          int src_stride,
+                          unsigned char* dst,
+                          int dst_stride);
+#define vp8_copy_mem8x4 vp8_copy_mem8x4_neon
+
+void vp8_copy_mem8x8_c(unsigned char* src,
+                       int src_stride,
+                       unsigned char* dst,
+                       int dst_stride);
+void vp8_copy_mem8x8_neon(unsigned char* src,
+                          int src_stride,
+                          unsigned char* dst,
+                          int dst_stride);
+#define vp8_copy_mem8x8 vp8_copy_mem8x8_neon
+
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
+                            int dst_stride);
+void vp8_dc_only_idct_add_neon(short input_dc,
+                               unsigned char* pred_ptr,
+                               int pred_stride,
+                               unsigned char* dst_ptr,
+                               int dst_stride);
+#define vp8_dc_only_idct_add vp8_dc_only_idct_add_neon
+
+int vp8_denoiser_filter_c(unsigned char* mc_running_avg_y,
+                          int mc_avg_y_stride,
+                          unsigned char* running_avg_y,
+                          int avg_y_stride,
+                          unsigned char* sig,
+                          int sig_stride,
+                          unsigned int motion_magnitude,
+                          int increase_denoising);
+int vp8_denoiser_filter_neon(unsigned char* mc_running_avg_y,
+                             int mc_avg_y_stride,
+                             unsigned char* running_avg_y,
+                             int avg_y_stride,
+                             unsigned char* sig,
+                             int sig_stride,
+                             unsigned int motion_magnitude,
+                             int increase_denoising);
+#define vp8_denoiser_filter vp8_denoiser_filter_neon
+
+int vp8_denoiser_filter_uv_c(unsigned char* mc_running_avg,
+                             int mc_avg_stride,
+                             unsigned char* running_avg,
+                             int avg_stride,
+                             unsigned char* sig,
+                             int sig_stride,
+                             unsigned int motion_magnitude,
+                             int increase_denoising);
+int vp8_denoiser_filter_uv_neon(unsigned char* mc_running_avg,
+                                int mc_avg_stride,
+                                unsigned char* running_avg,
+                                int avg_stride,
+                                unsigned char* sig,
+                                int sig_stride,
+                                unsigned int motion_magnitude,
+                                int increase_denoising);
+#define vp8_denoiser_filter_uv vp8_denoiser_filter_uv_neon
+
+void vp8_dequant_idct_add_c(short* input,
+                            short* dq,
+                            unsigned char* dest,
+                            int stride);
+void vp8_dequant_idct_add_neon(short* input,
+                               short* dq,
+                               unsigned char* dest,
+                               int stride);
+#define vp8_dequant_idct_add vp8_dequant_idct_add_neon
+
+void vp8_dequant_idct_add_uv_block_c(short* q,
+                                     short* dq,
+                                     unsigned char* dst_u,
+                                     unsigned char* dst_v,
+                                     int stride,
+                                     char* eobs);
+void vp8_dequant_idct_add_uv_block_neon(short* q,
+                                        short* dq,
+                                        unsigned char* dst_u,
+                                        unsigned char* dst_v,
+                                        int stride,
+                                        char* eobs);
+#define vp8_dequant_idct_add_uv_block vp8_dequant_idct_add_uv_block_neon
+
+void vp8_dequant_idct_add_y_block_c(short* q,
+                                    short* dq,
+                                    unsigned char* dst,
+                                    int stride,
+                                    char* eobs);
+void vp8_dequant_idct_add_y_block_neon(short* q,
+                                       short* dq,
+                                       unsigned char* dst,
+                                       int stride,
+                                       char* eobs);
+#define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_neon
+
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
+void vp8_dequantize_b_neon(struct blockd*, short* DQC);
+#define vp8_dequantize_b vp8_dequantize_b_neon
+
+int vp8_diamond_search_sad_c(struct macroblock* x,
+                             struct block* b,
+                             struct blockd* d,
+                             union int_mv* ref_mv,
+                             union int_mv* best_mv,
+                             int search_param,
+                             int sad_per_bit,
+                             int* num00,
+                             struct variance_vtable* fn_ptr,
+                             int* mvcost[2],
+                             union int_mv* center_mv);
+#define vp8_diamond_search_sad vp8_diamond_search_sad_c
+
+void vp8_fast_quantize_b_c(struct block*, struct blockd*);
+void vp8_fast_quantize_b_neon(struct block*, struct blockd*);
+#define vp8_fast_quantize_b vp8_fast_quantize_b_neon
+
+void vp8_filter_by_weight16x16_c(unsigned char* src,
+                                 int src_stride,
+                                 unsigned char* dst,
+                                 int dst_stride,
+                                 int src_weight);
+#define vp8_filter_by_weight16x16 vp8_filter_by_weight16x16_c
+
+void vp8_filter_by_weight4x4_c(unsigned char* src,
+                               int src_stride,
+                               unsigned char* dst,
+                               int dst_stride,
+                               int src_weight);
+#define vp8_filter_by_weight4x4 vp8_filter_by_weight4x4_c
+
+void vp8_filter_by_weight8x8_c(unsigned char* src,
+                               int src_stride,
+                               unsigned char* dst,
+                               int dst_stride,
+                               int src_weight);
+#define vp8_filter_by_weight8x8 vp8_filter_by_weight8x8_c
+
+int vp8_full_search_sad_c(struct macroblock* x,
+                          struct block* b,
+                          struct blockd* d,
+                          union int_mv* ref_mv,
+                          int sad_per_bit,
+                          int distance,
+                          struct variance_vtable* fn_ptr,
+                          int* mvcost[2],
+                          union int_mv* center_mv);
+#define vp8_full_search_sad vp8_full_search_sad_c
+
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
+                          int uv_stride,
+                          struct loop_filter_info* lfi);
+void vp8_loop_filter_bh_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
+                             int uv_stride,
+                             struct loop_filter_info* lfi);
+#define vp8_loop_filter_bh vp8_loop_filter_bh_neon
+
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
+                          int uv_stride,
+                          struct loop_filter_info* lfi);
+void vp8_loop_filter_bv_neon(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
+                             int uv_stride,
+                             struct loop_filter_info* lfi);
+#define vp8_loop_filter_bv vp8_loop_filter_bv_neon
+
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
+                           int uv_stride,
+                           struct loop_filter_info* lfi);
+void vp8_loop_filter_mbh_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
+                              int uv_stride,
+                              struct loop_filter_info* lfi);
+#define vp8_loop_filter_mbh vp8_loop_filter_mbh_neon
+
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
+                           int uv_stride,
+                           struct loop_filter_info* lfi);
+void vp8_loop_filter_mbv_neon(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
+                              int uv_stride,
+                              struct loop_filter_info* lfi);
+#define vp8_loop_filter_mbv vp8_loop_filter_mbv_neon
+
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
+                           const unsigned char* blimit);
+void vp8_loop_filter_bhs_neon(unsigned char* y_ptr,
+                              int y_stride,
+                              const unsigned char* blimit);
+#define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_neon
+
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
+                           const unsigned char* blimit);
+void vp8_loop_filter_bvs_neon(unsigned char* y_ptr,
+                              int y_stride,
+                              const unsigned char* blimit);
+#define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_neon
+
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
+                                              const unsigned char* blimit);
+void vp8_loop_filter_mbhs_neon(unsigned char* y_ptr,
+                               int y_stride,
+                               const unsigned char* blimit);
+#define vp8_loop_filter_simple_mbh vp8_loop_filter_mbhs_neon
+
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
+                                            const unsigned char* blimit);
+void vp8_loop_filter_mbvs_neon(unsigned char* y_ptr,
+                               int y_stride,
+                               const unsigned char* blimit);
+#define vp8_loop_filter_simple_mbv vp8_loop_filter_mbvs_neon
+
+int vp8_mbblock_error_c(struct macroblock* mb, int dc);
+#define vp8_mbblock_error vp8_mbblock_error_c
+
+int vp8_mbuverror_c(struct macroblock* mb);
+#define vp8_mbuverror vp8_mbuverror_c
+
+int vp8_refining_search_sad_c(struct macroblock* x,
+                              struct block* b,
+                              struct blockd* d,
+                              union int_mv* ref_mv,
+                              int error_per_bit,
+                              int search_range,
+                              struct variance_vtable* fn_ptr,
+                              int* mvcost[2],
+                              union int_mv* center_mv);
+#define vp8_refining_search_sad vp8_refining_search_sad_c
+
+void vp8_regular_quantize_b_c(struct block*, struct blockd*);
+#define vp8_regular_quantize_b vp8_regular_quantize_b_c
+
+void vp8_short_fdct4x4_c(short* input, short* output, int pitch);
+void vp8_short_fdct4x4_neon(short* input, short* output, int pitch);
+#define vp8_short_fdct4x4 vp8_short_fdct4x4_neon
+
+void vp8_short_fdct8x4_c(short* input, short* output, int pitch);
+void vp8_short_fdct8x4_neon(short* input, short* output, int pitch);
+#define vp8_short_fdct8x4 vp8_short_fdct8x4_neon
+
+void vp8_short_idct4x4llm_c(short* input,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
+                            int dst_stride);
+void vp8_short_idct4x4llm_neon(short* input,
+                               unsigned char* pred_ptr,
+                               int pred_stride,
+                               unsigned char* dst_ptr,
+                               int dst_stride);
+#define vp8_short_idct4x4llm vp8_short_idct4x4llm_neon
+
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
+void vp8_short_inv_walsh4x4_neon(short* input, short* mb_dqcoeff);
+#define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_neon
+
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
+#define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
+
+void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
+void vp8_short_walsh4x4_neon(short* input, short* output, int pitch);
+#define vp8_short_walsh4x4 vp8_short_walsh4x4_neon
+
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_sixtap_predict16x16_neon(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_sixtap_predict16x16 vp8_sixtap_predict16x16_neon
+
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
+                             int dst_pitch);
+void vp8_sixtap_predict4x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
+                                int dst_pitch);
+#define vp8_sixtap_predict4x4 vp8_sixtap_predict4x4_neon
+
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
+                             int dst_pitch);
+void vp8_sixtap_predict8x4_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
+                                int dst_pitch);
+#define vp8_sixtap_predict8x4 vp8_sixtap_predict8x4_neon
+
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
+                             int dst_pitch);
+void vp8_sixtap_predict8x8_neon(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
+                                int dst_pitch);
+#define vp8_sixtap_predict8x8 vp8_sixtap_predict8x8_neon
+
+void vp8_rtcd(void);
+
+#include "vpx_config.h"
+
+#ifdef RTCD_C
+#include "vpx_ports/arm.h"
+static void setup_rtcd_internal(void) {
+  int flags = arm_cpu_caps();
+
+  (void)flags;
+}
+#endif
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vp9_rtcd.h
new file mode 100644
index 0000000..ca740f4
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vp9_rtcd.h
@@ -0,0 +1,342 @@
+// This file is generated. Do not edit.
+#ifndef VP9_RTCD_H_
+#define VP9_RTCD_H_
+
+#ifdef RTCD_C
+#define RTCD_EXTERN
+#else
+#define RTCD_EXTERN extern
+#endif
+
+/*
+ * VP9
+ */
+
+#include "vp9/common/vp9_common.h"
+#include "vp9/common/vp9_enums.h"
+#include "vp9/common/vp9_filter.h"
+#include "vpx/vpx_integer.h"
+
+struct macroblockd;
+
+/* Encoder forward decls */
+struct macroblock;
+struct vp9_variance_vtable;
+struct search_site_config;
+struct mv;
+union int_mv;
+struct yv12_buffer_config;
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+int64_t vp9_block_error_c(const tran_low_t* coeff,
+                          const tran_low_t* dqcoeff,
+                          intptr_t block_size,
+                          int64_t* ssz);
+#define vp9_block_error vp9_block_error_c
+
+int64_t vp9_block_error_fp_c(const tran_low_t* coeff,
+                             const tran_low_t* dqcoeff,
+                             int block_size);
+#define vp9_block_error_fp vp9_block_error_fp_c
+
+int vp9_denoiser_filter_c(const uint8_t* sig,
+                          int sig_stride,
+                          const uint8_t* mc_avg,
+                          int mc_avg_stride,
+                          uint8_t* avg,
+                          int avg_stride,
+                          int increase_denoising,
+                          BLOCK_SIZE bs,
+                          int motion_magnitude);
+int vp9_denoiser_filter_neon(const uint8_t* sig,
+                             int sig_stride,
+                             const uint8_t* mc_avg,
+                             int mc_avg_stride,
+                             uint8_t* avg,
+                             int avg_stride,
+                             int increase_denoising,
+                             BLOCK_SIZE bs,
+                             int motion_magnitude);
+#define vp9_denoiser_filter vp9_denoiser_filter_neon
+
+int vp9_diamond_search_sad_c(const struct macroblock* x,
+                             const struct search_site_config* cfg,
+                             struct mv* ref_mv,
+                             struct mv* best_mv,
+                             int search_param,
+                             int sad_per_bit,
+                             int* num00,
+                             const struct vp9_variance_vtable* fn_ptr,
+                             const struct mv* center_mv);
+#define vp9_diamond_search_sad vp9_diamond_search_sad_c
+
+void vp9_fht16x16_c(const int16_t* input,
+                    tran_low_t* output,
+                    int stride,
+                    int tx_type);
+#define vp9_fht16x16 vp9_fht16x16_c
+
+void vp9_fht4x4_c(const int16_t* input,
+                  tran_low_t* output,
+                  int stride,
+                  int tx_type);
+#define vp9_fht4x4 vp9_fht4x4_c
+
+void vp9_fht8x8_c(const int16_t* input,
+                  tran_low_t* output,
+                  int stride,
+                  int tx_type);
+#define vp9_fht8x8 vp9_fht8x8_c
+
+void vp9_filter_by_weight16x16_c(const uint8_t* src,
+                                 int src_stride,
+                                 uint8_t* dst,
+                                 int dst_stride,
+                                 int src_weight);
+#define vp9_filter_by_weight16x16 vp9_filter_by_weight16x16_c
+
+void vp9_filter_by_weight8x8_c(const uint8_t* src,
+                               int src_stride,
+                               uint8_t* dst,
+                               int dst_stride,
+                               int src_weight);
+#define vp9_filter_by_weight8x8 vp9_filter_by_weight8x8_c
+
+void vp9_fwht4x4_c(const int16_t* input, tran_low_t* output, int stride);
+#define vp9_fwht4x4 vp9_fwht4x4_c
+
+int64_t vp9_highbd_block_error_c(const tran_low_t* coeff,
+                                 const tran_low_t* dqcoeff,
+                                 intptr_t block_size,
+                                 int64_t* ssz,
+                                 int bd);
+#define vp9_highbd_block_error vp9_highbd_block_error_c
+
+void vp9_highbd_fht16x16_c(const int16_t* input,
+                           tran_low_t* output,
+                           int stride,
+                           int tx_type);
+#define vp9_highbd_fht16x16 vp9_highbd_fht16x16_c
+
+void vp9_highbd_fht4x4_c(const int16_t* input,
+                         tran_low_t* output,
+                         int stride,
+                         int tx_type);
+#define vp9_highbd_fht4x4 vp9_highbd_fht4x4_c
+
+void vp9_highbd_fht8x8_c(const int16_t* input,
+                         tran_low_t* output,
+                         int stride,
+                         int tx_type);
+#define vp9_highbd_fht8x8 vp9_highbd_fht8x8_c
+
+void vp9_highbd_fwht4x4_c(const int16_t* input, tran_low_t* output, int stride);
+#define vp9_highbd_fwht4x4 vp9_highbd_fwht4x4_c
+
+void vp9_highbd_iht16x16_256_add_c(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int tx_type,
+                                   int bd);
+void vp9_highbd_iht16x16_256_add_neon(const tran_low_t* input,
+                                      uint16_t* dest,
+                                      int stride,
+                                      int tx_type,
+                                      int bd);
+#define vp9_highbd_iht16x16_256_add vp9_highbd_iht16x16_256_add_neon
+
+void vp9_highbd_iht4x4_16_add_c(const tran_low_t* input,
+                                uint16_t* dest,
+                                int stride,
+                                int tx_type,
+                                int bd);
+void vp9_highbd_iht4x4_16_add_neon(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int tx_type,
+                                   int bd);
+#define vp9_highbd_iht4x4_16_add vp9_highbd_iht4x4_16_add_neon
+
+void vp9_highbd_iht8x8_64_add_c(const tran_low_t* input,
+                                uint16_t* dest,
+                                int stride,
+                                int tx_type,
+                                int bd);
+void vp9_highbd_iht8x8_64_add_neon(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int tx_type,
+                                   int bd);
+#define vp9_highbd_iht8x8_64_add vp9_highbd_iht8x8_64_add_neon
+
+void vp9_highbd_mbpost_proc_across_ip_c(uint16_t* src,
+                                        int pitch,
+                                        int rows,
+                                        int cols,
+                                        int flimit);
+#define vp9_highbd_mbpost_proc_across_ip vp9_highbd_mbpost_proc_across_ip_c
+
+void vp9_highbd_mbpost_proc_down_c(uint16_t* dst,
+                                   int pitch,
+                                   int rows,
+                                   int cols,
+                                   int flimit);
+#define vp9_highbd_mbpost_proc_down vp9_highbd_mbpost_proc_down_c
+
+void vp9_highbd_post_proc_down_and_across_c(const uint16_t* src_ptr,
+                                            uint16_t* dst_ptr,
+                                            int src_pixels_per_line,
+                                            int dst_pixels_per_line,
+                                            int rows,
+                                            int cols,
+                                            int flimit);
+#define vp9_highbd_post_proc_down_and_across \
+  vp9_highbd_post_proc_down_and_across_c
+
+void vp9_highbd_quantize_fp_c(const tran_low_t* coeff_ptr,
+                              intptr_t n_coeffs,
+                              int skip_block,
+                              const int16_t* round_ptr,
+                              const int16_t* quant_ptr,
+                              tran_low_t* qcoeff_ptr,
+                              tran_low_t* dqcoeff_ptr,
+                              const int16_t* dequant_ptr,
+                              uint16_t* eob_ptr,
+                              const int16_t* scan,
+                              const int16_t* iscan);
+#define vp9_highbd_quantize_fp vp9_highbd_quantize_fp_c
+
+void vp9_highbd_quantize_fp_32x32_c(const tran_low_t* coeff_ptr,
+                                    intptr_t n_coeffs,
+                                    int skip_block,
+                                    const int16_t* round_ptr,
+                                    const int16_t* quant_ptr,
+                                    tran_low_t* qcoeff_ptr,
+                                    tran_low_t* dqcoeff_ptr,
+                                    const int16_t* dequant_ptr,
+                                    uint16_t* eob_ptr,
+                                    const int16_t* scan,
+                                    const int16_t* iscan);
+#define vp9_highbd_quantize_fp_32x32 vp9_highbd_quantize_fp_32x32_c
+
+void vp9_highbd_temporal_filter_apply_c(const uint8_t* frame1,
+                                        unsigned int stride,
+                                        const uint8_t* frame2,
+                                        unsigned int block_width,
+                                        unsigned int block_height,
+                                        int strength,
+                                        int* blk_fw,
+                                        int use_32x32,
+                                        uint32_t* accumulator,
+                                        uint16_t* count);
+#define vp9_highbd_temporal_filter_apply vp9_highbd_temporal_filter_apply_c
+
+void vp9_iht16x16_256_add_c(const tran_low_t* input,
+                            uint8_t* dest,
+                            int stride,
+                            int tx_type);
+void vp9_iht16x16_256_add_neon(const tran_low_t* input,
+                               uint8_t* dest,
+                               int stride,
+                               int tx_type);
+#define vp9_iht16x16_256_add vp9_iht16x16_256_add_neon
+
+void vp9_iht4x4_16_add_c(const tran_low_t* input,
+                         uint8_t* dest,
+                         int stride,
+                         int tx_type);
+void vp9_iht4x4_16_add_neon(const tran_low_t* input,
+                            uint8_t* dest,
+                            int stride,
+                            int tx_type);
+#define vp9_iht4x4_16_add vp9_iht4x4_16_add_neon
+
+void vp9_iht8x8_64_add_c(const tran_low_t* input,
+                         uint8_t* dest,
+                         int stride,
+                         int tx_type);
+void vp9_iht8x8_64_add_neon(const tran_low_t* input,
+                            uint8_t* dest,
+                            int stride,
+                            int tx_type);
+#define vp9_iht8x8_64_add vp9_iht8x8_64_add_neon
+
+void vp9_quantize_fp_c(const tran_low_t* coeff_ptr,
+                       intptr_t n_coeffs,
+                       int skip_block,
+                       const int16_t* round_ptr,
+                       const int16_t* quant_ptr,
+                       tran_low_t* qcoeff_ptr,
+                       tran_low_t* dqcoeff_ptr,
+                       const int16_t* dequant_ptr,
+                       uint16_t* eob_ptr,
+                       const int16_t* scan,
+                       const int16_t* iscan);
+void vp9_quantize_fp_neon(const tran_low_t* coeff_ptr,
+                          intptr_t n_coeffs,
+                          int skip_block,
+                          const int16_t* round_ptr,
+                          const int16_t* quant_ptr,
+                          tran_low_t* qcoeff_ptr,
+                          tran_low_t* dqcoeff_ptr,
+                          const int16_t* dequant_ptr,
+                          uint16_t* eob_ptr,
+                          const int16_t* scan,
+                          const int16_t* iscan);
+#define vp9_quantize_fp vp9_quantize_fp_neon
+
+void vp9_quantize_fp_32x32_c(const tran_low_t* coeff_ptr,
+                             intptr_t n_coeffs,
+                             int skip_block,
+                             const int16_t* round_ptr,
+                             const int16_t* quant_ptr,
+                             tran_low_t* qcoeff_ptr,
+                             tran_low_t* dqcoeff_ptr,
+                             const int16_t* dequant_ptr,
+                             uint16_t* eob_ptr,
+                             const int16_t* scan,
+                             const int16_t* iscan);
+void vp9_quantize_fp_32x32_neon(const tran_low_t* coeff_ptr,
+                                intptr_t n_coeffs,
+                                int skip_block,
+                                const int16_t* round_ptr,
+                                const int16_t* quant_ptr,
+                                tran_low_t* qcoeff_ptr,
+                                tran_low_t* dqcoeff_ptr,
+                                const int16_t* dequant_ptr,
+                                uint16_t* eob_ptr,
+                                const int16_t* scan,
+                                const int16_t* iscan);
+#define vp9_quantize_fp_32x32 vp9_quantize_fp_32x32_neon
+
+void vp9_scale_and_extend_frame_c(const struct yv12_buffer_config* src,
+                                  struct yv12_buffer_config* dst,
+                                  INTERP_FILTER filter_type,
+                                  int phase_scaler);
+void vp9_scale_and_extend_frame_neon(const struct yv12_buffer_config* src,
+                                     struct yv12_buffer_config* dst,
+                                     INTERP_FILTER filter_type,
+                                     int phase_scaler);
+#define vp9_scale_and_extend_frame vp9_scale_and_extend_frame_neon
+
+void vp9_rtcd(void);
+
+#include "vpx_config.h"
+
+#ifdef RTCD_C
+#include "vpx_ports/arm.h"
+static void setup_rtcd_internal(void) {
+  int flags = arm_cpu_caps();
+
+  (void)flags;
+}
+#endif
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vpx_config.asm
new file mode 100644
index 0000000..406fe8f
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vpx_config.asm
@@ -0,0 +1,98 @@
+@ This file was created from a .asm file
+@  using the ads2gas_apple.pl script.
+
+	.syntax unified
+.set VPX_ARCH_ARM ,  1
+.set ARCH_ARM ,  1
+.set VPX_ARCH_MIPS ,  0
+.set ARCH_MIPS ,  0
+.set VPX_ARCH_X86 ,  0
+.set ARCH_X86 ,  0
+.set VPX_ARCH_X86_64 ,  0
+.set ARCH_X86_64 ,  0
+.set VPX_ARCH_PPC ,  0
+.set ARCH_PPC ,  0
+.set HAVE_NEON ,  1
+.set HAVE_NEON_ASM ,  0
+.set HAVE_MIPS32 ,  0
+.set HAVE_DSPR2 ,  0
+.set HAVE_MSA ,  0
+.set HAVE_MIPS64 ,  0
+.set HAVE_MMX ,  0
+.set HAVE_SSE ,  0
+.set HAVE_SSE2 ,  0
+.set HAVE_SSE3 ,  0
+.set HAVE_SSSE3 ,  0
+.set HAVE_SSE4_1 ,  0
+.set HAVE_AVX ,  0
+.set HAVE_AVX2 ,  0
+.set HAVE_AVX512 ,  0
+.set HAVE_VSX ,  0
+.set HAVE_MMI ,  0
+.set HAVE_VPX_PORTS ,  1
+.set HAVE_PTHREAD_H ,  0
+.set HAVE_UNISTD_H ,  0
+.set CONFIG_DEPENDENCY_TRACKING ,  1
+.set CONFIG_EXTERNAL_BUILD ,  1
+.set CONFIG_INSTALL_DOCS ,  0
+.set CONFIG_INSTALL_BINS ,  1
+.set CONFIG_INSTALL_LIBS ,  1
+.set CONFIG_INSTALL_SRCS ,  0
+.set CONFIG_DEBUG ,  0
+.set CONFIG_GPROF ,  0
+.set CONFIG_GCOV ,  0
+.set CONFIG_RVCT ,  0
+.set CONFIG_GCC ,  0
+.set CONFIG_MSVS ,  1
+.set CONFIG_PIC ,  0
+.set CONFIG_BIG_ENDIAN ,  0
+.set CONFIG_CODEC_SRCS ,  0
+.set CONFIG_DEBUG_LIBS ,  0
+.set CONFIG_DEQUANT_TOKENS ,  0
+.set CONFIG_DC_RECON ,  0
+.set CONFIG_RUNTIME_CPU_DETECT ,  0
+.set CONFIG_POSTPROC ,  1
+.set CONFIG_VP9_POSTPROC ,  1
+.set CONFIG_MULTITHREAD ,  1
+.set CONFIG_INTERNAL_STATS ,  0
+.set CONFIG_VP8_ENCODER ,  1
+.set CONFIG_VP8_DECODER ,  1
+.set CONFIG_VP9_ENCODER ,  1
+.set CONFIG_VP9_DECODER ,  1
+.set CONFIG_VP8 ,  1
+.set CONFIG_VP9 ,  1
+.set CONFIG_ENCODERS ,  1
+.set CONFIG_DECODERS ,  1
+.set CONFIG_STATIC_MSVCRT ,  0
+.set CONFIG_SPATIAL_RESAMPLING ,  1
+.set CONFIG_REALTIME_ONLY ,  1
+.set CONFIG_ONTHEFLY_BITPACKING ,  0
+.set CONFIG_ERROR_CONCEALMENT ,  0
+.set CONFIG_SHARED ,  0
+.set CONFIG_STATIC ,  1
+.set CONFIG_SMALL ,  0
+.set CONFIG_POSTPROC_VISUALIZER ,  0
+.set CONFIG_OS_SUPPORT ,  1
+.set CONFIG_UNIT_TESTS ,  1
+.set CONFIG_WEBM_IO ,  1
+.set CONFIG_LIBYUV ,  0
+.set CONFIG_DECODE_PERF_TESTS ,  0
+.set CONFIG_ENCODE_PERF_TESTS ,  0
+.set CONFIG_MULTI_RES_ENCODING ,  1
+.set CONFIG_TEMPORAL_DENOISING ,  1
+.set CONFIG_VP9_TEMPORAL_DENOISING ,  1
+.set CONFIG_CONSISTENT_RECODE ,  0
+.set CONFIG_COEFFICIENT_RANGE_CHECKING ,  0
+.set CONFIG_VP9_HIGHBITDEPTH ,  1
+.set CONFIG_BETTER_HW_COMPATIBILITY ,  0
+.set CONFIG_EXPERIMENTAL ,  0
+.set CONFIG_SIZE_LIMIT ,  1
+.set CONFIG_ALWAYS_ADJUST_BPM ,  0
+.set CONFIG_BITSTREAM_DEBUG ,  0
+.set CONFIG_MISMATCH_DEBUG ,  0
+.set CONFIG_FP_MB_STATS ,  0
+.set CONFIG_EMULATE_HARDWARE ,  0
+.set CONFIG_NON_GREEDY_MV ,  0
+.set CONFIG_RATE_CTRL ,  0
+.set DECODE_WIDTH_LIMIT ,  16384
+.set DECODE_HEIGHT_LIMIT ,  16384
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vpx_config.c
new file mode 100644
index 0000000..b24e1a5
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vpx_config.c
@@ -0,0 +1,10 @@
+/* Copyright (c) 2011 The WebM project authors. All Rights Reserved. */
+/*  */
+/* Use of this source code is governed by a BSD-style license */
+/* that can be found in the LICENSE file in the root of the source */
+/* tree. An additional intellectual property rights grant can be found */
+/* in the file PATENTS.  All contributing project authors may */
+/* be found in the AUTHORS file in the root of the source tree. */
+#include "vpx/vpx_codec.h"
+static const char* const cfg = "--target=arm64-win64-vs15 --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv --enable-vp9-highbitdepth";
+const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vpx_config.h
new file mode 100644
index 0000000..08aecd3
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vpx_config.h
@@ -0,0 +1,107 @@
+/* Copyright (c) 2011 The WebM project authors. All Rights Reserved. */
+/*  */
+/* Use of this source code is governed by a BSD-style license */
+/* that can be found in the LICENSE file in the root of the source */
+/* tree. An additional intellectual property rights grant can be found */
+/* in the file PATENTS.  All contributing project authors may */
+/* be found in the AUTHORS file in the root of the source tree. */
+/* This file automatically generated by configure. Do not edit! */
+#ifndef VPX_CONFIG_H
+#define VPX_CONFIG_H
+#define RESTRICT    
+#define INLINE      __inline
+#define VPX_ARCH_ARM 1
+#define ARCH_ARM 1
+#define VPX_ARCH_MIPS 0
+#define ARCH_MIPS 0
+#define VPX_ARCH_X86 0
+#define ARCH_X86 0
+#define VPX_ARCH_X86_64 0
+#define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
+#define ARCH_PPC 0
+#define HAVE_NEON 1
+#define HAVE_NEON_ASM 0
+#define HAVE_MIPS32 0
+#define HAVE_DSPR2 0
+#define HAVE_MSA 0
+#define HAVE_MIPS64 0
+#define HAVE_MMX 0
+#define HAVE_SSE 0
+#define HAVE_SSE2 0
+#define HAVE_SSE3 0
+#define HAVE_SSSE3 0
+#define HAVE_SSE4_1 0
+#define HAVE_AVX 0
+#define HAVE_AVX2 0
+#define HAVE_AVX512 0
+#define HAVE_VSX 0
+#define HAVE_MMI 0
+#define HAVE_VPX_PORTS 1
+#define HAVE_PTHREAD_H 0
+#define HAVE_UNISTD_H 0
+#define CONFIG_DEPENDENCY_TRACKING 1
+#define CONFIG_EXTERNAL_BUILD 1
+#define CONFIG_INSTALL_DOCS 0
+#define CONFIG_INSTALL_BINS 1
+#define CONFIG_INSTALL_LIBS 1
+#define CONFIG_INSTALL_SRCS 0
+#define CONFIG_DEBUG 0
+#define CONFIG_GPROF 0
+#define CONFIG_GCOV 0
+#define CONFIG_RVCT 0
+#define CONFIG_GCC 0
+#define CONFIG_MSVS 1
+#define CONFIG_PIC 0
+#define CONFIG_BIG_ENDIAN 0
+#define CONFIG_CODEC_SRCS 0
+#define CONFIG_DEBUG_LIBS 0
+#define CONFIG_DEQUANT_TOKENS 0
+#define CONFIG_DC_RECON 0
+#define CONFIG_RUNTIME_CPU_DETECT 0
+#define CONFIG_POSTPROC 1
+#define CONFIG_VP9_POSTPROC 1
+#define CONFIG_MULTITHREAD 1
+#define CONFIG_INTERNAL_STATS 0
+#define CONFIG_VP8_ENCODER 1
+#define CONFIG_VP8_DECODER 1
+#define CONFIG_VP9_ENCODER 1
+#define CONFIG_VP9_DECODER 1
+#define CONFIG_VP8 1
+#define CONFIG_VP9 1
+#define CONFIG_ENCODERS 1
+#define CONFIG_DECODERS 1
+#define CONFIG_STATIC_MSVCRT 0
+#define CONFIG_SPATIAL_RESAMPLING 1
+#define CONFIG_REALTIME_ONLY 1
+#define CONFIG_ONTHEFLY_BITPACKING 0
+#define CONFIG_ERROR_CONCEALMENT 0
+#define CONFIG_SHARED 0
+#define CONFIG_STATIC 1
+#define CONFIG_SMALL 0
+#define CONFIG_POSTPROC_VISUALIZER 0
+#define CONFIG_OS_SUPPORT 1
+#define CONFIG_UNIT_TESTS 1
+#define CONFIG_WEBM_IO 1
+#define CONFIG_LIBYUV 0
+#define CONFIG_DECODE_PERF_TESTS 0
+#define CONFIG_ENCODE_PERF_TESTS 0
+#define CONFIG_MULTI_RES_ENCODING 1
+#define CONFIG_TEMPORAL_DENOISING 1
+#define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
+#define CONFIG_COEFFICIENT_RANGE_CHECKING 0
+#define CONFIG_VP9_HIGHBITDEPTH 1
+#define CONFIG_BETTER_HW_COMPATIBILITY 0
+#define CONFIG_EXPERIMENTAL 0
+#define CONFIG_SIZE_LIMIT 1
+#define CONFIG_ALWAYS_ADJUST_BPM 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
+#define CONFIG_FP_MB_STATS 0
+#define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
+#define DECODE_WIDTH_LIMIT 16384
+#define DECODE_HEIGHT_LIMIT 16384
+#endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vpx_dsp_rtcd.h
new file mode 100644
index 0000000..71eb333
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vpx_dsp_rtcd.h
@@ -0,0 +1,5191 @@
+// This file is generated. Do not edit.
+#ifndef VPX_DSP_RTCD_H_
+#define VPX_DSP_RTCD_H_
+
+#ifdef RTCD_C
+#define RTCD_EXTERN
+#else
+#define RTCD_EXTERN extern
+#endif
+
+/*
+ * DSP
+ */
+
+#include "vpx/vpx_integer.h"
+#include "vpx_dsp/vpx_dsp_common.h"
+#include "vpx_dsp/vpx_filter.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+unsigned int vpx_avg_4x4_c(const uint8_t*, int p);
+unsigned int vpx_avg_4x4_neon(const uint8_t*, int p);
+#define vpx_avg_4x4 vpx_avg_4x4_neon
+
+unsigned int vpx_avg_8x8_c(const uint8_t*, int p);
+unsigned int vpx_avg_8x8_neon(const uint8_t*, int p);
+#define vpx_avg_8x8 vpx_avg_8x8_neon
+
+void vpx_comp_avg_pred_c(uint8_t* comp_pred,
+                         const uint8_t* pred,
+                         int width,
+                         int height,
+                         const uint8_t* ref,
+                         int ref_stride);
+void vpx_comp_avg_pred_neon(uint8_t* comp_pred,
+                            const uint8_t* pred,
+                            int width,
+                            int height,
+                            const uint8_t* ref,
+                            int ref_stride);
+#define vpx_comp_avg_pred vpx_comp_avg_pred_neon
+
+void vpx_convolve8_c(const uint8_t* src,
+                     ptrdiff_t src_stride,
+                     uint8_t* dst,
+                     ptrdiff_t dst_stride,
+                     const InterpKernel* filter,
+                     int x0_q4,
+                     int x_step_q4,
+                     int y0_q4,
+                     int y_step_q4,
+                     int w,
+                     int h);
+void vpx_convolve8_neon(const uint8_t* src,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        ptrdiff_t dst_stride,
+                        const InterpKernel* filter,
+                        int x0_q4,
+                        int x_step_q4,
+                        int y0_q4,
+                        int y_step_q4,
+                        int w,
+                        int h);
+#define vpx_convolve8 vpx_convolve8_neon
+
+void vpx_convolve8_avg_c(const uint8_t* src,
+                         ptrdiff_t src_stride,
+                         uint8_t* dst,
+                         ptrdiff_t dst_stride,
+                         const InterpKernel* filter,
+                         int x0_q4,
+                         int x_step_q4,
+                         int y0_q4,
+                         int y_step_q4,
+                         int w,
+                         int h);
+void vpx_convolve8_avg_neon(const uint8_t* src,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst,
+                            ptrdiff_t dst_stride,
+                            const InterpKernel* filter,
+                            int x0_q4,
+                            int x_step_q4,
+                            int y0_q4,
+                            int y_step_q4,
+                            int w,
+                            int h);
+#define vpx_convolve8_avg vpx_convolve8_avg_neon
+
+void vpx_convolve8_avg_horiz_c(const uint8_t* src,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst,
+                               ptrdiff_t dst_stride,
+                               const InterpKernel* filter,
+                               int x0_q4,
+                               int x_step_q4,
+                               int y0_q4,
+                               int y_step_q4,
+                               int w,
+                               int h);
+void vpx_convolve8_avg_horiz_neon(const uint8_t* src,
+                                  ptrdiff_t src_stride,
+                                  uint8_t* dst,
+                                  ptrdiff_t dst_stride,
+                                  const InterpKernel* filter,
+                                  int x0_q4,
+                                  int x_step_q4,
+                                  int y0_q4,
+                                  int y_step_q4,
+                                  int w,
+                                  int h);
+#define vpx_convolve8_avg_horiz vpx_convolve8_avg_horiz_neon
+
+void vpx_convolve8_avg_vert_c(const uint8_t* src,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst,
+                              ptrdiff_t dst_stride,
+                              const InterpKernel* filter,
+                              int x0_q4,
+                              int x_step_q4,
+                              int y0_q4,
+                              int y_step_q4,
+                              int w,
+                              int h);
+void vpx_convolve8_avg_vert_neon(const uint8_t* src,
+                                 ptrdiff_t src_stride,
+                                 uint8_t* dst,
+                                 ptrdiff_t dst_stride,
+                                 const InterpKernel* filter,
+                                 int x0_q4,
+                                 int x_step_q4,
+                                 int y0_q4,
+                                 int y_step_q4,
+                                 int w,
+                                 int h);
+#define vpx_convolve8_avg_vert vpx_convolve8_avg_vert_neon
+
+void vpx_convolve8_horiz_c(const uint8_t* src,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst,
+                           ptrdiff_t dst_stride,
+                           const InterpKernel* filter,
+                           int x0_q4,
+                           int x_step_q4,
+                           int y0_q4,
+                           int y_step_q4,
+                           int w,
+                           int h);
+void vpx_convolve8_horiz_neon(const uint8_t* src,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst,
+                              ptrdiff_t dst_stride,
+                              const InterpKernel* filter,
+                              int x0_q4,
+                              int x_step_q4,
+                              int y0_q4,
+                              int y_step_q4,
+                              int w,
+                              int h);
+#define vpx_convolve8_horiz vpx_convolve8_horiz_neon
+
+void vpx_convolve8_vert_c(const uint8_t* src,
+                          ptrdiff_t src_stride,
+                          uint8_t* dst,
+                          ptrdiff_t dst_stride,
+                          const InterpKernel* filter,
+                          int x0_q4,
+                          int x_step_q4,
+                          int y0_q4,
+                          int y_step_q4,
+                          int w,
+                          int h);
+void vpx_convolve8_vert_neon(const uint8_t* src,
+                             ptrdiff_t src_stride,
+                             uint8_t* dst,
+                             ptrdiff_t dst_stride,
+                             const InterpKernel* filter,
+                             int x0_q4,
+                             int x_step_q4,
+                             int y0_q4,
+                             int y_step_q4,
+                             int w,
+                             int h);
+#define vpx_convolve8_vert vpx_convolve8_vert_neon
+
+void vpx_convolve_avg_c(const uint8_t* src,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        ptrdiff_t dst_stride,
+                        const InterpKernel* filter,
+                        int x0_q4,
+                        int x_step_q4,
+                        int y0_q4,
+                        int y_step_q4,
+                        int w,
+                        int h);
+void vpx_convolve_avg_neon(const uint8_t* src,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst,
+                           ptrdiff_t dst_stride,
+                           const InterpKernel* filter,
+                           int x0_q4,
+                           int x_step_q4,
+                           int y0_q4,
+                           int y_step_q4,
+                           int w,
+                           int h);
+#define vpx_convolve_avg vpx_convolve_avg_neon
+
+void vpx_convolve_copy_c(const uint8_t* src,
+                         ptrdiff_t src_stride,
+                         uint8_t* dst,
+                         ptrdiff_t dst_stride,
+                         const InterpKernel* filter,
+                         int x0_q4,
+                         int x_step_q4,
+                         int y0_q4,
+                         int y_step_q4,
+                         int w,
+                         int h);
+void vpx_convolve_copy_neon(const uint8_t* src,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst,
+                            ptrdiff_t dst_stride,
+                            const InterpKernel* filter,
+                            int x0_q4,
+                            int x_step_q4,
+                            int y0_q4,
+                            int y_step_q4,
+                            int w,
+                            int h);
+#define vpx_convolve_copy vpx_convolve_copy_neon
+
+void vpx_d117_predictor_16x16_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
+
+void vpx_d117_predictor_32x32_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
+
+void vpx_d117_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
+
+void vpx_d117_predictor_8x8_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
+
+void vpx_d135_predictor_16x16_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_d135_predictor_16x16_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_neon
+
+void vpx_d135_predictor_32x32_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_d135_predictor_32x32_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_neon
+
+void vpx_d135_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_d135_predictor_4x4_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_neon
+
+void vpx_d135_predictor_8x8_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_d135_predictor_8x8_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_neon
+
+void vpx_d153_predictor_16x16_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d153_predictor_16x16 vpx_d153_predictor_16x16_c
+
+void vpx_d153_predictor_32x32_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d153_predictor_32x32 vpx_d153_predictor_32x32_c
+
+void vpx_d153_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d153_predictor_4x4 vpx_d153_predictor_4x4_c
+
+void vpx_d153_predictor_8x8_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d153_predictor_8x8 vpx_d153_predictor_8x8_c
+
+void vpx_d207_predictor_16x16_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d207_predictor_16x16 vpx_d207_predictor_16x16_c
+
+void vpx_d207_predictor_32x32_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d207_predictor_32x32 vpx_d207_predictor_32x32_c
+
+void vpx_d207_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_c
+
+void vpx_d207_predictor_8x8_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d207_predictor_8x8 vpx_d207_predictor_8x8_c
+
+void vpx_d45_predictor_16x16_c(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+void vpx_d45_predictor_16x16_neon(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+#define vpx_d45_predictor_16x16 vpx_d45_predictor_16x16_neon
+
+void vpx_d45_predictor_32x32_c(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+void vpx_d45_predictor_32x32_neon(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+#define vpx_d45_predictor_32x32 vpx_d45_predictor_32x32_neon
+
+void vpx_d45_predictor_4x4_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_d45_predictor_4x4_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_neon
+
+void vpx_d45_predictor_8x8_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_d45_predictor_8x8_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_neon
+
+void vpx_d45e_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
+
+void vpx_d63_predictor_16x16_c(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_d63_predictor_16x16 vpx_d63_predictor_16x16_c
+
+void vpx_d63_predictor_32x32_c(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_d63_predictor_32x32 vpx_d63_predictor_32x32_c
+
+void vpx_d63_predictor_4x4_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+#define vpx_d63_predictor_4x4 vpx_d63_predictor_4x4_c
+
+void vpx_d63_predictor_8x8_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+#define vpx_d63_predictor_8x8 vpx_d63_predictor_8x8_c
+
+void vpx_d63e_predictor_4x4_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
+
+void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+void vpx_dc_128_predictor_16x16_neon(uint8_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint8_t* above,
+                                     const uint8_t* left);
+#define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_neon
+
+void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+void vpx_dc_128_predictor_32x32_neon(uint8_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint8_t* above,
+                                     const uint8_t* left);
+#define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_neon
+
+void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_dc_128_predictor_4x4_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_neon
+
+void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_dc_128_predictor_8x8_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_neon
+
+void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+void vpx_dc_left_predictor_16x16_neon(uint8_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint8_t* above,
+                                      const uint8_t* left);
+#define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_neon
+
+void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+void vpx_dc_left_predictor_32x32_neon(uint8_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint8_t* above,
+                                      const uint8_t* left);
+#define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_neon
+
+void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+void vpx_dc_left_predictor_4x4_neon(uint8_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint8_t* above,
+                                    const uint8_t* left);
+#define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_neon
+
+void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+void vpx_dc_left_predictor_8x8_neon(uint8_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint8_t* above,
+                                    const uint8_t* left);
+#define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_neon
+
+void vpx_dc_predictor_16x16_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_dc_predictor_16x16_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_neon
+
+void vpx_dc_predictor_32x32_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_dc_predictor_32x32_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_neon
+
+void vpx_dc_predictor_4x4_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+void vpx_dc_predictor_4x4_neon(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_neon
+
+void vpx_dc_predictor_8x8_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+void vpx_dc_predictor_8x8_neon(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_neon
+
+void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+void vpx_dc_top_predictor_16x16_neon(uint8_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint8_t* above,
+                                     const uint8_t* left);
+#define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_neon
+
+void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint8_t* above,
+                                  const uint8_t* left);
+void vpx_dc_top_predictor_32x32_neon(uint8_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint8_t* above,
+                                     const uint8_t* left);
+#define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_neon
+
+void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_dc_top_predictor_4x4_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_neon
+
+void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+void vpx_dc_top_predictor_8x8_neon(uint8_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint8_t* above,
+                                   const uint8_t* left);
+#define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_neon
+
+void vpx_fdct16x16_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct16x16_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct16x16 vpx_fdct16x16_neon
+
+void vpx_fdct16x16_1_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct16x16_1_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct16x16_1 vpx_fdct16x16_1_neon
+
+void vpx_fdct32x32_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct32x32_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct32x32 vpx_fdct32x32_neon
+
+void vpx_fdct32x32_1_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct32x32_1_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct32x32_1 vpx_fdct32x32_1_neon
+
+void vpx_fdct32x32_rd_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct32x32_rd_neon(const int16_t* input,
+                           tran_low_t* output,
+                           int stride);
+#define vpx_fdct32x32_rd vpx_fdct32x32_rd_neon
+
+void vpx_fdct4x4_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct4x4_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct4x4 vpx_fdct4x4_neon
+
+void vpx_fdct4x4_1_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct4x4_1_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct4x4_1 vpx_fdct4x4_1_neon
+
+void vpx_fdct8x8_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct8x8_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct8x8 vpx_fdct8x8_neon
+
+void vpx_fdct8x8_1_c(const int16_t* input, tran_low_t* output, int stride);
+void vpx_fdct8x8_1_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_fdct8x8_1 vpx_fdct8x8_1_neon
+
+void vpx_get16x16var_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* ref_ptr,
+                       int ref_stride,
+                       unsigned int* sse,
+                       int* sum);
+void vpx_get16x16var_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride,
+                          unsigned int* sse,
+                          int* sum);
+#define vpx_get16x16var vpx_get16x16var_neon
+
+unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
+                                int src_stride,
+                                const unsigned char* ref_ptr,
+                                int ref_stride);
+unsigned int vpx_get4x4sse_cs_neon(const unsigned char* src_ptr,
+                                   int src_stride,
+                                   const unsigned char* ref_ptr,
+                                   int ref_stride);
+#define vpx_get4x4sse_cs vpx_get4x4sse_cs_neon
+
+void vpx_get8x8var_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* ref_ptr,
+                     int ref_stride,
+                     unsigned int* sse,
+                     int* sum);
+void vpx_get8x8var_neon(const uint8_t* src_ptr,
+                        int src_stride,
+                        const uint8_t* ref_ptr,
+                        int ref_stride,
+                        unsigned int* sse,
+                        int* sum);
+#define vpx_get8x8var vpx_get8x8var_neon
+
+unsigned int vpx_get_mb_ss_c(const int16_t*);
+#define vpx_get_mb_ss vpx_get_mb_ss_c
+
+void vpx_h_predictor_16x16_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_h_predictor_16x16_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_h_predictor_16x16 vpx_h_predictor_16x16_neon
+
+void vpx_h_predictor_32x32_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_h_predictor_32x32_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_h_predictor_32x32 vpx_h_predictor_32x32_neon
+
+void vpx_h_predictor_4x4_c(uint8_t* dst,
+                           ptrdiff_t stride,
+                           const uint8_t* above,
+                           const uint8_t* left);
+void vpx_h_predictor_4x4_neon(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_h_predictor_4x4 vpx_h_predictor_4x4_neon
+
+void vpx_h_predictor_8x8_c(uint8_t* dst,
+                           ptrdiff_t stride,
+                           const uint8_t* above,
+                           const uint8_t* left);
+void vpx_h_predictor_8x8_neon(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_h_predictor_8x8 vpx_h_predictor_8x8_neon
+
+void vpx_hadamard_16x16_c(const int16_t* src_diff,
+                          ptrdiff_t src_stride,
+                          tran_low_t* coeff);
+void vpx_hadamard_16x16_neon(const int16_t* src_diff,
+                             ptrdiff_t src_stride,
+                             tran_low_t* coeff);
+#define vpx_hadamard_16x16 vpx_hadamard_16x16_neon
+
+void vpx_hadamard_32x32_c(const int16_t* src_diff,
+                          ptrdiff_t src_stride,
+                          tran_low_t* coeff);
+#define vpx_hadamard_32x32 vpx_hadamard_32x32_c
+
+void vpx_hadamard_8x8_c(const int16_t* src_diff,
+                        ptrdiff_t src_stride,
+                        tran_low_t* coeff);
+void vpx_hadamard_8x8_neon(const int16_t* src_diff,
+                           ptrdiff_t src_stride,
+                           tran_low_t* coeff);
+#define vpx_hadamard_8x8 vpx_hadamard_8x8_neon
+
+void vpx_he_predictor_4x4_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+#define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
+
+void vpx_highbd_10_get16x16var_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse,
+                                 int* sum);
+#define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_c
+
+void vpx_highbd_10_get8x8var_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse,
+                               int* sum);
+#define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_c
+
+unsigned int vpx_highbd_10_mse16x16_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      unsigned int* sse);
+#define vpx_highbd_10_mse16x16 vpx_highbd_10_mse16x16_c
+
+unsigned int vpx_highbd_10_mse16x8_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     unsigned int* sse);
+#define vpx_highbd_10_mse16x8 vpx_highbd_10_mse16x8_c
+
+unsigned int vpx_highbd_10_mse8x16_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     unsigned int* sse);
+#define vpx_highbd_10_mse8x16 vpx_highbd_10_mse8x16_c
+
+unsigned int vpx_highbd_10_mse8x8_c(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_highbd_10_mse8x8 vpx_highbd_10_mse8x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x16 \
+  vpx_highbd_10_sub_pixel_avg_variance16x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x32 \
+  vpx_highbd_10_sub_pixel_avg_variance16x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x8 \
+  vpx_highbd_10_sub_pixel_avg_variance16x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x16 \
+  vpx_highbd_10_sub_pixel_avg_variance32x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x32 \
+  vpx_highbd_10_sub_pixel_avg_variance32x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x64 \
+  vpx_highbd_10_sub_pixel_avg_variance32x64_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance4x4 \
+  vpx_highbd_10_sub_pixel_avg_variance4x4_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance4x8 \
+  vpx_highbd_10_sub_pixel_avg_variance4x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance64x32 \
+  vpx_highbd_10_sub_pixel_avg_variance64x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance64x64 \
+  vpx_highbd_10_sub_pixel_avg_variance64x64_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x16 \
+  vpx_highbd_10_sub_pixel_avg_variance8x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x4 \
+  vpx_highbd_10_sub_pixel_avg_variance8x4_c
+
+uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x8 \
+  vpx_highbd_10_sub_pixel_avg_variance8x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x16 \
+  vpx_highbd_10_sub_pixel_variance16x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x32 \
+  vpx_highbd_10_sub_pixel_variance16x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x8 \
+  vpx_highbd_10_sub_pixel_variance16x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x16 \
+  vpx_highbd_10_sub_pixel_variance32x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x32 \
+  vpx_highbd_10_sub_pixel_variance32x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x64 \
+  vpx_highbd_10_sub_pixel_variance32x64_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance4x4 \
+  vpx_highbd_10_sub_pixel_variance4x4_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance4x8 \
+  vpx_highbd_10_sub_pixel_variance4x8_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance64x32 \
+  vpx_highbd_10_sub_pixel_variance64x32_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance64x64 \
+  vpx_highbd_10_sub_pixel_variance64x64_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x16 \
+  vpx_highbd_10_sub_pixel_variance8x16_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x4 \
+  vpx_highbd_10_sub_pixel_variance8x4_c
+
+uint32_t vpx_highbd_10_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x8 \
+  vpx_highbd_10_sub_pixel_variance8x8_c
+
+unsigned int vpx_highbd_10_variance16x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance16x16 vpx_highbd_10_variance16x16_c
+
+unsigned int vpx_highbd_10_variance16x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance16x32 vpx_highbd_10_variance16x32_c
+
+unsigned int vpx_highbd_10_variance16x8_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_10_variance16x8 vpx_highbd_10_variance16x8_c
+
+unsigned int vpx_highbd_10_variance32x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance32x16 vpx_highbd_10_variance32x16_c
+
+unsigned int vpx_highbd_10_variance32x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance32x32 vpx_highbd_10_variance32x32_c
+
+unsigned int vpx_highbd_10_variance32x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance32x64 vpx_highbd_10_variance32x64_c
+
+unsigned int vpx_highbd_10_variance4x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_10_variance4x4 vpx_highbd_10_variance4x4_c
+
+unsigned int vpx_highbd_10_variance4x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_10_variance4x8 vpx_highbd_10_variance4x8_c
+
+unsigned int vpx_highbd_10_variance64x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance64x32 vpx_highbd_10_variance64x32_c
+
+unsigned int vpx_highbd_10_variance64x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_10_variance64x64 vpx_highbd_10_variance64x64_c
+
+unsigned int vpx_highbd_10_variance8x16_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_10_variance8x16 vpx_highbd_10_variance8x16_c
+
+unsigned int vpx_highbd_10_variance8x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_10_variance8x4 vpx_highbd_10_variance8x4_c
+
+unsigned int vpx_highbd_10_variance8x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_10_variance8x8 vpx_highbd_10_variance8x8_c
+
+void vpx_highbd_12_get16x16var_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse,
+                                 int* sum);
+#define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_c
+
+void vpx_highbd_12_get8x8var_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse,
+                               int* sum);
+#define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_c
+
+unsigned int vpx_highbd_12_mse16x16_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      unsigned int* sse);
+#define vpx_highbd_12_mse16x16 vpx_highbd_12_mse16x16_c
+
+unsigned int vpx_highbd_12_mse16x8_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     unsigned int* sse);
+#define vpx_highbd_12_mse16x8 vpx_highbd_12_mse16x8_c
+
+unsigned int vpx_highbd_12_mse8x16_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     unsigned int* sse);
+#define vpx_highbd_12_mse8x16 vpx_highbd_12_mse8x16_c
+
+unsigned int vpx_highbd_12_mse8x8_c(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_highbd_12_mse8x8 vpx_highbd_12_mse8x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x16 \
+  vpx_highbd_12_sub_pixel_avg_variance16x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x32 \
+  vpx_highbd_12_sub_pixel_avg_variance16x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x8 \
+  vpx_highbd_12_sub_pixel_avg_variance16x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x16 \
+  vpx_highbd_12_sub_pixel_avg_variance32x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x32 \
+  vpx_highbd_12_sub_pixel_avg_variance32x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x64 \
+  vpx_highbd_12_sub_pixel_avg_variance32x64_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance4x4 \
+  vpx_highbd_12_sub_pixel_avg_variance4x4_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance4x8 \
+  vpx_highbd_12_sub_pixel_avg_variance4x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance64x32 \
+  vpx_highbd_12_sub_pixel_avg_variance64x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_c(
+    const uint8_t* src_ptr,
+    int src_stride,
+    int x_offset,
+    int y_offset,
+    const uint8_t* ref_ptr,
+    int ref_stride,
+    uint32_t* sse,
+    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance64x64 \
+  vpx_highbd_12_sub_pixel_avg_variance64x64_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x16 \
+  vpx_highbd_12_sub_pixel_avg_variance8x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x4 \
+  vpx_highbd_12_sub_pixel_avg_variance8x4_c
+
+uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x8 \
+  vpx_highbd_12_sub_pixel_avg_variance8x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x16 \
+  vpx_highbd_12_sub_pixel_variance16x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x32 \
+  vpx_highbd_12_sub_pixel_variance16x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x8 \
+  vpx_highbd_12_sub_pixel_variance16x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x16 \
+  vpx_highbd_12_sub_pixel_variance32x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x32 \
+  vpx_highbd_12_sub_pixel_variance32x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x64 \
+  vpx_highbd_12_sub_pixel_variance32x64_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance4x4 \
+  vpx_highbd_12_sub_pixel_variance4x4_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance4x8 \
+  vpx_highbd_12_sub_pixel_variance4x8_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance64x32 \
+  vpx_highbd_12_sub_pixel_variance64x32_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
+                                                 const uint8_t* ref_ptr,
+                                                 int ref_stride,
+                                                 uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance64x64 \
+  vpx_highbd_12_sub_pixel_variance64x64_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x16 \
+  vpx_highbd_12_sub_pixel_variance8x16_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x4 \
+  vpx_highbd_12_sub_pixel_variance8x4_c
+
+uint32_t vpx_highbd_12_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x8 \
+  vpx_highbd_12_sub_pixel_variance8x8_c
+
+unsigned int vpx_highbd_12_variance16x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance16x16 vpx_highbd_12_variance16x16_c
+
+unsigned int vpx_highbd_12_variance16x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance16x32 vpx_highbd_12_variance16x32_c
+
+unsigned int vpx_highbd_12_variance16x8_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_12_variance16x8 vpx_highbd_12_variance16x8_c
+
+unsigned int vpx_highbd_12_variance32x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance32x16 vpx_highbd_12_variance32x16_c
+
+unsigned int vpx_highbd_12_variance32x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance32x32 vpx_highbd_12_variance32x32_c
+
+unsigned int vpx_highbd_12_variance32x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance32x64 vpx_highbd_12_variance32x64_c
+
+unsigned int vpx_highbd_12_variance4x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_12_variance4x4 vpx_highbd_12_variance4x4_c
+
+unsigned int vpx_highbd_12_variance4x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_12_variance4x8 vpx_highbd_12_variance4x8_c
+
+unsigned int vpx_highbd_12_variance64x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance64x32 vpx_highbd_12_variance64x32_c
+
+unsigned int vpx_highbd_12_variance64x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           unsigned int* sse);
+#define vpx_highbd_12_variance64x64 vpx_highbd_12_variance64x64_c
+
+unsigned int vpx_highbd_12_variance8x16_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_12_variance8x16 vpx_highbd_12_variance8x16_c
+
+unsigned int vpx_highbd_12_variance8x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_12_variance8x4 vpx_highbd_12_variance8x4_c
+
+unsigned int vpx_highbd_12_variance8x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_12_variance8x8 vpx_highbd_12_variance8x8_c
+
+void vpx_highbd_8_get16x16var_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                unsigned int* sse,
+                                int* sum);
+#define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_c
+
+void vpx_highbd_8_get8x8var_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride,
+                              unsigned int* sse,
+                              int* sum);
+#define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_c
+
+unsigned int vpx_highbd_8_mse16x16_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     unsigned int* sse);
+#define vpx_highbd_8_mse16x16 vpx_highbd_8_mse16x16_c
+
+unsigned int vpx_highbd_8_mse16x8_c(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_highbd_8_mse16x8 vpx_highbd_8_mse16x8_c
+
+unsigned int vpx_highbd_8_mse8x16_c(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_highbd_8_mse8x16 vpx_highbd_8_mse8x16_c
+
+unsigned int vpx_highbd_8_mse8x8_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   unsigned int* sse);
+#define vpx_highbd_8_mse8x8 vpx_highbd_8_mse8x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x16 \
+  vpx_highbd_8_sub_pixel_avg_variance16x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x32 \
+  vpx_highbd_8_sub_pixel_avg_variance16x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x8 \
+  vpx_highbd_8_sub_pixel_avg_variance16x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x16 \
+  vpx_highbd_8_sub_pixel_avg_variance32x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x32 \
+  vpx_highbd_8_sub_pixel_avg_variance32x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x64 \
+  vpx_highbd_8_sub_pixel_avg_variance32x64_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
+                                                  const uint8_t* ref_ptr,
+                                                  int ref_stride,
+                                                  uint32_t* sse,
+                                                  const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance4x4 \
+  vpx_highbd_8_sub_pixel_avg_variance4x4_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
+                                                  const uint8_t* ref_ptr,
+                                                  int ref_stride,
+                                                  uint32_t* sse,
+                                                  const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance4x8 \
+  vpx_highbd_8_sub_pixel_avg_variance4x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance64x32 \
+  vpx_highbd_8_sub_pixel_avg_variance64x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
+                                                    const uint8_t* ref_ptr,
+                                                    int ref_stride,
+                                                    uint32_t* sse,
+                                                    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance64x64 \
+  vpx_highbd_8_sub_pixel_avg_variance64x64_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
+                                                   const uint8_t* ref_ptr,
+                                                   int ref_stride,
+                                                   uint32_t* sse,
+                                                   const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x16 \
+  vpx_highbd_8_sub_pixel_avg_variance8x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
+                                                  const uint8_t* ref_ptr,
+                                                  int ref_stride,
+                                                  uint32_t* sse,
+                                                  const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x4 \
+  vpx_highbd_8_sub_pixel_avg_variance8x4_c
+
+uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
+                                                  const uint8_t* ref_ptr,
+                                                  int ref_stride,
+                                                  uint32_t* sse,
+                                                  const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x8 \
+  vpx_highbd_8_sub_pixel_avg_variance8x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x16 \
+  vpx_highbd_8_sub_pixel_variance16x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x32 \
+  vpx_highbd_8_sub_pixel_variance16x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x8 \
+  vpx_highbd_8_sub_pixel_variance16x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x16 \
+  vpx_highbd_8_sub_pixel_variance32x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x32 \
+  vpx_highbd_8_sub_pixel_variance32x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x64 \
+  vpx_highbd_8_sub_pixel_variance32x64_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance4x4 vpx_highbd_8_sub_pixel_variance4x4_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance4x8 vpx_highbd_8_sub_pixel_variance4x8_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance64x32 \
+  vpx_highbd_8_sub_pixel_variance64x32_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
+                                                const uint8_t* ref_ptr,
+                                                int ref_stride,
+                                                uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance64x64 \
+  vpx_highbd_8_sub_pixel_variance64x64_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
+                                               const uint8_t* ref_ptr,
+                                               int ref_stride,
+                                               uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x16 \
+  vpx_highbd_8_sub_pixel_variance8x16_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x4 vpx_highbd_8_sub_pixel_variance8x4_c
+
+uint32_t vpx_highbd_8_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x8 vpx_highbd_8_sub_pixel_variance8x8_c
+
+unsigned int vpx_highbd_8_variance16x16_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance16x16 vpx_highbd_8_variance16x16_c
+
+unsigned int vpx_highbd_8_variance16x32_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance16x32 vpx_highbd_8_variance16x32_c
+
+unsigned int vpx_highbd_8_variance16x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_8_variance16x8 vpx_highbd_8_variance16x8_c
+
+unsigned int vpx_highbd_8_variance32x16_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance32x16 vpx_highbd_8_variance32x16_c
+
+unsigned int vpx_highbd_8_variance32x32_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance32x32 vpx_highbd_8_variance32x32_c
+
+unsigned int vpx_highbd_8_variance32x64_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance32x64 vpx_highbd_8_variance32x64_c
+
+unsigned int vpx_highbd_8_variance4x4_c(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        unsigned int* sse);
+#define vpx_highbd_8_variance4x4 vpx_highbd_8_variance4x4_c
+
+unsigned int vpx_highbd_8_variance4x8_c(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        unsigned int* sse);
+#define vpx_highbd_8_variance4x8 vpx_highbd_8_variance4x8_c
+
+unsigned int vpx_highbd_8_variance64x32_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance64x32 vpx_highbd_8_variance64x32_c
+
+unsigned int vpx_highbd_8_variance64x64_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          unsigned int* sse);
+#define vpx_highbd_8_variance64x64 vpx_highbd_8_variance64x64_c
+
+unsigned int vpx_highbd_8_variance8x16_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         unsigned int* sse);
+#define vpx_highbd_8_variance8x16 vpx_highbd_8_variance8x16_c
+
+unsigned int vpx_highbd_8_variance8x4_c(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        unsigned int* sse);
+#define vpx_highbd_8_variance8x4 vpx_highbd_8_variance8x4_c
+
+unsigned int vpx_highbd_8_variance8x8_c(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        unsigned int* sse);
+#define vpx_highbd_8_variance8x8 vpx_highbd_8_variance8x8_c
+
+unsigned int vpx_highbd_avg_4x4_c(const uint8_t* s8, int p);
+#define vpx_highbd_avg_4x4 vpx_highbd_avg_4x4_c
+
+unsigned int vpx_highbd_avg_8x8_c(const uint8_t* s8, int p);
+#define vpx_highbd_avg_8x8 vpx_highbd_avg_8x8_c
+
+void vpx_highbd_comp_avg_pred_c(uint16_t* comp_pred,
+                                const uint16_t* pred,
+                                int width,
+                                int height,
+                                const uint16_t* ref,
+                                int ref_stride);
+#define vpx_highbd_comp_avg_pred vpx_highbd_comp_avg_pred_c
+
+void vpx_highbd_convolve8_c(const uint16_t* src,
+                            ptrdiff_t src_stride,
+                            uint16_t* dst,
+                            ptrdiff_t dst_stride,
+                            const InterpKernel* filter,
+                            int x0_q4,
+                            int x_step_q4,
+                            int y0_q4,
+                            int y_step_q4,
+                            int w,
+                            int h,
+                            int bd);
+void vpx_highbd_convolve8_neon(const uint16_t* src,
+                               ptrdiff_t src_stride,
+                               uint16_t* dst,
+                               ptrdiff_t dst_stride,
+                               const InterpKernel* filter,
+                               int x0_q4,
+                               int x_step_q4,
+                               int y0_q4,
+                               int y_step_q4,
+                               int w,
+                               int h,
+                               int bd);
+#define vpx_highbd_convolve8 vpx_highbd_convolve8_neon
+
+void vpx_highbd_convolve8_avg_c(const uint16_t* src,
+                                ptrdiff_t src_stride,
+                                uint16_t* dst,
+                                ptrdiff_t dst_stride,
+                                const InterpKernel* filter,
+                                int x0_q4,
+                                int x_step_q4,
+                                int y0_q4,
+                                int y_step_q4,
+                                int w,
+                                int h,
+                                int bd);
+void vpx_highbd_convolve8_avg_neon(const uint16_t* src,
+                                   ptrdiff_t src_stride,
+                                   uint16_t* dst,
+                                   ptrdiff_t dst_stride,
+                                   const InterpKernel* filter,
+                                   int x0_q4,
+                                   int x_step_q4,
+                                   int y0_q4,
+                                   int y_step_q4,
+                                   int w,
+                                   int h,
+                                   int bd);
+#define vpx_highbd_convolve8_avg vpx_highbd_convolve8_avg_neon
+
+void vpx_highbd_convolve8_avg_horiz_c(const uint16_t* src,
+                                      ptrdiff_t src_stride,
+                                      uint16_t* dst,
+                                      ptrdiff_t dst_stride,
+                                      const InterpKernel* filter,
+                                      int x0_q4,
+                                      int x_step_q4,
+                                      int y0_q4,
+                                      int y_step_q4,
+                                      int w,
+                                      int h,
+                                      int bd);
+void vpx_highbd_convolve8_avg_horiz_neon(const uint16_t* src,
+                                         ptrdiff_t src_stride,
+                                         uint16_t* dst,
+                                         ptrdiff_t dst_stride,
+                                         const InterpKernel* filter,
+                                         int x0_q4,
+                                         int x_step_q4,
+                                         int y0_q4,
+                                         int y_step_q4,
+                                         int w,
+                                         int h,
+                                         int bd);
+#define vpx_highbd_convolve8_avg_horiz vpx_highbd_convolve8_avg_horiz_neon
+
+void vpx_highbd_convolve8_avg_vert_c(const uint16_t* src,
+                                     ptrdiff_t src_stride,
+                                     uint16_t* dst,
+                                     ptrdiff_t dst_stride,
+                                     const InterpKernel* filter,
+                                     int x0_q4,
+                                     int x_step_q4,
+                                     int y0_q4,
+                                     int y_step_q4,
+                                     int w,
+                                     int h,
+                                     int bd);
+void vpx_highbd_convolve8_avg_vert_neon(const uint16_t* src,
+                                        ptrdiff_t src_stride,
+                                        uint16_t* dst,
+                                        ptrdiff_t dst_stride,
+                                        const InterpKernel* filter,
+                                        int x0_q4,
+                                        int x_step_q4,
+                                        int y0_q4,
+                                        int y_step_q4,
+                                        int w,
+                                        int h,
+                                        int bd);
+#define vpx_highbd_convolve8_avg_vert vpx_highbd_convolve8_avg_vert_neon
+
+void vpx_highbd_convolve8_horiz_c(const uint16_t* src,
+                                  ptrdiff_t src_stride,
+                                  uint16_t* dst,
+                                  ptrdiff_t dst_stride,
+                                  const InterpKernel* filter,
+                                  int x0_q4,
+                                  int x_step_q4,
+                                  int y0_q4,
+                                  int y_step_q4,
+                                  int w,
+                                  int h,
+                                  int bd);
+void vpx_highbd_convolve8_horiz_neon(const uint16_t* src,
+                                     ptrdiff_t src_stride,
+                                     uint16_t* dst,
+                                     ptrdiff_t dst_stride,
+                                     const InterpKernel* filter,
+                                     int x0_q4,
+                                     int x_step_q4,
+                                     int y0_q4,
+                                     int y_step_q4,
+                                     int w,
+                                     int h,
+                                     int bd);
+#define vpx_highbd_convolve8_horiz vpx_highbd_convolve8_horiz_neon
+
+void vpx_highbd_convolve8_vert_c(const uint16_t* src,
+                                 ptrdiff_t src_stride,
+                                 uint16_t* dst,
+                                 ptrdiff_t dst_stride,
+                                 const InterpKernel* filter,
+                                 int x0_q4,
+                                 int x_step_q4,
+                                 int y0_q4,
+                                 int y_step_q4,
+                                 int w,
+                                 int h,
+                                 int bd);
+void vpx_highbd_convolve8_vert_neon(const uint16_t* src,
+                                    ptrdiff_t src_stride,
+                                    uint16_t* dst,
+                                    ptrdiff_t dst_stride,
+                                    const InterpKernel* filter,
+                                    int x0_q4,
+                                    int x_step_q4,
+                                    int y0_q4,
+                                    int y_step_q4,
+                                    int w,
+                                    int h,
+                                    int bd);
+#define vpx_highbd_convolve8_vert vpx_highbd_convolve8_vert_neon
+
+void vpx_highbd_convolve_avg_c(const uint16_t* src,
+                               ptrdiff_t src_stride,
+                               uint16_t* dst,
+                               ptrdiff_t dst_stride,
+                               const InterpKernel* filter,
+                               int x0_q4,
+                               int x_step_q4,
+                               int y0_q4,
+                               int y_step_q4,
+                               int w,
+                               int h,
+                               int bd);
+void vpx_highbd_convolve_avg_neon(const uint16_t* src,
+                                  ptrdiff_t src_stride,
+                                  uint16_t* dst,
+                                  ptrdiff_t dst_stride,
+                                  const InterpKernel* filter,
+                                  int x0_q4,
+                                  int x_step_q4,
+                                  int y0_q4,
+                                  int y_step_q4,
+                                  int w,
+                                  int h,
+                                  int bd);
+#define vpx_highbd_convolve_avg vpx_highbd_convolve_avg_neon
+
+void vpx_highbd_convolve_copy_c(const uint16_t* src,
+                                ptrdiff_t src_stride,
+                                uint16_t* dst,
+                                ptrdiff_t dst_stride,
+                                const InterpKernel* filter,
+                                int x0_q4,
+                                int x_step_q4,
+                                int y0_q4,
+                                int y_step_q4,
+                                int w,
+                                int h,
+                                int bd);
+void vpx_highbd_convolve_copy_neon(const uint16_t* src,
+                                   ptrdiff_t src_stride,
+                                   uint16_t* dst,
+                                   ptrdiff_t dst_stride,
+                                   const InterpKernel* filter,
+                                   int x0_q4,
+                                   int x_step_q4,
+                                   int y0_q4,
+                                   int y_step_q4,
+                                   int w,
+                                   int h,
+                                   int bd);
+#define vpx_highbd_convolve_copy vpx_highbd_convolve_copy_neon
+
+void vpx_highbd_d117_predictor_16x16_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d117_predictor_16x16 vpx_highbd_d117_predictor_16x16_c
+
+void vpx_highbd_d117_predictor_32x32_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d117_predictor_32x32 vpx_highbd_d117_predictor_32x32_c
+
+void vpx_highbd_d117_predictor_4x4_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d117_predictor_4x4 vpx_highbd_d117_predictor_4x4_c
+
+void vpx_highbd_d117_predictor_8x8_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d117_predictor_8x8 vpx_highbd_d117_predictor_8x8_c
+
+void vpx_highbd_d135_predictor_16x16_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_d135_predictor_16x16_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_d135_predictor_16x16 vpx_highbd_d135_predictor_16x16_neon
+
+void vpx_highbd_d135_predictor_32x32_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_d135_predictor_32x32_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_d135_predictor_32x32 vpx_highbd_d135_predictor_32x32_neon
+
+void vpx_highbd_d135_predictor_4x4_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_d135_predictor_4x4_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_d135_predictor_4x4 vpx_highbd_d135_predictor_4x4_neon
+
+void vpx_highbd_d135_predictor_8x8_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_d135_predictor_8x8_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_d135_predictor_8x8 vpx_highbd_d135_predictor_8x8_neon
+
+void vpx_highbd_d153_predictor_16x16_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d153_predictor_16x16 vpx_highbd_d153_predictor_16x16_c
+
+void vpx_highbd_d153_predictor_32x32_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d153_predictor_32x32 vpx_highbd_d153_predictor_32x32_c
+
+void vpx_highbd_d153_predictor_4x4_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d153_predictor_4x4 vpx_highbd_d153_predictor_4x4_c
+
+void vpx_highbd_d153_predictor_8x8_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d153_predictor_8x8 vpx_highbd_d153_predictor_8x8_c
+
+void vpx_highbd_d207_predictor_16x16_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d207_predictor_16x16 vpx_highbd_d207_predictor_16x16_c
+
+void vpx_highbd_d207_predictor_32x32_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d207_predictor_32x32 vpx_highbd_d207_predictor_32x32_c
+
+void vpx_highbd_d207_predictor_4x4_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d207_predictor_4x4 vpx_highbd_d207_predictor_4x4_c
+
+void vpx_highbd_d207_predictor_8x8_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_d207_predictor_8x8 vpx_highbd_d207_predictor_8x8_c
+
+void vpx_highbd_d45_predictor_16x16_c(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+void vpx_highbd_d45_predictor_16x16_neon(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+#define vpx_highbd_d45_predictor_16x16 vpx_highbd_d45_predictor_16x16_neon
+
+void vpx_highbd_d45_predictor_32x32_c(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+void vpx_highbd_d45_predictor_32x32_neon(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+#define vpx_highbd_d45_predictor_32x32 vpx_highbd_d45_predictor_32x32_neon
+
+void vpx_highbd_d45_predictor_4x4_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_d45_predictor_4x4_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d45_predictor_4x4 vpx_highbd_d45_predictor_4x4_neon
+
+void vpx_highbd_d45_predictor_8x8_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_d45_predictor_8x8_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_d45_predictor_8x8 vpx_highbd_d45_predictor_8x8_neon
+
+void vpx_highbd_d63_predictor_16x16_c(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_d63_predictor_16x16 vpx_highbd_d63_predictor_16x16_c
+
+void vpx_highbd_d63_predictor_32x32_c(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_d63_predictor_32x32 vpx_highbd_d63_predictor_32x32_c
+
+void vpx_highbd_d63_predictor_4x4_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+#define vpx_highbd_d63_predictor_4x4 vpx_highbd_d63_predictor_4x4_c
+
+void vpx_highbd_d63_predictor_8x8_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+#define vpx_highbd_d63_predictor_8x8 vpx_highbd_d63_predictor_8x8_c
+
+void vpx_highbd_dc_128_predictor_16x16_c(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+void vpx_highbd_dc_128_predictor_16x16_neon(uint16_t* dst,
+                                            ptrdiff_t stride,
+                                            const uint16_t* above,
+                                            const uint16_t* left,
+                                            int bd);
+#define vpx_highbd_dc_128_predictor_16x16 vpx_highbd_dc_128_predictor_16x16_neon
+
+void vpx_highbd_dc_128_predictor_32x32_c(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+void vpx_highbd_dc_128_predictor_32x32_neon(uint16_t* dst,
+                                            ptrdiff_t stride,
+                                            const uint16_t* above,
+                                            const uint16_t* left,
+                                            int bd);
+#define vpx_highbd_dc_128_predictor_32x32 vpx_highbd_dc_128_predictor_32x32_neon
+
+void vpx_highbd_dc_128_predictor_4x4_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_dc_128_predictor_4x4_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_dc_128_predictor_4x4 vpx_highbd_dc_128_predictor_4x4_neon
+
+void vpx_highbd_dc_128_predictor_8x8_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_dc_128_predictor_8x8_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_dc_128_predictor_8x8 vpx_highbd_dc_128_predictor_8x8_neon
+
+void vpx_highbd_dc_left_predictor_16x16_c(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+void vpx_highbd_dc_left_predictor_16x16_neon(uint16_t* dst,
+                                             ptrdiff_t stride,
+                                             const uint16_t* above,
+                                             const uint16_t* left,
+                                             int bd);
+#define vpx_highbd_dc_left_predictor_16x16 \
+  vpx_highbd_dc_left_predictor_16x16_neon
+
+void vpx_highbd_dc_left_predictor_32x32_c(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+void vpx_highbd_dc_left_predictor_32x32_neon(uint16_t* dst,
+                                             ptrdiff_t stride,
+                                             const uint16_t* above,
+                                             const uint16_t* left,
+                                             int bd);
+#define vpx_highbd_dc_left_predictor_32x32 \
+  vpx_highbd_dc_left_predictor_32x32_neon
+
+void vpx_highbd_dc_left_predictor_4x4_c(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+void vpx_highbd_dc_left_predictor_4x4_neon(uint16_t* dst,
+                                           ptrdiff_t stride,
+                                           const uint16_t* above,
+                                           const uint16_t* left,
+                                           int bd);
+#define vpx_highbd_dc_left_predictor_4x4 vpx_highbd_dc_left_predictor_4x4_neon
+
+void vpx_highbd_dc_left_predictor_8x8_c(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+void vpx_highbd_dc_left_predictor_8x8_neon(uint16_t* dst,
+                                           ptrdiff_t stride,
+                                           const uint16_t* above,
+                                           const uint16_t* left,
+                                           int bd);
+#define vpx_highbd_dc_left_predictor_8x8 vpx_highbd_dc_left_predictor_8x8_neon
+
+void vpx_highbd_dc_predictor_16x16_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_dc_predictor_16x16_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_dc_predictor_16x16 vpx_highbd_dc_predictor_16x16_neon
+
+void vpx_highbd_dc_predictor_32x32_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_dc_predictor_32x32_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_dc_predictor_32x32 vpx_highbd_dc_predictor_32x32_neon
+
+void vpx_highbd_dc_predictor_4x4_c(uint16_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint16_t* above,
+                                   const uint16_t* left,
+                                   int bd);
+void vpx_highbd_dc_predictor_4x4_neon(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_dc_predictor_4x4 vpx_highbd_dc_predictor_4x4_neon
+
+void vpx_highbd_dc_predictor_8x8_c(uint16_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint16_t* above,
+                                   const uint16_t* left,
+                                   int bd);
+void vpx_highbd_dc_predictor_8x8_neon(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_dc_predictor_8x8 vpx_highbd_dc_predictor_8x8_neon
+
+void vpx_highbd_dc_top_predictor_16x16_c(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+void vpx_highbd_dc_top_predictor_16x16_neon(uint16_t* dst,
+                                            ptrdiff_t stride,
+                                            const uint16_t* above,
+                                            const uint16_t* left,
+                                            int bd);
+#define vpx_highbd_dc_top_predictor_16x16 vpx_highbd_dc_top_predictor_16x16_neon
+
+void vpx_highbd_dc_top_predictor_32x32_c(uint16_t* dst,
+                                         ptrdiff_t stride,
+                                         const uint16_t* above,
+                                         const uint16_t* left,
+                                         int bd);
+void vpx_highbd_dc_top_predictor_32x32_neon(uint16_t* dst,
+                                            ptrdiff_t stride,
+                                            const uint16_t* above,
+                                            const uint16_t* left,
+                                            int bd);
+#define vpx_highbd_dc_top_predictor_32x32 vpx_highbd_dc_top_predictor_32x32_neon
+
+void vpx_highbd_dc_top_predictor_4x4_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_dc_top_predictor_4x4_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_dc_top_predictor_4x4 vpx_highbd_dc_top_predictor_4x4_neon
+
+void vpx_highbd_dc_top_predictor_8x8_c(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+void vpx_highbd_dc_top_predictor_8x8_neon(uint16_t* dst,
+                                          ptrdiff_t stride,
+                                          const uint16_t* above,
+                                          const uint16_t* left,
+                                          int bd);
+#define vpx_highbd_dc_top_predictor_8x8 vpx_highbd_dc_top_predictor_8x8_neon
+
+void vpx_highbd_fdct16x16_c(const int16_t* input,
+                            tran_low_t* output,
+                            int stride);
+#define vpx_highbd_fdct16x16 vpx_highbd_fdct16x16_c
+
+void vpx_highbd_fdct16x16_1_c(const int16_t* input,
+                              tran_low_t* output,
+                              int stride);
+#define vpx_highbd_fdct16x16_1 vpx_highbd_fdct16x16_1_c
+
+void vpx_highbd_fdct32x32_c(const int16_t* input,
+                            tran_low_t* output,
+                            int stride);
+#define vpx_highbd_fdct32x32 vpx_highbd_fdct32x32_c
+
+void vpx_highbd_fdct32x32_1_c(const int16_t* input,
+                              tran_low_t* output,
+                              int stride);
+#define vpx_highbd_fdct32x32_1 vpx_highbd_fdct32x32_1_c
+
+void vpx_highbd_fdct32x32_rd_c(const int16_t* input,
+                               tran_low_t* output,
+                               int stride);
+#define vpx_highbd_fdct32x32_rd vpx_highbd_fdct32x32_rd_c
+
+void vpx_highbd_fdct4x4_c(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_highbd_fdct4x4 vpx_highbd_fdct4x4_c
+
+void vpx_highbd_fdct8x8_c(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_highbd_fdct8x8 vpx_highbd_fdct8x8_c
+
+void vpx_highbd_fdct8x8_1_c(const int16_t* input,
+                            tran_low_t* output,
+                            int stride);
+void vpx_fdct8x8_1_neon(const int16_t* input, tran_low_t* output, int stride);
+#define vpx_highbd_fdct8x8_1 vpx_fdct8x8_1_neon
+
+void vpx_highbd_h_predictor_16x16_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_h_predictor_16x16_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_h_predictor_16x16 vpx_highbd_h_predictor_16x16_neon
+
+void vpx_highbd_h_predictor_32x32_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_h_predictor_32x32_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_h_predictor_32x32 vpx_highbd_h_predictor_32x32_neon
+
+void vpx_highbd_h_predictor_4x4_c(uint16_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint16_t* above,
+                                  const uint16_t* left,
+                                  int bd);
+void vpx_highbd_h_predictor_4x4_neon(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_h_predictor_4x4 vpx_highbd_h_predictor_4x4_neon
+
+void vpx_highbd_h_predictor_8x8_c(uint16_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint16_t* above,
+                                  const uint16_t* left,
+                                  int bd);
+void vpx_highbd_h_predictor_8x8_neon(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_h_predictor_8x8 vpx_highbd_h_predictor_8x8_neon
+
+void vpx_highbd_hadamard_16x16_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+#define vpx_highbd_hadamard_16x16 vpx_highbd_hadamard_16x16_c
+
+void vpx_highbd_hadamard_32x32_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+#define vpx_highbd_hadamard_32x32 vpx_highbd_hadamard_32x32_c
+
+void vpx_highbd_hadamard_8x8_c(const int16_t* src_diff,
+                               ptrdiff_t src_stride,
+                               tran_low_t* coeff);
+#define vpx_highbd_hadamard_8x8 vpx_highbd_hadamard_8x8_c
+
+void vpx_highbd_idct16x16_10_add_c(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int bd);
+void vpx_highbd_idct16x16_10_add_neon(const tran_low_t* input,
+                                      uint16_t* dest,
+                                      int stride,
+                                      int bd);
+#define vpx_highbd_idct16x16_10_add vpx_highbd_idct16x16_10_add_neon
+
+void vpx_highbd_idct16x16_1_add_c(const tran_low_t* input,
+                                  uint16_t* dest,
+                                  int stride,
+                                  int bd);
+void vpx_highbd_idct16x16_1_add_neon(const tran_low_t* input,
+                                     uint16_t* dest,
+                                     int stride,
+                                     int bd);
+#define vpx_highbd_idct16x16_1_add vpx_highbd_idct16x16_1_add_neon
+
+void vpx_highbd_idct16x16_256_add_c(const tran_low_t* input,
+                                    uint16_t* dest,
+                                    int stride,
+                                    int bd);
+void vpx_highbd_idct16x16_256_add_neon(const tran_low_t* input,
+                                       uint16_t* dest,
+                                       int stride,
+                                       int bd);
+#define vpx_highbd_idct16x16_256_add vpx_highbd_idct16x16_256_add_neon
+
+void vpx_highbd_idct16x16_38_add_c(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int bd);
+void vpx_highbd_idct16x16_38_add_neon(const tran_low_t* input,
+                                      uint16_t* dest,
+                                      int stride,
+                                      int bd);
+#define vpx_highbd_idct16x16_38_add vpx_highbd_idct16x16_38_add_neon
+
+void vpx_highbd_idct32x32_1024_add_c(const tran_low_t* input,
+                                     uint16_t* dest,
+                                     int stride,
+                                     int bd);
+void vpx_highbd_idct32x32_1024_add_neon(const tran_low_t* input,
+                                        uint16_t* dest,
+                                        int stride,
+                                        int bd);
+#define vpx_highbd_idct32x32_1024_add vpx_highbd_idct32x32_1024_add_neon
+
+void vpx_highbd_idct32x32_135_add_c(const tran_low_t* input,
+                                    uint16_t* dest,
+                                    int stride,
+                                    int bd);
+void vpx_highbd_idct32x32_135_add_neon(const tran_low_t* input,
+                                       uint16_t* dest,
+                                       int stride,
+                                       int bd);
+#define vpx_highbd_idct32x32_135_add vpx_highbd_idct32x32_135_add_neon
+
+void vpx_highbd_idct32x32_1_add_c(const tran_low_t* input,
+                                  uint16_t* dest,
+                                  int stride,
+                                  int bd);
+void vpx_highbd_idct32x32_1_add_neon(const tran_low_t* input,
+                                     uint16_t* dest,
+                                     int stride,
+                                     int bd);
+#define vpx_highbd_idct32x32_1_add vpx_highbd_idct32x32_1_add_neon
+
+void vpx_highbd_idct32x32_34_add_c(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int bd);
+void vpx_highbd_idct32x32_34_add_neon(const tran_low_t* input,
+                                      uint16_t* dest,
+                                      int stride,
+                                      int bd);
+#define vpx_highbd_idct32x32_34_add vpx_highbd_idct32x32_34_add_neon
+
+void vpx_highbd_idct4x4_16_add_c(const tran_low_t* input,
+                                 uint16_t* dest,
+                                 int stride,
+                                 int bd);
+void vpx_highbd_idct4x4_16_add_neon(const tran_low_t* input,
+                                    uint16_t* dest,
+                                    int stride,
+                                    int bd);
+#define vpx_highbd_idct4x4_16_add vpx_highbd_idct4x4_16_add_neon
+
+void vpx_highbd_idct4x4_1_add_c(const tran_low_t* input,
+                                uint16_t* dest,
+                                int stride,
+                                int bd);
+void vpx_highbd_idct4x4_1_add_neon(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int bd);
+#define vpx_highbd_idct4x4_1_add vpx_highbd_idct4x4_1_add_neon
+
+void vpx_highbd_idct8x8_12_add_c(const tran_low_t* input,
+                                 uint16_t* dest,
+                                 int stride,
+                                 int bd);
+void vpx_highbd_idct8x8_12_add_neon(const tran_low_t* input,
+                                    uint16_t* dest,
+                                    int stride,
+                                    int bd);
+#define vpx_highbd_idct8x8_12_add vpx_highbd_idct8x8_12_add_neon
+
+void vpx_highbd_idct8x8_1_add_c(const tran_low_t* input,
+                                uint16_t* dest,
+                                int stride,
+                                int bd);
+void vpx_highbd_idct8x8_1_add_neon(const tran_low_t* input,
+                                   uint16_t* dest,
+                                   int stride,
+                                   int bd);
+#define vpx_highbd_idct8x8_1_add vpx_highbd_idct8x8_1_add_neon
+
+void vpx_highbd_idct8x8_64_add_c(const tran_low_t* input,
+                                 uint16_t* dest,
+                                 int stride,
+                                 int bd);
+void vpx_highbd_idct8x8_64_add_neon(const tran_low_t* input,
+                                    uint16_t* dest,
+                                    int stride,
+                                    int bd);
+#define vpx_highbd_idct8x8_64_add vpx_highbd_idct8x8_64_add_neon
+
+void vpx_highbd_iwht4x4_16_add_c(const tran_low_t* input,
+                                 uint16_t* dest,
+                                 int stride,
+                                 int bd);
+#define vpx_highbd_iwht4x4_16_add vpx_highbd_iwht4x4_16_add_c
+
+void vpx_highbd_iwht4x4_1_add_c(const tran_low_t* input,
+                                uint16_t* dest,
+                                int stride,
+                                int bd);
+#define vpx_highbd_iwht4x4_1_add vpx_highbd_iwht4x4_1_add_c
+
+void vpx_highbd_lpf_horizontal_16_c(uint16_t* s,
+                                    int pitch,
+                                    const uint8_t* blimit,
+                                    const uint8_t* limit,
+                                    const uint8_t* thresh,
+                                    int bd);
+void vpx_highbd_lpf_horizontal_16_neon(uint16_t* s,
+                                       int pitch,
+                                       const uint8_t* blimit,
+                                       const uint8_t* limit,
+                                       const uint8_t* thresh,
+                                       int bd);
+#define vpx_highbd_lpf_horizontal_16 vpx_highbd_lpf_horizontal_16_neon
+
+void vpx_highbd_lpf_horizontal_16_dual_c(uint16_t* s,
+                                         int pitch,
+                                         const uint8_t* blimit,
+                                         const uint8_t* limit,
+                                         const uint8_t* thresh,
+                                         int bd);
+void vpx_highbd_lpf_horizontal_16_dual_neon(uint16_t* s,
+                                            int pitch,
+                                            const uint8_t* blimit,
+                                            const uint8_t* limit,
+                                            const uint8_t* thresh,
+                                            int bd);
+#define vpx_highbd_lpf_horizontal_16_dual vpx_highbd_lpf_horizontal_16_dual_neon
+
+void vpx_highbd_lpf_horizontal_4_c(uint16_t* s,
+                                   int pitch,
+                                   const uint8_t* blimit,
+                                   const uint8_t* limit,
+                                   const uint8_t* thresh,
+                                   int bd);
+void vpx_highbd_lpf_horizontal_4_neon(uint16_t* s,
+                                      int pitch,
+                                      const uint8_t* blimit,
+                                      const uint8_t* limit,
+                                      const uint8_t* thresh,
+                                      int bd);
+#define vpx_highbd_lpf_horizontal_4 vpx_highbd_lpf_horizontal_4_neon
+
+void vpx_highbd_lpf_horizontal_4_dual_c(uint16_t* s,
+                                        int pitch,
+                                        const uint8_t* blimit0,
+                                        const uint8_t* limit0,
+                                        const uint8_t* thresh0,
+                                        const uint8_t* blimit1,
+                                        const uint8_t* limit1,
+                                        const uint8_t* thresh1,
+                                        int bd);
+void vpx_highbd_lpf_horizontal_4_dual_neon(uint16_t* s,
+                                           int pitch,
+                                           const uint8_t* blimit0,
+                                           const uint8_t* limit0,
+                                           const uint8_t* thresh0,
+                                           const uint8_t* blimit1,
+                                           const uint8_t* limit1,
+                                           const uint8_t* thresh1,
+                                           int bd);
+#define vpx_highbd_lpf_horizontal_4_dual vpx_highbd_lpf_horizontal_4_dual_neon
+
+void vpx_highbd_lpf_horizontal_8_c(uint16_t* s,
+                                   int pitch,
+                                   const uint8_t* blimit,
+                                   const uint8_t* limit,
+                                   const uint8_t* thresh,
+                                   int bd);
+void vpx_highbd_lpf_horizontal_8_neon(uint16_t* s,
+                                      int pitch,
+                                      const uint8_t* blimit,
+                                      const uint8_t* limit,
+                                      const uint8_t* thresh,
+                                      int bd);
+#define vpx_highbd_lpf_horizontal_8 vpx_highbd_lpf_horizontal_8_neon
+
+void vpx_highbd_lpf_horizontal_8_dual_c(uint16_t* s,
+                                        int pitch,
+                                        const uint8_t* blimit0,
+                                        const uint8_t* limit0,
+                                        const uint8_t* thresh0,
+                                        const uint8_t* blimit1,
+                                        const uint8_t* limit1,
+                                        const uint8_t* thresh1,
+                                        int bd);
+void vpx_highbd_lpf_horizontal_8_dual_neon(uint16_t* s,
+                                           int pitch,
+                                           const uint8_t* blimit0,
+                                           const uint8_t* limit0,
+                                           const uint8_t* thresh0,
+                                           const uint8_t* blimit1,
+                                           const uint8_t* limit1,
+                                           const uint8_t* thresh1,
+                                           int bd);
+#define vpx_highbd_lpf_horizontal_8_dual vpx_highbd_lpf_horizontal_8_dual_neon
+
+void vpx_highbd_lpf_vertical_16_c(uint16_t* s,
+                                  int pitch,
+                                  const uint8_t* blimit,
+                                  const uint8_t* limit,
+                                  const uint8_t* thresh,
+                                  int bd);
+void vpx_highbd_lpf_vertical_16_neon(uint16_t* s,
+                                     int pitch,
+                                     const uint8_t* blimit,
+                                     const uint8_t* limit,
+                                     const uint8_t* thresh,
+                                     int bd);
+#define vpx_highbd_lpf_vertical_16 vpx_highbd_lpf_vertical_16_neon
+
+void vpx_highbd_lpf_vertical_16_dual_c(uint16_t* s,
+                                       int pitch,
+                                       const uint8_t* blimit,
+                                       const uint8_t* limit,
+                                       const uint8_t* thresh,
+                                       int bd);
+void vpx_highbd_lpf_vertical_16_dual_neon(uint16_t* s,
+                                          int pitch,
+                                          const uint8_t* blimit,
+                                          const uint8_t* limit,
+                                          const uint8_t* thresh,
+                                          int bd);
+#define vpx_highbd_lpf_vertical_16_dual vpx_highbd_lpf_vertical_16_dual_neon
+
+void vpx_highbd_lpf_vertical_4_c(uint16_t* s,
+                                 int pitch,
+                                 const uint8_t* blimit,
+                                 const uint8_t* limit,
+                                 const uint8_t* thresh,
+                                 int bd);
+void vpx_highbd_lpf_vertical_4_neon(uint16_t* s,
+                                    int pitch,
+                                    const uint8_t* blimit,
+                                    const uint8_t* limit,
+                                    const uint8_t* thresh,
+                                    int bd);
+#define vpx_highbd_lpf_vertical_4 vpx_highbd_lpf_vertical_4_neon
+
+void vpx_highbd_lpf_vertical_4_dual_c(uint16_t* s,
+                                      int pitch,
+                                      const uint8_t* blimit0,
+                                      const uint8_t* limit0,
+                                      const uint8_t* thresh0,
+                                      const uint8_t* blimit1,
+                                      const uint8_t* limit1,
+                                      const uint8_t* thresh1,
+                                      int bd);
+void vpx_highbd_lpf_vertical_4_dual_neon(uint16_t* s,
+                                         int pitch,
+                                         const uint8_t* blimit0,
+                                         const uint8_t* limit0,
+                                         const uint8_t* thresh0,
+                                         const uint8_t* blimit1,
+                                         const uint8_t* limit1,
+                                         const uint8_t* thresh1,
+                                         int bd);
+#define vpx_highbd_lpf_vertical_4_dual vpx_highbd_lpf_vertical_4_dual_neon
+
+void vpx_highbd_lpf_vertical_8_c(uint16_t* s,
+                                 int pitch,
+                                 const uint8_t* blimit,
+                                 const uint8_t* limit,
+                                 const uint8_t* thresh,
+                                 int bd);
+void vpx_highbd_lpf_vertical_8_neon(uint16_t* s,
+                                    int pitch,
+                                    const uint8_t* blimit,
+                                    const uint8_t* limit,
+                                    const uint8_t* thresh,
+                                    int bd);
+#define vpx_highbd_lpf_vertical_8 vpx_highbd_lpf_vertical_8_neon
+
+void vpx_highbd_lpf_vertical_8_dual_c(uint16_t* s,
+                                      int pitch,
+                                      const uint8_t* blimit0,
+                                      const uint8_t* limit0,
+                                      const uint8_t* thresh0,
+                                      const uint8_t* blimit1,
+                                      const uint8_t* limit1,
+                                      const uint8_t* thresh1,
+                                      int bd);
+void vpx_highbd_lpf_vertical_8_dual_neon(uint16_t* s,
+                                         int pitch,
+                                         const uint8_t* blimit0,
+                                         const uint8_t* limit0,
+                                         const uint8_t* thresh0,
+                                         const uint8_t* blimit1,
+                                         const uint8_t* limit1,
+                                         const uint8_t* thresh1,
+                                         int bd);
+#define vpx_highbd_lpf_vertical_8_dual vpx_highbd_lpf_vertical_8_dual_neon
+
+void vpx_highbd_minmax_8x8_c(const uint8_t* s8,
+                             int p,
+                             const uint8_t* d8,
+                             int dp,
+                             int* min,
+                             int* max);
+#define vpx_highbd_minmax_8x8 vpx_highbd_minmax_8x8_c
+
+void vpx_highbd_quantize_b_c(const tran_low_t* coeff_ptr,
+                             intptr_t n_coeffs,
+                             int skip_block,
+                             const int16_t* zbin_ptr,
+                             const int16_t* round_ptr,
+                             const int16_t* quant_ptr,
+                             const int16_t* quant_shift_ptr,
+                             tran_low_t* qcoeff_ptr,
+                             tran_low_t* dqcoeff_ptr,
+                             const int16_t* dequant_ptr,
+                             uint16_t* eob_ptr,
+                             const int16_t* scan,
+                             const int16_t* iscan);
+#define vpx_highbd_quantize_b vpx_highbd_quantize_b_c
+
+void vpx_highbd_quantize_b_32x32_c(const tran_low_t* coeff_ptr,
+                                   intptr_t n_coeffs,
+                                   int skip_block,
+                                   const int16_t* zbin_ptr,
+                                   const int16_t* round_ptr,
+                                   const int16_t* quant_ptr,
+                                   const int16_t* quant_shift_ptr,
+                                   tran_low_t* qcoeff_ptr,
+                                   tran_low_t* dqcoeff_ptr,
+                                   const int16_t* dequant_ptr,
+                                   uint16_t* eob_ptr,
+                                   const int16_t* scan,
+                                   const int16_t* iscan);
+#define vpx_highbd_quantize_b_32x32 vpx_highbd_quantize_b_32x32_c
+
+unsigned int vpx_highbd_sad16x16_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad16x16 vpx_highbd_sad16x16_c
+
+unsigned int vpx_highbd_sad16x16_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad16x16_avg vpx_highbd_sad16x16_avg_c
+
+void vpx_highbd_sad16x16x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad16x16x4d vpx_highbd_sad16x16x4d_c
+
+unsigned int vpx_highbd_sad16x32_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad16x32 vpx_highbd_sad16x32_c
+
+unsigned int vpx_highbd_sad16x32_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad16x32_avg vpx_highbd_sad16x32_avg_c
+
+void vpx_highbd_sad16x32x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad16x32x4d vpx_highbd_sad16x32x4d_c
+
+unsigned int vpx_highbd_sad16x8_c(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride);
+#define vpx_highbd_sad16x8 vpx_highbd_sad16x8_c
+
+unsigned int vpx_highbd_sad16x8_avg_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      const uint8_t* second_pred);
+#define vpx_highbd_sad16x8_avg vpx_highbd_sad16x8_avg_c
+
+void vpx_highbd_sad16x8x4d_c(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* const ref_array[],
+                             int ref_stride,
+                             uint32_t* sad_array);
+#define vpx_highbd_sad16x8x4d vpx_highbd_sad16x8x4d_c
+
+unsigned int vpx_highbd_sad32x16_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad32x16 vpx_highbd_sad32x16_c
+
+unsigned int vpx_highbd_sad32x16_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad32x16_avg vpx_highbd_sad32x16_avg_c
+
+void vpx_highbd_sad32x16x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad32x16x4d vpx_highbd_sad32x16x4d_c
+
+unsigned int vpx_highbd_sad32x32_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad32x32 vpx_highbd_sad32x32_c
+
+unsigned int vpx_highbd_sad32x32_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad32x32_avg vpx_highbd_sad32x32_avg_c
+
+void vpx_highbd_sad32x32x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad32x32x4d vpx_highbd_sad32x32x4d_c
+
+unsigned int vpx_highbd_sad32x64_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad32x64 vpx_highbd_sad32x64_c
+
+unsigned int vpx_highbd_sad32x64_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad32x64_avg vpx_highbd_sad32x64_avg_c
+
+void vpx_highbd_sad32x64x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad32x64x4d vpx_highbd_sad32x64x4d_c
+
+unsigned int vpx_highbd_sad4x4_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride);
+#define vpx_highbd_sad4x4 vpx_highbd_sad4x4_c
+
+unsigned int vpx_highbd_sad4x4_avg_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     const uint8_t* second_pred);
+#define vpx_highbd_sad4x4_avg vpx_highbd_sad4x4_avg_c
+
+void vpx_highbd_sad4x4x4d_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* const ref_array[],
+                            int ref_stride,
+                            uint32_t* sad_array);
+#define vpx_highbd_sad4x4x4d vpx_highbd_sad4x4x4d_c
+
+unsigned int vpx_highbd_sad4x8_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride);
+#define vpx_highbd_sad4x8 vpx_highbd_sad4x8_c
+
+unsigned int vpx_highbd_sad4x8_avg_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     const uint8_t* second_pred);
+#define vpx_highbd_sad4x8_avg vpx_highbd_sad4x8_avg_c
+
+void vpx_highbd_sad4x8x4d_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* const ref_array[],
+                            int ref_stride,
+                            uint32_t* sad_array);
+#define vpx_highbd_sad4x8x4d vpx_highbd_sad4x8x4d_c
+
+unsigned int vpx_highbd_sad64x32_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad64x32 vpx_highbd_sad64x32_c
+
+unsigned int vpx_highbd_sad64x32_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad64x32_avg vpx_highbd_sad64x32_avg_c
+
+void vpx_highbd_sad64x32x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad64x32x4d vpx_highbd_sad64x32x4d_c
+
+unsigned int vpx_highbd_sad64x64_c(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride);
+#define vpx_highbd_sad64x64 vpx_highbd_sad64x64_c
+
+unsigned int vpx_highbd_sad64x64_avg_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       const uint8_t* second_pred);
+#define vpx_highbd_sad64x64_avg vpx_highbd_sad64x64_avg_c
+
+void vpx_highbd_sad64x64x4d_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* const ref_array[],
+                              int ref_stride,
+                              uint32_t* sad_array);
+#define vpx_highbd_sad64x64x4d vpx_highbd_sad64x64x4d_c
+
+unsigned int vpx_highbd_sad8x16_c(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride);
+#define vpx_highbd_sad8x16 vpx_highbd_sad8x16_c
+
+unsigned int vpx_highbd_sad8x16_avg_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      const uint8_t* second_pred);
+#define vpx_highbd_sad8x16_avg vpx_highbd_sad8x16_avg_c
+
+void vpx_highbd_sad8x16x4d_c(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* const ref_array[],
+                             int ref_stride,
+                             uint32_t* sad_array);
+#define vpx_highbd_sad8x16x4d vpx_highbd_sad8x16x4d_c
+
+unsigned int vpx_highbd_sad8x4_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride);
+#define vpx_highbd_sad8x4 vpx_highbd_sad8x4_c
+
+unsigned int vpx_highbd_sad8x4_avg_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     const uint8_t* second_pred);
+#define vpx_highbd_sad8x4_avg vpx_highbd_sad8x4_avg_c
+
+void vpx_highbd_sad8x4x4d_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* const ref_array[],
+                            int ref_stride,
+                            uint32_t* sad_array);
+#define vpx_highbd_sad8x4x4d vpx_highbd_sad8x4x4d_c
+
+unsigned int vpx_highbd_sad8x8_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride);
+#define vpx_highbd_sad8x8 vpx_highbd_sad8x8_c
+
+unsigned int vpx_highbd_sad8x8_avg_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     const uint8_t* second_pred);
+#define vpx_highbd_sad8x8_avg vpx_highbd_sad8x8_avg_c
+
+void vpx_highbd_sad8x8x4d_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* const ref_array[],
+                            int ref_stride,
+                            uint32_t* sad_array);
+#define vpx_highbd_sad8x8x4d vpx_highbd_sad8x8x4d_c
+
+int vpx_highbd_satd_c(const tran_low_t* coeff, int length);
+#define vpx_highbd_satd vpx_highbd_satd_c
+
+void vpx_highbd_subtract_block_c(int rows,
+                                 int cols,
+                                 int16_t* diff_ptr,
+                                 ptrdiff_t diff_stride,
+                                 const uint8_t* src8_ptr,
+                                 ptrdiff_t src_stride,
+                                 const uint8_t* pred8_ptr,
+                                 ptrdiff_t pred_stride,
+                                 int bd);
+#define vpx_highbd_subtract_block vpx_highbd_subtract_block_c
+
+void vpx_highbd_tm_predictor_16x16_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_tm_predictor_16x16_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_tm_predictor_16x16 vpx_highbd_tm_predictor_16x16_neon
+
+void vpx_highbd_tm_predictor_32x32_c(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+void vpx_highbd_tm_predictor_32x32_neon(uint16_t* dst,
+                                        ptrdiff_t stride,
+                                        const uint16_t* above,
+                                        const uint16_t* left,
+                                        int bd);
+#define vpx_highbd_tm_predictor_32x32 vpx_highbd_tm_predictor_32x32_neon
+
+void vpx_highbd_tm_predictor_4x4_c(uint16_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint16_t* above,
+                                   const uint16_t* left,
+                                   int bd);
+void vpx_highbd_tm_predictor_4x4_neon(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_tm_predictor_4x4 vpx_highbd_tm_predictor_4x4_neon
+
+void vpx_highbd_tm_predictor_8x8_c(uint16_t* dst,
+                                   ptrdiff_t stride,
+                                   const uint16_t* above,
+                                   const uint16_t* left,
+                                   int bd);
+void vpx_highbd_tm_predictor_8x8_neon(uint16_t* dst,
+                                      ptrdiff_t stride,
+                                      const uint16_t* above,
+                                      const uint16_t* left,
+                                      int bd);
+#define vpx_highbd_tm_predictor_8x8 vpx_highbd_tm_predictor_8x8_neon
+
+void vpx_highbd_v_predictor_16x16_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_v_predictor_16x16_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_v_predictor_16x16 vpx_highbd_v_predictor_16x16_neon
+
+void vpx_highbd_v_predictor_32x32_c(uint16_t* dst,
+                                    ptrdiff_t stride,
+                                    const uint16_t* above,
+                                    const uint16_t* left,
+                                    int bd);
+void vpx_highbd_v_predictor_32x32_neon(uint16_t* dst,
+                                       ptrdiff_t stride,
+                                       const uint16_t* above,
+                                       const uint16_t* left,
+                                       int bd);
+#define vpx_highbd_v_predictor_32x32 vpx_highbd_v_predictor_32x32_neon
+
+void vpx_highbd_v_predictor_4x4_c(uint16_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint16_t* above,
+                                  const uint16_t* left,
+                                  int bd);
+void vpx_highbd_v_predictor_4x4_neon(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_v_predictor_4x4 vpx_highbd_v_predictor_4x4_neon
+
+void vpx_highbd_v_predictor_8x8_c(uint16_t* dst,
+                                  ptrdiff_t stride,
+                                  const uint16_t* above,
+                                  const uint16_t* left,
+                                  int bd);
+void vpx_highbd_v_predictor_8x8_neon(uint16_t* dst,
+                                     ptrdiff_t stride,
+                                     const uint16_t* above,
+                                     const uint16_t* left,
+                                     int bd);
+#define vpx_highbd_v_predictor_8x8 vpx_highbd_v_predictor_8x8_neon
+
+void vpx_idct16x16_10_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct16x16_10_add_neon(const tran_low_t* input,
+                               uint8_t* dest,
+                               int stride);
+#define vpx_idct16x16_10_add vpx_idct16x16_10_add_neon
+
+void vpx_idct16x16_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct16x16_1_add_neon(const tran_low_t* input,
+                              uint8_t* dest,
+                              int stride);
+#define vpx_idct16x16_1_add vpx_idct16x16_1_add_neon
+
+void vpx_idct16x16_256_add_c(const tran_low_t* input,
+                             uint8_t* dest,
+                             int stride);
+void vpx_idct16x16_256_add_neon(const tran_low_t* input,
+                                uint8_t* dest,
+                                int stride);
+#define vpx_idct16x16_256_add vpx_idct16x16_256_add_neon
+
+void vpx_idct16x16_38_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct16x16_38_add_neon(const tran_low_t* input,
+                               uint8_t* dest,
+                               int stride);
+#define vpx_idct16x16_38_add vpx_idct16x16_38_add_neon
+
+void vpx_idct32x32_1024_add_c(const tran_low_t* input,
+                              uint8_t* dest,
+                              int stride);
+void vpx_idct32x32_1024_add_neon(const tran_low_t* input,
+                                 uint8_t* dest,
+                                 int stride);
+#define vpx_idct32x32_1024_add vpx_idct32x32_1024_add_neon
+
+void vpx_idct32x32_135_add_c(const tran_low_t* input,
+                             uint8_t* dest,
+                             int stride);
+void vpx_idct32x32_135_add_neon(const tran_low_t* input,
+                                uint8_t* dest,
+                                int stride);
+#define vpx_idct32x32_135_add vpx_idct32x32_135_add_neon
+
+void vpx_idct32x32_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct32x32_1_add_neon(const tran_low_t* input,
+                              uint8_t* dest,
+                              int stride);
+#define vpx_idct32x32_1_add vpx_idct32x32_1_add_neon
+
+void vpx_idct32x32_34_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct32x32_34_add_neon(const tran_low_t* input,
+                               uint8_t* dest,
+                               int stride);
+#define vpx_idct32x32_34_add vpx_idct32x32_34_add_neon
+
+void vpx_idct4x4_16_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct4x4_16_add_neon(const tran_low_t* input,
+                             uint8_t* dest,
+                             int stride);
+#define vpx_idct4x4_16_add vpx_idct4x4_16_add_neon
+
+void vpx_idct4x4_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct4x4_1_add_neon(const tran_low_t* input, uint8_t* dest, int stride);
+#define vpx_idct4x4_1_add vpx_idct4x4_1_add_neon
+
+void vpx_idct8x8_12_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct8x8_12_add_neon(const tran_low_t* input,
+                             uint8_t* dest,
+                             int stride);
+#define vpx_idct8x8_12_add vpx_idct8x8_12_add_neon
+
+void vpx_idct8x8_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct8x8_1_add_neon(const tran_low_t* input, uint8_t* dest, int stride);
+#define vpx_idct8x8_1_add vpx_idct8x8_1_add_neon
+
+void vpx_idct8x8_64_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+void vpx_idct8x8_64_add_neon(const tran_low_t* input,
+                             uint8_t* dest,
+                             int stride);
+#define vpx_idct8x8_64_add vpx_idct8x8_64_add_neon
+
+int16_t vpx_int_pro_col_c(const uint8_t* ref, const int width);
+int16_t vpx_int_pro_col_neon(const uint8_t* ref, const int width);
+#define vpx_int_pro_col vpx_int_pro_col_neon
+
+void vpx_int_pro_row_c(int16_t* hbuf,
+                       const uint8_t* ref,
+                       const int ref_stride,
+                       const int height);
+void vpx_int_pro_row_neon(int16_t* hbuf,
+                          const uint8_t* ref,
+                          const int ref_stride,
+                          const int height);
+#define vpx_int_pro_row vpx_int_pro_row_neon
+
+void vpx_iwht4x4_16_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+#define vpx_iwht4x4_16_add vpx_iwht4x4_16_add_c
+
+void vpx_iwht4x4_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
+#define vpx_iwht4x4_1_add vpx_iwht4x4_1_add_c
+
+void vpx_lpf_horizontal_16_c(uint8_t* s,
+                             int pitch,
+                             const uint8_t* blimit,
+                             const uint8_t* limit,
+                             const uint8_t* thresh);
+void vpx_lpf_horizontal_16_neon(uint8_t* s,
+                                int pitch,
+                                const uint8_t* blimit,
+                                const uint8_t* limit,
+                                const uint8_t* thresh);
+#define vpx_lpf_horizontal_16 vpx_lpf_horizontal_16_neon
+
+void vpx_lpf_horizontal_16_dual_c(uint8_t* s,
+                                  int pitch,
+                                  const uint8_t* blimit,
+                                  const uint8_t* limit,
+                                  const uint8_t* thresh);
+void vpx_lpf_horizontal_16_dual_neon(uint8_t* s,
+                                     int pitch,
+                                     const uint8_t* blimit,
+                                     const uint8_t* limit,
+                                     const uint8_t* thresh);
+#define vpx_lpf_horizontal_16_dual vpx_lpf_horizontal_16_dual_neon
+
+void vpx_lpf_horizontal_4_c(uint8_t* s,
+                            int pitch,
+                            const uint8_t* blimit,
+                            const uint8_t* limit,
+                            const uint8_t* thresh);
+void vpx_lpf_horizontal_4_neon(uint8_t* s,
+                               int pitch,
+                               const uint8_t* blimit,
+                               const uint8_t* limit,
+                               const uint8_t* thresh);
+#define vpx_lpf_horizontal_4 vpx_lpf_horizontal_4_neon
+
+void vpx_lpf_horizontal_4_dual_c(uint8_t* s,
+                                 int pitch,
+                                 const uint8_t* blimit0,
+                                 const uint8_t* limit0,
+                                 const uint8_t* thresh0,
+                                 const uint8_t* blimit1,
+                                 const uint8_t* limit1,
+                                 const uint8_t* thresh1);
+void vpx_lpf_horizontal_4_dual_neon(uint8_t* s,
+                                    int pitch,
+                                    const uint8_t* blimit0,
+                                    const uint8_t* limit0,
+                                    const uint8_t* thresh0,
+                                    const uint8_t* blimit1,
+                                    const uint8_t* limit1,
+                                    const uint8_t* thresh1);
+#define vpx_lpf_horizontal_4_dual vpx_lpf_horizontal_4_dual_neon
+
+void vpx_lpf_horizontal_8_c(uint8_t* s,
+                            int pitch,
+                            const uint8_t* blimit,
+                            const uint8_t* limit,
+                            const uint8_t* thresh);
+void vpx_lpf_horizontal_8_neon(uint8_t* s,
+                               int pitch,
+                               const uint8_t* blimit,
+                               const uint8_t* limit,
+                               const uint8_t* thresh);
+#define vpx_lpf_horizontal_8 vpx_lpf_horizontal_8_neon
+
+void vpx_lpf_horizontal_8_dual_c(uint8_t* s,
+                                 int pitch,
+                                 const uint8_t* blimit0,
+                                 const uint8_t* limit0,
+                                 const uint8_t* thresh0,
+                                 const uint8_t* blimit1,
+                                 const uint8_t* limit1,
+                                 const uint8_t* thresh1);
+void vpx_lpf_horizontal_8_dual_neon(uint8_t* s,
+                                    int pitch,
+                                    const uint8_t* blimit0,
+                                    const uint8_t* limit0,
+                                    const uint8_t* thresh0,
+                                    const uint8_t* blimit1,
+                                    const uint8_t* limit1,
+                                    const uint8_t* thresh1);
+#define vpx_lpf_horizontal_8_dual vpx_lpf_horizontal_8_dual_neon
+
+void vpx_lpf_vertical_16_c(uint8_t* s,
+                           int pitch,
+                           const uint8_t* blimit,
+                           const uint8_t* limit,
+                           const uint8_t* thresh);
+void vpx_lpf_vertical_16_neon(uint8_t* s,
+                              int pitch,
+                              const uint8_t* blimit,
+                              const uint8_t* limit,
+                              const uint8_t* thresh);
+#define vpx_lpf_vertical_16 vpx_lpf_vertical_16_neon
+
+void vpx_lpf_vertical_16_dual_c(uint8_t* s,
+                                int pitch,
+                                const uint8_t* blimit,
+                                const uint8_t* limit,
+                                const uint8_t* thresh);
+void vpx_lpf_vertical_16_dual_neon(uint8_t* s,
+                                   int pitch,
+                                   const uint8_t* blimit,
+                                   const uint8_t* limit,
+                                   const uint8_t* thresh);
+#define vpx_lpf_vertical_16_dual vpx_lpf_vertical_16_dual_neon
+
+void vpx_lpf_vertical_4_c(uint8_t* s,
+                          int pitch,
+                          const uint8_t* blimit,
+                          const uint8_t* limit,
+                          const uint8_t* thresh);
+void vpx_lpf_vertical_4_neon(uint8_t* s,
+                             int pitch,
+                             const uint8_t* blimit,
+                             const uint8_t* limit,
+                             const uint8_t* thresh);
+#define vpx_lpf_vertical_4 vpx_lpf_vertical_4_neon
+
+void vpx_lpf_vertical_4_dual_c(uint8_t* s,
+                               int pitch,
+                               const uint8_t* blimit0,
+                               const uint8_t* limit0,
+                               const uint8_t* thresh0,
+                               const uint8_t* blimit1,
+                               const uint8_t* limit1,
+                               const uint8_t* thresh1);
+void vpx_lpf_vertical_4_dual_neon(uint8_t* s,
+                                  int pitch,
+                                  const uint8_t* blimit0,
+                                  const uint8_t* limit0,
+                                  const uint8_t* thresh0,
+                                  const uint8_t* blimit1,
+                                  const uint8_t* limit1,
+                                  const uint8_t* thresh1);
+#define vpx_lpf_vertical_4_dual vpx_lpf_vertical_4_dual_neon
+
+void vpx_lpf_vertical_8_c(uint8_t* s,
+                          int pitch,
+                          const uint8_t* blimit,
+                          const uint8_t* limit,
+                          const uint8_t* thresh);
+void vpx_lpf_vertical_8_neon(uint8_t* s,
+                             int pitch,
+                             const uint8_t* blimit,
+                             const uint8_t* limit,
+                             const uint8_t* thresh);
+#define vpx_lpf_vertical_8 vpx_lpf_vertical_8_neon
+
+void vpx_lpf_vertical_8_dual_c(uint8_t* s,
+                               int pitch,
+                               const uint8_t* blimit0,
+                               const uint8_t* limit0,
+                               const uint8_t* thresh0,
+                               const uint8_t* blimit1,
+                               const uint8_t* limit1,
+                               const uint8_t* thresh1);
+void vpx_lpf_vertical_8_dual_neon(uint8_t* s,
+                                  int pitch,
+                                  const uint8_t* blimit0,
+                                  const uint8_t* limit0,
+                                  const uint8_t* thresh0,
+                                  const uint8_t* blimit1,
+                                  const uint8_t* limit1,
+                                  const uint8_t* thresh1);
+#define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_neon
+
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
+                                 int pitch,
+                                 int rows,
+                                 int cols,
+                                 int flimit);
+void vpx_mbpost_proc_across_ip_neon(unsigned char* src,
+                                    int pitch,
+                                    int rows,
+                                    int cols,
+                                    int flimit);
+#define vpx_mbpost_proc_across_ip vpx_mbpost_proc_across_ip_neon
+
+void vpx_mbpost_proc_down_c(unsigned char* dst,
+                            int pitch,
+                            int rows,
+                            int cols,
+                            int flimit);
+void vpx_mbpost_proc_down_neon(unsigned char* dst,
+                               int pitch,
+                               int rows,
+                               int cols,
+                               int flimit);
+#define vpx_mbpost_proc_down vpx_mbpost_proc_down_neon
+
+void vpx_minmax_8x8_c(const uint8_t* s,
+                      int p,
+                      const uint8_t* d,
+                      int dp,
+                      int* min,
+                      int* max);
+void vpx_minmax_8x8_neon(const uint8_t* s,
+                         int p,
+                         const uint8_t* d,
+                         int dp,
+                         int* min,
+                         int* max);
+#define vpx_minmax_8x8 vpx_minmax_8x8_neon
+
+unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride,
+                            unsigned int* sse);
+unsigned int vpx_mse16x16_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse);
+#define vpx_mse16x16 vpx_mse16x16_neon
+
+unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
+                           int src_stride,
+                           const uint8_t* ref_ptr,
+                           int ref_stride,
+                           unsigned int* sse);
+#define vpx_mse16x8 vpx_mse16x8_c
+
+unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
+                           int src_stride,
+                           const uint8_t* ref_ptr,
+                           int ref_stride,
+                           unsigned int* sse);
+#define vpx_mse8x16 vpx_mse8x16_c
+
+unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride,
+                          unsigned int* sse);
+#define vpx_mse8x8 vpx_mse8x8_c
+
+void vpx_plane_add_noise_c(uint8_t* start,
+                           const int8_t* noise,
+                           int blackclamp,
+                           int whiteclamp,
+                           int width,
+                           int height,
+                           int pitch);
+#define vpx_plane_add_noise vpx_plane_add_noise_c
+
+void vpx_post_proc_down_and_across_mb_row_c(unsigned char* src,
+                                            unsigned char* dst,
+                                            int src_pitch,
+                                            int dst_pitch,
+                                            int cols,
+                                            unsigned char* flimits,
+                                            int size);
+void vpx_post_proc_down_and_across_mb_row_neon(unsigned char* src,
+                                               unsigned char* dst,
+                                               int src_pitch,
+                                               int dst_pitch,
+                                               int cols,
+                                               unsigned char* flimits,
+                                               int size);
+#define vpx_post_proc_down_and_across_mb_row \
+  vpx_post_proc_down_and_across_mb_row_neon
+
+void vpx_quantize_b_c(const tran_low_t* coeff_ptr,
+                      intptr_t n_coeffs,
+                      int skip_block,
+                      const int16_t* zbin_ptr,
+                      const int16_t* round_ptr,
+                      const int16_t* quant_ptr,
+                      const int16_t* quant_shift_ptr,
+                      tran_low_t* qcoeff_ptr,
+                      tran_low_t* dqcoeff_ptr,
+                      const int16_t* dequant_ptr,
+                      uint16_t* eob_ptr,
+                      const int16_t* scan,
+                      const int16_t* iscan);
+void vpx_quantize_b_neon(const tran_low_t* coeff_ptr,
+                         intptr_t n_coeffs,
+                         int skip_block,
+                         const int16_t* zbin_ptr,
+                         const int16_t* round_ptr,
+                         const int16_t* quant_ptr,
+                         const int16_t* quant_shift_ptr,
+                         tran_low_t* qcoeff_ptr,
+                         tran_low_t* dqcoeff_ptr,
+                         const int16_t* dequant_ptr,
+                         uint16_t* eob_ptr,
+                         const int16_t* scan,
+                         const int16_t* iscan);
+#define vpx_quantize_b vpx_quantize_b_neon
+
+void vpx_quantize_b_32x32_c(const tran_low_t* coeff_ptr,
+                            intptr_t n_coeffs,
+                            int skip_block,
+                            const int16_t* zbin_ptr,
+                            const int16_t* round_ptr,
+                            const int16_t* quant_ptr,
+                            const int16_t* quant_shift_ptr,
+                            tran_low_t* qcoeff_ptr,
+                            tran_low_t* dqcoeff_ptr,
+                            const int16_t* dequant_ptr,
+                            uint16_t* eob_ptr,
+                            const int16_t* scan,
+                            const int16_t* iscan);
+void vpx_quantize_b_32x32_neon(const tran_low_t* coeff_ptr,
+                               intptr_t n_coeffs,
+                               int skip_block,
+                               const int16_t* zbin_ptr,
+                               const int16_t* round_ptr,
+                               const int16_t* quant_ptr,
+                               const int16_t* quant_shift_ptr,
+                               tran_low_t* qcoeff_ptr,
+                               tran_low_t* dqcoeff_ptr,
+                               const int16_t* dequant_ptr,
+                               uint16_t* eob_ptr,
+                               const int16_t* scan,
+                               const int16_t* iscan);
+#define vpx_quantize_b_32x32 vpx_quantize_b_32x32_neon
+
+unsigned int vpx_sad16x16_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad16x16_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad16x16 vpx_sad16x16_neon
+
+unsigned int vpx_sad16x16_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad16x16_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad16x16_avg vpx_sad16x16_avg_neon
+
+void vpx_sad16x16x3_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad16x16x3 vpx_sad16x16x3_c
+
+void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad16x16x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad16x16x4d vpx_sad16x16x4d_neon
+
+void vpx_sad16x16x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad16x16x8 vpx_sad16x16x8_c
+
+unsigned int vpx_sad16x32_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad16x32_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad16x32 vpx_sad16x32_neon
+
+unsigned int vpx_sad16x32_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad16x32_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad16x32_avg vpx_sad16x32_avg_neon
+
+void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad16x32x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad16x32x4d vpx_sad16x32x4d_neon
+
+unsigned int vpx_sad16x8_c(const uint8_t* src_ptr,
+                           int src_stride,
+                           const uint8_t* ref_ptr,
+                           int ref_stride);
+unsigned int vpx_sad16x8_neon(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride);
+#define vpx_sad16x8 vpx_sad16x8_neon
+
+unsigned int vpx_sad16x8_avg_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               const uint8_t* second_pred);
+unsigned int vpx_sad16x8_avg_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  const uint8_t* second_pred);
+#define vpx_sad16x8_avg vpx_sad16x8_avg_neon
+
+void vpx_sad16x8x3_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* ref_ptr,
+                     int ref_stride,
+                     uint32_t* sad_array);
+#define vpx_sad16x8x3 vpx_sad16x8x3_c
+
+void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* const ref_array[],
+                      int ref_stride,
+                      uint32_t* sad_array);
+void vpx_sad16x8x4d_neon(const uint8_t* src_ptr,
+                         int src_stride,
+                         const uint8_t* const ref_array[],
+                         int ref_stride,
+                         uint32_t* sad_array);
+#define vpx_sad16x8x4d vpx_sad16x8x4d_neon
+
+void vpx_sad16x8x8_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* ref_ptr,
+                     int ref_stride,
+                     uint32_t* sad_array);
+#define vpx_sad16x8x8 vpx_sad16x8x8_c
+
+unsigned int vpx_sad32x16_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad32x16_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad32x16 vpx_sad32x16_neon
+
+unsigned int vpx_sad32x16_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad32x16_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad32x16_avg vpx_sad32x16_avg_neon
+
+void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad32x16x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad32x16x4d vpx_sad32x16x4d_neon
+
+unsigned int vpx_sad32x32_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad32x32_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad32x32 vpx_sad32x32_neon
+
+unsigned int vpx_sad32x32_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad32x32_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad32x32_avg vpx_sad32x32_avg_neon
+
+void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad32x32x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad32x32x4d vpx_sad32x32x4d_neon
+
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+#define vpx_sad32x32x8 vpx_sad32x32x8_c
+
+unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad32x64_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad32x64 vpx_sad32x64_neon
+
+unsigned int vpx_sad32x64_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad32x64_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad32x64_avg vpx_sad32x64_avg_neon
+
+void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad32x64x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad32x64x4d vpx_sad32x64x4d_neon
+
+unsigned int vpx_sad4x4_c(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride);
+unsigned int vpx_sad4x4_neon(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* ref_ptr,
+                             int ref_stride);
+#define vpx_sad4x4 vpx_sad4x4_neon
+
+unsigned int vpx_sad4x4_avg_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride,
+                              const uint8_t* second_pred);
+unsigned int vpx_sad4x4_avg_neon(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 const uint8_t* second_pred);
+#define vpx_sad4x4_avg vpx_sad4x4_avg_neon
+
+void vpx_sad4x4x3_c(const uint8_t* src_ptr,
+                    int src_stride,
+                    const uint8_t* ref_ptr,
+                    int ref_stride,
+                    uint32_t* sad_array);
+#define vpx_sad4x4x3 vpx_sad4x4x3_c
+
+void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* const ref_array[],
+                     int ref_stride,
+                     uint32_t* sad_array);
+void vpx_sad4x4x4d_neon(const uint8_t* src_ptr,
+                        int src_stride,
+                        const uint8_t* const ref_array[],
+                        int ref_stride,
+                        uint32_t* sad_array);
+#define vpx_sad4x4x4d vpx_sad4x4x4d_neon
+
+void vpx_sad4x4x8_c(const uint8_t* src_ptr,
+                    int src_stride,
+                    const uint8_t* ref_ptr,
+                    int ref_stride,
+                    uint32_t* sad_array);
+#define vpx_sad4x4x8 vpx_sad4x4x8_c
+
+unsigned int vpx_sad4x8_c(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride);
+unsigned int vpx_sad4x8_neon(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* ref_ptr,
+                             int ref_stride);
+#define vpx_sad4x8 vpx_sad4x8_neon
+
+unsigned int vpx_sad4x8_avg_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride,
+                              const uint8_t* second_pred);
+unsigned int vpx_sad4x8_avg_neon(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 const uint8_t* second_pred);
+#define vpx_sad4x8_avg vpx_sad4x8_avg_neon
+
+void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* const ref_array[],
+                     int ref_stride,
+                     uint32_t* sad_array);
+void vpx_sad4x8x4d_neon(const uint8_t* src_ptr,
+                        int src_stride,
+                        const uint8_t* const ref_array[],
+                        int ref_stride,
+                        uint32_t* sad_array);
+#define vpx_sad4x8x4d vpx_sad4x8x4d_neon
+
+unsigned int vpx_sad64x32_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad64x32_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad64x32 vpx_sad64x32_neon
+
+unsigned int vpx_sad64x32_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad64x32_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad64x32_avg vpx_sad64x32_avg_neon
+
+void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad64x32x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad64x32x4d vpx_sad64x32x4d_neon
+
+unsigned int vpx_sad64x64_c(const uint8_t* src_ptr,
+                            int src_stride,
+                            const uint8_t* ref_ptr,
+                            int ref_stride);
+unsigned int vpx_sad64x64_neon(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride);
+#define vpx_sad64x64 vpx_sad64x64_neon
+
+unsigned int vpx_sad64x64_avg_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                const uint8_t* second_pred);
+unsigned int vpx_sad64x64_avg_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   const uint8_t* second_pred);
+#define vpx_sad64x64_avg vpx_sad64x64_avg_neon
+
+void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
+                       int src_stride,
+                       const uint8_t* const ref_array[],
+                       int ref_stride,
+                       uint32_t* sad_array);
+void vpx_sad64x64x4d_neon(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* const ref_array[],
+                          int ref_stride,
+                          uint32_t* sad_array);
+#define vpx_sad64x64x4d vpx_sad64x64x4d_neon
+
+unsigned int vpx_sad8x16_c(const uint8_t* src_ptr,
+                           int src_stride,
+                           const uint8_t* ref_ptr,
+                           int ref_stride);
+unsigned int vpx_sad8x16_neon(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride);
+#define vpx_sad8x16 vpx_sad8x16_neon
+
+unsigned int vpx_sad8x16_avg_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               const uint8_t* second_pred);
+unsigned int vpx_sad8x16_avg_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  const uint8_t* second_pred);
+#define vpx_sad8x16_avg vpx_sad8x16_avg_neon
+
+void vpx_sad8x16x3_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* ref_ptr,
+                     int ref_stride,
+                     uint32_t* sad_array);
+#define vpx_sad8x16x3 vpx_sad8x16x3_c
+
+void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* const ref_array[],
+                      int ref_stride,
+                      uint32_t* sad_array);
+void vpx_sad8x16x4d_neon(const uint8_t* src_ptr,
+                         int src_stride,
+                         const uint8_t* const ref_array[],
+                         int ref_stride,
+                         uint32_t* sad_array);
+#define vpx_sad8x16x4d vpx_sad8x16x4d_neon
+
+void vpx_sad8x16x8_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* ref_ptr,
+                     int ref_stride,
+                     uint32_t* sad_array);
+#define vpx_sad8x16x8 vpx_sad8x16x8_c
+
+unsigned int vpx_sad8x4_c(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride);
+unsigned int vpx_sad8x4_neon(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* ref_ptr,
+                             int ref_stride);
+#define vpx_sad8x4 vpx_sad8x4_neon
+
+unsigned int vpx_sad8x4_avg_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride,
+                              const uint8_t* second_pred);
+unsigned int vpx_sad8x4_avg_neon(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 const uint8_t* second_pred);
+#define vpx_sad8x4_avg vpx_sad8x4_avg_neon
+
+void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* const ref_array[],
+                     int ref_stride,
+                     uint32_t* sad_array);
+void vpx_sad8x4x4d_neon(const uint8_t* src_ptr,
+                        int src_stride,
+                        const uint8_t* const ref_array[],
+                        int ref_stride,
+                        uint32_t* sad_array);
+#define vpx_sad8x4x4d vpx_sad8x4x4d_neon
+
+unsigned int vpx_sad8x8_c(const uint8_t* src_ptr,
+                          int src_stride,
+                          const uint8_t* ref_ptr,
+                          int ref_stride);
+unsigned int vpx_sad8x8_neon(const uint8_t* src_ptr,
+                             int src_stride,
+                             const uint8_t* ref_ptr,
+                             int ref_stride);
+#define vpx_sad8x8 vpx_sad8x8_neon
+
+unsigned int vpx_sad8x8_avg_c(const uint8_t* src_ptr,
+                              int src_stride,
+                              const uint8_t* ref_ptr,
+                              int ref_stride,
+                              const uint8_t* second_pred);
+unsigned int vpx_sad8x8_avg_neon(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 const uint8_t* second_pred);
+#define vpx_sad8x8_avg vpx_sad8x8_avg_neon
+
+void vpx_sad8x8x3_c(const uint8_t* src_ptr,
+                    int src_stride,
+                    const uint8_t* ref_ptr,
+                    int ref_stride,
+                    uint32_t* sad_array);
+#define vpx_sad8x8x3 vpx_sad8x8x3_c
+
+void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
+                     int src_stride,
+                     const uint8_t* const ref_array[],
+                     int ref_stride,
+                     uint32_t* sad_array);
+void vpx_sad8x8x4d_neon(const uint8_t* src_ptr,
+                        int src_stride,
+                        const uint8_t* const ref_array[],
+                        int ref_stride,
+                        uint32_t* sad_array);
+#define vpx_sad8x8x4d vpx_sad8x8x4d_neon
+
+void vpx_sad8x8x8_c(const uint8_t* src_ptr,
+                    int src_stride,
+                    const uint8_t* ref_ptr,
+                    int ref_stride,
+                    uint32_t* sad_array);
+#define vpx_sad8x8x8 vpx_sad8x8x8_c
+
+int vpx_satd_c(const tran_low_t* coeff, int length);
+int vpx_satd_neon(const tran_low_t* coeff, int length);
+#define vpx_satd vpx_satd_neon
+
+void vpx_scaled_2d_c(const uint8_t* src,
+                     ptrdiff_t src_stride,
+                     uint8_t* dst,
+                     ptrdiff_t dst_stride,
+                     const InterpKernel* filter,
+                     int x0_q4,
+                     int x_step_q4,
+                     int y0_q4,
+                     int y_step_q4,
+                     int w,
+                     int h);
+void vpx_scaled_2d_neon(const uint8_t* src,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        ptrdiff_t dst_stride,
+                        const InterpKernel* filter,
+                        int x0_q4,
+                        int x_step_q4,
+                        int y0_q4,
+                        int y_step_q4,
+                        int w,
+                        int h);
+#define vpx_scaled_2d vpx_scaled_2d_neon
+
+void vpx_scaled_avg_2d_c(const uint8_t* src,
+                         ptrdiff_t src_stride,
+                         uint8_t* dst,
+                         ptrdiff_t dst_stride,
+                         const InterpKernel* filter,
+                         int x0_q4,
+                         int x_step_q4,
+                         int y0_q4,
+                         int y_step_q4,
+                         int w,
+                         int h);
+#define vpx_scaled_avg_2d vpx_scaled_avg_2d_c
+
+void vpx_scaled_avg_horiz_c(const uint8_t* src,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst,
+                            ptrdiff_t dst_stride,
+                            const InterpKernel* filter,
+                            int x0_q4,
+                            int x_step_q4,
+                            int y0_q4,
+                            int y_step_q4,
+                            int w,
+                            int h);
+#define vpx_scaled_avg_horiz vpx_scaled_avg_horiz_c
+
+void vpx_scaled_avg_vert_c(const uint8_t* src,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst,
+                           ptrdiff_t dst_stride,
+                           const InterpKernel* filter,
+                           int x0_q4,
+                           int x_step_q4,
+                           int y0_q4,
+                           int y_step_q4,
+                           int w,
+                           int h);
+#define vpx_scaled_avg_vert vpx_scaled_avg_vert_c
+
+void vpx_scaled_horiz_c(const uint8_t* src,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        ptrdiff_t dst_stride,
+                        const InterpKernel* filter,
+                        int x0_q4,
+                        int x_step_q4,
+                        int y0_q4,
+                        int y_step_q4,
+                        int w,
+                        int h);
+#define vpx_scaled_horiz vpx_scaled_horiz_c
+
+void vpx_scaled_vert_c(const uint8_t* src,
+                       ptrdiff_t src_stride,
+                       uint8_t* dst,
+                       ptrdiff_t dst_stride,
+                       const InterpKernel* filter,
+                       int x0_q4,
+                       int x_step_q4,
+                       int y0_q4,
+                       int y_step_q4,
+                       int w,
+                       int h);
+#define vpx_scaled_vert vpx_scaled_vert_c
+
+uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance16x16_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance16x16 vpx_sub_pixel_avg_variance16x16_neon
+
+uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance16x32_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance16x32 vpx_sub_pixel_avg_variance16x32_neon
+
+uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse,
+                                          const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance16x8_neon(const uint8_t* src_ptr,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
+                                             const uint8_t* ref_ptr,
+                                             int ref_stride,
+                                             uint32_t* sse,
+                                             const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance16x8 vpx_sub_pixel_avg_variance16x8_neon
+
+uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance32x16_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance32x16 vpx_sub_pixel_avg_variance32x16_neon
+
+uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance32x32_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance32x32 vpx_sub_pixel_avg_variance32x32_neon
+
+uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance32x64_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance32x64 vpx_sub_pixel_avg_variance32x64_neon
+
+uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse,
+                                         const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance4x4_neon(const uint8_t* src_ptr,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
+                                            const uint8_t* ref_ptr,
+                                            int ref_stride,
+                                            uint32_t* sse,
+                                            const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance4x4 vpx_sub_pixel_avg_variance4x4_neon
+
+uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse,
+                                         const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance4x8_neon(const uint8_t* src_ptr,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
+                                            const uint8_t* ref_ptr,
+                                            int ref_stride,
+                                            uint32_t* sse,
+                                            const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance4x8 vpx_sub_pixel_avg_variance4x8_neon
+
+uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance64x32_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance64x32 vpx_sub_pixel_avg_variance64x32_neon
+
+uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
+                                           const uint8_t* ref_ptr,
+                                           int ref_stride,
+                                           uint32_t* sse,
+                                           const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance64x64_neon(const uint8_t* src_ptr,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
+                                              const uint8_t* ref_ptr,
+                                              int ref_stride,
+                                              uint32_t* sse,
+                                              const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance64x64 vpx_sub_pixel_avg_variance64x64_neon
+
+uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse,
+                                          const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance8x16_neon(const uint8_t* src_ptr,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
+                                             const uint8_t* ref_ptr,
+                                             int ref_stride,
+                                             uint32_t* sse,
+                                             const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance8x16 vpx_sub_pixel_avg_variance8x16_neon
+
+uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse,
+                                         const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance8x4_neon(const uint8_t* src_ptr,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
+                                            const uint8_t* ref_ptr,
+                                            int ref_stride,
+                                            uint32_t* sse,
+                                            const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance8x4 vpx_sub_pixel_avg_variance8x4_neon
+
+uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse,
+                                         const uint8_t* second_pred);
+uint32_t vpx_sub_pixel_avg_variance8x8_neon(const uint8_t* src_ptr,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
+                                            const uint8_t* ref_ptr,
+                                            int ref_stride,
+                                            uint32_t* sse,
+                                            const uint8_t* second_pred);
+#define vpx_sub_pixel_avg_variance8x8 vpx_sub_pixel_avg_variance8x8_neon
+
+uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance16x16_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance16x16 vpx_sub_pixel_variance16x16_neon
+
+uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance16x32_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance16x32 vpx_sub_pixel_variance16x32_neon
+
+uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      uint32_t* sse);
+uint32_t vpx_sub_pixel_variance16x8_neon(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse);
+#define vpx_sub_pixel_variance16x8 vpx_sub_pixel_variance16x8_neon
+
+uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance32x16_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance32x16 vpx_sub_pixel_variance32x16_neon
+
+uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance32x32_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance32x32 vpx_sub_pixel_variance32x32_neon
+
+uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance32x64_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance32x64 vpx_sub_pixel_variance32x64_neon
+
+uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     uint32_t* sse);
+uint32_t vpx_sub_pixel_variance4x4_neon(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        uint32_t* sse);
+#define vpx_sub_pixel_variance4x4 vpx_sub_pixel_variance4x4_neon
+
+uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     uint32_t* sse);
+uint32_t vpx_sub_pixel_variance4x8_neon(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        uint32_t* sse);
+#define vpx_sub_pixel_variance4x8 vpx_sub_pixel_variance4x8_neon
+
+uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance64x32_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance64x32 vpx_sub_pixel_variance64x32_neon
+
+uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
+                                       const uint8_t* ref_ptr,
+                                       int ref_stride,
+                                       uint32_t* sse);
+uint32_t vpx_sub_pixel_variance64x64_neon(const uint8_t* src_ptr,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
+                                          const uint8_t* ref_ptr,
+                                          int ref_stride,
+                                          uint32_t* sse);
+#define vpx_sub_pixel_variance64x64 vpx_sub_pixel_variance64x64_neon
+
+uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
+                                      const uint8_t* ref_ptr,
+                                      int ref_stride,
+                                      uint32_t* sse);
+uint32_t vpx_sub_pixel_variance8x16_neon(const uint8_t* src_ptr,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
+                                         const uint8_t* ref_ptr,
+                                         int ref_stride,
+                                         uint32_t* sse);
+#define vpx_sub_pixel_variance8x16 vpx_sub_pixel_variance8x16_neon
+
+uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     uint32_t* sse);
+uint32_t vpx_sub_pixel_variance8x4_neon(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        uint32_t* sse);
+#define vpx_sub_pixel_variance8x4 vpx_sub_pixel_variance8x4_neon
+
+uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
+                                     const uint8_t* ref_ptr,
+                                     int ref_stride,
+                                     uint32_t* sse);
+uint32_t vpx_sub_pixel_variance8x8_neon(const uint8_t* src_ptr,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
+                                        const uint8_t* ref_ptr,
+                                        int ref_stride,
+                                        uint32_t* sse);
+#define vpx_sub_pixel_variance8x8 vpx_sub_pixel_variance8x8_neon
+
+void vpx_subtract_block_c(int rows,
+                          int cols,
+                          int16_t* diff_ptr,
+                          ptrdiff_t diff_stride,
+                          const uint8_t* src_ptr,
+                          ptrdiff_t src_stride,
+                          const uint8_t* pred_ptr,
+                          ptrdiff_t pred_stride);
+void vpx_subtract_block_neon(int rows,
+                             int cols,
+                             int16_t* diff_ptr,
+                             ptrdiff_t diff_stride,
+                             const uint8_t* src_ptr,
+                             ptrdiff_t src_stride,
+                             const uint8_t* pred_ptr,
+                             ptrdiff_t pred_stride);
+#define vpx_subtract_block vpx_subtract_block_neon
+
+uint64_t vpx_sum_squares_2d_i16_c(const int16_t* src, int stride, int size);
+uint64_t vpx_sum_squares_2d_i16_neon(const int16_t* src, int stride, int size);
+#define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_neon
+
+void vpx_tm_predictor_16x16_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_tm_predictor_16x16_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_neon
+
+void vpx_tm_predictor_32x32_c(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+void vpx_tm_predictor_32x32_neon(uint8_t* dst,
+                                 ptrdiff_t stride,
+                                 const uint8_t* above,
+                                 const uint8_t* left);
+#define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_neon
+
+void vpx_tm_predictor_4x4_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+void vpx_tm_predictor_4x4_neon(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_neon
+
+void vpx_tm_predictor_8x8_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+void vpx_tm_predictor_8x8_neon(uint8_t* dst,
+                               ptrdiff_t stride,
+                               const uint8_t* above,
+                               const uint8_t* left);
+#define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_neon
+
+void vpx_v_predictor_16x16_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_v_predictor_16x16_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_v_predictor_16x16 vpx_v_predictor_16x16_neon
+
+void vpx_v_predictor_32x32_c(uint8_t* dst,
+                             ptrdiff_t stride,
+                             const uint8_t* above,
+                             const uint8_t* left);
+void vpx_v_predictor_32x32_neon(uint8_t* dst,
+                                ptrdiff_t stride,
+                                const uint8_t* above,
+                                const uint8_t* left);
+#define vpx_v_predictor_32x32 vpx_v_predictor_32x32_neon
+
+void vpx_v_predictor_4x4_c(uint8_t* dst,
+                           ptrdiff_t stride,
+                           const uint8_t* above,
+                           const uint8_t* left);
+void vpx_v_predictor_4x4_neon(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_v_predictor_4x4 vpx_v_predictor_4x4_neon
+
+void vpx_v_predictor_8x8_c(uint8_t* dst,
+                           ptrdiff_t stride,
+                           const uint8_t* above,
+                           const uint8_t* left);
+void vpx_v_predictor_8x8_neon(uint8_t* dst,
+                              ptrdiff_t stride,
+                              const uint8_t* above,
+                              const uint8_t* left);
+#define vpx_v_predictor_8x8 vpx_v_predictor_8x8_neon
+
+unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance16x16_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance16x16 vpx_variance16x16_neon
+
+unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance16x32_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance16x32 vpx_variance16x32_neon
+
+unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                unsigned int* sse);
+unsigned int vpx_variance16x8_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   unsigned int* sse);
+#define vpx_variance16x8 vpx_variance16x8_neon
+
+unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance32x16_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance32x16 vpx_variance32x16_neon
+
+unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance32x32_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance32x32 vpx_variance32x32_neon
+
+unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance32x64_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance32x64 vpx_variance32x64_neon
+
+unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse);
+unsigned int vpx_variance4x4_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse);
+#define vpx_variance4x4 vpx_variance4x4_neon
+
+unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse);
+unsigned int vpx_variance4x8_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse);
+#define vpx_variance4x8 vpx_variance4x8_neon
+
+unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance64x32_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance64x32 vpx_variance64x32_neon
+
+unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse);
+unsigned int vpx_variance64x64_neon(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse);
+#define vpx_variance64x64 vpx_variance64x64_neon
+
+unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
+                                int src_stride,
+                                const uint8_t* ref_ptr,
+                                int ref_stride,
+                                unsigned int* sse);
+unsigned int vpx_variance8x16_neon(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   unsigned int* sse);
+#define vpx_variance8x16 vpx_variance8x16_neon
+
+unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse);
+unsigned int vpx_variance8x4_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse);
+#define vpx_variance8x4 vpx_variance8x4_neon
+
+unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
+                               int src_stride,
+                               const uint8_t* ref_ptr,
+                               int ref_stride,
+                               unsigned int* sse);
+unsigned int vpx_variance8x8_neon(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse);
+#define vpx_variance8x8 vpx_variance8x8_neon
+
+void vpx_ve_predictor_4x4_c(uint8_t* dst,
+                            ptrdiff_t stride,
+                            const uint8_t* above,
+                            const uint8_t* left);
+#define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
+
+int vpx_vector_var_c(const int16_t* ref, const int16_t* src, const int bwl);
+int vpx_vector_var_neon(const int16_t* ref, const int16_t* src, const int bwl);
+#define vpx_vector_var vpx_vector_var_neon
+
+void vpx_dsp_rtcd(void);
+
+#include "vpx_config.h"
+
+#ifdef RTCD_C
+#include "vpx_ports/arm.h"
+static void setup_rtcd_internal(void) {
+  int flags = arm_cpu_caps();
+
+  (void)flags;
+}
+#endif
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vpx_scale_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vpx_scale_rtcd.h
new file mode 100644
index 0000000..555013e
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/arm64/vpx_scale_rtcd.h
@@ -0,0 +1,101 @@
+// This file is generated. Do not edit.
+#ifndef VPX_SCALE_RTCD_H_
+#define VPX_SCALE_RTCD_H_
+
+#ifdef RTCD_C
+#define RTCD_EXTERN
+#else
+#define RTCD_EXTERN extern
+#endif
+
+struct yv12_buffer_config;
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+void vp8_horizontal_line_2_1_scale_c(const unsigned char* source,
+                                     unsigned int source_width,
+                                     unsigned char* dest,
+                                     unsigned int dest_width);
+#define vp8_horizontal_line_2_1_scale vp8_horizontal_line_2_1_scale_c
+
+void vp8_horizontal_line_5_3_scale_c(const unsigned char* source,
+                                     unsigned int source_width,
+                                     unsigned char* dest,
+                                     unsigned int dest_width);
+#define vp8_horizontal_line_5_3_scale vp8_horizontal_line_5_3_scale_c
+
+void vp8_horizontal_line_5_4_scale_c(const unsigned char* source,
+                                     unsigned int source_width,
+                                     unsigned char* dest,
+                                     unsigned int dest_width);
+#define vp8_horizontal_line_5_4_scale vp8_horizontal_line_5_4_scale_c
+
+void vp8_vertical_band_2_1_scale_c(unsigned char* source,
+                                   unsigned int src_pitch,
+                                   unsigned char* dest,
+                                   unsigned int dest_pitch,
+                                   unsigned int dest_width);
+#define vp8_vertical_band_2_1_scale vp8_vertical_band_2_1_scale_c
+
+void vp8_vertical_band_2_1_scale_i_c(unsigned char* source,
+                                     unsigned int src_pitch,
+                                     unsigned char* dest,
+                                     unsigned int dest_pitch,
+                                     unsigned int dest_width);
+#define vp8_vertical_band_2_1_scale_i vp8_vertical_band_2_1_scale_i_c
+
+void vp8_vertical_band_5_3_scale_c(unsigned char* source,
+                                   unsigned int src_pitch,
+                                   unsigned char* dest,
+                                   unsigned int dest_pitch,
+                                   unsigned int dest_width);
+#define vp8_vertical_band_5_3_scale vp8_vertical_band_5_3_scale_c
+
+void vp8_vertical_band_5_4_scale_c(unsigned char* source,
+                                   unsigned int src_pitch,
+                                   unsigned char* dest,
+                                   unsigned int dest_pitch,
+                                   unsigned int dest_width);
+#define vp8_vertical_band_5_4_scale vp8_vertical_band_5_4_scale_c
+
+void vp8_yv12_copy_frame_c(const struct yv12_buffer_config* src_ybc,
+                           struct yv12_buffer_config* dst_ybc);
+#define vp8_yv12_copy_frame vp8_yv12_copy_frame_c
+
+void vp8_yv12_extend_frame_borders_c(struct yv12_buffer_config* ybf);
+#define vp8_yv12_extend_frame_borders vp8_yv12_extend_frame_borders_c
+
+void vpx_extend_frame_borders_c(struct yv12_buffer_config* ybf);
+#define vpx_extend_frame_borders vpx_extend_frame_borders_c
+
+void vpx_extend_frame_inner_borders_c(struct yv12_buffer_config* ybf);
+#define vpx_extend_frame_inner_borders vpx_extend_frame_inner_borders_c
+
+void vpx_yv12_copy_frame_c(const struct yv12_buffer_config* src_ybc,
+                           struct yv12_buffer_config* dst_ybc);
+#define vpx_yv12_copy_frame vpx_yv12_copy_frame_c
+
+void vpx_yv12_copy_y_c(const struct yv12_buffer_config* src_ybc,
+                       struct yv12_buffer_config* dst_ybc);
+#define vpx_yv12_copy_y vpx_yv12_copy_y_c
+
+void vpx_scale_rtcd(void);
+
+#include "vpx_config.h"
+
+#ifdef RTCD_C
+#include "vpx_ports/arm.h"
+static void setup_rtcd_internal(void) {
+  int flags = arm_cpu_caps();
+
+  (void)flags;
+}
+#endif
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vp8_rtcd.h
index 9472db1..c46bfe5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vp8_rtcd.h
@@ -27,100 +27,90 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-void vp8_bilinear_predict16x16_sse2(unsigned char* src,
-                                    int src_pitch,
-                                    int xofst,
-                                    int yofst,
-                                    unsigned char* dst,
+void vp8_bilinear_predict16x16_sse2(unsigned char* src_ptr,
+                                    int src_pixels_per_line,
+                                    int xoffset,
+                                    int yoffset,
+                                    unsigned char* dst_ptr,
                                     int dst_pitch);
-void vp8_bilinear_predict16x16_ssse3(unsigned char* src,
-                                     int src_pitch,
-                                     int xofst,
-                                     int yofst,
-                                     unsigned char* dst,
+void vp8_bilinear_predict16x16_ssse3(unsigned char* src_ptr,
+                                     int src_pixels_per_line,
+                                     int xoffset,
+                                     int yoffset,
+                                     unsigned char* dst_ptr,
                                      int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict16x16)(unsigned char* src,
-                                              int src_pitch,
-                                              int xofst,
-                                              int yofst,
-                                              unsigned char* dst,
+RTCD_EXTERN void (*vp8_bilinear_predict16x16)(unsigned char* src_ptr,
+                                              int src_pixels_per_line,
+                                              int xoffset,
+                                              int yoffset,
+                                              unsigned char* dst_ptr,
                                               int dst_pitch);
 
-void vp8_bilinear_predict4x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict4x4_mmx(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
-                                 int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict4x4)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
-                                            int dst_pitch);
-
-void vp8_bilinear_predict8x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
-                               int dst_pitch);
-void vp8_bilinear_predict8x4_mmx(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
-                                 int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict8x4)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
-                                            int dst_pitch);
-
-void vp8_bilinear_predict8x8_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
-                               int dst_pitch);
-void vp8_bilinear_predict8x8_sse2(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict4x4_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
-void vp8_bilinear_predict8x8_ssse3(unsigned char* src,
-                                   int src_pitch,
-                                   int xofst,
-                                   int yofst,
-                                   unsigned char* dst,
+#define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_sse2
+
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x4_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_sse2
+
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x8_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+void vp8_bilinear_predict8x8_ssse3(unsigned char* src_ptr,
+                                   int src_pixels_per_line,
+                                   int xoffset,
+                                   int yoffset,
+                                   unsigned char* dst_ptr,
                                    int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict8x8)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
+RTCD_EXTERN void (*vp8_bilinear_predict8x8)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
                                             int dst_pitch);
 
 void vp8_blend_b_c(unsigned char* y,
                    unsigned char* u,
                    unsigned char* v,
-                   int y1,
-                   int u1,
-                   int v1,
+                   int y_1,
+                   int u_1,
+                   int v_1,
                    int alpha,
                    int stride);
 #define vp8_blend_b vp8_blend_b_c
@@ -128,9 +118,9 @@
 void vp8_blend_mb_inner_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
@@ -138,92 +128,79 @@
 void vp8_blend_mb_outer_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
 
 int vp8_block_error_c(short* coeff, short* dqcoeff);
 int vp8_block_error_sse2(short* coeff, short* dqcoeff);
-RTCD_EXTERN int (*vp8_block_error)(short* coeff, short* dqcoeff);
+#define vp8_block_error vp8_block_error_sse2
 
 void vp8_copy32xn_c(const unsigned char* src_ptr,
-                    int source_stride,
+                    int src_stride,
                     unsigned char* dst_ptr,
                     int dst_stride,
-                    int n);
+                    int height);
 void vp8_copy32xn_sse2(const unsigned char* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        unsigned char* dst_ptr,
                        int dst_stride,
-                       int n);
+                       int height);
 void vp8_copy32xn_sse3(const unsigned char* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        unsigned char* dst_ptr,
                        int dst_stride,
-                       int n);
+                       int height);
 RTCD_EXTERN void (*vp8_copy32xn)(const unsigned char* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  unsigned char* dst_ptr,
                                  int dst_stride,
-                                 int n);
+                                 int height);
 
 void vp8_copy_mem16x16_c(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 void vp8_copy_mem16x16_sse2(unsigned char* src,
-                            int src_pitch,
-                            unsigned char* dst,
-                            int dst_pitch);
-RTCD_EXTERN void (*vp8_copy_mem16x16)(unsigned char* src,
-                                      int src_pitch,
-                                      unsigned char* dst,
-                                      int dst_pitch);
-
-void vp8_copy_mem8x4_c(unsigned char* src,
-                       int src_pitch,
-                       unsigned char* dst,
-                       int dst_pitch);
-void vp8_copy_mem8x4_mmx(unsigned char* src,
-                         int src_pitch,
-                         unsigned char* dst,
-                         int dst_pitch);
-RTCD_EXTERN void (*vp8_copy_mem8x4)(unsigned char* src,
-                                    int src_pitch,
-                                    unsigned char* dst,
-                                    int dst_pitch);
-
-void vp8_copy_mem8x8_c(unsigned char* src,
-                       int src_pitch,
-                       unsigned char* dst,
-                       int dst_pitch);
-void vp8_copy_mem8x8_mmx(unsigned char* src,
-                         int src_pitch,
-                         unsigned char* dst,
-                         int dst_pitch);
-RTCD_EXTERN void (*vp8_copy_mem8x8)(unsigned char* src,
-                                    int src_pitch,
-                                    unsigned char* dst,
-                                    int dst_pitch);
-
-void vp8_dc_only_idct_add_c(short input,
-                            unsigned char* pred,
-                            int pred_stride,
+                            int src_stride,
                             unsigned char* dst,
                             int dst_stride);
-void vp8_dc_only_idct_add_mmx(short input,
-                              unsigned char* pred,
+#define vp8_copy_mem16x16 vp8_copy_mem16x16_sse2
+
+void vp8_copy_mem8x4_c(unsigned char* src,
+                       int src_stride,
+                       unsigned char* dst,
+                       int dst_stride);
+void vp8_copy_mem8x4_mmx(unsigned char* src,
+                         int src_stride,
+                         unsigned char* dst,
+                         int dst_stride);
+#define vp8_copy_mem8x4 vp8_copy_mem8x4_mmx
+
+void vp8_copy_mem8x8_c(unsigned char* src,
+                       int src_stride,
+                       unsigned char* dst,
+                       int dst_stride);
+void vp8_copy_mem8x8_mmx(unsigned char* src,
+                         int src_stride,
+                         unsigned char* dst,
+                         int dst_stride);
+#define vp8_copy_mem8x8 vp8_copy_mem8x8_mmx
+
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
+                            int dst_stride);
+void vp8_dc_only_idct_add_mmx(short input_dc,
+                              unsigned char* pred_ptr,
                               int pred_stride,
-                              unsigned char* dst,
+                              unsigned char* dst_ptr,
                               int dst_stride);
-RTCD_EXTERN void (*vp8_dc_only_idct_add)(short input,
-                                         unsigned char* pred,
-                                         int pred_stride,
-                                         unsigned char* dst,
-                                         int dst_stride);
+#define vp8_dc_only_idct_add vp8_dc_only_idct_add_mmx
 
 int vp8_denoiser_filter_c(unsigned char* mc_running_avg_y,
                           int mc_avg_y_stride,
@@ -241,14 +218,7 @@
                              int sig_stride,
                              unsigned int motion_magnitude,
                              int increase_denoising);
-RTCD_EXTERN int (*vp8_denoiser_filter)(unsigned char* mc_running_avg_y,
-                                       int mc_avg_y_stride,
-                                       unsigned char* running_avg_y,
-                                       int avg_y_stride,
-                                       unsigned char* sig,
-                                       int sig_stride,
-                                       unsigned int motion_magnitude,
-                                       int increase_denoising);
+#define vp8_denoiser_filter vp8_denoiser_filter_sse2
 
 int vp8_denoiser_filter_uv_c(unsigned char* mc_running_avg,
                              int mc_avg_stride,
@@ -266,27 +236,17 @@
                                 int sig_stride,
                                 unsigned int motion_magnitude,
                                 int increase_denoising);
-RTCD_EXTERN int (*vp8_denoiser_filter_uv)(unsigned char* mc_running_avg,
-                                          int mc_avg_stride,
-                                          unsigned char* running_avg,
-                                          int avg_stride,
-                                          unsigned char* sig,
-                                          int sig_stride,
-                                          unsigned int motion_magnitude,
-                                          int increase_denoising);
+#define vp8_denoiser_filter_uv vp8_denoiser_filter_uv_sse2
 
 void vp8_dequant_idct_add_c(short* input,
                             short* dq,
-                            unsigned char* output,
+                            unsigned char* dest,
                             int stride);
 void vp8_dequant_idct_add_mmx(short* input,
                               short* dq,
-                              unsigned char* output,
+                              unsigned char* dest,
                               int stride);
-RTCD_EXTERN void (*vp8_dequant_idct_add)(short* input,
-                                         short* dq,
-                                         unsigned char* output,
-                                         int stride);
+#define vp8_dequant_idct_add vp8_dequant_idct_add_mmx
 
 void vp8_dequant_idct_add_uv_block_c(short* q,
                                      short* dq,
@@ -300,12 +260,7 @@
                                         unsigned char* dst_v,
                                         int stride,
                                         char* eobs);
-RTCD_EXTERN void (*vp8_dequant_idct_add_uv_block)(short* q,
-                                                  short* dq,
-                                                  unsigned char* dst_u,
-                                                  unsigned char* dst_v,
-                                                  int stride,
-                                                  char* eobs);
+#define vp8_dequant_idct_add_uv_block vp8_dequant_idct_add_uv_block_sse2
 
 void vp8_dequant_idct_add_y_block_c(short* q,
                                     short* dq,
@@ -317,15 +272,11 @@
                                        unsigned char* dst,
                                        int stride,
                                        char* eobs);
-RTCD_EXTERN void (*vp8_dequant_idct_add_y_block)(short* q,
-                                                 short* dq,
-                                                 unsigned char* dst,
-                                                 int stride,
-                                                 char* eobs);
+#define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_sse2
 
-void vp8_dequantize_b_c(struct blockd*, short* dqc);
-void vp8_dequantize_b_mmx(struct blockd*, short* dqc);
-RTCD_EXTERN void (*vp8_dequantize_b)(struct blockd*, short* dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
+void vp8_dequantize_b_mmx(struct blockd*, short* DQC);
+#define vp8_dequantize_b vp8_dequantize_b_mmx
 
 int vp8_diamond_search_sad_c(struct macroblock* x,
                              struct block* b,
@@ -349,17 +300,7 @@
                              struct variance_vtable* fn_ptr,
                              int* mvcost[2],
                              union int_mv* center_mv);
-RTCD_EXTERN int (*vp8_diamond_search_sad)(struct macroblock* x,
-                                          struct block* b,
-                                          struct blockd* d,
-                                          union int_mv* ref_mv,
-                                          union int_mv* best_mv,
-                                          int search_param,
-                                          int sad_per_bit,
-                                          int* num00,
-                                          struct variance_vtable* fn_ptr,
-                                          int* mvcost[2],
-                                          union int_mv* center_mv);
+#define vp8_diamond_search_sad vp8_diamond_search_sadx4
 
 void vp8_fast_quantize_b_c(struct block*, struct blockd*);
 void vp8_fast_quantize_b_sse2(struct block*, struct blockd*);
@@ -376,11 +317,7 @@
                                     unsigned char* dst,
                                     int dst_stride,
                                     int src_weight);
-RTCD_EXTERN void (*vp8_filter_by_weight16x16)(unsigned char* src,
-                                              int src_stride,
-                                              unsigned char* dst,
-                                              int dst_stride,
-                                              int src_weight);
+#define vp8_filter_by_weight16x16 vp8_filter_by_weight16x16_sse2
 
 void vp8_filter_by_weight4x4_c(unsigned char* src,
                                int src_stride,
@@ -399,11 +336,7 @@
                                   unsigned char* dst,
                                   int dst_stride,
                                   int src_weight);
-RTCD_EXTERN void (*vp8_filter_by_weight8x8)(unsigned char* src,
-                                            int src_stride,
-                                            unsigned char* dst,
-                                            int dst_stride,
-                                            int src_weight);
+#define vp8_filter_by_weight8x8 vp8_filter_by_weight8x8_sse2
 
 int vp8_full_search_sad_c(struct macroblock* x,
                           struct block* b,
@@ -442,136 +375,108 @@
                                        int* mvcost[2],
                                        union int_mv* center_mv);
 
-void vp8_loop_filter_bh_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bh_sse2(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bh_sse2(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
-RTCD_EXTERN void (*vp8_loop_filter_bh)(unsigned char* y,
-                                       unsigned char* u,
-                                       unsigned char* v,
-                                       int ystride,
-                                       int uv_stride,
-                                       struct loop_filter_info* lfi);
+#define vp8_loop_filter_bh vp8_loop_filter_bh_sse2
 
-void vp8_loop_filter_bv_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bv_sse2(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bv_sse2(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
-RTCD_EXTERN void (*vp8_loop_filter_bv)(unsigned char* y,
-                                       unsigned char* u,
-                                       unsigned char* v,
-                                       int ystride,
-                                       int uv_stride,
-                                       struct loop_filter_info* lfi);
+#define vp8_loop_filter_bv vp8_loop_filter_bv_sse2
 
-void vp8_loop_filter_mbh_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbh_sse2(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbh_sse2(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
-RTCD_EXTERN void (*vp8_loop_filter_mbh)(unsigned char* y,
-                                        unsigned char* u,
-                                        unsigned char* v,
-                                        int ystride,
-                                        int uv_stride,
-                                        struct loop_filter_info* lfi);
+#define vp8_loop_filter_mbh vp8_loop_filter_mbh_sse2
 
-void vp8_loop_filter_mbv_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbv_sse2(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbv_sse2(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
-RTCD_EXTERN void (*vp8_loop_filter_mbv)(unsigned char* y,
-                                        unsigned char* u,
-                                        unsigned char* v,
-                                        int ystride,
-                                        int uv_stride,
-                                        struct loop_filter_info* lfi);
+#define vp8_loop_filter_mbv vp8_loop_filter_mbv_sse2
 
-void vp8_loop_filter_bhs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bhs_sse2(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bhs_sse2(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_bh)(unsigned char* y,
-                                              int ystride,
-                                              const unsigned char* blimit);
+#define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_sse2
 
-void vp8_loop_filter_bvs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bvs_sse2(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bvs_sse2(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_bv)(unsigned char* y,
-                                              int ystride,
-                                              const unsigned char* blimit);
+#define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_sse2
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y,
-                                              int ystride,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
-void vp8_loop_filter_simple_horizontal_edge_sse2(unsigned char* y,
-                                                 int ystride,
+void vp8_loop_filter_simple_horizontal_edge_sse2(unsigned char* y_ptr,
+                                                 int y_stride,
                                                  const unsigned char* blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_mbh)(unsigned char* y,
-                                               int ystride,
-                                               const unsigned char* blimit);
+#define vp8_loop_filter_simple_mbh vp8_loop_filter_simple_horizontal_edge_sse2
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y,
-                                            int ystride,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
                                             const unsigned char* blimit);
-void vp8_loop_filter_simple_vertical_edge_sse2(unsigned char* y,
-                                               int ystride,
+void vp8_loop_filter_simple_vertical_edge_sse2(unsigned char* y_ptr,
+                                               int y_stride,
                                                const unsigned char* blimit);
-RTCD_EXTERN void (*vp8_loop_filter_simple_mbv)(unsigned char* y,
-                                               int ystride,
-                                               const unsigned char* blimit);
+#define vp8_loop_filter_simple_mbv vp8_loop_filter_simple_vertical_edge_sse2
 
 int vp8_mbblock_error_c(struct macroblock* mb, int dc);
 int vp8_mbblock_error_sse2(struct macroblock* mb, int dc);
-RTCD_EXTERN int (*vp8_mbblock_error)(struct macroblock* mb, int dc);
+#define vp8_mbblock_error vp8_mbblock_error_sse2
 
 int vp8_mbuverror_c(struct macroblock* mb);
 int vp8_mbuverror_sse2(struct macroblock* mb);
-RTCD_EXTERN int (*vp8_mbuverror)(struct macroblock* mb);
+#define vp8_mbuverror vp8_mbuverror_sse2
 
 int vp8_refining_search_sad_c(struct macroblock* x,
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -579,20 +484,12 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
-RTCD_EXTERN int (*vp8_refining_search_sad)(struct macroblock* x,
-                                           struct block* b,
-                                           struct blockd* d,
-                                           union int_mv* ref_mv,
-                                           int sad_per_bit,
-                                           int distance,
-                                           struct variance_vtable* fn_ptr,
-                                           int* mvcost[2],
-                                           union int_mv* center_mv);
+#define vp8_refining_search_sad vp8_refining_search_sadx4
 
 void vp8_regular_quantize_b_c(struct block*, struct blockd*);
 void vp8_regular_quantize_b_sse2(struct block*, struct blockd*);
@@ -601,137 +498,133 @@
 
 void vp8_short_fdct4x4_c(short* input, short* output, int pitch);
 void vp8_short_fdct4x4_sse2(short* input, short* output, int pitch);
-RTCD_EXTERN void (*vp8_short_fdct4x4)(short* input, short* output, int pitch);
+#define vp8_short_fdct4x4 vp8_short_fdct4x4_sse2
 
 void vp8_short_fdct8x4_c(short* input, short* output, int pitch);
 void vp8_short_fdct8x4_sse2(short* input, short* output, int pitch);
-RTCD_EXTERN void (*vp8_short_fdct8x4)(short* input, short* output, int pitch);
+#define vp8_short_fdct8x4 vp8_short_fdct8x4_sse2
 
 void vp8_short_idct4x4llm_c(short* input,
-                            unsigned char* pred,
-                            int pitch,
-                            unsigned char* dst,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 void vp8_short_idct4x4llm_mmx(short* input,
-                              unsigned char* pred,
-                              int pitch,
-                              unsigned char* dst,
+                              unsigned char* pred_ptr,
+                              int pred_stride,
+                              unsigned char* dst_ptr,
                               int dst_stride);
-RTCD_EXTERN void (*vp8_short_idct4x4llm)(short* input,
-                                         unsigned char* pred,
-                                         int pitch,
-                                         unsigned char* dst,
-                                         int dst_stride);
+#define vp8_short_idct4x4llm vp8_short_idct4x4llm_mmx
 
-void vp8_short_inv_walsh4x4_c(short* input, short* output);
-void vp8_short_inv_walsh4x4_sse2(short* input, short* output);
-RTCD_EXTERN void (*vp8_short_inv_walsh4x4)(short* input, short* output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
+void vp8_short_inv_walsh4x4_sse2(short* input, short* mb_dqcoeff);
+#define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_sse2
 
-void vp8_short_inv_walsh4x4_1_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
 void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
 void vp8_short_walsh4x4_sse2(short* input, short* output, int pitch);
-RTCD_EXTERN void (*vp8_short_walsh4x4)(short* input, short* output, int pitch);
+#define vp8_short_walsh4x4 vp8_short_walsh4x4_sse2
 
-void vp8_sixtap_predict16x16_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_sixtap_predict16x16_sse2(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_sixtap_predict16x16_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
-void vp8_sixtap_predict16x16_ssse3(unsigned char* src,
-                                   int src_pitch,
-                                   int xofst,
-                                   int yofst,
-                                   unsigned char* dst,
+void vp8_sixtap_predict16x16_ssse3(unsigned char* src_ptr,
+                                   int src_pixels_per_line,
+                                   int xoffset,
+                                   int yoffset,
+                                   unsigned char* dst_ptr,
                                    int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict16x16)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict16x16)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
                                             int dst_pitch);
 
-void vp8_sixtap_predict4x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict4x4_mmx(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict4x4_mmx(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_sixtap_predict4x4_ssse3(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_sixtap_predict4x4_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict4x4)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict4x4)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
-void vp8_sixtap_predict8x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x4_sse2(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x4_sse2(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
-void vp8_sixtap_predict8x4_ssse3(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_sixtap_predict8x4_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict8x4)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict8x4)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
-void vp8_sixtap_predict8x8_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x8_sse2(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x8_sse2(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
-void vp8_sixtap_predict8x8_ssse3(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_sixtap_predict8x8_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict8x8)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict8x8)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
 void vp8_rtcd(void);
@@ -743,150 +636,36 @@
 
   (void)flags;
 
-  vp8_bilinear_predict16x16 = vp8_bilinear_predict16x16_c;
-  if (flags & HAS_SSE2)
-    vp8_bilinear_predict16x16 = vp8_bilinear_predict16x16_sse2;
+  vp8_bilinear_predict16x16 = vp8_bilinear_predict16x16_sse2;
   if (flags & HAS_SSSE3)
     vp8_bilinear_predict16x16 = vp8_bilinear_predict16x16_ssse3;
-  vp8_bilinear_predict4x4 = vp8_bilinear_predict4x4_c;
-  if (flags & HAS_MMX)
-    vp8_bilinear_predict4x4 = vp8_bilinear_predict4x4_mmx;
-  vp8_bilinear_predict8x4 = vp8_bilinear_predict8x4_c;
-  if (flags & HAS_MMX)
-    vp8_bilinear_predict8x4 = vp8_bilinear_predict8x4_mmx;
-  vp8_bilinear_predict8x8 = vp8_bilinear_predict8x8_c;
-  if (flags & HAS_SSE2)
-    vp8_bilinear_predict8x8 = vp8_bilinear_predict8x8_sse2;
+  vp8_bilinear_predict8x8 = vp8_bilinear_predict8x8_sse2;
   if (flags & HAS_SSSE3)
     vp8_bilinear_predict8x8 = vp8_bilinear_predict8x8_ssse3;
-  vp8_block_error = vp8_block_error_c;
-  if (flags & HAS_SSE2)
-    vp8_block_error = vp8_block_error_sse2;
-  vp8_copy32xn = vp8_copy32xn_c;
-  if (flags & HAS_SSE2)
-    vp8_copy32xn = vp8_copy32xn_sse2;
+  vp8_copy32xn = vp8_copy32xn_sse2;
   if (flags & HAS_SSE3)
     vp8_copy32xn = vp8_copy32xn_sse3;
-  vp8_copy_mem16x16 = vp8_copy_mem16x16_c;
-  if (flags & HAS_SSE2)
-    vp8_copy_mem16x16 = vp8_copy_mem16x16_sse2;
-  vp8_copy_mem8x4 = vp8_copy_mem8x4_c;
-  if (flags & HAS_MMX)
-    vp8_copy_mem8x4 = vp8_copy_mem8x4_mmx;
-  vp8_copy_mem8x8 = vp8_copy_mem8x8_c;
-  if (flags & HAS_MMX)
-    vp8_copy_mem8x8 = vp8_copy_mem8x8_mmx;
-  vp8_dc_only_idct_add = vp8_dc_only_idct_add_c;
-  if (flags & HAS_MMX)
-    vp8_dc_only_idct_add = vp8_dc_only_idct_add_mmx;
-  vp8_denoiser_filter = vp8_denoiser_filter_c;
-  if (flags & HAS_SSE2)
-    vp8_denoiser_filter = vp8_denoiser_filter_sse2;
-  vp8_denoiser_filter_uv = vp8_denoiser_filter_uv_c;
-  if (flags & HAS_SSE2)
-    vp8_denoiser_filter_uv = vp8_denoiser_filter_uv_sse2;
-  vp8_dequant_idct_add = vp8_dequant_idct_add_c;
-  if (flags & HAS_MMX)
-    vp8_dequant_idct_add = vp8_dequant_idct_add_mmx;
-  vp8_dequant_idct_add_uv_block = vp8_dequant_idct_add_uv_block_c;
-  if (flags & HAS_SSE2)
-    vp8_dequant_idct_add_uv_block = vp8_dequant_idct_add_uv_block_sse2;
-  vp8_dequant_idct_add_y_block = vp8_dequant_idct_add_y_block_c;
-  if (flags & HAS_SSE2)
-    vp8_dequant_idct_add_y_block = vp8_dequant_idct_add_y_block_sse2;
-  vp8_dequantize_b = vp8_dequantize_b_c;
-  if (flags & HAS_MMX)
-    vp8_dequantize_b = vp8_dequantize_b_mmx;
-  vp8_diamond_search_sad = vp8_diamond_search_sad_c;
-  if (flags & HAS_SSE2)
-    vp8_diamond_search_sad = vp8_diamond_search_sadx4;
-  vp8_fast_quantize_b = vp8_fast_quantize_b_c;
-  if (flags & HAS_SSE2)
-    vp8_fast_quantize_b = vp8_fast_quantize_b_sse2;
+  vp8_fast_quantize_b = vp8_fast_quantize_b_sse2;
   if (flags & HAS_SSSE3)
     vp8_fast_quantize_b = vp8_fast_quantize_b_ssse3;
-  vp8_filter_by_weight16x16 = vp8_filter_by_weight16x16_c;
-  if (flags & HAS_SSE2)
-    vp8_filter_by_weight16x16 = vp8_filter_by_weight16x16_sse2;
-  vp8_filter_by_weight8x8 = vp8_filter_by_weight8x8_c;
-  if (flags & HAS_SSE2)
-    vp8_filter_by_weight8x8 = vp8_filter_by_weight8x8_sse2;
   vp8_full_search_sad = vp8_full_search_sad_c;
   if (flags & HAS_SSE3)
     vp8_full_search_sad = vp8_full_search_sadx3;
   if (flags & HAS_SSE4_1)
     vp8_full_search_sad = vp8_full_search_sadx8;
-  vp8_loop_filter_bh = vp8_loop_filter_bh_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_bh = vp8_loop_filter_bh_sse2;
-  vp8_loop_filter_bv = vp8_loop_filter_bv_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_bv = vp8_loop_filter_bv_sse2;
-  vp8_loop_filter_mbh = vp8_loop_filter_mbh_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_mbh = vp8_loop_filter_mbh_sse2;
-  vp8_loop_filter_mbv = vp8_loop_filter_mbv_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_mbv = vp8_loop_filter_mbv_sse2;
-  vp8_loop_filter_simple_bh = vp8_loop_filter_bhs_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_simple_bh = vp8_loop_filter_bhs_sse2;
-  vp8_loop_filter_simple_bv = vp8_loop_filter_bvs_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_simple_bv = vp8_loop_filter_bvs_sse2;
-  vp8_loop_filter_simple_mbh = vp8_loop_filter_simple_horizontal_edge_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_simple_mbh = vp8_loop_filter_simple_horizontal_edge_sse2;
-  vp8_loop_filter_simple_mbv = vp8_loop_filter_simple_vertical_edge_c;
-  if (flags & HAS_SSE2)
-    vp8_loop_filter_simple_mbv = vp8_loop_filter_simple_vertical_edge_sse2;
-  vp8_mbblock_error = vp8_mbblock_error_c;
-  if (flags & HAS_SSE2)
-    vp8_mbblock_error = vp8_mbblock_error_sse2;
-  vp8_mbuverror = vp8_mbuverror_c;
-  if (flags & HAS_SSE2)
-    vp8_mbuverror = vp8_mbuverror_sse2;
-  vp8_refining_search_sad = vp8_refining_search_sad_c;
-  if (flags & HAS_SSE2)
-    vp8_refining_search_sad = vp8_refining_search_sadx4;
-  vp8_regular_quantize_b = vp8_regular_quantize_b_c;
-  if (flags & HAS_SSE2)
-    vp8_regular_quantize_b = vp8_regular_quantize_b_sse2;
+  vp8_regular_quantize_b = vp8_regular_quantize_b_sse2;
   if (flags & HAS_SSE4_1)
     vp8_regular_quantize_b = vp8_regular_quantize_b_sse4_1;
-  vp8_short_fdct4x4 = vp8_short_fdct4x4_c;
-  if (flags & HAS_SSE2)
-    vp8_short_fdct4x4 = vp8_short_fdct4x4_sse2;
-  vp8_short_fdct8x4 = vp8_short_fdct8x4_c;
-  if (flags & HAS_SSE2)
-    vp8_short_fdct8x4 = vp8_short_fdct8x4_sse2;
-  vp8_short_idct4x4llm = vp8_short_idct4x4llm_c;
-  if (flags & HAS_MMX)
-    vp8_short_idct4x4llm = vp8_short_idct4x4llm_mmx;
-  vp8_short_inv_walsh4x4 = vp8_short_inv_walsh4x4_c;
-  if (flags & HAS_SSE2)
-    vp8_short_inv_walsh4x4 = vp8_short_inv_walsh4x4_sse2;
-  vp8_short_walsh4x4 = vp8_short_walsh4x4_c;
-  if (flags & HAS_SSE2)
-    vp8_short_walsh4x4 = vp8_short_walsh4x4_sse2;
-  vp8_sixtap_predict16x16 = vp8_sixtap_predict16x16_c;
-  if (flags & HAS_SSE2)
-    vp8_sixtap_predict16x16 = vp8_sixtap_predict16x16_sse2;
+  vp8_sixtap_predict16x16 = vp8_sixtap_predict16x16_sse2;
   if (flags & HAS_SSSE3)
     vp8_sixtap_predict16x16 = vp8_sixtap_predict16x16_ssse3;
-  vp8_sixtap_predict4x4 = vp8_sixtap_predict4x4_c;
-  if (flags & HAS_MMX)
-    vp8_sixtap_predict4x4 = vp8_sixtap_predict4x4_mmx;
+  vp8_sixtap_predict4x4 = vp8_sixtap_predict4x4_mmx;
   if (flags & HAS_SSSE3)
     vp8_sixtap_predict4x4 = vp8_sixtap_predict4x4_ssse3;
-  vp8_sixtap_predict8x4 = vp8_sixtap_predict8x4_c;
-  if (flags & HAS_SSE2)
-    vp8_sixtap_predict8x4 = vp8_sixtap_predict8x4_sse2;
+  vp8_sixtap_predict8x4 = vp8_sixtap_predict8x4_sse2;
   if (flags & HAS_SSSE3)
     vp8_sixtap_predict8x4 = vp8_sixtap_predict8x4_ssse3;
-  vp8_sixtap_predict8x8 = vp8_sixtap_predict8x8_c;
-  if (flags & HAS_SSE2)
-    vp8_sixtap_predict8x8 = vp8_sixtap_predict8x8_sse2;
+  vp8_sixtap_predict8x8 = vp8_sixtap_predict8x8_sse2;
   if (flags & HAS_SSSE3)
     vp8_sixtap_predict8x8 = vp8_sixtap_predict8x8_ssse3;
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vp9_rtcd.h
index c8f69b9..e079a2d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vp9_rtcd.h
@@ -79,15 +79,7 @@
                              int increase_denoising,
                              BLOCK_SIZE bs,
                              int motion_magnitude);
-RTCD_EXTERN int (*vp9_denoiser_filter)(const uint8_t* sig,
-                                       int sig_stride,
-                                       const uint8_t* mc_avg,
-                                       int mc_avg_stride,
-                                       uint8_t* avg,
-                                       int avg_stride,
-                                       int increase_denoising,
-                                       BLOCK_SIZE bs,
-                                       int motion_magnitude);
+#define vp9_denoiser_filter vp9_denoiser_filter_sse2
 
 int vp9_diamond_search_sad_c(const struct macroblock* x,
                              const struct search_site_config* cfg,
@@ -118,46 +110,6 @@
     const struct vp9_variance_vtable* fn_ptr,
     const struct mv* center_mv);
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-void vp9_fdct8x8_quant_ssse3(const int16_t* input,
-                             int stride,
-                             tran_low_t* coeff_ptr,
-                             intptr_t n_coeffs,
-                             int skip_block,
-                             const int16_t* round_ptr,
-                             const int16_t* quant_ptr,
-                             tran_low_t* qcoeff_ptr,
-                             tran_low_t* dqcoeff_ptr,
-                             const int16_t* dequant_ptr,
-                             uint16_t* eob_ptr,
-                             const int16_t* scan,
-                             const int16_t* iscan);
-RTCD_EXTERN void (*vp9_fdct8x8_quant)(const int16_t* input,
-                                      int stride,
-                                      tran_low_t* coeff_ptr,
-                                      intptr_t n_coeffs,
-                                      int skip_block,
-                                      const int16_t* round_ptr,
-                                      const int16_t* quant_ptr,
-                                      tran_low_t* qcoeff_ptr,
-                                      tran_low_t* dqcoeff_ptr,
-                                      const int16_t* dequant_ptr,
-                                      uint16_t* eob_ptr,
-                                      const int16_t* scan,
-                                      const int16_t* iscan);
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -166,10 +118,7 @@
                        tran_low_t* output,
                        int stride,
                        int tx_type);
-RTCD_EXTERN void (*vp9_fht16x16)(const int16_t* input,
-                                 tran_low_t* output,
-                                 int stride,
-                                 int tx_type);
+#define vp9_fht16x16 vp9_fht16x16_sse2
 
 void vp9_fht4x4_c(const int16_t* input,
                   tran_low_t* output,
@@ -179,10 +128,7 @@
                      tran_low_t* output,
                      int stride,
                      int tx_type);
-RTCD_EXTERN void (*vp9_fht4x4)(const int16_t* input,
-                               tran_low_t* output,
-                               int stride,
-                               int tx_type);
+#define vp9_fht4x4 vp9_fht4x4_sse2
 
 void vp9_fht8x8_c(const int16_t* input,
                   tran_low_t* output,
@@ -192,10 +138,7 @@
                      tran_low_t* output,
                      int stride,
                      int tx_type);
-RTCD_EXTERN void (*vp9_fht8x8)(const int16_t* input,
-                               tran_low_t* output,
-                               int stride,
-                               int tx_type);
+#define vp9_fht8x8 vp9_fht8x8_sse2
 
 void vp9_filter_by_weight16x16_c(const uint8_t* src,
                                  int src_stride,
@@ -207,11 +150,7 @@
                                     uint8_t* dst,
                                     int dst_stride,
                                     int src_weight);
-RTCD_EXTERN void (*vp9_filter_by_weight16x16)(const uint8_t* src,
-                                              int src_stride,
-                                              uint8_t* dst,
-                                              int dst_stride,
-                                              int src_weight);
+#define vp9_filter_by_weight16x16 vp9_filter_by_weight16x16_sse2
 
 void vp9_filter_by_weight8x8_c(const uint8_t* src,
                                int src_stride,
@@ -223,17 +162,11 @@
                                   uint8_t* dst,
                                   int dst_stride,
                                   int src_weight);
-RTCD_EXTERN void (*vp9_filter_by_weight8x8)(const uint8_t* src,
-                                            int src_stride,
-                                            uint8_t* dst,
-                                            int dst_stride,
-                                            int src_weight);
+#define vp9_filter_by_weight8x8 vp9_filter_by_weight8x8_sse2
 
 void vp9_fwht4x4_c(const int16_t* input, tran_low_t* output, int stride);
 void vp9_fwht4x4_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vp9_fwht4x4)(const int16_t* input,
-                                tran_low_t* output,
-                                int stride);
+#define vp9_fwht4x4 vp9_fwht4x4_sse2
 
 int64_t vp9_highbd_block_error_c(const tran_low_t* coeff,
                                  const tran_low_t* dqcoeff,
@@ -245,11 +178,7 @@
                                     intptr_t block_size,
                                     int64_t* ssz,
                                     int bd);
-RTCD_EXTERN int64_t (*vp9_highbd_block_error)(const tran_low_t* coeff,
-                                              const tran_low_t* dqcoeff,
-                                              intptr_t block_size,
-                                              int64_t* ssz,
-                                              int bd);
+#define vp9_highbd_block_error vp9_highbd_block_error_sse2
 
 void vp9_highbd_fht16x16_c(const int16_t* input,
                            tran_low_t* output,
@@ -273,18 +202,18 @@
 #define vp9_highbd_fwht4x4 vp9_highbd_fwht4x4_c
 
 void vp9_highbd_iht16x16_256_add_c(const tran_low_t* input,
-                                   uint16_t* output,
-                                   int pitch,
+                                   uint16_t* dest,
+                                   int stride,
                                    int tx_type,
                                    int bd);
 void vp9_highbd_iht16x16_256_add_sse4_1(const tran_low_t* input,
-                                        uint16_t* output,
-                                        int pitch,
+                                        uint16_t* dest,
+                                        int stride,
                                         int tx_type,
                                         int bd);
 RTCD_EXTERN void (*vp9_highbd_iht16x16_256_add)(const tran_low_t* input,
-                                                uint16_t* output,
-                                                int pitch,
+                                                uint16_t* dest,
+                                                int stride,
                                                 int tx_type,
                                                 int bd);
 
@@ -376,23 +305,21 @@
                                         unsigned int block_width,
                                         unsigned int block_height,
                                         int strength,
-                                        int filter_weight,
+                                        int* blk_fw,
+                                        int use_32x32,
                                         uint32_t* accumulator,
                                         uint16_t* count);
 #define vp9_highbd_temporal_filter_apply vp9_highbd_temporal_filter_apply_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 void vp9_iht16x16_256_add_sse2(const tran_low_t* input,
-                               uint8_t* output,
-                               int pitch,
+                               uint8_t* dest,
+                               int stride,
                                int tx_type);
-RTCD_EXTERN void (*vp9_iht16x16_256_add)(const tran_low_t* input,
-                                         uint8_t* output,
-                                         int pitch,
-                                         int tx_type);
+#define vp9_iht16x16_256_add vp9_iht16x16_256_add_sse2
 
 void vp9_iht4x4_16_add_c(const tran_low_t* input,
                          uint8_t* dest,
@@ -402,10 +329,7 @@
                             uint8_t* dest,
                             int stride,
                             int tx_type);
-RTCD_EXTERN void (*vp9_iht4x4_16_add)(const tran_low_t* input,
-                                      uint8_t* dest,
-                                      int stride,
-                                      int tx_type);
+#define vp9_iht4x4_16_add vp9_iht4x4_16_add_sse2
 
 void vp9_iht8x8_64_add_c(const tran_low_t* input,
                          uint8_t* dest,
@@ -415,10 +339,7 @@
                             uint8_t* dest,
                             int stride,
                             int tx_type);
-RTCD_EXTERN void (*vp9_iht8x8_64_add)(const tran_low_t* input,
-                                      uint8_t* dest,
-                                      int stride,
-                                      int tx_type);
+#define vp9_iht8x8_64_add vp9_iht8x8_64_add_sse2
 
 void vp9_quantize_fp_c(const tran_low_t* coeff_ptr,
                        intptr_t n_coeffs,
@@ -501,46 +422,15 @@
 
   (void)flags;
 
-  vp9_block_error = vp9_block_error_c;
-  if (flags & HAS_SSE2)
-    vp9_block_error = vp9_block_error_sse2;
+  vp9_block_error = vp9_block_error_sse2;
   if (flags & HAS_AVX2)
     vp9_block_error = vp9_block_error_avx2;
-  vp9_block_error_fp = vp9_block_error_fp_c;
-  if (flags & HAS_SSE2)
-    vp9_block_error_fp = vp9_block_error_fp_sse2;
+  vp9_block_error_fp = vp9_block_error_fp_sse2;
   if (flags & HAS_AVX2)
     vp9_block_error_fp = vp9_block_error_fp_avx2;
-  vp9_denoiser_filter = vp9_denoiser_filter_c;
-  if (flags & HAS_SSE2)
-    vp9_denoiser_filter = vp9_denoiser_filter_sse2;
   vp9_diamond_search_sad = vp9_diamond_search_sad_c;
   if (flags & HAS_AVX)
     vp9_diamond_search_sad = vp9_diamond_search_sad_avx;
-  vp9_fdct8x8_quant = vp9_fdct8x8_quant_c;
-  if (flags & HAS_SSSE3)
-    vp9_fdct8x8_quant = vp9_fdct8x8_quant_ssse3;
-  vp9_fht16x16 = vp9_fht16x16_c;
-  if (flags & HAS_SSE2)
-    vp9_fht16x16 = vp9_fht16x16_sse2;
-  vp9_fht4x4 = vp9_fht4x4_c;
-  if (flags & HAS_SSE2)
-    vp9_fht4x4 = vp9_fht4x4_sse2;
-  vp9_fht8x8 = vp9_fht8x8_c;
-  if (flags & HAS_SSE2)
-    vp9_fht8x8 = vp9_fht8x8_sse2;
-  vp9_filter_by_weight16x16 = vp9_filter_by_weight16x16_c;
-  if (flags & HAS_SSE2)
-    vp9_filter_by_weight16x16 = vp9_filter_by_weight16x16_sse2;
-  vp9_filter_by_weight8x8 = vp9_filter_by_weight8x8_c;
-  if (flags & HAS_SSE2)
-    vp9_filter_by_weight8x8 = vp9_filter_by_weight8x8_sse2;
-  vp9_fwht4x4 = vp9_fwht4x4_c;
-  if (flags & HAS_SSE2)
-    vp9_fwht4x4 = vp9_fwht4x4_sse2;
-  vp9_highbd_block_error = vp9_highbd_block_error_c;
-  if (flags & HAS_SSE2)
-    vp9_highbd_block_error = vp9_highbd_block_error_sse2;
   vp9_highbd_iht16x16_256_add = vp9_highbd_iht16x16_256_add_c;
   if (flags & HAS_SSE4_1)
     vp9_highbd_iht16x16_256_add = vp9_highbd_iht16x16_256_add_sse4_1;
@@ -550,18 +440,7 @@
   vp9_highbd_iht8x8_64_add = vp9_highbd_iht8x8_64_add_c;
   if (flags & HAS_SSE4_1)
     vp9_highbd_iht8x8_64_add = vp9_highbd_iht8x8_64_add_sse4_1;
-  vp9_iht16x16_256_add = vp9_iht16x16_256_add_c;
-  if (flags & HAS_SSE2)
-    vp9_iht16x16_256_add = vp9_iht16x16_256_add_sse2;
-  vp9_iht4x4_16_add = vp9_iht4x4_16_add_c;
-  if (flags & HAS_SSE2)
-    vp9_iht4x4_16_add = vp9_iht4x4_16_add_sse2;
-  vp9_iht8x8_64_add = vp9_iht8x8_64_add_c;
-  if (flags & HAS_SSE2)
-    vp9_iht8x8_64_add = vp9_iht8x8_64_add_sse2;
-  vp9_quantize_fp = vp9_quantize_fp_c;
-  if (flags & HAS_SSE2)
-    vp9_quantize_fp = vp9_quantize_fp_sse2;
+  vp9_quantize_fp = vp9_quantize_fp_sse2;
   if (flags & HAS_AVX2)
     vp9_quantize_fp = vp9_quantize_fp_avx2;
   vp9_scale_and_extend_frame = vp9_scale_and_extend_frame_c;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_config.asm
index b0eb9f0..2f5bd3f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_config.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_config.asm
@@ -1,7 +1,12 @@
+%define VPX_ARCH_ARM 0
 %define ARCH_ARM 0
+%define VPX_ARCH_MIPS 0
 %define ARCH_MIPS 0
+%define VPX_ARCH_X86 1
 %define ARCH_X86 1
+%define VPX_ARCH_X86_64 0
 %define ARCH_X86_64 0
+%define VPX_ARCH_PPC 0
 %define ARCH_PPC 0
 %define HAVE_NEON 0
 %define HAVE_NEON_ASM 0
@@ -66,20 +71,24 @@
 %define CONFIG_OS_SUPPORT 1
 %define CONFIG_UNIT_TESTS 1
 %define CONFIG_WEBM_IO 1
-%define CONFIG_LIBYUV 1
+%define CONFIG_LIBYUV 0
 %define CONFIG_DECODE_PERF_TESTS 0
 %define CONFIG_ENCODE_PERF_TESTS 0
 %define CONFIG_MULTI_RES_ENCODING 1
 %define CONFIG_TEMPORAL_DENOISING 1
 %define CONFIG_VP9_TEMPORAL_DENOISING 1
+%define CONFIG_CONSISTENT_RECODE 0
 %define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 %define CONFIG_VP9_HIGHBITDEPTH 1
 %define CONFIG_BETTER_HW_COMPATIBILITY 0
 %define CONFIG_EXPERIMENTAL 0
 %define CONFIG_SIZE_LIMIT 1
 %define CONFIG_ALWAYS_ADJUST_BPM 0
-%define CONFIG_SPATIAL_SVC 0
+%define CONFIG_BITSTREAM_DEBUG 0
+%define CONFIG_MISMATCH_DEBUG 0
 %define CONFIG_FP_MB_STATS 0
 %define CONFIG_EMULATE_HARDWARE 0
+%define CONFIG_NON_GREEDY_MV 0
+%define CONFIG_RATE_CTRL 0
 %define DECODE_WIDTH_LIMIT 16384
 %define DECODE_HEIGHT_LIMIT 16384
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_config.c
index 7f90ecc..3625b55 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=x86-win32-vs12 --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --enable-pic --as=yasm --disable-avx512 --enable-vp9-highbitdepth";
+static const char* const cfg = "--target=x86-win32-vs14 --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv --enable-pic --as=yasm --disable-avx512 --enable-vp9-highbitdepth";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_config.h
index a2dfec7..cec887d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      __inline
+#define VPX_ARCH_ARM 0
 #define ARCH_ARM 0
+#define VPX_ARCH_MIPS 0
 #define ARCH_MIPS 0
+#define VPX_ARCH_X86 1
 #define ARCH_X86 1
+#define VPX_ARCH_X86_64 0
 #define ARCH_X86_64 0
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 0
 #define HAVE_NEON_ASM 0
@@ -78,21 +83,25 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
 #define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 1
 #define CONFIG_BETTER_HW_COMPATIBILITY 0
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
-#define CONFIG_SPATIAL_SVC 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_dsp_rtcd.h
index d820a2b..096e3a8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/ia32/vpx_dsp_rtcd.h
@@ -22,11 +22,11 @@
 
 unsigned int vpx_avg_4x4_c(const uint8_t*, int p);
 unsigned int vpx_avg_4x4_sse2(const uint8_t*, int p);
-RTCD_EXTERN unsigned int (*vpx_avg_4x4)(const uint8_t*, int p);
+#define vpx_avg_4x4 vpx_avg_4x4_sse2
 
 unsigned int vpx_avg_8x8_c(const uint8_t*, int p);
 unsigned int vpx_avg_8x8_sse2(const uint8_t*, int p);
-RTCD_EXTERN unsigned int (*vpx_avg_8x8)(const uint8_t*, int p);
+#define vpx_avg_8x8 vpx_avg_8x8_sse2
 
 void vpx_comp_avg_pred_c(uint8_t* comp_pred,
                          const uint8_t* pred,
@@ -40,12 +40,7 @@
                             int height,
                             const uint8_t* ref,
                             int ref_stride);
-RTCD_EXTERN void (*vpx_comp_avg_pred)(uint8_t* comp_pred,
-                                      const uint8_t* pred,
-                                      int width,
-                                      int height,
-                                      const uint8_t* ref,
-                                      int ref_stride);
+#define vpx_comp_avg_pred vpx_comp_avg_pred_sse2
 
 void vpx_convolve8_c(const uint8_t* src,
                      ptrdiff_t src_stride,
@@ -405,17 +400,7 @@
                            int y_step_q4,
                            int w,
                            int h);
-RTCD_EXTERN void (*vpx_convolve_avg)(const uint8_t* src,
-                                     ptrdiff_t src_stride,
-                                     uint8_t* dst,
-                                     ptrdiff_t dst_stride,
-                                     const InterpKernel* filter,
-                                     int x0_q4,
-                                     int x_step_q4,
-                                     int y0_q4,
-                                     int y_step_q4,
-                                     int w,
-                                     int h);
+#define vpx_convolve_avg vpx_convolve_avg_sse2
 
 void vpx_convolve_copy_c(const uint8_t* src,
                          ptrdiff_t src_stride,
@@ -439,655 +424,553 @@
                             int y_step_q4,
                             int w,
                             int h);
-RTCD_EXTERN void (*vpx_convolve_copy)(const uint8_t* src,
-                                      ptrdiff_t src_stride,
-                                      uint8_t* dst,
-                                      ptrdiff_t dst_stride,
-                                      const InterpKernel* filter,
-                                      int x0_q4,
-                                      int x_step_q4,
-                                      int y0_q4,
-                                      int y_step_q4,
-                                      int w,
-                                      int h);
+#define vpx_convolve_copy vpx_convolve_copy_sse2
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_c
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_c
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_c
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_c
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d153_predictor_16x16_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_16x16)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d153_predictor_32x32_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_32x32)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d153_predictor_4x4_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_4x4)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d153_predictor_8x8_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_8x8)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d207_predictor_16x16_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_16x16)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d207_predictor_32x32_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_32x32)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d207_predictor_4x4_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
-RTCD_EXTERN void (*vpx_d207_predictor_4x4)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
-                                           const uint8_t* above,
-                                           const uint8_t* left);
+#define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_sse2
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d207_predictor_8x8_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_8x8)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_16x16_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_16x16)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_32x32_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_32x32)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_4x4_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_d45_predictor_4x4)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_sse2
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_8x8_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_d45_predictor_8x8)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_sse2
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d63_predictor_16x16_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_16x16)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d63_predictor_32x32_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_32x32)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d63_predictor_4x4_ssse3(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_4x4)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d63_predictor_8x8_ssse3(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_8x8)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_16x16_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_128_predictor_16x16)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint8_t* above,
-                                               const uint8_t* left);
+#define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_sse2
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_32x32_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_128_predictor_32x32)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint8_t* above,
-                                               const uint8_t* left);
+#define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_sse2
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_4x4_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_128_predictor_4x4)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
-                                             const uint8_t* above,
-                                             const uint8_t* left);
+#define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_sse2
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_8x8_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_128_predictor_8x8)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
-                                             const uint8_t* above,
-                                             const uint8_t* left);
+#define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_sse2
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_16x16_sse2(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_left_predictor_16x16)(uint8_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint8_t* above,
-                                                const uint8_t* left);
+#define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_sse2
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_32x32_sse2(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_left_predictor_32x32)(uint8_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint8_t* above,
-                                                const uint8_t* left);
+#define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_sse2
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_4x4_sse2(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_left_predictor_4x4)(uint8_t* dst,
-                                              ptrdiff_t y_stride,
-                                              const uint8_t* above,
-                                              const uint8_t* left);
+#define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_sse2
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_8x8_sse2(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_left_predictor_8x8)(uint8_t* dst,
-                                              ptrdiff_t y_stride,
-                                              const uint8_t* above,
-                                              const uint8_t* left);
+#define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_sse2
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_16x16_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_predictor_16x16)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
-                                           const uint8_t* above,
-                                           const uint8_t* left);
+#define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_sse2
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_32x32_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_predictor_32x32)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
-                                           const uint8_t* above,
-                                           const uint8_t* left);
+#define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_sse2
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_4x4_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_predictor_4x4)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
-                                         const uint8_t* above,
-                                         const uint8_t* left);
+#define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_sse2
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_8x8_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_predictor_8x8)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
-                                         const uint8_t* above,
-                                         const uint8_t* left);
+#define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_sse2
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_16x16_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_top_predictor_16x16)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint8_t* above,
-                                               const uint8_t* left);
+#define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_sse2
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_32x32_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_top_predictor_32x32)(uint8_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint8_t* above,
-                                               const uint8_t* left);
+#define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_sse2
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_4x4_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_top_predictor_4x4)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
-                                             const uint8_t* above,
-                                             const uint8_t* left);
+#define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_sse2
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_8x8_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
-RTCD_EXTERN void (*vpx_dc_top_predictor_8x8)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
-                                             const uint8_t* above,
-                                             const uint8_t* left);
+#define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_sse2
 
 void vpx_fdct16x16_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct16x16_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct16x16)(const int16_t* input,
-                                  tran_low_t* output,
-                                  int stride);
+#define vpx_fdct16x16 vpx_fdct16x16_sse2
 
 void vpx_fdct16x16_1_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct16x16_1_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct16x16_1)(const int16_t* input,
-                                    tran_low_t* output,
-                                    int stride);
+#define vpx_fdct16x16_1 vpx_fdct16x16_1_sse2
 
 void vpx_fdct32x32_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct32x32_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct32x32)(const int16_t* input,
-                                  tran_low_t* output,
-                                  int stride);
+#define vpx_fdct32x32 vpx_fdct32x32_sse2
 
 void vpx_fdct32x32_1_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct32x32_1_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct32x32_1)(const int16_t* input,
-                                    tran_low_t* output,
-                                    int stride);
+#define vpx_fdct32x32_1 vpx_fdct32x32_1_sse2
 
 void vpx_fdct32x32_rd_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct32x32_rd_sse2(const int16_t* input,
                            tran_low_t* output,
                            int stride);
-RTCD_EXTERN void (*vpx_fdct32x32_rd)(const int16_t* input,
-                                     tran_low_t* output,
-                                     int stride);
+#define vpx_fdct32x32_rd vpx_fdct32x32_rd_sse2
 
 void vpx_fdct4x4_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct4x4_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct4x4)(const int16_t* input,
-                                tran_low_t* output,
-                                int stride);
+#define vpx_fdct4x4 vpx_fdct4x4_sse2
 
 void vpx_fdct4x4_1_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct4x4_1_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct4x4_1)(const int16_t* input,
-                                  tran_low_t* output,
-                                  int stride);
+#define vpx_fdct4x4_1 vpx_fdct4x4_1_sse2
 
 void vpx_fdct8x8_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct8x8_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct8x8)(const int16_t* input,
-                                tran_low_t* output,
-                                int stride);
+#define vpx_fdct8x8 vpx_fdct8x8_sse2
 
 void vpx_fdct8x8_1_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_fdct8x8_1_sse2(const int16_t* input, tran_low_t* output, int stride);
-RTCD_EXTERN void (*vpx_fdct8x8_1)(const int16_t* input,
-                                  tran_low_t* output,
-                                  int stride);
+#define vpx_fdct8x8_1 vpx_fdct8x8_1_sse2
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
                        int* sum);
 void vpx_get16x16var_sse2(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
                           int* sum);
 void vpx_get16x16var_avx2(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
                           int* sum);
 RTCD_EXTERN void (*vpx_get16x16var)(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse,
                                     int* sum);
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 #define vpx_get4x4sse_cs vpx_get4x4sse_cs_c
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
                      int* sum);
 void vpx_get8x8var_sse2(const uint8_t* src_ptr,
-                        int source_stride,
+                        int src_stride,
                         const uint8_t* ref_ptr,
                         int ref_stride,
                         unsigned int* sse,
                         int* sum);
-RTCD_EXTERN void (*vpx_get8x8var)(const uint8_t* src_ptr,
-                                  int source_stride,
-                                  const uint8_t* ref_ptr,
-                                  int ref_stride,
-                                  unsigned int* sse,
-                                  int* sum);
+#define vpx_get8x8var vpx_get8x8var_sse2
 
 unsigned int vpx_get_mb_ss_c(const int16_t*);
 unsigned int vpx_get_mb_ss_sse2(const int16_t*);
-RTCD_EXTERN unsigned int (*vpx_get_mb_ss)(const int16_t*);
+#define vpx_get_mb_ss vpx_get_mb_ss_sse2
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_16x16_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_h_predictor_16x16)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_h_predictor_16x16 vpx_h_predictor_16x16_sse2
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_32x32_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_h_predictor_32x32)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_h_predictor_32x32 vpx_h_predictor_32x32_sse2
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_4x4_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
-RTCD_EXTERN void (*vpx_h_predictor_4x4)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
-                                        const uint8_t* above,
-                                        const uint8_t* left);
+#define vpx_h_predictor_4x4 vpx_h_predictor_4x4_sse2
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_8x8_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
-RTCD_EXTERN void (*vpx_h_predictor_8x8)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
-                                        const uint8_t* above,
-                                        const uint8_t* left);
+#define vpx_h_predictor_8x8 vpx_h_predictor_8x8_sse2
 
 void vpx_hadamard_16x16_c(const int16_t* src_diff,
                           ptrdiff_t src_stride,
@@ -1121,249 +1004,209 @@
 void vpx_hadamard_8x8_sse2(const int16_t* src_diff,
                            ptrdiff_t src_stride,
                            tran_low_t* coeff);
-RTCD_EXTERN void (*vpx_hadamard_8x8)(const int16_t* src_diff,
-                                     ptrdiff_t src_stride,
-                                     tran_low_t* coeff);
+#define vpx_hadamard_8x8 vpx_hadamard_8x8_sse2
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
 
 void vpx_highbd_10_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
                                  int* sum);
-#define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_c
+void vpx_highbd_10_get16x16var_sse2(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse,
+                                    int* sum);
+#define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_sse2
 
 void vpx_highbd_10_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
                                int* sum);
-#define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_c
+void vpx_highbd_10_get8x8var_sse2(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse,
+                                  int* sum);
+#define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_sse2
 
 unsigned int vpx_highbd_10_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 unsigned int vpx_highbd_10_mse16x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_mse16x16)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   const uint8_t* ref_ptr,
-                                                   int recon_stride,
-                                                   unsigned int* sse);
+#define vpx_highbd_10_mse16x16 vpx_highbd_10_mse16x16_sse2
 
 unsigned int vpx_highbd_10_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse16x8 vpx_highbd_10_mse16x8_c
 
 unsigned int vpx_highbd_10_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse8x16 vpx_highbd_10_mse8x16_c
 
 unsigned int vpx_highbd_10_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_highbd_10_mse8x8_sse2(const uint8_t* src_ptr,
-                                       int source_stride,
+                                       int src_stride,
                                        const uint8_t* ref_ptr,
-                                       int recon_stride,
+                                       int ref_stride,
                                        unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_mse8x8)(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 const uint8_t* ref_ptr,
-                                                 int recon_stride,
-                                                 unsigned int* sse);
+#define vpx_highbd_10_mse8x8 vpx_highbd_10_mse8x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x16 \
+  vpx_highbd_10_sub_pixel_avg_variance16x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x32 \
+  vpx_highbd_10_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance16x8 \
+  vpx_highbd_10_sub_pixel_avg_variance16x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x16 \
+  vpx_highbd_10_sub_pixel_avg_variance32x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x32 \
+  vpx_highbd_10_sub_pixel_avg_variance32x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance32x64 \
+  vpx_highbd_10_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1372,9 +1215,9 @@
   vpx_highbd_10_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1384,283 +1227,212 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance64x32 \
+  vpx_highbd_10_sub_pixel_avg_variance64x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance64x64 \
+  vpx_highbd_10_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x16 \
+  vpx_highbd_10_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x4 \
+  vpx_highbd_10_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_avg_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_10_sub_pixel_avg_variance8x8 \
+  vpx_highbd_10_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x16 \
+  vpx_highbd_10_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x32 \
+  vpx_highbd_10_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance16x8 \
+  vpx_highbd_10_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x16 \
+  vpx_highbd_10_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x32 \
+  vpx_highbd_10_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance32x64 \
+  vpx_highbd_10_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1668,9 +1440,9 @@
   vpx_highbd_10_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1678,534 +1450,426 @@
   vpx_highbd_10_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance64x32 \
+  vpx_highbd_10_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance64x64 \
+  vpx_highbd_10_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x16 \
+  vpx_highbd_10_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x4 \
+  vpx_highbd_10_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_10_sub_pixel_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_10_sub_pixel_variance8x8 \
+  vpx_highbd_10_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_10_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance16x16)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance16x16 vpx_highbd_10_variance16x16_sse2
 
 unsigned int vpx_highbd_10_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance16x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance16x32 vpx_highbd_10_variance16x32_sse2
 
 unsigned int vpx_highbd_10_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance16x8)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_10_variance16x8 vpx_highbd_10_variance16x8_sse2
 
 unsigned int vpx_highbd_10_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance32x16)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance32x16 vpx_highbd_10_variance32x16_sse2
 
 unsigned int vpx_highbd_10_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance32x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance32x32 vpx_highbd_10_variance32x32_sse2
 
 unsigned int vpx_highbd_10_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance32x64)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance32x64 vpx_highbd_10_variance32x64_sse2
 
 unsigned int vpx_highbd_10_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x4 vpx_highbd_10_variance4x4_c
 
 unsigned int vpx_highbd_10_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x8 vpx_highbd_10_variance4x8_c
 
 unsigned int vpx_highbd_10_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance64x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance64x32 vpx_highbd_10_variance64x32_sse2
 
 unsigned int vpx_highbd_10_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance64x64)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_10_variance64x64 vpx_highbd_10_variance64x64_sse2
 
 unsigned int vpx_highbd_10_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_10_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance8x16)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_10_variance8x16 vpx_highbd_10_variance8x16_sse2
 
 unsigned int vpx_highbd_10_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance8x4 vpx_highbd_10_variance8x4_c
 
 unsigned int vpx_highbd_10_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_10_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_10_variance8x8)(const uint8_t* src_ptr,
-                                                      int source_stride,
-                                                      const uint8_t* ref_ptr,
-                                                      int ref_stride,
-                                                      unsigned int* sse);
+#define vpx_highbd_10_variance8x8 vpx_highbd_10_variance8x8_sse2
 
 void vpx_highbd_12_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
                                  int* sum);
-#define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_c
+void vpx_highbd_12_get16x16var_sse2(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse,
+                                    int* sum);
+#define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_sse2
 
 void vpx_highbd_12_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
                                int* sum);
-#define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_c
+void vpx_highbd_12_get8x8var_sse2(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse,
+                                  int* sum);
+#define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_sse2
 
 unsigned int vpx_highbd_12_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 unsigned int vpx_highbd_12_mse16x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_mse16x16)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   const uint8_t* ref_ptr,
-                                                   int recon_stride,
-                                                   unsigned int* sse);
+#define vpx_highbd_12_mse16x16 vpx_highbd_12_mse16x16_sse2
 
 unsigned int vpx_highbd_12_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse16x8 vpx_highbd_12_mse16x8_c
 
 unsigned int vpx_highbd_12_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse8x16 vpx_highbd_12_mse8x16_c
 
 unsigned int vpx_highbd_12_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_highbd_12_mse8x8_sse2(const uint8_t* src_ptr,
-                                       int source_stride,
+                                       int src_stride,
                                        const uint8_t* ref_ptr,
-                                       int recon_stride,
+                                       int ref_stride,
                                        unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_mse8x8)(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 const uint8_t* ref_ptr,
-                                                 int recon_stride,
-                                                 unsigned int* sse);
+#define vpx_highbd_12_mse8x8 vpx_highbd_12_mse8x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x16 \
+  vpx_highbd_12_sub_pixel_avg_variance16x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x32 \
+  vpx_highbd_12_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance16x8 \
+  vpx_highbd_12_sub_pixel_avg_variance16x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x16 \
+  vpx_highbd_12_sub_pixel_avg_variance32x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x32 \
+  vpx_highbd_12_sub_pixel_avg_variance32x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance32x64 \
+  vpx_highbd_12_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -2214,9 +1878,9 @@
   vpx_highbd_12_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -2226,283 +1890,212 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance64x32 \
+  vpx_highbd_12_sub_pixel_avg_variance64x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance64x64 \
+  vpx_highbd_12_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x16 \
+  vpx_highbd_12_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x4 \
+  vpx_highbd_12_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_avg_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_12_sub_pixel_avg_variance8x8 \
+  vpx_highbd_12_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x16 \
+  vpx_highbd_12_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x32 \
+  vpx_highbd_12_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance16x8 \
+  vpx_highbd_12_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x16 \
+  vpx_highbd_12_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x32 \
+  vpx_highbd_12_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance32x64 \
+  vpx_highbd_12_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -2510,9 +2103,9 @@
   vpx_highbd_12_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -2520,529 +2113,421 @@
   vpx_highbd_12_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance64x32 \
+  vpx_highbd_12_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance64x64 \
+  vpx_highbd_12_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x16 \
+  vpx_highbd_12_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x4 \
+  vpx_highbd_12_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_12_sub_pixel_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_12_sub_pixel_variance8x8 \
+  vpx_highbd_12_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_12_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance16x16)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance16x16 vpx_highbd_12_variance16x16_sse2
 
 unsigned int vpx_highbd_12_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance16x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance16x32 vpx_highbd_12_variance16x32_sse2
 
 unsigned int vpx_highbd_12_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance16x8)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_12_variance16x8 vpx_highbd_12_variance16x8_sse2
 
 unsigned int vpx_highbd_12_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance32x16)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance32x16 vpx_highbd_12_variance32x16_sse2
 
 unsigned int vpx_highbd_12_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance32x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance32x32 vpx_highbd_12_variance32x32_sse2
 
 unsigned int vpx_highbd_12_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance32x64)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance32x64 vpx_highbd_12_variance32x64_sse2
 
 unsigned int vpx_highbd_12_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x4 vpx_highbd_12_variance4x4_c
 
 unsigned int vpx_highbd_12_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x8 vpx_highbd_12_variance4x8_c
 
 unsigned int vpx_highbd_12_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance64x32)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance64x32 vpx_highbd_12_variance64x32_sse2
 
 unsigned int vpx_highbd_12_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance64x64)(const uint8_t* src_ptr,
-                                                        int source_stride,
-                                                        const uint8_t* ref_ptr,
-                                                        int ref_stride,
-                                                        unsigned int* sse);
+#define vpx_highbd_12_variance64x64 vpx_highbd_12_variance64x64_sse2
 
 unsigned int vpx_highbd_12_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_12_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance8x16)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_12_variance8x16 vpx_highbd_12_variance8x16_sse2
 
 unsigned int vpx_highbd_12_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance8x4 vpx_highbd_12_variance8x4_c
 
 unsigned int vpx_highbd_12_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_12_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_12_variance8x8)(const uint8_t* src_ptr,
-                                                      int source_stride,
-                                                      const uint8_t* ref_ptr,
-                                                      int ref_stride,
-                                                      unsigned int* sse);
+#define vpx_highbd_12_variance8x8 vpx_highbd_12_variance8x8_sse2
 
 void vpx_highbd_8_get16x16var_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse,
                                 int* sum);
-#define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_c
+void vpx_highbd_8_get16x16var_sse2(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   unsigned int* sse,
+                                   int* sum);
+#define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_sse2
 
 void vpx_highbd_8_get8x8var_c(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
                               int ref_stride,
                               unsigned int* sse,
                               int* sum);
-#define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_c
+void vpx_highbd_8_get8x8var_sse2(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse,
+                                 int* sum);
+#define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_sse2
 
 unsigned int vpx_highbd_8_mse16x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 unsigned int vpx_highbd_8_mse16x16_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
-                                        int recon_stride,
+                                        int ref_stride,
                                         unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_mse16x16)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  const uint8_t* ref_ptr,
-                                                  int recon_stride,
-                                                  unsigned int* sse);
+#define vpx_highbd_8_mse16x16 vpx_highbd_8_mse16x16_sse2
 
 unsigned int vpx_highbd_8_mse16x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse16x8 vpx_highbd_8_mse16x8_c
 
 unsigned int vpx_highbd_8_mse8x16_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse8x16 vpx_highbd_8_mse8x16_c
 
 unsigned int vpx_highbd_8_mse8x8_c(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
-                                   int recon_stride,
+                                   int ref_stride,
                                    unsigned int* sse);
 unsigned int vpx_highbd_8_mse8x8_sse2(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_mse8x8)(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                const uint8_t* ref_ptr,
-                                                int recon_stride,
-                                                unsigned int* sse);
+#define vpx_highbd_8_mse8x8 vpx_highbd_8_mse8x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x16 \
+  vpx_highbd_8_sub_pixel_avg_variance16x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x32 \
+  vpx_highbd_8_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance16x8 \
+  vpx_highbd_8_sub_pixel_avg_variance16x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x16 \
+  vpx_highbd_8_sub_pixel_avg_variance32x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x32 \
+  vpx_highbd_8_sub_pixel_avg_variance32x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance32x64 \
+  vpx_highbd_8_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -3051,9 +2536,9 @@
   vpx_highbd_8_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -3062,599 +2547,458 @@
   vpx_highbd_8_sub_pixel_avg_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance64x32 \
+  vpx_highbd_8_sub_pixel_avg_variance64x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance64x64 \
+  vpx_highbd_8_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x16 \
+  vpx_highbd_8_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
                                                   const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x4 \
+  vpx_highbd_8_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
                                                   const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_avg_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse,
-    const uint8_t* second_pred);
+#define vpx_highbd_8_sub_pixel_avg_variance8x8 \
+  vpx_highbd_8_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance16x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x16 \
+  vpx_highbd_8_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance16x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x32 \
+  vpx_highbd_8_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance16x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance16x8 \
+  vpx_highbd_8_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance32x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x16 \
+  vpx_highbd_8_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance32x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x32 \
+  vpx_highbd_8_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance32x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance32x64 \
+  vpx_highbd_8_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x4 vpx_highbd_8_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x8 vpx_highbd_8_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance64x32)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance64x32 \
+  vpx_highbd_8_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance64x64)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance64x64 \
+  vpx_highbd_8_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance8x16)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x16 \
+  vpx_highbd_8_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance8x4)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x4 \
+  vpx_highbd_8_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
-RTCD_EXTERN uint32_t (*vpx_highbd_8_sub_pixel_variance8x8)(
-    const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
-    const uint8_t* ref_ptr,
-    int ref_stride,
-    uint32_t* sse);
+#define vpx_highbd_8_sub_pixel_variance8x8 \
+  vpx_highbd_8_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_8_variance16x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance16x16)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance16x16 vpx_highbd_8_variance16x16_sse2
 
 unsigned int vpx_highbd_8_variance16x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance16x32)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance16x32 vpx_highbd_8_variance16x32_sse2
 
 unsigned int vpx_highbd_8_variance16x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance16x8)(const uint8_t* src_ptr,
-                                                      int source_stride,
-                                                      const uint8_t* ref_ptr,
-                                                      int ref_stride,
-                                                      unsigned int* sse);
+#define vpx_highbd_8_variance16x8 vpx_highbd_8_variance16x8_sse2
 
 unsigned int vpx_highbd_8_variance32x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance32x16)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance32x16 vpx_highbd_8_variance32x16_sse2
 
 unsigned int vpx_highbd_8_variance32x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance32x32)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance32x32 vpx_highbd_8_variance32x32_sse2
 
 unsigned int vpx_highbd_8_variance32x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x64_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance32x64)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance32x64 vpx_highbd_8_variance32x64_sse2
 
 unsigned int vpx_highbd_8_variance4x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x4 vpx_highbd_8_variance4x4_c
 
 unsigned int vpx_highbd_8_variance4x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x8 vpx_highbd_8_variance4x8_c
 
 unsigned int vpx_highbd_8_variance64x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance64x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance64x32)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance64x32 vpx_highbd_8_variance64x32_sse2
 
 unsigned int vpx_highbd_8_variance64x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance64x64_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance64x64)(const uint8_t* src_ptr,
-                                                       int source_stride,
-                                                       const uint8_t* ref_ptr,
-                                                       int ref_stride,
-                                                       unsigned int* sse);
+#define vpx_highbd_8_variance64x64 vpx_highbd_8_variance64x64_sse2
 
 unsigned int vpx_highbd_8_variance8x16_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_8_variance8x16_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance8x16)(const uint8_t* src_ptr,
-                                                      int source_stride,
-                                                      const uint8_t* ref_ptr,
-                                                      int ref_stride,
-                                                      unsigned int* sse);
+#define vpx_highbd_8_variance8x16 vpx_highbd_8_variance8x16_sse2
 
 unsigned int vpx_highbd_8_variance8x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance8x4 vpx_highbd_8_variance8x4_c
 
 unsigned int vpx_highbd_8_variance8x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 unsigned int vpx_highbd_8_variance8x8_sse2(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_highbd_8_variance8x8)(const uint8_t* src_ptr,
-                                                     int source_stride,
-                                                     const uint8_t* ref_ptr,
-                                                     int ref_stride,
-                                                     unsigned int* sse);
+#define vpx_highbd_8_variance8x8 vpx_highbd_8_variance8x8_sse2
 
-unsigned int vpx_highbd_avg_4x4_c(const uint8_t*, int p);
-unsigned int vpx_highbd_avg_4x4_sse2(const uint8_t*, int p);
-RTCD_EXTERN unsigned int (*vpx_highbd_avg_4x4)(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_4x4_c(const uint8_t* s8, int p);
+unsigned int vpx_highbd_avg_4x4_sse2(const uint8_t* s8, int p);
+#define vpx_highbd_avg_4x4 vpx_highbd_avg_4x4_sse2
 
-unsigned int vpx_highbd_avg_8x8_c(const uint8_t*, int p);
-unsigned int vpx_highbd_avg_8x8_sse2(const uint8_t*, int p);
-RTCD_EXTERN unsigned int (*vpx_highbd_avg_8x8)(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_8x8_c(const uint8_t* s8, int p);
+unsigned int vpx_highbd_avg_8x8_sse2(const uint8_t* s8, int p);
+#define vpx_highbd_avg_8x8 vpx_highbd_avg_8x8_sse2
 
 void vpx_highbd_comp_avg_pred_c(uint16_t* comp_pred,
                                 const uint16_t* pred,
@@ -3675,7 +3019,7 @@
                             int y_step_q4,
                             int w,
                             int h,
-                            int bps);
+                            int bd);
 void vpx_highbd_convolve8_avx2(const uint16_t* src,
                                ptrdiff_t src_stride,
                                uint16_t* dst,
@@ -3687,7 +3031,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8)(const uint16_t* src,
                                          ptrdiff_t src_stride,
                                          uint16_t* dst,
@@ -3699,7 +3043,7 @@
                                          int y_step_q4,
                                          int w,
                                          int h,
-                                         int bps);
+                                         int bd);
 
 void vpx_highbd_convolve8_avg_c(const uint16_t* src,
                                 ptrdiff_t src_stride,
@@ -3712,7 +3056,7 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 void vpx_highbd_convolve8_avg_avx2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3724,7 +3068,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg)(const uint16_t* src,
                                              ptrdiff_t src_stride,
                                              uint16_t* dst,
@@ -3736,7 +3080,7 @@
                                              int y_step_q4,
                                              int w,
                                              int h,
-                                             int bps);
+                                             int bd);
 
 void vpx_highbd_convolve8_avg_horiz_c(const uint16_t* src,
                                       ptrdiff_t src_stride,
@@ -3749,7 +3093,7 @@
                                       int y_step_q4,
                                       int w,
                                       int h,
-                                      int bps);
+                                      int bd);
 void vpx_highbd_convolve8_avg_horiz_avx2(const uint16_t* src,
                                          ptrdiff_t src_stride,
                                          uint16_t* dst,
@@ -3761,7 +3105,7 @@
                                          int y_step_q4,
                                          int w,
                                          int h,
-                                         int bps);
+                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg_horiz)(const uint16_t* src,
                                                    ptrdiff_t src_stride,
                                                    uint16_t* dst,
@@ -3773,7 +3117,7 @@
                                                    int y_step_q4,
                                                    int w,
                                                    int h,
-                                                   int bps);
+                                                   int bd);
 
 void vpx_highbd_convolve8_avg_vert_c(const uint16_t* src,
                                      ptrdiff_t src_stride,
@@ -3786,7 +3130,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 void vpx_highbd_convolve8_avg_vert_avx2(const uint16_t* src,
                                         ptrdiff_t src_stride,
                                         uint16_t* dst,
@@ -3798,7 +3142,7 @@
                                         int y_step_q4,
                                         int w,
                                         int h,
-                                        int bps);
+                                        int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg_vert)(const uint16_t* src,
                                                   ptrdiff_t src_stride,
                                                   uint16_t* dst,
@@ -3810,7 +3154,7 @@
                                                   int y_step_q4,
                                                   int w,
                                                   int h,
-                                                  int bps);
+                                                  int bd);
 
 void vpx_highbd_convolve8_horiz_c(const uint16_t* src,
                                   ptrdiff_t src_stride,
@@ -3823,7 +3167,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 void vpx_highbd_convolve8_horiz_avx2(const uint16_t* src,
                                      ptrdiff_t src_stride,
                                      uint16_t* dst,
@@ -3835,7 +3179,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_horiz)(const uint16_t* src,
                                                ptrdiff_t src_stride,
                                                uint16_t* dst,
@@ -3847,7 +3191,7 @@
                                                int y_step_q4,
                                                int w,
                                                int h,
-                                               int bps);
+                                               int bd);
 
 void vpx_highbd_convolve8_vert_c(const uint16_t* src,
                                  ptrdiff_t src_stride,
@@ -3860,7 +3204,7 @@
                                  int y_step_q4,
                                  int w,
                                  int h,
-                                 int bps);
+                                 int bd);
 void vpx_highbd_convolve8_vert_avx2(const uint16_t* src,
                                     ptrdiff_t src_stride,
                                     uint16_t* dst,
@@ -3872,7 +3216,7 @@
                                     int y_step_q4,
                                     int w,
                                     int h,
-                                    int bps);
+                                    int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_vert)(const uint16_t* src,
                                               ptrdiff_t src_stride,
                                               uint16_t* dst,
@@ -3884,7 +3228,7 @@
                                               int y_step_q4,
                                               int w,
                                               int h,
-                                              int bps);
+                                              int bd);
 
 void vpx_highbd_convolve_avg_c(const uint16_t* src,
                                ptrdiff_t src_stride,
@@ -3897,7 +3241,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 void vpx_highbd_convolve_avg_sse2(const uint16_t* src,
                                   ptrdiff_t src_stride,
                                   uint16_t* dst,
@@ -3909,7 +3253,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 void vpx_highbd_convolve_avg_avx2(const uint16_t* src,
                                   ptrdiff_t src_stride,
                                   uint16_t* dst,
@@ -3921,7 +3265,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve_avg)(const uint16_t* src,
                                             ptrdiff_t src_stride,
                                             uint16_t* dst,
@@ -3933,7 +3277,7 @@
                                             int y_step_q4,
                                             int w,
                                             int h,
-                                            int bps);
+                                            int bd);
 
 void vpx_highbd_convolve_copy_c(const uint16_t* src,
                                 ptrdiff_t src_stride,
@@ -3946,7 +3290,7 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 void vpx_highbd_convolve_copy_sse2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3958,7 +3302,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 void vpx_highbd_convolve_copy_avx2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3970,7 +3314,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve_copy)(const uint16_t* src,
                                              ptrdiff_t src_stride,
                                              uint16_t* dst,
@@ -3982,647 +3326,565 @@
                                              int y_step_q4,
                                              int w,
                                              int h,
-                                             int bps);
+                                             int bd);
 
 void vpx_highbd_d117_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d117_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d117_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d117_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d117_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d117_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_d117_predictor_4x4)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_d117_predictor_4x4 vpx_highbd_d117_predictor_4x4_sse2
 
 void vpx_highbd_d117_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d117_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d135_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d135_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d135_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d135_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d135_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d135_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_d135_predictor_4x4)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_d135_predictor_4x4 vpx_highbd_d135_predictor_4x4_sse2
 
 void vpx_highbd_d135_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d135_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d153_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d153_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d153_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d153_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d153_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d153_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_d153_predictor_4x4)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_d153_predictor_4x4 vpx_highbd_d153_predictor_4x4_sse2
 
 void vpx_highbd_d153_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d153_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d207_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d207_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d207_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d207_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d207_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d207_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_d207_predictor_4x4)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_d207_predictor_4x4 vpx_highbd_d207_predictor_4x4_sse2
 
 void vpx_highbd_d207_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d207_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d45_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d45_predictor_16x16_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_16x16)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d45_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d45_predictor_32x32_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_32x32)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d45_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d45_predictor_4x4_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_4x4)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_d45_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d45_predictor_8x8_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_8x8)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_d63_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d63_predictor_16x16_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_16x16)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d63_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d63_predictor_32x32_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_32x32)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d63_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d63_predictor_4x4_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_d63_predictor_4x4)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
-                                                 const uint16_t* above,
-                                                 const uint16_t* left,
-                                                 int bd);
+#define vpx_highbd_d63_predictor_4x4 vpx_highbd_d63_predictor_4x4_sse2
 
 void vpx_highbd_d63_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d63_predictor_8x8_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_8x8)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_dc_128_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_128_predictor_16x16_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_128_predictor_16x16)(uint16_t* dst,
-                                                      ptrdiff_t y_stride,
-                                                      const uint16_t* above,
-                                                      const uint16_t* left,
-                                                      int bd);
+#define vpx_highbd_dc_128_predictor_16x16 vpx_highbd_dc_128_predictor_16x16_sse2
 
 void vpx_highbd_dc_128_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_128_predictor_32x32_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_128_predictor_32x32)(uint16_t* dst,
-                                                      ptrdiff_t y_stride,
-                                                      const uint16_t* above,
-                                                      const uint16_t* left,
-                                                      int bd);
+#define vpx_highbd_dc_128_predictor_32x32 vpx_highbd_dc_128_predictor_32x32_sse2
 
 void vpx_highbd_dc_128_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_128_predictor_4x4_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_128_predictor_4x4)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
-                                                    const uint16_t* above,
-                                                    const uint16_t* left,
-                                                    int bd);
+#define vpx_highbd_dc_128_predictor_4x4 vpx_highbd_dc_128_predictor_4x4_sse2
 
 void vpx_highbd_dc_128_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_128_predictor_8x8_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_128_predictor_8x8)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
-                                                    const uint16_t* above,
-                                                    const uint16_t* left,
-                                                    int bd);
+#define vpx_highbd_dc_128_predictor_8x8 vpx_highbd_dc_128_predictor_8x8_sse2
 
 void vpx_highbd_dc_left_predictor_16x16_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 void vpx_highbd_dc_left_predictor_16x16_sse2(uint16_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint16_t* above,
                                              const uint16_t* left,
                                              int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_left_predictor_16x16)(uint16_t* dst,
-                                                       ptrdiff_t y_stride,
-                                                       const uint16_t* above,
-                                                       const uint16_t* left,
-                                                       int bd);
+#define vpx_highbd_dc_left_predictor_16x16 \
+  vpx_highbd_dc_left_predictor_16x16_sse2
 
 void vpx_highbd_dc_left_predictor_32x32_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 void vpx_highbd_dc_left_predictor_32x32_sse2(uint16_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint16_t* above,
                                              const uint16_t* left,
                                              int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_left_predictor_32x32)(uint16_t* dst,
-                                                       ptrdiff_t y_stride,
-                                                       const uint16_t* above,
-                                                       const uint16_t* left,
-                                                       int bd);
+#define vpx_highbd_dc_left_predictor_32x32 \
+  vpx_highbd_dc_left_predictor_32x32_sse2
 
 void vpx_highbd_dc_left_predictor_4x4_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 void vpx_highbd_dc_left_predictor_4x4_sse2(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_left_predictor_4x4)(uint16_t* dst,
-                                                     ptrdiff_t y_stride,
-                                                     const uint16_t* above,
-                                                     const uint16_t* left,
-                                                     int bd);
+#define vpx_highbd_dc_left_predictor_4x4 vpx_highbd_dc_left_predictor_4x4_sse2
 
 void vpx_highbd_dc_left_predictor_8x8_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 void vpx_highbd_dc_left_predictor_8x8_sse2(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_left_predictor_8x8)(uint16_t* dst,
-                                                     ptrdiff_t y_stride,
-                                                     const uint16_t* above,
-                                                     const uint16_t* left,
-                                                     int bd);
+#define vpx_highbd_dc_left_predictor_8x8 vpx_highbd_dc_left_predictor_8x8_sse2
 
 void vpx_highbd_dc_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_dc_predictor_16x16_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_predictor_16x16)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_dc_predictor_16x16 vpx_highbd_dc_predictor_16x16_sse2
 
 void vpx_highbd_dc_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_dc_predictor_32x32_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_predictor_32x32)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_dc_predictor_32x32 vpx_highbd_dc_predictor_32x32_sse2
 
 void vpx_highbd_dc_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_dc_predictor_4x4_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_predictor_4x4)(uint16_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint16_t* above,
-                                                const uint16_t* left,
-                                                int bd);
+#define vpx_highbd_dc_predictor_4x4 vpx_highbd_dc_predictor_4x4_sse2
 
 void vpx_highbd_dc_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_dc_predictor_8x8_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_predictor_8x8)(uint16_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint16_t* above,
-                                                const uint16_t* left,
-                                                int bd);
+#define vpx_highbd_dc_predictor_8x8 vpx_highbd_dc_predictor_8x8_sse2
 
 void vpx_highbd_dc_top_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_top_predictor_16x16_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_top_predictor_16x16)(uint16_t* dst,
-                                                      ptrdiff_t y_stride,
-                                                      const uint16_t* above,
-                                                      const uint16_t* left,
-                                                      int bd);
+#define vpx_highbd_dc_top_predictor_16x16 vpx_highbd_dc_top_predictor_16x16_sse2
 
 void vpx_highbd_dc_top_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_top_predictor_32x32_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_top_predictor_32x32)(uint16_t* dst,
-                                                      ptrdiff_t y_stride,
-                                                      const uint16_t* above,
-                                                      const uint16_t* left,
-                                                      int bd);
+#define vpx_highbd_dc_top_predictor_32x32 vpx_highbd_dc_top_predictor_32x32_sse2
 
 void vpx_highbd_dc_top_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_top_predictor_4x4_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_top_predictor_4x4)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
-                                                    const uint16_t* above,
-                                                    const uint16_t* left,
-                                                    int bd);
+#define vpx_highbd_dc_top_predictor_4x4 vpx_highbd_dc_top_predictor_4x4_sse2
 
 void vpx_highbd_dc_top_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_top_predictor_8x8_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
-RTCD_EXTERN void (*vpx_highbd_dc_top_predictor_8x8)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
-                                                    const uint16_t* above,
-                                                    const uint16_t* left,
-                                                    int bd);
+#define vpx_highbd_dc_top_predictor_8x8 vpx_highbd_dc_top_predictor_8x8_sse2
 
 void vpx_highbd_fdct16x16_c(const int16_t* input,
                             tran_low_t* output,
@@ -4630,9 +3892,7 @@
 void vpx_highbd_fdct16x16_sse2(const int16_t* input,
                                tran_low_t* output,
                                int stride);
-RTCD_EXTERN void (*vpx_highbd_fdct16x16)(const int16_t* input,
-                                         tran_low_t* output,
-                                         int stride);
+#define vpx_highbd_fdct16x16 vpx_highbd_fdct16x16_sse2
 
 void vpx_highbd_fdct16x16_1_c(const int16_t* input,
                               tran_low_t* output,
@@ -4645,9 +3905,7 @@
 void vpx_highbd_fdct32x32_sse2(const int16_t* input,
                                tran_low_t* output,
                                int stride);
-RTCD_EXTERN void (*vpx_highbd_fdct32x32)(const int16_t* input,
-                                         tran_low_t* output,
-                                         int stride);
+#define vpx_highbd_fdct32x32 vpx_highbd_fdct32x32_sse2
 
 void vpx_highbd_fdct32x32_1_c(const int16_t* input,
                               tran_low_t* output,
@@ -4660,25 +3918,19 @@
 void vpx_highbd_fdct32x32_rd_sse2(const int16_t* input,
                                   tran_low_t* output,
                                   int stride);
-RTCD_EXTERN void (*vpx_highbd_fdct32x32_rd)(const int16_t* input,
-                                            tran_low_t* output,
-                                            int stride);
+#define vpx_highbd_fdct32x32_rd vpx_highbd_fdct32x32_rd_sse2
 
 void vpx_highbd_fdct4x4_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_highbd_fdct4x4_sse2(const int16_t* input,
                              tran_low_t* output,
                              int stride);
-RTCD_EXTERN void (*vpx_highbd_fdct4x4)(const int16_t* input,
-                                       tran_low_t* output,
-                                       int stride);
+#define vpx_highbd_fdct4x4 vpx_highbd_fdct4x4_sse2
 
 void vpx_highbd_fdct8x8_c(const int16_t* input, tran_low_t* output, int stride);
 void vpx_highbd_fdct8x8_sse2(const int16_t* input,
                              tran_low_t* output,
                              int stride);
-RTCD_EXTERN void (*vpx_highbd_fdct8x8)(const int16_t* input,
-                                       tran_low_t* output,
-                                       int stride);
+#define vpx_highbd_fdct8x8 vpx_highbd_fdct8x8_sse2
 
 void vpx_highbd_fdct8x8_1_c(const int16_t* input,
                             tran_low_t* output,
@@ -4686,68 +3938,82 @@
 #define vpx_highbd_fdct8x8_1 vpx_highbd_fdct8x8_1_c
 
 void vpx_highbd_h_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_h_predictor_16x16_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_h_predictor_16x16)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
-                                                 const uint16_t* above,
-                                                 const uint16_t* left,
-                                                 int bd);
+#define vpx_highbd_h_predictor_16x16 vpx_highbd_h_predictor_16x16_sse2
 
 void vpx_highbd_h_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_h_predictor_32x32_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_h_predictor_32x32)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
-                                                 const uint16_t* above,
-                                                 const uint16_t* left,
-                                                 int bd);
+#define vpx_highbd_h_predictor_32x32 vpx_highbd_h_predictor_32x32_sse2
 
 void vpx_highbd_h_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_h_predictor_4x4_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_h_predictor_4x4)(uint16_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint16_t* above,
-                                               const uint16_t* left,
-                                               int bd);
+#define vpx_highbd_h_predictor_4x4 vpx_highbd_h_predictor_4x4_sse2
 
 void vpx_highbd_h_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_h_predictor_8x8_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_h_predictor_8x8)(uint16_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint16_t* above,
-                                               const uint16_t* left,
-                                               int bd);
+#define vpx_highbd_h_predictor_8x8 vpx_highbd_h_predictor_8x8_sse2
+
+void vpx_highbd_hadamard_16x16_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+void vpx_highbd_hadamard_16x16_avx2(const int16_t* src_diff,
+                                    ptrdiff_t src_stride,
+                                    tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_16x16)(const int16_t* src_diff,
+                                              ptrdiff_t src_stride,
+                                              tran_low_t* coeff);
+
+void vpx_highbd_hadamard_32x32_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+void vpx_highbd_hadamard_32x32_avx2(const int16_t* src_diff,
+                                    ptrdiff_t src_stride,
+                                    tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_32x32)(const int16_t* src_diff,
+                                              ptrdiff_t src_stride,
+                                              tran_low_t* coeff);
+
+void vpx_highbd_hadamard_8x8_c(const int16_t* src_diff,
+                               ptrdiff_t src_stride,
+                               tran_low_t* coeff);
+void vpx_highbd_hadamard_8x8_avx2(const int16_t* src_diff,
+                                  ptrdiff_t src_stride,
+                                  tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_8x8)(const int16_t* src_diff,
+                                            ptrdiff_t src_stride,
+                                            tran_low_t* coeff);
 
 void vpx_highbd_idct16x16_10_add_c(const tran_low_t* input,
                                    uint16_t* dest,
@@ -4774,10 +4040,7 @@
                                      uint16_t* dest,
                                      int stride,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_idct16x16_1_add)(const tran_low_t* input,
-                                               uint16_t* dest,
-                                               int stride,
-                                               int bd);
+#define vpx_highbd_idct16x16_1_add vpx_highbd_idct16x16_1_add_sse2
 
 void vpx_highbd_idct16x16_256_add_c(const tran_low_t* input,
                                     uint16_t* dest,
@@ -4855,10 +4118,7 @@
                                      uint16_t* dest,
                                      int stride,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_idct32x32_1_add)(const tran_low_t* input,
-                                               uint16_t* dest,
-                                               int stride,
-                                               int bd);
+#define vpx_highbd_idct32x32_1_add vpx_highbd_idct32x32_1_add_sse2
 
 void vpx_highbd_idct32x32_34_add_c(const tran_low_t* input,
                                    uint16_t* dest,
@@ -4902,10 +4162,7 @@
                                    uint16_t* dest,
                                    int stride,
                                    int bd);
-RTCD_EXTERN void (*vpx_highbd_idct4x4_1_add)(const tran_low_t* input,
-                                             uint16_t* dest,
-                                             int stride,
-                                             int bd);
+#define vpx_highbd_idct4x4_1_add vpx_highbd_idct4x4_1_add_sse2
 
 void vpx_highbd_idct8x8_12_add_c(const tran_low_t* input,
                                  uint16_t* dest,
@@ -4932,10 +4189,7 @@
                                    uint16_t* dest,
                                    int stride,
                                    int bd);
-RTCD_EXTERN void (*vpx_highbd_idct8x8_1_add)(const tran_low_t* input,
-                                             uint16_t* dest,
-                                             int stride,
-                                             int bd);
+#define vpx_highbd_idct8x8_1_add vpx_highbd_idct8x8_1_add_sse2
 
 void vpx_highbd_idct8x8_64_add_c(const tran_low_t* input,
                                  uint16_t* dest,
@@ -4978,12 +4232,7 @@
                                        const uint8_t* limit,
                                        const uint8_t* thresh,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_16)(uint16_t* s,
-                                                 int pitch,
-                                                 const uint8_t* blimit,
-                                                 const uint8_t* limit,
-                                                 const uint8_t* thresh,
-                                                 int bd);
+#define vpx_highbd_lpf_horizontal_16 vpx_highbd_lpf_horizontal_16_sse2
 
 void vpx_highbd_lpf_horizontal_16_dual_c(uint16_t* s,
                                          int pitch,
@@ -4997,12 +4246,7 @@
                                             const uint8_t* limit,
                                             const uint8_t* thresh,
                                             int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_16_dual)(uint16_t* s,
-                                                      int pitch,
-                                                      const uint8_t* blimit,
-                                                      const uint8_t* limit,
-                                                      const uint8_t* thresh,
-                                                      int bd);
+#define vpx_highbd_lpf_horizontal_16_dual vpx_highbd_lpf_horizontal_16_dual_sse2
 
 void vpx_highbd_lpf_horizontal_4_c(uint16_t* s,
                                    int pitch,
@@ -5016,12 +4260,7 @@
                                       const uint8_t* limit,
                                       const uint8_t* thresh,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_4)(uint16_t* s,
-                                                int pitch,
-                                                const uint8_t* blimit,
-                                                const uint8_t* limit,
-                                                const uint8_t* thresh,
-                                                int bd);
+#define vpx_highbd_lpf_horizontal_4 vpx_highbd_lpf_horizontal_4_sse2
 
 void vpx_highbd_lpf_horizontal_4_dual_c(uint16_t* s,
                                         int pitch,
@@ -5041,15 +4280,7 @@
                                            const uint8_t* limit1,
                                            const uint8_t* thresh1,
                                            int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_4_dual)(uint16_t* s,
-                                                     int pitch,
-                                                     const uint8_t* blimit0,
-                                                     const uint8_t* limit0,
-                                                     const uint8_t* thresh0,
-                                                     const uint8_t* blimit1,
-                                                     const uint8_t* limit1,
-                                                     const uint8_t* thresh1,
-                                                     int bd);
+#define vpx_highbd_lpf_horizontal_4_dual vpx_highbd_lpf_horizontal_4_dual_sse2
 
 void vpx_highbd_lpf_horizontal_8_c(uint16_t* s,
                                    int pitch,
@@ -5063,12 +4294,7 @@
                                       const uint8_t* limit,
                                       const uint8_t* thresh,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_8)(uint16_t* s,
-                                                int pitch,
-                                                const uint8_t* blimit,
-                                                const uint8_t* limit,
-                                                const uint8_t* thresh,
-                                                int bd);
+#define vpx_highbd_lpf_horizontal_8 vpx_highbd_lpf_horizontal_8_sse2
 
 void vpx_highbd_lpf_horizontal_8_dual_c(uint16_t* s,
                                         int pitch,
@@ -5088,15 +4314,7 @@
                                            const uint8_t* limit1,
                                            const uint8_t* thresh1,
                                            int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_horizontal_8_dual)(uint16_t* s,
-                                                     int pitch,
-                                                     const uint8_t* blimit0,
-                                                     const uint8_t* limit0,
-                                                     const uint8_t* thresh0,
-                                                     const uint8_t* blimit1,
-                                                     const uint8_t* limit1,
-                                                     const uint8_t* thresh1,
-                                                     int bd);
+#define vpx_highbd_lpf_horizontal_8_dual vpx_highbd_lpf_horizontal_8_dual_sse2
 
 void vpx_highbd_lpf_vertical_16_c(uint16_t* s,
                                   int pitch,
@@ -5110,12 +4328,7 @@
                                      const uint8_t* limit,
                                      const uint8_t* thresh,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_16)(uint16_t* s,
-                                               int pitch,
-                                               const uint8_t* blimit,
-                                               const uint8_t* limit,
-                                               const uint8_t* thresh,
-                                               int bd);
+#define vpx_highbd_lpf_vertical_16 vpx_highbd_lpf_vertical_16_sse2
 
 void vpx_highbd_lpf_vertical_16_dual_c(uint16_t* s,
                                        int pitch,
@@ -5129,12 +4342,7 @@
                                           const uint8_t* limit,
                                           const uint8_t* thresh,
                                           int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_16_dual)(uint16_t* s,
-                                                    int pitch,
-                                                    const uint8_t* blimit,
-                                                    const uint8_t* limit,
-                                                    const uint8_t* thresh,
-                                                    int bd);
+#define vpx_highbd_lpf_vertical_16_dual vpx_highbd_lpf_vertical_16_dual_sse2
 
 void vpx_highbd_lpf_vertical_4_c(uint16_t* s,
                                  int pitch,
@@ -5148,12 +4356,7 @@
                                     const uint8_t* limit,
                                     const uint8_t* thresh,
                                     int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_4)(uint16_t* s,
-                                              int pitch,
-                                              const uint8_t* blimit,
-                                              const uint8_t* limit,
-                                              const uint8_t* thresh,
-                                              int bd);
+#define vpx_highbd_lpf_vertical_4 vpx_highbd_lpf_vertical_4_sse2
 
 void vpx_highbd_lpf_vertical_4_dual_c(uint16_t* s,
                                       int pitch,
@@ -5173,15 +4376,7 @@
                                          const uint8_t* limit1,
                                          const uint8_t* thresh1,
                                          int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_4_dual)(uint16_t* s,
-                                                   int pitch,
-                                                   const uint8_t* blimit0,
-                                                   const uint8_t* limit0,
-                                                   const uint8_t* thresh0,
-                                                   const uint8_t* blimit1,
-                                                   const uint8_t* limit1,
-                                                   const uint8_t* thresh1,
-                                                   int bd);
+#define vpx_highbd_lpf_vertical_4_dual vpx_highbd_lpf_vertical_4_dual_sse2
 
 void vpx_highbd_lpf_vertical_8_c(uint16_t* s,
                                  int pitch,
@@ -5195,12 +4390,7 @@
                                     const uint8_t* limit,
                                     const uint8_t* thresh,
                                     int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_8)(uint16_t* s,
-                                              int pitch,
-                                              const uint8_t* blimit,
-                                              const uint8_t* limit,
-                                              const uint8_t* thresh,
-                                              int bd);
+#define vpx_highbd_lpf_vertical_8 vpx_highbd_lpf_vertical_8_sse2
 
 void vpx_highbd_lpf_vertical_8_dual_c(uint16_t* s,
                                       int pitch,
@@ -5220,19 +4410,11 @@
                                          const uint8_t* limit1,
                                          const uint8_t* thresh1,
                                          int bd);
-RTCD_EXTERN void (*vpx_highbd_lpf_vertical_8_dual)(uint16_t* s,
-                                                   int pitch,
-                                                   const uint8_t* blimit0,
-                                                   const uint8_t* limit0,
-                                                   const uint8_t* thresh0,
-                                                   const uint8_t* blimit1,
-                                                   const uint8_t* limit1,
-                                                   const uint8_t* thresh1,
-                                                   int bd);
+#define vpx_highbd_lpf_vertical_8_dual vpx_highbd_lpf_vertical_8_dual_sse2
 
-void vpx_highbd_minmax_8x8_c(const uint8_t* s,
+void vpx_highbd_minmax_8x8_c(const uint8_t* s8,
                              int p,
-                             const uint8_t* d,
+                             const uint8_t* d8,
                              int dp,
                              int* min,
                              int* max);
@@ -5264,19 +4446,7 @@
                                 uint16_t* eob_ptr,
                                 const int16_t* scan,
                                 const int16_t* iscan);
-RTCD_EXTERN void (*vpx_highbd_quantize_b)(const tran_low_t* coeff_ptr,
-                                          intptr_t n_coeffs,
-                                          int skip_block,
-                                          const int16_t* zbin_ptr,
-                                          const int16_t* round_ptr,
-                                          const int16_t* quant_ptr,
-                                          const int16_t* quant_shift_ptr,
-                                          tran_low_t* qcoeff_ptr,
-                                          tran_low_t* dqcoeff_ptr,
-                                          const int16_t* dequant_ptr,
-                                          uint16_t* eob_ptr,
-                                          const int16_t* scan,
-                                          const int16_t* iscan);
+#define vpx_highbd_quantize_b vpx_highbd_quantize_b_sse2
 
 void vpx_highbd_quantize_b_32x32_c(const tran_low_t* coeff_ptr,
                                    intptr_t n_coeffs,
@@ -5304,19 +4474,7 @@
                                       uint16_t* eob_ptr,
                                       const int16_t* scan,
                                       const int16_t* iscan);
-RTCD_EXTERN void (*vpx_highbd_quantize_b_32x32)(const tran_low_t* coeff_ptr,
-                                                intptr_t n_coeffs,
-                                                int skip_block,
-                                                const int16_t* zbin_ptr,
-                                                const int16_t* round_ptr,
-                                                const int16_t* quant_ptr,
-                                                const int16_t* quant_shift_ptr,
-                                                tran_low_t* qcoeff_ptr,
-                                                tran_low_t* dqcoeff_ptr,
-                                                const int16_t* dequant_ptr,
-                                                uint16_t* eob_ptr,
-                                                const int16_t* scan,
-                                                const int16_t* iscan);
+#define vpx_highbd_quantize_b_32x32 vpx_highbd_quantize_b_32x32_sse2
 
 unsigned int vpx_highbd_sad16x16_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5326,10 +4484,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x16)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad16x16 vpx_highbd_sad16x16_sse2
 
 unsigned int vpx_highbd_sad16x16_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5341,27 +4496,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x16_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad16x16_avg vpx_highbd_sad16x16_avg_sse2
 
 void vpx_highbd_sad16x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad16x16x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad16x16x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad16x16x4d vpx_highbd_sad16x16x4d_sse2
 
 unsigned int vpx_highbd_sad16x32_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5371,10 +4518,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x32)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad16x32 vpx_highbd_sad16x32_sse2
 
 unsigned int vpx_highbd_sad16x32_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5386,27 +4530,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x32_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad16x32_avg vpx_highbd_sad16x32_avg_sse2
 
 void vpx_highbd_sad16x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad16x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad16x32x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad16x32x4d vpx_highbd_sad16x32x4d_sse2
 
 unsigned int vpx_highbd_sad16x8_c(const uint8_t* src_ptr,
                                   int src_stride,
@@ -5416,10 +4552,7 @@
                                      int src_stride,
                                      const uint8_t* ref_ptr,
                                      int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x8)(const uint8_t* src_ptr,
-                                               int src_stride,
-                                               const uint8_t* ref_ptr,
-                                               int ref_stride);
+#define vpx_highbd_sad16x8 vpx_highbd_sad16x8_sse2
 
 unsigned int vpx_highbd_sad16x8_avg_c(const uint8_t* src_ptr,
                                       int src_stride,
@@ -5431,27 +4564,19 @@
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad16x8_avg)(const uint8_t* src_ptr,
-                                                   int src_stride,
-                                                   const uint8_t* ref_ptr,
-                                                   int ref_stride,
-                                                   const uint8_t* second_pred);
+#define vpx_highbd_sad16x8_avg vpx_highbd_sad16x8_avg_sse2
 
 void vpx_highbd_sad16x8x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 void vpx_highbd_sad16x8x4d_sse2(const uint8_t* src_ptr,
                                 int src_stride,
-                                const uint8_t* const ref_ptr[],
+                                const uint8_t* const ref_array[],
                                 int ref_stride,
                                 uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad16x8x4d)(const uint8_t* src_ptr,
-                                          int src_stride,
-                                          const uint8_t* const ref_ptr[],
-                                          int ref_stride,
-                                          uint32_t* sad_array);
+#define vpx_highbd_sad16x8x4d vpx_highbd_sad16x8x4d_sse2
 
 unsigned int vpx_highbd_sad32x16_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5461,10 +4586,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x16)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad32x16 vpx_highbd_sad32x16_sse2
 
 unsigned int vpx_highbd_sad32x16_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5476,27 +4598,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x16_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad32x16_avg vpx_highbd_sad32x16_avg_sse2
 
 void vpx_highbd_sad32x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x16x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad32x16x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad32x16x4d vpx_highbd_sad32x16x4d_sse2
 
 unsigned int vpx_highbd_sad32x32_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5506,10 +4620,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x32)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad32x32 vpx_highbd_sad32x32_sse2
 
 unsigned int vpx_highbd_sad32x32_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5521,27 +4632,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x32_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad32x32_avg vpx_highbd_sad32x32_avg_sse2
 
 void vpx_highbd_sad32x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad32x32x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad32x32x4d vpx_highbd_sad32x32x4d_sse2
 
 unsigned int vpx_highbd_sad32x64_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5551,10 +4654,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x64)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad32x64 vpx_highbd_sad32x64_sse2
 
 unsigned int vpx_highbd_sad32x64_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5566,27 +4666,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad32x64_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad32x64_avg vpx_highbd_sad32x64_avg_sse2
 
 void vpx_highbd_sad32x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x64x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad32x64x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad32x64x4d vpx_highbd_sad32x64x4d_sse2
 
 unsigned int vpx_highbd_sad4x4_c(const uint8_t* src_ptr,
                                  int src_stride,
@@ -5603,19 +4695,15 @@
 
 void vpx_highbd_sad4x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad4x4x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad4x4x4d)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* const ref_ptr[],
-                                         int ref_stride,
-                                         uint32_t* sad_array);
+#define vpx_highbd_sad4x4x4d vpx_highbd_sad4x4x4d_sse2
 
 unsigned int vpx_highbd_sad4x8_c(const uint8_t* src_ptr,
                                  int src_stride,
@@ -5632,19 +4720,15 @@
 
 void vpx_highbd_sad4x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad4x8x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad4x8x4d)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* const ref_ptr[],
-                                         int ref_stride,
-                                         uint32_t* sad_array);
+#define vpx_highbd_sad4x8x4d vpx_highbd_sad4x8x4d_sse2
 
 unsigned int vpx_highbd_sad64x32_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5654,10 +4738,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad64x32)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad64x32 vpx_highbd_sad64x32_sse2
 
 unsigned int vpx_highbd_sad64x32_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5669,27 +4750,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad64x32_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad64x32_avg vpx_highbd_sad64x32_avg_sse2
 
 void vpx_highbd_sad64x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad64x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad64x32x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad64x32x4d vpx_highbd_sad64x32x4d_sse2
 
 unsigned int vpx_highbd_sad64x64_c(const uint8_t* src_ptr,
                                    int src_stride,
@@ -5699,10 +4772,7 @@
                                       int src_stride,
                                       const uint8_t* ref_ptr,
                                       int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad64x64)(const uint8_t* src_ptr,
-                                                int src_stride,
-                                                const uint8_t* ref_ptr,
-                                                int ref_stride);
+#define vpx_highbd_sad64x64 vpx_highbd_sad64x64_sse2
 
 unsigned int vpx_highbd_sad64x64_avg_c(const uint8_t* src_ptr,
                                        int src_stride,
@@ -5714,27 +4784,19 @@
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad64x64_avg)(const uint8_t* src_ptr,
-                                                    int src_stride,
-                                                    const uint8_t* ref_ptr,
-                                                    int ref_stride,
-                                                    const uint8_t* second_pred);
+#define vpx_highbd_sad64x64_avg vpx_highbd_sad64x64_avg_sse2
 
 void vpx_highbd_sad64x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad64x64x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad64x64x4d)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* const ref_ptr[],
-                                           int ref_stride,
-                                           uint32_t* sad_array);
+#define vpx_highbd_sad64x64x4d vpx_highbd_sad64x64x4d_sse2
 
 unsigned int vpx_highbd_sad8x16_c(const uint8_t* src_ptr,
                                   int src_stride,
@@ -5744,10 +4806,7 @@
                                      int src_stride,
                                      const uint8_t* ref_ptr,
                                      int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x16)(const uint8_t* src_ptr,
-                                               int src_stride,
-                                               const uint8_t* ref_ptr,
-                                               int ref_stride);
+#define vpx_highbd_sad8x16 vpx_highbd_sad8x16_sse2
 
 unsigned int vpx_highbd_sad8x16_avg_c(const uint8_t* src_ptr,
                                       int src_stride,
@@ -5759,27 +4818,19 @@
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x16_avg)(const uint8_t* src_ptr,
-                                                   int src_stride,
-                                                   const uint8_t* ref_ptr,
-                                                   int ref_stride,
-                                                   const uint8_t* second_pred);
+#define vpx_highbd_sad8x16_avg vpx_highbd_sad8x16_avg_sse2
 
 void vpx_highbd_sad8x16x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 void vpx_highbd_sad8x16x4d_sse2(const uint8_t* src_ptr,
                                 int src_stride,
-                                const uint8_t* const ref_ptr[],
+                                const uint8_t* const ref_array[],
                                 int ref_stride,
                                 uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad8x16x4d)(const uint8_t* src_ptr,
-                                          int src_stride,
-                                          const uint8_t* const ref_ptr[],
-                                          int ref_stride,
-                                          uint32_t* sad_array);
+#define vpx_highbd_sad8x16x4d vpx_highbd_sad8x16x4d_sse2
 
 unsigned int vpx_highbd_sad8x4_c(const uint8_t* src_ptr,
                                  int src_stride,
@@ -5789,10 +4840,7 @@
                                     int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x4)(const uint8_t* src_ptr,
-                                              int src_stride,
-                                              const uint8_t* ref_ptr,
-                                              int ref_stride);
+#define vpx_highbd_sad8x4 vpx_highbd_sad8x4_sse2
 
 unsigned int vpx_highbd_sad8x4_avg_c(const uint8_t* src_ptr,
                                      int src_stride,
@@ -5804,27 +4852,19 @@
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x4_avg)(const uint8_t* src_ptr,
-                                                  int src_stride,
-                                                  const uint8_t* ref_ptr,
-                                                  int ref_stride,
-                                                  const uint8_t* second_pred);
+#define vpx_highbd_sad8x4_avg vpx_highbd_sad8x4_avg_sse2
 
 void vpx_highbd_sad8x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad8x4x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad8x4x4d)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* const ref_ptr[],
-                                         int ref_stride,
-                                         uint32_t* sad_array);
+#define vpx_highbd_sad8x4x4d vpx_highbd_sad8x4x4d_sse2
 
 unsigned int vpx_highbd_sad8x8_c(const uint8_t* src_ptr,
                                  int src_stride,
@@ -5834,10 +4874,7 @@
                                     int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x8)(const uint8_t* src_ptr,
-                                              int src_stride,
-                                              const uint8_t* ref_ptr,
-                                              int ref_stride);
+#define vpx_highbd_sad8x8 vpx_highbd_sad8x8_sse2
 
 unsigned int vpx_highbd_sad8x8_avg_c(const uint8_t* src_ptr,
                                      int src_stride,
@@ -5849,182 +4886,142 @@
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_highbd_sad8x8_avg)(const uint8_t* src_ptr,
-                                                  int src_stride,
-                                                  const uint8_t* ref_ptr,
-                                                  int ref_stride,
-                                                  const uint8_t* second_pred);
+#define vpx_highbd_sad8x8_avg vpx_highbd_sad8x8_avg_sse2
 
 void vpx_highbd_sad8x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad8x8x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_highbd_sad8x8x4d)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* const ref_ptr[],
-                                         int ref_stride,
-                                         uint32_t* sad_array);
+#define vpx_highbd_sad8x8x4d vpx_highbd_sad8x8x4d_sse2
+
+int vpx_highbd_satd_c(const tran_low_t* coeff, int length);
+int vpx_highbd_satd_avx2(const tran_low_t* coeff, int length);
+RTCD_EXTERN int (*vpx_highbd_satd)(const tran_low_t* coeff, int length);
 
 void vpx_highbd_subtract_block_c(int rows,
                                  int cols,
                                  int16_t* diff_ptr,
                                  ptrdiff_t diff_stride,
-                                 const uint8_t* src_ptr,
+                                 const uint8_t* src8_ptr,
                                  ptrdiff_t src_stride,
-                                 const uint8_t* pred_ptr,
+                                 const uint8_t* pred8_ptr,
                                  ptrdiff_t pred_stride,
                                  int bd);
 #define vpx_highbd_subtract_block vpx_highbd_subtract_block_c
 
 void vpx_highbd_tm_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_tm_predictor_16x16_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_tm_predictor_16x16)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_tm_predictor_16x16 vpx_highbd_tm_predictor_16x16_sse2
 
 void vpx_highbd_tm_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_tm_predictor_32x32_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
-RTCD_EXTERN void (*vpx_highbd_tm_predictor_32x32)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
-                                                  const uint16_t* above,
-                                                  const uint16_t* left,
-                                                  int bd);
+#define vpx_highbd_tm_predictor_32x32 vpx_highbd_tm_predictor_32x32_sse2
 
 void vpx_highbd_tm_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_tm_predictor_4x4_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_tm_predictor_4x4)(uint16_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint16_t* above,
-                                                const uint16_t* left,
-                                                int bd);
+#define vpx_highbd_tm_predictor_4x4 vpx_highbd_tm_predictor_4x4_sse2
 
 void vpx_highbd_tm_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_tm_predictor_8x8_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
-RTCD_EXTERN void (*vpx_highbd_tm_predictor_8x8)(uint16_t* dst,
-                                                ptrdiff_t y_stride,
-                                                const uint16_t* above,
-                                                const uint16_t* left,
-                                                int bd);
+#define vpx_highbd_tm_predictor_8x8 vpx_highbd_tm_predictor_8x8_sse2
 
 void vpx_highbd_v_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_v_predictor_16x16_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_v_predictor_16x16)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
-                                                 const uint16_t* above,
-                                                 const uint16_t* left,
-                                                 int bd);
+#define vpx_highbd_v_predictor_16x16 vpx_highbd_v_predictor_16x16_sse2
 
 void vpx_highbd_v_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_v_predictor_32x32_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
-RTCD_EXTERN void (*vpx_highbd_v_predictor_32x32)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
-                                                 const uint16_t* above,
-                                                 const uint16_t* left,
-                                                 int bd);
+#define vpx_highbd_v_predictor_32x32 vpx_highbd_v_predictor_32x32_sse2
 
 void vpx_highbd_v_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_v_predictor_4x4_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_v_predictor_4x4)(uint16_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint16_t* above,
-                                               const uint16_t* left,
-                                               int bd);
+#define vpx_highbd_v_predictor_4x4 vpx_highbd_v_predictor_4x4_sse2
 
 void vpx_highbd_v_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_v_predictor_8x8_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
-RTCD_EXTERN void (*vpx_highbd_v_predictor_8x8)(uint16_t* dst,
-                                               ptrdiff_t y_stride,
-                                               const uint16_t* above,
-                                               const uint16_t* left,
-                                               int bd);
+#define vpx_highbd_v_predictor_8x8 vpx_highbd_v_predictor_8x8_sse2
 
 void vpx_idct16x16_10_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct16x16_10_add_sse2(const tran_low_t* input,
                                uint8_t* dest,
                                int stride);
-RTCD_EXTERN void (*vpx_idct16x16_10_add)(const tran_low_t* input,
-                                         uint8_t* dest,
-                                         int stride);
+#define vpx_idct16x16_10_add vpx_idct16x16_10_add_sse2
 
 void vpx_idct16x16_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct16x16_1_add_sse2(const tran_low_t* input,
                               uint8_t* dest,
                               int stride);
-RTCD_EXTERN void (*vpx_idct16x16_1_add)(const tran_low_t* input,
-                                        uint8_t* dest,
-                                        int stride);
+#define vpx_idct16x16_1_add vpx_idct16x16_1_add_sse2
 
 void vpx_idct16x16_256_add_c(const tran_low_t* input,
                              uint8_t* dest,
@@ -6032,17 +5029,13 @@
 void vpx_idct16x16_256_add_sse2(const tran_low_t* input,
                                 uint8_t* dest,
                                 int stride);
-RTCD_EXTERN void (*vpx_idct16x16_256_add)(const tran_low_t* input,
-                                          uint8_t* dest,
-                                          int stride);
+#define vpx_idct16x16_256_add vpx_idct16x16_256_add_sse2
 
 void vpx_idct16x16_38_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct16x16_38_add_sse2(const tran_low_t* input,
                                uint8_t* dest,
                                int stride);
-RTCD_EXTERN void (*vpx_idct16x16_38_add)(const tran_low_t* input,
-                                         uint8_t* dest,
-                                         int stride);
+#define vpx_idct16x16_38_add vpx_idct16x16_38_add_sse2
 
 void vpx_idct32x32_1024_add_c(const tran_low_t* input,
                               uint8_t* dest,
@@ -6050,9 +5043,7 @@
 void vpx_idct32x32_1024_add_sse2(const tran_low_t* input,
                                  uint8_t* dest,
                                  int stride);
-RTCD_EXTERN void (*vpx_idct32x32_1024_add)(const tran_low_t* input,
-                                           uint8_t* dest,
-                                           int stride);
+#define vpx_idct32x32_1024_add vpx_idct32x32_1024_add_sse2
 
 void vpx_idct32x32_135_add_c(const tran_low_t* input,
                              uint8_t* dest,
@@ -6071,9 +5062,7 @@
 void vpx_idct32x32_1_add_sse2(const tran_low_t* input,
                               uint8_t* dest,
                               int stride);
-RTCD_EXTERN void (*vpx_idct32x32_1_add)(const tran_low_t* input,
-                                        uint8_t* dest,
-                                        int stride);
+#define vpx_idct32x32_1_add vpx_idct32x32_1_add_sse2
 
 void vpx_idct32x32_34_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct32x32_34_add_sse2(const tran_low_t* input,
@@ -6090,15 +5079,11 @@
 void vpx_idct4x4_16_add_sse2(const tran_low_t* input,
                              uint8_t* dest,
                              int stride);
-RTCD_EXTERN void (*vpx_idct4x4_16_add)(const tran_low_t* input,
-                                       uint8_t* dest,
-                                       int stride);
+#define vpx_idct4x4_16_add vpx_idct4x4_16_add_sse2
 
 void vpx_idct4x4_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct4x4_1_add_sse2(const tran_low_t* input, uint8_t* dest, int stride);
-RTCD_EXTERN void (*vpx_idct4x4_1_add)(const tran_low_t* input,
-                                      uint8_t* dest,
-                                      int stride);
+#define vpx_idct4x4_1_add vpx_idct4x4_1_add_sse2
 
 void vpx_idct8x8_12_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct8x8_12_add_sse2(const tran_low_t* input,
@@ -6113,21 +5098,17 @@
 
 void vpx_idct8x8_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct8x8_1_add_sse2(const tran_low_t* input, uint8_t* dest, int stride);
-RTCD_EXTERN void (*vpx_idct8x8_1_add)(const tran_low_t* input,
-                                      uint8_t* dest,
-                                      int stride);
+#define vpx_idct8x8_1_add vpx_idct8x8_1_add_sse2
 
 void vpx_idct8x8_64_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_idct8x8_64_add_sse2(const tran_low_t* input,
                              uint8_t* dest,
                              int stride);
-RTCD_EXTERN void (*vpx_idct8x8_64_add)(const tran_low_t* input,
-                                       uint8_t* dest,
-                                       int stride);
+#define vpx_idct8x8_64_add vpx_idct8x8_64_add_sse2
 
 int16_t vpx_int_pro_col_c(const uint8_t* ref, const int width);
 int16_t vpx_int_pro_col_sse2(const uint8_t* ref, const int width);
-RTCD_EXTERN int16_t (*vpx_int_pro_col)(const uint8_t* ref, const int width);
+#define vpx_int_pro_col vpx_int_pro_col_sse2
 
 void vpx_int_pro_row_c(int16_t* hbuf,
                        const uint8_t* ref,
@@ -6137,18 +5118,13 @@
                           const uint8_t* ref,
                           const int ref_stride,
                           const int height);
-RTCD_EXTERN void (*vpx_int_pro_row)(int16_t* hbuf,
-                                    const uint8_t* ref,
-                                    const int ref_stride,
-                                    const int height);
+#define vpx_int_pro_row vpx_int_pro_row_sse2
 
 void vpx_iwht4x4_16_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 void vpx_iwht4x4_16_add_sse2(const tran_low_t* input,
                              uint8_t* dest,
                              int stride);
-RTCD_EXTERN void (*vpx_iwht4x4_16_add)(const tran_low_t* input,
-                                       uint8_t* dest,
-                                       int stride);
+#define vpx_iwht4x4_16_add vpx_iwht4x4_16_add_sse2
 
 void vpx_iwht4x4_1_add_c(const tran_low_t* input, uint8_t* dest, int stride);
 #define vpx_iwht4x4_1_add vpx_iwht4x4_1_add_c
@@ -6205,11 +5181,7 @@
                                const uint8_t* blimit,
                                const uint8_t* limit,
                                const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_horizontal_4)(uint8_t* s,
-                                         int pitch,
-                                         const uint8_t* blimit,
-                                         const uint8_t* limit,
-                                         const uint8_t* thresh);
+#define vpx_lpf_horizontal_4 vpx_lpf_horizontal_4_sse2
 
 void vpx_lpf_horizontal_4_dual_c(uint8_t* s,
                                  int pitch,
@@ -6227,14 +5199,7 @@
                                     const uint8_t* blimit1,
                                     const uint8_t* limit1,
                                     const uint8_t* thresh1);
-RTCD_EXTERN void (*vpx_lpf_horizontal_4_dual)(uint8_t* s,
-                                              int pitch,
-                                              const uint8_t* blimit0,
-                                              const uint8_t* limit0,
-                                              const uint8_t* thresh0,
-                                              const uint8_t* blimit1,
-                                              const uint8_t* limit1,
-                                              const uint8_t* thresh1);
+#define vpx_lpf_horizontal_4_dual vpx_lpf_horizontal_4_dual_sse2
 
 void vpx_lpf_horizontal_8_c(uint8_t* s,
                             int pitch,
@@ -6246,11 +5211,7 @@
                                const uint8_t* blimit,
                                const uint8_t* limit,
                                const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_horizontal_8)(uint8_t* s,
-                                         int pitch,
-                                         const uint8_t* blimit,
-                                         const uint8_t* limit,
-                                         const uint8_t* thresh);
+#define vpx_lpf_horizontal_8 vpx_lpf_horizontal_8_sse2
 
 void vpx_lpf_horizontal_8_dual_c(uint8_t* s,
                                  int pitch,
@@ -6268,14 +5229,7 @@
                                     const uint8_t* blimit1,
                                     const uint8_t* limit1,
                                     const uint8_t* thresh1);
-RTCD_EXTERN void (*vpx_lpf_horizontal_8_dual)(uint8_t* s,
-                                              int pitch,
-                                              const uint8_t* blimit0,
-                                              const uint8_t* limit0,
-                                              const uint8_t* thresh0,
-                                              const uint8_t* blimit1,
-                                              const uint8_t* limit1,
-                                              const uint8_t* thresh1);
+#define vpx_lpf_horizontal_8_dual vpx_lpf_horizontal_8_dual_sse2
 
 void vpx_lpf_vertical_16_c(uint8_t* s,
                            int pitch,
@@ -6287,11 +5241,7 @@
                               const uint8_t* blimit,
                               const uint8_t* limit,
                               const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_vertical_16)(uint8_t* s,
-                                        int pitch,
-                                        const uint8_t* blimit,
-                                        const uint8_t* limit,
-                                        const uint8_t* thresh);
+#define vpx_lpf_vertical_16 vpx_lpf_vertical_16_sse2
 
 void vpx_lpf_vertical_16_dual_c(uint8_t* s,
                                 int pitch,
@@ -6303,11 +5253,7 @@
                                    const uint8_t* blimit,
                                    const uint8_t* limit,
                                    const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_vertical_16_dual)(uint8_t* s,
-                                             int pitch,
-                                             const uint8_t* blimit,
-                                             const uint8_t* limit,
-                                             const uint8_t* thresh);
+#define vpx_lpf_vertical_16_dual vpx_lpf_vertical_16_dual_sse2
 
 void vpx_lpf_vertical_4_c(uint8_t* s,
                           int pitch,
@@ -6319,11 +5265,7 @@
                              const uint8_t* blimit,
                              const uint8_t* limit,
                              const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_vertical_4)(uint8_t* s,
-                                       int pitch,
-                                       const uint8_t* blimit,
-                                       const uint8_t* limit,
-                                       const uint8_t* thresh);
+#define vpx_lpf_vertical_4 vpx_lpf_vertical_4_sse2
 
 void vpx_lpf_vertical_4_dual_c(uint8_t* s,
                                int pitch,
@@ -6341,14 +5283,7 @@
                                   const uint8_t* blimit1,
                                   const uint8_t* limit1,
                                   const uint8_t* thresh1);
-RTCD_EXTERN void (*vpx_lpf_vertical_4_dual)(uint8_t* s,
-                                            int pitch,
-                                            const uint8_t* blimit0,
-                                            const uint8_t* limit0,
-                                            const uint8_t* thresh0,
-                                            const uint8_t* blimit1,
-                                            const uint8_t* limit1,
-                                            const uint8_t* thresh1);
+#define vpx_lpf_vertical_4_dual vpx_lpf_vertical_4_dual_sse2
 
 void vpx_lpf_vertical_8_c(uint8_t* s,
                           int pitch,
@@ -6360,11 +5295,7 @@
                              const uint8_t* blimit,
                              const uint8_t* limit,
                              const uint8_t* thresh);
-RTCD_EXTERN void (*vpx_lpf_vertical_8)(uint8_t* s,
-                                       int pitch,
-                                       const uint8_t* blimit,
-                                       const uint8_t* limit,
-                                       const uint8_t* thresh);
+#define vpx_lpf_vertical_8 vpx_lpf_vertical_8_sse2
 
 void vpx_lpf_vertical_8_dual_c(uint8_t* s,
                                int pitch,
@@ -6382,30 +5313,19 @@
                                   const uint8_t* blimit1,
                                   const uint8_t* limit1,
                                   const uint8_t* thresh1);
-RTCD_EXTERN void (*vpx_lpf_vertical_8_dual)(uint8_t* s,
-                                            int pitch,
-                                            const uint8_t* blimit0,
-                                            const uint8_t* limit0,
-                                            const uint8_t* thresh0,
-                                            const uint8_t* blimit1,
-                                            const uint8_t* limit1,
-                                            const uint8_t* thresh1);
+#define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_sse2
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
                                  int flimit);
-void vpx_mbpost_proc_across_ip_sse2(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_sse2(unsigned char* src,
                                     int pitch,
                                     int rows,
                                     int cols,
                                     int flimit);
-RTCD_EXTERN void (*vpx_mbpost_proc_across_ip)(unsigned char* dst,
-                                              int pitch,
-                                              int rows,
-                                              int cols,
-                                              int flimit);
+#define vpx_mbpost_proc_across_ip vpx_mbpost_proc_across_ip_sse2
 
 void vpx_mbpost_proc_down_c(unsigned char* dst,
                             int pitch,
@@ -6417,11 +5337,7 @@
                                int rows,
                                int cols,
                                int flimit);
-RTCD_EXTERN void (*vpx_mbpost_proc_down)(unsigned char* dst,
-                                         int pitch,
-                                         int rows,
-                                         int cols,
-                                         int flimit);
+#define vpx_mbpost_proc_down vpx_mbpost_proc_down_sse2
 
 void vpx_minmax_8x8_c(const uint8_t* s,
                       int p,
@@ -6435,86 +5351,73 @@
                          int dp,
                          int* min,
                          int* max);
-RTCD_EXTERN void (*vpx_minmax_8x8)(const uint8_t* s,
-                                   int p,
-                                   const uint8_t* d,
-                                   int dp,
-                                   int* min,
-                                   int* max);
+#define vpx_minmax_8x8 vpx_minmax_8x8_sse2
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 unsigned int vpx_mse16x16_sse2(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_mse16x16_avx2(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_mse16x16)(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 unsigned int vpx_mse16x8_sse2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
 unsigned int vpx_mse16x8_avx2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_mse16x8)(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
-                                        int recon_stride,
+                                        int ref_stride,
                                         unsigned int* sse);
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 unsigned int vpx_mse8x16_sse2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_mse8x16)(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        const uint8_t* ref_ptr,
-                                        int recon_stride,
-                                        unsigned int* sse);
+#define vpx_mse8x16 vpx_mse8x16_sse2
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 unsigned int vpx_mse8x8_sse2(const uint8_t* src_ptr,
-                             int source_stride,
+                             int src_stride,
                              const uint8_t* ref_ptr,
-                             int recon_stride,
+                             int ref_stride,
                              unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_mse8x8)(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       const uint8_t* ref_ptr,
-                                       int recon_stride,
-                                       unsigned int* sse);
+#define vpx_mse8x8 vpx_mse8x8_sse2
 
 void vpx_plane_add_noise_c(uint8_t* start,
                            const int8_t* noise,
@@ -6530,13 +5433,7 @@
                               int width,
                               int height,
                               int pitch);
-RTCD_EXTERN void (*vpx_plane_add_noise)(uint8_t* start,
-                                        const int8_t* noise,
-                                        int blackclamp,
-                                        int whiteclamp,
-                                        int width,
-                                        int height,
-                                        int pitch);
+#define vpx_plane_add_noise vpx_plane_add_noise_sse2
 
 void vpx_post_proc_down_and_across_mb_row_c(unsigned char* src,
                                             unsigned char* dst,
@@ -6552,13 +5449,8 @@
                                                int cols,
                                                unsigned char* flimits,
                                                int size);
-RTCD_EXTERN void (*vpx_post_proc_down_and_across_mb_row)(unsigned char* src,
-                                                         unsigned char* dst,
-                                                         int src_pitch,
-                                                         int dst_pitch,
-                                                         int cols,
-                                                         unsigned char* flimits,
-                                                         int size);
+#define vpx_post_proc_down_and_across_mb_row \
+  vpx_post_proc_down_and_across_mb_row_sse2
 
 void vpx_quantize_b_c(const tran_low_t* coeff_ptr,
                       intptr_t n_coeffs,
@@ -6687,10 +5579,7 @@
                                int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad16x16)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* ref_ptr,
-                                         int ref_stride);
+#define vpx_sad16x16 vpx_sad16x16_sse2
 
 unsigned int vpx_sad16x16_avg_c(const uint8_t* src_ptr,
                                 int src_stride,
@@ -6702,11 +5591,7 @@
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad16x16_avg)(const uint8_t* src_ptr,
-                                             int src_stride,
-                                             const uint8_t* ref_ptr,
-                                             int ref_stride,
-                                             const uint8_t* second_pred);
+#define vpx_sad16x16_avg vpx_sad16x16_avg_sse2
 
 void vpx_sad16x16x3_c(const uint8_t* src_ptr,
                       int src_stride,
@@ -6731,19 +5616,15 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x16x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad16x16x4d)(const uint8_t* src_ptr,
-                                    int src_stride,
-                                    const uint8_t* const ref_ptr[],
-                                    int ref_stride,
-                                    uint32_t* sad_array);
+#define vpx_sad16x16x4d vpx_sad16x16x4d_sse2
 
 void vpx_sad16x16x8_c(const uint8_t* src_ptr,
                       int src_stride,
@@ -6769,10 +5650,7 @@
                                int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad16x32)(const uint8_t* src_ptr,
-                                         int src_stride,
-                                         const uint8_t* ref_ptr,
-                                         int ref_stride);
+#define vpx_sad16x32 vpx_sad16x32_sse2
 
 unsigned int vpx_sad16x32_avg_c(const uint8_t* src_ptr,
                                 int src_stride,
@@ -6784,27 +5662,19 @@
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad16x32_avg)(const uint8_t* src_ptr,
-                                             int src_stride,
-                                             const uint8_t* ref_ptr,
-                                             int ref_stride,
-                                             const uint8_t* second_pred);
+#define vpx_sad16x32_avg vpx_sad16x32_avg_sse2
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad16x32x4d)(const uint8_t* src_ptr,
-                                    int src_stride,
-                                    const uint8_t* const ref_ptr[],
-                                    int ref_stride,
-                                    uint32_t* sad_array);
+#define vpx_sad16x32x4d vpx_sad16x32x4d_sse2
 
 unsigned int vpx_sad16x8_c(const uint8_t* src_ptr,
                            int src_stride,
@@ -6814,10 +5684,7 @@
                               int src_stride,
                               const uint8_t* ref_ptr,
                               int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad16x8)(const uint8_t* src_ptr,
-                                        int src_stride,
-                                        const uint8_t* ref_ptr,
-                                        int ref_stride);
+#define vpx_sad16x8 vpx_sad16x8_sse2
 
 unsigned int vpx_sad16x8_avg_c(const uint8_t* src_ptr,
                                int src_stride,
@@ -6829,11 +5696,7 @@
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad16x8_avg)(const uint8_t* src_ptr,
-                                            int src_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            const uint8_t* second_pred);
+#define vpx_sad16x8_avg vpx_sad16x8_avg_sse2
 
 void vpx_sad16x8x3_c(const uint8_t* src_ptr,
                      int src_stride,
@@ -6858,19 +5721,15 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad16x8x4d_sse2(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad16x8x4d)(const uint8_t* src_ptr,
-                                   int src_stride,
-                                   const uint8_t* const ref_ptr[],
-                                   int ref_stride,
-                                   uint32_t* sad_array);
+#define vpx_sad16x8x4d vpx_sad16x8x4d_sse2
 
 void vpx_sad16x8x8_c(const uint8_t* src_ptr,
                      int src_stride,
@@ -6928,19 +5787,15 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x16x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad32x16x4d)(const uint8_t* src_ptr,
-                                    int src_stride,
-                                    const uint8_t* const ref_ptr[],
-                                    int ref_stride,
-                                    uint32_t* sad_array);
+#define vpx_sad32x16x4d vpx_sad32x16x4d_sse2
 
 unsigned int vpx_sad32x32_c(const uint8_t* src_ptr,
                             int src_stride,
@@ -6982,25 +5837,41 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 void vpx_sad32x32x4d_avx2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad32x32x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+void vpx_sad32x32x8_avx2(const uint8_t* src_ptr,
+                         int src_stride,
+                         const uint8_t* ref_ptr,
+                         int ref_stride,
+                         uint32_t* sad_array);
+RTCD_EXTERN void (*vpx_sad32x32x8)(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   uint32_t* sad_array);
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -7041,19 +5912,15 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x64x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad32x64x4d)(const uint8_t* src_ptr,
-                                    int src_stride,
-                                    const uint8_t* const ref_ptr[],
-                                    int ref_stride,
-                                    uint32_t* sad_array);
+#define vpx_sad32x64x4d vpx_sad32x64x4d_sse2
 
 unsigned int vpx_sad4x4_c(const uint8_t* src_ptr,
                           int src_stride,
@@ -7063,10 +5930,7 @@
                              int src_stride,
                              const uint8_t* ref_ptr,
                              int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad4x4)(const uint8_t* src_ptr,
-                                       int src_stride,
-                                       const uint8_t* ref_ptr,
-                                       int ref_stride);
+#define vpx_sad4x4 vpx_sad4x4_sse2
 
 unsigned int vpx_sad4x4_avg_c(const uint8_t* src_ptr,
                               int src_stride,
@@ -7078,11 +5942,7 @@
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad4x4_avg)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* ref_ptr,
-                                           int ref_stride,
-                                           const uint8_t* second_pred);
+#define vpx_sad4x4_avg vpx_sad4x4_avg_sse2
 
 void vpx_sad4x4x3_c(const uint8_t* src_ptr,
                     int src_stride,
@@ -7102,19 +5962,15 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x4x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad4x4x4d)(const uint8_t* src_ptr,
-                                  int src_stride,
-                                  const uint8_t* const ref_ptr[],
-                                  int ref_stride,
-                                  uint32_t* sad_array);
+#define vpx_sad4x4x4d vpx_sad4x4x4d_sse2
 
 void vpx_sad4x4x8_c(const uint8_t* src_ptr,
                     int src_stride,
@@ -7140,10 +5996,7 @@
                              int src_stride,
                              const uint8_t* ref_ptr,
                              int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad4x8)(const uint8_t* src_ptr,
-                                       int src_stride,
-                                       const uint8_t* ref_ptr,
-                                       int ref_stride);
+#define vpx_sad4x8 vpx_sad4x8_sse2
 
 unsigned int vpx_sad4x8_avg_c(const uint8_t* src_ptr,
                               int src_stride,
@@ -7155,27 +6008,19 @@
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad4x8_avg)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* ref_ptr,
-                                           int ref_stride,
-                                           const uint8_t* second_pred);
+#define vpx_sad4x8_avg vpx_sad4x8_avg_sse2
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x8x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad4x8x4d)(const uint8_t* src_ptr,
-                                  int src_stride,
-                                  const uint8_t* const ref_ptr[],
-                                  int ref_stride,
-                                  uint32_t* sad_array);
+#define vpx_sad4x8x4d vpx_sad4x8x4d_sse2
 
 unsigned int vpx_sad64x32_c(const uint8_t* src_ptr,
                             int src_stride,
@@ -7217,19 +6062,15 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad64x32x4d)(const uint8_t* src_ptr,
-                                    int src_stride,
-                                    const uint8_t* const ref_ptr[],
-                                    int ref_stride,
-                                    uint32_t* sad_array);
+#define vpx_sad64x32x4d vpx_sad64x32x4d_sse2
 
 unsigned int vpx_sad64x64_c(const uint8_t* src_ptr,
                             int src_stride,
@@ -7271,22 +6112,22 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x64x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 void vpx_sad64x64x4d_avx2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad64x64x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
@@ -7298,10 +6139,7 @@
                               int src_stride,
                               const uint8_t* ref_ptr,
                               int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad8x16)(const uint8_t* src_ptr,
-                                        int src_stride,
-                                        const uint8_t* ref_ptr,
-                                        int ref_stride);
+#define vpx_sad8x16 vpx_sad8x16_sse2
 
 unsigned int vpx_sad8x16_avg_c(const uint8_t* src_ptr,
                                int src_stride,
@@ -7313,11 +6151,7 @@
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad8x16_avg)(const uint8_t* src_ptr,
-                                            int src_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            const uint8_t* second_pred);
+#define vpx_sad8x16_avg vpx_sad8x16_avg_sse2
 
 void vpx_sad8x16x3_c(const uint8_t* src_ptr,
                      int src_stride,
@@ -7337,19 +6171,15 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad8x16x4d_sse2(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad8x16x4d)(const uint8_t* src_ptr,
-                                   int src_stride,
-                                   const uint8_t* const ref_ptr[],
-                                   int ref_stride,
-                                   uint32_t* sad_array);
+#define vpx_sad8x16x4d vpx_sad8x16x4d_sse2
 
 void vpx_sad8x16x8_c(const uint8_t* src_ptr,
                      int src_stride,
@@ -7375,10 +6205,7 @@
                              int src_stride,
                              const uint8_t* ref_ptr,
                              int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad8x4)(const uint8_t* src_ptr,
-                                       int src_stride,
-                                       const uint8_t* ref_ptr,
-                                       int ref_stride);
+#define vpx_sad8x4 vpx_sad8x4_sse2
 
 unsigned int vpx_sad8x4_avg_c(const uint8_t* src_ptr,
                               int src_stride,
@@ -7390,27 +6217,19 @@
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad8x4_avg)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* ref_ptr,
-                                           int ref_stride,
-                                           const uint8_t* second_pred);
+#define vpx_sad8x4_avg vpx_sad8x4_avg_sse2
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x4x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad8x4x4d)(const uint8_t* src_ptr,
-                                  int src_stride,
-                                  const uint8_t* const ref_ptr[],
-                                  int ref_stride,
-                                  uint32_t* sad_array);
+#define vpx_sad8x4x4d vpx_sad8x4x4d_sse2
 
 unsigned int vpx_sad8x8_c(const uint8_t* src_ptr,
                           int src_stride,
@@ -7420,10 +6239,7 @@
                              int src_stride,
                              const uint8_t* ref_ptr,
                              int ref_stride);
-RTCD_EXTERN unsigned int (*vpx_sad8x8)(const uint8_t* src_ptr,
-                                       int src_stride,
-                                       const uint8_t* ref_ptr,
-                                       int ref_stride);
+#define vpx_sad8x8 vpx_sad8x8_sse2
 
 unsigned int vpx_sad8x8_avg_c(const uint8_t* src_ptr,
                               int src_stride,
@@ -7435,11 +6251,7 @@
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  const uint8_t* second_pred);
-RTCD_EXTERN unsigned int (*vpx_sad8x8_avg)(const uint8_t* src_ptr,
-                                           int src_stride,
-                                           const uint8_t* ref_ptr,
-                                           int ref_stride,
-                                           const uint8_t* second_pred);
+#define vpx_sad8x8_avg vpx_sad8x8_avg_sse2
 
 void vpx_sad8x8x3_c(const uint8_t* src_ptr,
                     int src_stride,
@@ -7459,19 +6271,15 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x8x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
-RTCD_EXTERN void (*vpx_sad8x8x4d)(const uint8_t* src_ptr,
-                                  int src_stride,
-                                  const uint8_t* const ref_ptr[],
-                                  int ref_stride,
-                                  uint32_t* sad_array);
+#define vpx_sad8x8x4d vpx_sad8x8x4d_sse2
 
 void vpx_sad8x8x8_c(const uint8_t* src_ptr,
                     int src_stride,
@@ -7594,850 +6402,850 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_ssse3(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_avx2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x64)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance4x4)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance4x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance64x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_avx2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance64x64)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_ssse3(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x4)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x16)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_ssse3(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x8)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x16)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_avx2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x64)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance4x4)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance4x8)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance64x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_avx2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance64x64)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_ssse3(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x16)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x4)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x8)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -8458,384 +7266,329 @@
                              ptrdiff_t src_stride,
                              const uint8_t* pred_ptr,
                              ptrdiff_t pred_stride);
-RTCD_EXTERN void (*vpx_subtract_block)(int rows,
-                                       int cols,
-                                       int16_t* diff_ptr,
-                                       ptrdiff_t diff_stride,
-                                       const uint8_t* src_ptr,
-                                       ptrdiff_t src_stride,
-                                       const uint8_t* pred_ptr,
-                                       ptrdiff_t pred_stride);
+#define vpx_subtract_block vpx_subtract_block_sse2
 
 uint64_t vpx_sum_squares_2d_i16_c(const int16_t* src, int stride, int size);
 uint64_t vpx_sum_squares_2d_i16_sse2(const int16_t* src, int stride, int size);
-RTCD_EXTERN uint64_t (*vpx_sum_squares_2d_i16)(const int16_t* src,
-                                               int stride,
-                                               int size);
+#define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_sse2
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_16x16_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
-RTCD_EXTERN void (*vpx_tm_predictor_16x16)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
-                                           const uint8_t* above,
-                                           const uint8_t* left);
+#define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_sse2
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_32x32_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
-RTCD_EXTERN void (*vpx_tm_predictor_32x32)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
-                                           const uint8_t* above,
-                                           const uint8_t* left);
+#define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_sse2
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_4x4_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
-RTCD_EXTERN void (*vpx_tm_predictor_4x4)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
-                                         const uint8_t* above,
-                                         const uint8_t* left);
+#define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_sse2
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_8x8_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
-RTCD_EXTERN void (*vpx_tm_predictor_8x8)(uint8_t* dst,
-                                         ptrdiff_t y_stride,
-                                         const uint8_t* above,
-                                         const uint8_t* left);
+#define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_sse2
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_16x16_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_v_predictor_16x16)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_v_predictor_16x16 vpx_v_predictor_16x16_sse2
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_32x32_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
-RTCD_EXTERN void (*vpx_v_predictor_32x32)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
-                                          const uint8_t* above,
-                                          const uint8_t* left);
+#define vpx_v_predictor_32x32 vpx_v_predictor_32x32_sse2
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_4x4_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
-RTCD_EXTERN void (*vpx_v_predictor_4x4)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
-                                        const uint8_t* above,
-                                        const uint8_t* left);
+#define vpx_v_predictor_4x4 vpx_v_predictor_4x4_sse2
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_8x8_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
-RTCD_EXTERN void (*vpx_v_predictor_8x8)(uint8_t* dst,
-                                        ptrdiff_t y_stride,
-                                        const uint8_t* above,
-                                        const uint8_t* left);
+#define vpx_v_predictor_8x8 vpx_v_predictor_8x8_sse2
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x16_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance16x16_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x16)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance16x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance16x8_sse2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 unsigned int vpx_variance16x8_avx2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x8)(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x16_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x16_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x16)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x64_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x64_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x64)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x4_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_variance4x4)(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            unsigned int* sse);
+#define vpx_variance4x4 vpx_variance4x4_sse2
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x8_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_variance4x8)(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            unsigned int* sse);
+#define vpx_variance4x8 vpx_variance4x8_sse2
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance64x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance64x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x64_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance64x64_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance64x64)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance8x16_sse2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_variance8x16)(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             const uint8_t* ref_ptr,
-                                             int ref_stride,
-                                             unsigned int* sse);
+#define vpx_variance8x16 vpx_variance8x16_sse2
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x4_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_variance8x4)(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            unsigned int* sse);
+#define vpx_variance8x4 vpx_variance8x4_sse2
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x8_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
-RTCD_EXTERN unsigned int (*vpx_variance8x8)(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            const uint8_t* ref_ptr,
-                                            int ref_stride,
-                                            unsigned int* sse);
+#define vpx_variance8x8 vpx_variance8x8_sse2
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
 
 int vpx_vector_var_c(const int16_t* ref, const int16_t* src, const int bwl);
 int vpx_vector_var_sse2(const int16_t* ref, const int16_t* src, const int bwl);
-RTCD_EXTERN int (*vpx_vector_var)(const int16_t* ref,
-                                  const int16_t* src,
-                                  const int bwl);
+#define vpx_vector_var vpx_vector_var_sse2
 
 void vpx_dsp_rtcd(void);
 
@@ -8846,63 +7599,36 @@
 
   (void)flags;
 
-  vpx_avg_4x4 = vpx_avg_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_avg_4x4 = vpx_avg_4x4_sse2;
-  vpx_avg_8x8 = vpx_avg_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_avg_8x8 = vpx_avg_8x8_sse2;
-  vpx_comp_avg_pred = vpx_comp_avg_pred_c;
-  if (flags & HAS_SSE2)
-    vpx_comp_avg_pred = vpx_comp_avg_pred_sse2;
-  vpx_convolve8 = vpx_convolve8_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8 = vpx_convolve8_sse2;
+  vpx_convolve8 = vpx_convolve8_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8 = vpx_convolve8_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8 = vpx_convolve8_avx2;
-  vpx_convolve8_avg = vpx_convolve8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8_avg = vpx_convolve8_avg_sse2;
+  vpx_convolve8_avg = vpx_convolve8_avg_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8_avg = vpx_convolve8_avg_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8_avg = vpx_convolve8_avg_avx2;
-  vpx_convolve8_avg_horiz = vpx_convolve8_avg_horiz_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8_avg_horiz = vpx_convolve8_avg_horiz_sse2;
+  vpx_convolve8_avg_horiz = vpx_convolve8_avg_horiz_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8_avg_horiz = vpx_convolve8_avg_horiz_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8_avg_horiz = vpx_convolve8_avg_horiz_avx2;
-  vpx_convolve8_avg_vert = vpx_convolve8_avg_vert_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8_avg_vert = vpx_convolve8_avg_vert_sse2;
+  vpx_convolve8_avg_vert = vpx_convolve8_avg_vert_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8_avg_vert = vpx_convolve8_avg_vert_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8_avg_vert = vpx_convolve8_avg_vert_avx2;
-  vpx_convolve8_horiz = vpx_convolve8_horiz_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8_horiz = vpx_convolve8_horiz_sse2;
+  vpx_convolve8_horiz = vpx_convolve8_horiz_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8_horiz = vpx_convolve8_horiz_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8_horiz = vpx_convolve8_horiz_avx2;
-  vpx_convolve8_vert = vpx_convolve8_vert_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve8_vert = vpx_convolve8_vert_sse2;
+  vpx_convolve8_vert = vpx_convolve8_vert_sse2;
   if (flags & HAS_SSSE3)
     vpx_convolve8_vert = vpx_convolve8_vert_ssse3;
   if (flags & HAS_AVX2)
     vpx_convolve8_vert = vpx_convolve8_vert_avx2;
-  vpx_convolve_avg = vpx_convolve_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve_avg = vpx_convolve_avg_sse2;
-  vpx_convolve_copy = vpx_convolve_copy_c;
-  if (flags & HAS_SSE2)
-    vpx_convolve_copy = vpx_convolve_copy_sse2;
   vpx_d153_predictor_16x16 = vpx_d153_predictor_16x16_c;
   if (flags & HAS_SSSE3)
     vpx_d153_predictor_16x16 = vpx_d153_predictor_16x16_ssse3;
@@ -8921,9 +7647,6 @@
   vpx_d207_predictor_32x32 = vpx_d207_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_d207_predictor_32x32 = vpx_d207_predictor_32x32_ssse3;
-  vpx_d207_predictor_4x4 = vpx_d207_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_d207_predictor_4x4 = vpx_d207_predictor_4x4_sse2;
   vpx_d207_predictor_8x8 = vpx_d207_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_d207_predictor_8x8 = vpx_d207_predictor_8x8_ssse3;
@@ -8933,12 +7656,6 @@
   vpx_d45_predictor_32x32 = vpx_d45_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_d45_predictor_32x32 = vpx_d45_predictor_32x32_ssse3;
-  vpx_d45_predictor_4x4 = vpx_d45_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_d45_predictor_4x4 = vpx_d45_predictor_4x4_sse2;
-  vpx_d45_predictor_8x8 = vpx_d45_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_d45_predictor_8x8 = vpx_d45_predictor_8x8_sse2;
   vpx_d63_predictor_16x16 = vpx_d63_predictor_16x16_c;
   if (flags & HAS_SSSE3)
     vpx_d63_predictor_16x16 = vpx_d63_predictor_16x16_ssse3;
@@ -8951,542 +7668,15 @@
   vpx_d63_predictor_8x8 = vpx_d63_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_d63_predictor_8x8 = vpx_d63_predictor_8x8_ssse3;
-  vpx_dc_128_predictor_16x16 = vpx_dc_128_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_128_predictor_16x16 = vpx_dc_128_predictor_16x16_sse2;
-  vpx_dc_128_predictor_32x32 = vpx_dc_128_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_128_predictor_32x32 = vpx_dc_128_predictor_32x32_sse2;
-  vpx_dc_128_predictor_4x4 = vpx_dc_128_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_128_predictor_4x4 = vpx_dc_128_predictor_4x4_sse2;
-  vpx_dc_128_predictor_8x8 = vpx_dc_128_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_128_predictor_8x8 = vpx_dc_128_predictor_8x8_sse2;
-  vpx_dc_left_predictor_16x16 = vpx_dc_left_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_left_predictor_16x16 = vpx_dc_left_predictor_16x16_sse2;
-  vpx_dc_left_predictor_32x32 = vpx_dc_left_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_left_predictor_32x32 = vpx_dc_left_predictor_32x32_sse2;
-  vpx_dc_left_predictor_4x4 = vpx_dc_left_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_left_predictor_4x4 = vpx_dc_left_predictor_4x4_sse2;
-  vpx_dc_left_predictor_8x8 = vpx_dc_left_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_left_predictor_8x8 = vpx_dc_left_predictor_8x8_sse2;
-  vpx_dc_predictor_16x16 = vpx_dc_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_predictor_16x16 = vpx_dc_predictor_16x16_sse2;
-  vpx_dc_predictor_32x32 = vpx_dc_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_predictor_32x32 = vpx_dc_predictor_32x32_sse2;
-  vpx_dc_predictor_4x4 = vpx_dc_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_predictor_4x4 = vpx_dc_predictor_4x4_sse2;
-  vpx_dc_predictor_8x8 = vpx_dc_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_predictor_8x8 = vpx_dc_predictor_8x8_sse2;
-  vpx_dc_top_predictor_16x16 = vpx_dc_top_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_top_predictor_16x16 = vpx_dc_top_predictor_16x16_sse2;
-  vpx_dc_top_predictor_32x32 = vpx_dc_top_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_top_predictor_32x32 = vpx_dc_top_predictor_32x32_sse2;
-  vpx_dc_top_predictor_4x4 = vpx_dc_top_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_top_predictor_4x4 = vpx_dc_top_predictor_4x4_sse2;
-  vpx_dc_top_predictor_8x8 = vpx_dc_top_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_dc_top_predictor_8x8 = vpx_dc_top_predictor_8x8_sse2;
-  vpx_fdct16x16 = vpx_fdct16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct16x16 = vpx_fdct16x16_sse2;
-  vpx_fdct16x16_1 = vpx_fdct16x16_1_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct16x16_1 = vpx_fdct16x16_1_sse2;
-  vpx_fdct32x32 = vpx_fdct32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct32x32 = vpx_fdct32x32_sse2;
-  vpx_fdct32x32_1 = vpx_fdct32x32_1_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct32x32_1 = vpx_fdct32x32_1_sse2;
-  vpx_fdct32x32_rd = vpx_fdct32x32_rd_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct32x32_rd = vpx_fdct32x32_rd_sse2;
-  vpx_fdct4x4 = vpx_fdct4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct4x4 = vpx_fdct4x4_sse2;
-  vpx_fdct4x4_1 = vpx_fdct4x4_1_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct4x4_1 = vpx_fdct4x4_1_sse2;
-  vpx_fdct8x8 = vpx_fdct8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct8x8 = vpx_fdct8x8_sse2;
-  vpx_fdct8x8_1 = vpx_fdct8x8_1_c;
-  if (flags & HAS_SSE2)
-    vpx_fdct8x8_1 = vpx_fdct8x8_1_sse2;
-  vpx_get16x16var = vpx_get16x16var_c;
-  if (flags & HAS_SSE2)
-    vpx_get16x16var = vpx_get16x16var_sse2;
+  vpx_get16x16var = vpx_get16x16var_sse2;
   if (flags & HAS_AVX2)
     vpx_get16x16var = vpx_get16x16var_avx2;
-  vpx_get8x8var = vpx_get8x8var_c;
-  if (flags & HAS_SSE2)
-    vpx_get8x8var = vpx_get8x8var_sse2;
-  vpx_get_mb_ss = vpx_get_mb_ss_c;
-  if (flags & HAS_SSE2)
-    vpx_get_mb_ss = vpx_get_mb_ss_sse2;
-  vpx_h_predictor_16x16 = vpx_h_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_h_predictor_16x16 = vpx_h_predictor_16x16_sse2;
-  vpx_h_predictor_32x32 = vpx_h_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_h_predictor_32x32 = vpx_h_predictor_32x32_sse2;
-  vpx_h_predictor_4x4 = vpx_h_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_h_predictor_4x4 = vpx_h_predictor_4x4_sse2;
-  vpx_h_predictor_8x8 = vpx_h_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_h_predictor_8x8 = vpx_h_predictor_8x8_sse2;
-  vpx_hadamard_16x16 = vpx_hadamard_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_hadamard_16x16 = vpx_hadamard_16x16_sse2;
+  vpx_hadamard_16x16 = vpx_hadamard_16x16_sse2;
   if (flags & HAS_AVX2)
     vpx_hadamard_16x16 = vpx_hadamard_16x16_avx2;
-  vpx_hadamard_32x32 = vpx_hadamard_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_hadamard_32x32 = vpx_hadamard_32x32_sse2;
+  vpx_hadamard_32x32 = vpx_hadamard_32x32_sse2;
   if (flags & HAS_AVX2)
     vpx_hadamard_32x32 = vpx_hadamard_32x32_avx2;
-  vpx_hadamard_8x8 = vpx_hadamard_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_hadamard_8x8 = vpx_hadamard_8x8_sse2;
-  vpx_highbd_10_mse16x16 = vpx_highbd_10_mse16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_mse16x16 = vpx_highbd_10_mse16x16_sse2;
-  vpx_highbd_10_mse8x8 = vpx_highbd_10_mse8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_mse8x8 = vpx_highbd_10_mse8x8_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance16x16 =
-      vpx_highbd_10_sub_pixel_avg_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance16x16 =
-        vpx_highbd_10_sub_pixel_avg_variance16x16_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance16x32 =
-      vpx_highbd_10_sub_pixel_avg_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance16x32 =
-        vpx_highbd_10_sub_pixel_avg_variance16x32_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance16x8 =
-      vpx_highbd_10_sub_pixel_avg_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance16x8 =
-        vpx_highbd_10_sub_pixel_avg_variance16x8_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance32x16 =
-      vpx_highbd_10_sub_pixel_avg_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance32x16 =
-        vpx_highbd_10_sub_pixel_avg_variance32x16_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance32x32 =
-      vpx_highbd_10_sub_pixel_avg_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance32x32 =
-        vpx_highbd_10_sub_pixel_avg_variance32x32_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance32x64 =
-      vpx_highbd_10_sub_pixel_avg_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance32x64 =
-        vpx_highbd_10_sub_pixel_avg_variance32x64_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance64x32 =
-      vpx_highbd_10_sub_pixel_avg_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance64x32 =
-        vpx_highbd_10_sub_pixel_avg_variance64x32_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance64x64 =
-      vpx_highbd_10_sub_pixel_avg_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance64x64 =
-        vpx_highbd_10_sub_pixel_avg_variance64x64_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance8x16 =
-      vpx_highbd_10_sub_pixel_avg_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance8x16 =
-        vpx_highbd_10_sub_pixel_avg_variance8x16_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance8x4 =
-      vpx_highbd_10_sub_pixel_avg_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance8x4 =
-        vpx_highbd_10_sub_pixel_avg_variance8x4_sse2;
-  vpx_highbd_10_sub_pixel_avg_variance8x8 =
-      vpx_highbd_10_sub_pixel_avg_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_avg_variance8x8 =
-        vpx_highbd_10_sub_pixel_avg_variance8x8_sse2;
-  vpx_highbd_10_sub_pixel_variance16x16 =
-      vpx_highbd_10_sub_pixel_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance16x16 =
-        vpx_highbd_10_sub_pixel_variance16x16_sse2;
-  vpx_highbd_10_sub_pixel_variance16x32 =
-      vpx_highbd_10_sub_pixel_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance16x32 =
-        vpx_highbd_10_sub_pixel_variance16x32_sse2;
-  vpx_highbd_10_sub_pixel_variance16x8 = vpx_highbd_10_sub_pixel_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance16x8 =
-        vpx_highbd_10_sub_pixel_variance16x8_sse2;
-  vpx_highbd_10_sub_pixel_variance32x16 =
-      vpx_highbd_10_sub_pixel_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance32x16 =
-        vpx_highbd_10_sub_pixel_variance32x16_sse2;
-  vpx_highbd_10_sub_pixel_variance32x32 =
-      vpx_highbd_10_sub_pixel_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance32x32 =
-        vpx_highbd_10_sub_pixel_variance32x32_sse2;
-  vpx_highbd_10_sub_pixel_variance32x64 =
-      vpx_highbd_10_sub_pixel_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance32x64 =
-        vpx_highbd_10_sub_pixel_variance32x64_sse2;
-  vpx_highbd_10_sub_pixel_variance64x32 =
-      vpx_highbd_10_sub_pixel_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance64x32 =
-        vpx_highbd_10_sub_pixel_variance64x32_sse2;
-  vpx_highbd_10_sub_pixel_variance64x64 =
-      vpx_highbd_10_sub_pixel_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance64x64 =
-        vpx_highbd_10_sub_pixel_variance64x64_sse2;
-  vpx_highbd_10_sub_pixel_variance8x16 = vpx_highbd_10_sub_pixel_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance8x16 =
-        vpx_highbd_10_sub_pixel_variance8x16_sse2;
-  vpx_highbd_10_sub_pixel_variance8x4 = vpx_highbd_10_sub_pixel_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance8x4 =
-        vpx_highbd_10_sub_pixel_variance8x4_sse2;
-  vpx_highbd_10_sub_pixel_variance8x8 = vpx_highbd_10_sub_pixel_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_sub_pixel_variance8x8 =
-        vpx_highbd_10_sub_pixel_variance8x8_sse2;
-  vpx_highbd_10_variance16x16 = vpx_highbd_10_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance16x16 = vpx_highbd_10_variance16x16_sse2;
-  vpx_highbd_10_variance16x32 = vpx_highbd_10_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance16x32 = vpx_highbd_10_variance16x32_sse2;
-  vpx_highbd_10_variance16x8 = vpx_highbd_10_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance16x8 = vpx_highbd_10_variance16x8_sse2;
-  vpx_highbd_10_variance32x16 = vpx_highbd_10_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance32x16 = vpx_highbd_10_variance32x16_sse2;
-  vpx_highbd_10_variance32x32 = vpx_highbd_10_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance32x32 = vpx_highbd_10_variance32x32_sse2;
-  vpx_highbd_10_variance32x64 = vpx_highbd_10_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance32x64 = vpx_highbd_10_variance32x64_sse2;
-  vpx_highbd_10_variance64x32 = vpx_highbd_10_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance64x32 = vpx_highbd_10_variance64x32_sse2;
-  vpx_highbd_10_variance64x64 = vpx_highbd_10_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance64x64 = vpx_highbd_10_variance64x64_sse2;
-  vpx_highbd_10_variance8x16 = vpx_highbd_10_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance8x16 = vpx_highbd_10_variance8x16_sse2;
-  vpx_highbd_10_variance8x8 = vpx_highbd_10_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_10_variance8x8 = vpx_highbd_10_variance8x8_sse2;
-  vpx_highbd_12_mse16x16 = vpx_highbd_12_mse16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_mse16x16 = vpx_highbd_12_mse16x16_sse2;
-  vpx_highbd_12_mse8x8 = vpx_highbd_12_mse8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_mse8x8 = vpx_highbd_12_mse8x8_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance16x16 =
-      vpx_highbd_12_sub_pixel_avg_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance16x16 =
-        vpx_highbd_12_sub_pixel_avg_variance16x16_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance16x32 =
-      vpx_highbd_12_sub_pixel_avg_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance16x32 =
-        vpx_highbd_12_sub_pixel_avg_variance16x32_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance16x8 =
-      vpx_highbd_12_sub_pixel_avg_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance16x8 =
-        vpx_highbd_12_sub_pixel_avg_variance16x8_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance32x16 =
-      vpx_highbd_12_sub_pixel_avg_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance32x16 =
-        vpx_highbd_12_sub_pixel_avg_variance32x16_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance32x32 =
-      vpx_highbd_12_sub_pixel_avg_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance32x32 =
-        vpx_highbd_12_sub_pixel_avg_variance32x32_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance32x64 =
-      vpx_highbd_12_sub_pixel_avg_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance32x64 =
-        vpx_highbd_12_sub_pixel_avg_variance32x64_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance64x32 =
-      vpx_highbd_12_sub_pixel_avg_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance64x32 =
-        vpx_highbd_12_sub_pixel_avg_variance64x32_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance64x64 =
-      vpx_highbd_12_sub_pixel_avg_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance64x64 =
-        vpx_highbd_12_sub_pixel_avg_variance64x64_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance8x16 =
-      vpx_highbd_12_sub_pixel_avg_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance8x16 =
-        vpx_highbd_12_sub_pixel_avg_variance8x16_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance8x4 =
-      vpx_highbd_12_sub_pixel_avg_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance8x4 =
-        vpx_highbd_12_sub_pixel_avg_variance8x4_sse2;
-  vpx_highbd_12_sub_pixel_avg_variance8x8 =
-      vpx_highbd_12_sub_pixel_avg_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_avg_variance8x8 =
-        vpx_highbd_12_sub_pixel_avg_variance8x8_sse2;
-  vpx_highbd_12_sub_pixel_variance16x16 =
-      vpx_highbd_12_sub_pixel_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance16x16 =
-        vpx_highbd_12_sub_pixel_variance16x16_sse2;
-  vpx_highbd_12_sub_pixel_variance16x32 =
-      vpx_highbd_12_sub_pixel_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance16x32 =
-        vpx_highbd_12_sub_pixel_variance16x32_sse2;
-  vpx_highbd_12_sub_pixel_variance16x8 = vpx_highbd_12_sub_pixel_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance16x8 =
-        vpx_highbd_12_sub_pixel_variance16x8_sse2;
-  vpx_highbd_12_sub_pixel_variance32x16 =
-      vpx_highbd_12_sub_pixel_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance32x16 =
-        vpx_highbd_12_sub_pixel_variance32x16_sse2;
-  vpx_highbd_12_sub_pixel_variance32x32 =
-      vpx_highbd_12_sub_pixel_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance32x32 =
-        vpx_highbd_12_sub_pixel_variance32x32_sse2;
-  vpx_highbd_12_sub_pixel_variance32x64 =
-      vpx_highbd_12_sub_pixel_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance32x64 =
-        vpx_highbd_12_sub_pixel_variance32x64_sse2;
-  vpx_highbd_12_sub_pixel_variance64x32 =
-      vpx_highbd_12_sub_pixel_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance64x32 =
-        vpx_highbd_12_sub_pixel_variance64x32_sse2;
-  vpx_highbd_12_sub_pixel_variance64x64 =
-      vpx_highbd_12_sub_pixel_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance64x64 =
-        vpx_highbd_12_sub_pixel_variance64x64_sse2;
-  vpx_highbd_12_sub_pixel_variance8x16 = vpx_highbd_12_sub_pixel_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance8x16 =
-        vpx_highbd_12_sub_pixel_variance8x16_sse2;
-  vpx_highbd_12_sub_pixel_variance8x4 = vpx_highbd_12_sub_pixel_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance8x4 =
-        vpx_highbd_12_sub_pixel_variance8x4_sse2;
-  vpx_highbd_12_sub_pixel_variance8x8 = vpx_highbd_12_sub_pixel_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_sub_pixel_variance8x8 =
-        vpx_highbd_12_sub_pixel_variance8x8_sse2;
-  vpx_highbd_12_variance16x16 = vpx_highbd_12_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance16x16 = vpx_highbd_12_variance16x16_sse2;
-  vpx_highbd_12_variance16x32 = vpx_highbd_12_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance16x32 = vpx_highbd_12_variance16x32_sse2;
-  vpx_highbd_12_variance16x8 = vpx_highbd_12_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance16x8 = vpx_highbd_12_variance16x8_sse2;
-  vpx_highbd_12_variance32x16 = vpx_highbd_12_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance32x16 = vpx_highbd_12_variance32x16_sse2;
-  vpx_highbd_12_variance32x32 = vpx_highbd_12_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance32x32 = vpx_highbd_12_variance32x32_sse2;
-  vpx_highbd_12_variance32x64 = vpx_highbd_12_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance32x64 = vpx_highbd_12_variance32x64_sse2;
-  vpx_highbd_12_variance64x32 = vpx_highbd_12_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance64x32 = vpx_highbd_12_variance64x32_sse2;
-  vpx_highbd_12_variance64x64 = vpx_highbd_12_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance64x64 = vpx_highbd_12_variance64x64_sse2;
-  vpx_highbd_12_variance8x16 = vpx_highbd_12_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance8x16 = vpx_highbd_12_variance8x16_sse2;
-  vpx_highbd_12_variance8x8 = vpx_highbd_12_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_12_variance8x8 = vpx_highbd_12_variance8x8_sse2;
-  vpx_highbd_8_mse16x16 = vpx_highbd_8_mse16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_mse16x16 = vpx_highbd_8_mse16x16_sse2;
-  vpx_highbd_8_mse8x8 = vpx_highbd_8_mse8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_mse8x8 = vpx_highbd_8_mse8x8_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance16x16 =
-      vpx_highbd_8_sub_pixel_avg_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance16x16 =
-        vpx_highbd_8_sub_pixel_avg_variance16x16_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance16x32 =
-      vpx_highbd_8_sub_pixel_avg_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance16x32 =
-        vpx_highbd_8_sub_pixel_avg_variance16x32_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance16x8 =
-      vpx_highbd_8_sub_pixel_avg_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance16x8 =
-        vpx_highbd_8_sub_pixel_avg_variance16x8_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance32x16 =
-      vpx_highbd_8_sub_pixel_avg_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance32x16 =
-        vpx_highbd_8_sub_pixel_avg_variance32x16_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance32x32 =
-      vpx_highbd_8_sub_pixel_avg_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance32x32 =
-        vpx_highbd_8_sub_pixel_avg_variance32x32_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance32x64 =
-      vpx_highbd_8_sub_pixel_avg_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance32x64 =
-        vpx_highbd_8_sub_pixel_avg_variance32x64_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance64x32 =
-      vpx_highbd_8_sub_pixel_avg_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance64x32 =
-        vpx_highbd_8_sub_pixel_avg_variance64x32_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance64x64 =
-      vpx_highbd_8_sub_pixel_avg_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance64x64 =
-        vpx_highbd_8_sub_pixel_avg_variance64x64_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance8x16 =
-      vpx_highbd_8_sub_pixel_avg_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance8x16 =
-        vpx_highbd_8_sub_pixel_avg_variance8x16_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance8x4 =
-      vpx_highbd_8_sub_pixel_avg_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance8x4 =
-        vpx_highbd_8_sub_pixel_avg_variance8x4_sse2;
-  vpx_highbd_8_sub_pixel_avg_variance8x8 =
-      vpx_highbd_8_sub_pixel_avg_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_avg_variance8x8 =
-        vpx_highbd_8_sub_pixel_avg_variance8x8_sse2;
-  vpx_highbd_8_sub_pixel_variance16x16 = vpx_highbd_8_sub_pixel_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance16x16 =
-        vpx_highbd_8_sub_pixel_variance16x16_sse2;
-  vpx_highbd_8_sub_pixel_variance16x32 = vpx_highbd_8_sub_pixel_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance16x32 =
-        vpx_highbd_8_sub_pixel_variance16x32_sse2;
-  vpx_highbd_8_sub_pixel_variance16x8 = vpx_highbd_8_sub_pixel_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance16x8 =
-        vpx_highbd_8_sub_pixel_variance16x8_sse2;
-  vpx_highbd_8_sub_pixel_variance32x16 = vpx_highbd_8_sub_pixel_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance32x16 =
-        vpx_highbd_8_sub_pixel_variance32x16_sse2;
-  vpx_highbd_8_sub_pixel_variance32x32 = vpx_highbd_8_sub_pixel_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance32x32 =
-        vpx_highbd_8_sub_pixel_variance32x32_sse2;
-  vpx_highbd_8_sub_pixel_variance32x64 = vpx_highbd_8_sub_pixel_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance32x64 =
-        vpx_highbd_8_sub_pixel_variance32x64_sse2;
-  vpx_highbd_8_sub_pixel_variance64x32 = vpx_highbd_8_sub_pixel_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance64x32 =
-        vpx_highbd_8_sub_pixel_variance64x32_sse2;
-  vpx_highbd_8_sub_pixel_variance64x64 = vpx_highbd_8_sub_pixel_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance64x64 =
-        vpx_highbd_8_sub_pixel_variance64x64_sse2;
-  vpx_highbd_8_sub_pixel_variance8x16 = vpx_highbd_8_sub_pixel_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance8x16 =
-        vpx_highbd_8_sub_pixel_variance8x16_sse2;
-  vpx_highbd_8_sub_pixel_variance8x4 = vpx_highbd_8_sub_pixel_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance8x4 =
-        vpx_highbd_8_sub_pixel_variance8x4_sse2;
-  vpx_highbd_8_sub_pixel_variance8x8 = vpx_highbd_8_sub_pixel_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_sub_pixel_variance8x8 =
-        vpx_highbd_8_sub_pixel_variance8x8_sse2;
-  vpx_highbd_8_variance16x16 = vpx_highbd_8_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance16x16 = vpx_highbd_8_variance16x16_sse2;
-  vpx_highbd_8_variance16x32 = vpx_highbd_8_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance16x32 = vpx_highbd_8_variance16x32_sse2;
-  vpx_highbd_8_variance16x8 = vpx_highbd_8_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance16x8 = vpx_highbd_8_variance16x8_sse2;
-  vpx_highbd_8_variance32x16 = vpx_highbd_8_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance32x16 = vpx_highbd_8_variance32x16_sse2;
-  vpx_highbd_8_variance32x32 = vpx_highbd_8_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance32x32 = vpx_highbd_8_variance32x32_sse2;
-  vpx_highbd_8_variance32x64 = vpx_highbd_8_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance32x64 = vpx_highbd_8_variance32x64_sse2;
-  vpx_highbd_8_variance64x32 = vpx_highbd_8_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance64x32 = vpx_highbd_8_variance64x32_sse2;
-  vpx_highbd_8_variance64x64 = vpx_highbd_8_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance64x64 = vpx_highbd_8_variance64x64_sse2;
-  vpx_highbd_8_variance8x16 = vpx_highbd_8_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance8x16 = vpx_highbd_8_variance8x16_sse2;
-  vpx_highbd_8_variance8x8 = vpx_highbd_8_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_8_variance8x8 = vpx_highbd_8_variance8x8_sse2;
-  vpx_highbd_avg_4x4 = vpx_highbd_avg_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_avg_4x4 = vpx_highbd_avg_4x4_sse2;
-  vpx_highbd_avg_8x8 = vpx_highbd_avg_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_avg_8x8 = vpx_highbd_avg_8x8_sse2;
   vpx_highbd_convolve8 = vpx_highbd_convolve8_c;
   if (flags & HAS_AVX2)
     vpx_highbd_convolve8 = vpx_highbd_convolve8_avx2;
@@ -9505,14 +7695,10 @@
   vpx_highbd_convolve8_vert = vpx_highbd_convolve8_vert_c;
   if (flags & HAS_AVX2)
     vpx_highbd_convolve8_vert = vpx_highbd_convolve8_vert_avx2;
-  vpx_highbd_convolve_avg = vpx_highbd_convolve_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_convolve_avg = vpx_highbd_convolve_avg_sse2;
+  vpx_highbd_convolve_avg = vpx_highbd_convolve_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_highbd_convolve_avg = vpx_highbd_convolve_avg_avx2;
-  vpx_highbd_convolve_copy = vpx_highbd_convolve_copy_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_convolve_copy = vpx_highbd_convolve_copy_sse2;
+  vpx_highbd_convolve_copy = vpx_highbd_convolve_copy_sse2;
   if (flags & HAS_AVX2)
     vpx_highbd_convolve_copy = vpx_highbd_convolve_copy_avx2;
   vpx_highbd_d117_predictor_16x16 = vpx_highbd_d117_predictor_16x16_c;
@@ -9521,9 +7707,6 @@
   vpx_highbd_d117_predictor_32x32 = vpx_highbd_d117_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d117_predictor_32x32 = vpx_highbd_d117_predictor_32x32_ssse3;
-  vpx_highbd_d117_predictor_4x4 = vpx_highbd_d117_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_d117_predictor_4x4 = vpx_highbd_d117_predictor_4x4_sse2;
   vpx_highbd_d117_predictor_8x8 = vpx_highbd_d117_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d117_predictor_8x8 = vpx_highbd_d117_predictor_8x8_ssse3;
@@ -9533,9 +7716,6 @@
   vpx_highbd_d135_predictor_32x32 = vpx_highbd_d135_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d135_predictor_32x32 = vpx_highbd_d135_predictor_32x32_ssse3;
-  vpx_highbd_d135_predictor_4x4 = vpx_highbd_d135_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_d135_predictor_4x4 = vpx_highbd_d135_predictor_4x4_sse2;
   vpx_highbd_d135_predictor_8x8 = vpx_highbd_d135_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d135_predictor_8x8 = vpx_highbd_d135_predictor_8x8_ssse3;
@@ -9545,9 +7725,6 @@
   vpx_highbd_d153_predictor_32x32 = vpx_highbd_d153_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d153_predictor_32x32 = vpx_highbd_d153_predictor_32x32_ssse3;
-  vpx_highbd_d153_predictor_4x4 = vpx_highbd_d153_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_d153_predictor_4x4 = vpx_highbd_d153_predictor_4x4_sse2;
   vpx_highbd_d153_predictor_8x8 = vpx_highbd_d153_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d153_predictor_8x8 = vpx_highbd_d153_predictor_8x8_ssse3;
@@ -9557,9 +7734,6 @@
   vpx_highbd_d207_predictor_32x32 = vpx_highbd_d207_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d207_predictor_32x32 = vpx_highbd_d207_predictor_32x32_ssse3;
-  vpx_highbd_d207_predictor_4x4 = vpx_highbd_d207_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_d207_predictor_4x4 = vpx_highbd_d207_predictor_4x4_sse2;
   vpx_highbd_d207_predictor_8x8 = vpx_highbd_d207_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d207_predictor_8x8 = vpx_highbd_d207_predictor_8x8_ssse3;
@@ -9581,446 +7755,70 @@
   vpx_highbd_d63_predictor_32x32 = vpx_highbd_d63_predictor_32x32_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d63_predictor_32x32 = vpx_highbd_d63_predictor_32x32_ssse3;
-  vpx_highbd_d63_predictor_4x4 = vpx_highbd_d63_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_d63_predictor_4x4 = vpx_highbd_d63_predictor_4x4_sse2;
   vpx_highbd_d63_predictor_8x8 = vpx_highbd_d63_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d63_predictor_8x8 = vpx_highbd_d63_predictor_8x8_ssse3;
-  vpx_highbd_dc_128_predictor_16x16 = vpx_highbd_dc_128_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_128_predictor_16x16 = vpx_highbd_dc_128_predictor_16x16_sse2;
-  vpx_highbd_dc_128_predictor_32x32 = vpx_highbd_dc_128_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_128_predictor_32x32 = vpx_highbd_dc_128_predictor_32x32_sse2;
-  vpx_highbd_dc_128_predictor_4x4 = vpx_highbd_dc_128_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_128_predictor_4x4 = vpx_highbd_dc_128_predictor_4x4_sse2;
-  vpx_highbd_dc_128_predictor_8x8 = vpx_highbd_dc_128_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_128_predictor_8x8 = vpx_highbd_dc_128_predictor_8x8_sse2;
-  vpx_highbd_dc_left_predictor_16x16 = vpx_highbd_dc_left_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_left_predictor_16x16 =
-        vpx_highbd_dc_left_predictor_16x16_sse2;
-  vpx_highbd_dc_left_predictor_32x32 = vpx_highbd_dc_left_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_left_predictor_32x32 =
-        vpx_highbd_dc_left_predictor_32x32_sse2;
-  vpx_highbd_dc_left_predictor_4x4 = vpx_highbd_dc_left_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_left_predictor_4x4 = vpx_highbd_dc_left_predictor_4x4_sse2;
-  vpx_highbd_dc_left_predictor_8x8 = vpx_highbd_dc_left_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_left_predictor_8x8 = vpx_highbd_dc_left_predictor_8x8_sse2;
-  vpx_highbd_dc_predictor_16x16 = vpx_highbd_dc_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_predictor_16x16 = vpx_highbd_dc_predictor_16x16_sse2;
-  vpx_highbd_dc_predictor_32x32 = vpx_highbd_dc_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_predictor_32x32 = vpx_highbd_dc_predictor_32x32_sse2;
-  vpx_highbd_dc_predictor_4x4 = vpx_highbd_dc_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_predictor_4x4 = vpx_highbd_dc_predictor_4x4_sse2;
-  vpx_highbd_dc_predictor_8x8 = vpx_highbd_dc_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_predictor_8x8 = vpx_highbd_dc_predictor_8x8_sse2;
-  vpx_highbd_dc_top_predictor_16x16 = vpx_highbd_dc_top_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_top_predictor_16x16 = vpx_highbd_dc_top_predictor_16x16_sse2;
-  vpx_highbd_dc_top_predictor_32x32 = vpx_highbd_dc_top_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_top_predictor_32x32 = vpx_highbd_dc_top_predictor_32x32_sse2;
-  vpx_highbd_dc_top_predictor_4x4 = vpx_highbd_dc_top_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_top_predictor_4x4 = vpx_highbd_dc_top_predictor_4x4_sse2;
-  vpx_highbd_dc_top_predictor_8x8 = vpx_highbd_dc_top_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_dc_top_predictor_8x8 = vpx_highbd_dc_top_predictor_8x8_sse2;
-  vpx_highbd_fdct16x16 = vpx_highbd_fdct16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_fdct16x16 = vpx_highbd_fdct16x16_sse2;
-  vpx_highbd_fdct32x32 = vpx_highbd_fdct32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_fdct32x32 = vpx_highbd_fdct32x32_sse2;
-  vpx_highbd_fdct32x32_rd = vpx_highbd_fdct32x32_rd_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_fdct32x32_rd = vpx_highbd_fdct32x32_rd_sse2;
-  vpx_highbd_fdct4x4 = vpx_highbd_fdct4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_fdct4x4 = vpx_highbd_fdct4x4_sse2;
-  vpx_highbd_fdct8x8 = vpx_highbd_fdct8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_fdct8x8 = vpx_highbd_fdct8x8_sse2;
-  vpx_highbd_h_predictor_16x16 = vpx_highbd_h_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_h_predictor_16x16 = vpx_highbd_h_predictor_16x16_sse2;
-  vpx_highbd_h_predictor_32x32 = vpx_highbd_h_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_h_predictor_32x32 = vpx_highbd_h_predictor_32x32_sse2;
-  vpx_highbd_h_predictor_4x4 = vpx_highbd_h_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_h_predictor_4x4 = vpx_highbd_h_predictor_4x4_sse2;
-  vpx_highbd_h_predictor_8x8 = vpx_highbd_h_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_h_predictor_8x8 = vpx_highbd_h_predictor_8x8_sse2;
-  vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_sse2;
+  vpx_highbd_hadamard_16x16 = vpx_highbd_hadamard_16x16_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_hadamard_16x16 = vpx_highbd_hadamard_16x16_avx2;
+  vpx_highbd_hadamard_32x32 = vpx_highbd_hadamard_32x32_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_hadamard_32x32 = vpx_highbd_hadamard_32x32_avx2;
+  vpx_highbd_hadamard_8x8 = vpx_highbd_hadamard_8x8_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_hadamard_8x8 = vpx_highbd_hadamard_8x8_avx2;
+  vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_sse4_1;
-  vpx_highbd_idct16x16_1_add = vpx_highbd_idct16x16_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct16x16_1_add = vpx_highbd_idct16x16_1_add_sse2;
-  vpx_highbd_idct16x16_256_add = vpx_highbd_idct16x16_256_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct16x16_256_add = vpx_highbd_idct16x16_256_add_sse2;
+  vpx_highbd_idct16x16_256_add = vpx_highbd_idct16x16_256_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct16x16_256_add = vpx_highbd_idct16x16_256_add_sse4_1;
-  vpx_highbd_idct16x16_38_add = vpx_highbd_idct16x16_38_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct16x16_38_add = vpx_highbd_idct16x16_38_add_sse2;
+  vpx_highbd_idct16x16_38_add = vpx_highbd_idct16x16_38_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct16x16_38_add = vpx_highbd_idct16x16_38_add_sse4_1;
-  vpx_highbd_idct32x32_1024_add = vpx_highbd_idct32x32_1024_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct32x32_1024_add = vpx_highbd_idct32x32_1024_add_sse2;
+  vpx_highbd_idct32x32_1024_add = vpx_highbd_idct32x32_1024_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct32x32_1024_add = vpx_highbd_idct32x32_1024_add_sse4_1;
-  vpx_highbd_idct32x32_135_add = vpx_highbd_idct32x32_135_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct32x32_135_add = vpx_highbd_idct32x32_135_add_sse2;
+  vpx_highbd_idct32x32_135_add = vpx_highbd_idct32x32_135_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct32x32_135_add = vpx_highbd_idct32x32_135_add_sse4_1;
-  vpx_highbd_idct32x32_1_add = vpx_highbd_idct32x32_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct32x32_1_add = vpx_highbd_idct32x32_1_add_sse2;
-  vpx_highbd_idct32x32_34_add = vpx_highbd_idct32x32_34_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct32x32_34_add = vpx_highbd_idct32x32_34_add_sse2;
+  vpx_highbd_idct32x32_34_add = vpx_highbd_idct32x32_34_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct32x32_34_add = vpx_highbd_idct32x32_34_add_sse4_1;
-  vpx_highbd_idct4x4_16_add = vpx_highbd_idct4x4_16_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct4x4_16_add = vpx_highbd_idct4x4_16_add_sse2;
+  vpx_highbd_idct4x4_16_add = vpx_highbd_idct4x4_16_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct4x4_16_add = vpx_highbd_idct4x4_16_add_sse4_1;
-  vpx_highbd_idct4x4_1_add = vpx_highbd_idct4x4_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct4x4_1_add = vpx_highbd_idct4x4_1_add_sse2;
-  vpx_highbd_idct8x8_12_add = vpx_highbd_idct8x8_12_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct8x8_12_add = vpx_highbd_idct8x8_12_add_sse2;
+  vpx_highbd_idct8x8_12_add = vpx_highbd_idct8x8_12_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct8x8_12_add = vpx_highbd_idct8x8_12_add_sse4_1;
-  vpx_highbd_idct8x8_1_add = vpx_highbd_idct8x8_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct8x8_1_add = vpx_highbd_idct8x8_1_add_sse2;
-  vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_sse2;
+  vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_sse4_1;
-  vpx_highbd_lpf_horizontal_16 = vpx_highbd_lpf_horizontal_16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_16 = vpx_highbd_lpf_horizontal_16_sse2;
-  vpx_highbd_lpf_horizontal_16_dual = vpx_highbd_lpf_horizontal_16_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_16_dual = vpx_highbd_lpf_horizontal_16_dual_sse2;
-  vpx_highbd_lpf_horizontal_4 = vpx_highbd_lpf_horizontal_4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_4 = vpx_highbd_lpf_horizontal_4_sse2;
-  vpx_highbd_lpf_horizontal_4_dual = vpx_highbd_lpf_horizontal_4_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_4_dual = vpx_highbd_lpf_horizontal_4_dual_sse2;
-  vpx_highbd_lpf_horizontal_8 = vpx_highbd_lpf_horizontal_8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_8 = vpx_highbd_lpf_horizontal_8_sse2;
-  vpx_highbd_lpf_horizontal_8_dual = vpx_highbd_lpf_horizontal_8_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_horizontal_8_dual = vpx_highbd_lpf_horizontal_8_dual_sse2;
-  vpx_highbd_lpf_vertical_16 = vpx_highbd_lpf_vertical_16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_16 = vpx_highbd_lpf_vertical_16_sse2;
-  vpx_highbd_lpf_vertical_16_dual = vpx_highbd_lpf_vertical_16_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_16_dual = vpx_highbd_lpf_vertical_16_dual_sse2;
-  vpx_highbd_lpf_vertical_4 = vpx_highbd_lpf_vertical_4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_4 = vpx_highbd_lpf_vertical_4_sse2;
-  vpx_highbd_lpf_vertical_4_dual = vpx_highbd_lpf_vertical_4_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_4_dual = vpx_highbd_lpf_vertical_4_dual_sse2;
-  vpx_highbd_lpf_vertical_8 = vpx_highbd_lpf_vertical_8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_8 = vpx_highbd_lpf_vertical_8_sse2;
-  vpx_highbd_lpf_vertical_8_dual = vpx_highbd_lpf_vertical_8_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_lpf_vertical_8_dual = vpx_highbd_lpf_vertical_8_dual_sse2;
-  vpx_highbd_quantize_b = vpx_highbd_quantize_b_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_quantize_b = vpx_highbd_quantize_b_sse2;
-  vpx_highbd_quantize_b_32x32 = vpx_highbd_quantize_b_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_quantize_b_32x32 = vpx_highbd_quantize_b_32x32_sse2;
-  vpx_highbd_sad16x16 = vpx_highbd_sad16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x16 = vpx_highbd_sad16x16_sse2;
-  vpx_highbd_sad16x16_avg = vpx_highbd_sad16x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x16_avg = vpx_highbd_sad16x16_avg_sse2;
-  vpx_highbd_sad16x16x4d = vpx_highbd_sad16x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x16x4d = vpx_highbd_sad16x16x4d_sse2;
-  vpx_highbd_sad16x32 = vpx_highbd_sad16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x32 = vpx_highbd_sad16x32_sse2;
-  vpx_highbd_sad16x32_avg = vpx_highbd_sad16x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x32_avg = vpx_highbd_sad16x32_avg_sse2;
-  vpx_highbd_sad16x32x4d = vpx_highbd_sad16x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x32x4d = vpx_highbd_sad16x32x4d_sse2;
-  vpx_highbd_sad16x8 = vpx_highbd_sad16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x8 = vpx_highbd_sad16x8_sse2;
-  vpx_highbd_sad16x8_avg = vpx_highbd_sad16x8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x8_avg = vpx_highbd_sad16x8_avg_sse2;
-  vpx_highbd_sad16x8x4d = vpx_highbd_sad16x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad16x8x4d = vpx_highbd_sad16x8x4d_sse2;
-  vpx_highbd_sad32x16 = vpx_highbd_sad32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x16 = vpx_highbd_sad32x16_sse2;
-  vpx_highbd_sad32x16_avg = vpx_highbd_sad32x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x16_avg = vpx_highbd_sad32x16_avg_sse2;
-  vpx_highbd_sad32x16x4d = vpx_highbd_sad32x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x16x4d = vpx_highbd_sad32x16x4d_sse2;
-  vpx_highbd_sad32x32 = vpx_highbd_sad32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x32 = vpx_highbd_sad32x32_sse2;
-  vpx_highbd_sad32x32_avg = vpx_highbd_sad32x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x32_avg = vpx_highbd_sad32x32_avg_sse2;
-  vpx_highbd_sad32x32x4d = vpx_highbd_sad32x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x32x4d = vpx_highbd_sad32x32x4d_sse2;
-  vpx_highbd_sad32x64 = vpx_highbd_sad32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x64 = vpx_highbd_sad32x64_sse2;
-  vpx_highbd_sad32x64_avg = vpx_highbd_sad32x64_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x64_avg = vpx_highbd_sad32x64_avg_sse2;
-  vpx_highbd_sad32x64x4d = vpx_highbd_sad32x64x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad32x64x4d = vpx_highbd_sad32x64x4d_sse2;
-  vpx_highbd_sad4x4x4d = vpx_highbd_sad4x4x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad4x4x4d = vpx_highbd_sad4x4x4d_sse2;
-  vpx_highbd_sad4x8x4d = vpx_highbd_sad4x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad4x8x4d = vpx_highbd_sad4x8x4d_sse2;
-  vpx_highbd_sad64x32 = vpx_highbd_sad64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x32 = vpx_highbd_sad64x32_sse2;
-  vpx_highbd_sad64x32_avg = vpx_highbd_sad64x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x32_avg = vpx_highbd_sad64x32_avg_sse2;
-  vpx_highbd_sad64x32x4d = vpx_highbd_sad64x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x32x4d = vpx_highbd_sad64x32x4d_sse2;
-  vpx_highbd_sad64x64 = vpx_highbd_sad64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x64 = vpx_highbd_sad64x64_sse2;
-  vpx_highbd_sad64x64_avg = vpx_highbd_sad64x64_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x64_avg = vpx_highbd_sad64x64_avg_sse2;
-  vpx_highbd_sad64x64x4d = vpx_highbd_sad64x64x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad64x64x4d = vpx_highbd_sad64x64x4d_sse2;
-  vpx_highbd_sad8x16 = vpx_highbd_sad8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x16 = vpx_highbd_sad8x16_sse2;
-  vpx_highbd_sad8x16_avg = vpx_highbd_sad8x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x16_avg = vpx_highbd_sad8x16_avg_sse2;
-  vpx_highbd_sad8x16x4d = vpx_highbd_sad8x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x16x4d = vpx_highbd_sad8x16x4d_sse2;
-  vpx_highbd_sad8x4 = vpx_highbd_sad8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x4 = vpx_highbd_sad8x4_sse2;
-  vpx_highbd_sad8x4_avg = vpx_highbd_sad8x4_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x4_avg = vpx_highbd_sad8x4_avg_sse2;
-  vpx_highbd_sad8x4x4d = vpx_highbd_sad8x4x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x4x4d = vpx_highbd_sad8x4x4d_sse2;
-  vpx_highbd_sad8x8 = vpx_highbd_sad8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x8 = vpx_highbd_sad8x8_sse2;
-  vpx_highbd_sad8x8_avg = vpx_highbd_sad8x8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x8_avg = vpx_highbd_sad8x8_avg_sse2;
-  vpx_highbd_sad8x8x4d = vpx_highbd_sad8x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_sad8x8x4d = vpx_highbd_sad8x8x4d_sse2;
-  vpx_highbd_tm_predictor_16x16 = vpx_highbd_tm_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_tm_predictor_16x16 = vpx_highbd_tm_predictor_16x16_sse2;
-  vpx_highbd_tm_predictor_32x32 = vpx_highbd_tm_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_tm_predictor_32x32 = vpx_highbd_tm_predictor_32x32_sse2;
-  vpx_highbd_tm_predictor_4x4 = vpx_highbd_tm_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_tm_predictor_4x4 = vpx_highbd_tm_predictor_4x4_sse2;
-  vpx_highbd_tm_predictor_8x8 = vpx_highbd_tm_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_tm_predictor_8x8 = vpx_highbd_tm_predictor_8x8_sse2;
-  vpx_highbd_v_predictor_16x16 = vpx_highbd_v_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_v_predictor_16x16 = vpx_highbd_v_predictor_16x16_sse2;
-  vpx_highbd_v_predictor_32x32 = vpx_highbd_v_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_v_predictor_32x32 = vpx_highbd_v_predictor_32x32_sse2;
-  vpx_highbd_v_predictor_4x4 = vpx_highbd_v_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_v_predictor_4x4 = vpx_highbd_v_predictor_4x4_sse2;
-  vpx_highbd_v_predictor_8x8 = vpx_highbd_v_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_highbd_v_predictor_8x8 = vpx_highbd_v_predictor_8x8_sse2;
-  vpx_idct16x16_10_add = vpx_idct16x16_10_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct16x16_10_add = vpx_idct16x16_10_add_sse2;
-  vpx_idct16x16_1_add = vpx_idct16x16_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct16x16_1_add = vpx_idct16x16_1_add_sse2;
-  vpx_idct16x16_256_add = vpx_idct16x16_256_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct16x16_256_add = vpx_idct16x16_256_add_sse2;
-  vpx_idct16x16_38_add = vpx_idct16x16_38_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct16x16_38_add = vpx_idct16x16_38_add_sse2;
-  vpx_idct32x32_1024_add = vpx_idct32x32_1024_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct32x32_1024_add = vpx_idct32x32_1024_add_sse2;
-  vpx_idct32x32_135_add = vpx_idct32x32_135_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct32x32_135_add = vpx_idct32x32_135_add_sse2;
+  vpx_highbd_satd = vpx_highbd_satd_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_satd = vpx_highbd_satd_avx2;
+  vpx_idct32x32_135_add = vpx_idct32x32_135_add_sse2;
   if (flags & HAS_SSSE3)
     vpx_idct32x32_135_add = vpx_idct32x32_135_add_ssse3;
-  vpx_idct32x32_1_add = vpx_idct32x32_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct32x32_1_add = vpx_idct32x32_1_add_sse2;
-  vpx_idct32x32_34_add = vpx_idct32x32_34_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct32x32_34_add = vpx_idct32x32_34_add_sse2;
+  vpx_idct32x32_34_add = vpx_idct32x32_34_add_sse2;
   if (flags & HAS_SSSE3)
     vpx_idct32x32_34_add = vpx_idct32x32_34_add_ssse3;
-  vpx_idct4x4_16_add = vpx_idct4x4_16_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct4x4_16_add = vpx_idct4x4_16_add_sse2;
-  vpx_idct4x4_1_add = vpx_idct4x4_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct4x4_1_add = vpx_idct4x4_1_add_sse2;
-  vpx_idct8x8_12_add = vpx_idct8x8_12_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct8x8_12_add = vpx_idct8x8_12_add_sse2;
+  vpx_idct8x8_12_add = vpx_idct8x8_12_add_sse2;
   if (flags & HAS_SSSE3)
     vpx_idct8x8_12_add = vpx_idct8x8_12_add_ssse3;
-  vpx_idct8x8_1_add = vpx_idct8x8_1_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct8x8_1_add = vpx_idct8x8_1_add_sse2;
-  vpx_idct8x8_64_add = vpx_idct8x8_64_add_c;
-  if (flags & HAS_SSE2)
-    vpx_idct8x8_64_add = vpx_idct8x8_64_add_sse2;
-  vpx_int_pro_col = vpx_int_pro_col_c;
-  if (flags & HAS_SSE2)
-    vpx_int_pro_col = vpx_int_pro_col_sse2;
-  vpx_int_pro_row = vpx_int_pro_row_c;
-  if (flags & HAS_SSE2)
-    vpx_int_pro_row = vpx_int_pro_row_sse2;
-  vpx_iwht4x4_16_add = vpx_iwht4x4_16_add_c;
-  if (flags & HAS_SSE2)
-    vpx_iwht4x4_16_add = vpx_iwht4x4_16_add_sse2;
-  vpx_lpf_horizontal_16 = vpx_lpf_horizontal_16_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_16 = vpx_lpf_horizontal_16_sse2;
+  vpx_lpf_horizontal_16 = vpx_lpf_horizontal_16_sse2;
   if (flags & HAS_AVX2)
     vpx_lpf_horizontal_16 = vpx_lpf_horizontal_16_avx2;
-  vpx_lpf_horizontal_16_dual = vpx_lpf_horizontal_16_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_16_dual = vpx_lpf_horizontal_16_dual_sse2;
+  vpx_lpf_horizontal_16_dual = vpx_lpf_horizontal_16_dual_sse2;
   if (flags & HAS_AVX2)
     vpx_lpf_horizontal_16_dual = vpx_lpf_horizontal_16_dual_avx2;
-  vpx_lpf_horizontal_4 = vpx_lpf_horizontal_4_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_4 = vpx_lpf_horizontal_4_sse2;
-  vpx_lpf_horizontal_4_dual = vpx_lpf_horizontal_4_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_4_dual = vpx_lpf_horizontal_4_dual_sse2;
-  vpx_lpf_horizontal_8 = vpx_lpf_horizontal_8_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_8 = vpx_lpf_horizontal_8_sse2;
-  vpx_lpf_horizontal_8_dual = vpx_lpf_horizontal_8_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_horizontal_8_dual = vpx_lpf_horizontal_8_dual_sse2;
-  vpx_lpf_vertical_16 = vpx_lpf_vertical_16_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_16 = vpx_lpf_vertical_16_sse2;
-  vpx_lpf_vertical_16_dual = vpx_lpf_vertical_16_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_16_dual = vpx_lpf_vertical_16_dual_sse2;
-  vpx_lpf_vertical_4 = vpx_lpf_vertical_4_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_4 = vpx_lpf_vertical_4_sse2;
-  vpx_lpf_vertical_4_dual = vpx_lpf_vertical_4_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_4_dual = vpx_lpf_vertical_4_dual_sse2;
-  vpx_lpf_vertical_8 = vpx_lpf_vertical_8_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_8 = vpx_lpf_vertical_8_sse2;
-  vpx_lpf_vertical_8_dual = vpx_lpf_vertical_8_dual_c;
-  if (flags & HAS_SSE2)
-    vpx_lpf_vertical_8_dual = vpx_lpf_vertical_8_dual_sse2;
-  vpx_mbpost_proc_across_ip = vpx_mbpost_proc_across_ip_c;
-  if (flags & HAS_SSE2)
-    vpx_mbpost_proc_across_ip = vpx_mbpost_proc_across_ip_sse2;
-  vpx_mbpost_proc_down = vpx_mbpost_proc_down_c;
-  if (flags & HAS_SSE2)
-    vpx_mbpost_proc_down = vpx_mbpost_proc_down_sse2;
-  vpx_minmax_8x8 = vpx_minmax_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_minmax_8x8 = vpx_minmax_8x8_sse2;
-  vpx_mse16x16 = vpx_mse16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_mse16x16 = vpx_mse16x16_sse2;
+  vpx_mse16x16 = vpx_mse16x16_sse2;
   if (flags & HAS_AVX2)
     vpx_mse16x16 = vpx_mse16x16_avx2;
-  vpx_mse16x8 = vpx_mse16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_mse16x8 = vpx_mse16x8_sse2;
+  vpx_mse16x8 = vpx_mse16x8_sse2;
   if (flags & HAS_AVX2)
     vpx_mse16x8 = vpx_mse16x8_avx2;
-  vpx_mse8x16 = vpx_mse8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_mse8x16 = vpx_mse8x16_sse2;
-  vpx_mse8x8 = vpx_mse8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_mse8x8 = vpx_mse8x8_sse2;
-  vpx_plane_add_noise = vpx_plane_add_noise_c;
-  if (flags & HAS_SSE2)
-    vpx_plane_add_noise = vpx_plane_add_noise_sse2;
-  vpx_post_proc_down_and_across_mb_row = vpx_post_proc_down_and_across_mb_row_c;
-  if (flags & HAS_SSE2)
-    vpx_post_proc_down_and_across_mb_row =
-        vpx_post_proc_down_and_across_mb_row_sse2;
-  vpx_quantize_b = vpx_quantize_b_c;
-  if (flags & HAS_SSE2)
-    vpx_quantize_b = vpx_quantize_b_sse2;
+  vpx_quantize_b = vpx_quantize_b_sse2;
   if (flags & HAS_SSSE3)
     vpx_quantize_b = vpx_quantize_b_ssse3;
   if (flags & HAS_AVX)
@@ -10030,415 +7828,195 @@
     vpx_quantize_b_32x32 = vpx_quantize_b_32x32_ssse3;
   if (flags & HAS_AVX)
     vpx_quantize_b_32x32 = vpx_quantize_b_32x32_avx;
-  vpx_sad16x16 = vpx_sad16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x16 = vpx_sad16x16_sse2;
-  vpx_sad16x16_avg = vpx_sad16x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x16_avg = vpx_sad16x16_avg_sse2;
   vpx_sad16x16x3 = vpx_sad16x16x3_c;
   if (flags & HAS_SSE3)
     vpx_sad16x16x3 = vpx_sad16x16x3_sse3;
   if (flags & HAS_SSSE3)
     vpx_sad16x16x3 = vpx_sad16x16x3_ssse3;
-  vpx_sad16x16x4d = vpx_sad16x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x16x4d = vpx_sad16x16x4d_sse2;
   vpx_sad16x16x8 = vpx_sad16x16x8_c;
   if (flags & HAS_SSE4_1)
     vpx_sad16x16x8 = vpx_sad16x16x8_sse4_1;
-  vpx_sad16x32 = vpx_sad16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x32 = vpx_sad16x32_sse2;
-  vpx_sad16x32_avg = vpx_sad16x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x32_avg = vpx_sad16x32_avg_sse2;
-  vpx_sad16x32x4d = vpx_sad16x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x32x4d = vpx_sad16x32x4d_sse2;
-  vpx_sad16x8 = vpx_sad16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x8 = vpx_sad16x8_sse2;
-  vpx_sad16x8_avg = vpx_sad16x8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x8_avg = vpx_sad16x8_avg_sse2;
   vpx_sad16x8x3 = vpx_sad16x8x3_c;
   if (flags & HAS_SSE3)
     vpx_sad16x8x3 = vpx_sad16x8x3_sse3;
   if (flags & HAS_SSSE3)
     vpx_sad16x8x3 = vpx_sad16x8x3_ssse3;
-  vpx_sad16x8x4d = vpx_sad16x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad16x8x4d = vpx_sad16x8x4d_sse2;
   vpx_sad16x8x8 = vpx_sad16x8x8_c;
   if (flags & HAS_SSE4_1)
     vpx_sad16x8x8 = vpx_sad16x8x8_sse4_1;
-  vpx_sad32x16 = vpx_sad32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x16 = vpx_sad32x16_sse2;
+  vpx_sad32x16 = vpx_sad32x16_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x16 = vpx_sad32x16_avx2;
-  vpx_sad32x16_avg = vpx_sad32x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x16_avg = vpx_sad32x16_avg_sse2;
+  vpx_sad32x16_avg = vpx_sad32x16_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x16_avg = vpx_sad32x16_avg_avx2;
-  vpx_sad32x16x4d = vpx_sad32x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x16x4d = vpx_sad32x16x4d_sse2;
-  vpx_sad32x32 = vpx_sad32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x32 = vpx_sad32x32_sse2;
+  vpx_sad32x32 = vpx_sad32x32_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x32 = vpx_sad32x32_avx2;
-  vpx_sad32x32_avg = vpx_sad32x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x32_avg = vpx_sad32x32_avg_sse2;
+  vpx_sad32x32_avg = vpx_sad32x32_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x32_avg = vpx_sad32x32_avg_avx2;
-  vpx_sad32x32x4d = vpx_sad32x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x32x4d = vpx_sad32x32x4d_sse2;
+  vpx_sad32x32x4d = vpx_sad32x32x4d_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x32x4d = vpx_sad32x32x4d_avx2;
-  vpx_sad32x64 = vpx_sad32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x64 = vpx_sad32x64_sse2;
+  vpx_sad32x32x8 = vpx_sad32x32x8_c;
+  if (flags & HAS_AVX2)
+    vpx_sad32x32x8 = vpx_sad32x32x8_avx2;
+  vpx_sad32x64 = vpx_sad32x64_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x64 = vpx_sad32x64_avx2;
-  vpx_sad32x64_avg = vpx_sad32x64_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x64_avg = vpx_sad32x64_avg_sse2;
+  vpx_sad32x64_avg = vpx_sad32x64_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x64_avg = vpx_sad32x64_avg_avx2;
-  vpx_sad32x64x4d = vpx_sad32x64x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad32x64x4d = vpx_sad32x64x4d_sse2;
-  vpx_sad4x4 = vpx_sad4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x4 = vpx_sad4x4_sse2;
-  vpx_sad4x4_avg = vpx_sad4x4_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x4_avg = vpx_sad4x4_avg_sse2;
   vpx_sad4x4x3 = vpx_sad4x4x3_c;
   if (flags & HAS_SSE3)
     vpx_sad4x4x3 = vpx_sad4x4x3_sse3;
-  vpx_sad4x4x4d = vpx_sad4x4x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x4x4d = vpx_sad4x4x4d_sse2;
   vpx_sad4x4x8 = vpx_sad4x4x8_c;
   if (flags & HAS_SSE4_1)
     vpx_sad4x4x8 = vpx_sad4x4x8_sse4_1;
-  vpx_sad4x8 = vpx_sad4x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x8 = vpx_sad4x8_sse2;
-  vpx_sad4x8_avg = vpx_sad4x8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x8_avg = vpx_sad4x8_avg_sse2;
-  vpx_sad4x8x4d = vpx_sad4x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad4x8x4d = vpx_sad4x8x4d_sse2;
-  vpx_sad64x32 = vpx_sad64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x32 = vpx_sad64x32_sse2;
+  vpx_sad64x32 = vpx_sad64x32_sse2;
   if (flags & HAS_AVX2)
     vpx_sad64x32 = vpx_sad64x32_avx2;
-  vpx_sad64x32_avg = vpx_sad64x32_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x32_avg = vpx_sad64x32_avg_sse2;
+  vpx_sad64x32_avg = vpx_sad64x32_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_sad64x32_avg = vpx_sad64x32_avg_avx2;
-  vpx_sad64x32x4d = vpx_sad64x32x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x32x4d = vpx_sad64x32x4d_sse2;
-  vpx_sad64x64 = vpx_sad64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x64 = vpx_sad64x64_sse2;
+  vpx_sad64x64 = vpx_sad64x64_sse2;
   if (flags & HAS_AVX2)
     vpx_sad64x64 = vpx_sad64x64_avx2;
-  vpx_sad64x64_avg = vpx_sad64x64_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x64_avg = vpx_sad64x64_avg_sse2;
+  vpx_sad64x64_avg = vpx_sad64x64_avg_sse2;
   if (flags & HAS_AVX2)
     vpx_sad64x64_avg = vpx_sad64x64_avg_avx2;
-  vpx_sad64x64x4d = vpx_sad64x64x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad64x64x4d = vpx_sad64x64x4d_sse2;
+  vpx_sad64x64x4d = vpx_sad64x64x4d_sse2;
   if (flags & HAS_AVX2)
     vpx_sad64x64x4d = vpx_sad64x64x4d_avx2;
-  vpx_sad8x16 = vpx_sad8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x16 = vpx_sad8x16_sse2;
-  vpx_sad8x16_avg = vpx_sad8x16_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x16_avg = vpx_sad8x16_avg_sse2;
   vpx_sad8x16x3 = vpx_sad8x16x3_c;
   if (flags & HAS_SSE3)
     vpx_sad8x16x3 = vpx_sad8x16x3_sse3;
-  vpx_sad8x16x4d = vpx_sad8x16x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x16x4d = vpx_sad8x16x4d_sse2;
   vpx_sad8x16x8 = vpx_sad8x16x8_c;
   if (flags & HAS_SSE4_1)
     vpx_sad8x16x8 = vpx_sad8x16x8_sse4_1;
-  vpx_sad8x4 = vpx_sad8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x4 = vpx_sad8x4_sse2;
-  vpx_sad8x4_avg = vpx_sad8x4_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x4_avg = vpx_sad8x4_avg_sse2;
-  vpx_sad8x4x4d = vpx_sad8x4x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x4x4d = vpx_sad8x4x4d_sse2;
-  vpx_sad8x8 = vpx_sad8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x8 = vpx_sad8x8_sse2;
-  vpx_sad8x8_avg = vpx_sad8x8_avg_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x8_avg = vpx_sad8x8_avg_sse2;
   vpx_sad8x8x3 = vpx_sad8x8x3_c;
   if (flags & HAS_SSE3)
     vpx_sad8x8x3 = vpx_sad8x8x3_sse3;
-  vpx_sad8x8x4d = vpx_sad8x8x4d_c;
-  if (flags & HAS_SSE2)
-    vpx_sad8x8x4d = vpx_sad8x8x4d_sse2;
   vpx_sad8x8x8 = vpx_sad8x8x8_c;
   if (flags & HAS_SSE4_1)
     vpx_sad8x8x8 = vpx_sad8x8x8_sse4_1;
-  vpx_satd = vpx_satd_c;
-  if (flags & HAS_SSE2)
-    vpx_satd = vpx_satd_sse2;
+  vpx_satd = vpx_satd_sse2;
   if (flags & HAS_AVX2)
     vpx_satd = vpx_satd_avx2;
   vpx_scaled_2d = vpx_scaled_2d_c;
   if (flags & HAS_SSSE3)
     vpx_scaled_2d = vpx_scaled_2d_ssse3;
-  vpx_sub_pixel_avg_variance16x16 = vpx_sub_pixel_avg_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance16x16 = vpx_sub_pixel_avg_variance16x16_sse2;
+  vpx_sub_pixel_avg_variance16x16 = vpx_sub_pixel_avg_variance16x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance16x16 = vpx_sub_pixel_avg_variance16x16_ssse3;
-  vpx_sub_pixel_avg_variance16x32 = vpx_sub_pixel_avg_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance16x32 = vpx_sub_pixel_avg_variance16x32_sse2;
+  vpx_sub_pixel_avg_variance16x32 = vpx_sub_pixel_avg_variance16x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance16x32 = vpx_sub_pixel_avg_variance16x32_ssse3;
-  vpx_sub_pixel_avg_variance16x8 = vpx_sub_pixel_avg_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance16x8 = vpx_sub_pixel_avg_variance16x8_sse2;
+  vpx_sub_pixel_avg_variance16x8 = vpx_sub_pixel_avg_variance16x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance16x8 = vpx_sub_pixel_avg_variance16x8_ssse3;
-  vpx_sub_pixel_avg_variance32x16 = vpx_sub_pixel_avg_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance32x16 = vpx_sub_pixel_avg_variance32x16_sse2;
+  vpx_sub_pixel_avg_variance32x16 = vpx_sub_pixel_avg_variance32x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance32x16 = vpx_sub_pixel_avg_variance32x16_ssse3;
-  vpx_sub_pixel_avg_variance32x32 = vpx_sub_pixel_avg_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance32x32 = vpx_sub_pixel_avg_variance32x32_sse2;
+  vpx_sub_pixel_avg_variance32x32 = vpx_sub_pixel_avg_variance32x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance32x32 = vpx_sub_pixel_avg_variance32x32_ssse3;
   if (flags & HAS_AVX2)
     vpx_sub_pixel_avg_variance32x32 = vpx_sub_pixel_avg_variance32x32_avx2;
-  vpx_sub_pixel_avg_variance32x64 = vpx_sub_pixel_avg_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance32x64 = vpx_sub_pixel_avg_variance32x64_sse2;
+  vpx_sub_pixel_avg_variance32x64 = vpx_sub_pixel_avg_variance32x64_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance32x64 = vpx_sub_pixel_avg_variance32x64_ssse3;
-  vpx_sub_pixel_avg_variance4x4 = vpx_sub_pixel_avg_variance4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance4x4 = vpx_sub_pixel_avg_variance4x4_sse2;
+  vpx_sub_pixel_avg_variance4x4 = vpx_sub_pixel_avg_variance4x4_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance4x4 = vpx_sub_pixel_avg_variance4x4_ssse3;
-  vpx_sub_pixel_avg_variance4x8 = vpx_sub_pixel_avg_variance4x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance4x8 = vpx_sub_pixel_avg_variance4x8_sse2;
+  vpx_sub_pixel_avg_variance4x8 = vpx_sub_pixel_avg_variance4x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance4x8 = vpx_sub_pixel_avg_variance4x8_ssse3;
-  vpx_sub_pixel_avg_variance64x32 = vpx_sub_pixel_avg_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance64x32 = vpx_sub_pixel_avg_variance64x32_sse2;
+  vpx_sub_pixel_avg_variance64x32 = vpx_sub_pixel_avg_variance64x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance64x32 = vpx_sub_pixel_avg_variance64x32_ssse3;
-  vpx_sub_pixel_avg_variance64x64 = vpx_sub_pixel_avg_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance64x64 = vpx_sub_pixel_avg_variance64x64_sse2;
+  vpx_sub_pixel_avg_variance64x64 = vpx_sub_pixel_avg_variance64x64_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance64x64 = vpx_sub_pixel_avg_variance64x64_ssse3;
   if (flags & HAS_AVX2)
     vpx_sub_pixel_avg_variance64x64 = vpx_sub_pixel_avg_variance64x64_avx2;
-  vpx_sub_pixel_avg_variance8x16 = vpx_sub_pixel_avg_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance8x16 = vpx_sub_pixel_avg_variance8x16_sse2;
+  vpx_sub_pixel_avg_variance8x16 = vpx_sub_pixel_avg_variance8x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance8x16 = vpx_sub_pixel_avg_variance8x16_ssse3;
-  vpx_sub_pixel_avg_variance8x4 = vpx_sub_pixel_avg_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance8x4 = vpx_sub_pixel_avg_variance8x4_sse2;
+  vpx_sub_pixel_avg_variance8x4 = vpx_sub_pixel_avg_variance8x4_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance8x4 = vpx_sub_pixel_avg_variance8x4_ssse3;
-  vpx_sub_pixel_avg_variance8x8 = vpx_sub_pixel_avg_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_avg_variance8x8 = vpx_sub_pixel_avg_variance8x8_sse2;
+  vpx_sub_pixel_avg_variance8x8 = vpx_sub_pixel_avg_variance8x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_avg_variance8x8 = vpx_sub_pixel_avg_variance8x8_ssse3;
-  vpx_sub_pixel_variance16x16 = vpx_sub_pixel_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance16x16 = vpx_sub_pixel_variance16x16_sse2;
+  vpx_sub_pixel_variance16x16 = vpx_sub_pixel_variance16x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance16x16 = vpx_sub_pixel_variance16x16_ssse3;
-  vpx_sub_pixel_variance16x32 = vpx_sub_pixel_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance16x32 = vpx_sub_pixel_variance16x32_sse2;
+  vpx_sub_pixel_variance16x32 = vpx_sub_pixel_variance16x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance16x32 = vpx_sub_pixel_variance16x32_ssse3;
-  vpx_sub_pixel_variance16x8 = vpx_sub_pixel_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance16x8 = vpx_sub_pixel_variance16x8_sse2;
+  vpx_sub_pixel_variance16x8 = vpx_sub_pixel_variance16x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance16x8 = vpx_sub_pixel_variance16x8_ssse3;
-  vpx_sub_pixel_variance32x16 = vpx_sub_pixel_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance32x16 = vpx_sub_pixel_variance32x16_sse2;
+  vpx_sub_pixel_variance32x16 = vpx_sub_pixel_variance32x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance32x16 = vpx_sub_pixel_variance32x16_ssse3;
-  vpx_sub_pixel_variance32x32 = vpx_sub_pixel_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance32x32 = vpx_sub_pixel_variance32x32_sse2;
+  vpx_sub_pixel_variance32x32 = vpx_sub_pixel_variance32x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance32x32 = vpx_sub_pixel_variance32x32_ssse3;
   if (flags & HAS_AVX2)
     vpx_sub_pixel_variance32x32 = vpx_sub_pixel_variance32x32_avx2;
-  vpx_sub_pixel_variance32x64 = vpx_sub_pixel_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance32x64 = vpx_sub_pixel_variance32x64_sse2;
+  vpx_sub_pixel_variance32x64 = vpx_sub_pixel_variance32x64_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance32x64 = vpx_sub_pixel_variance32x64_ssse3;
-  vpx_sub_pixel_variance4x4 = vpx_sub_pixel_variance4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance4x4 = vpx_sub_pixel_variance4x4_sse2;
+  vpx_sub_pixel_variance4x4 = vpx_sub_pixel_variance4x4_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance4x4 = vpx_sub_pixel_variance4x4_ssse3;
-  vpx_sub_pixel_variance4x8 = vpx_sub_pixel_variance4x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance4x8 = vpx_sub_pixel_variance4x8_sse2;
+  vpx_sub_pixel_variance4x8 = vpx_sub_pixel_variance4x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance4x8 = vpx_sub_pixel_variance4x8_ssse3;
-  vpx_sub_pixel_variance64x32 = vpx_sub_pixel_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance64x32 = vpx_sub_pixel_variance64x32_sse2;
+  vpx_sub_pixel_variance64x32 = vpx_sub_pixel_variance64x32_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance64x32 = vpx_sub_pixel_variance64x32_ssse3;
-  vpx_sub_pixel_variance64x64 = vpx_sub_pixel_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance64x64 = vpx_sub_pixel_variance64x64_sse2;
+  vpx_sub_pixel_variance64x64 = vpx_sub_pixel_variance64x64_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance64x64 = vpx_sub_pixel_variance64x64_ssse3;
   if (flags & HAS_AVX2)
     vpx_sub_pixel_variance64x64 = vpx_sub_pixel_variance64x64_avx2;
-  vpx_sub_pixel_variance8x16 = vpx_sub_pixel_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance8x16 = vpx_sub_pixel_variance8x16_sse2;
+  vpx_sub_pixel_variance8x16 = vpx_sub_pixel_variance8x16_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance8x16 = vpx_sub_pixel_variance8x16_ssse3;
-  vpx_sub_pixel_variance8x4 = vpx_sub_pixel_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance8x4 = vpx_sub_pixel_variance8x4_sse2;
+  vpx_sub_pixel_variance8x4 = vpx_sub_pixel_variance8x4_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance8x4 = vpx_sub_pixel_variance8x4_ssse3;
-  vpx_sub_pixel_variance8x8 = vpx_sub_pixel_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_sub_pixel_variance8x8 = vpx_sub_pixel_variance8x8_sse2;
+  vpx_sub_pixel_variance8x8 = vpx_sub_pixel_variance8x8_sse2;
   if (flags & HAS_SSSE3)
     vpx_sub_pixel_variance8x8 = vpx_sub_pixel_variance8x8_ssse3;
-  vpx_subtract_block = vpx_subtract_block_c;
-  if (flags & HAS_SSE2)
-    vpx_subtract_block = vpx_subtract_block_sse2;
-  vpx_sum_squares_2d_i16 = vpx_sum_squares_2d_i16_c;
-  if (flags & HAS_SSE2)
-    vpx_sum_squares_2d_i16 = vpx_sum_squares_2d_i16_sse2;
-  vpx_tm_predictor_16x16 = vpx_tm_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_tm_predictor_16x16 = vpx_tm_predictor_16x16_sse2;
-  vpx_tm_predictor_32x32 = vpx_tm_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_tm_predictor_32x32 = vpx_tm_predictor_32x32_sse2;
-  vpx_tm_predictor_4x4 = vpx_tm_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_tm_predictor_4x4 = vpx_tm_predictor_4x4_sse2;
-  vpx_tm_predictor_8x8 = vpx_tm_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_tm_predictor_8x8 = vpx_tm_predictor_8x8_sse2;
-  vpx_v_predictor_16x16 = vpx_v_predictor_16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_v_predictor_16x16 = vpx_v_predictor_16x16_sse2;
-  vpx_v_predictor_32x32 = vpx_v_predictor_32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_v_predictor_32x32 = vpx_v_predictor_32x32_sse2;
-  vpx_v_predictor_4x4 = vpx_v_predictor_4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_v_predictor_4x4 = vpx_v_predictor_4x4_sse2;
-  vpx_v_predictor_8x8 = vpx_v_predictor_8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_v_predictor_8x8 = vpx_v_predictor_8x8_sse2;
-  vpx_variance16x16 = vpx_variance16x16_c;
-  if (flags & HAS_SSE2)
-    vpx_variance16x16 = vpx_variance16x16_sse2;
+  vpx_variance16x16 = vpx_variance16x16_sse2;
   if (flags & HAS_AVX2)
     vpx_variance16x16 = vpx_variance16x16_avx2;
-  vpx_variance16x32 = vpx_variance16x32_c;
-  if (flags & HAS_SSE2)
-    vpx_variance16x32 = vpx_variance16x32_sse2;
+  vpx_variance16x32 = vpx_variance16x32_sse2;
   if (flags & HAS_AVX2)
     vpx_variance16x32 = vpx_variance16x32_avx2;
-  vpx_variance16x8 = vpx_variance16x8_c;
-  if (flags & HAS_SSE2)
-    vpx_variance16x8 = vpx_variance16x8_sse2;
+  vpx_variance16x8 = vpx_variance16x8_sse2;
   if (flags & HAS_AVX2)
     vpx_variance16x8 = vpx_variance16x8_avx2;
-  vpx_variance32x16 = vpx_variance32x16_c;
-  if (flags & HAS_SSE2)
-    vpx_variance32x16 = vpx_variance32x16_sse2;
+  vpx_variance32x16 = vpx_variance32x16_sse2;
   if (flags & HAS_AVX2)
     vpx_variance32x16 = vpx_variance32x16_avx2;
-  vpx_variance32x32 = vpx_variance32x32_c;
-  if (flags & HAS_SSE2)
-    vpx_variance32x32 = vpx_variance32x32_sse2;
+  vpx_variance32x32 = vpx_variance32x32_sse2;
   if (flags & HAS_AVX2)
     vpx_variance32x32 = vpx_variance32x32_avx2;
-  vpx_variance32x64 = vpx_variance32x64_c;
-  if (flags & HAS_SSE2)
-    vpx_variance32x64 = vpx_variance32x64_sse2;
+  vpx_variance32x64 = vpx_variance32x64_sse2;
   if (flags & HAS_AVX2)
     vpx_variance32x64 = vpx_variance32x64_avx2;
-  vpx_variance4x4 = vpx_variance4x4_c;
-  if (flags & HAS_SSE2)
-    vpx_variance4x4 = vpx_variance4x4_sse2;
-  vpx_variance4x8 = vpx_variance4x8_c;
-  if (flags & HAS_SSE2)
-    vpx_variance4x8 = vpx_variance4x8_sse2;
-  vpx_variance64x32 = vpx_variance64x32_c;
-  if (flags & HAS_SSE2)
-    vpx_variance64x32 = vpx_variance64x32_sse2;
+  vpx_variance64x32 = vpx_variance64x32_sse2;
   if (flags & HAS_AVX2)
     vpx_variance64x32 = vpx_variance64x32_avx2;
-  vpx_variance64x64 = vpx_variance64x64_c;
-  if (flags & HAS_SSE2)
-    vpx_variance64x64 = vpx_variance64x64_sse2;
+  vpx_variance64x64 = vpx_variance64x64_sse2;
   if (flags & HAS_AVX2)
     vpx_variance64x64 = vpx_variance64x64_avx2;
-  vpx_variance8x16 = vpx_variance8x16_c;
-  if (flags & HAS_SSE2)
-    vpx_variance8x16 = vpx_variance8x16_sse2;
-  vpx_variance8x4 = vpx_variance8x4_c;
-  if (flags & HAS_SSE2)
-    vpx_variance8x4 = vpx_variance8x4_sse2;
-  vpx_variance8x8 = vpx_variance8x8_c;
-  if (flags & HAS_SSE2)
-    vpx_variance8x8 = vpx_variance8x8_sse2;
-  vpx_vector_var = vpx_vector_var_c;
-  if (flags & HAS_SSE2)
-    vpx_vector_var = vpx_vector_var_sse2;
 }
 #endif
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vp8_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vp8_rtcd.h
index 4e9d062..c46bfe5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vp8_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vp8_rtcd.h
@@ -27,90 +27,90 @@
 extern "C" {
 #endif
 
-void vp8_bilinear_predict16x16_c(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_bilinear_predict16x16_c(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-void vp8_bilinear_predict16x16_sse2(unsigned char* src,
-                                    int src_pitch,
-                                    int xofst,
-                                    int yofst,
-                                    unsigned char* dst,
+void vp8_bilinear_predict16x16_sse2(unsigned char* src_ptr,
+                                    int src_pixels_per_line,
+                                    int xoffset,
+                                    int yoffset,
+                                    unsigned char* dst_ptr,
                                     int dst_pitch);
-void vp8_bilinear_predict16x16_ssse3(unsigned char* src,
-                                     int src_pitch,
-                                     int xofst,
-                                     int yofst,
-                                     unsigned char* dst,
+void vp8_bilinear_predict16x16_ssse3(unsigned char* src_ptr,
+                                     int src_pixels_per_line,
+                                     int xoffset,
+                                     int yoffset,
+                                     unsigned char* dst_ptr,
                                      int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict16x16)(unsigned char* src,
-                                              int src_pitch,
-                                              int xofst,
-                                              int yofst,
-                                              unsigned char* dst,
+RTCD_EXTERN void (*vp8_bilinear_predict16x16)(unsigned char* src_ptr,
+                                              int src_pixels_per_line,
+                                              int xoffset,
+                                              int yoffset,
+                                              unsigned char* dst_ptr,
                                               int dst_pitch);
 
-void vp8_bilinear_predict4x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_bilinear_predict4x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_bilinear_predict4x4_mmx(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
-                                 int dst_pitch);
-#define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_mmx
-
-void vp8_bilinear_predict8x4_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
-                               int dst_pitch);
-void vp8_bilinear_predict8x4_mmx(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
-                                 int dst_pitch);
-#define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_mmx
-
-void vp8_bilinear_predict8x8_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
-                               int dst_pitch);
-void vp8_bilinear_predict8x8_sse2(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_bilinear_predict4x4_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
-void vp8_bilinear_predict8x8_ssse3(unsigned char* src,
-                                   int src_pitch,
-                                   int xofst,
-                                   int yofst,
-                                   unsigned char* dst,
+#define vp8_bilinear_predict4x4 vp8_bilinear_predict4x4_sse2
+
+void vp8_bilinear_predict8x4_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x4_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+#define vp8_bilinear_predict8x4 vp8_bilinear_predict8x4_sse2
+
+void vp8_bilinear_predict8x8_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
+                               int dst_pitch);
+void vp8_bilinear_predict8x8_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
+                                  int dst_pitch);
+void vp8_bilinear_predict8x8_ssse3(unsigned char* src_ptr,
+                                   int src_pixels_per_line,
+                                   int xoffset,
+                                   int yoffset,
+                                   unsigned char* dst_ptr,
                                    int dst_pitch);
-RTCD_EXTERN void (*vp8_bilinear_predict8x8)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
+RTCD_EXTERN void (*vp8_bilinear_predict8x8)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
                                             int dst_pitch);
 
 void vp8_blend_b_c(unsigned char* y,
                    unsigned char* u,
                    unsigned char* v,
-                   int y1,
-                   int u1,
-                   int v1,
+                   int y_1,
+                   int u_1,
+                   int v_1,
                    int alpha,
                    int stride);
 #define vp8_blend_b vp8_blend_b_c
@@ -118,9 +118,9 @@
 void vp8_blend_mb_inner_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_inner vp8_blend_mb_inner_c
@@ -128,9 +128,9 @@
 void vp8_blend_mb_outer_c(unsigned char* y,
                           unsigned char* u,
                           unsigned char* v,
-                          int y1,
-                          int u1,
-                          int v1,
+                          int y_1,
+                          int u_1,
+                          int v_1,
                           int alpha,
                           int stride);
 #define vp8_blend_mb_outer vp8_blend_mb_outer_c
@@ -140,65 +140,65 @@
 #define vp8_block_error vp8_block_error_sse2
 
 void vp8_copy32xn_c(const unsigned char* src_ptr,
-                    int source_stride,
+                    int src_stride,
                     unsigned char* dst_ptr,
                     int dst_stride,
-                    int n);
+                    int height);
 void vp8_copy32xn_sse2(const unsigned char* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        unsigned char* dst_ptr,
                        int dst_stride,
-                       int n);
+                       int height);
 void vp8_copy32xn_sse3(const unsigned char* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        unsigned char* dst_ptr,
                        int dst_stride,
-                       int n);
+                       int height);
 RTCD_EXTERN void (*vp8_copy32xn)(const unsigned char* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  unsigned char* dst_ptr,
                                  int dst_stride,
-                                 int n);
+                                 int height);
 
 void vp8_copy_mem16x16_c(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 void vp8_copy_mem16x16_sse2(unsigned char* src,
-                            int src_pitch,
+                            int src_stride,
                             unsigned char* dst,
-                            int dst_pitch);
+                            int dst_stride);
 #define vp8_copy_mem16x16 vp8_copy_mem16x16_sse2
 
 void vp8_copy_mem8x4_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 void vp8_copy_mem8x4_mmx(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 #define vp8_copy_mem8x4 vp8_copy_mem8x4_mmx
 
 void vp8_copy_mem8x8_c(unsigned char* src,
-                       int src_pitch,
+                       int src_stride,
                        unsigned char* dst,
-                       int dst_pitch);
+                       int dst_stride);
 void vp8_copy_mem8x8_mmx(unsigned char* src,
-                         int src_pitch,
+                         int src_stride,
                          unsigned char* dst,
-                         int dst_pitch);
+                         int dst_stride);
 #define vp8_copy_mem8x8 vp8_copy_mem8x8_mmx
 
-void vp8_dc_only_idct_add_c(short input,
-                            unsigned char* pred,
+void vp8_dc_only_idct_add_c(short input_dc,
+                            unsigned char* pred_ptr,
                             int pred_stride,
-                            unsigned char* dst,
+                            unsigned char* dst_ptr,
                             int dst_stride);
-void vp8_dc_only_idct_add_mmx(short input,
-                              unsigned char* pred,
+void vp8_dc_only_idct_add_mmx(short input_dc,
+                              unsigned char* pred_ptr,
                               int pred_stride,
-                              unsigned char* dst,
+                              unsigned char* dst_ptr,
                               int dst_stride);
 #define vp8_dc_only_idct_add vp8_dc_only_idct_add_mmx
 
@@ -240,11 +240,11 @@
 
 void vp8_dequant_idct_add_c(short* input,
                             short* dq,
-                            unsigned char* output,
+                            unsigned char* dest,
                             int stride);
 void vp8_dequant_idct_add_mmx(short* input,
                               short* dq,
-                              unsigned char* output,
+                              unsigned char* dest,
                               int stride);
 #define vp8_dequant_idct_add vp8_dequant_idct_add_mmx
 
@@ -274,8 +274,8 @@
                                        char* eobs);
 #define vp8_dequant_idct_add_y_block vp8_dequant_idct_add_y_block_sse2
 
-void vp8_dequantize_b_c(struct blockd*, short* dqc);
-void vp8_dequantize_b_mmx(struct blockd*, short* dqc);
+void vp8_dequantize_b_c(struct blockd*, short* DQC);
+void vp8_dequantize_b_mmx(struct blockd*, short* DQC);
 #define vp8_dequantize_b vp8_dequantize_b_mmx
 
 int vp8_diamond_search_sad_c(struct macroblock* x,
@@ -375,91 +375,91 @@
                                        int* mvcost[2],
                                        union int_mv* center_mv);
 
-void vp8_loop_filter_bh_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bh_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bh_sse2(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bh_sse2(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
 #define vp8_loop_filter_bh vp8_loop_filter_bh_sse2
 
-void vp8_loop_filter_bv_c(unsigned char* y,
-                          unsigned char* u,
-                          unsigned char* v,
-                          int ystride,
+void vp8_loop_filter_bv_c(unsigned char* y_ptr,
+                          unsigned char* u_ptr,
+                          unsigned char* v_ptr,
+                          int y_stride,
                           int uv_stride,
                           struct loop_filter_info* lfi);
-void vp8_loop_filter_bv_sse2(unsigned char* y,
-                             unsigned char* u,
-                             unsigned char* v,
-                             int ystride,
+void vp8_loop_filter_bv_sse2(unsigned char* y_ptr,
+                             unsigned char* u_ptr,
+                             unsigned char* v_ptr,
+                             int y_stride,
                              int uv_stride,
                              struct loop_filter_info* lfi);
 #define vp8_loop_filter_bv vp8_loop_filter_bv_sse2
 
-void vp8_loop_filter_mbh_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbh_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbh_sse2(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbh_sse2(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbh vp8_loop_filter_mbh_sse2
 
-void vp8_loop_filter_mbv_c(unsigned char* y,
-                           unsigned char* u,
-                           unsigned char* v,
-                           int ystride,
+void vp8_loop_filter_mbv_c(unsigned char* y_ptr,
+                           unsigned char* u_ptr,
+                           unsigned char* v_ptr,
+                           int y_stride,
                            int uv_stride,
                            struct loop_filter_info* lfi);
-void vp8_loop_filter_mbv_sse2(unsigned char* y,
-                              unsigned char* u,
-                              unsigned char* v,
-                              int ystride,
+void vp8_loop_filter_mbv_sse2(unsigned char* y_ptr,
+                              unsigned char* u_ptr,
+                              unsigned char* v_ptr,
+                              int y_stride,
                               int uv_stride,
                               struct loop_filter_info* lfi);
 #define vp8_loop_filter_mbv vp8_loop_filter_mbv_sse2
 
-void vp8_loop_filter_bhs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bhs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bhs_sse2(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bhs_sse2(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_bh vp8_loop_filter_bhs_sse2
 
-void vp8_loop_filter_bvs_c(unsigned char* y,
-                           int ystride,
+void vp8_loop_filter_bvs_c(unsigned char* y_ptr,
+                           int y_stride,
                            const unsigned char* blimit);
-void vp8_loop_filter_bvs_sse2(unsigned char* y,
-                              int ystride,
+void vp8_loop_filter_bvs_sse2(unsigned char* y_ptr,
+                              int y_stride,
                               const unsigned char* blimit);
 #define vp8_loop_filter_simple_bv vp8_loop_filter_bvs_sse2
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y,
-                                              int ystride,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char* y_ptr,
+                                              int y_stride,
                                               const unsigned char* blimit);
-void vp8_loop_filter_simple_horizontal_edge_sse2(unsigned char* y,
-                                                 int ystride,
+void vp8_loop_filter_simple_horizontal_edge_sse2(unsigned char* y_ptr,
+                                                 int y_stride,
                                                  const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbh vp8_loop_filter_simple_horizontal_edge_sse2
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y,
-                                            int ystride,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char* y_ptr,
+                                            int y_stride,
                                             const unsigned char* blimit);
-void vp8_loop_filter_simple_vertical_edge_sse2(unsigned char* y,
-                                               int ystride,
+void vp8_loop_filter_simple_vertical_edge_sse2(unsigned char* y_ptr,
+                                               int y_stride,
                                                const unsigned char* blimit);
 #define vp8_loop_filter_simple_mbv vp8_loop_filter_simple_vertical_edge_sse2
 
@@ -475,8 +475,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -484,8 +484,8 @@
                               struct block* b,
                               struct blockd* d,
                               union int_mv* ref_mv,
-                              int sad_per_bit,
-                              int distance,
+                              int error_per_bit,
+                              int search_range,
                               struct variance_vtable* fn_ptr,
                               int* mvcost[2],
                               union int_mv* center_mv);
@@ -505,126 +505,126 @@
 #define vp8_short_fdct8x4 vp8_short_fdct8x4_sse2
 
 void vp8_short_idct4x4llm_c(short* input,
-                            unsigned char* pred,
-                            int pitch,
-                            unsigned char* dst,
+                            unsigned char* pred_ptr,
+                            int pred_stride,
+                            unsigned char* dst_ptr,
                             int dst_stride);
 void vp8_short_idct4x4llm_mmx(short* input,
-                              unsigned char* pred,
-                              int pitch,
-                              unsigned char* dst,
+                              unsigned char* pred_ptr,
+                              int pred_stride,
+                              unsigned char* dst_ptr,
                               int dst_stride);
 #define vp8_short_idct4x4llm vp8_short_idct4x4llm_mmx
 
-void vp8_short_inv_walsh4x4_c(short* input, short* output);
-void vp8_short_inv_walsh4x4_sse2(short* input, short* output);
+void vp8_short_inv_walsh4x4_c(short* input, short* mb_dqcoeff);
+void vp8_short_inv_walsh4x4_sse2(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4 vp8_short_inv_walsh4x4_sse2
 
-void vp8_short_inv_walsh4x4_1_c(short* input, short* output);
+void vp8_short_inv_walsh4x4_1_c(short* input, short* mb_dqcoeff);
 #define vp8_short_inv_walsh4x4_1 vp8_short_inv_walsh4x4_1_c
 
 void vp8_short_walsh4x4_c(short* input, short* output, int pitch);
 void vp8_short_walsh4x4_sse2(short* input, short* output, int pitch);
 #define vp8_short_walsh4x4 vp8_short_walsh4x4_sse2
 
-void vp8_sixtap_predict16x16_c(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict16x16_c(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_sixtap_predict16x16_sse2(unsigned char* src,
-                                  int src_pitch,
-                                  int xofst,
-                                  int yofst,
-                                  unsigned char* dst,
+void vp8_sixtap_predict16x16_sse2(unsigned char* src_ptr,
+                                  int src_pixels_per_line,
+                                  int xoffset,
+                                  int yoffset,
+                                  unsigned char* dst_ptr,
                                   int dst_pitch);
-void vp8_sixtap_predict16x16_ssse3(unsigned char* src,
-                                   int src_pitch,
-                                   int xofst,
-                                   int yofst,
-                                   unsigned char* dst,
+void vp8_sixtap_predict16x16_ssse3(unsigned char* src_ptr,
+                                   int src_pixels_per_line,
+                                   int xoffset,
+                                   int yoffset,
+                                   unsigned char* dst_ptr,
                                    int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict16x16)(unsigned char* src,
-                                            int src_pitch,
-                                            int xofst,
-                                            int yofst,
-                                            unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict16x16)(unsigned char* src_ptr,
+                                            int src_pixels_per_line,
+                                            int xoffset,
+                                            int yoffset,
+                                            unsigned char* dst_ptr,
                                             int dst_pitch);
 
-void vp8_sixtap_predict4x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict4x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict4x4_mmx(unsigned char* src,
-                               int src_pitch,
-                               int xofst,
-                               int yofst,
-                               unsigned char* dst,
+void vp8_sixtap_predict4x4_mmx(unsigned char* src_ptr,
+                               int src_pixels_per_line,
+                               int xoffset,
+                               int yoffset,
+                               unsigned char* dst_ptr,
                                int dst_pitch);
-void vp8_sixtap_predict4x4_ssse3(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_sixtap_predict4x4_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict4x4)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict4x4)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
-void vp8_sixtap_predict8x4_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x4_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x4_sse2(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x4_sse2(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
-void vp8_sixtap_predict8x4_ssse3(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_sixtap_predict8x4_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict8x4)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict8x4)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
-void vp8_sixtap_predict8x8_c(unsigned char* src,
-                             int src_pitch,
-                             int xofst,
-                             int yofst,
-                             unsigned char* dst,
+void vp8_sixtap_predict8x8_c(unsigned char* src_ptr,
+                             int src_pixels_per_line,
+                             int xoffset,
+                             int yoffset,
+                             unsigned char* dst_ptr,
                              int dst_pitch);
-void vp8_sixtap_predict8x8_sse2(unsigned char* src,
-                                int src_pitch,
-                                int xofst,
-                                int yofst,
-                                unsigned char* dst,
+void vp8_sixtap_predict8x8_sse2(unsigned char* src_ptr,
+                                int src_pixels_per_line,
+                                int xoffset,
+                                int yoffset,
+                                unsigned char* dst_ptr,
                                 int dst_pitch);
-void vp8_sixtap_predict8x8_ssse3(unsigned char* src,
-                                 int src_pitch,
-                                 int xofst,
-                                 int yofst,
-                                 unsigned char* dst,
+void vp8_sixtap_predict8x8_ssse3(unsigned char* src_ptr,
+                                 int src_pixels_per_line,
+                                 int xoffset,
+                                 int yoffset,
+                                 unsigned char* dst_ptr,
                                  int dst_pitch);
-RTCD_EXTERN void (*vp8_sixtap_predict8x8)(unsigned char* src,
-                                          int src_pitch,
-                                          int xofst,
-                                          int yofst,
-                                          unsigned char* dst,
+RTCD_EXTERN void (*vp8_sixtap_predict8x8)(unsigned char* src_ptr,
+                                          int src_pixels_per_line,
+                                          int xoffset,
+                                          int yoffset,
+                                          unsigned char* dst_ptr,
                                           int dst_pitch);
 
 void vp8_rtcd(void);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vp9_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vp9_rtcd.h
index 6f00c78..12b9110 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vp9_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vp9_rtcd.h
@@ -110,46 +110,6 @@
     const struct vp9_variance_vtable* fn_ptr,
     const struct mv* center_mv);
 
-void vp9_fdct8x8_quant_c(const int16_t* input,
-                         int stride,
-                         tran_low_t* coeff_ptr,
-                         intptr_t n_coeffs,
-                         int skip_block,
-                         const int16_t* round_ptr,
-                         const int16_t* quant_ptr,
-                         tran_low_t* qcoeff_ptr,
-                         tran_low_t* dqcoeff_ptr,
-                         const int16_t* dequant_ptr,
-                         uint16_t* eob_ptr,
-                         const int16_t* scan,
-                         const int16_t* iscan);
-void vp9_fdct8x8_quant_ssse3(const int16_t* input,
-                             int stride,
-                             tran_low_t* coeff_ptr,
-                             intptr_t n_coeffs,
-                             int skip_block,
-                             const int16_t* round_ptr,
-                             const int16_t* quant_ptr,
-                             tran_low_t* qcoeff_ptr,
-                             tran_low_t* dqcoeff_ptr,
-                             const int16_t* dequant_ptr,
-                             uint16_t* eob_ptr,
-                             const int16_t* scan,
-                             const int16_t* iscan);
-RTCD_EXTERN void (*vp9_fdct8x8_quant)(const int16_t* input,
-                                      int stride,
-                                      tran_low_t* coeff_ptr,
-                                      intptr_t n_coeffs,
-                                      int skip_block,
-                                      const int16_t* round_ptr,
-                                      const int16_t* quant_ptr,
-                                      tran_low_t* qcoeff_ptr,
-                                      tran_low_t* dqcoeff_ptr,
-                                      const int16_t* dequant_ptr,
-                                      uint16_t* eob_ptr,
-                                      const int16_t* scan,
-                                      const int16_t* iscan);
-
 void vp9_fht16x16_c(const int16_t* input,
                     tran_low_t* output,
                     int stride,
@@ -242,18 +202,18 @@
 #define vp9_highbd_fwht4x4 vp9_highbd_fwht4x4_c
 
 void vp9_highbd_iht16x16_256_add_c(const tran_low_t* input,
-                                   uint16_t* output,
-                                   int pitch,
+                                   uint16_t* dest,
+                                   int stride,
                                    int tx_type,
                                    int bd);
 void vp9_highbd_iht16x16_256_add_sse4_1(const tran_low_t* input,
-                                        uint16_t* output,
-                                        int pitch,
+                                        uint16_t* dest,
+                                        int stride,
                                         int tx_type,
                                         int bd);
 RTCD_EXTERN void (*vp9_highbd_iht16x16_256_add)(const tran_low_t* input,
-                                                uint16_t* output,
-                                                int pitch,
+                                                uint16_t* dest,
+                                                int stride,
                                                 int tx_type,
                                                 int bd);
 
@@ -345,18 +305,19 @@
                                         unsigned int block_width,
                                         unsigned int block_height,
                                         int strength,
-                                        int filter_weight,
+                                        int* blk_fw,
+                                        int use_32x32,
                                         uint32_t* accumulator,
                                         uint16_t* count);
 #define vp9_highbd_temporal_filter_apply vp9_highbd_temporal_filter_apply_c
 
 void vp9_iht16x16_256_add_c(const tran_low_t* input,
-                            uint8_t* output,
-                            int pitch,
+                            uint8_t* dest,
+                            int stride,
                             int tx_type);
 void vp9_iht16x16_256_add_sse2(const tran_low_t* input,
-                               uint8_t* output,
-                               int pitch,
+                               uint8_t* dest,
+                               int stride,
                                int tx_type);
 #define vp9_iht16x16_256_add vp9_iht16x16_256_add_sse2
 
@@ -502,9 +463,6 @@
   vp9_diamond_search_sad = vp9_diamond_search_sad_c;
   if (flags & HAS_AVX)
     vp9_diamond_search_sad = vp9_diamond_search_sad_avx;
-  vp9_fdct8x8_quant = vp9_fdct8x8_quant_c;
-  if (flags & HAS_SSSE3)
-    vp9_fdct8x8_quant = vp9_fdct8x8_quant_ssse3;
   vp9_highbd_iht16x16_256_add = vp9_highbd_iht16x16_256_add_c;
   if (flags & HAS_SSE4_1)
     vp9_highbd_iht16x16_256_add = vp9_highbd_iht16x16_256_add_sse4_1;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_config.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_config.asm
index 664ecc0..507ab54 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_config.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_config.asm
@@ -1,7 +1,12 @@
+%define VPX_ARCH_ARM 0
 %define ARCH_ARM 0
+%define VPX_ARCH_MIPS 0
 %define ARCH_MIPS 0
+%define VPX_ARCH_X86 0
 %define ARCH_X86 0
+%define VPX_ARCH_X86_64 1
 %define ARCH_X86_64 1
+%define VPX_ARCH_PPC 0
 %define ARCH_PPC 0
 %define HAVE_NEON 0
 %define HAVE_NEON_ASM 0
@@ -66,20 +71,24 @@
 %define CONFIG_OS_SUPPORT 1
 %define CONFIG_UNIT_TESTS 1
 %define CONFIG_WEBM_IO 1
-%define CONFIG_LIBYUV 1
+%define CONFIG_LIBYUV 0
 %define CONFIG_DECODE_PERF_TESTS 0
 %define CONFIG_ENCODE_PERF_TESTS 0
 %define CONFIG_MULTI_RES_ENCODING 1
 %define CONFIG_TEMPORAL_DENOISING 1
 %define CONFIG_VP9_TEMPORAL_DENOISING 1
+%define CONFIG_CONSISTENT_RECODE 0
 %define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 %define CONFIG_VP9_HIGHBITDEPTH 1
 %define CONFIG_BETTER_HW_COMPATIBILITY 0
 %define CONFIG_EXPERIMENTAL 0
 %define CONFIG_SIZE_LIMIT 1
 %define CONFIG_ALWAYS_ADJUST_BPM 0
-%define CONFIG_SPATIAL_SVC 0
+%define CONFIG_BITSTREAM_DEBUG 0
+%define CONFIG_MISMATCH_DEBUG 0
 %define CONFIG_FP_MB_STATS 0
 %define CONFIG_EMULATE_HARDWARE 0
+%define CONFIG_NON_GREEDY_MV 0
+%define CONFIG_RATE_CTRL 0
 %define DECODE_WIDTH_LIMIT 16384
 %define DECODE_HEIGHT_LIMIT 16384
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_config.c
index 1d316b7..e0b39b5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_config.c
@@ -6,5 +6,5 @@
 /* in the file PATENTS.  All contributing project authors may */
 /* be found in the AUTHORS file in the root of the source tree. */
 #include "vpx/vpx_codec.h"
-static const char* const cfg = "--target=x86_64-win64-vs12 --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --enable-pic --as=yasm --disable-avx512 --enable-vp9-highbitdepth";
+static const char* const cfg = "--target=x86_64-win64-vs14 --enable-external-build --enable-postproc --enable-multi-res-encoding --enable-temporal-denoising --enable-vp9-temporal-denoising --enable-vp9-postproc --size-limit=16384x16384 --enable-realtime-only --disable-install-docs --disable-libyuv --enable-pic --as=yasm --disable-avx512 --enable-vp9-highbitdepth";
 const char *vpx_codec_build_config(void) {return cfg;}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_config.h
index c3a567e..23a083b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_config.h
@@ -10,10 +10,15 @@
 #define VPX_CONFIG_H
 #define RESTRICT    
 #define INLINE      __inline
+#define VPX_ARCH_ARM 0
 #define ARCH_ARM 0
+#define VPX_ARCH_MIPS 0
 #define ARCH_MIPS 0
+#define VPX_ARCH_X86 0
 #define ARCH_X86 0
+#define VPX_ARCH_X86_64 1
 #define ARCH_X86_64 1
+#define VPX_ARCH_PPC 0
 #define ARCH_PPC 0
 #define HAVE_NEON 0
 #define HAVE_NEON_ASM 0
@@ -78,21 +83,25 @@
 #define CONFIG_OS_SUPPORT 1
 #define CONFIG_UNIT_TESTS 1
 #define CONFIG_WEBM_IO 1
-#define CONFIG_LIBYUV 1
+#define CONFIG_LIBYUV 0
 #define CONFIG_DECODE_PERF_TESTS 0
 #define CONFIG_ENCODE_PERF_TESTS 0
 #define CONFIG_MULTI_RES_ENCODING 1
 #define CONFIG_TEMPORAL_DENOISING 1
 #define CONFIG_VP9_TEMPORAL_DENOISING 1
+#define CONFIG_CONSISTENT_RECODE 0
 #define CONFIG_COEFFICIENT_RANGE_CHECKING 0
 #define CONFIG_VP9_HIGHBITDEPTH 1
 #define CONFIG_BETTER_HW_COMPATIBILITY 0
 #define CONFIG_EXPERIMENTAL 0
 #define CONFIG_SIZE_LIMIT 1
 #define CONFIG_ALWAYS_ADJUST_BPM 0
-#define CONFIG_SPATIAL_SVC 0
+#define CONFIG_BITSTREAM_DEBUG 0
+#define CONFIG_MISMATCH_DEBUG 0
 #define CONFIG_FP_MB_STATS 0
 #define CONFIG_EMULATE_HARDWARE 0
+#define CONFIG_NON_GREEDY_MV 0
+#define CONFIG_RATE_CTRL 0
 #define DECODE_WIDTH_LIMIT 16384
 #define DECODE_HEIGHT_LIMIT 16384
 #endif /* VPX_CONFIG_H */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_dsp_rtcd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_dsp_rtcd.h
index 9970fae..ed63233 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_dsp_rtcd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/config/win/x64/vpx_dsp_rtcd.h
@@ -427,420 +427,420 @@
 #define vpx_convolve_copy vpx_convolve_copy_sse2
 
 void vpx_d117_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_16x16 vpx_d117_predictor_16x16_c
 
 void vpx_d117_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d117_predictor_32x32 vpx_d117_predictor_32x32_c
 
 void vpx_d117_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_4x4 vpx_d117_predictor_4x4_c
 
 void vpx_d117_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d117_predictor_8x8 vpx_d117_predictor_8x8_c
 
 void vpx_d135_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_16x16 vpx_d135_predictor_16x16_c
 
 void vpx_d135_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d135_predictor_32x32 vpx_d135_predictor_32x32_c
 
 void vpx_d135_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_4x4 vpx_d135_predictor_4x4_c
 
 void vpx_d135_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d135_predictor_8x8 vpx_d135_predictor_8x8_c
 
 void vpx_d153_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d153_predictor_16x16_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_16x16)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d153_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d153_predictor_32x32_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_32x32)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d153_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d153_predictor_4x4_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_4x4)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d153_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d153_predictor_8x8_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d153_predictor_8x8)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d207_predictor_16x16_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d207_predictor_16x16_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_16x16)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d207_predictor_32x32_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_d207_predictor_32x32_ssse3(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_32x32)(uint8_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint8_t* above,
                                              const uint8_t* left);
 
 void vpx_d207_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d207_predictor_4x4_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_d207_predictor_4x4 vpx_d207_predictor_4x4_sse2
 
 void vpx_d207_predictor_8x8_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_d207_predictor_8x8_ssse3(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 RTCD_EXTERN void (*vpx_d207_predictor_8x8)(uint8_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint8_t* above,
                                            const uint8_t* left);
 
 void vpx_d45_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_16x16_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_16x16)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d45_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d45_predictor_32x32_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d45_predictor_32x32)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d45_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_4x4_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d45_predictor_4x4 vpx_d45_predictor_4x4_sse2
 
 void vpx_d45_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d45_predictor_8x8_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_d45_predictor_8x8 vpx_d45_predictor_8x8_sse2
 
 void vpx_d45e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d45e_predictor_4x4 vpx_d45e_predictor_4x4_c
 
 void vpx_d63_predictor_16x16_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d63_predictor_16x16_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_16x16)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d63_predictor_32x32_c(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 void vpx_d63_predictor_32x32_ssse3(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_32x32)(uint8_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint8_t* above,
                                             const uint8_t* left);
 
 void vpx_d63_predictor_4x4_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d63_predictor_4x4_ssse3(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_4x4)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_d63_predictor_8x8_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_d63_predictor_8x8_ssse3(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 RTCD_EXTERN void (*vpx_d63_predictor_8x8)(uint8_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint8_t* above,
                                           const uint8_t* left);
 
 void vpx_d63e_predictor_4x4_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_d63e_predictor_4x4 vpx_d63e_predictor_4x4_c
 
 void vpx_dc_128_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_16x16_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_128_predictor_16x16 vpx_dc_128_predictor_16x16_sse2
 
 void vpx_dc_128_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_128_predictor_32x32_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_128_predictor_32x32 vpx_dc_128_predictor_32x32_sse2
 
 void vpx_dc_128_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_4x4_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_128_predictor_4x4 vpx_dc_128_predictor_4x4_sse2
 
 void vpx_dc_128_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_128_predictor_8x8_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_128_predictor_8x8 vpx_dc_128_predictor_8x8_sse2
 
 void vpx_dc_left_predictor_16x16_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_16x16_sse2(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 #define vpx_dc_left_predictor_16x16 vpx_dc_left_predictor_16x16_sse2
 
 void vpx_dc_left_predictor_32x32_c(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 void vpx_dc_left_predictor_32x32_sse2(uint8_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint8_t* above,
                                       const uint8_t* left);
 #define vpx_dc_left_predictor_32x32 vpx_dc_left_predictor_32x32_sse2
 
 void vpx_dc_left_predictor_4x4_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_4x4_sse2(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 #define vpx_dc_left_predictor_4x4 vpx_dc_left_predictor_4x4_sse2
 
 void vpx_dc_left_predictor_8x8_c(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 void vpx_dc_left_predictor_8x8_sse2(uint8_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint8_t* above,
                                     const uint8_t* left);
 #define vpx_dc_left_predictor_8x8 vpx_dc_left_predictor_8x8_sse2
 
 void vpx_dc_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_16x16_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_predictor_16x16 vpx_dc_predictor_16x16_sse2
 
 void vpx_dc_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_dc_predictor_32x32_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_dc_predictor_32x32 vpx_dc_predictor_32x32_sse2
 
 void vpx_dc_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_4x4_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_dc_predictor_4x4 vpx_dc_predictor_4x4_sse2
 
 void vpx_dc_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_dc_predictor_8x8_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_dc_predictor_8x8 vpx_dc_predictor_8x8_sse2
 
 void vpx_dc_top_predictor_16x16_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_16x16_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_top_predictor_16x16 vpx_dc_top_predictor_16x16_sse2
 
 void vpx_dc_top_predictor_32x32_c(uint8_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint8_t* above,
                                   const uint8_t* left);
 void vpx_dc_top_predictor_32x32_sse2(uint8_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint8_t* above,
                                      const uint8_t* left);
 #define vpx_dc_top_predictor_32x32 vpx_dc_top_predictor_32x32_sse2
 
 void vpx_dc_top_predictor_4x4_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_4x4_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_top_predictor_4x4 vpx_dc_top_predictor_4x4_sse2
 
 void vpx_dc_top_predictor_8x8_c(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 void vpx_dc_top_predictor_8x8_sse2(uint8_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint8_t* above,
                                    const uint8_t* left);
 #define vpx_dc_top_predictor_8x8 vpx_dc_top_predictor_8x8_sse2
@@ -884,44 +884,44 @@
 #define vpx_fdct8x8_1 vpx_fdct8x8_1_sse2
 
 void vpx_get16x16var_c(const uint8_t* src_ptr,
-                       int source_stride,
+                       int src_stride,
                        const uint8_t* ref_ptr,
                        int ref_stride,
                        unsigned int* sse,
                        int* sum);
 void vpx_get16x16var_sse2(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
                           int* sum);
 void vpx_get16x16var_avx2(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
                           int ref_stride,
                           unsigned int* sse,
                           int* sum);
 RTCD_EXTERN void (*vpx_get16x16var)(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse,
                                     int* sum);
 
 unsigned int vpx_get4x4sse_cs_c(const unsigned char* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const unsigned char* ref_ptr,
                                 int ref_stride);
 #define vpx_get4x4sse_cs vpx_get4x4sse_cs_c
 
 void vpx_get8x8var_c(const uint8_t* src_ptr,
-                     int source_stride,
+                     int src_stride,
                      const uint8_t* ref_ptr,
                      int ref_stride,
                      unsigned int* sse,
                      int* sum);
 void vpx_get8x8var_sse2(const uint8_t* src_ptr,
-                        int source_stride,
+                        int src_stride,
                         const uint8_t* ref_ptr,
                         int ref_stride,
                         unsigned int* sse,
@@ -933,41 +933,41 @@
 #define vpx_get_mb_ss vpx_get_mb_ss_sse2
 
 void vpx_h_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_16x16_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_h_predictor_16x16 vpx_h_predictor_16x16_sse2
 
 void vpx_h_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_h_predictor_32x32_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_h_predictor_32x32 vpx_h_predictor_32x32_sse2
 
 void vpx_h_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_4x4_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_h_predictor_4x4 vpx_h_predictor_4x4_sse2
 
 void vpx_h_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_h_predictor_8x8_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_h_predictor_8x8 vpx_h_predictor_8x8_sse2
@@ -1012,79 +1012,91 @@
                                      tran_low_t* coeff);
 
 void vpx_he_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_he_predictor_4x4 vpx_he_predictor_4x4_c
 
 void vpx_highbd_10_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
                                  int* sum);
-#define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_c
+void vpx_highbd_10_get16x16var_sse2(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse,
+                                    int* sum);
+#define vpx_highbd_10_get16x16var vpx_highbd_10_get16x16var_sse2
 
 void vpx_highbd_10_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
                                int* sum);
-#define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_c
+void vpx_highbd_10_get8x8var_sse2(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse,
+                                  int* sum);
+#define vpx_highbd_10_get8x8var vpx_highbd_10_get8x8var_sse2
 
 unsigned int vpx_highbd_10_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 unsigned int vpx_highbd_10_mse16x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_mse16x16 vpx_highbd_10_mse16x16_sse2
 
 unsigned int vpx_highbd_10_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse16x8 vpx_highbd_10_mse16x8_c
 
 unsigned int vpx_highbd_10_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_10_mse8x16 vpx_highbd_10_mse8x16_c
 
 unsigned int vpx_highbd_10_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_highbd_10_mse8x8_sse2(const uint8_t* src_ptr,
-                                       int source_stride,
+                                       int src_stride,
                                        const uint8_t* ref_ptr,
-                                       int recon_stride,
+                                       int ref_stride,
                                        unsigned int* sse);
 #define vpx_highbd_10_mse8x8 vpx_highbd_10_mse8x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1094,18 +1106,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1114,18 +1126,18 @@
   vpx_highbd_10_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1135,18 +1147,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1156,18 +1168,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1177,18 +1189,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1197,9 +1209,9 @@
   vpx_highbd_10_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1208,9 +1220,9 @@
   vpx_highbd_10_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1220,18 +1232,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1241,18 +1253,18 @@
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1261,18 +1273,18 @@
   vpx_highbd_10_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1281,18 +1293,18 @@
   vpx_highbd_10_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1301,18 +1313,18 @@
   vpx_highbd_10_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1321,16 +1333,16 @@
   vpx_highbd_10_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1338,16 +1350,16 @@
   vpx_highbd_10_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1355,16 +1367,16 @@
   vpx_highbd_10_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -1372,16 +1384,16 @@
   vpx_highbd_10_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1389,16 +1401,16 @@
   vpx_highbd_10_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1406,16 +1418,16 @@
   vpx_highbd_10_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1423,9 +1435,9 @@
   vpx_highbd_10_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1433,9 +1445,9 @@
   vpx_highbd_10_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -1443,16 +1455,16 @@
   vpx_highbd_10_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1460,16 +1472,16 @@
   vpx_highbd_10_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1477,16 +1489,16 @@
   vpx_highbd_10_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -1494,16 +1506,16 @@
   vpx_highbd_10_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -1511,16 +1523,16 @@
   vpx_highbd_10_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_10_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_10_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -1528,214 +1540,226 @@
   vpx_highbd_10_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_10_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance16x16 vpx_highbd_10_variance16x16_sse2
 
 unsigned int vpx_highbd_10_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance16x32 vpx_highbd_10_variance16x32_sse2
 
 unsigned int vpx_highbd_10_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_10_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_10_variance16x8 vpx_highbd_10_variance16x8_sse2
 
 unsigned int vpx_highbd_10_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance32x16 vpx_highbd_10_variance32x16_sse2
 
 unsigned int vpx_highbd_10_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance32x32 vpx_highbd_10_variance32x32_sse2
 
 unsigned int vpx_highbd_10_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance32x64 vpx_highbd_10_variance32x64_sse2
 
 unsigned int vpx_highbd_10_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x4 vpx_highbd_10_variance4x4_c
 
 unsigned int vpx_highbd_10_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance4x8 vpx_highbd_10_variance4x8_c
 
 unsigned int vpx_highbd_10_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance64x32 vpx_highbd_10_variance64x32_sse2
 
 unsigned int vpx_highbd_10_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_10_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_10_variance64x64 vpx_highbd_10_variance64x64_sse2
 
 unsigned int vpx_highbd_10_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_10_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_10_variance8x16 vpx_highbd_10_variance8x16_sse2
 
 unsigned int vpx_highbd_10_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_10_variance8x4 vpx_highbd_10_variance8x4_c
 
 unsigned int vpx_highbd_10_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_10_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 #define vpx_highbd_10_variance8x8 vpx_highbd_10_variance8x8_sse2
 
 void vpx_highbd_12_get16x16var_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse,
                                  int* sum);
-#define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_c
+void vpx_highbd_12_get16x16var_sse2(const uint8_t* src_ptr,
+                                    int src_stride,
+                                    const uint8_t* ref_ptr,
+                                    int ref_stride,
+                                    unsigned int* sse,
+                                    int* sum);
+#define vpx_highbd_12_get16x16var vpx_highbd_12_get16x16var_sse2
 
 void vpx_highbd_12_get8x8var_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse,
                                int* sum);
-#define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_c
+void vpx_highbd_12_get8x8var_sse2(const uint8_t* src_ptr,
+                                  int src_stride,
+                                  const uint8_t* ref_ptr,
+                                  int ref_stride,
+                                  unsigned int* sse,
+                                  int* sum);
+#define vpx_highbd_12_get8x8var vpx_highbd_12_get8x8var_sse2
 
 unsigned int vpx_highbd_12_mse16x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 unsigned int vpx_highbd_12_mse16x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_mse16x16 vpx_highbd_12_mse16x16_sse2
 
 unsigned int vpx_highbd_12_mse16x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse16x8 vpx_highbd_12_mse16x8_c
 
 unsigned int vpx_highbd_12_mse8x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 #define vpx_highbd_12_mse8x16 vpx_highbd_12_mse8x16_c
 
 unsigned int vpx_highbd_12_mse8x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_highbd_12_mse8x8_sse2(const uint8_t* src_ptr,
-                                       int source_stride,
+                                       int src_stride,
                                        const uint8_t* ref_ptr,
-                                       int recon_stride,
+                                       int ref_stride,
                                        unsigned int* sse);
 #define vpx_highbd_12_mse8x8 vpx_highbd_12_mse8x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1745,18 +1769,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1765,18 +1789,18 @@
   vpx_highbd_12_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1786,18 +1810,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1807,18 +1831,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1828,18 +1852,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1848,9 +1872,9 @@
   vpx_highbd_12_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1859,9 +1883,9 @@
   vpx_highbd_12_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
@@ -1871,18 +1895,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1892,18 +1916,18 @@
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_c(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1912,18 +1936,18 @@
   vpx_highbd_12_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1932,18 +1956,18 @@
   vpx_highbd_12_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1952,18 +1976,18 @@
   vpx_highbd_12_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -1972,16 +1996,16 @@
   vpx_highbd_12_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -1989,16 +2013,16 @@
   vpx_highbd_12_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2006,16 +2030,16 @@
   vpx_highbd_12_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2023,16 +2047,16 @@
   vpx_highbd_12_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2040,16 +2064,16 @@
   vpx_highbd_12_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2057,16 +2081,16 @@
   vpx_highbd_12_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2074,9 +2098,9 @@
   vpx_highbd_12_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -2084,9 +2108,9 @@
   vpx_highbd_12_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
@@ -2094,16 +2118,16 @@
   vpx_highbd_12_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2111,16 +2135,16 @@
   vpx_highbd_12_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
@@ -2128,16 +2152,16 @@
   vpx_highbd_12_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2145,16 +2169,16 @@
   vpx_highbd_12_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -2162,16 +2186,16 @@
   vpx_highbd_12_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_12_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_12_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -2179,213 +2203,225 @@
   vpx_highbd_12_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_12_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance16x16 vpx_highbd_12_variance16x16_sse2
 
 unsigned int vpx_highbd_12_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance16x32 vpx_highbd_12_variance16x32_sse2
 
 unsigned int vpx_highbd_12_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_12_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_12_variance16x8 vpx_highbd_12_variance16x8_sse2
 
 unsigned int vpx_highbd_12_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance32x16 vpx_highbd_12_variance32x16_sse2
 
 unsigned int vpx_highbd_12_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance32x32 vpx_highbd_12_variance32x32_sse2
 
 unsigned int vpx_highbd_12_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance32x64 vpx_highbd_12_variance32x64_sse2
 
 unsigned int vpx_highbd_12_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x4 vpx_highbd_12_variance4x4_c
 
 unsigned int vpx_highbd_12_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance4x8 vpx_highbd_12_variance4x8_c
 
 unsigned int vpx_highbd_12_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance64x32 vpx_highbd_12_variance64x32_sse2
 
 unsigned int vpx_highbd_12_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 unsigned int vpx_highbd_12_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 #define vpx_highbd_12_variance64x64 vpx_highbd_12_variance64x64_sse2
 
 unsigned int vpx_highbd_12_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_12_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_12_variance8x16 vpx_highbd_12_variance8x16_sse2
 
 unsigned int vpx_highbd_12_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 #define vpx_highbd_12_variance8x4 vpx_highbd_12_variance8x4_c
 
 unsigned int vpx_highbd_12_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_12_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 #define vpx_highbd_12_variance8x8 vpx_highbd_12_variance8x8_sse2
 
 void vpx_highbd_8_get16x16var_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse,
                                 int* sum);
-#define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_c
+void vpx_highbd_8_get16x16var_sse2(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   unsigned int* sse,
+                                   int* sum);
+#define vpx_highbd_8_get16x16var vpx_highbd_8_get16x16var_sse2
 
 void vpx_highbd_8_get8x8var_c(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
                               int ref_stride,
                               unsigned int* sse,
                               int* sum);
-#define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_c
+void vpx_highbd_8_get8x8var_sse2(const uint8_t* src_ptr,
+                                 int src_stride,
+                                 const uint8_t* ref_ptr,
+                                 int ref_stride,
+                                 unsigned int* sse,
+                                 int* sum);
+#define vpx_highbd_8_get8x8var vpx_highbd_8_get8x8var_sse2
 
 unsigned int vpx_highbd_8_mse16x16_c(const uint8_t* src_ptr,
-                                     int source_stride,
+                                     int src_stride,
                                      const uint8_t* ref_ptr,
-                                     int recon_stride,
+                                     int ref_stride,
                                      unsigned int* sse);
 unsigned int vpx_highbd_8_mse16x16_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
-                                        int recon_stride,
+                                        int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_mse16x16 vpx_highbd_8_mse16x16_sse2
 
 unsigned int vpx_highbd_8_mse16x8_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse16x8 vpx_highbd_8_mse16x8_c
 
 unsigned int vpx_highbd_8_mse8x16_c(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
-                                    int recon_stride,
+                                    int ref_stride,
                                     unsigned int* sse);
 #define vpx_highbd_8_mse8x16 vpx_highbd_8_mse8x16_c
 
 unsigned int vpx_highbd_8_mse8x8_c(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
-                                   int recon_stride,
+                                   int ref_stride,
                                    unsigned int* sse);
 unsigned int vpx_highbd_8_mse8x8_sse2(const uint8_t* src_ptr,
-                                      int source_stride,
+                                      int src_stride,
                                       const uint8_t* ref_ptr,
-                                      int recon_stride,
+                                      int ref_stride,
                                       unsigned int* sse);
 #define vpx_highbd_8_mse8x8 vpx_highbd_8_mse8x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2394,18 +2430,18 @@
   vpx_highbd_8_sub_pixel_avg_variance16x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2414,18 +2450,18 @@
   vpx_highbd_8_sub_pixel_avg_variance16x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2434,18 +2470,18 @@
   vpx_highbd_8_sub_pixel_avg_variance16x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2454,18 +2490,18 @@
   vpx_highbd_8_sub_pixel_avg_variance32x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2474,18 +2510,18 @@
   vpx_highbd_8_sub_pixel_avg_variance32x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2494,9 +2530,9 @@
   vpx_highbd_8_sub_pixel_avg_variance32x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -2505,9 +2541,9 @@
   vpx_highbd_8_sub_pixel_avg_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
@@ -2516,18 +2552,18 @@
   vpx_highbd_8_sub_pixel_avg_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2536,18 +2572,18 @@
   vpx_highbd_8_sub_pixel_avg_variance64x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse,
                                                     const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2556,18 +2592,18 @@
   vpx_highbd_8_sub_pixel_avg_variance64x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse,
                                                    const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2576,18 +2612,18 @@
   vpx_highbd_8_sub_pixel_avg_variance8x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
                                                   const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2596,18 +2632,18 @@
   vpx_highbd_8_sub_pixel_avg_variance8x4_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse,
                                                   const uint8_t* second_pred);
 uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8_sse2(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
@@ -2616,16 +2652,16 @@
   vpx_highbd_8_sub_pixel_avg_variance8x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2633,16 +2669,16 @@
   vpx_highbd_8_sub_pixel_variance16x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2650,16 +2686,16 @@
   vpx_highbd_8_sub_pixel_variance16x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -2667,16 +2703,16 @@
   vpx_highbd_8_sub_pixel_variance16x8_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2684,16 +2720,16 @@
   vpx_highbd_8_sub_pixel_variance32x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2701,16 +2737,16 @@
   vpx_highbd_8_sub_pixel_variance32x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2718,34 +2754,34 @@
   vpx_highbd_8_sub_pixel_variance32x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x4 vpx_highbd_8_sub_pixel_variance4x4_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 #define vpx_highbd_8_sub_pixel_variance4x8 vpx_highbd_8_sub_pixel_variance4x8_c
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2753,16 +2789,16 @@
   vpx_highbd_8_sub_pixel_variance64x32_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                                int source_stride,
-                                                int xoffset,
-                                                int yoffset,
+                                                int src_stride,
+                                                int x_offset,
+                                                int y_offset,
                                                 const uint8_t* ref_ptr,
                                                 int ref_stride,
                                                 uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
@@ -2770,16 +2806,16 @@
   vpx_highbd_8_sub_pixel_variance64x64_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -2787,16 +2823,16 @@
   vpx_highbd_8_sub_pixel_variance8x16_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -2804,16 +2840,16 @@
   vpx_highbd_8_sub_pixel_variance8x4_sse2
 
 uint32_t vpx_highbd_8_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse);
 uint32_t vpx_highbd_8_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                                 int source_stride,
-                                                 int xoffset,
-                                                 int yoffset,
+                                                 int src_stride,
+                                                 int x_offset,
+                                                 int y_offset,
                                                  const uint8_t* ref_ptr,
                                                  int ref_stride,
                                                  uint32_t* sse);
@@ -2821,152 +2857,152 @@
   vpx_highbd_8_sub_pixel_variance8x8_sse2
 
 unsigned int vpx_highbd_8_variance16x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance16x16 vpx_highbd_8_variance16x16_sse2
 
 unsigned int vpx_highbd_8_variance16x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance16x32 vpx_highbd_8_variance16x32_sse2
 
 unsigned int vpx_highbd_8_variance16x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_8_variance16x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 #define vpx_highbd_8_variance16x8 vpx_highbd_8_variance16x8_sse2
 
 unsigned int vpx_highbd_8_variance32x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance32x16 vpx_highbd_8_variance32x16_sse2
 
 unsigned int vpx_highbd_8_variance32x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance32x32 vpx_highbd_8_variance32x32_sse2
 
 unsigned int vpx_highbd_8_variance32x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance32x64_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance32x64 vpx_highbd_8_variance32x64_sse2
 
 unsigned int vpx_highbd_8_variance4x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x4 vpx_highbd_8_variance4x4_c
 
 unsigned int vpx_highbd_8_variance4x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance4x8 vpx_highbd_8_variance4x8_c
 
 unsigned int vpx_highbd_8_variance64x32_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance64x32_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance64x32 vpx_highbd_8_variance64x32_sse2
 
 unsigned int vpx_highbd_8_variance64x64_c(const uint8_t* src_ptr,
-                                          int source_stride,
+                                          int src_stride,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           unsigned int* sse);
 unsigned int vpx_highbd_8_variance64x64_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 #define vpx_highbd_8_variance64x64 vpx_highbd_8_variance64x64_sse2
 
 unsigned int vpx_highbd_8_variance8x16_c(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          unsigned int* sse);
 unsigned int vpx_highbd_8_variance8x16_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
+                                            int src_stride,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             unsigned int* sse);
 #define vpx_highbd_8_variance8x16 vpx_highbd_8_variance8x16_sse2
 
 unsigned int vpx_highbd_8_variance8x4_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 #define vpx_highbd_8_variance8x4 vpx_highbd_8_variance8x4_c
 
 unsigned int vpx_highbd_8_variance8x8_c(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         unsigned int* sse);
 unsigned int vpx_highbd_8_variance8x8_sse2(const uint8_t* src_ptr,
-                                           int source_stride,
+                                           int src_stride,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            unsigned int* sse);
 #define vpx_highbd_8_variance8x8 vpx_highbd_8_variance8x8_sse2
 
-unsigned int vpx_highbd_avg_4x4_c(const uint8_t*, int p);
-unsigned int vpx_highbd_avg_4x4_sse2(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_4x4_c(const uint8_t* s8, int p);
+unsigned int vpx_highbd_avg_4x4_sse2(const uint8_t* s8, int p);
 #define vpx_highbd_avg_4x4 vpx_highbd_avg_4x4_sse2
 
-unsigned int vpx_highbd_avg_8x8_c(const uint8_t*, int p);
-unsigned int vpx_highbd_avg_8x8_sse2(const uint8_t*, int p);
+unsigned int vpx_highbd_avg_8x8_c(const uint8_t* s8, int p);
+unsigned int vpx_highbd_avg_8x8_sse2(const uint8_t* s8, int p);
 #define vpx_highbd_avg_8x8 vpx_highbd_avg_8x8_sse2
 
 void vpx_highbd_comp_avg_pred_c(uint16_t* comp_pred,
@@ -2988,7 +3024,7 @@
                             int y_step_q4,
                             int w,
                             int h,
-                            int bps);
+                            int bd);
 void vpx_highbd_convolve8_sse2(const uint16_t* src,
                                ptrdiff_t src_stride,
                                uint16_t* dst,
@@ -3000,7 +3036,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 void vpx_highbd_convolve8_avx2(const uint16_t* src,
                                ptrdiff_t src_stride,
                                uint16_t* dst,
@@ -3012,7 +3048,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8)(const uint16_t* src,
                                          ptrdiff_t src_stride,
                                          uint16_t* dst,
@@ -3024,7 +3060,7 @@
                                          int y_step_q4,
                                          int w,
                                          int h,
-                                         int bps);
+                                         int bd);
 
 void vpx_highbd_convolve8_avg_c(const uint16_t* src,
                                 ptrdiff_t src_stride,
@@ -3037,7 +3073,7 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 void vpx_highbd_convolve8_avg_sse2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3049,7 +3085,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 void vpx_highbd_convolve8_avg_avx2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3061,7 +3097,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg)(const uint16_t* src,
                                              ptrdiff_t src_stride,
                                              uint16_t* dst,
@@ -3073,7 +3109,7 @@
                                              int y_step_q4,
                                              int w,
                                              int h,
-                                             int bps);
+                                             int bd);
 
 void vpx_highbd_convolve8_avg_horiz_c(const uint16_t* src,
                                       ptrdiff_t src_stride,
@@ -3086,7 +3122,7 @@
                                       int y_step_q4,
                                       int w,
                                       int h,
-                                      int bps);
+                                      int bd);
 void vpx_highbd_convolve8_avg_horiz_sse2(const uint16_t* src,
                                          ptrdiff_t src_stride,
                                          uint16_t* dst,
@@ -3098,7 +3134,7 @@
                                          int y_step_q4,
                                          int w,
                                          int h,
-                                         int bps);
+                                         int bd);
 void vpx_highbd_convolve8_avg_horiz_avx2(const uint16_t* src,
                                          ptrdiff_t src_stride,
                                          uint16_t* dst,
@@ -3110,7 +3146,7 @@
                                          int y_step_q4,
                                          int w,
                                          int h,
-                                         int bps);
+                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg_horiz)(const uint16_t* src,
                                                    ptrdiff_t src_stride,
                                                    uint16_t* dst,
@@ -3122,7 +3158,7 @@
                                                    int y_step_q4,
                                                    int w,
                                                    int h,
-                                                   int bps);
+                                                   int bd);
 
 void vpx_highbd_convolve8_avg_vert_c(const uint16_t* src,
                                      ptrdiff_t src_stride,
@@ -3135,7 +3171,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 void vpx_highbd_convolve8_avg_vert_sse2(const uint16_t* src,
                                         ptrdiff_t src_stride,
                                         uint16_t* dst,
@@ -3147,7 +3183,7 @@
                                         int y_step_q4,
                                         int w,
                                         int h,
-                                        int bps);
+                                        int bd);
 void vpx_highbd_convolve8_avg_vert_avx2(const uint16_t* src,
                                         ptrdiff_t src_stride,
                                         uint16_t* dst,
@@ -3159,7 +3195,7 @@
                                         int y_step_q4,
                                         int w,
                                         int h,
-                                        int bps);
+                                        int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_avg_vert)(const uint16_t* src,
                                                   ptrdiff_t src_stride,
                                                   uint16_t* dst,
@@ -3171,7 +3207,7 @@
                                                   int y_step_q4,
                                                   int w,
                                                   int h,
-                                                  int bps);
+                                                  int bd);
 
 void vpx_highbd_convolve8_horiz_c(const uint16_t* src,
                                   ptrdiff_t src_stride,
@@ -3184,7 +3220,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 void vpx_highbd_convolve8_horiz_sse2(const uint16_t* src,
                                      ptrdiff_t src_stride,
                                      uint16_t* dst,
@@ -3196,7 +3232,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 void vpx_highbd_convolve8_horiz_avx2(const uint16_t* src,
                                      ptrdiff_t src_stride,
                                      uint16_t* dst,
@@ -3208,7 +3244,7 @@
                                      int y_step_q4,
                                      int w,
                                      int h,
-                                     int bps);
+                                     int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_horiz)(const uint16_t* src,
                                                ptrdiff_t src_stride,
                                                uint16_t* dst,
@@ -3220,7 +3256,7 @@
                                                int y_step_q4,
                                                int w,
                                                int h,
-                                               int bps);
+                                               int bd);
 
 void vpx_highbd_convolve8_vert_c(const uint16_t* src,
                                  ptrdiff_t src_stride,
@@ -3233,7 +3269,7 @@
                                  int y_step_q4,
                                  int w,
                                  int h,
-                                 int bps);
+                                 int bd);
 void vpx_highbd_convolve8_vert_sse2(const uint16_t* src,
                                     ptrdiff_t src_stride,
                                     uint16_t* dst,
@@ -3245,7 +3281,7 @@
                                     int y_step_q4,
                                     int w,
                                     int h,
-                                    int bps);
+                                    int bd);
 void vpx_highbd_convolve8_vert_avx2(const uint16_t* src,
                                     ptrdiff_t src_stride,
                                     uint16_t* dst,
@@ -3257,7 +3293,7 @@
                                     int y_step_q4,
                                     int w,
                                     int h,
-                                    int bps);
+                                    int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve8_vert)(const uint16_t* src,
                                               ptrdiff_t src_stride,
                                               uint16_t* dst,
@@ -3269,7 +3305,7 @@
                                               int y_step_q4,
                                               int w,
                                               int h,
-                                              int bps);
+                                              int bd);
 
 void vpx_highbd_convolve_avg_c(const uint16_t* src,
                                ptrdiff_t src_stride,
@@ -3282,7 +3318,7 @@
                                int y_step_q4,
                                int w,
                                int h,
-                               int bps);
+                               int bd);
 void vpx_highbd_convolve_avg_sse2(const uint16_t* src,
                                   ptrdiff_t src_stride,
                                   uint16_t* dst,
@@ -3294,7 +3330,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 void vpx_highbd_convolve_avg_avx2(const uint16_t* src,
                                   ptrdiff_t src_stride,
                                   uint16_t* dst,
@@ -3306,7 +3342,7 @@
                                   int y_step_q4,
                                   int w,
                                   int h,
-                                  int bps);
+                                  int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve_avg)(const uint16_t* src,
                                             ptrdiff_t src_stride,
                                             uint16_t* dst,
@@ -3318,7 +3354,7 @@
                                             int y_step_q4,
                                             int w,
                                             int h,
-                                            int bps);
+                                            int bd);
 
 void vpx_highbd_convolve_copy_c(const uint16_t* src,
                                 ptrdiff_t src_stride,
@@ -3331,7 +3367,7 @@
                                 int y_step_q4,
                                 int w,
                                 int h,
-                                int bps);
+                                int bd);
 void vpx_highbd_convolve_copy_sse2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3343,7 +3379,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 void vpx_highbd_convolve_copy_avx2(const uint16_t* src,
                                    ptrdiff_t src_stride,
                                    uint16_t* dst,
@@ -3355,7 +3391,7 @@
                                    int y_step_q4,
                                    int w,
                                    int h,
-                                   int bps);
+                                   int bd);
 RTCD_EXTERN void (*vpx_highbd_convolve_copy)(const uint16_t* src,
                                              ptrdiff_t src_stride,
                                              uint16_t* dst,
@@ -3367,427 +3403,427 @@
                                              int y_step_q4,
                                              int w,
                                              int h,
-                                             int bps);
+                                             int bd);
 
 void vpx_highbd_d117_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d117_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d117_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d117_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d117_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d117_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_d117_predictor_4x4 vpx_highbd_d117_predictor_4x4_sse2
 
 void vpx_highbd_d117_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d117_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d117_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d135_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d135_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d135_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d135_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d135_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d135_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_d135_predictor_4x4 vpx_highbd_d135_predictor_4x4_sse2
 
 void vpx_highbd_d135_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d135_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d135_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d153_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d153_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d153_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d153_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d153_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d153_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_d153_predictor_4x4 vpx_highbd_d153_predictor_4x4_sse2
 
 void vpx_highbd_d153_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d153_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d153_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d207_predictor_16x16_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d207_predictor_16x16_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_16x16)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d207_predictor_32x32_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_d207_predictor_32x32_ssse3(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_32x32)(uint16_t* dst,
-                                                    ptrdiff_t y_stride,
+                                                    ptrdiff_t stride,
                                                     const uint16_t* above,
                                                     const uint16_t* left,
                                                     int bd);
 
 void vpx_highbd_d207_predictor_4x4_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d207_predictor_4x4_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_d207_predictor_4x4 vpx_highbd_d207_predictor_4x4_sse2
 
 void vpx_highbd_d207_predictor_8x8_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_d207_predictor_8x8_ssse3(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 RTCD_EXTERN void (*vpx_highbd_d207_predictor_8x8)(uint16_t* dst,
-                                                  ptrdiff_t y_stride,
+                                                  ptrdiff_t stride,
                                                   const uint16_t* above,
                                                   const uint16_t* left,
                                                   int bd);
 
 void vpx_highbd_d45_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d45_predictor_16x16_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_16x16)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d45_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d45_predictor_32x32_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_32x32)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d45_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d45_predictor_4x4_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_4x4)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_d45_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d45_predictor_8x8_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d45_predictor_8x8)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_d63_predictor_16x16_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d63_predictor_16x16_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_16x16)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d63_predictor_32x32_c(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 void vpx_highbd_d63_predictor_32x32_ssse3(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_32x32)(uint16_t* dst,
-                                                   ptrdiff_t y_stride,
+                                                   ptrdiff_t stride,
                                                    const uint16_t* above,
                                                    const uint16_t* left,
                                                    int bd);
 
 void vpx_highbd_d63_predictor_4x4_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d63_predictor_4x4_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_d63_predictor_4x4 vpx_highbd_d63_predictor_4x4_sse2
 
 void vpx_highbd_d63_predictor_8x8_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_d63_predictor_8x8_ssse3(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 RTCD_EXTERN void (*vpx_highbd_d63_predictor_8x8)(uint16_t* dst,
-                                                 ptrdiff_t y_stride,
+                                                 ptrdiff_t stride,
                                                  const uint16_t* above,
                                                  const uint16_t* left,
                                                  int bd);
 
 void vpx_highbd_dc_128_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_128_predictor_16x16_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
 #define vpx_highbd_dc_128_predictor_16x16 vpx_highbd_dc_128_predictor_16x16_sse2
 
 void vpx_highbd_dc_128_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_128_predictor_32x32_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
 #define vpx_highbd_dc_128_predictor_32x32 vpx_highbd_dc_128_predictor_32x32_sse2
 
 void vpx_highbd_dc_128_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_128_predictor_4x4_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 #define vpx_highbd_dc_128_predictor_4x4 vpx_highbd_dc_128_predictor_4x4_sse2
 
 void vpx_highbd_dc_128_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_128_predictor_8x8_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 #define vpx_highbd_dc_128_predictor_8x8 vpx_highbd_dc_128_predictor_8x8_sse2
 
 void vpx_highbd_dc_left_predictor_16x16_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 void vpx_highbd_dc_left_predictor_16x16_sse2(uint16_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint16_t* above,
                                              const uint16_t* left,
                                              int bd);
@@ -3795,12 +3831,12 @@
   vpx_highbd_dc_left_predictor_16x16_sse2
 
 void vpx_highbd_dc_left_predictor_32x32_c(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 void vpx_highbd_dc_left_predictor_32x32_sse2(uint16_t* dst,
-                                             ptrdiff_t y_stride,
+                                             ptrdiff_t stride,
                                              const uint16_t* above,
                                              const uint16_t* left,
                                              int bd);
@@ -3808,120 +3844,120 @@
   vpx_highbd_dc_left_predictor_32x32_sse2
 
 void vpx_highbd_dc_left_predictor_4x4_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 void vpx_highbd_dc_left_predictor_4x4_sse2(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 #define vpx_highbd_dc_left_predictor_4x4 vpx_highbd_dc_left_predictor_4x4_sse2
 
 void vpx_highbd_dc_left_predictor_8x8_c(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 void vpx_highbd_dc_left_predictor_8x8_sse2(uint16_t* dst,
-                                           ptrdiff_t y_stride,
+                                           ptrdiff_t stride,
                                            const uint16_t* above,
                                            const uint16_t* left,
                                            int bd);
 #define vpx_highbd_dc_left_predictor_8x8 vpx_highbd_dc_left_predictor_8x8_sse2
 
 void vpx_highbd_dc_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_dc_predictor_16x16_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_dc_predictor_16x16 vpx_highbd_dc_predictor_16x16_sse2
 
 void vpx_highbd_dc_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_dc_predictor_32x32_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_dc_predictor_32x32 vpx_highbd_dc_predictor_32x32_sse2
 
 void vpx_highbd_dc_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_dc_predictor_4x4_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_dc_predictor_4x4 vpx_highbd_dc_predictor_4x4_sse2
 
 void vpx_highbd_dc_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_dc_predictor_8x8_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_dc_predictor_8x8 vpx_highbd_dc_predictor_8x8_sse2
 
 void vpx_highbd_dc_top_predictor_16x16_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_top_predictor_16x16_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
 #define vpx_highbd_dc_top_predictor_16x16 vpx_highbd_dc_top_predictor_16x16_sse2
 
 void vpx_highbd_dc_top_predictor_32x32_c(uint16_t* dst,
-                                         ptrdiff_t y_stride,
+                                         ptrdiff_t stride,
                                          const uint16_t* above,
                                          const uint16_t* left,
                                          int bd);
 void vpx_highbd_dc_top_predictor_32x32_sse2(uint16_t* dst,
-                                            ptrdiff_t y_stride,
+                                            ptrdiff_t stride,
                                             const uint16_t* above,
                                             const uint16_t* left,
                                             int bd);
 #define vpx_highbd_dc_top_predictor_32x32 vpx_highbd_dc_top_predictor_32x32_sse2
 
 void vpx_highbd_dc_top_predictor_4x4_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_top_predictor_4x4_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
 #define vpx_highbd_dc_top_predictor_4x4 vpx_highbd_dc_top_predictor_4x4_sse2
 
 void vpx_highbd_dc_top_predictor_8x8_c(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 void vpx_highbd_dc_top_predictor_8x8_sse2(uint16_t* dst,
-                                          ptrdiff_t y_stride,
+                                          ptrdiff_t stride,
                                           const uint16_t* above,
                                           const uint16_t* left,
                                           int bd);
@@ -3979,53 +4015,83 @@
 #define vpx_highbd_fdct8x8_1 vpx_highbd_fdct8x8_1_c
 
 void vpx_highbd_h_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_h_predictor_16x16_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_h_predictor_16x16 vpx_highbd_h_predictor_16x16_sse2
 
 void vpx_highbd_h_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_h_predictor_32x32_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_h_predictor_32x32 vpx_highbd_h_predictor_32x32_sse2
 
 void vpx_highbd_h_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_h_predictor_4x4_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_h_predictor_4x4 vpx_highbd_h_predictor_4x4_sse2
 
 void vpx_highbd_h_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_h_predictor_8x8_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_h_predictor_8x8 vpx_highbd_h_predictor_8x8_sse2
 
+void vpx_highbd_hadamard_16x16_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+void vpx_highbd_hadamard_16x16_avx2(const int16_t* src_diff,
+                                    ptrdiff_t src_stride,
+                                    tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_16x16)(const int16_t* src_diff,
+                                              ptrdiff_t src_stride,
+                                              tran_low_t* coeff);
+
+void vpx_highbd_hadamard_32x32_c(const int16_t* src_diff,
+                                 ptrdiff_t src_stride,
+                                 tran_low_t* coeff);
+void vpx_highbd_hadamard_32x32_avx2(const int16_t* src_diff,
+                                    ptrdiff_t src_stride,
+                                    tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_32x32)(const int16_t* src_diff,
+                                              ptrdiff_t src_stride,
+                                              tran_low_t* coeff);
+
+void vpx_highbd_hadamard_8x8_c(const int16_t* src_diff,
+                               ptrdiff_t src_stride,
+                               tran_low_t* coeff);
+void vpx_highbd_hadamard_8x8_avx2(const int16_t* src_diff,
+                                  ptrdiff_t src_stride,
+                                  tran_low_t* coeff);
+RTCD_EXTERN void (*vpx_highbd_hadamard_8x8)(const int16_t* src_diff,
+                                            ptrdiff_t src_stride,
+                                            tran_low_t* coeff);
+
 void vpx_highbd_idct16x16_10_add_c(const tran_low_t* input,
                                    uint16_t* dest,
                                    int stride,
@@ -4423,9 +4489,9 @@
                                          int bd);
 #define vpx_highbd_lpf_vertical_8_dual vpx_highbd_lpf_vertical_8_dual_sse2
 
-void vpx_highbd_minmax_8x8_c(const uint8_t* s,
+void vpx_highbd_minmax_8x8_c(const uint8_t* s8,
                              int p,
-                             const uint8_t* d,
+                             const uint8_t* d8,
                              int dp,
                              int* min,
                              int* max);
@@ -4511,12 +4577,12 @@
 
 void vpx_highbd_sad16x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad16x16x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad16x16x4d vpx_highbd_sad16x16x4d_sse2
@@ -4545,12 +4611,12 @@
 
 void vpx_highbd_sad16x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad16x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad16x32x4d vpx_highbd_sad16x32x4d_sse2
@@ -4579,12 +4645,12 @@
 
 void vpx_highbd_sad16x8x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 void vpx_highbd_sad16x8x4d_sse2(const uint8_t* src_ptr,
                                 int src_stride,
-                                const uint8_t* const ref_ptr[],
+                                const uint8_t* const ref_array[],
                                 int ref_stride,
                                 uint32_t* sad_array);
 #define vpx_highbd_sad16x8x4d vpx_highbd_sad16x8x4d_sse2
@@ -4613,12 +4679,12 @@
 
 void vpx_highbd_sad32x16x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x16x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad32x16x4d vpx_highbd_sad32x16x4d_sse2
@@ -4647,12 +4713,12 @@
 
 void vpx_highbd_sad32x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad32x32x4d vpx_highbd_sad32x32x4d_sse2
@@ -4681,12 +4747,12 @@
 
 void vpx_highbd_sad32x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad32x64x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad32x64x4d vpx_highbd_sad32x64x4d_sse2
@@ -4706,12 +4772,12 @@
 
 void vpx_highbd_sad4x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad4x4x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
 #define vpx_highbd_sad4x4x4d vpx_highbd_sad4x4x4d_sse2
@@ -4731,12 +4797,12 @@
 
 void vpx_highbd_sad4x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad4x8x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
 #define vpx_highbd_sad4x8x4d vpx_highbd_sad4x8x4d_sse2
@@ -4765,12 +4831,12 @@
 
 void vpx_highbd_sad64x32x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad64x32x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad64x32x4d vpx_highbd_sad64x32x4d_sse2
@@ -4799,12 +4865,12 @@
 
 void vpx_highbd_sad64x64x4d_c(const uint8_t* src_ptr,
                               int src_stride,
-                              const uint8_t* const ref_ptr[],
+                              const uint8_t* const ref_array[],
                               int ref_stride,
                               uint32_t* sad_array);
 void vpx_highbd_sad64x64x4d_sse2(const uint8_t* src_ptr,
                                  int src_stride,
-                                 const uint8_t* const ref_ptr[],
+                                 const uint8_t* const ref_array[],
                                  int ref_stride,
                                  uint32_t* sad_array);
 #define vpx_highbd_sad64x64x4d vpx_highbd_sad64x64x4d_sse2
@@ -4833,12 +4899,12 @@
 
 void vpx_highbd_sad8x16x4d_c(const uint8_t* src_ptr,
                              int src_stride,
-                             const uint8_t* const ref_ptr[],
+                             const uint8_t* const ref_array[],
                              int ref_stride,
                              uint32_t* sad_array);
 void vpx_highbd_sad8x16x4d_sse2(const uint8_t* src_ptr,
                                 int src_stride,
-                                const uint8_t* const ref_ptr[],
+                                const uint8_t* const ref_array[],
                                 int ref_stride,
                                 uint32_t* sad_array);
 #define vpx_highbd_sad8x16x4d vpx_highbd_sad8x16x4d_sse2
@@ -4867,12 +4933,12 @@
 
 void vpx_highbd_sad8x4x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad8x4x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
 #define vpx_highbd_sad8x4x4d vpx_highbd_sad8x4x4d_sse2
@@ -4901,118 +4967,122 @@
 
 void vpx_highbd_sad8x8x4d_c(const uint8_t* src_ptr,
                             int src_stride,
-                            const uint8_t* const ref_ptr[],
+                            const uint8_t* const ref_array[],
                             int ref_stride,
                             uint32_t* sad_array);
 void vpx_highbd_sad8x8x4d_sse2(const uint8_t* src_ptr,
                                int src_stride,
-                               const uint8_t* const ref_ptr[],
+                               const uint8_t* const ref_array[],
                                int ref_stride,
                                uint32_t* sad_array);
 #define vpx_highbd_sad8x8x4d vpx_highbd_sad8x8x4d_sse2
 
+int vpx_highbd_satd_c(const tran_low_t* coeff, int length);
+int vpx_highbd_satd_avx2(const tran_low_t* coeff, int length);
+RTCD_EXTERN int (*vpx_highbd_satd)(const tran_low_t* coeff, int length);
+
 void vpx_highbd_subtract_block_c(int rows,
                                  int cols,
                                  int16_t* diff_ptr,
                                  ptrdiff_t diff_stride,
-                                 const uint8_t* src_ptr,
+                                 const uint8_t* src8_ptr,
                                  ptrdiff_t src_stride,
-                                 const uint8_t* pred_ptr,
+                                 const uint8_t* pred8_ptr,
                                  ptrdiff_t pred_stride,
                                  int bd);
 #define vpx_highbd_subtract_block vpx_highbd_subtract_block_c
 
 void vpx_highbd_tm_predictor_16x16_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_tm_predictor_16x16_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_tm_predictor_16x16 vpx_highbd_tm_predictor_16x16_sse2
 
 void vpx_highbd_tm_predictor_32x32_c(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 void vpx_highbd_tm_predictor_32x32_sse2(uint16_t* dst,
-                                        ptrdiff_t y_stride,
+                                        ptrdiff_t stride,
                                         const uint16_t* above,
                                         const uint16_t* left,
                                         int bd);
 #define vpx_highbd_tm_predictor_32x32 vpx_highbd_tm_predictor_32x32_sse2
 
 void vpx_highbd_tm_predictor_4x4_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_tm_predictor_4x4_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_tm_predictor_4x4 vpx_highbd_tm_predictor_4x4_sse2
 
 void vpx_highbd_tm_predictor_8x8_c(uint16_t* dst,
-                                   ptrdiff_t y_stride,
+                                   ptrdiff_t stride,
                                    const uint16_t* above,
                                    const uint16_t* left,
                                    int bd);
 void vpx_highbd_tm_predictor_8x8_sse2(uint16_t* dst,
-                                      ptrdiff_t y_stride,
+                                      ptrdiff_t stride,
                                       const uint16_t* above,
                                       const uint16_t* left,
                                       int bd);
 #define vpx_highbd_tm_predictor_8x8 vpx_highbd_tm_predictor_8x8_sse2
 
 void vpx_highbd_v_predictor_16x16_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_v_predictor_16x16_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_v_predictor_16x16 vpx_highbd_v_predictor_16x16_sse2
 
 void vpx_highbd_v_predictor_32x32_c(uint16_t* dst,
-                                    ptrdiff_t y_stride,
+                                    ptrdiff_t stride,
                                     const uint16_t* above,
                                     const uint16_t* left,
                                     int bd);
 void vpx_highbd_v_predictor_32x32_sse2(uint16_t* dst,
-                                       ptrdiff_t y_stride,
+                                       ptrdiff_t stride,
                                        const uint16_t* above,
                                        const uint16_t* left,
                                        int bd);
 #define vpx_highbd_v_predictor_32x32 vpx_highbd_v_predictor_32x32_sse2
 
 void vpx_highbd_v_predictor_4x4_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_v_predictor_4x4_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
 #define vpx_highbd_v_predictor_4x4 vpx_highbd_v_predictor_4x4_sse2
 
 void vpx_highbd_v_predictor_8x8_c(uint16_t* dst,
-                                  ptrdiff_t y_stride,
+                                  ptrdiff_t stride,
                                   const uint16_t* above,
                                   const uint16_t* left,
                                   int bd);
 void vpx_highbd_v_predictor_8x8_sse2(uint16_t* dst,
-                                     ptrdiff_t y_stride,
+                                     ptrdiff_t stride,
                                      const uint16_t* above,
                                      const uint16_t* left,
                                      int bd);
@@ -5322,12 +5392,12 @@
                                   const uint8_t* thresh1);
 #define vpx_lpf_vertical_8_dual vpx_lpf_vertical_8_dual_sse2
 
-void vpx_mbpost_proc_across_ip_c(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_c(unsigned char* src,
                                  int pitch,
                                  int rows,
                                  int cols,
                                  int flimit);
-void vpx_mbpost_proc_across_ip_sse2(unsigned char* dst,
+void vpx_mbpost_proc_across_ip_sse2(unsigned char* src,
                                     int pitch,
                                     int rows,
                                     int cols,
@@ -5361,68 +5431,68 @@
 #define vpx_minmax_8x8 vpx_minmax_8x8_sse2
 
 unsigned int vpx_mse16x16_c(const uint8_t* src_ptr,
-                            int source_stride,
+                            int src_stride,
                             const uint8_t* ref_ptr,
-                            int recon_stride,
+                            int ref_stride,
                             unsigned int* sse);
 unsigned int vpx_mse16x16_sse2(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_mse16x16_avx2(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
-                               int recon_stride,
+                               int ref_stride,
                                unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_mse16x16)(const uint8_t* src_ptr,
-                                         int source_stride,
+                                         int src_stride,
                                          const uint8_t* ref_ptr,
-                                         int recon_stride,
+                                         int ref_stride,
                                          unsigned int* sse);
 
 unsigned int vpx_mse16x8_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 unsigned int vpx_mse16x8_sse2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
 unsigned int vpx_mse16x8_avx2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_mse16x8)(const uint8_t* src_ptr,
-                                        int source_stride,
+                                        int src_stride,
                                         const uint8_t* ref_ptr,
-                                        int recon_stride,
+                                        int ref_stride,
                                         unsigned int* sse);
 
 unsigned int vpx_mse8x16_c(const uint8_t* src_ptr,
-                           int source_stride,
+                           int src_stride,
                            const uint8_t* ref_ptr,
-                           int recon_stride,
+                           int ref_stride,
                            unsigned int* sse);
 unsigned int vpx_mse8x16_sse2(const uint8_t* src_ptr,
-                              int source_stride,
+                              int src_stride,
                               const uint8_t* ref_ptr,
-                              int recon_stride,
+                              int ref_stride,
                               unsigned int* sse);
 #define vpx_mse8x16 vpx_mse8x16_sse2
 
 unsigned int vpx_mse8x8_c(const uint8_t* src_ptr,
-                          int source_stride,
+                          int src_stride,
                           const uint8_t* ref_ptr,
-                          int recon_stride,
+                          int ref_stride,
                           unsigned int* sse);
 unsigned int vpx_mse8x8_sse2(const uint8_t* src_ptr,
-                             int source_stride,
+                             int src_stride,
                              const uint8_t* ref_ptr,
-                             int recon_stride,
+                             int ref_stride,
                              unsigned int* sse);
 #define vpx_mse8x8 vpx_mse8x8_sse2
 
@@ -5623,12 +5693,12 @@
 
 void vpx_sad16x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x16x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad16x16x4d vpx_sad16x16x4d_sse2
@@ -5673,12 +5743,12 @@
 
 void vpx_sad16x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad16x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad16x32x4d vpx_sad16x32x4d_sse2
@@ -5728,12 +5798,12 @@
 
 void vpx_sad16x8x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad16x8x4d_sse2(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 #define vpx_sad16x8x4d vpx_sad16x8x4d_sse2
@@ -5794,12 +5864,12 @@
 
 void vpx_sad32x16x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x16x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x16x4d vpx_sad32x16x4d_sse2
@@ -5844,25 +5914,41 @@
 
 void vpx_sad32x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 void vpx_sad32x32x4d_avx2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad32x32x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
+void vpx_sad32x32x8_c(const uint8_t* src_ptr,
+                      int src_stride,
+                      const uint8_t* ref_ptr,
+                      int ref_stride,
+                      uint32_t* sad_array);
+void vpx_sad32x32x8_avx2(const uint8_t* src_ptr,
+                         int src_stride,
+                         const uint8_t* ref_ptr,
+                         int ref_stride,
+                         uint32_t* sad_array);
+RTCD_EXTERN void (*vpx_sad32x32x8)(const uint8_t* src_ptr,
+                                   int src_stride,
+                                   const uint8_t* ref_ptr,
+                                   int ref_stride,
+                                   uint32_t* sad_array);
+
 unsigned int vpx_sad32x64_c(const uint8_t* src_ptr,
                             int src_stride,
                             const uint8_t* ref_ptr,
@@ -5903,12 +5989,12 @@
 
 void vpx_sad32x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad32x64x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad32x64x4d vpx_sad32x64x4d_sse2
@@ -5953,12 +6039,12 @@
 
 void vpx_sad4x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x4x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad4x4x4d vpx_sad4x4x4d_sse2
@@ -6003,12 +6089,12 @@
 
 void vpx_sad4x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad4x8x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad4x8x4d vpx_sad4x8x4d_sse2
@@ -6053,12 +6139,12 @@
 
 void vpx_sad64x32x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x32x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 #define vpx_sad64x32x4d vpx_sad64x32x4d_sse2
@@ -6103,22 +6189,22 @@
 
 void vpx_sad64x64x4d_c(const uint8_t* src_ptr,
                        int src_stride,
-                       const uint8_t* const ref_ptr[],
+                       const uint8_t* const ref_array[],
                        int ref_stride,
                        uint32_t* sad_array);
 void vpx_sad64x64x4d_sse2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 void vpx_sad64x64x4d_avx2(const uint8_t* src_ptr,
                           int src_stride,
-                          const uint8_t* const ref_ptr[],
+                          const uint8_t* const ref_array[],
                           int ref_stride,
                           uint32_t* sad_array);
 RTCD_EXTERN void (*vpx_sad64x64x4d)(const uint8_t* src_ptr,
                                     int src_stride,
-                                    const uint8_t* const ref_ptr[],
+                                    const uint8_t* const ref_array[],
                                     int ref_stride,
                                     uint32_t* sad_array);
 
@@ -6162,12 +6248,12 @@
 
 void vpx_sad8x16x4d_c(const uint8_t* src_ptr,
                       int src_stride,
-                      const uint8_t* const ref_ptr[],
+                      const uint8_t* const ref_array[],
                       int ref_stride,
                       uint32_t* sad_array);
 void vpx_sad8x16x4d_sse2(const uint8_t* src_ptr,
                          int src_stride,
-                         const uint8_t* const ref_ptr[],
+                         const uint8_t* const ref_array[],
                          int ref_stride,
                          uint32_t* sad_array);
 #define vpx_sad8x16x4d vpx_sad8x16x4d_sse2
@@ -6212,12 +6298,12 @@
 
 void vpx_sad8x4x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x4x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad8x4x4d vpx_sad8x4x4d_sse2
@@ -6262,12 +6348,12 @@
 
 void vpx_sad8x8x4d_c(const uint8_t* src_ptr,
                      int src_stride,
-                     const uint8_t* const ref_ptr[],
+                     const uint8_t* const ref_array[],
                      int ref_stride,
                      uint32_t* sad_array);
 void vpx_sad8x8x4d_sse2(const uint8_t* src_ptr,
                         int src_stride,
-                        const uint8_t* const ref_ptr[],
+                        const uint8_t* const ref_array[],
                         int ref_stride,
                         uint32_t* sad_array);
 #define vpx_sad8x8x4d vpx_sad8x8x4d_sse2
@@ -6393,850 +6479,850 @@
 #define vpx_scaled_vert vpx_scaled_vert_c
 
 uint32_t vpx_sub_pixel_avg_variance16x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x16_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance16x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance16x8_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance16x8_ssse3(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance16x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x16_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x16_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x32_avx2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance32x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance32x64_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance32x64)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance4x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x4_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance4x4)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance4x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance4x8_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance4x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance64x32_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x32_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance64x32)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance64x64_c(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse,
                                            const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_sse2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_ssse3(const uint8_t* src_ptr,
-                                               int source_stride,
-                                               int xoffset,
-                                               int yoffset,
+                                               int src_stride,
+                                               int x_offset,
+                                               int y_offset,
                                                const uint8_t* ref_ptr,
                                                int ref_stride,
                                                uint32_t* sse,
                                                const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance64x64_avx2(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance64x64)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x16_c(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse,
                                           const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_sse2(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x16_ssse3(const uint8_t* src_ptr,
-                                              int source_stride,
-                                              int xoffset,
-                                              int yoffset,
+                                              int src_stride,
+                                              int x_offset,
+                                              int y_offset,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               uint32_t* sse,
                                               const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x16)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x4_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x4_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x4)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_avg_variance8x8_c(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse,
                                          const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_sse2(const uint8_t* src_ptr,
-                                            int source_stride,
-                                            int xoffset,
-                                            int yoffset,
+                                            int src_stride,
+                                            int x_offset,
+                                            int y_offset,
                                             const uint8_t* ref_ptr,
                                             int ref_stride,
                                             uint32_t* sse,
                                             const uint8_t* second_pred);
 uint32_t vpx_sub_pixel_avg_variance8x8_ssse3(const uint8_t* src_ptr,
-                                             int source_stride,
-                                             int xoffset,
-                                             int yoffset,
+                                             int src_stride,
+                                             int x_offset,
+                                             int y_offset,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              uint32_t* sse,
                                              const uint8_t* second_pred);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_avg_variance8x8)(
     const uint8_t* src_ptr,
-    int source_stride,
-    int xoffset,
-    int yoffset,
+    int src_stride,
+    int x_offset,
+    int y_offset,
     const uint8_t* ref_ptr,
     int ref_stride,
     uint32_t* sse,
     const uint8_t* second_pred);
 
 uint32_t vpx_sub_pixel_variance16x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x16_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x16)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance16x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance16x8_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 uint32_t vpx_sub_pixel_variance16x8_ssse3(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance16x8)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x16_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x16_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x16)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x32_avx2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance32x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance32x64_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance32x64)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance4x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x4_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance4x4)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance4x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance4x8_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance4x8)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance64x32_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x32_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance64x32)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance64x64_c(const uint8_t* src_ptr,
-                                       int source_stride,
-                                       int xoffset,
-                                       int yoffset,
+                                       int src_stride,
+                                       int x_offset,
+                                       int y_offset,
                                        const uint8_t* ref_ptr,
                                        int ref_stride,
                                        uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_sse2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_ssse3(const uint8_t* src_ptr,
-                                           int source_stride,
-                                           int xoffset,
-                                           int yoffset,
+                                           int src_stride,
+                                           int x_offset,
+                                           int y_offset,
                                            const uint8_t* ref_ptr,
                                            int ref_stride,
                                            uint32_t* sse);
 uint32_t vpx_sub_pixel_variance64x64_avx2(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance64x64)(const uint8_t* src_ptr,
-                                                    int source_stride,
-                                                    int xoffset,
-                                                    int yoffset,
+                                                    int src_stride,
+                                                    int x_offset,
+                                                    int y_offset,
                                                     const uint8_t* ref_ptr,
                                                     int ref_stride,
                                                     uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x16_c(const uint8_t* src_ptr,
-                                      int source_stride,
-                                      int xoffset,
-                                      int yoffset,
+                                      int src_stride,
+                                      int x_offset,
+                                      int y_offset,
                                       const uint8_t* ref_ptr,
                                       int ref_stride,
                                       uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_sse2(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x16_ssse3(const uint8_t* src_ptr,
-                                          int source_stride,
-                                          int xoffset,
-                                          int yoffset,
+                                          int src_stride,
+                                          int x_offset,
+                                          int y_offset,
                                           const uint8_t* ref_ptr,
                                           int ref_stride,
                                           uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x16)(const uint8_t* src_ptr,
-                                                   int source_stride,
-                                                   int xoffset,
-                                                   int yoffset,
+                                                   int src_stride,
+                                                   int x_offset,
+                                                   int y_offset,
                                                    const uint8_t* ref_ptr,
                                                    int ref_stride,
                                                    uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x4_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x4_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x4)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
 
 uint32_t vpx_sub_pixel_variance8x8_c(const uint8_t* src_ptr,
-                                     int source_stride,
-                                     int xoffset,
-                                     int yoffset,
+                                     int src_stride,
+                                     int x_offset,
+                                     int y_offset,
                                      const uint8_t* ref_ptr,
                                      int ref_stride,
                                      uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_sse2(const uint8_t* src_ptr,
-                                        int source_stride,
-                                        int xoffset,
-                                        int yoffset,
+                                        int src_stride,
+                                        int x_offset,
+                                        int y_offset,
                                         const uint8_t* ref_ptr,
                                         int ref_stride,
                                         uint32_t* sse);
 uint32_t vpx_sub_pixel_variance8x8_ssse3(const uint8_t* src_ptr,
-                                         int source_stride,
-                                         int xoffset,
-                                         int yoffset,
+                                         int src_stride,
+                                         int x_offset,
+                                         int y_offset,
                                          const uint8_t* ref_ptr,
                                          int ref_stride,
                                          uint32_t* sse);
 RTCD_EXTERN uint32_t (*vpx_sub_pixel_variance8x8)(const uint8_t* src_ptr,
-                                                  int source_stride,
-                                                  int xoffset,
-                                                  int yoffset,
+                                                  int src_stride,
+                                                  int x_offset,
+                                                  int y_offset,
                                                   const uint8_t* ref_ptr,
                                                   int ref_stride,
                                                   uint32_t* sse);
@@ -7264,315 +7350,315 @@
 #define vpx_sum_squares_2d_i16 vpx_sum_squares_2d_i16_sse2
 
 void vpx_tm_predictor_16x16_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_16x16_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_tm_predictor_16x16 vpx_tm_predictor_16x16_sse2
 
 void vpx_tm_predictor_32x32_c(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 void vpx_tm_predictor_32x32_sse2(uint8_t* dst,
-                                 ptrdiff_t y_stride,
+                                 ptrdiff_t stride,
                                  const uint8_t* above,
                                  const uint8_t* left);
 #define vpx_tm_predictor_32x32 vpx_tm_predictor_32x32_sse2
 
 void vpx_tm_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_4x4_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_tm_predictor_4x4 vpx_tm_predictor_4x4_sse2
 
 void vpx_tm_predictor_8x8_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 void vpx_tm_predictor_8x8_sse2(uint8_t* dst,
-                               ptrdiff_t y_stride,
+                               ptrdiff_t stride,
                                const uint8_t* above,
                                const uint8_t* left);
 #define vpx_tm_predictor_8x8 vpx_tm_predictor_8x8_sse2
 
 void vpx_v_predictor_16x16_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_16x16_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_v_predictor_16x16 vpx_v_predictor_16x16_sse2
 
 void vpx_v_predictor_32x32_c(uint8_t* dst,
-                             ptrdiff_t y_stride,
+                             ptrdiff_t stride,
                              const uint8_t* above,
                              const uint8_t* left);
 void vpx_v_predictor_32x32_sse2(uint8_t* dst,
-                                ptrdiff_t y_stride,
+                                ptrdiff_t stride,
                                 const uint8_t* above,
                                 const uint8_t* left);
 #define vpx_v_predictor_32x32 vpx_v_predictor_32x32_sse2
 
 void vpx_v_predictor_4x4_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_4x4_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_v_predictor_4x4 vpx_v_predictor_4x4_sse2
 
 void vpx_v_predictor_8x8_c(uint8_t* dst,
-                           ptrdiff_t y_stride,
+                           ptrdiff_t stride,
                            const uint8_t* above,
                            const uint8_t* left);
 void vpx_v_predictor_8x8_sse2(uint8_t* dst,
-                              ptrdiff_t y_stride,
+                              ptrdiff_t stride,
                               const uint8_t* above,
                               const uint8_t* left);
 #define vpx_v_predictor_8x8 vpx_v_predictor_8x8_sse2
 
 unsigned int vpx_variance16x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x16_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance16x16_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x16)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance16x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance16x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance16x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance16x8_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance16x8_sse2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 unsigned int vpx_variance16x8_avx2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance16x8)(const uint8_t* src_ptr,
-                                             int source_stride,
+                                             int src_stride,
                                              const uint8_t* ref_ptr,
                                              int ref_stride,
                                              unsigned int* sse);
 
 unsigned int vpx_variance32x16_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x16_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x16_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x16)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance32x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance32x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance32x64_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance32x64_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance32x64)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance4x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x4_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance4x4 vpx_variance4x4_sse2
 
 unsigned int vpx_variance4x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance4x8_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance4x8 vpx_variance4x8_sse2
 
 unsigned int vpx_variance64x32_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x32_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance64x32_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance64x32)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance64x64_c(const uint8_t* src_ptr,
-                                 int source_stride,
+                                 int src_stride,
                                  const uint8_t* ref_ptr,
                                  int ref_stride,
                                  unsigned int* sse);
 unsigned int vpx_variance64x64_sse2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 unsigned int vpx_variance64x64_avx2(const uint8_t* src_ptr,
-                                    int source_stride,
+                                    int src_stride,
                                     const uint8_t* ref_ptr,
                                     int ref_stride,
                                     unsigned int* sse);
 RTCD_EXTERN unsigned int (*vpx_variance64x64)(const uint8_t* src_ptr,
-                                              int source_stride,
+                                              int src_stride,
                                               const uint8_t* ref_ptr,
                                               int ref_stride,
                                               unsigned int* sse);
 
 unsigned int vpx_variance8x16_c(const uint8_t* src_ptr,
-                                int source_stride,
+                                int src_stride,
                                 const uint8_t* ref_ptr,
                                 int ref_stride,
                                 unsigned int* sse);
 unsigned int vpx_variance8x16_sse2(const uint8_t* src_ptr,
-                                   int source_stride,
+                                   int src_stride,
                                    const uint8_t* ref_ptr,
                                    int ref_stride,
                                    unsigned int* sse);
 #define vpx_variance8x16 vpx_variance8x16_sse2
 
 unsigned int vpx_variance8x4_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x4_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance8x4 vpx_variance8x4_sse2
 
 unsigned int vpx_variance8x8_c(const uint8_t* src_ptr,
-                               int source_stride,
+                               int src_stride,
                                const uint8_t* ref_ptr,
                                int ref_stride,
                                unsigned int* sse);
 unsigned int vpx_variance8x8_sse2(const uint8_t* src_ptr,
-                                  int source_stride,
+                                  int src_stride,
                                   const uint8_t* ref_ptr,
                                   int ref_stride,
                                   unsigned int* sse);
 #define vpx_variance8x8 vpx_variance8x8_sse2
 
 void vpx_ve_predictor_4x4_c(uint8_t* dst,
-                            ptrdiff_t y_stride,
+                            ptrdiff_t stride,
                             const uint8_t* above,
                             const uint8_t* left);
 #define vpx_ve_predictor_4x4 vpx_ve_predictor_4x4_c
@@ -7752,6 +7838,15 @@
   vpx_highbd_d63_predictor_8x8 = vpx_highbd_d63_predictor_8x8_c;
   if (flags & HAS_SSSE3)
     vpx_highbd_d63_predictor_8x8 = vpx_highbd_d63_predictor_8x8_ssse3;
+  vpx_highbd_hadamard_16x16 = vpx_highbd_hadamard_16x16_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_hadamard_16x16 = vpx_highbd_hadamard_16x16_avx2;
+  vpx_highbd_hadamard_32x32 = vpx_highbd_hadamard_32x32_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_hadamard_32x32 = vpx_highbd_hadamard_32x32_avx2;
+  vpx_highbd_hadamard_8x8 = vpx_highbd_hadamard_8x8_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_hadamard_8x8 = vpx_highbd_hadamard_8x8_avx2;
   vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct16x16_10_add = vpx_highbd_idct16x16_10_add_sse4_1;
@@ -7779,6 +7874,9 @@
   vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_sse2;
   if (flags & HAS_SSE4_1)
     vpx_highbd_idct8x8_64_add = vpx_highbd_idct8x8_64_add_sse4_1;
+  vpx_highbd_satd = vpx_highbd_satd_c;
+  if (flags & HAS_AVX2)
+    vpx_highbd_satd = vpx_highbd_satd_avx2;
   vpx_idct32x32_135_add = vpx_idct32x32_135_add_sse2;
   if (flags & HAS_SSSE3)
     vpx_idct32x32_135_add = vpx_idct32x32_135_add_ssse3;
@@ -7841,6 +7939,9 @@
   vpx_sad32x32x4d = vpx_sad32x32x4d_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x32x4d = vpx_sad32x32x4d_avx2;
+  vpx_sad32x32x8 = vpx_sad32x32x8_c;
+  if (flags & HAS_AVX2)
+    vpx_sad32x32x8 = vpx_sad32x32x8_avx2;
   vpx_sad32x64 = vpx_sad32x64_sse2;
   if (flags & HAS_AVX2)
     vpx_sad32x64 = vpx_sad32x64_avx2;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/.clang-format b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/.clang-format
index e76a526e..866b7e2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/.clang-format
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/.clang-format
@@ -1,7 +1,7 @@
 ---
 Language:        Cpp
 # BasedOnStyle:  Google
-# Generated with clang-format 5.0.0
+# Generated with clang-format 7.0.1
 AccessModifierOffset: -1
 AlignAfterOpenBracket: Align
 AlignConsecutiveAssignments: false
@@ -30,6 +30,7 @@
   AfterObjCDeclaration: false
   AfterStruct:     false
   AfterUnion:      false
+  AfterExternBlock: false
   BeforeCatch:     false
   BeforeElse:      false
   IndentBraces:    false
@@ -39,6 +40,7 @@
 BreakBeforeBinaryOperators: None
 BreakBeforeBraces: Attach
 BreakBeforeInheritanceComma: false
+BreakInheritanceList: BeforeColon
 BreakBeforeTernaryOperators: true
 BreakConstructorInitializersBeforeComma: false
 BreakConstructorInitializers: BeforeColon
@@ -59,7 +61,10 @@
   - foreach
   - Q_FOREACH
   - BOOST_FOREACH
+IncludeBlocks:   Preserve
 IncludeCategories:
+  - Regex:           '^<ext/.*\.h>'
+    Priority:        2
   - Regex:           '^<.*\.h>'
     Priority:        1
   - Regex:           '^<.*'
@@ -68,6 +73,7 @@
     Priority:        3
 IncludeIsMainRegex: '([-_](test|unittest))?$'
 IndentCaseLabels: true
+IndentPPDirectives: None
 IndentWidth:     2
 IndentWrappedFunctionNames: false
 JavaScriptQuotes: Leave
@@ -77,6 +83,7 @@
 MacroBlockEnd:   ''
 MaxEmptyLinesToKeep: 1
 NamespaceIndentation: None
+ObjCBinPackProtocolList: Never
 ObjCBlockIndentWidth: 2
 ObjCSpaceAfterProperty: false
 ObjCSpaceBeforeProtocolList: false
@@ -84,17 +91,50 @@
 PenaltyBreakBeforeFirstCallParameter: 1
 PenaltyBreakComment: 300
 PenaltyBreakFirstLessLess: 120
+PenaltyBreakTemplateDeclaration: 10
 PenaltyBreakString: 1000
 PenaltyExcessCharacter: 1000000
 PenaltyReturnTypeOnItsOwnLine: 200
 PointerAlignment: Right
+RawStringFormats:
+  - Language:        Cpp
+    Delimiters:
+      - cc
+      - CC
+      - cpp
+      - Cpp
+      - CPP
+      - 'c++'
+      - 'C++'
+    CanonicalDelimiter: ''
+    BasedOnStyle:    google
+  - Language:        TextProto
+    Delimiters:
+      - pb
+      - PB
+      - proto
+      - PROTO
+    EnclosingFunctions:
+      - EqualsProto
+      - EquivToProto
+      - PARSE_PARTIAL_TEXT_PROTO
+      - PARSE_TEST_PROTO
+      - PARSE_TEXT_PROTO
+      - ParseTextOrDie
+      - ParseTextProtoOrDie
+    CanonicalDelimiter: ''
+    BasedOnStyle:    google
 ReflowComments:  true
 SortIncludes:    false
 SortUsingDeclarations: true
 SpaceAfterCStyleCast: false
 SpaceAfterTemplateKeyword: true
 SpaceBeforeAssignmentOperators: true
+SpaceBeforeCpp11BracedList: false
+SpaceBeforeCtorInitializerColon: true
+SpaceBeforeInheritanceColon: true
 SpaceBeforeParens: ControlStatements
+SpaceBeforeRangeBasedForLoopColon: true
 SpaceInEmptyParentheses: false
 SpacesBeforeTrailingComments: 2
 SpacesInAngles:  false
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/.gitattributes b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/.gitattributes
index ffc6912..8088b70 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/.gitattributes
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/.gitattributes
@@ -1,18 +1,2 @@
-*.[chs]      filter=fixtabswsp
-*.[ch]pp     filter=fixtabswsp
-*.[ch]xx     filter=fixtabswsp
-*.asm        filter=fixtabswsp
-*.php        filter=fixtabswsp
-*.pl         filter=fixtabswsp
-*.sh         filter=fixtabswsp
-*.txt	     filter=fixwsp
-[Mm]akefile  filter=fixwsp
-*.mk         filter=fixwsp
-*.rc         -crlf
-*.ds[pw]     -crlf
-*.bat        -crlf
-*.mmp        -crlf
-*.dpj        -crlf
-*.pjt        -crlf
-*.vcp        -crlf
-*.inf        -crlf
+configure    eol=lf
+*.sh         eol=lf
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/.gitignore b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/.gitignore
new file mode 100644
index 0000000..5f26835
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/.gitignore
@@ -0,0 +1,69 @@
+*.S
+*.a
+*.asm.s
+*.d
+*.gcda
+*.gcno
+*.o
+*~
+.cproject
+.project
+.settings
+/*-*.mk
+/*.asm
+/*.doxy
+/*.ivf
+/*.ivf.md5
+/.bins
+/.deps
+/.docs
+/.install-*
+/.libs
+/Makefile
+/arm_neon.h
+/config.log
+/config.mk
+/docs/
+/doxyfile
+/examples/*.dox
+/examples/decode_to_md5
+/examples/decode_with_drops
+/examples/decode_with_partial_drops
+/examples/example_xma
+/examples/postproc
+/examples/resize_util
+/examples/set_maps
+/examples/simple_decoder
+/examples/simple_encoder
+/examples/twopass_encoder
+/examples/vp8_multi_resolution_encoder
+/examples/vp8cx_set_ref
+/examples/vp9cx_set_ref
+/examples/vp9_lossless_encoder
+/examples/vp9_spatial_svc_encoder
+/examples/vpx_temporal_svc_encoder
+/ivfdec
+/ivfdec.dox
+/ivfenc
+/ivfenc.dox
+/libvpx.so*
+/libvpx.ver
+/samples.dox
+/test_intra_pred_speed
+/test_libvpx
+/tools.dox
+/tools/*.dox
+/tools/tiny_ssim
+/vp8_api1_migration.dox
+/vp[89x]_rtcd.h
+/vpx.pc
+/vpx_config.c
+/vpx_config.h
+/vpx_dsp_rtcd.h
+/vpx_scale_rtcd.h
+/vpx_version.h
+/vpxdec
+/vpxdec.dox
+/vpxenc
+/vpxenc.dox
+TAGS
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/.mailmap b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/.mailmap
index 29af510..8d97500 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/.mailmap
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/.mailmap
@@ -1,12 +1,17 @@
 Adrian Grange <agrange@google.com>
-Aℓex Converse <aconverse@google.com>
-Aℓex Converse <aconverse@google.com> <alex.converse@gmail.com>
+Aℓex Converse <alexconv@twitch.tv>
+Aℓex Converse <alexconv@twitch.tv> <aconverse@google.com>
+Aℓex Converse <alexconv@twitch.tv> <alex.converse@gmail.com>
 Alexis Ballier <aballier@gentoo.org> <alexis.ballier@gmail.com>
 Alpha Lam <hclam@google.com> <hclam@chromium.org>
+Angie Chiang <angiebird@google.com>
 Chris Cunningham <chcunningham@chromium.org>
+Chi Yo Tsai <chiyotsai@google.com>
 Daniele Castagna <dcastagna@chromium.org> <dcastagna@google.com>
 Deb Mukherjee <debargha@google.com>
+Elliott Karpilovsky <elliottk@google.com>
 Erik Niemeyer <erik.a.niemeyer@intel.com> <erik.a.niemeyer@gmail.com>
+Fyodor Kyslov <kyslov@google.com>
 Guillaume Martres <gmartres@google.com> <smarter3@gmail.com>
 Hangyu Kuang <hkuang@google.com>
 Hui Su <huisu@google.com>
@@ -20,6 +25,8 @@
 Joshua Litt <joshualitt@google.com> <joshualitt@chromium.org>
 Marco Paniconi <marpan@google.com>
 Marco Paniconi <marpan@google.com> <marpan@chromium.org>
+Martin Storsjö <martin@martin.st>
+Michael Horowitz <mhoro@webrtc.org> <mhoro@google.com>
 Pascal Massimino <pascal.massimino@gmail.com>
 Paul Wilkins <paulwilkins@google.com>
 Peter Boström <pbos@chromium.org> <pbos@google.com>
@@ -28,6 +35,7 @@
 Ralph Giles <giles@xiph.org> <giles@entropywave.com>
 Ralph Giles <giles@xiph.org> <giles@mozilla.com>
 Ronald S. Bultje <rsbultje@gmail.com> <rbultje@google.com>
+Sai Deng <sdeng@google.com>
 Sami Pietilä <samipietila@google.com>
 Shiyou Yin <yinshiyou-hf@loongson.cn>
 Tamar Levy <tamar.levy@intel.com>
@@ -40,3 +48,6 @@
 Yaowu Xu <yaowu@google.com> <adam@xuyaowu.com>
 Yaowu Xu <yaowu@google.com> <yaowu@xuyaowu.com>
 Yaowu Xu <yaowu@google.com> <Yaowu Xu>
+Venkatarama NG. Avadhani <venkatarama.avadhani@ittiam.com>
+Vitaly Buka <vitalybuka@chromium.org> <vitlaybuka@chromium.org>
+xiwei gu <guxiwei-hf@loongson.cn>
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/AUTHORS b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/AUTHORS
index 04c2872..3eb03e9 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/AUTHORS
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/AUTHORS
@@ -4,12 +4,13 @@
 Aaron Watry <awatry@gmail.com>
 Abo Talib Mahfoodh <ab.mahfoodh@gmail.com>
 Adrian Grange <agrange@google.com>
-Aℓex Converse <aconverse@google.com>
 Ahmad Sharif <asharif@google.com>
+Aidan Welch <aidansw@yahoo.com>
 Aleksey Vasenev <margtu-fivt@ya.ru>
 Alexander Potapenko <glider@google.com>
 Alexander Voronov <avoronov@graphics.cs.msu.ru>
 Alexandra Hájková <alexandra.khirnova@gmail.com>
+Aℓex Converse <alexconv@twitch.tv>
 Alexis Ballier <aballier@gentoo.org>
 Alok Ahuja <waveletcoeff@gmail.com>
 Alpha Lam <hclam@google.com>
@@ -19,18 +20,22 @@
 Andres Mejia <mcitadel@gmail.com>
 Andrew Lewis <andrewlewis@google.com>
 Andrew Russell <anrussell@google.com>
+Angie Chen <yunqi@google.com>
 Angie Chiang <angiebird@google.com>
 Aron Rosenberg <arosenberg@logitech.com>
 Attila Nagy <attilanagy@google.com>
+Birk Magnussen <birk.magnussen@googlemail.com>
 Brion Vibber <bvibber@wikimedia.org>
 changjun.yang <changjun.yang@intel.com>
 Charles 'Buck' Krasic <ckrasic@google.com>
 Cheng Chen <chengchen@google.com>
+Chi Yo Tsai <chiyotsai@google.com>
 chm <chm@rock-chips.com>
 Chris Cunningham <chcunningham@chromium.org>
 Christian Duvivier <cduvivier@google.com>
 Daniele Castagna <dcastagna@chromium.org>
 Daniel Kang <ddkang@google.com>
+Dan Zhu <zxdan@google.com>
 Deb Mukherjee <debargha@google.com>
 Deepa K G <deepa.kg@ittiam.com>
 Dim Temp <dimtemp0@gmail.com>
@@ -38,11 +43,13 @@
 Dragan Mrdjan <dmrdjan@mips.com>
 Ed Baker <edward.baker@intel.com>
 Ehsan Akhgari <ehsan.akhgari@gmail.com>
+Elliott Karpilovsky <elliottk@google.com>
 Erik Niemeyer <erik.a.niemeyer@intel.com>
 Fabio Pedretti <fabio.ped@libero.it>
 Frank Galligan <fgalligan@google.com>
 Fredrik Söderquist <fs@opera.com>
 Fritz Koenig <frkoenig@google.com>
+Fyodor Kyslov <kyslov@google.com>
 Gabriel Marin <gmx@chromium.org>
 Gaute Strokkenes <gaute.strokkenes@broadcom.com>
 Geza Lore <gezalore@gmail.com>
@@ -55,7 +62,9 @@
 Hangyu Kuang <hkuang@google.com>
 Hanno Böck <hanno@hboeck.de>
 Han Shen <shenhan@google.com>
+Harish Mahendrakar <harish.mahendrakar@ittiam.com>
 Henrik Lundin <hlundin@google.com>
+Hien Ho <hienho@google.com>
 Hui Su <huisu@google.com>
 Ivan Krasin <krasin@chromium.org>
 Ivan Maltz <ivanmaltz@google.com>
@@ -81,6 +90,7 @@
 John Koleszar <jkoleszar@google.com>
 Johnny Klonaris <google@jawknee.com>
 John Stark <jhnstrk@gmail.com>
+Jon Kunkee <jkunkee@microsoft.com>
 Joshua Bleecher Snyder <josh@treelinelabs.com>
 Joshua Litt <joshualitt@google.com>
 Julia Robson <juliamrobson@gmail.com>
@@ -91,15 +101,19 @@
 Kyle Siefring <kylesiefring@gmail.com>
 Lawrence Velázquez <larryv@macports.org>
 Linfeng Zhang <linfengz@google.com>
+Liu Peng <pengliu.mail@gmail.com>
 Lou Quillio <louquillio@google.com>
 Luca Barbato <lu_zero@gentoo.org>
+Luc Trudeau <luc@trud.ca>
 Makoto Kato <makoto.kt@gmail.com>
 Mans Rullgard <mans@mansr.com>
 Marco Paniconi <marpan@google.com>
 Mark Mentovai <mark@chromium.org>
 Martin Ettl <ettl.martin78@googlemail.com>
-Martin Storsjo <martin@martin.st>
+Martin Storsjö <martin@martin.st>
 Matthew Heaney <matthewjheaney@chromium.org>
+Matthias Räncker <theonetruecamper@gmx.de>
+Michael Horowitz <mhoro@webrtc.org>
 Michael Kohler <michaelkohler@live.com>
 Mike Frysinger <vapier@chromium.org>
 Mike Hommey <mhommey@mozilla.com>
@@ -107,10 +121,12 @@
 Min Chen <chenm003@gmail.com>
 Minghai Shang <minghai@google.com>
 Min Ye <yeemmi@google.com>
+Mirko Bonadei <mbonadei@google.com>
 Moriyoshi Koizumi <mozo@mozo.jp>
 Morton Jonuschat <yabawock@gmail.com>
 Nathan E. Egge <negge@mozilla.com>
 Nico Weber <thakis@chromium.org>
+Niveditha Rau <niveditha.rau@gmail.com>
 Parag Salasakar <img.mips1@gmail.com>
 Pascal Massimino <pascal.massimino@gmail.com>
 Patrik Westin <patrik.westin@gmail.com>
@@ -129,9 +145,13 @@
 Rahul Chaudhry <rahulchaudhry@google.com>
 Ralph Giles <giles@xiph.org>
 Ranjit Kumar Tulabandu <ranjit.tulabandu@ittiam.com>
+Raphael Kubo da Costa <raphael.kubo.da.costa@intel.com>
+Ravi Chaudhary <ravi.chaudhary@ittiam.com>
+Ritu Baldwa <ritu.baldwa@ittiam.com>
 Rob Bradford <rob@linux.intel.com>
 Ronald S. Bultje <rsbultje@gmail.com>
 Rui Ueyama <ruiu@google.com>
+Sai Deng <sdeng@google.com>
 Sami Pietilä <samipietila@google.com>
 Sarah Parker <sarahparker@google.com>
 Sasi Inguva <isasi@google.com>
@@ -139,12 +159,15 @@
 Scott LaVarnway <slavarnway@google.com>
 Sean McGovern <gseanmcg@gmail.com>
 Sergey Kolomenkin <kolomenkin@gmail.com>
+Sergey Silkin <ssilkin@google.com>
 Sergey Ulanov <sergeyu@chromium.org>
 Shimon Doodkin <helpmepro1@gmail.com>
 Shiyou Yin <yinshiyou-hf@loongson.cn>
+Shubham Tandle <shubham.tandle@ittiam.com>
 Shunyao Li <shunyaoli@google.com>
 Stefan Holmer <holmer@google.com>
 Suman Sunkara <sunkaras@google.com>
+Supradeep T R <supradeep.tr@ittiam.com>
 Sylvestre Ledru <sylvestre@mozilla.com>
 Taekhyun Kim <takim@nvidia.com>
 Takanori MATSUURA <t.matsuu@gmail.com>
@@ -157,11 +180,17 @@
 Tom Finegan <tomfinegan@google.com>
 Tristan Matthews <le.businessman@gmail.com>
 Urvang Joshi <urvang@google.com>
+Venkatarama NG. Avadhani <venkatarama.avadhani@ittiam.com>
 Vignesh Venkatasubramanian <vigneshv@google.com>
+Vitaly Buka <vitalybuka@chromium.org>
 Vlad Tsyrklevich <vtsyrklevich@chromium.org>
+Wan-Teh Chang <wtc@google.com>
+xiwei gu <guxiwei-hf@loongson.cn>
 Yaowu Xu <yaowu@google.com>
 Yi Luo <luoyi@google.com>
 Yongzhe Wang <yongzhe@google.com>
+Yue Chen <yuec@google.com>
+Yun Liu <yliuyliu@google.com>
 Yunqing Wang <yunqingwang@google.com>
 Yury Gitman <yuryg@google.com>
 Zoe Liu <zoeliu@google.com>
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/README b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/README
index 73304dd..fd3337d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/README
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/README
@@ -1,4 +1,4 @@
-README - 24 January 2018
+README - 9 December 2019
 
 Welcome to the WebM VP8/VP9 Codec SDK!
 
@@ -9,22 +9,26 @@
 
   1. Prerequisites
 
-    * All x86 targets require the Yasm[1] assembler be installed.
-    * All Windows builds require that Cygwin[2] be installed.
-    * Building the documentation requires Doxygen[3]. If you do not
+    * All x86 targets require the Yasm[1] assembler be installed[2].
+    * All Windows builds require that Cygwin[3] be installed.
+    * Building the documentation requires Doxygen[4]. If you do not
       have this package, the install-docs option will be disabled.
-    * Downloading the data for the unit tests requires curl[4] and sha1sum.
+    * Downloading the data for the unit tests requires curl[5] and sha1sum.
       sha1sum is provided via the GNU coreutils, installed by default on
       many *nix platforms, as well as MinGW and Cygwin. If coreutils is not
       available, a compatible version of sha1sum can be built from
-      source[5]. These requirements are optional if not running the unit
+      source[6]. These requirements are optional if not running the unit
       tests.
 
     [1]: http://www.tortall.net/projects/yasm
-    [2]: http://www.cygwin.com
-    [3]: http://www.doxygen.org
-    [4]: http://curl.haxx.se
-    [5]: http://www.microbrew.org/tools/md5sha1sum/
+    [2]: For Visual Studio the base yasm binary (not vsyasm) should be in the
+         PATH for Visual Studio. For VS2017 it is sufficient to rename
+         yasm-<version>-<arch>.exe to yasm.exe and place it in:
+         Program Files (x86)/Microsoft Visual Studio/2017/<level>/Common7/Tools/
+    [3]: http://www.cygwin.com
+    [4]: http://www.doxygen.org
+    [5]: http://curl.haxx.se
+    [6]: http://www.microbrew.org/tools/md5sha1sum/
 
   2. Out-of-tree builds
   Out of tree builds are a supported method of building the application. For
@@ -41,7 +45,16 @@
   used to get a list of supported options:
     $ ../libvpx/configure --help
 
-  4. Cross development
+  4. Compiler analyzers
+  Compilers have added sanitizers which instrument binaries with information
+  about address calculation, memory usage, threading, undefined behavior, and
+  other common errors. To simplify building libvpx with some of these features
+  use tools/set_analyzer_env.sh before running configure. It will set the
+  compiler and necessary flags for building as well as environment variables
+  read by the analyzer when testing the binaries.
+    $ source ../libvpx/tools/set_analyzer_env.sh address
+
+  5. Cross development
   For cross development, the most notable option is the --target option. The
   most up-to-date list of supported targets can be found at the bottom of the
   --help output of the configure script. As of this writing, the list of
@@ -50,20 +63,20 @@
     arm64-android-gcc
     arm64-darwin-gcc
     arm64-linux-gcc
+    arm64-win64-gcc
+    arm64-win64-vs15
     armv7-android-gcc
     armv7-darwin-gcc
     armv7-linux-rvct
     armv7-linux-gcc
     armv7-none-rvct
-    armv7-win32-vs11
-    armv7-win32-vs12
+    armv7-win32-gcc
     armv7-win32-vs14
     armv7-win32-vs15
     armv7s-darwin-gcc
     armv8-linux-gcc
     mips32-linux-gcc
     mips64-linux-gcc
-    ppc64-linux-gcc
     ppc64le-linux-gcc
     sparc-solaris-gcc
     x86-android-gcc
@@ -78,17 +91,16 @@
     x86-darwin14-gcc
     x86-darwin15-gcc
     x86-darwin16-gcc
+    x86-darwin17-gcc
     x86-iphonesimulator-gcc
     x86-linux-gcc
     x86-linux-icc
     x86-os2-gcc
     x86-solaris-gcc
     x86-win32-gcc
-    x86-win32-vs10
-    x86-win32-vs11
-    x86-win32-vs12
     x86-win32-vs14
     x86-win32-vs15
+    x86-win32-vs16
     x86_64-android-gcc
     x86_64-darwin9-gcc
     x86_64-darwin10-gcc
@@ -98,16 +110,16 @@
     x86_64-darwin14-gcc
     x86_64-darwin15-gcc
     x86_64-darwin16-gcc
+    x86_64-darwin17-gcc
+    x86_64-darwin18-gcc
     x86_64-iphonesimulator-gcc
     x86_64-linux-gcc
     x86_64-linux-icc
     x86_64-solaris-gcc
     x86_64-win64-gcc
-    x86_64-win64-vs10
-    x86_64-win64-vs11
-    x86_64-win64-vs12
     x86_64-win64-vs14
     x86_64-win64-vs15
+    x86_64-win64-vs16
     generic-gnu
 
   The generic-gnu target, in conjunction with the CROSS environment variable,
@@ -123,7 +135,7 @@
   environment variables: CC, AR, LD, AS, STRIP, NM. Additional flags can be
   passed to these executables with CFLAGS, LDFLAGS, and ASFLAGS.
 
-  5. Configuration errors
+  6. Configuration errors
   If the configuration step fails, the first step is to look in the error log.
   This defaults to config.log. This should give a good indication of what went
   wrong. If not, contact us for support.
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/args.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/args.h
index 54abe04..aae8ec0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/args.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/args.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef ARGS_H_
-#define ARGS_H_
+#ifndef VPX_ARGS_H_
+#define VPX_ARGS_H_
 #include <stdio.h>
 
 #ifdef __cplusplus
@@ -60,4 +60,4 @@
 }  // extern "C"
 #endif
 
-#endif  // ARGS_H_
+#endif  // VPX_ARGS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/cur_frame_16x16.txt b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/cur_frame_16x16.txt
new file mode 100644
index 0000000..c264639
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/cur_frame_16x16.txt
@@ -0,0 +1,2 @@
+486,720
+230,207,226,208,198,205,214,224,228,181,208,205,211,218,218,221,213,193,213,233,219,206,226,199,199,189,211,190,204,231,229,236,218,227,194,229,222,227,210,219,237,219,225,227,212,207,197,203,207,216,238,208,233,222,213,212,213,220,221,222,191,215,237,211,226,234,208,214,239,210,223,224,236,248,233,216,237,211,198,227,231,233,236,238,239,224,235,208,219,229,212,241,243,218,233,225,231,230,224,221,248,209,241,215,237,216,244,241,218,221,240,211,239,231,237,231,221,219,240,219,232,222,223,255,229,245,243,242,245,246,217,234,243,222,213,239,230,231,230,242,248,225,238,236,223,214,229,216,240,223,212,228,236,219,254,240,222,217,246,228,215,230,255,226,230,248,250,254,234,235,232,243,237,255,253,239,239,251,251,252,238,241,240,251,240,243,223,246,246,249,235,233,228,246,232,236,234,255,253,221,244,237,252,245,253,252,221,251,255,255,233,243,243,246,221,234,238,252,252,215,242,255,229,243,255,251,236,231,241,246,237,225,244,229,242,239,234,235,251,237,253,245,251,230,220,239,224,255,244,249,249,249,255,249,236,240,218,248,247,246,235,250,250,230,239,228,238,250,242,241,232,237,238,246,255,243,232,251,252,244,237,249,247,242,246,254,246,255,247,230,235,241,236,252,245,240,244,228,241,247,231,242,242,242,255,246,236,240,245,246,235,208,219,251,251,252,251,244,240,243,230,250,227,254,224,228,249,232,255,245,248,247,241,241,245,234,194,186,231,214,243,230,243,249,248,190,175,174,161,130,37,204,246,253,250,230,253,239,247,181,213,245,228,251,255,253,233,249,246,252,228,228,219,241,235,245,211,228,228,248,254,247,239,251,232,231,250,246,227,237,238,251,251,242,237,240,219,155,137,131,88,97,19,109,243,243,237,242,242,254,244,245,243,237,242,225,237,219,226,242,248,244,247,253,249,229,255,251,245,255,189,192,240,228,249,248,247,237,241,240,247,247,220,250,251,255,231,251,193,103,31,14,31,31,9,13,44,26,50,35,74,113,61,72,15,35,22,59,25,24,47,38,115,112,50,34,26,59,48,65,78,66,62,38,15,54,212,243,249,236,253,254,254,244,247,239,240,243,239,249,237,240,244,252,205,186,243,200,193,2
10,244,243,212,235,232,189,243,236,244,234,244,235,238,237,254,249,227,231,241,246,255,236,236,241,249,241,246,253,229,243,143,99,67,6,70,201,221,250,245,233,248,209,224,198,211,232,225,217,209,255,214,121,184,220,235,232,185,230,236,237,227,213,240,222,241,226,224,215,240,238,252,243,224,222,233,236,237,213,214,214,232,229,227,224,203,222,234,209,225,226,216,202,235,208,207,233,234,230,226,215,214,214,214,241,237,214,226,214,227,205,221,222,224,193,209,209,226,213,235,212,250,180,147,118,85,99,73,79,70,109,95,116,65,100,94,108,104,79,91,75,73,126,199,206,241,239,219,246,226,216,229,213,221,214,227,222,214,207,220,226,207,213,234,199,206,213,221,203,190,198,181,183,190,196,219,207,190,170,196,190,184,208,189,220,217,106,17,8,20,16,21,8,10,0,29,18,9,14,0,11,0,15,39,2,13,4,4,41,28,7,235,225,220,231,197,218,197,225,229,195,220,207,235,222,213,217,219,210,219,207,225,192,207,211,203,205,214,204,203,219,207,207,199,206,235,223,221,198,231,232,220,211,238,234,236,216,228,226,196,202,235,234,214,231,223,213,185,220,242,252,233,221,218,229,212,206,210,197,235,222,197,223,213,209,242,197,226,211,212,216,205,229,229,231,213,235,209,237,222,230,214,241,222,235,242,228,230,229,249,227,212,207,217,227,226,237,232,223,214,208,231,240,223,230,225,239,240,221,237,228,245,241,237,224,239,249,237,230,250,228,244,239,214,247,225,226,234,243,239,225,250,248,249,226,232,237,251,226,231,236,233,248,232,240,233,249,245,236,234,227,211,245,233,234,229,237,240,218,243,237,237,216,250,253,238,249,237,245,248,247,241,247,223,231,246,249,245,244,238,223,247,230,247,252,249,210,250,243,247,253,230,255,248,243,241,246,225,254,248,242,255,239,241,233,248,255,254,255,238,249,244,250,236,253,247,252,240,248,240,238,232,252,246,251,237,231,241,248,231,252,254,243,241,245,251,242,250,212,255,245,248,235,252,243,238,250,242,253,251,251,254,238,233,254,254,243,249,252,254,239,250,241,239,243,245,238,247,243,240,242,243,239,252,238,254,226,233,252,238,245,255,234,249,253,238,226,240,253,255,249,230,232,
242,251,239,240,209,246,236,235,239,244,251,241,236,236,240,255,242,228,250,239,252,231,243,253,253,226,235,237,240,237,252,239,223,252,216,231,253,235,214,213,238,230,245,210,152,148,132,97,40,203,250,242,249,234,220,240,230,182,195,242,255,250,247,223,255,252,255,235,251,213,210,245,255,253,226,244,241,252,254,236,250,254,235,226,249,255,237,218,230,239,237,252,252,242,221,120,118,110,105,120,44,129,241,242,247,243,244,233,249,232,242,222,237,241,243,237,251,248,255,245,254,248,242,255,246,232,221,246,171,164,239,248,218,252,245,252,250,229,250,248,216,239,250,212,203,218,148,100,35,8,47,38,32,60,48,38,43,58,46,103,90,69,19,33,11,55,55,29,34,71,91,63,65,69,23,25,50,82,80,41,42,43,1,142,218,231,233,242,239,249,233,252,228,234,222,246,244,243,216,225,240,252,216,156,221,221,204,212,249,236,219,190,198,172,224,253,244,252,252,237,233,249,249,252,238,253,244,250,242,249,244,255,229,227,217,234,242,245,116,88,54,14,59,229,246,234,254,249,233,233,231,213,210,219,243,231,228,242,191,71,184,223,223,235,234,236,237,236,238,228,232,219,226,221,208,228,245,213,228,238,230,230,227,217,227,229,224,214,215,221,217,213,217,236,238,224,213,225,194,248,230,234,226,243,214,214,238,226,237,220,231,236,224,222,233,230,219,229,196,208,232,214,217,202,235,247,235,233,206,186,127,128,96,139,132,132,120,114,79,94,87,109,98,90,106,70,58,59,83,74,76,117,204,208,247,231,238,209,214,190,230,236,208,188,225,211,192,208,185,212,213,201,208,215,228,198,210,217,185,197,224,206,206,219,206,198,214,184,194,186,203,208,203,114,26,0,35,5,8,16,3,1,3,1,6,5,10,20,4,6,4,4,0,15,30,7,22,12,216,211,220,193,216,215,233,232,208,205,214,202,231,203,211,224,215,222,216,199,217,210,228,243,237,216,214,213,231,213,203,217,232,218,225,200,200,205,199,235,215,218,215,211,214,194,230,221,240,193,221,212,234,209,193,207,243,216,217,223,231,238,226,193,226,209,236,220,232,240,219,228,214,209,221,218,244,221,217,206,214,235,227,229,226,205,202,238,205,224,222,241,236,229,207,228,222,215,234,231,216,217,237,235,229,252
,230,225,220,229,235,241,250,206,227,222,217,238,242,247,228,226,238,210,248,241,255,218,244,209,246,231,201,241,236,236,226,216,210,218,232,199,238,222,246,235,249,244,232,238,240,244,237,252,241,246,234,250,227,225,252,252,237,243,246,239,243,236,235,225,240,241,245,249,230,244,237,206,237,242,237,214,237,241,231,207,244,244,234,231,232,255,252,250,242,252,254,242,255,228,254,239,233,240,250,255,245,229,234,254,249,250,248,242,249,246,252,250,246,225,246,245,231,254,247,230,252,250,252,254,241,240,237,252,246,228,223,251,252,240,240,246,243,253,227,246,242,252,250,242,243,249,246,255,233,248,235,227,255,247,236,246,239,234,245,232,236,217,245,252,253,255,255,251,254,247,249,252,255,237,214,232,237,247,232,212,230,251,238,246,245,233,254,241,238,235,241,249,247,227,255,254,255,243,250,247,249,247,234,248,242,252,241,255,254,243,232,232,239,229,246,255,254,251,249,248,223,240,245,252,239,248,252,242,242,219,238,243,226,197,211,234,253,253,244,234,157,156,140,95,43,196,234,228,235,206,247,247,238,185,196,237,237,234,249,232,230,254,254,221,249,200,180,219,250,241,239,255,219,241,244,238,248,251,248,253,249,250,241,228,225,249,255,250,250,250,206,135,99,113,83,117,24,88,243,237,248,240,229,236,226,250,249,249,244,252,247,251,246,255,239,246,253,252,254,246,247,253,219,234,225,158,224,246,253,247,240,253,247,242,218,219,247,248,233,215,223,240,242,206,118,103,87,103,70,104,52,45,47,75,65,108,83,63,58,36,38,39,50,31,53,100,89,62,54,30,25,38,49,53,96,76,67,41,16,205,241,249,244,239,255,252,237,254,238,248,235,237,253,244,234,232,243,251,227,195,234,254,197,206,229,220,202,201,184,178,231,251,226,233,244,253,238,244,249,251,242,245,240,235,246,249,236,210,242,242,250,247,240,243,125,92,45,0,81,212,243,251,247,244,222,238,221,199,184,218,229,194,211,237,159,82,167,233,220,225,213,224,223,249,224,229,239,232,231,216,245,224,216,236,235,213,215,218,220,235,217,249,234,241,228,231,227,218,229,224,217,200,231,244,222,230,213,199,243,238,209,233,216,226,216,216,203,209,200,221,
218,216,233,232,221,231,195,204,230,216,254,243,224,169,163,154,129,146,89,141,139,112,107,109,93,40,71,40,30,58,31,53,53,49,55,55,63,10,28,88,157,193,228,214,203,200,195,205,218,195,196,197,181,243,188,222,212,203,210,212,206,211,213,220,206,211,202,199,182,201,226,199,203,232,210,196,207,203,216,110,2,0,23,15,31,10,25,14,29,28,10,8,29,1,14,1,16,9,16,13,2,31,10,9,207,201,221,200,221,224,225,217,213,211,228,210,216,210,208,221,212,230,197,196,214,187,234,211,233,220,215,232,213,214,215,215,228,224,225,229,215,212,202,219,222,209,212,213,215,214,210,238,230,236,231,216,220,237,237,237,218,224,242,223,221,213,232,210,217,226,222,223,207,234,226,215,240,225,222,228,231,238,222,203,213,222,243,204,224,214,230,214,243,224,208,227,219,225,237,233,220,216,228,227,216,222,212,217,210,243,227,238,240,229,236,244,235,227,227,220,211,239,244,225,240,247,223,213,249,210,231,232,234,236,228,249,217,223,241,248,221,202,221,239,223,236,219,245,224,223,222,226,251,228,211,211,227,223,237,239,235,232,223,245,250,226,250,251,255,249,212,236,242,249,223,250,215,244,243,244,250,245,250,249,247,237,243,252,226,221,236,238,252,255,246,232,230,244,237,254,255,252,247,242,255,235,249,241,252,247,235,223,250,254,244,227,245,252,252,253,251,241,245,252,235,239,252,242,239,254,239,255,250,254,244,246,254,251,248,253,245,236,213,240,243,252,254,229,240,248,243,254,247,246,249,243,254,231,237,244,237,234,225,252,232,240,249,235,251,250,252,248,238,254,249,238,248,244,229,239,253,229,232,234,247,251,244,246,237,255,233,255,242,248,228,249,253,230,254,237,245,244,226,233,201,248,219,246,239,245,235,239,245,244,248,234,253,250,237,229,253,248,220,254,217,240,243,252,244,249,255,226,249,233,249,252,230,237,244,239,186,190,211,214,229,253,255,249,248,203,167,193,163,81,30,182,254,238,249,253,236,245,248,191,153,227,255,229,231,248,218,228,250,219,215,213,231,246,254,242,247,237,212,189,229,238,246,244,226,239,255,252,237,244,223,240,252,242,255,250,199,126,111,115,109,125,32,119,254,241,248,241,253,
252,218,245,254,247,255,236,224,241,244,255,251,239,241,246,246,246,255,229,239,220,241,187,203,250,244,220,247,254,236,253,222,227,253,242,234,230,238,247,251,247,178,157,138,85,67,107,69,86,67,46,66,57,61,48,45,11,36,29,28,55,89,74,68,51,55,25,42,53,64,81,99,79,73,32,38,223,245,222,240,241,250,248,235,237,250,232,239,234,235,243,240,251,255,248,243,214,224,232,204,216,163,183,233,194,195,211,211,214,231,249,237,245,232,237,231,241,226,255,219,244,239,226,240,236,255,242,242,228,253,217,132,89,45,6,100,213,242,239,248,248,240,223,244,221,201,228,231,226,236,235,153,71,136,234,230,230,229,236,247,236,251,226,231,225,225,245,237,220,241,228,214,233,242,214,207,207,207,227,238,210,245,212,223,241,231,201,219,220,227,215,221,200,230,246,218,195,214,239,213,221,217,216,217,242,218,226,216,239,202,216,214,223,216,177,212,222,232,152,147,119,107,118,76,116,73,53,64,61,47,108,73,76,40,54,33,44,24,44,13,22,28,44,78,62,34,76,30,28,99,176,213,220,215,245,193,217,203,206,222,230,209,197,215,224,208,197,221,205,197,197,218,229,229,212,209,217,204,186,197,206,220,202,196,207,179,116,0,7,3,10,6,13,0,3,22,2,12,14,3,1,19,9,17,25,19,9,5,1,0,3,211,205,197,203,223,210,233,216,223,214,215,203,213,232,215,226,223,219,219,239,197,227,214,223,233,208,203,235,202,204,227,207,206,201,202,216,230,223,227,224,207,225,233,236,214,231,207,229,225,227,209,223,204,223,218,209,224,232,216,232,233,224,224,214,221,209,222,231,212,216,216,224,183,208,223,219,221,197,240,227,195,216,222,242,223,222,210,222,233,205,224,218,208,233,234,231,219,230,210,228,196,220,231,231,238,239,249,248,233,252,243,230,237,246,231,215,235,229,240,222,228,230,243,236,241,223,247,218,247,246,236,232,229,219,230,217,229,223,228,255,234,224,237,232,231,220,236,239,241,241,231,233,230,236,223,225,250,218,228,240,252,235,217,243,240,235,252,235,255,247,211,232,219,245,247,250,255,248,239,252,247,243,226,243,247,249,251,247,249,234,233,242,248,241,252,252,255,233,241,254,242,242,254,242,218,238,250,229,253,238,230,248,236,249,
239,244,251,254,247,248,251,252,252,255,252,255,239,235,245,255,225,233,255,240,243,237,244,239,249,240,244,229,247,242,225,241,254,253,239,252,255,246,235,236,255,254,234,237,254,231,240,244,250,250,236,253,251,243,250,231,227,249,236,234,249,233,229,250,228,250,231,253,254,241,238,248,251,252,253,254,249,245,253,249,224,246,249,233,213,248,248,252,252,242,244,243,235,244,233,241,242,238,245,236,249,248,254,242,245,255,231,245,254,251,236,253,239,251,224,238,250,242,253,238,244,222,206,200,226,238,250,245,244,231,249,197,163,166,150,111,64,209,248,237,250,221,222,247,243,200,173,230,250,251,241,254,229,246,251,207,225,230,240,255,248,237,234,241,203,255,250,249,251,246,228,239,251,246,242,239,232,249,245,255,246,241,209,158,114,123,109,112,24,108,253,232,247,245,238,255,254,243,232,255,230,224,253,250,255,219,249,244,236,233,254,254,249,240,231,237,247,187,232,254,245,241,230,217,246,243,249,226,252,254,230,252,248,250,255,252,173,90,88,52,51,89,113,75,89,169,140,97,37,60,91,60,51,60,44,82,118,79,70,43,47,47,43,55,51,80,79,49,68,21,29,187,246,253,249,249,225,250,225,241,246,255,240,231,221,232,254,225,237,241,255,228,206,232,199,149,135,165,246,232,175,226,222,236,249,217,255,245,209,246,245,255,244,235,249,232,225,232,244,248,242,247,247,233,253,221,125,75,71,26,94,233,246,249,239,244,229,226,233,201,179,227,227,236,253,235,127,57,138,221,223,226,229,242,245,235,225,240,218,227,228,227,225,197,244,220,244,219,219,245,222,248,229,227,224,219,204,237,212,219,216,209,229,230,218,226,208,209,209,219,212,222,210,231,221,225,218,202,223,218,246,238,214,204,212,212,183,226,197,234,217,183,155,59,95,71,39,43,11,47,37,61,25,20,83,85,58,40,50,20,37,12,29,35,26,28,26,75,92,71,64,44,16,11,79,161,170,200,209,232,220,203,196,213,190,218,207,200,226,206,185,215,228,195,214,205,197,217,201,207,215,211,189,204,202,204,200,209,191,196,211,102,13,2,1,0,14,15,17,1,29,11,25,21,3,8,10,21,9,15,1,24,35,11,10,34,224,217,235,216,223,230,206,222,229,214,200,236,203,200,215,221,233,213,228,2
7,197,214,205,219,227,204,225,211,211,219,210,205,217,214,225,219,234,179,225,210,234,225,201,223,223,217,224,222,228,220,229,231,202,217,223,242,206,223,214,217,237,233,214,225,212,219,235,219,215,200,221,240,243,229,225,243,205,215,235,209,212,201,212,216,212,236,212,202,230,210,220,234,221,222,231,209,206,210,235,226,206,191,241,228,233,228,217,229,222,228,208,209,232,231,188,211,215,209,234,211,221,227,200,228,211,234,235,205,249,235,222,232,235,243,233,221,202,231,245,231,233,245,237,214,228,221,229,211,244,244,230,238,243,237,234,243,239,225,210,233,248,235,240,236,229,218,235,219,236,252,253,237,228,204,238,232,236,245,242,224,245,237,243,250,235,239,239,245,235,236,211,241,251,226,245,237,249,253,242,242,241,247,250,229,251,230,255,253,225,254,232,232,236,255,223,245,228,244,245,237,203,253,220,251,252,246,226,248,224,254,234,253,240,243,242,245,247,239,217,232,252,237,255,252,249,252,246,227,249,221,230,252,237,249,255,250,255,227,251,255,255,231,235,217,239,241,226,249,250,255,248,237,238,233,237,242,242,241,245,239,250,236,240,255,242,253,247,224,242,235,230,251,243,253,231,251,252,219,247,255,245,249,230,242,233,220,248,252,241,228,253,235,224,246,226,227,240,243,255,226,229,237,235,247,254,240,242,236,239,254,241,243,248,246,234,246,248,236,232,243,238,252,250,237,248,238,232,184,202,234,229,228,232,237,235,203,223,247,174,158,147,125,62,83,235,251,250,255,232,237,224,255,246,219,199,200,242,221,255,244,221,214,222,242,255,238,221,227,224,195,176,220,233,229,246,250,253,227,249,228,225,242,245,172,127,125,112,49,66,67,63,70,43,44,61,46,51,42,48,45,95,77,55,75,53,48,31,165,236,254,241,236,255,221,255,246,234,238,236,240,242,251,252,226,248,201,113,244,240,250,233,223,241,225,238,249,234,223,178,172,177,149,144,199,248,228,117,87,69,78,87,62,138,159,89,80,88,95,111,70,108,174,145,92,62,50,64,47,74,44,33,67,89,65,66,55,37,57,186,236,243,239,250,252,234,224,236,247,234,242,243,221,255,231,244,250,250,213,245,247,248,215,190,217,248,217,104,103,167,223,239,2
30,164,209,224,213,229,238,248,255,250,224,124,115,84,105,82,86,103,79,74,49,53,36,34,69,35,29,53,31,46,189,223,214,241,240,245,231,240,251,251,239,207,102,93,191,214,211,239,205,210,230,226,222,198,221,218,190,197,200,237,219,214,214,241,231,211,191,223,218,216,239,204,217,204,208,231,219,206,217,222,191,220,235,202,220,233,216,211,213,230,197,231,218,217,206,207,201,198,185,214,212,217,200,204,191,235,190,177,140,78,75,72,83,36,69,65,105,157,156,135,109,66,48,48,73,146,134,136,133,87,68,36,54,14,93,201,240,226,215,194,208,186,202,205,202,215,220,211,214,208,215,213,220,189,197,197,217,231,227,201,189,193,213,208,208,198,224,226,216,201,187,217,216,232,217,208,227,82,1,1,32,7,21,14,8,6,13,7,11,13,2,0,26,7,3,16,7,5,3,7,13,14,205,220,213,216,211,226,221,230,207,203,228,224,225,238,204,223,225,185,208,212,199,212,240,214,228,226,211,228,230,246,231,219,207,214,216,211,226,206,232,225,217,242,228,219,194,214,218,209,204,222,228,225,229,220,233,212,197,227,216,232,226,218,203,211,225,225,234,220,226,234,204,214,230,226,194,219,233,203,212,243,214,213,232,227,224,223,201,229,202,197,209,222,221,225,221,243,237,224,217,242,233,235,215,225,212,225,215,228,200,215,235,226,237,216,206,222,227,217,225,231,230,231,229,220,215,222,232,226,218,217,222,234,236,236,224,221,246,233,210,223,207,233,233,240,254,240,240,242,250,228,224,247,232,232,253,230,245,249,223,228,218,236,249,229,244,231,212,239,234,235,235,215,251,241,249,255,247,233,251,252,234,238,223,236,253,243,216,242,246,245,237,234,245,245,220,235,250,249,255,245,245,235,243,228,253,254,252,238,241,246,247,228,233,249,250,248,251,253,227,247,245,254,255,240,253,248,237,250,252,250,249,251,243,234,241,252,247,243,252,248,252,253,253,242,250,254,242,240,236,251,249,255,249,242,251,234,249,236,252,236,234,247,243,248,247,235,250,248,253,255,232,249,246,237,248,251,245,236,219,251,244,251,224,254,250,242,241,245,231,251,241,251,239,250,251,253,253,243,231,237,250,243,238,247,249,253,224,244,246,240,249,250,230,252,218,231,2
46,233,252,254,253,231,253,252,225,246,250,249,252,255,249,208,234,234,199,241,249,253,242,214,228,235,203,206,241,212,149,161,117,54,74,237,255,252,232,231,255,250,246,241,225,194,207,246,246,245,237,203,210,201,208,210,232,190,185,223,219,224,227,237,236,237,240,250,248,235,246,235,245,180,122,115,119,91,33,52,49,27,27,38,40,49,61,31,39,48,55,74,55,57,63,65,36,52,170,232,244,255,245,253,252,245,237,255,232,249,224,252,234,250,241,222,233,155,224,243,235,247,242,249,249,238,212,187,111,149,204,230,173,174,238,245,118,94,64,52,58,85,67,141,145,102,98,76,119,79,93,133,145,118,62,45,55,91,88,82,46,49,74,57,73,71,57,32,37,234,243,249,246,240,255,242,239,246,235,246,249,255,248,253,224,251,248,234,217,210,241,248,230,172,193,235,217,174,152,205,240,249,238,173,194,246,209,208,234,228,244,238,121,82,82,51,80,80,91,89,78,60,50,51,36,55,46,24,31,37,15,47,226,236,235,224,226,201,216,228,233,246,239,210,90,119,192,202,207,212,219,231,233,225,239,230,224,210,225,218,237,212,213,214,210,223,227,234,241,211,239,212,203,230,201,241,211,220,214,191,240,195,223,209,207,235,226,200,214,210,198,225,215,214,213,210,229,214,226,225,199,194,216,214,241,206,224,195,219,209,136,100,116,75,43,71,93,56,139,178,149,125,85,67,56,37,107,125,117,133,109,69,52,62,31,51,167,215,227,201,213,218,205,196,197,173,209,216,221,201,203,211,225,210,196,222,197,209,210,215,209,223,223,221,233,228,207,219,191,213,215,206,227,207,205,211,199,200,212,109,14,6,7,0,7,18,24,3,37,10,5,48,7,0,8,9,7,2,13,24,27,20,17,3,197,223,234,233,227,204,223,223,214,225,220,203,215,198,209,216,196,199,207,204,236,201,203,217,213,215,213,218,219,224,218,245,192,213,204,222,233,226,197,217,222,234,211,211,220,217,226,216,227,212,213,214,226,222,242,227,225,210,204,221,222,200,212,201,205,207,242,237,207,201,217,231,223,223,239,222,202,209,224,233,227,215,227,248,225,226,218,205,224,205,218,208,227,226,216,202,215,211,231,219,216,223,216,223,203,230,216,208,241,232,218,228,224,214,233,217,204,201,244,221,199,227,227,234,224,233,
223,207,227,223,235,201,216,219,226,237,212,227,226,239,217,234,222,221,241,233,234,234,216,235,243,242,254,225,230,249,242,238,227,233,235,212,208,215,249,233,229,221,235,238,228,223,234,207,246,217,243,227,235,234,245,216,224,245,240,232,250,241,232,236,249,252,245,237,237,241,250,255,237,233,243,231,250,218,249,236,242,254,245,239,246,245,236,250,225,228,243,250,242,249,200,230,245,241,240,240,249,251,230,252,255,235,255,240,245,230,246,247,252,239,249,251,241,255,253,229,244,251,239,249,249,255,244,237,253,246,242,230,255,211,242,255,225,219,228,254,239,230,250,247,239,235,250,221,248,213,242,255,247,246,246,252,253,254,227,241,233,249,245,247,253,252,214,225,239,242,237,226,255,250,245,239,240,240,248,243,239,230,241,242,238,254,242,241,229,242,226,254,248,251,231,231,236,255,226,249,244,243,229,246,233,221,248,233,217,250,219,198,217,222,252,228,219,199,243,197,179,169,137,79,100,235,242,251,251,242,252,251,233,253,235,219,208,244,226,249,244,211,193,145,205,222,226,222,234,243,236,223,233,242,231,246,236,246,231,226,249,243,190,144,98,110,78,24,16,29,27,28,40,21,19,9,29,37,18,35,51,81,67,60,72,82,40,110,244,244,227,241,224,219,253,244,247,254,250,250,247,245,235,219,246,230,211,138,215,250,255,250,208,234,241,186,160,194,193,231,200,193,198,183,243,229,169,134,126,77,63,77,81,132,196,135,81,70,130,105,107,130,138,115,63,55,66,62,88,65,50,28,64,89,48,45,61,40,34,222,245,242,246,245,243,223,231,226,239,219,243,255,244,253,242,237,222,224,251,224,230,253,252,230,187,239,249,233,150,220,217,227,245,218,205,239,201,184,217,233,252,236,124,52,57,41,105,94,94,82,80,64,36,30,35,40,31,54,28,74,48,75,237,233,236,248,226,220,217,213,238,224,221,182,86,132,208,207,202,228,233,243,238,222,225,244,216,222,225,228,227,226,220,253,222,222,237,216,228,219,193,229,210,207,210,227,201,217,231,214,199,176,227,180,193,214,216,224,206,205,201,194,200,222,210,228,204,222,216,212,202,204,215,213,182,222,207,219,243,220,148,106,116,119,73,79,66,46,123,161,117,124,89,54,38,35,124,151,
94,113,99,70,64,33,24,138,212,218,213,199,214,187,234,223,227,191,215,236,207,207,212,227,226,216,230,211,195,224,200,229,207,212,210,213,238,202,212,222,227,206,213,216,213,225,189,200,236,187,198,102,18,1,26,23,31,3,14,18,7,28,22,2,18,15,5,6,5,0,20,5,15,13,5,17,215,214,209,209,212,223,230,228,211,209,197,192,206,224,240,244,203,218,230,238,221,217,212,215,215,209,202,215,194,233,228,203,245,229,227,235,208,232,229,239,208,198,243,247,214,228,217,210,216,241,221,213,238,203,207,217,219,233,210,214,220,215,213,226,233,217,225,236,223,237,206,215,207,234,249,226,233,229,222,224,246,235,222,204,228,213,223,209,236,210,201,204,236,222,227,221,201,219,229,233,217,214,215,208,224,201,225,232,220,233,231,228,216,238,219,235,223,252,233,226,209,210,238,221,246,237,247,230,226,244,237,219,216,241,225,202,244,211,216,211,218,238,224,245,243,247,227,223,228,235,245,244,230,235,239,226,224,249,243,240,236,255,224,231,223,218,232,234,230,243,228,231,229,212,251,226,231,239,249,220,244,219,201,240,216,240,240,226,223,249,244,246,245,225,235,250,251,225,243,245,249,240,240,236,242,249,238,239,249,248,241,238,255,246,237,251,235,237,255,250,239,253,233,245,234,255,227,228,250,243,240,225,253,221,238,250,238,253,239,250,243,251,238,252,246,246,243,235,248,240,242,247,253,241,235,252,252,253,255,231,233,254,245,247,252,243,255,250,255,242,237,226,233,246,234,213,251,242,248,236,241,249,255,255,253,234,235,231,245,245,253,250,237,235,232,228,236,239,248,221,252,254,254,253,246,223,242,218,241,244,251,255,239,247,252,248,227,251,249,228,249,254,255,237,248,251,215,219,224,247,219,228,253,224,203,215,203,236,230,248,224,233,211,227,255,192,168,155,126,90,85,229,247,242,230,234,244,211,222,241,226,198,173,219,216,255,247,203,149,173,251,249,247,211,241,253,206,203,225,242,247,249,237,255,244,219,222,233,188,145,104,94,57,18,23,34,56,30,26,26,14,44,56,43,23,26,51,76,75,56,83,60,91,214,254,250,254,245,244,232,213,255,220,240,255,225,253,251,236,248,215,199,173,116,181,245,237,225,201,207,
210,167,220,234,242,233,205,207,219,223,254,249,199,115,107,74,71,65,58,106,174,144,93,107,153,131,106,140,134,108,79,64,64,75,73,75,71,68,40,39,48,67,63,6,78,233,241,240,219,252,243,219,222,228,242,239,247,244,246,241,241,243,252,254,249,233,251,243,239,182,174,231,255,226,174,173,232,198,247,196,206,234,208,172,212,235,248,239,132,101,57,63,87,106,119,115,55,46,53,41,39,43,48,22,17,68,33,81,251,240,250,252,212,221,230,239,234,250,239,163,68,154,198,191,229,224,225,227,206,226,208,221,215,242,207,226,236,225,230,222,208,206,235,192,221,222,200,203,212,213,222,233,222,225,204,210,210,214,217,213,213,203,203,197,208,200,214,215,210,213,234,209,215,212,204,203,216,192,213,214,202,201,183,239,246,197,160,110,88,112,94,91,73,69,145,136,130,132,91,51,63,61,123,129,143,134,89,90,57,46,41,192,230,222,225,217,222,223,197,205,193,209,188,230,207,216,217,213,220,218,232,193,217,214,202,214,210,219,195,201,213,211,232,199,217,204,206,216,233,216,221,216,206,210,203,105,7,7,23,14,11,13,8,42,8,9,18,36,9,7,0,37,19,16,24,0,16,22,9,15,203,220,205,211,230,207,241,207,207,196,214,205,223,213,224,219,229,211,206,216,207,230,205,217,199,221,210,218,202,211,235,240,238,227,236,223,209,229,201,203,229,221,210,216,225,225,219,216,229,220,211,214,216,230,229,215,219,231,232,223,217,221,218,202,235,216,203,218,231,212,198,231,206,214,202,220,236,221,243,235,227,219,220,223,229,232,235,203,218,202,214,213,231,208,226,230,244,220,234,215,226,236,242,212,208,217,230,242,229,223,233,249,221,229,217,223,227,220,221,231,224,209,216,233,218,230,228,233,224,239,226,235,241,243,225,240,230,233,206,208,202,222,215,237,245,222,239,235,211,230,226,241,240,233,219,214,233,227,219,238,221,237,236,249,238,239,215,229,228,239,221,245,241,220,245,233,236,224,230,228,214,248,234,240,234,247,240,226,246,246,212,249,214,248,247,248,251,241,243,243,228,252,246,248,240,228,249,242,239,237,228,224,244,253,227,251,251,237,252,216,241,250,245,238,245,244,232,243,246,230,233,251,221,204,251,245,244,251,242,241,255,2
51,253,242,242,230,253,245,247,233,255,255,249,244,226,249,209,244,234,246,214,252,252,253,247,220,245,245,240,231,244,244,242,254,219,253,234,244,250,254,250,234,254,255,251,238,248,245,222,250,254,252,241,243,254,242,244,224,230,235,236,235,224,241,242,241,255,223,225,244,233,234,240,232,238,235,214,249,249,237,239,244,253,245,255,235,254,232,224,251,232,234,248,202,208,234,222,254,239,202,237,221,211,215,242,146,132,132,120,85,98,230,252,246,236,227,244,240,226,241,220,196,184,224,251,234,215,168,195,201,251,247,245,206,237,255,208,212,233,250,239,253,243,241,252,233,233,234,194,131,101,56,24,10,30,30,17,20,48,21,27,32,33,14,7,24,58,81,85,75,43,38,155,240,250,255,224,242,236,247,251,243,233,255,231,254,218,237,242,252,246,227,214,116,178,238,255,234,204,198,244,216,252,251,253,224,182,225,208,202,236,250,188,158,134,111,51,92,58,114,202,172,90,82,131,109,74,80,147,102,108,64,49,63,60,79,91,82,68,78,87,54,81,36,52,224,253,240,254,225,245,239,225,243,246,247,250,252,239,250,246,217,235,234,242,208,231,234,252,228,157,206,219,224,188,143,201,230,209,233,183,209,224,187,198,225,252,215,123,113,71,108,106,111,108,67,60,67,59,41,19,37,46,31,15,52,11,79,230,236,238,242,228,212,236,231,226,225,206,146,64,145,221,197,219,222,235,245,229,241,233,240,227,210,232,218,201,208,205,226,216,206,238,196,209,212,227,215,224,220,225,219,195,209,225,216,202,221,180,193,219,203,198,196,205,221,195,217,194,176,224,191,216,205,204,225,216,183,198,217,205,238,192,223,223,207,188,147,125,90,99,105,89,81,135,144,130,141,73,55,78,99,129,103,138,122,105,104,77,29,105,225,199,203,203,210,214,221,214,192,207,239,203,229,206,220,186,218,198,218,214,197,215,201,225,213,235,238,198,210,225,201,215,200,226,226,185,203,208,215,204,198,194,222,221,116,7,2,10,19,10,11,10,4,17,23,25,19,21,2,0,5,36,28,14,8,0,2,13,4,233,231,207,200,226,193,215,229,209,218,215,195,215,221,211,205,218,220,219,212,215,202,225,227,233,224,214,236,235,208,237,211,211,213,232,210,241,193,223,201,226,216,228,218,208,225,220,2
04,225,214,238,233,226,242,226,219,225,231,214,220,208,231,217,210,229,218,237,206,227,231,203,229,236,208,223,215,241,241,190,208,227,235,222,238,209,227,246,234,217,222,238,229,235,199,229,226,219,235,217,225,219,215,219,217,231,221,218,189,240,221,226,224,230,232,225,216,243,238,238,239,229,229,224,206,212,215,221,236,229,215,237,246,228,233,216,222,224,228,218,236,237,225,225,224,221,245,246,230,245,244,243,221,245,246,231,246,243,219,238,251,236,226,218,230,246,239,232,246,235,249,230,253,239,237,245,249,245,253,244,254,245,239,242,249,237,245,252,246,236,252,243,242,251,246,246,245,251,246,253,219,244,245,243,254,245,236,248,247,252,253,234,255,254,248,246,255,240,240,249,252,239,250,255,240,252,228,254,241,246,237,232,234,245,250,254,248,223,228,242,223,252,254,232,251,250,235,244,216,242,235,232,245,229,228,243,250,255,239,220,243,244,246,247,214,239,237,253,250,233,239,252,255,248,229,214,246,239,244,243,226,231,248,254,239,252,216,240,246,255,242,255,241,254,238,243,252,246,242,237,216,252,243,233,254,227,244,247,238,236,247,250,243,253,249,239,245,249,254,236,234,237,243,236,244,238,237,248,241,236,245,243,219,236,220,221,236,243,244,205,216,238,252,213,207,237,178,137,153,139,81,120,217,247,230,248,230,243,244,232,253,242,210,170,235,254,245,203,180,212,234,241,254,243,200,240,249,225,177,249,249,226,249,248,255,228,246,219,248,170,99,67,32,8,26,17,37,30,20,25,31,26,48,47,54,45,49,49,67,51,42,61,33,175,235,247,232,223,241,236,251,246,228,235,239,234,236,255,245,237,231,255,244,235,175,152,252,251,201,202,249,243,245,255,249,242,163,74,151,170,220,240,254,202,118,129,91,52,89,122,149,146,112,57,47,70,72,36,97,119,146,99,68,46,80,65,62,83,86,62,89,93,35,52,3,60,217,237,238,248,251,245,254,242,213,243,250,243,253,241,242,243,247,205,233,241,247,234,241,255,220,184,198,242,242,208,169,182,219,168,219,210,218,200,217,187,210,252,245,99,123,55,66,72,71,43,55,10,42,35,34,26,37,30,18,29,47,15,87,230,235,235,254,227,242,254,238,237,236,254,147,118,179,223,188,229
,212,236,241,251,229,227,218,201,206,239,230,227,231,224,232,232,216,203,202,198,226,216,186,214,219,215,208,222,234,220,219,223,212,213,235,221,215,232,199,234,219,212,206,194,233,200,196,219,203,203,203,208,186,218,224,183,194,212,222,235,213,218,119,124,106,99,92,74,75,135,124,123,107,65,69,97,101,128,128,124,129,120,122,95,22,51,160,180,228,226,209,238,199,221,228,214,206,220,219,221,196,201,201,200,201,210,217,220,201,239,235,196,189,207,196,198,194,199,202,196,220,213,200,223,229,239,221,206,195,195,108,13,5,22,3,2,7,16,3,15,3,0,0,1,19,27,2,0,14,20,3,12,9,22,11,206,232,204,235,218,218,217,184,209,227,220,217,208,218,232,232,215,204,205,206,224,207,201,220,224,218,206,214,234,235,238,213,230,220,205,217,248,206,215,232,229,212,219,215,234,211,227,217,211,204,222,212,221,231,213,205,208,234,223,233,219,220,213,202,219,221,227,232,241,213,221,210,208,214,243,239,228,209,196,237,208,231,196,215,240,215,211,221,198,232,225,248,229,230,230,218,199,233,210,227,237,218,216,218,206,231,247,213,222,224,225,227,213,225,243,229,223,215,233,223,229,217,216,225,225,238,228,214,221,223,227,241,231,223,238,241,210,225,234,224,231,238,231,221,244,245,234,235,237,216,241,238,242,238,219,226,226,219,250,241,227,246,219,234,234,249,227,223,242,250,228,230,230,227,244,216,245,218,237,225,230,250,244,235,241,242,235,239,250,218,254,244,232,230,231,239,240,236,246,236,235,245,230,243,238,231,241,239,244,226,240,255,255,225,246,239,223,251,235,218,234,248,240,242,241,226,255,237,247,253,220,236,252,255,239,255,243,247,227,242,255,236,255,234,243,250,223,247,239,242,255,252,255,252,236,237,251,245,240,249,254,251,246,229,245,248,241,255,233,249,242,221,247,230,226,251,253,253,232,250,246,245,239,241,253,252,254,255,255,242,221,220,247,247,236,250,252,238,238,240,238,238,243,224,246,248,235,249,233,253,222,236,247,251,222,243,253,250,231,241,255,228,241,243,252,255,251,237,237,246,204,253,244,219,235,231,179,223,220,207,250,237,224,214,249,206,150,152,134,83,88,221,230,250,241,255,255,
225,240,251,227,223,209,221,249,237,215,198,206,241,252,255,229,188,231,249,238,234,255,254,243,251,247,201,236,251,217,249,167,121,74,68,66,11,3,3,40,25,14,36,41,74,71,49,27,44,45,10,23,55,100,134,239,237,254,242,250,251,232,245,221,223,237,255,235,255,249,248,245,252,243,226,255,174,185,227,228,198,230,247,255,230,252,250,233,155,96,158,184,221,253,246,147,113,57,51,21,105,145,167,147,69,31,34,28,68,66,68,120,98,59,68,72,48,75,68,91,49,82,117,102,53,91,16,56,226,240,240,230,244,250,240,252,222,251,255,214,252,246,234,241,249,246,219,253,236,204,241,251,239,184,192,235,231,201,181,187,232,184,189,252,182,212,249,234,213,246,193,93,56,11,3,13,8,12,9,5,28,63,29,46,33,42,34,30,54,28,92,237,225,254,212,244,242,255,226,247,234,235,140,115,213,197,198,239,221,223,206,243,198,223,231,224,230,227,202,233,234,208,216,227,233,233,180,223,217,219,225,224,227,222,231,227,208,197,225,186,204,190,229,215,214,230,204,199,214,192,224,199,226,216,218,241,187,204,220,219,196,205,206,207,198,199,194,204,207,137,161,99,103,79,90,78,77,121,117,142,114,75,61,86,116,144,98,116,131,121,99,77,44,40,106,163,224,220,211,209,211,232,231,212,213,210,226,204,215,222,236,226,207,219,228,226,194,231,211,238,225,199,207,193,208,232,185,206,212,232,207,220,203,222,214,205,214,203,113,10,0,12,42,7,5,17,2,15,27,16,16,10,0,8,4,6,6,13,27,18,26,21,2,237,228,231,188,236,221,242,227,217,202,232,234,217,235,223,183,204,194,218,217,223,196,201,220,199,205,232,196,230,227,200,219,217,225,231,223,218,228,214,228,221,241,214,213,216,221,242,239,213,198,189,227,212,223,226,213,225,232,225,229,195,209,220,234,209,223,210,210,233,214,216,216,211,233,212,201,225,233,204,225,215,207,227,218,196,201,222,231,227,225,236,210,218,249,230,238,195,214,230,225,216,218,220,222,217,242,214,219,230,226,220,236,205,208,216,227,216,226,231,212,232,228,222,222,212,229,214,210,195,224,216,217,228,217,232,219,216,209,242,232,228,218,249,227,243,232,234,208,230,237,230,237,236,243,231,239,236,214,214,246,217,245,219,242,225,224,23
6,233,240,229,245,242,233,248,230,239,247,252,252,237,233,238,240,232,241,239,246,246,250,252,235,215,249,244,247,237,242,238,253,255,234,240,233,247,248,252,248,233,237,246,235,252,222,254,248,247,245,246,228,228,244,246,236,228,253,254,249,255,242,251,248,233,255,242,227,249,246,245,241,248,253,241,231,251,251,251,253,236,238,245,242,234,239,236,244,250,249,246,238,255,251,252,236,255,235,230,231,246,240,255,253,255,248,235,229,255,230,249,252,251,252,247,251,251,251,247,249,253,254,241,246,238,243,246,231,240,235,248,235,248,254,230,244,239,237,250,228,252,229,247,231,211,255,255,251,240,255,239,234,228,225,240,252,233,233,222,237,252,233,252,239,231,241,207,213,215,241,245,251,240,255,231,231,202,243,192,176,145,146,66,108,235,246,249,255,246,238,243,233,246,219,199,169,184,245,251,232,205,186,236,237,250,217,208,227,243,244,208,226,255,250,238,255,240,246,237,242,209,220,189,168,169,216,220,232,179,87,30,52,40,45,55,92,41,40,75,85,115,81,182,205,186,246,251,241,238,230,255,232,235,249,233,235,242,244,250,248,243,246,245,232,223,248,184,140,217,184,192,243,242,223,245,245,248,179,81,176,216,188,245,253,254,181,133,128,92,99,165,199,200,212,140,51,75,107,108,108,119,126,70,76,73,51,66,49,76,92,38,86,92,49,69,65,7,70,225,252,229,255,248,243,234,252,235,240,237,250,246,246,246,255,222,219,250,254,244,240,228,252,254,179,164,251,225,238,203,162,152,148,210,232,189,160,232,222,199,229,198,102,71,72,74,66,60,46,38,37,53,38,32,33,52,41,50,28,42,8,127,239,243,253,232,229,245,232,255,247,243,254,148,122,221,209,202,224,222,234,225,209,219,224,226,234,209,230,222,217,236,226,216,205,243,229,210,207,232,228,219,229,199,230,204,198,234,214,226,232,231,211,213,205,213,200,212,226,205,186,209,220,192,194,199,211,192,206,199,211,195,211,204,221,205,223,201,216,180,146,130,143,121,86,86,76,85,122,120,128,120,81,98,95,113,115,99,94,122,99,90,57,28,82,211,190,208,219,219,218,219,223,191,204,222,206,198,204,200,215,212,213,228,215,204,224,224,237,204,202,214,201,194,215,200,219,21
8,194,215,205,202,227,192,186,203,215,198,179,115,10,16,11,13,18,34,4,5,10,10,25,4,11,14,7,24,4,8,10,43,17,15,7,39,205,219,234,211,223,220,210,235,204,193,224,214,228,214,212,244,225,211,221,218,203,226,228,214,230,219,202,229,241,219,197,204,212,228,240,249,241,228,232,227,215,217,224,216,201,219,231,232,213,189,224,212,206,212,230,200,208,216,220,230,210,212,179,204,208,229,211,221,214,202,222,215,208,217,235,212,238,200,218,227,232,220,241,189,228,227,216,219,234,217,228,216,231,223,208,203,226,212,223,243,228,203,206,216,210,251,208,232,207,236,228,236,221,251,205,200,211,230,219,242,228,207,242,217,235,205,225,226,212,242,212,210,243,215,217,235,241,226,232,212,234,245,220,242,230,231,227,231,211,255,223,245,220,218,221,243,227,231,227,234,244,233,235,233,219,238,224,230,235,231,254,238,242,216,242,228,237,245,251,228,228,220,242,236,241,245,239,244,252,233,237,239,228,219,246,233,243,232,225,232,245,249,233,240,242,255,236,232,232,232,249,240,232,225,240,231,224,239,242,235,225,242,243,235,242,246,225,234,239,248,250,250,255,252,245,247,249,249,255,237,241,230,228,255,253,246,251,252,220,249,250,253,235,238,255,250,245,239,252,231,255,254,254,244,241,237,238,235,231,238,224,249,243,244,233,249,253,228,240,232,245,253,216,237,235,232,236,254,245,212,231,218,251,229,227,234,253,231,246,244,239,246,232,253,231,254,254,254,247,229,241,213,223,239,249,247,244,255,253,236,249,252,228,243,243,248,255,232,241,244,231,208,252,215,238,239,255,253,194,218,221,229,216,237,245,174,134,122,118,84,94,223,254,253,243,242,253,226,249,243,253,217,170,209,223,248,201,184,229,244,242,236,228,238,241,255,227,134,172,239,213,238,248,253,253,255,255,248,247,218,254,237,252,255,254,199,111,11,84,108,58,51,74,74,65,197,238,224,144,221,164,146,250,245,229,224,222,246,246,250,254,241,250,251,247,241,242,233,247,242,251,233,252,196,156,183,159,197,238,245,254,226,169,218,173,128,225,212,212,230,255,247,251,194,142,139,170,191,217,202,226,136,96,114,149,159,148,150,151,120,95,71,67,73,13,
69,72,65,47,99,87,40,41,19,61,227,231,240,245,253,242,235,250,246,228,244,232,224,234,255,246,244,245,235,240,249,238,200,255,246,179,168,198,225,215,191,80,144,202,196,219,171,180,248,215,130,197,206,242,219,245,244,233,252,246,255,253,234,145,96,99,85,54,31,22,11,34,132,247,212,239,232,251,250,239,254,231,209,222,109,165,240,202,215,233,228,223,231,243,245,199,226,211,199,215,230,205,230,224,246,229,223,211,214,228,215,205,198,195,218,195,198,218,213,201,205,197,229,188,193,209,223,217,197,208,230,193,201,240,209,213,179,183,209,200,228,195,213,187,208,189,192,202,204,229,214,137,121,120,116,89,72,99,80,118,144,126,130,102,121,146,135,130,99,133,104,125,98,79,18,102,229,213,244,214,218,224,225,214,232,187,228,203,224,227,211,199,236,232,215,205,223,207,198,225,209,185,198,211,206,202,214,224,207,196,197,197,201,191,217,195,207,242,215,198,105,9,1,7,8,3,1,12,17,20,12,24,10,8,9,9,7,17,14,15,18,7,36,18,11,213,216,181,233,236,211,210,226,213,240,203,214,217,202,212,216,225,211,212,221,221,237,224,218,214,237,231,228,207,218,202,222,197,223,225,243,221,226,211,222,194,224,224,233,202,231,208,221,217,228,246,240,226,226,214,224,205,198,247,232,211,219,223,235,203,225,207,219,225,222,210,223,208,218,231,214,226,207,227,211,239,208,227,223,213,217,213,182,210,219,205,226,218,232,224,221,235,204,222,231,218,221,214,220,216,217,225,229,207,219,226,215,217,234,230,223,235,228,205,213,234,244,227,222,221,239,234,214,226,218,229,232,206,233,189,229,234,237,225,244,245,217,227,241,234,244,238,248,217,248,215,229,233,237,224,218,243,249,240,213,232,239,247,246,244,250,214,244,211,219,232,239,237,232,240,252,246,247,226,243,255,252,222,231,214,228,228,243,224,238,242,228,229,244,253,251,242,240,247,253,220,226,252,233,251,244,222,239,246,254,228,230,235,252,255,242,249,245,225,221,255,235,241,226,250,244,237,249,253,241,252,250,252,254,246,251,253,252,236,240,250,243,219,239,253,246,249,245,235,252,239,253,253,228,223,253,250,242,225,253,238,249,248,255,248,254,254,242,255,249,22
4,249,252,243,255,255,208,251,246,255,242,225,248,249,252,238,248,239,239,231,224,250,249,233,252,238,225,240,249,235,235,228,239,235,247,250,246,241,250,255,250,223,251,234,231,242,229,231,234,234,247,234,251,250,252,254,246,248,242,227,232,254,255,242,237,226,173,171,184,232,224,213,196,226,216,175,135,149,163,66,102,242,247,247,240,231,235,211,244,234,245,245,210,228,243,239,202,174,228,248,218,238,223,169,252,239,187,89,72,210,236,242,247,227,233,254,227,246,248,234,226,211,251,243,249,169,86,23,67,121,84,66,64,56,67,234,255,255,178,101,13,73,221,236,255,203,221,238,232,248,246,238,241,232,225,255,246,235,250,247,239,215,206,194,164,186,160,219,246,237,235,128,166,222,136,158,237,203,254,249,252,253,250,175,115,111,122,85,88,79,94,87,81,61,45,56,50,76,50,64,69,87,75,77,36,43,78,59,71,99,93,55,38,2,50,220,239,247,230,252,245,240,240,237,227,248,246,250,244,237,253,227,243,207,236,238,232,217,255,252,209,177,228,241,217,204,143,186,218,236,233,238,217,252,218,150,219,236,215,245,250,252,255,250,252,252,250,255,187,118,132,88,48,19,28,33,25,133,230,204,233,235,242,248,241,248,225,162,176,112,164,227,214,205,204,224,214,227,215,213,243,207,220,207,216,226,199,219,207,223,207,227,228,213,208,196,215,211,214,180,219,194,229,185,205,188,212,212,204,206,212,214,222,207,193,205,224,224,216,198,233,216,198,209,223,225,205,195,222,211,199,209,214,218,224,177,158,152,130,136,94,82,69,93,138,167,157,141,114,106,102,111,102,90,104,154,122,110,42,30,197,208,218,225,237,230,213,212,221,198,223,221,198,215,214,211,235,217,200,204,233,188,201,220,202,200,204,224,249,192,207,198,232,216,235,220,219,189,201,205,204,206,221,207,208,118,6,7,0,4,2,7,19,0,7,6,16,16,24,7,7,1,5,4,21,15,14,30,37,14,207,221,219,228,235,220,240,214,218,206,201,208,217,248,212,211,225,214,226,214,205,206,219,204,210,213,208,223,200,231,214,230,188,207,199,249,217,205,237,221,217,208,234,213,197,203,207,222,223,230,187,228,237,236,223,227,196,198,213,214,233,214,217,214,234,225,244,210,221,225,224,204,225,207
,211,196,219,216,218,210,206,206,203,191,211,228,208,221,215,249,220,242,239,235,240,215,250,211,230,227,230,220,231,196,219,222,217,215,213,213,222,234,237,201,212,224,227,217,249,226,220,227,219,216,218,209,224,206,200,222,204,230,243,193,234,226,214,224,233,213,190,206,244,232,232,234,228,247,215,229,230,253,221,222,250,237,240,234,216,227,227,239,219,249,236,251,217,232,236,248,243,239,236,218,252,232,245,250,236,240,252,231,252,220,255,236,251,233,224,249,251,230,253,236,241,245,221,243,247,253,235,253,246,246,253,239,240,254,218,250,240,241,249,223,248,245,236,236,241,228,233,252,242,213,243,249,240,254,224,248,246,248,254,228,247,251,228,255,255,243,253,241,251,238,248,238,252,244,251,249,255,249,241,244,223,252,238,243,254,216,228,251,224,254,243,244,228,229,255,233,247,255,225,240,245,255,235,252,247,244,252,229,245,246,235,234,231,224,252,234,222,255,244,249,248,248,246,243,219,225,237,248,227,239,229,228,226,246,248,244,240,223,232,245,250,244,252,254,248,245,253,235,255,249,255,238,233,248,236,245,231,233,252,235,216,176,209,223,198,231,244,233,213,231,247,205,153,155,148,117,103,252,244,247,243,214,238,246,223,249,255,230,213,193,251,231,200,230,217,246,219,252,186,112,195,200,164,50,43,168,245,239,249,252,227,232,239,241,254,213,223,188,220,250,214,146,99,26,50,123,63,68,73,24,22,216,252,217,66,22,24,91,203,236,235,198,242,228,229,251,232,228,235,239,249,235,255,239,241,247,255,208,229,211,202,155,123,213,218,136,172,190,255,255,160,191,228,232,250,249,243,223,248,140,119,80,36,7,3,0,59,114,97,47,30,40,22,44,70,49,55,28,68,62,56,73,101,65,52,93,111,68,52,21,59,217,246,240,234,242,255,255,243,224,211,215,247,242,226,242,255,254,239,231,248,244,238,222,222,254,211,181,195,244,241,242,218,180,220,206,241,218,161,242,213,212,225,212,246,248,246,252,214,235,247,247,246,254,175,128,132,83,67,13,45,11,27,201,250,241,238,220,249,250,254,239,217,192,204,130,187,217,190,202,220,242,202,218,230,218,218,216,216,228,196,237,219,200,229,225,210,203,220,223,189,228,1
99,219,211,185,221,234,209,225,196,199,214,217,206,198,185,168,217,218,194,217,217,193,213,207,214,208,206,191,212,217,190,202,207,221,219,223,218,213,205,211,163,131,107,109,97,75,90,110,148,119,122,120,91,73,64,64,68,100,133,139,102,107,69,39,163,209,218,205,222,206,218,215,202,204,231,235,198,216,204,189,199,211,227,205,210,217,239,207,204,228,183,239,226,193,224,232,196,207,214,222,205,191,207,196,216,206,191,220,208,133,5,20,17,9,6,27,14,0,19,8,12,12,7,23,22,9,5,30,18,11,14,15,20,7,233,217,214,211,228,221,224,233,219,216,206,230,235,228,201,204,246,222,225,221,218,199,205,224,223,194,203,223,217,201,192,232,237,225,216,204,220,220,224,221,218,236,233,226,231,191,224,216,207,214,211,229,208,226,211,220,242,206,210,216,206,208,225,219,203,201,212,234,210,231,231,213,220,213,212,209,207,205,202,217,221,235,234,238,219,233,210,206,224,204,209,237,209,214,198,233,226,219,207,224,193,206,229,238,240,235,230,219,220,215,240,225,198,217,230,212,234,215,240,231,225,219,228,228,209,217,206,241,237,233,224,208,229,219,199,218,225,224,220,212,233,216,231,213,227,237,226,242,241,236,229,223,252,243,196,230,223,243,242,240,240,236,217,229,223,230,247,220,240,244,233,224,244,251,244,242,250,245,241,243,230,255,233,227,237,245,236,243,225,248,243,205,243,254,248,236,252,230,231,240,250,240,251,218,241,243,252,255,221,231,219,237,239,244,229,249,255,247,240,239,254,244,236,248,249,255,253,240,229,251,240,237,242,246,255,251,255,249,245,246,245,245,229,255,243,234,246,248,250,252,222,240,253,253,217,244,255,244,238,246,240,227,224,235,231,245,244,236,225,251,252,238,231,217,248,252,247,255,252,236,242,252,235,238,231,245,234,254,238,242,247,237,230,245,233,245,249,226,227,245,241,241,234,240,254,219,250,250,236,245,231,239,254,238,239,250,253,252,242,225,244,245,238,245,247,245,238,221,239,227,199,235,228,190,221,222,244,231,221,235,248,228,213,233,253,157,165,142,126,119,154,233,246,255,249,247,247,241,246,240,245,241,217,218,230,221,238,195,233,235,240,253,121,35,174,196,193,9
6,15,107,209,230,254,242,242,254,232,218,245,198,209,173,192,233,163,133,93,0,63,78,71,47,52,47,20,200,249,135,34,2,28,147,248,250,239,194,244,255,254,246,231,227,242,234,236,238,230,230,246,247,253,196,146,165,192,205,145,200,200,199,235,222,255,210,81,209,237,178,233,250,253,254,242,233,94,90,7,37,30,22,24,71,34,48,49,44,48,62,92,50,30,47,54,64,30,92,70,68,39,111,106,53,42,55,78,211,252,246,237,255,231,241,250,233,220,243,236,248,235,242,240,242,247,223,228,253,214,233,217,252,243,158,183,211,238,227,215,159,209,223,244,221,176,206,214,224,218,210,253,255,245,242,241,236,238,235,255,243,143,86,109,85,34,52,54,20,33,200,240,225,255,226,226,250,235,221,179,192,171,152,224,206,203,198,205,229,207,251,211,232,196,213,235,217,211,228,216,200,202,237,223,225,217,207,210,210,204,220,196,183,205,198,215,193,215,220,209,203,201,200,201,202,210,192,200,197,224,208,210,203,231,181,201,199,200,213,206,217,212,186,202,206,194,206,220,173,140,133,75,89,89,79,115,126,131,147,97,66,59,64,49,61,66,87,84,107,82,67,43,4,138,214,218,231,230,207,204,211,226,218,229,217,198,223,226,194,216,222,206,216,208,191,198,214,206,200,197,211,197,208,207,224,205,224,210,228,234,177,239,209,223,232,211,187,206,113,2,2,8,13,18,0,6,1,31,18,14,10,6,9,9,17,23,0,12,8,5,0,14,7,226,220,233,210,224,224,211,207,198,209,222,217,233,220,189,217,226,203,224,188,214,214,205,231,223,207,216,212,209,227,203,212,192,220,227,236,210,213,210,240,221,223,231,224,224,212,220,214,201,219,198,236,214,226,214,211,231,225,241,224,216,225,236,222,221,232,227,224,217,222,196,212,226,215,229,212,218,185,212,219,208,215,207,228,212,218,212,209,218,212,228,220,219,223,235,215,212,244,202,227,218,240,237,230,247,202,228,206,203,211,181,234,231,227,224,198,193,230,208,249,222,228,219,228,234,238,208,208,219,233,209,216,242,244,205,236,248,239,212,223,246,227,224,225,249,243,219,231,239,224,229,244,229,220,241,212,244,233,216,243,246,250,248,220,229,239,234,233,236,236,247,251,254,247,238,208,238,230,245,248,231,235,231,246,251
,245,234,254,244,232,233,221,247,246,222,243,220,224,244,247,253,253,247,249,236,243,234,229,241,250,239,250,248,250,232,231,240,255,232,252,228,231,250,254,246,236,248,239,245,242,225,237,247,249,237,246,221,214,247,234,227,242,249,248,247,252,246,248,248,243,255,242,240,249,255,233,251,247,253,240,229,234,225,248,230,254,236,242,235,233,237,246,227,227,250,250,248,227,238,250,252,245,245,243,239,246,255,232,228,249,233,252,243,245,238,248,249,252,239,245,248,224,223,233,247,255,253,243,241,229,250,240,237,234,233,237,251,244,237,224,248,244,253,248,255,238,233,252,248,209,189,230,227,215,216,237,236,232,183,217,222,200,206,239,238,167,161,124,97,114,101,143,218,233,235,237,250,233,242,252,245,244,211,224,241,201,173,135,188,235,236,224,122,59,116,193,202,152,74,151,236,246,255,235,230,249,250,208,242,220,188,150,224,238,152,135,79,5,77,79,80,90,76,24,21,198,250,141,39,72,206,233,239,249,241,167,246,255,254,253,231,242,251,240,247,251,230,226,249,237,238,160,76,68,160,138,201,252,243,253,226,230,246,152,102,179,188,211,251,247,240,249,232,230,157,81,50,51,32,26,92,76,86,66,33,38,65,80,80,33,59,41,37,45,46,82,62,91,70,108,95,48,80,16,51,202,227,255,232,237,253,242,235,254,215,238,243,245,249,243,220,237,249,233,228,229,236,233,239,251,252,186,169,212,232,241,210,174,192,242,228,201,209,180,207,207,219,193,229,255,246,223,235,228,233,232,251,249,149,100,90,86,59,35,38,19,50,216,243,230,241,255,254,237,195,226,207,210,178,179,208,224,212,196,218,219,214,225,222,215,208,208,224,184,217,233,234,220,237,227,206,226,211,211,193,215,219,209,219,215,187,204,207,215,190,200,185,201,197,181,187,202,210,213,193,223,206,202,161,217,192,226,213,224,186,209,198,200,224,197,219,199,203,206,220,175,115,114,111,100,75,61,123,159,136,147,145,101,67,64,83,66,90,92,116,95,66,45,35,164,225,209,216,213,222,184,214,184,213,208,240,229,231,229,203,214,223,219,219,227,218,217,231,228,225,227,226,221,197,217,210,219,213,199,217,208,212,208,194,210,208,200,228,220,228,110,21,0,8,0,3,2,11,24,1
7,25,33,2,7,12,1,12,18,14,26,14,0,12,2,31,211,224,203,218,246,194,222,213,202,225,220,213,218,218,204,207,215,231,178,226,213,231,216,225,207,206,210,215,204,211,236,218,206,206,217,213,198,215,219,209,213,205,232,201,208,204,213,205,206,233,222,230,228,217,200,249,204,192,238,209,213,219,192,231,238,235,235,219,230,208,215,218,216,243,196,195,198,206,230,202,212,246,211,194,230,220,222,220,236,227,223,215,211,229,231,221,224,224,234,227,225,219,236,227,202,216,219,234,190,218,199,202,205,225,207,226,236,224,207,225,229,217,218,214,220,229,228,236,230,223,206,222,219,243,250,223,219,218,218,225,230,235,232,215,227,215,220,222,224,236,239,232,235,236,236,203,223,234,220,231,220,251,227,236,232,239,241,232,251,246,236,240,250,237,234,237,247,223,253,248,233,251,232,254,245,229,240,255,249,245,252,240,225,247,244,249,237,236,239,247,237,243,252,228,242,245,250,246,242,247,253,236,237,255,245,239,253,253,249,241,229,236,250,244,255,231,255,239,237,241,240,240,238,253,216,235,238,237,242,235,252,249,253,250,246,230,241,252,252,255,241,224,249,239,238,246,242,255,252,233,232,237,249,254,237,249,246,246,243,234,249,224,210,252,231,236,230,250,239,199,234,254,249,248,255,232,234,244,254,237,242,255,241,230,241,251,235,255,227,232,249,246,252,236,253,251,252,253,246,246,255,237,245,235,233,237,228,244,224,240,247,252,242,240,254,249,245,241,252,241,216,205,230,246,249,198,219,152,197,232,219,204,209,242,252,171,128,155,124,73,60,67,73,149,248,253,245,251,253,255,233,252,231,201,112,116,119,43,114,203,228,233,176,48,72,154,228,194,150,205,236,241,246,250,245,254,225,243,227,174,179,196,240,247,184,154,67,14,89,84,64,71,70,41,29,223,247,224,72,196,240,230,238,250,231,175,247,246,245,255,238,230,236,225,238,249,242,230,255,239,246,205,36,16,46,99,232,237,240,223,120,198,230,131,177,232,221,249,246,253,242,238,255,228,207,141,114,44,36,89,143,92,75,53,31,41,50,71,54,42,70,47,51,9,65,84,62,85,53,115,105,95,85,50,62,218,240,235,250,255,255,243,230,250,226,227,252,243,241,236,251,2
55,253,234,213,233,244,226,211,246,246,196,154,197,241,238,218,176,163,239,211,182,231,195,160,212,230,197,176,214,225,232,240,239,249,239,252,233,127,129,72,76,61,19,26,36,84,244,255,232,245,224,224,215,189,219,213,200,187,184,228,224,200,198,205,219,206,235,206,218,213,220,221,221,209,230,221,217,207,207,215,212,214,189,220,217,220,211,217,208,224,220,217,224,208,232,189,191,210,209,181,178,185,204,208,195,206,198,211,220,192,200,227,202,231,189,224,208,192,191,230,221,228,204,215,199,176,127,119,120,99,132,181,168,166,160,161,119,105,103,69,84,80,136,110,82,88,64,110,215,227,233,207,233,196,215,215,215,238,221,214,210,225,224,223,215,225,225,203,185,207,215,198,179,193,218,219,211,224,234,213,209,188,215,189,207,219,211,204,210,232,242,212,198,192,113,5,2,3,30,6,22,21,13,23,12,22,20,4,8,0,15,7,0,8,6,44,25,13,13,230,208,236,210,203,220,220,242,231,228,195,219,210,213,238,209,230,205,203,206,208,210,231,211,208,207,210,222,234,215,206,214,205,214,236,218,240,218,216,215,219,199,213,236,218,240,218,216,213,212,250,210,232,210,213,210,208,249,204,221,234,217,215,223,207,208,206,220,221,215,250,237,206,239,210,208,206,212,212,233,235,252,219,216,208,218,225,228,225,217,201,218,221,220,228,234,207,215,216,241,205,214,227,215,200,226,228,237,213,200,236,218,224,202,200,241,234,224,219,229,214,222,228,215,210,238,230,231,240,201,219,233,245,240,248,240,235,218,214,196,239,229,229,229,222,206,231,253,231,235,226,243,228,211,239,230,240,228,222,247,246,231,226,228,240,242,221,255,253,241,240,238,224,215,233,246,223,228,243,235,249,238,255,244,233,253,238,232,235,218,250,248,244,246,242,241,250,240,232,242,236,219,227,247,237,237,244,239,227,247,252,247,235,227,234,245,219,255,255,237,224,252,223,247,244,226,250,246,224,229,247,254,240,250,250,242,238,246,220,231,255,253,247,246,241,230,237,246,248,236,243,230,246,246,245,236,221,236,239,234,239,246,244,222,253,232,231,255,252,243,243,239,248,239,255,234,255,247,247,255,227,241,254,227,237,231,247,246,245,236,232,249,252,24
6,243,236,231,232,219,252,229,249,255,247,243,240,247,238,245,249,248,255,236,235,242,243,233,252,236,239,254,242,246,239,220,253,255,223,250,216,212,246,209,248,210,169,244,206,224,225,242,206,212,252,253,198,170,178,115,74,61,43,24,66,193,231,249,245,234,236,249,253,231,202,60,108,104,1,54,213,255,238,165,130,145,197,232,220,173,212,244,241,253,226,217,238,241,236,231,210,180,170,222,244,163,129,53,1,91,118,109,55,52,38,52,217,248,185,164,246,247,243,236,240,226,184,244,245,218,225,227,252,239,233,229,247,234,226,243,242,241,160,58,66,40,82,234,248,231,214,166,244,228,174,232,238,197,250,254,233,251,232,242,241,238,233,84,57,13,54,107,75,48,46,78,34,57,78,38,57,79,66,18,1,24,101,78,57,34,109,115,68,61,47,57,184,250,251,249,239,255,242,230,244,238,212,233,238,255,244,230,251,250,254,214,234,243,247,227,250,246,215,150,197,244,245,212,164,125,166,174,150,242,207,142,188,237,172,193,200,239,247,239,228,246,253,255,224,145,91,82,87,54,54,28,22,114,237,222,220,239,247,232,181,230,219,216,207,141,205,232,220,225,197,198,229,214,216,207,200,248,204,232,195,203,208,215,221,216,217,227,196,227,203,203,200,211,186,202,217,224,221,180,220,211,206,208,203,211,218,187,186,215,215,209,212,219,223,201,215,202,220,218,193,193,201,219,231,207,183,207,200,203,212,238,204,146,161,122,112,84,89,133,105,97,96,96,78,66,44,32,59,61,57,94,60,38,46,90,219,229,238,253,214,211,223,213,227,234,236,215,191,221,207,191,213,203,229,218,218,216,228,227,222,204,206,216,226,217,195,213,233,198,199,194,202,215,187,205,222,215,216,203,214,203,134,2,14,19,21,7,9,5,2,23,4,10,17,17,2,11,17,27,10,18,12,23,23,8,33,219,216,227,209,198,224,247,233,203,226,221,224,232,227,232,226,207,208,226,223,225,215,216,227,218,213,201,222,216,218,217,222,196,219,242,231,224,222,223,234,227,236,233,226,207,225,230,210,232,245,208,225,216,211,239,216,219,226,215,191,234,216,196,227,213,220,227,217,221,224,226,219,214,231,217,220,238,221,200,221,234,223,233,229,209,231,221,225,206,205,203,227,219,220,233,217,214,247,213,2
26,232,225,226,224,240,223,240,227,223,228,232,234,222,216,231,234,228,211,230,195,208,241,175,238,221,227,241,238,232,223,241,209,230,229,205,239,237,233,225,235,209,220,242,229,212,226,226,240,255,222,219,235,230,250,237,247,241,227,240,246,239,233,222,225,241,244,237,245,247,228,223,237,252,239,226,234,251,242,247,241,224,253,239,245,227,253,226,251,247,252,251,239,251,228,245,223,255,247,239,244,252,233,230,235,239,226,244,222,242,225,238,247,226,229,231,254,251,255,232,230,229,249,246,247,244,250,252,236,245,220,251,223,244,235,230,237,244,247,237,255,252,232,233,243,236,242,255,238,233,239,234,237,251,254,233,248,238,244,242,237,245,232,216,254,241,247,224,243,219,253,254,234,229,253,253,244,234,228,242,246,232,255,252,227,247,236,229,234,231,248,243,249,253,245,240,239,250,226,240,249,246,227,238,247,239,246,234,243,245,237,253,246,250,248,242,238,243,237,229,254,253,244,254,230,233,242,225,238,240,240,250,234,200,227,209,222,249,200,235,246,229,209,195,241,245,177,142,131,155,69,70,109,54,128,209,240,208,252,250,244,253,224,221,185,43,142,181,12,47,195,248,255,232,194,205,203,251,233,191,215,231,243,237,240,224,232,247,238,237,212,200,206,242,244,135,147,70,0,81,79,73,33,39,19,47,213,253,187,147,243,253,226,244,214,200,203,244,249,251,255,241,254,249,242,236,241,219,238,236,229,236,213,174,228,133,52,117,176,186,188,215,234,204,169,241,234,229,249,246,249,241,249,249,244,239,190,129,77,54,12,12,6,49,13,25,45,67,71,39,58,78,71,47,11,64,111,73,100,61,120,110,58,64,14,19,179,236,251,242,226,245,249,234,246,242,244,227,232,237,253,228,250,243,223,231,237,222,228,208,204,239,237,150,200,216,237,221,232,86,92,168,180,251,249,161,200,221,155,157,184,244,237,252,237,240,212,230,241,119,131,72,60,26,25,8,2,139,245,250,251,252,228,197,219,252,235,228,159,135,229,208,244,216,200,231,231,215,230,205,219,194,239,203,233,216,211,233,208,223,199,194,211,214,220,226,209,204,208,196,225,237,221,210,195,200,215,236,184,197,198,199,199,215,225,209,203,214,206,193,215,198,196,2
19,217,213,194,189,195,198,175,217,221,221,219,234,172,130,98,76,67,66,59,31,28,42,52,68,98,57,73,25,37,20,27,41,77,19,38,163,217,229,218,219,203,210,213,216,202,227,227,216,231,217,208,224,209,205,222,219,208,221,211,222,197,231,223,210,232,217,215,209,220,220,206,206,233,241,207,218,213,201,194,216,217,237,88,12,12,16,0,16,13,15,11,2,26,9,19,14,14,9,22,4,6,9,32,33,20,7,0,223,210,204,209,211,222,221,207,232,217,226,212,220,233,223,242,222,227,227,224,206,216,232,206,219,214,220,233,213,222,241,202,184,189,231,196,235,231,214,211,213,210,209,238,219,220,226,196,240,214,213,209,208,219,229,208,208,225,205,215,224,213,207,209,235,207,222,213,202,232,198,204,238,231,213,242,192,225,226,228,238,216,222,222,243,224,244,235,214,194,227,219,223,234,212,220,223,218,231,206,205,225,221,235,210,224,236,229,219,209,232,213,220,244,237,246,202,217,202,230,226,220,216,234,226,225,244,238,237,235,220,222,226,234,231,239,245,224,218,240,214,225,227,228,237,216,249,234,250,229,223,230,227,242,247,250,239,220,238,224,209,243,238,220,244,233,240,227,224,251,224,243,252,243,223,218,224,242,249,248,235,252,238,247,229,228,251,225,235,235,220,224,222,241,248,246,251,243,243,250,252,230,250,250,223,251,243,242,244,255,254,222,243,233,251,251,245,255,252,241,253,248,225,246,252,250,245,224,224,224,248,238,238,242,232,243,250,253,226,247,247,235,248,210,205,239,219,237,241,253,255,255,242,246,247,243,240,254,235,244,232,243,243,238,255,246,255,236,217,238,247,240,238,227,232,241,235,235,235,230,244,227,246,235,251,241,235,239,249,246,228,223,235,245,233,254,247,230,233,246,252,248,238,246,252,254,246,237,241,234,253,234,252,239,230,246,216,217,233,253,255,246,214,243,230,238,235,255,255,213,241,236,197,221,243,243,236,194,207,224,225,214,205,254,237,162,153,125,148,39,148,224,230,228,224,234,241,247,236,245,242,231,212,163,71,191,175,57,117,225,254,248,231,217,209,190,245,197,203,184,224,239,246,252,243,247,245,240,249,202,202,191,228,224,142,135,56,0,88,85,54,55,41,40,59,221,247,124,151,2
47,220,218,214,238,206,219,239,236,242,240,227,235,228,213,240,245,254,249,243,253,242,246,247,246,180,67,75,67,212,235,234,244,142,165,245,228,224,238,240,244,246,245,243,231,220,222,236,209,140,140,109,46,44,7,24,59,58,46,64,117,79,47,18,26,42,109,73,74,30,92,98,76,76,38,55,161,225,245,250,255,235,238,250,243,253,229,225,243,221,232,242,243,244,239,223,235,255,244,240,206,253,247,182,160,178,247,230,214,150,152,207,215,247,236,199,223,220,191,176,218,213,235,243,213,230,236,252,193,97,114,85,84,31,40,14,32,177,232,253,245,236,208,205,234,246,196,176,127,157,193,242,227,203,194,213,221,232,210,231,220,228,191,220,232,220,209,217,193,193,226,209,194,210,210,195,211,183,233,223,220,224,219,240,238,204,188,216,200,216,228,212,199,182,193,190,211,203,216,219,187,235,214,219,218,203,213,204,208,192,212,217,186,207,215,200,127,65,53,34,63,54,19,54,24,36,40,59,78,54,57,44,59,49,48,63,55,35,78,204,216,241,232,209,203,223,214,213,221,210,210,205,238,217,201,237,197,209,194,214,205,209,222,207,231,234,217,202,227,218,200,210,232,200,192,209,215,193,233,231,194,208,211,206,194,221,116,0,14,6,8,16,3,12,0,4,8,16,25,2,6,10,26,3,7,13,18,3,3,0,13,213,239,203,207,207,206,191,223,214,221,175,200,226,222,247,214,178,230,222,218,232,229,206,213,211,241,230,217,223,228,207,224,196,222,225,238,214,212,215,232,189,233,194,220,224,201,206,212,222,217,208,225,223,237,230,209,211,232,235,228,207,243,243,228,237,224,214,219,223,195,237,221,234,202,204,232,188,215,231,213,200,224,246,222,234,217,226,224,230,232,209,216,224,215,215,227,214,247,207,222,208,219,225,228,215,241,220,224,238,220,229,214,203,229,215,223,236,203,235,241,246,233,196,214,202,207,236,233,241,217,226,193,241,198,237,221,239,240,214,225,237,223,228,242,228,220,237,219,226,215,228,245,227,229,234,234,225,245,247,229,216,216,249,241,253,239,241,248,229,245,237,238,248,236,229,240,247,230,226,241,218,227,233,235,253,247,229,232,247,254,253,249,239,246,241,239,254,253,250,226,239,255,247,238,226,248,246,233,245,242,242,252,23
2,232,253,224,244,254,216,253,234,225,223,235,248,250,244,235,233,241,253,251,243,249,245,243,244,226,243,224,227,247,241,233,247,239,252,239,251,236,239,235,249,240,216,243,253,232,233,247,239,255,241,220,252,241,252,250,237,242,249,242,248,255,245,253,237,239,231,243,250,254,241,244,240,237,234,244,234,235,238,234,250,246,253,247,241,218,239,240,235,241,227,246,232,224,230,236,236,247,253,242,250,225,248,255,222,243,245,241,246,247,234,229,236,249,247,244,252,227,253,243,217,241,215,179,204,191,225,244,233,208,216,247,225,189,145,148,115,50,136,237,254,247,205,199,216,236,219,244,230,245,244,213,85,134,210,160,218,250,221,252,169,220,187,189,207,216,219,207,245,253,236,240,248,251,237,227,251,208,176,189,248,248,120,145,28,9,90,86,60,54,47,39,23,245,250,116,176,250,236,212,200,203,177,195,228,248,231,242,228,249,240,221,218,249,238,233,231,231,255,249,246,228,171,144,83,68,221,225,253,231,126,175,251,201,205,237,242,209,251,251,234,255,212,225,254,255,246,244,240,210,85,6,44,164,82,38,37,67,102,62,71,49,56,89,82,57,63,100,99,31,49,22,47,176,245,255,241,252,239,244,236,234,229,243,226,216,247,251,235,249,247,252,227,245,252,228,216,218,244,245,210,174,213,243,224,229,156,145,237,248,217,253,212,226,194,201,218,222,211,231,234,224,241,234,254,231,114,111,89,78,28,28,3,79,243,245,254,248,218,187,217,238,245,152,74,79,180,222,218,241,186,203,221,189,209,231,215,193,206,208,232,213,211,217,196,203,191,223,233,215,224,218,220,219,215,192,212,213,186,216,198,202,206,217,218,213,214,207,209,191,226,184,201,204,199,216,206,216,209,212,172,194,217,191,203,232,179,220,210,210,211,219,180,147,109,87,66,53,81,51,67,58,46,62,52,45,40,48,41,30,31,52,72,51,22,111,211,222,215,211,229,228,237,226,212,205,220,220,217,219,196,226,234,191,227,219,206,207,205,212,209,191,243,223,211,197,226,220,199,222,184,214,223,206,217,202,218,208,207,225,200,240,206,87,1,0,23,10,2,0,5,13,15,40,6,24,14,7,14,16,10,8,2,11,7,7,3,7,242,212,235,212,229,203,210,236,214,222,227,196,219,169,222,212,216,234,
217,234,236,221,219,228,215,227,217,223,235,229,226,224,211,229,229,236,215,225,210,208,210,213,233,226,225,234,193,225,219,212,219,234,200,222,222,210,232,199,213,212,214,205,234,226,210,223,235,238,223,238,238,224,214,230,227,228,251,216,206,215,210,227,208,231,191,211,236,220,212,217,211,242,222,219,209,214,222,253,239,225,231,209,218,244,223,224,223,241,248,219,224,227,224,226,219,229,220,239,236,235,228,204,227,235,225,242,229,200,247,217,227,219,225,235,217,245,237,224,239,225,209,228,239,219,241,218,224,235,213,239,218,241,228,225,225,223,233,244,244,230,240,240,235,229,245,236,243,247,210,243,229,230,231,246,247,235,232,246,235,227,236,247,240,229,227,236,240,244,247,249,252,239,238,197,244,230,247,255,254,236,242,253,255,225,252,248,231,254,240,242,233,252,232,220,226,229,224,238,253,252,241,243,236,226,254,236,242,238,240,216,245,252,230,244,240,232,244,252,248,246,242,238,240,245,232,240,240,244,234,254,245,248,252,247,254,228,239,220,230,249,255,238,244,242,242,237,246,236,253,232,237,235,228,242,233,248,236,237,255,249,252,248,225,247,251,252,242,247,215,231,245,231,239,253,216,233,246,237,252,223,241,251,244,235,226,250,240,255,242,225,240,239,234,249,239,253,244,246,246,250,234,236,234,247,248,255,234,218,233,225,248,232,177,203,206,239,242,231,247,254,227,210,220,253,247,163,160,145,116,55,108,221,246,250,206,173,242,248,219,219,222,238,216,195,145,184,183,215,251,254,251,254,210,217,213,206,211,234,248,240,232,252,252,253,242,237,221,216,239,197,186,190,236,202,126,151,57,9,56,82,84,38,44,36,47,230,251,117,178,226,252,227,204,213,211,205,246,244,231,251,238,236,251,248,240,232,218,235,228,229,243,235,248,213,140,153,82,50,213,237,213,185,64,221,230,211,251,235,254,253,246,244,234,236,218,232,251,253,255,245,244,213,131,32,85,175,68,38,33,72,97,51,25,37,63,93,66,68,44,113,130,50,65,35,66,213,236,255,236,246,246,245,236,227,239,224,224,235,239,230,247,250,244,209,203,241,249,248,217,222,234,239,211,153,193,243,252,203,184,171,174,240,209,222,195,165,1
64,201,245,200,204,231,230,233,245,237,252,205,114,110,85,83,50,19,20,152,229,246,244,243,201,209,228,246,207,133,44,60,216,232,237,214,193,229,226,204,226,210,217,198,199,230,216,187,212,210,224,206,226,191,203,201,221,193,203,233,196,203,232,209,199,196,207,190,243,197,228,210,209,219,214,224,246,228,195,220,190,222,179,212,207,214,199,192,203,209,199,201,198,231,207,234,197,225,200,171,167,183,119,125,69,63,99,48,101,110,77,64,83,65,46,34,61,60,50,32,140,218,242,206,217,230,213,242,212,236,218,222,232,215,220,238,233,228,226,216,212,222,223,220,213,219,211,206,212,218,189,210,189,242,225,201,211,202,200,194,208,201,198,218,218,210,216,190,232,105,7,0,6,18,18,0,14,16,16,14,26,2,36,9,1,36,16,4,12,2,6,8,10,17,201,188,183,205,216,215,227,244,226,218,229,233,222,213,240,222,210,215,208,221,217,211,230,244,213,246,232,234,225,217,206,226,222,227,226,230,190,213,220,201,220,204,243,231,197,208,203,217,213,223,195,229,238,236,205,221,238,210,229,227,237,217,232,232,237,222,214,227,228,237,218,233,226,226,193,238,226,250,244,208,232,241,208,233,245,218,229,223,210,223,219,238,234,240,220,238,214,223,213,214,213,236,204,234,212,231,225,230,246,236,241,240,215,237,219,238,241,233,221,231,214,214,231,217,206,234,239,221,215,217,211,239,243,237,235,241,202,218,222,219,236,228,237,236,226,221,238,228,241,239,241,248,249,217,220,226,214,236,210,241,249,237,239,230,223,225,250,237,242,229,222,235,252,251,232,244,219,247,239,242,237,224,209,247,242,239,239,254,247,252,254,249,249,247,246,246,239,255,252,245,238,243,241,255,240,240,237,235,251,237,238,240,245,243,249,243,252,246,237,251,241,254,255,235,218,246,241,249,252,229,223,239,243,223,228,232,220,254,247,241,252,244,249,230,244,231,238,234,238,220,229,236,231,231,226,236,236,236,243,247,252,242,245,238,255,255,255,243,248,238,237,251,246,249,206,249,245,243,239,253,241,212,224,252,253,235,233,240,251,226,248,255,234,249,236,243,229,254,245,245,232,237,237,235,239,233,245,228,243,244,243,238,216,224,213,238,249,228,247,218,2
31,246,246,253,249,235,240,250,239,209,229,176,183,233,243,220,254,209,220,241,210,200,210,254,241,187,155,137,137,67,108,238,252,250,255,199,169,209,246,245,248,222,233,216,167,199,176,215,251,247,231,224,205,217,223,190,249,173,232,239,203,252,222,238,233,249,235,229,206,200,194,186,242,171,122,147,56,9,59,76,78,31,69,36,38,238,178,93,151,197,249,227,221,231,236,211,224,254,242,250,241,247,213,230,223,244,242,241,247,248,227,242,236,177,165,196,109,32,146,209,215,133,142,243,207,190,230,237,247,250,234,230,241,215,196,253,230,254,230,245,252,164,133,41,81,174,81,30,35,76,80,23,45,32,72,92,80,60,31,84,127,65,38,18,29,212,237,248,253,250,252,252,230,250,246,222,231,229,253,251,239,247,218,230,236,233,226,253,238,207,242,236,226,180,190,219,225,226,204,173,140,228,226,236,210,178,131,185,217,230,221,214,236,220,244,235,241,206,107,122,95,34,19,15,76,233,250,243,242,228,214,209,251,242,173,93,31,71,211,208,218,234,193,204,214,216,212,214,199,241,231,189,204,211,226,199,225,223,179,211,210,228,205,199,239,203,222,237,205,204,205,212,208,214,224,218,183,214,208,218,198,223,209,182,189,190,207,198,198,231,220,184,206,202,208,218,204,209,214,186,203,229,208,202,185,211,222,195,201,146,109,58,47,31,94,141,121,94,81,41,44,36,31,24,38,150,240,210,243,232,216,233,225,212,209,214,199,213,209,220,219,225,200,225,196,208,220,220,203,180,219,213,223,217,192,217,246,218,205,209,216,236,214,224,194,227,229,231,215,200,200,215,206,205,211,103,7,7,9,23,21,6,13,9,19,18,8,23,11,0,0,8,15,5,1,15,6,10,31,20,226,231,205,221,210,238,217,202,224,227,233,216,216,216,209,209,213,211,226,229,236,212,232,196,218,212,236,221,208,229,217,224,218,246,232,227,230,218,228,233,207,240,224,217,199,219,231,230,227,224,211,226,215,226,213,221,218,219,206,188,226,236,237,223,217,235,221,238,219,231,240,236,227,236,217,233,217,202,225,233,217,238,235,224,210,231,222,206,223,208,207,239,234,215,226,231,229,233,227,235,215,232,238,233,226,219,233,211,218,219,229,214,210,227,208,230,212,235,226,225,228,202,21
4,234,241,229,217,252,225,227,232,220,226,224,248,228,233,238,236,217,230,237,230,219,245,231,233,226,223,232,250,220,206,220,240,248,228,240,224,237,245,254,237,247,233,231,244,239,234,242,244,250,243,251,235,217,239,253,235,229,229,243,222,228,255,212,229,226,218,240,223,239,251,230,219,253,232,249,244,238,253,250,250,234,248,242,246,226,236,233,236,250,248,241,237,239,243,236,251,242,230,255,247,239,248,221,233,222,228,230,242,233,229,234,241,250,237,255,235,237,255,255,236,245,242,241,232,244,253,252,245,229,249,242,232,246,248,236,233,239,249,227,255,217,251,246,255,231,235,246,232,249,236,245,245,238,247,243,254,244,223,233,240,254,224,246,237,248,246,250,252,233,239,228,253,230,232,241,229,235,244,232,253,232,253,247,243,244,248,232,232,252,223,247,242,249,245,233,252,222,240,241,233,227,255,243,247,249,214,205,215,198,234,247,239,229,219,213,216,229,221,193,237,246,250,165,128,150,121,45,127,241,246,252,249,206,188,198,225,224,254,237,213,171,198,210,235,242,248,232,247,237,153,204,203,201,166,99,159,197,223,224,237,249,248,249,234,227,232,216,176,192,255,147,138,157,34,33,55,48,56,40,30,71,30,47,97,140,198,192,205,193,187,213,223,184,229,248,228,253,247,237,228,215,237,241,246,240,250,229,246,233,221,169,146,214,144,50,103,182,240,171,223,206,206,239,222,250,241,246,236,228,235,233,225,241,248,245,252,238,237,162,115,42,106,166,27,30,20,62,65,27,35,21,59,80,81,111,40,98,119,70,61,27,26,120,241,245,238,221,248,242,246,249,241,255,239,206,251,233,248,241,246,234,228,239,238,250,250,238,231,252,209,175,185,211,238,245,218,149,158,205,225,188,202,215,85,150,226,190,179,163,220,221,228,230,250,185,106,100,68,45,44,111,185,235,244,226,246,211,240,215,251,213,145,80,9,106,224,205,214,219,189,223,214,212,209,208,210,205,203,227,223,212,210,230,205,240,218,217,181,224,228,211,202,233,240,182,218,208,188,202,213,211,203,200,228,218,202,209,193,202,222,178,217,204,201,207,192,196,220,211,211,201,203,188,191,176,198,215,219,205,212,210,166,191,209,220,210,136,109,56,31
,19,16,67,78,41,64,55,27,45,10,18,113,229,245,228,223,228,213,205,199,187,243,216,212,210,225,207,206,238,225,226,200,208,226,210,235,217,217,232,218,245,222,211,195,208,165,213,198,205,221,199,219,226,220,209,194,207,207,234,228,229,204,105,12,14,18,18,2,16,12,6,4,0,33,3,19,12,10,22,11,13,14,22,1,1,8,1,213,229,237,228,241,234,226,226,211,245,230,223,218,218,217,224,206,242,205,215,243,212,219,231,249,217,224,217,222,214,204,184,227,200,232,220,216,207,212,242,242,227,220,233,222,205,218,242,214,192,223,229,207,217,205,211,210,227,227,241,201,216,225,223,222,221,203,220,232,237,223,243,215,229,234,249,219,235,228,231,238,252,239,225,244,223,222,213,244,240,218,252,237,203,215,210,233,231,242,232,234,233,241,230,208,235,239,222,243,210,246,235,229,230,229,199,233,237,221,226,220,212,237,230,224,232,230,213,227,246,242,229,235,237,231,231,230,229,237,229,209,228,241,221,234,221,239,229,220,218,253,218,219,236,243,233,241,224,237,252,239,234,222,255,228,235,235,246,228,241,233,247,223,246,232,221,249,243,220,242,218,213,243,251,250,247,251,231,239,247,246,252,249,234,255,244,249,237,237,232,237,244,228,219,252,241,223,230,230,248,235,230,249,248,239,253,238,247,236,255,234,227,236,251,248,236,253,241,245,235,238,247,252,249,243,233,237,249,254,226,253,245,249,211,242,242,231,251,229,247,241,251,253,239,249,252,254,227,249,250,246,245,242,250,235,254,235,246,253,235,234,216,242,236,250,244,239,234,252,244,246,250,236,227,248,242,222,253,229,240,222,251,238,243,254,251,239,236,251,228,225,253,224,246,246,253,252,227,230,224,238,243,221,238,227,242,227,255,253,255,252,229,225,239,252,242,237,246,250,215,213,204,236,216,234,223,219,211,238,247,211,199,210,242,244,153,137,127,127,36,109,241,248,240,252,228,229,187,175,249,229,249,230,176,217,187,224,232,247,243,234,159,101,236,210,151,127,14,83,181,238,226,234,235,227,254,245,244,230,203,156,199,252,180,131,128,67,50,66,44,71,51,76,61,84,25,107,191,211,236,237,192,182,234,237,213,237,252,244,253,229,236,250,219,240,233,251,
241,253,233,252,200,200,117,149,219,118,87,230,245,228,167,225,255,183,230,236,253,238,226,255,255,255,229,209,249,240,247,248,246,240,158,104,23,64,168,26,46,39,68,51,35,38,12,70,72,64,88,46,102,109,75,53,47,12,62,214,249,251,248,249,237,252,255,242,244,224,223,240,253,223,245,233,250,243,220,238,253,249,231,224,242,223,197,174,228,235,231,194,171,121,156,181,164,187,232,157,162,216,241,156,138,213,221,254,247,251,159,94,67,77,23,79,218,226,252,251,212,210,227,226,253,245,168,139,98,9,154,237,211,243,204,205,214,216,207,216,218,205,223,198,201,227,208,207,224,185,219,219,189,201,227,195,213,213,220,232,209,228,220,224,223,222,208,202,211,204,197,194,228,210,218,185,227,204,196,203,207,224,182,196,210,203,208,209,211,193,217,195,214,225,206,212,182,171,216,206,213,220,185,156,146,133,79,59,50,48,26,39,12,31,88,152,215,235,223,229,231,221,215,236,186,219,204,199,199,221,208,209,227,195,225,205,229,222,238,223,203,233,219,201,223,213,231,228,184,210,200,232,208,210,211,196,207,239,224,217,201,190,216,209,220,220,210,201,106,0,17,6,7,7,15,3,1,29,0,23,19,10,0,19,2,24,15,11,4,0,11,9,7,210,231,216,236,242,225,222,189,219,221,226,236,212,212,203,215,199,230,238,223,219,237,209,211,227,242,211,215,223,220,206,237,233,211,208,233,239,220,232,206,221,212,226,224,198,213,227,234,199,217,223,207,217,219,207,197,228,190,194,227,230,209,229,203,219,221,236,241,243,224,244,240,229,219,219,224,223,233,246,233,244,220,205,224,246,233,221,233,216,247,233,231,235,233,214,231,226,241,216,231,218,217,236,215,225,232,234,237,250,234,201,233,220,237,232,222,218,221,247,240,222,216,228,241,242,220,206,231,225,242,222,217,221,240,221,217,225,234,237,238,242,243,252,214,237,221,231,209,237,226,233,238,201,243,240,227,230,237,226,237,237,208,235,215,239,222,230,219,237,228,242,252,224,239,222,238,234,244,235,207,247,214,244,247,232,226,255,253,218,245,248,248,246,235,246,241,223,236,246,230,228,244,255,255,255,245,252,233,244,251,235,240,231,245,243,243,249,231,252,244,246,247,236,254,241,241
,235,232,225,241,238,245,239,247,234,216,211,251,244,255,251,249,226,246,245,255,255,235,255,243,245,241,243,255,245,252,253,247,250,221,253,225,234,236,233,236,253,246,233,246,231,241,254,242,242,243,248,236,255,245,224,232,237,245,241,245,249,221,249,241,235,233,250,244,234,255,211,248,229,243,233,248,247,253,231,231,246,226,242,234,227,231,241,224,249,252,224,244,239,244,242,255,247,240,250,245,228,226,218,230,234,183,220,193,208,227,235,248,251,207,224,204,222,230,246,170,148,133,137,41,103,237,255,247,226,234,243,201,179,201,232,243,208,196,249,186,215,216,224,230,182,120,55,141,167,94,51,5,112,187,252,208,234,236,252,243,249,239,245,242,138,208,255,211,193,139,85,78,74,74,57,33,31,75,34,54,209,251,226,253,230,218,205,228,247,239,252,254,243,249,253,245,240,253,217,238,244,230,234,239,228,205,139,62,146,105,112,173,242,253,195,186,243,206,203,217,241,236,226,246,249,244,250,209,236,252,241,252,245,241,222,172,96,31,119,181,18,48,26,86,51,17,61,15,66,97,80,89,64,87,105,66,23,60,25,105,212,234,252,246,245,229,245,252,244,247,246,225,223,232,240,237,239,236,250,237,214,235,251,241,196,247,247,220,177,208,243,240,238,162,105,87,135,185,241,218,187,186,227,207,190,165,211,226,234,251,216,127,83,75,26,86,193,202,232,251,241,250,242,236,246,229,209,127,157,87,54,169,222,227,233,205,211,236,230,218,211,201,215,214,215,181,230,249,221,204,205,198,217,212,214,208,184,223,218,219,211,199,211,212,207,202,224,219,233,206,218,193,230,214,217,213,222,215,214,202,220,228,216,220,211,217,222,208,202,195,220,209,213,215,211,234,210,205,204,199,221,217,239,249,249,236,227,227,195,137,90,29,90,213,239,234,239,235,248,239,225,237,217,212,200,197,229,202,205,216,221,223,222,208,203,213,225,207,232,214,222,205,224,220,235,208,211,225,232,194,220,212,204,221,212,204,221,206,217,232,207,205,227,184,215,226,228,222,206,118,2,0,0,2,4,17,9,16,22,12,41,28,2,6,1,18,5,4,0,25,4,19,2,17,237,192,203,223,239,231,228,219,217,218,220,211,206,239,215,198,229,217,206,195,212,233,236,225,243,212,224,
214,231,217,206,226,240,225,222,220,241,243,232,214,236,214,237,204,222,234,214,215,246,235,236,216,216,236,207,244,222,219,233,232,203,237,224,245,239,213,225,225,228,233,237,216,227,227,211,237,239,238,229,239,244,241,238,241,243,214,229,212,224,229,230,231,235,221,239,186,230,235,227,199,247,211,234,238,227,228,217,227,237,226,236,235,215,204,225,241,232,231,220,246,210,235,231,223,206,243,244,248,231,237,245,229,247,247,238,214,204,237,213,240,240,239,242,232,235,232,234,224,229,236,252,233,218,233,231,223,232,237,204,228,254,240,249,243,245,224,225,248,244,227,228,254,248,232,230,231,228,228,240,244,240,221,216,239,245,238,250,231,232,246,230,245,222,245,228,236,241,204,233,251,223,255,254,235,250,255,255,246,242,243,247,240,243,253,243,240,237,226,249,246,255,230,219,241,241,249,254,244,255,245,233,255,234,213,250,253,255,234,254,250,247,243,244,244,254,241,229,246,242,244,247,244,248,247,237,255,237,234,232,241,248,222,227,236,221,237,242,224,235,250,244,247,233,249,250,237,255,242,235,251,242,248,237,247,217,228,243,244,248,250,237,252,249,245,237,241,225,245,254,255,230,254,241,247,245,249,246,240,213,229,225,247,230,226,243,218,247,249,229,239,254,236,235,252,230,250,239,252,191,240,193,178,222,222,245,246,248,204,217,236,202,233,214,222,245,185,175,147,144,24,88,247,253,241,243,232,235,235,187,235,228,250,217,176,217,157,189,205,221,224,132,13,16,127,122,74,49,83,227,249,250,242,239,247,250,239,251,255,250,229,227,238,247,243,182,138,63,35,73,76,68,45,29,64,41,92,218,248,251,255,236,198,255,251,240,239,241,247,243,254,254,251,231,234,232,250,234,247,236,241,252,220,136,65,85,30,59,168,239,238,105,194,246,224,201,239,246,244,253,239,250,249,247,224,228,248,235,255,255,238,230,159,89,9,81,136,19,16,97,102,46,29,21,30,38,83,106,52,53,81,117,117,63,50,7,102,248,234,245,255,230,230,237,242,240,241,249,222,211,225,250,255,238,230,237,235,223,235,249,243,210,238,252,233,157,162,231,240,222,205,138,140,178,212,227,232,171,138,188,215,216,207,181,219,240,234,147,1
21,143,55,107,199,251,227,250,253,224,238,239,237,252,255,153,89,112,103,116,191,201,228,230,204,201,207,202,209,220,222,228,230,203,230,208,227,224,243,201,207,206,207,201,206,210,210,209,196,222,237,216,201,210,193,218,209,200,210,223,236,215,214,196,180,223,208,205,224,207,198,204,195,199,192,207,203,211,195,198,199,191,196,205,211,187,168,204,207,226,184,231,224,251,254,248,242,238,173,111,92,193,240,245,227,253,239,224,231,212,217,200,203,191,203,218,228,203,203,224,241,234,223,211,217,231,216,221,217,242,201,228,226,206,206,233,225,217,216,240,213,228,210,242,217,218,184,228,202,209,201,220,205,228,215,211,233,212,107,7,4,11,21,2,14,14,0,23,7,3,22,5,2,0,17,15,14,11,33,28,4,9,19,220,233,212,223,227,206,219,218,203,222,240,229,204,185,226,220,226,223,222,232,224,216,236,195,209,206,241,237,236,241,207,195,195,237,222,202,206,225,205,225,238,225,216,211,243,242,243,236,242,233,193,235,226,231,214,236,222,237,213,236,230,241,223,231,211,234,225,234,224,216,225,236,235,243,241,241,217,239,232,217,221,249,234,228,237,228,216,214,246,235,222,217,207,225,225,252,236,241,233,220,241,243,222,244,238,233,209,231,238,233,233,219,235,214,235,210,240,229,232,250,243,228,224,236,218,239,194,226,227,227,208,218,210,243,228,230,252,227,231,239,219,250,234,231,228,239,234,238,228,219,212,251,237,209,228,218,228,236,241,243,253,232,235,252,220,238,225,243,235,235,233,241,246,249,229,246,198,242,232,239,242,249,254,248,234,239,235,233,237,233,247,253,247,246,233,247,254,233,250,241,255,224,239,243,236,247,242,242,250,255,248,241,234,224,232,229,246,238,251,241,249,232,246,236,233,240,254,250,247,253,255,253,244,240,238,235,252,224,247,243,233,239,244,255,221,238,246,242,252,247,237,250,227,246,251,245,252,242,251,243,230,242,240,252,225,237,228,253,246,224,230,245,254,251,223,249,240,231,228,233,242,241,253,246,224,237,240,242,254,229,246,231,237,255,230,210,235,227,222,233,249,246,243,229,237,240,243,240,255,235,253,239,245,232,254,227,239,239,231,216,234,255,227,229,223,237,229
,246,214,252,228,230,240,249,241,237,194,212,244,242,217,220,237,244,219,182,173,162,118,52,116,239,227,232,252,226,252,242,199,213,206,243,190,113,195,90,113,208,207,197,75,19,124,224,217,203,183,252,251,254,246,245,255,253,255,249,240,218,179,180,132,121,195,130,136,73,14,77,38,35,32,37,38,54,48,48,173,158,144,200,197,172,221,232,243,228,252,245,249,232,245,240,253,255,253,232,246,244,244,244,225,237,155,97,55,8,62,186,246,171,61,176,235,212,212,242,251,247,248,253,248,255,247,173,247,245,244,249,232,236,236,171,67,16,21,57,41,28,57,121,39,28,12,25,62,65,74,46,49,90,110,75,35,66,27,103,232,233,245,239,230,226,231,255,243,236,210,239,218,237,252,229,250,222,254,231,227,201,242,233,222,214,240,217,172,167,201,231,238,205,206,172,186,238,220,205,177,171,151,204,212,232,174,197,236,177,156,121,103,86,173,243,237,255,253,217,232,241,225,226,255,206,112,80,97,91,104,204,204,241,212,202,203,216,206,238,204,225,215,226,215,207,203,214,213,214,213,198,218,198,240,192,237,208,227,211,232,215,205,218,220,198,201,191,205,203,204,205,211,193,193,209,226,219,218,209,186,229,229,219,213,221,220,222,207,211,218,193,192,217,200,220,176,184,215,212,234,218,215,217,225,217,240,240,185,159,114,65,199,218,230,249,225,230,224,205,232,220,216,227,235,208,238,192,218,234,214,206,221,207,203,212,209,208,214,223,223,207,207,232,237,205,221,224,228,227,227,207,224,201,215,205,213,198,194,225,218,230,207,223,199,196,212,194,194,105,13,19,6,0,22,23,16,7,6,28,9,14,7,0,2,12,10,1,9,26,6,19,8,4,224,211,232,243,232,216,225,236,210,220,226,189,220,224,204,196,201,230,215,210,226,204,235,228,215,197,202,210,228,218,235,221,228,237,228,238,236,211,249,212,231,240,217,238,215,224,219,212,244,230,235,241,226,237,224,203,223,214,222,226,236,215,225,243,238,234,220,225,227,224,227,222,234,244,245,224,241,240,248,243,227,254,248,246,233,215,236,229,231,224,247,230,247,242,255,246,219,231,241,239,244,217,233,224,225,247,237,251,227,205,233,215,242,241,227,223,224,238,233,253,226,246,246,234,251,231,233,215
,251,229,219,217,243,230,231,229,214,207,239,209,228,207,236,246,230,229,242,218,228,245,247,230,228,233,219,239,210,241,237,236,227,242,229,214,222,246,226,228,233,240,222,235,226,249,226,241,240,225,250,246,212,225,234,244,247,250,249,234,243,255,238,237,222,241,253,229,253,237,216,236,248,247,252,247,253,239,227,243,240,221,238,213,217,246,247,251,234,255,231,243,235,230,252,237,234,237,241,222,237,242,243,224,233,236,240,242,237,226,223,255,254,220,242,249,236,235,241,230,252,248,232,252,229,255,246,248,240,239,250,239,244,232,245,253,250,252,245,244,224,230,231,242,246,235,233,237,245,254,228,227,234,252,248,247,231,253,226,239,218,249,246,228,230,250,241,253,231,240,238,251,255,217,249,235,232,234,235,249,246,215,248,233,240,242,226,226,242,255,233,246,241,255,234,222,249,248,255,242,223,249,228,223,236,226,184,224,212,244,226,236,207,206,245,246,221,132,154,159,166,98,142,234,242,247,227,227,248,226,216,239,175,226,145,62,141,24,95,202,189,248,204,226,252,240,255,255,251,229,218,221,222,160,179,113,121,113,96,84,61,53,22,15,21,67,43,20,16,22,42,56,71,59,39,51,34,27,57,24,11,3,20,12,15,25,27,38,80,91,143,134,188,227,233,249,245,242,246,245,235,245,253,238,222,141,93,19,51,217,227,160,131,236,237,215,251,255,255,247,222,233,232,240,219,190,255,217,243,236,233,248,239,195,140,7,2,16,37,78,113,64,34,39,21,43,57,73,76,53,47,85,114,77,47,70,5,60,226,244,241,228,234,242,233,255,249,243,236,210,195,225,229,233,252,227,243,233,215,227,232,239,231,211,248,236,202,144,193,236,215,219,211,168,165,207,222,160,166,202,151,156,217,221,223,219,240,186,72,90,71,123,228,250,255,246,236,252,226,246,255,249,235,194,61,90,97,76,145,223,211,238,223,216,201,228,218,219,226,202,229,216,233,220,226,210,206,223,239,230,224,240,191,203,208,224,194,220,200,229,216,212,214,200,208,224,217,214,203,202,207,208,206,222,212,192,195,204,217,203,214,204,211,209,207,218,200,217,204,198,184,207,217,198,188,210,211,223,242,216,198,200,230,245,227,228,200,158,86,57,200,219,200,207,202,218,245,208,
217,202,213,232,205,221,226,222,234,223,219,209,228,234,216,217,214,212,236,231,222,207,204,197,222,208,205,212,210,240,205,222,217,211,211,211,209,220,222,217,228,227,203,231,189,224,181,208,201,101,0,11,1,0,19,9,33,11,2,28,13,0,0,0,35,2,27,8,0,7,29,12,2,19,202,204,222,217,222,243,209,219,236,204,216,223,217,219,220,214,230,215,228,216,212,217,227,234,220,211,192,230,230,230,220,242,243,219,244,215,237,231,229,227,220,214,246,232,223,221,233,231,236,249,244,227,234,251,229,241,211,224,226,223,235,223,225,212,236,247,227,234,211,246,241,237,220,233,233,247,226,232,230,241,255,244,238,247,241,242,225,235,236,225,223,203,244,221,219,246,227,203,247,243,228,246,211,205,237,236,242,227,233,221,246,225,217,239,226,226,251,232,227,244,239,237,226,231,232,236,225,225,231,214,237,205,255,237,243,223,212,232,233,252,213,249,229,228,231,247,223,198,252,222,231,240,223,234,234,217,224,235,238,242,236,224,241,238,249,222,241,241,234,248,251,220,222,240,245,245,235,218,234,236,220,246,252,245,248,240,235,239,238,246,237,248,234,250,220,231,229,254,236,242,218,237,223,243,227,253,252,233,232,252,224,248,251,252,223,255,239,244,240,249,239,239,236,231,227,240,249,231,233,228,253,234,240,248,239,238,227,241,238,249,239,255,254,252,246,226,235,249,240,251,240,236,222,229,228,243,235,239,225,243,215,243,240,247,252,244,231,255,234,223,246,240,246,255,249,239,238,250,249,208,245,232,243,225,249,245,254,242,246,250,234,246,244,253,224,242,246,225,242,247,253,243,237,251,229,246,232,248,242,240,236,236,229,246,248,219,222,247,248,238,228,246,255,243,229,247,248,211,222,239,190,195,216,198,197,239,233,242,217,230,204,209,246,246,242,139,128,133,148,144,167,255,242,233,251,243,236,229,222,228,233,236,111,57,77,2,95,212,251,251,247,249,242,240,202,185,120,164,108,84,93,86,52,94,66,66,64,59,11,36,19,17,7,47,65,30,58,56,41,59,41,58,58,33,48,47,45,48,26,35,12,6,15,15,9,17,14,8,25,15,2,15,8,68,88,154,178,217,246,249,226,255,234,193,128,67,47,118,144,139,203,236,233,212,249,235,246,239,248,249,
241,230,221,210,245,240,255,232,251,253,202,202,172,34,35,12,22,73,142,66,43,30,29,62,84,95,57,47,25,39,76,82,68,56,9,78,232,243,237,231,241,241,236,231,226,236,231,251,226,204,237,246,238,243,247,245,228,192,233,246,206,225,227,241,218,168,194,205,213,219,217,205,137,220,217,152,196,209,156,198,236,227,233,235,164,106,78,39,86,150,245,246,234,251,255,253,231,211,224,228,239,163,56,70,107,97,141,223,241,239,248,242,240,251,243,222,208,197,194,206,201,200,220,212,245,233,221,184,214,184,203,195,202,226,212,214,218,226,202,203,201,222,207,221,227,208,201,211,220,213,216,231,184,201,223,209,227,201,200,191,212,220,217,207,247,196,215,217,198,215,236,196,156,207,220,211,201,221,228,226,189,212,239,250,195,134,81,65,202,204,238,239,194,225,194,221,220,210,214,218,221,219,236,212,211,201,211,209,215,226,225,187,216,219,211,222,226,229,217,216,190,213,220,201,185,232,226,202,216,208,225,225,195,208,201,210,227,242,201,214,212,212,212,222,224,116,0,0,9,38,21,6,0,12,6,10,15,0,10,1,1,16,6,20,24,10,24,35,34,21,219,216,240,219,226,237,215,224,237,221,228,230,217,227,216,210,216,205,238,214,211,238,209,212,229,231,220,238,223,194,223,204,223,216,248,230,236,218,228,220,219,225,235,220,216,253,233,232,193,234,244,222,238,240,238,243,239,227,238,247,229,238,238,206,218,239,230,250,209,236,224,207,238,231,233,230,206,225,236,233,225,229,235,241,240,236,213,210,232,219,224,221,254,220,220,220,246,242,213,221,223,236,239,237,218,241,230,221,229,228,244,226,224,236,228,246,221,236,251,228,244,238,231,239,235,242,222,243,220,228,247,240,236,244,224,249,232,230,242,233,240,233,222,245,213,226,226,249,233,243,249,239,241,224,239,230,219,249,246,232,228,236,245,234,236,236,213,249,236,213,236,236,224,226,233,247,246,227,225,225,249,238,242,240,236,243,255,250,238,239,242,248,240,242,254,223,250,242,240,232,245,247,230,248,236,248,249,247,223,237,247,251,253,244,253,233,247,246,246,234,227,237,243,238,243,237,245,220,244,236,240,229,221,242,234,249,241,250,246,252,245,234,253,251,249,249,2
52,238,250,234,242,240,237,225,255,236,235,230,245,229,242,249,255,238,243,234,245,240,238,215,251,248,231,236,224,233,235,254,250,232,238,246,239,218,243,237,239,245,252,227,245,212,241,210,244,235,248,231,247,240,254,234,220,209,223,255,234,222,246,244,239,245,244,223,245,247,253,240,255,254,242,227,246,207,246,222,255,221,203,212,178,182,223,236,249,253,199,208,236,244,219,229,245,247,239,154,138,120,102,80,156,251,217,239,219,235,237,212,222,246,234,238,192,108,110,59,154,227,249,251,161,151,127,130,87,67,84,56,62,84,68,100,106,126,106,125,93,104,93,105,98,122,46,61,125,58,39,58,72,43,29,27,48,66,75,68,77,77,41,39,105,93,81,70,41,34,22,30,11,18,4,4,12,4,20,6,6,29,39,37,123,116,233,178,136,123,61,64,44,111,239,242,233,214,230,248,248,252,249,247,248,247,195,240,239,230,248,244,244,253,185,203,132,39,38,19,17,59,96,71,48,37,16,29,68,67,80,66,52,40,78,91,59,44,17,58,233,230,253,235,238,247,226,232,241,251,244,245,224,212,233,245,241,249,243,236,228,203,237,235,225,213,239,237,205,172,173,210,231,211,217,206,152,183,245,205,191,213,180,163,215,243,253,164,115,86,69,52,128,240,237,252,237,242,248,245,224,239,233,244,214,120,49,49,103,103,182,207,223,235,242,198,220,216,229,215,224,216,217,198,206,200,207,209,213,201,203,196,194,205,199,217,186,225,229,208,222,223,209,233,202,229,194,222,215,206,212,234,219,224,233,213,222,214,212,237,227,215,209,224,188,221,218,231,211,205,208,255,215,235,217,177,184,213,224,234,220,221,218,236,239,241,251,244,186,115,66,81,181,194,226,237,209,217,202,194,215,228,216,227,213,224,202,204,210,213,227,228,233,197,209,194,225,224,224,208,224,214,196,215,194,210,206,213,212,206,218,215,208,198,230,220,236,230,211,204,232,223,208,227,196,218,227,205,219,126,0,4,3,10,11,3,9,19,13,3,16,30,5,15,18,8,6,14,4,0,7,16,17,7,209,216,211,214,229,237,194,234,227,216,220,218,204,223,218,229,225,225,226,239,232,205,219,223,232,238,216,229,238,223,222,223,226,214,245,219,211,207,242,217,240,210,239,242,233,236,248,216,228,241,231,231,218,227,222,207,234,
252,236,229,225,210,217,212,233,253,220,245,213,220,241,236,246,242,226,242,239,233,222,201,197,211,214,233,201,221,229,219,237,229,234,239,229,223,234,238,246,227,225,219,237,238,217,227,226,216,218,235,244,216,234,234,241,237,225,241,249,232,250,254,220,246,225,249,237,249,244,246,235,234,227,244,238,244,247,253,207,243,240,242,246,247,247,213,248,220,229,217,224,243,238,234,234,213,229,227,226,233,239,240,244,234,252,242,222,217,239,228,255,237,241,244,230,240,238,213,224,225,230,247,225,246,248,220,238,221,245,234,243,245,240,222,242,251,229,221,243,249,242,218,251,244,250,254,237,248,235,239,230,237,225,232,221,232,229,216,218,254,246,237,239,242,239,251,235,248,244,238,245,232,246,252,245,245,253,241,217,220,237,242,243,255,241,217,234,222,220,236,228,251,208,215,246,243,244,224,226,246,248,228,246,234,255,218,237,251,238,239,200,214,230,249,222,237,238,231,251,250,229,236,251,237,222,236,227,237,226,230,247,253,241,234,226,216,247,241,245,227,244,234,242,228,218,235,220,228,242,237,241,241,245,217,232,226,246,214,219,227,255,238,227,237,212,229,232,244,244,208,214,218,176,237,224,223,217,219,207,237,225,239,223,235,250,247,251,154,132,109,130,18,72,212,195,238,210,207,227,243,218,234,255,247,218,207,177,92,152,174,112,109,27,14,39,61,61,53,80,75,36,60,76,92,204,199,233,238,251,251,226,253,239,173,69,48,123,137,69,23,65,65,73,53,158,196,143,162,191,148,99,201,230,242,255,240,249,254,244,231,228,170,106,73,12,9,4,0,17,4,23,3,24,15,76,93,81,114,91,99,18,106,239,237,223,221,241,243,244,242,235,244,255,237,191,239,241,254,252,236,249,228,219,193,84,10,90,69,28,40,65,61,21,5,22,41,51,54,81,58,24,53,83,74,59,55,0,55,244,242,249,243,243,240,223,248,243,238,252,255,242,216,225,241,246,232,233,252,247,203,238,243,228,233,210,235,197,163,175,212,234,209,215,191,130,142,195,174,239,239,170,135,186,242,241,147,79,88,57,89,205,252,247,241,247,243,248,214,238,209,238,255,215,152,61,88,114,135,153,213,253,202,93,55,45,150,198,193,216,197,220,208,205,206,212,223,214,206,227,2
32,203,201,236,192,184,180,190,211,222,225,195,198,223,220,204,198,194,216,199,221,211,205,201,208,208,198,211,243,222,223,223,224,199,221,186,225,188,226,245,208,210,235,207,156,204,214,212,216,224,246,229,242,235,236,227,225,106,99,27,70,185,220,211,212,221,219,220,224,228,212,195,209,233,221,205,212,207,224,218,236,213,219,218,215,200,233,213,200,218,208,201,195,209,195,209,242,201,229,213,185,220,214,210,200,227,237,191,229,223,226,204,198,225,196,186,216,214,122,27,7,0,18,0,0,10,2,17,14,1,9,2,9,0,5,29,5,2,8,9,3,12,38,236,236,206,220,224,238,226,217,222,224,224,223,226,237,227,220,218,243,207,230,213,244,220,231,241,224,223,214,230,208,238,244,236,212,242,221,246,230,217,219,225,235,234,234,221,236,222,238,214,226,239,246,225,236,236,237,250,246,209,249,255,242,222,233,237,222,245,234,228,222,246,229,226,222,245,239,228,213,219,243,227,231,228,207,210,234,236,221,241,238,217,224,228,229,236,207,238,227,230,233,237,206,239,223,247,220,218,255,248,223,232,240,252,251,217,222,215,209,225,219,236,241,230,223,233,230,246,236,249,220,245,234,222,246,238,235,220,241,238,235,246,249,218,207,235,243,240,229,239,236,234,231,254,234,245,227,227,241,231,220,246,230,242,238,236,234,244,242,216,243,239,207,227,230,225,249,245,239,236,238,233,236,243,240,226,247,223,240,241,246,238,240,248,235,251,235,237,246,248,247,252,218,227,226,234,240,229,239,240,235,236,241,240,239,243,254,244,225,219,226,245,230,236,252,232,236,235,252,245,214,246,233,231,237,247,238,236,239,226,240,236,239,230,247,246,242,242,234,240,248,231,241,244,232,247,225,247,227,245,233,246,236,244,231,222,231,246,241,204,230,247,232,227,250,245,218,251,233,239,243,232,231,237,243,241,240,224,249,231,247,255,243,240,225,246,250,236,255,240,243,227,235,242,245,242,243,248,234,201,240,240,240,223,238,253,244,241,216,251,253,243,233,250,211,247,230,241,227,228,222,223,206,200,199,210,245,230,237,238,232,214,224,227,245,243,147,114,109,112,39,111,244,249,245,223,209,225,234,221,248,247,255,221,199,174,103,89,84,62,
65,72,78,98,82,102,144,181,145,80,32,37,130,245,255,251,252,255,252,253,230,237,191,81,79,129,142,105,83,76,109,61,83,178,177,160,139,152,130,92,182,247,216,236,239,248,248,245,255,249,176,68,87,40,161,232,217,192,161,147,54,28,36,111,53,45,39,47,87,27,146,247,250,205,231,243,244,255,247,255,218,232,216,184,222,240,255,231,236,236,241,222,188,106,1,128,112,13,44,10,59,39,46,33,64,91,34,61,35,40,60,69,94,20,64,39,99,236,250,235,247,240,242,249,228,227,217,243,228,253,220,221,250,228,230,242,233,240,223,216,233,227,235,215,251,227,183,168,200,216,227,215,192,158,114,111,139,132,152,85,70,101,183,232,188,135,81,70,158,248,251,228,230,252,216,253,240,254,242,248,238,213,170,94,117,129,117,151,232,223,160,34,8,5,15,151,205,226,224,210,209,207,219,200,200,232,226,238,210,224,222,195,215,223,217,217,231,220,200,234,219,215,215,207,197,209,207,203,206,204,196,193,210,210,195,234,231,217,200,212,222,244,208,213,221,207,221,212,230,215,226,187,173,209,209,219,229,232,245,238,239,255,190,129,106,43,19,28,128,220,208,213,222,222,242,216,219,216,199,234,204,230,207,222,197,223,244,216,226,224,238,203,221,226,210,204,208,203,225,202,217,208,232,221,232,211,214,209,229,212,202,207,189,220,198,222,205,206,231,203,195,214,219,192,208,227,115,5,20,6,22,11,16,1,0,4,15,4,7,0,0,0,10,4,22,26,18,2,25,31,2,211,227,224,240,236,218,235,227,229,226,225,213,224,234,220,206,222,219,240,214,241,230,224,240,223,232,240,231,214,229,234,235,234,229,241,213,245,225,240,245,226,233,236,235,217,226,207,241,215,227,205,229,237,243,222,229,209,241,234,206,242,244,232,249,243,223,212,228,229,234,239,249,245,214,237,230,224,228,243,240,219,236,238,234,223,226,223,235,251,236,245,216,235,229,233,239,247,230,218,246,225,224,217,229,244,238,221,235,253,202,226,230,231,222,235,230,213,237,242,207,247,223,233,234,222,247,244,248,202,233,237,246,236,250,227,234,233,222,235,242,241,223,229,227,249,240,248,240,235,254,223,239,249,243,252,244,235,234,249,250,243,239,239,227,243,234,234,246,235,246,239,242,241,238,
244,243,247,214,233,248,250,210,249,240,220,237,235,235,219,225,239,229,233,246,244,223,232,238,240,243,243,245,246,249,231,238,255,246,242,233,243,245,222,250,247,226,224,221,247,230,241,245,241,227,249,255,236,252,224,245,248,234,222,244,227,239,238,243,255,241,236,229,236,222,223,243,249,222,235,242,255,234,242,252,247,246,252,251,226,250,247,248,253,251,240,237,241,218,219,243,241,230,233,244,235,234,253,219,248,254,215,230,250,251,247,238,246,251,229,240,244,236,228,241,232,254,243,236,226,245,233,242,224,217,248,225,247,243,219,249,251,250,231,235,235,235,223,232,239,234,227,208,246,207,205,236,243,181,230,220,158,206,223,210,238,229,249,233,217,250,189,194,227,251,248,134,133,145,117,23,138,250,217,252,244,204,211,185,207,250,235,228,105,91,109,138,98,36,23,83,80,147,233,237,255,251,253,238,202,112,73,72,205,233,237,238,210,197,167,130,135,118,70,67,125,68,51,68,49,31,22,41,30,44,81,58,39,33,52,80,79,78,80,151,154,171,202,144,164,64,60,40,77,237,242,232,252,242,254,218,167,138,100,29,43,71,27,57,17,180,220,222,204,238,243,245,219,242,245,249,249,212,224,246,247,238,251,251,253,229,209,198,101,18,138,90,18,46,23,26,60,66,54,60,75,32,37,29,23,24,90,103,38,51,28,152,251,248,255,249,250,240,239,251,237,244,246,229,234,215,182,253,236,255,233,229,244,205,228,247,231,217,245,242,234,192,156,183,209,217,193,222,207,170,78,80,37,16,37,114,91,191,246,143,141,58,117,228,239,237,232,246,255,228,253,232,224,222,248,249,179,173,150,137,129,118,141,123,188,114,62,25,5,64,140,183,227,222,211,250,214,226,226,211,220,208,223,212,217,204,211,221,222,206,206,240,214,216,212,223,209,226,207,206,217,222,196,228,234,206,195,232,229,198,226,213,219,194,217,224,226,228,220,218,219,202,201,212,228,199,154,212,200,217,207,218,246,249,254,233,201,120,76,37,17,87,196,224,247,227,220,221,224,197,242,226,228,227,206,221,218,221,217,220,194,232,207,239,213,213,223,238,229,219,204,205,203,212,209,218,214,202,212,224,213,225,195,224,201,214,226,202,192,214,213,211,206,221,227,205,210,208,221
,182,221,131,27,6,1,8,22,1,17,3,10,21,27,4,1,1,7,18,0,11,22,0,8,24,11,7,224,211,219,231,225,229,224,230,248,219,241,216,251,228,237,212,238,236,239,219,240,243,239,245,235,236,244,214,221,239,221,233,238,242,226,233,228,226,232,237,229,228,247,242,244,242,231,231,237,240,241,231,223,214,232,240,242,227,242,217,244,238,240,238,243,239,231,244,231,238,241,216,233,231,249,247,250,223,238,218,255,225,225,199,232,224,228,238,219,237,247,235,238,224,238,248,223,235,221,230,237,232,251,236,247,213,236,251,232,229,237,239,239,236,252,247,227,244,226,233,223,237,242,205,244,240,241,230,246,241,230,223,234,226,240,237,237,247,235,248,236,221,239,245,224,252,247,230,250,242,240,246,254,225,235,243,208,235,221,250,227,253,245,242,238,229,242,246,239,231,242,240,246,236,245,227,231,248,229,250,229,235,235,240,250,238,240,216,246,215,222,226,234,252,231,231,242,224,216,238,244,234,239,253,250,222,242,227,216,247,228,234,246,242,238,239,244,212,244,218,245,215,220,235,234,232,237,251,237,247,253,225,248,234,255,241,223,247,245,233,242,243,233,248,238,250,232,248,239,208,240,237,221,220,235,224,231,231,239,229,245,252,230,246,231,230,236,208,245,255,212,245,232,236,230,252,223,225,242,247,237,246,246,227,233,249,234,239,241,247,248,221,227,217,246,240,250,245,231,230,242,238,244,250,254,231,245,222,235,236,252,249,250,242,230,234,232,249,236,253,237,197,239,235,235,239,241,201,209,206,200,233,250,206,241,198,191,243,239,233,183,189,246,240,253,155,117,138,120,47,141,243,235,254,217,229,229,250,230,241,170,76,64,30,101,133,122,44,30,0,51,188,255,247,223,237,248,227,209,177,155,173,60,44,61,61,21,40,54,18,12,29,28,15,89,43,25,9,21,28,6,10,39,47,41,15,15,40,36,23,16,69,13,19,44,37,29,38,30,56,45,47,37,154,240,230,255,255,251,203,134,137,127,68,39,83,62,104,63,162,255,197,221,239,242,254,254,244,244,238,237,192,198,236,218,254,239,228,235,251,206,163,42,32,137,68,19,50,25,28,48,57,43,44,29,24,50,41,14,18,38,91,78,28,30,71,229,232,227,252,234,248,236,234,230,233,235,255,228,217,219,216,
236,234,222,225,239,217,224,244,245,205,212,204,237,219,154,155,226,227,224,226,217,219,136,104,72,25,18,26,74,196,234,149,89,68,177,245,239,238,232,255,244,249,242,219,210,226,249,238,173,168,142,156,133,137,140,35,96,83,31,33,25,181,231,234,212,195,222,212,209,192,197,195,210,227,227,207,217,225,243,209,206,217,207,213,205,217,219,219,216,226,207,211,217,209,203,197,230,211,219,213,215,215,210,228,214,218,230,213,227,232,207,206,249,234,214,212,232,182,151,223,222,234,234,242,254,235,199,130,99,49,6,29,163,225,232,240,250,232,239,232,215,209,239,209,190,221,228,225,209,194,225,210,202,196,230,211,227,225,209,210,221,232,224,219,211,240,209,219,213,215,231,218,224,219,214,210,208,205,224,227,202,200,231,198,234,234,200,214,206,206,204,208,196,110,13,7,4,9,21,17,7,9,27,4,25,2,3,21,0,20,6,7,9,3,26,28,32,22,237,239,237,238,228,223,246,205,244,193,244,216,207,213,242,234,238,251,223,246,242,226,243,238,252,223,233,238,223,221,231,232,241,235,228,224,240,221,243,227,240,237,227,234,213,242,226,249,252,245,228,229,223,243,233,206,237,236,230,236,249,228,226,228,235,244,215,237,249,255,228,252,210,241,242,218,229,208,247,232,239,254,230,220,215,233,227,227,246,221,230,209,222,246,239,240,240,219,196,223,252,252,208,249,237,233,217,245,224,226,245,243,225,251,216,250,235,234,235,251,208,229,232,238,235,235,245,244,241,237,213,246,249,254,243,209,243,226,243,253,234,251,248,237,255,244,235,250,229,239,223,238,245,251,240,252,243,249,251,249,233,235,234,254,241,224,245,232,229,225,244,252,214,231,255,236,218,250,242,240,236,221,238,228,232,229,236,217,238,227,253,240,249,221,255,241,248,235,229,238,225,237,251,243,244,234,248,239,238,233,228,244,252,224,232,216,228,243,223,232,241,242,251,236,242,246,233,244,241,232,238,241,227,212,235,232,245,214,205,232,243,236,250,241,245,253,251,244,244,245,252,247,237,246,242,239,244,243,244,242,249,218,228,239,235,238,230,205,233,235,253,199,222,238,241,226,211,221,247,244,229,232,244,232,250,247,225,220,223,196,219,237,250,251,220,238
,247,236,220,232,211,248,216,242,238,243,234,226,248,246,234,230,251,227,226,237,245,239,226,244,244,235,247,227,252,239,236,198,236,227,193,236,212,171,225,228,215,246,229,247,188,231,255,250,239,178,145,139,119,46,152,223,248,249,246,212,234,245,222,117,72,36,44,62,101,170,182,125,71,27,25,118,182,168,113,119,52,56,52,100,135,137,96,11,22,10,34,40,75,34,12,18,23,51,48,43,18,12,5,4,16,33,26,12,7,13,23,22,27,22,25,17,0,31,28,47,35,19,44,46,18,23,42,36,25,29,62,138,52,52,78,162,132,84,164,205,155,107,91,81,135,204,205,251,228,235,236,216,242,245,233,198,207,252,227,254,242,222,245,246,181,160,51,8,114,56,21,115,31,32,40,35,60,38,35,12,23,60,36,29,45,86,22,48,30,73,235,243,252,247,230,216,213,238,239,244,218,251,245,238,207,211,245,222,235,234,252,208,232,230,224,235,206,234,242,204,176,179,203,209,211,224,186,127,101,129,189,111,32,24,32,157,236,142,85,102,228,252,246,222,244,252,233,253,250,228,236,234,253,217,171,155,165,161,152,157,118,44,129,122,83,65,78,204,229,255,217,191,209,221,201,193,221,206,207,218,188,200,199,220,223,235,205,230,227,197,242,188,236,217,193,216,233,187,245,218,195,202,193,210,184,218,217,220,223,235,226,203,180,193,234,203,231,242,218,206,219,222,226,184,176,212,230,242,237,245,223,185,107,90,30,11,94,203,246,241,246,225,216,206,228,217,206,232,218,199,220,219,199,217,211,214,212,223,216,218,229,202,204,220,214,216,207,223,219,232,201,201,222,226,220,237,220,234,219,191,211,192,206,199,211,211,217,191,210,219,208,191,212,216,211,184,211,226,214,91,4,20,0,23,11,27,6,35,21,24,26,27,0,16,5,15,16,20,20,4,33,37,50,25,219,220,217,248,228,219,224,233,228,221,211,213,236,234,221,199,235,238,230,241,234,221,238,234,232,227,231,220,219,215,216,220,236,246,255,240,234,248,255,225,239,247,250,221,248,249,232,233,236,237,229,243,251,234,227,247,213,239,230,230,233,232,243,241,230,240,230,239,233,238,235,224,220,211,235,249,245,224,229,243,231,236,212,249,231,233,249,223,226,240,225,228,236,243,228,244,222,224,244,243,251,234,234,243,215,244,237,229,239
,240,240,209,237,235,222,233,231,242,240,231,250,231,223,253,235,235,241,244,232,249,239,247,248,250,249,243,250,246,253,239,247,233,244,227,253,246,244,226,232,247,247,252,255,231,246,247,225,231,245,250,227,237,226,229,245,243,235,229,253,253,249,234,240,248,242,243,253,254,234,246,237,242,241,250,234,239,228,211,216,242,216,227,246,246,230,251,249,239,245,231,249,228,254,243,239,229,231,238,239,245,217,235,245,244,244,240,222,241,238,229,232,245,246,232,239,229,227,242,231,236,228,247,244,244,239,237,206,224,244,234,235,239,234,249,247,251,245,250,239,243,236,249,246,238,238,232,229,240,228,211,240,220,245,207,226,253,203,227,253,241,245,204,233,246,240,239,233,237,246,237,244,236,247,235,217,244,240,233,223,238,200,240,236,241,247,244,227,241,203,221,223,245,221,244,243,245,223,243,222,251,248,229,235,232,245,239,242,197,228,245,232,216,241,222,255,246,245,193,248,202,189,199,204,180,238,248,245,250,246,247,158,216,236,255,252,153,121,98,101,60,147,231,241,252,240,223,245,245,178,101,65,52,33,78,166,234,228,190,165,113,52,52,15,46,9,12,52,24,38,55,2,79,124,88,54,29,25,28,98,31,32,45,21,16,62,69,29,27,8,42,5,28,40,11,25,29,25,34,26,11,10,12,14,7,8,23,21,14,40,18,20,29,14,8,34,24,42,15,12,34,48,105,75,92,178,243,190,96,65,38,82,152,209,229,246,243,235,239,250,252,242,156,223,230,239,246,249,241,237,250,173,153,36,2,125,54,51,120,32,60,38,34,25,60,47,26,57,37,26,41,73,104,30,46,25,45,230,248,255,233,245,243,231,220,242,243,241,234,243,229,232,224,226,225,226,237,249,228,196,241,232,227,201,215,223,226,178,168,190,213,217,218,170,103,38,11,106,144,107,77,68,92,171,140,101,192,255,255,240,242,253,249,245,236,217,245,242,245,249,196,142,144,154,137,157,181,128,70,170,183,109,77,31,101,183,214,208,221,201,217,222,213,212,225,225,213,214,211,229,231,207,217,207,206,224,228,210,189,221,202,201,212,196,215,200,217,242,205,219,218,210,227,214,216,217,214,235,191,204,188,202,202,237,202,216,198,207,207,188,170,220,230,243,241,242,189,138,105,51,14,43,167,235,230,232,240,235
,222,233,229,230,231,216,201,216,230,210,233,226,240,223,238,201,231,196,212,220,235,233,217,225,209,210,198,227,238,200,196,203,214,229,220,203,214,229,225,209,230,210,218,220,212,184,230,228,217,201,225,215,213,206,203,213,217,181,132,0,14,13,5,4,14,24,7,11,1,31,0,2,7,9,15,25,4,22,14,9,34,18,5,236,226,229,240,207,208,219,231,230,234,231,217,222,247,240,221,250,225,242,242,225,247,242,241,206,235,241,247,223,228,227,250,220,228,243,215,231,222,250,232,236,242,230,219,232,220,234,227,253,231,246,225,217,242,255,252,249,248,244,237,232,239,242,253,243,234,225,243,231,240,243,240,243,242,246,251,231,250,242,247,245,223,243,249,222,242,223,225,230,235,243,194,245,226,228,237,236,211,250,211,246,235,251,216,229,232,245,246,240,234,236,243,234,225,242,244,231,242,247,218,251,239,232,244,243,249,244,231,245,229,240,244,245,227,232,251,224,244,247,236,236,252,255,238,243,252,238,247,231,245,230,244,253,242,242,231,243,246,249,247,248,224,240,231,228,218,224,234,233,223,254,241,229,247,237,249,219,251,231,218,236,238,230,234,231,249,228,214,216,218,243,253,254,239,239,242,224,242,237,236,225,231,231,251,219,245,245,247,251,240,249,252,232,226,238,246,234,200,246,236,207,223,236,245,226,241,223,204,222,219,247,242,244,252,245,225,247,232,238,253,232,244,220,230,252,251,255,231,243,253,243,223,246,246,235,236,240,250,230,214,240,243,248,240,238,228,244,226,215,236,234,244,226,238,233,242,221,234,239,247,246,218,250,249,251,229,230,244,241,243,249,228,239,230,242,246,249,234,222,239,226,229,230,245,224,205,218,248,233,239,207,220,254,228,199,247,224,215,232,242,238,254,231,220,236,249,204,192,234,183,204,239,245,244,251,250,229,240,243,225,186,231,239,250,228,123,106,106,69,21,124,243,229,237,217,225,247,217,194,74,59,19,5,22,89,185,167,141,159,137,62,21,37,59,60,22,60,23,16,33,11,12,40,135,120,92,54,43,87,55,24,4,19,48,67,47,26,19,36,3,6,33,25,25,34,34,2,20,16,24,35,38,16,18,12,2,23,7,8,45,16,26,26,26,18,21,44,29,26,19,78,107,85,36,40,81,80,88,92,13,67,189,242,243,249,239,222
,239,249,236,217,184,253,248,247,231,249,241,253,252,124,102,50,15,144,47,46,111,36,62,7,26,24,20,63,45,49,56,58,114,134,122,50,48,16,56,206,233,247,248,220,244,250,221,239,239,242,235,239,242,229,194,205,229,244,243,236,239,210,244,242,201,221,209,255,213,162,180,173,228,226,204,227,198,148,44,36,82,155,137,139,94,121,86,148,220,254,249,252,229,251,255,241,212,227,235,227,226,239,159,159,134,109,127,125,149,96,89,176,165,116,64,11,2,67,179,193,221,233,228,198,222,213,201,217,214,215,208,236,225,231,215,205,230,193,233,213,219,203,218,230,221,219,216,233,206,238,212,217,207,217,208,243,229,208,237,202,226,210,218,217,210,229,233,203,218,234,219,204,203,213,229,253,244,149,104,94,2,30,42,146,230,229,237,235,207,233,238,196,225,218,212,228,195,215,215,219,217,230,228,200,220,208,224,231,221,199,224,212,220,222,228,209,199,228,199,204,202,238,213,223,216,214,208,209,215,236,210,229,196,230,224,210,214,214,198,207,216,209,211,200,225,228,211,221,111,21,19,13,9,4,10,8,17,18,36,19,17,7,0,24,21,18,0,28,24,10,14,11,29,231,221,225,230,215,236,219,244,229,220,233,228,229,239,239,250,235,214,248,217,242,220,226,228,243,239,247,234,247,246,245,211,233,232,236,217,210,240,252,240,237,241,233,231,243,240,245,244,237,240,253,255,246,252,233,222,230,236,242,236,223,254,246,241,243,241,255,219,234,254,234,250,233,248,248,224,230,240,224,228,245,230,246,242,223,249,252,237,230,233,216,231,238,233,233,206,198,237,240,239,246,236,223,226,246,213,244,233,224,240,242,245,248,234,252,225,226,244,221,232,250,248,236,233,248,255,246,246,231,233,230,236,244,244,244,226,239,248,255,242,244,253,225,237,251,236,244,218,234,230,240,247,223,254,246,239,228,253,230,247,245,239,226,247,225,240,250,242,240,227,237,249,244,236,244,245,207,216,240,221,225,228,212,228,227,214,230,220,230,213,219,227,235,234,233,238,239,244,248,240,240,206,238,234,230,237,242,255,218,236,232,249,244,236,250,224,233,236,222,241,245,248,250,232,218,250,242,236,252,234,240,250,230,222,242,238,246,252,214,229,252,234,232,24
3,242,255,252,239,243,225,253,238,236,236,203,245,249,247,243,237,233,221,233,229,246,239,220,232,246,248,252,236,237,238,237,245,211,237,247,246,215,246,219,211,226,247,235,226,216,255,248,232,245,244,235,225,242,219,239,233,242,242,244,246,234,239,240,237,238,251,242,231,232,243,253,238,223,231,246,227,255,236,224,242,249,255,216,214,224,189,252,245,246,226,227,217,231,241,255,229,164,238,236,251,227,101,113,98,83,12,95,240,238,255,248,212,254,255,208,88,27,24,36,80,74,43,32,54,136,160,107,33,20,47,36,34,79,13,33,14,39,48,16,30,88,143,117,105,113,38,15,2,34,2,34,67,36,12,19,11,37,23,35,17,8,6,18,19,14,10,30,22,24,14,13,35,7,22,36,17,13,10,35,35,24,4,28,17,21,59,58,68,57,63,53,33,35,60,97,30,22,189,244,249,253,217,243,241,233,248,193,194,252,238,238,228,243,223,247,231,93,114,35,8,140,43,68,65,11,23,24,33,40,37,43,52,62,86,147,170,164,148,98,35,32,45,188,241,250,228,232,215,245,238,241,227,217,235,229,225,239,195,218,247,222,245,213,241,251,224,229,248,200,221,239,219,196,158,143,234,198,237,224,216,200,132,102,154,154,149,116,52,15,50,192,239,250,254,253,242,249,242,235,227,227,244,240,253,204,140,152,105,82,134,112,147,95,97,192,148,68,61,3,45,173,203,193,223,229,215,196,218,201,196,214,226,213,199,208,219,239,232,198,210,183,232,216,194,234,229,215,213,226,229,225,213,204,203,211,212,237,215,201,234,243,200,227,219,225,216,236,207,197,224,192,244,204,225,179,226,252,242,157,127,71,13,10,3,82,174,238,239,238,213,242,204,221,244,216,218,210,223,249,237,208,238,235,223,227,230,233,215,212,228,207,243,224,221,231,238,201,206,229,233,224,204,210,219,215,214,217,236,222,213,217,190,210,217,190,205,210,237,235,233,209,228,211,218,224,222,194,199,234,206,223,133,1,14,1,19,14,6,18,4,7,12,20,13,5,0,3,15,6,7,23,12,3,7,4,15,219,218,239,228,237,211,218,251,236,253,227,227,225,210,223,239,227,237,193,230,232,243,231,245,229,249,238,246,246,245,237,220,243,236,234,238,229,236,238,224,244,232,250,244,226,226,226,228,236,234,252,236,239,240,246,251,236,239,230,229,223,246,222,25
4,240,238,233,239,240,228,254,231,231,239,231,245,235,251,240,255,225,255,207,236,228,239,246,228,246,228,233,234,237,235,221,227,235,225,251,225,239,210,253,224,243,228,241,236,240,228,248,237,242,243,247,239,251,224,238,210,239,238,241,221,235,247,206,239,237,241,247,238,244,233,233,248,229,243,237,242,251,247,245,248,227,224,251,233,232,249,245,249,245,233,241,220,226,243,223,249,255,255,233,223,246,238,238,246,211,221,226,232,228,254,234,243,230,239,231,233,236,248,231,243,231,235,243,196,242,251,234,238,241,233,237,255,249,241,231,214,222,249,238,217,248,217,252,227,247,239,224,209,241,225,227,218,229,244,241,233,245,241,234,228,236,230,236,246,239,249,239,230,241,252,226,232,236,233,246,238,234,238,221,244,239,254,242,245,223,238,239,246,231,237,237,229,216,236,254,239,248,232,246,247,236,243,213,238,237,222,246,231,243,246,235,208,234,251,250,233,236,238,239,247,203,229,215,247,217,245,224,224,231,255,217,226,226,228,234,233,250,230,252,220,233,239,243,249,238,240,237,250,233,232,245,232,245,208,253,227,227,226,211,250,231,246,208,213,249,210,239,206,182,181,208,222,235,246,250,255,173,239,232,255,200,80,94,76,68,10,97,237,249,247,233,229,253,252,184,84,53,81,120,99,46,49,12,42,66,153,181,128,63,76,43,64,76,56,53,31,43,46,29,31,7,33,134,169,152,48,5,21,14,12,42,81,23,21,31,29,19,21,12,25,7,7,19,14,16,24,8,12,29,21,27,39,13,8,34,20,22,39,15,34,37,13,30,18,40,63,98,85,64,40,17,32,42,51,61,27,51,210,238,255,250,205,234,236,248,254,185,223,239,219,227,243,226,246,247,179,89,129,47,35,142,8,86,76,18,15,35,16,26,24,18,53,81,119,128,133,102,95,60,50,4,28,200,225,255,214,210,250,222,255,247,238,246,233,207,252,233,202,201,255,245,254,247,250,255,245,243,233,191,214,205,210,201,151,177,176,206,208,227,245,224,134,137,184,169,144,131,61,52,138,241,245,242,249,249,234,242,230,212,238,248,237,236,228,183,155,132,129,129,129,107,147,119,139,217,125,71,64,4,114,222,242,249,204,216,217,210,217,216,221,213,214,220,224,243,217,231,232,232,223,223,239,216,239,212,232,219,220,2
12,226,221,209,206,224,204,220,239,206,210,208,216,236,235,211,231,220,232,226,226,222,222,212,228,217,200,209,208,135,123,39,48,55,136,164,197,227,241,240,219,212,208,206,206,221,226,224,231,223,232,239,204,220,219,241,217,225,200,221,200,213,214,216,211,208,218,232,212,237,204,202,216,213,208,201,223,216,234,214,209,223,207,222,215,194,233,217,221,214,206,222,183,211,194,206,203,217,200,240,196,211,205,117,20,3,18,13,7,2,15,0,30,20,10,25,0,5,5,2,6,24,9,0,23,10,9,17,211,229,219,228,241,220,238,238,232,241,246,242,228,255,209,225,215,240,222,232,240,236,245,231,237,237,241,245,244,243,225,202,246,255,224,223,232,243,232,240,243,217,228,234,238,225,253,223,245,253,243,246,226,228,227,249,233,233,229,244,241,252,244,231,235,229,229,237,224,249,244,235,223,229,234,241,234,235,221,228,239,230,250,238,219,233,201,254,201,240,237,237,242,253,228,232,253,220,225,226,227,237,222,231,241,222,207,232,227,220,224,240,251,240,219,236,243,212,226,217,215,220,234,253,217,227,249,245,247,254,234,248,236,253,237,249,241,220,239,252,255,250,221,237,245,228,243,237,240,232,232,253,238,229,246,232,229,255,247,250,228,236,222,249,253,231,242,240,229,244,240,222,216,247,213,237,251,240,241,225,237,237,214,218,237,251,228,218,239,217,222,238,242,243,222,236,231,230,229,241,232,219,243,243,230,233,215,241,255,253,233,222,247,229,200,194,236,244,241,245,230,239,229,242,229,222,250,242,231,230,226,236,228,241,247,238,236,254,242,239,231,213,232,252,233,231,226,232,229,235,250,243,246,232,230,233,247,247,222,220,248,237,239,247,227,214,230,242,243,241,219,236,228,251,216,221,255,242,221,225,237,229,230,246,244,233,227,250,250,247,218,231,206,242,246,235,237,239,253,223,236,237,226,203,227,220,243,236,225,243,243,229,237,207,240,228,244,247,219,242,235,213,239,239,234,247,211,244,210,162,203,222,229,247,241,250,245,245,248,246,184,253,245,250,188,94,69,66,54,12,109,230,237,247,234,242,232,255,224,80,65,39,65,44,44,67,44,46,55,73,189,171,131,111,41,55,69,68,58,38,43,50,27,48,42,40,14,65,142,90
,85,29,14,18,52,76,32,26,32,3,22,1,25,13,2,23,5,27,42,17,9,33,30,39,18,28,12,46,11,18,25,24,6,40,1,6,12,31,58,93,93,53,29,56,34,35,73,48,55,23,80,219,236,254,255,240,235,234,243,231,192,231,251,243,249,235,238,253,244,189,77,97,69,68,165,13,61,73,25,35,9,30,29,9,76,50,66,101,114,88,80,43,52,92,20,100,207,230,248,255,229,233,230,223,246,244,238,240,253,230,251,243,218,243,252,249,255,254,254,244,233,253,210,190,225,210,215,172,168,207,224,223,191,201,187,185,184,199,157,141,136,82,124,211,242,225,255,233,240,218,237,210,250,243,235,228,249,243,162,161,159,145,163,127,145,133,166,169,133,68,60,34,12,144,244,229,252,214,222,222,212,224,221,186,232,236,200,210,224,225,252,255,241,231,229,239,252,203,236,239,198,225,233,216,221,231,213,213,202,222,223,236,228,241,239,241,251,237,250,250,239,224,234,229,232,252,248,215,196,162,132,80,30,19,88,191,234,216,232,226,238,222,221,216,226,196,215,229,224,209,210,216,209,222,210,237,193,224,220,197,214,232,228,217,227,239,237,241,198,219,197,205,203,221,211,222,222,206,210,243,199,213,231,224,192,206,191,220,243,193,212,215,206,211,209,205,220,212,202,232,200,203,220,186,226,120,7,15,3,10,5,13,14,13,16,20,3,13,3,11,11,3,19,9,32,25,18,14,13,1,241,226,211,238,204,203,208,225,235,218,226,229,244,224,236,239,243,218,237,236,227,242,237,253,219,238,244,237,228,236,250,235,225,232,222,254,220,234,243,236,246,224,232,246,226,246,228,243,231,234,243,235,232,224,248,230,254,248,243,238,213,228,238,239,247,246,238,235,250,244,251,237,224,219,243,249,231,249,215,240,240,244,252,232,235,238,222,224,227,235,240,227,243,235,227,212,241,236,234,223,231,246,243,240,222,250,228,254,233,226,231,209,242,246,231,223,237,232,233,240,251,207,246,226,241,235,228,247,235,240,236,249,247,215,225,219,225,240,218,241,229,231,240,234,247,243,230,241,221,224,215,243,237,205,242,239,242,252,225,245,245,248,248,243,249,235,244,241,232,245,241,237,224,223,240,214,241,227,230,239,238,254,251,227,252,223,243,233,228,249,209,239,207,210,242,243,243,242,222,231,231
,247,234,250,230,232,254,232,239,230,245,220,240,229,226,227,238,245,237,232,238,227,231,239,239,239,234,232,234,249,235,242,239,254,244,246,215,255,217,240,246,230,223,250,255,218,250,203,233,255,246,244,244,235,254,247,218,253,236,249,217,241,231,233,249,233,216,223,245,216,233,242,238,229,220,247,219,230,231,221,236,246,236,235,231,232,227,208,233,226,231,234,230,231,242,249,225,254,237,243,228,234,243,252,242,239,254,225,215,241,249,239,232,222,247,252,241,224,237,230,234,228,247,246,250,244,210,193,185,181,233,233,242,229,249,221,204,212,251,227,198,229,245,240,216,76,82,106,89,3,97,231,233,248,210,228,229,241,234,84,17,23,5,40,68,55,31,35,34,68,113,171,167,115,14,22,45,47,53,41,64,56,14,15,18,47,38,25,58,110,167,114,30,29,16,78,45,11,23,16,4,32,14,22,6,40,18,11,7,0,17,20,39,54,22,25,17,42,16,42,12,25,43,47,39,44,9,36,80,83,99,60,65,72,35,55,111,74,68,15,103,227,253,242,253,243,252,220,250,237,166,235,247,234,228,231,234,224,252,186,81,83,38,50,107,12,102,64,21,18,1,19,7,48,15,27,30,58,92,59,42,37,47,28,18,117,246,248,241,236,224,249,237,242,251,227,255,242,246,243,254,232,198,216,217,245,138,171,162,153,240,254,233,215,203,246,216,186,174,190,225,181,100,88,86,162,135,119,120,91,66,111,205,227,243,249,250,255,255,250,239,225,241,228,254,231,222,176,157,151,180,143,144,142,153,193,148,114,84,41,69,9,39,213,249,228,252,236,226,206,212,226,206,190,211,215,209,243,228,246,250,239,255,249,245,243,250,254,231,235,207,204,209,225,223,235,218,225,234,220,230,214,231,236,246,233,236,235,229,227,231,231,235,231,252,245,236,146,156,96,43,27,37,124,246,241,234,209,218,242,242,238,213,202,219,211,206,213,209,216,225,184,226,222,227,239,209,233,206,208,198,217,230,220,214,216,210,209,209,231,233,219,208,217,217,205,233,228,224,204,225,216,212,199,207,229,214,221,229,183,234,246,226,212,220,221,219,230,208,195,190,202,205,213,208,109,15,3,7,23,18,22,11,10,17,2,13,8,18,11,9,19,12,14,19,1,18,9,9,9,233,231,217,229,216,195,229,246,219,230,235,236,220,217,248,225,232,245,249,236,
235,237,243,231,240,246,236,224,248,241,242,252,248,226,234,230,249,247,227,235,239,229,238,225,248,246,226,241,240,223,242,251,225,240,207,222,246,249,246,249,241,250,239,238,239,217,240,239,250,239,217,235,227,242,232,234,226,231,210,255,236,251,251,240,241,247,234,218,228,246,244,223,235,214,233,234,232,222,233,214,232,243,255,243,246,242,228,213,235,244,240,242,245,243,225,248,239,247,238,230,242,230,248,206,224,237,232,236,251,240,224,240,247,236,243,237,232,229,254,237,249,237,253,211,224,247,250,238,224,216,222,236,211,237,241,240,255,236,254,251,224,229,240,229,229,236,254,234,222,247,238,234,247,230,237,230,238,233,237,241,211,226,207,239,213,238,234,252,216,235,211,226,241,232,241,246,252,248,250,233,246,215,233,237,248,241,237,231,242,215,246,219,240,238,223,239,234,228,230,252,232,225,211,235,226,248,244,242,231,229,229,234,233,205,223,253,230,241,238,248,254,246,247,244,230,253,241,247,224,255,243,248,229,251,229,240,239,249,222,241,234,246,224,252,212,218,232,236,235,241,220,224,240,212,208,222,243,244,232,228,244,240,237,245,225,239,230,214,248,211,222,246,228,222,224,235,246,240,242,247,247,223,243,223,246,246,229,247,245,252,237,236,225,245,239,212,250,207,238,237,243,244,232,242,241,239,209,218,214,216,246,241,208,251,215,193,233,246,233,201,211,242,234,251,192,161,148,128,130,53,160,234,234,238,243,227,247,249,242,132,46,1,1,49,75,33,47,46,37,17,50,93,193,184,60,40,13,33,53,44,30,9,5,23,24,17,2,44,209,231,242,238,207,136,135,130,23,16,2,0,29,14,9,19,24,33,19,32,14,16,21,49,24,36,43,44,25,35,35,35,24,26,48,62,47,32,60,54,89,89,89,61,55,50,53,61,79,110,40,58,162,245,248,239,237,229,237,228,254,190,173,234,244,213,217,249,239,248,242,164,69,68,73,83,88,37,83,47,8,38,6,20,28,10,31,25,45,46,84,102,77,50,92,42,33,140,221,236,250,251,237,247,254,253,252,250,240,235,230,239,255,192,128,61,14,85,30,31,89,81,189,244,250,247,244,252,242,175,184,181,185,225,97,18,24,17,58,44,33,38,31,107,239,232,255,249,252,248,255,246,233,234,245,255,238,247,231,168,139,169,
171,183,113,152,136,157,145,49,25,35,39,17,63,233,249,227,239,221,230,226,217,211,230,212,217,243,212,218,204,154,144,155,207,231,245,252,242,255,215,219,242,228,212,214,226,205,211,235,228,223,206,214,218,248,229,189,103,97,104,199,223,214,249,248,252,236,179,119,107,41,2,70,220,241,252,208,173,222,238,243,225,244,217,217,200,214,205,215,218,230,233,221,234,206,220,230,218,236,211,229,233,235,241,220,210,207,231,219,209,231,212,202,207,215,213,229,204,194,211,202,206,226,234,203,207,218,218,248,231,202,215,220,222,189,209,212,218,204,229,214,200,203,212,214,215,130,26,9,9,0,10,25,16,20,35,37,17,22,11,5,1,26,18,7,5,8,13,23,28,9,236,238,224,239,224,243,240,224,232,239,242,234,244,213,222,246,225,229,246,246,236,208,248,242,242,238,236,237,246,234,223,237,225,231,241,245,228,233,241,249,224,253,238,247,226,226,230,229,251,234,220,236,248,232,220,255,231,228,225,215,224,242,238,224,240,247,243,214,234,235,242,217,230,224,227,229,218,233,240,241,240,237,227,240,251,237,227,209,221,238,215,245,240,239,217,233,244,238,230,238,222,244,227,218,226,224,219,226,245,245,225,225,235,217,250,244,224,253,229,242,233,249,247,255,228,252,232,253,225,242,235,245,221,243,242,250,234,243,236,237,212,241,242,209,235,215,242,233,247,232,222,219,245,240,215,238,252,250,241,255,249,252,229,254,252,245,247,236,242,243,244,241,219,237,202,233,236,229,246,202,209,239,216,233,234,229,240,213,222,237,222,236,238,215,241,229,242,248,243,237,229,254,239,253,230,235,230,253,242,241,213,240,228,213,238,237,224,241,214,239,224,211,223,219,228,215,226,241,249,240,238,218,239,240,225,245,232,238,239,230,231,233,252,251,253,240,245,234,232,219,234,231,250,233,237,245,246,230,241,242,251,230,238,239,228,242,216,227,230,243,248,245,218,204,228,237,211,237,221,243,233,241,223,241,242,225,201,227,230,246,249,232,234,246,215,240,241,242,247,253,233,235,230,244,245,233,250,248,233,243,222,219,250,223,238,245,236,230,234,229,228,223,247,247,242,210,211,226,231,223,217,213,188,212,199,197,230,253,232,231,188,
252,251,247,230,149,161,138,104,50,168,240,224,249,226,229,252,250,239,122,48,4,17,70,75,66,50,50,37,22,53,51,119,185,114,62,27,45,39,44,39,16,13,27,14,38,28,187,240,246,239,150,132,151,164,184,63,14,19,5,22,21,34,8,35,13,16,5,25,28,41,47,42,63,87,68,64,66,60,61,53,48,43,45,46,15,77,60,62,93,75,80,89,75,45,58,90,53,18,67,162,248,237,255,237,250,229,229,250,219,173,228,232,250,241,231,229,255,231,170,78,58,52,41,81,25,85,35,9,24,8,46,36,20,11,36,30,89,173,182,141,75,91,88,30,128,208,233,247,244,240,236,213,233,246,254,247,222,237,242,228,185,99,25,0,4,9,73,92,6,139,253,247,238,250,250,243,198,178,153,216,230,162,66,56,47,44,45,50,30,20,72,149,229,242,251,255,223,255,226,253,234,254,249,223,228,201,153,156,156,172,123,135,146,146,146,96,27,27,9,25,22,106,224,243,234,241,229,229,220,227,209,207,219,211,208,225,246,173,86,80,43,11,122,109,144,209,229,219,230,226,219,232,247,218,205,216,236,219,220,207,204,217,234,189,93,45,22,0,75,198,238,253,239,215,171,92,78,30,16,153,236,250,252,232,207,208,226,224,240,212,215,208,206,231,212,207,205,209,225,224,193,233,199,220,215,204,193,221,220,223,216,235,205,189,216,225,226,223,230,206,201,215,230,227,219,229,230,224,211,217,221,228,233,198,205,231,209,205,209,208,208,233,235,204,212,228,224,208,214,230,207,210,235,239,105,22,19,9,4,3,8,23,7,16,23,4,3,8,6,0,9,5,10,13,19,16,23,7,5,231,236,232,232,230,244,219,221,240,242,231,250,239,215,238,225,241,236,232,229,237,240,214,247,213,232,253,246,223,226,233,220,233,235,246,235,223,233,236,244,250,233,247,247,221,239,240,234,219,228,228,232,239,234,243,254,249,237,229,242,218,233,240,228,237,226,248,217,222,235,236,214,223,230,225,229,210,238,217,251,237,242,216,238,230,244,220,230,219,245,243,237,243,229,202,236,230,227,230,240,243,247,229,215,234,238,230,245,228,229,223,223,238,237,249,241,233,247,247,238,246,255,253,252,242,231,251,233,244,231,252,244,243,241,222,245,253,248,243,235,232,234,238,243,235,237,222,227,240,242,208,240,227,223,226,238,244,238,241,251,231,234,224,226,232,2
46,248,231,229,250,249,233,241,245,240,241,246,233,215,242,240,246,206,233,233,241,233,243,239,238,238,222,239,247,247,231,243,220,238,247,245,249,249,255,235,246,245,255,233,248,233,227,236,210,215,226,248,232,235,224,232,239,242,220,226,218,237,246,213,237,234,232,228,232,235,219,210,246,243,242,251,244,227,198,224,245,246,245,237,255,245,231,235,230,214,241,254,245,235,250,239,242,255,238,225,215,227,236,234,243,226,240,252,207,227,252,222,251,226,225,241,217,229,238,255,247,225,248,252,229,252,250,247,253,229,230,250,226,233,243,243,243,252,234,248,207,245,234,215,242,247,245,237,240,233,233,254,211,232,221,254,251,219,244,242,248,234,228,189,176,232,216,217,224,248,231,251,244,250,182,171,247,229,247,204,171,120,111,90,30,186,236,239,245,215,196,242,243,243,172,131,46,15,25,57,47,45,44,44,21,52,25,77,116,164,144,45,48,38,32,27,11,20,34,25,168,255,253,233,237,224,92,26,5,131,197,129,75,29,14,23,5,14,26,5,32,10,26,17,63,134,90,22,9,57,60,91,81,65,51,39,29,27,25,33,57,54,60,57,91,61,68,67,46,67,93,82,64,11,49,223,238,241,251,221,238,214,236,246,173,182,229,248,254,245,246,247,234,254,244,86,49,69,86,60,54,72,29,15,16,19,25,36,41,28,25,45,95,161,147,119,50,92,122,110,138,193,225,250,255,204,131,65,33,44,166,235,246,249,249,245,156,65,11,1,55,16,31,22,10,127,184,156,139,170,231,254,255,188,165,218,229,203,100,69,54,52,47,32,23,32,6,90,229,251,235,244,255,251,222,247,253,238,252,230,252,146,107,143,161,151,159,149,139,140,127,60,11,5,45,28,24,5,12,119,221,216,248,232,200,222,215,207,223,225,227,244,253,161,69,99,54,5,21,21,5,39,165,180,210,215,202,215,223,223,232,243,208,217,192,234,239,241,250,163,89,81,26,4,78,219,225,237,152,113,69,32,25,92,228,241,232,250,214,173,179,228,222,213,232,242,215,220,226,242,223,211,218,211,203,216,214,211,241,200,207,228,229,224,211,216,211,218,220,210,198,211,223,221,216,226,219,222,209,208,219,189,241,212,216,227,223,208,207,210,220,215,216,211,216,214,235,211,204,215,231,217,219,227,219,218,206,232,186,211,79,15,9,0,19,1,30,18,0,16
,28,26,17,7,0,5,2,7,1,1,12,7,7,21,22,235,222,249,229,224,250,230,228,218,236,229,198,236,229,228,228,234,230,204,234,245,232,244,242,233,239,235,224,224,237,218,226,229,235,242,238,241,249,235,242,232,230,248,250,224,238,220,234,223,246,219,241,226,230,236,217,243,240,245,240,235,232,244,228,239,247,227,217,228,222,253,239,230,236,225,216,230,245,219,236,227,230,239,246,236,252,229,213,239,226,237,243,230,244,229,228,222,245,217,247,238,245,212,240,226,217,237,215,251,249,242,237,243,231,216,233,219,233,238,249,209,227,224,240,237,239,240,229,233,233,231,239,239,240,225,242,249,231,216,236,232,225,216,228,230,242,235,243,230,236,218,218,232,237,218,242,238,234,241,253,235,225,233,223,213,230,222,237,229,238,255,226,239,235,255,231,216,234,206,219,222,213,249,224,229,225,242,236,208,223,228,250,218,253,241,247,236,242,234,245,238,242,240,245,249,252,227,245,221,245,235,236,239,216,220,234,232,248,238,231,228,202,231,232,246,226,242,236,226,237,225,219,214,221,253,227,226,253,253,222,252,243,238,233,246,252,227,235,247,233,242,237,228,238,249,232,230,239,232,228,244,224,249,243,235,208,242,231,246,235,225,225,245,209,236,245,227,238,234,219,241,252,225,253,235,249,240,246,253,240,236,243,242,242,235,221,234,237,227,252,241,247,242,232,211,214,226,243,229,246,236,222,239,207,240,255,244,242,234,240,248,245,247,250,243,222,223,220,162,220,226,189,224,241,233,213,244,216,233,183,173,231,240,246,215,135,154,135,114,37,182,226,219,243,236,226,246,243,237,255,221,122,45,3,4,31,18,31,26,25,34,49,49,34,112,215,122,102,47,17,2,10,8,10,95,238,248,227,209,148,97,43,34,25,25,98,141,81,129,45,41,8,29,19,27,15,17,8,29,151,249,139,36,10,68,99,95,60,51,33,21,13,27,12,47,56,64,72,90,87,68,65,29,38,19,72,57,13,94,152,185,237,240,245,246,234,235,236,242,177,244,255,245,252,250,246,243,252,246,197,119,16,58,108,15,30,72,28,22,23,102,133,64,29,4,24,88,126,166,95,73,29,33,124,133,141,234,242,238,237,174,68,23,11,23,128,241,250,254,255,163,68,24,8,24,2,10,116,156,86,56,37,1,39,2,70,186,234,
218,149,199,232,212,120,72,71,23,16,34,36,108,72,125,234,247,245,252,238,252,247,252,247,237,222,239,181,152,150,158,146,171,169,137,134,121,122,94,100,36,51,46,39,28,36,10,105,218,231,255,221,230,212,207,241,229,251,255,250,160,57,29,54,225,161,56,20,27,154,242,237,234,223,212,235,252,237,244,250,232,249,222,222,222,244,225,176,137,68,62,200,254,202,174,107,71,28,38,158,232,234,250,255,221,170,170,243,217,227,206,210,212,227,212,223,212,227,234,226,217,236,220,186,220,228,211,227,216,235,208,222,202,243,231,202,207,220,209,226,213,228,214,198,203,230,215,223,244,224,209,222,222,235,233,233,211,201,223,227,216,216,209,205,203,225,209,214,228,207,239,210,222,196,233,223,206,119,10,10,14,19,12,20,0,14,15,9,8,17,0,5,4,13,11,6,22,32,0,16,29,14,234,226,232,229,250,224,247,222,211,219,239,233,223,224,204,247,232,231,245,253,240,219,227,219,237,243,245,216,222,227,221,243,252,244,235,236,225,218,232,255,239,233,247,244,233,230,244,245,230,245,223,216,245,234,243,244,245,254,244,247,223,227,231,227,242,247,237,229,237,237,223,219,229,245,247,253,239,218,249,224,239,241,232,235,236,242,232,239,227,229,231,239,243,215,222,238,226,234,235,236,215,221,225,231,241,216,223,235,227,241,216,237,238,222,227,205,218,221,236,230,236,240,241,232,235,231,244,250,254,242,236,222,230,227,241,247,253,244,225,249,233,237,240,229,250,224,235,247,241,234,229,247,231,240,217,247,223,235,222,246,228,234,201,216,220,230,241,233,241,223,241,224,213,243,245,234,253,241,233,245,237,212,228,254,245,229,252,243,215,237,239,211,247,245,231,243,242,238,244,234,243,244,233,219,230,246,227,235,239,215,217,243,224,218,220,217,219,240,222,226,217,233,246,234,218,223,233,223,237,236,220,245,234,244,207,238,230,240,255,253,251,252,235,228,246,248,250,253,228,247,243,239,234,245,236,225,237,230,231,248,242,240,236,236,231,232,245,232,230,249,236,235,229,232,255,249,242,240,225,251,238,221,245,246,255,233,247,253,227,225,237,228,231,253,228,249,243,250,234,248,224,246,230,228,225,249,253,230,243,254,244,242,22
4,233,237,243,249,249,238,244,242,248,249,255,255,183,194,220,224,230,244,241,231,242,206,198,225,234,251,176,174,253,230,251,207,134,140,122,118,64,188,244,245,240,245,236,240,242,237,240,253,155,143,72,15,16,18,3,12,6,9,35,52,45,28,127,192,139,60,17,11,29,15,58,20,48,74,36,76,55,36,46,25,36,15,61,81,64,166,136,105,41,16,10,8,7,1,17,10,131,228,108,33,7,56,103,73,30,30,0,21,40,25,32,21,36,64,88,102,88,52,49,23,20,36,53,99,119,166,121,161,220,224,236,232,225,250,254,252,237,251,240,254,226,156,142,163,232,253,189,109,43,77,97,0,43,53,41,27,14,187,191,46,25,3,39,97,167,213,136,108,84,65,105,94,116,237,240,229,245,178,67,52,6,21,190,229,253,249,192,62,25,3,38,3,41,210,245,219,80,14,5,8,13,26,29,63,169,223,173,210,243,237,163,148,73,7,23,12,86,222,174,138,253,222,251,250,229,244,234,238,233,215,231,224,161,125,135,152,182,160,151,138,121,137,180,176,237,180,63,36,31,32,30,41,11,48,201,249,236,232,234,237,236,233,237,199,122,92,53,31,160,231,233,193,43,35,159,219,246,239,250,238,234,224,237,217,249,246,240,226,241,245,247,243,223,150,104,3,104,174,144,112,49,34,84,175,228,240,250,252,235,195,165,173,206,241,212,220,231,218,231,218,202,233,246,211,215,237,218,208,240,227,215,227,250,229,228,249,197,207,217,202,199,221,215,231,213,208,218,207,216,217,217,224,230,232,214,211,197,207,220,197,220,209,225,221,211,227,212,207,214,235,199,215,214,222,223,233,209,214,228,191,198,209,108,11,30,8,8,7,3,15,25,11,23,10,23,20,0,5,5,4,10,24,3,4,24,34,15,238,249,240,211,247,226,212,238,230,229,230,208,226,228,229,228,231,233,239,219,234,220,242,229,231,241,221,230,233,226,237,241,237,198,208,236,209,252,219,218,229,229,236,228,242,250,226,242,251,232,248,230,224,230,230,246,229,226,246,232,235,243,226,246,232,226,242,241,248,225,237,223,227,224,239,230,231,202,243,222,223,216,228,235,241,234,236,241,246,219,242,223,204,247,219,223,230,240,236,221,229,217,213,227,230,209,243,242,215,241,240,224,231,217,231,210,234,239,220,238,228,229,235,224,244,239,247,233,243,230,227,249,242,223,238,22
2,231,242,228,223,242,227,228,238,212,221,234,232,247,218,224,218,238,219,222,254,229,235,211,242,233,220,227,233,248,226,245,219,225,246,232,213,234,228,215,214,226,238,227,239,235,246,225,221,225,222,239,204,238,233,235,242,225,214,245,249,229,223,218,236,230,254,251,233,231,236,237,246,218,218,244,238,230,240,236,219,214,237,223,215,240,206,223,229,226,216,250,229,218,246,241,210,232,230,241,240,249,245,231,222,254,239,232,226,249,244,246,243,220,252,236,240,236,255,221,227,232,243,243,249,252,254,221,244,236,254,243,237,230,236,249,218,200,216,233,235,233,229,226,239,248,238,255,244,243,249,252,255,242,238,231,226,232,238,240,241,229,240,247,233,243,222,255,255,242,242,217,247,240,230,254,255,218,245,221,250,255,245,217,250,236,224,231,245,244,225,232,237,226,221,220,190,196,205,222,208,246,223,253,215,201,234,229,222,188,155,130,128,102,70,183,252,237,250,237,233,212,224,209,254,236,192,226,155,145,93,7,9,15,11,18,47,75,81,11,73,140,185,81,18,6,3,58,49,29,33,50,42,69,60,34,33,29,52,12,67,63,23,42,111,203,130,89,60,45,37,13,11,45,35,52,46,26,36,16,78,63,26,25,10,16,23,9,17,19,44,66,72,106,34,48,19,26,49,105,213,205,168,116,136,219,228,251,238,239,226,248,255,250,214,247,238,250,128,100,97,63,131,148,154,110,17,90,94,16,64,48,7,52,11,52,115,50,34,12,64,149,204,155,105,87,83,31,59,52,124,242,252,239,219,99,20,12,8,95,219,237,241,160,80,15,1,38,69,106,234,239,251,144,43,39,79,80,74,10,4,9,133,200,184,205,225,219,155,137,53,31,25,39,188,215,237,159,198,255,239,241,253,244,254,233,228,234,222,187,156,151,138,178,165,177,146,99,133,160,213,220,219,217,98,88,139,85,51,34,15,23,91,229,238,223,221,238,249,188,116,26,13,9,14,7,35,159,227,229,107,6,59,193,228,229,242,223,139,106,90,108,199,238,237,240,221,214,232,207,197,186,118,55,48,30,34,51,21,78,180,245,249,240,241,247,208,181,164,239,224,219,216,201,209,216,209,238,235,218,225,223,213,224,236,249,224,203,202,208,208,212,200,226,220,235,204,228,222,216,221,191,207,218,221,219,216,219,213,217,202,195,226,219,212,207,233
[raw numeric test-vector/image data from the libvpx update omitted; not human-readable]
23,244,229,225,215,222,214,231,234,241,236,232,225,241,237,245,242,240,234,243,230,231,247,244,245,219,228,227,235,248,237,246,215,226,228,216,222,239,243,226,241,193,237,235,241,243,232,238,244,255,222,214,255,245,251,249,240,226,252,228,247,253,233,241,226,216,241,219,235,246,237,228,216,237,236,241,218,239,255,231,255,235,239,249,248,243,236,254,253,230,249,243,253,246,225,242,245,246,238,250,236,229,235,241,254,230,240,239,227,234,255,252,250,233,246,219,242,248,255,233,243,231,246,239,232,212,233,208,237,213,194,228,227,255,177,225,228,228,201,211,240,255,204,199,207,210,245,227,255,206,160,153,97,53,54,229,246,228,241,212,242,226,236,176,227,254,247,253,250,173,119,171,192,177,147,169,249,243,234,233,252,223,228,189,202,228,239,229,224,180,129,144,174,191,188,210,167,150,142,83,41,30,120,54,84,68,45,40,27,43,41,149,217,227,225,191,192,224,227,224,243,99,28,58,140,196,193,235,185,164,202,190,207,163,117,188,226,201,244,169,82,171,247,254,239,237,243,191,129,70,9,17,25,45,140,176,221,210,158,136,46,39,6,49,78,51,61,31,62,79,79,50,81,65,78,67,94,100,88,85,95,65,89,63,76,59,47,43,76,45,70,58,110,122,114,147,167,127,133,128,107,67,75,47,17,40,37,15,14,26,33,17,20,26,22,43,22,10,32,18,44,35,15,36,22,23,18,14,53,45,17,20,43,17,51,22,49,28,27,18,33,96,122,141,149,151,157,139,150,146,154,137,163,138,159,127,165,154,128,76,12,49,46,39,31,31,17,7,43,14,29,51,38,21,19,33,34,22,14,10,41,10,33,31,64,31,44,53,87,90,86,79,102,83,61,71,77,60,55,55,52,51,63,70,65,54,78,57,61,54,59,172,235,240,191,147,55,50,30,36,44,14,33,137,194,148,104,88,57,92,72,55,41,30,0,35,116,217,255,227,134,90,57,27,76,190,228,226,226,145,111,68,34,41,15,66,198,225,215,204,218,229,236,233,213,228,212,229,221,211,209,213,229,244,231,230,227,245,185,223,215,222,230,204,230,214,242,209,206,244,218,231,228,194,222,209,237,205,227,223,119,0,0,30,13,17,8,7,15,14,2,24,0,8,8,0,24,15,6,18,2,7,26,3,6,205,204,216,226,228,245,228,243,211,216,223,234,216,230,213,219,231,223,227,232,226,227,241,229,217,228,213,210,23
9,197,238,214,225,211,215,227,218,222,192,229,224,192,211,206,234,228,191,217,231,194,212,234,208,236,221,239,220,215,207,228,215,205,220,212,214,236,229,206,202,199,215,230,227,210,245,229,219,223,242,213,204,224,218,244,193,232,230,213,211,244,233,225,237,213,225,241,215,219,236,239,222,236,227,214,251,212,234,213,218,243,207,220,201,211,218,204,212,229,213,229,241,230,240,228,238,236,204,239,236,237,255,244,214,229,230,206,233,218,215,218,219,204,239,235,226,225,235,235,232,229,215,224,220,206,212,220,217,227,203,215,236,230,209,231,209,215,229,218,228,227,195,216,203,213,204,224,237,232,224,217,203,230,226,223,224,232,228,220,235,225,208,219,235,218,219,223,243,229,227,231,217,230,218,226,227,230,232,229,233,219,245,244,232,231,235,226,226,250,238,205,225,225,226,226,218,212,225,208,203,223,237,226,196,236,216,241,234,230,238,234,214,240,239,251,230,239,247,246,242,249,237,239,229,239,235,236,251,240,229,219,237,238,236,245,255,240,209,250,247,247,233,238,239,247,230,239,226,246,239,247,250,251,246,231,217,241,246,253,254,254,247,251,226,245,253,243,233,221,223,246,240,246,240,224,254,242,234,254,244,247,246,245,239,246,248,240,230,245,217,230,245,239,203,222,219,222,206,251,174,79,129,207,233,245,199,230,233,212,210,240,218,218,196,241,181,152,135,81,53,53,229,255,234,235,215,240,245,230,166,223,212,184,154,194,198,191,233,247,243,185,157,245,239,241,231,253,207,189,162,188,240,245,251,215,198,180,165,166,198,144,209,120,157,173,146,68,51,130,58,98,68,27,34,33,32,77,202,251,219,219,175,203,255,219,246,243,126,72,56,90,182,201,251,206,172,132,166,231,188,156,192,188,153,224,113,69,198,208,250,247,239,241,218,235,203,163,85,113,53,2,11,14,7,9,5,21,50,87,55,111,111,118,126,110,120,113,103,117,105,84,63,74,62,75,49,79,57,54,53,37,49,104,113,110,119,160,130,127,115,98,87,52,49,29,34,27,49,30,22,36,17,1,9,20,24,19,16,9,17,45,27,15,20,22,42,27,32,46,42,38,38,23,50,39,49,17,13,9,18,14,30,43,35,41,59,85,214,172,162,175,161,148,151,160,160,149,154,145,136,128,142,146,151
,66,43,26,30,48,74,61,31,22,23,25,25,40,47,44,5,18,39,11,45,49,12,7,22,7,11,14,41,19,7,19,16,40,73,79,63,61,74,100,96,77,87,80,93,76,53,54,42,62,54,56,88,47,164,193,191,171,85,56,51,27,29,8,30,36,17,45,50,115,156,80,88,99,101,165,204,214,251,251,243,240,247,179,107,32,20,18,29,61,194,223,188,117,89,35,45,24,94,216,195,230,223,221,205,213,229,219,202,209,229,230,217,216,240,251,231,215,241,226,234,220,230,238,209,235,235,234,206,227,229,201,224,225,210,221,229,216,210,206,216,233,202,101,25,30,0,0,5,7,5,0,18,13,20,19,14,21,14,7,9,25,17,1,26,12,4,9,219,206,230,226,219,206,216,212,219,213,206,232,230,211,204,208,230,220,246,229,209,211,232,218,219,223,201,219,217,215,215,215,220,209,216,189,218,224,215,208,226,219,200,215,211,207,210,205,217,212,218,217,224,239,213,204,205,210,210,190,209,208,212,225,224,218,197,201,217,217,207,225,218,213,204,204,214,239,237,235,205,237,240,221,214,227,233,208,214,223,244,222,227,214,240,236,232,227,228,212,236,221,238,205,219,216,196,224,229,218,237,233,232,207,237,221,228,211,239,221,225,227,235,213,221,215,205,241,220,225,235,245,250,244,232,235,211,213,237,221,232,229,220,245,210,211,214,209,237,210,220,221,221,213,210,232,218,210,233,227,195,234,226,242,218,199,209,234,209,225,231,229,230,232,190,239,231,244,227,222,232,238,226,208,222,220,201,228,220,228,224,215,213,211,223,207,239,222,213,221,203,234,220,223,239,215,228,214,226,203,225,224,237,228,252,210,218,224,237,241,240,233,251,233,210,217,229,232,220,224,229,238,232,234,235,241,243,239,236,249,254,233,215,213,221,236,198,245,234,231,255,240,227,254,245,246,228,243,218,217,235,251,228,219,255,219,227,246,241,239,209,226,242,249,248,252,226,221,246,255,250,233,254,240,247,250,224,244,229,233,249,252,239,253,253,255,245,245,228,239,234,253,248,243,249,249,238,221,240,209,255,228,251,246,230,248,204,226,243,228,217,255,236,207,213,240,231,191,107,22,98,228,214,232,207,237,254,201,186,241,218,234,223,255,191,144,152,114,51,30,227,239,240,244,243,245,242,210,170,186,188,181,208
,238,251,204,233,232,251,186,163,214,243,242,235,252,187,165,218,202,227,235,248,182,167,201,168,169,204,157,191,137,181,177,139,85,31,132,71,90,81,58,71,37,20,71,230,231,204,196,164,224,243,206,220,242,155,157,99,76,155,225,249,160,104,97,190,170,96,144,211,185,139,181,69,75,221,219,237,213,238,229,241,252,253,222,179,195,100,56,54,17,8,15,44,36,46,57,30,13,46,41,25,68,44,50,61,60,97,61,79,68,51,78,79,73,112,119,118,127,150,131,131,111,89,49,49,35,29,26,48,23,4,26,14,24,28,28,35,7,15,13,16,27,14,18,15,32,31,34,41,23,32,43,34,39,39,34,35,38,45,44,61,80,33,19,12,3,23,31,34,32,58,82,82,169,211,159,161,159,141,181,151,126,166,151,161,169,107,134,125,190,132,49,38,30,12,35,66,61,65,41,12,45,42,18,26,7,20,16,19,17,23,29,14,13,0,16,18,38,34,26,23,28,26,5,7,30,9,16,48,47,29,52,65,75,96,104,93,110,109,77,67,54,73,68,40,48,49,78,86,93,42,54,44,61,48,75,40,27,54,63,87,70,63,38,143,222,234,239,245,234,245,250,246,253,135,46,18,14,20,107,217,229,213,133,91,38,53,17,69,214,220,239,249,231,216,239,238,220,219,238,216,232,215,223,238,237,226,224,215,230,236,226,216,240,216,230,247,223,231,214,207,210,219,207,224,223,217,211,210,208,229,228,247,121,18,1,6,1,9,9,7,13,11,12,13,4,4,0,3,18,11,7,8,12,9,16,8,24,240,221,222,224,214,219,224,220,232,223,222,224,235,201,235,210,225,233,247,209,219,227,234,221,204,185,226,228,226,226,197,224,209,220,202,240,224,214,219,214,231,187,219,223,197,202,215,242,220,219,198,231,220,210,209,228,224,222,219,197,220,212,218,223,209,202,211,230,241,225,218,217,205,209,215,232,219,203,218,211,219,238,220,226,239,235,238,244,244,231,214,217,217,216,224,218,218,211,214,226,220,218,232,213,251,228,232,220,229,237,216,220,233,228,229,239,201,204,202,211,222,233,199,216,223,215,247,242,214,206,226,234,243,230,225,213,214,227,199,233,221,222,224,240,222,246,226,240,239,216,233,225,193,215,229,235,239,243,240,231,224,215,234,226,199,233,215,226,220,211,214,214,235,230,201,227,209,222,212,230,202,210,243,218,214,220,215,188,225,220,217,213,234,222,204,210,218,227
,229,205,220,207,241,242,235,238,218,229,220,240,224,241,210,208,224,243,221,240,236,210,227,245,206,216,247,223,242,234,219,176,240,248,193,227,226,228,231,241,211,224,240,211,243,227,255,247,235,245,249,237,242,235,247,240,245,235,209,245,229,213,246,247,246,225,217,246,213,246,255,224,226,233,241,239,245,221,255,225,233,238,241,229,246,249,223,242,253,250,248,250,251,245,238,214,240,243,240,220,251,221,218,231,247,254,249,236,254,229,253,214,253,239,234,232,244,236,246,226,238,223,224,237,224,186,212,229,168,219,139,10,146,233,223,228,208,217,247,217,208,187,215,236,214,255,171,178,130,109,61,44,210,241,255,253,210,243,196,155,147,213,228,235,251,242,255,223,184,217,234,188,146,232,252,187,166,201,184,212,248,226,246,244,196,146,182,160,162,196,217,205,213,150,179,160,128,74,12,152,106,78,58,44,73,46,19,86,243,247,208,190,189,231,202,244,233,230,167,208,183,99,142,235,189,190,98,156,244,171,62,143,235,183,171,185,107,128,177,179,181,209,212,234,224,253,255,241,183,208,116,65,76,53,95,76,82,98,80,91,69,78,85,85,80,69,45,94,84,58,101,119,129,131,133,128,122,100,89,59,84,49,42,49,37,32,46,21,18,13,22,18,55,28,5,8,44,19,20,18,36,32,23,9,21,18,6,5,39,41,34,13,33,20,28,57,54,56,45,44,5,25,61,61,37,49,37,23,24,61,42,65,44,71,74,144,177,238,230,157,188,165,152,147,143,156,157,179,171,170,142,128,144,153,142,46,45,16,31,20,64,68,52,43,35,46,9,20,22,8,15,37,16,33,28,36,28,28,18,46,69,18,38,42,30,27,20,14,7,31,4,13,16,29,20,13,11,29,33,59,51,81,77,88,126,122,122,82,59,64,33,77,37,85,43,96,55,52,75,83,58,72,36,62,54,47,46,51,23,66,113,88,106,98,97,164,223,217,190,167,178,172,227,241,248,227,161,157,123,34,14,24,167,223,224,220,234,223,231,241,230,196,217,202,239,243,212,227,217,220,247,228,217,201,231,215,225,223,206,236,241,235,226,227,221,231,214,243,221,224,232,226,231,211,213,235,234,144,0,12,14,0,21,10,22,21,5,11,9,6,0,0,8,2,12,10,1,0,23,14,23,21,216,216,221,241,231,230,222,236,227,203,235,219,220,234,235,213,233,235,202,231,227,215,228,224,230,233,234,221,195,225,196,2
11,205,213,223,196,228,217,227,215,202,217,224,216,188,207,184,202,206,228,199,202,204,218,247,216,220,191,236,220,202,222,212,189,224,234,226,208,208,194,227,227,205,220,205,238,224,235,220,224,227,241,225,208,218,202,206,234,247,239,223,226,228,200,238,238,215,204,226,209,220,205,228,210,216,237,235,228,221,230,214,214,219,231,210,216,199,209,225,233,233,229,213,226,218,212,236,245,205,225,209,223,225,226,236,210,206,211,236,224,220,224,237,233,232,223,214,207,226,225,219,232,214,238,218,230,214,215,210,209,241,212,204,216,221,237,230,225,239,193,226,226,238,212,211,218,217,231,228,195,217,199,218,218,228,241,223,234,219,200,212,222,211,226,208,214,236,205,212,239,220,232,200,224,222,225,231,238,216,229,238,233,223,236,217,236,236,219,234,217,229,224,221,215,212,218,221,216,230,238,206,224,231,244,201,247,231,232,237,213,236,219,239,248,244,236,242,230,211,249,236,223,252,244,233,254,240,207,201,236,233,246,248,238,227,230,240,252,239,251,234,211,242,233,223,245,251,233,235,245,247,213,248,252,249,233,234,221,242,247,239,255,235,253,230,247,242,237,233,242,211,229,241,255,254,227,223,217,238,246,243,244,224,222,231,243,234,251,236,235,233,241,219,220,207,226,236,253,154,51,160,224,216,218,219,217,249,216,200,193,224,184,200,246,207,153,120,117,63,58,241,243,235,208,161,187,191,205,195,197,218,226,253,245,255,189,174,210,206,130,129,211,209,190,196,235,206,215,249,222,252,240,170,179,173,148,171,190,195,225,249,144,173,151,152,91,0,119,69,75,61,23,58,46,21,74,226,247,236,170,179,239,242,227,217,244,179,244,210,107,148,227,188,152,121,197,243,189,100,176,253,194,232,217,109,194,223,215,204,199,190,168,171,231,235,181,122,79,34,25,26,17,4,2,12,17,67,48,65,88,85,64,77,85,86,78,48,76,60,75,60,67,87,39,32,47,48,71,93,22,7,20,31,33,13,1,3,9,13,36,37,29,35,26,27,16,16,51,23,56,33,6,43,13,25,6,31,53,22,13,44,47,43,36,55,53,10,38,56,55,41,61,59,60,43,22,26,43,65,70,60,44,132,156,171,202,214,177,150,137,177,158,178,175,139,185,175,162,161,143,155,187,103,64,49,50,20,18,30,54
,61,58,56,55,34,27,22,37,26,12,34,10,37,37,33,29,0,12,49,20,45,35,18,29,56,32,48,37,35,22,16,29,24,6,21,25,36,47,27,29,28,40,46,75,49,102,77,100,92,90,56,76,49,51,57,63,88,76,80,83,82,107,79,60,43,47,64,64,32,52,35,39,46,38,30,99,204,227,237,239,246,248,230,177,115,103,58,33,31,55,218,224,228,205,210,249,233,219,233,200,221,208,229,222,233,237,239,234,240,239,216,219,239,221,228,216,223,193,242,228,229,251,231,212,245,227,236,241,224,231,209,239,197,210,239,106,14,4,1,16,26,7,1,3,11,12,0,43,16,12,3,9,27,1,37,4,21,12,11,4,204,227,230,225,238,205,193,241,210,220,218,216,235,205,227,225,214,234,240,214,207,214,233,233,236,221,227,234,208,224,248,213,226,207,215,211,222,216,199,212,200,227,206,215,201,199,235,206,199,193,230,250,241,209,227,211,212,215,237,219,220,210,221,223,223,210,229,203,221,187,227,208,234,223,234,231,230,234,222,209,217,204,201,197,247,239,238,224,221,237,235,224,240,234,189,205,226,229,241,205,230,214,203,230,221,224,239,229,210,243,223,211,218,209,200,230,215,231,223,220,237,215,224,222,231,235,236,225,207,219,229,206,223,219,210,239,199,242,222,211,247,194,206,242,232,213,229,233,215,208,228,222,242,234,232,221,226,252,233,225,214,208,223,215,206,222,213,195,217,205,229,251,233,215,208,221,195,240,211,212,235,213,213,224,239,200,236,220,220,217,215,209,223,211,215,235,211,210,246,231,174,227,247,222,231,225,242,205,228,192,231,239,214,237,195,219,212,222,224,221,233,236,235,215,232,243,219,239,213,237,231,233,239,243,244,233,228,232,225,232,235,242,249,228,237,193,247,221,239,243,245,247,229,204,244,242,236,227,198,250,241,240,235,238,234,247,241,234,247,255,242,223,220,239,229,236,238,225,247,234,233,236,249,255,255,222,240,227,248,224,242,238,246,221,252,244,231,234,244,247,254,255,237,240,249,244,248,249,250,245,239,219,252,234,240,238,246,230,230,236,245,244,217,211,214,231,223,253,113,60,201,249,238,236,229,247,243,210,220,218,214,217,226,252,198,179,111,104,63,71,170,195,175,177,188,238,234,221,209,221,242,235,231,244,226,242,181,234,232,
164,114,196,243,254,242,250,203,209,235,184,188,229,166,173,161,181,204,186,202,218,222,116,151,216,157,109,36,80,66,86,74,49,32,42,19,52,242,247,220,152,160,216,238,209,216,247,156,186,222,112,118,177,127,196,170,220,245,114,147,226,248,172,213,202,134,198,211,192,233,174,186,194,188,198,196,124,95,65,67,92,59,40,39,18,15,8,33,47,23,25,24,19,35,33,6,17,18,38,13,16,34,47,44,33,45,42,110,138,73,34,36,11,0,7,10,21,8,38,9,18,47,51,46,21,7,34,22,41,54,58,59,49,54,35,7,12,39,22,18,19,41,47,38,55,43,27,25,27,53,46,58,46,67,29,26,26,26,76,77,63,57,84,138,129,224,253,191,156,143,139,143,154,155,152,177,195,169,146,146,157,191,166,113,82,72,87,59,34,55,56,60,71,47,29,25,40,34,51,18,15,33,20,29,32,43,65,10,11,29,25,24,30,21,44,27,17,53,38,32,42,44,21,14,33,15,2,45,47,36,30,33,37,22,2,38,19,49,29,42,52,100,120,96,89,85,83,48,67,40,44,61,67,71,72,76,68,54,81,68,48,53,24,66,32,43,35,4,104,208,219,248,207,186,96,100,58,13,17,66,219,234,226,246,241,224,229,240,228,238,218,235,222,207,240,224,215,251,226,189,224,229,241,229,244,224,214,223,247,235,222,223,238,237,223,234,221,219,232,237,215,217,209,236,218,214,148,12,3,11,6,4,3,6,8,8,11,18,2,1,16,0,12,22,8,15,3,6,11,7,0,193,206,196,214,214,212,245,226,205,204,217,216,205,229,199,220,206,217,223,211,232,222,236,226,223,220,219,225,228,224,242,229,218,236,205,197,223,194,199,203,217,210,212,204,237,196,232,207,196,187,204,219,210,221,217,205,220,215,188,218,220,211,202,213,223,212,216,227,204,216,194,228,221,221,232,234,207,219,216,196,231,219,197,213,223,212,236,226,229,227,216,211,232,216,204,241,233,247,200,234,216,212,230,230,221,246,246,220,233,192,217,213,241,241,232,239,207,214,241,218,222,201,229,235,218,228,216,226,214,232,243,220,227,218,232,204,228,236,248,202,225,216,197,222,209,213,225,223,212,204,220,216,214,216,215,229,211,215,229,206,229,230,198,224,229,239,234,222,220,242,236,232,231,224,236,215,210,220,219,218,222,223,235,220,224,226,194,220,215,211,223,230,196,214,243,222,203,215,236,204,220,223,229,199,228,218,204
,251,200,220,210,212,209,240,223,215,220,229,235,219,219,210,228,241,231,214,223,238,226,220,231,235,214,227,231,225,227,254,242,217,231,216,224,234,238,217,232,219,250,233,237,236,217,234,220,224,244,214,182,219,224,220,240,236,214,248,251,248,237,230,221,247,232,237,227,234,232,230,245,223,247,242,235,234,232,241,227,250,251,228,249,229,246,231,254,242,238,217,240,247,217,244,225,232,235,234,250,246,241,241,255,252,235,235,232,242,242,244,221,222,254,221,198,201,211,225,193,226,104,78,236,239,245,229,224,223,244,213,239,206,208,201,244,250,173,159,159,123,124,54,165,212,224,235,212,238,237,196,235,209,240,235,238,208,255,192,169,200,252,227,141,241,234,235,231,167,138,200,203,184,201,212,172,174,210,185,218,206,172,201,206,109,174,176,174,138,50,101,53,109,66,77,56,54,4,71,224,240,200,113,180,193,238,209,209,238,189,179,212,124,112,88,106,158,211,233,161,106,206,253,214,145,199,167,118,221,204,209,202,221,229,194,209,228,158,159,166,195,212,237,238,227,238,240,247,215,165,74,61,30,63,37,41,42,40,35,23,39,44,26,68,58,37,16,50,124,159,151,81,12,20,16,30,2,12,15,38,63,35,27,15,30,37,37,24,17,12,13,32,71,42,38,43,56,54,23,42,7,25,30,60,41,45,19,21,11,33,71,63,37,52,45,37,25,5,32,27,46,33,58,57,60,98,170,245,242,152,150,139,157,172,158,159,151,204,174,175,132,166,152,173,131,72,48,96,77,58,19,30,88,88,48,74,37,21,19,64,33,19,8,37,17,34,18,38,43,42,38,26,10,31,26,40,38,63,26,27,38,31,27,12,6,20,15,28,53,44,25,24,37,39,15,25,32,20,14,25,9,30,45,59,35,23,54,98,98,96,119,96,82,82,72,76,76,63,43,63,90,37,48,84,58,90,81,70,64,14,23,4,11,54,70,52,56,55,30,56,175,233,239,227,253,246,230,251,222,234,237,227,240,238,216,199,213,240,225,234,215,227,243,248,250,237,204,232,222,218,221,195,241,220,211,239,234,221,230,217,233,235,223,245,214,214,227,228,103,20,7,24,3,17,5,16,3,15,0,10,1,18,17,1,5,16,19,14,10,4,14,8,0,221,185,201,221,219,223,221,234,229,225,190,207,223,215,239,223,219,223,229,223,217,211,216,214,236,221,226,226,225,197,210,211,228,207,224,199,204,221,226,224,215,209,
201,203,215,222,192,230,218,202,185,210,204,209,207,218,186,210,219,201,209,218,205,219,204,195,195,225,213,231,212,185,189,197,230,214,221,246,220,211,215,225,210,220,214,211,233,220,237,225,222,224,200,213,222,215,223,213,211,203,218,231,221,217,209,204,224,215,198,195,221,219,196,211,223,197,217,225,222,210,206,228,226,234,225,221,212,211,219,240,196,215,218,241,213,230,219,206,243,246,236,231,225,231,228,199,239,211,196,224,220,213,210,210,210,212,215,215,217,224,240,188,212,236,200,228,212,234,224,232,218,243,229,231,235,235,207,236,220,215,190,210,239,204,200,225,208,210,211,223,215,230,233,218,235,209,214,217,240,222,203,214,215,230,220,202,211,212,218,218,200,224,217,204,216,224,228,228,236,223,211,226,210,223,239,195,230,223,255,234,215,245,226,249,233,232,224,248,223,231,239,217,224,221,225,249,232,212,237,244,215,230,209,236,228,224,229,230,201,219,241,225,231,220,247,243,224,244,232,232,232,220,233,251,230,252,209,247,239,238,247,234,226,244,224,243,241,232,230,242,234,253,254,246,241,254,233,241,207,207,243,250,244,231,229,226,252,253,245,237,239,246,240,238,250,234,227,228,221,215,244,236,204,187,171,188,216,241,96,128,235,253,252,187,189,240,241,204,211,223,232,217,247,251,170,127,120,116,81,61,220,250,242,227,228,246,196,208,218,218,223,233,246,215,236,224,178,170,225,185,126,204,234,204,165,157,184,223,236,199,203,199,170,196,198,178,198,183,140,172,182,151,175,171,102,143,140,141,115,111,84,31,62,56,33,84,195,243,205,139,184,225,222,221,223,239,168,151,205,131,114,54,112,195,206,172,130,137,218,236,215,158,176,109,115,219,169,194,224,205,204,231,224,224,172,137,242,231,254,227,244,228,250,253,242,216,153,134,138,137,120,112,116,98,97,74,104,96,71,82,100,65,42,20,73,157,160,148,68,37,23,6,37,4,11,50,82,47,53,32,22,38,44,54,50,24,42,21,53,30,62,39,54,50,45,49,49,33,35,69,74,64,41,40,51,68,66,31,40,35,39,22,40,30,24,51,62,61,35,61,49,33,66,162,239,220,139,132,125,172,157,156,159,163,173,176,133,129,148,142,193,119,50,16,50,57,94,46,33,71,90,94,51,26,2
3,33,24,30,55,18,30,21,31,30,52,39,39,45,14,37,38,13,56,22,21,17,10,20,24,26,28,19,46,45,35,45,49,63,41,27,5,43,13,14,20,23,64,39,35,24,23,49,32,22,41,51,49,70,93,98,142,152,168,164,141,118,102,84,85,54,71,87,68,68,58,69,72,82,57,48,40,68,73,65,45,10,102,211,251,247,217,232,217,233,235,232,217,222,234,251,238,223,240,235,227,225,210,234,220,222,235,229,213,239,220,202,245,245,242,249,232,236,242,229,234,202,232,234,230,243,230,220,227,227,199,121,12,4,21,1,2,24,1,5,21,20,28,30,7,15,10,19,1,1,38,7,12,3,0,16,241,189,234,211,206,200,214,211,219,220,230,223,206,207,235,213,217,213,226,207,228,220,213,230,216,215,211,220,218,237,225,212,219,208,229,218,200,228,203,196,228,237,217,219,224,210,211,225,199,213,224,219,233,220,218,207,221,218,208,203,217,232,213,227,210,206,200,228,233,226,218,203,215,222,203,208,237,199,236,221,228,202,212,217,177,203,244,194,211,219,208,222,211,230,200,196,207,206,201,221,222,226,214,213,203,242,204,200,201,237,200,209,209,220,221,205,222,221,200,194,244,207,240,221,231,207,177,239,219,232,222,220,223,211,235,208,235,239,201,223,225,230,220,227,221,197,211,224,236,230,212,223,223,233,235,231,209,190,213,236,227,229,216,225,238,194,232,238,196,215,245,214,236,193,220,221,227,228,235,238,205,226,197,216,230,216,217,217,241,237,200,217,215,206,220,221,208,209,215,188,224,238,237,224,234,217,228,242,223,221,244,239,227,228,230,189,221,233,238,208,234,232,220,214,227,237,233,219,215,223,223,219,239,222,233,239,238,221,214,245,215,215,213,206,223,236,228,239,237,237,214,241,227,229,207,229,225,222,215,222,243,226,235,213,246,236,250,211,226,232,231,241,239,231,244,250,242,226,255,236,248,234,214,246,240,238,241,245,230,251,230,233,249,238,235,244,228,234,246,248,226,241,247,236,238,243,239,255,237,245,236,222,235,239,255,229,247,239,231,237,254,234,216,179,165,214,238,233,96,137,217,224,241,203,206,253,229,192,234,236,198,225,221,224,146,117,105,79,71,63,218,221,252,226,205,249,182,212,200,193,225,231,247,246,242,232,200,166,176,152,121,141,190,
228,225,156,198,249,242,240,189,190,184,215,197,121,198,190,158,170,178,134,170,125,75,203,201,153,136,103,70,32,31,53,5,67,243,242,196,105,170,216,234,204,210,234,199,151,196,229,169,85,112,155,195,212,107,142,223,251,215,172,198,108,173,240,237,192,239,213,212,217,218,227,150,214,236,238,238,234,227,226,208,148,115,83,102,123,124,118,116,122,114,127,121,116,153,93,111,155,140,64,14,36,112,139,135,142,53,34,29,18,7,15,6,32,68,41,49,30,20,15,24,87,67,39,25,34,54,42,75,61,54,42,57,47,53,37,32,64,68,46,48,48,50,64,31,61,44,45,26,21,41,56,32,67,73,80,56,71,63,44,95,212,204,197,135,127,156,155,148,169,162,175,206,147,122,139,172,152,172,117,41,53,37,18,37,26,43,58,58,52,50,6,23,36,22,40,38,30,15,9,60,33,6,16,32,29,27,25,28,43,32,41,22,27,25,16,13,36,38,56,43,36,56,76,65,55,26,22,31,12,23,33,43,32,36,32,33,7,4,18,22,20,36,28,21,36,97,70,74,86,116,133,141,125,168,170,171,148,124,140,129,129,99,106,86,86,95,67,66,41,33,11,18,19,111,229,240,240,237,235,233,217,236,239,231,233,249,215,242,220,252,233,231,217,225,222,241,197,219,232,239,248,235,228,246,232,238,234,202,226,234,228,217,249,225,225,194,205,234,221,225,210,206,119,8,0,12,8,9,20,0,1,52,9,18,12,7,0,14,19,4,10,16,41,10,22,6,3,219,187,205,230,225,209,236,211,238,223,241,209,227,211,207,231,217,199,222,211,183,202,208,224,242,227,204,181,228,233,217,205,219,208,204,205,204,226,237,196,215,200,204,184,215,217,211,189,227,223,216,203,207,222,228,195,201,219,231,220,227,229,201,198,234,191,222,217,212,207,232,218,213,207,205,212,209,220,223,212,228,197,198,223,198,200,189,191,211,227,212,204,198,205,242,229,230,217,222,229,224,230,197,205,205,222,231,215,223,221,208,211,199,225,218,218,212,236,219,219,215,230,182,236,201,193,224,238,212,242,224,214,217,217,216,238,223,222,218,219,229,242,234,238,220,206,214,207,203,225,229,234,208,210,239,205,227,235,228,248,235,219,213,218,220,237,243,226,233,205,225,225,231,220,234,233,229,231,221,231,203,230,196,224,210,212,226,227,234,221,227,238,225,235,229,232,206,221,206,215,228,2
10,213,225,220,226,217,222,225,221,215,226,213,201,210,205,223,228,217,221,228,240,215,228,226,210,228,213,227,212,210,248,240,212,222,223,217,239,237,227,239,232,245,232,247,240,219,246,216,232,249,234,227,229,254,217,236,198,222,240,226,252,209,205,252,236,243,229,252,234,239,250,244,245,238,222,238,222,231,240,231,228,238,245,245,234,232,233,240,237,247,247,249,240,246,243,241,241,227,255,244,243,241,237,248,242,245,215,217,228,240,220,220,221,240,239,247,236,247,241,230,220,230,207,207,242,234,254,101,89,190,228,250,209,242,227,233,214,220,198,164,168,182,194,180,122,144,87,89,60,206,233,250,236,206,245,200,210,225,210,238,231,227,250,239,243,191,182,194,184,108,150,224,230,232,189,162,232,249,222,185,203,164,222,186,182,233,231,200,159,194,135,171,103,19,198,190,143,172,101,83,25,28,40,57,79,200,250,167,60,133,213,227,212,215,249,200,152,177,237,201,93,64,121,225,166,93,181,229,242,215,211,213,145,186,243,238,211,228,225,231,222,213,204,161,225,238,218,240,230,220,185,106,86,51,69,75,91,101,43,38,29,47,61,93,116,98,28,10,95,110,81,44,39,111,126,151,99,33,20,32,33,57,13,25,15,48,66,61,40,26,11,13,73,72,46,37,14,16,49,37,54,53,68,31,25,68,76,48,77,66,69,90,89,70,58,83,111,80,79,82,140,101,104,121,113,93,128,123,143,136,68,141,253,253,194,160,132,135,147,162,184,163,169,195,139,157,162,172,166,135,85,69,60,41,54,69,70,53,73,80,52,21,61,25,32,44,28,38,21,18,6,43,21,4,13,21,35,39,25,44,34,35,54,50,24,27,30,29,39,68,74,53,25,56,66,54,39,41,38,19,57,54,44,57,56,31,28,37,22,40,27,3,0,25,27,34,123,122,95,89,43,32,15,40,28,16,49,58,76,60,79,96,105,130,105,100,105,73,59,33,35,8,14,32,97,214,210,229,222,251,231,244,206,229,233,214,251,239,243,222,238,235,222,219,218,221,243,240,220,237,224,216,232,218,229,244,234,219,227,223,216,233,206,225,224,243,219,231,224,208,212,218,208,217,145,7,18,16,30,7,0,4,24,4,17,37,13,19,6,8,25,5,10,11,36,8,20,15,8,189,220,205,240,221,210,213,244,214,187,223,207,218,208,214,220,216,206,213,211,199,231,215,214,232,241,220,211,217,197,212,217,20
9,182,219,200,210,217,241,214,209,216,214,197,214,214,215,188,189,190,199,201,195,200,234,219,228,220,214,221,209,201,215,191,219,207,230,228,227,206,211,230,174,230,204,209,235,209,220,233,227,213,211,199,206,225,227,218,214,222,214,238,212,209,214,217,223,211,194,216,227,225,239,209,201,225,186,217,212,238,239,208,227,233,211,235,202,217,201,217,208,194,210,229,229,235,223,228,226,200,221,240,222,235,203,204,230,233,212,212,209,226,225,191,225,218,218,237,227,207,223,219,215,219,224,211,226,222,213,212,221,199,243,222,217,216,218,242,204,224,187,219,231,210,219,221,186,223,211,217,222,221,226,242,224,241,200,217,230,194,206,207,237,230,211,245,213,180,240,208,221,224,236,235,209,227,229,217,219,206,229,205,218,230,227,223,236,226,238,221,232,207,229,215,201,230,223,229,242,210,219,222,235,221,219,207,231,226,222,229,223,230,233,219,243,242,240,225,214,240,226,236,248,204,252,244,213,200,204,215,248,238,211,241,232,241,233,220,236,215,246,244,226,232,241,221,249,243,245,221,229,232,239,224,243,224,224,222,245,249,232,243,227,232,244,243,247,239,232,240,230,243,235,240,255,254,243,233,215,222,250,216,247,234,238,248,243,243,254,221,225,228,238,196,204,180,210,197,90,132,216,227,248,219,222,238,243,201,232,194,171,207,216,250,185,168,137,110,75,61,214,239,245,225,212,227,197,206,218,233,224,216,241,203,223,234,180,189,167,200,175,172,223,236,255,148,188,248,218,188,125,143,168,193,184,191,227,207,213,106,177,143,140,68,47,242,202,135,175,132,113,20,17,59,21,89,242,230,195,85,162,201,226,215,202,232,195,128,204,236,205,126,91,193,218,145,62,181,241,189,201,193,168,119,210,224,234,217,215,220,224,213,241,222,136,212,214,212,206,223,210,131,93,73,67,67,77,44,28,26,35,50,46,20,76,93,86,58,1,58,106,132,84,55,130,109,154,113,40,36,26,84,56,50,46,58,57,45,69,20,0,15,14,52,46,13,32,19,31,41,46,58,72,89,81,100,131,131,126,109,145,114,120,105,111,101,114,120,89,122,95,93,103,112,114,141,125,111,106,142,122,78,191,237,229,182,143,154,144,147,151,182,198,194,174,154,144,138,157,
96,199,229,208,199,194,189,219,215,201,194,196,219,207,187,179,215,181,193,199,204,214,204,198,196,233,232,200,212,237,212,221,216,211,203,199,216,236,197,225,201,205,220,226,205,215,211,215,209,208,192,204,202,198,217,220,207,193,227,207,199,232,217,187,200,214,191,202,199,183,206,221,164,199,221,213,219,194,213,231,213,213,197,199,211,203,193,192,210,238,215,205,197,200,205,226,231,193,224,201,207,214,223,210,209,242,215,184,229,208,223,203,215,207,239,231,221,214,221,193,198,210,229,198,213,198,203,225,210,222,208,197,218,228,220,214,235,208,198,198,224,248,213,214,219,210,198,239,224,205,215,197,202,217,219,189,214,215,237,193,204,210,234,225,215,197,219,218,207,229,209,226,228,222,228,213,233,219,226,239,229,235,210,202,229,249,221,236,237,235,231,208,215,232,237,248,253,239,214,210,207,194,171,183,196,217,205,231,231,243,243,226,249,227,237,216,215,195,231,230,221,234,223,180,69,160,233,220,199,63,91,115,176,155,170,134,120,133,156,119,137,164,86,118,103,99,92,131,127,134,162,124,154,161,121,145,88,95,94,131,197,223,214,246,246,229,218,212,191,232,153,137,212,158,218,156,219,226,180,222,234,238,252,225,238,152,147,98,84,41,17,3,18,39,36,24,89,128,124,75,52,36,105,97,79,47,44,25,73,157,223,136,67,127,206,178,173,142,134,110,68,125,224,197,70,94,195,225,192,156,43,83,143,123,231,241,219,241,220,225,226,241,231,243,158,229,246,241,252,219,124,1,0,19,9,40,46,33,19,23,49,95,101,137,104,40,18,94,99,89,38,55,42,11,123,125,108,142,53,23,9,24,26,53,109,138,125,108,112,116,117,153,134,139,128,151,118,159,143,114,147,118,105,121,121,109,114,76,79,53,64,53,28,35,26,21,19,56,49,42,31,43,12,16,53,29,49,48,55,8,92,212,250,255,253,252,165,144,211,238,202,192,137,114,120,122,156,223,168,74,6,11,22,44,11,51,38,17,42,47,37,34,16,4,38,19,22,14,40,45,15,43,59,42,56,39,52,44,50,46,66,36,57,55,54,71,53,82,83,61,80,69,74,67,122,44,83,121,104,112,77,38,44,12,37,16,37,64,108,96,55,8,43,33,29,41,19,50,18,40,17,33,74,102,89,73,65,41,41,112,122,180,167,166,152,147,116,96,82,37,36,72,67,16
2,234,243,247,234,228,235,239,237,233,214,205,229,213,242,233,230,225,215,244,221,223,221,239,237,231,232,238,217,226,233,231,212,221,248,236,245,230,238,222,239,212,224,234,212,235,239,215,192,244,224,222,238,202,216,132,0,16,6,0,7,15,16,15,5,41,29,19,5,0,8,34,1,20,38,13,22,0,6,12,197,192,189,183,208,166,202,189,180,187,192,201,184,190,194,196,202,204,194,212,192,204,198,190,193,190,188,182,207,188,200,191,209,199,196,209,212,202,185,191,217,195,170,207,231,204,205,191,155,189,198,226,187,210,212,202,191,199,199,237,201,195,168,204,180,201,211,209,200,189,184,217,188,233,204,214,203,210,201,200,223,205,209,183,188,230,174,220,206,214,206,202,209,197,195,171,217,208,208,189,220,194,209,216,217,212,204,216,208,224,219,210,192,195,206,186,186,193,188,220,180,194,212,189,199,217,213,191,194,198,221,195,188,203,216,205,214,187,223,212,208,204,216,214,199,193,225,222,228,227,206,213,206,206,205,215,202,180,205,215,210,220,193,208,203,190,218,235,211,205,217,194,218,202,217,204,178,214,187,226,181,205,213,229,196,226,197,226,212,203,220,192,222,238,217,184,230,216,207,221,216,203,192,224,226,201,226,207,213,198,211,198,194,218,219,206,182,209,224,224,204,200,220,226,212,198,192,200,212,224,212,209,210,195,226,208,209,211,226,196,231,216,204,233,219,227,208,228,206,221,206,214,202,200,196,205,200,219,240,225,177,216,186,231,237,222,208,181,224,215,228,242,209,222,235,210,213,233,227,231,226,206,219,217,205,212,222,234,230,217,231,225,234,217,195,188,156,199,199,167,223,229,246,228,222,224,226,220,232,233,213,225,230,206,217,220,214,227,222,235,200,96,109,224,190,172,36,65,101,162,181,194,162,162,163,146,125,161,147,104,94,96,64,70,54,85,75,93,90,92,101,93,78,81,84,71,97,188,160,200,220,213,251,240,226,187,237,163,163,196,144,212,200,248,247,203,191,150,122,116,93,93,47,25,26,52,108,131,109,147,119,86,143,162,169,183,103,57,52,91,97,62,70,50,22,21,110,155,146,83,82,148,151,126,123,30,76,62,130,159,115,93,126,137,179,183,110,48,39,69,126,228,246,216,240,223,243,226,239,247,2
06,153,237,252,234,240,140,32,5,25,26,32,54,34,24,13,35,65,62,91,73,114,34,40,36,91,152,120,103,72,83,170,118,117,124,30,59,27,85,127,147,159,124,118,98,112,128,129,119,116,135,153,129,155,143,124,96,88,83,40,64,36,22,8,55,35,37,69,14,46,24,22,58,43,62,53,37,40,15,38,20,63,44,60,92,89,7,182,245,237,241,225,154,104,156,233,249,229,134,130,104,109,122,205,209,125,39,21,29,15,54,34,13,17,39,47,51,59,49,26,22,51,41,35,32,51,67,25,29,24,34,25,30,24,26,44,25,39,26,43,60,63,36,84,62,79,63,56,62,80,94,122,104,107,104,102,90,72,56,48,52,32,41,36,77,84,84,76,27,38,34,45,44,46,31,32,18,24,64,97,93,127,89,31,32,120,141,101,95,81,137,142,62,75,55,64,37,73,56,156,234,237,247,247,220,234,238,209,221,228,223,246,235,234,231,221,236,214,219,245,220,234,216,234,244,246,234,240,219,217,247,241,229,222,215,200,233,246,221,225,217,208,229,196,225,230,212,231,212,209,237,231,223,217,228,115,15,8,26,41,3,25,37,28,2,22,23,6,9,0,3,5,27,21,21,12,29,16,8,40,193,177,172,192,178,210,185,170,191,206,192,182,217,210,198,216,204,198,200,194,189,198,221,203,197,192,195,178,202,192,196,187,204,185,221,182,200,189,190,204,191,202,209,187,213,190,202,205,191,193,202,193,194,197,181,209,198,185,212,188,182,198,199,193,233,197,198,212,202,192,185,198,194,210,192,222,199,199,184,205,204,210,209,212,191,195,187,207,188,190,221,202,206,202,206,212,188,198,210,185,212,191,208,194,203,197,182,187,223,207,197,187,222,217,190,201,218,189,196,201,237,208,186,207,204,197,190,177,197,232,200,202,188,225,195,206,184,221,205,196,209,190,218,172,169,215,173,216,197,219,206,193,191,219,188,198,202,213,211,193,214,169,215,193,214,199,210,192,189,203,228,204,200,197,213,191,216,235,207,208,215,211,201,204,190,232,197,214,203,206,209,206,191,190,228,186,210,197,196,202,216,176,214,209,198,223,187,201,199,206,231,217,206,189,216,190,212,203,211,210,224,186,223,225,205,200,207,201,211,200,225,204,207,241,202,232,202,218,210,196,221,219,188,224,196,243,224,204,204,230,187,213,188,210,225,206,193,224,205,222,211,228,229,223
,212,214,218,198,222,221,215,224,229,233,220,234,207,243,227,215,211,210,218,238,237,226,213,207,224,209,192,184,182,184,172,193,214,201,220,227,242,236,224,229,229,238,237,209,222,232,224,206,216,237,222,224,214,215,226,233,200,58,123,184,233,154,54,102,109,190,163,174,166,156,201,180,144,168,172,91,120,96,84,86,71,55,34,63,69,48,59,55,89,76,75,40,72,151,176,137,185,196,226,219,218,219,242,172,189,235,143,151,168,209,175,57,40,30,28,20,4,38,44,55,67,76,118,135,190,170,143,156,151,179,223,211,111,58,62,107,98,40,28,34,28,80,204,245,181,123,188,206,174,197,177,149,170,162,218,185,125,192,211,204,202,192,129,74,67,53,99,231,248,213,247,242,228,235,230,251,198,147,243,248,241,171,22,12,37,25,16,54,35,36,33,25,53,34,94,97,97,109,98,32,41,33,86,114,156,158,157,160,107,113,135,121,171,178,171,194,137,130,112,134,140,146,137,120,151,108,90,91,67,59,38,30,26,42,17,51,35,47,35,30,17,45,62,92,66,65,74,29,41,59,89,27,28,49,24,41,41,55,60,33,69,70,99,252,246,240,242,194,149,156,208,250,247,201,141,136,130,116,149,230,206,98,13,9,30,30,37,44,35,30,17,31,43,14,42,34,25,70,15,30,47,75,22,28,33,31,52,17,3,19,21,10,12,20,24,35,29,26,31,14,53,34,63,61,80,69,69,84,64,83,97,92,93,93,103,105,81,78,90,142,110,107,87,62,42,15,20,42,37,35,25,32,29,27,69,76,99,107,70,24,82,135,77,63,18,44,76,99,77,42,47,58,11,53,148,235,227,251,231,233,203,239,239,237,238,223,242,228,203,221,216,238,232,230,245,255,218,223,237,226,229,219,222,223,229,212,215,220,218,209,240,215,229,232,197,238,211,237,239,240,178,253,240,229,211,223,212,213,213,209,208,131,4,3,17,7,12,23,5,7,28,19,5,7,2,0,14,15,32,37,21,0,23,0,21,28,204,210,166,195,207,192,188,185,200,193,202,190,194,193,211,221,227,199,188,180,200,192,201,212,201,193,205,220,173,211,220,205,199,208,215,192,228,189,228,201,201,226,203,177,211,180,198,197,191,190,212,194,184,201,211,187,216,183,205,218,191,205,220,213,222,204,200,214,181,192,184,187,205,213,192,200,208,197,207,194,203,166,196,213,195,224,213,196,224,191,191,187,197,177,217,201,208,201,215,20
1,217,210,187,186,208,199,216,175,217,227,222,206,185,186,202,189,187,224,207,215,199,226,216,225,214,188,217,202,174,201,224,194,207,216,199,236,206,201,231,197,207,179,195,214,182,198,205,196,203,201,221,196,218,209,213,196,211,207,203,197,196,190,204,173,210,208,208,180,202,227,205,228,221,206,203,203,201,229,213,205,202,197,217,211,203,203,210,216,210,218,226,221,206,199,208,202,232,207,190,209,209,202,192,196,193,202,218,216,221,195,220,200,201,217,193,188,208,211,199,195,208,221,204,201,191,206,223,224,207,198,201,201,207,199,221,199,242,200,197,214,222,194,220,214,220,200,206,212,215,205,234,203,218,207,203,209,199,214,220,195,206,238,224,193,184,226,218,209,218,232,217,206,232,221,216,209,221,211,234,229,218,238,219,228,218,247,224,176,189,204,153,180,203,186,210,206,230,235,234,228,225,210,224,218,224,225,213,208,226,236,206,231,220,224,195,222,221,244,210,220,209,40,109,228,225,218,122,156,149,160,160,150,132,192,199,153,145,122,128,116,122,96,73,63,59,66,52,63,69,66,63,62,75,97,85,51,85,161,141,83,82,121,182,224,215,229,243,166,204,241,125,69,14,31,55,0,24,55,118,124,146,161,212,203,203,170,159,118,130,133,113,110,87,118,150,169,117,58,49,131,147,61,47,61,41,67,138,194,111,50,134,148,133,168,128,160,170,116,126,123,89,133,102,117,150,117,153,115,66,66,146,234,239,251,247,253,214,243,245,254,189,178,236,246,209,39,34,1,9,16,59,40,5,25,20,45,59,45,81,122,99,106,93,57,65,34,27,45,53,73,110,154,87,143,153,119,161,145,142,139,116,158,118,120,97,73,49,46,64,37,18,37,57,48,25,17,17,12,24,53,34,57,83,63,54,42,74,113,46,78,79,83,58,72,66,49,25,33,41,64,66,51,63,58,64,48,154,237,247,254,241,188,163,149,241,246,254,133,123,96,131,129,169,235,151,86,91,67,82,64,48,56,27,44,39,14,59,68,55,39,68,57,30,42,42,42,32,50,69,62,45,13,18,27,15,17,10,46,36,55,62,22,28,9,26,26,37,23,42,18,33,54,54,57,97,97,91,58,58,103,86,93,100,111,116,93,84,68,43,59,57,26,10,41,51,33,31,30,63,81,78,105,59,55,41,71,60,41,50,96,91,64,66,79,60,21,112,234,251,245,250,247,238,230,227,230,224,226,2
16,235,235,246,229,236,225,237,221,239,245,233,218,234,224,234,228,216,230,224,229,201,237,209,209,251,201,209,229,229,216,233,233,220,233,219,234,232,234,238,222,240,226,222,217,221,222,129,6,1,8,3,12,7,0,4,27,5,19,11,9,0,25,34,2,19,15,22,14,5,19,23,199,182,207,200,180,177,155,197,185,187,186,208,216,205,203,188,199,222,189,198,189,216,206,179,192,197,208,215,199,201,177,204,193,189,205,206,207,189,205,220,179,191,176,226,195,216,183,198,215,199,198,208,231,199,184,205,196,198,216,200,178,217,199,227,205,208,198,200,186,197,189,201,196,219,203,195,205,193,183,207,208,209,204,204,215,218,217,217,205,205,204,198,193,214,178,178,219,198,195,176,211,195,189,194,211,176,199,218,198,214,213,204,198,197,204,201,215,172,199,207,198,201,182,197,204,204,201,206,195,207,193,202,205,194,209,167,176,194,194,182,188,208,219,187,205,229,187,217,196,215,222,207,217,208,226,206,210,207,217,198,236,203,203,213,212,177,218,188,215,174,202,186,192,193,211,215,206,213,192,231,201,214,214,199,198,190,214,199,216,200,231,209,229,220,202,202,189,215,189,211,209,227,204,194,193,175,217,191,186,222,206,209,208,208,197,187,208,187,216,230,225,227,212,197,215,209,212,204,207,217,207,187,202,211,230,199,205,184,241,186,212,178,198,223,200,227,217,219,220,221,228,205,182,246,202,203,196,230,214,209,197,208,235,214,210,208,208,213,220,223,200,206,222,215,224,214,201,207,218,232,237,246,235,245,222,222,187,203,152,196,221,213,239,224,226,240,233,228,223,210,217,199,212,224,238,231,233,226,229,227,226,228,220,223,211,221,219,227,226,231,180,50,95,200,238,224,151,148,152,170,166,183,165,161,169,113,102,126,151,95,138,94,90,84,53,61,63,62,49,73,67,69,62,45,72,51,56,107,104,123,143,199,216,212,215,207,238,146,207,243,160,87,118,108,67,96,202,219,216,253,226,255,228,233,240,197,233,226,201,98,99,148,117,184,185,200,165,26,48,72,117,53,32,50,34,58,104,153,132,84,70,67,125,171,146,112,99,93,104,61,53,130,111,115,133,121,167,165,84,30,118,238,223,231,218,239,234,218,226,229,190,234,253,174,78,5,10,18,3,4
5,41,40,5,13,44,57,50,34,34,62,130,98,88,95,100,63,56,67,62,22,50,94,90,148,111,126,135,116,111,103,66,41,48,53,46,46,17,50,21,25,18,55,48,52,32,39,28,31,52,40,40,46,102,71,54,54,88,87,58,40,54,112,151,151,90,79,107,96,123,130,129,97,115,109,144,122,84,190,248,252,241,212,173,116,188,233,209,112,123,130,135,145,226,235,137,57,126,131,142,50,101,74,64,72,62,60,54,46,64,78,67,51,23,38,79,60,63,46,48,24,18,33,12,25,37,24,28,26,44,50,62,37,11,24,57,47,52,57,56,39,17,2,15,37,48,44,43,75,71,96,82,90,74,50,71,70,107,103,94,104,68,66,45,55,36,14,21,20,47,92,121,102,71,49,39,33,46,85,97,92,79,64,49,52,70,194,228,240,255,232,238,243,224,223,213,218,234,221,225,214,225,240,245,234,214,238,240,227,222,226,235,220,230,242,243,220,224,216,225,224,209,233,220,231,235,230,232,220,207,254,233,230,223,211,228,240,226,238,217,216,240,236,216,207,228,101,19,3,4,25,0,9,20,11,9,1,21,15,5,1,14,2,10,14,7,17,3,28,14,17,200,226,205,198,194,161,187,180,197,192,179,198,194,187,181,173,186,193,194,200,194,179,186,200,197,204,189,208,181,194,208,193,166,197,209,207,200,188,198,202,189,211,188,216,197,200,214,214,206,218,228,201,195,208,226,174,206,219,180,182,192,182,184,201,193,217,204,199,193,179,214,187,186,204,182,164,198,208,195,176,195,204,179,179,188,179,205,199,202,187,215,194,208,204,219,227,207,207,202,200,194,195,203,221,238,197,163,203,192,177,201,205,223,207,201,188,184,198,218,198,191,192,187,201,235,185,208,203,193,192,201,201,209,210,204,199,215,197,206,186,199,223,225,224,204,213,187,191,188,197,234,199,196,202,205,210,199,193,231,201,211,200,205,225,204,200,192,198,209,192,219,220,207,195,217,204,207,203,212,194,200,188,218,197,182,214,187,219,220,204,200,194,200,220,179,214,191,190,211,183,195,191,214,214,188,208,214,213,207,206,208,227,198,192,197,215,188,241,222,200,216,211,184,204,219,205,230,202,203,200,188,179,208,215,222,215,241,178,215,169,195,212,212,188,217,184,193,183,222,232,227,218,217,249,211,199,231,218,213,201,211,211,222,221,227,199,215,216,221,226,219,208,208,
202,220,233,233,222,224,234,248,233,182,171,153,178,187,209,192,224,222,244,223,237,219,231,218,224,248,222,217,209,205,206,225,231,241,223,209,232,210,219,223,213,219,236,222,226,203,222,171,53,89,172,248,213,156,164,143,190,176,165,157,147,162,130,136,169,134,116,82,9,24,31,56,26,11,42,48,78,27,50,82,87,62,28,26,54,106,191,221,247,244,234,210,216,226,144,206,230,196,193,229,234,213,186,243,239,239,255,243,238,240,226,215,213,180,216,189,102,115,170,182,240,242,252,198,83,63,90,113,41,29,47,42,91,213,244,228,147,112,199,252,254,200,160,139,148,221,101,88,230,227,239,217,203,237,190,154,75,102,227,225,235,230,214,229,237,221,225,196,228,228,106,42,5,21,5,29,58,18,23,36,65,48,50,54,21,78,113,119,110,102,120,123,124,98,75,63,26,83,128,87,152,121,70,53,44,6,57,40,23,11,48,67,63,49,75,46,40,55,43,69,59,29,30,37,55,49,60,36,46,68,119,154,197,168,147,135,146,176,145,196,164,131,150,121,141,141,120,104,61,99,45,59,39,66,20,133,210,255,219,155,88,172,249,188,142,147,134,122,165,226,196,123,35,11,35,29,31,60,16,52,61,18,27,53,34,42,70,68,68,85,71,82,67,58,67,68,47,48,46,20,46,65,11,34,34,34,74,44,38,28,59,54,42,52,67,35,19,4,35,62,33,25,28,36,9,23,37,27,39,69,49,87,72,100,76,86,101,105,101,107,94,63,75,68,38,89,65,105,106,102,44,49,83,117,125,115,80,78,21,42,83,208,251,231,255,241,221,240,219,217,220,219,230,234,215,238,221,249,229,224,243,238,219,216,226,207,207,251,238,207,219,225,226,235,224,227,213,214,244,210,220,234,241,226,232,216,235,232,244,225,223,211,235,233,222,231,215,241,220,209,236,231,139,2,14,2,2,20,16,5,15,25,17,19,10,11,5,0,8,14,7,18,0,6,14,27,5,207,186,179,189,225,221,172,223,223,208,186,208,161,193,187,185,185,162,179,191,187,199,205,213,161,188,209,186,201,206,203,220,197,195,186,192,185,215,196,200,187,201,183,212,186,201,181,213,184,204,188,194,192,180,203,214,224,189,202,199,186,208,216,219,213,187,189,202,187,208,206,189,195,207,175,180,203,189,197,206,199,186,196,197,197,191,205,220,200,197,214,211,203,195,194,199,192,194,208,200,196,202,206,216,20
2,196,212,192,193,194,212,188,194,181,197,193,208,208,190,207,175,200,212,187,171,186,214,213,206,205,190,214,190,202,169,203,199,187,180,184,195,222,199,202,188,200,201,200,215,215,195,198,221,194,203,225,231,235,213,194,217,200,189,213,214,209,197,216,219,228,227,219,203,213,197,206,226,207,197,206,206,176,199,224,192,220,188,196,189,194,206,185,211,206,213,207,201,198,197,231,191,210,185,209,188,207,194,201,206,182,200,205,211,218,215,189,188,194,187,197,177,215,202,212,207,218,211,204,201,200,202,192,203,203,214,215,203,195,187,208,206,201,211,211,190,185,190,215,218,184,203,198,142,191,175,191,238,214,205,196,212,212,213,211,205,204,223,214,207,224,208,190,232,215,227,224,242,235,226,218,188,173,164,160,185,220,205,215,225,233,201,228,207,240,212,219,233,226,246,234,210,203,219,231,218,210,229,201,214,214,238,209,230,238,202,218,200,230,212,231,141,51,68,178,238,217,174,131,168,195,193,177,153,152,187,167,154,177,137,85,5,5,10,19,17,16,10,34,34,19,6,45,52,59,12,15,110,117,179,178,234,233,227,205,191,219,239,148,218,214,172,215,250,203,196,215,249,232,227,225,218,185,190,195,140,143,153,214,155,112,158,160,178,206,232,247,156,73,66,92,122,48,28,26,18,78,232,246,246,165,133,225,254,233,172,176,199,224,235,143,148,242,233,222,188,197,241,224,189,96,63,199,199,238,233,227,244,206,244,213,192,237,123,12,34,1,28,27,56,19,30,28,26,40,48,69,22,83,202,166,137,81,51,89,105,109,107,130,110,77,116,125,102,151,85,36,28,43,15,32,54,49,24,58,72,76,56,60,58,63,45,75,74,62,30,38,58,95,140,158,162,196,131,197,194,208,198,174,149,128,126,106,69,61,20,52,46,55,23,41,17,12,48,17,28,35,35,31,7,94,146,183,139,109,176,241,142,130,150,133,139,181,241,171,60,20,1,22,24,19,19,32,7,17,21,26,22,23,5,38,28,27,42,15,37,61,49,45,57,74,67,50,67,66,78,60,69,64,41,60,53,75,67,29,43,28,60,73,29,39,55,98,38,37,49,40,15,34,26,7,18,25,50,56,41,48,61,84,69,63,73,96,90,88,101,91,86,103,92,90,103,94,91,133,145,178,151,125,45,26,54,34,89,220,241,252,242,251,236,228,233,209,240,205,219,240,218,234,230,23
7,241,233,234,240,241,236,243,222,231,241,198,209,203,226,222,213,239,246,215,216,217,214,227,239,214,223,231,233,222,233,227,230,219,249,220,228,207,230,223,238,234,219,213,241,243,131,3,0,8,14,2,15,17,11,2,5,7,14,7,5,10,0,12,8,13,9,14,23,15,27,219,205,167,196,200,171,195,181,201,187,202,191,214,200,212,187,190,199,189,191,206,198,210,178,192,209,191,177,197,175,194,196,193,192,177,197,191,182,190,211,194,208,206,210,192,186,174,171,201,168,172,209,205,201,193,198,210,205,218,177,197,206,185,186,192,198,209,190,191,200,204,189,201,188,187,173,192,203,175,201,209,193,173,190,194,212,172,201,199,188,212,218,197,197,197,199,180,203,217,207,196,231,190,210,188,195,196,208,205,190,196,196,211,199,212,202,217,208,225,191,206,202,224,194,194,210,193,209,169,218,210,212,180,196,196,211,187,207,219,203,217,194,188,180,199,220,194,208,207,193,189,214,209,201,180,208,215,207,209,204,209,202,177,209,199,217,229,209,211,196,189,225,214,193,210,208,206,174,216,186,199,209,180,184,169,212,216,197,220,193,181,206,225,170,190,199,203,202,201,195,209,187,221,213,192,183,202,195,229,236,196,203,203,186,195,205,183,192,182,203,199,200,196,212,186,183,204,194,190,194,199,206,198,190,196,198,183,226,205,212,208,225,234,232,186,211,226,211,218,198,208,128,41,71,151,212,197,218,184,224,212,222,204,205,204,199,197,226,219,231,243,207,229,226,237,205,193,203,173,173,205,189,216,210,216,241,217,240,239,247,221,215,215,195,229,225,236,200,221,199,206,211,222,227,241,220,224,199,215,212,211,215,238,214,230,237,216,225,220,215,136,49,141,227,249,230,166,148,160,178,162,177,172,174,205,175,174,173,141,48,15,8,28,33,47,18,20,29,57,27,27,63,76,55,28,53,130,224,221,208,228,250,237,216,209,234,243,148,197,208,206,218,191,156,138,203,214,197,169,166,127,151,174,162,128,153,211,247,114,120,165,159,174,192,229,247,171,68,42,83,111,53,25,47,7,72,199,234,210,152,118,228,244,183,99,187,200,174,162,164,204,242,225,240,161,198,228,211,199,129,88,210,214,221,251,239,230,228,244,212,196,124,35,31,24,18,42,62,
18,11,23,38,48,60,54,21,66,196,254,233,169,83,57,66,52,80,72,81,81,74,122,130,141,94,54,2,10,3,12,50,75,76,16,51,87,97,21,22,58,87,133,161,128,160,180,185,191,195,203,210,174,186,140,120,121,61,39,49,35,24,6,19,33,36,14,17,15,35,21,24,19,20,30,9,29,20,7,21,35,0,43,143,143,126,191,209,127,133,140,123,150,200,232,129,28,19,2,29,18,4,32,27,10,24,44,12,33,24,28,32,7,3,17,11,21,1,39,4,12,44,21,44,60,47,34,52,44,39,51,92,78,76,62,66,68,53,68,92,57,57,61,64,25,44,26,51,15,43,49,33,43,66,79,66,36,16,21,28,41,44,47,86,70,85,93,85,97,97,108,99,84,105,105,96,114,84,67,58,65,29,59,77,191,246,252,235,221,229,217,243,232,239,214,221,236,244,235,231,238,246,239,234,233,231,232,207,235,208,216,232,199,224,239,240,223,228,239,229,222,235,250,239,247,240,245,219,226,229,204,218,233,219,222,233,234,235,223,217,217,215,223,235,239,220,233,119,3,9,5,22,9,4,0,2,5,0,0,19,18,2,3,6,0,7,2,26,15,5,5,0,195,181,194,186,207,195,214,180,190,192,195,189,178,189,201,176,232,199,207,193,184,171,185,207,196,175,195,209,198,216,199,196,190,195,195,192,218,209,184,151,217,182,190,202,187,207,198,195,169,215,192,195,190,203,199,187,197,191,196,204,203,198,181,194,181,199,199,195,211,175,194,217,198,188,197,162,216,203,174,190,196,184,195,194,190,229,196,186,205,198,225,218,187,222,200,199,177,212,196,193,200,185,188,229,202,184,216,195,198,209,172,200,183,178,185,218,181,202,193,189,203,186,226,182,198,212,200,180,184,182,213,181,213,184,181,242,184,222,199,178,210,195,207,203,199,195,196,198,203,189,193,210,220,184,191,212,216,203,196,215,210,201,216,193,189,185,193,191,190,200,186,193,191,191,214,196,218,203,186,186,215,193,199,201,199,189,200,216,209,207,190,210,206,200,191,177,207,230,203,210,198,183,172,191,191,199,188,193,204,199,197,199,208,214,190,190,205,204,207,201,182,223,199,196,198,228,209,179,200,213,195,214,204,210,213,196,204,191,229,208,209,196,179,191,201,196,162,205,195,213,183,111,30,74,182,210,204,199,179,223,217,226,189,217,211,219,224,216,216,218,217,217,191,212,174,202,182,187,20
0,202,216,202,240,213,202,228,223,208,214,245,234,196,245,245,196,206,228,212,246,222,199,210,215,235,217,212,234,221,208,198,211,236,236,216,227,207,226,198,215,215,112,60,141,225,254,248,155,157,158,172,161,149,184,184,181,129,139,156,165,159,98,127,154,99,29,20,52,43,46,58,32,87,51,60,83,79,189,236,234,215,227,233,193,203,199,209,228,132,182,180,176,211,178,96,101,191,157,166,171,167,199,195,225,220,242,214,245,227,39,178,182,180,172,198,223,242,199,65,37,76,113,80,28,48,35,78,234,228,212,163,90,212,202,129,117,179,187,153,154,152,227,249,229,235,174,189,210,219,197,186,66,117,222,215,209,220,219,214,251,233,120,80,0,9,34,35,66,36,21,13,38,46,44,72,20,23,188,243,255,177,132,93,59,91,72,76,95,70,30,57,126,136,137,106,31,24,1,27,15,29,66,99,94,119,100,66,57,99,149,171,196,183,204,197,205,169,127,103,81,57,23,24,64,25,4,25,4,0,34,9,9,9,20,30,55,14,16,24,27,34,15,27,23,33,36,2,12,29,17,20,18,62,119,168,234,171,125,151,145,144,144,208,222,76,77,14,18,13,18,32,7,10,22,32,26,42,13,19,11,1,27,23,20,29,9,24,12,10,41,5,9,5,29,5,24,17,8,25,44,52,37,35,58,60,44,89,79,69,100,73,78,39,26,16,53,36,36,33,41,21,78,78,87,51,32,26,82,67,72,53,82,52,34,47,66,72,110,107,92,113,92,92,64,54,63,37,51,52,52,96,71,117,199,247,241,233,243,235,235,234,252,210,228,217,216,242,225,225,226,228,248,225,218,229,233,222,206,242,242,242,227,243,233,222,223,205,213,231,220,213,250,233,238,233,225,240,250,237,240,205,233,229,220,244,239,235,230,218,201,231,238,207,243,205,225,113,4,17,3,6,2,21,6,22,7,28,8,1,11,2,12,6,9,19,16,19,20,3,29,16,179,182,171,174,182,193,189,182,207,194,195,184,202,210,195,192,199,194,199,200,189,165,180,173,220,190,195,197,195,201,185,204,179,177,175,204,201,194,204,212,171,182,188,189,216,204,228,185,197,177,186,192,184,193,196,171,177,205,203,172,178,162,186,186,185,187,184,211,180,183,181,190,193,191,199,184,173,188,190,185,187,192,175,194,202,200,188,178,195,188,199,212,186,189,189,211,205,204,199,201,179,205,182,187,208,198,208,206,217,227,189,186,179,206,173,206,192,2
22,198,218,199,167,189,204,200,208,198,224,207,206,204,232,197,189,205,184,209,189,176,224,196,190,185,202,179,183,211,181,199,225,188,214,221,218,194,202,195,205,206,198,191,185,212,196,231,196,216,202,184,203,208,203,199,199,186,207,200,220,206,192,200,197,204,192,208,198,208,232,185,209,204,177,187,215,209,190,194,206,201,206,231,199,203,183,218,223,207,195,194,194,193,209,203,200,182,212,214,194,221,192,201,203,210,213,189,206,188,216,194,178,194,215,193,174,190,195,202,207,204,190,233,191,211,234,205,179,214,235,223,187,210,161,136,180,215,247,194,195,222,195,202,218,214,223,205,216,201,209,215,193,219,184,183,204,179,170,179,202,207,233,190,218,208,228,221,216,208,221,218,211,223,220,221,225,227,208,226,231,206,216,217,227,234,243,194,216,221,210,233,197,211,235,223,222,220,202,229,232,199,179,123,48,146,225,231,221,200,185,152,189,154,147,193,121,172,121,152,178,173,205,203,209,226,146,63,36,60,41,39,29,45,99,91,132,109,137,207,249,231,190,197,240,211,203,210,240,220,155,193,192,199,173,153,144,121,171,174,186,229,206,248,226,241,200,196,179,216,137,38,189,198,176,161,199,237,242,170,54,20,63,109,67,42,19,3,74,223,239,212,159,138,203,200,165,168,174,202,160,127,160,199,225,246,224,149,184,208,203,224,243,134,57,133,162,197,225,242,244,252,188,56,27,18,12,11,23,29,11,47,11,23,45,69,59,25,167,252,225,225,212,160,113,52,70,54,103,73,67,75,84,126,116,144,109,42,32,34,70,46,58,73,62,143,150,169,134,136,155,150,120,102,80,57,51,59,33,22,18,25,17,11,16,25,15,22,11,28,19,38,35,33,17,25,40,49,18,41,58,23,32,38,25,13,20,3,20,18,23,32,13,26,26,71,171,251,223,129,135,142,171,167,250,200,73,38,12,21,7,10,32,39,8,16,23,30,5,16,31,20,10,13,9,14,24,28,17,42,20,17,37,22,10,23,9,53,18,31,39,10,19,4,28,2,34,51,41,44,47,53,76,67,73,75,78,72,96,68,53,42,51,55,82,70,43,58,67,72,71,111,83,68,65,37,21,57,42,43,38,70,92,101,121,73,40,60,49,72,93,120,113,91,152,234,250,244,255,247,217,240,223,219,224,239,254,227,228,224,249,229,225,227,247,227,218,233,232,205,227,241,237,214,239,230,2
23,235,225,251,229,235,216,212,237,237,195,234,239,233,225,228,243,234,239,223,243,218,212,223,216,228,237,226,243,251,241,214,125,21,3,3,2,17,19,10,11,32,9,27,21,19,8,26,9,21,19,8,10,12,28,2,17,194,181,183,184,188,185,179,218,194,166,183,191,207,199,171,200,210,183,182,179,189,199,181,181,183,173,193,195,204,179,188,211,216,192,207,190,189,226,202,195,207,197,187,204,190,206,187,168,196,167,198,169,199,200,174,173,175,195,178,166,212,164,167,197,207,210,207,201,192,157,164,218,197,205,218,191,199,212,184,194,209,189,197,180,216,172,206,196,207,204,200,205,205,205,186,198,212,192,204,200,210,199,202,188,192,181,180,201,201,195,209,183,211,215,186,207,185,185,200,194,189,210,177,213,204,187,229,202,189,194,204,210,198,186,200,198,217,204,189,200,221,199,197,207,212,197,195,188,195,197,194,204,200,222,196,188,209,180,182,166,198,207,206,219,199,194,204,205,203,201,227,180,192,193,199,204,196,186,216,210,213,210,184,189,186,213,188,218,231,192,198,176,201,210,220,192,235,190,201,203,201,199,184,215,208,194,209,187,207,203,202,189,188,178,203,219,204,197,210,191,207,205,201,197,212,213,222,176,197,206,209,217,226,199,210,212,241,210,216,210,195,217,207,189,195,215,204,177,210,205,214,192,210,229,213,232,215,193,211,220,208,225,224,230,243,215,188,200,182,171,182,164,178,207,203,199,201,232,215,190,218,227,230,204,208,218,225,225,209,197,207,211,224,223,226,200,204,238,222,229,206,233,235,202,231,214,204,204,227,212,207,228,187,236,223,231,223,219,219,186,119,97,152,234,247,209,162,167,156,206,172,135,97,84,140,158,155,170,196,200,163,186,182,162,30,61,74,24,44,36,71,113,40,158,190,178,232,246,240,197,209,226,197,193,231,215,233,165,147,204,249,147,208,144,149,212,195,228,240,201,239,235,175,192,161,162,252,161,98,239,207,199,176,217,250,237,175,40,52,36,71,96,22,4,24,58,227,251,214,146,166,255,221,146,176,205,239,163,95,196,253,241,246,221,179,237,197,202,253,254,211,83,57,126,197,231,219,229,238,91,34,0,1,25,55,46,25,17,9,23,53,54,44,37,138,233,244,251,255,234,214,136,
138,107,180,167,61,101,168,198,193,219,214,146,193,202,180,213,233,241,235,154,39,20,26,8,37,39,43,23,9,69,50,61,33,56,211,248,196,242,157,145,222,250,247,249,242,227,143,165,234,201,147,115,78,32,26,56,34,54,127,89,122,97,32,36,30,42,68,56,29,16,17,42,29,13,16,33,14,23,51,32,45,61,47,92,100,135,207,129,97,67,72,89,114,130,80,114,86,58,63,8,38,15,34,18,30,15,9,25,14,22,30,40,9,27,14,14,127,189,184,131,51,61,22,28,80,175,112,125,44,26,93,62,27,45,92,90,61,55,66,42,12,37,18,29,33,23,15,38,25,17,24,30,31,19,23,14,28,26,40,31,11,10,0,34,48,36,49,30,53,67,63,21,34,32,67,79,69,22,10,23,35,33,20,16,28,26,15,9,36,50,22,20,13,21,20,17,44,32,42,29,18,39,62,82,42,51,61,47,17,51,81,73,115,110,111,73,51,91,89,36,59,195,212,216,249,243,243,222,226,229,234,252,217,216,237,236,240,244,223,243,244,250,226,232,220,234,220,244,225,243,233,216,240,244,238,252,240,242,237,231,230,235,233,246,226,217,226,235,233,224,227,247,241,240,238,222,234,224,232,228,246,243,223,224,226,227,115,30,14,11,34,15,15,2,15,28,19,16,31,25,8,0,23,31,20,5,24,18,23,17,18,193,193,184,212,197,167,191,186,152,187,193,181,170,174,209,192,186,195,174,174,199,188,201,195,193,206,181,186,207,211,202,173,203,197,219,177,196,195,218,174,201,182,192,198,192,188,162,196,206,174,207,174,193,195,187,203,202,178,182,204,198,183,193,168,189,215,193,199,207,201,190,197,185,184,205,209,187,199,176,207,215,207,223,191,202,203,191,199,205,191,191,183,189,202,211,190,183,182,207,218,197,204,205,188,214,208,206,189,215,193,204,189,212,188,199,191,188,187,180,207,204,222,158,201,184,189,188,202,220,179,193,218,198,182,209,197,199,210,209,182,194,204,196,201,219,202,196,200,195,211,207,205,188,190,182,216,201,168,191,191,208,214,196,196,193,212,172,204,191,195,189,193,177,191,186,222,188,204,209,190,180,179,185,191,199,188,183,192,199,171,206,197,202,186,189,201,209,176,198,193,180,172,207,202,172,201,192,193,182,184,205,182,180,198,174,213,190,167,190,201,211,215,179,180,191,192,198,205,186,188,195,198,209,192,198,193,196,194,207,
185,195,206,176,206,195,192,188,212,196,183,181,194,214,210,199,205,197,194,200,226,225,199,197,206,198,221,198,211,171,203,210,201,197,196,206,196,223,209,195,199,213,207,217,221,205,231,213,218,222,212,224,202,222,204,219,200,213,231,221,224,225,186,193,238,195,247,216,201,207,207,223,209,204,190,209,238,181,107,171,155,104,86,116,119,99,103,76,95,69,81,94,93,101,100,104,133,83,143,174,106,115,137,168,147,161,200,198,192,194,183,130,59,44,22,25,17,22,28,0,1,27,21,23,11,12,18,49,73,69,145,192,182,202,110,120,127,110,170,236,202,255,206,183,168,109,223,221,249,228,213,221,163,162,202,206,252,164,102,60,72,62,34,108,73,34,55,225,234,135,192,194,228,129,107,172,168,245,211,230,208,159,209,195,166,222,253,229,198,52,19,19,2,24,56,28,30,24,52,57,71,23,35,177,239,210,214,173,112,181,250,251,255,250,245,139,138,215,245,244,161,71,72,90,31,43,24,67,99,103,105,68,11,46,42,25,45,23,14,18,32,8,0,25,34,17,47,13,25,6,24,31,48,44,19,108,179,113,82,80,89,85,105,54,36,54,57,58,45,12,29,28,28,44,24,9,11,25,7,17,23,50,37,33,18,25,81,147,98,64,68,25,19,25,30,61,92,49,8,60,140,92,107,211,241,181,133,129,112,55,24,14,45,6,22,4,49,31,22,23,26,27,22,41,24,40,25,15,54,33,47,38,34,36,17,26,24,48,26,57,33,55,20,20,72,77,56,27,19,34,34,19,18,15,20,39,25,54,8,7,31,28,29,38,24,53,133,83,48,43,30,74,85,63,87,59,67,54,18,41,99,107,80,94,100,58,68,66,70,50,68,175,237,249,255,227,231,243,238,239,245,225,230,241,251,252,231,230,234,233,223,226,228,252,242,247,236,229,218,243,230,241,235,240,239,237,241,240,222,219,235,242,242,248,233,236,233,225,236,217,228,233,244,217,250,224,215,217,231,228,240,242,219,220,230,238,119,0,23,21,2,7,17,18,7,1,5,26,19,10,4,8,14,24,12,19,2,7,34,32,9,177,173,191,184,201,204,180,186,192,195,176,193,190,178,204,167,163,204,156,194,218,186,195,196,174,183,179,180,193,203,193,176,194,195,192,168,189,188,205,197,202,186,183,193,187,220,209,210,193,216,195,197,187,192,181,188,187,173,199,190,189,208,204,180,173,213,193,205,210,188,214,198,199,196,203,209,187,216,179,177,210,
202,198,206,185,208,192,196,172,201,199,194,191,203,202,201,189,188,194,193,222,188,205,185,228,219,196,215,204,216,216,197,188,222,182,201,194,199,198,177,207,198,191,207,185,204,181,204,199,216,195,205,186,209,205,175,208,180,213,198,213,180,215,205,193,189,184,195,178,187,193,210,216,196,169,210,158,211,212,185,171,200,190,176,193,200,202,184,194,197,183,199,187,186,198,183,170,191,205,179,204,184,211,206,189,192,221,191,214,197,178,191,183,182,200,185,206,194,208,167,188,178,184,193,174,207,198,224,191,169,181,166,177,188,186,196,202,191,178,173,206,192,190,218,185,192,215,193,205,208,200,179,214,182,194,203,187,215,208,180,210,206,212,189,173,199,171,205,225,211,202,219,211,212,214,223,197,197,197,199,233,182,205,194,207,202,195,210,216,194,198,234,217,205,199,215,197,185,202,200,226,192,213,213,185,217,194,227,217,190,221,226,200,223,205,212,201,198,243,198,204,186,203,240,229,219,218,204,211,207,198,212,206,211,242,233,204,140,125,127,82,88,99,94,86,112,105,127,97,72,77,65,82,149,157,206,196,192,204,152,76,137,241,199,159,192,178,125,154,149,98,80,157,217,206,205,181,129,128,141,64,11,7,25,107,77,32,39,22,36,36,32,49,40,106,144,115,244,255,222,254,169,164,154,113,216,240,243,234,193,237,188,180,220,213,255,156,87,73,48,60,22,104,83,42,38,191,187,138,210,206,196,97,143,215,191,249,208,210,173,147,210,190,179,231,248,193,71,3,31,9,12,41,26,27,23,41,57,81,32,24,167,247,245,230,158,145,171,242,246,237,248,245,194,136,217,242,239,193,101,105,109,118,45,38,16,111,135,80,142,56,25,59,51,25,53,34,11,16,19,16,19,28,12,31,24,12,11,48,27,59,24,54,64,141,121,54,70,61,66,55,58,49,42,40,77,34,28,63,27,63,15,27,30,7,30,43,20,14,2,25,16,28,28,39,79,85,45,61,58,15,21,3,5,59,119,59,67,124,155,74,91,183,223,194,90,155,113,71,27,43,45,38,42,11,43,20,35,39,29,20,19,33,57,10,8,45,73,69,57,95,42,45,28,30,49,28,37,49,37,18,19,53,57,82,80,73,51,55,31,26,12,27,28,16,35,5,31,12,14,37,6,15,72,198,172,124,101,60,49,66,73,93,66,77,50,55,45,38,78,103,106,111,112,85,58,78,99,35,41,200,227,2
15,238,236,238,220,206,245,246,243,222,242,235,251,244,224,232,223,236,248,236,230,251,221,230,217,223,240,225,237,235,232,219,244,230,243,226,238,222,214,225,224,208,240,234,248,219,221,227,196,246,231,234,218,227,249,221,242,217,226,226,241,247,229,111,0,1,4,8,0,11,35,11,1,14,6,7,6,1,2,16,4,9,12,9,56,16,21,20,189,185,174,175,208,197,186,201,207,173,196,185,202,163,196,188,172,177,183,194,192,198,194,205,210,202,181,183,183,173,191,207,176,195,174,202,164,209,203,177,195,195,193,228,180,170,197,209,205,190,208,197,183,198,180,183,215,209,181,196,192,192,198,203,204,202,179,178,210,226,200,192,178,191,182,208,187,188,204,187,180,196,181,189,204,197,218,175,187,189,199,203,189,208,208,168,211,205,216,206,212,215,213,201,219,200,207,191,204,194,209,218,219,210,200,212,215,178,179,230,176,178,221,203,205,193,179,212,187,201,189,208,186,205,176,195,195,198,198,176,186,205,187,169,200,215,212,210,211,197,164,203,188,189,193,183,204,204,178,192,200,181,218,164,203,190,200,204,201,193,207,190,173,200,185,195,186,205,182,184,175,191,210,182,198,183,174,186,185,195,194,182,162,177,186,192,179,209,195,173,196,189,191,217,202,198,213,210,187,197,183,201,187,207,190,206,183,200,197,206,196,193,198,177,187,206,187,200,165,208,216,213,187,202,200,211,200,229,195,201,202,194,191,195,204,202,202,201,207,185,198,223,218,225,211,197,199,214,218,214,229,191,180,206,216,185,212,204,206,212,223,193,217,186,203,219,219,211,207,198,219,206,219,205,222,200,218,205,206,221,213,210,219,209,205,210,205,204,223,211,233,201,198,246,212,200,204,207,210,223,217,207,207,212,233,185,135,96,95,75,70,59,73,47,71,147,211,253,204,94,184,187,219,250,254,239,239,213,221,148,88,129,198,173,103,138,108,125,122,93,82,100,229,252,237,238,236,244,208,194,108,108,118,232,232,216,179,94,54,71,50,35,20,25,44,30,61,166,233,225,178,120,176,154,161,229,232,238,225,180,240,188,155,210,195,251,153,79,51,47,70,47,115,134,69,51,121,163,157,246,206,111,80,142,211,205,252,214,234,147,162,223,196,191,238,240,109,7,17,24,1
6,32,42,33,29,34,62,77,28,29,138,230,253,249,172,111,206,220,250,246,236,238,192,143,204,246,212,246,115,104,107,92,82,30,89,103,128,153,133,139,74,9,19,10,44,40,25,26,15,41,9,13,12,23,15,32,31,20,38,18,54,25,66,80,112,77,66,36,62,42,29,36,24,33,33,26,14,40,37,67,41,47,21,42,22,39,19,7,38,29,34,17,32,12,74,103,78,34,59,11,32,9,19,31,58,137,115,108,140,140,80,33,52,63,77,43,60,77,77,36,72,50,66,66,32,44,9,26,28,26,32,30,48,43,15,45,38,68,59,20,48,32,38,43,71,45,32,48,50,93,72,67,71,106,87,82,81,88,72,75,43,44,29,58,24,36,34,35,11,18,39,72,86,116,146,109,94,146,132,97,73,68,82,75,50,22,47,11,54,94,81,74,103,94,76,83,66,83,49,19,214,233,252,246,238,242,234,212,251,245,222,227,227,238,248,239,228,238,241,222,246,239,238,236,224,247,210,229,225,220,244,242,238,241,245,234,239,246,231,226,226,217,217,241,212,203,242,227,229,244,216,233,249,240,240,222,208,230,248,218,240,239,232,222,230,111,0,5,3,0,24,8,37,11,6,6,11,8,4,5,6,2,6,9,14,31,6,17,30,0,193,189,194,189,192,195,177,187,187,205,206,164,199,209,186,180,201,182,204,196,173,181,189,190,178,202,191,184,192,185,222,187,191,187,204,178,188,203,203,191,170,175,178,193,207,208,187,177,167,188,197,203,191,205,204,199,167,198,217,179,190,203,194,207,184,206,197,189,196,214,193,178,195,198,197,192,188,194,191,209,216,195,185,216,211,192,196,189,181,197,219,199,175,191,188,211,196,208,205,214,195,219,217,189,203,202,184,187,194,208,215,221,210,221,194,199,223,197,189,188,185,196,197,186,192,213,203,188,210,189,201,194,206,181,190,209,198,196,215,189,174,196,195,182,194,206,182,188,167,187,174,193,192,203,181,196,190,185,190,174,198,198,181,191,196,188,198,201,196,224,166,181,183,219,186,192,196,196,203,195,197,213,192,205,167,175,190,192,192,194,208,197,158,190,180,227,187,194,182,194,191,200,163,186,184,210,210,213,173,211,200,198,207,183,197,208,186,179,190,198,182,185,202,181,172,192,205,199,193,194,212,192,220,219,202,204,216,208,199,196,194,180,209,192,225,197,205,207,202,203,200,198,180,220,206,194,203,233,190,214,205,21
7,227,190,199,207,188,222,186,198,234,213,231,222,194,206,209,207,195,220,213,201,210,220,210,202,223,205,218,234,218,208,221,225,214,217,205,226,224,209,233,223,228,251,227,248,226,201,216,193,214,221,203,181,229,183,96,83,65,56,45,34,35,35,76,134,201,222,204,197,248,255,238,230,247,239,184,107,123,125,59,58,77,69,77,62,50,36,29,63,41,55,78,100,111,112,156,181,137,88,62,66,90,211,243,226,175,151,143,107,112,130,162,107,54,50,39,67,143,214,194,170,202,152,159,231,227,230,227,221,235,185,180,204,238,252,159,93,87,69,62,30,90,118,93,86,133,122,167,239,179,140,124,221,211,217,229,207,211,176,193,226,204,233,229,138,10,27,26,28,42,54,10,17,35,43,58,43,8,103,242,252,255,219,88,138,250,247,239,227,243,167,126,194,243,238,250,176,67,110,72,32,17,20,51,103,122,138,122,151,62,37,46,35,40,24,60,46,31,25,26,16,32,10,41,13,42,46,16,38,54,22,49,110,76,83,30,78,53,17,34,32,26,16,4,28,30,53,52,38,46,52,58,10,2,32,27,37,27,13,32,38,29,29,86,115,85,40,32,10,25,53,44,50,71,124,161,110,61,109,104,74,51,19,31,71,43,63,65,70,97,94,66,55,41,32,38,24,4,13,41,1,23,32,21,36,58,55,46,28,25,35,46,37,93,71,68,96,87,128,138,102,87,48,61,57,72,68,81,82,108,118,84,96,80,71,41,37,30,53,65,62,131,84,40,44,56,45,67,29,63,33,70,38,57,53,31,33,44,88,89,71,93,90,95,92,74,71,20,46,216,242,250,250,237,211,241,230,238,239,226,239,219,228,231,223,224,238,237,218,246,231,250,224,230,209,223,236,231,207,249,209,247,219,219,227,211,212,241,247,222,213,234,232,201,234,225,220,226,248,232,220,230,244,241,237,226,231,231,206,248,232,249,225,220,119,1,0,2,27,32,13,18,38,17,28,10,1,2,7,5,10,35,6,1,0,2,10,7,17,196,179,180,202,194,190,178,186,198,168,198,184,192,181,181,181,161,193,215,206,180,175,185,200,184,166,176,175,202,195,209,194,212,179,185,175,195,191,221,194,171,175,202,207,186,191,203,189,205,201,200,210,196,182,188,198,205,177,198,196,172,190,208,188,202,196,183,206,200,195,188,201,197,186,206,210,181,205,183,200,176,184,190,176,195,181,184,223,201,196,184,206,208,178,205,191,222,206,204,210,200,204,201,
204,206,230,222,214,185,192,205,178,189,190,194,200,182,192,197,199,206,178,220,183,198,169,178,211,200,184,209,194,182,193,181,199,211,188,183,179,182,199,198,190,173,192,196,208,205,213,200,195,189,212,190,178,220,153,184,193,206,203,183,193,213,219,194,187,187,199,219,195,204,203,217,192,165,197,187,191,188,191,174,153,172,179,195,171,172,192,179,165,210,202,198,203,200,202,172,210,200,209,193,189,203,191,192,186,183,201,185,190,166,180,190,217,202,176,167,173,182,188,212,187,206,189,192,208,195,203,167,191,206,159,179,184,192,213,200,194,186,211,204,178,211,183,207,220,185,202,211,219,201,195,223,218,208,201,220,208,205,202,184,199,190,207,181,206,200,233,219,211,194,213,204,209,211,225,201,244,217,228,217,218,217,222,196,215,222,212,193,210,228,191,211,228,222,199,199,200,232,163,138,222,229,225,214,215,232,205,205,197,230,200,231,152,46,55,11,62,111,84,106,97,70,79,65,86,71,66,136,137,140,142,119,57,14,6,18,33,45,34,0,16,39,23,39,12,12,19,20,19,38,10,34,24,15,25,48,33,39,31,86,81,66,90,89,105,132,124,122,142,186,152,117,84,57,50,171,222,190,215,191,143,195,225,230,233,214,206,228,208,177,203,207,255,148,52,60,53,81,13,63,47,40,87,102,124,157,233,192,135,179,216,218,209,226,203,206,161,188,205,221,200,132,37,27,26,23,34,51,35,18,28,71,47,72,33,83,233,244,250,230,144,141,206,215,245,248,237,177,137,174,236,236,246,172,88,106,109,15,58,108,119,46,89,110,104,132,119,38,15,29,39,25,27,48,28,33,4,26,10,32,11,33,59,15,13,35,68,36,61,149,179,157,97,54,70,79,76,42,6,17,21,27,11,19,28,37,77,97,82,57,43,62,24,48,17,29,31,50,24,35,69,84,83,77,32,31,25,30,78,63,59,75,78,160,129,106,92,95,71,91,86,62,29,61,83,68,33,83,90,57,47,45,40,33,12,18,6,19,14,31,33,44,39,41,60,12,10,20,26,58,69,85,106,109,105,59,97,72,48,44,33,44,49,30,76,72,95,70,90,139,105,125,108,82,59,71,127,135,107,66,53,23,28,29,40,25,34,37,44,38,59,37,52,25,35,59,69,99,86,96,96,101,63,24,53,58,137,255,244,240,251,221,243,245,226,243,228,233,227,220,237,250,234,255,213,237,246,228,243,232,234,251,230,247,241,23
0,248,223,240,254,227,244,215,238,222,226,230,243,228,247,243,221,220,241,232,245,232,205,237,227,227,231,211,213,240,239,249,245,240,227,219,249,89,13,1,18,4,10,2,20,14,21,0,4,3,0,18,8,2,22,12,16,19,4,25,25,8,195,176,199,181,167,165,193,168,179,188,182,195,200,171,173,184,211,183,184,190,165,200,153,152,216,166,195,169,199,198,172,186,187,192,185,185,179,194,201,190,199,203,190,191,189,194,201,182,164,227,193,216,212,188,188,176,190,195,199,206,191,196,185,204,203,227,166,185,185,219,180,184,200,209,183,198,201,187,179,188,202,182,193,202,182,201,194,196,203,215,199,214,187,194,193,197,207,216,213,205,196,189,201,193,202,193,190,190,195,197,208,207,210,225,190,177,206,186,195,209,188,205,189,197,189,176,188,209,203,190,188,172,168,189,191,184,184,192,205,193,211,187,176,181,204,196,159,177,205,195,195,175,200,189,206,205,189,197,179,196,207,174,193,180,185,190,203,217,208,188,190,208,196,216,196,198,185,199,195,160,175,218,192,186,184,192,189,193,199,179,206,180,216,196,196,195,198,181,205,179,195,212,185,182,177,190,180,189,192,203,166,191,199,182,204,194,185,181,194,170,174,199,189,231,219,186,183,170,206,177,182,202,221,185,178,171,176,194,211,187,200,211,197,183,185,192,199,187,202,206,189,223,187,191,201,193,191,228,192,208,193,200,191,240,199,215,194,198,218,241,219,224,213,215,199,223,206,192,218,210,208,219,230,226,200,223,202,217,217,204,230,190,229,203,204,220,204,210,206,209,205,97,34,146,217,200,217,206,212,212,205,206,215,210,191,157,50,48,63,101,98,44,79,70,65,52,54,34,26,42,34,8,29,3,60,50,10,28,27,8,30,3,7,20,16,18,11,21,42,46,19,32,39,16,9,31,15,24,15,41,16,48,50,73,50,60,54,52,88,42,88,65,65,87,81,71,26,50,186,247,182,203,180,143,185,232,233,216,234,201,235,202,162,206,213,254,137,78,103,72,46,20,24,65,21,34,96,133,223,238,172,61,142,202,174,229,255,221,179,150,193,222,223,153,67,0,5,19,29,43,23,13,23,83,66,36,13,68,205,238,251,227,127,178,195,232,233,242,239,185,123,177,207,239,251,189,95,73,108,126,24,89,128,61,86,89,98,128,144,99,25,30,51,37,58
,4,31,34,19,31,45,24,15,20,34,11,37,58,70,36,68,171,190,163,153,122,122,106,107,57,6,41,9,10,34,24,32,29,45,43,67,79,56,71,32,41,3,13,15,22,47,21,36,42,51,43,16,21,37,46,33,54,38,21,60,64,106,162,113,115,96,88,103,99,64,64,73,65,83,89,55,31,41,34,38,36,32,27,19,39,14,30,44,33,15,21,46,32,28,26,33,36,54,70,55,62,35,32,21,38,38,32,39,9,41,55,27,57,56,50,69,70,45,103,119,106,125,102,120,128,84,84,67,18,38,23,32,27,22,5,43,43,72,67,44,59,55,28,31,101,119,103,84,115,111,59,19,65,110,181,248,250,245,243,236,246,213,241,217,239,217,232,234,235,246,240,239,233,247,252,237,242,236,225,232,209,219,240,222,251,231,228,241,215,218,240,221,226,239,229,238,218,224,231,222,248,235,237,236,240,231,213,213,232,243,219,221,238,237,239,235,245,249,237,242,135,8,5,37,0,1,10,11,14,8,10,0,24,14,13,9,3,30,37,6,9,12,21,8,13,186,172,191,194,190,179,208,164,169,185,183,199,170,187,178,166,178,199,193,194,218,183,190,217,181,197,214,183,197,184,179,177,172,175,199,172,180,188,182,192,190,183,172,182,196,197,190,221,204,181,184,198,188,160,165,198,156,206,209,185,178,186,195,197,175,187,211,189,199,209,197,195,191,190,206,202,184,191,209,182,199,174,206,181,192,199,189,194,188,187,200,182,191,161,176,211,208,168,213,203,203,201,186,207,191,194,197,226,218,232,199,208,219,215,209,196,196,183,198,187,208,188,184,195,216,173,219,189,214,174,190,205,174,211,208,194,147,193,181,189,213,214,182,197,201,193,198,181,181,204,214,185,196,198,204,178,198,193,193,211,173,185,203,188,202,196,188,179,189,173,184,174,185,211,194,190,164,200,199,171,201,196,182,176,200,179,200,179,214,163,186,176,192,203,186,183,174,185,175,174,216,196,170,164,204,178,176,182,196,193,185,191,176,186,173,208,188,225,173,189,193,187,178,176,182,190,194,202,165,198,191,195,225,213,207,209,178,216,187,186,189,204,202,192,186,212,216,203,202,216,207,189,194,197,201,209,208,201,225,218,188,215,195,225,214,210,177,199,213,221,204,213,216,206,210,208,205,220,228,204,237,228,194,200,242,207,225,206,184,213,222,231,227,222,206,211,214,
206,224,214,185,75,28,89,193,205,212,197,203,201,219,198,199,211,211,183,124,75,44,87,32,49,52,49,54,75,56,37,29,23,34,44,60,19,72,45,23,15,12,36,10,19,50,20,10,12,26,18,13,35,26,13,10,39,10,18,26,30,19,14,14,32,24,63,87,82,37,48,62,61,75,25,48,48,63,69,15,76,197,200,138,214,174,170,211,209,248,218,239,200,216,237,167,190,238,250,173,71,81,59,41,11,106,84,31,41,132,209,248,253,146,70,170,211,180,218,238,215,130,158,230,225,203,74,23,11,22,4,42,42,28,31,24,74,71,57,45,160,239,244,224,158,158,235,183,224,255,225,188,135,160,229,252,217,246,137,66,102,95,95,38,63,82,32,87,82,82,103,144,76,26,34,35,28,44,48,25,29,26,10,27,25,39,22,17,33,107,132,148,118,156,205,171,145,129,128,135,90,67,37,33,51,22,33,30,25,43,39,75,64,75,93,94,86,29,25,12,24,44,50,29,37,13,11,41,3,29,26,16,26,17,20,21,52,70,50,65,92,65,126,126,130,84,111,116,120,82,64,79,58,56,55,50,38,18,52,29,29,36,28,28,33,23,17,53,17,15,71,67,45,45,40,14,24,24,25,2,13,40,26,23,18,33,40,27,41,53,49,33,64,18,51,47,84,60,88,63,66,89,67,76,31,42,47,16,17,32,28,33,37,70,84,77,79,63,66,50,52,51,92,132,97,69,101,109,76,56,116,107,90,147,227,223,246,249,255,229,244,247,230,228,223,247,212,238,222,237,218,237,231,251,214,237,236,207,237,231,227,212,228,236,234,220,246,197,229,242,235,237,237,222,214,230,244,214,216,235,249,238,244,246,245,206,227,243,231,230,220,240,237,215,239,240,238,225,134,4,6,1,9,30,19,12,15,2,20,3,39,8,14,9,18,27,21,7,9,19,20,0,29,192,192,172,189,172,182,174,195,187,165,191,170,180,177,170,186,170,192,171,195,212,170,201,158,202,172,179,216,186,191,170,167,201,170,180,175,187,217,195,209,212,193,177,169,173,166,200,199,179,191,206,204,204,191,186,191,186,211,184,205,197,185,194,197,185,194,163,205,189,184,205,181,212,185,203,217,200,181,195,218,211,181,200,175,185,173,197,181,212,198,199,187,184,203,211,190,202,185,200,185,210,209,182,180,210,219,209,197,210,208,182,183,196,172,177,204,181,192,197,196,188,182,192,201,226,202,210,214,193,212,202,196,190,204,204,189,194,177,167,181,193,187,178,201,193,18
4,175,175,187,196,203,183,215,216,184,188,211,188,194,192,192,184,208,185,203,179,181,186,190,209,209,205,192,210,184,173,181,185,189,205,224,173,191,191,187,180,186,200,194,192,193,198,184,176,161,187,203,168,189,197,193,195,207,194,220,190,204,213,217,189,176,196,215,182,183,154,189,172,191,171,181,169,206,208,218,211,194,216,183,209,210,183,176,231,195,213,231,217,188,194,179,181,197,200,182,174,201,178,190,199,210,202,218,205,205,220,221,208,233,192,222,211,205,179,216,193,186,208,204,210,207,221,217,191,217,197,226,203,215,238,209,206,226,191,209,210,218,235,182,216,206,205,230,218,210,216,204,217,214,230,216,116,78,165,208,218,230,206,220,221,210,189,200,212,206,239,108,23,9,44,59,27,54,26,35,38,54,54,40,25,32,71,47,40,71,42,10,27,19,16,1,20,24,6,22,9,29,60,16,31,1,31,10,38,22,22,41,20,6,21,46,49,37,58,84,64,82,55,62,62,64,49,74,56,72,40,17,120,176,174,193,242,157,163,221,205,241,237,206,209,219,218,174,184,201,246,127,61,109,59,69,67,93,92,28,55,177,243,248,201,145,163,240,213,181,187,229,211,142,167,233,227,132,44,0,17,16,61,36,38,20,43,61,52,55,11,164,252,252,248,157,153,232,230,211,211,242,178,124,192,231,250,255,235,239,96,73,105,86,72,29,52,89,105,92,66,93,129,129,62,54,11,16,27,42,20,34,26,9,53,14,57,91,40,53,127,215,246,194,129,150,195,137,123,90,101,65,45,24,10,9,24,24,34,23,36,34,50,38,74,79,114,102,72,69,45,11,64,41,25,28,49,34,37,25,6,13,21,1,13,33,1,25,30,17,22,23,19,67,104,118,146,111,85,96,91,95,87,94,73,45,42,38,44,9,24,19,24,33,29,14,43,19,51,11,31,18,24,43,44,67,38,43,12,34,3,27,10,7,26,6,41,10,31,41,23,19,43,34,48,21,47,62,31,41,35,34,57,49,32,50,26,8,31,28,2,13,41,49,48,90,95,76,74,52,59,52,45,39,82,115,123,90,76,91,87,83,115,49,26,22,83,221,233,255,251,233,235,220,233,223,232,235,237,215,216,227,219,245,255,238,247,241,235,239,235,232,221,219,212,247,242,232,228,238,222,237,234,230,234,240,243,235,253,244,240,235,243,232,245,231,247,230,215,219,222,239,243,229,216,221,237,232,212,215,90,19,11,18,0,10,29,7,10,17,44,20,1,9,5,7,7,15,15,6,12,2
6,12,5,4,186,174,185,195,184,197,176,155,188,197,173,178,203,198,198,181,184,178,192,171,201,157,204,192,179,195,196,185,177,188,182,174,203,196,203,181,171,179,188,153,180,183,219,191,162,167,173,190,200,191,173,193,188,179,215,190,179,196,200,188,194,215,193,174,211,198,221,177,199,181,196,195,176,198,204,196,181,188,196,183,227,205,190,184,209,216,208,216,185,217,190,176,198,193,191,199,185,193,188,190,215,185,201,200,198,233,205,183,183,197,195,188,192,177,211,188,199,207,197,204,208,181,181,205,173,179,173,195,182,197,181,194,178,173,188,182,188,191,181,172,201,202,189,183,228,167,200,194,178,186,204,207,192,166,184,196,199,206,212,179,197,186,209,199,210,182,185,186,177,200,202,189,181,199,186,186,174,204,193,208,204,202,195,193,196,204,186,189,195,184,185,188,183,189,200,195,170,204,159,187,197,183,190,197,185,206,214,192,184,179,218,186,196,193,179,198,207,185,185,182,185,210,190,172,199,206,193,235,197,220,200,181,195,208,209,177,222,216,189,229,210,201,188,202,214,210,188,191,206,231,234,195,189,194,196,218,220,204,240,212,181,196,202,190,220,185,195,214,223,235,187,217,204,207,201,197,204,196,202,200,219,215,220,222,213,223,232,207,208,223,221,210,241,222,230,208,235,219,245,227,207,200,179,204,231,222,216,230,216,179,222,195,225,218,200,218,132,69,5,22,0,30,44,60,15,36,49,23,48,51,23,49,47,57,68,49,32,32,31,57,16,12,24,46,7,1,19,18,12,23,28,15,13,45,22,53,28,8,27,15,54,52,42,62,77,69,92,76,52,55,47,17,54,29,82,25,27,133,225,186,209,231,144,239,235,194,221,199,222,198,251,222,204,223,214,255,147,65,107,69,85,22,74,89,40,39,170,231,232,208,125,172,253,210,185,206,253,202,149,214,215,163,70,14,23,9,30,68,18,44,17,74,58,51,36,120,237,244,255,166,163,211,244,246,183,243,215,117,204,229,240,210,237,255,189,48,94,126,75,78,38,31,52,81,48,47,64,143,121,46,26,41,27,44,70,6,43,10,19,11,59,106,150,125,132,195,237,169,118,100,79,76,97,75,73,42,28,13,20,32,35,51,20,17,60,55,45,43,49,37,72,85,77,59,72,22,36,34,21,43,28,34,34,13,15,27,0,24,23,10,20,15,8,19,21,33,36,35,
50,108,114,160,101,94,67,65,92,92,90,75,77,69,34,29,34,13,30,17,57,22,29,29,35,34,39,13,10,13,23,32,62,30,15,43,20,40,35,21,31,9,33,31,35,16,20,29,1,51,13,45,55,33,29,48,40,37,19,17,43,50,7,40,19,24,3,31,29,22,46,25,65,85,94,58,44,39,85,57,20,90,97,96,115,84,123,89,124,67,63,65,32,34,153,243,241,244,227,221,208,234,235,221,233,228,235,220,235,248,210,223,210,242,241,244,233,227,219,245,219,245,213,236,234,225,216,220,242,214,232,222,236,233,214,231,240,222,241,213,228,227,236,218,238,231,236,225,237,210,234,233,235,239,229,210,247,125,2,5,10,12,27,1,3,4,24,4,22,7,5,10,14,25,20,39,19,4,16,9,20,1,164,198,187,185,173,180,188,180,173,194,167,193,176,173,195,187,200,191,184,181,195,190,189,183,192,168,200,193,202,186,189,187,184,211,202,188,208,192,189,189,181,173,178,180,168,183,188,197,197,196,205,174,183,195,191,192,203,197,198,184,183,199,191,181,193,186,177,188,173,185,194,173,194,184,195,202,214,179,196,187,205,209,176,183,211,201,187,185,194,192,194,182,196,195,192,189,181,188,205,209,182,192,209,209,178,189,200,213,185,210,184,195,205,207,207,180,201,190,203,207,206,201,184,177,219,180,192,198,205,190,197,197,211,202,184,209,205,193,193,187,205,213,177,199,203,212,193,178,187,181,185,187,218,199,174,189,198,205,171,202,196,211,197,185,182,177,219,194,195,198,200,200,192,210,184,219,196,193,164,203,192,211,195,203,181,205,188,212,184,200,210,189,178,174,185,224,199,169,201,178,182,201,185,214,219,185,206,205,202,205,186,214,182,196,175,195,182,209,197,194,199,204,182,211,183,192,203,204,214,203,174,188,219,195,204,192,220,197,192,230,213,196,205,210,191,184,207,189,197,201,195,203,179,228,213,218,191,195,215,218,229,239,200,166,229,177,200,201,223,196,208,243,191,215,204,204,197,200,195,199,222,211,216,211,219,221,218,201,237,236,216,200,217,196,208,196,241,232,221,202,232,233,232,230,218,233,200,203,206,214,200,189,210,209,212,235,192,136,119,78,3,2,12,10,18,54,42,30,30,17,49,71,59,51,46,32,53,45,39,55,48,39,31,29,24,44,8,22,14,45,34,46,29,14,52,34,38,36,35,45,92
,95,98,85,95,74,81,84,57,45,36,61,27,6,45,61,83,201,217,213,238,199,129,225,232,190,234,235,219,216,233,208,178,179,205,254,126,51,87,37,71,63,63,117,43,62,171,211,249,171,59,164,245,211,195,179,245,202,182,209,120,73,15,7,7,9,37,40,39,8,45,71,53,20,82,236,237,238,195,154,198,244,242,229,192,197,140,197,251,233,252,233,245,243,118,6,72,88,50,44,75,72,84,44,38,34,128,152,129,30,33,35,33,40,39,49,33,6,22,12,39,137,112,159,118,107,116,51,47,13,19,48,49,65,14,32,19,28,20,21,26,14,54,75,77,56,50,35,29,48,38,50,23,9,71,65,61,50,46,35,42,43,31,34,22,19,18,20,18,24,27,9,22,22,11,17,32,32,33,28,67,76,84,63,82,79,58,69,87,78,69,61,36,18,17,12,37,25,33,40,31,23,24,22,25,23,35,8,13,15,40,41,4,21,45,26,22,44,26,24,28,8,28,31,21,16,11,23,49,29,42,55,71,65,36,30,30,28,25,18,11,18,22,28,12,14,9,29,25,20,60,78,79,80,57,70,67,41,44,60,106,108,114,86,137,109,111,75,111,115,73,24,108,213,245,244,242,220,231,223,220,234,217,240,234,249,228,240,227,225,210,236,229,235,245,231,251,225,241,211,235,234,231,234,255,225,231,235,211,235,251,235,246,215,215,231,214,235,240,237,221,246,236,226,208,240,249,222,235,230,237,241,240,240,215,140,3,0,18,17,27,2,10,13,26,5,2,21,0,9,5,13,12,9,12,14,23,32,34,18,185,172,179,185,178,207,186,171,184,204,173,218,183,195,209,185,185,182,200,187,169,199,189,207,200,182,197,193,193,176,174,187,180,178,174,201,183,196,211,192,175,185,187,189,181,192,182,201,194,186,175,181,208,195,175,198,201,179,202,188,183,186,191,205,186,219,205,197,227,201,192,207,182,217,190,196,198,193,191,171,192,191,189,213,194,174,194,177,201,179,190,189,185,217,198,189,163,197,187,203,199,186,205,186,203,178,203,189,225,193,200,200,179,218,210,183,193,199,174,194,197,230,203,179,187,207,212,211,186,202,190,207,195,183,209,196,202,193,189,208,173,197,157,207,215,175,192,194,202,190,212,199,195,192,201,208,184,197,187,176,185,208,170,212,178,160,197,179,184,197,208,194,181,186,183,184,193,179,188,195,204,198,190,220,186,170,162,192,179,184,207,189,193,217,218,205,188,206,172,185,182,215,
,203,212,188,228,179,174,184,197,176,204,183,213,185,195,178,199,215,176,199,196,195,198,184,211,208,209,200,196,225,202,170,172,188,205,198,183,187,190,188,196,189,175,218,220,181,208,182,204,202,185,172,204,171,222,179,193,188,205,194,203,227,204,200,180,195,211,209,199,197,197,190,197,200,202,201,202,204,180,189,209,187,207,213,191,219,204,211,207,194,202,207,180,203,214,198,209,182,197,181,198,168,183,201,195,221,204,193,184,188,203,187,176,198,212,183,212,191,194,174,205,203,187,192,205,205,206,197,196,199,193,195,204,211,212,195,203,207,225,224,220,205,186,226,213,205,200,193,194,220,208,202,196,195,209,190,185,221,215,196,204,228,209,205,212,204,227,194,233,210,231,200,200,225,230,203,216,218,209,203,218,207,207,202,235,219,239,217,202,216,204,229,205,203,205,235,199,200,208,211,217,223,215,218,209,201,201,226,180,224,219,203,219,201,215,190,217,209,227,217,228,210,206,219,208,211,209,235,227,206,198,205,208,211,205,200,208,206,221,201,215,210,217,235,232,199,197,206,205,214,219,206,212,199,226,202,222,216,223,211,214,221,214,220,209,215,230,219,227,202,207,216,199,217,213,241,191,89,85,128,202,218,223,224,243,215,220,242,236,212,226,194,227,220,219,227,247,221,197,192,201,208,171,171,178,180,176,182,170,200,218,194,205,197,212,127,173,102,52,208,198,110,176,146,146,200,221,148,166,162,129,138,152,166,220,142,191,237,123,56,95,170,127,137,140,52,14,173,244,186,136,212,216,152,155,158,91,69,113,175,231,192,209,193,142,136,208,144,116,122,142,175,244,234,138,213,202,213,202,188,198,196,220,199,209,190,239,230,184,163,223,213,79,44,113,111,76,44,50,36,64,126,205,245,241,162,65,1,35,42,40,28,23,24,22,43,74,48,127,231,243,165,175,213,236,233,250,228,159,159,243,221,247,245,250,176,165,196,238,242,221,233,245,101,38,33,76,102,120,111,93,69,128,103,15,47,36,66,103,119,98,115,27,18,48,45,38,53,50,24,48,29,29,42,10,9,20,40,52,38,28,30,28,53,7,28,56,33,38,54,25,16,40,24,19,26,51,19,42,62,46,65,41,20,14,28,20,30,34,33,32,48,39,43,36,41,28,26,33,41,38,38,47,42,3,51,46,31
,33,27,26,84,51,56,126,137,95,62,58,18,43,18,34,44,28,28,28,29,59,42,78,53,17,11,23,17,35,16,28,20,43,16,8,25,33,35,36,44,36,14,38,23,70,113,102,82,107,110,45,79,49,61,94,139,144,108,59,24,36,24,34,40,36,48,41,20,45,58,34,53,53,42,34,43,50,32,35,91,53,42,68,78,72,49,62,70,35,92,114,126,101,115,87,88,45,53,49,18,50,200,255,246,253,215,250,244,224,219,254,233,252,245,249,249,240,248,235,251,243,234,253,242,255,252,253,230,254,251,228,229,246,255,223,243,252,255,215,239,254,216,226,225,242,243,242,228,243,241,238,237,246,252,239,255,236,239,249,244,250,248,201,97,14,0,0,4,9,2,18,18,24,28,11,7,4,0,1,9,21,26,15,5,10,14,37,13,176,200,206,181,182,199,183,192,215,197,203,175,197,225,191,181,174,182,213,206,191,185,191,191,193,203,197,220,194,219,193,180,210,180,219,215,195,209,184,194,177,214,195,194,220,179,202,210,197,193,188,200,207,206,188,205,181,206,217,181,192,172,179,172,204,208,199,174,192,174,197,203,228,208,192,184,201,184,201,196,205,202,197,195,177,171,181,187,187,170,196,223,183,194,195,209,184,197,201,217,203,189,189,180,190,204,196,201,182,214,203,195,192,197,194,198,184,184,209,200,178,221,182,217,189,196,204,202,201,203,201,196,192,199,200,214,201,199,204,200,195,221,197,197,234,170,193,214,203,200,189,200,198,213,206,182,185,215,203,198,192,211,193,199,208,212,181,228,189,215,218,227,180,174,203,216,222,235,222,209,197,214,212,189,213,190,202,214,222,219,200,206,201,236,189,195,217,212,191,188,206,235,181,218,237,196,209,222,197,219,204,192,212,185,208,216,203,203,221,218,212,217,209,210,228,207,240,199,191,188,212,222,223,214,199,177,219,211,195,222,202,218,206,210,211,215,198,217,214,219,225,222,208,219,224,194,200,211,211,196,210,212,212,219,199,211,201,230,202,214,202,225,217,216,228,205,211,215,221,244,183,133,117,124,181,216,245,222,221,223,225,224,220,220,224,191,228,209,209,223,236,199,173,175,149,170,180,205,211,205,214,230,188,220,222,252,210,177,214,135,172,122,57,180,169,85,214,206,171,217,229,172,111,152,142,136,160,179,232,113,191,241,159,50
,88,180,152,144,134,81,29,195,246,157,141,233,170,138,199,215,222,77,69,182,187,159,192,228,189,179,160,138,120,164,181,215,246,193,160,214,236,245,236,233,203,178,210,195,210,182,236,206,174,194,218,208,71,74,94,71,79,23,10,16,40,115,139,246,191,56,33,29,25,15,70,24,2,43,32,57,21,53,173,219,181,183,232,241,234,251,203,168,132,222,241,226,223,243,164,158,219,221,245,222,207,254,62,33,58,95,128,123,110,76,41,49,138,79,21,36,124,175,143,121,114,113,18,23,54,38,32,57,58,20,32,41,36,23,42,33,18,24,30,46,43,30,66,41,36,16,56,42,25,16,20,5,25,50,47,31,46,44,29,50,93,82,77,34,15,39,29,46,30,60,34,27,15,18,26,33,45,32,49,33,49,38,13,36,8,33,34,58,15,37,84,90,55,48,59,101,92,69,54,46,40,31,23,14,7,28,30,42,60,81,14,41,34,17,22,26,24,36,19,32,17,68,32,27,35,30,24,28,47,44,20,26,34,51,110,135,107,139,68,62,34,41,94,117,149,158,104,52,18,23,36,44,42,64,43,42,38,51,47,56,53,34,59,33,18,16,42,85,61,79,70,84,58,51,74,45,26,78,122,83,98,113,83,42,81,62,31,106,249,240,254,230,241,255,238,240,247,241,249,249,246,230,253,254,249,244,253,235,240,250,251,239,255,250,251,245,253,244,249,224,254,254,251,239,250,255,248,248,238,255,247,241,246,243,240,232,254,250,244,248,229,255,229,241,253,244,235,229,254,243,227,101,6,0,5,0,8,21,31,16,10,14,11,8,20,14,30,12,9,14,8,5,19,1,13,35,175,197,180,193,199,202,192,206,182,189,203,188,213,226,204,195,198,175,198,195,225,194,201,176,187,205,221,204,205,184,225,185,189,226,196,206,194,179,172,200,175,187,209,201,176,212,190,228,196,210,195,208,176,201,214,180,169,194,204,192,201,190,176,175,216,195,209,203,201,202,204,193,196,204,208,191,226,193,197,188,209,215,185,207,212,220,204,191,197,193,193,197,198,194,205,171,190,196,179,174,200,205,186,194,182,201,201,205,195,207,206,201,192,173,191,202,181,214,171,210,217,195,194,201,196,196,196,215,208,205,207,235,199,202,177,206,213,202,176,211,210,208,188,177,210,215,191,211,208,203,203,194,192,205,222,204,187,212,176,184,214,194,205,190,212,222,202,201,207,222,210,196,197,193,218,234,218,207,217,205,219,
225,209,219,199,198,207,224,193,226,193,202,201,213,204,226,203,216,207,219,205,196,230,204,212,209,206,219,206,210,221,202,208,204,221,207,205,202,207,232,215,197,236,188,216,221,198,235,223,216,213,214,191,212,223,198,201,200,210,195,183,231,220,202,225,244,238,226,219,184,213,188,201,236,235,205,222,212,212,223,205,212,191,230,186,215,218,245,237,198,200,225,212,220,218,214,229,220,208,195,225,221,187,164,205,187,215,208,234,209,214,198,237,221,237,235,203,221,202,193,174,194,136,202,173,186,219,225,244,217,243,240,160,208,211,231,195,150,205,172,185,98,98,229,183,118,195,234,197,224,220,142,139,154,130,155,119,214,200,90,177,248,176,34,63,147,150,131,76,42,10,172,243,119,175,244,188,201,225,237,225,112,66,156,200,198,224,233,219,202,145,143,132,177,226,235,246,151,144,221,247,246,240,254,230,210,238,220,220,191,234,198,162,173,172,187,65,73,45,51,40,19,30,22,84,88,114,164,65,14,13,25,11,37,64,47,40,37,78,52,66,84,213,198,149,193,219,251,238,234,161,157,216,237,238,213,209,156,155,227,245,192,241,214,166,134,31,48,80,86,106,125,70,57,45,64,141,92,26,43,146,175,134,135,137,107,15,22,22,57,50,17,58,30,40,38,10,25,32,29,18,45,22,26,34,25,22,47,20,15,36,30,19,48,22,66,19,39,3,36,27,21,30,21,72,86,62,80,20,31,27,33,53,35,43,50,33,40,41,40,3,21,33,36,10,38,38,13,37,33,46,56,47,71,134,147,87,59,74,64,89,89,71,51,41,42,30,5,13,16,17,45,40,53,33,28,23,13,24,22,13,45,29,22,18,15,37,16,41,42,32,18,41,33,32,17,38,30,50,102,80,94,74,45,61,42,69,95,157,198,146,63,24,14,42,46,33,33,43,32,33,45,50,29,26,42,30,23,37,43,62,84,76,36,57,58,73,47,65,71,10,103,107,104,112,92,131,83,92,47,75,234,255,251,237,236,244,254,246,242,255,220,248,248,235,248,235,243,252,253,240,241,246,251,224,246,249,251,248,229,242,235,245,242,239,240,233,252,238,255,244,241,254,245,243,253,237,239,237,255,247,240,229,238,247,246,247,254,231,237,250,248,251,248,212,106,8,18,0,15,15,27,9,10,11,20,22,5,22,14,1,0,22,34,0,10,15,13,34,8,201,202,193,197,195,229,169,188,205,177,191,199,195,180,198,203,219,180,189,1
86,204,208,190,174,176,226,183,189,178,229,207,182,207,198,184,160,217,180,218,220,182,194,219,212,203,208,178,210,195,183,198,214,195,205,209,238,192,178,191,208,194,208,201,212,209,190,199,196,191,185,205,201,180,193,183,187,206,196,230,180,203,193,215,191,203,197,213,187,201,212,197,196,160,207,188,204,201,197,192,182,204,205,205,200,199,208,179,196,200,189,202,194,196,188,194,209,197,201,200,192,208,196,201,191,166,180,198,204,205,166,211,201,203,208,219,212,211,178,234,201,219,216,212,207,230,206,217,218,209,199,187,213,226,179,222,197,193,214,188,209,207,222,200,188,204,227,208,216,232,218,204,182,217,177,230,213,216,190,229,212,201,215,184,217,202,207,210,213,191,222,205,219,237,197,209,214,218,203,226,226,210,209,217,193,196,188,202,226,198,213,206,207,217,210,213,204,222,197,196,206,210,221,189,194,193,176,222,223,214,191,210,215,202,197,233,203,226,193,183,194,211,205,187,198,216,220,206,227,204,201,226,222,210,214,202,209,212,225,206,238,195,214,220,205,205,216,207,197,211,196,216,215,222,226,210,213,223,226,220,209,220,244,242,231,204,183,202,242,228,214,216,215,202,223,220,213,208,204,185,206,204,227,194,215,220,212,241,234,226,228,227,202,179,226,214,225,140,164,228,201,186,95,121,246,201,96,202,217,198,184,226,133,133,175,160,174,137,223,193,108,164,220,221,85,98,155,144,108,87,47,40,200,246,98,202,242,181,177,203,238,132,94,43,98,171,209,245,205,201,143,164,162,183,203,221,252,229,144,161,248,250,225,231,238,229,204,227,237,208,207,239,232,164,149,176,174,107,35,59,42,74,83,103,165,146,157,111,117,42,7,12,9,46,40,28,14,20,58,49,61,129,195,165,172,190,201,237,240,241,169,154,225,243,239,240,226,199,148,201,246,237,213,244,212,26,31,34,96,128,101,123,83,27,86,52,72,127,77,51,35,22,65,111,139,134,76,31,32,48,29,63,49,27,43,39,20,15,15,31,21,20,27,37,29,57,60,23,35,36,34,28,36,28,20,42,30,19,24,38,50,45,30,25,31,55,48,87,61,24,25,26,48,30,6,36,33,28,40,28,28,60,30,6,26,29,41,23,36,31,52,50,99,89,85,125,187,75,49,38,71,79,110,75,38,53,20,29,47,24,54,20,28
,28,39,39,3,19,27,3,34,33,20,12,30,76,24,35,29,46,64,26,28,39,36,30,12,40,21,52,44,56,22,68,43,49,57,47,47,93,184,133,38,36,15,26,30,38,49,58,52,23,14,50,53,46,40,45,22,23,31,32,74,72,71,56,55,98,63,63,96,33,91,145,104,120,101,83,107,77,84,83,214,227,223,252,250,249,244,248,255,240,250,219,233,252,250,238,253,252,250,240,244,247,252,241,226,252,243,246,225,250,222,255,236,251,249,243,232,246,250,242,253,231,225,237,251,236,236,241,252,246,222,247,241,248,248,243,255,252,238,237,239,248,250,242,103,15,12,1,31,27,2,12,1,24,38,8,15,6,0,11,9,10,7,11,18,21,5,19,10,198,213,201,206,212,195,221,206,198,209,197,197,180,190,199,229,210,193,196,191,182,181,192,217,203,197,179,208,187,188,183,151,184,208,191,186,181,219,203,203,197,195,170,209,212,209,212,197,185,185,197,188,170,170,190,190,206,187,189,199,202,210,193,180,215,215,211,212,209,187,207,180,192,191,226,190,193,198,183,178,186,180,219,167,199,192,187,213,184,183,194,177,170,201,218,211,227,219,188,200,199,194,198,202,188,202,185,208,199,196,204,193,211,201,217,212,190,205,190,205,191,198,217,197,209,208,181,201,200,211,193,212,224,187,197,202,221,225,208,203,192,175,204,224,201,178,193,194,195,228,230,224,202,183,197,202,216,217,201,206,222,197,200,205,214,212,203,185,223,209,231,196,204,215,201,218,199,205,184,186,200,196,205,203,225,203,196,200,209,217,220,202,205,220,222,206,208,218,210,215,197,217,225,217,217,220,198,213,209,189,213,208,193,214,227,223,211,209,189,211,205,224,217,198,216,204,219,224,204,211,192,234,218,213,211,205,236,184,192,210,236,214,211,217,201,196,205,215,212,231,222,210,210,198,208,211,187,224,210,221,210,191,225,219,223,219,214,211,207,200,219,229,222,217,226,219,208,213,233,226,222,207,232,242,215,213,216,183,194,211,232,211,192,204,194,210,194,203,212,245,217,237,189,220,208,225,199,191,201,195,202,219,172,212,211,216,141,166,237,190,181,92,112,238,215,100,130,189,182,194,216,172,152,134,179,172,142,247,201,103,188,238,233,130,72,118,125,115,72,42,30,174,232,89,215,250,168,137,180,175,
177,138,101,129,160,235,209,166,225,166,160,128,166,176,197,231,226,139,188,232,221,249,223,207,207,205,237,220,231,214,229,248,201,171,150,225,109,14,30,81,105,67,146,168,212,190,48,41,33,5,26,42,46,14,4,42,34,61,24,120,202,181,176,182,237,241,255,252,175,152,194,244,238,213,247,170,143,203,250,241,241,201,246,108,26,64,78,131,106,129,74,29,61,111,57,92,146,57,27,58,105,123,125,82,108,64,21,25,5,29,46,41,40,16,46,28,45,39,21,24,33,33,14,26,26,17,49,23,25,44,32,27,31,32,33,38,19,36,39,34,33,52,23,56,25,37,66,76,28,20,28,45,50,4,28,18,45,37,33,35,12,46,44,37,21,24,35,27,42,71,102,128,112,150,171,129,83,35,23,34,57,57,34,60,26,28,33,22,35,7,38,32,33,11,21,14,6,16,19,11,29,43,31,24,31,17,25,37,18,24,15,43,28,28,35,20,53,37,7,22,41,6,43,70,15,51,7,31,49,29,50,40,53,23,38,45,50,39,49,43,51,40,46,24,31,35,5,38,39,30,50,63,69,67,54,69,85,57,68,78,25,46,124,132,137,110,102,113,58,29,25,122,237,241,220,249,254,246,211,235,251,254,247,226,255,252,234,251,254,249,254,240,243,240,252,245,254,253,249,243,252,237,240,254,248,247,211,255,237,248,252,242,242,248,249,255,244,255,228,253,233,246,244,250,252,241,253,249,242,241,226,250,253,252,242,97,2,12,12,13,14,13,0,36,22,12,18,17,4,3,27,7,23,7,33,1,16,1,27,11,172,193,205,215,189,205,186,178,204,201,202,228,171,190,177,190,197,194,193,202,189,199,213,187,197,211,190,188,184,190,187,193,185,200,192,184,181,213,184,193,210,201,196,189,191,187,177,194,177,195,207,214,197,203,197,198,188,198,190,192,181,186,191,192,207,213,196,196,212,187,185,196,196,181,211,196,234,188,197,194,187,212,186,205,201,205,195,200,226,205,200,181,194,216,201,210,194,201,194,199,204,233,193,201,204,223,210,184,208,198,222,195,205,196,185,208,194,203,217,216,197,222,219,207,211,208,209,214,190,215,209,189,217,198,212,194,218,174,208,224,210,210,208,233,189,189,184,187,207,196,200,170,210,199,222,197,201,206,228,202,183,210,217,192,200,203,207,183,200,204,210,218,211,225,195,214,202,219,209,211,211,193,204,204,225,214,231,221,204,213,197,214,219,179,216,225,20
4,191,194,196,205,217,214,190,220,207,205,218,201,231,199,224,200,210,207,202,221,189,208,185,209,196,230,181,204,217,221,202,210,199,239,205,232,208,179,212,201,205,181,183,184,218,202,192,221,233,193,218,199,197,236,215,224,213,200,217,227,211,217,210,204,214,254,230,223,192,219,213,212,231,203,179,218,209,237,219,233,224,220,237,223,234,235,246,203,233,184,196,178,183,188,221,215,204,222,211,222,233,218,218,223,236,202,204,203,184,228,216,210,211,226,201,205,216,210,229,135,169,243,173,209,77,72,220,221,130,146,237,195,217,228,142,152,158,157,157,174,231,203,110,185,235,229,160,82,112,115,125,60,28,57,196,195,95,213,174,112,136,218,220,194,210,120,116,145,193,171,216,190,185,162,52,104,182,253,236,191,140,227,231,238,241,213,236,202,215,227,209,219,189,235,238,200,176,165,220,111,26,53,60,33,89,100,123,164,134,38,38,11,24,36,20,22,21,13,42,48,57,111,226,228,121,172,212,245,226,251,216,161,223,250,251,251,247,170,150,208,255,254,248,242,175,98,53,43,95,76,133,124,81,63,56,113,165,35,112,146,37,42,149,224,165,147,113,100,54,19,28,8,42,46,17,42,31,41,44,12,10,25,18,30,32,16,29,20,17,17,48,8,23,27,15,35,32,36,47,37,36,34,42,52,21,30,47,29,35,69,64,36,12,38,22,11,9,16,14,28,16,26,10,48,57,23,19,25,25,26,75,137,172,99,125,162,166,113,68,26,26,22,18,51,64,60,26,43,37,8,20,29,45,31,25,25,15,38,23,49,29,9,5,36,4,30,15,0,52,17,49,51,24,21,49,32,43,44,19,35,44,17,34,11,46,67,36,21,61,36,8,4,15,15,40,26,26,43,9,35,26,59,42,36,33,56,59,47,45,23,60,29,43,43,91,69,37,59,65,68,76,52,83,43,54,143,131,116,122,102,105,86,62,23,84,245,229,238,239,251,248,252,236,248,253,245,229,255,249,255,247,253,247,240,253,255,252,233,252,239,255,255,236,242,254,230,246,237,241,243,251,253,255,252,238,240,251,240,229,222,234,239,252,238,239,246,236,226,244,247,254,253,239,255,248,253,255,230,104,16,26,3,7,0,10,29,0,33,9,32,14,3,12,5,12,9,9,8,4,7,25,16,10,192,201,213,201,222,194,207,215,213,194,211,189,207,188,229,182,197,188,205,217,197,190,205,199,173,226,195,219,193,209,173,198,210,209,201,195,
183,169,177,184,203,199,229,230,200,203,184,187,177,194,189,193,178,200,224,190,165,186,182,198,212,193,184,210,184,207,169,196,199,197,204,187,207,212,188,191,201,214,195,206,201,221,189,192,202,195,198,192,196,214,186,198,185,189,192,203,185,199,213,205,209,197,191,221,195,210,198,185,222,209,212,190,212,219,205,193,195,219,209,208,215,204,227,192,206,199,210,174,207,213,219,204,207,194,202,214,212,212,217,209,197,205,178,205,209,221,201,192,205,204,204,199,211,222,192,209,197,205,225,200,200,200,198,202,211,199,205,229,193,190,189,228,199,228,192,219,210,208,207,200,222,187,200,200,208,205,220,219,213,196,192,216,235,215,224,217,222,186,215,229,238,201,180,201,209,216,185,209,202,210,199,211,213,204,209,213,220,201,202,209,221,217,205,210,206,221,180,241,236,218,218,192,197,225,218,203,205,202,203,217,213,211,245,219,210,218,214,223,218,214,226,213,218,220,229,203,214,223,206,199,193,233,210,223,208,222,225,210,222,217,236,220,222,235,243,223,230,208,231,230,231,218,234,188,171,169,202,217,191,176,176,212,230,214,206,241,217,242,199,226,221,216,214,230,214,200,232,213,219,235,225,167,184,235,237,213,146,190,248,215,209,84,113,215,214,147,160,246,192,200,207,121,160,169,178,157,172,246,216,144,197,224,255,239,138,110,140,117,77,39,4,165,138,91,164,136,158,167,234,209,221,207,77,58,145,193,186,243,201,191,108,56,133,217,238,255,155,127,226,217,252,247,219,242,230,208,238,227,216,204,249,231,175,144,122,225,124,73,117,93,20,74,120,92,68,85,30,40,19,30,41,49,23,20,64,74,46,82,221,220,151,157,235,252,225,212,200,166,212,243,252,250,240,192,125,197,247,241,236,253,246,101,42,12,36,118,102,127,108,30,72,95,167,139,29,108,147,48,37,99,186,163,151,142,109,43,8,13,1,2,26,19,23,7,49,21,26,9,17,5,25,23,25,32,30,54,49,45,45,36,25,19,4,28,43,13,39,41,55,48,30,42,39,65,42,50,49,83,41,22,35,59,35,29,31,46,18,30,30,42,36,27,52,30,39,56,41,60,105,151,128,84,122,83,55,43,3,4,34,19,56,69,80,51,41,29,28,31,25,32,31,34,34,40,1,7,22,23,15,19,48,38,29,30,43,28,16,42,50,25,41,52,8,48,38,
54,28,9,25,31,17,34,79,42,44,39,3,45,9,23,30,29,29,40,31,44,7,33,26,60,66,49,28,18,35,39,40,31,38,46,41,65,79,37,57,53,84,81,23,84,18,57,99,128,108,119,112,131,61,58,14,55,224,228,255,242,238,253,252,242,247,226,243,233,248,231,253,230,249,243,251,241,226,240,254,249,245,233,230,238,239,241,232,244,254,253,228,251,255,253,227,238,222,237,248,226,237,252,243,234,248,247,250,235,249,239,246,253,241,241,248,255,235,250,243,110,0,0,5,12,14,28,9,46,21,8,27,1,4,13,0,4,29,0,12,15,31,18,2,29,192,179,198,228,198,212,218,188,209,210,209,211,204,200,202,198,198,209,194,216,209,196,185,179,211,216,198,188,227,213,191,203,222,219,194,220,194,175,179,174,186,186,215,191,189,183,202,183,186,192,184,209,220,191,215,189,212,175,216,184,215,214,214,198,178,194,199,216,207,195,187,197,213,194,208,202,205,210,166,189,202,194,232,180,210,203,194,201,190,230,218,213,201,224,199,215,215,203,197,184,203,184,201,207,213,187,176,182,194,200,213,191,210,199,234,231,209,184,210,227,231,216,212,217,195,208,189,206,197,226,218,217,212,195,196,209,222,193,209,228,207,241,195,207,195,196,230,179,217,203,216,222,211,192,212,210,216,213,210,184,205,195,221,189,206,228,194,218,206,214,199,204,184,196,205,220,205,219,218,211,199,205,208,218,210,189,229,214,216,203,208,204,226,219,207,217,229,193,224,196,204,215,214,193,203,207,200,188,199,217,225,195,202,188,213,199,223,210,205,209,206,187,197,217,206,219,210,201,212,183,204,215,205,194,192,197,223,201,204,219,216,197,206,207,204,231,208,200,211,225,214,242,191,230,237,203,213,209,202,214,220,215,239,222,208,212,224,218,228,230,225,233,243,247,223,227,238,246,238,213,198,181,181,181,227,242,238,244,244,227,242,212,211,219,246,230,232,243,214,233,236,247,224,206,213,234,241,234,234,240,220,203,209,225,253,236,130,200,240,189,200,65,120,227,232,157,106,226,199,177,153,98,146,188,192,166,177,229,229,134,206,235,246,236,115,124,103,95,67,18,13,127,148,104,185,175,207,195,231,179,161,145,101,102,145,162,215,235,169,143,152,126,182,243,252,252,137,137,250,2
21,232,228,236,242,209,188,241,203,238,186,226,217,217,134,197,240,172,57,70,90,44,141,164,76,83,81,53,18,12,12,26,32,49,32,40,54,70,193,234,197,136,194,239,241,215,183,146,195,251,245,235,223,206,136,217,255,250,232,244,250,188,34,35,0,36,106,112,85,45,59,119,138,153,92,49,124,113,40,35,83,150,122,118,138,95,18,29,35,22,20,20,43,48,41,55,26,35,31,18,30,15,50,40,36,33,45,22,10,33,28,32,26,44,59,49,40,35,19,43,34,26,57,29,30,26,33,66,69,64,27,12,33,50,22,22,16,43,55,26,31,36,20,40,27,42,49,22,27,39,103,103,56,24,42,23,12,6,35,7,21,30,58,79,70,37,27,51,30,16,27,21,21,26,25,28,25,19,1,29,25,35,18,18,56,7,16,39,31,40,24,41,53,38,51,53,21,46,19,40,33,39,29,66,30,14,31,21,29,22,61,12,35,33,45,39,39,18,44,25,23,46,54,61,56,31,30,35,30,39,16,12,66,87,53,70,73,93,74,25,52,52,25,96,91,128,128,117,125,66,61,35,59,219,236,224,239,255,250,238,237,247,253,253,253,242,249,237,239,227,237,238,255,255,242,250,240,247,234,248,239,245,249,243,248,230,247,236,233,245,233,247,248,250,255,251,255,241,225,245,219,246,234,249,248,253,248,253,255,244,238,255,239,239,250,254,111,3,6,4,30,24,1,23,3,16,5,8,33,27,2,8,8,10,4,12,6,9,16,27,26,217,238,218,179,211,183,209,207,207,235,215,195,222,224,221,201,207,196,190,213,218,212,203,179,197,203,187,204,230,198,194,207,200,177,225,179,205,198,191,211,224,213,195,191,201,164,186,190,177,210,199,193,200,205,206,212,230,198,197,210,207,199,186,183,194,181,206,220,193,193,209,186,195,196,204,190,183,192,215,204,188,209,189,213,196,224,200,196,195,212,194,200,209,186,217,190,206,209,208,184,211,218,192,204,195,190,199,206,220,204,213,195,183,217,200,199,219,209,224,213,227,184,218,214,200,226,168,208,207,224,224,223,206,219,219,190,202,178,230,205,192,196,208,205,210,209,212,213,214,212,207,206,218,209,196,205,231,199,193,230,192,216,201,208,219,208,200,211,227,211,212,197,223,193,209,191,203,179,188,227,211,199,219,181,193,206,220,186,225,239,185,211,219,223,209,205,185,209,184,212,207,193,222,228,208,187,215,213,208,224,234,216,197,200,197,180,195,200
,220,221,214,221,204,203,211,220,226,209,213,187,225,209,223,211,203,210,200,193,224,218,209,199,210,226,205,206,233,192,222,212,203,216,212,212,205,206,215,226,230,236,246,249,236,234,252,240,237,253,236,248,255,235,244,242,230,237,229,226,207,220,225,252,253,255,251,246,247,238,228,234,234,241,230,239,245,249,238,251,255,248,249,254,231,221,246,227,255,246,244,248,255,208,239,246,242,249,176,230,252,187,141,75,88,238,237,173,128,249,240,231,154,122,164,181,194,208,244,229,220,186,236,246,255,236,115,92,76,73,52,33,4,143,173,162,235,191,190,216,183,165,181,164,122,122,103,146,196,187,172,178,147,183,224,245,255,232,144,214,249,226,249,217,213,226,212,215,201,212,221,198,221,239,176,140,206,247,156,47,85,37,65,185,78,36,41,83,41,34,18,32,21,14,35,43,36,37,130,237,205,176,157,218,240,227,177,133,170,214,242,244,242,213,136,179,243,243,245,252,245,178,41,17,53,28,106,104,79,58,65,113,117,134,123,82,76,167,93,63,28,37,96,137,105,164,23,15,25,40,50,38,45,57,28,32,22,27,31,26,40,21,38,29,36,42,29,27,23,41,17,23,43,36,25,34,35,37,41,56,36,20,10,24,42,19,21,50,74,98,40,54,25,20,48,19,39,37,27,26,17,42,35,37,40,28,28,26,35,25,45,109,46,64,51,36,22,3,6,18,18,20,57,45,64,38,57,39,45,15,16,32,13,29,1,2,19,43,22,41,36,18,18,16,32,24,38,35,39,34,22,38,27,47,32,13,29,37,25,22,18,17,13,71,65,43,20,52,30,61,26,32,20,41,23,54,69,9,50,15,34,38,31,27,31,11,32,32,20,48,43,29,38,85,113,26,54,46,55,59,39,62,37,33,84,116,97,99,114,122,86,24,26,3,191,238,238,240,249,242,235,250,243,243,250,250,238,245,251,249,248,249,244,236,244,222,226,228,242,237,226,249,255,233,255,254,255,249,254,254,230,248,252,236,251,239,235,239,232,250,245,241,239,224,240,255,247,234,225,248,240,254,243,234,241,252,249,114,6,13,22,8,0,15,36,25,5,0,7,49,7,9,0,0,14,20,23,31,30,3,14,41,190,219,212,203,200,200,221,201,205,196,197,223,200,188,190,195,216,205,216,214,194,214,174,194,181,201,212,191,195,225,190,209,189,197,200,202,169,213,207,183,169,200,210,210,205,212,207,187,199,193,204,192,187,201,196,167,215,198,212,
210,205,192,198,208,205,199,176,192,210,205,200,196,215,204,214,192,214,212,196,197,204,202,204,198,188,194,180,209,199,200,216,187,199,228,204,190,214,197,199,236,183,203,188,206,232,179,197,223,182,220,193,201,208,210,197,164,202,208,186,210,208,188,218,204,227,201,228,215,192,224,201,216,219,207,201,213,204,213,196,174,218,188,205,207,221,205,215,194,205,201,187,185,194,213,228,205,205,215,207,187,221,193,193,191,212,234,194,224,197,206,200,213,222,218,206,224,208,229,212,216,214,204,197,217,216,226,196,222,215,216,199,205,195,215,209,201,208,198,198,216,207,213,213,210,206,210,214,226,217,208,198,205,211,203,198,183,203,196,217,218,228,197,229,199,197,202,212,232,212,214,205,223,221,194,215,233,197,203,221,200,205,209,216,213,224,208,197,206,236,233,203,200,206,213,209,213,188,207,196,216,219,196,209,214,190,181,227,209,179,174,166,177,173,157,171,147,158,131,184,185,209,225,193,220,210,191,203,210,232,202,200,190,188,168,183,183,198,208,184,200,203,190,191,184,202,173,207,199,199,200,190,169,190,185,210,146,133,187,204,140,93,18,55,169,144,143,118,211,195,190,144,131,154,176,197,162,227,255,207,164,229,237,250,229,108,100,90,70,65,44,39,135,207,149,224,194,191,215,176,221,214,231,192,154,132,118,174,169,235,175,174,205,234,234,225,235,148,228,219,246,255,235,236,237,202,243,232,236,225,210,213,234,171,140,202,242,119,39,53,77,121,173,84,12,64,82,74,54,43,24,23,28,59,51,59,121,233,248,167,189,189,234,222,195,151,173,230,244,248,245,238,153,201,241,240,252,240,231,146,39,15,15,88,85,139,106,37,75,117,120,128,102,101,36,67,134,87,38,63,187,152,144,129,129,59,7,35,28,19,52,64,43,49,38,53,22,33,34,21,19,35,33,26,29,25,23,33,12,28,20,31,45,74,81,62,59,21,29,8,17,37,59,19,30,30,10,32,73,54,54,16,23,28,21,35,20,64,29,45,11,9,21,46,44,35,30,21,32,31,44,42,46,55,21,17,43,10,6,5,19,30,48,62,39,56,46,30,15,19,12,37,11,10,19,29,22,17,24,13,6,27,6,30,15,35,39,27,39,34,25,22,33,16,35,60,6,28,22,5,28,47,59,54,14,34,41,31,35,21,22,34,31,26,12,16,41,31,37,38,18,34,37,26,25,52,24
,50,35,35,30,52,78,101,77,63,56,35,38,44,51,49,35,80,110,98,141,79,113,98,46,34,30,156,236,237,232,253,243,245,248,252,254,252,239,234,249,255,236,255,253,231,250,225,250,255,239,255,253,254,245,224,233,235,253,251,253,250,254,237,249,251,226,223,251,255,248,247,242,220,255,238,223,252,248,236,237,249,252,250,237,251,228,249,234,251,115,2,14,40,0,23,11,2,1,19,3,32,8,23,2,4,2,16,23,18,9,19,27,15,19,191,185,221,187,211,179,215,205,196,192,195,223,195,201,211,205,217,233,190,182,200,193,212,207,214,211,215,225,177,191,220,206,204,197,200,203,214,211,206,183,204,175,197,205,226,189,222,201,209,159,196,201,193,197,176,211,215,184,203,201,198,180,183,170,206,195,203,197,185,196,209,206,217,167,173,205,193,196,207,186,202,185,204,201,206,188,161,206,190,196,203,200,206,205,213,203,199,213,200,204,209,205,222,225,187,196,191,195,188,202,220,216,216,178,229,215,218,219,213,217,226,200,174,201,198,187,227,238,207,183,231,189,211,202,206,208,238,209,179,225,200,198,180,209,180,218,182,221,186,208,184,201,191,198,182,210,200,176,205,198,222,212,212,201,190,214,200,206,225,201,203,191,225,194,223,193,227,221,205,209,188,203,228,212,206,220,211,187,230,223,225,221,206,199,217,213,202,213,201,218,231,205,207,209,223,190,207,237,226,206,210,203,203,191,220,218,190,202,199,193,186,201,222,207,188,209,213,223,218,220,226,214,199,227,174,208,204,220,222,187,218,203,203,208,211,203,218,214,233,207,215,223,215,213,214,203,175,142,112,112,89,100,100,97,91,68,93,114,84,71,69,50,58,28,19,13,29,24,37,48,32,62,48,49,30,70,61,58,27,34,45,49,59,81,102,93,95,72,91,37,42,48,44,68,72,57,63,15,39,60,70,36,50,50,57,55,18,29,46,64,144,51,10,10,19,31,5,88,63,69,76,108,86,93,97,89,110,125,86,61,92,77,129,119,88,118,133,63,108,34,20,138,108,80,135,123,101,142,155,148,153,151,117,124,49,93,171,136,187,106,94,120,196,241,212,145,119,239,235,237,230,217,224,227,204,235,244,224,214,186,250,208,159,186,187,206,118,61,63,76,198,190,69,33,70,109,78,36,35,14,29,39,74,56,73,228,252,194,211,213,210,191,184,154,1
,217,149,64,106,89,103,83,62,67,119,154,150,166,186,177,213,156,94,173,171,225,201,155,167,173,211,213,170,130,121,95,190,229,252,254,212,225,247,254,216,214,212,228,244,243,251,243,238,137,57,69,161,141,131,215,139,87,140,83,3,134,227,120,115,158,203,239,233,169,107,164,191,239,253,235,188,55,117,189,191,248,250,171,27,11,65,115,101,102,42,56,128,133,143,101,65,75,42,19,8,18,8,30,79,106,106,110,107,107,96,91,152,111,9,38,135,122,85,116,44,14,27,0,18,28,33,38,10,21,35,17,9,34,13,37,15,46,4,29,33,14,22,23,40,92,100,66,50,50,97,49,42,36,22,22,43,19,29,49,58,64,51,7,31,33,43,32,20,45,59,34,16,21,32,64,29,77,116,47,21,40,3,23,32,14,42,48,27,45,15,51,69,89,27,18,18,32,49,53,85,99,84,91,50,46,13,16,12,25,28,20,6,10,54,26,22,18,33,19,7,32,64,51,122,47,34,31,23,20,61,50,75,99,92,69,56,49,25,31,18,39,79,60,68,75,65,49,76,68,45,26,11,51,47,45,45,52,23,29,27,33,9,33,26,28,59,44,50,42,32,45,26,37,18,29,32,29,44,41,22,42,112,156,133,124,72,90,115,54,18,54,115,144,128,214,253,232,240,249,252,249,249,240,250,248,244,243,255,252,238,252,241,244,252,251,237,241,244,229,252,220,236,238,243,252,253,228,234,244,244,247,247,217,244,235,248,249,245,242,243,226,234,252,252,242,221,234,244,233,234,246,251,250,124,14,6,14,17,4,26,23,4,1,24,6,2,11,1,5,11,11,4,2,7,9,5,14,9,186,178,203,199,195,179,189,192,195,216,191,190,199,194,179,195,208,195,204,189,180,207,210,195,217,195,198,209,206,168,183,176,219,186,198,215,194,195,192,216,181,183,161,185,199,197,191,178,186,169,185,210,176,222,201,184,201,177,201,213,201,199,180,183,190,176,198,193,232,184,208,207,205,192,212,180,189,182,193,213,190,192,205,192,200,210,203,196,208,189,191,207,199,208,190,189,194,229,190,205,205,193,198,200,216,205,202,207,200,210,210,189,208,194,215,221,229,184,187,199,181,193,187,199,187,222,185,200,219,186,202,209,192,187,218,209,210,196,221,204,198,197,188,208,186,202,214,214,217,176,203,197,207,203,208,195,185,185,222,220,205,227,200,212,205,203,218,182,213,185,201,201,217,214,203,220,217,202,196,199,214,222,203,2
23,218,205,224,205,214,198,214,226,212,229,224,217,216,213,222,218,210,213,198,225,210,221,237,247,242,244,230,224,225,248,252,248,238,252,235,247,242,246,239,244,236,238,255,234,242,255,253,236,224,242,244,240,248,252,247,236,245,248,255,221,255,255,255,234,247,215,254,237,243,244,253,254,253,252,229,244,237,247,252,255,250,245,236,236,250,252,252,254,244,228,246,253,252,248,246,241,245,238,247,240,242,242,242,255,238,234,230,246,252,255,254,255,243,251,246,245,243,247,237,222,236,242,245,252,228,252,246,250,249,176,212,237,243,250,255,232,148,108,151,247,178,185,178,125,139,158,246,252,244,245,236,247,238,241,247,249,255,226,250,169,80,98,116,106,63,76,86,154,245,231,241,255,235,246,255,216,204,233,253,182,196,179,247,254,254,219,150,122,106,237,242,247,236,249,238,248,255,245,248,250,238,238,250,224,255,246,161,30,73,184,118,133,240,111,99,192,106,106,191,243,164,104,178,213,248,198,93,155,234,242,235,234,201,138,120,157,224,209,241,209,47,62,80,76,101,103,49,65,74,107,127,99,79,58,66,18,12,4,27,30,87,125,136,109,124,89,101,73,111,154,82,0,60,147,114,101,93,38,14,12,1,22,17,19,34,21,20,40,13,15,42,45,24,36,38,41,25,33,31,43,29,55,111,87,82,94,73,60,56,62,46,55,37,32,8,39,68,62,58,35,4,25,29,39,24,32,36,24,28,21,5,50,8,20,110,91,45,29,22,9,62,50,20,20,25,47,38,29,19,81,133,50,43,15,38,37,24,116,183,202,182,156,115,67,21,36,58,23,26,16,26,49,7,25,41,2,9,28,34,35,45,108,35,14,41,16,15,34,53,56,69,47,56,14,26,30,15,14,46,110,68,64,86,82,72,53,65,77,56,25,13,27,14,34,39,23,32,56,17,15,29,30,31,20,25,17,44,33,35,33,30,10,32,35,21,34,28,49,35,102,118,130,119,77,89,96,85,19,88,232,249,237,255,247,241,255,249,245,245,249,250,242,222,239,235,235,248,251,233,233,239,254,250,244,238,247,230,246,236,236,241,239,234,251,243,250,232,241,248,225,249,237,248,244,233,234,249,246,211,231,226,236,221,242,227,232,237,239,253,249,226,112,3,11,41,7,17,8,28,37,9,18,12,23,10,1,0,20,17,29,16,8,8,10,18,7,205,184,180,213,182,197,167,192,212,212,202,189,191,206,204,196,184,177,199,197,187,18
3,218,177,190,185,219,201,217,220,207,186,200,208,216,193,202,169,226,200,204,181,187,198,204,215,193,189,164,213,190,176,180,197,219,189,206,189,188,193,221,203,209,193,198,184,197,194,178,179,177,181,211,175,182,196,192,197,205,175,185,215,200,212,215,194,192,181,208,200,191,207,191,203,180,201,196,184,202,228,184,197,200,192,189,224,202,208,204,192,199,205,190,191,202,201,210,195,193,216,207,196,203,217,212,216,208,194,203,199,205,204,203,215,197,202,172,200,206,193,192,186,206,244,213,201,200,222,209,252,205,194,217,191,203,194,195,190,198,202,204,219,214,218,208,218,188,223,208,206,211,189,230,216,204,233,198,198,223,243,198,222,202,187,216,234,210,199,214,214,220,203,213,222,208,208,198,213,236,229,203,230,226,204,204,218,197,202,208,222,228,248,240,240,218,238,239,242,251,243,227,232,198,236,231,240,206,238,251,242,247,229,247,235,247,251,231,252,255,247,243,238,245,251,250,239,247,255,245,247,250,253,236,254,231,248,242,239,247,228,254,236,253,249,253,246,243,223,252,247,238,250,242,250,248,228,243,223,230,254,231,239,242,241,243,242,239,252,249,237,248,252,244,247,250,240,254,251,246,234,231,228,255,241,236,234,231,228,223,247,249,241,207,199,227,243,219,247,235,254,178,128,205,200,174,165,156,196,177,175,218,249,249,249,254,244,250,245,222,241,230,241,255,140,82,70,117,124,65,95,86,126,230,228,246,157,188,238,248,198,191,191,218,193,183,196,245,245,246,218,122,73,110,161,210,224,241,239,228,233,240,246,241,244,221,243,193,180,238,231,66,16,149,167,68,169,142,49,115,195,175,120,243,229,102,144,210,237,252,142,151,232,254,252,241,223,167,221,183,191,243,238,163,58,36,68,107,88,101,58,47,95,117,129,67,62,47,14,20,13,2,37,57,65,88,109,112,122,118,105,119,114,143,129,74,13,79,146,104,134,109,16,12,2,20,27,3,43,29,35,41,18,41,28,4,25,41,10,34,26,23,25,17,19,28,37,70,59,122,99,65,56,46,47,40,36,27,60,6,36,31,44,41,41,8,22,30,44,59,27,57,34,26,27,14,17,24,50,134,88,32,44,40,33,18,43,53,11,31,27,22,35,37,109,153,62,46,34,35,57,54,237,242,242,233,235,204,150,63,38,5
8,35,47,14,18,25,26,18,41,18,44,7,36,57,72,116,38,50,20,31,32,43,40,77,52,39,30,30,32,50,25,69,89,125,91,76,81,50,19,18,37,49,62,51,39,24,26,51,44,27,45,20,33,44,27,24,26,32,38,29,45,14,33,55,28,17,16,41,15,42,25,54,24,64,132,144,116,90,91,74,72,7,91,232,246,245,254,252,250,255,243,239,238,231,250,229,235,252,246,254,230,254,240,247,236,218,229,235,249,225,244,222,249,242,246,243,238,245,241,247,246,228,246,247,248,227,253,230,215,247,246,239,242,226,252,254,249,237,243,254,240,246,243,247,244,125,27,13,11,4,13,15,8,18,38,4,0,21,4,11,22,0,3,9,8,14,0,12,20,5,197,199,185,214,181,176,212,196,178,181,193,194,193,182,182,197,193,205,199,201,178,226,198,212,186,209,208,198,171,195,203,208,208,178,199,203,182,207,195,205,202,200,178,192,203,198,194,190,180,205,207,178,183,178,177,183,190,195,219,211,190,193,196,217,191,185,197,186,203,181,185,201,225,192,172,206,179,195,206,210,196,207,225,216,197,202,213,191,194,222,183,212,155,195,194,205,194,191,179,194,196,198,192,210,187,192,181,190,202,195,215,194,194,218,204,191,210,204,184,215,224,198,220,195,194,195,178,210,204,222,219,214,206,201,229,206,217,208,201,186,199,224,195,202,229,221,198,213,202,223,203,184,193,199,218,213,212,222,212,208,188,225,211,206,192,197,217,212,208,209,203,224,220,220,230,211,214,220,212,226,232,206,215,235,223,206,222,208,199,240,228,181,207,220,218,216,213,213,219,208,201,205,218,206,218,225,214,223,213,213,218,219,214,228,237,222,203,222,219,211,223,218,201,203,222,241,214,209,234,223,192,221,213,212,205,229,241,229,214,232,222,210,235,216,238,201,221,233,223,210,221,217,231,218,230,223,228,216,208,222,227,217,220,211,233,237,195,226,225,238,247,228,243,218,236,221,235,225,195,240,211,208,220,218,224,219,231,226,211,229,230,199,223,230,210,228,220,181,225,203,214,211,228,227,179,221,210,199,197,207,238,208,193,171,224,213,214,229,198,193,149,115,195,175,154,203,189,177,158,164,236,223,216,232,229,214,224,216,187,212,214,217,247,126,78,103,96,103,72,87,49,152,213,123,219,182,199,241,203,158,9
1,197,209,199,185,204,238,241,221,213,156,81,102,141,144,207,235,219,212,203,242,213,237,218,207,222,182,146,195,177,19,67,168,147,110,129,105,68,70,161,141,114,162,194,100,164,203,235,236,152,186,245,242,239,245,147,133,186,236,227,250,169,66,31,57,121,94,101,100,23,88,139,146,100,77,60,35,15,34,2,16,36,102,110,113,120,148,92,81,108,95,116,139,150,48,16,103,155,106,120,105,31,37,31,24,9,38,14,36,17,24,13,35,8,30,22,20,43,14,45,22,22,22,34,24,45,50,52,67,84,52,90,73,50,44,70,45,37,22,44,30,41,65,48,25,33,50,40,26,50,62,47,17,13,16,26,18,80,141,35,18,26,12,30,21,41,23,34,64,51,34,43,38,57,155,51,72,28,78,94,128,192,181,150,159,158,161,98,59,48,48,18,13,21,11,21,15,29,26,35,9,17,54,29,63,115,59,23,34,20,26,39,19,58,46,65,31,70,65,34,25,112,126,158,90,84,67,34,21,47,66,32,37,39,11,30,16,34,41,34,20,26,23,47,16,23,3,25,34,31,57,45,31,31,40,62,28,22,32,46,34,13,38,48,125,144,124,97,123,126,89,46,70,235,241,232,234,242,232,226,252,224,243,251,245,218,222,251,241,249,234,251,244,226,229,243,231,246,231,226,254,247,243,228,251,232,227,239,240,234,249,246,242,233,250,234,236,225,231,241,202,244,252,246,218,248,225,254,220,205,240,240,231,235,245,109,11,16,8,31,14,7,10,15,6,5,2,7,0,29,7,4,0,1,14,13,26,15,0,8,201,202,201,186,204,191,226,187,198,179,176,176,209,183,201,196,176,200,216,208,187,222,190,188,206,207,188,202,201,176,181,228,186,187,198,199,190,221,175,182,180,184,187,183,197,203,184,200,208,189,193,192,214,174,196,197,180,191,182,206,202,199,195,185,194,200,182,182,195,213,201,195,193,203,196,179,216,209,202,198,206,186,209,187,203,214,197,190,196,200,203,171,203,181,207,202,181,198,181,177,199,169,196,203,191,199,197,203,214,184,188,215,194,196,215,219,210,193,203,210,220,193,216,223,209,207,207,224,225,218,214,202,199,212,225,204,216,194,203,207,202,227,198,190,206,188,186,176,187,200,201,205,216,191,201,208,209,206,231,201,235,203,202,207,243,215,201,198,188,196,219,213,233,202,205,215,233,222,216,224,247,214,203,180,210,213,220,195,210,211,190,216,242,203,191,21
3,209,229,189,211,197,220,240,212,213,205,219,211,214,219,231,213,195,195,218,216,227,227,223,225,209,220,203,212,232,190,218,196,235,183,215,215,199,218,202,202,198,207,222,241,238,206,207,240,211,185,223,181,195,220,206,226,226,174,202,213,211,227,230,165,204,213,216,215,234,218,205,190,223,217,198,223,224,209,217,205,240,182,171,214,245,183,218,219,212,203,208,221,203,172,220,199,210,221,216,199,228,210,211,209,198,213,200,197,194,205,189,165,185,232,228,221,157,153,181,217,190,214,182,192,170,90,160,181,172,183,179,132,103,141,208,217,222,212,207,217,224,217,183,216,202,226,227,121,70,77,38,75,71,103,69,177,224,179,209,169,212,172,153,174,133,239,226,184,143,202,244,198,239,244,172,64,112,134,149,205,233,220,194,192,222,236,220,216,205,213,155,81,195,104,37,135,188,124,101,140,61,78,85,128,142,92,147,184,141,191,241,230,170,167,171,151,131,108,87,38,0,121,180,195,207,78,59,53,122,105,119,79,31,96,116,125,95,56,64,20,25,8,6,2,33,87,113,120,122,106,134,80,85,95,106,118,133,158,56,28,128,130,105,101,58,17,18,11,17,38,16,8,22,6,36,19,28,14,25,42,62,63,37,55,54,7,33,42,28,39,14,20,86,85,81,150,133,87,108,55,39,13,28,6,40,54,66,66,82,12,50,48,50,74,70,30,16,0,9,16,34,81,106,49,35,21,32,41,51,51,31,40,47,37,50,37,45,20,135,115,94,186,251,217,133,61,41,27,43,24,34,32,15,51,25,35,32,15,35,53,36,31,55,16,19,53,46,47,49,117,108,34,18,33,18,64,45,64,98,37,85,115,86,44,85,141,146,128,78,105,83,57,50,73,60,50,80,66,27,49,24,30,35,29,15,30,31,19,11,33,34,31,7,33,60,49,46,29,33,29,39,55,55,50,36,48,15,44,141,147,118,104,123,117,109,30,50,236,253,246,222,231,236,231,255,249,251,246,218,237,224,250,233,236,240,230,237,232,245,244,249,245,247,252,230,241,247,252,242,236,235,245,236,233,249,214,234,217,254,231,222,224,226,249,228,250,232,234,236,236,243,237,251,235,253,234,221,248,234,114,4,11,24,11,15,27,12,14,7,16,23,12,3,8,16,5,0,20,10,22,35,8,19,7,196,205,206,183,165,209,205,198,204,191,203,192,203,188,199,213,198,182,206,188,232,172,183,189,215,197,198,185,181,192,213,185,227,
201,200,205,209,196,178,192,204,193,195,181,175,188,192,173,174,195,201,203,172,211,195,182,209,179,187,193,197,192,204,150,196,208,171,206,165,177,198,208,210,183,187,196,199,173,219,206,207,196,216,204,180,211,206,187,194,210,190,188,209,212,204,194,212,213,198,194,194,184,189,194,198,219,208,222,202,196,200,214,197,212,177,211,211,218,204,213,220,147,199,225,204,214,203,180,218,194,209,193,188,172,205,203,212,196,217,214,192,229,204,202,205,197,217,202,206,231,198,182,216,235,211,196,180,195,202,199,191,206,198,207,239,227,219,184,221,206,212,194,204,211,207,228,212,224,206,211,196,197,201,222,211,215,196,228,216,224,191,210,214,221,205,234,208,224,223,230,220,210,229,215,194,232,212,215,209,195,223,240,213,209,231,198,210,215,201,213,197,236,217,211,219,211,209,202,202,208,211,205,211,222,223,216,218,197,216,212,193,217,224,222,202,210,192,218,212,208,209,198,207,217,204,207,220,203,205,217,218,203,200,220,217,231,221,208,213,211,196,232,242,182,217,198,207,204,174,242,237,225,197,200,219,210,206,200,206,204,187,226,216,192,184,214,206,210,203,208,225,224,217,185,195,230,191,169,155,219,197,205,138,142,213,204,198,229,173,186,120,110,157,165,140,166,131,128,91,124,195,210,218,213,214,211,226,217,178,209,206,234,230,83,85,54,85,84,76,89,76,203,229,190,222,172,170,172,214,182,203,228,147,119,153,194,238,215,211,234,118,67,113,122,168,210,225,205,213,243,210,229,225,229,225,184,97,103,159,68,66,202,162,110,166,75,32,48,74,135,213,108,202,217,173,233,215,170,102,44,64,60,18,29,21,15,2,114,185,135,73,1,67,103,110,134,79,41,70,113,144,81,54,79,46,15,35,15,21,34,64,114,131,83,99,88,72,93,102,82,107,99,130,130,16,27,127,125,100,123,59,28,8,24,23,25,7,33,29,26,34,19,40,52,74,82,79,64,51,42,54,44,28,34,35,29,0,36,156,139,117,158,157,95,103,42,37,36,32,48,52,49,74,70,62,18,43,23,52,92,44,18,34,37,14,35,19,69,102,68,49,14,49,24,30,13,26,21,30,15,54,77,29,58,203,139,64,192,224,185,158,124,88,63,69,57,51,40,22,88,21,33,47,20,14,33,21,14,78,30,12,38,37,32,35,115,155,58,32,17,1
8,27,63,78,53,32,154,138,67,82,125,115,178,147,125,141,62,85,64,66,46,24,49,36,10,31,13,29,16,64,23,25,39,28,29,38,7,37,24,43,37,28,36,18,30,40,45,29,38,35,21,36,43,57,133,132,135,108,95,121,84,23,67,236,232,254,249,234,229,227,242,254,224,241,217,244,240,212,242,233,242,253,247,240,239,255,243,236,241,234,225,234,238,250,244,230,243,239,208,242,234,250,239,244,242,227,221,216,238,230,228,220,231,238,251,222,235,248,250,238,231,245,239,250,230,116,2,15,9,9,7,29,10,11,24,34,18,9,3,6,9,4,8,18,1,24,37,19,4,23,186,191,212,221,192,177,183,194,201,204,191,203,211,182,221,195,188,193,198,190,216,210,212,206,180,205,179,194,183,205,208,197,199,185,208,200,207,205,195,204,184,201,187,195,170,201,182,207,192,179,208,178,190,206,178,214,206,203,174,209,208,202,190,196,145,171,200,193,183,189,175,203,191,186,213,201,178,188,221,180,197,177,202,195,213,192,206,172,207,195,192,186,196,200,194,171,194,186,188,195,199,192,191,191,219,194,193,220,205,204,199,220,197,194,200,207,211,214,192,186,193,199,210,209,203,205,196,223,202,201,208,224,188,230,190,198,193,190,223,202,194,196,198,178,197,192,169,214,218,212,196,217,191,226,204,211,226,222,228,223,215,215,220,198,186,197,217,210,210,207,200,197,177,200,215,208,208,192,190,216,228,178,202,231,193,194,177,222,208,206,218,196,217,205,206,235,222,198,219,212,206,219,226,215,176,216,230,210,224,224,193,215,190,206,214,221,202,206,214,227,208,191,205,202,224,199,212,212,225,218,227,229,203,203,221,200,207,204,212,210,188,193,202,234,190,206,197,196,215,176,204,196,196,203,230,212,188,226,215,225,214,220,163,190,190,208,210,204,220,224,200,218,246,198,203,189,238,186,145,167,220,207,207,217,214,193,204,195,205,207,191,202,207,210,206,195,182,188,207,223,193,220,226,180,187,195,189,165,170,209,213,209,152,173,192,220,190,224,226,156,146,119,161,169,137,184,151,172,147,172,208,183,214,229,196,203,212,206,156,194,211,217,210,128,89,80,102,68,70,77,49,167,187,183,184,169,195,165,223,170,210,177,132,157,189,220,239,206,234,249,116,76,140,112
,182,141,158,205,206,227,215,233,210,241,161,135,111,146,143,49,139,179,109,157,111,51,9,35,46,116,171,106,156,144,149,186,128,56,28,30,28,58,53,73,66,37,41,199,188,43,37,36,115,99,106,96,13,30,121,131,115,33,51,33,2,11,48,38,44,96,73,122,94,117,114,105,71,118,109,103,91,91,110,121,11,14,136,93,100,99,66,28,30,37,21,38,11,18,32,18,49,48,101,121,128,116,120,84,75,95,69,48,48,46,55,55,39,57,124,143,87,124,130,94,100,46,50,56,61,40,68,66,60,112,20,25,48,50,80,119,73,47,16,16,5,12,3,41,82,98,57,50,58,59,65,64,52,39,60,44,56,66,44,95,197,105,53,42,92,74,44,137,147,145,92,57,41,37,31,66,42,43,13,14,34,28,22,29,36,26,14,20,41,29,50,116,112,57,30,32,23,51,58,90,68,106,180,121,96,113,123,100,123,151,137,162,74,91,70,47,38,5,42,28,22,41,49,15,36,42,20,44,25,27,20,21,35,73,17,42,24,52,54,35,32,34,31,20,35,32,27,49,37,29,144,140,127,109,107,97,96,22,58,222,244,243,236,238,248,254,247,227,255,245,248,255,242,242,242,213,246,238,231,231,255,239,234,230,222,232,236,244,204,241,252,227,244,230,244,222,244,244,242,238,249,231,222,233,245,238,249,253,237,234,224,250,211,233,233,249,227,241,247,220,221,139,5,5,0,36,16,23,1,14,34,19,11,20,12,7,5,8,25,39,16,15,4,24,4,7,188,218,189,168,203,196,192,190,190,192,199,190,202,198,227,195,192,211,205,189,189,208,179,211,206,207,207,204,167,188,202,205,180,176,188,189,207,190,187,193,190,180,200,195,206,186,202,206,198,160,198,209,211,193,195,188,198,201,204,169,183,193,198,181,180,184,201,178,201,199,208,187,191,210,188,180,181,182,186,190,209,199,197,170,201,190,203,164,192,195,214,205,223,201,197,190,181,201,217,195,187,186,191,200,201,208,221,209,208,206,192,174,195,204,204,217,231,230,205,196,214,215,222,205,217,190,204,190,203,197,184,176,206,195,204,193,197,209,213,189,206,197,191,191,215,199,195,202,205,201,216,197,217,191,190,208,198,212,192,197,194,182,198,212,220,201,205,206,182,196,202,187,216,214,204,215,184,206,221,220,206,230,194,207,209,212,205,221,200,224,220,199,220,211,227,195,210,179,204,209,233,205,216,207,199,213,219,220,2
03,221,207,238,191,225,225,247,241,221,194,211,201,205,218,222,216,205,219,213,216,221,202,227,230,203,209,225,209,207,219,206,189,184,201,192,223,205,220,209,214,192,193,195,227,191,201,216,202,211,192,210,215,196,207,174,228,198,204,193,202,224,187,163,210,205,206,207,201,167,56,83,190,215,215,193,192,194,204,239,198,198,206,197,213,197,209,198,205,213,220,187,191,211,200,187,199,203,162,153,205,209,207,199,142,178,194,213,217,214,224,148,118,95,185,147,200,200,189,185,159,176,208,207,208,217,207,203,232,205,169,205,231,238,189,132,95,49,103,77,90,91,8,117,175,147,164,218,212,194,193,128,209,206,186,178,184,232,205,205,228,255,131,109,170,140,173,118,147,187,202,241,229,246,227,194,107,118,134,169,86,85,208,127,1,149,156,59,71,69,22,58,110,68,76,96,64,86,40,24,43,24,69,88,87,55,70,90,67,198,117,26,76,84,98,111,71,42,55,131,139,113,59,42,64,27,8,22,34,52,95,92,116,106,102,125,103,97,94,88,105,67,74,130,145,93,48,82,140,126,86,137,22,17,9,6,21,18,3,35,26,26,59,109,139,162,114,107,99,81,106,100,100,102,107,83,59,80,92,114,123,147,58,132,148,180,131,45,122,50,55,70,64,80,61,87,18,46,47,46,47,56,62,53,31,12,12,24,40,14,41,31,55,87,98,109,81,88,111,90,74,119,74,78,94,85,52,34,12,20,58,65,33,86,103,128,120,69,46,11,11,44,31,41,35,42,33,51,22,83,56,34,45,57,84,16,80,97,69,14,37,19,6,39,35,69,67,103,203,88,71,110,100,69,60,69,123,110,57,59,85,68,56,23,63,44,28,47,48,38,9,21,23,41,19,17,25,68,32,59,75,39,63,23,52,42,26,19,39,24,32,20,48,56,58,9,107,136,140,113,104,128,112,39,23,195,255,251,254,242,252,241,226,253,230,234,251,227,244,235,237,244,220,239,238,238,217,249,246,236,228,244,229,222,253,248,219,239,238,250,237,238,238,241,236,251,228,255,239,234,226,240,231,233,241,253,227,245,241,244,233,238,247,227,249,255,233,119,2,0,10,2,9,17,1,3,17,18,4,25,10,13,1,6,1,30,14,16,4,39,6,8,173,215,181,205,199,198,201,200,196,185,183,196,183,222,199,197,176,188,205,202,199,208,177,191,176,174,181,178,201,198,175,194,192,201,185,209,199,200,201,201,202,207,183,169,176,207,198,202,18
5,172,185,199,205,200,181,196,211,182,208,185,178,180,196,187,203,197,187,194,170,180,208,178,189,212,198,189,219,197,194,208,211,199,188,195,172,209,196,177,191,186,182,172,177,207,170,203,200,200,218,191,180,232,182,213,209,178,215,187,187,208,195,201,188,229,209,218,218,206,211,210,204,200,225,224,207,206,197,185,195,192,213,201,218,162,210,209,204,227,213,199,206,203,215,168,214,211,184,185,189,196,197,196,196,181,204,207,220,223,233,171,211,205,215,213,210,204,190,193,206,182,197,207,232,204,236,198,203,196,206,196,193,224,217,183,212,190,206,214,208,202,216,206,199,199,207,218,229,215,217,202,227,211,206,233,196,223,231,196,189,206,188,219,214,207,198,198,234,204,187,183,211,235,223,223,203,201,192,210,218,239,234,221,224,206,217,180,190,182,203,198,221,202,231,238,171,211,201,211,231,180,212,210,188,181,227,206,225,188,186,219,183,189,195,215,208,218,185,181,161,201,186,134,222,224,217,227,192,195,85,98,151,195,217,180,213,221,203,203,212,197,211,215,188,181,216,198,193,225,217,177,199,190,225,203,173,214,165,167,189,218,224,170,148,190,204,216,197,202,186,136,119,114,173,188,177,147,141,182,172,149,186,196,209,208,216,240,206,206,164,224,225,228,166,135,84,77,124,91,130,77,25,119,154,173,231,226,192,199,151,117,216,215,199,180,209,225,238,192,222,237,136,200,203,185,166,129,149,167,184,221,240,252,214,76,80,142,183,172,64,121,221,84,88,195,115,57,44,77,81,54,58,70,65,50,43,50,45,55,84,43,31,50,24,63,41,32,26,64,54,50,84,82,108,67,45,48,100,147,119,57,80,39,1,26,22,23,71,64,95,101,116,119,90,127,98,109,108,95,95,34,150,168,158,123,90,128,173,125,124,101,44,12,6,17,36,31,41,56,16,50,88,121,132,111,117,114,88,87,127,109,149,134,122,83,94,123,103,99,117,91,120,135,98,152,83,50,79,18,45,79,76,114,50,64,39,26,30,18,39,40,20,35,16,55,22,30,36,15,14,14,85,100,87,74,78,133,176,127,96,125,146,138,119,50,7,16,31,37,47,60,64,147,146,142,120,58,44,25,26,35,28,41,51,63,63,61,53,66,60,61,55,81,69,55,73,83,37,53,30,33,40,41,12,31,25,67,116,58,51,74,98,76,53,80,75,104,69,119
,116,66,40,85,50,67,67,65,54,10,16,9,23,27,52,50,40,62,73,97,113,90,89,75,30,36,52,30,40,53,49,27,13,37,58,19,109,162,131,144,86,86,118,40,30,186,239,232,243,243,246,237,224,235,222,230,240,249,242,230,238,235,213,235,223,246,245,241,249,236,231,243,231,251,226,239,207,216,232,246,247,235,231,235,233,225,229,240,242,241,247,216,236,228,219,246,235,245,222,249,204,240,209,244,244,249,239,135,5,10,1,0,21,4,16,18,0,5,14,16,2,13,10,19,2,8,17,1,0,2,9,20,191,189,184,217,195,184,206,215,205,213,180,196,200,192,221,195,192,233,191,198,198,173,205,191,199,190,215,176,189,198,196,206,194,193,198,215,175,192,192,193,190,176,206,198,180,199,196,196,193,174,199,192,194,182,202,192,213,196,197,211,204,195,224,189,183,176,185,204,221,188,189,182,178,193,195,182,194,203,210,181,202,188,210,196,190,187,198,202,176,185,183,208,202,201,202,196,211,190,201,179,212,194,210,189,204,202,212,188,209,176,187,222,196,205,202,200,201,172,205,182,199,212,211,203,207,197,183,185,209,210,215,193,209,202,191,203,192,203,184,217,199,198,206,194,216,193,205,207,198,195,179,213,193,199,209,207,210,200,192,208,204,190,207,198,200,197,209,195,223,188,191,193,206,213,200,224,202,227,213,231,208,212,207,217,217,203,219,218,207,217,187,217,210,205,222,228,221,200,199,193,239,193,205,197,202,192,221,212,193,215,219,219,212,189,210,241,203,228,194,212,199,223,179,212,218,213,194,198,230,206,238,201,227,210,193,211,205,171,219,212,195,191,183,191,218,222,201,181,182,218,193,211,203,202,211,220,214,199,218,170,188,198,197,220,210,217,196,196,194,204,207,163,200,214,185,197,214,216,204,214,197,192,209,196,206,183,177,220,216,190,196,224,214,195,210,222,192,205,229,203,187,183,173,207,188,178,196,170,201,214,233,176,147,191,186,212,199,229,181,167,111,44,118,180,159,174,162,206,171,155,203,195,198,189,211,228,241,208,181,201,213,228,127,131,110,68,100,80,99,74,60,164,220,193,241,173,150,211,153,175,222,192,150,172,218,219,236,198,247,221,154,239,239,162,165,185,157,129,145,210,223,233,146,14,95,162,152,123,142
,177,169,68,93,162,79,64,67,65,57,53,55,66,63,73,69,48,82,25,44,24,5,13,21,0,28,2,4,43,41,99,80,71,75,31,43,85,116,107,58,92,33,18,1,8,17,39,112,113,103,135,99,121,93,109,101,105,115,82,91,116,216,215,170,128,145,117,157,84,111,99,24,33,5,31,7,36,49,12,59,99,99,140,124,89,84,101,88,90,76,80,76,99,74,40,42,61,95,85,74,102,82,72,45,72,41,37,78,35,49,77,99,84,54,51,25,16,36,28,20,39,18,1,21,21,31,36,19,14,41,57,69,77,37,58,17,25,41,47,35,32,19,41,49,19,14,27,12,61,77,49,119,202,183,110,74,33,28,40,45,35,25,33,58,40,54,40,21,43,43,68,75,83,124,119,56,66,50,33,20,19,28,36,17,42,21,58,88,55,17,46,121,79,90,94,95,112,116,125,127,61,28,86,74,72,16,14,79,33,46,46,46,40,66,40,78,73,89,92,107,113,81,93,69,56,13,52,36,23,45,38,38,25,25,28,73,113,129,130,96,85,120,78,21,154,239,235,255,227,254,254,228,215,227,226,228,222,225,242,227,229,237,229,235,205,236,225,242,235,222,249,251,245,224,217,250,220,238,220,243,226,237,234,207,233,247,239,248,249,248,244,243,239,228,244,242,224,234,233,226,241,212,241,240,245,222,117,16,23,11,26,8,14,24,6,4,23,13,26,25,7,15,0,5,6,9,14,10,27,15,16,199,207,206,171,208,209,183,189,211,201,187,202,211,182,205,201,202,195,201,167,202,181,214,199,196,197,185,188,198,181,203,190,202,214,180,185,187,165,204,201,194,197,196,214,191,202,173,211,187,193,187,166,217,208,190,196,211,200,187,188,204,188,181,199,192,191,184,187,192,200,213,187,212,184,177,194,186,194,181,212,197,201,211,202,182,186,176,197,189,179,201,199,181,195,185,172,204,191,186,175,200,187,189,222,210,204,202,182,220,191,176,234,194,196,188,195,221,205,210,202,227,199,199,207,205,205,194,193,195,203,202,202,204,201,217,182,220,212,193,180,192,203,174,183,200,202,199,206,218,208,200,187,217,198,211,186,223,191,204,206,210,206,220,195,210,213,199,222,193,209,188,211,191,178,205,214,209,205,178,218,192,216,228,210,210,214,205,195,194,222,239,207,199,199,191,215,191,227,196,212,235,206,205,219,210,221,216,221,198,211,201,191,204,215,210,215,209,192,192,208,221,210,200,223,216,237,215,189,212,
207,202,223,230,192,202,225,194,190,202,200,206,200,206,225,227,206,188,205,225,213,193,201,208,219,210,207,191,180,201,192,206,225,206,212,208,218,178,179,171,195,186,202,240,212,199,188,196,205,173,217,218,183,225,201,192,209,191,207,207,179,232,216,190,181,202,209,197,201,195,204,217,218,192,212,175,215,170,173,201,206,199,171,127,211,191,199,201,224,172,121,84,55,155,180,164,158,156,196,165,134,195,185,207,195,196,205,203,195,164,212,179,188,135,167,100,55,108,77,92,52,68,185,243,213,200,110,178,204,198,179,144,154,153,201,233,221,220,237,232,185,160,238,238,186,141,181,174,157,133,182,234,230,102,28,134,143,154,186,224,195,79,75,75,88,41,6,25,17,33,20,32,32,45,50,50,33,39,26,1,15,13,12,3,50,10,11,17,62,88,109,100,51,61,33,77,109,114,65,46,52,4,11,10,34,25,119,106,123,94,86,87,56,120,99,84,76,77,74,112,176,229,192,113,113,141,124,130,97,128,70,14,37,6,33,28,31,31,56,95,192,148,117,92,65,92,108,93,87,33,43,50,63,42,80,45,43,78,77,91,64,79,55,29,48,12,34,42,10,48,70,40,43,51,50,41,26,27,38,9,9,5,24,30,26,28,29,60,49,68,97,56,84,74,83,32,37,41,17,27,20,58,28,39,49,40,33,20,72,89,91,163,189,138,70,75,56,36,52,36,32,25,38,27,37,21,12,28,45,30,23,24,70,104,140,114,87,53,69,28,38,29,38,30,49,26,108,80,51,31,73,118,100,106,81,113,168,101,60,57,35,59,49,49,42,39,60,53,62,81,55,70,66,72,74,83,100,115,114,97,92,80,99,63,42,34,18,24,41,71,20,52,41,40,43,64,126,154,138,101,114,76,56,37,130,245,233,254,230,228,237,229,216,236,232,241,229,242,211,230,235,216,238,248,250,224,221,254,240,251,231,238,236,242,249,243,249,243,226,246,245,255,253,238,243,247,234,214,254,245,228,239,250,243,247,228,242,239,219,233,231,238,235,241,246,220,116,5,15,12,34,5,13,16,9,7,17,22,2,34,4,9,2,33,3,6,4,15,2,8,37,193,198,179,208,201,203,173,191,197,213,167,198,195,211,206,174,190,214,175,223,213,195,196,187,200,209,213,203,192,174,233,217,179,196,188,210,193,205,176,176,170,201,182,194,197,185,202,193,197,198,184,193,204,186,196,199,183,173,192,198,189,185,191,193,202,194,215,220,181,180,208,194,2
2,74,60,76,102,88,73,65,55,48,75,81,116,128,126,133,141,112,119,106,93,79,94,95,73,67,77,62,41,41,26,55,43,33,74,24,82,135,126,120,122,112,129,121,94,138,136,59,67,46,63,180,238,248,251,236,174,118,129,104,114,123,143,134,104,154,224,214,213,216,222,237,216,214,209,226,226,228,212,205,196,226,233,225,209,229,204,203,232,228,213,232,221,236,229,231,207,242,207,219,232,228,225,207,210,94,1,5,7,15,2,10,16,28,29,9,19,3,0,11,2,7,19,17,16,6,0,27,17,29,206,197,185,198,191,206,201,197,203,204,191,215,215,207,187,196,185,184,221,217,197,184,191,216,175,220,178,197,182,195,179,190,186,202,191,199,204,187,198,189,184,183,184,205,171,176,195,192,181,193,180,196,185,182,196,186,185,171,196,201,177,207,218,201,189,191,198,189,194,178,182,205,176,187,183,193,181,186,197,192,211,179,213,228,186,210,222,208,195,186,191,199,182,197,190,202,215,210,210,203,206,210,209,189,168,209,225,195,212,176,213,204,177,183,180,200,191,199,192,199,212,222,210,230,223,211,208,204,198,184,207,228,211,196,204,194,214,236,217,199,220,225,215,232,204,191,234,203,198,234,217,210,207,203,185,222,214,202,208,192,210,191,216,207,207,235,201,213,198,202,205,217,200,199,215,206,209,216,179,206,206,213,213,226,212,217,207,229,215,188,197,245,195,203,217,216,213,203,187,197,190,224,240,187,214,214,216,198,206,207,218,226,211,204,221,232,199,234,207,213,209,204,199,204,196,221,218,195,189,223,235,228,191,208,234,201,193,216,199,215,217,215,221,190,222,217,187,200,201,203,197,209,185,199,211,203,211,185,200,199,200,210,181,202,205,217,207,186,193,196,195,183,187,183,217,187,189,190,205,186,190,202,198,194,218,180,185,194,195,192,203,185,187,199,190,193,204,190,200,167,172,196,211,195,151,169,205,201,149,137,178,186,175,152,150,103,98,100,68,127,63,42,35,56,95,150,204,171,197,207,142,155,225,205,227,176,162,179,170,169,185,117,142,137,69,74,106,74,61,56,56,203,164,229,152,151,190,143,189,216,190,193,178,220,219,175,134,236,196,151,201,198,216,199,240,218,208,221,57,49,40,45,17,14,36,22,17,14,15,25,19,30,31,10,51,
28,27,36,39,13,25,42,27,45,33,17,48,42,43,36,60,75,90,45,63,42,15,3,16,17,43,68,125,117,89,71,21,16,100,65,95,139,107,129,130,124,119,95,109,164,156,148,66,77,83,59,57,66,99,80,79,90,93,80,91,96,60,56,56,106,99,88,85,79,36,77,95,63,95,48,60,69,65,80,55,65,88,80,74,58,55,31,37,46,47,46,46,24,11,17,26,22,21,21,14,20,25,8,19,38,7,7,15,31,14,27,6,13,19,10,23,27,10,3,23,87,82,112,142,134,138,133,105,40,18,7,12,28,8,19,27,14,31,9,21,17,3,47,36,132,171,131,155,137,179,151,78,22,14,5,11,12,19,38,13,7,22,15,0,44,59,17,29,95,110,146,139,118,99,83,80,13,20,13,36,26,19,11,40,9,29,18,33,36,41,46,15,31,38,53,67,61,59,56,73,87,53,50,66,45,45,50,86,103,55,79,73,75,88,84,95,126,139,123,150,127,135,131,125,128,95,85,75,90,82,63,123,176,156,142,111,115,112,97,103,127,113,107,57,70,73,185,224,250,233,99,43,61,49,64,70,64,49,56,22,82,177,192,228,218,208,202,225,214,222,226,233,244,203,241,211,235,220,216,230,224,203,218,220,252,228,207,227,204,243,240,224,217,216,225,230,220,229,240,223,114,6,1,5,6,29,35,22,29,6,14,1,13,12,9,7,22,34,30,10,2,16,16,18,0,201,199,207,204,202,215,183,213,206,185,186,196,200,187,210,199,188,214,195,172,206,198,194,180,184,187,199,192,201,192,190,223,182,177,208,195,206,187,172,214,204,175,205,187,199,202,185,187,183,173,195,192,191,191,220,182,214,194,188,204,190,155,197,205,215,160,206,196,209,184,188,182,177,189,185,186,205,183,204,187,198,204,184,197,201,209,198,186,193,197,197,209,192,191,180,196,189,202,182,168,183,197,214,204,215,215,187,194,190,218,191,215,219,196,200,204,204,194,205,207,219,201,210,215,207,200,211,196,195,219,223,224,225,221,207,171,224,238,225,222,239,227,236,221,221,224,197,205,215,191,199,216,214,214,183,211,211,201,196,192,210,213,205,208,202,215,217,194,226,212,194,212,200,212,214,235,236,214,217,220,215,216,210,212,213,181,223,228,216,216,228,197,228,214,215,176,209,211,209,193,192,211,214,244,213,234,220,186,215,209,204,186,194,206,217,198,211,215,205,201,211,214,200,217,211,242,207,217,236,229,222,221,222,180,212,212,233,238,2
04,205,238,221,228,201,200,218,204,210,206,215,195,242,209,207,189,206,190,199,208,196,218,201,199,191,186,180,183,189,193,187,191,215,185,210,193,191,210,205,206,191,192,194,188,207,213,186,210,192,188,195,202,174,172,203,188,183,205,222,216,189,193,187,204,157,154,191,175,214,143,131,172,162,180,168,150,114,113,67,54,67,47,10,25,74,68,169,213,174,211,212,159,150,224,221,195,182,181,188,154,196,194,119,107,130,94,125,100,84,66,39,5,122,161,195,156,200,194,163,218,222,209,206,216,214,221,148,165,245,196,152,233,211,206,232,227,201,233,163,25,51,61,23,25,36,10,17,5,25,20,3,20,22,22,1,16,30,29,22,19,31,18,15,19,41,18,56,11,28,54,66,68,100,61,32,56,38,2,26,19,44,76,104,108,101,141,116,6,56,114,91,79,86,56,90,63,69,47,81,105,122,85,51,34,38,20,58,82,59,42,71,64,66,61,97,122,124,118,72,42,47,41,55,45,40,40,45,49,64,39,61,54,38,75,39,41,27,24,12,18,28,5,19,44,12,43,24,31,18,40,31,26,41,52,18,8,16,6,60,25,7,26,7,6,17,9,12,6,31,24,42,32,22,31,11,63,170,193,179,174,206,199,194,144,62,18,28,48,31,33,29,15,22,15,40,21,3,33,14,69,132,152,180,163,127,176,151,106,39,19,22,20,11,15,33,30,13,31,24,54,19,6,28,52,140,156,147,151,182,155,155,95,32,6,40,19,37,10,3,14,13,17,38,26,33,13,24,25,18,36,20,29,41,20,43,44,38,45,31,42,72,68,73,69,91,76,80,68,48,84,57,66,97,74,51,43,59,103,107,127,123,127,155,132,143,135,139,138,157,153,121,107,91,104,134,119,130,105,99,140,109,204,235,233,251,178,58,11,16,73,73,23,31,44,24,1,78,204,222,214,233,203,201,241,221,219,217,218,220,220,217,191,220,193,206,231,206,205,198,212,205,211,206,192,216,222,237,230,217,233,237,237,243,238,210,216,123,3,23,5,8,11,5,17,0,27,29,31,13,17,8,15,34,20,23,19,3,26,18,4,14,198,204,200,184,191,221,208,228,179,177,193,191,199,205,203,214,213,205,168,202,220,206,187,204,208,200,201,209,188,166,198,187,174,195,201,177,182,190,188,169,186,195,190,192,188,179,201,203,185,189,185,224,205,187,220,205,215,218,210,181,181,202,175,206,170,199,179,207,186,176,194,212,183,186,193,196,202,170,174,191,227,193,219,223,226,186,183,210,2
14,176,201,207,208,189,202,203,213,205,169,200,198,222,180,200,199,191,189,205,196,214,190,227,219,201,203,198,196,190,224,202,202,195,202,203,199,191,220,185,193,204,214,244,208,190,220,201,193,189,226,211,210,198,219,198,197,214,194,219,214,222,209,195,199,191,209,226,189,211,187,233,167,214,202,228,218,194,208,210,202,187,183,213,211,210,207,186,206,207,204,221,185,192,218,212,209,220,219,227,201,212,214,229,214,230,189,195,216,199,213,199,209,212,216,241,213,213,216,196,213,217,214,209,219,220,202,211,214,211,239,213,219,217,207,223,194,211,230,218,230,205,226,191,198,215,208,184,215,193,210,184,213,212,219,169,203,223,189,217,213,213,200,204,165,200,202,208,194,177,213,222,213,162,196,198,203,181,193,203,205,192,196,179,203,204,172,214,215,206,209,169,201,216,198,195,200,201,182,187,178,170,206,212,188,181,207,193,199,205,244,188,185,194,210,174,187,228,200,190,146,160,204,198,174,160,166,145,123,82,109,93,51,52,59,128,157,240,227,199,198,208,156,145,221,231,193,160,216,215,142,153,183,136,174,176,110,99,103,68,56,56,18,91,182,217,153,199,156,162,232,224,211,195,207,227,196,157,190,236,181,152,233,231,237,239,210,185,144,100,76,66,32,40,29,44,13,13,14,33,26,18,30,21,6,49,18,19,25,3,19,4,56,22,19,41,24,23,37,30,82,67,88,46,61,55,29,36,18,34,59,72,123,103,121,92,99,76,20,83,111,129,90,91,51,124,92,34,77,26,56,89,129,49,10,96,62,49,120,82,27,12,53,28,21,93,96,73,86,59,17,38,34,39,27,20,11,24,29,6,54,18,36,12,27,30,28,22,18,19,1,5,31,34,17,15,18,53,65,58,84,70,66,80,97,47,16,31,16,2,26,15,20,13,19,33,24,26,5,22,7,19,8,11,23,4,81,208,200,186,167,178,182,186,171,70,3,11,15,27,6,25,16,36,11,26,19,33,16,33,60,104,141,168,140,140,144,158,147,51,19,16,28,19,28,20,45,27,16,29,24,29,50,21,55,140,156,163,167,146,176,175,164,49,14,11,10,38,16,25,9,9,14,7,9,11,17,11,23,18,15,19,29,41,32,22,24,36,20,21,29,33,48,34,40,52,53,64,57,45,50,66,92,62,79,64,85,71,59,61,77,64,59,86,70,68,114,86,72,85,72,68,45,87,88,65,96,94,128,106,119,99,140,200,162,172,91,60,29,69,57,86,66,57,62,65,2
1,152,243,221,219,213,204,226,223,228,199,212,233,208,220,236,198,207,219,212,228,215,206,221,222,206,231,213,206,205,210,230,225,224,223,214,199,216,233,210,236,130,1,0,10,18,11,16,30,0,2,21,10,13,0,0,10,26,16,25,4,20,16,18,13,12,213,197,190,173,228,202,212,189,190,218,195,233,192,181,200,197,189,207,201,174,212,224,210,183,211,184,201,186,199,206,167,191,199,180,193,218,187,177,194,199,217,199,196,191,191,209,209,214,203,194,198,213,182,168,192,200,193,198,184,199,208,181,204,191,194,210,177,184,193,187,182,205,208,179,178,201,186,224,178,196,196,205,207,203,208,194,207,215,215,182,204,206,182,199,207,195,209,212,168,210,231,200,188,189,198,210,212,200,193,221,212,186,198,224,214,193,215,187,203,197,200,207,214,199,216,209,211,214,212,202,241,207,193,191,211,198,207,188,206,225,228,207,226,216,191,206,198,220,204,220,190,232,194,204,195,208,187,179,196,205,202,193,204,178,204,209,220,204,204,223,238,209,195,209,232,194,203,213,204,219,204,206,197,229,208,225,217,191,205,215,221,208,211,219,199,223,207,235,240,235,213,226,187,216,206,224,211,212,220,217,213,194,241,195,225,214,191,209,205,192,220,208,220,220,212,184,199,243,209,192,203,208,213,204,218,209,187,212,199,196,214,187,195,228,224,215,204,232,217,201,183,201,215,229,196,219,216,209,197,205,218,184,188,197,191,226,200,208,182,192,201,179,148,184,179,187,179,200,213,185,212,186,195,187,203,210,231,163,206,215,197,200,184,187,190,194,193,198,227,187,194,175,215,143,191,180,176,219,151,197,198,184,190,150,160,150,161,112,121,56,65,38,101,168,203,236,209,171,164,193,168,158,183,200,170,181,238,151,106,125,204,149,198,157,77,129,88,91,104,64,53,158,250,244,168,157,133,164,202,221,218,247,208,224,182,137,186,205,138,156,147,243,237,230,170,99,60,43,75,44,26,50,21,11,24,18,32,21,20,13,24,45,17,43,34,20,20,44,26,8,8,25,31,26,60,51,35,68,80,88,59,58,60,20,31,46,42,66,61,121,84,135,126,90,108,79,44,96,130,87,117,117,123,130,61,61,61,44,97,150,121,59,80,129,135,71,111,129,47,23,21,24,66,127,95,92,113,38,22,32,24,40,4
0,33,29,33,4,23,34,5,14,36,16,11,3,22,10,20,51,17,21,29,14,21,8,78,121,148,139,168,167,160,199,74,20,34,24,29,13,25,14,19,26,32,11,2,10,22,19,20,17,30,12,16,95,166,151,161,110,96,125,119,105,24,28,42,10,31,8,43,28,17,18,20,52,34,35,17,63,68,138,158,132,136,135,165,182,94,7,25,22,34,28,37,37,37,20,31,23,46,45,22,68,144,170,116,167,163,210,175,190,110,29,9,18,17,24,22,38,33,20,24,19,15,22,3,29,17,27,3,8,38,49,59,63,32,15,16,49,50,17,17,41,40,16,45,55,54,27,27,66,41,26,68,67,92,70,90,76,80,85,61,75,61,94,70,73,58,49,71,77,66,78,58,45,33,55,58,70,55,43,39,30,45,42,13,18,56,45,93,46,50,63,35,40,218,244,232,245,217,220,212,217,214,204,219,233,208,208,196,239,224,206,201,221,221,218,191,233,230,225,230,224,230,210,217,221,204,197,223,233,233,224,249,222,119,15,12,23,18,17,32,22,14,31,4,13,14,6,7,12,3,17,8,6,0,19,20,11,11,224,218,231,216,195,212,228,190,193,196,192,202,206,201,191,181,222,205,211,184,204,166,190,178,179,192,188,181,199,192,184,189,190,192,183,189,188,202,204,202,184,203,213,200,197,206,220,198,187,178,202,202,191,183,177,192,196,203,193,197,210,200,198,190,192,228,210,181,181,196,192,180,183,209,193,194,197,199,171,179,190,198,196,216,180,229,191,179,195,202,209,203,209,213,190,210,205,187,210,200,214,183,210,240,211,207,214,206,222,195,199,202,206,201,207,210,211,209,202,206,181,192,191,176,219,191,216,220,226,203,192,226,217,221,202,194,212,211,196,208,217,204,211,196,216,217,224,223,204,198,212,211,206,212,203,220,204,199,228,218,218,202,211,209,207,195,219,220,218,194,225,213,229,206,226,213,193,202,190,196,205,177,210,207,222,217,236,209,211,216,206,210,205,206,208,206,229,232,207,224,211,202,232,215,243,215,195,223,231,198,213,206,227,232,208,212,218,224,208,223,234,201,202,199,228,203,194,215,211,214,214,207,232,199,197,211,196,200,202,213,210,213,202,230,229,202,220,210,213,218,224,226,199,188,235,226,188,210,220,200,208,198,201,216,206,210,178,185,222,185,206,163,175,208,228,205,207,195,203,204,206,190,203,204,212,200,206,202,187,203,209,182,237,18
7,191,198,191,194,198,179,202,164,189,136,176,177,174,170,151,215,225,164,143,127,144,157,158,142,107,38,63,48,104,203,218,241,201,166,194,228,179,159,194,205,168,213,168,128,149,180,209,155,167,97,80,109,92,87,81,20,51,221,225,204,140,152,172,160,217,206,196,216,227,241,145,158,202,77,70,90,125,197,241,209,104,65,58,31,65,33,22,7,27,10,15,51,27,24,18,11,27,15,15,23,29,17,54,22,7,26,35,21,27,57,34,45,63,61,91,71,83,56,48,12,65,95,77,96,82,100,115,130,91,88,126,123,118,146,119,74,117,113,119,96,57,77,87,74,97,83,120,136,71,159,148,88,82,161,117,44,35,48,138,175,93,108,94,49,29,59,26,48,52,57,21,20,19,14,10,16,12,22,13,19,11,35,5,11,5,8,29,23,30,20,25,189,187,210,228,223,180,190,214,105,11,23,15,8,14,22,16,26,17,24,20,16,19,32,23,14,8,23,0,31,92,110,67,68,35,38,60,71,87,31,15,6,36,26,40,17,12,66,14,33,47,32,19,29,31,91,147,190,139,145,166,133,188,71,11,17,10,18,29,43,24,21,38,19,19,15,26,20,89,123,113,128,109,116,160,140,174,99,29,21,35,24,12,9,42,23,4,33,14,17,40,19,19,8,25,28,27,21,91,72,54,95,73,45,59,59,37,14,1,31,18,19,18,9,3,27,24,19,39,37,58,14,29,50,47,59,78,56,77,57,41,83,64,70,98,115,140,142,110,90,36,22,8,40,63,60,64,53,55,32,39,14,63,72,55,76,73,59,68,15,133,228,219,232,247,231,202,209,221,198,204,221,229,217,220,226,218,230,208,221,229,211,243,223,248,206,228,203,228,229,227,204,216,235,227,230,202,223,210,226,230,134,1,15,2,12,17,14,14,13,4,20,39,39,2,1,19,1,0,12,11,5,19,3,25,7,207,196,219,195,199,196,192,196,184,200,190,185,190,204,196,191,187,212,225,209,203,216,204,205,178,200,219,194,219,218,196,200,213,180,178,200,194,181,211,194,201,193,202,195,205,195,207,207,205,220,193,206,191,211,209,225,194,194,189,176,200,194,199,192,193,204,202,212,230,189,171,211,210,204,199,216,199,186,223,206,207,191,192,215,195,204,191,223,213,212,201,218,189,191,210,201,204,209,197,185,206,205,207,216,196,212,201,199,196,190,199,185,199,208,192,214,211,221,180,201,186,189,210,209,197,214,206,213,221,230,201,222,205,215,229,233,205,200,223,199,185,173,208,197,205,205,221
,206,214,216,214,214,207,199,204,207,197,202,220,216,214,205,213,209,177,216,229,207,189,228,198,209,210,213,196,206,242,208,192,210,237,221,211,202,207,200,217,203,227,224,210,212,215,224,219,208,221,212,211,202,222,227,207,222,200,214,205,192,204,222,206,229,178,220,205,203,216,211,219,201,200,212,226,206,220,216,206,222,186,208,208,199,217,208,193,202,215,211,218,221,208,210,222,220,193,239,236,208,212,207,204,218,197,220,225,218,204,195,213,193,190,202,203,208,206,190,187,193,205,214,210,205,180,203,219,213,197,205,205,222,210,211,201,218,197,210,199,198,207,220,193,225,197,182,199,191,201,202,190,172,198,198,173,152,183,192,186,213,143,207,198,150,152,137,145,194,182,139,86,72,41,49,91,211,211,245,219,163,226,167,105,113,173,163,187,218,165,174,185,230,161,109,170,125,87,129,64,101,62,51,53,172,203,134,152,187,168,189,232,204,191,199,178,219,146,176,189,50,45,74,83,148,240,202,46,37,62,69,8,15,35,22,21,24,23,17,23,38,20,10,1,14,23,7,11,35,26,17,2,32,20,30,18,45,38,51,61,73,67,77,53,28,32,62,144,203,132,113,100,103,137,111,119,119,140,195,135,113,38,84,111,103,77,110,57,96,75,67,88,47,91,120,158,176,196,81,101,138,129,68,45,39,144,132,93,106,71,37,76,69,97,84,142,109,27,41,15,12,31,26,24,31,3,9,16,31,46,11,27,6,31,32,23,33,37,170,189,190,190,171,178,176,192,128,6,20,9,13,21,39,11,9,16,20,34,17,20,31,30,25,10,21,24,31,98,112,45,84,41,51,29,38,30,27,44,11,48,11,31,17,25,36,28,62,11,53,14,16,36,82,118,148,105,148,135,156,169,74,31,22,37,31,18,19,11,7,25,14,36,43,34,45,113,129,92,71,48,61,121,88,117,92,17,15,13,27,18,19,24,17,31,8,54,15,8,28,36,17,33,11,14,40,137,169,152,141,135,131,129,98,40,25,3,38,18,18,15,37,10,13,32,10,15,24,19,18,16,33,39,42,36,43,25,42,41,25,47,47,37,51,115,128,106,96,110,137,147,163,105,117,75,68,70,70,89,87,112,106,116,120,127,101,81,75,157,227,229,207,230,205,214,220,222,216,209,214,221,201,215,197,240,232,195,221,235,202,230,226,225,219,193,232,205,239,186,218,213,205,230,251,245,199,235,220,237,124,7,12,10,8,13,15,32,0,4,21,18,2,22,1,4,1
4,0,5,1,20,20,22,10,37,190,204,225,205,180,197,205,211,187,212,194,168,174,186,203,203,224,196,191,213,185,223,202,204,199,193,189,201,223,209,180,202,207,199,197,195,199,182,190,222,198,194,207,202,194,210,197,205,198,192,199,220,197,201,187,197,212,194,219,206,220,219,199,193,206,169,192,194,191,203,197,206,196,174,216,202,212,199,198,203,225,203,181,181,209,193,207,200,215,209,198,200,198,210,208,228,198,205,198,198,203,157,216,225,204,221,182,176,202,218,217,210,195,203,214,211,201,204,207,196,204,222,229,205,217,203,214,195,200,187,211,201,220,179,174,202,216,214,197,203,212,220,200,207,222,202,218,220,208,212,202,179,226,197,201,216,214,210,222,210,204,203,214,221,215,190,212,195,199,196,212,212,235,217,211,208,227,219,206,195,206,195,208,195,215,220,183,192,229,199,220,224,195,215,204,231,217,221,201,208,209,218,190,224,211,222,203,187,189,209,216,197,207,206,207,205,215,231,204,196,192,213,215,219,234,198,207,221,225,194,228,206,225,198,191,213,232,216,235,205,221,194,237,169,136,179,205,199,210,207,210,210,219,193,198,191,207,229,196,182,188,196,222,212,218,183,186,220,204,210,187,184,192,193,195,237,192,199,222,190,221,218,195,205,202,219,198,230,222,197,207,210,195,205,170,213,201,191,210,199,193,221,205,148,197,217,164,190,138,147,113,95,166,166,161,178,200,155,114,42,34,31,90,167,191,244,194,173,195,169,99,137,80,162,223,198,154,182,213,209,129,88,202,177,143,105,88,78,52,68,84,173,199,176,175,221,179,197,242,211,223,209,228,185,115,188,151,48,124,96,96,148,220,145,36,62,67,42,26,11,16,32,31,12,27,18,37,23,21,34,52,13,33,37,14,28,15,29,33,12,34,46,29,65,61,65,67,67,49,76,38,7,54,144,171,170,108,106,130,128,104,113,80,125,150,179,156,116,47,62,113,129,95,126,109,118,84,82,81,23,74,106,173,206,152,60,56,164,144,94,0,33,151,116,99,99,101,78,129,143,165,179,191,188,27,0,12,28,24,16,10,31,27,32,35,16,32,24,23,30,14,4,23,45,58,173,132,106,97,78,79,103,111,58,15,17,16,13,3,22,7,29,27,1,24,26,21,39,33,38,36,41,24,60,84,133,57,42,53,23,58,43,61,11,42,24,30,29,38
,20,48,47,63,17,38,37,16,23,42,74,80,104,93,107,96,68,102,51,42,19,27,31,26,32,51,48,35,58,22,17,24,48,148,127,77,62,71,36,85,45,51,34,24,30,51,31,23,28,36,17,19,22,11,49,0,41,31,63,45,32,41,25,171,183,162,194,189,184,178,141,86,27,24,48,5,0,22,18,30,33,27,15,22,18,9,10,3,4,41,18,27,15,45,45,24,57,34,10,29,48,83,108,103,159,252,247,237,158,93,59,4,21,54,85,99,108,90,117,94,109,124,135,124,45,144,199,218,239,216,226,231,215,242,218,201,204,209,230,231,208,204,234,200,218,215,190,241,210,189,218,218,211,222,227,196,245,226,215,221,216,232,224,226,241,225,97,15,20,5,16,10,6,3,14,13,17,9,12,4,8,24,5,6,2,6,12,23,29,14,30,223,197,193,208,194,198,187,205,203,200,213,192,204,198,179,197,190,208,199,192,216,193,181,201,206,195,222,184,192,206,213,214,208,180,201,198,193,218,194,195,189,188,201,218,212,198,173,202,200,207,210,202,194,184,199,203,184,187,194,213,198,191,188,203,186,213,216,198,211,196,183,181,193,217,197,201,210,177,214,204,200,190,203,183,206,182,188,211,191,207,205,197,176,216,201,203,200,203,166,204,189,213,194,207,201,169,194,217,220,192,193,195,203,200,204,216,196,205,218,212,207,202,225,195,211,208,216,206,218,199,226,204,212,204,206,220,235,216,210,222,212,234,179,207,202,193,206,207,216,190,192,181,167,209,207,194,210,195,206,206,204,209,238,231,198,210,210,181,188,191,214,209,201,225,209,202,208,209,214,210,225,203,213,221,207,218,213,201,214,217,210,211,199,206,219,226,227,202,221,217,213,210,194,194,217,185,244,208,176,218,222,202,219,226,206,201,199,195,196,192,198,187,211,202,216,203,207,189,216,218,195,190,210,214,201,196,189,211,192,221,201,223,210,172,118,162,186,181,220,209,207,208,202,223,212,218,187,203,211,201,193,194,202,230,184,223,193,205,221,212,220,211,193,196,213,206,228,212,207,198,197,210,226,216,182,224,230,194,204,196,195,210,215,197,193,215,206,189,191,220,205,237,188,141,203,203,163,164,97,133,90,48,146,130,156,183,134,128,86,57,35,7,96,180,198,214,213,196,192,179,153,158,85,109,239,194,176,134,173,202,128,139,212,163,121,106,11
4,106,59,56,61,179,180,182,194,236,182,201,222,196,222,219,224,181,130,218,134,71,108,75,89,180,229,168,73,92,26,5,20,10,1,28,23,22,22,30,47,35,25,21,45,33,19,14,31,43,34,16,23,26,55,25,39,56,57,69,43,31,80,51,23,1,92,106,94,121,49,59,133,99,92,89,127,93,87,100,191,153,77,99,94,107,121,138,143,128,104,52,43,26,51,49,82,101,47,15,70,179,107,54,27,79,162,94,117,112,110,150,166,183,184,179,189,163,30,23,5,34,35,14,19,33,7,17,34,33,22,15,19,20,28,26,15,33,74,103,56,56,48,45,57,57,86,21,9,5,4,26,12,20,21,53,30,31,42,19,23,21,17,39,30,25,33,40,99,81,66,69,53,80,56,93,60,39,48,42,24,42,36,37,40,70,62,34,54,42,44,38,64,60,94,62,46,57,20,30,47,34,13,24,15,29,19,42,40,39,22,34,13,46,21,58,123,99,54,38,51,63,48,31,52,34,5,32,29,18,37,49,26,8,22,1,25,11,18,41,12,6,15,2,27,48,140,142,157,166,194,176,183,193,160,41,15,14,24,4,35,27,17,37,32,25,25,4,30,35,21,23,12,19,30,41,67,72,76,73,67,68,35,32,101,114,134,184,251,244,216,122,55,53,5,69,107,83,80,41,28,54,106,78,114,105,120,67,70,206,228,229,235,225,231,226,217,219,228,208,232,218,198,242,208,201,214,200,208,215,232,217,216,217,212,234,233,213,222,191,209,207,208,190,197,232,217,223,215,131,14,11,7,13,29,7,4,17,7,11,17,14,0,1,0,1,12,2,17,0,21,10,6,14,192,198,175,213,213,196,206,198,204,197,205,183,216,228,194,186,180,208,195,212,201,187,165,199,201,202,238,209,199,210,208,189,194,175,196,204,209,184,188,203,206,195,211,186,199,226,185,197,197,210,202,210,209,207,202,182,200,196,188,202,206,170,196,235,204,197,189,201,215,212,191,206,197,205,233,224,219,224,193,199,193,191,197,192,214,212,200,197,213,190,193,227,184,201,221,212,206,171,198,190,192,214,211,199,209,199,193,216,211,225,216,194,217,223,217,200,193,213,197,221,179,208,202,206,215,212,219,215,187,206,200,195,205,188,193,214,201,215,204,199,230,206,213,205,202,210,219,206,224,180,230,216,199,205,208,195,227,199,217,199,208,203,199,213,201,178,214,218,199,213,208,219,184,216,219,212,194,215,213,189,211,192,201,208,212,204,224,211,209,226,207,195,192,207,196,199,230,185,2
11,224,216,228,212,198,233,205,233,209,209,217,198,233,192,203,215,211,211,235,199,206,214,196,200,191,193,219,217,199,184,201,219,197,204,208,197,202,204,198,207,213,204,201,221,184,197,201,212,185,228,210,205,220,184,209,173,214,212,196,195,200,196,217,208,205,199,196,216,220,220,216,188,210,222,238,201,207,181,212,243,202,190,227,223,204,191,217,195,215,201,203,206,219,205,226,214,235,193,181,214,209,212,206,177,145,199,229,190,177,114,160,120,104,151,125,156,168,155,125,80,51,39,53,138,203,201,219,193,166,212,219,213,202,115,134,194,209,141,142,182,218,156,119,188,151,112,129,86,87,62,48,51,170,237,182,160,241,184,205,216,194,217,203,239,176,139,226,189,110,136,72,89,241,253,161,75,26,34,24,12,14,5,12,28,37,41,21,22,14,15,56,2,25,44,20,10,16,29,31,24,37,53,36,71,68,88,85,64,52,49,15,26,32,87,85,64,96,49,102,57,98,95,78,108,106,74,71,154,190,154,87,72,95,72,102,120,90,95,39,22,62,21,42,3,26,38,45,118,129,59,23,7,76,139,81,65,68,99,137,121,124,134,131,150,111,28,29,0,8,19,11,5,30,32,19,33,41,9,44,50,52,30,34,20,38,100,129,76,45,42,40,47,38,44,26,34,36,30,8,7,11,28,25,39,49,15,57,30,60,62,39,75,62,66,77,164,174,114,144,121,173,142,158,133,152,115,157,158,138,165,173,177,180,163,197,156,190,157,180,166,171,146,118,52,49,34,19,36,59,26,50,59,62,58,48,50,41,37,26,54,38,35,62,110,90,102,58,44,52,76,45,69,28,27,20,3,36,20,17,39,32,13,21,23,4,7,28,31,8,30,30,28,50,122,125,104,139,107,147,126,184,123,54,15,36,0,19,14,25,30,16,42,11,20,12,36,20,56,21,35,10,8,74,156,185,177,110,110,131,76,137,121,132,100,178,235,217,108,35,45,29,73,130,135,71,57,64,46,79,92,79,80,79,77,74,108,207,240,244,224,219,220,227,230,199,212,204,212,216,203,204,211,199,223,224,222,212,207,199,227,200,215,204,228,205,202,223,217,219,211,234,212,242,206,220,229,130,6,5,8,23,4,17,16,17,1,16,19,16,11,12,4,2,2,0,18,1,10,4,24,10,204,191,207,185,213,183,224,183,171,185,191,230,221,206,199,197,188,193,197,181,194,203,204,192,192,210,194,198,194,214,216,190,182,185,204,190,206,188,207,178,205,193,217,209,207,
196,211,205,198,213,199,176,179,186,205,184,213,223,207,202,168,199,223,210,205,200,193,176,191,193,202,184,199,210,205,201,207,211,197,192,170,214,220,210,211,216,193,207,200,194,184,201,193,212,223,230,213,206,205,197,168,219,226,200,179,202,209,211,225,218,202,206,201,213,204,198,208,212,209,195,207,200,199,195,208,221,234,207,221,209,224,203,209,230,183,204,227,212,228,186,211,195,209,192,232,209,214,212,210,201,218,197,218,216,215,215,202,215,191,234,231,188,188,202,204,210,213,187,221,211,201,197,203,204,224,222,225,215,198,209,206,198,231,226,192,212,214,216,202,193,217,197,205,207,208,199,196,211,213,211,211,228,224,218,242,211,220,221,198,224,195,212,203,208,192,213,209,191,234,200,189,217,222,193,208,207,217,185,198,210,195,209,208,208,204,198,201,231,213,194,204,221,231,216,233,216,228,223,219,214,229,205,184,199,184,237,185,213,217,200,212,207,208,182,213,199,215,194,213,217,193,220,202,208,204,191,205,209,201,215,193,210,217,211,205,196,203,204,211,199,200,190,226,222,196,202,214,217,197,221,222,218,172,193,230,187,149,200,151,211,161,123,205,192,207,203,197,148,80,77,68,61,134,211,186,212,176,192,203,207,225,210,103,133,142,143,200,180,216,208,146,145,242,145,99,117,99,109,55,16,25,102,166,131,175,212,163,185,228,213,224,212,223,144,189,243,175,114,115,132,158,254,228,131,52,30,10,12,12,12,11,21,23,25,12,22,9,8,23,25,0,33,14,12,11,40,42,3,36,48,63,50,77,75,47,74,54,50,11,20,12,52,120,41,65,119,69,101,80,100,99,108,114,123,101,78,142,211,195,144,80,70,66,95,90,88,86,62,31,19,39,6,28,10,37,128,142,69,16,38,10,52,125,79,100,72,54,69,27,56,35,38,73,22,26,32,11,13,32,49,25,34,29,41,43,10,38,56,31,25,31,24,24,23,104,93,51,70,61,38,39,54,65,23,44,39,41,23,39,77,67,80,64,109,105,137,144,188,166,194,215,218,214,195,199,180,195,174,153,173,137,151,169,162,201,206,169,188,175,187,158,164,176,186,137,178,167,183,156,149,138,106,94,87,83,110,108,84,100,112,182,163,183,145,129,103,120,89,70,60,58,91,148,189,140,132,138,113,122,118,79,37,24,26,24,19,39,37,18,56,36,11
,33,13,25,42,39,1,23,27,9,47,138,128,63,50,60,54,108,95,68,52,14,13,20,32,27,39,9,27,24,39,13,16,22,11,29,16,15,30,55,88,193,205,228,204,226,212,167,190,156,99,82,161,198,75,24,27,25,46,132,156,61,62,118,122,89,89,65,86,65,53,88,76,97,228,229,229,255,231,229,236,216,208,219,192,227,218,219,205,213,211,214,229,235,222,228,222,205,209,214,203,219,220,203,194,216,222,214,208,243,210,219,213,218,113,11,35,6,12,24,22,13,12,30,27,12,26,14,13,7,27,4,1,24,6,10,46,4,3,189,202,190,195,209,202,187,195,186,208,202,198,168,190,194,212,219,207,203,192,184,195,190,217,201,191,195,187,197,173,200,216,196,205,207,195,188,184,177,204,185,178,195,230,222,205,193,195,199,201,208,185,218,221,199,187,189,220,200,176,189,198,202,210,187,198,191,194,183,221,202,224,197,201,205,219,203,221,217,184,203,207,189,198,194,198,203,181,218,176,215,208,207,209,197,196,206,184,195,193,194,203,195,205,212,193,196,214,208,189,190,204,191,185,214,207,169,193,210,204,197,211,200,208,179,200,183,191,195,210,179,196,185,206,211,230,205,215,218,192,204,202,213,221,206,203,180,228,213,193,182,213,202,217,190,216,221,209,182,221,208,215,198,221,215,202,200,205,200,204,201,222,205,211,199,203,214,202,204,204,195,213,207,225,215,200,181,213,196,208,221,231,211,215,220,201,203,216,213,218,198,182,212,204,203,197,205,229,216,201,228,208,224,210,196,207,201,209,214,211,237,177,216,216,193,202,221,210,203,194,223,208,230,215,203,198,213,195,207,220,193,230,209,214,222,203,232,200,196,199,189,200,207,204,218,213,209,225,222,201,205,195,192,224,189,205,219,179,185,209,227,220,216,203,198,200,220,220,195,224,201,225,209,216,220,182,185,191,220,218,195,199,217,200,219,196,246,197,203,215,217,210,165,183,218,217,173,207,192,207,157,156,181,178,204,167,214,195,93,42,38,29,152,252,172,227,177,202,200,210,242,179,130,139,161,157,168,207,159,163,153,165,238,131,105,75,72,88,80,74,7,111,192,174,188,223,168,217,233,216,216,227,197,139,173,231,200,127,111,69,150,212,135,75,39,7,13,45,30,20,60,5,2,6,10,28,17,8,18,12,9,21,10,22
[raw binary data: decimal byte values of an embedded resource file from the libvpx update, omitted]
,139,63,34,37,40,64,20,16,55,9,19,21,30,30,45,19,13,15,24,6,31,23,51,39,39,36,33,131,135,142,108,65,58,54,56,20,29,0,10,19,28,33,40,23,53,24,35,12,29,37,32,30,5,59,61,57,37,66,80,83,170,108,34,41,48,7,4,37,28,19,35,31,35,6,15,31,54,28,49,4,26,19,33,92,52,40,59,59,53,42,68,52,13,47,14,19,33,1,3,51,58,45,4,24,15,42,29,32,1,20,11,31,59,81,37,91,141,116,107,93,81,96,72,23,7,12,23,48,15,31,2,5,40,34,44,29,23,22,33,17,25,36,69,28,15,38,70,50,75,68,49,66,68,36,47,44,16,29,24,29,25,17,27,24,20,39,12,30,39,33,7,20,22,7,29,30,11,27,31,84,125,112,106,118,49,28,9,16,15,49,60,68,83,59,65,65,24,41,22,31,36,28,0,18,6,38,46,53,47,79,75,46,23,27,28,66,70,49,54,64,50,50,22,40,47,44,73,49,78,151,159,224,223,221,228,226,224,236,226,228,233,210,200,111,16,0,19,18,20,29,23,15,19,10,15,12,15,5,0,18,23,26,17,8,24,21,2,5,214,214,202,218,211,205,227,210,211,210,207,194,192,194,208,186,197,213,215,195,193,214,193,195,195,201,206,224,226,208,205,181,222,195,182,197,216,187,192,219,207,194,180,208,212,216,205,204,208,221,209,206,201,219,208,202,205,227,196,213,200,213,227,214,214,201,190,225,201,198,206,194,181,186,192,205,198,194,181,197,216,224,199,224,234,199,203,212,185,224,217,204,193,178,208,187,223,192,208,207,192,191,206,218,180,192,213,191,199,217,204,221,218,207,210,222,189,191,207,180,203,205,172,202,201,185,205,190,188,207,215,196,221,204,186,182,207,209,205,198,204,193,215,198,194,179,197,178,200,215,185,211,198,199,205,194,213,208,234,187,205,189,188,187,193,182,204,194,223,199,198,199,185,211,208,200,197,208,192,187,208,179,192,222,189,197,208,192,202,182,221,203,203,198,229,202,206,203,195,202,205,190,198,196,215,178,222,202,212,187,174,179,199,209,196,194,219,203,185,217,194,218,199,190,181,194,202,185,163,194,209,206,189,197,216,215,189,203,209,208,165,174,204,203,205,204,202,202,198,193,220,209,202,204,193,196,198,188,192,192,203,165,196,170,202,193,183,175,189,175,196,187,195,210,202,192,171,178,195,201,188,186,187,187,214,185,214,175,196,196,187,195,176,193,212,207,186,199,
212,159,188,193,170,160,155,176,187,180,175,147,191,208,149,85,141,241,210,131,134,36,115,136,43,170,172,159,180,211,119,107,123,158,166,182,150,123,174,171,195,170,107,189,172,126,101,101,73,110,58,64,3,143,217,215,176,179,253,190,196,157,50,37,16,8,47,157,132,77,76,62,191,253,246,207,160,218,254,249,246,116,29,60,118,131,138,68,35,71,121,153,77,88,95,99,98,34,117,124,101,76,94,87,117,104,96,78,68,58,52,39,51,28,23,11,13,50,107,157,134,86,46,45,8,35,22,13,9,37,65,77,79,53,62,52,64,69,7,26,66,60,77,93,75,30,30,6,22,39,160,139,84,129,102,35,33,34,97,71,47,43,18,43,25,11,33,5,40,22,53,23,5,11,5,19,43,35,44,38,47,96,146,125,133,82,100,59,36,40,1,18,15,26,31,16,30,19,12,22,26,36,22,16,6,17,28,52,48,32,30,61,75,74,56,62,53,36,39,41,28,25,48,45,7,45,31,43,20,41,17,37,13,16,20,47,48,84,74,46,60,57,35,34,33,30,36,40,35,50,25,12,42,10,34,20,12,21,30,66,26,8,24,21,14,18,64,122,37,92,102,116,73,60,147,121,43,36,29,26,28,7,8,20,44,36,22,22,18,12,0,53,49,15,35,53,53,42,37,46,89,105,122,103,166,91,34,45,16,42,38,8,36,11,19,17,41,10,44,33,38,17,24,52,21,19,46,27,10,44,18,48,9,90,104,99,110,108,65,42,33,22,33,53,96,94,95,81,79,83,61,45,28,20,43,25,36,29,15,22,51,27,26,53,83,61,73,18,27,44,57,31,46,55,33,40,58,56,67,75,45,43,91,195,218,201,205,228,238,213,224,230,227,231,232,201,223,95,16,7,8,1,1,20,0,27,2,11,12,26,8,2,14,2,2,1,12,5,26,7,5,16,195,201,196,197,170,182,193,197,225,216,208,205,210,202,199,201,220,187,181,200,181,203,223,210,195,180,195,191,192,185,213,201,196,195,200,201,194,192,199,211,199,193,189,199,216,194,196,209,196,215,217,190,181,190,209,199,219,204,208,215,226,199,217,222,219,212,187,206,215,208,208,196,220,201,203,196,219,224,219,210,193,189,207,214,208,209,208,210,218,197,201,228,189,176,202,218,216,211,192,210,189,164,216,196,202,210,184,199,208,223,183,195,202,203,209,207,199,186,201,203,203,206,180,202,182,216,191,208,200,203,204,197,216,207,206,205,207,172,209,188,187,215,210,196,184,192,184,202,169,202,203,217,211,211,184,212,203,187,190,183,203,198,184,
235,192,179,206,193,209,204,210,199,200,223,201,209,167,184,187,184,196,183,185,205,199,188,193,199,202,173,225,195,201,183,212,222,234,197,193,185,193,189,192,193,198,198,198,169,175,175,187,197,202,185,204,217,203,191,194,179,200,184,199,200,209,223,212,173,196,183,216,211,188,212,194,185,214,207,184,195,207,197,201,185,179,212,170,202,191,176,187,180,202,183,232,186,179,177,206,186,204,220,212,198,220,192,236,221,181,199,186,190,195,191,209,209,169,182,212,170,177,172,188,189,174,179,192,188,207,203,189,195,210,189,210,177,180,196,193,176,188,186,181,209,187,192,164,149,165,151,195,228,212,122,73,168,166,136,94,67,139,87,10,103,187,130,137,126,92,148,201,190,140,131,143,163,176,165,210,106,168,208,194,132,72,91,80,95,56,45,36,160,224,200,188,138,183,99,133,104,56,11,4,21,36,47,37,52,92,176,239,212,218,180,205,248,241,228,189,42,41,75,149,165,90,13,67,115,128,92,46,70,70,157,77,122,165,170,128,112,104,109,108,99,107,95,102,89,63,56,43,49,14,34,30,92,159,134,67,21,20,47,16,23,20,22,27,44,75,96,66,86,58,74,86,93,54,26,30,38,39,40,111,78,28,45,5,60,164,127,113,141,79,52,10,99,107,40,59,38,41,12,25,24,40,5,23,15,32,29,30,32,31,32,40,45,41,36,49,151,137,123,123,94,107,49,49,39,32,25,13,7,46,39,25,8,17,28,16,22,55,38,12,18,23,67,95,36,35,70,31,44,68,26,12,29,47,46,22,27,25,35,8,16,40,28,29,23,11,4,19,36,45,34,58,102,54,17,73,36,45,63,8,28,36,22,53,9,42,26,26,24,30,9,11,39,12,22,44,18,30,24,29,38,83,99,29,58,78,104,89,137,187,91,29,58,26,9,14,20,35,31,18,51,43,17,17,18,47,21,31,38,44,49,57,46,43,74,100,158,130,102,119,85,39,29,38,13,21,21,38,27,20,21,47,53,1,18,48,8,22,46,18,24,34,24,17,11,38,61,21,93,123,112,125,108,92,32,12,10,21,29,51,74,111,87,88,80,81,82,80,42,38,47,19,34,11,14,20,38,51,44,84,83,67,49,37,42,31,22,30,42,62,61,81,93,44,49,62,49,156,232,211,233,219,214,212,223,238,232,227,232,231,198,232,105,12,0,7,2,9,11,28,18,10,31,21,41,36,4,5,15,13,14,3,20,13,24,9,12,212,189,218,195,218,212,177,207,205,199,221,179,211,207,177,218,206,200,216,205,184,193,213,213,201
,196,208,217,197,206,220,204,190,210,191,209,205,197,175,206,190,193,223,188,212,205,207,197,209,210,198,195,204,185,231,181,199,187,217,173,180,208,220,208,225,230,208,197,233,195,216,197,215,182,184,209,192,219,163,221,195,211,199,191,205,204,215,199,197,183,191,185,217,201,206,186,204,215,202,202,194,201,194,201,187,197,210,198,182,210,209,201,207,212,193,193,198,197,185,199,199,203,187,208,214,203,200,195,190,202,190,194,181,185,219,210,203,182,189,213,198,197,187,182,200,210,173,195,196,205,192,198,188,202,200,211,200,207,211,189,235,195,192,215,199,219,212,195,218,212,201,190,210,214,213,207,205,197,221,189,221,183,171,211,209,210,181,180,199,178,181,200,200,186,195,188,208,178,204,196,162,206,218,194,177,200,199,181,192,188,204,201,199,185,185,204,191,185,185,186,189,198,206,184,213,243,196,195,212,208,186,207,191,172,211,185,204,185,187,207,190,208,206,199,210,200,198,180,184,196,194,197,191,204,212,196,199,214,185,183,197,224,190,184,170,217,199,204,201,177,182,201,193,214,201,205,189,194,195,199,177,158,199,197,189,165,194,177,195,200,211,209,198,194,193,197,198,202,206,178,192,181,127,114,145,150,147,147,155,150,212,220,195,190,132,34,100,118,102,73,99,82,8,86,162,138,80,80,159,170,215,174,128,174,169,182,157,138,191,144,161,213,193,162,81,103,76,78,57,23,27,145,209,197,178,87,140,115,55,29,8,30,34,62,72,57,35,54,206,243,227,189,179,196,253,238,226,230,65,19,44,123,128,118,54,49,88,109,89,51,49,25,71,117,150,171,222,158,116,119,113,111,105,82,64,82,83,94,56,35,34,16,52,27,73,145,142,101,44,5,34,6,1,45,54,4,35,39,52,90,64,55,58,81,106,56,39,19,30,25,52,59,64,119,54,33,30,98,164,111,96,119,45,42,45,90,87,65,64,32,22,12,7,26,34,37,52,2,10,0,28,21,43,41,14,12,67,63,58,110,108,78,122,95,75,30,55,26,30,16,7,27,20,28,23,24,18,19,35,20,11,46,49,15,11,60,82,55,30,53,18,41,14,44,19,48,46,48,41,33,44,27,17,13,40,66,53,46,30,8,25,47,50,54,83,115,82,19,39,46,41,20,18,28,22,43,34,26,37,39,15,37,21,17,31,29,30,13,4,31,22,78,61,39,115,114,38,57,96,89,89,68,82,43,43,42,31
,10,7,20,37,51,23,25,22,20,10,40,29,19,16,7,35,40,85,60,39,94,141,162,169,85,108,104,51,27,39,20,14,26,32,6,31,27,26,20,32,32,31,17,34,41,19,12,27,58,39,25,50,73,26,116,127,101,108,108,96,48,16,10,37,37,25,64,91,111,107,103,65,78,82,73,77,51,12,17,42,17,3,15,25,30,25,65,61,72,62,41,66,19,62,56,58,64,57,69,84,39,52,139,229,242,218,243,247,228,208,228,223,223,234,208,233,226,226,97,4,7,13,19,5,17,24,31,19,20,12,14,5,3,10,11,27,4,3,7,2,10,24,5,208,213,197,197,200,184,208,222,184,204,211,212,224,189,166,195,224,204,197,183,201,207,224,201,226,207,197,210,205,193,198,173,191,196,208,205,201,207,216,190,198,215,202,185,221,224,212,210,220,208,192,228,210,199,212,206,205,222,200,202,227,199,204,205,191,204,227,229,240,201,228,224,233,192,203,209,208,202,208,223,206,222,208,177,191,191,206,177,221,223,188,197,195,191,211,198,201,199,221,192,201,200,191,175,201,197,201,192,187,206,194,184,226,187,184,175,217,199,197,165,216,200,191,177,216,202,191,219,195,199,197,203,203,191,212,195,174,207,185,197,213,179,200,188,211,189,214,218,210,200,231,188,187,184,183,193,204,202,206,219,196,208,201,183,236,190,195,195,192,195,185,192,197,187,191,184,192,191,197,191,195,204,193,174,200,188,201,183,204,207,213,184,219,198,217,190,196,209,191,175,191,200,205,186,198,185,198,210,201,167,187,186,172,199,188,164,206,200,183,202,188,194,185,211,184,192,199,206,196,204,191,196,198,215,204,198,195,194,215,176,188,200,188,205,218,235,173,192,194,211,204,202,179,185,183,187,215,207,192,171,210,194,188,211,200,184,201,190,200,194,195,192,177,197,192,185,196,186,183,185,192,160,188,225,193,215,202,211,228,196,185,206,202,200,189,207,205,176,185,163,195,169,115,103,130,131,127,152,143,195,200,197,233,220,159,99,5,79,100,77,77,79,9,76,102,88,63,99,205,205,196,136,156,202,161,122,170,171,149,114,161,161,191,149,62,119,90,76,46,22,16,164,222,210,185,95,120,119,39,35,0,104,192,72,75,90,61,105,241,233,203,162,212,250,245,246,220,116,42,50,111,86,90,52,84,106,131,73,66,60,50,8,42,97,152,217,216,123,118,1
08,92,104,73,76,33,72,79,53,66,59,31,23,14,36,73,162,130,85,6,17,32,0,17,47,78,31,39,57,71,67,62,101,101,109,66,25,19,29,11,3,37,34,87,124,59,37,5,123,143,93,115,91,38,37,96,125,101,106,68,27,36,34,17,5,12,10,6,9,5,12,2,26,10,1,14,15,53,68,36,78,71,65,143,111,59,25,35,14,33,3,18,18,22,44,20,7,42,36,15,24,16,39,52,37,21,67,72,39,47,42,22,54,19,16,18,33,53,44,37,18,37,35,15,31,37,35,37,70,43,0,64,46,44,37,76,141,46,30,60,30,34,53,40,40,36,45,28,16,14,4,22,26,14,23,49,20,29,46,31,31,42,51,61,26,94,93,56,65,84,44,23,19,3,28,39,39,17,9,28,19,38,47,29,17,2,17,36,51,13,25,15,27,32,55,80,68,84,124,136,130,128,100,109,114,51,32,31,15,33,11,32,26,63,32,20,37,6,19,32,37,15,38,42,33,44,54,75,81,131,120,93,144,149,107,98,116,111,42,57,48,28,20,43,87,117,121,135,120,81,90,104,80,75,47,69,38,1,27,11,35,42,33,21,32,46,77,61,67,70,41,77,58,77,67,32,43,66,43,70,212,238,234,220,231,233,219,222,225,200,215,218,207,215,217,222,117,22,4,8,32,18,12,41,17,12,18,9,38,3,8,0,16,13,1,24,4,26,17,15,17,200,215,184,191,180,188,205,199,198,177,185,192,218,201,186,218,202,187,204,200,188,207,222,220,202,185,197,216,193,194,198,195,211,215,223,186,212,208,190,182,190,222,230,210,190,193,215,214,209,210,198,234,199,213,212,215,224,201,208,201,194,189,194,207,208,188,208,212,218,193,212,190,195,198,222,209,191,200,223,202,205,201,197,208,219,227,183,178,199,206,191,215,229,200,182,199,187,212,212,206,189,197,214,208,208,198,216,214,192,188,196,223,220,189,221,206,175,200,183,207,214,201,220,200,197,229,207,209,200,196,192,198,169,205,207,200,203,208,207,226,190,175,198,187,210,207,189,201,193,188,217,197,194,212,201,209,184,173,201,191,203,198,202,179,214,180,203,200,210,194,199,194,213,197,196,208,214,196,213,200,207,183,207,202,181,199,184,201,198,212,187,208,182,185,202,202,191,205,186,193,180,198,172,196,186,200,196,182,180,185,170,191,194,217,215,197,188,201,207,180,194,184,196,201,204,180,184,184,193,191,193,198,207,199,208,187,177,200,212,189,206,217,188,200,181,181,220,165,193,202,206,191,218,
202,186,197,209,202,200,192,195,182,200,181,205,210,182,188,172,196,205,217,194,186,195,184,165,166,193,193,188,180,193,212,187,191,193,172,196,181,198,194,187,184,165,198,217,202,187,164,168,150,159,160,157,188,169,156,176,197,212,174,216,234,239,135,34,15,90,77,101,82,60,59,74,117,126,148,148,169,208,146,192,184,124,126,164,216,152,106,144,111,191,149,80,108,85,72,54,44,48,167,182,176,146,94,105,88,42,48,8,183,137,84,72,45,150,136,246,205,126,200,241,248,250,219,106,49,91,86,107,47,53,88,129,154,114,80,79,26,36,19,43,101,131,140,141,70,90,109,82,92,88,68,62,67,54,69,68,53,45,37,42,34,119,158,121,45,10,20,4,33,21,67,78,34,28,72,80,108,106,144,147,135,54,31,21,21,12,40,26,70,99,103,48,19,43,121,146,103,100,75,45,45,80,124,76,49,55,40,29,28,24,21,2,44,38,15,7,38,40,9,18,34,18,10,80,75,18,68,42,66,155,105,68,50,51,21,19,38,24,16,39,11,34,51,44,47,12,22,24,28,57,35,22,77,101,29,31,51,34,54,36,26,19,34,43,26,13,52,6,44,52,23,26,5,10,42,29,75,48,23,19,44,100,118,52,38,88,83,83,61,79,61,85,78,63,22,23,5,49,57,47,34,10,6,24,44,22,48,26,30,53,50,120,94,35,49,63,31,19,44,29,29,46,32,34,14,33,60,25,31,20,16,33,35,34,20,29,48,15,43,60,86,111,48,94,144,119,100,78,100,118,78,42,33,16,15,27,22,10,37,33,24,25,14,20,33,36,57,34,34,27,59,77,88,102,129,186,154,130,135,112,131,120,132,121,52,12,14,39,50,50,71,71,113,92,86,95,105,73,90,107,69,69,41,13,39,34,38,31,44,36,19,47,37,69,79,40,71,79,57,47,46,47,23,42,23,100,198,245,235,238,214,243,226,237,224,222,229,245,220,232,221,243,108,7,11,10,21,7,2,12,6,11,10,7,13,6,3,15,19,2,0,16,12,0,8,10,0,190,221,201,216,215,202,195,229,194,207,186,200,195,190,203,201,195,221,189,177,190,197,189,228,196,213,209,217,203,197,182,190,201,211,208,198,204,229,209,192,216,203,204,192,207,193,196,204,194,202,199,202,215,214,195,203,204,201,217,197,212,166,207,222,227,215,218,207,229,203,207,220,196,201,197,194,213,202,230,219,192,181,206,198,175,188,225,200,198,176,212,231,206,221,208,204,188,168,211,181,195,190,211,194,200,223,199,201,219,201,195,208,181
,174,194,212,199,198,205,193,193,206,192,193,201,203,189,174,193,178,179,195,181,197,187,201,197,193,184,192,178,180,211,224,193,188,195,209,170,202,192,219,216,200,210,201,209,211,183,175,208,189,216,204,205,201,204,190,189,201,194,190,196,213,210,213,189,192,203,204,188,202,210,191,200,200,177,196,201,209,190,205,196,176,169,185,186,192,194,186,181,192,178,211,204,202,198,204,180,180,200,212,184,187,190,177,193,196,185,206,196,190,167,212,171,190,201,208,164,208,180,175,196,190,190,204,195,217,190,204,189,207,198,182,189,184,179,203,182,209,206,217,193,196,199,174,186,199,192,201,171,193,198,191,182,212,181,207,196,210,199,197,186,198,206,197,196,181,184,204,197,174,176,197,179,191,194,181,211,195,201,183,167,179,194,201,196,210,187,146,159,158,167,161,168,146,177,151,206,187,176,215,182,219,207,212,119,34,46,60,86,112,60,100,116,153,130,138,143,189,165,158,146,147,131,151,184,196,130,97,168,161,214,128,100,131,74,91,70,16,29,133,117,72,149,91,69,50,13,23,20,83,81,79,62,138,217,185,201,202,167,248,242,248,219,90,57,71,117,116,48,49,57,117,147,124,96,97,22,9,3,40,63,89,72,43,35,34,27,78,111,129,121,121,108,66,63,70,83,40,26,22,35,85,129,119,64,23,24,9,25,38,69,41,66,76,56,59,99,117,125,114,84,76,45,14,43,44,41,26,11,75,147,115,16,14,23,142,144,84,121,60,23,28,61,112,82,39,15,29,16,23,15,27,8,57,26,34,41,35,19,10,40,31,26,46,77,86,21,63,50,52,72,25,13,32,32,47,27,6,21,25,28,28,31,14,26,45,35,10,21,55,30,28,38,61,105,38,77,83,40,63,65,12,70,74,75,11,22,25,33,35,35,37,47,28,17,49,51,61,51,27,24,29,113,188,44,36,125,118,167,178,120,190,130,76,38,25,21,45,19,50,60,44,24,34,7,64,32,47,34,14,36,70,131,95,66,51,67,12,22,26,31,13,28,36,30,26,58,8,19,8,20,14,41,46,31,12,21,47,17,64,27,52,91,59,40,76,82,89,82,108,86,66,46,19,9,16,48,53,31,31,24,40,34,14,26,42,43,11,27,39,75,54,99,148,157,144,125,107,65,59,97,110,105,104,109,75,18,24,33,56,38,60,12,67,70,125,123,122,110,118,98,109,70,47,16,13,16,32,17,18,17,28,21,56,56,87,116,84,61,51,17,20,17,33,48,3,141,218,252,253,246,250,2
35,218,238,248,213,227,221,218,229,215,235,117,11,1,6,18,24,9,45,20,42,30,36,14,18,0,13,10,17,11,24,19,25,3,0,12,217,171,190,202,186,220,215,219,206,208,191,204,189,224,202,225,217,190,212,189,234,200,182,195,191,203,222,184,206,209,198,206,192,200,176,185,213,206,184,186,198,192,207,192,202,171,209,226,187,238,182,187,204,224,211,206,213,209,223,208,203,205,201,232,189,234,226,217,215,221,222,219,227,190,193,219,192,226,197,207,183,206,216,217,218,212,206,203,191,186,188,208,187,218,210,213,208,215,181,192,204,204,218,191,204,192,199,201,194,170,195,212,202,226,210,202,192,178,197,220,200,194,195,213,187,207,175,182,207,165,173,194,177,206,211,203,197,201,196,186,212,206,175,180,185,206,210,183,159,182,182,185,199,174,197,187,210,186,196,197,194,187,186,162,195,194,207,200,191,201,204,190,193,220,199,192,182,208,193,196,204,196,202,179,188,179,206,195,193,196,189,204,186,199,217,224,177,206,188,190,207,198,187,223,189,213,191,195,194,197,192,190,198,170,174,197,195,196,183,197,176,189,187,187,200,217,206,206,172,194,210,185,196,210,207,217,196,208,184,200,211,203,185,225,179,195,186,178,158,206,187,192,194,205,199,198,204,192,186,215,191,203,198,202,187,179,156,181,196,180,181,211,202,184,175,189,194,168,196,196,199,187,162,197,183,188,194,183,184,216,195,193,176,215,192,198,194,204,164,154,159,124,109,125,123,149,126,156,208,214,173,184,181,173,198,215,179,116,119,66,68,75,34,128,139,106,134,125,198,201,186,146,142,179,173,169,211,154,109,121,174,202,224,89,82,122,64,89,55,51,41,117,119,78,103,69,38,30,4,50,73,69,44,24,110,212,255,168,161,179,218,244,251,225,114,65,82,111,117,70,47,82,77,131,111,62,58,52,31,15,18,41,88,122,58,59,60,31,21,59,97,129,115,129,128,102,97,83,46,21,24,29,48,115,146,117,56,10,16,16,35,72,39,48,47,68,127,131,127,129,67,47,33,37,11,21,2,15,34,45,38,121,148,63,27,18,61,170,135,101,115,79,44,27,28,105,107,45,23,43,39,32,45,27,1,30,14,47,64,26,5,26,18,29,81,24,86,80,5,43,22,40,18,25,52,23,44,18,22,12,25,22,16,43,11,38,16,25,38,51,66,37,27,50,2
9,121,126,31,91,134,121,186,140,114,179,132,50,48,43,11,35,32,34,31,39,25,12,63,42,40,48,47,36,51,170,159,39,26,71,83,155,130,104,148,98,60,33,20,7,46,40,54,28,43,36,23,30,19,49,42,39,3,19,52,139,82,33,68,66,39,39,29,9,48,37,19,29,51,41,20,22,23,36,49,56,19,3,54,14,39,73,35,40,50,82,67,29,59,90,61,161,170,85,30,44,7,13,43,17,64,56,51,6,18,35,46,33,38,35,48,16,64,115,64,132,154,154,139,107,112,59,57,26,112,111,103,92,84,26,52,49,43,27,34,38,37,61,117,131,126,99,93,95,112,78,90,29,45,28,25,27,10,11,20,20,16,43,64,81,87,66,16,10,8,50,60,28,36,180,218,247,252,237,240,243,232,239,212,234,243,206,230,218,235,219,135,12,13,5,18,33,7,14,4,10,11,18,16,2,7,1,0,7,13,15,20,27,5,13,17,207,228,203,216,203,203,211,194,221,212,201,224,204,219,192,201,194,193,198,214,185,175,189,201,207,219,205,172,207,200,209,206,226,208,193,188,183,190,224,202,201,210,209,171,187,221,235,181,194,184,227,206,207,193,191,203,207,218,204,206,204,188,202,213,198,203,216,203,205,218,212,221,201,197,202,216,203,195,198,209,227,211,202,195,208,227,189,202,188,200,215,211,184,204,203,201,189,206,212,210,196,207,183,192,197,226,211,208,198,200,206,186,197,199,205,183,162,197,200,196,184,195,200,207,195,194,209,189,189,191,190,195,201,191,177,205,203,196,204,210,207,193,183,203,190,171,210,202,178,204,192,200,178,188,189,190,216,196,217,199,187,208,217,197,204,203,206,193,202,192,207,162,193,192,182,203,182,178,224,204,201,189,196,189,179,196,187,200,181,177,199,202,198,190,218,179,204,202,177,198,203,191,197,187,192,203,210,187,206,214,188,177,204,194,201,173,186,188,203,187,212,200,192,204,184,193,197,195,180,204,193,195,192,182,205,192,170,209,170,186,197,185,199,195,201,182,203,202,201,175,191,189,186,186,188,183,188,203,195,195,192,214,196,220,200,185,204,190,195,159,188,190,199,175,197,190,184,164,204,190,183,200,184,192,185,200,203,197,178,197,175,193,201,191,171,202,184,179,136,137,148,145,142,164,178,149,153,154,197,191,171,170,193,176,214,203,184,168,159,80,78,102,84,171,173,137,143,122,178,173,14
1,136,167,184,142,164,155,154,135,130,159,191,203,95,94,105,61,99,66,30,74,140,93,66,94,34,16,7,22,41,54,88,40,103,200,237,243,133,185,249,246,247,197,103,53,60,104,109,66,54,69,125,105,81,104,64,59,13,4,38,58,96,112,157,130,75,54,59,16,46,39,61,94,120,114,121,123,91,101,68,42,29,56,131,176,58,28,11,0,52,78,63,33,20,22,10,68,141,138,95,53,19,20,27,32,37,25,27,19,51,40,76,61,26,56,21,83,179,129,103,132,64,37,20,28,59,39,12,13,34,36,14,6,16,16,4,47,38,35,45,12,16,19,64,32,36,118,85,41,67,28,21,28,19,35,24,52,28,18,39,7,12,69,43,48,14,27,4,34,57,39,15,21,24,33,127,105,48,70,117,109,176,113,134,127,90,18,9,24,55,50,42,21,52,9,30,42,23,33,7,38,61,97,29,104,143,52,41,43,42,17,34,44,43,26,51,41,32,24,40,39,42,0,10,36,29,51,50,39,33,26,26,27,66,146,75,17,83,72,39,63,51,53,38,58,44,27,26,17,31,17,28,27,34,37,40,43,27,60,17,15,45,30,73,116,47,15,46,50,102,105,92,43,51,48,45,23,31,34,14,31,61,44,33,39,42,21,49,30,20,60,118,76,109,123,88,118,92,114,107,92,50,53,117,108,102,90,38,34,42,29,32,31,20,46,35,32,78,122,106,79,90,100,127,121,74,29,49,75,67,40,18,21,15,6,42,38,47,65,81,37,71,41,28,11,29,30,53,202,222,254,210,182,218,231,242,240,221,196,231,222,237,222,214,206,110,34,13,28,30,7,17,6,7,32,1,3,30,5,5,0,49,9,10,25,14,36,31,19,21,183,200,196,184,188,200,179,207,193,201,205,194,196,183,199,187,196,204,207,213,217,204,209,179,203,217,198,173,206,206,212,180,212,220,218,180,206,208,196,208,205,199,171,197,197,217,188,209,195,224,209,234,214,218,189,194,209,200,231,216,225,225,234,194,204,219,216,201,213,216,212,217,177,202,218,216,196,229,188,187,213,222,190,199,195,198,213,207,193,187,194,203,195,216,187,211,198,190,211,202,209,207,196,204,184,189,177,186,191,174,211,216,206,209,213,206,204,192,175,204,194,207,203,200,210,155,192,203,189,191,230,190,198,188,184,207,196,212,195,202,185,207,205,188,199,225,221,175,197,180,171,213,196,204,182,180,206,207,195,214,168,192,188,209,176,197,214,209,211,204,195,202,199,178,199,201,188,179,226,193,161,200,203,207,181,202,199,206,166,211
,197,202,197,200,186,221,190,181,197,211,182,183,194,194,206,165,196,184,195,203,200,203,175,219,170,187,206,197,190,226,214,198,190,209,182,195,205,229,197,205,197,183,201,201,193,193,199,193,214,195,175,179,199,198,173,215,189,203,191,193,214,209,205,190,194,209,190,206,196,176,205,193,218,195,196,216,183,208,226,197,191,184,198,205,191,214,199,169,198,215,213,192,167,194,187,180,166,202,195,188,183,203,202,208,180,191,178,210,134,159,173,132,185,199,223,161,197,161,185,183,169,170,196,187,189,199,131,176,164,104,87,109,130,245,150,128,184,161,175,152,120,195,146,151,136,167,152,174,156,118,163,189,161,110,83,107,81,85,52,40,134,152,117,78,69,42,22,41,33,70,49,67,103,196,245,232,182,147,252,255,240,195,77,72,58,88,171,119,52,56,101,128,112,82,72,41,34,12,53,58,101,135,152,160,121,110,103,66,45,35,54,22,48,106,131,142,137,139,154,114,55,62,62,132,122,43,19,4,9,50,59,45,40,23,17,27,38,72,125,108,82,30,31,89,109,54,22,35,28,30,27,11,27,16,30,14,93,181,111,103,126,53,11,36,15,47,42,19,34,34,43,7,24,25,19,19,29,3,39,27,30,54,68,31,17,51,138,72,19,51,28,37,25,24,36,43,37,34,37,11,9,13,17,33,17,20,26,25,35,50,26,22,27,33,69,98,88,53,59,43,35,39,34,12,48,29,45,29,69,58,23,34,31,12,44,62,58,3,24,40,89,76,106,48,53,114,48,48,41,53,57,52,22,29,34,44,41,25,38,16,29,42,29,35,25,69,22,26,4,28,47,63,43,51,127,74,45,164,132,136,175,91,119,156,82,46,14,34,14,33,7,4,10,24,7,25,29,56,56,35,28,19,43,114,89,58,36,74,59,38,28,26,39,38,39,41,54,29,12,22,25,47,68,54,27,25,13,45,29,63,81,119,71,55,129,104,90,129,119,128,84,45,33,65,102,108,128,72,28,61,37,33,29,13,26,51,46,60,98,146,111,95,97,120,105,96,19,71,140,64,45,51,26,8,11,12,43,22,52,80,62,66,22,48,46,48,32,16,117,143,138,123,58,142,164,147,157,142,152,183,228,228,213,214,212,128,0,18,14,20,17,15,19,21,16,8,35,13,17,1,3,27,1,15,3,20,23,14,18,12,213,214,202,205,200,171,206,191,200,218,222,190,208,191,200,194,206,219,202,229,185,203,198,210,200,214,199,218,206,224,196,193,196,205,203,224,197,221,204,207,188,204,218,195,196,195,177,1
99,218,191,195,189,172,205,189,205,214,208,202,219,242,186,193,204,198,211,199,202,172,208,184,197,192,215,194,204,189,182,205,203,193,209,208,219,199,191,213,198,214,204,209,198,208,190,189,190,183,212,202,193,204,192,178,210,205,198,210,185,200,202,212,197,208,158,195,181,193,203,182,215,186,182,180,185,187,195,211,196,179,204,193,222,194,189,187,172,204,211,187,192,196,182,180,194,204,184,207,227,200,190,202,192,194,205,185,189,194,191,207,195,209,198,201,187,214,184,195,206,186,193,192,215,192,215,183,200,204,189,205,200,197,204,194,197,223,190,195,202,174,220,172,216,187,159,219,187,189,182,179,168,188,184,209,196,200,182,227,204,177,213,204,167,186,210,191,231,193,202,170,195,189,203,196,212,198,179,179,194,211,188,174,198,212,198,193,184,186,206,179,195,202,180,214,216,196,170,172,209,206,192,189,218,196,191,178,208,198,186,179,185,194,182,194,162,181,185,189,182,184,195,196,175,183,192,188,186,182,193,191,214,189,173,189,201,193,203,190,208,195,207,200,172,192,175,180,197,200,204,137,174,187,166,176,180,200,129,215,188,180,207,189,214,195,174,199,177,149,165,177,92,72,73,146,237,137,122,143,93,160,196,176,169,138,138,138,176,166,169,135,116,193,186,214,124,117,107,110,99,82,68,139,83,64,105,37,18,44,71,67,78,56,77,204,255,255,164,199,162,244,240,210,98,42,85,106,113,129,93,67,48,63,118,91,61,27,25,34,15,36,91,106,129,106,125,119,123,124,101,90,66,22,30,43,19,59,139,114,157,162,155,166,113,56,20,26,46,29,15,29,32,56,38,35,40,14,31,6,29,76,128,77,31,118,195,132,64,28,15,38,34,54,18,41,32,30,42,127,148,79,110,130,49,21,22,3,3,8,22,48,49,13,3,6,9,31,36,12,10,16,23,45,59,27,22,12,26,97,81,43,82,28,57,69,42,41,31,39,51,26,41,48,24,10,7,30,47,26,23,28,24,24,63,65,63,77,75,123,23,13,12,65,37,14,33,50,56,85,58,31,23,39,52,32,76,81,50,79,75,75,75,133,122,116,103,112,128,43,104,68,54,61,47,70,77,55,66,81,30,54,78,50,53,35,36,46,46,31,26,35,17,50,44,12,29,129,63,10,135,93,141,164,135,154,159,78,57,7,10,31,0,47,24,25,13,12,1,40,51,49,18,39,36,51,94,91,40,55,67,26,38,37,2
9,29,30,29,61,23,26,35,19,30,36,68,58,38,34,30,26,44,89,54,117,76,49,101,73,122,136,151,149,53,54,44,93,129,83,114,74,29,54,65,46,18,5,17,42,29,36,126,160,145,136,114,113,109,79,12,117,199,84,49,19,29,23,34,34,22,11,57,56,48,74,41,27,23,62,35,52,41,40,34,49,15,76,133,138,143,113,94,88,155,196,200,186,222,110,6,12,4,10,22,5,17,2,21,5,6,30,12,38,2,24,12,7,34,22,11,15,24,0,229,186,199,180,200,208,213,205,225,194,212,205,237,212,207,209,206,196,192,195,200,201,177,186,210,169,198,214,183,212,201,216,211,181,182,194,195,201,230,201,193,191,196,208,204,211,201,194,198,182,194,201,203,202,208,194,205,207,232,213,204,190,214,200,223,182,195,196,205,200,202,184,175,205,189,171,205,198,191,188,211,222,200,200,195,188,213,208,203,205,204,177,188,193,195,194,183,215,200,178,206,196,171,195,175,207,208,180,194,199,168,202,211,201,189,183,178,193,174,227,217,181,186,200,213,188,169,183,184,214,177,184,218,193,192,186,220,224,193,220,174,199,205,193,187,177,212,206,182,166,192,194,212,153,200,195,202,189,184,221,199,202,206,191,188,195,199,193,205,189,200,206,206,197,209,176,185,200,204,208,188,184,197,196,168,186,193,196,193,166,191,181,217,200,194,193,185,191,201,200,190,197,205,207,159,197,218,206,200,181,187,195,207,199,180,177,214,202,212,171,191,183,193,171,200,175,196,190,215,194,195,195,189,190,186,182,199,179,190,196,177,195,203,205,184,196,176,214,221,187,205,195,184,204,188,171,213,180,198,194,189,201,211,182,192,211,187,188,182,183,176,199,201,165,187,212,193,197,163,154,217,172,189,186,199,188,197,210,174,160,154,162,180,160,209,203,181,195,115,143,129,124,143,152,142,129,206,208,199,187,179,184,182,196,214,195,156,181,182,46,45,65,115,157,79,62,117,147,148,200,170,146,156,142,207,184,145,190,139,171,232,182,178,131,93,145,95,116,57,88,104,40,82,78,59,19,57,76,78,90,63,184,247,242,206,189,246,199,255,228,108,36,63,110,127,107,74,54,60,88,68,74,70,37,25,25,12,40,89,97,115,116,107,106,113,101,124,116,140,102,74,50,48,44,19,51,94,157,182,133,145,149,131,72,43,7,11,36,34,
58,209,198,222,201,182,203,198,205,191,177,189,196,211,195,175,200,219,201,177,197,187,227,218,186,198,167,188,208,202,199,218,213,191,186,207,194,201,221,187,169,223,199,189,217,207,189,228,216,197,182,160,174,142,182,193,116,132,152,187,193,223,193,203,174,197,198,220,215,181,206,207,222,165,144,175,162,178,175,141,108,108,101,98,110,104,160,216,171,166,173,155,168,229,130,63,41,102,225,219,232,152,109,76,31,14,31,173,246,251,158,175,244,254,136,83,57,99,100,96,73,38,101,141,101,51,45,54,38,31,18,51,94,129,167,144,131,124,110,113,92,96,89,94,83,124,79,52,28,10,35,179,99,92,88,65,35,52,60,91,104,121,27,30,98,101,86,32,46,37,66,38,33,30,15,37,48,14,26,5,48,36,27,22,65,34,21,33,36,1,27,15,29,23,74,76,124,115,160,144,124,174,153,146,144,114,108,158,163,176,217,169,174,174,184,137,143,108,109,118,100,117,121,140,139,127,140,150,122,125,111,98,89,68,26,26,25,31,19,21,8,5,21,27,6,30,28,26,10,45,3,17,19,15,17,11,9,33,26,11,13,36,6,12,17,24,5,35,46,20,57,26,21,33,19,26,24,17,8,32,14,55,125,130,92,91,74,90,104,111,141,91,76,161,107,110,87,139,131,48,104,121,132,96,161,103,82,146,123,71,155,123,97,69,27,89,169,126,22,9,22,26,44,63,57,74,125,89,38,15,43,21,29,21,22,18,32,28,23,9,12,17,24,5,28,8,0,58,30,51,13,27,40,25,39,48,49,53,56,115,108,121,108,128,125,152,182,172,170,185,159,158,179,193,159,178,162,144,168,130,64,64,63,80,65,69,53,46,47,47,84,106,134,140,109,80,135,81,58,44,69,69,111,141,147,157,188,165,161,177,158,129,126,135,177,138,146,158,138,128,108,48,53,31,57,29,68,202,128,41,42,45,40,14,44,60,49,44,70,204,212,232,226,218,228,220,226,228,222,177,203,215,235,104,8,10,2,15,19,6,11,0,8,12,5,30,4,1,8,0,12,12,14,3,32,2,19,20,218,192,221,219,205,199,211,234,197,193,196,229,196,221,197,224,192,182,190,190,181,185,196,201,209,195,204,202,201,205,177,202,204,179,204,197,213,230,195,197,183,188,200,174,198,208,197,214,190,210,211,206,192,203,218,186,200,215,199,211,199,201,197,189,196,187,218,186,198,204,203,191,203,203,195,198,211,210,220,159,222,201,194,184,199,204,210,201
,186,168,218,196,189,202,212,201,220,176,201,185,222,209,199,168,232,199,192,190,203,187,228,178,186,181,196,192,203,192,198,208,205,198,184,196,220,189,201,165,212,196,198,187,188,198,205,186,188,184,188,201,166,175,183,180,185,216,175,210,196,190,188,214,181,207,192,184,191,208,204,185,196,166,198,170,185,187,224,205,178,225,178,199,182,205,196,200,185,183,194,184,169,184,204,201,196,205,194,201,204,200,233,182,191,195,208,208,199,196,207,190,207,196,222,186,196,191,199,183,216,204,182,182,179,196,211,207,203,194,184,200,184,191,190,209,202,186,176,192,190,181,166,180,197,186,195,189,198,199,204,206,148,123,180,211,217,218,189,240,214,204,217,207,194,180,181,198,189,207,185,199,187,215,203,205,192,198,204,200,190,209,203,217,222,172,191,234,194,180,185,203,221,191,174,175,212,189,233,220,224,221,228,153,157,160,153,160,226,222,142,183,166,155,176,204,213,212,176,208,188,214,227,171,219,220,203,194,178,160,161,166,158,199,127,108,106,75,76,75,159,193,152,152,141,157,217,196,123,55,66,151,244,246,244,128,40,46,31,8,114,234,227,219,151,133,203,141,48,37,59,90,135,73,53,72,118,140,84,47,22,37,27,20,42,88,134,137,157,134,121,117,86,69,58,16,64,62,105,95,85,56,52,61,89,162,102,72,59,48,84,19,98,88,62,19,4,67,120,103,53,35,79,60,45,37,19,55,42,37,52,45,34,52,16,23,9,33,3,1,24,14,15,46,13,18,4,47,14,36,43,41,52,72,93,90,114,144,147,154,150,177,178,186,201,175,169,162,131,150,148,152,136,133,131,113,61,133,103,54,88,49,42,22,45,20,30,20,18,9,30,29,19,8,0,11,19,33,14,16,7,10,9,1,9,8,8,11,19,28,18,22,29,29,17,1,32,21,29,11,15,33,48,60,60,56,27,52,26,8,39,2,17,25,26,42,82,132,105,91,53,95,82,110,129,78,86,99,102,103,146,85,36,32,62,77,115,158,127,71,134,139,151,103,175,73,82,147,92,184,216,116,23,0,39,20,39,65,117,88,82,55,19,15,25,47,17,10,18,14,40,21,23,21,6,22,43,33,26,44,14,23,21,60,33,28,51,39,35,53,27,2,26,32,49,64,67,82,103,149,147,167,141,160,144,148,156,160,175,150,151,161,175,152,120,198,158,176,162,143,147,135,140,122,147,179,200,184,143,82,111,96,73,117,138,156,18
9,175,179,142,157,145,109,145,151,155,174,148,132,120,122,67,72,36,41,18,38,57,22,14,208,246,87,36,51,31,40,56,36,38,38,24,67,205,194,223,237,200,225,202,227,227,200,225,209,206,205,103,12,11,16,4,1,16,60,4,29,23,17,24,17,11,20,9,13,0,8,9,14,27,23,13,208,191,207,189,165,202,211,185,217,205,183,196,215,185,180,175,193,175,205,196,212,179,178,213,188,211,189,182,196,213,205,197,195,191,211,225,190,197,204,211,209,202,181,203,195,205,208,215,192,193,209,226,215,216,230,218,208,198,207,201,199,221,219,181,211,190,185,208,214,195,184,191,220,193,193,198,189,215,176,208,203,167,213,184,193,177,201,172,225,195,220,178,206,229,202,195,195,222,195,205,200,206,195,201,223,208,205,188,205,196,185,213,224,183,182,210,173,202,195,176,205,209,193,182,211,173,205,197,171,189,174,187,199,216,199,196,209,194,193,209,191,207,173,208,197,178,180,202,210,195,198,168,202,186,199,209,213,184,183,171,191,187,179,192,186,181,201,168,205,193,197,216,194,213,213,199,219,202,184,181,196,185,191,186,190,188,193,188,180,185,189,207,207,169,211,195,207,196,173,216,205,191,200,215,176,187,187,196,200,184,210,197,195,185,203,183,188,215,193,207,191,206,190,196,224,185,216,217,211,213,216,209,232,222,212,204,229,221,214,228,190,204,237,239,222,193,211,229,229,220,204,193,205,214,238,217,230,232,226,227,240,230,229,224,228,213,221,219,223,207,226,207,250,226,225,230,217,235,234,213,206,241,236,195,235,245,237,217,209,210,199,168,145,138,154,161,187,162,147,196,195,127,181,229,238,233,226,210,197,227,219,217,220,227,215,184,200,172,190,161,207,229,163,87,91,101,117,66,112,126,110,122,153,228,240,144,65,73,34,141,233,236,150,44,1,13,33,46,151,230,214,230,104,99,157,56,14,58,94,136,83,48,97,127,162,102,81,80,37,23,11,45,77,117,123,130,106,107,108,133,110,93,59,67,57,47,65,96,72,57,92,72,40,142,121,64,58,53,42,12,98,159,67,30,76,115,158,79,49,63,116,72,49,57,26,27,32,52,10,18,14,16,18,17,41,19,4,1,26,18,28,19,9,32,13,54,24,64,32,26,20,23,14,33,41,49,66,58,52,96,100,107,146,117,116,120,105,76,78,95,48,59
,29,53,11,29,54,42,21,33,34,21,29,30,22,14,10,24,13,20,34,34,4,8,15,15,4,32,9,8,45,20,3,17,15,17,13,11,24,23,2,40,21,32,18,24,12,23,48,70,97,135,98,78,93,98,55,35,26,19,7,12,7,9,48,62,73,108,65,65,74,80,121,87,104,121,61,46,28,46,33,34,37,54,68,64,83,121,180,163,138,96,124,83,80,176,93,155,155,52,26,2,32,17,16,49,141,83,50,46,36,26,9,17,22,21,19,10,25,11,30,27,35,13,57,39,30,16,21,5,12,27,37,25,17,31,17,21,12,14,20,18,5,17,35,39,31,72,76,119,150,123,106,143,130,137,137,126,127,139,148,140,153,177,154,179,156,209,181,180,178,184,180,161,146,173,124,113,114,126,74,124,131,141,151,126,144,139,128,127,126,139,144,148,126,107,71,37,24,26,55,55,43,28,16,17,15,96,243,237,80,31,43,39,38,28,47,46,53,13,68,183,216,223,218,205,202,227,221,214,209,231,251,221,219,115,9,0,3,33,13,10,10,26,14,34,14,23,11,6,9,17,10,1,16,5,29,16,2,10,192,183,215,177,202,218,181,205,208,211,204,233,203,191,230,193,204,199,199,218,212,168,217,205,210,223,196,187,213,213,171,207,164,206,204,207,222,200,195,228,186,194,216,196,201,190,192,189,179,184,192,211,196,192,216,198,201,204,198,223,231,208,213,219,223,206,187,195,198,186,199,194,202,210,181,204,191,191,197,207,214,200,221,219,200,187,196,191,197,195,196,185,205,181,208,191,196,186,205,182,221,195,188,171,184,186,185,221,201,203,176,191,175,205,222,206,206,185,168,197,210,195,208,187,187,202,200,204,181,204,189,203,197,219,185,217,187,201,179,210,179,164,195,199,209,189,198,198,182,185,204,216,199,200,185,200,215,195,199,197,185,188,178,185,186,178,185,185,203,184,186,187,201,191,196,169,194,195,205,216,202,181,174,167,211,187,173,190,214,197,200,199,189,224,194,202,189,186,202,186,192,213,200,235,205,164,205,194,190,202,212,169,179,205,204,187,191,189,190,206,198,173,205,201,207,222,245,214,242,238,249,213,230,242,234,236,245,227,250,233,219,237,250,238,242,153,99,197,208,246,213,226,242,243,240,238,245,234,222,254,240,234,252,251,247,209,231,242,253,239,233,237,248,251,232,248,225,235,231,250,216,240,216,212,227,202,238,222,215,240,188,157,156
,140,122,124,146,120,184,200,192,147,114,194,237,228,208,216,212,212,194,207,216,218,187,188,211,133,175,176,242,207,123,96,111,96,125,92,85,55,85,152,159,240,218,99,59,77,14,49,108,135,62,19,5,46,22,100,217,248,222,234,130,105,126,33,61,109,101,84,68,81,130,144,71,54,50,38,30,34,49,93,122,118,156,127,133,126,151,122,55,47,101,92,73,51,96,87,51,109,72,54,25,60,67,70,43,51,36,49,148,182,80,79,123,163,156,70,62,147,94,72,65,64,65,19,13,27,29,17,24,25,32,16,33,29,31,19,28,17,17,10,22,32,5,13,34,10,30,37,31,12,30,39,33,28,27,23,17,2,9,30,37,30,23,32,21,43,29,26,21,24,43,11,28,18,40,45,19,25,54,26,31,6,8,5,24,23,8,21,55,19,36,27,17,11,14,25,41,10,24,17,4,29,18,20,19,19,29,18,27,22,14,42,30,29,26,8,123,176,199,191,181,175,193,152,62,23,12,2,50,8,19,9,15,62,98,116,50,62,96,97,127,74,97,95,21,36,24,42,58,24,41,33,74,54,59,123,139,116,97,82,93,50,67,129,34,70,39,23,17,47,29,5,74,148,144,113,86,83,39,43,47,6,25,25,10,21,44,46,29,35,60,20,9,12,22,15,13,1,22,25,42,35,18,43,12,12,27,2,37,43,59,42,55,14,36,36,37,27,36,30,83,94,95,90,112,107,128,137,164,127,130,159,131,153,114,125,134,149,144,140,132,154,140,142,130,108,139,122,110,148,133,163,152,120,130,153,158,123,94,107,75,42,48,44,48,31,35,44,43,47,21,23,15,23,35,127,231,171,55,54,34,18,24,25,69,49,64,18,75,211,210,224,239,231,232,213,232,223,231,218,231,223,221,129,5,6,2,12,0,25,11,4,9,12,26,20,0,15,1,14,31,0,20,30,4,2,13,19,211,221,208,211,207,193,196,200,212,197,223,195,177,194,201,178,228,213,210,190,203,201,204,170,213,217,220,225,188,219,187,209,191,195,223,196,198,211,179,186,209,214,207,199,201,209,222,226,192,206,192,199,232,208,221,222,216,216,206,166,193,183,207,207,210,193,220,215,184,196,175,205,188,206,213,188,224,184,206,197,202,185,197,191,207,174,199,214,212,208,200,205,178,164,211,207,206,193,178,196,206,213,194,203,213,214,200,213,207,212,177,216,206,189,185,187,174,194,200,190,198,194,208,170,187,202,211,196,210,205,173,202,174,203,199,172,187,211,211,184,185,203,209,171,179,191,192,166,179,200,197,189,199
,187,213,190,210,176,206,204,198,199,212,207,189,200,195,191,208,203,187,194,217,198,206,197,187,224,216,199,198,185,181,183,213,191,200,204,203,190,180,201,192,206,201,202,159,182,168,187,192,176,200,163,169,232,188,197,192,204,210,208,195,198,193,187,212,207,206,206,184,180,194,178,211,212,209,206,210,186,212,190,185,207,204,188,157,188,156,207,206,180,155,153,120,97,51,119,151,185,163,169,164,160,140,141,167,138,160,152,159,134,138,169,187,137,153,154,160,138,154,158,133,146,138,130,139,125,131,126,163,124,148,167,135,109,144,186,151,134,130,140,117,116,127,136,152,118,165,220,208,142,71,111,130,109,134,125,120,153,125,143,119,143,125,156,108,65,100,113,131,124,79,116,141,127,109,105,102,72,79,92,76,152,102,35,50,70,23,17,24,54,44,40,46,43,66,153,246,249,224,250,160,113,89,83,115,98,96,65,91,123,127,72,88,56,20,33,29,12,64,139,142,138,147,131,149,122,129,98,62,50,76,78,63,68,80,73,37,61,31,41,12,84,81,81,38,33,109,107,52,70,106,110,109,109,109,51,34,84,86,62,64,58,60,31,30,7,42,34,49,22,29,34,29,10,29,19,18,12,2,12,11,18,21,16,18,27,7,7,51,38,36,26,32,17,0,21,23,27,9,19,25,15,29,16,20,44,36,31,24,23,9,29,23,34,52,72,36,36,9,13,39,10,32,43,23,16,27,30,14,16,33,32,13,38,7,15,6,27,9,17,37,29,41,14,33,26,16,35,15,18,10,35,13,34,8,11,49,178,181,191,149,172,198,145,77,23,23,23,13,37,32,24,41,93,95,54,40,87,47,137,135,63,52,54,37,47,37,52,41,54,64,98,54,39,33,62,92,149,164,101,111,55,63,116,67,27,10,23,24,5,38,53,125,160,133,155,130,136,153,156,63,19,35,40,12,14,18,31,19,6,29,37,3,10,29,21,20,13,42,2,31,14,11,32,39,12,30,24,22,7,35,26,28,15,18,44,26,13,31,30,42,34,55,54,27,43,54,73,57,105,106,77,115,119,127,98,134,109,133,125,112,152,157,151,113,130,186,155,112,122,126,125,94,103,80,45,31,53,34,29,11,27,28,36,20,36,32,49,39,43,24,18,29,40,28,76,131,94,64,66,40,35,34,29,62,43,66,60,162,211,219,205,225,206,214,237,213,233,237,209,241,197,221,114,4,1,1,7,9,19,36,3,45,15,2,12,2,26,0,23,27,16,17,2,27,20,23,20,203,235,209,194,200,187,200,218,180,207,201,201,191,189,217,224,24
4,250,219,205,177,211,211,201,186,213,201,185,200,191,209,195,211,185,193,178,186,202,179,212,196,188,195,186,201,200,203,211,230,182,198,223,200,203,207,220,203,187,198,193,183,208,210,209,203,192,204,219,200,198,191,197,199,225,211,187,216,201,182,227,196,196,215,192,171,204,196,194,220,230,197,217,189,214,210,178,208,190,202,206,205,188,226,197,182,175,199,192,177,201,183,204,212,196,190,201,183,188,204,193,216,182,197,200,188,183,176,218,170,185,199,175,174,197,213,200,197,201,192,188,192,195,175,196,198,210,209,206,190,217,188,211,194,185,188,193,210,195,197,200,209,186,187,186,186,176,172,201,191,191,182,192,193,198,196,188,189,214,190,190,191,202,196,178,213,181,174,179,204,216,178,199,188,194,206,202,175,207,185,193,190,188,201,169,192,172,198,205,174,196,185,210,230,186,198,197,197,198,188,196,168,191,175,113,64,74,69,50,74,63,60,78,48,53,42,27,19,10,45,106,76,28,6,14,17,15,8,19,22,29,16,17,14,44,5,7,25,1,4,14,3,2,12,23,57,0,10,4,9,33,12,9,5,42,53,17,10,7,1,7,15,2,46,59,46,9,42,81,43,46,69,59,64,87,98,110,145,136,195,224,173,147,89,55,4,5,19,10,22,19,7,20,3,25,38,37,54,9,44,54,27,36,21,95,134,117,126,106,100,42,50,31,0,43,30,4,65,59,61,44,31,21,29,56,42,45,49,199,237,254,215,225,190,138,126,98,90,84,67,67,111,130,76,100,70,39,24,47,40,98,127,143,129,125,135,124,146,144,118,81,49,55,68,68,69,46,55,14,46,77,44,46,33,60,74,35,50,129,234,126,14,36,55,111,72,51,94,48,40,52,51,76,46,55,28,34,26,30,17,22,22,3,13,25,23,22,7,28,31,21,7,16,28,15,16,17,20,35,31,13,36,11,47,12,47,24,9,28,23,37,14,39,7,7,14,12,14,24,20,55,0,15,34,22,13,48,59,85,47,56,36,12,5,16,18,34,34,21,52,15,26,26,41,3,29,15,31,27,25,22,14,20,27,17,20,17,24,13,17,43,28,9,6,17,25,27,18,35,31,49,95,66,60,20,45,49,56,25,19,10,26,19,36,48,63,138,84,38,44,59,58,108,147,49,39,19,33,19,46,85,87,126,122,44,39,45,72,61,28,135,169,121,99,54,63,98,96,56,24,1,23,9,9,38,74,131,172,141,145,116,160,174,59,21,35,17,29,31,32,42,15,12,14,16,11,15,8,21,45,33,23,32,17,38,2,20,31,34,33,15,12,25,27,31,21,13,26,18,61,29,4
1,24,9,29,35,39,41,41,36,23,32,19,33,28,45,59,75,77,84,84,101,113,99,72,80,77,77,126,100,64,33,65,49,48,61,24,12,29,20,16,28,25,5,22,4,31,23,29,48,24,40,7,27,27,22,25,50,51,46,47,62,56,19,31,56,47,42,39,6,55,186,226,208,246,207,220,205,220,235,212,224,231,218,225,208,106,3,1,6,0,12,34,20,11,19,13,25,25,16,1,12,4,25,16,45,15,40,36,8,19,211,210,210,196,183,196,195,208,199,206,210,179,198,188,218,177,246,246,194,214,214,192,179,196,191,204,214,189,185,204,215,176,212,219,222,201,211,185,201,209,196,224,210,208,204,227,202,196,210,209,186,200,208,227,209,213,190,204,203,209,210,190,189,211,203,233,212,219,199,230,198,202,207,226,205,197,206,196,188,212,190,181,205,175,195,200,204,182,170,216,213,218,210,176,216,188,194,209,209,192,201,196,202,209,172,209,192,202,213,208,194,187,195,200,196,197,201,214,218,204,208,190,212,191,193,196,177,194,193,191,222,172,189,219,198,194,197,180,182,180,181,193,196,207,215,169,199,182,190,184,219,192,192,216,190,163,216,202,180,202,213,202,194,175,212,207,190,178,188,200,196,191,166,198,215,200,185,191,179,201,196,192,192,209,199,192,207,184,203,200,193,197,205,210,198,189,186,190,202,217,188,176,200,182,194,180,193,214,194,182,198,190,192,199,181,191,210,220,210,200,202,188,181,118,85,73,76,70,32,68,65,77,78,57,77,74,55,65,57,77,95,79,63,63,73,67,57,57,54,27,56,57,36,30,47,44,33,35,34,33,62,57,55,51,104,48,37,39,67,42,41,40,53,34,94,52,21,34,40,18,44,35,87,100,42,5,16,73,56,36,62,82,79,45,77,91,154,167,208,179,137,99,38,44,31,13,15,6,58,34,10,25,15,9,12,24,39,6,25,41,9,19,33,87,109,104,157,131,118,76,49,36,17,61,27,20,66,75,75,47,30,22,36,32,61,38,56,183,184,191,157,160,178,152,122,109,70,38,90,121,106,91,62,64,62,7,34,59,89,128,116,118,149,119,129,118,115,80,108,37,56,74,70,126,59,41,4,28,119,131,46,49,31,43,11,38,81,168,200,100,45,81,80,55,34,77,62,46,68,48,10,32,33,25,50,51,34,7,34,38,64,30,11,24,17,25,33,18,16,39,18,22,16,32,38,17,26,41,22,2,30,13,10,26,18,6,25,33,4,9,12,5,16,17,7,12,42,7,28,16,23,40,28,28,29,29,67,62,56,28,6,13,1
3,3,25,25,25,31,15,23,4,36,17,24,12,22,23,31,15,23,31,10,28,13,19,2,13,35,17,11,27,19,34,4,28,21,11,14,32,59,46,52,39,45,37,26,19,25,25,41,15,25,39,34,93,122,65,51,50,61,66,73,86,28,52,9,17,22,30,91,123,130,65,42,14,9,5,59,72,90,130,131,101,47,69,62,126,88,37,23,8,2,37,16,48,62,96,80,85,102,52,52,54,30,51,11,19,33,24,25,30,31,18,18,28,16,11,23,21,27,7,19,18,28,13,56,2,20,22,10,15,5,35,37,18,4,5,6,9,35,53,49,28,36,47,48,63,78,59,41,11,32,16,33,24,49,16,50,14,32,36,26,61,50,45,48,20,27,42,25,40,46,43,43,49,36,18,8,32,16,19,29,20,22,33,14,14,35,21,27,18,28,35,23,49,38,12,6,20,43,46,69,34,34,33,43,46,25,38,144,243,236,209,214,216,236,199,237,213,203,219,239,232,209,214,90,11,11,10,18,2,3,25,13,17,41,31,14,2,6,12,8,20,3,3,2,14,17,20,8,210,192,221,194,206,217,206,211,204,227,217,206,194,224,220,203,250,254,217,207,213,205,184,215,202,178,211,198,194,189,203,184,192,197,213,195,196,207,216,205,217,172,198,193,207,207,196,183,185,193,170,208,197,192,199,190,209,200,202,211,219,203,197,203,190,194,228,209,219,196,188,206,181,196,190,208,201,182,210,190,205,210,194,202,160,186,214,201,190,201,206,209,215,177,201,183,176,182,196,180,205,207,198,196,170,217,195,179,211,186,195,196,184,198,196,186,194,197,196,201,188,204,204,206,197,214,213,201,191,176,190,187,213,187,179,200,193,188,187,181,182,178,197,202,217,204,185,196,181,219,191,211,195,196,191,199,183,204,194,203,208,198,210,182,197,195,183,197,208,211,168,228,191,186,167,196,177,220,183,201,213,195,198,174,196,192,221,187,182,206,210,233,200,204,194,188,208,198,196,211,192,202,196,172,214,190,199,200,189,191,179,205,185,194,174,190,182,208,174,211,196,183,201,197,184,207,193,213,177,213,206,216,209,208,211,196,215,185,180,167,194,223,226,211,208,214,203,216,213,187,227,230,208,203,231,217,223,211,186,224,221,224,210,220,221,198,211,241,218,212,209,241,226,214,205,201,211,230,189,227,202,190,205,197,161,161,178,192,173,173,172,136,141,122,105,96,157,204,203,162,69,73,45,73,160,183,187,186,173,140,158,175,103,166,144,147,1
15,111,180,142,156,138,113,89,72,132,134,114,83,19,92,77,97,95,70,52,59,61,42,12,17,4,36,50,49,45,114,202,116,84,99,60,85,110,133,79,32,71,134,124,83,86,42,10,21,14,62,122,140,117,119,94,127,102,125,107,113,40,42,57,38,34,80,76,49,36,61,132,208,207,125,80,45,16,63,143,160,64,84,90,55,75,63,66,36,70,41,44,82,43,57,8,28,45,50,65,38,23,25,34,44,37,22,11,30,22,35,22,30,22,12,12,26,31,8,8,6,28,4,19,22,18,30,11,24,22,12,8,19,5,19,13,2,29,24,23,25,34,19,14,13,33,39,26,44,70,119,67,46,28,15,16,40,16,5,2,1,27,13,29,16,23,4,17,26,32,27,49,14,21,48,35,13,1,30,14,25,21,47,25,21,17,18,30,12,29,28,34,20,50,46,49,10,50,47,13,20,7,13,4,15,4,41,50,102,121,63,82,68,53,59,85,67,47,26,55,35,40,65,120,154,106,52,17,42,12,25,44,72,88,107,117,68,40,67,94,110,107,57,31,13,28,10,2,38,49,74,43,17,17,35,29,12,24,34,15,22,28,44,30,30,14,22,7,38,25,26,24,22,13,20,28,6,13,23,50,17,7,35,26,28,13,22,33,16,25,24,31,41,23,36,6,21,49,90,131,107,56,62,25,29,28,28,42,34,25,22,36,31,16,15,10,34,19,23,34,28,27,23,47,57,44,48,38,27,55,9,13,52,38,6,4,25,20,25,16,27,24,28,22,30,15,39,15,29,20,44,23,27,50,41,86,49,69,62,55,40,88,165,234,250,210,218,209,222,212,195,216,212,199,218,227,209,222,234,116,12,0,9,27,4,24,40,12,33,17,22,1,5,14,14,8,8,11,23,8,5,12,35,16,199,229,201,188,210,193,221,209,194,203,189,198,203,206,206,192,243,219,214,194,205,184,203,202,208,205,193,204,189,200,214,212,204,183,209,211,218,215,231,192,220,196,213,197,205,218,211,221,193,215,202,221,190,185,212,206,207,179,229,200,187,208,185,199,203,199,199,200,207,208,193,210,192,213,212,191,223,205,198,202,196,190,210,194,236,185,185,194,182,198,219,207,219,206,203,210,206,210,209,195,184,209,189,185,189,191,213,176,195,206,202,196,182,195,218,223,216,199,199,210,201,192,197,176,190,209,213,202,198,172,223,217,197,214,207,183,192,186,172,215,183,170,220,215,200,178,187,207,198,200,185,181,192,202,204,218,172,174,207,190,182,187,194,204,184,185,193,191,171,180,181,210,202,218,190,191,197,235,191,192,196,195,167,178,187,183,203,185,189,204,2
15,202,184,195,175,195,217,218,210,190,201,194,175,181,188,206,180,211,176,213,192,206,203,191,181,210,184,184,194,214,209,180,223,226,226,212,222,223,247,252,236,255,214,233,245,238,247,224,217,242,187,239,247,245,246,236,247,246,237,248,243,255,245,247,249,232,241,239,237,241,218,234,227,247,249,229,237,252,248,232,235,255,241,247,249,253,239,251,246,241,247,249,246,241,223,218,248,231,202,189,194,180,182,180,130,137,158,173,141,111,64,33,47,186,247,245,243,253,255,211,251,253,216,247,226,220,177,202,246,248,204,178,211,167,123,151,128,86,58,29,111,217,198,160,161,122,74,68,25,33,16,10,34,53,48,57,181,182,84,76,89,57,35,96,84,18,53,132,132,77,78,48,25,29,42,74,104,115,120,106,112,106,102,124,129,146,103,46,33,53,40,19,55,29,39,57,108,174,196,162,92,43,3,9,104,229,101,18,48,73,95,59,91,81,63,34,35,41,64,17,20,54,59,34,104,72,68,40,40,10,55,24,40,37,2,6,28,54,14,32,5,35,22,31,23,11,37,23,27,23,19,51,13,23,16,10,18,16,31,43,23,40,8,9,7,8,4,13,15,15,32,19,59,36,36,78,145,98,52,47,50,27,13,14,5,14,24,47,17,3,30,18,20,34,33,35,8,25,6,38,27,33,17,16,12,2,40,24,43,17,22,21,40,21,21,21,58,36,36,80,56,50,37,53,35,27,58,1,20,14,23,19,28,58,131,124,93,88,90,110,57,52,39,27,18,40,23,19,74,182,186,97,60,62,39,13,35,20,68,112,95,67,46,144,134,71,135,150,56,23,3,19,29,55,36,66,49,20,29,34,15,43,28,13,30,16,35,29,27,27,31,18,9,17,18,25,14,30,21,11,39,10,6,42,27,22,13,12,21,44,25,9,27,10,18,40,20,12,10,19,31,15,26,100,113,100,75,54,56,58,12,20,17,40,42,17,25,65,50,12,22,26,39,13,41,34,29,24,22,50,54,37,55,41,19,15,27,21,9,7,30,21,15,46,21,45,37,40,30,34,20,40,33,41,38,55,97,25,21,23,46,56,79,55,43,89,101,218,238,244,235,230,227,230,194,210,225,197,204,201,212,231,201,224,216,110,10,14,11,17,15,3,27,2,7,20,8,14,17,22,20,41,9,28,11,5,10,14,28,22,191,209,207,201,206,213,181,219,200,153,213,212,208,212,190,187,225,236,188,209,227,190,219,188,207,212,189,200,210,195,243,204,193,208,218,197,204,184,204,200,226,196,211,208,208,204,211,221,207,209,187,189,205,194,190,206,226,196,205,203,19
7,196,198,206,203,203,212,191,206,202,206,199,209,207,196,189,191,199,204,174,187,208,204,228,196,198,212,213,208,192,186,198,174,200,189,210,196,199,198,204,201,207,197,198,200,177,186,208,198,186,199,202,188,203,203,234,191,212,203,202,180,227,210,212,234,197,196,189,194,179,208,195,213,199,185,196,216,178,185,198,202,192,181,200,209,200,183,207,183,190,171,187,214,204,220,200,187,178,203,196,197,204,203,188,194,187,207,194,196,200,198,179,177,189,187,194,182,186,195,182,174,181,181,199,198,202,190,205,187,214,171,189,195,217,182,182,175,194,198,172,208,194,198,202,188,189,188,174,171,198,207,200,189,169,190,197,194,207,193,211,196,211,205,206,199,224,213,220,211,226,206,214,194,225,201,202,201,185,211,209,198,193,212,211,230,183,219,201,216,209,198,207,221,182,212,226,204,212,224,226,220,208,206,218,210,223,227,217,203,221,210,216,232,228,192,199,191,209,208,198,206,242,224,228,186,206,227,206,176,188,170,196,170,173,116,154,192,197,123,93,35,67,136,186,242,199,222,215,226,184,218,179,198,217,171,152,161,211,245,183,155,182,223,181,132,118,93,78,61,66,125,250,205,224,186,77,45,24,57,88,62,44,48,100,39,155,229,225,111,123,99,30,15,19,73,70,135,107,83,82,47,14,20,45,45,103,133,124,121,140,112,93,149,148,111,147,105,91,43,33,35,39,37,44,19,90,166,198,183,88,47,27,68,32,60,147,95,59,69,61,72,73,63,101,45,42,47,41,52,25,29,45,84,92,71,53,76,76,41,56,54,24,35,4,38,5,34,15,38,51,16,23,32,10,5,31,39,47,18,20,30,11,33,21,32,37,27,22,5,10,10,21,18,11,0,23,2,11,14,37,18,22,13,21,18,59,98,77,15,24,13,2,2,21,9,6,31,30,26,22,32,42,37,22,21,28,19,5,36,18,11,2,40,33,19,14,12,26,34,21,43,37,34,30,9,17,4,43,22,62,108,68,89,84,69,38,19,31,28,23,28,35,43,52,156,98,82,134,119,106,36,30,27,33,20,39,85,94,75,98,42,78,53,80,32,25,44,27,69,71,58,56,101,172,101,65,129,146,92,9,2,46,24,38,56,69,61,47,59,39,24,46,7,26,16,31,7,21,48,19,28,36,28,19,33,26,28,21,18,13,36,16,39,17,13,15,12,9,38,29,17,29,22,32,27,18,35,18,32,18,34,22,43,99,79,63,92,81,60,27,25,18,15,20,12,22,6,2,10,24,21,11,23,18
,3,36,27,44,4,29,22,60,17,29,23,21,25,11,11,27,32,32,24,16,36,11,5,20,28,29,15,26,50,32,45,159,109,60,27,30,46,43,95,86,98,75,75,196,230,232,233,219,213,215,242,187,220,191,186,232,213,213,221,219,217,116,19,15,15,22,13,16,2,42,32,52,1,2,0,17,0,31,12,6,12,25,9,0,35,8,202,180,193,184,184,200,209,216,208,199,207,210,190,195,204,177,228,206,156,171,211,206,215,215,230,191,216,230,210,204,204,195,206,220,196,210,206,197,184,206,189,192,203,201,209,211,192,189,181,181,183,176,201,191,207,193,164,209,204,183,195,190,197,202,199,188,191,174,206,197,220,189,192,188,203,187,218,210,169,195,208,202,174,206,205,224,200,221,201,213,197,193,200,213,212,213,209,213,221,201,196,203,207,201,184,190,208,195,192,208,176,178,174,200,166,199,203,196,172,177,174,190,204,192,180,199,179,198,206,196,196,179,193,222,209,189,180,191,201,231,201,173,173,182,181,196,194,178,197,213,184,194,210,184,168,196,200,177,220,174,194,200,186,233,185,186,203,177,202,194,201,204,193,212,185,217,190,204,205,174,197,191,177,181,190,186,199,182,191,174,196,212,192,173,178,182,183,182,189,191,192,180,191,172,175,182,199,193,216,222,190,198,215,178,189,163,189,183,211,194,199,204,193,184,188,204,194,172,202,208,214,207,189,208,205,199,187,202,197,203,186,165,151,207,222,167,200,195,181,204,200,192,180,184,205,176,177,201,194,184,179,212,209,179,212,220,176,190,198,191,179,193,171,199,172,208,213,220,183,192,206,186,220,178,178,191,220,174,122,139,156,209,191,152,125,162,213,192,159,62,19,94,193,243,229,206,204,197,206,190,196,158,160,185,141,147,181,221,170,61,91,190,237,201,138,135,78,62,78,83,118,211,156,135,130,77,42,16,56,156,58,68,59,79,156,242,248,187,69,91,53,6,33,38,70,97,63,65,63,36,46,22,39,66,84,110,125,103,118,124,104,119,107,100,112,103,122,92,64,49,26,31,19,28,93,150,151,149,103,72,109,126,113,86,41,55,72,83,71,46,73,70,39,72,22,37,45,36,38,13,20,44,54,89,69,48,38,81,84,105,109,78,63,20,28,21,34,11,22,23,41,29,33,24,15,44,41,21,30,46,41,35,23,15,25,21,9,25,13,7,45,9,23,8,25,2,20,21,22,25,15,23,
28,35,41,69,71,53,24,17,33,4,15,27,44,14,14,21,16,27,10,9,26,34,22,31,38,23,21,6,18,29,11,14,16,9,17,21,19,34,20,8,23,25,31,24,56,47,73,105,127,138,163,116,81,53,23,22,21,24,26,22,23,56,121,138,146,110,152,149,94,44,9,16,28,102,141,99,100,50,54,46,57,64,92,74,52,21,12,82,96,83,103,155,125,107,166,176,80,21,3,20,45,33,52,57,38,54,86,72,52,36,44,23,26,35,28,13,30,24,20,38,38,23,33,8,9,20,43,10,19,26,35,20,26,14,27,26,27,23,40,3,32,34,30,36,24,34,13,17,22,15,88,64,63,124,114,100,55,40,21,28,25,13,20,40,46,46,12,34,20,9,6,21,6,10,16,16,41,44,26,60,22,1,42,8,14,58,10,33,21,35,20,12,17,33,43,50,24,29,27,36,26,32,131,199,80,78,92,66,80,74,79,72,76,48,28,141,231,219,239,192,228,198,182,184,193,198,227,217,220,227,211,213,223,119,13,13,6,16,21,10,9,19,11,6,21,5,29,13,8,20,31,11,30,9,11,0,10,5,198,217,214,196,191,216,208,214,209,188,171,188,215,200,231,215,239,237,194,199,182,203,200,190,185,190,201,164,202,186,211,177,193,216,205,212,201,219,189,216,206,200,195,208,192,189,204,198,217,202,203,204,213,200,189,220,211,195,189,206,223,181,199,204,187,217,210,210,199,188,226,182,205,209,197,210,216,198,203,189,207,203,183,178,208,187,179,207,217,197,226,185,201,191,180,192,206,204,192,195,198,195,188,197,179,197,205,198,186,183,201,185,162,205,205,201,182,184,199,185,203,190,197,197,196,207,201,200,207,198,187,203,220,198,212,229,182,192,224,184,204,190,215,166,185,223,233,208,218,215,206,222,199,190,193,225,228,192,200,155,203,192,197,176,194,195,187,192,203,192,188,226,205,214,192,190,167,202,188,167,188,195,198,167,187,211,192,173,220,209,196,173,193,206,196,200,198,182,187,202,201,180,200,187,195,200,199,191,187,187,199,196,210,212,214,188,201,214,201,172,201,196,180,220,193,202,194,211,170,199,194,197,190,179,195,210,209,184,203,193,204,192,164,179,211,201,185,187,209,190,191,199,181,175,230,198,221,197,188,173,201,200,192,208,193,182,223,198,195,180,177,196,193,189,179,206,182,187,199,199,190,200,172,193,162,184,205,168,100,104,143,168,184,122,135,218,233,187,129,37,91,196
,157,96,51,0,57,110,101,76,62,79,141,108,119,57,55,18,23,17,12,42,75,108,112,157,103,116,89,90,148,80,105,145,122,107,102,86,102,85,83,99,107,132,112,124,96,23,3,19,12,118,85,71,83,90,133,82,33,54,43,54,55,39,46,48,89,93,62,56,50,38,42,72,34,20,15,19,13,15,29,44,29,26,12,39,24,4,38,36,16,58,11,34,32,30,38,32,56,69,87,103,76,132,90,88,51,63,56,43,28,47,59,46,55,47,40,62,77,98,104,108,33,21,117,106,128,103,85,35,36,47,42,38,25,26,18,44,92,93,73,45,35,64,18,51,33,15,1,54,100,95,128,45,67,125,109,117,152,87,2,95,143,108,49,37,106,100,125,57,62,113,165,61,128,173,134,156,95,48,35,34,14,119,142,127,113,94,128,129,81,12,25,6,19,19,47,14,6,68,155,98,152,158,68,55,50,107,48,26,94,75,67,54,111,172,191,163,79,104,185,63,13,76,58,92,86,100,46,52,145,91,37,40,8,1,19,12,34,127,150,141,132,144,129,126,98,29,9,57,42,82,27,70,68,58,78,100,83,64,60,26,82,98,70,20,45,62,62,94,25,45,69,81,47,67,73,70,72,48,54,85,38,29,32,24,84,97,132,89,118,141,93,86,54,42,36,41,45,45,48,14,66,39,63,53,52,49,78,55,62,70,67,66,103,97,63,66,108,102,93,82,119,106,95,91,52,113,161,142,121,132,124,127,150,198,167,190,180,179,166,128,145,131,101,126,136,121,147,130,126,120,106,106,119,119,126,124,126,133,143,130,127,117,108,110,132,179,218,213,123,12,0,17,27,4,13,19,22,21,16,15,4,12,2,16,4,24,6,15,23,11,12,13,8,180,180,201,189,198,200,212,202,201,185,175,189,208,203,161,179,227,235,208,206,184,192,200,200,198,178,218,201,191,187,172,202,168,181,195,204,195,185,195,215,204,182,195,191,163,211,200,199,168,212,197,195,185,189,204,189,170,188,187,181,197,179,203,222,173,204,196,205,195,184,182,196,177,161,210,199,195,191,201,214,181,190,188,215,201,189,211,197,202,221,205,192,213,183,205,207,186,203,176,208,203,185,193,192,196,217,193,188,178,196,206,209,181,189,181,193,198,211,197,198,192,199,201,190,204,182,176,208,179,185,189,184,172,176,200,174,187,173,173,182,207,192,198,197,203,187,199,219,186,194,194,196,189,196,196,174,199,214,203,177,202,186,174,173,162,171,215,195,179,231,207,195,208,166,198,192,181,16
8,183,197,187,193,184,180,174,194,190,180,204,195,186,197,184,200,202,199,181,178,180,202,184,195,196,190,172,181,183,211,197,189,170,188,165,212,194,193,182,210,182,192,188,179,219,197,183,185,194,199,214,177,172,176,185,193,190,206,195,200,202,187,201,185,191,183,202,192,192,205,206,195,190,201,189,187,200,176,216,206,197,194,182,187,180,186,211,179,198,215,180,210,191,83,32,86,60,4,5,9,16,42,86,68,104,80,80,94,76,121,116,89,123,92,70,71,35,0,11,73,107,74,97,121,132,158,146,94,90,84,33,33,44,64,118,117,54,19,59,10,91,212,181,137,138,81,86,164,230,71,60,35,47,132,108,53,67,102,129,137,116,77,51,74,62,28,38,53,120,126,104,107,100,122,115,111,86,115,93,118,88,107,121,122,73,89,78,93,107,92,140,109,62,34,11,45,45,93,143,102,81,101,122,103,82,44,49,38,41,64,51,64,38,50,55,66,79,52,41,26,13,49,11,7,13,31,13,33,37,10,33,25,13,24,37,48,12,45,20,29,27,2,3,17,25,30,64,102,85,113,113,79,69,87,68,54,32,28,52,52,44,40,15,60,5,59,102,118,92,30,43,129,144,141,147,132,60,35,48,23,62,24,29,31,120,36,59,78,48,51,38,49,74,27,55,33,39,101,108,90,45,34,118,84,135,133,38,2,88,147,93,41,44,122,126,165,114,119,111,101,5,61,81,84,119,33,37,49,18,29,174,156,82,119,65,118,111,72,9,15,21,15,35,34,36,30,39,137,46,146,148,67,55,63,159,81,23,121,110,73,31,48,62,56,10,63,163,187,24,51,116,42,120,104,107,74,51,130,144,65,19,30,36,24,25,7,106,160,130,117,109,136,132,81,33,61,59,22,71,71,73,84,81,130,140,123,118,39,69,154,131,81,36,34,84,107,60,42,51,126,128,73,64,107,124,73,90,112,113,55,18,41,48,68,101,89,135,93,106,66,84,73,44,44,20,34,35,65,70,71,102,75,46,66,76,60,76,52,78,78,84,74,43,60,37,48,59,43,24,34,42,47,38,100,149,155,137,156,118,144,162,174,188,160,180,178,146,136,154,120,125,130,119,131,125,144,117,119,109,121,127,125,128,133,113,105,140,119,119,123,131,134,121,130,151,172,192,121,7,33,8,1,3,13,15,11,24,20,6,0,26,2,4,5,23,8,17,7,30,27,33,30,209,175,210,180,196,199,210,181,181,206,193,201,224,207,184,176,251,206,192,219,194,218,201,203,202,187,168,164,178,196,203,191,202,211,191,195,1
98,199,206,211,192,210,189,186,180,182,212,184,209,192,180,213,184,215,198,192,218,185,210,214,180,188,179,203,212,179,193,194,197,179,193,185,188,220,202,192,194,176,201,184,194,187,204,177,207,197,208,202,204,201,214,194,186,196,186,193,194,193,207,204,205,189,190,196,193,221,172,206,193,214,203,183,175,197,205,200,199,208,194,199,202,191,178,201,172,174,189,181,196,180,199,203,204,203,203,187,187,203,189,196,191,203,196,193,195,179,187,199,188,207,181,208,185,200,186,180,178,226,176,179,176,194,185,197,197,224,205,181,182,187,165,197,177,219,174,190,208,203,180,178,191,191,182,188,174,210,204,185,170,186,193,219,200,177,186,193,170,211,187,202,168,177,185,186,188,185,196,187,186,184,201,213,207,186,192,184,200,191,213,204,201,200,212,191,180,205,190,189,190,179,189,189,197,194,185,189,177,178,214,194,201,182,207,190,188,213,196,155,190,193,187,197,192,205,166,188,185,206,194,212,207,190,195,199,222,204,200,199,200,216,232,137,59,119,118,60,27,26,33,17,100,70,84,58,87,112,65,82,157,153,104,53,50,71,53,26,65,66,94,136,119,150,154,147,110,165,147,124,79,89,92,80,139,103,43,56,35,13,72,118,68,109,82,50,30,161,211,32,42,16,31,101,72,117,127,128,133,83,46,79,62,163,83,50,77,121,148,133,101,112,100,117,103,116,75,101,92,113,107,138,121,80,67,81,111,92,113,62,129,54,24,13,16,62,112,112,166,173,113,89,61,123,94,45,14,7,42,46,24,35,36,46,73,44,57,49,45,43,31,51,12,45,28,13,32,16,23,34,10,26,27,26,7,19,40,46,40,32,10,22,38,45,36,38,42,61,90,82,88,71,40,57,58,54,35,20,19,46,24,38,25,51,46,63,105,116,81,36,17,117,97,88,110,136,54,26,47,39,50,47,52,111,145,157,182,140,122,132,138,145,150,115,34,26,21,93,75,44,9,68,132,105,141,93,57,8,95,147,97,29,45,115,143,121,119,131,131,143,15,25,60,85,137,67,7,36,13,42,169,168,91,82,84,87,93,49,14,15,2,26,41,21,12,21,75,112,107,130,107,43,56,156,232,92,2,95,155,75,45,34,86,46,67,192,242,154,0,62,114,41,142,120,98,80,130,184,106,46,34,34,7,39,8,25,126,152,116,76,86,117,101,72,48,39,48,59,74,74,109,48,72,91,88,91,99,47,66,145,97,47,32,24,97,
103,80,28,23,116,126,40,63,116,124,123,170,156,94,54,15,42,74,82,59,87,67,40,52,18,44,59,14,31,14,47,31,53,27,65,96,130,111,118,132,59,50,82,105,116,117,48,71,37,51,72,48,30,29,39,15,32,108,131,168,148,118,125,138,159,156,153,156,160,150,151,138,142,152,143,166,160,136,153,152,124,143,124,136,152,147,146,133,127,124,113,136,102,126,125,140,124,130,127,135,151,149,105,7,2,16,22,36,14,15,20,21,18,0,39,3,4,2,29,7,3,17,21,0,17,6,20,195,176,170,184,210,213,179,216,212,189,202,177,206,193,188,151,207,160,182,206,203,201,172,209,206,194,175,172,198,199,192,169,182,182,192,189,190,185,199,226,199,183,191,177,190,185,197,193,207,193,229,200,204,187,185,204,193,204,208,205,216,212,203,210,176,180,196,196,193,208,192,197,211,188,184,191,212,205,206,191,193,202,208,195,169,184,187,197,187,199,176,200,201,188,208,177,182,178,191,196,178,180,181,205,188,189,189,174,191,178,206,188,176,188,204,206,193,202,201,180,187,191,195,190,208,209,218,203,183,192,180,209,210,204,189,188,178,194,202,185,185,173,194,187,192,188,177,178,181,193,179,187,192,191,188,199,192,194,212,145,206,204,177,197,202,221,207,175,189,193,177,185,188,186,198,203,197,203,190,155,193,192,169,179,208,173,206,184,204,181,186,202,208,178,198,188,191,168,192,208,209,185,188,198,188,199,180,211,180,201,193,196,195,196,203,198,196,194,194,184,200,161,203,215,205,210,190,178,160,189,179,190,184,185,190,205,184,210,196,191,185,191,185,207,182,196,192,168,189,197,199,204,187,206,173,169,184,188,203,194,181,198,195,205,184,192,205,195,183,223,252,157,149,205,215,193,162,105,64,34,85,69,95,59,81,154,112,103,113,139,109,28,40,29,21,19,17,50,109,106,97,101,96,86,162,224,196,175,134,148,167,134,167,88,106,214,81,37,79,46,68,150,132,24,69,214,168,4,14,6,102,134,107,128,132,105,96,99,67,132,153,166,84,54,80,98,133,109,97,102,121,102,98,80,69,76,80,111,121,113,84,90,106,85,95,77,102,122,40,8,23,34,71,131,130,129,140,139,116,80,112,105,44,53,14,17,19,32,47,29,47,36,45,33,35,56,61,43,35,69,39,39,17,20,9,8,17,16,22,30,32,28,30,29,3
8,21,39,29,18,45,44,19,31,51,39,55,54,55,60,51,57,53,28,23,27,51,39,41,28,47,38,30,50,65,95,98,107,34,19,31,59,30,84,49,38,54,39,36,46,67,65,118,232,175,188,224,182,227,223,226,204,150,43,15,16,80,99,40,21,58,111,100,105,119,85,20,79,132,87,16,57,99,147,68,91,140,160,147,15,57,103,93,136,62,10,34,36,65,189,149,92,84,75,88,74,94,4,15,21,30,11,24,30,13,46,90,69,118,86,85,116,230,209,42,13,103,160,153,111,99,139,143,218,244,187,112,7,52,112,113,111,63,54,134,182,201,98,16,36,29,5,33,26,62,155,145,88,90,79,52,82,63,31,14,42,63,80,94,92,69,90,62,31,91,104,51,57,140,62,52,10,24,124,90,69,32,37,102,80,31,75,117,65,105,102,117,57,32,8,17,42,80,103,55,73,72,54,37,44,35,39,25,26,19,22,40,44,78,37,119,113,93,66,24,17,89,108,112,85,33,36,40,55,53,47,9,35,52,68,139,167,158,174,156,137,151,139,124,142,148,129,130,149,144,165,134,165,133,151,142,140,155,149,150,156,144,149,158,175,158,123,130,128,122,139,101,122,130,148,138,116,113,118,122,149,111,37,4,18,16,12,40,11,30,12,13,19,17,22,3,27,14,10,12,29,8,28,4,11,15,187,211,179,213,200,214,200,200,214,208,207,204,199,226,240,170,163,150,152,185,167,194,209,206,197,213,204,191,211,203,178,198,220,208,206,196,199,211,188,197,194,194,200,201,198,191,216,197,192,190,216,205,187,195,198,197,192,168,207,186,196,178,205,186,206,208,194,173,179,184,172,198,205,202,230,191,186,208,202,189,217,189,202,206,209,196,194,196,204,177,207,186,176,210,185,164,178,201,180,171,174,169,193,171,200,221,201,200,207,187,206,200,207,174,178,210,186,214,193,183,200,197,197,206,196,200,186,188,194,174,216,186,168,178,207,178,194,217,213,196,186,194,190,181,209,203,196,191,200,187,213,203,203,201,183,189,189,190,174,195,198,180,200,184,172,187,180,217,191,187,177,177,179,193,206,194,194,183,198,181,203,205,186,193,204,188,178,206,214,195,203,187,188,213,215,216,195,181,197,198,174,195,217,187,191,201,160,187,188,186,203,192,192,216,207,201,183,202,220,178,174,193,178,191,166,173,167,200,205,193,197,205,206,184,179,199,199,176,210,197,209,208,184,199,205,163,1
77,208,168,190,199,192,192,179,192,183,192,192,189,194,200,191,168,193,161,188,203,180,192,253,235,149,171,218,222,243,251,250,199,147,107,77,103,40,57,133,124,77,48,98,114,72,51,66,27,35,18,23,104,25,22,11,1,66,131,186,129,113,96,106,106,67,74,16,106,180,97,175,185,53,125,209,141,64,144,219,106,27,17,119,217,172,156,148,127,84,84,43,93,124,173,154,68,58,82,78,134,87,99,93,113,99,90,107,57,82,106,95,113,61,85,94,110,82,103,117,66,38,30,14,20,72,118,120,131,134,129,108,80,92,110,72,45,62,34,30,34,42,52,39,23,3,13,35,35,59,39,47,46,85,78,59,63,28,51,28,13,13,12,3,27,31,43,33,16,16,30,24,28,34,33,14,31,72,83,90,80,102,80,106,117,97,81,60,65,88,128,53,76,56,78,87,62,104,80,104,90,58,51,34,76,95,87,58,73,40,29,84,66,63,92,150,165,110,111,65,98,128,93,120,106,81,75,55,50,69,120,57,24,114,121,81,108,99,73,85,101,131,106,25,81,116,115,49,24,109,133,148,58,22,102,99,84,20,24,54,54,119,186,62,57,55,52,95,66,68,42,39,36,19,28,25,15,19,12,47,45,77,94,116,107,176,103,63,20,16,78,166,154,191,127,199,171,168,126,46,0,31,44,87,104,48,101,109,169,126,40,24,8,23,54,26,47,90,165,118,51,56,34,52,57,57,44,27,52,47,117,94,127,101,93,23,13,91,117,11,71,162,68,67,60,78,131,95,65,34,60,130,96,9,99,120,41,31,49,82,65,28,26,22,73,127,128,123,126,105,64,62,92,89,91,56,29,37,49,82,85,84,67,104,80,53,63,33,56,112,101,107,34,28,39,45,80,73,50,25,7,71,157,167,148,101,75,135,139,159,115,168,131,136,143,137,145,180,155,210,173,142,138,138,113,162,136,156,170,141,158,153,152,157,127,135,152,159,129,110,123,128,142,126,129,97,107,134,128,99,16,3,4,9,28,16,18,12,15,4,6,16,2,2,16,22,7,39,10,25,26,8,15,8,183,211,221,223,180,201,205,167,222,193,197,182,222,220,211,149,136,121,143,204,187,230,186,184,194,185,192,191,171,202,204,185,180,191,194,205,171,208,185,196,201,203,176,204,199,197,210,216,201,195,185,214,185,184,193,226,191,195,204,217,177,196,201,214,196,197,194,181,202,190,204,203,199,192,175,207,201,207,214,209,207,188,186,192,185,192,200,179,203,187,181,185,184,191,189,203,193,190,172,204,216,191
,205,189,201,201,207,190,196,189,189,186,208,181,190,194,184,178,189,203,193,178,201,186,191,178,219,190,183,186,188,190,208,197,176,194,167,202,184,165,202,200,194,198,187,199,192,210,210,202,207,184,204,181,189,181,186,179,202,219,184,172,175,171,196,191,191,195,198,193,186,159,178,218,175,186,183,205,200,215,195,188,219,181,204,201,190,194,192,199,189,225,174,173,204,208,205,181,190,206,174,165,191,191,200,192,186,191,190,216,207,187,202,208,196,187,186,182,212,199,183,213,170,187,196,202,197,207,193,185,170,192,191,192,176,220,208,213,176,205,178,173,194,186,174,186,177,193,186,200,196,177,198,188,202,201,179,156,194,194,209,202,214,180,199,184,202,215,214,249,217,124,157,182,227,232,243,251,240,117,94,67,81,94,95,115,142,128,59,39,104,130,153,105,34,56,74,113,145,107,101,85,75,103,149,116,105,49,44,93,83,60,90,67,64,110,102,207,117,52,146,171,62,78,137,195,131,66,96,203,201,142,123,84,87,68,18,4,49,51,135,171,91,70,90,82,99,103,91,101,94,118,104,89,90,101,100,83,108,85,109,126,104,104,101,79,28,22,9,31,71,123,121,107,95,118,124,98,130,85,58,19,10,107,95,46,26,9,19,26,39,22,36,40,41,40,27,36,63,77,58,68,40,38,45,51,4,10,7,15,5,56,18,23,20,30,40,55,14,14,43,30,54,116,175,173,173,149,136,165,180,184,150,168,173,202,203,162,166,180,163,170,180,163,146,153,147,140,181,159,172,160,194,196,129,164,167,144,174,165,193,184,143,86,64,67,82,95,91,89,93,131,130,141,172,183,171,158,182,158,149,145,160,162,161,162,169,150,132,141,158,113,135,128,90,115,106,156,114,132,153,123,121,102,128,113,128,196,166,133,130,111,105,108,109,130,62,65,81,64,80,67,86,60,64,70,52,108,137,120,113,69,42,43,66,41,47,97,130,142,140,131,107,114,45,12,53,26,39,43,97,86,84,109,77,53,51,28,35,47,46,49,50,143,176,60,77,43,61,72,75,87,44,29,19,95,70,41,126,96,28,23,54,125,112,66,121,116,105,105,91,91,129,97,112,73,84,124,92,27,82,106,43,39,57,98,62,56,20,34,122,155,181,134,174,158,159,152,206,184,161,88,45,32,52,99,140,94,103,106,75,125,140,101,71,95,96,85,14,35,33,42,99,51,51,61,35,128,182,169,79,38,
62,129,149,174,149,138,135,150,113,115,135,144,133,130,154,141,139,123,142,148,130,147,150,140,136,140,141,129,126,139,153,123,142,157,154,141,146,125,120,114,117,151,164,128,21,22,0,4,6,4,16,42,25,7,25,22,6,5,14,18,3,5,6,11,8,14,16,23,209,193,180,191,186,195,193,193,204,206,200,191,183,201,214,175,194,189,171,207,199,171,180,188,196,206,203,217,198,188,223,211,189,195,197,188,190,206,186,193,193,181,191,194,214,217,197,180,184,197,202,196,200,182,204,200,190,194,182,197,221,176,218,176,194,186,188,214,180,183,195,184,192,205,178,185,202,192,186,190,186,205,194,182,182,214,160,183,216,186,175,194,190,200,207,189,202,199,194,181,177,155,203,208,187,181,188,176,164,208,201,224,192,190,180,164,192,193,172,197,193,206,192,171,172,203,174,206,185,188,194,205,210,178,189,197,196,190,188,192,208,168,199,188,186,213,203,174,200,201,180,217,197,179,193,168,205,187,193,171,195,194,179,190,196,172,188,183,189,182,178,189,191,184,218,198,178,198,183,221,192,224,230,219,184,193,215,213,204,228,234,221,146,123,158,196,215,181,217,172,170,200,195,185,184,205,176,206,190,169,190,192,177,190,190,222,164,212,181,194,190,172,183,171,189,181,186,196,194,210,206,155,164,189,196,168,217,184,193,205,174,193,192,174,197,182,202,201,188,205,200,165,203,199,220,198,205,183,208,189,211,208,217,197,213,210,194,199,218,240,184,135,150,180,221,217,251,200,162,112,99,49,74,92,99,121,140,147,121,103,96,164,173,151,51,35,130,163,209,224,219,218,185,155,141,114,105,65,120,106,105,177,147,153,140,79,30,79,49,70,187,100,43,93,188,147,59,75,191,238,176,123,117,72,60,27,13,48,61,64,125,145,37,65,75,97,76,77,92,115,115,113,105,109,79,88,98,82,111,91,120,108,133,113,81,37,14,20,73,77,112,137,151,112,112,112,106,64,85,44,69,10,103,200,130,36,2,19,1,43,9,45,32,14,62,54,33,42,61,75,84,64,61,54,50,31,38,37,37,41,11,30,27,17,13,23,30,48,10,18,48,29,59,126,155,98,74,56,87,89,83,93,82,67,66,69,120,94,98,139,100,89,108,113,62,84,93,119,116,136,117,144,136,103,130,132,113,122,149,152,128,108,83,103,104,109,107,117
,96,107,145,162,169,159,162,151,122,168,168,177,136,167,123,132,128,135,131,143,107,150,157,145,178,154,131,152,110,139,143,174,149,167,139,174,200,196,191,172,167,125,146,188,168,175,142,177,178,172,174,168,172,188,178,172,199,162,162,125,144,154,140,142,142,152,175,126,152,158,132,138,151,142,135,116,122,141,124,134,121,133,131,132,129,157,83,79,115,141,129,142,154,149,170,231,182,117,110,102,148,148,150,149,103,81,123,106,57,112,89,86,95,55,113,130,97,95,118,129,101,144,90,111,104,93,101,83,100,130,86,54,107,77,17,32,78,87,62,54,22,62,146,173,140,101,114,100,121,130,124,120,158,78,19,28,48,52,106,110,88,80,54,121,128,133,75,87,81,49,11,57,24,68,80,44,76,42,13,46,145,142,101,91,109,157,138,148,167,126,154,129,127,135,138,114,131,118,100,152,139,160,138,120,142,125,132,106,115,148,150,127,134,147,136,149,132,153,166,145,131,142,146,156,144,147,136,106,18,4,10,4,35,9,3,9,10,16,13,43,8,0,30,6,16,25,2,15,19,4,34,23,202,200,187,215,198,198,187,189,202,193,209,190,187,182,199,212,243,221,200,190,208,205,204,195,205,189,185,191,212,206,210,200,190,190,213,202,184,203,193,206,175,204,190,183,194,187,175,202,203,201,204,192,187,193,204,210,193,175,222,210,178,226,176,209,219,193,220,179,217,207,205,209,200,188,193,183,202,182,182,207,184,175,182,201,188,189,190,187,171,195,198,191,203,196,166,193,188,174,188,203,175,183,189,176,181,193,194,169,196,188,192,200,209,204,225,208,171,196,183,178,179,183,176,184,185,207,176,182,170,197,177,210,213,169,175,184,188,196,188,196,203,205,189,180,191,216,213,218,215,199,177,176,205,162,186,201,192,190,197,171,177,196,208,177,209,184,200,191,218,181,200,187,190,185,206,226,192,208,218,187,194,200,197,211,208,216,228,199,207,208,214,198,127,111,162,195,202,190,190,183,184,192,201,205,185,189,187,173,199,192,190,190,187,175,203,200,183,181,183,183,190,204,181,188,198,185,213,198,220,182,191,198,219,184,200,204,184,160,194,192,194,182,190,187,195,210,186,180,180,180,213,202,216,217,198,176,203,193,203,179,189,217,195,201,190,214,197,174,2
41,229,134,117,129,140,183,199,199,187,111,77,71,65,80,113,124,98,108,103,137,112,156,161,139,149,69,10,8,12,68,142,177,190,170,174,151,150,121,74,159,147,134,195,151,243,160,66,105,101,48,100,217,84,20,148,176,138,51,145,219,189,110,61,76,37,14,42,19,94,76,70,160,91,56,56,58,109,128,80,120,109,97,112,87,101,82,77,73,54,74,103,140,80,148,91,62,18,21,62,102,110,127,126,139,116,109,97,62,49,82,68,99,150,216,203,80,28,49,9,23,19,25,19,24,30,51,69,34,65,96,102,83,76,42,43,29,48,24,23,10,54,33,3,57,8,59,40,21,16,27,9,14,36,52,112,111,112,91,46,87,55,76,61,86,59,63,84,55,53,81,68,65,61,59,64,39,63,84,59,56,49,53,67,55,75,74,79,70,88,31,62,37,75,43,55,63,43,80,78,63,63,36,61,72,60,58,69,56,43,36,57,59,63,51,39,49,21,73,17,55,52,55,63,37,46,55,69,80,57,39,84,49,78,63,82,66,47,67,68,100,76,59,120,96,94,80,119,92,109,78,99,87,116,118,103,118,107,121,97,92,98,119,119,100,152,155,133,133,163,112,117,107,102,104,113,115,137,149,144,156,133,135,108,98,130,113,105,176,162,159,182,181,162,161,154,168,137,142,151,150,175,172,151,171,182,173,176,149,126,147,113,145,166,167,172,118,167,154,132,158,119,128,127,142,121,153,103,142,168,113,128,177,162,143,138,195,180,118,150,138,161,197,163,123,74,85,65,73,111,77,80,106,102,65,44,58,105,122,143,72,57,78,105,129,96,97,80,27,51,80,62,75,82,84,77,64,58,55,23,70,134,124,128,141,134,128,159,141,148,139,147,128,132,141,148,128,116,124,121,147,132,131,111,134,126,95,138,146,157,157,150,148,154,145,166,136,138,136,128,123,147,188,199,159,143,155,112,30,11,17,6,8,7,35,11,6,8,4,7,3,5,14,11,8,5,5,20,19,28,7,7,195,184,198,203,218,182,191,186,189,212,174,181,190,188,213,207,245,231,206,203,197,212,197,205,202,191,206,205,185,216,209,211,203,214,215,210,200,212,221,210,206,195,208,194,183,191,180,187,203,187,201,167,200,183,197,204,208,187,209,171,217,194,169,208,202,208,202,196,161,189,192,188,204,171,197,202,199,196,184,172,194,207,195,202,207,199,198,201,210,200,194,203,182,179,206,196,195,191,218,210,201,159,218,155,211,186,217,203,200,184,199,209
,195,201,185,217,198,181,191,198,200,197,191,180,185,185,181,189,203,180,178,178,194,197,181,214,174,194,198,213,203,184,235,224,214,193,171,203,189,172,193,183,181,204,194,180,188,186,166,186,233,212,202,188,208,209,199,196,221,199,193,222,231,214,196,184,226,216,213,215,184,127,116,161,175,190,195,167,129,105,142,144,112,94,143,183,228,211,204,174,209,217,197,208,182,186,180,201,196,195,188,174,198,200,187,181,204,181,200,189,194,193,199,205,192,187,209,206,182,200,181,182,221,191,187,191,180,182,204,219,203,177,202,190,202,203,210,220,209,217,189,167,188,201,167,183,153,140,156,152,165,152,140,127,139,130,136,197,237,162,131,166,99,107,90,142,141,165,137,59,87,55,74,113,131,131,130,142,84,106,182,132,143,157,106,21,20,48,43,4,2,59,100,107,109,117,103,107,223,155,92,93,98,158,74,114,229,155,90,152,186,94,94,159,190,184,152,153,146,90,85,61,50,15,18,23,73,119,87,87,120,56,54,85,64,111,140,112,101,82,85,94,94,65,95,60,59,113,100,119,135,94,41,24,3,46,30,84,126,134,141,143,121,103,109,60,100,78,126,128,123,155,172,131,53,45,44,14,30,14,18,38,27,24,20,75,69,113,97,98,92,44,21,27,23,18,21,26,25,17,23,22,43,29,0,34,38,32,31,40,40,33,25,117,89,190,168,176,160,159,141,127,124,154,149,150,123,128,146,100,135,116,126,88,92,121,109,111,121,132,128,116,137,135,76,155,136,155,96,150,150,123,135,129,144,149,150,170,131,107,139,118,133,90,92,125,90,95,94,107,91,118,106,98,69,127,83,98,106,77,103,76,98,81,72,80,92,104,75,78,77,74,74,51,113,85,65,82,94,55,43,36,57,55,63,52,73,43,57,63,74,54,72,78,86,70,73,60,54,57,42,42,89,47,76,47,60,48,47,52,42,61,47,44,63,68,95,69,81,54,62,33,50,59,65,56,57,43,27,81,64,53,34,53,23,40,58,64,42,69,75,65,78,111,97,115,113,118,78,96,98,102,77,102,118,103,114,117,80,109,104,103,110,113,110,114,125,126,92,105,137,139,138,172,160,131,134,161,162,167,137,132,118,77,88,77,92,89,113,98,102,98,147,137,106,131,127,123,128,99,130,138,115,113,106,116,108,139,127,131,114,153,103,136,169,113,141,107,115,136,154,124,124,148,145,157,176,189,168,157,126,124,129,1
09,137,116,117,106,113,119,132,140,117,127,125,117,129,126,142,137,150,149,151,146,145,145,135,156,144,159,188,195,175,157,168,125,0,23,7,42,4,32,29,3,13,10,26,13,8,0,3,25,44,23,2,30,19,14,8,4,189,195,206,184,193,208,198,223,179,188,193,196,181,193,185,231,252,221,193,219,203,224,209,229,219,231,233,233,221,236,211,212,227,193,186,212,206,179,189,185,186,195,200,189,220,201,204,209,216,186,194,216,215,221,183,188,176,182,197,176,199,198,209,195,197,209,216,211,204,199,191,223,183,176,194,200,180,179,198,196,192,210,190,194,192,210,185,203,212,208,195,193,203,212,175,193,187,196,219,173,198,195,176,198,155,205,190,195,200,185,179,182,184,162,166,143,199,195,193,204,214,201,205,208,205,221,195,191,192,195,209,214,212,182,231,192,182,228,214,195,202,195,171,145,147,168,159,172,189,187,203,219,190,174,213,183,217,229,217,213,223,192,220,198,226,232,227,223,227,224,225,197,231,222,196,156,163,182,206,211,158,122,84,81,134,124,113,93,81,66,61,90,94,133,123,175,192,208,239,202,250,216,215,215,198,201,186,187,211,197,182,187,192,211,207,191,203,208,181,219,202,212,182,201,198,211,207,190,180,194,204,193,170,185,200,177,199,187,177,198,192,203,217,181,207,180,184,201,198,200,130,102,79,161,152,139,129,109,123,113,104,108,63,85,54,88,88,176,209,89,127,139,93,108,105,125,131,116,165,104,100,50,74,111,146,158,121,157,99,46,116,142,126,130,118,59,12,31,37,25,34,14,32,4,36,52,125,129,218,169,80,55,62,43,25,183,244,101,133,164,125,153,169,201,151,194,202,140,71,57,56,38,8,15,65,96,140,137,66,83,140,39,54,41,102,128,55,77,128,70,100,78,58,61,73,86,75,103,104,135,89,39,43,18,23,55,89,103,98,127,104,107,119,97,54,119,106,119,118,133,113,96,96,79,42,30,63,40,55,14,21,18,28,26,33,47,67,89,58,54,55,31,31,38,33,27,21,36,41,25,4,51,35,23,17,20,26,22,52,37,33,85,35,57,74,85,53,48,60,68,83,87,66,77,78,45,49,82,88,76,64,80,84,72,77,65,79,84,130,91,96,107,102,133,109,93,103,141,117,117,134,137,108,101,122,135,141,99,102,86,114,112,120,134,131,97,121,128,114,121,93,122,135,156,120,109,114,141,
148,126,136,131,90,124,130,127,119,121,126,130,129,177,140,137,109,116,140,110,133,122,111,122,108,104,98,109,122,130,122,125,105,138,137,135,115,127,112,143,147,126,111,101,110,91,103,115,100,72,115,108,110,115,106,106,90,126,99,133,117,87,102,114,115,94,92,114,76,75,59,92,66,66,83,91,47,92,83,56,58,66,65,64,70,66,53,53,81,71,71,64,55,74,78,84,68,49,67,73,52,70,47,57,58,52,56,69,70,56,43,52,59,61,69,80,38,49,54,72,58,54,69,63,69,83,78,50,86,36,52,76,83,83,70,84,65,87,76,84,68,92,85,92,74,92,90,84,99,121,128,117,103,119,124,109,142,133,132,140,134,108,99,107,103,97,113,104,118,141,136,129,102,109,98,100,113,107,102,108,124,104,162,123,127,142,123,139,121,128,137,134,130,138,145,142,164,165,170,168,158,139,163,177,173,151,142,106,14,4,27,25,3,22,10,1,14,24,17,20,0,2,16,20,26,8,16,5,6,12,11,3,186,192,181,210,208,174,204,183,212,189,189,204,213,183,175,215,255,223,153,189,183,181,237,219,228,237,225,231,237,229,227,221,231,199,174,180,183,223,191,197,201,195,213,218,193,218,202,220,162,186,191,203,191,199,193,195,210,180,216,199,192,182,201,206,197,189,195,183,178,198,212,208,201,203,212,208,187,169,190,200,190,174,204,208,207,195,211,219,194,201,198,214,182,201,178,198,162,193,201,209,186,191,159,209,211,189,205,188,182,205,183,137,91,104,87,128,183,208,217,218,213,241,239,213,198,182,208,242,206,206,208,205,190,212,200,203,230,220,205,185,220,203,127,152,112,127,98,149,210,214,211,179,193,191,234,190,170,199,149,166,173,184,172,185,208,194,176,155,173,179,166,169,134,171,144,76,97,114,128,104,124,122,104,91,101,86,92,114,98,97,95,111,144,101,98,141,92,180,184,193,183,186,193,180,193,223,191,204,195,207,191,179,193,174,218,199,200,193,198,191,217,227,203,204,216,183,190,218,213,209,178,201,194,187,206,228,199,193,218,179,189,194,180,197,198,143,137,149,126,133,129,96,85,129,146,132,142,120,106,97,116,89,76,60,102,62,76,194,201,90,89,100,113,112,77,163,153,119,168,152,106,55,85,114,122,146,117,91,95,57,106,113,74,105,142,111,112,63,111,48,67,30,35,19,37,75,66,103,161,1
94,125,102,164,73,50,161,166,60,110,130,136,160,187,187,164,170,202,138,97,57,49,35,63,83,70,97,129,128,45,76,100,25,64,91,100,104,57,60,86,66,25,68,51,33,57,65,79,103,99,122,57,4,27,41,80,109,123,122,114,116,114,96,97,103,102,132,119,94,114,98,116,88,53,52,69,56,31,49,14,20,14,20,0,36,44,24,21,53,38,41,29,33,30,20,38,20,65,31,14,13,12,49,33,46,43,33,32,34,14,52,52,29,19,51,19,50,53,47,34,61,55,45,24,29,41,23,22,40,40,31,41,25,62,48,37,40,51,45,67,44,44,34,60,43,59,27,15,33,63,67,29,35,35,18,16,29,64,41,55,63,45,58,69,44,29,40,41,27,29,58,72,28,66,56,53,46,33,51,81,37,46,33,47,55,33,42,39,43,58,55,40,37,52,42,60,43,77,54,49,57,36,71,59,65,53,73,54,62,103,67,56,81,98,76,83,100,83,88,72,96,106,89,90,80,96,62,102,80,119,108,108,111,108,129,117,117,111,118,117,127,128,135,151,114,78,137,136,114,133,111,119,132,138,139,133,113,117,103,119,86,136,127,111,90,114,115,135,119,129,130,123,127,117,110,125,103,111,112,67,123,122,122,119,131,90,108,104,116,55,77,113,113,98,114,95,128,118,90,75,114,80,113,88,92,94,102,78,112,102,88,90,67,74,53,110,57,69,56,66,75,60,76,74,96,73,100,107,88,72,88,80,84,78,68,121,82,75,74,42,34,75,85,52,53,47,48,58,59,71,64,68,114,121,105,97,88,76,84,85,98,92,119,140,149,155,143,139,135,119,111,148,136,120,135,132,150,177,171,155,158,171,143,173,153,113,117,26,17,13,2,27,9,7,14,13,5,9,7,0,0,11,7,13,21,10,29,0,25,4,5,172,192,197,193,204,161,207,186,218,191,214,196,219,204,197,244,251,223,160,155,139,95,112,131,119,127,144,124,129,131,153,192,182,211,202,206,201,178,183,211,206,184,205,176,210,207,184,178,195,191,176,183,201,203,204,192,171,210,212,182,185,184,204,189,201,206,184,191,188,211,195,191,186,186,193,180,186,184,194,177,195,184,211,182,177,206,196,198,200,178,207,190,204,190,189,192,198,179,212,197,199,197,209,185,198,196,209,186,206,232,203,121,96,100,93,109,149,197,200,201,221,192,160,166,190,195,193,199,153,151,232,218,169,131,135,140,152,218,199,227,212,211,138,111,130,136,133,149,198,226,205,201,190,216,215,120,82,85,61,123,169,146,112,
51,234,83,57,58,23,54,27,17,42,28,33,45,49,47,89,114,114,116,106,103,99,122,123,126,103,97,82,138,130,123,124,114,112,108,136,113,100,98,108,129,89,125,131,102,135,108,99,106,105,93,109,101,103,99,106,84,94,82,114,146,102,113,121,144,140,142,121,123,99,122,79,88,100,74,87,110,101,91,123,104,120,84,91,81,123,92,123,98,101,104,117,116,97,106,86,113,112,75,90,104,105,96,99,88,88,100,92,93,75,94,98,116,109,106,100,108,115,123,148,131,104,91,83,105,92,69,91,95,130,230,183,149,168,222,208,155,106,98,201,245,216,174,148,169,167,128,142,144,139,146,152,136,127,124,116,137,94,118,89,88,83,100,62,81,120,120,78,78,82,91,134,119,81,95,102,72,75,87,129,122,103,122,87,73,73,86,115,99,86,139,125,119,124,120,101,121,162,151,165,170,172,183,169,177,181,171,160,160,171,169,161,159,172,182,166,134,74,121,115,82,113,125,145,99,122,122,113,107,39,70,92,118,112,76,81,93,86,71,105,105,97,104,126,118,125,114,127,125,196,242,214,246,251,240,166,160,234,205,81,51,185,217,175,70,37,5,127,189,157,196,215,123,66,123,146,125,119,150,164,175,131,141,181,159,174,186,200,149,216,137,59,16,9,59,38,77,39,47,37,32,16,36,22,24,33,33,45,19,54,50,61,24,44,37,24,21,20,9,12,6,24,33,25,11,35,26,49,24,17,36,70,67,84,96,110,88,92,81,60,92,101,91,88,84,142,90,51,31,12,54,85,88,112,90,92,108,106,101,76,96,127,119,134,149,32,25,37,71,86,65,115,73,84,109,116,104,52,86,103,103,52,40,56,45,52,30,39,23,31,23,24,40,59,59,37,15,6,38,87,37,64,87,100,49,82,64,45,49,56,62,52,30,43,63,79,50,49,45,72,68,51,39,61,51,49,72,48,64,74,36,62,34,29,46,39,51,63,34,31,21,14,4,6,22,25,16,35,44,20,51,45,43,76,74,69,64,45,56,50,47,41,19,55,27,40,38,70,44,37,29,41,55,65,47,31,75,54,74,39,50,50,30,11,2,0,24,13,13,31,29,24,19,11,53,40,40,56,32,38,44,15,48,63,55,62,72,61,48,48,55,21,52,49,27,12,46,37,24,10,55,24,52,45,75,57,28,62,70,43,41,75,69,63,41,46,59,69,29,39,9,16,9,17,28,41,23,17,19,30,18,80,88,59,65,63,50,68,49,67,81,66,63,53,76,51,76,33,74,44,47,38,47,76,45,47,49,59,40,75,52,62,69,29,32,19,19,13,26,11,34,3,8,20,46,50,63,59,42,69,
56,47,47,21,41,54,71,56,41,65,34,51,47,76,62,77,60,64,42,77,111,154,146,136,108,119,68,63,50,92,114,54,97,127,136,97,123,79,97,131,84,59,79,81,108,94,105,115,132,116,150,135,163,138,118,95,101,132,132,150,187,142,121,76,136,91,102,83,123,116,120,120,148,113,87,64,81,38,69,60,73,24,34,16,34,8,0,13,16,7,0,17,18,23,4,27,14,16,14,25,5,33,4,42,32,139,139,121,110,121,133,104,123,140,105,103,129,167,160,220,227,219,74,62,57,2,60,4,5,38,11,29,30,61,54,80,118,110,78,116,109,141,110,110,99,124,102,104,105,118,133,127,123,121,134,152,117,91,111,132,123,130,149,107,126,101,104,103,126,93,94,129,101,104,107,104,91,115,120,114,125,122,109,120,126,110,128,154,118,128,128,87,85,96,79,104,117,102,113,107,125,86,80,76,99,98,97,99,57,63,93,104,135,104,109,129,103,106,87,112,127,108,139,136,117,124,114,129,96,84,105,114,117,105,132,101,102,110,99,116,101,123,95,51,91,77,75,45,36,95,160,158,137,155,142,103,105,11,59,125,211,230,244,211,199,142,127,111,85,91,119,129,98,125,89,87,83,99,109,119,110,104,158,94,104,86,114,96,108,67,66,108,110,77,112,92,92,86,108,127,130,147,109,96,91,70,68,115,109,139,152,148,146,102,146,121,147,173,171,172,124,157,147,166,148,182,124,158,156,174,137,145,155,160,155,158,148,135,107,112,107,139,159,123,95,103,119,65,66,86,80,107,114,86,100,79,68,92,83,50,101,80,64,86,108,80,109,66,152,237,244,233,230,248,226,134,182,206,148,58,47,100,198,171,50,26,4,125,254,192,162,176,139,119,124,172,140,130,154,184,143,101,170,171,168,169,152,111,44,81,77,31,34,10,56,78,52,40,11,11,31,58,42,32,24,63,31,44,47,32,35,46,43,39,30,16,6,20,36,19,15,22,30,40,28,36,29,36,30,56,69,74,86,93,92,82,40,68,69,98,92,87,104,119,96,76,23,47,25,17,63,118,115,93,94,107,102,117,92,90,114,123,89,167,177,70,60,48,61,104,69,94,106,99,116,121,101,70,90,98,131,113,55,50,82,86,45,24,19,18,39,26,45,51,24,17,14,8,41,38,52,74,82,82,69,60,59,71,65,49,65,65,45,61,39,54,60,23,49,41,55,51,50,45,47,62,69,75,66,30,85,53,40,74,61,53,47,64,33,9,11,33,14,14,32,30,22,37,42,54,61,44,57,28,35,71,38,47,30,53,52,47,
33,45,44,58,31,58,45,61,77,77,35,51,40,45,38,50,64,58,32,42,39,29,20,34,7,19,4,35,3,23,35,62,43,55,52,73,27,29,9,25,51,65,32,66,60,48,19,9,17,17,10,20,26,34,18,15,12,22,9,20,41,49,62,62,38,36,81,40,69,53,70,79,67,62,39,52,55,29,22,36,2,6,21,31,33,6,28,18,52,60,49,85,42,60,55,67,69,81,72,82,51,60,43,48,50,48,64,56,41,19,44,41,65,61,64,28,56,64,43,36,39,14,29,12,7,8,7,21,41,30,15,11,55,69,76,54,52,82,49,66,51,48,50,71,60,49,78,57,37,68,60,79,55,93,47,52,63,81,154,137,128,103,77,81,49,52,46,129,117,64,48,95,115,111,106,121,109,109,77,91,99,93,78,111,102,118,135,114,115,112,150,142,88,104,116,104,112,137,164,123,124,83,68,76,86,76,108,119,101,97,101,120,82,105,98,41,33,39,44,18,5,27,32,18,16,42,22,9,11,13,10,29,3,3,35,23,10,14,10,24,44,29,21,147,140,112,98,103,113,125,158,106,114,147,148,117,149,207,253,229,77,105,19,16,65,36,36,41,26,57,30,28,38,94,131,78,108,116,104,115,126,110,83,94,98,116,106,114,104,108,110,83,117,105,124,86,91,106,96,113,112,94,103,98,111,105,122,119,104,115,146,131,100,108,123,102,138,125,133,117,144,120,122,122,122,124,108,122,116,99,111,80,82,102,107,92,69,100,99,80,98,61,75,90,132,102,101,110,122,106,130,150,144,158,146,161,167,142,148,141,116,137,135,99,119,97,104,111,133,107,112,130,128,140,129,106,114,87,111,117,122,80,145,104,68,54,41,48,78,84,72,123,130,124,161,110,59,27,103,149,148,203,201,196,174,146,98,91,88,76,76,89,92,110,109,89,88,154,120,136,138,108,108,71,92,95,105,96,105,102,115,113,152,145,105,102,72,79,89,123,109,135,129,95,111,141,180,164,100,128,136,145,142,148,189,160,152,142,151,126,132,150,143,157,134,178,137,158,158,147,152,157,165,170,169,136,124,108,105,102,126,102,123,97,107,65,66,95,112,127,63,43,59,54,57,53,28,41,71,88,83,88,103,52,91,127,164,230,250,222,215,244,218,169,205,230,133,60,26,3,52,92,38,48,45,167,235,124,79,122,140,121,125,210,178,162,168,174,107,126,215,171,187,163,136,77,19,7,34,28,38,55,47,38,19,20,35,37,23,34,37,41,32,53,49,27,25,1,37,66,39,19,32,19,49,16,57,25,18,36,18,21,25,34,29,34,19,39,43,83,90,1
04,83,65,56,81,95,86,118,106,131,73,59,18,18,36,96,93,126,118,105,112,95,133,81,83,78,100,91,102,107,135,171,89,103,94,83,71,95,101,82,105,99,109,97,86,101,112,126,89,51,69,60,81,57,59,23,29,60,62,46,16,24,39,7,10,22,45,55,72,80,60,52,89,68,78,57,52,56,71,82,76,30,49,35,71,37,41,65,49,38,64,63,57,57,53,57,67,72,45,37,63,54,41,39,66,34,14,17,12,20,13,17,29,4,35,61,50,33,23,41,66,29,69,19,13,17,8,14,35,35,40,27,50,54,65,77,50,70,46,54,90,58,38,52,36,37,47,41,83,34,22,28,27,12,43,29,25,39,68,26,60,35,52,44,41,49,20,12,43,42,37,55,57,34,31,13,12,31,3,20,20,14,7,15,7,16,10,3,50,73,38,87,32,66,40,80,44,65,49,41,63,52,50,51,22,45,29,17,30,23,14,8,15,20,25,51,36,39,45,53,55,53,54,70,40,69,61,44,61,56,64,37,61,52,58,56,28,17,32,63,35,55,57,41,40,72,45,46,42,46,10,43,17,15,15,33,35,27,22,16,31,77,66,70,59,81,53,45,60,26,36,69,46,60,64,61,70,85,61,58,74,71,61,41,26,57,147,119,100,122,107,61,74,87,111,118,153,136,50,35,59,53,51,98,97,36,52,40,72,98,81,75,57,62,82,91,74,96,99,135,107,118,92,93,62,51,81,136,115,46,56,16,73,55,37,72,100,93,85,104,99,71,87,74,48,41,17,24,20,10,16,10,14,40,11,25,40,13,13,8,27,9,19,4,4,31,25,3,36,19,2,30,133,146,103,119,117,114,123,116,105,113,130,122,125,132,207,249,216,58,74,58,19,60,25,51,28,24,56,52,72,42,93,153,78,90,90,97,100,121,121,91,133,117,102,76,106,99,115,70,105,120,113,118,92,98,74,88,103,118,127,98,125,134,138,109,103,143,140,119,104,89,120,104,101,132,144,147,112,146,109,123,120,113,139,120,96,74,112,127,86,102,96,114,118,97,101,128,100,113,99,108,122,100,113,97,122,105,105,103,129,141,119,138,133,143,117,110,136,107,115,113,92,95,53,76,112,109,104,80,79,100,114,141,138,101,96,80,125,109,99,127,98,97,93,74,93,66,55,102,70,112,160,180,163,81,45,86,110,125,147,143,207,245,230,161,111,85,63,61,82,81,120,118,114,98,90,140,114,146,138,126,115,100,112,104,116,107,129,134,96,139,115,112,120,78,83,82,102,169,149,148,95,144,165,145,109,76,92,93,143,187,176,201,189,177,174,193,170,178,163,170,177,163,148,129,167,159,168,199,190,182,186,157,141,
158,129,95,110,94,120,112,109,135,83,81,100,109,118,106,71,73,45,77,36,50,35,46,62,69,96,73,90,97,112,205,242,255,219,204,239,228,194,171,240,193,50,22,6,20,25,44,39,61,132,145,95,49,49,110,144,211,204,175,185,168,162,142,218,238,149,139,158,83,81,14,36,56,61,68,54,49,34,14,46,49,12,44,55,35,40,31,44,57,23,32,40,44,30,30,34,54,25,48,45,33,36,22,19,19,14,31,30,25,29,42,47,69,97,119,76,80,94,93,82,79,103,121,85,111,44,5,39,23,68,98,93,121,103,110,82,104,96,107,95,104,100,79,125,73,118,162,69,77,91,126,100,78,90,82,69,111,104,123,116,106,100,97,84,44,39,54,69,67,64,44,25,47,46,32,31,34,45,27,36,57,75,75,86,58,26,49,97,120,68,100,61,60,75,69,59,60,64,55,55,69,59,56,60,66,72,97,64,59,57,59,77,45,54,49,63,86,51,39,41,16,26,10,36,23,34,13,29,26,25,60,39,56,40,21,8,26,30,25,21,22,22,18,18,27,41,20,50,49,54,54,59,80,53,49,47,39,46,72,54,43,40,51,68,16,11,19,15,7,6,13,20,6,29,14,34,30,39,55,36,36,30,13,36,30,32,42,53,35,28,2,8,28,27,25,33,38,9,28,22,30,40,39,42,45,44,75,47,47,42,83,47,52,61,60,38,26,78,21,45,31,14,42,22,33,13,23,29,29,35,36,53,54,66,32,63,69,40,98,80,54,53,35,44,39,42,58,55,20,39,67,31,39,53,66,53,66,48,68,73,85,52,35,36,34,35,24,5,21,11,19,31,22,32,15,59,53,76,55,63,48,47,32,42,25,68,73,73,66,56,72,110,63,54,59,42,46,37,42,35,112,131,108,127,129,98,66,89,133,150,158,141,115,86,76,72,43,35,72,35,29,41,40,100,165,91,56,90,90,75,78,36,76,109,121,103,108,116,63,58,32,84,94,69,77,91,85,82,65,58,59,52,78,80,69,72,61,83,41,46,79,84,64,12,8,3,0,22,27,30,18,15,35,13,5,27,0,6,17,33,33,17,6,6,3,6,31,130,147,131,100,106,117,130,120,120,125,108,110,97,129,203,251,231,68,102,33,22,37,10,33,30,16,39,48,52,74,107,125,104,84,111,99,128,122,135,129,125,122,102,92,111,114,111,104,112,114,105,120,139,130,115,116,116,121,101,105,104,116,108,98,94,95,123,100,103,111,128,110,119,148,126,111,123,101,131,116,106,135,115,103,96,105,117,136,121,97,98,77,102,109,126,128,115,100,129,109,128,110,125,123,117,135,90,99,111,79,80,107,100,77,74,102,118,132,111,90,77,71,92,117,101,92,75,100,1
07,145,104,87,98,116,102,85,109,109,75,86,94,100,116,129,118,108,87,101,83,104,156,132,112,81,71,133,130,97,130,115,143,168,172,172,157,167,138,86,103,87,127,136,124,109,128,151,130,107,124,124,115,158,151,110,114,138,111,102,122,122,127,91,84,77,89,116,110,143,145,132,82,106,92,91,71,53,52,106,155,189,192,233,212,230,224,244,193,208,198,219,214,207,180,182,194,188,211,207,200,206,205,171,134,160,111,101,103,90,140,120,127,106,86,109,81,110,125,105,115,92,96,70,67,67,49,49,70,83,94,109,111,85,138,223,255,252,255,218,246,208,176,171,187,179,120,28,32,90,156,54,61,117,106,62,59,27,15,111,154,223,190,185,184,175,164,184,224,184,116,112,95,40,9,2,63,105,57,73,38,36,23,12,25,30,48,23,42,47,19,21,34,29,9,12,52,46,14,11,23,38,47,30,28,23,36,15,28,39,20,36,27,21,36,55,48,83,86,80,79,50,109,119,100,111,106,112,71,36,14,5,76,44,85,83,90,93,99,97,70,88,120,115,122,81,80,71,48,30,117,134,80,101,67,84,102,95,78,101,113,127,103,103,74,109,72,90,115,88,51,56,57,112,64,53,58,30,13,29,14,30,29,24,51,58,64,68,94,62,65,103,105,88,61,70,57,62,84,84,87,79,33,53,62,107,81,89,52,74,79,74,70,79,52,71,65,56,81,63,51,31,52,57,35,50,24,29,14,16,27,19,19,24,45,46,43,51,17,34,20,43,48,55,28,21,35,8,9,48,30,17,37,45,53,28,48,45,66,70,64,73,60,69,64,64,79,67,18,31,7,1,27,27,31,29,34,26,28,35,10,32,39,56,48,28,18,27,21,13,4,15,49,33,23,11,28,13,6,11,33,15,10,13,35,24,16,16,21,38,34,27,44,34,39,38,41,52,37,74,79,37,44,52,24,29,8,15,9,17,21,14,13,38,39,38,65,15,52,67,51,51,73,63,61,27,36,51,23,28,42,25,39,55,47,39,67,38,24,19,18,63,23,24,64,39,32,53,44,4,21,24,16,38,34,27,24,21,46,59,53,49,36,55,50,43,59,66,61,51,79,81,69,99,69,93,94,77,67,47,45,25,33,56,24,86,121,95,147,136,90,136,162,147,135,168,111,142,129,98,123,115,131,167,131,125,105,137,185,183,139,145,128,186,144,136,108,124,117,154,170,133,115,82,102,12,68,133,109,103,80,92,67,56,56,56,46,55,86,95,72,80,59,60,58,73,160,103,8,1,36,4,25,7,26,20,31,21,35,17,26,26,15,15,17,36,2,18,19,13,14,16,157,183,153,147,169,165,164,122,152,152,155,143,133,
151,209,254,219,78,104,57,9,44,27,43,29,33,16,48,55,43,131,150,118,118,94,125,88,125,126,97,127,104,121,118,114,150,109,79,114,106,148,118,137,143,105,128,110,123,116,113,104,93,124,93,106,117,101,93,123,105,78,103,95,133,134,142,146,136,141,128,116,115,114,138,114,113,131,114,75,98,81,89,72,95,91,109,86,84,101,93,120,121,119,95,141,123,94,120,116,100,107,100,65,86,74,82,93,95,97,105,77,97,115,95,96,85,63,105,119,97,83,89,75,98,105,80,97,74,92,76,43,100,102,103,146,130,89,104,100,77,83,99,95,58,66,87,88,103,124,90,77,110,148,172,192,245,205,163,156,133,106,109,117,104,108,115,95,108,127,119,108,133,132,118,139,132,106,91,74,86,143,89,113,80,66,68,82,96,112,136,92,73,95,77,60,93,44,75,178,176,172,207,230,226,229,234,163,189,219,212,199,186,193,166,208,199,242,218,173,182,204,135,99,117,107,88,118,118,117,116,130,71,47,62,71,111,105,155,111,67,88,79,51,46,43,53,82,81,103,125,129,122,154,249,252,247,212,228,243,200,172,116,172,202,156,136,136,244,200,74,45,89,32,9,52,34,50,96,184,204,185,178,224,202,193,202,163,130,63,31,47,20,38,66,93,85,23,45,43,10,31,50,44,37,43,43,51,59,28,46,37,37,36,20,50,9,29,5,18,63,42,20,20,14,16,52,14,27,8,33,46,15,66,86,61,89,83,94,132,73,103,73,108,120,108,56,19,22,37,51,100,107,103,85,96,97,85,96,99,93,68,113,104,94,76,49,51,48,101,118,103,78,95,115,93,88,102,80,112,92,98,94,96,90,87,108,110,95,89,56,80,84,92,79,50,14,15,37,5,16,51,86,69,55,98,92,118,102,84,109,91,86,83,42,76,63,81,64,43,14,51,59,55,74,67,62,67,82,74,64,62,64,88,62,68,77,62,56,44,60,58,76,57,35,8,28,33,32,31,8,29,19,29,33,28,26,17,32,58,30,17,43,41,28,41,26,26,34,12,21,21,40,44,27,35,45,64,37,61,84,49,14,68,48,41,12,5,0,13,17,33,11,19,24,13,13,31,21,2,42,7,32,45,37,59,39,10,14,47,37,51,47,19,16,21,2,13,3,25,16,49,12,8,16,27,45,34,41,20,30,33,19,0,16,28,66,61,90,59,40,59,48,62,44,14,18,22,26,22,59,20,40,28,75,29,58,52,64,51,54,81,64,75,45,41,38,38,56,55,77,37,31,29,45,34,36,33,27,40,53,52,50,58,40,18,36,37,26,49,12,25,15,39,30,6,43,66,45,55,59,57,60,61,44,46,39,71,68,78,103
,62,80,98,68,62,36,37,36,28,42,40,33,52,134,132,115,121,88,101,125,160,132,124,109,73,92,88,113,140,196,207,197,192,172,181,194,197,184,155,144,168,207,158,157,116,131,117,106,132,157,148,120,114,76,111,157,118,103,100,76,92,116,107,73,83,87,112,128,84,100,115,82,121,159,221,112,65,5,2,9,25,13,30,3,16,14,21,11,37,16,23,9,33,15,14,21,26,14,24,13,147,151,170,158,131,154,163,144,133,169,157,178,135,154,222,230,217,70,135,46,7,55,4,29,15,27,51,44,53,80,124,129,132,122,108,113,79,101,83,108,124,138,139,105,112,132,131,98,122,108,110,111,99,98,115,105,115,129,102,117,139,112,118,79,125,105,130,104,107,76,86,119,108,123,154,158,136,130,135,144,153,152,136,144,143,147,124,135,123,132,109,127,113,141,100,88,89,59,61,82,78,97,94,95,144,106,73,124,98,114,129,89,108,82,111,95,106,86,104,91,113,94,87,83,90,81,66,87,78,78,89,32,65,86,113,81,82,86,84,64,83,93,81,101,103,124,143,123,92,99,114,97,108,95,85,84,95,84,106,112,101,137,93,135,143,194,184,180,227,190,127,112,123,125,73,86,64,78,129,108,92,123,145,111,112,120,86,106,93,69,122,86,105,103,88,54,59,73,117,144,89,86,83,80,62,69,37,45,116,196,156,196,240,217,230,211,166,149,204,183,174,181,176,191,205,202,181,183,155,162,163,110,80,87,82,84,116,105,130,111,121,68,65,54,75,70,103,113,91,91,45,67,54,25,19,62,140,152,117,86,124,126,172,219,246,206,155,186,173,179,120,158,144,99,113,77,114,145,139,45,16,46,56,31,89,81,62,130,165,185,179,211,225,219,134,112,143,65,40,6,27,93,82,94,51,34,20,50,13,21,25,36,77,36,35,24,23,37,31,40,21,43,12,29,13,24,31,38,11,41,31,41,36,22,7,37,24,22,31,27,29,56,63,61,64,70,88,80,100,72,97,104,105,98,28,31,14,43,85,67,114,115,116,89,90,106,96,88,109,102,95,97,125,111,106,97,95,77,149,159,85,92,84,103,93,73,100,99,103,67,76,88,83,65,98,96,137,117,125,89,72,87,95,84,48,34,10,46,58,58,43,62,50,65,64,107,98,110,126,124,114,118,54,30,53,68,52,50,59,41,41,53,54,50,51,52,60,64,55,55,63,55,83,44,56,67,44,36,50,55,36,49,28,22,18,9,11,13,9,6,36,33,24,32,38,24,29,55,63,38,11,11,32,20,21,16,26,50,36,48,37,41,22,37,
41,55,53,16,55,57,13,26,31,7,35,42,13,34,32,18,42,27,34,24,33,40,28,3,23,15,37,30,21,53,18,36,8,24,47,16,9,25,21,9,20,30,26,18,24,52,12,20,29,22,13,30,42,19,40,12,8,22,29,20,60,80,48,66,48,70,46,39,16,5,4,24,10,27,4,21,32,27,21,43,45,81,51,71,49,56,38,58,46,42,41,34,29,80,47,27,74,12,17,47,34,25,71,45,51,49,38,53,53,68,20,48,39,19,13,55,26,8,9,31,15,50,46,76,73,62,67,81,62,64,30,51,67,65,42,47,71,75,51,40,39,14,16,25,33,33,21,47,120,128,103,62,44,56,81,90,105,102,73,64,63,82,55,101,141,172,186,176,132,193,166,179,159,132,155,130,148,165,145,140,139,132,122,127,150,158,120,129,144,104,158,187,120,94,72,127,143,132,146,150,158,161,134,152,128,121,153,132,172,186,175,108,15,4,11,20,2,26,4,15,28,30,11,31,14,17,19,6,37,33,23,16,18,26,36,6,143,133,151,148,127,121,151,142,163,146,144,173,167,150,211,237,194,81,97,46,4,57,30,51,16,50,55,47,64,58,127,137,122,103,89,110,95,98,117,114,115,110,98,99,99,111,93,104,131,101,102,111,113,106,102,102,107,119,102,106,137,131,113,109,119,115,120,119,99,121,109,122,153,128,159,171,137,165,164,141,139,169,138,171,152,180,181,165,159,190,166,153,159,161,154,162,135,130,104,147,101,115,145,127,142,141,102,147,157,110,116,136,106,106,96,107,108,74,100,99,83,88,53,75,91,69,88,88,74,78,86,61,34,93,99,133,125,91,113,72,93,98,107,109,92,113,112,135,125,140,118,92,91,88,98,95,58,60,111,95,109,155,132,76,87,123,124,146,223,192,117,126,110,111,80,69,66,80,133,115,122,97,120,119,115,119,119,119,101,125,75,87,120,126,101,100,103,100,81,113,71,63,68,75,88,94,62,80,135,186,143,153,236,247,246,218,147,177,202,215,208,207,185,173,174,193,197,184,150,123,155,67,88,90,87,108,157,122,121,106,111,96,66,63,62,94,104,112,80,69,81,116,128,139,137,151,220,165,171,157,136,166,230,236,195,135,117,133,151,130,147,162,96,99,115,69,29,62,92,16,3,59,141,120,102,109,102,172,186,213,232,201,200,178,113,85,116,64,33,10,60,105,69,34,56,17,21,44,25,64,30,34,67,33,40,26,36,16,45,53,44,47,37,50,48,31,40,30,24,28,49,35,40,31,18,43,27,39,23,29,31,53,48,61,80,70,56,116,93,107,
109,128,79,22,30,12,38,101,128,107,119,87,105,78,120,98,91,88,92,91,108,112,124,86,104,116,111,81,173,165,78,90,87,115,98,78,102,87,91,97,127,80,94,106,103,138,110,119,121,85,80,70,77,92,87,60,57,50,51,47,73,74,49,88,77,108,86,110,113,113,140,89,54,45,25,42,67,26,30,39,36,43,23,36,41,41,37,50,27,22,43,35,45,56,25,23,26,40,38,41,38,52,30,21,32,24,27,47,29,19,24,33,30,7,13,28,23,18,8,21,38,35,28,29,4,40,45,16,26,35,49,36,44,42,58,60,33,50,33,42,40,33,29,64,29,16,32,23,28,18,16,8,0,24,5,42,4,37,22,30,44,34,34,45,6,29,20,36,41,32,3,18,20,10,26,25,30,8,36,15,33,14,12,20,32,23,25,37,46,29,61,8,35,70,25,67,36,60,54,42,50,70,27,23,20,15,32,28,15,11,22,6,23,46,32,64,53,70,57,55,57,63,49,49,42,21,42,47,30,32,33,44,38,56,43,35,44,18,27,37,47,63,45,62,26,20,37,32,24,50,31,48,5,12,65,48,62,68,59,49,77,64,43,42,36,60,17,65,66,73,53,84,71,63,15,38,35,28,44,18,10,19,95,129,126,90,63,91,54,53,56,47,40,93,53,84,78,102,117,110,138,120,148,138,134,132,106,104,126,125,87,86,106,137,106,92,75,99,126,116,110,110,138,132,176,195,127,104,111,124,129,132,137,153,183,140,136,116,137,160,146,179,173,179,152,98,17,0,10,7,10,21,9,18,21,20,21,9,12,17,31,4,18,4,6,6,40,41,5,2,127,91,140,129,131,125,117,148,165,141,146,147,116,194,241,243,179,58,81,24,13,38,7,35,17,29,45,54,38,40,138,104,143,127,103,94,131,113,101,98,106,95,105,67,96,127,89,115,106,117,137,107,100,105,90,107,117,109,100,125,95,104,122,89,104,96,89,96,100,107,110,114,104,144,110,148,125,141,127,130,141,137,131,128,105,119,111,151,166,167,160,128,124,146,181,168,168,158,165,156,194,168,166,180,132,125,104,129,106,117,110,121,102,108,77,87,88,62,71,109,70,67,60,71,85,121,76,66,59,66,67,63,64,75,99,129,89,104,69,82,102,111,127,122,117,103,94,88,106,120,87,125,83,80,62,73,92,79,67,101,83,121,125,109,109,102,109,125,160,140,115,115,93,101,61,98,104,116,118,108,109,103,142,182,117,98,89,113,114,127,105,120,154,98,98,104,125,90,99,54,78,94,53,91,123,107,81,89,95,164,135,110,110,222,201,89,106,162,223,211,191,172,178,163,151,145,168,155,151,
129,136,78,130,101,88,102,103,95,90,93,134,109,104,104,93,124,102,120,116,56,99,163,219,255,248,248,246,251,255,207,186,221,235,222,152,147,128,173,165,122,159,144,136,182,182,143,115,123,137,57,22,70,180,128,156,80,158,226,196,241,234,145,138,156,54,37,21,23,20,68,81,47,35,30,51,51,54,41,26,50,30,40,31,35,52,29,41,48,23,58,34,30,34,38,29,20,51,39,43,56,28,6,63,39,25,18,14,42,49,43,57,64,79,108,105,89,80,147,104,118,100,36,45,36,5,46,132,122,102,123,85,81,78,94,88,109,77,71,55,107,92,108,119,90,106,106,122,95,205,205,92,98,93,124,93,78,99,61,77,88,104,87,112,91,87,99,147,138,78,34,28,77,91,96,94,59,47,80,73,45,81,74,91,84,102,81,84,61,83,51,60,64,51,58,50,64,46,22,47,18,60,63,30,13,59,36,53,45,44,21,74,41,23,54,40,28,41,22,25,20,66,18,30,35,25,30,10,51,25,10,36,22,27,43,37,51,40,35,51,9,40,43,47,42,20,13,46,12,22,30,34,50,52,32,12,43,36,46,30,34,47,48,29,34,63,26,48,39,13,8,16,53,13,22,51,20,9,45,15,31,25,28,34,35,15,51,14,57,25,30,5,37,21,27,20,32,21,24,30,31,30,29,26,24,21,41,51,38,41,1,45,31,50,40,11,46,40,30,44,38,28,23,28,28,31,29,53,42,25,31,18,25,35,21,17,29,27,40,51,60,60,43,47,50,18,36,48,36,55,50,51,24,54,24,35,17,38,36,25,89,64,37,50,51,39,23,12,22,19,12,14,16,7,30,23,47,64,62,67,18,72,69,46,82,56,61,32,79,56,73,35,65,48,55,21,6,36,9,38,29,13,31,56,58,104,111,99,80,70,71,94,60,44,53,88,105,96,104,64,87,86,105,143,169,161,157,120,107,95,86,73,115,118,120,126,88,65,109,105,94,86,80,92,110,132,155,121,125,113,111,121,88,108,127,121,139,110,115,159,141,156,119,160,137,139,92,30,15,26,14,15,18,8,9,0,24,12,8,25,6,11,8,30,32,23,27,8,12,14,13,116,143,131,146,141,141,133,145,162,149,140,131,138,137,252,253,200,75,104,23,45,52,19,13,11,46,9,61,43,78,109,115,124,136,88,105,111,100,102,71,64,99,125,106,107,133,121,116,115,135,108,117,110,114,106,97,117,114,107,104,105,85,61,110,94,101,99,85,92,80,115,109,101,96,113,76,124,100,85,99,118,99,87,54,39,79,97,84,112,101,123,120,104,103,143,139,158,161,145,163,153,152,154,112,112,52,76,91,88,92,97,106,75,96,63,70,50,45,44,4
6,62,73,67,80,101,109,117,93,67,99,107,94,148,127,126,96,73,69,108,113,124,127,107,125,105,105,138,108,91,112,94,108,94,85,89,82,52,64,68,94,69,104,131,134,117,94,105,111,123,103,75,101,107,79,90,138,130,96,113,107,77,122,132,175,110,96,127,131,131,154,123,141,151,98,79,98,105,103,94,72,89,57,78,89,100,105,112,116,80,87,101,57,59,67,110,54,63,134,181,170,161,164,157,119,135,75,96,100,94,158,125,122,99,77,95,93,103,87,66,62,94,99,82,69,93,92,103,110,113,125,96,181,213,246,249,251,236,227,228,180,144,223,242,213,187,195,171,169,147,128,136,211,182,171,136,96,105,155,120,37,6,30,147,166,148,153,203,243,246,242,130,90,93,71,19,0,19,8,82,55,59,51,53,33,51,22,34,16,23,43,49,30,37,13,52,26,22,49,31,30,40,28,28,49,56,52,24,2,19,35,37,35,12,36,54,58,12,38,57,67,71,92,97,104,65,101,120,134,137,53,62,34,14,41,70,104,123,123,103,99,81,67,121,99,87,93,92,85,66,84,105,90,103,109,108,56,84,86,189,182,78,89,81,103,85,79,75,57,92,120,91,79,82,106,89,126,117,61,14,12,33,62,94,86,112,67,67,88,57,47,68,74,98,74,88,112,77,24,7,12,46,62,12,26,52,50,45,56,38,64,64,44,57,47,36,58,51,54,55,53,61,53,22,32,41,50,47,31,58,49,31,43,14,14,32,6,37,34,47,14,16,13,38,39,22,39,32,51,25,29,40,60,45,45,34,27,31,60,10,17,28,36,19,50,38,25,28,41,41,48,43,13,24,32,59,26,41,35,24,7,42,44,31,11,26,29,33,27,51,22,27,35,39,35,22,45,41,61,38,42,24,15,48,31,36,21,32,19,37,44,23,35,29,34,30,64,51,29,38,65,25,13,36,60,38,35,19,18,23,29,30,39,48,13,48,0,49,4,23,35,36,27,43,14,42,31,34,66,46,38,51,49,57,13,49,12,28,51,51,56,56,20,23,6,34,48,27,7,20,29,34,71,48,18,39,9,20,27,28,20,20,20,38,25,21,53,23,49,73,70,50,77,55,66,62,57,73,72,69,58,36,52,61,65,46,32,30,38,20,32,44,43,41,56,45,81,64,83,37,39,87,39,47,75,63,63,63,43,24,39,42,29,101,123,126,103,62,83,87,108,93,94,92,82,101,80,90,108,90,81,82,100,113,92,78,124,113,114,128,101,91,103,92,100,107,116,91,106,119,87,84,107,141,157,140,110,13,14,8,24,10,13,18,23,28,12,24,16,8,4,26,7,34,17,19,19,12,10,8,17,147,143,139,143,153,134,161,149,132,169,175,143,130,166,214,25
5,186,92,148,18,22,36,15,39,16,57,13,43,64,44,111,86,103,117,106,87,109,116,109,63,67,129,106,90,107,118,111,118,134,102,96,92,120,88,97,89,98,81,72,106,115,128,113,93,84,73,121,96,91,100,108,102,81,75,110,98,103,95,100,112,109,94,63,65,52,67,76,67,77,79,110,128,79,121,98,109,98,114,141,122,138,116,79,59,54,42,52,63,41,96,88,83,81,67,64,75,40,64,32,29,75,46,67,97,112,90,100,102,74,99,119,70,145,90,52,67,61,74,94,129,134,102,103,149,122,101,152,80,131,126,76,123,81,91,115,108,117,75,98,115,95,117,143,152,114,124,128,130,92,71,45,89,98,95,111,98,87,33,77,104,89,68,104,86,91,96,94,114,136,146,142,112,138,85,89,113,107,124,114,114,122,93,116,95,118,168,128,154,83,66,54,57,28,73,91,78,85,132,151,161,138,167,155,148,147,113,105,103,83,103,144,100,117,107,89,82,84,74,64,84,100,81,55,74,79,114,74,76,73,48,92,141,138,174,154,131,112,64,118,56,92,163,164,198,251,210,167,144,139,142,208,216,220,168,95,73,75,135,115,35,4,64,168,153,146,204,255,162,176,164,92,51,26,33,8,19,63,66,76,58,26,46,49,39,36,51,28,12,39,35,43,69,49,28,22,53,49,34,32,56,64,8,34,38,37,47,45,45,23,54,12,53,40,34,57,57,8,43,65,107,85,106,99,91,102,119,115,117,71,34,38,13,51,88,107,115,107,107,74,69,79,92,82,103,99,101,104,75,87,102,75,136,95,104,114,105,100,124,210,214,87,79,75,83,94,65,82,63,77,84,104,134,107,89,102,47,22,18,22,20,13,61,94,86,68,51,60,74,56,47,82,62,68,87,104,93,64,44,39,28,38,27,39,76,47,50,67,39,78,50,55,42,42,32,34,48,24,67,59,45,40,59,59,53,64,24,33,46,65,46,45,65,10,15,32,36,20,6,34,18,34,18,14,36,56,32,44,25,24,21,67,57,58,41,32,41,41,52,58,36,30,46,21,71,39,55,65,36,57,28,53,54,36,39,22,46,32,28,13,12,0,34,28,36,50,39,15,26,33,6,52,42,34,19,11,62,32,54,60,15,22,30,39,23,39,22,58,37,60,69,64,29,5,38,15,29,40,7,47,27,17,64,45,43,43,38,21,35,12,45,41,22,41,24,35,43,7,16,38,33,19,18,28,28,33,50,35,23,29,42,48,44,45,35,31,24,33,28,10,55,50,34,38,18,46,30,29,39,31,53,24,35,53,16,26,8,25,16,42,25,27,4,29,59,43,42,45,37,73,65,44,41,45,54,38,54,33,29,64,68,73,49,49,57,29,31,11,29,38,27,63,50,
49,35,52,28,56,37,54,46,74,62,51,84,42,43,5,54,46,57,16,51,106,124,123,118,94,102,101,113,115,111,101,110,98,91,96,103,90,97,81,94,121,93,81,107,115,138,148,138,74,98,134,86,107,100,123,91,79,92,92,112,141,102,126,81,31,29,12,28,8,15,30,19,31,10,8,14,23,19,15,3,20,19,11,18,12,21,19,8,126,121,130,126,126,149,157,180,134,154,144,131,126,154,220,238,149,108,135,10,42,36,25,62,35,36,47,27,54,63,84,94,93,103,78,40,72,89,83,72,88,76,72,79,116,95,112,135,107,87,101,117,102,115,113,71,119,112,98,117,145,105,115,97,89,81,69,84,87,118,104,109,106,107,117,117,79,136,126,95,84,62,92,105,88,86,81,98,124,89,97,111,124,116,117,92,89,102,120,156,137,76,63,56,45,33,28,58,63,98,77,97,92,84,43,55,29,80,53,69,64,57,68,62,72,95,107,84,102,102,113,98,105,48,35,68,67,82,59,88,100,110,117,111,104,110,109,77,80,110,94,104,101,81,85,112,142,121,112,111,94,121,133,149,128,133,114,115,87,69,71,104,115,112,88,111,104,75,65,87,127,57,70,62,43,63,63,65,79,102,68,115,108,123,128,186,135,160,154,126,149,171,154,157,190,189,169,145,91,117,81,80,88,77,87,98,114,118,134,164,141,130,157,138,135,107,118,107,94,98,84,104,106,90,102,81,98,100,127,102,56,73,71,94,116,81,101,95,90,68,36,100,50,55,20,67,71,26,83,47,97,214,143,209,220,192,237,218,180,232,185,251,182,132,111,108,118,163,131,105,76,114,154,174,163,237,231,123,142,154,50,45,7,36,53,50,73,38,47,55,31,42,50,13,25,48,60,27,30,30,29,29,33,33,41,36,47,49,55,37,34,5,46,47,36,48,34,39,32,29,44,63,49,34,57,81,51,67,102,117,121,103,102,84,121,114,72,38,14,33,40,69,104,119,99,123,60,96,95,120,96,94,116,97,82,146,107,89,62,77,119,112,56,104,106,89,107,97,166,205,135,94,57,88,64,81,99,99,110,101,113,125,126,91,29,51,47,29,36,15,34,60,82,50,78,43,70,54,48,59,75,74,101,81,113,114,66,21,11,15,26,59,44,43,56,53,51,49,74,44,66,60,44,36,44,45,22,62,57,59,59,50,70,49,68,38,58,35,36,46,53,32,20,23,28,19,21,21,35,16,27,30,29,29,34,49,37,23,44,14,53,65,44,23,45,26,58,58,48,27,49,43,45,45,47,35,70,47,36,42,18,33,26,50,76,38,42,39,31,16,20,12,16,34,25,65,9,27,29,32,39,
,158,176,192,166,187,160,205,167,168,155,157,180,177,133,32,71,60,58,65,49,113,205,199,180,184,197,178,157,160,183,183,145,84,113,135,132,57,31,119,198,177,173,184,209,208,153,160,125,169,174,174,63,38,60,79,125,88,44,18,79,139,191,201,179,192,211,187,174,163,150,81,86,103,134,116,102,91,68,128,168,135,157,157,160,189,182,178,186,202,191,180,171,183,184,171,109,128,119,106,109,129,107,107,165,237,248,243,217,136,72,38,16,6,96,151,138,161,181,241,242,246,241,154,126,102,178,254,232,239,222,252,250,242,236,246,255,242,236,254,249,237,251,226,252,233,255,252,243,254,242,254,252,241,247,237,202,123,164,238,240,236,229,169,84,72,91,89,82,138,122,95,67,104,124,148,155,118,112,115,102,134,137,122,94,82,97,91,135,160,139,95,100,111,105,90,78,90,124,120,101,117,134,97,113,103,93,91,51,56,68,53,54,53,39,16,58,87,106,82,83,112,106,123,105,176,139,62,109,148,126,122,87,111,168,170,151,107,47,44,70,87,68,95,104,106,71,144,128,144,93,126,76,82,104,95,71,52,56,68,72,84,85,131,134,160,122,134,82,65,86,111,114,91,89,87,118,108,73,71,108,111,106,125,117,141,109,90,11,21,12,45,68,109,132,143,136,124,127,131,137,115,132,92,11,17,42,147,95,91,146,112,79,100,97,118,98,100,99,120,89,76,100,110,118,107,83,73,55,74,87,84,108,116,112,114,84,24,36,36,10,21,30,26,34,53,22,43,74,96,230,148,34,25,47,72,75,90,56,80,63,65,71,54,77,68,67,101,72,93,51,41,21,4,31,25,8,29,61,60,75,69,72,46,48,11,22,48,68,29,48,59,25,51,41,34,33,51,27,38,50,50,39,48,21,80,36,59,65,33,45,42,38,32,33,86,66,72,48,58,74,59,18,14,28,36,13,23,25,22,43,52,46,56,32,15,73,53,43,30,49,34,24,51,27,55,55,45,49,32,56,40,43,55,28,74,39,15,29,33,19,44,36,51,26,25,34,35,31,43,19,38,43,37,47,36,39,35,72,46,44,31,32,4,4,50,10,19,21,14,23,13,46,34,5,30,23,27,19,32,3,0,40,16,19,49,66,85,83,58,29,7,36,21,45,30,41,22,33,36,31,40,26,34,5,27,16,25,34,23,56,22,27,31,52,22,47,14,54,34,39,18,37,21,40,43,15,31,23,33,42,38,43,12,18,33,54,33,20,29,15,25,19,43,39,47,36,53,40,28,19,35,43,11,53,31,34,70,22,44,46,39,36,36,34,15,15,29,24,26,29,14,29,18,
26,131,190,228,188,165,205,220,218,207,204,161,140,133,103,108,103,127,145,140,156,139,145,159,150,140,132,147,103,113,158,229,235,240,245,240,230,199,233,212,218,245,218,212,232,217,228,242,200,228,219,237,237,218,235,195,229,247,185,231,205,215,225,218,202,223,195,113,14,4,1,21,31,10,1,7,16,20,35,15,15,2,0,8,15,1,7,25,29,3,18,4,252,242,222,215,93,83,80,102,70,48,47,99,139,175,227,181,156,162,62,49,27,6,35,22,32,66,55,74,130,210,149,132,143,113,155,185,158,207,193,187,192,197,172,185,177,142,171,141,49,54,122,120,125,108,68,70,102,204,225,182,170,164,168,186,171,112,51,110,140,161,181,141,74,116,181,155,170,212,178,181,135,137,143,123,146,90,27,30,85,187,168,172,118,32,53,84,165,198,201,170,185,189,180,135,114,61,73,106,71,119,145,91,53,101,124,138,160,198,204,165,195,170,182,158,190,200,193,192,179,116,95,138,154,140,183,218,246,254,243,233,235,239,227,238,243,220,175,118,172,208,252,248,248,229,253,234,250,242,230,241,230,224,250,245,255,244,243,230,247,247,255,238,253,252,226,251,235,235,255,254,243,250,251,244,221,244,238,244,231,103,40,25,93,145,169,161,166,92,82,67,100,105,117,144,145,128,91,112,114,97,116,100,102,67,83,83,128,93,80,99,130,121,142,101,107,108,140,130,95,100,112,99,103,105,122,124,127,126,102,120,49,87,79,111,92,60,85,63,46,16,71,100,93,88,96,101,104,119,110,155,177,70,66,81,76,85,91,170,193,138,157,95,33,20,67,90,59,94,64,73,104,144,163,156,157,113,117,77,77,85,115,66,76,100,118,119,118,190,180,123,124,122,93,71,86,93,132,136,100,131,137,124,91,83,58,73,126,148,117,111,65,58,80,79,110,123,142,144,161,138,144,133,130,118,136,101,111,100,60,27,76,115,147,89,114,113,102,129,68,103,79,117,117,93,63,92,92,79,97,75,88,87,87,86,95,91,108,107,97,42,70,19,23,17,20,65,27,28,35,41,37,44,42,66,131,166,77,32,64,109,86,66,96,86,78,73,73,71,90,82,73,93,78,35,40,25,4,29,13,0,32,37,52,68,64,63,52,49,13,25,49,14,48,25,56,36,42,21,34,15,5,36,7,49,26,12,34,57,25,40,75,51,33,41,48,40,46,45,71,78,97,88,73,72,78,66,54,22,22,32,13,17,35,32,25,26,31,42,32,30,38,42,36
,36,13,47,56,47,73,48,30,47,54,20,54,43,48,21,41,36,34,21,60,50,39,32,2,6,21,30,25,51,38,48,55,27,59,42,33,35,37,54,30,47,21,46,21,6,40,30,16,16,35,15,23,38,17,26,46,25,22,26,46,12,25,28,20,44,43,25,26,53,70,77,56,22,14,14,36,51,29,50,41,38,25,26,33,20,18,30,14,24,26,16,23,37,22,28,31,47,39,38,21,38,38,24,54,49,40,24,43,32,17,30,38,42,26,25,62,15,18,11,29,41,42,62,44,17,38,14,29,35,30,56,33,63,28,47,28,35,42,46,21,41,40,46,26,25,40,22,8,14,23,31,14,30,34,40,175,248,247,251,204,207,181,182,178,199,216,202,188,167,134,115,110,115,120,134,148,124,130,161,159,156,169,135,115,148,202,210,203,214,232,231,228,231,230,223,251,222,233,247,239,238,234,252,247,216,226,235,229,223,228,232,223,221,216,220,206,191,223,217,212,211,217,107,6,2,4,18,7,15,15,9,32,48,6,0,26,6,10,2,18,12,9,5,31,12,13,22,173,201,211,136,23,47,65,51,79,60,51,44,52,135,208,134,118,150,63,36,20,32,19,36,53,28,29,51,106,181,181,149,147,165,176,208,195,202,184,182,185,192,164,151,169,167,155,51,31,47,138,167,146,118,59,61,55,173,187,152,164,154,163,129,145,75,15,97,168,180,177,202,178,157,161,135,147,173,166,145,104,147,166,117,98,34,13,28,118,177,153,183,119,61,60,68,75,187,180,124,122,155,145,123,82,32,65,121,103,82,110,77,78,85,85,106,123,172,220,177,211,181,205,199,182,228,175,176,117,66,48,89,179,238,241,246,249,255,245,232,243,246,242,255,243,239,254,247,250,235,254,235,250,251,249,239,255,253,234,237,247,234,234,244,253,249,247,255,234,243,249,245,247,234,255,252,255,247,240,235,243,246,249,242,255,251,249,248,90,39,32,19,93,85,87,127,151,65,93,115,129,177,120,120,126,115,105,104,85,69,84,84,68,37,61,76,96,103,111,136,124,98,146,130,94,136,140,134,91,82,66,117,97,88,133,137,89,94,101,86,82,39,67,85,73,61,63,87,137,127,126,95,98,102,135,138,144,146,143,148,183,124,68,107,144,123,145,173,147,105,138,84,19,44,66,108,64,71,76,78,116,147,155,158,146,132,164,147,104,109,103,67,86,108,106,116,131,154,138,95,118,119,93,43,79,139,131,137,80,79,84,99,70,101,63,83,129,104,60,8,12,67,139,128,117,144,154,151,134,1
36,120,111,133,95,89,76,91,78,106,95,106,124,90,109,87,114,91,97,114,106,100,81,105,85,59,97,104,96,99,111,94,105,105,106,87,94,103,70,50,41,29,26,13,34,52,82,41,47,14,14,22,49,42,18,68,172,150,75,47,77,87,83,70,87,60,60,57,107,84,82,65,57,16,22,4,10,27,12,25,77,62,76,68,66,53,81,50,57,34,15,37,16,34,11,26,19,28,24,41,52,27,26,23,56,36,24,49,46,43,37,21,54,27,54,24,39,27,52,30,67,71,86,61,71,94,72,67,55,35,9,25,19,7,31,21,36,20,39,30,7,57,43,37,39,27,44,40,45,40,45,27,49,37,50,46,28,50,32,38,44,27,42,14,36,28,50,50,33,47,42,44,41,15,48,36,36,50,48,60,35,30,52,17,27,62,18,34,10,18,12,4,15,7,11,13,7,25,27,31,47,48,46,27,23,12,36,26,18,32,23,41,50,54,88,55,28,18,21,16,33,50,64,36,41,44,21,46,13,20,31,17,5,28,22,20,50,42,32,45,28,13,26,59,45,28,8,25,33,21,28,34,38,29,12,20,29,42,31,39,17,18,18,38,36,30,43,13,67,42,36,25,32,28,32,24,11,25,56,43,63,39,40,28,44,29,40,48,29,27,37,11,43,37,49,38,9,42,66,249,236,246,245,239,255,221,200,176,173,182,187,188,210,207,152,124,144,121,158,159,182,127,169,158,157,168,155,138,160,175,170,138,160,141,150,126,165,158,139,159,166,172,175,187,188,157,178,197,190,167,172,192,194,210,207,203,223,201,198,194,180,215,208,231,226,232,134,11,1,6,11,11,27,12,9,6,20,19,10,15,13,13,6,15,37,26,8,8,5,30,28,103,102,64,46,28,72,60,86,97,74,78,109,90,124,170,73,74,117,21,38,37,36,21,27,29,7,45,74,87,123,133,125,173,169,187,188,185,177,174,164,169,161,170,162,149,128,102,29,37,67,126,161,119,149,138,136,71,120,149,157,103,86,155,131,65,35,26,80,135,177,177,223,223,185,188,123,102,177,173,128,116,180,186,142,122,39,31,58,96,182,177,206,132,23,62,28,56,125,126,105,77,134,107,61,62,70,76,127,60,71,99,66,83,119,179,215,183,188,215,181,163,174,196,166,172,175,167,147,125,77,166,249,246,244,242,234,236,239,249,252,224,245,255,255,237,247,249,243,255,250,245,254,255,231,241,253,237,254,255,245,250,231,243,245,222,238,236,249,245,255,249,255,240,248,242,239,252,250,251,222,252,251,252,243,244,236,230,114,86,63,74,100,70,74,38,98,114,61,124,51,132,116,96,118,10
7,65,88,70,69,73,100,114,90,77,79,102,117,125,172,155,102,83,138,130,128,98,105,97,73,106,99,93,80,83,137,135,132,102,103,72,70,80,23,29,67,62,105,110,114,133,156,148,115,140,132,144,123,116,135,127,135,121,49,53,103,127,66,92,69,46,63,60,60,16,77,90,22,29,88,113,129,144,128,119,145,158,211,170,144,115,63,59,36,68,72,106,123,112,133,97,124,90,89,26,46,145,147,129,102,49,77,111,107,128,86,80,54,8,27,9,68,112,117,98,117,71,79,91,95,105,104,85,64,61,95,90,65,77,89,91,102,114,79,88,113,88,98,96,103,73,103,135,125,123,70,80,127,83,62,84,103,92,96,77,101,77,53,8,17,28,32,17,62,56,88,108,98,59,36,18,11,35,12,15,53,144,179,91,57,73,69,65,78,69,64,54,66,77,71,47,28,40,16,42,10,25,24,55,64,62,75,92,73,55,79,72,66,47,21,47,27,25,32,32,23,23,21,37,45,32,38,56,50,38,30,42,33,44,45,34,48,32,28,37,29,45,27,47,48,110,63,57,52,67,74,81,79,29,2,26,1,14,38,39,42,17,32,18,8,27,12,35,27,30,22,29,34,33,22,24,51,41,44,71,35,53,32,41,27,35,36,30,20,25,26,58,43,59,35,7,19,39,24,42,30,12,48,52,23,75,50,27,36,32,27,32,36,7,36,14,14,4,18,42,24,16,26,17,52,38,40,22,26,24,37,27,17,35,11,62,25,21,39,72,96,46,0,14,5,6,36,33,27,70,49,26,40,24,16,29,41,18,31,18,41,17,4,28,32,41,36,51,39,30,27,27,21,36,44,45,31,14,45,29,27,39,35,37,18,26,47,6,61,49,27,11,74,51,44,49,14,40,47,24,46,32,8,31,8,43,33,35,38,36,38,37,16,61,8,31,16,13,38,34,8,33,19,100,247,228,251,248,246,244,223,245,227,178,181,169,182,175,226,221,238,221,198,188,192,173,189,195,164,175,173,162,162,185,166,141,107,37,18,5,27,32,60,39,49,35,27,53,45,50,66,63,79,112,79,67,96,114,97,113,117,134,141,157,127,128,147,166,166,169,190,106,8,0,25,27,26,19,14,15,11,12,13,16,10,4,7,35,37,17,22,14,2,16,27,8,112,76,44,33,61,110,148,160,133,125,124,134,92,182,183,82,68,62,30,45,43,35,35,23,28,20,60,78,113,113,154,134,156,163,192,193,161,156,155,180,183,165,159,159,99,101,16,25,30,78,76,82,110,147,228,172,55,75,108,131,133,121,151,58,30,30,41,87,161,156,176,231,182,188,182,108,92,162,218,135,121,217,254,180,164,125,62,53,100,197,178,167,134,32,41,39,35,9
9,164,115,164,166,79,38,79,88,87,102,80,90,106,96,87,159,240,239,210,148,183,171,156,136,109,125,85,112,133,115,218,250,248,249,223,252,255,255,244,249,236,251,232,248,255,253,250,244,230,252,249,241,255,246,251,255,239,254,246,224,255,242,238,248,237,246,223,244,251,242,220,255,254,240,249,252,225,252,252,252,249,255,255,243,243,228,160,158,84,59,71,76,71,70,78,69,56,81,106,119,120,110,110,103,86,65,91,67,80,94,87,85,129,142,132,95,83,102,129,100,128,147,133,131,130,141,115,66,84,67,77,107,110,85,82,103,92,108,124,111,107,77,88,74,75,41,37,105,104,88,97,98,88,64,41,81,81,29,56,36,31,58,90,146,60,10,33,34,36,48,19,46,57,64,51,36,62,56,39,42,106,137,151,145,133,103,133,136,186,194,156,150,139,115,105,87,62,70,88,101,86,116,159,105,95,19,38,54,92,123,107,61,49,94,144,115,58,36,32,8,69,95,115,82,103,76,63,42,45,56,69,78,59,65,49,54,83,50,75,58,63,74,86,97,101,108,93,117,107,87,68,124,129,150,133,97,53,46,61,67,21,39,49,77,116,75,39,38,31,21,25,16,18,47,71,107,118,121,146,117,51,23,30,27,51,82,82,111,143,138,50,30,53,56,69,91,78,45,91,76,70,39,39,25,10,21,9,39,34,64,52,64,60,47,96,56,79,69,69,38,31,78,25,41,57,25,38,16,50,37,65,46,32,32,23,27,31,31,20,30,53,49,42,46,66,31,16,44,45,23,65,64,52,49,76,52,15,23,48,21,6,33,30,39,31,42,36,6,31,25,51,34,61,19,26,15,8,28,22,19,9,57,26,11,36,27,41,30,29,23,33,40,31,34,18,57,40,41,62,43,28,58,37,40,15,47,11,33,28,25,30,21,24,18,20,22,23,33,8,39,55,23,27,35,4,21,23,20,2,30,13,31,16,25,36,52,6,26,17,25,17,36,28,42,28,65,87,97,31,21,26,23,32,35,30,37,38,43,22,47,27,19,35,20,18,20,15,32,28,40,33,38,7,26,26,17,36,4,40,21,42,40,49,18,38,38,18,7,17,51,20,13,20,23,26,54,26,39,48,57,11,37,25,8,54,39,33,28,3,19,23,41,35,37,43,23,21,33,50,46,23,19,16,29,26,26,31,49,4,134,234,246,252,246,234,253,239,239,214,209,225,203,190,168,186,220,253,248,174,149,182,175,191,161,148,152,162,153,166,163,130,122,79,27,7,10,6,34,30,0,9,22,31,16,18,31,10,12,36,16,21,41,25,10,28,7,11,3,33,42,41,51,57,76,89,90,119,91,30,4,31,8,9,17,25,17,11,22,21,36,38,15,27,1
1,7,14,32,23,23,11,16,14,139,99,53,89,95,101,151,160,155,137,155,170,124,193,218,76,50,59,27,55,21,53,34,26,37,43,31,80,125,146,134,139,202,186,190,165,172,162,163,158,166,162,147,151,96,57,21,31,67,56,53,50,55,88,205,126,42,81,96,138,156,106,106,31,15,28,88,149,189,193,197,217,159,161,178,129,94,174,219,160,112,218,247,202,172,180,106,67,115,193,123,142,60,43,76,41,86,63,143,174,167,133,57,54,91,96,80,132,109,116,111,122,124,142,202,193,140,61,74,88,87,75,58,64,81,104,142,171,244,244,254,248,254,254,247,236,240,254,249,246,244,253,237,239,255,246,253,245,240,255,240,246,250,245,252,243,252,252,245,249,245,249,254,226,238,243,241,251,242,245,254,250,244,247,248,247,252,245,230,243,252,244,253,241,155,85,34,48,54,34,41,71,74,65,73,99,109,135,169,141,115,102,103,101,106,120,116,99,100,117,159,103,94,93,76,79,127,97,113,122,115,119,129,125,112,60,23,49,81,98,120,78,107,61,101,112,107,110,82,73,106,117,89,61,34,57,74,56,40,35,28,35,19,52,49,19,35,15,25,7,42,153,79,27,25,40,51,57,57,55,54,73,46,63,76,64,40,60,112,158,178,148,158,168,163,161,165,155,181,134,138,124,104,113,80,84,123,123,104,122,91,114,90,37,15,45,19,51,83,79,85,130,97,44,4,6,22,88,125,134,142,129,88,27,22,35,57,47,42,12,44,35,45,14,28,41,50,42,34,43,68,108,86,88,81,122,108,105,104,140,139,128,70,44,62,43,46,41,63,48,39,42,47,63,7,26,18,18,19,51,65,99,135,120,123,114,135,115,107,93,114,76,31,68,31,116,100,154,116,60,57,61,61,70,61,56,55,15,42,28,18,16,11,26,26,73,88,65,61,73,44,76,93,59,68,46,56,37,22,61,42,36,31,49,28,21,42,30,32,47,44,39,46,41,52,49,41,20,42,52,24,47,34,47,14,66,26,42,48,21,19,14,46,48,15,33,34,11,19,20,9,31,45,9,18,29,26,50,28,40,26,26,20,46,48,17,8,34,33,33,36,31,37,29,30,24,37,35,44,41,2,9,24,10,37,44,31,10,38,5,19,16,24,2,32,34,29,0,29,26,39,32,28,41,25,22,7,34,40,37,7,31,37,23,22,23,25,20,28,20,18,18,40,38,13,35,22,50,25,14,26,24,31,39,50,102,84,47,6,18,31,9,10,43,34,35,40,2,11,38,23,20,21,33,14,24,19,16,27,10,38,25,24,20,41,45,22,31,32,21,35,29,50,10,23,12,29,23,38,38,31,9,38,10,64
,49,59,45,35,37,8,35,41,25,60,56,31,29,42,33,41,31,40,52,23,23,32,34,24,28,40,36,36,23,9,50,34,171,234,251,238,252,236,249,246,246,246,223,248,209,250,227,219,163,220,188,79,57,84,86,74,98,83,74,83,51,85,69,88,116,99,144,143,137,142,124,79,105,131,90,69,56,47,75,61,66,56,42,36,18,17,49,62,15,24,25,10,25,16,28,20,25,33,32,23,55,11,15,7,20,13,17,21,12,3,23,2,27,14,29,2,13,26,5,13,27,18,8,26,23,82,100,92,157,115,115,85,89,106,75,117,175,130,215,199,101,97,53,23,25,38,18,30,23,26,54,69,68,126,208,222,215,220,221,225,216,220,203,208,205,205,216,231,145,78,118,125,95,102,97,121,97,67,70,109,99,74,126,151,103,89,82,89,45,54,58,118,196,210,185,225,236,156,154,204,152,133,174,228,171,98,187,244,211,160,173,159,114,142,176,63,53,44,74,94,78,60,63,63,103,85,77,53,54,126,117,73,107,104,117,90,119,115,99,105,110,51,26,26,58,52,75,104,145,231,237,245,255,237,254,232,249,255,255,243,240,255,249,233,251,243,255,250,254,248,245,253,248,229,238,236,255,243,246,235,225,248,255,250,250,252,231,255,252,254,254,234,230,246,254,234,236,254,246,251,252,242,253,231,252,254,243,243,245,128,93,56,44,110,103,83,81,86,46,38,105,119,134,184,148,133,128,119,120,101,70,70,104,101,109,102,88,97,86,90,62,80,105,95,109,93,106,104,149,81,72,33,30,44,39,76,59,44,47,101,143,127,102,99,78,118,92,111,54,39,54,45,44,46,59,29,32,43,56,79,68,45,33,82,44,71,163,114,44,95,89,91,59,44,58,70,109,69,51,64,118,161,164,171,175,116,109,162,153,166,126,150,96,57,64,4,11,20,66,102,58,81,65,36,38,39,57,80,43,12,15,11,50,116,82,86,54,10,4,15,78,113,135,132,138,161,113,55,23,22,19,27,34,27,34,37,45,32,24,13,43,40,43,13,46,43,75,74,94,81,110,93,107,116,94,55,10,43,61,42,22,43,38,26,38,24,28,12,25,25,38,12,20,64,77,104,110,110,156,99,110,122,107,159,182,171,72,27,61,74,73,79,129,159,70,48,39,77,53,95,53,17,31,16,17,29,19,41,70,67,84,86,85,71,72,92,29,62,47,43,70,60,35,39,34,46,73,15,56,23,25,47,62,35,33,68,39,35,49,52,71,15,40,32,30,58,33,21,52,45,25,34,38,53,67,40,23,27,9,36,35,9,39,18,23,20,5,26,42,2,11,38,30,23,16,41,12
,34,34,20,20,16,21,39,50,21,15,22,33,34,16,15,21,28,28,30,33,17,19,9,37,28,13,21,16,21,32,18,36,24,54,19,42,17,16,27,12,7,5,33,4,22,41,29,19,37,38,6,30,22,23,13,14,23,29,24,35,11,17,16,24,31,27,50,33,20,25,12,28,23,59,94,79,35,16,17,30,19,41,21,27,24,55,13,27,10,26,42,44,7,22,19,37,41,47,14,38,33,18,34,17,16,31,38,36,20,39,30,18,51,18,45,28,18,33,15,32,23,39,39,31,9,40,44,21,26,17,44,32,59,23,12,30,11,20,5,36,32,17,38,39,42,34,26,35,21,44,49,13,17,31,46,201,251,255,243,247,243,233,233,232,246,230,240,230,246,218,216,118,70,54,29,0,15,3,12,18,25,25,32,18,22,23,71,154,192,228,235,233,236,241,224,238,223,214,232,207,175,194,189,205,170,169,174,195,162,168,198,166,151,115,134,154,120,117,100,88,71,52,39,54,32,10,6,4,8,6,18,3,26,17,8,25,0,7,13,5,9,13,19,5,18,26,11,0,154,133,161,173,164,134,100,111,96,88,93,121,132,149,183,67,67,38,50,41,23,31,48,35,42,29,26,60,159,220,253,238,243,251,243,242,231,188,244,222,227,233,234,129,111,168,130,134,148,135,131,109,157,122,136,125,133,167,249,130,49,64,105,122,117,126,187,205,240,211,218,220,164,137,131,117,129,186,127,85,72,73,165,107,108,155,156,148,147,176,94,73,68,88,125,81,87,77,76,53,44,63,83,93,93,128,104,130,97,116,114,81,82,108,140,109,129,65,50,72,98,117,165,200,224,238,252,246,250,253,229,235,247,238,255,249,250,246,226,255,246,254,248,240,236,243,238,253,226,254,251,219,254,250,247,237,242,245,255,255,255,240,245,240,233,152,144,175,151,172,154,154,137,137,155,135,157,153,183,145,145,130,139,118,50,58,60,110,125,100,125,104,77,69,75,103,134,139,131,110,112,94,99,94,69,57,71,64,81,97,63,103,96,85,60,94,98,103,116,104,54,84,97,75,100,59,14,21,20,55,54,42,47,36,121,134,119,106,62,103,91,105,56,34,13,23,32,26,45,70,59,49,53,105,155,141,126,134,141,146,110,162,167,96,172,135,102,95,113,114,141,82,40,13,75,127,108,171,127,136,116,62,88,75,91,69,24,22,32,16,25,23,6,1,30,57,83,73,87,89,118,167,232,233,234,251,249,178,242,146,38,10,9,41,100,133,152,135,144,124,142,92,34,52,67,96,61,85,111,58,78,94,56,43,74,73,22,34,38,41,62,76,9
4,81,63,72,83,57,49,41,39,29,28,50,37,41,18,30,21,38,37,14,17,41,21,21,53,64,103,126,106,102,139,108,104,101,105,87,119,105,101,63,29,42,76,86,82,103,134,179,96,20,33,45,19,5,21,26,28,18,36,34,58,74,87,82,81,87,78,79,63,53,99,117,109,123,112,69,73,92,96,61,80,68,52,69,69,76,106,81,70,53,53,56,79,80,70,49,97,51,71,121,77,79,91,77,47,60,53,41,24,24,8,24,28,27,12,17,24,49,3,29,39,13,12,25,13,19,53,32,38,30,27,50,28,21,26,52,28,33,33,27,23,20,30,54,32,23,33,16,34,51,18,22,15,32,11,44,41,19,12,18,8,2,33,48,47,39,19,16,37,45,15,20,22,15,26,38,36,26,33,29,33,26,6,27,21,18,21,19,11,21,36,11,25,15,12,24,26,20,32,1,18,34,7,30,37,112,56,21,36,9,13,38,20,28,15,18,14,20,36,25,16,32,14,27,39,14,6,27,48,29,17,29,17,37,2,35,13,28,37,19,24,17,37,7,12,38,20,8,7,23,6,22,37,21,22,17,30,17,31,32,20,41,18,27,16,37,28,19,28,39,52,47,36,57,13,56,32,48,14,22,36,42,39,14,62,238,245,252,240,224,243,234,248,242,247,246,225,242,245,228,210,129,18,2,3,12,18,30,36,50,51,77,48,52,57,24,96,165,232,232,249,250,243,195,234,232,201,198,215,220,230,198,229,202,235,227,235,215,250,219,230,225,221,218,224,221,235,215,186,183,178,142,169,92,12,0,23,10,26,8,42,6,11,1,26,14,24,9,22,10,9,19,22,15,27,21,19,20,176,129,125,144,166,143,140,95,86,78,53,88,107,198,180,50,38,19,13,53,58,33,18,4,14,23,66,53,97,179,178,167,165,167,175,165,98,114,157,194,136,116,107,74,79,139,97,120,83,100,99,69,101,96,84,79,115,168,236,83,41,99,124,182,205,239,255,254,238,186,154,150,168,131,101,80,84,145,107,72,57,66,61,48,90,101,68,68,133,152,97,100,78,84,92,94,100,99,102,72,90,86,58,80,114,78,97,110,107,98,81,85,64,45,96,141,136,106,61,75,37,102,149,196,159,115,169,207,197,202,243,235,249,236,239,245,254,249,239,248,246,250,255,236,254,246,241,240,252,254,250,247,252,253,241,253,244,255,248,239,235,252,189,192,239,24,24,13,41,54,38,112,25,28,91,27,60,47,6,43,81,52,55,44,49,84,111,117,91,59,80,62,76,91,84,111,102,131,94,113,122,127,116,96,86,90,83,57,94,119,94,92,92,62,78,86,104,83,94,105,54,59,50,80,87,54,40,56,32,51,88,47,56,53,1
20,111,100,124,61,71,78,84,81,44,59,50,52,50,0,60,62,59,40,124,135,110,114,109,141,120,90,111,130,98,134,97,89,95,107,116,115,76,70,31,59,63,41,67,26,13,34,15,25,16,18,17,10,8,16,31,81,124,139,186,206,230,252,251,234,253,247,244,229,245,232,241,250,230,252,124,54,14,105,114,140,177,128,101,98,100,113,107,117,117,138,142,173,154,163,146,159,153,141,122,136,79,46,35,94,70,71,77,60,104,89,72,52,57,77,34,18,29,26,43,46,33,34,47,12,53,21,32,34,14,69,37,80,76,95,91,73,103,72,59,66,75,81,26,41,33,27,38,47,63,85,82,73,76,131,167,158,56,28,4,28,32,6,35,22,50,74,70,87,64,50,88,63,62,57,62,55,80,120,175,160,206,174,147,131,140,153,151,143,144,148,167,156,153,132,141,164,159,165,169,165,167,164,137,157,150,165,148,129,172,157,157,146,130,76,50,22,9,2,16,12,20,40,26,41,15,23,44,27,50,35,35,59,27,74,37,38,40,64,74,70,82,29,45,50,14,38,31,30,38,30,39,26,22,41,25,15,13,16,2,10,5,18,17,36,12,10,25,16,26,14,25,11,18,39,24,32,44,24,25,40,33,37,53,25,32,40,30,29,20,31,33,23,25,10,25,29,13,21,40,15,16,16,55,14,21,11,7,28,16,31,51,37,92,109,68,38,37,27,27,34,12,35,23,11,25,23,38,31,30,47,15,4,20,12,27,17,14,5,18,20,36,17,21,26,29,27,29,13,42,19,32,41,12,19,38,15,10,30,27,20,22,8,43,3,28,32,27,27,14,38,22,10,26,41,22,26,40,32,30,20,18,21,29,47,24,47,48,27,13,20,30,97,234,249,238,255,247,249,245,230,255,238,250,240,236,228,212,208,164,119,82,28,13,27,30,45,48,44,100,82,74,100,73,96,131,129,205,182,194,198,177,175,153,170,167,163,173,186,178,191,195,189,222,207,228,179,215,217,194,171,187,212,234,192,213,208,197,209,197,218,130,6,0,8,12,11,3,11,24,12,22,1,1,8,3,18,7,1,28,9,39,22,37,0,24,154,82,66,96,137,142,126,103,125,109,98,96,129,172,184,39,46,61,17,13,36,43,30,33,37,56,50,53,70,72,54,70,88,79,94,79,40,50,97,85,38,33,42,20,62,59,63,81,71,61,88,55,48,46,50,70,28,139,227,79,41,73,163,206,235,233,228,175,122,45,69,149,198,212,214,169,148,189,161,183,118,86,92,128,86,94,70,55,44,61,54,85,78,74,62,68,90,84,117,103,87,95,88,80,57,85,94,102,89,98,63,77,59,43,81,72,90,63,61,47,62,71,106,136,79,5
,19,9,27,16,43,34,54,96,157,123,76,102,107,102,107,95,124,90,67,85,62,61,84,89,37,49,75,50,53,59,104,157,83,116,45,45,2,139,189,28,29,37,39,7,111,214,53,10,28,18,16,32,21,25,63,46,86,73,67,116,106,118,69,37,100,44,26,61,81,81,100,141,125,132,91,92,112,127,115,104,107,92,87,122,110,105,64,81,59,30,58,75,55,80,42,55,50,53,64,45,48,49,16,56,82,44,10,65,83,112,120,111,52,82,87,74,91,65,110,126,130,115,86,120,89,76,78,136,132,56,43,22,100,67,41,32,58,74,77,108,64,70,44,70,66,68,66,31,41,22,17,40,31,25,39,18,39,143,119,163,154,243,253,246,237,255,252,255,244,254,249,255,249,240,245,246,225,211,142,172,103,68,94,41,87,140,124,150,148,109,106,107,115,113,120,92,81,99,122,142,138,138,118,129,136,156,130,137,151,71,20,12,52,88,91,103,56,63,53,98,105,83,99,106,75,49,53,54,10,32,36,41,31,43,38,44,46,60,35,37,50,92,66,46,61,70,69,101,69,88,54,52,31,14,39,40,35,54,96,118,72,85,71,138,191,116,46,15,8,23,10,28,24,67,72,67,97,98,91,58,71,73,60,86,82,54,72,101,96,159,113,126,102,112,114,114,144,126,126,110,136,134,102,123,124,133,121,143,122,140,119,106,127,93,138,94,121,110,104,96,94,104,51,47,10,8,18,6,9,15,13,41,9,21,27,42,63,48,57,73,59,91,69,56,80,98,109,132,164,131,78,68,53,44,73,42,54,30,57,43,20,54,28,28,42,31,46,16,28,21,14,14,41,25,37,30,52,33,27,17,9,22,39,29,22,12,30,34,23,26,29,32,13,36,20,20,30,14,15,41,4,9,38,34,37,28,32,26,14,37,41,3,34,31,27,5,35,20,26,23,26,63,101,89,43,27,16,26,40,26,21,21,46,33,36,37,6,4,32,27,26,42,21,24,42,33,43,25,24,23,9,17,25,25,29,28,32,33,5,57,23,20,29,8,22,44,28,48,39,19,27,55,37,6,29,32,17,3,22,49,7,56,32,17,36,21,28,27,37,25,21,9,35,13,24,45,29,30,42,4,115,230,169,213,226,239,255,249,245,226,253,215,241,248,237,212,233,239,244,224,196,162,117,29,66,38,54,54,72,49,43,30,27,47,74,103,136,150,161,156,138,176,161,168,176,152,177,167,170,177,178,175,195,166,172,194,177,158,176,180,177,181,170,153,172,173,183,197,153,138,17,4,17,3,7,27,17,37,2,35,11,7,13,7,24,11,19,7,11,2,14,15,38,8,123,107,38,64,84,71,103,150,177,164,176,147,179,185,192,138,1
88,131,108,134,96,144,84,100,64,49,72,81,95,88,77,81,98,90,108,118,74,36,54,15,20,10,51,51,49,63,34,60,66,73,87,69,84,71,67,78,87,114,199,107,77,143,113,159,178,131,89,52,34,46,92,142,207,247,239,249,248,237,227,250,219,100,83,130,98,92,83,69,65,59,39,58,77,89,69,77,90,80,65,88,53,85,92,106,102,86,106,74,87,94,130,128,91,37,73,60,98,80,99,85,70,43,54,62,64,41,76,9,18,24,28,18,22,19,89,94,13,14,25,32,26,35,47,36,35,32,42,6,64,79,44,39,24,14,25,26,65,153,75,26,24,22,6,150,245,28,35,39,43,39,149,246,92,44,99,45,64,52,43,41,97,70,112,101,113,101,71,75,47,61,47,49,39,5,48,64,90,93,113,106,115,102,121,102,100,130,77,88,67,73,139,114,71,79,77,63,23,85,73,29,52,50,85,47,31,17,52,57,74,49,85,43,73,78,143,182,155,104,93,113,102,119,139,146,162,168,203,178,191,211,181,161,113,125,122,14,60,10,14,48,14,23,19,16,23,32,38,6,9,11,35,39,70,68,116,138,149,195,236,247,240,242,230,253,234,243,221,247,250,255,239,246,236,234,226,221,161,153,129,98,102,84,31,17,16,23,21,7,54,66,143,156,146,133,101,104,97,127,121,143,74,118,95,123,97,97,129,93,121,109,108,103,123,127,78,22,17,11,73,95,108,103,65,94,89,111,94,102,101,132,105,49,36,32,42,20,36,32,15,29,29,36,24,23,25,36,23,63,40,63,40,74,83,82,74,64,64,43,53,35,33,42,27,68,84,89,80,69,71,43,140,165,89,10,14,40,45,48,76,68,65,77,99,89,73,64,92,67,48,83,98,59,76,57,32,52,20,33,20,47,16,33,21,26,34,7,53,49,36,39,26,21,13,27,6,29,30,44,20,49,32,10,39,31,8,24,39,53,30,7,40,30,25,16,21,23,32,15,34,26,4,13,3,10,13,57,22,13,6,61,36,35,42,85,143,112,89,20,40,62,42,57,36,29,23,41,23,24,14,36,16,4,9,36,24,19,24,13,41,29,15,16,9,28,33,7,23,26,18,46,46,39,26,12,20,30,22,12,29,35,40,23,16,14,30,29,9,21,37,17,46,11,34,3,38,20,16,14,39,21,26,8,34,32,26,16,27,17,60,90,93,36,40,26,31,6,29,32,38,35,35,22,28,31,51,24,27,41,26,19,51,32,41,33,9,25,28,13,39,31,29,43,32,41,19,29,18,47,35,35,15,28,17,13,13,9,15,19,31,25,23,60,39,50,37,41,41,25,40,41,28,52,41,57,10,46,31,38,36,21,39,31,42,41,36,23,71,95,96,106,164,203,234,232,213,198,197,213,203,220,210,172,206,241
,255,240,249,211,209,156,97,88,63,45,36,44,49,33,27,20,19,35,46,79,105,129,174,177,201,172,179,189,195,199,192,205,203,197,171,183,149,180,175,125,177,182,151,156,154,165,134,153,159,169,177,116,15,1,7,18,19,8,7,3,12,17,33,4,11,4,0,27,12,14,11,3,28,9,20,21,153,132,73,78,90,61,108,171,239,229,221,220,235,230,246,230,216,213,194,200,222,207,219,166,105,102,115,115,139,113,147,153,168,82,128,133,111,114,80,80,59,59,69,96,134,87,77,111,97,107,101,105,123,130,167,156,156,173,233,192,194,203,167,197,165,151,107,145,144,110,111,151,210,224,255,239,222,194,244,243,128,117,104,125,72,36,73,81,96,69,42,64,60,92,103,88,46,72,53,95,84,78,100,77,97,104,75,56,85,159,155,206,176,82,57,52,103,113,173,128,106,70,70,130,156,183,177,112,178,203,210,164,154,110,250,250,155,143,182,197,185,200,205,166,191,168,176,181,234,255,185,171,161,152,166,164,200,248,168,143,142,154,192,243,236,42,84,101,134,116,196,241,87,116,123,98,115,102,90,142,132,71,77,135,95,61,38,50,79,43,53,11,13,59,90,117,120,98,79,101,125,122,94,87,80,69,101,116,84,154,136,83,115,158,101,49,60,52,48,70,30,24,71,67,48,27,36,40,77,116,122,145,160,175,179,179,119,90,54,130,144,177,165,163,181,158,190,182,212,198,184,148,123,72,14,12,7,5,16,29,25,7,9,47,23,94,122,122,164,210,226,238,255,248,255,253,227,252,255,250,235,241,251,249,246,247,224,191,190,180,150,115,118,13,29,4,13,29,2,14,33,29,11,10,37,49,18,5,91,119,154,149,124,92,93,100,109,122,103,124,95,100,106,134,127,111,128,127,100,114,120,102,115,126,73,0,20,47,104,97,101,88,88,94,91,96,93,123,89,119,130,93,49,28,12,31,51,7,19,30,33,39,34,38,60,37,58,63,28,73,83,69,61,69,52,73,67,64,29,32,40,49,39,73,71,85,61,44,44,5,37,147,151,49,48,29,63,55,71,74,74,83,97,79,56,72,63,77,77,84,69,63,65,47,31,11,41,16,22,1,17,12,39,38,12,35,33,4,7,22,19,13,9,26,12,18,42,13,17,30,24,41,42,8,40,14,18,29,23,20,8,32,18,32,16,28,63,1,10,27,24,34,35,27,34,21,16,16,30,13,27,39,18,16,72,109,70,53,4,19,8,17,41,33,14,18,28,12,6,17,33,12,10,22,23,16,14,35,21,17,19,8,4,9,33,43,40,15,14,26,32,11,0,1
94,241,250,251,242,185,203,211,235,194,123,133,171,144,198,254,232,226,221,237,134,96,82,38,18,56,41,23,26,9,22,36,19,11,14,21,16,22,10,30,35,14,21,4,20,25,5,22,23,65,61,87,39,2,17,14,56,60,58,34,37,21,62,58,64,79,130,102,80,54,31,43,88,76,54,21,38,27,58,36,55,47,72,83,65,57,31,64,61,71,51,75,60,63,90,112,100,81,64,60,67,79,74,71,90,54,102,73,41,40,51,65,23,18,34,7,93,37,65,38,43,17,25,47,46,10,54,30,73,38,21,122,135,190,229,225,254,245,252,246,234,239,202,214,211,182,169,152,121,109,120,91,102,108,104,143,115,105,49,15,19,91,95,101,32,4,52,54,27,22,47,42,23,34,6,25,4,32,11,31,12,19,19,54,54,61,79,113,110,25,22,11,34,105,105,90,80,110,112,76,89,50,43,47,49,32,25,6,19,3,11,17,60,43,54,9,31,105,39,6,39,130,158,177,158,178,134,125,136,136,97,95,49,63,15,34,20,10,9,20,7,20,41,50,9,38,105,61,34,46,77,210,204,165,168,170,199,147,161,157,87,151,97,119,117,52,52,30,27,17,39,6,9,0,95,136,80,50,17,43,99,150,85,61,38,52,42,40,54,19,46,67,39,41,35,66,30,59,79,78,83,131,105,152,117,73,151,89,52,63,115,126,106,81,72,38,51,66,77,114,79,73,51,24,20,44,24,12,40,47,57,90,74,84,122,126,103,110,103,104,90,80,92,72,76,50,74,62,87,77,110,76,61,89,77,74,52,41,60,44,15,41,52,33,70,120,123,93,110,112,114,90,102,101,82,69,101,75,64,83,70,84,67,47,63,64,52,34,69,62,66,62,59,83,74,76,76,49,35,2,12,31,23,36,2,18,32,66,7,34,21,27,22,10,14,41,17,36,21,7,18,24,26,26,15,35,33,24,22,30,28,5,10,29,18,51,36,20,20,28,24,10,51,38,55,18,33,31,16,21,2,16,19,56,48,9,27,17,34,9,33,27,27,7,31,33,10,27,26,27,25,42,27,24,16,19,11,36,91,88,34,54,32,7,16,29,27,27,47,18,30,32,11,4,18,0,1,16,24,20,46,15,5,20,0,34,34,22,20,6,19,4,7,27,21,24,23,11,41,10,42,38,36,43,36,16,67,26,12,41,16,37,56,26,37,14,20,25,43,22,23,36,34,25,6,12,17,30,15,31,32,8,31,34,27,5,14,8,33,48,29,33,43,86,112,68,25,14,35,32,48,25,25,14,20,30,33,38,19,28,19,16,21,42,38,9,38,0,29,32,39,18,18,15,19,14,40,25,9,39,47,28,25,2,6,14,16,0,35,5,26,26,5,30,11,25,44,8,41,15,38,17,23,26,19,26,30,20,23,16,12,14,41,131,186,165,161,169,159,168,152,177,175,17
2,176,174,203,197,173,148,154,125,149,141,137,139,126,96,153,176,212,133,84,60,37,21,42,94,74,50,71,93,118,77,97,201,247,235,249,221,222,243,250,252,236,246,247,233,234,252,232,192,209,240,221,120,3,1,0,21,2,5,19,6,34,31,26,17,0,6,12,24,20,10,6,1,17,22,25,15,237,222,255,252,255,237,252,228,225,161,113,151,129,189,216,255,226,186,163,190,181,207,191,134,146,153,89,108,127,151,94,93,81,80,90,109,106,81,85,106,85,82,98,115,125,116,119,84,139,153,162,184,107,69,95,72,83,103,104,67,34,60,105,81,105,103,128,114,89,89,73,68,63,78,84,84,69,70,68,87,69,36,71,95,91,57,63,61,73,48,58,60,75,56,68,82,93,60,67,25,18,40,50,53,34,47,42,50,14,13,33,52,42,38,28,52,122,91,130,133,135,161,138,154,153,124,119,122,193,124,133,194,233,227,236,232,225,181,169,164,116,123,143,128,106,118,122,97,82,75,86,71,68,60,75,60,65,51,30,13,35,58,47,38,12,22,16,26,27,14,13,24,57,45,72,86,97,84,93,101,81,91,109,126,111,89,88,108,136,48,17,17,45,39,26,30,16,17,10,17,6,10,46,22,43,68,96,61,70,100,98,97,88,140,101,82,73,128,87,38,5,54,24,16,19,11,3,7,12,28,21,31,28,44,35,38,52,48,56,48,9,50,131,138,75,29,105,112,55,45,11,53,48,48,53,5,13,15,34,23,28,7,2,14,29,3,23,22,36,16,25,41,58,47,62,129,50,22,63,60,38,65,64,64,63,42,55,54,73,47,69,82,94,98,123,136,158,170,162,139,129,135,143,125,120,100,81,36,37,52,57,72,27,49,57,47,51,64,59,33,45,4,31,10,51,14,12,59,67,65,88,82,114,123,88,112,114,120,120,119,83,62,56,57,58,67,59,72,45,75,103,85,82,80,78,60,25,41,41,28,35,59,45,38,81,114,131,109,117,127,76,101,68,80,57,80,75,74,110,68,77,60,54,29,72,37,45,76,93,93,82,57,87,83,68,31,41,18,9,9,53,18,32,2,5,54,34,29,19,16,23,21,0,33,18,15,31,6,33,10,16,53,27,54,39,42,15,13,22,23,19,23,17,19,4,16,15,27,3,16,23,26,16,40,19,14,47,31,35,17,28,41,24,3,35,42,41,15,7,12,4,39,9,13,16,37,46,10,48,14,22,5,16,21,13,16,39,23,31,75,47,42,7,28,13,15,29,12,52,14,17,16,32,20,39,64,37,32,39,39,32,9,48,7,40,27,10,8,20,18,22,11,6,42,24,12,30,7,29,27,43,19,27,40,24,9,0,36,12,16,24,9,41,47,29,42,22,25,29,21,32,28,16,26,28,7,6,12,23,11,6,23,
1,2,7,27,47,8,63,23,20,21,29,36,84,81,46,28,17,41,8,15,28,12,15,24,23,30,21,10,24,33,21,33,19,34,25,13,12,10,6,12,54,35,25,19,8,19,31,25,26,27,11,7,20,16,15,23,11,38,28,4,18,14,7,46,24,28,30,14,13,12,36,31,42,21,20,18,13,15,16,4,50,109,101,148,115,98,133,142,122,119,133,129,142,142,126,136,133,146,103,104,98,133,87,98,69,25,86,112,116,88,82,88,62,43,37,12,8,6,27,61,82,71,104,205,216,217,216,222,220,232,252,251,226,251,236,233,253,249,205,182,187,212,192,103,18,4,16,34,23,4,23,11,17,12,8,5,5,3,32,0,12,9,14,29,30,5,24,26,255,252,254,253,255,251,238,243,238,157,115,152,135,194,242,244,235,179,128,221,251,236,241,231,192,242,190,184,236,220,189,133,173,169,176,236,248,228,223,216,163,247,247,248,239,239,252,252,252,233,233,245,181,140,178,169,121,132,115,88,47,96,108,70,94,123,141,139,135,103,62,76,61,72,60,69,80,76,67,68,49,61,50,113,90,39,38,62,76,71,38,35,16,25,61,90,77,61,57,25,24,30,23,38,43,70,117,121,137,97,98,151,141,151,91,127,237,189,200,220,207,190,155,220,190,135,125,92,123,94,124,122,176,171,148,143,135,53,42,56,16,96,78,88,86,88,62,91,84,87,78,102,62,50,18,18,24,22,23,30,25,86,74,100,64,20,104,135,129,131,131,150,143,143,150,151,157,168,155,147,134,88,95,58,24,38,31,36,46,33,48,35,29,105,116,138,132,134,118,121,128,142,137,172,164,171,189,158,133,119,99,81,73,76,48,46,57,62,62,53,14,57,64,65,90,77,114,91,127,144,105,120,142,139,149,162,117,124,107,84,30,30,69,65,48,24,105,59,48,38,20,45,52,70,64,76,47,59,9,41,4,20,5,17,53,23,72,42,80,50,40,23,76,35,54,39,58,36,31,47,35,42,49,85,126,140,156,163,144,153,150,153,147,136,127,106,95,115,59,75,35,24,29,30,18,15,28,30,25,37,59,56,53,40,38,23,58,43,38,37,37,16,16,19,60,42,53,91,112,117,113,100,119,121,104,97,105,122,88,68,59,37,66,59,61,63,49,79,85,72,94,98,77,59,50,19,24,26,13,48,37,78,64,76,116,130,89,69,73,95,78,83,81,91,61,67,92,74,60,48,45,64,56,67,56,58,65,64,72,60,66,83,45,31,34,37,11,47,28,11,8,33,16,36,61,48,64,35,17,20,45,33,46,16,9,29,26,22,13,40,34,31,27,13,21,0,48,19,30,32,21,28,16,21,20,8,15,29,37,35
,33,21,6,38,18,26,14,38,11,26,37,29,8,12,30,24,14,55,25,12,37,32,10,14,5,25,38,25,13,15,31,20,48,16,1,43,30,28,43,88,81,42,16,23,50,24,43,14,16,51,9,10,9,23,20,15,15,51,18,23,37,28,15,51,21,32,18,14,7,10,8,47,28,21,9,20,19,20,16,19,11,11,31,40,19,46,52,26,25,40,27,23,26,38,26,22,19,16,26,16,29,45,4,7,16,4,10,17,29,6,25,34,21,25,10,32,28,10,29,33,24,23,13,6,67,119,84,65,80,37,16,22,6,18,23,21,12,28,28,13,24,35,24,36,44,25,16,18,32,33,37,7,27,20,33,22,70,35,30,31,14,22,12,5,26,23,18,10,28,28,16,30,10,28,34,7,20,2,39,40,41,22,20,0,32,27,35,22,18,22,24,36,87,124,111,148,119,112,122,142,101,122,117,119,140,126,103,125,127,121,134,105,135,128,113,121,95,116,90,100,118,125,106,138,111,72,46,76,105,84,78,92,73,87,155,232,237,217,235,244,236,249,243,245,239,240,233,245,220,249,215,211,198,216,205,110,2,1,12,25,8,14,6,24,22,14,10,16,20,7,21,8,0,25,7,0,15,15,25,18,200,151,156,190,184,229,244,241,219,131,115,123,122,144,173,207,202,142,96,111,187,213,213,106,92,168,109,105,141,159,99,102,155,144,141,212,194,190,191,179,176,193,203,237,197,154,225,228,241,247,243,238,96,122,158,135,116,141,120,50,12,87,81,73,38,76,124,121,89,99,89,44,51,57,62,69,79,59,33,37,18,57,46,5,7,49,42,33,47,62,50,29,24,28,102,104,122,152,133,180,161,133,133,104,137,205,211,230,212,171,143,162,124,169,112,113,205,151,119,134,141,124,87,102,52,74,57,17,38,22,48,132,42,92,99,115,103,27,29,18,25,50,64,4,15,30,55,41,65,71,80,124,99,105,115,125,147,132,88,47,54,151,171,191,166,57,136,179,169,150,125,108,90,95,89,69,73,72,92,98,97,103,109,91,121,63,100,98,117,100,49,24,83,189,205,214,184,188,167,141,154,127,110,132,105,105,96,91,71,77,75,45,55,56,80,49,48,98,38,57,1,93,169,192,175,149,140,152,130,138,142,149,114,112,103,96,87,53,60,46,20,41,77,105,55,62,163,136,53,5,33,144,208,191,116,119,135,96,87,72,67,64,50,50,55,52,43,73,40,57,49,37,60,81,108,114,109,126,139,49,90,111,138,152,129,167,146,98,100,82,70,46,60,33,41,37,27,29,18,30,42,21,31,2,20,17,14,14,39,38,53,74,79,62,11,23,10,29,40,12,0,35,38,70,93,73,101,1
21,77,107,106,110,114,127,87,105,109,69,72,70,53,62,49,60,58,83,70,80,81,87,142,84,37,11,9,22,10,25,40,57,76,80,134,96,103,96,98,114,64,80,86,85,64,72,87,77,80,64,68,38,55,69,47,53,67,69,59,75,77,64,62,48,41,31,36,15,32,16,32,17,28,33,20,28,26,56,37,34,38,66,97,69,36,43,12,27,36,37,34,28,21,36,8,26,21,13,31,23,26,33,12,21,43,44,1,17,13,9,18,27,23,22,4,28,23,36,19,38,29,32,22,25,37,26,29,31,25,12,24,10,49,29,45,33,12,19,12,43,18,4,15,25,24,23,35,18,49,17,0,83,70,20,20,38,15,10,7,39,28,24,25,12,51,10,44,48,14,28,41,16,3,42,29,24,34,10,27,35,16,23,11,13,29,11,11,16,14,11,17,25,29,32,51,23,13,25,36,21,52,45,22,48,27,41,24,22,29,28,22,36,29,25,11,12,40,27,20,9,28,14,20,5,12,42,14,16,31,2,26,27,44,15,5,20,18,72,93,108,77,47,24,26,11,17,21,6,24,10,26,35,20,17,20,6,40,14,33,9,3,32,10,23,19,5,37,17,14,19,29,23,16,18,31,35,8,19,37,22,9,16,26,12,33,3,6,22,40,22,18,2,20,24,33,31,19,14,24,22,17,14,41,16,61,114,102,117,134,134,183,149,132,142,164,208,185,172,180,166,169,211,199,172,189,193,188,201,229,209,210,227,189,179,208,195,202,158,185,193,213,196,189,193,175,168,205,229,213,245,246,247,243,236,249,250,250,249,252,233,242,239,228,218,215,222,208,90,18,0,0,13,15,11,5,16,13,30,3,16,13,11,16,14,30,9,2,40,12,37,3,16,69,94,96,132,201,200,179,190,215,139,99,139,111,93,20,15,31,53,25,19,35,20,8,23,45,19,2,50,59,45,41,36,40,9,45,51,32,30,47,35,58,15,47,40,74,25,30,55,53,105,121,116,28,49,49,41,117,132,194,95,56,135,124,84,72,55,88,67,68,86,96,119,57,68,71,77,94,67,52,56,59,77,75,40,43,70,92,55,68,69,126,145,126,111,171,198,187,254,237,236,193,188,145,154,201,235,240,217,232,205,186,166,63,111,49,68,140,108,68,61,78,72,87,99,83,65,22,27,23,35,63,60,49,43,64,55,75,1,30,40,14,68,67,73,69,41,106,77,126,126,128,154,179,158,124,147,195,179,118,36,23,148,146,166,77,51,126,84,93,67,37,40,17,23,36,20,49,71,119,128,118,161,140,156,163,159,166,130,143,105,28,36,67,150,134,122,85,79,46,43,42,35,49,49,3,35,30,70,72,80,122,130,149,135,155,105,106,152,68,51,34,98,99,149,121,83,87,48,24,31,30,12,9,
29,11,36,64,96,97,74,8,124,164,169,81,61,175,161,93,25,18,103,129,124,65,26,39,10,30,58,27,54,64,57,68,86,142,191,152,151,162,176,165,153,173,148,151,192,171,68,52,68,53,54,41,42,28,36,19,20,54,15,18,21,44,36,29,28,24,30,3,25,29,7,42,36,31,42,36,35,37,99,70,39,33,20,16,24,35,30,22,50,34,59,66,74,69,92,127,94,109,111,118,96,110,72,65,47,56,72,70,70,76,45,72,75,77,76,67,62,48,50,29,8,1,12,24,56,73,57,79,116,71,126,119,100,102,79,68,98,61,99,104,51,44,75,61,68,48,51,80,91,87,49,79,41,71,79,64,68,44,24,15,48,19,2,19,40,7,28,33,22,63,80,43,26,41,67,55,65,141,73,52,4,25,4,29,12,31,36,20,27,29,8,42,50,42,27,19,26,46,44,3,26,35,17,14,37,35,31,39,9,9,13,30,32,32,20,28,18,26,9,39,21,25,45,44,12,19,34,22,8,39,30,35,20,11,54,14,5,19,24,44,35,29,34,39,30,35,75,83,54,44,20,7,24,25,19,23,36,49,29,44,5,40,42,23,37,11,35,18,50,39,14,13,16,15,16,13,25,13,29,36,17,16,24,7,21,24,20,14,42,50,57,18,5,23,21,26,39,17,45,46,20,25,30,16,36,20,25,14,35,14,12,29,35,22,0,55,12,10,44,34,27,35,29,27,52,27,17,30,15,23,19,15,36,37,109,91,40,37,15,25,41,29,22,29,30,13,24,45,15,4,14,2,30,18,15,38,39,29,17,15,27,34,4,0,9,13,15,29,28,3,34,16,19,51,4,40,40,25,51,33,25,8,23,33,30,20,31,12,16,23,19,22,23,36,23,29,35,24,29,43,92,32,69,81,84,135,85,98,149,159,158,144,139,119,159,163,188,208,202,222,223,220,249,251,237,245,248,235,211,214,207,214,188,217,226,201,201,201,207,178,165,169,193,236,214,234,226,197,187,201,218,215,214,214,210,226,242,241,241,197,202,189,108,32,5,14,12,15,2,20,12,21,31,0,2,13,22,8,23,25,37,33,15,14,59,13,17,180,187,186,171,148,140,114,156,224,159,122,133,123,91,32,14,58,90,51,30,62,35,19,19,63,67,49,76,91,79,38,24,44,40,31,44,47,32,22,31,14,69,21,49,33,28,19,28,11,39,59,88,61,91,121,94,157,188,247,175,122,164,222,199,141,75,62,57,64,106,131,197,116,89,86,70,108,44,73,68,107,168,214,170,131,185,184,146,113,88,137,224,241,175,187,182,126,192,166,156,75,79,92,86,160,195,157,147,197,244,241,180,83,88,102,141,222,214,112,84,91,56,95,123,73,80,89,60,53,53,54,58,49,55,59,76,47,35,72,62,50
,117,130,169,133,127,147,153,121,110,132,119,93,107,131,116,97,92,39,24,38,49,24,34,31,13,92,115,104,94,133,118,123,137,135,137,121,129,140,137,156,138,142,133,130,112,98,80,50,33,29,62,39,48,50,29,45,56,72,106,126,114,96,110,141,133,137,126,154,173,168,179,156,148,124,81,49,76,38,89,28,29,72,32,42,27,50,63,94,99,82,110,103,127,106,133,165,179,175,129,77,95,130,133,68,19,55,72,33,87,6,33,48,96,71,60,89,129,117,109,119,128,141,149,172,167,191,199,149,130,137,146,131,80,85,52,46,78,61,30,30,33,16,10,37,7,21,2,31,35,42,7,29,10,34,8,11,27,38,34,35,47,19,28,30,34,43,31,41,9,60,111,55,46,27,16,31,6,39,36,64,71,50,49,72,51,78,73,95,74,88,73,68,76,46,76,76,52,46,72,66,78,61,86,53,81,66,74,52,20,22,5,26,15,21,37,51,76,84,68,81,128,79,98,74,90,78,75,77,84,98,77,80,62,55,42,50,79,63,39,86,77,73,60,92,83,60,62,38,13,31,13,15,15,16,9,46,41,39,44,39,27,30,55,31,17,37,29,23,40,82,130,58,34,3,17,33,14,10,38,9,46,23,42,46,24,22,53,51,32,21,40,30,36,9,49,35,46,16,9,8,41,34,51,20,4,9,27,17,13,22,28,2,30,27,24,23,34,18,33,29,25,20,26,5,26,38,35,18,42,11,26,6,19,18,54,29,12,30,61,94,71,16,0,21,49,30,19,36,37,18,42,14,40,48,41,38,18,11,18,30,37,23,43,27,13,32,3,31,19,3,12,18,16,35,25,14,28,26,25,28,35,54,28,34,21,36,20,37,14,27,16,18,49,35,27,15,35,21,15,28,29,30,30,18,13,18,18,28,21,10,24,3,18,33,11,3,25,22,50,23,33,15,27,31,19,38,50,121,85,49,25,24,50,34,29,52,18,11,33,25,38,40,24,8,3,28,22,5,19,24,30,16,8,27,36,23,21,17,27,18,28,32,13,27,27,25,50,33,29,40,23,36,51,31,19,26,11,8,17,17,22,37,36,31,18,19,7,7,41,43,19,55,112,52,26,25,47,53,27,40,56,68,79,63,38,87,83,110,113,131,124,167,178,149,224,206,184,181,188,169,121,138,167,194,175,175,154,145,150,128,164,140,98,117,118,127,122,133,106,107,90,104,83,120,133,149,128,152,153,177,162,168,155,140,97,31,2,8,5,6,8,12,2,31,12,48,2,28,13,17,13,21,17,22,0,5,5,22,4,252,234,232,180,140,130,151,186,234,166,143,165,151,121,12,18,80,135,86,36,74,41,30,59,110,39,75,84,120,94,70,72,82,53,79,68,60,34,53,55,66,80,66,95,115,94,77,48,74,87,140,147,99,16
0,120,129,201,233,247,145,85,187,229,229,129,98,99,122,114,109,142,156,94,119,134,126,124,69,65,92,142,186,255,175,179,248,232,214,115,42,103,162,188,139,123,60,45,105,42,39,32,27,69,80,60,58,68,81,165,166,201,152,88,115,81,120,198,195,99,73,84,126,120,126,132,110,107,117,67,7,62,158,137,145,135,137,123,64,107,75,121,164,129,128,138,119,87,70,37,29,31,12,30,22,84,73,93,96,108,74,20,105,134,147,94,46,127,175,163,180,185,80,139,168,128,126,112,98,46,60,17,54,49,75,60,60,82,92,110,99,56,4,43,92,151,146,146,143,184,201,174,196,177,176,146,134,107,60,91,111,105,80,88,64,64,44,44,48,60,47,38,52,101,106,130,141,147,157,180,204,174,187,159,131,123,102,110,99,125,75,36,74,54,24,32,44,51,76,105,67,52,41,108,182,164,146,192,182,170,152,159,155,117,73,105,85,78,49,34,46,2,25,31,23,49,26,24,28,28,25,44,24,25,24,25,3,37,22,29,8,36,35,17,27,21,13,52,27,51,39,21,25,50,11,23,37,64,27,16,25,26,24,31,23,42,14,3,42,40,97,66,62,78,80,88,71,70,76,73,73,69,81,75,43,47,58,37,48,65,84,44,83,74,75,81,68,43,50,38,25,38,19,21,25,82,58,87,98,81,93,86,85,102,84,84,75,96,66,64,78,71,73,46,34,53,55,48,56,56,83,92,54,52,79,61,48,65,32,11,21,2,16,14,21,40,36,28,49,49,19,32,34,51,27,67,36,35,39,24,18,34,112,95,60,26,14,13,20,29,18,7,27,29,36,25,24,14,28,28,35,16,11,13,44,13,11,8,16,9,17,40,32,48,16,13,39,33,43,1,33,36,32,28,40,30,36,39,11,10,57,11,45,19,25,26,12,40,29,28,19,39,26,11,36,54,25,37,23,23,50,81,62,34,20,36,16,41,36,23,25,26,15,34,16,22,30,15,28,28,17,56,25,23,3,13,39,44,24,27,32,48,22,33,47,16,28,26,39,30,53,38,18,15,41,33,25,47,38,26,44,14,18,5,25,65,30,26,24,22,15,26,9,30,24,19,43,26,14,26,17,2,1,27,21,11,19,2,38,26,7,23,20,25,15,29,26,25,49,71,93,78,37,27,32,15,2,21,24,22,5,7,23,22,24,18,28,10,9,36,20,11,23,32,19,13,22,22,3,15,3,13,45,42,41,10,8,41,18,21,49,16,16,47,35,25,28,22,16,32,33,39,24,20,16,20,14,12,41,33,21,50,23,71,115,54,31,30,11,57,58,32,50,41,66,61,72,49,72,72,57,71,56,106,99,71,92,121,108,112,109,95,66,109,142,145,151,113,131,70,90,103,59,61,96,76,59,56,66,45,74,57,50,31,
40,74,97,66,69,71,95,90,95,101,70,127,98,7,12,29,4,28,20,23,18,23,19,5,1,1,13,11,17,14,27,15,20,23,21,26,19,252,252,255,239,222,225,245,236,231,167,130,133,169,106,18,16,96,141,66,63,92,75,66,56,78,81,95,117,100,95,57,38,60,81,98,73,65,49,43,56,42,43,81,70,84,75,65,67,101,83,83,62,73,86,82,84,148,216,208,130,83,141,147,120,139,118,157,177,171,172,139,83,111,145,162,150,151,86,47,41,76,132,149,128,129,181,139,176,126,43,69,122,153,90,68,1,22,56,27,46,28,48,76,73,65,28,26,30,44,34,13,43,36,28,31,41,34,56,21,51,110,114,185,175,189,197,179,180,55,17,25,72,119,88,99,79,73,32,41,33,72,85,58,50,57,45,61,63,41,48,58,80,90,114,120,143,122,186,129,63,29,136,187,161,79,26,106,89,110,112,82,80,48,95,59,53,49,46,40,18,3,54,62,99,107,144,148,157,187,164,66,44,84,150,142,130,128,129,133,122,130,67,101,68,49,45,31,29,32,19,21,46,67,105,85,55,122,126,38,75,38,65,142,158,193,187,167,165,159,106,94,96,69,67,38,27,5,12,25,70,74,135,160,169,129,164,174,167,161,129,44,73,139,135,139,94,72,94,54,32,26,45,13,12,20,35,31,37,2,19,29,37,32,18,14,1,43,16,19,27,33,46,2,19,0,23,6,24,33,31,61,31,16,29,37,45,50,23,30,32,20,26,57,26,42,39,47,29,25,34,31,17,28,24,34,55,60,92,99,82,83,62,62,41,66,96,61,86,58,89,88,91,49,67,46,65,64,73,72,67,74,83,63,69,23,26,37,42,8,25,21,29,68,59,76,92,78,88,90,92,91,84,81,78,77,80,84,102,50,38,94,62,62,32,71,58,58,66,66,68,71,76,68,57,46,38,18,11,13,28,26,34,20,25,34,53,76,48,22,31,51,30,65,37,35,7,31,44,25,23,25,39,132,99,31,18,12,14,45,33,33,12,34,31,16,34,28,33,38,13,65,10,31,12,28,43,20,13,21,10,27,26,25,13,23,11,17,38,25,1,52,13,39,5,40,21,34,37,27,32,31,37,43,30,42,30,57,19,29,45,11,22,20,22,24,22,59,30,46,37,102,81,36,36,0,38,28,27,20,25,32,33,35,34,5,25,20,32,31,29,34,40,42,14,8,44,40,28,37,25,11,32,27,7,34,12,21,22,29,35,20,19,49,14,34,42,32,39,31,37,41,31,12,35,40,48,12,20,15,16,27,21,9,48,16,24,13,15,20,7,22,17,22,36,11,9,39,9,25,10,45,16,33,19,15,31,28,40,34,68,124,67,37,26,13,32,11,16,24,25,36,33,46,11,23,36,24,11,24,39,46,35,40,21,26,22,20,32,40,17,18
,14,18,10,31,23,33,21,28,20,19,16,10,29,16,24,15,19,24,44,45,31,38,2,7,32,30,4,40,51,58,54,137,161,129,108,68,40,45,33,42,44,52,54,41,76,63,58,68,78,55,79,66,79,75,60,39,70,104,96,58,60,83,142,129,83,83,103,91,106,63,50,71,65,56,63,68,16,47,56,63,41,51,55,45,66,40,34,39,50,43,70,37,54,56,62,31,21,5,19,17,1,15,19,24,12,16,13,1,2,17,27,11,0,4,30,25,16,6,2,250,249,246,255,246,245,223,243,236,168,135,138,151,114,48,61,128,130,98,130,128,105,74,117,127,111,120,115,83,119,55,35,46,57,88,122,41,51,43,73,71,73,98,106,100,110,103,85,136,81,24,48,93,122,74,81,170,226,236,112,81,186,124,80,108,141,138,135,170,197,177,103,73,77,102,112,94,67,51,85,50,44,60,44,60,102,98,122,133,124,67,70,78,78,54,35,52,39,40,46,66,52,72,101,79,38,25,44,47,62,25,47,66,59,61,82,88,61,93,75,38,53,74,103,111,106,141,86,30,25,58,101,90,67,65,70,76,21,55,39,75,110,109,140,146,144,161,154,162,187,170,169,137,161,139,130,107,95,96,48,38,56,77,67,6,38,39,41,65,29,59,47,64,95,86,107,102,134,131,133,132,98,113,134,145,125,113,139,123,91,37,43,47,69,28,44,44,41,19,26,18,13,40,66,93,83,90,121,119,87,99,119,112,130,103,82,112,102,49,50,70,55,42,59,61,40,54,53,40,48,54,68,92,131,138,142,144,159,150,144,178,182,173,178,153,156,121,112,121,51,27,39,65,28,10,24,37,16,18,15,6,37,16,27,33,7,16,19,9,22,2,11,25,19,19,41,18,11,9,51,41,53,29,36,31,37,51,66,67,68,51,37,37,28,40,42,45,52,52,74,83,114,99,97,44,6,43,33,46,54,39,51,63,61,45,58,52,98,104,67,75,68,28,25,65,77,76,74,59,94,35,80,65,53,57,63,62,56,43,63,104,68,43,46,22,1,25,16,27,44,68,84,97,86,122,106,104,80,78,80,68,73,72,85,82,77,76,81,48,73,63,49,62,61,59,46,62,70,54,95,65,82,56,23,26,0,19,19,6,7,18,57,13,57,60,41,27,57,58,31,44,54,62,47,48,26,38,25,41,14,28,29,77,94,81,38,39,29,32,38,4,26,50,31,24,55,32,24,41,36,24,44,19,40,24,28,23,10,16,35,40,28,41,13,44,16,14,17,33,36,24,35,20,15,27,46,28,19,22,25,22,43,19,17,28,27,20,23,35,26,27,20,37,10,37,24,26,55,2,32,70,88,51,63,26,19,29,15,28,23,34,39,23,31,28,42,42,16,41,44,24,9,5,41,23,27,15,29,33,48,19,30,21,36,
31,0,37,1,22,40,42,34,42,34,17,43,39,26,29,29,28,44,34,28,30,14,17,48,17,2,27,6,28,15,23,20,27,26,16,30,8,25,13,8,47,28,23,36,34,7,34,22,40,10,17,33,5,36,36,46,100,124,92,49,22,17,12,21,10,22,20,36,8,48,26,29,27,14,23,5,22,38,17,11,50,28,11,22,24,19,16,27,17,35,8,36,11,41,31,24,10,19,19,18,23,22,33,42,13,23,20,15,32,16,32,26,39,30,69,61,64,151,212,208,208,184,154,93,60,24,28,45,19,35,63,48,46,48,71,83,59,40,64,80,59,38,33,86,132,99,93,54,87,91,85,97,94,90,69,82,76,57,59,85,79,72,76,33,76,65,62,49,36,58,56,82,56,41,29,44,41,74,84,44,45,58,41,3,21,6,31,15,29,46,10,23,26,3,6,7,1,24,3,5,17,19,28,9,11,33,235,255,253,237,246,253,244,219,213,139,132,135,134,183,176,119,151,189,120,101,91,47,62,85,133,84,76,60,75,95,68,44,25,42,97,109,46,64,68,81,74,65,99,89,47,90,73,97,99,71,45,22,71,104,53,82,156,202,174,86,55,171,106,63,104,134,73,66,123,191,142,57,57,33,68,80,45,66,103,88,68,77,37,30,26,29,26,104,118,63,67,69,54,64,88,55,55,73,51,67,56,53,92,61,74,82,119,122,104,101,92,125,94,62,81,100,76,88,111,68,24,36,23,32,21,32,24,9,64,213,245,253,215,158,173,184,192,130,101,46,117,191,193,179,177,172,166,134,180,157,107,94,86,37,32,12,14,15,48,35,23,43,87,98,33,65,152,159,152,179,186,173,178,178,187,157,178,189,167,170,150,89,116,73,66,29,32,31,53,62,43,63,34,66,109,146,134,166,139,181,139,143,192,196,198,178,189,165,182,150,121,89,62,43,43,7,63,59,18,78,28,44,40,81,114,116,154,120,120,122,143,178,160,183,208,196,185,136,146,124,117,92,66,51,44,12,45,21,37,14,50,26,37,16,20,4,37,8,19,14,13,9,6,38,29,8,19,38,15,4,40,8,38,28,85,76,24,41,32,70,60,68,87,105,158,139,141,147,99,91,59,61,62,87,108,115,98,117,112,126,145,142,146,148,78,5,23,36,52,54,50,57,39,32,38,43,45,13,38,66,64,64,36,45,72,51,68,55,66,35,71,25,55,62,56,27,76,83,64,61,66,28,21,26,4,2,30,62,72,71,70,101,98,96,88,80,85,87,93,50,71,64,97,77,76,76,51,62,38,52,48,53,58,77,65,54,59,63,49,54,64,8,7,7,7,9,51,23,13,19,37,54,28,25,48,41,57,61,39,54,41,45,60,37,42,66,44,33,49,56,45,37,35,76,109,46,27,18,39,31,57,40,43,23,40,29,34
,42,29,41,32,30,16,15,18,10,38,40,47,12,21,22,17,11,29,23,54,10,19,30,21,20,26,38,20,15,22,50,1,32,26,40,38,30,13,21,14,24,33,20,38,30,28,25,36,29,40,15,14,16,57,93,58,22,2,24,21,30,43,28,23,32,37,19,30,49,25,53,31,19,17,26,25,33,13,15,15,21,16,42,41,49,49,32,28,30,19,40,41,53,22,37,47,15,17,34,21,49,6,6,45,12,20,31,33,35,16,33,31,15,20,31,43,25,12,21,15,30,26,19,13,24,14,54,24,37,43,45,26,35,19,28,16,34,25,19,42,62,34,26,51,94,125,87,28,17,36,27,37,24,18,20,29,34,20,14,48,16,38,25,3,44,21,42,37,21,30,20,35,31,20,15,28,35,16,40,29,12,28,19,23,11,5,4,12,24,26,34,13,24,23,32,17,35,24,37,15,25,34,67,138,223,229,175,185,201,213,204,102,44,59,31,43,43,37,28,51,39,39,52,45,74,54,41,73,82,30,42,96,116,142,80,30,38,35,86,42,36,41,32,63,51,53,53,76,65,48,61,82,27,69,74,54,47,66,60,102,75,52,59,51,73,50,70,48,20,19,9,31,32,18,11,4,18,11,15,14,11,8,9,42,3,27,2,9,10,21,16,12,3,228,254,246,236,224,223,221,209,193,118,115,141,162,192,233,161,156,149,78,102,73,33,13,45,55,79,95,79,106,109,94,66,46,45,78,71,56,65,74,54,91,79,65,75,52,69,52,86,97,78,24,8,62,86,73,69,69,84,81,31,36,92,50,56,51,81,45,64,79,103,74,51,54,76,35,45,37,62,69,58,89,110,71,80,72,47,40,95,75,109,127,155,130,88,85,80,40,36,48,74,69,79,71,82,88,87,105,107,55,76,80,82,67,50,36,29,16,33,57,27,25,39,42,111,111,71,98,118,192,222,227,228,206,145,141,187,238,129,62,18,41,98,87,80,83,74,55,22,22,17,23,23,23,12,33,31,20,50,116,52,16,72,132,171,83,58,137,118,152,156,116,127,92,108,94,91,120,68,76,53,71,28,40,43,28,26,32,57,98,104,58,25,44,182,163,174,170,178,172,175,111,93,128,147,112,131,114,86,90,96,51,62,42,42,46,65,92,137,137,121,104,90,72,210,198,209,170,142,146,94,106,109,96,86,42,63,26,17,16,11,14,19,17,22,60,16,40,19,21,22,27,52,39,14,9,15,21,16,50,30,16,25,36,39,41,35,15,53,42,30,33,43,79,84,107,136,96,45,83,28,142,142,109,125,114,94,73,79,62,74,81,105,93,111,97,131,107,87,143,122,112,105,60,47,45,13,13,56,16,36,54,40,22,29,56,32,31,22,39,64,82,63,55,52,54,77,53,64,63,66,80,46,55,56,71,70,64,80,37,55,15,29,35,3
4,13,43,59,65,83,108,73,87,110,84,89,93,80,84,90,81,77,72,62,69,49,43,67,53,25,43,52,71,30,71,55,58,74,53,35,44,18,33,29,19,14,28,28,21,38,23,33,42,44,58,72,39,37,46,38,38,38,50,49,17,38,27,50,59,45,45,56,44,52,35,107,102,49,29,41,57,65,18,32,34,22,26,45,40,27,14,40,55,47,30,36,45,26,28,18,34,36,8,20,22,36,13,32,37,40,28,16,24,30,26,37,23,38,43,18,13,27,21,21,6,37,45,27,38,28,13,31,17,40,41,49,43,22,28,46,35,11,82,74,45,14,22,57,35,15,35,20,36,32,36,30,23,14,38,31,19,21,13,11,20,34,26,25,26,14,18,14,23,52,23,28,28,46,12,17,27,20,20,27,47,35,28,29,43,18,14,16,5,19,38,25,31,31,13,17,14,34,37,30,8,7,13,31,24,14,3,34,33,33,14,41,4,25,18,30,3,28,46,15,23,24,19,52,59,31,14,22,53,125,87,63,22,7,36,26,19,19,14,12,33,6,18,34,34,31,28,21,37,40,46,17,28,27,13,4,8,52,8,54,6,37,28,25,33,32,27,22,24,38,30,21,18,31,10,47,31,24,37,52,41,52,57,53,91,138,146,162,160,95,152,127,115,147,201,162,137,113,89,67,39,44,25,18,46,51,43,38,54,67,58,40,51,66,44,59,44,68,62,61,53,57,48,46,57,41,33,50,69,44,39,50,37,52,59,42,62,82,54,60,42,60,52,68,82,48,51,70,62,40,42,29,34,22,29,7,21,12,22,26,36,5,31,7,15,2,5,25,33,1,9,5,9,23,32,13,12,247,240,224,235,244,228,216,208,153,125,132,123,127,167,218,156,116,118,57,101,104,38,66,70,84,134,111,152,158,166,131,156,152,121,140,157,112,116,94,140,137,138,113,139,118,130,101,128,118,123,128,35,89,141,126,103,86,114,100,97,71,101,111,128,144,110,111,105,96,110,88,55,60,34,74,64,45,57,55,69,63,72,95,134,110,127,93,150,147,185,246,245,202,120,82,43,39,35,28,44,61,80,58,54,53,80,84,56,53,58,40,34,42,27,20,52,25,51,100,106,153,151,191,222,241,238,220,198,225,197,179,101,61,26,35,76,111,74,53,5,47,40,75,105,80,121,88,113,114,154,146,137,180,167,182,138,121,146,150,86,3,33,102,69,42,49,75,57,24,26,29,48,40,30,38,57,62,85,91,75,85,122,118,134,130,134,120,143,125,130,87,17,31,90,105,86,80,35,27,11,11,36,30,57,87,49,71,59,89,78,149,162,172,148,137,154,148,140,136,136,105,42,38,102,113,83,59,23,55,44,43,65,21,29,23,2,18,26,38,23,42,36,13,9,26,25,32,33,47,31,49,72,41,
32,47,29,49,72,62,59,70,96,114,84,47,59,91,115,107,76,50,10,103,169,160,114,91,67,63,77,64,83,63,52,40,59,76,70,88,81,93,101,106,105,110,101,107,112,79,82,52,49,21,18,28,43,70,116,54,32,46,33,34,61,59,50,16,35,49,65,51,82,70,62,68,57,84,100,77,102,61,83,47,82,45,48,36,16,40,20,38,14,47,60,82,84,93,85,94,94,115,85,83,87,67,66,70,83,75,103,72,61,38,59,75,81,48,37,57,45,58,51,48,77,65,60,42,63,23,20,6,16,22,25,34,30,50,63,36,57,61,39,29,53,51,62,19,53,37,38,67,56,46,59,30,69,23,42,45,27,36,58,19,49,31,78,99,53,31,22,37,20,25,2,22,29,12,18,18,25,29,21,39,30,25,32,43,29,22,16,17,28,32,25,10,53,3,33,6,23,51,27,54,35,32,30,25,18,25,48,20,32,21,19,19,20,37,15,30,31,9,18,45,33,33,19,30,37,15,24,24,55,74,35,19,26,30,39,8,20,30,37,20,42,15,40,19,5,12,14,20,27,53,38,17,10,27,28,10,40,28,21,34,29,39,42,24,13,43,42,17,31,18,54,30,29,16,40,33,24,41,51,35,28,40,34,34,40,23,4,33,35,24,4,29,16,16,27,16,5,27,14,16,3,14,4,29,31,41,27,13,46,5,15,24,19,71,70,23,11,42,28,65,83,114,67,28,30,0,25,26,20,17,5,29,25,14,37,23,5,29,44,32,3,38,14,29,17,33,18,14,10,4,5,33,33,31,19,26,30,14,39,14,12,26,35,20,36,28,17,31,60,89,105,126,119,114,156,168,135,186,112,45,85,37,55,94,140,167,168,164,120,111,42,28,55,28,39,52,62,62,38,66,70,68,52,53,52,43,57,57,56,41,62,36,45,64,68,50,39,44,33,34,24,37,47,55,52,27,27,50,34,54,49,54,46,72,43,44,25,57,38,54,51,45,36,16,32,9,25,6,13,42,31,22,12,14,21,6,6,0,10,5,14,17,9,18,3,13,28,249,242,244,242,250,251,233,197,205,129,119,112,145,110,94,76,126,133,73,143,106,100,79,108,128,153,174,175,185,179,176,168,190,174,176,165,184,166,191,184,164,165,182,147,180,185,173,159,132,136,127,95,116,160,169,133,145,129,169,123,154,160,151,169,142,173,165,143,151,106,94,53,86,60,108,78,68,80,56,72,81,42,53,82,89,83,74,83,86,137,157,148,112,81,66,71,78,2,35,17,47,40,60,93,54,36,58,14,34,65,31,57,78,117,97,121,171,202,252,247,236,229,228,218,225,208,176,111,78,52,28,3,6,34,28,29,62,93,98,96,133,156,172,199,208,226,183,222,195,188,236,219,134,174,175,145,119,99,109,51,32,13,47,38,
,36,66,69,30,54,60,40,54,48,66,41,49,56,28,48,28,45,36,40,44,50,44,44,37,61,44,38,32,48,63,43,36,51,53,57,16,48,46,49,45,55,36,42,76,55,52,53,41,61,45,9,51,47,22,48,31,13,50,46,28,41,41,34,38,23,52,55,57,36,35,34,42,45,47,32,91,117,74,19,34,5,43,15,19,46,28,12,29,18,35,18,16,54,36,13,39,27,25,25,43,23,24,31,21,12,30,29,25,21,30,37,7,13,44,26,20,30,29,35,10,28,29,16,22,21,11,20,26,45,27,34,94,105,50,27,15,27,54,20,29,20,11,20,22,21,46,39,27,29,13,17,9,20,30,24,32,40,24,15,28,34,38,26,36,18,46,37,29,41,45,43,31,26,20,22,60,24,35,36,44,29,35,14,49,13,28,8,23,29,42,8,10,19,24,5,24,11,32,29,9,14,21,16,32,21,21,4,62,67,42,47,39,8,32,26,32,19,23,27,8,24,14,36,26,24,41,26,38,10,40,33,38,22,88,120,107,73,35,30,28,34,23,28,35,15,22,30,16,12,34,11,42,26,10,42,31,37,37,33,101,124,139,132,99,189,187,176,164,143,173,136,171,186,184,201,183,174,197,208,213,242,243,168,107,0,51,83,134,172,200,205,182,172,227,247,254,253,236,255,248,251,255,249,236,255,249,250,255,237,246,255,250,255,255,249,245,248,244,244,250,255,251,247,241,254,254,247,250,245,240,242,253,227,236,242,232,240,219,202,194,105,3,6,5,17,3,15,10,17,14,8,12,12,10,0,23,12,22,27,18,8,12,5,15,8,142,144,70,29,40,33,80,176,185,202,241,234,234,159,167,166,144,181,146,149,117,176,137,107,99,112,74,27,13,31,62,119,112,89,101,81,67,70,92,96,87,120,113,112,112,133,153,135,21,19,84,174,207,197,243,215,217,230,227,224,227,209,189,187,203,67,1,46,95,153,133,114,75,50,58,43,77,114,80,50,31,19,34,26,14,56,132,212,255,251,247,240,229,144,24,101,187,185,145,120,115,104,51,25,20,77,147,160,137,122,150,147,73,125,223,252,248,217,75,28,108,80,68,33,116,207,179,181,129,50,12,122,200,205,218,187,194,185,171,175,189,179,164,171,196,179,185,155,113,45,124,149,156,163,174,163,154,145,181,170,175,160,138,163,171,172,142,168,161,156,143,141,179,182,160,158,162,164,178,151,144,176,183,177,172,189,196,246,230,186,156,155,182,245,197,218,189,237,236,203,185,206,146,105,93,124,160,148,179,201,215,247,174,123,101,65,49,33,61,121,138,142,173,217,249,2
32,197,215,211,197,177,197,247,208,192,183,231,218,178,155,138,187,221,214,194,174,214,218,201,167,204,206,170,189,178,215,199,188,189,218,216,201,170,159,161,166,122,130,92,100,96,85,54,98,97,97,100,112,95,95,88,91,93,79,52,91,117,71,82,77,92,82,116,106,117,113,103,129,114,85,52,33,25,13,44,34,36,37,37,85,57,122,83,66,97,83,71,83,88,91,68,61,89,113,82,87,96,110,103,86,72,44,57,83,51,80,46,67,102,56,88,80,95,81,35,29,31,23,47,2,30,34,41,50,40,69,33,32,69,40,36,45,43,33,70,81,40,50,34,56,38,58,58,76,56,43,66,49,90,39,52,26,40,46,40,47,51,35,29,72,27,35,45,53,37,38,31,51,32,34,34,41,48,42,37,48,49,36,44,66,47,79,34,19,41,23,47,43,24,20,24,38,37,37,18,9,34,11,10,41,59,64,116,120,48,11,21,16,19,27,0,21,18,24,25,24,27,67,21,27,29,2,24,26,34,5,14,20,2,37,7,3,48,30,47,17,44,20,15,31,33,25,33,12,50,40,26,41,0,18,48,38,33,29,22,51,63,71,46,49,16,17,11,10,5,17,44,34,39,7,57,34,20,14,33,23,31,46,29,15,32,49,36,28,38,50,6,19,33,31,25,15,45,41,32,36,51,47,27,36,16,27,39,34,31,16,50,27,37,43,25,13,13,20,40,51,46,23,28,16,10,26,2,22,37,25,25,31,9,21,8,22,53,99,71,34,14,42,30,7,20,34,26,21,19,25,28,22,7,18,10,22,3,28,28,13,14,14,47,50,121,107,83,22,3,31,7,24,42,8,55,21,34,23,36,12,15,27,21,13,41,28,39,91,101,32,61,74,55,131,125,109,124,104,127,127,163,174,202,197,200,177,204,231,239,250,253,202,146,8,18,48,45,97,140,161,133,139,169,236,237,230,198,198,235,237,227,228,235,246,255,252,228,248,252,239,255,249,239,244,242,237,251,250,245,253,247,226,236,251,246,250,241,255,255,239,225,254,211,249,236,219,227,241,211,101,28,7,14,4,8,11,5,28,17,12,7,7,13,3,28,29,5,21,26,10,28,8,30,14,168,192,128,57,37,60,101,149,151,145,193,217,158,87,100,93,100,61,75,80,43,61,66,68,69,66,56,31,23,69,116,125,142,169,156,178,187,198,204,202,212,199,208,209,203,221,214,161,81,36,97,207,220,224,217,231,218,213,206,206,126,152,103,99,57,22,33,29,21,23,13,37,18,25,17,90,77,85,86,73,54,38,36,72,105,142,189,236,238,245,236,244,219,83,18,33,97,87,76,55,50,30,38,31,15,45,142,205,184,199,212,168,90,61,148,164,159,
101,2,24,84,73,63,26,131,236,165,170,118,16,6,104,164,179,185,164,123,109,144,189,181,233,253,198,175,186,222,223,111,95,227,245,225,180,181,205,222,186,165,188,228,237,184,188,211,232,187,165,194,251,244,154,176,242,247,198,187,213,242,233,162,208,242,247,175,145,158,197,124,104,128,133,251,245,195,163,246,244,234,147,137,91,41,50,112,243,236,175,133,133,85,32,16,72,187,217,150,128,240,220,223,190,235,244,254,192,179,227,248,171,168,235,234,201,147,173,217,179,84,118,173,245,239,222,166,205,219,221,145,170,248,237,150,161,228,251,233,152,201,248,252,201,152,161,162,133,128,98,87,113,87,47,43,108,102,119,112,109,130,117,106,98,127,114,108,81,131,98,97,112,77,111,102,114,136,123,107,70,13,23,40,8,19,59,61,52,85,92,84,79,88,70,57,86,105,63,72,72,70,84,66,58,88,91,86,73,114,57,83,52,42,74,55,70,80,69,116,78,127,124,102,75,54,39,29,22,23,8,30,17,52,80,42,44,61,81,65,53,52,63,47,47,50,27,47,69,52,42,23,46,45,41,36,63,75,57,40,49,24,40,48,60,39,42,57,30,19,21,29,42,30,30,50,53,41,43,42,45,23,45,34,40,52,63,47,46,39,30,25,30,43,33,52,54,31,46,63,35,42,30,35,54,53,30,10,25,22,39,21,35,23,0,85,126,85,64,35,6,38,10,21,18,29,24,50,15,15,19,31,10,18,43,38,35,24,32,39,25,26,38,42,41,29,22,14,37,31,19,33,24,36,10,30,33,42,23,33,25,20,12,27,46,20,55,19,8,30,91,92,33,19,20,20,24,29,18,15,39,37,30,28,14,14,26,33,26,11,29,27,12,39,24,34,38,22,29,19,24,32,29,34,55,31,38,29,41,42,56,33,51,25,45,24,45,18,26,14,28,8,26,7,18,22,27,35,8,18,26,38,26,0,5,7,16,35,20,21,20,15,20,39,14,54,85,44,27,9,12,25,21,29,29,17,18,17,30,26,36,29,24,33,25,32,11,27,20,17,42,27,24,88,103,97,52,40,11,19,11,36,28,27,16,17,13,23,27,42,24,16,23,48,47,91,138,87,38,87,62,52,86,121,136,133,102,137,172,153,181,207,208,194,175,183,202,208,203,243,243,206,121,100,68,46,21,47,70,127,167,154,147,144,147,133,148,139,144,172,219,242,217,151,201,191,195,199,206,228,234,232,244,243,252,229,255,255,245,253,243,248,226,255,243,235,251,241,211,222,233,246,255,241,241,254,232,255,115,2,16,15,9,29,7,19,22,16,28,14,20,12,0,0,2,11
,7,16,8,8,26,19,18,146,134,124,26,47,45,73,113,118,115,112,140,75,49,66,58,91,105,87,109,110,119,135,133,152,162,170,39,15,78,151,215,249,249,245,248,243,245,248,247,239,229,244,243,228,217,236,155,57,20,66,109,201,179,159,140,121,83,73,59,50,65,57,61,63,32,42,28,48,40,51,25,69,79,108,101,51,38,50,100,83,57,92,140,202,237,197,190,228,179,178,153,128,19,11,84,59,75,93,91,97,77,36,61,31,73,124,156,131,123,177,171,28,25,83,89,78,63,20,29,124,150,115,63,121,202,214,211,143,28,12,105,154,201,158,82,69,75,95,128,142,226,245,176,161,200,241,210,86,149,236,250,202,167,220,234,181,150,159,212,252,206,147,174,191,205,183,135,183,200,212,137,181,245,241,181,151,223,246,228,160,167,236,240,133,116,107,101,85,116,146,165,202,228,149,153,165,188,77,44,84,83,22,57,93,126,98,45,36,90,118,99,103,146,230,242,184,182,241,251,194,148,179,230,218,176,130,219,215,180,182,197,186,157,144,155,199,99,111,136,149,211,236,187,125,172,200,167,136,140,184,176,141,168,207,217,158,144,184,216,204,160,182,132,148,164,141,132,126,131,127,68,61,46,70,62,111,91,94,118,117,86,114,115,100,105,107,112,92,124,128,93,123,107,92,64,42,34,31,26,51,40,57,93,89,105,88,91,72,76,70,92,87,88,89,114,82,89,67,88,85,85,83,81,74,86,79,79,46,80,56,40,70,70,60,73,94,112,108,72,56,27,29,35,40,50,19,37,72,46,59,63,48,46,37,43,56,18,34,39,54,40,44,55,70,45,43,41,31,43,45,32,52,53,50,53,55,60,60,50,53,51,53,38,18,42,41,30,63,38,57,34,59,51,43,57,44,54,54,37,29,49,31,39,21,18,61,20,63,38,14,20,33,33,39,40,40,39,67,36,13,34,7,37,15,43,40,23,97,20,39,22,29,94,103,95,80,19,14,12,17,38,44,30,36,30,18,33,31,40,33,24,19,13,16,33,28,10,22,41,47,26,32,45,21,51,32,28,15,0,38,26,37,32,5,42,31,26,21,10,35,52,25,41,11,16,46,90,78,42,35,33,30,15,46,48,21,24,33,57,36,29,33,12,21,19,18,15,14,27,16,39,38,40,35,41,23,7,40,35,44,46,24,37,41,37,18,35,20,39,37,34,26,21,36,24,34,24,24,14,11,30,30,18,25,29,20,24,26,25,20,18,34,20,7,8,10,35,5,17,13,22,54,69,35,9,11,7,8,45,34,47,7,24,47,8,39,27,19,34,40,13,27,24,24,5,23,17,20,58,39,79,70,94,51,37
,3,20,13,35,39,12,32,38,20,12,46,13,0,19,20,84,164,169,116,66,159,124,103,125,139,181,203,161,213,206,195,203,208,196,180,167,165,203,209,242,223,242,220,147,217,180,92,34,1,26,101,138,170,136,143,172,149,128,139,105,115,180,224,126,57,56,101,108,135,121,157,215,245,214,189,167,172,230,208,221,188,209,253,243,255,227,254,247,235,213,224,240,242,255,248,253,244,248,224,108,1,11,14,14,20,0,23,0,13,14,29,42,15,9,6,4,5,11,21,21,23,10,10,11,113,89,28,6,63,60,91,130,125,132,120,138,128,127,168,157,167,183,194,213,215,208,205,194,228,237,203,101,22,88,155,207,214,233,227,216,233,213,215,206,198,160,158,164,153,131,127,52,16,15,31,31,11,20,26,10,12,21,16,43,44,101,139,158,166,36,33,118,170,212,223,230,196,174,94,46,90,67,87,88,27,38,56,103,127,100,79,88,70,108,113,117,120,32,36,126,185,174,148,162,154,111,112,99,127,119,94,85,55,81,85,77,28,56,150,178,210,201,44,38,120,105,88,46,43,165,129,202,218,44,1,78,156,212,178,154,126,148,135,130,159,157,166,157,145,181,221,120,68,81,189,165,159,153,158,157,152,133,148,177,206,170,141,153,126,153,102,151,155,142,94,124,125,130,175,135,163,147,163,134,162,161,183,176,125,167,173,168,174,189,223,193,227,209,127,103,76,61,20,38,76,69,42,32,30,50,82,96,167,162,199,176,199,171,213,188,174,160,187,161,138,138,152,162,153,157,144,179,174,152,167,156,140,130,159,159,174,160,168,199,136,155,160,136,150,168,163,172,172,156,172,157,152,157,168,180,161,163,152,162,128,156,169,129,159,149,153,151,160,147,123,119,93,71,62,84,94,102,97,107,114,115,94,102,124,99,106,127,97,103,85,80,53,4,12,7,23,35,45,81,102,124,106,89,89,100,91,83,75,90,96,78,58,84,110,73,81,76,72,100,102,75,81,62,66,80,50,68,49,101,68,54,70,77,77,94,124,80,76,37,20,36,22,18,24,39,26,75,58,54,47,52,34,76,45,55,34,45,82,48,57,61,55,14,50,52,24,48,61,67,22,51,45,43,50,24,35,54,45,49,57,28,17,26,41,27,39,45,45,32,31,22,30,27,55,48,49,38,27,30,29,47,16,36,38,47,19,28,35,44,2,39,41,30,50,22,23,60,37,18,9,43,26,42,23,32,20,24,45,26,9,10,17,12,85,97,85,58,15,10,24,32,40,16,6,27,23,26,5,41
,29,9,14,12,32,11,17,15,38,6,6,22,12,2,31,22,24,23,42,33,26,10,35,19,14,40,49,25,24,8,15,17,10,32,23,45,53,31,102,75,34,39,21,19,30,6,40,22,19,30,23,51,41,29,24,26,14,30,40,8,48,19,21,37,32,25,45,13,59,36,46,48,28,58,34,35,23,56,39,49,15,16,27,32,22,32,7,22,3,13,6,40,26,30,7,11,22,25,45,32,31,32,27,18,37,24,50,14,25,25,27,28,79,25,10,14,18,10,36,35,31,39,9,5,22,16,8,11,44,46,17,32,10,46,11,11,8,9,17,26,9,37,77,100,102,52,24,38,10,41,20,52,11,37,29,4,17,28,35,38,68,121,140,125,111,111,126,129,112,119,164,186,174,156,190,179,175,193,201,202,184,195,200,201,179,178,161,185,168,161,195,145,131,122,58,18,48,51,91,123,159,154,170,174,169,168,149,179,252,148,26,96,139,134,130,135,145,178,209,174,124,82,86,142,150,157,98,159,245,232,247,240,235,252,231,226,193,215,238,231,240,236,235,191,182,108,0,27,1,0,10,8,12,23,3,21,1,12,4,11,22,6,0,22,19,14,15,9,42,11,74,96,38,23,70,76,160,201,193,203,221,202,219,205,205,198,215,242,226,224,222,236,233,208,214,220,188,99,20,42,82,164,174,187,140,110,131,98,88,77,92,69,70,74,24,22,17,29,27,36,59,40,42,48,52,27,16,11,44,67,73,71,67,111,79,49,61,103,220,239,236,245,231,127,36,31,83,91,96,41,20,9,36,75,48,33,11,53,88,132,183,181,165,32,25,83,125,132,101,89,84,68,24,60,159,142,144,162,114,123,217,152,21,131,229,253,243,232,72,29,91,123,33,39,117,172,151,168,169,48,18,29,103,146,232,224,231,190,209,178,162,161,148,145,178,174,146,58,37,131,168,175,169,176,166,155,168,178,165,169,161,159,175,160,157,132,162,195,138,167,150,131,155,146,137,181,148,145,167,149,144,190,161,169,150,182,209,222,228,214,174,178,126,56,24,26,22,38,13,59,70,62,115,131,162,177,171,196,207,225,200,199,174,180,176,156,139,151,179,169,159,165,176,169,175,155,132,144,131,137,120,145,162,139,162,176,192,183,179,167,148,172,147,141,162,146,176,156,181,136,189,149,147,143,183,163,172,169,164,143,153,157,163,152,142,155,155,135,128,140,133,117,162,115,97,102,113,92,100,54,80,90,79,70,78,66,65,55,41,51,17,25,23,23,56,77,54,109,127,149,146,112,92,99,99,78,84,61,91,108,116,91,10
1,97,78,73,79,75,87,89,81,66,62,40,64,56,46,66,71,68,66,87,109,96,96,92,78,24,40,26,38,13,42,43,62,62,56,60,67,80,67,55,45,37,34,69,54,51,33,49,32,39,72,42,45,48,37,47,41,50,50,17,18,38,51,41,38,43,42,48,26,36,52,45,40,49,59,57,55,43,47,52,44,16,29,40,48,35,57,40,33,49,47,28,8,27,16,65,16,53,38,19,25,60,42,26,18,65,20,50,33,19,30,40,37,35,18,13,7,19,10,9,28,32,15,76,103,123,37,33,27,33,60,17,51,22,2,5,36,40,21,28,17,23,30,20,29,16,16,19,65,50,10,35,35,10,37,36,42,28,30,19,27,30,22,29,34,27,18,34,24,37,16,30,17,43,35,51,100,89,54,11,22,6,20,36,20,30,33,22,23,46,9,44,16,23,14,36,15,1,15,29,42,17,36,19,30,12,11,12,50,29,59,11,33,42,54,22,43,38,24,27,29,26,25,21,39,24,8,13,29,34,36,24,51,12,25,24,15,22,40,11,11,38,26,26,34,32,1,17,13,76,81,28,25,24,35,41,33,9,11,10,29,33,34,27,3,18,42,12,24,50,17,33,14,29,12,23,21,22,24,18,8,73,98,92,82,25,28,47,21,7,16,23,11,32,40,36,7,38,92,78,61,42,44,77,63,53,65,78,89,90,70,76,111,129,141,121,135,145,110,133,120,89,126,144,139,156,146,147,171,153,149,181,120,118,111,33,25,58,102,138,148,169,181,179,168,207,235,173,54,99,148,176,160,173,164,194,230,181,103,37,126,162,136,151,90,148,247,223,253,232,216,246,242,205,212,211,225,207,199,173,167,142,112,96,11,1,19,19,16,4,19,40,8,24,11,21,30,24,21,1,7,6,12,10,17,5,1,6,172,176,87,25,58,77,163,223,218,247,234,235,201,195,184,188,171,185,181,182,132,156,143,157,146,128,120,44,13,24,51,105,56,90,52,32,31,30,54,79,56,79,76,54,76,61,40,13,35,32,87,114,124,92,95,115,115,92,105,100,80,38,67,25,18,5,22,72,115,136,110,111,95,135,91,60,56,95,155,103,19,29,92,141,138,121,115,106,120,147,128,113,52,22,24,27,54,45,22,48,29,28,13,41,132,152,161,220,193,232,240,246,89,100,192,238,225,148,37,28,98,99,72,62,151,233,203,196,137,10,16,53,16,58,120,209,211,194,183,180,173,176,178,170,174,173,139,18,74,184,173,164,166,152,154,174,166,172,156,181,169,128,164,147,158,158,154,207,182,165,165,185,168,182,151,165,156,197,167,172,183,158,173,194,200,196,199,221,145,95,47,3,15,26,33,6,40,89,134,141,182,189,217,200,21
4,194,183,187,183,181,193,175,150,162,158,149,189,187,170,162,180,157,168,161,165,171,159,142,162,136,131,159,133,152,152,179,153,173,171,171,162,183,163,151,157,138,153,149,154,173,131,120,130,134,124,131,135,168,146,157,159,169,144,135,124,123,136,126,106,134,102,127,119,147,135,147,123,111,92,105,65,74,81,50,108,48,46,26,30,45,59,75,58,94,109,120,136,135,139,111,128,87,89,96,86,70,98,82,98,80,47,89,89,105,77,94,78,101,83,60,63,69,60,74,73,91,65,74,65,96,74,90,108,93,46,35,6,31,19,7,18,42,38,59,58,52,59,71,59,52,48,67,38,70,62,50,52,26,28,51,39,34,35,41,44,43,41,60,63,43,54,60,40,47,20,71,79,52,53,58,50,37,25,43,61,49,33,67,30,30,20,28,43,51,33,23,35,26,29,53,29,39,16,36,32,26,27,36,59,8,46,30,45,18,35,37,19,35,43,39,39,34,26,60,43,17,21,10,18,14,14,22,20,13,28,43,52,78,116,107,68,15,27,12,21,27,34,41,42,20,20,18,30,42,20,11,25,10,34,34,29,12,47,53,13,7,27,42,18,22,36,24,5,6,18,24,29,23,5,54,24,18,28,19,33,37,33,38,60,86,75,15,57,20,20,35,25,38,14,34,33,29,26,33,46,44,14,21,52,32,28,14,37,24,43,43,22,23,27,19,66,48,44,24,19,45,44,36,53,46,29,24,19,38,14,9,24,22,19,10,23,31,4,19,10,27,17,24,40,20,15,23,39,39,17,10,43,23,39,23,11,82,60,7,30,12,15,16,38,21,25,27,31,22,14,25,29,10,34,20,13,38,31,45,26,47,38,36,11,8,20,25,28,35,39,104,92,66,45,20,32,30,7,50,50,10,13,17,63,106,69,67,56,45,57,55,79,52,56,64,75,47,50,50,40,72,42,46,38,64,43,44,79,131,165,191,210,222,221,201,237,189,220,220,124,200,188,93,77,64,91,115,125,170,217,210,211,243,249,179,82,123,159,164,156,161,169,197,219,184,121,60,145,177,181,172,108,154,248,232,236,220,247,243,230,229,184,181,178,192,160,141,154,132,144,101,11,24,29,1,23,3,25,6,1,9,38,28,22,10,8,10,23,7,19,7,15,5,19,20,203,194,118,36,61,80,155,168,194,244,233,220,189,158,123,131,121,111,75,63,69,83,51,44,66,42,18,17,39,44,84,113,105,113,130,135,127,133,132,148,156,152,177,157,160,157,153,111,42,56,128,197,229,191,200,184,161,146,125,100,96,103,74,39,9,29,24,91,129,138,118,139,156,174,193,112,132,172,182,178,40,38,132,182,159,144,124,89,122,1
19,90,38,21,7,31,76,60,96,87,100,116,159,118,65,181,164,133,157,165,159,223,150,40,31,105,72,44,25,0,32,71,89,62,27,122,231,247,219,160,32,8,122,137,34,37,70,136,191,184,195,194,175,213,190,168,140,31,60,162,198,198,175,169,161,198,203,162,184,197,172,181,183,194,168,164,187,170,162,154,151,192,172,140,173,164,174,172,165,190,174,174,208,225,231,182,126,83,69,40,3,12,27,29,81,122,187,210,207,191,212,199,218,179,192,167,187,186,145,154,155,189,193,184,169,171,175,164,161,182,164,165,191,164,164,164,202,150,194,188,168,185,160,177,179,169,161,148,147,162,157,177,157,144,167,169,151,146,163,187,152,165,158,181,160,158,150,135,136,167,143,144,182,181,142,123,89,128,154,141,122,110,119,90,93,85,83,117,129,140,119,145,111,101,105,124,98,107,98,102,114,110,116,127,150,111,148,145,112,115,104,102,117,106,60,78,76,77,79,89,63,63,89,100,77,71,62,61,54,73,77,73,76,80,67,83,97,67,71,60,81,85,82,87,36,6,13,17,8,45,33,41,59,47,51,72,55,54,62,56,42,40,24,61,53,56,57,55,44,37,54,50,37,57,22,50,55,41,39,50,42,67,49,51,38,51,41,33,40,38,24,49,41,35,64,52,67,60,50,38,28,42,30,16,34,18,46,37,31,5,38,19,44,50,35,48,41,31,46,61,57,37,35,33,35,36,30,26,20,36,51,17,34,32,10,42,31,55,22,45,30,12,45,23,34,39,38,34,32,68,135,68,53,17,42,11,9,63,28,27,24,27,18,13,12,41,31,27,8,19,30,25,49,36,28,32,33,36,29,42,30,42,43,26,43,43,40,30,45,24,35,40,31,22,37,7,15,36,37,31,106,77,67,11,19,22,16,35,35,30,43,48,27,14,34,22,48,42,27,10,7,13,24,31,54,24,28,39,31,13,57,50,13,30,33,30,22,22,41,27,37,28,30,10,45,41,41,36,5,29,13,13,28,22,33,22,14,30,25,18,13,36,40,25,32,26,45,42,45,10,2,28,89,71,33,10,10,17,19,33,33,27,22,31,28,29,32,23,19,17,23,18,33,12,30,34,24,25,14,23,27,31,21,39,25,29,56,155,98,80,52,49,13,29,1,37,32,8,73,94,95,62,16,56,59,43,64,48,22,37,42,71,73,46,79,58,64,39,35,39,10,28,69,193,225,234,238,254,251,252,251,252,248,230,189,96,145,188,124,82,79,54,51,56,109,185,205,175,176,247,134,47,89,99,124,134,147,157,183,212,141,85,86,162,152,183,147,106,155,248,242,243,249,218,237,254,236,202,173
,198,164,158,130,136,131,134,130,7,1,17,6,18,5,16,12,14,7,14,2,8,4,12,13,34,34,6,14,4,25,4,25,169,186,74,13,33,100,126,147,145,167,167,153,77,66,66,43,76,50,58,70,61,62,73,87,100,112,73,16,26,70,118,194,197,203,212,209,208,201,190,218,190,185,188,184,207,193,220,121,61,21,74,167,177,140,125,122,138,139,116,110,95,138,144,184,107,24,56,134,185,224,235,233,234,220,174,123,127,188,228,147,48,23,91,100,122,104,75,65,78,111,56,51,37,12,54,137,180,213,201,212,220,249,157,90,206,194,171,149,65,25,38,28,12,18,28,48,25,26,4,50,17,86,30,38,107,148,181,230,198,58,25,116,212,161,47,37,46,125,131,199,188,197,163,162,70,16,56,137,194,183,185,167,175,168,185,169,157,167,148,192,180,163,157,135,196,169,139,174,164,187,137,164,166,166,191,184,197,215,222,198,209,163,148,108,44,4,19,21,83,105,134,188,199,213,225,214,209,212,190,176,200,192,171,173,164,190,157,133,172,175,166,186,187,158,160,184,178,160,173,189,166,180,158,166,164,166,176,159,186,177,165,196,167,165,176,181,191,165,192,152,158,150,151,135,163,124,143,135,143,161,199,193,191,166,163,191,181,187,152,156,173,176,190,122,109,111,120,145,135,133,98,110,121,103,96,106,118,134,124,138,127,127,117,125,129,119,113,124,133,122,108,112,140,125,116,109,113,97,119,65,68,104,63,88,69,87,78,67,55,72,74,76,56,76,67,45,49,75,61,75,75,78,86,63,93,93,73,70,80,82,56,36,27,29,25,18,41,40,54,61,72,63,57,52,64,71,69,68,50,57,39,59,27,81,52,36,35,48,48,56,51,45,36,32,55,41,72,45,53,36,12,38,48,41,18,53,39,44,34,40,43,49,27,43,39,59,32,37,35,34,40,50,25,6,43,33,31,39,58,39,44,50,45,33,14,32,44,37,57,32,60,27,53,69,25,23,60,34,44,38,42,64,38,36,57,38,37,39,10,15,23,8,2,19,7,29,13,23,41,80,124,77,51,34,8,33,25,5,23,21,11,46,13,15,49,16,15,40,53,17,14,26,13,16,45,12,26,51,23,60,25,28,19,21,35,31,30,21,38,43,7,14,32,30,38,32,25,20,20,78,94,57,29,41,24,36,14,26,25,13,5,20,26,45,51,18,32,29,13,13,31,25,19,32,40,13,17,4,44,31,25,14,44,32,24,29,35,27,15,46,52,42,23,35,29,19,25,25,16,30,40,34,32,10,1,30,6,17,32,29,37,25,35,14,34,33,28,10,44,32,33,93,4
1,4,32,13,17,32,6,11,4,19,34,7,31,24,23,26,27,28,14,17,26,27,2,5,15,6,27,25,38,26,18,16,41,42,82,103,125,47,51,20,22,17,24,35,56,107,58,76,39,34,61,58,42,64,75,45,51,53,38,59,55,39,82,66,71,17,37,40,131,216,222,235,213,226,233,241,251,242,251,247,244,192,92,99,104,113,109,118,12,19,0,22,46,68,62,116,197,96,33,80,62,100,95,104,113,156,215,142,61,96,149,150,192,169,89,172,244,251,232,212,226,245,249,238,239,184,169,181,160,156,154,160,163,114,1,0,20,22,36,21,18,19,23,2,11,11,10,19,6,20,15,5,0,11,23,24,29,10,116,105,26,28,27,61,87,83,114,77,91,62,93,89,91,96,114,134,124,151,160,189,184,175,197,193,145,55,29,94,160,229,225,220,218,183,223,190,209,186,148,135,142,89,137,103,101,33,17,41,57,114,105,98,112,110,131,133,166,166,167,211,200,227,129,33,41,149,204,243,243,213,139,69,77,44,92,122,138,84,6,19,50,49,45,41,48,52,82,134,129,178,90,2,102,177,226,235,238,189,213,210,134,84,198,194,210,213,132,82,82,33,1,92,143,167,193,122,18,86,71,60,48,49,123,163,141,151,169,50,26,122,197,217,183,102,17,13,20,74,78,79,61,33,37,90,155,173,205,152,164,173,175,141,174,168,160,173,153,170,179,172,167,171,194,176,173,174,183,190,194,187,199,210,230,233,194,196,134,96,63,91,2,48,51,98,143,190,186,228,223,255,197,200,193,184,163,175,163,144,158,185,176,175,182,161,194,166,138,163,165,148,151,184,175,169,186,171,180,161,156,135,168,177,158,143,177,139,147,156,179,138,159,149,175,136,164,149,152,197,188,155,159,162,132,144,163,144,170,183,199,178,171,162,195,181,208,184,168,176,136,155,159,154,138,89,91,105,128,113,110,120,112,119,99,117,112,115,123,109,105,107,123,134,134,117,101,127,108,113,108,116,117,118,92,114,116,83,106,87,86,91,68,72,52,88,87,92,80,92,63,95,73,53,47,59,69,78,49,68,69,78,77,80,68,87,109,99,37,30,43,21,16,21,31,45,44,64,65,37,61,57,55,48,57,54,49,48,48,54,51,48,47,64,49,46,64,54,51,66,26,41,53,18,59,22,32,33,52,31,48,58,40,75,43,40,32,48,28,35,83,44,36,20,46,11,30,26,45,10,51,72,40,40,44,46,16,50,29,28,20,13,40,45,64,57,48,74,48,46,31,15,32,25,33,48,30,30,23,22,28,24,15,
43,47,45,25,24,33,32,22,40,19,12,18,19,37,50,55,51,98,153,118,47,43,13,16,17,37,11,45,42,44,37,45,33,66,53,6,31,36,47,35,28,52,23,21,46,34,6,13,14,5,32,28,25,28,25,35,8,26,43,40,35,61,6,34,40,20,47,87,87,56,23,23,23,22,20,29,13,14,11,39,30,9,27,15,12,35,44,15,34,42,21,9,34,36,42,14,40,37,21,29,32,71,37,45,21,55,25,20,40,34,55,36,15,22,7,39,24,26,8,28,23,15,28,7,17,16,3,18,40,40,21,17,24,7,38,50,12,65,74,21,41,26,0,31,18,29,41,13,48,28,19,31,33,1,43,13,57,38,14,27,57,15,17,12,26,30,26,28,32,24,61,25,33,33,55,119,89,75,50,18,28,39,77,103,82,45,47,39,52,54,24,55,68,43,39,40,45,60,60,67,44,62,57,17,45,32,45,136,180,189,215,221,202,188,196,196,210,247,239,240,223,136,105,88,121,160,147,99,12,9,26,7,33,2,53,170,84,45,84,69,96,88,86,61,146,168,134,56,84,117,165,151,141,88,174,226,245,251,236,252,245,241,248,252,200,173,183,177,186,178,156,191,113,0,10,2,51,8,13,11,33,13,2,22,23,19,4,29,11,19,48,22,9,8,31,13,17,55,15,20,31,47,107,107,150,141,133,181,166,163,176,176,190,226,226,223,242,231,225,224,220,216,226,182,62,22,75,133,160,169,159,150,134,120,108,89,100,84,42,66,63,69,54,61,20,8,64,120,173,155,176,192,206,195,205,198,209,201,211,219,190,155,28,48,100,187,197,169,64,27,62,39,54,75,100,52,16,17,41,71,106,84,91,116,128,146,189,195,242,116,21,74,147,172,170,147,127,84,59,3,62,153,164,204,207,172,157,173,86,64,148,219,213,232,110,42,48,102,99,55,46,178,224,187,183,118,3,10,108,168,201,173,175,152,122,46,37,65,48,99,107,137,159,192,179,171,167,157,168,149,170,131,169,165,129,148,154,183,174,172,168,187,175,172,175,200,197,203,218,191,187,127,105,82,9,10,24,44,58,78,170,204,188,226,213,203,191,160,188,188,192,183,157,171,146,154,181,150,156,169,165,192,166,145,151,142,183,178,149,166,178,175,162,186,165,184,149,135,138,141,147,148,167,183,145,151,155,162,158,164,186,148,120,116,144,161,161,155,186,152,167,164,192,176,190,168,137,142,167,190,171,169,161,187,175,191,156,158,185,192,170,116,111,86,93,106,89,80,59,111,96,98,96,98,125,119,121,126,100,128,112,122,127,119,126,111,1
49,103,107,121,108,102,116,107,88,103,111,66,44,82,69,76,84,77,95,74,69,52,62,62,52,51,62,77,51,66,97,86,75,76,91,98,73,36,52,54,35,27,23,32,26,20,60,72,66,49,46,66,47,61,49,47,48,50,67,39,68,55,54,61,41,30,52,46,44,55,44,47,41,32,31,42,37,43,56,36,35,29,50,34,52,31,35,28,55,78,21,37,37,25,51,27,25,53,36,32,46,18,36,10,34,32,31,16,59,34,52,29,30,59,49,29,36,54,26,24,52,42,20,27,20,81,53,17,23,26,19,32,47,36,16,22,25,18,35,24,4,22,14,46,9,38,32,24,16,53,39,26,46,126,95,82,51,26,14,20,29,35,10,31,42,25,25,40,48,32,42,32,10,42,50,48,48,39,23,17,52,21,25,16,46,11,21,36,22,42,49,53,30,23,41,30,35,37,26,35,16,91,97,46,35,15,20,29,19,13,37,20,11,64,16,19,28,35,23,56,30,44,21,7,24,43,38,15,18,31,47,48,15,47,32,30,34,8,36,60,38,18,29,47,27,33,31,17,24,32,31,28,18,36,6,43,19,20,21,7,28,31,32,27,28,27,44,37,9,22,26,77,60,22,13,32,2,33,20,6,27,19,43,18,21,18,12,35,34,39,42,3,8,28,15,45,22,30,19,48,30,29,20,22,34,14,10,21,26,45,107,115,54,59,29,69,99,82,33,45,63,45,31,55,62,57,43,45,9,27,38,38,64,84,60,61,65,57,28,37,50,102,137,175,219,195,213,193,200,175,216,201,212,202,172,148,123,104,94,195,220,214,204,193,144,110,67,74,139,179,66,36,85,94,126,88,68,66,142,206,147,45,41,78,103,105,111,67,165,225,247,254,246,245,237,230,239,236,234,194,183,185,160,194,184,197,106,10,9,38,19,15,10,29,6,31,1,30,7,3,11,15,11,9,17,31,31,4,9,13,4,143,103,8,64,74,114,188,196,192,199,226,206,215,203,208,193,218,224,205,180,169,192,177,164,127,146,84,13,27,48,105,90,99,60,81,39,82,94,65,75,74,95,88,144,120,180,137,45,20,61,141,198,184,217,207,209,195,204,195,194,130,138,125,115,53,20,64,64,115,83,117,77,66,120,131,124,120,150,141,87,0,54,118,159,175,104,120,125,149,194,215,212,81,6,48,112,120,92,60,73,68,72,15,77,147,139,166,187,164,156,228,85,42,101,171,114,90,56,16,28,13,49,5,37,174,191,251,238,135,30,31,120,185,207,183,181,182,183,175,156,123,168,178,164,211,204,184,185,147,149,188,137,142,157,158,178,154,153,167,174,156,196,220,192,198,195,166,161,153,110,97,65,23,17,5,10,8,6,26,95,167,178,206,206
,207,201,201,184,189,161,169,161,149,157,156,166,152,151,160,154,169,176,161,163,156,138,147,138,154,165,166,174,161,187,150,152,166,157,173,189,155,153,154,152,156,166,170,181,167,179,170,166,165,170,174,152,155,182,173,175,174,190,185,197,173,166,186,187,165,164,149,150,162,181,169,211,154,156,138,177,188,167,174,172,181,92,76,89,73,103,104,73,90,74,87,124,143,121,103,146,97,126,101,125,129,107,87,102,123,97,92,136,100,96,99,111,97,119,80,90,96,76,75,68,79,72,82,91,89,63,55,65,79,83,62,58,69,71,67,75,101,74,95,65,45,59,5,18,29,30,42,46,33,60,60,57,45,48,42,45,31,63,42,52,63,65,42,60,61,82,44,56,73,8,69,40,48,64,58,58,56,57,20,50,44,42,54,30,71,59,54,54,40,36,49,39,48,31,14,18,51,55,42,46,47,47,44,38,23,58,42,27,28,46,43,44,25,42,37,27,34,29,16,39,27,39,25,11,29,31,26,41,75,33,59,41,47,24,53,29,45,60,9,45,19,53,15,40,50,16,29,27,35,27,19,34,42,12,21,29,25,7,19,69,96,81,63,43,26,31,35,17,54,45,22,36,30,34,44,16,31,18,12,37,20,35,5,30,33,13,46,42,13,14,11,33,41,35,17,11,25,47,31,8,12,24,39,12,44,22,31,82,77,18,9,16,42,45,9,46,42,24,34,22,24,19,22,29,29,35,16,31,26,26,49,53,31,20,34,42,37,37,20,49,62,1,28,43,25,45,40,25,5,31,40,23,15,5,43,28,50,21,6,33,8,28,22,54,14,15,12,16,4,22,50,41,30,19,17,49,91,71,18,34,14,10,32,9,32,32,35,40,3,20,25,12,24,17,19,3,25,21,13,22,23,17,6,15,19,42,17,35,30,10,11,26,16,16,35,71,95,115,71,98,128,84,19,63,47,42,45,44,46,55,43,83,28,33,39,37,43,68,43,48,60,67,40,59,4,31,51,73,95,158,186,204,211,188,202,216,200,189,194,127,121,146,92,133,203,253,239,232,248,248,240,252,233,227,220,107,107,108,129,112,122,95,84,173,239,171,84,86,34,60,64,65,80,128,222,237,252,243,233,255,212,249,240,237,228,175,173,202,208,181,171,122,4,8,0,28,1,9,0,28,21,39,0,24,20,4,11,8,18,10,3,7,15,15,31,25,201,162,47,51,100,123,196,209,217,192,193,210,181,181,171,159,152,139,142,63,107,80,77,89,79,39,19,40,36,60,104,119,109,106,137,149,158,162,178,169,181,171,184,202,194,212,188,105,26,73,152,181,202,158,166,146,127,146,100,92,94,60,48,23,0,13,43,102,155,160,189,118,1
27,10,22,17,20,21,4,18,19,9,20,14,12,3,0,18,7,41,67,51,72,170,195,192,187,174,184,202,176,189,172,184,183,182,183,161,174,156,165,153,120,83,17,19,33,72,99,93,106,68,108,91,111,118,79,72,66,73,61,41,26,22,36,28,35,29,26,30,8,13,15,13,35,37,31,25,12,26,13,45,19,36,48,105,98,111,145,194,145,136,152,152,80,0,9,33,4,35,52,23,32,32,27,19,41,10,13,31,12,26,51,24,35,56,39,40,44,18,35,15,21,19,29,14,18,42,42,62,76,133,172,179,212,188,198,185,205,225,211,170,166,177,122,137,142,155,145,150,141,117,147,136,153,141,153,129,137,133,118,123,140,126,128,134,127,126,130,124,165,128,150,137,144,138,144,102,154,140,159,145,134,157,144,131,153,100,143,132,121,141,149,135,150,139,124,121,124,123,145,114,142,142,119,138,133,120,112,111,153,111,124,133,134,138,157,115,144,155,117,134,136,105,123,146,136,140,134,144,131,129,150,143,130,114,130,122,114,124,153,115,139,148,126,132,129,141,140,128,129,100,118,91,124,130,130,146,137,120,133,121,157,117,166,160,150,132,168,159,173,147,130,114,106,94,81,81,118,97,81,92,76,51,63,79,90,94,93,67,91,70,17,64,17,83,121,125,189,251,241,253,255,250,233,159,63,18,10,10,48,39,67,94,43,74,54,53,64,39,53,81,55,38,61,48,71,64,60,67,59,28,56,76,41,87,68,54,44,41,62,44,54,67,52,68,30,86,70,60,43,56,47,40,62,56,40,55,81,30,56,38,45,34,65,45,64,30,36,52,53,54,50,41,55,45,39,35,78,51,33,28,51,59,25,27,50,11,21,52,45,48,31,47,50,25,34,27,36,25,42,19,43,34,22,40,23,29,29,29,33,26,39,45,47,38,12,48,47,49,36,21,33,25,11,13,9,46,24,61,15,19,23,35,47,57,34,13,13,40,40,18,0,33,15,30,22,15,13,31,24,14,32,26,30,18,35,16,18,37,37,28,63,27,46,10,39,20,45,28,13,20,21,28,17,18,21,16,41,50,30,51,42,8,20,18,50,47,44,58,113,182,174,146,120,100,77,68,49,19,33,28,34,30,33,9,11,16,42,22,45,16,17,39,23,33,31,31,18,72,96,51,49,16,29,36,25,19,36,20,61,30,52,41,30,15,17,25,43,22,31,44,43,28,34,13,49,38,34,9,16,33,37,21,19,18,32,12,11,13,45,23,45,12,32,22,32,30,20,68,16,52,54,21,28,32,15,17,47,48,29,23,28,24,33,37,6,9,11,30,24,9,4,30,22,28,24,47,81,113,127,82,70,25,36,28,34,6,12,57,8
7,103,101,98,134,185,205,176,180,174,176,162,174,177,189,164,151,210,158,203,158,164,149,136,152,158,153,99,83,108,137,155,198,197,197,173,187,138,181,148,154,164,155,185,164,162,178,180,156,146,178,179,193,165,168,152,124,139,131,150,155,153,168,154,164,144,145,160,174,189,157,150,175,169,162,165,138,160,130,127,118,128,134,155,148,163,175,182,173,191,179,131,130,113,116,102,73,26,80,91,124,173,160,192,100,10,3,17,26,6,26,19,8,10,11,29,10,16,2,17,4,17,6,8,4,11,18,24,24,65,79,92,114,179,171,205,188,171,178,188,192,179,171,141,165,168,129,128,107,97,108,80,55,39,29,42,21,76,123,129,94,111,131,110,58,35,32,58,16,19,48,44,31,29,9,42,44,16,35,13,30,45,2,19,25,9,28,9,26,8,24,22,37,42,55,132,156,184,189,205,155,137,134,142,68,36,5,28,24,20,38,19,13,25,20,26,67,47,48,46,40,49,48,51,6,26,29,25,56,27,16,32,29,53,19,59,77,82,125,149,184,196,212,217,217,195,177,173,185,206,157,177,165,142,142,149,150,149,140,153,139,118,130,125,122,163,145,121,138,137,127,115,104,119,106,126,125,140,116,116,120,137,130,140,140,147,174,150,141,155,134,127,139,143,130,153,133,111,132,134,124,122,129,120,145,114,128,105,143,160,125,152,136,121,121,126,132,121,135,117,153,104,132,134,138,127,115,81,118,136,149,140,143,128,121,121,137,136,132,130,128,139,138,153,132,140,142,138,142,137,129,124,125,106,123,136,118,143,120,131,108,115,129,101,117,76,110,116,156,126,141,112,126,132,130,105,150,147,140,153,126,122,120,88,81,84,68,54,76,76,101,80,93,76,72,82,68,93,99,67,70,54,75,67,19,101,115,90,97,114,175,254,250,246,252,220,244,177,86,41,0,10,17,56,44,83,66,49,43,54,39,51,51,52,78,42,67,42,73,60,41,46,61,69,62,71,62,32,53,42,49,73,55,60,45,36,52,58,51,40,47,24,61,58,77,51,26,46,46,29,62,48,47,55,34,34,50,26,48,65,37,43,51,46,43,42,44,49,26,23,17,46,68,41,30,49,35,36,44,29,31,23,40,47,27,19,40,34,47,54,30,49,43,16,21,43,49,38,35,32,45,45,45,31,28,36,30,25,31,23,21,22,39,30,24,20,11,26,44,50,45,13,40,35,25,55,16,23,34,43,30,26,32,6,31,15,28,32,22,6,47,36,33,17,21,36,40,31,31,60,33,17,29,12,18,31,25,58,2
7,15,13,23,26,28,31,37,29,26,54,45,50,44,20,29,24,35,43,5,38,27,41,34,64,127,136,158,163,133,91,68,45,17,14,38,18,41,1,25,44,41,47,54,3,27,23,11,8,40,7,31,84,78,74,39,16,22,13,17,18,43,49,38,40,24,26,30,11,33,28,20,40,20,29,36,27,32,25,15,7,19,32,39,14,41,16,17,7,9,14,18,32,20,23,13,43,36,17,32,33,28,32,94,50,9,6,22,25,15,11,20,42,7,25,22,21,14,20,8,17,32,12,18,17,36,18,48,99,129,129,109,61,25,37,40,24,14,25,48,40,79,101,107,125,118,182,189,173,169,155,177,198,162,202,172,144,173,167,174,172,172,153,159,181,167,161,164,161,161,96,92,108,140,141,175,174,190,195,156,179,181,185,162,151,149,141,150,141,145,150,165,164,170,197,162,158,137,138,136,131,146,144,147,177,183,172,174,155,152,165,166,172,163,140,160,140,165,163,171,155,169,156,166,148,143,173,158,160,185,154,202,195,179,188,174,165,155,109,75,50,57,61,94,123,138,108,18,8,19,5,18,29,33,11,9,13,5,31,4,21,6,7,16,17,19,4,35,22,9,12,51,31,73,94,156,176,168,168,135,148,118,116,104,95,108,86,52,63,82,58,52,66,63,82,94,42,27,38,36,64,75,44,42,20,15,47,21,12,14,24,18,37,17,34,29,30,21,5,39,18,17,45,21,29,16,33,50,34,18,42,35,15,24,33,29,44,61,126,137,122,137,121,115,137,123,77,44,19,20,31,36,28,41,32,34,38,54,29,30,40,11,32,18,11,43,25,29,27,15,24,36,16,68,61,57,137,137,125,200,217,189,199,170,161,159,152,168,181,168,168,167,123,149,165,129,154,137,160,146,154,136,115,138,159,107,153,147,122,139,130,117,142,132,121,127,114,138,137,124,132,112,122,132,138,108,132,161,150,146,128,111,99,81,138,103,133,117,113,140,132,140,138,104,131,142,154,143,148,164,134,167,141,138,124,116,146,142,127,141,131,120,124,112,147,131,139,130,106,105,123,122,145,134,95,122,97,128,130,156,126,120,129,136,103,129,133,120,112,126,121,132,120,115,123,141,126,143,139,122,135,104,111,129,118,110,117,100,95,109,94,83,57,108,88,56,94,90,113,102,84,109,125,132,120,102,107,109,109,108,98,89,86,114,67,84,92,78,75,56,68,69,59,86,79,61,51,68,109,102,99,124,123,151,206,255,212,254,252,254,247,147,70,15,0,1,28,28,56,34,52,56,68,49,51,76,53,42,31,78,67,46,
65,55,39,65,37,52,70,25,34,41,31,63,39,43,48,62,44,73,61,67,45,65,56,47,41,73,57,33,66,35,66,47,60,50,60,35,51,43,57,50,37,53,58,57,63,31,41,55,65,44,12,35,50,43,29,27,33,11,45,18,30,31,53,25,29,49,41,31,43,47,48,49,42,47,31,26,30,28,32,60,34,46,16,35,41,26,21,22,25,36,62,22,43,37,46,27,34,60,35,49,28,10,42,8,25,45,50,27,15,39,21,38,32,18,14,46,36,17,35,39,29,20,6,39,15,34,24,23,36,30,52,26,21,18,23,14,23,10,16,15,32,22,6,44,46,31,44,35,47,31,26,38,19,34,38,19,31,59,17,34,34,51,53,48,55,66,134,112,168,168,150,100,75,44,43,29,26,25,21,31,22,39,58,27,13,27,22,20,27,35,32,105,82,68,21,25,6,30,25,44,47,56,27,27,28,61,26,41,71,43,22,37,16,31,33,27,39,22,24,33,39,28,29,28,11,11,23,23,13,29,25,31,24,21,15,21,9,29,32,8,40,29,49,26,33,18,7,8,24,21,30,25,6,28,37,5,23,11,16,21,27,25,34,64,79,129,113,63,58,34,38,45,45,29,32,23,34,74,118,105,113,97,107,179,178,175,138,129,188,175,160,180,169,124,134,160,181,172,129,176,166,168,167,162,165,171,171,150,158,115,77,96,119,173,197,215,156,167,177,185,167,141,169,159,158,180,148,161,155,142,163,164,170,165,153,144,154,147,142,169,160,160,177,141,171,155,167,194,167,180,173,165,162,167,180,174,143,163,149,188,188,151,166,145,157,191,161,187,182,176,213,206,201,189,199,188,154,118,51,72,44,50,57,112,111,29,2,20,16,27,23,27,26,31,5,12,8,35,4,27,5,19,32,11,18,57,22,21,14,46,41,66,63,116,111,109,110,82,88,60,71,75,73,71,91,87,94,92,85,80,44,42,62,28,49,23,14,20,35,19,30,21,13,17,27,17,9,17,24,5,25,29,26,29,20,29,28,25,11,18,18,35,6,21,11,22,40,20,26,20,28,43,19,38,38,57,85,74,102,115,163,144,135,122,96,20,22,23,31,43,38,61,23,41,29,0,26,8,31,26,7,38,33,40,12,20,47,30,66,41,70,63,137,166,168,208,209,208,184,167,160,163,185,152,166,126,158,188,176,174,161,147,169,140,129,145,124,155,144,146,139,144,130,124,125,153,107,130,121,118,146,168,135,133,97,113,147,114,119,118,135,136,121,112,116,108,97,74,67,123,109,102,131,97,101,107,105,122,98,89,96,97,106,81,108,109,118,89,97,112,123,106,96,121,114,138,113,113,121,128,149,133,130,139,116,81,130,1
08,136,122,143,136,104,128,89,115,99,98,119,103,104,104,122,93,93,132,97,94,102,135,128,110,116,127,118,149,136,137,101,90,134,139,110,96,129,118,89,68,72,82,133,66,74,105,109,91,93,94,124,118,133,145,105,148,128,108,105,144,122,121,120,99,106,93,92,73,86,49,93,74,61,72,81,78,38,68,91,123,155,159,151,137,137,184,227,254,247,252,250,246,246,182,111,38,4,12,28,30,47,32,48,38,35,55,43,61,64,60,71,81,89,61,55,71,69,37,44,45,43,67,54,67,42,64,49,20,45,61,72,37,60,60,69,48,48,57,60,52,52,5,48,34,51,24,39,64,36,48,55,32,36,58,58,75,54,31,52,24,55,16,21,48,51,30,25,14,40,11,44,42,27,31,48,29,43,41,46,20,52,37,30,20,20,23,26,20,47,28,34,46,32,31,32,51,40,21,33,32,28,12,31,29,41,21,34,27,34,38,31,67,32,40,18,64,16,16,38,16,59,52,23,26,28,35,31,51,33,10,16,8,26,32,2,12,24,1,15,30,53,27,13,22,14,34,36,28,31,29,39,23,51,22,47,23,33,40,28,24,24,46,37,28,54,5,22,25,19,20,27,19,46,57,47,25,28,32,48,42,67,116,186,174,153,148,143,109,68,33,36,45,41,32,41,19,24,35,23,48,19,28,25,75,58,68,31,41,20,15,8,31,29,37,40,19,33,14,35,54,53,55,41,49,46,19,30,28,47,17,19,24,24,33,48,24,53,30,17,28,12,29,29,21,17,21,9,15,40,12,20,39,54,19,3,34,15,21,21,48,8,19,45,40,31,44,31,25,30,24,53,36,84,96,126,153,110,62,46,48,28,3,48,26,16,35,50,36,67,93,123,130,109,92,150,199,182,165,159,149,155,171,182,170,180,164,158,190,186,136,155,183,162,158,160,173,159,168,200,180,192,159,135,128,88,103,134,182,163,161,170,153,130,159,173,185,156,161,178,160,153,163,168,177,172,199,160,151,152,145,129,125,134,162,160,184,145,145,177,151,150,166,171,140,165,158,150,133,141,157,166,159,156,135,167,137,162,175,163,149,173,169,194,188,213,195,191,183,195,170,161,106,52,39,61,74,79,3,11,10,9,18,18,19,27,21,4,5,38,27,11,3,7,23,23,11,4,34,18,13,23,39,28,39,69,89,102,91,92,97,99,104,112,100,106,105,63,62,44,47,21,39,2,26,17,38,32,12,35,53,31,17,22,30,51,3,21,26,38,31,32,27,18,19,47,42,41,45,19,16,37,20,34,45,22,11,27,5,22,56,13,16,7,21,11,26,60,85,117,127,143,132,159,136,113,110,77,46,32,30,11,28,19,19,51,52,48,13,20,29,18,
33,18,26,19,30,16,34,94,106,143,166,178,177,217,196,204,181,195,174,184,173,173,160,176,177,163,157,139,166,161,151,143,142,126,127,138,148,171,139,140,116,115,113,144,110,103,139,120,152,127,151,138,148,149,139,104,151,111,120,146,93,93,114,125,138,98,131,95,107,135,107,114,147,111,128,113,124,115,115,100,122,103,112,91,92,119,81,73,65,62,119,70,78,111,96,137,97,122,118,134,119,139,126,86,117,73,98,93,122,120,110,105,124,107,146,97,90,85,62,90,92,97,103,88,86,66,91,101,93,121,129,118,101,116,111,90,77,120,112,119,98,93,121,97,64,100,111,70,82,99,93,113,96,108,146,125,130,147,154,151,126,124,129,110,89,91,108,92,102,118,101,86,102,114,101,106,71,103,116,79,79,56,78,72,54,45,53,86,61,116,162,139,129,110,106,131,165,186,249,252,255,250,234,230,198,132,69,3,15,53,23,35,49,63,66,56,56,66,69,62,74,53,90,46,59,62,89,75,30,51,70,77,34,57,90,60,57,47,40,47,67,41,37,38,86,48,66,44,33,71,57,70,46,55,60,47,51,30,55,16,28,76,49,13,55,48,53,27,35,41,68,37,50,58,24,38,18,41,39,35,33,40,39,18,51,39,17,33,31,33,14,55,43,30,15,58,28,41,11,28,11,47,15,56,39,13,23,58,29,42,33,25,59,43,55,18,26,28,46,19,46,41,14,41,37,33,24,51,31,27,15,0,12,43,30,45,6,4,34,24,29,27,19,2,29,17,16,32,48,34,20,41,5,36,33,6,7,21,20,26,28,37,35,5,63,52,49,43,44,7,18,8,27,26,39,48,42,31,5,26,21,32,50,36,27,20,43,45,35,21,44,69,60,102,103,151,171,162,165,118,131,113,109,58,35,47,26,29,24,28,24,17,26,41,74,61,58,32,15,33,32,52,50,14,40,71,113,120,86,48,47,50,63,25,22,34,60,29,45,48,42,33,27,38,18,59,34,21,25,6,9,5,39,50,22,17,8,12,31,34,46,44,20,14,20,40,17,15,7,8,17,31,27,28,11,37,40,30,50,108,148,191,129,104,56,60,33,15,29,14,39,27,19,49,18,60,62,106,110,110,100,126,106,176,177,187,182,158,137,170,154,147,176,159,146,165,147,144,166,157,160,163,156,182,171,166,138,172,146,187,181,159,143,87,76,126,118,175,177,176,169,173,159,161,181,153,170,154,201,168,176,158,174,174,162,166,150,126,123,152,146,162,164,144,178,155,144,177,168,159,186,179,180,184,148,152,135,152,168,175,192,154,150,158,170,147,187,180,147,16
1,163,184,233,224,216,213,172,177,222,145,149,115,120,90,72,106,11,10,18,29,4,7,22,19,14,9,14,16,3,3,17,17,1,28,17,5,3,9,26,3,41,39,41,50,97,135,100,93,103,76,37,63,27,57,25,5,25,28,19,31,44,27,2,4,18,28,16,33,31,18,13,19,30,14,40,29,36,22,13,41,38,21,9,23,33,30,36,36,21,8,40,38,12,11,44,15,11,17,25,21,30,17,33,38,42,43,59,99,112,97,82,57,40,54,18,34,43,11,7,15,56,43,25,29,52,41,7,42,17,12,6,17,26,5,41,50,87,146,199,232,199,197,206,165,159,157,136,162,146,161,138,150,136,127,129,141,133,146,141,148,129,132,113,80,107,99,130,129,139,144,151,114,127,111,109,108,97,143,151,144,137,136,121,127,117,112,93,127,136,140,121,144,140,149,155,152,132,136,117,134,136,147,170,162,162,123,142,151,144,141,154,152,132,119,134,130,120,141,121,128,119,135,174,150,120,131,149,136,175,126,121,130,119,122,113,117,137,127,110,130,104,87,94,111,108,105,92,103,131,130,111,98,109,130,110,118,144,103,152,134,140,159,143,151,163,131,115,148,116,108,100,120,108,105,97,114,121,112,127,137,120,122,133,112,75,84,106,109,88,114,101,108,62,76,84,77,92,98,94,80,86,61,117,91,90,110,63,81,71,69,63,50,76,47,54,66,64,53,30,27,65,52,76,109,160,116,72,106,142,181,231,242,255,253,246,255,235,170,87,42,20,4,21,43,61,58,57,61,71,54,50,64,75,51,59,54,73,77,66,57,46,60,73,24,86,65,31,59,36,39,61,61,48,54,38,45,63,54,45,68,61,50,55,26,57,58,66,51,37,27,44,65,59,60,35,44,51,51,63,69,6,27,72,23,72,26,46,43,18,45,12,19,36,43,25,23,27,40,33,12,40,28,28,35,39,54,29,64,37,30,15,47,45,3,56,5,35,22,27,36,21,32,7,47,21,24,23,38,65,43,35,37,20,14,51,37,28,29,11,14,34,33,40,37,12,21,31,30,5,7,39,30,41,10,47,21,41,24,32,16,44,48,45,7,41,18,24,24,36,1,27,24,12,11,26,35,15,30,19,26,17,39,21,41,30,45,19,29,10,2,50,29,51,39,23,19,24,30,17,38,38,31,16,46,28,24,63,68,131,151,155,161,176,144,153,127,163,131,121,105,50,68,61,58,105,106,109,84,55,76,114,134,147,169,186,215,243,196,127,59,33,47,73,23,25,37,22,3,11,42,21,21,15,53,26,27,61,38,50,20,20,19,24,43,17,29,38,0,20,71,44,18,3,15,19,8,33,43,8,22,16,41,47,45,63,79,145,116,172,1
25,100,72,28,56,64,11,34,9,32,12,42,32,32,62,59,84,113,116,89,83,117,99,143,181,166,143,166,161,159,161,133,184,169,190,160,170,180,161,184,177,169,141,152,145,156,161,157,150,186,158,172,196,160,147,130,110,135,128,145,172,200,177,186,175,196,170,185,158,173,184,160,166,173,177,160,177,156,143,145,181,156,141,165,178,166,184,184,169,177,169,187,192,153,160,167,172,159,159,176,160,175,152,173,169,172,147,168,151,145,175,180,208,193,219,208,206,173,206,206,165,156,156,160,154,134,115,16,0,29,20,6,7,9,2,13,12,2,15,7,12,17,11,24,15,17,2,31,3,1,14,11,18,47,46,29,34,57,42,21,41,6,24,15,30,7,21,23,10,30,22,33,29,0,16,21,32,15,22,35,24,32,12,28,30,42,11,19,31,44,14,35,25,25,9,11,24,25,10,19,18,10,11,30,25,25,32,29,29,61,33,29,53,29,33,66,37,63,48,57,29,28,13,35,36,20,25,47,23,11,22,13,35,33,30,40,26,8,9,47,29,7,40,44,30,50,22,30,137,177,203,150,137,138,113,107,87,102,126,131,131,124,116,111,106,101,122,105,90,139,124,107,134,125,129,131,130,147,127,164,136,125,121,126,125,141,114,98,134,133,114,127,105,110,135,96,86,114,113,125,129,91,130,121,101,80,76,106,112,89,120,114,109,99,99,54,86,97,106,102,106,97,92,91,99,102,103,123,104,116,123,124,123,107,112,116,79,144,116,99,82,100,69,74,71,129,117,112,135,141,121,107,96,89,108,71,71,139,145,148,115,98,115,119,86,118,115,140,124,148,154,136,150,124,127,133,141,141,134,118,111,103,116,119,96,151,143,134,133,139,134,136,128,142,93,99,111,94,88,72,74,89,87,66,91,96,72,120,108,88,94,103,110,67,81,88,86,66,88,71,36,54,63,56,42,59,66,50,64,25,44,41,36,76,120,154,187,125,127,104,141,151,195,241,234,227,237,248,251,229,174,114,39,1,15,13,32,45,62,39,58,76,46,66,44,28,48,72,55,45,56,59,60,64,44,43,69,52,57,62,55,47,45,56,42,44,73,69,42,34,49,60,56,55,52,53,53,38,71,20,33,46,60,52,60,26,25,46,30,38,57,43,62,28,57,56,51,19,38,42,53,49,8,42,40,9,22,33,12,60,27,48,57,24,52,29,35,57,38,34,26,33,16,32,18,28,28,42,2,8,32,28,37,41,33,27,16,28,36,26,20,31,32,33,34,39,41,36,23,22,11,28,24,36,25,23,6,14,19,15,31,23,25,23,18,34,18,40,31,19,41,15,52
,8,43,44,16,26,41,18,26,26,16,25,17,17,31,12,1,44,19,19,17,5,41,37,22,28,37,46,39,24,36,38,17,28,26,34,29,32,24,17,14,17,17,3,25,57,30,29,40,47,57,86,96,134,169,172,181,195,165,185,174,174,152,151,177,222,169,179,221,196,193,193,193,196,139,156,112,69,14,32,40,45,54,60,25,14,51,30,31,33,16,20,32,33,25,51,29,61,57,34,48,36,29,41,17,33,55,55,27,47,59,66,31,31,23,40,65,84,75,95,69,144,113,167,119,94,97,46,109,29,22,33,14,19,7,31,21,7,25,25,53,61,85,85,84,102,95,106,120,102,159,169,171,158,170,174,164,172,172,144,178,171,170,153,176,184,147,169,182,160,155,154,164,157,150,168,166,145,167,146,157,171,180,165,134,85,92,117,161,194,191,166,195,178,155,171,158,138,173,164,159,189,167,179,167,181,182,165,151,165,195,176,158,177,178,169,195,173,159,188,165,170,134,155,172,178,184,196,162,146,157,158,147,155,174,162,133,150,162,185,208,221,212,201,189,179,182,199,172,153,167,165,161,151,103,3,10,9,30,11,17,25,7,11,31,41,14,15,11,29,10,11,23,57,23,17,11,22,36,42,15,26,15,45,36,44,38,38,34,19,17,20,38,25,34,33,33,21,11,27,25,24,29,43,25,9,20,22,3,20,24,18,13,18,44,25,24,18,20,21,24,20,16,37,37,22,31,43,36,21,33,19,27,30,21,68,39,46,42,37,35,7,53,56,32,32,34,43,20,27,13,41,15,28,26,38,28,25,18,40,42,30,35,57,28,36,45,27,42,21,27,49,22,25,27,38,63,90,118,145,118,118,94,71,97,99,116,115,125,96,89,108,88,110,107,132,84,103,120,137,135,161,131,134,149,155,165,179,136,154,129,127,140,116,137,97,105,85,66,65,64,67,97,105,86,95,101,113,123,107,69,91,86,77,82,85,69,78,83,67,63,68,39,44,64,75,35,47,73,63,59,65,81,86,68,68,79,87,70,83,101,75,69,97,39,33,27,50,57,60,47,81,97,88,92,81,72,113,95,80,63,85,88,81,107,83,113,121,91,90,95,48,100,87,124,135,88,113,75,86,70,104,82,69,36,79,112,89,55,95,120,109,114,66,102,116,101,101,112,104,118,108,114,130,114,106,90,75,65,58,60,30,67,74,70,60,57,93,81,75,75,67,106,68,53,76,88,62,51,65,64,76,47,66,41,61,62,66,67,58,20,66,69,142,172,161,194,178,137,102,113,133,140,203,245,252,250,250,254,207,134,74,68,33,32,26,26,38,27,62,39,41,60,69,51,40,38,78,56,4
4,52,33,56,37,53,51,34,62,53,46,58,37,50,42,67,62,19,55,60,41,50,71,44,32,54,49,43,25,54,27,40,36,48,48,24,58,30,50,43,43,23,56,31,59,35,40,33,35,33,15,24,26,36,32,38,38,44,47,38,19,38,45,29,35,32,35,28,29,23,17,46,59,45,39,45,31,37,28,33,32,30,27,25,22,29,16,28,37,19,37,37,38,16,38,51,41,16,39,45,31,30,8,33,8,20,40,36,52,39,20,6,16,4,18,10,30,2,9,14,15,12,28,19,14,31,16,25,35,13,43,22,26,35,8,27,22,32,39,11,7,18,12,56,22,15,0,47,27,22,44,11,15,39,40,26,59,34,10,30,6,32,19,40,41,22,40,13,19,19,29,44,30,29,35,49,60,57,56,90,41,83,92,113,116,135,129,139,161,143,160,91,64,51,19,42,35,47,60,39,49,10,28,53,8,43,32,45,28,22,20,37,11,16,40,37,28,18,47,91,38,61,55,84,80,72,90,135,120,64,88,76,93,111,128,155,129,113,183,145,165,114,63,55,47,39,63,22,32,46,37,14,30,29,7,13,25,34,15,71,45,80,77,87,94,66,70,96,115,106,133,167,193,162,159,191,166,165,194,155,186,132,169,173,167,181,135,155,141,145,159,178,166,161,170,143,177,193,137,146,160,173,188,188,167,151,119,91,117,125,150,182,183,199,175,193,147,159,162,173,174,179,154,154,164,172,174,169,177,192,173,174,147,185,189,196,172,180,165,187,189,145,157,139,151,187,169,168,173,180,162,182,171,178,160,164,174,169,159,172,171,202,199,204,200,191,176,169,173,191,157,170,158,172,156,100,12,9,43,15,16,22,12,7,8,2,30,13,10,3,5,34,10,17,16,35,14,13,7,21,20,18,10,15,8,36,42,17,23,43,21,33,19,26,51,17,33,12,27,15,12,21,18,51,8,23,28,27,13,30,26,44,44,47,6,43,14,23,37,33,26,18,33,8,41,16,45,41,44,43,22,65,41,25,23,56,9,27,33,44,20,12,41,11,18,18,32,14,45,34,32,27,26,23,33,20,53,20,9,23,64,54,66,51,127,79,66,71,54,73,54,64,55,122,84,97,103,97,130,145,147,102,133,73,99,77,106,93,101,89,62,69,106,93,95,96,84,90,74,97,104,121,95,101,117,124,110,117,145,139,139,151,102,96,137,107,91,111,131,101,112,96,87,97,111,112,104,99,80,107,108,139,131,144,136,113,136,121,124,126,75,102,132,130,153,133,102,57,71,62,67,83,92,100,97,95,102,89,87,92,111,81,134,56,75,75,49,88,96,100,100,102,116,108,81,113,84,75,76,94,81,108,110,108,127,109,97,81,118,84,84,77
,90,118,83,121,111,96,110,90,84,91,80,87,81,75,87,80,86,69,93,123,102,95,75,76,102,107,95,111,77,89,101,91,83,105,98,92,74,72,70,46,74,50,71,78,69,101,97,72,79,64,60,103,88,53,40,76,78,64,60,80,58,70,66,81,86,68,81,38,63,71,62,38,84,90,133,173,211,164,172,103,97,104,121,169,213,244,254,250,234,204,180,148,83,76,45,24,6,15,27,7,37,48,42,32,53,69,63,26,41,66,43,40,50,57,42,60,51,42,53,58,37,51,53,36,58,48,52,40,59,27,50,66,40,61,61,44,46,61,48,51,53,43,35,27,26,48,31,35,47,44,37,20,29,52,49,11,31,38,64,21,19,55,51,33,24,58,45,46,58,24,53,17,6,48,50,24,46,36,32,36,48,37,22,36,25,35,55,17,48,41,21,38,26,36,23,30,31,15,20,25,15,37,41,51,27,17,9,27,24,17,26,25,36,22,22,55,29,13,10,24,42,17,22,15,35,18,33,12,28,18,21,15,15,24,59,21,41,26,33,30,12,0,28,20,32,28,24,21,23,22,12,27,11,9,36,16,35,19,34,19,22,41,31,64,58,60,43,18,42,19,14,12,50,46,15,27,30,70,37,24,21,37,16,38,16,55,22,22,21,19,23,32,37,35,33,50,43,73,86,71,53,50,52,3,42,151,138,61,28,27,30,20,28,36,34,52,34,38,27,64,41,19,43,23,16,27,31,15,43,37,52,71,58,70,72,78,95,92,95,79,89,85,106,63,79,78,61,45,25,70,39,61,44,28,3,15,38,16,4,24,36,48,24,30,25,18,48,81,67,69,73,89,105,95,109,89,128,143,181,206,175,151,150,171,159,173,150,145,155,174,151,131,130,160,150,156,146,147,150,188,169,159,127,157,174,176,150,172,150,155,175,160,182,169,126,105,96,78,132,158,144,175,182,154,145,180,190,173,168,133,164,163,170,197,175,174,159,170,154,158,144,166,169,165,139,151,172,174,153,173,178,172,152,189,184,152,179,183,178,166,182,166,171,150,177,163,158,163,173,182,186,193,205,180,184,174,191,166,164,163,144,159,156,116,21,1,24,4,17,28,12,16,8,34,11,25,0,0,4,14,15,7,20,9,11,14,8,11,33,45,38,17,36,23,31,31,24,29,17,29,34,27,50,43,27,43,26,14,39,53,25,30,20,25,23,12,28,47,20,26,9,30,35,18,23,35,38,10,13,14,37,42,36,49,42,40,49,13,32,24,45,63,23,43,31,1,28,3,2,12,25,25,33,41,32,56,40,23,6,21,11,39,23,34,54,24,42,48,78,121,156,215,193,216,214,206,215,171,202,199,194,196,187,153,134,149,106,136,105,94,107,121,124,108,138,130,160,127
,129,105,98,92,95,79,80,103,92,105,99,90,56,102,113,123,80,47,75,85,82,73,73,71,75,111,101,92,127,119,125,129,123,118,149,138,134,109,119,121,110,124,166,139,183,134,143,156,138,136,124,106,147,127,179,152,93,108,88,103,114,125,107,170,137,110,115,144,164,150,129,136,119,94,94,110,141,156,154,144,163,147,128,149,157,114,108,106,96,84,108,111,117,84,105,103,110,72,103,78,75,84,85,94,97,84,104,135,130,117,94,100,91,107,111,97,99,99,131,121,107,107,114,104,87,60,93,62,81,102,110,103,99,67,92,94,59,109,115,79,114,81,81,90,107,95,98,94,118,111,91,97,66,89,59,68,95,76,67,61,52,78,64,73,40,64,77,89,47,65,71,60,42,60,40,62,67,78,111,142,159,146,182,156,129,113,130,121,192,191,172,178,215,193,188,162,141,152,151,100,59,39,21,9,11,19,31,30,52,70,53,63,75,53,70,46,42,43,48,58,37,43,71,65,44,74,69,33,64,51,35,65,59,61,46,39,71,37,38,22,57,42,40,26,40,51,61,26,38,52,26,17,30,61,55,38,49,39,18,39,29,18,36,27,33,58,55,22,35,55,33,47,44,51,11,27,27,34,33,33,45,31,41,61,38,41,45,51,30,52,41,31,41,14,68,37,30,11,42,28,31,38,27,39,33,6,46,37,26,33,25,13,33,32,38,33,37,36,35,36,3,15,13,30,13,21,14,28,18,24,33,21,25,0,0,33,46,27,41,16,48,17,14,20,11,10,17,20,28,38,9,51,37,20,36,28,11,57,4,42,32,46,38,10,15,11,33,21,19,5,16,44,34,38,36,13,48,31,0,35,20,34,29,32,16,40,33,31,27,41,23,27,23,34,11,7,42,22,21,46,77,67,102,86,112,118,164,243,154,71,11,54,38,31,47,46,58,44,39,51,52,41,41,50,58,40,32,3,16,36,18,18,32,37,32,35,14,36,38,56,20,30,27,8,46,41,56,32,44,17,16,11,46,1,32,18,2,25,16,24,36,34,15,37,25,53,71,57,55,72,83,94,74,79,77,68,96,96,121,161,184,168,175,142,140,150,154,125,149,147,137,153,140,138,163,169,169,171,155,159,187,152,154,137,171,178,158,162,156,167,158,192,171,153,182,138,178,140,127,96,115,107,137,148,192,179,164,155,158,145,141,144,157,196,146,158,179,193,176,145,138,133,131,144,139,158,147,153,153,177,177,148,160,162,166,158,161,129,177,161,169,178,144,142,168,147,158,150,144,159,176,187,195,190,204,196,195,182,184,183,180,171,160,150,183,112,31,1,31,5,0,22,0,35,24,28,
19,13,5,15,14,20,11,28,24,10,15,10,4,10,40,24,20,27,20,28,35,21,32,35,34,51,35,62,72,69,62,21,50,28,6,4,20,35,25,16,8,23,35,27,30,18,22,26,31,23,24,39,37,27,30,28,64,48,37,55,38,47,22,19,38,21,2,33,32,27,40,30,21,23,38,14,35,16,31,45,24,32,29,25,14,18,14,24,28,39,30,57,61,142,102,158,108,157,148,165,154,168,217,141,176,153,153,147,121,111,111,104,91,75,105,94,126,151,145,145,90,112,126,101,125,128,110,90,122,127,104,92,86,93,114,99,77,99,103,115,94,91,66,88,106,92,97,119,68,110,118,141,113,90,149,129,143,116,138,127,141,119,119,135,120,129,132,120,113,93,90,85,79,89,109,79,67,101,111,125,112,121,128,143,132,141,116,92,129,120,109,130,142,116,121,134,122,115,105,115,121,118,116,111,101,115,115,111,95,95,108,86,87,98,84,65,119,67,68,93,97,69,91,99,94,144,93,80,106,94,104,113,123,83,103,82,112,89,123,124,108,102,121,108,102,101,99,105,97,66,102,93,96,107,103,76,85,80,95,108,77,96,98,72,99,81,72,98,122,78,96,77,89,78,87,59,90,78,74,78,82,84,64,79,81,66,51,61,55,37,58,77,78,43,82,62,51,63,71,63,36,63,52,88,144,176,190,168,193,139,102,94,135,107,104,133,152,179,202,182,187,249,250,243,227,175,140,74,41,29,6,16,14,15,43,55,50,48,40,70,59,52,53,56,42,67,44,70,30,55,46,42,51,56,34,40,61,21,28,64,28,51,61,49,64,49,42,48,44,44,41,30,64,41,32,39,29,33,33,40,43,19,49,42,15,52,25,34,36,59,47,26,33,22,26,16,34,29,28,23,39,32,22,28,52,32,40,13,17,20,33,46,34,44,28,26,32,37,32,19,27,37,26,10,38,27,9,28,38,34,27,9,9,6,30,39,23,7,17,20,20,48,27,30,18,25,35,47,33,20,19,7,31,13,20,20,27,44,4,44,30,34,20,29,24,24,18,26,32,28,24,26,32,12,22,17,8,31,9,33,2,20,39,24,24,46,31,20,52,36,16,17,42,25,31,43,16,19,38,17,18,19,30,31,18,14,40,23,43,25,31,46,47,37,16,37,34,26,30,18,21,31,56,39,28,62,58,74,67,73,109,126,71,39,43,42,32,90,104,88,86,115,117,83,67,95,100,86,61,30,20,37,48,29,67,55,51,23,34,26,7,13,29,27,31,20,23,13,12,3,22,28,20,28,9,15,28,56,31,38,26,18,22,22,25,44,23,27,68,68,75,71,82,79,97,83,72,75,101,65,99,97,179,168,183,161,176,179,138,162,137,168,170,132,131,150,166,148,160,159,16
4,149,190,175,147,130,149,150,160,139,162,118,158,160,148,149,162,166,152,158,185,164,153,156,100,104,102,152,168,159,178,144,153,154,161,160,166,148,169,166,154,139,127,143,141,138,131,117,132,146,123,158,159,150,146,157,116,147,166,152,183,142,159,154,166,179,150,125,149,162,165,189,154,180,163,185,159,199,194,208,213,194,214,190,154,149,196,157,144,123,11,0,15,44,6,8,8,31,1,16,18,13,3,0,11,10,1,3,14,16,43,12,20,11,40,21,27,14,27,6,42,12,25,35,33,60,65,40,62,79,62,31,28,19,7,11,33,38,7,52,28,58,25,7,40,45,32,19,78,11,34,51,44,50,18,34,48,15,47,25,0,40,28,37,25,19,40,32,31,22,19,31,26,20,11,31,26,30,21,24,7,34,10,14,25,23,44,39,38,34,56,95,47,67,70,62,51,81,37,39,72,76,54,71,62,29,70,72,97,101,102,108,88,108,92,94,94,84,94,81,81,76,62,96,109,104,92,112,118,120,105,105,64,83,139,112,125,77,111,116,140,113,109,123,125,95,119,93,86,105,97,77,56,60,91,111,91,60,81,91,96,101,98,88,52,80,78,82,76,72,71,83,92,75,81,59,81,86,115,105,92,96,118,107,105,129,87,61,89,76,74,69,68,102,111,104,78,74,80,58,69,70,63,86,54,58,84,67,81,80,90,76,100,73,87,106,74,69,85,97,67,112,87,109,109,130,116,129,114,119,90,85,100,77,100,112,87,94,113,123,106,88,114,118,108,97,118,97,131,125,128,89,86,98,119,79,87,80,89,112,82,75,92,86,112,92,74,122,107,91,96,109,88,101,109,72,93,73,83,82,74,95,69,67,77,82,65,56,85,45,75,67,56,58,82,62,48,84,60,65,53,54,54,67,71,118,128,135,128,98,122,154,157,145,97,49,89,116,163,149,163,227,249,255,254,254,243,247,210,160,122,51,16,1,19,24,10,31,43,56,53,43,54,62,86,68,48,53,53,41,38,57,49,69,18,57,38,39,48,42,32,50,54,52,50,60,27,42,34,20,26,41,49,38,27,22,38,41,27,38,54,27,21,29,45,30,35,44,35,54,33,38,32,23,37,11,27,20,24,51,44,19,9,38,31,21,24,39,18,18,33,51,28,37,45,42,37,38,33,10,21,51,25,4,34,31,23,37,5,8,19,53,43,33,38,30,27,48,47,22,47,2,21,21,17,40,25,21,24,19,21,19,11,39,21,43,34,46,9,14,28,14,32,10,24,21,35,18,37,33,33,21,20,19,30,29,15,19,10,36,25,33,4,39,5,18,42,25,5,0,27,28,26,14,2,1,26,53,39,57,39,51,22,31,40,30,51,31,26,26,37,22,20,42,13,33,3,32,
6,28,55,20,48,22,38,38,35,31,35,26,30,47,51,53,52,32,22,14,22,42,35,13,39,43,36,23,36,33,41,11,60,22,55,32,37,40,39,53,48,20,9,30,51,27,14,28,46,24,39,31,33,6,7,14,12,38,17,22,24,22,20,46,22,26,50,26,15,45,65,27,76,103,108,154,170,167,210,157,178,172,171,179,181,163,161,170,172,176,144,165,165,143,173,150,163,160,179,161,147,154,167,182,159,142,143,149,146,140,151,146,152,143,187,171,144,165,190,171,162,191,162,166,161,167,170,146,148,149,164,171,157,139,151,129,161,178,172,140,160,155,153,154,141,180,173,172,155,188,170,137,147,86,62,81,129,164,154,199,185,147,154,158,180,147,169,164,153,143,147,144,145,125,160,153,162,149,165,169,154,146,156,172,170,191,173,151,162,164,173,145,158,154,154,149,136,160,186,110,28,0,12,7,7,38,42,19,11,16,49,14,9,4,0,0,10,13,19,12,45,11,24,1,29,23,2,14,28,13,4,11,14,36,4,22,16,13,14,22,5,12,22,37,12,6,13,20,6,1,19,17,21,19,35,20,4,33,21,3,25,33,48,30,41,19,16,10,12,14,7,19,8,16,40,24,26,16,19,9,1,5,11,19,21,12,29,24,3,29,19,16,28,20,10,29,45,29,39,18,40,42,14,17,7,18,31,36,15,27,40,39,30,30,22,9,43,43,33,30,41,32,26,43,45,32,37,71,66,105,88,42,71,42,38,53,43,69,56,19,41,17,29,43,50,32,29,36,57,46,55,48,44,56,75,89,115,122,168,109,143,120,133,128,117,137,129,112,142,124,133,146,162,151,145,137,135,133,121,133,146,110,105,120,117,131,134,166,128,146,151,127,120,116,93,102,108,133,134,107,111,113,92,92,86,105,95,66,73,91,101,57,74,94,113,119,112,128,139,113,121,120,105,119,145,149,144,114,163,139,100,98,121,127,157,156,146,150,154,102,142,119,148,98,108,128,99,117,117,123,142,153,157,143,149,158,139,142,133,137,119,119,142,124,132,134,125,110,99,116,132,127,102,150,153,162,135,109,117,138,99,139,131,146,120,110,94,106,121,108,79,110,105,129,131,106,82,77,74,72,100,118,95,95,112,96,113,105,138,146,107,72,75,55,86,59,60,50,73,89,80,75,86,73,62,77,62,36,70,42,49,52,41,82,47,40,56,37,85,65,55,65,54,64,39,49,54,64,57,67,50,58,60,40,73,40,63,41,39,39,26,22,48,65,58,61,80,64,54,110,104,71,112,126,110,114,105,138,96,95,87,131,94,71,60,71,53,47,5
9,25,27,34,24,83,41,76,101,91,104,121,146,132,124,161,169,169,184,177,205,216,218,207,169,201,185,189,174,158,188,149,112,100,120,96,72,44,49,21,40,30,16,42,1,40,5,15,31,15,17,19,19,18,22,17,27,15,15,12,17,28,31,25,5,24,11,1,47,12,7,1,20,37,27,29,43,35,35,27,8,34,33,38,6,17,25,16,21,28,39,36,15,7,9,34,9,11,33,15,15,43,31,19,32,32,18,37,22,21,23,26,43,30,8,26,51,11,23,34,67,16,10,13,20,2,46,36,4,29,30,44,6,10,37,31,36,56,10,24,35,23,36,19,44,59,51,58,32,40,19,36,17,45,26,34,57,54,36,50,64,35,49,21,29,28,47,33,43,45,26,18,11,9,30,15,25,26,2,20,8,18,35,24,18,33,14,8,30,27,32,24,14,19,17,29,28,16,17,47,35,43,52,36,59,67,128,152,163,156,178,193,143,165,157,157,152,155,155,160,129,177,129,154,148,136,160,148,141,145,160,147,179,166,159,168,167,167,167,145,154,165,175,163,156,176,162,175,181,158,167,176,148,153,129,150,145,132,167,172,157,166,150,138,145,175,172,168,171,127,136,161,163,131,163,150,148,182,181,163,179,176,185,155,162,127,97,77,111,126,144,175,160,174,172,147,181,154,170,169,160,165,152,139,168,155,149,159,165,166,157,159,179,161,158,133,158,134,170,152,157,192,151,170,170,145,159,129,175,177,156,112,0,0,31,15,15,0,22,9,2,8,3,12,26,1,20,9,25,12,30,21,18,1,35,26,10,9,15,27,12,14,9,29,12,46,22,26,20,12,21,10,4,7,38,0,40,7,8,24,13,21,38,27,15,31,14,23,29,14,0,22,35,12,22,6,15,35,17,16,33,4,37,27,8,13,14,21,17,29,5,8,14,26,16,16,32,21,10,20,6,12,33,17,53,17,26,18,26,31,30,18,26,21,41,32,38,39,13,22,12,17,27,19,22,28,19,22,33,19,35,35,28,33,39,49,29,41,30,38,13,55,49,61,56,72,58,71,37,33,66,27,33,21,38,43,55,39,52,58,26,72,55,39,42,71,76,78,124,144,111,124,130,137,158,152,133,144,126,115,108,117,124,149,128,148,154,169,157,157,173,165,130,133,113,153,114,125,152,135,140,135,131,117,120,96,108,112,129,102,123,123,120,100,119,145,135,146,122,108,118,100,114,110,114,128,118,128,103,102,95,123,104,133,136,108,151,134,128,144,155,101,155,154,138,182,171,135,148,139,133,120,118,129,146,111,118,136,151,120,124,117,116,142,149,174,172,124,152,144,120,142,135,128,151,149,
147,135,162,161,142,142,123,157,169,118,137,112,131,132,129,136,119,145,142,111,109,123,107,133,101,86,97,116,111,91,101,80,66,94,81,121,74,91,108,109,98,105,88,99,93,94,100,68,73,87,70,76,46,64,33,43,50,52,44,70,71,60,67,51,59,57,65,46,62,45,66,67,56,68,65,59,49,98,97,68,75,18,45,81,59,55,37,71,68,31,31,21,27,58,47,42,55,71,61,67,56,45,52,52,75,73,78,76,81,103,56,72,102,100,102,100,117,126,113,103,107,91,102,80,122,87,78,47,65,54,60,47,47,51,72,36,72,61,37,70,131,126,112,161,152,153,210,189,172,181,172,153,187,205,191,175,176,163,192,172,134,170,162,179,159,78,80,89,113,52,47,29,44,51,27,35,29,13,37,30,10,53,25,43,36,36,24,4,19,15,22,11,36,27,25,49,16,15,21,14,26,47,17,31,25,28,11,17,14,29,14,0,26,15,8,5,12,27,4,41,25,46,12,6,39,7,3,22,10,3,33,22,26,23,13,17,9,10,29,33,38,23,36,40,4,16,42,13,9,21,30,36,29,40,24,21,26,18,17,39,28,52,40,23,37,37,28,35,62,29,66,44,55,60,22,44,36,40,57,49,62,51,56,44,55,39,30,33,38,43,48,37,31,51,36,45,25,42,21,1,12,23,16,14,24,11,11,44,14,11,17,36,14,5,33,17,10,34,27,24,39,30,50,20,22,50,33,39,49,99,83,109,125,170,191,169,155,160,157,164,156,169,168,133,145,148,152,140,151,151,145,167,154,144,149,135,168,177,161,172,180,184,178,190,149,148,194,170,159,165,193,177,151,171,135,165,164,163,158,155,148,193,153,182,149,159,165,180,168,150,156,158,152,156,145,160,168,162,151,172,156,161,166,136,158,147,157,178,148,136,143,116,102,118,128,172,167,173,161,164,148,160,174,173,143,152,133,133,156,163,153,142,154,136,157,146,142,128,158,136,150,174,155,167,185,158,155,140,156,186,145,176,179,147,148,107,18,1,3,5,22,25,27,8,3,27,17,14,8,12,11,7,4,2,6,10,9,2,2,11,9,24,29,5,13,41,14,30,14,3,22,9,12,0,8,17,37,11,27,2,23,9,22,20,7,0,17,25,8,43,19,27,12,33,7,14,1,0,14,3,6,33,26,19,32,3,14,11,10,24,18,9,44,32,18,22,18,20,15,11,21,11,19,33,18,9,27,17,10,43,14,13,35,48,32,35,43,15,68,15,29,28,28,21,12,24,27,19,33,35,18,20,26,24,27,38,32,9,22,25,54,37,30,61,54,50,36,70,58,46,65,44,71,49,62,45,35,55,62,58,44,51,43,69,60,61,48,49,30,41,102,93,121,120,110,94
,98,68,122,111,142,112,125,118,118,121,112,99,124,119,129,122,110,146,117,138,136,150,118,160,132,130,134,147,127,146,127,132,117,147,115,144,110,101,110,134,110,136,120,159,128,152,147,152,162,164,168,185,178,170,146,101,113,139,97,121,121,125,135,135,133,139,145,128,156,113,142,128,156,164,122,134,130,136,133,126,128,139,137,136,165,132,128,140,141,104,153,145,125,142,151,159,177,132,124,132,116,138,119,153,143,164,137,135,153,145,178,154,129,150,127,118,139,106,140,147,120,158,88,143,109,131,109,114,126,142,131,113,84,101,111,98,117,100,119,105,83,96,108,131,89,96,109,114,82,78,83,98,87,79,69,92,78,73,56,53,56,62,64,62,98,69,38,72,39,84,42,61,44,66,68,49,54,63,57,65,68,79,64,51,49,64,38,57,70,51,41,30,62,50,50,51,38,58,68,49,47,46,28,43,46,53,62,74,88,75,75,63,55,74,71,73,92,103,83,106,93,116,94,73,104,116,121,116,122,122,141,135,115,111,93,100,87,76,57,58,38,24,40,20,35,39,37,36,71,79,66,111,148,138,166,165,139,165,152,162,185,205,183,180,181,205,238,214,185,175,188,155,146,155,170,183,155,127,121,95,63,78,31,47,55,68,69,21,46,38,36,38,17,12,30,24,19,32,26,15,23,34,5,3,12,20,16,13,19,8,21,11,35,25,31,15,16,33,10,25,20,38,16,10,18,31,23,53,38,42,35,11,11,14,2,20,23,18,20,8,43,11,46,15,31,35,27,21,20,8,32,28,18,44,18,29,36,16,5,2,8,25,19,34,27,38,28,34,28,32,28,28,34,57,45,45,49,43,63,63,65,58,45,54,44,52,37,58,42,44,32,48,28,18,21,17,18,58,49,34,9,29,31,3,13,7,22,38,29,28,4,29,19,11,26,12,10,11,26,12,31,24,4,19,26,24,28,67,39,61,59,53,60,75,52,92,117,136,158,167,169,161,182,170,192,175,167,140,139,149,177,167,142,175,124,178,167,171,156,168,170,175,152,173,133,169,166,159,176,172,178,128,159,150,167,133,166,166,159,175,171,151,172,161,160,162,178,174,142,146,153,156,180,143,150,170,162,155,177,192,134,156,157,154,167,146,151,147,157,145,148,144,165,142,151,116,133,104,92,111,136,159,166,151,145,167,149,170,146,150,146,164,144,150,161,151,167,168,157,147,162,146,146,176,171,148,174,151,147,173,147,167,154,172,163,177,185,165,187,135,0,4,2,9,20,11,29,18,13,25,26,5,
23,19,6,1,1,4,27,30,16,16,19,29,7,18,7,16,6,6,17,13,21,24,26,20,18,7,28,18,13,26,15,12,12,10,3,17,23,13,14,35,15,41,14,18,19,24,53,25,12,9,35,18,3,2,25,25,18,3,23,10,17,20,15,8,27,44,17,4,32,40,2,7,7,37,11,13,19,16,53,32,33,29,8,22,31,36,48,20,31,31,27,29,11,22,44,35,27,13,41,33,54,37,44,13,26,36,23,41,28,30,11,62,50,60,28,30,38,60,50,33,24,44,73,48,67,86,76,82,93,65,38,74,83,56,59,73,80,69,66,47,64,102,99,127,129,113,123,90,109,82,90,93,99,119,117,100,100,127,114,85,113,76,85,94,111,87,124,127,128,120,137,144,119,139,102,145,137,126,119,137,150,147,147,141,141,129,126,111,165,143,115,116,129,147,116,138,156,150,142,156,130,140,131,118,139,149,149,163,144,161,173,169,144,154,143,135,135,148,145,131,134,157,127,125,146,140,141,140,128,123,145,155,162,105,136,110,109,122,143,127,101,140,95,124,147,135,132,133,127,144,159,142,127,138,128,140,132,126,136,139,147,122,101,120,131,107,120,116,141,140,123,134,141,132,126,128,110,127,93,122,106,98,86,80,108,119,99,118,109,124,117,136,95,115,72,96,73,85,87,70,77,97,96,45,64,48,45,51,54,84,39,62,63,43,68,43,55,72,71,59,68,52,55,61,61,58,44,54,74,44,40,41,56,54,50,41,60,54,57,47,47,62,43,82,49,61,84,59,73,43,50,69,45,37,55,54,62,67,76,79,105,93,77,113,121,108,113,90,86,62,82,71,73,113,88,93,125,122,112,147,92,108,115,130,98,92,106,118,75,69,60,52,42,44,58,42,24,30,53,25,31,46,86,92,84,80,107,113,112,149,121,145,167,177,153,208,186,189,178,175,162,210,198,202,173,156,127,140,177,151,141,129,168,167,108,54,85,115,104,69,49,79,27,35,26,41,10,35,12,18,35,5,35,16,5,14,15,10,11,28,22,12,20,11,25,8,17,8,6,8,12,11,29,22,19,27,10,20,28,22,34,15,45,31,4,18,11,39,25,17,22,31,10,19,25,25,1,23,25,35,50,22,21,41,12,16,31,28,36,6,63,15,21,40,27,48,29,7,26,20,23,66,30,25,57,46,49,55,77,60,76,40,50,31,46,51,30,20,33,33,14,35,33,48,32,33,19,31,13,42,19,29,17,7,17,27,16,7,27,25,12,37,23,28,38,13,22,13,2,31,27,25,33,25,37,34,41,65,68,71,79,80,84,95,90,129,125,146,132,148,170,149,145,168,161,154,158,172,177,169,152,155,145,151,157,162,139,136,136,1
45,145,155,121,151,171,150,164,161,139,159,146,163,150,168,145,169,150,172,149,166,182,134,147,157,146,164,166,142,156,143,141,175,166,155,182,157,171,158,144,144,162,158,184,154,154,160,161,166,184,171,143,172,146,155,158,127,90,97,98,140,177,174,178,169,156,181,145,181,168,150,158,167,150,137,147,158,164,174,155,141,142,177,163,156,177,163,135,152,156,141,157,171,176,173,140,154,163,104,21,15,11,6,35,21,12,45,11,29,26,28,45,19,20,7,12,20,28,5,17,14,23,15,6,30,49,13,14,16,23,34,23,12,6,16,6,16,11,25,7,36,36,14,5,54,35,20,18,26,30,16,25,9,45,11,32,32,21,20,24,10,31,12,27,24,20,4,16,16,27,15,39,18,18,8,24,33,3,8,27,37,48,32,20,22,35,9,19,8,23,20,23,28,40,58,49,46,20,20,37,14,23,42,38,39,23,58,39,43,41,52,54,53,47,35,65,51,61,30,75,58,63,33,52,74,49,42,64,72,60,89,49,91,50,70,83,87,82,81,81,59,55,73,91,113,99,116,110,123,110,92,109,140,138,145,156,133,141,114,138,110,116,98,111,124,118,134,114,86,156,101,93,106,114,144,125,128,113,142,100,94,89,110,111,132,126,124,110,127,135,96,137,112,119,106,103,118,111,135,143,120,128,119,100,117,118,118,111,117,117,87,110,110,70,101,159,120,140,157,111,130,137,151,140,132,142,137,108,146,95,126,128,121,133,151,156,131,123,141,144,149,124,114,123,130,109,116,134,130,113,145,129,114,110,124,120,144,108,133,113,129,107,149,141,138,123,131,109,127,137,114,127,100,125,145,117,107,113,110,106,112,126,142,133,124,86,114,79,92,124,144,144,88,119,91,105,110,92,73,107,139,128,116,75,102,68,85,95,97,95,68,57,90,63,74,71,32,74,63,77,73,69,95,65,46,79,60,53,56,65,52,75,82,59,85,85,57,50,58,90,53,47,69,39,62,59,66,68,54,60,43,38,45,54,44,66,58,46,51,67,60,36,43,48,47,42,42,54,64,56,44,64,59,49,47,32,31,36,51,62,41,20,62,67,68,54,72,53,58,90,79,79,66,95,99,85,98,85,110,109,121,104,83,54,58,87,61,48,43,42,16,36,52,48,35,25,55,42,16,37,30,57,78,91,121,105,117,135,130,112,136,107,155,136,144,103,130,166,157,142,165,147,140,132,125,110,142,123,155,154,138,157,128,121,100,95,110,104,74,74,65,42,48,55,30,40,24,31,36,18,31,33,34,10,5,22,21,11,27,34,4,
30,29,10,30,20,27,4,10,5,15,36,37,30,42,10,20,32,20,14,19,12,26,46,23,36,22,31,26,33,9,14,68,46,30,10,2,34,25,11,27,6,31,37,13,33,44,31,56,23,47,53,34,66,38,60,65,13,37,72,75,70,46,44,41,61,48,49,45,40,53,25,40,36,31,30,27,36,6,16,11,12,14,10,25,4,7,39,29,27,21,29,5,30,40,15,13,19,40,2,40,25,42,35,51,47,42,45,70,82,80,76,89,111,79,74,111,104,118,145,141,152,136,160,124,156,162,184,147,155,146,158,161,149,164,153,169,158,150,145,157,174,164,121,173,134,150,120,149,148,151,164,138,146,143,171,177,163,133,151,154,154,158,164,178,131,170,157,162,178,155,165,186,174,155,177,161,158,143,141,142,166,172,158,163,177,181,171,170,152,180,173,157,187,160,149,149,148,93,117,98,130,152,175,172,166,169,156,162,168,145,162,157,143,144,168,137,169,165,126,131,164,161,160,137,169,139,177,154,149,148,157,163,175,149,173,166,160,110,10,8,23,15,30,10,0,28,34,17,16,14,13,4,34,6,19,15,11,10,9,7,13,24,4,32,10,25,19,22,30,19,38,15,28,17,34,19,27,33,20,15,56,28,23,17,10,37,20,20,35,12,22,20,40,24,41,8,30,0,2,9,37,4,39,11,11,23,5,19,20,14,11,25,13,22,55,43,37,11,20,26,30,19,37,33,33,27,18,23,11,15,47,42,47,46,23,39,19,58,37,43,70,49,56,36,57,80,48,68,52,62,46,50,53,63,83,58,40,67,49,25,57,56,31,52,30,29,32,46,60,49,60,62,38,48,67,33,48,63,68,52,29,57,62,49,43,74,81,115,111,142,85,121,100,88,80,118,121,120,129,122,110,115,122,97,115,122,119,147,109,122,114,115,127,120,108,114,115,116,128,113,95,106,116,110,118,98,80,96,118,106,116,120,107,86,106,110,127,123,137,144,121,99,113,103,104,130,85,109,106,123,100,121,97,119,126,105,118,95,81,101,119,120,91,116,109,121,138,129,107,106,120,130,148,132,131,110,148,126,118,117,114,123,141,141,155,112,115,144,128,164,155,136,108,147,140,129,136,119,130,106,143,137,123,127,136,130,158,130,138,121,106,125,101,109,114,101,109,104,112,133,104,135,107,109,124,100,116,105,116,106,103,87,123,107,100,89,92,77,107,90,106,134,75,98,86,77,103,90,90,84,68,65,55,69,90,101,64,58,65,71,67,58,46,50,58,46,49,48,35,46,71,41,61,31,79,44,47,94,71,28,48,28,51,56,56,49,47,57,
60,42,52,55,47,66,44,66,67,59,26,58,54,48,43,60,45,66,61,62,39,52,57,45,56,45,67,34,38,59,57,43,27,33,47,50,45,56,47,72,41,64,56,72,56,71,64,76,93,100,93,77,72,86,86,77,113,71,89,111,76,74,68,61,48,51,48,59,17,33,27,44,66,36,33,33,40,63,29,13,50,69,71,89,71,67,86,117,123,89,100,139,139,87,109,100,149,130,128,149,160,120,144,139,136,119,151,128,142,114,143,144,143,137,136,145,130,63,83,72,59,79,68,37,54,25,44,62,31,55,32,38,43,51,51,29,42,20,40,22,13,3,27,24,3,0,13,31,16,40,17,12,31,18,21,9,2,30,9,12,7,33,34,30,7,20,12,5,20,28,38,35,34,52,13,31,41,50,57,34,29,37,32,57,56,33,36,57,39,45,58,44,75,70,54,42,22,71,68,61,42,34,45,24,36,22,57,28,18,44,45,36,18,5,5,51,15,17,13,22,13,19,21,41,37,25,35,20,3,27,23,30,60,19,56,61,62,88,45,35,86,79,81,89,93,72,70,89,112,91,132,122,140,90,139,138,138,153,161,142,163,159,138,149,156,160,149,159,156,154,148,142,179,172,170,164,168,158,157,147,164,166,178,176,147,112,176,124,140,152,157,163,174,146,156,159,151,140,160,166,164,150,155,119,132,152,150,155,139,141,150,165,146,171,161,169,181,174,151,160,160,143,156,169,181,160,132,163,159,133,129,96,96,126,153,171,193,172,170,160,166,167,175,174,150,133,160,154,143,158,149,146,153,139,171,161,164,158,159,158,172,168,153,146,178,144,138,147,154,108,0,17,26,2,11,6,9,4,8,11,27,14,11,8,19,10,19,10,14,11,20,8,15,21,26,32,27,42,23,6,15,1,15,9,25,6,27,8,14,3,10,13,25,32,21,22,15,12,24,12,40,18,37,12,13,18,13,1,7,22,26,9,29,22,39,27,36,20,20,24,40,20,27,36,3,6,13,12,13,38,20,8,12,36,8,25,11,10,9,33,27,21,32,38,18,33,48,49,14,20,58,64,55,71,56,31,42,27,44,31,20,25,57,54,35,38,52,38,44,35,34,33,20,16,35,21,45,27,31,48,39,36,32,23,42,61,41,39,50,43,30,32,50,45,33,11,32,56,35,65,85,96,95,90,78,73,87,93,102,125,115,114,102,81,79,75,97,103,109,147,142,96,127,115,137,104,92,119,121,106,124,109,118,120,128,122,115,127,129,125,120,140,126,128,139,134,122,144,134,116,104,136,134,130,101,122,101,120,136,132,128,105,127,138,126,102,135,123,132,114,118,103,104,123,108,109,105,122,112,131,110,130,128,161,104
,156,123,116,121,114,125,112,96,148,130,144,113,105,133,143,165,131,117,140,140,164,140,155,128,119,128,117,147,115,123,134,100,127,127,121,121,99,127,155,101,158,124,118,110,114,70,118,75,85,122,112,108,121,133,84,117,105,74,93,104,100,94,110,112,90,79,78,105,107,118,126,108,85,105,125,99,72,67,60,39,103,75,58,54,58,75,61,72,50,76,76,33,78,30,36,51,76,42,45,71,54,39,62,52,61,47,30,43,43,58,62,67,75,45,52,53,71,30,62,35,55,34,75,28,55,35,37,75,37,40,35,63,61,58,48,73,52,57,41,47,74,66,56,67,70,67,72,28,42,54,45,77,61,74,63,55,66,49,62,53,42,53,85,34,59,85,52,68,94,69,53,86,85,86,66,29,60,73,89,128,139,134,84,88,88,72,90,39,51,55,53,33,29,30,18,63,46,28,46,56,64,54,51,64,54,33,66,84,69,90,75,127,90,105,124,89,102,134,120,144,141,131,137,123,138,129,125,108,129,112,128,124,157,135,117,141,140,130,125,129,114,118,91,105,118,91,98,75,95,73,51,60,69,40,37,39,65,42,36,35,14,31,22,48,33,17,21,25,21,22,18,22,43,9,3,18,3,31,5,44,58,28,10,19,22,28,32,31,17,28,39,32,32,38,48,50,43,51,54,35,44,43,50,33,16,47,36,20,65,61,41,32,54,54,48,62,10,30,31,34,23,58,18,23,19,21,22,16,0,10,25,31,45,1,28,26,24,27,34,40,44,15,55,48,58,50,44,33,75,61,42,57,83,76,72,57,47,53,76,56,49,85,60,81,78,86,108,132,149,169,155,154,160,140,138,177,147,147,158,184,127,94,121,160,153,150,169,151,174,158,152,161,180,177,159,170,147,165,118,131,143,144,156,155,140,141,141,151,141,143,140,159,162,151,157,167,160,148,156,171,174,153,155,142,130,149,140,141,139,162,133,143,171,147,149,172,152,155,188,171,154,146,158,175,149,158,120,117,96,111,149,145,185,183,171,170,154,129,156,149,161,154,179,171,139,164,193,156,159,137,184,176,156,160,168,169,169,164,144,172,161,169,163,177,99,24,0,14,3,20,24,40,20,35,11,32,14,23,7,14,4,16,19,5,14,16,12,19,19,22,6,31,6,12,38,16,16,8,28,7,35,42,37,28,23,51,30,18,34,22,15,50,19,20,26,0,6,28,22,24,3,15,32,1,19,13,19,19,9,15,25,25,2,8,24,20,5,7,43,18,7,24,12,12,20,35,30,9,12,30,31,6,8,18,28,9,48,24,31,46,37,63,46,40,61,65,63,66,58,65,55,55,21,20,27,51,46,40,46,35,44,32,22,37,34,
45,19,27,25,41,27,59,68,33,39,14,40,48,64,37,53,36,29,78,54,22,33,53,24,44,31,43,46,60,25,78,60,91,111,92,95,92,106,100,128,95,94,102,95,116,120,107,144,117,109,122,118,107,144,142,132,121,124,127,131,122,121,124,120,119,102,124,122,127,127,130,136,137,118,118,137,117,112,126,107,111,112,103,100,114,94,86,127,121,128,108,118,139,138,115,105,117,102,124,119,134,145,142,113,133,131,114,148,115,134,130,123,126,140,129,123,147,151,136,124,150,119,123,134,129,125,135,119,121,117,143,160,155,172,177,160,144,150,130,161,147,116,127,134,141,106,94,113,120,142,130,137,113,126,147,133,104,131,134,139,124,123,150,96,111,120,111,93,88,107,73,112,94,93,117,97,83,71,73,73,102,79,87,106,110,86,107,106,104,104,91,108,78,72,85,55,49,62,77,74,60,59,51,73,75,95,48,56,34,59,47,45,70,48,79,53,72,71,37,55,81,42,51,55,26,58,54,67,66,58,76,65,63,38,72,48,50,29,55,65,80,53,76,29,37,48,57,51,58,47,62,74,44,33,66,92,57,57,61,67,79,68,69,50,50,50,62,71,64,50,59,63,68,40,62,62,44,59,68,76,61,38,52,39,44,27,51,45,45,60,25,55,62,103,106,86,94,115,95,135,82,98,97,72,72,82,73,53,38,66,56,23,45,49,57,50,19,42,25,49,28,35,57,31,67,58,68,66,50,50,54,40,99,89,101,100,63,90,101,97,127,86,77,122,109,110,124,123,112,122,132,108,144,112,144,96,159,169,150,153,95,103,146,86,91,132,97,161,110,157,112,125,151,141,100,88,112,91,79,83,45,66,58,55,45,50,41,36,60,21,9,39,23,21,24,24,11,6,31,50,22,24,5,42,12,51,32,10,34,43,25,39,50,42,42,51,51,71,18,69,80,52,50,50,62,61,85,68,58,64,32,25,29,18,42,8,26,14,23,19,46,9,15,33,26,12,39,48,23,45,44,30,28,28,34,41,38,48,69,62,61,25,47,53,73,52,61,66,56,63,59,78,82,86,77,75,61,78,53,82,122,97,116,154,191,197,167,127,149,121,113,143,189,144,156,146,146,155,147,157,154,158,156,143,132,153,159,157,137,151,122,152,146,136,135,156,150,167,169,169,151,174,151,142,150,172,147,139,172,158,159,140,124,143,149,144,148,154,157,142,168,198,176,155,155,140,149,166,147,156,149,146,152,117,172,150,163,172,154,152,159,141,107,101,95,91,108,157,171,170,149,133,130,147,135,171,160,123,158,1
55,153,173,180,173,168,172,147,179,145,131,171,164,162,149,156,169,168,140,167,122,6,3,11,3,30,20,32,9,3,13,14,12,14,0,5,12,5,9,4,0,23,3,9,27,15,26,24,31,19,31,1,27,17,5,33,36,7,40,26,11,0,28,6,20,38,9,23,28,15,15,14,25,5,21,27,29,24,14,47,22,33,7,24,12,29,21,26,14,5,5,6,16,23,26,15,2,8,40,19,15,25,25,27,45,1,35,13,18,35,6,34,10,35,24,16,49,26,55,33,37,41,17,41,43,42,29,28,48,18,44,51,44,14,16,26,39,21,29,44,28,9,48,35,5,34,31,26,24,29,23,8,39,15,29,43,17,30,43,17,31,39,22,40,24,1,29,53,31,18,42,65,56,72,83,88,94,116,120,122,124,102,133,89,114,127,113,123,87,95,94,121,102,121,110,82,108,91,131,102,98,129,104,119,107,120,120,115,105,135,92,110,121,119,107,116,119,112,113,123,97,133,114,129,133,127,108,114,97,105,96,106,121,92,101,115,107,93,99,121,114,108,123,107,122,114,118,138,126,124,119,117,122,101,73,87,121,108,98,112,113,99,152,144,142,131,130,109,146,118,102,94,132,143,152,148,125,108,141,127,137,118,98,110,121,135,112,143,163,137,160,136,124,137,94,134,116,120,129,136,150,123,136,132,126,121,92,122,102,99,87,94,90,95,111,98,109,80,87,68,52,90,84,75,71,64,103,109,102,87,96,83,96,103,104,97,59,87,112,63,96,72,72,29,44,62,89,72,26,60,73,68,72,82,48,64,66,58,39,81,45,48,54,58,50,76,68,61,58,31,47,60,53,57,55,41,31,49,69,55,45,56,47,42,32,57,75,51,76,42,43,12,56,44,53,46,32,64,56,55,83,33,56,64,59,56,39,37,33,52,73,34,60,55,71,48,47,48,29,61,46,43,51,62,50,40,57,17,40,56,57,57,61,66,67,51,45,72,43,35,85,68,84,64,65,79,86,94,95,94,75,86,65,89,93,87,71,77,73,82,54,43,45,46,27,44,36,34,50,21,47,40,33,37,32,44,32,46,50,50,47,68,64,61,71,58,59,80,64,91,97,109,109,116,90,99,125,130,139,117,121,145,123,109,102,139,123,105,135,146,155,125,122,153,145,146,144,144,132,167,152,109,154,135,124,112,104,123,48,93,99,73,83,44,37,66,10,15,32,30,53,54,46,24,7,34,11,4,24,22,51,28,1,24,38,64,44,51,41,22,37,57,81,37,47,45,41,76,55,47,35,11,44,33,9,32,31,23,41,27,30,38,50,55,25,22,36,54,67,42,68,62,65,73,66,44,59,51,51,48,31,63,70,42,44,55,20,46,50,69,61,70,75,104,67,78,80,74,74,80,84
,142,129,150,149,171,177,160,162,116,165,134,145,157,138,174,160,143,185,171,170,157,157,179,165,143,125,169,157,128,126,145,158,168,169,152,138,162,164,138,152,153,161,161,152,145,157,188,161,157,163,187,158,182,153,139,148,185,150,170,164,174,194,156,161,160,173,147,139,164,165,165,162,152,141,162,175,165,182,154,136,149,143,171,120,123,99,81,143,163,170,170,151,152,185,174,178,166,176,158,162,157,155,147,144,163,138,177,130,133,134,159,140,176,155,150,155,160,143,160,94,8,5,24,6,15,2,11,8,25,6,28,25,24,3,40,33,28,25,19,8,15,1,14,36,18,5,47,34,9,30,11,34,23,22,4,5,13,20,5,15,17,17,17,41,23,0,34,39,16,22,25,31,7,12,41,10,12,16,38,23,18,43,31,22,22,25,24,7,12,15,5,18,31,14,28,3,22,13,25,16,18,32,45,40,27,4,18,30,25,11,36,19,30,6,8,22,52,26,21,15,14,21,32,33,17,36,47,3,15,46,22,6,48,38,25,31,14,5,31,30,21,53,30,32,27,16,24,33,39,22,42,9,35,55,33,31,45,36,71,55,60,50,29,26,30,23,37,22,44,36,51,75,75,96,97,82,83,120,92,118,118,125,109,119,107,91,107,99,115,114,123,118,97,133,101,96,74,73,71,139,97,104,111,84,101,109,120,128,103,127,111,105,101,129,97,117,103,119,130,105,132,128,131,132,124,109,102,105,106,116,129,146,138,79,111,127,106,102,123,101,79,115,107,131,135,109,146,103,115,100,109,101,90,69,106,79,94,94,114,120,116,123,85,134,93,125,109,121,106,115,123,103,101,100,117,119,104,133,113,127,147,117,142,113,134,151,131,137,139,137,125,86,105,90,114,119,115,125,95,134,103,110,120,109,95,122,105,121,114,94,112,100,94,103,104,104,107,98,110,62,91,64,92,80,72,52,95,82,81,52,85,73,99,98,107,109,115,102,96,108,84,92,103,88,96,95,110,118,88,65,50,52,46,60,63,62,57,73,34,92,25,31,39,49,63,67,57,64,69,42,83,61,80,58,32,54,77,46,62,56,44,45,34,56,58,59,57,39,58,67,58,53,56,36,30,42,43,29,92,60,25,35,21,40,36,52,46,29,42,17,58,49,54,30,34,47,43,48,26,13,40,54,61,61,24,59,61,48,43,67,41,35,38,56,44,61,58,66,38,67,34,47,54,59,70,59,68,62,63,60,74,51,103,107,123,87,97,82,59,75,63,52,65,55,45,43,60,56,53,13,64,47,37,36,31,21,34,26,45,39,48,48,42,41,45,32,51,42,42,73,44,55,56,68,
61,75,89,75,96,82,86,86,98,98,120,105,116,111,117,136,112,110,101,109,147,114,107,124,140,139,113,114,144,147,154,155,147,150,124,146,160,157,140,145,166,177,171,190,210,211,238,183,176,176,159,198,95,138,67,50,33,58,94,41,7,33,20,41,25,28,52,42,56,47,46,41,38,70,47,46,26,36,15,20,50,34,30,27,40,30,45,51,60,43,41,54,55,74,31,49,59,23,15,40,49,21,32,41,41,18,20,36,41,44,62,56,31,50,82,68,59,94,83,75,70,77,48,62,59,58,85,89,108,149,144,167,164,149,145,150,165,146,136,149,147,168,154,152,151,186,184,157,189,179,150,142,170,136,131,129,179,171,157,152,157,160,165,171,183,158,159,177,167,160,175,146,182,140,152,166,172,157,173,177,145,143,142,185,152,184,186,165,178,162,164,166,175,155,160,135,162,183,158,192,149,179,163,157,156,153,178,177,187,161,172,117,103,90,124,143,161,172,164,153,157,172,155,148,158,161,144,176,193,181,162,148,183,161,148,144,180,160,158,162,163,144,180,162,156,125,1,14,1,28,5,12,9,26,6,16,4,24,26,6,21,24,1,0,42,8,29,10,22,11,36,26,61,29,48,49,40,33,54,31,59,43,49,36,24,27,21,15,13,10,26,11,11,40,14,28,28,28,32,19,28,29,39,25,10,27,8,37,9,10,19,22,52,1,19,19,19,0,23,17,20,33,11,20,24,16,17,19,3,28,7,26,5,42,37,21,43,14,27,35,39,34,46,15,37,25,33,42,24,40,74,9,21,13,6,48,20,29,29,19,34,10,16,29,31,26,14,36,2,30,23,34,21,19,36,30,35,44,50,29,24,3,24,46,44,25,69,28,71,80,86,53,42,54,48,64,77,62,102,114,77,88,59,79,76,73,78,118,116,94,105,109,122,94,111,119,92,98,108,109,118,92,120,103,105,99,107,95,111,89,107,101,94,129,103,116,120,115,112,107,112,131,138,122,128,96,104,107,108,108,108,117,123,118,94,112,108,131,123,128,108,123,140,125,110,96,102,132,95,114,97,133,132,118,108,125,118,113,91,103,92,92,102,90,78,70,92,103,78,97,118,97,119,112,116,102,117,81,75,73,112,117,86,121,123,126,120,121,134,110,163,128,94,126,102,96,99,132,125,110,104,124,133,97,121,121,95,80,95,85,101,98,106,107,114,123,86,112,97,99,106,106,120,110,105,128,103,83,76,84,73,87,37,72,58,59,77,60,88,94,78,81,107,79,98,95,105,121,95,115,101,107,89,96,80,61,87,101,61,55,68,47,83,46,6
3,51,58,29,83,60,72,67,52,44,49,44,45,62,65,29,41,66,56,67,45,56,64,57,61,40,46,67,65,75,65,55,61,49,21,15,53,68,52,43,38,57,29,49,51,61,35,44,53,59,58,66,21,64,59,80,60,50,38,39,66,26,37,54,68,50,62,59,63,73,58,74,33,29,50,37,22,57,21,23,36,56,35,67,51,21,44,47,49,28,60,41,74,69,66,47,65,65,71,63,82,53,52,86,73,70,58,66,77,83,69,67,74,52,43,36,51,71,70,72,46,33,57,23,11,70,22,34,41,24,53,42,53,30,27,36,33,39,30,44,49,39,45,41,64,80,73,77,93,57,70,101,97,64,102,102,106,113,79,129,117,141,129,124,140,134,148,119,163,184,167,154,138,150,190,209,228,237,238,245,250,247,235,240,241,238,255,255,251,233,244,231,234,247,251,227,177,156,105,15,20,10,9,41,33,50,32,64,35,29,61,34,18,34,39,24,34,48,55,24,25,21,10,36,52,56,75,58,78,49,30,48,39,35,50,27,67,46,35,36,48,42,37,29,46,58,27,39,42,60,38,54,61,60,42,46,44,52,68,37,66,70,50,74,106,127,132,135,149,145,188,164,160,170,151,147,171,175,160,157,166,156,170,165,177,169,128,128,150,135,172,147,165,190,160,144,159,169,162,148,165,164,165,143,147,171,165,149,133,166,142,172,151,160,156,163,184,163,164,165,156,149,168,155,163,169,173,146,179,168,164,159,141,160,174,150,158,129,174,165,164,174,170,161,153,163,164,178,113,86,96,103,137,161,179,163,138,150,156,131,118,132,149,143,170,166,155,172,193,156,137,142,167,176,157,181,166,155,141,152,159,110,1,0,12,7,7,19,9,29,12,18,14,10,15,7,28,6,7,11,28,11,3,17,43,19,57,78,79,89,61,124,84,113,122,111,124,128,90,48,50,52,26,34,23,21,24,9,48,21,19,16,2,28,24,14,25,9,18,21,16,23,19,11,18,0,18,26,18,25,17,23,6,7,18,15,42,36,10,6,41,9,31,14,15,21,9,17,17,30,27,30,27,12,5,23,40,33,20,27,50,12,48,46,34,47,53,47,47,7,19,38,41,20,55,45,19,12,4,11,21,6,11,25,20,17,27,47,11,34,17,41,12,50,28,44,55,20,30,43,45,32,41,51,72,51,29,37,70,50,62,52,65,62,78,68,60,63,67,77,77,83,90,110,89,57,77,56,98,112,115,103,99,101,110,94,129,113,107,132,108,140,125,105,100,105,132,123,117,133,117,121,125,129,123,117,138,114,106,96,114,90,112,112,103,136,100,119,102,128,92,143,104,131,119,127,123,131,117,156,114,135,11
89,105,131,110,115,136,143,152,126,130,116,118,118,115,113,136,108,95,92,93,106,131,128,101,121,107,95,106,104,113,115,130,130,121,139,117,157,146,137,142,155,183,162,116,125,123,125,125,129,107,134,116,135,158,114,112,146,135,123,146,155,133,131,118,126,120,132,149,130,131,137,146,133,136,138,135,121,133,129,144,152,134,125,139,130,143,129,145,153,122,142,127,117,143,151,140,127,152,139,127,153,146,179,127,151,134,126,127,127,115,170,150,151,141,144,136,120,135,134,128,129,124,117,133,114,136,150,159,162,116,161,173,149,120,147,146,127,122,116,141,123,92,93,100,91,115,108,114,83,111,96,75,90,99,83,68,89,86,92,86,93,124,118,129,120,101,82,55,95,22,56,78,83,85,81,99,100,117,84,103,101,89,95,51,63,84,74,54,75,58,55,25,27,61,32,57,22,64,32,52,69,34,85,80,56,63,52,57,66,55,81,83,70,40,71,97,73,48,64,52,61,54,69,63,63,59,52,59,39,51,56,46,49,54,50,52,37,37,69,47,34,50,40,59,50,45,44,35,49,45,66,46,22,49,73,51,70,55,59,62,36,58,55,48,56,40,22,50,50,55,41,50,46,20,34,55,57,37,28,26,53,34,47,50,29,54,49,48,50,50,45,45,39,48,68,36,16,52,26,54,23,25,40,48,41,52,57,45,48,59,44,56,28,57,25,44,33,47,47,29,44,61,53,34,44,57,32,53,54,53,39,55,36,48,61,32,22,40,46,39,46,39,40,60,40,48,29,30,36,24,33,40,28,63,27,49,74,32,50,51,44,46,64,33,62,56,58,34,62,53,31,45,49,43,68,62,70,56,64,54,57,57,56,57,57,61,61,64,61,29,35,78,77,48,49,80,94,115,95,72,70,74,106,130,71,78,65,113,142,152,116,39,13,4,2,25,37,32,26,26,16,24,29,51,9,38,56,49,41,3,9,37,14,25,47,26,48,48,31,64,33,45,31,48,28,59,55,37,64,53,38,56,89,51,58,63,40,45,64,66,82,104,106,105,119,83,92,123,110,128,124,132,124,139,156,192,144,199,173,118,172,162,155,157,148,157,203,166,151,183,173,158,147,145,161,166,156,133,157,156,150,163,143,143,135,154,133,169,147,170,146,164,167,165,136,155,154,138,149,133,137,134,123,141,141,146,152,148,140,156,162,151,152,159,145,134,164,146,146,150,145,138,155,170,155,156,155,138,116,146,146,143,139,146,143,166,107,93,119,119,104,125,152,162,161,145,164,132,156,94,7,11,30,0,5,18,14,9,15,23,8,0,20,
28,28,15,23,13,42,19,6,14,21,21,38,28,17,35,28,13,34,19,52,22,38,14,4,22,21,18,23,3,16,11,18,53,33,27,38,22,5,13,36,18,34,23,34,32,25,9,24,22,12,20,35,27,30,13,0,13,40,6,8,29,39,20,3,2,24,36,19,38,23,39,10,22,16,11,21,31,13,14,25,36,62,24,35,35,6,20,18,34,3,29,22,30,34,16,22,28,51,17,39,24,42,8,39,32,65,44,31,47,45,62,31,18,14,32,44,44,27,53,30,61,15,32,51,41,32,48,50,37,51,54,73,78,74,90,65,85,58,69,123,115,124,98,109,118,135,129,159,114,107,136,135,157,129,121,111,127,128,150,135,133,132,117,111,126,138,122,115,124,100,97,131,111,115,133,129,102,119,136,135,119,126,124,118,137,144,130,112,141,126,100,124,125,124,138,137,135,119,119,114,114,112,137,137,116,137,122,113,138,151,125,120,144,136,140,120,126,122,123,130,144,142,148,136,142,127,127,112,140,114,115,142,130,125,134,120,127,141,139,167,152,154,159,141,149,124,158,138,140,142,141,153,118,136,141,127,132,134,139,133,164,146,134,148,158,139,138,155,121,133,95,121,124,137,130,143,107,121,91,92,103,118,123,112,88,86,91,113,99,131,143,103,81,108,112,92,110,106,101,106,127,108,98,119,114,94,94,106,62,105,88,69,91,110,108,77,101,67,86,40,66,82,81,41,31,58,84,71,30,53,56,89,64,71,93,54,48,85,87,76,87,65,90,69,53,71,96,91,66,94,86,68,75,80,83,102,80,59,59,51,47,53,55,70,50,65,78,69,39,41,61,32,47,99,58,62,62,31,48,68,72,41,57,54,50,37,75,57,53,52,43,49,67,69,46,68,54,52,55,32,35,40,68,66,35,29,49,44,55,53,44,83,51,49,47,55,49,8,28,82,33,48,42,47,34,32,52,58,49,50,41,38,35,31,62,22,54,35,40,20,41,42,30,24,40,59,47,8,59,54,41,49,24,12,36,37,36,28,36,42,15,27,42,52,54,41,32,54,25,44,31,58,24,22,36,39,46,16,66,38,46,54,54,22,32,32,11,42,17,20,50,11,34,18,40,36,37,29,30,28,33,49,30,57,68,69,57,54,50,61,76,61,53,54,69,55,39,58,25,54,57,74,45,72,61,74,73,63,69,84,75,61,60,101,98,104,103,105,102,103,87,92,71,58,77,115,116,68,45,30,30,35,33,51,30,13,23,49,29,29,41,16,19,37,41,24,23,22,39,66,22,34,46,24,28,17,15,19,53,47,47,67,41,38,58,29,35,59,45,31,67,35,67,36,57,67,71,82,90,113,124,133,121,126,147,121,129,96,107,130,114,164
,158,168,172,162,168,149,130,138,138,142,168,149,144,169,173,170,150,162,159,147,139,160,151,157,160,163,180,185,154,147,150,152,145,155,145,146,183,161,157,152,152,136,159,126,136,158,154,159,147,148,148,156,127,147,139,145,159,146,131,139,141,156,160,154,158,166,144,139,137,147,136,169,128,140,143,151,157,132,114,157,151,148,124,122,104,98,136,127,162,150,148,164,148,152,115,0,3,17,10,35,20,35,1,29,23,3,6,16,1,27,11,19,13,37,28,0,4,36,22,33,31,30,33,23,9,18,36,27,14,33,25,23,25,54,48,7,26,43,23,31,28,12,18,13,13,12,29,15,17,13,25,18,27,22,41,22,17,24,41,4,32,35,19,32,18,10,4,15,26,46,29,18,6,49,25,36,12,11,23,28,33,42,18,19,33,36,19,29,30,9,29,21,11,33,30,32,9,11,32,21,34,19,32,40,52,52,51,33,50,32,52,53,33,23,17,29,63,26,37,46,64,56,35,64,52,46,22,35,30,48,70,30,43,55,65,54,48,41,79,62,54,58,69,73,102,91,82,95,83,94,84,85,87,99,102,153,159,148,128,102,120,152,148,121,159,136,139,114,142,127,116,145,132,138,142,153,129,141,147,137,123,122,137,129,118,127,121,124,130,141,114,168,131,133,140,162,122,137,147,148,133,129,117,134,104,141,115,130,117,133,144,131,116,122,111,125,132,120,125,110,150,146,146,148,98,130,142,131,144,134,121,142,140,144,148,142,137,143,131,130,153,144,137,130,148,149,143,161,141,133,149,148,158,143,160,151,127,113,114,118,132,155,130,114,135,144,124,137,147,152,122,124,117,124,136,136,127,129,137,116,140,107,137,106,143,171,147,159,136,122,150,114,159,123,130,138,139,143,150,129,144,131,113,133,127,104,116,105,103,74,91,116,83,100,108,117,106,99,111,85,119,82,65,109,97,95,108,96,106,79,86,83,101,80,65,52,61,62,55,94,96,116,89,90,111,95,67,81,86,88,49,69,109,78,80,75,72,78,86,94,67,65,104,98,83,101,76,48,71,68,67,64,56,80,37,63,28,56,42,83,48,69,49,62,46,46,32,65,59,58,48,45,48,34,58,75,42,60,48,72,38,63,46,60,56,30,26,37,15,52,44,44,23,40,36,52,29,54,38,58,53,61,46,48,43,52,44,31,52,64,26,39,62,31,25,49,53,39,55,50,63,51,46,23,59,15,39,52,37,51,42,45,25,50,44,47,32,35,41,53,47,66,50,32,47,36,24,28,50,42,48,45,26,40,26,47,42,38,57,29,33,23,33,
38,28,43,44,61,50,45,31,66,35,29,41,24,43,47,48,40,55,60,37,24,42,45,67,20,70,57,41,36,48,44,39,64,51,54,62,55,73,48,54,60,58,65,49,58,44,37,104,50,94,99,94,86,52,105,127,105,85,88,89,97,99,100,68,87,79,88,83,66,65,52,72,70,55,19,2,51,21,20,27,36,4,15,31,27,47,51,40,38,30,27,20,31,30,25,41,42,55,15,38,22,61,17,20,36,31,48,43,29,50,36,35,57,62,65,70,39,56,52,37,60,80,85,73,91,120,125,121,139,147,123,137,150,143,105,135,135,173,148,144,140,171,139,158,175,166,148,164,154,151,140,171,155,152,174,163,173,160,179,147,141,167,165,167,167,143,168,140,160,183,168,153,164,166,157,179,164,170,161,164,178,165,162,165,163,136,140,156,136,158,138,185,188,164,159,143,129,155,143,126,170,128,137,148,149,154,165,168,160,174,138,147,123,155,159,150,129,153,140,148,153,73,68,91,116,120,158,163,189,193,157,114,12,10,11,13,0,14,7,19,31,11,11,5,4,0,15,5,9,0,33,10,12,22,14,24,30,23,15,14,31,33,27,16,22,24,12,20,8,25,9,22,30,39,22,16,19,22,4,24,3,7,30,28,24,50,31,14,36,22,37,17,15,14,36,15,13,30,22,13,30,26,25,35,23,18,30,35,14,14,37,34,9,5,35,15,43,10,12,28,26,48,26,23,4,20,8,45,7,18,46,37,21,20,47,29,10,42,25,51,42,23,23,30,27,10,43,53,47,45,61,31,23,36,30,26,32,43,47,63,58,55,53,41,29,63,59,67,64,37,36,64,30,56,70,49,84,71,84,83,69,66,99,139,102,92,100,77,80,81,81,107,90,121,117,137,126,146,125,107,158,95,124,128,129,115,130,140,125,141,152,175,123,143,138,125,122,95,112,156,96,134,117,140,131,116,107,123,137,138,141,128,157,148,142,158,124,147,134,150,134,127,130,95,136,133,168,169,142,140,143,165,130,126,111,111,143,143,134,148,119,136,139,132,145,145,140,155,139,129,132,160,160,113,141,146,156,137,115,135,134,118,125,135,131,109,119,139,150,137,130,126,115,90,92,82,124,115,122,134,125,114,121,125,116,126,116,116,100,111,165,123,129,114,126,126,145,126,98,105,129,113,140,152,141,151,117,108,137,111,125,116,116,124,155,110,120,131,110,150,93,77,90,114,81,72,80,73,48,87,94,93,72,115,73,100,75,76,115,74,123,90,103,115,135,89,50,73,90,58,106,74,72,57,48,77,61,102,90,80,56,87,94,28,69,44,
57,62,42,61,63,50,61,56,68,66,84,69,70,80,75,82,39,66,73,61,76,72,81,58,45,65,93,75,48,55,31,47,33,50,43,45,73,68,46,30,74,62,49,48,33,45,53,45,42,67,22,60,46,31,49,56,34,55,51,35,50,58,80,39,42,58,62,45,40,46,13,29,41,36,37,14,54,24,68,49,55,50,39,53,32,45,46,40,36,42,66,41,59,20,31,37,50,59,31,51,32,34,53,36,40,33,38,45,19,37,23,45,21,25,33,22,36,40,53,16,34,48,27,44,29,46,43,47,26,61,22,45,40,40,60,50,26,58,65,62,15,34,31,31,36,47,32,57,29,32,63,43,32,30,40,58,32,42,52,37,58,33,48,31,62,51,30,34,64,53,55,34,46,85,73,48,62,30,58,81,71,61,80,69,87,76,59,55,80,67,77,69,50,78,56,82,71,69,57,66,62,71,51,47,54,69,68,64,51,30,35,30,12,23,8,16,31,52,32,30,40,18,20,41,20,59,29,37,22,44,41,24,59,24,55,32,45,56,46,33,38,57,48,47,52,46,45,54,64,63,82,36,75,57,47,91,70,62,88,84,113,113,84,95,121,143,144,124,133,139,145,123,146,165,135,117,134,138,166,175,150,136,150,144,156,159,130,164,150,151,178,167,136,156,149,116,154,160,139,154,165,136,145,159,147,158,144,154,134,178,153,156,150,170,165,144,128,147,121,134,160,140,171,149,172,143,155,143,169,165,153,195,171,157,132,145,158,148,139,162,135,151,159,158,137,170,153,160,130,148,152,171,165,132,123,105,93,67,77,111,141,170,154,143,160,110,11,15,34,7,21,11,26,5,19,0,21,24,12,5,19,5,13,4,10,23,3,19,9,8,9,39,50,38,23,30,15,16,21,26,27,33,24,28,24,27,14,9,26,43,4,11,22,28,23,17,5,9,23,38,24,33,23,15,16,22,36,0,11,31,11,31,48,10,12,13,13,18,17,42,26,16,14,47,26,4,28,4,34,39,14,31,19,13,19,19,31,22,26,39,48,3,20,20,43,46,46,48,11,9,39,35,27,12,28,38,14,12,22,40,49,58,35,18,62,45,27,42,80,44,60,55,72,73,24,61,46,67,64,38,41,43,37,57,97,57,75,90,95,100,106,100,99,124,64,82,74,120,79,89,98,72,83,71,87,61,121,111,136,120,142,141,143,120,133,124,127,124,140,113,127,135,128,113,140,131,123,100,126,115,124,113,106,128,124,148,122,154,123,114,136,141,122,125,118,100,140,119,131,152,134,129,152,134,136,156,132,117,146,134,146,124,139,133,121,131,121,123,117,150,132,119,114,120,146,113,168,132,139,132,157,142,155,142,144,144,122,124,134,135,
117,125,131,134,122,130,112,122,121,137,114,102,133,125,138,118,134,105,137,108,123,128,109,109,121,110,132,140,152,147,137,135,129,98,112,103,111,102,114,89,136,109,105,112,125,124,147,125,123,85,120,103,125,134,105,127,147,136,125,129,132,96,85,95,121,106,83,99,116,80,79,82,97,90,104,72,78,89,86,81,74,77,74,36,65,32,54,78,75,104,65,95,62,60,53,46,52,61,62,63,56,63,69,71,90,55,51,87,70,43,81,65,59,54,52,71,65,49,80,39,51,50,62,45,72,68,63,50,69,49,53,43,48,41,64,59,82,61,50,69,37,66,46,58,71,32,51,43,59,34,67,55,68,65,59,20,35,46,50,48,47,34,29,57,57,55,64,57,53,49,48,61,57,66,32,49,40,62,46,60,47,63,42,49,51,76,56,54,51,41,44,75,37,42,57,64,34,50,32,43,59,35,54,51,59,38,40,60,44,32,34,57,35,48,51,41,64,60,51,54,52,1,58,44,66,38,30,43,36,74,73,48,56,30,38,25,49,37,47,57,38,57,50,58,52,74,44,47,14,10,29,29,33,64,36,37,47,44,42,53,48,30,34,41,34,35,47,32,54,68,34,36,34,32,45,51,65,45,72,71,64,70,68,60,70,54,59,48,40,48,72,74,70,44,64,43,72,69,63,71,89,75,81,55,66,63,56,53,62,42,69,74,62,66,53,62,32,27,37,38,44,28,33,51,44,29,40,26,54,29,24,5,31,34,26,56,52,34,29,22,28,39,7,69,31,32,47,52,60,54,48,54,68,30,58,58,57,38,52,70,66,68,59,52,77,72,95,101,65,51,71,84,90,89,95,116,137,106,125,149,132,149,134,144,153,144,199,155,133,181,151,144,143,150,128,127,129,121,153,132,145,144,126,166,152,153,139,160,161,146,161,167,156,162,170,154,158,145,147,178,150,179,158,153,143,159,157,133,145,138,173,180,141,140,143,155,147,168,164,137,160,170,170,148,153,137,151,153,142,165,149,180,141,145,135,132,144,145,137,140,127,141,121,155,175,137,136,103,89,76,95,129,145,162,163,106,26,0,9,37,16,10,2,12,16,2,14,32,19,5,7,15,13,12,42,12,20,0,3,9,17,30,48,1,28,26,18,7,11,52,36,22,31,9,29,27,7,32,5,11,10,28,13,36,23,23,11,34,30,18,23,11,39,26,2,11,12,17,38,5,15,17,11,11,4,10,9,8,15,27,24,35,35,37,5,37,18,26,2,19,23,19,40,32,12,26,33,39,46,30,24,45,13,49,26,18,25,42,35,27,36,11,33,36,36,35,27,28,5,35,19,57,53,35,28,52,46,39,29,35,65,59,58,72,56,35,54,33,77,49,53,49,35,72,68,74,79,94,66,77,88,
96,71,79,80,56,99,80,67,88,64,88,94,110,117,105,108,131,123,118,152,134,126,148,128,131,132,107,126,138,140,101,120,128,119,117,132,125,155,136,104,153,133,130,119,136,164,126,160,144,143,124,101,125,144,113,120,144,143,143,101,111,105,129,124,162,127,133,133,127,134,126,125,87,132,127,131,126,97,134,132,133,133,162,127,134,142,142,135,143,119,131,135,124,144,139,126,130,131,143,129,118,112,152,153,139,112,132,165,135,155,139,115,135,151,150,167,134,134,130,155,121,132,120,127,128,117,125,133,129,156,115,123,134,120,116,121,107,147,134,121,119,130,137,130,130,156,117,128,115,104,129,141,141,122,99,137,115,139,138,144,107,89,125,129,118,123,105,130,153,115,123,154,126,114,84,101,80,97,103,87,59,79,89,63,75,63,95,86,86,83,64,73,83,74,100,100,102,118,97,81,86,96,67,56,102,85,96,54,31,73,61,67,65,81,73,84,56,51,76,52,79,40,53,46,54,56,59,41,78,57,63,29,59,47,66,60,59,51,61,46,58,69,55,45,66,46,49,56,56,77,43,42,74,48,24,24,55,49,46,54,27,55,59,56,32,62,41,52,58,58,40,52,61,42,35,48,61,39,49,26,44,36,45,32,25,69,47,57,16,52,42,44,41,66,46,46,44,34,28,59,52,63,39,42,48,58,51,36,53,49,58,55,66,50,57,26,26,43,58,25,24,52,30,73,21,54,30,52,48,46,57,31,16,48,66,18,48,40,32,41,29,49,56,48,30,26,59,29,44,36,33,39,47,38,58,23,47,54,37,40,24,38,41,78,43,61,15,49,26,43,37,52,40,37,39,33,53,57,62,50,45,47,81,70,48,37,56,52,39,49,51,69,86,46,43,64,79,88,45,86,88,55,95,97,107,108,93,79,58,45,75,59,54,24,75,28,23,45,47,9,24,12,39,31,26,48,20,21,61,46,52,31,35,60,53,52,43,29,19,54,41,36,40,32,42,21,34,33,60,40,52,45,63,38,37,68,59,61,49,62,22,46,53,77,69,88,77,52,56,51,92,82,92,97,125,136,144,132,162,121,132,162,171,151,157,148,171,146,181,173,177,133,156,166,170,186,140,151,149,157,148,150,126,157,115,167,183,162,116,125,148,171,162,157,151,165,148,158,163,131,154,159,134,109,147,169,168,150,166,166,137,144,142,149,174,145,149,139,142,145,138,158,160,158,156,159,139,173,165,161,154,142,146,131,155,144,130,137,160,146,133,136,142,142,148,144,120,78,84,78,106,123,127,164,102,16,18,15,11
,12,11,11,18,12,35,20,21,13,4,20,33,15,9,10,20,3,25,11,25,38,32,34,5,48,16,40,18,11,36,11,30,7,41,18,41,30,11,23,15,31,32,25,33,9,29,36,6,10,28,32,38,35,14,6,27,10,19,7,11,6,17,20,14,11,9,14,27,10,18,38,36,12,9,29,17,16,17,44,23,34,44,38,14,41,22,19,36,30,22,64,33,17,33,20,49,34,35,25,26,28,28,12,55,15,9,39,61,3,39,44,29,16,33,51,40,58,24,24,51,45,46,77,34,76,84,71,71,82,84,73,91,73,87,90,76,90,79,95,103,66,90,57,70,55,95,71,82,92,107,135,104,134,126,111,113,132,137,132,132,122,129,118,98,125,132,134,127,110,135,122,117,148,153,167,152,144,170,152,139,152,153,147,147,125,107,156,129,125,133,127,136,141,129,145,134,143,136,162,142,151,134,118,123,147,142,126,141,149,115,124,144,104,137,124,166,149,128,158,159,145,149,153,170,127,120,154,131,127,114,103,142,101,129,134,160,124,146,120,109,147,118,126,87,110,124,137,134,145,108,118,123,137,128,150,149,164,163,133,130,137,146,123,143,146,146,133,128,138,122,144,138,121,104,128,134,124,120,138,125,123,136,143,181,118,116,137,138,123,149,138,130,160,101,110,107,105,113,140,143,138,140,144,170,148,165,147,142,151,129,125,143,151,128,134,112,117,83,132,111,133,105,119,110,98,64,70,104,86,113,86,69,76,70,96,95,121,103,70,93,76,67,53,88,47,86,37,61,75,69,55,85,83,64,35,60,74,80,69,78,44,59,95,51,50,58,65,39,53,42,55,62,56,31,62,38,63,34,61,55,56,64,43,34,64,46,26,60,53,43,59,49,48,23,40,30,36,38,53,43,29,74,48,61,67,35,35,44,44,48,49,29,53,42,52,55,59,53,73,57,42,71,59,45,53,36,43,46,42,67,51,55,47,36,13,54,26,28,46,53,30,30,43,35,37,37,37,43,70,49,42,56,38,31,50,47,28,29,33,36,37,13,35,37,58,46,25,54,56,53,44,36,79,45,36,56,35,39,66,46,75,46,62,40,33,45,34,33,37,40,49,30,59,51,25,31,40,60,55,50,29,41,14,17,50,62,46,65,66,35,37,33,49,43,82,67,38,45,50,29,42,33,46,38,41,38,54,44,43,60,38,46,42,69,52,65,75,72,55,72,87,76,86,93,72,106,113,89,63,58,30,62,41,49,46,74,60,53,52,25,32,39,44,44,25,38,43,38,37,49,30,37,13,29,32,23,35,4,35,26,36,56,40,52,38,32,44,30,40,59,50,47,42,57,47,51,70,35,62,51,44,66,59,86,67,50,52,60,50,55,55,10
7,105,104,118,126,151,141,155,184,168,121,181,139,181,168,165,144,170,171,168,159,163,169,160,180,152,191,163,178,148,138,166,147,164,191,157,176,168,131,132,155,170,169,182,156,150,156,178,173,160,189,153,176,162,153,148,172,152,159,148,142,145,165,171,150,156,113,123,141,139,157,173,163,149,148,139,181,119,176,140,168,138,181,161,155,166,170,138,140,139,134,120,177,150,162,166,172,159,121,75,81,114,82,126,119,38,19,14,16,7,23,23,2,25,44,13,20,27,12,4,31,13,14,33,7,16,15,41,25,4,9,46,29,33,14,14,18,49,34,24,34,19,22,32,18,5,16,21,14,28,20,9,30,5,11,31,14,22,35,17,31,11,16,17,4,8,26,18,22,47,24,16,15,20,9,42,25,30,46,13,7,18,19,34,33,17,10,33,29,31,11,39,43,25,34,25,29,17,30,20,5,22,36,30,38,45,64,45,33,33,28,33,33,48,30,51,66,27,35,40,29,44,54,55,48,50,29,50,24,56,45,73,67,94,103,93,99,107,122,88,112,71,101,76,81,71,57,115,68,133,94,96,84,104,123,115,118,137,138,109,116,121,132,138,131,109,138,127,102,130,107,118,128,93,134,128,137,121,114,126,117,157,167,126,154,137,143,123,137,116,117,129,125,136,140,132,132,119,146,131,164,128,133,141,137,137,125,161,161,159,162,167,148,155,139,141,138,144,140,140,141,142,158,157,123,162,152,155,157,169,151,137,132,145,137,129,157,122,146,114,113,109,96,123,115,122,131,140,157,129,113,137,103,158,110,138,150,149,118,142,150,142,147,137,155,140,155,128,141,153,148,129,148,141,126,152,165,147,159,133,114,148,131,148,159,131,112,118,127,99,156,134,126,131,112,105,101,140,130,131,116,148,127,125,114,154,155,137,115,106,139,153,159,114,118,132,113,107,140,135,85,88,118,125,79,88,98,66,62,79,114,93,69,112,99,59,56,92,59,74,94,82,89,79,76,49,72,61,28,50,43,46,60,51,65,49,63,44,51,84,68,60,66,58,57,60,71,64,71,65,72,73,44,65,74,80,71,57,55,66,61,19,64,70,87,50,51,54,65,22,50,28,64,65,44,49,56,32,39,57,43,20,55,33,45,50,47,46,91,50,28,62,67,68,67,39,32,49,47,56,34,38,52,87,22,50,57,47,29,50,33,38,51,46,38,63,43,55,42,41,48,42,58,32,43,52,57,36,57,52,67,50,36,34,38,24,49,41,56,42,49,45,35,46,34,46,46,31,26,46,55,53,47,43,20,33,52,60,55,29
,51,21,41,74,52,59,39,33,56,63,54,59,56,13,36,44,19,37,31,22,39,43,12,62,12,16,51,46,28,29,29,37,40,47,48,52,41,52,46,43,40,44,44,39,38,57,34,46,34,33,33,49,50,68,46,37,69,39,62,58,67,57,71,60,60,52,67,70,69,75,76,59,100,32,76,42,78,58,67,81,48,52,63,35,54,39,57,34,51,49,18,25,42,55,45,31,45,33,24,14,7,60,50,28,32,51,31,39,45,17,46,32,24,23,82,40,19,25,41,61,8,29,32,62,32,53,68,51,48,67,60,43,63,40,41,77,32,51,44,53,69,83,70,88,124,149,135,154,173,150,145,140,144,173,168,162,140,165,185,187,164,159,167,170,190,185,176,166,155,153,153,160,148,143,177,165,156,168,144,149,140,157,158,157,152,161,154,135,145,186,141,176,145,140,153,130,172,164,156,160,160,164,163,143,153,166,122,151,152,130,137,147,156,148,140,143,159,141,129,137,157,152,163,158,163,164,154,153,159,126,158,122,124,160,111,149,168,168,132,117,79,99,102,119,103,29,14,17,13,35,12,1,35,18,15,13,2,20,20,19,0,1,17,9,7,19,13,3,22,26,36,33,22,35,15,28,19,24,26,32,29,51,24,23,37,12,28,6,13,15,6,31,32,35,15,9,15,23,8,21,20,26,45,20,18,56,48,19,18,13,24,20,36,25,2,12,20,46,46,34,20,23,37,7,27,21,33,33,39,32,39,5,28,45,26,16,21,12,19,36,44,41,38,30,33,7,42,25,51,36,38,31,13,58,19,11,23,23,44,43,53,50,52,50,31,39,75,39,52,50,58,55,80,71,72,71,79,86,80,65,97,73,74,84,48,62,34,62,98,91,94,98,143,105,95,120,157,138,141,115,143,128,125,124,124,138,128,116,141,148,123,134,145,151,136,108,154,126,149,156,164,130,112,135,130,127,105,104,142,109,113,122,128,141,126,130,135,119,141,114,111,132,141,140,139,127,133,162,134,138,122,152,132,157,147,150,138,122,136,109,156,137,138,122,154,154,148,152,155,168,153,125,130,142,94,109,126,134,135,133,133,114,147,120,150,130,160,136,110,163,161,139,159,159,132,135,131,155,166,158,150,125,107,137,134,137,154,165,110,123,140,131,119,144,147,137,140,159,117,142,137,168,190,161,121,131,114,137,140,136,137,156,157,160,168,130,160,133,101,107,102,91,98,140,125,115,108,118,74,94,132,120,104,67,94,122,96,111,99,89,89,74,81,86,93,83,91,70,44,78,91,120,100,114,76,80,89,68,61,85,56,74,80,94,92,7
4,63,81,60,61,40,53,31,60,59,45,53,47,38,64,90,54,50,45,86,26,72,70,58,43,41,37,71,58,38,85,58,49,79,56,71,51,75,64,56,48,58,54,47,56,49,58,63,66,60,65,65,40,66,43,74,54,46,56,59,47,53,50,33,59,56,47,74,46,56,59,49,64,18,65,55,52,28,59,42,56,54,64,68,48,71,47,24,47,67,44,62,26,57,60,23,69,28,32,43,46,51,31,26,48,16,42,49,34,69,51,47,48,22,26,42,64,29,40,41,47,53,42,47,30,15,38,19,27,54,48,28,26,37,29,45,54,27,41,27,36,36,40,35,13,32,48,24,40,49,39,40,25,68,41,51,27,46,22,51,53,28,55,34,46,43,62,36,54,40,44,45,55,52,37,43,55,46,53,58,61,39,56,31,37,47,39,61,73,62,58,60,43,54,46,54,59,70,73,51,92,70,76,102,106,74,58,42,48,56,55,64,38,68,68,75,40,61,42,37,56,52,65,46,49,29,30,50,56,37,27,50,29,44,44,21,11,40,51,50,26,38,25,50,40,14,53,53,28,52,48,41,42,65,70,47,38,45,54,22,63,70,60,47,27,62,52,46,61,50,89,52,83,77,64,63,62,49,64,91,112,82,108,122,145,137,136,119,161,160,129,128,145,148,175,147,123,160,134,188,166,161,167,169,164,191,151,148,158,141,157,172,145,159,142,124,150,157,160,159,148,145,166,168,136,140,143,114,139,130,168,160,149,181,175,156,168,160,145,163,171,155,164,163,140,160,141,178,160,165,154,170,156,163,138,130,147,153,144,126,163,148,155,147,136,137,143,143,155,156,153,143,173,144,130,122,116,78,93,81,16,21,32,0,0,9,7,26,6,12,17,18,22,6,18,18,7,8,19,6,14,17,23,14,27,31,11,39,48,25,20,15,38,29,23,31,11,51,31,32,13,30,20,18,11,21,13,12,6,24,34,9,43,13,27,18,43,3,29,36,35,20,29,16,26,34,25,29,30,12,28,29,21,25,40,31,20,26,25,39,23,25,11,34,30,22,20,40,49,49,32,11,38,40,17,40,24,30,16,38,52,24,29,40,45,35,35,48,40,48,32,40,22,25,29,34,47,57,47,67,41,32,63,24,40,52,57,47,46,54,48,28,36,40,79,53,85,47,28,55,15,48,54,71,58,90,104,86,109,137,130,118,138,126,139,144,167,164,148,162,141,169,154,149,129,124,138,141,126,124,152,145,158,154,141,138,149,126,134,131,122,98,113,112,141,142,121,135,131,160,161,150,164,167,140,149,140,144,114,141,128,108,124,125,128,109,159,142,123,131,150,131,143,125,151,126,113,130,101,140,141,108,141,141,113,144,141,134,145,171,134
,142,143,140,157,134,114,158,147,154,141,141,122,134,132,143,141,167,158,143,138,130,102,124,119,132,139,136,146,151,145,161,134,134,107,115,110,135,135,129,178,120,142,125,114,134,143,163,148,151,162,164,151,144,146,112,156,132,138,118,149,158,130,129,126,101,117,110,113,112,83,112,126,115,92,92,84,84,60,73,104,69,118,129,99,103,89,92,95,59,107,99,121,97,92,90,90,86,111,95,101,97,93,13,56,49,68,53,86,80,69,65,73,82,120,45,92,78,94,77,73,65,73,61,46,69,39,58,68,54,68,57,53,41,43,31,66,51,76,37,60,57,41,46,57,55,47,70,59,60,55,56,52,56,32,39,39,56,46,54,70,52,39,62,46,40,65,54,25,57,48,65,44,62,69,42,28,49,67,44,66,61,49,47,38,75,66,57,62,68,64,54,40,47,38,56,36,30,43,19,50,39,41,36,46,43,60,52,51,31,40,53,45,55,51,17,53,18,35,47,53,43,54,54,45,27,29,18,19,49,50,13,36,29,55,46,34,59,47,49,45,47,45,47,41,58,28,58,44,23,55,33,45,64,53,37,23,45,28,54,56,20,58,52,54,25,42,51,66,31,62,53,32,34,29,38,67,33,52,23,44,37,50,43,48,44,50,45,57,52,58,42,52,44,50,49,58,58,58,67,52,21,37,46,60,46,43,44,49,51,38,69,53,59,68,60,59,47,73,41,73,70,88,69,71,53,55,44,44,39,33,53,47,29,29,25,17,56,46,20,34,31,30,61,51,56,25,62,39,31,45,44,42,31,25,31,31,55,29,41,37,42,50,46,47,51,53,57,47,55,48,28,58,39,71,52,39,61,60,54,84,60,90,62,75,74,52,73,92,107,140,106,125,105,126,130,121,130,159,178,150,138,127,138,169,216,132,150,177,167,145,151,164,161,155,156,138,151,168,161,161,144,147,133,143,156,167,140,129,156,132,140,156,157,122,121,134,156,149,172,193,179,158,150,153,132,173,161,154,178,170,173,130,136,138,163,121,165,132,147,152,186,164,182,126,156,144,154,172,157,156,166,158,127,161,131,135,141,153,157,156,133,163,145,146,143,118,100,76,65,25,11,40,0,11,37,46,12,2,24,7,27,22,31,20,12,24,5,17,8,27,26,11,9,17,16,15,20,43,8,17,11,30,31,28,43,33,37,15,21,28,21,29,48,19,7,9,14,29,30,10,22,25,21,18,31,43,19,31,11,38,38,32,43,22,32,17,43,22,23,37,27,32,8,18,25,23,32,28,10,21,20,36,23,22,33,7,50,24,45,30,47,42,31,34,45,38,30,27,34,22,36,51,24,48,50,35,33,49,69,16,18,31,25,50,42,54,55,58,65,35,
34,75,41,50,47,60,44,62,58,44,27,42,40,59,76,70,55,57,41,75,65,41,80,45,85,94,130,136,108,102,99,135,95,120,137,156,147,191,155,148,113,144,117,98,133,142,143,124,143,143,134,125,143,156,117,121,123,139,150,127,155,159,121,145,152,134,136,103,132,131,127,133,154,156,145,138,143,150,139,142,120,131,130,135,146,127,123,128,131,131,135,138,132,111,134,153,111,118,126,130,121,133,131,129,158,151,128,171,152,135,135,148,157,149,153,151,157,139,133,128,140,134,126,145,122,158,165,144,133,113,132,142,143,151,180,140,111,127,131,137,137,123,117,135,120,126,118,115,117,126,168,126,137,138,146,103,81,129,105,115,120,100,115,102,92,100,94,94,103,80,101,100,142,131,129,117,129,137,124,131,149,141,143,122,139,113,106,106,92,120,121,137,99,101,94,122,146,111,100,110,136,129,130,114,80,74,75,49,44,76,91,71,57,93,60,65,74,62,54,69,62,67,57,98,104,88,89,93,58,72,75,63,76,72,84,77,78,76,44,28,68,76,65,57,72,68,72,52,55,74,40,37,71,60,56,45,44,51,21,38,42,36,41,72,35,44,55,30,42,43,65,29,42,53,38,54,81,56,65,61,48,36,67,56,45,62,37,70,49,52,65,54,58,48,44,57,34,22,47,45,54,44,63,33,34,59,56,59,51,25,56,29,55,53,62,57,42,50,18,51,40,23,35,31,48,48,26,53,47,45,35,30,39,22,44,36,40,40,64,40,73,48,62,36,43,27,33,39,56,50,66,40,41,47,44,38,50,35,49,47,48,36,45,28,27,12,46,38,38,29,35,38,62,55,43,5,45,44,47,74,41,42,40,47,47,42,21,42,66,35,72,28,32,54,60,65,51,66,58,38,41,52,67,72,41,63,50,41,32,34,35,43,56,23,56,48,51,57,54,40,49,66,44,43,43,59,58,54,36,87,57,49,77,78,64,67,59,30,55,66,21,28,37,26,53,32,37,30,30,25,34,37,30,27,36,40,30,52,36,64,26,47,38,44,32,50,46,11,21,57,43,43,40,31,43,64,53,37,33,54,19,58,69,50,55,66,51,57,77,87,86,70,94,95,89,102,104,109,142,133,119,132,128,164,135,158,156,160,172,182,167,153,163,161,195,151,171,155,150,155,162,138,144,150,163,150,130,167,155,164,129,165,127,171,138,173,142,164,133,180,165,162,164,161,167,164,173,168,155,166,162,176,169,163,163,138,166,166,167,144,153,160,167,141,134,164,150,145,153,164,154,121,136,141,129,139,153,140,155,149,137,136,
120,153,134,147,134,143,134,149,151,123,134,152,136,146,116,82,80,30,5,27,6,26,17,10,1,11,33,14,9,33,0,22,13,1,14,39,5,33,32,22,21,34,19,22,21,45,37,8,18,31,19,8,34,32,6,31,28,22,21,17,40,41,15,16,14,32,14,22,14,13,19,29,26,22,9,17,26,33,26,40,18,43,24,3,15,33,40,29,15,48,21,32,30,50,31,41,23,43,31,23,21,29,29,9,20,29,9,31,11,22,42,51,36,47,41,47,57,26,26,18,54,58,15,32,78,57,54,65,55,87,57,52,54,67,91,99,83,74,85,102,118,94,111,157,133,138,126,135,140,113,107,113,107,82,63,91,63,66,97,110,95,98,87,88,97,74,86,79,90,99,108,93,83,75,97,110,97,108,113,127,143,123,119,127,146,131,125,138,151,159,155,156,161,122,113,145,150,137,139,130,137,141,173,156,132,146,145,141,153,120,155,142,160,159,158,154,161,121,126,138,126,125,142,143,138,136,169,166,134,152,126,146,163,170,158,146,164,154,140,155,154,143,152,192,152,142,156,170,132,130,128,119,120,89,123,111,125,141,137,122,144,130,132,162,150,136,116,111,145,165,150,142,163,158,163,147,141,117,96,107,156,138,191,156,155,175,165,115,108,122,129,147,163,165,129,161,179,154,170,131,151,145,176,149,145,139,149,148,148,135,107,132,122,133,128,132,149,127,120,120,112,130,136,143,98,93,127,158,123,117,145,151,114,101,151,144,143,122,98,97,126,70,97,118,115,170,130,94,58,57,91,105,68,98,112,113,90,85,104,100,88,82,61,78,50,44,61,46,65,78,80,72,76,73,64,71,47,92,93,85,90,82,52,96,75,85,51,98,64,58,63,55,79,56,84,42,27,73,51,44,46,56,39,54,62,58,26,65,62,22,42,38,30,45,70,18,32,17,37,78,32,61,58,48,60,45,25,54,37,41,55,46,75,57,27,54,45,42,64,43,43,59,69,46,28,60,77,30,40,33,59,32,36,32,54,50,50,41,32,49,34,32,65,50,53,29,41,39,37,64,49,21,39,47,49,55,53,58,25,64,44,51,46,50,53,42,38,37,41,45,14,64,70,43,40,27,30,46,52,66,29,46,37,40,59,44,36,28,46,45,32,36,25,10,51,70,31,36,55,46,36,20,36,40,29,52,41,65,29,44,59,57,57,37,35,76,32,25,54,62,52,59,62,39,42,45,58,54,44,31,52,56,30,37,29,29,49,42,32,50,52,44,32,75,45,48,44,46,68,47,53,66,58,48,41,78,43,85,50,32,67,47,40,66,42,43,36,27,39,57,25,51,31,20,23,49,50,53,48,35,39,40,29,76,41,6
,23,36,26,8,29,27,
\ No newline at end of file
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/estimation_16x16.txt b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/estimation_16x16.txt
new file mode 100644
index 0000000..7216dbc
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/estimation_16x16.txt
@@ -0,0 +1,2 @@
+30,45
+12,8;12,8;12,8;12,8;12,8;12,9;12,9;12,9;12,9;12,9;12,9;11,10;11,10;11,10;11,10;11,11;11,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,11;9,4;10,12;11,12;11,12;11,12;11,5;10,11;10,-1;12,8;12,8;12,8;12,8;12,8;12,9;12,9;12,9;12,9;12,9;12,10;11,10;11,10;11,10;11,10;11,11;11,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,11;11,13;11,11;11,12;11,12;11,12;10,9;10,11;10,-1;11,8;12,8;12,8;12,8;12,8;12,9;12,9;12,9;12,9;11,9;11,10;11,10;11,10;11,10;11,11;11,11;11,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;12,12;11,12;11,12;11,12;11,12;11,11;10,11;10,11;10,-1;12,8;12,8;12,8;12,8;12,9;11,9;11,9;11,9;11,9;11,9;11,10;11,10;11,10;11,11;11,11;11,11;11,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,13;11,14;10,13;10,8;10,11;10,0;12,8;12,8;12,8;12,8;11,9;11,9;11,9;11,9;11,9;11,10;11,10;11,10;12,11;11,11;11,11;11,11;12,11;12,12;12,12;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;10,13;9,13;10,24;10,12;10,11;10,0;11,8;11,8;12,8;12,9;11,9;10,9;11,9;11,9;11,10;11,10;11,10;11,10;11,11;11,11;9,10;11,11;12,12;13,12;12,12;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;10,12;10,13;10,9;10,7;9,11;9,0;11,8;11,9;10,9;11,9;11,9;10,9;10,9;10,9;11,10;11,10;11,10;11,11;11,11;11,11;11,11;11,12;11,12;11,12;11,12;11,12;11,12;12,11;12,11;12,11;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;10,12;10,10;9,11;9,11;9,0;11,9;11,9;10,9;10,9;10,9;10,9;9,9;10,10;10,10;11,10;11,11;11,11;11,11;11,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,
12;11,12;11,12;11,12;11,12;11,12;10,12;10,11;9,9;9,10;9,11;9,0;11,9;11,9;11,9;10,9;10,9;9,9;7,9;9,10;10,10;11,10;11,11;11,11;12,12;12,12;11,12;11,12;12,12;19,14;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;10,12;10,12;10,12;8,-2;8,12;8,12;8,0;11,9;11,9;11,9;11,9;10,9;10,9;10,10;10,10;10,10;11,11;11,11;11,13;12,12;13,12;10,12;3,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;10,12;10,12;9,11;9,25;8,12;7,11;7,0;11,9;11,9;11,9;11,9;11,9;10,9;10,10;10,10;11,10;11,11;11,11;11,11;12,11;11,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;10,12;10,12;9,12;8,15;7,13;6,11;5,0;11,9;11,9;11,9;11,9;11,9;11,10;11,10;11,10;11,10;11,11;11,11;12,11;12,10;12,11;12,11;12,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;10,12;9,13;9,14;8,14;6,11;4,11;3,0;12,9;12,9;12,9;12,9;12,10;11,10;11,10;11,10;11,10;11,10;12,11;12,11;12,11;12,11;12,11;12,11;12,12;12,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;10,12;10,12;9,12;9,14;9,15;8,16;5,15;1,12;0,0;12,9;12,9;12,9;12,9;12,10;12,10;12,10;11,10;11,10;12,10;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;10,12;10,13;9,13;9,14;8,16;7,17;3,16;-2,12;-5,7;12,10;12,10;13,9;14,10;13,10;12,10;12,10;12,10;12,10;12,10;12,11;13,11;13,11;13,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;10,13;10,13;9,14;9,15;7,16;6,18;0,18;-8,12;-19,3;12,10;12,10;13,10;13,10;13,10;12,10;12,10;12,10;12,10;12,10;13,11;13,11;14,11;13,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;1
2,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;10,13;10,13;9,14;9,15;7,16;4,17;-1,17;-13,12;-30,-2;12,11;12,10;11,10;16,10;14,10;12,10;12,10;12,10;12,10;12,10;13,10;14,10;18,10;13,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,13;11,14;10,14;10,16;9,17;5,18;0,19;-12,14;-39,-11;12,11;12,11;12,10;12,10;12,10;12,10;12,11;12,11;12,11;12,11;12,10;12,10;12,10;12,10;12,11;12,11;12,11;12,11;12,11;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,13;11,13;11,13;10,14;9,15;8,16;7,18;3,19;-4,17;-13,6;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,13;11,13;11,13;10,13;9,14;8,15;7,17;5,17;1,15;-3,2;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,13;11,13;10,13;10,13;11,14;9,14;9,16;8,16;5,13;0,-2;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;10,12;10,12;10,13;9,13;9,14;9,14;8,13;3,-5;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;12,11;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;10,12;10,12;10,13;10,13;9,13;9,13;9,12;4,-7;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,13;10,12;10,12;10,13;10,13;10,13;10,13;10,13;10,12
;6,3;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;10,13;10,13;10,13;10,12;10,12;10,13;10,13;11,14;10,14;8,12;2,-1;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,13;11,13;10,13;10,13;11,13;10,13;10,13;10,13;10,13;10,13;9,13;8,13;6,12;0,-6;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;10,13;10,13;10,13;10,13;10,13;10,13;10,13;10,14;11,14;10,14;10,14;8,14;5,13;-1,-10;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,13;11,12;11,12;11,12;11,12;11,12;11,12;11,13;11,13;10,13;11,13;11,13;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,12;11,13;11,13;11,13;11,14;10,14;9,13;9,13;9,13;9,14;9,14;9,14;9,14;9,14;9,14;7,13;5,12;2,0;10,13;10,12;9,12;9,12;10,12;11,12;11,13;10,13;10,12;10,12;10,12;11,12;11,12;11,13;10,13;10,13;10,13;10,13;10,13;11,13;11,12;11,13;11,13;11,13;11,12;11,12;11,12;11,13;11,13;10,14;10,14;9,15;9,15;10,15;8,13;8,13;8,15;7,14;6,13;7,13;8,14;8,14;5,13;4,12;2,-1;10,13;7,12;7,12;7,12;8,12;8,11;8,12;8,13;8,13;9,12;9,11;9,12;9,12;9,12;9,12;9,12;7,13;7,13;8,13;8,12;8,12;8,13;9,13;10,13;11,13;11,13;10,13;10,13;10,14;10,15;10,16;9,17;7,18;6,16;5,12;8,15;8,15;1,13;1,10;4,10;6,12;7,15;4,14;3,12;2,1;-1,13;5,12;6,12;4,12;6,11;5,9;2,5;3,6;0,1;6,9;6,10;3,10;6,11;4,11;3,10;3,10;1,8;-7,3;-6,-1;3,8;4,10;2,12;-11,13;1,13;-4,13;-6,13;-6,13;0,13;-3,14;0,15;0,15;1,16;-11,13;1,14;3,13;-2,5;-8,2;-17,-8;-3,0;0,6;1,10;2,11;-1,10;2,11;1,0;
\ No newline at end of file
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/exhaust_16x16.txt b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/exhaust_16x16.txt
new file mode 100644
index 0000000..719c3f0
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/exhaust_16x16.txt
@@ -0,0 +1,2 @@
+30,45
+6,4;12,14;6,16;24,9;30,2;7,11;11,12;13,10;12,12;30,11;9,13;10,11;4,11;1,-7;7,-13;9,-32;1,-12;22,9;29,5;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;28,-9;12,12;12,12;12,12;12,12;10,5;12,12;0,-1;9,29;14,3;4,31;11,24;-10,7;5,23;-15,-32;13,6;13,6;0,6;27,3;10,9;12,11;14,3;2,-19;-4,14;16,-13;12,12;17,10;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;6,18;10,17;9,12;12,12;12,12;12,12;3,9;26,12;-16,-1;8,31;11,-10;17,-17;16,-22;13,14;10,18;12,12;12,11;20,-23;3,9;7,4;12,13;10,13;12,12;15,24;11,-6;12,12;12,12;11,11;12,11;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;14,9;12,12;12,12;12,12;12,12;24,17;4,11;17,12;-32,-1;12,8;13,11;13,-4;13,25;14,-26;12,12;20,-8;12,12;12,12;13,12;13,12;12,12;12,12;14,11;11,13;3,8;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;14,16;5,6;5,12;-32,-1;12,11;13,14;12,13;12,12;8,-2;12,20;11,13;12,14;12,12;12,12;10,7;11,-10;13,14;11,13;12,12;-16,-10;12,12;12,13;9,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;16,25;12,12;18,12;-32,-1;12,13;12,12;16,14;16,30;11,7;10,-4;11,15;12,12;9,22;13,15;11,-12;12,12;12,12;12,6;4,-22;12,12;12,12;15,24;12,9;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;15,9;19,6;27,12;-32,-1;11,5;13,26;6,-20;13,8;13,-2;8,-14;12,22;10,1;11,-14;13,24;15,10;14,17;12,-8;12,12;11,-9;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;5,12;-32,-1;12,12;9,-8;12,12;30,1;25,-3;12,12;12,17;8,14;8,24;8,-2;7,20;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;
12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;8,16;8,10;12,12;-32,-1;9,11;5,6;14,1;-4,27;12,12;19,7;-18,-32;22,-20;11,20;11,16;10,20;12,12;12,12;12,12;12,12;12,12;12,12;31,-22;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;7,-4;12,12;13,12;-32,-1;12,12;12,12;11,4;12,12;12,20;12,7;12,12;10,20;13,8;19,-1;12,11;24,19;13,8;14,6;11,9;2,16;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;11,11;14,26;17,12;12,12;-32,-1;20,18;10,10;15,-17;14,-2;12,12;11,26;13,27;10,12;13,14;23,9;8,20;17,0;12,12;12,11;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;10,13;12,1;13,12;-32,-1;16,11;12,12;9,13;22,0;-32,-5;9,8;15,6;-8,-2;12,12;10,-21;12,3;12,12;8,3;12,12;12,12;12,23;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;11,11;7,12;31,-1;8,16;11,7;11,18;11,9;12,3;13,-4;5,24;8,17;12,12;12,14;12,12;-9,14;7,-2;12,16;15,6;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,14;11,12;21,-1;13,20;27,-9;11,6;14,8;7,22;19,7;4,1;-24,21;12,19;11,18;9,-24;14,-20;12,7;13,-28;10,15;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;13,12;24,-32;14,5;9,18;25,-26;24,-27;12,11;10,16;7,26;12,12;17,17;13,4;6,18;12,20;30,3;16,17;12,16;12,12;12,-21;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;6,-32;12,12;12,12;-16,11;12
,12;10,11;25,-9;27,5;22,28;8,13;13,3;12,30;-5,-8;18,-31;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;-9,-32;13,12;12,12;12,3;18,29;15,-9;1,19;-9,14;-3,-3;-21,8;13,6;-12,12;-2,-13;20,-32;-19,-31;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;-24,-30;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;27,-1;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;11,-1;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;-5,-1;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;-21,-1;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,15;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;-29,-1;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12
;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;17,-1;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;1,-1;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;-15,-1;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;-31,-1;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,9;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;3,-1;12,12;12,12;17,20;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;-13,-1;12,12;12,12;-8,25;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;21,-1;-2,5;-3,25;-18,27;1,-31;0,17;-9,26;-23,27;-23,20;-27,18;1,30;3,15;0,0;3,22;3,5;0,0;0,0;0,0;-17,-18;-15,-16;-1,15;-4,9;2,11;-13,-2;2,13;-5,16;-7,-2;-7,7;0,18;-5,-11;0,-10;0,0;2,-5;-5,-32;4,-27;2,21;-8,7;-8,1;-17,-10;-4,-1;-13,0;-8,5;1,1
0;0,-1;4,12;5,-1;
\ No newline at end of file
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/ground_truth_16x16.txt b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/ground_truth_16x16.txt
new file mode 100644
index 0000000..850b7ed
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/ground_truth_16x16.txt
@@ -0,0 +1,2 @@
+30,45
+12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;1
2,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,
12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12
;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;1
2,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;12,12;
\ No newline at end of file
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/localVar_16x16.txt b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/localVar_16x16.txt
new file mode 100644
index 0000000..5e4ea8e
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/localVar_16x16.txt
@@ -0,0 +1,2 @@
+30,45
+0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;1,0,0,0;0,0,0,0;0,0,0,1;0,0,0,0;0,0,0,0;0,0,0,1;0,0,0,0;1,0,0,0;1,0,0,0;0,0,0,0;0,0,0,0;1,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;1,2,2,4;25,115,115,529;196,644,644,2116;225,420,420,784;576,696,696,841;4900,3920,3920,3136;9604,6664,6664,4624;9,120,120,1600;1024,2208,2208,4761;3249,5016,5016,7744;484,1870,1870,7225;9,288,288,9216;64,384,384,2304;324,882,882,2401;10404,8058,8058,6241;1936,2332,2332,2809;196,322,322,529;1,10,10,100;4,16,16,64;4,100,100,2500;1156,2482,2482,5329;3600,5160,5160,7396;81,594,594,4356;0,0,0,1521;0,0,0,10000;1,134,134,17956;0,0,0,0;0,0,0,0;0,0,0,0;1,0,0,0;0,0,0,0;1,0,0,0;0,0,0,0;1,0,0,0;1,0,0,0;0,0,0,0;1,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,4;25,110,110,484;196,560,560,1600;529,713,713,961;1296,2196,2196,3721;4096,6784,6784,11236;6889,9960,9960,14400;441,1659,1659,6241;1764,2520,2520,3600;4225,4875,4875,5625;529,2208,2208,9216;36,558,558,8649;36,276,276,2116;529,1564,1564,4624;9801,11088,11088,12544;3600,4620,4620,5929;225,360,360,576;4,2,2,1;1,9,9,81;1,44,44,1936;2025,3060,3060,4624;4624,4760,4760,4900;16,168,168,1764;0,0,0,1681;0,0,0,10609;1,133,133,17689;9,0,0,0;1,0,0,0;4,0,0,0;9,0,0,0;9,0,0,0;1,0,0,0;1,0,0,0;1,0,0,0;1,0,0,0;1,1,1,1;0,0,0,0;0,0,0,0;0,0,0,0;1,-1,-1,1;0,0,0,0;1,0,0,0;1,0,0,0;1,0,0,0;0,0,0,0;0,0,0,9;81,198,198,484;225,540,540,1296;2209,2162,2162,2116;6400,5760,5760,5184;6241,7347,7347,8649;7396,7998,7998,8649;8464,7084,7084,5929;2916,4104,4104,5776;3249,6498,6498,12996;784,3724,3724,17689;400,1820,1820,8281;784,1288,1288,2116;1849,1935,1935,2025;8649,4650,4650,2500;4624,3060,3060,2025;1089,891,891,729;625,175,175,49;81,0,0,0;441,693,693,1089;3969,4347,4347,4761;2809,2862,2862,2916;0,0,0,0;0,0,0,2116;0,0,0,10816;1,134,134,17956;1,1,1,1;16,0,0,0;25,0,0,0;9,0,0,0;25,0,0,0;1,0,0,0;1,0,0,0;4,0,0,0;9,0,0,0;9,0,0,0;9,3,3,1;9,3,3,1;1,0,0,0;4,2,2,1;1,0,0,0;0,0,0,0;4,2,2,1;4,4,4,4;1,0,0,0;1,4,4,16;196,378,378,729;256,784,784,2401;4624,4692,4692,4761;16384,8960,8960,4900;6889
,4067,4067,2401;8649,3534,3534,1444;17689,8246,8246,3844;6889,8798,8798,11236;4761,8832,8832,16384;1024,4032,4032,15876;3136,5656,5656,10201;6400,5920,5920,5476;5776,5700,5700,5625;6724,7626,7626,8649;2916,5130,5130,9025;7744,6688,6688,5776;5184,3528,3528,2401;4225,2665,2665,1681;4356,3564,3564,2916;7056,3528,3528,1764;2500,600,600,144;1,0,0,0;0,0,0,225;0,0,0,10404;1,-135,-135,18225;16,0,0,0;25,5,5,1;25,0,0,0;36,0,0,0;16,0,0,0;9,0,0,0;9,0,0,0;9,0,0,0;9,3,3,1;16,0,0,0;25,0,0,0;25,0,0,0;25,10,10,4;4,4,4,4;4,6,6,9;1,0,0,0;4,2,2,1;1,0,0,0;0,0,0,0;1,9,9,81;576,720,720,900;100,450,450,2025;1849,1763,1763,1681;8464,5796,5796,3969;3969,5733,5733,8281;5041,6177,6177,7569;7921,7476,7476,7056;8100,7740,7740,7396;6561,6075,6075,5625;1296,2088,2088,3364;4489,3618,3618,2916;7056,4368,4368,2704;5041,4331,4331,3721;4900,5530,5530,6241;1849,3311,3311,5929;7225,4930,4930,3364;6084,3744,3744,2304;7396,5934,5934,4761;6241,7110,7110,8100;8100,8730,8730,9409;5184,5688,5688,6241;1,35,35,1225;1,54,54,2916;0,0,0,11881;1,-137,-137,18769;64,0,0,0;49,0,0,0;16,-4,-4,1;25,0,0,0;25,0,0,0;36,0,0,0;25,0,0,0;36,0,0,0;16,0,0,0;9,0,0,0;16,0,0,0;9,0,0,0;25,0,0,0;25,0,0,0;16,4,4,1;100,10,10,1;49,14,14,4;25,10,10,4;36,0,0,0;36,96,96,256;1764,1680,1680,1600;1600,1880,1880,2209;1024,1184,1184,1369;1681,1517,1517,1369;2704,3484,3484,4489;2025,3330,3330,5476;2809,2915,2915,3025;4489,5293,5293,6241;5329,6132,6132,7056;1764,2100,2100,2500;3025,2090,2090,1444;3249,1596,1596,784;1764,2310,2310,3025;4489,6164,6164,8464;1521,2808,2808,5184;729,810,810,900;900,330,330,121;1764,882,882,441;2025,1530,1530,1156;5041,5467,5467,5929;3136,4144,4144,5476;0,0,0,676;0,0,0,289;0,0,0,12321;1,-139,-139,19321;9,0,0,0;9,0,0,0;36,0,0,0;9,0,0,0;4,0,0,0;9,0,0,0;9,0,0,0;9,0,0,0;9,0,0,0;9,0,0,0;4,0,0,0;9,0,0,0;9,0,0,0;121,0,0,0;36,0,0,0;1089,231,231,49;1225,210,210,36;1369,185,185,25;1225,245,245,49;1764,1218,1218,841;3364,3654,3654,3969;3969,5292,5292,7056;4761,5313,5313,5929;2704,2964,2964,3249;2209,3290,3290,4900;1681,2952,2952,51
84;3721,4453,4453,5329;12321,12432,12432,12544;9025,9405,9405,9801;2116,2392,2392,2704;2704,2704,2704,2704;2304,2256,2256,2209;1089,2244,2244,4624;4761,7176,7176,10816;900,1890,1890,3969;100,150,150,225;324,342,342,361;576,600,600,625;441,945,945,2025;3600,4860,4860,6561;441,1092,1092,2704;1,2,2,4;0,0,0,3025;0,0,0,12100;0,0,0,19881;0,0,0,0;0,0,0,0;1,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;1,0,0,0;1,0,0,0;1,0,0,0;1,0,0,0;4,0,0,0;9,0,0,0;9,18,18,36;256,320,320,400;1600,720,720,324;2209,611,611,169;3249,798,798,196;3364,522,522,81;3600,600,600,100;4761,1725,1725,625;2916,2376,2376,1936;2809,2703,2703,2601;6084,3900,3900,2500;3844,2914,2914,2209;3249,2850,2850,2500;3600,3480,3480,3364;9025,7505,7505,6241;19321,15846,15846,12996;6889,9545,9545,13225;1225,2485,2485,5041;1600,1640,1640,1681;961,1147,1147,1369;729,1431,1431,2809;4489,5025,5025,5625;400,920,920,2116;81,108,108,144;576,336,336,196;576,552,552,529;289,1139,1139,4489;2304,3984,3984,6889;4,58,58,841;0,0,0,0;0,0,0,1849;0,0,0,12544;1,-144,-144,20736;1,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;9,0,0,0;0,0,0,0;4,0,0,0;4,0,0,0;4,0,0,0;16,0,0,0;9,0,0,0;121,11,11,1;484,22,22,1;576,24,24,1;1024,32,32,1;529,138,138,36;1681,451,451,121;2304,2928,2928,3721;3969,5355,5355,7225;4096,3776,3776,3481;6889,4980,4980,3600;3249,4161,4161,5329;1296,3276,3276,8281;4900,7280,7280,10816;17424,15972,15972,14641;8100,11070,11070,15129;3600,5100,5100,7225;576,1272,1272,2809;841,1102,1102,1444;400,620,620,961;144,228,228,361;1936,924,924,441;225,315,315,441;25,125,125,625;1444,1292,1292,1156;256,480,480,900;169,728,728,3136;1156,2754,2754,6561;9,117,117,1521;4,68,68,1156;9,174,174,3364;1,114,114,12996;1,-149,-149,22201;4,0,0,0;1,0,0,0;1,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;1,0,0,0;4,0,0,0;4,0,0,0;1,0,0,0;1,0,0,0;0,0,0,25;4,0,0,0;196,0,0,0;400,0,0,0;400,0,0,0;1225,315,315,81;2116,414,414,81;1936,616,616,196;2601,1530,1530,900;5929,4774,4774,3844;4624,4964,4964,5329;7225,4760,4760,3136;3969,3276,3276,2704;1024,2560,2560,6400;9216,9600,9600,10000;12321,1
0989,10989,9801;5476,7104,7104,9216;3364,4292,4292,5476;289,578,578,1156;676,286,286,121;256,96,96,36;49,105,105,225;784,672,672,576;144,180,180,225;64,136,136,289;1444,988,988,676;144,252,252,441;144,696,696,3364;1296,3132,3132,7569;1,46,46,2116;4,94,94,2209;1,61,61,3721;0,0,0,14400;1,-150,-150,22500;0,0,0,0;1,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;1,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;9,48,48,256;900,600,600,400;1521,507,507,169;1936,440,440,100;3364,580,580,100;4225,585,585,81;3969,693,693,121;4225,1755,1755,729;4356,3234,3234,2401;3136,2912,2912,2704;5776,3192,3192,1764;5041,3550,3550,2500;4624,5236,5236,5929;13456,10672,10672,8464;7744,8096,8096,8464;4225,4290,4290,4356;2601,2499,2499,2401;324,720,720,1600;676,728,728,784;169,286,286,484;256,288,288,324;1369,851,851,529;289,374,374,484;225,300,300,400;961,651,651,441;196,280,280,400;256,768,768,2304;1681,3772,3772,8464;1,53,53,2809;1,0,0,0;0,0,0,0;0,0,0,14161;0,0,0,22801;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;1,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,9;256,0,0,0;441,0,0,0;625,50,50,4;784,56,56,4;1089,132,132,16;841,232,232,64;2025,945,945,441;2116,2162,2162,2209;1296,2268,2268,3969;3025,2585,2585,2209;6084,4992,4992,4096;5476,5550,5550,5625;6084,3900,3900,2500;5041,2911,2911,1681;1225,1365,1365,1521;2025,1980,1980,1936;784,980,980,1225;961,682,682,484;529,552,552,576;1156,1258,1258,1369;2209,2021,2021,1849;576,744,744,961;625,650,650,676;961,930,930,900;576,432,432,324;289,884,884,2704;3025,4895,4895,7921;2209,1974,1974,1764;16,0,0,0;16,204,204,2601;1,118,118,13924;1,141,141,19881;0,0,0,0;0,0,0,0;1,0,0,0;0,0,0,0;1,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;1,0,0,0;1,0,0,0;1,0,0,0;0,0,0,9;0,0,0,0;1,0,0,0;1,0,0,0;9,3,3,1;9,6,6,4;16,8,8,4;16,68,68,289;1681,1681,1681,1681;2401,2842,2842,3364;576,1560,1560,4225;1296,2556,2556,5041;8836,8272,8272,7744;2116,2668,2668,3364;1600,1120,1120,784;2025,1530,1530,1156;1444,1330,1330,1225;1849,1677,1677,1521;400,740,740,1369;1444,1596,1596,1764;676,1222,1222,2
209;1849,2107,2107,2401;1521,1677,1677,1849;676,1066,1066,1681;1521,1716,1716,1936;441,882,882,1764;900,1230,1230,1681;256,784,784,2401;3136,3472,3472,3844;7396,6450,6450,5625;3136,2352,2352,1764;256,1056,1056,4356;1,107,107,11449;4,278,278,19321;0,0,0,0;0,0,0,0;0,0,0,0;1,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;1,0,0,0;1,0,0,0;1,0,0,0;1,0,0,0;1,0,0,0;4,0,0,0;4,0,0,0;4,0,0,0;4,0,0,0;4,0,0,0;9,6,6,4;100,390,390,1521;3136,3304,3304,3481;3844,3100,3100,2500;484,1254,1254,3249;4624,5916,5916,7569;10201,8585,8585,7225;1024,1536,1536,2304;1369,1591,1591,1849;2025,1710,1710,1444;1600,1400,1400,1225;1369,1369,1369,1369;400,680,680,1156;1369,1147,1147,961;1521,1131,1131,841;2025,1080,1080,576;1521,897,897,529;1521,1014,1014,676;2209,1598,1598,1156;441,609,609,841;1024,832,832,676;361,627,627,1089;1521,1326,1326,1156;3136,2632,2632,2209;8100,5940,5940,4356;3136,5488,5488,9604;9,393,393,17161;7056,5376,5376,4096;0,0,0,0;0,0,0,0;0,0,0,1;9,0,0,0;0,0,0,0;0,0,0,0;0,0,0,1;0,0,0,0;0,0,0,0;0,0,0,0;1,0,0,0;1,0,0,0;0,0,0,0;9,0,0,0;144,0,0,0;256,0,0,0;100,0,0,0;256,112,112,49;576,408,408,289;3025,1925,1925,1225;2704,2808,2808,2916;1764,2814,2814,4489;7744,6864,6864,6084;5329,5256,5256,5184;1369,1850,1850,2500;961,1302,1302,1764;1521,1560,1560,1600;1936,1628,1628,1369;1600,1400,1400,1225;1600,1280,1280,1024;1681,1517,1517,1369;3249,2451,2451,1849;2401,1960,1960,1600;2209,1504,1504,1024;2209,1598,1598,1156;2209,2021,2021,1849;1089,1221,1221,1369;1369,1258,1258,1156;1156,1190,1190,1225;1521,1287,1287,1089;1156,1088,1088,1024;3969,3528,3528,3136;4096,6720,6720,11025;25,670,670,17956;8836,6298,6298,4489;0,0,0,1;9,3,3,1;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;1,0,0,0;0,0,0,0;0,0,0,81;400,280,280,196;900,210,210,49;784,140,140,25;961,589,589,361;1369,1221,1221,1089;2401,2107,2107,1849;2704,3068,3068,3481;4624,4556,4556,4489;4489,3819,3819,3249;2025,1845,1845,1681;1521,1287,1287,1089;961,1116,1116,1296;1225,1365,1365,1521;3136,2072,2072,1369;2704,1976,1976,1444;2401,1960,19
60,1600;2704,1820,1820,1225;2601,1530,1530,900;1225,1015,1015,841;1764,1218,1218,841;2209,1645,1645,1225;1444,1406,1406,1369;1444,1406,1406,1369;1681,1722,1722,1764;2025,1980,1980,1936;2809,1802,1802,1156;1849,1849,1849,1849;2601,3672,3672,5184;2916,6912,6912,16384;16,660,660,27225;7921,5785,5785,4225;1,6,6,36;400,120,120,36;144,36,36,9;225,0,0,0;64,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;0,0,0,0;64,0,0,0;0,0,0,0;0,0,0,0;361,0,0,0;1,0,0,0;256,16,16,1;576,24,24,1;961,217,217,49;2304,1248,1248,676;3136,2856,2856,2601;2916,3402,3402,3969;4225,3835,3835,3481;3025,2970,2970,2916;1681,1804,1804,1936;1156,1088,1088,1024;1600,1320,1320,1089;625,825,825,1089;1024,768,768,576;2401,686,686,196;3025,935,935,289;2209,1081,1081,529;3136,1344,1344,576;2401,1519,1519,961;729,999,999,1369;1600,1600,1600,1600;2025,1845,1845,1681;1225,1400,1400,1600;1849,946,946,484;2209,658,658,196;2025,1170,1170,676;2601,765,765,225;2601,1224,1224,576;3136,3360,3360,3600;3025,5500,5500,10000;225,1905,1905,16129;7569,6438,6438,5476;676,910,910,1225;4225,2210,2210,1156;1444,684,684,324;1521,39,39,1;1444,38,38,1;1444,152,152,16;1296,468,468,169;1296,576,576,256;1024,480,480,225;961,496,496,256;1156,544,544,256;1369,666,666,324;1600,680,680,289;625,275,275,121;841,348,348,144;1600,920,920,529;2209,2021,2021,1849;3481,3304,3304,3136;5041,3550,3550,2500;3844,3162,3162,2601;4225,3120,3120,2304;1369,1295,1295,1225;625,800,800,1024;841,1218,1218,1764;1521,1092,1092,784;225,255,255,289;784,588,588,441;961,558,558,324;1849,817,817,361;1521,741,741,361;1764,756,756,324;2116,828,828,324;900,660,660,484;1444,950,950,625;1444,874,874,529;1296,792,792,484;1369,703,703,361;1936,968,968,484;1089,660,660,400;1156,714,714,441;2209,1504,1504,1024;1444,1064,1064,784;1444,1558,1558,1681;529,1426,1426,3844;144,756,756,3969;1521,3315,3315,7225;3600,4320,4320,5184;2401,1666,1666,1156;2116,460,460,100;2025,405,405,81;2401,490,490,100;2025,540,540,144;1296,612,612,289;1936,1364,1364,961;1936,1672,1672,1444;1444,1064,1064,784;1089,528,52
8,256;1681,1066,1066,676;1936,1496,1496,1156;1849,1419,1419,1089;2116,2070,2070,2025;4096,4096,4096,4096;2809,3445,3445,4225;5776,5624,5624,5476;2916,3348,3348,3844;1521,1053,1053,729;1156,1020,1020,900;289,578,578,1156;676,702,702,729;1089,1089,1089,1089;400,500,500,625;400,220,220,121;169,156,156,144;400,240,240,144;361,133,133,49;289,187,187,121;529,322,322,196;324,180,180,100;576,216,216,81;441,252,252,144;400,240,240,144;256,144,144,81;529,276,276,144;400,320,320,256;484,594,594,729;1156,1088,1088,1024;625,525,525,441;729,945,945,1225;225,750,750,2500;144,732,732,3721;1225,2450,2450,4900;576,1296,1296,2916;2209,1175,1175,625;676,364,364,196;1296,576,576,256;1296,756,756,441;900,630,630,441;1600,840,840,441;5041,1562,1562,484;5041,1917,1917,729;5041,2059,2059,841;3721,1952,1952,1024;676,962,962,1369;1444,1292,1292,1156;784,840,840,900;1024,1440,1440,2025;4624,3876,3876,3249;3136,2912,2912,2704;5476,2738,2738,1369;1849,731,731,289;961,620,620,400;676,572,572,484;441,399,399,361;1369,999,999,729;1024,960,960,900;729,810,810,900;324,324,324,324;144,72,72,36;36,30,30,25;25,15,15,9;36,24,24,16;9,12,12,16;36,24,24,16;81,54,54,36;49,42,42,36;25,20,20,16;25,15,15,9;49,42,42,36;81,81,81,81;3969,1512,1512,576;2025,1530,1530,1156;2209,1269,1269,729;2401,2107,2107,1849;400,1220,1220,3721;144,720,720,3600;2025,3330,3330,5476;3249,4788,4788,7056;3844,4402,4402,5041;2500,2350,2350,2209;3721,3477,3477,3249;2601,2703,2703,2809;1369,1554,1554,1764;3136,3472,3472,3844;8100,5490,5490,3721;9604,4410,4410,2025;14884,5002,5002,1681;10609,5768,5768,3136;2401,2450,2450,2500;2401,1176,1176,576;2025,1125,1125,625;2601,1683,1683,1089;5776,3192,3192,1764;3249,2964,2964,2704;3481,3186,3186,2916;1849,1935,1935,2025;1296,1224,1224,1156;676,702,702,729;729,837,837,961;1296,1260,1260,1225;900,900,900,900;729,702,702,676;441,252,252,144;289,170,170,100;64,128,128,256;81,72,72,64;25,20,20,16;16,12,12,9;9,9,9,9;49,49,49,49;81,72,72,64;25,20,20,16;9,3,3,1;9,3,3,1;25,295,295,3481;8281,9737,9737,11449
;5329,6935,6935,9025;4225,4355,4355,4489;3969,4032,4032,4096;729,2430,2430,8100;144,744,744,3844;2916,3618,3618,4489;5476,4366,4366,3481;2809,2385,2385,2025;4624,3400,3400,2500;4489,4489,4489,4489;2401,2352,2352,2304;2209,1410,1410,900;2500,2100,2100,1764;6889,4399,4399,2809;8464,5060,5060,3025;11881,5886,5886,2916;7569,5220,5220,3600;5184,5040,5040,4900;5184,5112,5112,5041;5625,5625,5625,5625;6400,6800,6800,7225;8281,8190,8190,8100;2916,3564,3564,4356;2209,1974,1974,1764;1936,1584,1584,1296;625,675,675,729;841,870,870,900;484,660,660,900;625,550,550,484;729,702,702,676;676,728,728,784;169,169,169,169;100,0,0,0;25,20,20,16;144,96,96,64;9,9,9,9;4,0,0,0;1,1,1,1;9,15,15,25;81,81,81,81;81,63,63,49;4,4,4,4;1,0,0,0;16,104,104,676;4356,4356,4356,4356;5476,5328,5328,5184;2401,3332,3332,4624;3844,5332,5332,7396;900,3060,3060,10404;121,638,638,3364;3721,5063,5063,6889;4225,3575,3575,3025;4096,2176,2176,1156;5041,2982,2982,1764;4096,2688,2688,1764;4096,2816,2816,1936;3249,3477,3477,3721;3136,4032,4032,5184;5041,4899,4899,4761;5184,4680,4680,4225;5184,4464,4464,3844;3969,4032,4032,4096;4900,4410,4410,3969;5625,4500,4500,3600;6400,5120,5120,4096;5776,4560,4560,3600;5476,3774,3774,2601;1681,1722,1722,1764;1156,1088,1088,1024;1156,986,986,841;289,374,374,484;576,480,480,400;400,420,420,441;289,289,289,289;289,204,204,144;225,150,150,100;49,42,42,36;1,1,1,1;1,1,1,1;25,20,20,16;81,45,45,25;1,3,3,9;1,1,1,1;0,0,0,1;9,21,21,49;225,195,195,169;64,64,64,64;1,3,3,9;784,784,784,784;4624,3468,3468,2601;4761,2691,2691,1521;3969,2205,2205,1225;6084,4056,4056,2704;1296,2412,2412,4489;529,1426,1426,3844;4225,5980,5980,8464;4225,4875,4875,5625;5041,4544,4544,4096;4356,4422,4422,4489;3364,4292,4292,5476;5776,5700,5700,5625;4761,4278,4278,3844;3844,3534,3534,3249;4225,3770,3770,3364;4225,3315,3315,2601;4356,2838,2838,1849;3136,1904,1904,1156;3136,1792,1792,1024;4489,2345,2345,1225;4900,2170,2170,961;3364,1682,1682,841;2209,1739,1739,1369;784,1064,1064,1444;529,667,667,841;441,462,462,484;361,323,3
23,289;256,256,256,256;196,182,182,169;49,28,28,16;9,3,3,1;9,15,15,25;100,90,90,81;49,56,56,64;9,6,6,4;4,6,6,9;144,84,84,49;1,4,4,16;1,2,2,4;0,0,0,1;4,6,6,9;121,99,99,81;121,132,132,144;361,646,646,1156;2304,3504,3504,5329;5184,5112,5112,5041;4489,2613,2613,1521;5776,1672,1672,484;5041,4402,4402,3844;576,2496,2496,10816;529,1403,1403,3721;2916,4320,4320,6400;3136,4480,4480,6400;5184,5544,5544,5929;5929,5467,5467,5041;3481,3717,3717,3969;4489,4355,4355,4225;5929,5313,5313,4761;3969,4725,4725,5625;3136,3584,3584,4096;2401,2156,2156,1936;2601,1836,1836,1296;1936,1452,1452,1089;1681,861,861,441;1936,308,308,49;1936,176,176,16;1600,560,560,196;1849,1290,1290,900;1156,918,918,729;1369,777,777,441;324,306,306,289;225,150,150,100;64,40,40,25;9,3,3,1;1,0,0,0;1,0,0,0;1,0,0,0;16,16,16,16;81,99,99,121;100,110,110,121;49,42,42,36;196,84,84,36;169,52,52,16;36,12,12,4;1,3,3,9;49,35,35,25;121,44,44,16;361,361,361,361;3364,1914,1914,1089;3481,2301,2301,1521;4356,4686,4686,5041;2704,4524,4524,7569;3364,4698,4698,6561;2304,4992,4992,10816;324,2376,2376,17424;529,1449,1449,3969;2401,2450,2450,2500;2209,2115,2115,2025;3249,1881,1881,1089;3136,1848,1848,1089;2304,2016,2016,1764;1936,2332,2332,2809;3969,3843,3843,3721;2916,2700,2700,2500;1156,816,816,576;676,234,234,81;841,174,174,36;961,248,248,64;784,168,168,36;484,132,132,36;529,184,184,64;841,464,464,256;961,1054,1054,1156;1764,2772,2772,4356;3025,3795,3795,4761;1024,896,896,784;1024,64,64,4;625,0,0,0;400,20,20,1;121,11,11,1;1,0,0,0;0,0,0,0;1,0,0,0;16,8,8,4;64,80,80,100;225,330,330,484;225,405,405,729;529,598,598,676;324,324,324,324;169,156,156,144;121,143,143,169;289,391,391,529;2500,2750,2750,3025;2304,2448,2448,2601;1444,608,608,256;1681,615,615,225;1296,612,612,289;961,589,589,361;1444,2166,2166,3249;324,1674,1674,8649;484,1386,1386,3969;1521,78,78,4;441,21,21,1;324,18,18,1;121,55,55,25;400,300,300,225;841,464,464,256;1369,481,481,169;1369,777,777,441;361,285,285,225;225,75,75,25;225,45,45,9;256,48,48,9;256,48,48,9;225,30,30,4;196
,42,42,9;324,162,162,81;225,210,210,196;841,435,435,225;1156,1020,1020,900;2704,3172,3172,3721;3136,4256,4256,5776;2704,3692,3692,5041;2209,2538,2538,2916;1156,1020,1020,900;484,242,242,121;361,19,19,1;256,0,0,0;196,-14,-14,1;196,0,0,0;484,22,22,1;900,90,90,9;1600,280,280,49;841,725,725,625;576,744,744,961;64,184,184,529;441,1029,1029,2401;2500,2500,2500,2500;81,99,99,121;25,20,20,16;121,132,132,144;196,168,168,144;81,54,54,36;121,440,440,1600;64,608,608,5776;49,784,784,12544;324,306,306,289;1,8,8,64;0,0,0,0;0,0,0,4;9,12,12,16;169,52,52,16;196,84,84,36;400,480,480,576;196,364,364,676;81,72,72,64;49,28,28,16;100,60,60,36;81,45,45,25;64,40,40,25;81,45,45,25;81,54,54,36;100,90,90,81;169,182,182,196;196,168,168,144;841,145,145,25;784,84,84,9;1089,297,297,81;1156,544,544,256;1225,910,910,676;1369,1295,1295,1225;1156,1122,1122,1089;676,572,572,484;529,368,368,256;625,375,375,225;841,638,638,484;2401,1617,1617,1089;5929,3619,3619,2209;4489,2881,2881,1849;1600,680,680,289;16,56,56,196;441,966,966,2116;1600,1560,1560,1521;36,48,48,64;1,2,2,4;16,12,12,9;64,64,64,64;81,99,99,121;64,328,328,1681;1,71,71,5041;49,777,777,12321;289,17,17,1;0,0,0,0;0,0,0,1;0,0,0,1;4,4,4,4;9,3,3,1;25,45,45,81;121,297,297,729;144,288,288,576;36,36,36,36;25,10,10,4;64,16,16,4;49,21,21,9;64,16,16,4;49,14,14,4;81,36,36,16;64,64,64,64;121,132,132,144;196,210,210,225;64,80,80,100;9,12,12,16;64,16,16,4;144,12,12,1;441,21,21,1;961,0,0,0;729,0,0,0;441,0,0,0;361,0,0,0;400,0,0,0;900,150,150,25;3249,969,969,289;8281,3458,3458,1444;10000,6700,6700,4489;3025,3795,3795,4761;16,120,120,900;400,340,340,289;2500,2100,2100,1764;64,248,248,961;4,12,12,36;4,4,4,4;36,12,12,4;121,99,99,81;144,480,480,1600;4,138,138,4761;64,592,592,5476;576,0,0,0;0,0,0,0;0,0,0,1;49,0,0,0;1,2,2,4;9,6,6,4;169,182,182,196;64,120,120,225;81,108,108,144;16,4,4,1;25,10,10,4;49,14,14,4;49,14,14,4;64,8,8,1;25,10,10,4;16,8,8,4;16,12,12,9;36,18,18,9;25,30,30,36;64,40,40,25;9,9,9,9;4,4,4,4;729,0,0,0;9,0,0,0;961,0,0,0;961,0,0,0;676,0,0,0;289,0,0,0;529
,0,0,0;289,0,0,0;784,56,56,4;1936,88,88,4;784,140,140,25;3721,244,244,16;16,12,12,9;100,100,100,100;1849,1247,1247,841;1024,992,992,961;1,20,20,400;4,6,6,9;16,4,4,1;25,15,15,9;81,36,36,16;1,15,15,225;9,213,213,5041;
\ No newline at end of file
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/raw_1.png b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/raw_1.png
new file mode 100644
index 0000000..ebf23e3
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/raw_1.png
Binary files differ
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/raw_1_12_12.png b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/raw_1_12_12.png
new file mode 100644
index 0000000..9294121
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/raw_1_12_12.png
Binary files differ
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/ref_frame_16x16.txt b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/ref_frame_16x16.txt
new file mode 100644
index 0000000..b1a877a
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/build_debug/non_greedy_mv_test_files/ref_frame_16x16.txt
@@ -0,0 +1,2 @@
+486,720
+214,214,214,213,214,212,213,215,214,214,215,214,214,214,215,214,214,214,214,216,214,214,214,214,216,215,215,214,214,216,217,216,215,214,217,217,218,217,217,215,216,217,215,218,216,216,218,217,218,219,218,216,219,218,218,220,219,219,219,220,222,219,222,222,222,223,222,223,224,223,222,225,223,224,224,223,225,224,224,224,224,224,225,224,226,226,223,224,226,226,226,226,224,224,227,225,226,227,227,226,225,225,224,224,227,227,227,228,227,229,230,228,230,230,228,229,230,230,230,230,229,229,230,230,230,232,232,233,230,230,231,232,233,232,233,233,232,233,233,231,233,233,233,235,234,235,237,238,236,235,236,236,237,237,236,237,236,236,238,239,239,241,241,240,241,240,240,239,240,240,239,241,240,241,242,241,242,242,242,242,242,243,244,244,245,244,244,244,243,243,244,244,244,243,243,244,245,245,243,244,245,246,246,244,244,245,247,247,245,245,244,247,245,245,246,244,245,245,245,245,245,245,245,245,244,245,246,246,247,247,247,246,247,247,247,247,247,247,247,247,245,245,247,247,247,247,246,247,247,247,247,246,246,247,247,247,246,246,247,247,247,247,247,246,246,246,246,246,247,247,247,247,248,247,247,247,247,247,247,247,247,248,248,247,248,246,247,246,247,247,247,247,247,247,249,241,234,247,247,247,250,249,249,252,252,252,251,251,252,252,252,252,252,252,253,253,252,252,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,252,252,253,253,252,252,253,253,250,250,244,251,252,252,253,253,251,251,230,211,171,147,80,131,251,251,252,252,249,242,206,236,252,252,251,251,252,252,252,252,249,243,215,212,212,184,169,186,193,186,179,152,136,129,129,132,141,118,71,35,27,44,76,82,53,44,33,36,68,94,117,144,123,107,77,36,45,21,18,34,48,55,47,57,66,66,62,61,68,68,74,73,71,65,120,214,249,252,249,248,223,184,205,244,251,251,250,250,251,249,249,250,251,252,250,250,238,240,252,251,248,230,195,218,217,183,229,249,245,229,136,177,243,234,143,105,108,67,101,177,193,223,171,92,55,21,48,55,45,91,71,36,36,57,76,63,17,159,250,250,250,250,253,253,251,251,252,252,252,252,252,252
,252,252,194,110,29,128,248,226,206,165,176,235,252,252,250,242,247,247,249,249,247,248,247,247,246,247,247,246,247,246,247,247,246,247,246,250,243,226,233,252,240,127,96,63,7,118,238,246,250,250,250,248,219,210,225,212,198,213,226,237,251,246,242,236,182,170,229,246,250,250,248,247,246,244,244,242,243,243,242,244,244,244,244,243,242,241,242,241,241,241,241,241,241,241,239,240,239,239,238,238,239,238,238,237,236,237,238,235,234,235,233,234,233,231,231,230,227,228,229,229,227,228,227,227,227,229,229,227,225,226,223,223,224,222,224,222,223,222,222,230,233,239,237,217,212,225,230,224,223,222,218,217,218,216,214,219,217,216,217,217,215,214,215,217,217,213,214,214,213,214,214,212,210,211,210,208,210,208,208,210,206,209,207,202,205,204,201,201,200,199,200,197,199,195,199,114,5,1,4,7,9,8,10,10,11,11,11,11,214,214,214,213,214,212,213,215,214,214,215,214,214,214,215,214,214,214,214,216,214,214,214,214,216,215,215,214,214,216,217,216,215,214,217,217,218,217,217,215,216,217,215,218,216,216,218,217,218,219,218,216,219,218,218,220,219,219,219,220,222,219,222,222,222,223,222,223,224,223,222,225,223,224,224,223,225,224,224,224,224,224,225,224,226,226,223,224,226,226,226,226,224,224,227,225,226,227,227,226,225,225,224,224,227,227,227,228,227,229,230,228,230,230,228,229,230,230,230,230,229,229,230,230,230,232,232,233,230,230,231,232,233,232,233,233,232,233,233,231,233,233,233,235,234,235,237,238,236,235,236,236,237,237,236,237,236,236,238,239,239,241,241,240,241,240,240,239,240,240,239,241,240,241,242,241,242,242,242,242,242,243,244,244,245,244,244,244,243,243,244,244,244,243,243,244,245,245,243,244,245,246,246,244,244,245,247,247,245,245,244,247,245,245,246,244,245,245,245,245,245,245,245,245,244,245,246,246,247,247,247,246,247,247,247,247,247,247,247,247,245,245,247,247,247,247,246,247,247,247,247,246,246,247,247,247,246,246,247,247,247,247,247,246,246,246,246,246,247,247,247,247,248,247,247,247,247,247,247,247,247,248,248,247,248,246,247,246,247,247,247,247,247,247,249,241,234,24
7,247,247,250,249,249,252,252,252,251,251,252,252,252,252,252,252,253,253,252,252,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,252,252,253,253,252,252,253,253,250,250,244,251,252,252,253,253,251,251,230,211,171,147,80,131,251,251,252,252,249,242,206,236,252,252,251,251,252,252,252,252,249,243,215,212,212,184,169,186,193,186,179,152,136,129,129,132,141,118,71,35,27,44,76,82,53,44,33,36,68,94,117,144,123,107,77,36,45,21,18,34,48,55,47,57,66,66,62,61,68,68,74,73,71,65,120,214,249,252,249,248,223,184,205,244,251,251,250,250,251,249,249,250,251,252,250,250,238,240,252,251,248,230,195,218,217,183,229,249,245,229,136,177,243,234,143,105,108,67,101,177,193,223,171,92,55,21,48,55,45,91,71,36,36,57,76,63,17,159,250,250,250,250,253,253,251,251,252,252,252,252,252,252,252,252,194,110,29,128,248,226,206,165,176,235,252,252,250,242,247,247,249,249,247,248,247,247,246,247,247,246,247,246,247,247,246,247,246,250,243,226,233,252,240,127,96,63,7,118,238,246,250,250,250,248,219,210,225,212,198,213,226,237,251,246,242,236,182,170,229,246,250,250,248,247,246,244,244,242,243,243,242,244,244,244,244,243,242,241,242,241,241,241,241,241,241,241,239,240,239,239,238,238,239,238,238,237,236,237,238,235,234,235,233,234,233,231,231,230,227,228,229,229,227,228,227,227,227,229,229,227,225,226,223,223,224,222,224,222,223,222,222,230,233,239,237,217,212,225,230,224,223,222,218,217,218,216,214,219,217,216,217,217,215,214,215,217,217,213,214,214,213,214,214,212,210,211,210,208,210,208,208,210,206,209,207,202,205,204,201,201,200,199,200,197,199,195,199,114,5,1,4,7,9,8,10,10,11,11,11,11,214,218,218,215,218,217,215,215,216,218,217,217,219,218,216,215,219,217,216,217,218,221,221,221,220,218,219,220,220,218,217,219,218,220,220,217,218,217,220,223,220,221,219,220,219,217,219,217,218,219,221,222,220,220,220,221,221,221,221,221,223,224,223,225,224,223,224,223,224,226,226,228,227,227,227,225,228,229,229,227,226,228,229,229,229,229,229,230,229,226,226,228,227,227,230,2
29,229,227,228,229,227,230,230,230,230,231,230,230,232,228,230,230,229,231,230,232,231,230,231,232,233,230,233,233,231,234,235,236,235,236,234,232,234,233,236,237,236,237,237,236,239,237,236,238,238,237,237,238,240,238,239,241,240,240,240,241,241,240,240,240,241,240,241,242,243,240,242,243,244,244,242,244,244,244,245,244,244,244,244,246,245,246,246,245,245,245,247,247,247,247,246,245,246,245,246,246,246,247,248,248,249,248,248,247,248,249,248,249,248,249,248,249,247,249,250,248,249,248,248,248,248,249,249,248,248,249,249,249,249,249,250,251,250,249,250,249,249,249,249,249,248,249,249,248,249,250,248,249,250,250,251,251,250,250,250,249,249,250,250,250,250,250,249,250,249,249,250,250,249,250,250,251,251,251,251,249,251,251,251,252,251,250,250,249,251,250,250,251,247,249,250,250,250,252,248,233,242,250,249,250,249,249,250,251,251,251,251,251,250,252,250,245,248,251,252,247,249,252,252,251,250,249,250,250,250,248,250,251,249,248,242,243,249,252,251,251,251,251,251,250,251,251,251,252,250,251,245,237,252,240,207,180,212,229,246,251,247,228,171,179,151,123,49,149,250,250,252,252,250,238,178,234,251,251,250,250,253,253,252,252,252,252,252,252,252,252,252,252,252,249,253,253,252,252,253,253,249,249,209,216,252,252,252,252,252,252,253,253,245,190,135,134,118,107,49,100,248,248,250,250,253,253,252,252,253,253,252,252,252,252,252,252,252,252,252,252,252,252,249,242,205,167,207,246,251,251,251,251,252,252,251,250,252,252,250,250,235,252,252,252,252,245,252,252,218,167,225,251,172,125,116,104,100,85,87,96,91,76,69,62,55,77,95,93,65,34,41,49,68,90,52,32,41,65,69,59,17,150,247,247,249,249,252,252,250,248,245,247,249,249,237,245,250,252,223,173,118,103,162,210,240,188,165,207,234,248,251,247,245,249,249,249,249,249,249,250,249,250,248,248,250,248,248,249,249,247,249,250,251,239,225,252,222,119,104,47,8,136,242,246,250,250,252,251,223,209,223,229,214,202,218,235,252,242,192,113,42,15,89,181,225,246,248,248,248,247,247,245,244,244,244,244,244,244,245,242,242,242,242,244,241,240,241,2
41,240,240,239,239,239,239,238,237,238,238,238,237,236,234,234,236,236,232,232,233,232,234,231,230,230,228,229,229,227,226,226,228,227,226,225,226,224,225,227,222,223,224,222,222,220,224,231,229,203,153,120,83,73,129,183,212,225,227,226,222,222,221,216,216,214,213,215,218,217,217,215,214,214,212,213,214,214,211,212,212,212,212,211,213,213,210,210,209,207,208,208,207,206,205,202,203,205,201,200,198,201,196,200,115,3,1,6,8,9,7,10,10,10,11,10,10,214,218,218,215,218,217,215,215,216,218,217,217,219,218,216,215,219,217,216,217,218,221,221,221,220,218,219,220,220,218,217,219,218,220,220,217,218,217,220,223,220,221,219,220,219,217,219,217,218,219,221,222,220,220,220,221,221,221,221,221,223,224,223,225,224,223,224,223,224,226,226,228,227,227,227,225,228,229,229,227,226,228,229,229,229,229,229,230,229,226,226,228,227,227,230,229,229,227,228,229,227,230,230,230,230,231,230,230,232,228,230,230,229,231,230,232,231,230,231,232,233,230,233,233,231,234,235,236,235,236,234,232,234,233,236,237,236,237,237,236,239,237,236,238,238,237,237,238,240,238,239,241,240,240,240,241,241,240,240,240,241,240,241,242,243,240,242,243,244,244,242,244,244,244,245,244,244,244,244,246,245,246,246,245,245,245,247,247,247,247,246,245,246,245,246,246,246,247,248,248,249,248,248,247,248,249,248,249,248,249,248,249,247,249,250,248,249,248,248,248,248,249,249,248,248,249,249,249,249,249,250,251,250,249,250,249,249,249,249,249,248,249,249,248,249,250,248,249,250,250,251,251,250,250,250,249,249,250,250,250,250,250,249,250,249,249,250,250,249,250,250,251,251,251,251,249,251,251,251,252,251,250,250,249,251,250,250,251,247,249,250,250,250,252,248,233,242,250,249,250,249,249,250,251,251,251,251,251,250,252,250,245,248,251,252,247,249,252,252,251,250,249,250,250,250,248,250,251,249,248,242,243,249,252,251,251,251,251,251,250,251,251,251,252,250,251,245,237,252,240,207,180,212,229,246,251,247,228,171,179,151,123,49,149,250,250,252,252,250,238,178,234,251,251,250,250,253,253,252,252,252,252,252,252,252,252,252,252,25
2,249,253,253,252,252,253,253,249,249,209,216,252,252,252,252,252,252,253,253,245,190,135,134,118,107,49,100,248,248,250,250,253,253,252,252,253,253,252,252,252,252,252,252,252,252,252,252,252,252,249,242,205,167,207,246,251,251,251,251,252,252,251,250,252,252,250,250,235,252,252,252,252,245,252,252,218,167,225,251,172,125,116,104,100,85,87,96,91,76,69,62,55,77,95,93,65,34,41,49,68,90,52,32,41,65,69,59,17,150,247,247,249,249,252,252,250,248,245,247,249,249,237,245,250,252,223,173,118,103,162,210,240,188,165,207,234,248,251,247,245,249,249,249,249,249,249,250,249,250,248,248,250,248,248,249,249,247,249,250,251,239,225,252,222,119,104,47,8,136,242,246,250,250,252,251,223,209,223,229,214,202,218,235,252,242,192,113,42,15,89,181,225,246,248,248,248,247,247,245,244,244,244,244,244,244,245,242,242,242,242,244,241,240,241,241,240,240,239,239,239,239,238,237,238,238,238,237,236,234,234,236,236,232,232,233,232,234,231,230,230,228,229,229,227,226,226,228,227,226,225,226,224,225,227,222,223,224,222,222,220,224,231,229,203,153,120,83,73,129,183,212,225,227,226,222,222,221,216,216,214,213,215,218,217,217,215,214,214,212,213,214,214,211,212,212,212,212,211,213,213,210,210,209,207,208,208,207,206,205,202,203,205,201,200,198,201,196,200,115,3,1,6,8,9,7,10,10,10,11,10,10,215,216,216,214,217,215,216,216,218,216,216,219,217,218,215,216,217,216,218,217,218,215,215,215,217,218,217,219,217,219,217,218,218,218,219,217,217,215,215,217,217,218,218,217,219,217,220,219,220,221,218,220,220,218,220,218,220,222,218,222,221,222,222,222,224,224,224,227,224,223,223,225,226,226,226,224,226,229,228,226,226,224,227,225,226,226,226,227,227,229,227,227,227,227,226,229,227,225,226,225,227,229,228,227,229,229,230,229,230,232,230,229,229,231,229,231,231,230,231,232,230,230,232,230,232,233,233,233,234,233,234,232,234,235,233,234,233,234,234,233,233,235,237,236,235,236,236,239,238,238,238,239,237,237,239,239,237,239,240,239,239,239,240,241,241,240,241,243,240,242,242,241,242,243,242,244,244,244,244,244,245,2
45,245,245,246,246,245,244,246,246,245,246,245,245,245,246,247,245,246,248,248,248,247,247,246,247,248,249,249,248,247,248,249,249,248,247,247,247,249,248,249,249,249,249,248,248,247,248,248,247,249,248,249,250,249,248,248,248,248,249,248,249,249,249,249,249,249,249,248,248,248,249,249,249,250,249,248,250,249,248,249,250,249,247,249,249,249,249,249,250,249,249,250,250,248,249,249,249,249,248,248,248,249,250,249,249,248,248,247,248,248,248,248,252,244,232,244,249,248,249,247,247,248,249,250,249,249,249,248,250,248,243,249,251,252,250,252,252,252,251,247,248,249,247,247,247,248,248,248,249,246,242,242,247,250,249,250,249,248,247,248,248,247,250,247,250,237,230,245,188,181,208,243,244,252,252,248,216,170,179,157,134,49,149,250,250,252,244,250,229,141,191,247,247,248,243,248,252,250,251,251,252,241,252,252,208,240,251,241,249,252,252,252,252,252,252,252,252,252,252,252,252,252,252,253,253,252,252,239,165,127,125,110,104,36,85,248,248,249,249,252,252,251,251,252,252,252,252,252,252,251,248,252,252,251,251,251,252,250,240,230,202,204,249,251,251,251,251,252,250,250,249,251,250,250,248,235,252,251,251,242,242,252,241,149,112,205,242,193,130,101,104,102,115,109,96,97,81,81,64,66,68,72,117,110,65,50,47,82,90,52,29,37,70,65,57,12,141,246,246,249,249,252,252,250,250,244,244,247,252,240,244,250,252,245,219,178,95,113,186,242,204,186,220,225,248,251,250,246,247,248,248,248,246,249,249,248,249,247,248,247,247,248,247,247,247,247,248,248,242,222,252,210,116,104,35,11,142,243,247,251,248,252,243,224,207,219,229,222,203,209,230,234,234,196,146,60,66,145,208,231,243,245,245,244,244,244,243,242,244,244,241,243,241,242,243,240,239,240,239,239,239,239,239,238,239,238,236,237,237,234,237,235,236,237,235,233,234,232,231,230,229,230,228,229,229,229,228,226,227,224,225,226,226,225,224,224,224,222,224,224,222,222,222,221,222,222,224,230,231,241,225,173,131,103,77,48,94,171,217,233,230,227,222,219,219,217,215,215,214,215,217,215,214,212,212,213,214,213,214,215,214,212,212,208,212,210,208,212,
210,212,210,207,208,206,204,204,205,205,202,200,199,200,196,200,197,200,114,4,0,4,7,9,8,10,10,11,11,12,10,217,217,217,215,215,213,214,216,215,214,215,215,213,215,215,214,215,214,216,217,214,217,217,216,217,214,215,214,214,217,215,214,214,215,217,217,216,214,214,216,213,215,216,214,215,218,217,217,217,217,217,217,217,218,220,215,217,218,219,220,217,218,217,221,222,218,222,221,221,221,221,222,222,222,222,225,224,224,223,223,224,224,225,227,225,225,225,224,223,225,226,226,226,225,227,227,228,230,226,226,226,226,229,226,225,228,228,227,228,228,231,231,231,231,230,232,229,229,229,230,232,229,230,231,232,233,231,231,230,231,232,232,232,232,232,232,230,233,232,232,233,232,233,236,236,235,237,236,236,235,234,237,234,235,236,236,238,238,239,240,239,239,239,237,239,241,241,240,239,241,239,241,242,241,241,241,242,244,243,244,244,244,245,245,244,244,244,244,244,244,244,244,244,244,244,244,245,246,245,245,247,246,246,245,246,247,247,248,247,248,248,247,248,248,247,247,248,248,249,248,248,247,247,247,247,248,247,247,247,248,248,247,246,247,247,247,247,248,248,247,250,248,248,249,248,249,248,247,248,248,249,249,248,248,248,247,247,247,247,249,248,248,247,248,247,247,248,247,248,248,249,249,248,248,249,248,248,248,249,248,247,248,247,247,248,248,246,247,247,248,249,247,248,251,236,228,245,249,250,250,249,248,247,248,249,249,248,248,249,252,246,243,251,250,251,236,215,238,242,248,250,247,249,248,248,247,247,247,248,248,247,244,241,243,248,248,249,249,248,248,248,249,248,248,248,251,230,222,248,215,214,237,251,240,239,252,247,206,159,167,146,125,43,152,250,249,251,243,252,239,171,208,249,249,248,244,246,251,249,251,250,248,233,252,218,174,238,252,250,251,252,229,225,239,252,252,251,251,252,252,250,246,235,252,248,252,252,252,234,156,132,135,119,116,32,77,248,248,247,247,249,252,250,250,244,247,252,250,252,252,245,247,252,251,251,251,252,252,251,245,241,214,202,248,252,252,250,250,252,251,251,250,252,252,250,245,239,252,250,250,233,243,238,143,95,145,249,249,200,137,106,101,118,126,11
247,247,249,249,243,244,242,245,252,240,241,241,201,212,245,251,252,242,249,202,152,155,134,84,51,198,249,249,248,238,252,247,249,219,174,214,248,251,249,249,246,250,240,252,222,176,216,239,249,249,252,250,242,252,242,237,252,251,250,241,249,252,250,227,234,252,250,251,252,252,210,120,99,108,99,103,24,117,248,248,247,245,249,252,249,244,244,252,242,235,252,252,250,250,251,250,242,247,252,252,250,252,250,235,239,164,172,246,250,251,249,249,250,250,240,241,252,243,243,252,223,234,174,92,81,81,136,121,71,54,68,45,79,170,151,116,87,157,128,48,40,39,72,100,137,118,72,50,42,48,44,47,68,78,71,57,66,32,51,219,249,248,252,248,252,249,250,249,241,247,247,239,240,244,251,240,234,245,251,246,199,230,250,192,164,176,225,244,205,223,248,224,237,244,239,248,245,246,246,244,245,245,244,245,245,246,247,245,247,242,247,242,252,207,107,89,31,0,125,236,239,250,249,240,234,222,220,207,204,212,226,235,247,229,128,58,137,230,217,222,231,237,236,235,232,229,232,229,228,230,229,228,229,226,224,226,225,225,226,224,222,224,224,224,225,226,226,224,224,224,224,223,220,222,220,220,220,218,218,220,220,217,220,217,217,218,214,216,214,217,215,212,215,214,213,215,215,214,153,92,65,21,15,25,47,66,57,55,49,47,47,41,58,79,73,63,55,41,36,32,33,35,30,50,82,80,75,56,49,27,61,180,221,216,214,214,216,212,209,210,206,207,208,209,208,208,208,207,211,208,208,209,206,208,209,211,208,208,207,206,206,206,203,202,203,204,205,198,205,113,3,1,5,8,9,9,12,10,10,12,11,11,220,221,218,217,219,218,215,217,217,217,220,216,216,219,220,217,218,217,216,217,218,218,217,218,218,217,216,216,217,215,217,216,217,215,217,218,215,217,216,216,217,217,217,218,218,217,219,218,219,219,217,218,220,218,217,217,217,219,220,220,218,221,219,218,221,219,219,220,221,222,222,222,222,222,220,220,219,219,219,218,220,220,218,219,218,220,219,216,219,216,220,220,221,219,218,222,220,221,220,221,222,221,222,224,222,222,225,224,223,222,226,227,226,227,227,226,228,229,227,229,227,229,230,230,229,230,232,229,229,233,231,230,232,232,231,232,233,233,232,23
1,233,233,234,235,234,232,233,232,233,234,232,235,236,235,235,233,235,236,234,235,237,236,237,236,237,238,236,239,239,236,237,238,238,237,239,237,238,239,239,239,239,240,241,241,241,241,242,243,242,244,244,244,244,244,244,245,244,244,246,246,246,246,245,246,247,247,247,247,246,248,247,245,249,248,248,249,247,247,247,248,248,249,248,248,249,249,249,247,249,249,248,249,248,248,249,249,249,248,249,250,249,249,249,248,248,248,248,249,249,248,248,248,248,247,247,248,247,248,248,248,247,247,248,247,248,248,247,247,248,248,247,247,247,248,249,249,247,248,249,249,248,248,248,248,249,247,247,247,249,251,241,241,251,249,249,247,249,249,248,249,247,247,248,248,246,241,246,248,248,244,245,248,247,246,245,245,248,249,247,245,247,248,247,246,246,246,245,245,248,246,247,248,248,248,247,249,247,248,247,248,248,249,248,244,235,233,253,214,185,199,218,236,249,250,251,243,249,209,166,169,144,85,60,202,249,249,248,240,252,249,247,217,163,210,251,250,250,249,251,246,237,235,181,194,246,252,252,252,250,241,211,232,236,239,252,251,251,240,247,252,250,229,236,252,250,251,252,252,207,129,117,113,103,99,21,122,249,249,247,245,252,252,249,246,245,251,215,190,237,249,251,251,252,250,237,249,252,250,250,251,251,240,243,152,163,245,250,252,249,248,251,249,239,243,252,227,235,217,156,218,180,170,150,60,120,109,73,46,54,51,67,127,131,119,63,133,109,47,52,27,65,81,106,113,80,51,44,41,39,57,81,77,73,59,64,32,49,222,249,249,251,248,252,249,249,248,243,247,247,240,240,244,251,245,233,243,251,251,204,213,252,213,162,172,211,241,214,215,249,205,202,232,233,249,249,248,246,243,247,245,244,244,244,244,245,243,244,243,246,245,252,200,104,87,24,1,143,236,242,250,250,241,231,224,219,211,206,212,226,229,249,224,127,56,135,234,211,224,232,231,238,232,233,230,227,229,229,227,228,227,225,226,227,226,223,225,223,225,224,223,226,224,226,226,223,222,223,224,223,224,219,220,221,219,222,218,218,217,218,221,216,214,217,214,217,217,214,216,214,215,213,214,214,211,218,216,170,132,115,77,72,77,66,77,79,67,55,49,50,49,59,
72,67,62,58,44,39,34,31,34,41,73,86,86,70,60,44,40,148,217,226,217,214,217,212,212,206,207,207,206,210,209,211,210,208,209,208,210,208,208,208,208,210,210,208,211,206,203,206,206,202,202,203,200,204,201,208,113,2,1,5,8,9,8,12,10,10,11,12,12,219,222,218,217,217,218,217,217,217,219,218,217,220,216,217,217,215,217,218,215,216,218,217,218,217,217,217,217,217,214,216,216,216,218,216,217,218,217,217,217,218,216,220,219,217,218,218,219,221,220,218,220,220,220,219,220,221,220,220,218,219,218,219,218,218,220,221,222,221,221,221,220,219,221,220,221,219,217,221,219,219,220,218,217,218,221,220,218,219,217,217,218,218,219,220,218,218,220,220,221,220,221,222,220,222,223,222,222,224,224,223,222,223,224,223,226,225,225,227,224,226,225,226,226,230,230,229,231,230,231,231,232,232,229,232,232,232,232,232,232,233,232,235,234,232,233,232,232,232,230,231,232,230,232,234,232,235,237,237,237,236,236,237,236,236,237,236,237,239,239,237,239,240,240,238,239,239,238,239,239,240,241,241,241,242,242,243,243,242,243,245,243,244,245,244,245,245,243,244,245,245,247,245,246,245,245,246,247,247,245,247,247,247,248,247,246,249,249,248,248,247,247,248,248,248,247,248,248,249,248,249,250,250,249,248,248,248,249,249,249,249,247,248,250,248,248,248,248,248,248,247,248,248,247,249,249,248,249,249,249,249,248,248,248,248,249,248,248,247,248,248,248,248,247,248,247,248,247,247,248,248,248,249,249,250,250,249,248,249,249,239,244,252,247,248,249,249,249,250,250,248,249,249,250,245,242,249,249,247,244,245,249,246,248,245,245,252,252,252,252,251,248,245,246,246,245,247,246,247,248,247,247,245,247,247,247,247,247,247,247,249,249,248,248,226,229,239,190,209,240,249,220,218,241,249,244,249,211,173,177,150,101,57,195,249,249,248,242,252,249,249,235,181,208,252,252,249,249,250,250,218,209,214,243,251,251,246,244,249,208,199,241,237,243,252,252,252,240,248,252,250,230,235,252,253,253,252,252,231,160,137,134,105,95,15,113,248,248,249,249,252,252,251,250,251,251,206,203,243,248,251,251,251,246,240,252,250,251,250,251,25
2,239,250,191,176,248,252,252,250,248,252,246,235,248,248,207,218,207,207,252,193,186,167,86,134,117,85,53,51,47,77,163,134,98,62,147,114,51,64,56,106,96,94,101,71,51,39,42,36,56,91,73,73,59,59,32,45,214,249,249,251,246,252,249,250,248,242,247,249,244,239,241,249,246,233,240,250,252,222,205,252,241,169,165,195,225,215,199,238,199,179,215,224,245,249,249,247,244,245,245,244,244,243,244,245,245,249,250,252,252,251,208,125,101,33,7,151,238,248,251,252,234,234,226,218,211,206,214,220,232,248,223,126,53,132,231,214,226,232,236,233,232,234,230,230,229,227,229,226,226,228,225,226,225,224,223,225,223,222,226,224,223,222,223,223,222,224,219,221,222,220,220,219,221,219,220,217,217,218,214,214,214,215,214,214,217,215,213,214,212,212,213,212,212,218,232,202,168,151,116,107,88,71,71,89,87,72,65,50,57,66,69,57,54,53,39,38,43,49,58,69,85,80,65,66,50,44,11,74,190,208,221,220,210,210,208,212,207,207,211,208,210,210,208,210,209,210,209,208,211,210,208,206,209,206,206,206,205,205,204,205,205,204,202,205,200,206,112,4,1,4,8,10,8,10,10,11,11,11,11,217,219,218,217,220,215,216,219,219,218,217,216,216,217,217,214,217,215,217,216,216,218,215,217,216,215,217,215,217,217,218,217,215,216,214,215,215,218,217,215,219,219,220,219,218,220,219,221,222,221,221,222,223,220,220,222,221,221,217,221,220,220,220,220,221,219,221,221,221,219,217,219,221,221,220,222,220,218,223,218,218,219,218,221,219,219,217,218,220,217,220,219,218,221,218,221,220,219,221,220,220,221,219,222,220,221,222,222,223,222,223,223,224,224,226,225,226,226,224,227,226,225,224,228,228,227,227,228,230,230,229,228,231,231,230,232,230,230,234,233,232,231,231,231,231,232,232,233,233,233,232,234,233,232,233,232,234,234,237,236,235,237,236,236,237,237,237,237,238,239,240,239,237,239,241,239,237,237,238,239,239,240,241,241,242,243,242,242,242,243,243,244,245,243,244,244,244,244,245,244,245,245,246,246,246,247,246,247,246,247,249,249,249,247,247,248,249,249,249,248,248,248,248,248,248,248,248,248,250,250,248,248,247,247,249,248,249,248,248,2
48,248,247,248,248,249,249,249,249,248,249,248,247,248,248,249,249,249,248,248,249,247,249,249,247,247,247,247,248,249,248,247,247,247,249,248,248,247,248,249,249,249,248,249,249,249,248,249,249,250,245,236,245,249,248,249,249,247,249,247,248,248,248,249,248,242,246,249,248,245,244,248,248,247,248,247,248,249,245,252,245,240,247,247,246,246,245,246,246,248,245,247,247,247,249,247,247,247,247,247,247,249,249,249,250,237,239,252,212,230,250,220,203,227,247,252,243,249,210,168,167,138,88,51,189,249,249,247,242,250,246,249,243,180,192,246,250,249,247,252,249,245,245,229,250,249,249,198,187,242,237,241,252,247,247,251,250,251,240,246,252,250,239,244,253,253,253,253,253,230,153,127,122,108,92,16,115,248,248,250,250,252,252,252,252,252,252,239,244,252,252,250,250,252,244,244,252,248,250,251,250,253,244,250,188,161,245,252,252,250,246,252,243,236,252,242,229,251,232,250,253,191,250,186,91,127,79,67,43,57,60,81,178,148,70,31,152,122,57,61,46,113,101,69,56,44,48,46,39,36,61,93,81,71,55,64,30,41,211,249,249,250,244,252,248,250,248,243,247,249,247,239,243,247,249,236,237,249,252,234,202,244,252,206,172,199,205,214,220,229,202,186,211,217,239,250,247,247,245,245,246,245,246,244,246,252,252,253,253,252,252,251,227,139,103,31,10,172,241,250,250,253,253,240,229,218,212,205,206,223,227,249,222,127,52,131,229,206,229,236,231,236,230,235,229,229,230,229,227,229,228,226,226,225,225,222,226,222,221,223,222,224,221,225,224,222,224,222,220,219,220,219,219,219,218,218,216,219,215,215,217,216,215,216,214,215,215,212,214,214,214,212,213,213,212,218,229,210,163,129,107,99,80,53,61,80,83,81,107,118,113,102,71,62,60,48,42,75,112,111,104,106,107,76,64,60,48,29,24,120,179,197,218,214,214,208,210,210,208,212,210,209,210,213,212,210,212,212,211,209,210,212,207,207,210,206,208,207,206,205,206,204,204,205,202,206,200,204,113,4,1,4,8,10,8,10,9,11,11,11,11,220,220,218,220,218,217,217,217,218,216,218,217,215,218,219,218,217,217,217,217,217,218,218,216,217,215,214,217,216,216,217,217,217,215,215,215,215,
215,215,214,218,217,218,220,218,219,219,219,220,220,219,219,220,220,219,219,218,216,219,218,217,218,219,217,219,218,220,219,220,219,219,222,221,221,219,223,220,220,221,217,220,220,218,219,217,218,217,217,220,219,220,220,218,220,220,220,221,221,217,220,220,220,222,219,222,221,220,221,221,222,222,221,223,224,223,226,223,226,229,225,228,228,228,227,228,229,226,230,230,227,230,230,230,231,232,230,232,232,230,232,230,229,231,228,231,235,232,231,233,232,232,234,233,234,236,231,235,235,233,236,234,235,237,235,234,237,236,237,238,238,237,239,240,239,238,238,237,236,236,238,239,239,241,241,242,242,243,243,244,243,244,244,244,245,244,244,242,243,245,245,244,245,245,245,245,245,247,247,247,245,248,249,248,249,248,247,248,247,247,248,247,248,247,247,247,248,249,249,248,248,249,248,247,249,249,249,248,248,249,247,248,249,248,249,248,249,247,248,249,248,248,248,249,248,248,247,248,248,248,249,248,248,247,248,248,248,248,249,247,247,247,247,247,248,249,248,248,247,248,249,248,247,248,249,250,248,249,249,249,243,239,249,249,248,248,248,249,247,248,247,248,249,247,244,242,247,249,247,244,246,248,248,247,248,248,246,239,186,172,205,234,249,247,248,245,245,249,248,247,247,246,247,249,249,248,247,247,247,247,247,247,248,250,251,238,252,245,195,207,204,226,231,249,252,249,242,249,203,153,152,124,83,42,186,249,249,247,240,247,246,249,242,190,184,239,250,250,250,252,252,241,230,208,217,239,239,224,239,252,242,252,252,245,248,248,249,251,240,249,252,250,246,237,251,245,243,251,232,151,103,109,122,126,123,57,144,247,247,250,250,253,253,252,252,252,252,244,253,252,252,250,250,250,240,247,252,248,249,249,248,252,237,248,194,144,238,252,252,250,246,252,237,241,252,236,243,252,249,252,247,211,253,202,103,78,32,32,27,60,48,69,194,154,68,39,148,105,47,64,38,109,101,69,54,39,48,45,39,32,44,75,72,73,57,64,36,42,211,249,249,249,243,252,248,250,246,244,249,248,250,241,242,245,249,241,234,245,252,248,200,221,250,235,184,166,158,201,230,222,211,211,213,214,236,248,248,248,247,244,243,244,244,247,252,25
2,252,253,253,252,252,248,181,99,69,13,34,186,235,212,214,234,234,232,226,215,210,208,208,219,229,247,212,126,51,134,227,208,230,232,234,234,233,231,227,227,225,227,227,224,225,226,224,223,222,224,222,222,224,224,220,220,222,223,225,222,222,222,218,221,220,218,217,217,217,216,216,214,215,214,213,214,214,215,212,212,213,209,211,211,212,214,212,210,210,213,212,166,122,105,92,105,97,88,70,78,71,96,148,141,145,123,109,95,71,49,54,125,150,139,124,123,122,108,83,60,39,31,135,220,221,210,210,213,211,210,208,209,210,211,211,212,211,209,211,212,210,210,208,208,207,208,208,207,209,207,208,208,206,207,205,206,203,204,204,204,199,205,113,2,1,5,8,8,8,12,10,10,11,10,10,217,222,217,219,219,215,219,219,219,218,216,219,220,215,216,214,217,215,215,217,215,217,217,217,218,217,218,218,220,216,215,219,217,218,218,218,217,217,216,216,218,215,219,218,217,221,220,221,219,220,219,217,218,218,220,217,218,218,216,219,218,218,217,217,219,218,220,218,218,220,219,221,221,220,218,220,218,217,221,215,220,220,216,219,216,219,218,220,221,218,220,219,221,219,219,221,220,217,221,220,217,220,220,221,219,221,221,222,220,222,222,223,222,222,224,222,224,223,224,226,226,229,227,228,227,227,228,227,229,230,230,229,231,228,229,230,229,230,229,230,231,232,232,229,230,232,229,229,230,230,232,233,232,232,233,232,232,231,232,234,233,233,234,236,237,237,237,239,239,238,238,239,238,237,237,236,237,237,238,237,239,239,241,240,240,242,241,243,243,243,244,243,243,242,245,244,242,244,244,244,246,245,246,245,246,246,245,247,247,247,246,247,246,245,246,247,249,249,248,247,249,248,248,248,248,248,248,250,249,249,249,249,248,249,249,249,249,249,249,248,248,249,248,248,248,248,249,248,250,248,247,248,249,249,249,249,248,248,248,248,248,248,247,247,248,248,248,248,247,249,248,248,247,248,247,249,248,249,248,247,247,247,250,248,247,248,248,248,249,239,242,250,247,248,248,248,249,248,247,248,249,249,249,243,243,248,248,247,244,248,249,248,247,247,248,249,232,169,185,229,240,249,246,246,246,247,248,247,247,246,247,247,248,249,
247,248,248,247,247,247,249,249,248,249,234,252,225,154,209,247,253,241,237,249,247,243,249,204,164,162,136,82,48,202,250,250,247,238,248,247,249,242,188,196,244,250,250,247,252,250,238,184,147,210,252,252,245,252,252,237,217,234,241,250,250,245,252,242,246,252,250,250,205,134,82,80,91,99,107,118,135,143,155,165,128,150,198,190,189,188,196,192,177,170,183,157,201,250,251,251,251,251,249,240,249,251,249,251,251,248,252,237,239,222,165,227,252,251,250,248,252,233,245,251,231,249,252,249,254,206,200,253,127,39,59,27,30,29,61,47,63,188,164,81,35,130,107,61,64,39,112,101,71,55,41,48,50,37,34,34,49,71,75,60,66,32,46,215,249,249,249,242,252,247,250,248,244,249,247,250,245,245,248,251,244,233,244,252,251,203,196,249,239,146,117,139,190,240,201,182,232,228,216,244,248,250,248,246,247,244,245,243,251,251,252,195,126,104,87,92,99,79,66,63,38,39,85,132,142,144,197,222,220,222,211,213,212,207,220,225,249,229,129,50,139,226,202,231,234,233,237,229,233,228,228,227,226,226,228,226,226,225,225,225,221,223,221,222,223,222,225,222,222,221,222,223,221,219,222,220,217,219,217,215,215,215,215,214,215,216,214,213,216,214,213,214,211,211,212,211,210,214,209,213,211,215,182,127,94,84,110,112,96,74,63,47,83,137,132,128,128,117,94,64,49,55,113,131,116,102,105,118,102,82,68,34,40,159,223,222,223,215,213,212,212,210,208,209,212,211,212,213,212,213,211,212,213,212,212,210,212,210,206,209,208,208,209,207,206,206,205,206,205,204,206,201,206,113,2,1,5,8,9,8,12,9,11,12,10,10,215,220,218,218,220,215,217,217,219,217,216,218,214,215,214,214,214,213,214,217,215,215,219,218,217,215,218,218,217,217,216,215,215,217,217,217,216,217,217,217,218,217,217,218,218,220,220,221,217,219,222,217,220,219,215,217,219,218,217,218,219,219,218,219,218,215,220,218,220,218,219,220,218,220,219,221,217,220,220,217,220,218,216,216,217,219,220,220,218,219,221,219,218,220,218,216,217,219,218,220,218,219,222,218,221,221,220,221,219,221,222,221,222,221,222,223,223,225,224,222,227,226,225,228,229,227,224,229,227,226,229,228,229,22
9,229,229,230,229,228,229,229,230,232,230,230,229,229,232,230,233,233,232,231,231,233,232,232,233,233,234,234,235,236,235,236,237,237,237,238,238,237,239,239,238,239,237,237,237,239,239,239,239,238,239,240,240,241,241,242,241,242,242,243,243,243,242,242,244,244,244,244,244,244,245,245,245,245,246,246,245,246,247,246,247,246,245,247,247,247,247,247,247,248,247,248,249,248,248,249,249,249,249,248,247,247,248,247,248,248,248,247,247,248,249,248,248,248,248,249,248,248,249,248,248,248,248,249,248,248,247,247,249,247,247,248,248,249,247,247,248,247,248,247,249,247,248,249,249,248,247,248,248,248,247,247,247,247,249,246,237,244,249,248,249,247,247,247,247,248,248,248,248,246,241,247,248,248,244,245,249,247,247,245,246,247,248,250,243,252,251,247,247,245,247,247,247,247,246,245,246,248,248,247,247,247,247,247,248,247,247,246,249,249,246,230,252,238,205,230,251,246,191,224,249,247,245,249,209,171,173,145,95,65,219,251,251,248,238,246,246,249,249,207,190,240,250,249,250,252,252,227,179,206,248,251,251,238,252,247,174,182,232,235,252,248,243,252,242,244,252,249,251,178,95,66,57,83,84,105,131,135,141,139,150,129,132,113,88,88,94,100,87,86,77,63,53,173,242,250,250,251,251,248,244,252,250,250,249,250,248,252,244,247,225,156,218,252,252,250,250,252,230,249,246,243,252,251,251,199,113,183,186,99,69,61,65,36,24,79,47,62,197,162,65,36,131,98,66,70,39,108,93,58,48,42,49,49,36,34,33,45,72,79,57,67,34,45,214,249,249,249,240,252,247,250,248,243,249,248,251,246,241,248,251,249,232,240,248,250,216,184,238,238,162,125,170,217,244,202,180,244,243,218,231,244,251,249,247,247,245,244,248,251,251,165,84,55,23,6,6,9,11,36,56,49,59,46,22,69,56,140,200,206,215,201,207,210,211,220,229,248,234,136,53,145,224,206,234,232,235,232,229,232,226,229,227,224,227,227,227,227,225,225,222,222,221,219,221,222,226,225,224,225,222,220,222,222,219,222,220,217,218,215,217,217,216,215,218,218,217,216,215,214,212,213,214,213,214,211,214,212,210,211,211,214,229,217,167,123,86,105,118,103,69,56,42,70,118,119,131,125,
112,81,61,41,47,112,111,104,90,99,107,94,78,59,33,29,139,198,208,224,213,214,210,210,213,213,211,213,211,211,215,213,212,212,211,214,211,211,210,212,212,211,212,210,211,209,208,208,205,205,205,203,205,207,202,207,113,3,1,4,8,10,9,10,9,11,11,11,11,219,221,219,219,221,217,218,218,217,220,215,217,216,214,217,217,217,214,217,217,216,218,214,215,219,218,219,215,218,217,218,218,217,217,217,217,217,218,218,217,217,218,220,217,219,221,219,220,219,220,218,220,219,217,217,216,220,220,218,220,219,219,220,220,220,220,220,220,222,220,222,222,223,223,221,221,219,221,220,217,220,217,219,222,218,222,219,219,219,219,222,219,222,222,221,221,222,218,220,220,219,222,221,222,221,222,223,223,220,222,221,221,223,223,224,221,224,224,224,226,227,228,227,229,229,227,229,229,229,229,227,229,231,229,231,231,231,230,230,230,229,230,232,232,232,233,233,232,234,232,232,234,232,234,234,234,235,233,234,235,232,234,235,234,237,236,235,235,237,237,237,240,237,239,241,239,240,239,240,240,240,241,242,241,241,243,241,243,241,242,242,241,243,241,242,242,241,243,244,244,245,245,245,245,245,246,246,247,247,247,245,246,245,246,249,249,248,248,247,247,248,247,248,248,249,248,248,247,248,249,249,249,248,249,248,249,248,248,249,249,247,249,249,249,248,249,249,248,249,248,249,249,249,248,248,248,247,249,249,248,248,249,248,248,248,247,249,249,247,248,248,248,249,249,250,249,247,249,248,248,249,249,249,248,248,248,248,250,245,237,248,250,248,249,247,247,249,248,247,247,248,249,244,242,248,249,248,245,246,248,247,248,246,246,247,250,252,252,252,252,241,244,245,249,247,247,247,244,245,245,247,246,247,248,247,247,247,247,247,246,247,248,250,245,237,252,243,211,224,202,213,219,241,252,246,244,249,207,172,162,138,80,61,218,251,251,248,238,248,244,250,247,205,188,220,249,250,250,252,246,243,204,225,249,248,243,181,208,239,220,224,242,244,252,251,242,251,244,243,251,251,251,180,102,93,84,85,84,80,84,80,81,78,77,67,79,80,71,64,84,94,71,81,75,66,47,149,241,251,251,250,250,246,247,251,249,252,249,250,247,252,250,247,238,1
57,196,250,250,249,249,248,231,252,237,248,252,235,227,145,115,210,237,160,81,66,84,64,57,95,65,59,168,151,75,48,131,99,56,66,71,127,86,55,45,42,51,48,35,36,26,43,75,70,56,67,34,47,211,249,249,249,242,249,249,250,248,245,250,249,250,248,241,245,249,252,236,239,248,249,233,184,220,251,221,166,203,222,246,245,211,252,243,179,202,233,248,252,246,248,244,245,252,253,197,106,65,52,59,48,28,24,39,54,61,63,63,55,53,33,2,115,196,200,211,192,202,206,204,219,225,251,249,143,58,152,222,203,233,232,235,234,228,234,224,226,229,224,226,227,224,225,224,223,223,223,224,222,223,223,225,225,223,225,223,221,221,222,220,220,219,218,221,217,216,217,215,216,216,215,215,214,215,214,213,212,214,212,210,213,209,208,210,210,216,213,229,206,161,129,101,116,109,92,69,53,35,80,121,107,115,96,86,78,62,45,52,108,105,104,107,91,82,69,60,63,23,58,174,193,207,218,211,213,209,211,213,212,212,216,213,213,214,213,214,211,212,213,210,213,212,213,211,208,214,210,209,210,207,208,206,205,203,204,205,208,205,206,112,4,1,4,7,10,8,10,10,10,11,11,11,216,220,219,217,221,218,216,217,217,216,214,217,216,217,217,215,217,217,216,217,214,215,217,214,215,217,218,215,216,215,214,217,217,214,214,213,215,217,214,218,217,215,218,217,217,218,219,219,219,217,220,217,218,218,218,221,217,221,219,217,220,221,220,219,219,217,221,219,218,220,222,222,219,220,219,220,219,222,220,218,222,222,220,221,220,220,220,220,219,218,220,222,222,222,224,222,220,220,220,221,219,220,222,222,223,220,220,223,219,222,221,222,222,221,224,221,223,225,225,225,225,225,226,227,227,227,227,230,226,228,230,226,228,227,227,227,229,229,230,229,231,230,229,228,229,232,230,233,230,231,231,230,231,232,230,232,233,235,235,232,233,234,232,234,234,232,235,235,236,235,235,236,239,237,236,238,237,237,239,238,239,240,241,241,240,241,243,242,241,243,242,243,242,241,243,242,242,244,243,244,245,244,244,245,245,245,245,246,245,243,246,247,246,247,248,246,248,249,248,247,247,248,248,249,248,248,248,248,248,248,248,249,250,248,250,248,248,248,248,248,247,248,248,247,248
,247,248,248,247,248,247,249,248,248,248,247,248,247,248,248,248,248,247,248,249,248,248,247,248,249,248,249,247,247,247,247,247,247,247,247,248,248,248,249,249,250,249,249,240,240,250,249,248,248,249,249,247,248,247,246,249,249,242,245,250,247,245,242,247,247,245,247,246,245,246,247,247,247,249,242,239,242,243,246,246,246,245,245,245,246,245,245,245,248,247,246,247,248,247,247,247,247,249,243,237,252,219,179,199,214,232,223,252,252,246,243,249,201,152,145,122,75,56,215,251,251,248,236,245,242,249,246,204,177,218,249,250,250,252,248,248,198,198,235,239,243,208,230,253,223,244,252,241,252,251,243,251,245,243,249,251,251,176,114,102,73,57,47,56,51,45,47,41,48,42,45,41,48,37,60,77,45,55,50,52,45,38,135,227,246,248,246,244,248,248,250,251,248,249,249,252,252,238,230,162,191,251,251,250,250,240,233,247,209,228,171,141,193,184,212,253,253,181,109,73,94,62,55,98,60,72,168,148,60,39,113,77,57,66,77,125,78,48,42,47,57,44,42,42,24,42,72,72,56,61,27,53,213,249,249,248,242,249,248,251,248,244,250,248,250,250,242,244,247,251,239,236,245,250,242,186,209,252,242,174,198,224,233,247,215,244,234,167,180,214,243,250,246,244,243,243,252,250,152,85,67,68,79,79,62,57,61,59,63,61,60,61,68,40,20,151,215,210,218,190,197,195,203,221,226,252,244,151,63,150,213,207,234,228,233,231,230,230,225,228,226,225,226,222,223,225,220,221,221,222,221,222,224,222,223,223,221,222,219,219,220,219,218,218,217,218,218,215,216,215,213,214,213,211,213,213,213,213,211,212,209,210,212,211,213,208,211,209,210,212,225,202,147,116,77,73,61,53,55,55,42,81,131,122,110,77,62,50,53,39,51,101,103,113,107,85,66,55,50,49,17,73,184,200,214,217,210,213,210,210,212,212,211,211,211,215,214,212,212,213,211,212,212,212,211,211,211,210,208,208,210,206,206,207,205,205,204,204,202,205,201,208,113,2,1,5,8,8,8,12,10,11,12,12,10,215,218,218,215,219,216,215,216,215,216,217,215,214,215,213,213,215,212,214,215,215,220,216,217,216,215,217,214,219,217,216,215,214,217,217,216,214,214,215,217,217,217,219,217,217,217,218,218,217,219,217,220,
219,219,222,220,220,219,219,221,220,217,220,220,220,220,219,220,219,219,220,219,222,220,221,221,220,221,218,219,221,220,221,221,221,223,221,222,225,225,223,222,221,221,221,222,224,219,222,222,222,221,220,222,220,220,218,221,221,220,222,222,218,220,222,222,225,224,224,225,225,224,226,227,225,226,226,225,226,229,229,229,228,226,227,229,230,229,230,228,231,231,229,229,229,229,230,230,231,231,231,234,234,233,233,231,232,233,232,234,234,232,233,236,235,232,232,233,235,234,235,237,236,237,239,237,237,237,237,239,240,240,241,240,240,241,240,241,241,242,243,242,242,243,244,244,244,243,244,244,246,246,244,245,245,246,245,245,246,245,246,247,248,248,246,247,248,249,247,247,248,247,249,249,248,247,248,248,248,248,247,249,248,248,249,249,250,251,249,248,248,249,248,248,248,247,247,247,249,248,249,248,248,248,248,249,248,249,247,247,249,249,248,248,248,247,248,247,247,247,247,248,248,247,248,247,247,249,249,247,247,247,248,247,248,247,248,248,238,242,249,248,249,249,249,250,248,248,247,247,249,244,242,247,249,248,245,245,247,247,247,247,245,246,248,246,247,245,247,247,242,242,242,245,245,244,247,245,247,247,245,247,246,248,248,247,247,247,248,247,248,248,249,238,230,247,189,201,244,243,240,207,235,251,244,244,249,194,160,154,131,72,72,232,252,252,248,236,246,242,250,249,213,191,221,249,250,249,252,245,238,164,177,242,251,250,222,248,252,231,245,252,242,252,251,241,250,246,241,248,251,251,174,120,105,71,55,56,60,62,60,61,53,57,53,53,49,55,36,61,85,43,48,52,68,22,31,155,240,248,249,244,245,249,249,250,249,247,250,249,252,251,223,203,149,188,250,250,249,249,235,240,237,190,208,187,206,245,214,246,252,179,160,157,118,113,73,45,94,79,65,151,161,89,42,105,96,78,64,61,105,62,51,50,48,57,44,57,53,26,49,74,70,57,61,26,51,217,249,249,248,246,246,247,252,247,246,249,248,248,250,243,243,247,251,245,234,242,250,250,196,194,246,245,171,186,226,227,237,204,225,228,188,189,201,238,252,248,246,244,245,252,229,123,92,85,88,91,87,71,46,44,49,53,50,55,46,55,39,35,187,234,225,231,195,200,202,208,212
,201,240,251,156,68,151,205,202,235,226,231,229,225,233,227,226,226,223,225,225,224,225,223,222,220,222,221,221,223,223,223,222,222,221,217,218,220,217,216,216,217,216,214,215,217,217,214,215,214,212,213,209,214,212,210,212,210,212,211,213,211,210,214,210,212,209,224,199,137,103,69,71,57,44,49,45,41,101,143,131,120,87,68,46,44,39,67,108,101,112,99,78,73,55,46,39,13,103,193,200,217,215,211,212,211,213,213,212,212,212,214,214,213,212,213,212,211,214,210,212,210,212,212,209,211,207,208,208,206,207,206,204,204,207,206,207,203,207,112,3,1,5,8,9,8,10,10,10,11,12,12,216,216,216,214,214,214,213,217,215,213,214,215,214,214,216,214,214,214,215,214,214,216,217,215,216,219,216,217,220,217,219,218,218,215,218,216,215,217,216,220,218,217,219,216,219,218,219,220,219,217,217,219,220,220,220,221,218,222,222,219,221,221,221,221,221,221,220,222,221,220,222,222,219,220,221,219,220,221,222,222,218,220,220,223,222,221,224,222,222,223,223,222,222,221,223,221,221,224,221,222,220,221,219,217,221,219,220,222,219,224,222,221,223,222,223,223,227,224,225,227,227,224,224,227,225,226,229,229,228,228,227,229,229,229,229,229,229,228,229,228,229,229,229,230,231,231,231,232,231,231,232,233,231,233,233,230,233,233,232,232,235,234,232,232,232,233,235,236,236,236,236,237,236,237,237,237,237,237,237,239,242,241,241,241,241,241,241,242,242,243,241,242,242,242,245,243,242,243,245,244,244,245,245,245,245,246,245,245,245,245,245,247,246,247,247,246,249,249,249,248,247,246,247,248,248,248,248,248,248,248,249,248,247,248,248,248,249,248,248,247,249,248,248,247,247,248,248,248,247,247,249,249,249,248,248,249,248,249,248,248,248,247,248,247,248,247,247,247,247,247,247,247,247,247,248,247,249,249,248,248,249,248,248,249,248,247,249,247,237,246,249,247,249,249,249,247,247,247,248,249,249,241,243,248,248,246,244,248,249,247,247,247,246,247,247,247,246,245,247,247,244,242,243,242,243,245,246,247,247,246,246,246,247,247,247,247,247,247,247,245,249,249,251,235,236,252,218,231,253,253,221,188,226,245,245,239,250,198,16
9,249,249,241,242,249,248,245,243,247,247,247,247,246,247,246,246,246,246,247,245,246,247,245,244,245,244,246,247,247,242,244,247,246,247,246,246,246,246,247,248,247,247,247,244,244,241,249,238,215,247,224,191,215,217,246,247,232,241,235,221,210,245,250,179,150,137,132,117,147,247,252,252,250,249,251,243,245,251,247,225,206,197,234,236,228,204,219,249,248,248,144,38,156,202,188,113,9,106,226,248,249,244,243,252,232,221,243,213,201,177,213,251,186,137,88,1,74,86,62,61,50,36,30,204,248,132,22,6,26,147,237,245,251,214,241,252,247,250,245,245,239,242,247,245,248,244,250,244,249,197,149,166,188,194,137,201,190,174,226,229,248,219,84,187,217,198,251,248,251,250,249,218,103,78,23,8,14,11,37,72,62,41,36,36,33,60,87,55,37,34,63,51,29,74,80,66,58,104,97,51,62,18,62,206,246,249,249,249,252,249,250,240,226,234,247,247,247,248,247,249,245,237,239,249,244,223,231,252,248,171,179,235,240,235,222,174,214,235,223,221,177,202,218,203,218,205,230,247,248,248,244,241,242,239,253,252,146,108,112,79,50,24,36,19,33,208,243,234,250,246,253,253,235,223,195,200,186,148,211,228,213,211,217,224,219,222,222,220,218,218,218,218,215,216,215,214,218,215,213,214,215,215,214,214,212,211,210,212,211,210,210,210,211,209,210,207,206,211,206,206,207,205,208,207,209,206,204,207,205,205,206,206,206,206,207,208,206,204,206,206,206,210,214,182,139,118,100,73,60,64,101,136,149,141,106,78,49,49,48,47,62,83,105,102,78,60,22,21,133,196,220,231,218,219,219,217,220,219,217,217,219,217,217,216,218,216,215,215,215,215,214,215,214,215,214,214,215,212,212,211,209,209,209,211,208,211,211,210,209,206,211,208,210,112,3,1,3,8,9,8,10,10,11,11,10,11,215,216,218,216,216,215,217,217,216,218,218,218,214,217,216,216,218,217,214,217,218,216,218,217,217,217,217,217,216,217,217,219,218,218,216,218,217,217,217,217,218,219,220,219,219,220,219,216,215,218,217,220,220,218,218,219,220,219,220,220,216,221,220,220,220,218,219,219,219,219,220,220,218,220,222,219,222,217,219,221,219,220,218,219,219,219,221,220,218,221,220,219,219,216,220,
218,218,220,217,220,218,217,218,217,219,219,221,220,219,220,220,221,221,222,220,219,221,223,223,223,222,221,222,221,221,222,223,224,224,227,223,224,225,224,226,225,224,226,227,226,224,224,226,227,227,229,227,225,228,229,229,231,231,229,231,231,231,231,229,232,232,234,235,234,235,234,235,235,237,237,237,239,239,238,239,240,239,239,238,239,239,239,241,243,240,242,243,242,243,244,245,244,244,242,242,242,243,243,244,243,242,242,242,244,245,244,244,245,244,244,245,245,244,245,244,245,245,244,244,244,244,244,244,244,247,246,245,246,246,247,248,246,246,246,246,247,247,247,247,246,247,247,247,246,246,246,246,247,247,247,248,246,246,246,246,247,247,247,247,248,247,247,247,247,246,247,247,247,247,246,247,247,247,247,247,247,247,247,247,246,245,245,249,247,234,241,248,247,248,247,248,247,247,248,249,244,241,247,248,247,242,244,247,246,247,247,247,247,246,247,245,244,245,245,246,245,244,244,244,244,246,246,247,243,243,246,246,246,246,246,245,247,246,247,247,245,247,246,246,241,248,226,202,234,213,224,244,232,252,237,205,221,229,213,207,246,249,172,141,136,120,98,97,152,201,245,249,249,250,243,244,249,250,226,211,207,228,203,160,122,181,244,246,233,125,54,135,190,197,138,73,159,243,250,249,241,245,251,237,222,237,211,198,178,218,252,175,138,77,1,75,85,67,61,56,38,34,206,242,147,39,71,192,241,251,250,242,196,236,252,244,250,245,246,236,240,245,244,244,245,246,249,246,184,69,68,130,147,206,250,241,240,232,227,247,158,87,193,193,212,252,249,252,251,249,225,141,87,55,57,38,25,74,71,51,54,37,35,50,74,70,46,45,42,58,46,35,74,83,67,56,106,97,53,61,24,61,196,247,249,249,249,252,247,246,246,228,229,245,245,247,248,247,248,245,236,237,247,247,222,227,251,250,183,167,222,244,242,237,178,191,245,222,208,211,196,188,211,229,207,224,247,244,244,243,236,239,234,252,244,131,106,100,76,49,22,38,15,56,226,234,234,244,252,253,237,213,222,202,205,176,163,220,225,217,211,220,221,219,222,219,217,219,217,217,218,215,216,216,217,215,214,214,214,214,213,213,213,213,211,210,209,208,210,210,211,209,208,21
0,209,206,207,206,206,207,207,207,205,205,207,206,206,205,205,206,207,205,207,208,208,208,205,208,209,207,210,213,170,128,112,99,98,75,71,124,152,162,163,143,104,71,63,71,73,87,110,120,105,83,36,44,154,217,216,215,219,219,219,218,217,218,217,218,219,216,216,217,217,214,215,215,215,215,218,218,216,213,214,214,215,215,213,212,209,211,212,210,208,208,208,208,206,208,208,209,208,209,112,2,1,5,7,8,8,10,9,10,10,10,10,218,218,218,217,218,217,217,216,214,217,216,217,217,215,217,217,217,217,216,217,218,218,217,217,217,219,218,217,219,217,217,217,217,218,219,218,216,217,218,217,217,216,219,219,217,219,219,219,218,218,219,219,218,217,221,218,219,219,218,218,218,219,219,219,221,219,219,219,216,217,217,217,220,220,221,220,220,223,221,220,222,220,220,219,217,220,218,218,222,219,219,220,219,220,218,217,218,221,220,220,219,219,219,218,221,220,218,220,222,221,218,221,223,221,221,220,223,220,222,224,221,221,223,222,223,224,222,223,223,223,225,224,225,225,225,227,225,225,227,225,227,228,226,229,229,230,229,227,228,227,228,230,229,227,228,226,229,230,230,233,231,232,234,233,236,237,235,235,236,234,236,238,239,239,239,240,239,239,240,239,240,241,242,241,243,242,242,244,242,244,244,244,243,243,242,242,242,242,242,242,244,244,244,244,244,244,244,245,244,242,245,243,245,245,242,245,245,247,246,244,245,244,245,246,245,245,247,247,246,247,247,247,247,247,247,247,246,247,247,247,247,247,247,246,246,246,247,246,246,247,247,247,248,247,246,247,247,246,247,247,247,247,246,246,248,247,247,247,247,247,247,247,247,247,247,247,248,247,247,247,249,249,249,243,234,245,249,247,248,248,247,247,246,246,246,242,241,248,249,246,242,245,248,247,248,247,247,247,246,246,245,244,245,246,246,244,245,245,246,245,246,247,247,246,242,247,245,245,246,245,245,244,247,247,246,246,247,246,248,246,249,231,226,252,232,242,252,228,219,183,193,229,232,212,207,249,248,182,153,140,127,92,60,53,59,175,249,247,249,240,242,249,251,240,224,214,154,117,126,38,99,232,249,244,155,52,88,175,232,202,141,204,251,251,249,242,247,251,2
38,226,231,206,196,179,223,252,169,141,75,0,75,84,73,62,54,33,41,221,247,197,116,195,249,249,249,249,230,200,240,249,247,250,246,247,235,237,244,244,245,243,246,245,248,189,41,7,32,115,237,249,247,229,151,198,223,139,183,242,202,231,252,249,252,247,251,251,212,162,89,76,57,86,142,105,66,50,46,45,54,69,49,43,56,51,42,24,41,81,81,69,51,107,106,50,64,22,56,195,244,249,249,249,252,248,245,248,234,226,242,246,247,246,247,246,247,236,232,245,250,229,220,244,250,198,163,208,242,246,238,188,168,239,222,188,217,179,161,208,236,210,205,230,238,239,240,237,240,232,253,237,130,100,97,71,48,28,37,0,84,239,235,246,253,251,245,203,203,229,212,206,165,180,226,225,214,206,220,221,219,220,217,217,216,217,216,217,215,214,217,214,217,214,214,214,212,213,211,214,212,211,213,212,210,209,210,208,210,206,209,210,206,208,206,208,207,207,208,203,206,206,205,209,206,206,207,205,207,207,205,206,207,209,208,206,209,215,228,211,161,136,117,115,132,135,154,169,162,165,151,116,94,96,93,99,100,107,111,101,77,54,136,217,239,233,216,219,218,220,218,217,217,214,219,219,219,216,217,218,214,218,216,215,217,217,217,214,214,215,214,214,215,212,211,210,209,209,208,209,207,209,209,208,210,210,212,208,212,113,2,1,4,7,7,8,10,9,10,11,10,10,217,217,218,216,218,217,214,215,216,214,215,214,217,218,217,217,216,214,216,219,219,218,218,217,218,217,216,215,214,217,217,216,214,216,217,218,218,220,219,218,218,217,218,220,221,220,220,220,221,222,218,219,219,217,218,219,220,220,220,220,219,219,219,219,220,217,219,221,219,220,219,219,218,218,220,219,220,219,218,220,221,220,220,221,219,218,220,219,217,219,219,218,221,222,221,219,221,221,218,220,219,220,220,219,222,219,220,220,222,221,219,222,220,220,222,221,222,220,222,226,222,223,225,224,225,224,224,223,223,223,221,224,226,225,228,226,227,228,225,228,228,227,229,227,225,228,228,227,229,226,225,228,229,228,228,229,230,230,232,232,230,233,232,234,233,236,235,230,235,234,235,234,236,235,235,237,238,240,238,240,240,240,240,240,240,241,241,241,241,242,243,241,241,242,242,243,2
42,242,242,241,241,243,243,244,242,243,244,243,243,243,244,245,244,245,245,245,246,245,246,245,248,247,247,248,246,245,245,245,247,247,247,246,247,246,246,247,245,246,246,246,247,245,246,246,246,247,246,246,246,246,247,247,248,247,246,248,246,247,247,246,247,247,247,247,246,247,247,247,246,246,246,246,247,246,246,245,247,247,247,247,247,247,248,241,236,245,247,247,247,247,247,247,247,248,246,238,243,248,246,244,242,247,247,248,247,246,245,245,245,245,245,244,244,244,245,245,244,244,244,246,245,244,247,244,241,245,246,245,246,245,244,245,246,246,246,246,246,246,247,247,252,229,230,252,222,240,213,178,221,220,225,237,236,211,208,251,250,181,161,155,131,86,55,53,6,59,185,227,249,243,242,249,252,242,231,186,87,99,120,18,62,203,252,252,186,124,125,182,243,222,179,210,252,252,249,245,247,250,244,229,230,204,196,182,226,252,166,142,71,1,75,87,71,55,56,32,41,224,249,191,143,236,252,230,237,243,230,202,240,251,244,248,245,248,236,237,244,244,244,244,244,249,249,173,54,61,55,71,220,246,247,192,166,249,226,163,232,248,207,247,252,250,250,245,252,252,249,204,108,57,7,49,118,92,55,44,53,47,61,79,43,50,75,50,32,13,46,84,83,68,48,106,108,53,62,22,50,182,244,248,249,249,252,247,244,249,239,224,239,245,244,245,246,245,247,237,228,239,250,232,222,240,251,220,163,191,236,238,235,184,119,188,183,148,225,196,132,190,226,183,162,205,231,236,242,235,237,231,253,231,127,108,92,76,43,34,34,15,112,233,234,234,253,253,226,187,223,234,218,185,152,193,224,226,213,203,218,219,221,220,218,215,217,216,217,216,214,215,214,215,213,216,216,211,213,213,212,211,211,212,212,212,211,211,211,210,209,208,208,206,206,206,205,206,205,205,207,205,205,205,206,207,206,205,206,208,205,204,206,208,207,207,208,208,209,217,232,206,167,144,119,111,111,113,115,105,100,89,91,91,72,60,50,52,51,55,62,60,58,45,108,215,226,227,226,218,216,215,220,218,218,221,219,219,220,220,218,217,218,217,218,217,216,215,214,215,214,217,214,215,215,212,211,211,211,210,210,208,210,210,210,208,210,211,212,210,210,112,2,1,2,7,9,7,10,9,10,10
,10,10,217,218,218,216,218,216,217,219,215,217,217,218,215,215,218,216,217,219,217,220,219,218,218,216,219,218,220,219,218,217,219,219,218,218,218,220,217,220,220,217,222,218,219,218,219,219,219,221,219,220,217,219,219,221,221,219,221,218,222,222,220,220,218,219,223,219,219,222,218,219,221,220,219,217,219,218,218,221,219,221,221,220,221,220,222,224,222,219,220,221,222,222,221,222,222,221,221,221,219,217,222,223,221,222,223,222,223,224,222,222,220,222,221,222,223,222,225,222,224,225,223,225,225,225,225,224,224,225,226,227,227,225,226,226,226,226,226,227,229,227,228,229,227,227,227,227,227,229,229,231,230,229,231,231,234,233,234,234,231,233,232,232,233,232,232,233,235,236,236,235,235,234,234,237,237,238,240,239,240,238,239,241,241,239,241,241,239,242,242,242,242,243,244,243,244,244,244,244,244,243,244,243,241,242,241,241,244,242,244,242,244,244,244,244,244,246,245,245,245,246,246,245,247,245,246,246,246,247,247,247,247,247,247,246,246,247,247,246,245,245,246,248,247,247,248,248,246,247,247,245,246,246,246,245,246,246,247,247,247,247,246,247,247,247,247,247,247,247,246,247,246,246,245,245,246,247,247,246,247,247,247,247,249,237,238,248,247,246,246,247,245,246,246,248,244,239,246,247,246,242,244,247,246,246,246,245,246,245,245,245,246,246,245,245,244,244,245,244,244,245,245,245,247,246,241,243,245,245,247,245,244,244,244,245,245,247,246,244,249,248,250,224,229,230,187,204,213,222,250,233,226,231,231,205,212,250,247,178,150,150,132,81,56,98,78,111,203,235,250,237,244,249,252,229,218,183,50,115,175,25,60,205,248,253,224,191,186,201,241,221,189,204,245,251,246,244,247,250,244,234,224,197,194,184,232,250,154,149,71,1,80,84,70,51,56,33,47,234,248,170,158,250,251,215,232,235,210,199,238,246,245,248,245,249,240,236,244,243,244,244,244,247,251,198,181,205,125,67,137,191,190,195,226,252,218,168,251,228,210,252,252,249,249,249,252,252,249,201,134,89,28,6,14,9,29,39,42,44,55,63,49,59,83,52,23,19,48,92,86,69,46,105,112,56,63,19,46,180,239,248,250,249,252,247,245,249,246,225,231,244
,245,245,245,243,248,242,229,236,248,240,221,231,252,236,169,181,228,244,221,196,96,92,166,183,237,236,159,186,225,171,161,190,224,240,238,236,238,230,252,225,119,105,94,73,45,33,29,4,148,245,250,250,253,241,199,206,251,241,213,156,146,207,222,230,209,207,223,217,221,220,218,216,213,213,214,217,218,216,216,213,214,212,210,212,212,215,213,212,212,208,210,210,209,211,210,211,212,210,209,207,208,207,206,209,205,204,207,207,207,206,205,207,207,207,208,207,206,205,206,207,208,210,209,210,211,216,212,163,113,97,68,59,69,50,41,44,36,35,71,84,69,42,28,33,20,37,50,62,30,42,167,212,217,222,221,222,216,218,218,218,222,222,221,220,222,219,218,220,217,220,219,217,219,217,219,218,219,219,216,217,216,213,212,212,212,212,213,214,212,213,214,211,210,211,213,211,210,112,3,1,3,7,9,7,10,10,10,10,10,10,214,217,218,215,216,215,214,216,218,217,218,215,217,217,215,218,217,217,218,218,217,217,217,215,219,218,217,219,218,220,218,219,217,219,219,218,218,217,217,218,218,220,220,218,217,216,219,218,216,219,220,220,220,218,220,220,219,220,217,217,218,218,221,219,220,219,220,219,218,218,218,218,219,218,220,219,222,222,221,222,221,222,221,222,222,221,221,221,221,219,221,222,221,222,222,221,222,221,221,222,219,220,220,222,223,221,221,221,221,219,222,222,222,222,223,221,222,225,225,223,222,224,224,222,225,224,224,226,224,226,225,225,226,222,224,225,225,224,222,224,226,225,226,227,226,226,226,228,226,226,227,229,231,231,231,232,232,231,232,231,231,230,231,232,231,232,233,232,235,233,233,234,235,236,235,237,237,237,237,237,238,240,240,240,240,240,240,242,240,241,242,241,243,243,243,244,243,242,242,241,244,242,241,243,241,242,243,242,244,243,245,244,245,245,245,246,245,246,248,246,248,247,245,246,246,246,248,248,248,248,247,247,247,247,246,248,246,248,245,245,246,245,244,244,247,246,244,245,244,244,245,245,245,244,244,245,245,246,246,246,246,245,246,246,245,246,246,246,246,247,246,245,246,245,246,248,247,246,245,246,245,249,244,234,240,247,247,247,246,246,245,245,247,245,240,242,247,247,244,241,245,246
,245,245,244,245,245,244,244,243,245,246,245,245,245,245,245,245,244,244,244,244,245,246,241,241,245,245,245,243,244,244,244,246,246,246,245,244,247,249,249,212,229,233,201,236,226,227,230,184,209,231,231,205,214,250,244,164,138,139,128,67,110,238,224,227,232,223,242,230,234,242,253,237,223,168,86,174,168,66,127,232,253,253,228,206,203,214,244,214,197,198,231,251,239,242,244,247,244,233,225,198,195,186,232,233,144,143,61,3,82,86,68,47,51,29,46,235,248,134,165,250,245,218,226,220,206,200,233,246,243,245,242,246,239,231,240,244,243,243,241,249,240,238,252,252,169,100,58,69,202,222,250,250,153,177,252,207,222,252,247,250,245,250,249,252,243,221,223,212,147,127,115,71,42,12,18,54,62,63,50,93,90,37,29,19,54,92,86,68,44,108,107,55,61,24,50,179,243,248,250,249,252,245,243,247,247,231,225,240,246,247,246,243,246,242,226,230,247,242,218,224,253,246,176,168,215,241,226,206,140,140,203,212,234,251,198,212,224,185,199,198,217,236,237,235,235,228,253,212,116,103,88,77,34,34,16,31,194,233,251,251,244,208,202,229,252,201,159,111,163,218,223,229,201,207,221,217,218,216,215,214,215,212,215,213,213,215,212,211,210,211,210,210,213,211,210,211,210,208,208,210,209,208,209,208,208,210,210,207,207,207,207,206,205,205,205,205,206,207,206,208,206,205,206,206,206,208,207,206,206,210,206,208,209,214,195,127,91,69,36,53,57,43,49,37,35,39,53,72,66,55,41,30,28,44,59,63,18,73,198,221,225,220,216,218,218,218,218,220,218,221,220,219,220,218,220,217,217,217,217,215,215,216,215,215,215,216,215,214,213,214,212,210,212,214,213,210,211,213,212,210,212,214,212,207,211,112,1,1,5,7,8,7,10,10,9,11,10,10,217,220,217,217,217,214,216,218,218,217,216,217,217,217,216,215,218,217,219,218,217,218,218,218,221,217,219,217,216,219,220,219,221,220,219,220,218,219,218,218,222,217,218,220,219,219,219,219,218,220,219,219,218,220,221,219,218,217,218,219,221,220,218,218,221,220,221,220,218,218,220,221,222,222,222,223,222,225,222,221,224,224,224,223,223,223,222,221,221,223,221,222,221,222,224,221,223,225,223,222,224,222,222
,223,222,224,223,221,222,222,223,224,222,224,223,227,226,226,229,225,226,225,225,225,224,226,226,225,226,225,227,226,225,225,225,226,226,227,226,225,226,227,229,226,226,226,225,228,229,229,229,228,231,230,228,230,232,230,232,231,230,233,233,234,233,232,232,232,234,234,234,234,236,236,236,237,237,239,239,239,241,241,241,240,239,242,241,241,242,241,244,242,242,244,244,244,242,244,244,244,244,243,243,244,244,244,246,244,245,245,245,246,247,247,248,249,248,249,248,248,247,247,249,248,249,247,247,247,247,247,247,247,246,247,246,246,245,244,245,245,246,244,244,244,245,246,245,245,248,246,245,244,244,245,247,246,245,245,246,248,246,246,246,247,247,246,246,247,246,247,249,248,247,246,246,247,245,246,246,245,247,249,241,230,242,247,247,248,246,247,246,246,247,244,238,245,248,247,242,241,246,247,247,247,246,245,246,246,245,244,246,245,245,245,244,245,245,245,244,244,245,245,244,244,242,241,243,245,246,245,246,246,244,245,245,247,245,246,245,249,246,222,251,244,226,245,200,193,217,206,222,238,233,201,217,249,243,172,146,142,123,59,127,244,250,249,209,202,240,229,229,235,246,240,247,202,93,165,204,153,217,251,253,253,208,208,202,211,236,210,218,211,237,252,239,247,242,248,243,236,230,207,191,181,237,218,136,145,57,2,83,91,71,46,49,36,36,235,242,121,183,244,245,217,223,212,206,213,236,246,242,245,243,245,239,227,236,244,243,244,243,248,246,241,252,252,167,129,69,60,209,238,245,232,101,184,245,196,238,252,248,251,247,250,249,252,229,236,252,252,253,253,249,186,84,14,57,150,71,41,31,76,94,31,31,24,55,100,85,67,42,98,113,57,61,20,56,189,243,248,250,249,252,245,244,247,249,239,222,238,245,244,245,244,244,244,232,232,245,247,224,219,248,252,193,165,203,239,233,222,168,152,222,236,225,245,199,192,212,199,231,208,211,235,237,237,235,230,252,204,110,105,81,75,41,23,8,73,242,249,251,251,211,199,215,252,243,156,80,70,181,223,225,233,199,211,218,215,219,214,214,216,216,213,214,214,212,212,213,212,211,213,212,214,211,209,210,212,212,209,210,210,208,209,210,209,208,208,210,209,207,208,208,20
8,207,205,207,205,205,207,207,206,207,206,205,205,206,206,205,205,207,208,208,210,212,220,195,146,113,78,60,65,63,56,60,54,39,57,69,55,48,43,39,30,38,55,58,42,19,123,224,224,229,221,220,219,217,219,218,222,222,217,220,220,219,219,219,218,217,218,218,219,217,215,217,215,215,214,214,215,213,214,213,213,214,214,212,211,212,212,212,210,212,210,210,208,212,113,1,1,4,7,8,8,10,9,10,11,10,10,217,219,217,220,220,216,216,218,217,217,217,217,218,215,216,217,216,219,220,219,218,220,218,216,219,219,218,217,217,219,220,221,218,218,220,220,220,219,218,218,219,219,220,218,220,220,219,220,219,221,220,218,222,222,222,218,219,221,217,220,220,218,222,219,219,220,220,219,221,222,221,221,221,221,222,224,224,224,223,224,226,226,224,225,226,224,224,224,224,224,224,225,221,222,226,226,225,224,223,224,224,225,225,223,223,223,226,225,224,224,227,224,224,224,224,226,226,225,227,226,228,227,224,224,228,226,225,227,227,227,224,226,226,225,224,227,227,229,228,228,230,227,228,227,230,229,229,232,227,232,232,229,232,232,233,232,231,232,232,233,233,234,236,235,236,235,236,235,234,234,235,235,234,235,236,237,238,238,238,239,239,239,239,240,241,240,241,242,241,241,243,243,243,243,243,243,243,243,243,245,244,243,245,244,243,245,244,244,245,243,244,245,245,246,249,248,247,247,247,246,247,247,247,247,247,247,247,247,247,246,247,247,246,246,246,246,245,244,244,244,244,244,245,244,245,244,244,245,245,244,244,245,246,246,244,245,246,245,244,244,246,245,246,246,245,247,246,247,246,246,246,246,247,246,246,246,246,246,245,244,246,247,236,235,245,247,247,247,247,248,246,247,249,241,240,246,247,246,242,244,246,246,247,246,244,245,245,245,247,245,245,245,244,244,244,244,244,245,244,244,245,244,243,244,242,237,240,245,244,243,245,245,244,246,244,245,245,246,245,249,241,220,252,231,192,211,205,231,238,226,242,240,229,198,220,250,246,178,153,151,131,53,112,243,251,251,218,176,225,240,237,228,240,229,228,208,150,184,189,205,252,252,251,251,202,212,202,212,242,229,245,229,242,251,239,249,244,247,241,234,232,198,177,18
4,239,198,134,141,53,3,78,81,64,49,50,41,51,239,249,111,160,237,243,221,215,217,226,215,234,249,241,244,242,245,241,228,231,241,242,243,244,248,246,234,253,221,134,156,90,62,195,234,245,177,85,216,232,203,248,249,249,247,247,249,250,250,215,238,252,252,252,252,249,208,124,33,84,180,71,30,15,69,84,32,34,19,67,104,81,65,36,98,109,57,60,28,54,196,249,249,249,249,252,247,243,248,248,244,224,232,245,244,246,244,244,246,231,226,239,249,231,218,241,252,208,163,193,234,235,221,185,145,186,236,224,234,204,167,173,199,246,221,203,236,239,235,237,230,252,198,113,107,88,70,29,8,27,156,234,251,252,230,204,213,234,234,202,125,52,60,188,219,227,228,198,211,216,217,218,213,218,216,215,214,215,214,212,215,215,213,213,212,211,212,212,210,212,213,212,210,208,208,210,210,208,209,208,208,209,209,210,209,208,209,211,208,207,208,208,207,205,210,207,206,208,205,206,206,205,207,208,207,206,210,214,219,204,172,173,155,118,108,81,69,73,58,94,113,84,67,51,45,39,37,46,58,27,40,147,224,243,221,221,225,221,222,217,216,218,218,218,219,217,218,218,218,217,217,217,216,216,216,216,216,215,215,218,215,216,215,213,215,214,215,214,213,215,212,212,212,211,211,210,210,212,210,210,110,3,1,3,7,9,7,9,9,10,10,10,10,219,221,218,219,222,218,218,219,219,219,220,221,218,218,221,218,221,219,219,221,218,222,218,219,220,216,221,219,217,218,219,218,219,220,219,218,219,220,219,219,222,221,222,220,220,222,221,223,223,221,222,220,219,219,219,222,222,222,220,221,221,220,222,223,223,221,223,222,223,223,222,221,221,224,225,225,224,225,226,227,228,229,228,229,227,226,227,226,226,228,226,228,227,226,227,225,226,229,226,225,229,227,227,227,226,228,228,228,227,227,229,226,226,228,223,226,224,227,227,224,229,225,226,227,226,229,228,227,226,223,226,225,225,226,226,229,228,225,227,229,228,229,229,229,229,230,230,231,231,232,234,231,232,233,234,236,233,234,234,235,236,236,236,235,235,236,236,234,233,236,235,235,236,235,237,237,237,239,238,239,237,238,238,238,240,241,241,239,242,242,243,243,242,242,242,242,243,243,244,244,245,245,2
45,244,242,243,245,245,247,246,246,245,246,246,247,247,246,247,246,247,247,247,247,247,249,247,247,247,247,247,247,247,246,246,246,246,245,246,245,244,244,245,247,247,245,244,245,246,247,246,244,244,244,245,246,246,245,244,245,244,245,246,246,247,247,246,246,247,247,247,245,245,246,246,248,246,247,247,246,246,249,247,235,237,248,245,246,247,246,247,247,247,245,237,242,247,247,244,241,246,246,244,247,245,245,246,245,245,245,244,245,244,244,244,244,244,244,244,244,244,245,245,245,244,242,235,239,243,244,244,241,244,243,246,244,246,246,246,245,249,233,212,239,191,194,238,229,252,252,226,220,228,219,193,221,250,247,174,144,143,130,54,108,244,252,252,249,202,194,203,235,246,237,220,214,187,180,203,202,221,252,252,250,247,185,219,206,215,232,187,230,229,239,242,236,250,241,249,241,235,229,200,181,187,245,181,118,136,49,8,66,73,62,49,57,41,46,212,168,87,156,206,238,212,213,226,232,213,230,248,242,247,242,244,241,232,234,239,241,242,244,247,247,217,220,182,150,212,118,41,150,206,227,156,158,249,222,210,251,248,250,247,248,246,251,243,210,241,252,252,249,248,249,177,117,39,84,168,57,31,19,75,73,29,32,20,65,93,83,69,42,79,111,62,51,27,45,192,249,249,249,247,252,246,242,248,246,248,224,224,244,245,245,244,244,247,237,226,236,247,236,215,232,252,225,169,181,223,239,223,199,160,165,228,231,204,198,166,134,175,240,235,201,218,231,229,233,237,252,191,120,111,88,50,16,4,83,243,252,252,248,210,208,231,252,252,158,104,36,66,199,212,231,223,193,214,213,218,217,212,214,214,214,213,214,213,211,211,213,211,209,212,211,210,211,212,213,212,211,210,211,210,210,210,209,206,210,209,209,208,207,211,208,209,209,208,210,211,210,209,209,208,208,208,208,209,208,206,208,208,207,205,207,211,213,213,189,193,226,225,194,146,103,72,47,34,80,126,107,81,60,45,43,34,51,29,36,154,230,235,245,222,225,221,220,218,214,216,216,220,220,218,220,218,220,219,216,217,218,217,215,217,215,216,215,214,216,214,217,214,212,213,213,213,214,214,211,214,211,213,213,212,211,208,214,212,211,111,2,1,3,7,9,7,10,10,10,10,10,10,
220,224,219,221,222,218,219,220,220,219,220,220,220,219,219,222,218,220,222,220,221,219,222,222,221,221,218,219,221,219,221,221,219,220,219,219,220,220,222,221,224,220,223,220,220,222,222,225,220,224,222,220,222,219,223,220,220,222,222,224,221,224,224,221,221,222,224,220,222,222,220,224,222,223,225,225,225,227,227,226,228,227,228,230,229,229,228,228,229,229,227,228,229,227,226,229,229,229,229,230,230,228,230,230,230,230,228,229,228,226,231,227,226,229,229,229,227,226,227,224,227,226,225,226,226,228,229,226,226,228,228,229,230,228,227,229,226,228,229,227,229,230,229,227,229,230,232,233,230,232,232,232,234,234,235,232,235,236,236,235,234,235,234,236,235,232,234,234,236,236,235,236,233,237,238,237,239,238,239,239,239,240,238,240,239,240,240,240,240,241,243,242,242,242,244,243,244,244,244,243,244,245,243,243,244,244,245,245,245,245,245,245,245,244,246,246,247,246,246,247,247,247,247,248,246,247,247,246,247,247,247,247,246,246,246,246,246,246,245,246,244,244,245,244,244,245,245,246,245,245,244,244,244,244,243,244,244,244,245,244,245,245,245,245,246,246,246,246,245,246,247,247,246,245,246,246,246,245,246,246,249,244,233,241,246,246,249,245,245,244,245,249,244,238,244,248,247,242,242,246,245,245,246,246,245,246,245,245,245,244,244,244,244,244,246,244,244,245,244,243,244,244,242,242,242,237,237,241,241,242,243,243,242,243,243,244,246,247,244,251,227,211,239,207,226,250,240,252,224,199,226,230,216,196,225,251,243,158,127,136,115,44,111,244,252,252,249,214,187,181,214,246,251,240,204,174,198,205,203,221,247,252,249,237,152,203,203,190,153,94,173,204,230,228,231,248,241,249,243,233,231,214,179,190,242,163,119,132,41,20,65,56,55,57,50,57,31,49,96,138,177,208,227,201,197,218,234,209,230,247,241,245,243,245,240,233,229,237,242,243,243,248,245,228,231,171,175,224,125,45,108,222,250,160,208,251,207,224,251,248,250,245,248,244,251,233,209,248,249,250,243,242,248,156,101,26,87,155,37,28,25,83,66,26,30,22,61,84,82,78,50,92,116,62,51,39,25,128,244,244,247,247,251,246,242,248,246,250,23
2,220,238,248,247,247,245,246,243,227,234,248,240,216,226,251,237,174,175,217,236,231,210,171,163,217,223,161,197,198,114,150,223,227,162,173,220,226,235,241,252,181,110,104,66,41,34,115,183,248,250,250,231,215,229,234,254,220,135,98,30,103,211,215,234,216,195,208,212,219,215,210,214,211,213,212,213,212,211,211,210,210,210,210,209,210,210,208,210,212,210,212,212,211,211,208,212,209,207,207,209,210,206,208,208,207,208,208,208,208,207,208,209,209,207,208,208,206,208,207,206,205,207,206,206,208,213,198,179,197,223,235,213,157,103,57,25,13,23,43,65,65,51,46,33,34,22,8,110,223,244,241,228,223,226,220,215,217,215,217,219,219,218,217,219,219,218,219,218,217,217,217,217,214,214,217,215,215,214,213,213,212,212,215,214,213,212,212,213,210,212,211,208,210,212,212,212,211,212,113,1,1,5,8,8,7,10,9,10,11,10,10,220,223,221,221,219,217,218,219,221,220,217,218,221,222,219,219,221,219,220,219,220,223,219,219,220,217,219,218,218,220,220,221,219,220,221,221,222,220,219,217,221,221,219,220,220,220,220,221,218,218,223,223,222,222,220,221,223,222,222,222,223,222,221,221,220,222,222,221,222,225,224,223,225,225,226,226,226,227,229,229,229,230,229,229,231,230,231,230,231,230,228,227,230,229,229,230,230,230,230,232,231,230,231,230,232,232,228,230,228,229,231,229,230,231,228,229,228,229,227,226,229,226,226,226,224,226,227,227,229,229,229,227,228,228,228,229,231,229,229,230,229,229,229,231,229,230,230,232,230,230,232,228,230,232,232,232,232,236,234,234,234,231,232,235,235,235,236,236,236,237,237,238,239,236,237,237,237,237,237,239,240,239,237,237,239,240,241,241,240,240,240,240,241,241,244,242,243,245,245,243,244,243,243,242,242,242,244,245,245,245,244,244,245,244,245,245,244,247,246,247,245,245,247,248,247,246,244,245,246,248,247,246,246,246,247,247,245,244,245,244,244,245,245,244,246,246,245,246,244,244,244,244,245,245,245,244,244,244,245,244,245,245,244,244,246,245,244,246,244,245,245,245,244,244,246,245,246,246,245,246,248,240,230,241,246,245,246,246,246,245,246,247,239,239,247,247,245,241,
,243,245,245,245,244,244,244,243,243,240,240,238,239,238,239,240,239,240,239,238,239,239,238,242,240,220,222,239,241,240,240,239,240,241,231,224,241,243,238,233,235,241,241,240,240,240,240,240,239,241,239,239,239,240,239,238,239,237,237,237,237,237,237,238,238,238,237,239,239,237,239,238,238,238,240,230,226,237,238,238,239,235,240,238,248,226,206,223,168,192,232,228,246,252,240,227,237,239,240,185,229,247,252,231,119,95,89,71,17,126,242,248,250,235,228,245,253,194,101,57,6,2,12,90,188,161,154,177,132,54,14,11,43,36,29,49,24,37,39,17,15,46,134,127,80,48,29,78,49,17,15,14,21,55,73,24,14,14,13,21,22,26,19,16,15,16,15,19,17,15,17,15,20,24,23,18,19,24,20,19,20,23,25,26,21,18,24,19,48,67,95,80,45,53,63,84,81,87,22,66,193,230,250,247,245,244,243,243,252,223,185,238,239,244,244,242,244,253,249,116,117,44,15,144,39,49,98,37,40,25,29,35,35,66,45,46,59,67,104,134,121,57,55,20,44,214,243,250,250,235,236,238,241,240,240,242,241,241,248,221,207,230,234,241,236,238,235,218,224,234,231,206,216,237,223,191,166,188,212,209,217,221,198,155,50,37,120,149,155,136,94,99,96,149,228,252,252,250,246,251,248,244,232,228,240,245,234,233,171,152,130,111,134,133,155,106,73,184,158,113,75,11,3,71,184,222,224,217,213,217,215,215,214,214,214,213,215,216,215,217,214,216,216,214,217,217,220,219,217,219,219,219,217,222,220,217,219,218,221,218,220,220,218,219,220,220,219,220,222,221,218,218,220,219,220,227,222,198,199,229,246,252,227,160,118,72,24,4,36,162,229,249,249,232,232,225,224,220,216,220,222,220,220,221,220,220,220,220,220,222,221,220,220,220,220,220,219,220,220,219,220,219,219,218,218,220,218,217,220,219,218,219,217,217,217,215,215,217,216,215,215,214,215,215,216,216,214,216,214,214,212,213,212,214,113,2,1,2,6,8,7,8,8,10,10,10,9,229,231,227,229,230,227,228,229,228,229,232,229,231,231,231,233,231,231,231,232,232,233,233,232,230,233,231,233,232,230,231,231,236,235,235,237,237,237,236,237,237,237,237,238,239,240,239,238,240,238,240,239,236,237,239,240,239,238,242,240,240,239,237,240,240,239,241,
241,239,240,239,239,239,237,238,239,237,236,239,239,239,241,239,240,240,239,237,237,238,237,237,236,237,237,238,237,236,237,237,235,235,236,237,237,234,235,236,233,234,235,234,236,236,236,236,235,236,236,237,235,236,237,239,238,238,237,235,234,237,236,235,237,237,239,240,239,240,238,240,240,241,241,241,242,241,243,243,242,242,242,242,243,243,244,244,244,245,245,245,245,245,245,245,245,247,246,245,246,245,245,244,243,245,242,239,236,237,239,240,241,239,238,238,239,241,240,240,241,237,236,236,233,234,233,233,232,234,233,234,234,232,230,229,232,232,232,235,239,239,238,239,238,241,240,238,240,239,240,241,241,239,239,239,237,239,237,235,237,236,237,236,236,239,236,239,239,239,240,237,239,240,240,241,239,241,240,242,243,243,243,241,243,245,244,244,245,245,244,244,244,242,242,241,240,241,239,239,238,241,239,239,240,237,237,237,240,242,234,216,226,241,240,241,241,241,244,237,224,232,241,242,234,231,239,241,241,241,240,239,239,239,239,240,239,240,240,237,239,238,237,237,236,236,237,238,237,237,237,236,237,237,237,236,237,237,237,237,240,232,227,235,238,238,239,237,240,237,249,218,206,225,206,245,250,232,217,226,200,213,241,241,241,191,234,245,252,222,111,101,94,77,8,100,239,249,250,238,227,245,252,207,84,28,19,41,87,75,48,28,51,136,163,119,38,23,71,38,41,54,22,51,35,16,26,11,22,77,131,110,104,111,47,18,10,16,11,46,76,24,11,15,14,20,17,21,16,16,16,14,16,14,13,17,18,16,20,24,20,17,16,19,21,15,18,20,20,28,15,22,18,31,63,69,97,65,51,47,22,42,74,93,21,56,198,240,250,251,244,241,240,245,252,204,193,244,236,245,243,241,240,253,232,95,110,36,27,145,34,63,91,32,38,17,32,29,25,51,59,71,98,153,174,169,154,77,52,17,34,204,243,250,250,234,235,239,242,241,240,238,240,238,244,232,204,226,237,239,240,236,241,227,228,240,237,211,211,235,227,199,171,178,214,210,222,231,238,221,114,93,147,150,152,127,75,19,57,196,247,249,249,247,247,250,245,232,226,238,244,246,253,210,153,151,117,95,123,131,151,97,104,213,141,88,65,4,49,163,204,222,222,217,217,212,217,215,212,212,214,216,215,214,216,215,216,21
7,217,218,219,220,222,220,220,219,219,221,221,221,219,219,221,219,220,220,220,221,218,219,220,222,222,222,223,223,220,221,221,221,222,229,219,199,224,249,245,184,132,87,32,5,23,98,176,240,242,236,233,228,225,223,218,217,217,220,220,220,222,222,221,217,220,220,220,221,218,219,220,220,221,219,220,220,219,220,220,220,219,218,219,219,217,218,220,220,217,217,220,219,217,215,218,217,216,215,215,218,216,216,217,217,214,214,214,214,216,216,214,212,111,2,1,2,6,8,7,9,9,9,10,9,10,228,231,230,229,229,227,228,229,228,229,229,227,232,232,232,232,231,234,231,232,230,231,230,229,232,231,230,231,233,231,233,235,236,237,237,239,236,237,237,238,237,236,237,237,239,239,240,240,240,241,239,239,240,239,239,240,239,239,240,239,239,241,239,240,241,241,240,240,240,239,241,239,237,239,239,238,239,239,237,236,238,237,238,240,240,239,239,239,237,240,238,238,239,237,237,239,237,237,236,235,237,233,237,235,234,235,232,234,235,234,235,235,236,234,238,237,236,238,236,238,237,237,237,237,238,237,235,234,236,236,237,237,237,238,241,240,239,241,239,239,240,242,241,240,242,243,240,243,243,242,244,242,243,243,242,242,242,244,244,245,246,245,246,246,246,247,243,244,244,243,242,243,244,241,240,240,241,241,242,241,240,239,238,240,241,241,240,239,237,237,234,235,234,235,234,235,233,235,235,233,232,229,229,229,232,234,235,236,238,240,239,238,239,239,238,239,239,238,236,237,237,239,237,236,236,235,236,236,236,235,237,237,237,237,237,239,238,238,239,239,239,239,239,240,239,240,242,243,244,243,244,243,244,244,244,245,246,246,245,244,244,243,241,241,240,240,239,239,239,237,239,237,237,237,239,240,242,226,213,232,241,239,238,237,240,241,231,224,235,242,240,231,234,240,240,241,240,239,239,238,239,239,238,237,239,237,238,238,236,237,237,238,237,236,236,235,237,237,237,237,237,237,237,237,237,237,237,240,236,229,235,237,237,239,239,240,240,248,220,227,244,213,232,220,184,195,235,226,235,247,242,242,200,236,248,252,220,106,88,85,73,11,108,240,249,250,235,228,245,253,199,69,57,78,107,93,54,35,26,29,71,155,181,113,64,
75,46,55,65,48,60,36,26,32,16,22,13,39,107,164,156,66,30,12,11,11,40,71,31,16,13,14,22,17,17,15,15,14,15,13,18,19,12,13,17,26,22,18,17,16,18,14,16,13,18,18,19,17,13,19,49,83,75,70,54,53,39,22,51,68,78,33,72,193,238,250,247,241,240,236,245,251,190,201,245,234,247,239,242,243,251,218,77,116,44,33,143,28,70,84,21,36,21,27,30,25,32,54,76,100,133,134,124,104,71,54,13,33,207,242,249,250,229,234,239,241,241,238,238,236,237,242,236,212,223,244,252,252,252,252,251,249,246,244,218,205,231,227,202,171,171,202,215,223,236,243,217,144,144,171,156,148,130,78,54,133,228,247,249,249,248,246,248,240,229,231,245,244,243,234,181,147,151,134,124,128,132,151,133,156,207,107,78,49,9,116,238,237,228,218,217,218,212,215,212,211,214,214,214,218,217,216,219,220,222,222,224,222,220,223,222,220,222,218,222,222,222,222,220,222,217,218,219,220,220,220,223,227,230,236,239,233,228,222,221,220,219,227,231,214,217,237,209,151,114,61,9,14,117,175,196,243,248,237,232,227,221,221,217,217,218,219,223,220,220,221,217,217,219,219,217,217,219,217,219,219,217,217,218,219,219,218,218,220,216,218,220,217,217,216,218,219,217,218,220,218,218,219,218,215,217,220,218,215,214,214,215,215,214,214,212,213,213,213,215,212,217,114,1,1,3,7,7,7,10,9,10,10,9,9,225,227,229,229,229,227,226,229,227,229,232,230,233,231,230,233,232,232,234,232,234,232,232,234,234,234,231,232,234,232,233,235,235,235,235,236,238,236,237,240,238,239,237,240,239,240,242,237,240,239,240,240,239,240,240,240,241,241,241,240,240,241,241,241,241,239,240,240,241,240,239,240,241,240,240,241,240,239,239,237,239,238,238,239,239,239,238,239,239,239,239,237,239,238,236,235,235,236,236,237,237,235,236,234,234,234,234,235,235,234,235,236,236,237,237,236,235,235,236,236,238,236,236,237,237,236,234,234,235,236,237,237,236,237,239,239,239,240,240,240,242,241,241,242,241,241,241,243,242,241,242,242,241,239,237,236,241,241,244,246,244,243,241,243,242,241,240,240,240,241,242,241,242,242,243,244,244,242,243,242,242,240,239,238,239,240,239,238,235,235,236,237,236,237
,238,234,236,235,232,232,230,229,229,229,232,235,237,239,239,239,239,239,239,242,239,240,239,239,237,237,236,235,237,233,234,233,231,235,235,234,236,234,237,235,238,238,238,239,240,239,239,240,241,242,242,240,241,241,242,243,245,245,243,243,245,245,245,246,246,245,242,242,242,242,241,241,241,239,241,239,238,239,238,237,238,242,240,218,216,237,240,238,238,238,240,238,222,226,241,241,237,232,239,241,240,241,239,240,240,241,242,239,240,239,237,239,238,239,238,240,240,237,237,235,237,236,237,237,238,237,236,238,236,238,239,240,238,240,237,228,234,239,237,237,239,241,243,250,215,232,229,169,202,213,215,236,252,241,252,244,239,238,201,242,244,252,210,87,63,53,59,6,107,239,247,250,234,229,245,253,200,73,62,66,64,59,61,55,44,50,39,99,188,178,122,71,27,55,68,54,61,43,32,34,24,35,22,31,20,63,154,97,68,42,23,13,34,76,30,12,12,14,16,17,18,14,17,17,15,19,16,16,15,17,32,33,27,20,12,17,19,14,17,16,15,20,21,15,22,21,68,101,77,62,43,54,33,39,69,66,61,21,77,207,246,248,248,239,240,235,247,241,180,212,244,236,247,237,240,243,251,206,62,105,50,46,137,22,76,73,21,36,8,22,22,28,35,30,53,101,110,77,59,53,48,57,30,79,222,244,250,250,228,234,239,239,244,239,236,236,236,239,248,229,238,249,252,252,252,252,252,240,251,249,223,205,225,229,208,180,164,196,214,224,207,178,193,183,183,183,156,137,109,99,128,227,252,252,250,247,251,247,243,231,232,244,249,247,253,226,159,148,154,151,150,141,141,155,173,188,129,61,59,35,12,150,241,231,234,217,218,216,214,214,211,212,210,216,214,221,228,230,243,249,248,240,234,232,230,227,225,223,220,220,222,221,222,221,223,223,218,219,219,221,222,225,234,239,248,248,251,249,228,226,224,223,230,236,240,214,197,167,124,83,28,6,80,198,217,215,235,240,238,230,227,223,218,219,218,220,219,218,218,219,220,219,220,220,218,217,217,219,218,217,220,221,218,217,218,218,217,217,218,218,220,219,217,218,218,218,221,221,218,218,219,215,216,219,217,217,217,215,215,214,216,215,214,214,213,215,214,214,214,214,216,214,216,113,1,1,4,7,7,7,10,8,9,10,9,9,225,229,226,225,229,226,226,227,2
28,227,229,230,230,229,231,230,230,233,232,231,232,235,233,234,235,235,236,234,233,236,237,235,236,236,238,240,237,237,237,238,238,237,239,239,240,240,240,240,238,239,240,239,239,238,237,239,239,240,241,239,241,241,239,240,239,238,238,238,240,239,239,239,238,240,238,239,241,239,239,239,240,240,239,238,237,238,239,237,234,237,237,237,239,236,236,236,235,236,236,235,237,236,237,235,235,235,234,234,234,236,234,235,236,234,236,233,234,235,232,235,233,235,237,236,237,235,234,237,237,237,237,237,237,239,237,237,239,238,240,241,242,242,242,241,242,242,241,242,240,239,238,235,234,234,235,234,236,239,241,242,238,235,234,232,230,231,233,235,239,240,241,242,243,244,244,245,244,243,244,242,241,241,239,239,236,237,238,238,237,236,236,235,235,237,235,235,235,236,236,235,231,231,230,231,232,233,238,239,241,241,242,241,242,242,241,241,239,240,239,235,234,233,232,232,233,229,231,231,232,233,234,233,232,235,235,236,236,235,237,236,236,237,238,238,239,238,238,239,242,242,243,244,245,245,244,243,244,244,244,241,242,241,241,244,242,243,241,241,241,242,239,237,237,237,238,242,235,216,222,240,240,239,239,237,241,234,220,230,240,239,232,229,238,240,241,240,239,239,239,239,238,239,238,237,239,238,239,239,240,241,239,239,239,237,236,236,238,237,237,237,239,239,238,239,238,239,239,239,239,227,229,236,237,240,240,239,245,243,209,216,199,190,233,236,239,246,252,223,221,239,238,233,200,239,243,252,212,100,81,81,72,9,106,238,245,250,232,229,244,252,231,114,46,5,17,53,66,54,40,48,44,54,115,171,180,108,32,30,57,53,62,41,33,33,33,39,30,35,30,19,63,119,157,112,72,33,29,74,35,14,14,13,20,18,17,17,17,16,13,18,15,12,19,26,39,37,28,19,21,22,21,27,16,22,27,27,27,29,30,40,63,83,86,67,52,57,42,58,92,87,54,12,107,232,250,248,242,236,239,233,249,231,174,222,240,232,244,236,238,244,251,193,54,90,51,57,125,22,84,65,10,32,10,25,19,24,30,31,26,62,85,56,45,33,48,43,41,117,230,246,249,249,237,239,247,252,251,247,247,242,239,241,253,237,212,206,234,240,176,164,144,157,236,250,232,205,227,238,218,191,168,187,216,195,
123,75,98,136,137,127,107,81,68,126,198,234,250,250,252,249,250,246,237,228,237,247,248,234,234,191,151,148,157,150,148,139,150,181,155,125,69,33,51,15,34,208,247,233,238,220,220,216,214,214,216,215,217,215,218,231,236,247,246,246,251,251,252,251,249,243,231,228,225,223,223,219,219,221,220,222,221,222,222,222,223,232,241,243,235,222,227,246,244,226,227,232,243,246,232,175,136,98,53,11,26,126,229,242,222,203,229,237,232,226,221,219,214,220,221,221,220,218,221,217,219,221,218,220,219,220,219,218,220,219,220,219,220,220,218,218,220,219,218,219,217,218,217,218,220,217,218,219,218,217,215,217,217,215,214,215,217,214,215,215,214,215,215,216,215,214,214,214,216,214,215,213,213,113,2,1,2,6,8,7,10,9,9,10,9,10,230,229,229,229,229,229,227,229,227,226,230,227,229,229,229,234,231,232,231,227,232,231,232,233,230,234,236,234,237,236,236,239,238,238,240,239,240,239,239,239,238,239,237,239,240,240,240,238,239,237,237,237,237,239,240,239,239,240,239,240,242,241,238,239,240,240,239,240,240,238,240,240,239,239,239,241,239,239,237,239,241,240,240,239,239,238,238,238,237,239,238,237,237,238,237,238,239,238,237,237,236,234,235,234,236,236,236,237,236,235,234,235,236,236,234,235,234,234,234,233,235,234,237,237,238,237,237,238,239,239,240,240,241,239,239,240,239,241,240,240,240,240,241,240,242,241,240,242,240,237,237,236,237,238,238,238,237,235,235,234,230,230,228,230,232,234,236,239,241,244,245,244,245,244,245,245,244,245,242,241,240,240,241,237,236,235,235,239,239,239,237,236,235,237,236,236,237,236,234,234,233,232,232,233,234,236,239,241,242,244,245,246,243,242,243,242,242,241,238,237,233,233,234,231,232,230,232,235,234,236,236,234,234,235,236,235,235,235,236,235,236,236,236,236,237,238,237,240,241,242,243,243,244,245,245,246,246,245,245,242,241,241,243,245,246,245,243,243,241,241,240,239,239,239,239,242,229,214,231,242,240,240,236,240,239,224,223,237,240,236,228,234,240,239,241,239,238,239,239,239,239,239,240,238,237,239,237,240,241,242,242,240,242,240,239,239,239,238,240,239,241,241,23
8,240,239,237,239,240,242,229,229,238,240,241,240,241,245,240,203,223,215,216,252,245,237,248,236,193,226,238,241,225,192,241,241,252,227,146,142,134,119,42,139,243,248,251,234,229,245,251,241,128,27,2,12,57,62,45,48,45,28,35,39,91,184,152,53,24,41,50,55,36,17,23,23,33,17,35,15,60,189,225,242,214,193,126,110,118,31,8,11,12,14,14,17,19,17,14,16,22,20,16,34,37,33,33,37,37,32,35,35,39,39,34,39,40,41,39,44,53,75,79,69,65,67,64,56,78,92,88,45,59,174,245,251,244,243,236,239,235,252,213,177,234,234,237,241,235,237,248,251,175,40,59,65,69,108,29,87,56,12,32,10,24,20,16,24,34,28,38,73,87,70,48,76,51,36,138,230,246,250,251,246,250,252,252,253,253,251,244,240,244,253,208,121,43,30,79,28,53,89,74,196,241,249,248,249,250,231,204,173,185,216,208,112,17,17,40,49,37,44,27,29,122,229,245,247,247,252,249,249,240,233,239,247,252,252,252,235,161,149,156,159,162,142,139,150,156,126,61,25,24,33,2,91,239,252,252,238,225,229,221,218,213,216,218,216,220,225,238,228,179,150,152,198,237,250,250,252,252,241,226,226,225,222,220,222,222,221,221,223,222,221,222,225,235,228,168,123,86,112,204,228,232,240,248,250,238,166,114,77,24,5,83,205,237,250,217,183,218,231,231,223,218,220,220,220,220,219,220,221,221,222,220,220,221,220,219,219,221,221,220,220,219,220,220,220,221,220,221,218,220,221,218,220,220,218,220,220,220,219,220,220,219,218,218,218,214,217,220,218,217,217,218,216,216,215,216,215,215,214,214,214,216,217,213,214,112,2,1,3,7,8,6,9,9,9,10,9,10,229,230,228,227,231,229,230,229,230,229,229,230,229,232,233,232,232,235,234,232,233,232,231,232,232,231,231,232,232,235,236,234,235,236,237,239,240,238,239,240,239,239,239,239,240,236,236,237,234,233,234,237,236,235,237,237,238,238,238,238,239,239,238,237,239,238,239,240,237,239,238,239,239,240,237,236,237,237,238,236,239,237,237,239,240,239,236,237,237,239,236,236,239,238,238,237,238,236,236,236,234,233,232,233,236,236,236,234,235,235,235,235,234,232,234,234,232,234,232,234,234,235,236,235,236,237,236,239,238,238,239,237,239,240,239,239,241,239,240,2
41,241,241,241,240,241,240,242,242,241,240,239,240,240,238,237,236,235,232,229,230,229,226,228,232,234,237,240,241,243,245,247,247,247,246,245,245,244,241,240,239,237,237,238,236,235,235,236,238,236,235,237,236,234,235,233,233,234,230,232,232,232,233,233,234,235,238,240,240,242,244,243,244,245,245,242,243,242,242,238,238,237,234,233,232,231,230,232,232,234,235,232,234,231,231,232,232,234,233,232,234,233,231,233,236,235,232,235,237,239,242,243,244,244,244,245,244,245,244,243,242,242,242,244,244,244,244,243,242,240,241,239,238,240,239,238,241,222,214,233,239,239,238,238,240,234,217,223,238,239,231,225,235,238,237,239,237,237,238,237,238,238,239,239,239,240,239,239,240,241,241,242,243,242,240,241,239,241,241,239,240,238,237,237,238,238,239,239,239,241,235,232,239,242,241,243,242,249,237,215,241,223,232,242,222,211,203,230,215,241,245,237,214,189,240,239,252,231,160,147,140,122,53,171,244,247,250,227,229,245,249,236,129,40,1,11,49,66,57,41,30,22,30,42,35,121,180,121,53,44,44,53,28,19,20,18,19,24,30,35,190,252,252,250,179,130,162,178,176,63,16,7,5,14,13,17,14,17,14,17,22,21,33,43,37,24,45,70,60,71,72,71,65,45,46,42,49,50,47,54,65,77,79,72,70,70,60,61,74,84,64,33,57,182,246,250,246,238,234,236,234,250,197,182,236,232,240,239,239,239,250,249,173,74,46,73,74,69,38,93,45,15,32,8,21,16,21,18,29,28,86,171,185,145,78,89,84,50,124,217,242,251,251,250,239,217,235,247,248,242,229,244,249,250,195,82,23,3,11,10,72,96,24,137,246,251,251,253,253,249,216,186,179,214,227,151,59,41,54,46,54,34,17,24,52,158,240,250,250,250,247,245,238,238,239,245,250,234,229,187,149,153,153,160,158,138,141,146,139,85,27,29,15,31,8,92,226,246,246,249,229,228,216,221,215,214,214,216,218,230,237,166,89,81,48,18,109,119,152,199,229,230,225,226,226,225,224,223,222,220,224,221,220,220,222,223,231,186,93,36,17,11,69,184,232,251,251,208,148,101,56,4,27,153,232,250,250,220,183,202,225,229,220,219,220,217,220,221,220,218,219,221,220,217,217,219,217,219,221,219,218,217,217,221,220,222,221,218,220,217,218,220,218,218
,220,218,218,219,217,217,217,220,219,216,220,219,216,217,219,219,218,218,214,218,215,214,218,216,214,214,216,213,213,217,214,214,212,215,114,1,1,4,7,7,7,10,9,10,11,10,10,226,230,228,230,232,229,227,230,230,231,233,232,232,231,232,232,233,234,232,233,232,231,233,233,234,232,232,232,235,234,234,235,233,235,236,237,238,238,239,237,238,239,239,239,237,237,237,236,237,236,235,235,237,238,237,238,237,239,238,240,240,239,241,239,237,238,238,239,240,238,239,239,238,237,237,238,237,239,237,239,239,237,239,237,239,237,237,237,237,236,235,237,235,237,236,236,237,236,237,237,236,236,235,235,236,232,235,234,234,234,234,234,232,234,234,234,236,235,234,235,234,235,235,235,235,237,240,236,237,237,237,237,237,239,240,241,240,241,240,240,239,241,242,241,243,241,241,241,243,243,242,240,241,237,235,235,231,229,231,230,229,231,231,234,236,237,239,240,243,246,247,247,245,246,246,243,240,238,239,236,235,236,234,234,235,235,237,237,236,235,236,235,233,235,232,232,233,232,235,236,233,234,236,235,237,240,242,244,243,245,244,244,245,244,244,243,244,244,244,243,239,236,234,231,230,229,230,233,233,232,232,230,230,230,231,231,231,232,231,232,231,230,233,231,233,232,235,239,240,244,245,245,244,244,244,244,243,244,243,241,244,244,245,246,244,244,243,243,241,241,242,241,241,239,240,240,220,223,240,240,239,238,237,240,226,215,231,237,235,226,228,236,237,238,239,237,236,237,239,239,240,240,241,240,239,240,241,242,241,244,243,243,243,242,242,242,241,240,240,238,239,240,239,240,239,237,239,239,244,237,233,241,243,240,242,242,249,237,217,244,207,190,220,203,188,221,243,229,251,236,237,204,185,241,236,252,223,147,133,124,105,49,169,244,248,251,228,228,244,249,238,188,119,35,15,31,47,44,26,18,20,21,36,45,68,130,179,137,64,49,49,27,20,17,14,24,31,150,250,250,250,250,224,105,19,2,110,203,128,70,29,9,13,10,14,15,11,20,15,20,24,57,125,84,26,30,59,79,85,89,80,59,39,33,31,35,42,42,55,62,75,86,69,62,62,46,61,77,72,50,9,66,211,246,250,240,234,234,233,235,246,184,191,241,238,251,252,252,252,252,250,200,117,41,86,8
0,34,47,93,36,17,24,15,44,33,27,17,23,38,115,184,160,118,49,60,100,96,133,203,240,252,252,211,135,74,33,55,176,229,238,252,252,250,146,73,29,2,30,16,8,12,12,132,187,155,133,175,234,250,247,188,172,207,238,191,94,78,66,57,63,39,28,29,1,95,235,250,250,249,245,247,242,246,246,247,252,253,202,150,142,155,153,162,158,136,141,145,123,48,9,16,22,20,27,28,14,128,222,250,250,224,222,221,221,223,218,227,234,247,250,171,93,104,57,7,7,8,10,27,147,219,225,235,223,228,228,233,237,239,235,228,225,224,224,222,227,183,89,45,29,8,91,211,238,250,177,123,76,27,5,97,212,245,246,250,226,181,187,222,227,220,220,223,222,221,221,221,220,219,219,220,219,219,218,217,219,220,220,220,220,218,221,219,217,219,218,219,220,218,221,218,219,223,218,221,219,217,218,217,217,219,219,217,219,218,218,218,219,220,217,218,217,217,215,215,217,215,215,214,216,214,213,216,217,216,212,216,113,2,1,4,7,7,8,10,8,9,11,10,10,229,228,229,229,229,226,227,229,229,230,232,232,232,233,231,231,231,230,230,229,231,231,233,235,232,232,233,235,235,234,236,235,234,237,237,238,238,237,239,237,236,237,235,238,237,234,237,236,236,237,234,237,237,237,238,237,239,237,238,237,237,238,239,238,238,237,236,238,237,237,237,239,238,238,237,236,236,236,237,236,239,237,236,237,237,236,236,237,235,234,236,237,235,235,236,234,236,237,237,236,237,235,233,235,233,232,232,233,231,232,234,232,233,233,235,233,232,234,232,234,235,234,235,235,236,234,235,234,234,234,236,234,236,238,237,240,241,239,241,240,239,239,239,239,242,241,242,244,242,240,238,237,234,232,232,232,233,233,233,232,233,234,235,236,233,234,237,240,241,244,244,241,240,239,237,235,236,235,233,233,232,234,234,233,234,235,237,237,235,236,234,233,233,232,233,233,234,233,236,235,234,234,235,237,238,239,241,242,243,243,244,244,244,246,244,244,245,245,243,243,239,235,232,232,230,230,233,232,234,231,231,231,231,232,232,230,231,231,230,229,229,231,229,231,234,235,237,238,241,244,245,246,245,244,244,244,244,244,244,244,243,244,245,244,246,244,244,242,241,241,241,240,239,241,244,237,222,231,
243,240,240,236,240,238,220,223,237,239,232,224,234,239,236,239,240,238,239,242,240,238,241,242,243,243,244,241,241,243,241,243,243,242,244,244,243,242,243,242,242,241,242,240,240,242,242,240,240,240,241,239,235,240,244,242,244,242,248,226,214,214,179,211,231,222,225,244,252,201,225,237,235,202,185,238,235,252,212,151,139,128,102,39,170,243,246,251,227,232,243,244,244,253,227,118,35,1,3,25,22,16,20,17,32,48,60,50,121,184,130,87,39,23,23,10,19,18,91,235,248,232,198,121,103,45,13,21,8,94,150,85,104,49,32,11,13,15,14,14,13,23,22,157,249,137,29,11,67,111,83,61,48,36,22,21,19,24,29,37,55,63,85,88,71,53,35,34,50,55,46,29,103,173,209,238,232,237,236,236,233,243,244,186,218,250,252,252,252,252,252,252,248,199,133,32,84,77,12,52,86,29,25,10,97,157,62,25,12,23,65,135,168,113,77,39,45,131,130,137,211,239,251,249,164,87,26,5,18,122,222,249,253,253,165,87,31,7,9,7,18,128,145,89,60,12,6,14,17,81,193,235,207,165,193,241,214,129,101,64,34,39,37,30,107,68,119,234,250,251,250,247,249,243,246,245,248,234,234,176,147,147,153,155,167,155,139,138,134,137,99,96,41,40,33,28,27,27,18,95,212,234,246,219,223,222,225,230,239,249,249,249,170,84,12,60,198,176,62,18,21,136,242,237,232,230,237,245,250,250,249,249,234,228,229,228,236,249,226,167,127,66,69,194,246,208,164,109,64,16,28,160,236,243,252,249,231,186,174,212,224,224,223,223,222,220,224,221,219,221,219,220,221,221,220,219,221,222,221,219,220,221,223,222,220,220,217,218,221,222,222,220,221,219,218,220,220,221,218,218,220,218,217,218,219,219,218,217,218,219,218,220,219,215,219,219,217,217,217,216,216,217,215,215,217,216,216,216,217,112,2,1,3,8,8,6,10,9,10,10,10,10,228,232,229,229,229,227,231,231,231,231,231,233,233,233,234,232,232,233,232,233,232,232,233,231,232,231,234,233,233,234,235,238,237,239,239,239,238,237,237,234,235,234,234,235,236,235,236,234,235,236,237,238,237,236,237,240,239,235,235,237,237,235,235,235,237,238,237,238,237,236,236,236,236,236,236,235,237,238,236,237,235,235,237,236,236,234,237,236,237,236,235,235,233,235,237,236
,235,234,236,235,233,236,232,234,234,233,231,231,232,231,232,231,231,233,233,229,232,232,230,232,231,232,234,234,233,233,234,232,232,232,235,236,237,240,239,240,240,238,239,237,239,243,241,242,241,239,240,239,239,236,235,233,231,232,231,233,233,236,240,236,236,235,235,234,234,236,237,240,237,239,236,235,232,229,231,230,231,232,232,231,231,233,230,233,233,232,235,234,236,234,234,234,232,233,233,236,235,234,235,236,236,237,236,238,240,239,238,239,239,239,242,243,245,245,245,245,244,244,241,237,232,229,229,231,231,230,232,233,232,233,232,230,230,229,232,232,230,230,229,232,232,230,233,232,237,237,236,241,243,244,244,244,245,245,244,245,244,244,245,245,245,242,242,244,243,243,242,242,242,242,244,242,244,244,246,236,224,236,241,239,240,239,240,231,217,229,241,237,227,226,237,239,239,239,240,241,244,245,245,245,245,245,246,245,245,245,241,242,243,244,242,242,241,242,242,243,242,242,243,242,244,244,243,241,240,240,239,240,240,240,237,241,245,241,243,245,247,221,207,222,211,244,252,235,235,252,215,182,230,233,237,196,191,243,236,253,206,157,150,139,109,43,177,243,247,251,228,236,245,244,235,252,251,174,136,54,1,7,15,14,16,15,27,48,67,38,30,132,187,139,50,12,10,12,24,51,40,65,64,39,90,51,37,45,15,21,12,63,62,71,160,148,92,54,25,19,19,17,7,23,20,125,210,89,33,15,56,102,66,48,34,15,20,17,17,29,18,30,63,81,96,73,49,53,29,31,27,51,92,130,172,134,174,226,232,243,238,246,252,252,252,229,251,251,252,224,161,143,171,241,244,194,127,15,82,91,12,60,67,27,29,5,161,193,53,26,12,35,97,174,195,150,117,91,75,97,107,141,236,248,251,251,179,77,29,5,23,194,251,251,248,168,78,21,1,15,2,58,211,249,235,97,7,5,8,13,21,1,54,188,204,178,188,237,233,169,144,81,30,35,11,66,214,165,151,240,243,251,250,248,249,240,243,248,251,251,214,158,150,153,153,165,167,148,138,135,140,169,181,216,185,76,14,13,35,29,30,7,74,211,248,243,223,219,234,237,249,240,182,134,87,43,42,155,241,241,191,59,32,154,241,241,240,238,242,240,245,235,229,244,249,237,240,239,249,249,250,223,142,74,15,110,167,131,109,48,14,74,190,241,
250,250,242,235,196,171,196,222,226,221,223,223,222,221,219,220,220,219,219,218,219,220,222,221,220,219,218,218,219,221,220,221,221,219,221,218,220,220,220,220,222,222,220,218,219,218,218,220,221,219,218,219,219,220,218,220,219,219,219,218,220,217,218,219,218,217,217,221,214,214,214,215,217,215,216,213,215,113,2,1,2,7,8,7,10,9,10,10,10,10,227,229,230,229,229,228,230,232,230,230,232,230,234,234,230,232,232,231,234,233,234,232,231,232,229,230,232,230,233,235,234,235,233,237,236,237,237,232,234,233,233,235,234,236,235,232,235,234,234,235,233,235,236,235,239,237,236,237,236,236,237,234,235,236,236,237,235,236,235,236,235,234,235,235,235,236,234,236,237,235,235,235,234,232,236,235,233,235,234,232,234,234,234,234,234,235,234,234,233,232,234,233,232,235,235,233,233,231,232,231,230,230,230,232,232,232,232,231,230,231,231,232,233,231,230,230,232,231,233,234,232,235,237,240,237,237,236,237,238,236,239,237,239,239,236,234,234,234,234,235,232,232,234,232,231,234,234,235,236,237,239,237,237,238,237,237,239,239,237,235,233,232,231,231,231,231,233,231,233,234,231,232,232,230,230,232,233,232,230,232,233,232,232,230,232,233,233,234,235,236,238,240,241,240,239,238,238,240,240,240,241,242,243,243,245,242,243,242,236,232,229,227,227,227,228,228,229,229,231,233,232,231,228,230,232,233,234,231,233,235,234,233,232,235,238,238,240,242,245,245,245,244,244,244,245,246,245,246,246,245,244,244,244,242,244,241,243,243,243,243,241,244,244,245,246,231,227,241,241,241,240,239,240,224,220,235,240,234,225,231,238,239,241,241,245,247,246,246,245,245,245,244,244,244,245,245,243,243,243,244,242,241,240,242,244,242,243,242,242,244,245,244,244,241,241,241,241,242,241,243,238,241,244,242,242,245,246,220,230,233,222,249,239,212,188,212,222,211,251,238,231,196,197,242,239,252,200,155,123,125,105,44,185,243,249,251,226,236,243,241,223,248,233,194,232,164,145,69,3,3,12,14,22,53,72,42,23,73,162,195,93,24,8,20,46,46,35,29,27,38,96,82,44,30,12,25,19,65,72,32,36,113,185,135,84,51,30,19,14,27,29,31,47,50,38,15,17,
,70,210,241,241,229,144,104,83,69,33,21,15,14,10,19,9,92,234,227,143,101,45,58,196,235,235,241,184,118,75,51,31,39,8,72,203,209,229,234,225,228,222,226,223,224,224,222,225,225,224,221,222,227,225,226,225,224,224,225,227,224,224,222,228,225,222,226,223,224,222,222,223,219,221,221,221,223,220,221,116,3,1,5,8,9,9,11,9,11,12,11,11,219,220,222,218,223,220,221,221,223,222,220,222,222,221,221,222,222,223,224,227,226,229,229,226,229,227,230,229,229,231,229,229,226,228,229,225,226,224,221,221,218,216,216,216,213,214,216,214,217,218,217,216,218,217,215,214,217,219,217,219,217,216,216,215,216,214,216,218,215,218,219,216,220,216,216,217,218,219,219,220,219,221,220,223,225,222,224,225,225,226,228,226,226,225,224,224,225,227,226,229,228,229,230,227,229,229,228,227,229,229,230,230,231,229,226,224,225,223,221,221,222,222,222,220,220,221,222,223,224,226,227,226,227,231,230,232,229,230,231,229,230,229,230,232,231,229,226,224,224,223,223,223,221,221,222,222,222,221,222,225,222,222,223,221,222,224,225,222,221,219,219,221,219,218,218,220,217,219,221,222,222,221,221,217,217,216,218,217,220,218,215,219,220,221,220,220,219,218,219,219,221,221,220,220,225,224,224,223,223,225,227,227,226,229,228,229,230,229,232,235,236,236,235,236,237,236,237,236,235,236,235,235,232,232,231,229,229,229,227,227,228,226,229,228,230,232,232,235,234,234,235,239,241,240,240,241,240,240,238,239,240,241,243,243,244,243,243,244,244,244,241,227,225,238,241,241,246,239,229,240,246,243,238,241,245,245,244,244,245,246,247,247,245,245,247,247,247,246,245,246,245,244,244,244,244,243,244,244,244,243,244,245,245,245,244,244,245,245,245,245,245,244,245,243,242,242,240,239,236,236,235,235,237,235,224,214,199,196,211,240,252,203,218,230,222,206,196,229,248,214,201,218,223,237,214,252,194,149,134,108,70,53,210,244,250,242,219,238,246,238,181,224,250,250,248,232,187,144,167,182,187,160,181,237,247,225,246,252,220,228,187,197,239,238,252,239,179,146,141,163,192,171,206,146,151,136,101,37,37,111,58,75,61,36,36,35,38,46,140,221,236
,224,186,210,240,211,233,234,122,22,41,139,196,187,217,188,167,201,188,195,159,99,182,232,194,252,163,83,198,235,242,236,243,250,203,127,53,3,3,25,57,145,198,216,203,163,110,60,37,15,41,60,53,64,61,66,65,69,75,76,80,84,86,91,99,91,91,93,86,76,63,63,57,50,48,53,55,64,73,85,103,117,137,141,139,134,127,107,80,67,53,44,39,28,29,28,31,27,20,23,25,25,19,14,19,26,25,30,28,31,35,19,17,21,23,36,34,35,25,18,25,33,39,38,19,21,14,47,111,139,155,158,143,150,148,150,145,148,160,155,141,132,141,157,158,101,40,23,29,46,49,29,21,13,15,16,16,24,24,20,20,24,22,21,20,16,19,19,22,27,31,43,44,46,60,69,80,91,94,89,83,76,67,60,57,53,48,49,54,60,68,75,69,74,67,54,59,56,180,237,248,211,130,75,39,34,32,38,23,22,142,198,148,99,79,70,67,66,55,37,11,15,12,125,234,241,227,135,95,55,19,100,198,228,248,234,165,104,64,30,46,10,77,206,208,232,235,224,228,222,227,224,224,224,225,226,225,226,225,224,226,225,224,227,227,225,226,225,225,224,226,226,225,225,224,223,222,223,221,218,220,222,219,220,222,217,218,115,4,1,4,8,10,9,10,10,11,11,11,12,220,224,220,220,221,216,218,220,220,220,221,220,219,218,222,222,222,224,224,228,227,226,229,227,228,229,228,227,227,226,227,226,222,224,223,220,221,221,222,222,219,219,215,216,219,217,214,214,217,215,217,217,215,216,213,217,217,215,218,216,215,214,215,215,214,215,216,215,217,216,217,219,217,218,221,220,218,218,218,219,222,221,221,223,222,224,224,227,227,224,223,225,225,225,226,224,229,228,229,227,228,230,228,229,229,227,229,230,229,231,227,230,230,228,228,227,224,221,222,220,223,224,223,222,220,219,221,220,220,224,223,223,227,227,228,231,229,233,232,233,230,230,231,230,230,230,226,224,222,220,223,219,218,222,219,220,222,224,224,223,224,223,223,222,222,224,222,220,223,222,223,221,222,221,221,220,220,223,222,221,224,222,219,220,218,218,217,218,219,217,220,218,219,218,219,220,217,219,218,221,223,223,222,222,225,224,224,224,225,225,226,225,224,226,227,228,229,230,231,232,234,232,234,234,234,236,236,235,235,234,234,232,230,229,229,227,227,229,227,227,227,227,230,230,230,2
34,234,236,237,235,237,237,240,239,240,240,237,239,237,238,241,240,243,243,243,242,243,245,245,244,240,225,229,242,241,244,245,233,232,242,245,241,238,244,245,244,246,245,246,247,246,246,246,246,247,245,245,247,246,246,246,245,244,244,245,242,244,244,243,245,245,245,246,246,245,245,245,245,245,245,244,244,244,245,243,242,240,237,237,235,235,237,239,240,218,208,208,217,228,253,194,78,131,198,236,232,223,243,247,209,203,219,222,232,215,252,188,146,128,101,63,52,214,244,250,242,220,246,252,239,194,229,230,205,189,189,200,185,238,248,241,173,159,232,247,226,238,251,223,196,145,187,241,247,252,226,176,165,160,181,206,154,193,133,165,185,156,76,55,125,66,89,65,45,39,41,30,61,216,246,219,198,177,210,239,208,234,237,130,66,54,103,165,219,252,210,177,130,173,237,167,165,200,184,153,210,122,69,200,228,244,240,239,249,252,243,210,134,73,113,54,7,14,5,10,8,10,17,45,67,74,92,105,113,120,113,115,110,97,103,100,82,67,61,59,55,51,53,52,53,62,65,80,90,108,129,136,144,137,131,110,94,75,52,43,35,32,27,23,29,28,19,15,12,15,25,25,23,18,25,29,29,26,14,24,40,35,37,37,33,35,26,19,30,41,44,35,19,19,15,16,22,31,53,33,38,51,97,196,181,161,161,155,158,151,150,148,158,158,149,135,133,143,164,155,82,29,12,16,40,60,54,36,27,14,19,31,27,25,18,13,19,23,23,28,22,16,13,17,24,19,22,23,22,24,27,28,35,47,55,69,78,81,87,90,83,80,76,66,64,57,55,51,57,61,69,59,50,171,208,176,159,99,70,43,31,34,33,36,39,16,31,76,126,137,71,74,100,96,153,196,214,245,246,253,253,236,173,104,36,5,12,15,55,198,228,199,128,71,45,46,8,82,206,211,233,234,225,226,226,226,225,226,223,227,227,226,227,227,228,229,226,229,228,228,227,227,228,226,229,225,225,223,226,227,221,225,222,222,219,219,223,221,222,225,220,218,115,4,1,4,9,10,9,10,10,12,12,11,11,214,220,218,218,219,217,216,217,220,219,218,217,218,221,220,221,220,222,225,224,224,225,226,226,227,224,226,223,223,227,223,224,222,221,222,219,220,216,218,219,218,214,215,218,214,215,215,213,215,212,215,214,213,215,212,213,213,213,213,215,210,212,215,212,213,213,213,214,213,212,215,213,21
5,215,218,216,217,217,218,220,219,223,222,223,224,222,224,224,225,223,223,223,223,224,225,228,226,227,227,228,227,227,228,226,226,226,226,229,228,227,227,227,227,224,225,224,222,219,220,221,222,224,223,221,218,221,222,219,220,222,223,224,224,224,225,226,226,228,228,227,228,228,227,227,228,229,227,222,222,219,216,218,220,220,221,222,220,223,224,223,223,222,223,221,225,222,220,219,219,221,222,224,221,220,222,221,220,216,218,220,218,220,219,217,217,217,219,217,217,218,217,217,215,215,215,214,217,217,220,218,220,222,221,221,222,222,222,220,221,222,222,222,223,225,223,223,226,226,226,226,228,227,228,231,232,233,234,231,232,235,234,232,229,229,228,226,227,226,227,226,226,226,226,229,233,233,234,234,233,235,236,234,236,236,236,238,236,237,235,236,238,239,241,240,243,242,243,245,244,244,235,221,231,243,241,245,240,226,236,244,244,237,237,245,244,246,245,246,246,246,246,246,246,245,246,244,244,244,244,244,244,244,244,244,245,243,243,242,242,242,244,246,246,246,244,245,245,246,245,245,243,243,244,242,241,241,238,236,235,234,232,235,237,240,214,208,213,222,217,202,125,13,103,220,251,244,220,235,244,203,206,218,219,229,215,252,193,160,141,110,61,46,214,244,251,248,228,250,252,219,165,179,181,187,210,227,247,214,229,241,250,182,140,228,249,230,230,239,167,179,197,205,245,250,236,197,165,178,160,192,210,155,201,134,182,187,148,83,42,138,81,87,81,50,42,49,24,70,231,246,197,176,177,212,240,208,232,246,169,160,111,88,149,228,250,178,112,103,212,198,110,152,207,177,154,190,71,76,201,214,226,217,229,242,252,252,251,227,177,183,110,54,29,18,19,29,45,46,46,46,50,46,49,53,53,53,52,53,49,49,93,47,53,58,63,67,77,88,103,120,122,129,139,131,110,90,69,55,42,35,28,19,22,14,14,12,14,22,26,29,26,24,17,12,22,27,23,19,25,37,30,32,29,27,39,43,42,45,43,33,25,22,39,46,46,48,34,26,21,16,24,24,41,52,60,87,105,170,210,171,165,158,159,163,158,158,154,163,165,149,134,135,146,164,140,65,24,24,23,44,64,62,60,37,15,28,33,29,18,14,15,13,16,19,30,27,22,18,17,29,27,28,27,29,24,24,22,15,16,21,21,24,34,35,46,59,6
5,78,83,90,98,93,90,79,69,59,57,51,50,36,50,79,77,76,65,60,54,50,48,41,33,37,22,54,101,75,53,42,136,225,243,244,249,249,247,247,244,222,130,47,6,11,14,76,220,235,202,143,91,47,37,6,102,211,219,238,229,225,227,222,226,223,225,226,226,227,227,228,226,228,229,229,226,227,228,225,229,225,226,226,224,225,224,224,222,223,222,222,222,223,219,220,222,222,224,220,220,116,3,1,5,8,9,9,12,10,11,12,12,12,214,217,214,216,219,215,215,216,218,218,219,218,218,221,221,219,221,220,221,223,224,224,224,224,225,227,225,224,225,224,225,227,224,225,224,221,223,218,217,214,213,217,215,212,213,213,213,214,215,214,212,215,214,213,213,214,212,211,214,212,214,212,212,212,212,214,214,213,214,211,213,213,213,214,216,216,217,218,219,217,221,220,221,224,222,224,224,223,225,223,223,224,223,224,226,226,230,228,229,229,227,227,224,227,226,226,224,227,226,224,223,225,222,221,221,221,223,219,222,222,223,223,222,220,219,222,222,220,220,222,220,222,226,223,225,224,222,227,228,227,227,228,227,229,230,227,224,222,223,220,221,224,224,223,221,221,223,225,222,223,226,222,224,222,221,224,223,224,225,225,226,224,224,221,221,221,220,220,219,218,219,218,217,217,217,218,218,219,217,217,219,215,216,217,216,215,215,219,218,216,218,220,220,221,222,221,221,222,220,221,222,221,223,224,224,224,225,226,226,227,229,227,228,229,232,233,232,231,231,232,233,235,232,230,227,224,226,224,226,228,227,225,225,227,228,231,232,233,234,235,238,238,239,237,237,237,240,240,237,237,238,240,240,240,241,243,244,244,245,243,230,221,236,241,240,244,231,225,239,244,241,235,240,244,244,245,245,245,244,245,244,245,245,245,247,245,245,244,244,244,245,245,245,244,244,243,243,245,244,244,245,244,244,244,245,245,245,245,244,245,244,244,242,243,242,239,239,236,236,234,236,236,237,242,214,210,210,198,176,223,142,16,149,237,248,208,190,232,242,203,206,217,220,225,217,252,191,163,145,124,71,45,215,243,250,250,223,228,193,154,151,189,221,234,245,248,252,210,189,201,232,178,149,228,252,210,163,202,182,208,233,223,251,252,201,163,159,162,153,184,197,182,
226,142,186,172,133,72,35,132,83,95,68,53,49,47,24,74,243,243,207,180,184,220,244,212,226,249,179,222,179,79,147,230,217,156,111,142,245,168,59,159,236,202,184,172,92,139,212,188,196,196,208,219,228,245,253,229,187,184,92,58,59,57,62,66,84,84,86,83,88,85,79,77,74,74,76,85,86,91,93,118,135,127,134,122,113,108,90,78,68,47,40,37,27,27,24,22,22,23,23,21,19,12,12,10,14,23,27,30,33,32,18,12,16,19,15,15,37,42,27,27,38,46,50,52,51,45,43,34,23,38,51,52,54,54,45,29,33,48,53,48,51,55,97,131,155,239,216,160,165,159,162,162,160,160,158,170,164,147,140,138,152,169,123,51,38,39,26,36,54,64,71,55,36,33,31,24,24,22,13,17,11,21,32,30,30,19,20,28,24,33,33,30,30,27,24,19,23,24,22,21,20,20,23,18,20,21,27,39,53,70,83,98,114,118,105,86,74,53,49,55,51,57,68,76,75,73,72,68,63,55,48,40,43,44,39,39,27,69,101,100,94,78,113,167,214,227,198,172,160,180,238,242,248,240,177,141,99,40,17,9,139,222,229,237,227,229,226,226,227,224,227,229,227,227,231,230,227,229,230,229,229,230,231,229,227,228,226,227,226,227,225,225,224,225,224,225,229,224,223,223,224,225,226,223,222,116,3,1,5,8,9,9,12,10,11,12,12,12,212,217,213,214,215,213,217,214,217,214,216,217,218,218,217,221,220,220,220,221,222,223,224,222,223,222,225,223,222,225,224,226,224,225,224,223,224,219,219,216,216,216,213,215,212,214,215,213,216,212,214,214,213,218,214,216,214,213,214,215,213,211,213,212,213,214,212,214,214,212,215,213,214,213,215,215,215,218,218,218,216,217,219,220,222,220,223,222,223,222,226,226,224,227,224,226,225,226,228,225,225,227,225,225,226,226,227,227,224,223,224,222,221,222,224,222,224,222,226,224,222,222,220,220,217,217,217,217,218,219,219,222,222,222,224,224,225,226,229,229,228,227,226,228,227,224,223,217,221,222,222,226,222,224,223,222,223,224,223,224,224,224,224,222,224,224,225,224,223,225,224,223,222,220,220,220,221,221,218,220,217,218,220,219,222,218,218,218,217,219,216,215,216,217,218,217,217,215,217,217,218,216,217,219,218,220,220,221,222,223,223,223,223,223,223,222,224,225,227,227,227,225,225,227,226,229,229,230,230,
228,231,232,231,231,226,223,226,225,225,227,227,227,227,227,230,230,231,233,234,235,236,236,237,237,235,237,239,237,238,237,237,237,237,238,240,241,240,242,245,236,223,224,237,241,241,243,224,227,243,242,238,234,244,244,243,244,244,245,244,244,244,244,245,244,244,244,244,244,244,243,243,244,243,243,243,244,244,246,245,244,244,244,244,244,244,245,243,243,243,244,242,243,245,243,242,240,238,237,237,237,237,237,239,244,218,199,192,204,219,252,143,49,169,221,232,215,207,237,244,198,209,218,216,222,219,252,183,151,125,105,71,46,211,243,250,208,152,188,187,183,202,225,249,249,252,241,252,204,177,188,207,164,136,221,223,184,189,231,199,229,248,217,252,240,173,160,170,158,154,186,188,216,245,132,180,163,148,84,19,118,76,90,71,44,44,48,22,76,235,245,221,177,188,221,239,213,221,245,171,230,216,101,146,218,186,130,132,195,252,185,118,199,244,189,218,217,131,188,209,199,211,192,184,185,199,219,232,183,128,78,10,2,6,8,8,10,21,36,48,65,79,78,79,77,77,83,83,78,80,78,61,69,61,59,54,48,46,45,57,77,71,26,15,25,18,20,14,12,15,18,20,32,32,18,15,12,12,16,23,45,44,40,34,22,24,15,15,20,36,46,29,19,39,47,46,52,48,40,28,27,44,54,57,58,51,50,46,33,34,49,58,57,57,56,119,132,181,234,198,160,159,147,165,161,162,165,166,178,165,148,142,145,160,164,103,48,53,49,37,23,25,56,70,68,54,36,34,30,38,33,24,19,17,27,35,33,32,29,16,17,25,32,37,36,29,32,32,27,29,27,34,30,19,17,25,22,15,16,15,24,24,23,30,34,43,60,68,83,91,97,93,82,71,63,59,53,53,58,66,74,78,79,77,76,69,61,62,54,45,38,39,42,42,36,38,27,19,96,199,230,250,250,252,252,243,195,142,107,41,26,14,64,223,232,236,236,227,230,228,226,228,226,227,227,231,232,229,229,230,232,229,229,226,230,230,226,227,226,228,230,228,227,226,226,228,228,228,227,225,226,225,226,226,224,225,223,224,115,3,0,4,9,10,9,10,10,12,11,12,12,212,214,213,212,213,214,213,212,215,214,214,215,214,214,217,214,216,218,220,222,220,221,223,222,220,221,220,221,222,224,225,223,222,225,222,222,225,220,220,219,219,222,219,220,220,218,216,213,214,212,215,214,214,213,214,214,213,215,215,215,21
4,214,214,212,214,212,213,213,214,211,211,216,213,214,216,214,217,216,215,217,219,217,221,222,217,221,222,222,222,221,222,222,225,226,224,225,224,224,224,221,224,223,224,226,224,224,224,225,224,224,222,222,222,223,223,222,225,223,225,224,223,221,219,219,219,216,215,217,217,220,219,222,225,222,222,225,227,227,225,224,226,226,224,225,224,223,221,219,221,223,225,224,224,224,222,222,222,224,225,225,224,222,225,224,225,226,224,226,225,223,221,218,218,220,220,221,220,220,219,217,218,218,221,221,222,221,221,219,219,219,219,219,218,220,217,218,215,216,219,218,221,217,217,218,218,217,222,223,222,221,223,222,222,223,221,223,222,224,224,221,224,225,225,225,226,226,227,228,229,229,231,230,231,230,227,223,225,225,224,226,224,225,228,228,230,229,232,231,232,233,234,233,235,236,236,237,237,237,237,236,236,237,236,239,239,239,238,240,241,231,215,223,239,237,243,236,222,236,245,241,238,238,244,245,244,244,245,245,244,245,244,245,245,244,245,244,244,244,245,244,244,243,244,244,243,244,243,245,245,245,244,244,244,244,243,241,243,244,243,243,243,242,242,242,241,239,239,237,239,238,238,238,239,243,211,197,205,224,237,252,114,55,195,234,251,232,229,249,243,199,211,217,212,223,223,252,188,154,125,108,71,55,203,204,181,184,183,239,226,214,230,227,246,242,240,235,252,212,174,200,226,177,133,212,244,237,226,248,193,207,238,182,218,213,159,174,189,178,196,208,200,229,231,116,164,188,163,105,33,91,72,89,68,47,47,52,20,74,233,241,217,152,171,218,236,214,216,243,173,214,230,113,134,178,131,162,167,221,241,132,162,240,236,175,221,212,147,205,216,214,214,196,201,199,190,193,194,132,95,87,70,81,69,57,46,30,20,16,32,39,32,29,27,23,29,28,22,25,17,25,24,18,36,36,38,32,36,71,119,142,81,24,24,18,15,14,11,14,19,28,24,24,35,34,33,18,13,16,27,45,41,49,50,44,41,31,20,15,24,27,25,24,34,46,45,39,41,35,19,39,55,54,56,58,47,33,30,26,25,46,59,68,54,68,136,134,211,252,177,156,150,150,163,163,165,165,174,186,165,145,146,150,170,164,84,56,76,66,60,36,40,62,67,73,63,42,37,35,34,34,24,19,23,36,30,30,34,33,30,17,24,31
,30,36,35,37,36,32,29,29,41,37,22,16,16,22,18,16,27,36,31,29,27,19,18,14,18,25,33,45,61,69,79,95,93,90,82,72,65,59,61,59,61,72,76,81,77,72,71,69,59,56,61,51,57,47,46,37,20,104,206,227,239,212,172,132,110,67,35,9,74,221,245,245,235,236,232,231,231,230,229,230,231,231,233,233,228,229,232,230,229,229,229,227,229,231,228,226,229,227,229,228,228,229,229,231,229,230,228,226,228,226,228,226,225,225,222,116,5,1,4,9,10,9,10,10,12,12,13,12,210,213,211,211,214,212,213,209,213,213,214,212,213,215,213,214,213,216,218,219,221,219,220,219,221,220,220,221,220,223,222,221,221,224,222,220,222,220,220,220,222,221,219,221,219,219,216,213,214,214,214,214,212,214,214,214,214,213,214,214,212,211,214,211,209,211,211,208,210,210,213,212,211,213,214,212,214,217,216,216,217,217,219,217,220,218,217,219,221,222,224,222,221,221,221,225,223,222,220,219,218,221,220,219,222,221,223,223,220,221,222,220,221,219,220,221,221,220,222,219,217,218,217,218,217,218,217,220,217,219,221,221,222,221,221,220,222,222,223,222,222,224,220,221,221,222,222,219,224,223,222,224,222,222,221,220,221,223,222,221,221,223,224,224,222,223,222,225,224,220,220,217,220,220,223,223,222,221,219,221,218,221,221,220,223,222,223,222,217,221,221,221,219,216,216,214,216,215,215,216,217,217,217,217,216,218,218,219,220,222,222,220,223,222,222,222,224,223,223,225,223,222,223,226,225,225,228,229,230,228,228,228,229,229,228,227,224,222,224,226,223,224,225,225,227,226,228,229,232,234,234,233,233,235,234,235,236,236,235,234,233,235,236,236,237,237,236,236,236,222,211,224,234,237,240,226,222,240,244,238,234,241,245,244,245,243,244,243,243,243,244,242,244,242,242,243,242,243,242,243,244,242,243,243,244,242,242,243,244,243,244,242,242,242,241,242,242,242,242,243,242,243,241,240,241,238,237,238,237,235,237,235,241,239,215,193,209,223,199,228,96,97,239,246,252,233,219,242,238,196,212,215,209,219,231,252,198,178,146,120,95,66,176,209,224,227,216,250,226,212,225,219,240,231,238,228,250,205,169,203,243,218,152,235,251,251,221,178,150,192,209,177,20
6,187,156,187,204,181,208,218,183,200,204,113,182,174,159,147,74,95,68,95,70,48,53,50,18,71,230,245,207,124,171,223,236,215,211,244,165,184,205,127,112,78,98,165,200,217,146,135,199,248,219,150,198,167,126,198,206,214,221,214,216,206,200,214,178,152,183,198,228,241,241,247,245,244,239,217,149,73,55,51,46,50,42,37,35,27,31,36,34,40,60,53,39,16,53,127,161,162,70,23,26,14,11,11,9,21,43,46,39,26,21,37,44,41,28,15,19,19,40,51,47,53,46,44,39,29,23,21,13,24,49,54,46,32,23,22,44,53,55,47,45,56,44,30,12,24,44,56,56,60,51,47,108,147,234,234,159,152,145,150,167,160,161,161,179,185,156,140,149,159,170,142,61,39,56,76,69,38,42,74,71,76,66,37,34,30,36,27,22,16,17,24,23,36,37,35,32,22,21,17,25,40,27,33,35,22,19,29,35,31,25,17,16,12,19,30,41,44,39,32,23,18,14,15,16,17,19,20,25,26,36,42,53,69,81,95,104,103,93,85,83,77,68,66,60,62,63,66,69,69,68,71,77,74,63,63,42,16,24,24,38,53,50,53,64,27,56,189,243,243,250,236,235,235,231,229,228,231,231,231,235,233,232,232,229,231,228,231,231,228,228,228,230,229,230,230,230,229,228,230,229,229,230,228,229,229,227,229,227,224,227,228,226,225,224,116,3,1,5,9,9,9,11,9,12,12,12,12,210,212,210,212,211,210,212,208,212,211,211,211,213,213,214,212,215,216,217,218,220,219,218,219,218,219,218,221,219,218,223,220,220,221,219,219,221,220,218,220,219,218,218,220,216,215,216,214,215,213,215,215,215,215,217,215,214,217,214,215,213,213,214,211,212,208,208,211,211,211,211,212,207,211,213,211,212,213,214,214,215,214,213,216,216,217,218,218,221,220,223,220,219,222,219,222,222,220,218,218,220,218,219,221,220,220,222,220,221,219,220,220,218,218,216,218,217,216,216,216,217,214,218,218,217,219,220,220,218,220,220,220,221,219,220,221,220,222,223,222,224,226,222,222,220,219,219,220,222,221,222,222,223,224,223,223,223,221,219,219,219,219,221,222,222,221,221,222,222,219,218,219,221,223,223,227,223,221,222,223,222,221,223,222,223,224,223,221,220,220,221,219,215,217,214,214,216,217,218,215,219,218,217,219,220,218,219,218,219,217,220,221,220,222,221,223,222,223,223,220,224,223
,219,224,225,226,228,226,227,224,227,227,226,227,227,228,227,224,225,224,224,225,224,225,230,228,228,229,230,233,233,233,232,233,234,232,232,234,233,232,234,234,235,235,234,235,235,237,231,215,210,227,234,236,235,218,227,242,241,234,236,243,242,242,241,242,242,242,244,243,244,242,242,243,242,244,243,242,243,243,244,242,243,242,242,242,241,242,241,243,244,242,241,243,242,241,242,241,243,241,242,243,242,241,242,240,239,238,237,235,237,234,241,235,217,194,193,195,210,249,95,118,247,247,251,189,198,238,233,197,214,218,208,225,237,252,186,146,118,108,75,68,220,245,251,244,223,251,207,212,226,213,239,229,235,231,246,225,181,185,220,200,143,217,251,214,170,160,178,224,244,213,191,183,166,200,199,151,203,203,160,176,193,120,171,182,126,174,157,131,94,90,69,47,47,56,18,67,233,247,210,131,177,229,239,216,209,245,175,162,181,158,117,58,88,181,194,192,151,145,224,246,209,149,174,115,117,218,214,225,225,210,213,215,220,230,179,158,225,241,242,243,251,252,253,253,250,210,162,132,131,129,125,118,111,95,82,80,93,90,85,96,89,53,31,21,94,136,147,144,55,27,24,12,15,12,10,33,54,45,54,42,18,23,40,50,47,36,27,28,32,49,52,49,48,46,46,45,41,36,27,46,62,46,51,45,39,49,51,57,50,38,23,30,38,19,15,35,58,60,55,49,38,19,55,164,251,217,145,148,141,157,160,159,162,167,191,177,146,146,157,165,174,124,40,21,31,58,67,35,36,62,79,77,48,23,25,31,33,34,34,19,14,15,22,32,32,34,37,31,21,18,20,23,27,30,30,22,12,23,27,19,20,23,30,40,43,51,55,49,46,32,20,14,17,12,23,33,29,33,26,29,23,23,27,22,28,39,57,73,78,96,123,146,152,152,134,126,109,93,82,66,63,59,61,59,60,57,55,61,47,42,44,46,47,52,41,11,116,223,250,250,243,235,238,234,233,229,230,231,235,234,235,236,233,231,232,232,231,232,231,230,230,231,230,229,229,229,230,227,231,232,230,228,229,229,227,228,223,228,227,224,226,226,226,224,224,116,4,1,6,9,9,9,12,10,11,12,12,12,206,214,212,210,210,208,209,209,211,210,210,210,212,211,212,214,213,216,218,215,217,218,218,215,217,220,219,220,218,220,220,219,220,220,218,219,221,219,221,220,220,220,218,218,218,216,213,216,
214,215,216,214,216,216,213,213,214,215,213,216,216,215,216,214,213,214,214,212,212,211,212,214,212,210,212,212,213,213,212,214,214,214,215,214,215,216,220,221,219,218,218,217,216,216,217,217,216,218,219,220,218,218,219,218,219,220,219,218,217,218,219,217,216,217,215,218,216,212,215,215,215,216,215,217,217,217,218,218,220,220,219,220,217,217,219,222,222,218,221,221,221,223,221,223,221,217,219,220,221,222,220,219,221,223,222,223,224,223,221,222,220,218,221,221,220,221,219,222,221,220,222,221,221,221,223,222,222,222,224,225,222,222,220,219,221,221,222,224,222,223,221,218,218,217,214,215,219,217,218,219,220,220,220,220,221,220,218,218,217,219,218,218,220,218,221,222,222,225,222,222,223,222,223,223,221,224,225,222,224,224,225,224,226,224,224,226,225,226,225,222,221,222,224,226,228,229,230,229,231,233,232,230,231,233,232,232,232,232,232,234,234,232,234,231,233,233,235,237,229,214,215,232,235,237,226,216,235,242,237,232,236,244,243,241,242,241,241,242,241,241,242,241,243,242,241,242,241,242,242,243,242,241,241,242,242,240,241,242,241,243,241,240,240,240,240,241,241,241,241,242,242,241,240,241,242,240,240,239,239,235,237,235,244,231,214,183,194,227,240,249,104,115,233,221,227,196,209,243,235,203,226,222,211,217,216,209,131,126,104,93,68,57,216,245,252,236,216,249,199,215,228,210,238,230,236,229,243,234,198,184,183,164,111,155,201,220,222,178,198,246,252,230,193,191,172,208,185,146,210,208,179,174,196,136,187,138,71,207,198,161,125,97,61,36,44,57,19,64,226,248,193,101,160,216,235,213,210,243,190,156,181,218,159,78,121,145,185,213,98,159,237,239,201,185,190,97,170,246,222,223,225,228,224,207,213,224,169,200,239,224,224,222,232,227,211,154,106,89,90,108,133,128,113,113,110,133,146,139,128,105,114,150,125,68,29,24,114,139,145,130,42,25,29,25,17,12,13,19,49,55,58,39,22,13,23,46,50,54,48,39,42,42,48,52,47,50,48,47,53,45,46,50,49,48,54,50,54,53,50,52,40,34,27,43,47,35,45,55,71,77,76,81,77,50,94,206,234,202,145,141,145,156,165,159,166,180,191,162,137,147,160,174,169,110,47,41,41,4
8,57,44,33,49,72,60,39,15,16,22,30,39,34,26,17,14,21,27,20,26,35,36,29,22,17,21,32,36,32,25,17,12,19,33,43,44,49,53,53,56,54,47,41,28,21,16,22,35,42,44,40,31,27,21,15,19,18,25,24,22,26,51,77,67,61,76,98,134,145,145,156,159,157,148,141,136,129,118,103,100,97,90,89,74,60,46,27,16,22,7,99,222,247,248,238,232,235,231,233,231,233,235,235,234,236,235,233,235,233,233,231,232,232,231,232,232,230,231,231,229,229,230,230,231,229,230,230,227,229,227,225,226,226,225,225,226,226,226,223,115,3,0,5,9,10,9,10,10,12,12,11,11,205,210,208,209,210,208,210,207,212,212,213,210,211,210,210,212,213,212,212,216,216,217,217,217,217,217,220,219,219,217,221,219,218,220,220,219,219,220,218,217,217,218,216,215,215,215,217,216,215,214,215,216,215,213,212,212,211,213,214,212,213,214,215,212,216,215,215,216,212,212,212,212,211,212,211,213,213,210,212,213,214,214,215,216,214,215,214,216,218,215,215,215,214,215,214,214,216,216,216,220,217,220,219,220,221,217,220,218,220,217,220,217,216,219,215,215,213,214,214,214,216,214,216,215,216,219,217,217,217,218,218,217,219,217,215,218,218,218,218,219,220,221,221,220,220,218,222,223,221,222,218,221,221,221,223,219,220,220,222,222,220,221,219,219,220,221,222,219,219,221,222,223,223,223,222,221,223,222,220,222,222,219,219,219,218,219,223,223,222,222,223,222,217,218,218,218,218,220,222,223,222,221,221,221,220,218,219,219,218,216,218,217,217,222,220,221,221,222,223,220,224,224,222,223,223,222,222,222,223,223,227,225,227,232,227,227,225,226,226,222,224,224,222,223,228,230,230,231,229,230,231,232,231,231,234,231,233,234,231,232,234,233,232,232,232,232,235,234,224,211,220,234,236,238,219,224,240,241,234,228,242,243,242,243,240,241,242,242,240,240,242,242,243,244,242,242,241,241,242,241,241,242,242,242,242,242,241,241,243,245,243,242,240,241,241,242,243,241,243,241,242,240,240,241,240,241,240,241,240,238,239,237,244,227,220,194,215,235,235,250,89,96,203,220,247,219,237,249,239,213,227,209,174,177,188,215,161,149,125,101,63,60,216,243,249,231,218,243,191,222,226,212,23
6,227,234,230,239,235,196,180,199,178,113,147,220,243,250,177,188,246,252,229,182,179,171,214,189,184,226,208,202,159,184,151,184,106,21,221,199,160,167,113,69,21,33,56,17,66,227,248,182,83,132,194,233,218,210,244,191,151,185,247,199,94,76,132,210,156,72,171,240,233,199,196,194,142,199,239,221,219,221,223,229,226,229,213,163,200,231,227,239,237,232,197,127,79,49,62,77,93,87,48,39,35,45,66,98,114,85,39,32,103,116,89,39,41,135,127,148,120,29,18,28,42,34,18,16,30,54,61,61,32,13,13,20,48,56,43,32,32,34,42,50,47,52,48,50,49,50,54,48,56,61,68,70,69,77,77,83,92,97,94,101,111,110,111,121,122,119,124,127,135,117,84,152,248,249,190,147,149,144,160,160,159,171,186,188,150,143,153,166,179,159,96,59,65,57,65,66,66,70,61,61,60,55,51,47,48,48,47,45,34,27,27,31,20,15,19,28,38,36,35,31,33,36,36,37,32,24,27,37,50,54,53,54,57,55,52,49,36,25,22,25,40,42,43,39,35,37,38,34,19,16,15,17,16,29,28,38,93,114,98,68,43,27,25,33,43,47,53,61,66,75,88,104,107,113,112,110,95,80,69,39,27,22,16,36,111,221,241,242,243,238,235,231,231,233,233,231,233,235,236,235,234,235,236,233,233,234,234,232,233,232,231,234,233,232,230,232,229,230,229,227,232,231,229,228,230,227,226,226,227,228,227,227,225,223,116,4,1,5,10,10,9,10,10,11,12,11,12,205,208,205,207,206,207,207,206,209,210,207,207,210,209,210,210,210,211,212,214,216,214,217,214,215,216,212,216,217,214,214,215,215,216,214,215,215,214,216,215,214,212,212,212,215,215,215,218,214,212,214,214,216,214,212,213,210,210,210,208,209,209,211,212,211,213,212,211,214,212,211,209,210,212,211,211,210,212,211,211,212,210,212,211,214,214,214,213,211,214,214,215,214,211,214,214,214,216,213,215,215,217,220,217,219,218,217,219,217,218,216,215,218,215,214,216,215,213,214,214,211,215,215,216,216,215,215,214,218,216,215,214,215,217,213,216,217,217,223,220,222,222,219,221,220,220,221,220,218,217,215,218,220,222,220,219,219,220,218,219,218,215,217,214,217,219,218,218,217,220,219,222,220,218,219,218,218,218,219,217,216,217,218,219,217,219,219,219,219,218,219,221,218,218,217,215,21
1,61,147,239,241,225,241,234,230,223,228,217,219,175,179,208,171,182,154,167,147,127,185,153,210,249,241,236,214,249,219,220,181,153,129,90,98,12,59,196,238,249,150,60,73,125,114,66,39,50,28,81,237,246,246,160,193,240,239,189,201,246,198,139,127,230,246,112,78,217,252,226,123,97,184,174,109,165,237,242,239,236,231,230,227,237,228,153,201,237,237,250,251,203,62,2,9,6,31,43,39,36,54,94,125,142,113,62,38,62,111,97,42,23,28,48,38,110,119,99,128,56,27,29,17,26,32,74,107,96,99,65,51,59,74,122,130,142,145,144,153,166,168,160,142,137,139,128,139,128,123,116,116,110,100,88,69,67,64,69,59,56,58,59,56,49,44,49,49,41,34,46,11,69,237,250,251,251,252,221,162,188,232,252,246,155,119,130,131,152,217,206,103,37,14,16,24,29,29,25,33,32,38,44,46,50,46,51,48,50,59,57,54,61,60,58,61,56,60,57,55,59,57,60,64,67,66,76,76,68,63,70,88,82,69,54,39,38,28,57,99,112,113,100,79,51,33,28,26,24,51,90,86,59,36,29,24,19,22,24,24,21,19,18,55,92,103,93,64,32,52,109,136,131,109,96,82,81,101,127,139,112,72,44,35,60,95,201,246,243,239,231,234,235,232,231,230,233,233,233,232,232,230,230,232,230,230,231,230,228,230,229,227,227,227,227,229,229,228,225,225,226,227,225,224,224,225,225,223,224,223,224,224,226,225,224,226,228,225,224,225,118,3,0,5,8,8,9,12,10,11,11,12,12,194,194,194,193,193,189,193,193,192,194,195,194,194,196,196,196,194,195,196,194,197,197,196,199,197,197,199,199,196,198,198,197,201,198,198,199,196,201,200,199,200,198,201,200,201,200,200,200,201,202,200,201,199,201,202,200,202,200,200,201,201,203,201,201,201,201,203,201,202,200,201,203,203,200,199,202,201,198,200,202,201,200,203,201,199,200,200,200,200,200,201,201,200,201,201,201,201,205,203,203,203,201,203,202,200,201,204,203,203,202,202,203,203,206,202,203,204,203,204,205,205,205,205,205,204,204,204,206,203,204,207,206,207,204,206,206,207,203,203,206,205,206,207,205,205,205,206,205,202,207,208,205,206,203,206,206,205,206,206,204,205,208,206,206,207,206,207,208,207,208,206,205,208,206,207,205,204,208,206,206,209,207,210,212,209,211,210,206,210
,212,208,207,210,207,208,210,209,210,208,210,208,206,209,208,209,211,211,210,210,208,212,211,210,211,213,211,207,210,210,208,210,209,210,210,210,210,212,210,210,212,210,214,214,213,212,212,215,212,214,214,213,214,212,210,213,216,213,212,214,214,212,212,214,213,212,212,212,211,212,213,214,213,214,211,203,201,210,220,202,207,222,222,216,208,220,222,222,223,222,222,224,222,223,223,225,223,220,224,223,223,225,222,223,226,224,225,226,224,224,224,223,227,228,229,234,234,235,229,219,203,186,175,179,191,205,219,232,240,242,241,237,233,229,227,225,227,223,225,224,226,230,223,190,80,149,233,220,178,91,78,107,147,153,149,137,122,152,134,111,143,153,116,106,103,95,103,112,125,139,152,139,141,148,144,133,103,100,108,141,211,219,204,234,242,237,228,227,211,222,151,162,206,170,208,162,199,226,178,234,210,238,249,228,217,145,126,103,89,60,20,14,18,54,32,29,94,129,134,71,42,52,99,104,63,54,62,27,59,184,226,155,66,130,178,167,174,164,128,88,61,125,228,208,76,86,185,206,188,137,38,76,139,112,208,238,231,243,232,232,231,223,241,223,159,211,243,249,250,225,114,11,5,8,18,42,44,27,19,37,51,89,102,112,96,46,36,90,127,96,56,46,38,33,122,124,105,110,45,26,22,19,37,67,116,129,115,114,94,99,113,117,133,130,134,135,126,133,131,137,141,126,122,112,113,107,85,89,76,59,51,49,44,37,34,30,41,48,35,24,29,28,24,32,39,36,42,53,55,15,121,251,251,252,252,244,166,162,209,244,234,199,131,122,130,133,173,226,172,74,24,15,25,27,30,22,12,22,27,42,49,45,30,17,20,30,29,27,39,39,33,36,41,43,45,50,53,56,59,57,57,55,55,57,59,59,65,66,70,69,72,76,79,85,85,72,81,107,117,124,94,50,38,31,29,24,36,72,96,81,53,31,26,27,27,31,27,30,27,25,21,57,95,104,99,66,41,32,83,130,141,144,154,159,143,144,139,96,66,42,38,48,66,174,241,249,241,235,236,233,230,230,231,233,233,233,232,233,233,233,233,233,231,231,230,228,229,230,229,229,229,228,229,228,227,225,226,226,226,229,229,228,228,227,226,226,226,226,226,226,228,226,225,227,225,228,226,222,117,4,1,4,8,10,8,10,10,11,12,11,11,191,194,194,195,194,192,194,194,195,191,192,195,193,194,1
96,193,194,193,195,197,195,197,194,197,196,196,198,198,198,200,199,195,201,200,199,200,197,198,198,198,199,198,200,198,197,199,198,200,200,199,199,198,199,200,201,200,199,200,201,200,202,200,199,202,200,200,202,202,202,200,201,200,200,202,201,201,200,202,199,200,201,197,199,200,200,198,200,200,199,200,202,200,201,201,201,200,200,204,199,202,202,200,204,202,202,206,203,203,204,205,203,201,203,202,203,202,203,202,203,203,203,203,203,203,204,205,205,204,202,204,204,204,205,204,206,206,203,203,205,203,203,205,204,204,204,204,204,204,205,206,206,204,203,204,206,207,203,203,205,203,203,203,205,205,203,206,206,204,205,203,206,207,206,208,206,206,207,206,207,207,208,208,203,206,205,206,209,207,210,208,207,209,210,206,207,211,208,208,211,210,208,207,211,210,210,210,209,211,209,207,208,208,211,208,208,208,207,208,209,210,210,210,210,209,210,210,211,211,210,213,210,212,212,212,213,212,211,212,208,211,212,212,211,209,212,211,212,211,212,211,210,212,211,212,212,214,212,210,212,214,213,212,215,210,202,198,215,214,199,212,220,217,207,211,220,222,221,218,222,220,222,221,220,222,220,224,221,221,225,220,224,223,222,222,221,225,223,222,227,225,229,232,232,233,232,217,203,188,174,177,188,202,217,229,235,237,236,235,234,229,227,225,225,225,223,225,223,224,223,225,227,223,198,78,133,212,209,156,51,47,106,166,179,182,160,157,179,159,132,165,170,117,112,85,63,62,72,90,91,94,90,95,106,109,93,87,84,76,84,173,184,173,222,232,237,229,222,205,227,149,157,217,165,204,190,236,252,198,203,142,122,117,77,71,26,2,14,23,81,129,131,167,127,98,133,143,165,177,105,42,35,87,95,49,40,44,36,25,107,160,139,66,83,144,151,145,118,65,61,84,121,173,130,84,112,158,186,181,129,49,39,76,117,225,245,227,238,235,234,233,226,246,213,158,229,248,251,246,119,36,2,3,18,34,47,29,19,18,31,56,75,87,95,94,73,40,42,89,133,122,93,68,76,154,124,122,106,29,47,62,105,142,141,153,134,123,125,112,124,125,129,135,134,146,145,136,133,123,101,90,69,56,51,45,42,30,29,36,46,53,30,36,41,17,32,59,66,52,34,41,37,28,32,42,51,50,87,69,20,18
6,251,251,252,241,164,113,165,229,252,241,152,112,129,128,134,198,229,134,47,18,14,27,30,33,21,18,30,34,51,63,51,34,19,40,42,35,48,49,39,26,13,27,31,21,25,23,30,25,28,31,36,40,48,52,61,67,63,59,59,53,56,61,91,120,118,117,98,105,100,73,63,57,44,40,36,70,100,96,78,47,32,29,27,30,34,33,30,31,31,28,56,87,111,105,75,41,53,123,134,118,100,101,122,128,100,69,57,39,42,33,45,160,245,245,249,230,234,231,230,228,230,232,231,233,234,232,232,232,231,231,232,231,230,230,231,231,231,230,228,228,229,229,229,228,226,228,229,229,227,228,228,226,226,225,226,228,227,227,227,228,226,228,227,225,228,228,224,117,3,0,4,8,10,9,10,10,11,12,12,12,193,196,193,192,192,192,193,193,193,196,192,191,194,192,194,193,194,195,194,193,195,197,194,198,197,196,199,196,198,198,198,197,198,198,199,200,197,199,198,198,198,196,198,199,199,197,200,199,198,200,200,200,200,199,198,197,198,199,201,200,200,202,201,200,201,200,201,202,200,200,200,200,201,200,200,200,200,200,201,201,199,198,200,198,200,199,198,202,202,198,199,200,201,201,200,203,200,201,201,202,202,200,203,202,203,200,203,201,201,206,205,205,202,201,202,201,202,203,201,203,204,202,201,203,204,204,203,205,203,201,206,202,203,204,202,204,204,203,204,202,203,203,203,203,205,205,203,206,202,205,205,203,205,205,206,204,205,204,204,203,203,203,201,205,204,204,205,206,204,205,206,208,208,205,209,206,205,207,205,207,208,205,206,206,204,206,210,206,208,210,207,207,207,206,207,208,206,208,209,210,210,210,210,211,211,208,207,210,210,209,206,207,208,207,210,209,208,210,209,208,208,208,208,208,208,208,210,208,208,210,208,211,208,207,208,210,210,209,210,209,206,209,211,210,211,212,211,209,211,211,211,211,213,212,211,211,211,211,211,210,210,214,207,204,200,202,216,203,200,218,218,211,207,216,220,216,220,218,220,217,220,220,218,218,222,221,220,222,221,224,222,222,224,223,223,224,225,225,230,232,229,230,218,203,190,179,181,186,200,213,225,232,233,235,234,232,229,227,226,223,223,225,225,225,222,226,223,227,220,226,223,223,200,83,120,199,239,172,64,87,129,171,172,171
,156,163,189,164,152,180,158,113,119,88,60,69,59,59,51,59,61,55,57,61,69,82,81,53,60,157,155,125,159,186,219,225,219,207,237,155,187,242,146,133,127,184,172,67,56,19,0,12,9,59,66,58,74,77,117,143,183,159,136,151,142,189,234,238,126,52,57,120,111,48,41,47,29,75,207,247,199,114,172,210,173,196,163,149,182,168,217,189,130,164,199,206,203,181,138,66,59,54,112,243,242,233,233,232,234,231,229,245,195,157,240,249,252,156,26,3,2,11,29,44,32,18,21,26,46,53,88,106,89,100,89,57,38,35,76,122,153,149,148,165,110,119,128,128,161,163,165,147,133,125,113,131,145,139,144,130,121,108,97,90,70,56,47,35,32,24,25,30,26,29,45,32,22,47,77,81,60,67,66,29,36,63,76,63,34,40,43,33,39,42,53,29,62,69,84,230,251,251,250,191,138,130,190,245,251,192,127,118,131,129,152,218,199,103,22,10,15,23,32,40,29,24,28,21,47,55,43,32,29,57,49,39,58,56,33,21,33,50,41,23,22,29,24,19,20,22,24,39,43,35,25,33,35,39,52,52,59,62,69,80,83,83,87,89,97,98,89,90,83,74,103,127,129,114,77,46,30,25,29,32,34,30,27,32,33,30,61,91,106,106,77,44,83,133,94,66,35,49,103,93,67,47,39,50,25,55,185,242,249,249,233,233,229,229,229,229,230,229,231,231,232,233,231,229,230,231,230,230,231,230,230,229,230,227,226,229,228,227,227,227,228,228,227,227,227,225,225,226,227,226,227,228,227,226,226,228,226,225,229,227,227,224,222,118,4,1,5,8,9,9,12,10,11,12,11,12,193,194,190,193,189,191,191,190,193,193,193,193,193,191,193,191,195,194,194,196,193,195,194,196,196,196,196,197,197,195,198,195,199,196,195,199,196,199,198,198,198,195,198,198,195,198,196,196,198,197,199,199,199,198,198,198,195,198,200,196,200,198,198,201,196,200,200,199,199,197,201,198,199,200,199,198,198,202,197,199,199,198,198,197,200,199,199,199,199,198,200,201,199,198,200,202,202,201,200,202,202,200,200,200,200,200,200,201,199,200,200,200,201,201,202,200,201,200,203,201,200,203,201,202,202,203,203,201,203,203,204,203,201,203,204,203,202,202,203,205,203,203,203,201,204,204,203,205,203,205,205,201,205,203,200,204,204,203,202,203,204,204,206,205,205,202,204,206,206,206,205,206,205,20
6,206,203,206,205,206,205,205,207,205,209,206,207,207,206,208,207,208,207,208,207,206,206,206,206,208,207,205,206,209,207,208,208,208,208,207,208,208,207,210,206,205,206,207,208,204,204,206,207,208,207,206,206,208,210,205,207,208,208,207,208,208,208,208,208,207,210,207,210,211,210,210,211,212,210,212,211,211,212,208,210,209,208,212,212,210,210,209,210,207,204,197,208,216,198,208,219,215,206,208,218,216,217,217,217,218,214,220,216,217,220,218,221,218,219,222,218,221,219,223,225,224,232,230,230,230,217,202,190,178,177,188,199,213,223,228,234,233,232,229,226,224,223,223,222,224,223,221,223,225,226,222,226,223,222,219,227,217,223,196,65,121,212,252,217,136,139,151,171,159,152,146,168,191,158,130,149,141,116,132,107,87,84,57,53,48,63,65,55,49,53,66,90,73,47,73,150,131,76,90,132,197,229,214,211,239,170,194,252,134,45,20,23,27,9,40,66,78,117,137,178,180,189,191,143,134,130,144,141,99,95,84,102,146,172,107,37,41,109,117,57,57,57,33,47,146,181,131,71,121,144,120,146,141,145,156,122,127,117,83,136,123,118,149,129,150,136,79,51,134,243,246,232,230,231,234,232,230,245,182,188,248,248,188,49,2,12,5,28,42,36,24,21,27,45,56,54,83,114,102,88,99,87,59,46,39,43,64,75,109,141,95,137,157,133,157,139,149,141,129,137,130,127,111,76,55,43,39,31,28,37,44,32,21,27,19,18,27,38,41,49,78,55,30,42,81,96,56,64,82,63,57,75,77,49,25,29,46,55,56,49,51,48,53,50,140,243,251,251,241,186,161,153,214,250,224,147,113,123,139,131,175,237,164,83,70,57,55,55,50,49,45,35,34,29,29,60,56,52,56,47,33,30,51,51,38,33,48,54,29,22,27,27,26,19,24,31,46,65,64,45,26,16,29,32,31,41,35,36,40,51,62,76,84,87,81,76,79,83,94,103,110,114,105,104,92,66,54,50,41,39,34,31,33,29,35,30,60,96,103,105,83,47,41,58,57,46,55,91,95,75,49,44,50,28,116,239,249,249,250,236,232,233,233,227,227,228,229,229,230,231,230,230,231,231,230,232,230,230,231,230,231,229,229,231,229,229,229,227,227,227,228,227,227,227,227,226,226,228,227,228,227,228,229,226,227,229,222,225,228,226,228,226,224,118,4,1,5,8,9,9,12,10,11,12,11,11,189,192,193,192,194,191,
188,190,190,191,189,189,194,189,191,191,192,193,191,193,193,194,193,193,195,197,197,194,194,195,197,195,196,196,198,196,194,199,195,196,198,198,196,195,195,193,196,195,195,197,195,195,198,196,197,198,199,200,199,199,197,197,199,198,198,198,198,199,196,196,198,199,199,200,199,196,198,196,200,199,196,198,201,198,198,198,196,198,196,197,198,198,197,198,199,199,198,200,200,199,200,199,200,199,201,200,201,200,198,203,200,199,200,201,199,201,201,202,200,199,201,201,200,201,201,199,200,202,200,201,203,202,205,203,202,202,204,202,204,203,199,206,203,201,203,200,204,204,201,201,200,201,203,202,201,202,205,204,203,203,205,204,205,207,205,203,203,206,205,206,206,205,207,204,206,205,205,205,204,203,203,205,207,206,207,208,206,205,205,205,202,205,207,205,207,207,205,205,206,206,204,207,207,205,206,206,206,207,204,207,207,207,205,205,205,204,206,204,207,206,204,206,205,206,205,207,209,204,206,206,205,206,206,208,206,205,206,206,207,206,208,209,207,208,208,207,208,206,209,208,210,209,209,208,207,210,209,209,209,207,211,212,205,207,202,220,218,202,214,217,208,205,211,214,215,215,213,213,215,215,217,217,217,218,217,217,219,217,218,219,221,224,226,230,230,228,218,204,194,181,177,185,195,209,220,226,229,230,231,229,225,222,222,222,219,222,223,220,226,223,223,223,221,223,220,223,219,222,218,227,216,222,178,51,93,194,251,208,143,152,167,176,164,161,142,146,173,138,105,132,139,121,128,95,83,71,50,60,49,47,54,56,61,66,70,85,75,48,57,113,111,107,148,180,217,233,207,218,241,159,191,239,141,95,115,105,76,97,198,216,223,247,237,247,225,252,232,215,229,225,227,99,96,160,139,173,184,213,134,45,39,89,113,54,43,48,28,45,114,153,141,81,72,99,124,158,133,134,115,86,120,51,43,131,132,122,138,131,160,160,111,45,119,234,237,240,229,228,235,224,239,237,183,220,250,181,79,7,5,7,16,43,42,25,25,26,46,50,59,53,29,76,106,94,92,102,103,75,57,48,41,21,57,112,101,132,139,133,139,127,120,87,77,53,34,39,49,36,33,34,30,17,29,58,62,50,31,35,33,18,30,46,40,45,72,81,62,72,93,79,48,37,77,110,143,131,91,81,83,100,103,
113,123,114,120,118,131,116,100,206,243,243,244,207,154,125,197,251,193,133,120,131,133,139,201,226,127,72,124,136,156,64,91,69,63,67,61,57,58,57,71,73,57,50,43,42,55,67,63,57,46,36,22,22,27,25,29,32,27,24,43,65,62,42,25,32,57,48,49,59,56,36,18,19,30,32,31,45,48,61,78,81,85,80,79,74,77,87,93,91,91,84,73,68,63,51,46,39,34,29,59,93,101,110,80,51,35,30,46,66,101,101,83,54,44,54,66,186,248,248,251,251,238,234,234,230,226,229,228,230,230,228,232,233,233,231,229,229,231,231,228,229,230,228,231,231,231,230,229,228,225,228,228,227,228,228,228,229,229,229,229,227,229,227,228,227,229,229,226,229,229,228,228,226,228,227,225,117,4,0,5,10,10,8,10,10,11,12,12,12,189,191,193,193,192,191,190,190,192,191,191,192,192,193,195,191,193,189,189,193,192,194,193,192,191,193,192,192,193,193,195,194,195,195,196,196,193,194,195,198,196,195,196,195,196,198,194,194,195,194,196,195,195,196,198,197,197,199,197,196,198,195,196,200,197,199,198,198,198,196,200,198,196,194,196,199,198,200,198,198,198,198,200,196,197,198,196,197,196,196,198,198,198,197,198,199,200,199,198,200,201,200,201,198,199,201,200,201,201,203,200,200,201,199,201,200,203,199,199,202,199,200,197,199,200,199,200,200,202,200,200,201,200,202,201,204,205,203,204,205,204,201,203,202,203,199,202,203,200,205,203,203,204,202,204,203,202,204,204,204,203,203,203,203,205,202,203,204,203,204,205,205,204,206,206,204,205,203,205,205,205,207,204,205,205,205,205,205,207,203,204,205,205,204,203,206,205,205,203,205,205,203,206,204,205,205,204,208,206,206,205,203,206,203,204,204,204,205,204,203,203,203,203,203,204,206,204,205,205,205,205,205,205,206,207,205,207,207,207,208,205,206,207,207,208,209,207,208,208,208,208,205,207,208,208,207,209,208,207,208,210,206,214,212,217,244,219,206,216,214,208,205,213,213,213,216,213,214,216,213,218,215,217,219,217,220,217,221,224,225,228,225,227,217,200,194,180,177,184,191,205,218,225,227,230,229,228,225,223,225,223,224,223,221,218,222,224,222,222,223,224,223,222,224,223,221,221,222,222,227,213,217,168,42,78,173,2
41,210,147,166,170,183,178,176,162,159,165,132,135,163,141,117,77,20,26,34,29,33,29,33,31,36,32,46,65,66,63,37,33,77,105,166,238,250,244,226,205,224,240,168,191,234,171,203,238,203,185,186,249,247,247,251,236,244,236,247,213,200,192,227,181,105,131,177,188,227,244,252,193,80,79,108,115,52,33,44,25,81,214,247,227,146,131,196,251,251,201,160,151,181,221,103,71,222,237,242,224,186,223,191,163,69,94,216,224,238,229,225,236,225,242,231,185,249,220,97,27,1,11,11,39,47,22,23,26,39,51,61,54,12,79,111,108,94,83,111,107,122,99,70,66,32,83,119,103,139,121,66,50,34,29,33,34,24,16,44,68,60,49,60,53,30,21,55,84,53,24,34,43,46,49,52,48,50,68,111,156,171,170,142,137,153,155,153,162,164,145,143,137,139,125,110,93,77,71,62,63,56,52,22,115,217,250,234,144,87,174,245,167,128,128,133,136,149,229,207,89,51,35,29,43,42,41,38,39,42,43,44,48,50,49,59,63,65,69,69,69,69,73,65,58,56,51,48,42,40,47,36,31,27,32,56,54,44,46,59,60,47,55,61,51,36,25,44,50,30,34,33,30,29,26,32,42,52,66,74,84,85,81,79,80,86,100,111,101,93,79,63,61,50,71,96,99,115,85,57,58,81,116,133,118,81,54,38,45,93,221,245,250,250,248,239,233,234,231,226,227,226,230,230,230,230,229,231,232,231,231,229,228,231,229,228,228,228,228,229,227,227,230,229,228,229,229,230,228,229,229,228,230,231,229,229,231,232,230,229,229,230,229,229,227,230,230,227,230,230,226,117,3,0,4,8,9,9,10,9,11,11,11,11,190,193,194,192,195,191,190,191,191,193,192,190,192,191,192,191,193,192,190,193,194,191,191,193,193,192,193,194,194,194,196,195,193,194,197,195,192,194,194,194,195,194,194,195,195,194,193,196,194,195,199,194,198,197,196,195,195,196,194,198,194,196,198,193,199,196,193,194,196,195,196,195,194,195,196,198,197,198,196,198,196,198,197,195,200,196,196,196,196,197,195,198,198,196,195,198,197,198,199,198,200,198,199,198,197,197,198,197,198,199,200,199,198,201,198,200,199,198,198,199,199,200,200,201,201,201,202,199,199,200,200,199,201,201,203,202,201,200,202,202,201,204,200,200,203,201,200,200,202,202,202,203,201,201,203,200,201,202,201,203,202,203,201,203,
205,204,204,203,202,205,205,203,204,202,206,203,204,203,206,206,205,207,205,204,202,202,203,205,205,205,203,204,204,204,205,205,202,203,205,203,205,203,203,202,204,204,202,202,202,206,203,203,203,202,204,202,203,203,203,201,202,201,202,203,202,206,205,203,206,204,203,203,202,204,204,206,203,203,205,205,203,206,205,203,206,204,207,206,206,206,205,206,205,205,208,208,204,206,207,205,207,204,212,201,137,184,190,202,215,212,202,206,211,212,213,214,213,213,214,214,215,215,213,217,220,220,223,224,226,226,218,205,195,182,175,180,190,201,214,222,223,228,227,223,224,220,220,220,221,221,222,222,220,223,223,222,223,220,222,223,224,221,222,223,222,221,222,222,222,226,214,218,149,51,103,196,252,227,156,161,175,184,172,168,158,162,187,160,165,184,149,64,4,1,17,32,21,22,25,26,22,22,27,53,58,63,41,44,73,128,167,196,234,234,234,217,200,227,233,159,208,239,152,210,243,201,194,217,249,230,234,221,213,175,189,186,140,145,148,231,160,88,150,171,171,215,224,251,189,72,68,94,121,54,32,35,14,84,227,246,242,164,128,234,252,252,178,174,187,213,222,118,163,249,250,251,212,204,240,228,214,85,91,206,212,230,232,222,236,224,249,229,195,245,119,20,15,1,22,32,46,29,22,26,36,53,52,61,19,85,219,178,122,89,67,89,94,103,109,107,108,99,134,128,123,124,52,23,19,17,14,29,69,45,18,46,78,85,59,61,74,55,44,74,83,48,42,46,66,103,140,157,165,190,121,180,200,198,187,169,147,131,117,88,65,53,43,45,39,37,36,29,29,29,25,23,19,24,21,27,15,109,158,203,143,91,185,234,150,130,126,140,140,174,234,169,61,21,10,10,20,14,18,17,15,21,19,24,18,22,27,29,27,34,36,36,44,42,46,51,57,60,64,63,63,69,69,61,54,54,47,56,73,61,59,57,38,30,48,55,39,33,53,62,54,36,42,39,34,31,19,22,33,46,53,50,42,46,61,68,75,74,71,74,83,97,101,100,97,89,94,103,100,116,96,108,155,165,164,116,71,48,37,41,100,216,249,249,250,239,237,233,233,228,225,226,228,231,230,228,229,231,228,228,228,229,230,229,230,231,228,228,229,228,228,228,230,228,229,230,229,229,229,228,230,229,230,233,231,229,231,231,230,229,230,233,229,228,227,231,228,226,227,229,231,229,227,1
18,4,1,5,8,9,9,10,10,11,12,11,11,193,190,191,190,192,190,190,194,192,192,192,191,192,192,192,192,194,191,190,192,190,190,192,194,195,196,195,196,195,194,194,193,196,193,193,194,191,193,192,191,195,192,193,197,193,194,194,196,194,195,198,193,193,193,195,197,195,197,196,196,196,194,196,196,193,194,196,197,196,194,195,194,194,197,197,197,197,197,195,196,197,198,198,196,196,196,195,194,196,197,198,198,198,199,198,198,196,198,198,197,196,196,198,199,200,198,197,198,198,198,199,198,198,198,197,199,198,197,201,199,199,203,200,201,201,200,200,198,198,200,202,203,200,202,200,197,199,198,201,203,200,201,200,200,202,199,201,199,199,201,199,202,201,200,200,202,202,200,203,201,201,201,200,204,204,203,203,205,205,203,204,205,204,204,203,204,205,203,208,207,204,204,202,203,202,201,203,202,204,202,203,203,200,202,203,205,201,203,200,203,204,202,202,200,203,205,201,201,200,200,204,201,201,200,200,203,202,201,201,201,202,202,202,201,202,202,202,203,201,204,203,202,204,203,205,203,202,204,203,205,204,204,205,204,205,204,205,206,207,206,206,204,205,203,206,207,205,208,207,206,208,197,209,129,45,86,156,210,215,204,202,206,213,212,211,210,207,210,212,211,214,215,217,220,223,223,222,214,201,192,184,174,181,189,200,215,220,225,226,224,225,223,221,220,217,215,219,218,221,221,219,222,221,222,222,222,221,221,222,220,220,222,222,222,223,221,224,220,223,222,215,207,146,55,119,217,252,230,158,159,155,170,165,162,154,175,200,164,171,183,130,62,11,2,24,38,30,28,33,25,29,35,33,54,59,53,30,64,124,199,239,214,229,235,230,214,204,229,227,158,196,211,166,206,199,158,152,212,230,197,185,171,155,141,184,170,167,174,198,252,116,112,181,164,167,206,224,240,182,66,49,71,116,73,32,40,13,61,222,239,216,154,107,213,251,163,117,173,206,189,164,149,213,249,238,242,171,184,228,200,221,127,76,180,211,216,241,231,237,228,252,228,187,162,28,2,16,14,42,45,36,17,24,39,52,54,63,23,67,208,252,203,145,110,57,69,65,82,71,80,89,91,142,132,136,120,33,15,19,19,17,35,86,78,42,41,85,83,41,29,51,92,126,146,144,151,171,190,200,2
02,200,191,179,168,142,123,102,72,59,40,31,25,16,22,22,16,18,22,15,21,20,22,18,17,23,15,21,20,23,14,27,10,48,144,136,112,205,207,135,139,133,142,147,201,237,128,40,23,8,9,14,13,17,14,15,15,14,17,18,14,17,19,14,16,15,21,20,14,19,21,24,28,28,35,37,46,46,49,59,56,57,65,67,70,71,61,52,49,57,69,64,67,62,46,29,26,39,35,31,28,39,38,53,78,76,66,41,24,24,34,36,46,61,70,77,89,88,83,86,95,106,96,94,108,97,95,109,95,66,55,42,45,61,73,179,242,249,249,237,236,229,231,231,226,226,231,231,229,230,229,230,233,230,229,229,228,229,226,228,231,229,230,228,228,229,228,231,228,230,231,229,231,231,231,232,232,231,231,231,231,230,231,230,230,229,231,230,229,229,229,229,229,229,228,230,229,226,118,3,1,6,9,9,9,10,10,11,12,11,11,191,193,193,192,194,193,188,192,190,190,191,191,192,188,190,188,188,191,193,189,192,193,193,193,191,196,193,191,193,193,194,191,188,192,194,192,191,192,193,191,193,193,193,193,192,196,192,193,193,191,194,193,194,192,191,193,193,193,193,196,193,195,195,192,195,193,194,196,195,193,194,194,194,194,195,196,193,198,198,196,196,197,196,196,198,195,198,196,195,198,198,195,196,194,196,198,193,197,196,195,195,194,198,196,196,198,198,198,198,197,197,196,196,199,196,200,200,199,199,198,200,198,198,199,198,200,199,198,198,200,200,200,203,199,200,201,200,199,202,200,199,199,200,200,202,202,199,203,199,196,201,198,200,201,200,201,200,202,202,200,200,202,203,201,202,200,200,200,202,203,202,201,201,200,202,204,202,203,205,203,202,201,201,202,203,200,201,200,200,201,200,203,203,200,200,201,199,200,200,199,202,200,202,202,200,200,200,200,200,199,200,199,200,201,200,199,199,200,200,199,199,201,203,201,201,200,200,200,201,202,202,203,199,202,201,200,201,202,205,205,205,205,205,204,206,204,205,205,204,204,202,204,203,203,204,206,204,206,206,208,207,203,191,100,23,77,181,214,214,203,201,211,210,214,210,207,211,212,213,216,220,221,219,222,212,199,192,181,177,180,186,196,210,216,223,227,223,225,220,220,219,219,220,217,219,219,218,218,220,222,222,221,220,220,221,219,220,221,221,222,223,222,22
1,222,223,221,224,220,224,218,212,207,126,61,135,217,252,229,155,157,160,173,163,166,184,179,179,136,156,162,163,156,110,123,130,100,29,21,47,33,33,29,43,80,67,67,74,99,173,231,234,208,219,230,220,206,209,234,223,148,179,186,163,203,180,106,107,187,160,160,153,162,186,189,231,212,217,197,232,217,58,136,191,161,165,199,232,240,183,68,43,68,125,79,32,50,18,66,233,241,214,156,103,210,200,122,129,196,198,150,146,164,236,245,236,240,168,198,207,187,224,189,90,116,196,206,229,227,229,240,253,220,127,66,1,15,21,22,53,33,20,22,27,50,51,66,29,42,186,245,249,198,143,115,56,84,84,80,71,66,50,67,138,137,151,100,25,22,14,22,22,19,53,82,93,103,96,79,77,110,141,160,182,195,191,190,189,164,137,107,71,47,33,32,69,20,22,22,15,19,16,13,17,15,19,22,24,30,29,23,24,23,16,23,23,23,22,21,22,20,18,22,23,71,94,148,228,197,135,141,139,151,159,224,216,96,34,18,7,16,16,17,17,16,16,16,17,20,20,15,18,21,15,16,19,19,19,16,15,15,16,16,13,15,18,17,21,18,27,31,28,42,42,50,55,60,66,65,71,81,81,74,65,49,40,39,42,41,41,44,43,33,59,83,77,65,36,36,61,57,54,64,66,49,37,44,54,75,92,93,87,97,98,102,88,59,41,39,41,45,63,87,84,107,213,244,248,246,235,235,235,229,230,231,230,229,230,231,231,230,230,229,231,229,228,228,228,230,228,228,231,230,229,229,232,232,231,231,231,232,231,233,236,234,234,234,233,234,232,232,234,231,232,233,234,235,229,233,231,229,232,231,232,232,230,230,227,117,4,0,4,8,10,9,10,10,11,11,11,11,193,193,194,190,192,192,188,193,190,190,190,192,191,189,190,190,190,191,193,192,193,192,191,191,191,193,191,192,193,193,193,192,192,189,191,192,191,192,193,190,191,191,193,193,191,192,191,193,191,192,193,195,195,192,192,192,193,193,193,194,192,193,194,195,193,192,194,193,194,194,193,193,191,192,191,195,193,193,196,195,196,194,195,194,195,197,197,195,196,195,194,193,195,195,195,195,196,196,193,195,196,196,195,196,198,196,194,196,198,195,198,196,198,199,197,200,198,198,199,198,196,199,198,198,198,198,201,200,199,199,198,199,199,200,203,202,201,200,201,200,199,200,201,202,201,199,201,199,201,201,198,201,1
97,200,200,199,200,199,201,200,200,201,201,201,199,199,199,200,200,200,201,201,200,202,202,202,206,202,200,202,200,202,201,202,200,199,204,200,202,200,201,202,202,200,200,203,199,202,199,199,200,200,203,199,200,201,199,201,198,198,200,196,198,198,198,201,200,199,199,199,201,201,200,199,199,199,201,201,199,202,200,198,201,201,201,202,200,203,201,201,204,202,204,202,202,205,204,202,205,201,201,203,202,201,200,204,205,205,204,208,206,202,212,143,115,186,217,223,211,199,208,208,213,211,212,214,214,220,219,220,219,213,201,190,184,176,179,186,194,208,217,220,224,223,222,220,217,217,216,215,214,215,215,216,219,219,221,220,220,222,220,221,222,219,220,222,221,222,221,222,223,223,219,221,223,218,222,217,225,218,215,192,119,66,143,223,250,225,172,178,172,188,170,169,152,124,156,133,145,166,183,203,196,223,213,148,51,40,57,29,33,29,53,109,83,120,127,132,217,250,240,198,212,229,212,206,212,236,224,163,191,185,193,170,152,145,122,183,182,196,193,211,215,232,230,213,205,166,201,132,49,178,198,159,158,197,241,234,174,60,39,50,104,75,29,46,19,66,236,246,201,162,144,220,206,167,171,179,191,160,117,175,220,221,231,224,174,198,207,195,235,240,130,68,142,158,198,223,235,250,253,202,63,10,3,22,28,39,46,21,22,21,49,55,61,34,31,160,247,247,248,206,155,107,41,61,53,76,72,76,55,70,132,122,141,81,20,22,18,44,50,61,70,94,139,165,167,141,141,141,139,121,104,93,63,51,37,29,24,15,16,16,15,17,17,14,21,20,13,17,22,19,22,29,34,40,43,46,48,43,37,27,22,29,18,17,27,19,24,17,23,27,23,23,50,177,250,196,142,146,143,156,185,243,188,67,31,14,15,20,15,24,20,15,19,22,18,15,20,18,17,15,17,18,19,19,17,17,16,17,16,16,15,14,13,15,15,14,16,15,17,18,16,21,22,25,36,33,39,49,58,61,60,69,78,84,78,72,69,54,57,41,45,72,64,53,50,76,83,66,79,83,75,55,26,27,44,38,32,40,55,78,98,105,90,63,48,55,68,93,121,128,95,130,240,244,248,241,234,237,231,231,232,232,231,232,231,230,231,231,230,231,231,229,228,228,231,231,230,229,229,232,231,232,232,231,231,232,233,233,233,235,236,234,234,236,234,236,236,235,234,235,234,236,235,234,232,
,29,33,39,47,29,35,49,46,39,18,48,142,190,175,97,43,35,16,112,239,239,183,99,45,22,19,15,14,38,50,66,56,28,21,16,12,19,23,23,21,20,31,28,23,22,21,17,19,18,17,17,16,14,19,19,14,21,22,21,34,33,33,40,50,67,55,43,57,71,73,55,36,33,54,68,59,44,34,30,24,17,19,20,16,19,18,21,23,22,21,17,17,24,22,19,32,38,43,56,49,45,36,23,36,71,103,103,107,107,63,49,73,76,26,98,235,239,247,247,239,241,236,237,235,236,234,233,236,237,236,235,238,236,234,236,234,234,236,234,233,234,234,235,233,233,233,232,234,231,232,232,231,231,230,233,230,232,233,230,231,230,230,231,229,231,230,229,229,231,228,229,231,230,231,229,231,233,231,231,228,118,3,0,6,9,9,9,12,10,11,12,11,11,186,190,188,188,188,185,186,186,185,186,188,191,188,185,188,186,189,189,187,189,188,190,192,191,192,192,193,192,191,191,192,190,190,192,193,192,190,191,191,193,193,191,192,191,192,193,191,192,192,192,194,194,196,193,194,193,194,194,192,194,193,197,195,195,194,194,193,192,195,193,195,196,195,193,193,196,195,196,198,195,195,195,195,195,195,192,194,196,197,195,196,199,197,199,199,198,199,200,201,200,201,201,202,201,203,204,203,201,202,203,201,203,203,202,203,203,203,202,205,204,204,204,203,205,203,204,204,204,204,203,202,201,202,201,200,199,199,198,199,200,199,197,201,201,198,199,200,199,200,200,198,198,197,196,198,198,199,200,199,198,194,196,198,194,196,194,196,196,196,196,195,196,195,195,194,197,195,195,194,193,193,193,194,193,193,193,192,193,193,194,194,194,194,191,194,193,194,193,193,194,194,193,192,192,191,191,190,191,190,191,194,192,192,192,191,193,191,191,193,190,191,193,194,193,190,190,192,193,194,194,191,195,194,195,195,194,193,193,196,194,197,196,194,198,196,197,197,198,199,198,198,198,200,199,201,203,194,198,203,206,207,206,200,200,207,206,205,206,207,208,209,212,211,210,214,216,213,214,216,215,216,217,218,218,220,220,218,220,219,218,221,221,222,221,221,221,220,222,222,221,219,220,220,220,221,221,221,219,220,222,218,221,224,203,217,227,220,222,217,219,218,219,221,220,216,218,223,222,178,74,152,174,92,83,165,229,186,186,
177,165,156,157,153,124,130,128,115,100,89,108,116,92,84,102,108,90,110,140,129,139,145,127,75,57,55,39,42,36,48,54,56,65,99,150,168,149,128,137,156,186,207,244,252,252,247,127,132,122,65,122,180,174,200,228,207,215,134,198,251,234,241,205,244,171,168,219,220,251,160,88,70,73,62,53,97,52,33,67,232,247,170,142,120,201,156,59,117,174,229,225,221,218,161,190,189,179,208,238,251,251,145,44,6,1,11,29,42,30,21,32,48,56,62,29,70,217,230,219,249,167,138,207,252,252,252,248,241,146,132,245,211,141,112,62,36,34,64,36,59,131,95,124,107,29,28,24,29,36,34,27,22,22,19,26,24,21,20,21,21,32,41,46,59,45,108,113,138,203,148,114,92,78,95,123,111,74,83,76,66,50,32,31,21,18,24,21,19,18,21,23,23,26,22,20,21,21,26,131,211,199,145,63,36,27,9,56,152,106,100,38,35,72,63,33,45,83,75,60,52,50,40,21,21,20,22,24,26,29,22,31,31,25,27,26,24,24,21,22,22,16,17,21,22,19,17,17,19,31,50,38,47,58,41,35,48,69,74,56,33,15,23,35,34,29,22,19,20,19,18,23,22,20,23,24,25,19,25,42,45,33,21,23,43,58,58,57,54,48,39,25,46,88,101,105,106,105,69,60,86,98,53,62,205,232,243,249,238,239,231,237,236,235,238,235,234,237,236,234,236,235,236,236,234,235,234,234,234,234,235,233,235,234,232,233,236,235,233,233,231,231,234,233,233,233,231,231,231,230,230,230,230,231,231,230,231,228,232,229,229,231,227,231,232,230,231,230,228,118,4,0,4,8,10,9,10,10,12,11,12,11,185,188,187,188,187,185,188,187,188,186,186,186,185,188,188,188,187,187,189,189,189,189,189,191,191,191,191,189,192,191,191,191,191,191,191,190,192,191,191,191,191,191,192,193,193,194,196,191,191,194,191,192,196,191,193,194,191,195,193,193,194,194,195,194,194,193,193,194,197,194,196,195,194,194,195,197,194,197,199,197,196,198,197,195,197,196,197,198,198,195,196,198,198,199,198,201,200,200,201,199,200,200,201,199,202,202,201,202,202,202,201,202,200,202,201,202,204,203,204,203,204,203,205,203,201,203,203,203,202,202,201,203,203,200,199,199,200,199,200,198,198,196,198,199,198,198,196,197,197,198,198,198,196,198,200,198,199,198,195,197,196,196,195,194,193,196,196,194,196,196
,193,194,194,194,195,195,193,194,196,193,195,194,193,194,195,193,192,193,193,191,194,194,192,192,193,192,191,191,191,191,191,192,193,193,193,191,192,193,190,189,192,191,193,190,192,193,193,192,193,191,189,192,190,190,192,192,191,191,191,193,194,192,193,194,193,192,193,196,195,194,194,197,195,194,195,196,198,195,196,195,196,198,198,199,200,197,188,198,202,200,205,198,193,200,203,204,202,200,204,203,200,206,208,206,205,205,207,206,206,207,208,209,210,210,211,212,211,212,212,212,211,211,214,215,216,215,216,216,215,215,216,215,217,214,215,215,211,214,216,217,217,216,222,191,198,222,214,217,212,214,214,213,214,214,211,213,223,223,179,102,179,174,113,93,106,141,106,98,71,67,57,64,96,95,112,102,105,119,111,141,160,113,98,149,167,162,182,198,197,200,195,162,118,74,43,25,17,12,10,10,9,10,11,10,10,11,10,24,39,53,78,129,155,199,198,104,126,139,97,174,231,218,245,232,208,175,107,196,239,231,236,203,244,171,166,221,216,251,157,85,75,65,60,44,101,63,40,57,222,246,139,183,196,223,152,100,166,182,233,220,222,196,150,212,184,163,217,249,251,181,49,4,3,7,33,38,34,24,26,48,53,65,27,49,184,246,232,216,184,120,191,250,252,252,250,250,149,131,218,252,237,134,91,81,85,25,37,23,64,123,87,120,84,25,35,22,33,38,33,30,19,24,21,20,21,23,26,20,24,23,25,35,33,32,46,38,114,165,108,92,76,76,94,98,69,55,62,51,47,35,29,35,27,24,23,19,19,24,24,15,21,23,24,24,15,28,21,82,131,106,81,53,33,15,10,19,69,85,46,15,59,147,110,106,203,246,191,118,135,125,68,29,16,21,25,23,18,26,29,27,30,34,37,32,33,31,28,28,24,28,33,39,39,33,28,23,27,46,55,50,50,42,39,36,41,52,65,66,48,27,17,19,18,17,20,18,22,20,22,22,22,26,21,24,24,24,73,111,81,54,35,25,60,71,55,59,54,55,46,24,57,90,99,95,95,108,81,64,79,77,52,61,176,230,238,249,238,237,233,236,235,235,236,234,233,232,234,233,234,235,235,236,233,233,234,234,233,234,233,233,234,235,234,232,234,234,234,234,232,233,233,232,232,232,233,231,234,232,230,231,230,231,231,230,229,230,230,231,231,230,230,230,232,233,232,231,227,117,4,0,4,8,10,9,10,9,11,12,11,11,184,187,186,188,188,187
,187,186,188,190,185,187,185,185,191,186,187,187,186,190,190,187,186,188,188,188,189,190,188,188,190,188,190,192,189,193,191,191,192,191,191,189,192,190,192,191,189,192,192,190,192,194,192,191,192,193,193,192,193,192,192,194,192,194,194,192,193,196,194,193,198,193,191,196,196,195,194,196,198,194,199,199,198,199,199,198,199,196,195,198,197,198,199,196,198,199,198,198,199,200,199,198,200,198,199,200,198,200,202,200,199,202,201,203,201,202,203,202,205,201,202,204,205,203,200,204,200,202,200,200,203,202,203,201,200,199,199,199,197,197,197,199,200,199,198,198,198,196,196,196,194,194,197,196,196,197,193,194,195,195,195,193,193,193,198,195,194,193,195,193,191,192,194,195,191,193,191,194,193,193,193,191,193,191,193,193,190,192,190,191,193,188,192,192,192,191,189,191,191,192,192,190,191,192,191,190,191,190,188,190,190,190,193,191,190,192,192,192,191,190,192,188,188,187,190,191,191,192,191,190,192,193,190,193,191,193,193,193,194,193,197,195,197,196,194,197,196,197,193,195,198,198,198,196,198,190,190,200,199,201,202,195,194,200,205,204,203,206,200,205,204,201,206,202,207,205,204,205,207,208,207,210,208,210,210,208,207,207,210,210,212,214,214,213,214,214,214,212,213,213,214,214,214,215,213,212,215,212,212,214,215,214,220,185,187,217,212,215,212,212,214,212,212,212,209,210,225,219,190,126,135,121,95,97,98,98,92,104,103,120,89,64,63,39,93,153,181,212,214,214,229,147,73,159,230,188,160,185,177,145,138,132,102,83,159,214,209,202,206,145,139,129,46,6,18,37,115,62,28,11,9,23,19,30,58,34,106,137,116,239,251,233,249,176,162,148,107,212,232,232,234,200,246,176,169,217,217,251,152,87,75,71,63,34,96,76,30,45,201,189,146,222,216,200,110,132,204,198,244,213,226,177,149,224,171,172,235,248,212,75,5,9,8,22,46,32,24,23,42,54,66,45,32,155,248,248,227,159,139,174,237,252,251,251,251,170,131,207,251,251,198,84,104,112,120,45,55,24,81,135,101,134,77,24,35,26,37,37,31,32,17,19,24,24,18,24,29,21,23,24,28,24,24,21,41,69,130,120,79,74,61,63,61,49,46,34,36,36,30,33,39,44,45,35,23,21,23,29,24,19,21,27,2
6,21,29,22,33,86,88,51,49,33,22,15,14,20,64,105,64,69,124,143,86,84,184,231,201,92,155,118,69,45,33,33,39,38,23,25,24,28,29,34,31,31,37,24,24,29,31,53,68,63,65,53,38,27,31,38,36,47,50,35,36,38,45,52,65,80,67,53,46,34,27,26,19,22,21,26,19,21,27,21,24,28,22,80,177,163,101,97,72,59,71,71,60,61,55,54,43,32,57,84,90,90,95,112,89,65,78,80,42,45,191,235,239,249,235,238,234,235,236,233,235,232,233,235,233,236,235,234,235,233,233,233,234,233,232,231,233,233,233,233,232,233,232,233,233,235,233,233,232,231,231,233,234,232,232,232,231,231,231,231,230,231,232,229,232,230,231,234,229,232,232,232,233,233,229,118,3,0,5,9,9,9,12,10,11,12,12,12,186,187,184,184,185,186,190,186,187,189,186,188,186,186,189,186,188,188,186,187,185,186,186,188,186,186,189,188,190,189,189,188,191,191,188,191,190,190,191,193,190,189,190,188,193,190,192,192,190,193,189,191,194,191,193,193,193,193,194,192,192,194,191,191,190,191,193,193,194,191,194,198,195,196,196,196,196,196,194,195,198,199,198,196,200,197,195,197,198,198,198,196,196,195,196,196,197,196,198,201,199,199,199,197,199,196,198,199,200,202,202,202,201,203,203,200,201,200,201,200,202,204,205,203,202,203,201,201,200,201,202,201,199,202,202,198,198,197,198,198,200,198,198,194,196,199,196,198,196,196,195,195,196,196,196,194,197,196,192,194,195,193,194,194,194,195,193,193,193,190,191,193,191,194,196,195,193,192,193,193,194,192,194,190,190,193,190,190,191,191,192,191,190,191,191,191,193,194,193,193,191,193,191,190,191,190,190,193,190,192,190,190,193,191,191,191,191,189,193,191,191,193,189,191,192,191,193,193,193,193,192,193,191,190,193,193,190,192,194,192,192,196,196,196,195,195,196,194,196,199,197,196,194,198,199,192,192,201,200,200,201,191,196,203,201,205,203,204,205,202,203,206,206,205,206,204,205,204,206,207,209,211,210,209,210,209,208,211,210,210,211,210,211,214,212,212,212,212,214,213,214,212,215,213,213,215,212,214,212,212,213,211,226,196,199,230,221,218,213,213,212,211,212,210,211,207,226,194,131,108,96,83,84,71,60,62,84,153,211,251,193,99,161,2
05,252,252,252,252,250,214,228,143,73,149,198,156,109,126,121,104,90,84,95,100,211,241,245,248,248,248,246,188,118,93,119,215,243,200,174,101,62,67,23,28,27,14,38,22,68,190,212,225,177,126,185,141,139,225,226,236,229,201,244,179,168,214,214,251,151,70,77,65,67,36,101,106,66,60,117,151,180,248,199,123,65,153,227,211,249,211,224,163,170,225,191,200,250,235,99,16,6,18,17,36,40,26,19,38,56,62,50,27,128,243,250,250,170,113,188,229,253,253,244,245,170,142,209,251,252,242,124,108,108,82,74,51,84,72,120,144,119,151,69,21,37,27,38,34,33,32,19,23,19,26,24,21,24,24,35,30,30,33,27,32,51,88,110,74,54,60,51,51,34,34,34,33,32,19,17,39,49,44,51,46,24,24,25,27,29,24,26,25,27,25,23,28,53,107,91,47,43,22,17,16,24,31,72,135,124,106,145,121,67,48,18,67,80,47,76,62,61,61,60,67,59,57,42,26,21,23,21,27,28,27,33,27,29,29,53,61,50,34,34,47,50,50,50,36,26,39,51,68,75,73,65,64,80,86,82,79,77,69,63,50,40,34,33,29,29,27,24,28,32,45,77,136,137,94,99,152,135,89,79,63,63,61,51,48,49,30,55,86,92,89,84,107,92,75,70,83,40,44,200,236,242,249,234,238,234,235,234,235,233,233,234,233,236,236,236,234,233,234,233,235,233,233,232,233,233,233,231,230,233,233,233,231,233,233,232,232,234,231,232,232,233,233,231,231,231,231,231,231,231,230,233,233,229,230,230,232,233,232,234,234,233,233,230,118,3,0,4,9,9,9,12,10,11,12,12,12,186,188,184,185,188,185,184,186,185,186,186,187,188,187,188,188,186,185,188,185,186,187,189,189,187,188,187,189,188,188,189,188,189,191,188,189,188,189,187,190,190,188,192,190,192,192,190,191,191,190,191,192,190,193,191,192,192,190,194,191,193,193,191,193,193,194,195,194,193,194,195,195,196,195,196,194,194,196,198,193,194,196,196,198,196,195,198,196,196,199,196,198,198,198,198,198,196,198,198,195,197,198,200,200,199,200,199,199,202,199,201,204,199,201,200,200,200,200,201,199,202,201,200,201,199,201,198,203,200,198,201,199,201,197,196,199,198,197,197,200,197,196,199,196,199,196,195,198,196,195,195,196,195,194,195,196,193,197,196,194,195,194,195,193,194,194,192,192,196,192,192,192,193,193,192,1
93,193,194,193,194,192,191,193,190,194,193,194,193,190,193,191,190,191,191,190,191,191,192,189,190,190,188,190,192,192,189,193,192,190,192,191,190,192,191,191,191,192,191,192,191,191,194,194,189,191,193,192,192,194,194,195,194,193,194,194,194,192,196,195,193,197,195,198,196,196,196,195,196,198,200,198,199,196,199,196,188,200,200,200,203,198,192,199,203,203,205,204,207,205,205,202,203,201,203,210,205,205,206,205,207,208,209,212,212,210,211,212,209,210,211,210,211,211,212,213,213,212,214,213,213,215,214,213,215,214,213,211,213,214,214,213,217,235,206,216,249,232,220,214,212,214,210,211,212,208,209,233,172,90,77,55,55,67,43,39,44,75,147,208,244,200,198,252,252,252,252,253,253,191,116,111,90,73,66,63,76,65,53,55,48,50,57,57,56,76,93,110,117,141,152,140,97,53,67,113,209,230,215,196,149,158,127,108,120,165,122,47,41,33,66,137,206,189,169,206,154,166,232,228,237,225,202,241,187,170,212,217,251,153,79,80,58,62,25,98,115,96,85,120,146,178,226,169,141,123,198,224,216,247,218,224,163,187,229,201,217,237,142,33,9,12,19,41,42,27,20,33,50,62,55,22,98,235,245,251,218,111,152,232,246,252,252,249,169,137,202,250,250,252,158,87,106,87,27,29,24,59,93,123,134,125,131,46,21,37,25,38,36,34,24,23,21,23,27,21,26,25,25,34,29,33,35,34,33,60,92,87,71,52,49,41,38,34,16,20,23,16,21,18,29,54,60,63,58,45,24,23,29,24,24,23,29,29,30,31,37,79,105,87,66,42,23,30,42,57,75,92,123,151,111,81,113,88,68,49,22,29,51,65,53,69,68,82,84,64,70,59,40,26,15,19,21,24,24,23,27,24,30,55,55,26,16,19,24,50,77,83,75,76,97,109,122,124,97,80,71,61,59,59,77,93,97,103,89,78,71,63,63,53,34,31,50,73,99,115,97,59,42,52,62,63,57,63,57,53,59,53,53,52,37,45,84,91,97,88,100,92,66,69,65,27,63,228,241,244,248,234,239,233,236,234,233,235,233,233,234,233,234,234,236,236,235,235,234,235,236,236,234,231,231,233,233,234,233,233,233,231,232,232,231,232,234,233,233,232,231,231,231,232,232,231,231,230,231,231,230,234,231,230,233,230,233,233,231,233,233,229,118,4,0,4,7,10,9,10,10,11,12,11,11,187,191,188,188,188,188,186,185,185,186,188,188,
187,184,187,185,184,187,186,188,189,187,186,188,186,186,185,185,188,187,188,186,188,187,186,190,187,190,188,188,190,189,191,187,191,192,187,190,190,189,190,189,191,190,190,193,194,192,192,192,193,194,193,193,193,194,194,194,196,193,197,195,191,193,197,196,194,194,193,197,196,195,197,196,197,196,197,196,199,194,198,198,197,195,195,197,196,196,196,199,198,198,197,198,199,196,198,199,199,199,200,199,200,201,202,200,201,201,201,203,202,201,201,199,198,200,199,199,201,198,198,199,198,200,199,197,199,198,198,198,198,196,196,197,197,196,195,197,195,196,196,194,194,194,194,193,193,193,195,194,193,194,193,195,193,195,195,191,196,193,195,192,193,195,192,193,192,194,191,190,193,193,194,194,193,192,191,193,191,192,193,190,191,191,190,190,191,190,192,190,191,190,188,190,189,189,191,192,190,191,189,191,193,189,192,192,190,192,191,193,191,191,193,192,191,191,194,193,190,193,192,193,193,193,196,195,194,198,198,198,198,198,196,196,198,196,196,196,198,199,197,198,200,203,196,192,198,201,203,203,196,198,205,206,206,205,205,207,208,205,204,202,202,206,208,208,207,208,209,209,210,210,211,211,213,212,210,210,211,211,213,213,212,215,213,214,214,210,212,210,213,214,213,214,213,217,214,214,215,212,212,218,232,178,150,214,230,217,212,211,213,212,212,212,211,207,230,155,61,36,11,61,108,103,98,90,84,80,60,78,48,63,153,157,152,122,108,74,16,6,29,21,24,24,12,33,29,26,27,29,25,21,29,19,29,28,33,34,24,33,28,31,31,45,74,82,79,83,96,119,135,128,119,145,181,145,112,89,66,59,152,245,204,221,212,141,195,230,233,239,223,203,238,197,173,208,214,251,156,82,93,52,64,14,69,73,36,76,116,125,160,234,186,145,156,216,199,200,242,223,205,160,206,229,217,203,147,50,3,18,16,36,42,27,22,27,51,61,56,28,81,222,247,247,221,139,137,205,249,249,250,251,173,124,186,238,250,250,169,90,96,108,34,46,109,87,64,73,101,128,126,111,33,23,39,27,38,34,35,27,23,26,26,24,25,23,25,30,33,38,36,42,35,60,144,177,141,87,66,61,64,60,29,16,18,17,16,24,24,21,57,72,72,79,64,44,27,26,28,27,29,30,40,37,31,49,86,95,61,37,29,25,52,60,63,66,64,8
9,149,145,96,76,83,97,87,87,55,48,69,64,64,65,75,67,55,55,53,44,27,16,16,22,20,27,29,27,29,32,55,44,23,22,19,24,57,84,91,101,98,92,77,71,65,53,50,47,44,34,39,61,69,78,92,104,110,116,129,120,89,57,81,126,128,113,78,53,35,24,27,23,20,30,37,40,49,55,57,51,52,40,45,80,95,98,97,103,92,64,39,42,45,139,246,246,249,249,240,240,233,236,236,236,234,236,238,238,238,236,236,236,236,234,234,234,234,236,237,238,234,233,233,233,236,236,238,235,233,234,236,234,236,236,236,236,236,236,233,232,234,233,234,233,231,233,234,232,231,235,233,234,234,234,234,232,235,236,228,117,4,0,4,8,10,9,10,10,11,12,12,12,184,189,184,187,189,182,185,185,187,188,187,186,186,184,184,186,184,186,184,184,188,184,186,185,186,186,184,188,184,184,185,186,186,186,188,186,188,187,187,189,186,185,184,190,189,187,190,188,188,190,192,189,187,192,188,189,189,187,193,190,191,191,190,192,188,189,193,191,191,192,194,192,193,193,193,194,195,196,195,194,196,196,194,194,195,195,195,193,194,194,191,194,196,195,194,193,197,194,194,197,196,194,196,195,194,197,196,196,200,199,199,201,198,200,200,200,198,198,203,199,199,198,198,200,196,200,200,198,195,196,199,196,196,197,195,196,196,195,194,195,195,193,193,192,192,195,194,193,194,191,194,193,196,194,192,195,191,193,190,192,196,190,190,192,192,192,191,191,192,192,194,191,192,193,192,194,190,191,194,193,193,193,191,192,190,191,191,190,190,192,191,189,191,192,190,190,189,191,190,190,191,193,190,190,191,187,191,189,188,191,190,189,190,190,189,190,190,192,191,191,191,190,191,192,193,189,190,191,191,191,190,191,193,192,194,196,194,197,197,197,197,196,197,196,197,196,198,198,197,199,197,200,200,203,197,193,202,200,204,199,194,200,207,206,206,210,205,205,206,206,206,206,202,204,204,205,208,208,207,205,207,207,210,211,212,212,208,211,211,211,211,214,214,214,214,214,212,213,212,211,212,210,214,214,211,212,212,216,211,211,211,216,208,92,39,129,198,215,215,212,211,211,212,210,208,212,221,144,52,56,79,97,86,75,83,56,54,69,41,28,23,26,15,27,44,14,46,54,19,15,15,14,14,21,21,18,23,20,26,29,17
,19,22,18,24,19,24,26,27,26,19,28,25,44,48,59,59,52,27,39,95,55,71,63,72,92,75,87,47,68,189,243,192,200,183,147,213,223,230,236,218,204,234,198,173,206,213,251,147,77,95,59,66,19,61,83,23,46,118,155,219,250,163,78,138,200,173,202,236,223,182,156,210,232,221,141,64,5,10,22,31,48,34,25,24,41,60,63,32,61,201,244,250,230,139,159,187,231,250,233,251,178,114,177,220,249,254,193,94,83,107,84,26,68,118,83,67,76,92,118,131,96,27,28,35,33,43,33,31,24,22,26,31,24,23,25,24,32,38,48,70,53,69,167,201,180,143,122,129,130,92,49,24,15,21,19,18,23,25,27,63,73,74,91,76,61,41,29,29,27,27,32,39,38,34,42,55,43,24,26,23,26,42,48,50,44,50,59,121,155,125,118,87,98,106,96,90,69,60,63,70,67,66,49,40,36,30,36,34,20,18,17,20,27,23,30,23,28,57,51,31,21,23,33,46,51,53,50,39,30,29,31,33,33,33,28,26,32,45,55,58,57,62,69,76,87,105,118,107,103,123,123,98,71,47,27,21,26,21,15,21,21,34,53,60,60,55,54,56,38,44,95,105,99,76,97,103,66,29,61,101,179,247,247,250,250,240,243,237,236,235,234,233,234,236,239,235,235,236,235,234,233,235,235,234,233,235,234,236,235,235,234,235,236,233,236,235,234,235,233,234,233,233,234,233,235,233,233,235,234,234,234,234,233,233,233,233,230,231,235,229,231,233,232,236,231,229,118,2,0,5,8,9,9,10,10,11,12,12,10,184,190,184,187,186,186,186,187,189,187,187,186,188,185,188,188,185,187,185,186,185,186,185,186,189,188,187,187,188,188,188,187,190,187,188,188,188,190,187,190,189,189,189,188,188,190,189,189,190,188,193,190,191,191,190,191,188,190,191,190,191,189,191,191,191,193,190,193,194,192,193,191,193,194,194,194,193,195,191,197,195,195,196,194,194,196,196,195,196,192,196,194,194,195,198,197,197,198,197,199,196,198,196,198,198,195,199,200,201,201,203,200,199,198,199,199,196,199,198,197,199,200,199,196,197,198,199,197,199,197,197,198,197,200,196,194,198,196,196,194,192,196,194,195,194,193,192,194,192,192,192,191,194,193,193,193,192,192,193,192,193,192,190,193,189,192,193,192,194,191,194,191,191,192,193,191,193,194,194,194,192,192,191,194,194,191,191,191,189,191,193,192,192,194,192,190
,190,189,194,192,192,193,191,191,191,193,189,190,191,191,191,190,190,190,191,192,191,191,190,193,192,189,193,193,193,189,190,193,192,194,192,194,194,193,195,195,196,196,197,198,198,200,201,200,202,199,199,200,200,201,200,200,200,200,193,199,203,203,206,196,198,206,209,205,207,207,207,209,207,209,210,208,208,203,203,205,209,210,207,207,210,210,211,209,212,211,210,213,212,211,214,212,210,214,212,212,214,212,213,211,213,213,212,210,212,214,210,212,212,214,213,222,195,78,7,88,191,216,216,210,210,209,210,211,207,211,219,177,107,54,44,63,34,33,46,38,39,63,53,36,24,22,24,45,63,29,59,44,17,21,14,24,19,19,18,20,19,22,24,24,20,15,25,14,18,23,17,27,25,22,22,21,25,51,44,44,76,78,39,49,66,51,60,43,52,44,50,53,27,76,191,191,133,200,156,159,229,208,233,229,222,208,232,206,173,203,216,251,152,73,96,62,73,30,93,104,30,51,139,215,251,251,136,79,159,207,181,207,237,223,152,154,214,242,198,68,15,7,7,30,43,38,23,18,42,55,64,42,47,177,246,246,239,150,162,220,210,246,250,235,182,124,171,227,251,251,251,134,77,103,95,73,35,84,84,39,88,95,81,114,124,85,23,30,37,31,41,33,33,26,25,24,28,29,29,28,27,42,98,147,163,137,146,199,169,137,122,119,130,88,53,33,17,22,19,23,24,23,28,31,66,72,66,90,85,69,55,31,24,34,33,35,38,33,28,22,19,17,22,21,18,19,23,34,38,36,45,35,52,74,74,122,129,101,94,102,110,104,93,83,70,66,62,46,33,23,21,26,33,27,22,20,23,26,25,27,33,21,36,59,48,41,34,41,39,28,25,23,17,17,21,17,15,19,25,25,32,38,46,46,50,49,35,43,53,62,64,67,65,70,79,69,58,50,32,24,21,23,25,23,32,27,51,76,69,67,64,54,55,44,45,107,122,101,78,84,100,63,67,110,95,98,165,221,249,249,242,242,238,238,236,235,234,236,236,233,235,234,234,236,235,236,234,232,233,233,233,234,233,234,234,233,234,233,234,235,233,235,238,236,237,236,233,235,235,233,233,235,235,234,234,236,235,233,231,231,233,232,232,234,233,232,232,233,234,232,229,120,3,0,5,8,9,9,12,10,11,12,13,12,185,191,186,185,188,185,187,186,187,186,188,189,187,189,191,188,185,188,189,185,187,186,189,186,185,188,186,187,187,187,188,188,184,186,186,186,189,189,186,187,1
89,190,187,188,187,186,190,188,189,189,189,188,189,190,189,188,190,191,190,190,189,189,191,192,190,189,192,191,193,192,192,191,191,194,194,196,193,193,195,196,196,196,197,193,196,196,193,195,196,196,195,196,195,194,197,197,196,197,194,194,195,197,198,196,197,199,198,198,200,200,198,197,198,199,198,197,198,198,197,196,200,198,196,198,196,196,196,196,197,196,193,196,196,197,197,195,195,195,196,197,196,194,195,195,192,194,195,193,193,192,193,191,192,193,192,191,191,191,189,192,194,191,193,192,191,191,189,192,189,192,193,190,192,192,193,194,194,192,192,192,194,196,194,193,191,190,190,193,193,191,193,192,193,191,191,192,192,190,191,192,189,193,190,190,192,189,191,192,191,189,191,192,188,192,192,193,190,193,193,192,192,192,193,191,191,191,196,193,191,192,190,192,192,194,194,196,196,198,198,198,199,201,202,200,201,200,201,202,200,201,199,200,203,198,192,202,206,205,204,194,201,207,209,208,206,207,207,210,209,208,208,208,208,208,205,204,205,209,212,210,210,209,210,210,210,213,212,211,211,210,210,210,212,212,214,213,212,213,212,211,210,212,212,211,209,214,211,213,211,210,214,223,218,127,90,165,212,220,215,210,211,209,208,210,208,215,211,212,122,5,2,21,29,33,35,31,37,64,56,45,36,29,25,66,66,42,74,41,21,28,13,23,23,15,23,15,24,27,27,26,17,22,18,21,19,18,19,25,31,22,17,19,42,53,36,57,77,80,55,67,72,56,63,38,57,63,53,39,33,128,178,151,170,220,151,179,230,206,235,227,222,203,228,211,174,201,214,252,151,71,104,63,72,41,74,98,39,62,157,223,249,218,127,151,234,226,184,207,242,212,147,177,212,235,131,26,9,4,19,45,40,26,12,37,54,61,47,29,155,244,249,249,160,156,219,242,222,236,252,171,135,182,226,249,252,252,244,96,84,108,77,76,29,57,93,96,111,83,86,126,126,71,21,29,29,34,39,29,33,24,27,29,26,51,80,49,49,135,226,231,198,151,142,177,143,122,112,83,59,38,32,18,17,20,23,21,25,29,36,32,46,72,68,93,90,68,57,36,31,32,31,36,34,33,26,22,19,17,16,15,17,16,18,21,23,21,28,27,29,27,53,122,132,131,103,83,97,100,109,96,87,75,55,50,30,22,19,22,27,32,29,17,21,21,28,32,24,25,22,39,61,60,61,57,35,20,15
,16,22,18,19,22,18,17,19,25,21,33,41,39,37,35,33,46,41,39,43,43,45,55,62,52,43,33,21,21,23,19,25,30,36,31,61,89,71,73,68,51,61,43,38,104,116,111,94,87,101,88,102,109,62,27,18,89,228,236,248,237,235,240,238,238,237,236,232,236,235,235,236,237,236,235,235,235,235,235,235,234,234,232,234,234,234,236,233,235,234,236,238,236,238,236,236,236,233,235,235,235,233,233,235,232,233,236,234,233,231,232,233,233,230,231,231,231,236,233,229,118,4,0,4,8,10,9,10,9,11,12,11,11,184,189,187,186,185,187,187,187,188,186,188,188,188,186,186,186,187,187,188,188,186,189,187,185,187,188,188,186,187,187,188,186,187,187,187,185,184,187,185,188,188,186,187,186,189,188,186,188,190,186,187,187,191,189,189,192,190,192,188,190,190,191,191,189,193,191,190,191,190,192,192,193,192,193,195,195,196,195,193,196,195,192,194,194,191,192,195,194,194,192,196,195,196,197,196,194,196,197,194,196,195,199,199,196,197,197,197,199,198,196,198,196,198,196,198,196,196,201,199,198,198,199,195,198,198,196,199,194,197,197,196,197,195,194,194,194,196,194,195,193,193,195,193,195,191,192,194,194,194,192,193,193,193,191,191,193,194,191,191,193,193,191,192,192,190,194,193,189,190,192,191,190,191,192,193,191,195,193,192,193,191,193,192,193,192,191,191,191,192,194,194,193,193,194,191,193,194,193,192,190,191,188,190,192,190,191,192,193,191,192,193,193,191,194,196,191,193,194,194,195,196,195,193,191,192,193,193,195,193,194,194,193,194,194,194,196,195,198,198,200,204,204,203,203,203,201,202,202,204,204,204,202,205,198,198,206,205,210,198,196,207,207,208,208,208,207,208,209,207,208,206,208,208,208,209,203,203,203,205,210,210,210,210,209,211,209,210,211,210,210,212,212,210,213,210,211,212,210,213,212,212,212,213,212,213,214,212,215,212,214,211,224,228,202,201,217,231,222,214,212,212,212,212,212,213,212,207,212,144,43,11,21,11,27,30,27,30,42,55,33,32,36,28,54,58,47,72,39,30,54,39,44,42,24,29,24,19,25,21,29,21,15,21,18,24,21,25,37,31,24,26,29,64,59,46,71,87,78,72,92,51,51,55,48,61,42,59,31,25,135,221,194,212,231,137,207,231,204,237,
221,223,198,227,212,173,198,215,251,152,62,92,65,64,38,57,89,52,38,149,237,233,208,126,178,251,218,186,197,244,207,167,208,193,155,61,3,12,6,41,42,25,20,29,54,60,54,29,126,245,245,251,169,154,214,243,247,200,245,201,129,198,238,250,250,253,253,202,45,89,101,66,64,26,31,50,76,61,48,98,136,128,55,21,36,33,35,40,32,34,23,22,29,42,106,139,133,147,189,224,180,120,89,76,108,102,83,62,34,29,19,23,19,20,26,25,28,34,39,38,37,31,47,62,66,62,57,75,57,37,36,27,37,38,31,25,24,23,17,16,15,14,15,15,18,18,19,18,18,19,23,55,100,129,131,116,103,87,64,75,93,90,79,65,57,34,12,17,15,32,38,27,24,20,20,31,32,25,20,25,18,21,39,39,31,19,22,22,24,28,23,23,27,26,32,27,19,27,21,23,31,32,31,43,57,45,44,35,23,24,26,35,30,28,23,20,22,20,24,35,30,29,29,66,85,64,72,66,55,57,51,35,84,106,114,99,89,105,103,122,80,72,82,34,23,153,231,249,247,235,236,237,233,236,234,232,233,236,236,235,236,234,232,233,234,236,234,233,234,233,234,234,235,235,235,236,236,233,235,239,235,235,235,235,236,234,236,234,234,233,234,233,233,234,233,233,231,232,232,231,233,232,234,235,234,235,234,229,117,4,0,3,8,10,9,10,10,12,12,11,11,184,190,187,185,188,185,190,189,187,188,188,188,188,188,186,185,186,189,186,188,187,185,190,188,186,186,186,189,188,186,187,184,188,189,188,190,186,186,187,187,188,187,185,186,187,187,189,188,188,187,188,188,190,188,191,188,188,191,186,191,188,189,194,190,191,191,190,193,195,194,192,195,194,193,193,192,196,197,195,195,192,194,194,192,194,192,192,197,195,193,194,195,195,196,195,193,194,194,197,198,198,197,199,200,196,198,198,198,201,197,199,199,196,197,198,196,198,198,198,198,197,196,198,197,195,198,200,197,197,198,196,196,195,193,193,194,196,198,194,193,196,196,192,193,193,191,193,191,194,192,192,191,191,192,190,190,194,191,191,193,191,191,191,191,193,191,191,193,192,193,192,191,194,192,192,193,191,192,194,192,191,191,192,193,193,193,194,192,192,192,194,192,195,194,196,197,196,198,194,193,192,194,190,189,192,192,192,194,195,193,195,193,194,197,196,194,194,195,194,198,198,193,194,194,194,195,193,192
,251,251,248,249,249,249,251,249,248,247,247,245,244,245,247,247,247,247,246,245,246,247,247,246,246,246,248,249,251,250,248,249,236,117,3,1,4,8,9,9,10,9,11,12,12,12,185,190,187,188,186,186,187,187,188,189,189,188,190,189,190,189,188,190,191,191,193,194,192,195,191,193,195,190,193,193,193,194,193,193,192,193,193,197,196,192,196,193,193,196,193,193,193,192,194,194,194,193,194,192,192,195,192,197,196,198,196,196,197,196,196,194,196,198,198,196,198,201,200,200,200,199,198,194,197,199,196,198,200,197,198,197,196,196,195,196,195,195,198,198,195,193,192,193,194,192,192,193,193,191,194,193,193,194,193,191,194,192,194,193,193,191,194,196,193,196,195,198,198,195,196,195,196,196,194,193,194,197,197,195,196,197,200,198,198,199,196,200,197,197,197,198,199,198,201,200,202,203,203,203,205,205,202,204,204,203,203,204,203,203,203,205,205,203,205,205,205,204,204,202,202,206,203,204,204,203,205,206,208,206,206,207,208,210,210,211,210,211,209,210,211,212,212,211,211,212,214,213,214,213,211,214,214,213,214,212,213,211,214,213,211,212,210,214,214,216,214,210,214,216,214,214,214,214,215,214,216,214,212,215,213,214,213,214,215,211,215,212,211,213,211,214,211,216,208,203,212,211,204,209,216,212,214,214,213,214,214,213,211,211,213,213,211,214,213,211,214,214,215,213,213,214,212,214,213,215,214,215,215,216,216,216,219,217,221,223,219,215,196,178,192,217,231,229,222,222,220,221,221,217,222,218,219,218,217,217,218,216,224,218,207,219,208,222,222,207,195,181,177,172,155,165,158,99,118,172,231,181,169,124,79,229,218,90,146,182,192,215,201,154,155,195,154,120,134,142,198,130,192,238,62,15,62,128,119,104,84,38,24,145,250,217,185,219,226,187,185,151,50,40,61,154,215,223,210,231,167,144,198,142,108,153,190,202,234,239,127,126,192,179,194,203,190,203,227,234,228,199,235,229,178,177,212,252,128,67,125,109,96,79,43,36,34,82,159,234,252,252,144,22,4,26,30,38,34,18,23,43,61,54,45,153,248,248,171,171,222,248,252,252,234,152,197,244,252,252,250,245,177,126,186,235,198,244,244,244,104,11,48,87,118,113,125,1
53,177,114,51,33,70,77,99,134,117,132,63,15,29,33,29,41,51,32,35,37,23,27,23,27,29,25,33,39,36,36,39,31,29,36,36,32,30,36,33,32,29,26,26,23,18,18,38,59,47,37,47,23,17,26,21,23,26,29,28,31,37,30,31,32,32,39,34,34,32,34,31,27,27,23,34,39,30,33,37,38,50,78,110,127,96,56,52,35,23,27,23,19,22,20,23,32,33,44,59,54,32,21,19,19,22,25,23,22,25,30,31,26,31,36,31,25,29,27,33,63,106,126,93,74,77,74,62,42,34,68,108,114,95,58,33,29,27,33,28,30,30,32,30,33,38,34,36,35,36,38,39,39,34,35,42,85,74,60,61,64,77,53,71,52,33,110,116,110,115,116,104,88,66,89,53,27,5,79,218,247,250,250,248,250,249,251,249,247,247,248,249,249,249,250,249,249,251,251,249,249,251,252,252,251,250,251,251,251,250,252,250,251,251,251,251,247,248,245,244,245,246,246,247,247,245,245,247,246,245,245,245,246,248,249,249,249,250,249,236,116,4,1,4,8,9,9,10,9,11,12,12,12,187,193,190,186,191,191,187,191,189,189,191,190,191,190,191,191,192,191,193,195,194,194,194,191,193,194,194,195,194,195,194,193,193,196,196,193,196,196,197,195,198,198,196,195,193,197,199,195,196,195,195,196,194,197,196,194,196,195,195,196,196,191,194,196,196,197,196,198,198,196,196,199,198,201,200,200,199,196,198,199,198,198,195,198,198,196,198,198,197,198,197,195,197,197,194,197,195,195,194,194,193,193,193,191,193,192,193,192,192,194,195,194,192,195,195,193,197,196,196,197,196,197,199,199,198,198,195,197,198,196,199,199,198,198,198,200,198,199,200,200,199,200,200,199,200,199,199,200,201,203,205,202,201,202,203,204,205,203,204,204,202,202,203,204,204,206,202,206,203,202,207,203,205,206,206,206,204,205,206,205,207,208,206,208,208,209,212,210,210,210,210,212,211,211,210,208,210,210,212,212,215,214,210,214,213,212,211,212,212,214,214,211,213,212,214,213,211,215,213,213,214,212,214,212,214,213,214,211,211,217,217,214,214,214,213,214,214,212,212,213,215,212,212,213,212,211,212,214,204,204,216,207,204,212,213,214,214,212,212,212,211,214,212,212,212,211,212,212,214,214,210,214,212,212,214,214,214,212,215,216,214,215,216,216,216,217,217,219,218,222,218,224,18
4,85,71,125,199,230,225,225,220,222,221,220,223,219,220,218,221,220,224,222,230,225,207,213,188,189,178,163,175,178,192,200,182,201,207,188,178,185,198,127,174,117,56,201,196,94,146,152,156,198,199,162,144,170,134,137,145,165,220,125,192,252,126,56,89,157,143,124,120,57,34,181,251,191,157,219,199,130,150,158,108,63,83,160,229,207,192,197,127,159,190,168,133,125,159,196,252,244,142,196,233,202,198,190,178,183,205,211,213,194,235,232,187,183,217,238,89,49,101,96,91,50,28,28,59,129,198,249,252,164,50,13,11,27,37,36,24,20,34,55,55,43,141,243,249,182,169,222,247,249,249,235,154,177,243,251,249,244,245,159,179,192,226,248,202,248,236,107,34,44,90,115,114,120,114,81,120,91,34,19,45,52,103,127,120,117,42,23,29,31,34,44,53,27,39,40,22,27,24,23,29,26,33,37,34,38,31,34,33,35,39,30,31,32,32,30,29,28,27,28,23,23,31,47,43,55,51,21,24,19,26,37,24,31,37,34,34,30,29,28,34,37,31,33,32,31,33,26,27,29,30,34,32,31,31,60,59,56,119,135,96,64,47,40,38,26,21,18,16,22,26,27,36,47,54,49,23,23,23,18,22,18,29,25,27,26,24,33,32,30,32,29,24,29,22,38,71,98,109,91,100,105,71,64,42,55,99,124,145,100,53,33,25,29,30,29,32,39,31,34,38,36,39,35,38,41,38,37,40,35,37,89,76,61,62,64,81,55,70,56,28,99,116,116,117,112,112,71,58,47,48,11,59,214,251,251,250,248,251,250,249,249,249,249,250,249,249,250,249,251,251,249,249,249,251,251,251,251,252,251,250,251,251,252,250,251,251,249,250,249,248,248,248,247,245,246,247,245,245,245,245,247,246,245,244,244,246,247,249,249,249,250,250,250,237,115,4,1,4,8,9,9,10,9,11,11,12,12,187,194,191,193,191,191,193,191,192,191,192,193,192,189,194,194,195,197,194,196,196,195,194,198,193,194,197,191,196,195,192,194,192,194,197,195,194,197,196,196,198,196,199,198,194,196,197,196,196,195,198,198,198,198,198,196,195,198,195,198,197,196,197,196,198,197,197,199,196,198,199,198,198,199,197,196,199,195,196,196,199,199,198,199,198,198,199,198,198,198,195,198,196,198,198,195,196,194,195,196,196,194,195,193,194,194,195,195,194,195,198,196,196,195,195,197,196,197,196,198,196,198,201,200,199,19
8,198,198,198,200,201,201,200,200,200,198,200,199,201,201,200,203,201,202,203,201,200,200,204,205,203,203,202,200,201,200,202,204,203,204,203,204,203,203,203,203,204,202,204,206,202,205,206,205,207,207,204,206,208,208,208,209,208,206,207,210,210,211,210,208,212,213,210,210,210,210,209,208,211,211,213,213,211,212,211,213,211,211,213,210,213,209,213,213,210,212,213,213,212,213,213,212,213,214,212,213,213,212,213,211,212,212,212,213,210,212,211,213,214,213,214,209,212,215,212,211,211,210,202,212,212,201,205,211,214,212,212,212,214,209,211,211,211,213,210,214,211,210,211,211,212,211,212,212,211,214,214,214,213,214,216,217,215,215,214,216,217,215,216,217,221,226,203,133,94,116,177,217,226,227,219,219,220,218,222,220,222,225,226,224,224,221,217,205,177,175,164,178,190,195,210,219,229,223,198,214,232,245,211,182,202,129,172,111,62,191,173,118,197,205,191,224,222,167,134,157,130,143,141,179,222,92,174,249,175,67,77,151,149,137,120,61,32,185,251,151,134,209,183,149,185,224,196,108,85,169,200,157,198,218,187,190,188,141,107,145,182,214,252,208,144,227,242,235,238,230,207,188,202,200,199,179,217,214,178,186,214,201,69,50,80,67,64,25,25,19,54,116,147,246,201,57,19,10,17,38,41,31,15,27,53,60,39,56,193,243,189,162,210,246,249,249,234,162,160,217,251,245,236,236,164,152,240,214,245,240,212,246,92,35,44,86,123,110,119,101,59,59,121,85,35,21,130,186,157,132,110,105,37,18,36,34,35,42,47,33,39,34,29,28,22,29,27,28,33,32,34,35,33,33,31,33,33,36,33,29,30,29,31,33,33,34,31,27,24,34,77,84,64,33,19,30,31,36,31,28,37,37,32,29,32,28,30,36,33,29,36,30,32,34,22,28,32,28,34,37,76,96,66,53,63,100,98,68,49,40,44,41,29,19,21,19,17,27,38,44,39,33,24,16,26,26,22,24,20,27,25,27,35,30,35,30,31,29,23,24,26,29,36,68,101,118,122,118,86,71,50,47,84,108,158,168,96,45,28,23,31,34,34,35,34,37,38,41,39,33,39,35,34,38,36,40,39,87,81,62,64,63,74,54,63,63,25,84,116,110,120,107,105,72,53,56,18,112,248,248,248,248,252,252,249,251,249,247,248,249,251,250,251,250,250,248,248,248,249,250,248,250,251,251,250,250,250,2
51,251,251,249,250,249,248,247,248,248,247,247,245,246,245,245,245,247,247,245,245,246,245,245,245,244,246,248,249,250,251,250,250,237,116,3,1,4,8,9,8,10,9,11,12,11,11,189,194,195,191,193,192,191,193,192,193,195,195,195,194,194,196,199,198,195,196,194,196,197,194,197,196,193,193,194,193,195,195,194,194,193,194,193,196,194,194,195,195,196,195,194,196,197,196,197,196,198,198,195,196,198,198,200,199,197,199,198,198,199,197,200,198,198,199,198,199,197,198,197,198,197,194,198,196,196,197,193,198,196,198,200,198,198,200,199,195,197,197,198,195,196,198,196,196,198,197,197,197,196,197,197,196,197,194,196,196,195,197,196,200,199,197,199,198,198,197,199,200,200,199,198,201,200,200,200,200,203,201,204,202,200,203,200,201,203,201,201,202,204,200,203,201,200,204,200,204,203,201,203,203,202,202,201,202,201,201,203,202,202,204,204,203,203,205,205,205,207,205,206,204,207,207,207,208,208,208,207,207,210,210,208,209,207,207,210,209,209,211,209,210,212,210,212,210,210,210,210,212,208,211,212,212,213,211,212,212,210,211,213,211,212,211,211,212,210,212,212,212,213,211,213,212,214,212,212,213,212,214,212,213,211,211,212,210,212,211,213,211,212,213,212,212,213,208,201,212,208,200,209,213,211,212,211,212,213,212,212,210,211,212,212,214,212,211,213,212,210,212,212,212,214,213,214,214,214,217,215,214,217,218,216,215,214,217,216,218,217,228,232,217,208,188,183,194,211,225,223,224,222,221,225,222,224,222,218,209,201,191,191,188,173,192,190,211,229,227,235,231,235,219,188,211,227,242,185,160,215,155,178,109,96,236,193,114,210,239,215,230,219,155,138,160,133,156,136,195,206,75,174,249,183,57,75,144,137,120,101,54,27,190,242,122,160,244,190,181,216,249,224,108,66,139,194,197,237,227,226,198,145,119,141,193,215,237,251,170,149,238,234,236,235,239,222,214,237,229,214,181,205,199,166,167,179,177,71,56,79,50,62,31,27,45,86,107,134,184,84,21,22,9,29,45,28,19,31,45,57,51,42,96,223,190,157,207,238,249,247,240,164,166,218,245,252,229,231,162,153,224,250,221,243,237,171,108,25,53,83,116,118,119,80,58,46,5
6,141,78,32,31,130,164,150,132,117,94,22,26,32,38,39,44,45,29,39,34,22,25,23,28,27,33,33,32,35,34,32,36,33,29,40,39,33,32,29,34,34,31,33,35,30,28,28,32,65,81,69,47,24,30,34,37,32,31,29,27,33,27,29,27,27,37,33,27,35,39,29,29,29,28,33,27,49,84,118,149,90,44,51,59,92,89,66,57,41,39,26,16,17,24,24,21,41,44,29,22,19,24,29,26,29,24,27,27,29,33,32,33,32,34,29,31,30,27,27,21,30,36,56,79,91,86,65,59,43,47,58,92,157,190,158,66,25,23,22,28,30,34,34,39,38,36,40,37,33,37,37,36,37,36,36,83,78,57,62,55,75,54,60,71,25,81,122,110,116,109,120,87,80,59,66,223,252,252,249,249,252,252,251,250,247,249,250,252,250,250,252,250,249,248,248,248,249,248,249,248,248,249,250,251,250,251,250,249,248,249,249,249,248,247,248,247,246,247,245,245,245,244,245,246,245,246,246,247,247,247,247,248,248,250,251,252,251,251,237,116,3,1,4,8,9,9,10,9,10,12,12,12,191,196,194,195,195,192,193,191,193,194,195,194,196,194,196,195,194,194,195,195,195,194,193,195,193,195,194,191,195,195,193,196,196,193,193,192,195,194,193,193,192,196,194,193,194,194,197,196,197,197,195,194,197,198,195,194,198,199,196,197,197,195,195,196,199,197,196,197,197,198,198,199,198,199,196,196,198,195,199,196,196,197,195,198,196,198,201,198,199,198,195,197,194,197,197,194,197,196,198,196,194,198,199,198,196,197,196,194,198,198,197,197,198,198,197,200,198,198,199,197,199,198,198,200,200,200,198,201,199,201,200,200,202,201,203,203,204,201,205,207,203,203,203,206,203,202,203,202,204,203,200,201,203,203,203,200,205,202,201,203,200,202,203,201,202,205,205,200,204,206,204,206,205,204,205,206,209,208,207,207,207,208,206,208,208,209,207,209,210,209,208,208,211,209,209,207,210,210,211,208,208,211,207,210,212,209,212,214,212,211,211,210,212,211,211,211,212,210,210,211,209,211,210,210,212,211,211,210,210,211,212,210,211,214,210,212,210,211,212,210,212,211,211,211,209,211,212,202,205,214,199,205,212,210,212,211,212,210,210,209,214,212,211,214,210,212,213,211,212,212,214,213,212,214,213,214,214,212,214,214,214,215,213,216,215,216,216,216,217,217,220,221,
227,234,241,234,204,189,198,215,225,229,228,224,223,214,207,198,193,193,194,199,207,214,201,214,211,219,233,227,227,223,224,204,182,206,224,239,155,150,227,172,191,97,102,249,217,119,175,220,177,194,212,158,148,160,150,165,143,214,215,94,178,237,204,102,91,138,129,115,70,43,27,189,223,105,200,249,190,186,210,217,160,94,66,131,184,234,250,215,198,141,171,142,169,222,215,249,245,148,169,241,225,237,227,235,217,214,238,235,227,202,234,224,177,162,151,171,98,57,65,54,85,60,101,150,169,178,98,99,36,15,21,11,44,37,22,27,36,63,47,58,133,181,186,157,199,233,250,250,244,179,152,215,246,251,244,235,171,137,214,252,249,208,249,213,41,36,35,92,114,115,127,87,40,82,62,74,146,67,26,33,28,59,118,127,122,75,21,26,33,38,48,56,45,28,44,31,23,28,24,28,27,34,34,28,34,39,28,31,33,35,33,33,34,33,30,32,38,34,36,33,37,32,29,31,40,61,71,54,27,29,36,36,28,29,29,30,27,29,32,29,34,33,30,32,33,36,28,28,30,35,44,74,113,98,141,190,112,50,40,40,79,96,75,59,46,39,29,18,27,26,27,28,34,34,24,17,17,24,25,30,31,27,29,33,32,35,35,34,41,33,34,30,31,37,28,25,30,30,23,28,40,44,49,49,35,32,42,46,95,162,133,60,28,21,23,30,38,37,36,38,34,34,40,36,35,35,38,41,33,38,30,78,84,53,65,65,81,56,57,81,27,70,124,113,122,109,119,97,78,62,47,200,250,250,248,248,253,253,250,250,249,249,249,252,251,251,250,250,250,247,248,247,248,248,247,249,248,250,251,251,251,250,249,249,248,250,249,249,249,249,248,247,248,246,247,246,245,244,246,247,246,246,248,248,247,248,248,251,250,252,251,250,250,251,238,115,4,1,4,8,9,9,10,9,11,12,11,11,197,199,198,196,196,196,196,196,194,192,194,197,196,195,196,196,196,195,198,198,196,198,198,198,196,196,196,199,199,196,197,194,193,194,196,195,194,194,195,196,196,195,194,192,192,194,195,192,194,196,193,195,196,196,194,195,196,196,195,197,198,196,195,194,195,194,196,196,193,197,196,197,198,196,198,198,198,198,197,199,198,201,199,197,200,199,201,200,199,196,196,195,195,196,195,196,197,196,198,196,196,199,196,198,196,198,198,197,203,200,197,200,199,200,200,199,200,202,201,199,198,199,198,200,200,199,
201,201,207,203,202,203,202,205,206,207,204,203,206,206,205,206,205,206,207,203,205,203,199,204,203,202,203,204,205,204,202,204,201,201,203,203,204,203,203,205,204,205,205,205,205,205,206,204,206,207,207,207,208,209,210,207,207,210,207,207,207,208,209,207,208,209,209,207,208,210,208,206,209,210,211,211,210,210,208,210,211,210,210,211,210,209,210,209,209,210,208,209,210,211,211,210,211,210,211,212,211,208,210,211,210,212,209,211,210,210,214,211,210,210,214,212,211,211,210,212,209,203,210,210,199,207,212,213,211,212,212,210,212,211,211,211,210,212,210,211,211,211,209,212,212,214,212,211,214,214,216,214,213,215,216,214,214,214,215,216,215,215,214,216,218,220,225,226,233,241,234,216,205,202,206,214,214,207,204,200,203,200,207,211,215,223,224,230,207,214,209,212,227,217,222,221,219,196,181,210,227,229,141,161,240,190,190,88,96,246,218,108,150,205,167,199,220,160,161,157,157,167,154,242,214,96,187,234,232,151,93,122,130,114,68,39,21,191,210,108,219,241,171,141,170,184,164,154,101,113,169,243,192,170,198,177,155,114,170,189,216,250,225,137,191,241,225,238,225,236,211,208,232,229,225,201,237,237,206,185,162,215,109,19,36,57,99,70,124,167,211,180,60,38,16,19,21,29,41,21,21,35,50,54,42,137,216,169,162,196,237,245,248,252,188,161,199,237,250,233,241,178,139,207,251,252,247,199,240,125,18,49,70,121,112,123,91,34,64,127,63,89,146,50,29,41,96,118,120,127,116,65,12,26,39,42,47,53,42,30,48,33,29,30,20,32,24,34,32,23,33,31,31,33,32,36,36,31,34,32,30,37,37,39,41,39,42,32,30,34,32,49,70,59,34,26,34,38,35,21,27,28,27,32,26,33,29,30,31,33,36,36,31,33,57,84,109,119,121,137,153,147,87,37,29,31,59,65,55,50,49,46,26,27,24,19,19,27,31,18,22,16,23,24,21,30,26,30,24,26,31,29,37,36,34,36,34,36,31,32,33,31,25,30,28,26,24,28,54,54,30,20,20,23,27,46,51,39,32,19,27,29,33,38,37,42,40,33,40,39,36,36,33,37,34,37,30,78,83,57,66,57,76,65,56,73,30,57,127,120,117,106,117,101,60,41,22,130,244,244,246,246,253,253,250,250,251,252,250,250,251,252,251,249,249,249,250,250,248,249,248,249,249,249,249,249,249,249
,249,249,249,251,251,252,251,251,250,248,247,247,247,245,245,244,245,245,246,249,249,249,251,250,249,251,250,252,252,252,250,250,237,115,4,1,4,8,9,8,10,9,11,12,12,12,198,202,198,200,201,199,200,200,200,198,198,198,198,198,196,199,199,197,198,195,198,198,199,200,202,199,198,200,204,202,198,198,198,198,198,196,196,193,195,197,195,198,195,194,198,195,194,195,191,192,194,194,197,194,194,194,193,195,193,193,196,198,198,194,196,198,198,198,195,198,196,198,196,196,198,195,199,195,198,198,200,202,198,199,199,199,199,198,199,198,198,197,195,194,195,198,198,198,196,198,198,200,198,199,200,200,200,200,201,201,200,201,202,201,201,201,200,201,201,201,202,202,203,199,200,203,204,205,203,205,207,204,205,206,208,211,207,208,207,207,207,207,207,205,206,208,206,204,204,205,204,205,205,205,205,202,205,203,205,207,205,206,207,205,203,205,206,206,206,206,204,207,204,206,208,205,207,204,206,208,206,208,207,206,209,208,206,208,208,207,208,208,210,206,207,208,206,207,210,208,213,212,207,209,210,209,208,210,207,209,209,208,209,208,208,208,211,210,208,210,212,212,211,211,211,209,211,213,212,211,209,207,209,210,208,210,210,210,210,212,212,210,210,212,209,213,206,202,212,202,201,214,211,212,210,210,211,210,211,210,211,209,211,212,210,211,212,209,212,212,212,212,212,214,213,214,215,214,214,213,215,216,214,215,213,215,216,216,219,221,223,223,227,230,230,233,228,220,196,181,178,181,191,200,212,214,218,221,222,226,225,223,222,225,204,211,204,204,223,217,220,221,218,188,184,213,230,220,126,172,246,191,192,74,80,214,211,135,148,229,192,216,226,142,153,159,168,167,168,247,205,99,192,236,247,190,88,112,124,120,66,30,28,185,193,108,212,189,124,130,197,208,205,205,109,98,162,202,163,201,210,188,148,59,117,192,225,250,198,131,217,234,227,237,222,236,206,211,230,227,223,197,233,232,202,160,158,222,99,27,47,59,53,62,89,131,155,121,43,23,12,19,35,31,29,21,32,51,56,42,97,229,217,143,194,229,246,230,250,217,162,222,235,252,237,230,191,143,200,248,252,252,246,174,118,50,27,84,98,123,118,90,47,55,115,144,46,104
,137,39,33,135,208,164,136,120,108,53,15,13,19,40,44,46,36,28,41,29,28,27,19,24,31,32,30,31,30,29,29,35,30,27,36,27,36,33,26,34,35,34,41,45,44,34,31,36,26,45,68,62,37,24,35,36,29,31,23,25,31,28,29,30,30,34,31,33,32,37,30,51,137,170,118,112,148,153,119,59,27,27,24,27,49,64,64,60,53,38,23,16,19,20,21,25,22,21,15,25,25,21,22,24,26,24,20,27,28,31,32,34,38,31,34,35,33,33,28,33,33,25,27,31,22,35,61,53,31,24,22,21,16,23,27,29,33,28,25,30,35,39,35,36,41,37,39,36,35,35,37,39,31,39,34,74,88,55,67,61,79,66,46,82,39,43,120,114,116,107,113,106,57,48,15,81,237,240,246,246,253,253,250,250,252,251,252,251,251,249,249,250,248,248,249,250,250,248,249,247,248,249,248,248,247,248,249,249,249,251,250,251,251,250,248,248,247,246,246,246,245,245,246,247,247,246,248,250,249,249,249,252,249,252,252,250,251,250,237,116,3,1,4,8,9,9,10,9,11,12,12,12,200,202,202,201,204,200,200,203,205,203,202,204,202,201,202,203,202,199,202,200,199,201,200,204,204,203,201,204,205,202,204,202,201,200,199,199,200,198,198,199,198,196,196,198,198,197,199,196,195,197,198,198,196,195,195,196,196,198,195,194,195,196,196,198,197,198,199,198,200,198,198,198,200,200,200,200,198,200,199,199,198,199,199,199,201,200,200,200,198,198,198,198,198,198,198,198,200,198,198,198,198,199,198,202,200,200,202,199,201,198,198,204,200,200,203,204,201,201,202,202,204,208,206,206,208,205,208,205,208,208,208,209,208,209,208,207,206,209,210,208,208,211,207,208,207,207,209,206,206,206,206,207,206,205,206,205,206,206,204,207,207,205,203,204,204,204,206,205,205,205,204,205,208,205,208,208,206,207,205,206,206,206,207,208,206,209,210,206,207,206,207,209,209,207,206,207,207,208,208,209,209,208,208,208,208,208,208,209,208,208,208,207,208,209,209,210,210,211,211,209,208,210,209,207,210,210,210,210,211,210,210,210,210,210,208,210,211,208,210,210,211,208,210,211,211,214,201,205,210,200,211,214,214,214,212,212,211,213,212,211,212,212,215,214,212,214,212,212,212,214,213,212,212,214,215,214,216,214,214,215,215,218,217,218,219,222,221,223,228,226,225,22
6,223,215,205,192,182,183,188,192,191,186,193,209,221,227,229,225,225,223,224,223,219,223,202,208,208,204,222,218,222,224,217,186,189,217,231,213,134,190,245,194,191,81,88,210,230,149,138,233,201,209,193,127,152,161,174,157,180,250,205,125,201,234,249,229,130,112,125,117,76,43,20,169,167,103,167,139,150,193,234,215,214,199,94,63,136,196,200,239,213,188,118,69,147,208,241,251,168,147,233,229,230,232,223,234,204,214,229,227,222,198,231,230,198,137,133,232,134,65,92,86,38,73,127,94,96,66,37,31,14,36,39,28,25,22,43,61,46,83,201,243,160,171,232,244,234,214,193,162,208,252,251,245,234,184,152,206,249,252,252,252,237,84,39,14,27,101,105,127,87,35,66,100,153,122,33,132,125,35,35,115,201,155,125,127,98,45,10,19,16,22,27,30,32,35,44,26,24,26,22,25,29,33,29,33,30,28,33,34,32,29,27,34,31,28,34,27,29,40,43,43,39,36,38,34,33,41,60,72,45,25,27,38,33,26,31,24,31,30,29,31,29,32,32,25,28,29,33,44,104,169,133,109,114,84,45,23,14,13,21,26,47,72,71,50,40,38,25,23,17,19,23,20,21,20,21,21,23,19,27,27,24,23,25,30,25,30,29,31,38,32,36,39,32,31,31,31,31,27,31,30,22,41,59,47,34,27,23,27,29,29,33,24,29,36,29,29,34,38,36,37,38,35,40,32,33,37,34,41,37,39,32,78,91,55,69,57,74,71,50,78,51,34,110,120,115,107,109,116,67,52,16,62,229,237,246,246,253,253,251,251,252,252,252,251,249,248,249,250,249,249,250,249,249,251,250,249,248,249,249,249,248,247,249,249,249,250,249,250,250,251,251,248,247,247,247,246,245,245,247,246,248,249,248,248,249,249,250,252,251,250,250,251,250,249,236,116,4,1,4,8,9,9,10,9,12,11,12,12,200,204,201,201,205,202,203,201,202,202,202,204,203,203,200,202,201,199,205,202,203,206,203,207,205,204,205,204,206,206,204,205,202,202,201,198,200,200,198,198,199,199,197,196,199,197,197,199,198,198,195,195,196,196,197,196,197,195,195,197,198,195,196,198,199,198,196,198,200,200,199,201,199,198,200,198,200,199,203,200,198,198,198,199,199,199,201,201,196,198,196,195,198,198,197,197,198,199,198,198,199,200,201,200,200,201,200,202,204,201,201,200,202,203,204,202,201,202,205,202,204,209,208,208,208,
208,208,207,208,209,210,210,212,210,208,210,208,210,208,208,209,208,209,208,208,208,210,210,207,206,207,208,207,207,207,203,206,206,206,207,205,205,205,203,206,203,205,206,203,207,206,206,206,203,205,205,207,205,206,207,206,208,205,205,205,204,207,205,206,207,204,205,208,207,207,210,208,207,208,206,208,207,207,208,206,207,206,208,208,208,209,208,209,209,210,209,209,208,209,207,209,210,209,209,209,208,210,208,210,210,209,208,209,208,208,207,211,211,207,211,209,208,210,210,210,210,200,210,207,200,213,214,214,214,213,212,212,212,214,213,214,212,211,213,213,212,214,213,215,214,217,221,221,224,225,224,226,225,227,230,229,233,236,234,233,235,237,236,234,232,226,212,198,187,190,197,204,215,224,235,232,224,218,217,225,229,233,233,234,232,231,233,232,239,217,221,220,216,238,231,234,237,226,195,206,231,247,214,141,214,252,205,201,86,122,238,241,161,120,214,187,191,165,110,154,172,179,165,202,250,214,152,217,234,251,247,126,111,97,105,80,33,29,129,157,117,175,171,174,218,228,187,167,145,107,78,124,170,215,235,175,149,136,136,188,238,246,248,149,169,241,221,236,227,225,235,204,212,227,227,220,195,230,236,187,125,181,252,154,68,85,89,60,151,151,72,58,65,57,25,33,40,33,23,30,39,60,43,55,178,248,192,148,194,244,234,223,192,149,192,233,252,244,231,199,136,202,248,252,252,250,250,154,36,34,3,41,99,118,87,31,56,104,130,151,87,37,137,112,31,22,82,130,120,126,127,88,26,23,26,25,28,24,45,39,35,48,26,27,24,22,32,27,31,36,30,33,35,30,36,34,35,32,27,31,29,30,33,34,31,34,35,36,41,38,38,34,29,63,70,52,29,21,31,26,30,32,32,30,34,35,31,32,27,31,27,31,37,32,33,46,115,105,46,49,38,24,21,13,14,14,31,41,59,66,47,42,35,27,25,21,21,19,14,18,25,21,21,23,25,25,24,25,24,27,27,24,33,28,35,40,32,36,30,31,35,29,33,30,32,26,25,27,43,63,42,28,29,23,29,32,26,32,31,33,33,32,33,27,33,42,38,36,39,36,39,35,34,34,35,36,37,32,80,96,57,72,55,71,78,50,74,53,33,95,115,116,109,110,115,69,51,19,54,218,237,247,247,251,252,252,252,251,251,249,248,247,248,249,249,249,249,250,249,249,247,251,250,248,250,249,248,249,248,249
,248,248,249,248,251,249,250,249,248,248,246,248,248,247,247,248,248,249,249,249,251,249,250,249,252,252,251,249,249,250,250,237,116,4,1,4,8,9,9,10,10,11,12,11,11,200,204,204,202,206,200,203,204,202,201,200,200,200,202,202,199,200,200,203,203,205,204,203,204,204,205,205,206,206,203,202,203,203,202,200,198,200,199,199,198,198,198,199,200,199,198,197,197,199,197,198,198,196,198,196,196,197,194,195,198,197,195,195,197,194,196,200,200,200,200,201,200,199,198,199,198,198,201,201,201,200,200,200,198,198,197,198,200,198,198,198,195,199,199,197,197,197,198,200,201,200,201,198,201,201,199,202,199,200,204,204,204,204,203,204,204,200,203,204,204,206,207,205,206,208,207,209,208,210,209,211,211,209,210,209,209,205,207,208,209,210,206,205,208,207,206,207,206,208,206,208,207,205,203,205,205,206,206,205,206,205,206,204,205,204,205,206,203,204,205,205,202,204,203,202,204,205,206,205,205,204,205,205,206,203,204,206,205,206,206,206,206,206,205,205,206,207,208,205,207,206,206,208,207,211,208,208,208,207,206,207,207,207,206,209,208,208,209,208,208,208,210,209,208,211,210,207,210,208,209,208,206,208,209,209,208,210,209,210,209,210,208,208,209,213,207,199,211,203,205,216,210,212,211,212,213,213,214,212,212,212,212,211,212,210,212,211,217,220,225,241,244,244,244,247,248,250,251,252,250,248,251,248,246,244,247,246,237,233,226,219,221,228,236,249,252,252,252,252,252,252,252,252,245,239,235,239,243,249,251,252,252,252,252,248,245,248,243,252,252,252,252,249,217,235,251,252,234,167,232,253,196,162,66,110,244,251,172,141,224,223,214,161,123,165,192,200,189,239,252,234,170,231,249,250,250,123,83,72,65,63,34,12,120,185,156,202,205,198,216,179,163,165,174,147,105,111,142,195,177,184,167,163,201,234,251,252,237,146,205,246,231,240,230,233,237,207,212,225,227,219,195,229,237,184,145,202,252,150,51,63,70,92,175,110,53,53,57,57,38,43,35,24,29,44,53,53,44,135,243,205,165,171,213,237,224,196,152,193,233,252,252,246,206,145,178,243,250,251,251,248,172,35,14,50,42,86,122,86,49,59,107,134,130,131,64,44,141
,92,29,33,45,107,125,121,137,71,16,29,28,33,40,47,56,35,33,46,26,20,27,24,27,29,27,32,31,32,33,33,27,29,33,29,29,31,30,35,31,29,32,25,29,33,31,32,29,36,34,59,74,51,37,21,22,28,34,33,31,33,26,32,35,29,31,36,34,33,37,34,32,40,88,66,45,49,26,23,14,14,17,15,21,39,59,58,49,42,36,27,18,14,21,19,19,21,19,24,26,27,24,25,24,25,24,26,26,29,36,29,33,36,34,34,30,36,33,26,33,32,28,31,29,33,51,55,42,31,28,33,32,33,29,31,35,28,29,32,31,30,31,35,39,38,31,34,37,31,35,29,34,32,33,31,71,98,59,72,65,54,62,52,62,57,28,89,119,107,107,102,115,75,49,28,30,183,236,247,247,252,252,252,249,251,249,249,248,248,248,249,249,249,248,248,251,250,249,250,248,248,248,249,250,249,249,249,248,246,247,248,249,248,249,249,248,248,247,247,248,247,249,249,249,249,248,249,250,250,251,251,252,251,251,250,250,249,250,237,115,3,1,4,8,9,9,10,10,11,12,11,11,201,204,203,202,202,201,202,202,202,201,202,200,201,200,202,204,201,204,203,204,206,200,203,203,203,205,203,201,202,204,205,203,202,201,201,198,200,201,200,200,201,199,198,200,201,202,199,199,199,196,201,200,199,197,196,197,196,196,197,195,198,198,197,197,198,200,197,200,202,199,201,201,198,199,199,198,199,200,201,200,200,200,202,200,200,199,198,201,198,201,198,198,199,198,200,199,199,200,198,201,203,201,201,200,202,200,200,202,200,201,202,203,201,202,204,203,204,204,206,205,208,206,203,208,208,209,209,207,207,208,210,211,212,208,208,208,207,208,207,208,208,205,206,208,207,207,207,206,206,206,208,206,207,206,205,205,204,206,206,206,207,203,204,203,204,202,205,207,203,205,207,205,205,205,203,203,205,205,205,207,204,204,204,204,206,203,206,203,203,207,204,207,206,205,208,207,205,206,205,205,210,208,207,207,207,210,207,207,208,207,209,208,209,210,209,208,207,210,210,208,210,209,208,210,210,210,210,209,210,208,210,209,209,208,209,210,209,211,210,212,211,208,211,211,214,204,205,210,201,211,214,213,212,209,212,213,213,214,214,212,213,212,212,212,211,209,211,212,212,206,203,205,202,201,204,204,207,207,216,211,182,173,170,171,176,165,151,137,138,147,164,188,199,206,
9,198,200,199,198,200,199,200,200,199,202,200,202,199,199,200,200,199,198,202,200,199,200,199,200,199,200,201,200,202,201,200,203,200,201,201,199,200,203,201,200,201,198,200,201,201,200,200,203,201,202,201,201,203,200,201,200,200,199,201,203,201,200,199,199,201,201,200,200,199,200,203,200,204,205,203,207,204,204,205,203,204,207,209,208,212,213,214,214,214,212,214,214,215,214,216,217,213,215,216,214,214,213,214,213,213,212,214,213,212,213,216,211,115,48,41,28,32,28,25,22,19,19,19,18,16,17,16,14,12,12,13,10,11,10,13,8,24,37,12,9,12,10,11,10,12,11,11,12,12,13,12,12,12,12,12,12,13,13,13,13,13,14,12,12,12,12,13,12,13,13,14,14,13,13,13,13,13,13,14,14,14,14,13,14,14,15,13,14,14,14,15,14,14,14,17,15,14,14,15,16,15,17,15,15,15,14,16,15,14,14,14,15,14,15,13,34,46,28,37,41,59,79,49,21,13,12,27,14,48,76,62,91,81,62,82,64,48,14,10,14,13,15,20,24,33,32,24,42,38,10,44,95,120,105,80,61,72,81,49,23,15,30,19,36,23,26,61,39,57,32,36,54,30,39,66,63,71,92,63,64,49,42,129,32,21,17,18,15,21,17,36,42,48,117,93,75,134,95,34,125,168,82,165,210,160,171,161,49,29,139,104,103,148,174,241,244,207,199,141,166,229,250,250,231,160,139,176,166,234,247,249,165,41,36,72,116,108,103,57,58,101,134,139,104,63,64,39,14,13,15,29,48,84,111,125,116,111,113,92,154,117,22,24,121,141,95,125,74,22,19,11,18,16,24,23,21,24,23,24,24,22,23,22,26,28,29,28,33,35,29,27,38,79,71,39,33,21,39,48,41,36,19,15,27,33,33,49,61,50,33,22,23,44,66,63,46,60,55,31,23,21,24,34,26,65,97,42,31,24,24,38,37,37,33,35,35,28,33,32,77,103,40,21,19,30,42,36,43,44,46,38,38,29,22,19,19,49,37,24,22,18,27,25,22,28,22,29,27,29,31,61,112,61,27,29,29,35,24,33,53,83,84,81,92,69,44,27,17,48,60,65,59,57,55,48,57,53,52,52,39,42,42,47,44,37,38,29,26,28,33,27,29,33,36,36,28,33,36,33,35,36,36,36,42,36,33,35,31,26,93,142,130,116,78,98,100,67,33,38,57,38,43,20,91,250,250,250,247,245,252,247,252,249,246,248,247,245,245,244,243,245,245,245,244,243,245,246,244,244,243,244,244,244,244,244,244,244,244,244,244,244,244,244,244,244,243,244,244,244,244,243,244,244,2
44,242,243,244,245,246,244,235,116,5,1,4,9,10,9,11,10,12,12,11,12,198,200,200,197,198,196,194,196,194,194,193,194,194,196,197,193,194,195,195,193,194,194,196,195,195,196,195,195,196,194,195,195,193,196,194,193,196,197,199,194,195,196,195,196,195,194,192,194,196,195,195,194,196,195,194,196,196,198,195,196,196,195,195,194,196,195,195,197,196,196,196,196,196,195,197,198,198,199,200,196,198,198,199,194,194,199,200,200,197,198,197,195,200,198,200,199,198,198,198,199,200,200,198,198,198,198,198,199,199,200,198,201,201,201,201,198,199,198,200,200,199,201,199,200,199,199,202,199,201,200,200,203,200,200,204,204,201,203,201,204,200,201,203,199,201,200,200,202,201,201,202,199,200,201,202,203,200,202,203,202,202,202,204,204,201,202,203,203,204,205,206,202,203,202,202,205,203,203,202,205,206,205,206,204,206,206,206,211,212,212,212,212,212,212,213,212,213,213,213,215,214,215,214,214,217,213,211,212,212,211,212,213,215,212,214,215,219,231,212,215,223,219,226,223,224,222,222,221,216,217,214,212,212,208,205,201,198,195,190,183,174,164,165,158,147,158,160,159,155,151,152,144,139,136,131,129,122,117,117,118,119,120,125,125,123,127,127,125,129,131,129,130,129,134,134,136,139,138,129,130,145,141,139,142,141,145,144,139,135,95,117,146,136,132,109,107,109,108,127,136,143,157,156,153,158,155,158,164,159,160,159,158,161,158,157,162,147,157,149,139,142,126,145,181,146,115,157,156,139,168,135,91,66,15,87,160,144,156,130,112,104,101,157,168,179,193,191,199,199,198,165,151,175,191,203,123,75,104,117,108,75,57,81,124,141,142,168,198,183,200,169,132,168,172,219,198,155,161,172,219,215,151,118,107,109,192,234,249,247,226,214,232,233,229,229,220,222,232,240,239,234,232,163,63,55,158,149,118,226,170,87,136,70,4,133,239,133,102,150,201,252,222,159,134,153,206,249,249,233,184,85,132,178,194,240,246,186,40,27,67,102,107,98,58,51,90,135,133,92,71,59,41,16,10,13,22,46,80,111,110,113,103,101,109,108,158,96,9,34,137,128,101,122,52,21,20,11,16,12,24,25,21,24,25,24,17,23,24,24,27,29,26,29,31,27,30,25,33,91,8
3,55,52,61,81,65,43,34,22,20,33,29,38,53,55,61,46,14,25,47,33,29,31,60,58,29,31,20,24,30,31,85,104,39,27,27,22,34,33,35,31,30,37,27,33,28,78,110,44,30,19,28,44,39,55,75,81,74,62,49,32,19,21,48,30,26,21,16,30,22,27,30,24,29,22,31,32,63,100,47,26,32,28,34,36,38,67,89,82,71,61,44,28,22,19,48,66,75,78,76,70,60,77,69,47,33,28,46,42,35,36,36,33,31,28,29,29,33,29,32,34,33,33,32,36,35,34,36,32,34,41,41,37,33,39,28,101,143,130,120,75,100,101,69,29,40,104,117,143,216,253,252,252,251,247,252,250,249,247,247,248,245,246,247,244,245,244,246,246,245,244,243,244,244,244,245,244,244,243,244,244,241,242,244,244,244,243,243,242,244,243,244,244,242,243,243,242,241,243,244,244,244,244,245,245,244,244,234,116,3,1,4,9,9,9,11,10,12,12,13,12,197,200,199,200,200,195,195,195,197,196,195,196,194,194,194,196,197,196,195,195,194,196,195,197,196,192,195,194,198,197,194,194,193,194,193,195,198,193,197,197,194,197,196,194,195,196,195,195,194,194,197,194,194,195,194,195,196,194,195,195,195,193,196,194,194,195,196,197,194,194,196,194,198,198,195,200,198,199,198,199,196,194,199,197,198,197,195,198,198,195,198,199,196,199,199,196,197,197,200,196,198,198,198,198,198,199,198,200,198,198,199,199,198,197,199,200,199,200,200,200,201,200,202,200,200,201,202,201,200,201,200,202,203,202,205,203,203,203,201,204,205,207,206,203,203,202,202,200,203,205,205,205,205,204,203,205,205,203,205,204,206,204,207,205,205,205,203,206,208,206,202,205,203,205,206,205,207,205,206,208,204,205,206,205,208,208,209,211,212,212,214,212,210,213,213,212,214,213,214,210,211,214,212,212,214,214,213,211,213,214,212,212,214,214,213,215,218,229,232,250,252,252,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,252,252,252,252,253,253,253,253,253,253,253,253,253,253,252,252,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,252,252,253,253,252,252,251,234,249,252,252,252,252,252,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,253,252,252,252,252,
251,250,232,244,252,252,246,194,219,251,232,251,241,214,168,103,162,236,192,187,159,151,125,145,238,250,252,252,253,253,253,253,252,252,252,252,248,184,93,101,109,107,83,68,89,153,234,244,252,252,216,252,248,218,220,233,252,211,193,219,244,252,248,208,132,117,122,219,252,252,252,252,252,252,252,252,252,252,252,252,250,231,251,251,138,19,92,184,110,145,223,98,104,181,113,100,230,252,128,111,173,225,246,184,98,149,209,244,248,248,204,145,129,139,215,227,248,202,57,34,64,99,101,99,58,44,88,127,139,95,61,57,45,16,11,16,23,46,81,108,117,113,108,99,100,97,112,154,80,7,52,147,121,105,111,44,21,17,11,18,15,24,24,19,24,21,19,26,22,27,27,26,30,28,24,27,26,36,26,43,99,80,92,89,71,72,68,58,43,42,34,28,29,36,44,54,69,45,18,23,46,46,25,15,33,39,31,28,16,27,27,36,115,105,38,31,22,23,33,32,33,29,29,33,34,38,31,90,128,58,26,23,34,39,33,125,188,189,186,166,122,74,25,26,48,31,24,18,20,26,29,24,27,27,27,21,29,28,63,103,47,31,33,26,37,33,41,78,83,57,42,30,24,22,19,22,57,92,77,75,92,71,53,74,81,58,37,31,33,24,25,33,42,29,22,33,32,31,29,29,26,27,38,35,34,33,38,34,30,35,33,42,35,37,35,37,31,90,146,127,123,80,93,103,79,18,89,239,252,252,252,252,252,252,251,249,249,249,245,245,247,247,246,245,244,244,243,243,244,244,244,244,244,242,242,242,244,244,244,242,241,242,243,244,243,241,241,241,241,241,241,241,242,242,244,242,241,241,241,241,242,244,241,243,244,243,244,242,234,118,3,1,4,9,9,9,10,9,11,12,13,12,197,198,199,197,199,197,196,196,195,198,195,195,197,195,194,193,196,196,193,195,197,195,198,197,196,196,196,194,196,196,198,199,195,198,196,197,196,193,196,193,195,195,193,196,195,199,198,196,196,195,199,195,195,196,193,194,194,194,196,193,193,196,197,197,194,194,196,195,196,197,196,197,198,197,198,196,194,198,198,196,196,198,196,197,198,199,198,195,196,199,201,198,196,198,198,199,199,198,198,198,198,196,196,198,200,199,198,199,199,196,198,199,199,198,199,201,201,201,200,200,198,200,200,203,201,201,204,200,202,202,202,204,202,202,203,203,203,206,204,203,205,207,206,205,204,204,204,203,206,205,2
07,205,206,207,205,206,205,205,203,205,207,206,206,206,205,206,206,208,207,206,207,205,208,206,206,208,207,207,208,208,206,207,207,207,211,211,211,213,211,210,214,212,212,212,213,213,212,212,214,214,210,214,213,208,214,216,214,215,214,214,213,214,213,214,214,213,214,215,218,225,227,231,234,232,236,235,237,238,239,241,240,242,242,242,242,244,247,245,245,245,247,246,249,247,244,249,249,250,252,250,252,252,252,252,252,252,252,252,252,252,252,252,252,252,252,252,252,252,252,252,252,252,252,252,251,252,252,252,251,244,252,252,252,252,252,252,252,252,252,231,239,251,252,252,252,252,252,252,252,252,244,247,251,251,251,249,250,249,248,251,250,250,249,246,252,245,232,246,237,230,212,232,250,251,221,185,227,236,235,251,232,214,184,141,217,208,154,170,181,182,153,170,237,249,252,252,249,247,251,249,219,244,252,252,250,149,83,97,91,103,70,78,75,133,237,231,230,174,184,249,231,217,171,181,222,193,199,207,235,249,247,226,128,71,115,163,188,227,248,235,227,245,247,244,244,246,228,239,185,179,243,214,87,14,151,164,93,154,134,57,109,191,156,128,228,226,115,137,208,229,238,144,125,230,252,252,251,227,155,227,215,195,230,241,200,71,43,63,98,103,101,62,36,88,124,128,97,52,59,42,17,14,12,26,48,84,108,114,120,113,114,103,97,99,118,145,63,6,77,151,110,110,100,31,20,16,14,18,18,24,16,22,24,21,27,24,23,28,28,32,30,27,29,30,25,31,22,35,67,63,116,99,69,59,36,37,39,37,45,33,24,30,29,51,59,45,20,17,40,49,28,24,43,37,26,27,14,25,22,62,133,95,36,27,22,24,31,33,35,32,31,39,30,42,33,98,158,73,37,16,38,36,54,216,251,250,241,223,205,157,62,39,50,27,27,16,18,27,22,27,26,29,23,18,31,28,75,109,62,29,29,24,34,32,41,64,56,40,25,29,36,30,26,53,116,110,79,84,70,52,28,34,55,50,47,44,39,29,26,48,45,26,24,21,29,33,32,31,27,32,37,33,33,38,31,37,32,31,33,34,30,41,37,39,29,76,141,132,125,87,94,105,86,20,95,238,252,252,252,252,252,248,250,247,247,247,245,245,245,247,247,244,243,242,241,242,244,242,244,244,242,242,242,244,242,241,241,243,243,242,242,244,244,242,242,240,241,241,241,240,241,241,241,239,240,241,241,24
1,242,244,243,242,244,245,244,244,233,117,3,1,4,8,9,9,11,9,11,12,12,11,196,200,197,199,200,196,197,197,198,198,198,199,196,194,195,199,195,196,194,194,196,194,196,194,197,197,195,198,194,194,195,197,198,197,199,197,196,194,196,196,194,192,193,195,196,193,196,196,194,196,198,194,194,194,194,194,194,195,196,195,194,195,194,196,198,195,195,198,195,196,197,195,198,195,193,198,195,194,198,199,196,196,198,195,197,198,197,196,196,196,199,198,195,196,199,196,199,196,198,197,197,199,197,196,195,199,196,198,198,197,199,199,199,199,197,197,198,200,200,199,202,200,202,203,203,201,205,205,204,204,203,205,205,201,204,203,205,206,205,208,203,206,206,207,205,202,206,205,205,206,208,207,206,205,205,206,206,206,207,205,206,208,207,204,203,205,205,205,206,208,208,208,207,207,207,207,206,208,208,208,205,205,207,208,210,210,208,211,210,212,214,211,212,212,213,212,211,212,212,212,213,215,211,212,213,214,214,211,212,212,214,213,214,214,213,215,214,214,215,216,215,215,215,214,215,216,218,217,217,218,217,217,217,218,217,218,218,217,220,216,217,219,217,217,212,212,217,219,218,219,216,218,219,219,222,221,224,222,223,223,222,221,223,222,221,222,221,223,223,220,223,222,222,220,214,224,221,225,218,210,225,220,223,227,224,224,221,223,226,203,197,216,220,225,223,222,224,220,221,219,210,212,214,217,218,214,220,215,216,217,217,217,218,215,220,213,202,214,207,189,180,212,224,231,193,166,215,218,208,227,207,188,153,133,190,169,154,186,194,187,151,171,221,231,230,221,222,218,226,217,186,220,223,230,250,123,81,87,78,84,69,88,61,151,208,155,203,168,190,239,210,172,105,179,220,193,197,187,222,228,233,235,153,73,113,136,141,200,223,221,209,217,223,219,220,226,210,214,186,153,184,176,47,61,193,125,112,135,96,72,87,141,135,84,160,185,113,171,216,234,212,155,203,249,246,246,229,142,125,198,229,214,247,175,72,49,56,106,104,110,71,42,89,122,134,98,58,60,39,17,14,13,29,48,87,112,116,117,113,111,106,94,101,101,125,148,49,5,100,143,104,117,82,25,19,8,19,19,17,22,21,27,24,21,26,24,27,26,29,29,30,32,27,27,28,38,33,3
2,34,39,93,77,47,85,75,48,57,66,69,43,21,20,26,40,55,45,19,22,43,41,32,56,67,39,26,22,16,30,23,79,128,66,27,26,24,27,33,30,27,32,32,36,31,41,36,64,137,101,60,46,83,82,126,194,185,144,159,155,152,104,55,49,53,22,21,18,16,30,25,28,25,22,26,21,30,27,69,118,74,39,27,26,37,31,41,60,60,44,34,65,74,27,53,117,144,134,83,62,59,34,30,48,71,43,34,37,29,31,36,45,35,23,30,19,25,34,29,32,24,26,36,30,32,37,32,34,35,37,29,33,34,41,36,39,31,63,138,137,122,93,95,115,98,18,69,237,252,252,250,248,251,244,249,248,244,245,245,245,244,245,244,244,244,242,244,242,243,242,239,241,239,239,242,241,241,242,242,242,243,241,241,242,242,242,240,241,239,240,240,241,241,239,239,239,239,241,242,244,243,244,242,244,244,244,246,244,233,116,4,1,4,8,9,9,10,10,12,12,12,12,199,199,199,198,199,198,199,199,198,199,198,196,198,196,196,198,195,198,196,197,196,192,198,194,196,196,195,196,196,195,195,195,194,197,195,196,197,197,199,194,195,196,196,196,195,195,194,194,193,194,194,192,196,196,194,194,196,195,196,195,198,195,194,195,193,193,195,193,194,195,196,195,193,195,198,197,194,196,196,196,196,195,195,196,193,193,197,197,197,197,197,193,194,198,194,196,196,196,200,197,197,198,196,196,198,198,198,197,200,198,198,198,198,199,198,199,199,200,200,203,203,202,202,202,201,203,206,206,206,204,204,207,206,205,206,205,203,204,206,204,206,205,207,208,206,208,205,206,207,205,207,205,206,206,204,205,204,203,205,206,203,203,206,204,205,205,205,207,204,207,208,204,208,209,208,207,208,209,207,209,206,209,210,208,212,210,211,214,213,212,211,213,213,212,211,212,212,211,212,212,212,213,213,212,212,212,212,212,211,212,211,212,212,214,213,213,215,213,217,215,214,215,215,215,214,214,213,215,217,215,215,214,215,216,214,217,215,214,214,214,214,213,214,214,207,208,211,211,212,212,214,211,213,213,212,212,212,214,211,211,214,212,212,211,212,214,213,213,213,210,211,212,213,207,208,215,212,215,212,196,212,212,209,216,214,214,213,213,220,202,188,217,224,214,208,209,209,208,210,209,204,196,202,209,209,207,208,208,208,208,206,205,210,207,
211,205,191,207,199,173,178,210,216,217,165,160,207,218,212,212,195,169,141,110,164,173,166,187,160,132,103,143,213,222,221,214,217,215,218,210,178,214,214,225,235,107,64,60,60,77,71,92,68,172,202,169,215,175,220,186,167,166,138,227,217,181,158,181,226,221,229,250,155,67,123,124,150,200,215,227,210,215,219,219,219,234,208,212,158,105,173,113,45,140,191,108,122,116,65,80,76,129,143,83,169,185,150,205,229,214,173,163,168,175,134,110,88,45,5,105,179,204,197,60,40,61,98,112,110,72,41,86,125,134,102,61,56,41,16,11,17,25,47,87,111,117,114,104,117,104,92,97,103,102,131,139,34,14,117,134,103,109,67,22,17,15,21,21,21,19,22,26,24,22,25,29,39,46,47,46,40,36,33,27,32,39,30,28,25,35,76,95,103,146,136,90,67,40,35,33,30,27,39,55,46,57,41,26,42,34,43,83,63,33,21,18,22,24,27,90,110,43,29,26,28,36,30,35,30,34,32,32,33,36,40,31,124,109,94,200,235,214,127,86,52,33,31,24,43,37,35,57,36,16,27,18,24,29,24,27,29,28,26,24,29,24,54,108,95,49,30,26,34,35,49,79,67,37,76,116,75,44,103,137,165,150,90,96,61,47,41,70,79,34,56,47,26,39,42,36,23,24,25,23,22,29,38,26,26,29,29,30,33,33,36,32,31,36,31,33,39,38,37,49,31,57,133,135,130,103,101,115,99,18,55,232,252,252,250,242,249,242,247,245,245,245,244,245,243,244,244,243,241,243,241,241,241,240,241,241,239,239,240,241,242,241,240,241,242,242,241,240,240,241,240,240,239,239,239,239,240,238,238,239,240,242,242,245,244,244,242,242,244,242,241,242,233,116,3,1,4,9,10,9,10,9,12,12,12,12,195,199,198,198,199,194,194,196,197,197,193,196,196,195,193,195,195,196,195,193,196,195,196,196,197,193,191,195,194,194,194,195,195,193,197,195,195,194,194,194,194,196,196,198,196,195,195,193,194,195,195,195,193,196,198,195,194,196,197,193,194,196,196,196,196,194,194,194,193,194,193,194,195,196,194,193,194,195,194,193,194,196,194,195,195,194,193,195,197,193,194,194,195,195,195,194,196,194,196,197,196,194,192,198,196,195,198,197,198,195,198,198,196,199,198,200,202,200,200,201,204,203,203,202,204,203,203,205,203,206,205,204,205,204,206,204,202,201,201,204,203,206,206,203,204,20
6,205,206,204,202,204,200,205,206,204,203,202,202,202,203,203,205,204,205,206,205,206,207,207,205,206,207,206,206,206,208,208,209,208,211,209,210,212,210,211,211,211,212,211,209,212,210,210,212,212,212,211,212,210,211,213,212,210,211,210,211,211,211,212,212,213,212,213,213,212,212,212,212,213,212,212,213,212,213,214,212,213,212,214,215,214,215,214,215,214,214,214,211,213,210,211,212,212,214,210,211,212,211,209,212,213,210,209,210,212,211,208,210,212,208,209,207,210,209,210,212,207,208,209,207,207,209,210,203,205,210,209,211,210,195,208,215,203,215,219,212,212,206,222,215,192,230,234,213,208,204,210,204,203,208,203,199,199,205,209,204,206,205,203,205,205,206,207,205,208,205,189,207,195,159,184,211,214,214,149,160,208,210,215,220,198,167,125,107,164,161,149,168,148,133,98,125,203,221,221,213,212,212,222,206,172,213,211,229,223,102,77,72,89,69,73,95,61,193,216,199,224,184,184,143,190,180,192,235,147,142,158,198,229,218,222,247,137,61,112,134,174,185,201,212,208,224,217,216,226,234,213,175,108,100,160,65,69,192,156,114,139,71,27,50,62,139,178,128,187,213,183,228,221,146,101,72,55,42,25,27,17,10,10,110,184,150,81,21,57,95,108,121,77,35,64,118,131,94,62,59,45,21,14,14,27,45,81,106,111,114,107,103,97,96,103,102,93,98,125,118,22,19,123,123,103,108,57,21,20,15,22,21,16,23,21,27,25,29,50,68,83,80,78,66,57,51,39,32,32,35,34,31,22,50,139,159,122,155,135,97,86,38,27,33,36,45,54,59,59,72,44,27,43,33,62,89,64,37,23,19,18,26,27,82,108,64,37,27,30,36,32,30,28,35,34,35,35,37,34,66,178,139,74,209,232,186,128,108,94,78,44,41,43,32,41,61,30,19,22,29,24,27,31,24,29,27,29,27,30,30,34,95,107,57,33,23,30,37,63,91,66,49,137,141,74,87,120,119,179,165,118,138,74,79,68,51,56,28,52,38,23,40,33,31,29,26,25,21,28,30,26,28,29,29,31,28,31,35,33,34,29,32,35,34,33,41,41,38,35,49,131,136,128,116,101,111,101,27,52,229,252,252,250,239,246,243,244,241,241,242,239,241,244,243,241,244,241,240,239,239,240,239,237,238,238,240,241,240,241,243,242,242,243,240,241,244,241,241,240,239,240,239,239,237,238,239,239,
238,238,241,242,241,241,242,239,240,241,241,242,241,233,117,3,1,4,8,9,9,10,10,11,12,12,12,195,196,198,197,198,194,195,194,195,198,196,195,195,195,194,196,195,195,194,196,198,196,198,193,195,197,194,196,195,195,196,195,197,197,195,198,195,194,195,195,197,196,196,196,195,196,198,198,197,198,196,194,196,196,196,195,197,196,197,195,195,194,196,198,196,198,194,194,196,194,198,196,194,196,194,196,195,195,195,195,196,195,194,194,193,195,196,192,194,194,195,193,194,196,194,196,194,194,196,193,195,195,196,196,198,195,197,196,196,197,198,198,197,196,199,199,200,200,200,201,202,202,202,206,203,202,202,201,203,203,205,204,202,203,205,204,203,203,204,202,203,202,200,201,203,203,200,202,201,202,205,203,203,202,204,203,202,203,205,205,206,205,204,207,205,204,206,205,206,206,206,205,206,206,205,207,208,208,209,207,206,208,210,211,209,211,210,210,212,209,210,211,210,211,210,211,209,210,213,212,212,209,208,212,212,211,212,211,212,214,211,212,212,212,214,212,212,212,214,212,214,212,213,214,213,214,213,213,213,211,214,213,213,214,213,214,214,213,212,213,213,210,214,212,210,211,211,212,210,209,211,208,212,210,212,210,208,210,210,209,207,206,208,209,206,208,207,207,207,206,208,208,208,203,205,209,210,210,211,196,203,216,197,207,223,208,208,204,215,198,131,166,214,210,208,203,205,203,203,206,205,201,196,202,205,204,206,204,201,203,208,203,206,205,206,208,189,209,191,150,188,210,215,208,151,176,207,210,212,222,205,162,133,120,155,152,162,185,178,184,144,153,210,217,224,212,212,210,218,201,169,212,212,235,216,134,105,80,110,78,78,77,50,164,200,199,181,169,185,178,222,186,202,186,136,165,177,221,223,217,225,241,120,80,125,128,186,160,162,193,207,232,213,223,233,240,178,127,110,125,130,43,148,183,97,148,133,34,10,42,36,107,174,111,141,167,167,195,127,64,27,27,41,56,66,60,53,47,47,196,197,62,43,43,103,107,111,85,26,50,105,129,94,61,60,43,19,12,15,25,48,79,101,111,112,114,103,90,101,99,105,101,93,105,122,100,15,30,136,114,110,116,50,25,17,11,22,22,21,24,24,26,37,64,93,117,118,109,101,85,78,71,6
5,54,44,50,43,48,39,55,130,149,87,116,118,110,117,45,62,63,38,49,57,66,72,77,38,29,44,33,67,98,71,34,15,15,19,27,24,32,76,73,45,43,46,47,45,48,46,47,45,50,48,48,46,104,177,95,31,50,76,71,55,117,143,127,102,66,45,37,46,57,43,35,27,26,29,28,32,30,30,31,35,29,42,31,63,144,113,53,33,23,27,24,52,93,67,105,185,114,87,110,117,95,133,150,141,162,77,73,77,44,37,26,38,29,26,34,33,28,27,22,23,27,24,30,30,29,39,38,29,34,39,33,31,36,33,37,32,34,38,37,36,39,36,43,119,136,125,116,98,104,111,35,36,217,250,250,250,238,246,242,245,239,240,240,239,241,241,242,241,241,239,237,239,238,236,237,237,237,237,236,238,238,239,243,243,242,242,240,240,244,241,243,240,239,240,239,241,240,242,241,240,239,239,241,241,241,240,241,240,240,241,241,242,243,233,116,3,1,4,8,9,9,11,9,11,12,11,11,193,200,198,196,199,194,196,198,196,197,195,198,195,197,197,195,197,196,195,196,195,195,196,196,194,195,197,197,197,198,199,195,195,196,196,195,196,198,197,196,196,196,196,197,195,195,197,196,195,194,196,196,196,196,197,196,196,195,196,195,194,197,194,194,197,194,196,198,196,196,197,196,197,194,193,196,194,195,197,194,195,194,193,195,193,196,194,193,194,193,195,194,193,193,194,194,196,193,195,195,195,199,195,195,196,197,196,196,198,198,198,198,198,198,198,196,198,200,200,201,200,202,202,203,204,203,202,203,201,203,203,200,203,200,203,204,200,203,200,203,201,201,205,202,203,203,201,204,201,199,205,202,201,200,200,203,203,202,202,201,202,205,203,204,203,202,204,204,204,204,205,205,205,205,205,206,207,207,206,207,207,208,208,207,208,209,211,209,208,208,209,209,208,210,210,209,210,210,209,211,212,212,210,211,210,212,213,211,211,208,212,211,209,212,214,214,214,214,214,212,214,214,214,212,211,212,213,212,213,214,213,213,209,212,211,212,213,212,214,212,212,211,212,214,212,211,209,210,208,211,211,210,210,209,210,210,210,208,210,208,210,207,207,206,208,207,205,208,206,208,208,209,205,200,210,209,208,208,212,197,196,214,169,166,207,205,210,206,208,175,61,73,171,203,209,202,202,203,203,207,203,201,196,193,205,206,204,203,20
3,202,205,202,201,204,202,208,188,206,190,149,196,207,216,203,153,182,203,209,210,220,202,154,115,113,171,178,185,195,171,198,176,167,206,211,223,216,214,211,221,198,168,211,212,234,199,141,116,64,104,87,97,79,24,120,167,151,168,198,177,208,198,121,198,196,178,197,200,225,215,217,229,233,117,130,164,136,183,141,141,178,194,230,222,227,240,176,105,132,139,148,84,74,214,139,19,148,147,35,54,66,33,62,108,71,62,78,84,82,53,27,43,53,81,93,92,56,73,71,83,190,123,52,62,74,113,105,85,34,54,106,136,108,63,61,46,23,12,12,29,54,88,102,108,115,110,115,96,94,111,110,102,90,84,124,143,102,44,90,162,112,116,116,44,19,12,17,25,19,17,24,28,42,67,104,116,141,131,111,101,91,94,101,103,100,93,78,79,83,85,93,96,118,84,135,105,157,122,50,113,57,52,61,63,77,71,70,36,33,37,28,61,78,57,29,17,15,22,27,23,20,27,39,77,90,84,91,82,87,97,98,89,94,95,92,98,83,56,30,29,24,52,50,41,93,122,125,105,70,38,27,22,37,45,39,41,32,37,39,40,43,39,40,45,46,53,43,85,122,72,35,27,25,22,27,38,67,70,113,175,87,70,120,113,87,83,87,110,127,50,59,83,52,48,43,46,39,39,47,45,31,19,21,23,24,31,37,36,48,53,61,69,63,56,42,32,35,35,28,37,31,33,39,33,38,40,36,110,136,132,125,95,104,113,48,26,199,249,246,250,235,245,240,242,240,237,240,239,239,241,239,238,237,235,237,236,236,234,234,232,232,232,233,236,239,241,244,242,241,242,239,238,240,242,241,242,242,243,242,242,244,243,243,241,242,243,242,242,242,240,239,236,237,237,239,241,241,234,116,3,1,4,9,9,9,10,9,11,12,12,12,196,198,198,195,196,196,197,196,198,197,196,197,196,196,195,198,195,195,198,197,195,193,197,198,195,197,195,197,196,194,197,195,196,198,197,196,194,196,198,196,198,196,195,200,196,198,198,195,196,196,196,198,198,198,198,195,196,196,195,195,195,198,196,196,196,194,196,198,197,195,199,194,195,197,194,196,195,194,196,195,194,194,196,195,195,194,194,196,197,195,195,196,194,198,195,194,195,198,198,195,197,198,198,196,197,194,196,195,194,198,195,196,199,196,199,199,200,201,200,200,204,203,202,203,200,202,202,203,203,199,202,201,199,204,201,200,203,200,202,202,202,2
00,201,202,203,202,200,202,201,201,203,200,201,200,203,202,200,201,204,202,200,203,201,202,200,203,205,202,206,204,203,203,203,206,205,205,204,203,206,205,208,207,210,210,208,211,205,207,208,207,208,208,210,209,209,208,211,210,210,211,210,208,210,212,210,210,212,213,211,212,213,212,214,212,213,212,212,213,213,210,211,212,213,210,211,212,210,213,214,212,212,212,211,212,212,213,211,212,212,210,211,210,212,210,209,210,210,210,209,209,210,207,209,208,210,210,208,208,207,207,210,210,206,209,208,207,208,208,208,207,206,210,201,201,208,207,208,206,208,199,193,212,160,141,198,204,210,205,206,196,105,100,170,202,213,205,201,203,203,205,205,203,198,194,199,204,203,204,204,201,202,201,203,203,205,207,186,209,188,158,205,207,218,192,145,190,205,208,208,221,196,146,114,99,160,181,178,157,143,196,166,153,204,201,215,211,218,213,221,195,169,214,217,228,156,131,106,75,118,88,115,72,26,103,170,179,205,211,169,193,135,123,215,201,188,179,208,224,213,214,234,222,134,201,201,160,161,128,145,144,163,226,231,239,198,74,84,142,180,152,60,141,235,96,57,196,127,55,77,66,62,48,59,56,63,66,42,51,44,53,50,39,44,40,44,39,29,38,30,56,46,59,90,85,112,78,43,48,94,137,112,70,62,50,23,11,15,26,49,91,110,112,115,111,112,110,97,103,115,102,93,79,112,179,163,125,108,130,164,101,104,105,35,25,12,14,24,18,22,27,41,61,103,139,125,117,109,106,98,92,111,126,139,137,119,93,97,120,120,103,93,95,105,118,98,151,78,24,73,45,62,78,92,102,66,50,37,27,33,20,30,38,29,23,18,19,19,23,22,26,24,26,77,95,90,95,69,111,149,124,101,136,139,121,100,46,21,21,29,34,67,56,71,142,141,141,116,56,31,27,25,17,26,51,56,56,63,59,51,61,69,73,78,78,84,69,69,58,28,27,23,21,29,28,27,40,32,75,118,59,56,79,100,85,70,74,87,98,73,86,117,81,55,71,68,55,56,62,42,30,25,27,29,30,44,48,59,63,71,93,102,93,76,60,46,31,28,33,35,32,34,40,34,33,39,25,100,143,134,127,93,97,106,48,23,189,249,244,250,236,243,235,239,239,237,238,237,238,237,236,237,236,234,235,235,232,232,234,233,234,236,236,238,240,244,244,242,241,239,239,243,244,243,244,243,243,244,244,
244,243,243,242,242,244,244,243,242,241,240,239,237,236,237,239,241,241,232,116,3,1,4,8,10,9,10,9,11,12,12,12,194,199,196,196,200,195,195,197,195,197,194,194,194,194,196,197,197,196,196,196,196,194,195,196,194,196,196,196,194,196,198,195,197,196,194,195,195,198,197,195,199,196,198,196,194,196,195,196,198,197,196,195,198,198,197,196,196,198,198,197,193,196,197,197,197,198,197,196,197,196,198,197,196,197,197,197,196,195,197,194,195,196,193,193,194,196,192,193,196,195,197,195,194,196,195,196,199,194,197,197,196,199,197,196,196,198,196,197,195,196,196,196,198,199,201,197,199,200,200,202,200,200,198,201,202,202,200,200,200,200,203,199,202,203,202,203,200,203,200,202,203,200,200,199,200,199,199,200,201,204,202,202,203,202,203,203,203,200,200,202,202,203,202,202,203,202,202,204,202,202,203,204,205,204,205,205,203,205,206,205,205,205,207,207,206,207,206,206,206,207,206,207,208,208,209,207,211,210,209,211,207,209,211,210,210,209,210,209,212,209,211,211,209,210,210,209,209,211,210,210,213,210,210,213,212,213,210,210,212,210,212,213,210,211,209,211,212,209,210,211,212,209,211,207,208,210,207,211,209,209,209,208,206,206,208,209,208,207,208,206,205,206,206,204,206,205,203,207,205,205,206,205,200,202,207,205,206,204,207,200,189,211,188,165,201,205,208,202,205,213,181,189,197,199,212,203,205,201,199,203,199,201,200,193,193,199,202,201,203,202,202,204,200,200,198,210,189,203,182,151,208,207,217,177,147,195,200,211,207,223,192,149,116,72,122,148,160,159,150,201,172,156,197,191,205,204,207,210,224,194,169,212,219,210,130,134,103,75,119,94,116,73,44,158,216,205,243,171,131,207,150,156,214,178,160,172,217,224,215,211,241,203,153,242,221,173,146,148,158,125,162,219,235,245,137,31,107,153,164,150,145,191,167,72,88,160,86,58,54,38,56,44,48,59,69,79,73,61,53,44,29,21,17,23,21,18,15,16,15,35,54,85,97,83,82,47,43,84,125,108,72,63,47,27,12,13,21,46,89,104,109,111,107,110,99,100,100,103,105,89,85,115,185,221,170,145,132,146,160,95,110,92,23,19,12,17,19,21,32,26,56,107,125,132,115,92,94,96,90,7
208,211,212,212,212,213,213,213,213,212,212,212,210,210,209,208,208,208,208,208,207,209,211,207,208,210,210,210,209,210,210,210,208,208,207,207,208,208,209,209,212,210,208,207,205,206,207,209,207,208,207,206,208,207,207,206,205,208,207,207,207,208,207,208,207,206,205,205,206,205,207,202,203,203,201,201,202,199,200,195,194,200,199,199,199,199,198,198,198,196,199,201,196,182,188,201,199,196,198,198,200,196,189,196,199,198,200,201,188,193,198,198,200,196,198,198,199,200,198,198,199,192,192,198,199,200,206,162,150,199,196,182,181,171,177,198,193,190,143,101,111,101,121,116,97,94,79,81,145,219,180,183,199,135,177,204,200,212,202,172,172,182,141,148,89,179,200,103,109,99,136,129,90,77,168,186,164,193,178,188,151,185,232,221,230,222,220,226,225,128,193,225,131,224,226,218,226,210,224,217,210,181,93,82,84,54,59,13,16,25,13,18,19,21,24,22,21,24,19,19,16,15,21,19,25,18,17,19,17,16,24,34,44,41,49,90,101,84,66,56,40,16,14,17,37,63,84,113,122,101,83,103,110,83,90,88,73,89,75,61,64,52,49,71,100,132,133,134,120,116,122,113,119,119,117,126,111,110,107,92,91,119,147,148,162,158,147,141,151,151,146,147,141,128,128,125,110,98,86,87,75,66,63,57,61,62,65,63,63,62,62,68,65,61,70,63,69,67,63,56,56,61,55,54,49,46,44,44,42,39,39,32,37,36,24,26,27,29,35,35,39,41,45,46,47,36,16,16,13,20,19,14,18,16,22,20,17,18,21,19,21,20,72,132,119,97,84,78,77,47,15,14,13,16,21,21,17,21,17,19,18,18,19,18,18,23,42,57,55,48,32,31,39,37,41,47,50,54,55,60,63,66,66,72,71,66,66,63,59,63,57,63,65,62,83,94,115,125,89,135,130,127,121,121,106,89,86,71,62,56,53,54,48,48,53,59,61,53,50,34,18,26,29,39,52,54,47,33,21,84,127,118,107,102,95,89,110,106,120,119,99,130,134,146,211,229,238,248,248,246,246,231,226,233,240,247,247,230,235,247,224,226,227,225,224,221,222,220,222,222,224,224,224,229,227,229,228,226,229,226,225,223,223,223,224,226,226,227,228,228,230,230,229,230,233,232,230,228,117,4,1,5,9,10,10,10,10,12,12,13,12,202,206,203,202,201,198,197,199,200,200,199,199,200,199,198,195,197,198,198,199,196,194,198,195,194,195,
198,194,194,194,193,195,196,194,193,193,193,193,193,190,191,190,190,191,189,191,192,190,190,192,189,193,192,192,193,191,190,189,191,188,189,191,192,189,191,191,191,192,191,192,190,193,191,190,194,190,191,193,190,193,191,192,195,192,196,196,196,196,195,197,196,198,199,198,198,198,196,199,198,196,197,199,198,197,199,198,199,199,202,200,200,199,200,200,198,200,199,200,201,198,202,202,202,201,201,200,200,202,202,202,201,203,202,200,202,202,202,203,202,205,203,206,206,202,207,207,206,208,206,208,207,206,206,206,204,203,203,203,205,203,205,206,203,201,204,206,206,205,203,205,204,202,204,205,206,207,207,206,206,208,209,208,208,210,210,211,212,213,212,212,212,210,211,212,212,214,213,213,213,212,213,210,207,208,209,208,211,208,207,210,208,208,210,209,208,209,207,208,210,210,210,208,210,208,206,208,209,209,209,212,208,208,209,209,210,206,206,208,211,210,208,210,207,208,208,208,209,208,207,210,208,209,208,207,208,207,208,203,206,210,206,205,209,208,205,206,204,203,196,199,200,198,199,196,199,200,198,198,199,196,196,196,187,185,198,200,194,195,195,198,198,188,197,198,197,201,199,197,190,197,200,199,200,198,201,199,199,198,199,200,194,190,194,199,205,199,156,164,203,197,163,159,171,185,192,182,182,133,105,106,91,105,93,77,66,62,71,139,215,178,190,204,148,175,203,207,212,185,153,165,159,154,173,111,194,171,76,91,88,95,73,108,120,184,199,199,204,147,160,134,186,221,205,218,214,219,235,207,121,223,211,131,226,218,224,224,218,214,220,205,105,56,46,60,53,36,12,15,18,15,23,19,16,21,19,18,25,13,19,20,14,19,18,19,23,19,16,20,20,31,46,41,44,74,99,94,65,56,42,18,12,19,35,65,92,100,110,84,41,43,98,107,111,148,113,131,177,120,103,125,102,124,142,119,100,71,61,52,52,52,55,63,55,57,53,55,65,63,60,58,61,62,67,68,59,63,60,58,63,63,66,60,59,62,65,63,70,66,69,65,69,71,67,65,63,75,67,67,65,55,56,54,54,53,47,47,41,41,31,29,36,27,27,27,21,20,23,19,18,21,17,21,20,16,17,19,22,43,57,65,66,75,82,80,61,23,12,18,19,19,26,19,20,23,23,24,19,23,19,24,32,112,177,146,133,128,139,135,64,18,12,16,17,18,20,17,19,
19,17,19,20,21,22,19,33,70,94,89,77,69,56,41,36,21,18,19,24,27,27,32,36,42,42,51,54,50,57,55,55,64,63,64,65,65,66,63,62,59,62,65,60,69,87,106,112,127,133,128,128,109,107,94,88,95,83,83,81,69,58,53,42,39,42,45,46,30,36,27,69,127,127,118,104,102,100,103,103,118,114,88,82,47,59,164,232,245,249,223,165,139,115,109,118,138,157,141,98,154,215,217,227,220,220,218,217,221,219,221,222,218,219,221,221,223,225,226,229,227,225,226,226,222,222,225,225,226,226,228,228,229,230,228,229,229,229,228,225,117,4,1,5,9,11,10,12,10,12,13,13,13,200,203,202,203,204,200,200,201,201,200,201,201,200,198,199,200,198,198,198,199,197,196,198,193,197,197,194,196,193,196,193,195,196,196,194,193,195,192,195,191,189,192,192,193,191,191,192,193,190,190,191,190,192,191,190,190,189,191,193,191,190,191,189,192,191,191,193,188,190,190,191,190,191,192,190,191,192,192,194,189,193,193,194,195,195,196,194,197,196,198,198,199,195,196,198,197,199,198,199,199,199,199,199,200,199,200,200,198,198,200,198,197,200,200,200,202,200,198,199,200,200,200,201,201,202,201,199,202,203,203,203,203,204,204,205,203,204,204,203,205,206,207,209,209,208,210,208,206,209,208,210,209,208,211,208,207,204,205,205,205,206,205,205,204,205,204,204,205,205,201,202,204,202,204,205,204,208,207,210,208,207,211,209,211,208,210,212,212,211,212,213,213,211,211,212,212,212,210,211,208,210,210,209,210,210,208,207,209,210,208,207,209,207,210,209,208,210,208,209,208,209,210,208,207,210,208,209,213,211,210,209,207,210,210,208,210,209,210,210,209,208,209,209,209,209,208,208,208,208,209,210,208,208,207,207,208,210,208,207,208,206,208,208,207,207,206,208,203,198,202,201,200,199,199,199,197,198,199,198,197,193,193,193,184,195,201,193,194,196,199,197,188,194,199,195,199,200,192,188,191,197,197,196,196,196,197,196,195,196,197,198,190,190,193,208,188,153,176,197,200,154,150,184,189,185,165,143,105,107,100,74,91,66,56,49,47,71,140,214,177,190,212,155,170,213,211,207,168,157,183,163,171,199,127,149,122,77,93,98,90,53,48,71,184,202,231,171,136,176,131,207,220
,201,206,200,211,230,179,129,247,203,139,232,220,224,220,227,218,228,187,59,44,58,52,31,22,17,19,21,15,22,19,19,24,18,20,23,16,19,14,18,18,12,27,21,23,20,23,31,39,38,48,64,84,95,62,59,47,17,14,13,32,61,89,104,103,114,76,20,35,85,80,105,144,101,130,141,113,114,94,116,162,155,113,93,88,79,63,73,71,75,94,93,93,83,87,96,85,84,66,75,99,90,84,77,75,74,83,85,76,75,69,71,72,72,75,74,66,63,61,58,50,46,44,37,45,38,36,37,23,32,30,26,31,25,22,19,19,18,16,19,17,14,17,18,17,18,14,15,15,16,19,14,16,21,18,33,85,102,116,126,139,143,148,111,36,12,14,17,23,23,17,27,22,23,23,23,22,19,24,39,122,169,146,142,139,152,151,78,20,15,14,18,15,19,24,18,18,21,26,18,20,22,20,40,109,135,131,131,112,102,84,59,23,10,14,14,15,18,15,18,19,21,25,21,30,29,29,30,37,41,51,51,50,57,57,61,66,63,62,65,62,61,56,57,95,59,65,65,70,89,101,113,116,120,123,131,141,137,142,124,105,98,84,76,72,95,60,118,153,157,143,116,116,105,111,103,106,105,87,83,65,80,203,244,244,237,113,49,62,66,66,53,43,53,49,11,78,187,211,228,221,218,217,217,219,216,217,217,218,220,220,218,219,222,224,225,225,224,224,225,226,226,225,225,224,225,223,223,225,224,223,227,226,225,225,224,119,4,1,6,9,10,10,13,10,12,13,13,13,202,203,202,203,204,204,200,204,202,201,203,202,202,202,202,201,200,199,200,198,198,198,198,197,196,198,197,193,195,196,196,193,193,194,194,194,194,193,192,193,196,194,193,193,192,192,191,193,189,188,189,190,191,191,190,191,190,190,193,190,190,191,191,189,190,192,192,192,190,191,193,193,192,190,192,189,193,194,192,196,194,196,197,196,196,195,194,197,196,197,196,198,199,198,199,200,202,200,201,200,198,200,200,201,203,200,202,201,200,199,201,201,202,199,200,203,200,203,202,200,199,200,201,200,201,203,202,203,203,205,205,204,205,207,208,206,205,205,208,207,208,210,208,210,212,211,211,211,212,211,212,212,211,212,210,211,210,209,209,208,208,206,203,204,204,205,202,204,205,203,205,206,206,206,207,209,208,210,211,210,210,207,208,208,210,212,212,212,210,210,210,212,211,210,212,210,210,210,212,210,211,211,211,213,210,210,210,209,210,211,
208,210,210,210,210,211,211,211,211,210,212,213,211,212,211,211,212,210,210,212,210,211,212,210,210,212,211,210,211,210,210,210,212,211,208,208,210,209,207,209,207,208,208,207,208,206,209,209,207,210,205,204,210,206,203,206,208,200,200,207,203,203,204,201,200,201,199,199,200,194,191,198,199,183,191,202,198,198,198,198,195,182,194,199,195,200,194,196,188,187,198,197,196,194,195,195,197,195,193,194,197,193,191,187,209,179,155,185,193,199,146,158,187,179,180,155,153,129,117,86,74,87,48,35,27,44,85,173,223,178,189,207,156,159,205,215,197,170,199,201,169,185,203,107,110,129,95,121,107,81,67,44,22,117,183,204,170,183,185,144,212,217,212,214,195,210,217,148,164,252,199,160,235,231,226,228,234,217,233,164,39,55,51,31,23,16,17,17,14,23,24,19,18,19,26,21,25,19,19,16,13,16,12,23,27,23,28,29,37,42,46,65,80,87,69,59,46,23,11,15,30,51,83,106,110,103,119,86,23,61,107,83,99,89,62,80,76,68,63,64,84,122,101,61,40,31,41,53,80,66,46,55,57,66,59,104,120,103,112,76,49,45,53,42,45,39,36,46,37,37,38,44,37,33,38,32,33,31,25,24,23,18,22,23,16,22,19,27,32,30,38,33,33,36,40,24,13,16,14,17,19,15,15,15,16,20,16,17,16,20,20,14,22,14,24,21,55,177,178,179,179,179,186,184,169,56,7,16,21,23,25,26,22,20,24,27,23,22,21,21,54,121,163,153,142,144,148,162,108,37,11,11,15,16,18,19,19,18,25,26,36,19,23,22,55,152,163,147,160,159,157,157,108,34,13,12,13,15,16,16,14,18,16,17,17,20,17,18,22,18,23,24,25,27,31,35,40,40,49,50,48,59,59,60,61,66,66,64,64,61,66,65,62,70,65,62,70,85,105,125,129,134,139,146,141,141,152,132,146,155,146,125,97,96,106,118,122,125,120,112,116,120,181,241,244,244,185,46,4,49,74,80,52,36,37,27,12,86,199,216,229,222,217,218,218,220,215,217,217,215,216,217,214,218,219,220,222,221,220,220,222,221,222,223,221,221,221,224,223,223,221,217,222,223,225,223,222,118,4,1,6,9,10,10,13,10,12,13,13,13,204,205,203,202,203,199,202,203,204,202,202,202,203,200,199,201,200,201,202,199,198,198,200,196,198,199,196,198,198,198,197,196,194,194,194,194,194,193,196,195,193,193,193,194,191,193,192,192,194,193,194,190
,194,193,192,194,192,193,193,193,194,193,191,194,192,192,193,192,195,191,193,194,192,192,195,194,195,194,196,198,198,198,200,199,196,199,200,200,197,196,199,200,198,201,201,199,200,199,199,201,201,200,203,203,199,201,200,201,202,201,201,199,199,202,201,201,199,201,203,203,205,203,204,203,202,205,203,204,207,204,206,208,207,207,208,208,210,210,207,210,208,209,210,210,212,211,211,211,212,212,213,214,212,211,212,211,213,213,208,208,208,207,208,204,201,201,203,202,206,206,207,205,207,210,208,211,211,209,210,209,208,208,208,208,208,211,211,210,210,211,209,208,211,210,212,213,212,213,214,212,213,212,212,211,210,208,209,211,210,210,212,212,212,211,210,211,211,210,211,213,212,214,213,212,213,210,211,212,211,211,212,211,211,212,212,212,210,210,211,209,210,210,210,209,208,209,211,210,209,208,208,208,208,210,206,208,212,208,207,207,208,207,206,206,206,207,208,201,202,207,205,206,203,203,200,199,202,199,200,195,194,198,202,185,190,203,198,198,197,200,198,186,194,200,196,198,196,198,192,185,193,198,198,197,196,193,194,195,196,196,196,191,192,188,208,169,161,195,186,203,146,160,198,174,182,178,169,152,136,106,105,81,51,55,57,110,154,218,226,176,185,210,158,130,198,214,188,186,213,191,157,156,177,116,165,177,113,108,90,84,76,52,23,93,184,223,171,187,171,150,227,219,206,209,202,219,206,144,184,248,181,144,217,238,236,236,223,174,153,100,45,66,29,18,18,16,19,18,20,19,22,19,21,21,20,22,15,21,19,15,16,14,18,17,22,30,29,40,42,42,60,77,90,66,63,47,24,18,12,30,47,78,109,114,115,107,109,76,39,83,112,107,121,84,56,119,79,42,68,45,59,110,116,57,19,81,72,63,100,75,28,21,21,32,36,95,98,92,101,53,24,22,23,28,24,24,19,15,19,18,19,15,19,17,17,19,21,21,14,19,14,16,21,16,21,17,17,46,57,61,69,73,76,77,85,47,15,16,13,14,14,18,20,18,20,17,19,16,16,20,20,22,20,14,24,16,74,195,184,185,173,174,173,181,175,59,15,13,13,25,27,25,25,25,27,25,25,26,26,23,53,101,142,163,141,139,144,162,148,56,16,10,13,26,22,17,23,24,19,28,24,21,26,22,67,144,155,157,164,169,170,180,162,70,16,9,14,14,13,18,16,19,16,21,21,16,16,
14,19,19,17,16,20,24,31,28,29,29,31,31,29,39,36,39,43,44,53,53,60,66,66,72,74,76,73,69,72,68,64,69,71,71,72,70,74,78,114,77,91,84,71,69,57,64,74,91,98,101,110,107,113,130,151,175,153,153,119,36,27,46,70,83,70,54,50,37,17,148,242,226,234,218,223,220,219,223,219,220,217,215,216,217,217,219,217,219,219,217,220,219,221,220,219,220,220,219,220,221,221,220,219,219,222,223,224,222,222,119,5,1,7,10,12,10,12,12,13,13,13,13,205,205,203,205,203,202,201,201,203,204,205,202,200,201,202,200,199,199,200,200,201,202,201,203,201,199,199,199,199,198,198,196,198,196,196,195,195,196,192,193,193,193,194,193,193,193,193,194,191,195,195,193,195,195,193,192,196,194,193,193,195,193,193,194,195,193,192,196,197,194,196,196,193,198,195,194,196,195,198,198,199,199,198,199,199,199,198,198,200,201,199,200,201,200,200,198,201,199,200,201,201,202,199,203,200,200,202,199,201,200,202,202,202,200,202,202,201,202,201,202,205,205,205,205,206,205,204,205,204,205,204,207,207,205,207,205,210,208,208,208,209,212,209,210,212,211,211,208,212,211,211,213,211,212,212,212,212,211,213,208,210,208,204,203,202,204,201,202,203,204,205,204,206,209,209,210,207,208,208,208,210,209,211,210,211,210,211,210,209,210,208,212,210,208,213,211,213,214,213,212,212,211,212,211,210,211,210,211,213,210,210,214,212,212,212,212,212,210,212,212,211,212,211,213,213,214,214,211,212,212,212,212,211,212,210,210,212,211,209,211,212,207,209,208,206,208,208,208,209,210,210,207,208,208,210,208,211,212,206,209,205,204,207,206,205,209,207,200,205,208,206,205,205,203,202,201,199,199,196,193,195,200,203,186,187,201,199,197,198,198,200,190,194,199,193,198,196,198,197,186,188,196,196,194,195,193,194,196,194,195,197,194,193,188,203,160,171,195,186,200,151,192,207,177,184,154,141,139,141,129,113,62,46,55,105,188,207,237,218,171,186,208,177,144,194,208,181,208,214,161,118,149,199,142,196,161,90,117,87,94,89,61,51,164,242,233,177,156,154,160,214,216,216,213,199,225,191,134,206,182,130,139,174,232,244,237,169,88,56,41,54,46,12,20,18,16,22,19,18,18,21,2
4,19,19,21,21,21,18,19,15,14,15,17,18,23,29,46,41,38,65,71,79,72,61,54,26,19,41,48,59,69,95,117,117,120,93,98,89,55,95,118,103,125,120,124,134,65,58,56,51,87,134,132,76,88,143,146,71,111,132,53,22,24,39,76,120,92,89,92,46,32,27,25,28,28,32,22,16,15,18,19,14,17,21,15,16,14,15,18,19,16,16,16,22,14,22,20,89,136,139,156,160,165,174,190,80,10,15,8,15,13,15,18,15,17,16,18,18,19,21,20,17,18,15,25,19,74,171,130,135,124,107,117,116,114,42,18,15,15,24,24,26,29,29,28,29,28,32,29,22,52,78,141,174,141,148,139,155,168,75,17,13,11,24,20,16,22,28,29,24,22,25,28,29,72,143,151,150,165,158,163,176,194,108,24,14,11,14,17,18,16,17,18,21,17,16,16,15,16,17,19,19,20,31,50,47,44,43,34,33,35,29,19,19,19,22,22,29,28,34,37,36,49,47,55,57,58,65,63,74,77,79,79,80,73,71,73,66,70,62,65,73,84,80,73,50,38,39,40,53,58,63,56,43,24,40,43,29,37,47,66,71,63,50,59,22,52,231,250,232,232,219,224,218,219,222,218,220,219,218,218,221,220,220,220,218,220,220,220,220,220,222,221,220,219,220,219,220,220,219,220,221,222,222,221,221,223,118,5,0,6,10,11,11,13,12,13,13,14,14,203,210,204,204,205,202,205,202,204,203,205,202,204,205,201,203,202,201,202,202,203,204,201,200,203,200,201,200,199,201,197,198,198,199,198,198,198,194,196,198,197,196,194,194,196,195,194,196,194,191,193,196,196,193,195,196,195,196,195,195,196,197,195,198,194,196,199,194,197,197,198,198,197,199,198,194,198,196,198,200,199,199,200,200,200,200,199,199,199,200,199,201,200,200,200,200,201,199,198,200,200,200,203,203,201,203,203,203,200,201,205,203,202,202,204,204,205,206,203,204,204,205,205,207,206,203,205,203,207,206,205,203,204,206,206,204,206,206,208,208,208,211,208,207,209,212,210,211,211,211,210,211,210,209,214,208,211,212,209,212,209,208,206,202,202,203,205,206,206,203,204,204,207,209,209,212,209,207,209,208,207,208,210,210,210,210,211,210,209,208,210,212,211,209,210,210,210,211,211,210,211,211,213,211,211,212,212,212,210,212,211,212,212,210,212,212,210,211,208,211,212,211,210,212,213,211,211,212,211,210,211,211,212,210,210,211,210,208,208,209,
209,207,211,211,207,209,208,208,208,207,210,211,209,210,213,210,214,222,215,208,208,208,209,207,203,207,202,199,206,207,207,206,204,206,203,205,204,204,198,193,201,199,202,191,183,198,196,200,201,200,202,190,197,201,198,200,196,199,198,194,187,194,200,194,196,195,194,195,192,193,194,193,196,189,193,148,175,199,182,204,163,210,220,170,153,123,129,155,161,135,110,43,41,46,103,213,214,232,209,168,190,210,189,145,201,205,175,222,177,137,143,187,222,141,165,109,81,113,89,99,75,46,61,199,251,203,141,160,173,174,217,205,202,212,209,230,166,150,188,87,75,84,117,205,243,217,92,56,57,41,49,27,12,25,21,19,19,15,22,16,22,22,19,21,15,20,21,18,16,18,14,15,23,23,21,39,45,36,61,70,74,66,60,54,31,12,63,111,103,98,97,103,112,120,103,90,118,135,123,118,93,81,110,123,121,119,69,73,93,88,87,90,124,117,88,147,167,87,94,149,103,45,42,63,132,147,93,92,87,43,44,42,43,49,57,57,21,12,15,13,17,16,18,15,19,17,17,19,17,16,17,19,19,16,21,23,29,160,208,192,208,197,199,194,227,120,7,11,11,15,18,19,16,18,17,17,19,20,23,16,27,26,19,22,27,19,81,134,81,62,54,51,53,52,60,36,24,14,21,24,23,27,28,28,28,31,34,28,28,31,59,87,142,173,131,149,140,147,174,86,22,15,12,22,23,23,29,27,29,30,17,28,26,29,87,134,132,122,123,130,130,148,173,112,32,13,10,16,24,17,19,18,15,18,15,20,21,19,18,17,19,17,20,40,84,92,81,73,67,63,59,43,21,16,15,15,18,18,18,19,21,23,22,24,27,29,32,33,39,38,44,53,49,57,63,64,66,69,74,89,99,108,128,128,104,72,42,24,32,42,61,77,60,46,40,46,44,44,57,75,75,65,69,71,54,22,143,238,248,231,227,221,223,216,217,217,217,220,217,217,221,220,218,220,216,218,217,217,218,215,217,219,219,217,220,221,220,219,219,220,220,217,220,218,223,217,218,120,5,1,6,10,11,10,12,11,12,14,13,13,206,205,205,205,203,202,203,203,205,206,203,204,202,203,204,200,203,203,200,200,200,200,200,201,199,198,199,201,202,200,200,198,200,199,198,200,199,199,198,199,198,196,197,199,198,198,198,198,194,198,197,197,199,198,198,197,197,199,200,199,199,199,201,199,196,197,196,198,197,198,201,199,198,201,198,199,200,199,200,200,199,200,198,200,
201,201,202,201,200,201,201,202,203,200,201,201,204,203,200,202,204,200,201,203,202,203,204,204,204,200,203,202,201,203,203,203,203,203,205,204,203,204,205,205,204,208,206,205,206,207,208,207,204,203,207,204,206,207,205,207,206,207,208,206,209,208,209,210,208,212,208,208,205,207,210,210,211,208,212,211,208,209,208,208,205,203,204,204,205,205,206,205,206,207,208,209,208,209,208,208,208,209,210,210,209,208,211,209,207,210,210,211,210,211,212,209,209,207,211,210,211,211,210,211,212,213,213,212,214,214,212,212,212,214,212,211,213,212,210,212,212,213,212,212,208,211,213,213,213,210,211,210,211,211,210,209,211,209,208,208,208,207,210,210,210,210,209,205,209,208,207,209,205,208,216,204,203,222,218,208,206,206,207,206,205,206,201,201,207,205,208,204,205,209,209,208,210,206,200,203,206,205,206,198,186,202,204,201,205,203,207,196,199,208,205,206,203,202,203,201,189,191,200,198,200,198,196,197,195,193,194,198,195,199,185,138,188,198,189,204,162,208,173,142,168,149,164,190,195,159,101,49,44,35,105,189,195,225,200,171,198,188,125,112,165,176,185,201,155,160,176,224,173,85,175,131,93,101,71,92,55,41,56,190,207,150,154,179,197,184,228,212,201,198,200,227,144,164,178,75,73,61,79,165,235,190,49,42,64,46,33,14,19,27,21,17,22,20,17,22,22,21,20,20,19,21,19,21,17,15,17,16,24,24,37,46,39,48,63,77,66,60,63,26,26,62,144,208,142,115,110,109,121,100,105,104,153,201,171,109,37,73,117,103,115,109,90,107,74,66,72,51,95,130,146,164,176,94,76,148,130,81,27,63,157,132,87,99,71,42,62,66,85,106,132,106,31,13,15,12,23,19,15,17,17,19,16,18,21,15,22,15,23,25,20,27,39,177,194,172,186,161,166,166,200,95,9,16,7,16,21,16,19,19,23,23,21,26,25,23,28,30,23,27,22,34,96,113,60,46,43,36,39,45,46,25,22,18,20,24,24,28,26,24,30,34,35,36,32,27,51,70,124,141,113,134,128,137,159,79,24,16,10,22,23,29,26,29,30,24,29,31,27,36,98,139,97,63,64,66,86,85,112,83,29,17,12,20,17,21,17,17,22,16,21,19,16,21,23,22,21,19,18,48,147,181,156,153,141,133,126,88,41,19,14,15,16,18,19,17,18,20,19,19,22,19,19,23,18,23,21,23,32,29,35,39,37,
40,44,52,45,67,116,129,117,93,120,138,147,148,116,92,75,74,61,57,60,87,111,112,100,95,118,107,90,78,165,238,231,225,230,217,223,219,222,218,214,219,218,218,216,219,217,217,220,215,215,216,217,216,216,218,217,218,217,217,216,217,220,220,218,218,218,219,220,218,220,118,5,1,6,10,10,10,13,12,12,13,13,13,202,206,203,204,206,202,202,203,204,203,205,201,205,203,199,202,200,200,199,200,199,198,201,200,199,197,201,201,200,201,198,197,199,201,202,201,200,200,198,196,198,196,198,198,198,198,196,198,198,196,197,198,198,198,198,197,198,198,198,198,199,199,199,199,199,201,198,197,199,198,199,198,198,199,198,196,200,199,200,200,198,199,200,201,198,199,200,201,201,200,201,203,202,204,201,201,202,200,201,201,203,201,201,203,200,203,204,203,201,202,200,201,202,200,202,203,205,203,201,204,203,204,203,204,207,203,207,204,205,208,203,206,206,206,207,205,206,204,206,205,205,210,206,208,208,207,208,207,210,211,208,207,204,203,208,208,209,208,209,212,208,206,208,205,203,206,203,204,206,208,209,207,207,208,212,209,207,206,205,207,207,210,208,210,210,210,209,208,209,210,208,210,212,208,208,210,207,208,207,207,211,208,210,211,209,209,212,211,212,212,211,212,212,211,214,210,209,213,210,211,210,209,208,209,211,209,210,209,212,211,210,210,209,209,208,208,210,208,210,211,208,208,208,210,208,209,205,206,208,206,207,206,207,205,207,173,133,170,202,206,207,203,206,205,208,208,198,205,206,204,207,204,205,206,205,207,207,205,202,207,211,210,210,205,191,201,210,207,209,206,210,199,199,210,208,210,206,209,207,207,197,192,204,203,202,200,199,202,200,198,197,200,198,208,184,141,198,208,184,181,131,150,102,98,170,168,176,184,185,146,101,51,53,37,90,177,180,222,203,173,205,163,89,121,99,147,202,201,156,178,190,202,129,89,198,167,115,98,82,91,60,49,57,165,194,174,179,210,195,178,230,215,219,205,201,206,132,192,149,46,116,83,83,165,229,162,39,59,47,24,23,14,19,16,23,23,22,23,20,23,18,21,23,17,18,19,19,17,19,19,18,21,28,33,42,41,53,63,64,63,64,59,36,12,48,125,178,188,101,90,117,113,111,92,99,115,147,183,181,11
4,42,63,114,131,113,120,123,108,101,74,64,42,50,118,165,198,152,62,57,149,151,71,6,44,153,121,91,98,79,76,126,144,171,173,196,174,44,9,9,14,23,16,19,18,19,21,17,20,19,16,23,22,22,21,24,19,55,171,135,102,102,80,87,79,122,57,19,20,9,21,24,24,18,21,27,26,28,27,20,27,34,27,25,30,29,37,96,97,53,50,42,41,45,46,42,27,19,22,24,28,29,24,31,39,39,41,42,38,27,26,48,52,93,107,69,85,80,83,101,60,29,19,15,28,29,31,34,31,33,31,24,30,24,47,115,120,78,47,48,45,51,56,62,55,29,17,16,19,23,22,22,17,21,21,21,25,22,22,15,21,18,22,19,46,154,190,179,182,184,189,189,170,94,27,14,17,21,17,21,19,17,19,19,24,22,20,22,21,21,19,21,16,24,36,39,39,33,38,29,34,29,46,97,113,105,172,251,251,245,177,85,42,29,32,66,79,90,101,97,106,96,101,122,135,128,81,130,211,216,233,226,216,222,217,221,221,219,217,216,218,217,215,217,218,216,218,215,214,217,217,217,216,217,218,214,214,217,217,216,216,217,217,218,216,223,217,218,120,4,0,5,10,12,10,13,12,13,13,13,13,202,204,202,203,206,203,203,207,203,203,203,203,204,203,203,200,201,202,200,201,200,200,200,200,199,200,201,199,199,201,199,199,198,199,198,196,199,198,198,197,199,199,198,199,194,199,200,198,198,198,197,198,199,200,201,200,200,200,200,199,199,196,199,200,199,199,199,200,198,200,200,200,200,202,202,200,200,199,199,200,200,200,199,200,201,201,199,202,199,201,202,199,201,201,203,200,202,201,199,202,201,201,203,202,200,204,204,203,201,200,201,201,205,202,202,203,204,204,202,204,203,204,205,205,201,203,204,205,206,206,204,209,209,205,208,205,205,206,205,205,206,205,210,208,208,208,208,212,210,212,207,206,207,206,207,204,207,208,208,210,207,208,206,205,204,205,204,205,208,207,208,207,209,208,207,206,204,205,206,206,207,208,207,208,208,208,210,208,209,213,210,211,211,208,207,209,208,208,211,209,209,211,210,211,211,208,211,211,211,208,210,212,210,211,211,211,211,211,212,213,210,209,210,210,208,211,210,209,210,209,212,210,210,211,208,210,211,210,208,210,210,207,208,207,208,208,207,207,208,208,203,205,208,207,207,162,105,141,195,206,210,205,205,205,205,205,200,203,
203,205,206,204,205,204,204,206,206,199,204,207,207,211,210,210,194,199,211,205,210,208,208,200,199,210,206,210,207,208,210,207,205,195,203,207,207,207,202,206,205,203,202,203,201,212,184,151,206,214,187,160,87,120,78,78,141,127,146,155,149,121,91,50,44,33,98,182,182,223,201,185,212,174,148,152,86,124,224,196,167,146,160,203,150,126,214,156,100,110,98,94,63,60,62,178,213,202,193,217,187,179,232,208,221,213,229,193,122,207,143,49,88,71,91,181,230,165,73,68,32,11,22,14,19,14,20,24,21,21,22,21,22,22,22,18,16,19,16,20,22,19,21,27,33,39,43,52,71,72,63,59,62,34,26,7,63,106,103,118,47,83,110,96,108,93,104,100,93,119,168,154,83,76,97,120,119,123,132,124,106,68,44,30,41,49,95,108,69,26,62,171,118,49,4,54,159,110,103,104,105,141,167,179,193,175,196,170,41,7,8,13,24,15,21,23,24,19,21,24,22,20,31,29,28,25,29,23,71,140,74,47,57,41,50,47,56,36,24,22,16,22,23,25,28,26,24,27,26,27,24,25,37,34,22,33,27,44,105,93,66,66,60,62,68,71,62,35,35,41,40,49,53,52,50,66,66,61,59,56,54,55,64,63,108,76,38,49,39,46,52,41,24,20,22,31,26,33,37,36,29,34,32,27,33,53,117,113,63,47,44,37,45,41,52,38,20,25,18,23,24,26,27,27,23,18,26,26,27,19,18,24,24,22,21,54,133,174,154,166,174,181,185,196,146,39,14,16,15,20,18,24,21,20,26,21,22,24,23,23,23,17,22,23,24,44,71,77,62,63,59,60,40,50,102,110,111,208,249,249,224,114,49,28,11,74,123,97,71,48,53,72,82,86,116,119,117,59,73,218,232,232,226,215,225,217,219,218,218,217,219,217,215,218,217,217,215,214,216,214,217,214,214,217,213,216,217,217,218,217,215,214,215,218,219,219,222,217,218,117,5,0,6,10,11,10,13,12,12,13,14,13,202,205,203,202,205,200,201,203,200,202,202,200,202,200,200,201,201,201,200,202,200,198,199,199,202,201,199,198,198,196,198,200,198,196,194,195,198,198,194,198,199,197,198,198,198,194,196,198,198,198,198,199,200,199,200,198,200,201,201,201,198,200,198,197,200,201,200,200,200,199,199,199,201,200,201,199,201,203,200,200,203,203,200,203,200,200,202,199,200,200,198,199,199,198,199,199,202,199,200,201,202,201,199,202,200,200,202,201,203,203,200,203,203,2
02,204,203,204,201,205,209,203,206,205,202,204,205,206,202,205,206,202,206,204,204,205,205,205,202,205,204,203,206,205,207,208,207,204,205,208,207,208,206,207,207,206,207,208,206,209,208,206,207,206,204,205,206,207,207,206,205,205,207,206,204,203,202,203,204,204,206,206,208,207,209,209,208,208,208,209,210,210,210,210,208,208,209,207,208,208,206,209,208,207,210,210,210,212,208,209,209,209,211,210,210,210,208,211,211,211,214,212,211,210,210,210,207,209,210,210,209,211,211,210,211,207,208,211,208,209,209,208,208,205,207,205,205,206,207,207,205,206,205,205,206,212,202,177,185,205,205,206,206,204,203,204,200,197,203,204,204,205,204,203,205,206,207,201,200,206,211,208,209,210,211,198,197,211,207,211,208,212,202,199,211,207,211,207,207,207,209,207,198,200,208,211,208,208,207,208,206,206,206,205,220,178,156,208,213,183,154,117,158,118,101,144,143,165,167,161,130,93,49,52,43,122,210,186,224,200,184,217,198,219,198,101,128,200,183,145,141,181,206,164,128,197,153,103,120,96,86,79,42,42,152,210,176,163,221,174,189,229,206,225,210,231,172,138,232,170,111,125,54,103,226,250,174,65,46,10,15,22,16,21,16,23,28,20,21,22,16,19,23,19,16,19,16,15,19,21,27,27,32,48,37,43,69,76,76,60,61,42,18,15,30,100,74,53,82,62,87,86,95,103,93,112,104,80,87,153,182,153,101,76,93,85,104,115,107,95,58,39,33,37,29,27,45,39,45,123,128,57,29,4,70,150,89,101,97,95,118,118,127,132,117,138,106,27,22,13,13,25,24,30,22,26,31,23,20,29,29,29,36,31,24,31,27,87,123,54,49,53,40,50,42,53,28,21,29,16,23,22,23,26,27,28,34,35,34,34,50,56,52,57,67,65,79,150,157,137,138,133,144,149,161,127,126,144,152,173,132,164,169,174,183,178,179,173,174,168,172,168,156,153,72,45,47,38,46,45,49,40,42,42,46,49,45,47,42,45,33,29,35,33,59,112,120,89,63,57,57,50,53,59,36,20,20,17,27,23,27,29,30,27,24,33,22,21,24,24,21,22,25,25,63,131,144,111,110,122,130,137,160,123,52,16,11,13,19,21,23,21,21,26,23,24,22,23,26,22,18,21,24,22,64,152,164,149,134,125,122,93,116,121,101,95,184,249,177,95,37,32,23,75,142,120,76,66,55,42,75,86,72,79,85,90,56,122,2
21,24,34,17,21,24,23,23,22,22,19,29,35,55,47,31,55,81,89,111,133,98,52,33,36,26,16,26,24,16,20,22,26,25,19,22,17,16,26,24,24,16,37,57,25,64,156,165,173,156,98,112,92,63,46,27,23,14,24,27,21,18,17,20,18,34,28,16,21,22,38,35,26,22,17,21,24,27,28,27,27,23,21,18,21,20,19,25,26,30,26,28,31,24,25,25,23,24,23,17,25,21,20,23,21,24,21,26,24,43,98,96,108,106,58,31,19,14,32,46,50,55,41,27,28,19,15,19,22,20,26,30,39,37,34,39,37,50,45,33,37,39,45,49,59,59,54,56,59,55,55,59,61,67,51,30,50,69,118,199,217,219,224,221,225,223,224,225,225,224,225,225,224,118,6,2,8,12,13,12,15,14,15,15,16,15,198,198,197,198,196,196,196,196,196,198,195,196,195,193,195,194,197,194,195,198,195,195,200,199,199,200,198,199,196,197,198,196,199,200,203,201,202,203,203,204,204,203,204,200,200,204,202,200,201,203,204,203,203,204,203,205,203,203,202,205,205,207,207,208,208,207,208,207,207,208,211,207,206,205,203,206,205,205,205,203,204,205,206,203,205,205,204,205,203,204,204,203,203,203,204,205,201,204,206,203,201,201,202,203,203,202,202,199,202,203,201,204,200,198,199,198,199,199,198,200,198,198,202,199,201,200,199,200,199,198,201,200,200,200,199,201,200,201,200,198,201,199,198,201,200,199,200,200,201,200,200,201,200,200,199,202,201,198,200,200,199,201,199,199,200,200,199,196,199,199,200,201,202,200,199,200,200,201,202,202,202,201,202,201,201,201,199,200,197,200,202,198,200,198,199,200,199,200,201,200,198,198,198,200,200,198,201,199,200,199,200,199,198,198,196,199,198,198,198,198,199,197,198,198,199,200,197,196,196,197,197,196,198,197,198,196,198,196,199,198,196,198,195,199,197,198,196,197,198,194,200,200,199,199,198,199,197,198,200,199,193,194,198,199,198,198,198,196,195,193,196,199,198,198,198,198,198,196,196,199,196,195,200,195,196,195,198,186,181,198,199,197,189,190,199,199,196,198,196,197,198,196,196,196,200,197,199,187,190,198,206,191,170,204,198,174,158,134,134,135,146,133,145,112,61,83,88,124,83,60,91,38,78,66,4,53,147,130,112,150,77,49,57,116,185,184,223,206,181,163,154,141,133,188,215,121,94,123,95,9
9,74,38,24,158,226,209,178,150,218,200,232,131,67,204,98,29,6,73,187,117,82,53,36,125,227,251,213,157,159,196,246,225,214,155,76,97,119,130,142,104,62,80,102,83,65,69,110,101,90,98,92,102,100,108,89,75,70,65,65,56,53,37,32,28,19,25,22,33,36,96,148,146,143,71,27,21,12,16,11,14,27,48,60,57,61,59,59,61,68,71,66,62,44,29,22,19,17,21,29,12,95,154,99,122,122,57,25,15,16,21,21,22,30,31,27,26,21,16,17,22,16,19,27,21,21,27,22,23,39,35,19,69,102,99,104,101,81,51,46,38,28,24,23,21,17,24,19,27,36,22,18,18,16,22,30,32,22,40,49,22,48,81,80,80,117,149,81,36,42,26,15,15,17,28,41,36,20,22,16,19,33,34,23,15,16,23,34,71,57,24,48,68,53,87,102,66,35,33,37,17,15,15,24,26,22,27,19,14,22,24,21,27,26,14,22,24,40,66,35,104,158,143,139,114,105,89,76,59,37,26,15,20,21,17,20,20,23,26,23,21,27,23,26,22,29,44,31,23,17,26,34,39,41,37,32,31,31,26,19,25,30,32,25,22,25,21,27,25,20,26,29,36,27,19,23,18,21,23,18,21,17,21,24,55,112,103,118,112,61,37,15,13,30,56,78,78,69,57,43,33,23,19,19,19,21,19,19,30,29,35,43,61,61,46,38,36,41,45,46,50,50,47,51,51,57,60,56,50,51,37,50,36,57,163,202,219,227,221,224,225,227,226,226,225,226,225,224,117,6,2,7,12,13,12,15,13,15,15,16,15,197,199,195,198,197,196,196,197,197,196,198,194,196,194,195,198,197,198,197,197,196,198,200,200,200,200,197,198,201,197,196,197,198,202,201,202,204,201,203,200,202,205,202,203,198,202,204,203,203,198,201,204,204,204,203,204,204,205,205,206,207,207,209,208,209,203,203,205,205,205,204,206,203,207,207,206,207,207,205,208,203,205,207,204,206,203,205,203,202,203,202,204,202,205,203,201,204,201,203,203,204,205,204,201,201,200,200,201,199,201,201,202,200,200,199,198,198,199,201,200,198,199,199,196,199,198,196,200,200,200,199,198,200,199,202,201,199,201,199,198,199,198,198,198,200,203,200,199,200,200,200,200,199,200,199,198,199,200,198,198,199,199,198,200,200,199,200,200,200,199,200,198,200,197,199,199,199,201,199,200,200,199,198,200,200,200,198,197,198,199,200,200,197,198,200,198,199,199,198,198,198,198,198,199,199,197,198,198,199,199,197,198,197,
199,198,195,199,198,196,199,198,198,198,197,199,199,194,198,195,196,199,196,197,197,198,198,198,197,199,199,200,198,195,199,197,199,198,195,196,192,197,196,197,199,195,198,196,196,197,198,196,192,201,198,196,198,195,196,191,192,196,199,196,196,196,195,194,193,198,193,194,194,195,196,195,197,198,190,179,194,198,198,187,184,197,197,196,194,196,196,193,194,194,195,198,196,196,189,185,191,205,184,173,203,181,150,132,138,174,179,199,193,208,199,126,105,178,207,177,115,125,59,132,94,5,84,166,137,177,247,153,56,57,170,232,221,228,168,140,124,169,146,116,178,174,92,83,123,96,101,73,32,24,156,226,208,175,160,226,217,248,141,70,115,30,5,43,129,237,160,110,34,99,225,251,242,149,172,217,252,250,189,117,66,77,114,107,117,88,53,89,136,100,65,57,77,90,86,97,85,79,107,101,89,89,83,59,64,50,36,42,32,30,26,14,21,26,20,52,146,157,145,112,49,17,12,15,13,15,19,51,66,59,59,59,62,63,62,64,71,74,83,74,56,38,23,22,22,26,27,128,145,100,127,107,56,30,31,41,42,33,29,37,33,29,18,15,17,22,24,23,29,24,13,20,26,29,33,37,41,28,104,149,134,113,74,85,66,49,44,18,14,17,26,25,21,31,26,19,27,23,20,27,27,19,29,25,45,53,32,21,60,87,94,152,125,53,38,40,27,22,19,16,35,37,32,24,17,18,22,39,39,24,16,17,18,40,92,72,28,47,62,32,43,55,41,27,35,35,15,16,16,16,33,36,17,18,18,19,29,35,27,19,14,19,20,50,78,43,107,157,111,96,101,93,103,67,36,37,19,29,23,12,19,23,20,33,30,21,20,15,27,33,31,28,39,44,28,16,39,54,64,77,71,66,70,51,24,29,35,29,28,28,22,23,22,22,27,33,33,28,35,33,26,23,19,23,18,21,22,18,22,23,75,128,110,111,107,71,31,19,17,23,50,79,92,85,79,71,53,43,36,21,17,18,17,17,26,19,27,41,49,64,69,58,44,40,37,37,51,56,54,54,53,50,51,48,49,54,45,67,53,65,165,203,221,231,223,224,225,225,223,222,228,228,223,225,118,6,2,8,12,12,12,15,14,15,16,15,16,196,197,196,197,196,198,197,195,197,194,196,199,196,198,199,198,201,198,198,200,198,200,203,203,203,200,202,202,199,201,201,201,197,199,201,200,201,202,202,202,203,200,201,202,202,203,202,200,205,204,201,202,201,202,204,205,203,203,204,203,206,208,206,206,205,206,205,205,206,
204,207,206,205,207,205,205,207,207,206,205,206,205,205,203,204,205,204,203,203,203,201,203,207,204,200,201,201,204,203,204,204,203,203,201,204,201,201,202,200,201,200,200,200,199,201,201,198,199,199,199,198,200,202,197,199,196,199,203,200,200,201,199,199,200,199,200,199,199,200,198,200,201,200,199,199,196,199,198,199,200,197,200,198,201,199,201,202,196,199,198,199,198,197,199,199,198,199,200,200,198,198,196,199,199,197,200,198,198,198,198,198,198,199,198,199,199,198,199,196,198,199,196,199,199,198,198,198,199,199,198,199,198,198,198,198,197,197,196,199,199,198,198,198,199,198,200,198,199,198,198,200,198,200,199,199,196,198,198,196,200,198,199,199,198,200,198,200,198,196,198,198,197,198,199,198,199,197,199,195,195,198,195,197,198,196,198,196,199,201,198,190,195,199,200,198,199,198,193,192,195,196,197,197,198,195,193,195,194,194,195,193,193,195,192,196,194,196,191,179,192,197,200,186,181,194,194,195,194,194,194,194,195,193,195,194,192,196,193,185,188,204,174,179,202,177,175,159,177,186,183,198,172,189,204,159,82,139,243,208,134,117,67,141,121,50,129,181,128,174,207,113,101,122,171,189,177,155,129,152,149,198,136,98,182,171,111,99,92,84,102,70,42,24,155,225,212,178,166,234,198,201,152,69,44,4,7,26,143,143,87,72,69,189,250,252,189,159,223,252,252,249,129,43,63,108,130,118,76,41,77,133,134,83,61,69,112,92,60,105,117,99,104,90,80,105,105,93,88,71,59,53,33,29,25,16,21,24,37,97,144,126,84,54,28,15,15,10,15,15,32,60,63,63,61,63,59,69,66,46,41,39,60,78,89,81,46,18,22,17,46,147,134,98,128,92,41,29,66,100,78,41,32,37,31,29,19,13,18,19,27,31,27,18,15,16,23,41,34,37,41,39,134,150,134,128,99,106,67,53,41,16,13,15,19,29,38,27,23,13,19,26,27,37,22,16,21,25,49,71,36,23,60,73,73,61,44,37,36,46,28,23,24,34,33,20,31,30,26,30,35,27,22,31,23,17,37,45,94,77,25,60,54,26,33,33,27,27,37,39,20,15,16,24,24,23,27,18,14,21,32,32,21,18,16,24,23,68,93,40,78,116,101,87,83,153,126,46,39,29,26,24,21,16,16,22,27,28,25,19,21,18,19,33,31,23,30,47,34,19,55,84,117,127,124,135,98,45,28,36,24,26,30,28,22,21
,21,23,27,34,32,24,27,33,41,28,18,29,26,26,27,27,28,22,83,127,102,103,103,77,42,20,16,26,54,76,93,89,80,83,76,67,57,41,27,20,21,22,21,21,22,25,33,56,76,74,65,48,33,43,54,55,50,46,39,42,47,61,71,65,54,61,43,89,204,223,226,228,222,223,223,225,222,224,226,228,223,223,118,5,2,9,13,12,12,15,14,14,15,15,15,198,198,197,199,195,197,195,198,199,195,198,198,199,197,198,197,198,199,199,200,200,202,203,203,203,204,205,202,201,200,203,200,200,203,203,202,203,201,202,202,202,203,200,200,199,200,198,197,201,199,200,202,199,200,203,204,203,203,201,205,207,205,207,206,207,206,208,208,207,206,205,207,207,207,205,203,208,206,204,208,203,205,206,203,206,201,203,206,203,203,204,203,203,205,204,204,205,204,205,204,202,203,203,203,203,203,202,202,203,201,203,202,198,203,200,199,200,201,200,199,201,199,198,198,200,200,198,198,199,200,198,201,200,198,200,199,198,199,198,198,199,195,198,200,198,199,199,198,197,198,196,197,198,196,198,196,198,199,198,198,197,200,198,197,198,197,199,199,198,197,198,195,198,195,196,195,193,197,194,197,197,197,196,198,201,198,199,197,196,197,197,199,196,194,196,196,198,198,198,196,198,197,193,198,198,196,199,198,199,198,198,198,198,199,197,198,198,196,196,199,197,196,199,198,197,198,198,200,199,199,199,198,199,199,198,198,199,197,199,198,198,199,197,199,197,198,199,195,198,197,198,199,197,199,196,197,199,198,199,197,194,193,197,198,199,197,197,194,194,200,198,197,196,198,195,194,196,193,199,195,194,195,195,191,191,194,197,196,180,187,195,198,187,179,195,191,191,195,194,194,193,194,192,193,193,193,196,196,184,189,200,171,186,194,175,175,168,178,178,159,160,149,172,212,200,121,78,157,167,139,110,61,122,81,17,113,182,131,130,104,99,156,184,192,145,136,133,147,166,170,195,119,155,210,184,149,100,81,75,92,68,44,25,151,227,209,181,151,181,128,130,118,39,17,6,17,26,49,53,40,83,173,233,250,218,169,207,250,251,251,190,45,44,90,132,132,91,42,45,113,142,93,71,46,85,140,77,93,170,162,109,103,92,101,110,117,118,97,84,73,65,46,43,30,26,26,38,88,144,135,67,36,18,15,16,17,23,20
,20,41,69,71,62,63,66,72,86,78,42,25,27,31,39,65,99,80,30,21,15,58,157,126,107,123,73,39,22,84,119,73,47,42,37,18,22,27,21,17,28,27,18,23,24,14,21,26,31,34,43,50,49,130,131,110,121,94,98,57,46,41,12,14,14,21,33,36,31,16,16,23,26,37,24,20,19,17,18,60,89,43,30,59,45,47,38,31,30,36,46,28,24,37,26,22,16,22,32,45,47,23,20,18,24,42,57,34,39,109,71,28,62,44,32,36,26,26,31,39,33,21,21,32,30,17,20,28,25,29,37,22,17,26,27,23,34,36,79,93,37,41,81,98,101,137,170,96,31,35,29,16,19,19,22,24,22,27,21,21,23,19,19,34,29,27,34,30,54,40,37,82,102,149,122,100,99,72,48,37,33,22,33,25,23,25,24,18,31,37,19,29,27,22,28,45,38,20,24,28,30,29,26,31,26,80,125,102,103,107,87,46,21,16,21,46,63,81,86,92,87,87,87,76,67,50,36,27,21,21,23,20,22,24,34,51,69,73,72,51,39,42,32,39,43,46,51,48,74,87,73,57,57,57,148,231,231,228,225,222,224,223,224,224,224,225,229,224,225,118,6,2,7,12,14,13,14,14,15,15,15,15,199,203,201,203,200,198,200,199,200,199,201,203,198,201,200,198,201,201,201,201,201,202,202,203,201,203,203,200,200,200,200,200,200,203,203,202,204,199,200,200,200,203,199,201,203,202,203,200,200,202,198,202,203,203,203,203,204,203,205,205,205,205,205,208,206,208,208,205,210,208,207,206,207,208,208,207,206,207,208,207,207,207,206,204,203,203,204,203,203,206,205,203,206,202,205,205,202,205,205,205,200,200,204,201,204,202,202,203,201,200,201,202,200,200,200,203,202,201,200,198,201,199,199,198,200,199,198,202,198,196,199,196,200,200,198,199,199,198,198,200,199,198,199,199,199,196,197,197,197,198,196,198,196,198,198,198,197,198,198,196,199,197,195,196,195,198,198,196,196,198,197,194,196,196,195,195,195,198,197,195,197,198,195,193,198,198,196,196,195,195,199,197,196,196,195,198,195,195,196,195,196,194,198,198,197,196,195,198,197,196,197,197,197,196,198,196,195,198,196,198,196,197,198,195,199,198,198,198,198,198,197,198,198,198,198,195,198,199,198,198,200,198,198,198,197,197,197,201,196,198,199,199,200,198,194,196,198,196,200,193,191,195,196,199,200,197,194,193,196,199,196,196,194,196,197,196,195,193,195,195
,195,195,195,195,195,195,197,195,182,183,193,199,187,177,193,196,194,195,193,194,192,192,193,191,194,195,192,196,189,188,192,170,184,176,140,133,123,137,133,120,145,166,191,220,220,185,115,66,94,124,104,66,98,73,4,76,161,125,77,73,151,194,208,170,143,168,160,170,159,164,184,132,198,210,193,162,86,101,85,78,61,35,23,153,230,224,174,83,132,124,69,39,7,29,43,47,50,55,39,53,183,250,252,214,179,203,245,250,247,226,81,26,63,105,133,99,45,55,76,105,92,71,63,23,71,140,137,186,219,153,111,105,101,121,100,85,79,77,77,66,61,46,42,33,34,30,54,125,149,98,34,19,14,10,17,35,46,25,14,48,69,66,66,69,81,92,90,60,35,27,23,20,32,44,82,109,42,25,13,79,165,107,117,116,57,43,61,99,103,76,62,44,26,14,12,24,29,32,26,14,17,18,27,31,27,22,15,20,50,60,46,95,92,85,105,101,83,40,46,33,17,17,15,29,19,22,29,26,18,29,30,17,26,24,20,23,28,66,92,40,34,55,29,27,26,27,27,31,49,33,31,34,30,24,17,19,36,53,49,30,23,18,27,60,57,29,54,120,68,24,61,38,26,39,23,28,26,39,38,23,38,31,21,19,22,25,36,39,27,19,15,19,27,52,48,25,90,96,35,35,71,87,88,75,61,43,35,35,17,16,18,19,29,39,27,16,16,18,27,29,33,25,16,19,28,39,69,49,56,117,127,152,126,92,110,85,51,38,35,33,22,18,23,22,29,36,32,21,16,27,28,30,33,45,42,27,35,38,43,43,51,57,40,108,134,111,101,104,94,44,24,16,18,20,36,74,96,112,112,98,93,86,81,76,57,45,27,17,19,22,19,18,26,24,37,62,73,71,56,50,53,48,49,52,55,65,72,66,69,62,57,137,230,239,234,224,225,222,224,225,221,223,224,226,229,224,224,117,5,2,7,11,13,12,13,12,15,16,15,15,203,205,205,204,203,206,201,203,202,200,204,200,201,203,203,202,203,203,201,201,203,203,203,201,202,203,200,205,203,201,202,201,202,200,201,200,202,203,199,200,201,199,202,202,200,203,202,200,203,203,205,204,202,204,202,201,205,204,203,204,205,207,207,208,207,207,207,208,209,210,208,208,209,209,207,205,212,208,208,213,207,208,210,208,208,205,205,204,205,205,205,206,205,203,203,207,203,203,203,201,203,202,203,203,201,200,201,201,202,201,201,201,200,203,200,199,199,200,200,200,200,197,200,200,198,200,200,199,199,198,199,198,196,198,200,198,199
,199,200,198,200,199,196,199,198,198,199,198,198,198,197,198,198,198,198,197,198,199,197,198,197,196,199,196,197,198,196,196,196,195,200,198,197,196,199,197,195,199,195,198,197,197,197,196,199,195,195,196,196,196,198,198,195,199,198,195,196,196,197,191,196,198,193,198,198,194,196,198,196,194,194,198,197,196,195,197,198,197,198,197,196,198,198,198,197,198,197,196,196,196,198,199,197,199,198,198,199,194,199,198,197,197,195,197,197,198,197,195,197,196,198,200,198,198,197,194,198,199,197,191,192,196,198,199,200,197,188,193,198,200,195,190,194,196,194,195,195,192,196,193,193,198,195,195,193,193,194,196,186,182,192,199,194,180,192,197,193,198,193,194,195,194,193,195,193,191,193,198,190,192,183,168,188,160,128,124,124,138,145,136,168,191,188,212,216,210,185,85,26,70,100,81,101,69,5,55,116,109,60,104,191,189,185,151,166,195,162,134,138,189,169,118,188,156,178,146,77,119,91,83,61,30,19,155,236,217,171,72,122,120,57,34,4,117,171,87,69,60,46,98,231,249,203,171,205,229,247,247,215,99,45,51,81,92,98,53,81,122,117,71,57,64,41,13,34,102,166,215,195,133,109,109,100,114,91,53,51,65,75,57,59,50,39,37,29,34,85,141,131,74,22,17,12,12,18,47,67,32,19,51,73,74,78,96,111,105,68,31,31,20,11,15,22,35,85,116,53,27,21,113,157,94,121,103,50,41,92,120,102,91,61,45,24,14,17,19,36,27,19,15,15,14,24,41,27,15,21,12,46,81,41,62,69,68,131,132,72,32,37,29,15,26,29,21,20,17,26,32,41,33,17,15,16,29,34,46,25,63,100,39,41,51,27,29,32,22,27,34,46,46,27,22,22,28,26,36,37,30,46,48,35,29,57,42,33,40,78,131,60,32,67,40,37,49,33,33,36,47,36,26,28,23,30,28,18,34,39,29,27,20,17,24,43,49,34,29,101,96,37,41,66,50,39,44,31,22,37,39,21,17,17,18,35,34,22,21,19,24,31,36,29,26,22,17,24,49,89,51,84,129,134,144,105,116,113,80,56,45,36,27,23,23,19,24,36,39,24,21,16,17,28,47,35,41,48,33,48,60,70,81,116,121,98,142,149,118,102,111,102,46,26,19,18,31,46,84,117,131,125,114,85,84,90,86,76,63,49,29,19,21,23,21,18,24,24,35,54,73,78,79,64,51,44,53,68,55,56,47,65,40,75,207,243,241,225,220,225,221,220,221,220,221,220,224,228,223,225,1
18,5,2,7,12,12,12,15,13,15,15,15,15,205,206,205,206,201,201,202,201,202,203,205,204,201,203,201,200,199,200,200,200,201,199,200,202,200,201,202,204,202,203,203,202,202,203,204,203,201,199,199,200,201,202,201,201,201,200,201,200,203,204,200,203,203,202,202,203,203,203,206,202,205,208,206,207,207,208,209,207,207,206,208,205,208,210,208,208,208,208,209,208,210,209,208,207,207,207,204,205,207,210,207,204,205,202,206,205,204,204,201,204,205,204,204,203,203,203,202,202,201,202,202,203,201,200,200,200,200,198,201,198,200,200,200,200,200,199,199,200,195,198,199,199,200,197,198,197,199,200,198,199,196,197,197,197,198,198,199,200,199,198,199,200,199,198,198,199,197,198,196,196,199,199,199,198,198,198,195,198,196,197,196,196,196,195,199,198,196,196,197,196,195,198,196,196,201,197,194,195,194,196,195,194,194,196,197,198,195,196,196,194,195,194,196,193,194,197,193,196,195,193,196,196,194,194,195,195,195,196,196,198,195,195,196,195,196,196,195,196,194,195,196,195,195,196,198,195,195,195,195,195,196,196,195,197,196,195,194,195,194,197,196,194,198,198,196,195,194,195,198,191,194,197,197,196,196,195,190,196,197,196,194,193,193,194,193,194,194,194,195,192,195,194,191,191,190,194,193,197,190,177,193,199,196,187,192,200,194,194,193,193,193,192,195,195,192,194,193,195,194,198,178,165,189,173,153,156,170,179,176,168,198,200,185,202,203,212,213,158,59,26,76,76,106,102,38,74,105,114,113,141,156,168,181,161,177,167,138,122,160,208,152,102,130,118,188,141,86,129,91,88,72,31,38,158,191,163,155,87,103,78,32,25,16,164,162,80,68,39,132,156,244,227,146,199,233,237,244,218,93,56,65,87,92,72,54,96,134,159,114,77,66,29,20,10,29,97,123,133,101,81,105,105,106,106,93,81,59,53,67,67,66,46,36,36,30,59,125,149,110,49,14,11,14,25,39,63,73,47,18,48,84,98,112,122,141,117,53,27,37,27,19,21,21,31,100,121,49,28,28,131,153,88,125,92,41,30,73,99,68,63,51,47,31,16,22,28,24,24,26,17,16,30,26,23,27,20,19,20,75,89,30,59,55,85,141,97,57,29,34,31,14,33,27,20,16,16,18,46,54,34,20,19,17,31,57,46,17,73,103,36,46,59,31,38,
40,25,30,44,50,33,19,21,20,30,44,39,25,22,23,39,65,66,35,15,24,31,99,145,54,39,83,66,78,82,51,76,74,58,47,20,19,18,26,31,38,38,25,23,30,33,23,42,32,27,38,56,123,92,30,43,63,33,25,38,29,21,33,40,26,21,24,23,24,22,28,31,19,36,38,26,29,32,23,28,41,65,98,49,83,130,119,113,98,95,100,69,42,37,22,25,29,24,27,31,28,29,24,19,17,28,29,33,36,42,58,36,71,101,106,120,155,157,107,118,126,118,103,116,113,55,26,24,40,58,66,78,87,99,108,103,102,88,95,104,97,90,69,43,25,22,22,21,18,23,23,24,36,51,74,81,71,62,57,52,53,41,35,37,54,30,96,220,241,235,228,225,229,220,223,222,221,224,224,227,228,224,225,117,5,1,8,12,12,12,14,12,14,15,15,15,203,204,202,204,202,203,199,202,203,204,203,202,205,203,202,200,198,199,200,198,199,202,201,199,199,200,200,200,199,201,202,200,200,199,200,199,199,200,200,201,200,199,200,199,201,203,201,200,201,201,201,203,202,203,204,201,202,202,202,205,206,205,204,208,210,207,207,206,206,206,206,207,208,207,208,208,209,208,208,208,206,209,208,207,208,206,206,207,207,205,204,206,204,204,206,202,200,201,203,202,202,201,200,201,203,201,201,201,200,201,200,199,201,200,199,201,199,200,199,197,200,199,200,198,196,198,198,197,196,196,198,200,197,196,199,195,196,198,198,196,198,197,195,199,196,195,198,196,195,197,196,199,197,198,199,197,198,197,198,197,196,194,197,196,196,198,197,197,197,196,196,194,196,197,194,193,193,196,197,198,195,197,196,194,197,195,194,195,194,194,194,195,196,195,195,198,196,195,196,194,196,194,194,195,195,194,193,196,194,195,196,195,195,195,194,196,194,195,196,195,194,193,194,194,195,194,194,193,195,192,193,194,193,195,196,194,195,194,194,195,194,195,193,195,194,193,194,193,196,194,193,194,193,196,194,192,196,195,191,191,193,194,195,193,194,193,192,196,194,193,195,194,191,193,192,193,193,191,195,195,193,194,194,194,191,192,194,196,191,176,186,195,196,179,188,195,191,196,192,194,195,191,191,191,192,194,193,196,195,207,174,158,174,157,154,161,171,165,155,156,196,200,179,200,194,204,214,205,130,33,56,71,94,96,42,100,119,138,144,124,156,186,188,157,135,14
4,153,158,191,195,120,102,148,156,222,132,87,116,76,103,72,36,52,132,137,93,132,100,76,42,23,25,17,89,84,56,47,127,214,174,195,172,174,248,246,244,210,97,53,71,102,93,54,46,73,127,160,116,83,78,39,12,12,27,51,84,75,39,25,26,55,89,108,118,113,113,107,81,72,69,52,33,31,31,33,89,143,141,81,23,16,14,31,67,66,70,71,74,48,65,111,112,120,116,112,74,34,21,24,31,27,24,23,67,148,105,33,22,39,148,136,96,124,81,36,24,64,96,84,54,34,45,34,27,32,21,14,16,27,31,35,29,21,12,19,27,33,46,91,86,37,60,52,47,63,41,39,35,35,39,26,16,23,23,25,17,41,41,38,44,32,21,21,53,33,35,34,81,105,36,60,86,43,73,64,37,69,67,56,36,19,25,19,35,42,36,24,21,20,41,73,57,34,23,25,19,125,163,49,54,124,123,191,183,133,185,141,74,37,12,16,14,21,37,43,24,23,25,21,38,55,41,14,24,33,61,133,90,32,55,52,27,24,31,26,24,36,41,25,27,35,24,17,14,23,30,44,40,23,16,22,38,39,53,33,58,103,48,64,88,88,91,90,133,107,57,40,29,22,19,33,35,39,33,17,17,26,27,31,37,21,19,24,53,68,54,121,146,148,143,119,96,69,68,90,118,98,107,115,60,34,37,57,63,44,45,44,45,59,108,117,109,101,96,106,104,88,58,34,23,20,18,23,20,17,25,26,33,56,76,81,78,66,37,26,27,35,45,48,18,132,231,243,247,246,243,239,230,234,235,235,236,232,226,229,226,223,117,5,1,6,10,13,12,13,12,14,14,15,15,202,201,204,201,201,202,202,204,203,203,205,201,199,204,200,205,200,200,204,198,202,200,200,202,199,200,199,199,200,199,201,198,199,200,198,201,200,201,202,198,201,199,198,200,200,199,200,201,200,200,201,203,203,201,201,202,201,201,203,203,205,205,206,209,209,208,208,208,208,207,208,206,208,206,207,208,205,206,206,204,208,210,206,208,207,208,207,203,203,204,205,204,204,202,205,204,203,203,201,203,200,200,200,200,200,200,200,200,199,199,199,200,198,198,198,197,200,198,200,198,199,197,195,198,196,197,197,198,198,197,196,197,198,195,196,193,195,198,194,198,196,197,198,197,198,196,198,197,198,197,197,197,198,197,196,199,197,199,198,198,198,195,198,197,198,199,196,198,197,195,195,197,198,197,196,196,197,195,196,196,196,198,199,197,198,196,197,198,195,198,194,197,197,193,191,194,19
6,196,194,195,195,194,197,193,195,196,196,197,196,194,195,196,194,195,197,196,193,193,193,196,194,195,196,193,198,194,191,195,194,196,197,196,194,196,194,193,196,194,195,194,191,195,195,194,194,196,193,192,194,194,194,193,195,195,193,192,196,195,189,190,195,193,196,196,190,193,192,193,194,195,192,196,197,194,194,193,191,192,196,192,194,195,192,195,193,193,193,194,197,180,184,194,195,179,178,195,192,194,193,193,193,192,191,191,192,196,195,196,199,199,155,135,142,114,116,133,150,132,132,156,194,196,179,196,192,194,206,216,172,118,113,69,71,79,44,122,134,123,132,114,182,214,169,143,139,161,165,184,189,142,112,124,169,203,219,100,86,105,68,87,60,39,39,121,108,64,111,77,51,29,18,24,34,73,41,34,117,211,252,162,152,193,229,250,246,215,101,49,73,102,96,74,50,65,85,125,106,78,71,44,26,11,29,46,79,117,69,43,36,37,36,40,89,121,130,125,121,120,110,74,50,27,24,35,34,101,151,110,47,22,17,25,55,63,46,42,54,77,110,138,126,116,77,41,38,24,17,11,17,20,30,33,46,132,139,47,27,20,47,155,127,97,123,75,34,24,55,117,93,44,35,40,42,33,31,17,12,14,19,43,39,24,14,19,15,35,62,42,97,88,37,63,37,29,32,27,37,28,43,37,18,19,14,24,29,36,36,21,16,38,47,49,51,26,16,24,40,97,107,29,92,137,129,200,144,134,178,124,71,36,27,27,35,32,20,27,35,24,39,50,36,50,57,33,37,69,142,168,52,39,81,87,147,125,95,141,99,62,46,17,21,19,29,36,33,31,26,20,26,48,44,38,19,25,22,66,141,86,32,57,55,25,43,38,21,32,39,39,26,37,32,19,15,17,22,30,40,32,18,26,19,35,61,44,26,65,101,50,37,62,61,84,148,165,92,33,37,29,18,21,26,45,45,27,22,19,26,44,41,29,16,19,19,61,85,63,138,154,145,139,101,92,86,56,66,116,97,103,112,66,36,37,57,42,28,32,30,31,51,100,122,104,94,100,108,108,102,73,36,25,30,26,23,19,16,22,24,30,42,60,76,81,63,35,19,22,31,42,33,39,188,233,250,250,248,248,249,249,249,247,246,246,240,232,227,225,223,116,5,2,7,11,12,12,14,13,15,15,15,15,204,205,205,206,203,201,204,205,207,205,202,204,202,204,201,203,200,200,202,201,201,200,200,200,201,200,202,200,200,200,198,200,199,200,199,201,200,198,201,201,200,200,200,200,200,200,200,2
01,202,201,205,205,201,202,204,204,204,202,203,205,205,202,204,206,205,206,206,207,206,206,206,206,206,203,206,205,203,207,206,208,207,205,206,203,204,205,206,204,204,202,203,206,204,203,204,203,203,203,201,202,201,201,200,200,202,198,198,199,198,198,198,196,200,196,195,199,195,198,197,197,198,194,197,198,196,198,198,196,194,196,196,196,195,196,200,194,197,196,194,198,196,199,195,195,196,196,197,195,195,195,196,198,196,194,195,196,196,195,193,196,197,196,198,197,194,197,197,197,196,197,197,198,197,195,197,197,198,198,198,197,196,197,198,198,201,199,196,199,199,195,196,198,193,195,196,197,195,195,197,196,197,193,195,194,196,197,196,198,197,196,196,194,195,194,193,194,194,196,193,196,196,194,195,194,195,194,192,193,193,193,198,194,195,192,193,193,195,196,195,195,193,193,195,196,193,192,197,194,194,194,194,193,192,192,194,192,193,195,188,192,193,192,193,189,189,193,193,190,191,195,194,193,196,195,193,193,193,193,193,192,193,192,190,191,194,194,193,192,195,182,182,196,196,178,176,194,194,195,190,190,193,192,193,193,196,197,195,189,181,185,142,131,134,122,142,154,166,152,152,171,195,196,177,192,193,191,200,208,171,169,163,73,70,94,90,190,151,130,157,122,197,174,136,156,167,185,162,166,168,154,141,115,160,200,177,88,87,103,71,82,48,33,82,129,95,74,89,44,35,29,14,40,61,75,37,92,191,245,247,146,197,252,244,245,208,105,52,62,107,105,79,55,72,107,95,77,72,69,45,22,15,33,60,83,115,155,118,71,53,38,42,39,32,70,116,129,131,119,121,106,74,45,26,23,43,134,155,85,31,10,16,38,61,43,22,19,15,34,77,136,145,101,60,24,22,17,31,32,15,16,26,27,41,68,51,25,31,13,66,162,115,108,132,67,32,16,41,85,61,35,28,42,33,22,26,23,19,16,33,32,32,36,20,16,37,52,39,55,119,81,32,56,26,27,31,22,35,26,40,40,18,18,15,19,39,39,28,17,23,18,46,71,40,18,14,22,34,110,110,31,83,102,116,181,122,132,120,85,52,29,40,48,32,21,16,24,31,48,48,23,21,22,48,67,85,36,92,148,57,32,43,39,42,44,38,39,36,53,50,23,24,28,30,24,22,35,33,31,40,33,25,43,39,30,39,80,137,79,29,78,69,48,71,53,38,55,49,46,39,19,21,27,18,19,32,27,26,33,
30,22,38,49,31,35,35,83,104,47,29,57,61,96,115,81,50,40,39,29,26,32,33,33,42,39,30,28,55,44,28,35,21,24,41,104,83,77,139,119,107,102,108,89,76,52,49,113,103,105,113,73,35,47,45,32,29,23,27,26,46,83,105,114,98,103,106,113,125,82,33,39,76,56,30,24,17,23,19,24,39,51,65,81,66,41,26,24,25,43,31,55,208,226,249,236,205,229,246,243,228,222,219,230,240,235,227,221,221,118,5,1,8,11,13,13,14,13,14,15,15,15,204,205,203,203,204,205,202,205,203,202,205,203,205,204,202,203,198,202,203,200,200,199,199,200,200,198,200,199,200,199,201,201,199,199,200,200,198,198,201,199,199,200,201,201,200,200,202,206,204,201,206,205,204,203,205,204,204,204,205,203,205,205,203,204,203,203,204,204,204,206,205,204,205,204,206,203,203,207,206,203,203,205,204,205,206,206,203,203,206,204,205,205,204,204,204,204,202,203,201,204,203,203,200,202,200,198,200,200,200,198,196,199,198,197,198,197,197,194,195,197,198,196,197,198,195,198,197,196,196,198,198,196,200,198,199,198,198,199,195,198,198,197,195,194,198,196,195,195,194,196,196,197,196,196,195,194,194,196,193,197,198,194,195,196,197,196,195,197,198,196,198,198,196,197,197,197,196,195,199,198,198,198,196,197,196,197,196,196,196,198,198,198,198,196,199,198,195,198,197,195,195,195,199,196,195,198,197,196,199,196,197,196,196,196,194,196,193,198,198,195,196,196,195,195,199,195,194,197,195,194,194,193,196,195,194,194,196,194,194,197,191,196,194,194,196,191,193,192,193,194,194,193,193,195,197,193,194,193,188,193,194,194,194,187,189,196,193,191,192,193,193,194,196,193,191,193,191,194,194,191,193,191,189,193,194,191,192,192,196,186,176,194,202,183,180,191,191,193,193,193,194,194,195,195,193,186,179,174,188,190,146,162,168,161,187,193,193,160,185,192,192,198,173,191,194,194,196,200,142,158,173,73,84,103,134,248,155,140,192,136,167,155,154,172,171,153,136,173,170,181,167,99,154,186,173,113,89,98,85,87,50,50,115,139,97,86,72,22,27,42,46,55,72,55,94,214,244,228,172,139,243,251,243,198,89,51,64,101,128,115,73,57,81,129,105,78,71,39,25,13,26,60,88,113,130,157,131,116,112
,73,50,38,37,29,47,92,122,133,126,118,124,103,62,37,53,113,117,56,19,10,15,49,64,33,19,15,13,16,25,74,117,107,68,25,33,85,100,73,35,21,22,28,31,22,26,21,28,10,91,160,103,122,128,63,22,14,18,34,30,25,29,36,30,15,17,23,31,33,25,18,16,36,39,48,51,29,18,57,126,76,38,57,23,29,34,21,30,29,47,46,23,16,21,33,26,27,29,27,15,34,50,41,47,38,20,31,54,97,100,40,40,55,34,49,46,40,41,39,51,33,44,48,30,25,21,18,44,53,42,28,27,31,56,95,90,28,69,118,46,36,40,42,51,45,45,46,26,47,43,25,45,32,19,22,20,26,39,42,29,24,23,29,44,65,43,54,129,71,40,127,113,128,159,95,120,123,78,51,17,17,13,21,29,35,29,21,17,23,33,46,50,26,16,21,47,100,102,46,39,67,50,44,45,33,37,29,47,35,30,41,25,23,25,42,49,63,51,21,18,32,34,59,86,122,84,68,132,101,87,129,104,105,83,34,36,99,107,103,121,75,41,53,47,30,21,21,16,28,48,63,120,132,112,113,101,118,126,79,26,85,137,84,44,26,23,19,20,24,34,49,65,81,76,60,33,36,45,37,37,29,115,145,130,118,84,141,180,175,174,156,152,171,209,229,222,216,219,117,6,2,8,12,13,12,14,13,14,15,16,15,202,201,200,202,203,203,201,200,205,205,203,205,201,203,201,203,200,201,200,201,201,200,200,199,199,198,201,200,200,202,201,201,200,202,202,201,201,199,198,198,200,199,200,201,201,200,201,201,202,203,205,201,203,203,202,206,204,202,203,204,205,205,205,207,207,204,203,204,205,206,206,203,203,204,205,202,201,202,204,205,201,202,205,204,202,203,205,205,204,203,203,205,203,201,204,200,200,201,200,201,200,202,201,201,201,197,199,201,200,200,199,197,198,198,197,200,197,198,198,196,198,198,198,198,196,196,197,197,197,197,196,198,198,196,197,193,196,197,194,196,194,195,193,197,194,192,195,195,196,194,196,196,196,196,194,198,194,196,195,196,195,195,197,196,196,196,196,198,198,197,193,196,195,196,195,193,196,194,196,198,196,196,196,195,196,196,195,195,196,194,195,196,196,198,195,195,198,196,196,197,196,195,196,198,198,195,196,194,196,195,195,196,194,198,197,195,194,198,196,196,195,193,195,194,194,197,194,196,195,195,195,191,195,195,194,193,193,193,194,195,193,194,192,193,193,193,192,192,194,193,191,191,
192,193,194,193,193,191,190,194,194,192,191,188,193,192,193,194,193,192,192,193,194,194,193,191,190,191,192,194,192,190,192,193,194,191,193,191,192,189,176,190,198,188,182,187,192,194,191,193,196,193,192,185,178,176,181,186,202,191,158,172,163,165,181,181,171,157,194,192,190,198,174,190,194,191,195,194,143,166,166,61,61,89,134,220,121,116,153,115,160,169,183,171,134,122,151,184,175,187,141,121,198,195,188,134,91,117,95,93,61,66,120,90,81,101,57,17,42,81,63,76,47,76,210,253,252,178,171,178,246,248,210,100,36,65,101,121,121,83,69,60,92,113,89,79,41,24,18,22,49,87,112,118,119,137,130,116,121,111,98,59,34,40,31,37,73,118,141,147,147,141,142,116,68,37,40,31,20,14,13,46,65,41,28,21,12,19,15,26,80,122,88,33,122,188,139,72,29,24,36,46,35,27,25,22,27,27,115,157,101,128,121,52,20,12,15,18,22,25,33,36,22,15,14,23,41,29,19,15,11,24,56,62,26,16,15,29,111,77,43,67,23,45,50,28,48,39,47,39,26,33,30,25,16,11,29,33,34,38,30,17,33,60,41,79,56,90,105,34,39,36,38,42,29,45,37,33,58,49,26,29,52,49,35,61,74,50,63,73,66,68,117,113,127,92,131,130,33,101,51,61,69,62,62,74,55,59,63,57,58,51,48,46,53,39,42,42,31,23,21,28,50,59,34,50,124,73,36,108,96,139,173,121,158,155,80,42,21,13,16,19,36,37,23,18,17,17,34,50,43,21,26,21,43,113,101,48,43,63,36,24,34,24,31,35,33,46,42,28,23,19,20,34,64,59,33,21,18,22,58,66,74,124,71,56,100,83,108,130,139,136,68,37,28,95,114,93,114,87,44,47,52,34,24,25,26,39,44,59,113,140,126,126,112,110,107,56,24,122,164,90,55,33,21,19,22,27,35,49,62,76,77,68,48,42,41,49,50,43,47,25,36,54,34,50,105,131,134,113,89,101,149,189,206,214,220,118,6,2,9,12,13,13,15,14,15,15,16,16,203,204,202,202,202,202,202,203,200,203,205,201,204,201,201,201,200,201,201,199,202,201,200,200,201,202,201,201,201,202,202,203,201,201,202,199,203,200,200,200,200,201,202,203,201,199,200,200,200,204,203,202,202,200,203,203,203,205,203,205,204,202,202,203,204,203,204,204,202,203,204,203,202,202,202,204,205,202,202,202,203,201,202,201,200,200,201,204,203,201,202,200,200,200,200,200,200,200,201,202,199,203,200,
200,199,198,201,199,201,200,198,199,200,198,196,198,197,198,199,196,199,198,198,198,198,198,193,196,196,197,193,195,198,193,196,195,196,194,194,195,193,193,193,194,195,196,194,194,194,195,195,194,193,195,194,194,199,196,193,196,195,195,196,193,194,196,197,195,196,196,195,196,195,194,194,195,198,195,196,194,195,196,194,196,195,194,196,195,197,197,196,193,193,195,196,198,196,194,196,195,195,196,197,195,193,197,195,196,198,194,198,195,194,196,195,195,196,196,196,193,194,195,194,194,195,195,196,194,192,193,196,193,194,195,193,193,194,193,193,194,190,194,191,193,196,191,193,193,192,192,191,192,194,191,191,191,190,189,190,194,193,191,188,189,193,192,193,191,190,193,193,195,192,193,195,192,192,193,193,191,191,194,193,190,193,190,193,191,190,193,180,184,198,187,183,189,190,195,197,193,190,181,173,177,182,184,191,191,208,179,128,136,114,120,148,133,136,152,199,187,181,198,170,188,194,189,195,193,159,194,174,55,53,69,122,174,74,70,134,139,146,173,162,140,139,145,190,173,142,179,136,160,227,167,171,119,91,130,90,90,64,68,97,49,60,89,52,15,50,76,78,64,61,177,250,250,199,175,234,208,246,234,99,49,57,99,118,114,73,57,62,71,85,84,77,53,24,12,24,47,83,113,117,116,107,117,119,107,106,105,121,107,75,56,47,49,36,47,98,135,159,165,152,151,127,79,47,27,19,16,12,27,51,49,46,33,14,21,18,20,42,103,113,101,183,148,48,22,23,51,68,61,55,41,28,30,31,45,142,141,106,141,110,46,17,15,16,16,20,23,30,42,31,16,23,30,29,25,24,22,12,37,45,45,38,22,14,42,127,71,62,92,62,107,83,61,100,70,55,35,21,43,29,20,17,14,16,42,44,27,17,19,18,55,73,164,193,135,106,33,32,43,46,51,39,48,44,35,56,48,53,52,74,101,105,119,119,118,143,161,177,182,188,191,180,183,212,187,161,184,171,160,157,149,162,160,150,150,127,132,137,127,139,124,103,95,70,64,69,53,53,69,45,45,26,59,135,63,28,60,49,43,51,42,50,52,56,51,28,21,20,28,29,25,28,24,16,33,30,28,43,41,32,37,66,102,96,43,50,56,25,29,34,23,28,31,43,37,28,33,24,23,29,37,39,52,41,26,25,50,63,41,79,138,79,39,57,61,66,117,127,87,52,31,27,97,120,93,120,91,41,42,51,41,29,24,23,31,41
,42,59,83,93,110,105,97,92,38,93,180,162,118,84,59,33,23,23,26,37,48,61,69,76,75,52,35,39,57,69,67,64,37,33,40,40,81,111,107,101,84,73,62,95,166,198,212,220,118,6,3,9,12,13,12,14,13,14,15,15,16,202,206,201,200,202,203,204,204,204,200,202,202,200,201,202,204,203,201,200,200,201,204,203,200,202,201,202,201,200,201,200,202,200,201,201,200,199,201,203,199,202,200,201,205,199,200,204,200,200,202,202,201,201,201,202,200,201,201,202,202,200,203,201,202,201,200,204,201,202,200,203,201,200,202,204,201,202,201,199,201,199,200,200,199,201,198,198,198,200,200,198,200,199,197,198,198,201,199,200,200,199,200,199,200,202,199,200,199,199,202,200,201,198,198,201,199,198,198,198,196,198,199,198,198,193,198,198,196,198,195,198,197,196,198,196,194,194,196,193,196,194,194,195,195,196,195,195,195,196,194,196,196,196,192,191,196,195,195,193,194,195,193,194,196,195,193,192,194,194,192,194,195,192,194,194,194,197,193,196,195,193,194,195,194,192,195,195,195,195,195,197,196,195,193,194,193,196,193,194,194,192,196,196,195,193,196,196,195,198,193,193,195,196,195,195,193,193,193,191,195,196,194,195,193,193,195,194,194,193,193,193,193,195,192,192,193,193,192,193,193,193,193,195,193,193,194,193,194,193,193,192,191,192,193,191,194,191,187,191,191,193,190,190,190,194,191,192,191,187,191,193,193,193,194,194,191,190,190,191,191,191,190,191,191,190,189,189,193,191,190,181,181,195,190,186,191,194,194,188,179,175,174,179,190,191,190,192,195,203,150,115,121,118,141,154,144,136,167,206,187,182,194,173,187,194,189,194,196,171,201,184,101,94,102,96,99,35,77,190,171,148,142,132,151,156,191,190,122,150,193,120,178,203,116,177,122,95,118,66,83,61,65,67,33,37,65,45,29,53,62,49,61,159,249,252,200,163,215,249,211,213,122,35,66,92,122,111,69,63,71,81,83,79,69,47,35,15,33,51,78,114,118,122,110,103,111,106,104,100,95,108,120,120,133,70,37,49,42,33,63,125,152,162,157,150,144,105,68,51,33,18,13,22,34,37,28,22,25,17,18,27,67,112,143,178,99,27,19,33,95,117,97,75,57,33,34,39,69,148,133,108,144,101,29,17,15,14,21,19,23,29,
49,40,29,34,16,14,21,29,30,36,36,19,21,48,41,43,101,145,50,99,145,136,231,153,153,202,112,60,29,23,23,22,33,19,15,37,33,32,31,25,16,44,50,39,194,113,85,112,19,39,36,42,45,42,51,54,57,122,93,122,151,181,200,206,213,164,209,218,210,203,197,203,182,186,188,201,193,185,201,186,181,184,195,204,205,201,205,191,189,194,198,200,192,148,179,175,165,162,149,136,120,89,65,41,109,149,51,23,37,33,34,31,38,35,24,41,40,24,34,37,25,14,17,27,24,41,35,22,23,35,48,53,61,42,81,97,40,55,59,27,43,41,24,43,37,38,36,17,26,27,33,40,27,18,27,41,44,60,54,31,14,69,130,76,39,55,48,39,62,63,45,38,29,24,96,125,92,115,104,47,25,35,38,30,23,27,26,26,26,24,42,51,72,89,103,97,99,149,168,137,124,124,84,55,32,26,28,33,53,60,71,81,78,59,37,35,39,48,58,61,41,59,89,105,120,108,87,80,76,79,106,156,199,216,217,220,118,7,2,8,12,14,12,14,13,14,15,14,14,203,206,206,206,203,202,202,201,203,205,201,201,202,201,202,201,203,202,203,203,205,203,202,203,200,202,202,202,202,202,200,200,203,202,204,203,203,202,202,204,205,202,204,202,201,201,201,202,200,200,198,199,201,200,202,200,199,201,198,203,201,203,200,200,201,201,203,201,201,200,202,200,200,203,202,202,200,199,201,202,201,199,200,198,199,199,198,199,198,199,200,199,199,200,196,200,200,198,196,197,198,199,199,200,199,197,199,199,200,201,199,200,201,198,199,199,198,200,200,196,200,198,196,197,196,197,196,198,196,198,198,196,197,195,198,197,197,196,196,197,198,197,194,196,193,194,197,195,195,197,195,194,195,196,194,192,195,193,191,196,197,194,194,195,195,193,192,192,192,194,195,196,194,193,194,195,194,194,195,192,197,198,194,196,193,195,197,193,194,194,195,196,195,194,195,195,193,195,194,192,192,193,194,196,194,196,194,193,195,193,193,193,193,195,195,191,194,195,193,194,193,193,193,192,195,193,193,195,193,194,193,192,193,193,193,193,194,189,192,194,192,194,194,194,192,191,193,191,190,191,193,192,190,189,194,192,187,190,193,192,191,189,190,191,193,195,191,190,190,189,191,191,191,192,191,193,193,191,191,189,192,190,189,193,193,191,191,190,191,194,184,179,196,194,189
,192,181,178,173,175,182,187,192,196,191,191,195,200,195,131,136,168,165,185,188,165,171,194,208,189,181,200,175,184,197,191,198,191,179,202,195,123,125,158,125,73,46,139,232,188,131,144,163,168,176,184,168,138,190,200,95,161,163,139,210,121,110,119,68,76,57,61,56,19,29,57,43,45,66,54,45,145,241,250,231,156,191,232,250,194,95,55,53,104,110,115,68,68,106,98,87,83,75,46,30,16,29,57,83,112,122,118,116,101,101,118,106,101,105,87,104,109,117,141,78,39,49,40,43,34,34,78,128,160,163,154,137,141,151,100,53,36,18,12,19,22,20,22,17,37,59,55,84,131,160,91,32,13,69,163,134,141,120,59,36,33,38,94,159,112,112,141,79,30,21,12,19,25,20,25,36,43,40,43,27,13,17,14,23,47,39,18,14,17,25,72,73,77,128,51,57,60,77,141,76,77,99,60,60,39,15,17,15,32,31,36,32,17,13,30,36,45,56,51,36,52,62,110,94,28,55,55,57,70,86,123,148,177,200,202,224,222,225,212,194,187,183,181,184,170,142,144,139,125,119,122,131,122,125,133,127,131,141,155,158,162,179,179,170,172,165,164,162,162,172,184,188,170,177,174,185,188,170,169,147,170,159,67,57,61,46,49,39,36,44,31,40,38,21,45,34,22,26,13,21,42,45,36,19,20,22,45,70,53,32,80,97,45,73,78,43,82,69,49,77,51,46,34,17,24,29,47,36,15,14,15,24,54,63,31,18,11,41,119,85,42,56,45,27,39,41,31,27,26,24,98,132,94,121,107,54,24,19,27,25,30,23,27,45,57,46,34,37,66,116,125,131,116,90,60,53,59,74,97,82,56,27,24,35,43,61,65,77,78,61,42,29,44,60,55,55,90,133,141,124,108,92,76,76,112,171,206,230,243,229,225,224,116,6,1,8,11,12,12,14,13,15,16,15,16,203,206,205,205,203,202,202,200,201,201,204,204,202,200,201,204,201,199,202,201,202,205,203,200,203,200,203,206,202,204,202,202,205,204,205,205,203,206,204,201,203,202,200,200,199,201,204,202,200,199,199,200,200,202,199,200,201,200,201,198,199,201,199,200,200,201,201,199,201,200,199,200,202,199,200,200,201,201,200,204,201,200,202,201,201,198,199,201,201,199,199,199,199,200,200,197,200,198,199,198,198,198,198,199,199,199,198,199,198,200,197,197,200,200,199,198,198,199,199,198,199,196,196,197,197,198,195,195,197,196,196,197,196,195,196,196,19
5,195,194,198,196,193,195,194,196,195,195,195,192,193,193,193,194,195,194,196,194,193,193,193,192,193,194,194,193,194,193,193,194,192,193,195,194,193,193,193,194,193,195,194,193,195,194,193,193,191,194,195,194,194,191,194,193,191,194,192,194,193,191,199,196,191,191,196,194,193,191,193,196,194,194,193,191,191,193,194,192,194,196,195,193,191,192,192,193,189,191,194,193,193,193,194,193,195,192,192,191,192,193,195,194,192,194,192,194,195,193,191,191,193,192,193,193,194,192,192,188,187,194,194,189,189,192,190,193,191,192,194,190,188,189,191,190,191,193,195,193,191,194,190,190,190,190,191,191,191,189,193,192,193,194,182,194,188,174,174,170,179,184,188,193,195,192,195,191,192,193,203,177,122,145,163,173,181,165,158,183,204,207,190,179,196,178,183,196,188,194,193,182,193,192,125,130,194,153,118,94,171,236,160,137,155,187,159,152,190,167,162,211,151,64,152,162,182,208,96,121,130,97,98,80,77,41,9,32,92,65,71,59,37,135,224,249,249,157,173,224,235,249,124,48,61,90,124,108,74,62,111,129,94,74,77,50,26,14,24,46,80,104,116,124,112,107,103,113,144,118,98,99,92,105,96,104,124,81,78,70,41,39,32,35,31,48,95,131,153,146,151,161,163,145,105,64,34,23,12,21,17,22,53,50,40,48,91,139,105,57,19,26,55,24,50,108,80,37,33,43,118,154,110,116,131,68,26,30,23,37,37,38,42,42,52,34,24,29,22,17,19,33,33,25,24,21,15,42,69,59,100,124,54,36,38,29,48,30,37,41,28,60,39,12,17,17,21,48,44,27,25,32,46,70,88,93,86,89,137,98,156,149,120,177,163,191,198,201,227,218,213,204,169,173,162,155,143,128,109,119,135,136,134,134,129,132,129,138,138,141,146,146,154,156,151,139,148,142,165,181,178,184,186,178,170,169,170,161,156,154,128,136,143,153,162,158,168,167,180,174,152,174,158,139,117,85,63,57,57,49,58,42,33,32,32,31,26,41,31,24,28,26,24,34,44,40,45,49,109,103,36,118,105,125,221,119,141,166,75,53,29,23,30,30,31,30,24,18,23,35,42,38,39,38,12,80,141,78,51,60,33,29,35,29,30,27,26,28,100,136,103,115,113,60,27,18,24,28,29,27,40,80,85,80,60,36,117,150,133,108,56,44,24,24,26,42,95,102,62,40,28,29,32,46,53,56,53,44,104,151
,148,140,131,137,150,148,118,95,82,73,99,155,210,239,246,246,238,229,224,223,115,4,2,7,11,12,12,13,12,15,15,15,15,201,206,205,200,204,201,202,202,203,200,200,203,201,199,200,201,201,202,201,203,203,202,203,203,201,204,206,206,206,205,205,203,204,206,203,202,204,202,202,199,198,198,199,199,202,202,199,201,200,199,201,200,200,200,202,201,199,200,198,200,199,200,200,201,202,200,200,198,199,200,201,201,201,202,200,201,202,203,200,200,199,200,204,200,202,201,201,202,201,200,199,199,201,204,200,200,200,199,200,198,198,199,199,198,196,199,198,196,198,198,196,198,198,198,199,198,198,198,198,197,196,198,197,196,198,196,194,196,195,198,198,198,197,196,195,196,196,193,195,196,194,191,193,198,195,193,196,195,197,196,194,194,194,196,195,193,194,193,193,195,193,193,195,193,195,193,192,194,194,193,193,194,193,195,193,191,193,191,193,190,193,195,190,194,193,194,194,191,193,192,193,194,191,190,194,194,191,193,196,194,195,192,193,193,189,193,192,194,193,191,193,193,192,193,193,192,191,192,193,193,192,192,194,193,192,192,192,191,193,194,194,193,192,194,193,191,194,192,193,193,190,191,192,192,193,193,192,192,192,191,192,191,193,193,194,190,188,193,191,191,188,189,193,192,193,193,189,192,193,189,190,195,193,193,192,193,198,193,193,192,193,190,189,190,190,195,193,192,196,199,194,173,172,169,163,179,184,192,193,193,196,194,195,194,190,193,195,196,155,105,122,123,143,153,122,139,189,201,205,191,177,199,177,185,199,191,198,193,188,189,189,137,134,150,119,97,81,146,196,187,163,158,155,128,174,198,177,173,171,112,60,159,180,198,166,73,130,135,111,124,115,73,23,21,129,204,118,78,56,106,216,246,239,177,161,224,236,232,179,60,75,98,105,116,64,73,107,134,118,67,71,54,25,12,18,43,77,105,118,128,119,105,102,98,112,156,141,101,94,84,96,100,116,122,83,98,89,52,44,31,37,36,32,35,49,89,126,140,150,153,159,171,160,110,64,40,27,23,17,22,24,22,25,34,86,115,87,49,21,24,26,77,114,79,42,20,46,138,153,104,122,118,53,25,47,51,60,64,60,59,54,43,24,12,23,34,31,35,23,15,23,29,26,48,51,26,30,97,130,52,37,50,54,55,
44,61,45,35,52,44,26,21,32,55,61,69,71,65,88,123,132,151,168,176,193,200,186,205,189,202,226,196,200,184,174,169,149,138,125,123,118,123,139,144,150,140,145,146,141,147,150,144,132,130,131,129,127,120,110,104,98,99,90,89,89,99,105,108,124,136,143,145,156,152,142,155,159,162,165,162,166,155,153,147,137,143,140,151,185,186,185,184,169,168,146,123,125,102,84,67,47,56,77,71,43,27,21,26,33,40,44,31,17,28,65,126,105,33,89,93,132,190,104,140,149,65,60,36,36,37,20,21,23,29,28,42,34,21,23,45,59,63,122,157,74,46,61,29,24,37,25,30,30,26,29,93,126,99,121,115,75,33,22,34,35,34,35,43,60,95,123,74,93,174,150,92,51,32,32,21,21,25,67,135,113,71,42,26,22,29,50,50,76,114,177,214,204,196,174,160,141,124,99,74,64,84,147,206,239,248,249,239,236,231,225,222,219,115,4,1,7,11,13,12,13,13,14,14,15,14,204,206,206,206,203,200,203,205,202,203,202,200,201,202,202,202,203,201,203,203,206,205,204,205,208,208,205,206,206,207,209,205,206,205,205,205,202,204,203,203,200,200,203,200,201,200,200,200,198,201,201,201,200,202,205,203,202,203,206,204,205,204,203,204,201,205,201,203,202,200,202,202,203,203,203,204,202,202,201,203,202,200,203,201,203,201,201,202,199,200,201,201,203,201,201,200,200,200,199,199,200,198,200,199,198,197,199,199,194,196,197,196,196,196,194,197,198,196,193,194,196,197,196,194,195,197,197,195,195,195,195,195,195,195,197,194,194,195,194,195,194,193,196,193,194,193,193,195,195,194,194,194,193,193,193,195,192,193,194,194,195,195,195,195,194,194,193,194,196,194,194,198,192,192,195,193,192,193,193,193,193,193,193,193,193,193,192,194,191,192,193,193,196,194,196,193,196,195,191,195,193,193,191,191,194,192,193,192,192,193,191,193,193,194,194,191,193,191,189,193,191,194,195,193,195,194,195,194,194,192,193,194,193,194,192,193,194,191,192,194,193,191,193,193,191,192,193,192,192,193,193,192,191,192,192,188,189,192,193,189,189,193,195,193,191,193,192,191,194,192,188,193,193,191,192,194,196,195,192,189,193,193,194,193,193,195,198,196,192,184,174,162,169,178,181,192,193,194,195,194,191,191,191,19
3,190,196,197,193,145,107,135,146,163,156,124,150,199,203,203,193,176,200,184,184,203,193,200,198,191,186,192,146,99,96,71,68,46,65,123,185,160,123,153,146,194,191,154,164,174,137,63,121,163,173,149,95,125,124,109,127,99,48,7,58,177,190,115,72,128,200,238,206,155,179,210,247,234,140,73,38,104,115,105,71,57,109,127,120,77,66,50,25,20,19,39,79,105,121,125,117,107,104,97,100,95,136,140,103,101,84,112,131,139,123,87,87,59,53,49,44,45,27,36,36,31,40,54,100,133,151,154,165,169,163,149,119,86,45,31,22,14,14,21,18,31,75,102,84,76,71,108,168,130,60,33,18,54,151,139,103,127,107,46,19,30,38,45,42,41,46,46,43,20,15,13,30,48,29,19,16,18,26,45,50,24,21,12,53,110,61,38,57,58,56,47,49,42,34,53,51,50,62,80,105,125,117,130,154,187,205,199,200,196,201,195,184,170,169,147,155,148,115,121,107,114,128,133,146,136,136,140,145,147,145,137,108,101,81,63,53,45,41,33,29,33,39,33,34,22,24,30,29,29,27,24,28,33,27,29,37,39,37,39,47,53,84,88,103,100,128,141,147,152,156,150,147,141,137,139,142,154,154,169,177,177,172,168,178,160,159,143,141,147,116,87,60,52,41,61,60,42,27,22,16,60,137,103,37,49,41,34,56,38,44,48,36,55,48,43,29,17,24,19,29,46,33,19,19,15,37,79,77,118,152,66,52,64,27,41,50,31,35,35,24,32,103,127,102,110,112,78,37,29,41,54,56,37,37,20,52,121,104,155,177,94,58,35,27,23,20,41,61,141,175,103,61,33,32,20,97,207,214,215,189,172,173,159,148,132,105,84,77,67,78,133,197,239,247,250,244,234,229,226,226,226,219,219,117,5,1,7,11,12,11,14,12,14,15,15,14,203,208,206,203,207,203,203,203,205,206,204,203,203,202,204,203,205,205,204,206,206,206,206,208,207,210,209,209,208,206,207,207,206,206,205,205,205,204,203,204,204,205,205,200,202,201,201,206,201,202,203,203,205,205,206,204,204,206,207,207,207,205,209,208,207,206,207,207,206,208,208,206,206,206,206,207,204,205,204,204,205,203,205,202,205,204,202,203,202,202,205,201,200,202,198,200,200,200,200,198,201,201,200,200,198,200,199,199,200,197,198,199,198,198,198,196,197,198,198,198,195,198,195,194,197,194,197,194,194,195,193,194,194,195,194,194,193,194,
194,195,193,195,194,194,194,193,195,193,194,193,193,193,192,193,193,192,196,193,191,194,194,194,191,194,194,191,193,193,193,193,194,195,193,193,194,193,194,191,193,191,191,194,193,194,192,192,193,194,194,194,193,193,194,194,196,194,193,192,194,193,195,193,195,196,193,198,194,194,194,194,195,191,194,194,192,195,194,194,194,194,194,193,193,194,195,194,194,196,193,191,195,196,193,196,196,194,196,195,193,192,197,196,196,196,191,194,193,194,193,193,193,192,193,194,193,183,191,193,189,190,191,194,192,191,192,191,192,194,193,194,190,190,193,191,193,192,194,193,193,193,192,194,196,198,198,195,187,177,173,173,177,177,181,193,193,199,198,193,196,195,191,190,192,192,192,201,198,193,145,134,181,187,203,174,152,185,208,207,205,196,179,207,189,187,203,196,202,201,194,179,176,110,73,92,88,74,56,61,84,136,136,139,162,174,193,135,148,182,186,180,52,72,160,179,182,123,121,123,103,121,89,26,8,45,114,123,66,127,232,240,214,146,166,229,234,246,139,59,58,54,116,108,67,67,102,136,111,84,76,44,21,15,23,38,66,106,122,116,113,109,101,110,118,118,69,72,108,99,100,104,137,142,141,127,71,47,47,57,55,57,46,23,19,26,35,35,38,38,60,110,134,151,167,154,156,163,168,148,93,56,37,31,21,12,24,32,57,85,112,146,166,139,70,33,26,16,73,160,124,108,131,91,38,10,16,16,25,25,21,31,38,45,27,15,27,29,30,36,27,16,19,38,37,35,24,23,15,56,118,63,35,33,21,27,23,29,22,40,73,80,95,121,160,182,184,193,200,212,222,213,198,187,177,162,142,136,126,122,109,122,125,117,125,121,127,133,137,125,95,83,61,41,46,39,37,24,18,21,22,19,23,24,15,17,19,21,21,29,25,32,30,24,17,19,27,26,31,25,30,32,27,21,16,24,20,24,23,20,28,29,36,49,59,76,93,104,117,118,125,133,142,151,159,162,148,151,159,159,167,172,174,168,178,174,156,159,134,139,118,85,81,60,38,45,57,108,110,43,38,41,48,45,33,49,38,30,53,41,33,37,28,23,30,43,33,24,24,19,19,40,61,60,126,170,65,63,84,50,94,89,53,72,65,27,46,126,132,112,110,108,90,40,40,66,69,57,50,47,19,70,145,128,158,126,50,41,27,22,18,29,45,71,115,102,63,47,28,28,28,149,245,242,220,160,127,116,117,115,104,93,67,83
,140,200,242,249,250,249,236,233,231,225,224,222,223,223,222,117,4,1,7,11,11,11,14,12,15,15,14,14,207,207,206,205,206,205,205,204,200,204,206,204,205,202,203,203,202,204,202,204,205,204,208,205,205,209,208,206,207,203,206,203,202,203,207,206,206,209,205,204,207,205,204,203,200,205,204,203,201,202,202,201,204,204,204,205,204,200,202,204,207,207,205,208,207,206,205,207,205,207,209,206,208,206,205,206,205,205,203,205,203,205,207,204,205,203,205,202,201,202,200,199,201,200,200,200,199,202,199,196,198,198,199,198,200,199,198,198,197,198,198,197,199,198,198,197,197,198,198,197,197,196,199,198,196,196,195,194,194,194,193,195,195,196,194,192,195,195,194,193,194,193,193,193,195,194,192,194,190,191,194,191,194,194,194,195,190,192,193,192,193,192,193,192,191,189,191,194,194,191,191,195,193,192,195,193,193,193,193,193,194,192,192,193,193,196,193,192,192,195,196,194,195,193,192,190,194,196,192,194,194,194,193,191,195,193,195,193,193,195,194,195,193,192,194,191,194,194,193,198,191,194,194,195,194,191,197,191,192,192,191,194,191,194,194,193,196,193,196,197,195,193,191,196,194,193,194,192,194,194,194,194,193,196,189,188,196,191,189,190,191,192,192,194,191,192,195,191,193,193,193,190,191,194,190,194,193,193,198,194,194,195,194,192,184,175,174,177,181,189,199,196,190,194,193,200,200,196,197,194,196,198,196,196,196,204,193,178,135,139,188,198,201,173,174,200,210,210,208,199,179,204,194,186,207,199,204,203,197,174,159,102,57,80,95,95,73,78,92,149,173,135,169,184,164,144,181,184,200,188,55,131,204,180,203,133,122,134,90,107,68,62,71,88,127,91,60,148,248,219,147,160,220,250,246,159,52,63,88,83,110,74,57,99,134,131,84,78,44,19,14,23,49,63,95,115,115,117,103,101,107,125,125,110,38,41,88,85,106,97,116,110,105,103,44,45,76,81,95,86,48,31,19,19,45,44,37,34,30,37,59,102,144,157,150,152,159,171,176,149,119,76,49,39,27,23,23,36,53,56,56,51,31,25,27,13,79,159,113,113,137,86,29,11,22,30,32,39,54,60,62,53,35,32,31,18,21,27,39,34,33,31,19,23,38,38,47,122,128,53,38,33,34,43,52,68,66,89,134,165,184,19
8,208,193,181,184,172,174,170,160,141,139,137,134,133,139,146,112,142,143,124,110,98,70,56,38,29,36,26,27,21,23,31,26,24,16,17,15,18,20,20,23,18,18,15,18,18,23,26,28,23,15,21,16,18,24,24,32,32,21,19,22,19,15,20,21,21,23,23,19,20,27,28,31,26,24,32,53,60,72,93,107,126,130,144,147,146,157,155,155,142,141,162,165,161,168,169,172,173,153,156,130,101,110,98,123,89,32,46,50,60,61,61,63,55,39,49,34,22,34,40,51,40,29,21,22,29,29,42,39,26,29,105,157,53,99,136,110,208,150,140,166,80,36,65,146,140,120,107,107,98,39,52,63,45,35,45,43,23,128,169,135,132,62,38,39,21,22,19,21,32,43,52,50,39,36,24,28,20,125,193,159,160,118,99,99,111,122,121,128,165,219,250,250,251,251,241,231,231,229,222,223,225,223,225,225,222,116,4,1,6,11,12,11,13,13,13,14,14,14,203,207,208,205,205,204,204,205,203,204,203,202,204,206,204,202,203,200,202,202,202,205,207,205,205,205,204,205,205,204,205,207,206,206,207,208,210,208,206,206,207,206,206,206,206,204,203,203,200,204,203,204,202,201,202,202,204,203,202,204,205,203,205,204,204,204,206,205,204,205,204,205,207,205,205,206,202,206,205,205,207,202,205,204,204,205,202,204,203,202,203,201,202,200,201,202,199,199,202,199,198,199,198,198,196,201,197,197,199,198,199,196,198,199,198,197,197,196,196,195,195,197,196,197,198,196,196,196,196,198,195,196,195,193,197,198,196,196,197,195,191,193,194,195,193,192,195,194,195,194,194,195,192,193,196,193,196,193,191,191,193,193,190,194,193,192,194,193,192,194,194,193,193,192,191,191,194,194,193,192,195,195,195,193,194,193,193,194,192,196,192,194,191,193,194,193,195,191,194,191,193,191,193,193,191,194,192,193,193,195,195,193,194,193,193,194,193,193,192,193,194,193,193,193,193,194,191,194,193,193,196,192,191,195,195,194,191,191,194,189,194,194,192,194,191,194,193,193,193,191,191,192,194,192,187,186,193,188,186,191,192,193,193,194,191,189,191,193,192,191,190,188,190,191,193,193,195,195,197,198,191,190,180,172,173,180,187,192,198,200,202,200,196,197,194,198,200,197,195,198,198,198,202,200,203,203,173,141,106,118,160,170,167,154,186
,204,200,208,204,201,182,206,195,184,207,200,202,206,200,162,160,139,64,78,109,100,76,71,105,184,204,146,166,168,159,177,202,152,184,167,67,209,216,162,212,125,116,129,81,84,53,109,155,185,181,102,59,122,205,128,136,214,246,248,174,68,57,90,113,80,63,53,88,133,128,96,70,44,21,11,27,56,78,100,117,122,116,110,105,111,117,135,139,95,24,54,92,89,104,87,103,121,131,107,66,89,108,97,106,89,60,44,19,21,51,62,56,32,21,28,32,33,59,98,129,142,143,146,153,168,173,161,140,110,75,49,34,24,24,18,19,23,19,23,23,22,92,150,107,122,128,72,25,14,26,34,32,50,98,97,68,52,43,41,27,16,17,18,43,59,49,36,38,59,68,106,118,139,115,50,77,97,125,146,158,185,193,205,205,192,186,178,165,151,137,127,137,128,133,137,137,136,139,141,132,124,107,92,60,51,41,25,21,23,37,33,33,31,16,20,27,25,27,18,16,19,16,21,15,21,21,16,22,19,18,19,17,19,19,25,30,20,19,17,14,22,24,23,27,21,21,18,20,19,16,22,19,21,21,20,19,21,25,34,35,24,27,32,32,27,28,30,35,45,59,89,111,124,131,141,147,142,150,145,134,139,132,142,147,154,160,159,155,163,146,152,99,36,46,35,43,48,41,45,39,36,51,35,21,26,51,61,48,29,21,21,27,49,46,26,23,11,64,130,63,80,86,84,147,103,112,123,70,35,81,155,139,126,107,108,102,45,51,59,39,24,28,26,68,165,155,107,73,37,31,34,29,29,24,21,27,27,29,30,31,35,51,75,106,160,137,107,114,95,98,115,141,137,153,226,249,249,251,251,249,241,233,229,227,225,223,222,224,227,228,227,224,115,4,1,6,11,12,11,12,11,14,14,15,14,204,206,204,203,206,203,201,202,202,204,206,201,203,203,201,201,202,204,202,202,204,204,207,206,204,205,205,204,205,205,206,205,206,204,207,205,206,207,206,206,205,204,203,207,202,205,204,205,206,203,205,204,201,201,204,205,205,205,206,202,203,205,202,204,202,202,207,204,202,205,205,203,203,204,203,202,203,203,203,203,203,205,202,201,201,199,202,202,202,202,200,201,201,199,198,198,198,200,199,199,199,197,200,197,197,197,196,199,197,198,198,196,198,193,198,196,194,197,196,197,198,198,197,197,195,194,197,197,196,194,196,196,194,197,195,194,197,193,196,195,194,194,192,194,195,194,193,198,196,194,196,192,193
,193,192,193,193,192,191,193,191,193,193,191,194,193,193,194,191,191,194,195,193,191,194,193,191,191,194,192,192,193,193,193,192,193,193,191,191,192,192,190,191,192,193,190,189,192,192,193,191,191,192,192,192,190,193,192,192,193,191,192,190,192,194,191,194,191,189,195,191,192,192,191,194,189,192,193,191,193,192,193,191,191,192,193,193,191,192,191,190,190,191,191,190,193,193,190,191,191,193,190,190,191,183,189,193,187,190,192,190,189,190,189,190,190,192,188,190,190,189,189,188,194,194,194,195,194,191,188,182,174,177,181,189,195,200,200,201,202,203,201,199,199,193,200,199,196,199,199,199,199,200,199,204,196,171,130,101,122,152,155,131,150,203,205,195,201,203,202,180,204,196,181,205,199,202,205,189,143,163,194,141,106,110,116,98,89,110,200,204,119,141,160,160,205,178,120,202,162,84,190,168,166,226,122,109,127,83,84,48,151,206,220,186,83,50,79,115,93,175,239,246,203,73,58,79,117,112,57,59,89,130,133,93,80,50,26,15,27,59,87,111,118,125,121,106,106,109,124,118,139,133,53,17,93,110,81,100,80,120,130,116,121,83,87,77,70,85,66,49,36,15,24,62,84,76,50,39,32,36,32,25,32,47,86,103,103,130,131,150,148,163,167,157,152,113,77,53,37,28,23,25,23,26,24,110,145,97,128,115,59,22,8,19,19,35,92,115,82,63,53,31,34,36,32,45,54,76,85,84,88,95,119,131,135,146,151,148,152,184,190,206,212,205,199,191,181,154,139,139,135,125,125,136,137,121,147,145,125,100,93,71,50,40,35,34,53,29,32,21,15,18,32,51,48,39,38,15,19,24,19,16,17,19,18,22,16,17,17,15,18,16,19,20,20,23,34,50,49,46,36,25,22,18,26,41,46,45,36,29,22,15,16,18,18,16,20,18,19,19,17,21,26,32,31,33,25,19,39,41,31,28,20,24,31,36,39,46,63,84,90,123,129,134,137,131,136,130,122,129,128,148,153,148,163,123,108,106,67,56,48,35,35,33,39,60,40,22,39,42,49,52,42,28,28,39,38,40,29,25,20,103,143,60,51,51,31,47,37,43,48,39,29,52,127,137,122,99,106,100,47,54,60,52,43,35,61,129,155,91,57,45,22,23,29,29,32,28,34,38,34,43,54,67,105,145,170,167,160,113,93,125,109,107,127,122,92,128,207,241,242,251,251,236,228,229,223,223,223,217,219,223,226,226,223,224,116,4,
1,7,11,11,11,14,13,14,15,15,14,204,207,204,203,206,205,206,203,203,205,204,205,204,203,201,203,204,201,203,206,208,206,206,206,206,206,204,207,207,204,204,206,206,207,205,205,206,205,208,206,205,204,205,208,205,203,204,205,205,206,205,203,204,205,204,204,204,203,204,203,206,205,204,204,205,206,203,204,203,206,205,204,204,205,203,202,201,202,203,203,206,202,201,203,204,202,199,200,200,198,198,199,201,198,198,200,200,199,200,199,201,201,199,201,198,202,199,200,200,195,201,198,198,198,198,198,198,197,198,197,196,197,195,196,194,195,198,197,198,198,195,196,195,195,196,195,194,194,196,194,197,196,195,196,194,195,195,193,195,195,193,196,193,194,193,192,196,193,192,195,194,193,196,194,191,193,193,194,195,196,192,194,196,193,192,192,194,194,194,194,193,194,193,191,192,193,195,193,191,193,191,191,191,193,193,191,193,189,193,190,190,193,191,190,191,191,191,193,194,192,191,193,192,192,191,193,193,194,190,193,194,189,191,190,191,192,191,194,194,190,191,192,189,192,190,192,193,191,193,194,193,192,193,191,190,191,189,190,190,191,192,193,193,189,191,193,193,196,196,196,193,194,195,194,192,192,191,193,193,195,196,191,193,193,197,195,190,183,177,184,185,188,193,198,201,202,202,199,201,200,198,200,200,193,192,199,201,200,199,199,198,197,200,201,207,189,181,153,132,177,198,172,134,174,216,205,197,196,205,205,181,200,193,178,203,201,199,208,182,139,182,222,176,117,110,136,121,104,107,175,175,137,163,144,169,202,147,153,239,152,59,135,152,200,242,131,111,127,98,90,43,161,220,226,143,13,10,66,117,126,208,245,203,87,43,77,101,117,90,62,104,125,127,100,81,62,23,17,31,61,93,107,122,122,122,113,99,108,115,113,98,111,90,14,40,141,103,79,97,79,105,76,58,105,87,66,67,55,44,36,36,24,14,34,51,65,74,75,68,46,36,32,30,30,27,29,44,57,87,108,123,131,139,137,144,173,188,187,166,133,114,85,54,50,35,39,125,135,96,122,99,42,14,14,11,23,52,89,90,51,42,48,48,55,61,71,86,110,138,134,141,160,179,181,175,181,179,182,174,178,179,160,157,152,149,145,144,141,129,134,134,133,131,122,122,103,94,72,46,33,24,25,18,1
,27,31,25,19,21,21,24,27,22,20,24,24,22,22,20,19,18,17,19,19,18,18,24,39,44,66,77,57,37,23,14,15,15,20,19,21,28,19,22,23,18,19,20,22,21,21,23,21,19,25,25,21,22,22,22,19,20,23,23,20,27,24,25,28,27,30,48,53,61,120,135,141,155,134,92,53,25,13,17,20,19,29,19,49,128,136,145,139,152,143,95,53,22,23,34,81,115,98,82,61,34,25,52,63,71,83,60,35,32,64,98,75,117,159,101,94,148,165,75,18,15,9,21,31,39,57,71,66,81,79,51,29,24,23,22,22,22,23,26,27,24,30,22,26,29,27,30,27,24,25,25,25,28,29,26,29,24,22,27,24,21,27,23,21,23,21,19,19,21,23,24,41,72,53,67,131,137,95,39,24,21,17,23,16,18,22,19,23,17,23,23,19,21,18,27,20,17,27,19,22,18,21,29,21,23,24,26,24,26,26,27,24,26,26,25,25,29,30,30,28,29,29,29,17,116,200,96,69,88,66,73,74,93,71,76,76,43,161,219,214,226,211,215,211,212,216,213,213,216,214,214,216,216,216,216,115,6,2,7,11,14,12,15,14,14,15,15,15,201,203,202,202,201,203,202,204,203,200,204,203,205,204,203,204,200,201,201,203,205,201,201,202,206,204,203,195,235,233,184,194,193,203,201,201,203,201,203,202,203,204,205,201,201,203,204,201,201,201,200,203,202,202,201,201,202,201,200,200,201,200,199,201,200,200,202,203,201,201,205,202,204,203,201,202,200,200,201,200,201,200,201,199,200,200,199,200,199,200,199,199,200,199,199,198,197,199,198,199,199,199,200,197,198,197,197,199,196,196,198,199,197,196,197,197,198,196,195,196,195,196,195,196,195,196,194,197,196,194,195,192,194,194,193,193,193,195,196,194,196,196,196,200,195,197,198,196,199,197,196,195,195,194,196,192,193,194,192,195,193,193,194,193,196,193,193,194,194,193,195,193,191,193,193,193,192,192,196,196,195,193,193,194,192,192,195,193,193,193,192,192,191,193,191,193,194,193,193,192,192,192,192,193,192,195,196,193,195,196,193,193,194,194,193,194,194,194,193,193,195,192,194,194,193,193,190,193,196,193,193,194,194,193,195,198,191,194,194,195,197,196,199,198,196,197,198,198,199,196,200,202,198,174,163,189,203,207,202,194,197,198,198,198,195,195,198,198,195,196,198,196,198,196,196,195,196,196,199,198,193,194,197,195,194,196,191,195,196,194
,196,193,194,196,196,186,184,182,186,160,115,111,128,178,182,130,146,195,215,184,119,55,90,191,230,225,206,197,193,199,194,182,176,138,176,187,162,176,194,199,103,44,133,186,178,177,139,112,106,81,81,77,127,152,78,62,49,42,41,3,34,72,76,81,57,149,240,246,206,91,66,37,5,16,11,45,108,86,55,43,39,31,17,31,56,95,127,120,118,118,107,100,105,128,124,103,108,111,128,118,113,100,53,24,42,108,166,199,161,116,128,150,160,154,131,66,50,57,53,74,72,70,73,55,43,45,42,45,46,55,56,37,43,53,67,64,53,67,102,122,130,145,144,107,59,32,26,29,30,35,28,27,24,28,25,28,34,32,28,26,24,28,31,33,34,27,23,22,29,27,24,21,24,23,24,25,15,19,21,18,21,20,21,19,47,80,95,117,95,62,63,60,34,21,21,19,19,26,22,19,24,23,21,19,17,19,21,26,21,19,23,21,19,22,27,23,24,19,18,26,24,21,30,31,27,28,22,32,53,79,124,169,199,162,145,141,105,72,30,12,16,19,15,20,25,28,89,113,124,145,133,152,110,42,21,27,16,96,166,145,122,87,36,53,97,105,139,105,39,23,24,71,95,94,124,148,145,141,174,149,49,15,18,12,15,34,54,107,123,139,169,137,85,46,22,28,25,24,24,16,22,27,24,27,24,27,25,31,33,26,27,27,30,27,29,26,24,25,26,21,27,26,26,30,24,23,24,26,18,23,21,19,24,48,60,38,66,130,125,67,35,25,17,16,18,19,22,21,20,20,19,26,22,19,22,22,24,23,20,22,21,21,23,27,22,24,26,20,27,27,24,25,24,29,28,24,24,27,30,29,27,29,32,33,19,83,223,168,89,160,129,83,57,49,52,47,71,63,27,143,205,206,227,211,214,208,210,213,210,215,212,215,217,216,217,214,219,116,5,1,8,12,12,12,14,12,14,15,14,14,200,200,201,201,203,202,199,204,201,202,203,203,204,200,203,202,203,203,200,200,201,203,200,203,205,204,207,203,252,252,208,201,200,204,200,199,203,200,202,201,202,202,203,201,201,202,201,203,200,200,200,201,201,200,203,202,201,202,202,201,200,199,200,201,200,200,199,200,198,199,201,199,200,200,201,201,199,199,200,202,200,201,200,199,199,200,201,198,199,200,201,200,196,199,199,199,200,198,198,194,196,198,199,198,196,199,196,196,196,196,196,197,198,196,197,196,196,198,198,195,198,194,197,196,195,198,195,197,193,192,193,195,195,194,195,196,198,197,197,198,198,197,196,1
96,200,198,196,198,196,196,194,195,197,195,198,194,195,196,194,194,196,196,197,197,196,195,198,198,194,198,193,196,197,193,196,193,194,193,193,196,191,193,195,194,194,195,195,193,192,191,193,194,192,195,191,192,191,192,194,192,195,190,192,193,191,196,191,191,194,194,193,193,194,194,191,191,193,191,192,192,193,193,193,194,196,193,194,193,193,191,194,195,192,196,196,194,194,194,196,197,196,198,198,196,198,198,196,196,196,196,199,198,203,194,177,179,194,204,205,202,195,196,196,198,196,195,197,196,196,195,195,194,193,196,195,193,196,195,196,201,197,196,196,192,196,194,193,193,193,194,193,195,191,191,196,185,193,183,170,159,135,119,132,167,132,100,155,208,217,201,134,126,189,224,227,207,193,198,193,197,192,174,166,157,192,177,151,159,184,187,93,54,97,100,124,177,164,123,109,71,79,114,127,92,37,27,35,78,49,23,38,50,54,66,160,227,244,195,96,92,97,65,14,12,33,92,95,49,50,37,22,22,26,53,79,106,117,118,125,102,96,96,105,115,107,108,108,113,122,117,111,79,36,20,79,169,188,156,81,81,118,130,120,78,56,51,44,44,45,57,72,61,56,41,41,45,46,46,61,81,63,30,41,56,75,112,103,111,122,123,122,122,129,103,70,35,22,31,26,31,34,26,29,25,27,33,29,36,25,25,30,25,37,29,33,29,19,31,34,30,29,25,29,27,24,25,19,19,18,23,23,17,29,18,110,193,171,171,146,134,145,124,83,30,10,17,18,22,24,23,22,22,21,21,21,19,18,29,21,20,27,20,22,22,24,26,26,22,21,30,29,27,29,23,29,29,31,33,66,71,112,174,137,134,131,114,89,55,40,18,14,18,18,19,19,27,73,85,94,84,110,148,74,34,26,21,31,129,148,93,113,105,53,107,148,145,146,77,26,26,15,61,97,107,122,130,128,136,139,93,36,21,17,17,24,50,117,176,212,172,173,167,116,72,33,31,21,17,24,18,21,23,29,22,21,25,27,34,30,33,25,27,34,28,26,27,27,26,21,24,28,28,28,29,22,24,30,24,28,18,25,25,22,45,56,54,76,88,65,47,27,17,21,15,19,21,21,19,19,27,19,23,20,20,25,20,21,24,26,21,20,25,26,21,27,26,24,24,31,24,21,27,27,28,30,27,25,29,28,31,29,27,38,18,84,236,239,119,97,177,149,88,68,40,41,38,66,50,25,150,212,211,227,211,211,210,214,214,211,215,215,213,215,218,217,216,218,115,5,2,8,11,12,12,15
,13,14,15,15,15,200,200,202,200,201,200,201,202,200,201,203,202,202,201,201,202,199,200,201,200,203,200,200,203,203,203,203,206,252,252,230,209,203,202,201,201,202,200,200,201,200,200,200,198,201,199,199,200,199,199,200,200,200,200,200,200,201,201,200,200,202,200,199,202,199,198,200,200,200,198,198,197,199,199,200,200,199,197,198,200,199,199,199,198,200,199,198,198,197,199,199,197,200,196,198,200,196,198,196,196,199,198,196,196,198,198,198,196,195,196,196,197,195,196,195,195,198,195,196,196,196,195,195,197,195,196,194,194,195,194,196,194,195,197,196,198,199,196,198,197,199,197,199,202,196,197,196,196,196,194,198,195,194,197,197,195,195,198,196,196,199,196,197,195,196,198,196,197,195,194,195,195,195,197,195,195,198,195,193,193,193,191,194,194,196,193,193,194,193,195,193,193,193,195,193,193,196,190,193,194,193,194,192,194,192,193,193,193,194,191,192,193,193,195,192,191,192,192,192,191,193,191,193,193,193,194,192,196,194,193,193,193,196,195,195,195,193,199,197,196,199,197,196,197,197,195,196,195,196,198,194,198,201,199,197,184,185,199,204,205,203,198,199,202,199,196,194,196,198,195,196,195,195,194,196,194,193,192,195,199,195,194,195,193,192,196,193,193,195,194,195,197,195,192,195,186,201,184,160,164,151,164,184,180,116,105,181,210,215,194,179,197,219,226,206,197,192,194,196,197,198,170,162,182,180,114,105,162,213,184,72,43,35,63,160,207,170,119,99,69,78,100,95,48,4,67,114,93,68,48,78,62,61,174,228,245,191,96,88,107,132,84,37,66,121,132,84,49,40,23,13,37,47,79,114,108,110,113,119,106,96,101,101,108,105,109,115,110,108,92,71,52,24,28,131,188,134,76,8,42,104,103,87,46,80,76,41,42,46,57,61,51,47,43,38,47,42,33,45,64,46,34,57,103,155,155,133,117,110,113,107,110,94,59,36,23,22,34,27,30,35,31,31,27,29,29,31,28,29,28,28,32,34,36,33,29,23,32,44,36,36,35,33,33,32,29,19,21,22,23,22,16,25,24,95,191,191,205,180,180,189,191,122,25,8,12,15,20,22,22,19,21,25,22,26,22,21,24,20,29,24,26,26,24,25,29,25,20,24,24,29,26,24,27,27,35,26,32,70,75,89,103,139,118,110,110,71,64,43,21,16,12,19,20,
25,24,74,107,104,109,119,104,55,45,29,22,33,101,98,47,101,129,60,95,146,83,53,40,22,29,16,61,83,114,114,107,127,92,121,109,35,19,19,14,32,50,118,206,160,150,146,122,108,61,40,43,23,15,23,21,27,22,25,28,24,26,34,39,28,30,27,27,31,29,34,27,29,27,25,28,24,33,29,25,29,28,27,26,29,29,23,21,25,57,72,93,118,77,57,47,33,24,19,18,21,24,20,19,22,23,23,24,21,24,24,22,23,21,25,18,27,25,22,28,23,26,25,28,31,24,25,29,25,28,29,31,29,29,28,30,31,32,28,58,212,242,182,67,72,134,104,74,76,49,39,45,62,25,38,190,226,219,224,209,216,213,217,216,213,213,213,215,213,214,218,214,218,116,5,1,7,12,13,12,15,13,14,15,15,15,202,202,201,200,201,199,199,204,203,200,202,200,200,199,202,201,199,202,201,202,200,201,202,200,201,201,201,200,253,253,243,213,198,202,200,201,201,198,203,201,200,200,200,201,201,200,200,202,200,200,198,199,200,201,200,199,201,200,201,199,198,203,200,200,200,200,200,200,200,200,200,200,200,200,200,200,199,199,201,198,197,198,199,200,197,198,199,199,199,197,197,197,198,200,196,196,198,199,199,197,198,198,197,196,196,201,199,198,198,198,198,196,198,195,196,199,196,198,196,196,198,196,200,196,196,199,197,198,198,199,198,199,198,196,196,195,194,194,196,195,196,196,197,198,195,195,196,196,196,196,194,197,198,194,197,196,196,198,195,196,196,197,198,196,196,194,193,195,193,196,193,193,194,193,197,195,198,195,193,196,194,196,193,193,195,193,195,194,193,191,192,196,193,193,191,194,194,192,195,191,193,193,191,193,193,194,191,193,191,192,193,189,192,191,193,193,193,193,194,193,193,191,192,193,193,192,194,196,195,195,195,195,196,197,197,196,196,195,195,198,195,198,196,198,201,197,198,197,198,197,197,196,200,200,201,192,181,191,200,208,205,204,201,207,206,198,199,196,197,195,196,198,196,199,196,194,197,193,193,197,196,193,194,193,194,195,193,192,195,195,192,195,194,192,196,187,200,177,148,159,166,181,204,191,129,151,205,208,202,184,188,211,217,207,199,197,192,193,192,202,198,155,172,200,125,60,131,215,230,139,47,23,9,112,210,213,141,67,70,54,73,74,59,21,34,203,192,87,57,74,66,84,196,244,
243,157,90,84,103,116,74,116,132,144,134,102,89,39,20,20,40,65,75,117,118,107,113,108,108,104,113,99,103,112,110,111,104,100,90,81,50,36,18,60,152,139,69,14,15,83,141,129,72,76,114,78,42,38,47,51,42,46,48,39,41,40,33,34,29,44,66,83,138,150,164,157,114,103,99,102,82,53,43,25,19,17,17,27,32,29,30,31,30,28,29,31,27,31,31,26,29,27,35,38,33,32,29,31,39,44,31,36,41,32,41,30,24,29,24,26,24,23,23,24,60,111,129,148,135,139,144,102,53,22,11,15,23,17,26,21,28,27,24,27,24,27,19,28,29,26,27,30,31,27,26,27,20,18,27,29,29,26,27,30,31,25,33,29,71,58,80,114,95,118,92,81,62,63,49,17,19,16,21,22,25,25,53,110,133,116,96,108,81,46,26,20,31,73,95,97,155,144,117,152,135,78,61,42,25,25,29,70,88,112,93,126,151,150,188,101,17,20,9,20,24,46,101,133,171,115,108,118,84,67,42,41,22,14,23,20,27,18,24,26,26,31,32,34,24,31,35,24,28,32,28,30,34,29,30,29,30,29,22,32,26,26,30,27,30,29,20,33,29,85,150,150,160,117,86,87,71,56,32,19,18,17,20,26,24,24,19,26,22,19,30,26,23,23,24,24,24,27,26,23,27,22,25,30,22,29,27,28,24,29,29,24,31,29,29,33,32,44,41,104,231,211,103,66,64,72,59,61,66,45,41,44,39,12,123,233,232,221,217,213,214,213,214,214,211,214,211,213,216,215,218,214,218,115,5,1,7,12,13,12,14,13,14,15,16,15,198,199,201,200,201,199,200,198,198,201,198,198,199,199,201,199,201,197,200,201,200,203,200,202,200,203,200,198,241,250,206,197,197,195,198,200,199,199,199,198,200,200,199,199,199,199,199,200,197,198,201,200,199,200,200,200,199,200,201,200,201,197,198,201,200,201,200,200,199,199,200,200,199,200,198,197,200,199,199,198,197,199,199,198,198,199,198,198,198,200,196,199,200,196,198,199,198,196,198,198,199,198,195,196,196,196,197,195,198,197,196,199,198,200,198,195,198,196,199,196,197,198,200,198,194,198,200,198,197,198,198,197,198,199,194,198,198,197,198,195,199,196,196,198,195,195,195,196,196,194,197,193,194,197,194,195,194,196,195,194,197,195,197,196,194,194,194,192,192,194,194,195,194,193,192,193,192,195,194,194,194,191,195,191,194,195,195,194,192,195,192,193,193,194,192,192,193,190,194,193,193,191,190,1
93,189,191,191,191,191,193,193,189,190,193,192,190,192,192,195,193,191,192,192,195,194,193,192,196,195,193,196,198,196,195,196,196,196,198,197,195,197,196,199,200,200,199,199,198,197,200,199,197,198,199,200,199,188,181,193,205,206,199,188,197,204,200,200,199,198,198,197,195,197,198,198,197,196,197,196,195,196,194,196,197,194,195,193,193,193,192,194,196,196,192,193,189,191,153,117,139,169,191,207,171,126,172,216,198,189,189,197,203,200,199,194,196,193,191,197,200,190,151,184,184,101,65,181,236,157,81,48,9,37,174,239,179,107,46,40,45,56,50,25,19,89,193,136,68,73,41,97,204,245,245,141,80,82,105,107,44,84,144,143,122,98,83,46,25,25,35,79,125,117,117,113,114,114,95,102,111,112,101,105,117,110,106,94,81,101,83,41,26,16,85,119,64,8,37,97,119,110,65,76,73,87,74,44,44,36,43,40,39,45,38,35,34,29,32,33,43,108,154,159,162,139,118,116,103,71,50,36,35,42,45,32,15,16,22,28,38,33,29,29,28,33,24,30,29,31,29,28,34,32,37,33,29,29,33,42,39,41,35,38,39,35,34,29,25,25,24,30,29,24,33,49,59,57,51,36,37,35,31,27,17,17,15,19,24,17,25,30,25,31,25,23,31,29,31,29,27,27,27,28,25,33,30,25,22,30,26,30,31,22,29,29,33,27,36,58,69,77,66,105,64,62,65,48,65,41,26,19,14,19,24,29,30,29,58,83,69,102,147,92,53,36,27,24,39,65,107,120,96,90,98,82,94,102,50,22,25,44,97,102,139,94,90,144,151,141,42,14,17,16,19,36,39,79,151,106,112,101,84,87,49,50,45,22,22,20,23,24,28,29,22,28,27,33,31,29,34,25,31,35,31,27,31,33,29,27,24,32,29,26,28,27,29,30,29,28,27,22,33,31,96,166,183,189,173,144,168,167,135,50,15,15,20,21,26,27,21,26,24,24,22,29,22,27,24,21,27,22,28,22,25,28,19,26,31,24,27,29,28,29,24,22,32,24,28,34,32,41,47,74,141,177,134,96,98,75,49,51,46,54,49,41,40,29,114,235,242,228,215,217,214,214,211,212,211,214,212,214,214,216,216,219,213,215,117,4,2,8,12,12,12,15,13,15,16,15,15,196,199,196,199,199,198,198,198,199,195,199,199,198,200,199,198,199,198,196,197,198,198,198,198,200,198,200,193,249,240,173,187,192,194,197,198,200,193,199,199,200,200,198,199,198,198,196,199,198,200,198,198,200,200,200,198,198,197,198,198,19
8,201,199,199,199,197,199,197,198,200,200,200,199,197,197,195,199,199,196,198,197,197,198,199,198,198,198,198,197,198,198,196,199,196,194,197,198,198,198,197,198,198,197,198,196,196,196,196,197,195,198,196,199,199,197,200,195,196,198,198,197,193,199,197,197,199,197,198,198,200,198,199,198,198,199,200,199,199,200,198,199,198,196,196,195,198,198,196,194,196,197,195,196,194,192,194,195,194,193,195,194,196,193,192,193,191,193,193,191,193,193,193,193,190,194,190,191,192,191,194,193,191,193,193,193,189,191,191,192,192,191,193,191,193,191,191,192,189,193,190,191,192,188,190,191,192,193,193,191,189,194,190,194,191,191,191,189,193,191,193,195,193,194,196,195,196,199,198,199,195,193,196,197,195,196,199,196,196,197,194,197,198,196,198,197,198,199,199,199,200,200,195,197,198,199,198,197,184,181,196,201,196,162,170,197,199,201,197,199,199,197,200,195,198,198,194,198,196,195,196,196,196,195,194,194,194,196,193,195,195,193,194,192,192,197,191,181,146,110,113,150,186,191,149,116,179,203,189,197,193,201,198,196,197,191,194,191,195,194,203,174,144,194,185,87,64,128,126,60,47,90,55,98,237,235,166,121,55,52,34,34,40,12,39,92,101,59,53,45,113,214,249,246,146,82,86,107,93,57,79,108,122,84,90,79,33,29,15,38,78,130,143,117,120,113,109,104,105,111,106,105,103,116,114,103,94,83,92,113,103,56,24,30,73,61,21,43,157,170,84,97,100,94,81,84,99,53,25,31,34,43,45,35,33,35,30,26,30,33,31,103,155,152,128,126,112,78,53,27,31,35,39,53,54,47,34,21,19,21,32,39,32,32,30,26,32,32,27,32,29,30,35,30,33,33,30,31,36,42,45,37,36,44,42,35,35,25,27,27,25,28,27,26,31,45,49,50,40,26,31,25,23,29,19,16,18,20,22,21,22,33,34,27,32,25,26,31,29,29,27,32,27,27,27,27,32,24,28,30,26,32,27,24,32,33,29,30,33,58,59,74,83,45,79,55,35,47,54,57,23,19,21,18,24,29,25,31,71,89,115,149,163,130,66,39,27,22,27,27,33,66,59,55,56,47,51,44,21,24,22,73,120,118,171,110,65,56,56,57,26,16,21,15,27,22,34,79,66,100,58,61,63,35,59,48,40,22,25,23,17,30,26,29,26,26,30,29,33,32,30,26,31,32,33,33,30,32,25,25,24,31,34,23,30,29,28,30,29,29,29,27,29,29
,69,123,144,167,162,176,174,168,129,42,21,12,16,18,21,26,23,26,23,28,33,29,24,28,24,24,27,24,25,28,24,24,26,23,30,27,27,25,29,29,29,29,27,29,30,36,36,39,48,117,170,161,130,128,140,111,93,81,63,55,42,37,27,125,229,251,249,219,220,219,214,212,211,212,213,213,216,214,214,217,216,218,215,220,115,4,2,8,11,12,12,14,12,15,16,15,15,195,197,198,198,196,195,199,199,198,199,198,199,201,197,198,197,199,198,199,200,196,198,195,195,198,200,202,195,250,237,171,201,199,198,199,197,199,198,200,201,201,200,199,200,199,200,200,200,199,199,200,200,199,199,199,199,201,198,199,200,201,198,198,200,196,198,199,200,199,197,199,198,200,199,200,200,198,198,198,198,198,196,199,197,194,199,197,198,196,198,198,199,199,198,197,197,198,199,199,198,199,199,199,198,199,198,200,197,198,200,198,199,196,198,197,195,200,196,198,198,198,197,196,198,196,198,199,197,198,198,198,197,198,200,198,201,198,198,199,198,198,198,199,199,196,197,197,196,195,193,195,194,196,195,195,196,193,193,193,194,196,192,193,191,191,193,192,190,191,194,194,192,193,192,191,193,192,192,186,190,191,191,191,188,193,190,190,189,190,192,189,192,193,193,191,193,191,190,193,190,191,191,190,193,190,190,193,190,191,193,195,191,191,194,193,194,192,192,193,194,192,197,195,192,198,196,198,199,196,198,198,196,196,194,195,196,197,198,196,195,193,195,194,192,197,196,197,196,198,198,197,198,199,198,199,199,200,195,179,184,199,200,174,171,196,200,201,198,194,198,199,198,198,195,196,199,196,196,197,195,196,195,196,192,195,197,196,196,196,193,191,191,193,196,201,191,169,158,143,120,159,169,154,126,122,183,190,195,203,201,200,193,196,197,192,196,193,192,201,198,165,160,214,164,70,36,16,57,63,126,179,95,159,231,222,180,117,55,39,9,55,149,64,98,196,116,34,37,141,227,243,242,139,84,87,114,84,56,97,113,146,108,61,60,39,24,19,39,79,124,139,123,120,112,103,100,104,115,108,104,104,106,116,104,89,83,88,104,118,124,103,73,54,48,16,63,117,201,160,69,142,147,134,110,120,108,48,30,25,31,35,40,39,31,27,33,30,29,24,25,56,84,94,93,69,43,36,31,28,28,36,34,33,37,50
,56,33,17,23,24,35,35,32,32,27,30,23,29,33,28,31,35,34,33,36,33,29,36,47,45,36,41,43,45,41,35,30,25,29,28,24,31,26,27,38,50,49,39,36,35,29,25,24,20,17,16,24,25,22,27,33,30,35,37,33,33,29,32,35,33,31,34,29,27,35,29,32,33,27,35,34,28,30,31,27,31,31,37,57,87,108,69,112,100,53,64,52,74,69,49,26,14,24,24,27,28,73,156,166,152,159,160,118,66,45,36,21,30,27,45,107,89,77,107,84,36,19,22,29,29,110,107,125,182,150,143,84,60,58,16,19,16,15,22,29,48,66,88,45,67,66,33,52,45,49,45,22,21,21,22,27,26,29,30,28,25,32,29,27,30,25,32,29,30,32,31,27,31,33,26,35,29,29,29,25,33,28,26,33,29,26,31,37,50,66,66,75,93,100,97,60,48,34,17,16,18,22,27,22,31,27,29,34,37,36,31,27,25,28,27,32,34,27,25,29,26,27,27,28,32,28,29,28,28,31,28,31,34,33,33,36,48,93,131,144,149,150,154,147,146,139,108,69,44,24,18,159,233,249,238,224,227,224,219,219,217,215,215,216,214,218,216,215,216,218,216,215,115,5,2,7,11,13,12,14,13,14,14,15,15,196,200,198,198,198,196,197,199,200,198,199,199,198,198,199,195,198,198,196,199,198,198,198,199,199,197,202,194,249,238,175,214,203,198,200,197,201,196,201,199,199,201,199,200,199,199,200,199,198,198,198,200,199,199,198,199,201,200,201,200,198,199,200,198,198,198,197,198,200,199,200,198,198,198,200,199,201,198,198,200,198,198,197,198,198,197,199,198,197,198,197,197,198,197,199,200,197,198,196,196,199,198,198,198,198,200,199,198,198,197,199,196,199,197,196,198,198,198,196,194,195,195,198,197,197,198,198,198,197,196,195,197,197,198,197,195,198,198,198,196,196,197,198,198,196,197,196,195,196,194,195,195,194,196,198,193,192,195,195,194,192,193,193,193,192,192,196,191,191,194,192,194,193,191,195,192,192,192,192,191,192,191,191,192,191,191,193,193,191,192,191,194,194,192,192,191,194,192,193,191,192,191,193,193,192,192,189,191,191,191,193,191,194,192,191,194,193,196,195,194,195,194,196,197,194,194,196,195,195,198,196,197,194,193,194,194,193,193,195,193,194,196,193,193,195,197,195,198,197,195,196,196,198,198,196,199,199,198,192,177,186,203,200,200,200,200,203,201,201,197,195,198,197,198,19
6,196,195,194,196,194,195,194,193,195,196,194,193,193,192,192,195,193,192,197,203,189,159,151,160,156,176,156,128,134,148,195,200,204,220,207,199,194,195,196,194,195,194,196,204,192,171,207,208,122,58,33,18,116,188,207,215,148,219,223,174,168,96,36,21,9,169,225,81,139,179,85,64,157,235,245,214,112,89,85,109,71,53,99,132,141,109,94,57,30,24,21,38,84,132,134,125,116,116,110,95,110,112,109,104,104,110,98,106,92,75,92,102,121,126,124,112,74,39,13,73,155,100,97,71,60,117,113,112,113,110,76,48,31,19,34,36,33,31,34,29,24,31,29,28,31,28,39,43,37,34,22,29,25,17,27,33,32,31,43,54,59,55,38,27,20,31,43,37,31,29,29,29,33,30,32,32,27,30,33,33,35,31,37,46,42,37,38,38,43,46,34,29,27,25,28,31,26,29,27,40,55,59,58,58,61,44,31,18,16,18,17,21,25,24,27,31,31,36,33,31,29,32,35,33,33,33,31,35,30,34,37,31,34,35,35,28,29,31,27,31,31,36,31,78,154,163,148,161,168,146,165,165,166,171,103,31,10,15,21,24,39,154,199,128,99,63,52,51,47,60,52,36,34,24,101,171,158,181,182,111,36,14,29,29,88,117,81,81,97,126,152,149,134,72,26,24,12,24,24,36,71,108,76,115,104,66,71,49,77,76,69,34,18,20,19,33,26,28,31,26,33,31,31,33,30,32,32,27,31,30,32,29,26,33,32,33,27,25,31,30,27,26,29,29,28,29,27,41,53,54,57,48,34,31,29,26,31,24,17,18,17,24,24,25,27,29,32,35,42,35,35,33,26,29,28,33,36,35,29,24,32,29,27,32,27,34,34,32,33,31,34,32,40,33,85,124,66,66,67,84,110,122,141,141,160,165,144,78,45,24,37,203,234,249,246,235,244,238,235,235,229,228,224,219,218,213,214,215,214,218,213,214,114,5,2,7,11,12,12,13,12,14,14,15,15,194,201,200,200,199,199,198,200,198,198,199,198,200,197,197,199,199,196,198,200,196,200,198,199,202,198,200,196,247,232,153,202,202,197,201,197,203,199,199,198,196,197,196,200,198,198,199,199,199,196,199,198,199,200,198,200,200,200,201,199,200,200,198,202,201,200,200,199,200,200,201,198,198,198,200,202,197,200,200,198,198,196,201,199,197,202,196,198,198,198,199,197,198,198,198,199,198,198,197,198,198,196,198,198,199,197,198,198,196,199,198,199,198,200,199,196,199,195,198,196,196,198,198,198,198,199,198,195,1
97,198,199,196,198,197,196,198,194,195,194,196,196,194,194,193,194,195,196,196,196,195,194,193,196,194,194,194,192,194,192,193,195,191,193,191,192,192,192,194,193,193,191,192,193,190,193,194,192,192,192,194,194,190,191,189,192,193,194,193,191,193,195,194,193,192,193,193,193,191,192,192,193,194,191,191,190,191,193,190,191,191,194,190,189,191,192,195,193,193,193,194,193,194,193,192,196,194,193,193,193,195,195,197,195,193,195,194,192,192,195,194,194,194,194,195,192,193,195,194,195,192,193,194,193,195,193,192,197,198,198,186,177,191,199,204,204,196,198,202,202,200,198,194,194,195,193,193,193,194,194,195,194,194,196,191,193,194,192,193,191,192,193,193,193,201,201,184,148,118,149,168,175,151,106,124,170,221,195,193,222,224,198,194,193,194,194,194,194,198,210,188,158,217,169,73,91,73,34,126,128,140,148,164,243,200,113,107,50,19,5,27,142,129,54,95,71,63,175,245,244,190,90,86,89,99,59,63,109,137,141,90,97,75,31,21,23,43,84,131,134,120,116,119,119,96,105,118,105,103,105,111,106,97,95,79,84,106,121,120,110,103,57,27,13,29,148,196,63,21,51,70,79,95,114,113,87,40,35,67,45,29,26,33,31,36,38,25,24,28,27,26,26,29,19,27,31,23,24,18,26,24,27,35,41,52,57,63,60,60,50,32,39,57,66,59,57,57,59,57,53,54,57,54,57,61,56,58,56,57,60,66,66,57,63,68,61,55,52,46,71,41,42,44,44,42,54,89,97,88,112,100,66,49,24,14,21,21,22,20,21,24,31,34,31,36,31,31,29,26,36,29,31,37,30,28,27,29,27,25,30,28,30,29,27,27,35,30,36,29,67,164,186,141,149,147,159,171,161,181,127,103,24,15,12,15,30,40,137,149,73,45,46,54,56,72,76,64,60,51,29,136,153,99,115,128,117,34,26,31,74,107,96,100,59,41,42,64,78,118,116,46,21,13,30,24,66,151,166,170,180,193,171,170,178,171,190,136,40,15,14,15,27,27,29,33,36,33,33,33,37,34,39,44,34,34,33,28,34,33,36,31,38,31,27,28,27,29,28,34,31,32,34,29,36,44,53,57,41,27,26,28,24,29,20,20,23,24,25,24,28,29,29,36,41,40,45,38,29,32,30,28,29,42,38,25,29,29,32,30,35,29,34,39,33,33,35,35,41,36,64,197,215,137,105,64,57,78,78,85,115,146,151,120,67,96,127,160,237,222,220,239,235,238,236,237,233,235,233,232,
227,219,220,212,215,217,216,214,216,116,4,1,7,11,12,11,15,12,14,14,14,15,197,200,198,196,199,199,198,196,196,198,198,199,198,198,198,197,200,197,196,196,196,199,199,199,200,198,200,196,246,214,105,170,188,191,203,195,200,196,198,201,197,198,198,200,198,198,200,199,200,199,199,200,198,201,200,200,199,199,201,200,201,200,199,200,198,200,200,198,198,198,200,199,200,200,198,198,198,197,197,197,199,199,198,200,198,197,199,198,197,198,198,197,198,198,196,195,197,197,198,198,198,198,198,199,197,196,200,199,198,199,199,198,200,198,198,198,196,196,199,197,199,198,198,199,195,196,196,195,196,198,198,197,195,195,196,197,196,196,197,195,194,195,196,195,192,193,195,194,196,194,194,194,193,194,192,192,193,193,193,191,191,195,194,195,195,191,196,193,191,192,193,192,192,191,194,194,191,193,193,192,193,191,193,191,191,193,193,191,192,194,193,193,191,193,194,191,193,192,193,192,191,193,192,192,190,190,189,191,196,194,192,192,193,191,189,193,191,191,191,191,193,191,191,192,191,191,194,193,194,191,193,193,192,194,193,195,191,192,195,195,193,192,193,194,193,193,193,193,193,192,193,191,194,192,194,195,191,194,196,195,181,177,189,198,199,199,194,195,200,201,200,197,197,195,193,191,192,191,195,193,194,196,191,193,196,192,194,200,194,202,205,198,198,207,200,174,138,98,134,156,163,141,85,108,188,235,189,119,146,202,200,195,196,191,194,192,196,203,202,157,121,132,77,89,166,117,37,37,55,34,74,158,237,151,24,7,14,29,7,54,110,61,111,165,129,182,240,242,167,86,83,96,89,51,72,111,145,140,90,76,81,61,26,19,47,91,127,136,122,116,117,114,106,101,105,104,96,105,114,105,97,88,86,97,97,116,121,112,76,36,50,65,89,92,136,130,42,76,68,78,86,94,116,68,21,23,111,125,60,24,19,31,29,30,29,25,23,25,33,26,25,31,21,27,24,24,24,24,29,22,22,33,34,42,60,70,71,64,76,57,65,139,167,175,173,171,171,178,178,178,173,165,171,164,168,168,163,157,161,180,191,186,180,185,169,155,155,147,127,131,137,140,142,91,146,188,171,173,151,151,134,87,46,41,47,53,50,50,58,57,67,82,78,78,82,74,74,71,77,80,76,80,76,75,72,68,65,61,66,63,61,
61,62,63,62,61,63,57,101,166,128,47,27,41,41,48,49,49,57,41,27,14,12,17,29,25,68,104,67,71,69,89,107,109,95,64,81,57,74,122,66,33,14,53,109,61,34,76,83,100,133,108,101,81,57,57,28,88,143,65,20,11,27,19,92,189,201,170,167,170,177,185,179,186,188,133,33,13,16,12,22,24,28,34,31,33,29,31,34,28,42,38,31,31,33,31,30,33,32,33,34,27,24,31,29,29,25,33,23,26,31,21,29,37,48,52,45,37,43,39,23,27,18,24,25,20,27,25,29,30,32,34,47,45,42,44,28,33,29,30,35,34,39,31,32,32,32,31,32,31,33,36,36,33,36,41,39,41,51,156,189,141,141,101,87,101,81,60,49,80,83,53,37,121,179,182,190,144,160,179,181,186,178,180,185,200,214,224,229,231,227,221,219,217,217,212,214,115,4,1,7,11,12,11,15,13,14,14,15,14,194,196,196,196,199,194,193,198,197,197,197,198,196,193,199,198,198,195,196,199,194,200,197,197,199,197,201,199,249,217,91,160,183,187,203,192,196,193,199,200,198,199,195,198,198,199,199,198,199,198,200,202,200,199,198,200,200,198,201,199,199,199,196,199,200,196,198,199,199,197,198,197,198,198,195,199,196,196,194,196,198,195,198,199,196,197,193,195,195,195,196,197,198,198,199,198,196,196,198,198,194,199,196,196,198,195,198,196,199,198,194,199,195,195,195,194,196,196,196,194,194,195,196,196,196,195,195,193,194,194,195,195,196,193,195,195,194,196,194,195,195,196,195,193,194,193,193,195,193,192,193,193,193,191,194,191,190,193,193,193,192,192,191,192,191,192,195,192,194,195,193,193,193,193,194,193,194,192,191,192,193,191,192,193,192,192,192,193,195,193,193,195,194,193,191,194,195,193,194,195,194,193,193,192,191,192,195,192,191,191,193,191,191,193,190,193,191,192,192,191,191,191,191,191,190,189,194,193,193,193,193,193,193,193,194,194,195,193,193,191,191,193,191,193,194,196,194,193,194,190,195,191,192,194,190,193,194,192,194,195,194,177,177,193,196,196,194,191,194,198,201,198,198,198,197,198,193,195,191,191,193,191,196,193,198,196,207,213,204,220,211,197,206,212,181,164,139,105,118,130,132,123,95,90,165,234,212,115,112,193,198,207,197,193,194,193,208,204,183,113,54,20,55,158,230,130,56,70,102,87,85,144,193
93,192,183,139,99,106,98,110,171,206,208,214,216,213,218,216,215,211,220,222,207,217,215,209,201,195,211,210,204,210,207,198,208,188,148,134,114,125,111,158,200,196,205,196,200,200,203,185,177,181,153,157,188,204,171,175,196,190,179,167,160,167,173,164,151,168,142,81,83,96,112,130,116,122,117,96,87,84,96,98,97,89,87,111,135,116,106,108,110,168,188,182,188,193,206,207,211,216,205,196,201,204,203,199,193,204,205,199,211,202,189,201,208,203,198,201,206,197,191,195,199,205,199,201,202,202,204,205,201,200,200,198,193,186,192,193,179,155,133,130,136,142,125,106,109,131,131,126,121,116,119,109,100,91,76,55,59,71,78,195,193,85,113,99,100,112,83,166,152,126,180,124,79,57,75,106,120,133,95,101,92,78,104,105,89,107,129,112,97,76,95,74,48,22,11,15,61,81,95,108,142,199,113,105,173,75,36,158,166,55,116,132,127,181,200,190,156,181,189,149,82,61,51,42,59,72,81,107,136,116,69,96,80,25,56,83,86,101,77,59,89,51,36,71,46,41,49,70,83,97,90,84,49,19,21,39,75,110,127,127,116,114,105,106,97,93,100,106,113,91,118,117,99,82,57,63,53,31,27,45,31,14,17,22,25,25,32,34,50,51,32,33,33,21,22,19,21,35,44,39,22,19,23,23,25,29,28,27,30,33,29,32,35,28,30,28,32,37,39,52,59,55,50,37,22,21,20,23,33,43,44,43,45,44,43,38,45,41,45,43,44,47,42,43,40,42,65,41,38,35,70,37,39,36,39,35,34,38,40,44,42,44,44,41,42,43,40,42,47,47,45,41,45,46,44,48,47,40,46,52,53,51,44,47,48,46,44,50,50,45,48,51,47,51,51,51,53,49,55,50,54,54,54,58,59,62,60,65,57,59,73,68,69,79,92,91,98,85,97,98,97,100,103,101,87,100,99,88,94,98,104,110,114,116,114,119,115,116,110,121,119,122,130,123,137,125,84,139,136,123,127,132,128,137,125,130,135,122,113,102,107,98,100,106,96,99,108,120,116,132,130,125,133,125,128,125,124,131,127,122,88,121,124,117,117,119,84,120,115,108,78,89,95,98,107,110,97,113,122,104,95,101,107,110,94,91,98,96,101,103,97,98,94,86,86,79,86,76,55,68,65,66,70,71,74,79,88,83,90,88,78,82,80,83,107,67,119,66,69,65,61,53,52,55,50,49,53,47,46,56,59,59,73,106,124,121,120,103,91,91,83,92,98,109,132,143,133,133,136,130,127,126,129,130,
127,139,147,153,160,152,149,142,158,165,163,139,138,107,13,2,11,10,14,11,14,12,14,15,15,15,196,200,195,198,199,197,200,197,197,200,199,198,196,198,198,195,199,197,197,196,197,201,195,199,199,202,195,234,248,206,177,188,151,105,122,128,125,133,132,141,145,149,166,197,208,198,197,200,197,195,198,198,195,195,197,198,196,194,197,196,194,193,196,196,196,197,195,197,194,196,195,193,198,195,193,196,196,195,195,195,194,196,194,194,196,195,195,196,196,194,194,194,196,191,193,193,191,196,195,193,197,198,196,195,195,194,195,196,192,191,193,194,194,195,193,196,194,196,199,204,198,208,189,129,112,113,108,105,163,205,201,200,205,185,162,177,179,178,199,207,162,166,211,213,179,121,131,141,156,205,217,211,223,200,144,127,124,139,130,170,209,210,216,210,210,213,206,133,91,86,69,117,167,158,119,114,133,122,105,83,80,81,92,87,77,96,92,78,69,68,71,83,88,117,122,96,89,89,107,114,122,126,122,122,121,92,75,81,62,81,85,85,100,108,132,148,175,202,182,145,156,171,179,169,169,191,181,155,176,166,141,162,179,174,154,172,178,154,143,149,176,203,205,212,220,217,210,205,211,207,201,179,156,152,150,149,134,122,106,101,110,106,91,93,109,119,124,128,123,114,120,113,112,111,127,91,76,72,105,232,157,70,98,88,101,110,112,197,165,110,175,147,77,56,63,74,47,65,108,118,104,75,101,101,79,86,105,177,171,130,122,114,78,100,97,92,208,147,111,132,117,126,81,171,189,87,85,171,140,65,141,83,89,186,153,97,104,184,208,189,91,57,51,63,90,74,63,67,88,74,55,81,59,29,60,79,89,99,79,59,72,48,42,63,46,42,57,64,70,80,60,39,22,29,52,76,119,132,132,135,120,101,102,96,89,107,101,111,107,99,99,63,68,59,56,65,62,42,15,20,22,19,15,15,23,22,29,31,31,26,27,29,24,25,24,25,24,40,44,30,24,23,16,24,27,29,30,29,32,29,31,31,31,32,33,29,31,40,63,72,75,56,32,25,15,22,24,33,48,49,46,52,48,44,47,43,46,44,41,47,42,41,49,43,39,41,38,40,37,42,39,38,39,38,41,34,39,33,36,40,35,47,39,44,42,37,44,39,46,42,40,48,46,46,46,48,53,46,45,46,46,48,45,44,44,42,46,37,38,36,41,44,34,42,37,37,36,40,41,34,40,36,37,40,39,41,41,43,40,44,41,41,37,42,43,40,42,3
9,38,39,39,39,41,39,43,41,37,45,41,44,43,37,45,44,41,44,40,48,42,42,46,39,44,41,41,48,44,42,43,41,47,45,47,49,49,55,50,51,49,47,46,47,51,53,50,49,59,60,67,71,72,82,81,81,83,95,92,100,99,90,96,95,104,103,101,99,105,112,113,113,103,116,111,112,119,119,129,130,128,126,128,123,134,131,117,124,116,120,134,137,126,128,122,115,120,114,111,105,99,107,109,110,122,119,122,125,123,122,117,118,117,117,120,121,107,104,118,113,118,113,101,84,84,72,74,73,68,70,55,63,63,73,105,149,160,154,139,112,98,94,84,76,87,110,141,149,150,142,144,135,128,130,120,109,98,104,111,118,138,142,136,126,136,147,147,120,122,100,16,2,11,11,14,11,14,13,15,15,15,14,190,194,193,198,200,198,200,200,203,203,205,204,203,202,196,198,197,196,199,194,196,196,196,198,196,202,198,243,250,207,180,194,117,18,5,11,9,14,15,19,16,17,57,150,199,200,201,202,200,197,202,198,195,198,199,201,200,198,198,196,198,196,195,197,199,198,197,195,196,196,195,198,198,196,198,199,194,195,199,198,197,196,195,196,196,195,196,196,197,197,196,196,195,194,194,195,195,197,195,196,200,196,200,204,200,200,199,198,197,195,196,194,195,193,198,203,203,213,215,215,210,202,196,156,139,159,141,133,162,171,146,150,158,105,95,115,120,114,136,141,79,108,189,211,130,62,66,67,106,174,210,194,192,161,121,112,100,123,130,175,193,193,203,194,200,203,191,103,49,57,101,159,165,131,102,87,100,92,76,69,69,69,77,85,87,104,101,91,98,92,94,100,98,115,113,91,93,109,116,120,130,125,110,87,84,71,66,77,67,71,54,64,80,75,97,120,142,165,155,123,132,149,152,152,149,161,147,119,146,145,129,145,146,141,141,154,153,138,142,145,159,181,190,188,186,184,164,167,184,184,171,156,150,130,124,115,118,124,102,103,111,143,123,109,101,90,104,102,107,104,141,163,161,164,182,159,104,88,166,241,118,81,102,94,145,173,174,229,188,94,146,158,107,55,61,77,35,53,134,211,169,83,89,113,97,89,82,149,196,187,159,149,125,174,178,148,213,151,164,158,54,28,56,168,137,68,121,178,134,114,166,92,118,163,95,29,26,133,167,156,89,44,41,38,61,59,51,55,67,57,52,78,51,39,66,80,96,83,57,46,71,54,47,73,46,
52,63,65,69,48,33,23,32,70,100,124,135,132,135,118,108,101,87,87,98,110,101,93,99,75,41,18,33,51,43,59,66,45,21,28,36,38,28,12,18,21,22,26,30,25,19,26,26,29,28,29,29,37,32,21,24,29,24,19,29,24,31,33,24,31,30,30,33,30,36,33,45,69,85,79,56,35,22,19,18,21,42,54,55,61,56,50,54,53,53,50,47,46,44,40,44,47,42,39,41,43,40,49,44,45,46,36,39,33,31,33,34,29,31,39,37,41,41,39,43,43,49,51,46,51,52,49,52,54,53,55,53,55,55,52,51,54,54,55,50,49,49,40,38,44,43,42,41,42,41,37,39,39,36,35,39,39,35,41,41,43,38,38,43,43,40,45,45,38,43,39,41,46,44,43,47,44,45,46,45,44,44,44,43,45,43,46,45,39,42,41,39,44,38,39,45,41,44,37,41,39,34,38,35,36,35,33,36,40,38,41,36,34,41,37,41,39,37,41,43,41,37,41,41,41,41,43,40,35,44,37,36,40,39,43,41,39,44,42,43,36,41,43,43,41,68,73,41,42,46,45,81,44,48,48,48,48,48,48,51,49,53,54,85,60,58,57,59,79,60,57,59,59,61,67,68,73,80,86,85,90,84,78,90,99,99,96,106,110,96,106,84,110,111,126,110,101,104,103,115,101,103,102,97,101,108,113,125,152,157,153,147,126,115,109,106,114,125,149,156,160,159,153,146,142,141,125,112,107,98,96,90,97,117,124,131,120,121,123,129,115,113,100,17,3,12,10,16,13,14,13,14,15,15,15,184,190,186,199,201,201,205,202,203,205,208,210,208,205,198,194,193,193,194,193,197,195,196,197,198,198,201,249,250,205,181,199,120,15,4,10,17,27,25,29,40,32,50,158,214,214,222,218,210,213,212,206,203,208,217,215,214,209,204,204,203,202,201,204,202,202,203,202,200,200,201,203,205,204,208,210,205,207,213,206,206,205,200,202,201,200,203,198,203,207,203,202,201,200,204,201,199,203,199,211,214,214,221,214,211,214,209,205,200,193,189,183,184,188,199,206,206,212,209,212,164,121,137,118,123,131,105,106,119,103,86,87,90,71,69,92,100,77,86,107,69,86,167,166,91,67,56,45,74,118,150,123,129,120,99,93,85,93,96,142,155,145,151,141,144,150,155,101,75,95,158,207,165,113,101,117,116,107,108,104,109,107,119,121,123,131,122,123,120,128,130,133,139,135,127,123,132,140,132,112,103,100,86,97,106,100,110,116,105,102,93,116,133,131,138,151,149,138,149,158,157,168,171,161,162,160,155,149,1
60,162,163,168,155,160,168,184,178,165,170,169,162,163,131,103,98,91,81,98,138,151,165,162,163,151,129,112,119,126,112,120,164,205,185,151,119,88,75,73,75,88,159,198,198,196,225,184,119,117,223,227,88,120,108,110,182,224,242,244,191,84,119,163,131,69,64,125,108,101,163,212,162,87,105,105,109,86,32,100,188,229,221,230,177,212,217,106,140,120,103,127,114,46,72,165,98,81,141,137,130,162,184,196,175,124,82,22,4,53,51,69,49,30,32,17,39,45,44,35,42,38,61,94,55,56,81,92,94,75,63,55,74,61,55,75,54,47,58,55,45,27,23,34,74,118,121,122,123,128,116,99,95,92,96,95,104,117,93,83,72,37,19,15,42,46,38,57,65,47,22,39,53,54,49,21,17,18,22,26,26,28,23,25,24,28,26,27,29,37,38,34,38,37,43,23,19,29,28,31,28,34,28,29,25,33,39,54,76,86,82,48,25,18,18,18,22,41,56,55,51,59,57,55,57,51,52,56,54,51,48,47,46,49,50,44,44,46,48,48,47,45,25,24,21,15,23,21,18,16,25,33,33,35,36,42,46,48,52,56,51,46,54,55,55,53,53,55,55,58,53,54,54,55,54,54,55,53,53,45,46,42,43,44,39,46,36,32,26,22,28,26,26,26,26,35,36,37,38,40,40,39,43,43,37,43,46,50,51,53,55,53,57,50,55,57,53,56,52,52,49,54,57,53,56,50,50,45,44,45,40,43,38,39,41,39,43,37,37,35,36,39,39,37,38,36,36,34,32,34,32,36,42,41,42,41,42,46,40,43,41,42,42,41,39,39,38,37,42,41,40,36,43,43,38,41,44,41,38,41,41,40,42,40,39,39,42,42,41,35,39,42,37,41,45,42,44,49,52,54,44,48,49,48,53,49,50,44,43,45,46,49,48,46,53,52,57,56,54,55,59,61,57,51,38,38,39,35,37,39,38,49,56,57,87,102,113,98,83,98,103,112,108,108,99,105,110,116,127,119,120,125,132,141,149,153,154,154,160,155,145,143,143,136,122,119,132,118,105,95,93,108,112,111,106,105,110,97,114,99,18,4,12,11,15,13,15,13,15,15,15,15,191,188,181,188,187,188,190,183,187,177,180,192,196,200,195,188,189,189,188,187,192,191,190,192,192,193,195,253,249,188,168,213,128,22,33,30,27,31,33,36,46,46,88,195,227,220,222,225,219,207,214,208,203,205,210,212,212,214,214,212,210,217,222,217,211,213,214,215,208,211,215,216,214,214,222,220,208,216,218,217,221,220,215,215,217,214,213,212,221,220,218,222,214,219,218,217,215,215,203,211,214,19
2,208,214,215,220,214,213,196,184,184,187,194,187,191,179,137,132,144,147,97,66,83,78,90,95,80,88,92,88,76,72,77,82,95,99,100,101,108,105,89,97,119,96,67,75,79,72,88,96,107,117,132,107,97,118,147,152,134,151,145,133,114,100,101,109,126,115,125,148,188,200,173,134,136,134,128,122,138,141,126,117,118,124,130,122,120,131,133,130,133,133,131,144,148,152,153,148,125,98,94,99,97,119,159,149,152,149,120,128,130,151,164,135,128,143,131,115,147,171,168,171,172,170,170,177,178,168,181,184,188,199,178,169,181,194,178,150,141,151,162,132,94,65,69,81,83,137,168,182,189,179,184,162,148,131,127,134,123,138,163,207,219,220,206,189,175,166,160,157,187,194,181,202,220,184,95,121,226,171,98,155,115,122,184,235,236,197,177,108,115,146,149,94,69,118,161,180,130,120,129,111,106,85,81,24,4,58,90,160,200,196,151,166,190,125,132,102,69,178,160,78,154,178,78,108,142,108,174,172,163,224,178,108,65,12,12,35,32,63,44,28,29,17,31,27,32,27,31,40,74,87,57,66,65,71,66,55,64,59,50,38,39,55,46,39,42,35,28,31,54,66,98,116,105,97,101,105,103,88,85,89,95,101,110,117,105,75,35,16,24,41,59,70,51,54,60,35,26,49,63,61,57,32,13,17,18,21,21,24,24,20,24,28,24,21,32,37,39,51,48,42,47,34,24,24,24,29,34,31,29,30,35,44,58,74,77,70,44,24,21,17,16,15,35,54,54,57,57,54,57,56,55,53,53,60,50,51,56,53,54,57,51,48,48,50,48,50,48,35,21,16,21,17,16,15,18,19,17,25,36,39,38,45,49,51,54,55,51,50,51,55,56,56,54,58,57,54,54,56,57,54,53,55,54,55,53,47,44,40,44,44,40,38,23,21,17,19,17,17,19,18,23,22,32,33,39,36,39,43,40,37,30,46,56,57,59,59,61,60,61,62,56,55,63,55,55,56,51,57,55,54,60,49,45,49,44,41,35,42,44,39,41,44,39,39,38,34,46,38,37,34,20,23,21,23,25,21,25,27,35,41,42,44,45,44,49,47,46,49,47,42,39,39,40,39,42,41,38,41,39,42,50,42,38,42,44,45,43,41,43,47,37,44,46,39,42,36,40,35,35,39,36,41,49,55,55,53,48,54,54,53,58,54,53,55,51,55,54,53,58,54,55,61,60,63,59,64,69,61,53,36,23,23,27,28,37,36,27,47,51,54,71,82,87,74,77,100,115,106,98,85,64,67,68,88,98,83,113,127,131,131,126,130,133,150,154,144,136,139,139,129,132,149,152,138,123
,117,104,109,117,109,105,98,108,101,111,97,19,4,12,11,16,13,15,16,14,15,15,15,194,180,158,155,145,141,151,151,141,129,128,143,162,184,192,191,192,190,187,188,192,192,195,196,194,196,177,234,242,143,165,215,131,28,33,29,25,30,28,41,42,51,82,176,178,136,150,166,157,153,176,177,152,120,134,146,165,188,194,210,200,202,210,210,200,199,213,216,199,199,215,216,194,171,187,207,185,179,193,187,202,214,209,208,216,220,214,196,192,205,207,213,214,208,210,212,209,208,166,158,139,113,141,152,146,154,176,210,216,219,244,251,245,202,168,108,67,77,84,99,87,82,98,101,118,129,121,129,117,108,117,100,103,107,113,117,110,119,123,125,110,91,94,78,85,118,120,141,139,132,143,151,146,93,94,146,207,213,176,177,162,143,126,108,118,127,151,150,151,195,201,187,177,154,135,126,112,110,136,134,113,95,98,113,116,114,104,109,111,117,110,100,109,121,131,146,140,125,113,100,105,105,97,131,145,127,136,135,120,126,132,152,136,108,102,100,93,93,147,180,159,150,134,138,157,151,144,128,145,170,173,169,145,127,141,150,119,108,107,120,144,127,101,95,127,160,174,196,195,184,179,162,161,146,132,129,136,135,132,133,140,170,209,232,249,249,251,251,249,243,230,200,168,142,146,118,79,120,182,126,94,144,116,118,143,193,236,175,162,120,108,120,149,129,72,67,149,197,116,102,122,131,147,113,84,42,14,55,19,62,96,113,120,114,204,162,181,158,127,205,124,99,198,151,65,128,110,104,191,124,78,162,131,57,36,6,39,122,121,91,62,61,74,63,59,49,41,48,57,61,77,69,46,44,42,48,47,34,46,47,37,32,31,33,26,31,28,22,37,65,91,94,118,108,81,88,85,105,94,77,75,80,106,111,120,122,73,32,21,22,52,68,97,112,74,59,55,31,37,66,69,62,66,51,43,39,24,17,19,23,21,27,22,24,28,21,26,29,30,44,55,50,46,40,27,21,25,33,28,33,31,29,44,63,77,74,54,32,24,19,17,16,17,40,53,54,57,60,61,57,55,55,57,53,54,53,57,55,57,56,50,54,55,55,51,50,46,47,44,29,22,18,17,21,17,20,20,17,19,23,39,42,43,44,50,55,52,56,50,48,57,55,53,54,57,59,53,54,53,59,58,52,53,51,56,53,51,51,50,46,44,46,44,32,18,20,18,21,18,17,19,16,18,25,29,32,34,33,44,46,39,30,39,58,54,57,61,63,61,60,61,
60,60,57,60,61,59,54,56,49,52,53,51,52,41,50,45,37,42,41,43,44,38,42,40,38,44,41,45,45,26,23,19,16,21,17,22,18,20,26,31,36,39,46,41,46,50,52,51,53,50,46,41,42,43,41,44,41,43,46,42,43,44,39,43,46,44,43,45,43,41,45,41,42,46,38,30,27,21,24,29,24,27,34,35,42,48,49,52,50,53,56,56,55,55,58,56,59,56,55,59,60,63,63,62,61,61,66,63,50,38,26,29,28,27,35,51,56,65,79,55,53,51,39,57,60,59,83,87,68,53,47,36,62,74,68,77,77,107,119,121,118,106,112,125,132,136,124,121,116,102,107,117,129,142,120,118,129,122,134,134,126,115,124,133,111,130,103,16,4,11,12,15,13,15,15,15,16,16,15,183,160,138,123,108,112,127,143,145,127,118,116,139,181,203,209,206,201,203,204,211,210,212,215,214,214,196,238,224,133,155,205,109,27,46,22,29,33,29,36,42,51,72,130,95,50,60,81,78,91,144,128,89,63,70,89,114,139,156,168,136,116,125,137,129,147,158,162,139,130,149,148,116,87,123,154,113,112,121,114,132,141,134,131,156,178,157,104,111,129,141,158,128,126,134,141,153,160,105,83,87,64,90,78,74,98,133,208,246,249,250,250,250,207,135,96,93,108,107,119,116,122,143,145,159,157,143,150,135,130,139,120,119,128,115,113,112,122,128,128,128,104,115,127,148,182,183,182,163,143,149,160,139,88,90,137,184,178,162,170,151,149,141,131,132,129,145,153,187,192,170,157,143,130,109,102,93,79,107,114,107,99,92,97,101,95,95,103,98,103,99,86,94,112,115,117,120,128,115,104,120,114,99,107,110,99,109,114,96,94,108,131,125,101,97,94,88,97,150,170,143,119,99,100,111,108,103,80,112,141,128,118,95,92,102,105,101,110,125,137,147,136,142,133,141,160,151,165,137,118,118,111,113,107,105,108,112,117,115,107,109,117,128,141,144,146,154,167,178,195,211,216,212,200,202,166,92,151,234,152,92,106,101,127,186,228,229,175,155,141,144,109,133,152,97,31,132,219,128,97,122,147,186,246,241,214,180,159,99,112,141,159,182,137,148,130,192,170,150,203,80,125,207,104,86,151,136,139,153,80,22,63,73,44,29,9,62,142,152,122,90,124,138,125,107,83,77,73,76,83,90,67,41,34,30,42,44,39,41,39,53,53,43,33,29,25,21,39,74,109,111,109,129,112,88,89,97,100,88,91,88,97,114,113,11
1,74,29,20,19,45,85,97,112,121,114,111,73,23,30,64,80,93,89,69,63,55,33,19,14,22,17,19,27,21,27,24,25,23,25,40,45,53,52,42,36,27,24,26,29,33,30,47,66,77,69,48,28,24,18,17,15,22,53,55,56,57,56,57,55,57,54,59,56,54,60,57,52,56,59,57,57,51,50,50,48,50,49,44,40,24,20,17,18,18,18,21,21,22,17,31,40,44,45,45,49,51,54,50,52,55,51,49,49,49,55,51,52,55,58,57,53,56,55,59,54,53,55,53,56,56,51,50,42,25,21,21,16,21,21,19,19,22,19,22,31,36,44,49,51,45,25,28,48,61,60,61,63,59,57,56,59,59,60,61,57,58,55,51,55,50,48,52,46,48,48,44,41,38,44,42,40,42,42,46,43,48,46,46,48,39,22,19,17,19,16,21,22,17,21,16,37,37,40,44,42,46,49,53,54,50,46,51,48,45,42,43,48,42,46,43,44,44,41,45,42,48,44,45,47,45,47,45,45,42,33,26,20,16,20,22,18,22,25,24,27,36,42,41,44,49,52,55,51,51,50,53,57,55,55,55,56,61,63,60,60,62,63,61,56,42,28,29,29,29,27,37,58,99,142,118,69,67,84,75,74,70,63,69,60,38,38,29,33,84,96,103,111,104,128,129,126,118,112,116,123,118,106,108,106,105,97,92,87,108,106,83,83,109,130,128,125,123,124,134,137,121,141,102,15,4,11,12,15,14,16,15,15,16,16,15,166,149,125,112,114,115,132,162,173,171,159,155,164,193,211,210,208,205,203,202,205,206,206,207,214,219,229,247,237,139,119,138,57,25,39,24,29,32,31,35,38,56,75,122,100,64,68,78,85,104,142,132,113,99,122,106,111,126,120,120,76,57,53,62,71,78,88,100,83,62,67,80,78,61,71,94,77,76,80,69,71,78,72,65,89,101,88,66,63,74,89,86,65,59,59,84,103,109,93,83,77,81,95,92,95,99,103,151,178,178,173,174,181,144,113,108,117,115,119,132,122,124,131,134,140,122,114,131,124,114,123,116,117,122,117,114,111,121,120,122,126,124,147,166,186,223,237,218,190,160,145,149,138,106,112,129,136,130,118,127,119,125,128,123,109,92,109,157,195,177,141,124,119,106,81,77,75,67,93,116,107,85,78,75,79,84,100,105,89,95,86,79,108,130,110,102,126,132,118,103,125,124,103,113,101,96,105,114,112,84,83,107,105,103,101,102,104,103,128,124,100,100,82,81,103,107,107,96,118,131,112,98,92,89,95,110,113,132,149,152,156,150,158,151,107,91,98,95,87,88,82,80,89,91,96,89,96,102,99,97,88,84,69,57,55,53
,54,61,69,105,154,200,230,233,245,217,116,134,203,152,121,122,100,161,241,237,193,160,160,158,174,111,119,173,157,73,153,238,123,98,101,99,186,247,250,250,251,251,187,232,204,177,212,107,100,121,148,123,172,160,61,172,163,71,129,201,252,184,95,59,16,28,43,49,38,33,55,59,62,65,56,85,102,88,88,76,72,77,73,69,72,63,52,34,27,42,41,43,40,36,57,69,53,29,20,28,34,69,106,113,114,101,128,112,95,109,88,106,107,105,113,111,125,108,61,28,12,26,49,82,110,102,110,112,118,121,72,30,51,94,111,102,80,61,46,30,26,24,16,15,18,25,24,24,29,27,30,25,27,29,35,51,56,49,48,34,22,27,29,34,50,74,79,57,38,24,20,16,15,19,31,50,59,57,54,57,57,54,54,61,57,59,56,56,56,53,61,57,55,51,58,55,52,54,46,50,47,45,40,24,18,19,17,27,21,19,24,18,19,30,42,40,46,49,52,54,53,56,54,47,51,45,41,49,49,51,54,55,53,57,55,57,59,51,54,58,57,57,57,55,57,51,44,30,21,17,19,25,19,19,18,22,19,30,48,45,52,53,43,35,20,32,55,57,61,61,59,60,50,57,56,55,60,55,58,59,53,53,48,47,54,47,55,53,47,47,39,42,41,43,44,44,47,56,49,49,47,48,40,27,21,18,22,17,21,20,26,21,18,23,29,41,41,42,45,46,48,53,51,54,52,44,45,45,45,46,43,45,48,44,44,43,46,50,48,50,51,51,47,50,52,49,50,42,26,22,21,21,22,21,21,24,23,22,25,33,37,38,47,47,48,50,49,54,54,50,57,56,54,49,53,62,57,60,56,59,57,54,48,31,29,33,33,33,26,38,87,125,149,137,104,126,129,123,134,123,98,98,84,86,100,90,101,120,139,148,152,135,128,131,132,128,121,125,137,134,122,117,119,118,111,101,102,111,100,87,89,116,110,101,107,93,87,105,116,109,128,96,17,5,11,12,14,14,16,14,15,16,15,16,144,122,116,117,120,109,110,130,143,155,171,175,180,174,166,167,164,155,141,126,127,120,116,117,142,154,183,245,236,136,67,62,9,18,36,21,31,29,30,40,41,52,99,149,128,110,116,137,133,141,173,167,165,141,148,141,127,131,122,115,101,102,93,83,77,92,102,103,86,70,69,76,81,85,92,89,80,88,84,87,94,89,89,81,84,95,91,92,96,96,103,107,92,83,81,95,113,120,112,110,110,101,125,122,115,112,89,84,86,85,76,75,86,84,100,109,105,104,109,118,112,109,111,107,111,97,101,113,107,109,115,116,116,121,118,123,118,116,111,102,127,145,151,1
59,199,226,234,234,233,213,197,190,165,122,111,117,129,122,100,95,95,100,110,113,108,94,141,184,191,152,105,119,119,105,87,88,102,108,127,140,122,98,83,92,107,108,117,122,99,100,101,98,133,153,124,118,142,146,122,108,127,128,126,124,112,114,127,142,136,113,99,103,108,99,103,112,109,104,104,96,82,84,100,110,123,135,126,119,127,119,111,110,100,96,110,116,123,152,156,154,153,152,157,155,108,67,81,101,92,93,95,93,97,97,95,95,101,101,103,94,94,85,67,71,81,84,73,67,57,86,108,131,147,133,165,172,125,81,91,126,139,131,113,115,160,143,88,76,74,73,86,63,93,151,172,88,98,119,103,119,79,38,82,222,252,252,251,251,232,186,148,110,166,137,124,145,108,126,188,91,83,188,81,87,173,200,249,160,84,38,2,40,54,51,45,44,44,36,39,43,41,41,53,59,57,61,55,46,42,41,47,45,38,32,28,36,35,29,33,40,55,46,33,28,27,31,55,95,100,105,110,100,101,87,100,98,98,112,106,118,116,124,112,56,24,14,29,60,87,115,111,100,101,113,111,115,100,77,109,116,99,71,44,30,26,24,22,18,15,17,24,24,22,27,33,24,28,30,26,28,29,42,51,51,56,46,27,22,32,55,76,73,54,33,21,19,20,17,21,41,56,60,60,53,57,55,56,55,55,56,56,57,54,52,52,54,57,56,57,55,53,53,55,55,49,51,45,43,37,20,22,18,17,24,20,17,19,24,21,31,39,41,46,47,47,57,59,53,55,53,38,31,37,47,52,50,60,55,53,56,54,56,51,56,56,59,56,53,58,54,53,53,39,19,21,18,19,19,22,22,18,19,24,38,42,50,49,46,31,22,20,29,57,60,61,58,56,57,54,57,60,51,59,59,53,54,51,53,53,50,52,53,50,47,40,38,44,44,41,41,45,50,48,50,51,50,52,45,41,28,21,22,19,19,21,20,20,19,21,20,34,43,42,46,49,49,49,54,57,49,46,53,46,46,45,44,45,41,47,46,47,52,51,48,50,51,51,58,55,59,57,55,51,36,27,20,25,21,20,21,21,22,21,23,22,35,39,39,47,48,43,47,48,50,52,49,52,48,50,54,57,55,55,59,56,56,59,53,40,30,29,36,33,27,26,66,112,127,131,119,120,126,128,136,144,139,116,119,130,138,148,144,149,153,146,145,138,124,124,120,123,125,125,136,153,155,138,129,125,125,125,123,127,137,122,121,117,116,114,96,97,75,64,86,84,85,110,88,20,5,12,12,15,13,15,15,15,16,15,15,117,109,117,117,117,106,93,87,79,90,107,129,133,114,102,110,119,110,85,64,53
,44,46,46,69,75,131,239,239,128,55,58,12,13,33,34,31,25,36,44,43,52,93,154,153,149,149,165,165,164,174,168,163,115,137,143,128,128,121,126,129,140,139,145,141,146,149,146,127,92,91,104,114,113,112,116,102,105,106,108,117,109,107,111,116,105,114,126,122,125,135,132,125,130,120,126,129,127,119,116,118,128,141,123,108,102,102,98,90,77,69,84,94,95,105,105,108,110,110,112,106,110,114,109,110,110,107,111,109,112,119,113,115,116,116,122,125,126,124,114,122,140,137,165,203,214,214,227,252,252,246,236,200,149,120,113,128,129,106,97,94,98,118,139,147,153,195,215,180,136,132,144,142,143,150,156,170,170,163,178,168,163,166,168,169,141,136,152,141,151,155,137,139,144,129,134,152,147,134,127,139,141,139,136,125,127,125,137,142,127,127,112,99,103,111,121,120,109,110,103,98,119,125,123,145,146,126,112,109,107,106,112,104,117,146,133,125,156,159,154,151,149,153,161,130,74,87,107,114,112,106,106,105,103,101,98,109,110,104,108,118,151,134,93,106,107,100,109,99,98,92,91,84,45,93,141,151,107,45,51,66,72,48,37,66,69,53,46,35,33,21,11,26,71,146,101,41,11,84,158,83,35,9,28,127,205,219,244,177,131,139,126,203,156,148,154,95,167,143,46,133,166,133,183,129,73,139,103,45,22,20,61,53,41,32,29,30,30,32,29,27,29,34,33,37,34,33,30,23,29,27,27,25,20,27,27,21,27,27,31,37,41,30,40,42,47,69,90,104,108,112,94,95,93,87,99,100,110,115,116,125,99,50,21,15,32,67,99,98,106,109,93,95,107,116,112,101,94,111,76,40,36,26,26,17,19,15,21,23,21,22,21,24,27,26,29,28,25,33,28,31,31,46,57,55,62,40,29,52,75,71,44,29,21,21,15,18,29,42,55,64,61,57,55,53,58,58,56,55,55,55,60,51,55,57,55,54,57,59,55,59,55,56,51,51,50,47,37,24,26,23,17,21,22,16,21,19,19,21,34,42,39,43,48,52,55,54,54,55,44,35,34,37,53,57,59,61,57,57,55,56,57,57,57,55,57,57,55,55,59,57,47,25,19,25,21,18,21,19,19,19,18,24,35,44,49,51,34,22,24,17,44,58,59,63,55,59,59,57,60,52,56,56,51,61,53,58,60,54,59,53,52,51,47,46,43,42,41,43,42,48,53,50,49,47,48,50,49,36,21,22,23,20,19,24,25,18,24,22,24,36,45,45,45,51,51,52,50,53,55,50,54,47,48,44,45,49,46,51,50,53,57,53,4
5,49,49,56,57,57,57,52,55,46,34,20,24,28,17,20,27,22,23,26,24,34,36,38,44,48,47,50,50,49,50,54,51,46,49,54,53,55,60,57,62,60,58,61,52,33,34,38,39,36,21,39,104,129,113,117,110,107,99,94,107,99,107,92,101,124,134,141,131,146,149,128,111,119,119,122,124,119,122,133,141,148,142,125,131,134,127,122,121,143,150,128,121,114,122,119,108,111,87,84,83,78,81,106,94,19,5,13,12,16,13,15,15,15,16,15,15,95,110,121,122,122,110,103,80,67,63,72,83,92,97,96,127,164,165,144,130,110,95,96,92,88,80,141,245,246,142,102,126,50,19,29,31,33,26,32,34,39,50,93,157,170,180,180,178,178,175,162,145,139,110,134,150,129,122,118,127,136,141,150,160,166,172,170,174,159,128,111,116,121,118,114,112,118,113,107,114,111,107,116,117,113,101,105,126,123,130,135,136,137,143,134,126,122,114,113,112,119,132,141,121,98,95,105,102,103,95,86,105,111,107,107,101,103,112,115,117,112,115,114,111,116,117,110,111,119,111,118,121,110,110,116,128,126,126,136,139,131,129,129,136,151,150,144,152,179,199,216,221,213,197,182,146,149,125,109,112,114,114,124,136,171,226,226,215,179,161,171,176,183,186,192,209,205,194,171,173,164,198,228,222,179,138,140,168,184,182,169,132,111,111,103,105,128,139,135,130,134,125,122,127,131,124,113,113,110,112,127,115,100,104,117,128,124,117,112,118,125,125,131,130,143,148,112,100,98,102,122,118,120,159,215,190,139,155,152,157,153,146,158,161,147,96,73,98,118,125,109,106,108,131,137,111,126,127,125,126,173,251,234,123,97,105,110,140,135,108,101,113,91,108,186,200,207,173,65,18,10,20,10,21,120,169,139,133,132,140,110,45,21,27,122,151,154,115,156,188,117,96,63,23,20,78,95,152,172,153,198,190,212,146,160,139,108,180,97,102,184,192,198,171,92,18,25,46,45,44,49,57,38,26,23,23,27,29,25,24,27,27,27,27,30,29,26,31,24,22,22,21,26,19,22,21,20,25,28,26,29,44,51,61,53,53,68,95,108,106,105,91,98,90,98,112,105,119,117,118,96,45,21,12,39,71,108,109,87,101,92,90,71,80,101,107,103,90,96,55,33,38,51,35,17,24,36,48,34,19,18,18,27,29,29,29,28,29,28,31,24,27,38,51,60,66,59,50,59,61,39,24,19,17,18,22,30,51,58,67,6
4,57,59,58,55,55,57,55,59,53,57,56,55,60,60,57,55,57,53,54,53,52,56,51,52,46,42,33,19,28,26,21,18,23,24,23,25,18,29,43,39,46,50,49,55,54,54,52,44,38,30,32,42,56,63,59,57,59,63,59,57,59,59,55,53,56,56,61,59,57,53,33,22,27,21,22,23,23,21,19,26,20,32,42,47,48,43,27,21,21,22,49,63,60,64,61,60,61,59,59,57,60,57,53,59,60,60,61,57,59,55,57,57,49,48,44,45,42,48,47,48,56,51,51,54,53,48,38,28,19,19,23,16,18,20,22,18,19,20,25,41,43,43,49,50,52,50,51,51,50,55,54,50,50,51,51,52,48,56,58,55,54,48,49,51,53,53,54,55,55,55,53,38,25,21,25,23,18,25,23,18,27,27,24,30,40,41,43,49,45,52,54,48,54,53,55,55,55,56,59,59,60,63,65,60,63,61,43,39,38,44,39,38,22,69,142,131,120,128,117,98,68,66,71,71,90,67,78,115,128,128,122,132,136,123,121,117,115,134,140,130,130,146,143,140,133,116,129,122,114,118,122,149,151,123,121,120,122,130,126,133,114,110,109,95,106,117,91,19,4,13,12,15,14,15,14,15,15,15,15,92,106,112,116,122,114,108,96,78,75,73,79,89,86,94,144,206,218,204,196,181,162,150,132,129,124,181,248,249,159,156,211,104,37,33,30,34,29,36,28,37,45,79,133,139,167,169,165,164,153,139,116,120,118,139,139,120,115,120,129,141,132,125,134,137,148,139,155,159,128,117,110,103,93,99,111,102,105,106,106,109,108,108,111,111,89,93,113,125,131,128,124,129,132,118,114,116,115,118,120,127,129,113,107,111,108,105,107,123,112,104,112,108,107,111,111,121,126,131,142,131,121,115,108,104,113,112,111,115,104,120,114,107,112,117,131,121,107,119,143,149,136,101,91,96,84,84,89,108,136,164,183,198,224,237,222,182,148,125,113,111,111,104,100,163,241,230,181,159,160,174,180,186,191,184,191,199,178,139,140,125,160,202,195,142,103,118,156,181,161,135,108,98,93,86,110,120,117,118,120,120,94,101,121,121,119,90,82,84,84,112,114,125,133,131,135,129,112,116,122,113,112,113,110,129,118,89,90,106,135,152,150,152,190,247,226,150,154,152,155,159,145,153,157,159,116,71,83,105,120,118,113,145,181,177,137,135,134,122,126,162,239,226,126,94,101,132,167,170,117,78,96,92,118,176,233,231,161,131,88,77,63,55,57,127,208,177,182,158,196,192,96,8
68,61,84,101,99,89,100,83,89,101,109,132,87,68,65,42,72,86,125,137,114,120,119,118,117,129,110,101,108,87,93,80,89,98,102,101,75,92,104,104,110,137,143,114,112,125,123,108,78,66,87,114,107,104,114,90,64,78,92,89,78,99,89,84,89,101,118,132,134,117,121,116,93,93,106,122,117,113,128,122,108,103,92,123,153,158,145,93,75,57,45,42,55,65,62,90,122,157,160,146,152,165,157,128,104,97,92,97,105,106,103,93,93,93,95,94,83,77,89,89,68,49,60,92,113,104,84,79,69,90,145,145,174,148,134,109,79,137,79,78,160,176,196,235,199,190,159,145,151,206,242,218,160,87,89,106,125,108,41,12,68,161,162,147,208,246,187,188,172,96,57,10,16,24,29,56,69,60,42,31,43,41,34,38,33,30,35,40,33,33,39,35,29,30,37,38,34,29,29,27,27,36,34,43,48,37,35,27,24,31,30,39,37,45,56,45,49,59,83,97,103,108,97,98,111,122,107,58,29,24,26,56,89,110,112,110,105,90,92,91,92,98,95,101,103,94,87,84,97,97,108,105,95,105,95,107,89,191,200,91,84,68,89,76,68,74,73,94,98,106,104,96,110,97,67,35,21,22,14,19,37,66,84,66,40,62,71,57,60,66,77,74,95,103,101,71,34,18,17,18,32,41,46,51,50,44,45,46,47,48,47,47,49,43,50,49,49,49,44,50,51,48,42,41,44,42,43,42,42,42,35,24,23,17,16,20,22,18,23,24,26,25,31,33,35,38,31,30,36,39,46,47,40,30,27,50,57,48,40,42,42,43,43,35,50,50,41,36,25,29,30,33,43,48,34,21,19,22,16,18,23,22,29,33,28,27,25,32,29,28,35,33,26,30,41,41,46,49,36,24,31,39,34,38,39,34,38,53,66,53,31,17,27,29,31,40,28,33,38,36,37,41,39,34,28,24,29,25,25,32,37,21,20,22,21,21,19,23,24,25,29,30,35,33,32,38,32,36,47,43,37,29,26,28,26,29,38,39,39,34,24,27,29,31,28,24,33,28,33,35,28,39,30,28,23,19,27,22,24,21,23,26,33,45,46,47,48,59,59,55,57,55,57,55,55,53,51,59,53,50,47,49,46,39,34,29,35,39,39,44,43,49,51,45,48,55,55,41,34,76,70,63,71,44,39,30,55,63,46,37,43,105,128,115,98,93,106,100,99,106,104,98,99,111,97,98,110,106,95,84,93,102,87,77,101,110,126,137,126,115,106,104,103,111,103,107,93,86,94,81,103,118,118,124,92,20,4,13,12,16,14,16,15,15,16,16,16,105,121,130,153,189,186,170,172,137,117,132,143,136,129,131,126,128,135,145,150,148,158,139,134
,120,161,239,248,169,96,128,22,38,38,22,38,16,32,35,43,60,68,83,78,89,100,79,81,75,77,84,79,84,93,89,84,98,114,123,112,114,98,98,109,100,95,93,97,101,105,95,121,138,123,109,91,85,95,98,84,87,97,101,116,121,114,113,102,92,101,110,93,76,78,91,98,97,87,73,85,85,77,101,110,120,128,114,105,88,100,118,131,121,81,60,55,44,35,43,53,70,91,88,73,81,80,62,63,65,70,72,57,63,65,60,65,71,74,73,77,77,99,108,103,103,61,43,62,63,73,74,85,104,100,107,104,111,117,102,89,88,115,101,89,107,108,101,111,132,120,109,113,94,113,152,148,125,128,134,119,107,89,68,92,107,103,98,99,83,66,81,75,78,63,61,65,54,66,76,71,87,93,81,105,114,125,134,152,152,153,151,141,160,154,161,163,182,195,178,142,96,86,67,63,74,85,77,76,103,122,135,134,136,145,146,131,119,103,108,104,92,93,91,105,86,95,92,83,93,90,116,119,78,61,72,93,116,96,88,87,84,61,39,90,63,56,27,58,68,25,94,34,100,196,148,196,240,205,224,210,195,207,202,241,198,139,119,116,132,150,141,99,100,100,137,163,166,228,229,127,146,139,49,28,3,29,49,61,69,48,47,34,36,41,38,37,29,34,39,32,39,43,35,37,33,32,30,30,39,35,33,33,33,33,39,35,36,36,39,38,31,37,44,51,42,39,45,53,50,55,96,115,110,97,95,105,107,98,65,35,22,24,46,71,94,111,119,101,102,101,87,93,83,87,103,101,106,100,88,96,100,98,104,100,102,103,108,106,99,88,160,209,136,72,51,78,84,89,88,96,103,103,115,107,119,97,46,26,21,23,18,15,30,53,63,64,63,46,69,67,50,62,63,80,103,86,92,98,55,27,19,14,22,34,41,42,47,48,43,47,45,45,45,43,47,39,44,44,43,47,42,45,48,51,50,51,49,43,42,46,41,40,43,37,27,20,24,16,22,22,21,27,36,33,29,30,33,37,37,34,36,43,46,47,47,42,34,30,46,55,42,41,44,39,41,45,41,50,53,44,36,27,27,24,35,51,47,23,24,22,21,19,15,16,21,31,28,27,24,22,33,29,31,36,29,32,40,45,44,46,47,44,32,35,40,36,38,35,35,42,44,61,75,47,24,27,32,33,36,34,32,39,44,38,36,38,33,28,33,29,26,26,36,39,20,20,19,17,21,21,24,22,23,25,30,34,28,36,33,35,39,42,40,31,25,26,19,23,30,27,30,29,30,23,29,33,29,27,29,26,25,29,28,31,29,28,24,20,22,22,22,24,22,24,23,34,43,43,46,44,50,49,54,53,48,53,53,50,52,53,51,50,55,48,43,42,33,36,
29,41,43,33,44,43,44,45,35,43,44,46,34,69,112,89,79,55,23,41,63,101,114,98,93,82,121,133,119,122,112,113,103,106,101,85,90,104,124,103,110,114,95,86,75,93,101,85,74,91,105,115,137,131,126,120,110,105,109,109,111,95,91,103,91,102,103,92,93,77,23,5,15,13,15,15,15,16,15,16,16,16,152,156,160,188,235,224,189,186,142,115,108,114,111,107,112,104,114,110,109,118,128,145,141,142,135,179,246,247,150,62,87,21,50,41,24,34,22,33,31,51,62,72,104,94,89,85,73,74,66,62,66,74,74,73,64,55,78,95,102,113,108,102,98,108,103,92,97,93,106,103,99,121,128,125,103,96,89,104,104,88,95,98,103,114,112,112,107,101,99,99,97,71,65,70,87,124,124,113,88,71,74,57,71,84,98,110,108,108,91,107,129,130,102,87,78,69,64,58,76,75,76,100,92,92,91,79,78,69,83,97,108,83,65,69,59,80,73,53,57,69,78,103,104,77,71,59,77,74,65,66,58,77,86,87,86,88,122,118,78,55,80,113,133,122,98,99,91,123,156,146,118,89,83,104,156,156,137,133,127,106,101,98,73,82,85,88,91,81,71,61,71,62,67,61,52,57,60,90,79,66,64,68,83,127,150,155,169,170,179,159,148,166,165,158,160,139,139,146,134,100,73,72,74,77,64,74,81,62,78,105,112,112,109,122,122,102,95,89,94,91,83,85,84,100,99,101,91,86,86,85,105,117,83,77,91,100,103,75,84,98,92,66,60,92,74,69,26,99,135,45,129,76,136,203,114,162,195,206,228,218,227,179,146,169,155,148,153,147,156,157,148,145,142,129,149,187,177,184,181,98,98,77,29,15,17,67,70,59,48,37,50,50,39,38,40,37,32,39,36,37,38,36,31,37,32,24,34,29,34,38,39,41,37,38,36,33,29,31,37,39,48,67,69,53,42,49,59,69,62,54,77,105,87,73,91,103,81,42,24,19,30,56,86,98,99,107,99,97,101,92,83,90,83,88,101,103,104,89,100,105,98,103,95,102,108,103,101,99,96,77,118,195,173,77,45,84,89,94,91,95,107,108,115,113,72,39,21,20,20,16,25,46,71,81,75,62,51,49,84,74,59,68,81,98,81,83,89,82,42,14,15,19,23,35,36,38,44,45,46,43,38,44,45,40,42,39,36,37,40,44,44,49,47,48,53,49,48,44,45,44,44,46,46,34,25,30,33,33,25,34,33,27,42,33,33,35,36,44,43,39,37,46,47,45,43,37,33,35,53,53,40,47,46,44,41,39,45,44,44,44,31,31,22,30,42,40,41,24,22,19,23,21,21,26,24,32,27,28,29,27,38
,26,32,38,27,36,42,43,45,49,50,45,39,37,39,40,39,38,38,39,29,46,79,67,40,32,27,31,36,33,37,37,42,46,40,36,37,37,38,31,28,27,25,31,21,17,17,19,19,22,23,24,31,33,27,31,28,33,36,40,45,36,31,28,21,22,22,26,37,37,38,37,29,24,35,31,33,35,28,33,31,33,41,39,33,28,26,24,18,20,21,16,20,23,21,31,34,34,41,39,44,44,42,40,41,45,33,45,45,42,45,42,44,36,39,46,41,37,34,39,36,35,42,32,32,27,28,39,57,53,88,138,134,109,105,72,38,87,90,112,122,119,125,121,142,130,124,117,101,110,112,103,76,72,99,110,112,98,105,98,77,62,49,76,91,88,76,91,89,83,110,120,127,112,94,101,110,113,125,99,84,105,97,111,104,97,91,68,24,6,14,14,14,13,15,16,16,16,16,16,188,178,139,143,198,167,113,120,103,88,86,85,88,86,89,84,87,74,78,84,91,123,130,147,136,187,246,246,139,47,79,14,47,39,29,40,20,34,34,51,63,78,117,104,104,97,76,100,94,78,84,87,76,55,49,49,63,74,86,91,106,101,97,103,107,95,95,106,107,104,92,107,109,108,112,107,110,118,117,110,115,113,105,100,93,92,98,83,80,81,73,71,77,79,99,116,117,110,78,57,39,50,63,49,59,81,92,98,101,108,122,112,95,89,95,95,77,83,93,92,93,98,97,98,88,76,89,95,99,105,101,88,89,83,87,98,82,67,69,68,68,92,96,69,82,97,89,83,63,55,50,59,71,72,89,98,109,121,81,53,85,125,123,87,97,95,94,124,139,142,113,89,89,127,165,159,146,132,112,94,94,100,98,78,67,86,81,74,63,53,60,59,71,57,48,44,67,101,87,64,73,82,90,141,155,155,141,141,150,147,163,161,145,95,68,58,55,71,63,45,43,57,69,107,65,64,81,57,92,109,126,120,96,101,100,88,90,84,84,86,75,83,80,87,77,85,91,84,87,74,78,94,79,73,78,79,87,81,82,72,80,92,107,125,81,65,65,154,217,90,119,160,203,169,108,193,220,188,191,184,150,107,73,81,70,121,169,146,163,170,162,172,180,176,187,222,151,106,122,53,41,22,21,48,50,69,57,47,33,41,51,47,43,41,40,40,38,35,41,35,39,36,36,37,30,32,28,29,37,43,46,42,36,35,36,26,32,33,32,59,82,91,64,45,53,68,103,114,85,71,56,59,83,90,81,55,37,22,26,46,69,97,113,110,103,103,104,99,95,83,79,91,86,83,96,100,101,90,95,107,100,97,97,101,108,103,94,99,84,80,97,158,201,111,61,76,85,96,84,103,107,110,103,49,25,21,20,19,15,35,61,87,105
,116,105,103,71,59,104,82,87,105,93,87,78,79,85,66,33,19,13,18,34,36,35,35,39,41,34,39,33,36,42,39,36,42,48,46,52,53,51,53,48,48,47,49,45,47,44,46,44,41,46,31,29,34,34,34,32,31,33,34,37,37,36,39,41,44,44,40,42,43,49,47,50,40,29,37,56,55,37,42,42,42,39,40,46,41,37,32,33,27,27,34,39,44,34,26,27,19,23,23,24,32,33,29,28,29,24,28,39,25,34,41,32,41,42,45,44,47,47,48,37,35,38,33,39,38,34,35,32,33,61,76,64,40,29,33,34,38,37,40,45,37,38,36,36,36,39,36,31,32,33,24,18,23,21,21,25,24,20,30,33,28,31,31,30,42,39,39,34,27,35,29,22,19,19,27,44,50,49,39,21,30,35,35,39,32,38,38,36,48,45,44,41,39,28,19,19,20,20,19,23,19,33,32,30,36,34,40,43,49,45,38,39,41,38,30,37,36,37,36,35,35,35,40,38,37,33,36,37,35,40,30,23,26,25,53,72,90,137,149,122,127,145,113,119,118,84,98,105,125,143,137,128,113,117,109,93,105,107,83,64,94,125,116,103,80,89,98,83,81,77,93,95,83,67,75,66,59,87,96,113,96,75,85,103,119,125,101,81,87,90,121,137,133,121,80,22,4,13,14,15,13,15,15,16,16,16,16,133,132,87,65,91,73,74,84,74,88,78,77,83,79,82,64,72,72,71,75,86,98,110,129,119,182,245,246,138,60,84,21,46,30,25,36,18,34,34,47,67,84,117,109,105,92,73,107,109,103,93,92,87,75,56,52,59,66,73,79,83,86,89,104,103,90,98,102,111,103,94,93,95,103,105,119,117,122,115,112,118,108,97,92,78,87,84,78,78,64,72,90,103,94,107,99,85,100,72,64,67,71,79,63,64,77,81,93,97,102,104,91,83,88,100,103,100,93,109,115,101,101,90,76,63,63,85,95,99,82,78,73,87,91,81,99,89,78,71,67,65,91,96,89,94,96,97,85,77,59,56,65,66,98,95,88,87,92,97,87,112,120,120,121,107,99,102,117,118,127,131,109,118,141,157,157,153,133,123,99,95,101,96,106,68,87,83,69,71,66,75,61,66,71,49,53,60,63,68,75,101,105,83,91,108,98,70,66,92,110,125,120,92,54,37,20,28,32,27,33,26,67,77,149,144,71,74,107,130,139,134,136,111,89,94,87,92,97,100,97,84,78,90,75,61,79,76,85,76,77,70,79,88,62,55,52,62,77,66,44,51,91,125,110,71,70,88,159,190,118,148,213,226,129,131,240,232,199,133,97,129,76,10,11,8,90,151,124,145,150,159,169,168,164,168,184,103,52,57,19,18,35,61,68,47,47,47,46,42,44,46,43,46,44,4
3,38,36,39,36,33,34,39,39,31,30,35,34,38,45,36,39,40,38,30,26,34,30,31,36,76,105,89,69,52,41,72,118,132,113,115,87,88,110,91,49,28,26,36,58,87,108,107,110,105,98,105,105,110,101,86,93,96,88,97,97,89,101,97,90,95,98,103,97,88,97,99,93,91,83,81,74,114,200,160,77,74,94,103,97,114,116,75,59,21,19,20,17,28,48,73,99,110,108,110,122,127,118,83,79,97,112,100,84,83,77,92,92,54,26,16,15,16,32,35,31,35,32,36,36,36,42,39,32,41,49,44,53,49,52,54,49,53,46,50,45,46,50,43,47,43,45,42,41,32,27,32,27,24,24,26,22,33,44,35,34,37,35,42,46,46,47,45,45,48,45,37,29,40,57,50,39,32,29,35,33,39,37,27,27,24,27,29,36,45,43,38,30,22,23,23,27,26,24,30,27,27,24,27,38,42,39,43,50,41,41,43,38,44,42,39,41,40,35,41,36,31,32,30,25,30,30,29,34,66,84,51,31,30,31,34,40,38,37,39,31,38,39,30,32,34,24,28,33,25,18,27,27,24,27,28,33,29,30,32,35,33,39,43,39,33,23,29,33,24,29,23,18,31,46,49,44,31,21,21,34,34,33,33,31,31,36,49,48,44,41,36,27,18,26,25,24,26,25,27,33,41,35,32,36,32,38,50,52,48,44,46,41,36,34,39,36,30,35,32,34,34,32,37,31,33,39,32,34,27,23,23,26,49,74,101,125,118,115,124,134,132,122,129,108,105,109,128,139,124,107,106,121,107,86,100,96,76,80,106,123,115,104,86,106,123,116,124,103,103,76,54,61,68,55,45,57,69,76,74,70,81,99,111,122,105,90,89,91,133,159,161,147,91,17,3,12,12,15,14,15,14,15,16,16,16,58,59,41,36,43,54,72,89,87,86,89,89,77,71,70,66,78,70,81,87,88,100,98,107,102,179,244,245,135,66,97,13,46,27,19,35,18,38,36,45,65,85,114,99,90,75,65,88,105,96,80,86,98,96,90,69,60,59,53,61,67,76,83,94,100,87,95,106,109,101,93,96,97,102,101,92,90,92,79,83,101,105,107,107,88,81,83,74,81,77,75,96,112,99,107,81,66,87,83,95,97,95,102,89,92,97,102,98,97,87,81,77,68,74,79,78,91,115,121,124,114,109,105,96,81,66,77,85,79,69,83,87,84,78,64,86,104,93,89,63,65,87,97,84,76,96,109,110,82,72,72,81,107,122,111,77,70,92,108,121,125,128,140,134,105,92,104,107,96,108,136,142,146,152,152,150,152,155,144,109,83,77,111,99,71,89,75,61,66,69,76,74,68,72,68,63,60,63,72,69,83,95,67,63,53,49,46,37,47,72,108,104,89,71,45,37,38,49,47,49
,71,89,90,173,134,57,81,102,141,119,107,117,96,101,103,112,113,115,116,100,93,92,84,78,71,84,75,74,77,62,60,65,89,74,60,46,41,59,56,39,40,69,91,83,69,84,82,80,127,171,237,243,156,92,179,252,229,146,35,69,149,83,25,26,24,95,125,64,53,55,66,84,74,75,68,71,50,23,40,25,45,69,62,56,39,41,44,50,48,46,49,50,49,49,45,37,42,37,36,35,37,38,33,35,36,38,41,42,44,37,34,38,39,36,32,36,33,33,45,83,100,89,69,57,42,41,87,120,137,140,134,132,82,38,24,28,46,73,101,111,113,108,102,97,101,101,100,101,99,97,99,106,103,105,103,91,104,100,98,99,97,104,93,89,94,93,89,96,90,83,80,87,180,190,103,73,91,108,111,107,53,23,23,12,23,21,38,62,89,110,108,109,107,115,113,123,99,89,106,98,94,86,81,82,79,83,85,49,22,15,15,23,33,39,43,46,42,44,43,44,46,36,33,39,45,43,47,49,49,49,47,45,43,51,46,42,45,44,42,39,39,40,36,26,31,34,26,18,19,17,19,38,39,37,36,29,29,41,43,44,42,40,40,43,42,31,28,43,57,51,42,32,23,40,45,41,30,23,27,24,24,31,39,36,43,41,24,22,24,20,23,20,19,23,20,27,18,27,38,44,45,33,44,39,42,39,39,44,46,41,34,42,35,43,35,27,31,25,27,33,31,36,33,41,80,81,46,27,27,29,32,34,38,35,34,36,36,30,29,34,30,33,28,18,21,24,26,22,27,23,25,24,27,38,37,34,37,34,26,29,29,38,38,24,24,28,31,40,51,45,39,27,20,23,21,21,26,22,24,26,38,48,45,42,39,34,23,19,22,28,27,27,25,31,33,39,45,39,38,31,28,39,44,51,54,50,51,47,46,46,38,34,39,33,33,33,32,33,31,32,31,36,34,25,19,18,36,51,65,83,87,91,94,105,104,92,111,105,109,111,97,117,118,100,91,92,102,87,70,86,106,101,91,100,111,108,103,93,105,121,120,127,100,85,54,45,77,83,56,27,40,45,61,77,83,93,103,121,122,117,122,105,104,146,171,167,147,94,17,3,12,12,15,14,15,14,15,17,16,16,56,54,47,43,47,48,74,85,75,89,93,88,71,69,78,72,81,85,94,93,105,110,111,116,104,185,242,242,124,58,72,10,46,31,24,33,23,32,36,44,66,83,101,95,80,70,62,77,92,95,65,71,76,78,92,79,71,57,57,60,54,66,80,87,89,87,96,104,107,95,99,113,105,97,84,69,63,74,74,79,94,114,125,122,94,87,96,87,99,97,82,108,118,102,101,74,76,105,96,98,99,98,106,99,84,100,103,102,89,71,77,63,63,68,46,55,89,109,128,133,124,134,133,141,12
9,105,99,93,83,81,103,100,98,86,63,92,111,102,81,63,60,87,98,91,91,97,113,112,78,63,71,94,111,117,94,70,89,109,121,106,109,111,119,121,82,87,97,79,74,83,112,136,141,144,139,129,137,144,141,99,71,75,115,129,71,82,60,49,53,58,64,73,76,75,64,74,101,106,94,64,69,95,66,56,73,80,65,49,59,99,132,120,116,98,99,87,77,86,75,104,116,117,90,113,113,68,92,118,111,100,92,91,91,93,97,99,110,96,88,87,89,86,87,61,77,84,69,83,74,71,60,61,78,63,73,60,42,56,59,57,48,42,61,56,44,69,54,86,186,239,241,150,65,92,172,243,189,80,8,29,114,107,102,103,99,124,72,11,10,12,13,17,17,20,14,31,43,34,50,62,69,54,52,51,44,34,45,51,47,53,48,51,50,45,41,38,43,34,33,39,33,33,39,39,41,43,46,45,37,36,33,32,36,36,43,41,35,42,46,71,89,83,68,55,49,42,47,81,141,147,108,65,26,20,34,60,89,101,105,110,102,97,100,94,98,98,93,98,92,88,83,97,100,104,103,96,103,104,102,103,96,100,97,91,99,111,101,98,108,105,96,89,152,210,141,84,92,107,78,38,21,19,16,16,26,51,72,96,109,104,108,111,111,107,105,98,97,100,94,81,84,78,78,80,74,88,75,39,19,15,14,32,40,46,48,43,49,46,45,45,40,38,38,45,45,39,46,43,44,46,39,41,44,40,39,38,32,36,40,33,36,36,32,29,32,32,15,19,19,17,21,34,36,27,31,26,37,31,28,35,33,32,25,27,31,28,30,45,51,43,40,34,32,36,29,32,33,27,26,25,32,28,29,41,42,39,26,21,25,19,21,18,18,21,21,21,21,32,34,39,32,29,39,38,45,40,43,49,43,38,34,37,34,30,32,31,31,37,36,32,35,39,35,33,63,97,73,34,31,29,29,34,32,34,31,31,33,31,36,34,31,29,22,20,21,18,23,20,22,21,24,33,29,34,35,34,31,24,25,26,40,44,32,25,27,34,43,49,48,42,38,27,16,23,25,19,23,21,24,29,37,42,43,48,42,29,18,22,24,24,25,21,21,29,29,37,39,39,42,39,30,27,29,42,50,53,53,54,51,49,40,39,44,40,40,37,35,36,35,38,37,33,31,24,21,17,37,50,52,60,62,77,77,77,82,83,79,74,87,104,103,112,99,71,66,79,89,69,73,114,131,120,103,102,118,101,91,72,61,67,72,97,81,65,49,39,65,52,41,32,40,88,94,94,94,99,108,110,113,115,126,115,111,136,148,137,115,78,21,5,13,13,16,15,14,15,16,16,16,16,79,90,77,64,54,69,79,81,85,87,100,87,65,61,67,82,97,104,110,106,125,130,120,130,125,196,243,243,117,48,68,8,5
1,29,26,36,22,31,35,50,66,86,92,96,84,67,65,71,94,87,77,56,39,51,65,73,78,74,66,72,64,70,82,95,110,84,89,104,106,105,105,111,99,90,84,75,84,97,87,99,113,128,140,116,108,112,109,107,122,139,106,112,127,94,95,79,102,117,90,87,100,102,128,131,83,72,89,103,94,79,87,84,87,96,77,67,100,128,136,137,132,142,130,147,150,134,133,128,127,114,120,106,109,137,103,106,123,98,94,87,96,113,126,114,109,95,130,164,86,73,69,84,109,90,83,70,67,93,96,86,93,91,112,107,84,103,99,88,98,96,103,114,126,134,131,127,123,122,112,94,85,78,126,153,111,84,77,61,52,53,55,51,59,80,59,74,114,116,110,85,79,100,87,119,128,106,104,78,78,97,120,124,129,140,135,132,109,107,119,120,122,113,84,97,104,114,139,124,119,116,108,101,91,93,76,66,57,78,92,79,76,77,92,79,74,79,74,99,89,81,83,58,61,58,82,86,74,81,73,73,59,53,69,54,53,60,41,142,251,241,144,45,30,45,80,163,125,29,2,25,76,98,120,117,106,114,88,53,79,73,41,41,36,75,77,61,57,57,92,76,53,44,42,48,44,44,44,49,51,53,47,50,52,39,43,39,36,36,32,37,36,40,42,42,48,45,45,42,31,36,37,36,37,42,47,47,42,47,61,77,87,78,64,57,54,53,39,51,100,88,45,28,23,45,72,95,101,98,100,97,97,93,97,98,91,96,97,104,98,74,69,78,94,99,96,97,99,100,101,101,103,98,93,105,127,137,89,77,105,152,128,57,91,177,184,108,61,42,24,22,13,19,20,31,64,86,107,102,112,117,109,108,102,100,99,108,94,86,83,74,79,74,73,81,86,81,66,36,16,20,16,33,41,39,40,37,36,39,38,37,42,36,37,42,36,39,42,46,41,39,42,41,37,30,28,34,35,34,35,33,36,35,29,33,46,38,18,16,18,18,27,32,29,33,32,33,31,27,32,27,29,29,29,31,33,39,45,43,45,44,42,43,43,39,33,39,37,33,30,36,39,27,32,37,43,36,19,21,23,19,20,18,21,21,26,29,27,35,36,33,29,30,39,37,42,39,34,39,33,27,29,30,24,23,31,29,31,32,36,33,27,29,27,29,39,68,85,61,32,29,29,32,38,32,31,29,31,35,34,32,33,28,18,22,22,17,19,22,20,23,26,25,32,29,30,29,23,24,21,35,47,40,38,34,34,45,47,53,47,40,34,22,19,21,25,19,24,23,22,36,39,42,46,45,37,26,24,22,16,21,24,20,23,23,35,39,38,45,42,39,34,30,23,29,35,41,46,50,49,51,47,42,44,46,50,44,44,39,43,37,35,36,29,26,24,21,33,44,42,51,62,66,59,59,67,
74,72,49,70,108,115,112,92,60,68,84,86,96,101,110,130,135,121,112,110,96,87,67,47,42,50,72,64,52,49,41,37,42,39,31,75,121,118,93,98,113,106,98,84,87,101,100,97,102,97,87,83,66,23,7,13,13,16,15,15,15,16,16,16,16,87,103,105,88,86,96,97,92,90,96,101,83,49,53,69,74,92,97,101,108,134,135,127,137,133,200,243,243,103,53,68,10,50,24,29,27,27,34,37,50,72,86,87,75,66,62,66,72,91,100,91,81,34,30,43,59,72,67,70,77,76,71,81,103,139,109,81,98,116,110,110,113,97,101,97,94,110,119,118,122,121,134,136,113,114,111,108,100,129,157,104,107,112,93,78,62,102,106,88,85,91,100,161,184,75,48,78,125,122,81,99,91,112,127,104,104,127,134,139,117,109,139,87,120,154,142,147,143,146,128,123,96,128,187,130,111,125,108,109,111,119,129,113,104,100,80,151,194,76,61,80,97,115,92,89,77,61,63,87,117,177,159,113,112,120,154,152,132,140,151,139,141,151,160,160,158,154,134,138,111,90,89,110,141,122,113,92,68,57,54,55,57,46,63,77,92,110,96,83,78,82,94,95,147,130,128,140,124,108,103,117,130,131,131,130,126,127,122,123,131,127,111,84,99,125,112,123,122,105,115,118,103,98,107,104,87,84,100,91,80,69,81,101,86,71,71,79,92,88,80,70,56,66,68,60,98,120,89,63,59,51,77,113,78,75,76,18,97,187,200,178,32,2,43,39,82,54,7,13,51,89,87,99,95,89,102,93,102,109,103,99,84,67,153,125,112,105,88,89,50,38,37,39,45,43,42,42,45,50,56,53,52,46,39,39,32,39,39,34,37,39,37,46,50,45,39,39,40,40,39,31,40,43,42,49,48,49,63,86,92,92,80,57,49,55,53,51,49,53,49,35,40,54,88,93,108,101,97,96,91,98,98,93,91,95,96,102,101,101,94,75,85,93,97,96,98,101,100,99,101,98,104,121,139,121,113,53,37,95,138,81,21,17,113,177,93,41,18,12,18,16,17,39,71,90,104,109,111,113,112,105,103,99,104,106,91,74,66,70,73,75,69,76,79,84,89,59,27,18,15,18,34,37,38,42,39,38,39,39,37,33,34,41,37,37,36,39,39,42,46,35,28,32,30,27,34,34,34,38,33,34,30,23,46,57,46,34,29,22,25,31,31,35,39,37,37,38,33,37,32,25,33,35,38,44,41,40,42,40,42,45,46,46,46,44,45,42,39,46,44,44,35,32,39,45,31,19,23,19,24,25,22,23,30,30,23,34,31,37,35,25,31,29,29,28,34,26,25,27,21,24,24,22,29,29,27,24,24,2
6,27,32,24,22,20,24,36,65,83,52,24,33,32,32,33,29,36,30,33,34,29,32,31,17,19,19,18,21,16,21,17,27,32,23,32,26,20,19,19,25,46,48,44,43,39,42,46,43,51,41,39,32,23,24,22,24,23,27,25,24,31,36,42,47,45,34,25,18,23,23,21,20,22,21,27,38,37,39,41,44,40,39,36,27,21,27,31,37,46,45,47,50,49,47,44,45,47,46,46,44,43,46,39,32,19,20,29,33,41,43,54,59,53,58,48,60,83,93,77,62,105,110,106,94,58,70,77,94,112,92,90,99,124,111,95,105,111,124,118,114,107,102,103,89,92,87,84,95,86,92,81,109,152,121,107,118,131,117,101,97,96,107,104,103,90,84,90,100,83,22,7,14,12,15,15,16,15,15,16,16,16,89,117,112,103,110,122,114,105,104,104,111,90,57,44,48,49,66,74,87,104,139,134,121,139,132,197,240,240,103,57,54,9,49,24,32,27,22,35,35,54,80,104,75,55,44,42,57,64,87,84,124,123,46,30,39,53,65,67,66,55,54,61,60,96,179,120,77,108,131,129,105,115,105,112,110,101,120,127,116,124,102,108,129,82,87,84,69,74,110,153,101,83,85,71,69,61,88,91,79,69,81,81,158,192,69,49,86,145,125,67,66,63,85,100,106,110,118,116,120,89,94,127,53,78,142,143,134,116,114,104,112,84,117,200,121,83,104,98,101,85,92,95,89,66,73,63,135,182,54,61,84,107,118,83,124,188,148,100,110,159,237,171,130,135,149,197,181,146,146,165,184,179,178,201,205,205,190,160,152,126,100,93,115,122,97,95,83,53,62,73,70,61,45,57,67,99,113,84,75,71,83,96,90,116,127,133,146,132,128,122,124,127,120,119,116,119,125,119,123,128,123,109,101,111,122,111,105,111,104,103,105,103,123,137,140,128,106,110,92,80,72,81,103,87,81,77,86,89,83,90,83,65,84,76,53,89,116,106,89,52,32,59,107,105,111,76,22,34,49,142,156,24,12,80,105,98,37,16,69,103,113,108,98,98,95,93,102,95,108,103,108,95,90,134,102,98,89,39,39,45,42,38,47,45,43,46,51,52,52,60,55,46,46,39,44,37,36,39,35,38,44,46,47,45,41,41,42,36,37,41,40,40,44,44,46,58,62,74,93,96,99,83,65,54,49,53,55,51,49,45,60,88,112,117,102,106,101,90,93,101,100,93,91,98,94,98,102,99,103,98,106,103,98,98,102,105,101,101,100,101,108,125,134,128,96,52,34,44,42,57,50,35,17,40,125,116,65,24,6,17,23,49,73,92,108,106,109,97,110,108,90,96,98,98,83,72,63
,63,66,71,74,69,80,83,88,80,49,23,16,16,21,37,36,39,41,36,35,35,38,37,37,37,36,39,32,35,39,39,43,38,34,30,33,35,28,30,35,31,34,31,32,27,29,46,50,39,29,33,19,24,33,30,38,37,39,47,44,36,34,31,36,34,35,45,44,45,34,32,33,38,44,45,49,45,47,44,44,46,47,42,42,42,39,38,36,27,21,26,29,28,32,30,28,29,28,21,24,30,23,29,28,23,24,22,28,30,22,27,28,26,26,22,24,29,29,22,19,24,26,20,23,24,21,26,24,23,42,83,77,42,27,24,31,31,34,36,33,33,31,37,30,24,21,23,22,21,23,19,21,23,30,28,22,27,30,21,18,23,33,46,48,44,46,45,44,45,40,42,39,35,34,23,20,28,30,30,33,31,31,40,40,40,47,42,29,25,21,24,20,18,20,21,27,31,38,38,34,41,43,45,45,42,31,25,26,23,26,31,40,43,46,50,44,48,43,46,49,44,47,45,43,45,30,21,23,25,37,42,46,53,52,52,51,52,67,92,112,92,72,88,85,85,81,55,50,67,90,87,89,61,52,84,90,98,127,147,159,169,170,167,159,156,152,154,158,153,153,156,157,144,157,162,141,140,154,149,135,139,137,136,137,136,132,120,131,127,143,107,13,4,11,12,15,13,15,13,15,16,16,15,89,105,102,98,118,133,134,131,126,127,135,129,87,65,51,27,42,67,89,105,141,136,114,128,132,200,241,241,81,45,45,4,44,22,31,29,29,40,37,55,94,137,124,96,59,39,68,67,85,94,150,155,83,77,58,56,64,74,85,47,32,47,59,99,179,120,78,107,134,115,94,107,95,103,99,96,106,95,84,82,65,103,125,60,61,63,58,60,107,160,111,96,89,84,93,97,115,103,96,89,98,97,168,192,73,61,90,147,125,63,74,74,96,99,104,106,106,108,102,89,120,173,71,52,99,108,106,95,95,88,100,86,122,169,97,83,92,92,102,87,89,102,90,78,101,95,182,210,84,85,90,107,116,104,188,252,221,116,113,155,165,144,142,141,158,153,123,120,125,146,160,159,153,170,172,152,128,90,93,92,79,88,81,76,59,51,47,56,61,66,55,56,50,55,41,69,81,76,81,72,71,70,53,78,83,101,122,114,113,116,120,118,95,99,113,117,120,121,116,104,98,100,110,113,125,138,130,136,136,122,132,132,134,143,141,119,109,122,116,104,84,77,80,79,79,79,89,78,81,80,73,62,75,86,61,122,143,122,116,82,51,83,127,122,113,78,72,78,43,66,133,59,24,115,147,139,101,96,117,126,125,108,114,106,108,114,107,128,124,132,126,106,103,100,69,49,45,34,38,47,53,48,46,53,
52,57,57,50,55,57,53,59,50,42,41,34,41,38,45,47,43,42,34,39,40,35,36,37,38,44,43,39,42,50,67,85,77,62,72,85,102,130,119,71,48,51,53,51,55,49,66,118,128,116,101,93,88,87,92,87,94,90,92,97,98,98,92,100,102,105,104,101,94,101,99,104,104,96,96,108,111,94,69,53,60,30,36,43,34,37,40,46,17,69,152,117,92,39,12,30,45,78,98,110,109,91,93,92,83,86,83,84,81,75,66,64,65,61,66,67,79,76,81,86,87,75,42,21,20,16,34,38,36,36,39,41,32,36,38,41,38,37,37,33,41,41,43,36,34,37,29,30,30,27,33,38,31,27,32,28,27,31,36,50,35,22,24,22,19,23,32,37,37,44,38,39,45,39,43,36,44,47,39,44,44,41,29,27,27,30,43,44,44,47,42,41,46,42,45,47,42,44,40,39,39,29,26,24,20,21,22,21,26,24,33,28,27,33,29,32,31,31,36,33,34,34,34,31,26,30,33,25,24,31,23,22,20,21,27,19,25,22,22,24,20,22,27,58,84,67,40,27,26,36,29,30,36,30,30,34,33,20,21,23,22,27,27,24,28,26,29,33,26,31,29,20,19,25,42,44,35,42,45,40,44,42,41,42,42,34,30,29,29,32,32,33,36,34,36,38,38,40,39,33,26,29,23,24,22,21,27,24,35,33,31,39,38,40,39,39,42,37,38,40,24,22,28,21,31,36,36,46,44,49,48,42,50,48,46,48,48,37,29,21,19,27,42,48,54,59,57,49,47,48,57,63,69,71,51,67,64,66,69,55,48,40,46,57,71,50,25,38,52,103,170,174,178,188,188,190,197,208,199,192,192,190,194,190,191,176,177,182,171,180,170,162,160,168,172,162,169,160,157,158,163,158,163,110,12,2,10,11,15,12,15,15,15,16,16,15,80,91,90,92,109,140,148,150,139,130,153,163,155,139,115,49,42,80,93,108,147,131,105,114,120,199,241,241,73,54,46,8,48,17,36,29,23,33,37,54,97,169,199,209,150,90,101,102,141,183,233,231,167,134,106,83,74,91,122,104,68,72,81,128,207,135,89,100,115,110,76,96,94,102,107,100,103,91,80,84,83,137,171,93,69,74,90,84,116,175,149,147,152,151,159,162,162,157,165,158,174,161,214,216,111,97,102,152,145,134,154,151,162,158,159,156,159,152,152,155,188,229,136,70,56,75,85,83,92,73,87,75,112,160,135,147,153,156,158,154,161,158,157,148,180,181,232,249,154,134,106,125,127,100,173,240,172,107,114,120,123,93,105,95,101,102,60,61,69,79,71,53,46,48,44,31,17,10,12,12,14,14,13,14,13,13,14,15,14,14,14,14,15,15,14,
13,14,14,17,20,21,31,20,14,17,40,77,88,103,101,102,98,94,110,121,125,121,118,105,105,107,100,126,117,125,159,155,153,142,141,145,134,127,131,135,120,128,140,131,118,93,81,72,66,67,82,89,77,65,48,40,42,75,76,59,139,151,111,104,91,110,133,151,117,97,98,112,122,81,87,155,103,44,111,134,118,112,117,126,109,116,116,108,115,114,110,120,121,128,134,125,114,116,110,62,65,92,79,68,56,62,57,48,53,54,51,55,57,53,57,53,61,51,41,44,45,53,50,49,41,37,45,39,38,32,31,37,41,46,41,46,47,69,105,109,111,97,84,63,87,139,157,145,67,41,45,48,59,56,50,54,87,121,117,94,84,79,78,84,84,88,87,84,98,91,93,92,96,104,98,93,92,99,100,107,103,96,100,97,92,59,32,31,32,37,34,31,36,32,39,41,39,33,152,216,132,104,51,26,54,74,106,111,98,89,79,73,60,61,71,80,72,66,67,66,66,63,72,73,75,90,81,87,92,77,52,19,21,19,21,31,38,42,34,41,40,46,46,38,37,40,38,33,39,41,41,36,35,32,28,31,30,31,27,31,35,29,32,33,28,29,29,46,47,24,22,22,16,22,26,31,37,39,40,41,41,41,44,45,43,45,40,39,43,46,43,34,37,32,32,34,38,47,44,43,41,45,47,39,43,42,39,42,41,38,24,22,20,21,26,17,21,28,28,35,31,37,38,37,39,33,36,41,39,39,36,38,41,33,32,31,28,35,32,32,28,21,24,21,22,27,23,23,20,24,23,22,36,65,84,61,30,26,33,34,28,33,35,35,35,30,21,20,26,24,19,20,24,26,26,29,28,33,27,19,21,21,30,39,41,40,42,39,39,42,43,39,41,43,32,33,29,24,26,26,30,32,28,36,41,38,38,38,32,23,24,23,22,23,29,24,29,31,31,34,36,38,40,41,41,41,37,34,33,30,24,21,26,23,27,29,34,40,44,51,50,50,51,48,50,56,42,23,22,21,31,50,60,62,67,55,51,51,44,51,48,45,46,44,47,50,58,57,57,53,53,63,65,96,79,39,22,23,120,196,194,205,205,206,217,229,238,220,206,213,212,214,217,212,199,200,202,199,198,190,187,186,199,199,194,203,198,188,180,188,184,193,115,7,2,8,11,14,13,15,13,15,15,15,15,82,90,83,97,102,130,139,138,130,130,160,194,241,250,223,112,79,105,100,102,141,130,105,110,116,193,244,243,87,110,81,12,51,22,32,29,25,36,34,47,79,155,218,250,217,125,140,136,213,247,252,252,185,167,147,139,137,142,178,157,138,152,155,188,246,184,120,110,133,132,117,134,135,147,149,145,152,150,141,155,160,206,
8,68,109,143,151,153,154,165,171,176,171,160,172,183,178,173,179,182,181,176,181,174,176,171,162,171,168,174,176,178,178,177,181,184,184,192,113,7,1,8,11,13,11,14,13,15,15,15,14,120,118,88,83,108,75,62,97,132,102,71,123,119,71,37,57,67,68,106,143,181,177,162,165,170,207,196,139,149,134,121,131,122,122,107,84,60,62,74,78,92,87,89,98,101,97,117,109,63,59,53,33,27,44,53,47,61,56,49,75,59,80,89,75,77,55,81,87,61,124,208,117,80,123,137,163,164,148,92,60,41,32,63,143,253,253,253,253,242,240,248,249,180,107,118,134,113,84,86,80,72,57,39,57,66,78,86,81,84,77,78,81,69,74,97,92,82,84,88,89,87,103,110,119,92,62,47,54,87,87,90,91,63,65,56,53,42,66,59,13,22,19,19,17,20,24,72,83,24,18,25,23,20,36,50,24,16,24,32,19,53,86,34,13,22,18,28,28,59,165,76,8,16,26,13,141,243,34,24,29,21,31,131,253,83,33,76,42,53,32,30,46,87,70,81,112,120,110,81,69,35,44,60,46,43,32,45,78,105,112,115,103,99,92,97,112,115,122,101,94,76,100,116,97,82,86,79,47,52,59,52,50,59,51,53,63,44,48,57,63,45,59,80,58,69,74,146,165,149,118,87,116,109,124,130,133,165,171,194,195,190,198,175,132,93,121,108,46,39,16,22,32,14,15,12,13,15,15,14,14,14,15,23,40,65,78,116,135,162,210,220,247,253,253,252,252,253,253,252,252,252,252,248,248,238,234,232,208,169,150,118,111,99,79,43,6,12,15,13,15,47,79,141,158,142,132,110,102,101,101,104,120,100,103,106,105,120,120,122,119,112,113,109,111,117,118,86,28,10,10,70,105,108,98,77,90,98,101,103,103,103,120,108,66,36,32,37,36,39,33,32,34,36,37,32,37,40,30,34,50,61,59,59,66,74,70,69,67,60,40,34,29,36,37,39,69,79,87,74,64,45,35,125,165,77,18,12,28,50,58,66,67,67,79,79,73,73,74,69,65,62,79,93,67,65,63,39,30,29,29,27,23,29,34,32,32,35,29,31,33,32,32,27,25,29,29,26,21,31,27,28,28,25,36,32,28,28,28,24,27,23,21,19,18,17,19,25,22,22,24,21,26,22,24,25,23,35,34,35,44,36,42,36,39,39,74,131,121,69,37,47,45,42,45,38,35,31,29,28,23,25,27,22,21,21,24,18,20,19,17,23,24,25,23,21,25,29,24,21,25,22,23,27,23,23,24,22,21,23,21,17,16,19,19,17,20,21,25,19,20,24,23,24,17,21,25,18,22,23,24,27,22,27,24,23,28,23,25
,26,25,56,88,77,39,17,15,19,22,27,25,26,33,33,34,28,31,35,25,29,32,27,32,40,42,42,31,31,32,31,35,30,28,30,29,31,31,24,28,28,23,26,23,19,17,22,21,18,19,19,21,23,27,31,35,35,32,34,34,35,32,36,38,35,37,38,39,35,40,34,35,38,35,41,41,43,39,46,33,61,106,74,92,142,200,238,217,202,204,212,212,214,212,195,182,203,234,251,251,249,239,211,157,111,85,56,37,29,29,32,25,24,21,24,43,63,91,108,131,157,171,187,178,171,184,188,187,181,187,191,186,183,178,165,166,155,155,169,164,165,157,150,155,161,164,166,163,182,113,8,2,9,11,14,12,15,13,14,15,15,15,122,113,81,65,98,80,74,85,119,84,61,127,152,132,76,71,54,56,110,168,232,240,220,222,231,244,238,207,230,211,193,214,206,224,215,164,127,123,121,109,121,129,149,163,148,116,125,132,97,99,89,65,53,65,88,95,118,113,94,109,77,101,124,102,126,132,164,159,139,181,237,198,181,201,184,172,156,141,118,139,127,106,130,158,224,243,251,222,217,208,246,236,123,106,108,122,83,55,69,79,96,74,53,51,57,78,87,78,64,55,66,84,70,74,94,86,91,83,91,89,92,148,178,212,169,90,59,65,95,122,155,134,92,81,88,110,144,208,179,117,183,217,205,179,154,125,250,246,150,152,174,180,172,209,205,156,179,174,169,160,228,250,171,166,159,136,166,173,207,253,178,141,135,139,184,253,243,57,97,132,151,119,190,253,110,115,128,104,120,94,99,103,129,103,103,133,103,75,48,42,59,48,48,33,26,64,90,97,99,95,100,118,119,111,107,92,84,90,100,92,98,121,117,101,105,134,110,53,46,63,66,61,54,46,59,57,47,43,43,49,77,109,107,130,145,149,186,165,142,93,81,128,142,167,164,171,180,181,198,198,196,192,174,144,109,79,27,3,12,11,13,13,16,17,20,23,33,56,126,132,160,199,222,241,252,252,253,253,252,252,253,253,252,252,246,246,245,245,220,193,184,168,148,109,119,21,27,18,18,22,12,16,23,27,20,8,20,39,20,14,91,141,160,153,125,105,86,90,108,113,118,129,96,89,107,106,115,112,121,114,107,113,108,114,115,118,77,9,27,66,111,103,91,98,91,107,96,98,95,100,109,115,115,99,49,30,32,44,36,28,37,33,37,43,44,50,47,28,34,57,56,63,72,76,74,64,64,67,63,41,31,31,31,40,39,62,76,73,62,33,26,17,29,129,143,60,31,51,62,74,76,70
,75,70,73,77,70,67,66,78,86,88,85,61,62,58,33,16,19,20,21,20,20,22,22,23,21,19,22,19,18,21,24,21,20,22,19,21,20,20,22,17,22,22,18,20,19,22,22,17,23,18,21,21,17,20,18,22,25,17,17,16,22,23,18,20,20,26,21,22,23,21,18,22,20,26,80,100,71,27,12,14,21,21,24,24,19,21,20,19,20,23,19,18,20,23,19,17,18,19,21,19,16,19,19,16,23,18,16,20,18,21,19,18,20,19,17,21,16,21,19,16,22,18,18,23,18,24,24,24,25,21,27,27,20,23,22,19,24,19,21,21,21,23,23,20,21,21,21,21,24,64,91,76,42,14,17,22,27,29,22,26,25,24,29,29,27,27,24,33,24,32,38,42,40,32,30,26,34,29,29,34,31,28,30,30,29,28,25,23,26,25,20,24,24,15,21,21,21,21,29,35,33,34,34,33,35,33,35,40,39,41,42,38,41,42,43,42,40,44,41,47,44,44,51,50,52,51,57,57,44,52,57,98,148,154,143,148,151,154,156,152,152,141,148,162,197,232,222,229,225,185,171,172,141,127,117,123,120,116,110,115,101,89,92,98,115,131,153,170,172,156,152,162,171,177,187,202,201,198,195,175,154,160,158,165,175,165,163,151,145,152,162,159,159,161,181,113,9,4,10,10,13,11,15,13,14,15,15,14,151,116,78,49,81,74,102,117,152,122,85,163,230,218,135,146,124,104,116,142,217,214,179,205,223,217,199,175,193,177,138,143,155,207,232,153,95,101,112,122,130,136,155,160,167,160,159,145,140,142,128,128,100,89,119,143,189,182,164,194,156,165,184,184,225,238,247,247,250,250,252,252,251,251,251,251,239,240,252,252,250,250,253,250,223,194,210,216,227,197,245,224,84,95,131,139,96,52,57,66,90,78,60,69,74,75,71,60,55,58,79,89,75,75,85,82,83,82,83,74,82,134,149,179,127,80,69,70,88,106,122,101,83,94,123,159,160,222,191,129,210,233,245,226,200,214,253,253,218,214,225,214,218,237,234,196,219,240,222,229,249,249,206,206,236,248,236,223,204,248,214,208,242,233,210,247,198,65,141,186,188,121,173,248,118,118,122,103,119,98,101,92,127,121,110,121,96,85,97,116,126,106,66,44,83,122,126,91,77,115,135,140,118,95,89,64,40,57,79,126,121,104,98,76,109,148,122,66,36,46,76,76,57,41,51,56,46,55,66,84,126,150,155,170,171,157,166,130,82,72,85,107,110,127,147,146,149,169,165,140,149,141,116,127,125,104,57,47,102,115,136,167,200,
235,250,250,252,252,253,253,252,252,252,252,250,250,243,241,230,224,196,187,171,112,94,72,58,66,25,3,14,27,42,44,17,8,13,15,21,30,19,21,19,24,51,46,42,65,44,31,73,104,146,116,93,81,74,100,116,126,150,154,84,72,83,103,112,112,116,105,112,114,115,113,116,112,85,90,116,117,113,95,104,101,97,107,96,98,99,108,111,118,104,53,30,28,44,42,34,36,34,34,34,38,43,53,51,37,45,59,69,84,73,71,69,65,75,66,64,47,29,29,35,38,36,60,61,44,27,22,15,18,14,41,153,146,74,57,63,83,80,75,64,62,64,72,70,72,92,88,90,80,69,54,41,37,25,19,13,17,19,15,27,20,21,21,18,22,16,21,18,18,24,21,19,19,16,22,22,17,21,17,20,20,17,20,16,18,21,19,21,19,19,18,21,17,16,21,17,20,20,21,20,21,19,20,19,17,23,18,23,20,15,23,19,21,63,96,80,30,11,17,16,20,18,18,21,21,21,19,20,23,19,22,20,19,19,20,19,16,23,20,19,18,21,23,17,19,19,19,15,18,21,19,16,17,20,17,23,19,16,18,18,19,20,22,18,27,24,24,25,20,24,19,22,25,18,21,19,19,21,19,18,18,21,21,21,20,21,18,18,30,68,96,69,29,15,18,22,18,17,24,21,21,20,20,21,23,17,23,18,23,45,45,40,21,21,23,19,22,22,24,24,21,22,24,26,27,25,18,21,27,21,22,18,18,21,18,20,23,24,27,27,23,24,27,33,27,28,30,27,33,31,33,24,34,33,30,33,32,35,36,34,35,39,39,39,47,40,41,46,51,30,20,109,151,148,159,156,160,156,149,152,147,142,142,155,172,177,183,183,181,193,212,213,206,211,212,206,197,205,206,196,188,181,179,170,174,183,176,165,146,141,140,146,163,179,195,196,201,199,175,157,165,165,173,176,165,167,164,153,161,172,167,169,176,191,113,8,3,10,12,12,12,14,13,14,15,15,14,192,149,135,124,128,98,132,162,236,189,118,205,251,250,200,239,238,220,179,159,209,211,200,238,244,206,195,174,208,190,137,132,115,191,235,139,107,120,134,151,138,144,141,142,191,200,178,173,170,171,163,157,145,141,160,167,218,206,186,251,221,226,252,252,253,253,252,252,252,252,252,252,253,253,252,252,252,252,252,252,252,252,253,251,221,232,250,244,253,184,250,193,32,100,117,144,118,87,89,86,116,92,69,71,83,77,67,72,74,80,84,93,90,101,101,73,73,78,86,74,59,79,63,60,63,56,51,71,76,66,74,72,85,116,119,98,74,181,130,49,74,91,110,105,137,172,225
,247,153,180,152,92,66,130,154,66,131,158,173,226,246,174,121,148,201,246,194,163,131,180,230,206,199,172,126,204,123,69,120,95,113,59,124,167,67,97,63,70,90,66,64,61,110,112,112,126,105,122,142,139,119,83,93,102,105,127,117,93,110,130,140,118,81,78,57,47,42,21,37,92,99,83,61,40,76,98,74,55,33,40,46,57,58,43,45,55,33,52,129,141,160,174,150,161,147,130,138,100,73,72,95,92,76,92,116,133,151,167,146,134,152,152,178,216,237,248,250,250,252,252,253,253,253,253,252,252,251,251,239,239,240,245,210,158,128,112,104,60,33,19,3,16,31,14,10,12,12,27,18,9,18,25,29,45,55,43,31,39,46,44,42,45,42,100,134,52,41,81,56,40,39,71,95,75,78,77,96,123,138,141,152,147,80,68,86,102,110,112,111,107,109,113,112,109,109,110,108,116,105,97,113,103,111,107,100,107,93,111,112,117,103,59,59,23,32,38,32,38,33,31,35,36,36,37,42,48,46,43,64,79,73,75,71,68,75,70,78,77,64,45,32,33,34,35,37,52,46,33,35,30,23,24,42,56,104,154,134,80,51,66,79,73,66,61,65,70,85,92,89,76,78,72,60,46,26,14,18,21,15,16,20,17,18,18,19,20,16,19,19,20,23,18,22,22,21,21,17,20,17,17,19,19,19,19,21,24,17,21,18,18,20,19,21,17,17,21,16,17,15,22,22,16,20,17,21,21,17,21,19,18,18,17,21,20,21,14,48,101,87,39,13,12,18,21,17,16,19,18,18,21,19,19,15,18,18,15,18,17,20,22,17,20,18,18,17,20,18,17,21,19,20,19,19,20,18,17,17,18,20,19,17,17,21,18,19,24,19,26,23,27,27,24,28,24,22,21,24,24,19,19,17,21,23,15,19,19,20,16,17,19,23,22,38,77,87,63,23,14,19,16,21,22,18,17,24,19,22,19,20,23,16,29,40,46,29,17,19,20,19,21,22,18,23,21,23,20,20,22,19,24,18,18,20,19,22,19,19,22,20,19,23,18,21,20,19,29,19,19,21,18,24,22,23,24,21,21,23,25,21,23,22,27,22,24,27,23,29,26,23,31,28,32,23,27,131,190,190,199,190,197,191,183,183,179,178,164,162,167,156,164,176,179,191,185,192,206,214,220,203,199,212,222,220,213,215,218,211,204,201,191,186,182,171,165,165,171,178,175,177,193,202,188,184,192,184,184,183,183,183,179,180,189,195,186,194,196,200,114,7,2,7,11,13,12,15,13,14,15,15,15,226,212,231,233,227,165,181,199,248,220,119,203,244,242,220,252,243,243,218,178,222,244,253,253
,252,252,251,251,252,252,243,237,181,223,241,190,174,183,174,186,146,134,138,145,193,174,170,175,179,160,119,146,163,184,187,153,171,143,143,217,159,175,238,225,230,206,220,201,226,250,250,251,252,252,252,252,252,252,252,252,252,244,240,178,157,192,208,230,218,171,247,184,45,78,72,103,104,107,141,122,128,103,64,65,70,64,83,96,94,84,77,92,107,121,109,77,70,74,84,79,68,83,71,77,61,61,48,63,88,75,92,77,110,131,146,127,74,155,125,91,101,72,75,90,123,132,217,250,207,219,214,113,63,132,174,115,144,158,155,218,240,203,172,148,199,248,214,215,169,205,247,227,221,208,137,169,133,90,98,93,104,69,101,93,72,96,70,101,103,87,88,86,127,106,104,131,106,117,132,92,68,51,75,71,73,99,107,131,129,122,113,89,64,73,95,91,57,24,13,44,58,55,57,38,48,48,41,42,39,46,37,61,77,76,71,33,17,39,118,152,149,141,117,125,113,117,135,123,119,126,141,154,157,174,213,232,242,250,250,242,252,252,252,252,252,252,252,252,250,250,241,237,216,194,161,137,145,115,65,38,19,105,129,30,4,8,10,14,13,25,10,19,42,17,16,39,40,56,59,47,41,54,47,71,118,91,46,37,42,61,67,89,128,153,168,86,48,83,51,40,40,31,52,59,101,113,128,147,144,145,135,108,66,80,98,114,107,109,119,107,110,107,118,114,106,106,101,98,74,89,110,105,108,99,100,111,113,117,116,84,38,21,30,42,45,45,33,22,37,34,36,39,33,37,34,45,57,69,74,73,66,62,67,67,69,75,77,81,73,41,31,33,31,38,33,45,54,45,55,56,56,63,70,77,80,107,162,136,66,56,60,70,68,65,73,85,89,80,75,63,69,72,61,38,21,16,16,19,17,24,21,19,23,17,20,22,19,21,18,15,21,20,17,18,16,24,18,17,21,15,23,21,21,15,22,26,15,22,21,19,18,20,20,17,20,15,17,20,17,21,18,19,20,19,21,21,17,16,23,18,20,19,19,19,20,19,35,88,91,46,18,15,19,18,17,19,18,13,21,19,21,24,19,22,17,23,18,21,23,19,21,16,18,17,17,20,20,20,19,19,18,16,16,17,17,18,20,17,17,19,17,17,19,20,22,26,23,28,31,27,29,30,26,23,26,25,25,26,22,19,18,20,20,15,18,19,17,20,19,18,17,21,19,41,80,88,59,20,14,17,17,20,15,22,20,17,21,20,24,20,18,26,42,45,25,17,22,18,22,19,21,21,20,20,21,21,18,21,18,18,18,18,19,19,20,20,21,19,17,23,22,19,20,19,21,21,19,23,21,23,18,
19,23,20,22,22,19,21,22,24,22,21,22,25,22,23,21,26,24,22,22,31,21,65,183,200,190,190,181,191,182,178,166,165,179,181,171,150,141,149,150,163,181,178,179,174,174,175,158,154,187,204,194,186,187,198,199,192,190,187,194,197,199,200,199,193,181,173,157,177,203,204,217,223,208,201,205,212,209,206,205,212,218,208,208,201,208,114,5,2,6,11,13,12,14,13,14,15,15,14,204,223,234,233,224,158,165,165,244,181,83,169,222,159,134,197,202,194,121,92,121,178,221,250,244,210,250,239,251,251,240,227,131,197,236,134,119,127,136,171,93,101,120,130,170,131,131,172,176,113,76,112,151,177,188,122,97,92,96,160,81,70,100,101,139,84,98,92,150,203,197,198,164,243,254,254,252,252,252,252,252,231,236,141,75,120,116,148,191,178,251,181,40,81,60,85,76,96,133,112,124,101,83,75,66,60,77,104,88,78,73,85,83,89,92,72,57,61,73,67,56,98,111,91,84,78,62,55,71,99,111,83,79,84,93,96,61,145,90,92,117,65,85,88,125,108,162,217,148,176,160,65,58,105,155,103,110,98,94,156,160,144,131,118,176,208,184,204,159,201,244,184,194,183,147,179,114,90,99,91,112,74,102,86,74,96,68,101,97,95,103,110,128,96,104,126,100,102,101,74,62,54,57,44,42,71,106,118,114,77,70,79,75,99,114,111,105,86,49,31,26,44,47,38,58,41,47,56,42,46,20,54,84,67,69,53,30,71,132,134,142,145,145,168,173,198,230,237,250,250,252,252,252,252,252,252,252,252,245,244,239,242,252,240,229,212,204,203,177,128,105,85,39,13,8,9,14,18,9,16,12,67,98,20,8,10,26,41,38,43,34,46,67,51,71,112,87,119,95,53,55,63,49,76,139,131,66,32,48,89,86,87,95,95,93,53,40,49,43,39,23,41,69,96,137,132,128,144,143,136,131,105,68,85,106,115,112,116,113,114,110,111,113,95,93,84,92,103,87,98,107,101,105,94,97,117,119,99,45,26,21,23,60,56,69,54,23,21,24,40,35,37,39,37,49,66,74,74,71,73,60,64,76,71,69,67,77,74,66,50,31,34,36,37,35,48,57,45,63,79,88,93,86,78,72,81,131,172,117,53,46,57,78,87,88,83,75,72,69,60,71,68,54,41,21,17,16,21,18,19,24,17,22,18,16,21,19,19,19,21,19,19,21,22,18,19,17,21,22,17,20,16,19,21,21,19,19,19,22,19,19,19,18,21,19,20,18,18,17,16,21,19,21,18,22,20,17,19,21,21,16,19,24,
19,19,21,25,75,104,49,19,16,15,17,19,22,17,16,17,19,19,20,24,20,17,19,19,18,18,20,18,20,21,18,21,19,17,22,18,17,19,18,17,17,24,17,19,21,17,18,19,18,19,23,23,30,28,29,30,33,33,26,31,29,27,27,27,28,21,15,19,19,18,19,23,19,17,19,21,19,21,20,20,22,48,91,90,46,17,17,17,21,20,19,21,19,22,21,18,17,21,27,48,46,26,21,17,17,25,21,16,18,17,18,21,21,19,20,18,18,21,19,20,20,19,19,19,21,19,18,19,17,22,20,24,19,16,19,17,22,21,19,23,22,18,22,19,22,20,18,23,23,24,19,21,23,21,24,19,23,31,23,40,131,190,168,157,146,140,155,145,131,127,159,205,205,188,174,164,155,141,170,219,229,227,198,165,151,128,113,133,148,151,131,128,139,136,141,143,144,152,163,170,181,178,162,150,128,131,148,181,198,216,224,205,203,212,226,224,215,216,212,215,196,185,184,193,114,6,1,9,12,12,12,14,12,15,15,15,15,97,124,137,147,119,72,91,97,199,162,94,169,189,108,42,60,57,58,46,55,78,117,156,219,193,145,193,172,218,211,167,166,70,151,221,64,39,61,88,137,65,66,87,106,134,88,111,155,152,110,81,107,113,127,150,111,103,92,118,204,124,59,49,77,111,60,75,63,123,147,112,148,134,182,253,253,252,252,252,252,249,249,244,128,50,80,63,98,132,163,240,141,28,78,67,87,59,60,96,92,109,118,112,95,79,68,76,87,79,76,77,69,61,56,73,67,61,61,56,64,58,81,93,84,64,73,76,53,48,58,86,64,53,41,67,69,59,85,39,39,55,61,71,78,97,72,69,78,51,36,34,27,31,46,57,33,43,40,38,71,50,33,41,47,71,93,98,128,107,122,146,110,134,142,105,101,66,57,56,57,70,56,63,51,62,59,42,64,46,58,100,119,126,96,96,96,64,68,87,103,103,65,48,42,45,50,59,87,104,95,81,76,76,73,81,111,118,97,81,48,13,9,15,15,25,42,61,72,72,63,57,74,87,109,134,148,184,230,249,249,252,252,252,252,252,252,253,253,253,253,253,253,250,244,240,229,201,222,165,88,54,116,188,180,154,131,147,155,131,79,61,44,23,27,12,10,12,16,18,12,16,57,95,94,58,24,45,57,34,50,35,90,132,84,73,83,115,136,137,84,37,46,53,46,71,86,58,44,41,56,59,53,53,53,76,57,36,65,62,41,33,107,146,143,159,134,128,134,129,137,141,125,111,105,109,120,113,112,115,111,114,98,89,73,75,74,73,98,97,105,101,104,111,95,105,98,59,36,17,22,16,49
,85,95,102,39,15,23,30,39,44,36,41,64,73,77,71,73,77,75,69,72,75,69,79,82,83,79,69,47,32,32,37,37,29,51,54,47,66,81,96,88,79,80,72,66,90,137,163,114,48,57,80,93,81,71,76,70,68,63,66,64,56,36,22,18,18,22,20,24,19,19,21,18,21,21,21,18,18,19,20,18,17,18,18,19,21,19,18,23,19,18,22,19,21,19,20,21,19,20,21,22,19,17,22,24,16,20,20,20,23,19,18,20,20,19,20,17,21,19,17,21,19,23,17,21,19,57,114,66,21,15,14,18,15,20,24,21,20,18,18,21,21,19,20,22,15,17,21,17,21,21,21,24,18,19,17,20,23,17,19,17,18,17,19,19,22,24,18,18,18,19,22,23,23,29,23,28,32,25,28,29,29,28,26,24,29,30,23,20,18,16,21,18,19,20,18,21,21,18,15,17,21,20,28,55,91,82,41,17,15,17,18,20,22,16,19,22,18,18,20,25,48,41,19,22,17,19,24,18,18,22,20,20,20,24,21,21,20,16,20,19,21,21,19,18,22,22,16,21,20,23,21,19,19,21,21,19,21,17,20,19,21,23,20,20,21,23,24,20,19,19,19,18,26,23,21,19,22,18,26,21,61,150,164,155,145,125,125,132,124,147,169,217,252,241,244,244,235,220,185,222,252,252,250,241,203,186,119,72,72,83,119,118,93,101,101,108,125,117,106,92,94,111,100,82,73,77,65,84,114,127,152,153,141,141,152,159,164,167,172,167,168,151,151,149,162,113,10,4,10,11,15,13,15,14,15,16,16,15,54,55,60,72,69,66,111,130,229,190,125,212,239,160,91,102,96,104,118,143,151,180,183,228,181,135,196,163,217,200,163,166,75,179,222,55,59,59,82,134,65,70,76,93,98,66,88,118,141,127,103,99,92,110,162,163,141,118,153,250,160,85,69,105,144,100,125,83,54,39,67,146,186,210,242,251,250,250,251,251,251,251,246,125,78,124,100,96,71,92,160,42,36,73,65,86,49,64,63,72,101,105,115,96,97,79,66,89,69,77,92,101,66,48,44,56,62,55,64,74,74,85,85,81,78,66,69,57,67,78,69,83,91,64,108,159,130,168,154,77,69,110,102,71,84,69,79,90,97,138,170,143,181,176,208,168,144,113,141,184,116,107,71,42,72,53,56,83,68,80,77,61,85,88,72,52,16,36,41,35,46,42,46,46,41,46,42,43,31,61,121,135,115,92,77,60,45,76,107,106,89,54,51,49,49,52,30,60,110,109,93,64,36,23,43,63,49,51,63,49,15,15,16,36,87,113,155,178,195,231,249,249,252,252,253,253,253,253,252,252,252,252,252,252,246,240,237,230,214,203,20
0,208,195,156,160,181,150,172,108,8,4,52,157,133,80,65,67,72,55,40,41,37,41,77,89,101,130,147,115,44,65,129,147,152,84,24,71,72,42,60,41,86,132,76,39,42,71,83,80,64,45,39,49,53,47,56,77,71,39,45,52,67,97,97,123,89,39,87,88,41,90,170,174,156,151,134,131,137,112,91,124,145,113,112,110,112,113,115,115,96,78,74,78,74,72,69,81,77,84,98,96,109,122,103,59,31,19,19,19,27,40,90,94,70,77,35,20,19,29,54,52,49,58,71,79,80,73,70,77,76,77,76,74,80,83,77,84,64,42,25,19,27,29,35,33,49,57,48,67,79,86,82,76,78,74,65,51,83,152,166,108,62,57,74,75,70,69,66,66,66,66,63,52,35,19,19,18,19,22,22,24,22,23,19,21,21,16,19,17,21,17,17,20,16,21,21,19,22,20,21,21,21,18,24,20,18,19,17,21,18,21,19,18,21,18,23,18,16,19,18,21,19,21,21,24,21,21,21,16,21,17,20,22,22,22,18,17,54,112,72,29,17,12,18,17,19,16,23,18,21,19,20,19,22,23,23,23,16,23,19,19,23,17,19,21,20,19,17,22,18,17,20,20,15,19,16,26,23,17,24,23,24,25,24,33,33,29,27,26,28,29,33,33,27,27,33,28,27,26,17,17,19,19,22,16,19,22,18,21,16,21,19,20,22,19,31,62,91,76,33,16,18,18,19,19,21,23,19,17,23,19,22,53,38,21,22,18,24,19,19,22,17,16,18,19,21,17,23,21,20,19,21,25,17,18,21,21,21,18,21,23,21,21,22,24,16,25,25,17,22,18,22,20,21,19,19,24,19,19,23,23,21,21,18,22,23,20,20,26,22,27,22,48,110,122,111,116,91,105,109,117,159,182,204,216,212,225,226,227,207,166,217,245,245,249,208,206,171,88,51,47,73,140,140,127,129,118,139,149,139,114,88,49,37,31,23,37,55,39,34,41,47,52,54,33,32,33,36,44,58,75,66,121,165,184,173,135,97,17,3,12,11,15,12,14,14,16,16,16,16,106,113,118,141,141,136,166,166,250,206,131,217,235,191,125,154,148,152,170,185,198,221,221,250,177,134,200,186,237,214,189,193,95,197,221,57,66,60,85,132,71,53,39,36,51,34,48,78,115,144,120,95,98,150,225,217,167,117,155,247,168,81,71,123,150,141,152,105,39,1,26,131,250,237,229,253,245,245,197,181,172,168,204,123,134,180,130,105,69,87,136,70,58,54,43,78,66,69,62,57,74,82,103,98,90,86,83,103,87,67,112,118,77,51,31,35,50,53,65,81,87,86,89,89,75,72,61,50,92,120,86,113,139,84,144,174,202,252,253,192,107,145,137,
86,72,72,125,234,252,252,252,252,253,253,252,252,252,250,249,252,250,250,225,193,151,45,24,39,29,40,42,29,39,42,43,43,39,37,41,41,38,41,39,37,40,34,32,43,25,59,116,130,109,75,81,68,54,76,71,57,42,22,22,26,28,20,12,12,19,44,50,51,53,55,69,86,95,106,153,177,190,221,246,249,253,253,252,252,253,253,253,253,252,252,247,247,243,241,223,206,196,182,162,171,164,134,145,174,179,154,147,149,145,110,118,122,80,112,65,5,6,18,67,66,57,60,59,83,77,94,151,147,145,159,182,212,217,231,214,95,39,93,127,129,74,32,63,69,46,55,37,41,61,51,49,44,46,36,71,102,46,57,90,97,114,104,108,83,50,46,42,68,100,98,103,82,42,63,62,31,91,169,160,145,134,129,136,141,89,38,99,125,99,106,105,105,108,100,87,65,59,62,69,71,78,81,83,91,89,92,100,97,68,39,27,17,17,23,33,78,118,99,53,33,36,27,23,17,32,55,60,63,60,72,80,80,75,74,77,79,69,80,88,85,89,63,49,29,19,19,16,22,34,41,40,59,66,77,87,85,87,78,78,76,66,67,60,53,98,157,163,99,39,48,61,69,71,65,64,76,92,72,51,26,18,22,14,22,24,21,23,23,22,20,23,21,16,19,20,21,19,19,22,20,19,19,20,19,18,20,21,18,21,20,22,21,21,25,19,21,22,21,18,19,22,18,22,21,19,20,19,24,19,21,21,21,24,18,23,17,17,19,17,23,21,22,18,29,98,87,34,19,14,16,21,18,22,19,19,21,20,20,18,21,20,17,20,23,23,21,22,17,23,22,19,24,17,21,20,18,20,18,19,18,19,21,22,22,18,18,23,20,19,26,38,36,25,26,26,32,29,29,34,28,29,29,27,24,23,22,16,18,18,22,24,19,17,22,25,21,22,17,19,21,21,22,32,77,99,68,29,15,20,16,17,21,21,19,19,23,18,23,49,36,19,20,16,24,20,19,23,20,19,23,21,20,24,18,21,22,19,20,23,21,24,16,19,21,21,25,21,19,20,20,21,18,21,20,21,21,19,19,22,22,19,19,23,20,19,23,21,23,26,20,24,23,20,21,17,23,27,21,41,56,64,66,63,57,57,56,56,86,79,88,89,88,98,88,103,91,62,107,131,124,123,104,128,96,52,53,54,85,139,156,129,128,113,121,137,131,135,105,44,29,35,33,41,46,41,39,40,43,34,19,13,17,21,21,24,18,17,13,122,227,237,225,147,71,16,3,14,12,16,14,13,16,17,16,16,16,122,124,132,154,156,149,163,168,250,188,107,195,232,162,92,111,88,108,150,169,200,231,233,252,195,163,218,200,245,244,227,199,96,206,218,66,75,51,85,130,6
3,48,29,24,25,25,39,35,89,163,132,94,91,117,197,175,103,69,95,173,101,51,47,91,130,133,158,121,77,29,51,118,205,213,210,244,214,212,170,79,21,44,104,97,139,149,104,91,95,162,234,124,79,66,65,79,66,72,56,61,71,81,120,97,76,77,86,132,137,85,106,145,93,93,72,51,49,55,77,69,84,79,80,88,61,68,62,54,108,90,105,118,122,75,97,141,177,246,252,213,70,114,128,96,95,94,129,198,247,247,250,250,249,249,252,252,252,244,235,231,203,241,249,222,128,31,6,10,21,35,39,39,38,31,41,43,37,39,36,43,42,38,37,41,53,59,55,41,34,23,30,59,61,75,107,87,31,8,11,12,13,14,15,19,20,24,24,35,123,105,148,196,219,236,252,252,253,253,252,252,253,253,252,252,252,252,246,246,236,222,184,155,144,133,112,95,86,102,121,125,145,138,116,120,120,94,85,118,124,104,71,52,60,53,71,62,37,76,56,22,23,11,67,137,179,191,191,194,167,178,214,200,173,188,181,169,156,168,151,56,14,21,49,63,43,42,44,51,50,46,41,43,69,59,57,61,43,20,91,151,76,64,90,126,149,112,96,75,54,52,44,43,53,51,54,56,42,53,50,32,94,158,153,139,132,131,133,141,83,50,110,125,97,108,112,109,89,69,63,54,63,73,78,80,80,82,96,97,100,93,63,37,24,23,19,20,33,46,54,69,77,67,43,28,22,16,25,32,39,51,57,61,64,72,79,84,75,76,77,75,83,88,88,75,48,32,22,22,19,17,27,45,58,66,68,84,90,93,99,79,75,74,69,71,65,69,73,64,61,92,152,148,96,53,37,53,69,73,72,102,95,68,56,27,19,16,22,24,19,24,20,24,21,22,20,25,24,18,24,17,19,21,21,21,18,18,19,21,20,21,24,15,20,23,19,21,23,17,21,21,20,24,21,21,19,25,19,20,23,17,24,20,17,20,21,21,23,20,22,21,19,19,17,21,19,18,25,25,77,101,51,20,12,16,22,18,21,22,19,23,20,22,25,20,18,21,21,18,22,23,25,22,19,23,20,18,21,19,21,21,19,17,17,22,17,23,24,19,19,23,20,22,21,19,30,35,31,31,30,26,31,31,30,29,29,32,28,29,21,17,20,18,20,19,20,24,23,21,19,20,19,20,24,23,18,19,20,35,79,88,61,29,15,19,17,19,18,25,18,21,23,29,47,31,22,23,17,21,23,19,21,21,21,19,22,23,21,22,21,23,21,21,21,24,21,21,23,22,23,21,21,24,24,18,25,22,19,23,17,24,20,21,19,21,19,21,24,21,21,21,21,25,24,21,25,23,23,23,23,22,24,25,31,50,53,59,57,53,56,48,36,35,26,16,18,15,18,15,50,45,16,46
,43,34,39,38,63,47,46,66,65,77,105,96,77,69,46,57,69,71,91,84,50,45,44,43,46,50,48,44,39,37,20,23,66,115,141,145,129,93,47,29,171,232,241,240,158,69,12,5,14,13,14,14,15,15,16,17,16,15,131,117,80,76,111,125,158,169,238,164,105,208,241,172,100,84,29,74,130,164,197,192,205,248,206,174,202,213,237,236,194,130,86,202,205,86,85,52,90,128,81,87,84,64,63,54,56,62,92,130,117,88,49,49,72,43,27,17,19,33,18,21,17,42,57,90,118,116,110,122,146,148,173,151,155,169,120,146,166,114,48,27,48,73,120,87,57,76,87,160,197,96,68,83,85,104,110,95,59,60,81,96,106,87,72,60,76,146,148,110,142,167,133,141,115,92,74,65,84,75,105,102,87,102,76,71,69,88,134,107,80,106,83,52,95,106,127,183,236,131,19,71,101,139,163,154,144,99,61,131,169,159,153,177,253,253,252,240,235,228,174,141,142,162,90,15,9,12,30,38,39,44,38,39,40,35,39,37,42,49,40,37,38,33,83,120,93,47,12,13,11,24,67,89,114,91,41,36,37,42,74,115,151,194,219,239,251,251,253,253,252,252,252,252,252,252,250,250,246,246,227,197,180,190,174,147,139,107,98,86,27,4,10,30,56,41,28,53,76,90,93,78,60,42,51,42,23,17,28,44,48,66,87,114,153,131,125,170,118,60,43,36,136,215,240,237,203,175,111,98,121,88,83,85,67,60,58,68,87,91,109,110,115,129,136,123,106,110,44,50,49,54,110,74,72,66,62,42,60,124,73,37,35,66,81,59,60,55,63,64,63,73,64,66,76,71,44,50,67,46,99,149,149,145,133,141,139,137,61,60,128,111,101,108,98,80,69,55,67,61,74,84,83,89,83,88,97,92,67,39,23,21,18,28,35,38,51,63,74,56,79,106,72,28,15,26,56,62,43,39,38,51,49,63,69,70,79,79,84,90,91,78,59,35,25,18,16,17,23,39,65,81,92,96,102,105,84,91,84,72,75,66,76,77,75,74,68,67,59,52,93,163,173,113,43,34,67,78,83,79,71,56,46,29,16,23,22,24,24,22,26,19,23,27,21,20,23,18,18,23,21,22,18,19,20,19,24,20,19,21,24,19,20,24,21,21,21,20,22,23,19,22,23,17,21,19,18,20,18,21,24,21,21,19,19,24,20,19,23,20,25,19,21,19,19,19,24,24,60,106,66,24,12,15,19,22,21,19,18,19,24,19,22,23,19,21,21,22,17,22,21,19,26,19,21,21,19,20,22,21,19,20,19,19,19,19,22,21,18,17,19,19,22,22,32,29,28,32,27,28,32,27,30,30,30,30,24,24,22,20,22,19,
18,23,21,19,26,21,20,21,21,17,20,20,22,20,20,23,41,83,91,60,26,15,18,23,21,23,19,22,26,32,49,30,20,26,16,21,23,21,26,21,22,23,17,26,21,23,26,19,23,18,23,25,21,21,19,24,21,21,23,23,22,16,22,24,21,23,25,23,23,24,21,21,23,23,24,23,24,21,21,23,26,25,24,22,24,26,22,23,21,21,36,53,63,62,61,51,38,31,17,15,15,16,16,16,16,19,65,51,32,57,31,17,25,28,57,43,58,81,72,77,56,50,49,54,37,39,56,48,56,69,55,51,73,84,91,103,98,71,50,39,36,81,139,206,227,226,218,191,132,109,228,238,241,234,178,99,9,4,13,11,14,13,16,14,16,17,15,15,214,175,89,50,101,142,191,214,249,191,156,241,251,233,188,155,76,90,141,171,165,127,159,221,162,120,160,185,229,188,111,97,92,194,174,101,119,81,116,126,104,138,133,123,123,120,130,125,134,141,100,89,71,57,68,54,54,56,62,72,54,66,70,64,69,77,86,97,129,159,181,163,155,132,111,120,83,84,107,108,93,69,53,53,69,57,53,61,61,98,114,46,63,90,98,129,153,122,59,62,71,77,94,68,67,73,86,132,134,103,131,178,122,137,136,123,113,85,86,87,129,124,92,111,63,51,55,110,161,98,72,29,39,65,55,56,45,54,27,8,13,18,31,77,106,96,77,31,4,23,34,28,25,26,77,105,110,112,159,177,131,91,90,148,95,10,6,10,14,21,36,39,32,34,32,32,38,36,50,59,41,36,29,12,83,139,134,109,81,87,92,122,158,174,192,205,220,245,250,250,252,252,253,253,252,252,253,253,249,249,243,237,212,177,151,150,141,105,89,91,42,7,8,17,19,29,68,60,74,73,20,5,31,77,72,38,21,27,42,47,49,50,54,51,46,57,28,17,12,63,106,119,140,160,182,141,130,167,128,71,46,26,93,139,119,97,75,67,57,59,69,61,66,108,105,81,88,117,141,164,179,184,188,183,120,156,125,125,69,73,65,42,73,55,65,56,55,59,39,50,61,63,36,55,75,61,59,62,79,98,97,98,99,95,101,92,57,48,69,45,67,112,125,129,126,131,143,144,84,87,113,79,82,78,72,72,57,60,64,71,81,78,89,85,87,87,64,43,31,26,29,37,33,38,56,80,108,124,130,136,134,145,124,57,50,75,81,60,38,33,40,48,47,59,62,70,74,78,85,80,59,37,24,16,20,16,22,41,54,77,90,99,108,101,96,86,84,81,76,73,69,75,77,77,70,64,69,66,63,63,55,95,170,179,119,58,48,74,68,39,29,32,31,22,22,23,22,24,23,30,19,23,30,23,23,17,25,19,17,25,18,21,23,23,24
,22,25,28,24,22,27,26,25,41,59,73,152,204,201,204,200,204,211,198,114,56,46,34,35,33,34,37,41,46,49,49,54,56,59,61,60,61,45,61,94,82,101,78,50,50,48,70,63,56,48,53,60,59,66,62,68,66,59,61,61,63,61,58,59,60,50,69,90,80,65,45,51,50,48,50,47,48,24,12,16,15,17,16,16,16,16,16,17,17,230,224,197,186,241,251,248,248,198,207,247,248,253,252,248,246,237,239,227,220,190,117,128,141,163,204,224,162,150,139,70,90,74,47,42,52,64,56,68,84,104,109,91,70,57,64,73,77,60,52,62,68,71,69,74,67,65,79,71,95,105,69,45,15,54,77,59,61,76,90,81,33,42,80,55,66,74,76,62,56,67,89,81,50,39,43,47,53,41,57,78,77,79,84,88,76,72,59,43,74,83,101,136,160,139,114,92,57,55,50,36,59,78,86,94,82,82,84,104,98,82,72,74,82,65,47,29,42,15,36,41,26,26,33,62,99,92,73,100,130,190,248,234,233,201,131,142,185,238,132,49,32,54,97,84,80,71,64,54,45,38,27,22,29,25,29,27,25,31,58,90,59,22,68,143,154,84,69,131,124,155,131,122,118,108,111,109,96,119,69,61,47,42,49,46,43,33,32,37,38,91,93,53,34,44,158,182,182,176,171,167,158,148,98,140,145,143,124,111,96,82,67,61,63,54,47,44,57,93,116,130,142,102,71,68,191,206,206,179,149,138,91,129,102,91,77,49,46,41,29,23,24,24,26,23,22,22,21,23,16,24,18,33,46,25,22,21,18,21,24,20,23,27,31,34,33,30,30,33,36,33,39,36,33,71,102,129,137,85,61,71,64,95,135,129,123,123,101,74,54,64,74,81,105,101,117,116,122,123,115,132,124,121,103,63,41,33,27,35,35,35,40,43,39,29,39,38,41,44,37,58,61,60,63,63,60,60,60,60,59,57,78,74,57,55,54,63,61,61,59,47,38,27,24,22,22,30,38,61,77,79,89,92,97,91,85,92,81,79,78,78,83,84,73,62,62,54,54,55,49,50,57,55,56,61,61,57,59,62,51,37,29,19,21,19,20,20,21,31,30,41,43,44,46,44,49,45,42,44,44,43,46,41,41,47,45,37,46,43,41,45,46,41,38,49,46,85,115,64,33,17,41,70,30,22,22,24,39,35,26,23,23,26,26,25,20,23,23,19,29,27,26,27,24,24,26,26,26,27,24,22,27,25,25,30,34,36,31,27,30,26,20,29,28,23,25,25,29,24,22,23,26,24,24,21,33,54,37,23,23,32,31,32,90,75,33,17,33,40,20,27,31,33,25,25,36,28,25,28,21,20,24,18,19,27,27,22,22,20,24,23,25,23,23,27,29,26,23,23,19,22,24,28,29,31,29,29,28,
31,31,27,31,29,29,31,26,29,31,23,19,20,19,24,23,23,26,20,20,28,21,19,22,21,24,20,22,26,21,22,24,25,20,23,26,21,21,24,18,41,74,36,18,28,42,101,106,76,37,15,21,18,22,24,20,21,23,21,24,22,21,27,23,21,22,26,21,26,26,21,19,22,22,22,24,22,19,20,21,22,27,26,25,24,21,26,24,21,23,18,25,27,21,25,33,45,47,53,60,53,78,123,139,149,143,113,117,117,115,157,181,152,140,116,77,51,33,35,37,35,40,45,40,51,57,60,57,49,57,57,59,50,41,70,61,49,53,41,53,57,61,54,45,47,49,55,53,58,57,57,55,53,59,61,65,55,57,63,65,73,67,65,56,48,49,46,44,48,48,24,12,16,15,16,16,16,17,16,16,16,17,224,242,204,181,240,245,241,224,150,186,243,249,253,249,244,243,237,241,227,222,186,110,125,129,145,185,216,151,128,116,59,101,94,66,50,68,104,95,115,143,157,160,152,131,118,120,134,136,114,111,121,131,139,132,134,126,120,137,120,135,122,128,118,51,93,120,118,95,84,110,112,83,84,113,94,113,141,127,113,114,103,101,72,57,51,45,83,69,53,47,61,69,65,68,97,129,132,129,104,139,157,180,246,244,203,136,71,61,59,46,39,65,68,62,69,63,66,63,77,53,43,49,36,49,41,31,20,24,22,39,89,109,147,163,194,235,238,239,229,224,218,209,160,97,63,33,29,84,113,63,41,17,32,60,71,100,96,109,111,110,113,124,143,153,158,159,159,150,152,151,148,100,24,28,99,81,37,41,57,49,28,24,26,33,47,50,49,64,74,81,83,79,105,122,133,138,134,139,138,141,142,125,84,31,46,104,98,78,63,43,29,21,23,33,40,56,61,61,58,61,80,103,142,158,153,143,145,151,155,159,166,148,104,60,70,117,119,102,59,32,34,30,26,25,23,21,19,21,23,17,20,25,15,23,23,21,23,20,26,28,26,27,48,48,41,43,40,39,50,51,54,51,52,86,103,83,53,48,79,109,107,83,46,36,118,162,141,118,85,59,56,55,71,81,53,47,47,57,71,78,85,81,91,101,108,103,105,112,117,117,111,84,50,36,29,28,34,48,85,94,65,42,27,31,25,33,46,42,41,42,59,63,65,68,65,64,67,68,74,73,69,85,83,59,63,66,57,57,48,39,27,19,26,24,24,39,60,76,78,84,92,91,95,84,86,87,79,77,69,83,84,78,79,56,48,55,58,51,55,54,57,60,59,62,57,60,62,53,40,28,22,19,18,19,23,20,28,36,38,47,47,38,46,49,47,46,46,40,39,46,44,49,48,48,48,46,45,46,42,41,45,41,46,46,44,40,38,86,106,6
6,30,22,34,22,24,24,21,27,21,22,24,29,24,24,29,24,24,25,24,25,21,26,27,26,29,24,23,26,22,28,23,26,34,25,28,30,29,24,22,30,23,26,25,24,25,22,25,24,23,24,25,22,22,29,26,25,27,24,22,27,27,24,27,72,82,37,24,21,25,24,28,29,24,23,21,27,24,24,24,23,20,19,26,26,26,23,21,25,24,24,26,26,24,25,27,24,30,27,22,18,18,29,28,31,27,26,30,26,30,30,27,31,27,27,33,33,31,28,23,19,20,22,19,19,19,19,22,21,26,20,23,21,17,25,23,23,26,20,22,22,26,20,22,29,24,17,29,21,55,81,28,27,25,24,47,90,111,70,36,17,15,23,20,22,18,21,27,22,22,25,24,21,22,24,24,20,23,24,21,24,24,22,23,22,21,21,21,26,23,19,24,20,22,29,27,20,24,24,24,24,22,27,28,61,90,101,128,117,100,146,169,143,145,105,74,73,50,57,94,142,153,165,160,122,107,53,32,35,35,39,41,49,47,53,58,53,42,52,60,53,47,37,56,48,51,61,47,45,53,67,59,43,43,46,44,45,46,49,55,53,51,49,54,55,54,57,48,49,43,48,56,50,54,45,47,46,45,48,23,12,16,16,16,16,16,17,16,17,16,17,184,227,168,125,169,192,181,118,65,141,213,241,252,249,249,248,243,243,227,222,181,107,123,125,125,133,113,71,132,134,90,127,129,114,77,99,128,134,160,173,182,176,179,176,168,174,177,182,174,177,177,177,180,174,182,173,174,175,164,158,133,161,134,71,124,166,163,146,139,152,155,145,148,159,154,158,155,164,160,152,144,127,99,99,78,74,100,87,64,51,57,63,60,39,52,71,92,102,82,109,100,121,165,130,115,91,52,63,66,33,42,39,47,44,59,57,45,43,52,41,28,45,56,65,65,89,105,137,174,205,233,247,240,238,230,222,198,177,164,111,78,57,19,9,19,27,22,28,59,76,112,100,128,168,174,201,208,210,213,214,214,219,228,223,144,184,163,146,131,114,108,61,9,24,34,36,20,42,75,107,103,120,144,155,164,175,175,116,120,208,210,206,137,205,187,175,152,136,123,110,96,77,37,27,42,29,46,71,88,99,109,126,142,158,157,164,159,148,94,159,152,143,154,152,139,115,90,88,56,36,40,37,27,43,44,24,25,27,29,22,19,15,22,21,21,19,20,23,19,23,22,21,29,25,19,26,23,33,43,55,68,76,116,130,141,171,149,153,189,191,192,163,149,189,208,175,98,89,153,189,181,149,81,91,209,219,127,100,76,50,47,27,44,49,59,81,84,94,104,103,96,98,100,103,98,110,105,104,77,66,
41,27,32,34,34,48,84,131,154,155,164,115,34,22,55,63,42,39,44,46,67,71,73,68,63,67,74,79,77,90,86,93,72,73,76,61,53,36,27,20,21,24,23,39,53,61,69,81,84,83,81,83,81,82,80,78,75,69,76,79,72,61,55,50,51,54,54,59,60,62,59,60,57,63,64,53,42,24,21,19,19,17,19,19,27,36,39,42,49,49,45,46,44,43,46,45,43,46,46,45,49,49,46,48,46,48,45,45,44,41,44,44,48,46,42,41,31,37,98,109,54,21,17,24,29,24,21,24,25,27,25,22,24,27,29,27,24,25,26,26,24,25,21,29,32,27,28,28,27,25,23,29,28,23,29,31,22,28,29,27,24,27,22,24,27,22,32,22,21,23,23,24,21,29,27,27,26,25,31,22,22,28,21,56,91,51,23,15,23,25,20,29,24,19,27,27,23,27,26,18,20,21,23,21,20,24,21,27,22,24,29,18,21,24,25,29,30,27,24,24,19,24,29,27,28,30,27,29,32,26,32,31,27,29,29,32,32,35,27,18,21,19,21,21,22,23,19,21,21,21,20,21,21,22,22,19,27,22,20,25,22,22,22,22,23,26,23,22,71,71,25,29,22,16,23,45,97,102,67,37,22,22,21,21,19,24,25,25,25,22,24,24,19,22,28,24,21,24,21,23,24,26,24,17,26,25,20,18,24,22,20,25,23,28,24,20,26,23,24,24,24,24,39,99,123,113,113,93,101,132,126,116,118,111,96,91,76,65,80,108,122,131,123,117,108,55,32,30,36,43,51,50,48,47,41,41,41,57,59,53,56,59,63,48,55,66,51,57,60,69,56,48,50,48,50,43,52,48,51,50,41,43,61,64,39,38,41,39,44,36,43,33,36,45,36,47,50,43,23,12,17,16,16,16,16,17,17,17,17,16,200,228,166,135,167,185,210,142,87,160,214,244,252,251,252,248,245,248,227,226,181,109,129,124,121,93,48,51,137,169,144,166,171,134,92,128,163,174,179,184,187,181,185,181,186,187,195,191,199,203,198,199,199,193,194,198,191,182,174,153,113,143,110,56,98,134,171,164,146,159,160,170,181,179,175,163,140,143,157,157,148,139,137,145,104,82,105,84,80,70,54,59,52,37,24,14,24,33,34,49,28,18,31,36,66,90,84,110,78,16,12,14,42,74,83,85,63,72,98,96,117,125,150,174,185,223,233,240,236,233,222,196,178,143,121,98,60,40,51,43,21,21,21,20,88,127,65,50,62,86,135,160,194,188,174,170,141,130,122,124,125,127,134,121,97,88,98,104,129,127,129,116,51,26,84,114,75,98,162,183,191,201,206,194,188,174,147,134,128,131,120,122,130,108,98,89,84,90,93,104,106,83,70,49,51
,103,150,181,185,178,187,184,184,176,156,144,112,112,110,91,69,42,37,32,34,30,21,20,18,18,24,19,23,41,33,25,19,22,24,22,28,19,27,29,27,32,27,37,41,49,60,61,61,68,65,67,55,76,136,161,183,189,193,190,200,208,188,203,217,209,219,165,162,201,197,159,116,135,165,189,186,167,78,76,181,135,73,50,37,54,49,47,89,113,109,116,121,121,123,127,116,108,114,109,103,87,72,48,34,30,33,39,38,72,107,129,145,146,155,153,169,130,37,35,90,101,63,42,47,56,74,70,68,63,65,74,69,81,81,94,104,103,93,76,63,40,25,25,22,27,25,33,51,62,68,72,76,82,82,78,76,69,72,66,67,72,76,78,68,61,55,49,52,53,57,55,59,57,57,65,65,68,63,60,46,29,23,17,21,22,17,23,25,35,41,43,50,48,45,47,46,47,43,48,49,41,51,48,48,49,45,49,48,46,46,43,43,42,45,48,41,41,43,43,40,43,42,31,54,105,98,48,19,21,22,19,19,27,29,29,31,24,24,24,21,22,25,25,25,26,24,24,31,27,24,30,29,20,18,26,28,32,30,26,22,27,25,28,24,31,39,24,26,23,26,27,25,24,22,21,25,30,25,32,28,28,29,25,36,35,29,32,28,41,92,67,27,20,18,24,22,23,29,30,27,30,29,27,22,24,25,22,18,24,22,22,24,20,26,25,31,31,22,25,24,28,28,27,33,20,16,30,27,25,29,30,33,26,29,29,32,33,30,29,28,37,32,25,24,19,18,21,18,20,20,19,24,17,25,24,20,20,23,23,19,24,21,22,22,23,22,19,21,22,16,25,29,20,81,66,19,28,18,18,18,23,53,99,103,67,34,17,21,22,22,20,24,23,18,21,23,23,20,21,24,22,27,22,19,25,22,22,21,25,24,21,21,24,22,25,27,21,26,23,23,23,25,21,23,25,31,26,39,98,113,94,84,74,88,97,78,65,93,104,113,117,98,88,74,89,85,81,78,75,83,65,65,66,61,77,94,92,90,87,92,105,106,104,97,81,88,90,91,77,69,79,81,73,75,81,60,50,58,54,49,50,57,56,74,87,87,87,86,90,85,84,77,84,95,97,94,87,89,72,56,64,67,55,22,9,16,14,16,16,16,16,16,17,16,16,196,221,178,171,200,227,243,168,132,200,244,250,252,252,252,249,243,243,224,226,171,112,132,125,139,114,93,111,150,126,104,125,116,94,48,92,124,123,137,129,132,125,131,132,126,132,129,140,138,142,142,142,144,134,143,141,148,153,163,125,63,76,50,13,44,71,79,85,77,92,96,110,134,131,129,110,109,113,115,124,122,112,98,122,102,87,90,80,79,65,46,41,53,46,54,53,60,81,80,118,84,65,90,84,1
28,125,99,141,92,24,40,57,141,158,169,170,162,185,201,199,205,201,201,211,206,208,164,177,138,112,73,45,39,29,46,39,18,10,61,74,40,36,19,55,156,187,132,50,15,42,81,134,145,124,98,60,46,34,27,36,55,65,69,76,102,119,125,152,179,171,178,166,47,63,160,188,111,121,165,162,146,127,118,82,57,39,18,12,17,17,22,38,129,89,115,140,158,179,193,193,178,159,112,66,84,121,132,122,94,78,68,53,37,39,35,31,30,28,61,24,24,18,25,19,19,26,20,21,26,21,24,30,28,42,43,37,27,27,33,26,28,30,39,44,55,66,107,155,181,210,219,214,218,218,217,206,132,152,216,224,227,222,214,208,201,202,200,198,201,193,200,155,129,168,174,179,178,185,196,198,148,136,65,33,78,97,92,93,85,83,94,109,136,135,125,134,129,128,134,141,125,105,87,69,38,30,34,31,37,35,61,87,122,138,142,142,135,136,129,128,89,25,16,21,75,97,76,77,64,67,75,66,71,65,64,66,75,81,78,92,110,105,73,45,28,23,22,23,27,29,50,57,62,68,68,69,73,77,71,66,71,66,71,66,64,69,74,63,53,48,47,48,51,55,56,59,63,66,64,61,71,61,46,35,20,15,21,21,21,17,26,37,42,43,44,49,48,46,46,45,45,46,49,48,49,53,51,53,45,46,41,44,49,45,48,39,44,47,40,42,41,38,41,39,42,44,39,43,45,69,112,96,41,16,19,21,25,33,24,37,38,27,28,24,25,24,24,26,20,26,27,31,29,28,28,23,24,24,27,27,30,33,35,24,21,27,27,27,27,45,37,22,26,27,24,23,21,25,29,23,30,33,26,27,24,30,27,29,36,32,29,27,28,32,82,80,39,29,19,23,23,21,31,31,24,27,29,27,22,28,26,22,23,17,22,23,19,30,26,28,42,27,19,23,22,28,30,26,29,26,24,32,30,28,31,31,26,30,33,33,30,30,31,29,30,35,34,29,23,24,20,19,22,22,22,24,24,23,23,27,24,23,22,27,24,20,26,23,27,23,22,28,23,22,21,29,21,35,85,49,21,27,16,22,22,23,27,50,100,103,69,32,15,21,24,23,24,25,23,23,21,27,24,26,25,22,22,18,25,27,24,22,20,23,23,22,25,27,22,21,23,27,21,24,26,21,26,19,23,31,30,22,34,85,124,125,105,108,113,113,92,82,103,108,114,114,106,101,93,100,93,79,90,79,98,108,125,125,124,133,141,139,137,139,147,159,154,146,130,113,120,111,109,108,101,102,94,100,108,101,85,78,82,83,87,87,91,95,120,140,135,138,134,141,150,138,150,148,155,164,167,151,141,133,119,122,128,99,17,3,13,12,16,13
,15,15,16,15,15,15,142,190,191,216,243,243,246,163,128,205,248,252,252,252,253,248,244,245,224,226,162,108,131,128,149,151,174,199,144,36,2,21,35,47,36,30,27,33,37,36,37,34,36,30,36,37,36,35,39,40,39,39,39,36,40,38,40,69,111,87,37,41,32,39,35,25,30,29,30,29,26,25,36,35,35,40,46,50,42,56,60,46,43,52,65,67,67,49,36,29,24,45,93,150,178,173,171,188,191,193,155,162,191,181,183,168,154,184,175,184,210,226,243,240,236,231,220,217,200,168,160,128,113,104,74,66,62,48,42,54,29,8,19,36,52,50,39,32,50,61,42,37,37,105,229,250,225,114,14,10,27,30,36,32,27,29,31,36,34,40,50,42,45,57,73,73,77,78,88,78,76,75,33,26,71,81,46,41,50,48,38,24,22,21,25,50,74,94,118,131,134,144,150,158,160,154,150,169,160,135,116,78,64,45,35,35,28,30,28,27,28,23,23,25,24,28,20,21,29,24,27,27,27,23,26,27,26,28,27,30,29,27,37,41,35,34,27,22,22,32,44,52,56,66,104,139,169,175,175,178,179,190,188,186,191,166,92,113,161,163,165,154,160,149,147,157,145,155,151,146,157,100,98,129,141,159,160,170,155,136,92,65,31,46,108,117,125,92,70,66,61,83,104,104,99,93,84,75,73,73,50,31,33,34,39,37,47,61,82,107,119,134,133,139,132,132,134,130,125,89,32,5,65,90,56,80,79,83,81,67,67,61,71,64,71,77,83,96,85,100,83,51,39,21,23,19,26,36,50,64,64,65,66,70,70,73,73,69,68,68,63,68,73,68,75,66,50,51,49,49,52,50,56,58,63,66,66,74,71,62,55,28,23,22,19,24,14,21,31,36,44,45,44,49,50,47,51,48,48,53,47,52,47,47,54,50,52,45,46,47,44,40,46,48,42,45,47,46,46,45,41,39,45,39,41,45,45,47,46,41,79,118,86,39,19,20,25,22,24,36,30,21,25,27,27,29,25,23,30,27,25,25,31,24,27,25,24,34,22,30,33,27,31,26,32,31,24,30,27,28,24,25,27,23,26,28,25,24,23,30,32,36,39,27,25,24,31,29,44,53,31,25,24,29,71,92,75,47,17,18,26,27,21,27,36,25,23,22,24,31,25,35,28,24,25,21,29,25,27,23,24,22,21,27,27,30,28,24,29,28,25,27,31,31,34,32,26,31,30,31,31,33,29,29,29,35,34,26,29,22,19,21,23,22,22,19,23,22,22,29,24,23,26,22,20,24,25,22,24,20,20,23,24,27,19,28,20,46,89,38,27,27,13,23,20,23,21,25,55,101,104,67,35,21,19,22,24,23,20,24,23,24,21,22,25,22,23,19,20,25,24,21,24,21,22,25,25,2
4,23,21,19,24,25,27,17,26,26,23,28,24,26,21,31,59,94,108,116,137,153,166,151,141,147,139,142,135,137,138,128,134,122,117,117,130,144,144,154,148,146,155,155,146,141,144,148,159,157,148,142,125,124,117,113,117,111,111,112,121,128,125,117,113,111,110,114,119,122,127,150,161,162,166,172,181,180,181,177,177,180,183,186,181,179,161,160,166,173,111,10,2,9,11,14,11,14,14,15,15,15,15,110,184,236,249,246,246,252,147,117,207,241,251,252,252,252,248,246,243,225,225,155,117,138,134,147,118,155,177,101,20,6,27,36,49,38,27,23,21,34,25,29,27,21,27,25,30,24,27,26,27,29,27,29,25,30,46,53,71,70,45,53,52,37,50,52,41,45,51,42,35,30,21,29,27,30,33,39,44,27,23,26,26,28,47,50,56,58,47,51,64,109,106,171,245,251,228,231,235,211,198,177,205,226,209,206,197,205,232,229,227,226,217,211,192,173,152,128,117,94,92,122,101,69,36,8,11,22,13,23,69,59,22,44,54,63,48,33,38,40,40,39,36,54,160,249,249,252,202,46,5,13,14,21,18,26,23,23,32,32,28,36,37,42,44,46,46,39,35,27,31,42,47,45,46,44,48,61,79,78,84,99,116,122,114,126,139,150,157,97,160,150,122,93,75,55,39,29,36,42,36,30,22,27,39,44,25,20,27,25,29,29,27,25,31,29,30,33,32,34,31,28,26,30,34,27,27,31,27,29,32,40,38,21,24,25,23,23,20,23,21,24,30,30,34,29,33,39,39,38,35,31,34,33,31,36,40,46,44,22,26,35,35,36,39,38,30,35,38,38,34,39,42,46,45,41,63,52,43,35,26,29,30,47,47,37,37,39,44,46,47,40,32,35,36,39,45,41,42,42,45,43,50,50,54,72,88,101,118,132,133,129,122,125,123,125,125,125,118,113,108,88,101,130,108,60,68,61,73,65,63,66,64,73,73,87,90,94,105,77,45,35,28,21,24,25,36,60,76,80,77,80,76,76,71,74,71,65,62,65,68,71,67,69,68,65,58,50,52,50,47,50,56,65,66,66,62,75,72,51,34,27,22,19,19,18,23,29,36,47,44,46,50,46,53,51,53,50,57,53,51,53,52,53,50,51,48,50,48,46,45,46,46,48,44,46,41,39,49,39,43,46,39,42,43,49,46,44,42,39,39,43,80,109,81,38,17,23,26,25,21,18,23,25,28,27,28,24,24,28,31,28,28,30,29,29,19,30,34,27,27,33,33,29,33,33,29,23,32,29,22,30,27,29,29,31,29,24,29,30,34,27,35,38,21,27,29,31,34,49,45,22,23,31,32,52,93,80,36,14,19,25,23,22,33,30,19,26,28,27,22,2
3,30,22,20,24,26,22,26,23,24,28,23,29,25,26,29,29,30,31,23,26,32,27,32,29,34,27,32,33,28,28,29,33,32,31,31,33,31,19,23,21,21,24,22,23,25,22,19,20,23,24,21,17,24,21,19,24,23,24,22,23,20,18,28,21,25,19,57,84,29,23,20,17,24,20,26,22,21,28,56,101,99,68,34,15,23,21,22,21,23,22,26,25,23,23,19,25,22,22,22,22,23,26,23,16,22,24,23,23,23,24,26,29,25,22,22,24,20,27,27,24,23,31,59,59,55,77,113,162,197,194,183,178,175,173,168,172,183,178,166,154,155,159,177,173,171,169,162,167,175,172,160,153,152,152,159,155,153,142,126,130,125,126,133,117,122,126,139,130,136,139,137,139,139,132,134,139,148,156,169,165,170,177,179,182,176,173,174,177,178,177,175,179,166,167,173,183,114,8,2,9,10,13,12,14,12,15,15,15,15,92,163,203,244,233,230,239,100,114,202,237,251,252,251,252,246,247,240,223,224,143,125,154,159,167,76,52,55,17,24,27,22,37,42,41,33,29,31,27,34,33,29,32,28,34,30,28,36,29,29,33,33,32,29,35,44,66,98,66,36,38,34,34,44,60,55,52,66,72,62,44,48,55,47,55,52,43,39,41,71,87,94,105,120,139,139,156,170,175,196,205,217,235,247,242,212,193,177,155,137,139,159,167,153,152,142,133,148,128,114,89,78,63,45,38,19,13,26,50,78,137,122,67,46,27,23,22,24,25,50,41,32,49,44,46,54,38,32,49,51,43,38,91,166,225,239,243,228,101,24,21,18,30,35,41,43,45,46,53,50,47,50,55,57,57,57,63,61,84,100,110,113,65,62,96,126,157,156,148,149,157,156,131,111,95,78,57,42,43,36,34,33,24,27,21,24,24,24,22,23,26,23,32,40,45,33,27,33,32,37,31,29,30,32,27,35,34,29,26,29,32,20,29,26,19,25,29,22,23,33,51,41,22,22,22,18,23,24,20,23,22,22,23,24,24,22,24,23,24,25,18,22,23,21,19,29,48,38,20,20,25,24,22,24,25,21,25,29,31,36,44,39,38,37,26,31,28,23,19,22,22,28,50,39,24,19,29,25,29,47,42,39,39,39,54,66,66,61,58,63,75,93,100,115,123,117,126,122,119,120,115,121,113,120,115,113,117,113,132,141,125,103,92,69,61,70,59,62,65,67,72,67,84,85,95,94,70,49,34,25,24,23,27,44,61,77,98,101,94,102,97,89,81,79,81,78,68,59,71,76,67,66,66,66,63,56,51,48,47,53,56,63,73,69,73,67,52,37,25,19,17,17,20,20,31,38,41,48,44,50,46,50,55,54,56,57,57,56,55,49,52,53,49
,49,44,48,46,47,48,42,49,50,45,39,43,47,39,39,43,45,41,45,46,44,39,41,41,39,44,40,40,34,73,110,78,35,23,23,20,22,23,30,26,27,31,26,29,25,29,30,28,27,30,30,27,29,31,32,26,30,32,31,37,28,28,29,22,29,29,30,34,24,31,38,30,27,27,32,28,29,30,29,27,23,31,28,28,27,29,24,25,33,28,29,36,84,79,31,21,24,27,19,27,24,23,26,29,29,23,23,22,27,25,26,29,21,25,28,24,27,24,28,24,28,29,25,25,27,33,27,25,28,29,28,34,31,31,35,31,33,31,32,27,30,30,34,36,22,21,20,21,23,22,22,23,22,23,23,22,27,21,20,24,22,24,26,25,27,25,23,22,23,24,21,23,24,17,74,74,20,30,27,19,23,21,25,21,23,26,26,56,97,96,62,29,21,18,24,24,19,23,21,26,22,24,24,20,27,22,26,25,23,27,23,26,23,22,23,24,24,21,22,30,23,21,26,24,24,22,24,28,24,52,81,98,122,112,77,117,186,201,206,200,187,185,184,193,194,188,187,172,181,193,195,195,186,189,186,185,188,188,181,178,180,177,179,177,171,162,151,162,160,155,155,155,162,164,167,160,162,170,171,175,168,165,160,160,158,161,171,173,174,174,179,175,170,169,169,173,171,171,171,174,168,173,176,184,114,8,1,9,10,14,12,15,13,15,15,14,15,105,142,147,173,150,146,165,81,126,201,236,251,253,253,251,245,245,238,223,214,124,133,169,193,203,64,5,9,19,37,28,24,35,36,33,36,33,28,33,30,36,40,41,44,36,42,42,39,42,40,41,39,43,48,44,49,43,68,92,45,35,42,45,49,50,56,40,66,85,88,96,143,133,139,155,166,182,195,191,199,207,213,211,213,210,208,214,222,216,215,212,203,152,180,164,146,139,141,147,130,109,111,117,128,123,71,23,65,25,17,12,19,30,32,36,36,25,35,49,53,69,51,53,47,28,71,146,154,93,51,19,25,50,34,45,57,39,33,50,63,51,81,141,158,159,157,182,169,104,60,29,38,54,51,57,58,67,76,83,100,101,124,143,155,165,162,152,146,142,140,136,123,72,47,56,76,84,60,49,48,45,38,34,34,28,21,23,25,25,23,29,27,18,24,28,27,27,29,33,26,34,33,37,46,40,32,28,27,28,32,26,25,28,26,24,28,25,24,27,24,25,26,23,24,25,22,22,22,24,34,51,44,23,22,22,24,24,23,23,24,22,22,24,23,27,22,21,23,25,22,17,23,25,23,21,29,46,42,24,19,20,24,25,30,32,37,40,41,38,33,27,24,24,24,24,22,19,27,25,18,21,33,53,40,21,23,27,22,52,60,39,47,52,48,83,106,103,101,94,
108,120,122,116,122,123,112,108,113,110,105,105,105,110,111,115,117,125,120,114,90,65,53,54,56,59,64,58,63,62,77,85,83,89,90,75,53,33,23,24,23,31,46,64,91,100,108,105,100,97,97,98,87,89,78,80,73,74,77,70,70,68,64,75,75,66,61,53,57,50,60,61,59,63,59,53,35,27,20,18,24,17,24,32,36,44,51,53,47,52,55,57,54,53,57,55,55,57,58,55,46,42,48,48,46,44,43,50,47,49,45,45,44,44,44,43,45,41,41,44,42,41,43,40,34,43,40,42,39,41,41,42,39,31,78,111,75,33,13,23,27,29,26,27,31,25,27,25,24,27,28,27,33,26,30,36,26,26,29,33,31,30,34,31,26,24,27,24,29,30,30,27,22,28,29,23,29,27,23,29,29,29,33,32,25,31,29,27,29,24,29,28,30,23,31,31,70,94,46,23,21,17,28,25,22,27,33,28,32,41,31,25,24,26,25,29,28,24,26,26,26,27,21,24,24,30,31,29,33,27,24,24,32,29,28,34,32,32,35,31,30,28,31,34,27,29,31,32,32,24,18,23,22,22,24,20,22,25,19,19,21,25,23,17,24,21,19,23,24,21,21,25,19,26,23,23,27,27,85,66,19,29,19,22,21,21,23,19,28,26,29,29,55,101,92,60,32,17,19,22,24,21,23,22,24,23,25,24,19,22,23,23,24,24,25,19,21,25,20,24,23,28,24,22,24,24,30,21,24,25,19,44,84,89,99,142,181,170,82,55,123,171,193,193,197,196,184,169,165,200,212,190,189,187,184,185,189,186,184,187,186,183,184,187,186,185,190,185,184,180,181,189,184,188,188,194,188,184,189,177,187,189,188,189,186,186,181,178,175,176,182,177,177,181,178,178,178,177,177,177,177,174,171,174,170,176,174,184,115,9,2,8,10,14,12,12,12,13,15,15,14,145,171,168,189,158,158,158,76,135,203,243,250,253,253,252,245,244,232,218,198,89,105,155,199,222,69,6,11,21,45,38,36,32,32,36,34,36,34,32,32,39,44,47,54,54,53,57,59,57,57,63,56,53,57,56,51,59,105,111,87,86,93,116,133,141,151,169,192,204,208,224,239,242,241,240,235,235,233,226,227,224,208,191,170,159,138,137,134,112,101,87,66,46,36,39,84,137,190,213,160,120,99,108,118,106,55,3,11,24,38,40,41,47,51,52,38,37,49,59,56,53,47,42,39,77,174,251,251,201,116,24,11,59,75,68,51,36,36,55,48,67,152,192,128,82,76,77,93,62,75,98,117,141,134,147,152,153,159,143,140,135,126,125,121,120,110,79,51,39,36,33,33,30,35,34,19,28,27,21,29,23,25,21,27,29,24,29
,19,27,26,26,31,31,40,33,40,48,51,57,56,54,49,46,45,42,37,37,46,45,45,49,47,44,48,47,44,50,43,50,50,46,47,51,49,50,51,49,51,29,44,58,45,46,46,48,41,40,40,37,43,42,48,56,53,50,48,47,47,46,42,48,43,48,50,46,61,63,54,34,27,28,41,45,47,49,48,46,43,39,34,40,39,36,38,35,34,29,31,30,32,30,44,56,39,29,21,42,57,54,52,31,41,50,55,91,103,96,84,87,98,104,104,97,111,111,103,104,102,104,101,101,104,103,114,121,120,102,73,57,36,33,47,55,53,58,61,61,65,68,81,84,73,65,44,33,26,23,24,23,51,66,87,113,107,102,101,100,100,96,93,84,86,83,83,83,89,82,62,65,65,71,78,79,76,69,64,59,54,61,62,62,60,49,37,29,22,22,20,18,23,26,38,44,46,54,54,56,56,53,57,54,58,56,51,55,48,53,51,46,44,44,47,47,49,43,44,49,45,41,47,44,43,41,39,42,42,44,45,48,43,41,41,43,39,41,45,38,42,41,44,47,35,33,32,82,109,70,34,21,24,28,63,50,23,22,27,26,26,29,28,28,23,30,27,29,27,25,37,30,34,30,27,25,20,29,26,28,30,29,33,25,27,24,19,25,33,34,31,27,29,25,29,33,31,32,26,27,29,28,34,29,29,29,27,23,50,93,63,30,22,19,27,27,27,31,25,23,42,37,19,22,29,29,30,26,25,28,22,28,29,20,27,29,24,32,36,31,29,27,28,29,32,32,24,33,33,34,31,33,33,30,27,29,33,30,31,35,26,25,25,19,20,23,19,21,27,22,23,21,21,21,21,19,24,22,20,24,16,23,19,24,24,21,20,24,18,44,93,48,20,27,17,24,21,23,23,19,21,28,29,22,32,64,100,101,63,29,17,22,22,25,24,21,21,23,28,17,23,27,22,23,21,28,25,21,21,24,24,22,25,19,27,23,22,23,23,26,26,25,34,141,182,111,66,63,121,130,41,7,23,77,139,167,190,195,191,160,166,239,250,221,187,178,176,179,177,177,175,178,179,179,181,178,167,171,186,185,184,186,185,192,191,194,196,189,171,181,188,177,184,186,186,189,189,192,193,191,187,186,186,184,185,183,185,185,184,186,184,185,183,184,180,181,180,184,183,188,116,7,1,9,10,13,12,14,11,14,15,15,14,177,194,198,213,184,181,145,75,136,202,245,251,253,252,251,246,243,233,217,181,52,64,139,192,214,65,9,14,26,55,49,53,47,38,38,37,40,43,43,46,54,53,64,61,55,99,64,77,91,107,122,137,148,162,179,178,182,212,223,214,234,243,242,242,237,236,234,232,227,225,220,219,214,206,199,182,169,150,128,122,118,114,116,11
6,112,84,74,72,46,39,24,12,10,14,11,31,103,145,155,125,107,99,97,105,106,65,5,16,50,49,38,40,46,56,50,35,38,63,103,110,111,98,87,94,149,242,250,250,247,145,36,16,75,107,93,60,31,38,75,87,118,171,151,90,63,35,32,26,45,101,129,91,125,112,118,97,83,53,39,39,33,33,67,27,23,24,27,26,19,26,24,26,29,35,33,22,25,27,23,27,31,29,33,37,36,39,43,45,49,51,56,53,49,75,104,131,152,172,182,190,203,190,186,184,179,179,187,197,192,187,182,148,194,200,198,198,193,192,188,153,184,182,186,194,194,199,200,198,190,158,191,183,193,193,187,116,75,82,85,135,154,174,187,184,183,174,181,182,175,169,178,196,206,202,188,178,152,91,27,9,33,76,141,159,155,164,160,163,153,149,153,149,154,147,145,139,118,106,95,108,123,137,129,97,72,85,96,113,103,68,42,46,38,48,80,79,81,70,70,86,87,82,80,87,94,94,101,98,95,98,105,108,109,85,85,63,45,37,34,34,36,48,50,58,61,61,68,69,62,65,59,46,30,24,19,22,28,47,63,86,101,112,102,94,91,93,98,97,91,84,80,77,81,83,87,77,63,69,66,78,74,73,81,72,65,61,63,69,70,63,52,37,27,20,22,18,19,21,27,41,47,51,50,50,52,56,59,53,58,49,47,54,53,51,48,50,46,45,44,47,47,46,44,43,49,48,44,43,44,43,44,43,42,41,43,46,45,45,42,40,39,41,43,42,42,36,45,42,42,45,40,36,36,27,32,81,109,69,27,17,33,56,35,20,26,24,25,31,26,27,29,27,28,29,25,28,33,29,29,35,29,26,29,26,24,28,33,32,27,29,27,21,23,29,32,31,50,48,27,26,26,33,30,27,29,28,32,27,29,37,27,26,30,30,26,39,89,75,34,23,17,25,24,29,32,24,24,26,25,21,29,29,31,28,29,24,20,32,23,29,39,31,25,24,35,34,34,31,32,24,28,33,29,31,31,35,29,33,34,30,33,32,33,29,32,34,31,29,22,23,19,19,22,22,27,22,20,20,23,24,22,24,22,22,19,24,26,19,21,27,24,20,30,22,27,21,54,92,34,24,26,16,24,22,26,21,21,22,24,24,21,26,29,63,105,93,59,29,19,21,22,24,17,24,24,22,25,21,23,17,19,22,23,28,26,21,23,24,24,23,25,19,21,29,22,24,27,30,25,53,180,178,101,92,57,46,40,14,17,12,37,102,126,154,192,206,178,185,248,247,190,179,177,173,176,176,171,172,173,178,177,179,173,157,165,182,179,177,178,179,186,186,186,181,170,152,168,184,178,180,180,179,183,188,187,189,193,188,182,185,184,185,188,1
86,185,186,183,185,184,184,186,185,189,185,190,187,193,115,7,1,8,10,13,12,14,13,14,15,14,14,185,195,189,191,167,171,146,81,139,201,249,251,253,253,252,246,244,237,222,172,43,66,141,201,207,49,8,15,20,53,57,59,53,45,53,43,48,62,66,82,95,113,127,142,152,163,181,188,200,214,224,231,236,236,235,230,228,230,222,224,223,218,215,201,184,168,149,131,111,92,81,76,65,48,38,29,32,21,17,14,28,65,115,152,153,94,69,93,71,57,59,59,33,17,14,18,74,127,130,113,110,119,133,121,95,33,3,44,74,66,35,32,51,56,55,35,25,59,103,100,96,87,95,128,189,250,248,246,229,145,33,9,66,133,157,125,36,55,155,151,125,78,55,82,78,55,27,28,31,28,35,41,31,27,30,33,22,22,26,20,26,18,27,24,21,27,25,28,23,27,28,29,36,39,38,34,32,39,45,50,50,57,58,51,69,92,107,130,159,176,194,204,206,214,219,225,225,229,236,235,240,235,234,241,237,236,237,234,224,221,223,230,242,246,238,237,238,238,240,234,235,238,242,241,244,243,249,250,249,249,245,236,211,203,162,110,116,151,213,236,240,245,236,237,236,241,244,244,248,245,246,241,228,191,152,127,96,101,136,172,200,222,239,241,231,230,231,235,229,232,238,237,234,220,222,231,202,171,162,175,198,217,216,186,167,151,140,122,119,109,83,81,53,61,85,71,72,61,66,81,77,71,66,76,83,90,95,93,94,94,103,101,71,52,40,34,39,36,36,36,43,52,51,58,59,61,67,63,53,42,27,26,25,19,34,45,69,80,96,116,96,93,95,91,89,87,87,84,80,81,84,84,80,74,68,65,66,71,81,74,71,66,67,67,65,69,70,66,59,44,21,20,25,22,19,19,26,40,53,53,52,57,49,52,55,51,55,46,53,51,46,52,55,53,42,47,49,43,46,42,46,49,49,47,46,44,41,41,43,43,37,44,45,44,47,40,46,41,37,41,39,39,39,38,43,45,43,43,43,38,38,41,34,26,22,32,87,106,72,35,18,26,23,27,25,23,27,26,26,27,28,29,26,26,25,24,26,29,30,31,29,24,27,24,24,30,29,27,27,27,22,29,28,29,30,29,43,30,25,28,30,27,27,27,21,29,32,26,30,28,30,29,24,24,29,29,69,87,45,27,24,24,25,25,22,24,23,22,24,29,33,34,27,30,30,27,22,21,22,34,38,20,20,29,29,32,29,34,39,30,43,46,34,33,32,31,33,30,35,35,28,28,33,33,30,29,31,31,22,22,23,19,22,22,19,24,20,24,24,22,24,22,22,19,23,23,23,18,23,27,19,21,23,20,29,17,6
1,68,66,73,76,78,78,85,92,80,60,39,23,22,24,24,29,40,47,57,62,57,57,56,55,57,57,53,50,51,54,54,50,49,49,50,47,49,49,47,54,50,48,43,44,46,47,45,41,46,46,52,43,44,44,44,43,41,42,40,45,48,49,41,37,39,40,42,40,41,38,36,43,41,43,40,35,41,37,36,40,33,36,38,40,39,35,34,32,35,33,39,32,28,39,35,33,35,37,33,33,35,33,28,33,31,31,32,33,34,24,24,25,24,26,26,26,24,29,34,32,47,82,134,111,68,41,21,21,20,22,24,26,22,27,25,23,27,27,31,24,31,24,23,27,26,25,31,37,29,22,27,28,22,24,27,27,27,23,23,25,28,33,29,35,39,28,28,30,28,24,45,87,84,39,20,28,34,29,29,29,26,29,26,31,27,19,20,24,19,25,36,27,24,30,32,24,33,25,24,33,28,33,33,32,34,29,32,32,26,36,36,31,30,35,33,37,29,22,20,19,24,24,17,24,21,22,21,22,24,23,24,28,19,24,24,21,27,21,20,21,23,71,78,28,25,20,16,24,18,22,29,19,24,24,19,24,22,23,24,20,24,23,22,26,22,21,22,22,22,22,24,25,25,21,25,27,22,32,61,108,112,68,40,24,19,23,66,107,78,49,48,50,51,46,46,59,61,49,40,41,48,53,55,52,50,53,62,46,35,43,47,139,189,195,216,206,208,200,194,210,234,250,250,249,214,123,107,88,100,149,156,86,32,11,12,12,17,11,54,147,70,42,77,75,93,77,63,63,141,203,121,56,74,125,143,142,147,96,160,247,247,248,248,253,253,252,252,247,210,189,189,184,177,170,160,173,111,11,3,10,12,14,13,15,13,15,15,15,15,124,114,96,90,89,61,26,59,78,79,77,56,53,36,10,36,61,86,122,139,141,144,151,156,159,182,195,200,212,211,216,219,216,215,212,213,211,215,164,47,32,72,125,170,166,155,135,124,118,105,95,88,79,71,70,61,60,59,57,20,16,60,108,158,169,172,182,189,198,202,204,213,208,213,208,208,142,34,39,97,160,196,170,79,44,57,51,59,66,64,57,25,8,35,77,98,103,102,103,119,155,206,215,232,132,27,82,145,187,170,142,125,95,68,12,49,139,164,200,236,181,164,190,89,42,132,205,225,205,135,49,68,107,84,47,52,150,225,199,177,122,10,21,121,187,191,186,185,157,105,44,36,44,50,71,102,135,170,199,193,179,171,166,160,153,152,150,150,154,159,162,163,167,166,170,171,180,184,186,198,197,201,202,191,178,156,131,101,64,19,5,11,24,47,98,153,178,201,214,210,205,196,187,183,176,172,167,165,162,165,167,164,169,167,1
69,168,163,163,157,162,165,163,170,171,171,166,166,172,170,173,170,163,152,146,158,162,169,173,163,155,154,157,160,160,163,165,157,148,145,149,153,164,168,169,168,171,177,175,179,176,165,155,155,162,163,164,166,173,174,172,171,174,175,173,177,163,134,105,83,86,96,96,96,103,106,99,105,116,118,127,124,120,122,118,122,122,120,118,118,112,114,109,111,114,106,106,105,103,99,99,97,88,83,84,81,77,76,67,82,81,78,72,60,66,61,69,66,65,66,69,69,77,84,73,81,87,92,73,56,38,27,19,21,29,34,38,50,55,59,61,56,60,59,59,54,53,58,44,51,54,49,55,53,49,45,48,49,48,52,53,49,46,46,44,43,43,48,47,45,47,43,46,44,43,47,43,43,40,43,46,46,41,41,44,43,39,39,35,42,42,44,44,35,43,37,36,37,38,39,34,39,41,34,41,37,32,38,33,34,35,31,36,32,29,39,33,41,37,25,34,31,32,31,31,34,34,33,35,33,31,29,25,22,25,24,22,29,26,27,27,25,37,26,42,99,107,80,51,26,19,29,28,26,20,27,32,21,25,27,29,27,23,27,24,27,28,28,33,28,24,22,26,29,25,28,28,27,27,23,24,25,27,32,33,48,39,20,28,22,24,26,30,70,92,57,28,28,24,30,27,29,29,23,31,27,21,22,24,22,25,37,34,26,29,27,30,31,28,26,22,31,31,33,31,33,36,34,36,27,28,37,30,28,29,34,35,33,31,16,20,19,22,20,20,23,19,23,18,21,21,20,21,19,26,24,27,22,24,24,21,19,32,92,63,16,27,23,16,21,24,19,24,23,25,23,24,26,18,24,24,29,26,20,22,22,26,23,19,23,25,27,24,27,23,23,26,26,27,24,29,59,108,107,71,47,21,75,111,76,47,33,50,48,45,50,49,53,61,47,33,45,44,50,55,55,50,54,63,53,40,39,41,82,122,163,215,212,208,200,185,186,202,209,211,219,171,144,136,104,111,191,235,207,197,188,167,125,80,59,124,172,51,33,92,105,122,84,53,62,143,203,121,50,52,68,84,87,100,73,149,247,247,249,249,252,252,250,253,248,224,195,190,193,182,179,176,176,110,10,1,10,12,14,12,15,13,15,15,15,15,60,63,53,52,42,19,22,53,94,114,128,128,132,100,25,43,81,115,187,200,208,204,212,210,208,210,209,210,206,204,199,191,188,184,174,163,153,151,87,16,26,47,90,108,89,76,66,60,64,64,66,82,89,105,117,128,145,155,150,54,16,73,138,201,208,206,210,205,202,193,189,181,160,146,129,113,50,4,32,74,110,115,100,75,71,108,122,107,136,146,144,85,2,55,126,1
69,158,143,132,130,165,200,203,198,93,2,50,103,115,88,67,61,67,75,29,71,154,160,152,191,170,171,201,89,33,90,150,137,95,55,22,35,35,39,27,37,138,220,231,226,159,21,21,121,183,190,187,191,196,199,182,169,165,174,184,190,202,198,182,176,168,167,168,157,156,160,159,160,160,163,170,174,178,186,192,195,198,191,177,157,136,114,85,56,24,5,11,10,14,11,30,111,166,190,205,211,207,196,188,179,172,172,171,164,163,165,169,162,159,162,162,165,163,161,162,159,160,158,156,159,159,161,167,171,171,171,167,170,170,165,165,162,161,160,162,169,170,167,163,164,168,170,168,168,166,168,171,165,165,165,166,171,169,170,168,169,172,174,174,174,165,160,155,159,169,173,180,175,162,151,159,173,179,176,180,174,145,104,80,81,87,94,93,88,94,87,96,111,118,126,123,125,119,115,122,122,121,119,115,111,119,112,110,111,108,107,101,102,96,103,104,93,96,88,93,77,79,83,77,76,69,64,66,63,61,72,71,65,75,69,73,80,87,92,71,78,49,38,24,22,26,27,34,39,52,57,55,60,60,61,60,51,54,55,48,51,57,57,52,50,49,49,50,51,49,47,51,55,50,47,43,45,45,47,42,42,46,47,43,45,46,43,42,42,44,38,44,45,42,41,38,36,43,41,39,40,41,41,49,39,36,45,37,36,39,38,37,39,35,37,38,40,39,31,36,36,34,36,31,38,33,29,39,36,36,30,34,33,32,33,34,36,33,33,34,35,35,38,29,35,33,27,28,21,26,22,26,27,23,24,24,26,27,30,36,81,110,84,58,37,21,21,24,20,30,25,25,29,26,31,29,23,24,28,27,33,31,23,25,28,24,29,27,31,28,19,27,28,25,31,27,29,30,28,33,26,24,23,25,32,25,26,49,88,77,37,19,26,31,25,24,27,30,27,24,24,23,22,25,31,31,32,28,32,24,27,34,27,27,26,29,29,30,36,33,33,32,28,31,31,34,36,31,27,27,31,35,33,19,19,23,19,23,22,21,24,21,20,21,20,17,22,24,23,28,24,21,28,22,20,17,59,92,40,19,27,16,23,26,22,22,21,24,26,21,22,24,21,18,26,26,21,24,24,25,21,23,22,21,26,17,23,26,21,27,25,23,28,25,23,31,57,97,108,79,77,118,67,39,37,42,49,46,44,49,48,50,57,42,35,43,48,51,53,54,51,52,61,50,34,37,37,57,61,100,156,172,201,212,212,212,216,210,196,182,133,121,115,97,118,199,246,251,251,253,253,248,247,230,246,223,88,83,123,124,141,112,91,110,188,232,157,83,52,50,51,42,63,60,149,247,24
7,249,249,251,251,216,237,248,227,205,191,193,180,179,179,179,111,10,1,10,12,14,12,14,13,14,15,15,14,124,135,133,139,128,60,47,101,146,187,200,188,197,146,46,42,78,134,183,201,203,194,192,185,177,171,161,149,145,136,121,111,103,97,90,81,63,46,17,12,39,57,90,110,104,114,121,136,147,157,168,178,187,193,201,204,209,215,195,78,19,67,133,186,189,172,163,147,131,117,102,91,68,61,49,35,17,15,50,100,152,167,168,127,116,142,147,147,187,226,236,134,23,59,135,164,152,123,103,89,104,121,101,84,22,10,58,99,127,117,137,157,176,179,103,128,206,203,169,143,100,112,122,46,23,44,59,69,48,32,27,21,29,30,31,28,78,150,169,227,193,33,17,118,184,184,174,175,182,188,196,203,201,199,193,190,183,177,169,163,165,163,162,164,164,170,172,177,185,190,198,198,191,181,169,146,127,97,64,42,17,23,29,36,47,46,41,31,39,82,146,191,214,199,194,189,176,178,169,167,171,173,174,176,172,174,173,169,168,170,169,167,168,165,167,165,163,166,166,169,170,169,170,168,167,168,167,166,165,163,165,169,169,168,170,169,167,168,167,170,171,167,168,169,165,172,173,177,179,177,175,171,165,165,159,159,158,155,164,167,169,170,172,174,174,179,174,168,159,157,162,165,165,165,169,174,148,110,92,81,84,95,96,81,77,69,79,99,115,126,120,117,119,116,115,118,114,116,116,110,109,106,108,105,101,101,101,102,103,111,111,100,89,79,67,73,80,76,81,73,74,79,73,74,71,72,67,76,79,78,79,92,93,66,52,34,22,22,29,33,34,43,52,56,60,60,59,57,56,57,54,58,59,54,50,53,55,55,54,47,53,49,45,51,54,52,45,48,49,49,48,45,49,46,47,44,48,46,45,46,42,42,40,43,47,43,43,41,39,39,43,43,36,43,43,41,42,41,40,37,34,40,36,41,38,35,45,34,43,39,37,38,35,41,33,38,39,31,36,33,33,39,33,33,33,33,35,33,36,31,28,33,34,35,36,38,29,33,33,29,35,29,27,22,25,22,30,23,20,27,24,28,29,29,27,34,69,113,104,73,48,24,22,25,28,22,23,31,24,28,23,28,28,27,29,21,27,29,29,24,27,32,29,26,28,21,20,24,26,33,30,26,31,32,23,24,30,29,28,30,28,29,33,76,92,47,27,25,25,27,27,26,22,27,25,25,27,23,30,24,28,32,26,34,31,34,29,26,23,25,34,33,31,33,37,34,30,33,34,31,33,35,28,29,28,29,30,25,24,21,22,24,23
,22,25,23,22,22,24,18,20,24,19,22,22,23,22,21,23,20,28,81,80,27,22,23,15,18,23,23,25,23,19,24,22,20,23,22,22,22,24,21,26,22,19,24,24,24,25,24,18,27,27,21,29,25,28,29,19,26,32,31,53,99,133,130,83,33,39,42,38,43,46,54,55,60,60,52,49,51,48,48,53,49,49,51,52,54,42,35,38,43,46,47,39,58,79,114,149,162,183,202,209,213,185,125,100,88,83,89,141,193,219,244,246,250,253,253,253,253,244,181,198,188,136,134,126,134,181,242,246,226,171,140,130,127,112,120,110,179,250,250,250,250,246,229,168,208,246,235,216,188,196,180,175,183,172,108,11,0,9,10,13,12,14,13,15,15,15,14,190,199,186,190,172,82,64,108,161,194,198,189,185,136,41,33,59,105,144,157,146,127,121,109,96,87,81,72,64,60,60,57,55,61,66,70,78,84,33,14,55,97,159,186,187,195,201,203,210,208,207,210,209,210,206,201,193,191,170,56,12,43,85,131,117,101,84,75,66,56,62,70,79,94,113,132,63,18,89,162,235,253,249,191,137,147,141,137,177,215,227,130,16,33,88,113,94,71,57,50,71,81,85,87,26,17,101,167,200,199,207,210,213,196,108,129,205,232,207,178,87,45,61,18,25,29,33,34,30,36,22,33,36,38,48,34,78,139,132,169,158,23,19,109,173,181,167,165,163,169,171,172,171,170,166,165,166,164,166,169,168,171,174,173,179,191,193,192,190,176,160,135,103,77,49,27,23,46,70,92,115,136,156,169,183,190,190,178,122,158,158,153,148,146,148,152,155,155,155,157,153,156,160,160,163,163,169,169,170,168,170,171,165,163,165,163,161,160,163,168,167,165,165,164,160,163,163,162,165,165,167,166,166,170,170,169,171,172,170,174,173,168,162,162,164,164,165,159,159,161,158,158,155,148,153,160,165,163,165,171,169,171,171,174,173,169,167,162,168,168,165,165,159,162,164,168,165,132,104,94,74,92,102,82,77,55,61,73,97,114,115,119,112,116,117,114,113,114,107,107,104,102,107,103,106,114,110,108,105,103,97,77,66,55,73,72,69,83,77,77,77,79,75,74,77,78,83,76,87,89,61,58,56,29,37,24,25,37,38,51,55,53,62,60,60,58,58,56,54,57,54,55,55,57,53,51,55,52,51,50,47,52,53,51,50,48,50,48,46,47,48,47,50,49,47,42,45,46,42,49,44,44,45,43,48,43,44,46,40,39,34,39,40,37,45,42,36,39,40,38,36,39,39,39,39,3
7,37,37,39,32,37,38,32,36,32,33,34,29,35,36,34,34,33,31,33,35,35,32,34,34,35,39,31,29,33,30,29,30,29,33,27,29,29,25,23,19,24,26,22,21,29,26,31,27,47,52,24,44,101,115,84,55,33,26,23,20,27,30,23,21,29,25,29,28,28,29,24,27,27,26,23,27,29,23,23,26,27,28,29,33,30,27,29,27,29,27,24,31,38,33,22,31,33,50,89,67,32,21,21,26,24,27,25,24,27,24,26,24,29,35,29,28,27,34,36,33,33,29,26,24,31,32,36,32,30,38,35,29,29,33,29,35,28,29,34,28,27,25,21,19,21,21,19,22,20,17,22,18,21,21,21,20,22,23,23,22,23,23,27,19,40,96,57,21,24,21,18,20,24,24,22,24,20,22,24,24,21,19,27,26,22,22,22,21,23,21,26,27,21,26,22,25,20,23,34,16,24,23,26,24,24,26,50,118,124,108,79,29,54,39,36,34,44,49,54,64,57,62,64,61,50,53,54,55,53,48,53,51,41,42,42,40,43,46,44,48,35,32,86,62,89,127,147,165,158,122,115,101,83,73,118,171,188,187,164,162,174,190,201,211,203,193,220,185,102,63,57,94,182,242,251,251,248,243,242,240,230,229,207,241,252,252,252,252,246,235,186,229,252,250,230,195,201,184,175,185,172,105,12,0,10,10,13,12,14,13,14,15,15,14,186,188,179,181,146,63,62,94,136,173,163,152,142,84,31,31,46,78,98,95,77,65,63,59,61,68,73,81,98,109,121,136,145,151,162,172,175,182,93,25,63,116,182,212,212,213,205,205,195,188,178,164,156,146,132,117,96,90,63,14,18,32,71,98,89,92,103,116,130,142,156,174,182,200,206,214,127,39,94,168,250,251,248,208,137,139,130,113,113,122,119,45,13,29,47,64,52,53,69,88,139,177,193,190,68,31,112,182,218,196,186,167,147,112,49,90,157,183,213,222,130,47,27,13,26,26,19,23,23,22,34,43,65,75,69,99,164,213,197,175,108,7,25,119,178,176,162,165,165,164,164,162,164,166,167,167,167,176,177,180,190,191,198,196,177,161,139,116,90,55,36,25,33,56,83,111,138,160,172,185,193,199,195,192,193,188,189,183,184,176,159,148,144,138,144,143,139,143,143,143,145,141,142,149,145,145,151,153,152,154,155,158,151,150,155,154,158,159,159,160,158,161,165,159,158,159,160,160,163,165,160,162,167,169,169,170,171,173,168,170,168,164,166,169,166,167,162,160,161,163,166,167,163,167,171,178,177,172,174,171,169,164,162,165,165,165,168,165,
162,169,174,174,174,169,171,180,183,168,122,88,78,87,87,83,90,77,65,59,78,108,111,113,117,115,116,116,116,114,112,108,107,113,114,111,113,117,107,97,82,67,63,65,69,73,78,78,81,81,85,75,73,81,70,76,86,83,94,97,69,54,35,27,26,27,36,36,48,53,59,61,60,61,60,62,64,61,53,53,55,56,52,55,55,49,49,52,52,53,55,49,51,53,50,46,48,53,44,47,49,53,51,46,47,46,46,41,38,46,46,47,40,45,47,41,44,42,40,39,36,41,39,41,44,42,42,39,38,40,33,39,40,34,37,39,39,37,38,37,33,31,32,36,35,36,32,37,34,33,39,32,34,34,33,35,32,32,35,36,34,35,33,33,29,30,31,27,33,33,29,32,30,30,34,26,25,25,28,27,22,27,23,24,31,27,30,28,21,27,28,65,110,100,72,45,24,24,27,25,24,23,25,26,27,25,24,29,27,19,23,31,32,29,26,25,27,31,25,32,29,29,26,29,29,25,33,24,25,33,29,27,26,26,28,33,76,87,52,26,16,25,24,24,23,29,23,26,31,25,37,33,27,24,30,39,32,31,35,35,24,25,31,33,34,26,38,33,31,33,28,33,31,33,29,33,29,29,31,29,17,21,22,19,19,21,21,22,23,20,21,24,23,25,19,19,23,17,24,21,23,18,61,94,36,24,30,17,15,19,26,21,22,28,23,19,25,23,20,21,24,21,23,21,20,27,22,27,22,23,26,21,22,21,28,33,34,25,25,27,22,27,25,52,110,98,61,74,78,78,69,42,39,38,53,48,45,56,52,57,57,54,55,50,54,55,56,54,51,55,52,50,49,44,46,49,50,52,50,46,45,33,28,35,53,89,110,124,141,126,101,103,141,180,199,188,164,150,139,146,150,151,148,150,161,130,52,15,9,27,118,193,240,250,252,252,252,252,252,252,253,253,252,252,252,252,248,246,232,251,250,250,242,199,210,192,180,190,167,102,13,1,10,10,13,11,13,13,14,15,15,14,152,149,136,113,83,46,53,73,99,125,98,81,63,27,18,41,62,92,106,111,116,122,132,143,153,165,172,179,190,197,208,208,211,212,211,214,213,203,102,29,53,99,162,188,184,169,157,143,128,113,99,92,83,73,66,59,49,61,45,10,29,73,133,168,167,178,187,204,208,213,219,219,222,221,219,213,108,19,55,114,183,210,191,137,118,132,124,89,27,5,10,18,32,38,49,51,41,41,71,121,174,196,200,185,71,28,89,143,165,136,113,84,57,33,24,91,135,142,159,213,132,10,12,15,28,33,30,24,21,28,29,60,118,162,165,152,175,212,215,211,127,4,34,125,176,175,166,168,166,166,164,165,168,167,177,186,192,1
98,192,182,167,147,130,105,74,51,27,28,48,66,97,125,147,168,184,198,200,199,196,188,181,174,167,161,160,159,162,165,169,173,166,159,156,156,155,153,155,154,155,154,154,157,153,153,154,151,152,149,149,149,149,148,151,151,150,151,155,158,155,157,159,158,156,156,155,156,156,157,158,160,162,167,169,166,168,166,168,169,169,171,167,168,167,167,167,168,168,171,177,174,174,176,173,174,177,173,172,170,168,166,162,162,161,160,160,163,165,165,162,152,157,171,172,173,169,168,169,169,146,122,89,78,64,63,106,96,96,98,118,134,128,133,127,120,120,118,122,122,117,121,112,117,121,108,100,90,83,74,72,61,68,79,85,86,90,81,85,90,83,75,79,85,86,93,86,76,55,40,29,29,29,35,38,43,52,54,64,63,61,66,60,59,57,53,59,55,55,53,55,56,51,51,51,53,51,50,53,55,54,53,48,50,52,50,45,49,49,49,51,49,53,44,46,46,42,42,39,40,49,49,42,40,43,45,45,43,43,41,37,40,42,39,43,38,39,40,39,33,37,41,42,41,38,33,34,36,33,37,41,34,32,37,39,37,39,39,37,33,38,32,29,37,33,34,36,33,38,35,33,31,29,37,29,34,30,21,35,30,30,37,28,25,27,31,32,21,22,26,22,27,29,24,27,29,28,28,27,27,24,28,53,101,111,88,62,31,20,17,27,26,29,32,24,26,27,30,21,27,32,27,31,23,22,28,26,30,33,33,35,26,22,32,30,24,26,32,24,24,28,24,34,27,26,29,48,89,71,33,24,23,24,28,25,28,21,28,29,22,27,25,25,25,28,34,30,33,35,29,25,29,32,31,35,36,35,37,31,29,30,34,35,34,31,31,29,32,31,24,23,24,18,21,22,20,23,18,20,21,22,19,16,25,19,23,22,18,23,23,26,27,85,74,22,30,19,19,21,19,23,18,22,24,24,24,22,21,19,22,22,21,26,26,23,22,23,24,25,20,27,25,19,26,24,34,29,21,31,25,27,28,67,110,86,54,12,61,128,125,100,76,100,100,107,86,66,57,37,74,72,41,64,45,44,52,51,49,43,55,51,54,46,46,48,51,56,54,55,51,55,47,46,42,27,53,87,131,168,165,130,120,134,181,223,232,219,204,195,189,188,182,171,167,160,144,108,74,48,49,79,86,130,185,224,248,252,252,252,252,252,252,253,253,252,252,251,251,252,252,252,252,247,208,216,203,187,198,168,96,15,1,10,10,12,12,13,13,14,15,15,14,78,69,58,43,23,21,44,55,72,94,91,89,93,43,13,53,97,148,181,196,201,203,210,214,212,213,212,212,213,211,213,207,201,198,191,
182,181,163,63,20,39,73,118,118,108,91,79,77,70,72,81,95,105,120,134,148,155,169,139,33,38,103,170,216,218,222,220,215,213,201,192,179,163,143,129,112,30,1,33,49,76,75,93,101,109,132,123,105,59,20,14,21,34,46,60,76,71,79,91,100,136,131,122,91,27,33,61,92,94,81,71,56,48,29,32,123,188,178,145,156,90,9,15,15,27,34,34,27,28,26,28,51,86,105,100,99,104,128,160,208,149,16,21,105,167,179,174,176,174,181,189,192,193,187,181,170,150,128,105,79,54,27,21,41,63,92,116,141,159,178,194,200,200,199,194,185,179,175,168,167,166,163,162,160,162,166,170,166,168,169,168,164,162,160,158,160,167,165,161,158,158,160,160,160,155,157,160,156,155,154,155,157,158,158,159,157,159,159,154,157,157,157,158,154,157,159,156,155,158,160,163,165,166,165,162,163,164,160,162,168,165,166,169,167,165,168,166,167,170,165,162,162,158,161,161,165,167,170,169,164,163,167,171,171,167,167,171,174,167,159,159,159,158,152,153,152,155,166,194,219,192,122,54,65,96,95,118,124,126,134,128,132,132,125,123,119,116,115,110,106,104,98,89,74,69,75,84,92,83,75,83,94,92,96,93,86,93,91,87,75,95,105,84,63,43,28,26,29,30,41,45,54,55,56,61,61,62,60,65,60,56,60,59,55,57,55,54,57,54,54,53,50,51,53,53,51,55,54,53,50,50,50,50,47,46,50,51,49,47,48,46,46,48,44,46,45,46,46,43,46,42,40,44,44,46,43,43,46,38,40,38,35,41,35,39,39,34,42,42,39,40,42,36,33,36,35,35,34,33,34,37,37,38,39,35,32,34,38,33,36,34,33,33,34,38,30,31,31,30,33,28,33,32,24,34,34,34,35,32,29,24,31,30,31,30,26,25,24,26,22,22,24,24,27,30,32,23,40,69,36,21,30,78,123,100,76,46,28,24,29,33,29,29,27,24,27,24,36,29,24,26,22,27,30,32,31,27,33,27,27,27,31,33,22,27,32,25,27,31,30,27,36,40,27,31,72,87,55,27,17,22,25,23,27,27,27,26,26,31,25,26,25,29,32,29,33,34,27,23,29,31,33,36,33,36,33,29,29,29,33,34,36,36,32,29,34,33,29,19,19,16,21,27,19,23,21,22,22,23,23,20,22,22,23,22,22,17,25,23,51,99,53,23,27,15,19,26,19,25,23,23,29,19,28,23,17,26,23,21,21,26,24,23,20,22,24,22,21,26,24,23,24,23,26,25,27,25,24,33,80,113,77,55,16,42,164,185,160,127,107,126,149,155,119,91,69,82,106,106,101,83,54
,48,50,46,35,43,42,44,45,39,42,37,44,52,47,49,53,58,55,51,47,43,54,71,128,180,171,152,147,146,189,250,251,250,250,246,243,242,242,238,226,222,219,208,204,186,162,117,60,77,96,115,144,184,229,242,246,246,248,246,246,251,251,253,253,252,252,253,253,249,209,214,208,197,207,178,100,12,1,9,9,13,12,13,13,14,14,14,14,57,59,61,67,23,27,50,70,116,138,171,178,169,99,41,65,106,161,198,208,211,206,210,203,198,194,188,186,177,168,158,148,139,126,116,102,85,53,11,18,42,68,93,93,98,104,107,122,141,154,168,180,189,196,205,211,208,228,169,57,41,87,161,198,194,178,156,145,126,113,103,89,79,59,56,44,12,16,53,93,116,110,125,134,136,138,139,135,128,146,121,51,34,77,137,157,146,133,121,109,95,78,62,49,27,31,47,57,60,50,45,35,33,30,49,145,217,232,190,155,61,6,25,9,26,34,26,27,29,32,24,46,69,75,83,92,103,136,146,186,160,32,56,136,178,195,191,197,194,191,182,162,141,112,89,64,35,21,31,51,72,101,129,147,165,183,195,200,202,196,190,181,175,171,165,165,164,163,162,159,156,158,159,157,157,160,160,160,162,163,159,159,156,152,155,152,159,160,158,155,151,150,153,158,155,153,160,158,156,159,157,155,158,157,155,156,154,151,147,151,151,156,157,157,160,159,159,155,155,156,153,154,156,158,156,157,159,155,158,161,157,159,160,159,159,156,150,150,147,142,148,149,148,151,155,160,166,165,170,165,165,163,165,173,166,168,168,170,174,165,165,160,151,145,153,155,152,152,195,252,252,209,89,59,94,105,123,113,99,97,92,101,108,106,102,91,84,83,81,83,86,77,78,69,76,91,98,92,96,88,92,97,87,93,93,90,101,101,103,89,69,50,33,33,24,29,40,46,54,55,58,61,63,60,61,64,60,58,57,61,56,57,55,55,57,55,53,53,56,51,49,53,53,48,54,54,52,53,51,49,46,49,50,47,50,47,45,49,46,44,49,46,46,44,47,46,39,42,40,42,45,41,44,44,45,45,35,45,37,34,40,36,42,39,35,37,41,41,35,38,37,36,36,37,35,34,38,33,35,43,36,33,34,29,35,37,34,34,34,36,34,35,35,36,31,30,36,31,29,29,31,26,31,36,33,35,27,29,31,26,27,28,28,33,33,28,23,24,26,26,24,22,26,31,24,26,25,53,60,25,21,30,25,50,71,115,96,65,44,29,20,24,20,23,28,24,31,27,24,27,24,22,29,33,32,27,34,32,28,33,26
,27,29,29,27,24,28,32,36,25,29,36,29,27,23,50,88,71,39,22,20,26,26,27,23,24,24,30,31,26,28,21,25,31,34,35,32,29,22,27,33,35,33,33,36,33,32,29,27,31,32,35,33,28,34,30,35,31,17,20,20,24,22,21,23,16,25,26,24,20,19,20,19,26,19,21,22,23,18,72,94,29,25,27,15,22,18,27,22,19,21,20,25,22,26,24,22,23,19,24,22,18,24,28,22,23,23,27,24,22,24,26,26,24,26,23,30,45,94,111,71,48,21,28,137,219,201,168,151,139,116,116,133,125,115,100,115,117,119,123,113,103,99,99,101,89,94,100,111,114,102,86,60,44,55,49,35,33,36,50,47,42,41,47,41,68,113,132,147,161,160,206,252,252,252,252,253,253,253,253,252,252,250,250,250,250,246,239,204,165,159,116,59,33,55,89,113,132,158,185,198,206,212,220,231,239,244,246,250,250,232,171,188,191,183,200,180,103,12,1,10,10,13,12,12,12,13,14,14,14,145,151,154,108,57,53,46,78,132,178,198,199,195,105,38,61,98,147,179,184,171,161,153,147,134,123,115,107,99,86,78,69,62,59,58,49,58,47,10,29,66,112,162,175,177,181,186,189,196,197,194,196,193,196,194,191,187,190,129,23,22,57,101,127,102,84,72,65,67,73,84,95,107,120,141,136,37,25,102,159,208,200,207,191,149,145,138,137,157,184,163,73,39,79,135,153,125,109,99,98,92,77,62,38,23,25,23,30,34,28,22,19,21,29,37,100,173,207,221,210,80,7,22,11,26,27,27,26,24,31,26,69,131,149,151,150,150,148,138,146,100,57,135,200,204,185,164,143,121,97,68,42,24,27,44,69,96,117,148,164,179,187,198,199,190,187,178,181,175,170,163,160,165,163,161,159,163,166,158,152,152,155,160,159,159,159,157,156,157,150,152,157,158,154,149,153,159,159,160,154,150,149,157,159,159,158,156,157,156,159,157,158,158,152,147,152,150,144,144,145,150,151,155,152,153,153,151,147,150,150,145,144,142,151,149,151,152,143,148,150,144,143,139,139,138,131,131,132,139,141,141,147,146,144,144,137,133,132,131,132,127,123,128,134,139,143,146,147,145,148,148,145,143,139,151,155,147,136,163,232,253,253,135,66,67,93,121,104,99,95,89,86,91,97,88,74,77,85,76,69,75,88,96,93,98,103,103,103,96,98,100,101,84,86,101,93,94,80,54,36,32,30,33,39,45,51,55,64,67,65,60,62,64,59,57,58,60,54,50,54,60,5
4,54,57,55,53,56,51,47,56,50,57,55,49,51,51,49,52,53,49,50,49,49,48,49,45,47,46,46,53,47,46,45,48,43,37,46,44,39,44,42,44,41,46,44,39,42,36,35,41,35,36,39,40,39,42,41,36,33,36,40,36,34,31,37,41,39,37,37,33,34,38,34,34,33,30,32,35,33,36,37,26,34,33,31,31,29,34,31,32,31,33,35,30,29,24,29,31,31,25,27,32,32,33,33,30,29,27,26,23,25,25,39,40,22,28,27,27,27,26,28,28,28,24,31,73,113,118,93,56,36,27,21,22,27,25,24,26,26,27,27,25,31,32,32,33,27,29,27,34,36,28,26,26,31,25,26,34,29,21,31,29,23,21,28,31,69,91,57,27,15,28,28,27,27,24,30,26,27,23,25,21,25,36,30,29,33,31,25,29,34,32,38,35,37,31,29,33,26,34,30,30,29,32,30,33,33,29,22,20,20,22,23,19,22,20,25,19,20,24,21,22,24,21,20,25,20,22,40,97,62,20,30,14,20,22,21,23,22,23,22,27,24,22,23,23,28,20,23,26,21,22,22,25,21,24,21,20,28,24,24,24,24,25,26,31,68,115,104,62,41,20,35,118,197,210,183,176,170,150,117,100,109,124,129,117,121,118,120,135,139,139,134,132,137,139,149,159,165,160,158,147,133,139,119,95,73,51,48,36,46,40,35,44,26,28,47,65,96,120,132,182,248,249,251,251,253,253,253,253,253,253,252,252,253,253,252,252,251,251,246,210,141,68,25,26,19,46,108,158,175,174,171,165,171,180,177,185,195,191,157,116,142,147,141,158,152,99,15,1,11,10,14,12,13,12,13,14,15,15,185,186,175,102,60,49,42,71,109,152,173,174,160,71,24,43,69,112,128,116,104,87,82,72,66,63,59,61,70,81,85,98,110,118,137,143,159,128,42,35,86,144,197,210,208,200,198,201,193,184,176,165,160,147,132,118,92,91,45,6,22,42,86,98,97,105,114,134,147,161,173,182,193,198,210,186,78,39,98,162,208,195,199,168,130,132,133,120,101,103,75,35,32,45,78,89,83,78,61,54,50,37,29,27,21,23,32,29,27,26,24,28,27,38,33,73,124,146,192,227,101,7,17,10,25,29,24,25,16,19,27,40,69,74,72,71,63,63,45,47,30,44,138,155,127,92,50,30,28,34,57,81,107,130,155,179,186,193,201,200,191,185,178,171,167,155,154,151,157,155,158,160,164,158,151,162,160,163,162,157,154,156,159,159,158,159,162,160,154,153,148,151,158,160,161,153,157,155,154,152,151,149,150,153,150,151,153,152,154,156,157,153,158,156,152,154,150,156,154,
150,150,148,146,142,149,150,152,153,147,149,152,150,150,149,149,148,146,146,146,147,143,139,142,136,138,139,136,145,145,137,139,137,138,136,135,125,106,109,111,107,109,109,115,122,122,124,118,112,113,121,124,129,135,135,147,139,126,122,152,199,249,250,158,77,52,79,107,99,104,98,100,104,105,108,98,96,101,95,87,89,86,98,100,101,106,108,107,98,104,106,107,107,87,78,60,38,38,30,27,36,43,45,52,53,57,60,63,68,63,64,57,62,60,53,58,56,56,57,55,52,59,57,50,57,50,55,53,51,53,49,54,50,50,53,50,53,47,53,51,48,57,47,49,46,48,49,41,47,48,48,53,50,49,39,43,44,43,42,43,44,43,45,43,44,45,34,37,42,36,38,40,40,39,39,35,38,38,34,41,37,38,37,35,39,39,35,33,33,38,36,31,35,33,34,37,35,34,32,29,34,37,33,31,34,29,33,33,27,29,31,35,27,29,34,29,28,29,30,26,29,30,33,28,31,27,27,26,25,21,23,25,29,46,33,23,22,25,27,27,33,25,41,53,27,24,26,46,102,134,113,75,50,27,24,30,25,25,22,25,24,25,31,30,34,30,30,26,25,36,31,27,28,26,27,26,29,32,21,24,25,31,29,27,26,29,53,61,88,77,33,27,23,23,26,19,22,27,23,26,32,24,24,32,30,38,30,34,28,25,29,31,39,27,37,35,31,35,29,28,35,33,31,33,28,36,34,33,27,22,18,17,21,21,24,18,19,22,20,19,24,21,24,21,21,28,17,24,19,61,92,35,25,28,16,23,23,19,22,20,24,24,21,22,24,23,29,23,18,27,23,23,22,23,22,22,23,24,27,21,23,29,26,21,25,33,80,124,83,54,38,19,33,107,183,200,179,172,181,177,160,139,131,116,115,126,131,142,130,145,146,152,155,145,142,148,156,161,160,163,158,161,161,169,165,155,136,130,120,108,108,98,92,82,79,67,53,36,35,45,53,67,108,170,214,235,245,250,250,252,252,252,252,253,253,253,253,252,252,253,253,252,252,238,178,143,93,53,61,96,153,189,202,203,190,175,165,157,156,157,149,130,118,139,134,120,122,117,87,20,2,10,9,14,13,14,13,13,15,14,14,176,169,146,72,37,46,46,66,86,114,110,101,72,17,27,47,64,90,86,81,80,84,93,104,113,125,141,151,164,174,183,191,200,200,209,203,213,163,57,45,84,145,188,198,192,183,174,160,145,126,109,96,85,73,63,55,45,49,28,8,52,100,153,181,183,191,196,202,207,206,206,204,199,195,195,162,59,22,67,116,140,129,131,122,113,125,127,109,71,56,47,26,27,32
,42,45,39,36,23,24,21,19,18,23,30,26,33,32,29,46,55,62,71,78,97,144,177,152,153,178,76,7,14,8,24,32,25,19,17,17,21,30,27,29,30,25,32,50,48,40,39,36,44,43,49,66,83,103,126,146,164,184,195,200,202,196,183,173,176,169,167,162,160,154,148,148,139,146,149,145,150,153,153,156,154,152,153,157,158,156,160,154,152,155,150,153,162,157,156,151,150,150,153,154,155,155,150,148,148,150,153,150,151,150,150,152,146,147,151,152,145,147,153,155,153,154,156,157,158,150,149,145,147,141,143,149,150,153,150,149,148,151,155,151,153,154,151,145,152,152,149,153,152,155,164,161,157,160,155,153,149,147,148,149,154,144,137,141,145,157,155,152,148,144,143,141,135,130,137,141,139,139,141,143,137,116,106,108,160,191,204,204,158,136,99,102,98,87,108,98,99,106,111,115,108,107,105,103,97,95,97,93,95,101,105,115,110,90,100,96,79,66,36,29,30,33,41,39,48,53,55,53,60,65,56,58,60,65,60,54,58,55,57,57,59,53,56,60,55,56,53,54,53,49,54,57,52,51,55,58,51,55,54,46,53,49,49,57,52,48,49,48,49,49,51,46,47,46,49,48,47,49,46,49,43,41,44,37,42,44,43,42,39,38,42,41,36,42,37,42,39,36,36,38,40,39,39,36,38,39,36,42,42,36,37,34,35,30,32,36,36,32,33,36,36,34,31,38,33,30,31,36,31,26,33,35,34,33,29,30,34,35,31,31,31,26,31,29,29,33,31,26,29,31,25,28,32,29,21,23,27,25,24,25,24,26,22,29,32,26,25,48,46,21,23,35,30,29,54,103,135,118,79,48,29,20,17,22,21,24,33,31,32,28,29,29,27,36,35,27,24,29,31,24,33,31,28,27,27,25,29,28,29,24,30,53,39,61,86,61,29,21,26,24,24,30,29,23,33,31,22,27,33,35,33,36,34,27,24,26,34,35,32,33,34,35,30,28,32,34,31,37,33,27,35,29,35,28,20,22,18,21,21,28,21,18,23,20,17,23,20,19,21,19,21,21,20,29,90,69,19,27,22,17,22,22,24,26,22,23,19,23,22,20,22,24,28,26,25,22,21,22,26,21,21,23,23,27,24,25,19,24,27,53,113,116,79,46,32,27,41,92,160,195,181,168,174,182,173,168,171,155,131,111,112,139,166,170,173,174,170,164,154,154,158,162,165,162,162,158,156,160,163,160,150,149,158,158,148,148,144,118,112,110,91,81,58,41,36,27,17,25,58,89,117,132,149,181,217,241,246,248,251,251,253,253,253,253,252,252,252,252,252,252,248,201,
36,29,30,32,33,30,31,31,31,30,32,31,27,19,22,25,24,39,46,41,60,71,63,76,88,93,95,100,70,72,91,91,81,76,80,53,52,39,68,33,29,31,29,23,22,24,21,19,24,22,21,24,27,30,37,53,64,74,82,76,90,90,81,98,92,102,143,181,182,165,157,154,153,160,159,153,149,146,147,151,150,157,157,159,163,160,163,160,162,161,160,165,163,168,165,159,159,158,156,159,167,172,171,147,111,93,94,124,158,167,171,166,165,166,159,164,167,165,163,160,157,156,163,169,169,167,169,167,164,163,162,165,167,165,168,172,173,176,171,169,170,175,177,174,171,164,167,173,173,174,168,159,159,160,154,155,167,173,191,193,194,192,184,178,180,178,171,170,160,162,157,159,111,13,1,9,9,12,11,13,11,12,13,14,14,18,20,21,20,18,16,17,19,19,22,21,24,24,25,22,23,21,23,26,22,26,26,29,40,44,44,47,47,35,26,21,18,23,20,17,18,23,19,25,24,26,27,19,24,27,24,24,27,21,24,29,26,29,28,30,34,34,36,39,38,41,37,34,35,34,37,29,27,26,23,23,22,21,23,22,23,21,27,29,22,28,25,27,25,22,28,24,23,29,37,52,61,92,135,174,202,207,206,207,216,205,191,188,192,195,189,169,147,137,135,128,122,109,105,114,118,124,124,121,117,125,128,113,99,92,97,94,96,86,81,100,104,101,85,76,89,97,90,75,57,60,70,74,79,95,96,94,106,112,123,126,123,125,116,118,136,142,141,132,116,119,117,115,136,146,145,137,124,119,122,131,135,131,129,136,139,151,156,136,111,104,105,110,116,128,137,131,115,122,139,137,144,137,138,124,95,99,116,137,153,158,165,156,131,138,140,130,120,109,105,101,102,105,118,124,106,97,92,79,79,81,78,91,98,99,92,95,105,103,113,114,107,101,88,94,104,101,96,91,102,116,115,110,107,93,91,92,81,79,87,100,103,101,98,90,87,85,84,89,95,100,89,94,87,81,91,96,99,100,111,108,90,89,87,84,84,76,83,80,81,78,67,59,57,63,61,66,70,63,65,63,70,68,61,59,48,57,59,75,103,119,147,160,173,179,152,117,105,117,141,182,196,185,193,194,190,181,161,151,150,146,111,66,27,7,10,12,15,26,41,51,49,52,58,53,55,55,50,57,54,52,51,53,52,50,54,48,47,53,46,50,47,46,48,46,49,45,48,45,44,48,39,45,41,48,48,40,47,38,43,41,39,39,37,43,44,42,36,33,34,31,35,34,34,34,28,39,37,34,31,31,33,36,34,35,30,32,36,36,36
,30,29,30,31,29,33,29,25,29,32,32,31,30,27,28,31,27,29,31,23,30,31,29,29,27,31,22,24,29,24,26,27,27,24,25,27,28,23,22,26,23,23,24,26,24,24,24,25,24,19,25,22,20,23,22,21,21,25,24,22,22,21,24,21,19,25,24,23,20,22,22,18,24,23,23,22,27,24,24,28,26,27,24,33,25,24,27,22,30,27,24,32,29,37,39,29,29,27,32,30,27,31,30,24,30,29,22,24,27,29,26,27,31,30,28,30,25,23,25,26,26,40,63,68,77,77,95,110,169,226,150,53,38,41,48,44,42,48,47,49,46,48,43,36,40,38,44,33,22,17,22,24,21,31,37,45,39,32,33,29,35,40,29,34,34,29,33,30,29,27,27,25,21,26,19,21,23,18,25,23,22,25,25,20,24,24,25,41,50,63,65,79,84,72,77,86,88,87,95,92,132,177,184,174,157,155,148,147,152,145,146,145,144,152,152,160,165,160,164,161,161,163,160,161,159,156,160,160,160,152,147,151,148,152,156,160,170,168,163,145,121,96,92,119,140,162,176,169,160,147,151,154,157,159,158,155,155,162,165,162,162,158,149,149,148,141,134,142,150,158,164,163,167,165,158,162,169,166,165,166,162,160,162,159,159,155,155,162,155,152,152,159,168,175,188,195,194,190,188,190,184,180,176,169,169,156,166,113,11,2,9,10,12,11,13,12,12,14,14,13,21,21,17,19,19,21,17,17,20,20,21,24,25,21,21,27,20,18,21,24,27,29,33,49,57,61,69,66,52,35,20,17,21,17,21,16,21,25,22,26,27,29,26,22,29,29,29,27,30,36,35,37,39,42,39,42,33,34,35,29,38,28,29,25,22,24,21,23,24,20,24,23,20,18,21,23,24,25,24,24,29,29,23,25,24,24,29,26,44,64,67,137,96,162,132,144,153,153,163,155,194,139,158,149,138,143,125,118,117,111,101,88,98,105,124,141,145,147,133,129,128,122,123,120,117,108,114,122,109,101,95,93,92,96,90,92,96,97,74,78,75,84,93,91,98,101,101,104,110,116,118,104,115,117,123,131,136,123,128,119,112,116,110,118,116,117,106,97,90,88,98,95,88,85,92,105,114,134,128,112,125,127,122,114,110,111,108,98,110,123,131,123,136,131,109,96,108,118,130,119,121,125,118,104,110,113,111,96,95,93,93,93,84,90,105,87,84,84,78,84,78,90,108,116,108,96,100,100,99,105,100,93,98,84,95,92,99,103,110,110,114,116,120,110,102,109,101,94,90,95,94,97,94,88,84,83,78,89,82,82,83,86,93,91,93,95,89,91,94,83,93,80,86,83,81,
78,74,81,87,81,71,71,59,60,61,61,61,64,65,66,69,59,64,66,56,54,52,44,35,61,51,108,142,175,193,184,171,138,101,94,123,125,122,137,148,169,180,191,208,249,252,252,231,180,133,87,37,8,9,11,13,21,35,43,49,56,57,58,55,52,51,51,54,52,50,49,47,52,47,49,50,48,46,46,44,48,48,44,47,45,45,43,43,42,41,42,46,42,39,39,37,41,37,44,42,36,33,37,35,33,37,35,39,37,33,33,33,33,34,31,31,32,34,35,36,34,31,35,31,30,31,31,29,31,33,29,32,32,27,28,29,33,27,29,29,27,28,29,27,31,33,23,30,29,26,24,27,25,22,26,28,25,26,27,22,28,29,25,22,24,23,26,28,23,26,23,22,22,21,22,20,20,27,24,25,25,19,24,24,21,20,17,20,25,23,24,24,19,24,22,21,19,18,23,22,22,26,24,25,29,28,27,24,26,25,28,24,23,23,27,30,29,26,39,35,27,35,33,27,29,27,29,25,22,27,27,27,27,28,27,27,24,24,30,27,23,25,29,24,25,24,36,40,45,59,68,69,68,123,125,68,46,41,39,51,75,109,98,98,112,103,90,78,81,89,90,73,48,27,26,32,30,38,37,39,37,21,23,21,19,25,24,27,25,23,22,22,19,21,22,24,19,19,24,21,20,21,29,23,23,24,22,24,25,28,30,42,55,62,66,68,78,76,69,74,95,89,83,96,113,161,184,181,168,159,162,155,152,160,158,160,158,153,159,164,166,167,163,164,161,165,163,160,160,156,155,156,155,147,143,141,143,148,154,157,156,158,161,162,158,152,121,93,88,103,147,166,171,166,152,150,153,153,152,156,158,155,156,154,154,152,150,151,146,145,139,137,141,143,151,154,149,145,145,149,155,159,159,162,161,160,157,158,156,153,152,155,160,158,157,159,164,165,171,183,194,200,199,198,191,188,184,181,172,171,164,169,113,11,2,9,10,14,10,14,13,14,14,14,15,21,21,18,21,17,21,19,19,18,18,21,20,27,23,19,21,21,21,19,23,25,27,41,51,60,63,76,77,54,32,19,22,17,21,22,19,23,19,31,33,30,33,33,37,35,39,37,35,39,40,38,37,36,31,31,26,23,19,19,27,30,25,22,21,22,26,26,22,23,21,22,22,21,23,19,21,29,21,21,30,23,25,29,23,29,31,30,33,49,63,71,79,74,65,66,62,48,44,54,67,66,64,54,39,43,60,74,90,103,97,102,106,95,96,97,90,89,83,81,63,62,84,94,105,110,118,123,130,135,121,92,84,103,112,104,101,100,108,118,111,110,124,123,109,103,107,102,95,90,76,70,77,89,90,91,90,86,94,114,115,99,89,76,75,79,78,79,81,73,
78,87,83,78,76,74,78,87,94,98,97,107,104,101,106,92,86,84,78,80,86,88,90,86,91,86,83,83,78,83,71,73,76,71,77,74,69,72,72,83,81,82,87,86,89,89,88,88,94,90,93,101,101,110,111,114,110,101,101,94,97,93,89,101,97,99,99,105,108,106,109,110,107,103,106,108,105,102,108,111,104,105,101,95,89,96,97,96,103,101,101,97,88,101,97,98,102,102,102,89,89,80,73,86,83,81,75,72,78,80,84,85,79,71,68,63,64,72,71,68,61,63,70,69,68,64,61,59,59,49,41,50,68,83,90,103,131,137,128,139,159,171,130,79,74,95,114,132,140,162,228,253,253,252,252,252,252,201,155,103,60,21,6,11,12,17,19,40,45,48,53,50,53,50,49,51,55,54,57,49,46,48,46,47,48,46,44,47,54,49,45,43,44,44,43,41,42,43,40,41,39,39,39,35,43,35,32,38,32,39,32,32,37,33,35,36,37,36,29,34,28,29,35,33,36,36,32,33,29,33,35,29,32,30,29,31,36,30,31,31,29,34,31,29,25,26,29,29,30,31,24,29,28,27,28,23,27,26,26,27,25,30,24,25,24,28,28,19,26,29,20,24,29,19,27,19,25,24,19,24,21,25,22,24,24,24,22,20,21,23,23,23,19,23,23,21,19,20,23,18,24,22,19,22,21,21,24,22,25,26,20,27,23,24,27,28,25,18,27,24,26,27,27,27,27,29,28,29,31,35,30,25,29,24,29,31,26,30,25,30,26,31,26,23,30,27,25,24,27,27,27,31,38,38,32,40,39,29,27,43,49,34,36,34,33,48,106,155,141,143,151,145,132,120,134,149,150,141,91,36,27,31,29,32,28,21,24,22,23,23,19,24,20,22,28,26,21,21,23,22,22,19,23,23,19,23,22,22,26,24,20,24,31,31,39,42,47,55,52,66,72,70,84,71,70,96,89,84,93,122,164,184,177,168,167,166,165,157,162,166,162,163,162,158,162,166,165,165,163,165,162,165,160,160,162,161,160,163,158,155,155,153,157,156,165,164,160,160,157,166,165,161,153,126,102,86,105,142,162,175,172,168,162,158,154,156,162,159,155,159,154,150,149,148,155,157,155,152,155,155,158,156,149,147,151,151,157,161,161,164,163,162,160,162,156,155,157,158,166,163,165,165,167,172,175,179,187,194,194,192,189,188,184,181,174,173,165,173,114,10,2,8,10,12,10,13,12,13,14,13,13,18,21,21,19,21,17,17,19,18,19,18,19,24,21,21,21,26,21,19,23,21,29,34,50,57,63,71,66,51,30,22,22,20,20,19,25,30,30,34,38,42,43,39,39,44,38,37,32,33,28,23,23,22,26,19,22,24,
19,24,27,27,26,23,24,23,23,24,24,23,25,23,24,21,20,23,17,19,24,20,18,25,27,33,34,30,43,42,36,39,36,49,50,58,57,52,50,44,48,46,59,71,73,84,93,96,97,88,118,133,104,132,113,103,94,89,66,70,83,81,81,87,90,116,120,131,137,136,131,129,132,98,87,96,124,131,109,106,130,118,108,97,96,84,74,75,84,83,64,56,57,62,77,60,53,49,56,78,55,57,91,72,59,61,85,71,77,97,89,81,94,78,72,75,82,83,83,87,95,88,79,83,86,95,105,109,105,83,80,76,71,78,86,83,80,85,91,83,76,69,67,68,81,77,89,88,84,93,96,105,98,99,97,99,101,99,101,105,113,104,104,107,106,108,106,107,103,110,106,105,106,109,114,109,103,105,105,124,120,119,117,111,107,105,108,97,109,106,116,114,115,121,118,122,121,117,120,115,112,118,108,107,99,97,100,95,101,94,93,91,82,90,87,89,81,81,83,71,81,96,99,83,76,63,62,66,73,78,66,73,73,77,71,74,69,71,69,67,66,62,55,46,46,39,38,36,34,39,73,110,152,177,130,84,66,67,68,59,71,94,154,230,252,253,253,252,252,252,252,252,220,175,128,83,46,12,9,11,12,21,33,43,45,50,50,54,50,52,52,46,46,45,46,45,47,43,44,48,55,55,42,42,42,39,44,42,38,41,39,42,39,34,38,39,39,37,37,38,37,36,38,34,35,39,35,30,31,35,38,35,33,34,36,34,31,29,34,34,25,31,30,32,33,33,36,27,32,32,34,34,27,24,29,29,26,30,29,24,23,27,23,30,32,24,29,27,30,25,26,26,25,22,26,29,20,22,24,24,26,29,25,25,23,21,24,26,20,24,21,20,29,24,22,23,22,19,24,21,21,21,23,24,16,26,22,19,24,19,19,23,22,19,19,24,19,21,19,19,24,24,27,23,27,22,28,23,29,54,35,25,23,25,26,28,37,32,32,30,36,42,26,27,33,27,27,24,28,31,26,23,28,28,26,24,24,30,28,28,25,24,29,29,38,40,39,36,36,31,23,30,30,29,33,32,39,46,81,142,136,137,142,138,134,124,126,130,153,157,111,37,14,20,19,24,20,23,27,18,24,27,21,27,19,20,24,20,22,24,23,24,25,24,24,21,21,24,20,21,26,25,22,31,36,39,48,48,48,55,65,75,68,71,78,75,88,91,81,92,122,167,184,177,170,162,160,162,165,162,164,168,168,171,166,157,160,159,161,165,160,163,163,166,160,160,163,160,162,166,164,166,167,163,168,166,167,164,163,167,165,165,165,169,164,155,137,103,88,97,131,166,180,179,173,168,163,162,160,158,159,158,151,148,152,158,158,159,164,165,1
58,161,166,160,158,159,159,157,157,161,162,160,162,167,162,162,162,165,162,162,171,171,168,169,172,170,176,179,186,186,184,188,181,181,176,173,170,172,162,170,116,10,2,9,10,12,11,14,12,13,14,14,14,23,20,19,19,19,20,19,18,22,18,20,18,18,23,21,24,21,25,20,22,24,24,35,44,48,56,59,63,49,29,29,24,27,27,31,31,36,36,34,42,39,34,36,32,29,29,27,23,22,24,23,20,23,23,24,23,20,23,22,26,27,25,24,24,24,22,23,24,24,25,22,21,23,18,16,17,21,21,19,19,23,29,38,37,30,38,37,39,39,31,37,49,52,49,49,52,45,44,52,63,76,90,92,105,120,109,98,87,91,93,87,105,100,91,81,71,81,85,101,102,102,96,87,91,83,87,88,74,69,84,92,82,80,83,96,107,104,108,109,98,82,68,76,80,80,86,80,85,86,92,96,90,84,76,80,80,85,83,83,79,82,94,98,106,106,114,120,121,116,113,107,95,95,100,103,111,105,107,106,97,99,93,96,104,110,113,101,97,95,92,98,97,94,100,101,104,104,108,110,99,98,98,97,107,104,107,104,100,110,104,108,108,105,103,103,112,113,114,112,107,110,112,111,105,108,107,104,111,116,119,116,109,116,116,118,120,128,131,123,120,112,116,120,113,111,107,113,122,116,117,122,120,123,117,118,111,108,115,104,99,99,102,106,97,96,93,94,92,89,103,105,97,98,95,104,110,103,96,95,90,87,76,69,71,64,67,65,67,80,77,71,67,72,65,62,69,61,57,56,55,54,52,49,41,38,34,33,30,46,80,107,110,101,89,77,60,46,43,48,70,123,159,193,225,238,253,253,253,252,252,253,253,247,211,170,122,86,48,17,8,11,11,18,36,34,41,47,44,35,42,47,39,49,46,47,48,45,50,49,50,40,43,41,41,41,38,46,39,34,33,38,39,34,36,37,41,34,37,38,36,35,31,34,32,31,33,36,29,37,31,30,33,27,32,32,32,34,32,30,35,31,32,27,33,29,33,28,26,32,24,34,25,29,29,25,27,27,26,27,33,23,22,29,30,25,25,24,23,21,25,24,24,26,23,26,27,28,23,22,25,21,19,21,29,19,24,26,21,27,16,19,21,21,20,22,23,21,24,23,22,19,22,23,22,21,26,22,19,23,20,21,19,16,21,22,21,22,19,18,23,24,30,27,30,22,50,61,25,24,24,24,23,30,27,33,32,30,34,23,30,28,28,27,34,33,26,28,28,33,32,26,25,28,27,28,29,33,37,33,27,32,37,39,42,38,37,29,24,28,35,35,35,29,41,45,68,119,125,124,126,132,129,119,101,115,141,156,119,39,14,14,14,23,24,24,22,19,26,
23,21,23,24,17,21,24,18,26,22,23,23,23,24,19,24,20,25,27,19,28,29,42,44,42,52,50,56,62,69,78,64,76,90,76,87,88,90,113,151,181,173,166,156,146,145,153,163,164,167,171,169,172,167,158,162,156,163,162,160,162,159,162,160,161,162,162,163,166,167,167,165,167,168,168,168,166,170,168,166,168,167,165,166,169,166,139,104,84,91,128,159,179,181,178,173,165,164,155,153,155,151,153,157,163,162,162,161,160,160,158,160,158,157,158,153,150,153,161,160,158,158,155,155,159,163,170,168,165,169,167,168,166,164,167,169,173,177,176,175,174,170,172,166,169,168,170,166,174,115,10,0,9,10,12,11,13,12,13,14,14,14,21,22,16,19,17,17,20,19,18,16,22,20,21,24,19,21,21,23,27,23,20,25,31,36,39,43,47,49,42,31,42,30,31,37,28,36,26,32,28,24,31,24,28,23,19,23,24,22,20,24,23,19,23,22,22,21,20,21,22,24,30,28,23,29,26,19,23,24,24,23,17,23,23,17,18,21,22,17,21,18,20,29,29,31,29,27,27,31,32,31,32,32,33,35,38,39,38,36,36,37,37,40,42,40,39,45,67,37,39,39,33,65,41,40,42,40,39,47,49,53,57,78,59,46,33,38,46,35,38,34,74,35,71,71,59,69,91,84,90,89,88,83,93,92,98,96,93,96,94,109,112,105,104,106,105,102,111,100,111,105,103,99,110,111,116,110,105,117,117,112,112,101,109,112,112,104,105,109,106,107,103,99,98,97,103,113,107,119,116,118,121,115,107,107,113,118,110,110,107,115,115,116,119,112,108,105,107,112,113,115,116,111,118,117,114,119,122,123,122,125,120,124,117,114,122,116,118,110,114,111,110,116,113,122,122,116,117,115,116,116,120,114,124,124,118,116,114,117,118,117,110,107,110,107,105,112,116,114,110,110,116,111,109,107,101,102,97,97,102,99,102,98,104,110,107,108,95,89,91,85,89,86,75,79,72,79,69,67,77,71,70,73,70,69,64,63,58,52,52,55,55,53,49,51,53,57,63,48,49,63,82,85,81,83,85,79,69,62,51,48,68,121,103,125,147,178,196,218,252,252,252,252,253,253,252,252,250,195,152,122,84,44,14,9,56,12,24,36,36,40,47,46,42,44,46,46,42,45,49,46,44,44,42,42,42,38,40,41,39,39,36,39,35,40,39,38,35,31,34,42,36,35,38,25,38,35,33,32,29,37,27,27,29,26,32,29,29,33,30,33,35,28,30,35,29,30,28,28,30,27,29,29,29,27,27,25,27,24,30,29,24,25,33,
26,24,23,22,26,17,22,24,24,24,26,29,26,24,27,24,20,23,26,25,20,24,24,21,24,21,19,21,19,22,21,23,27,23,24,18,19,24,19,23,23,22,21,21,24,21,18,23,18,19,22,19,24,18,19,21,21,22,20,24,28,24,31,35,21,24,25,24,28,20,29,23,30,29,21,28,27,29,30,38,34,28,28,30,34,35,24,29,26,25,31,27,34,40,41,27,22,33,38,43,42,34,36,27,23,28,32,32,31,37,36,44,53,94,105,101,107,115,113,102,97,106,130,147,111,39,15,17,16,25,24,20,24,20,19,22,25,24,21,24,26,19,23,22,24,27,23,19,23,21,20,23,24,22,29,34,39,44,42,47,51,54,57,66,72,66,67,84,89,79,83,88,108,149,164,167,165,160,148,138,139,151,162,156,160,161,162,168,163,162,164,157,159,157,154,158,159,160,157,162,163,160,164,162,159,157,159,157,159,158,158,163,160,163,160,159,162,164,164,168,169,159,144,114,85,91,119,154,177,181,177,168,164,160,157,161,156,159,162,162,165,160,162,160,155,160,160,157,160,156,156,157,154,153,152,151,150,152,153,154,156,163,160,159,160,157,159,157,156,158,163,160,167,165,163,164,159,164,163,165,164,168,161,174,116,10,1,8,10,13,11,13,13,13,13,14,13,18,17,21,18,19,19,19,17,21,20,21,23,24,24,24,24,22,25,28,29,30,29,32,34,32,33,39,34,37,33,29,34,24,21,27,24,22,21,28,26,23,22,19,25,24,25,27,24,21,23,21,21,23,22,24,19,24,21,24,23,26,28,21,23,20,20,24,20,17,18,20,17,19,17,20,18,20,18,18,21,22,31,40,49,46,48,53,59,61,59,57,54,55,52,48,43,41,43,40,45,51,52,46,42,42,42,46,51,42,41,38,41,47,37,40,45,46,48,52,53,59,69,85,81,75,83,83,87,80,79,78,74,82,79,88,85,88,103,99,98,98,100,106,105,108,110,108,108,102,105,120,128,123,117,116,108,111,113,118,120,110,116,122,125,119,113,100,98,106,112,112,107,117,117,114,111,108,114,111,105,105,110,112,111,113,118,117,118,127,117,127,120,117,125,119,120,114,116,114,114,118,123,120,122,112,113,121,118,125,118,120,122,120,121,120,122,124,125,127,122,123,124,126,127,125,125,117,111,111,111,110,107,112,117,117,111,107,108,111,113,116,123,129,123,125,124,115,112,100,103,112,115,119,119,118,117,118,119,118,114,117,114,106,100,104,98,94,92,99,100,87,94,100,98,87,74,86,92,89,79,74,73,74,87,88,83,75,68,
75,90,83,73,74,81,77,77,66,56,59,53,57,57,55,60,69,81,90,72,69,77,72,67,60,58,72,93,97,99,96,114,132,111,107,101,88,104,111,126,155,187,218,239,253,253,252,252,252,252,252,252,244,213,184,153,114,90,37,6,10,11,23,33,38,42,39,44,44,42,45,51,47,45,40,40,39,35,37,39,33,36,39,37,41,42,46,41,31,37,35,33,35,33,36,36,33,29,28,32,36,33,29,34,28,27,28,31,33,24,35,33,22,33,31,30,31,28,31,33,32,21,29,29,26,26,22,29,23,23,34,22,20,26,19,29,24,23,24,17,29,27,24,26,25,25,21,24,22,29,22,24,25,21,28,25,24,24,22,19,21,23,22,20,24,19,22,23,19,24,19,20,22,21,22,20,20,19,23,21,16,23,19,23,24,20,21,21,21,22,22,21,19,20,23,19,22,24,27,29,26,23,23,26,24,25,25,26,20,30,32,25,27,33,39,28,23,31,34,32,37,31,24,31,28,29,31,29,32,29,24,26,33,39,41,37,36,37,27,21,27,32,33,34,35,35,39,48,63,85,90,95,98,89,93,85,104,107,127,107,36,24,17,17,24,22,23,20,23,21,22,25,18,28,22,23,24,17,26,21,21,23,24,22,20,27,24,29,25,33,42,35,47,44,49,60,56,59,64,71,73,72,91,81,76,92,103,149,171,163,167,163,163,163,152,153,159,154,147,147,155,157,163,162,160,166,159,160,159,155,157,157,154,153,163,162,160,165,160,157,160,151,148,148,144,144,147,148,150,155,152,151,153,158,160,166,171,171,155,122,93,83,116,145,169,175,172,171,162,165,162,159,162,164,159,160,162,159,160,162,161,163,165,169,165,164,167,161,155,153,158,155,152,155,155,154,155,154,150,148,144,147,150,151,153,152,153,158,157,159,160,158,162,163,165,165,167,162,170,113,12,2,8,10,13,11,13,12,14,15,13,14,22,20,17,23,18,19,19,22,20,21,24,20,31,28,28,34,33,33,34,36,37,38,36,35,35,31,32,29,24,25,24,20,22,19,17,21,22,24,24,20,25,22,23,29,23,23,27,22,24,23,22,20,19,23,19,21,21,18,24,21,22,24,18,22,19,17,21,21,19,21,18,20,17,17,18,19,20,16,24,23,21,31,45,54,63,63,74,89,62,64,64,60,59,57,63,57,58,61,66,73,66,53,55,46,42,37,53,52,49,46,53,59,57,52,57,57,62,70,54,47,47,72,97,109,103,108,113,104,114,122,134,130,129,107,113,115,107,111,107,108,107,110,109,112,118,114,117,113,107,113,115,118,117,113,115,108,107,107,112,117,118,117,124,131,127,116,116,113,117,124,124,124,1
21,122,128,122,118,118,116,110,115,115,123,126,120,123,113,117,117,118,119,118,129,125,132,124,122,124,116,122,118,113,128,126,126,121,119,126,122,124,123,119,126,130,128,127,125,123,122,118,114,125,125,122,125,122,122,117,119,121,117,120,119,121,122,123,122,121,123,123,127,113,119,120,117,122,119,114,113,120,118,122,121,118,119,121,122,117,110,111,108,116,108,109,105,103,92,99,98,89,88,86,89,93,84,84,87,86,92,85,92,94,82,85,93,77,72,66,76,73,66,66,70,76,77,75,68,62,60,59,63,63,63,63,70,62,79,70,74,81,84,71,59,59,71,81,81,87,98,104,117,93,111,105,107,105,88,71,76,89,111,137,168,194,211,235,252,252,253,253,252,252,252,252,199,228,154,120,108,64,25,8,9,17,24,28,34,41,40,46,50,43,36,40,40,38,38,39,39,36,39,41,34,36,37,42,37,34,35,34,39,33,34,34,33,35,30,33,29,30,31,30,33,24,29,29,24,30,25,28,30,25,31,34,30,29,32,29,25,26,24,27,24,24,27,21,32,27,29,22,21,19,24,26,26,25,27,28,23,27,26,27,21,26,26,19,24,21,21,25,27,20,22,22,19,20,23,21,25,19,23,26,18,20,22,22,21,19,19,22,21,22,23,19,20,21,22,19,21,21,22,20,17,17,21,21,18,19,21,22,19,24,23,18,22,18,24,24,25,27,24,29,26,23,26,24,27,34,30,25,27,25,28,27,32,32,31,42,42,34,29,28,29,29,32,32,28,29,25,32,35,39,44,36,38,35,25,24,31,32,39,32,34,38,42,43,67,104,112,114,108,97,95,95,109,110,142,126,50,23,12,16,21,20,20,19,24,24,18,23,24,20,21,26,21,20,20,21,24,20,21,25,21,24,30,29,31,33,33,39,51,46,57,58,59,62,74,75,75,81,83,88,93,107,144,171,174,162,159,163,169,165,163,165,163,161,150,153,159,158,159,159,162,167,166,168,162,159,156,160,162,157,163,164,164,163,158,159,164,158,157,156,145,148,151,146,152,156,156,155,148,153,162,167,177,179,174,160,137,103,87,104,135,165,177,179,168,162,165,161,165,161,159,160,159,168,169,167,171,172,169,173,170,171,171,163,160,156,161,166,157,158,155,155,163,158,154,150,149,148,152,155,152,151,153,161,159,158,162,160,160,162,164,160,163,153,169,115,10,1,9,10,12,10,14,12,12,14,14,14,21,20,22,21,23,20,24,27,22,29,29,27,33,35,33,34,36,33,34,30,29,27,21,27,26,21,23,22,26,20,17,21,18,19,18,18,18,22,21,26,
24,22,23,23,23,17,23,22,21,24,19,19,18,18,21,18,17,17,24,20,19,21,17,17,17,17,17,17,19,19,19,18,16,17,17,18,19,19,19,18,18,29,39,53,57,50,45,42,49,53,56,55,52,54,58,57,63,67,70,69,62,57,59,54,50,50,49,50,51,49,51,59,61,62,64,66,71,65,57,51,51,53,66,81,90,98,86,77,83,105,116,121,122,125,132,120,117,117,114,118,116,117,112,107,116,113,106,111,113,112,117,117,111,112,113,110,112,110,117,122,114,120,117,117,120,117,108,115,128,137,138,122,116,115,122,127,123,116,115,116,124,124,126,124,120,119,118,112,110,109,116,112,110,110,111,125,109,111,117,118,123,118,110,112,112,115,118,123,131,127,129,127,131,132,134,128,117,124,124,115,121,121,125,124,119,122,122,122,125,127,129,129,136,134,128,135,126,124,130,122,115,106,111,118,119,122,125,129,134,127,116,116,115,118,124,127,127,127,125,124,120,122,126,121,119,110,117,109,103,89,74,76,81,93,89,86,91,81,83,85,93,105,90,83,76,70,69,67,64,60,60,64,72,74,71,74,72,71,71,68,68,65,58,53,54,59,66,65,74,77,76,74,61,46,54,64,57,57,59,78,80,62,84,112,138,149,134,115,89,66,53,53,64,83,108,136,162,184,205,220,237,246,253,253,253,253,220,248,250,211,177,149,147,114,55,30,12,9,14,21,37,38,35,32,36,33,37,42,34,34,36,38,32,33,39,41,33,34,39,34,35,32,32,35,32,32,32,31,29,37,32,27,29,29,25,27,28,27,24,27,29,29,29,29,30,24,23,28,32,27,27,24,24,31,26,27,26,22,25,25,29,27,26,23,27,29,25,23,28,23,19,26,23,24,25,24,23,25,24,23,21,23,28,21,19,21,25,26,23,22,22,19,20,21,20,21,21,22,23,21,23,20,19,24,19,19,20,19,23,20,18,22,22,22,21,19,18,18,21,20,19,21,19,20,19,23,22,16,19,24,22,24,27,24,27,24,37,35,29,36,32,26,30,31,31,31,42,53,36,33,29,32,27,27,31,27,27,26,32,29,35,39,41,41,39,42,27,24,32,34,38,34,37,39,42,43,77,137,145,135,135,118,113,109,127,134,167,150,63,27,11,14,21,22,24,18,24,25,19,23,24,23,19,19,25,21,23,24,22,21,21,23,27,27,22,27,34,35,37,43,50,54,56,56,63,68,78,79,78,87,96,92,105,152,173,179,169,157,160,160,165,169,162,162,160,164,168,165,165,160,159,164,167,170,166,167,163,160,155,161,165,163,171,165,160,155,149,148,159,162,166,163,160,160,
156,153,158,165,164,159,157,160,161,161,165,172,173,176,170,136,104,84,101,132,163,182,174,169,166,166,167,160,158,159,162,164,165,169,167,166,164,161,159,162,164,158,154,159,166,165,161,159,156,162,170,163,163,163,158,160,164,165,163,165,168,174,175,170,165,165,161,160,162,154,154,148,163,114,11,1,9,10,12,11,13,11,13,14,14,14,24,22,25,27,33,26,32,33,27,34,33,34,33,31,33,27,26,24,27,23,19,22,19,19,22,19,23,22,20,18,23,23,18,21,18,17,20,24,27,20,25,24,19,22,22,24,19,18,20,19,18,17,20,20,17,21,19,19,20,22,21,16,18,18,19,18,18,17,20,21,19,21,18,20,18,18,19,18,19,18,19,33,37,51,54,45,48,57,56,57,54,54,63,57,74,65,66,66,89,89,89,85,76,63,76,66,59,63,66,72,70,83,89,86,87,88,103,99,90,82,68,72,79,77,79,86,85,86,71,95,76,88,112,108,128,124,125,124,127,126,125,123,119,120,119,118,119,115,122,121,128,131,126,118,115,121,125,123,125,125,118,122,119,115,117,122,122,118,126,127,129,120,118,124,121,123,126,120,122,127,125,129,128,121,118,122,120,123,124,120,118,118,115,106,108,107,122,127,121,127,126,111,107,110,113,110,115,125,126,130,122,115,123,126,128,121,118,129,126,118,117,121,123,124,124,120,125,124,123,129,125,134,134,128,112,109,114,109,111,110,120,123,123,121,118,116,120,117,121,128,120,124,120,130,127,128,135,129,131,132,131,134,141,138,134,138,136,129,120,102,92,93,96,96,90,82,77,72,63,63,87,95,94,96,78,83,89,83,82,79,84,73,92,93,88,76,68,73,70,78,79,64,59,54,60,57,57,60,57,61,61,54,48,49,42,46,54,52,51,66,71,64,92,120,139,156,148,156,154,143,127,108,90,91,84,89,101,127,118,130,150,165,189,215,236,210,217,253,253,253,252,252,252,241,221,194,153,115,82,112,26,9,16,19,24,36,33,30,35,37,33,33,32,34,36,36,39,35,34,27,34,32,33,32,30,34,33,33,28,28,25,27,30,30,27,24,26,27,30,26,24,24,21,29,23,25,27,23,25,27,29,24,28,26,24,24,26,27,22,25,28,24,30,25,25,25,20,24,22,22,27,24,23,27,28,24,24,24,24,23,26,22,24,20,21,27,20,19,20,17,22,24,21,19,19,22,21,22,20,21,21,18,22,20,21,21,17,19,21,22,21,21,21,23,21,21,21,19,24,19,21,19,20,19,18,21,20,21,19,23,24,24,25,25,23,36,33,24,30,29,2
9,29,35,27,30,31,34,35,29,35,32,33,35,31,26,27,30,33,31,24,30,41,42,45,41,40,27,23,29,40,40,32,40,36,38,41,63,113,116,123,116,98,98,103,112,109,146,130,55,27,15,16,24,21,20,22,24,21,17,24,20,21,21,20,25,23,22,19,20,25,23,23,29,29,27,25,31,39,40,45,56,54,56,62,64,70,80,80,86,108,89,105,149,174,180,172,171,164,159,165,172,162,156,157,156,164,165,162,163,155,158,165,155,160,163,161,156,155,152,156,160,160,166,162,160,149,136,141,159,162,165,166,161,165,160,157,165,169,163,159,148,146,145,141,148,151,157,163,172,163,137,103,84,98,127,163,177,177,173,170,173,163,162,161,158,162,155,158,156,151,154,151,151,155,156,154,158,161,167,166,159,162,162,164,167,163,164,160,163,163,168,170,168,171,175,180,177,172,170,162,159,160,162,156,157,148,163,113,12,1,8,10,13,11,12,12,12,14,13,13,30,32,37,33,35,36,31,31,30,28,25,23,25,27,22,18,19,20,17,18,22,21,18,22,19,20,21,20,23,19,21,21,18,19,20,19,22,24,24,22,18,22,22,21,18,21,19,16,21,18,16,17,18,17,17,21,21,26,24,19,23,19,19,19,17,24,21,21,19,21,21,19,20,20,19,15,22,19,19,20,21,30,36,49,45,46,54,61,71,67,76,83,79,79,89,109,112,105,106,107,109,104,89,68,62,51,46,51,54,66,75,95,106,103,104,102,118,124,120,99,93,91,93,92,87,106,107,96,87,87,77,89,100,111,120,121,130,118,125,129,121,132,131,126,126,129,133,128,130,127,125,128,127,131,127,126,129,121,125,134,128,125,128,129,131,134,125,120,117,117,121,120,126,125,121,120,114,110,117,119,125,125,124,122,119,124,132,134,134,134,134,130,125,116,115,131,131,131,133,125,128,125,120,128,123,120,118,116,125,120,122,115,112,125,133,132,136,143,127,116,118,124,123,122,131,129,127,124,130,130,130,129,125,127,113,120,118,118,125,121,132,133,131,129,124,124,125,125,130,127,127,133,130,128,118,112,107,113,122,127,133,136,139,137,138,130,128,129,131,118,111,110,108,104,87,84,95,93,101,89,90,98,100,110,105,103,109,101,97,95,85,92,91,87,92,83,75,77,75,74,74,73,61,59,59,61,67,57,53,52,50,51,46,46,48,47,54,51,49,67,81,80,95,106,126,146,143,163,185,186,182,173,164,147,130,122,113,92,83,75,69,72,90,130,157,14
8,184,234,241,251,239,242,252,252,252,252,252,249,238,226,198,147,113,78,49,19,6,15,22,27,36,31,31,36,33,34,36,34,31,29,33,30,32,30,29,38,26,30,29,28,32,27,29,27,26,25,26,24,26,24,27,23,25,25,22,28,25,29,27,24,25,24,22,24,24,29,24,25,27,24,27,22,26,24,22,25,24,24,24,26,24,22,28,25,22,22,20,23,27,25,23,23,19,21,24,23,21,18,22,23,20,21,20,22,25,24,21,20,22,21,16,23,23,16,22,20,17,26,17,21,24,20,21,21,23,20,23,18,20,22,17,20,24,20,18,22,23,19,19,19,19,19,22,21,25,33,25,27,32,25,29,33,29,29,31,33,31,29,31,38,37,48,44,27,31,31,34,33,29,28,34,43,42,35,40,39,27,24,26,41,44,35,42,30,34,46,48,61,67,67,61,54,52,58,54,56,74,73,55,26,15,17,18,25,21,19,24,26,22,22,23,21,25,21,20,22,21,22,24,22,23,24,28,28,28,26,26,39,43,52,54,53,65,65,65,74,83,85,96,98,106,147,173,177,166,159,165,163,162,160,162,156,152,153,149,153,155,154,153,155,157,155,148,151,157,155,153,157,151,158,158,154,163,160,157,155,153,156,165,163,165,161,163,171,163,165,169,167,164,153,147,141,139,137,139,143,147,161,166,169,163,139,113,89,89,125,157,173,173,169,171,165,160,163,160,159,155,155,152,152,155,154,150,155,155,162,167,164,168,165,161,162,158,161,163,153,156,153,156,164,165,164,163,168,169,170,168,165,163,157,156,161,164,155,163,158,167,114,11,2,8,10,13,11,13,12,13,13,13,13,36,29,30,29,28,27,24,22,16,19,23,19,23,19,20,22,21,27,20,18,21,21,20,17,25,24,21,22,24,24,17,23,20,22,19,18,24,19,19,17,20,20,17,18,18,18,18,19,17,19,19,16,19,17,17,20,19,19,22,20,17,19,16,18,18,19,22,17,19,20,21,18,17,22,18,17,18,19,20,18,21,27,27,33,31,33,42,42,39,48,47,57,57,60,62,59,57,64,69,71,70,67,61,49,48,38,38,44,39,39,44,55,64,65,67,71,66,82,63,61,50,69,68,49,45,50,64,55,48,43,47,51,63,89,89,81,75,80,88,98,96,101,104,110,109,117,128,128,128,120,118,111,119,128,128,129,121,123,129,133,129,124,125,124,119,118,116,117,113,108,117,115,115,122,125,126,128,122,126,135,128,127,124,125,128,133,138,134,136,129,124,128,123,127,131,134,139,130,129,125,126,124,128,138,136,134,129,127,125,133,129,124,128,130,134,134,137,126,122,128,127,131
160,162,166,164,162,158,163,153,162,113,12,1,8,10,13,11,12,12,13,14,13,13,17,15,19,19,16,17,16,17,17,19,18,18,20,17,17,17,17,18,16,17,19,17,16,18,18,19,18,20,17,19,19,19,21,19,22,24,22,22,18,21,21,18,18,17,17,16,18,20,17,15,19,17,18,19,16,18,17,17,17,16,19,19,16,19,20,20,19,17,20,22,18,21,17,20,20,19,21,15,22,24,27,27,29,34,39,36,31,37,36,40,44,41,31,29,34,39,36,35,34,32,29,27,29,21,27,27,28,27,24,34,29,27,31,32,30,29,28,32,27,23,23,24,28,21,30,30,30,34,33,29,34,32,31,37,36,40,40,53,59,59,63,78,92,95,110,117,119,117,111,117,117,112,115,112,114,109,103,111,113,123,122,119,115,102,106,105,109,110,111,110,114,108,107,113,111,115,111,113,113,110,117,122,114,120,120,123,119,110,112,117,127,126,121,114,111,112,113,111,109,107,105,106,103,97,98,94,105,114,115,120,113,123,121,116,116,117,126,127,122,119,113,101,103,103,102,124,127,122,123,127,132,134,133,123,121,119,116,117,118,124,141,149,146,130,127,136,137,129,120,118,123,120,125,120,128,141,139,145,136,128,123,114,118,125,128,125,125,134,128,125,119,118,118,104,111,110,105,103,98,103,112,110,103,100,92,84,72,67,83,89,82,81,74,78,98,86,82,97,100,97,100,92,82,74,80,86,93,88,77,73,66,63,67,63,57,58,55,65,66,59,62,53,58,63,57,55,53,55,51,52,56,53,55,52,53,54,56,57,59,58,50,55,55,48,55,54,52,55,55,55,53,52,55,52,51,54,56,49,51,46,48,50,44,50,50,50,50,50,49,54,56,47,46,47,53,49,49,50,43,46,44,51,55,45,46,46,48,39,49,49,47,50,46,56,46,46,52,69,63,59,72,64,66,60,60,59,56,66,69,67,69,75,81,95,104,99,98,79,76,78,80,89,80,77,73,61,55,59,45,54,48,42,50,37,48,41,46,48,37,41,41,47,46,46,53,52,50,56,55,61,58,66,61,74,93,82,91,102,107,101,108,112,103,106,123,125,126,132,114,125,120,108,128,127,128,131,141,152,145,135,141,151,151,149,145,149,161,142,130,141,133,131,117,116,104,87,89,83,90,77,52,51,42,27,28,41,59,55,61,62,21,8,15,24,24,23,31,29,24,19,23,31,36,43,53,41,23,40,59,59,61,55,51,54,59,57,53,44,29,32,33,29,28,29,37,29,29,35,36,37,41,42,43,44,50,48,50,57,50,53,58,51,50,49,47,47,45,45,37,48,45,41,46,42,50,53,59,57,66,81,82,82,70,7
3,72,66,81,87,113,118,132,167,176,179,170,146,129,131,139,147,149,150,161,165,158,159,159,163,158,157,166,159,155,152,151,144,135,133,143,155,150,145,153,161,157,157,154,147,154,159,156,157,160,162,160,157,158,158,161,163,160,153,158,165,161,165,169,167,168,171,168,165,160,157,158,158,158,155,158,162,160,161,160,156,159,162,157,152,148,153,153,125,101,88,97,127,151,170,174,169,165,164,165,168,165,165,163,166,162,162,162,161,165,162,162,154,151,152,151,158,161,160,160,156,161,155,162,112,13,1,8,9,13,11,12,12,14,14,13,13,20,17,18,18,17,19,17,17,20,21,21,21,19,19,21,19,19,18,19,21,20,20,22,19,18,17,19,21,18,17,18,20,18,19,25,19,22,21,21,22,16,19,18,16,18,17,19,17,17,18,18,17,17,19,16,16,17,18,19,19,20,18,19,21,16,18,22,20,19,17,20,19,19,19,18,21,17,21,23,19,18,22,22,27,23,19,22,18,20,24,29,27,24,24,27,25,23,30,23,20,29,20,21,23,22,20,24,29,24,23,24,25,29,24,26,30,34,35,35,30,30,29,27,39,43,46,48,42,43,41,48,45,44,49,48,46,44,46,50,66,76,85,90,92,86,86,96,107,102,118,115,113,105,103,101,97,105,103,103,102,99,102,93,84,83,89,96,107,104,104,110,105,106,105,117,123,120,122,118,113,112,118,112,113,110,114,116,116,124,114,127,118,107,109,115,115,119,119,119,127,113,103,116,114,116,120,109,114,107,112,116,113,118,111,116,115,108,108,101,103,100,95,92,88,95,97,99,95,102,109,110,112,107,110,111,117,117,118,113,110,112,115,116,111,121,122,123,123,118,119,118,117,132,122,131,140,137,129,120,111,108,113,111,114,118,120,113,114,112,106,106,104,110,115,122,122,116,105,108,100,105,102,105,112,101,97,95,87,91,86,79,72,66,74,81,87,95,84,85,97,95,93,108,112,112,104,91,89,95,87,96,94,93,96,95,85,87,82,83,61,59,58,57,63,61,53,53,50,51,49,53,57,52,51,52,52,56,54,53,50,52,50,51,55,50,51,57,49,46,46,50,53,53,55,49,50,50,51,52,49,49,45,39,42,47,48,43,45,41,43,42,43,43,42,43,44,44,43,46,43,47,45,45,41,40,41,40,42,46,40,50,59,56,55,60,60,51,48,46,46,48,47,46,48,47,43,43,53,48,53,51,56,57,64,74,75,68,68,64,73,86,93,100,92,87,72,65,73,68,68,62,57,59,52,48,51,46,42,47,45,43,38,43,39,43,44,35,42,42
,38,45,43,43,46,50,49,44,53,55,47,57,63,61,71,66,79,86,87,83,97,92,96,104,113,118,105,117,138,116,113,116,122,138,125,124,134,144,145,125,133,138,150,157,157,141,137,138,163,153,143,137,145,154,160,177,198,218,224,223,207,184,184,170,158,139,136,91,39,30,46,66,39,9,11,16,26,22,39,53,58,63,57,50,47,51,54,56,43,27,30,31,30,35,37,35,33,31,36,42,53,57,49,57,54,53,53,46,51,47,43,45,44,39,36,38,36,34,32,36,39,39,46,38,46,53,49,61,55,63,70,66,67,59,55,53,56,59,64,92,100,112,139,145,153,156,150,149,144,147,151,154,154,159,163,159,161,159,163,165,165,177,165,153,151,147,140,135,138,150,159,159,153,156,165,162,154,154,150,150,155,154,161,164,163,156,153,155,160,171,174,170,162,160,163,161,160,160,160,158,163,170,165,163,165,164,165,157,154,162,163,164,159,159,161,161,164,160,159,157,162,166,157,142,118,95,94,119,149,170,172,171,162,159,160,159,152,153,163,160,159,165,165,166,160,160,160,151,150,156,159,162,158,156,155,162,152,160,112,12,1,9,10,12,10,13,12,13,13,13,12,24,28,31,29,29,29,31,34,29,34,31,33,38,38,42,36,40,41,38,39,36,39,36,39,40,32,28,21,18,19,18,19,17,21,22,17,19,24,22,19,20,17,17,17,17,17,15,18,18,17,17,17,18,16,20,20,16,19,17,17,21,17,17,17,18,17,19,20,21,17,19,21,15,19,21,18,21,23,21,22,28,27,26,24,23,26,22,24,27,31,31,31,39,28,29,34,32,33,31,33,32,29,30,30,30,29,25,27,23,23,27,25,29,27,33,38,35,42,40,33,32,26,34,33,34,44,47,51,48,53,61,52,57,55,53,56,51,55,65,77,89,96,92,84,80,82,90,96,91,101,110,108,106,104,116,107,110,108,97,106,103,105,109,105,108,108,109,113,112,107,108,97,92,101,111,119,115,120,122,119,121,119,117,124,122,117,108,104,108,110,109,105,108,115,118,120,118,111,114,117,119,121,122,128,136,129,120,117,117,117,112,115,116,120,119,118,118,111,105,103,107,102,102,101,87,85,77,85,100,101,100,96,98,101,106,109,114,114,112,106,101,99,103,104,115,118,111,121,127,130,130,133,141,128,126,127,114,114,120,122,126,122,115,113,108,112,103,96,93,92,97,95,103,105,108,112,108,110,101,103,108,105,105,102,103,112,109,101,93,89,97,91,84,70,75,76,72,72,72,83,89,8
8,89,91,89,84,80,84,90,93,99,97,100,107,104,106,76,68,81,64,62,60,59,59,55,55,54,56,55,56,59,57,55,51,58,54,59,58,53,55,50,48,53,53,57,54,53,54,45,53,51,49,53,48,49,47,47,49,49,43,44,44,44,46,48,46,42,45,52,49,52,47,44,45,43,46,50,44,43,48,49,49,43,43,47,44,45,43,47,51,60,61,59,66,71,63,54,49,46,47,41,38,43,45,41,44,49,47,44,51,48,40,46,52,51,47,61,64,62,62,67,71,66,73,64,59,68,70,66,68,77,83,70,64,67,66,66,66,66,59,55,53,56,55,51,48,51,49,47,47,42,40,40,41,40,39,39,36,40,41,41,39,43,41,36,49,48,50,50,53,57,65,74,74,77,63,87,98,82,95,93,99,113,112,106,109,136,125,122,141,136,149,152,157,159,166,167,154,153,163,188,217,232,234,237,249,251,251,250,250,250,250,253,253,251,251,243,229,233,245,249,237,185,141,94,18,2,11,9,17,37,53,53,53,55,54,57,42,29,35,27,30,37,39,38,33,34,30,39,48,51,51,46,41,44,44,42,39,37,36,33,31,34,31,33,36,34,33,36,40,39,43,44,44,52,50,50,50,54,60,55,49,51,48,46,49,56,52,67,81,92,108,110,125,138,157,168,158,155,157,160,160,161,159,157,160,159,166,168,174,175,155,150,149,145,141,141,147,154,160,156,153,157,168,171,169,164,157,152,144,149,158,159,160,152,152,152,155,163,166,164,159,160,163,165,163,154,148,153,159,158,164,161,160,168,163,159,157,163,166,162,163,162,158,158,159,158,157,159,165,165,159,162,145,123,101,88,108,134,153,160,160,158,154,151,148,144,149,152,156,163,166,166,166,168,163,160,160,160,165,165,163,160,155,158,149,160,113,12,0,10,10,12,10,13,12,12,14,13,12,59,58,62,59,58,63,62,66,63,66,71,71,76,79,87,89,93,105,107,113,111,110,109,108,92,69,54,42,31,28,19,16,18,16,23,18,19,21,17,22,20,16,20,16,16,16,16,17,17,17,16,17,16,17,17,18,19,17,17,17,17,18,21,18,18,19,17,15,22,20,20,21,19,16,20,19,20,21,21,26,26,22,25,29,31,22,28,32,29,39,46,47,42,41,39,36,43,42,36,34,30,30,31,27,24,24,22,18,23,21,24,18,21,29,28,28,23,29,26,25,30,27,31,29,24,35,38,37,47,45,56,45,38,38,41,42,45,50,62,69,75,75,46,49,67,72,77,84,83,89,87,78,83,96,104,103,108,101,105,107,105,116,118,118,119,125,122,124,121,117,120,118,118,114,110,112,112,117,121,114,119,118,115,
127,120,114,112,107,111,114,117,118,113,110,113,110,108,113,115,125,123,119,125,122,125,128,120,128,120,121,118,116,125,114,117,117,122,125,126,124,120,115,109,102,89,89,103,113,123,127,122,113,108,113,116,127,122,126,124,126,131,129,123,118,113,116,109,117,131,137,140,142,130,121,114,108,109,114,121,121,114,122,128,125,120,114,106,105,110,106,107,96,93,101,100,98,103,104,91,92,90,93,97,105,99,101,99,92,92,79,83,73,66,77,81,66,71,83,59,84,81,51,49,50,58,58,65,54,61,68,80,61,60,63,77,87,59,53,59,59,71,59,56,60,61,61,64,59,55,57,65,69,57,59,62,59,63,50,55,55,52,57,54,57,55,53,57,54,50,46,51,53,52,50,51,49,49,49,48,42,43,48,42,44,44,43,49,48,51,57,55,51,53,54,50,54,48,48,52,46,55,54,44,50,46,36,48,43,40,50,46,40,46,45,55,46,43,41,44,46,42,41,44,42,37,45,46,45,48,50,44,39,40,46,56,57,61,57,57,57,54,47,49,63,51,51,57,59,63,63,65,66,69,65,68,69,71,77,77,76,69,70,67,65,72,63,63,63,60,61,53,55,49,48,52,51,45,46,47,44,43,42,41,37,34,35,39,39,37,36,39,42,53,43,53,45,45,55,48,57,59,59,68,73,75,77,91,93,91,101,124,125,136,139,127,134,145,156,179,209,235,247,251,245,248,251,252,252,252,252,252,252,252,252,252,252,253,253,253,253,252,252,252,252,244,187,75,26,5,5,13,9,21,37,47,57,54,48,38,34,35,29,39,35,31,35,35,38,34,39,41,38,34,32,37,37,32,32,34,33,30,32,31,29,35,35,38,39,39,39,41,37,37,51,45,42,48,45,56,58,54,57,49,49,49,53,53,56,66,78,89,81,93,113,137,160,165,155,156,147,151,157,167,160,155,153,156,163,174,167,173,158,151,154,155,151,157,153,153,162,159,154,161,169,179,174,166,158,146,147,150,152,155,154,155,156,160,157,158,157,155,153,155,164,168,164,154,152,153,155,158,158,158,160,163,160,155,155,158,160,161,158,160,157,155,153,153,156,157,155,153,156,152,156,152,124,96,81,103,127,148,162,165,162,159,150,149,149,150,154,157,161,160,165,164,161,159,158,158,158,162,160,162,155,159,152,157,112,12,1,9,9,13,11,13,12,13,13,13,13,131,126,125,124,131,134,134,139,136,144,145,156,160,162,171,171,177,174,174,171,162,162,148,150,139,98,66,45,33,25,22,18,13,17,17,15,19,18,17,22,16,18,18
,14,17,18,15,15,15,16,16,15,18,16,17,17,17,18,20,19,16,17,17,18,17,19,18,17,18,16,18,17,17,18,19,18,19,18,18,21,18,19,23,24,21,22,20,27,36,35,42,40,34,31,28,27,29,31,26,25,20,24,29,23,22,23,26,24,23,22,22,31,27,27,29,22,32,33,29,33,41,41,37,40,38,44,47,52,52,53,57,59,55,47,44,38,33,40,54,57,64,62,64,66,65,71,72,83,87,86,81,78,80,87,101,95,94,93,102,110,111,110,113,119,120,126,122,120,118,118,124,126,121,112,107,105,106,113,115,111,111,113,110,110,113,116,114,122,126,122,125,112,113,114,111,112,107,110,117,113,117,114,109,115,115,117,121,118,118,118,121,123,119,113,107,112,118,125,128,131,130,128,131,124,120,127,129,128,125,125,120,120,124,125,125,127,125,124,123,120,125,123,117,114,117,113,107,110,109,109,109,115,117,111,109,99,94,110,117,113,114,116,113,115,109,107,112,117,118,111,113,106,106,97,92,95,97,95,84,84,78,88,101,100,99,99,105,98,91,75,81,83,87,94,99,95,99,102,94,92,82,77,78,76,75,75,62,58,59,50,65,54,49,59,57,73,60,54,56,54,66,59,54,61,66,66,60,54,51,60,57,56,57,57,57,53,57,54,50,61,52,50,54,50,53,52,52,55,53,53,56,53,52,49,51,49,51,47,52,50,49,50,49,47,46,45,48,51,46,47,47,48,46,49,43,45,45,42,46,43,47,51,44,44,45,46,42,38,46,42,43,42,37,41,42,40,44,43,36,45,45,39,41,49,42,35,45,40,42,45,39,42,43,45,47,48,46,42,46,52,50,41,38,49,44,46,46,52,57,49,48,50,53,51,51,46,53,56,53,54,53,54,55,55,56,54,55,60,63,64,66,69,60,64,68,59,56,56,62,60,56,58,46,47,45,49,47,50,49,40,43,41,45,46,43,40,41,44,39,45,41,43,47,46,48,48,53,46,56,55,53,66,72,83,88,96,115,136,160,180,194,205,208,210,217,226,233,240,253,253,253,253,252,252,252,252,253,253,253,253,252,252,252,252,252,252,214,178,150,117,55,7,6,11,10,19,39,52,46,39,34,32,32,40,39,35,33,34,36,29,34,31,27,36,29,34,36,33,33,33,35,34,36,35,42,39,37,42,34,39,41,36,41,43,42,39,49,53,55,57,57,57,57,55,54,60,54,60,69,71,73,75,85,104,130,145,141,133,134,140,141,150,159,163,154,146,151,154,151,153,152,146,150,151,154,158,153,153,152,153,159,158,159,160,162,165,153,149,150,149,159,160,155,157,156,163,168,163,157,155,152,155,162
,163,169,169,163,163,160,158,156,159,162,159,161,159,153,154,152,151,152,157,155,151,151,150,157,159,160,159,156,155,160,159,163,155,131,108,92,101,125,150,172,178,174,166,160,160,155,152,156,155,149,153,155,153,156,151,148,150,155,159,159,154,156,150,159,112,13,2,9,10,13,11,12,12,13,13,13,13,137,135,139,138,145,146,146,151,147,157,165,162,165,163,162,154,148,139,112,106,95,75,57,45,43,42,41,30,26,21,18,16,15,18,15,17,21,17,17,18,20,17,16,19,20,16,15,16,17,17,16,17,17,17,16,17,19,19,16,16,18,17,19,19,17,17,17,17,18,16,20,21,17,17,20,20,19,22,24,24,27,33,27,24,29,26,28,30,36,38,38,33,30,30,31,33,31,33,39,36,31,37,32,27,31,33,34,32,27,35,44,40,35,35,39,40,38,34,46,46,47,46,43,35,41,45,47,58,61,68,67,65,52,55,47,45,50,54,69,65,88,92,93,101,94,96,95,97,99,103,101,107,112,111,112,112,105,107,109,119,117,116,110,112,110,113,113,112,116,113,115,121,109,107,105,107,113,113,113,107,112,115,118,118,117,122,122,120,129,120,114,112,112,117,121,118,114,120,117,115,115,112,119,117,122,122,112,118,115,119,118,114,117,109,113,115,117,122,124,123,123,122,120,118,118,118,118,115,116,117,110,123,126,123,124,117,118,120,111,114,115,112,115,122,117,115,108,104,103,103,106,106,108,101,101,99,102,109,115,119,106,105,111,104,107,114,115,121,117,116,118,117,114,107,107,105,100,92,99,89,92,97,102,102,96,113,113,98,102,95,97,94,93,92,99,120,123,116,113,109,105,107,91,100,79,79,83,78,71,56,67,70,69,76,64,73,71,59,56,56,58,53,54,64,55,61,59,50,57,56,56,55,49,59,56,55,54,47,54,54,51,52,48,50,52,52,49,51,57,56,57,51,46,53,53,49,55,53,52,54,51,53,48,47,49,48,48,51,48,48,48,41,42,41,43,44,40,44,44,45,44,38,48,43,39,41,42,41,45,41,39,43,42,40,42,39,40,44,42,41,44,44,46,40,44,42,39,39,39,42,42,46,41,41,39,45,47,42,39,39,45,41,39,39,41,41,38,39,40,43,43,36,42,41,39,42,40,47,42,45,43,46,50,42,52,47,53,57,57,60,56,61,62,59,64,61,63,63,65,65,59,69,62,61,69,70,76,68,57,59,56,54,57,60,57,49,56,58,50,45,44,49,50,47,44,44,45,47,44,48,50,45,49,45,42,53,54,61,68,83,89,94,103,106,128,137,141,160,176,187,204,224
,239,251,252,252,253,243,253,253,234,252,252,252,253,253,251,251,252,252,234,134,125,25,1,9,10,19,33,37,38,34,40,38,38,38,36,35,35,35,29,38,35,35,41,33,34,37,39,37,37,37,31,35,35,34,37,34,36,36,36,33,35,35,35,41,42,45,51,51,48,54,57,56,52,56,54,53,58,54,60,55,74,80,96,96,110,130,142,150,136,140,137,146,142,130,133,129,121,127,129,128,146,132,144,146,147,143,149,153,154,151,151,146,149,146,143,147,144,148,155,155,160,159,162,150,156,157,162,159,146,146,151,160,164,165,166,161,156,151,150,152,159,162,159,160,154,158,162,155,155,159,160,158,152,150,153,152,155,160,158,160,161,163,167,162,160,138,113,96,89,120,147,169,182,174,170,162,159,156,153,154,148,152,156,156,155,151,149,148,148,153,158,153,158,150,159,113,12,1,10,10,12,10,13,12,13,14,13,13,125,126,128,132,134,129,128,126,126,133,120,118,100,89,79,65,54,41,35,32,31,34,27,26,29,29,27,21,24,17,18,18,16,17,21,21,21,19,19,17,17,21,16,15,18,17,18,16,16,19,17,17,17,17,19,18,18,17,20,20,17,19,20,17,17,17,16,18,19,23,16,16,18,18,22,22,21,26,29,34,43,37,36,37,40,40,47,46,48,46,44,43,41,43,37,36,42,39,33,36,34,30,28,28,33,31,31,31,34,34,40,44,40,38,31,33,36,33,40,46,47,44,39,35,41,44,41,45,47,53,51,53,48,53,56,59,62,60,86,101,114,121,121,119,120,123,118,116,115,122,126,126,121,114,124,121,122,121,117,115,119,118,118,112,110,113,112,118,118,117,122,122,121,116,118,123,119,117,118,117,117,122,126,122,115,118,122,131,126,118,122,122,128,124,123,122,121,127,125,119,129,130,130,131,125,131,129,128,130,119,120,123,128,127,125,125,124,125,125,129,122,114,115,116,122,121,122,123,122,119,114,124,122,117,118,120,119,125,129,127,123,127,138,136,134,129,123,127,129,130,122,123,113,111,116,118,124,127,125,117,113,111,113,118,117,117,120,124,119,114,117,114,121,118,120,123,117,114,116,117,115,120,113,105,103,103,98,93,106,105,103,95,87,79,87,88,92,89,91,103,98,95,94,89,74,85,94,94,89,60,72,68,64,68,65,71,61,62,60,63,68,67,66,60,65,58,49,56,51,57,57,55,56,55,63,53,56,54,53,60,49,55,55,55,61,50,51,51,55,57,50,51,49,47,51,50,52,51,51,56,50,
47,48,46,52,51,46,49,49,48,44,44,46,46,48,48,45,43,42,43,44,40,40,45,43,40,43,44,42,38,46,52,45,42,40,44,45,46,44,42,42,46,47,44,43,50,47,42,42,44,44,42,39,44,48,46,48,47,46,46,47,44,43,45,40,41,39,42,42,40,40,39,41,39,42,44,46,44,42,47,43,46,39,43,46,46,45,47,54,45,45,50,47,57,60,57,57,51,57,58,57,67,56,53,59,58,69,63,60,61,60,63,57,61,61,62,72,73,61,60,67,66,65,67,60,59,56,53,55,50,54,53,53,61,57,53,51,45,46,43,53,53,42,48,50,45,46,53,65,67,85,103,118,138,151,186,185,156,187,228,239,250,253,253,253,253,253,253,252,252,252,252,242,221,156,84,21,2,9,11,24,24,26,31,33,32,24,29,34,30,36,37,34,42,37,36,34,41,40,33,40,37,33,34,33,31,38,37,33,30,30,33,31,31,39,39,36,38,44,48,44,51,51,50,53,57,53,51,49,53,54,53,62,58,69,84,105,127,154,158,136,139,140,149,145,131,134,127,123,130,139,143,150,150,154,158,153,150,148,148,149,146,147,143,144,146,141,147,146,144,148,150,145,148,149,139,145,152,150,150,140,139,148,153,160,152,152,157,157,160,158,157,162,159,161,162,156,162,163,162,160,166,167,160,157,152,155,151,157,158,159,158,160,165,165,168,163,160,139,114,91,84,109,137,163,169,170,170,164,156,152,153,154,161,163,161,163,162,159,155,156,153,158,157,159,154,164,114,12,0,9,10,13,10,13,12,12,14,13,13,118,123,119,113,110,96,78,76,73,49,46,37,39,31,32,35,29,29,24,29,22,27,28,23,24,22,28,27,20,26,22,17,21,21,21,18,21,19,19,19,16,17,20,18,16,18,18,18,16,16,17,20,20,16,17,21,19,18,19,15,18,19,16,21,19,20,19,16,21,17,18,21,17,16,18,20,20,25,21,28,30,28,32,33,28,28,38,22,31,33,28,37,29,29,31,28,29,29,27,28,33,37,31,29,31,31,32,39,38,44,44,40,39,29,35,33,29,31,27,24,27,37,25,29,34,28,34,33,42,40,46,49,44,47,46,53,56,59,68,81,88,102,111,122,117,115,122,119,125,122,113,117,112,118,114,112,115,118,123,124,128,126,124,125,123,123,122,122,125,124,125,123,120,125,121,125,127,122,122,119,126,130,132,127,119,126,128,124,128,126,128,128,128,128,126,122,124,123,118,113,126,122,128,124,122,123,124,122,119,119,126,133,128,127,131,127,128,122,121,120,122,119,118,124,128,128,125,118,122,123,119,123,1
17,124,127,125,129,130,127,125,122,128,127,125,128,129,125,128,128,128,126,121,126,127,133,127,134,130,116,113,107,116,118,124,119,118,121,117,114,98,106,107,109,113,116,113,106,114,112,115,106,106,95,90,89,89,78,90,91,79,75,91,83,75,68,68,80,81,93,99,75,95,93,80,70,82,92,74,78,60,70,55,51,59,51,60,50,54,57,62,60,66,58,61,64,55,56,53,55,55,58,54,55,57,53,58,66,56,59,57,57,59,54,61,56,53,55,53,54,56,51,49,51,49,55,49,50,50,55,56,44,53,50,45,53,48,44,49,47,43,48,51,44,45,53,51,47,45,45,44,44,42,45,44,45,43,39,44,45,46,43,46,47,44,44,44,47,48,44,40,43,47,41,45,47,43,45,43,46,48,48,45,42,44,43,45,46,45,43,41,43,41,48,44,42,45,42,40,38,45,42,40,46,38,39,46,42,43,45,43,41,48,44,40,42,39,42,43,47,45,36,47,47,43,49,45,50,40,42,47,42,54,47,42,40,48,50,47,55,51,53,48,45,51,54,54,64,65,65,66,59,62,65,67,68,62,61,62,61,64,71,75,81,76,76,77,70,77,61,66,90,95,80,68,78,76,58,66,71,59,55,46,46,48,57,91,78,37,56,132,156,174,213,250,251,250,240,253,253,253,253,253,252,252,246,246,208,145,59,5,5,10,12,14,12,14,14,22,36,34,35,41,36,36,38,36,39,32,34,34,30,39,33,27,33,33,34,33,30,27,32,27,31,33,34,39,34,37,39,44,51,49,43,49,49,57,62,59,57,52,56,65,70,65,76,85,89,92,125,134,141,135,145,134,120,125,133,148,152,163,164,169,164,166,171,169,154,158,156,157,154,157,163,156,160,156,150,149,150,149,145,142,145,144,144,142,151,157,156,153,157,153,150,154,149,144,150,154,158,163,163,155,158,161,159,159,152,153,160,158,162,166,161,152,153,158,158,162,156,157,161,159,163,166,168,167,165,160,156,148,121,101,88,101,130,154,171,171,170,164,156,153,156,165,165,166,165,167,165,163,162,160,161,159,166,159,165,113,12,1,9,10,12,10,13,12,13,13,13,13,92,86,69,53,44,38,35,31,28,28,23,28,25,23,26,27,26,22,23,25,26,27,27,26,24,22,25,28,26,19,20,21,18,21,22,20,22,22,15,19,21,15,21,20,17,17,17,17,19,18,17,18,19,16,21,19,17,19,15,17,18,21,18,18,19,18,18,19,22,17,21,19,18,19,17,24,23,20,21,27,23,21,26,22,19,20,29,22,22,24,29,26,27,31,27,33,30,28,27,25,34,36,29,33,34,33,34,34,41,36,37,31,26,36,35,38,31,33,39,32,36,3
4,34,26,27,31,34,36,42,44,41,40,45,47,48,54,56,54,54,52,63,84,102,105,98,96,100,106,108,107,109,115,113,116,117,113,113,119,125,123,120,113,117,117,122,120,123,125,125,123,121,120,122,127,121,123,127,125,124,122,122,123,130,129,122,122,115,117,121,124,127,123,119,118,118,116,117,117,111,105,107,104,102,105,101,103,106,116,117,117,133,130,130,131,136,131,126,122,113,118,114,110,114,113,116,115,114,114,113,105,112,122,121,126,124,123,123,124,122,118,112,114,120,117,125,122,122,116,107,111,106,122,123,125,127,126,127,114,117,115,117,124,123,121,112,101,101,109,110,108,113,123,126,119,111,113,112,108,116,102,91,93,82,91,93,93,95,97,94,89,94,97,90,84,81,76,83,90,89,89,99,101,94,73,53,57,54,57,71,53,54,42,45,56,50,52,50,56,48,52,56,50,61,61,67,62,51,57,53,60,55,60,60,55,59,48,57,55,54,55,48,53,49,56,56,51,56,51,55,54,54,54,61,56,50,60,54,52,54,53,54,50,49,54,48,50,50,50,49,48,53,48,51,50,48,47,48,49,45,44,44,43,48,47,39,45,42,40,44,43,43,36,46,40,40,47,42,44,41,42,38,44,43,43,44,42,45,41,43,46,44,45,44,40,40,42,41,35,38,46,39,41,43,40,43,40,38,41,42,39,39,41,41,44,44,40,40,40,40,44,37,46,40,39,43,40,40,36,41,44,41,41,43,36,38,41,35,42,38,39,40,39,37,34,41,36,36,50,46,42,45,43,43,41,46,48,51,55,55,50,49,58,53,54,57,66,71,71,63,61,74,76,78,68,73,77,72,74,79,92,110,115,116,116,127,131,137,153,154,142,127,107,112,108,107,120,72,22,27,64,92,107,125,160,201,175,182,250,253,253,252,252,253,253,252,252,250,250,229,205,200,190,195,200,194,193,178,204,171,45,29,39,28,35,29,30,36,29,31,36,32,32,29,29,32,30,33,30,30,24,24,27,33,31,31,37,37,37,35,39,47,50,45,44,50,59,64,64,62,61,62,71,83,76,90,89,84,101,117,116,114,124,117,105,97,117,140,152,160,163,169,171,174,163,161,165,155,155,163,164,160,167,170,169,165,157,151,151,153,158,155,154,153,155,153,148,160,162,162,177,176,170,165,155,152,146,150,151,154,163,159,158,159,163,160,160,155,153,158,158,158,159,155,153,152,155,161,161,164,160,158,154,160,168,166,166,163,164,160,160,156,132,108,89,96,125,154,170,170,165,162,157,158,168,170,169
,163,165,165,159,163,155,160,159,162,158,163,113,12,1,9,10,13,10,13,12,13,13,13,13,34,31,35,24,24,27,19,24,20,22,21,19,25,21,21,29,28,24,29,24,22,23,28,27,25,27,24,27,29,19,17,18,17,17,21,21,17,18,17,19,17,17,18,18,19,19,17,20,19,17,18,19,18,16,19,17,17,17,17,20,19,17,19,23,16,18,19,19,18,21,22,18,18,21,19,20,22,20,24,20,21,26,21,27,30,27,33,38,35,33,34,36,39,41,35,37,36,35,28,29,29,30,32,36,29,26,35,27,31,29,36,35,29,36,33,35,46,44,46,49,45,45,37,38,34,36,40,38,45,47,53,55,60,67,72,76,75,75,85,89,103,108,120,103,104,99,104,110,99,107,101,112,112,118,117,112,122,118,118,117,110,107,111,115,118,118,121,122,117,114,116,118,119,124,118,115,109,108,110,105,108,108,103,100,101,98,107,109,108,112,119,123,117,108,114,116,121,116,110,109,104,110,112,112,115,116,110,112,110,127,127,122,122,131,132,130,132,122,124,124,125,124,120,122,122,123,122,115,108,109,115,127,122,113,118,125,124,125,124,117,123,127,127,129,126,125,120,106,103,112,115,125,128,120,122,118,115,116,123,129,126,126,118,106,101,109,110,114,122,125,129,120,114,115,125,125,112,110,103,104,108,116,127,122,119,121,113,119,113,105,107,111,115,100,97,83,78,87,91,100,85,83,84,71,50,55,64,78,72,72,63,49,51,61,56,57,57,65,56,56,55,50,55,66,62,56,60,56,58,57,60,56,57,60,54,56,52,57,58,53,56,53,57,57,50,56,58,63,55,55,57,66,60,53,62,56,60,55,52,53,53,55,47,54,51,50,54,49,52,51,50,53,55,54,47,49,49,44,46,49,43,49,42,44,47,44,43,39,43,46,42,43,44,35,43,41,39,44,38,42,40,45,48,41,43,41,44,39,39,42,42,41,41,43,41,39,40,43,40,38,38,42,40,40,42,42,39,39,38,38,44,44,39,37,38,39,42,40,36,42,44,35,39,42,39,37,40,36,36,42,39,42,38,38,41,38,39,35,40,42,33,37,40,33,34,39,34,40,35,36,43,35,40,38,41,43,41,46,48,45,48,54,52,54,59,63,62,54,53,60,60,65,66,59,66,69,79,75,75,89,101,121,125,124,135,136,152,156,155,153,133,159,182,169,165,154,131,98,120,129,141,125,97,97,139,76,117,210,251,252,252,252,253,253,252,252,253,253,252,252,252,252,252,252,252,252,251,251,226,67,32,33,25,31,26,29,28,27,29,29,29,29,26,27,25,28,24,29,27,22,27,24,24,3
3,24,30,34,29,36,37,37,42,42,42,43,52,60,62,61,55,57,63,75,70,93,85,109,122,112,107,109,121,110,123,112,122,131,146,149,147,152,158,155,150,151,155,160,170,179,173,163,160,162,158,155,151,151,152,158,165,161,162,163,157,157,151,155,153,158,175,174,170,162,159,156,152,155,154,153,155,159,159,161,165,162,161,159,159,159,154,154,149,155,152,154,155,157,161,161,161,153,149,157,155,158,160,160,159,157,164,162,162,136,106,87,93,121,143,162,166,166,164,160,167,165,163,158,156,160,159,156,156,160,158,160,151,160,113,12,0,9,9,12,10,13,11,12,14,13,13,21,19,24,22,22,20,20,24,23,24,21,24,25,23,24,23,27,25,24,24,22,26,25,29,27,21,25,23,20,24,21,15,21,20,16,18,17,19,20,17,17,19,17,18,17,17,21,16,17,16,17,18,17,20,17,17,18,18,19,19,21,18,20,19,18,19,19,20,19,19,20,18,22,23,16,22,18,23,23,22,29,26,29,33,34,36,39,33,32,26,27,30,34,33,40,39,37,43,39,41,45,49,43,43,42,35,37,41,43,44,45,46,43,40,43,48,48,45,46,47,43,42,50,49,47,45,49,40,42,50,53,61,73,85,96,100,98,99,95,103,101,98,96,101,117,118,121,117,126,122,117,116,111,115,113,117,114,114,118,117,110,104,110,120,125,120,122,118,113,116,116,116,116,119,117,114,110,105,103,110,112,108,111,113,113,117,119,119,118,119,123,122,117,115,115,119,127,127,124,120,125,125,124,127,125,120,116,110,107,113,121,113,111,114,115,114,109,114,122,130,132,130,127,129,131,125,129,127,123,122,123,127,119,122,125,122,125,127,125,125,129,131,128,127,131,127,128,123,118,129,127,131,127,126,119,118,117,117,129,119,125,124,108,110,117,113,116,115,116,113,105,109,111,116,115,115,112,108,116,118,119,130,129,132,128,123,125,131,122,111,112,111,110,110,109,97,90,88,77,71,68,67,80,77,85,95,96,98,93,93,84,65,66,66,69,70,73,80,58,58,51,45,61,58,59,59,62,61,60,63,57,57,55,55,60,57,59,57,58,62,61,58,54,57,55,56,55,54,53,50,56,53,57,53,46,53,49,50,51,48,55,50,48,53,49,47,47,52,50,51,52,47,49,49,47,49,49,47,46,48,43,44,45,47,44,44,45,45,47,45,45,46,42,41,48,42,41,44,45,42,43,46,43,43,40,42,44,43,41,38,39,43,41,43,49,46,46,45,40,44,44,46,43,45,44,42,40,40,45,41,45,40,39
,42,38,40,40,34,40,44,37,41,38,34,38,36,38,38,36,37,39,43,41,41,45,44,44,38,38,41,38,41,40,34,41,41,35,40,36,33,36,39,38,42,42,42,41,46,50,40,46,49,46,51,53,55,49,51,54,53,51,55,58,54,62,70,75,68,67,81,91,100,113,113,115,116,106,106,105,113,122,137,162,150,168,172,142,143,160,188,189,150,113,109,104,57,72,133,148,178,217,248,253,253,252,252,253,253,252,252,253,253,252,252,253,253,252,252,226,44,21,21,12,31,21,27,27,30,28,29,27,27,24,27,27,21,30,27,27,23,21,27,28,27,29,28,28,33,30,32,34,33,31,36,39,42,50,52,52,54,57,54,64,62,67,79,97,117,105,104,115,134,161,150,133,137,132,136,143,137,133,130,139,153,166,168,165,170,171,166,154,153,157,162,160,157,158,159,159,162,163,161,159,160,151,154,154,145,154,160,160,153,153,157,155,152,156,150,148,150,151,156,156,160,157,162,159,155,159,155,150,151,150,155,155,151,154,153,160,165,161,150,148,150,144,151,155,155,154,158,162,156,152,132,101,86,88,108,137,160,170,170,164,162,160,158,153,157,158,157,163,158,162,160,160,154,160,113,13,1,9,10,12,11,14,12,12,14,13,13,19,17,20,19,24,22,22,22,24,23,18,22,22,24,25,23,25,27,24,25,25,22,27,27,26,26,29,24,19,22,16,18,18,19,22,19,21,16,19,22,17,17,17,19,16,17,17,19,21,17,16,18,17,19,18,17,20,20,17,20,21,17,21,18,18,19,18,22,18,17,16,19,18,19,19,18,22,21,22,27,32,29,25,29,31,29,28,25,26,19,25,28,33,33,30,27,29,37,35,37,36,36,37,33,38,38,31,46,50,46,45,39,35,43,40,40,39,36,35,36,38,41,39,47,46,45,47,46,48,42,39,42,59,71,80,91,94,100,95,98,103,102,108,111,119,113,121,108,124,125,121,123,125,120,124,120,116,114,110,116,119,121,114,122,125,116,120,123,122,124,125,124,121,127,122,123,121,123,122,118,125,130,131,129,130,127,130,126,123,123,119,123,120,115,122,116,118,116,124,127,118,120,119,118,120,121,126,123,123,120,122,118,116,118,115,119,124,116,120,119,122,122,122,122,122,126,128,128,124,125,122,126,126,120,123,125,125,125,122,121,123,122,124,128,125,129,135,131,131,130,132,127,125,122,119,120,121,122,119,112,108,110,115,117,117,116,107,98,103,106,107,111,110,103,101,101,105,111,110,106,101,9
,121,124,124,131,136,136,138,133,133,128,132,135,136,143,138,143,143,133,130,125,113,111,114,114,121,129,127,135,132,130,130,133,134,137,141,136,140,141,135,137,138,136,136,132,135,142,149,147,151,149,141,136,131,130,127,131,133,131,131,135,143,153,152,155,155,142,134,127,129,127,122,124,130,130,127,127,130,137,129,130,129,136,140,131,145,136,141,144,150,149,136,138,141,144,144,140,136,130,140,141,138,141,143,142,133,135,132,137,144,147,152,147,137,126,132,137,146,153,148,145,136,126,135,141,148,150,145,144,143,145,145,136,117,101,94,99,101,107,119,128,118,101,93,88,93,111,118,96,84,100,108,105,92,92,87,85,80,80,84,73,85,81,69,67,74,88,102,92,97,97,81,76,61,56,60,73,81,78,75,78,69,66,61,51,61,48,46,43,47,55,57,55,57,59,55,58,59,50,50,54,55,64,66,63,57,59,54,62,66,58,57,57,59,61,60,57,54,57,59,57,62,56,61,59,54,55,54,61,63,63,65,59,60,59,60,60,53,47,53,54,52,53,52,51,51,53,51,51,50,53,45,49,48,45,50,47,44,46,47,43,48,48,45,46,47,44,43,43,42,43,42,44,44,44,39,41,45,44,46,36,39,42,41,45,42,43,44,45,43,43,40,41,47,42,44,42,47,41,41,47,42,44,42,47,45,43,42,39,43,43,41,39,35,44,43,40,39,39,39,37,41,39,41,40,37,39,43,40,43,41,33,40,39,41,42,37,41,42,40,40,40,47,43,43,46,41,38,45,49,41,47,45,46,47,44,49,50,46,47,53,56,49,49,47,45,47,55,62,63,57,52,55,55,59,69,61,50,60,57,58,63,54,69,87,93,76,66,59,50,57,56,55,64,60,56,56,49,56,48,42,48,50,45,42,39,37,32,35,40,33,35,38,34,37,39,40,33,36,41,45,40,38,37,37,38,34,42,37,37,40,46,46,39,44,47,47,44,49,48,41,56,54,55,56,53,55,55,57,51,61,69,54,71,66,63,66,57,59,67,105,123,108,118,130,136,145,141,145,160,160,139,131,144,155,168,172,159,149,151,154,160,158,152,150,152,159,156,157,155,153,160,159,160,153,149,154,154,150,149,146,142,143,147,143,141,146,145,151,151,154,158,159,160,157,162,165,160,156,155,154,152,150,149,152,150,148,153,155,158,155,152,153,154,154,149,149,151,151,151,153,151,150,151,145,146,142,142,144,141,139,142,142,144,144,136,117,92,75,86,86,23,3,12,10,14,12,14,13,12,14,14,14,20,19,24,21,19,22,17,20,23,19,21,27,24,22
,25,29,24,19,21,24,27,24,25,23,25,31,24,24,19,17,20,19,21,18,21,19,17,24,19,19,22,17,24,19,19,27,20,19,19,23,27,17,24,23,23,22,24,24,17,21,24,23,26,23,22,25,22,23,23,25,21,24,25,28,26,23,32,29,26,29,26,25,36,35,34,33,30,36,33,33,34,33,31,33,33,32,38,33,32,38,35,36,35,42,45,41,45,47,50,52,55,51,52,55,56,55,53,54,56,53,53,48,55,52,67,51,44,44,41,51,44,51,60,76,94,96,113,122,124,132,136,136,146,149,157,160,144,148,145,150,150,145,144,139,141,143,139,139,139,137,131,134,127,130,129,127,132,122,122,122,124,133,141,142,135,130,135,141,148,148,148,144,141,141,141,141,135,130,127,122,120,121,126,131,135,139,139,135,136,141,140,141,135,129,122,116,118,122,131,142,147,148,145,142,141,143,143,141,132,130,139,138,136,139,144,143,145,141,142,144,143,139,132,131,133,139,144,137,130,127,121,124,128,123,131,134,142,148,145,148,142,135,130,127,127,131,129,135,139,131,127,128,125,129,138,131,142,145,151,154,150,148,133,128,141,145,134,127,127,124,119,112,118,118,113,122,128,122,111,114,120,114,104,98,86,96,84,77,92,92,99,99,90,86,92,101,101,92,107,108,103,110,94,106,95,82,82,79,109,89,71,60,52,51,59,62,65,71,73,69,68,77,77,72,60,65,74,75,70,64,62,65,63,62,59,55,60,60,56,56,58,55,50,63,61,54,60,54,59,52,58,60,61,61,54,59,60,57,55,55,59,52,50,54,50,55,56,57,62,59,55,55,61,56,53,54,51,49,51,55,55,53,51,51,47,51,51,47,50,52,47,51,50,47,49,50,49,48,45,46,47,47,46,41,46,43,40,45,46,41,45,44,42,42,42,43,40,45,40,41,44,38,48,42,40,42,39,45,42,42,44,39,40,39,40,40,39,38,43,44,41,40,46,41,38,40,40,42,39,39,43,42,41,45,42,38,41,39,37,39,43,39,41,42,44,45,36,42,41,42,46,46,46,43,41,50,46,43,44,44,47,45,42,43,44,43,44,43,43,46,44,49,49,45,50,50,50,49,50,49,53,54,50,58,51,59,61,54,55,47,52,58,57,50,50,49,56,55,50,51,59,70,76,66,59,50,56,53,56,67,63,57,56,61,58,59,57,47,48,45,42,39,36,37,38,34,33,37,37,37,36,38,38,39,35,41,38,38,39,37,41,35,33,42,38,36,39,43,41,39,42,41,42,42,45,47,45,44,47,49,45,47,57,47,51,54,55,58,73,64,76,72,80,80,74,73,94,128,130,113,122,125,132,139,134,141,160,167,153,138,141
,141,164,171,155,147,144,147,151,148,146,149,138,148,152,146,151,154,157,153,148,153,151,149,148,147,141,141,144,144,145,149,143,143,148,148,150,156,159,160,162,158,157,163,163,157,157,160,152,149,148,148,149,150,151,153,154,154,156,153,151,155,150,151,150,148,153,153,151,153,152,148,148,151,146,146,139,145,142,143,150,143,145,131,115,96,80,71,23,4,12,11,15,13,13,14,13,14,14,14,21,23,21,21,20,23,22,21,22,24,26,19,24,26,24,23,24,27,23,28,27,23,24,29,23,29,29,24,23,15,19,17,19,20,17,22,21,19,18,20,20,18,23,20,22,21,19,24,18,21,25,23,23,23,21,19,24,21,22,25,24,24,23,26,23,21,22,27,30,21,25,22,21,27,27,26,25,31,30,34,38,39,39,45,41,37,39,36,39,36,37,42,38,41,40,42,39,39,45,36,41,47,44,41,46,45,45,50,50,51,53,49,56,50,44,44,45,53,50,47,44,43,53,64,79,68,74,68,63,59,48,50,54,77,95,101,103,99,104,114,122,122,127,131,141,151,157,150,146,137,128,130,126,137,139,135,134,131,139,135,128,129,122,125,130,130,132,132,132,134,136,139,137,137,137,137,137,142,147,147,142,141,141,143,139,137,136,133,136,132,132,136,137,137,131,129,128,131,130,135,134,134,128,128,129,122,120,117,122,127,129,133,129,130,133,137,144,146,147,139,146,150,151,149,148,150,140,140,137,142,136,129,141,137,145,146,145,139,128,130,131,137,140,142,143,141,141,137,143,147,141,131,124,132,130,124,122,121,126,141,142,128,122,117,113,110,114,120,118,113,110,102,97,97,96,93,89,90,83,86,103,128,138,130,124,124,124,127,129,142,150,135,128,127,125,111,99,98,104,113,116,114,100,95,111,125,117,108,126,133,129,121,109,104,84,64,56,50,74,95,82,72,71,64,64,59,64,76,73,72,73,68,95,90,88,88,83,77,64,59,54,61,59,65,73,79,73,63,51,54,60,63,64,66,58,54,55,47,54,54,51,55,63,53,52,57,52,51,56,55,47,50,53,53,53,53,53,51,57,53,51,50,54,50,51,55,49,52,53,53,48,52,57,49,48,51,49,53,52,50,50,48,50,54,53,49,47,52,46,49,51,48,44,42,48,53,43,46,50,48,47,48,49,48,46,47,44,41,48,45,43,42,44,39,43,41,40,45,43,40,39,39,36,44,37,41,42,41,44,39,45,45,45,47,43,42,46,42,48,46,43,39,38,46,39,42,41,35,43,38,39,38,38,45,41,43,40,42,49,43,44,46,44,45,
40,44,45,46,43,44,45,46,43,40,46,43,44,45,45,47,41,45,49,48,49,50,49,49,48,51,52,53,53,54,50,44,54,54,48,48,45,49,48,52,47,47,47,48,51,46,48,51,58,54,55,57,50,53,59,60,55,54,53,59,60,58,53,53,48,44,41,38,37,37,38,34,33,35,39,35,37,39,36,39,37,36,39,41,36,36,39,36,39,39,39,39,39,42,41,39,41,44,42,44,48,42,42,42,45,49,44,46,49,53,53,55,49,60,81,78,84,78,91,100,102,121,126,137,138,128,141,132,131,136,132,146,166,179,171,150,144,148,164,171,159,150,152,151,153,156,153,151,145,151,147,141,150,153,157,150,148,153,151,151,158,153,148,148,148,152,160,164,160,160,159,161,156,156,155,158,163,155,155,158,157,161,166,163,160,164,160,155,153,152,158,154,159,162,158,155,152,150,148,149,149,146,145,144,148,146,145,145,144,146,146,146,146,148,150,146,147,150,148,142,138,123,99,77,21,3,11,11,14,12,12,13,13,14,14,13,23,23,22,23,21,21,20,23,23,25,26,22,25,24,24,24,26,29,26,26,24,25,26,24,28,22,23,29,21,21,24,19,19,19,21,23,19,21,22,19,21,24,26,24,19,24,24,22,24,20,23,25,21,24,25,23,23,26,27,24,25,29,27,25,28,28,30,27,31,27,25,29,29,26,29,24,27,31,26,33,31,34,32,32,37,33,36,40,42,41,39,43,50,41,46,46,49,53,45,51,55,53,53,65,73,74,84,78,76,81,92,110,112,119,127,131,134,137,130,126,115,110,108,88,93,80,80,83,89,104,105,107,96,98,96,91,94,92,96,94,99,99,102,98,89,97,101,107,114,118,128,134,136,139,135,137,131,135,142,140,148,148,149,145,137,137,139,139,136,139,141,141,137,134,141,136,137,139,136,135,138,136,136,141,147,153,151,139,133,136,134,139,143,142,134,137,143,150,155,150,148,148,147,146,152,150,146,151,155,154,148,146,148,154,158,160,152,145,145,136,138,125,124,127,120,125,121,131,134,137,139,139,137,130,131,129,129,127,134,146,147,155,157,157,156,155,158,140,126,122,132,145,151,155,153,153,151,147,131,121,131,134,141,152,150,145,145,155,156,149,141,134,145,154,149,142,140,144,142,136,122,120,126,124,131,131,131,134,121,106,115,119,124,132,120,108,115,132,137,129,130,131,132,134,127,124,127,131,130,122,112,96,96,113,120,125,138,128,112,83,69,82,93,101,100,101,102,97,102,106,95,81,8
5,76,66,53,49,63,65,72,71,66,59,70,66,61,65,66,76,81,78,78,72,71,63,64,79,80,87,79,72,62,73,69,55,55,50,51,55,50,51,50,49,57,51,53,52,45,48,47,51,48,48,53,53,47,51,55,41,46,53,47,49,49,45,49,49,51,53,49,51,54,51,51,48,51,52,52,53,53,55,56,53,47,49,49,55,53,53,46,50,50,50,50,53,49,45,50,51,44,49,47,44,46,45,46,41,48,44,44,45,46,44,43,43,43,45,43,46,44,45,42,48,45,45,51,42,43,42,43,46,45,44,45,43,41,41,46,43,43,49,45,42,42,46,49,45,45,45,46,45,41,44,47,43,43,46,44,44,45,46,47,42,45,42,38,46,45,47,45,50,47,41,44,49,45,46,46,46,49,43,47,46,42,44,47,45,46,48,47,48,47,45,48,48,48,49,51,47,48,54,50,54,51,56,53,50,59,53,55,56,58,56,49,51,48,49,49,50,45,39,43,42,38,41,41,41,39,39,40,40,37,36,38,42,37,40,41,38,42,38,42,39,40,44,37,42,39,44,46,41,40,46,51,41,45,48,45,50,43,48,48,49,54,56,63,61,74,69,62,77,78,87,91,99,119,113,101,93,113,146,135,139,147,153,162,162,163,168,172,174,159,151,157,165,165,163,164,160,160,160,160,158,162,162,162,170,163,165,164,155,159,152,145,152,158,162,159,152,155,155,152,151,150,156,160,158,158,150,148,155,157,152,144,142,146,148,151,148,149,148,146,152,150,151,150,152,152,149,149,148,149,147,154,155,147,146,143,141,139,134,141,139,139,137,129,130,131,131,134,136,139,144,141,145,139,143,105,15,1,10,10,13,12,14,12,13,14,14,13,23,23,22,23,21,21,20,23,23,25,26,22,25,24,24,24,26,29,26,26,24,25,26,24,28,22,23,29,21,21,24,19,19,19,21,23,19,21,22,19,21,24,26,24,19,24,24,22,24,20,23,25,21,24,25,23,23,26,27,24,25,29,27,25,28,28,30,27,31,27,25,29,29,26,29,24,27,31,26,33,31,34,32,32,37,33,36,40,42,41,39,43,50,41,46,46,49,53,45,51,55,53,53,65,73,74,84,78,76,81,92,110,112,119,127,131,134,137,130,126,115,110,108,88,93,80,80,83,89,104,105,107,96,98,96,91,94,92,96,94,99,99,102,98,89,97,101,107,114,118,128,134,136,139,135,137,131,135,142,140,148,148,149,145,137,137,139,139,136,139,141,141,137,134,141,136,137,139,136,135,138,136,136,141,147,153,151,139,133,136,134,139,143,142,134,137,143,150,155,150,148,148,147,146,152,150,146,151,155,154,148,146,148,154,158,160,15
2,145,145,136,138,125,124,127,120,125,121,131,134,137,139,139,137,130,131,129,129,127,134,146,147,155,157,157,156,155,158,140,126,122,132,145,151,155,153,153,151,147,131,121,131,134,141,152,150,145,145,155,156,149,141,134,145,154,149,142,140,144,142,136,122,120,126,124,131,131,131,134,121,106,115,119,124,132,120,108,115,132,137,129,130,131,132,134,127,124,127,131,130,122,112,96,96,113,120,125,138,128,112,83,69,82,93,101,100,101,102,97,102,106,95,81,85,76,66,53,49,63,65,72,71,66,59,70,66,61,65,66,76,81,78,78,72,71,63,64,79,80,87,79,72,62,73,69,55,55,50,51,55,50,51,50,49,57,51,53,52,45,48,47,51,48,48,53,53,47,51,55,41,46,53,47,49,49,45,49,49,51,53,49,51,54,51,51,48,51,52,52,53,53,55,56,53,47,49,49,55,53,53,46,50,50,50,50,53,49,45,50,51,44,49,47,44,46,45,46,41,48,44,44,45,46,44,43,43,43,45,43,46,44,45,42,48,45,45,51,42,43,42,43,46,45,44,45,43,41,41,46,43,43,49,45,42,42,46,49,45,45,45,46,45,41,44,47,43,43,46,44,44,45,46,47,42,45,42,38,46,45,47,45,50,47,41,44,49,45,46,46,46,49,43,47,46,42,44,47,45,46,48,47,48,47,45,48,48,48,49,51,47,48,54,50,54,51,56,53,50,59,53,55,56,58,56,49,51,48,49,49,50,45,39,43,42,38,41,41,41,39,39,40,40,37,36,38,42,37,40,41,38,42,38,42,39,40,44,37,42,39,44,46,41,40,46,51,41,45,48,45,50,43,48,48,49,54,56,63,61,74,69,62,77,78,87,91,99,119,113,101,93,113,146,135,139,147,153,162,162,163,168,172,174,159,151,157,165,165,163,164,160,160,160,160,158,162,162,162,170,163,165,164,155,159,152,145,152,158,162,159,152,155,155,152,151,150,156,160,158,158,150,148,155,157,152,144,142,146,148,151,148,149,148,146,152,150,151,150,152,152,149,149,148,149,147,154,155,147,146,143,141,139,134,141,139,139,137,129,130,131,131,134,136,139,144,141,145,139,143,105,15,1,10,10,13,12,14,12,13,14,14,13,29,24,24,21,24,25,23,29,24,26,26,29,29,27,25,27,30,24,27,24,27,26,29,28,26,27,24,29,24,23,25,23,24,21,21,25,23,22,24,26,26,21,25,24,27,27,21,27,26,22,29,27,29,30,23,29,32,27,26,29,30,29,27,33,29,27,35,27,27,29,30,28,33,32,30,34,29,29,28,37,30,28,32,33,36,33,33,35,36,35,41,42,49,46,
49,52,49,55,59,57,57,64,63,63,73,82,97,87,98,108,102,109,101,107,111,107,110,115,112,111,105,101,110,110,122,111,117,122,124,128,119,128,128,131,134,134,140,138,141,134,129,126,124,128,132,141,144,155,160,152,144,141,144,144,145,147,147,149,149,147,145,141,136,135,138,139,139,138,138,141,139,139,141,139,145,145,141,144,142,147,146,142,138,127,127,128,136,142,150,153,142,140,139,144,147,148,149,148,138,129,133,141,144,137,135,127,123,128,132,135,141,147,145,150,143,134,133,139,136,134,143,141,141,139,136,147,147,148,145,142,145,146,149,153,148,146,145,140,148,146,144,146,141,141,139,141,148,147,145,139,134,133,138,143,135,128,127,142,153,142,133,119,112,114,123,124,119,129,126,126,133,136,131,129,127,128,121,144,111,15,2,10,10,14,11,14,12,12,14,13,13,15,13,13,13,13,14,13,14,14,14,14,14,14,14,14,14,15,15,14,14,14,14,15,14,15,14,14,15,15,15,14,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,14,14,15,15,15,15,15,15,16,16,15,15,15,15,16,15,15,15,16,15,16,17,16,16,16,15,16,15,16,16,16,16,15,16,15,16,16,15,16,16,16,16,16,16,17,16,16,16,16,16,16,16,16,17,16,17,17,16,16,16,16,16,16,16,16,16,16,16,16,17,17,16,16,16,16,16,16,16,16,16,16,16,16,16,17,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,17,16,16,16,16,16,16,16,16,16,17,16,16,16,16,16,16,17,16,16,16,16,17,16,16,16,17,17,16,16,16,17,17,16,17,16,17,16,16,16,16,17,17,17,17,16,17,16,17,17,16,16,16,17,16,17,17,16,17,17,17,16,16,17,16,17,17,16,17,16,17,16,16,16,16,17,16,16,16,16,17,16,16,16,16,17,17,17,16,16,17,16,17,16,17,16,16,17,17,16,16,17,17,16,17,17,17,17,17,16,17,17,16,17,17,16,16,16,17,17,16,17,17,16,16,16,16,16,17,17,16,16,16,17,16,16,16,16,17,16,17,16,16,17,16,17,17,17,16,16,17,16,17,16,16,16,17,17,16,17,17,17,17,16,16,17,17,17,16,17,16,17,17,16,17,16,17,17,17,16,16,17,17,17,17,16,16,16,17,17,16,16,17,17,17,17,16,17,17,17,17,16,16,17,17,17,16,16,17,17,17,16,16,17,17,17,17,16,16,17,17,16,16,16,16,17,16,16,17,16,17,16,17,16,16,17,16,17,17,16,17,16,17,16,17,17,17,17,17,16,16,17,17,17,17,16,17,16,16,17,16,1
7,16,17,17,16,17,16,17,16,16,16,16,17,17,17,17,29,24,24,21,24,25,23,29,24,26,26,29,29,27,25,27,30,24,27,24,27,26,29,28,26,27,24,29,24,23,25,23,24,21,21,25,23,22,24,26,26,21,25,24,27,27,21,27,26,22,29,27,29,30,23,29,32,27,26,29,30,29,27,33,29,27,35,27,27,29,30,28,33,32,30,34,29,29,28,37,30,28,32,33,36,33,33,35,36,35,41,42,49,46,49,52,49,55,59,57,57,64,63,63,73,82,97,87,98,108,102,109,101,107,111,107,110,115,112,111,105,101,110,110,122,111,117,122,124,128,119,128,128,131,134,134,140,138,141,134,129,126,124,128,132,141,144,155,160,152,144,141,144,144,145,147,147,149,149,147,145,141,136,135,138,139,139,138,138,141,139,139,141,139,145,145,141,144,142,147,146,142,138,127,127,128,136,142,150,153,142,140,139,144,147,148,149,148,138,129,133,141,144,137,135,127,123,128,132,135,141,147,145,150,143,134,133,139,136,134,143,141,141,139,136,147,147,148,145,142,145,146,149,153,148,146,145,140,148,146,144,146,141,141,139,141,148,147,145,139,134,133,138,143,135,128,127,142,153,142,133,119,112,114,123,124,119,129,126,126,133,136,131,129,127,128,121,144,111,15,2,10,10,14,11,14,12,12,14,13,13,15,13,13,13,13,14,13,14,14,14,14,14,14,14,14,14,15,15,14,14,14,14,15,14,15,14,14,15,15,15,14,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,14,14,15,15,15,15,15,15,16,16,15,15,15,15,16,15,15,15,16,15,16,17,16,16,16,15,16,15,16,16,16,16,15,16,15,16,16,15,16,16,16,16,16,16,17,16,16,16,16,16,16,16,16,17,16,17,17,16,16,16,16,16,16,16,16,16,16,16,16,17,17,16,16,16,16,16,16,16,16,16,16,16,16,16,17,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,17,16,16,16,16,16,16,16,16,16,17,16,16,16,16,16,16,17,16,16,16,16,17,16,16,16,17,17,16,16,16,17,17,16,17,16,17,16,16,16,16,17,17,17,17,16,17,16,17,17,16,16,16,17,16,17,17,16,17,17,17,16,16,17,16,17,17,16,17,16,17,16,16,16,16,17,16,16,16,16,17,16,16,16,16,17,17,17,16,16,17,16,17,16,17,16,16,17,17,16,16,17,17,16,17,17,17,17,17,16,17,17,16,17,17,16,16,16,17,17,16,17,17,16,16,16,16,16,17,17,16,16,16,17,16,16,16,16,17,16,17,16,16,17,16,17,17,17,16,16,17,16
,17,16,16,16,17,17,16,17,17,17,17,16,16,17,17,17,16,17,16,17,17,16,17,16,17,17,17,16,16,17,17,17,17,16,16,16,17,17,16,16,17,17,17,17,16,17,17,17,17,16,16,17,17,17,16,16,17,17,17,16,16,17,17,17,17,16,16,17,17,16,16,16,16,17,16,16,17,16,17,16,17,16,16,17,16,17,17,16,17,16,17,16,17,17,17,17,17,16,16,17,17,17,17,16,17,16,16,17,16,17,16,17,17,16,17,16,17,16,16,16,16,17,17,17,17,
\ No newline at end of file
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/codereview.settings b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/codereview.settings
index 34c6f1d..ccba2ee 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/codereview.settings
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/codereview.settings
@@ -1,5 +1,4 @@
-# This file is used by gcl to get repository specific information.
-GERRIT_HOST: chromium-review.googlesource.com
-GERRIT_PORT: 29418
+# This file is used by git cl to get repository specific information.
+GERRIT_HOST: True
 CODE_REVIEW_SERVER: chromium-review.googlesource.com
 GERRIT_SQUASH_UPLOADS: False
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/configure b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/configure
old mode 100644
new mode 100755
index e5a74c6..181e27b
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/configure
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/configure
@@ -31,7 +31,6 @@
   --libc=PATH                     path to alternate libc
   --size-limit=WxH                max size to allow in the decoder
   --as={yasm|nasm|auto}           use specified assembler [auto, yasm preferred]
-  --sdk-path=PATH                 path to root of sdk (android builds only)
   ${toggle_codec_srcs}            in/exclude codec library source code
   ${toggle_debug_libs}            in/exclude debug version of libraries
   ${toggle_static_msvcrt}         use static MSVCRT (VS builds only)
@@ -101,20 +100,20 @@
 all_platforms="${all_platforms} arm64-android-gcc"
 all_platforms="${all_platforms} arm64-darwin-gcc"
 all_platforms="${all_platforms} arm64-linux-gcc"
+all_platforms="${all_platforms} arm64-win64-gcc"
+all_platforms="${all_platforms} arm64-win64-vs15"
 all_platforms="${all_platforms} armv7-android-gcc"   #neon Cortex-A8
 all_platforms="${all_platforms} armv7-darwin-gcc"    #neon Cortex-A8
 all_platforms="${all_platforms} armv7-linux-rvct"    #neon Cortex-A8
 all_platforms="${all_platforms} armv7-linux-gcc"     #neon Cortex-A8
 all_platforms="${all_platforms} armv7-none-rvct"     #neon Cortex-A8
-all_platforms="${all_platforms} armv7-win32-vs11"
-all_platforms="${all_platforms} armv7-win32-vs12"
+all_platforms="${all_platforms} armv7-win32-gcc"
 all_platforms="${all_platforms} armv7-win32-vs14"
 all_platforms="${all_platforms} armv7-win32-vs15"
 all_platforms="${all_platforms} armv7s-darwin-gcc"
 all_platforms="${all_platforms} armv8-linux-gcc"
 all_platforms="${all_platforms} mips32-linux-gcc"
 all_platforms="${all_platforms} mips64-linux-gcc"
-all_platforms="${all_platforms} ppc64-linux-gcc"
 all_platforms="${all_platforms} ppc64le-linux-gcc"
 all_platforms="${all_platforms} sparc-solaris-gcc"
 all_platforms="${all_platforms} x86-android-gcc"
@@ -129,17 +128,16 @@
 all_platforms="${all_platforms} x86-darwin14-gcc"
 all_platforms="${all_platforms} x86-darwin15-gcc"
 all_platforms="${all_platforms} x86-darwin16-gcc"
+all_platforms="${all_platforms} x86-darwin17-gcc"
 all_platforms="${all_platforms} x86-iphonesimulator-gcc"
 all_platforms="${all_platforms} x86-linux-gcc"
 all_platforms="${all_platforms} x86-linux-icc"
 all_platforms="${all_platforms} x86-os2-gcc"
 all_platforms="${all_platforms} x86-solaris-gcc"
 all_platforms="${all_platforms} x86-win32-gcc"
-all_platforms="${all_platforms} x86-win32-vs10"
-all_platforms="${all_platforms} x86-win32-vs11"
-all_platforms="${all_platforms} x86-win32-vs12"
 all_platforms="${all_platforms} x86-win32-vs14"
 all_platforms="${all_platforms} x86-win32-vs15"
+all_platforms="${all_platforms} x86-win32-vs16"
 all_platforms="${all_platforms} x86_64-android-gcc"
 all_platforms="${all_platforms} x86_64-darwin9-gcc"
 all_platforms="${all_platforms} x86_64-darwin10-gcc"
@@ -149,16 +147,16 @@
 all_platforms="${all_platforms} x86_64-darwin14-gcc"
 all_platforms="${all_platforms} x86_64-darwin15-gcc"
 all_platforms="${all_platforms} x86_64-darwin16-gcc"
+all_platforms="${all_platforms} x86_64-darwin17-gcc"
+all_platforms="${all_platforms} x86_64-darwin18-gcc"
 all_platforms="${all_platforms} x86_64-iphonesimulator-gcc"
 all_platforms="${all_platforms} x86_64-linux-gcc"
 all_platforms="${all_platforms} x86_64-linux-icc"
 all_platforms="${all_platforms} x86_64-solaris-gcc"
 all_platforms="${all_platforms} x86_64-win64-gcc"
-all_platforms="${all_platforms} x86_64-win64-vs10"
-all_platforms="${all_platforms} x86_64-win64-vs11"
-all_platforms="${all_platforms} x86_64-win64-vs12"
 all_platforms="${all_platforms} x86_64-win64-vs14"
 all_platforms="${all_platforms} x86_64-win64-vs15"
+all_platforms="${all_platforms} x86_64-win64-vs16"
 all_platforms="${all_platforms} generic-gnu"
 
 # all_targets is a list of all targets that can be configured
@@ -273,9 +271,10 @@
     unistd_h
 "
 EXPERIMENT_LIST="
-    spatial_svc
     fp_mb_stats
     emulate_hardware
+    non_greedy_mv
+    rate_ctrl
 "
 CONFIG_LIST="
     dependency_tracking
@@ -325,12 +324,15 @@
     multi_res_encoding
     temporal_denoising
     vp9_temporal_denoising
+    consistent_recode
     coefficient_range_checking
     vp9_highbitdepth
     better_hw_compatibility
     experimental
     size_limit
     always_adjust_bpm
+    bitstream_debug
+    mismatch_debug
     ${EXPERIMENT_LIST}
 "
 CMDLINE_SELECT="
@@ -386,11 +388,14 @@
     multi_res_encoding
     temporal_denoising
     vp9_temporal_denoising
+    consistent_recode
     coefficient_range_checking
     better_hw_compatibility
     vp9_highbitdepth
     experimental
     always_adjust_bpm
+    bitstream_debug
+    mismatch_debug
 "
 
 process_cmdline() {
@@ -421,6 +426,12 @@
 }
 
 post_process_cmdline() {
+    if enabled coefficient_range_checking; then
+      echo "coefficient-range-checking is for decoders only, disabling encoders:"
+      soft_disable vp8_encoder
+      soft_disable vp9_encoder
+    fi
+
     c=""
 
     # Enable all detected codecs, if they haven't been disabled
@@ -442,6 +453,7 @@
     enabled child || write_common_config_banner
     write_common_target_config_h ${BUILD_PFX}vpx_config.h
     write_common_config_targets
+    enabled win_arm64_neon_h_workaround && write_win_arm64_neon_h_workaround ${BUILD_PFX}arm_neon.h
 
     # Calculate the default distribution name, based on the enabled features
     cf=""
@@ -518,7 +530,7 @@
         # here rather than at option parse time because the target auto-detect
         # magic happens after the command line has been parsed.
         case "${tgt_os}" in
-        linux|os2|darwin*|iphonesimulator*)
+        linux|os2|solaris|darwin*|iphonesimulator*)
             # Supported platforms
             ;;
         *)
@@ -570,16 +582,30 @@
         check_ld() {
             true
         }
+        check_lib() {
+            true
+        }
     fi
     check_header stdio.h || die "Unable to invoke compiler: ${CC} ${CFLAGS}"
     check_ld <<EOF || die "Toolchain is unable to link executables"
 int main(void) {return 0;}
 EOF
     # check system headers
-    check_header pthread.h
+
+    # Use both check_header and check_lib here, since check_lib
+    # could be a stub that always returns true.
+    check_header pthread.h && check_lib -lpthread <<EOF || disable_feature pthread_h
+#include <pthread.h>
+#include <stddef.h>
+int main(void) { return pthread_create(NULL, NULL, NULL, NULL); }
+EOF
     check_header unistd.h # for sysconf(3) and friends.
 
     check_header vpx/vpx_integer.h -I${source_path} && enable_feature vpx_ports
+
+    if enabled neon && ! enabled external_build; then
+      check_header arm_neon.h || die "Unable to find arm_neon.h"
+    fi
 }
 
 process_toolchain() {
@@ -598,22 +624,39 @@
         check_add_cflags -Wcast-qual
         check_add_cflags -Wvla
         check_add_cflags -Wimplicit-function-declaration
+        check_add_cflags -Wmissing-declarations
+        check_add_cflags -Wmissing-prototypes
         check_add_cflags -Wuninitialized
         check_add_cflags -Wunused
-        # -Wextra has some tricky cases. Rather than fix them all now, get the
-        # flag for as many files as possible and fix the remaining issues
-        # piecemeal.
-        # https://bugs.chromium.org/p/webm/issues/detail?id=1069
         check_add_cflags -Wextra
         # check_add_cflags also adds to cxxflags. gtest does not do well with
-        # -Wundef so add it explicitly to CFLAGS only.
+        # these flags so add them explicitly to CFLAGS only.
         check_cflags -Wundef && add_cflags_only -Wundef
+        check_cflags -Wframe-larger-than=52000 && \
+          add_cflags_only -Wframe-larger-than=52000
         if enabled mips || [ -z "${INLINE}" ]; then
           enabled extra_warnings || check_add_cflags -Wno-unused-function
         fi
+        # Enforce c89 for c files. Don't be too strict about it though. Allow
+        # gnu extensions like "//" for comments.
+        check_cflags -std=gnu89 && add_cflags_only -std=gnu89
         # Avoid this warning for third_party C++ sources. Some reorganization
         # would be needed to apply this only to test/*.cc.
         check_cflags -Wshorten-64-to-32 && add_cflags_only -Wshorten-64-to-32
+
+        # Quiet gcc 6 vs 7 abi warnings:
+        # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77728
+        if enabled arm; then
+          check_add_cxxflags -Wno-psabi
+        fi
+
+        # disable some warnings specific to libyuv.
+        check_cxxflags -Wno-missing-declarations \
+          && LIBYUV_CXXFLAGS="${LIBYUV_CXXFLAGS} -Wno-missing-declarations"
+        check_cxxflags -Wno-missing-prototypes \
+          && LIBYUV_CXXFLAGS="${LIBYUV_CXXFLAGS} -Wno-missing-prototypes"
+        check_cxxflags -Wno-unused-parameter \
+          && LIBYUV_CXXFLAGS="${LIBYUV_CXXFLAGS} -Wno-unused-parameter"
     fi
 
     if enabled icc; then
@@ -684,7 +727,7 @@
             soft_enable libyuv
         ;;
         *-android-*)
-            soft_enable webm_io
+            check_add_cxxflags -std=c++11 && soft_enable webm_io
             soft_enable libyuv
             # GTestLog must be modified to use Android logging utilities.
         ;;
@@ -693,30 +736,23 @@
             # x86 targets.
         ;;
         *-iphonesimulator-*)
-            soft_enable webm_io
+            check_add_cxxflags -std=c++11 && soft_enable webm_io
             soft_enable libyuv
         ;;
         *-win*)
             # Some mingw toolchains don't have pthread available by default.
             # Treat these more like visual studio where threading in gtest
             # would be disabled for the same reason.
-            check_cxx "$@" <<EOF && soft_enable unit_tests
-int z;
-EOF
-            check_cxx "$@" <<EOF && soft_enable webm_io
-int z;
-EOF
+            check_add_cxxflags -std=c++11 && soft_enable unit_tests \
+              && soft_enable webm_io
             check_cxx "$@" <<EOF && soft_enable libyuv
 int z;
 EOF
         ;;
         *)
-            enabled pthread_h && check_cxx "$@" <<EOF && soft_enable unit_tests
-int z;
-EOF
-            check_cxx "$@" <<EOF && soft_enable webm_io
-int z;
-EOF
+            enabled pthread_h && check_add_cxxflags -std=c++11 \
+              && soft_enable unit_tests
+            check_add_cxxflags -std=c++11 && soft_enable webm_io
             check_cxx "$@" <<EOF && soft_enable libyuv
 int z;
 EOF
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples.mk b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples.mk
index 38c4d75..a28e529 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples.mk
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples.mk
@@ -23,7 +23,7 @@
                 third_party/libyuv/source/row_any.cc \
                 third_party/libyuv/source/row_common.cc \
                 third_party/libyuv/source/row_gcc.cc \
-                third_party/libyuv/source/row_mips.cc \
+                third_party/libyuv/source/row_msa.cc \
                 third_party/libyuv/source/row_neon.cc \
                 third_party/libyuv/source/row_neon64.cc \
                 third_party/libyuv/source/row_win.cc \
@@ -31,7 +31,7 @@
                 third_party/libyuv/source/scale_any.cc \
                 third_party/libyuv/source/scale_common.cc \
                 third_party/libyuv/source/scale_gcc.cc \
-                third_party/libyuv/source/scale_mips.cc \
+                third_party/libyuv/source/scale_msa.cc \
                 third_party/libyuv/source/scale_neon.cc \
                 third_party/libyuv/source/scale_neon64.cc \
                 third_party/libyuv/source/scale_win.cc \
@@ -65,6 +65,7 @@
 # while EXAMPLES demonstrate specific portions of the API.
 UTILS-$(CONFIG_DECODERS)    += vpxdec.c
 vpxdec.SRCS                 += md5_utils.c md5_utils.h
+vpxdec.SRCS                 += vpx_ports/compiler_attributes.h
 vpxdec.SRCS                 += vpx_ports/mem_ops.h
 vpxdec.SRCS                 += vpx_ports/mem_ops_aligned.h
 vpxdec.SRCS                 += vpx_ports/msvc.h
@@ -72,11 +73,12 @@
 vpxdec.SRCS                 += vpx/vpx_integer.h
 vpxdec.SRCS                 += args.c args.h
 vpxdec.SRCS                 += ivfdec.c ivfdec.h
+vpxdec.SRCS                 += y4minput.c y4minput.h
 vpxdec.SRCS                 += tools_common.c tools_common.h
 vpxdec.SRCS                 += y4menc.c y4menc.h
 ifeq ($(CONFIG_LIBYUV),yes)
   vpxdec.SRCS                 += $(LIBYUV_SRCS)
-  $(BUILD_PFX)third_party/libyuv/%.cc.o: CXXFLAGS += -Wno-unused-parameter
+  $(BUILD_PFX)third_party/libyuv/%.cc.o: CXXFLAGS += ${LIBYUV_CXXFLAGS}
 endif
 ifeq ($(CONFIG_WEBM_IO),yes)
   vpxdec.SRCS                 += $(LIBWEBM_COMMON_SRCS)
@@ -109,18 +111,20 @@
 endif
 vpxenc.GUID                  = 548DEC74-7A15-4B2B-AFC3-AA102E7C25C1
 vpxenc.DESCRIPTION           = Full featured encoder
-ifeq ($(CONFIG_SPATIAL_SVC),yes)
-  EXAMPLES-$(CONFIG_VP9_ENCODER)      += vp9_spatial_svc_encoder.c
-  vp9_spatial_svc_encoder.SRCS        += args.c args.h
-  vp9_spatial_svc_encoder.SRCS        += ivfenc.c ivfenc.h
-  vp9_spatial_svc_encoder.SRCS        += tools_common.c tools_common.h
-  vp9_spatial_svc_encoder.SRCS        += video_common.h
-  vp9_spatial_svc_encoder.SRCS        += video_writer.h video_writer.c
-  vp9_spatial_svc_encoder.SRCS        += vpx_ports/msvc.h
-  vp9_spatial_svc_encoder.SRCS        += vpxstats.c vpxstats.h
-  vp9_spatial_svc_encoder.GUID        = 4A38598D-627D-4505-9C7B-D4020C84100D
-  vp9_spatial_svc_encoder.DESCRIPTION = VP9 Spatial SVC Encoder
-endif
+
+EXAMPLES-$(CONFIG_VP9_ENCODER)      += vp9_spatial_svc_encoder.c
+vp9_spatial_svc_encoder.SRCS        += args.c args.h
+vp9_spatial_svc_encoder.SRCS        += ivfenc.c ivfenc.h
+vp9_spatial_svc_encoder.SRCS        += y4minput.c y4minput.h
+vp9_spatial_svc_encoder.SRCS        += tools_common.c tools_common.h
+vp9_spatial_svc_encoder.SRCS        += video_common.h
+vp9_spatial_svc_encoder.SRCS        += video_writer.h video_writer.c
+vp9_spatial_svc_encoder.SRCS        += vpx_ports/msvc.h
+vp9_spatial_svc_encoder.SRCS        += vpxstats.c vpxstats.h
+vp9_spatial_svc_encoder.SRCS        += examples/svc_encodeframe.c
+vp9_spatial_svc_encoder.SRCS        += examples/svc_context.h
+vp9_spatial_svc_encoder.GUID        = 4A38598D-627D-4505-9C7B-D4020C84100D
+vp9_spatial_svc_encoder.DESCRIPTION = VP9 Spatial SVC Encoder
 
 ifneq ($(CONFIG_SHARED),yes)
 EXAMPLES-$(CONFIG_VP9_ENCODER)    += resize_util.c
@@ -128,6 +132,7 @@
 
 EXAMPLES-$(CONFIG_ENCODERS)          += vpx_temporal_svc_encoder.c
 vpx_temporal_svc_encoder.SRCS        += ivfenc.c ivfenc.h
+vpx_temporal_svc_encoder.SRCS        += y4minput.c y4minput.h
 vpx_temporal_svc_encoder.SRCS        += tools_common.c tools_common.h
 vpx_temporal_svc_encoder.SRCS        += video_common.h
 vpx_temporal_svc_encoder.SRCS        += video_writer.h video_writer.c
@@ -137,6 +142,7 @@
 EXAMPLES-$(CONFIG_DECODERS)        += simple_decoder.c
 simple_decoder.GUID                 = D3BBF1E9-2427-450D-BBFF-B2843C1D44CC
 simple_decoder.SRCS                += ivfdec.h ivfdec.c
+simple_decoder.SRCS                += y4minput.c y4minput.h
 simple_decoder.SRCS                += tools_common.h tools_common.c
 simple_decoder.SRCS                += video_common.h
 simple_decoder.SRCS                += video_reader.h video_reader.c
@@ -146,6 +152,7 @@
 simple_decoder.DESCRIPTION          = Simplified decoder loop
 EXAMPLES-$(CONFIG_DECODERS)        += postproc.c
 postproc.SRCS                      += ivfdec.h ivfdec.c
+postproc.SRCS                      += y4minput.c y4minput.h
 postproc.SRCS                      += tools_common.h tools_common.c
 postproc.SRCS                      += video_common.h
 postproc.SRCS                      += video_reader.h video_reader.c
@@ -157,9 +164,11 @@
 EXAMPLES-$(CONFIG_DECODERS)        += decode_to_md5.c
 decode_to_md5.SRCS                 += md5_utils.h md5_utils.c
 decode_to_md5.SRCS                 += ivfdec.h ivfdec.c
+decode_to_md5.SRCS                 += y4minput.c y4minput.h
 decode_to_md5.SRCS                 += tools_common.h tools_common.c
 decode_to_md5.SRCS                 += video_common.h
 decode_to_md5.SRCS                 += video_reader.h video_reader.c
+decode_to_md5.SRCS                 += vpx_ports/compiler_attributes.h
 decode_to_md5.SRCS                 += vpx_ports/mem_ops.h
 decode_to_md5.SRCS                 += vpx_ports/mem_ops_aligned.h
 decode_to_md5.SRCS                 += vpx_ports/msvc.h
@@ -167,6 +176,7 @@
 decode_to_md5.DESCRIPTION           = Frame by frame MD5 checksum
 EXAMPLES-$(CONFIG_ENCODERS)     += simple_encoder.c
 simple_encoder.SRCS             += ivfenc.h ivfenc.c
+simple_encoder.SRCS             += y4minput.c y4minput.h
 simple_encoder.SRCS             += tools_common.h tools_common.c
 simple_encoder.SRCS             += video_common.h
 simple_encoder.SRCS             += video_writer.h video_writer.c
@@ -175,6 +185,7 @@
 simple_encoder.DESCRIPTION       = Simplified encoder loop
 EXAMPLES-$(CONFIG_VP9_ENCODER)  += vp9_lossless_encoder.c
 vp9_lossless_encoder.SRCS       += ivfenc.h ivfenc.c
+vp9_lossless_encoder.SRCS       += y4minput.c y4minput.h
 vp9_lossless_encoder.SRCS       += tools_common.h tools_common.c
 vp9_lossless_encoder.SRCS       += video_common.h
 vp9_lossless_encoder.SRCS       += video_writer.h video_writer.c
@@ -183,6 +194,7 @@
 vp9_lossless_encoder.DESCRIPTION = Simplified lossless VP9 encoder
 EXAMPLES-$(CONFIG_ENCODERS)     += twopass_encoder.c
 twopass_encoder.SRCS            += ivfenc.h ivfenc.c
+twopass_encoder.SRCS            += y4minput.c y4minput.h
 twopass_encoder.SRCS            += tools_common.h tools_common.c
 twopass_encoder.SRCS            += video_common.h
 twopass_encoder.SRCS            += video_writer.h video_writer.c
@@ -191,6 +203,7 @@
 twopass_encoder.DESCRIPTION      = Two-pass encoder loop
 EXAMPLES-$(CONFIG_DECODERS)     += decode_with_drops.c
 decode_with_drops.SRCS          += ivfdec.h ivfdec.c
+decode_with_drops.SRCS          += y4minput.c y4minput.h
 decode_with_drops.SRCS          += tools_common.h tools_common.c
 decode_with_drops.SRCS          += video_common.h
 decode_with_drops.SRCS          += video_reader.h video_reader.c
@@ -201,6 +214,7 @@
 decode_with_drops.DESCRIPTION    = Drops frames while decoding
 EXAMPLES-$(CONFIG_ENCODERS)        += set_maps.c
 set_maps.SRCS                      += ivfenc.h ivfenc.c
+set_maps.SRCS                      += y4minput.c y4minput.h
 set_maps.SRCS                      += tools_common.h tools_common.c
 set_maps.SRCS                      += video_common.h
 set_maps.SRCS                      += video_writer.h video_writer.c
@@ -209,6 +223,7 @@
 set_maps.DESCRIPTION                = Set active and ROI maps
 EXAMPLES-$(CONFIG_VP8_ENCODER)     += vp8cx_set_ref.c
 vp8cx_set_ref.SRCS                 += ivfenc.h ivfenc.c
+vp8cx_set_ref.SRCS                 += y4minput.c y4minput.h
 vp8cx_set_ref.SRCS                 += tools_common.h tools_common.c
 vp8cx_set_ref.SRCS                 += video_common.h
 vp8cx_set_ref.SRCS                 += video_writer.h video_writer.c
@@ -220,6 +235,7 @@
 ifeq ($(CONFIG_DECODERS),yes)
 EXAMPLES-yes                       += vp9cx_set_ref.c
 vp9cx_set_ref.SRCS                 += ivfenc.h ivfenc.c
+vp9cx_set_ref.SRCS                 += y4minput.c y4minput.h
 vp9cx_set_ref.SRCS                 += tools_common.h tools_common.c
 vp9cx_set_ref.SRCS                 += video_common.h
 vp9cx_set_ref.SRCS                 += video_writer.h video_writer.c
@@ -232,6 +248,7 @@
 ifeq ($(CONFIG_LIBYUV),yes)
 EXAMPLES-$(CONFIG_VP8_ENCODER)          += vp8_multi_resolution_encoder.c
 vp8_multi_resolution_encoder.SRCS       += ivfenc.h ivfenc.c
+vp8_multi_resolution_encoder.SRCS       += y4minput.c y4minput.h
 vp8_multi_resolution_encoder.SRCS       += tools_common.h tools_common.c
 vp8_multi_resolution_encoder.SRCS       += video_writer.h video_writer.c
 vp8_multi_resolution_encoder.SRCS       += vpx_ports/msvc.h
@@ -403,3 +420,4 @@
 DOCS-yes += examples.doxy samples.dox
 examples.doxy: samples.dox $(ALL_EXAMPLES:.c=.dox)
 	@echo "INPUT += $^" > $@
+	@echo "ENABLED_SECTIONS += samples" >> $@
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/svc_context.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/svc_context.h
similarity index 83%
rename from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/svc_context.h
rename to Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/svc_context.h
index 4627850..c5779ce8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/svc_context.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/svc_context.h
@@ -13,11 +13,11 @@
  * spatial SVC frame
  */
 
-#ifndef VPX_SVC_CONTEXT_H_
-#define VPX_SVC_CONTEXT_H_
+#ifndef VPX_EXAMPLES_SVC_CONTEXT_H_
+#define VPX_EXAMPLES_SVC_CONTEXT_H_
 
-#include "./vp8cx.h"
-#include "./vpx_encoder.h"
+#include "vpx/vp8cx.h"
+#include "vpx/vpx_encoder.h"
 
 #ifdef __cplusplus
 extern "C" {
@@ -35,10 +35,8 @@
   int temporal_layers;  // number of temporal layers
   int temporal_layering_mode;
   SVC_LOG_LEVEL log_level;  // amount of information to display
-  int log_print;       // when set, printf log messages instead of returning the
-                       // message with svc_get_message
-  int output_rc_stat;  // for outputting rc stats
-  int speed;           // speed setting for codec
+  int output_rc_stat;       // for outputting rc stats
+  int speed;                // speed setting for codec
   int threads;
   int aqmode;  // turns on aq-mode=3 (cyclic_refresh): 0=off, 1=on.
   // private storage for vpx_svc_encode
@@ -71,7 +69,6 @@
   int layer;
   int use_multiple_frame_contexts;
 
-  char message_buffer[2048];
   vpx_codec_ctx_t *codec_ctx;
 } SvcInternal_t;
 
@@ -106,15 +103,10 @@
 /**
  * dump accumulated statistics and reset accumulated values
  */
-const char *vpx_svc_dump_statistics(SvcContext *svc_ctx);
-
-/**
- *  get status message from previous encode
- */
-const char *vpx_svc_get_message(const SvcContext *svc_ctx);
+void vpx_svc_dump_statistics(SvcContext *svc_ctx);
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VPX_SVC_CONTEXT_H_
+#endif  // VPX_EXAMPLES_SVC_CONTEXT_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/src/svc_encodeframe.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/svc_encodeframe.c
similarity index 85%
rename from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/src/svc_encodeframe.c
rename to Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/svc_encodeframe.c
index f633600..a73ee8e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/src/svc_encodeframe.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/svc_encodeframe.c
@@ -22,7 +22,7 @@
 #include <string.h>
 #define VPX_DISABLE_CTRL_TYPECHECKS 1
 #include "./vpx_config.h"
-#include "vpx/svc_context.h"
+#include "./svc_context.h"
 #include "vpx/vp8cx.h"
 #include "vpx/vpx_encoder.h"
 #include "vpx_mem/vpx_mem.h"
@@ -95,17 +95,11 @@
   return (const SvcInternal_t *)svc_ctx->internal;
 }
 
-static void svc_log_reset(SvcContext *svc_ctx) {
-  SvcInternal_t *const si = (SvcInternal_t *)svc_ctx->internal;
-  si->message_buffer[0] = '\0';
-}
-
 static int svc_log(SvcContext *svc_ctx, SVC_LOG_LEVEL level, const char *fmt,
                    ...) {
   char buf[512];
   int retval = 0;
   va_list ap;
-  SvcInternal_t *const si = get_svc_internal(svc_ctx);
 
   if (level > svc_ctx->log_level) {
     return retval;
@@ -115,16 +109,8 @@
   retval = vsnprintf(buf, sizeof(buf), fmt, ap);
   va_end(ap);
 
-  if (svc_ctx->log_print) {
-    printf("%s", buf);
-  } else {
-    strncat(si->message_buffer, buf,
-            sizeof(si->message_buffer) - strlen(si->message_buffer) - 1);
-  }
+  printf("%s", buf);
 
-  if (level == SVC_LOG_ERROR) {
-    si->codec_ctx->err_detail = si->message_buffer;
-  }
   return retval;
 }
 
@@ -169,6 +155,7 @@
     return VPX_CODEC_INVALID_PARAM;
 
   input_string = strdup(input);
+  if (input_string == NULL) return VPX_CODEC_MEM_ERROR;
   token = strtok_r(input_string, delim, &save_ptr);
   for (i = 0; i < num_layers; ++i) {
     if (token != NULL) {
@@ -208,6 +195,7 @@
 
   if (options == NULL) return VPX_CODEC_OK;
   input_string = strdup(options);
+  if (input_string == NULL) return VPX_CODEC_MEM_ERROR;
 
   // parse option name
   option_name = strtok_r(input_string, "=", &input_ptr);
@@ -294,8 +282,8 @@
   return VPX_CODEC_OK;
 }
 
-vpx_codec_err_t assign_layer_bitrates(const SvcContext *svc_ctx,
-                                      vpx_codec_enc_cfg_t *const enc_cfg) {
+static vpx_codec_err_t assign_layer_bitrates(
+    const SvcContext *svc_ctx, vpx_codec_enc_cfg_t *const enc_cfg) {
   int i;
   const SvcInternal_t *const si = get_const_svc_internal(svc_ctx);
   int sl, tl, spatial_layer_target;
@@ -471,8 +459,7 @@
     svc_log(svc_ctx, SVC_LOG_ERROR,
             "spatial layers * temporal layers exceeds the maximum number of "
             "allowed layers of %d\n",
-            svc_ctx->spatial_layers * svc_ctx->temporal_layers,
-            (int)VPX_MAX_LAYERS);
+            svc_ctx->spatial_layers * svc_ctx->temporal_layers, VPX_MAX_LAYERS);
     return VPX_CODEC_INVALID_PARAM;
   }
   res = assign_layer_bitrates(svc_ctx, enc_cfg);
@@ -485,11 +472,6 @@
     return VPX_CODEC_INVALID_PARAM;
   }
 
-#if CONFIG_SPATIAL_SVC
-  for (i = 0; i < svc_ctx->spatial_layers; ++i)
-    enc_cfg->ss_enable_auto_alt_ref[i] = si->enable_auto_alt_ref[i];
-#endif
-
   if (svc_ctx->temporal_layers > 1) {
     int i;
     for (i = 0; i < svc_ctx->temporal_layers; ++i) {
@@ -514,7 +496,17 @@
     enc_cfg->rc_buf_initial_sz = 500;
     enc_cfg->rc_buf_optimal_sz = 600;
     enc_cfg->rc_buf_sz = 1000;
-    enc_cfg->rc_dropframe_thresh = 0;
+  }
+
+  for (tl = 0; tl < svc_ctx->temporal_layers; ++tl) {
+    for (sl = 0; sl < svc_ctx->spatial_layers; ++sl) {
+      i = sl * svc_ctx->temporal_layers + tl;
+      if (enc_cfg->rc_end_usage == VPX_CBR &&
+          enc_cfg->g_pass == VPX_RC_ONE_PASS) {
+        si->svc_params.max_quantizers[i] = enc_cfg->rc_max_quantizer;
+        si->svc_params.min_quantizers[i] = enc_cfg->rc_min_quantizer;
+      }
+    }
   }
 
   if (enc_cfg->g_error_resilient == 0 && si->use_multiple_frame_contexts == 0)
@@ -548,8 +540,6 @@
     return VPX_CODEC_INVALID_PARAM;
   }
 
-  svc_log_reset(svc_ctx);
-
   res =
       vpx_codec_encode(codec_ctx, rawimg, pts, (uint32_t)duration, 0, deadline);
   if (res != VPX_CODEC_OK) {
@@ -559,56 +549,7 @@
   iter = NULL;
   while ((cx_pkt = vpx_codec_get_cx_data(codec_ctx, &iter))) {
     switch (cx_pkt->kind) {
-#if CONFIG_SPATIAL_SVC && defined(VPX_TEST_SPATIAL_SVC)
-      case VPX_CODEC_SPATIAL_SVC_LAYER_PSNR: {
-        int i;
-        for (i = 0; i < svc_ctx->spatial_layers; ++i) {
-          int j;
-          svc_log(svc_ctx, SVC_LOG_DEBUG,
-                  "SVC frame: %d, layer: %d, PSNR(Total/Y/U/V): "
-                  "%2.3f  %2.3f  %2.3f  %2.3f \n",
-                  si->psnr_pkt_received, i, cx_pkt->data.layer_psnr[i].psnr[0],
-                  cx_pkt->data.layer_psnr[i].psnr[1],
-                  cx_pkt->data.layer_psnr[i].psnr[2],
-                  cx_pkt->data.layer_psnr[i].psnr[3]);
-          svc_log(svc_ctx, SVC_LOG_DEBUG,
-                  "SVC frame: %d, layer: %d, SSE(Total/Y/U/V): "
-                  "%2.3f  %2.3f  %2.3f  %2.3f \n",
-                  si->psnr_pkt_received, i, cx_pkt->data.layer_psnr[i].sse[0],
-                  cx_pkt->data.layer_psnr[i].sse[1],
-                  cx_pkt->data.layer_psnr[i].sse[2],
-                  cx_pkt->data.layer_psnr[i].sse[3]);
-
-          for (j = 0; j < COMPONENTS; ++j) {
-            si->psnr_sum[i][j] += cx_pkt->data.layer_psnr[i].psnr[j];
-            si->sse_sum[i][j] += cx_pkt->data.layer_psnr[i].sse[j];
-          }
-        }
-        ++si->psnr_pkt_received;
-        break;
-      }
-      case VPX_CODEC_SPATIAL_SVC_LAYER_SIZES: {
-        int i;
-        for (i = 0; i < svc_ctx->spatial_layers; ++i)
-          si->bytes_sum[i] += cx_pkt->data.layer_sizes[i];
-        break;
-      }
-#endif
       case VPX_CODEC_PSNR_PKT: {
-#if CONFIG_SPATIAL_SVC && defined(VPX_TEST_SPATIAL_SVC)
-        int j;
-        svc_log(svc_ctx, SVC_LOG_DEBUG,
-                "frame: %d, layer: %d, PSNR(Total/Y/U/V): "
-                "%2.3f  %2.3f  %2.3f  %2.3f \n",
-                si->psnr_pkt_received, 0, cx_pkt->data.layer_psnr[0].psnr[0],
-                cx_pkt->data.layer_psnr[0].psnr[1],
-                cx_pkt->data.layer_psnr[0].psnr[2],
-                cx_pkt->data.layer_psnr[0].psnr[3]);
-        for (j = 0; j < COMPONENTS; ++j) {
-          si->psnr_sum[0][j] += cx_pkt->data.layer_psnr[0].psnr[j];
-          si->sse_sum[0][j] += cx_pkt->data.layer_psnr[0].sse[j];
-        }
-#endif
       }
         ++si->psnr_pkt_received;
         break;
@@ -619,19 +560,13 @@
   return VPX_CODEC_OK;
 }
 
-const char *vpx_svc_get_message(const SvcContext *svc_ctx) {
-  const SvcInternal_t *const si = get_const_svc_internal(svc_ctx);
-  if (svc_ctx == NULL || si == NULL) return NULL;
-  return si->message_buffer;
-}
-
 static double calc_psnr(double d) {
   if (d == 0) return 100;
   return -10.0 * log(d) / log(10.0);
 }
 
 // dump accumulated statistics and reset accumulated values
-const char *vpx_svc_dump_statistics(SvcContext *svc_ctx) {
+void vpx_svc_dump_statistics(SvcContext *svc_ctx) {
   int number_of_frames;
   int i, j;
   uint32_t bytes_total = 0;
@@ -641,21 +576,19 @@
   double y_scale;
 
   SvcInternal_t *const si = get_svc_internal(svc_ctx);
-  if (svc_ctx == NULL || si == NULL) return NULL;
-
-  svc_log_reset(svc_ctx);
+  if (svc_ctx == NULL || si == NULL) return;
 
   number_of_frames = si->psnr_pkt_received;
-  if (number_of_frames <= 0) return vpx_svc_get_message(svc_ctx);
+  if (number_of_frames <= 0) return;
 
   svc_log(svc_ctx, SVC_LOG_INFO, "\n");
   for (i = 0; i < svc_ctx->spatial_layers; ++i) {
     svc_log(svc_ctx, SVC_LOG_INFO,
             "Layer %d Average PSNR=[%2.3f, %2.3f, %2.3f, %2.3f], Bytes=[%u]\n",
-            i, (double)si->psnr_sum[i][0] / number_of_frames,
-            (double)si->psnr_sum[i][1] / number_of_frames,
-            (double)si->psnr_sum[i][2] / number_of_frames,
-            (double)si->psnr_sum[i][3] / number_of_frames, si->bytes_sum[i]);
+            i, si->psnr_sum[i][0] / number_of_frames,
+            si->psnr_sum[i][1] / number_of_frames,
+            si->psnr_sum[i][2] / number_of_frames,
+            si->psnr_sum[i][3] / number_of_frames, si->bytes_sum[i]);
     // the following psnr calculation is deduced from ffmpeg.c#print_report
     y_scale = si->width * si->height * 255.0 * 255.0 * number_of_frames;
     scale[1] = y_scale;
@@ -686,7 +619,6 @@
   si->psnr_pkt_received = 0;
 
   svc_log(svc_ctx, SVC_LOG_INFO, "Total Bytes=[%u]\n", bytes_total);
-  return vpx_svc_get_message(svc_ctx);
 }
 
 void vpx_svc_release(SvcContext *svc_ctx) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vp8_multi_resolution_encoder.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vp8_multi_resolution_encoder.c
index b14b1ff..e72f8a0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vp8_multi_resolution_encoder.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vp8_multi_resolution_encoder.c
@@ -61,7 +61,7 @@
 
 int (*read_frame_p)(FILE *f, vpx_image_t *img);
 
-static int read_frame(FILE *f, vpx_image_t *img) {
+static int mulres_read_frame(FILE *f, vpx_image_t *img) {
   size_t nbytes, to_read;
   int res = 1;
 
@@ -75,7 +75,7 @@
   return res;
 }
 
-static int read_frame_by_row(FILE *f, vpx_image_t *img) {
+static int mulres_read_frame_by_row(FILE *f, vpx_image_t *img) {
   size_t nbytes, to_read;
   int res = 1;
   int plane;
@@ -471,9 +471,9 @@
       die("Failed to allocate image", cfg[i].g_w, cfg[i].g_h);
 
   if (raw[0].stride[VPX_PLANE_Y] == (int)raw[0].d_w)
-    read_frame_p = read_frame;
+    read_frame_p = mulres_read_frame;
   else
-    read_frame_p = read_frame_by_row;
+    read_frame_p = mulres_read_frame_by_row;
 
   for (i = 0; i < NUM_ENCODERS; i++)
     if (outfile[i]) write_ivf_file_header(outfile[i], &cfg[i], 0);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vp9_spatial_svc_encoder.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vp9_spatial_svc_encoder.c
index d49a230..b987989 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vp9_spatial_svc_encoder.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vp9_spatial_svc_encoder.c
@@ -25,13 +25,19 @@
 #include "../video_writer.h"
 
 #include "../vpx_ports/vpx_timer.h"
-#include "vpx/svc_context.h"
+#include "./svc_context.h"
 #include "vpx/vp8cx.h"
 #include "vpx/vpx_encoder.h"
 #include "../vpxstats.h"
 #include "vp9/encoder/vp9_encoder.h"
+#include "./y4minput.h"
+
 #define OUTPUT_RC_STATS 1
 
+#define SIMULCAST_MODE 0
+
+static const arg_def_t outputfile =
+    ARG_DEF("o", "output", 1, "Output filename");
 static const arg_def_t skip_frames_arg =
     ARG_DEF("s", "skip-frames", 1, "input frames to skip");
 static const arg_def_t frames_arg =
@@ -86,6 +92,19 @@
     ARG_DEF("aq", "aqmode", 1, "aq-mode off/on");
 static const arg_def_t bitrates_arg =
     ARG_DEF("bl", "bitrates", 1, "bitrates[sl * num_tl + tl]");
+static const arg_def_t dropframe_thresh_arg =
+    ARG_DEF(NULL, "drop-frame", 1, "Temporal resampling threshold (buf %)");
+static const struct arg_enum_list tune_content_enum[] = {
+  { "default", VP9E_CONTENT_DEFAULT },
+  { "screen", VP9E_CONTENT_SCREEN },
+  { "film", VP9E_CONTENT_FILM },
+  { NULL, 0 }
+};
+
+static const arg_def_t tune_content_arg = ARG_DEF_ENUM(
+    NULL, "tune-content", 1, "Tune content type", tune_content_enum);
+static const arg_def_t inter_layer_pred_arg = ARG_DEF(
+    NULL, "inter-layer-pred", 1, "0 - 3: On, Off, Key-frames, Constrained");
 
 #if CONFIG_VP9_HIGHBITDEPTH
 static const struct arg_enum_list bitdepth_enum[] = {
@@ -97,6 +116,7 @@
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
 static const arg_def_t *svc_args[] = { &frames_arg,
+                                       &outputfile,
                                        &width_arg,
                                        &height_arg,
                                        &timebase_arg,
@@ -127,6 +147,9 @@
                                        &speed_arg,
                                        &rc_end_usage_arg,
                                        &bitrates_arg,
+                                       &dropframe_thresh_arg,
+                                       &tune_content_arg,
+                                       &inter_layer_pred_arg,
                                        NULL };
 
 static const uint32_t default_frames_to_skip = 0;
@@ -145,7 +168,6 @@
 static const uint32_t default_threads = 0;  // zero means use library default.
 
 typedef struct {
-  const char *input_filename;
   const char *output_filename;
   uint32_t frames_to_code;
   uint32_t frames_to_skip;
@@ -153,12 +175,14 @@
   stats_io_t rc_stats;
   int passes;
   int pass;
+  int tune_content;
+  int inter_layer_pred;
 } AppInput;
 
 static const char *exec_name;
 
 void usage_exit(void) {
-  fprintf(stderr, "Usage: %s <options> input_filename output_filename\n",
+  fprintf(stderr, "Usage: %s <options> input_filename -o output_filename\n",
           exec_name);
   fprintf(stderr, "Options:\n");
   arg_show_usage(stderr, svc_args);
@@ -217,6 +241,8 @@
 
     if (arg_match(&arg, &frames_arg, argi)) {
       app_input->frames_to_code = arg_parse_uint(&arg);
+    } else if (arg_match(&arg, &outputfile, argi)) {
+      app_input->output_filename = arg.val;
     } else if (arg_match(&arg, &width_arg, argi)) {
       enc_cfg->g_w = arg_parse_uint(&arg);
     } else if (arg_match(&arg, &height_arg, argi)) {
@@ -237,6 +263,9 @@
 #endif
     } else if (arg_match(&arg, &speed_arg, argi)) {
       svc_ctx->speed = arg_parse_uint(&arg);
+      if (svc_ctx->speed > 9) {
+        warn("Mapping speed %d to speed 9.\n", svc_ctx->speed);
+      }
     } else if (arg_match(&arg, &aqmode_arg, argi)) {
       svc_ctx->aqmode = arg_parse_uint(&arg);
     } else if (arg_match(&arg, &threads_arg, argi)) {
@@ -251,11 +280,15 @@
       enc_cfg->kf_min_dist = arg_parse_uint(&arg);
       enc_cfg->kf_max_dist = enc_cfg->kf_min_dist;
     } else if (arg_match(&arg, &scale_factors_arg, argi)) {
-      snprintf(string_options, sizeof(string_options), "%s scale-factors=%s",
-               string_options, arg.val);
+      strncat(string_options, " scale-factors=",
+              sizeof(string_options) - strlen(string_options) - 1);
+      strncat(string_options, arg.val,
+              sizeof(string_options) - strlen(string_options) - 1);
     } else if (arg_match(&arg, &bitrates_arg, argi)) {
-      snprintf(string_options, sizeof(string_options), "%s bitrates=%s",
-               string_options, arg.val);
+      strncat(string_options, " bitrates=",
+              sizeof(string_options) - strlen(string_options) - 1);
+      strncat(string_options, arg.val,
+              sizeof(string_options) - strlen(string_options) - 1);
     } else if (arg_match(&arg, &passes_arg, argi)) {
       passes = arg_parse_uint(&arg);
       if (passes < 1 || passes > 2) {
@@ -269,11 +302,15 @@
     } else if (arg_match(&arg, &fpf_name_arg, argi)) {
       fpf_file_name = arg.val;
     } else if (arg_match(&arg, &min_q_arg, argi)) {
-      snprintf(string_options, sizeof(string_options), "%s min-quantizers=%s",
-               string_options, arg.val);
+      strncat(string_options, " min-quantizers=",
+              sizeof(string_options) - strlen(string_options) - 1);
+      strncat(string_options, arg.val,
+              sizeof(string_options) - strlen(string_options) - 1);
     } else if (arg_match(&arg, &max_q_arg, argi)) {
-      snprintf(string_options, sizeof(string_options), "%s max-quantizers=%s",
-               string_options, arg.val);
+      strncat(string_options, " max-quantizers=",
+              sizeof(string_options) - strlen(string_options) - 1);
+      strncat(string_options, arg.val,
+              sizeof(string_options) - strlen(string_options) - 1);
     } else if (arg_match(&arg, &min_bitrate_arg, argi)) {
       min_bitrate = arg_parse_uint(&arg);
     } else if (arg_match(&arg, &max_bitrate_arg, argi)) {
@@ -303,6 +340,12 @@
           break;
       }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
+    } else if (arg_match(&arg, &dropframe_thresh_arg, argi)) {
+      enc_cfg->rc_dropframe_thresh = arg_parse_uint(&arg);
+    } else if (arg_match(&arg, &tune_content_arg, argi)) {
+      app_input->tune_content = arg_parse_uint(&arg);
+    } else if (arg_match(&arg, &inter_layer_pred_arg, argi)) {
+      app_input->inter_layer_pred = arg_parse_uint(&arg);
     } else {
       ++argj;
     }
@@ -358,13 +401,18 @@
     if (argi[0][0] == '-' && strlen(argi[0]) > 1)
       die("Error: Unrecognized option %s\n", *argi);
 
-  if (argv[0] == NULL || argv[1] == 0) {
+  if (argv[0] == NULL) {
     usage_exit();
   }
-  app_input->input_filename = argv[0];
-  app_input->output_filename = argv[1];
+  app_input->input_ctx.filename = argv[0];
   free(argv);
 
+  open_input_file(&app_input->input_ctx);
+  if (app_input->input_ctx.file_type == FILE_TYPE_Y4M) {
+    enc_cfg->g_w = app_input->input_ctx.width;
+    enc_cfg->g_h = app_input->input_ctx.height;
+  }
+
   if (enc_cfg->g_w < 16 || enc_cfg->g_w % 2 || enc_cfg->g_h < 16 ||
       enc_cfg->g_h % 2)
     die("Invalid resolution: %d x %d\n", enc_cfg->g_w, enc_cfg->g_h);
@@ -503,14 +551,13 @@
   printf("Average, rms-variance, and percent-fluct: %f %f %f \n",
          rc->avg_st_encoding_bitrate, sqrt(rc->variance_st_encoding_bitrate),
          perc_fluctuation);
-  if (frame_cnt != tot_num_frames)
-    die("Error: Number of input frames not equal to output encoded frames != "
-        "%d tot_num_frames = %d\n",
-        frame_cnt, tot_num_frames);
+  printf("Num of input, num of encoded (super) frames: %d %d \n", frame_cnt,
+         tot_num_frames);
 }
 
-vpx_codec_err_t parse_superframe_index(const uint8_t *data, size_t data_sz,
-                                       uint64_t sizes[8], int *count) {
+static vpx_codec_err_t parse_superframe_index(const uint8_t *data,
+                                              size_t data_sz, uint64_t sizes[8],
+                                              int *count) {
   // A chunk ending with a byte matching 0xc0 is an invalid chunk unless
   // it is a super frame index. If the last byte of real video compression
   // data is 0xc0 the encoder must add a 0 byte. If we have the marker but
@@ -562,39 +609,15 @@
 // bypass/flexible mode. The pattern corresponds to the pattern
 // VP9E_TEMPORAL_LAYERING_MODE_0101 (temporal_layering_mode == 2) used in
 // non-flexible mode.
-void set_frame_flags_bypass_mode(int sl, int tl, int num_spatial_layers,
-                                 int is_key_frame,
-                                 vpx_svc_ref_frame_config_t *ref_frame_config) {
+static void set_frame_flags_bypass_mode_ex0(
+    int tl, int num_spatial_layers, int is_key_frame,
+    vpx_svc_ref_frame_config_t *ref_frame_config) {
+  int sl;
+  for (sl = 0; sl < num_spatial_layers; ++sl)
+    ref_frame_config->update_buffer_slot[sl] = 0;
+
   for (sl = 0; sl < num_spatial_layers; ++sl) {
-    if (!tl) {
-      if (!sl) {
-        ref_frame_config->frame_flags[sl] =
-            VP8_EFLAG_NO_REF_GF | VP8_EFLAG_NO_REF_ARF | VP8_EFLAG_NO_UPD_GF |
-            VP8_EFLAG_NO_UPD_ARF;
-      } else {
-        if (is_key_frame) {
-          ref_frame_config->frame_flags[sl] =
-              VP8_EFLAG_NO_REF_GF | VP8_EFLAG_NO_REF_ARF |
-              VP8_EFLAG_NO_UPD_LAST | VP8_EFLAG_NO_UPD_ARF;
-        } else {
-          ref_frame_config->frame_flags[sl] =
-              VP8_EFLAG_NO_REF_ARF | VP8_EFLAG_NO_UPD_GF | VP8_EFLAG_NO_UPD_ARF;
-        }
-      }
-    } else if (tl == 1) {
-      if (!sl) {
-        ref_frame_config->frame_flags[sl] =
-            VP8_EFLAG_NO_REF_GF | VP8_EFLAG_NO_REF_ARF | VP8_EFLAG_NO_UPD_LAST |
-            VP8_EFLAG_NO_UPD_GF;
-      } else {
-        ref_frame_config->frame_flags[sl] =
-            VP8_EFLAG_NO_REF_ARF | VP8_EFLAG_NO_UPD_LAST | VP8_EFLAG_NO_UPD_GF;
-        if (sl == num_spatial_layers - 1)
-          ref_frame_config->frame_flags[sl] =
-              VP8_EFLAG_NO_UPD_ARF | VP8_EFLAG_NO_REF_ARF |
-              VP8_EFLAG_NO_UPD_LAST | VP8_EFLAG_NO_UPD_GF;
-      }
-    }
+    // Set the buffer idx.
     if (tl == 0) {
       ref_frame_config->lst_fb_idx[sl] = sl;
       if (sl) {
@@ -613,65 +636,359 @@
       ref_frame_config->gld_fb_idx[sl] = num_spatial_layers + sl - 1;
       ref_frame_config->alt_fb_idx[sl] = num_spatial_layers + sl;
     }
+    // Set the reference and update flags.
+    if (!tl) {
+      if (!sl) {
+        // Base spatial and base temporal (sl = 0, tl = 0)
+        ref_frame_config->reference_last[sl] = 1;
+        ref_frame_config->reference_golden[sl] = 0;
+        ref_frame_config->reference_alt_ref[sl] = 0;
+        ref_frame_config->update_buffer_slot[sl] |=
+            1 << ref_frame_config->lst_fb_idx[sl];
+      } else {
+        if (is_key_frame) {
+          ref_frame_config->reference_last[sl] = 1;
+          ref_frame_config->reference_golden[sl] = 0;
+          ref_frame_config->reference_alt_ref[sl] = 0;
+          ref_frame_config->update_buffer_slot[sl] |=
+              1 << ref_frame_config->gld_fb_idx[sl];
+        } else {
+          // Non-zero spatial layer.
+          ref_frame_config->reference_last[sl] = 1;
+          ref_frame_config->reference_golden[sl] = 1;
+          ref_frame_config->reference_alt_ref[sl] = 1;
+          ref_frame_config->update_buffer_slot[sl] |=
+              1 << ref_frame_config->lst_fb_idx[sl];
+        }
+      }
+    } else if (tl == 1) {
+      if (!sl) {
+        // Base spatial and top temporal (tl = 1)
+        ref_frame_config->reference_last[sl] = 1;
+        ref_frame_config->reference_golden[sl] = 0;
+        ref_frame_config->reference_alt_ref[sl] = 0;
+        ref_frame_config->update_buffer_slot[sl] |=
+            1 << ref_frame_config->alt_fb_idx[sl];
+      } else {
+        // Non-zero spatial.
+        if (sl < num_spatial_layers - 1) {
+          ref_frame_config->reference_last[sl] = 1;
+          ref_frame_config->reference_golden[sl] = 1;
+          ref_frame_config->reference_alt_ref[sl] = 0;
+          ref_frame_config->update_buffer_slot[sl] |=
+              1 << ref_frame_config->alt_fb_idx[sl];
+        } else if (sl == num_spatial_layers - 1) {
+          // Top spatial and top temporal (non-reference -- doesn't update any
+          // reference buffers)
+          ref_frame_config->reference_last[sl] = 1;
+          ref_frame_config->reference_golden[sl] = 1;
+          ref_frame_config->reference_alt_ref[sl] = 0;
+        }
+      }
+    }
   }
 }
 
+// Example pattern for 2 spatial layers and 2 temporal layers in bypass/
+// flexible mode; only 1 spatial layer is encoded when temporal_layer_id = 1.
+static void set_frame_flags_bypass_mode_ex1(
+    int tl, int num_spatial_layers, int is_key_frame,
+    vpx_svc_ref_frame_config_t *ref_frame_config) {
+  int sl;
+  for (sl = 0; sl < num_spatial_layers; ++sl)
+    ref_frame_config->update_buffer_slot[sl] = 0;
+
+  if (tl == 0) {
+    if (is_key_frame) {
+      ref_frame_config->lst_fb_idx[1] = 0;
+      ref_frame_config->gld_fb_idx[1] = 1;
+    } else {
+      ref_frame_config->lst_fb_idx[1] = 1;
+      ref_frame_config->gld_fb_idx[1] = 0;
+    }
+    ref_frame_config->alt_fb_idx[1] = 0;
+
+    ref_frame_config->lst_fb_idx[0] = 0;
+    ref_frame_config->gld_fb_idx[0] = 0;
+    ref_frame_config->alt_fb_idx[0] = 0;
+  }
+  if (tl == 1) {
+    ref_frame_config->lst_fb_idx[0] = 0;
+    ref_frame_config->gld_fb_idx[0] = 1;
+    ref_frame_config->alt_fb_idx[0] = 2;
+
+    ref_frame_config->lst_fb_idx[1] = 1;
+    ref_frame_config->gld_fb_idx[1] = 2;
+    ref_frame_config->alt_fb_idx[1] = 3;
+  }
+  // Set the reference and update flags.
+  if (tl == 0) {
+    // Base spatial and base temporal (sl = 0, tl = 0)
+    ref_frame_config->reference_last[0] = 1;
+    ref_frame_config->reference_golden[0] = 0;
+    ref_frame_config->reference_alt_ref[0] = 0;
+    ref_frame_config->update_buffer_slot[0] |=
+        1 << ref_frame_config->lst_fb_idx[0];
+
+    if (is_key_frame) {
+      ref_frame_config->reference_last[1] = 1;
+      ref_frame_config->reference_golden[1] = 0;
+      ref_frame_config->reference_alt_ref[1] = 0;
+      ref_frame_config->update_buffer_slot[1] |=
+          1 << ref_frame_config->gld_fb_idx[1];
+    } else {
+      // Non-zero spatial layer.
+      ref_frame_config->reference_last[1] = 1;
+      ref_frame_config->reference_golden[1] = 1;
+      ref_frame_config->reference_alt_ref[1] = 1;
+      ref_frame_config->update_buffer_slot[1] |=
+          1 << ref_frame_config->lst_fb_idx[1];
+    }
+  }
+  if (tl == 1) {
+    // Top spatial and top temporal (non-reference -- doesn't update any
+    // reference buffers)
+    ref_frame_config->reference_last[1] = 1;
+    ref_frame_config->reference_golden[1] = 0;
+    ref_frame_config->reference_alt_ref[1] = 0;
+  }
+}
+
+#if CONFIG_VP9_DECODER && !SIMULCAST_MODE
+static void test_decode(vpx_codec_ctx_t *encoder, vpx_codec_ctx_t *decoder,
+                        const int frames_out, int *mismatch_seen) {
+  vpx_image_t enc_img, dec_img;
+  struct vp9_ref_frame ref_enc, ref_dec;
+  if (*mismatch_seen) return;
+  /* Get the internal reference frame */
+  ref_enc.idx = 0;
+  ref_dec.idx = 0;
+  vpx_codec_control(encoder, VP9_GET_REFERENCE, &ref_enc);
+  enc_img = ref_enc.img;
+  vpx_codec_control(decoder, VP9_GET_REFERENCE, &ref_dec);
+  dec_img = ref_dec.img;
+#if CONFIG_VP9_HIGHBITDEPTH
+  if ((enc_img.fmt & VPX_IMG_FMT_HIGHBITDEPTH) !=
+      (dec_img.fmt & VPX_IMG_FMT_HIGHBITDEPTH)) {
+    if (enc_img.fmt & VPX_IMG_FMT_HIGHBITDEPTH) {
+      vpx_img_alloc(&enc_img, enc_img.fmt - VPX_IMG_FMT_HIGHBITDEPTH,
+                    enc_img.d_w, enc_img.d_h, 16);
+      vpx_img_truncate_16_to_8(&enc_img, &ref_enc.img);
+    }
+    if (dec_img.fmt & VPX_IMG_FMT_HIGHBITDEPTH) {
+      vpx_img_alloc(&dec_img, dec_img.fmt - VPX_IMG_FMT_HIGHBITDEPTH,
+                    dec_img.d_w, dec_img.d_h, 16);
+      vpx_img_truncate_16_to_8(&dec_img, &ref_dec.img);
+    }
+  }
+#endif
+
+  if (!compare_img(&enc_img, &dec_img)) {
+    int y[4], u[4], v[4];
+#if CONFIG_VP9_HIGHBITDEPTH
+    if (enc_img.fmt & VPX_IMG_FMT_HIGHBITDEPTH) {
+      find_mismatch_high(&enc_img, &dec_img, y, u, v);
+    } else {
+      find_mismatch(&enc_img, &dec_img, y, u, v);
+    }
+#else
+    find_mismatch(&enc_img, &dec_img, y, u, v);
+#endif
+    decoder->err = 1;
+    printf(
+        "Encode/decode mismatch on frame %d at"
+        " Y[%d, %d] {%d/%d},"
+        " U[%d, %d] {%d/%d},"
+        " V[%d, %d] {%d/%d}\n",
+        frames_out, y[0], y[1], y[2], y[3], u[0], u[1], u[2], u[3], v[0], v[1],
+        v[2], v[3]);
+    *mismatch_seen = frames_out;
+  }
+
+  vpx_img_free(&enc_img);
+  vpx_img_free(&dec_img);
+}
+#endif
+
+#if OUTPUT_RC_STATS
+static void svc_output_rc_stats(
+    vpx_codec_ctx_t *codec, vpx_codec_enc_cfg_t *enc_cfg,
+    vpx_svc_layer_id_t *layer_id, const vpx_codec_cx_pkt_t *cx_pkt,
+    struct RateControlStats *rc, VpxVideoWriter **outfile,
+    const uint32_t frame_cnt, const double framerate) {
+  int num_layers_encoded = 0;
+  unsigned int sl, tl;
+  uint64_t sizes[8];
+  uint64_t sizes_parsed[8];
+  int count = 0;
+  double sum_bitrate = 0.0;
+  double sum_bitrate2 = 0.0;
+  vp9_zero(sizes);
+  vp9_zero(sizes_parsed);
+  vpx_codec_control(codec, VP9E_GET_SVC_LAYER_ID, layer_id);
+  parse_superframe_index(cx_pkt->data.frame.buf, cx_pkt->data.frame.sz,
+                         sizes_parsed, &count);
+  if (enc_cfg->ss_number_layers == 1) sizes[0] = cx_pkt->data.frame.sz;
+  for (sl = 0; sl < enc_cfg->ss_number_layers; ++sl) {
+    sizes[sl] = 0;
+    if (cx_pkt->data.frame.spatial_layer_encoded[sl]) {
+      sizes[sl] = sizes_parsed[num_layers_encoded];
+      num_layers_encoded++;
+    }
+  }
+  for (sl = 0; sl < enc_cfg->ss_number_layers; ++sl) {
+    unsigned int sl2;
+    uint64_t tot_size = 0;
+#if SIMULCAST_MODE
+    for (sl2 = 0; sl2 < sl; ++sl2) {
+      if (cx_pkt->data.frame.spatial_layer_encoded[sl2]) tot_size += sizes[sl2];
+    }
+    vpx_video_writer_write_frame(outfile[sl],
+                                 (uint8_t *)(cx_pkt->data.frame.buf) + tot_size,
+                                 (size_t)(sizes[sl]), cx_pkt->data.frame.pts);
+#else
+    for (sl2 = 0; sl2 <= sl; ++sl2) {
+      if (cx_pkt->data.frame.spatial_layer_encoded[sl2]) tot_size += sizes[sl2];
+    }
+    if (tot_size > 0)
+      vpx_video_writer_write_frame(outfile[sl], cx_pkt->data.frame.buf,
+                                   (size_t)(tot_size), cx_pkt->data.frame.pts);
+#endif  // SIMULCAST_MODE
+  }
+  for (sl = 0; sl < enc_cfg->ss_number_layers; ++sl) {
+    if (cx_pkt->data.frame.spatial_layer_encoded[sl]) {
+      for (tl = layer_id->temporal_layer_id; tl < enc_cfg->ts_number_layers;
+           ++tl) {
+        const int layer = sl * enc_cfg->ts_number_layers + tl;
+        ++rc->layer_tot_enc_frames[layer];
+        rc->layer_encoding_bitrate[layer] += 8.0 * sizes[sl];
+        // Keep count of rate control stats per layer, for non-key
+        // frames.
+        if (tl == (unsigned int)layer_id->temporal_layer_id &&
+            !(cx_pkt->data.frame.flags & VPX_FRAME_IS_KEY)) {
+          rc->layer_avg_frame_size[layer] += 8.0 * sizes[sl];
+          rc->layer_avg_rate_mismatch[layer] +=
+              fabs(8.0 * sizes[sl] - rc->layer_pfb[layer]) /
+              rc->layer_pfb[layer];
+          ++rc->layer_enc_frames[layer];
+        }
+      }
+    }
+  }
+
+  // Update for short-time encoding bitrate states, for moving
+  // window of size rc->window, shifted by rc->window / 2.
+  // Ignore first window segment, due to key frame.
+  if (frame_cnt > (unsigned int)rc->window_size) {
+    for (sl = 0; sl < enc_cfg->ss_number_layers; ++sl) {
+      if (cx_pkt->data.frame.spatial_layer_encoded[sl])
+        sum_bitrate += 0.001 * 8.0 * sizes[sl] * framerate;
+    }
+    if (frame_cnt % rc->window_size == 0) {
+      rc->window_count += 1;
+      rc->avg_st_encoding_bitrate += sum_bitrate / rc->window_size;
+      rc->variance_st_encoding_bitrate +=
+          (sum_bitrate / rc->window_size) * (sum_bitrate / rc->window_size);
+    }
+  }
+
+  // Second shifted window.
+  if (frame_cnt > (unsigned int)(rc->window_size + rc->window_size / 2)) {
+    for (sl = 0; sl < enc_cfg->ss_number_layers; ++sl) {
+      sum_bitrate2 += 0.001 * 8.0 * sizes[sl] * framerate;
+    }
+
+    if (frame_cnt > (unsigned int)(2 * rc->window_size) &&
+        frame_cnt % rc->window_size == 0) {
+      rc->window_count += 1;
+      rc->avg_st_encoding_bitrate += sum_bitrate2 / rc->window_size;
+      rc->variance_st_encoding_bitrate +=
+          (sum_bitrate2 / rc->window_size) * (sum_bitrate2 / rc->window_size);
+    }
+  }
+}
+#endif
+
 int main(int argc, const char **argv) {
   AppInput app_input;
   VpxVideoWriter *writer = NULL;
   VpxVideoInfo info;
-  vpx_codec_ctx_t codec;
+  vpx_codec_ctx_t encoder;
   vpx_codec_enc_cfg_t enc_cfg;
   SvcContext svc_ctx;
+  vpx_svc_frame_drop_t svc_drop_frame;
   uint32_t i;
   uint32_t frame_cnt = 0;
   vpx_image_t raw;
   vpx_codec_err_t res;
   int pts = 0;            /* PTS starts at 0 */
   int frame_duration = 1; /* 1 timebase tick per frame */
-  FILE *infile = NULL;
   int end_of_stream = 0;
   int frames_received = 0;
 #if OUTPUT_RC_STATS
-  VpxVideoWriter *outfile[VPX_TS_MAX_LAYERS] = { NULL };
+  VpxVideoWriter *outfile[VPX_SS_MAX_LAYERS] = { NULL };
   struct RateControlStats rc;
   vpx_svc_layer_id_t layer_id;
   vpx_svc_ref_frame_config_t ref_frame_config;
-  unsigned int sl, tl;
-  double sum_bitrate = 0.0;
-  double sum_bitrate2 = 0.0;
+  unsigned int sl;
   double framerate = 30.0;
 #endif
   struct vpx_usec_timer timer;
   int64_t cx_time = 0;
+#if CONFIG_INTERNAL_STATS
+  FILE *f = fopen("opsnr.stt", "a");
+#endif
+#if CONFIG_VP9_DECODER && !SIMULCAST_MODE
+  int mismatch_seen = 0;
+  vpx_codec_ctx_t decoder;
+#endif
   memset(&svc_ctx, 0, sizeof(svc_ctx));
-  svc_ctx.log_print = 1;
+  memset(&app_input, 0, sizeof(AppInput));
+  memset(&info, 0, sizeof(VpxVideoInfo));
+  memset(&layer_id, 0, sizeof(vpx_svc_layer_id_t));
+  memset(&rc, 0, sizeof(struct RateControlStats));
   exec_name = argv[0];
+
+  /* Setup default input stream settings */
+  app_input.input_ctx.framerate.numerator = 30;
+  app_input.input_ctx.framerate.denominator = 1;
+  app_input.input_ctx.only_i420 = 1;
+  app_input.input_ctx.bit_depth = 0;
+
   parse_command_line(argc, argv, &app_input, &svc_ctx, &enc_cfg);
 
+  // Y4M reader handles its own allocation.
+  if (app_input.input_ctx.file_type != FILE_TYPE_Y4M) {
 // Allocate image buffer
 #if CONFIG_VP9_HIGHBITDEPTH
-  if (!vpx_img_alloc(&raw,
-                     enc_cfg.g_input_bit_depth == 8 ? VPX_IMG_FMT_I420
-                                                    : VPX_IMG_FMT_I42016,
-                     enc_cfg.g_w, enc_cfg.g_h, 32)) {
-    die("Failed to allocate image %dx%d\n", enc_cfg.g_w, enc_cfg.g_h);
-  }
+    if (!vpx_img_alloc(&raw,
+                       enc_cfg.g_input_bit_depth == 8 ? VPX_IMG_FMT_I420
+                                                      : VPX_IMG_FMT_I42016,
+                       enc_cfg.g_w, enc_cfg.g_h, 32)) {
+      die("Failed to allocate image %dx%d\n", enc_cfg.g_w, enc_cfg.g_h);
+    }
 #else
-  if (!vpx_img_alloc(&raw, VPX_IMG_FMT_I420, enc_cfg.g_w, enc_cfg.g_h, 32)) {
-    die("Failed to allocate image %dx%d\n", enc_cfg.g_w, enc_cfg.g_h);
-  }
+    if (!vpx_img_alloc(&raw, VPX_IMG_FMT_I420, enc_cfg.g_w, enc_cfg.g_h, 32)) {
+      die("Failed to allocate image %dx%d\n", enc_cfg.g_w, enc_cfg.g_h);
+    }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
-
-  if (!(infile = fopen(app_input.input_filename, "rb")))
-    die("Failed to open %s for reading\n", app_input.input_filename);
+  }
 
   // Initialize codec
-  if (vpx_svc_init(&svc_ctx, &codec, vpx_codec_vp9_cx(), &enc_cfg) !=
+  if (vpx_svc_init(&svc_ctx, &encoder, vpx_codec_vp9_cx(), &enc_cfg) !=
       VPX_CODEC_OK)
     die("Failed to initialize encoder\n");
+#if CONFIG_VP9_DECODER && !SIMULCAST_MODE
+  if (vpx_codec_dec_init(
+          &decoder, get_vpx_decoder_by_name("vp9")->codec_interface(), NULL, 0))
+    die("Failed to initialize decoder\n");
+#endif
 
 #if OUTPUT_RC_STATS
+  rc.window_count = 1;
+  rc.window_size = 15;  // Silence a static analysis warning.
+  rc.avg_st_encoding_bitrate = 0.0;
+  rc.variance_st_encoding_bitrate = 0.0;
   if (svc_ctx.output_rc_stat) {
     set_rate_control_stats(&rc, &enc_cfg);
     framerate = enc_cfg.g_timebase.den / enc_cfg.g_timebase.num;
@@ -679,6 +996,8 @@
 #endif
 
   info.codec_fourcc = VP9_FOURCC;
+  info.frame_width = enc_cfg.g_w;
+  info.frame_height = enc_cfg.g_h;
   info.time_base.numerator = enc_cfg.g_timebase.num;
   info.time_base.denominator = enc_cfg.g_timebase.den;
 
@@ -690,43 +1009,65 @@
       die("Failed to open %s for writing\n", app_input.output_filename);
   }
 #if OUTPUT_RC_STATS
-  // For now, just write temporal layer streams.
-  // TODO(marpan): do spatial by re-writing superframe.
+  // Write out spatial layer stream.
+  // TODO(marpan/jianj): allow for writing each spatial and temporal stream.
   if (svc_ctx.output_rc_stat) {
-    for (tl = 0; tl < enc_cfg.ts_number_layers; ++tl) {
+    for (sl = 0; sl < enc_cfg.ss_number_layers; ++sl) {
       char file_name[PATH_MAX];
 
-      snprintf(file_name, sizeof(file_name), "%s_t%d.ivf",
-               app_input.output_filename, tl);
-      outfile[tl] = vpx_video_writer_open(file_name, kContainerIVF, &info);
-      if (!outfile[tl]) die("Failed to open %s for writing", file_name);
+      snprintf(file_name, sizeof(file_name), "%s_s%d.ivf",
+               app_input.output_filename, sl);
+      outfile[sl] = vpx_video_writer_open(file_name, kContainerIVF, &info);
+      if (!outfile[sl]) die("Failed to open %s for writing", file_name);
     }
   }
 #endif
 
   // skip initial frames
-  for (i = 0; i < app_input.frames_to_skip; ++i) vpx_img_read(&raw, infile);
+  for (i = 0; i < app_input.frames_to_skip; ++i)
+    read_frame(&app_input.input_ctx, &raw);
 
   if (svc_ctx.speed != -1)
-    vpx_codec_control(&codec, VP8E_SET_CPUUSED, svc_ctx.speed);
+    vpx_codec_control(&encoder, VP8E_SET_CPUUSED, svc_ctx.speed);
   if (svc_ctx.threads) {
-    vpx_codec_control(&codec, VP9E_SET_TILE_COLUMNS, (svc_ctx.threads >> 1));
+    vpx_codec_control(&encoder, VP9E_SET_TILE_COLUMNS,
+                      get_msb(svc_ctx.threads));
     if (svc_ctx.threads > 1)
-      vpx_codec_control(&codec, VP9E_SET_ROW_MT, 1);
+      vpx_codec_control(&encoder, VP9E_SET_ROW_MT, 1);
     else
-      vpx_codec_control(&codec, VP9E_SET_ROW_MT, 0);
+      vpx_codec_control(&encoder, VP9E_SET_ROW_MT, 0);
   }
   if (svc_ctx.speed >= 5 && svc_ctx.aqmode == 1)
-    vpx_codec_control(&codec, VP9E_SET_AQ_MODE, 3);
+    vpx_codec_control(&encoder, VP9E_SET_AQ_MODE, 3);
   if (svc_ctx.speed >= 5)
-    vpx_codec_control(&codec, VP8E_SET_STATIC_THRESHOLD, 1);
-  vpx_codec_control(&codec, VP8E_SET_MAX_INTRA_BITRATE_PCT, 900);
+    vpx_codec_control(&encoder, VP8E_SET_STATIC_THRESHOLD, 1);
+  vpx_codec_control(&encoder, VP8E_SET_MAX_INTRA_BITRATE_PCT, 900);
+
+  vpx_codec_control(&encoder, VP9E_SET_SVC_INTER_LAYER_PRED,
+                    app_input.inter_layer_pred);
+
+  vpx_codec_control(&encoder, VP9E_SET_NOISE_SENSITIVITY, 0);
+
+  vpx_codec_control(&encoder, VP9E_SET_TUNE_CONTENT, app_input.tune_content);
+
+  svc_drop_frame.framedrop_mode = FULL_SUPERFRAME_DROP;
+  for (sl = 0; sl < (unsigned int)svc_ctx.spatial_layers; ++sl)
+    svc_drop_frame.framedrop_thresh[sl] = enc_cfg.rc_dropframe_thresh;
+  svc_drop_frame.max_consec_drop = INT_MAX;
+  vpx_codec_control(&encoder, VP9E_SET_SVC_FRAME_DROP_LAYER, &svc_drop_frame);
 
   // Encode frames
   while (!end_of_stream) {
     vpx_codec_iter_t iter = NULL;
     const vpx_codec_cx_pkt_t *cx_pkt;
-    if (frame_cnt >= app_input.frames_to_code || !vpx_img_read(&raw, infile)) {
+    // Example patterns for bypass/flexible mode:
+    // example_pattern = 0: 2 temporal layers and spatial_layers = 1,2,3,
+    // matching the fixed SVC patterns. example_pattern = 1: 2 spatial and 2
+    // temporal layers, where SL0 only has TL0 and SL1 has both TL0 and TL1.
+    // This example uses the extended API.
+    int example_pattern = 0;
+    if (frame_cnt >= app_input.frames_to_code ||
+        !read_frame(&app_input.input_ctx, &raw)) {
       // We need one extra vpx_svc_encode call at end of stream to flush
       // encoder and get remaining data
       end_of_stream = 1;
@@ -734,142 +1075,97 @@
 
     // For BYPASS/FLEXIBLE mode, set the frame flags (reference and updates)
     // and the buffer indices for each spatial layer of the current
-    // (super)frame to be encoded. The temporal layer_id for the current frame
-    // also needs to be set.
+    // (super)frame to be encoded. The spatial and temporal layer_id for the
+    // current frame also need to be set.
     // TODO(marpan): Should rename the "VP9E_TEMPORAL_LAYERING_MODE_BYPASS"
     // mode to "VP9E_LAYERING_MODE_BYPASS".
     if (svc_ctx.temporal_layering_mode == VP9E_TEMPORAL_LAYERING_MODE_BYPASS) {
       layer_id.spatial_layer_id = 0;
       // Example for 2 temporal layers.
-      if (frame_cnt % 2 == 0)
+      if (frame_cnt % 2 == 0) {
         layer_id.temporal_layer_id = 0;
-      else
+        for (i = 0; i < VPX_SS_MAX_LAYERS; i++)
+          layer_id.temporal_layer_id_per_spatial[i] = 0;
+      } else {
         layer_id.temporal_layer_id = 1;
-      // Note that we only set the temporal layer_id, since we are calling
-      // the encode for the whole superframe. The encoder will internally loop
-      // over all the spatial layers for the current superframe.
-      vpx_codec_control(&codec, VP9E_SET_SVC_LAYER_ID, &layer_id);
+        for (i = 0; i < VPX_SS_MAX_LAYERS; i++)
+          layer_id.temporal_layer_id_per_spatial[i] = 1;
+      }
+      if (example_pattern == 1) {
+        // example_pattern 1 is hard-coded for 2 spatial and 2 temporal layers.
+        assert(svc_ctx.spatial_layers == 2);
+        assert(svc_ctx.temporal_layers == 2);
+        if (frame_cnt % 2 == 0) {
+          // Spatial layer 0 and 1 are encoded.
+          layer_id.temporal_layer_id_per_spatial[0] = 0;
+          layer_id.temporal_layer_id_per_spatial[1] = 0;
+          layer_id.spatial_layer_id = 0;
+        } else {
+          // Only spatial layer 1 is encoded here.
+          layer_id.temporal_layer_id_per_spatial[1] = 1;
+          layer_id.spatial_layer_id = 1;
+        }
+      }
+      vpx_codec_control(&encoder, VP9E_SET_SVC_LAYER_ID, &layer_id);
       // TODO(jianj): Fix the parameter passing for "is_key_frame" in
       // set_frame_flags_bypass_model() for case of periodic key frames.
-      set_frame_flags_bypass_mode(sl, layer_id.temporal_layer_id,
-                                  svc_ctx.spatial_layers, frame_cnt == 0,
-                                  &ref_frame_config);
-      vpx_codec_control(&codec, VP9E_SET_SVC_REF_FRAME_CONFIG,
+      if (example_pattern == 0) {
+        set_frame_flags_bypass_mode_ex0(layer_id.temporal_layer_id,
+                                        svc_ctx.spatial_layers, frame_cnt == 0,
+                                        &ref_frame_config);
+      } else if (example_pattern == 1) {
+        set_frame_flags_bypass_mode_ex1(layer_id.temporal_layer_id,
+                                        svc_ctx.spatial_layers, frame_cnt == 0,
+                                        &ref_frame_config);
+      }
+      ref_frame_config.duration[0] = frame_duration * 1;
+      ref_frame_config.duration[1] = frame_duration * 1;
+
+      vpx_codec_control(&encoder, VP9E_SET_SVC_REF_FRAME_CONFIG,
                         &ref_frame_config);
       // Keep track of input frames, to account for frame drops in rate control
       // stats/metrics.
-      for (sl = 0; sl < (unsigned int)enc_cfg.ss_number_layers; ++sl) {
+      for (sl = 0; sl < enc_cfg.ss_number_layers; ++sl) {
         ++rc.layer_input_frames[sl * enc_cfg.ts_number_layers +
                                 layer_id.temporal_layer_id];
       }
+    } else {
+      // For the fixed pattern SVC, temporal layer is given by superframe count.
+      unsigned int tl = 0;
+      if (enc_cfg.ts_number_layers == 2)
+        tl = (frame_cnt % 2 != 0);
+      else if (enc_cfg.ts_number_layers == 3) {
+        if (frame_cnt % 2 != 0) tl = 2;
+        if ((frame_cnt > 1) && ((frame_cnt - 2) % 4 == 0)) tl = 1;
+      }
+      for (sl = 0; sl < enc_cfg.ss_number_layers; ++sl)
+        ++rc.layer_input_frames[sl * enc_cfg.ts_number_layers + tl];
     }
 
     vpx_usec_timer_start(&timer);
     res = vpx_svc_encode(
-        &svc_ctx, &codec, (end_of_stream ? NULL : &raw), pts, frame_duration,
+        &svc_ctx, &encoder, (end_of_stream ? NULL : &raw), pts, frame_duration,
         svc_ctx.speed >= 5 ? VPX_DL_REALTIME : VPX_DL_GOOD_QUALITY);
     vpx_usec_timer_mark(&timer);
     cx_time += vpx_usec_timer_elapsed(&timer);
 
-    printf("%s", vpx_svc_get_message(&svc_ctx));
     fflush(stdout);
     if (res != VPX_CODEC_OK) {
-      die_codec(&codec, "Failed to encode frame");
+      die_codec(&encoder, "Failed to encode frame");
     }
 
-    while ((cx_pkt = vpx_codec_get_cx_data(&codec, &iter)) != NULL) {
+    while ((cx_pkt = vpx_codec_get_cx_data(&encoder, &iter)) != NULL) {
       switch (cx_pkt->kind) {
         case VPX_CODEC_CX_FRAME_PKT: {
           SvcInternal_t *const si = (SvcInternal_t *)svc_ctx.internal;
           if (cx_pkt->data.frame.sz > 0) {
-#if OUTPUT_RC_STATS
-            uint64_t sizes[8];
-            int count = 0;
-#endif
             vpx_video_writer_write_frame(writer, cx_pkt->data.frame.buf,
                                          cx_pkt->data.frame.sz,
                                          cx_pkt->data.frame.pts);
 #if OUTPUT_RC_STATS
-            // TODO(marpan): Put this (to line728) in separate function.
             if (svc_ctx.output_rc_stat) {
-              vpx_codec_control(&codec, VP9E_GET_SVC_LAYER_ID, &layer_id);
-              parse_superframe_index(cx_pkt->data.frame.buf,
-                                     cx_pkt->data.frame.sz, sizes, &count);
-              if (enc_cfg.ss_number_layers == 1)
-                sizes[0] = cx_pkt->data.frame.sz;
-              // Note computing input_layer_frames here won't account for frame
-              // drops in rate control stats.
-              // TODO(marpan): Fix this for non-bypass mode so we can get stats
-              // for dropped frames.
-              if (svc_ctx.temporal_layering_mode !=
-                  VP9E_TEMPORAL_LAYERING_MODE_BYPASS) {
-                for (sl = 0; sl < enc_cfg.ss_number_layers; ++sl) {
-                  ++rc.layer_input_frames[sl * enc_cfg.ts_number_layers +
-                                          layer_id.temporal_layer_id];
-                }
-              }
-              for (tl = layer_id.temporal_layer_id;
-                   tl < enc_cfg.ts_number_layers; ++tl) {
-                vpx_video_writer_write_frame(
-                    outfile[tl], cx_pkt->data.frame.buf, cx_pkt->data.frame.sz,
-                    cx_pkt->data.frame.pts);
-              }
-
-              for (sl = 0; sl < enc_cfg.ss_number_layers; ++sl) {
-                for (tl = layer_id.temporal_layer_id;
-                     tl < enc_cfg.ts_number_layers; ++tl) {
-                  const int layer = sl * enc_cfg.ts_number_layers + tl;
-                  ++rc.layer_tot_enc_frames[layer];
-                  rc.layer_encoding_bitrate[layer] += 8.0 * sizes[sl];
-                  // Keep count of rate control stats per layer, for non-key
-                  // frames.
-                  if (tl == (unsigned int)layer_id.temporal_layer_id &&
-                      !(cx_pkt->data.frame.flags & VPX_FRAME_IS_KEY)) {
-                    rc.layer_avg_frame_size[layer] += 8.0 * sizes[sl];
-                    rc.layer_avg_rate_mismatch[layer] +=
-                        fabs(8.0 * sizes[sl] - rc.layer_pfb[layer]) /
-                        rc.layer_pfb[layer];
-                    ++rc.layer_enc_frames[layer];
-                  }
-                }
-              }
-
-              // Update for short-time encoding bitrate states, for moving
-              // window of size rc->window, shifted by rc->window / 2.
-              // Ignore first window segment, due to key frame.
-              if (frame_cnt > (unsigned int)rc.window_size) {
-                tl = layer_id.temporal_layer_id;
-                for (sl = 0; sl < enc_cfg.ss_number_layers; ++sl) {
-                  sum_bitrate += 0.001 * 8.0 * sizes[sl] * framerate;
-                }
-                if (frame_cnt % rc.window_size == 0) {
-                  rc.window_count += 1;
-                  rc.avg_st_encoding_bitrate += sum_bitrate / rc.window_size;
-                  rc.variance_st_encoding_bitrate +=
-                      (sum_bitrate / rc.window_size) *
-                      (sum_bitrate / rc.window_size);
-                  sum_bitrate = 0.0;
-                }
-              }
-
-              // Second shifted window.
-              if (frame_cnt >
-                  (unsigned int)(rc.window_size + rc.window_size / 2)) {
-                tl = layer_id.temporal_layer_id;
-                for (sl = 0; sl < enc_cfg.ss_number_layers; ++sl) {
-                  sum_bitrate2 += 0.001 * 8.0 * sizes[sl] * framerate;
-                }
-
-                if (frame_cnt > (unsigned int)(2 * rc.window_size) &&
-                    frame_cnt % rc.window_size == 0) {
-                  rc.window_count += 1;
-                  rc.avg_st_encoding_bitrate += sum_bitrate2 / rc.window_size;
-                  rc.variance_st_encoding_bitrate +=
-                      (sum_bitrate2 / rc.window_size) *
-                      (sum_bitrate2 / rc.window_size);
-                  sum_bitrate2 = 0.0;
-                }
-              }
+              svc_output_rc_stats(&encoder, &enc_cfg, &layer_id, cx_pkt, &rc,
+                                  outfile, frame_cnt, framerate);
             }
 #endif
           }
@@ -881,6 +1177,11 @@
           if (enc_cfg.ss_number_layers == 1 && enc_cfg.ts_number_layers == 1)
             si->bytes_sum[0] += (int)cx_pkt->data.frame.sz;
           ++frames_received;
+#if CONFIG_VP9_DECODER && !SIMULCAST_MODE
+          if (vpx_codec_decode(&decoder, cx_pkt->data.frame.buf,
+                               (unsigned int)cx_pkt->data.frame.sz, NULL, 0))
+            die_codec(&decoder, "Failed to decode frame.");
+#endif
           break;
         }
         case VPX_CODEC_STATS_PKT: {
@@ -890,6 +1191,19 @@
         }
         default: { break; }
       }
+
+#if CONFIG_VP9_DECODER && !SIMULCAST_MODE
+      vpx_codec_control(&encoder, VP9E_GET_SVC_LAYER_ID, &layer_id);
+      // Don't look for mismatch on top spatial and top temporal layers as they
+      // are non reference frames.
+      if ((enc_cfg.ss_number_layers > 1 || enc_cfg.ts_number_layers > 1) &&
+          !(layer_id.temporal_layer_id > 0 &&
+            layer_id.temporal_layer_id == (int)enc_cfg.ts_number_layers - 1 &&
+            cx_pkt->data.frame
+                .spatial_layer_encoded[enc_cfg.ss_number_layers - 1])) {
+        test_decode(&encoder, &decoder, frame_cnt, &mismatch_seen);
+      }
+#endif
     }
 
     if (!end_of_stream) {
@@ -898,41 +1212,45 @@
     }
   }
 
-  // Compensate for the extra frame count for the bypass mode.
-  if (svc_ctx.temporal_layering_mode == VP9E_TEMPORAL_LAYERING_MODE_BYPASS) {
-    for (sl = 0; sl < enc_cfg.ss_number_layers; ++sl) {
-      const int layer =
-          sl * enc_cfg.ts_number_layers + layer_id.temporal_layer_id;
-      --rc.layer_input_frames[layer];
-    }
-  }
-
   printf("Processed %d frames\n", frame_cnt);
-  fclose(infile);
+
+  close_input_file(&app_input.input_ctx);
+
 #if OUTPUT_RC_STATS
   if (svc_ctx.output_rc_stat) {
     printout_rate_control_summary(&rc, &enc_cfg, frame_cnt);
     printf("\n");
   }
 #endif
-  if (vpx_codec_destroy(&codec)) die_codec(&codec, "Failed to destroy codec");
+  if (vpx_codec_destroy(&encoder))
+    die_codec(&encoder, "Failed to destroy codec");
   if (app_input.passes == 2) stats_close(&app_input.rc_stats, 1);
   if (writer) {
     vpx_video_writer_close(writer);
   }
 #if OUTPUT_RC_STATS
   if (svc_ctx.output_rc_stat) {
-    for (tl = 0; tl < enc_cfg.ts_number_layers; ++tl) {
-      vpx_video_writer_close(outfile[tl]);
+    for (sl = 0; sl < enc_cfg.ss_number_layers; ++sl) {
+      vpx_video_writer_close(outfile[sl]);
     }
   }
 #endif
+#if CONFIG_INTERNAL_STATS
+  if (mismatch_seen) {
+    fprintf(f, "First mismatch occurred in frame %d\n", mismatch_seen);
+  } else {
+    fprintf(f, "No mismatch detected in recon buffers\n");
+  }
+  fclose(f);
+#endif
   printf("Frame cnt and encoding time/FPS stats for encoding: %d %f %f \n",
          frame_cnt, 1000 * (float)cx_time / (double)(frame_cnt * 1000000),
          1000000 * (double)frame_cnt / (double)cx_time);
-  vpx_img_free(&raw);
+  if (app_input.input_ctx.file_type != FILE_TYPE_Y4M) {
+    vpx_img_free(&raw);
+  }
   // display average size, psnr
-  printf("%s", vpx_svc_dump_statistics(&svc_ctx));
+  vpx_svc_dump_statistics(&svc_ctx);
   vpx_svc_release(&svc_ctx);
   return EXIT_SUCCESS;
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vp9cx_set_ref.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vp9cx_set_ref.c
index 3472689..911ad38 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vp9cx_set_ref.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vp9cx_set_ref.c
@@ -68,128 +68,6 @@
   exit(EXIT_FAILURE);
 }
 
-static int compare_img(const vpx_image_t *const img1,
-                       const vpx_image_t *const img2) {
-  uint32_t l_w = img1->d_w;
-  uint32_t c_w = (img1->d_w + img1->x_chroma_shift) >> img1->x_chroma_shift;
-  const uint32_t c_h =
-      (img1->d_h + img1->y_chroma_shift) >> img1->y_chroma_shift;
-  uint32_t i;
-  int match = 1;
-
-  match &= (img1->fmt == img2->fmt);
-  match &= (img1->d_w == img2->d_w);
-  match &= (img1->d_h == img2->d_h);
-
-  for (i = 0; i < img1->d_h; ++i)
-    match &= (memcmp(img1->planes[VPX_PLANE_Y] + i * img1->stride[VPX_PLANE_Y],
-                     img2->planes[VPX_PLANE_Y] + i * img2->stride[VPX_PLANE_Y],
-                     l_w) == 0);
-
-  for (i = 0; i < c_h; ++i)
-    match &= (memcmp(img1->planes[VPX_PLANE_U] + i * img1->stride[VPX_PLANE_U],
-                     img2->planes[VPX_PLANE_U] + i * img2->stride[VPX_PLANE_U],
-                     c_w) == 0);
-
-  for (i = 0; i < c_h; ++i)
-    match &= (memcmp(img1->planes[VPX_PLANE_V] + i * img1->stride[VPX_PLANE_V],
-                     img2->planes[VPX_PLANE_V] + i * img2->stride[VPX_PLANE_V],
-                     c_w) == 0);
-
-  return match;
-}
-
-#define mmin(a, b) ((a) < (b) ? (a) : (b))
-static void find_mismatch(const vpx_image_t *const img1,
-                          const vpx_image_t *const img2, int yloc[4],
-                          int uloc[4], int vloc[4]) {
-  const uint32_t bsize = 64;
-  const uint32_t bsizey = bsize >> img1->y_chroma_shift;
-  const uint32_t bsizex = bsize >> img1->x_chroma_shift;
-  const uint32_t c_w =
-      (img1->d_w + img1->x_chroma_shift) >> img1->x_chroma_shift;
-  const uint32_t c_h =
-      (img1->d_h + img1->y_chroma_shift) >> img1->y_chroma_shift;
-  int match = 1;
-  uint32_t i, j;
-  yloc[0] = yloc[1] = yloc[2] = yloc[3] = -1;
-  for (i = 0, match = 1; match && i < img1->d_h; i += bsize) {
-    for (j = 0; match && j < img1->d_w; j += bsize) {
-      int k, l;
-      const int si = mmin(i + bsize, img1->d_h) - i;
-      const int sj = mmin(j + bsize, img1->d_w) - j;
-      for (k = 0; match && k < si; ++k) {
-        for (l = 0; match && l < sj; ++l) {
-          if (*(img1->planes[VPX_PLANE_Y] +
-                (i + k) * img1->stride[VPX_PLANE_Y] + j + l) !=
-              *(img2->planes[VPX_PLANE_Y] +
-                (i + k) * img2->stride[VPX_PLANE_Y] + j + l)) {
-            yloc[0] = i + k;
-            yloc[1] = j + l;
-            yloc[2] = *(img1->planes[VPX_PLANE_Y] +
-                        (i + k) * img1->stride[VPX_PLANE_Y] + j + l);
-            yloc[3] = *(img2->planes[VPX_PLANE_Y] +
-                        (i + k) * img2->stride[VPX_PLANE_Y] + j + l);
-            match = 0;
-            break;
-          }
-        }
-      }
-    }
-  }
-
-  uloc[0] = uloc[1] = uloc[2] = uloc[3] = -1;
-  for (i = 0, match = 1; match && i < c_h; i += bsizey) {
-    for (j = 0; match && j < c_w; j += bsizex) {
-      int k, l;
-      const int si = mmin(i + bsizey, c_h - i);
-      const int sj = mmin(j + bsizex, c_w - j);
-      for (k = 0; match && k < si; ++k) {
-        for (l = 0; match && l < sj; ++l) {
-          if (*(img1->planes[VPX_PLANE_U] +
-                (i + k) * img1->stride[VPX_PLANE_U] + j + l) !=
-              *(img2->planes[VPX_PLANE_U] +
-                (i + k) * img2->stride[VPX_PLANE_U] + j + l)) {
-            uloc[0] = i + k;
-            uloc[1] = j + l;
-            uloc[2] = *(img1->planes[VPX_PLANE_U] +
-                        (i + k) * img1->stride[VPX_PLANE_U] + j + l);
-            uloc[3] = *(img2->planes[VPX_PLANE_U] +
-                        (i + k) * img2->stride[VPX_PLANE_U] + j + l);
-            match = 0;
-            break;
-          }
-        }
-      }
-    }
-  }
-  vloc[0] = vloc[1] = vloc[2] = vloc[3] = -1;
-  for (i = 0, match = 1; match && i < c_h; i += bsizey) {
-    for (j = 0; match && j < c_w; j += bsizex) {
-      int k, l;
-      const int si = mmin(i + bsizey, c_h - i);
-      const int sj = mmin(j + bsizex, c_w - j);
-      for (k = 0; match && k < si; ++k) {
-        for (l = 0; match && l < sj; ++l) {
-          if (*(img1->planes[VPX_PLANE_V] +
-                (i + k) * img1->stride[VPX_PLANE_V] + j + l) !=
-              *(img2->planes[VPX_PLANE_V] +
-                (i + k) * img2->stride[VPX_PLANE_V] + j + l)) {
-            vloc[0] = i + k;
-            vloc[1] = j + l;
-            vloc[2] = *(img1->planes[VPX_PLANE_V] +
-                        (i + k) * img1->stride[VPX_PLANE_V] + j + l);
-            vloc[3] = *(img2->planes[VPX_PLANE_V] +
-                        (i + k) * img2->stride[VPX_PLANE_V] + j + l);
-            match = 0;
-            break;
-          }
-        }
-      }
-    }
-  }
-}
-
 static void testing_decode(vpx_codec_ctx_t *encoder, vpx_codec_ctx_t *decoder,
                            unsigned int frame_out, int *mismatch_seen) {
   vpx_image_t enc_img, dec_img;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vpx_dec_fuzzer.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vpx_dec_fuzzer.cc
new file mode 100644
index 0000000..d55fe15
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vpx_dec_fuzzer.cc
@@ -0,0 +1,118 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+/*
+ * Fuzzer for libvpx decoders
+ * ==========================
+ * Requirements
+ * --------------
+ * Requires Clang 6.0 or above as -fsanitize=fuzzer is used as a linker
+ * option.
+
+ * Steps to build
+ * --------------
+ * Clone libvpx repository
+   $git clone https://chromium.googlesource.com/webm/libvpx
+
+ * Create a directory in parallel to libvpx and change directory
+   $mkdir vpx_dec_fuzzer
+   $cd vpx_dec_fuzzer/
+
+ * Enable sanitizers (Supported: address integer memory thread undefined)
+   $source ../libvpx/tools/set_analyzer_env.sh address
+
+ * Configure libvpx.
+ * Note --size-limit and VPX_MAX_ALLOCABLE_MEMORY are defined to avoid
+ * Out of memory errors when running generated fuzzer binary
+   $../libvpx/configure --disable-unit-tests --size-limit=12288x12288 \
+   --extra-cflags="-fsanitize=fuzzer-no-link \
+   -DVPX_MAX_ALLOCABLE_MEMORY=1073741824" \
+   --disable-webm-io --enable-debug --disable-vp8-encoder \
+   --disable-vp9-encoder --disable-examples
+
+ * Build libvpx
+   $make -j32
+
+ * Build vp9 fuzzer
+   $ $CXX $CXXFLAGS -std=c++11 -DDECODER=vp9 \
+   -fsanitize=fuzzer -I../libvpx -I. -Wl,--start-group \
+   ../libvpx/examples/vpx_dec_fuzzer.cc -o ./vpx_dec_fuzzer_vp9 \
+   ./libvpx.a -Wl,--end-group
+
+ * DECODER should be defined as vp9 or vp8 to enable vp9/vp8
+ *
+ * create a corpus directory and copy some ivf files there.
+ * Based on which codec (vp8/vp9) is being tested, it is recommended to
+ * have corresponding ivf files in corpus directory
+ * Empty corpus directory also is acceptable, though not recommended
+   $mkdir CORPUS && cp some-files CORPUS
+
+ * Run fuzzing:
+   $./vpx_dec_fuzzer_vp9 CORPUS
+
+ * References:
+ * http://llvm.org/docs/LibFuzzer.html
+ * https://github.com/google/oss-fuzz
+ */
+
+#include <stddef.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <algorithm>
+#include <memory>
+
+#include "vpx/vp8dx.h"
+#include "vpx/vpx_decoder.h"
+#include "vpx_ports/mem_ops.h"
+
+#define IVF_FRAME_HDR_SZ (4 + 8) /* 4 byte size + 8 byte timestamp */
+#define IVF_FILE_HDR_SZ 32
+
+#define VPXD_INTERFACE(name) VPXD_INTERFACE_(name)
+#define VPXD_INTERFACE_(name) vpx_codec_##name##_dx()
+
+extern "C" void usage_exit(void) { exit(EXIT_FAILURE); }
+
+extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
+  if (size <= IVF_FILE_HDR_SZ) {
+    return 0;
+  }
+
+  vpx_codec_ctx_t codec;
+  // Set thread count in the range [1, 64].
+  const unsigned int threads = (data[IVF_FILE_HDR_SZ] & 0x3f) + 1;
+  vpx_codec_dec_cfg_t cfg = { threads, 0, 0 };
+  if (vpx_codec_dec_init(&codec, VPXD_INTERFACE(DECODER), &cfg, 0)) {
+    return 0;
+  }
+
+  data += IVF_FILE_HDR_SZ;
+  size -= IVF_FILE_HDR_SZ;
+
+  while (size > IVF_FRAME_HDR_SZ) {
+    size_t frame_size = mem_get_le32(data);
+    size -= IVF_FRAME_HDR_SZ;
+    data += IVF_FRAME_HDR_SZ;
+    frame_size = std::min(size, frame_size);
+
+    const vpx_codec_err_t err =
+        vpx_codec_decode(&codec, data, frame_size, nullptr, 0);
+    static_cast<void>(err);
+    vpx_codec_iter_t iter = nullptr;
+    vpx_image_t *img = nullptr;
+    while ((img = vpx_codec_get_frame(&codec, &iter)) != nullptr) {
+    }
+    data += frame_size;
+    size -= frame_size;
+  }
+  vpx_codec_destroy(&codec);
+  return 0;
+}
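The fuzzer above treats its input as an IVF stream: a 32-byte file header followed by frames, each prefixed by a 12-byte header (4-byte little-endian payload size plus an 8-byte timestamp). A minimal standalone sketch of that framing walk, reimplementing the little-endian read locally rather than pulling in libvpx's `vpx_ports/mem_ops.h`:

```c
#include <stddef.h>
#include <stdint.h>

#define IVF_FILE_HDR_SZ 32
#define IVF_FRAME_HDR_SZ (4 + 8) /* 4-byte size + 8-byte timestamp */

/* Little-endian 32-bit read, equivalent to libvpx's mem_get_le32(). */
static uint32_t get_le32(const uint8_t *p) {
  return (uint32_t)p[0] | ((uint32_t)p[1] << 8) | ((uint32_t)p[2] << 16) |
         ((uint32_t)p[3] << 24);
}

/* Walk the frames of an in-memory IVF stream and return how many frame
   payloads were visited. Mirrors the loop in LLVMFuzzerTestOneInput:
   an oversized declared frame size is clamped to the remaining bytes. */
static size_t count_ivf_frames(const uint8_t *data, size_t size) {
  size_t frames = 0;
  if (size <= IVF_FILE_HDR_SZ) return 0;
  data += IVF_FILE_HDR_SZ;
  size -= IVF_FILE_HDR_SZ;
  while (size > IVF_FRAME_HDR_SZ) {
    size_t frame_size = get_le32(data);
    size -= IVF_FRAME_HDR_SZ;
    data += IVF_FRAME_HDR_SZ;
    if (frame_size > size) frame_size = size;
    data += frame_size;
    size -= frame_size;
    ++frames;
  }
  return frames;
}
```

The clamping step is what keeps the fuzzer robust against corrupt size fields: a mutated header can claim any size, but only the bytes actually present are ever handed to `vpx_codec_decode()`.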
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vpx_temporal_svc_encoder.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vpx_temporal_svc_encoder.c
index 4c985ba..925043d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vpx_temporal_svc_encoder.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/examples/vpx_temporal_svc_encoder.c
@@ -19,16 +19,18 @@
 #include <string.h>
 
 #include "./vpx_config.h"
+#include "./y4minput.h"
 #include "../vpx_ports/vpx_timer.h"
 #include "vpx/vp8cx.h"
 #include "vpx/vpx_encoder.h"
+#include "vpx_ports/bitops.h"
 
 #include "../tools_common.h"
 #include "../video_writer.h"
 
 #define ROI_MAP 0
 
-#define zero(Dest) memset(&Dest, 0, sizeof(Dest));
+#define zero(Dest) memset(&(Dest), 0, sizeof(Dest));
 
 static const char *exec_name;
 
@@ -91,14 +93,15 @@
 // in the stream.
 static void set_rate_control_metrics(struct RateControlMetrics *rc,
                                      vpx_codec_enc_cfg_t *cfg) {
-  unsigned int i = 0;
+  int i = 0;
   // Set the layer (cumulative) framerate and the target layer (non-cumulative)
   // per-frame-bandwidth, for the rate control encoding stats below.
   const double framerate = cfg->g_timebase.den / cfg->g_timebase.num;
+  const int ts_number_layers = cfg->ts_number_layers;
   rc->layer_framerate[0] = framerate / cfg->ts_rate_decimator[0];
   rc->layer_pfb[0] =
       1000.0 * rc->layer_target_bitrate[0] / rc->layer_framerate[0];
-  for (i = 0; i < cfg->ts_number_layers; ++i) {
+  for (i = 0; i < ts_number_layers; ++i) {
     if (i > 0) {
       rc->layer_framerate[i] = framerate / cfg->ts_rate_decimator[i];
       rc->layer_pfb[i] =
@@ -117,6 +120,9 @@
   rc->window_size = 15;
   rc->avg_st_encoding_bitrate = 0.0;
   rc->variance_st_encoding_bitrate = 0.0;
+  // Target bandwidth for the whole stream.
+  // Set to layer_target_bitrate for highest layer (total bitrate).
+  cfg->rc_target_bitrate = rc->layer_target_bitrate[ts_number_layers - 1];
 }
 
 static void printout_rate_control_summary(struct RateControlMetrics *rc,
@@ -591,9 +597,9 @@
 #if ROI_MAP
   vpx_roi_map_t roi;
 #endif
-  vpx_svc_layer_id_t layer_id = { 0, 0 };
+  vpx_svc_layer_id_t layer_id;
   const VpxInterface *encoder = NULL;
-  FILE *infile = NULL;
+  struct VpxInputContext input_ctx;
   struct RateControlMetrics rc;
   int64_t cx_time = 0;
   const int min_args_base = 13;
@@ -608,6 +614,15 @@
   double sum_bitrate2 = 0.0;
   double framerate = 30.0;
 
+  zero(rc.layer_target_bitrate);
+  memset(&layer_id, 0, sizeof(vpx_svc_layer_id_t));
+  memset(&input_ctx, 0, sizeof(input_ctx));
+  /* Setup default input stream settings */
+  input_ctx.framerate.numerator = 30;
+  input_ctx.framerate.denominator = 1;
+  input_ctx.only_i420 = 1;
+  input_ctx.bit_depth = 0;
+
   exec_name = argv[0];
   // Check usage and arguments.
   if (argc < min_args) {
@@ -646,6 +661,9 @@
     die("Invalid number of arguments");
   }
 
+  input_ctx.filename = argv[1];
+  open_input_file(&input_ctx);
+
 #if CONFIG_VP9_HIGHBITDEPTH
   switch (strtol(argv[argc - 1], NULL, 0)) {
     case 8:
@@ -662,14 +680,22 @@
       break;
     default: die("Invalid bit depth (8, 10, 12) %s", argv[argc - 1]);
   }
-  if (!vpx_img_alloc(
-          &raw, bit_depth == VPX_BITS_8 ? VPX_IMG_FMT_I420 : VPX_IMG_FMT_I42016,
-          width, height, 32)) {
-    die("Failed to allocate image", width, height);
+
+  // Y4M reader has its own allocation.
+  if (input_ctx.file_type != FILE_TYPE_Y4M) {
+    if (!vpx_img_alloc(
+            &raw,
+            bit_depth == VPX_BITS_8 ? VPX_IMG_FMT_I420 : VPX_IMG_FMT_I42016,
+            width, height, 32)) {
+      die("Failed to allocate image", width, height);
+    }
   }
 #else
-  if (!vpx_img_alloc(&raw, VPX_IMG_FMT_I420, width, height, 32)) {
-    die("Failed to allocate image", width, height);
+  // Y4M reader has its own allocation.
+  if (input_ctx.file_type != FILE_TYPE_Y4M) {
+    if (!vpx_img_alloc(&raw, VPX_IMG_FMT_I420, width, height, 32)) {
+      die("Failed to allocate image", width, height);
+    }
   }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
@@ -700,6 +726,9 @@
   if (speed < 0) {
     die("Invalid speed setting: must be positive");
   }
+  if (strncmp(encoder->name, "vp9", 3) == 0 && speed > 9) {
+    warn("Mapping speed %d to speed 9.\n", speed);
+  }
 
   for (i = min_args_base;
        (int)i < min_args_base + mode_to_num_layers[layering_mode]; ++i) {
@@ -747,13 +776,15 @@
 
   set_rate_control_metrics(&rc, &cfg);
 
-  // Target bandwidth for the whole stream.
-  // Set to layer_target_bitrate for highest layer (total bitrate).
-  cfg.rc_target_bitrate = rc.layer_target_bitrate[cfg.ts_number_layers - 1];
-
-  // Open input file.
-  if (!(infile = fopen(argv[1], "rb"))) {
-    die("Failed to open %s for reading", argv[1]);
+  if (input_ctx.file_type == FILE_TYPE_Y4M) {
+    if (input_ctx.width != cfg.g_w || input_ctx.height != cfg.g_h) {
+      die("Incorrect width or height: %d x %d", cfg.g_w, cfg.g_h);
+    }
+    if (input_ctx.framerate.numerator != cfg.g_timebase.den ||
+        input_ctx.framerate.denominator != cfg.g_timebase.num) {
+      die("Incorrect framerate: numerator %d denominator %d",
+          cfg.g_timebase.num, cfg.g_timebase.den);
+    }
   }
 
   framerate = cfg.g_timebase.den / cfg.g_timebase.num;
@@ -808,16 +839,14 @@
     vpx_codec_control(&codec, VP9E_SET_NOISE_SENSITIVITY, kVp9DenoiserOff);
     vpx_codec_control(&codec, VP8E_SET_STATIC_THRESHOLD, 1);
     vpx_codec_control(&codec, VP9E_SET_TUNE_CONTENT, 0);
-    vpx_codec_control(&codec, VP9E_SET_TILE_COLUMNS, (cfg.g_threads >> 1));
+    vpx_codec_control(&codec, VP9E_SET_TILE_COLUMNS, get_msb(cfg.g_threads));
 #if ROI_MAP
     set_roi_map(encoder->name, &cfg, &roi);
     if (vpx_codec_control(&codec, VP9E_SET_ROI_MAP, &roi))
       die_codec(&codec, "Failed to set ROI map");
     vpx_codec_control(&codec, VP9E_SET_AQ_MODE, 0);
 #endif
-    // TODO(marpan/jianj): There is an issue with row-mt for low resolutons at
-    // high speed settings, disable its use for those cases for now.
-    if (cfg.g_threads > 1 && ((cfg.g_w > 320 && cfg.g_h > 240) || speed < 7))
+    if (cfg.g_threads > 1)
       vpx_codec_control(&codec, VP9E_SET_ROW_MT, 1);
     else
       vpx_codec_control(&codec, VP9E_SET_ROW_MT, 0);
@@ -853,6 +882,7 @@
     layer_id.spatial_layer_id = 0;
     layer_id.temporal_layer_id =
         cfg.ts_layer_id[frame_cnt % cfg.ts_periodicity];
+    layer_id.temporal_layer_id_per_spatial[0] = layer_id.temporal_layer_id;
     if (strncmp(encoder->name, "vp9", 3) == 0) {
       vpx_codec_control(&codec, VP9E_SET_SVC_LAYER_ID, &layer_id);
     } else if (strncmp(encoder->name, "vp8", 3) == 0) {
@@ -861,7 +891,7 @@
     }
     flags = layer_flags[frame_cnt % flag_periodicity];
     if (layering_mode == 0) flags = 0;
-    frame_avail = vpx_img_read(&raw, infile);
+    frame_avail = read_frame(&input_ctx, &raw);
     if (frame_avail) ++rc.layer_input_frames[layer_id.temporal_layer_id];
     vpx_usec_timer_start(&timer);
     if (vpx_codec_encode(&codec, frame_avail ? &raw : NULL, pts, 1, flags,
@@ -929,7 +959,7 @@
     ++frame_cnt;
     pts += frame_duration;
   }
-  fclose(infile);
+  close_input_file(&input_ctx);
   printout_rate_control_summary(&rc, &cfg, frame_cnt);
   printf("\n");
   printf("Frame cnt and encoding time/FPS stats for encoding: %d %f %f \n",
@@ -941,7 +971,10 @@
   // Try to rewrite the output file headers with the actual frame count.
   for (i = 0; i < cfg.ts_number_layers; ++i) vpx_video_writer_close(outfile[i]);
 
-  vpx_img_free(&raw);
+  if (input_ctx.file_type != FILE_TYPE_Y4M) {
+    vpx_img_free(&raw);
+  }
+
 #if ROI_MAP
   free(roi.roi_map);
 #endif
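The `set_rate_control_metrics()` hunks above derive, per temporal layer, a cumulative framerate from the rate decimators and a per-frame bandwidth (in bits, hence the 1000x on the kbps targets). A hedged sketch of the shape of that computation, with illustrative names (`layer_kbps` standing in for `rc->layer_target_bitrate`); upstream computes the higher layers from the incremental bitrate and framerate:

```c
/* Per-layer rate metrics: cumulative framerate per layer, and the
   per-frame bandwidth each layer adds on top of the layers below it. */
static void layer_metrics(double framerate, const int *decimator,
                          const int *layer_kbps, int layers,
                          double *layer_fps, double *layer_pfb) {
  int i;
  layer_fps[0] = framerate / decimator[0];
  layer_pfb[0] = 1000.0 * layer_kbps[0] / layer_fps[0];
  for (i = 1; i < layers; ++i) {
    layer_fps[i] = framerate / decimator[i];
    /* Higher layers carry only the incremental bitrate over the
       incremental framerate. */
    layer_pfb[i] = 1000.0 * (layer_kbps[i] - layer_kbps[i - 1]) /
                   (layer_fps[i] - layer_fps[i - 1]);
  }
}
```

This also motivates the change at L108302-L108304: the stream-wide `rc_target_bitrate` is simply the top layer's cumulative target, so it can be set inside `set_rate_control_metrics()` instead of by the caller.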
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfdec.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfdec.c
index f64e594..3e179bc 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfdec.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfdec.c
@@ -76,12 +76,12 @@
   size_t frame_size = 0;
 
   if (fread(raw_header, IVF_FRAME_HDR_SZ, 1, infile) != 1) {
-    if (!feof(infile)) warn("Failed to read frame size\n");
+    if (!feof(infile)) warn("Failed to read frame size");
   } else {
     frame_size = mem_get_le32(raw_header);
 
     if (frame_size > 256 * 1024 * 1024) {
-      warn("Read invalid frame size (%u)\n", (unsigned int)frame_size);
+      warn("Read invalid frame size (%u)", (unsigned int)frame_size);
       frame_size = 0;
     }
 
@@ -92,7 +92,7 @@
         *buffer = new_buffer;
         *buffer_size = 2 * frame_size;
       } else {
-        warn("Failed to allocate compressed data buffer\n");
+        warn("Failed to allocate compressed data buffer");
         frame_size = 0;
       }
     }
@@ -100,7 +100,7 @@
 
   if (!feof(infile)) {
     if (fread(*buffer, 1, frame_size, infile) != frame_size) {
-      warn("Failed to read full frame\n");
+      warn("Failed to read full frame");
       return 1;
     }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfdec.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfdec.h
index af72557..847cd79 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfdec.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfdec.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef IVFDEC_H_
-#define IVFDEC_H_
+#ifndef VPX_IVFDEC_H_
+#define VPX_IVFDEC_H_
 
 #include "./tools_common.h"
 
@@ -25,4 +25,4 @@
 } /* extern "C" */
 #endif
 
-#endif  // IVFDEC_H_
+#endif  // VPX_IVFDEC_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfenc.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfenc.c
index a50d318..2e8e042 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfenc.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfenc.c
@@ -13,27 +13,35 @@
 #include "vpx/vpx_encoder.h"
 #include "vpx_ports/mem_ops.h"
 
-void ivf_write_file_header(FILE *outfile, const struct vpx_codec_enc_cfg *cfg,
-                           unsigned int fourcc, int frame_cnt) {
+void ivf_write_file_header_with_video_info(FILE *outfile, unsigned int fourcc,
+                                           int frame_cnt, int frame_width,
+                                           int frame_height,
+                                           vpx_rational_t timebase) {
   char header[32];
 
   header[0] = 'D';
   header[1] = 'K';
   header[2] = 'I';
   header[3] = 'F';
-  mem_put_le16(header + 4, 0);                     // version
-  mem_put_le16(header + 6, 32);                    // header size
-  mem_put_le32(header + 8, fourcc);                // fourcc
-  mem_put_le16(header + 12, cfg->g_w);             // width
-  mem_put_le16(header + 14, cfg->g_h);             // height
-  mem_put_le32(header + 16, cfg->g_timebase.den);  // rate
-  mem_put_le32(header + 20, cfg->g_timebase.num);  // scale
-  mem_put_le32(header + 24, frame_cnt);            // length
-  mem_put_le32(header + 28, 0);                    // unused
+  mem_put_le16(header + 4, 0);              // version
+  mem_put_le16(header + 6, 32);             // header size
+  mem_put_le32(header + 8, fourcc);         // fourcc
+  mem_put_le16(header + 12, frame_width);   // width
+  mem_put_le16(header + 14, frame_height);  // height
+  mem_put_le32(header + 16, timebase.den);  // rate
+  mem_put_le32(header + 20, timebase.num);  // scale
+  mem_put_le32(header + 24, frame_cnt);     // length
+  mem_put_le32(header + 28, 0);             // unused
 
   fwrite(header, 1, 32, outfile);
 }
 
+void ivf_write_file_header(FILE *outfile, const struct vpx_codec_enc_cfg *cfg,
+                           unsigned int fourcc, int frame_cnt) {
+  ivf_write_file_header_with_video_info(outfile, fourcc, frame_cnt, cfg->g_w,
+                                        cfg->g_h, cfg->g_timebase);
+}
+
 void ivf_write_frame_header(FILE *outfile, int64_t pts, size_t frame_size) {
   char header[12];
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfenc.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfenc.h
index ebdce47..27b6910 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfenc.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/ivfenc.h
@@ -7,11 +7,13 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef IVFENC_H_
-#define IVFENC_H_
+#ifndef VPX_IVFENC_H_
+#define VPX_IVFENC_H_
 
 #include "./tools_common.h"
 
+#include "vpx/vpx_encoder.h"
+
 struct vpx_codec_enc_cfg;
 struct vpx_codec_cx_pkt;
 
@@ -19,6 +21,11 @@
 extern "C" {
 #endif
 
+void ivf_write_file_header_with_video_info(FILE *outfile, unsigned int fourcc,
+                                           int frame_cnt, int frame_width,
+                                           int frame_height,
+                                           vpx_rational_t timebase);
+
 void ivf_write_file_header(FILE *outfile, const struct vpx_codec_enc_cfg *cfg,
                            uint32_t fourcc, int frame_cnt);
 
@@ -30,4 +37,4 @@
 } /* extern "C" */
 #endif
 
-#endif  // IVFENC_H_
+#endif  // VPX_IVFENC_H_
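The refactored `ivf_write_file_header_with_video_info()` writes the 32-byte DKIF header field by field. A standalone sketch that serializes the same layout into a byte buffer, with local `put_le16`/`put_le32` helpers standing in for `vpx_ports/mem_ops.h`:

```c
#include <stdint.h>
#include <string.h>

static void put_le16(uint8_t *p, uint32_t v) {
  p[0] = (uint8_t)v;
  p[1] = (uint8_t)(v >> 8);
}

static void put_le32(uint8_t *p, uint32_t v) {
  put_le16(p, v);
  put_le16(p + 2, v >> 16);
}

/* Fill a 32-byte IVF (DKIF) file header; same field layout as
   ivf_write_file_header_with_video_info() above. */
static void fill_ivf_header(uint8_t header[32], uint32_t fourcc,
                            int frame_cnt, int width, int height,
                            int timebase_num, int timebase_den) {
  memcpy(header, "DKIF", 4);
  put_le16(header + 4, 0);                       /* version */
  put_le16(header + 6, 32);                      /* header size */
  put_le32(header + 8, fourcc);                  /* e.g. "VP90" */
  put_le16(header + 12, (uint32_t)width);
  put_le16(header + 14, (uint32_t)height);
  put_le32(header + 16, (uint32_t)timebase_den); /* rate */
  put_le32(header + 20, (uint32_t)timebase_num); /* scale */
  put_le32(header + 24, (uint32_t)frame_cnt);    /* length */
  put_le32(header + 28, 0);                      /* unused */
}
```

Splitting the width/height/timebase parameters out of `vpx_codec_enc_cfg` is what lets callers without an encoder config (such as decode-side tools) reuse the writer; the original `ivf_write_file_header()` becomes a thin wrapper.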
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/libs.mk b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/libs.mk
index a3e2f9d..569dbdc 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/libs.mk
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/libs.mk
@@ -11,7 +11,7 @@
 
 # ARM assembly files are written in RVCT-style. We use some make magic to
 # filter those files to allow GCC compilation
-ifeq ($(ARCH_ARM),yes)
+ifeq ($(VPX_ARCH_ARM),yes)
   ASM:=$(if $(filter yes,$(CONFIG_GCC)$(CONFIG_MSVS)),.asm.S,.asm)
 else
   ASM:=.asm
@@ -88,7 +88,6 @@
   CODEC_EXPORTS-yes += $(addprefix $(VP9_PREFIX),$(VP9_CX_EXPORTS))
   CODEC_SRCS-yes += $(VP9_PREFIX)vp9cx.mk vpx/vp8.h vpx/vp8cx.h
   INSTALL-LIBS-yes += include/vpx/vp8.h include/vpx/vp8cx.h
-  INSTALL-LIBS-$(CONFIG_SPATIAL_SVC) += include/vpx/svc_context.h
   INSTALL_MAPS += include/vpx/% $(SRC_PATH_BARE)/$(VP9_PREFIX)/%
   CODEC_DOC_SRCS += vpx/vp8.h vpx/vp8cx.h
   CODEC_DOC_SECTIONS += vp9 vp9_encoder
@@ -113,13 +112,6 @@
   CODEC_DOC_SECTIONS += decoder
 endif
 
-# Suppress -Wextra warnings in third party code.
-$(BUILD_PFX)third_party/googletest/%.cc.o: CXXFLAGS += -Wno-missing-field-initializers
-# Suppress -Wextra warnings in first party code pending investigation.
-# https://bugs.chromium.org/p/webm/issues/detail?id=1069
-$(BUILD_PFX)vp8/encoder/onyx_if.c.o: CFLAGS += -Wno-unknown-warning-option -Wno-clobbered
-$(BUILD_PFX)vp8/decoder/onyxd_if.c.o: CFLAGS += -Wno-unknown-warning-option -Wno-clobbered
-
 ifeq ($(CONFIG_MSVS),yes)
 CODEC_LIB=$(if $(CONFIG_STATIC_MSVCRT),vpxmt,vpxmd)
 GTEST_LIB=$(if $(CONFIG_STATIC_MSVCRT),gtestmt,gtestmd)
@@ -147,15 +139,12 @@
 CODEC_SRCS-yes += vpx_ports/vpx_once.h
 CODEC_SRCS-yes += $(BUILD_PFX)vpx_config.c
 INSTALL-SRCS-no += $(BUILD_PFX)vpx_config.c
-ifeq ($(ARCH_X86)$(ARCH_X86_64),yes)
+ifeq ($(VPX_ARCH_X86)$(VPX_ARCH_X86_64),yes)
 INSTALL-SRCS-$(CONFIG_CODEC_SRCS) += third_party/x86inc/x86inc.asm
 INSTALL-SRCS-$(CONFIG_CODEC_SRCS) += vpx_dsp/x86/bitdepth_conversion_sse2.asm
 endif
 CODEC_EXPORTS-yes += vpx/exports_com
 CODEC_EXPORTS-$(CONFIG_ENCODERS) += vpx/exports_enc
-ifeq ($(CONFIG_SPATIAL_SVC),yes)
-CODEC_EXPORTS-$(CONFIG_ENCODERS) += vpx/exports_spatial_svc
-endif
 CODEC_EXPORTS-$(CONFIG_DECODERS) += vpx/exports_dec
 
 INSTALL-LIBS-yes += include/vpx/vpx_codec.h
@@ -206,6 +195,8 @@
             --out=$@ $^
 CLEAN-OBJS += vpx.def
 
+vpx.$(VCPROJ_SFX): VCPROJ_SRCS=$(filter-out $(addprefix %, $(ASM_INCLUDES)), $^)
+
 vpx.$(VCPROJ_SFX): $(CODEC_SRCS) vpx.def
 	@echo "    [CREATE] $@"
 	$(qexec)$(GEN_VCPROJ) \
@@ -218,7 +209,15 @@
             --ver=$(CONFIG_VS_VERSION) \
             --src-path-bare="$(SRC_PATH_BARE)" \
             --out=$@ $(CFLAGS) \
-            $(filter-out $(addprefix %, $(ASM_INCLUDES)), $^) \
+            $(filter $(SRC_PATH_BARE)/vp8/%.c, $(VCPROJ_SRCS)) \
+            $(filter $(SRC_PATH_BARE)/vp8/%.h, $(VCPROJ_SRCS)) \
+            $(filter $(SRC_PATH_BARE)/vp9/%.c, $(VCPROJ_SRCS)) \
+            $(filter $(SRC_PATH_BARE)/vp9/%.h, $(VCPROJ_SRCS)) \
+            $(filter $(SRC_PATH_BARE)/vpx/%, $(VCPROJ_SRCS)) \
+            $(filter $(SRC_PATH_BARE)/vpx_dsp/%, $(VCPROJ_SRCS)) \
+            $(filter-out $(addprefix $(SRC_PATH_BARE)/, \
+                           vp8/%.c vp8/%.h vp9/%.c vp9/%.h vpx/% vpx_dsp/%), \
+              $(VCPROJ_SRCS)) \
             --src-path-bare="$(SRC_PATH_BARE)" \
 
 PROJECTS-yes += vpx.$(VCPROJ_SFX)
@@ -233,8 +232,8 @@
 LIBS-$(if yes,$(CONFIG_STATIC)) += $(BUILD_PFX)libvpx.a $(BUILD_PFX)libvpx_g.a
 $(BUILD_PFX)libvpx_g.a: $(LIBVPX_OBJS)
 
-SO_VERSION_MAJOR := 5
-SO_VERSION_MINOR := 0
+SO_VERSION_MAJOR := 6
+SO_VERSION_MINOR := 2
 SO_VERSION_PATCH := 0
 ifeq ($(filter darwin%,$(TGT_OS)),$(TGT_OS))
 LIBVPX_SO               := libvpx.$(SO_VERSION_MAJOR).dylib
@@ -274,18 +273,6 @@
 $(BUILD_PFX)$(LIBVPX_SO): SONAME = libvpx.so.$(SO_VERSION_MAJOR)
 $(BUILD_PFX)$(LIBVPX_SO): EXPORTS_FILE = $(EXPORT_FILE)
 
-libvpx.ver: $(call enabled,CODEC_EXPORTS)
-	@echo "    [CREATE] $@"
-	$(qexec)echo "{ global:" > $@
-	$(qexec)for f in $?; do awk '{print $$2";"}' < $$f >>$@; done
-	$(qexec)echo "local: *; };" >> $@
-CLEAN-OBJS += libvpx.ver
-
-libvpx.syms: $(call enabled,CODEC_EXPORTS)
-	@echo "    [CREATE] $@"
-	$(qexec)awk '{print "_"$$2}' $^ >$@
-CLEAN-OBJS += libvpx.syms
-
 libvpx.def: $(call enabled,CODEC_EXPORTS)
 	@echo "    [CREATE] $@"
 	$(qexec)echo LIBRARY $(LIBVPX_SO:.dll=) INITINSTANCE TERMINSTANCE > $@
@@ -345,10 +332,22 @@
 CLEAN-OBJS += vpx.pc
 endif
 
+libvpx.ver: $(call enabled,CODEC_EXPORTS)
+	@echo "    [CREATE] $@"
+	$(qexec)echo "{ global:" > $@
+	$(qexec)for f in $?; do awk '{print $$2";"}' < $$f >>$@; done
+	$(qexec)echo "local: *; };" >> $@
+CLEAN-OBJS += libvpx.ver
+
+libvpx.syms: $(call enabled,CODEC_EXPORTS)
+	@echo "    [CREATE] $@"
+	$(qexec)awk '{print "_"$$2}' $^ >$@
+CLEAN-OBJS += libvpx.syms
+
 #
 # Rule to make assembler configuration file from C configuration file
 #
-ifeq ($(ARCH_X86)$(ARCH_X86_64),yes)
+ifeq ($(VPX_ARCH_X86)$(VPX_ARCH_X86_64),yes)
 # YASM
 $(BUILD_PFX)vpx_config.asm: $(BUILD_PFX)vpx_config.h
 	@echo "    [CREATE] $@"
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/mainpage.dox b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/mainpage.dox
index ec202fa..4b0dff0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/mainpage.dox
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/mainpage.dox
@@ -25,8 +25,10 @@
     release.
   - The \ref readme contains instructions on recompiling the sample applications.
   - Read the \ref usage "usage" for a narrative on codec usage.
+  \if samples
   - Read the \ref samples "sample code" for examples of how to interact with the
     codec.
+  \endif
   - \ref codec reference
   \if encoder
   - \ref encoder reference
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/md5_utils.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/md5_utils.c
index 093798b..c410652 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/md5_utils.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/md5_utils.c
@@ -23,6 +23,7 @@
 #include <string.h> /* for memcpy() */
 
 #include "md5_utils.h"
+#include "vpx_ports/compiler_attributes.h"
 
 static void byteSwap(UWORD32 *buf, unsigned words) {
   md5byte *p;
@@ -145,17 +146,6 @@
 #define MD5STEP(f, w, x, y, z, in, s) \
   (w += f(x, y, z) + in, w = (w << s | w >> (32 - s)) + x)
 
-#if defined(__clang__) && defined(__has_attribute)
-#if __has_attribute(no_sanitize)
-#define VPX_NO_UNSIGNED_OVERFLOW_CHECK \
-  __attribute__((no_sanitize("unsigned-integer-overflow")))
-#endif
-#endif
-
-#ifndef VPX_NO_UNSIGNED_OVERFLOW_CHECK
-#define VPX_NO_UNSIGNED_OVERFLOW_CHECK
-#endif
-
 /*
  * The core of the MD5 algorithm, this alters an existing MD5 hash to
  * reflect the addition of 16 longwords of new data.  MD5Update blocks
@@ -163,7 +153,7 @@
  */
 VPX_NO_UNSIGNED_OVERFLOW_CHECK void MD5Transform(UWORD32 buf[4],
                                                  UWORD32 const in[16]) {
-  register UWORD32 a, b, c, d;
+  UWORD32 a, b, c, d;
 
   a = buf[0];
   b = buf[1];
@@ -244,6 +234,4 @@
   buf[3] += d;
 }
 
-#undef VPX_NO_UNSIGNED_OVERFLOW_CHECK
-
 #endif
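The hunk above removes the local `VPX_NO_UNSIGNED_OVERFLOW_CHECK` definition from `md5_utils.c` in favor of the shared `vpx_ports/compiler_attributes.h`. A sketch of how that guarded macro works, applied to a deliberately wrapping helper in the style of `MD5STEP` (the helper itself is illustrative, not libvpx code):

```c
#include <stdint.h>

/* No-op unless clang supports no_sanitize("unsigned-integer-overflow");
   same guard pattern as the removed md5_utils.c definition. */
#if defined(__clang__) && defined(__has_attribute)
#if __has_attribute(no_sanitize)
#define VPX_NO_UNSIGNED_OVERFLOW_CHECK \
  __attribute__((no_sanitize("unsigned-integer-overflow")))
#endif
#endif
#ifndef VPX_NO_UNSIGNED_OVERFLOW_CHECK
#define VPX_NO_UNSIGNED_OVERFLOW_CHECK
#endif

/* Intentional modular add-then-rotate, as in MD5STEP: well defined for
   unsigned types, but flagged by -fsanitize=unsigned-integer-overflow
   without the attribute. s must be in [1, 31]. */
VPX_NO_UNSIGNED_OVERFLOW_CHECK
static uint32_t rotl_add(uint32_t w, uint32_t in, int s) {
  w += in; /* may wrap mod 2^32 */
  return (w << s) | (w >> (32 - s));
}
```

Centralizing the macro also explains the removal of the trailing `#undef`: a shared header must stay defined for every includer, whereas the old file-local copy had to clean up after itself.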
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/md5_utils.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/md5_utils.h
index bd4991b..e0d5a2d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/md5_utils.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/md5_utils.h
@@ -20,8 +20,8 @@
  * Still in the public domain.
  */
 
-#ifndef MD5_UTILS_H_
-#define MD5_UTILS_H_
+#ifndef VPX_MD5_UTILS_H_
+#define VPX_MD5_UTILS_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -46,4 +46,4 @@
 }  // extern "C"
 #endif
 
-#endif  // MD5_UTILS_H_
+#endif  // VPX_MD5_UTILS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/rate_hist.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/rate_hist.h
index 00a1676..d6a4c68 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/rate_hist.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/rate_hist.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef RATE_HIST_H_
-#define RATE_HIST_H_
+#ifndef VPX_RATE_HIST_H_
+#define VPX_RATE_HIST_H_
 
 #include "vpx/vpx_encoder.h"
 
@@ -37,4 +37,4 @@
 }  // extern "C"
 #endif
 
-#endif  // RATE_HIST_H_
+#endif  // VPX_RATE_HIST_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/acm_random.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/acm_random.h
index d915cf9..3458340 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/acm_random.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/acm_random.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef TEST_ACM_RANDOM_H_
-#define TEST_ACM_RANDOM_H_
+#ifndef VPX_TEST_ACM_RANDOM_H_
+#define VPX_TEST_ACM_RANDOM_H_
 
 #include <assert.h>
 
@@ -34,6 +34,23 @@
     return (value >> 15) & 0xffff;
   }
 
+  int32_t Rand20Signed(void) {
+    // Use 20 bits: values between 524287 and -524288.
+    const uint32_t value = random_.Generate(1048576);
+    return static_cast<int32_t>(value) - 524288;
+  }
+
+  int16_t Rand16Signed(void) {
+    // Use 16 bits: values between 32767 and -32768.
+    return static_cast<int16_t>(random_.Generate(65536));
+  }
+
+  int16_t Rand13Signed(void) {
+    // Use 13 bits: values between 4095 and -4096.
+    const uint32_t value = random_.Generate(8192);
+    return static_cast<int16_t>(value) - 4096;
+  }
+
   int16_t Rand9Signed(void) {
     // Use 9 bits: values between 255 (0x0FF) and -256 (0x100).
     const uint32_t value = random_.Generate(512);
@@ -51,7 +68,7 @@
     // Returns a random value near 0 or near 255, to better exercise
     // saturation behavior.
     const uint8_t r = Rand8();
-    return r < 128 ? r << 4 : r >> 4;
+    return static_cast<uint8_t>((r < 128) ? r << 4 : r >> 4);
   }
 
   uint32_t RandRange(const uint32_t range) {
@@ -73,4 +90,4 @@
 
 }  // namespace libvpx_test
 
-#endif  // TEST_ACM_RANDOM_H_
+#endif  // VPX_TEST_ACM_RANDOM_H_
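The new `Rand20Signed()` and `Rand13Signed()` helpers both follow one pattern: draw an unsigned value in [0, 2^bits) and subtract the half-range, yielding [-2^(bits-1), 2^(bits-1) - 1]. (`Rand16Signed()` is the special case where a plain two's-complement cast does the same job.) A generic sketch, where the `value` parameter stands in for the harness's `random_.Generate(1 << bits)`:

```c
#include <stdint.h>

/* Map an unsigned draw in [0, 2^bits) onto the signed range
   [-2^(bits-1), 2^(bits-1) - 1] by subtracting the half-range.
   Generic form of ACMRandom::Rand20Signed() / Rand13Signed();
   valid for bits in [1, 31]. */
static int32_t to_signed_range(uint32_t value, int bits) {
  const int32_t half = (int32_t)1 << (bits - 1);
  return (int32_t)(value % (1u << bits)) - half;
}
```

Helpers like these let SIMD tests exercise signed intermediate values (e.g. transform coefficients) across their full representable range, including the asymmetric most-negative value.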
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/active_map_refresh_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/active_map_refresh_test.cc
index d893635..a985ed4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/active_map_refresh_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/active_map_refresh_test.cc
@@ -74,7 +74,7 @@
                                   ::libvpx_test::Encoder *encoder) {
     ::libvpx_test::Y4mVideoSource *y4m_video =
         static_cast<libvpx_test::Y4mVideoSource *>(video);
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       encoder->Control(VP8E_SET_CPUUSED, cpu_used_);
       encoder->Control(VP9E_SET_AQ_MODE, kAqModeCyclicRefresh);
     } else if (video->frame() >= 2 && video->img()) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/active_map_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/active_map_test.cc
index 1d24f95..03536c8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/active_map_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/active_map_test.cc
@@ -35,7 +35,7 @@
 
   virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
                                   ::libvpx_test::Encoder *encoder) {
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       encoder->Control(VP8E_SET_CPUUSED, cpu_used_);
     } else if (video->frame() == 3) {
       vpx_active_map_t map = vpx_active_map_t();
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/add_noise_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/add_noise_test.cc
index eae32c3..0d1893c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/add_noise_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/add_noise_test.cc
@@ -8,8 +8,11 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 #include <math.h>
+#include <tuple>
+
 #include "test/clear_system_state.h"
 #include "test/register_state_check.h"
+#include "test/util.h"
 #include "third_party/googletest/src/include/gtest/gtest.h"
 #include "./vpx_dsp_rtcd.h"
 #include "vpx/vpx_integer.h"
@@ -25,7 +28,10 @@
                              int blackclamp, int whiteclamp, int width,
                              int height, int pitch);
 
-class AddNoiseTest : public ::testing::TestWithParam<AddNoiseFunc> {
+typedef std::tuple<double, AddNoiseFunc> AddNoiseTestFPParam;
+
+class AddNoiseTest : public ::testing::Test,
+                     public ::testing::WithParamInterface<AddNoiseTestFPParam> {
  public:
   virtual void TearDown() { libvpx_test::ClearSystemState(); }
   virtual ~AddNoiseTest() {}
@@ -44,14 +50,14 @@
   const int height = 64;
   const int image_size = width * height;
   int8_t noise[kNoiseSize];
-  const int clamp = vpx_setup_noise(4.4, noise, kNoiseSize);
+  const int clamp = vpx_setup_noise(GET_PARAM(0), noise, kNoiseSize);
   uint8_t *const s =
       reinterpret_cast<uint8_t *>(vpx_calloc(image_size, sizeof(*s)));
   ASSERT_TRUE(s != NULL);
   memset(s, 99, image_size * sizeof(*s));
 
   ASM_REGISTER_STATE_CHECK(
-      GetParam()(s, noise, clamp, clamp, width, height, width));
+      GET_PARAM(1)(s, noise, clamp, clamp, width, height, width));
 
   // Check to make sure we don't end up having either the same or no added
   // noise either vertically or horizontally.
@@ -70,7 +76,7 @@
   memset(s, 255, image_size);
 
   ASM_REGISTER_STATE_CHECK(
-      GetParam()(s, noise, clamp, clamp, width, height, width));
+      GET_PARAM(1)(s, noise, clamp, clamp, width, height, width));
 
   // Check to make sure don't roll over.
   for (int i = 0; i < image_size; ++i) {
@@ -81,7 +87,7 @@
   memset(s, 0, image_size);
 
   ASM_REGISTER_STATE_CHECK(
-      GetParam()(s, noise, clamp, clamp, width, height, width));
+      GET_PARAM(1)(s, noise, clamp, clamp, width, height, width));
 
   // Check to make sure don't roll under.
   for (int i = 0; i < image_size; ++i) {
@@ -108,7 +114,7 @@
 
   srand(0);
   ASM_REGISTER_STATE_CHECK(
-      GetParam()(s, noise, clamp, clamp, width, height, width));
+      GET_PARAM(1)(s, noise, clamp, clamp, width, height, width));
   srand(0);
   ASM_REGISTER_STATE_CHECK(
       vpx_plane_add_noise_c(d, noise, clamp, clamp, width, height, width));
@@ -121,16 +127,24 @@
   vpx_free(s);
 }
 
-INSTANTIATE_TEST_CASE_P(C, AddNoiseTest,
-                        ::testing::Values(vpx_plane_add_noise_c));
+using std::make_tuple;
+
+INSTANTIATE_TEST_CASE_P(
+    C, AddNoiseTest,
+    ::testing::Values(make_tuple(3.25, vpx_plane_add_noise_c),
+                      make_tuple(4.4, vpx_plane_add_noise_c)));
 
 #if HAVE_SSE2
-INSTANTIATE_TEST_CASE_P(SSE2, AddNoiseTest,
-                        ::testing::Values(vpx_plane_add_noise_sse2));
+INSTANTIATE_TEST_CASE_P(
+    SSE2, AddNoiseTest,
+    ::testing::Values(make_tuple(3.25, vpx_plane_add_noise_sse2),
+                      make_tuple(4.4, vpx_plane_add_noise_sse2)));
 #endif
 
 #if HAVE_MSA
-INSTANTIATE_TEST_CASE_P(MSA, AddNoiseTest,
-                        ::testing::Values(vpx_plane_add_noise_msa));
+INSTANTIATE_TEST_CASE_P(
+    MSA, AddNoiseTest,
+    ::testing::Values(make_tuple(3.25, vpx_plane_add_noise_msa),
+                      make_tuple(4.4, vpx_plane_add_noise_msa)));
 #endif
 }  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/alt_ref_aq_segment_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/alt_ref_aq_segment_test.cc
index 64a3011..6e03a47 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/alt_ref_aq_segment_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/alt_ref_aq_segment_test.cc
@@ -32,7 +32,7 @@
 
   virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
                                   ::libvpx_test::Encoder *encoder) {
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       encoder->Control(VP8E_SET_CPUUSED, set_cpu_used_);
       encoder->Control(VP9E_SET_ALT_REF_AQ, alt_ref_aq_mode_);
       encoder->Control(VP9E_SET_AQ_MODE, aq_mode_);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/altref_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/altref_test.cc
index f9308c2..0119be4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/altref_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/altref_test.cc
@@ -35,7 +35,7 @@
 
   virtual void PreEncodeFrameHook(libvpx_test::VideoSource *video,
                                   libvpx_test::Encoder *encoder) {
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       encoder->Control(VP8E_SET_ENABLEAUTOALTREF, 1);
       encoder->Control(VP8E_SET_CPUUSED, 3);
     }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/android/README b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/android/README
index 4a1adcf..f67fea5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/android/README
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/android/README
@@ -3,15 +3,16 @@
 ./libvpx/configure --target=armv7-android-gcc --enable-external-build \
   --enable-postproc --disable-install-srcs --enable-multi-res-encoding \
   --enable-temporal-denoising --disable-unit-tests --disable-install-docs \
-  --disable-examples --disable-runtime-cpu-detect --sdk-path=$NDK
+  --disable-examples --disable-runtime-cpu-detect
 
 2) From the parent directory, invoke ndk-build:
 NDK_PROJECT_PATH=. ndk-build APP_BUILD_SCRIPT=./libvpx/test/android/Android.mk \
   APP_ABI=armeabi-v7a APP_PLATFORM=android-18 APP_OPTIM=release \
-  APP_STL=gnustl_static
+  APP_STL=c++_static
 
-Note: Both adb and ndk-build are available prebuilt at:
-  https://chromium.googlesource.com/android_tools
+Note: Both adb and ndk-build are available at:
+  https://developer.android.com/studio#downloads
+  https://developer.android.com/ndk/downloads
 
 3) Run get_files.py to download the test files:
 python get_files.py -i /path/to/test-data.sha1 -o /path/to/put/files \
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/aq_segment_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/aq_segment_test.cc
index 1c2147f..3c4053b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/aq_segment_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/aq_segment_test.cc
@@ -31,7 +31,7 @@
 
   virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
                                   ::libvpx_test::Encoder *encoder) {
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       encoder->Control(VP8E_SET_CPUUSED, set_cpu_used_);
       encoder->Control(VP9E_SET_AQ_MODE, aq_mode_);
       encoder->Control(VP8E_SET_MAX_INTRA_BITRATE_PCT, 100);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/avg_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/avg_test.cc
index ad21198..72e16f6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/avg_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/avg_test.cc
@@ -11,6 +11,7 @@
 #include <limits.h>
 #include <stdio.h>
 #include <string.h>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
@@ -22,40 +23,43 @@
 #include "test/clear_system_state.h"
 #include "test/register_state_check.h"
 #include "test/util.h"
+#include "vpx/vpx_codec.h"
 #include "vpx_mem/vpx_mem.h"
 #include "vpx_ports/vpx_timer.h"
 
 using libvpx_test::ACMRandom;
 
 namespace {
+
+template <typename Pixel>
 class AverageTestBase : public ::testing::Test {
  public:
-  AverageTestBase(int width, int height) : width_(width), height_(height) {}
+  AverageTestBase(int width, int height)
+      : width_(width), height_(height), source_data_(NULL), source_stride_(0),
+        bit_depth_(8) {}
 
-  static void SetUpTestCase() {
-    source_data_ = reinterpret_cast<uint8_t *>(
-        vpx_memalign(kDataAlignment, kDataBlockSize));
-  }
-
-  static void TearDownTestCase() {
+  virtual void TearDown() {
     vpx_free(source_data_);
     source_data_ = NULL;
+    libvpx_test::ClearSystemState();
   }
 
-  virtual void TearDown() { libvpx_test::ClearSystemState(); }
-
  protected:
   // Handle blocks up to 4 blocks 64x64 with stride up to 128
   static const int kDataAlignment = 16;
   static const int kDataBlockSize = 64 * 128;
 
   virtual void SetUp() {
+    source_data_ = reinterpret_cast<Pixel *>(
+        vpx_memalign(kDataAlignment, kDataBlockSize * sizeof(source_data_[0])));
+    ASSERT_TRUE(source_data_ != NULL);
     source_stride_ = (width_ + 31) & ~31;
+    bit_depth_ = 8;
     rnd_.Reset(ACMRandom::DeterministicSeed());
   }
 
   // Sum Pixels
-  static unsigned int ReferenceAverage8x8(const uint8_t *source, int pitch) {
+  static unsigned int ReferenceAverage8x8(const Pixel *source, int pitch) {
     unsigned int average = 0;
     for (int h = 0; h < 8; ++h) {
       for (int w = 0; w < 8; ++w) average += source[h * pitch + w];
@@ -63,7 +67,7 @@
     return ((average + 32) >> 6);
   }
 
-  static unsigned int ReferenceAverage4x4(const uint8_t *source, int pitch) {
+  static unsigned int ReferenceAverage4x4(const Pixel *source, int pitch) {
     unsigned int average = 0;
     for (int h = 0; h < 4; ++h) {
       for (int w = 0; w < 4; ++w) average += source[h * pitch + w];
@@ -71,7 +75,7 @@
     return ((average + 8) >> 4);
   }
 
-  void FillConstant(uint8_t fill_constant) {
+  void FillConstant(Pixel fill_constant) {
     for (int i = 0; i < width_ * height_; ++i) {
       source_data_[i] = fill_constant;
     }
@@ -79,21 +83,22 @@
 
   void FillRandom() {
     for (int i = 0; i < width_ * height_; ++i) {
-      source_data_[i] = rnd_.Rand8();
+      source_data_[i] = rnd_.Rand16() & ((1 << bit_depth_) - 1);
     }
   }
 
   int width_, height_;
-  static uint8_t *source_data_;
+  Pixel *source_data_;
   int source_stride_;
+  int bit_depth_;
 
   ACMRandom rnd_;
 };
 typedef unsigned int (*AverageFunction)(const uint8_t *s, int pitch);
 
-typedef std::tr1::tuple<int, int, int, int, AverageFunction> AvgFunc;
+typedef std::tuple<int, int, int, int, AverageFunction> AvgFunc;
 
-class AverageTest : public AverageTestBase,
+class AverageTest : public AverageTestBase<uint8_t>,
                     public ::testing::WithParamInterface<AvgFunc> {
  public:
   AverageTest() : AverageTestBase(GET_PARAM(0), GET_PARAM(1)) {}
@@ -119,12 +124,40 @@
   }
 };
 
+#if CONFIG_VP9_HIGHBITDEPTH
+class AverageTestHBD : public AverageTestBase<uint16_t>,
+                       public ::testing::WithParamInterface<AvgFunc> {
+ public:
+  AverageTestHBD() : AverageTestBase(GET_PARAM(0), GET_PARAM(1)) {}
+
+ protected:
+  void CheckAverages() {
+    const int block_size = GET_PARAM(3);
+    unsigned int expected = 0;
+    if (block_size == 8) {
+      expected =
+          ReferenceAverage8x8(source_data_ + GET_PARAM(2), source_stride_);
+    } else if (block_size == 4) {
+      expected =
+          ReferenceAverage4x4(source_data_ + GET_PARAM(2), source_stride_);
+    }
+
+    ASM_REGISTER_STATE_CHECK(GET_PARAM(4)(
+        CONVERT_TO_BYTEPTR(source_data_ + GET_PARAM(2)), source_stride_));
+    unsigned int actual = GET_PARAM(4)(
+        CONVERT_TO_BYTEPTR(source_data_ + GET_PARAM(2)), source_stride_);
+
+    EXPECT_EQ(expected, actual);
+  }
+};
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
 typedef void (*IntProRowFunc)(int16_t hbuf[16], uint8_t const *ref,
                               const int ref_stride, const int height);
 
-typedef std::tr1::tuple<int, IntProRowFunc, IntProRowFunc> IntProRowParam;
+typedef std::tuple<int, IntProRowFunc, IntProRowFunc> IntProRowParam;
 
-class IntProRowTest : public AverageTestBase,
+class IntProRowTest : public AverageTestBase<uint8_t>,
                       public ::testing::WithParamInterface<IntProRowParam> {
  public:
   IntProRowTest()
@@ -135,6 +168,10 @@
 
  protected:
   virtual void SetUp() {
+    source_data_ = reinterpret_cast<uint8_t *>(
+        vpx_memalign(kDataAlignment, kDataBlockSize * sizeof(source_data_[0])));
+    ASSERT_TRUE(source_data_ != NULL);
+
     hbuf_asm_ = reinterpret_cast<int16_t *>(
         vpx_memalign(kDataAlignment, sizeof(*hbuf_asm_) * 16));
     hbuf_c_ = reinterpret_cast<int16_t *>(
@@ -142,6 +179,8 @@
   }
 
   virtual void TearDown() {
+    vpx_free(source_data_);
+    source_data_ = NULL;
     vpx_free(hbuf_c_);
     hbuf_c_ = NULL;
     vpx_free(hbuf_asm_);
@@ -164,9 +203,9 @@
 
 typedef int16_t (*IntProColFunc)(uint8_t const *ref, const int width);
 
-typedef std::tr1::tuple<int, IntProColFunc, IntProColFunc> IntProColParam;
+typedef std::tuple<int, IntProColFunc, IntProColFunc> IntProColParam;
 
-class IntProColTest : public AverageTestBase,
+class IntProColTest : public AverageTestBase<uint8_t>,
                       public ::testing::WithParamInterface<IntProColParam> {
  public:
   IntProColTest() : AverageTestBase(GET_PARAM(0), 1), sum_asm_(0), sum_c_(0) {
@@ -189,7 +228,7 @@
 };
 
 typedef int (*SatdFunc)(const tran_low_t *coeffs, int length);
-typedef std::tr1::tuple<int, SatdFunc> SatdTestParam;
+typedef std::tuple<int, SatdFunc> SatdTestParam;
 
 class SatdTest : public ::testing::Test,
                  public ::testing::WithParamInterface<SatdTestParam> {
@@ -212,12 +251,7 @@
     for (int i = 0; i < satd_size_; ++i) src_[i] = val;
   }
 
-  void FillRandom() {
-    for (int i = 0; i < satd_size_; ++i) {
-      const int16_t tmp = rnd_.Rand16();
-      src_[i] = (tran_low_t)tmp;
-    }
-  }
+  virtual void FillRandom() = 0;
 
   void Check(const int expected) {
     int total;
@@ -225,17 +259,29 @@
     EXPECT_EQ(expected, total);
   }
 
+  tran_low_t *GetCoeff() const { return src_; }
+
   int satd_size_;
+  ACMRandom rnd_;
+  tran_low_t *src_;
 
  private:
-  tran_low_t *src_;
   SatdFunc satd_func_;
-  ACMRandom rnd_;
+};
+
+class SatdLowbdTest : public SatdTest {
+ protected:
+  virtual void FillRandom() {
+    for (int i = 0; i < satd_size_; ++i) {
+      const int16_t tmp = rnd_.Rand16Signed();
+      src_[i] = (tran_low_t)tmp;
+    }
+  }
 };
 
 typedef int64_t (*BlockErrorFunc)(const tran_low_t *coeff,
                                   const tran_low_t *dqcoeff, int block_size);
-typedef std::tr1::tuple<int, BlockErrorFunc> BlockErrorTestFPParam;
+typedef std::tuple<int, BlockErrorFunc> BlockErrorTestFPParam;
 
 class BlockErrorTestFP
     : public ::testing::Test,
@@ -279,6 +325,10 @@
     EXPECT_EQ(expected, total);
   }
 
+  tran_low_t *GetCoeff() const { return coeff_; }
+
+  tran_low_t *GetDQCoeff() const { return dqcoeff_; }
+
   int txfm_size_;
 
  private:
@@ -288,8 +338,6 @@
   ACMRandom rnd_;
 };
 
-uint8_t *AverageTestBase::source_data_ = NULL;
-
 TEST_P(AverageTest, MinValue) {
   FillConstant(0);
   CheckAverages();
@@ -308,6 +356,27 @@
     CheckAverages();
   }
 }
+#if CONFIG_VP9_HIGHBITDEPTH
+TEST_P(AverageTestHBD, MinValue) {
+  FillConstant(0);
+  CheckAverages();
+}
+
+TEST_P(AverageTestHBD, MaxValue) {
+  FillConstant((1 << VPX_BITS_12) - 1);
+  CheckAverages();
+}
+
+TEST_P(AverageTestHBD, Random) {
+  bit_depth_ = VPX_BITS_12;
+  // The reference frame, but not the source frame, may be unaligned for
+  // certain types of searches.
+  for (int i = 0; i < 1000; i++) {
+    FillRandom();
+    CheckAverages();
+  }
+}
+#endif  // CONFIG_VP9_HIGHBITDEPTH
 
 TEST_P(IntProRowTest, MinValue) {
   FillConstant(0);
@@ -339,27 +408,27 @@
   RunComparison();
 }
 
-TEST_P(SatdTest, MinValue) {
+TEST_P(SatdLowbdTest, MinValue) {
   const int kMin = -32640;
   const int expected = -kMin * satd_size_;
   FillConstant(kMin);
   Check(expected);
 }
 
-TEST_P(SatdTest, MaxValue) {
+TEST_P(SatdLowbdTest, MaxValue) {
   const int kMax = 32640;
   const int expected = kMax * satd_size_;
   FillConstant(kMax);
   Check(expected);
 }
 
-TEST_P(SatdTest, Random) {
+TEST_P(SatdLowbdTest, Random) {
   int expected;
   switch (satd_size_) {
-    case 16: expected = 205298; break;
-    case 64: expected = 1113950; break;
-    case 256: expected = 4268415; break;
-    case 1024: expected = 16954082; break;
+    case 16: expected = 261036; break;
+    case 64: expected = 991732; break;
+    case 256: expected = 4136358; break;
+    case 1024: expected = 16677592; break;
     default:
       FAIL() << "Invalid satd size (" << satd_size_
              << ") valid: 16/64/256/1024";
@@ -368,11 +437,12 @@
   Check(expected);
 }
 
-TEST_P(SatdTest, DISABLED_Speed) {
+TEST_P(SatdLowbdTest, DISABLED_Speed) {
   const int kCountSpeedTestBlock = 20000;
   vpx_usec_timer timer;
-  DECLARE_ALIGNED(16, tran_low_t, coeff[1024]);
   const int blocksize = GET_PARAM(0);
+  FillRandom();
+  tran_low_t *coeff = GetCoeff();
 
   vpx_usec_timer_start(&timer);
   for (int i = 0; i < kCountSpeedTestBlock; ++i) {
@@ -383,6 +453,62 @@
   printf("blocksize: %4d time: %4d us\n", blocksize, elapsed_time);
 }
 
+#if CONFIG_VP9_HIGHBITDEPTH
+class SatdHighbdTest : public SatdTest {
+ protected:
+  virtual void FillRandom() {
+    for (int i = 0; i < satd_size_; ++i) {
+      src_[i] = rnd_.Rand20Signed();
+    }
+  }
+};
+
+TEST_P(SatdHighbdTest, MinValue) {
+  const int kMin = -524280;
+  const int expected = -kMin * satd_size_;
+  FillConstant(kMin);
+  Check(expected);
+}
+
+TEST_P(SatdHighbdTest, MaxValue) {
+  const int kMax = 524280;
+  const int expected = kMax * satd_size_;
+  FillConstant(kMax);
+  Check(expected);
+}
+
+TEST_P(SatdHighbdTest, Random) {
+  int expected;
+  switch (satd_size_) {
+    case 16: expected = 5249712; break;
+    case 64: expected = 18362120; break;
+    case 256: expected = 66100520; break;
+    case 1024: expected = 266094734; break;
+    default:
+      FAIL() << "Invalid satd size (" << satd_size_
+             << ") valid: 16/64/256/1024";
+  }
+  FillRandom();
+  Check(expected);
+}
+
+TEST_P(SatdHighbdTest, DISABLED_Speed) {
+  const int kCountSpeedTestBlock = 20000;
+  vpx_usec_timer timer;
+  const int blocksize = GET_PARAM(0);
+  FillRandom();
+  tran_low_t *coeff = GetCoeff();
+
+  vpx_usec_timer_start(&timer);
+  for (int i = 0; i < kCountSpeedTestBlock; ++i) {
+    GET_PARAM(1)(coeff, blocksize);
+  }
+  vpx_usec_timer_mark(&timer);
+  const int elapsed_time = static_cast<int>(vpx_usec_timer_elapsed(&timer));
+  printf("blocksize: %4d time: %4d us\n", blocksize, elapsed_time);
+}
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
 TEST_P(BlockErrorTestFP, MinValue) {
   const int64_t kMin = -32640;
   const int64_t expected = kMin * kMin * txfm_size_;
@@ -415,9 +541,10 @@
 TEST_P(BlockErrorTestFP, DISABLED_Speed) {
   const int kCountSpeedTestBlock = 20000;
   vpx_usec_timer timer;
-  DECLARE_ALIGNED(16, tran_low_t, coeff[1024]);
-  DECLARE_ALIGNED(16, tran_low_t, dqcoeff[1024]);
   const int blocksize = GET_PARAM(0);
+  FillRandom();
+  tran_low_t *coeff = GetCoeff();
+  tran_low_t *dqcoeff = GetDQCoeff();
 
   vpx_usec_timer_start(&timer);
   for (int i = 0; i < kCountSpeedTestBlock; ++i) {
@@ -428,14 +555,34 @@
   printf("blocksize: %4d time: %4d us\n", blocksize, elapsed_time);
 }
 
-using std::tr1::make_tuple;
+using std::make_tuple;
 
 INSTANTIATE_TEST_CASE_P(
     C, AverageTest,
     ::testing::Values(make_tuple(16, 16, 1, 8, &vpx_avg_8x8_c),
                       make_tuple(16, 16, 1, 4, &vpx_avg_4x4_c)));
 
-INSTANTIATE_TEST_CASE_P(C, SatdTest,
+#if CONFIG_VP9_HIGHBITDEPTH
+INSTANTIATE_TEST_CASE_P(
+    C, AverageTestHBD,
+    ::testing::Values(make_tuple(16, 16, 1, 8, &vpx_highbd_avg_8x8_c),
+                      make_tuple(16, 16, 1, 4, &vpx_highbd_avg_4x4_c)));
+
+#if HAVE_SSE2
+INSTANTIATE_TEST_CASE_P(
+    SSE2, AverageTestHBD,
+    ::testing::Values(make_tuple(16, 16, 1, 8, &vpx_highbd_avg_8x8_sse2),
+                      make_tuple(16, 16, 1, 4, &vpx_highbd_avg_4x4_sse2)));
+#endif  // HAVE_SSE2
+
+INSTANTIATE_TEST_CASE_P(C, SatdHighbdTest,
+                        ::testing::Values(make_tuple(16, &vpx_satd_c),
+                                          make_tuple(64, &vpx_satd_c),
+                                          make_tuple(256, &vpx_satd_c),
+                                          make_tuple(1024, &vpx_satd_c)));
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
+INSTANTIATE_TEST_CASE_P(C, SatdLowbdTest,
                         ::testing::Values(make_tuple(16, &vpx_satd_c),
                                           make_tuple(64, &vpx_satd_c),
                                           make_tuple(256, &vpx_satd_c),
@@ -472,7 +619,7 @@
                       make_tuple(64, &vpx_int_pro_col_sse2,
                                  &vpx_int_pro_col_c)));
 
-INSTANTIATE_TEST_CASE_P(SSE2, SatdTest,
+INSTANTIATE_TEST_CASE_P(SSE2, SatdLowbdTest,
                         ::testing::Values(make_tuple(16, &vpx_satd_sse2),
                                           make_tuple(64, &vpx_satd_sse2),
                                           make_tuple(256, &vpx_satd_sse2),
@@ -487,12 +634,21 @@
 #endif  // HAVE_SSE2
 
 #if HAVE_AVX2
-INSTANTIATE_TEST_CASE_P(AVX2, SatdTest,
+INSTANTIATE_TEST_CASE_P(AVX2, SatdLowbdTest,
                         ::testing::Values(make_tuple(16, &vpx_satd_avx2),
                                           make_tuple(64, &vpx_satd_avx2),
                                           make_tuple(256, &vpx_satd_avx2),
                                           make_tuple(1024, &vpx_satd_avx2)));
 
+#if CONFIG_VP9_HIGHBITDEPTH
+INSTANTIATE_TEST_CASE_P(
+    AVX2, SatdHighbdTest,
+    ::testing::Values(make_tuple(16, &vpx_highbd_satd_avx2),
+                      make_tuple(64, &vpx_highbd_satd_avx2),
+                      make_tuple(256, &vpx_highbd_satd_avx2),
+                      make_tuple(1024, &vpx_highbd_satd_avx2)));
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
 INSTANTIATE_TEST_CASE_P(
     AVX2, BlockErrorTestFP,
     ::testing::Values(make_tuple(16, &vp9_block_error_fp_avx2),
@@ -525,7 +681,7 @@
                       make_tuple(64, &vpx_int_pro_col_neon,
                                  &vpx_int_pro_col_c)));
 
-INSTANTIATE_TEST_CASE_P(NEON, SatdTest,
+INSTANTIATE_TEST_CASE_P(NEON, SatdLowbdTest,
                         ::testing::Values(make_tuple(16, &vpx_satd_neon),
                                           make_tuple(64, &vpx_satd_neon),
                                           make_tuple(256, &vpx_satd_neon),
@@ -570,7 +726,7 @@
 // TODO(jingning): Remove the highbitdepth flag once the SIMD functions are
 // in place.
 #if !CONFIG_VP9_HIGHBITDEPTH
-INSTANTIATE_TEST_CASE_P(MSA, SatdTest,
+INSTANTIATE_TEST_CASE_P(MSA, SatdLowbdTest,
                         ::testing::Values(make_tuple(16, &vpx_satd_msa),
                                           make_tuple(64, &vpx_satd_msa),
                                           make_tuple(256, &vpx_satd_msa),
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/bench.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/bench.cc
new file mode 100644
index 0000000..4b883d8
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/bench.cc
@@ -0,0 +1,38 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <stdio.h>
+#include <algorithm>
+
+#include "test/bench.h"
+#include "vpx_ports/vpx_timer.h"
+
+void AbstractBench::RunNTimes(int n) {
+  for (int r = 0; r < VPX_BENCH_ROBUST_ITER; r++) {
+    vpx_usec_timer timer;
+    vpx_usec_timer_start(&timer);
+    for (int j = 0; j < n; ++j) {
+      Run();
+    }
+    vpx_usec_timer_mark(&timer);
+    times_[r] = static_cast<int>(vpx_usec_timer_elapsed(&timer));
+  }
+}
+
+void AbstractBench::PrintMedian(const char *title) {
+  std::sort(times_, times_ + VPX_BENCH_ROBUST_ITER);
+  const int med = times_[VPX_BENCH_ROBUST_ITER >> 1];
+  int sad = 0;
+  for (int t = 0; t < VPX_BENCH_ROBUST_ITER; t++) {
+    sad += abs(times_[t] - med);
+  }
+  printf("[%10s] %s %.1f ms ( ±%.1f ms )\n", "BENCH ", title, med / 1000.0,
+         sad / (VPX_BENCH_ROBUST_ITER * 1000.0));
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/bench.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/bench.h
new file mode 100644
index 0000000..57ca911
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/bench.h
@@ -0,0 +1,30 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_TEST_BENCH_H_
+#define VPX_TEST_BENCH_H_
+
+// Number of iterations used to compute median run time.
+#define VPX_BENCH_ROBUST_ITER 15
+
+class AbstractBench {
+ public:
+  void RunNTimes(int n);
+  void PrintMedian(const char *title);
+
+ protected:
+  // Implement this method and put the code to benchmark in it.
+  virtual void Run() = 0;
+
+ private:
+  int times_[VPX_BENCH_ROBUST_ITER];
+};
+
+#endif  // VPX_TEST_BENCH_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/blockiness_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/blockiness_test.cc
index 08190c9..ced6e66 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/blockiness_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/blockiness_test.cc
@@ -11,6 +11,7 @@
 #include <limits.h>
 #include <stdio.h>
 #include <string.h>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
@@ -25,10 +26,7 @@
 #include "test/util.h"
 
 #include "vpx_mem/vpx_mem.h"
-
-extern "C" double vp9_get_blockiness(const unsigned char *img1, int img1_pitch,
-                                     const unsigned char *img2, int img2_pitch,
-                                     int width, int height);
+#include "vp9/encoder/vp9_blockiness.h"
 
 using libvpx_test::ACMRandom;
 
@@ -141,7 +139,7 @@
 };
 
 #if CONFIG_VP9_ENCODER
-typedef std::tr1::tuple<int, int> BlockinessParam;
+typedef std::tuple<int, int> BlockinessParam;
 class BlockinessVP9Test
     : public BlockinessTestBase,
       public ::testing::WithParamInterface<BlockinessParam> {
@@ -208,15 +206,15 @@
 }
 #endif  // CONFIG_VP9_ENCODER
 
-using std::tr1::make_tuple;
+using std::make_tuple;
 
 //------------------------------------------------------------------------------
 // C functions
 
 #if CONFIG_VP9_ENCODER
-const BlockinessParam c_vp9_tests[] = {
-  make_tuple(320, 240), make_tuple(318, 242), make_tuple(318, 238)
-};
+const BlockinessParam c_vp9_tests[] = { make_tuple(320, 240),
+                                        make_tuple(318, 242),
+                                        make_tuple(318, 238) };
 INSTANTIATE_TEST_CASE_P(C, BlockinessVP9Test, ::testing::ValuesIn(c_vp9_tests));
 #endif
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/borders_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/borders_test.cc
index e66ff02..b91a15b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/borders_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/borders_test.cc
@@ -31,7 +31,7 @@
 
   virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
                                   ::libvpx_test::Encoder *encoder) {
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       encoder->Control(VP8E_SET_CPUUSED, 1);
       encoder->Control(VP8E_SET_ENABLEAUTOALTREF, 1);
       encoder->Control(VP8E_SET_ARNR_MAXFRAMES, 7);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/buffer.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/buffer.h
index 2175dad..b003d2f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/buffer.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/buffer.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef TEST_BUFFER_H_
-#define TEST_BUFFER_H_
+#ifndef VPX_TEST_BUFFER_H_
+#define VPX_TEST_BUFFER_H_
 
 #include <stdio.h>
 
@@ -379,4 +379,4 @@
   return true;
 }
 }  // namespace libvpx_test
-#endif  // TEST_BUFFER_H_
+#endif  // VPX_TEST_BUFFER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/byte_alignment_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/byte_alignment_test.cc
index 5a058b2..0ef6c4c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/byte_alignment_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/byte_alignment_test.cc
@@ -171,8 +171,9 @@
 TEST_P(ByteAlignmentTest, TestAlignment) {
   const ByteAlignmentTestParam t = GetParam();
   SetByteAlignment(t.byte_alignment, t.expected_value);
-  if (t.decode_remaining)
+  if (t.decode_remaining) {
     ASSERT_EQ(VPX_CODEC_OK, DecodeRemainingFrames(t.byte_alignment));
+  }
 }
 
 INSTANTIATE_TEST_CASE_P(Alignments, ByteAlignmentTest,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/clear_system_state.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/clear_system_state.h
index 044a5c7..ba3c0b3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/clear_system_state.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/clear_system_state.h
@@ -7,23 +7,17 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef TEST_CLEAR_SYSTEM_STATE_H_
-#define TEST_CLEAR_SYSTEM_STATE_H_
+#ifndef VPX_TEST_CLEAR_SYSTEM_STATE_H_
+#define VPX_TEST_CLEAR_SYSTEM_STATE_H_
 
 #include "./vpx_config.h"
-#if ARCH_X86 || ARCH_X86_64
-#include "vpx_ports/x86.h"
-#endif
+#include "vpx_ports/system_state.h"
 
 namespace libvpx_test {
 
 // Reset system to a known state. This function should be used for all non-API
 // test cases.
-inline void ClearSystemState() {
-#if ARCH_X86 || ARCH_X86_64
-  vpx_reset_mmx_state();
-#endif
-}
+inline void ClearSystemState() { vpx_clear_system_state(); }
 
 }  // namespace libvpx_test
-#endif  // TEST_CLEAR_SYSTEM_STATE_H_
+#endif  // VPX_TEST_CLEAR_SYSTEM_STATE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/codec_factory.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/codec_factory.h
index d5882ed..17c9512 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/codec_factory.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/codec_factory.h
@@ -7,8 +7,10 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef TEST_CODEC_FACTORY_H_
-#define TEST_CODEC_FACTORY_H_
+#ifndef VPX_TEST_CODEC_FACTORY_H_
+#define VPX_TEST_CODEC_FACTORY_H_
+
+#include <tuple>
 
 #include "./vpx_config.h"
 #include "vpx/vpx_decoder.h"
@@ -53,23 +55,22 @@
 template <class T1>
 class CodecTestWithParam
     : public ::testing::TestWithParam<
-          std::tr1::tuple<const libvpx_test::CodecFactory *, T1> > {};
+          std::tuple<const libvpx_test::CodecFactory *, T1> > {};
 
 template <class T1, class T2>
 class CodecTestWith2Params
     : public ::testing::TestWithParam<
-          std::tr1::tuple<const libvpx_test::CodecFactory *, T1, T2> > {};
+          std::tuple<const libvpx_test::CodecFactory *, T1, T2> > {};
 
 template <class T1, class T2, class T3>
 class CodecTestWith3Params
     : public ::testing::TestWithParam<
-          std::tr1::tuple<const libvpx_test::CodecFactory *, T1, T2, T3> > {};
+          std::tuple<const libvpx_test::CodecFactory *, T1, T2, T3> > {};
 
 template <class T1, class T2, class T3, class T4>
 class CodecTestWith4Params
     : public ::testing::TestWithParam<
-          std::tr1::tuple<const libvpx_test::CodecFactory *, T1, T2, T3, T4> > {
-};
+          std::tuple<const libvpx_test::CodecFactory *, T1, T2, T3, T4> > {};
 
 /*
  * VP8 Codec Definitions
@@ -264,4 +265,4 @@
 #endif  // CONFIG_VP9
 
 }  // namespace libvpx_test
-#endif  // TEST_CODEC_FACTORY_H_
+#endif  // VPX_TEST_CODEC_FACTORY_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/comp_avg_pred_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/comp_avg_pred_test.cc
index 110e065..56e701e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/comp_avg_pred_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/comp_avg_pred_test.cc
@@ -29,6 +29,10 @@
 
 void reference_pred(const Buffer<uint8_t> &pred, const Buffer<uint8_t> &ref,
                     int width, int height, Buffer<uint8_t> *avg) {
+  ASSERT_TRUE(avg->TopLeftPixel() != NULL);
+  ASSERT_TRUE(pred.TopLeftPixel() != NULL);
+  ASSERT_TRUE(ref.TopLeftPixel() != NULL);
+
   for (int y = 0; y < height; ++y) {
     for (int x = 0; x < width; ++x) {
       avg->TopLeftPixel()[y * avg->stride() + x] =
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/consistency_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/consistency_test.cc
index c21751f..875b06f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/consistency_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/consistency_test.cc
@@ -11,6 +11,7 @@
 #include <limits.h>
 #include <stdio.h>
 #include <string.h>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
@@ -127,7 +128,7 @@
 };
 
 #if CONFIG_VP9_ENCODER
-typedef std::tr1::tuple<int, int> ConsistencyParam;
+typedef std::tuple<int, int> ConsistencyParam;
 class ConsistencyVP9Test
     : public ConsistencyTestBase,
       public ::testing::WithParamInterface<ConsistencyParam> {
@@ -198,15 +199,15 @@
 }
 #endif  // CONFIG_VP9_ENCODER
 
-using std::tr1::make_tuple;
+using std::make_tuple;
 
 //------------------------------------------------------------------------------
 // C functions
 
 #if CONFIG_VP9_ENCODER
-const ConsistencyParam c_vp9_tests[] = {
-  make_tuple(320, 240), make_tuple(318, 242), make_tuple(318, 238)
-};
+const ConsistencyParam c_vp9_tests[] = { make_tuple(320, 240),
+                                         make_tuple(318, 242),
+                                         make_tuple(318, 238) };
 INSTANTIATE_TEST_CASE_P(C, ConsistencyVP9Test,
                         ::testing::ValuesIn(c_vp9_tests));
 #endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/convolve_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/convolve_test.cc
index 70f0b11..d8d053d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/convolve_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/convolve_test.cc
@@ -9,6 +9,7 @@
  */
 
 #include <string.h>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
@@ -77,7 +78,7 @@
   int use_highbd_;  // 0 if high bitdepth not used, else the actual bit depth.
 };
 
-typedef std::tr1::tuple<int, int, const ConvolveFunctions *> ConvolveParam;
+typedef std::tuple<int, int, const ConvolveFunctions *> ConvolveParam;
 
 #define ALL_SIZES(convolve_fn)                                            \
   make_tuple(4, 4, &convolve_fn), make_tuple(8, 4, &convolve_fn),         \
@@ -114,6 +115,7 @@
   // and filter_max_width          = 16
   //
   uint8_t intermediate_buffer[71 * kMaxDimension];
+  vp9_zero(intermediate_buffer);
   const int intermediate_next_stride =
       1 - static_cast<int>(intermediate_height * output_width);
 
@@ -213,6 +215,8 @@
   const int intermediate_next_stride =
       1 - static_cast<int>(intermediate_height * output_width);
 
+  vp9_zero(intermediate_buffer);
+
   // Horizontal pass (src -> transposed intermediate).
   {
     uint16_t *output_ptr = intermediate_buffer;
@@ -412,8 +416,14 @@
     for (int i = 0; i < kOutputBufferSize; ++i) {
       if (IsIndexInBorder(i)) {
         output_[i] = 255;
+#if CONFIG_VP9_HIGHBITDEPTH
+        output16_[i] = mask_;
+#endif
       } else {
         output_[i] = 0;
+#if CONFIG_VP9_HIGHBITDEPTH
+        output16_[i] = 0;
+#endif
       }
     }
 
@@ -450,7 +460,9 @@
 
   void CheckGuardBlocks() {
     for (int i = 0; i < kOutputBufferSize; ++i) {
-      if (IsIndexInBorder(i)) EXPECT_EQ(255, output_[i]);
+      if (IsIndexInBorder(i)) {
+        EXPECT_EQ(255, output_[i]);
+      }
     }
   }
 
@@ -672,6 +684,74 @@
          UUT_->use_highbd_ ? UUT_->use_highbd_ : 8, elapsed_time);
 }
 
+TEST_P(ConvolveTest, DISABLED_4Tap_Speed) {
+  const uint8_t *const in = input();
+  uint8_t *const out = output();
+  const InterpKernel *const fourtap = vp9_filter_kernels[FOURTAP];
+  const int kNumTests = 5000000;
+  const int width = Width();
+  const int height = Height();
+  vpx_usec_timer timer;
+
+  SetConstantInput(127);
+
+  vpx_usec_timer_start(&timer);
+  for (int n = 0; n < kNumTests; ++n) {
+    UUT_->hv8_[0](in, kInputStride, out, kOutputStride, fourtap, 8, 16, 8, 16,
+                  width, height);
+  }
+  vpx_usec_timer_mark(&timer);
+
+  const int elapsed_time = static_cast<int>(vpx_usec_timer_elapsed(&timer));
+  printf("convolve4_%dx%d_%d: %d us\n", width, height,
+         UUT_->use_highbd_ ? UUT_->use_highbd_ : 8, elapsed_time);
+}
+
+TEST_P(ConvolveTest, DISABLED_4Tap_Horiz_Speed) {
+  const uint8_t *const in = input();
+  uint8_t *const out = output();
+  const InterpKernel *const fourtap = vp9_filter_kernels[FOURTAP];
+  const int kNumTests = 5000000;
+  const int width = Width();
+  const int height = Height();
+  vpx_usec_timer timer;
+
+  SetConstantInput(127);
+
+  vpx_usec_timer_start(&timer);
+  for (int n = 0; n < kNumTests; ++n) {
+    UUT_->h8_[0](in, kInputStride, out, kOutputStride, fourtap, 8, 16, 8, 16,
+                 width, height);
+  }
+  vpx_usec_timer_mark(&timer);
+
+  const int elapsed_time = static_cast<int>(vpx_usec_timer_elapsed(&timer));
+  printf("convolve4_horiz_%dx%d_%d: %d us\n", width, height,
+         UUT_->use_highbd_ ? UUT_->use_highbd_ : 8, elapsed_time);
+}
+
+TEST_P(ConvolveTest, DISABLED_4Tap_Vert_Speed) {
+  const uint8_t *const in = input();
+  uint8_t *const out = output();
+  const InterpKernel *const fourtap = vp9_filter_kernels[FOURTAP];
+  const int kNumTests = 5000000;
+  const int width = Width();
+  const int height = Height();
+  vpx_usec_timer timer;
+
+  SetConstantInput(127);
+
+  vpx_usec_timer_start(&timer);
+  for (int n = 0; n < kNumTests; ++n) {
+    UUT_->v8_[0](in, kInputStride, out, kOutputStride, fourtap, 8, 16, 8, 16,
+                 width, height);
+  }
+  vpx_usec_timer_mark(&timer);
+
+  const int elapsed_time = static_cast<int>(vpx_usec_timer_elapsed(&timer));
+  printf("convolve4_vert_%dx%d_%d: %d us\n", width, height,
+         UUT_->use_highbd_ ? UUT_->use_highbd_ : 8, elapsed_time);
+}
 TEST_P(ConvolveTest, DISABLED_8Tap_Avg_Speed) {
   const uint8_t *const in = input();
   uint8_t *const out = output();
@@ -787,7 +867,7 @@
   }
 }
 
-const int kNumFilterBanks = 4;
+const int kNumFilterBanks = 5;
 const int kNumFilters = 16;
 
 TEST(ConvolveTest, FiltersWontSaturateWhenAddedPairwise) {
@@ -1040,7 +1120,7 @@
 }
 #endif
 
-using std::tr1::make_tuple;
+using std::make_tuple;
 
 #if CONFIG_VP9_HIGHBITDEPTH
 #define WRAP(func, bd)                                                       \
@@ -1053,7 +1133,7 @@
                       x0_q4, x_step_q4, y0_q4, y_step_q4, w, h, bd);         \
   }
 
-#if HAVE_SSE2 && ARCH_X86_64
+#if HAVE_SSE2 && VPX_ARCH_X86_64
 WRAP(convolve_copy_sse2, 8)
 WRAP(convolve_avg_sse2, 8)
 WRAP(convolve_copy_sse2, 10)
@@ -1078,7 +1158,7 @@
 WRAP(convolve8_avg_vert_sse2, 12)
 WRAP(convolve8_sse2, 12)
 WRAP(convolve8_avg_sse2, 12)
-#endif  // HAVE_SSE2 && ARCH_X86_64
+#endif  // HAVE_SSE2 && VPX_ARCH_X86_64
 
 #if HAVE_AVX2
 WRAP(convolve_copy_avx2, 8)
@@ -1183,9 +1263,9 @@
     wrap_convolve8_horiz_c_12, wrap_convolve8_avg_horiz_c_12,
     wrap_convolve8_vert_c_12, wrap_convolve8_avg_vert_c_12, wrap_convolve8_c_12,
     wrap_convolve8_avg_c_12, 12);
-const ConvolveParam kArrayConvolve_c[] = {
-  ALL_SIZES(convolve8_c), ALL_SIZES(convolve10_c), ALL_SIZES(convolve12_c)
-};
+const ConvolveParam kArrayConvolve_c[] = { ALL_SIZES(convolve8_c),
+                                           ALL_SIZES(convolve10_c),
+                                           ALL_SIZES(convolve12_c) };
 
 #else
 const ConvolveFunctions convolve8_c(
@@ -1198,7 +1278,7 @@
 #endif
 INSTANTIATE_TEST_CASE_P(C, ConvolveTest, ::testing::ValuesIn(kArrayConvolve_c));
 
-#if HAVE_SSE2 && ARCH_X86_64
+#if HAVE_SSE2 && VPX_ARCH_X86_64
 #if CONFIG_VP9_HIGHBITDEPTH
 const ConvolveFunctions convolve8_sse2(
     wrap_convolve_copy_sse2_8, wrap_convolve_avg_sse2_8,
@@ -1377,4 +1457,16 @@
 INSTANTIATE_TEST_CASE_P(VSX, ConvolveTest,
                         ::testing::ValuesIn(kArrayConvolve_vsx));
 #endif  // HAVE_VSX
+
+#if HAVE_MMI
+const ConvolveFunctions convolve8_mmi(
+    vpx_convolve_copy_c, vpx_convolve_avg_mmi, vpx_convolve8_horiz_mmi,
+    vpx_convolve8_avg_horiz_mmi, vpx_convolve8_vert_mmi,
+    vpx_convolve8_avg_vert_mmi, vpx_convolve8_mmi, vpx_convolve8_avg_mmi,
+    vpx_scaled_horiz_c, vpx_scaled_avg_horiz_c, vpx_scaled_vert_c,
+    vpx_scaled_avg_vert_c, vpx_scaled_2d_c, vpx_scaled_avg_2d_c, 0);
+const ConvolveParam kArrayConvolve_mmi[] = { ALL_SIZES(convolve8_mmi) };
+INSTANTIATE_TEST_CASE_P(MMI, ConvolveTest,
+                        ::testing::ValuesIn(kArrayConvolve_mmi));
+#endif  // HAVE_MMI
 }  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/cpu_speed_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/cpu_speed_test.cc
index 404b5b4..2fb5c10 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/cpu_speed_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/cpu_speed_test.cc
@@ -44,7 +44,7 @@
 
   virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
                                   ::libvpx_test::Encoder *encoder) {
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       encoder->Control(VP8E_SET_CPUUSED, set_cpu_used_);
       encoder->Control(VP9E_SET_TUNE_CONTENT, tune_content_);
       if (encoding_mode_ != ::libvpx_test::kRealTime) {
@@ -152,5 +152,5 @@
                           ::testing::Values(::libvpx_test::kTwoPassGood,
                                             ::libvpx_test::kOnePassGood,
                                             ::libvpx_test::kRealTime),
-                          ::testing::Range(0, 9));
+                          ::testing::Range(0, 10));
 }  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/cq_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/cq_test.cc
index 20e1f0f..474b9d0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/cq_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/cq_test.cc
@@ -65,7 +65,7 @@
 
   virtual void PreEncodeFrameHook(libvpx_test::VideoSource *video,
                                   libvpx_test::Encoder *encoder) {
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       if (cfg_.rc_end_usage == VPX_CQ) {
         encoder->Control(VP8E_SET_CQ_LEVEL, cq_level_);
       }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/cx_set_ref.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/cx_set_ref.sh
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/datarate_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/datarate_test.cc
deleted file mode 100644
index 70be44e..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/datarate_test.cc
+++ /dev/null
@@ -1,2253 +0,0 @@
-/*
- *  Copyright (c) 2012 The WebM project authors. All Rights Reserved.
- *
- *  Use of this source code is governed by a BSD-style license
- *  that can be found in the LICENSE file in the root of the source
- *  tree. An additional intellectual property rights grant can be found
- *  in the file PATENTS.  All contributing project authors may
- *  be found in the AUTHORS file in the root of the source tree.
- */
-#include "./vpx_config.h"
-#include "third_party/googletest/src/include/gtest/gtest.h"
-#include "test/codec_factory.h"
-#include "test/encode_test_driver.h"
-#include "test/i420_video_source.h"
-#include "test/util.h"
-#include "test/y4m_video_source.h"
-#include "vpx/vpx_codec.h"
-
-namespace {
-
-class DatarateTestLarge
-    : public ::libvpx_test::EncoderTest,
-      public ::libvpx_test::CodecTestWith2Params<libvpx_test::TestMode, int> {
- public:
-  DatarateTestLarge() : EncoderTest(GET_PARAM(0)) {}
-
-  virtual ~DatarateTestLarge() {}
-
- protected:
-  virtual void SetUp() {
-    InitializeConfig();
-    SetMode(GET_PARAM(1));
-    set_cpu_used_ = GET_PARAM(2);
-    ResetModel();
-  }
-
-  virtual void ResetModel() {
-    last_pts_ = 0;
-    bits_in_buffer_model_ = cfg_.rc_target_bitrate * cfg_.rc_buf_initial_sz;
-    frame_number_ = 0;
-    first_drop_ = 0;
-    bits_total_ = 0;
-    duration_ = 0.0;
-    denoiser_offon_test_ = 0;
-    denoiser_offon_period_ = -1;
-    gf_boost_ = 0;
-    use_roi_ = false;
-  }
-
-  virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
-                                  ::libvpx_test::Encoder *encoder) {
-    if (video->frame() == 0) {
-      encoder->Control(VP8E_SET_NOISE_SENSITIVITY, denoiser_on_);
-      encoder->Control(VP8E_SET_CPUUSED, set_cpu_used_);
-      encoder->Control(VP8E_SET_GF_CBR_BOOST_PCT, gf_boost_);
-    }
-
-    if (use_roi_) {
-      encoder->Control(VP8E_SET_ROI_MAP, &roi_);
-    }
-
-    if (denoiser_offon_test_) {
-      ASSERT_GT(denoiser_offon_period_, 0)
-          << "denoiser_offon_period_ is not positive.";
-      if ((video->frame() + 1) % denoiser_offon_period_ == 0) {
-        // Flip denoiser_on_ periodically
-        denoiser_on_ ^= 1;
-      }
-      encoder->Control(VP8E_SET_NOISE_SENSITIVITY, denoiser_on_);
-    }
-
-    const vpx_rational_t tb = video->timebase();
-    timebase_ = static_cast<double>(tb.num) / tb.den;
-    duration_ = 0;
-  }
-
-  virtual void FramePktHook(const vpx_codec_cx_pkt_t *pkt) {
-    // Time since last timestamp = duration.
-    vpx_codec_pts_t duration = pkt->data.frame.pts - last_pts_;
-
-    // TODO(jimbankoski): Remove these lines when the issue:
-    // http://code.google.com/p/webm/issues/detail?id=496 is fixed.
-    // For now the codec assumes buffer starts at starting buffer rate
-    // plus one frame's time.
-    if (last_pts_ == 0) duration = 1;
-
-    // Add to the buffer the bits we'd expect from a constant bitrate server.
-    bits_in_buffer_model_ += static_cast<int64_t>(
-        duration * timebase_ * cfg_.rc_target_bitrate * 1000);
-
-    /* Test the buffer model here before subtracting the frame. Do so because
-     * the way the leaky bucket model works in libvpx is to allow the buffer to
-     * empty - and then stop showing frames until we've got enough bits to
-     * show one. As noted in comment below (issue 495), this does not currently
-     * apply to key frames. For now exclude key frames in condition below. */
-    const bool key_frame =
-        (pkt->data.frame.flags & VPX_FRAME_IS_KEY) ? true : false;
-    if (!key_frame) {
-      ASSERT_GE(bits_in_buffer_model_, 0)
-          << "Buffer Underrun at frame " << pkt->data.frame.pts;
-    }
-
-    const int64_t frame_size_in_bits = pkt->data.frame.sz * 8;
-
-    // Subtract from the buffer the bits associated with a played back frame.
-    bits_in_buffer_model_ -= frame_size_in_bits;
-
-    // Update the running total of bits for end of test datarate checks.
-    bits_total_ += frame_size_in_bits;
-
-    // If first drop not set and we have a drop set it to this time.
-    if (!first_drop_ && duration > 1) first_drop_ = last_pts_ + 1;
-
-    // Update the most recent pts.
-    last_pts_ = pkt->data.frame.pts;
-
-    // We update this so that we can calculate the datarate minus the last
-    // frame encoded in the file.
-    bits_in_last_frame_ = frame_size_in_bits;
-
-    ++frame_number_;
-  }
-
-  virtual void EndPassHook(void) {
-    if (bits_total_) {
-      const double file_size_in_kb = bits_total_ / 1000.;  // bits per kilobit
-
-      duration_ = (last_pts_ + 1) * timebase_;
-
-      // Effective file datarate includes the time spent prebuffering.
-      effective_datarate_ = (bits_total_ - bits_in_last_frame_) / 1000.0 /
-                            (cfg_.rc_buf_initial_sz / 1000.0 + duration_);
-
-      file_datarate_ = file_size_in_kb / duration_;
-    }
-  }
-
-  vpx_codec_pts_t last_pts_;
-  int64_t bits_in_buffer_model_;
-  double timebase_;
-  int frame_number_;
-  vpx_codec_pts_t first_drop_;
-  int64_t bits_total_;
-  double duration_;
-  double file_datarate_;
-  double effective_datarate_;
-  int64_t bits_in_last_frame_;
-  int denoiser_on_;
-  int denoiser_offon_test_;
-  int denoiser_offon_period_;
-  int set_cpu_used_;
-  int gf_boost_;
-  bool use_roi_;
-  vpx_roi_map_t roi_;
-};
-
-#if CONFIG_TEMPORAL_DENOISING
-// Check basic datarate targeting, for a single bitrate, but loop over the
-// various denoiser settings.
-TEST_P(DatarateTestLarge, DenoiserLevels) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_dropframe_thresh = 1;
-  cfg_.rc_max_quantizer = 56;
-  cfg_.rc_end_usage = VPX_CBR;
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 140);
-  for (int j = 1; j < 5; ++j) {
-    // Run over the denoiser levels.
-    // For the temporal denoiser (#if CONFIG_TEMPORAL_DENOISING) the level j
-    // refers to the 4 denoiser modes: denoiserYonly, denoiserOnYUV,
-    // denoiserOnAggressive, and denoiserOnAdaptive.
-    denoiser_on_ = j;
-    cfg_.rc_target_bitrate = 300;
-    ResetModel();
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
-        << " The datarate for the file exceeds the target!";
-
-    ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
-        << " The datarate for the file missed the target!";
-  }
-}
-
-// Check basic datarate targeting, for a single bitrate, when denoiser is off
-// and on.
-TEST_P(DatarateTestLarge, DenoiserOffOn) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_dropframe_thresh = 1;
-  cfg_.rc_max_quantizer = 56;
-  cfg_.rc_end_usage = VPX_CBR;
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 299);
-  cfg_.rc_target_bitrate = 300;
-  ResetModel();
-  // The denoiser is off by default.
-  denoiser_on_ = 0;
-  // Set the offon test flag.
-  denoiser_offon_test_ = 1;
-  denoiser_offon_period_ = 100;
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
-      << " The datarate for the file exceeds the target!";
-  ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
-      << " The datarate for the file missed the target!";
-}
-#endif  // CONFIG_TEMPORAL_DENOISING
-
-TEST_P(DatarateTestLarge, BasicBufferModel) {
-  denoiser_on_ = 0;
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_dropframe_thresh = 1;
-  cfg_.rc_max_quantizer = 56;
-  cfg_.rc_end_usage = VPX_CBR;
-  // 2 pass cbr datarate control has a bug hidden by the small # of
-  // frames selected in this encode. The problem is that even if the buffer is
-  // negative we produce a keyframe on a cutscene. Ignoring datarate
-  // constraints
-  // TODO(jimbankoski): ( Fix when issue
-  // http://code.google.com/p/webm/issues/detail?id=495 is addressed. )
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 140);
-
-  // There is an issue for low bitrates in real-time mode, where the
-  // effective_datarate slightly overshoots the target bitrate.
-  // This is same the issue as noted about (#495).
-  // TODO(jimbankoski/marpan): Update test to run for lower bitrates (< 100),
-  // when the issue is resolved.
-  for (int i = 100; i < 800; i += 200) {
-    cfg_.rc_target_bitrate = i;
-    ResetModel();
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
-        << " The datarate for the file exceeds the target!";
-    ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
-        << " The datarate for the file missed the target!";
-  }
-}
-
-TEST_P(DatarateTestLarge, ChangingDropFrameThresh) {
-  denoiser_on_ = 0;
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_max_quantizer = 36;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.rc_target_bitrate = 200;
-  cfg_.kf_mode = VPX_KF_DISABLED;
-
-  const int frame_count = 40;
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, frame_count);
-
-  // Here we check that the first dropped frame gets earlier and earlier
-  // as the drop frame threshold is increased.
-
-  const int kDropFrameThreshTestStep = 30;
-  vpx_codec_pts_t last_drop = frame_count;
-  for (int i = 1; i < 91; i += kDropFrameThreshTestStep) {
-    cfg_.rc_dropframe_thresh = i;
-    ResetModel();
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    ASSERT_LE(first_drop_, last_drop)
-        << " The first dropped frame for drop_thresh " << i
-        << " > first dropped frame for drop_thresh "
-        << i - kDropFrameThreshTestStep;
-    last_drop = first_drop_;
-  }
-}
-
-TEST_P(DatarateTestLarge, DropFramesMultiThreads) {
-  denoiser_on_ = 0;
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_dropframe_thresh = 30;
-  cfg_.rc_max_quantizer = 56;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_threads = 2;
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 140);
-  cfg_.rc_target_bitrate = 200;
-  ResetModel();
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
-      << " The datarate for the file exceeds the target!";
-
-  ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
-      << " The datarate for the file missed the target!";
-}
-
-class DatarateTestRealTime : public DatarateTestLarge {
- public:
-  virtual ~DatarateTestRealTime() {}
-};
-
-#if CONFIG_TEMPORAL_DENOISING
-// Check basic datarate targeting, for a single bitrate, but loop over the
-// various denoiser settings.
-TEST_P(DatarateTestRealTime, DenoiserLevels) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_dropframe_thresh = 1;
-  cfg_.rc_max_quantizer = 56;
-  cfg_.rc_end_usage = VPX_CBR;
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 140);
-  for (int j = 1; j < 5; ++j) {
-    // Run over the denoiser levels.
-    // For the temporal denoiser (#if CONFIG_TEMPORAL_DENOISING) the level j
-    // refers to the 4 denoiser modes: denoiserYonly, denoiserOnYUV,
-    // denoiserOnAggressive, and denoiserOnAdaptive.
-    denoiser_on_ = j;
-    cfg_.rc_target_bitrate = 300;
-    ResetModel();
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
-        << " The datarate for the file exceeds the target!";
-    ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
-        << " The datarate for the file missed the target!";
-  }
-}
-
-// Check basic datarate targeting, for a single bitrate, when denoiser is off
-// and on.
-TEST_P(DatarateTestRealTime, DenoiserOffOn) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_dropframe_thresh = 1;
-  cfg_.rc_max_quantizer = 56;
-  cfg_.rc_end_usage = VPX_CBR;
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 299);
-  cfg_.rc_target_bitrate = 300;
-  ResetModel();
-  // The denoiser is off by default.
-  denoiser_on_ = 0;
-  // Set the offon test flag.
-  denoiser_offon_test_ = 1;
-  denoiser_offon_period_ = 100;
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
-      << " The datarate for the file exceeds the target!";
-  ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
-      << " The datarate for the file missed the target!";
-}
-#endif  // CONFIG_TEMPORAL_DENOISING
-
-TEST_P(DatarateTestRealTime, BasicBufferModel) {
-  denoiser_on_ = 0;
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_dropframe_thresh = 1;
-  cfg_.rc_max_quantizer = 56;
-  cfg_.rc_end_usage = VPX_CBR;
-  // 2 pass cbr datarate control has a bug hidden by the small # of
-  // frames selected in this encode. The problem is that even if the buffer is
-  // negative we produce a keyframe on a cutscene, ignoring datarate
-  // constraints
-  // TODO(jimbankoski): Fix when issue
-  // http://bugs.chromium.org/p/webm/issues/detail?id=495 is addressed.
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 140);
-
-  // There is an issue for low bitrates in real-time mode, where the
-  // effective_datarate slightly overshoots the target bitrate.
-  // This is same the issue as noted above (#495).
-  // TODO(jimbankoski/marpan): Update test to run for lower bitrates (< 100),
-  // when the issue is resolved.
-  for (int i = 100; i <= 700; i += 200) {
-    cfg_.rc_target_bitrate = i;
-    ResetModel();
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
-        << " The datarate for the file exceeds the target!";
-    ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
-        << " The datarate for the file missed the target!";
-  }
-}
-
-TEST_P(DatarateTestRealTime, ChangingDropFrameThresh) {
-  denoiser_on_ = 0;
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_max_quantizer = 36;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.rc_target_bitrate = 200;
-  cfg_.kf_mode = VPX_KF_DISABLED;
-
-  const int frame_count = 40;
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, frame_count);
-
-  // Check that the first dropped frame gets earlier and earlier
-  // as the drop frame threshold is increased.
-
-  const int kDropFrameThreshTestStep = 30;
-  vpx_codec_pts_t last_drop = frame_count;
-  for (int i = 1; i < 91; i += kDropFrameThreshTestStep) {
-    cfg_.rc_dropframe_thresh = i;
-    ResetModel();
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    ASSERT_LE(first_drop_, last_drop)
-        << " The first dropped frame for drop_thresh " << i
-        << " > first dropped frame for drop_thresh "
-        << i - kDropFrameThreshTestStep;
-    last_drop = first_drop_;
-  }
-}
-
-TEST_P(DatarateTestRealTime, DropFramesMultiThreads) {
-  denoiser_on_ = 0;
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_dropframe_thresh = 30;
-  cfg_.rc_max_quantizer = 56;
-  cfg_.rc_end_usage = VPX_CBR;
-  // Encode using multiple threads.
-  cfg_.g_threads = 2;
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 140);
-  cfg_.rc_target_bitrate = 200;
-  ResetModel();
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
-      << " The datarate for the file exceeds the target!";
-
-  ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
-      << " The datarate for the file missed the target!";
-}
-
-TEST_P(DatarateTestRealTime, RegionOfInterest) {
-  denoiser_on_ = 0;
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_dropframe_thresh = 0;
-  cfg_.rc_max_quantizer = 56;
-  cfg_.rc_end_usage = VPX_CBR;
-  // Encode using multiple threads.
-  cfg_.g_threads = 2;
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 300);
-  cfg_.rc_target_bitrate = 450;
-  cfg_.g_w = 352;
-  cfg_.g_h = 288;
-
-  ResetModel();
-
-  // Set ROI parameters
-  use_roi_ = true;
-  memset(&roi_, 0, sizeof(roi_));
-
-  roi_.rows = (cfg_.g_h + 15) / 16;
-  roi_.cols = (cfg_.g_w + 15) / 16;
-
-  roi_.delta_q[0] = 0;
-  roi_.delta_q[1] = -20;
-  roi_.delta_q[2] = 0;
-  roi_.delta_q[3] = 0;
-
-  roi_.delta_lf[0] = 0;
-  roi_.delta_lf[1] = -20;
-  roi_.delta_lf[2] = 0;
-  roi_.delta_lf[3] = 0;
-
-  roi_.static_threshold[0] = 0;
-  roi_.static_threshold[1] = 1000;
-  roi_.static_threshold[2] = 0;
-  roi_.static_threshold[3] = 0;
-
-  // Use 2 states: 1 is center square, 0 is the rest.
-  roi_.roi_map =
-      (uint8_t *)calloc(roi_.rows * roi_.cols, sizeof(*roi_.roi_map));
-  for (unsigned int i = 0; i < roi_.rows; ++i) {
-    for (unsigned int j = 0; j < roi_.cols; ++j) {
-      if (i > (roi_.rows >> 2) && i < ((roi_.rows * 3) >> 2) &&
-          j > (roi_.cols >> 2) && j < ((roi_.cols * 3) >> 2)) {
-        roi_.roi_map[i * roi_.cols + j] = 1;
-      }
-    }
-  }
-
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
-      << " The datarate for the file exceeds the target!";
-
-  ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
-      << " The datarate for the file missed the target!";
-
-  free(roi_.roi_map);
-}
-
-TEST_P(DatarateTestRealTime, GFBoost) {
-  denoiser_on_ = 0;
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_dropframe_thresh = 0;
-  cfg_.rc_max_quantizer = 56;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_error_resilient = 0;
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 300);
-  cfg_.rc_target_bitrate = 300;
-  ResetModel();
-  // Apply a gf boost.
-  gf_boost_ = 50;
-
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
-      << " The datarate for the file exceeds the target!";
-
-  ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
-      << " The datarate for the file missed the target!";
-}
-
-class DatarateTestVP9Large
-    : public ::libvpx_test::EncoderTest,
-      public ::libvpx_test::CodecTestWith2Params<libvpx_test::TestMode, int> {
- public:
-  DatarateTestVP9Large() : EncoderTest(GET_PARAM(0)) {}
-
- protected:
-  virtual ~DatarateTestVP9Large() {}
-
-  virtual void SetUp() {
-    InitializeConfig();
-    SetMode(GET_PARAM(1));
-    set_cpu_used_ = GET_PARAM(2);
-    ResetModel();
-  }
-
-  virtual void ResetModel() {
-    last_pts_ = 0;
-    bits_in_buffer_model_ = cfg_.rc_target_bitrate * cfg_.rc_buf_initial_sz;
-    frame_number_ = 0;
-    tot_frame_number_ = 0;
-    first_drop_ = 0;
-    num_drops_ = 0;
-    // Denoiser is off by default.
-    denoiser_on_ = 0;
-    // For testing up to 3 layers.
-    for (int i = 0; i < 3; ++i) {
-      bits_total_[i] = 0;
-    }
-    denoiser_offon_test_ = 0;
-    denoiser_offon_period_ = -1;
-    frame_parallel_decoding_mode_ = 1;
-    use_roi_ = false;
-  }
-
-  //
-  // Frame flags and layer id for temporal layers.
-  //
-
-  // For two layers, test pattern is:
-  //   1     3
-  // 0    2     .....
-  // For three layers, test pattern is:
-  //   1      3    5      7
-  //      2           6
-  // 0          4            ....
-  // LAST is always update on base/layer 0, GOLDEN is updated on layer 1.
-  // For this 3 layer example, the 2nd enhancement layer (layer 2) updates
-  // the altref frame.
-  int SetFrameFlags(int frame_num, int num_temp_layers) {
-    int frame_flags = 0;
-    if (num_temp_layers == 2) {
-      if (frame_num % 2 == 0) {
-        // Layer 0: predict from L and ARF, update L.
-        frame_flags =
-            VP8_EFLAG_NO_REF_GF | VP8_EFLAG_NO_UPD_GF | VP8_EFLAG_NO_UPD_ARF;
-      } else {
-        // Layer 1: predict from L, G and ARF, and update G.
-        frame_flags = VP8_EFLAG_NO_UPD_ARF | VP8_EFLAG_NO_UPD_LAST |
-                      VP8_EFLAG_NO_UPD_ENTROPY;
-      }
-    } else if (num_temp_layers == 3) {
-      if (frame_num % 4 == 0) {
-        // Layer 0: predict from L and ARF; update L.
-        frame_flags =
-            VP8_EFLAG_NO_UPD_GF | VP8_EFLAG_NO_UPD_ARF | VP8_EFLAG_NO_REF_GF;
-      } else if ((frame_num - 2) % 4 == 0) {
-        // Layer 1: predict from L, G, ARF; update G.
-        frame_flags = VP8_EFLAG_NO_UPD_ARF | VP8_EFLAG_NO_UPD_LAST;
-      } else if ((frame_num - 1) % 2 == 0) {
-        // Layer 2: predict from L, G, ARF; update ARF.
-        frame_flags = VP8_EFLAG_NO_UPD_GF | VP8_EFLAG_NO_UPD_LAST;
-      }
-    }
-    return frame_flags;
-  }
-
-  int SetLayerId(int frame_num, int num_temp_layers) {
-    int layer_id = 0;
-    if (num_temp_layers == 2) {
-      if (frame_num % 2 == 0) {
-        layer_id = 0;
-      } else {
-        layer_id = 1;
-      }
-    } else if (num_temp_layers == 3) {
-      if (frame_num % 4 == 0) {
-        layer_id = 0;
-      } else if ((frame_num - 2) % 4 == 0) {
-        layer_id = 1;
-      } else if ((frame_num - 1) % 2 == 0) {
-        layer_id = 2;
-      }
-    }
-    return layer_id;
-  }
-
-  virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
-                                  ::libvpx_test::Encoder *encoder) {
-    if (video->frame() == 0) encoder->Control(VP8E_SET_CPUUSED, set_cpu_used_);
-
-    if (denoiser_offon_test_) {
-      ASSERT_GT(denoiser_offon_period_, 0)
-          << "denoiser_offon_period_ is not positive.";
-      if ((video->frame() + 1) % denoiser_offon_period_ == 0) {
-        // Flip denoiser_on_ periodically
-        denoiser_on_ ^= 1;
-      }
-    }
-
-    encoder->Control(VP9E_SET_NOISE_SENSITIVITY, denoiser_on_);
-    encoder->Control(VP9E_SET_TILE_COLUMNS, (cfg_.g_threads >> 1));
-    encoder->Control(VP9E_SET_FRAME_PARALLEL_DECODING,
-                     frame_parallel_decoding_mode_);
-
-    if (use_roi_) {
-      encoder->Control(VP9E_SET_ROI_MAP, &roi_);
-    }
-
-    if (cfg_.ts_number_layers > 1) {
-      if (video->frame() == 0) {
-        encoder->Control(VP9E_SET_SVC, 1);
-      }
-      vpx_svc_layer_id_t layer_id;
-      layer_id.spatial_layer_id = 0;
-      frame_flags_ = SetFrameFlags(video->frame(), cfg_.ts_number_layers);
-      layer_id.temporal_layer_id =
-          SetLayerId(video->frame(), cfg_.ts_number_layers);
-      encoder->Control(VP9E_SET_SVC_LAYER_ID, &layer_id);
-    }
-    const vpx_rational_t tb = video->timebase();
-    timebase_ = static_cast<double>(tb.num) / tb.den;
-    duration_ = 0;
-  }
-
-  virtual void FramePktHook(const vpx_codec_cx_pkt_t *pkt) {
-    // Time since last timestamp = duration.
-    vpx_codec_pts_t duration = pkt->data.frame.pts - last_pts_;
-
-    if (duration > 1) {
-      // If first drop not set and we have a drop set it to this time.
-      if (!first_drop_) first_drop_ = last_pts_ + 1;
-      // Update the number of frame drops.
-      num_drops_ += static_cast<int>(duration - 1);
-      // Update counter for total number of frames (#frames input to encoder).
-      // Needed for setting the proper layer_id below.
-      tot_frame_number_ += static_cast<int>(duration - 1);
-    }
-
-    int layer = SetLayerId(tot_frame_number_, cfg_.ts_number_layers);
-
-    // Add to the buffer the bits we'd expect from a constant bitrate server.
-    bits_in_buffer_model_ += static_cast<int64_t>(
-        duration * timebase_ * cfg_.rc_target_bitrate * 1000);
-
-    // Buffer should not go negative.
-    ASSERT_GE(bits_in_buffer_model_, 0)
-        << "Buffer Underrun at frame " << pkt->data.frame.pts;
-
-    const size_t frame_size_in_bits = pkt->data.frame.sz * 8;
-
-    // Update the total encoded bits. For temporal layers, update the cumulative
-    // encoded bits per layer.
-    for (int i = layer; i < static_cast<int>(cfg_.ts_number_layers); ++i) {
-      bits_total_[i] += frame_size_in_bits;
-    }
-
-    // Update the most recent pts.
-    last_pts_ = pkt->data.frame.pts;
-    ++frame_number_;
-    ++tot_frame_number_;
-  }
-
-  virtual void EndPassHook(void) {
-    for (int layer = 0; layer < static_cast<int>(cfg_.ts_number_layers);
-         ++layer) {
-      duration_ = (last_pts_ + 1) * timebase_;
-      if (bits_total_[layer]) {
-        // Effective file datarate:
-        effective_datarate_[layer] = (bits_total_[layer] / 1000.0) / duration_;
-      }
-    }
-  }
-
-  vpx_codec_pts_t last_pts_;
-  double timebase_;
-  int frame_number_;      // Counter for number of non-dropped/encoded frames.
-  int tot_frame_number_;  // Counter for total number of input frames.
-  int64_t bits_total_[3];
-  double duration_;
-  double effective_datarate_[3];
-  int set_cpu_used_;
-  int64_t bits_in_buffer_model_;
-  vpx_codec_pts_t first_drop_;
-  int num_drops_;
-  int denoiser_on_;
-  int denoiser_offon_test_;
-  int denoiser_offon_period_;
-  int frame_parallel_decoding_mode_;
-  bool use_roi_;
-  vpx_roi_map_t roi_;
-};
-
-// Check basic rate targeting for VBR mode with 0 lag.
-TEST_P(DatarateTestVP9Large, BasicRateTargetingVBRLagZero) {
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.g_error_resilient = 0;
-  cfg_.rc_end_usage = VPX_VBR;
-  cfg_.g_lag_in_frames = 0;
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 300);
-  for (int i = 400; i <= 800; i += 400) {
-    cfg_.rc_target_bitrate = i;
-    ResetModel();
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.75)
-        << " The datarate for the file is lower than target by too much!";
-    ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.30)
-        << " The datarate for the file is greater than target by too much!";
-  }
-}
-
-// Check basic rate targeting for VBR mode with non-zero lag.
-TEST_P(DatarateTestVP9Large, BasicRateTargetingVBRLagNonZero) {
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.g_error_resilient = 0;
-  cfg_.rc_end_usage = VPX_VBR;
-  // For non-zero lag, rate control will work (be within bounds) for
-  // real-time mode.
-  if (deadline_ == VPX_DL_REALTIME) {
-    cfg_.g_lag_in_frames = 15;
-  } else {
-    cfg_.g_lag_in_frames = 0;
-  }
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 300);
-  for (int i = 400; i <= 800; i += 400) {
-    cfg_.rc_target_bitrate = i;
-    ResetModel();
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.75)
-        << " The datarate for the file is lower than target by too much!";
-    ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.30)
-        << " The datarate for the file is greater than target by too much!";
-  }
-}
-
-// Check basic rate targeting for VBR mode with non-zero lag, with
-// frame_parallel_decoding_mode off. This enables the adapt_coeff/mode/mv probs
-// since error_resilience is off.
-TEST_P(DatarateTestVP9Large, BasicRateTargetingVBRLagNonZeroFrameParDecOff) {
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.g_error_resilient = 0;
-  cfg_.rc_end_usage = VPX_VBR;
-  // For non-zero lag, rate control will work (be within bounds) for
-  // real-time mode.
-  if (deadline_ == VPX_DL_REALTIME) {
-    cfg_.g_lag_in_frames = 15;
-  } else {
-    cfg_.g_lag_in_frames = 0;
-  }
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 300);
-  for (int i = 400; i <= 800; i += 400) {
-    cfg_.rc_target_bitrate = i;
-    ResetModel();
-    frame_parallel_decoding_mode_ = 0;
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.75)
-        << " The datarate for the file is lower than target by too much!";
-    ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.30)
-        << " The datarate for the file is greater than target by too much!";
-  }
-}
-
-// Check basic rate targeting for CBR mode.
-TEST_P(DatarateTestVP9Large, BasicRateTargeting) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_dropframe_thresh = 1;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 140);
-  for (int i = 150; i < 800; i += 200) {
-    cfg_.rc_target_bitrate = i;
-    ResetModel();
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
-        << " The datarate for the file is lower than target by too much!";
-    ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.15)
-        << " The datarate for the file is greater than target by too much!";
-  }
-}
-
-// Check basic rate targeting for CBR mode, with frame_parallel_decoding_mode
-// off( and error_resilience off).
-TEST_P(DatarateTestVP9Large, BasicRateTargetingFrameParDecOff) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_dropframe_thresh = 1;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  cfg_.g_error_resilient = 0;
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 140);
-  for (int i = 150; i < 800; i += 200) {
-    cfg_.rc_target_bitrate = i;
-    ResetModel();
-    frame_parallel_decoding_mode_ = 0;
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
-        << " The datarate for the file is lower than target by too much!";
-    ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.15)
-        << " The datarate for the file is greater than target by too much!";
-  }
-}
-
-// Check basic rate targeting for CBR mode, with 2 threads and dropped frames.
-TEST_P(DatarateTestVP9Large, BasicRateTargetingDropFramesMultiThreads) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_dropframe_thresh = 30;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  // Encode using multiple threads.
-  cfg_.g_threads = 2;
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 140);
-  cfg_.rc_target_bitrate = 200;
-  ResetModel();
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
-      << " The datarate for the file is lower than target by too much!";
-  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.15)
-      << " The datarate for the file is greater than target by too much!";
-}
-
-// Check basic rate targeting for CBR.
-TEST_P(DatarateTestVP9Large, BasicRateTargeting444) {
-  ::libvpx_test::Y4mVideoSource video("rush_hour_444.y4m", 0, 140);
-
-  cfg_.g_profile = 1;
-  cfg_.g_timebase = video.timebase();
-
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_dropframe_thresh = 1;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-
-  for (int i = 250; i < 900; i += 200) {
-    cfg_.rc_target_bitrate = i;
-    ResetModel();
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    ASSERT_GE(static_cast<double>(cfg_.rc_target_bitrate),
-              effective_datarate_[0] * 0.80)
-        << " The datarate for the file exceeds the target by too much!";
-    ASSERT_LE(static_cast<double>(cfg_.rc_target_bitrate),
-              effective_datarate_[0] * 1.15)
-        << " The datarate for the file missed the target!"
-        << cfg_.rc_target_bitrate << " " << effective_datarate_;
-  }
-}
-
-// Check that (1) the first dropped frame gets earlier and earlier
-// as the drop frame threshold is increased, and (2) that the total number of
-// frame drops does not decrease as we increase frame drop threshold.
-// Use a lower qp-max to force some frame drops.
-TEST_P(DatarateTestVP9Large, ChangingDropFrameThresh) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_undershoot_pct = 20;
-  cfg_.rc_undershoot_pct = 20;
-  cfg_.rc_dropframe_thresh = 10;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 50;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.rc_target_bitrate = 200;
-  cfg_.g_lag_in_frames = 0;
-  // TODO(marpan): Investigate datarate target failures with a smaller keyframe
-  // interval (128).
-  cfg_.kf_max_dist = 9999;
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 140);
-
-  const int kDropFrameThreshTestStep = 30;
-  for (int j = 50; j <= 150; j += 100) {
-    cfg_.rc_target_bitrate = j;
-    vpx_codec_pts_t last_drop = 140;
-    int last_num_drops = 0;
-    for (int i = 10; i < 100; i += kDropFrameThreshTestStep) {
-      cfg_.rc_dropframe_thresh = i;
-      ResetModel();
-      ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-      ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
-          << " The datarate for the file is lower than target by too much!";
-      ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.25)
-          << " The datarate for the file is greater than target by too much!";
-      ASSERT_LE(first_drop_, last_drop)
-          << " The first dropped frame for drop_thresh " << i
-          << " > first dropped frame for drop_thresh "
-          << i - kDropFrameThreshTestStep;
-      ASSERT_GE(num_drops_, last_num_drops * 0.85)
-          << " The number of dropped frames for drop_thresh " << i
-          << " < number of dropped frames for drop_thresh "
-          << i - kDropFrameThreshTestStep;
-      last_drop = first_drop_;
-      last_num_drops = num_drops_;
-    }
-  }
-}
-
-// Check basic rate targeting for 2 temporal layers.
-TEST_P(DatarateTestVP9Large, BasicRateTargeting2TemporalLayers) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_dropframe_thresh = 1;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-
-  // 2 Temporal layers, no spatial layers: Framerate decimation (2, 1).
-  cfg_.ss_number_layers = 1;
-  cfg_.ts_number_layers = 2;
-  cfg_.ts_rate_decimator[0] = 2;
-  cfg_.ts_rate_decimator[1] = 1;
-
-  cfg_.temporal_layering_mode = VP9E_TEMPORAL_LAYERING_MODE_BYPASS;
-
-  if (deadline_ == VPX_DL_REALTIME) cfg_.g_error_resilient = 1;
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 200);
-  for (int i = 200; i <= 800; i += 200) {
-    cfg_.rc_target_bitrate = i;
-    ResetModel();
-    // 60-40 bitrate allocation for 2 temporal layers.
-    cfg_.layer_target_bitrate[0] = 60 * cfg_.rc_target_bitrate / 100;
-    cfg_.layer_target_bitrate[1] = cfg_.rc_target_bitrate;
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    for (int j = 0; j < static_cast<int>(cfg_.ts_number_layers); ++j) {
-      ASSERT_GE(effective_datarate_[j], cfg_.layer_target_bitrate[j] * 0.85)
-          << " The datarate for the file is lower than target by too much, "
-             "for layer: "
-          << j;
-      ASSERT_LE(effective_datarate_[j], cfg_.layer_target_bitrate[j] * 1.15)
-          << " The datarate for the file is greater than target by too much, "
-             "for layer: "
-          << j;
-    }
-  }
-}
-
-// Check basic rate targeting for 3 temporal layers.
-TEST_P(DatarateTestVP9Large, BasicRateTargeting3TemporalLayers) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_dropframe_thresh = 1;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-
-  // 3 Temporal layers, no spatial layers: Framerate decimation (4, 2, 1).
-  cfg_.ss_number_layers = 1;
-  cfg_.ts_number_layers = 3;
-  cfg_.ts_rate_decimator[0] = 4;
-  cfg_.ts_rate_decimator[1] = 2;
-  cfg_.ts_rate_decimator[2] = 1;
-
-  cfg_.temporal_layering_mode = VP9E_TEMPORAL_LAYERING_MODE_BYPASS;
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 200);
-  for (int i = 200; i <= 800; i += 200) {
-    cfg_.rc_target_bitrate = i;
-    ResetModel();
-    // 40-20-40 bitrate allocation for 3 temporal layers.
-    cfg_.layer_target_bitrate[0] = 40 * cfg_.rc_target_bitrate / 100;
-    cfg_.layer_target_bitrate[1] = 60 * cfg_.rc_target_bitrate / 100;
-    cfg_.layer_target_bitrate[2] = cfg_.rc_target_bitrate;
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    for (int j = 0; j < static_cast<int>(cfg_.ts_number_layers); ++j) {
-      // TODO(yaowu): Work out more stable rc control strategy and
-      //              Adjust the thresholds to be tighter than .75.
-      ASSERT_GE(effective_datarate_[j], cfg_.layer_target_bitrate[j] * 0.75)
-          << " The datarate for the file is lower than target by too much, "
-             "for layer: "
-          << j;
-      // TODO(yaowu): Work out more stable rc control strategy and
-      //              Adjust the thresholds to be tighter than 1.25.
-      ASSERT_LE(effective_datarate_[j], cfg_.layer_target_bitrate[j] * 1.25)
-          << " The datarate for the file is greater than target by too much, "
-             "for layer: "
-          << j;
-    }
-  }
-}
-
-// Check basic rate targeting for 3 temporal layers, with frame dropping.
-// Only for one (low) bitrate with lower max_quantizer, and somewhat higher
-// frame drop threshold, to force frame dropping.
-TEST_P(DatarateTestVP9Large, BasicRateTargeting3TemporalLayersFrameDropping) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  // Set frame drop threshold and rc_max_quantizer to force some frame drops.
-  cfg_.rc_dropframe_thresh = 20;
-  cfg_.rc_max_quantizer = 45;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-
-  // 3 Temporal layers, no spatial layers: Framerate decimation (4, 2, 1).
-  cfg_.ss_number_layers = 1;
-  cfg_.ts_number_layers = 3;
-  cfg_.ts_rate_decimator[0] = 4;
-  cfg_.ts_rate_decimator[1] = 2;
-  cfg_.ts_rate_decimator[2] = 1;
-
-  cfg_.temporal_layering_mode = VP9E_TEMPORAL_LAYERING_MODE_BYPASS;
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 200);
-  cfg_.rc_target_bitrate = 200;
-  ResetModel();
-  // 40-20-40 bitrate allocation for 3 temporal layers.
-  cfg_.layer_target_bitrate[0] = 40 * cfg_.rc_target_bitrate / 100;
-  cfg_.layer_target_bitrate[1] = 60 * cfg_.rc_target_bitrate / 100;
-  cfg_.layer_target_bitrate[2] = cfg_.rc_target_bitrate;
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  for (int j = 0; j < static_cast<int>(cfg_.ts_number_layers); ++j) {
-    ASSERT_GE(effective_datarate_[j], cfg_.layer_target_bitrate[j] * 0.85)
-        << " The datarate for the file is lower than target by too much, "
-           "for layer: "
-        << j;
-    ASSERT_LE(effective_datarate_[j], cfg_.layer_target_bitrate[j] * 1.15)
-        << " The datarate for the file is greater than target by too much, "
-           "for layer: "
-        << j;
-    // Expect some frame drops in this test: for this 200 frames test,
-    // expect at least 10% and not more than 60% drops.
-    ASSERT_GE(num_drops_, 20);
-    ASSERT_LE(num_drops_, 130);
-  }
-}
-
-class DatarateTestVP9RealTime : public DatarateTestVP9Large {
- public:
-  virtual ~DatarateTestVP9RealTime() {}
-};
-
-// Check VP9 region of interest feature.
-TEST_P(DatarateTestVP9RealTime, RegionOfInterest) {
-  if (deadline_ != VPX_DL_REALTIME || set_cpu_used_ < 5) return;
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_dropframe_thresh = 0;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 300);
-
-  cfg_.rc_target_bitrate = 450;
-  cfg_.g_w = 352;
-  cfg_.g_h = 288;
-
-  ResetModel();
-
-  // Set ROI parameters
-  use_roi_ = true;
-  memset(&roi_, 0, sizeof(roi_));
-
-  roi_.rows = (cfg_.g_h + 7) / 8;
-  roi_.cols = (cfg_.g_w + 7) / 8;
-
-  roi_.delta_q[1] = -20;
-  roi_.delta_lf[1] = -20;
-  memset(roi_.ref_frame, -1, sizeof(roi_.ref_frame));
-  roi_.ref_frame[1] = 1;
-
-  // Use 2 states: 1 is center square, 0 is the rest.
-  roi_.roi_map = reinterpret_cast<uint8_t *>(
-      calloc(roi_.rows * roi_.cols, sizeof(*roi_.roi_map)));
-  ASSERT_TRUE(roi_.roi_map != NULL);
-
-  for (unsigned int i = 0; i < roi_.rows; ++i) {
-    for (unsigned int j = 0; j < roi_.cols; ++j) {
-      if (i > (roi_.rows >> 2) && i < ((roi_.rows * 3) >> 2) &&
-          j > (roi_.cols >> 2) && j < ((roi_.cols * 3) >> 2)) {
-        roi_.roi_map[i * roi_.cols + j] = 1;
-      }
-    }
-  }
-
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_[0] * 0.90)
-      << " The datarate for the file exceeds the target!";
-
-  ASSERT_LE(cfg_.rc_target_bitrate, effective_datarate_[0] * 1.4)
-      << " The datarate for the file missed the target!";
-
-  free(roi_.roi_map);
-}
-
-#if CONFIG_VP9_TEMPORAL_DENOISING
-class DatarateTestVP9LargeDenoiser : public DatarateTestVP9Large {
- public:
-  virtual ~DatarateTestVP9LargeDenoiser() {}
-};
-
-// Check basic datarate targeting, for a single bitrate, when denoiser is on.
-TEST_P(DatarateTestVP9LargeDenoiser, LowNoise) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_dropframe_thresh = 1;
-  cfg_.rc_min_quantizer = 2;
-  cfg_.rc_max_quantizer = 56;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 140);
-
-  // For the temporal denoiser (#if CONFIG_VP9_TEMPORAL_DENOISING),
-  // there is only one denoiser mode: denoiserYonly(which is 1),
-  // but may add more modes in the future.
-  cfg_.rc_target_bitrate = 300;
-  ResetModel();
-  // Turn on the denoiser.
-  denoiser_on_ = 1;
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
-      << " The datarate for the file is lower than target by too much!";
-  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.15)
-      << " The datarate for the file is greater than target by too much!";
-}
-
-// Check basic datarate targeting, for a single bitrate, when denoiser is on,
-// for clip with high noise level. Use 2 threads.
-TEST_P(DatarateTestVP9LargeDenoiser, HighNoise) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_dropframe_thresh = 1;
-  cfg_.rc_min_quantizer = 2;
-  cfg_.rc_max_quantizer = 56;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  cfg_.g_threads = 2;
-
-  ::libvpx_test::Y4mVideoSource video("noisy_clip_640_360.y4m", 0, 200);
-
-  // For the temporal denoiser (#if CONFIG_VP9_TEMPORAL_DENOISING),
-  // there is only one denoiser mode: kDenoiserOnYOnly(which is 1),
-  // but may add more modes in the future.
-  cfg_.rc_target_bitrate = 1000;
-  ResetModel();
-  // Turn on the denoiser.
-  denoiser_on_ = 1;
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
-      << " The datarate for the file is lower than target by too much!";
-  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.15)
-      << " The datarate for the file is greater than target by too much!";
-}
-
-// Check basic datarate targeting, for a single bitrate, when denoiser is on,
-// for 1280x720 clip with 4 threads.
-TEST_P(DatarateTestVP9LargeDenoiser, 4threads) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_dropframe_thresh = 1;
-  cfg_.rc_min_quantizer = 2;
-  cfg_.rc_max_quantizer = 56;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  cfg_.g_threads = 4;
-
-  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 300);
-
-  // For the temporal denoiser (#if CONFIG_VP9_TEMPORAL_DENOISING),
-  // there is only one denoiser mode: denoiserYonly(which is 1),
-  // but may add more modes in the future.
-  cfg_.rc_target_bitrate = 1000;
-  ResetModel();
-  // Turn on the denoiser.
-  denoiser_on_ = 1;
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
-      << " The datarate for the file is lower than target by too much!";
-  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.29)
-      << " The datarate for the file is greater than target by too much!";
-}
-
-// Check basic datarate targeting, for a single bitrate, when denoiser is off
-// and on.
-TEST_P(DatarateTestVP9LargeDenoiser, DenoiserOffOn) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_dropframe_thresh = 1;
-  cfg_.rc_min_quantizer = 2;
-  cfg_.rc_max_quantizer = 56;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-
-  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
-                                       30, 1, 0, 299);
-
-  // For the temporal denoiser (#if CONFIG_VP9_TEMPORAL_DENOISING),
-  // there is only one denoiser mode: denoiserYonly(which is 1),
-  // but may add more modes in the future.
-  cfg_.rc_target_bitrate = 300;
-  ResetModel();
-  // The denoiser is off by default.
-  denoiser_on_ = 0;
-  // Set the offon test flag.
-  denoiser_offon_test_ = 1;
-  denoiser_offon_period_ = 100;
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
-      << " The datarate for the file is lower than target by too much!";
-  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.15)
-      << " The datarate for the file is greater than target by too much!";
-}
-#endif  // CONFIG_VP9_TEMPORAL_DENOISING
-
-static void assign_layer_bitrates(vpx_codec_enc_cfg_t *const enc_cfg,
-                                  const vpx_svc_extra_cfg_t *svc_params,
-                                  int spatial_layers, int temporal_layers,
-                                  int temporal_layering_mode,
-                                  int *layer_target_avg_bandwidth,
-                                  int64_t *bits_in_buffer_model) {
-  int sl, spatial_layer_target;
-  float total = 0;
-  float alloc_ratio[VPX_MAX_LAYERS] = { 0 };
-  float framerate = 30.0;
-  for (sl = 0; sl < spatial_layers; ++sl) {
-    if (svc_params->scaling_factor_den[sl] > 0) {
-      alloc_ratio[sl] = (float)(svc_params->scaling_factor_num[sl] * 1.0 /
-                                svc_params->scaling_factor_den[sl]);
-      total += alloc_ratio[sl];
-    }
-  }
-  for (sl = 0; sl < spatial_layers; ++sl) {
-    enc_cfg->ss_target_bitrate[sl] = spatial_layer_target =
-        (unsigned int)(enc_cfg->rc_target_bitrate * alloc_ratio[sl] / total);
-    const int index = sl * temporal_layers;
-    if (temporal_layering_mode == 3) {
-      enc_cfg->layer_target_bitrate[index] = spatial_layer_target >> 1;
-      enc_cfg->layer_target_bitrate[index + 1] =
-          (spatial_layer_target >> 1) + (spatial_layer_target >> 2);
-      enc_cfg->layer_target_bitrate[index + 2] = spatial_layer_target;
-    } else if (temporal_layering_mode == 2) {
-      enc_cfg->layer_target_bitrate[index] = spatial_layer_target * 2 / 3;
-      enc_cfg->layer_target_bitrate[index + 1] = spatial_layer_target;
-    } else if (temporal_layering_mode <= 1) {
-      enc_cfg->layer_target_bitrate[index] = spatial_layer_target;
-    }
-  }
-  for (sl = 0; sl < spatial_layers; ++sl) {
-    for (int tl = 0; tl < temporal_layers; ++tl) {
-      const int layer = sl * temporal_layers + tl;
-      float layer_framerate = framerate;
-      if (temporal_layers == 2 && tl == 0) layer_framerate = framerate / 2;
-      if (temporal_layers == 3 && tl == 0) layer_framerate = framerate / 4;
-      if (temporal_layers == 3 && tl == 1) layer_framerate = framerate / 2;
-      layer_target_avg_bandwidth[layer] = static_cast<int>(
-          enc_cfg->layer_target_bitrate[layer] * 1000.0 / layer_framerate);
-      bits_in_buffer_model[layer] =
-          enc_cfg->layer_target_bitrate[layer] * enc_cfg->rc_buf_initial_sz;
-    }
-  }
-}
-
-static void CheckLayerRateTargeting(vpx_codec_enc_cfg_t *const cfg,
-                                    int number_spatial_layers,
-                                    int number_temporal_layers,
-                                    double *file_datarate,
-                                    double thresh_overshoot,
-                                    double thresh_undershoot) {
-  for (int sl = 0; sl < number_spatial_layers; ++sl)
-    for (int tl = 0; tl < number_temporal_layers; ++tl) {
-      const int layer = sl * number_temporal_layers + tl;
-      ASSERT_GE(cfg->layer_target_bitrate[layer],
-                file_datarate[layer] * thresh_overshoot)
-          << " The datarate for the file exceeds the target by too much!";
-      ASSERT_LE(cfg->layer_target_bitrate[layer],
-                file_datarate[layer] * thresh_undershoot)
-          << " The datarate for the file is lower than the target by too much!";
-    }
-}
-
-class DatarateOnePassCbrSvc
-    : public ::libvpx_test::EncoderTest,
-      public ::libvpx_test::CodecTestWith2Params<libvpx_test::TestMode, int> {
- public:
-  DatarateOnePassCbrSvc() : EncoderTest(GET_PARAM(0)) {
-    memset(&svc_params_, 0, sizeof(svc_params_));
-  }
-  virtual ~DatarateOnePassCbrSvc() {}
-
- protected:
-  virtual void SetUp() {
-    InitializeConfig();
-    SetMode(GET_PARAM(1));
-    speed_setting_ = GET_PARAM(2);
-    ResetModel();
-  }
-  virtual void ResetModel() {
-    last_pts_ = 0;
-    duration_ = 0.0;
-    mismatch_psnr_ = 0.0;
-    mismatch_nframes_ = 0;
-    denoiser_on_ = 0;
-    tune_content_ = 0;
-    base_speed_setting_ = 5;
-    spatial_layer_id_ = 0;
-    temporal_layer_id_ = 0;
-    update_pattern_ = 0;
-    memset(bits_in_buffer_model_, 0, sizeof(bits_in_buffer_model_));
-    memset(bits_total_, 0, sizeof(bits_total_));
-    memset(layer_target_avg_bandwidth_, 0, sizeof(layer_target_avg_bandwidth_));
-    dynamic_drop_layer_ = false;
-    change_bitrate_ = false;
-    last_pts_ref_ = 0;
-  }
-  virtual void BeginPassHook(unsigned int /*pass*/) {}
-
-  // Example pattern for spatial layers and 2 temporal layers used in the
-  // bypass/flexible mode. The pattern corresponds to the pattern
-  // VP9E_TEMPORAL_LAYERING_MODE_0101 (temporal_layering_mode == 2) used in
-  // non-flexible mode, except that we disable inter-layer prediction.
-  void set_frame_flags_bypass_mode(
-      int tl, int num_spatial_layers, int is_key_frame,
-      vpx_svc_ref_frame_config_t *ref_frame_config) {
-    for (int sl = 0; sl < num_spatial_layers; ++sl) {
-      if (!tl) {
-        if (!sl) {
-          ref_frame_config->frame_flags[sl] =
-              VP8_EFLAG_NO_REF_GF | VP8_EFLAG_NO_REF_ARF | VP8_EFLAG_NO_UPD_GF |
-              VP8_EFLAG_NO_UPD_ARF;
-        } else {
-          if (is_key_frame) {
-            ref_frame_config->frame_flags[sl] =
-                VP8_EFLAG_NO_REF_GF | VP8_EFLAG_NO_REF_ARF |
-                VP8_EFLAG_NO_UPD_LAST | VP8_EFLAG_NO_UPD_ARF;
-          } else {
-            ref_frame_config->frame_flags[sl] =
-                VP8_EFLAG_NO_REF_ARF | VP8_EFLAG_NO_UPD_GF |
-                VP8_EFLAG_NO_UPD_ARF | VP8_EFLAG_NO_REF_GF;
-          }
-        }
-      } else if (tl == 1) {
-        if (!sl) {
-          ref_frame_config->frame_flags[sl] =
-              VP8_EFLAG_NO_REF_GF | VP8_EFLAG_NO_REF_ARF |
-              VP8_EFLAG_NO_UPD_LAST | VP8_EFLAG_NO_UPD_GF;
-        } else {
-          ref_frame_config->frame_flags[sl] =
-              VP8_EFLAG_NO_REF_ARF | VP8_EFLAG_NO_UPD_LAST |
-              VP8_EFLAG_NO_UPD_GF | VP8_EFLAG_NO_REF_GF;
-        }
-      }
-      if (tl == 0) {
-        ref_frame_config->lst_fb_idx[sl] = sl;
-        if (sl) {
-          if (is_key_frame) {
-            ref_frame_config->lst_fb_idx[sl] = sl - 1;
-            ref_frame_config->gld_fb_idx[sl] = sl;
-          } else {
-            ref_frame_config->gld_fb_idx[sl] = sl - 1;
-          }
-        } else {
-          ref_frame_config->gld_fb_idx[sl] = 0;
-        }
-        ref_frame_config->alt_fb_idx[sl] = 0;
-      } else if (tl == 1) {
-        ref_frame_config->lst_fb_idx[sl] = sl;
-        ref_frame_config->gld_fb_idx[sl] = num_spatial_layers + sl - 1;
-        ref_frame_config->alt_fb_idx[sl] = num_spatial_layers + sl;
-      }
-    }
-  }
-
-  virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
-                                  ::libvpx_test::Encoder *encoder) {
-    if (video->frame() == 0) {
-      int i;
-      for (i = 0; i < VPX_MAX_LAYERS; ++i) {
-        svc_params_.max_quantizers[i] = 63;
-        svc_params_.min_quantizers[i] = 0;
-      }
-      svc_params_.speed_per_layer[0] = base_speed_setting_;
-      for (i = 1; i < VPX_SS_MAX_LAYERS; ++i) {
-        svc_params_.speed_per_layer[i] = speed_setting_;
-      }
-
-      encoder->Control(VP9E_SET_NOISE_SENSITIVITY, denoiser_on_);
-      encoder->Control(VP9E_SET_SVC, 1);
-      encoder->Control(VP9E_SET_SVC_PARAMETERS, &svc_params_);
-      encoder->Control(VP8E_SET_CPUUSED, speed_setting_);
-      encoder->Control(VP9E_SET_TILE_COLUMNS, 0);
-      encoder->Control(VP8E_SET_MAX_INTRA_BITRATE_PCT, 300);
-      encoder->Control(VP9E_SET_TILE_COLUMNS, (cfg_.g_threads >> 1));
-      encoder->Control(VP9E_SET_ROW_MT, 1);
-      encoder->Control(VP8E_SET_STATIC_THRESHOLD, 1);
-      encoder->Control(VP9E_SET_TUNE_CONTENT, tune_content_);
-    }
-
-    if (update_pattern_ && video->frame() >= 100) {
-      vpx_svc_layer_id_t layer_id;
-      if (video->frame() == 100) {
-        cfg_.temporal_layering_mode = VP9E_TEMPORAL_LAYERING_MODE_BYPASS;
-        encoder->Config(&cfg_);
-      }
-      // Set layer id since the pattern changed.
-      layer_id.spatial_layer_id = 0;
-      layer_id.temporal_layer_id = (video->frame() % 2 != 0);
-      encoder->Control(VP9E_SET_SVC_LAYER_ID, &layer_id);
-      set_frame_flags_bypass_mode(layer_id.temporal_layer_id,
-                                  number_spatial_layers_, 0, &ref_frame_config);
-      encoder->Control(VP9E_SET_SVC_REF_FRAME_CONFIG, &ref_frame_config);
-    }
-
-    if (change_bitrate_ && video->frame() == 200) {
-      duration_ = (last_pts_ + 1) * timebase_;
-      for (int sl = 0; sl < number_spatial_layers_; ++sl) {
-        for (int tl = 0; tl < number_temporal_layers_; ++tl) {
-          const int layer = sl * number_temporal_layers_ + tl;
-          const double file_size_in_kb = bits_total_[layer] / 1000.;
-          file_datarate_[layer] = file_size_in_kb / duration_;
-        }
-      }
-
-      CheckLayerRateTargeting(&cfg_, number_spatial_layers_,
-                              number_temporal_layers_, file_datarate_, 0.78,
-                              1.15);
-
-      memset(file_datarate_, 0, sizeof(file_datarate_));
-      memset(bits_total_, 0, sizeof(bits_total_));
-      int64_t bits_in_buffer_model_tmp[VPX_MAX_LAYERS];
-      last_pts_ref_ = last_pts_;
-      // Set new target bitrate.
-      cfg_.rc_target_bitrate = cfg_.rc_target_bitrate >> 1;
-      // Buffer level should not reset on dynamic bitrate change.
-      memcpy(bits_in_buffer_model_tmp, bits_in_buffer_model_,
-             sizeof(bits_in_buffer_model_));
-      assign_layer_bitrates(&cfg_, &svc_params_, cfg_.ss_number_layers,
-                            cfg_.ts_number_layers, cfg_.temporal_layering_mode,
-                            layer_target_avg_bandwidth_, bits_in_buffer_model_);
-      memcpy(bits_in_buffer_model_, bits_in_buffer_model_tmp,
-             sizeof(bits_in_buffer_model_));
-
-      // Change config to update encoder with new bitrate configuration.
-      encoder->Config(&cfg_);
-    }
-
-    if (dynamic_drop_layer_) {
-      if (video->frame() == 100) {
-        // Change layer bitrates to set top layer to 0. This will trigger skip
-        // encoding/dropping of top spatial layer.
-        cfg_.rc_target_bitrate -= cfg_.layer_target_bitrate[2];
-        cfg_.layer_target_bitrate[2] = 0;
-        encoder->Config(&cfg_);
-      } else if (video->frame() == 300) {
-        // Change layer bitrate on top layer to non-zero to start encoding it
-        // again.
-        cfg_.layer_target_bitrate[2] = 500;
-        cfg_.rc_target_bitrate += cfg_.layer_target_bitrate[2];
-        encoder->Config(&cfg_);
-      }
-    }
-    const vpx_rational_t tb = video->timebase();
-    timebase_ = static_cast<double>(tb.num) / tb.den;
-    duration_ = 0;
-  }
-
-  virtual void PostEncodeFrameHook(::libvpx_test::Encoder *encoder) {
-    vpx_svc_layer_id_t layer_id;
-    encoder->Control(VP9E_GET_SVC_LAYER_ID, &layer_id);
-    spatial_layer_id_ = layer_id.spatial_layer_id;
-    temporal_layer_id_ = layer_id.temporal_layer_id;
-    // Update buffer with per-layer target frame bandwidth, this is done
-    // for every frame passed to the encoder (encoded or dropped).
-    // For temporal layers, update the cumulative buffer level.
-    for (int sl = 0; sl < number_spatial_layers_; ++sl) {
-      for (int tl = temporal_layer_id_; tl < number_temporal_layers_; ++tl) {
-        const int layer = sl * number_temporal_layers_ + tl;
-        bits_in_buffer_model_[layer] +=
-            static_cast<int64_t>(layer_target_avg_bandwidth_[layer]);
-      }
-    }
-  }
-
-  vpx_codec_err_t parse_superframe_index(const uint8_t *data, size_t data_sz,
-                                         uint32_t sizes[8], int *count) {
-    uint8_t marker;
-    marker = *(data + data_sz - 1);
-    *count = 0;
-    if ((marker & 0xe0) == 0xc0) {
-      const uint32_t frames = (marker & 0x7) + 1;
-      const uint32_t mag = ((marker >> 3) & 0x3) + 1;
-      const size_t index_sz = 2 + mag * frames;
-      // This chunk is marked as having a superframe index but doesn't have
-      // enough data for it, thus it's an invalid superframe index.
-      if (data_sz < index_sz) return VPX_CODEC_CORRUPT_FRAME;
-      {
-        const uint8_t marker2 = *(data + data_sz - index_sz);
-        // This chunk is marked as having a superframe index but doesn't have
-        // the matching marker byte at the front of the index therefore it's an
-        // invalid chunk.
-        if (marker != marker2) return VPX_CODEC_CORRUPT_FRAME;
-      }
-      {
-        uint32_t i, j;
-        const uint8_t *x = &data[data_sz - index_sz + 1];
-        for (i = 0; i < frames; ++i) {
-          uint32_t this_sz = 0;
-
-          for (j = 0; j < mag; ++j) this_sz |= (*x++) << (j * 8);
-          sizes[i] = this_sz;
-        }
-        *count = frames;
-      }
-    }
-    return VPX_CODEC_OK;
-  }
-
-  virtual void FramePktHook(const vpx_codec_cx_pkt_t *pkt) {
-    uint32_t sizes[8] = { 0 };
-    int count = 0;
-    last_pts_ = pkt->data.frame.pts;
-    const bool key_frame =
-        (pkt->data.frame.flags & VPX_FRAME_IS_KEY) ? true : false;
-    parse_superframe_index(static_cast<const uint8_t *>(pkt->data.frame.buf),
-                           pkt->data.frame.sz, sizes, &count);
-    if (!dynamic_drop_layer_) ASSERT_EQ(count, number_spatial_layers_);
-    for (int sl = 0; sl < number_spatial_layers_; ++sl) {
-      sizes[sl] = sizes[sl] << 3;
-      // Update the total encoded bits per layer.
-      // For temporal layers, update the cumulative encoded bits per layer.
-      for (int tl = temporal_layer_id_; tl < number_temporal_layers_; ++tl) {
-        const int layer = sl * number_temporal_layers_ + tl;
-        bits_total_[layer] += static_cast<int64_t>(sizes[sl]);
-        // Update the per-layer buffer level with the encoded frame size.
-        bits_in_buffer_model_[layer] -= static_cast<int64_t>(sizes[sl]);
-        // There should be no buffer underrun, except on the base
-        // temporal layer, since there may be key frames there.
-        if (!key_frame && tl > 0) {
-          ASSERT_GE(bits_in_buffer_model_[layer], 0)
-              << "Buffer Underrun at frame " << pkt->data.frame.pts;
-        }
-      }
-
-      ASSERT_EQ(pkt->data.frame.width[sl],
-                top_sl_width_ * svc_params_.scaling_factor_num[sl] /
-                    svc_params_.scaling_factor_den[sl]);
-
-      ASSERT_EQ(pkt->data.frame.height[sl],
-                top_sl_height_ * svc_params_.scaling_factor_num[sl] /
-                    svc_params_.scaling_factor_den[sl]);
-    }
-  }
-
-  virtual void EndPassHook(void) {
-    if (change_bitrate_) last_pts_ = last_pts_ - last_pts_ref_;
-    duration_ = (last_pts_ + 1) * timebase_;
-    for (int sl = 0; sl < number_spatial_layers_; ++sl) {
-      for (int tl = 0; tl < number_temporal_layers_; ++tl) {
-        const int layer = sl * number_temporal_layers_ + tl;
-        const double file_size_in_kb = bits_total_[layer] / 1000.;
-        file_datarate_[layer] = file_size_in_kb / duration_;
-      }
-    }
-  }
-
-  virtual void MismatchHook(const vpx_image_t *img1, const vpx_image_t *img2) {
-    double mismatch_psnr = compute_psnr(img1, img2);
-    mismatch_psnr_ += mismatch_psnr;
-    ++mismatch_nframes_;
-  }
-
-  unsigned int GetMismatchFrames() { return mismatch_nframes_; }
-
-  vpx_codec_pts_t last_pts_;
-  int64_t bits_in_buffer_model_[VPX_MAX_LAYERS];
-  double timebase_;
-  int64_t bits_total_[VPX_MAX_LAYERS];
-  double duration_;
-  double file_datarate_[VPX_MAX_LAYERS];
-  size_t bits_in_last_frame_;
-  vpx_svc_extra_cfg_t svc_params_;
-  int speed_setting_;
-  double mismatch_psnr_;
-  int mismatch_nframes_;
-  int denoiser_on_;
-  int tune_content_;
-  int base_speed_setting_;
-  int spatial_layer_id_;
-  int temporal_layer_id_;
-  int number_spatial_layers_;
-  int number_temporal_layers_;
-  int layer_target_avg_bandwidth_[VPX_MAX_LAYERS];
-  bool dynamic_drop_layer_;
-  unsigned int top_sl_width_;
-  unsigned int top_sl_height_;
-  vpx_svc_ref_frame_config_t ref_frame_config;
-  int update_pattern_;
-  bool change_bitrate_;
-  vpx_codec_pts_t last_pts_ref_;
-};
-
-// Check basic rate targeting for 1 pass CBR SVC: 2 spatial layers and 1
-// temporal layer, with screen content mode on and same speed setting for all
-// layers.
-TEST_P(DatarateOnePassCbrSvc, OnePassCbrSvc2SL1TLScreenContent1) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  cfg_.ss_number_layers = 2;
-  cfg_.ts_number_layers = 1;
-  cfg_.ts_rate_decimator[0] = 1;
-  cfg_.g_error_resilient = 1;
-  cfg_.g_threads = 1;
-  cfg_.temporal_layering_mode = 0;
-  svc_params_.scaling_factor_num[0] = 144;
-  svc_params_.scaling_factor_den[0] = 288;
-  svc_params_.scaling_factor_num[1] = 288;
-  svc_params_.scaling_factor_den[1] = 288;
-  cfg_.rc_dropframe_thresh = 10;
-  cfg_.kf_max_dist = 9999;
-  number_spatial_layers_ = cfg_.ss_number_layers;
-  number_temporal_layers_ = cfg_.ts_number_layers;
-  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 60);
-  top_sl_width_ = 1280;
-  top_sl_height_ = 720;
-  cfg_.rc_target_bitrate = 500;
-  ResetModel();
-  tune_content_ = 1;
-  base_speed_setting_ = speed_setting_;
-  assign_layer_bitrates(&cfg_, &svc_params_, cfg_.ss_number_layers,
-                        cfg_.ts_number_layers, cfg_.temporal_layering_mode,
-                        layer_target_avg_bandwidth_, bits_in_buffer_model_);
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  CheckLayerRateTargeting(&cfg_, number_spatial_layers_,
-                          number_temporal_layers_, file_datarate_, 0.78, 1.15);
-  EXPECT_EQ(static_cast<unsigned int>(0), GetMismatchFrames());
-}
-
-// Check basic rate targeting for 1 pass CBR SVC: 2 spatial layers and
-// 3 temporal layers. Run CIF clip with 1 thread.
-TEST_P(DatarateOnePassCbrSvc, OnePassCbrSvc2SL3TL) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  cfg_.ss_number_layers = 2;
-  cfg_.ts_number_layers = 3;
-  cfg_.ts_rate_decimator[0] = 4;
-  cfg_.ts_rate_decimator[1] = 2;
-  cfg_.ts_rate_decimator[2] = 1;
-  cfg_.g_error_resilient = 1;
-  cfg_.g_threads = 1;
-  cfg_.temporal_layering_mode = 3;
-  svc_params_.scaling_factor_num[0] = 144;
-  svc_params_.scaling_factor_den[0] = 288;
-  svc_params_.scaling_factor_num[1] = 288;
-  svc_params_.scaling_factor_den[1] = 288;
-  cfg_.rc_dropframe_thresh = 0;
-  cfg_.kf_max_dist = 9999;
-  number_spatial_layers_ = cfg_.ss_number_layers;
-  number_temporal_layers_ = cfg_.ts_number_layers;
-  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
-                                       0, 400);
-  top_sl_width_ = 640;
-  top_sl_height_ = 480;
-  // TODO(marpan): Check that effective_datarate for each layer hits the
-  // layer target_bitrate.
-  for (int i = 200; i <= 800; i += 200) {
-    cfg_.rc_target_bitrate = i;
-    ResetModel();
-    assign_layer_bitrates(&cfg_, &svc_params_, cfg_.ss_number_layers,
-                          cfg_.ts_number_layers, cfg_.temporal_layering_mode,
-                          layer_target_avg_bandwidth_, bits_in_buffer_model_);
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    CheckLayerRateTargeting(&cfg_, number_spatial_layers_,
-                            number_temporal_layers_, file_datarate_, 0.78,
-                            1.15);
-#if CONFIG_VP9_DECODER
-    // Number of temporal layers > 1, so half of the frames in this SVC pattern
-    // will be non-reference frame and hence encoder will avoid loopfilter.
-    // Since frame dropper is off, we can expect 200 (half of the sequence)
-    // mismatched frames.
-    EXPECT_EQ(static_cast<unsigned int>(200), GetMismatchFrames());
-#endif
-  }
-}
-
-// Check basic rate targeting for 1 pass CBR SVC with denoising.
-// 2 spatial layers and 3 temporal layer. Run HD clip with 2 threads.
-TEST_P(DatarateOnePassCbrSvc, OnePassCbrSvc2SL3TLDenoiserOn) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  cfg_.ss_number_layers = 2;
-  cfg_.ts_number_layers = 3;
-  cfg_.ts_rate_decimator[0] = 4;
-  cfg_.ts_rate_decimator[1] = 2;
-  cfg_.ts_rate_decimator[2] = 1;
-  cfg_.g_error_resilient = 1;
-  cfg_.g_threads = 2;
-  cfg_.temporal_layering_mode = 3;
-  svc_params_.scaling_factor_num[0] = 144;
-  svc_params_.scaling_factor_den[0] = 288;
-  svc_params_.scaling_factor_num[1] = 288;
-  svc_params_.scaling_factor_den[1] = 288;
-  cfg_.rc_dropframe_thresh = 0;
-  cfg_.kf_max_dist = 9999;
-  number_spatial_layers_ = cfg_.ss_number_layers;
-  number_temporal_layers_ = cfg_.ts_number_layers;
-  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
-                                       0, 400);
-  top_sl_width_ = 640;
-  top_sl_height_ = 480;
-  // TODO(marpan): Check that effective_datarate for each layer hits the
-  // layer target_bitrate.
-  // For SVC, noise_sen = 1 means denoising only the top spatial layer
-  // noise_sen = 2 means denoising the two top spatial layers.
-  for (int noise_sen = 1; noise_sen <= 2; noise_sen++) {
-    for (int i = 600; i <= 1000; i += 200) {
-      cfg_.rc_target_bitrate = i;
-      ResetModel();
-      denoiser_on_ = noise_sen;
-      assign_layer_bitrates(&cfg_, &svc_params_, cfg_.ss_number_layers,
-                            cfg_.ts_number_layers, cfg_.temporal_layering_mode,
-                            layer_target_avg_bandwidth_, bits_in_buffer_model_);
-      ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-      CheckLayerRateTargeting(&cfg_, number_spatial_layers_,
-                              number_temporal_layers_, file_datarate_, 0.78,
-                              1.15);
-#if CONFIG_VP9_DECODER
-      // Number of temporal layers > 1, so half of the frames in this SVC
-      // pattern
-      // will be non-reference frame and hence encoder will avoid loopfilter.
-      // Since frame dropper is off, we can expect 200 (half of the sequence)
-      // mismatched frames.
-      EXPECT_EQ(static_cast<unsigned int>(200), GetMismatchFrames());
-#endif
-    }
-  }
-}
-
-// Check basic rate targeting for 1 pass CBR SVC: 2 spatial layers and 3
-// temporal layers. Run CIF clip with 1 thread, and a few short key frame periods.
-TEST_P(DatarateOnePassCbrSvc, OnePassCbrSvc2SL3TLSmallKf) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  cfg_.ss_number_layers = 2;
-  cfg_.ts_number_layers = 3;
-  cfg_.ts_rate_decimator[0] = 4;
-  cfg_.ts_rate_decimator[1] = 2;
-  cfg_.ts_rate_decimator[2] = 1;
-  cfg_.g_error_resilient = 1;
-  cfg_.g_threads = 1;
-  cfg_.temporal_layering_mode = 3;
-  svc_params_.scaling_factor_num[0] = 144;
-  svc_params_.scaling_factor_den[0] = 288;
-  svc_params_.scaling_factor_num[1] = 288;
-  svc_params_.scaling_factor_den[1] = 288;
-  cfg_.rc_dropframe_thresh = 10;
-  cfg_.rc_target_bitrate = 400;
-  number_spatial_layers_ = cfg_.ss_number_layers;
-  number_temporal_layers_ = cfg_.ts_number_layers;
-  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
-                                       0, 400);
-  top_sl_width_ = 640;
-  top_sl_height_ = 480;
-  // For this 3 temporal layer case, pattern repeats every 4 frames, so choose
-  // 4 neighboring key frame periods (so key frame will land on 0-2-1-2).
-  for (int j = 64; j <= 67; j++) {
-    cfg_.kf_max_dist = j;
-    ResetModel();
-    assign_layer_bitrates(&cfg_, &svc_params_, cfg_.ss_number_layers,
-                          cfg_.ts_number_layers, cfg_.temporal_layering_mode,
-                          layer_target_avg_bandwidth_, bits_in_buffer_model_);
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    CheckLayerRateTargeting(&cfg_, number_spatial_layers_,
-                            number_temporal_layers_, file_datarate_, 0.78,
-                            1.15);
-  }
-}
-
-// Check basic rate targeting for 1 pass CBR SVC: 2 spatial layers and
-// 3 temporal layers. Run HD clip with 4 threads.
-TEST_P(DatarateOnePassCbrSvc, OnePassCbrSvc2SL3TL4Threads) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  cfg_.ss_number_layers = 2;
-  cfg_.ts_number_layers = 3;
-  cfg_.ts_rate_decimator[0] = 4;
-  cfg_.ts_rate_decimator[1] = 2;
-  cfg_.ts_rate_decimator[2] = 1;
-  cfg_.g_error_resilient = 1;
-  cfg_.g_threads = 4;
-  cfg_.temporal_layering_mode = 3;
-  svc_params_.scaling_factor_num[0] = 144;
-  svc_params_.scaling_factor_den[0] = 288;
-  svc_params_.scaling_factor_num[1] = 288;
-  svc_params_.scaling_factor_den[1] = 288;
-  cfg_.rc_dropframe_thresh = 0;
-  cfg_.kf_max_dist = 9999;
-  number_spatial_layers_ = cfg_.ss_number_layers;
-  number_temporal_layers_ = cfg_.ts_number_layers;
-  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 60);
-  top_sl_width_ = 1280;
-  top_sl_height_ = 720;
-  cfg_.rc_target_bitrate = 800;
-  ResetModel();
-  assign_layer_bitrates(&cfg_, &svc_params_, cfg_.ss_number_layers,
-                        cfg_.ts_number_layers, cfg_.temporal_layering_mode,
-                        layer_target_avg_bandwidth_, bits_in_buffer_model_);
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  CheckLayerRateTargeting(&cfg_, number_spatial_layers_,
-                          number_temporal_layers_, file_datarate_, 0.78, 1.15);
-#if CONFIG_VP9_DECODER
-  // Number of temporal layers > 1, so half of the frames in this SVC pattern
-  // will be non-reference frame and hence encoder will avoid loopfilter.
-  // Since frame dropper is off, we can expect 30 (half of the sequence)
-  // mismatched frames.
-  EXPECT_EQ(static_cast<unsigned int>(30), GetMismatchFrames());
-#endif
-}
-
-// Check basic rate targeting for 1 pass CBR SVC: 3 spatial layers and
-// 3 temporal layers. Run CIF clip with 1 thread.
-TEST_P(DatarateOnePassCbrSvc, OnePassCbrSvc3SL3TL) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  cfg_.ss_number_layers = 3;
-  cfg_.ts_number_layers = 3;
-  cfg_.ts_rate_decimator[0] = 4;
-  cfg_.ts_rate_decimator[1] = 2;
-  cfg_.ts_rate_decimator[2] = 1;
-  cfg_.g_error_resilient = 1;
-  cfg_.g_threads = 1;
-  cfg_.temporal_layering_mode = 3;
-  svc_params_.scaling_factor_num[0] = 72;
-  svc_params_.scaling_factor_den[0] = 288;
-  svc_params_.scaling_factor_num[1] = 144;
-  svc_params_.scaling_factor_den[1] = 288;
-  svc_params_.scaling_factor_num[2] = 288;
-  svc_params_.scaling_factor_den[2] = 288;
-  cfg_.rc_dropframe_thresh = 0;
-  cfg_.kf_max_dist = 9999;
-  number_spatial_layers_ = cfg_.ss_number_layers;
-  number_temporal_layers_ = cfg_.ts_number_layers;
-  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
-                                       0, 400);
-  top_sl_width_ = 640;
-  top_sl_height_ = 480;
-  cfg_.rc_target_bitrate = 800;
-  ResetModel();
-  assign_layer_bitrates(&cfg_, &svc_params_, cfg_.ss_number_layers,
-                        cfg_.ts_number_layers, cfg_.temporal_layering_mode,
-                        layer_target_avg_bandwidth_, bits_in_buffer_model_);
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  CheckLayerRateTargeting(&cfg_, number_spatial_layers_,
-                          number_temporal_layers_, file_datarate_, 0.78, 1.15);
-#if CONFIG_VP9_DECODER
-  // Number of temporal layers > 1, so half of the frames in this SVC pattern
-  // will be non-reference frame and hence encoder will avoid loopfilter.
-  // Since frame dropper is off, we can expect 200 (half of the sequence)
-  // mismatched frames.
-  EXPECT_EQ(static_cast<unsigned int>(200), GetMismatchFrames());
-#endif
-}
-
-// Check rate targeting for 1 pass CBR SVC: 3 spatial layers and
-// 3 temporal layers, changing the target bitrate at the middle of encoding.
-TEST_P(DatarateOnePassCbrSvc, OnePassCbrSvc3SL3TLDynamicBitrateChange) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  cfg_.ss_number_layers = 3;
-  cfg_.ts_number_layers = 3;
-  cfg_.ts_rate_decimator[0] = 4;
-  cfg_.ts_rate_decimator[1] = 2;
-  cfg_.ts_rate_decimator[2] = 1;
-  cfg_.g_error_resilient = 1;
-  cfg_.g_threads = 1;
-  cfg_.temporal_layering_mode = 3;
-  svc_params_.scaling_factor_num[0] = 72;
-  svc_params_.scaling_factor_den[0] = 288;
-  svc_params_.scaling_factor_num[1] = 144;
-  svc_params_.scaling_factor_den[1] = 288;
-  svc_params_.scaling_factor_num[2] = 288;
-  svc_params_.scaling_factor_den[2] = 288;
-  cfg_.rc_dropframe_thresh = 0;
-  cfg_.kf_max_dist = 9999;
-  number_spatial_layers_ = cfg_.ss_number_layers;
-  number_temporal_layers_ = cfg_.ts_number_layers;
-  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
-                                       0, 400);
-  top_sl_width_ = 640;
-  top_sl_height_ = 480;
-  cfg_.rc_target_bitrate = 800;
-  ResetModel();
-  change_bitrate_ = true;
-  assign_layer_bitrates(&cfg_, &svc_params_, cfg_.ss_number_layers,
-                        cfg_.ts_number_layers, cfg_.temporal_layering_mode,
-                        layer_target_avg_bandwidth_, bits_in_buffer_model_);
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  CheckLayerRateTargeting(&cfg_, number_spatial_layers_,
-                          number_temporal_layers_, file_datarate_, 0.78, 1.15);
-#if CONFIG_VP9_DECODER
-  // Number of temporal layers > 1, so half of the frames in this SVC pattern
-  // will be non-reference frame and hence encoder will avoid loopfilter.
-  // Since frame dropper is off, we can expect 200 (half of the sequence)
-  // mismatched frames.
-  EXPECT_EQ(static_cast<unsigned int>(200), GetMismatchFrames());
-#endif
-}
-
-// Check basic rate targeting for 1 pass CBR SVC: 3 spatial layers and
-// 2 temporal layers, with a change on the fly from the fixed SVC pattern to one
-// generated via SVC_SET_REF_FRAME_CONFIG. The new pattern also disables
-// inter-layer prediction.
-TEST_P(DatarateOnePassCbrSvc, OnePassCbrSvc3SL2TLDynamicPatternChange) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  cfg_.ss_number_layers = 3;
-  cfg_.ts_number_layers = 2;
-  cfg_.ts_rate_decimator[0] = 2;
-  cfg_.ts_rate_decimator[1] = 1;
-  cfg_.g_error_resilient = 1;
-  cfg_.g_threads = 1;
-  cfg_.temporal_layering_mode = 2;
-  svc_params_.scaling_factor_num[0] = 72;
-  svc_params_.scaling_factor_den[0] = 288;
-  svc_params_.scaling_factor_num[1] = 144;
-  svc_params_.scaling_factor_den[1] = 288;
-  svc_params_.scaling_factor_num[2] = 288;
-  svc_params_.scaling_factor_den[2] = 288;
-  cfg_.rc_dropframe_thresh = 0;
-  cfg_.kf_max_dist = 9999;
-  number_spatial_layers_ = cfg_.ss_number_layers;
-  number_temporal_layers_ = cfg_.ts_number_layers;
-  // Change SVC pattern on the fly.
-  update_pattern_ = 1;
-  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
-                                       0, 400);
-  top_sl_width_ = 640;
-  top_sl_height_ = 480;
-  cfg_.rc_target_bitrate = 800;
-  ResetModel();
-  assign_layer_bitrates(&cfg_, &svc_params_, cfg_.ss_number_layers,
-                        cfg_.ts_number_layers, cfg_.temporal_layering_mode,
-                        layer_target_avg_bandwidth_, bits_in_buffer_model_);
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  CheckLayerRateTargeting(&cfg_, number_spatial_layers_,
-                          number_temporal_layers_, file_datarate_, 0.78, 1.15);
-#if CONFIG_VP9_DECODER
-  // Number of temporal layers > 1, so half of the frames in this SVC pattern
-  // will be non-reference frame and hence encoder will avoid loopfilter.
-  // Since frame dropper is off, we can expect 200 (half of the sequence)
-  // mismatched frames.
-  EXPECT_EQ(static_cast<unsigned int>(200), GetMismatchFrames());
-#endif
-}
-
-// Check basic rate targeting for 1 pass CBR SVC with 3 spatial layers and on
-// the fly switching to 2 spatial layers and then back to 3. This switch is done
-// by setting top spatial layer bitrate to 0, and then back to non-zero, during
-// the sequence.
-TEST_P(DatarateOnePassCbrSvc, OnePassCbrSvc3SL_to_2SL_dynamic) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  cfg_.ss_number_layers = 3;
-  cfg_.ts_number_layers = 1;
-  cfg_.ts_rate_decimator[0] = 1;
-  cfg_.g_error_resilient = 1;
-  cfg_.g_threads = 1;
-  cfg_.temporal_layering_mode = 0;
-  svc_params_.scaling_factor_num[0] = 72;
-  svc_params_.scaling_factor_den[0] = 288;
-  svc_params_.scaling_factor_num[1] = 144;
-  svc_params_.scaling_factor_den[1] = 288;
-  svc_params_.scaling_factor_num[2] = 288;
-  svc_params_.scaling_factor_den[2] = 288;
-  cfg_.rc_dropframe_thresh = 0;
-  cfg_.kf_max_dist = 9999;
-  number_spatial_layers_ = cfg_.ss_number_layers;
-  number_temporal_layers_ = cfg_.ts_number_layers;
-  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
-                                       0, 400);
-  top_sl_width_ = 640;
-  top_sl_height_ = 480;
-  cfg_.rc_target_bitrate = 800;
-  ResetModel();
-  dynamic_drop_layer_ = true;
-  assign_layer_bitrates(&cfg_, &svc_params_, cfg_.ss_number_layers,
-                        cfg_.ts_number_layers, cfg_.temporal_layering_mode,
-                        layer_target_avg_bandwidth_, bits_in_buffer_model_);
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  // Don't check rate targeting on top spatial layer since it will be skipped
-  // for part of the sequence.
-  CheckLayerRateTargeting(&cfg_, number_spatial_layers_ - 1,
-                          number_temporal_layers_, file_datarate_, 0.78, 1.15);
-}
-
-// Check basic rate targeting for 1 pass CBR SVC: 3 spatial layers and 3
-// temporal layers. Run CIF clip with 1 thread, and few short key frame periods.
-TEST_P(DatarateOnePassCbrSvc, OnePassCbrSvc3SL3TLSmallKf) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  cfg_.ss_number_layers = 3;
-  cfg_.ts_number_layers = 3;
-  cfg_.ts_rate_decimator[0] = 4;
-  cfg_.ts_rate_decimator[1] = 2;
-  cfg_.ts_rate_decimator[2] = 1;
-  cfg_.g_error_resilient = 1;
-  cfg_.g_threads = 1;
-  cfg_.temporal_layering_mode = 3;
-  svc_params_.scaling_factor_num[0] = 72;
-  svc_params_.scaling_factor_den[0] = 288;
-  svc_params_.scaling_factor_num[1] = 144;
-  svc_params_.scaling_factor_den[1] = 288;
-  svc_params_.scaling_factor_num[2] = 288;
-  svc_params_.scaling_factor_den[2] = 288;
-  cfg_.rc_dropframe_thresh = 10;
-  cfg_.rc_target_bitrate = 800;
-  number_spatial_layers_ = cfg_.ss_number_layers;
-  number_temporal_layers_ = cfg_.ts_number_layers;
-  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
-                                       0, 400);
-  top_sl_width_ = 640;
-  top_sl_height_ = 480;
-  // For this 3 temporal layer case, pattern repeats every 4 frames, so choose
-  // 4 key neighboring key frame periods (so key frame will land on 0-2-1-2).
-  for (int j = 32; j <= 35; j++) {
-    cfg_.kf_max_dist = j;
-    ResetModel();
-    assign_layer_bitrates(&cfg_, &svc_params_, cfg_.ss_number_layers,
-                          cfg_.ts_number_layers, cfg_.temporal_layering_mode,
-                          layer_target_avg_bandwidth_, bits_in_buffer_model_);
-    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-    CheckLayerRateTargeting(&cfg_, number_spatial_layers_,
-                            number_temporal_layers_, file_datarate_, 0.78,
-                            1.15);
-  }
-}
-
-// Check basic rate targeting for 1 pass CBR SVC: 3 spatial layers and
-// 3 temporal layers. Run HD clip with 4 threads.
-TEST_P(DatarateOnePassCbrSvc, OnePassCbrSvc3SL3TL4threads) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  cfg_.ss_number_layers = 3;
-  cfg_.ts_number_layers = 3;
-  cfg_.ts_rate_decimator[0] = 4;
-  cfg_.ts_rate_decimator[1] = 2;
-  cfg_.ts_rate_decimator[2] = 1;
-  cfg_.g_error_resilient = 1;
-  cfg_.g_threads = 4;
-  cfg_.temporal_layering_mode = 3;
-  svc_params_.scaling_factor_num[0] = 72;
-  svc_params_.scaling_factor_den[0] = 288;
-  svc_params_.scaling_factor_num[1] = 144;
-  svc_params_.scaling_factor_den[1] = 288;
-  svc_params_.scaling_factor_num[2] = 288;
-  svc_params_.scaling_factor_den[2] = 288;
-  cfg_.rc_dropframe_thresh = 0;
-  cfg_.kf_max_dist = 9999;
-  number_spatial_layers_ = cfg_.ss_number_layers;
-  number_temporal_layers_ = cfg_.ts_number_layers;
-  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 60);
-  top_sl_width_ = 1280;
-  top_sl_height_ = 720;
-  cfg_.rc_target_bitrate = 800;
-  ResetModel();
-  assign_layer_bitrates(&cfg_, &svc_params_, cfg_.ss_number_layers,
-                        cfg_.ts_number_layers, cfg_.temporal_layering_mode,
-                        layer_target_avg_bandwidth_, bits_in_buffer_model_);
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  CheckLayerRateTargeting(&cfg_, number_spatial_layers_,
-                          number_temporal_layers_, file_datarate_, 0.78, 1.15);
-#if CONFIG_VP9_DECODER
-  // Number of temporal layers > 1, so half of the frames in this SVC pattern
-  // will be non-reference frame and hence encoder will avoid loopfilter.
-  // Since frame dropper is off, we can expect 30 (half of the sequence)
-  // mismatched frames.
-  EXPECT_EQ(static_cast<unsigned int>(30), GetMismatchFrames());
-#endif
-}
-
-// Run SVC encoder for 1 temporal layer, 2 spatial layers, with spatial
-// downscale 5x5.
-TEST_P(DatarateOnePassCbrSvc, OnePassCbrSvc2SL1TL5x5MultipleRuns) {
-  cfg_.rc_buf_initial_sz = 500;
-  cfg_.rc_buf_optimal_sz = 500;
-  cfg_.rc_buf_sz = 1000;
-  cfg_.rc_min_quantizer = 0;
-  cfg_.rc_max_quantizer = 63;
-  cfg_.rc_end_usage = VPX_CBR;
-  cfg_.g_lag_in_frames = 0;
-  cfg_.ss_number_layers = 2;
-  cfg_.ts_number_layers = 1;
-  cfg_.ts_rate_decimator[0] = 1;
-  cfg_.g_error_resilient = 1;
-  cfg_.g_threads = 3;
-  cfg_.temporal_layering_mode = 0;
-  svc_params_.scaling_factor_num[0] = 256;
-  svc_params_.scaling_factor_den[0] = 1280;
-  svc_params_.scaling_factor_num[1] = 1280;
-  svc_params_.scaling_factor_den[1] = 1280;
-  cfg_.rc_dropframe_thresh = 10;
-  cfg_.kf_max_dist = 999999;
-  cfg_.kf_min_dist = 0;
-  cfg_.ss_target_bitrate[0] = 300;
-  cfg_.ss_target_bitrate[1] = 1400;
-  cfg_.layer_target_bitrate[0] = 300;
-  cfg_.layer_target_bitrate[1] = 1400;
-  cfg_.rc_target_bitrate = 1700;
-  number_spatial_layers_ = cfg_.ss_number_layers;
-  number_temporal_layers_ = cfg_.ts_number_layers;
-  ResetModel();
-  layer_target_avg_bandwidth_[0] = cfg_.layer_target_bitrate[0] * 1000 / 30;
-  bits_in_buffer_model_[0] =
-      cfg_.layer_target_bitrate[0] * cfg_.rc_buf_initial_sz;
-  layer_target_avg_bandwidth_[1] = cfg_.layer_target_bitrate[1] * 1000 / 30;
-  bits_in_buffer_model_[1] =
-      cfg_.layer_target_bitrate[1] * cfg_.rc_buf_initial_sz;
-  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 60);
-  top_sl_width_ = 1280;
-  top_sl_height_ = 720;
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
-  CheckLayerRateTargeting(&cfg_, number_spatial_layers_,
-                          number_temporal_layers_, file_datarate_, 0.78, 1.15);
-  EXPECT_EQ(static_cast<unsigned int>(0), GetMismatchFrames());
-}
-
-VP8_INSTANTIATE_TEST_CASE(DatarateTestLarge, ALL_TEST_MODES,
-                          ::testing::Values(0));
-VP8_INSTANTIATE_TEST_CASE(DatarateTestRealTime,
-                          ::testing::Values(::libvpx_test::kRealTime),
-                          ::testing::Values(-6, -12));
-VP9_INSTANTIATE_TEST_CASE(DatarateTestVP9Large,
-                          ::testing::Values(::libvpx_test::kOnePassGood,
-                                            ::libvpx_test::kRealTime),
-                          ::testing::Range(2, 9));
-VP9_INSTANTIATE_TEST_CASE(DatarateTestVP9RealTime,
-                          ::testing::Values(::libvpx_test::kRealTime),
-                          ::testing::Range(5, 9));
-#if CONFIG_VP9_TEMPORAL_DENOISING
-VP9_INSTANTIATE_TEST_CASE(DatarateTestVP9LargeDenoiser,
-                          ::testing::Values(::libvpx_test::kRealTime),
-                          ::testing::Range(5, 9));
-#endif
-VP9_INSTANTIATE_TEST_CASE(DatarateOnePassCbrSvc,
-                          ::testing::Values(::libvpx_test::kRealTime),
-                          ::testing::Range(5, 9));
-}  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct16x16_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct16x16_test.cc
index ce0bd37..9ccf2b8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct16x16_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct16x16_test.cc
@@ -11,6 +11,7 @@
 #include <math.h>
 #include <stdlib.h>
 #include <string.h>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
@@ -229,10 +230,9 @@
 typedef void (*IhtFunc)(const tran_low_t *in, uint8_t *out, int stride,
                         int tx_type);
 
-typedef std::tr1::tuple<FdctFunc, IdctFunc, int, vpx_bit_depth_t> Dct16x16Param;
-typedef std::tr1::tuple<FhtFunc, IhtFunc, int, vpx_bit_depth_t> Ht16x16Param;
-typedef std::tr1::tuple<IdctFunc, IdctFunc, int, vpx_bit_depth_t>
-    Idct16x16Param;
+typedef std::tuple<FdctFunc, IdctFunc, int, vpx_bit_depth_t> Dct16x16Param;
+typedef std::tuple<FhtFunc, IhtFunc, int, vpx_bit_depth_t> Ht16x16Param;
+typedef std::tuple<IdctFunc, IdctFunc, int, vpx_bit_depth_t> Idct16x16Param;
 
 void fdct16x16_ref(const int16_t *in, tran_low_t *out, int stride,
                    int /*tx_type*/) {
@@ -744,7 +744,7 @@
   CompareInvReference(ref_txfm_, thresh_);
 }
 
-using std::tr1::make_tuple;
+using std::make_tuple;
 
 #if CONFIG_VP9_HIGHBITDEPTH
 INSTANTIATE_TEST_CASE_P(
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct32x32_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct32x32_test.cc
index a95ff97..94d6b37 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct32x32_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct32x32_test.cc
@@ -11,6 +11,7 @@
 #include <math.h>
 #include <stdlib.h>
 #include <string.h>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
@@ -18,6 +19,7 @@
 #include "./vpx_config.h"
 #include "./vpx_dsp_rtcd.h"
 #include "test/acm_random.h"
+#include "test/bench.h"
 #include "test/clear_system_state.h"
 #include "test/register_state_check.h"
 #include "test/util.h"
@@ -66,7 +68,7 @@
 typedef void (*FwdTxfmFunc)(const int16_t *in, tran_low_t *out, int stride);
 typedef void (*InvTxfmFunc)(const tran_low_t *in, uint8_t *out, int stride);
 
-typedef std::tr1::tuple<FwdTxfmFunc, InvTxfmFunc, int, vpx_bit_depth_t>
+typedef std::tuple<FwdTxfmFunc, InvTxfmFunc, int, vpx_bit_depth_t>
     Trans32x32Param;
 
 #if CONFIG_VP9_HIGHBITDEPTH
@@ -79,7 +81,8 @@
 }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
-class Trans32x32Test : public ::testing::TestWithParam<Trans32x32Param> {
+class Trans32x32Test : public AbstractBench,
+                       public ::testing::TestWithParam<Trans32x32Param> {
  public:
   virtual ~Trans32x32Test() {}
   virtual void SetUp() {
@@ -99,8 +102,14 @@
   int mask_;
   FwdTxfmFunc fwd_txfm_;
   InvTxfmFunc inv_txfm_;
+
+  int16_t *bench_in_;
+  tran_low_t *bench_out_;
+  virtual void Run();
 };
 
+void Trans32x32Test::Run() { fwd_txfm_(bench_in_, bench_out_, 32); }
+
 TEST_P(Trans32x32Test, AccuracyCheck) {
   ACMRandom rnd(ACMRandom::DeterministicSeed());
   uint32_t max_error = 0;
@@ -237,6 +246,19 @@
   }
 }
 
+TEST_P(Trans32x32Test, DISABLED_Speed) {
+  ACMRandom rnd(ACMRandom::DeterministicSeed());
+
+  DECLARE_ALIGNED(16, int16_t, input_extreme_block[kNumCoeffs]);
+  DECLARE_ALIGNED(16, tran_low_t, output_block[kNumCoeffs]);
+
+  bench_in_ = input_extreme_block;
+  bench_out_ = output_block;
+
+  RunNTimes(INT16_MAX);
+  PrintMedian("32x32");
+}
+
 TEST_P(Trans32x32Test, InverseAccuracy) {
   ACMRandom rnd(ACMRandom::DeterministicSeed());
   const int count_test_block = 1000;
@@ -292,7 +314,7 @@
   }
 }
 
-using std::tr1::make_tuple;
+using std::make_tuple;
 
 #if CONFIG_VP9_HIGHBITDEPTH
 INSTANTIATE_TEST_CASE_P(
@@ -371,7 +393,7 @@
     VSX, Trans32x32Test,
     ::testing::Values(make_tuple(&vpx_fdct32x32_c, &vpx_idct32x32_1024_add_vsx,
                                  0, VPX_BITS_8),
-                      make_tuple(&vpx_fdct32x32_rd_c,
+                      make_tuple(&vpx_fdct32x32_rd_vsx,
                                  &vpx_idct32x32_1024_add_vsx, 1, VPX_BITS_8)));
 #endif  // HAVE_VSX && !CONFIG_VP9_HIGHBITDEPTH && !CONFIG_EMULATE_HARDWARE
 }  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct_partial_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct_partial_test.cc
index f06ff40..c889e92 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct_partial_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct_partial_test.cc
@@ -11,8 +11,8 @@
 #include <math.h>
 #include <stdlib.h>
 #include <string.h>
-
 #include <limits>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
@@ -28,8 +28,8 @@
 
 using libvpx_test::ACMRandom;
 using libvpx_test::Buffer;
-using std::tr1::make_tuple;
-using std::tr1::tuple;
+using std::make_tuple;
+using std::tuple;
 
 namespace {
 typedef void (*PartialFdctFunc)(const int16_t *in, tran_low_t *out, int stride);
@@ -39,10 +39,14 @@
 
 tran_low_t partial_fdct_ref(const Buffer<int16_t> &in, int size) {
   int64_t sum = 0;
-  for (int y = 0; y < size; ++y) {
-    for (int x = 0; x < size; ++x) {
-      sum += in.TopLeftPixel()[y * in.stride() + x];
+  if (in.TopLeftPixel() != NULL) {
+    for (int y = 0; y < size; ++y) {
+      for (int x = 0; x < size; ++x) {
+        sum += in.TopLeftPixel()[y * in.stride() + x];
+      }
     }
+  } else {
+    assert(0);
   }
 
   switch (size) {
@@ -77,21 +81,25 @@
     Buffer<tran_low_t> output_block = Buffer<tran_low_t>(size_, size_, 0, 16);
     ASSERT_TRUE(output_block.Init());
 
-    for (int i = 0; i < 100; ++i) {
-      if (i == 0) {
-        input_block.Set(maxvalue);
-      } else if (i == 1) {
-        input_block.Set(minvalue);
-      } else {
-        input_block.Set(&rnd, minvalue, maxvalue);
+    if (output_block.TopLeftPixel() != NULL) {
+      for (int i = 0; i < 100; ++i) {
+        if (i == 0) {
+          input_block.Set(maxvalue);
+        } else if (i == 1) {
+          input_block.Set(minvalue);
+        } else {
+          input_block.Set(&rnd, minvalue, maxvalue);
+        }
+
+        ASM_REGISTER_STATE_CHECK(fwd_txfm_(input_block.TopLeftPixel(),
+                                           output_block.TopLeftPixel(),
+                                           input_block.stride()));
+
+        EXPECT_EQ(partial_fdct_ref(input_block, size_),
+                  output_block.TopLeftPixel()[0]);
       }
-
-      ASM_REGISTER_STATE_CHECK(fwd_txfm_(input_block.TopLeftPixel(),
-                                         output_block.TopLeftPixel(),
-                                         input_block.stride()));
-
-      EXPECT_EQ(partial_fdct_ref(input_block, size_),
-                output_block.TopLeftPixel()[0]);
+    } else {
+      assert(0);
     }
   }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct_test.cc
index 2a6fccb..ed12f77 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/dct_test.cc
@@ -11,6 +11,7 @@
 #include <math.h>
 #include <stdlib.h>
 #include <string.h>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
@@ -28,8 +29,8 @@
 
 using libvpx_test::ACMRandom;
 using libvpx_test::Buffer;
-using std::tr1::make_tuple;
-using std::tr1::tuple;
+using std::make_tuple;
+using std::tuple;
 
 namespace {
 typedef void (*FdctFunc)(const int16_t *in, tran_low_t *out, int stride);
@@ -210,6 +211,7 @@
     Buffer<int16_t> test_input_block =
         Buffer<int16_t>(size_, size_, 8, size_ == 4 ? 0 : 16);
     ASSERT_TRUE(test_input_block.Init());
+    ASSERT_TRUE(test_input_block.TopLeftPixel() != NULL);
     Buffer<tran_low_t> test_temp_block =
         Buffer<tran_low_t>(size_, size_, 0, 16);
     ASSERT_TRUE(test_temp_block.Init());
@@ -314,6 +316,7 @@
       } else if (i == 1) {
         input_extreme_block.Set(-max_pixel_value_);
       } else {
+        ASSERT_TRUE(input_extreme_block.TopLeftPixel() != NULL);
         for (int h = 0; h < size_; ++h) {
           for (int w = 0; w < size_; ++w) {
             input_extreme_block
@@ -328,6 +331,7 @@
 
       // The minimum quant value is 4.
       EXPECT_TRUE(output_block.CheckValues(output_ref_block));
+      ASSERT_TRUE(output_block.TopLeftPixel() != NULL);
       for (int h = 0; h < size_; ++h) {
         for (int w = 0; w < size_; ++w) {
           EXPECT_GE(
@@ -365,6 +369,7 @@
 
     for (int i = 0; i < count_test_block; ++i) {
       InitMem();
+      ASSERT_TRUE(in.TopLeftPixel() != NULL);
       // Initialize a test block with input range [-max_pixel_value_,
       // max_pixel_value_].
       for (int h = 0; h < size_; ++h) {
@@ -401,8 +406,7 @@
           EXPECT_GE(static_cast<uint32_t>(limit), error)
               << "Error: " << size_ << "x" << size_
               << " inverse transform has error " << error << " at " << w << ","
-              << h << " org:" << (int)src_[h * stride_ + w]
-              << " opt:" << (int)dst_[h * stride_ + w];
+              << h;
           if (::testing::Test::HasFailure()) {
             printf("Size: %d Transform type: %d\n", size_, tx_type_);
             return;
@@ -510,7 +514,7 @@
         ::testing::Values(VPX_BITS_8, VPX_BITS_10, VPX_BITS_12)));
 #endif  // HAVE_SSE2
 
-#if HAVE_SSSE3 && !CONFIG_VP9_HIGHBITDEPTH && ARCH_X86_64
+#if HAVE_SSSE3 && !CONFIG_VP9_HIGHBITDEPTH && VPX_ARCH_X86_64
 // vpx_fdct8x8_ssse3 is only available in 64 bit builds.
 static const FuncInfo dct_ssse3_func_info = {
   &fdct_wrapper<vpx_fdct8x8_ssse3>, &idct_wrapper<vpx_idct8x8_64_add_sse2>, 8, 1
@@ -520,7 +524,7 @@
 INSTANTIATE_TEST_CASE_P(SSSE3, TransDCT,
                         ::testing::Values(make_tuple(0, &dct_ssse3_func_info, 0,
                                                      VPX_BITS_8)));
-#endif  // HAVE_SSSE3 && !CONFIG_VP9_HIGHBITDEPTH && ARCH_X86_64
+#endif  // HAVE_SSSE3 && !CONFIG_VP9_HIGHBITDEPTH && VPX_ARCH_X86_64
 
 #if HAVE_AVX2 && !CONFIG_VP9_HIGHBITDEPTH
 static const FuncInfo dct_avx2_func_info = {
@@ -629,22 +633,16 @@
 
 static const FuncInfo ht_neon_func_info[] = {
 #if CONFIG_VP9_HIGHBITDEPTH
-// TODO(linfengz): reenable these functions once test vector failures are
-// addressed.
-#if 0
   { &vp9_highbd_fht4x4_c, &highbd_iht_wrapper<vp9_highbd_iht4x4_16_add_neon>, 4,
     2 },
   { &vp9_highbd_fht8x8_c, &highbd_iht_wrapper<vp9_highbd_iht8x8_64_add_neon>, 8,
     2 },
-#endif
+  { &vp9_highbd_fht16x16_c,
+    &highbd_iht_wrapper<vp9_highbd_iht16x16_256_add_neon>, 16, 2 },
 #endif
   { &vp9_fht4x4_c, &iht_wrapper<vp9_iht4x4_16_add_neon>, 4, 1 },
-// TODO(linfengz): reenable these functions once test vector failures are
-// addressed.
-#if 0
   { &vp9_fht8x8_c, &iht_wrapper<vp9_iht8x8_64_add_neon>, 8, 1 },
   { &vp9_fht16x16_c, &iht_wrapper<vp9_iht16x16_256_add_neon>, 16, 1 }
-#endif
 };
 
 INSTANTIATE_TEST_CASE_P(
@@ -690,6 +688,19 @@
                                          VPX_BITS_12)));
 #endif  // HAVE_SSE4_1 && CONFIG_VP9_HIGHBITDEPTH
 
+#if HAVE_VSX && !CONFIG_EMULATE_HARDWARE && !CONFIG_VP9_HIGHBITDEPTH
+static const FuncInfo ht_vsx_func_info[3] = {
+  { &vp9_fht4x4_c, &iht_wrapper<vp9_iht4x4_16_add_vsx>, 4, 1 },
+  { &vp9_fht8x8_c, &iht_wrapper<vp9_iht8x8_64_add_vsx>, 8, 1 },
+  { &vp9_fht16x16_c, &iht_wrapper<vp9_iht16x16_256_add_vsx>, 16, 1 }
+};
+
+INSTANTIATE_TEST_CASE_P(VSX, TransHT,
+                        ::testing::Combine(::testing::Range(0, 3),
+                                           ::testing::Values(ht_vsx_func_info),
+                                           ::testing::Range(0, 4),
+                                           ::testing::Values(VPX_BITS_8)));
+#endif  // HAVE_VSX
 #endif  // !CONFIG_EMULATE_HARDWARE
 
 /* -------------------------------------------------------------------------- */
@@ -732,4 +743,14 @@
                         ::testing::Values(make_tuple(0, &wht_sse2_func_info, 0,
                                                      VPX_BITS_8)));
 #endif  // HAVE_SSE2 && !CONFIG_EMULATE_HARDWARE
+
+#if HAVE_VSX && !CONFIG_EMULATE_HARDWARE && !CONFIG_VP9_HIGHBITDEPTH
+static const FuncInfo wht_vsx_func_info = {
+  &fdct_wrapper<vp9_fwht4x4_c>, &idct_wrapper<vpx_iwht4x4_16_add_vsx>, 4, 1
+};
+
+INSTANTIATE_TEST_CASE_P(VSX, TransWHT,
+                        ::testing::Values(make_tuple(0, &wht_vsx_func_info, 0,
+                                                     VPX_BITS_8)));
+#endif  // HAVE_VSX && !CONFIG_EMULATE_HARDWARE
 }  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_api_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_api_test.cc
index 4167cf3..d4b67cc 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_api_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_api_test.cc
@@ -138,8 +138,30 @@
   EXPECT_EQ(VPX_CODEC_OK, vpx_codec_destroy(&dec));
 }
 
-TEST(DecodeAPI, Vp9PeekSI) {
+void TestPeekInfo(const uint8_t *const data, uint32_t data_sz,
+                  uint32_t peek_size) {
   const vpx_codec_iface_t *const codec = &vpx_codec_vp9_dx_algo;
+  // Verify behavior of vpx_codec_decode. vpx_codec_decode doesn't even get
+  // to decoder_peek_si_internal on frames of size < 8.
+  if (data_sz >= 8) {
+    vpx_codec_ctx_t dec;
+    EXPECT_EQ(VPX_CODEC_OK, vpx_codec_dec_init(&dec, codec, NULL, 0));
+    EXPECT_EQ((data_sz < peek_size) ? VPX_CODEC_UNSUP_BITSTREAM
+                                    : VPX_CODEC_CORRUPT_FRAME,
+              vpx_codec_decode(&dec, data, data_sz, NULL, 0));
+    vpx_codec_iter_t iter = NULL;
+    EXPECT_EQ(NULL, vpx_codec_get_frame(&dec, &iter));
+    EXPECT_EQ(VPX_CODEC_OK, vpx_codec_destroy(&dec));
+  }
+
+  // Verify behavior of vpx_codec_peek_stream_info.
+  vpx_codec_stream_info_t si;
+  si.sz = sizeof(si);
+  EXPECT_EQ((data_sz < peek_size) ? VPX_CODEC_UNSUP_BITSTREAM : VPX_CODEC_OK,
+            vpx_codec_peek_stream_info(codec, data, data_sz, &si));
+}
+
+TEST(DecodeAPI, Vp9PeekStreamInfo) {
   // The first 9 bytes are valid and the rest of the bytes are made up. Until
   // size 10, this should return VPX_CODEC_UNSUP_BITSTREAM and after that it
   // should return VPX_CODEC_CORRUPT_FRAME.
@@ -150,24 +172,18 @@
   };
 
   for (uint32_t data_sz = 1; data_sz <= 32; ++data_sz) {
-    // Verify behavior of vpx_codec_decode. vpx_codec_decode doesn't even get
-    // to decoder_peek_si_internal on frames of size < 8.
-    if (data_sz >= 8) {
-      vpx_codec_ctx_t dec;
-      EXPECT_EQ(VPX_CODEC_OK, vpx_codec_dec_init(&dec, codec, NULL, 0));
-      EXPECT_EQ(
-          (data_sz < 10) ? VPX_CODEC_UNSUP_BITSTREAM : VPX_CODEC_CORRUPT_FRAME,
-          vpx_codec_decode(&dec, data, data_sz, NULL, 0));
-      vpx_codec_iter_t iter = NULL;
-      EXPECT_EQ(NULL, vpx_codec_get_frame(&dec, &iter));
-      EXPECT_EQ(VPX_CODEC_OK, vpx_codec_destroy(&dec));
-    }
+    TestPeekInfo(data, data_sz, 10);
+  }
+}
 
-    // Verify behavior of vpx_codec_peek_stream_info.
-    vpx_codec_stream_info_t si;
-    si.sz = sizeof(si);
-    EXPECT_EQ((data_sz < 10) ? VPX_CODEC_UNSUP_BITSTREAM : VPX_CODEC_OK,
-              vpx_codec_peek_stream_info(codec, data, data_sz, &si));
+TEST(DecodeAPI, Vp9PeekStreamInfoTruncated) {
+  // This profile 1 header requires 10.25 bytes, so ensure that
+  // vpx_codec_peek_stream_info does not over-read.
+  const uint8_t profile1_data[10] = { 0xa4, 0xe9, 0x30, 0x68, 0x53,
+                                      0xe9, 0x30, 0x68, 0x53, 0x04 };
+
+  for (uint32_t data_sz = 1; data_sz <= 10; ++data_sz) {
+    TestPeekInfo(profile1_data, data_sz, 11);
   }
 }
 #endif  // CONFIG_VP9_DECODER
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_corrupted.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_corrupted.cc
new file mode 100644
index 0000000..b1495ce
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_corrupted.cc
@@ -0,0 +1,103 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <tuple>
+
+#include "third_party/googletest/src/include/gtest/gtest.h"
+
+#include "test/codec_factory.h"
+#include "test/encode_test_driver.h"
+#include "test/util.h"
+#include "test/i420_video_source.h"
+#include "vpx_mem/vpx_mem.h"
+
+namespace {
+
+class DecodeCorruptedFrameTest
+    : public ::libvpx_test::EncoderTest,
+      public ::testing::TestWithParam<
+          std::tuple<const libvpx_test::CodecFactory *> > {
+ public:
+  DecodeCorruptedFrameTest() : EncoderTest(GET_PARAM(0)) {}
+
+ protected:
+  virtual ~DecodeCorruptedFrameTest() {}
+
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(::libvpx_test::kRealTime);
+    cfg_.g_lag_in_frames = 0;
+    cfg_.rc_end_usage = VPX_CBR;
+    cfg_.rc_buf_sz = 1000;
+    cfg_.rc_buf_initial_sz = 500;
+    cfg_.rc_buf_optimal_sz = 600;
+
+    // Set a small key frame distance so that more key frames are inserted.
+    cfg_.kf_max_dist = 3;
+    dec_cfg_.threads = 1;
+  }
+
+  virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
+                                  ::libvpx_test::Encoder *encoder) {
+    if (video->frame() == 0) encoder->Control(VP8E_SET_CPUUSED, 7);
+  }
+
+  virtual void MismatchHook(const vpx_image_t * /*img1*/,
+                            const vpx_image_t * /*img2*/) {}
+
+  virtual const vpx_codec_cx_pkt_t *MutateEncoderOutputHook(
+      const vpx_codec_cx_pkt_t *pkt) {
+    // Don't edit frame packet on key frame.
+    if (pkt->data.frame.flags & VPX_FRAME_IS_KEY) return pkt;
+    if (pkt->kind != VPX_CODEC_CX_FRAME_PKT) return pkt;
+
+    memcpy(&modified_pkt_, pkt, sizeof(*pkt));
+
+    // Halve the size so the frame appears corrupted to the decoder.
+    modified_pkt_.data.frame.sz = modified_pkt_.data.frame.sz / 2;
+
+    return &modified_pkt_;
+  }
+
+  virtual bool HandleDecodeResult(const vpx_codec_err_t res_dec,
+                                  const libvpx_test::VideoSource & /*video*/,
+                                  libvpx_test::Decoder *decoder) {
+    EXPECT_NE(res_dec, VPX_CODEC_MEM_ERROR) << decoder->DecodeError();
+    return VPX_CODEC_MEM_ERROR != res_dec;
+  }
+
+  vpx_codec_cx_pkt_t modified_pkt_;
+};
+
+TEST_P(DecodeCorruptedFrameTest, DecodeCorruptedFrame) {
+  cfg_.rc_target_bitrate = 200;
+  cfg_.g_error_resilient = 0;
+
+  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
+                                       30, 1, 0, 300);
+
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+}
+
+#if CONFIG_VP9
+INSTANTIATE_TEST_CASE_P(
+    VP9, DecodeCorruptedFrameTest,
+    ::testing::Values(
+        static_cast<const libvpx_test::CodecFactory *>(&libvpx_test::kVP9)));
+#endif  // CONFIG_VP9
+
+#if CONFIG_VP8
+INSTANTIATE_TEST_CASE_P(
+    VP8, DecodeCorruptedFrameTest,
+    ::testing::Values(
+        static_cast<const libvpx_test::CodecFactory *>(&libvpx_test::kVP8)));
+#endif  // CONFIG_VP8
+
+}  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_perf_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_perf_test.cc
index ee26c3c..aecdd3e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_perf_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_perf_test.cc
@@ -9,6 +9,8 @@
  */
 
 #include <string>
+#include <tuple>
+
 #include "test/codec_factory.h"
 #include "test/decode_test_driver.h"
 #include "test/encode_test_driver.h"
@@ -21,7 +23,7 @@
 #include "./ivfenc.h"
 #include "./vpx_version.h"
 
-using std::tr1::make_tuple;
+using std::make_tuple;
 
 namespace {
 
@@ -34,7 +36,7 @@
 /*
  DecodePerfTest takes a tuple of filename + number of threads to decode with
  */
-typedef std::tr1::tuple<const char *, unsigned> DecodePerfParam;
+typedef std::tuple<const char *, unsigned> DecodePerfParam;
 
 const DecodePerfParam kVP9DecodePerfVectors[] = {
   make_tuple("vp90-2-bbb_426x240_tile_1x1_180kbps.webm", 1),
@@ -137,7 +139,7 @@
 
   virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
                                   ::libvpx_test::Encoder *encoder) {
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       encoder->Control(VP8E_SET_CPUUSED, speed_);
       encoder->Control(VP9E_SET_FRAME_PARALLEL_DECODING, 1);
       encoder->Control(VP9E_SET_TILE_COLUMNS, 2);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_svc_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_svc_test.cc
index 69f62f1..c6f0873 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_svc_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_svc_test.cc
@@ -8,6 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include <memory>
 #include <string>
 
 #include "test/codec_factory.h"
@@ -53,7 +54,7 @@
 // number of frames decoded. This results in 1/4x1/4 resolution (320x180).
 TEST_P(DecodeSvcTest, DecodeSvcTestUpToSpatialLayer0) {
   const std::string filename = GET_PARAM(1);
-  testing::internal::scoped_ptr<libvpx_test::CompressedVideoSource> video;
+  std::unique_ptr<libvpx_test::CompressedVideoSource> video;
   video.reset(new libvpx_test::IVFVideoSource(filename));
   ASSERT_TRUE(video.get() != NULL);
   video->Init();
@@ -70,7 +71,7 @@
 // number of frames decoded. This results in 1/2x1/2 resolution (640x360).
 TEST_P(DecodeSvcTest, DecodeSvcTestUpToSpatialLayer1) {
   const std::string filename = GET_PARAM(1);
-  testing::internal::scoped_ptr<libvpx_test::CompressedVideoSource> video;
+  std::unique_ptr<libvpx_test::CompressedVideoSource> video;
   video.reset(new libvpx_test::IVFVideoSource(filename));
   ASSERT_TRUE(video.get() != NULL);
   video->Init();
@@ -87,7 +88,7 @@
 // number of frames decoded. This results in the full resolution (1280x720).
 TEST_P(DecodeSvcTest, DecodeSvcTestUpToSpatialLayer2) {
   const std::string filename = GET_PARAM(1);
-  testing::internal::scoped_ptr<libvpx_test::CompressedVideoSource> video;
+  std::unique_ptr<libvpx_test::CompressedVideoSource> video;
   video.reset(new libvpx_test::IVFVideoSource(filename));
   ASSERT_TRUE(video.get() != NULL);
   video->Init();
@@ -105,7 +106,7 @@
 // the decoding should result in the full resolution (1280x720).
 TEST_P(DecodeSvcTest, DecodeSvcTestUpToSpatialLayer10) {
   const std::string filename = GET_PARAM(1);
-  testing::internal::scoped_ptr<libvpx_test::CompressedVideoSource> video;
+  std::unique_ptr<libvpx_test::CompressedVideoSource> video;
   video.reset(new libvpx_test::IVFVideoSource(filename));
   ASSERT_TRUE(video.get() != NULL);
   video->Init();
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_test_driver.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_test_driver.cc
index 48680eb..ae23587 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_test_driver.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_test_driver.cc
@@ -52,9 +52,10 @@
     /* Vp8's implementation of PeekStream returns an error if the frame you
      * pass it is not a keyframe, so we only expect VPX_CODEC_OK on the first
      * frame, which must be a keyframe. */
-    if (video->frame_number() == 0)
+    if (video->frame_number() == 0) {
       ASSERT_EQ(VPX_CODEC_OK, res_peek)
           << "Peek return failed: " << vpx_codec_err_to_string(res_peek);
+    }
   } else {
     /* The Vp9 implementation of PeekStream returns an error only if the
      * data passed to it isn't a valid Vp9 chunk. */
@@ -97,7 +98,7 @@
     const vpx_image_t *img = NULL;
 
     // Get decompressed data
-    while ((img = dec_iter.Next())) {
+    while (!::testing::Test::HasFailure() && (img = dec_iter.Next())) {
       DecompressedFrameHook(*img, video->frame_number());
     }
   }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_test_driver.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_test_driver.h
index 644fc9e..04876cdd 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_test_driver.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_test_driver.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef TEST_DECODE_TEST_DRIVER_H_
-#define TEST_DECODE_TEST_DRIVER_H_
+#ifndef VPX_TEST_DECODE_TEST_DRIVER_H_
+#define VPX_TEST_DECODE_TEST_DRIVER_H_
 #include <cstring>
 #include "third_party/googletest/src/include/gtest/gtest.h"
 #include "./vpx_config.h"
@@ -159,4 +159,4 @@
 
 }  // namespace libvpx_test
 
-#endif  // TEST_DECODE_TEST_DRIVER_H_
+#endif  // VPX_TEST_DECODE_TEST_DRIVER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_to_md5.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_to_md5.sh
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_with_drops.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/decode_with_drops.sh
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/encode_perf_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/encode_perf_test.cc
index 0bb4355..142d9e2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/encode_perf_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/encode_perf_test.cc
@@ -48,7 +48,7 @@
   EncodePerfTestVideo("niklas_1280_720_30.yuv", 1280, 720, 600, 470),
 };
 
-const int kEncodePerfTestSpeeds[] = { 5, 6, 7, 8 };
+const int kEncodePerfTestSpeeds[] = { 5, 6, 7, 8, 9 };
 const int kEncodePerfTestThreads[] = { 1, 2, 4 };
 
 #define NELEMENTS(x) (sizeof((x)) / sizeof((x)[0]))
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/encode_test_driver.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/encode_test_driver.cc
index b2cbc3f..8fdbdb6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/encode_test_driver.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/encode_test_driver.cc
@@ -8,6 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include <memory>
 #include <string>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
@@ -128,6 +129,8 @@
   bool match = (img1->fmt == img2->fmt) && (img1->cs == img2->cs) &&
                (img1->d_w == img2->d_w) && (img1->d_h == img2->d_h);
 
+  if (!match) return false;
+
   const unsigned int width_y = img1->d_w;
   const unsigned int height_y = img1->d_h;
   unsigned int i;
@@ -177,7 +180,7 @@
     }
 
     BeginPassHook(pass);
-    testing::internal::scoped_ptr<Encoder> encoder(
+    std::unique_ptr<Encoder> encoder(
         codec_->CreateEncoder(cfg_, deadline_, init_flags_, &stats_));
     ASSERT_TRUE(encoder.get() != NULL);
 
@@ -191,7 +194,7 @@
     if (init_flags_ & VPX_CODEC_USE_OUTPUT_PARTITION) {
       dec_init_flags |= VPX_CODEC_USE_INPUT_FRAGMENTS;
     }
-    testing::internal::scoped_ptr<Decoder> decoder(
+    std::unique_ptr<Decoder> decoder(
         codec_->CreateDecoder(dec_cfg, dec_init_flags));
     bool again;
     for (again = true; again; video->Next()) {
@@ -214,6 +217,7 @@
           case VPX_CODEC_CX_FRAME_PKT:
             has_cxdata = true;
             if (decoder.get() != NULL && DoDecode()) {
+              PreDecodeFrameHook(video, decoder.get());
               vpx_codec_err_t res_dec = decoder->DecodeFrame(
                   (const uint8_t *)pkt->data.frame.buf, pkt->data.frame.sz);
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/encode_test_driver.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/encode_test_driver.h
index bec6ca3..3edba4b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/encode_test_driver.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/encode_test_driver.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef TEST_ENCODE_TEST_DRIVER_H_
-#define TEST_ENCODE_TEST_DRIVER_H_
+#ifndef VPX_TEST_ENCODE_TEST_DRIVER_H_
+#define VPX_TEST_ENCODE_TEST_DRIVER_H_
 
 #include <string>
 #include <vector>
@@ -137,6 +137,17 @@
     const vpx_codec_err_t res = vpx_codec_control_(&encoder_, ctrl_id, arg);
     ASSERT_EQ(VPX_CODEC_OK, res) << EncoderError();
   }
+
+  void Control(int ctrl_id, struct vpx_svc_frame_drop *arg) {
+    const vpx_codec_err_t res = vpx_codec_control_(&encoder_, ctrl_id, arg);
+    ASSERT_EQ(VPX_CODEC_OK, res) << EncoderError();
+  }
+
+  void Control(int ctrl_id, struct vpx_svc_spatial_layer_sync *arg) {
+    const vpx_codec_err_t res = vpx_codec_control_(&encoder_, ctrl_id, arg);
+    ASSERT_EQ(VPX_CODEC_OK, res) << EncoderError();
+  }
+
 #if CONFIG_VP8_ENCODER || CONFIG_VP9_ENCODER
   void Control(int ctrl_id, vpx_active_map_t *arg) {
     const vpx_codec_err_t res = vpx_codec_control_(&encoder_, ctrl_id, arg);
@@ -221,6 +232,9 @@
   virtual void PreEncodeFrameHook(VideoSource * /*video*/,
                                   Encoder * /*encoder*/) {}
 
+  virtual void PreDecodeFrameHook(VideoSource * /*video*/,
+                                  Decoder * /*decoder*/) {}
+
   virtual void PostEncodeFrameHook(Encoder * /*encoder*/) {}
 
   // Hook to be called on every compressed data packet.
@@ -275,4 +289,4 @@
 
 }  // namespace libvpx_test
 
-#endif  // TEST_ENCODE_TEST_DRIVER_H_
+#endif  // VPX_TEST_ENCODE_TEST_DRIVER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/examples.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/examples.sh
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/external_frame_buffer_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/external_frame_buffer_test.cc
index dbf2971..438eeb3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/external_frame_buffer_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/external_frame_buffer_test.cc
@@ -8,6 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include <memory>
 #include <string>
 
 #include "./vpx_config.h"
@@ -113,9 +114,9 @@
     return 0;
   }
 
-  // Checks that the ximage data is contained within the external frame buffer
-  // private data passed back in the ximage.
-  void CheckXImageFrameBuffer(const vpx_image_t *img) {
+  // Checks that the vpx_image_t data is contained within the external frame
+  // buffer private data passed back in the vpx_image_t.
+  void CheckImageFrameBuffer(const vpx_image_t *img) {
     if (img->fb_priv != NULL) {
       const struct ExternalFrameBuffer *const ext_fb =
           reinterpret_cast<ExternalFrameBuffer *>(img->fb_priv);
@@ -335,14 +336,13 @@
     return VPX_CODEC_OK;
   }
 
- protected:
   void CheckDecodedFrames() {
     libvpx_test::DxDataIterator dec_iter = decoder_->GetDxData();
     const vpx_image_t *img = NULL;
 
     // Get decompressed data
     while ((img = dec_iter.Next()) != NULL) {
-      fb_list_.CheckXImageFrameBuffer(img);
+      fb_list_.CheckImageFrameBuffer(img);
     }
   }
 
@@ -393,7 +393,7 @@
 #endif
 
   // Open compressed video file.
-  testing::internal::scoped_ptr<libvpx_test::CompressedVideoSource> video;
+  std::unique_ptr<libvpx_test::CompressedVideoSource> video;
   if (filename.substr(filename.length() - 3, 3) == "ivf") {
     video.reset(new libvpx_test::IVFVideoSource(filename));
   } else {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/fdct8x8_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/fdct8x8_test.cc
index 15033db..1d4f318 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/fdct8x8_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/fdct8x8_test.cc
@@ -11,6 +11,7 @@
 #include <math.h>
 #include <stdlib.h>
 #include <string.h>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
@@ -43,9 +44,9 @@
 typedef void (*IhtFunc)(const tran_low_t *in, uint8_t *out, int stride,
                         int tx_type);
 
-typedef std::tr1::tuple<FdctFunc, IdctFunc, int, vpx_bit_depth_t> Dct8x8Param;
-typedef std::tr1::tuple<FhtFunc, IhtFunc, int, vpx_bit_depth_t> Ht8x8Param;
-typedef std::tr1::tuple<IdctFunc, IdctFunc, int, vpx_bit_depth_t> Idct8x8Param;
+typedef std::tuple<FdctFunc, IdctFunc, int, vpx_bit_depth_t> Dct8x8Param;
+typedef std::tuple<FhtFunc, IhtFunc, int, vpx_bit_depth_t> Ht8x8Param;
+typedef std::tuple<IdctFunc, IdctFunc, int, vpx_bit_depth_t> Idct8x8Param;
 
 void reference_8x8_dct_1d(const double in[8], double out[8]) {
   const double kInvSqrt2 = 0.707106781186547524400844362104;
@@ -628,7 +629,7 @@
   CompareInvReference(ref_txfm_, thresh_);
 }
 
-using std::tr1::make_tuple;
+using std::make_tuple;
 
 #if CONFIG_VP9_HIGHBITDEPTH
 INSTANTIATE_TEST_CASE_P(
@@ -675,9 +676,8 @@
                         ::testing::Values(make_tuple(&vpx_fdct8x8_neon,
                                                      &vpx_idct8x8_64_add_neon,
                                                      0, VPX_BITS_8)));
-// TODO(linfengz): reenable these functions once test vector failures are
-// addressed.
-#if 0   // !CONFIG_VP9_HIGHBITDEPTH
+
+#if !CONFIG_VP9_HIGHBITDEPTH
 INSTANTIATE_TEST_CASE_P(
     NEON, FwdTrans8x8HT,
     ::testing::Values(
@@ -737,7 +737,7 @@
         make_tuple(&idct8x8_12, &idct8x8_64_add_12_sse2, 6225, VPX_BITS_12)));
 #endif  // HAVE_SSE2 && CONFIG_VP9_HIGHBITDEPTH && !CONFIG_EMULATE_HARDWARE
 
-#if HAVE_SSSE3 && ARCH_X86_64 && !CONFIG_VP9_HIGHBITDEPTH && \
+#if HAVE_SSSE3 && VPX_ARCH_X86_64 && !CONFIG_VP9_HIGHBITDEPTH && \
     !CONFIG_EMULATE_HARDWARE
 INSTANTIATE_TEST_CASE_P(SSSE3, FwdTrans8x8DCT,
                         ::testing::Values(make_tuple(&vpx_fdct8x8_ssse3,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/frame_size_tests.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/frame_size_tests.cc
index 5a9b166..9397af2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/frame_size_tests.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/frame_size_tests.cc
@@ -7,12 +7,73 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
+#include <memory>
+
 #include "third_party/googletest/src/include/gtest/gtest.h"
 #include "test/codec_factory.h"
+#include "test/register_state_check.h"
 #include "test/video_source.h"
 
 namespace {
 
+class EncoderWithExpectedError : public ::libvpx_test::Encoder {
+ public:
+  EncoderWithExpectedError(vpx_codec_enc_cfg_t cfg,
+                           unsigned long deadline,          // NOLINT
+                           const unsigned long init_flags,  // NOLINT
+                           ::libvpx_test::TwopassStatsStore *stats)
+      : ::libvpx_test::Encoder(cfg, deadline, init_flags, stats) {}
+  // This overrides with expected error code.
+  void EncodeFrame(::libvpx_test::VideoSource *video,
+                   const unsigned long frame_flags,  // NOLINT
+                   const vpx_codec_err_t expected_err) {
+    if (video->img()) {
+      EncodeFrameInternal(*video, frame_flags, expected_err);
+    } else {
+      Flush();
+    }
+
+    // Handle twopass stats
+    ::libvpx_test::CxDataIterator iter = GetCxData();
+
+    while (const vpx_codec_cx_pkt_t *pkt = iter.Next()) {
+      if (pkt->kind != VPX_CODEC_STATS_PKT) continue;
+
+      stats_->Append(*pkt);
+    }
+  }
+
+ protected:
+  void EncodeFrameInternal(const ::libvpx_test::VideoSource &video,
+                           const unsigned long frame_flags,  // NOLINT
+                           const vpx_codec_err_t expected_err) {
+    vpx_codec_err_t res;
+    const vpx_image_t *img = video.img();
+
+    // Handle frame resizing
+    if (cfg_.g_w != img->d_w || cfg_.g_h != img->d_h) {
+      cfg_.g_w = img->d_w;
+      cfg_.g_h = img->d_h;
+      res = vpx_codec_enc_config_set(&encoder_, &cfg_);
+      ASSERT_EQ(res, VPX_CODEC_OK) << EncoderError();
+    }
+
+    // Encode the frame
+    API_REGISTER_STATE_CHECK(res = vpx_codec_encode(&encoder_, img, video.pts(),
+                                                    video.duration(),
+                                                    frame_flags, deadline_));
+    ASSERT_EQ(expected_err, res) << EncoderError();
+  }
+
+  virtual vpx_codec_iface_t *CodecInterface() const {
+#if CONFIG_VP9_ENCODER
+    return &vpx_codec_vp9_cx_algo;
+#else
+    return NULL;
+#endif
+  }
+};
+
 class VP9FrameSizeTestsLarge : public ::libvpx_test::EncoderTest,
                                public ::testing::Test {
  protected:
@@ -34,7 +95,7 @@
 
   virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
                                   ::libvpx_test::Encoder *encoder) {
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       encoder->Control(VP8E_SET_CPUUSED, 7);
       encoder->Control(VP8E_SET_ENABLEAUTOALTREF, 1);
       encoder->Control(VP8E_SET_ARNR_MAXFRAMES, 7);
@@ -43,7 +104,67 @@
     }
   }
 
-  int expected_res_;
+  using ::libvpx_test::EncoderTest::RunLoop;
+  virtual void RunLoop(::libvpx_test::VideoSource *video,
+                       const vpx_codec_err_t expected_err) {
+    stats_.Reset();
+
+    ASSERT_TRUE(passes_ == 1 || passes_ == 2);
+    for (unsigned int pass = 0; pass < passes_; pass++) {
+      last_pts_ = 0;
+
+      if (passes_ == 1) {
+        cfg_.g_pass = VPX_RC_ONE_PASS;
+      } else if (pass == 0) {
+        cfg_.g_pass = VPX_RC_FIRST_PASS;
+      } else {
+        cfg_.g_pass = VPX_RC_LAST_PASS;
+      }
+
+      BeginPassHook(pass);
+      std::unique_ptr<EncoderWithExpectedError> encoder(
+          new EncoderWithExpectedError(cfg_, deadline_, init_flags_, &stats_));
+      ASSERT_NE(encoder.get(), nullptr);
+
+      ASSERT_NO_FATAL_FAILURE(video->Begin());
+      encoder->InitEncoder(video);
+      ASSERT_FALSE(::testing::Test::HasFatalFailure());
+      for (bool again = true; again; video->Next()) {
+        again = (video->img() != NULL);
+
+        PreEncodeFrameHook(video, encoder.get());
+        encoder->EncodeFrame(video, frame_flags_, expected_err);
+
+        PostEncodeFrameHook(encoder.get());
+
+        ::libvpx_test::CxDataIterator iter = encoder->GetCxData();
+
+        while (const vpx_codec_cx_pkt_t *pkt = iter.Next()) {
+          pkt = MutateEncoderOutputHook(pkt);
+          again = true;
+          switch (pkt->kind) {
+            case VPX_CODEC_CX_FRAME_PKT:
+              ASSERT_GE(pkt->data.frame.pts, last_pts_);
+              last_pts_ = pkt->data.frame.pts;
+              FramePktHook(pkt);
+              break;
+
+            case VPX_CODEC_PSNR_PKT: PSNRPktHook(pkt); break;
+            case VPX_CODEC_STATS_PKT: StatsPktHook(pkt); break;
+            default: break;
+          }
+        }
+
+        if (!Continue()) break;
+      }
+
+      EndPassHook();
+
+      if (!Continue()) break;
+    }
+  }
+
+  vpx_codec_err_t expected_res_;
 };
 
 TEST_F(VP9FrameSizeTestsLarge, TestInvalidSizes) {
@@ -52,8 +173,8 @@
 #if CONFIG_SIZE_LIMIT
   video.SetSize(DECODE_WIDTH_LIMIT + 16, DECODE_HEIGHT_LIMIT + 16);
   video.set_limit(2);
-  expected_res_ = VPX_CODEC_CORRUPT_FRAME;
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  expected_res_ = VPX_CODEC_MEM_ERROR;
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video, expected_res_));
 #endif
 }
 
@@ -64,7 +185,7 @@
   video.SetSize(DECODE_WIDTH_LIMIT, DECODE_HEIGHT_LIMIT);
   video.set_limit(2);
   expected_res_ = VPX_CODEC_OK;
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_NO_FATAL_FAILURE(::libvpx_test::EncoderTest::RunLoop(&video));
 #else
 // This test produces a pretty large single frame allocation,  (roughly
 // 25 megabits). The encoder allocates a good number of these frames
@@ -80,7 +201,7 @@
 #endif
   video.set_limit(2);
   expected_res_ = VPX_CODEC_OK;
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_NO_FATAL_FAILURE(::libvpx_test::EncoderTest::RunLoop(&video));
 #endif
 }
 
@@ -90,6 +211,6 @@
   video.SetSize(1, 1);
   video.set_limit(2);
   expected_res_ = VPX_CODEC_OK;
-  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_NO_FATAL_FAILURE(::libvpx_test::EncoderTest::RunLoop(&video));
 }
 }  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/hadamard_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/hadamard_test.cc
index 3b7cfed..6b7aae3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/hadamard_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/hadamard_test.cc
@@ -25,13 +25,13 @@
 typedef void (*HadamardFunc)(const int16_t *a, ptrdiff_t a_stride,
                              tran_low_t *b);
 
-void hadamard_loop(const int16_t *a, int a_stride, int16_t *out) {
-  int16_t b[8];
+void hadamard_loop(const tran_low_t *a, tran_low_t *out) {
+  tran_low_t b[8];
   for (int i = 0; i < 8; i += 2) {
-    b[i + 0] = a[i * a_stride] + a[(i + 1) * a_stride];
-    b[i + 1] = a[i * a_stride] - a[(i + 1) * a_stride];
+    b[i + 0] = a[i * 8] + a[(i + 1) * 8];
+    b[i + 1] = a[i * 8] - a[(i + 1) * 8];
   }
-  int16_t c[8];
+  tran_low_t c[8];
   for (int i = 0; i < 8; i += 4) {
     c[i + 0] = b[i + 0] + b[i + 2];
     c[i + 1] = b[i + 1] + b[i + 3];
@@ -49,12 +49,15 @@
 }
 
 void reference_hadamard8x8(const int16_t *a, int a_stride, tran_low_t *b) {
-  int16_t buf[64];
-  int16_t buf2[64];
-  for (int i = 0; i < 8; ++i) hadamard_loop(a + i, a_stride, buf + i * 8);
-  for (int i = 0; i < 8; ++i) hadamard_loop(buf + i, 8, buf2 + i * 8);
-
-  for (int i = 0; i < 64; ++i) b[i] = (tran_low_t)buf2[i];
+  tran_low_t input[64];
+  tran_low_t buf[64];
+  for (int i = 0; i < 8; ++i) {
+    for (int j = 0; j < 8; ++j) {
+      input[i * 8 + j] = static_cast<tran_low_t>(a[i * a_stride + j]);
+    }
+  }
+  for (int i = 0; i < 8; ++i) hadamard_loop(input + i, buf + i * 8);
+  for (int i = 0; i < 8; ++i) hadamard_loop(buf + i, b + i * 8);
 }
 
 void reference_hadamard16x16(const int16_t *a, int a_stride, tran_low_t *b) {
@@ -89,205 +92,229 @@
   }
 }
 
-class HadamardTestBase : public ::testing::TestWithParam<HadamardFunc> {
+void reference_hadamard32x32(const int16_t *a, int a_stride, tran_low_t *b) {
+  reference_hadamard16x16(a + 0 + 0 * a_stride, a_stride, b + 0);
+  reference_hadamard16x16(a + 16 + 0 * a_stride, a_stride, b + 256);
+  reference_hadamard16x16(a + 0 + 16 * a_stride, a_stride, b + 512);
+  reference_hadamard16x16(a + 16 + 16 * a_stride, a_stride, b + 768);
+
+  for (int i = 0; i < 256; ++i) {
+    const tran_low_t a0 = b[0];
+    const tran_low_t a1 = b[256];
+    const tran_low_t a2 = b[512];
+    const tran_low_t a3 = b[768];
+
+    const tran_low_t b0 = (a0 + a1) >> 2;
+    const tran_low_t b1 = (a0 - a1) >> 2;
+    const tran_low_t b2 = (a2 + a3) >> 2;
+    const tran_low_t b3 = (a2 - a3) >> 2;
+
+    b[0] = b0 + b2;
+    b[256] = b1 + b3;
+    b[512] = b0 - b2;
+    b[768] = b1 - b3;
+
+    ++b;
+  }
+}
+
+struct HadamardFuncWithSize {
+  HadamardFuncWithSize(HadamardFunc f, int s) : func(f), block_size(s) {}
+  HadamardFunc func;
+  int block_size;
+};
+
+std::ostream &operator<<(std::ostream &os, const HadamardFuncWithSize &hfs) {
+  return os << "block size: " << hfs.block_size;
+}
+
+class HadamardTestBase : public ::testing::TestWithParam<HadamardFuncWithSize> {
  public:
   virtual void SetUp() {
-    h_func_ = GetParam();
+    h_func_ = GetParam().func;
+    bwh_ = GetParam().block_size;
+    block_size_ = bwh_ * bwh_;
     rnd_.Reset(ACMRandom::DeterministicSeed());
   }
 
+  virtual int16_t Rand() = 0;
+
+  void ReferenceHadamard(const int16_t *a, int a_stride, tran_low_t *b,
+                         int bwh) {
+    if (bwh == 32)
+      reference_hadamard32x32(a, a_stride, b);
+    else if (bwh == 16)
+      reference_hadamard16x16(a, a_stride, b);
+    else
+      reference_hadamard8x8(a, a_stride, b);
+  }
+
+  void CompareReferenceRandom() {
+    const int kMaxBlockSize = 32 * 32;
+    DECLARE_ALIGNED(16, int16_t, a[kMaxBlockSize]);
+    DECLARE_ALIGNED(16, tran_low_t, b[kMaxBlockSize]);
+    memset(a, 0, sizeof(a));
+    memset(b, 0, sizeof(b));
+
+    tran_low_t b_ref[kMaxBlockSize];
+    memset(b_ref, 0, sizeof(b_ref));
+
+    for (int i = 0; i < block_size_; ++i) a[i] = Rand();
+
+    ReferenceHadamard(a, bwh_, b_ref, bwh_);
+    ASM_REGISTER_STATE_CHECK(h_func_(a, bwh_, b));
+
+    // The order of the output is not important. Sort before checking.
+    std::sort(b, b + block_size_);
+    std::sort(b_ref, b_ref + block_size_);
+    EXPECT_EQ(0, memcmp(b, b_ref, sizeof(b)));
+  }
+
+  void VaryStride() {
+    const int kMaxBlockSize = 32 * 32;
+    DECLARE_ALIGNED(16, int16_t, a[kMaxBlockSize * 8]);
+    DECLARE_ALIGNED(16, tran_low_t, b[kMaxBlockSize]);
+    memset(a, 0, sizeof(a));
+    for (int i = 0; i < block_size_ * 8; ++i) a[i] = Rand();
+
+    tran_low_t b_ref[kMaxBlockSize];
+    for (int i = 8; i < 64; i += 8) {
+      memset(b, 0, sizeof(b));
+      memset(b_ref, 0, sizeof(b_ref));
+
+      ReferenceHadamard(a, i, b_ref, bwh_);
+      ASM_REGISTER_STATE_CHECK(h_func_(a, i, b));
+
+      // The order of the output is not important. Sort before checking.
+      std::sort(b, b + block_size_);
+      std::sort(b_ref, b_ref + block_size_);
+      EXPECT_EQ(0, memcmp(b, b_ref, sizeof(b)));
+    }
+  }
+
+  void SpeedTest(int times) {
+    const int kMaxBlockSize = 32 * 32;
+    DECLARE_ALIGNED(16, int16_t, input[kMaxBlockSize]);
+    DECLARE_ALIGNED(16, tran_low_t, output[kMaxBlockSize]);
+    memset(input, 1, sizeof(input));
+    memset(output, 0, sizeof(output));
+
+    vpx_usec_timer timer;
+    vpx_usec_timer_start(&timer);
+    for (int i = 0; i < times; ++i) {
+      h_func_(input, bwh_, output);
+    }
+    vpx_usec_timer_mark(&timer);
+
+    const int elapsed_time = static_cast<int>(vpx_usec_timer_elapsed(&timer));
+    printf("Hadamard%dx%d[%12d runs]: %d us\n", bwh_, bwh_, times,
+           elapsed_time);
+  }
+
  protected:
+  int bwh_;
+  int block_size_;
   HadamardFunc h_func_;
   ACMRandom rnd_;
 };
 
-void HadamardSpeedTest(const char *name, HadamardFunc const func,
-                       const int16_t *input, int stride, tran_low_t *output,
-                       int times) {
-  int i;
-  vpx_usec_timer timer;
+class HadamardLowbdTest : public HadamardTestBase {
+ protected:
+  virtual int16_t Rand() { return rnd_.Rand9Signed(); }
+};
 
-  vpx_usec_timer_start(&timer);
-  for (i = 0; i < times; ++i) {
-    func(input, stride, output);
-  }
-  vpx_usec_timer_mark(&timer);
+TEST_P(HadamardLowbdTest, CompareReferenceRandom) { CompareReferenceRandom(); }
 
-  const int elapsed_time = static_cast<int>(vpx_usec_timer_elapsed(&timer));
-  printf("%s[%12d runs]: %d us\n", name, times, elapsed_time);
+TEST_P(HadamardLowbdTest, VaryStride) { VaryStride(); }
+
+TEST_P(HadamardLowbdTest, DISABLED_Speed) {
+  SpeedTest(10);
+  SpeedTest(10000);
+  SpeedTest(10000000);
 }
 
-class Hadamard8x8Test : public HadamardTestBase {};
-
-void HadamardSpeedTest8x8(HadamardFunc const func, int times) {
-  DECLARE_ALIGNED(16, int16_t, input[64]);
-  DECLARE_ALIGNED(16, tran_low_t, output[64]);
-  memset(input, 1, sizeof(input));
-  HadamardSpeedTest("Hadamard8x8", func, input, 8, output, times);
-}
-
-TEST_P(Hadamard8x8Test, CompareReferenceRandom) {
-  DECLARE_ALIGNED(16, int16_t, a[64]);
-  DECLARE_ALIGNED(16, tran_low_t, b[64]);
-  tran_low_t b_ref[64];
-  for (int i = 0; i < 64; ++i) {
-    a[i] = rnd_.Rand9Signed();
-  }
-  memset(b, 0, sizeof(b));
-  memset(b_ref, 0, sizeof(b_ref));
-
-  reference_hadamard8x8(a, 8, b_ref);
-  ASM_REGISTER_STATE_CHECK(h_func_(a, 8, b));
-
-  // The order of the output is not important. Sort before checking.
-  std::sort(b, b + 64);
-  std::sort(b_ref, b_ref + 64);
-  EXPECT_EQ(0, memcmp(b, b_ref, sizeof(b)));
-}
-
-TEST_P(Hadamard8x8Test, VaryStride) {
-  DECLARE_ALIGNED(16, int16_t, a[64 * 8]);
-  DECLARE_ALIGNED(16, tran_low_t, b[64]);
-  tran_low_t b_ref[64];
-  for (int i = 0; i < 64 * 8; ++i) {
-    a[i] = rnd_.Rand9Signed();
-  }
-
-  for (int i = 8; i < 64; i += 8) {
-    memset(b, 0, sizeof(b));
-    memset(b_ref, 0, sizeof(b_ref));
-
-    reference_hadamard8x8(a, i, b_ref);
-    ASM_REGISTER_STATE_CHECK(h_func_(a, i, b));
-
-    // The order of the output is not important. Sort before checking.
-    std::sort(b, b + 64);
-    std::sort(b_ref, b_ref + 64);
-    EXPECT_EQ(0, memcmp(b, b_ref, sizeof(b)));
-  }
-}
-
-TEST_P(Hadamard8x8Test, DISABLED_Speed) {
-  HadamardSpeedTest8x8(h_func_, 10);
-  HadamardSpeedTest8x8(h_func_, 10000);
-  HadamardSpeedTest8x8(h_func_, 10000000);
-}
-
-INSTANTIATE_TEST_CASE_P(C, Hadamard8x8Test,
-                        ::testing::Values(&vpx_hadamard_8x8_c));
+INSTANTIATE_TEST_CASE_P(
+    C, HadamardLowbdTest,
+    ::testing::Values(HadamardFuncWithSize(&vpx_hadamard_8x8_c, 8),
+                      HadamardFuncWithSize(&vpx_hadamard_16x16_c, 16),
+                      HadamardFuncWithSize(&vpx_hadamard_32x32_c, 32)));
 
 #if HAVE_SSE2
-INSTANTIATE_TEST_CASE_P(SSE2, Hadamard8x8Test,
-                        ::testing::Values(&vpx_hadamard_8x8_sse2));
+INSTANTIATE_TEST_CASE_P(
+    SSE2, HadamardLowbdTest,
+    ::testing::Values(HadamardFuncWithSize(&vpx_hadamard_8x8_sse2, 8),
+                      HadamardFuncWithSize(&vpx_hadamard_16x16_sse2, 16),
+                      HadamardFuncWithSize(&vpx_hadamard_32x32_sse2, 32)));
 #endif  // HAVE_SSE2
 
-#if HAVE_SSSE3 && ARCH_X86_64
-INSTANTIATE_TEST_CASE_P(SSSE3, Hadamard8x8Test,
-                        ::testing::Values(&vpx_hadamard_8x8_ssse3));
-#endif  // HAVE_SSSE3 && ARCH_X86_64
+#if HAVE_AVX2
+INSTANTIATE_TEST_CASE_P(
+    AVX2, HadamardLowbdTest,
+    ::testing::Values(HadamardFuncWithSize(&vpx_hadamard_16x16_avx2, 16),
+                      HadamardFuncWithSize(&vpx_hadamard_32x32_avx2, 32)));
+#endif  // HAVE_AVX2
+
+#if HAVE_SSSE3 && VPX_ARCH_X86_64
+INSTANTIATE_TEST_CASE_P(
+    SSSE3, HadamardLowbdTest,
+    ::testing::Values(HadamardFuncWithSize(&vpx_hadamard_8x8_ssse3, 8)));
+#endif  // HAVE_SSSE3 && VPX_ARCH_X86_64
 
 #if HAVE_NEON
-INSTANTIATE_TEST_CASE_P(NEON, Hadamard8x8Test,
-                        ::testing::Values(&vpx_hadamard_8x8_neon));
+INSTANTIATE_TEST_CASE_P(
+    NEON, HadamardLowbdTest,
+    ::testing::Values(HadamardFuncWithSize(&vpx_hadamard_8x8_neon, 8),
+                      HadamardFuncWithSize(&vpx_hadamard_16x16_neon, 16)));
 #endif  // HAVE_NEON
 
 // TODO(jingning): Remove highbitdepth flag when the SIMD functions are
 // in place and turn on the unit test.
 #if !CONFIG_VP9_HIGHBITDEPTH
 #if HAVE_MSA
-INSTANTIATE_TEST_CASE_P(MSA, Hadamard8x8Test,
-                        ::testing::Values(&vpx_hadamard_8x8_msa));
+INSTANTIATE_TEST_CASE_P(
+    MSA, HadamardLowbdTest,
+    ::testing::Values(HadamardFuncWithSize(&vpx_hadamard_8x8_msa, 8),
+                      HadamardFuncWithSize(&vpx_hadamard_16x16_msa, 16)));
 #endif  // HAVE_MSA
 #endif  // !CONFIG_VP9_HIGHBITDEPTH
 
 #if HAVE_VSX
-INSTANTIATE_TEST_CASE_P(VSX, Hadamard8x8Test,
-                        ::testing::Values(&vpx_hadamard_8x8_vsx));
+INSTANTIATE_TEST_CASE_P(
+    VSX, HadamardLowbdTest,
+    ::testing::Values(HadamardFuncWithSize(&vpx_hadamard_8x8_vsx, 8),
+                      HadamardFuncWithSize(&vpx_hadamard_16x16_vsx, 16)));
 #endif  // HAVE_VSX
 
-class Hadamard16x16Test : public HadamardTestBase {};
+#if CONFIG_VP9_HIGHBITDEPTH
+class HadamardHighbdTest : public HadamardTestBase {
+ protected:
+  virtual int16_t Rand() { return rnd_.Rand13Signed(); }
+};
 
-void HadamardSpeedTest16x16(HadamardFunc const func, int times) {
-  DECLARE_ALIGNED(16, int16_t, input[256]);
-  DECLARE_ALIGNED(16, tran_low_t, output[256]);
-  memset(input, 1, sizeof(input));
-  HadamardSpeedTest("Hadamard16x16", func, input, 16, output, times);
+TEST_P(HadamardHighbdTest, CompareReferenceRandom) { CompareReferenceRandom(); }
+
+TEST_P(HadamardHighbdTest, VaryStride) { VaryStride(); }
+
+TEST_P(HadamardHighbdTest, DISABLED_Speed) {
+  SpeedTest(10);
+  SpeedTest(10000);
+  SpeedTest(10000000);
 }
 
-TEST_P(Hadamard16x16Test, CompareReferenceRandom) {
-  DECLARE_ALIGNED(16, int16_t, a[16 * 16]);
-  DECLARE_ALIGNED(16, tran_low_t, b[16 * 16]);
-  tran_low_t b_ref[16 * 16];
-  for (int i = 0; i < 16 * 16; ++i) {
-    a[i] = rnd_.Rand9Signed();
-  }
-  memset(b, 0, sizeof(b));
-  memset(b_ref, 0, sizeof(b_ref));
-
-  reference_hadamard16x16(a, 16, b_ref);
-  ASM_REGISTER_STATE_CHECK(h_func_(a, 16, b));
-
-  // The order of the output is not important. Sort before checking.
-  std::sort(b, b + 16 * 16);
-  std::sort(b_ref, b_ref + 16 * 16);
-  EXPECT_EQ(0, memcmp(b, b_ref, sizeof(b)));
-}
-
-TEST_P(Hadamard16x16Test, VaryStride) {
-  DECLARE_ALIGNED(16, int16_t, a[16 * 16 * 8]);
-  DECLARE_ALIGNED(16, tran_low_t, b[16 * 16]);
-  tran_low_t b_ref[16 * 16];
-  for (int i = 0; i < 16 * 16 * 8; ++i) {
-    a[i] = rnd_.Rand9Signed();
-  }
-
-  for (int i = 8; i < 64; i += 8) {
-    memset(b, 0, sizeof(b));
-    memset(b_ref, 0, sizeof(b_ref));
-
-    reference_hadamard16x16(a, i, b_ref);
-    ASM_REGISTER_STATE_CHECK(h_func_(a, i, b));
-
-    // The order of the output is not important. Sort before checking.
-    std::sort(b, b + 16 * 16);
-    std::sort(b_ref, b_ref + 16 * 16);
-    EXPECT_EQ(0, memcmp(b, b_ref, sizeof(b)));
-  }
-}
-
-TEST_P(Hadamard16x16Test, DISABLED_Speed) {
-  HadamardSpeedTest16x16(h_func_, 10);
-  HadamardSpeedTest16x16(h_func_, 10000);
-  HadamardSpeedTest16x16(h_func_, 10000000);
-}
-
-INSTANTIATE_TEST_CASE_P(C, Hadamard16x16Test,
-                        ::testing::Values(&vpx_hadamard_16x16_c));
-
-#if HAVE_SSE2
-INSTANTIATE_TEST_CASE_P(SSE2, Hadamard16x16Test,
-                        ::testing::Values(&vpx_hadamard_16x16_sse2));
-#endif  // HAVE_SSE2
+INSTANTIATE_TEST_CASE_P(
+    C, HadamardHighbdTest,
+    ::testing::Values(HadamardFuncWithSize(&vpx_highbd_hadamard_8x8_c, 8),
+                      HadamardFuncWithSize(&vpx_highbd_hadamard_16x16_c, 16),
+                      HadamardFuncWithSize(&vpx_highbd_hadamard_32x32_c, 32)));
 
 #if HAVE_AVX2
-INSTANTIATE_TEST_CASE_P(AVX2, Hadamard16x16Test,
-                        ::testing::Values(&vpx_hadamard_16x16_avx2));
+INSTANTIATE_TEST_CASE_P(
+    AVX2, HadamardHighbdTest,
+    ::testing::Values(HadamardFuncWithSize(&vpx_highbd_hadamard_8x8_avx2, 8),
+                      HadamardFuncWithSize(&vpx_highbd_hadamard_16x16_avx2, 16),
+                      HadamardFuncWithSize(&vpx_highbd_hadamard_32x32_avx2,
+                                           32)));
 #endif  // HAVE_AVX2
 
-#if HAVE_VSX
-INSTANTIATE_TEST_CASE_P(VSX, Hadamard16x16Test,
-                        ::testing::Values(&vpx_hadamard_16x16_vsx));
-#endif  // HAVE_VSX
-
-#if HAVE_NEON
-INSTANTIATE_TEST_CASE_P(NEON, Hadamard16x16Test,
-                        ::testing::Values(&vpx_hadamard_16x16_neon));
-#endif  // HAVE_NEON
-
-#if !CONFIG_VP9_HIGHBITDEPTH
-#if HAVE_MSA
-INSTANTIATE_TEST_CASE_P(MSA, Hadamard16x16Test,
-                        ::testing::Values(&vpx_hadamard_16x16_msa));
-#endif  // HAVE_MSA
-#endif  // !CONFIG_VP9_HIGHBITDEPTH
+#endif  // CONFIG_VP9_HIGHBITDEPTH
 }  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/i420_video_source.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/i420_video_source.h
index 4957382..97473b5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/i420_video_source.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/i420_video_source.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef TEST_I420_VIDEO_SOURCE_H_
-#define TEST_I420_VIDEO_SOURCE_H_
+#ifndef VPX_TEST_I420_VIDEO_SOURCE_H_
+#define VPX_TEST_I420_VIDEO_SOURCE_H_
 #include <cstdio>
 #include <cstdlib>
 #include <string>
@@ -30,4 +30,4 @@
 
 }  // namespace libvpx_test
 
-#endif  // TEST_I420_VIDEO_SOURCE_H_
+#endif  // VPX_TEST_I420_VIDEO_SOURCE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/idct_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/idct_test.cc
index 79a5e9b..3564c0b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/idct_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/idct_test.cc
@@ -72,6 +72,7 @@
 
 TEST_P(IDCTTest, TestAllOnes) {
   input->Set(0);
+  ASSERT_TRUE(input->TopLeftPixel() != NULL);
   // When the first element is '4' it will fill the output buffer with '1'.
   input->TopLeftPixel()[0] = 4;
   predict->Set(0);
@@ -89,6 +90,7 @@
   // Set the transform output to '1' and make sure it gets added to the
   // prediction buffer.
   input->Set(0);
+  ASSERT_TRUE(input->TopLeftPixel() != NULL);
   input->TopLeftPixel()[0] = 4;
   output->Set(0);
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/invalid_file_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/invalid_file_test.cc
index 43a4c69..8b1bc56 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/invalid_file_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/invalid_file_test.cc
@@ -10,6 +10,7 @@
 
 #include <cstdio>
 #include <cstdlib>
+#include <memory>
 #include <string>
 #include <vector>
 #include "third_party/googletest/src/include/gtest/gtest.h"
@@ -89,7 +90,7 @@
     const std::string filename = input.filename;
 
     // Open compressed video file.
-    testing::internal::scoped_ptr<libvpx_test::CompressedVideoSource> video;
+    std::unique_ptr<libvpx_test::CompressedVideoSource> video;
     if (filename.substr(filename.length() - 3, 3) == "ivf") {
       video.reset(new libvpx_test::IVFVideoSource(filename));
     } else if (filename.substr(filename.length() - 4, 4) == "webm") {
@@ -123,7 +124,9 @@
 #if CONFIG_VP8_DECODER
 const DecodeParam kVP8InvalidFileTests[] = {
   { 1, "invalid-bug-1443.ivf" },
+  { 1, "invalid-bug-148271109.ivf" },
   { 1, "invalid-token-partition.ivf" },
+  { 1, "invalid-vp80-00-comprehensive-s17661_r01-05_b6-.ivf" },
 };
 
 VP8_INSTANTIATE_TEST_CASE(InvalidFileTest,
@@ -145,7 +148,7 @@
 // This file will cause a large allocation which is expected to fail in 32-bit
 // environments. Test x86 for coverage purposes as the allocation failure will
 // be in platform agnostic code.
-#if ARCH_X86
+#if VPX_ARCH_X86
   { 1, "invalid-vp90-2-00-quantizer-63.ivf.kf_65527x61446.ivf" },
 #endif
   { 1, "invalid-vp90-2-12-droppable_1.ivf.s3676_r01-05_b6-.ivf" },
@@ -203,6 +206,8 @@
   { 2, "invalid-vp90-2-09-aq2.webm.ivf.s3984_r01-05_b6-.v2.ivf" },
   { 4, "invalid-vp90-2-09-subpixel-00.ivf.s19552_r01-05_b6-.v2.ivf" },
   { 2, "invalid-crbug-629481.webm" },
+  { 3, "invalid-crbug-1558.ivf" },
+  { 4, "invalid-crbug-1562.ivf" },
 };
 
 INSTANTIATE_TEST_CASE_P(
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/ivf_video_source.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/ivf_video_source.h
index 5862d26..22c05ec 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/ivf_video_source.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/ivf_video_source.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef TEST_IVF_VIDEO_SOURCE_H_
-#define TEST_IVF_VIDEO_SOURCE_H_
+#ifndef VPX_TEST_IVF_VIDEO_SOURCE_H_
+#define VPX_TEST_IVF_VIDEO_SOURCE_H_
 #include <cstdio>
 #include <cstdlib>
 #include <new>
@@ -16,7 +16,7 @@
 #include "test/video_source.h"
 
 namespace libvpx_test {
-const unsigned int kCodeBufferSize = 256 * 1024;
+const unsigned int kCodeBufferSize = 256 * 1024 * 1024;
 const unsigned int kIvfFileHdrSize = 32;
 const unsigned int kIvfFrameHdrSize = 12;
 
@@ -103,4 +103,4 @@
 
 }  // namespace libvpx_test
 
-#endif  // TEST_IVF_VIDEO_SOURCE_H_
+#endif  // VPX_TEST_IVF_VIDEO_SOURCE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/keyframe_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/keyframe_test.cc
index ee75f40..582d448 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/keyframe_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/keyframe_test.cc
@@ -38,7 +38,7 @@
     if (kf_do_force_kf_) {
       frame_flags_ = (video->frame() % 3) ? 0 : VPX_EFLAG_FORCE_KF;
     }
-    if (set_cpu_used_ && video->frame() == 1) {
+    if (set_cpu_used_ && video->frame() == 0) {
       encoder->Control(VP8E_SET_CPUUSED, set_cpu_used_);
     }
   }
@@ -68,7 +68,9 @@
 
   // In realtime mode - auto placed keyframes are exceedingly rare,  don't
   // bother with this check   if(GetParam() > 0)
-  if (GET_PARAM(1) > 0) EXPECT_GT(kf_count_, 1);
+  if (GET_PARAM(1) > 0) {
+    EXPECT_GT(kf_count_, 1);
+  }
 }
 
 TEST_P(KeyframeTest, TestDisableKeyframes) {
@@ -128,8 +130,9 @@
 
   // In realtime mode - auto placed keyframes are exceedingly rare,  don't
   // bother with this check
-  if (GET_PARAM(1) > 0)
+  if (GET_PARAM(1) > 0) {
     EXPECT_EQ(2u, kf_pts_list_.size()) << " Not the right number of keyframes ";
+  }
 
   // Verify that keyframes match the file keyframes in the file.
   for (std::vector<vpx_codec_pts_t>::const_iterator iter = kf_pts_list_.begin();
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/lpf_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/lpf_test.cc
index e04b996..9db1181 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/lpf_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/lpf_test.cc
@@ -11,6 +11,7 @@
 #include <cmath>
 #include <cstdlib>
 #include <string>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
@@ -56,8 +57,8 @@
                                const uint8_t *thresh1);
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
-typedef std::tr1::tuple<loop_op_t, loop_op_t, int> loop8_param_t;
-typedef std::tr1::tuple<dual_loop_op_t, dual_loop_op_t, int> dualloop8_param_t;
+typedef std::tuple<loop_op_t, loop_op_t, int> loop8_param_t;
+typedef std::tuple<dual_loop_op_t, dual_loop_op_t, int> dualloop8_param_t;
 
 void InitInput(Pixel *s, Pixel *ref_s, ACMRandom *rnd, const uint8_t limit,
                const int mask, const int32_t p, const int i) {
@@ -74,9 +75,9 @@
         if (j < 1) {
           tmp_s[j] = rnd->Rand16();
         } else if (val & 0x20) {  // Increment by a value within the limit.
-          tmp_s[j] = tmp_s[j - 1] + (limit - 1);
+          tmp_s[j] = static_cast<uint16_t>(tmp_s[j - 1] + (limit - 1));
         } else {  // Decrement by a value within the limit.
-          tmp_s[j] = tmp_s[j - 1] - (limit - 1);
+          tmp_s[j] = static_cast<uint16_t>(tmp_s[j - 1] - (limit - 1));
         }
         j++;
       }
@@ -93,11 +94,11 @@
         if (j < 1) {
           tmp_s[j] = rnd->Rand16();
         } else if (val & 0x20) {  // Increment by a value within the limit.
-          tmp_s[(j % 32) * 32 + j / 32] =
-              tmp_s[((j - 1) % 32) * 32 + (j - 1) / 32] + (limit - 1);
+          tmp_s[(j % 32) * 32 + j / 32] = static_cast<uint16_t>(
+              tmp_s[((j - 1) % 32) * 32 + (j - 1) / 32] + (limit - 1));
         } else {  // Decrement by a value within the limit.
-          tmp_s[(j % 32) * 32 + j / 32] =
-              tmp_s[((j - 1) % 32) * 32 + (j - 1) / 32] - (limit - 1);
+          tmp_s[(j % 32) * 32 + j / 32] = static_cast<uint16_t>(
+              tmp_s[((j - 1) % 32) * 32 + (j - 1) / 32] - (limit - 1));
         }
         j++;
       }
@@ -402,7 +403,7 @@
       << "First failed at test case " << first_failure;
 }
 
-using std::tr1::make_tuple;
+using std::make_tuple;
 
 #if HAVE_SSE2
 #if CONFIG_VP9_HIGHBITDEPTH
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/md5_helper.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/md5_helper.h
index ef310a2..dc28dc6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/md5_helper.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/md5_helper.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef TEST_MD5_HELPER_H_
-#define TEST_MD5_HELPER_H_
+#ifndef VPX_TEST_MD5_HELPER_H_
+#define VPX_TEST_MD5_HELPER_H_
 
 #include "./md5_utils.h"
 #include "vpx/vpx_decoder.h"
@@ -72,4 +72,4 @@
 
 }  // namespace libvpx_test
 
-#endif  // TEST_MD5_HELPER_H_
+#endif  // VPX_TEST_MD5_HELPER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/non_greedy_mv_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/non_greedy_mv_test.cc
new file mode 100644
index 0000000..c78331b
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/non_greedy_mv_test.cc
@@ -0,0 +1,200 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <math.h>
+#include "third_party/googletest/src/include/gtest/gtest.h"
+#include "vp9/encoder/vp9_non_greedy_mv.h"
+#include "./vpx_dsp_rtcd.h"
+
+namespace {
+
+static void read_in_mf(const char *filename, int *rows_ptr, int *cols_ptr,
+                       MV **buffer_ptr) {
+  FILE *input = fopen(filename, "rb");
+  int row, col;
+  int idx;
+
+  ASSERT_NE(input, nullptr) << "Cannot open file: " << filename << std::endl;
+
+  fscanf(input, "%d,%d\n", rows_ptr, cols_ptr);
+
+  *buffer_ptr = (MV *)malloc((*rows_ptr) * (*cols_ptr) * sizeof(MV));
+
+  for (idx = 0; idx < (*rows_ptr) * (*cols_ptr); ++idx) {
+    fscanf(input, "%d,%d;", &row, &col);
+    (*buffer_ptr)[idx].row = row;
+    (*buffer_ptr)[idx].col = col;
+  }
+  fclose(input);
+}
+
+static void read_in_local_var(const char *filename, int *rows_ptr,
+                              int *cols_ptr,
+                              int (**M_ptr)[MF_LOCAL_STRUCTURE_SIZE]) {
+  FILE *input = fopen(filename, "rb");
+  int M00, M01, M10, M11;
+  int idx;
+  int int_type;
+
+  ASSERT_NE(input, nullptr) << "Cannot open file: " << filename << std::endl;
+
+  fscanf(input, "%d,%d\n", rows_ptr, cols_ptr);
+
+  *M_ptr = (int(*)[MF_LOCAL_STRUCTURE_SIZE])malloc(
+      (*rows_ptr) * (*cols_ptr) * MF_LOCAL_STRUCTURE_SIZE * sizeof(int_type));
+
+  for (idx = 0; idx < (*rows_ptr) * (*cols_ptr); ++idx) {
+    fscanf(input, "%d,%d,%d,%d;", &M00, &M01, &M10, &M11);
+    (*M_ptr)[idx][0] = M00;
+    (*M_ptr)[idx][1] = M01;
+    (*M_ptr)[idx][2] = M10;
+    (*M_ptr)[idx][3] = M11;
+  }
+  fclose(input);
+}
+
+static void compare_mf(const MV *mf1, const MV *mf2, int rows, int cols,
+                       float *mean_ptr, float *std_ptr) {
+  float float_type;
+  float *diffs = (float *)malloc(rows * cols * sizeof(float_type));
+  int idx;
+  float accu = 0.0f;
+  for (idx = 0; idx < rows * cols; ++idx) {
+    MV mv1 = mf1[idx];
+    MV mv2 = mf2[idx];
+    float row_diff2 = (float)((mv1.row - mv2.row) * (mv1.row - mv2.row));
+    float col_diff2 = (float)((mv1.col - mv2.col) * (mv1.col - mv2.col));
+    diffs[idx] = sqrt(row_diff2 + col_diff2);
+    accu += diffs[idx];
+  }
+  *mean_ptr = accu / rows / cols;
+  *std_ptr = 0;
+  for (idx = 0; idx < rows * cols; ++idx) {
+    *std_ptr += (diffs[idx] - (*mean_ptr)) * (diffs[idx] - (*mean_ptr));
+  }
+  *std_ptr = sqrt(*std_ptr / rows / cols);
+  free(diffs);
+}
+
+static void load_frame_info(const char *filename,
+                            YV12_BUFFER_CONFIG *ref_frame_ptr) {
+  FILE *input = fopen(filename, "rb");
+  int idx;
+  uint8_t data_type;
+
+  ASSERT_NE(input, nullptr) << "Cannot open file: " << filename << std::endl;
+
+  fscanf(input, "%d,%d\n", &(ref_frame_ptr->y_height),
+         &(ref_frame_ptr->y_width));
+
+  ref_frame_ptr->y_buffer = (uint8_t *)malloc(
+      (ref_frame_ptr->y_width) * (ref_frame_ptr->y_height) * sizeof(data_type));
+
+  for (idx = 0; idx < (ref_frame_ptr->y_width) * (ref_frame_ptr->y_height);
+       ++idx) {
+    int value;
+    fscanf(input, "%d,", &value);
+    ref_frame_ptr->y_buffer[idx] = (uint8_t)value;
+  }
+
+  ref_frame_ptr->y_stride = ref_frame_ptr->y_width;
+  fclose(input);
+}
+
+static int compare_local_var(const int (*local_var1)[MF_LOCAL_STRUCTURE_SIZE],
+                             const int (*local_var2)[MF_LOCAL_STRUCTURE_SIZE],
+                             int rows, int cols) {
+  int diff = 0;
+  int outter_idx, inner_idx;
+  for (outter_idx = 0; outter_idx < rows * cols; ++outter_idx) {
+    for (inner_idx = 0; inner_idx < MF_LOCAL_STRUCTURE_SIZE; ++inner_idx) {
+      diff += abs(local_var1[outter_idx][inner_idx] -
+                  local_var2[outter_idx][inner_idx]);
+    }
+  }
+  return diff / rows / cols;
+}
+
+TEST(non_greedy_mv, smooth_mf) {
+  const char *search_mf_file = "non_greedy_mv_test_files/exhaust_16x16.txt";
+  const char *local_var_file = "non_greedy_mv_test_files/localVar_16x16.txt";
+  const char *estimation_file = "non_greedy_mv_test_files/estimation_16x16.txt";
+  const char *ground_truth_file =
+      "non_greedy_mv_test_files/ground_truth_16x16.txt";
+  BLOCK_SIZE bsize = BLOCK_32X32;
+  MV *search_mf = NULL;
+  MV *smooth_mf = NULL;
+  MV *estimation = NULL;
+  MV *ground_truth = NULL;
+  int(*local_var)[MF_LOCAL_STRUCTURE_SIZE] = NULL;
+  int rows = 0, cols = 0;
+
+  int alpha = 100, max_iter = 100;
+
+  read_in_mf(search_mf_file, &rows, &cols, &search_mf);
+  read_in_local_var(local_var_file, &rows, &cols, &local_var);
+  read_in_mf(estimation_file, &rows, &cols, &estimation);
+  read_in_mf(ground_truth_file, &rows, &cols, &ground_truth);
+
+  float sm_mean, sm_std;
+  float est_mean, est_std;
+
+  smooth_mf = (MV *)malloc(rows * cols * sizeof(MV));
+  vp9_get_smooth_motion_field(search_mf, local_var, rows, cols, bsize, alpha,
+                              max_iter, smooth_mf);
+
+  compare_mf(smooth_mf, ground_truth, rows, cols, &sm_mean, &sm_std);
+  compare_mf(smooth_mf, estimation, rows, cols, &est_mean, &est_std);
+
+  EXPECT_LE(sm_mean, 3);
+  EXPECT_LE(est_mean, 2);
+
+  free(search_mf);
+  free(local_var);
+  free(estimation);
+  free(ground_truth);
+  free(smooth_mf);
+}
+
+TEST(non_greedy_mv, local_var) {
+  const char *ref_frame_file = "non_greedy_mv_test_files/ref_frame_16x16.txt";
+  const char *cur_frame_file = "non_greedy_mv_test_files/cur_frame_16x16.txt";
+  const char *gt_local_var_file = "non_greedy_mv_test_files/localVar_16x16.txt";
+  const char *search_mf_file = "non_greedy_mv_test_files/exhaust_16x16.txt";
+  BLOCK_SIZE bsize = BLOCK_16X16;
+  int(*gt_local_var)[MF_LOCAL_STRUCTURE_SIZE] = NULL;
+  int(*est_local_var)[MF_LOCAL_STRUCTURE_SIZE] = NULL;
+  YV12_BUFFER_CONFIG ref_frame, cur_frame;
+  int rows, cols;
+  MV *search_mf;
+  int int_type;
+  int local_var_diff;
+  vp9_variance_fn_ptr_t fn;
+
+  load_frame_info(ref_frame_file, &ref_frame);
+  load_frame_info(cur_frame_file, &cur_frame);
+  read_in_mf(search_mf_file, &rows, &cols, &search_mf);
+
+  fn.sdf = vpx_sad16x16;
+  est_local_var = (int(*)[MF_LOCAL_STRUCTURE_SIZE])malloc(
+      rows * cols * MF_LOCAL_STRUCTURE_SIZE * sizeof(int_type));
+  vp9_get_local_structure(&cur_frame, &ref_frame, search_mf, &fn, rows, cols,
+                          bsize, est_local_var);
+  read_in_local_var(gt_local_var_file, &rows, &cols, &gt_local_var);
+
+  local_var_diff = compare_local_var(est_local_var, gt_local_var, rows, cols);
+
+  EXPECT_LE(local_var_diff, 1);
+
+  free(gt_local_var);
+  free(est_local_var);
+  free(ref_frame.y_buffer);
+}
+}  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/partial_idct_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/partial_idct_test.cc
index f7b50f5..e66a695 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/partial_idct_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/partial_idct_test.cc
@@ -11,8 +11,8 @@
 #include <math.h>
 #include <stdlib.h>
 #include <string.h>
-
 #include <limits>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
@@ -51,8 +51,8 @@
 }
 #endif
 
-typedef std::tr1::tuple<FwdTxfmFunc, InvTxfmWithBdFunc, InvTxfmWithBdFunc,
-                        TX_SIZE, int, int, int>
+typedef std::tuple<FwdTxfmFunc, InvTxfmWithBdFunc, InvTxfmWithBdFunc, TX_SIZE,
+                   int, int, int>
     PartialInvTxfmParam;
 const int kMaxNumCoeffs = 1024;
 const int kCountTestBlock = 1000;
@@ -324,7 +324,7 @@
       << "Error: partial inverse transform produces different results";
 }
 
-using std::tr1::make_tuple;
+using std::make_tuple;
 
 const PartialInvTxfmParam c_partial_idct_tests[] = {
 #if CONFIG_VP9_HIGHBITDEPTH
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/postproc.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/postproc.sh
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/pp_filter_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/pp_filter_test.cc
index 5a2ade1..1ed261b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/pp_filter_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/pp_filter_test.cc
@@ -11,6 +11,7 @@
 #include "./vpx_config.h"
 #include "./vpx_dsp_rtcd.h"
 #include "test/acm_random.h"
+#include "test/bench.h"
 #include "test/buffer.h"
 #include "test/clear_system_state.h"
 #include "test/register_state_check.h"
@@ -32,7 +33,6 @@
                                       int cols, int flimit);
 
 namespace {
-
 // Compute the filter level used in post proc from the loop filter strength
 int q2mbl(int x) {
   if (x < 20) x = 20;
@@ -42,33 +42,52 @@
 }
 
 class VpxPostProcDownAndAcrossMbRowTest
-    : public ::testing::TestWithParam<VpxPostProcDownAndAcrossMbRowFunc> {
+    : public AbstractBench,
+      public ::testing::TestWithParam<VpxPostProcDownAndAcrossMbRowFunc> {
  public:
+  VpxPostProcDownAndAcrossMbRowTest()
+      : mb_post_proc_down_and_across_(GetParam()) {}
   virtual void TearDown() { libvpx_test::ClearSystemState(); }
+
+ protected:
+  virtual void Run();
+
+  const VpxPostProcDownAndAcrossMbRowFunc mb_post_proc_down_and_across_;
+  // Size of the underlying data block that will be filtered.
+  int block_width_;
+  int block_height_;
+  Buffer<uint8_t> *src_image_;
+  Buffer<uint8_t> *dst_image_;
+  uint8_t *flimits_;
 };
 
+void VpxPostProcDownAndAcrossMbRowTest::Run() {
+  mb_post_proc_down_and_across_(
+      src_image_->TopLeftPixel(), dst_image_->TopLeftPixel(),
+      src_image_->stride(), dst_image_->stride(), block_width_, flimits_, 16);
+}
+
 // Test routine for the VPx post-processing function
 // vpx_post_proc_down_and_across_mb_row_c.
 
 TEST_P(VpxPostProcDownAndAcrossMbRowTest, CheckFilterOutput) {
   // Size of the underlying data block that will be filtered.
-  const int block_width = 16;
-  const int block_height = 16;
+  block_width_ = 16;
+  block_height_ = 16;
 
   // 5-tap filter needs 2 padding rows above and below the block in the input.
-  Buffer<uint8_t> src_image = Buffer<uint8_t>(block_width, block_height, 2);
+  Buffer<uint8_t> src_image = Buffer<uint8_t>(block_width_, block_height_, 2);
   ASSERT_TRUE(src_image.Init());
 
   // Filter extends output block by 8 samples at left and right edges.
   // Though the left padding is only 8 bytes, the assembly code tries to
   // read 16 bytes before the pointer.
   Buffer<uint8_t> dst_image =
-      Buffer<uint8_t>(block_width, block_height, 8, 16, 8, 8);
+      Buffer<uint8_t>(block_width_, block_height_, 8, 16, 8, 8);
   ASSERT_TRUE(dst_image.Init());
 
-  uint8_t *const flimits =
-      reinterpret_cast<uint8_t *>(vpx_memalign(16, block_width));
-  (void)memset(flimits, 255, block_width);
+  flimits_ = reinterpret_cast<uint8_t *>(vpx_memalign(16, block_width_));
+  (void)memset(flimits_, 255, block_width_);
 
   // Initialize pixels in the input:
   //   block pixels to value 1,
@@ -79,37 +98,36 @@
   // Initialize pixels in the output to 99.
   dst_image.Set(99);
 
-  ASM_REGISTER_STATE_CHECK(GetParam()(
+  ASM_REGISTER_STATE_CHECK(mb_post_proc_down_and_across_(
       src_image.TopLeftPixel(), dst_image.TopLeftPixel(), src_image.stride(),
-      dst_image.stride(), block_width, flimits, 16));
+      dst_image.stride(), block_width_, flimits_, 16));
 
-  static const uint8_t kExpectedOutput[block_height] = {
-    4, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 4
-  };
+  static const uint8_t kExpectedOutput[] = { 4, 3, 1, 1, 1, 1, 1, 1,
+                                             1, 1, 1, 1, 1, 1, 3, 4 };
 
   uint8_t *pixel_ptr = dst_image.TopLeftPixel();
-  for (int i = 0; i < block_height; ++i) {
-    for (int j = 0; j < block_width; ++j) {
+  for (int i = 0; i < block_height_; ++i) {
+    for (int j = 0; j < block_width_; ++j) {
       ASSERT_EQ(kExpectedOutput[i], pixel_ptr[j])
           << "at (" << i << ", " << j << ")";
     }
     pixel_ptr += dst_image.stride();
   }
 
-  vpx_free(flimits);
+  vpx_free(flimits_);
 };
 
 TEST_P(VpxPostProcDownAndAcrossMbRowTest, CheckCvsAssembly) {
   // Size of the underlying data block that will be filtered.
   // Y blocks are always a multiple of 16 wide and exactly 16 high. U and V
   // blocks are always a multiple of 8 wide and exactly 8 high.
-  const int block_width = 136;
-  const int block_height = 16;
+  block_width_ = 136;
+  block_height_ = 16;
 
   // 5-tap filter needs 2 padding rows above and below the block in the input.
   // SSE2 reads in blocks of 16. Pad an extra 8 in case the width is not %16.
   Buffer<uint8_t> src_image =
-      Buffer<uint8_t>(block_width, block_height, 2, 2, 10, 2);
+      Buffer<uint8_t>(block_width_, block_height_, 2, 2, 10, 2);
   ASSERT_TRUE(src_image.Init());
 
   // Filter extends output block by 8 samples at left and right edges.
@@ -118,17 +136,17 @@
   // not a problem.
   // SSE2 reads in blocks of 16. Pad an extra 8 in case the width is not %16.
   Buffer<uint8_t> dst_image =
-      Buffer<uint8_t>(block_width, block_height, 8, 8, 16, 8);
+      Buffer<uint8_t>(block_width_, block_height_, 8, 8, 16, 8);
   ASSERT_TRUE(dst_image.Init());
-  Buffer<uint8_t> dst_image_ref = Buffer<uint8_t>(block_width, block_height, 8);
+  Buffer<uint8_t> dst_image_ref =
+      Buffer<uint8_t>(block_width_, block_height_, 8);
   ASSERT_TRUE(dst_image_ref.Init());
 
   // Filter values are set in blocks of 16 for Y and 8 for U/V. Each macroblock
   // can have a different filter. SSE2 assembly reads flimits in blocks of 16 so
   // it must be padded out.
-  const int flimits_width = block_width % 16 ? block_width + 8 : block_width;
-  uint8_t *const flimits =
-      reinterpret_cast<uint8_t *>(vpx_memalign(16, flimits_width));
+  const int flimits_width = block_width_ % 16 ? block_width_ + 8 : block_width_;
+  flimits_ = reinterpret_cast<uint8_t *>(vpx_memalign(16, flimits_width));
 
   ACMRandom rnd;
   rnd.Reset(ACMRandom::DeterministicSeed());
@@ -138,37 +156,78 @@
   src_image.SetPadding(10);
   src_image.Set(&rnd, &ACMRandom::Rand8);
 
-  for (int blocks = 0; blocks < block_width; blocks += 8) {
-    (void)memset(flimits, 0, sizeof(*flimits) * flimits_width);
+  for (int blocks = 0; blocks < block_width_; blocks += 8) {
+    (void)memset(flimits_, 0, sizeof(*flimits_) * flimits_width);
 
     for (int f = 0; f < 255; f++) {
-      (void)memset(flimits + blocks, f, sizeof(*flimits) * 8);
-
+      (void)memset(flimits_ + blocks, f, sizeof(*flimits_) * 8);
       dst_image.Set(0);
       dst_image_ref.Set(0);
 
       vpx_post_proc_down_and_across_mb_row_c(
           src_image.TopLeftPixel(), dst_image_ref.TopLeftPixel(),
-          src_image.stride(), dst_image_ref.stride(), block_width, flimits,
-          block_height);
-      ASM_REGISTER_STATE_CHECK(
-          GetParam()(src_image.TopLeftPixel(), dst_image.TopLeftPixel(),
-                     src_image.stride(), dst_image.stride(), block_width,
-                     flimits, block_height));
+          src_image.stride(), dst_image_ref.stride(), block_width_, flimits_,
+          block_height_);
+      ASM_REGISTER_STATE_CHECK(mb_post_proc_down_and_across_(
+          src_image.TopLeftPixel(), dst_image.TopLeftPixel(),
+          src_image.stride(), dst_image.stride(), block_width_, flimits_,
+          block_height_));
 
       ASSERT_TRUE(dst_image.CheckValues(dst_image_ref));
     }
   }
 
-  vpx_free(flimits);
+  vpx_free(flimits_);
 }
 
+TEST_P(VpxPostProcDownAndAcrossMbRowTest, DISABLED_Speed) {
+  // Size of the underlying data block that will be filtered.
+  block_width_ = 16;
+  block_height_ = 16;
+
+  // 5-tap filter needs 2 padding rows above and below the block in the input.
+  Buffer<uint8_t> src_image = Buffer<uint8_t>(block_width_, block_height_, 2);
+  ASSERT_TRUE(src_image.Init());
+  this->src_image_ = &src_image;
+
+  // Filter extends output block by 8 samples at left and right edges.
+  // Though the left padding is only 8 bytes, the assembly code tries to
+  // read 16 bytes before the pointer.
+  Buffer<uint8_t> dst_image =
+      Buffer<uint8_t>(block_width_, block_height_, 8, 16, 8, 8);
+  ASSERT_TRUE(dst_image.Init());
+  this->dst_image_ = &dst_image;
+
+  flimits_ = reinterpret_cast<uint8_t *>(vpx_memalign(16, block_width_));
+  (void)memset(flimits_, 255, block_width_);
+
+  // Initialize pixels in the input:
+  //   block pixels to value 1,
+  //   border pixels to value 10.
+  src_image.SetPadding(10);
+  src_image.Set(1);
+
+  // Initialize pixels in the output to 99.
+  dst_image.Set(99);
+
+  RunNTimes(INT16_MAX);
+  PrintMedian("16x16");
+
+  vpx_free(flimits_);
+};
+
 class VpxMbPostProcAcrossIpTest
-    : public ::testing::TestWithParam<VpxMbPostProcAcrossIpFunc> {
+    : public AbstractBench,
+      public ::testing::TestWithParam<VpxMbPostProcAcrossIpFunc> {
  public:
+  VpxMbPostProcAcrossIpTest()
+      : rows_(16), cols_(16), mb_post_proc_across_ip_(GetParam()),
+        src_(Buffer<uint8_t>(rows_, cols_, 8, 8, 17, 8)) {}
   virtual void TearDown() { libvpx_test::ClearSystemState(); }
 
  protected:
+  virtual void Run();
+
   void SetCols(unsigned char *s, int rows, int cols, int src_width) {
     for (int r = 0; r < rows; r++) {
       for (int c = 0; c < cols; c++) {
@@ -195,71 +254,67 @@
         GetParam()(s, src_width, rows, cols, filter_level));
     RunComparison(expected_output, s, rows, cols, src_width);
   }
+
+  const int rows_;
+  const int cols_;
+  const VpxMbPostProcAcrossIpFunc mb_post_proc_across_ip_;
+  Buffer<uint8_t> src_;
 };
 
+void VpxMbPostProcAcrossIpTest::Run() {
+  mb_post_proc_across_ip_(src_.TopLeftPixel(), src_.stride(), rows_, cols_,
+                          q2mbl(0));
+}
+
 TEST_P(VpxMbPostProcAcrossIpTest, CheckLowFilterOutput) {
-  const int rows = 16;
-  const int cols = 16;
+  ASSERT_TRUE(src_.Init());
+  src_.SetPadding(10);
+  SetCols(src_.TopLeftPixel(), rows_, cols_, src_.stride());
 
-  Buffer<uint8_t> src = Buffer<uint8_t>(cols, rows, 8, 8, 17, 8);
-  ASSERT_TRUE(src.Init());
-  src.SetPadding(10);
-  SetCols(src.TopLeftPixel(), rows, cols, src.stride());
-
-  Buffer<uint8_t> expected_output = Buffer<uint8_t>(cols, rows, 0);
+  Buffer<uint8_t> expected_output = Buffer<uint8_t>(cols_, rows_, 0);
   ASSERT_TRUE(expected_output.Init());
-  SetCols(expected_output.TopLeftPixel(), rows, cols, expected_output.stride());
+  SetCols(expected_output.TopLeftPixel(), rows_, cols_,
+          expected_output.stride());
 
-  RunFilterLevel(src.TopLeftPixel(), rows, cols, src.stride(), q2mbl(0),
+  RunFilterLevel(src_.TopLeftPixel(), rows_, cols_, src_.stride(), q2mbl(0),
                  expected_output.TopLeftPixel());
 }
 
 TEST_P(VpxMbPostProcAcrossIpTest, CheckMediumFilterOutput) {
-  const int rows = 16;
-  const int cols = 16;
+  ASSERT_TRUE(src_.Init());
+  src_.SetPadding(10);
+  SetCols(src_.TopLeftPixel(), rows_, cols_, src_.stride());
 
-  Buffer<uint8_t> src = Buffer<uint8_t>(cols, rows, 8, 8, 17, 8);
-  ASSERT_TRUE(src.Init());
-  src.SetPadding(10);
-  SetCols(src.TopLeftPixel(), rows, cols, src.stride());
-
-  static const unsigned char kExpectedOutput[cols] = {
+  static const unsigned char kExpectedOutput[] = {
     2, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 13
   };
 
-  RunFilterLevel(src.TopLeftPixel(), rows, cols, src.stride(), q2mbl(70),
+  RunFilterLevel(src_.TopLeftPixel(), rows_, cols_, src_.stride(), q2mbl(70),
                  kExpectedOutput);
 }
 
 TEST_P(VpxMbPostProcAcrossIpTest, CheckHighFilterOutput) {
-  const int rows = 16;
-  const int cols = 16;
+  ASSERT_TRUE(src_.Init());
+  src_.SetPadding(10);
+  SetCols(src_.TopLeftPixel(), rows_, cols_, src_.stride());
 
-  Buffer<uint8_t> src = Buffer<uint8_t>(cols, rows, 8, 8, 17, 8);
-  ASSERT_TRUE(src.Init());
-  src.SetPadding(10);
-  SetCols(src.TopLeftPixel(), rows, cols, src.stride());
-
-  static const unsigned char kExpectedOutput[cols] = {
+  static const unsigned char kExpectedOutput[] = {
     2, 2, 3, 4, 4, 5, 6, 7, 8, 9, 10, 11, 11, 12, 13, 13
   };
 
-  RunFilterLevel(src.TopLeftPixel(), rows, cols, src.stride(), INT_MAX,
+  RunFilterLevel(src_.TopLeftPixel(), rows_, cols_, src_.stride(), INT_MAX,
                  kExpectedOutput);
 
-  SetCols(src.TopLeftPixel(), rows, cols, src.stride());
+  SetCols(src_.TopLeftPixel(), rows_, cols_, src_.stride());
 
-  RunFilterLevel(src.TopLeftPixel(), rows, cols, src.stride(), q2mbl(100),
+  RunFilterLevel(src_.TopLeftPixel(), rows_, cols_, src_.stride(), q2mbl(100),
                  kExpectedOutput);
 }
 
 TEST_P(VpxMbPostProcAcrossIpTest, CheckCvsAssembly) {
-  const int rows = 16;
-  const int cols = 16;
-
-  Buffer<uint8_t> c_mem = Buffer<uint8_t>(cols, rows, 8, 8, 17, 8);
+  Buffer<uint8_t> c_mem = Buffer<uint8_t>(cols_, rows_, 8, 8, 17, 8);
   ASSERT_TRUE(c_mem.Init());
-  Buffer<uint8_t> asm_mem = Buffer<uint8_t>(cols, rows, 8, 8, 17, 8);
+  Buffer<uint8_t> asm_mem = Buffer<uint8_t>(cols_, rows_, 8, 8, 17, 8);
   ASSERT_TRUE(asm_mem.Init());
 
   // When level >= 100, the filter behaves the same as the level = INT_MAX
@@ -267,24 +322,41 @@
   for (int level = 0; level < 100; level++) {
     c_mem.SetPadding(10);
     asm_mem.SetPadding(10);
-    SetCols(c_mem.TopLeftPixel(), rows, cols, c_mem.stride());
-    SetCols(asm_mem.TopLeftPixel(), rows, cols, asm_mem.stride());
+    SetCols(c_mem.TopLeftPixel(), rows_, cols_, c_mem.stride());
+    SetCols(asm_mem.TopLeftPixel(), rows_, cols_, asm_mem.stride());
 
-    vpx_mbpost_proc_across_ip_c(c_mem.TopLeftPixel(), c_mem.stride(), rows,
-                                cols, q2mbl(level));
+    vpx_mbpost_proc_across_ip_c(c_mem.TopLeftPixel(), c_mem.stride(), rows_,
+                                cols_, q2mbl(level));
     ASM_REGISTER_STATE_CHECK(GetParam()(
-        asm_mem.TopLeftPixel(), asm_mem.stride(), rows, cols, q2mbl(level)));
+        asm_mem.TopLeftPixel(), asm_mem.stride(), rows_, cols_, q2mbl(level)));
 
     ASSERT_TRUE(asm_mem.CheckValues(c_mem));
   }
 }
 
+TEST_P(VpxMbPostProcAcrossIpTest, DISABLED_Speed) {
+  ASSERT_TRUE(src_.Init());
+  src_.SetPadding(10);
+
+  SetCols(src_.TopLeftPixel(), rows_, cols_, src_.stride());
+
+  RunNTimes(100000);
+  PrintMedian("16x16");
+}
+
 class VpxMbPostProcDownTest
-    : public ::testing::TestWithParam<VpxMbPostProcDownFunc> {
+    : public AbstractBench,
+      public ::testing::TestWithParam<VpxMbPostProcDownFunc> {
  public:
+  VpxMbPostProcDownTest()
+      : rows_(16), cols_(16), mb_post_proc_down_(GetParam()),
+        src_c_(Buffer<uint8_t>(rows_, cols_, 8, 8, 8, 17)) {}
+
   virtual void TearDown() { libvpx_test::ClearSystemState(); }
 
  protected:
+  virtual void Run();
+
   void SetRows(unsigned char *src_c, int rows, int cols, int src_width) {
     for (int r = 0; r < rows; r++) {
       memset(src_c, r, cols);
@@ -306,22 +378,28 @@
   void RunFilterLevel(unsigned char *s, int rows, int cols, int src_width,
                       int filter_level, const unsigned char *expected_output) {
     ASM_REGISTER_STATE_CHECK(
-        GetParam()(s, src_width, rows, cols, filter_level));
+        mb_post_proc_down_(s, src_width, rows, cols, filter_level));
     RunComparison(expected_output, s, rows, cols, src_width);
   }
+
+  const int rows_;
+  const int cols_;
+  const VpxMbPostProcDownFunc mb_post_proc_down_;
+  Buffer<uint8_t> src_c_;
 };
 
+void VpxMbPostProcDownTest::Run() {
+  mb_post_proc_down_(src_c_.TopLeftPixel(), src_c_.stride(), rows_, cols_,
+                     q2mbl(0));
+}
+
 TEST_P(VpxMbPostProcDownTest, CheckHighFilterOutput) {
-  const int rows = 16;
-  const int cols = 16;
+  ASSERT_TRUE(src_c_.Init());
+  src_c_.SetPadding(10);
 
-  Buffer<uint8_t> src_c = Buffer<uint8_t>(cols, rows, 8, 8, 8, 17);
-  ASSERT_TRUE(src_c.Init());
-  src_c.SetPadding(10);
+  SetRows(src_c_.TopLeftPixel(), rows_, cols_, src_c_.stride());
 
-  SetRows(src_c.TopLeftPixel(), rows, cols, src_c.stride());
-
-  static const unsigned char kExpectedOutput[rows * cols] = {
+  static const unsigned char kExpectedOutput[] = {
     2,  2,  1,  1,  2,  2,  2,  2,  2,  2,  1,  1,  2,  2,  2,  2,  2,  2,  2,
     2,  3,  2,  2,  2,  2,  2,  2,  2,  3,  2,  2,  2,  3,  3,  3,  3,  3,  3,
     3,  3,  3,  3,  3,  3,  3,  3,  3,  3,  3,  4,  4,  3,  4,  4,  3,  3,  3,
@@ -338,26 +416,22 @@
     13, 13, 13, 13, 14, 13, 13, 13, 13
   };
 
-  RunFilterLevel(src_c.TopLeftPixel(), rows, cols, src_c.stride(), INT_MAX,
+  RunFilterLevel(src_c_.TopLeftPixel(), rows_, cols_, src_c_.stride(), INT_MAX,
                  kExpectedOutput);
 
-  src_c.SetPadding(10);
-  SetRows(src_c.TopLeftPixel(), rows, cols, src_c.stride());
-  RunFilterLevel(src_c.TopLeftPixel(), rows, cols, src_c.stride(), q2mbl(100),
-                 kExpectedOutput);
+  src_c_.SetPadding(10);
+  SetRows(src_c_.TopLeftPixel(), rows_, cols_, src_c_.stride());
+  RunFilterLevel(src_c_.TopLeftPixel(), rows_, cols_, src_c_.stride(),
+                 q2mbl(100), kExpectedOutput);
 }
 
 TEST_P(VpxMbPostProcDownTest, CheckMediumFilterOutput) {
-  const int rows = 16;
-  const int cols = 16;
+  ASSERT_TRUE(src_c_.Init());
+  src_c_.SetPadding(10);
 
-  Buffer<uint8_t> src_c = Buffer<uint8_t>(cols, rows, 8, 8, 8, 17);
-  ASSERT_TRUE(src_c.Init());
-  src_c.SetPadding(10);
+  SetRows(src_c_.TopLeftPixel(), rows_, cols_, src_c_.stride());
 
-  SetRows(src_c.TopLeftPixel(), rows, cols, src_c.stride());
-
-  static const unsigned char kExpectedOutput[rows * cols] = {
+  static const unsigned char kExpectedOutput[] = {
     2,  2,  1,  1,  2,  2,  2,  2,  2,  2,  1,  1,  2,  2,  2,  2,  2,  2,  2,
     2,  3,  2,  2,  2,  2,  2,  2,  2,  3,  2,  2,  2,  2,  2,  2,  2,  2,  2,
     2,  2,  2,  2,  2,  2,  2,  2,  2,  2,  3,  3,  3,  3,  3,  3,  3,  3,  3,
@@ -374,67 +448,69 @@
     13, 13, 13, 13, 14, 13, 13, 13, 13
   };
 
-  RunFilterLevel(src_c.TopLeftPixel(), rows, cols, src_c.stride(), q2mbl(70),
-                 kExpectedOutput);
+  RunFilterLevel(src_c_.TopLeftPixel(), rows_, cols_, src_c_.stride(),
+                 q2mbl(70), kExpectedOutput);
 }
 
 TEST_P(VpxMbPostProcDownTest, CheckLowFilterOutput) {
-  const int rows = 16;
-  const int cols = 16;
+  ASSERT_TRUE(src_c_.Init());
+  src_c_.SetPadding(10);
 
-  Buffer<uint8_t> src_c = Buffer<uint8_t>(cols, rows, 8, 8, 8, 17);
-  ASSERT_TRUE(src_c.Init());
-  src_c.SetPadding(10);
+  SetRows(src_c_.TopLeftPixel(), rows_, cols_, src_c_.stride());
 
-  SetRows(src_c.TopLeftPixel(), rows, cols, src_c.stride());
-
-  unsigned char *expected_output = new unsigned char[rows * cols];
+  unsigned char *expected_output = new unsigned char[rows_ * cols_];
   ASSERT_TRUE(expected_output != NULL);
-  SetRows(expected_output, rows, cols, cols);
+  SetRows(expected_output, rows_, cols_, cols_);
 
-  RunFilterLevel(src_c.TopLeftPixel(), rows, cols, src_c.stride(), q2mbl(0),
+  RunFilterLevel(src_c_.TopLeftPixel(), rows_, cols_, src_c_.stride(), q2mbl(0),
                  expected_output);
 
   delete[] expected_output;
 }
 
 TEST_P(VpxMbPostProcDownTest, CheckCvsAssembly) {
-  const int rows = 16;
-  const int cols = 16;
-
   ACMRandom rnd;
   rnd.Reset(ACMRandom::DeterministicSeed());
 
-  Buffer<uint8_t> src_c = Buffer<uint8_t>(cols, rows, 8, 8, 8, 17);
-  ASSERT_TRUE(src_c.Init());
-  Buffer<uint8_t> src_asm = Buffer<uint8_t>(cols, rows, 8, 8, 8, 17);
+  ASSERT_TRUE(src_c_.Init());
+  Buffer<uint8_t> src_asm = Buffer<uint8_t>(cols_, rows_, 8, 8, 8, 17);
   ASSERT_TRUE(src_asm.Init());
 
   for (int level = 0; level < 100; level++) {
-    src_c.SetPadding(10);
+    src_c_.SetPadding(10);
     src_asm.SetPadding(10);
-    src_c.Set(&rnd, &ACMRandom::Rand8);
-    src_asm.CopyFrom(src_c);
+    src_c_.Set(&rnd, &ACMRandom::Rand8);
+    src_asm.CopyFrom(src_c_);
 
-    vpx_mbpost_proc_down_c(src_c.TopLeftPixel(), src_c.stride(), rows, cols,
+    vpx_mbpost_proc_down_c(src_c_.TopLeftPixel(), src_c_.stride(), rows_, cols_,
                            q2mbl(level));
-    ASM_REGISTER_STATE_CHECK(GetParam()(
-        src_asm.TopLeftPixel(), src_asm.stride(), rows, cols, q2mbl(level)));
-    ASSERT_TRUE(src_asm.CheckValues(src_c));
+    ASM_REGISTER_STATE_CHECK(mb_post_proc_down_(
+        src_asm.TopLeftPixel(), src_asm.stride(), rows_, cols_, q2mbl(level)));
+    ASSERT_TRUE(src_asm.CheckValues(src_c_));
 
-    src_c.SetPadding(10);
+    src_c_.SetPadding(10);
     src_asm.SetPadding(10);
-    src_c.Set(&rnd, &ACMRandom::Rand8Extremes);
-    src_asm.CopyFrom(src_c);
+    src_c_.Set(&rnd, &ACMRandom::Rand8Extremes);
+    src_asm.CopyFrom(src_c_);
 
-    vpx_mbpost_proc_down_c(src_c.TopLeftPixel(), src_c.stride(), rows, cols,
+    vpx_mbpost_proc_down_c(src_c_.TopLeftPixel(), src_c_.stride(), rows_, cols_,
                            q2mbl(level));
-    ASM_REGISTER_STATE_CHECK(GetParam()(
-        src_asm.TopLeftPixel(), src_asm.stride(), rows, cols, q2mbl(level)));
-    ASSERT_TRUE(src_asm.CheckValues(src_c));
+    ASM_REGISTER_STATE_CHECK(mb_post_proc_down_(
+        src_asm.TopLeftPixel(), src_asm.stride(), rows_, cols_, q2mbl(level)));
+    ASSERT_TRUE(src_asm.CheckValues(src_c_));
   }
 }
 
+TEST_P(VpxMbPostProcDownTest, DISABLED_Speed) {
+  ASSERT_TRUE(src_c_.Init());
+  src_c_.SetPadding(10);
+
+  SetRows(src_c_.TopLeftPixel(), rows_, cols_, src_c_.stride());
+
+  RunNTimes(100000);
+  PrintMedian("16x16");
+}
+
 INSTANTIATE_TEST_CASE_P(
     C, VpxPostProcDownAndAcrossMbRowTest,
     ::testing::Values(vpx_post_proc_down_and_across_mb_row_c));
@@ -481,4 +557,16 @@
                         ::testing::Values(vpx_mbpost_proc_down_msa));
 #endif  // HAVE_MSA
 
+#if HAVE_VSX
+INSTANTIATE_TEST_CASE_P(
+    VSX, VpxPostProcDownAndAcrossMbRowTest,
+    ::testing::Values(vpx_post_proc_down_and_across_mb_row_vsx));
+
+INSTANTIATE_TEST_CASE_P(VSX, VpxMbPostProcAcrossIpTest,
+                        ::testing::Values(vpx_mbpost_proc_across_ip_vsx));
+
+INSTANTIATE_TEST_CASE_P(VSX, VpxMbPostProcDownTest,
+                        ::testing::Values(vpx_mbpost_proc_down_vsx));
+#endif  // HAVE_VSX
+
 }  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/predict_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/predict_test.cc
index 9f366ae..d40d9c7 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/predict_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/predict_test.cc
@@ -10,30 +10,34 @@
 
 #include <stdlib.h>
 #include <string.h>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
 #include "./vp8_rtcd.h"
 #include "./vpx_config.h"
 #include "test/acm_random.h"
+#include "test/bench.h"
 #include "test/clear_system_state.h"
 #include "test/register_state_check.h"
 #include "test/util.h"
 #include "vpx/vpx_integer.h"
 #include "vpx_mem/vpx_mem.h"
+#include "vpx_ports/msvc.h"
 
 namespace {
 
 using libvpx_test::ACMRandom;
-using std::tr1::make_tuple;
+using std::make_tuple;
 
 typedef void (*PredictFunc)(uint8_t *src_ptr, int src_pixels_per_line,
                             int xoffset, int yoffset, uint8_t *dst_ptr,
                             int dst_pitch);
 
-typedef std::tr1::tuple<int, int, PredictFunc> PredictParam;
+typedef std::tuple<int, int, PredictFunc> PredictParam;
 
-class PredictTestBase : public ::testing::TestWithParam<PredictParam> {
+class PredictTestBase : public AbstractBench,
+                        public ::testing::TestWithParam<PredictParam> {
  public:
   PredictTestBase()
       : width_(GET_PARAM(0)), height_(GET_PARAM(1)), predict_(GET_PARAM(2)),
@@ -204,7 +208,20 @@
       }
     }
   }
-};
+
+  void Run() {
+    for (int xoffset = 0; xoffset < 8; ++xoffset) {
+      for (int yoffset = 0; yoffset < 8; ++yoffset) {
+        if (xoffset == 0 && yoffset == 0) {
+          continue;
+        }
+
+        predict_(&src_[kSrcStride * 2 + 2], kSrcStride, xoffset, yoffset, dst_,
+                 dst_stride_);
+      }
+    }
+  }
+};
 
 class SixtapPredictTest : public PredictTestBase {};
 
@@ -341,6 +358,14 @@
 TEST_P(BilinearPredictTest, TestWithUnalignedDst) {
   TestWithUnalignedDst(vp8_bilinear_predict16x16_c);
 }
+TEST_P(BilinearPredictTest, DISABLED_Speed) {
+  const int kCountSpeedTestBlock = 5000000 / (width_ * height_);
+  RunNTimes(kCountSpeedTestBlock);
+
+  char title[16];
+  snprintf(title, sizeof(title), "%dx%d", width_, height_);
+  PrintMedian(title);
+}
 
 INSTANTIATE_TEST_CASE_P(
     C, BilinearPredictTest,
@@ -356,17 +381,13 @@
                       make_tuple(8, 4, &vp8_bilinear_predict8x4_neon),
                       make_tuple(4, 4, &vp8_bilinear_predict4x4_neon)));
 #endif
-#if HAVE_MMX
-INSTANTIATE_TEST_CASE_P(
-    MMX, BilinearPredictTest,
-    ::testing::Values(make_tuple(8, 4, &vp8_bilinear_predict8x4_mmx),
-                      make_tuple(4, 4, &vp8_bilinear_predict4x4_mmx)));
-#endif
 #if HAVE_SSE2
 INSTANTIATE_TEST_CASE_P(
     SSE2, BilinearPredictTest,
     ::testing::Values(make_tuple(16, 16, &vp8_bilinear_predict16x16_sse2),
-                      make_tuple(8, 8, &vp8_bilinear_predict8x8_sse2)));
+                      make_tuple(8, 8, &vp8_bilinear_predict8x8_sse2),
+                      make_tuple(8, 4, &vp8_bilinear_predict8x4_sse2),
+                      make_tuple(4, 4, &vp8_bilinear_predict4x4_sse2)));
 #endif
 #if HAVE_SSSE3
 INSTANTIATE_TEST_CASE_P(
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/quantize_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/quantize_test.cc
index 40bb264..a749774 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/quantize_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/quantize_test.cc
@@ -9,12 +9,14 @@
  */
 
 #include <string.h>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
-#include "./vpx_config.h"
 #include "./vp8_rtcd.h"
+#include "./vpx_config.h"
 #include "test/acm_random.h"
+#include "test/bench.h"
 #include "test/clear_system_state.h"
 #include "test/register_state_check.h"
 #include "test/util.h"
@@ -33,10 +35,10 @@
 
 typedef void (*VP8Quantize)(BLOCK *b, BLOCKD *d);
 
-typedef std::tr1::tuple<VP8Quantize, VP8Quantize> VP8QuantizeParam;
+typedef std::tuple<VP8Quantize, VP8Quantize> VP8QuantizeParam;
 
 using libvpx_test::ACMRandom;
-using std::tr1::make_tuple;
+using std::make_tuple;
 
 // Create and populate a VP8_COMP instance which has a complete set of
 // quantization inputs as well as a second MACROBLOCKD for output.
@@ -116,7 +118,8 @@
 };
 
 class QuantizeTest : public QuantizeTestBase,
-                     public ::testing::TestWithParam<VP8QuantizeParam> {
+                     public ::testing::TestWithParam<VP8QuantizeParam>,
+                     public AbstractBench {
  protected:
   virtual void SetUp() {
     SetupCompressor();
@@ -124,6 +127,10 @@
     c_quant_ = GET_PARAM(1);
   }
 
+  virtual void Run() {
+    asm_quant_(&vp8_comp_->mb.block[0], &macroblockd_dst_->block[0]);
+  }
+
   void RunComparison() {
     for (int i = 0; i < kNumBlocks; ++i) {
       ASM_REGISTER_STATE_CHECK(
@@ -166,6 +173,13 @@
   }
 }
 
+TEST_P(QuantizeTest, DISABLED_Speed) {
+  FillCoeffRandom();
+
+  RunNTimes(10000000);
+  PrintMedian("vp8 quantize");
+}
+
 #if HAVE_SSE2
 INSTANTIATE_TEST_CASE_P(
     SSE2, QuantizeTest,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/register_state_check.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/register_state_check.h
index a779e5c..4366466 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/register_state_check.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/register_state_check.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef TEST_REGISTER_STATE_CHECK_H_
-#define TEST_REGISTER_STATE_CHECK_H_
+#ifndef VPX_TEST_REGISTER_STATE_CHECK_H_
+#define VPX_TEST_REGISTER_STATE_CHECK_H_
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 #include "./vpx_config.h"
@@ -28,7 +28,7 @@
 //   See platform implementations of RegisterStateCheckXXX for details.
 //
 
-#if defined(_WIN64)
+#if defined(_WIN64) && VPX_ARCH_X86_64
 
 #undef NOMINMAX
 #define NOMINMAX
@@ -138,9 +138,9 @@
 
 }  // namespace libvpx_test
 
-#endif  // _WIN64
+#endif  // _WIN64 && VPX_ARCH_X86_64
 
-#if ARCH_X86 || ARCH_X86_64
+#if VPX_ARCH_X86 || VPX_ARCH_X86_64
 #if defined(__GNUC__)
 
 namespace libvpx_test {
@@ -178,10 +178,10 @@
 }  // namespace libvpx_test
 
 #endif  // __GNUC__
-#endif  // ARCH_X86 || ARCH_X86_64
+#endif  // VPX_ARCH_X86 || VPX_ARCH_X86_64
 
 #ifndef API_REGISTER_STATE_CHECK
 #define API_REGISTER_STATE_CHECK ASM_REGISTER_STATE_CHECK
 #endif
 
-#endif  // TEST_REGISTER_STATE_CHECK_H_
+#endif  // VPX_TEST_REGISTER_STATE_CHECK_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/resize_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/resize_test.cc
index 5f80af6..5f323db5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/resize_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/resize_test.cc
@@ -93,7 +93,21 @@
 
 void ScaleForFrameNumber(unsigned int frame, unsigned int initial_w,
                          unsigned int initial_h, unsigned int *w,
-                         unsigned int *h, int flag_codec) {
+                         unsigned int *h, bool flag_codec,
+                         bool smaller_width_larger_size_) {
+  if (smaller_width_larger_size_) {
+    if (frame < 30) {
+      *w = initial_w;
+      *h = initial_h;
+      return;
+    }
+    if (frame < 100) {
+      *w = initial_w * 7 / 10;
+      *h = initial_h * 16 / 10;
+      return;
+    }
+    return;
+  }
   if (frame < 10) {
     *w = initial_w;
     *h = initial_h;
@@ -248,8 +262,10 @@
   ResizingVideoSource() {
     SetSize(kInitialWidth, kInitialHeight);
     limit_ = 350;
+    smaller_width_larger_size_ = false;
   }
-  int flag_codec_;
+  bool flag_codec_;
+  bool smaller_width_larger_size_;
   virtual ~ResizingVideoSource() {}
 
  protected:
@@ -258,7 +274,7 @@
     unsigned int width;
     unsigned int height;
     ScaleForFrameNumber(frame_, kInitialWidth, kInitialHeight, &width, &height,
-                        flag_codec_);
+                        flag_codec_, smaller_width_larger_size_);
     SetSize(width, height);
     FillFrame();
   }
@@ -304,7 +320,8 @@
 
 TEST_P(ResizeTest, TestExternalResizeWorks) {
   ResizingVideoSource video;
-  video.flag_codec_ = 0;
+  video.flag_codec_ = false;
+  video.smaller_width_larger_size_ = false;
   cfg_.g_lag_in_frames = 0;
   ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
 
@@ -317,7 +334,8 @@
     ASSERT_EQ(info->w, GetFrameWidth(idx));
     ASSERT_EQ(info->h, GetFrameHeight(idx));
     ScaleForFrameNumber(frame, kInitialWidth, kInitialHeight, &expected_w,
-                        &expected_h, 0);
+                        &expected_h, video.flag_codec_,
+                        video.smaller_width_larger_size_);
     EXPECT_EQ(expected_w, info->w)
         << "Frame " << frame << " had unexpected width";
     EXPECT_EQ(expected_h, info->h)
@@ -534,7 +552,8 @@
 
 TEST_P(ResizeRealtimeTest, TestExternalResizeWorks) {
   ResizingVideoSource video;
-  video.flag_codec_ = 1;
+  video.flag_codec_ = true;
+  video.smaller_width_larger_size_ = false;
   DefaultConfig();
   // Disable internal resize for this test.
   cfg_.rc_resize_allowed = 0;
@@ -549,7 +568,36 @@
     unsigned int expected_w;
     unsigned int expected_h;
     ScaleForFrameNumber(frame, kInitialWidth, kInitialHeight, &expected_w,
-                        &expected_h, 1);
+                        &expected_h, video.flag_codec_,
+                        video.smaller_width_larger_size_);
+    EXPECT_EQ(expected_w, info->w)
+        << "Frame " << frame << " had unexpected width";
+    EXPECT_EQ(expected_h, info->h)
+        << "Frame " << frame << " had unexpected height";
+    EXPECT_EQ(static_cast<unsigned int>(0), GetMismatchFrames());
+  }
+}
+
+TEST_P(ResizeRealtimeTest, DISABLED_TestExternalResizeSmallerWidthBiggerSize) {
+  ResizingVideoSource video;
+  video.flag_codec_ = true;
+  video.smaller_width_larger_size_ = true;
+  DefaultConfig();
+  // Disable internal resize for this test.
+  cfg_.rc_resize_allowed = 0;
+  change_bitrate_ = false;
+  mismatch_psnr_ = 0.0;
+  mismatch_nframes_ = 0;
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+
+  for (std::vector<FrameInfo>::const_iterator info = frame_info_list_.begin();
+       info != frame_info_list_.end(); ++info) {
+    const unsigned int frame = static_cast<unsigned>(info->pts);
+    unsigned int expected_w;
+    unsigned int expected_h;
+    ScaleForFrameNumber(frame, kInitialWidth, kInitialHeight, &expected_w,
+                        &expected_h, video.flag_codec_,
+                        video.smaller_width_larger_size_);
     EXPECT_EQ(expected_w, info->w)
         << "Frame " << frame << " had unexpected width";
     EXPECT_EQ(expected_h, info->h)
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/resize_util.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/resize_util.sh
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/sad_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/sad_test.cc
index 67c3c53..e39775c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/sad_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/sad_test.cc
@@ -10,19 +10,21 @@
 
 #include <string.h>
 #include <limits.h>
-#include <stdio.h>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
 #include "./vpx_config.h"
 #include "./vpx_dsp_rtcd.h"
 #include "test/acm_random.h"
+#include "test/bench.h"
 #include "test/clear_system_state.h"
 #include "test/register_state_check.h"
 #include "test/util.h"
 #include "vpx/vpx_codec.h"
 #include "vpx_mem/vpx_mem.h"
 #include "vpx_ports/mem.h"
+#include "vpx_ports/msvc.h"
+#include "vpx_ports/vpx_timer.h"
 
 template <typename Function>
 struct TestParams {
@@ -46,6 +48,12 @@
                              unsigned int *sad_array);
 typedef TestParams<SadMxNx4Func> SadMxNx4Param;
 
+typedef void (*SadMxNx8Func)(const uint8_t *src_ptr, int src_stride,
+                             const uint8_t *ref_ptr, int ref_stride,
+                             unsigned int *sad_array);
+
+typedef TestParams<SadMxNx8Func> SadMxNx8Param;
+
 using libvpx_test::ACMRandom;
 
 namespace {
@@ -84,7 +92,7 @@
 #endif  // CONFIG_VP9_HIGHBITDEPTH
     }
     mask_ = (1 << bit_depth_) - 1;
-    source_stride_ = (params_.width + 31) & ~31;
+    source_stride_ = (params_.width + 63) & ~63;
     reference_stride_ = params_.width * 2;
     rnd_.Reset(ACMRandom::DeterministicSeed());
   }
@@ -108,29 +116,43 @@
 
  protected:
   // Handle blocks up to 4 blocks 64x64 with stride up to 128
-  static const int kDataAlignment = 16;
+  // crbug.com/webm/1660
+  // const[expr] should be sufficient for DECLARE_ALIGNED but early
+  // implementations of c++11 appear to have some issues with it.
+  enum { kDataAlignment = 32 };
   static const int kDataBlockSize = 64 * 128;
   static const int kDataBufferSize = 4 * kDataBlockSize;
 
-  uint8_t *GetReference(int block_idx) const {
+  int GetBlockRefOffset(int block_idx) const {
+    return block_idx * kDataBlockSize;
+  }
+
+  uint8_t *GetReferenceFromOffset(int ref_offset) const {
+    assert((params_.height - 1) * reference_stride_ + params_.width - 1 +
+               ref_offset <
+           kDataBufferSize);
 #if CONFIG_VP9_HIGHBITDEPTH
     if (use_high_bit_depth_) {
       return CONVERT_TO_BYTEPTR(CONVERT_TO_SHORTPTR(reference_data_) +
-                                block_idx * kDataBlockSize);
+                                ref_offset);
     }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
-    return reference_data_ + block_idx * kDataBlockSize;
+    return reference_data_ + ref_offset;
+  }
+
+  uint8_t *GetReference(int block_idx) const {
+    return GetReferenceFromOffset(GetBlockRefOffset(block_idx));
   }
 
   // Sum of Absolute Differences. Given two blocks, calculate the absolute
   // difference between two pixels in the same relative location; accumulate.
-  uint32_t ReferenceSAD(int block_idx) const {
+  uint32_t ReferenceSAD(int ref_offset) const {
     uint32_t sad = 0;
-    const uint8_t *const reference8 = GetReference(block_idx);
+    const uint8_t *const reference8 = GetReferenceFromOffset(ref_offset);
     const uint8_t *const source8 = source_data_;
 #if CONFIG_VP9_HIGHBITDEPTH
     const uint16_t *const reference16 =
-        CONVERT_TO_SHORTPTR(GetReference(block_idx));
+        CONVERT_TO_SHORTPTR(GetReferenceFromOffset(ref_offset));
     const uint16_t *const source16 = CONVERT_TO_SHORTPTR(source_data_);
 #endif  // CONFIG_VP9_HIGHBITDEPTH
     for (int h = 0; h < params_.height; ++h) {
@@ -201,24 +223,28 @@
     }
   }
 
-  void FillRandom(uint8_t *data, int stride) {
+  void FillRandomWH(uint8_t *data, int stride, int w, int h) {
     uint8_t *data8 = data;
 #if CONFIG_VP9_HIGHBITDEPTH
     uint16_t *data16 = CONVERT_TO_SHORTPTR(data);
 #endif  // CONFIG_VP9_HIGHBITDEPTH
-    for (int h = 0; h < params_.height; ++h) {
-      for (int w = 0; w < params_.width; ++w) {
+    for (int r = 0; r < h; ++r) {
+      for (int c = 0; c < w; ++c) {
         if (!use_high_bit_depth_) {
-          data8[h * stride + w] = rnd_.Rand8();
+          data8[r * stride + c] = rnd_.Rand8();
 #if CONFIG_VP9_HIGHBITDEPTH
         } else {
-          data16[h * stride + w] = rnd_.Rand16() & mask_;
+          data16[r * stride + c] = rnd_.Rand16() & mask_;
 #endif  // CONFIG_VP9_HIGHBITDEPTH
         }
       }
     }
   }
 
+  void FillRandom(uint8_t *data, int stride) {
+    FillRandomWH(data, stride, params_.width, params_.height);
+  }
+
   uint32_t mask_;
   vpx_bit_depth_t bit_depth_;
   int source_stride_;
@@ -239,6 +265,30 @@
   ParamType params_;
 };
 
+class SADx8Test : public SADTestBase<SadMxNx8Param> {
+ public:
+  SADx8Test() : SADTestBase(GetParam()) {}
+
+ protected:
+  void SADs(unsigned int *results) const {
+    const uint8_t *reference = GetReferenceFromOffset(0);
+
+    ASM_REGISTER_STATE_CHECK(params_.func(
+        source_data_, source_stride_, reference, reference_stride_, results));
+  }
+
+  void CheckSADs() const {
+    uint32_t reference_sad;
+    DECLARE_ALIGNED(kDataAlignment, uint32_t, exp_sad[8]);
+
+    SADs(exp_sad);
+    for (int offset = 0; offset < 8; ++offset) {
+      reference_sad = ReferenceSAD(offset);
+      EXPECT_EQ(reference_sad, exp_sad[offset]) << "offset " << offset;
+    }
+  }
+};
+
 class SADx4Test : public SADTestBase<SadMxNx4Param> {
  public:
   SADx4Test() : SADTestBase(GetParam()) {}
@@ -253,18 +303,19 @@
   }
 
   void CheckSADs() const {
-    uint32_t reference_sad, exp_sad[4];
+    uint32_t reference_sad;
+    DECLARE_ALIGNED(kDataAlignment, uint32_t, exp_sad[4]);
 
     SADs(exp_sad);
     for (int block = 0; block < 4; ++block) {
-      reference_sad = ReferenceSAD(block);
+      reference_sad = ReferenceSAD(GetBlockRefOffset(block));
 
       EXPECT_EQ(reference_sad, exp_sad[block]) << "block " << block;
     }
   }
 };
 
-class SADTest : public SADTestBase<SadMxNParam> {
+class SADTest : public AbstractBench, public SADTestBase<SadMxNParam> {
  public:
   SADTest() : SADTestBase(GetParam()) {}
 
@@ -279,11 +330,16 @@
   }
 
   void CheckSAD() const {
-    const unsigned int reference_sad = ReferenceSAD(0);
+    const unsigned int reference_sad = ReferenceSAD(GetBlockRefOffset(0));
     const unsigned int exp_sad = SAD(0);
 
     ASSERT_EQ(reference_sad, exp_sad);
   }
+
+  void Run() {
+    params_.func(source_data_, source_stride_, reference_data_,
+                 reference_stride_);
+  }
 };
 
 class SADavgTest : public SADTestBase<SadMxNAvgParam> {
@@ -350,6 +406,17 @@
   source_stride_ = tmp_stride;
 }
 
+TEST_P(SADTest, DISABLED_Speed) {
+  const int kCountSpeedTestBlock = 50000000 / (params_.width * params_.height);
+  FillRandom(source_data_, source_stride_);
+
+  RunNTimes(kCountSpeedTestBlock);
+
+  char title[16];
+  snprintf(title, sizeof(title), "%dx%d", params_.width, params_.height);
+  PrintMedian(title);
+}
+
 TEST_P(SADavgTest, MaxRef) {
   FillConstant(source_data_, source_stride_, 0);
   FillConstant(reference_data_, reference_stride_, mask_);
@@ -463,6 +530,46 @@
   source_data_ = tmp_source_data;
 }
 
+TEST_P(SADx4Test, DISABLED_Speed) {
+  int tmp_stride = reference_stride_;
+  reference_stride_ -= 1;
+  FillRandom(source_data_, source_stride_);
+  FillRandom(GetReference(0), reference_stride_);
+  FillRandom(GetReference(1), reference_stride_);
+  FillRandom(GetReference(2), reference_stride_);
+  FillRandom(GetReference(3), reference_stride_);
+  const int kCountSpeedTestBlock = 500000000 / (params_.width * params_.height);
+  uint32_t reference_sad[4];
+  DECLARE_ALIGNED(kDataAlignment, uint32_t, exp_sad[4]);
+  vpx_usec_timer timer;
+
+  memset(reference_sad, 0, sizeof(reference_sad));
+  SADs(exp_sad);
+  vpx_usec_timer_start(&timer);
+  for (int i = 0; i < kCountSpeedTestBlock; ++i) {
+    for (int block = 0; block < 4; ++block) {
+      reference_sad[block] = ReferenceSAD(GetBlockRefOffset(block));
+    }
+  }
+  vpx_usec_timer_mark(&timer);
+  for (int block = 0; block < 4; ++block) {
+    EXPECT_EQ(reference_sad[block], exp_sad[block]) << "block " << block;
+  }
+  const int elapsed_time =
+      static_cast<int>(vpx_usec_timer_elapsed(&timer) / 1000);
+  printf("sad%dx%dx4 (%2dbit) time: %5d ms\n", params_.width, params_.height,
+         bit_depth_, elapsed_time);
+
+  reference_stride_ = tmp_stride;
+}
+
+TEST_P(SADx8Test, Regular) {
+  FillRandomWH(source_data_, source_stride_, params_.width, params_.height);
+  FillRandomWH(GetReferenceFromOffset(0), reference_stride_, params_.width + 8,
+               params_.height);
+  CheckSADs();
+}
+
 //------------------------------------------------------------------------------
 // C functions
 const SadMxNParam c_tests[] = {
@@ -639,6 +746,24 @@
 };
 INSTANTIATE_TEST_CASE_P(C, SADx4Test, ::testing::ValuesIn(x4d_c_tests));
 
+// TODO(angiebird): implement the marked-down sad functions
+const SadMxNx8Param x8_c_tests[] = {
+  // SadMxNx8Param(64, 64, &vpx_sad64x64x8_c),
+  // SadMxNx8Param(64, 32, &vpx_sad64x32x8_c),
+  // SadMxNx8Param(32, 64, &vpx_sad32x64x8_c),
+  SadMxNx8Param(32, 32, &vpx_sad32x32x8_c),
+  // SadMxNx8Param(32, 16, &vpx_sad32x16x8_c),
+  // SadMxNx8Param(16, 32, &vpx_sad16x32x8_c),
+  SadMxNx8Param(16, 16, &vpx_sad16x16x8_c),
+  SadMxNx8Param(16, 8, &vpx_sad16x8x8_c),
+  SadMxNx8Param(8, 16, &vpx_sad8x16x8_c),
+  SadMxNx8Param(8, 8, &vpx_sad8x8x8_c),
+  // SadMxNx8Param(8, 4, &vpx_sad8x4x8_c),
+  // SadMxNx8Param(4, 8, &vpx_sad4x8x8_c),
+  SadMxNx8Param(4, 4, &vpx_sad4x4x8_c),
+};
+INSTANTIATE_TEST_CASE_P(C, SADx8Test, ::testing::ValuesIn(x8_c_tests));
+
 //------------------------------------------------------------------------------
 // ARM functions
 #if HAVE_NEON
@@ -867,7 +992,15 @@
 #endif  // HAVE_SSSE3
 
 #if HAVE_SSE4_1
-// Only functions are x8, which do not have tests.
+const SadMxNx8Param x8_sse4_1_tests[] = {
+  SadMxNx8Param(16, 16, &vpx_sad16x16x8_sse4_1),
+  SadMxNx8Param(16, 8, &vpx_sad16x8x8_sse4_1),
+  SadMxNx8Param(8, 16, &vpx_sad8x16x8_sse4_1),
+  SadMxNx8Param(8, 8, &vpx_sad8x8x8_sse4_1),
+  SadMxNx8Param(4, 4, &vpx_sad4x4x8_sse4_1),
+};
+INSTANTIATE_TEST_CASE_P(SSE4_1, SADx8Test,
+                        ::testing::ValuesIn(x8_sse4_1_tests));
 #endif  // HAVE_SSE4_1
 
 #if HAVE_AVX2
@@ -894,6 +1027,12 @@
   SadMxNx4Param(32, 32, &vpx_sad32x32x4d_avx2),
 };
 INSTANTIATE_TEST_CASE_P(AVX2, SADx4Test, ::testing::ValuesIn(x4d_avx2_tests));
+
+const SadMxNx8Param x8_avx2_tests[] = {
+  // SadMxNx8Param(64, 64, &vpx_sad64x64x8_c),
+  SadMxNx8Param(32, 32, &vpx_sad32x32x8_avx2),
+};
+INSTANTIATE_TEST_CASE_P(AVX2, SADx8Test, ::testing::ValuesIn(x8_avx2_tests));
 #endif  // HAVE_AVX2
 
 #if HAVE_AVX512
@@ -971,6 +1110,9 @@
   SadMxNParam(16, 32, &vpx_sad16x32_vsx),
   SadMxNParam(16, 16, &vpx_sad16x16_vsx),
   SadMxNParam(16, 8, &vpx_sad16x8_vsx),
+  SadMxNParam(8, 16, &vpx_sad8x16_vsx),
+  SadMxNParam(8, 8, &vpx_sad8x8_vsx),
+  SadMxNParam(8, 4, &vpx_sad8x4_vsx),
 };
 INSTANTIATE_TEST_CASE_P(VSX, SADTest, ::testing::ValuesIn(vsx_tests));
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/set_maps.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/set_maps.sh
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/simple_decoder.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/simple_decoder.sh
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/simple_encode_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/simple_encode_test.cc
new file mode 100644
index 0000000..edaf5bf
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/simple_encode_test.cc
@@ -0,0 +1,374 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <math.h>
+#include <memory>
+#include <vector>
+#include "third_party/googletest/src/include/gtest/gtest.h"
+#include "vp9/simple_encode.h"
+
+namespace vp9 {
+namespace {
+
+// TODO(angiebird): Find a better way to construct encode info
+const int w = 352;
+const int h = 288;
+const int frame_rate_num = 30;
+const int frame_rate_den = 1;
+const int target_bitrate = 1000;
+const int num_frames = 17;
+const char infile_path[] = "bus_352x288_420_f20_b8.yuv";
+
+double GetBitrateInKbps(size_t bit_size, int num_frames, int frame_rate_num,
+                        int frame_rate_den) {
+  return static_cast<double>(bit_size) / num_frames * frame_rate_num /
+         frame_rate_den / 1000.0;
+}
+
+// Returns the number of 4-unit blocks needed to cover |size|.
+// For example, if size is 7, returns 2.
+int GetNumUnit4x4(int size) { return (size + 3) >> 2; }
+
+TEST(SimpleEncode, ComputeFirstPassStats) {
+  SimpleEncode simple_encode(w, h, frame_rate_num, frame_rate_den,
+                             target_bitrate, num_frames, infile_path);
+  simple_encode.ComputeFirstPassStats();
+  std::vector<std::vector<double>> frame_stats =
+      simple_encode.ObserveFirstPassStats();
+  EXPECT_EQ(frame_stats.size(), static_cast<size_t>(num_frames));
+  size_t data_num = frame_stats[0].size();
+  // Read ObserveFirstPassStats before changing FIRSTPASS_STATS.
+  EXPECT_EQ(data_num, static_cast<size_t>(25));
+  for (size_t i = 0; i < frame_stats.size(); ++i) {
+    EXPECT_EQ(frame_stats[i].size(), data_num);
+    // FIRSTPASS_STATS's first element is frame
+    EXPECT_EQ(frame_stats[i][0], i);
+    // FIRSTPASS_STATS's last element is count, and the count is 1 for single
+    // frame stats
+    EXPECT_EQ(frame_stats[i][data_num - 1], 1);
+  }
+}
+
+TEST(SimpleEncode, GetCodingFrameNum) {
+  SimpleEncode simple_encode(w, h, frame_rate_num, frame_rate_den,
+                             target_bitrate, num_frames, infile_path);
+  simple_encode.ComputeFirstPassStats();
+  int num_coding_frames = simple_encode.GetCodingFrameNum();
+  EXPECT_EQ(num_coding_frames, 19);
+}
+
+TEST(SimpleEncode, EncodeFrame) {
+  SimpleEncode simple_encode(w, h, frame_rate_num, frame_rate_den,
+                             target_bitrate, num_frames, infile_path);
+  simple_encode.ComputeFirstPassStats();
+  int num_coding_frames = simple_encode.GetCodingFrameNum();
+  EXPECT_GE(num_coding_frames, num_frames);
+  simple_encode.StartEncode();
+  size_t total_data_bit_size = 0;
+  int coded_show_frame_count = 0;
+  int frame_coding_index = 0;
+  while (coded_show_frame_count < num_frames) {
+    const GroupOfPicture group_of_picture =
+        simple_encode.ObserveGroupOfPicture();
+    const std::vector<EncodeFrameInfo> &encode_frame_list =
+        group_of_picture.encode_frame_list;
+    for (size_t group_index = 0; group_index < encode_frame_list.size();
+         ++group_index) {
+      EncodeFrameResult encode_frame_result;
+      simple_encode.EncodeFrame(&encode_frame_result);
+      EXPECT_EQ(encode_frame_result.show_idx,
+                encode_frame_list[group_index].show_idx);
+      EXPECT_EQ(encode_frame_result.frame_type,
+                encode_frame_list[group_index].frame_type);
+      EXPECT_EQ(encode_frame_list[group_index].coding_index,
+                frame_coding_index);
+      EXPECT_GE(encode_frame_result.psnr, 34)
+          << "The psnr is supposed to be greater than 34 given the "
+             "target_bitrate 1000 kbps";
+      total_data_bit_size += encode_frame_result.coding_data_bit_size;
+      ++frame_coding_index;
+    }
+    coded_show_frame_count += group_of_picture.show_frame_count;
+  }
+  const double bitrate = GetBitrateInKbps(total_data_bit_size, num_frames,
+                                          frame_rate_num, frame_rate_den);
+  const double off_target_threshold = 150;
+  EXPECT_LE(fabs(target_bitrate - bitrate), off_target_threshold);
+  simple_encode.EndEncode();
+}
+
+TEST(SimpleEncode, EncodeFrameWithQuantizeIndex) {
+  SimpleEncode simple_encode(w, h, frame_rate_num, frame_rate_den,
+                             target_bitrate, num_frames, infile_path);
+  simple_encode.ComputeFirstPassStats();
+  int num_coding_frames = simple_encode.GetCodingFrameNum();
+  simple_encode.StartEncode();
+  for (int i = 0; i < num_coding_frames; ++i) {
+    const int assigned_quantize_index = 100 + i;
+    EncodeFrameResult encode_frame_result;
+    simple_encode.EncodeFrameWithQuantizeIndex(&encode_frame_result,
+                                               assigned_quantize_index);
+    EXPECT_EQ(encode_frame_result.quantize_index, assigned_quantize_index);
+  }
+  simple_encode.EndEncode();
+}
+
+TEST(SimpleEncode, EncodeConsistencyTest) {
+  std::vector<int> quantize_index_list;
+  std::vector<uint64_t> ref_sse_list;
+  std::vector<double> ref_psnr_list;
+  std::vector<size_t> ref_bit_size_list;
+  {
+    // The first encode.
+    SimpleEncode simple_encode(w, h, frame_rate_num, frame_rate_den,
+                               target_bitrate, num_frames, infile_path);
+    simple_encode.ComputeFirstPassStats();
+    const int num_coding_frames = simple_encode.GetCodingFrameNum();
+    simple_encode.StartEncode();
+    for (int i = 0; i < num_coding_frames; ++i) {
+      EncodeFrameResult encode_frame_result;
+      simple_encode.EncodeFrame(&encode_frame_result);
+      quantize_index_list.push_back(encode_frame_result.quantize_index);
+      ref_sse_list.push_back(encode_frame_result.sse);
+      ref_psnr_list.push_back(encode_frame_result.psnr);
+      ref_bit_size_list.push_back(encode_frame_result.coding_data_bit_size);
+    }
+    simple_encode.EndEncode();
+  }
+  {
+    // The second encode, using the quantize indexes obtained from the first
+    // encode.
+    SimpleEncode simple_encode(w, h, frame_rate_num, frame_rate_den,
+                               target_bitrate, num_frames, infile_path);
+    simple_encode.ComputeFirstPassStats();
+    const int num_coding_frames = simple_encode.GetCodingFrameNum();
+    EXPECT_EQ(static_cast<size_t>(num_coding_frames),
+              quantize_index_list.size());
+    simple_encode.StartEncode();
+    for (int i = 0; i < num_coding_frames; ++i) {
+      EncodeFrameResult encode_frame_result;
+      simple_encode.EncodeFrameWithQuantizeIndex(&encode_frame_result,
+                                                 quantize_index_list[i]);
+      EXPECT_EQ(encode_frame_result.quantize_index, quantize_index_list[i]);
+      EXPECT_EQ(encode_frame_result.sse, ref_sse_list[i]);
+      EXPECT_DOUBLE_EQ(encode_frame_result.psnr, ref_psnr_list[i]);
+      EXPECT_EQ(encode_frame_result.coding_data_bit_size, ref_bit_size_list[i]);
+    }
+    simple_encode.EndEncode();
+  }
+}
+
+// Test that the information (partition info and motion vector info) stored in
+// the encoder is the same between two encode runs.
+TEST(SimpleEncode, EncodeConsistencyTest2) {
+  const int num_rows_4x4 = GetNumUnit4x4(w);
+  const int num_cols_4x4 = GetNumUnit4x4(h);
+  const int num_units_4x4 = num_rows_4x4 * num_cols_4x4;
+  // The first encode.
+  SimpleEncode simple_encode(w, h, frame_rate_num, frame_rate_den,
+                             target_bitrate, num_frames, infile_path);
+  simple_encode.ComputeFirstPassStats();
+  const int num_coding_frames = simple_encode.GetCodingFrameNum();
+  std::vector<PartitionInfo> partition_info_list(num_units_4x4 *
+                                                 num_coding_frames);
+  std::vector<MotionVectorInfo> motion_vector_info_list(num_units_4x4 *
+                                                        num_coding_frames);
+  simple_encode.StartEncode();
+  for (int i = 0; i < num_coding_frames; ++i) {
+    EncodeFrameResult encode_frame_result;
+    simple_encode.EncodeFrame(&encode_frame_result);
+    for (int j = 0; j < num_rows_4x4 * num_cols_4x4; ++j) {
+      partition_info_list[i * num_units_4x4 + j] =
+          encode_frame_result.partition_info[j];
+      motion_vector_info_list[i * num_units_4x4 + j] =
+          encode_frame_result.motion_vector_info[j];
+    }
+  }
+  simple_encode.EndEncode();
+  // The second encode.
+  SimpleEncode simple_encode_2(w, h, frame_rate_num, frame_rate_den,
+                               target_bitrate, num_frames, infile_path);
+  simple_encode_2.ComputeFirstPassStats();
+  const int num_coding_frames_2 = simple_encode_2.GetCodingFrameNum();
+  simple_encode_2.StartEncode();
+  for (int i = 0; i < num_coding_frames_2; ++i) {
+    EncodeFrameResult encode_frame_result;
+    simple_encode_2.EncodeFrame(&encode_frame_result);
+    for (int j = 0; j < num_rows_4x4 * num_cols_4x4; ++j) {
+      EXPECT_EQ(encode_frame_result.partition_info[j].row,
+                partition_info_list[i * num_units_4x4 + j].row);
+      EXPECT_EQ(encode_frame_result.partition_info[j].column,
+                partition_info_list[i * num_units_4x4 + j].column);
+      EXPECT_EQ(encode_frame_result.partition_info[j].row_start,
+                partition_info_list[i * num_units_4x4 + j].row_start);
+      EXPECT_EQ(encode_frame_result.partition_info[j].column_start,
+                partition_info_list[i * num_units_4x4 + j].column_start);
+      EXPECT_EQ(encode_frame_result.partition_info[j].width,
+                partition_info_list[i * num_units_4x4 + j].width);
+      EXPECT_EQ(encode_frame_result.partition_info[j].height,
+                partition_info_list[i * num_units_4x4 + j].height);
+
+      EXPECT_EQ(encode_frame_result.motion_vector_info[j].mv_count,
+                motion_vector_info_list[i * num_units_4x4 + j].mv_count);
+      EXPECT_EQ(encode_frame_result.motion_vector_info[j].ref_frame[0],
+                motion_vector_info_list[i * num_units_4x4 + j].ref_frame[0]);
+      EXPECT_EQ(encode_frame_result.motion_vector_info[j].ref_frame[1],
+                motion_vector_info_list[i * num_units_4x4 + j].ref_frame[1]);
+      EXPECT_EQ(encode_frame_result.motion_vector_info[j].mv_row[0],
+                motion_vector_info_list[i * num_units_4x4 + j].mv_row[0]);
+      EXPECT_EQ(encode_frame_result.motion_vector_info[j].mv_column[0],
+                motion_vector_info_list[i * num_units_4x4 + j].mv_column[0]);
+      EXPECT_EQ(encode_frame_result.motion_vector_info[j].mv_row[1],
+                motion_vector_info_list[i * num_units_4x4 + j].mv_row[1]);
+      EXPECT_EQ(encode_frame_result.motion_vector_info[j].mv_column[1],
+                motion_vector_info_list[i * num_units_4x4 + j].mv_column[1]);
+    }
+  }
+  simple_encode_2.EndEncode();
+}
+
+// Test that the information stored in the encoder is the same between two
+// encode runs.
+TEST(SimpleEncode, EncodeConsistencyTest3) {
+  std::vector<int> quantize_index_list;
+  const int num_rows_4x4 = GetNumUnit4x4(w);
+  const int num_cols_4x4 = GetNumUnit4x4(h);
+  const int num_units_4x4 = num_rows_4x4 * num_cols_4x4;
+  // The first encode.
+  SimpleEncode simple_encode(w, h, frame_rate_num, frame_rate_den,
+                             target_bitrate, num_frames, infile_path);
+  simple_encode.ComputeFirstPassStats();
+  const int num_coding_frames = simple_encode.GetCodingFrameNum();
+  std::vector<PartitionInfo> partition_info_list(num_units_4x4 *
+                                                 num_coding_frames);
+  simple_encode.StartEncode();
+  for (int i = 0; i < num_coding_frames; ++i) {
+    EncodeFrameResult encode_frame_result;
+    simple_encode.EncodeFrame(&encode_frame_result);
+    quantize_index_list.push_back(encode_frame_result.quantize_index);
+    for (int j = 0; j < num_rows_4x4 * num_cols_4x4; ++j) {
+      partition_info_list[i * num_units_4x4 + j] =
+          encode_frame_result.partition_info[j];
+    }
+  }
+  simple_encode.EndEncode();
+  // The second encode.
+  SimpleEncode simple_encode_2(w, h, frame_rate_num, frame_rate_den,
+                               target_bitrate, num_frames, infile_path);
+  simple_encode_2.ComputeFirstPassStats();
+  const int num_coding_frames_2 = simple_encode_2.GetCodingFrameNum();
+  simple_encode_2.StartEncode();
+  for (int i = 0; i < num_coding_frames_2; ++i) {
+    EncodeFrameResult encode_frame_result;
+    simple_encode_2.EncodeFrameWithQuantizeIndex(&encode_frame_result,
+                                                 quantize_index_list[i]);
+    for (int j = 0; j < num_rows_4x4 * num_cols_4x4; ++j) {
+      EXPECT_EQ(encode_frame_result.partition_info[j].row,
+                partition_info_list[i * num_units_4x4 + j].row);
+      EXPECT_EQ(encode_frame_result.partition_info[j].column,
+                partition_info_list[i * num_units_4x4 + j].column);
+      EXPECT_EQ(encode_frame_result.partition_info[j].row_start,
+                partition_info_list[i * num_units_4x4 + j].row_start);
+      EXPECT_EQ(encode_frame_result.partition_info[j].column_start,
+                partition_info_list[i * num_units_4x4 + j].column_start);
+      EXPECT_EQ(encode_frame_result.partition_info[j].width,
+                partition_info_list[i * num_units_4x4 + j].width);
+      EXPECT_EQ(encode_frame_result.partition_info[j].height,
+                partition_info_list[i * num_units_4x4 + j].height);
+    }
+  }
+  simple_encode_2.EndEncode();
+}
+
+// Encode with default VP9 decision first.
+// Get QPs and arf locations from the first encode.
+// Set external arfs and QPs for the second encode.
+// Expect to get matched results.
+TEST(SimpleEncode, EncodeConsistencyTestUseExternalArfs) {
+  std::vector<int> quantize_index_list;
+  std::vector<uint64_t> ref_sse_list;
+  std::vector<double> ref_psnr_list;
+  std::vector<size_t> ref_bit_size_list;
+  std::vector<int> external_arf_indexes(num_frames, 0);
+  {
+    // The first encode.
+    SimpleEncode simple_encode(w, h, frame_rate_num, frame_rate_den,
+                               target_bitrate, num_frames, infile_path);
+    simple_encode.ComputeFirstPassStats();
+    const int num_coding_frames = simple_encode.GetCodingFrameNum();
+    simple_encode.StartEncode();
+    for (int i = 0; i < num_coding_frames; ++i) {
+      EncodeFrameResult encode_frame_result;
+      simple_encode.EncodeFrame(&encode_frame_result);
+      quantize_index_list.push_back(encode_frame_result.quantize_index);
+      ref_sse_list.push_back(encode_frame_result.sse);
+      ref_psnr_list.push_back(encode_frame_result.psnr);
+      ref_bit_size_list.push_back(encode_frame_result.coding_data_bit_size);
+      if (encode_frame_result.frame_type == kFrameTypeKey) {
+        external_arf_indexes[encode_frame_result.show_idx] = 0;
+      } else if (encode_frame_result.frame_type == kFrameTypeAltRef) {
+        external_arf_indexes[encode_frame_result.show_idx] = 1;
+      } else {
+        // This has to be |= because we can't let an overlay frame overwrite
+        // the arf type for the same frame.
+        external_arf_indexes[encode_frame_result.show_idx] |= 0;
+      }
+    }
+    simple_encode.EndEncode();
+  }
+  {
+    // The second encode, using the quantize indexes obtained from the first
+    // encode. The external arfs are the same as in the first encode.
+    SimpleEncode simple_encode(w, h, frame_rate_num, frame_rate_den,
+                               target_bitrate, num_frames, infile_path);
+    simple_encode.ComputeFirstPassStats();
+    simple_encode.SetExternalGroupOfPicture(external_arf_indexes);
+    const int num_coding_frames = simple_encode.GetCodingFrameNum();
+    EXPECT_EQ(static_cast<size_t>(num_coding_frames),
+              quantize_index_list.size());
+    simple_encode.StartEncode();
+    for (int i = 0; i < num_coding_frames; ++i) {
+      EncodeFrameResult encode_frame_result;
+      simple_encode.EncodeFrameWithQuantizeIndex(&encode_frame_result,
+                                                 quantize_index_list[i]);
+      EXPECT_EQ(encode_frame_result.quantize_index, quantize_index_list[i]);
+      EXPECT_EQ(encode_frame_result.sse, ref_sse_list[i]);
+      EXPECT_DOUBLE_EQ(encode_frame_result.psnr, ref_psnr_list[i]);
+      EXPECT_EQ(encode_frame_result.coding_data_bit_size, ref_bit_size_list[i]);
+    }
+    simple_encode.EndEncode();
+  }
+}
+
+TEST(SimpleEncode, GetEncodeFrameInfo) {
+  // Makes sure that the encode_frame_info obtained from GetEncodeFrameInfo()
+  // matches the counterpart in encode_frame_result obtained from EncodeFrame()
+  SimpleEncode simple_encode(w, h, frame_rate_num, frame_rate_den,
+                             target_bitrate, num_frames, infile_path);
+  simple_encode.ComputeFirstPassStats();
+  const int num_coding_frames = simple_encode.GetCodingFrameNum();
+  simple_encode.StartEncode();
+  for (int i = 0; i < num_coding_frames; ++i) {
+    EncodeFrameInfo encode_frame_info = simple_encode.GetNextEncodeFrameInfo();
+    EncodeFrameResult encode_frame_result;
+    simple_encode.EncodeFrame(&encode_frame_result);
+    EXPECT_EQ(encode_frame_info.show_idx, encode_frame_result.show_idx);
+    EXPECT_EQ(encode_frame_info.frame_type, encode_frame_result.frame_type);
+  }
+  simple_encode.EndEncode();
+}
+
+TEST(SimpleEncode, GetFramePixelCount) {
+  SimpleEncode simple_encode(w, h, frame_rate_num, frame_rate_den,
+                             target_bitrate, num_frames, infile_path);
+  EXPECT_EQ(simple_encode.GetFramePixelCount(),
+            static_cast<uint64_t>(w * h * 3 / 2));
+}
+
+}  // namespace
+}  // namespace vp9
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/simple_encoder.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/simple_encoder.sh
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/stress.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/stress.sh
old mode 100644
new mode 100755
index a899c80..fdec764
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/stress.sh
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/stress.sh
@@ -30,7 +30,7 @@
 # Download a file from the url and check its sha1sum.
 download_and_check_file() {
   # Get the file from the file path.
-  local readonly root="${1#${LIBVPX_TEST_DATA_PATH}/}"
+  local root="${1#${LIBVPX_TEST_DATA_PATH}/}"
 
  # Download the file using curl. Trap to ensure no partial file is left behind.
   (trap "rm -f $1" INT TERM \
@@ -72,13 +72,13 @@
 # This function runs tests on libvpx that run multiple encodes and decodes
 # in parallel in hopes of catching synchronization and/or threading issues.
 stress() {
-  local readonly decoder="$(vpx_tool_path vpxdec)"
-  local readonly encoder="$(vpx_tool_path vpxenc)"
-  local readonly codec="$1"
-  local readonly webm="$2"
-  local readonly decode_count="$3"
-  local readonly threads="$4"
-  local readonly enc_args="$5"
+  local decoder="$(vpx_tool_path vpxdec)"
+  local encoder="$(vpx_tool_path vpxenc)"
+  local codec="$1"
+  local webm="$2"
+  local decode_count="$3"
+  local threads="$4"
+  local enc_args="$5"
   local pids=""
   local rt_max_jobs=${STRESS_RT_MAX_JOBS:-5}
   local onepass_max_jobs=${STRESS_ONEPASS_MAX_JOBS:-5}
@@ -144,6 +144,19 @@
   fi
 }
 
+vp8_stress_test_token_parititions() {
+  local vp8_max_jobs=${STRESS_VP8_DECODE_MAX_JOBS:-40}
+  if [ "$(vp8_decode_available)" = "yes" -a \
+       "$(vp8_encode_available)" = "yes" ]; then
+    for threads in 2 4 8; do
+      for token_partitions in 1 2 3; do
+        stress vp8 "${VP8}" "${vp8_max_jobs}" ${threads} \
+          "--token-parts=$token_partitions"
+      done
+    done
+  fi
+}
+
 vp9_stress() {
   local vp9_max_jobs=${STRESS_VP9_DECODE_MAX_JOBS:-25}
 
@@ -154,16 +167,17 @@
 }
 
 vp9_stress_test() {
-  for threads in 4 8 100; do
+  for threads in 4 8 64; do
     vp9_stress "$threads" "--row-mt=0"
   done
 }
 
 vp9_stress_test_row_mt() {
-  for threads in 4 8 100; do
+  for threads in 4 8 64; do
     vp9_stress "$threads" "--row-mt=1"
   done
 }
 
 run_tests stress_verify_environment \
-  "vp8_stress_test vp9_stress_test vp9_stress_test_row_mt"
+  "vp8_stress_test vp8_stress_test_token_parititions
+   vp9_stress_test vp9_stress_test_row_mt"
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/sum_squares_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/sum_squares_test.cc
index e9d2efa..d2c70f4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/sum_squares_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/sum_squares_test.cc
@@ -11,6 +11,7 @@
 #include <cmath>
 #include <cstdlib>
 #include <string>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
@@ -28,7 +29,7 @@
 const int kNumIterations = 10000;
 
 typedef uint64_t (*SSI16Func)(const int16_t *src, int stride, int size);
-typedef std::tr1::tuple<SSI16Func, SSI16Func> SumSquaresParam;
+typedef std::tuple<SSI16Func, SSI16Func> SumSquaresParam;
 
 class SumSquaresTest : public ::testing::TestWithParam<SumSquaresParam> {
  public:
@@ -102,7 +103,14 @@
   }
 }
 
-using std::tr1::make_tuple;
+using std::make_tuple;
+
+#if HAVE_NEON
+INSTANTIATE_TEST_CASE_P(
+    NEON, SumSquaresTest,
+    ::testing::Values(make_tuple(&vpx_sum_squares_2d_i16_c,
+                                 &vpx_sum_squares_2d_i16_neon)));
+#endif  // HAVE_NEON
 
 #if HAVE_SSE2
 INSTANTIATE_TEST_CASE_P(
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/superframe_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/superframe_test.cc
index 421dfcc..8c8d1ae 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/superframe_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/superframe_test.cc
@@ -8,6 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 #include <climits>
+#include <tuple>
+
 #include "third_party/googletest/src/include/gtest/gtest.h"
 #include "test/codec_factory.h"
 #include "test/encode_test_driver.h"
@@ -18,7 +20,7 @@
 
 const int kTestMode = 0;
 
-typedef std::tr1::tuple<libvpx_test::TestMode, int> SuperframeTestParam;
+typedef std::tuple<libvpx_test::TestMode, int> SuperframeTestParam;
 
 class SuperframeTest
     : public ::libvpx_test::EncoderTest,
@@ -31,7 +33,7 @@
   virtual void SetUp() {
     InitializeConfig();
     const SuperframeTestParam input = GET_PARAM(1);
-    const libvpx_test::TestMode mode = std::tr1::get<kTestMode>(input);
+    const libvpx_test::TestMode mode = std::get<kTestMode>(input);
     SetMode(mode);
     sf_count_ = 0;
     sf_count_max_ = INT_MAX;
@@ -41,7 +43,7 @@
 
   virtual void PreEncodeFrameHook(libvpx_test::VideoSource *video,
                                   libvpx_test::Encoder *encoder) {
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       encoder->Control(VP8E_SET_ENABLEAUTOALTREF, 1);
     }
   }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/svc_datarate_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/svc_datarate_test.cc
new file mode 100644
index 0000000..f995e4c
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/svc_datarate_test.cc
@@ -0,0 +1,1509 @@
+/*
+ *  Copyright (c) 2012 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+#include "./vpx_config.h"
+#include "third_party/googletest/src/include/gtest/gtest.h"
+#include "test/codec_factory.h"
+#include "test/encode_test_driver.h"
+#include "test/i420_video_source.h"
+#include "test/svc_test.h"
+#include "test/util.h"
+#include "test/y4m_video_source.h"
+#include "vp9/common/vp9_onyxc_int.h"
+#include "vpx/vpx_codec.h"
+#include "vpx_ports/bitops.h"
+
+namespace svc_test {
+namespace {
+
+typedef enum {
+  // Inter-layer prediction is on for all frames.
+  INTER_LAYER_PRED_ON,
+  // Inter-layer prediction is off on all frames.
+  INTER_LAYER_PRED_OFF,
+  // Inter-layer prediction is off on non-key frames and non-sync frames.
+  INTER_LAYER_PRED_OFF_NONKEY,
+  // Inter-layer prediction is on for all frames, but constrained such
+  // that any spatial layer S (> 0) can only predict from the previous
+  // spatial layer S-1, within the same superframe.
+  INTER_LAYER_PRED_ON_CONSTRAINED
+} INTER_LAYER_PRED;
+
+class DatarateOnePassCbrSvc : public OnePassCbrSvc {
+ public:
+  explicit DatarateOnePassCbrSvc(const ::libvpx_test::CodecFactory *codec)
+      : OnePassCbrSvc(codec) {
+    inter_layer_pred_mode_ = 0;
+  }
+
+ protected:
+  virtual ~DatarateOnePassCbrSvc() {}
+
+  virtual void ResetModel() {
+    last_pts_ = 0;
+    duration_ = 0.0;
+    mismatch_psnr_ = 0.0;
+    mismatch_nframes_ = 0;
+    denoiser_on_ = 0;
+    tune_content_ = 0;
+    base_speed_setting_ = 5;
+    spatial_layer_id_ = 0;
+    temporal_layer_id_ = 0;
+    update_pattern_ = 0;
+    memset(bits_in_buffer_model_, 0, sizeof(bits_in_buffer_model_));
+    memset(bits_total_, 0, sizeof(bits_total_));
+    memset(layer_target_avg_bandwidth_, 0, sizeof(layer_target_avg_bandwidth_));
+    dynamic_drop_layer_ = false;
+    single_layer_resize_ = false;
+    change_bitrate_ = false;
+    last_pts_ref_ = 0;
+    middle_bitrate_ = 0;
+    top_bitrate_ = 0;
+    superframe_count_ = -1;
+    key_frame_spacing_ = 9999;
+    num_nonref_frames_ = 0;
+    layer_framedrop_ = 0;
+    force_key_ = 0;
+    force_key_test_ = 0;
+    insert_layer_sync_ = 0;
+    layer_sync_on_base_ = 0;
+    force_intra_only_frame_ = 0;
+    superframe_has_intra_only_ = 0;
+    use_post_encode_drop_ = 0;
+    denoiser_off_on_ = false;
+    denoiser_enable_layers_ = false;
+    num_resize_down_ = 0;
+    num_resize_up_ = 0;
+    for (int i = 0; i < VPX_MAX_LAYERS; i++) {
+      prev_frame_width[i] = 320;
+      prev_frame_height[i] = 240;
+    }
+  }
+  virtual void BeginPassHook(unsigned int /*pass*/) {}
+
+  // Example pattern for spatial layers and 2 temporal layers used in the
+  // bypass/flexible mode. The pattern corresponds to the pattern
+  // VP9E_TEMPORAL_LAYERING_MODE_0101 (temporal_layering_mode == 2) used in
+  // non-flexible mode, except that we disable inter-layer prediction.
+  void set_frame_flags_bypass_mode(
+      int tl, int num_spatial_layers, int is_key_frame,
+      vpx_svc_ref_frame_config_t *ref_frame_config) {
+    for (int sl = 0; sl < num_spatial_layers; ++sl)
+      ref_frame_config->update_buffer_slot[sl] = 0;
+
+    for (int sl = 0; sl < num_spatial_layers; ++sl) {
+      if (tl == 0) {
+        ref_frame_config->lst_fb_idx[sl] = sl;
+        if (sl) {
+          if (is_key_frame) {
+            ref_frame_config->lst_fb_idx[sl] = sl - 1;
+            ref_frame_config->gld_fb_idx[sl] = sl;
+          } else {
+            ref_frame_config->gld_fb_idx[sl] = sl - 1;
+          }
+        } else {
+          ref_frame_config->gld_fb_idx[sl] = 0;
+        }
+        ref_frame_config->alt_fb_idx[sl] = 0;
+      } else if (tl == 1) {
+        ref_frame_config->lst_fb_idx[sl] = sl;
+        ref_frame_config->gld_fb_idx[sl] =
+            VPXMIN(REF_FRAMES - 1, num_spatial_layers + sl - 1);
+        ref_frame_config->alt_fb_idx[sl] =
+            VPXMIN(REF_FRAMES - 1, num_spatial_layers + sl);
+      }
+      if (!tl) {
+        if (!sl) {
+          ref_frame_config->reference_last[sl] = 1;
+          ref_frame_config->reference_golden[sl] = 0;
+          ref_frame_config->reference_alt_ref[sl] = 0;
+          ref_frame_config->update_buffer_slot[sl] |=
+              1 << ref_frame_config->lst_fb_idx[sl];
+        } else {
+          if (is_key_frame) {
+            ref_frame_config->reference_last[sl] = 1;
+            ref_frame_config->reference_golden[sl] = 0;
+            ref_frame_config->reference_alt_ref[sl] = 0;
+            ref_frame_config->update_buffer_slot[sl] |=
+                1 << ref_frame_config->gld_fb_idx[sl];
+          } else {
+            ref_frame_config->reference_last[sl] = 1;
+            ref_frame_config->reference_golden[sl] = 0;
+            ref_frame_config->reference_alt_ref[sl] = 0;
+            ref_frame_config->update_buffer_slot[sl] |=
+                1 << ref_frame_config->lst_fb_idx[sl];
+          }
+        }
+      } else if (tl == 1) {
+        if (!sl) {
+          ref_frame_config->reference_last[sl] = 1;
+          ref_frame_config->reference_golden[sl] = 0;
+          ref_frame_config->reference_alt_ref[sl] = 0;
+          ref_frame_config->update_buffer_slot[sl] |=
+              1 << ref_frame_config->alt_fb_idx[sl];
+        } else {
+          ref_frame_config->reference_last[sl] = 1;
+          ref_frame_config->reference_golden[sl] = 0;
+          ref_frame_config->reference_alt_ref[sl] = 0;
+          ref_frame_config->update_buffer_slot[sl] |=
+              1 << ref_frame_config->alt_fb_idx[sl];
+        }
+      }
+    }
+  }
+
+  void CheckLayerRateTargeting(int num_spatial_layers, int num_temporal_layers,
+                               double thresh_overshoot,
+                               double thresh_undershoot) const {
+    for (int sl = 0; sl < num_spatial_layers; ++sl)
+      for (int tl = 0; tl < num_temporal_layers; ++tl) {
+        const int layer = sl * num_temporal_layers + tl;
+        ASSERT_GE(cfg_.layer_target_bitrate[layer],
+                  file_datarate_[layer] * thresh_overshoot)
+            << " The datarate for the file exceeds the target by too much!";
+        ASSERT_LE(cfg_.layer_target_bitrate[layer],
+                  file_datarate_[layer] * thresh_undershoot)
+            << " The datarate for the file is lower than the target by too "
+               "much!";
+      }
+  }
+
+  virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
+                                  ::libvpx_test::Encoder *encoder) {
+    PreEncodeFrameHookSetup(video, encoder);
+
+    if (video->frame() == 0) {
+      if (force_intra_only_frame_) {
+        // The decoder sets the color_space for intra-only frames
+        // to BT_601 (see line 1810 in vp9_decodeframe.c).
+        // So set it here in these tests to avoid an encoder-decoder
+        // mismatch check on the color space setting.
+        encoder->Control(VP9E_SET_COLOR_SPACE, VPX_CS_BT_601);
+      }
+      encoder->Control(VP9E_SET_NOISE_SENSITIVITY, denoiser_on_);
+      encoder->Control(VP9E_SET_TUNE_CONTENT, tune_content_);
+      encoder->Control(VP9E_SET_SVC_INTER_LAYER_PRED, inter_layer_pred_mode_);
+
+      if (layer_framedrop_) {
+        vpx_svc_frame_drop_t svc_drop_frame;
+        svc_drop_frame.framedrop_mode = LAYER_DROP;
+        for (int i = 0; i < number_spatial_layers_; i++)
+          svc_drop_frame.framedrop_thresh[i] = 30;
+        svc_drop_frame.max_consec_drop = 30;
+        encoder->Control(VP9E_SET_SVC_FRAME_DROP_LAYER, &svc_drop_frame);
+      }
+
+      if (use_post_encode_drop_) {
+        encoder->Control(VP9E_SET_POSTENCODE_DROP, use_post_encode_drop_);
+      }
+    }
+
+    if (denoiser_off_on_) {
+      encoder->Control(VP9E_SET_AQ_MODE, 3);
+      // Set inter_layer_pred to INTER_LAYER_PRED_OFF_NONKEY (K-SVC).
+      encoder->Control(VP9E_SET_SVC_INTER_LAYER_PRED, 2);
+      if (!denoiser_enable_layers_) {
+        if (video->frame() == 0)
+          encoder->Control(VP9E_SET_NOISE_SENSITIVITY, 0);
+        else if (video->frame() == 100)
+          encoder->Control(VP9E_SET_NOISE_SENSITIVITY, 1);
+      } else {
+        // Cumulative bitrates for top spatial layers, for
+        // 3 temporal layers.
+        if (video->frame() == 0) {
+          encoder->Control(VP9E_SET_NOISE_SENSITIVITY, 0);
+          // Change layer bitrates to set the top spatial layer to 0.
+          // This is for 3 spatial and 3 temporal layers.
+          // This will trigger skip encoding/dropping of the top spatial layer.
+          cfg_.rc_target_bitrate -= cfg_.layer_target_bitrate[8];
+          for (int i = 0; i < 3; i++)
+            bitrate_sl3_[i] = cfg_.layer_target_bitrate[i + 6];
+          cfg_.layer_target_bitrate[6] = 0;
+          cfg_.layer_target_bitrate[7] = 0;
+          cfg_.layer_target_bitrate[8] = 0;
+          encoder->Config(&cfg_);
+        } else if (video->frame() == 100) {
+          // Change layer bitrates to non-zero on the top spatial layer.
+          // This will re-enable encoding of the top spatial layer
+          // on the key frame (period = 100).
+          for (int i = 0; i < 3; i++)
+            cfg_.layer_target_bitrate[i + 6] = bitrate_sl3_[i];
+          cfg_.rc_target_bitrate += cfg_.layer_target_bitrate[8];
+          encoder->Config(&cfg_);
+        } else if (video->frame() == 120) {
+          // Enable denoiser and top spatial layer after key frame (period is
+          // 100).
+          encoder->Control(VP9E_SET_NOISE_SENSITIVITY, 1);
+        }
+      }
+    }
+
+    if (update_pattern_ && video->frame() >= 100) {
+      vpx_svc_layer_id_t layer_id;
+      if (video->frame() == 100) {
+        cfg_.temporal_layering_mode = VP9E_TEMPORAL_LAYERING_MODE_BYPASS;
+        encoder->Config(&cfg_);
+      }
+      // Set layer id since the pattern changed.
+      layer_id.spatial_layer_id = 0;
+      layer_id.temporal_layer_id = (video->frame() % 2 != 0);
+      temporal_layer_id_ = layer_id.temporal_layer_id;
+      for (int i = 0; i < number_spatial_layers_; i++)
+        layer_id.temporal_layer_id_per_spatial[i] = temporal_layer_id_;
+      encoder->Control(VP9E_SET_SVC_LAYER_ID, &layer_id);
+      set_frame_flags_bypass_mode(layer_id.temporal_layer_id,
+                                  number_spatial_layers_, 0, &ref_frame_config);
+      encoder->Control(VP9E_SET_SVC_REF_FRAME_CONFIG, &ref_frame_config);
+    }
+
+    if (change_bitrate_ && video->frame() == 200) {
+      duration_ = (last_pts_ + 1) * timebase_;
+      for (int sl = 0; sl < number_spatial_layers_; ++sl) {
+        for (int tl = 0; tl < number_temporal_layers_; ++tl) {
+          const int layer = sl * number_temporal_layers_ + tl;
+          const double file_size_in_kb = bits_total_[layer] / 1000.;
+          file_datarate_[layer] = file_size_in_kb / duration_;
+        }
+      }
+
+      CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_,
+                              0.78, 1.15);
+
+      memset(file_datarate_, 0, sizeof(file_datarate_));
+      memset(bits_total_, 0, sizeof(bits_total_));
+      int64_t bits_in_buffer_model_tmp[VPX_MAX_LAYERS];
+      last_pts_ref_ = last_pts_;
+      // Set new target bitrate.
+      cfg_.rc_target_bitrate = cfg_.rc_target_bitrate >> 1;
+      // Buffer level should not reset on dynamic bitrate change.
+      memcpy(bits_in_buffer_model_tmp, bits_in_buffer_model_,
+             sizeof(bits_in_buffer_model_));
+      AssignLayerBitrates();
+      memcpy(bits_in_buffer_model_, bits_in_buffer_model_tmp,
+             sizeof(bits_in_buffer_model_));
+
+      // Change config to update encoder with new bitrate configuration.
+      encoder->Config(&cfg_);
+    }
+
+    if (dynamic_drop_layer_ && !single_layer_resize_) {
+      if (video->frame() == 0) {
+        // Change layer bitrates to set top layers to 0. This will trigger skip
+        // encoding/dropping of top two spatial layers.
+        cfg_.rc_target_bitrate -=
+            (cfg_.layer_target_bitrate[1] + cfg_.layer_target_bitrate[2]);
+        middle_bitrate_ = cfg_.layer_target_bitrate[1];
+        top_bitrate_ = cfg_.layer_target_bitrate[2];
+        cfg_.layer_target_bitrate[1] = 0;
+        cfg_.layer_target_bitrate[2] = 0;
+        encoder->Config(&cfg_);
+      } else if (video->frame() == 50) {
+        // Change layer bitrates back to non-zero on the two top spatial
+        // layers. This will re-enable encoding of those layers.
+        cfg_.layer_target_bitrate[1] = middle_bitrate_;
+        cfg_.layer_target_bitrate[2] = top_bitrate_;
+        cfg_.rc_target_bitrate +=
+            cfg_.layer_target_bitrate[2] + cfg_.layer_target_bitrate[1];
+        encoder->Config(&cfg_);
+      } else if (video->frame() == 100) {
+        // Change layer bitrates to set top layers to 0. This will trigger skip
+        // encoding/dropping of top two spatial layers.
+        cfg_.rc_target_bitrate -=
+            (cfg_.layer_target_bitrate[1] + cfg_.layer_target_bitrate[2]);
+        middle_bitrate_ = cfg_.layer_target_bitrate[1];
+        top_bitrate_ = cfg_.layer_target_bitrate[2];
+        cfg_.layer_target_bitrate[1] = 0;
+        cfg_.layer_target_bitrate[2] = 0;
+        encoder->Config(&cfg_);
+      } else if (video->frame() == 150) {
+        // Change layer bitrate on second layer to non-zero to start
+        // encoding it again.
+        cfg_.layer_target_bitrate[1] = middle_bitrate_;
+        cfg_.rc_target_bitrate += cfg_.layer_target_bitrate[1];
+        encoder->Config(&cfg_);
+      } else if (video->frame() == 200) {
+        // Change layer bitrate on top layer to non-zero to start
+        // encoding it again.
+        cfg_.layer_target_bitrate[2] = top_bitrate_;
+        cfg_.rc_target_bitrate += cfg_.layer_target_bitrate[2];
+        encoder->Config(&cfg_);
+      }
+    } else if (dynamic_drop_layer_ && single_layer_resize_) {
+      // Change layer bitrates to set top layers to 0. This will trigger skip
+      // encoding/dropping of top spatial layers.
+      if (video->frame() == 10) {
+        cfg_.rc_target_bitrate -=
+            (cfg_.layer_target_bitrate[1] + cfg_.layer_target_bitrate[2]);
+        middle_bitrate_ = cfg_.layer_target_bitrate[1];
+        top_bitrate_ = cfg_.layer_target_bitrate[2];
+        cfg_.layer_target_bitrate[1] = 0;
+        cfg_.layer_target_bitrate[2] = 0;
+        // Set spatial layer 0 to a very low bitrate to trigger resize.
+        cfg_.layer_target_bitrate[0] = 30;
+        cfg_.rc_target_bitrate = cfg_.layer_target_bitrate[0];
+        encoder->Config(&cfg_);
+      } else if (video->frame() == 300) {
+        // Set base spatial layer to very high to go back up to original size.
+        cfg_.layer_target_bitrate[0] = 300;
+        cfg_.rc_target_bitrate = cfg_.layer_target_bitrate[0];
+        encoder->Config(&cfg_);
+      }
+    }
+    if (force_key_test_ && force_key_) frame_flags_ = VPX_EFLAG_FORCE_KF;
+
+    if (insert_layer_sync_) {
+      vpx_svc_spatial_layer_sync_t svc_layer_sync;
+      svc_layer_sync.base_layer_intra_only = 0;
+      for (int i = 0; i < number_spatial_layers_; i++)
+        svc_layer_sync.spatial_layer_sync[i] = 0;
+      if (force_intra_only_frame_) {
+        superframe_has_intra_only_ = 0;
+        if (video->frame() == 0) {
+          svc_layer_sync.base_layer_intra_only = 1;
+          svc_layer_sync.spatial_layer_sync[0] = 1;
+          encoder->Control(VP9E_SET_SVC_SPATIAL_LAYER_SYNC, &svc_layer_sync);
+          superframe_has_intra_only_ = 1;
+        } else if (video->frame() == 100) {
+          svc_layer_sync.base_layer_intra_only = 1;
+          svc_layer_sync.spatial_layer_sync[0] = 1;
+          encoder->Control(VP9E_SET_SVC_SPATIAL_LAYER_SYNC, &svc_layer_sync);
+          superframe_has_intra_only_ = 1;
+        }
+      } else {
+        layer_sync_on_base_ = 0;
+        if (video->frame() == 150) {
+          svc_layer_sync.spatial_layer_sync[1] = 1;
+          encoder->Control(VP9E_SET_SVC_SPATIAL_LAYER_SYNC, &svc_layer_sync);
+        } else if (video->frame() == 240) {
+          svc_layer_sync.spatial_layer_sync[2] = 1;
+          encoder->Control(VP9E_SET_SVC_SPATIAL_LAYER_SYNC, &svc_layer_sync);
+        } else if (video->frame() == 320) {
+          svc_layer_sync.spatial_layer_sync[0] = 1;
+          layer_sync_on_base_ = 1;
+          encoder->Control(VP9E_SET_SVC_SPATIAL_LAYER_SYNC, &svc_layer_sync);
+        }
+      }
+    }
+
+    const vpx_rational_t tb = video->timebase();
+    timebase_ = static_cast<double>(tb.num) / tb.den;
+    duration_ = 0;
+  }
+
+  vpx_codec_err_t parse_superframe_index(const uint8_t *data, size_t data_sz,
+                                         uint32_t sizes[8], int *count) {
+    uint8_t marker;
+    marker = *(data + data_sz - 1);
+    *count = 0;
+    if ((marker & 0xe0) == 0xc0) {
+      const uint32_t frames = (marker & 0x7) + 1;
+      const uint32_t mag = ((marker >> 3) & 0x3) + 1;
+      const size_t index_sz = 2 + mag * frames;
+      // This chunk is marked as having a superframe index but doesn't have
+      // enough data for it, thus it's an invalid superframe index.
+      if (data_sz < index_sz) return VPX_CODEC_CORRUPT_FRAME;
+      {
+        const uint8_t marker2 = *(data + data_sz - index_sz);
+        // This chunk is marked as having a superframe index but doesn't have
+        // the matching marker byte at the front of the index therefore it's an
+        // invalid chunk.
+        if (marker != marker2) return VPX_CODEC_CORRUPT_FRAME;
+      }
+      {
+        uint32_t i, j;
+        const uint8_t *x = &data[data_sz - index_sz + 1];
+        for (i = 0; i < frames; ++i) {
+          uint32_t this_sz = 0;
+
+          for (j = 0; j < mag; ++j) this_sz |= (*x++) << (j * 8);
+          sizes[i] = this_sz;
+        }
+        *count = frames;
+      }
+    }
+    return VPX_CODEC_OK;
+  }
+
+  virtual void FramePktHook(const vpx_codec_cx_pkt_t *pkt) {
+    uint32_t sizes[8] = { 0 };
+    uint32_t sizes_parsed[8] = { 0 };
+    int count = 0;
+    int num_layers_encoded = 0;
+    last_pts_ = pkt->data.frame.pts;
+    const bool key_frame =
+        (pkt->data.frame.flags & VPX_FRAME_IS_KEY) ? true : false;
+    if (key_frame) {
+      // For tests that insert layer sync frames: requesting a layer sync on
+      // the base layer must force a key frame. So if any key frame occurs
+      // after the first superframe, it must be due to a layer sync on the
+      // base spatial layer.
+      if (superframe_count_ > 0 && insert_layer_sync_ &&
+          !force_intra_only_frame_) {
+        ASSERT_EQ(layer_sync_on_base_, 1);
+      }
+      temporal_layer_id_ = 0;
+      superframe_count_ = 0;
+    }
+    parse_superframe_index(static_cast<const uint8_t *>(pkt->data.frame.buf),
+                           pkt->data.frame.sz, sizes_parsed, &count);
+    // Count may be less than number of spatial layers because of frame drops.
+    for (int sl = 0; sl < number_spatial_layers_; ++sl) {
+      if (pkt->data.frame.spatial_layer_encoded[sl]) {
+        sizes[sl] = sizes_parsed[num_layers_encoded];
+        num_layers_encoded++;
+      }
+    }
+    // For a superframe with an intra-only frame, count will be larger by 1
+    // because of the extra no-show frame.
+    if (force_intra_only_frame_ && superframe_has_intra_only_)
+      ASSERT_EQ(count, num_layers_encoded + 1);
+    else
+      ASSERT_EQ(count, num_layers_encoded);
+
+    // In the constrained frame drop mode, if a given spatial layer is
+    // dropped, all upper layers must be dropped too.
+    if (!layer_framedrop_) {
+      int num_layers_dropped = 0;
+      for (int sl = 0; sl < number_spatial_layers_; ++sl) {
+        if (!pkt->data.frame.spatial_layer_encoded[sl]) {
+          // Check that all upper layers are dropped.
+          num_layers_dropped++;
+          for (int sl2 = sl + 1; sl2 < number_spatial_layers_; ++sl2)
+            ASSERT_EQ(pkt->data.frame.spatial_layer_encoded[sl2], 0);
+        }
+      }
+      if (num_layers_dropped == number_spatial_layers_ - 1)
+        force_key_ = 1;
+      else
+        force_key_ = 0;
+    }
+    // Keep track of number of non-reference frames, needed for mismatch check.
+    // Non-reference frames are top spatial and temporal layer frames,
+    // for TL > 0.
+    if (temporal_layer_id_ == number_temporal_layers_ - 1 &&
+        temporal_layer_id_ > 0 &&
+        pkt->data.frame.spatial_layer_encoded[number_spatial_layers_ - 1])
+      num_nonref_frames_++;
+    for (int sl = 0; sl < number_spatial_layers_; ++sl) {
+      sizes[sl] = sizes[sl] << 3;
+      // Update the total encoded bits per layer.
+      // For temporal layers, update the cumulative encoded bits per layer.
+      for (int tl = temporal_layer_id_; tl < number_temporal_layers_; ++tl) {
+        const int layer = sl * number_temporal_layers_ + tl;
+        bits_total_[layer] += static_cast<int64_t>(sizes[sl]);
+        // Update the per-layer buffer level with the encoded frame size.
+        bits_in_buffer_model_[layer] -= static_cast<int64_t>(sizes[sl]);
+        // There should be no buffer underrun, except on the base
+        // temporal layer, since there may be key frames there.
+        // For short key frame spacing, the buffer can underrun on
+        // individual frames.
+        if (!key_frame && tl > 0 && key_frame_spacing_ < 100) {
+          ASSERT_GE(bits_in_buffer_model_[layer], 0)
+              << "Buffer Underrun at frame " << pkt->data.frame.pts;
+        }
+      }
+
+      if (!single_layer_resize_) {
+        ASSERT_EQ(pkt->data.frame.width[sl],
+                  top_sl_width_ * svc_params_.scaling_factor_num[sl] /
+                      svc_params_.scaling_factor_den[sl]);
+
+        ASSERT_EQ(pkt->data.frame.height[sl],
+                  top_sl_height_ * svc_params_.scaling_factor_num[sl] /
+                      svc_params_.scaling_factor_den[sl]);
+      } else if (superframe_count_ > 0) {
+        if (pkt->data.frame.width[sl] < prev_frame_width[sl] &&
+            pkt->data.frame.height[sl] < prev_frame_height[sl])
+          num_resize_down_ += 1;
+        if (pkt->data.frame.width[sl] > prev_frame_width[sl] &&
+            pkt->data.frame.height[sl] > prev_frame_height[sl])
+          num_resize_up_ += 1;
+      }
+      prev_frame_width[sl] = pkt->data.frame.width[sl];
+      prev_frame_height[sl] = pkt->data.frame.height[sl];
+    }
+  }
+
+  virtual void EndPassHook(void) {
+    if (change_bitrate_) last_pts_ = last_pts_ - last_pts_ref_;
+    duration_ = (last_pts_ + 1) * timebase_;
+    for (int sl = 0; sl < number_spatial_layers_; ++sl) {
+      for (int tl = 0; tl < number_temporal_layers_; ++tl) {
+        const int layer = sl * number_temporal_layers_ + tl;
+        const double file_size_in_kb = bits_total_[layer] / 1000.;
+        file_datarate_[layer] = file_size_in_kb / duration_;
+      }
+    }
+  }
+
+  virtual void MismatchHook(const vpx_image_t *img1, const vpx_image_t *img2) {
+    double mismatch_psnr = compute_psnr(img1, img2);
+    mismatch_psnr_ += mismatch_psnr;
+    ++mismatch_nframes_;
+  }
+
+  unsigned int GetMismatchFrames() { return mismatch_nframes_; }
+  unsigned int GetNonRefFrames() { return num_nonref_frames_; }
+
+  vpx_codec_pts_t last_pts_;
+  double timebase_;
+  int64_t bits_total_[VPX_MAX_LAYERS];
+  double duration_;
+  double file_datarate_[VPX_MAX_LAYERS];
+  size_t bits_in_last_frame_;
+  double mismatch_psnr_;
+  int denoiser_on_;
+  int tune_content_;
+  int spatial_layer_id_;
+  bool dynamic_drop_layer_;
+  bool single_layer_resize_;
+  unsigned int top_sl_width_;
+  unsigned int top_sl_height_;
+  vpx_svc_ref_frame_config_t ref_frame_config;
+  int update_pattern_;
+  bool change_bitrate_;
+  vpx_codec_pts_t last_pts_ref_;
+  int middle_bitrate_;
+  int top_bitrate_;
+  int key_frame_spacing_;
+  int layer_framedrop_;
+  int force_key_;
+  int force_key_test_;
+  int inter_layer_pred_mode_;
+  int insert_layer_sync_;
+  int layer_sync_on_base_;
+  int force_intra_only_frame_;
+  int superframe_has_intra_only_;
+  int use_post_encode_drop_;
+  int bitrate_sl3_[3];
+  // Denoiser switched on the fly.
+  bool denoiser_off_on_;
+  // Top layer enabled on the fly.
+  bool denoiser_enable_layers_;
+  int num_resize_up_;
+  int num_resize_down_;
+  unsigned int prev_frame_width[VPX_MAX_LAYERS];
+  unsigned int prev_frame_height[VPX_MAX_LAYERS];
+
+ private:
+  virtual void SetConfig(const int num_temporal_layer) {
+    cfg_.rc_end_usage = VPX_CBR;
+    cfg_.g_lag_in_frames = 0;
+    cfg_.g_error_resilient = 1;
+    if (num_temporal_layer == 3) {
+      cfg_.ts_rate_decimator[0] = 4;
+      cfg_.ts_rate_decimator[1] = 2;
+      cfg_.ts_rate_decimator[2] = 1;
+      cfg_.temporal_layering_mode = 3;
+    } else if (num_temporal_layer == 2) {
+      cfg_.ts_rate_decimator[0] = 2;
+      cfg_.ts_rate_decimator[1] = 1;
+      cfg_.temporal_layering_mode = 2;
+    } else if (num_temporal_layer == 1) {
+      cfg_.ts_rate_decimator[0] = 1;
+      cfg_.temporal_layering_mode = 0;
+    }
+  }
+
+  unsigned int num_nonref_frames_;
+  unsigned int mismatch_nframes_;
+};
+
+// Params: speed setting.
+class DatarateOnePassCbrSvcSingleBR
+    : public DatarateOnePassCbrSvc,
+      public ::libvpx_test::CodecTestWithParam<int> {
+ public:
+  DatarateOnePassCbrSvcSingleBR() : DatarateOnePassCbrSvc(GET_PARAM(0)) {
+    memset(&svc_params_, 0, sizeof(svc_params_));
+  }
+  virtual ~DatarateOnePassCbrSvcSingleBR() {}
+
+ protected:
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(::libvpx_test::kRealTime);
+    speed_setting_ = GET_PARAM(1);
+    ResetModel();
+  }
+};
+
+// Check basic rate targeting for 1 pass CBR SVC: 2 spatial layers and 1
+// temporal layer, with screen content mode on and same speed setting for all
+// layers.
+TEST_P(DatarateOnePassCbrSvcSingleBR, OnePassCbrSvc2SL1TLScreenContent1) {
+  SetSvcConfig(2, 1);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 1;
+  cfg_.rc_dropframe_thresh = 10;
+  cfg_.kf_max_dist = 9999;
+
+  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 60);
+  top_sl_width_ = 1280;
+  top_sl_height_ = 720;
+  cfg_.rc_target_bitrate = 500;
+  ResetModel();
+  tune_content_ = 1;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.78,
+                          1.15);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Check basic rate targeting for 1 pass CBR SVC: 3 spatial layers and
+// 3 temporal layers, with a forced key frame after frame drops.
+TEST_P(DatarateOnePassCbrSvcSingleBR, OnePassCbrSvc3SL3TLForceKey) {
+  SetSvcConfig(3, 3);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 1;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.kf_max_dist = 9999;
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  top_sl_width_ = 640;
+  top_sl_height_ = 480;
+  cfg_.rc_target_bitrate = 100;
+  ResetModel();
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.78,
+                          1.25);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Check basic rate targeting for 1 pass CBR SVC: 3 spatial layers and
+// 2 temporal layers, with a change on the fly from the fixed SVC pattern to one
+// generated via SVC_SET_REF_FRAME_CONFIG. The new pattern also disables
+// inter-layer prediction.
+TEST_P(DatarateOnePassCbrSvcSingleBR, OnePassCbrSvc3SL2TLDynamicPatternChange) {
+  SetSvcConfig(3, 2);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 1;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.kf_max_dist = 9999;
+  // Change SVC pattern on the fly.
+  update_pattern_ = 1;
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  top_sl_width_ = 640;
+  top_sl_height_ = 480;
+  cfg_.rc_target_bitrate = 800;
+  ResetModel();
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.78,
+                          1.15);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Check basic rate targeting for 1 pass CBR SVC with 3 spatial and 3 temporal
+// layers, for inter_layer_pred=OffKey (K-SVC) and on the fly switching
+// of denoiser from off to on (on at frame = 100). Key frame period is set to
+// 1000 so the denoiser is enabled on non-key frames.
+TEST_P(DatarateOnePassCbrSvcSingleBR,
+       OnePassCbrSvc3SL3TL_DenoiserOffOnFixedLayers) {
+  SetSvcConfig(3, 3);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 1;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.kf_max_dist = 1000;
+  ::libvpx_test::I420VideoSource video("desktop_office1.1280_720-020.yuv", 1280,
+                                       720, 30, 1, 0, 300);
+  top_sl_width_ = 1280;
+  top_sl_height_ = 720;
+  cfg_.rc_target_bitrate = 1000;
+  ResetModel();
+  denoiser_off_on_ = true;
+  denoiser_enable_layers_ = false;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  // Don't check rate targeting on the two top spatial layers since they will
+  // be skipped for part of the sequence.
+  CheckLayerRateTargeting(number_spatial_layers_ - 2, number_temporal_layers_,
+                          0.78, 1.15);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Check basic rate targeting for 1 pass CBR SVC with 3 spatial and 3 temporal
+// layers, for inter_layer_pred=OffKey (K-SVC) and on the fly switching
+// of denoiser from off to on, for dynamic layers. Start at 2 spatial layers
+// and enable 3rd spatial layer at frame = 100. Use periodic key frame with
+// period 100 so enabling of spatial layer occurs at key frame. Enable denoiser
+// at frame > 100, after the key frame sync.
+TEST_P(DatarateOnePassCbrSvcSingleBR,
+       OnePassCbrSvc3SL3TL_DenoiserOffOnEnableLayers) {
+  SetSvcConfig(3, 3);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 1;
+  cfg_.rc_dropframe_thresh = 0;
+  cfg_.kf_max_dist = 100;
+  ::libvpx_test::I420VideoSource video("desktop_office1.1280_720-020.yuv", 1280,
+                                       720, 30, 1, 0, 300);
+  top_sl_width_ = 1280;
+  top_sl_height_ = 720;
+  cfg_.rc_target_bitrate = 1000;
+  ResetModel();
+  denoiser_off_on_ = true;
+  denoiser_enable_layers_ = true;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  // Don't check rate targeting on the two top spatial layers since they will
+  // be skipped for part of the sequence.
+  CheckLayerRateTargeting(number_spatial_layers_ - 2, number_temporal_layers_,
+                          0.78, 1.15);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Check basic rate targeting for 1 pass CBR SVC with 3 spatial layers and on
+// the fly switching to 1 and then 2 and back to 3 spatial layers. This switch
+// is done by setting spatial layer bitrates to 0, and then back to non-zero,
+// during the sequence.
+TEST_P(DatarateOnePassCbrSvcSingleBR, OnePassCbrSvc3SL_DisableEnableLayers) {
+  SetSvcConfig(3, 1);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 1;
+  cfg_.temporal_layering_mode = 0;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.kf_max_dist = 9999;
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  top_sl_width_ = 640;
+  top_sl_height_ = 480;
+  cfg_.rc_target_bitrate = 800;
+  ResetModel();
+  dynamic_drop_layer_ = true;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  // Don't check rate targeting on the two top spatial layers since they will
+  // be skipped for part of the sequence.
+  CheckLayerRateTargeting(number_spatial_layers_ - 2, number_temporal_layers_,
+                          0.78, 1.15);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Check basic rate targeting for 1 pass CBR SVC with 2 spatial layers and on
+// the fly switching to 1 spatial layer with dynamic resize enabled.
+// The resizer will resize the single layer down and back up again, as the
+// bitrate goes back up.
+TEST_P(DatarateOnePassCbrSvcSingleBR, OnePassCbrSvc3SL_SingleLayerResize) {
+  SetSvcConfig(2, 1);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 1;
+  cfg_.temporal_layering_mode = 0;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.kf_max_dist = 9999;
+  cfg_.rc_resize_allowed = 1;
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  top_sl_width_ = 640;
+  top_sl_height_ = 480;
+  cfg_.rc_target_bitrate = 800;
+  ResetModel();
+  dynamic_drop_layer_ = true;
+  single_layer_resize_ = true;
+  base_speed_setting_ = speed_setting_;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  // Expect at least one resize down and at least one resize back up.
+  EXPECT_GE(num_resize_down_, 1);
+  EXPECT_GE(num_resize_up_, 1);
+  // Don't check rate targeting on the two top spatial layers since they will
+  // be skipped for part of the sequence.
+  CheckLayerRateTargeting(number_spatial_layers_ - 2, number_temporal_layers_,
+                          0.78, 1.15);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Run SVC encoder for 1 temporal layer, 2 spatial layers, with spatial
+// downscale 5x5.
+TEST_P(DatarateOnePassCbrSvcSingleBR, OnePassCbrSvc2SL1TL5x5MultipleRuns) {
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+  cfg_.ss_number_layers = 2;
+  cfg_.ts_number_layers = 1;
+  cfg_.ts_rate_decimator[0] = 1;
+  cfg_.g_error_resilient = 1;
+  cfg_.g_threads = 3;
+  cfg_.temporal_layering_mode = 0;
+  svc_params_.scaling_factor_num[0] = 256;
+  svc_params_.scaling_factor_den[0] = 1280;
+  svc_params_.scaling_factor_num[1] = 1280;
+  svc_params_.scaling_factor_den[1] = 1280;
+  cfg_.rc_dropframe_thresh = 10;
+  cfg_.kf_max_dist = 999999;
+  cfg_.kf_min_dist = 0;
+  cfg_.ss_target_bitrate[0] = 300;
+  cfg_.ss_target_bitrate[1] = 1400;
+  cfg_.layer_target_bitrate[0] = 300;
+  cfg_.layer_target_bitrate[1] = 1400;
+  cfg_.rc_target_bitrate = 1700;
+  number_spatial_layers_ = cfg_.ss_number_layers;
+  number_temporal_layers_ = cfg_.ts_number_layers;
+  ResetModel();
+  layer_target_avg_bandwidth_[0] = cfg_.layer_target_bitrate[0] * 1000 / 30;
+  bits_in_buffer_model_[0] =
+      cfg_.layer_target_bitrate[0] * cfg_.rc_buf_initial_sz;
+  layer_target_avg_bandwidth_[1] = cfg_.layer_target_bitrate[1] * 1000 / 30;
+  bits_in_buffer_model_[1] =
+      cfg_.layer_target_bitrate[1] * cfg_.rc_buf_initial_sz;
+  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 60);
+  top_sl_width_ = 1280;
+  top_sl_height_ = 720;
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.78,
+                          1.15);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Params: speed setting and index for bitrate array.
+class DatarateOnePassCbrSvcMultiBR
+    : public DatarateOnePassCbrSvc,
+      public ::libvpx_test::CodecTestWith2Params<int, int> {
+ public:
+  DatarateOnePassCbrSvcMultiBR() : DatarateOnePassCbrSvc(GET_PARAM(0)) {
+    memset(&svc_params_, 0, sizeof(svc_params_));
+  }
+  virtual ~DatarateOnePassCbrSvcMultiBR() {}
+
+ protected:
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(::libvpx_test::kRealTime);
+    speed_setting_ = GET_PARAM(1);
+    ResetModel();
+  }
+};
+
+// Check basic rate targeting for 1 pass CBR SVC: 2 spatial layers and
+// 3 temporal layers. Run CIF clip with 1 thread.
+TEST_P(DatarateOnePassCbrSvcMultiBR, OnePassCbrSvc2SL3TL) {
+  SetSvcConfig(2, 3);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 1;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.kf_max_dist = 9999;
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  top_sl_width_ = 640;
+  top_sl_height_ = 480;
+  const int bitrates[3] = { 200, 400, 600 };
+  // TODO(marpan): Check that effective_datarate for each layer hits the
+  // layer target_bitrate.
+  cfg_.rc_target_bitrate = bitrates[GET_PARAM(2)];
+  ResetModel();
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.75,
+                          1.2);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Params: speed setting, layer framedrop control and index for bitrate array.
+class DatarateOnePassCbrSvcFrameDropMultiBR
+    : public DatarateOnePassCbrSvc,
+      public ::libvpx_test::CodecTestWith3Params<int, int, int> {
+ public:
+  DatarateOnePassCbrSvcFrameDropMultiBR()
+      : DatarateOnePassCbrSvc(GET_PARAM(0)) {
+    memset(&svc_params_, 0, sizeof(svc_params_));
+  }
+  virtual ~DatarateOnePassCbrSvcFrameDropMultiBR() {}
+
+ protected:
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(::libvpx_test::kRealTime);
+    speed_setting_ = GET_PARAM(1);
+    ResetModel();
+  }
+};
+
+// Check basic rate targeting for 1 pass CBR SVC: 2 spatial layers and
+// 3 temporal layers. Run HD clip with 4 threads.
+TEST_P(DatarateOnePassCbrSvcFrameDropMultiBR, OnePassCbrSvc2SL3TL4Threads) {
+  SetSvcConfig(2, 3);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 4;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.kf_max_dist = 9999;
+  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 60);
+  top_sl_width_ = 1280;
+  top_sl_height_ = 720;
+  layer_framedrop_ = 0;
+  const int bitrates[3] = { 200, 400, 600 };
+  cfg_.rc_target_bitrate = bitrates[GET_PARAM(3)];
+  ResetModel();
+  layer_framedrop_ = GET_PARAM(2);
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.64,
+                          1.45);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Check basic rate targeting for 1 pass CBR SVC: 3 spatial layers and
+// 3 temporal layers. Run HD clip with 4 threads.
+TEST_P(DatarateOnePassCbrSvcFrameDropMultiBR, OnePassCbrSvc3SL3TL4Threads) {
+  SetSvcConfig(3, 3);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 4;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.kf_max_dist = 9999;
+  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 60);
+  top_sl_width_ = 1280;
+  top_sl_height_ = 720;
+  layer_framedrop_ = 0;
+  const int bitrates[3] = { 200, 400, 600 };
+  cfg_.rc_target_bitrate = bitrates[GET_PARAM(3)];
+  ResetModel();
+  layer_framedrop_ = GET_PARAM(2);
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.58,
+                          1.2);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Params: speed setting, inter-layer prediction mode.
+class DatarateOnePassCbrSvcInterLayerPredSingleBR
+    : public DatarateOnePassCbrSvc,
+      public ::libvpx_test::CodecTestWith2Params<int, int> {
+ public:
+  DatarateOnePassCbrSvcInterLayerPredSingleBR()
+      : DatarateOnePassCbrSvc(GET_PARAM(0)) {
+    memset(&svc_params_, 0, sizeof(svc_params_));
+  }
+  virtual ~DatarateOnePassCbrSvcInterLayerPredSingleBR() {}
+
+ protected:
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(::libvpx_test::kRealTime);
+    speed_setting_ = GET_PARAM(1);
+    inter_layer_pred_mode_ = GET_PARAM(2);
+    ResetModel();
+  }
+};
+
+// Check basic rate targeting with different inter-layer prediction modes for 1
+// pass CBR SVC: 3 spatial layers and 3 temporal layers. Run CIF clip with 1
+// thread.
+TEST_P(DatarateOnePassCbrSvcInterLayerPredSingleBR, OnePassCbrSvc3SL3TL) {
+  // Disable test for inter-layer pred off for now since simulcast_mode fails.
+  if (inter_layer_pred_mode_ == INTER_LAYER_PRED_OFF) return;
+  SetSvcConfig(3, 3);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 1;
+  cfg_.temporal_layering_mode = 3;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.kf_max_dist = 9999;
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  top_sl_width_ = 640;
+  top_sl_height_ = 480;
+  cfg_.rc_target_bitrate = 800;
+  ResetModel();
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.78,
+                          1.15);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Check rate targeting with different inter-layer prediction modes for 1 pass
+// CBR SVC: 3 spatial layers and 3 temporal layers, changing the target bitrate
+// at the middle of encoding.
+TEST_P(DatarateOnePassCbrSvcSingleBR, OnePassCbrSvc3SL3TLDynamicBitrateChange) {
+  SetSvcConfig(3, 3);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 1;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.kf_max_dist = 9999;
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  top_sl_width_ = 640;
+  top_sl_height_ = 480;
+  cfg_.rc_target_bitrate = 800;
+  ResetModel();
+  change_bitrate_ = true;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.78,
+                          1.15);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+#if CONFIG_VP9_TEMPORAL_DENOISING
+// Params: speed setting, noise sensitivity, index for bitrate array, and
+// inter-layer pred mode.
+class DatarateOnePassCbrSvcDenoiser
+    : public DatarateOnePassCbrSvc,
+      public ::libvpx_test::CodecTestWith4Params<int, int, int, int> {
+ public:
+  DatarateOnePassCbrSvcDenoiser() : DatarateOnePassCbrSvc(GET_PARAM(0)) {
+    memset(&svc_params_, 0, sizeof(svc_params_));
+  }
+  virtual ~DatarateOnePassCbrSvcDenoiser() {}
+
+ protected:
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(::libvpx_test::kRealTime);
+    speed_setting_ = GET_PARAM(1);
+    inter_layer_pred_mode_ = GET_PARAM(4);
+    ResetModel();
+  }
+};
+
+// Check basic rate targeting for 1 pass CBR SVC with denoising.
+// 2 spatial layers and 3 temporal layers. Run VGA clip with 2 threads.
+TEST_P(DatarateOnePassCbrSvcDenoiser, OnePassCbrSvc2SL3TLDenoiserOn) {
+  SetSvcConfig(2, 3);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 2;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.kf_max_dist = 9999;
+  number_spatial_layers_ = cfg_.ss_number_layers;
+  number_temporal_layers_ = cfg_.ts_number_layers;
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  top_sl_width_ = 640;
+  top_sl_height_ = 480;
+  const int bitrates[3] = { 600, 800, 1000 };
+  // TODO(marpan): Check that effective_datarate for each layer hits the
+  // layer target_bitrate.
+  // For SVC, noise_sen = 1 means denoising only the top spatial layer;
+  // noise_sen = 2 means denoising the two top spatial layers.
+  cfg_.rc_target_bitrate = bitrates[GET_PARAM(3)];
+  ResetModel();
+  denoiser_on_ = GET_PARAM(2);
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.78,
+                          1.15);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+#endif
+
+// Params: speed setting, key frame dist.
+class DatarateOnePassCbrSvcSmallKF
+    : public DatarateOnePassCbrSvc,
+      public ::libvpx_test::CodecTestWith2Params<int, int> {
+ public:
+  DatarateOnePassCbrSvcSmallKF() : DatarateOnePassCbrSvc(GET_PARAM(0)) {
+    memset(&svc_params_, 0, sizeof(svc_params_));
+  }
+  virtual ~DatarateOnePassCbrSvcSmallKF() {}
+
+ protected:
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(::libvpx_test::kRealTime);
+    speed_setting_ = GET_PARAM(1);
+    ResetModel();
+  }
+};
+
+// Check basic rate targeting for 1 pass CBR SVC: 3 spatial layers and 3
+// temporal layers. Run CIF clip with 1 thread, and a few short key frame periods.
+TEST_P(DatarateOnePassCbrSvcSmallKF, OnePassCbrSvc3SL3TLSmallKf) {
+  SetSvcConfig(3, 3);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 1;
+  cfg_.rc_dropframe_thresh = 10;
+  cfg_.rc_target_bitrate = 800;
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  top_sl_width_ = 640;
+  top_sl_height_ = 480;
+  // For this 3 temporal layer case, pattern repeats every 4 frames, so choose
+  // 4 neighboring key frame periods (so the key frame will land on 0-2-1-2).
+  const int kf_dist = GET_PARAM(2);
+  cfg_.kf_max_dist = kf_dist;
+  key_frame_spacing_ = kf_dist;
+  ResetModel();
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  // TODO(jianj): webm:1554
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.70,
+                          1.15);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Check basic rate targeting for 1 pass CBR SVC: 2 spatial layers and 3
+// temporal layers. Run CIF clip with 1 thread, and a few short key frame periods.
+TEST_P(DatarateOnePassCbrSvcSmallKF, OnePassCbrSvc2SL3TLSmallKf) {
+  SetSvcConfig(2, 3);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 1;
+  cfg_.rc_dropframe_thresh = 10;
+  cfg_.rc_target_bitrate = 400;
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  top_sl_width_ = 640;
+  top_sl_height_ = 480;
+  // For this 3 temporal layer case, pattern repeats every 4 frames, so choose
+  // 4 neighboring key frame periods (so the key frame will land on 0-2-1-2).
+  const int kf_dist = GET_PARAM(2) + 32;
+  cfg_.kf_max_dist = kf_dist;
+  key_frame_spacing_ = kf_dist;
+  ResetModel();
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.78,
+                          1.15);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Check basic rate targeting for 1 pass CBR SVC: 3 spatial layers and 3
+// temporal layers. Run VGA clip with 1 thread, and place layer sync frames:
+// one at the middle layer first, then another for the top layer, and then
+// one for the base spatial layer (which forces a key frame).
+TEST_P(DatarateOnePassCbrSvcSingleBR, OnePassCbrSvc3SL3TLSyncFrames) {
+  SetSvcConfig(3, 3);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 1;
+  cfg_.kf_max_dist = 9999;
+  cfg_.rc_dropframe_thresh = 10;
+  cfg_.rc_target_bitrate = 400;
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  top_sl_width_ = 640;
+  top_sl_height_ = 480;
+  ResetModel();
+  insert_layer_sync_ = 1;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.78,
+                          1.15);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Run SVC encoder for 3 spatial layers, 1 temporal layer, with
+// intra-only frame as sync frame on base spatial layer.
+// The intra-only frame is inserted at the start and in the middle of the sequence.
+TEST_P(DatarateOnePassCbrSvcSingleBR, OnePassCbrSvc3SL1TLSyncWithIntraOnly) {
+  SetSvcConfig(3, 1);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 4;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.kf_max_dist = 9999;
+  cfg_.rc_target_bitrate = 400;
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  top_sl_width_ = 640;
+  top_sl_height_ = 480;
+  ResetModel();
+  insert_layer_sync_ = 1;
+  // Use intra_only frame for sync on base layer.
+  force_intra_only_frame_ = 1;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.73,
+                          1.2);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Run SVC encoder for 2 quality layers (same resolution, different
+// bitrates), 1 temporal layer, with screen content mode.
+TEST_P(DatarateOnePassCbrSvcSingleBR, OnePassCbrSvc2QL1TLScreen) {
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 56;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+  cfg_.ss_number_layers = 2;
+  cfg_.ts_number_layers = 1;
+  cfg_.ts_rate_decimator[0] = 1;
+  cfg_.temporal_layering_mode = 0;
+  cfg_.g_error_resilient = 1;
+  cfg_.g_threads = 2;
+  svc_params_.scaling_factor_num[0] = 1;
+  svc_params_.scaling_factor_den[0] = 1;
+  svc_params_.scaling_factor_num[1] = 1;
+  svc_params_.scaling_factor_den[1] = 1;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.kf_max_dist = 9999;
+  number_spatial_layers_ = cfg_.ss_number_layers;
+  number_temporal_layers_ = cfg_.ts_number_layers;
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  top_sl_width_ = 640;
+  top_sl_height_ = 480;
+  ResetModel();
+  tune_content_ = 1;
+  // Set the layer bitrates, for 2 spatial layers, 1 temporal.
+  cfg_.rc_target_bitrate = 400;
+  cfg_.ss_target_bitrate[0] = 100;
+  cfg_.ss_target_bitrate[1] = 300;
+  cfg_.layer_target_bitrate[0] = 100;
+  cfg_.layer_target_bitrate[1] = 300;
+  for (int sl = 0; sl < 2; ++sl) {
+    float layer_framerate = 30.0;
+    layer_target_avg_bandwidth_[sl] = static_cast<int>(
+        cfg_.layer_target_bitrate[sl] * 1000.0 / layer_framerate);
+    bits_in_buffer_model_[sl] =
+        cfg_.layer_target_bitrate[sl] * cfg_.rc_buf_initial_sz;
+  }
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.73,
+                          1.25);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Params: speed setting.
+class DatarateOnePassCbrSvcPostencodeDrop
+    : public DatarateOnePassCbrSvc,
+      public ::libvpx_test::CodecTestWithParam<int> {
+ public:
+  DatarateOnePassCbrSvcPostencodeDrop() : DatarateOnePassCbrSvc(GET_PARAM(0)) {
+    memset(&svc_params_, 0, sizeof(svc_params_));
+  }
+  virtual ~DatarateOnePassCbrSvcPostencodeDrop() {}
+
+ protected:
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(::libvpx_test::kRealTime);
+    speed_setting_ = GET_PARAM(1);
+    ResetModel();
+  }
+};
+
+// Run SVC encoder for 2 quality layers (same resolution, different
+// bitrates), 1 temporal layer, with screen content mode.
+TEST_P(DatarateOnePassCbrSvcPostencodeDrop, OnePassCbrSvc2QL1TLScreen) {
+  cfg_.rc_buf_initial_sz = 200;
+  cfg_.rc_buf_optimal_sz = 200;
+  cfg_.rc_buf_sz = 400;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 52;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+  cfg_.ss_number_layers = 2;
+  cfg_.ts_number_layers = 1;
+  cfg_.ts_rate_decimator[0] = 1;
+  cfg_.temporal_layering_mode = 0;
+  cfg_.g_error_resilient = 1;
+  cfg_.g_threads = 2;
+  svc_params_.scaling_factor_num[0] = 1;
+  svc_params_.scaling_factor_den[0] = 1;
+  svc_params_.scaling_factor_num[1] = 1;
+  svc_params_.scaling_factor_den[1] = 1;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.kf_max_dist = 9999;
+  number_spatial_layers_ = cfg_.ss_number_layers;
+  number_temporal_layers_ = cfg_.ts_number_layers;
+  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
+                                       30, 1, 0, 300);
+  top_sl_width_ = 352;
+  top_sl_height_ = 288;
+  ResetModel();
+  base_speed_setting_ = speed_setting_;
+  tune_content_ = 1;
+  use_post_encode_drop_ = 1;
+  // Set the layer bitrates, for 2 spatial layers, 1 temporal.
+  cfg_.rc_target_bitrate = 400;
+  cfg_.ss_target_bitrate[0] = 100;
+  cfg_.ss_target_bitrate[1] = 300;
+  cfg_.layer_target_bitrate[0] = 100;
+  cfg_.layer_target_bitrate[1] = 300;
+  for (int sl = 0; sl < 2; ++sl) {
+    float layer_framerate = 30.0;
+    layer_target_avg_bandwidth_[sl] = static_cast<int>(
+        cfg_.layer_target_bitrate[sl] * 1000.0 / layer_framerate);
+    bits_in_buffer_model_[sl] =
+        cfg_.layer_target_bitrate[sl] * cfg_.rc_buf_initial_sz;
+  }
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  CheckLayerRateTargeting(number_spatial_layers_, number_temporal_layers_, 0.73,
+                          1.25);
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+VP9_INSTANTIATE_TEST_CASE(DatarateOnePassCbrSvcSingleBR,
+                          ::testing::Range(5, 10));
+
+VP9_INSTANTIATE_TEST_CASE(DatarateOnePassCbrSvcPostencodeDrop,
+                          ::testing::Range(5, 6));
+
+VP9_INSTANTIATE_TEST_CASE(DatarateOnePassCbrSvcInterLayerPredSingleBR,
+                          ::testing::Range(5, 10), ::testing::Range(0, 3));
+
+VP9_INSTANTIATE_TEST_CASE(DatarateOnePassCbrSvcMultiBR, ::testing::Range(5, 10),
+                          ::testing::Range(0, 3));
+
+VP9_INSTANTIATE_TEST_CASE(DatarateOnePassCbrSvcFrameDropMultiBR,
+                          ::testing::Range(5, 10), ::testing::Range(0, 2),
+                          ::testing::Range(0, 3));
+
+#if CONFIG_VP9_TEMPORAL_DENOISING
+VP9_INSTANTIATE_TEST_CASE(DatarateOnePassCbrSvcDenoiser,
+                          ::testing::Range(5, 10), ::testing::Range(1, 3),
+                          ::testing::Range(0, 3), ::testing::Range(0, 4));
+#endif
+
+VP9_INSTANTIATE_TEST_CASE(DatarateOnePassCbrSvcSmallKF, ::testing::Range(5, 10),
+                          ::testing::Range(32, 36));
+}  // namespace
+}  // namespace svc_test
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/svc_end_to_end_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/svc_end_to_end_test.cc
new file mode 100644
index 0000000..82259ac
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/svc_end_to_end_test.cc
@@ -0,0 +1,481 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+#include "./vpx_config.h"
+#include "third_party/googletest/src/include/gtest/gtest.h"
+#include "test/codec_factory.h"
+#include "test/encode_test_driver.h"
+#include "test/i420_video_source.h"
+#include "test/svc_test.h"
+#include "test/util.h"
+#include "test/y4m_video_source.h"
+#include "vpx/vpx_codec.h"
+#include "vpx_ports/bitops.h"
+
+namespace svc_test {
+namespace {
+
+typedef enum {
+  // Inter-layer prediction is enabled on all frames.
+  INTER_LAYER_PRED_ON,
+  // Inter-layer prediction is off on all frames.
+  INTER_LAYER_PRED_OFF,
+  // Inter-layer prediction is off on non-key frames and non-sync frames.
+  INTER_LAYER_PRED_OFF_NONKEY,
+  // Inter-layer prediction is enabled on all frames, but constrained such
+  // that any layer S (> 0) can only predict from previous spatial
+  // layer S-1, from the same superframe.
+  INTER_LAYER_PRED_ON_CONSTRAINED
+} INTER_LAYER_PRED;
+
+class ScalePartitionOnePassCbrSvc
+    : public OnePassCbrSvc,
+      public ::testing::TestWithParam<const ::libvpx_test::CodecFactory *> {
+ public:
+  ScalePartitionOnePassCbrSvc()
+      : OnePassCbrSvc(GetParam()), mismatch_nframes_(0), num_nonref_frames_(0) {
+    SetMode(::libvpx_test::kRealTime);
+  }
+
+ protected:
+  virtual ~ScalePartitionOnePassCbrSvc() {}
+
+  virtual void SetUp() {
+    InitializeConfig();
+    speed_setting_ = 7;
+  }
+
+  virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
+                                  ::libvpx_test::Encoder *encoder) {
+    PreEncodeFrameHookSetup(video, encoder);
+  }
+
+  virtual void FramePktHook(const vpx_codec_cx_pkt_t *pkt) {
+    // Keep track of number of non-reference frames, needed for mismatch check.
+    // Non-reference frames are top spatial and temporal layer frames,
+    // for TL > 0.
+    if (temporal_layer_id_ == number_temporal_layers_ - 1 &&
+        temporal_layer_id_ > 0 &&
+        pkt->data.frame.spatial_layer_encoded[number_spatial_layers_ - 1])
+      num_nonref_frames_++;
+  }
+
+  virtual void MismatchHook(const vpx_image_t * /*img1*/,
+                            const vpx_image_t * /*img2*/) {
+    ++mismatch_nframes_;
+  }
+
+  virtual void SetConfig(const int /*num_temporal_layer*/) {}
+
+  unsigned int GetMismatchFrames() const { return mismatch_nframes_; }
+  unsigned int GetNonRefFrames() const { return num_nonref_frames_; }
+
+ private:
+  unsigned int mismatch_nframes_;
+  unsigned int num_nonref_frames_;
+};
+
+TEST_P(ScalePartitionOnePassCbrSvc, OnePassCbrSvc3SL3TL1080P) {
+  SetSvcConfig(3, 3);
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_threads = 1;
+  cfg_.rc_dropframe_thresh = 10;
+  cfg_.rc_target_bitrate = 800;
+  cfg_.kf_max_dist = 9999;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+  cfg_.g_error_resilient = 1;
+  cfg_.ts_rate_decimator[0] = 4;
+  cfg_.ts_rate_decimator[1] = 2;
+  cfg_.ts_rate_decimator[2] = 1;
+  cfg_.temporal_layering_mode = 3;
+  ::libvpx_test::I420VideoSource video(
+      "slides_code_term_web_plot.1920_1080.yuv", 1920, 1080, 30, 1, 0, 100);
+  // For this 3 temporal layer case, pattern repeats every 4 frames, so choose
+  // 4 neighboring key frame periods (so the key frame will land on 0-2-1-2).
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Params: Inter layer prediction modes.
+class SyncFrameOnePassCbrSvc : public OnePassCbrSvc,
+                               public ::libvpx_test::CodecTestWithParam<int> {
+ public:
+  SyncFrameOnePassCbrSvc()
+      : OnePassCbrSvc(GET_PARAM(0)), current_video_frame_(0),
+        frame_to_start_decode_(0), frame_to_sync_(0),
+        inter_layer_pred_mode_(GET_PARAM(1)), decode_to_layer_before_sync_(-1),
+        decode_to_layer_after_sync_(-1), denoiser_on_(0),
+        intra_only_test_(false), mismatch_nframes_(0), num_nonref_frames_(0) {
+    SetMode(::libvpx_test::kRealTime);
+    memset(&svc_layer_sync_, 0, sizeof(svc_layer_sync_));
+  }
+
+ protected:
+  virtual ~SyncFrameOnePassCbrSvc() {}
+
+  virtual void SetUp() {
+    InitializeConfig();
+    speed_setting_ = 7;
+  }
+
+  virtual bool DoDecode() const {
+    return current_video_frame_ >= frame_to_start_decode_;
+  }
+
+  virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
+                                  ::libvpx_test::Encoder *encoder) {
+    current_video_frame_ = video->frame();
+    PreEncodeFrameHookSetup(video, encoder);
+    if (video->frame() == 0) {
+      // Do not turn off inter-layer pred completely because simulcast mode
+      // fails.
+      if (inter_layer_pred_mode_ != INTER_LAYER_PRED_OFF)
+        encoder->Control(VP9E_SET_SVC_INTER_LAYER_PRED, inter_layer_pred_mode_);
+      encoder->Control(VP9E_SET_NOISE_SENSITIVITY, denoiser_on_);
+      if (intra_only_test_)
+        // Decoder sets the color_space for Intra-only frames
+        // to BT_601 (see line 1810 in vp9_decodeframe.c).
+        // So set it here in these tests to avoid encoder-decoder
+        // mismatch check on color space setting.
+        encoder->Control(VP9E_SET_COLOR_SPACE, VPX_CS_BT_601);
+    }
+    if (video->frame() == frame_to_sync_) {
+      encoder->Control(VP9E_SET_SVC_SPATIAL_LAYER_SYNC, &svc_layer_sync_);
+    }
+  }
+
+#if CONFIG_VP9_DECODER
+  virtual void PreDecodeFrameHook(::libvpx_test::VideoSource *video,
+                                  ::libvpx_test::Decoder *decoder) {
+    if (video->frame() < frame_to_sync_) {
+      if (decode_to_layer_before_sync_ >= 0)
+        decoder->Control(VP9_DECODE_SVC_SPATIAL_LAYER,
+                         decode_to_layer_before_sync_);
+    } else {
+      if (decode_to_layer_after_sync_ >= 0)
+        decoder->Control(VP9_DECODE_SVC_SPATIAL_LAYER,
+                         decode_to_layer_after_sync_);
+    }
+  }
+#endif
+
+  virtual void FramePktHook(const vpx_codec_cx_pkt_t *pkt) {
+    // Keep track of number of non-reference frames, needed for mismatch check.
+    // Non-reference frames are top spatial and temporal layer frames,
+    // for TL > 0.
+    if (temporal_layer_id_ == number_temporal_layers_ - 1 &&
+        temporal_layer_id_ > 0 &&
+        pkt->data.frame.spatial_layer_encoded[number_spatial_layers_ - 1] &&
+        current_video_frame_ >= frame_to_sync_)
+      num_nonref_frames_++;
+
+    if (intra_only_test_ && current_video_frame_ == frame_to_sync_) {
+      // Intra-only frame is only generated for spatial layers > 1 and <= 3,
+      // among other conditions (see constraint in set_intra_only_frame()). If
+      // intra-only is not allowed then the encoder will insert a key frame.
+      const bool key_frame =
+          (pkt->data.frame.flags & VPX_FRAME_IS_KEY) ? true : false;
+      if (number_spatial_layers_ == 1 || number_spatial_layers_ > 3)
+        ASSERT_TRUE(key_frame);
+      else
+        ASSERT_FALSE(key_frame);
+    }
+  }
+
+  virtual void MismatchHook(const vpx_image_t * /*img1*/,
+                            const vpx_image_t * /*img2*/) {
+    if (current_video_frame_ >= frame_to_sync_) ++mismatch_nframes_;
+  }
+
+  unsigned int GetMismatchFrames() const { return mismatch_nframes_; }
+  unsigned int GetNonRefFrames() const { return num_nonref_frames_; }
+
+  unsigned int current_video_frame_;
+  unsigned int frame_to_start_decode_;
+  unsigned int frame_to_sync_;
+  int inter_layer_pred_mode_;
+  int decode_to_layer_before_sync_;
+  int decode_to_layer_after_sync_;
+  int denoiser_on_;
+  bool intra_only_test_;
+  vpx_svc_spatial_layer_sync_t svc_layer_sync_;
+
+ private:
+  virtual void SetConfig(const int num_temporal_layer) {
+    cfg_.rc_buf_initial_sz = 500;
+    cfg_.rc_buf_optimal_sz = 500;
+    cfg_.rc_buf_sz = 1000;
+    cfg_.rc_min_quantizer = 0;
+    cfg_.rc_max_quantizer = 63;
+    cfg_.rc_end_usage = VPX_CBR;
+    cfg_.g_lag_in_frames = 0;
+    cfg_.g_error_resilient = 1;
+    cfg_.g_threads = 1;
+    cfg_.rc_dropframe_thresh = 30;
+    cfg_.kf_max_dist = 9999;
+    if (num_temporal_layer == 3) {
+      cfg_.ts_rate_decimator[0] = 4;
+      cfg_.ts_rate_decimator[1] = 2;
+      cfg_.ts_rate_decimator[2] = 1;
+      cfg_.temporal_layering_mode = 3;
+    } else if (num_temporal_layer == 2) {
+      cfg_.ts_rate_decimator[0] = 2;
+      cfg_.ts_rate_decimator[1] = 1;
+      cfg_.temporal_layering_mode = 2;
+    } else if (num_temporal_layer == 1) {
+      cfg_.ts_rate_decimator[0] = 1;
+      cfg_.temporal_layering_mode = 1;
+    }
+  }
+
+  unsigned int mismatch_nframes_;
+  unsigned int num_nonref_frames_;
+};
+
+// Test for sync layer for 1 pass CBR SVC: 3 spatial layers and
+// 3 temporal layers. Only start decoding on the sync layer.
+// Full sync: insert key frame on base layer.
+TEST_P(SyncFrameOnePassCbrSvc, OnePassCbrSvc3SL3TLFullSync) {
+  SetSvcConfig(3, 3);
+  // Sync is on the base layer, so the frame to sync and the frame to start
+  // decoding are the same.
+  frame_to_start_decode_ = 20;
+  frame_to_sync_ = 20;
+  decode_to_layer_before_sync_ = -1;
+  decode_to_layer_after_sync_ = 2;
+
+  // Set up svc layer sync structure.
+  svc_layer_sync_.base_layer_intra_only = 0;
+  svc_layer_sync_.spatial_layer_sync[0] = 1;
+
+  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 60);
+
+  cfg_.rc_target_bitrate = 600;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Test for sync layer for 1 pass CBR SVC: 2 spatial layers and
+// 3 temporal layers. Decoding QVGA before sync frame and decode up to
+// VGA on and after sync.
+TEST_P(SyncFrameOnePassCbrSvc, OnePassCbrSvc2SL3TLSyncToVGA) {
+  SetSvcConfig(2, 3);
+  frame_to_start_decode_ = 0;
+  frame_to_sync_ = 100;
+  decode_to_layer_before_sync_ = 0;
+  decode_to_layer_after_sync_ = 1;
+
+  // Set up svc layer sync structure.
+  svc_layer_sync_.base_layer_intra_only = 0;
+  svc_layer_sync_.spatial_layer_sync[0] = 0;
+  svc_layer_sync_.spatial_layer_sync[1] = 1;
+
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  cfg_.rc_target_bitrate = 400;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Test for sync layer for 1 pass CBR SVC: 3 spatial layers and
+// 3 temporal layers. Decoding QVGA and VGA before sync frame and decode up to
+// HD on and after sync.
+TEST_P(SyncFrameOnePassCbrSvc, OnePassCbrSvc3SL3TLSyncToHD) {
+  SetSvcConfig(3, 3);
+  frame_to_start_decode_ = 0;
+  frame_to_sync_ = 20;
+  decode_to_layer_before_sync_ = 1;
+  decode_to_layer_after_sync_ = 2;
+
+  // Set up svc layer sync structure.
+  svc_layer_sync_.base_layer_intra_only = 0;
+  svc_layer_sync_.spatial_layer_sync[0] = 0;
+  svc_layer_sync_.spatial_layer_sync[1] = 0;
+  svc_layer_sync_.spatial_layer_sync[2] = 1;
+
+  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 60);
+  cfg_.rc_target_bitrate = 600;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Test for sync layer for 1 pass CBR SVC: 3 spatial layers and
+// 3 temporal layers. Decoding QVGA before sync frame and decode up to
+// HD on and after sync.
+TEST_P(SyncFrameOnePassCbrSvc, OnePassCbrSvc3SL3TLSyncToVGAHD) {
+  SetSvcConfig(3, 3);
+  frame_to_start_decode_ = 0;
+  frame_to_sync_ = 20;
+  decode_to_layer_before_sync_ = 0;
+  decode_to_layer_after_sync_ = 2;
+
+  // Set up svc layer sync structure.
+  svc_layer_sync_.base_layer_intra_only = 0;
+  svc_layer_sync_.spatial_layer_sync[0] = 0;
+  svc_layer_sync_.spatial_layer_sync[1] = 1;
+  svc_layer_sync_.spatial_layer_sync[2] = 1;
+
+  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 60);
+  cfg_.rc_target_bitrate = 600;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+#if CONFIG_VP9_TEMPORAL_DENOISING
+// Test for sync layer for 1 pass CBR SVC: 2 spatial layers and
+// 3 temporal layers. Decoding QVGA before sync frame and decode up to
+// VGA on and after sync.
+TEST_P(SyncFrameOnePassCbrSvc, OnePassCbrSvc2SL3TLSyncFrameVGADenoise) {
+  SetSvcConfig(2, 3);
+  frame_to_start_decode_ = 0;
+  frame_to_sync_ = 100;
+  decode_to_layer_before_sync_ = 0;
+  decode_to_layer_after_sync_ = 1;
+
+  denoiser_on_ = 1;
+  // Set up svc layer sync structure.
+  svc_layer_sync_.base_layer_intra_only = 0;
+  svc_layer_sync_.spatial_layer_sync[0] = 0;
+  svc_layer_sync_.spatial_layer_sync[1] = 1;
+
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  cfg_.rc_target_bitrate = 400;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+#endif
+
+// Start decoding from beginning of sequence, during sequence insert intra-only
+// on base/qvga layer. Decode all layers.
+TEST_P(SyncFrameOnePassCbrSvc, OnePassCbrSvc3SL3TLSyncFrameIntraOnlyQVGA) {
+  SetSvcConfig(3, 3);
+  frame_to_start_decode_ = 0;
+  frame_to_sync_ = 20;
+  decode_to_layer_before_sync_ = 2;
+  // The superframe containing intra-only layer will have 4 frames. Thus set the
+  // layer to decode after sync frame to 3.
+  decode_to_layer_after_sync_ = 3;
+  intra_only_test_ = true;
+
+  // Set up svc layer sync structure.
+  svc_layer_sync_.base_layer_intra_only = 1;
+  svc_layer_sync_.spatial_layer_sync[0] = 1;
+  svc_layer_sync_.spatial_layer_sync[1] = 0;
+  svc_layer_sync_.spatial_layer_sync[2] = 0;
+
+  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 60);
+  cfg_.rc_target_bitrate = 600;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Start decoding from beginning of sequence, during sequence insert intra-only
+// on base/qvga layer and sync_layer on middle/VGA layer. Decode all layers.
+TEST_P(SyncFrameOnePassCbrSvc, OnePassCbrSvc3SL3TLSyncFrameIntraOnlyVGA) {
+  SetSvcConfig(3, 3);
+  frame_to_start_decode_ = 0;
+  frame_to_sync_ = 20;
+  decode_to_layer_before_sync_ = 2;
+  // The superframe containing intra-only layer will have 4 frames. Thus set the
+  // layer to decode after sync frame to 3.
+  decode_to_layer_after_sync_ = 3;
+  intra_only_test_ = true;
+
+  // Set up svc layer sync structure.
+  svc_layer_sync_.base_layer_intra_only = 1;
+  svc_layer_sync_.spatial_layer_sync[0] = 1;
+  svc_layer_sync_.spatial_layer_sync[1] = 1;
+  svc_layer_sync_.spatial_layer_sync[2] = 0;
+
+  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 60);
+  cfg_.rc_target_bitrate = 600;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+// Start decoding from sync frame, insert intra-only on base/qvga layer. Decode
+// all layers. For 1 spatial layer, it inserts a key frame.
+TEST_P(SyncFrameOnePassCbrSvc, OnePassCbrSvc1SL3TLSyncFrameIntraOnlyQVGA) {
+  SetSvcConfig(1, 3);
+  frame_to_start_decode_ = 20;
+  frame_to_sync_ = 20;
+  decode_to_layer_before_sync_ = 0;
+  decode_to_layer_after_sync_ = 0;
+  intra_only_test_ = true;
+
+  // Set up svc layer sync structure.
+  svc_layer_sync_.base_layer_intra_only = 1;
+  svc_layer_sync_.spatial_layer_sync[0] = 1;
+
+  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 60);
+  cfg_.rc_target_bitrate = 600;
+  AssignLayerBitrates();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+#if CONFIG_VP9_DECODER
+  // The non-reference frames are expected to be mismatched frames as the
+  // encoder will avoid loopfilter on these frames.
+  EXPECT_EQ(GetNonRefFrames(), GetMismatchFrames());
+#endif
+}
+
+VP9_INSTANTIATE_TEST_CASE(SyncFrameOnePassCbrSvc, ::testing::Range(0, 3));
+
+INSTANTIATE_TEST_CASE_P(
+    VP9, ScalePartitionOnePassCbrSvc,
+    ::testing::Values(
+        static_cast<const libvpx_test::CodecFactory *>(&libvpx_test::kVP9)));
+
+}  // namespace
+}  // namespace svc_test
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/svc_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/svc_test.cc
index 482d9ff..4798c77 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/svc_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/svc_test.cc
@@ -1,5 +1,5 @@
 /*
- *  Copyright (c) 2013 The WebM project authors. All Rights Reserved.
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
  *
  *  Use of this source code is governed by a BSD-style license
  *  that can be found in the LICENSE file in the root of the source
@@ -8,782 +8,127 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#include <string>
-#include "third_party/googletest/src/include/gtest/gtest.h"
-#include "test/codec_factory.h"
-#include "test/decode_test_driver.h"
-#include "test/i420_video_source.h"
+#include "test/svc_test.h"
 
-#include "vp9/decoder/vp9_decoder.h"
+namespace svc_test {
+void OnePassCbrSvc::SetSvcConfig(const int num_spatial_layer,
+                                 const int num_temporal_layer) {
+  SetConfig(num_temporal_layer);
+  cfg_.ss_number_layers = num_spatial_layer;
+  cfg_.ts_number_layers = num_temporal_layer;
+  if (num_spatial_layer == 1) {
+    svc_params_.scaling_factor_num[0] = 288;
+    svc_params_.scaling_factor_den[0] = 288;
+  } else if (num_spatial_layer == 2) {
+    svc_params_.scaling_factor_num[0] = 144;
+    svc_params_.scaling_factor_den[0] = 288;
+    svc_params_.scaling_factor_num[1] = 288;
+    svc_params_.scaling_factor_den[1] = 288;
+  } else if (num_spatial_layer == 3) {
+    svc_params_.scaling_factor_num[0] = 72;
+    svc_params_.scaling_factor_den[0] = 288;
+    svc_params_.scaling_factor_num[1] = 144;
+    svc_params_.scaling_factor_den[1] = 288;
+    svc_params_.scaling_factor_num[2] = 288;
+    svc_params_.scaling_factor_den[2] = 288;
+  }
+  number_spatial_layers_ = cfg_.ss_number_layers;
+  number_temporal_layers_ = cfg_.ts_number_layers;
+}
 
-#include "vpx/svc_context.h"
-#include "vpx/vp8cx.h"
-#include "vpx/vpx_encoder.h"
+void OnePassCbrSvc::PreEncodeFrameHookSetup(::libvpx_test::VideoSource *video,
+                                            ::libvpx_test::Encoder *encoder) {
+  if (video->frame() == 0) {
+    for (int i = 0; i < VPX_MAX_LAYERS; ++i) {
+      svc_params_.max_quantizers[i] = 63;
+      svc_params_.min_quantizers[i] = 0;
+    }
+    svc_params_.speed_per_layer[0] = base_speed_setting_;
+    for (int i = 1; i < VPX_SS_MAX_LAYERS; ++i) {
+      svc_params_.speed_per_layer[i] = speed_setting_;
+    }
 
-namespace {
-
-using libvpx_test::CodecFactory;
-using libvpx_test::Decoder;
-using libvpx_test::DxDataIterator;
-using libvpx_test::VP9CodecFactory;
-
-class SvcTest : public ::testing::Test {
- protected:
-  static const uint32_t kWidth = 352;
-  static const uint32_t kHeight = 288;
-
-  SvcTest()
-      : codec_iface_(0), test_file_name_("hantro_collage_w352h288.yuv"),
-        codec_initialized_(false), decoder_(0) {
-    memset(&svc_, 0, sizeof(svc_));
-    memset(&codec_, 0, sizeof(codec_));
-    memset(&codec_enc_, 0, sizeof(codec_enc_));
+    encoder->Control(VP9E_SET_SVC, 1);
+    encoder->Control(VP9E_SET_SVC_PARAMETERS, &svc_params_);
+    encoder->Control(VP8E_SET_CPUUSED, speed_setting_);
+    encoder->Control(VP9E_SET_AQ_MODE, 3);
+    encoder->Control(VP8E_SET_MAX_INTRA_BITRATE_PCT, 300);
+    encoder->Control(VP9E_SET_TILE_COLUMNS, get_msb(cfg_.g_threads));
+    encoder->Control(VP9E_SET_ROW_MT, 1);
+    encoder->Control(VP8E_SET_STATIC_THRESHOLD, 1);
   }
 
-  virtual ~SvcTest() {}
-
-  virtual void SetUp() {
-    svc_.log_level = SVC_LOG_DEBUG;
-    svc_.log_print = 0;
-
-    codec_iface_ = vpx_codec_vp9_cx();
-    const vpx_codec_err_t res =
-        vpx_codec_enc_config_default(codec_iface_, &codec_enc_, 0);
-    EXPECT_EQ(VPX_CODEC_OK, res);
-
-    codec_enc_.g_w = kWidth;
-    codec_enc_.g_h = kHeight;
-    codec_enc_.g_timebase.num = 1;
-    codec_enc_.g_timebase.den = 60;
-    codec_enc_.kf_min_dist = 100;
-    codec_enc_.kf_max_dist = 100;
-
-    vpx_codec_dec_cfg_t dec_cfg = vpx_codec_dec_cfg_t();
-    VP9CodecFactory codec_factory;
-    decoder_ = codec_factory.CreateDecoder(dec_cfg, 0);
-
-    tile_columns_ = 0;
-    tile_rows_ = 0;
-  }
-
-  virtual void TearDown() {
-    ReleaseEncoder();
-    delete (decoder_);
-  }
-
-  void InitializeEncoder() {
-    const vpx_codec_err_t res =
-        vpx_svc_init(&svc_, &codec_, vpx_codec_vp9_cx(), &codec_enc_);
-    EXPECT_EQ(VPX_CODEC_OK, res);
-    vpx_codec_control(&codec_, VP8E_SET_CPUUSED, 4);  // Make the test faster
-    vpx_codec_control(&codec_, VP9E_SET_TILE_COLUMNS, tile_columns_);
-    vpx_codec_control(&codec_, VP9E_SET_TILE_ROWS, tile_rows_);
-    codec_initialized_ = true;
-  }
-
-  void ReleaseEncoder() {
-    vpx_svc_release(&svc_);
-    if (codec_initialized_) vpx_codec_destroy(&codec_);
-    codec_initialized_ = false;
-  }
-
-  void GetStatsData(std::string *const stats_buf) {
-    vpx_codec_iter_t iter = NULL;
-    const vpx_codec_cx_pkt_t *cx_pkt;
-
-    while ((cx_pkt = vpx_codec_get_cx_data(&codec_, &iter)) != NULL) {
-      if (cx_pkt->kind == VPX_CODEC_STATS_PKT) {
-        EXPECT_GT(cx_pkt->data.twopass_stats.sz, 0U);
-        ASSERT_TRUE(cx_pkt->data.twopass_stats.buf != NULL);
-        stats_buf->append(static_cast<char *>(cx_pkt->data.twopass_stats.buf),
-                          cx_pkt->data.twopass_stats.sz);
-      }
+  superframe_count_++;
+  temporal_layer_id_ = 0;
+  if (number_temporal_layers_ == 2) {
+    temporal_layer_id_ = (superframe_count_ % 2 != 0);
+  } else if (number_temporal_layers_ == 3) {
+    if (superframe_count_ % 2 != 0) temporal_layer_id_ = 2;
+    if (superframe_count_ > 1) {
+      if ((superframe_count_ - 2) % 4 == 0) temporal_layer_id_ = 1;
     }
   }
 
-  void Pass1EncodeNFrames(const int n, const int layers,
-                          std::string *const stats_buf) {
-    vpx_codec_err_t res;
+  frame_flags_ = 0;
+}
 
-    ASSERT_GT(n, 0);
-    ASSERT_GT(layers, 0);
-    svc_.spatial_layers = layers;
-    codec_enc_.g_pass = VPX_RC_FIRST_PASS;
-    InitializeEncoder();
-
-    libvpx_test::I420VideoSource video(
-        test_file_name_, codec_enc_.g_w, codec_enc_.g_h,
-        codec_enc_.g_timebase.den, codec_enc_.g_timebase.num, 0, 30);
-    video.Begin();
-
-    for (int i = 0; i < n; ++i) {
-      res = vpx_svc_encode(&svc_, &codec_, video.img(), video.pts(),
-                           video.duration(), VPX_DL_GOOD_QUALITY);
-      ASSERT_EQ(VPX_CODEC_OK, res);
-      GetStatsData(stats_buf);
-      video.Next();
-    }
-
-    // Flush encoder and test EOS packet.
-    res = vpx_svc_encode(&svc_, &codec_, NULL, video.pts(), video.duration(),
-                         VPX_DL_GOOD_QUALITY);
-    ASSERT_EQ(VPX_CODEC_OK, res);
-    GetStatsData(stats_buf);
-
-    ReleaseEncoder();
-  }
-
-  void StoreFrames(const size_t max_frame_received,
-                   struct vpx_fixed_buf *const outputs,
-                   size_t *const frame_received) {
-    vpx_codec_iter_t iter = NULL;
-    const vpx_codec_cx_pkt_t *cx_pkt;
-
-    while ((cx_pkt = vpx_codec_get_cx_data(&codec_, &iter)) != NULL) {
-      if (cx_pkt->kind == VPX_CODEC_CX_FRAME_PKT) {
-        const size_t frame_size = cx_pkt->data.frame.sz;
-
-        EXPECT_GT(frame_size, 0U);
-        ASSERT_TRUE(cx_pkt->data.frame.buf != NULL);
-        ASSERT_LT(*frame_received, max_frame_received);
-
-        if (*frame_received == 0)
-          EXPECT_EQ(1, !!(cx_pkt->data.frame.flags & VPX_FRAME_IS_KEY));
-
-        outputs[*frame_received].buf = malloc(frame_size + 16);
-        ASSERT_TRUE(outputs[*frame_received].buf != NULL);
-        memcpy(outputs[*frame_received].buf, cx_pkt->data.frame.buf,
-               frame_size);
-        outputs[*frame_received].sz = frame_size;
-        ++(*frame_received);
-      }
+void OnePassCbrSvc::PostEncodeFrameHook(::libvpx_test::Encoder *encoder) {
+  vpx_svc_layer_id_t layer_id;
+  encoder->Control(VP9E_GET_SVC_LAYER_ID, &layer_id);
+  temporal_layer_id_ = layer_id.temporal_layer_id;
+  for (int sl = 0; sl < number_spatial_layers_; ++sl) {
+    for (int tl = temporal_layer_id_; tl < number_temporal_layers_; ++tl) {
+      const int layer = sl * number_temporal_layers_ + tl;
+      bits_in_buffer_model_[layer] +=
+          static_cast<int64_t>(layer_target_avg_bandwidth_[layer]);
     }
   }
+}
 
-  void Pass2EncodeNFrames(std::string *const stats_buf, const int n,
-                          const int layers,
-                          struct vpx_fixed_buf *const outputs) {
-    vpx_codec_err_t res;
-    size_t frame_received = 0;
-
-    ASSERT_TRUE(outputs != NULL);
-    ASSERT_GT(n, 0);
-    ASSERT_GT(layers, 0);
-    svc_.spatial_layers = layers;
-    codec_enc_.rc_target_bitrate = 500;
-    if (codec_enc_.g_pass == VPX_RC_LAST_PASS) {
-      ASSERT_TRUE(stats_buf != NULL);
-      ASSERT_GT(stats_buf->size(), 0U);
-      codec_enc_.rc_twopass_stats_in.buf = &(*stats_buf)[0];
-      codec_enc_.rc_twopass_stats_in.sz = stats_buf->size();
-    }
-    InitializeEncoder();
-
-    libvpx_test::I420VideoSource video(
-        test_file_name_, codec_enc_.g_w, codec_enc_.g_h,
-        codec_enc_.g_timebase.den, codec_enc_.g_timebase.num, 0, 30);
-    video.Begin();
-
-    for (int i = 0; i < n; ++i) {
-      res = vpx_svc_encode(&svc_, &codec_, video.img(), video.pts(),
-                           video.duration(), VPX_DL_GOOD_QUALITY);
-      ASSERT_EQ(VPX_CODEC_OK, res);
-      StoreFrames(n, outputs, &frame_received);
-      video.Next();
-    }
-
-    // Flush encoder.
-    res = vpx_svc_encode(&svc_, &codec_, NULL, 0, video.duration(),
-                         VPX_DL_GOOD_QUALITY);
-    EXPECT_EQ(VPX_CODEC_OK, res);
-    StoreFrames(n, outputs, &frame_received);
-
-    EXPECT_EQ(frame_received, static_cast<size_t>(n));
-
-    ReleaseEncoder();
-  }
-
-  void DecodeNFrames(const struct vpx_fixed_buf *const inputs, const int n) {
-    int decoded_frames = 0;
-    int received_frames = 0;
-
-    ASSERT_TRUE(inputs != NULL);
-    ASSERT_GT(n, 0);
-
-    for (int i = 0; i < n; ++i) {
-      ASSERT_TRUE(inputs[i].buf != NULL);
-      ASSERT_GT(inputs[i].sz, 0U);
-      const vpx_codec_err_t res_dec = decoder_->DecodeFrame(
-          static_cast<const uint8_t *>(inputs[i].buf), inputs[i].sz);
-      ASSERT_EQ(VPX_CODEC_OK, res_dec) << decoder_->DecodeError();
-      ++decoded_frames;
-
-      DxDataIterator dec_iter = decoder_->GetDxData();
-      while (dec_iter.Next() != NULL) {
-        ++received_frames;
-      }
-    }
-    EXPECT_EQ(decoded_frames, n);
-    EXPECT_EQ(received_frames, n);
-  }
-
-  void DropEnhancementLayers(struct vpx_fixed_buf *const inputs,
-                             const int num_super_frames,
-                             const int remained_spatial_layers) {
-    ASSERT_TRUE(inputs != NULL);
-    ASSERT_GT(num_super_frames, 0);
-    ASSERT_GT(remained_spatial_layers, 0);
-
-    for (int i = 0; i < num_super_frames; ++i) {
-      uint32_t frame_sizes[8] = { 0 };
-      int frame_count = 0;
-      int frames_found = 0;
-      int frame;
-      ASSERT_TRUE(inputs[i].buf != NULL);
-      ASSERT_GT(inputs[i].sz, 0U);
-
-      vpx_codec_err_t res = vp9_parse_superframe_index(
-          static_cast<const uint8_t *>(inputs[i].buf), inputs[i].sz,
-          frame_sizes, &frame_count, NULL, NULL);
-      ASSERT_EQ(VPX_CODEC_OK, res);
-
-      if (frame_count == 0) {
-        // There's no super frame but only a single frame.
-        ASSERT_EQ(1, remained_spatial_layers);
-      } else {
-        // Found a super frame.
-        uint8_t *frame_data = static_cast<uint8_t *>(inputs[i].buf);
-        uint8_t *frame_start = frame_data;
-        for (frame = 0; frame < frame_count; ++frame) {
-          // Looking for a visible frame.
-          if (frame_data[0] & 0x02) {
-            ++frames_found;
-            if (frames_found == remained_spatial_layers) break;
-          }
-          frame_data += frame_sizes[frame];
-        }
-        ASSERT_LT(frame, frame_count)
-            << "Couldn't find a visible frame. "
-            << "remained_spatial_layers: " << remained_spatial_layers
-            << "    super_frame: " << i;
-        if (frame == frame_count - 1) continue;
-
-        frame_data += frame_sizes[frame];
-
-        // We need to add one more frame for multiple frame contexts.
-        uint8_t marker =
-            static_cast<const uint8_t *>(inputs[i].buf)[inputs[i].sz - 1];
-        const uint32_t mag = ((marker >> 3) & 0x3) + 1;
-        const size_t index_sz = 2 + mag * frame_count;
-        const size_t new_index_sz = 2 + mag * (frame + 1);
-        marker &= 0x0f8;
-        marker |= frame;
-
-        // Copy existing frame sizes.
-        memmove(frame_data + 1, frame_start + inputs[i].sz - index_sz + 1,
-                new_index_sz - 2);
-        // New marker.
-        frame_data[0] = marker;
-        frame_data += (mag * (frame + 1) + 1);
-
-        *frame_data++ = marker;
-        inputs[i].sz = frame_data - frame_start;
-      }
+void OnePassCbrSvc::AssignLayerBitrates() {
+  int sl, spatial_layer_target;
+  int spatial_layers = cfg_.ss_number_layers;
+  int temporal_layers = cfg_.ts_number_layers;
+  float total = 0;
+  float alloc_ratio[VPX_MAX_LAYERS] = { 0 };
+  float framerate = 30.0;
+  for (sl = 0; sl < spatial_layers; ++sl) {
+    if (svc_params_.scaling_factor_den[sl] > 0) {
+      alloc_ratio[sl] =
+          static_cast<float>((svc_params_.scaling_factor_num[sl] * 1.0 /
+                              svc_params_.scaling_factor_den[sl]));
+      total += alloc_ratio[sl];
     }
   }
-
-  void FreeBitstreamBuffers(struct vpx_fixed_buf *const inputs, const int n) {
-    ASSERT_TRUE(inputs != NULL);
-    ASSERT_GT(n, 0);
-
-    for (int i = 0; i < n; ++i) {
-      free(inputs[i].buf);
-      inputs[i].buf = NULL;
-      inputs[i].sz = 0;
+  for (sl = 0; sl < spatial_layers; ++sl) {
+    cfg_.ss_target_bitrate[sl] = spatial_layer_target =
+        static_cast<unsigned int>(cfg_.rc_target_bitrate * alloc_ratio[sl] /
+                                  total);
+    const int index = sl * temporal_layers;
+    if (cfg_.temporal_layering_mode == 3) {
+      cfg_.layer_target_bitrate[index] = spatial_layer_target >> 1;
+      cfg_.layer_target_bitrate[index + 1] =
+          (spatial_layer_target >> 1) + (spatial_layer_target >> 2);
+      cfg_.layer_target_bitrate[index + 2] = spatial_layer_target;
+    } else if (cfg_.temporal_layering_mode == 2) {
+      cfg_.layer_target_bitrate[index] = spatial_layer_target * 2 / 3;
+      cfg_.layer_target_bitrate[index + 1] = spatial_layer_target;
+    } else if (cfg_.temporal_layering_mode <= 1) {
+      cfg_.layer_target_bitrate[index] = spatial_layer_target;
     }
   }
-
-  SvcContext svc_;
-  vpx_codec_ctx_t codec_;
-  struct vpx_codec_enc_cfg codec_enc_;
-  vpx_codec_iface_t *codec_iface_;
-  std::string test_file_name_;
-  bool codec_initialized_;
-  Decoder *decoder_;
-  int tile_columns_;
-  int tile_rows_;
-};
-
-TEST_F(SvcTest, SvcInit) {
-  // test missing parameters
-  vpx_codec_err_t res = vpx_svc_init(NULL, &codec_, codec_iface_, &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-  res = vpx_svc_init(&svc_, NULL, codec_iface_, &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-  res = vpx_svc_init(&svc_, &codec_, NULL, &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-
-  res = vpx_svc_init(&svc_, &codec_, codec_iface_, NULL);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-
-  svc_.spatial_layers = 6;  // too many layers
-  res = vpx_svc_init(&svc_, &codec_, codec_iface_, &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-
-  svc_.spatial_layers = 0;  // use default layers
-  InitializeEncoder();
-  EXPECT_EQ(VPX_SS_DEFAULT_LAYERS, svc_.spatial_layers);
+  for (sl = 0; sl < spatial_layers; ++sl) {
+    for (int tl = 0; tl < temporal_layers; ++tl) {
+      const int layer = sl * temporal_layers + tl;
+      float layer_framerate = framerate;
+      if (temporal_layers == 2 && tl == 0) layer_framerate = framerate / 2;
+      if (temporal_layers == 3 && tl == 0) layer_framerate = framerate / 4;
+      if (temporal_layers == 3 && tl == 1) layer_framerate = framerate / 2;
+      layer_target_avg_bandwidth_[layer] = static_cast<int>(
+          cfg_.layer_target_bitrate[layer] * 1000.0 / layer_framerate);
+      bits_in_buffer_model_[layer] =
+          cfg_.layer_target_bitrate[layer] * cfg_.rc_buf_initial_sz;
+    }
+  }
 }
-
-TEST_F(SvcTest, InitTwoLayers) {
-  svc_.spatial_layers = 2;
-  InitializeEncoder();
-}
-
-TEST_F(SvcTest, InvalidOptions) {
-  vpx_codec_err_t res = vpx_svc_set_options(&svc_, NULL);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-
-  res = vpx_svc_set_options(&svc_, "not-an-option=1");
-  EXPECT_EQ(VPX_CODEC_OK, res);
-  res = vpx_svc_init(&svc_, &codec_, vpx_codec_vp9_cx(), &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-}
-
-TEST_F(SvcTest, SetLayersOption) {
-  vpx_codec_err_t res = vpx_svc_set_options(&svc_, "spatial-layers=3");
-  EXPECT_EQ(VPX_CODEC_OK, res);
-  InitializeEncoder();
-  EXPECT_EQ(3, svc_.spatial_layers);
-}
-
-TEST_F(SvcTest, SetMultipleOptions) {
-  vpx_codec_err_t res =
-      vpx_svc_set_options(&svc_, "spatial-layers=2 scale-factors=1/3,2/3");
-  EXPECT_EQ(VPX_CODEC_OK, res);
-  InitializeEncoder();
-  EXPECT_EQ(2, svc_.spatial_layers);
-}
-
-TEST_F(SvcTest, SetScaleFactorsOption) {
-  svc_.spatial_layers = 2;
-  vpx_codec_err_t res =
-      vpx_svc_set_options(&svc_, "scale-factors=not-scale-factors");
-  EXPECT_EQ(VPX_CODEC_OK, res);
-  res = vpx_svc_init(&svc_, &codec_, vpx_codec_vp9_cx(), &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-
-  res = vpx_svc_set_options(&svc_, "scale-factors=1/3, 3*3");
-  EXPECT_EQ(VPX_CODEC_OK, res);
-  res = vpx_svc_init(&svc_, &codec_, vpx_codec_vp9_cx(), &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-
-  res = vpx_svc_set_options(&svc_, "scale-factors=1/3");
-  EXPECT_EQ(VPX_CODEC_OK, res);
-  res = vpx_svc_init(&svc_, &codec_, vpx_codec_vp9_cx(), &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-
-  res = vpx_svc_set_options(&svc_, "scale-factors=1/3,2/3");
-  EXPECT_EQ(VPX_CODEC_OK, res);
-  InitializeEncoder();
-}
-
-TEST_F(SvcTest, SetQuantizersOption) {
-  svc_.spatial_layers = 2;
-  vpx_codec_err_t res = vpx_svc_set_options(&svc_, "max-quantizers=nothing");
-  EXPECT_EQ(VPX_CODEC_OK, res);
-  res = vpx_svc_init(&svc_, &codec_, vpx_codec_vp9_cx(), &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-
-  res = vpx_svc_set_options(&svc_, "min-quantizers=nothing");
-  EXPECT_EQ(VPX_CODEC_OK, res);
-  res = vpx_svc_init(&svc_, &codec_, vpx_codec_vp9_cx(), &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-
-  res = vpx_svc_set_options(&svc_, "max-quantizers=40");
-  EXPECT_EQ(VPX_CODEC_OK, res);
-  res = vpx_svc_init(&svc_, &codec_, vpx_codec_vp9_cx(), &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-
-  res = vpx_svc_set_options(&svc_, "min-quantizers=40");
-  EXPECT_EQ(VPX_CODEC_OK, res);
-  res = vpx_svc_init(&svc_, &codec_, vpx_codec_vp9_cx(), &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-
-  res = vpx_svc_set_options(&svc_, "max-quantizers=30,30 min-quantizers=40,40");
-  EXPECT_EQ(VPX_CODEC_OK, res);
-  res = vpx_svc_init(&svc_, &codec_, vpx_codec_vp9_cx(), &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-
-  res = vpx_svc_set_options(&svc_, "max-quantizers=40,40 min-quantizers=30,30");
-  InitializeEncoder();
-}
-
-TEST_F(SvcTest, SetAutoAltRefOption) {
-  svc_.spatial_layers = 5;
-  vpx_codec_err_t res = vpx_svc_set_options(&svc_, "auto-alt-refs=none");
-  EXPECT_EQ(VPX_CODEC_OK, res);
-  res = vpx_svc_init(&svc_, &codec_, vpx_codec_vp9_cx(), &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-
-  res = vpx_svc_set_options(&svc_, "auto-alt-refs=1,1,1,1,0");
-  EXPECT_EQ(VPX_CODEC_OK, res);
-  res = vpx_svc_init(&svc_, &codec_, vpx_codec_vp9_cx(), &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-
-  vpx_svc_set_options(&svc_, "auto-alt-refs=0,1,1,1,0");
-  InitializeEncoder();
-}
-
-// Test that decoder can handle an SVC frame as the first frame in a sequence.
-TEST_F(SvcTest, OnePassEncodeOneFrame) {
-  codec_enc_.g_pass = VPX_RC_ONE_PASS;
-  vpx_fixed_buf output = vpx_fixed_buf();
-  Pass2EncodeNFrames(NULL, 1, 2, &output);
-  DecodeNFrames(&output, 1);
-  FreeBitstreamBuffers(&output, 1);
-}
-
-TEST_F(SvcTest, OnePassEncodeThreeFrames) {
-  codec_enc_.g_pass = VPX_RC_ONE_PASS;
-  codec_enc_.g_lag_in_frames = 0;
-  vpx_fixed_buf outputs[3];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(NULL, 3, 2, &outputs[0]);
-  DecodeNFrames(&outputs[0], 3);
-  FreeBitstreamBuffers(&outputs[0], 3);
-}
-
-TEST_F(SvcTest, TwoPassEncode10Frames) {
-  // First pass encode
-  std::string stats_buf;
-  Pass1EncodeNFrames(10, 2, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  vpx_fixed_buf outputs[10];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 10, 2, &outputs[0]);
-  DecodeNFrames(&outputs[0], 10);
-  FreeBitstreamBuffers(&outputs[0], 10);
-}
-
-TEST_F(SvcTest, TwoPassEncode20FramesWithAltRef) {
-  // First pass encode
-  std::string stats_buf;
-  Pass1EncodeNFrames(20, 2, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  vpx_svc_set_options(&svc_, "auto-alt-refs=1,1");
-  vpx_fixed_buf outputs[20];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 20, 2, &outputs[0]);
-  DecodeNFrames(&outputs[0], 20);
-  FreeBitstreamBuffers(&outputs[0], 20);
-}
-
-TEST_F(SvcTest, TwoPassEncode2SpatialLayersDecodeBaseLayerOnly) {
-  // First pass encode
-  std::string stats_buf;
-  Pass1EncodeNFrames(10, 2, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  vpx_svc_set_options(&svc_, "auto-alt-refs=1,1");
-  vpx_fixed_buf outputs[10];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 10, 2, &outputs[0]);
-  DropEnhancementLayers(&outputs[0], 10, 1);
-  DecodeNFrames(&outputs[0], 10);
-  FreeBitstreamBuffers(&outputs[0], 10);
-}
-
-TEST_F(SvcTest, TwoPassEncode5SpatialLayersDecode54321Layers) {
-  // First pass encode
-  std::string stats_buf;
-  Pass1EncodeNFrames(10, 5, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  vpx_svc_set_options(&svc_, "auto-alt-refs=0,1,1,1,0");
-  vpx_fixed_buf outputs[10];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 10, 5, &outputs[0]);
-
-  DecodeNFrames(&outputs[0], 10);
-  DropEnhancementLayers(&outputs[0], 10, 4);
-  DecodeNFrames(&outputs[0], 10);
-  DropEnhancementLayers(&outputs[0], 10, 3);
-  DecodeNFrames(&outputs[0], 10);
-  DropEnhancementLayers(&outputs[0], 10, 2);
-  DecodeNFrames(&outputs[0], 10);
-  DropEnhancementLayers(&outputs[0], 10, 1);
-  DecodeNFrames(&outputs[0], 10);
-
-  FreeBitstreamBuffers(&outputs[0], 10);
-}
-
-TEST_F(SvcTest, TwoPassEncode2SNRLayers) {
-  // First pass encode
-  std::string stats_buf;
-  vpx_svc_set_options(&svc_, "scale-factors=1/1,1/1");
-  Pass1EncodeNFrames(20, 2, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  vpx_svc_set_options(&svc_, "auto-alt-refs=1,1 scale-factors=1/1,1/1");
-  vpx_fixed_buf outputs[20];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 20, 2, &outputs[0]);
-  DecodeNFrames(&outputs[0], 20);
-  FreeBitstreamBuffers(&outputs[0], 20);
-}
-
-TEST_F(SvcTest, TwoPassEncode3SNRLayersDecode321Layers) {
-  // First pass encode
-  std::string stats_buf;
-  vpx_svc_set_options(&svc_, "scale-factors=1/1,1/1,1/1");
-  Pass1EncodeNFrames(20, 3, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  vpx_svc_set_options(&svc_, "auto-alt-refs=1,1,1 scale-factors=1/1,1/1,1/1");
-  vpx_fixed_buf outputs[20];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 20, 3, &outputs[0]);
-  DecodeNFrames(&outputs[0], 20);
-  DropEnhancementLayers(&outputs[0], 20, 2);
-  DecodeNFrames(&outputs[0], 20);
-  DropEnhancementLayers(&outputs[0], 20, 1);
-  DecodeNFrames(&outputs[0], 20);
-
-  FreeBitstreamBuffers(&outputs[0], 20);
-}
-
-TEST_F(SvcTest, SetMultipleFrameContextsOption) {
-  svc_.spatial_layers = 5;
-  vpx_codec_err_t res = vpx_svc_set_options(&svc_, "multi-frame-contexts=1");
-  EXPECT_EQ(VPX_CODEC_OK, res);
-  res = vpx_svc_init(&svc_, &codec_, vpx_codec_vp9_cx(), &codec_enc_);
-  EXPECT_EQ(VPX_CODEC_INVALID_PARAM, res);
-
-  svc_.spatial_layers = 2;
-  res = vpx_svc_set_options(&svc_, "multi-frame-contexts=1");
-  InitializeEncoder();
-}
-
-TEST_F(SvcTest, TwoPassEncode2SpatialLayersWithMultipleFrameContexts) {
-  // First pass encode
-  std::string stats_buf;
-  Pass1EncodeNFrames(10, 2, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  codec_enc_.g_error_resilient = 0;
-  vpx_svc_set_options(&svc_, "auto-alt-refs=1,1 multi-frame-contexts=1");
-  vpx_fixed_buf outputs[10];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 10, 2, &outputs[0]);
-  DecodeNFrames(&outputs[0], 10);
-  FreeBitstreamBuffers(&outputs[0], 10);
-}
-
-TEST_F(SvcTest,
-       TwoPassEncode2SpatialLayersWithMultipleFrameContextsDecodeBaselayer) {
-  // First pass encode
-  std::string stats_buf;
-  Pass1EncodeNFrames(10, 2, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  codec_enc_.g_error_resilient = 0;
-  vpx_svc_set_options(&svc_, "auto-alt-refs=1,1 multi-frame-contexts=1");
-  vpx_fixed_buf outputs[10];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 10, 2, &outputs[0]);
-  DropEnhancementLayers(&outputs[0], 10, 1);
-  DecodeNFrames(&outputs[0], 10);
-  FreeBitstreamBuffers(&outputs[0], 10);
-}
-
-TEST_F(SvcTest, TwoPassEncode2SNRLayersWithMultipleFrameContexts) {
-  // First pass encode
-  std::string stats_buf;
-  vpx_svc_set_options(&svc_, "scale-factors=1/1,1/1");
-  Pass1EncodeNFrames(10, 2, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  codec_enc_.g_error_resilient = 0;
-  vpx_svc_set_options(&svc_,
-                      "auto-alt-refs=1,1 scale-factors=1/1,1/1 "
-                      "multi-frame-contexts=1");
-  vpx_fixed_buf outputs[10];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 10, 2, &outputs[0]);
-  DecodeNFrames(&outputs[0], 10);
-  FreeBitstreamBuffers(&outputs[0], 10);
-}
-
-TEST_F(SvcTest,
-       TwoPassEncode3SNRLayersWithMultipleFrameContextsDecode321Layer) {
-  // First pass encode
-  std::string stats_buf;
-  vpx_svc_set_options(&svc_, "scale-factors=1/1,1/1,1/1");
-  Pass1EncodeNFrames(10, 3, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  codec_enc_.g_error_resilient = 0;
-  vpx_svc_set_options(&svc_,
-                      "auto-alt-refs=1,1,1 scale-factors=1/1,1/1,1/1 "
-                      "multi-frame-contexts=1");
-  vpx_fixed_buf outputs[10];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 10, 3, &outputs[0]);
-
-  DecodeNFrames(&outputs[0], 10);
-  DropEnhancementLayers(&outputs[0], 10, 2);
-  DecodeNFrames(&outputs[0], 10);
-  DropEnhancementLayers(&outputs[0], 10, 1);
-  DecodeNFrames(&outputs[0], 10);
-
-  FreeBitstreamBuffers(&outputs[0], 10);
-}
-
-TEST_F(SvcTest, TwoPassEncode2TemporalLayers) {
-  // First pass encode
-  std::string stats_buf;
-  vpx_svc_set_options(&svc_, "scale-factors=1/1");
-  svc_.temporal_layers = 2;
-  Pass1EncodeNFrames(10, 1, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  svc_.temporal_layers = 2;
-  vpx_svc_set_options(&svc_, "auto-alt-refs=1 scale-factors=1/1");
-  vpx_fixed_buf outputs[10];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 10, 1, &outputs[0]);
-  DecodeNFrames(&outputs[0], 10);
-  FreeBitstreamBuffers(&outputs[0], 10);
-}
-
-TEST_F(SvcTest, TwoPassEncode2TemporalLayersWithMultipleFrameContexts) {
-  // First pass encode
-  std::string stats_buf;
-  vpx_svc_set_options(&svc_, "scale-factors=1/1");
-  svc_.temporal_layers = 2;
-  Pass1EncodeNFrames(10, 1, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  svc_.temporal_layers = 2;
-  codec_enc_.g_error_resilient = 0;
-  vpx_svc_set_options(&svc_,
-                      "auto-alt-refs=1 scale-factors=1/1 "
-                      "multi-frame-contexts=1");
-  vpx_fixed_buf outputs[10];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 10, 1, &outputs[0]);
-  DecodeNFrames(&outputs[0], 10);
-  FreeBitstreamBuffers(&outputs[0], 10);
-}
-
-TEST_F(SvcTest, TwoPassEncode2TemporalLayersDecodeBaseLayer) {
-  // First pass encode
-  std::string stats_buf;
-  vpx_svc_set_options(&svc_, "scale-factors=1/1");
-  svc_.temporal_layers = 2;
-  Pass1EncodeNFrames(10, 1, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  svc_.temporal_layers = 2;
-  vpx_svc_set_options(&svc_, "auto-alt-refs=1 scale-factors=1/1");
-  vpx_fixed_buf outputs[10];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 10, 1, &outputs[0]);
-
-  vpx_fixed_buf base_layer[5];
-  for (int i = 0; i < 5; ++i) base_layer[i] = outputs[i * 2];
-
-  DecodeNFrames(&base_layer[0], 5);
-  FreeBitstreamBuffers(&outputs[0], 10);
-}
-
-TEST_F(SvcTest,
-       TwoPassEncode2TemporalLayersWithMultipleFrameContextsDecodeBaseLayer) {
-  // First pass encode
-  std::string stats_buf;
-  vpx_svc_set_options(&svc_, "scale-factors=1/1");
-  svc_.temporal_layers = 2;
-  Pass1EncodeNFrames(10, 1, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  svc_.temporal_layers = 2;
-  codec_enc_.g_error_resilient = 0;
-  vpx_svc_set_options(&svc_,
-                      "auto-alt-refs=1 scale-factors=1/1 "
-                      "multi-frame-contexts=1");
-  vpx_fixed_buf outputs[10];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 10, 1, &outputs[0]);
-
-  vpx_fixed_buf base_layer[5];
-  for (int i = 0; i < 5; ++i) base_layer[i] = outputs[i * 2];
-
-  DecodeNFrames(&base_layer[0], 5);
-  FreeBitstreamBuffers(&outputs[0], 10);
-}
-
-TEST_F(SvcTest, TwoPassEncode2TemporalLayersWithTiles) {
-  // First pass encode
-  std::string stats_buf;
-  vpx_svc_set_options(&svc_, "scale-factors=1/1");
-  svc_.temporal_layers = 2;
-  Pass1EncodeNFrames(10, 1, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  svc_.temporal_layers = 2;
-  vpx_svc_set_options(&svc_, "auto-alt-refs=1 scale-factors=1/1");
-  codec_enc_.g_w = 704;
-  codec_enc_.g_h = 144;
-  tile_columns_ = 1;
-  tile_rows_ = 1;
-  vpx_fixed_buf outputs[10];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 10, 1, &outputs[0]);
-  DecodeNFrames(&outputs[0], 10);
-  FreeBitstreamBuffers(&outputs[0], 10);
-}
-
-TEST_F(SvcTest, TwoPassEncode2TemporalLayersWithMultipleFrameContextsAndTiles) {
-  // First pass encode
-  std::string stats_buf;
-  vpx_svc_set_options(&svc_, "scale-factors=1/1");
-  svc_.temporal_layers = 2;
-  Pass1EncodeNFrames(10, 1, &stats_buf);
-
-  // Second pass encode
-  codec_enc_.g_pass = VPX_RC_LAST_PASS;
-  svc_.temporal_layers = 2;
-  codec_enc_.g_error_resilient = 0;
-  codec_enc_.g_w = 704;
-  codec_enc_.g_h = 144;
-  tile_columns_ = 1;
-  tile_rows_ = 1;
-  vpx_svc_set_options(&svc_,
-                      "auto-alt-refs=1 scale-factors=1/1 "
-                      "multi-frame-contexts=1");
-  vpx_fixed_buf outputs[10];
-  memset(&outputs[0], 0, sizeof(outputs));
-  Pass2EncodeNFrames(&stats_buf, 10, 1, &outputs[0]);
-  DecodeNFrames(&outputs[0], 10);
-  FreeBitstreamBuffers(&outputs[0], 10);
-}
-
-}  // namespace
+}  // namespace svc_test
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/svc_test.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/svc_test.h
new file mode 100644
index 0000000..f1d727f
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/svc_test.h
@@ -0,0 +1,67 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_TEST_SVC_TEST_H_
+#define VPX_TEST_SVC_TEST_H_
+
+#include "./vpx_config.h"
+#include "third_party/googletest/src/include/gtest/gtest.h"
+#include "test/codec_factory.h"
+#include "test/encode_test_driver.h"
+#include "test/i420_video_source.h"
+#include "test/util.h"
+#include "test/y4m_video_source.h"
+#include "vpx/vpx_codec.h"
+#include "vpx_ports/bitops.h"
+
+namespace svc_test {
+class OnePassCbrSvc : public ::libvpx_test::EncoderTest {
+ public:
+  explicit OnePassCbrSvc(const ::libvpx_test::CodecFactory *codec)
+      : EncoderTest(codec), base_speed_setting_(0), speed_setting_(0),
+        superframe_count_(0), temporal_layer_id_(0), number_temporal_layers_(0),
+        number_spatial_layers_(0) {
+    memset(&svc_params_, 0, sizeof(svc_params_));
+    memset(bits_in_buffer_model_, 0,
+           sizeof(bits_in_buffer_model_[0]) * VPX_MAX_LAYERS);
+    memset(layer_target_avg_bandwidth_, 0,
+           sizeof(layer_target_avg_bandwidth_[0]) * VPX_MAX_LAYERS);
+  }
+
+ protected:
+  virtual ~OnePassCbrSvc() {}
+
+  virtual void SetConfig(const int num_temporal_layer) = 0;
+
+  virtual void SetSvcConfig(const int num_spatial_layer,
+                            const int num_temporal_layer);
+
+  virtual void PreEncodeFrameHookSetup(::libvpx_test::VideoSource *video,
+                                       ::libvpx_test::Encoder *encoder);
+
+  virtual void PostEncodeFrameHook(::libvpx_test::Encoder *encoder);
+
+  virtual void AssignLayerBitrates();
+
+  virtual void MismatchHook(const vpx_image_t *, const vpx_image_t *) {}
+
+  vpx_svc_extra_cfg_t svc_params_;
+  int64_t bits_in_buffer_model_[VPX_MAX_LAYERS];
+  int layer_target_avg_bandwidth_[VPX_MAX_LAYERS];
+  int base_speed_setting_;
+  int speed_setting_;
+  int superframe_count_;
+  int temporal_layer_id_;
+  int number_temporal_layers_;
+  int number_spatial_layers_;
+};
+}  // namespace svc_test
+
+#endif  // VPX_TEST_SVC_TEST_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/temporal_filter_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/temporal_filter_test.cc
deleted file mode 100644
index 655a36b..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/temporal_filter_test.cc
+++ /dev/null
@@ -1,277 +0,0 @@
-/*
- *  Copyright (c) 2016 The WebM project authors. All Rights Reserved.
- *
- *  Use of this source code is governed by a BSD-style license
- *  that can be found in the LICENSE file in the root of the source
- *  tree. An additional intellectual property rights grant can be found
- *  in the file PATENTS.  All contributing project authors may
- *  be found in the AUTHORS file in the root of the source tree.
- */
-
-#include <limits>
-
-#include "third_party/googletest/src/include/gtest/gtest.h"
-
-#include "./vp9_rtcd.h"
-#include "test/acm_random.h"
-#include "test/buffer.h"
-#include "test/register_state_check.h"
-#include "vpx_ports/vpx_timer.h"
-
-namespace {
-
-using ::libvpx_test::ACMRandom;
-using ::libvpx_test::Buffer;
-
-typedef void (*TemporalFilterFunc)(const uint8_t *a, unsigned int stride,
-                                   const uint8_t *b, unsigned int w,
-                                   unsigned int h, int filter_strength,
-                                   int filter_weight, unsigned int *accumulator,
-                                   uint16_t *count);
-
-// Calculate the difference between 'a' and 'b', sum in blocks of 9, and apply
-// filter based on strength and weight. Store the resulting filter amount in
-// 'count' and apply it to 'b' and store it in 'accumulator'.
-void reference_filter(const Buffer<uint8_t> &a, const Buffer<uint8_t> &b, int w,
-                      int h, int filter_strength, int filter_weight,
-                      Buffer<unsigned int> *accumulator,
-                      Buffer<uint16_t> *count) {
-  Buffer<int> diff_sq = Buffer<int>(w, h, 0);
-  ASSERT_TRUE(diff_sq.Init());
-  diff_sq.Set(0);
-
-  int rounding = 0;
-  if (filter_strength > 0) {
-    rounding = 1 << (filter_strength - 1);
-  }
-
-  // Calculate all the differences. Avoids re-calculating a bunch of extra
-  // values.
-  for (int height = 0; height < h; ++height) {
-    for (int width = 0; width < w; ++width) {
-      int diff = a.TopLeftPixel()[height * a.stride() + width] -
-                 b.TopLeftPixel()[height * b.stride() + width];
-      diff_sq.TopLeftPixel()[height * diff_sq.stride() + width] = diff * diff;
-    }
-  }
-
-  // For any given point, sum the neighboring values and calculate the
-  // modifier.
-  for (int height = 0; height < h; ++height) {
-    for (int width = 0; width < w; ++width) {
-      // Determine how many values are being summed.
-      int summed_values = 9;
-
-      if (height == 0 || height == (h - 1)) {
-        summed_values -= 3;
-      }
-
-      if (width == 0 || width == (w - 1)) {
-        if (summed_values == 6) {  // corner
-          summed_values -= 2;
-        } else {
-          summed_values -= 3;
-        }
-      }
-
-      // Sum the diff_sq of the surrounding values.
-      int sum = 0;
-      for (int idy = -1; idy <= 1; ++idy) {
-        for (int idx = -1; idx <= 1; ++idx) {
-          const int y = height + idy;
-          const int x = width + idx;
-
-          // If inside the border.
-          if (y >= 0 && y < h && x >= 0 && x < w) {
-            sum += diff_sq.TopLeftPixel()[y * diff_sq.stride() + x];
-          }
-        }
-      }
-
-      sum *= 3;
-      sum /= summed_values;
-      sum += rounding;
-      sum >>= filter_strength;
-
-      // Clamp the value and invert it.
-      if (sum > 16) sum = 16;
-      sum = 16 - sum;
-
-      sum *= filter_weight;
-
-      count->TopLeftPixel()[height * count->stride() + width] += sum;
-      accumulator->TopLeftPixel()[height * accumulator->stride() + width] +=
-          sum * b.TopLeftPixel()[height * b.stride() + width];
-    }
-  }
-}
-
-class TemporalFilterTest : public ::testing::TestWithParam<TemporalFilterFunc> {
- public:
-  virtual void SetUp() {
-    filter_func_ = GetParam();
-    rnd_.Reset(ACMRandom::DeterministicSeed());
-  }
-
- protected:
-  TemporalFilterFunc filter_func_;
-  ACMRandom rnd_;
-};
-
-TEST_P(TemporalFilterTest, SizeCombinations) {
-  // Depending on subsampling this function may be called with values of 8 or 16
-  // for width and height, in any combination.
-  Buffer<uint8_t> a = Buffer<uint8_t>(16, 16, 8);
-  ASSERT_TRUE(a.Init());
-
-  const int filter_weight = 2;
-  const int filter_strength = 6;
-
-  for (int width = 8; width <= 16; width += 8) {
-    for (int height = 8; height <= 16; height += 8) {
-      // The second buffer must not have any border.
-      Buffer<uint8_t> b = Buffer<uint8_t>(width, height, 0);
-      ASSERT_TRUE(b.Init());
-      Buffer<unsigned int> accum_ref = Buffer<unsigned int>(width, height, 0);
-      ASSERT_TRUE(accum_ref.Init());
-      Buffer<unsigned int> accum_chk = Buffer<unsigned int>(width, height, 0);
-      ASSERT_TRUE(accum_chk.Init());
-      Buffer<uint16_t> count_ref = Buffer<uint16_t>(width, height, 0);
-      ASSERT_TRUE(count_ref.Init());
-      Buffer<uint16_t> count_chk = Buffer<uint16_t>(width, height, 0);
-      ASSERT_TRUE(count_chk.Init());
-
-      // The difference between the buffers must be small to pass the threshold
-      // to apply the filter.
-      a.Set(&rnd_, 0, 7);
-      b.Set(&rnd_, 0, 7);
-
-      accum_ref.Set(rnd_.Rand8());
-      accum_chk.CopyFrom(accum_ref);
-      count_ref.Set(rnd_.Rand8());
-      count_chk.CopyFrom(count_ref);
-      reference_filter(a, b, width, height, filter_strength, filter_weight,
-                       &accum_ref, &count_ref);
-      ASM_REGISTER_STATE_CHECK(
-          filter_func_(a.TopLeftPixel(), a.stride(), b.TopLeftPixel(), width,
-                       height, filter_strength, filter_weight,
-                       accum_chk.TopLeftPixel(), count_chk.TopLeftPixel()));
-      EXPECT_TRUE(accum_chk.CheckValues(accum_ref));
-      EXPECT_TRUE(count_chk.CheckValues(count_ref));
-      if (HasFailure()) {
-        printf("Width: %d Height: %d\n", width, height);
-        count_chk.PrintDifference(count_ref);
-        accum_chk.PrintDifference(accum_ref);
-        return;
-      }
-    }
-  }
-}
-
-TEST_P(TemporalFilterTest, CompareReferenceRandom) {
-  for (int width = 8; width <= 16; width += 8) {
-    for (int height = 8; height <= 16; height += 8) {
-      Buffer<uint8_t> a = Buffer<uint8_t>(width, height, 8);
-      ASSERT_TRUE(a.Init());
-      // The second buffer must not have any border.
-      Buffer<uint8_t> b = Buffer<uint8_t>(width, height, 0);
-      ASSERT_TRUE(b.Init());
-      Buffer<unsigned int> accum_ref = Buffer<unsigned int>(width, height, 0);
-      ASSERT_TRUE(accum_ref.Init());
-      Buffer<unsigned int> accum_chk = Buffer<unsigned int>(width, height, 0);
-      ASSERT_TRUE(accum_chk.Init());
-      Buffer<uint16_t> count_ref = Buffer<uint16_t>(width, height, 0);
-      ASSERT_TRUE(count_ref.Init());
-      Buffer<uint16_t> count_chk = Buffer<uint16_t>(width, height, 0);
-      ASSERT_TRUE(count_chk.Init());
-
-      for (int filter_strength = 0; filter_strength <= 6; ++filter_strength) {
-        for (int filter_weight = 0; filter_weight <= 2; ++filter_weight) {
-          for (int repeat = 0; repeat < 100; ++repeat) {
-            if (repeat < 50) {
-              a.Set(&rnd_, 0, 7);
-              b.Set(&rnd_, 0, 7);
-            } else {
-              // Check large (but close) values as well.
-              a.Set(&rnd_, std::numeric_limits<uint8_t>::max() - 7,
-                    std::numeric_limits<uint8_t>::max());
-              b.Set(&rnd_, std::numeric_limits<uint8_t>::max() - 7,
-                    std::numeric_limits<uint8_t>::max());
-            }
-
-            accum_ref.Set(rnd_.Rand8());
-            accum_chk.CopyFrom(accum_ref);
-            count_ref.Set(rnd_.Rand8());
-            count_chk.CopyFrom(count_ref);
-            reference_filter(a, b, width, height, filter_strength,
-                             filter_weight, &accum_ref, &count_ref);
-            ASM_REGISTER_STATE_CHECK(filter_func_(
-                a.TopLeftPixel(), a.stride(), b.TopLeftPixel(), width, height,
-                filter_strength, filter_weight, accum_chk.TopLeftPixel(),
-                count_chk.TopLeftPixel()));
-            EXPECT_TRUE(accum_chk.CheckValues(accum_ref));
-            EXPECT_TRUE(count_chk.CheckValues(count_ref));
-            if (HasFailure()) {
-              printf("Weight: %d Strength: %d\n", filter_weight,
-                     filter_strength);
-              count_chk.PrintDifference(count_ref);
-              accum_chk.PrintDifference(accum_ref);
-              return;
-            }
-          }
-        }
-      }
-    }
-  }
-}
-
-TEST_P(TemporalFilterTest, DISABLED_Speed) {
-  Buffer<uint8_t> a = Buffer<uint8_t>(16, 16, 8);
-  ASSERT_TRUE(a.Init());
-
-  const int filter_weight = 2;
-  const int filter_strength = 6;
-
-  for (int width = 8; width <= 16; width += 8) {
-    for (int height = 8; height <= 16; height += 8) {
-      // The second buffer must not have any border.
-      Buffer<uint8_t> b = Buffer<uint8_t>(width, height, 0);
-      ASSERT_TRUE(b.Init());
-      Buffer<unsigned int> accum_ref = Buffer<unsigned int>(width, height, 0);
-      ASSERT_TRUE(accum_ref.Init());
-      Buffer<unsigned int> accum_chk = Buffer<unsigned int>(width, height, 0);
-      ASSERT_TRUE(accum_chk.Init());
-      Buffer<uint16_t> count_ref = Buffer<uint16_t>(width, height, 0);
-      ASSERT_TRUE(count_ref.Init());
-      Buffer<uint16_t> count_chk = Buffer<uint16_t>(width, height, 0);
-      ASSERT_TRUE(count_chk.Init());
-
-      a.Set(&rnd_, 0, 7);
-      b.Set(&rnd_, 0, 7);
-
-      accum_chk.Set(0);
-      count_chk.Set(0);
-
-      vpx_usec_timer timer;
-      vpx_usec_timer_start(&timer);
-      for (int i = 0; i < 10000; ++i) {
-        filter_func_(a.TopLeftPixel(), a.stride(), b.TopLeftPixel(), width,
-                     height, filter_strength, filter_weight,
-                     accum_chk.TopLeftPixel(), count_chk.TopLeftPixel());
-      }
-      vpx_usec_timer_mark(&timer);
-      const int elapsed_time = static_cast<int>(vpx_usec_timer_elapsed(&timer));
-      printf("Temporal filter %dx%d time: %5d us\n", width, height,
-             elapsed_time);
-    }
-  }
-}
-
-INSTANTIATE_TEST_CASE_P(C, TemporalFilterTest,
-                        ::testing::Values(&vp9_temporal_filter_apply_c));
-
-#if HAVE_SSE4_1
-INSTANTIATE_TEST_CASE_P(SSE4_1, TemporalFilterTest,
-                        ::testing::Values(&vp9_temporal_filter_apply_sse4_1));
-#endif  // HAVE_SSE4_1
-}  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test-data.mk b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test-data.mk
index 7ca11bc..81f035d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test-data.mk
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test-data.mk
@@ -3,14 +3,16 @@
 # Encoder test source
 LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += hantro_collage_w352h288.yuv
 LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += hantro_odd.yuv
+LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += desktop_office1.1280_720-020.yuv
+LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += slides_code_term_web_plot.1920_1080.yuv
 
-LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_10_420.y4m
-LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_10_422.y4m
-LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_10_444.y4m
+LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_10_420_20f.y4m
+LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_10_422_20f.y4m
+LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_10_444_20f.y4m
 LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_10_440.yuv
-LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_12_420.y4m
-LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_12_422.y4m
-LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_12_444.y4m
+LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_12_420_20f.y4m
+LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_12_422_20f.y4m
+LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_12_444_20f.y4m
 LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_12_440.yuv
 LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_8_420_a10-1.y4m
 LIBVPX_TEST_DATA-$(CONFIG_ENCODERS) += park_joy_90p_8_420.y4m
@@ -24,6 +26,7 @@
 LIBVPX_TEST_DATA-$(CONFIG_VP9_ENCODER) += rush_hour_444.y4m
 LIBVPX_TEST_DATA-$(CONFIG_VP9_ENCODER) += screendata.y4m
 LIBVPX_TEST_DATA-$(CONFIG_VP9_ENCODER) += niklas_640_480_30.yuv
+LIBVPX_TEST_DATA-$(CONFIG_RATE_CTRL) += bus_352x288_420_f20_b8.yuv
 
 # Test vectors
 LIBVPX_TEST_DATA-$(CONFIG_VP8_DECODER) += vp80-00-comprehensive-001.ivf
@@ -734,10 +737,14 @@
 # Invalid files for testing libvpx error checking.
 LIBVPX_TEST_DATA-$(CONFIG_VP8_DECODER) += invalid-bug-1443.ivf
 LIBVPX_TEST_DATA-$(CONFIG_VP8_DECODER) += invalid-bug-1443.ivf.res
+LIBVPX_TEST_DATA-$(CONFIG_VP8_DECODER) += invalid-bug-148271109.ivf
+LIBVPX_TEST_DATA-$(CONFIG_VP8_DECODER) += invalid-bug-148271109.ivf.res
 LIBVPX_TEST_DATA-$(CONFIG_VP8_DECODER) += invalid-token-partition.ivf
 LIBVPX_TEST_DATA-$(CONFIG_VP8_DECODER) += invalid-token-partition.ivf.res
 LIBVPX_TEST_DATA-$(CONFIG_VP8_DECODER) += invalid-vp80-00-comprehensive-018.ivf.2kf_0x6.ivf
 LIBVPX_TEST_DATA-$(CONFIG_VP8_DECODER) += invalid-vp80-00-comprehensive-018.ivf.2kf_0x6.ivf.res
+LIBVPX_TEST_DATA-$(CONFIG_VP8_DECODER) += invalid-vp80-00-comprehensive-s17661_r01-05_b6-.ivf
+LIBVPX_TEST_DATA-$(CONFIG_VP8_DECODER) += invalid-vp80-00-comprehensive-s17661_r01-05_b6-.ivf.res
 LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += invalid-vp90-01-v3.webm
 LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += invalid-vp90-01-v3.webm.res
 LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += invalid-vp90-02-v2.webm
@@ -785,8 +792,13 @@
 LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += invalid-vp90-2-07-frame_parallel-3.webm
 LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += invalid-crbug-629481.webm
 LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += invalid-crbug-629481.webm.res
+LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += invalid-crbug-1558.ivf
+LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += invalid-crbug-1558.ivf.res
+LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += invalid-crbug-1562.ivf
+LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += invalid-crbug-1562.ivf.res
 LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += invalid-crbug-667044.webm
 LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += invalid-crbug-667044.webm.res
+LIBVPX_TEST_DATA-$(CONFIG_VP9_DECODER) += crbug-1539.rawfile
 
 ifeq ($(CONFIG_DECODE_PERF_TESTS),yes)
 # Encode / Decode test
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test-data.sha1 b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test-data.sha1
index 3a23ff5..dcaea28 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test-data.sha1
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test-data.sha1
@@ -1,3 +1,4 @@
+3eaf216d9fc8b4b9bb8c3956311f49a85974806c *bus_352x288_420_f20_b8.yuv
 d5dfb0151c9051f8c85999255645d7a23916d3c0 *hantro_collage_w352h288.yuv
 b87815bf86020c592ccc7a846ba2e28ec8043902 *hantro_odd.yuv
 76024eb753cdac6a5e5703aaea189d35c3c30ac7 *invalid-vp90-2-00-quantizer-00.webm.ivf.s5861_r01-05_b6-.v2.ivf
@@ -17,13 +18,13 @@
 d637297561dd904eb2c97a9015deeb31c4a1e8d2 *invalid-vp90-2-08-tile_1x4_frame_parallel_all_key.webm
 3a204bdbeaa3c6458b77bcebb8366d107267f55d *invalid-vp90-2-08-tile_1x4_frame_parallel_all_key.webm.res
 9aa21d8b2cb9d39abe8a7bb6032dc66955fb4342 *noisy_clip_640_360.y4m
-a432f96ff0a787268e2f94a8092ab161a18d1b06 *park_joy_90p_10_420.y4m
-0b194cc312c3a2e84d156a221b0a5eb615dfddc5 *park_joy_90p_10_422.y4m
-ff0e0a21dc2adc95b8c1b37902713700655ced17 *park_joy_90p_10_444.y4m
+0936b837708ae68c034719f8e07596021c2c214f *park_joy_90p_10_420_20f.y4m
+5727a853c083c1099f837d27967bc1322d50ed4f *park_joy_90p_10_422_20f.y4m
+e13489470ef8e8b2a871a5640d795a42a39be58d *park_joy_90p_10_444_20f.y4m
 c934da6fb8cc54ee2a8c17c54cf6076dac37ead0 *park_joy_90p_10_440.yuv
-614c32ae1eca391e867c70d19974f0d62664dd99 *park_joy_90p_12_420.y4m
-c92825f1ea25c5c37855083a69faac6ac4641a9e *park_joy_90p_12_422.y4m
-b592189b885b6cc85db55cc98512a197d73d3b34 *park_joy_90p_12_444.y4m
+79b0dc1784635a7f291e21c4e8d66a29c496ab99 *park_joy_90p_12_420_20f.y4m
+9cf22b0f809f7464c8b9058f0cfa9d905921cbd1 *park_joy_90p_12_422_20f.y4m
+22b2a4abaecc4a9ade6bb503d25fb82367947e85 *park_joy_90p_12_444_20f.y4m
 82c1bfcca368c2f22bad7d693d690d5499ecdd11 *park_joy_90p_12_440.yuv
 b9e1e90aece2be6e2c90d89e6ab2372d5f8c792d *park_joy_90p_8_420_a10-1.y4m
 4e0eb61e76f0684188d9bc9f3ce61f6b6b77bb2c *park_joy_90p_8_420.y4m
@@ -856,3 +857,14 @@
 90a8a95e7024f015b87f5483a65036609b3d1b74 *invalid-token-partition.ivf.res
 17696cd21e875f1d6e5d418cbf89feab02c8850a *vp90-2-22-svc_1280x720_1.webm
 e2f9e1e47a791b4e939a9bdc50bf7a25b3761f77 *vp90-2-22-svc_1280x720_1.webm.md5
+a0fbbbc5dd50fd452096f4455a58c1a8c9f66697 *invalid-vp80-00-comprehensive-s17661_r01-05_b6-.ivf
+a61774cf03fc584bd9f0904fc145253bb8ea6c4c *invalid-vp80-00-comprehensive-s17661_r01-05_b6-.ivf.res
+894fae3afee0290546590823974203ab4b8abd95 *crbug-1539.rawfile
+f1026c03efd5da21b381c8eb21f0d64e6d7e4ba3 *invalid-crbug-1558.ivf
+eb198c25f861c3fe2cbd310de11eb96843019345 *invalid-crbug-1558.ivf.res
+c62b005a9fd32c36a1b3f67de6840330f9915e34 *invalid-crbug-1562.ivf
+f0cd8389948ad16085714d96567612136f6a46c5 *invalid-crbug-1562.ivf.res
+bac455906360b45338a16dd626ac5f19bc36a307 *desktop_office1.1280_720-020.yuv
+094be4b80fa30bd227149ea16ab6476d549ea092 *slides_code_term_web_plot.1920_1080.yuv
+518a0be998afece76d3df76047d51e256c591ff2 *invalid-bug-148271109.ivf
+d3964f9dad9f60363c81b688324d95b4ec7c8038 *invalid-bug-148271109.ivf.res
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test.mk b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test.mk
index a3716be..b4a5ea0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test.mk
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test.mk
@@ -1,4 +1,6 @@
 LIBVPX_TEST_SRCS-yes += acm_random.h
+LIBVPX_TEST_SRCS-yes += bench.h
+LIBVPX_TEST_SRCS-yes += bench.cc
 LIBVPX_TEST_SRCS-yes += buffer.h
 LIBVPX_TEST_SRCS-yes += clear_system_state.h
 LIBVPX_TEST_SRCS-yes += codec_factory.h
@@ -22,7 +24,8 @@
 LIBVPX_TEST_SRCS-$(CONFIG_ENCODERS)    += altref_test.cc
 LIBVPX_TEST_SRCS-$(CONFIG_ENCODERS)    += aq_segment_test.cc
 LIBVPX_TEST_SRCS-$(CONFIG_ENCODERS)    += alt_ref_aq_segment_test.cc
-LIBVPX_TEST_SRCS-$(CONFIG_ENCODERS)    += datarate_test.cc
+LIBVPX_TEST_SRCS-$(CONFIG_ENCODERS)    += vp8_datarate_test.cc
+LIBVPX_TEST_SRCS-$(CONFIG_ENCODERS)    += vp9_datarate_test.cc
 LIBVPX_TEST_SRCS-$(CONFIG_ENCODERS)    += encode_api_test.cc
 LIBVPX_TEST_SRCS-$(CONFIG_ENCODERS)    += error_resilience_test.cc
 LIBVPX_TEST_SRCS-$(CONFIG_ENCODERS)    += i420_video_source.h
@@ -46,9 +49,16 @@
 LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += frame_size_tests.cc
 LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += vp9_lossless_test.cc
 LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += vp9_end_to_end_test.cc
+LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += decode_corrupted.cc
 LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += vp9_ethread_test.cc
 LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += vp9_motion_vector_test.cc
 LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += level_test.cc
+LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += svc_datarate_test.cc
+LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += svc_test.cc
+LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += svc_test.h
+LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += svc_end_to_end_test.cc
+LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += timestamp_test.cc
+LIBVPX_TEST_SRCS-$(CONFIG_RATE_CTRL)   += simple_encode_test.cc
 
 LIBVPX_TEST_SRCS-yes                   += decode_test_driver.cc
 LIBVPX_TEST_SRCS-yes                   += decode_test_driver.h
@@ -67,6 +77,7 @@
 LIBWEBM_PARSER_SRCS += ../third_party/libwebm/mkvparser/mkvreader.cc
 LIBWEBM_PARSER_SRCS += ../third_party/libwebm/mkvparser/mkvparser.h
 LIBWEBM_PARSER_SRCS += ../third_party/libwebm/mkvparser/mkvreader.h
+LIBWEBM_PARSER_SRCS += ../third_party/libwebm/common/webmids.h
 LIBVPX_TEST_SRCS-$(CONFIG_DECODERS)    += $(LIBWEBM_PARSER_SRCS)
 LIBVPX_TEST_SRCS-$(CONFIG_DECODERS)    += ../tools_common.h
 LIBVPX_TEST_SRCS-$(CONFIG_DECODERS)    += ../webmdec.cc
@@ -161,7 +172,7 @@
 LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += minmax_test.cc
 LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += vp9_scale_test.cc
 ifneq ($(CONFIG_REALTIME_ONLY),yes)
-LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += temporal_filter_test.cc
+LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += yuv_temporal_filter_test.cc
 endif
 LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += variance_test.cc
 LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += vp9_block_error_test.cc
@@ -169,11 +180,14 @@
 LIBVPX_TEST_SRCS-$(CONFIG_VP9_ENCODER) += vp9_subtract_test.cc
 
 ifeq ($(CONFIG_VP9_ENCODER),yes)
-LIBVPX_TEST_SRCS-$(CONFIG_SPATIAL_SVC) += svc_test.cc
 LIBVPX_TEST_SRCS-$(CONFIG_INTERNAL_STATS) += blockiness_test.cc
 LIBVPX_TEST_SRCS-$(CONFIG_INTERNAL_STATS) += consistency_test.cc
 endif
 
+ifeq ($(CONFIG_VP9_ENCODER),yes)
+LIBVPX_TEST_SRCS-$(CONFIG_NON_GREEDY_MV) += non_greedy_mv_test.cc
+endif
+
 ifeq ($(CONFIG_VP9_ENCODER)$(CONFIG_VP9_TEMPORAL_DENOISING),yesyes)
 LIBVPX_TEST_SRCS-yes += vp9_denoiser_test.cc
 endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_intra_pred_speed.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_intra_pred_speed.cc
index 1cdeda4..0be9fee 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_intra_pred_speed.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_intra_pred_speed.cc
@@ -313,6 +313,8 @@
 #endif  // HAVE_MSA
 
 #if HAVE_VSX
+// TODO(crbug.com/webm/1522): Fix test failures.
+#if 0
 INTRA_PRED_TEST(VSX, TestIntraPred4, NULL, NULL, NULL, NULL, NULL,
                 vpx_h_predictor_4x4_vsx, NULL, NULL, NULL, NULL, NULL, NULL,
                 vpx_tm_predictor_4x4_vsx)
@@ -321,6 +323,7 @@
                 NULL, vpx_h_predictor_8x8_vsx, vpx_d45_predictor_8x8_vsx, NULL,
                 NULL, NULL, NULL, vpx_d63_predictor_8x8_vsx,
                 vpx_tm_predictor_8x8_vsx)
+#endif
 
 INTRA_PRED_TEST(VSX, TestIntraPred16, vpx_dc_predictor_16x16_vsx,
                 vpx_dc_left_predictor_16x16_vsx, vpx_dc_top_predictor_16x16_vsx,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_libvpx.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_libvpx.cc
index 3405e45..222a83f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_libvpx.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_libvpx.cc
@@ -12,7 +12,7 @@
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
 #include "./vpx_config.h"
-#if ARCH_X86 || ARCH_X86_64
+#if VPX_ARCH_X86 || VPX_ARCH_X86_64
 #include "vpx_ports/x86.h"
 #endif
 extern "C" {
@@ -26,7 +26,7 @@
 extern void vpx_scale_rtcd();
 }
 
-#if ARCH_X86 || ARCH_X86_64
+#if VPX_ARCH_X86 || VPX_ARCH_X86_64
 static void append_negative_gtest_filter(const char *str) {
   std::string filter = ::testing::FLAGS_gtest_filter;
   // Negative patterns begin with one '-' followed by a ':' separated list.
@@ -34,12 +34,12 @@
   filter += str;
   ::testing::FLAGS_gtest_filter = filter;
 }
-#endif  // ARCH_X86 || ARCH_X86_64
+#endif  // VPX_ARCH_X86 || VPX_ARCH_X86_64
 
 int main(int argc, char **argv) {
   ::testing::InitGoogleTest(&argc, argv);
 
-#if ARCH_X86 || ARCH_X86_64
+#if VPX_ARCH_X86 || VPX_ARCH_X86_64
   const int simd_caps = x86_simd_caps();
   if (!(simd_caps & HAS_MMX)) append_negative_gtest_filter(":MMX.*:MMX/*");
   if (!(simd_caps & HAS_SSE)) append_negative_gtest_filter(":SSE.*:SSE/*");
@@ -56,7 +56,7 @@
   if (!(simd_caps & HAS_AVX512)) {
     append_negative_gtest_filter(":AVX512.*:AVX512/*");
   }
-#endif  // ARCH_X86 || ARCH_X86_64
+#endif  // VPX_ARCH_X86 || VPX_ARCH_X86_64
 
 #if !CONFIG_SHARED
 // Shared library builds don't support whitebox tests
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_vector_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_vector_test.cc
index 1879b3d..5a97371 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_vector_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_vector_test.cc
@@ -10,8 +10,11 @@
 
 #include <cstdio>
 #include <cstdlib>
+#include <memory>
 #include <set>
 #include <string>
+#include <tuple>
+
 #include "third_party/googletest/src/include/gtest/gtest.h"
 #include "../tools_common.h"
 #include "./vpx_config.h"
@@ -29,9 +32,10 @@
 namespace {
 
 const int kThreads = 0;
-const int kFileName = 1;
+const int kMtMode = 1;
+const int kFileName = 2;
 
-typedef std::tr1::tuple<int, const char *> DecodeParam;
+typedef std::tuple<int, int, const char *> DecodeParam;
 
 class TestVectorTest : public ::libvpx_test::DecoderTest,
                        public ::libvpx_test::CodecTestWithParam<DecodeParam> {
@@ -54,6 +58,25 @@
         << "Md5 file open failed. Filename: " << md5_file_name_;
   }
 
+#if CONFIG_VP9_DECODER
+  virtual void PreDecodeFrameHook(
+      const libvpx_test::CompressedVideoSource &video,
+      libvpx_test::Decoder *decoder) {
+    if (video.frame_number() == 0 && mt_mode_ >= 0) {
+      if (mt_mode_ == 1) {
+        decoder->Control(VP9D_SET_LOOP_FILTER_OPT, 1);
+        decoder->Control(VP9D_SET_ROW_MT, 0);
+      } else if (mt_mode_ == 2) {
+        decoder->Control(VP9D_SET_LOOP_FILTER_OPT, 0);
+        decoder->Control(VP9D_SET_ROW_MT, 1);
+      } else {
+        decoder->Control(VP9D_SET_LOOP_FILTER_OPT, 0);
+        decoder->Control(VP9D_SET_ROW_MT, 0);
+      }
+    }
+  }
+#endif
+
   virtual void DecompressedFrameHook(const vpx_image_t &img,
                                      const unsigned int frame_number) {
     ASSERT_TRUE(md5_file_ != NULL);
@@ -77,6 +100,7 @@
 #if CONFIG_VP9_DECODER
   std::set<std::string> resize_clips_;
 #endif
+  int mt_mode_;
 
  private:
   FILE *md5_file_;
@@ -88,19 +112,20 @@
 // the test failed.
 TEST_P(TestVectorTest, MD5Match) {
   const DecodeParam input = GET_PARAM(1);
-  const std::string filename = std::tr1::get<kFileName>(input);
+  const std::string filename = std::get<kFileName>(input);
   vpx_codec_flags_t flags = 0;
   vpx_codec_dec_cfg_t cfg = vpx_codec_dec_cfg_t();
   char str[256];
 
-  cfg.threads = std::tr1::get<kThreads>(input);
-
-  snprintf(str, sizeof(str) / sizeof(str[0]) - 1, "file: %s threads: %d",
-           filename.c_str(), cfg.threads);
+  cfg.threads = std::get<kThreads>(input);
+  mt_mode_ = std::get<kMtMode>(input);
+  snprintf(str, sizeof(str) / sizeof(str[0]) - 1,
+           "file: %s threads: %d MT mode: %d", filename.c_str(), cfg.threads,
+           mt_mode_);
   SCOPED_TRACE(str);
 
   // Open compressed video file.
-  testing::internal::scoped_ptr<libvpx_test::CompressedVideoSource> video;
+  std::unique_ptr<libvpx_test::CompressedVideoSource> video;
   if (filename.substr(filename.length() - 3, 3) == "ivf") {
     video.reset(new libvpx_test::IVFVideoSource(filename));
   } else if (filename.substr(filename.length() - 4, 4) == "webm") {
@@ -131,7 +156,8 @@
 VP8_INSTANTIATE_TEST_CASE(
     TestVectorTest,
     ::testing::Combine(
-        ::testing::Values(1),  // Single thread.
+        ::testing::Values(1),   // Single thread.
+        ::testing::Values(-1),  // LPF opt and Row MT is not applicable
         ::testing::ValuesIn(libvpx_test::kVP8TestVectors,
                             libvpx_test::kVP8TestVectors +
                                 libvpx_test::kNumVP8TestVectors)));
@@ -144,6 +170,7 @@
             static_cast<const libvpx_test::CodecFactory *>(&libvpx_test::kVP8)),
         ::testing::Combine(
             ::testing::Range(2, 9),  // With 2 ~ 8 threads.
+            ::testing::Values(-1),   // LPF opt and Row MT is not applicable
             ::testing::ValuesIn(libvpx_test::kVP8TestVectors,
                                 libvpx_test::kVP8TestVectors +
                                     libvpx_test::kNumVP8TestVectors))));
@@ -154,7 +181,8 @@
 VP9_INSTANTIATE_TEST_CASE(
     TestVectorTest,
     ::testing::Combine(
-        ::testing::Values(1),  // Single thread.
+        ::testing::Values(1),   // Single thread.
+        ::testing::Values(-1),  // LPF opt and Row MT is not applicable
         ::testing::ValuesIn(libvpx_test::kVP9TestVectors,
                             libvpx_test::kVP9TestVectors +
                                 libvpx_test::kNumVP9TestVectors)));
@@ -166,6 +194,10 @@
             static_cast<const libvpx_test::CodecFactory *>(&libvpx_test::kVP9)),
         ::testing::Combine(
             ::testing::Range(2, 9),  // With 2 ~ 8 threads.
+            ::testing::Range(0, 3),  // With multi threads modes 0 ~ 2
+                                     // 0: LPF opt and Row MT disabled
+                                     // 1: LPF opt enabled
+                                     // 2: Row MT enabled
             ::testing::ValuesIn(libvpx_test::kVP9TestVectors,
                                 libvpx_test::kVP9TestVectors +
                                     libvpx_test::kNumVP9TestVectors))));
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_vectors.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_vectors.h
index 3df3e81..0a4be0f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_vectors.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/test_vectors.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef TEST_TEST_VECTORS_H_
-#define TEST_TEST_VECTORS_H_
+#ifndef VPX_TEST_TEST_VECTORS_H_
+#define VPX_TEST_TEST_VECTORS_H_
 
 #include "./vpx_config.h"
 
@@ -31,4 +31,4 @@
 
 }  // namespace libvpx_test
 
-#endif  // TEST_TEST_VECTORS_H_
+#endif  // VPX_TEST_TEST_VECTORS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/tile_independence_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/tile_independence_test.cc
index e24981c..1d1020a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/tile_independence_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/tile_independence_test.cc
@@ -48,7 +48,7 @@
 
   virtual void PreEncodeFrameHook(libvpx_test::VideoSource *video,
                                   libvpx_test::Encoder *encoder) {
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       encoder->Control(VP9E_SET_TILE_COLUMNS, n_tiles_);
     }
   }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/timestamp_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/timestamp_test.cc
new file mode 100644
index 0000000..41c1928
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/timestamp_test.cc
@@ -0,0 +1,101 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+#include "test/codec_factory.h"
+#include "test/encode_test_driver.h"
+#include "test/util.h"
+#include "test/video_source.h"
+#include "third_party/googletest/src/include/gtest/gtest.h"
+
+namespace {
+
+const int kVideoSourceWidth = 320;
+const int kVideoSourceHeight = 240;
+const int kFramesToEncode = 3;
+
+// A video source that exposes functions to set the timebase, framerate and
+// starting pts.
+class DummyTimebaseVideoSource : public ::libvpx_test::DummyVideoSource {
+ public:
+  // Parameters num and den set the timebase for the video source.
+  DummyTimebaseVideoSource(int num, int den)
+      : timebase_({ num, den }), framerate_numerator_(30),
+        framerate_denominator_(1), starting_pts_(0) {
+    SetSize(kVideoSourceWidth, kVideoSourceHeight);
+    set_limit(kFramesToEncode);
+  }
+
+  void SetFramerate(int numerator, int denominator) {
+    framerate_numerator_ = numerator;
+    framerate_denominator_ = denominator;
+  }
+
+  // Returns one frames duration in timebase units as a double.
+  double FrameDuration() const {
+    return (static_cast<double>(timebase_.den) / timebase_.num) /
+           (static_cast<double>(framerate_numerator_) / framerate_denominator_);
+  }
+
+  virtual vpx_codec_pts_t pts() const {
+    return static_cast<vpx_codec_pts_t>(frame_ * FrameDuration() +
+                                        starting_pts_ + 0.5);
+  }
+
+  virtual unsigned long duration() const {
+    return static_cast<unsigned long>(FrameDuration() + 0.5);
+  }
+
+  virtual vpx_rational_t timebase() const { return timebase_; }
+
+  void set_starting_pts(int64_t starting_pts) { starting_pts_ = starting_pts; }
+
+ private:
+  vpx_rational_t timebase_;
+  int framerate_numerator_;
+  int framerate_denominator_;
+  int64_t starting_pts_;
+};
+
+class TimestampTest
+    : public ::libvpx_test::EncoderTest,
+      public ::libvpx_test::CodecTestWithParam<libvpx_test::TestMode> {
+ protected:
+  TimestampTest() : EncoderTest(GET_PARAM(0)) {}
+  virtual ~TimestampTest() {}
+
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(GET_PARAM(1));
+  }
+};
+
+// Tests encoding in millisecond timebase.
+TEST_P(TimestampTest, EncodeFrames) {
+  DummyTimebaseVideoSource video(1, 1000);
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+}
+
+TEST_P(TimestampTest, TestMicrosecondTimebase) {
+  // Set the timebase to microseconds.
+  DummyTimebaseVideoSource video(1, 1000000);
+  video.set_limit(1);
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+}
+
+TEST_P(TimestampTest, TestVpxRollover) {
+  DummyTimebaseVideoSource video(1, 1000);
+  video.set_starting_pts(922337170351ll);
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+}
+
+VP8_INSTANTIATE_TEST_CASE(TimestampTest,
+                          ::testing::Values(::libvpx_test::kTwoPassGood));
+VP9_INSTANTIATE_TEST_CASE(TimestampTest,
+                          ::testing::Values(::libvpx_test::kTwoPassGood));
+}  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/tools_common.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/tools_common.sh
old mode 100644
new mode 100755
index 0bdcc08..844a125
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/tools_common.sh
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/tools_common.sh
@@ -150,7 +150,7 @@
 # empty string. Caller is responsible for testing the string once the function
 # returns.
 vpx_tool_path() {
-  local readonly tool_name="$1"
+  local tool_name="$1"
   local tool_path="${LIBVPX_BIN_PATH}/${tool_name}${VPX_TEST_EXE_SUFFIX}"
   if [ ! -x "${tool_path}" ]; then
     # Try one directory up: when running via examples.sh the tool could be in
@@ -404,12 +404,16 @@
 VP9_FPM_WEBM_FILE="${LIBVPX_TEST_DATA_PATH}/vp90-2-07-frame_parallel-1.webm"
 VP9_LT_50_FRAMES_WEBM_FILE="${LIBVPX_TEST_DATA_PATH}/vp90-2-02-size-32x08.webm"
 
+VP9_RAW_FILE="${LIBVPX_TEST_DATA_PATH}/crbug-1539.rawfile"
+
 YUV_RAW_INPUT="${LIBVPX_TEST_DATA_PATH}/hantro_collage_w352h288.yuv"
 YUV_RAW_INPUT_WIDTH=352
 YUV_RAW_INPUT_HEIGHT=288
 
 Y4M_NOSQ_PAR_INPUT="${LIBVPX_TEST_DATA_PATH}/park_joy_90p_8_420_a10-1.y4m"
 Y4M_720P_INPUT="${LIBVPX_TEST_DATA_PATH}/niklas_1280_720_30.y4m"
+Y4M_720P_INPUT_WIDTH=1280
+Y4M_720P_INPUT_HEIGHT=720
 
 # Setup a trap function to clean up after tests complete.
 trap cleanup EXIT
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/twopass_encoder.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/twopass_encoder.sh
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/user_priv_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/user_priv_test.cc
index 2d03769..7bea76b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/user_priv_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/user_priv_test.cc
@@ -73,7 +73,7 @@
         CheckUserPrivateData(img->user_priv, &frame_num);
 
         // Also test ctrl_get_reference api.
-        struct vp9_ref_frame ref;
+        struct vp9_ref_frame ref = vp9_ref_frame();
         // Randomly fetch a reference frame.
         ref.idx = rnd.Rand8() % 3;
         decoder.Control(VP9_GET_REFERENCE, &ref);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/util.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/util.h
index 1f2540e..985f487 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/util.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/util.h
@@ -8,16 +8,18 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef TEST_UTIL_H_
-#define TEST_UTIL_H_
+#ifndef VPX_TEST_UTIL_H_
+#define VPX_TEST_UTIL_H_
 
 #include <stdio.h>
 #include <math.h>
+#include <tuple>
+
 #include "third_party/googletest/src/include/gtest/gtest.h"
 #include "vpx/vpx_image.h"
 
 // Macros
-#define GET_PARAM(k) std::tr1::get<k>(GetParam())
+#define GET_PARAM(k) std::get<k>(GetParam())
 
 inline double compute_psnr(const vpx_image_t *img1, const vpx_image_t *img2) {
   assert((img1->fmt == img2->fmt) && (img1->d_w == img2->d_w) &&
@@ -43,4 +45,4 @@
   return psnr;
 }
 
-#endif  // TEST_UTIL_H_
+#endif  // VPX_TEST_UTIL_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/variance_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/variance_test.cc
index 421024a..e9fa03c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/variance_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/variance_test.cc
@@ -20,24 +20,13 @@
 #include "test/register_state_check.h"
 #include "vpx/vpx_codec.h"
 #include "vpx/vpx_integer.h"
+#include "vpx_dsp/variance.h"
 #include "vpx_mem/vpx_mem.h"
 #include "vpx_ports/mem.h"
 #include "vpx_ports/vpx_timer.h"
 
 namespace {
 
-typedef unsigned int (*VarianceMxNFunc)(const uint8_t *a, int a_stride,
-                                        const uint8_t *b, int b_stride,
-                                        unsigned int *sse);
-typedef unsigned int (*SubpixVarMxNFunc)(const uint8_t *a, int a_stride,
-                                         int xoffset, int yoffset,
-                                         const uint8_t *b, int b_stride,
-                                         unsigned int *sse);
-typedef unsigned int (*SubpixAvgVarMxNFunc)(const uint8_t *a, int a_stride,
-                                            int xoffset, int yoffset,
-                                            const uint8_t *b, int b_stride,
-                                            uint32_t *sse,
-                                            const uint8_t *second_pred);
 typedef unsigned int (*Get4x4SseFunc)(const uint8_t *a, int a_stride,
                                       const uint8_t *b, int b_stride);
 typedef unsigned int (*SumOfSquaresFunction)(const int16_t *src);
@@ -572,15 +561,16 @@
     if (!use_high_bit_depth()) {
       src_ = reinterpret_cast<uint8_t *>(vpx_memalign(16, block_size()));
       sec_ = reinterpret_cast<uint8_t *>(vpx_memalign(16, block_size()));
-      ref_ = new uint8_t[block_size() + width() + height() + 1];
+      ref_ = reinterpret_cast<uint8_t *>(
+          vpx_malloc(block_size() + width() + height() + 1));
 #if CONFIG_VP9_HIGHBITDEPTH
     } else {
       src_ = CONVERT_TO_BYTEPTR(reinterpret_cast<uint16_t *>(
           vpx_memalign(16, block_size() * sizeof(uint16_t))));
       sec_ = CONVERT_TO_BYTEPTR(reinterpret_cast<uint16_t *>(
           vpx_memalign(16, block_size() * sizeof(uint16_t))));
-      ref_ = CONVERT_TO_BYTEPTR(
-          new uint16_t[block_size() + width() + height() + 1]);
+      ref_ = CONVERT_TO_BYTEPTR(reinterpret_cast<uint16_t *>(vpx_malloc(
+          (block_size() + width() + height() + 1) * sizeof(uint16_t))));
 #endif  // CONFIG_VP9_HIGHBITDEPTH
     }
     ASSERT_TRUE(src_ != NULL);
@@ -591,12 +581,12 @@
   virtual void TearDown() {
     if (!use_high_bit_depth()) {
       vpx_free(src_);
-      delete[] ref_;
       vpx_free(sec_);
+      vpx_free(ref_);
 #if CONFIG_VP9_HIGHBITDEPTH
     } else {
       vpx_free(CONVERT_TO_SHORTPTR(src_));
-      delete[] CONVERT_TO_SHORTPTR(ref_);
+      vpx_free(CONVERT_TO_SHORTPTR(ref_));
       vpx_free(CONVERT_TO_SHORTPTR(sec_));
 #endif  // CONFIG_VP9_HIGHBITDEPTH
     }
@@ -692,7 +682,7 @@
 }
 
 template <>
-void SubpelVarianceTest<SubpixAvgVarMxNFunc>::RefTest() {
+void SubpelVarianceTest<vpx_subp_avg_variance_fn_t>::RefTest() {
   for (int x = 0; x < 8; ++x) {
     for (int y = 0; y < 8; ++y) {
       if (!use_high_bit_depth()) {
@@ -728,10 +718,10 @@
 }
 
 typedef MainTestClass<Get4x4SseFunc> VpxSseTest;
-typedef MainTestClass<VarianceMxNFunc> VpxMseTest;
-typedef MainTestClass<VarianceMxNFunc> VpxVarianceTest;
-typedef SubpelVarianceTest<SubpixVarMxNFunc> VpxSubpelVarianceTest;
-typedef SubpelVarianceTest<SubpixAvgVarMxNFunc> VpxSubpelAvgVarianceTest;
+typedef MainTestClass<vpx_variance_fn_t> VpxMseTest;
+typedef MainTestClass<vpx_variance_fn_t> VpxVarianceTest;
+typedef SubpelVarianceTest<vpx_subpixvariance_fn_t> VpxSubpelVarianceTest;
+typedef SubpelVarianceTest<vpx_subp_avg_variance_fn_t> VpxSubpelAvgVarianceTest;
 
 TEST_P(VpxSseTest, RefSse) { RefTestSse(); }
 TEST_P(VpxSseTest, MaxSse) { MaxTestSse(); }
@@ -756,14 +746,14 @@
                         ::testing::Values(SseParams(2, 2,
                                                     &vpx_get4x4sse_cs_c)));
 
-typedef TestParams<VarianceMxNFunc> MseParams;
+typedef TestParams<vpx_variance_fn_t> MseParams;
 INSTANTIATE_TEST_CASE_P(C, VpxMseTest,
                         ::testing::Values(MseParams(4, 4, &vpx_mse16x16_c),
                                           MseParams(4, 3, &vpx_mse16x8_c),
                                           MseParams(3, 4, &vpx_mse8x16_c),
                                           MseParams(3, 3, &vpx_mse8x8_c)));
 
-typedef TestParams<VarianceMxNFunc> VarianceParams;
+typedef TestParams<vpx_variance_fn_t> VarianceParams;
 INSTANTIATE_TEST_CASE_P(
     C, VpxVarianceTest,
     ::testing::Values(VarianceParams(6, 6, &vpx_variance64x64_c),
@@ -780,7 +770,7 @@
                       VarianceParams(2, 3, &vpx_variance4x8_c),
                       VarianceParams(2, 2, &vpx_variance4x4_c)));
 
-typedef TestParams<SubpixVarMxNFunc> SubpelVarianceParams;
+typedef TestParams<vpx_subpixvariance_fn_t> SubpelVarianceParams;
 INSTANTIATE_TEST_CASE_P(
     C, VpxSubpelVarianceTest,
     ::testing::Values(
@@ -798,7 +788,7 @@
         SubpelVarianceParams(2, 3, &vpx_sub_pixel_variance4x8_c, 0),
         SubpelVarianceParams(2, 2, &vpx_sub_pixel_variance4x4_c, 0)));
 
-typedef TestParams<SubpixAvgVarMxNFunc> SubpelAvgVarianceParams;
+typedef TestParams<vpx_subp_avg_variance_fn_t> SubpelAvgVarianceParams;
 INSTANTIATE_TEST_CASE_P(
     C, VpxSubpelAvgVarianceTest,
     ::testing::Values(
@@ -817,10 +807,11 @@
         SubpelAvgVarianceParams(2, 2, &vpx_sub_pixel_avg_variance4x4_c, 0)));
 
 #if CONFIG_VP9_HIGHBITDEPTH
-typedef MainTestClass<VarianceMxNFunc> VpxHBDMseTest;
-typedef MainTestClass<VarianceMxNFunc> VpxHBDVarianceTest;
-typedef SubpelVarianceTest<SubpixVarMxNFunc> VpxHBDSubpelVarianceTest;
-typedef SubpelVarianceTest<SubpixAvgVarMxNFunc> VpxHBDSubpelAvgVarianceTest;
+typedef MainTestClass<vpx_variance_fn_t> VpxHBDMseTest;
+typedef MainTestClass<vpx_variance_fn_t> VpxHBDVarianceTest;
+typedef SubpelVarianceTest<vpx_subpixvariance_fn_t> VpxHBDSubpelVarianceTest;
+typedef SubpelVarianceTest<vpx_subp_avg_variance_fn_t>
+    VpxHBDSubpelAvgVarianceTest;
 
 TEST_P(VpxHBDMseTest, RefMse) { RefTestMse(); }
 TEST_P(VpxHBDMseTest, MaxMse) { MaxTestMse(); }
@@ -1384,15 +1375,19 @@
 
 #if HAVE_AVX2
 INSTANTIATE_TEST_CASE_P(AVX2, VpxMseTest,
-                        ::testing::Values(MseParams(4, 4, &vpx_mse16x16_avx2)));
+                        ::testing::Values(MseParams(4, 4, &vpx_mse16x16_avx2),
+                                          MseParams(4, 3, &vpx_mse16x8_avx2)));
 
 INSTANTIATE_TEST_CASE_P(
     AVX2, VpxVarianceTest,
     ::testing::Values(VarianceParams(6, 6, &vpx_variance64x64_avx2),
                       VarianceParams(6, 5, &vpx_variance64x32_avx2),
+                      VarianceParams(5, 6, &vpx_variance32x64_avx2),
                       VarianceParams(5, 5, &vpx_variance32x32_avx2),
                       VarianceParams(5, 4, &vpx_variance32x16_avx2),
-                      VarianceParams(4, 4, &vpx_variance16x16_avx2)));
+                      VarianceParams(4, 5, &vpx_variance16x32_avx2),
+                      VarianceParams(4, 4, &vpx_variance16x16_avx2),
+                      VarianceParams(4, 3, &vpx_variance16x8_avx2)));
 
 INSTANTIATE_TEST_CASE_P(
     AVX2, VpxSubpelVarianceTest,
@@ -1539,6 +1534,27 @@
 INSTANTIATE_TEST_CASE_P(VSX, VpxSseTest,
                         ::testing::Values(SseParams(2, 2,
                                                     &vpx_get4x4sse_cs_vsx)));
+INSTANTIATE_TEST_CASE_P(VSX, VpxMseTest,
+                        ::testing::Values(MseParams(4, 4, &vpx_mse16x16_vsx),
+                                          MseParams(4, 3, &vpx_mse16x8_vsx),
+                                          MseParams(3, 4, &vpx_mse8x16_vsx),
+                                          MseParams(3, 3, &vpx_mse8x8_vsx)));
+
+INSTANTIATE_TEST_CASE_P(
+    VSX, VpxVarianceTest,
+    ::testing::Values(VarianceParams(6, 6, &vpx_variance64x64_vsx),
+                      VarianceParams(6, 5, &vpx_variance64x32_vsx),
+                      VarianceParams(5, 6, &vpx_variance32x64_vsx),
+                      VarianceParams(5, 5, &vpx_variance32x32_vsx),
+                      VarianceParams(5, 4, &vpx_variance32x16_vsx),
+                      VarianceParams(4, 5, &vpx_variance16x32_vsx),
+                      VarianceParams(4, 4, &vpx_variance16x16_vsx),
+                      VarianceParams(4, 3, &vpx_variance16x8_vsx),
+                      VarianceParams(3, 4, &vpx_variance8x16_vsx),
+                      VarianceParams(3, 3, &vpx_variance8x8_vsx),
+                      VarianceParams(3, 2, &vpx_variance8x4_vsx),
+                      VarianceParams(2, 3, &vpx_variance4x8_vsx),
+                      VarianceParams(2, 2, &vpx_variance4x4_vsx)));
 #endif  // HAVE_VSX
 
 #if HAVE_MMI
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/video_source.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/video_source.h
index 54f6928..e9340f2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/video_source.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/video_source.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef TEST_VIDEO_SOURCE_H_
-#define TEST_VIDEO_SOURCE_H_
+#ifndef VPX_TEST_VIDEO_SOURCE_H_
+#define VPX_TEST_VIDEO_SOURCE_H_
 
 #if defined(_WIN32)
 #undef NOMINMAX
@@ -255,4 +255,4 @@
 
 }  // namespace libvpx_test
 
-#endif  // TEST_VIDEO_SOURCE_H_
+#endif  // VPX_TEST_VIDEO_SOURCE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp8_boolcoder_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp8_boolcoder_test.cc
index 9d81f93..c78b0b3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp8_boolcoder_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp8_boolcoder_test.cc
@@ -93,6 +93,9 @@
         }
 
         vp8_stop_encode(&bw);
+        // vp8dx_bool_decoder_fill() may read into uninitialized data that
+        // isn't used meaningfully, but may trigger an MSan warning.
+        memset(bw_buffer + bw.pos, 0, sizeof(VP8_BD_VALUE) - 1);
 
         BOOL_DECODER br;
         encrypt_buffer(bw_buffer, kBufferSize);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp8_datarate_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp8_datarate_test.cc
new file mode 100644
index 0000000..95a1157
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp8_datarate_test.cc
@@ -0,0 +1,416 @@
+/*
+ *  Copyright (c) 2012 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+#include "./vpx_config.h"
+#include "third_party/googletest/src/include/gtest/gtest.h"
+#include "test/codec_factory.h"
+#include "test/encode_test_driver.h"
+#include "test/i420_video_source.h"
+#include "test/util.h"
+#include "test/y4m_video_source.h"
+#include "vpx/vpx_codec.h"
+
+namespace {
+
+class DatarateTestLarge
+    : public ::libvpx_test::EncoderTest,
+      public ::libvpx_test::CodecTestWith2Params<libvpx_test::TestMode, int> {
+ public:
+  DatarateTestLarge() : EncoderTest(GET_PARAM(0)) {}
+
+  virtual ~DatarateTestLarge() {}
+
+ protected:
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(GET_PARAM(1));
+    set_cpu_used_ = GET_PARAM(2);
+    ResetModel();
+  }
+
+  virtual void ResetModel() {
+    last_pts_ = 0;
+    bits_in_buffer_model_ = cfg_.rc_target_bitrate * cfg_.rc_buf_initial_sz;
+    frame_number_ = 0;
+    first_drop_ = 0;
+    bits_total_ = 0;
+    duration_ = 0.0;
+    denoiser_offon_test_ = 0;
+    denoiser_offon_period_ = -1;
+    gf_boost_ = 0;
+    use_roi_ = false;
+  }
+
+  virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
+                                  ::libvpx_test::Encoder *encoder) {
+    if (video->frame() == 0) {
+      encoder->Control(VP8E_SET_NOISE_SENSITIVITY, denoiser_on_);
+      encoder->Control(VP8E_SET_CPUUSED, set_cpu_used_);
+      encoder->Control(VP8E_SET_GF_CBR_BOOST_PCT, gf_boost_);
+    }
+
+    if (use_roi_) {
+      encoder->Control(VP8E_SET_ROI_MAP, &roi_);
+    }
+
+    if (denoiser_offon_test_) {
+      ASSERT_GT(denoiser_offon_period_, 0)
+          << "denoiser_offon_period_ is not positive.";
+      if ((video->frame() + 1) % denoiser_offon_period_ == 0) {
+        // Flip denoiser_on_ periodically
+        denoiser_on_ ^= 1;
+      }
+      encoder->Control(VP8E_SET_NOISE_SENSITIVITY, denoiser_on_);
+    }
+
+    const vpx_rational_t tb = video->timebase();
+    timebase_ = static_cast<double>(tb.num) / tb.den;
+    duration_ = 0;
+  }
+
+  virtual void FramePktHook(const vpx_codec_cx_pkt_t *pkt) {
+    // Time since last timestamp = duration.
+    vpx_codec_pts_t duration = pkt->data.frame.pts - last_pts_;
+
+    // TODO(jimbankoski): Remove these lines when the issue:
+    // http://code.google.com/p/webm/issues/detail?id=496 is fixed.
+    // For now the codec assumes buffer starts at starting buffer rate
+    // plus one frame's time.
+    if (last_pts_ == 0) duration = 1;
+
+    // Add to the buffer the bits we'd expect from a constant bitrate server.
+    bits_in_buffer_model_ += static_cast<int64_t>(
+        duration * timebase_ * cfg_.rc_target_bitrate * 1000);
+
+    /* Test the buffer model here before subtracting the frame. Do so because
+     * the way the leaky bucket model works in libvpx is to allow the buffer to
+     * empty - and then stop showing frames until we've got enough bits to
+     * show one. As noted in comment below (issue 495), this does not currently
+     * apply to key frames. For now exclude key frames in condition below. */
+    const bool key_frame =
+        (pkt->data.frame.flags & VPX_FRAME_IS_KEY) ? true : false;
+    if (!key_frame) {
+      ASSERT_GE(bits_in_buffer_model_, 0)
+          << "Buffer Underrun at frame " << pkt->data.frame.pts;
+    }
+
+    const int64_t frame_size_in_bits = pkt->data.frame.sz * 8;
+
+    // Subtract from the buffer the bits associated with a played back frame.
+    bits_in_buffer_model_ -= frame_size_in_bits;
+
+    // Update the running total of bits for end of test datarate checks.
+    bits_total_ += frame_size_in_bits;
+
+    // If first drop not set and we have a drop set it to this time.
+    if (!first_drop_ && duration > 1) first_drop_ = last_pts_ + 1;
+
+    // Update the most recent pts.
+    last_pts_ = pkt->data.frame.pts;
+
+    // We update this so that we can calculate the datarate minus the last
+    // frame encoded in the file.
+    bits_in_last_frame_ = frame_size_in_bits;
+
+    ++frame_number_;
+  }
+
+  virtual void EndPassHook(void) {
+    if (bits_total_) {
+      const double file_size_in_kb = bits_total_ / 1000.;  // bits per kilobit
+
+      duration_ = (last_pts_ + 1) * timebase_;
+
+      // Effective file datarate includes the time spent prebuffering.
+      effective_datarate_ = (bits_total_ - bits_in_last_frame_) / 1000.0 /
+                            (cfg_.rc_buf_initial_sz / 1000.0 + duration_);
+
+      file_datarate_ = file_size_in_kb / duration_;
+    }
+  }
+
+  virtual void DenoiserLevelsTest() {
+    cfg_.rc_buf_initial_sz = 500;
+    cfg_.rc_dropframe_thresh = 1;
+    cfg_.rc_max_quantizer = 56;
+    cfg_.rc_end_usage = VPX_CBR;
+    ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352,
+                                         288, 30, 1, 0, 140);
+    for (int j = 1; j < 5; ++j) {
+      // Run over the denoiser levels.
+      // For the temporal denoiser (#if CONFIG_TEMPORAL_DENOISING) the level j
+      // refers to the 4 denoiser modes: denoiserYonly, denoiserOnYUV,
+      // denoiserOnAggressive, and denoiserOnAdaptive.
+      denoiser_on_ = j;
+      cfg_.rc_target_bitrate = 300;
+      ResetModel();
+      ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+      ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
+          << " The datarate for the file exceeds the target!";
+
+      ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
+          << " The datarate for the file missed the target!";
+    }
+  }
+
+  virtual void DenoiserOffOnTest() {
+    cfg_.rc_buf_initial_sz = 500;
+    cfg_.rc_dropframe_thresh = 1;
+    cfg_.rc_max_quantizer = 56;
+    cfg_.rc_end_usage = VPX_CBR;
+    ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352,
+                                         288, 30, 1, 0, 299);
+    cfg_.rc_target_bitrate = 300;
+    ResetModel();
+    // The denoiser is off by default.
+    denoiser_on_ = 0;
+    // Set the off/on test flag.
+    denoiser_offon_test_ = 1;
+    denoiser_offon_period_ = 100;
+    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+    ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
+        << " The datarate for the file exceeds the target!";
+    ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
+        << " The datarate for the file missed the target!";
+  }
+
+  virtual void BasicBufferModelTest() {
+    denoiser_on_ = 0;
+    cfg_.rc_buf_initial_sz = 500;
+    cfg_.rc_dropframe_thresh = 1;
+    cfg_.rc_max_quantizer = 56;
+    cfg_.rc_end_usage = VPX_CBR;
+    // Two-pass CBR datarate control has a bug hidden by the small number of
+    // frames selected in this encode. The problem is that even if the buffer
+    // is negative we produce a keyframe on a cutscene, ignoring datarate
+    // constraints.
+    // TODO(jimbankoski): Fix when issue
+    // http://code.google.com/p/webm/issues/detail?id=495 is addressed.
+    ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352,
+                                         288, 30, 1, 0, 140);
+
+    // There is an issue for low bitrates in real-time mode, where the
+    // effective_datarate slightly overshoots the target bitrate.
+    // This is the same issue as noted above (#495).
+    // TODO(jimbankoski/marpan): Update test to run for lower bitrates (< 100),
+    // when the issue is resolved.
+    for (int i = 100; i < 800; i += 200) {
+      cfg_.rc_target_bitrate = i;
+      ResetModel();
+      ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+      ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
+          << " The datarate for the file exceeds the target!";
+      ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
+          << " The datarate for the file missed the target!";
+    }
+  }
+
+  virtual void ChangingDropFrameThreshTest() {
+    denoiser_on_ = 0;
+    cfg_.rc_buf_initial_sz = 500;
+    cfg_.rc_max_quantizer = 36;
+    cfg_.rc_end_usage = VPX_CBR;
+    cfg_.rc_target_bitrate = 200;
+    cfg_.kf_mode = VPX_KF_DISABLED;
+
+    const int frame_count = 40;
+    ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352,
+                                         288, 30, 1, 0, frame_count);
+
+    // Here we check that the first dropped frame gets earlier and earlier
+    // as the drop frame threshold is increased.
+
+    const int kDropFrameThreshTestStep = 30;
+    vpx_codec_pts_t last_drop = frame_count;
+    for (int i = 1; i < 91; i += kDropFrameThreshTestStep) {
+      cfg_.rc_dropframe_thresh = i;
+      ResetModel();
+      ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+      ASSERT_LE(first_drop_, last_drop)
+          << " The first dropped frame for drop_thresh " << i
+          << " > first dropped frame for drop_thresh "
+          << i - kDropFrameThreshTestStep;
+      last_drop = first_drop_;
+    }
+  }
+
+  virtual void DropFramesMultiThreadsTest() {
+    denoiser_on_ = 0;
+    cfg_.rc_buf_initial_sz = 500;
+    cfg_.rc_dropframe_thresh = 30;
+    cfg_.rc_max_quantizer = 56;
+    cfg_.rc_end_usage = VPX_CBR;
+    cfg_.g_threads = 2;
+
+    ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352,
+                                         288, 30, 1, 0, 140);
+    cfg_.rc_target_bitrate = 200;
+    ResetModel();
+    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+    ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
+        << " The datarate for the file exceeds the target!";
+
+    ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
+        << " The datarate for the file missed the target!";
+  }
+
+  vpx_codec_pts_t last_pts_;
+  int64_t bits_in_buffer_model_;
+  double timebase_;
+  int frame_number_;
+  vpx_codec_pts_t first_drop_;
+  int64_t bits_total_;
+  double duration_;
+  double file_datarate_;
+  double effective_datarate_;
+  int64_t bits_in_last_frame_;
+  int denoiser_on_;
+  int denoiser_offon_test_;
+  int denoiser_offon_period_;
+  int set_cpu_used_;
+  int gf_boost_;
+  bool use_roi_;
+  vpx_roi_map_t roi_;
+};
+
+#if CONFIG_TEMPORAL_DENOISING
+// Check basic datarate targeting, for a single bitrate, but loop over the
+// various denoiser settings.
+TEST_P(DatarateTestLarge, DenoiserLevels) { DenoiserLevelsTest(); }
+
+// Check basic datarate targeting, for a single bitrate, when denoiser is off
+// and on.
+TEST_P(DatarateTestLarge, DenoiserOffOn) { DenoiserOffOnTest(); }
+#endif  // CONFIG_TEMPORAL_DENOISING
+
+TEST_P(DatarateTestLarge, BasicBufferModel) { BasicBufferModelTest(); }
+
+TEST_P(DatarateTestLarge, ChangingDropFrameThresh) {
+  ChangingDropFrameThreshTest();
+}
+
+TEST_P(DatarateTestLarge, DropFramesMultiThreads) {
+  DropFramesMultiThreadsTest();
+}
+
+class DatarateTestRealTime : public DatarateTestLarge {
+ public:
+  virtual ~DatarateTestRealTime() {}
+};
+
+#if CONFIG_TEMPORAL_DENOISING
+// Check basic datarate targeting, for a single bitrate, but loop over the
+// various denoiser settings.
+TEST_P(DatarateTestRealTime, DenoiserLevels) { DenoiserLevelsTest(); }
+
+// Check basic datarate targeting, for a single bitrate, when denoiser is off
+// and on.
+TEST_P(DatarateTestRealTime, DenoiserOffOn) { DenoiserOffOnTest(); }
+#endif  // CONFIG_TEMPORAL_DENOISING
+
+TEST_P(DatarateTestRealTime, BasicBufferModel) { BasicBufferModelTest(); }
+
+TEST_P(DatarateTestRealTime, ChangingDropFrameThresh) {
+  ChangingDropFrameThreshTest();
+}
+
+TEST_P(DatarateTestRealTime, DropFramesMultiThreads) {
+  DropFramesMultiThreadsTest();
+}
+
+TEST_P(DatarateTestRealTime, RegionOfInterest) {
+  denoiser_on_ = 0;
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_dropframe_thresh = 0;
+  cfg_.rc_max_quantizer = 56;
+  cfg_.rc_end_usage = VPX_CBR;
+  // Encode using multiple threads.
+  cfg_.g_threads = 2;
+
+  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
+                                       30, 1, 0, 300);
+  cfg_.rc_target_bitrate = 450;
+  cfg_.g_w = 352;
+  cfg_.g_h = 288;
+
+  ResetModel();
+
+  // Set ROI parameters
+  use_roi_ = true;
+  memset(&roi_, 0, sizeof(roi_));
+
+  roi_.rows = (cfg_.g_h + 15) / 16;
+  roi_.cols = (cfg_.g_w + 15) / 16;
+
+  roi_.delta_q[0] = 0;
+  roi_.delta_q[1] = -20;
+  roi_.delta_q[2] = 0;
+  roi_.delta_q[3] = 0;
+
+  roi_.delta_lf[0] = 0;
+  roi_.delta_lf[1] = -20;
+  roi_.delta_lf[2] = 0;
+  roi_.delta_lf[3] = 0;
+
+  roi_.static_threshold[0] = 0;
+  roi_.static_threshold[1] = 1000;
+  roi_.static_threshold[2] = 0;
+  roi_.static_threshold[3] = 0;
+
+  // Use 2 states: 1 is center square, 0 is the rest.
+  roi_.roi_map =
+      (uint8_t *)calloc(roi_.rows * roi_.cols, sizeof(*roi_.roi_map));
+  for (unsigned int i = 0; i < roi_.rows; ++i) {
+    for (unsigned int j = 0; j < roi_.cols; ++j) {
+      if (i > (roi_.rows >> 2) && i < ((roi_.rows * 3) >> 2) &&
+          j > (roi_.cols >> 2) && j < ((roi_.cols * 3) >> 2)) {
+        roi_.roi_map[i * roi_.cols + j] = 1;
+      }
+    }
+  }
+
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
+      << " The datarate for the file exceeds the target!";
+
+  ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
+      << " The datarate for the file missed the target!";
+
+  free(roi_.roi_map);
+}
+
+TEST_P(DatarateTestRealTime, GFBoost) {
+  denoiser_on_ = 0;
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_dropframe_thresh = 0;
+  cfg_.rc_max_quantizer = 56;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_error_resilient = 0;
+
+  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
+                                       30, 1, 0, 300);
+  cfg_.rc_target_bitrate = 300;
+  ResetModel();
+  // Apply a gf boost.
+  gf_boost_ = 50;
+
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_ * 0.95)
+      << " The datarate for the file exceeds the target!";
+
+  ASSERT_LE(cfg_.rc_target_bitrate, file_datarate_ * 1.4)
+      << " The datarate for the file missed the target!";
+}
+
+VP8_INSTANTIATE_TEST_CASE(DatarateTestLarge, ALL_TEST_MODES,
+                          ::testing::Values(0));
+VP8_INSTANTIATE_TEST_CASE(DatarateTestRealTime,
+                          ::testing::Values(::libvpx_test::kRealTime),
+                          ::testing::Values(-6, -12));
+}  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp8_fragments_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp8_fragments_test.cc
index ac967d1..6e5baf2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp8_fragments_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp8_fragments_test.cc
@@ -13,11 +13,11 @@
 
 namespace {
 
-class VP8FramgmentsTest : public ::libvpx_test::EncoderTest,
-                          public ::testing::Test {
+class VP8FragmentsTest : public ::libvpx_test::EncoderTest,
+                         public ::testing::Test {
  protected:
-  VP8FramgmentsTest() : EncoderTest(&::libvpx_test::kVP8) {}
-  virtual ~VP8FramgmentsTest() {}
+  VP8FragmentsTest() : EncoderTest(&::libvpx_test::kVP8) {}
+  virtual ~VP8FragmentsTest() {}
 
   virtual void SetUp() {
     const unsigned long init_flags =  // NOLINT(runtime/int)
@@ -28,7 +28,7 @@
   }
 };
 
-TEST_F(VP8FramgmentsTest, TestFragmentsEncodeDecode) {
+TEST_F(VP8FragmentsTest, TestFragmentsEncodeDecode) {
   ::libvpx_test::RandomVideoSource video;
   ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp8_multi_resolution_encoder.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp8_multi_resolution_encoder.sh
old mode 100644
new mode 100755
index a8b7fe7..bd45b53
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp8_multi_resolution_encoder.sh
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp8_multi_resolution_encoder.sh
@@ -22,7 +22,7 @@
       elog "Libvpx test data must exist in LIBVPX_TEST_DATA_PATH."
       return 1
     fi
-    local readonly app="vp8_multi_resolution_encoder"
+    local app="vp8_multi_resolution_encoder"
     if [ -z "$(vpx_tool_path "${app}")" ]; then
       elog "${app} not found. It must exist in LIBVPX_BIN_PATH or its parent."
       return 1
@@ -33,7 +33,7 @@
 # Runs vp8_multi_resolution_encoder. Simply forwards all arguments to
 # vp8_multi_resolution_encoder after building path to the executable.
 vp8_mre() {
-  local readonly encoder="$(vpx_tool_path vp8_multi_resolution_encoder)"
+  local encoder="$(vpx_tool_path vp8_multi_resolution_encoder)"
   if [ ! -x "${encoder}" ]; then
     elog "${encoder} does not exist or is not executable."
     return 1
@@ -43,22 +43,34 @@
 }
 
 vp8_multi_resolution_encoder_three_formats() {
-  local readonly output_files="${VPX_TEST_OUTPUT_DIR}/vp8_mre_0.ivf
-                               ${VPX_TEST_OUTPUT_DIR}/vp8_mre_1.ivf
-                               ${VPX_TEST_OUTPUT_DIR}/vp8_mre_2.ivf"
+  local output_files="${VPX_TEST_OUTPUT_DIR}/vp8_mre_0.ivf
+                      ${VPX_TEST_OUTPUT_DIR}/vp8_mre_1.ivf
+                      ${VPX_TEST_OUTPUT_DIR}/vp8_mre_2.ivf"
+  local layer_bitrates="150 80 50"
+  local keyframe_insert="200"
+  local temporal_layers="3 3 3"
+  local framerate="30"
 
   if [ "$(vpx_config_option_enabled CONFIG_MULTI_RES_ENCODING)" = "yes" ]; then
     if [ "$(vp8_encode_available)" = "yes" ]; then
       # Param order:
       #  Input width
       #  Input height
+      #  Framerate
       #  Input file path
       #  Output file names
+      #  Layer bitrates
+      #  Temporal layers
+      #  Keyframe insert
       #  Output PSNR
       vp8_mre "${YUV_RAW_INPUT_WIDTH}" \
         "${YUV_RAW_INPUT_HEIGHT}" \
+        "${framerate}" \
         "${YUV_RAW_INPUT}" \
         ${output_files} \
+        ${layer_bitrates} \
+        ${temporal_layers} \
+        "${keyframe_insert}" \
         0
 
       for output_file in ${output_files}; do
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_arf_freq_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_arf_freq_test.cc
index 48a4ca7..9a3455b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_arf_freq_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_arf_freq_test.cc
@@ -8,6 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include <memory>
+
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
 #include "test/codec_factory.h"
@@ -190,7 +192,7 @@
   init_flags_ = VPX_CODEC_USE_PSNR;
   if (cfg_.g_bit_depth > 8) init_flags_ |= VPX_CODEC_USE_HIGHBITDEPTH;
 
-  testing::internal::scoped_ptr<libvpx_test::VideoSource> video;
+  std::unique_ptr<libvpx_test::VideoSource> video;
   if (is_extension_y4m(test_video_param_.filename)) {
     video.reset(new libvpx_test::Y4mVideoSource(test_video_param_.filename, 0,
                                                 kFrames));
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_block_error_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_block_error_test.cc
index 0b4d1df..71a0686 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_block_error_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_block_error_test.cc
@@ -11,6 +11,7 @@
 #include <cmath>
 #include <cstdlib>
 #include <string>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
@@ -35,7 +36,7 @@
                                      intptr_t block_size, int64_t *ssz,
                                      int bps);
 
-typedef std::tr1::tuple<HBDBlockErrorFunc, HBDBlockErrorFunc, vpx_bit_depth_t>
+typedef std::tuple<HBDBlockErrorFunc, HBDBlockErrorFunc, vpx_bit_depth_t>
     BlockErrorParam;
 
 typedef int64_t (*BlockErrorFunc)(const tran_low_t *coeff,
@@ -168,7 +169,7 @@
       << "First failed at test case " << first_failure;
 }
 
-using std::tr1::make_tuple;
+using std::make_tuple;
 
 #if HAVE_SSE2
 const BlockErrorParam sse2_block_error_tests[] = {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_boolcoder_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_boolcoder_test.cc
index 5dbfd5c..0cafa67 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_boolcoder_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_boolcoder_test.cc
@@ -66,6 +66,9 @@
         }
 
         vpx_stop_encode(&bw);
+        // vpx_reader_fill() may read into uninitialized data that
+        // isn't used meaningfully, but may trigger an MSan warning.
+        memset(bw_buffer + bw.pos, 0, sizeof(BD_VALUE) - 1);
 
         // First bit should be zero
         GTEST_ASSERT_EQ(bw_buffer[0] & 0x80, 0);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_datarate_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_datarate_test.cc
new file mode 100644
index 0000000..f9f2aaa
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_datarate_test.cc
@@ -0,0 +1,957 @@
+/*
+ *  Copyright (c) 2012 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+#include "./vpx_config.h"
+#include "third_party/googletest/src/include/gtest/gtest.h"
+#include "test/codec_factory.h"
+#include "test/encode_test_driver.h"
+#include "test/i420_video_source.h"
+#include "test/util.h"
+#include "test/y4m_video_source.h"
+#include "vpx/vpx_codec.h"
+#include "vpx_ports/bitops.h"
+
+namespace {
+
+class DatarateTestVP9 : public ::libvpx_test::EncoderTest {
+ public:
+  explicit DatarateTestVP9(const ::libvpx_test::CodecFactory *codec)
+      : EncoderTest(codec) {
+    tune_content_ = 0;
+  }
+
+ protected:
+  virtual ~DatarateTestVP9() {}
+
+  virtual void ResetModel() {
+    last_pts_ = 0;
+    bits_in_buffer_model_ = cfg_.rc_target_bitrate * cfg_.rc_buf_initial_sz;
+    frame_number_ = 0;
+    tot_frame_number_ = 0;
+    first_drop_ = 0;
+    num_drops_ = 0;
+    aq_mode_ = 3;
+    // Denoiser is off by default.
+    denoiser_on_ = 0;
+    // For testing up to 3 layers.
+    for (int i = 0; i < 3; ++i) {
+      bits_total_[i] = 0;
+    }
+    denoiser_offon_test_ = 0;
+    denoiser_offon_period_ = -1;
+    frame_parallel_decoding_mode_ = 1;
+    delta_q_uv_ = 0;
+    use_roi_ = false;
+  }
+
+  //
+  // Frame flags and layer id for temporal layers.
+  //
+
+  // For two layers, test pattern is:
+  //   1     3
+  // 0    2     .....
+  // For three layers, test pattern is:
+  //   1      3    5      7
+  //      2           6
+  // 0          4            ....
+  // LAST is always updated on base/layer 0, GOLDEN is updated on layer 1.
+  // For this 3 layer example, the 2nd enhancement layer (layer 2) updates
+  // the altref frame.
+  static int GetFrameFlags(int frame_num, int num_temp_layers) {
+    int frame_flags = 0;
+    if (num_temp_layers == 2) {
+      if (frame_num % 2 == 0) {
+        // Layer 0: predict from L and ARF, update L.
+        frame_flags =
+            VP8_EFLAG_NO_REF_GF | VP8_EFLAG_NO_UPD_GF | VP8_EFLAG_NO_UPD_ARF;
+      } else {
+        // Layer 1: predict from L, G and ARF, and update G.
+        frame_flags = VP8_EFLAG_NO_UPD_ARF | VP8_EFLAG_NO_UPD_LAST |
+                      VP8_EFLAG_NO_UPD_ENTROPY;
+      }
+    } else if (num_temp_layers == 3) {
+      if (frame_num % 4 == 0) {
+        // Layer 0: predict from L and ARF; update L.
+        frame_flags =
+            VP8_EFLAG_NO_UPD_GF | VP8_EFLAG_NO_UPD_ARF | VP8_EFLAG_NO_REF_GF;
+      } else if ((frame_num - 2) % 4 == 0) {
+        // Layer 1: predict from L, G, ARF; update G.
+        frame_flags = VP8_EFLAG_NO_UPD_ARF | VP8_EFLAG_NO_UPD_LAST;
+      } else if ((frame_num - 1) % 2 == 0) {
+        // Layer 2: predict from L, G, ARF; update ARF.
+        frame_flags = VP8_EFLAG_NO_UPD_GF | VP8_EFLAG_NO_UPD_LAST;
+      }
+    }
+    return frame_flags;
+  }
+
+  static int SetLayerId(int frame_num, int num_temp_layers) {
+    int layer_id = 0;
+    if (num_temp_layers == 2) {
+      if (frame_num % 2 == 0) {
+        layer_id = 0;
+      } else {
+        layer_id = 1;
+      }
+    } else if (num_temp_layers == 3) {
+      if (frame_num % 4 == 0) {
+        layer_id = 0;
+      } else if ((frame_num - 2) % 4 == 0) {
+        layer_id = 1;
+      } else if ((frame_num - 1) % 2 == 0) {
+        layer_id = 2;
+      }
+    }
+    return layer_id;
+  }
+
+  virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
+                                  ::libvpx_test::Encoder *encoder) {
+    if (video->frame() == 0) {
+      encoder->Control(VP8E_SET_CPUUSED, set_cpu_used_);
+      encoder->Control(VP9E_SET_AQ_MODE, aq_mode_);
+      encoder->Control(VP9E_SET_TUNE_CONTENT, tune_content_);
+    }
+
+    if (denoiser_offon_test_) {
+      ASSERT_GT(denoiser_offon_period_, 0)
+          << "denoiser_offon_period_ is not positive.";
+      if ((video->frame() + 1) % denoiser_offon_period_ == 0) {
+        // Flip denoiser_on_ periodically
+        denoiser_on_ ^= 1;
+      }
+    }
+
+    encoder->Control(VP9E_SET_NOISE_SENSITIVITY, denoiser_on_);
+    encoder->Control(VP9E_SET_TILE_COLUMNS, get_msb(cfg_.g_threads));
+    encoder->Control(VP9E_SET_FRAME_PARALLEL_DECODING,
+                     frame_parallel_decoding_mode_);
+
+    if (use_roi_) {
+      encoder->Control(VP9E_SET_ROI_MAP, &roi_);
+      encoder->Control(VP9E_SET_AQ_MODE, 0);
+    }
+
+    if (delta_q_uv_ != 0) {
+      encoder->Control(VP9E_SET_DELTA_Q_UV, delta_q_uv_);
+    }
+
+    if (cfg_.ts_number_layers > 1) {
+      if (video->frame() == 0) {
+        encoder->Control(VP9E_SET_SVC, 1);
+      }
+      vpx_svc_layer_id_t layer_id;
+      layer_id.spatial_layer_id = 0;
+      frame_flags_ = GetFrameFlags(video->frame(), cfg_.ts_number_layers);
+      layer_id.temporal_layer_id =
+          SetLayerId(video->frame(), cfg_.ts_number_layers);
+      layer_id.temporal_layer_id_per_spatial[0] =
+          SetLayerId(video->frame(), cfg_.ts_number_layers);
+      encoder->Control(VP9E_SET_SVC_LAYER_ID, &layer_id);
+    }
+    const vpx_rational_t tb = video->timebase();
+    timebase_ = static_cast<double>(tb.num) / tb.den;
+    duration_ = 0;
+  }
+
+  virtual void FramePktHook(const vpx_codec_cx_pkt_t *pkt) {
+    // Time since last timestamp = duration.
+    vpx_codec_pts_t duration = pkt->data.frame.pts - last_pts_;
+
+    if (duration > 1) {
+      // If first drop not set and we have a drop set it to this time.
+      if (!first_drop_) first_drop_ = last_pts_ + 1;
+      // Update the number of frame drops.
+      num_drops_ += static_cast<int>(duration - 1);
+      // Update counter for total number of frames (#frames input to encoder).
+      // Needed for setting the proper layer_id below.
+      tot_frame_number_ += static_cast<int>(duration - 1);
+    }
+
+    int layer = SetLayerId(tot_frame_number_, cfg_.ts_number_layers);
+
+    // Add to the buffer the bits we'd expect from a constant bitrate server.
+    bits_in_buffer_model_ += static_cast<int64_t>(
+        duration * timebase_ * cfg_.rc_target_bitrate * 1000);
+
+    // Buffer should not go negative.
+    ASSERT_GE(bits_in_buffer_model_, 0)
+        << "Buffer Underrun at frame " << pkt->data.frame.pts;
+
+    const size_t frame_size_in_bits = pkt->data.frame.sz * 8;
+
+    // Update the total encoded bits. For temporal layers, update the cumulative
+    // encoded bits per layer.
+    for (int i = layer; i < static_cast<int>(cfg_.ts_number_layers); ++i) {
+      bits_total_[i] += frame_size_in_bits;
+    }
+
+    // Update the most recent pts.
+    last_pts_ = pkt->data.frame.pts;
+    ++frame_number_;
+    ++tot_frame_number_;
+  }
+
+  virtual void EndPassHook(void) {
+    for (int layer = 0; layer < static_cast<int>(cfg_.ts_number_layers);
+         ++layer) {
+      duration_ = (last_pts_ + 1) * timebase_;
+      if (bits_total_[layer]) {
+        // Effective file datarate:
+        effective_datarate_[layer] = (bits_total_[layer] / 1000.0) / duration_;
+      }
+    }
+  }
+
+  vpx_codec_pts_t last_pts_;
+  double timebase_;
+  int tune_content_;
+  int frame_number_;      // Counter for number of non-dropped/encoded frames.
+  int tot_frame_number_;  // Counter for total number of input frames.
+  int64_t bits_total_[3];
+  double duration_;
+  double effective_datarate_[3];
+  int set_cpu_used_;
+  int64_t bits_in_buffer_model_;
+  vpx_codec_pts_t first_drop_;
+  int num_drops_;
+  int aq_mode_;
+  int denoiser_on_;
+  int denoiser_offon_test_;
+  int denoiser_offon_period_;
+  int frame_parallel_decoding_mode_;
+  int delta_q_uv_;
+  bool use_roi_;
+  vpx_roi_map_t roi_;
+};
+
+// Params: test mode, speed setting and index for bitrate array.
+class DatarateTestVP9RealTimeMultiBR
+    : public DatarateTestVP9,
+      public ::libvpx_test::CodecTestWith2Params<int, int> {
+ public:
+  DatarateTestVP9RealTimeMultiBR() : DatarateTestVP9(GET_PARAM(0)) {}
+
+ protected:
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(::libvpx_test::kRealTime);
+    set_cpu_used_ = GET_PARAM(1);
+    ResetModel();
+  }
+};
+
+// Params: speed setting and index for bitrate array.
+class DatarateTestVP9LargeVBR
+    : public DatarateTestVP9,
+      public ::libvpx_test::CodecTestWith2Params<int, int> {
+ public:
+  DatarateTestVP9LargeVBR() : DatarateTestVP9(GET_PARAM(0)) {}
+
+ protected:
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(::libvpx_test::kRealTime);
+    set_cpu_used_ = GET_PARAM(1);
+    ResetModel();
+  }
+};
+
+// Check basic rate targeting for VBR mode with 0 lag.
+TEST_P(DatarateTestVP9LargeVBR, BasicRateTargetingVBRLagZero) {
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_error_resilient = 0;
+  cfg_.rc_end_usage = VPX_VBR;
+  cfg_.g_lag_in_frames = 0;
+
+  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
+                                       30, 1, 0, 300);
+
+  const int bitrates[2] = { 400, 800 };
+  const int bitrate_index = GET_PARAM(2);
+  cfg_.rc_target_bitrate = bitrates[bitrate_index];
+  ResetModel();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.75)
+      << " The datarate for the file is lower than target by too much!";
+  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.36)
+      << " The datarate for the file is greater than target by too much!";
+}
+
+// Check basic rate targeting for VBR mode with non-zero lag.
+TEST_P(DatarateTestVP9LargeVBR, BasicRateTargetingVBRLagNonZero) {
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_error_resilient = 0;
+  cfg_.rc_end_usage = VPX_VBR;
+  // For non-zero lag, rate control will work (be within bounds) for
+  // real-time mode.
+  if (deadline_ == VPX_DL_REALTIME) {
+    cfg_.g_lag_in_frames = 15;
+  } else {
+    cfg_.g_lag_in_frames = 0;
+  }
+
+  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
+                                       30, 1, 0, 300);
+  const int bitrates[2] = { 400, 800 };
+  const int bitrate_index = GET_PARAM(2);
+  cfg_.rc_target_bitrate = bitrates[bitrate_index];
+  ResetModel();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.75)
+      << " The datarate for the file is lower than target by too much!";
+  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.35)
+      << " The datarate for the file is greater than target by too much!";
+}
+
+// Check basic rate targeting for VBR mode with non-zero lag, with
+// frame_parallel_decoding_mode off. This enables the adapt_coeff/mode/mv probs
+// since error_resilience is off.
+TEST_P(DatarateTestVP9LargeVBR, BasicRateTargetingVBRLagNonZeroFrameParDecOff) {
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.g_error_resilient = 0;
+  cfg_.rc_end_usage = VPX_VBR;
+  // For non-zero lag, rate control will work (be within bounds) for
+  // real-time mode.
+  if (deadline_ == VPX_DL_REALTIME) {
+    cfg_.g_lag_in_frames = 15;
+  } else {
+    cfg_.g_lag_in_frames = 0;
+  }
+
+  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
+                                       30, 1, 0, 300);
+  const int bitrates[2] = { 400, 800 };
+  const int bitrate_index = GET_PARAM(2);
+  cfg_.rc_target_bitrate = bitrates[bitrate_index];
+  ResetModel();
+  frame_parallel_decoding_mode_ = 0;
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.75)
+      << " The datarate for the file is lower than target by too much!";
+  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.35)
+      << " The datarate for the file is greater than target by too much!";
+}
+
+// Check basic rate targeting for CBR mode.
+TEST_P(DatarateTestVP9RealTimeMultiBR, BasicRateTargeting) {
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_dropframe_thresh = 1;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  const int bitrates[4] = { 150, 350, 550, 750 };
+  const int bitrate_index = GET_PARAM(2);
+  cfg_.rc_target_bitrate = bitrates[bitrate_index];
+  ResetModel();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
+      << " The datarate for the file is lower than target by too much!";
+  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.15)
+      << " The datarate for the file is greater than target by too much!";
+}
+
+// Check basic rate targeting for CBR mode, with frame_parallel_decoding_mode
+// off (and error_resilience off).
+TEST_P(DatarateTestVP9RealTimeMultiBR, BasicRateTargetingFrameParDecOff) {
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_dropframe_thresh = 1;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+  cfg_.g_error_resilient = 0;
+
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  const int bitrates[4] = { 150, 350, 550, 750 };
+  const int bitrate_index = GET_PARAM(2);
+  cfg_.rc_target_bitrate = bitrates[bitrate_index];
+  ResetModel();
+  frame_parallel_decoding_mode_ = 0;
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
+      << " The datarate for the file is lower than target by too much!";
+  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.15)
+      << " The datarate for the file is greater than target by too much!";
+}
+
+// Check basic rate targeting for CBR.
+TEST_P(DatarateTestVP9RealTimeMultiBR, BasicRateTargeting444) {
+  ::libvpx_test::Y4mVideoSource video("rush_hour_444.y4m", 0, 140);
+
+  cfg_.g_profile = 1;
+  cfg_.g_timebase = video.timebase();
+
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_dropframe_thresh = 1;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.rc_end_usage = VPX_CBR;
+  const int bitrates[4] = { 250, 450, 650, 850 };
+  const int bitrate_index = GET_PARAM(2);
+  cfg_.rc_target_bitrate = bitrates[bitrate_index];
+  ResetModel();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(static_cast<double>(cfg_.rc_target_bitrate),
+            effective_datarate_[0] * 0.80)
+      << " The datarate for the file exceeds the target by too much!";
+  ASSERT_LE(static_cast<double>(cfg_.rc_target_bitrate),
+            effective_datarate_[0] * 1.15)
+      << " The datarate for the file missed the target! "
+      << cfg_.rc_target_bitrate << " " << effective_datarate_[0];
+}
+
+// Check that (1) the first dropped frame gets earlier and earlier
+// as the drop frame threshold is increased, and (2) that the total number of
+// frame drops does not decrease as we increase frame drop threshold.
+// Use a lower qp-max to force some frame drops.
+TEST_P(DatarateTestVP9RealTimeMultiBR, ChangingDropFrameThresh) {
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_undershoot_pct = 20;
+  cfg_.rc_dropframe_thresh = 10;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 50;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.rc_target_bitrate = 200;
+  cfg_.g_lag_in_frames = 0;
+  // TODO(marpan): Investigate datarate target failures with a smaller keyframe
+  // interval (128).
+  cfg_.kf_max_dist = 9999;
+
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+
+  const int kDropFrameThreshTestStep = 30;
+  const int bitrates[2] = { 50, 150 };
+  const int bitrate_index = GET_PARAM(2);
+  if (bitrate_index > 1) return;
+  cfg_.rc_target_bitrate = bitrates[bitrate_index];
+  vpx_codec_pts_t last_drop = 140;
+  int last_num_drops = 0;
+  for (int i = 10; i < 100; i += kDropFrameThreshTestStep) {
+    cfg_.rc_dropframe_thresh = i;
+    ResetModel();
+    ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+    ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
+        << " The datarate for the file is lower than target by too much!";
+    ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.25)
+        << " The datarate for the file is greater than target by too much!";
+    ASSERT_LE(first_drop_, last_drop)
+        << " The first dropped frame for drop_thresh " << i
+        << " > first dropped frame for drop_thresh "
+        << i - kDropFrameThreshTestStep;
+    ASSERT_GE(num_drops_, last_num_drops * 0.85)
+        << " The number of dropped frames for drop_thresh " << i
+        << " < number of dropped frames for drop_thresh "
+        << i - kDropFrameThreshTestStep;
+    last_drop = first_drop_;
+    last_num_drops = num_drops_;
+  }
+}
+
+// Check basic rate targeting for 2 temporal layers.
+TEST_P(DatarateTestVP9RealTimeMultiBR, BasicRateTargeting2TemporalLayers) {
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_dropframe_thresh = 1;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+
+  // 2 Temporal layers, no spatial layers: Framerate decimation (2, 1).
+  cfg_.ss_number_layers = 1;
+  cfg_.ts_number_layers = 2;
+  cfg_.ts_rate_decimator[0] = 2;
+  cfg_.ts_rate_decimator[1] = 1;
+
+  cfg_.temporal_layering_mode = VP9E_TEMPORAL_LAYERING_MODE_BYPASS;
+
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  const int bitrates[4] = { 200, 400, 600, 800 };
+  const int bitrate_index = GET_PARAM(2);
+  cfg_.rc_target_bitrate = bitrates[bitrate_index];
+  ResetModel();
+  // 60-40 bitrate allocation for 2 temporal layers.
+  cfg_.layer_target_bitrate[0] = 60 * cfg_.rc_target_bitrate / 100;
+  cfg_.layer_target_bitrate[1] = cfg_.rc_target_bitrate;
+  aq_mode_ = 0;
+  if (deadline_ == VPX_DL_REALTIME) {
+    aq_mode_ = 3;
+    cfg_.g_error_resilient = 1;
+  }
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  for (int j = 0; j < static_cast<int>(cfg_.ts_number_layers); ++j) {
+    ASSERT_GE(effective_datarate_[j], cfg_.layer_target_bitrate[j] * 0.85)
+        << " The datarate for the file is lower than target by too much, "
+           "for layer: "
+        << j;
+    ASSERT_LE(effective_datarate_[j], cfg_.layer_target_bitrate[j] * 1.15)
+        << " The datarate for the file is greater than target by too much, "
+           "for layer: "
+        << j;
+  }
+}
+
+// Check basic rate targeting for 3 temporal layers.
+TEST_P(DatarateTestVP9RealTimeMultiBR, BasicRateTargeting3TemporalLayers) {
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_dropframe_thresh = 1;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+
+  // 3 Temporal layers, no spatial layers: Framerate decimation (4, 2, 1).
+  cfg_.ss_number_layers = 1;
+  cfg_.ts_number_layers = 3;
+  cfg_.ts_rate_decimator[0] = 4;
+  cfg_.ts_rate_decimator[1] = 2;
+  cfg_.ts_rate_decimator[2] = 1;
+
+  cfg_.temporal_layering_mode = VP9E_TEMPORAL_LAYERING_MODE_BYPASS;
+
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  const int bitrates[4] = { 200, 400, 600, 800 };
+  const int bitrate_index = GET_PARAM(2);
+  cfg_.rc_target_bitrate = bitrates[bitrate_index];
+  ResetModel();
+  // 40-20-40 bitrate allocation for 3 temporal layers.
+  cfg_.layer_target_bitrate[0] = 40 * cfg_.rc_target_bitrate / 100;
+  cfg_.layer_target_bitrate[1] = 60 * cfg_.rc_target_bitrate / 100;
+  cfg_.layer_target_bitrate[2] = cfg_.rc_target_bitrate;
+  aq_mode_ = 0;
+  if (deadline_ == VPX_DL_REALTIME) {
+    aq_mode_ = 3;
+    cfg_.g_error_resilient = 1;
+  }
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  for (int j = 0; j < static_cast<int>(cfg_.ts_number_layers); ++j) {
+    // TODO(yaowu): Work out a more stable rc control strategy and
+    //              adjust the thresholds to be tighter than 0.75.
+    ASSERT_GE(effective_datarate_[j], cfg_.layer_target_bitrate[j] * 0.75)
+        << " The datarate for the file is lower than target by too much, "
+           "for layer: "
+        << j;
+    // TODO(yaowu): Work out a more stable rc control strategy and
+    //              adjust the thresholds to be tighter than 1.25.
+    ASSERT_LE(effective_datarate_[j], cfg_.layer_target_bitrate[j] * 1.25)
+        << " The datarate for the file is greater than target by too much, "
+           "for layer: "
+        << j;
+  }
+}
+
+// Params: speed setting.
+class DatarateTestVP9RealTime : public DatarateTestVP9,
+                                public ::libvpx_test::CodecTestWithParam<int> {
+ public:
+  DatarateTestVP9RealTime() : DatarateTestVP9(GET_PARAM(0)) {}
+  virtual ~DatarateTestVP9RealTime() {}
+
+ protected:
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(::libvpx_test::kRealTime);
+    set_cpu_used_ = GET_PARAM(1);
+    ResetModel();
+  }
+};
+
+// Check basic rate targeting for CBR mode, with 2 threads and dropped frames.
+TEST_P(DatarateTestVP9RealTime, BasicRateTargetingDropFramesMultiThreads) {
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+  // Encode using multiple threads.
+  cfg_.g_threads = 2;
+
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  cfg_.rc_target_bitrate = 200;
+  ResetModel();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
+      << " The datarate for the file is lower than target by too much!";
+  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.15)
+      << " The datarate for the file is greater than target by too much!";
+}
+
+// Check basic rate targeting for 3 temporal layers, with frame dropping.
+// Only for one (low) bitrate with lower max_quantizer, and somewhat higher
+// frame drop threshold, to force frame dropping.
+TEST_P(DatarateTestVP9RealTime,
+       BasicRateTargeting3TemporalLayersFrameDropping) {
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  // Set frame drop threshold and rc_max_quantizer to force some frame drops.
+  cfg_.rc_dropframe_thresh = 20;
+  cfg_.rc_max_quantizer = 45;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+
+  // 3 Temporal layers, no spatial layers: Framerate decimation (4, 2, 1).
+  cfg_.ss_number_layers = 1;
+  cfg_.ts_number_layers = 3;
+  cfg_.ts_rate_decimator[0] = 4;
+  cfg_.ts_rate_decimator[1] = 2;
+  cfg_.ts_rate_decimator[2] = 1;
+
+  cfg_.temporal_layering_mode = VP9E_TEMPORAL_LAYERING_MODE_BYPASS;
+
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+  cfg_.rc_target_bitrate = 200;
+  ResetModel();
+  // 40-20-40 bitrate allocation for 3 temporal layers.
+  cfg_.layer_target_bitrate[0] = 40 * cfg_.rc_target_bitrate / 100;
+  cfg_.layer_target_bitrate[1] = 60 * cfg_.rc_target_bitrate / 100;
+  cfg_.layer_target_bitrate[2] = cfg_.rc_target_bitrate;
+  aq_mode_ = 0;
+  if (deadline_ == VPX_DL_REALTIME) {
+    aq_mode_ = 3;
+    cfg_.g_error_resilient = 1;
+  }
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  for (int j = 0; j < static_cast<int>(cfg_.ts_number_layers); ++j) {
+    ASSERT_GE(effective_datarate_[j], cfg_.layer_target_bitrate[j] * 0.85)
+        << " The datarate for the file is lower than target by too much, "
+           "for layer: "
+        << j;
+    ASSERT_LE(effective_datarate_[j], cfg_.layer_target_bitrate[j] * 1.20)
+        << " The datarate for the file is greater than target by too much, "
+           "for layer: "
+        << j;
+    // Expect some frame drops in this test: for this 400-frame test,
+    // expect at least 5% and not more than 70% drops.
+    ASSERT_GE(num_drops_, 20);
+    ASSERT_LE(num_drops_, 280);
+  }
+}
+
+// Check VP9 region of interest feature.
+TEST_P(DatarateTestVP9RealTime, RegionOfInterest) {
+  if (deadline_ != VPX_DL_REALTIME || set_cpu_used_ < 5) return;
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_dropframe_thresh = 0;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+
+  cfg_.rc_target_bitrate = 450;
+  cfg_.g_w = 640;
+  cfg_.g_h = 480;
+
+  ResetModel();
+
+  // Set ROI parameters
+  use_roi_ = true;
+  memset(&roi_, 0, sizeof(roi_));
+
+  roi_.rows = (cfg_.g_h + 7) / 8;
+  roi_.cols = (cfg_.g_w + 7) / 8;
+
+  roi_.delta_q[1] = -20;
+  roi_.delta_lf[1] = -20;
+  memset(roi_.ref_frame, -1, sizeof(roi_.ref_frame));
+  roi_.ref_frame[1] = 1;
+
+  // Use 2 states: 1 is center square, 0 is the rest.
+  roi_.roi_map = reinterpret_cast<uint8_t *>(
+      calloc(roi_.rows * roi_.cols, sizeof(*roi_.roi_map)));
+  ASSERT_TRUE(roi_.roi_map != NULL);
+
+  for (unsigned int i = 0; i < roi_.rows; ++i) {
+    for (unsigned int j = 0; j < roi_.cols; ++j) {
+      if (i > (roi_.rows >> 2) && i < ((roi_.rows * 3) >> 2) &&
+          j > (roi_.cols >> 2) && j < ((roi_.cols * 3) >> 2)) {
+        roi_.roi_map[i * roi_.cols + j] = 1;
+      }
+    }
+  }
+
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_[0] * 0.90)
+      << " The datarate for the file exceeds the target!";
+
+  ASSERT_LE(cfg_.rc_target_bitrate, effective_datarate_[0] * 1.4)
+      << " The datarate for the file missed the target!";
+
+  free(roi_.roi_map);
+}
+
+// Params: speed setting, delta q UV.
+class DatarateTestVP9RealTimeDeltaQUV
+    : public DatarateTestVP9,
+      public ::libvpx_test::CodecTestWith2Params<int, int> {
+ public:
+  DatarateTestVP9RealTimeDeltaQUV() : DatarateTestVP9(GET_PARAM(0)) {}
+  virtual ~DatarateTestVP9RealTimeDeltaQUV() {}
+
+ protected:
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(::libvpx_test::kRealTime);
+    set_cpu_used_ = GET_PARAM(1);
+    ResetModel();
+  }
+};
+
+TEST_P(DatarateTestVP9RealTimeDeltaQUV, DeltaQUV) {
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_dropframe_thresh = 0;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 63;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+
+  cfg_.rc_target_bitrate = 450;
+  cfg_.g_w = 640;
+  cfg_.g_h = 480;
+
+  ResetModel();
+
+  delta_q_uv_ = GET_PARAM(2);
+
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(cfg_.rc_target_bitrate, effective_datarate_[0] * 0.90)
+      << " The datarate for the file exceeds the target!";
+
+  ASSERT_LE(cfg_.rc_target_bitrate, effective_datarate_[0] * 1.4)
+      << " The datarate for the file missed the target!";
+}
+
+// Params: speed setting.
+class DatarateTestVP9PostEncodeDrop
+    : public DatarateTestVP9,
+      public ::libvpx_test::CodecTestWithParam<int> {
+ public:
+  DatarateTestVP9PostEncodeDrop() : DatarateTestVP9(GET_PARAM(0)) {}
+
+ protected:
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(::libvpx_test::kRealTime);
+    set_cpu_used_ = GET_PARAM(1);
+    ResetModel();
+  }
+};
+
+// Check post-encode frame dropping for screen content in CBR mode, with 2
+// threads.
+TEST_P(DatarateTestVP9PostEncodeDrop, PostEncodeDropScreenContent) {
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_dropframe_thresh = 30;
+  cfg_.rc_min_quantizer = 0;
+  cfg_.rc_max_quantizer = 56;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+  // Encode using multiple threads.
+  cfg_.g_threads = 2;
+  cfg_.g_error_resilient = 0;
+  tune_content_ = 1;
+  ::libvpx_test::I420VideoSource video("hantro_collage_w352h288.yuv", 352, 288,
+                                       30, 1, 0, 300);
+  cfg_.rc_target_bitrate = 300;
+  ResetModel();
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
+      << " The datarate for the file is lower than target by too much!";
+  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.15)
+      << " The datarate for the file is greater than target by too much!";
+}
+
+#if CONFIG_VP9_TEMPORAL_DENOISING
+// Params: speed setting.
+class DatarateTestVP9RealTimeDenoiser : public DatarateTestVP9RealTime {
+ public:
+  virtual ~DatarateTestVP9RealTimeDenoiser() {}
+};
+
+// Check basic datarate targeting, for a single bitrate, when denoiser is on.
+TEST_P(DatarateTestVP9RealTimeDenoiser, LowNoise) {
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_dropframe_thresh = 1;
+  cfg_.rc_min_quantizer = 2;
+  cfg_.rc_max_quantizer = 56;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+
+  // For the temporal denoiser (#if CONFIG_VP9_TEMPORAL_DENOISING),
+  // there is currently only one denoiser mode: kDenoiserOnYOnly (which is 1),
+  // but more modes may be added in the future.
+  cfg_.rc_target_bitrate = 400;
+  ResetModel();
+  // Turn on the denoiser.
+  denoiser_on_ = 1;
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
+      << " The datarate for the file is lower than target by too much!";
+  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.15)
+      << " The datarate for the file is greater than target by too much!";
+}
+
+// Check basic datarate targeting, for a single bitrate, when denoiser is on,
+// for clip with high noise level. Use 2 threads.
+TEST_P(DatarateTestVP9RealTimeDenoiser, HighNoise) {
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_dropframe_thresh = 1;
+  cfg_.rc_min_quantizer = 2;
+  cfg_.rc_max_quantizer = 56;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+  cfg_.g_threads = 2;
+
+  ::libvpx_test::Y4mVideoSource video("noisy_clip_640_360.y4m", 0, 200);
+
+  // For the temporal denoiser (#if CONFIG_VP9_TEMPORAL_DENOISING),
+  // there is currently only one denoiser mode: kDenoiserOnYOnly (which is 1),
+  // but more modes may be added in the future.
+  cfg_.rc_target_bitrate = 1000;
+  ResetModel();
+  // Turn on the denoiser.
+  denoiser_on_ = 1;
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
+      << " The datarate for the file is lower than target by too much!";
+  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.15)
+      << " The datarate for the file is greater than target by too much!";
+}
+
+// Check basic datarate targeting, for a single bitrate, when denoiser is on,
+// for 1280x720 clip with 4 threads.
+TEST_P(DatarateTestVP9RealTimeDenoiser, 4threads) {
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_dropframe_thresh = 1;
+  cfg_.rc_min_quantizer = 2;
+  cfg_.rc_max_quantizer = 56;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+  cfg_.g_threads = 4;
+
+  ::libvpx_test::Y4mVideoSource video("niklas_1280_720_30.y4m", 0, 300);
+
+  // For the temporal denoiser (#if CONFIG_VP9_TEMPORAL_DENOISING),
+  // there is currently only one denoiser mode: kDenoiserOnYOnly (which is 1),
+  // but more modes may be added in the future.
+  cfg_.rc_target_bitrate = 1000;
+  ResetModel();
+  // Turn on the denoiser.
+  denoiser_on_ = 1;
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
+      << " The datarate for the file is lower than target by too much!";
+  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.29)
+      << " The datarate for the file is greater than target by too much!";
+}
+
+// Check basic datarate targeting, for a single bitrate, when denoiser is off
+// and on.
+TEST_P(DatarateTestVP9RealTimeDenoiser, DenoiserOffOn) {
+  cfg_.rc_buf_initial_sz = 500;
+  cfg_.rc_buf_optimal_sz = 500;
+  cfg_.rc_buf_sz = 1000;
+  cfg_.rc_dropframe_thresh = 1;
+  cfg_.rc_min_quantizer = 2;
+  cfg_.rc_max_quantizer = 56;
+  cfg_.rc_end_usage = VPX_CBR;
+  cfg_.g_lag_in_frames = 0;
+
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+
+  // For the temporal denoiser (#if CONFIG_VP9_TEMPORAL_DENOISING),
+  // there is currently only one denoiser mode: kDenoiserOnYOnly (which is 1),
+  // but more modes may be added in the future.
+  cfg_.rc_target_bitrate = 400;
+  ResetModel();
+  // The denoiser is off by default.
+  denoiser_on_ = 0;
+  // Set the offon test flag.
+  denoiser_offon_test_ = 1;
+  denoiser_offon_period_ = 100;
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+  ASSERT_GE(effective_datarate_[0], cfg_.rc_target_bitrate * 0.85)
+      << " The datarate for the file is lower than target by too much!";
+  ASSERT_LE(effective_datarate_[0], cfg_.rc_target_bitrate * 1.15)
+      << " The datarate for the file is greater than target by too much!";
+}
+#endif  // CONFIG_VP9_TEMPORAL_DENOISING
+
+VP9_INSTANTIATE_TEST_CASE(DatarateTestVP9RealTimeMultiBR,
+                          ::testing::Range(5, 10), ::testing::Range(0, 4));
+
+VP9_INSTANTIATE_TEST_CASE(DatarateTestVP9LargeVBR, ::testing::Range(5, 9),
+                          ::testing::Range(0, 2));
+
+VP9_INSTANTIATE_TEST_CASE(DatarateTestVP9RealTime, ::testing::Range(5, 10));
+
+VP9_INSTANTIATE_TEST_CASE(DatarateTestVP9RealTimeDeltaQUV,
+                          ::testing::Range(5, 10),
+                          ::testing::Values(-5, -10, -15));
+
+VP9_INSTANTIATE_TEST_CASE(DatarateTestVP9PostEncodeDrop,
+                          ::testing::Range(5, 6));
+
+#if CONFIG_VP9_TEMPORAL_DENOISING
+VP9_INSTANTIATE_TEST_CASE(DatarateTestVP9RealTimeDenoiser,
+                          ::testing::Range(5, 10));
+#endif
+}  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_denoiser_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_denoiser_test.cc
index 56ca257..47fa587 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_denoiser_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_denoiser_test.cc
@@ -11,6 +11,7 @@
 #include <math.h>
 #include <stdlib.h>
 #include <string.h>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 #include "test/acm_random.h"
@@ -35,7 +36,7 @@
                                      uint8_t *avg, int avg_stride,
                                      int increase_denoising, BLOCK_SIZE bs,
                                      int motion_magnitude);
-typedef std::tr1::tuple<Vp9DenoiserFilterFunc, BLOCK_SIZE> VP9DenoiserTestParam;
+typedef std::tuple<Vp9DenoiserFilterFunc, BLOCK_SIZE> VP9DenoiserTestParam;
 
 class VP9DenoiserTest
     : public ::testing::Test,
@@ -99,7 +100,7 @@
   }
 }
 
-using std::tr1::make_tuple;
+using std::make_tuple;
 
 // Test for all block size.
 #if HAVE_SSE2
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_encoder_parms_get_to_decoder.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_encoder_parms_get_to_decoder.cc
index 62e8dcb..fade08b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_encoder_parms_get_to_decoder.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_encoder_parms_get_to_decoder.cc
@@ -8,6 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include <memory>
+
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
 #include "test/codec_factory.h"
@@ -74,7 +76,7 @@
 
   virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
                                   ::libvpx_test::Encoder *encoder) {
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       encoder->Control(VP9E_SET_COLOR_SPACE, encode_parms.cs);
       encoder->Control(VP9E_SET_COLOR_RANGE, encode_parms.color_range);
       encoder->Control(VP9E_SET_LOSSLESS, encode_parms.lossless);
@@ -138,7 +140,7 @@
 TEST_P(VpxEncoderParmsGetToDecoder, BitstreamParms) {
   init_flags_ = VPX_CODEC_USE_PSNR;
 
-  testing::internal::scoped_ptr<libvpx_test::VideoSource> video(
+  std::unique_ptr<libvpx_test::VideoSource> video(
       new libvpx_test::Y4mVideoSource(test_video_.name, 0, test_video_.frames));
   ASSERT_TRUE(video.get() != NULL);
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_end_to_end_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_end_to_end_test.cc
index d0cbfbb..7cb716f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_end_to_end_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_end_to_end_test.cc
@@ -8,10 +8,13 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include <memory>
+
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
 #include "test/codec_factory.h"
 #include "test/encode_test_driver.h"
+#include "test/i420_video_source.h"
 #include "test/util.h"
 #include "test/y4m_video_source.h"
 #include "test/yuv_video_source.h"
@@ -21,14 +24,14 @@
 const unsigned int kWidth = 160;
 const unsigned int kHeight = 90;
 const unsigned int kFramerate = 50;
-const unsigned int kFrames = 10;
+const unsigned int kFrames = 20;
 const int kBitrate = 500;
 // List of psnr thresholds for speed settings 0-7 and 5 encoding modes
 const double kPsnrThreshold[][5] = {
   { 36.0, 37.0, 37.0, 37.0, 37.0 }, { 35.0, 36.0, 36.0, 36.0, 36.0 },
   { 34.0, 35.0, 35.0, 35.0, 35.0 }, { 33.0, 34.0, 34.0, 34.0, 34.0 },
-  { 32.0, 33.0, 33.0, 33.0, 33.0 }, { 31.0, 32.0, 32.0, 32.0, 32.0 },
-  { 30.0, 31.0, 31.0, 31.0, 31.0 }, { 29.0, 30.0, 30.0, 30.0, 30.0 },
+  { 32.0, 33.0, 33.0, 33.0, 33.0 }, { 28.0, 32.0, 32.0, 32.0, 32.0 },
+  { 28.5, 31.0, 31.0, 31.0, 31.0 }, { 27.5, 30.0, 30.0, 30.0, 30.0 },
 };
 
 typedef struct {
@@ -45,13 +48,13 @@
   { "park_joy_90p_8_444.y4m", 8, VPX_IMG_FMT_I444, VPX_BITS_8, 1 },
   { "park_joy_90p_8_440.yuv", 8, VPX_IMG_FMT_I440, VPX_BITS_8, 1 },
 #if CONFIG_VP9_HIGHBITDEPTH
-  { "park_joy_90p_10_420.y4m", 10, VPX_IMG_FMT_I42016, VPX_BITS_10, 2 },
-  { "park_joy_90p_10_422.y4m", 10, VPX_IMG_FMT_I42216, VPX_BITS_10, 3 },
-  { "park_joy_90p_10_444.y4m", 10, VPX_IMG_FMT_I44416, VPX_BITS_10, 3 },
+  { "park_joy_90p_10_420_20f.y4m", 10, VPX_IMG_FMT_I42016, VPX_BITS_10, 2 },
+  { "park_joy_90p_10_422_20f.y4m", 10, VPX_IMG_FMT_I42216, VPX_BITS_10, 3 },
+  { "park_joy_90p_10_444_20f.y4m", 10, VPX_IMG_FMT_I44416, VPX_BITS_10, 3 },
   { "park_joy_90p_10_440.yuv", 10, VPX_IMG_FMT_I44016, VPX_BITS_10, 3 },
-  { "park_joy_90p_12_420.y4m", 12, VPX_IMG_FMT_I42016, VPX_BITS_12, 2 },
-  { "park_joy_90p_12_422.y4m", 12, VPX_IMG_FMT_I42216, VPX_BITS_12, 3 },
-  { "park_joy_90p_12_444.y4m", 12, VPX_IMG_FMT_I44416, VPX_BITS_12, 3 },
+  { "park_joy_90p_12_420_20f.y4m", 12, VPX_IMG_FMT_I42016, VPX_BITS_12, 2 },
+  { "park_joy_90p_12_422_20f.y4m", 12, VPX_IMG_FMT_I42216, VPX_BITS_12, 3 },
+  { "park_joy_90p_12_444_20f.y4m", 12, VPX_IMG_FMT_I44416, VPX_BITS_12, 3 },
   { "park_joy_90p_12_440.yuv", 12, VPX_IMG_FMT_I44016, VPX_BITS_12, 3 },
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 };
@@ -63,7 +66,7 @@
 };
 
 // Speed settings tested
-const int kCpuUsedVectors[] = { 1, 2, 3, 5, 6 };
+const int kCpuUsedVectors[] = { 1, 2, 3, 5, 6, 7 };
 
 int is_extension_y4m(const char *filename) {
   const char *dot = strrchr(filename, '.');
@@ -74,6 +77,43 @@
   }
 }
 
+class EndToEndTestAdaptiveRDThresh
+    : public ::libvpx_test::EncoderTest,
+      public ::libvpx_test::CodecTestWith2Params<int, int> {
+ protected:
+  EndToEndTestAdaptiveRDThresh()
+      : EncoderTest(GET_PARAM(0)), cpu_used_start_(GET_PARAM(1)),
+        cpu_used_end_(GET_PARAM(2)) {}
+
+  virtual ~EndToEndTestAdaptiveRDThresh() {}
+
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(::libvpx_test::kRealTime);
+    cfg_.g_lag_in_frames = 0;
+    cfg_.rc_end_usage = VPX_CBR;
+    cfg_.rc_buf_sz = 1000;
+    cfg_.rc_buf_initial_sz = 500;
+    cfg_.rc_buf_optimal_sz = 600;
+    dec_cfg_.threads = 4;
+  }
+
+  virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
+                                  ::libvpx_test::Encoder *encoder) {
+    if (video->frame() == 0) {
+      encoder->Control(VP8E_SET_CPUUSED, cpu_used_start_);
+      encoder->Control(VP9E_SET_ROW_MT, 1);
+      encoder->Control(VP9E_SET_TILE_COLUMNS, 2);
+    }
+    if (video->frame() == 100)
+      encoder->Control(VP8E_SET_CPUUSED, cpu_used_end_);
+  }
+
+ private:
+  int cpu_used_start_;
+  int cpu_used_end_;
+};
+
 class EndToEndTestLarge
     : public ::libvpx_test::EncoderTest,
       public ::libvpx_test::CodecTestWith3Params<libvpx_test::TestMode,
@@ -82,7 +122,10 @@
   EndToEndTestLarge()
       : EncoderTest(GET_PARAM(0)), test_video_param_(GET_PARAM(2)),
         cpu_used_(GET_PARAM(3)), psnr_(0.0), nframes_(0),
-        encoding_mode_(GET_PARAM(1)) {}
+        encoding_mode_(GET_PARAM(1)) {
+    cyclic_refresh_ = 0;
+    denoiser_on_ = 0;
+  }
 
   virtual ~EndToEndTestLarge() {}
 
@@ -114,7 +157,7 @@
 
   virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
                                   ::libvpx_test::Encoder *encoder) {
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       encoder->Control(VP9E_SET_FRAME_PARALLEL_DECODING, 1);
       encoder->Control(VP9E_SET_TILE_COLUMNS, 4);
       encoder->Control(VP8E_SET_CPUUSED, cpu_used_);
@@ -123,6 +166,9 @@
         encoder->Control(VP8E_SET_ARNR_MAXFRAMES, 7);
         encoder->Control(VP8E_SET_ARNR_STRENGTH, 5);
         encoder->Control(VP8E_SET_ARNR_TYPE, 3);
+      } else {
+        encoder->Control(VP9E_SET_NOISE_SENSITIVITY, denoiser_on_);
+        encoder->Control(VP9E_SET_AQ_MODE, cyclic_refresh_);
       }
     }
   }
@@ -138,6 +184,8 @@
 
   TestVideoParam test_video_param_;
   int cpu_used_;
+  int cyclic_refresh_;
+  int denoiser_on_;
 
  private:
   double psnr_;
@@ -145,6 +193,50 @@
   libvpx_test::TestMode encoding_mode_;
 };
 
+#if CONFIG_VP9_DECODER
+// The test parameters control VP9D_SET_LOOP_FILTER_OPT and the number of
+// decoder threads.
+class EndToEndTestLoopFilterThreading
+    : public ::libvpx_test::EncoderTest,
+      public ::libvpx_test::CodecTestWith2Params<bool, int> {
+ protected:
+  EndToEndTestLoopFilterThreading()
+      : EncoderTest(GET_PARAM(0)), use_loop_filter_opt_(GET_PARAM(1)) {}
+
+  virtual ~EndToEndTestLoopFilterThreading() {}
+
+  virtual void SetUp() {
+    InitializeConfig();
+    SetMode(::libvpx_test::kRealTime);
+    cfg_.g_threads = 2;
+    cfg_.g_lag_in_frames = 0;
+    cfg_.rc_target_bitrate = 500;
+    cfg_.rc_end_usage = VPX_CBR;
+    cfg_.kf_min_dist = 1;
+    cfg_.kf_max_dist = 1;
+    dec_cfg_.threads = GET_PARAM(2);
+  }
+
+  virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
+                                  ::libvpx_test::Encoder *encoder) {
+    if (video->frame() == 0) {
+      encoder->Control(VP8E_SET_CPUUSED, 8);
+    }
+    encoder->Control(VP9E_SET_TILE_COLUMNS, 4 - video->frame() % 5);
+  }
+
+  virtual void PreDecodeFrameHook(::libvpx_test::VideoSource *video,
+                                  ::libvpx_test::Decoder *decoder) {
+    if (video->frame() == 0) {
+      decoder->Control(VP9D_SET_LOOP_FILTER_OPT, use_loop_filter_opt_ ? 1 : 0);
+    }
+  }
+
+ private:
+  const bool use_loop_filter_opt_;
+};
+#endif  // CONFIG_VP9_DECODER
+
 TEST_P(EndToEndTestLarge, EndtoEndPSNRTest) {
   cfg_.rc_target_bitrate = kBitrate;
   cfg_.g_error_resilient = 0;
@@ -154,7 +246,7 @@
   init_flags_ = VPX_CODEC_USE_PSNR;
   if (cfg_.g_bit_depth > 8) init_flags_ |= VPX_CODEC_USE_HIGHBITDEPTH;
 
-  testing::internal::scoped_ptr<libvpx_test::VideoSource> video;
+  std::unique_ptr<libvpx_test::VideoSource> video;
   if (is_extension_y4m(test_video_param_.filename)) {
     video.reset(new libvpx_test::Y4mVideoSource(test_video_param_.filename, 0,
                                                 kFrames));
@@ -170,8 +262,63 @@
   EXPECT_GT(psnr, GetPsnrThreshold());
 }
 
+TEST_P(EndToEndTestLarge, EndtoEndPSNRDenoiserAQTest) {
+  cfg_.rc_target_bitrate = kBitrate;
+  cfg_.g_error_resilient = 0;
+  cfg_.g_profile = test_video_param_.profile;
+  cfg_.g_input_bit_depth = test_video_param_.input_bit_depth;
+  cfg_.g_bit_depth = test_video_param_.bit_depth;
+  init_flags_ = VPX_CODEC_USE_PSNR;
+  cyclic_refresh_ = 3;
+  denoiser_on_ = 1;
+  if (cfg_.g_bit_depth > 8) init_flags_ |= VPX_CODEC_USE_HIGHBITDEPTH;
+
+  std::unique_ptr<libvpx_test::VideoSource> video;
+  if (is_extension_y4m(test_video_param_.filename)) {
+    video.reset(new libvpx_test::Y4mVideoSource(test_video_param_.filename, 0,
+                                                kFrames));
+  } else {
+    video.reset(new libvpx_test::YUVVideoSource(
+        test_video_param_.filename, test_video_param_.fmt, kWidth, kHeight,
+        kFramerate, 1, 0, kFrames));
+  }
+  ASSERT_TRUE(video.get() != NULL);
+
+  ASSERT_NO_FATAL_FAILURE(RunLoop(video.get()));
+  const double psnr = GetAveragePsnr();
+  EXPECT_GT(psnr, GetPsnrThreshold());
+}
+
+TEST_P(EndToEndTestAdaptiveRDThresh, EndtoEndAdaptiveRDThreshRowMT) {
+  cfg_.rc_target_bitrate = kBitrate;
+  cfg_.g_error_resilient = 0;
+  cfg_.g_threads = 2;
+  ::libvpx_test::I420VideoSource video("niklas_640_480_30.yuv", 640, 480, 30, 1,
+                                       0, 400);
+
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+}
+
+#if CONFIG_VP9_DECODER
+TEST_P(EndToEndTestLoopFilterThreading, TileCountChange) {
+  ::libvpx_test::RandomVideoSource video;
+  video.SetSize(4096, 2160);
+  video.set_limit(10);
+
+  ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
+}
+#endif  // CONFIG_VP9_DECODER
+
 VP9_INSTANTIATE_TEST_CASE(EndToEndTestLarge,
                           ::testing::ValuesIn(kEncodingModeVectors),
                           ::testing::ValuesIn(kTestVectors),
                           ::testing::ValuesIn(kCpuUsedVectors));
+
+VP9_INSTANTIATE_TEST_CASE(EndToEndTestAdaptiveRDThresh,
+                          ::testing::Values(5, 6, 7), ::testing::Values(8, 9));
+
+#if CONFIG_VP9_DECODER
+VP9_INSTANTIATE_TEST_CASE(EndToEndTestLoopFilterThreading, ::testing::Bool(),
+                          ::testing::Range(2, 6));
+#endif  // CONFIG_VP9_DECODER
 }  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_ethread_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_ethread_test.cc
index 6b7e5121..6de76e9 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_ethread_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_ethread_test.cc
@@ -387,7 +387,7 @@
   ASSERT_NO_FATAL_FAILURE(RunLoop(&video));
   const double multi_thr_psnr = GetAveragePsnr();
 
-  EXPECT_NEAR(single_thr_psnr, multi_thr_psnr, 0.1);
+  EXPECT_NEAR(single_thr_psnr, multi_thr_psnr, 0.2);
 }
 
 INSTANTIATE_TEST_CASE_P(
@@ -409,7 +409,7 @@
         ::testing::Values(::libvpx_test::kTwoPassGood,
                           ::libvpx_test::kOnePassGood,
                           ::libvpx_test::kRealTime),
-        ::testing::Range(3, 9),    // cpu_used
+        ::testing::Range(3, 10),   // cpu_used
         ::testing::Range(0, 3),    // tile_columns
         ::testing::Range(2, 5)));  // threads
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_intrapred_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_intrapred_test.cc
index 39c5e79..58091f8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_intrapred_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_intrapred_test.cc
@@ -130,6 +130,12 @@
   RunTest(left_col, above_data, dst, ref_dst);
 }
 
+// Instantiate a token test to avoid -Wuninitialized warnings when none of the
+// other tests are enabled.
+INSTANTIATE_TEST_CASE_P(
+    C, VP9IntraPredTest,
+    ::testing::Values(IntraPredParam(&vpx_d45_predictor_4x4_c,
+                                     &vpx_d45_predictor_4x4_c, 4, 8)));
 #if HAVE_SSE2
 INSTANTIATE_TEST_CASE_P(
     SSE2, VP9IntraPredTest,
@@ -378,58 +384,61 @@
                        8)));
 #endif  // HAVE_MSA
 
-#if HAVE_VSX
-INSTANTIATE_TEST_CASE_P(
-    VSX, VP9IntraPredTest,
-    ::testing::Values(
+// TODO(crbug.com/webm/1522): Fix test failures.
+#if 0
         IntraPredParam(&vpx_d45_predictor_8x8_vsx, &vpx_d45_predictor_8x8_c, 8,
                        8),
-        IntraPredParam(&vpx_d45_predictor_16x16_vsx, &vpx_d45_predictor_16x16_c,
-                       16, 8),
-        IntraPredParam(&vpx_d45_predictor_32x32_vsx, &vpx_d45_predictor_32x32_c,
-                       32, 8),
         IntraPredParam(&vpx_d63_predictor_8x8_vsx, &vpx_d63_predictor_8x8_c, 8,
                        8),
-        IntraPredParam(&vpx_d63_predictor_16x16_vsx, &vpx_d63_predictor_16x16_c,
-                       16, 8),
-        IntraPredParam(&vpx_d63_predictor_32x32_vsx, &vpx_d63_predictor_32x32_c,
-                       32, 8),
-        IntraPredParam(&vpx_dc_128_predictor_16x16_vsx,
-                       &vpx_dc_128_predictor_16x16_c, 16, 8),
-        IntraPredParam(&vpx_dc_128_predictor_32x32_vsx,
-                       &vpx_dc_128_predictor_32x32_c, 32, 8),
-        IntraPredParam(&vpx_dc_left_predictor_16x16_vsx,
-                       &vpx_dc_left_predictor_16x16_c, 16, 8),
-        IntraPredParam(&vpx_dc_left_predictor_32x32_vsx,
-                       &vpx_dc_left_predictor_32x32_c, 32, 8),
         IntraPredParam(&vpx_dc_predictor_8x8_vsx, &vpx_dc_predictor_8x8_c, 8,
                        8),
-        IntraPredParam(&vpx_dc_predictor_16x16_vsx, &vpx_dc_predictor_16x16_c,
-                       16, 8),
-        IntraPredParam(&vpx_dc_predictor_32x32_vsx, &vpx_dc_predictor_32x32_c,
-                       32, 8),
-        IntraPredParam(&vpx_dc_top_predictor_16x16_vsx,
-                       &vpx_dc_top_predictor_16x16_c, 16, 8),
-        IntraPredParam(&vpx_dc_top_predictor_32x32_vsx,
-                       &vpx_dc_top_predictor_32x32_c, 32, 8),
         IntraPredParam(&vpx_h_predictor_4x4_vsx, &vpx_h_predictor_4x4_c, 4, 8),
         IntraPredParam(&vpx_h_predictor_8x8_vsx, &vpx_h_predictor_8x8_c, 8, 8),
-        IntraPredParam(&vpx_h_predictor_16x16_vsx, &vpx_h_predictor_16x16_c, 16,
-                       8),
-        IntraPredParam(&vpx_h_predictor_32x32_vsx, &vpx_h_predictor_32x32_c, 32,
-                       8),
         IntraPredParam(&vpx_tm_predictor_4x4_vsx, &vpx_tm_predictor_4x4_c, 4,
                        8),
         IntraPredParam(&vpx_tm_predictor_8x8_vsx, &vpx_tm_predictor_8x8_c, 8,
                        8),
-        IntraPredParam(&vpx_tm_predictor_16x16_vsx, &vpx_tm_predictor_16x16_c,
-                       16, 8),
-        IntraPredParam(&vpx_tm_predictor_32x32_vsx, &vpx_tm_predictor_32x32_c,
-                       32, 8),
-        IntraPredParam(&vpx_v_predictor_16x16_vsx, &vpx_v_predictor_16x16_c, 16,
-                       8),
-        IntraPredParam(&vpx_v_predictor_32x32_vsx, &vpx_v_predictor_32x32_c, 32,
-                       8)));
+#endif
+
+#if HAVE_VSX
+INSTANTIATE_TEST_CASE_P(
+    VSX, VP9IntraPredTest,
+    ::testing::Values(IntraPredParam(&vpx_d45_predictor_16x16_vsx,
+                                     &vpx_d45_predictor_16x16_c, 16, 8),
+                      IntraPredParam(&vpx_d45_predictor_32x32_vsx,
+                                     &vpx_d45_predictor_32x32_c, 32, 8),
+                      IntraPredParam(&vpx_d63_predictor_16x16_vsx,
+                                     &vpx_d63_predictor_16x16_c, 16, 8),
+                      IntraPredParam(&vpx_d63_predictor_32x32_vsx,
+                                     &vpx_d63_predictor_32x32_c, 32, 8),
+                      IntraPredParam(&vpx_dc_128_predictor_16x16_vsx,
+                                     &vpx_dc_128_predictor_16x16_c, 16, 8),
+                      IntraPredParam(&vpx_dc_128_predictor_32x32_vsx,
+                                     &vpx_dc_128_predictor_32x32_c, 32, 8),
+                      IntraPredParam(&vpx_dc_left_predictor_16x16_vsx,
+                                     &vpx_dc_left_predictor_16x16_c, 16, 8),
+                      IntraPredParam(&vpx_dc_left_predictor_32x32_vsx,
+                                     &vpx_dc_left_predictor_32x32_c, 32, 8),
+                      IntraPredParam(&vpx_dc_predictor_16x16_vsx,
+                                     &vpx_dc_predictor_16x16_c, 16, 8),
+                      IntraPredParam(&vpx_dc_predictor_32x32_vsx,
+                                     &vpx_dc_predictor_32x32_c, 32, 8),
+                      IntraPredParam(&vpx_dc_top_predictor_16x16_vsx,
+                                     &vpx_dc_top_predictor_16x16_c, 16, 8),
+                      IntraPredParam(&vpx_dc_top_predictor_32x32_vsx,
+                                     &vpx_dc_top_predictor_32x32_c, 32, 8),
+                      IntraPredParam(&vpx_h_predictor_16x16_vsx,
+                                     &vpx_h_predictor_16x16_c, 16, 8),
+                      IntraPredParam(&vpx_h_predictor_32x32_vsx,
+                                     &vpx_h_predictor_32x32_c, 32, 8),
+                      IntraPredParam(&vpx_tm_predictor_16x16_vsx,
+                                     &vpx_tm_predictor_16x16_c, 16, 8),
+                      IntraPredParam(&vpx_tm_predictor_32x32_vsx,
+                                     &vpx_tm_predictor_32x32_c, 32, 8),
+                      IntraPredParam(&vpx_v_predictor_16x16_vsx,
+                                     &vpx_v_predictor_16x16_c, 16, 8),
+                      IntraPredParam(&vpx_v_predictor_32x32_vsx,
+                                     &vpx_v_predictor_32x32_c, 32, 8)));
 #endif  // HAVE_VSX
 
 #if CONFIG_VP9_HIGHBITDEPTH
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_lossless_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_lossless_test.cc
index 703b55e..5cf0a41 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_lossless_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_lossless_test.cc
@@ -38,7 +38,7 @@
 
   virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
                                   ::libvpx_test::Encoder *encoder) {
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       // Only call Control if quantizer > 0 to verify that using quantizer
       // alone will activate lossless
       if (cfg_.rc_max_quantizer > 0 || cfg_.rc_min_quantizer > 0) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_motion_vector_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_motion_vector_test.cc
index 5dd2ddb..b556a1c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_motion_vector_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_motion_vector_test.cc
@@ -8,6 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include <memory>
+
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
 #include "test/codec_factory.h"
@@ -59,7 +61,7 @@
 
   virtual void PreEncodeFrameHook(::libvpx_test::VideoSource *video,
                                   ::libvpx_test::Encoder *encoder) {
-    if (video->frame() == 1) {
+    if (video->frame() == 0) {
       encoder->Control(VP8E_SET_CPUUSED, cpu_used_);
       encoder->Control(VP9E_ENABLE_MOTION_VECTOR_UNIT_TEST, mv_test_mode_);
       if (encoding_mode_ != ::libvpx_test::kRealTime) {
@@ -81,7 +83,7 @@
   cfg_.g_profile = 0;
   init_flags_ = VPX_CODEC_USE_PSNR;
 
-  testing::internal::scoped_ptr<libvpx_test::VideoSource> video;
+  std::unique_ptr<libvpx_test::VideoSource> video;
   video.reset(new libvpx_test::YUVVideoSource(
       "niklas_640_480_30.yuv", VPX_IMG_FMT_I420, 3840, 2160,  // 2048, 1080,
       30, 1, 0, 5));
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_quantize_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_quantize_test.cc
index f2234f0..69c2c5a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_quantize_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_quantize_test.cc
@@ -11,6 +11,7 @@
 #include <math.h>
 #include <stdlib.h>
 #include <string.h>
+#include <tuple>
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
@@ -18,6 +19,7 @@
 #include "./vpx_config.h"
 #include "./vpx_dsp_rtcd.h"
 #include "test/acm_random.h"
+#include "test/bench.h"
 #include "test/buffer.h"
 #include "test/clear_system_state.h"
 #include "test/register_state_check.h"
@@ -26,6 +28,7 @@
 #include "vp9/common/vp9_scan.h"
 #include "vpx/vpx_codec.h"
 #include "vpx/vpx_integer.h"
+#include "vpx_ports/msvc.h"
 #include "vpx_ports/vpx_timer.h"
 
 using libvpx_test::ACMRandom;
@@ -41,8 +44,8 @@
                              tran_low_t *dqcoeff, const int16_t *dequant,
                              uint16_t *eob, const int16_t *scan,
                              const int16_t *iscan);
-typedef std::tr1::tuple<QuantizeFunc, QuantizeFunc, vpx_bit_depth_t,
-                        int /*max_size*/, bool /*is_fp*/>
+typedef std::tuple<QuantizeFunc, QuantizeFunc, vpx_bit_depth_t,
+                   int /*max_size*/, bool /*is_fp*/>
     QuantizeParam;
 
 // Wrapper for FP version which does not use zbin or quant_shift.
@@ -67,11 +70,19 @@
      scan, iscan);
 }
 
-class VP9QuantizeBase {
+class VP9QuantizeBase : public AbstractBench {
  public:
   VP9QuantizeBase(vpx_bit_depth_t bit_depth, int max_size, bool is_fp)
-      : bit_depth_(bit_depth), max_size_(max_size), is_fp_(is_fp) {
+      : bit_depth_(bit_depth), max_size_(max_size), is_fp_(is_fp),
+        coeff_(Buffer<tran_low_t>(max_size_, max_size_, 0, 16)),
+        qcoeff_(Buffer<tran_low_t>(max_size_, max_size_, 0, 32)),
+        dqcoeff_(Buffer<tran_low_t>(max_size_, max_size_, 0, 32)) {
+    // TODO(jianj): SSSE3 and AVX2 tests fail on extreme values.
+#if HAVE_NEON
+    max_value_ = (1 << (7 + bit_depth_)) - 1;
+#else
     max_value_ = (1 << bit_depth_) - 1;
+#endif
     zbin_ptr_ =
         reinterpret_cast<int16_t *>(vpx_memalign(16, 8 * sizeof(*zbin_ptr_)));
     round_fp_ptr_ = reinterpret_cast<int16_t *>(
@@ -86,6 +97,9 @@
         vpx_memalign(16, 8 * sizeof(*quant_shift_ptr_)));
     dequant_ptr_ = reinterpret_cast<int16_t *>(
         vpx_memalign(16, 8 * sizeof(*dequant_ptr_)));
+
+    r_ptr_ = (is_fp_) ? round_fp_ptr_ : round_ptr_;
+    q_ptr_ = (is_fp_) ? quant_fp_ptr_ : quant_ptr_;
   }
 
   ~VP9QuantizeBase() {
@@ -118,6 +132,15 @@
   int max_value_;
   const int max_size_;
   const bool is_fp_;
+  Buffer<tran_low_t> coeff_;
+  Buffer<tran_low_t> qcoeff_;
+  Buffer<tran_low_t> dqcoeff_;
+  int16_t *r_ptr_;
+  int16_t *q_ptr_;
+  int count_;
+  int skip_block_;
+  const scan_order *scan_;
+  uint16_t eob_;
 };
 
 class VP9QuantizeTest : public VP9QuantizeBase,
@@ -128,10 +151,18 @@
         quantize_op_(GET_PARAM(0)), ref_quantize_op_(GET_PARAM(1)) {}
 
  protected:
+  virtual void Run();
   const QuantizeFunc quantize_op_;
   const QuantizeFunc ref_quantize_op_;
 };
 
+void VP9QuantizeTest::Run() {
+  quantize_op_(coeff_.TopLeftPixel(), count_, skip_block_, zbin_ptr_, r_ptr_,
+               q_ptr_, quant_shift_ptr_, qcoeff_.TopLeftPixel(),
+               dqcoeff_.TopLeftPixel(), dequant_ptr_, &eob_, scan_->scan,
+               scan_->iscan);
+}
+
 // This quantizer compares the AC coefficients to the quantization step size to
 // determine if further multiplication operations are needed.
 // Based on vp9_quantize_fp_sse2().
@@ -183,12 +214,15 @@
         tmp = clamp(abs_coeff[y] + _round, INT16_MIN, INT16_MAX);
         tmp = (tmp * quant_ptr[rc != 0]) >> (16 - is_32x32);
         qcoeff_ptr[rc] = (tmp ^ coeff_sign[y]) - coeff_sign[y];
-        dqcoeff_ptr[rc] = qcoeff_ptr[rc] * dequant_ptr[rc != 0];
+        dqcoeff_ptr[rc] =
+            static_cast<tran_low_t>(qcoeff_ptr[rc] * dequant_ptr[rc != 0]);
 
         if (is_32x32) {
-          dqcoeff_ptr[rc] = qcoeff_ptr[rc] * dequant_ptr[rc != 0] / 2;
+          dqcoeff_ptr[rc] = static_cast<tran_low_t>(qcoeff_ptr[rc] *
+                                                    dequant_ptr[rc != 0] / 2);
         } else {
-          dqcoeff_ptr[rc] = qcoeff_ptr[rc] * dequant_ptr[rc != 0];
+          dqcoeff_ptr[rc] =
+              static_cast<tran_low_t>(qcoeff_ptr[rc] * dequant_ptr[rc != 0]);
         }
       } else {
         qcoeff_ptr[rc] = 0;
@@ -269,19 +303,17 @@
 
 TEST_P(VP9QuantizeTest, OperationCheck) {
   ACMRandom rnd(ACMRandom::DeterministicSeed());
-  Buffer<tran_low_t> coeff = Buffer<tran_low_t>(max_size_, max_size_, 0, 16);
-  ASSERT_TRUE(coeff.Init());
-  Buffer<tran_low_t> qcoeff = Buffer<tran_low_t>(max_size_, max_size_, 0, 32);
-  ASSERT_TRUE(qcoeff.Init());
-  Buffer<tran_low_t> dqcoeff = Buffer<tran_low_t>(max_size_, max_size_, 0, 32);
-  ASSERT_TRUE(dqcoeff.Init());
+  ASSERT_TRUE(coeff_.Init());
+  ASSERT_TRUE(qcoeff_.Init());
+  ASSERT_TRUE(dqcoeff_.Init());
   Buffer<tran_low_t> ref_qcoeff =
       Buffer<tran_low_t>(max_size_, max_size_, 0, 32);
   ASSERT_TRUE(ref_qcoeff.Init());
   Buffer<tran_low_t> ref_dqcoeff =
       Buffer<tran_low_t>(max_size_, max_size_, 0, 32);
   ASSERT_TRUE(ref_dqcoeff.Init());
-  uint16_t eob, ref_eob;
+  uint16_t ref_eob = 0;
+  eob_ = 0;
 
   for (int i = 0; i < number_of_iterations; ++i) {
     // Test skip block for the first three iterations to catch all the different
@@ -294,33 +326,31 @@
       sz = TX_32X32;
     }
     const TX_TYPE tx_type = static_cast<TX_TYPE>((i >> 2) % 3);
-    const scan_order *scan_order = &vp9_scan_orders[sz][tx_type];
-    const int count = (4 << sz) * (4 << sz);
-    coeff.Set(&rnd, -max_value_, max_value_);
+    scan_ = &vp9_scan_orders[sz][tx_type];
+    count_ = (4 << sz) * (4 << sz);
+    coeff_.Set(&rnd, -max_value_, max_value_);
     GenerateHelperArrays(&rnd, zbin_ptr_, round_ptr_, quant_ptr_,
                          quant_shift_ptr_, dequant_ptr_, round_fp_ptr_,
                          quant_fp_ptr_);
-    int16_t *r_ptr = (is_fp_) ? round_fp_ptr_ : round_ptr_;
-    int16_t *q_ptr = (is_fp_) ? quant_fp_ptr_ : quant_ptr_;
-    ref_quantize_op_(coeff.TopLeftPixel(), count, skip_block, zbin_ptr_, r_ptr,
-                     q_ptr, quant_shift_ptr_, ref_qcoeff.TopLeftPixel(),
-                     ref_dqcoeff.TopLeftPixel(), dequant_ptr_, &ref_eob,
-                     scan_order->scan, scan_order->iscan);
+    ref_quantize_op_(coeff_.TopLeftPixel(), count_, skip_block, zbin_ptr_,
+                     r_ptr_, q_ptr_, quant_shift_ptr_,
+                     ref_qcoeff.TopLeftPixel(), ref_dqcoeff.TopLeftPixel(),
+                     dequant_ptr_, &ref_eob, scan_->scan, scan_->iscan);
 
     ASM_REGISTER_STATE_CHECK(quantize_op_(
-        coeff.TopLeftPixel(), count, skip_block, zbin_ptr_, r_ptr, q_ptr,
-        quant_shift_ptr_, qcoeff.TopLeftPixel(), dqcoeff.TopLeftPixel(),
-        dequant_ptr_, &eob, scan_order->scan, scan_order->iscan));
+        coeff_.TopLeftPixel(), count_, skip_block, zbin_ptr_, r_ptr_, q_ptr_,
+        quant_shift_ptr_, qcoeff_.TopLeftPixel(), dqcoeff_.TopLeftPixel(),
+        dequant_ptr_, &eob_, scan_->scan, scan_->iscan));
 
-    EXPECT_TRUE(qcoeff.CheckValues(ref_qcoeff));
-    EXPECT_TRUE(dqcoeff.CheckValues(ref_dqcoeff));
+    EXPECT_TRUE(qcoeff_.CheckValues(ref_qcoeff));
+    EXPECT_TRUE(dqcoeff_.CheckValues(ref_dqcoeff));
 
-    EXPECT_EQ(eob, ref_eob);
+    EXPECT_EQ(eob_, ref_eob);
 
     if (HasFailure()) {
       printf("Failure on iteration %d.\n", i);
-      qcoeff.PrintDifference(ref_qcoeff);
-      dqcoeff.PrintDifference(ref_dqcoeff);
+      qcoeff_.PrintDifference(ref_qcoeff);
+      dqcoeff_.PrintDifference(ref_dqcoeff);
       return;
     }
   }
@@ -328,22 +358,21 @@
 
 TEST_P(VP9QuantizeTest, EOBCheck) {
   ACMRandom rnd(ACMRandom::DeterministicSeed());
-  Buffer<tran_low_t> coeff = Buffer<tran_low_t>(max_size_, max_size_, 0, 16);
-  ASSERT_TRUE(coeff.Init());
-  Buffer<tran_low_t> qcoeff = Buffer<tran_low_t>(max_size_, max_size_, 0, 32);
-  ASSERT_TRUE(qcoeff.Init());
-  Buffer<tran_low_t> dqcoeff = Buffer<tran_low_t>(max_size_, max_size_, 0, 32);
-  ASSERT_TRUE(dqcoeff.Init());
+  ASSERT_TRUE(coeff_.Init());
+  ASSERT_TRUE(qcoeff_.Init());
+  ASSERT_TRUE(dqcoeff_.Init());
   Buffer<tran_low_t> ref_qcoeff =
       Buffer<tran_low_t>(max_size_, max_size_, 0, 32);
   ASSERT_TRUE(ref_qcoeff.Init());
   Buffer<tran_low_t> ref_dqcoeff =
       Buffer<tran_low_t>(max_size_, max_size_, 0, 32);
   ASSERT_TRUE(ref_dqcoeff.Init());
-  uint16_t eob, ref_eob;
+  uint16_t ref_eob = 0;
+  eob_ = 0;
+  const uint32_t max_index = max_size_ * max_size_ - 1;
 
   for (int i = 0; i < number_of_iterations; ++i) {
-    const int skip_block = 0;
+    skip_block_ = 0;
     TX_SIZE sz;
     if (max_size_ == 16) {
       sz = static_cast<TX_SIZE>(i % 3);  // TX_4X4, TX_8X8 TX_16X16
@@ -351,38 +380,36 @@
       sz = TX_32X32;
     }
     const TX_TYPE tx_type = static_cast<TX_TYPE>((i >> 2) % 3);
-    const scan_order *scan_order = &vp9_scan_orders[sz][tx_type];
-    int count = (4 << sz) * (4 << sz);
+    scan_ = &vp9_scan_orders[sz][tx_type];
+    count_ = (4 << sz) * (4 << sz);
     // Two random entries
-    coeff.Set(0);
-    coeff.TopLeftPixel()[rnd(count)] =
+    coeff_.Set(0);
+    coeff_.TopLeftPixel()[rnd.RandRange(count_) & max_index] =
         static_cast<int>(rnd.RandRange(max_value_ * 2)) - max_value_;
-    coeff.TopLeftPixel()[rnd(count)] =
+    coeff_.TopLeftPixel()[rnd.RandRange(count_) & max_index] =
         static_cast<int>(rnd.RandRange(max_value_ * 2)) - max_value_;
     GenerateHelperArrays(&rnd, zbin_ptr_, round_ptr_, quant_ptr_,
                          quant_shift_ptr_, dequant_ptr_, round_fp_ptr_,
                          quant_fp_ptr_);
-    int16_t *r_ptr = (is_fp_) ? round_fp_ptr_ : round_ptr_;
-    int16_t *q_ptr = (is_fp_) ? quant_fp_ptr_ : quant_ptr_;
-    ref_quantize_op_(coeff.TopLeftPixel(), count, skip_block, zbin_ptr_, r_ptr,
-                     q_ptr, quant_shift_ptr_, ref_qcoeff.TopLeftPixel(),
-                     ref_dqcoeff.TopLeftPixel(), dequant_ptr_, &ref_eob,
-                     scan_order->scan, scan_order->iscan);
+    ref_quantize_op_(coeff_.TopLeftPixel(), count_, skip_block_, zbin_ptr_,
+                     r_ptr_, q_ptr_, quant_shift_ptr_,
+                     ref_qcoeff.TopLeftPixel(), ref_dqcoeff.TopLeftPixel(),
+                     dequant_ptr_, &ref_eob, scan_->scan, scan_->iscan);
 
     ASM_REGISTER_STATE_CHECK(quantize_op_(
-        coeff.TopLeftPixel(), count, skip_block, zbin_ptr_, r_ptr, q_ptr,
-        quant_shift_ptr_, qcoeff.TopLeftPixel(), dqcoeff.TopLeftPixel(),
-        dequant_ptr_, &eob, scan_order->scan, scan_order->iscan));
+        coeff_.TopLeftPixel(), count_, skip_block_, zbin_ptr_, r_ptr_, q_ptr_,
+        quant_shift_ptr_, qcoeff_.TopLeftPixel(), dqcoeff_.TopLeftPixel(),
+        dequant_ptr_, &eob_, scan_->scan, scan_->iscan));
 
-    EXPECT_TRUE(qcoeff.CheckValues(ref_qcoeff));
-    EXPECT_TRUE(dqcoeff.CheckValues(ref_dqcoeff));
+    EXPECT_TRUE(qcoeff_.CheckValues(ref_qcoeff));
+    EXPECT_TRUE(dqcoeff_.CheckValues(ref_dqcoeff));
 
-    EXPECT_EQ(eob, ref_eob);
+    EXPECT_EQ(eob_, ref_eob);
 
     if (HasFailure()) {
       printf("Failure on iteration %d.\n", i);
-      qcoeff.PrintDifference(ref_qcoeff);
-      dqcoeff.PrintDifference(ref_dqcoeff);
+      qcoeff_.PrintDifference(ref_qcoeff);
+      dqcoeff_.PrintDifference(ref_dqcoeff);
       return;
     }
   }
@@ -390,13 +417,9 @@
 
 TEST_P(VP9QuantizeTest, DISABLED_Speed) {
   ACMRandom rnd(ACMRandom::DeterministicSeed());
-  Buffer<tran_low_t> coeff = Buffer<tran_low_t>(max_size_, max_size_, 0, 16);
-  ASSERT_TRUE(coeff.Init());
-  Buffer<tran_low_t> qcoeff = Buffer<tran_low_t>(max_size_, max_size_, 0, 32);
-  ASSERT_TRUE(qcoeff.Init());
-  Buffer<tran_low_t> dqcoeff = Buffer<tran_low_t>(max_size_, max_size_, 0, 32);
-  ASSERT_TRUE(dqcoeff.Init());
-  uint16_t eob;
+  ASSERT_TRUE(coeff_.Init());
+  ASSERT_TRUE(qcoeff_.Init());
+  ASSERT_TRUE(dqcoeff_.Init());
   TX_SIZE starting_sz, ending_sz;
 
   if (max_size_ == 16) {
@@ -410,18 +433,16 @@
   for (TX_SIZE sz = starting_sz; sz <= ending_sz; ++sz) {
     // zbin > coeff, zbin < coeff.
     for (int i = 0; i < 2; ++i) {
-      const int skip_block = 0;
+      skip_block_ = 0;
       // TX_TYPE defines the scan order. That is not relevant to the speed test.
       // Pick the first one.
       const TX_TYPE tx_type = DCT_DCT;
-      const scan_order *scan_order = &vp9_scan_orders[sz][tx_type];
-      const int count = (4 << sz) * (4 << sz);
+      count_ = (4 << sz) * (4 << sz);
+      scan_ = &vp9_scan_orders[sz][tx_type];
 
       GenerateHelperArrays(&rnd, zbin_ptr_, round_ptr_, quant_ptr_,
                            quant_shift_ptr_, dequant_ptr_, round_fp_ptr_,
                            quant_fp_ptr_);
-      int16_t *r_ptr = (is_fp_) ? round_fp_ptr_ : round_ptr_;
-      int16_t *q_ptr = (is_fp_) ? quant_fp_ptr_ : quant_ptr_;
 
       if (i == 0) {
         // When |coeff values| are less than zbin the results are 0.
@@ -432,40 +453,33 @@
           threshold = 200;
         }
         for (int j = 0; j < 8; ++j) zbin_ptr_[j] = threshold;
-        coeff.Set(&rnd, -99, 99);
+        coeff_.Set(&rnd, -99, 99);
       } else if (i == 1) {
         for (int j = 0; j < 8; ++j) zbin_ptr_[j] = 50;
-        coeff.Set(&rnd, -500, 500);
+        coeff_.Set(&rnd, -500, 500);
       }
 
-      vpx_usec_timer timer;
-      vpx_usec_timer_start(&timer);
-      for (int j = 0; j < 100000000 / count; ++j) {
-        quantize_op_(coeff.TopLeftPixel(), count, skip_block, zbin_ptr_, r_ptr,
-                     q_ptr, quant_shift_ptr_, qcoeff.TopLeftPixel(),
-                     dqcoeff.TopLeftPixel(), dequant_ptr_, &eob,
-                     scan_order->scan, scan_order->iscan);
-      }
-      vpx_usec_timer_mark(&timer);
-      const int elapsed_time = static_cast<int>(vpx_usec_timer_elapsed(&timer));
-      if (i == 0) printf("Bypass calculations.\n");
-      if (i == 1) printf("Full calculations.\n");
-      printf("Quantize %dx%d time: %5d ms\n", 4 << sz, 4 << sz,
-             elapsed_time / 1000);
+      RunNTimes(10000000 / count_);
+      const char *type =
+          (i == 0) ? "Bypass calculations " : "Full calculations ";
+      char block_size[16];
+      snprintf(block_size, sizeof(block_size), "%dx%d", 4 << sz, 4 << sz);
+      char title[100];
+      snprintf(title, sizeof(title), "%25s %8s ", type, block_size);
+      PrintMedian(title);
     }
-    printf("\n");
   }
 }
 
-using std::tr1::make_tuple;
+using std::make_tuple;
 
 #if HAVE_SSE2
 #if CONFIG_VP9_HIGHBITDEPTH
-// TODO(johannkoenig): Fix vpx_quantize_b_sse2 in highbitdepth builds.
-// make_tuple(&vpx_quantize_b_sse2, &vpx_highbd_quantize_b_c, VPX_BITS_8),
 INSTANTIATE_TEST_CASE_P(
     SSE2, VP9QuantizeTest,
     ::testing::Values(
+        make_tuple(&vpx_quantize_b_sse2, &vpx_quantize_b_c, VPX_BITS_8, 16,
+                   false),
         make_tuple(&vpx_highbd_quantize_b_sse2, &vpx_highbd_quantize_b_c,
                    VPX_BITS_8, 16, false),
         make_tuple(&vpx_highbd_quantize_b_sse2, &vpx_highbd_quantize_b_c,
@@ -490,12 +504,15 @@
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 #endif  // HAVE_SSE2
 
-#if HAVE_SSSE3 && !CONFIG_VP9_HIGHBITDEPTH
-#if ARCH_X86_64
+#if HAVE_SSSE3
+#if VPX_ARCH_X86_64
 INSTANTIATE_TEST_CASE_P(
     SSSE3, VP9QuantizeTest,
     ::testing::Values(make_tuple(&vpx_quantize_b_ssse3, &vpx_quantize_b_c,
                                  VPX_BITS_8, 16, false),
+                      make_tuple(&vpx_quantize_b_32x32_ssse3,
+                                 &vpx_quantize_b_32x32_c, VPX_BITS_8, 32,
+                                 false),
                       make_tuple(&QuantFPWrapper<vp9_quantize_fp_ssse3>,
                                  &QuantFPWrapper<quantize_fp_nz_c>, VPX_BITS_8,
                                  16, true),
@@ -503,45 +520,36 @@
                                  &QuantFPWrapper<quantize_fp_32x32_nz_c>,
                                  VPX_BITS_8, 32, true)));
 #else
-INSTANTIATE_TEST_CASE_P(SSSE3, VP9QuantizeTest,
-                        ::testing::Values(make_tuple(&vpx_quantize_b_ssse3,
-                                                     &vpx_quantize_b_c,
-                                                     VPX_BITS_8, 16, false)));
-#endif
-
-#if ARCH_X86_64
-// TODO(johannkoenig): SSSE3 optimizations do not yet pass this test.
-INSTANTIATE_TEST_CASE_P(DISABLED_SSSE3, VP9QuantizeTest,
-                        ::testing::Values(make_tuple(
-                            &vpx_quantize_b_32x32_ssse3,
-                            &vpx_quantize_b_32x32_c, VPX_BITS_8, 32, false)));
-#endif  // ARCH_X86_64
-#endif  // HAVE_SSSE3 && !CONFIG_VP9_HIGHBITDEPTH
-
-// TODO(johannkoenig): AVX optimizations do not yet pass the 32x32 test or
-// highbitdepth configurations.
-#if HAVE_AVX && !CONFIG_VP9_HIGHBITDEPTH
 INSTANTIATE_TEST_CASE_P(
-    AVX, VP9QuantizeTest,
-    ::testing::Values(make_tuple(&vpx_quantize_b_avx, &vpx_quantize_b_c,
+    SSSE3, VP9QuantizeTest,
+    ::testing::Values(make_tuple(&vpx_quantize_b_ssse3, &vpx_quantize_b_c,
                                  VPX_BITS_8, 16, false),
-                      // Even though SSSE3 and AVX do not match the reference
-                      // code, we can keep them in sync with each other.
-                      make_tuple(&vpx_quantize_b_32x32_avx,
-                                 &vpx_quantize_b_32x32_ssse3, VPX_BITS_8, 32,
+                      make_tuple(&vpx_quantize_b_32x32_ssse3,
+                                 &vpx_quantize_b_32x32_c, VPX_BITS_8, 32,
                                  false)));
-#endif  // HAVE_AVX && !CONFIG_VP9_HIGHBITDEPTH
 
-#if ARCH_X86_64 && HAVE_AVX2
+#endif  // VPX_ARCH_X86_64
+#endif  // HAVE_SSSE3
+
+#if HAVE_AVX
+INSTANTIATE_TEST_CASE_P(AVX, VP9QuantizeTest,
+                        ::testing::Values(make_tuple(&vpx_quantize_b_avx,
+                                                     &vpx_quantize_b_c,
+                                                     VPX_BITS_8, 16, false),
+                                          make_tuple(&vpx_quantize_b_32x32_avx,
+                                                     &vpx_quantize_b_32x32_c,
+                                                     VPX_BITS_8, 32, false)));
+#endif  // HAVE_AVX
+
+#if VPX_ARCH_X86_64 && HAVE_AVX2
 INSTANTIATE_TEST_CASE_P(
     AVX2, VP9QuantizeTest,
     ::testing::Values(make_tuple(&QuantFPWrapper<vp9_quantize_fp_avx2>,
                                  &QuantFPWrapper<quantize_fp_nz_c>, VPX_BITS_8,
                                  16, true)));
-#endif  // HAVE_AVX2 && !CONFIG_VP9_HIGHBITDEPTH
+#endif  // HAVE_AVX2
 
-// TODO(webm:1448): dqcoeff is not handled correctly in HBD builds.
-#if HAVE_NEON && !CONFIG_VP9_HIGHBITDEPTH
+#if HAVE_NEON
 INSTANTIATE_TEST_CASE_P(
     NEON, VP9QuantizeTest,
     ::testing::Values(make_tuple(&vpx_quantize_b_neon, &vpx_quantize_b_c,
@@ -555,7 +563,23 @@
                       make_tuple(&QuantFPWrapper<vp9_quantize_fp_32x32_neon>,
                                  &QuantFPWrapper<vp9_quantize_fp_32x32_c>,
                                  VPX_BITS_8, 32, true)));
-#endif  // HAVE_NEON && !CONFIG_VP9_HIGHBITDEPTH
+#endif  // HAVE_NEON
+
+#if HAVE_VSX && !CONFIG_VP9_HIGHBITDEPTH
+INSTANTIATE_TEST_CASE_P(
+    VSX, VP9QuantizeTest,
+    ::testing::Values(make_tuple(&vpx_quantize_b_vsx, &vpx_quantize_b_c,
+                                 VPX_BITS_8, 16, false),
+                      make_tuple(&vpx_quantize_b_32x32_vsx,
+                                 &vpx_quantize_b_32x32_c, VPX_BITS_8, 32,
+                                 false),
+                      make_tuple(&QuantFPWrapper<vp9_quantize_fp_vsx>,
+                                 &QuantFPWrapper<vp9_quantize_fp_c>, VPX_BITS_8,
+                                 16, true),
+                      make_tuple(&QuantFPWrapper<vp9_quantize_fp_32x32_vsx>,
+                                 &QuantFPWrapper<vp9_quantize_fp_32x32_c>,
+                                 VPX_BITS_8, 32, true)));
+#endif  // HAVE_VSX && !CONFIG_VP9_HIGHBITDEPTH
 
 // Only useful to compare "Speed" test results.
 INSTANTIATE_TEST_CASE_P(
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_spatial_svc_encoder.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_spatial_svc_encoder.sh
deleted file mode 100644
index 65031073..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_spatial_svc_encoder.sh
+++ /dev/null
@@ -1,72 +0,0 @@
-#!/bin/sh
-##
-##  Copyright (c) 2014 The WebM project authors. All Rights Reserved.
-##
-##  Use of this source code is governed by a BSD-style license
-##  that can be found in the LICENSE file in the root of the source
-##  tree. An additional intellectual property rights grant can be found
-##  in the file PATENTS.  All contributing project authors may
-##  be found in the AUTHORS file in the root of the source tree.
-##
-##  This file tests the libvpx vp9_spatial_svc_encoder example. To add new
-##  tests to to this file, do the following:
-##    1. Write a shell function (this is your test).
-##    2. Add the function to vp9_spatial_svc_tests (on a new line).
-##
-. $(dirname $0)/tools_common.sh
-
-# Environment check: $YUV_RAW_INPUT is required.
-vp9_spatial_svc_encoder_verify_environment() {
-  if [ ! -e "${YUV_RAW_INPUT}" ]; then
-    echo "Libvpx test data must exist in LIBVPX_TEST_DATA_PATH."
-    return 1
-  fi
-}
-
-# Runs vp9_spatial_svc_encoder. $1 is the test name.
-vp9_spatial_svc_encoder() {
-  local readonly \
-    encoder="${LIBVPX_BIN_PATH}/vp9_spatial_svc_encoder${VPX_TEST_EXE_SUFFIX}"
-  local readonly test_name="$1"
-  local readonly \
-    output_file="${VPX_TEST_OUTPUT_DIR}/vp9_ssvc_encoder${test_name}.ivf"
-  local readonly frames_to_encode=10
-  local readonly max_kf=9999
-
-  shift
-
-  if [ ! -x "${encoder}" ]; then
-    elog "${encoder} does not exist or is not executable."
-    return 1
-  fi
-
-  eval "${VPX_TEST_PREFIX}" "${encoder}" -w "${YUV_RAW_INPUT_WIDTH}" \
-    -h "${YUV_RAW_INPUT_HEIGHT}" -k "${max_kf}" -f "${frames_to_encode}" \
-    "$@" "${YUV_RAW_INPUT}" "${output_file}" ${devnull}
-
-  [ -e "${output_file}" ] || return 1
-}
-
-# Each test is run with layer count 1-$vp9_ssvc_test_layers.
-vp9_ssvc_test_layers=5
-
-vp9_spatial_svc() {
-  if [ "$(vp9_encode_available)" = "yes" ]; then
-    local readonly test_name="vp9_spatial_svc"
-    for layers in $(seq 1 ${vp9_ssvc_test_layers}); do
-      vp9_spatial_svc_encoder "${test_name}" -sl ${layers}
-    done
-  fi
-}
-
-readonly vp9_spatial_svc_tests="DISABLED_vp9_spatial_svc_mode_i
-                                DISABLED_vp9_spatial_svc_mode_altip
-                                DISABLED_vp9_spatial_svc_mode_ip
-                                DISABLED_vp9_spatial_svc_mode_gf
-                                vp9_spatial_svc"
-
-if [ "$(vpx_config_option_enabled CONFIG_SPATIAL_SVC)" = "yes" ]; then
-  run_tests \
-    vp9_spatial_svc_encoder_verify_environment \
-    "${vp9_spatial_svc_tests}"
-fi
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_subtract_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_subtract_test.cc
index 62845ad..67e8de6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_subtract_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_subtract_test.cc
@@ -14,9 +14,11 @@
 #include "./vpx_config.h"
 #include "./vpx_dsp_rtcd.h"
 #include "test/acm_random.h"
+#include "test/bench.h"
 #include "test/clear_system_state.h"
 #include "test/register_state_check.h"
 #include "vp9/common/vp9_blockd.h"
+#include "vpx_ports/msvc.h"
 #include "vpx_mem/vpx_mem.h"
 
 typedef void (*SubtractFunc)(int rows, int cols, int16_t *diff_ptr,
@@ -26,62 +28,101 @@
 
 namespace vp9 {
 
-class VP9SubtractBlockTest : public ::testing::TestWithParam<SubtractFunc> {
+class VP9SubtractBlockTest : public AbstractBench,
+                             public ::testing::TestWithParam<SubtractFunc> {
  public:
   virtual void TearDown() { libvpx_test::ClearSystemState(); }
+
+ protected:
+  virtual void Run() {
+    GetParam()(block_height_, block_width_, diff_, block_width_, src_,
+               block_width_, pred_, block_width_);
+  }
+
+  void SetupBlocks(BLOCK_SIZE bsize) {
+    block_width_ = 4 * num_4x4_blocks_wide_lookup[bsize];
+    block_height_ = 4 * num_4x4_blocks_high_lookup[bsize];
+    diff_ = reinterpret_cast<int16_t *>(
+        vpx_memalign(16, sizeof(*diff_) * block_width_ * block_height_ * 2));
+    pred_ = reinterpret_cast<uint8_t *>(
+        vpx_memalign(16, block_width_ * block_height_ * 2));
+    src_ = reinterpret_cast<uint8_t *>(
+        vpx_memalign(16, block_width_ * block_height_ * 2));
+  }
+
+  int block_width_;
+  int block_height_;
+  int16_t *diff_;
+  uint8_t *pred_;
+  uint8_t *src_;
 };
 
 using libvpx_test::ACMRandom;
 
+TEST_P(VP9SubtractBlockTest, DISABLED_Speed) {
+  ACMRandom rnd(ACMRandom::DeterministicSeed());
+
+  for (BLOCK_SIZE bsize = BLOCK_4X4; bsize < BLOCK_SIZES;
+       bsize = static_cast<BLOCK_SIZE>(static_cast<int>(bsize) + 1)) {
+    SetupBlocks(bsize);
+
+    RunNTimes(100000000 / (block_height_ * block_width_));
+    char block_size[16];
+    snprintf(block_size, sizeof(block_size), "%dx%d", block_height_,
+             block_width_);
+    char title[100];
+    snprintf(title, sizeof(title), "%8s ", block_size);
+    PrintMedian(title);
+
+    vpx_free(diff_);
+    vpx_free(pred_);
+    vpx_free(src_);
+  }
+}
+
 TEST_P(VP9SubtractBlockTest, SimpleSubtract) {
   ACMRandom rnd(ACMRandom::DeterministicSeed());
 
-  // FIXME(rbultje) split in its own file
   for (BLOCK_SIZE bsize = BLOCK_4X4; bsize < BLOCK_SIZES;
        bsize = static_cast<BLOCK_SIZE>(static_cast<int>(bsize) + 1)) {
-    const int block_width = 4 * num_4x4_blocks_wide_lookup[bsize];
-    const int block_height = 4 * num_4x4_blocks_high_lookup[bsize];
-    int16_t *diff = reinterpret_cast<int16_t *>(
-        vpx_memalign(16, sizeof(*diff) * block_width * block_height * 2));
-    uint8_t *pred = reinterpret_cast<uint8_t *>(
-        vpx_memalign(16, block_width * block_height * 2));
-    uint8_t *src = reinterpret_cast<uint8_t *>(
-        vpx_memalign(16, block_width * block_height * 2));
+    SetupBlocks(bsize);
 
     for (int n = 0; n < 100; n++) {
-      for (int r = 0; r < block_height; ++r) {
-        for (int c = 0; c < block_width * 2; ++c) {
-          src[r * block_width * 2 + c] = rnd.Rand8();
-          pred[r * block_width * 2 + c] = rnd.Rand8();
+      for (int r = 0; r < block_height_; ++r) {
+        for (int c = 0; c < block_width_ * 2; ++c) {
+          src_[r * block_width_ * 2 + c] = rnd.Rand8();
+          pred_[r * block_width_ * 2 + c] = rnd.Rand8();
         }
       }
 
-      GetParam()(block_height, block_width, diff, block_width, src, block_width,
-                 pred, block_width);
+      GetParam()(block_height_, block_width_, diff_, block_width_, src_,
+                 block_width_, pred_, block_width_);
 
-      for (int r = 0; r < block_height; ++r) {
-        for (int c = 0; c < block_width; ++c) {
-          EXPECT_EQ(diff[r * block_width + c],
-                    (src[r * block_width + c] - pred[r * block_width + c]))
-              << "r = " << r << ", c = " << c << ", bs = " << bsize;
+      for (int r = 0; r < block_height_; ++r) {
+        for (int c = 0; c < block_width_; ++c) {
+          EXPECT_EQ(diff_[r * block_width_ + c],
+                    (src_[r * block_width_ + c] - pred_[r * block_width_ + c]))
+              << "r = " << r << ", c = " << c
+              << ", bs = " << static_cast<int>(bsize);
         }
       }
 
-      GetParam()(block_height, block_width, diff, block_width * 2, src,
-                 block_width * 2, pred, block_width * 2);
+      GetParam()(block_height_, block_width_, diff_, block_width_ * 2, src_,
+                 block_width_ * 2, pred_, block_width_ * 2);
 
-      for (int r = 0; r < block_height; ++r) {
-        for (int c = 0; c < block_width; ++c) {
-          EXPECT_EQ(
-              diff[r * block_width * 2 + c],
-              (src[r * block_width * 2 + c] - pred[r * block_width * 2 + c]))
-              << "r = " << r << ", c = " << c << ", bs = " << bsize;
+      for (int r = 0; r < block_height_; ++r) {
+        for (int c = 0; c < block_width_; ++c) {
+          EXPECT_EQ(diff_[r * block_width_ * 2 + c],
+                    (src_[r * block_width_ * 2 + c] -
+                     pred_[r * block_width_ * 2 + c]))
+              << "r = " << r << ", c = " << c
+              << ", bs = " << static_cast<int>(bsize);
         }
       }
     }
-    vpx_free(diff);
-    vpx_free(pred);
-    vpx_free(src);
+    vpx_free(diff_);
+    vpx_free(pred_);
+    vpx_free(src_);
   }
 }
 
@@ -106,4 +147,9 @@
                         ::testing::Values(vpx_subtract_block_mmi));
 #endif
 
+#if HAVE_VSX
+INSTANTIATE_TEST_CASE_P(VSX, VP9SubtractBlockTest,
+                        ::testing::Values(vpx_subtract_block_vsx));
+#endif
+
 }  // namespace vp9
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_thread_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_thread_test.cc
index 8eff3f1..31b6fe5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_thread_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vp9_thread_test.cc
@@ -196,6 +196,7 @@
 // Note any worker that requires synchronization between other workers will
 // hang.
 namespace impl {
+namespace {
 
 void Init(VPxWorker *const worker) { memset(worker, 0, sizeof(*worker)); }
 int Reset(VPxWorker *const /*worker*/) { return 1; }
@@ -208,6 +209,7 @@
 void Launch(VPxWorker *const worker) { Execute(worker); }
 void End(VPxWorker *const /*worker*/) {}
 
+}  // namespace
 }  // namespace impl
 
 TEST(VPxWorkerThreadTest, TestSerialInterface) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpx_scale_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpx_scale_test.cc
index ac75dce..057a2a2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpx_scale_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpx_scale_test.cc
@@ -20,6 +20,15 @@
 #include "vpx_scale/yv12config.h"
 
 namespace libvpx_test {
+namespace {
+
+#if VPX_ARCH_ARM || (VPX_ARCH_MIPS && !HAVE_MIPS64) || VPX_ARCH_X86
+// Avoid OOM failures on 32-bit platforms.
+const int kNumSizesToTest = 7;
+#else
+const int kNumSizesToTest = 8;
+#endif
+const int kSizesToTest[] = { 1, 15, 33, 145, 512, 1025, 3840, 16383 };
 
 typedef void (*ExtendFrameBorderFunc)(YV12_BUFFER_CONFIG *ybf);
 typedef void (*CopyFrameFunc)(const YV12_BUFFER_CONFIG *src_ybf,
@@ -37,13 +46,6 @@
   void ExtendBorder() { ASM_REGISTER_STATE_CHECK(extend_fn_(&img_)); }
 
   void RunTest() {
-#if ARCH_ARM
-    // Some arm devices OOM when trying to allocate the largest buffers.
-    static const int kNumSizesToTest = 6;
-#else
-    static const int kNumSizesToTest = 7;
-#endif
-    static const int kSizesToTest[] = { 1, 15, 33, 145, 512, 1025, 16383 };
     for (int h = 0; h < kNumSizesToTest; ++h) {
       for (int w = 0; w < kNumSizesToTest; ++w) {
         ASSERT_NO_FATAL_FAILURE(ResetImages(kSizesToTest[w], kSizesToTest[h]));
@@ -76,13 +78,6 @@
   }
 
   void RunTest() {
-#if ARCH_ARM
-    // Some arm devices OOM when trying to allocate the largest buffers.
-    static const int kNumSizesToTest = 6;
-#else
-    static const int kNumSizesToTest = 7;
-#endif
-    static const int kSizesToTest[] = { 1, 15, 33, 145, 512, 1025, 16383 };
     for (int h = 0; h < kNumSizesToTest; ++h) {
       for (int w = 0; w < kNumSizesToTest; ++w) {
         ASSERT_NO_FATAL_FAILURE(ResetImages(kSizesToTest[w], kSizesToTest[h]));
@@ -102,4 +97,5 @@
 INSTANTIATE_TEST_CASE_P(C, CopyFrameTest,
                         ::testing::Values(vp8_yv12_copy_frame_c));
 
+}  // namespace
 }  // namespace libvpx_test
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpx_scale_test.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpx_scale_test.h
index dcbd02b..11c259a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpx_scale_test.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpx_scale_test.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef TEST_VPX_SCALE_TEST_H_
-#define TEST_VPX_SCALE_TEST_H_
+#ifndef VPX_TEST_VPX_SCALE_TEST_H_
+#define VPX_TEST_VPX_SCALE_TEST_H_
 
 #include "third_party/googletest/src/include/gtest/gtest.h"
 
@@ -33,7 +33,8 @@
                   const int height) {
     memset(img, 0, sizeof(*img));
     ASSERT_EQ(
-        0, vp8_yv12_alloc_frame_buffer(img, width, height, VP8BORDERINPIXELS));
+        0, vp8_yv12_alloc_frame_buffer(img, width, height, VP8BORDERINPIXELS))
+        << "for width: " << width << " height: " << height;
     memset(img->buffer_alloc, kBufFiller, img->frame_size);
   }
 
@@ -197,4 +198,4 @@
 
 }  // namespace libvpx_test
 
-#endif  // TEST_VPX_SCALE_TEST_H_
+#endif  // VPX_TEST_VPX_SCALE_TEST_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpx_temporal_svc_encoder.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpx_temporal_svc_encoder.sh
old mode 100644
new mode 100755
index 56a7902..5e5bac8
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpx_temporal_svc_encoder.sh
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpx_temporal_svc_encoder.sh
@@ -38,6 +38,7 @@
   local output_file="${VPX_TEST_OUTPUT_DIR}/${output_file_base}"
   local timebase_num="1"
   local timebase_den="1000"
+  local timebase_den_y4m="30"
   local speed="6"
   local frame_drop_thresh="30"
   local max_threads="4"
@@ -58,6 +59,12 @@
         "${YUV_RAW_INPUT_HEIGHT}" "${timebase_num}" "${timebase_den}" \
         "${speed}" "${frame_drop_thresh}" "${error_resilient}" "${threads}" \
         "$@" ${devnull}
+      # Test for y4m input.
+      eval "${VPX_TEST_PREFIX}" "${encoder}" "${Y4M_720P_INPUT}" \
+        "${output_file}" "${codec}" "${Y4M_720P_INPUT_WIDTH}" \
+        "${Y4M_720P_INPUT_HEIGHT}" "${timebase_num}" "${timebase_den_y4m}" \
+        "${speed}" "${frame_drop_thresh}" "${error_resilient}" "${threads}" \
+        "$@" ${devnull}
     else
       eval "${VPX_TEST_PREFIX}" "${encoder}" "${YUV_RAW_INPUT}" \
         "${output_file}" "${codec}" "${YUV_RAW_INPUT_WIDTH}" \
@@ -85,7 +92,7 @@
 
 vpx_tsvc_encoder_vp8_mode_0() {
   if [ "$(vp8_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp8_mode_0"
+    local output_basename="vpx_tsvc_encoder_vp8_mode_0"
     vpx_tsvc_encoder vp8 "${output_basename}" 0 200 || return 1
     # Mode 0 produces 1 stream
     files_exist "${output_basename}" 1 || return 1
@@ -94,7 +101,7 @@
 
 vpx_tsvc_encoder_vp8_mode_1() {
   if [ "$(vp8_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp8_mode_1"
+    local output_basename="vpx_tsvc_encoder_vp8_mode_1"
     vpx_tsvc_encoder vp8 "${output_basename}" 1 200 400 || return 1
     # Mode 1 produces 2 streams
     files_exist "${output_basename}" 2 || return 1
@@ -103,7 +110,7 @@
 
 vpx_tsvc_encoder_vp8_mode_2() {
   if [ "$(vp8_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp8_mode_2"
+    local output_basename="vpx_tsvc_encoder_vp8_mode_2"
     vpx_tsvc_encoder vp8 "${output_basename}" 2 200 400 || return 1
     # Mode 2 produces 2 streams
     files_exist "${output_basename}" 2 || return 1
@@ -112,7 +119,7 @@
 
 vpx_tsvc_encoder_vp8_mode_3() {
   if [ "$(vp8_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp8_mode_3"
+    local output_basename="vpx_tsvc_encoder_vp8_mode_3"
     vpx_tsvc_encoder vp8 "${output_basename}" 3 200 400 600 || return 1
     # Mode 3 produces 3 streams
     files_exist "${output_basename}" 3 || return 1
@@ -121,7 +128,7 @@
 
 vpx_tsvc_encoder_vp8_mode_4() {
   if [ "$(vp8_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp8_mode_4"
+    local output_basename="vpx_tsvc_encoder_vp8_mode_4"
     vpx_tsvc_encoder vp8 "${output_basename}" 4 200 400 600 || return 1
     # Mode 4 produces 3 streams
     files_exist "${output_basename}" 3 || return 1
@@ -130,7 +137,7 @@
 
 vpx_tsvc_encoder_vp8_mode_5() {
   if [ "$(vp8_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp8_mode_5"
+    local output_basename="vpx_tsvc_encoder_vp8_mode_5"
     vpx_tsvc_encoder vp8 "${output_basename}" 5 200 400 600 || return 1
     # Mode 5 produces 3 streams
     files_exist "${output_basename}" 3 || return 1
@@ -139,7 +146,7 @@
 
 vpx_tsvc_encoder_vp8_mode_6() {
   if [ "$(vp8_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp8_mode_6"
+    local output_basename="vpx_tsvc_encoder_vp8_mode_6"
     vpx_tsvc_encoder vp8 "${output_basename}" 6 200 400 600 || return 1
     # Mode 6 produces 3 streams
     files_exist "${output_basename}" 3 || return 1
@@ -148,7 +155,7 @@
 
 vpx_tsvc_encoder_vp8_mode_7() {
   if [ "$(vp8_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp8_mode_7"
+    local output_basename="vpx_tsvc_encoder_vp8_mode_7"
     vpx_tsvc_encoder vp8 "${output_basename}" 7 200 400 600 800 1000 || return 1
     # Mode 7 produces 5 streams
     files_exist "${output_basename}" 5 || return 1
@@ -157,7 +164,7 @@
 
 vpx_tsvc_encoder_vp8_mode_8() {
   if [ "$(vp8_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp8_mode_8"
+    local output_basename="vpx_tsvc_encoder_vp8_mode_8"
     vpx_tsvc_encoder vp8 "${output_basename}" 8 200 400 || return 1
     # Mode 8 produces 2 streams
     files_exist "${output_basename}" 2 || return 1
@@ -166,7 +173,7 @@
 
 vpx_tsvc_encoder_vp8_mode_9() {
   if [ "$(vp8_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp8_mode_9"
+    local output_basename="vpx_tsvc_encoder_vp8_mode_9"
     vpx_tsvc_encoder vp8 "${output_basename}" 9 200 400 600 || return 1
     # Mode 9 produces 3 streams
     files_exist "${output_basename}" 3 || return 1
@@ -175,7 +182,7 @@
 
 vpx_tsvc_encoder_vp8_mode_10() {
   if [ "$(vp8_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp8_mode_10"
+    local output_basename="vpx_tsvc_encoder_vp8_mode_10"
     vpx_tsvc_encoder vp8 "${output_basename}" 10 200 400 600 || return 1
     # Mode 10 produces 3 streams
     files_exist "${output_basename}" 3 || return 1
@@ -184,7 +191,7 @@
 
 vpx_tsvc_encoder_vp8_mode_11() {
   if [ "$(vp8_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp8_mode_11"
+    local output_basename="vpx_tsvc_encoder_vp8_mode_11"
     vpx_tsvc_encoder vp8 "${output_basename}" 11 200 400 600 || return 1
     # Mode 11 produces 3 streams
     files_exist "${output_basename}" 3 || return 1
@@ -193,7 +200,7 @@
 
 vpx_tsvc_encoder_vp9_mode_0() {
   if [ "$(vp9_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp9_mode_0"
+    local output_basename="vpx_tsvc_encoder_vp9_mode_0"
     vpx_tsvc_encoder vp9 "${output_basename}" 0 200 || return 1
     # Mode 0 produces 1 stream
     files_exist "${output_basename}" 1 || return 1
@@ -202,7 +209,7 @@
 
 vpx_tsvc_encoder_vp9_mode_1() {
   if [ "$(vp9_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp9_mode_1"
+    local output_basename="vpx_tsvc_encoder_vp9_mode_1"
     vpx_tsvc_encoder vp9 "${output_basename}" 1 200 400 || return 1
     # Mode 1 produces 2 streams
     files_exist "${output_basename}" 2 || return 1
@@ -211,7 +218,7 @@
 
 vpx_tsvc_encoder_vp9_mode_2() {
   if [ "$(vp9_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp9_mode_2"
+    local output_basename="vpx_tsvc_encoder_vp9_mode_2"
     vpx_tsvc_encoder vp9 "${output_basename}" 2 200 400 || return 1
     # Mode 2 produces 2 streams
     files_exist "${output_basename}" 2 || return 1
@@ -220,7 +227,7 @@
 
 vpx_tsvc_encoder_vp9_mode_3() {
   if [ "$(vp9_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp9_mode_3"
+    local output_basename="vpx_tsvc_encoder_vp9_mode_3"
     vpx_tsvc_encoder vp9 "${output_basename}" 3 200 400 600 || return 1
     # Mode 3 produces 3 streams
     files_exist "${output_basename}" 3 || return 1
@@ -229,7 +236,7 @@
 
 vpx_tsvc_encoder_vp9_mode_4() {
   if [ "$(vp9_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp9_mode_4"
+    local output_basename="vpx_tsvc_encoder_vp9_mode_4"
     vpx_tsvc_encoder vp9 "${output_basename}" 4 200 400 600 || return 1
     # Mode 4 produces 3 streams
     files_exist "${output_basename}" 3 || return 1
@@ -238,7 +245,7 @@
 
 vpx_tsvc_encoder_vp9_mode_5() {
   if [ "$(vp9_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp9_mode_5"
+    local output_basename="vpx_tsvc_encoder_vp9_mode_5"
     vpx_tsvc_encoder vp9 "${output_basename}" 5 200 400 600 || return 1
     # Mode 5 produces 3 streams
     files_exist "${output_basename}" 3 || return 1
@@ -247,7 +254,7 @@
 
 vpx_tsvc_encoder_vp9_mode_6() {
   if [ "$(vp9_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp9_mode_6"
+    local output_basename="vpx_tsvc_encoder_vp9_mode_6"
     vpx_tsvc_encoder vp9 "${output_basename}" 6 200 400 600 || return 1
     # Mode 6 produces 3 streams
     files_exist "${output_basename}" 3 || return 1
@@ -256,7 +263,7 @@
 
 vpx_tsvc_encoder_vp9_mode_7() {
   if [ "$(vp9_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp9_mode_7"
+    local output_basename="vpx_tsvc_encoder_vp9_mode_7"
     vpx_tsvc_encoder vp9 "${output_basename}" 7 200 400 600 800 1000 || return 1
     # Mode 7 produces 5 streams
     files_exist "${output_basename}" 5 || return 1
@@ -265,7 +272,7 @@
 
 vpx_tsvc_encoder_vp9_mode_8() {
   if [ "$(vp9_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp9_mode_8"
+    local output_basename="vpx_tsvc_encoder_vp9_mode_8"
     vpx_tsvc_encoder vp9 "${output_basename}" 8 200 400 || return 1
     # Mode 8 produces 2 streams
     files_exist "${output_basename}" 2 || return 1
@@ -274,7 +281,7 @@
 
 vpx_tsvc_encoder_vp9_mode_9() {
   if [ "$(vp9_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp9_mode_9"
+    local output_basename="vpx_tsvc_encoder_vp9_mode_9"
     vpx_tsvc_encoder vp9 "${output_basename}" 9 200 400 600 || return 1
     # Mode 9 produces 3 streams
     files_exist "${output_basename}" 3 || return 1
@@ -283,7 +290,7 @@
 
 vpx_tsvc_encoder_vp9_mode_10() {
   if [ "$(vp9_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp9_mode_10"
+    local output_basename="vpx_tsvc_encoder_vp9_mode_10"
     vpx_tsvc_encoder vp9 "${output_basename}" 10 200 400 600 || return 1
     # Mode 10 produces 3 streams
     files_exist "${output_basename}" 3 || return 1
@@ -292,7 +299,7 @@
 
 vpx_tsvc_encoder_vp9_mode_11() {
   if [ "$(vp9_encode_available)" = "yes" ]; then
-    local readonly output_basename="vpx_tsvc_encoder_vp9_mode_11"
+    local output_basename="vpx_tsvc_encoder_vp9_mode_11"
     vpx_tsvc_encoder vp9 "${output_basename}" 11 200 400 600 || return 1
     # Mode 11 produces 3 streams
     files_exist "${output_basename}" 3 || return 1
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpxdec.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpxdec.sh
old mode 100644
new mode 100755
index de51c80..044aa7e
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpxdec.sh
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpxdec.sh
@@ -18,7 +18,8 @@
 vpxdec_verify_environment() {
   if [ ! -e "${VP8_IVF_FILE}" ] || [ ! -e "${VP9_WEBM_FILE}" ] || \
     [ ! -e "${VP9_FPM_WEBM_FILE}" ] || \
-    [ ! -e "${VP9_LT_50_FRAMES_WEBM_FILE}" ] ; then
+    [ ! -e "${VP9_LT_50_FRAMES_WEBM_FILE}" ] || \
+    [ ! -e "${VP9_RAW_FILE}" ]; then
     elog "Libvpx test data must exist in LIBVPX_TEST_DATA_PATH."
     return 1
   fi
@@ -33,8 +34,8 @@
 # input file path and shifted away. All remaining parameters are passed through
 # to vpxdec.
 vpxdec_pipe() {
-  local readonly decoder="$(vpx_tool_path vpxdec)"
-  local readonly input="$1"
+  local decoder="$(vpx_tool_path vpxdec)"
+  local input="$1"
   shift
   cat "${input}" | eval "${VPX_TEST_PREFIX}" "${decoder}" - "$@" ${devnull}
 }
@@ -43,8 +44,8 @@
 # the directory containing vpxdec. $1 one is used as the input file path and
 # shifted away. All remaining parameters are passed through to vpxdec.
 vpxdec() {
-  local readonly decoder="$(vpx_tool_path vpxdec)"
-  local readonly input="$1"
+  local decoder="$(vpx_tool_path vpxdec)"
+  local input="$1"
   shift
   eval "${VPX_TEST_PREFIX}" "${decoder}" "$input" "$@" ${devnull}
 }
@@ -95,9 +96,9 @@
   # frames in actual webm_read_frame calls.
   if [ "$(vpxdec_can_decode_vp9)" = "yes" ] && \
      [ "$(webm_io_available)" = "yes" ]; then
-    local readonly decoder="$(vpx_tool_path vpxdec)"
-    local readonly expected=10
-    local readonly num_frames=$(${VPX_TEST_PREFIX} "${decoder}" \
+    local decoder="$(vpx_tool_path vpxdec)"
+    local expected=10
+    local num_frames=$(${VPX_TEST_PREFIX} "${decoder}" \
       "${VP9_LT_50_FRAMES_WEBM_FILE}" --summary --noblit 2>&1 \
       | awk '/^[0-9]+ decoded frames/ { print $1 }')
     if [ "$num_frames" -ne "$expected" ]; then
@@ -107,10 +108,28 @@
   fi
 }
 
+# Ensures VP9_RAW_FILE correctly produces 1 frame instead of causing a hang.
+vpxdec_vp9_raw_file() {
+  # Ensure a raw file properly reports eof and doesn't cause a hang.
+  if [ "$(vpxdec_can_decode_vp9)" = "yes" ]; then
+    local decoder="$(vpx_tool_path vpxdec)"
+    local expected=1
+    [ -x /usr/bin/timeout ] && local TIMEOUT="/usr/bin/timeout 30s"
+    local num_frames=$(${TIMEOUT} ${VPX_TEST_PREFIX} "${decoder}" \
+      "${VP9_RAW_FILE}" --summary --noblit 2>&1 \
+      | awk '/^[0-9]+ decoded frames/ { print $1 }')
+    if [ -z "$num_frames" ] || [ "$num_frames" -ne "$expected" ]; then
+      elog "Output frames ($num_frames) != expected ($expected)"
+      return 1
+    fi
+  fi
+}
+
 vpxdec_tests="vpxdec_vp8_ivf
               vpxdec_vp8_ivf_pipe_input
               vpxdec_vp9_webm
               vpxdec_vp9_webm_frame_parallel
-              vpxdec_vp9_webm_less_than_50_frames"
+              vpxdec_vp9_webm_less_than_50_frames
+              vpxdec_vp9_raw_file"
 
 run_tests vpxdec_verify_environment "${vpxdec_tests}"
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpxenc.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpxenc.sh
old mode 100644
new mode 100755
index 0c160da..f94e2e0
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpxenc.sh
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/vpxenc.sh
@@ -67,7 +67,7 @@
 # Echo default vpxenc real time encoding params. $1 is the codec, which defaults
 # to vp8 if unspecified.
 vpxenc_rt_params() {
-  local readonly codec="${1:-vp8}"
+  local codec="${1:-vp8}"
   echo "--codec=${codec}
     --buf-initial-sz=500
     --buf-optimal-sz=600
@@ -104,8 +104,8 @@
 # input file path and shifted away. All remaining parameters are passed through
 # to vpxenc.
 vpxenc_pipe() {
-  local readonly encoder="$(vpx_tool_path vpxenc)"
-  local readonly input="$1"
+  local encoder="$(vpx_tool_path vpxenc)"
+  local input="$1"
   shift
   cat "${input}" | eval "${VPX_TEST_PREFIX}" "${encoder}" - \
     --test-decode=fatal \
@@ -116,8 +116,8 @@
 # the directory containing vpxenc. $1 one is used as the input file path and
 # shifted away. All remaining parameters are passed through to vpxenc.
 vpxenc() {
-  local readonly encoder="$(vpx_tool_path vpxenc)"
-  local readonly input="$1"
+  local encoder="$(vpx_tool_path vpxenc)"
+  local input="$1"
   shift
   eval "${VPX_TEST_PREFIX}" "${encoder}" "${input}" \
     --test-decode=fatal \
@@ -126,7 +126,7 @@
 
 vpxenc_vp8_ivf() {
   if [ "$(vpxenc_can_encode_vp8)" = "yes" ]; then
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp8.ivf"
+    local output="${VPX_TEST_OUTPUT_DIR}/vp8.ivf"
     vpxenc $(yuv_input_hantro_collage) \
       --codec=vp8 \
       --limit="${TEST_FRAMES}" \
@@ -143,7 +143,7 @@
 vpxenc_vp8_webm() {
   if [ "$(vpxenc_can_encode_vp8)" = "yes" ] && \
      [ "$(webm_io_available)" = "yes" ]; then
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp8.webm"
+    local output="${VPX_TEST_OUTPUT_DIR}/vp8.webm"
     vpxenc $(yuv_input_hantro_collage) \
       --codec=vp8 \
       --limit="${TEST_FRAMES}" \
@@ -159,7 +159,7 @@
 vpxenc_vp8_webm_rt() {
   if [ "$(vpxenc_can_encode_vp8)" = "yes" ] && \
      [ "$(webm_io_available)" = "yes" ]; then
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp8_rt.webm"
+    local output="${VPX_TEST_OUTPUT_DIR}/vp8_rt.webm"
     vpxenc $(yuv_input_hantro_collage) \
       $(vpxenc_rt_params vp8) \
       --output="${output}"
@@ -173,7 +173,7 @@
 vpxenc_vp8_webm_2pass() {
   if [ "$(vpxenc_can_encode_vp8)" = "yes" ] && \
      [ "$(webm_io_available)" = "yes" ]; then
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp8.webm"
+    local output="${VPX_TEST_OUTPUT_DIR}/vp8.webm"
     vpxenc $(yuv_input_hantro_collage) \
       --codec=vp8 \
       --limit="${TEST_FRAMES}" \
@@ -190,9 +190,9 @@
 vpxenc_vp8_webm_lag10_frames20() {
   if [ "$(vpxenc_can_encode_vp8)" = "yes" ] && \
      [ "$(webm_io_available)" = "yes" ]; then
-    local readonly lag_total_frames=20
-    local readonly lag_frames=10
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp8_lag10_frames20.webm"
+    local lag_total_frames=20
+    local lag_frames=10
+    local output="${VPX_TEST_OUTPUT_DIR}/vp8_lag10_frames20.webm"
     vpxenc $(yuv_input_hantro_collage) \
       --codec=vp8 \
       --limit="${lag_total_frames}" \
@@ -210,7 +210,7 @@
 
 vpxenc_vp8_ivf_piped_input() {
   if [ "$(vpxenc_can_encode_vp8)" = "yes" ]; then
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp8_piped_input.ivf"
+    local output="${VPX_TEST_OUTPUT_DIR}/vp8_piped_input.ivf"
     vpxenc_pipe $(yuv_input_hantro_collage) \
       --codec=vp8 \
       --limit="${TEST_FRAMES}" \
@@ -226,8 +226,8 @@
 
 vpxenc_vp9_ivf() {
   if [ "$(vpxenc_can_encode_vp9)" = "yes" ]; then
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp9.ivf"
-    local readonly passes=$(vpxenc_passes_param)
+    local output="${VPX_TEST_OUTPUT_DIR}/vp9.ivf"
+    local passes=$(vpxenc_passes_param)
     vpxenc $(yuv_input_hantro_collage) \
       --codec=vp9 \
       --limit="${TEST_FRAMES}" \
@@ -245,8 +245,8 @@
 vpxenc_vp9_webm() {
   if [ "$(vpxenc_can_encode_vp9)" = "yes" ] && \
      [ "$(webm_io_available)" = "yes" ]; then
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp9.webm"
-    local readonly passes=$(vpxenc_passes_param)
+    local output="${VPX_TEST_OUTPUT_DIR}/vp9.webm"
+    local passes=$(vpxenc_passes_param)
     vpxenc $(yuv_input_hantro_collage) \
       --codec=vp9 \
       --limit="${TEST_FRAMES}" \
@@ -263,7 +263,7 @@
 vpxenc_vp9_webm_rt() {
   if [ "$(vpxenc_can_encode_vp9)" = "yes" ] && \
      [ "$(webm_io_available)" = "yes" ]; then
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp9_rt.webm"
+    local output="${VPX_TEST_OUTPUT_DIR}/vp9_rt.webm"
     vpxenc $(yuv_input_hantro_collage) \
       $(vpxenc_rt_params vp9) \
       --output="${output}"
@@ -278,11 +278,11 @@
 vpxenc_vp9_webm_rt_multithread_tiled() {
   if [ "$(vpxenc_can_encode_vp9)" = "yes" ] && \
      [ "$(webm_io_available)" = "yes" ]; then
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp9_rt_multithread_tiled.webm"
-    local readonly tilethread_min=2
-    local readonly tilethread_max=4
-    local readonly num_threads="$(seq ${tilethread_min} ${tilethread_max})"
-    local readonly num_tile_cols="$(seq ${tilethread_min} ${tilethread_max})"
+    local output="${VPX_TEST_OUTPUT_DIR}/vp9_rt_multithread_tiled.webm"
+    local tilethread_min=2
+    local tilethread_max=4
+    local num_threads="$(seq ${tilethread_min} ${tilethread_max})"
+    local num_tile_cols="$(seq ${tilethread_min} ${tilethread_max})"
 
     for threads in ${num_threads}; do
       for tile_cols in ${num_tile_cols}; do
@@ -291,26 +291,25 @@
           --threads=${threads} \
           --tile-columns=${tile_cols} \
           --output="${output}"
+
+        if [ ! -e "${output}" ]; then
+          elog "Output file does not exist."
+          return 1
+        fi
+        rm "${output}"
       done
     done
-
-    if [ ! -e "${output}" ]; then
-      elog "Output file does not exist."
-      return 1
-    fi
-
-    rm "${output}"
   fi
 }
 
 vpxenc_vp9_webm_rt_multithread_tiled_frameparallel() {
   if [ "$(vpxenc_can_encode_vp9)" = "yes" ] && \
      [ "$(webm_io_available)" = "yes" ]; then
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp9_rt_mt_t_fp.webm"
-    local readonly tilethread_min=2
-    local readonly tilethread_max=4
-    local readonly num_threads="$(seq ${tilethread_min} ${tilethread_max})"
-    local readonly num_tile_cols="$(seq ${tilethread_min} ${tilethread_max})"
+    local output="${VPX_TEST_OUTPUT_DIR}/vp9_rt_mt_t_fp.webm"
+    local tilethread_min=2
+    local tilethread_max=4
+    local num_threads="$(seq ${tilethread_min} ${tilethread_max})"
+    local num_tile_cols="$(seq ${tilethread_min} ${tilethread_max})"
 
     for threads in ${num_threads}; do
       for tile_cols in ${num_tile_cols}; do
@@ -320,22 +319,20 @@
           --tile-columns=${tile_cols} \
           --frame-parallel=1 \
           --output="${output}"
+        if [ ! -e "${output}" ]; then
+          elog "Output file does not exist."
+          return 1
+        fi
+        rm "${output}"
       done
     done
-
-    if [ ! -e "${output}" ]; then
-      elog "Output file does not exist."
-      return 1
-    fi
-
-    rm "${output}"
   fi
 }
 
 vpxenc_vp9_webm_2pass() {
   if [ "$(vpxenc_can_encode_vp9)" = "yes" ] && \
      [ "$(webm_io_available)" = "yes" ]; then
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp9.webm"
+    local output="${VPX_TEST_OUTPUT_DIR}/vp9.webm"
     vpxenc $(yuv_input_hantro_collage) \
       --codec=vp9 \
       --limit="${TEST_FRAMES}" \
@@ -351,8 +348,8 @@
 
 vpxenc_vp9_ivf_lossless() {
   if [ "$(vpxenc_can_encode_vp9)" = "yes" ]; then
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp9_lossless.ivf"
-    local readonly passes=$(vpxenc_passes_param)
+    local output="${VPX_TEST_OUTPUT_DIR}/vp9_lossless.ivf"
+    local passes=$(vpxenc_passes_param)
     vpxenc $(yuv_input_hantro_collage) \
       --codec=vp9 \
       --limit="${TEST_FRAMES}" \
@@ -370,8 +367,8 @@
 
 vpxenc_vp9_ivf_minq0_maxq0() {
   if [ "$(vpxenc_can_encode_vp9)" = "yes" ]; then
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp9_lossless_minq0_maxq0.ivf"
-    local readonly passes=$(vpxenc_passes_param)
+    local output="${VPX_TEST_OUTPUT_DIR}/vp9_lossless_minq0_maxq0.ivf"
+    local passes=$(vpxenc_passes_param)
     vpxenc $(yuv_input_hantro_collage) \
       --codec=vp9 \
       --limit="${TEST_FRAMES}" \
@@ -391,10 +388,10 @@
 vpxenc_vp9_webm_lag10_frames20() {
   if [ "$(vpxenc_can_encode_vp9)" = "yes" ] && \
      [ "$(webm_io_available)" = "yes" ]; then
-    local readonly lag_total_frames=20
-    local readonly lag_frames=10
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp9_lag10_frames20.webm"
-    local readonly passes=$(vpxenc_passes_param)
+    local lag_total_frames=20
+    local lag_frames=10
+    local output="${VPX_TEST_OUTPUT_DIR}/vp9_lag10_frames20.webm"
+    local passes=$(vpxenc_passes_param)
     vpxenc $(yuv_input_hantro_collage) \
       --codec=vp9 \
       --limit="${lag_total_frames}" \
@@ -414,8 +411,8 @@
 vpxenc_vp9_webm_non_square_par() {
   if [ "$(vpxenc_can_encode_vp9)" = "yes" ] && \
      [ "$(webm_io_available)" = "yes" ]; then
-    local readonly output="${VPX_TEST_OUTPUT_DIR}/vp9_non_square_par.webm"
-    local readonly passes=$(vpxenc_passes_param)
+    local output="${VPX_TEST_OUTPUT_DIR}/vp9_non_square_par.webm"
+    local passes=$(vpxenc_passes_param)
     vpxenc $(y4m_input_non_square_par) \
       --codec=vp9 \
       --limit="${TEST_FRAMES}" \
@@ -429,6 +426,43 @@
   fi
 }
 
+vpxenc_vp9_webm_sharpness() {
+  if [ "$(vpxenc_can_encode_vp9)" = "yes" ]; then
+    local sharpnesses="0 1 2 3 4 5 6 7"
+    local output="${VPX_TEST_OUTPUT_DIR}/vpxenc_vp9_webm_sharpness.ivf"
+    local last_size=0
+    local this_size=0
+    local passes=$(vpxenc_passes_param)
+
+    for sharpness in ${sharpnesses}; do
+
+      vpxenc $(yuv_input_hantro_collage) \
+        --sharpness="${sharpness}" \
+        --codec=vp9 \
+        --limit=1 \
+        --cpu-used=2 \
+        --end-usage=q \
+        --cq-level=40 \
+        --output="${output}" \
+        "${passes}"
+
+      if [ ! -e "${output}" ]; then
+        elog "Output file does not exist."
+        return 1
+      fi
+
+      this_size=$(stat -c '%s' "${output}")
+      if [ "${this_size}" -lt "${last_size}" ]; then
+        elog "Higher sharpness value yielded lower file size."
+        echo "${this_size}" " < " "${last_size}"
+        return 1
+      fi
+      last_size="${this_size}"
+
+    done
+  fi
+}
+
 vpxenc_tests="vpxenc_vp8_ivf
               vpxenc_vp8_webm
               vpxenc_vp8_webm_rt
@@ -441,7 +474,9 @@
               vpxenc_vp9_ivf_lossless
               vpxenc_vp9_ivf_minq0_maxq0
               vpxenc_vp9_webm_lag10_frames20
-              vpxenc_vp9_webm_non_square_par"
+              vpxenc_vp9_webm_non_square_par
+              vpxenc_vp9_webm_sharpness"
+
 if [ "$(vpx_config_option_enabled CONFIG_REALTIME_ONLY)" != "yes" ]; then
   vpxenc_tests="$vpxenc_tests
                 vpxenc_vp8_webm_2pass
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/webm_video_source.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/webm_video_source.h
index 09c007a..6f55f7d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/webm_video_source.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/webm_video_source.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef TEST_WEBM_VIDEO_SOURCE_H_
-#define TEST_WEBM_VIDEO_SOURCE_H_
+#ifndef VPX_TEST_WEBM_VIDEO_SOURCE_H_
+#define VPX_TEST_WEBM_VIDEO_SOURCE_H_
 #include <cstdarg>
 #include <cstdio>
 #include <cstdlib>
@@ -90,4 +90,4 @@
 
 }  // namespace libvpx_test
 
-#endif  // TEST_WEBM_VIDEO_SOURCE_H_
+#endif  // VPX_TEST_WEBM_VIDEO_SOURCE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/y4m_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/y4m_test.cc
index ced717a..76d033d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/y4m_test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/y4m_test.cc
@@ -40,18 +40,18 @@
     "284a47a47133b12884ec3a14e959a0b6" },
   { "park_joy_90p_8_444.y4m", 8, VPX_IMG_FMT_I444,
     "90517ff33843d85de712fd4fe60dbed0" },
-  { "park_joy_90p_10_420.y4m", 10, VPX_IMG_FMT_I42016,
-    "63f21f9f717d8b8631bd2288ee87137b" },
-  { "park_joy_90p_10_422.y4m", 10, VPX_IMG_FMT_I42216,
-    "48ab51fb540aed07f7ff5af130c9b605" },
-  { "park_joy_90p_10_444.y4m", 10, VPX_IMG_FMT_I44416,
-    "067bfd75aa85ff9bae91fa3e0edd1e3e" },
-  { "park_joy_90p_12_420.y4m", 12, VPX_IMG_FMT_I42016,
-    "9e6d8f6508c6e55625f6b697bc461cef" },
-  { "park_joy_90p_12_422.y4m", 12, VPX_IMG_FMT_I42216,
-    "b239c6b301c0b835485be349ca83a7e3" },
-  { "park_joy_90p_12_444.y4m", 12, VPX_IMG_FMT_I44416,
-    "5a6481a550821dab6d0192f5c63845e9" },
+  { "park_joy_90p_10_420_20f.y4m", 10, VPX_IMG_FMT_I42016,
+    "2f56ab9809269f074df7e3daf1ce0be6" },
+  { "park_joy_90p_10_422_20f.y4m", 10, VPX_IMG_FMT_I42216,
+    "1b5c73d2e8e8c4e02dc4889ecac41c83" },
+  { "park_joy_90p_10_444_20f.y4m", 10, VPX_IMG_FMT_I44416,
+    "ec4ab5be53195c5b838d1d19e1bc2674" },
+  { "park_joy_90p_12_420_20f.y4m", 12, VPX_IMG_FMT_I42016,
+    "3370856c8ddebbd1f9bb2e66f97677f4" },
+  { "park_joy_90p_12_422_20f.y4m", 12, VPX_IMG_FMT_I42216,
+    "4eab364318dd8201acbb182e43bd4966" },
+  { "park_joy_90p_12_444_20f.y4m", 12, VPX_IMG_FMT_I44416,
+    "f189dfbbd92119fc8e5f211a550166be" },
 };
 
 static void write_image_file(const vpx_image_t *img, FILE *file) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/y4m_video_source.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/y4m_video_source.h
index 1301f69..89aa2a4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/y4m_video_source.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/y4m_video_source.h
@@ -7,9 +7,10 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef TEST_Y4M_VIDEO_SOURCE_H_
-#define TEST_Y4M_VIDEO_SOURCE_H_
+#ifndef VPX_TEST_Y4M_VIDEO_SOURCE_H_
+#define VPX_TEST_Y4M_VIDEO_SOURCE_H_
 #include <algorithm>
+#include <memory>
 #include <string>
 
 #include "test/video_source.h"
@@ -108,7 +109,7 @@
 
   std::string file_name_;
   FILE *input_file_;
-  testing::internal::scoped_ptr<vpx_image_t> img_;
+  std::unique_ptr<vpx_image_t> img_;
   unsigned int start_;
   unsigned int limit_;
   unsigned int frame_;
@@ -119,4 +120,4 @@
 
 }  // namespace libvpx_test
 
-#endif  // TEST_Y4M_VIDEO_SOURCE_H_
+#endif  // VPX_TEST_Y4M_VIDEO_SOURCE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/yuv_temporal_filter_test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/yuv_temporal_filter_test.cc
new file mode 100644
index 0000000..8f3c58b
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/yuv_temporal_filter_test.cc
@@ -0,0 +1,708 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include "third_party/googletest/src/include/gtest/gtest.h"
+
+#include "./vp9_rtcd.h"
+#include "test/acm_random.h"
+#include "test/buffer.h"
+#include "test/register_state_check.h"
+#include "vpx_ports/vpx_timer.h"
+
+namespace {
+
+using ::libvpx_test::ACMRandom;
+using ::libvpx_test::Buffer;
+
+typedef void (*YUVTemporalFilterFunc)(
+    const uint8_t *y_src, int y_src_stride, const uint8_t *y_pre,
+    int y_pre_stride, const uint8_t *u_src, const uint8_t *v_src,
+    int uv_src_stride, const uint8_t *u_pre, const uint8_t *v_pre,
+    int uv_pre_stride, unsigned int block_width, unsigned int block_height,
+    int ss_x, int ss_y, int strength, const int *const blk_fw, int use_32x32,
+    uint32_t *y_accumulator, uint16_t *y_count, uint32_t *u_accumulator,
+    uint16_t *u_count, uint32_t *v_accumulator, uint16_t *v_count);
+
+struct TemporalFilterWithBd {
+  TemporalFilterWithBd(YUVTemporalFilterFunc func, int bitdepth)
+      : temporal_filter(func), bd(bitdepth) {}
+
+  YUVTemporalFilterFunc temporal_filter;
+  int bd;
+};
+
+std::ostream &operator<<(std::ostream &os, const TemporalFilterWithBd &tf) {
+  return os << "Bitdepth: " << tf.bd;
+}
+
+int GetFilterWeight(unsigned int row, unsigned int col,
+                    unsigned int block_height, unsigned int block_width,
+                    const int *const blk_fw, int use_32x32) {
+  if (use_32x32) {
+    return blk_fw[0];
+  }
+
+  return blk_fw[2 * (row >= block_height / 2) + (col >= block_width / 2)];
+}
+
+template <typename PixelType>
+int GetModIndex(int sum_dist, int index, int rounding, int strength,
+                int filter_weight) {
+  int mod = sum_dist * 3 / index;
+  mod += rounding;
+  mod >>= strength;
+
+  mod = VPXMIN(16, mod);
+
+  mod = 16 - mod;
+  mod *= filter_weight;
+
+  return mod;
+}
+
+template <>
+int GetModIndex<uint8_t>(int sum_dist, int index, int rounding, int strength,
+                         int filter_weight) {
+  unsigned int index_mult[14] = { 0,     0,     0,     0,     49152,
+                                  39322, 32768, 28087, 24576, 21846,
+                                  19661, 17874, 0,     15124 };
+
+  assert(index >= 0 && index <= 13);
+  assert(index_mult[index] != 0);
+
+  int mod = (clamp(sum_dist, 0, UINT16_MAX) * index_mult[index]) >> 16;
+  mod += rounding;
+  mod >>= strength;
+
+  mod = VPXMIN(16, mod);
+
+  mod = 16 - mod;
+  mod *= filter_weight;
+
+  return mod;
+}
+
+template <>
+int GetModIndex<uint16_t>(int sum_dist, int index, int rounding, int strength,
+                          int filter_weight) {
+  int64_t index_mult[14] = { 0U,          0U,          0U,          0U,
+                             3221225472U, 2576980378U, 2147483648U, 1840700270U,
+                             1610612736U, 1431655766U, 1288490189U, 1171354718U,
+                             0U,          991146300U };
+
+  assert(index >= 0 && index <= 13);
+  assert(index_mult[index] != 0);
+
+  int mod = static_cast<int>((sum_dist * index_mult[index]) >> 32);
+  mod += rounding;
+  mod >>= strength;
+
+  mod = VPXMIN(16, mod);
+
+  mod = 16 - mod;
+  mod *= filter_weight;
+
+  return mod;
+}
+
+template <typename PixelType>
+void ApplyReferenceFilter(
+    const Buffer<PixelType> &y_src, const Buffer<PixelType> &y_pre,
+    const Buffer<PixelType> &u_src, const Buffer<PixelType> &v_src,
+    const Buffer<PixelType> &u_pre, const Buffer<PixelType> &v_pre,
+    unsigned int block_width, unsigned int block_height, int ss_x, int ss_y,
+    int strength, const int *const blk_fw, int use_32x32,
+    Buffer<uint32_t> *y_accumulator, Buffer<uint16_t> *y_counter,
+    Buffer<uint32_t> *u_accumulator, Buffer<uint16_t> *u_counter,
+    Buffer<uint32_t> *v_accumulator, Buffer<uint16_t> *v_counter) {
+  const PixelType *y_src_ptr = y_src.TopLeftPixel();
+  const PixelType *y_pre_ptr = y_pre.TopLeftPixel();
+  const PixelType *u_src_ptr = u_src.TopLeftPixel();
+  const PixelType *u_pre_ptr = u_pre.TopLeftPixel();
+  const PixelType *v_src_ptr = v_src.TopLeftPixel();
+  const PixelType *v_pre_ptr = v_pre.TopLeftPixel();
+
+  const int uv_block_width = block_width >> ss_x,
+            uv_block_height = block_height >> ss_y;
+  const int y_src_stride = y_src.stride(), y_pre_stride = y_pre.stride();
+  const int uv_src_stride = u_src.stride(), uv_pre_stride = u_pre.stride();
+  const int y_diff_stride = block_width, uv_diff_stride = uv_block_width;
+
+  Buffer<int> y_dif = Buffer<int>(block_width, block_height, 0);
+  Buffer<int> u_dif = Buffer<int>(uv_block_width, uv_block_height, 0);
+  Buffer<int> v_dif = Buffer<int>(uv_block_width, uv_block_height, 0);
+
+  ASSERT_TRUE(y_dif.Init());
+  ASSERT_TRUE(u_dif.Init());
+  ASSERT_TRUE(v_dif.Init());
+  y_dif.Set(0);
+  u_dif.Set(0);
+  v_dif.Set(0);
+
+  int *y_diff_ptr = y_dif.TopLeftPixel();
+  int *u_diff_ptr = u_dif.TopLeftPixel();
+  int *v_diff_ptr = v_dif.TopLeftPixel();
+
+  uint32_t *y_accum = y_accumulator->TopLeftPixel();
+  uint32_t *u_accum = u_accumulator->TopLeftPixel();
+  uint32_t *v_accum = v_accumulator->TopLeftPixel();
+  uint16_t *y_count = y_counter->TopLeftPixel();
+  uint16_t *u_count = u_counter->TopLeftPixel();
+  uint16_t *v_count = v_counter->TopLeftPixel();
+
+  const int y_accum_stride = y_accumulator->stride();
+  const int u_accum_stride = u_accumulator->stride();
+  const int v_accum_stride = v_accumulator->stride();
+  const int y_count_stride = y_counter->stride();
+  const int u_count_stride = u_counter->stride();
+  const int v_count_stride = v_counter->stride();
+
+  const int rounding = (1 << strength) >> 1;
+
+  // Get the square diffs
+  for (int row = 0; row < static_cast<int>(block_height); row++) {
+    for (int col = 0; col < static_cast<int>(block_width); col++) {
+      const int diff = y_src_ptr[row * y_src_stride + col] -
+                       y_pre_ptr[row * y_pre_stride + col];
+      y_diff_ptr[row * y_diff_stride + col] = diff * diff;
+    }
+  }
+
+  for (int row = 0; row < uv_block_height; row++) {
+    for (int col = 0; col < uv_block_width; col++) {
+      const int u_diff = u_src_ptr[row * uv_src_stride + col] -
+                         u_pre_ptr[row * uv_pre_stride + col];
+      const int v_diff = v_src_ptr[row * uv_src_stride + col] -
+                         v_pre_ptr[row * uv_pre_stride + col];
+      u_diff_ptr[row * uv_diff_stride + col] = u_diff * u_diff;
+      v_diff_ptr[row * uv_diff_stride + col] = v_diff * v_diff;
+    }
+  }
+
+  // Apply the filter to luma
+  for (int row = 0; row < static_cast<int>(block_height); row++) {
+    for (int col = 0; col < static_cast<int>(block_width); col++) {
+      const int uv_row = row >> ss_y;
+      const int uv_col = col >> ss_x;
+      const int filter_weight = GetFilterWeight(row, col, block_height,
+                                                block_width, blk_fw, use_32x32);
+
+      // First we get the modifier for the current y pixel
+      const int y_pixel = y_pre_ptr[row * y_pre_stride + col];
+      int y_num_used = 0;
+      int y_mod = 0;
+
+      // Sum the neighboring 3x3 y pixels
+      for (int row_step = -1; row_step <= 1; row_step++) {
+        for (int col_step = -1; col_step <= 1; col_step++) {
+          const int sub_row = row + row_step;
+          const int sub_col = col + col_step;
+
+          if (sub_row >= 0 && sub_row < static_cast<int>(block_height) &&
+              sub_col >= 0 && sub_col < static_cast<int>(block_width)) {
+            y_mod += y_diff_ptr[sub_row * y_diff_stride + sub_col];
+            y_num_used++;
+          }
+        }
+      }
+
+      // Sum the corresponding uv pixels to the current y modifier
+      // Note we are rounding down instead of rounding to the nearest pixel.
+      y_mod += u_diff_ptr[uv_row * uv_diff_stride + uv_col];
+      y_mod += v_diff_ptr[uv_row * uv_diff_stride + uv_col];
+
+      y_num_used += 2;
+
+      // Set the modifier
+      y_mod = GetModIndex<PixelType>(y_mod, y_num_used, rounding, strength,
+                                     filter_weight);
+
+      // Accumulate the result
+      y_count[row * y_count_stride + col] += y_mod;
+      y_accum[row * y_accum_stride + col] += y_mod * y_pixel;
+    }
+  }
+
+  // Apply the filter to chroma
+  for (int uv_row = 0; uv_row < uv_block_height; uv_row++) {
+    for (int uv_col = 0; uv_col < uv_block_width; uv_col++) {
+      const int y_row = uv_row << ss_y;
+      const int y_col = uv_col << ss_x;
+      const int filter_weight = GetFilterWeight(
+          uv_row, uv_col, uv_block_height, uv_block_width, blk_fw, use_32x32);
+
+      const int u_pixel = u_pre_ptr[uv_row * uv_pre_stride + uv_col];
+      const int v_pixel = v_pre_ptr[uv_row * uv_pre_stride + uv_col];
+
+      int uv_num_used = 0;
+      int u_mod = 0, v_mod = 0;
+
+      // Sum the neighboring 3x3 chroma pixels to the chroma modifier
+      for (int row_step = -1; row_step <= 1; row_step++) {
+        for (int col_step = -1; col_step <= 1; col_step++) {
+          const int sub_row = uv_row + row_step;
+          const int sub_col = uv_col + col_step;
+
+          if (sub_row >= 0 && sub_row < uv_block_height && sub_col >= 0 &&
+              sub_col < uv_block_width) {
+            u_mod += u_diff_ptr[sub_row * uv_diff_stride + sub_col];
+            v_mod += v_diff_ptr[sub_row * uv_diff_stride + sub_col];
+            uv_num_used++;
+          }
+        }
+      }
+
+      // Sum all the luma pixels associated with the current chroma pixel
+      for (int row_step = 0; row_step < 1 + ss_y; row_step++) {
+        for (int col_step = 0; col_step < 1 + ss_x; col_step++) {
+          const int sub_row = y_row + row_step;
+          const int sub_col = y_col + col_step;
+          const int y_diff = y_diff_ptr[sub_row * y_diff_stride + sub_col];
+
+          u_mod += y_diff;
+          v_mod += y_diff;
+          uv_num_used++;
+        }
+      }
+
+      // Set the modifier
+      u_mod = GetModIndex<PixelType>(u_mod, uv_num_used, rounding, strength,
+                                     filter_weight);
+      v_mod = GetModIndex<PixelType>(v_mod, uv_num_used, rounding, strength,
+                                     filter_weight);
+
+      // Accumulate the result
+      u_count[uv_row * u_count_stride + uv_col] += u_mod;
+      u_accum[uv_row * u_accum_stride + uv_col] += u_mod * u_pixel;
+      v_count[uv_row * v_count_stride + uv_col] += v_mod;
+      v_accum[uv_row * v_accum_stride + uv_col] += v_mod * v_pixel;
+    }
+  }
+}
+
+class YUVTemporalFilterTest
+    : public ::testing::TestWithParam<TemporalFilterWithBd> {
+ public:
+  virtual void SetUp() {
+    filter_func_ = GetParam().temporal_filter;
+    bd_ = GetParam().bd;
+    use_highbd_ = (bd_ != 8);
+
+    rnd_.Reset(ACMRandom::DeterministicSeed());
+    saturate_test_ = 0;
+    num_repeats_ = 10;
+
+    ASSERT_TRUE(bd_ == 8 || bd_ == 10 || bd_ == 12);
+  }
+
+ protected:
+  template <typename PixelType>
+  void CompareTestWithParam(int width, int height, int ss_x, int ss_y,
+                            int filter_strength, int use_32x32,
+                            const int *filter_weight);
+  template <typename PixelType>
+  void RunTestFilterWithParam(int width, int height, int ss_x, int ss_y,
+                              int filter_strength, int use_32x32,
+                              const int *filter_weight);
+  YUVTemporalFilterFunc filter_func_;
+  ACMRandom rnd_;
+  int saturate_test_;
+  int num_repeats_;
+  int use_highbd_;
+  int bd_;
+};
+
+template <typename PixelType>
+void YUVTemporalFilterTest::CompareTestWithParam(int width, int height,
+                                                 int ss_x, int ss_y,
+                                                 int filter_strength,
+                                                 int use_32x32,
+                                                 const int *filter_weight) {
+  const int uv_width = width >> ss_x, uv_height = height >> ss_y;
+
+  Buffer<PixelType> y_src = Buffer<PixelType>(width, height, 0);
+  Buffer<PixelType> y_pre = Buffer<PixelType>(width, height, 0);
+  Buffer<uint16_t> y_count_ref = Buffer<uint16_t>(width, height, 0);
+  Buffer<uint32_t> y_accum_ref = Buffer<uint32_t>(width, height, 0);
+  Buffer<uint16_t> y_count_tst = Buffer<uint16_t>(width, height, 0);
+  Buffer<uint32_t> y_accum_tst = Buffer<uint32_t>(width, height, 0);
+
+  Buffer<PixelType> u_src = Buffer<PixelType>(uv_width, uv_height, 0);
+  Buffer<PixelType> u_pre = Buffer<PixelType>(uv_width, uv_height, 0);
+  Buffer<uint16_t> u_count_ref = Buffer<uint16_t>(uv_width, uv_height, 0);
+  Buffer<uint32_t> u_accum_ref = Buffer<uint32_t>(uv_width, uv_height, 0);
+  Buffer<uint16_t> u_count_tst = Buffer<uint16_t>(uv_width, uv_height, 0);
+  Buffer<uint32_t> u_accum_tst = Buffer<uint32_t>(uv_width, uv_height, 0);
+
+  Buffer<PixelType> v_src = Buffer<PixelType>(uv_width, uv_height, 0);
+  Buffer<PixelType> v_pre = Buffer<PixelType>(uv_width, uv_height, 0);
+  Buffer<uint16_t> v_count_ref = Buffer<uint16_t>(uv_width, uv_height, 0);
+  Buffer<uint32_t> v_accum_ref = Buffer<uint32_t>(uv_width, uv_height, 0);
+  Buffer<uint16_t> v_count_tst = Buffer<uint16_t>(uv_width, uv_height, 0);
+  Buffer<uint32_t> v_accum_tst = Buffer<uint32_t>(uv_width, uv_height, 0);
+
+  ASSERT_TRUE(y_src.Init());
+  ASSERT_TRUE(y_pre.Init());
+  ASSERT_TRUE(y_count_ref.Init());
+  ASSERT_TRUE(y_accum_ref.Init());
+  ASSERT_TRUE(y_count_tst.Init());
+  ASSERT_TRUE(y_accum_tst.Init());
+  ASSERT_TRUE(u_src.Init());
+  ASSERT_TRUE(u_pre.Init());
+  ASSERT_TRUE(u_count_ref.Init());
+  ASSERT_TRUE(u_accum_ref.Init());
+  ASSERT_TRUE(u_count_tst.Init());
+  ASSERT_TRUE(u_accum_tst.Init());
+
+  ASSERT_TRUE(v_src.Init());
+  ASSERT_TRUE(v_pre.Init());
+  ASSERT_TRUE(v_count_ref.Init());
+  ASSERT_TRUE(v_accum_ref.Init());
+  ASSERT_TRUE(v_count_tst.Init());
+  ASSERT_TRUE(v_accum_tst.Init());
+
+  y_accum_ref.Set(0);
+  y_accum_tst.Set(0);
+  y_count_ref.Set(0);
+  y_count_tst.Set(0);
+  u_accum_ref.Set(0);
+  u_accum_tst.Set(0);
+  u_count_ref.Set(0);
+  u_count_tst.Set(0);
+  v_accum_ref.Set(0);
+  v_accum_tst.Set(0);
+  v_count_ref.Set(0);
+  v_count_tst.Set(0);
+
+  for (int repeats = 0; repeats < num_repeats_; repeats++) {
+    if (saturate_test_) {
+      const int max_val = (1 << bd_) - 1;
+      y_src.Set(max_val);
+      y_pre.Set(0);
+      u_src.Set(max_val);
+      u_pre.Set(0);
+      v_src.Set(max_val);
+      v_pre.Set(0);
+    } else {
+      y_src.Set(&rnd_, 0, 7 << (bd_ - 8));
+      y_pre.Set(&rnd_, 0, 7 << (bd_ - 8));
+      u_src.Set(&rnd_, 0, 7 << (bd_ - 8));
+      u_pre.Set(&rnd_, 0, 7 << (bd_ - 8));
+      v_src.Set(&rnd_, 0, 7 << (bd_ - 8));
+      v_pre.Set(&rnd_, 0, 7 << (bd_ - 8));
+    }
+
+    ApplyReferenceFilter<PixelType>(
+        y_src, y_pre, u_src, v_src, u_pre, v_pre, width, height, ss_x, ss_y,
+        filter_strength, filter_weight, use_32x32, &y_accum_ref, &y_count_ref,
+        &u_accum_ref, &u_count_ref, &v_accum_ref, &v_count_ref);
+
+    ASM_REGISTER_STATE_CHECK(filter_func_(
+        reinterpret_cast<const uint8_t *>(y_src.TopLeftPixel()), y_src.stride(),
+        reinterpret_cast<const uint8_t *>(y_pre.TopLeftPixel()), y_pre.stride(),
+        reinterpret_cast<const uint8_t *>(u_src.TopLeftPixel()),
+        reinterpret_cast<const uint8_t *>(v_src.TopLeftPixel()), u_src.stride(),
+        reinterpret_cast<const uint8_t *>(u_pre.TopLeftPixel()),
+        reinterpret_cast<const uint8_t *>(v_pre.TopLeftPixel()), u_pre.stride(),
+        width, height, ss_x, ss_y, filter_strength, filter_weight, use_32x32,
+        y_accum_tst.TopLeftPixel(), y_count_tst.TopLeftPixel(),
+        u_accum_tst.TopLeftPixel(), u_count_tst.TopLeftPixel(),
+        v_accum_tst.TopLeftPixel(), v_count_tst.TopLeftPixel()));
+
+    EXPECT_TRUE(y_accum_tst.CheckValues(y_accum_ref));
+    EXPECT_TRUE(y_count_tst.CheckValues(y_count_ref));
+    EXPECT_TRUE(u_accum_tst.CheckValues(u_accum_ref));
+    EXPECT_TRUE(u_count_tst.CheckValues(u_count_ref));
+    EXPECT_TRUE(v_accum_tst.CheckValues(v_accum_ref));
+    EXPECT_TRUE(v_count_tst.CheckValues(v_count_ref));
+
+    if (HasFailure()) {
+      if (use_32x32) {
+        printf("SS_X: %d, SS_Y: %d, Strength: %d, Weight: %d\n", ss_x, ss_y,
+               filter_strength, *filter_weight);
+      } else {
+        printf("SS_X: %d, SS_Y: %d, Strength: %d, Weights: %d,%d,%d,%d\n", ss_x,
+               ss_y, filter_strength, filter_weight[0], filter_weight[1],
+               filter_weight[2], filter_weight[3]);
+      }
+      y_accum_tst.PrintDifference(y_accum_ref);
+      y_count_tst.PrintDifference(y_count_ref);
+      u_accum_tst.PrintDifference(u_accum_ref);
+      u_count_tst.PrintDifference(u_count_ref);
+      v_accum_tst.PrintDifference(v_accum_ref);
+      v_count_tst.PrintDifference(v_count_ref);
+
+      return;
+    }
+  }
+}
+
+template <typename PixelType>
+void YUVTemporalFilterTest::RunTestFilterWithParam(int width, int height,
+                                                   int ss_x, int ss_y,
+                                                   int filter_strength,
+                                                   int use_32x32,
+                                                   const int *filter_weight) {
+  const int uv_width = width >> ss_x, uv_height = height >> ss_y;
+
+  Buffer<PixelType> y_src = Buffer<PixelType>(width, height, 0);
+  Buffer<PixelType> y_pre = Buffer<PixelType>(width, height, 0);
+  Buffer<uint16_t> y_count = Buffer<uint16_t>(width, height, 0);
+  Buffer<uint32_t> y_accum = Buffer<uint32_t>(width, height, 0);
+
+  Buffer<PixelType> u_src = Buffer<PixelType>(uv_width, uv_height, 0);
+  Buffer<PixelType> u_pre = Buffer<PixelType>(uv_width, uv_height, 0);
+  Buffer<uint16_t> u_count = Buffer<uint16_t>(uv_width, uv_height, 0);
+  Buffer<uint32_t> u_accum = Buffer<uint32_t>(uv_width, uv_height, 0);
+
+  Buffer<PixelType> v_src = Buffer<PixelType>(uv_width, uv_height, 0);
+  Buffer<PixelType> v_pre = Buffer<PixelType>(uv_width, uv_height, 0);
+  Buffer<uint16_t> v_count = Buffer<uint16_t>(uv_width, uv_height, 0);
+  Buffer<uint32_t> v_accum = Buffer<uint32_t>(uv_width, uv_height, 0);
+
+  ASSERT_TRUE(y_src.Init());
+  ASSERT_TRUE(y_pre.Init());
+  ASSERT_TRUE(y_count.Init());
+  ASSERT_TRUE(y_accum.Init());
+
+  ASSERT_TRUE(u_src.Init());
+  ASSERT_TRUE(u_pre.Init());
+  ASSERT_TRUE(u_count.Init());
+  ASSERT_TRUE(u_accum.Init());
+
+  ASSERT_TRUE(v_src.Init());
+  ASSERT_TRUE(v_pre.Init());
+  ASSERT_TRUE(v_count.Init());
+  ASSERT_TRUE(v_accum.Init());
+
+  y_accum.Set(0);
+  y_count.Set(0);
+
+  u_accum.Set(0);
+  u_count.Set(0);
+
+  v_accum.Set(0);
+  v_count.Set(0);
+
+  y_src.Set(&rnd_, 0, 7 << (bd_ - 8));
+  y_pre.Set(&rnd_, 0, 7 << (bd_ - 8));
+  u_src.Set(&rnd_, 0, 7 << (bd_ - 8));
+  u_pre.Set(&rnd_, 0, 7 << (bd_ - 8));
+  v_src.Set(&rnd_, 0, 7 << (bd_ - 8));
+  v_pre.Set(&rnd_, 0, 7 << (bd_ - 8));
+
+  for (int repeats = 0; repeats < num_repeats_; repeats++) {
+    ASM_REGISTER_STATE_CHECK(filter_func_(
+        reinterpret_cast<const uint8_t *>(y_src.TopLeftPixel()), y_src.stride(),
+        reinterpret_cast<const uint8_t *>(y_pre.TopLeftPixel()), y_pre.stride(),
+        reinterpret_cast<const uint8_t *>(u_src.TopLeftPixel()),
+        reinterpret_cast<const uint8_t *>(v_src.TopLeftPixel()), u_src.stride(),
+        reinterpret_cast<const uint8_t *>(u_pre.TopLeftPixel()),
+        reinterpret_cast<const uint8_t *>(v_pre.TopLeftPixel()), u_pre.stride(),
+        width, height, ss_x, ss_y, filter_strength, filter_weight, use_32x32,
+        y_accum.TopLeftPixel(), y_count.TopLeftPixel(), u_accum.TopLeftPixel(),
+        u_count.TopLeftPixel(), v_accum.TopLeftPixel(),
+        v_count.TopLeftPixel()));
+  }
+}
+
+TEST_P(YUVTemporalFilterTest, Use32x32) {
+  const int width = 32, height = 32;
+  const int use_32x32 = 1;
+
+  for (int ss_x = 0; ss_x <= 1; ss_x++) {
+    for (int ss_y = 0; ss_y <= 1; ss_y++) {
+      for (int filter_strength = 0; filter_strength <= 6;
+           filter_strength += 2) {
+        for (int filter_weight = 0; filter_weight <= 2; filter_weight++) {
+          if (use_highbd_) {
+            const int adjusted_strength = filter_strength + 2 * (bd_ - 8);
+            CompareTestWithParam<uint16_t>(width, height, ss_x, ss_y,
+                                           adjusted_strength, use_32x32,
+                                           &filter_weight);
+          } else {
+            CompareTestWithParam<uint8_t>(width, height, ss_x, ss_y,
+                                          filter_strength, use_32x32,
+                                          &filter_weight);
+          }
+          ASSERT_FALSE(HasFailure());
+        }
+      }
+    }
+  }
+}
+
+TEST_P(YUVTemporalFilterTest, Use16x16) {
+  const int width = 32, height = 32;
+  const int use_32x32 = 0;
+
+  for (int ss_x = 0; ss_x <= 1; ss_x++) {
+    for (int ss_y = 0; ss_y <= 1; ss_y++) {
+      for (int filter_idx = 0; filter_idx < 3 * 3 * 3 * 3; filter_idx++) {
+        // Set up the filter
+        int filter_weight[4];
+        int filter_idx_cp = filter_idx;
+        for (int idx = 0; idx < 4; idx++) {
+          filter_weight[idx] = filter_idx_cp % 3;
+          filter_idx_cp /= 3;
+        }
+
+        // Test each parameter
+        for (int filter_strength = 0; filter_strength <= 6;
+             filter_strength += 2) {
+          if (use_highbd_) {
+            const int adjusted_strength = filter_strength + 2 * (bd_ - 8);
+            CompareTestWithParam<uint16_t>(width, height, ss_x, ss_y,
+                                           adjusted_strength, use_32x32,
+                                           filter_weight);
+          } else {
+            CompareTestWithParam<uint8_t>(width, height, ss_x, ss_y,
+                                          filter_strength, use_32x32,
+                                          filter_weight);
+          }
+
+          ASSERT_FALSE(HasFailure());
+        }
+      }
+    }
+  }
+}
+
+TEST_P(YUVTemporalFilterTest, SaturationTest) {
+  const int width = 32, height = 32;
+  const int use_32x32 = 1;
+  const int filter_weight = 1;
+  saturate_test_ = 1;
+
+  for (int ss_x = 0; ss_x <= 1; ss_x++) {
+    for (int ss_y = 0; ss_y <= 1; ss_y++) {
+      for (int filter_strength = 0; filter_strength <= 6;
+           filter_strength += 2) {
+        if (use_highbd_) {
+          const int adjusted_strength = filter_strength + 2 * (bd_ - 8);
+          CompareTestWithParam<uint16_t>(width, height, ss_x, ss_y,
+                                         adjusted_strength, use_32x32,
+                                         &filter_weight);
+        } else {
+          CompareTestWithParam<uint8_t>(width, height, ss_x, ss_y,
+                                        filter_strength, use_32x32,
+                                        &filter_weight);
+        }
+
+        ASSERT_FALSE(HasFailure());
+      }
+    }
+  }
+}
+
+TEST_P(YUVTemporalFilterTest, DISABLED_Speed) {
+  const int width = 32, height = 32;
+  num_repeats_ = 1000;
+
+  for (int use_32x32 = 0; use_32x32 <= 1; use_32x32++) {
+    const int num_filter_weights = use_32x32 ? 3 : 3 * 3 * 3 * 3;
+    for (int ss_x = 0; ss_x <= 1; ss_x++) {
+      for (int ss_y = 0; ss_y <= 1; ss_y++) {
+        for (int filter_idx = 0; filter_idx < num_filter_weights;
+             filter_idx++) {
+          // Set up the filter
+          int filter_weight[4];
+          int filter_idx_cp = filter_idx;
+          for (int idx = 0; idx < 4; idx++) {
+            filter_weight[idx] = filter_idx_cp % 3;
+            filter_idx_cp /= 3;
+          }
+
+          // Test each parameter
+          for (int filter_strength = 0; filter_strength <= 6;
+               filter_strength += 2) {
+            vpx_usec_timer timer;
+            vpx_usec_timer_start(&timer);
+
+            if (use_highbd_) {
+              RunTestFilterWithParam<uint16_t>(width, height, ss_x, ss_y,
+                                               filter_strength, use_32x32,
+                                               filter_weight);
+            } else {
+              RunTestFilterWithParam<uint8_t>(width, height, ss_x, ss_y,
+                                              filter_strength, use_32x32,
+                                              filter_weight);
+            }
+
+            vpx_usec_timer_mark(&timer);
+            const int elapsed_time =
+                static_cast<int>(vpx_usec_timer_elapsed(&timer));
+
+            printf(
+                "Bitdepth: %d, Use 32X32: %d, SS_X: %d, SS_Y: %d, Weight Idx: "
+                "%d, Strength: %d, Time: %5d\n",
+                bd_, use_32x32, ss_x, ss_y, filter_idx, filter_strength,
+                elapsed_time);
+          }
+        }
+      }
+    }
+  }
+}
+
+#if CONFIG_VP9_HIGHBITDEPTH
+#define WRAP_HIGHBD_FUNC(func, bd)                                            \
+  void wrap_##func##_##bd(                                                    \
+      const uint8_t *y_src, int y_src_stride, const uint8_t *y_pre,           \
+      int y_pre_stride, const uint8_t *u_src, const uint8_t *v_src,           \
+      int uv_src_stride, const uint8_t *u_pre, const uint8_t *v_pre,          \
+      int uv_pre_stride, unsigned int block_width, unsigned int block_height, \
+      int ss_x, int ss_y, int strength, const int *const blk_fw,              \
+      int use_32x32, uint32_t *y_accumulator, uint16_t *y_count,              \
+      uint32_t *u_accumulator, uint16_t *u_count, uint32_t *v_accumulator,    \
+      uint16_t *v_count) {                                                    \
+    func(reinterpret_cast<const uint16_t *>(y_src), y_src_stride,             \
+         reinterpret_cast<const uint16_t *>(y_pre), y_pre_stride,             \
+         reinterpret_cast<const uint16_t *>(u_src),                           \
+         reinterpret_cast<const uint16_t *>(v_src), uv_src_stride,            \
+         reinterpret_cast<const uint16_t *>(u_pre),                           \
+         reinterpret_cast<const uint16_t *>(v_pre), uv_pre_stride,            \
+         block_width, block_height, ss_x, ss_y, strength, blk_fw, use_32x32,  \
+         y_accumulator, y_count, u_accumulator, u_count, v_accumulator,       \
+         v_count);                                                            \
+  }
+
+WRAP_HIGHBD_FUNC(vp9_highbd_apply_temporal_filter_c, 10);
+WRAP_HIGHBD_FUNC(vp9_highbd_apply_temporal_filter_c, 12);
+
+INSTANTIATE_TEST_CASE_P(
+    C, YUVTemporalFilterTest,
+    ::testing::Values(
+        TemporalFilterWithBd(&wrap_vp9_highbd_apply_temporal_filter_c_10, 10),
+        TemporalFilterWithBd(&wrap_vp9_highbd_apply_temporal_filter_c_12, 12)));
+#if HAVE_SSE4_1
+WRAP_HIGHBD_FUNC(vp9_highbd_apply_temporal_filter_sse4_1, 10);
+WRAP_HIGHBD_FUNC(vp9_highbd_apply_temporal_filter_sse4_1, 12);
+
+INSTANTIATE_TEST_CASE_P(
+    SSE4_1, YUVTemporalFilterTest,
+    ::testing::Values(
+        TemporalFilterWithBd(&wrap_vp9_highbd_apply_temporal_filter_sse4_1_10,
+                             10),
+        TemporalFilterWithBd(&wrap_vp9_highbd_apply_temporal_filter_sse4_1_12,
+                             12)));
+#endif  // HAVE_SSE4_1
+#else
+INSTANTIATE_TEST_CASE_P(
+    C, YUVTemporalFilterTest,
+    ::testing::Values(TemporalFilterWithBd(&vp9_apply_temporal_filter_c, 8)));
+
+#if HAVE_SSE4_1
+INSTANTIATE_TEST_CASE_P(SSE4_1, YUVTemporalFilterTest,
+                        ::testing::Values(TemporalFilterWithBd(
+                            &vp9_apply_temporal_filter_sse4_1, 8)));
+#endif  // HAVE_SSE4_1
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+}  // namespace
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/yuv_video_source.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/yuv_video_source.h
index aee6b2f..020ce80 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/yuv_video_source.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/test/yuv_video_source.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef TEST_YUV_VIDEO_SOURCE_H_
-#define TEST_YUV_VIDEO_SOURCE_H_
+#ifndef VPX_TEST_YUV_VIDEO_SOURCE_H_
+#define VPX_TEST_YUV_VIDEO_SOURCE_H_
 
 #include <cstdio>
 #include <cstdlib>
@@ -122,4 +122,4 @@
 
 }  // namespace libvpx_test
 
-#endif  // TEST_YUV_VIDEO_SOURCE_H_
+#endif  // VPX_TEST_YUV_VIDEO_SOURCE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/README.libvpx b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/README.libvpx
index 2cd6910..49005dd 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/README.libvpx
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/README.libvpx
@@ -1,5 +1,5 @@
-URL: https://github.com/google/googletest
-Version: 1.8.0
+URL: https://github.com/google/googletest.git
+Version: release-1.8.1
 License: BSD
 License File: LICENSE
 
@@ -13,12 +13,16 @@
 
 Local Modifications:
 - Remove everything but:
-  googletest-release-1.8.0/googletest/
+  googletest-release-1.8.1/googletest/
    CHANGES
    CONTRIBUTORS
    include
    LICENSE
    README.md
    src
-- Suppress unsigned overflow instrumentation in the LCG
-  https://github.com/google/googletest/pull/1066
+
+- Make WithParamInterface<T>::GetParam static in order to avoid
+  initialization issues
+  https://github.com/google/googletest/pull/1830
+- Use wcslen() instead of std::wcslen()
+  https://github.com/google/googletest/pull/1899
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/README.md b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/README.md
index edd4408..e30fe80 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/README.md
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/README.md
@@ -1,23 +1,21 @@
+### Generic Build Instructions
 
-### Generic Build Instructions ###
+#### Setup
 
-#### Setup ####
+To build Google Test and your tests that use it, you need to tell your build
+system where to find its headers and source files. The exact way to do it
+depends on which build system you use, and is usually straightforward.
 
-To build Google Test and your tests that use it, you need to tell your
-build system where to find its headers and source files.  The exact
-way to do it depends on which build system you use, and is usually
-straightforward.
+#### Build
 
-#### Build ####
-
-Suppose you put Google Test in directory `${GTEST_DIR}`.  To build it,
-create a library build target (or a project as called by Visual Studio
-and Xcode) to compile
+Suppose you put Google Test in directory `${GTEST_DIR}`. To build it, create a
+library build target (or a project as called by Visual Studio and Xcode) to
+compile
 
     ${GTEST_DIR}/src/gtest-all.cc
 
 with `${GTEST_DIR}/include` in the system header search path and `${GTEST_DIR}`
-in the normal header search path.  Assuming a Linux-like system and gcc,
+in the normal header search path. Assuming a Linux-like system and gcc,
 something like the following will do:
 
     g++ -isystem ${GTEST_DIR}/include -I${GTEST_DIR} \
@@ -26,136 +24,239 @@
 
 (We need `-pthread` as Google Test uses threads.)
 
-Next, you should compile your test source file with
-`${GTEST_DIR}/include` in the system header search path, and link it
-with gtest and any other necessary libraries:
+Next, you should compile your test source file with `${GTEST_DIR}/include` in
+the system header search path, and link it with gtest and any other necessary
+libraries:
 
     g++ -isystem ${GTEST_DIR}/include -pthread path/to/your_test.cc libgtest.a \
         -o your_test
 
-As an example, the make/ directory contains a Makefile that you can
-use to build Google Test on systems where GNU make is available
-(e.g. Linux, Mac OS X, and Cygwin).  It doesn't try to build Google
-Test's own tests.  Instead, it just builds the Google Test library and
-a sample test.  You can use it as a starting point for your own build
-script.
+As an example, the make/ directory contains a Makefile that you can use to build
+Google Test on systems where GNU make is available (e.g. Linux, Mac OS X, and
+Cygwin). It doesn't try to build Google Test's own tests. Instead, it just
+builds the Google Test library and a sample test. You can use it as a starting
+point for your own build script.
 
-If the default settings are correct for your environment, the
-following commands should succeed:
+If the default settings are correct for your environment, the following commands
+should succeed:
 
     cd ${GTEST_DIR}/make
     make
     ./sample1_unittest
 
-If you see errors, try to tweak the contents of `make/Makefile` to make
-them go away.  There are instructions in `make/Makefile` on how to do
-it.
+If you see errors, try to tweak the contents of `make/Makefile` to make them go
+away. There are instructions in `make/Makefile` on how to do it.
 
-### Using CMake ###
+### Using CMake
 
 Google Test comes with a CMake build script (
-[CMakeLists.txt](CMakeLists.txt)) that can be used on a wide range of platforms ("C" stands for
-cross-platform.). If you don't have CMake installed already, you can
-download it for free from <http://www.cmake.org/>.
+[CMakeLists.txt](https://github.com/google/googletest/blob/master/CMakeLists.txt))
+that can be used on a wide range of platforms ("C" stands for cross-platform.).
+If you don't have CMake installed already, you can download it for free from
+<http://www.cmake.org/>.
 
-CMake works by generating native makefiles or build projects that can
-be used in the compiler environment of your choice.  The typical
-workflow starts with:
+CMake works by generating native makefiles or build projects that can be used in
+the compiler environment of your choice. You can either build Google Test as a
+standalone project or it can be incorporated into an existing CMake build for
+another project.
+
+#### Standalone CMake Project
+
+When building Google Test as a standalone project, the typical workflow starts
+with:
 
     mkdir mybuild       # Create a directory to hold the build output.
     cd mybuild
     cmake ${GTEST_DIR}  # Generate native build scripts.
 
-If you want to build Google Test's samples, you should replace the
-last command with
+If you want to build Google Test's samples, you should replace the last command
+with
 
     cmake -Dgtest_build_samples=ON ${GTEST_DIR}
 
-If you are on a \*nix system, you should now see a Makefile in the
-current directory.  Just type 'make' to build gtest.
+If you are on a \*nix system, you should now see a Makefile in the current
+directory. Just type 'make' to build gtest.
 
-If you use Windows and have Visual Studio installed, a `gtest.sln` file
-and several `.vcproj` files will be created.  You can then build them
-using Visual Studio.
+If you use Windows and have Visual Studio installed, a `gtest.sln` file and
+several `.vcproj` files will be created. You can then build them using Visual
+Studio.
 
 On Mac OS X with Xcode installed, a `.xcodeproj` file will be generated.
 
-### Legacy Build Scripts ###
+#### Incorporating Into An Existing CMake Project
+
+If you want to use gtest in a project which already uses CMake, then a more
+robust and flexible approach is to build gtest as part of that project directly.
+This is done by making the GoogleTest source code available to the main build
+and adding it using CMake's `add_subdirectory()` command. This has the
+significant advantage that the same compiler and linker settings are used
+between gtest and the rest of your project, so issues associated with using
+incompatible libraries (eg debug/release), etc. are avoided. This is
+particularly useful on Windows. Making GoogleTest's source code available to the
+main build can be done a few different ways:
+
+*   Download the GoogleTest source code manually and place it at a known
+    location. This is the least flexible approach and can make it more difficult
+    to use with continuous integration systems, etc.
+*   Embed the GoogleTest source code as a direct copy in the main project's
+    source tree. This is often the simplest approach, but is also the hardest to
+    keep up to date. Some organizations may not permit this method.
+*   Add GoogleTest as a git submodule or equivalent. This may not always be
+    possible or appropriate. Git submodules, for example, have their own set of
+    advantages and drawbacks.
+*   Use CMake to download GoogleTest as part of the build's configure step. This
+    is just a little more complex, but doesn't have the limitations of the other
+    methods.
+
+The last of the above methods is implemented with a small piece of CMake code in
+a separate file (e.g. `CMakeLists.txt.in`) which is copied to the build area and
+then invoked as a sub-build _during the CMake stage_. That directory is then
+pulled into the main build with `add_subdirectory()`. For example:
+
+New file `CMakeLists.txt.in`:
+
+    cmake_minimum_required(VERSION 2.8.2)
+
+    project(googletest-download NONE)
+
+    include(ExternalProject)
+    ExternalProject_Add(googletest
+      GIT_REPOSITORY    https://github.com/google/googletest.git
+      GIT_TAG           master
+      SOURCE_DIR        "${CMAKE_BINARY_DIR}/googletest-src"
+      BINARY_DIR        "${CMAKE_BINARY_DIR}/googletest-build"
+      CONFIGURE_COMMAND ""
+      BUILD_COMMAND     ""
+      INSTALL_COMMAND   ""
+      TEST_COMMAND      ""
+    )
+
+Existing build's `CMakeLists.txt`:
+
+    # Download and unpack googletest at configure time
+    configure_file(CMakeLists.txt.in googletest-download/CMakeLists.txt)
+    execute_process(COMMAND ${CMAKE_COMMAND} -G "${CMAKE_GENERATOR}" .
+      RESULT_VARIABLE result
+      WORKING_DIRECTORY ${CMAKE_BINARY_DIR}/googletest-download )
+    if(result)
+      message(FATAL_ERROR "CMake step for googletest failed: ${result}")
+    endif()
+    execute_process(COMMAND ${CMAKE_COMMAND} --build .
+      RESULT_VARIABLE result
+      WORKING_DIRECTORY ${CMAKE_BINARY_DIR}/googletest-download )
+    if(result)
+      message(FATAL_ERROR "Build step for googletest failed: ${result}")
+    endif()
+
+    # Prevent overriding the parent project's compiler/linker
+    # settings on Windows
+    set(gtest_force_shared_crt ON CACHE BOOL "" FORCE)
+
+    # Add googletest directly to our build. This defines
+    # the gtest and gtest_main targets.
+    add_subdirectory(${CMAKE_BINARY_DIR}/googletest-src
+                     ${CMAKE_BINARY_DIR}/googletest-build
+                     EXCLUDE_FROM_ALL)
+
+    # The gtest/gtest_main targets carry header search path
+    # dependencies automatically when using CMake 2.8.11 or
+    # later. Otherwise we have to add them here ourselves.
+    if (CMAKE_VERSION VERSION_LESS 2.8.11)
+      include_directories("${gtest_SOURCE_DIR}/include")
+    endif()
+
+    # Now simply link against gtest or gtest_main as needed. Eg
+    add_executable(example example.cpp)
+    target_link_libraries(example gtest_main)
+    add_test(NAME example_test COMMAND example)
+
+Note that this approach requires CMake 2.8.2 or later due to its use of the
+`ExternalProject_Add()` command. The above technique is discussed in more detail
+in [this separate article](http://crascit.com/2015/07/25/cmake-gtest/) which
+also contains a link to a fully generalized implementation of the technique.
+
+##### Visual Studio Dynamic vs Static Runtimes
+
+By default, new Visual Studio projects link the C runtimes dynamically but
+Google Test links them statically. This will generate an error that looks
+something like the following: gtest.lib(gtest-all.obj) : error LNK2038: mismatch
+detected for 'RuntimeLibrary': value 'MTd_StaticDebug' doesn't match value
+'MDd_DynamicDebug' in main.obj
+
+Google Test already has a CMake option for this: `gtest_force_shared_crt`
+
+Enabling this option will make gtest link the runtimes dynamically too, and
+match the project in which it is included.
+
+### Legacy Build Scripts
 
 Before settling on CMake, we have been providing hand-maintained build
-projects/scripts for Visual Studio, Xcode, and Autotools.  While we
-continue to provide them for convenience, they are not actively
-maintained any more.  We highly recommend that you follow the
-instructions in the previous two sections to integrate Google Test
-with your existing build system.
+projects/scripts for Visual Studio, Xcode, and Autotools. While we continue to
+provide them for convenience, they are not actively maintained any more. We
+highly recommend that you follow the instructions in the above sections to
+integrate Google Test with your existing build system.
 
 If you still need to use the legacy build scripts, here's how:
 
-The msvc\ folder contains two solutions with Visual C++ projects.
-Open the `gtest.sln` or `gtest-md.sln` file using Visual Studio, and you
-are ready to build Google Test the same way you build any Visual
-Studio project.  Files that have names ending with -md use DLL
-versions of Microsoft runtime libraries (the /MD or the /MDd compiler
-option).  Files without that suffix use static versions of the runtime
-libraries (the /MT or the /MTd option).  Please note that one must use
-the same option to compile both gtest and the test code.  If you use
-Visual Studio 2005 or above, we recommend the -md version as /MD is
-the default for new projects in these versions of Visual Studio.
+The msvc\ folder contains two solutions with Visual C++ projects. Open the
+`gtest.sln` or `gtest-md.sln` file using Visual Studio, and you are ready to
+build Google Test the same way you build any Visual Studio project. Files that
+have names ending with -md use DLL versions of Microsoft runtime libraries (the
+/MD or the /MDd compiler option). Files without that suffix use static versions
+of the runtime libraries (the /MT or the /MTd option). Please note that one must
+use the same option to compile both gtest and the test code. If you use Visual
+Studio 2005 or above, we recommend the -md version as /MD is the default for new
+projects in these versions of Visual Studio.
 
-On Mac OS X, open the `gtest.xcodeproj` in the `xcode/` folder using
-Xcode.  Build the "gtest" target.  The universal binary framework will
-end up in your selected build directory (selected in the Xcode
-"Preferences..." -> "Building" pane and defaults to xcode/build).
-Alternatively, at the command line, enter:
+On Mac OS X, open the `gtest.xcodeproj` in the `xcode/` folder using Xcode.
+Build the "gtest" target. The universal binary framework will end up in your
+selected build directory (selected in the Xcode "Preferences..." -> "Building"
+pane and defaults to xcode/build). Alternatively, at the command line, enter:
 
     xcodebuild
 
-This will build the "Release" configuration of gtest.framework in your
-default build location.  See the "xcodebuild" man page for more
-information about building different configurations and building in
-different locations.
+This will build the "Release" configuration of gtest.framework in your default
+build location. See the "xcodebuild" man page for more information about
+building different configurations and building in different locations.
 
-If you wish to use the Google Test Xcode project with Xcode 4.x and
-above, you need to either:
+If you wish to use the Google Test Xcode project with Xcode 4.x and above, you
+need to either:
 
- * update the SDK configuration options in xcode/Config/General.xconfig.
-   Comment options `SDKROOT`, `MACOS_DEPLOYMENT_TARGET`, and `GCC_VERSION`. If
-   you choose this route you lose the ability to target earlier versions
-   of MacOS X.
- * Install an SDK for an earlier version. This doesn't appear to be
-   supported by Apple, but has been reported to work
-   (http://stackoverflow.com/questions/5378518).
+*   update the SDK configuration options in xcode/Config/General.xconfig.
+    Comment options `SDKROOT`, `MACOS_DEPLOYMENT_TARGET`, and `GCC_VERSION`. If
+    you choose this route you lose the ability to target earlier versions of
+    MacOS X.
+*   Install an SDK for an earlier version. This doesn't appear to be supported
+    by Apple, but has been reported to work
+    (http://stackoverflow.com/questions/5378518).
 
-### Tweaking Google Test ###
+### Tweaking Google Test
 
-Google Test can be used in diverse environments.  The default
-configuration may not work (or may not work well) out of the box in
-some environments.  However, you can easily tweak Google Test by
-defining control macros on the compiler command line.  Generally,
-these macros are named like `GTEST_XYZ` and you define them to either 1
-or 0 to enable or disable a certain feature.
+Google Test can be used in diverse environments. The default configuration may
+not work (or may not work well) out of the box in some environments. However,
+you can easily tweak Google Test by defining control macros on the compiler
+command line. Generally, these macros are named like `GTEST_XYZ` and you define
+them to either 1 or 0 to enable or disable a certain feature.
 
-We list the most frequently used macros below.  For a complete list,
-see file [include/gtest/internal/gtest-port.h](include/gtest/internal/gtest-port.h).
+We list the most frequently used macros below. For a complete list, see file
+[include/gtest/internal/gtest-port.h](https://github.com/google/googletest/blob/master/include/gtest/internal/gtest-port.h).
 
-### Choosing a TR1 Tuple Library ###
+### Choosing a TR1 Tuple Library
 
-Some Google Test features require the C++ Technical Report 1 (TR1)
-tuple library, which is not yet available with all compilers.  The
-good news is that Google Test implements a subset of TR1 tuple that's
-enough for its own need, and will automatically use this when the
-compiler doesn't provide TR1 tuple.
+Some Google Test features require the C++ Technical Report 1 (TR1) tuple
+library, which is not yet available with all compilers. The good news is that
+Google Test implements a subset of TR1 tuple that's enough for its own need, and
+will automatically use this when the compiler doesn't provide TR1 tuple.
 
-Usually you don't need to care about which tuple library Google Test
-uses.  However, if your project already uses TR1 tuple, you need to
-tell Google Test to use the same TR1 tuple library the rest of your
-project uses, or the two tuple implementations will clash.  To do
-that, add
+Usually you don't need to care about which tuple library Google Test uses.
+However, if your project already uses TR1 tuple, you need to tell Google Test to
+use the same TR1 tuple library the rest of your project uses, or the two tuple
+implementations will clash. To do that, add
 
     -DGTEST_USE_OWN_TR1_TUPLE=0
 
-to the compiler flags while compiling Google Test and your tests.  If
-you want to force Google Test to use its own tuple library, just add
+to the compiler flags while compiling Google Test and your tests. If you want to
+force Google Test to use its own tuple library, just add
 
     -DGTEST_USE_OWN_TR1_TUPLE=1
 
@@ -167,15 +268,15 @@
 
 and all features using tuple will be disabled.
 
-### Multi-threaded Tests ###
+### Multi-threaded Tests
 
-Google Test is thread-safe where the pthread library is available.
-After `#include "gtest/gtest.h"`, you can check the `GTEST_IS_THREADSAFE`
-macro to see whether this is the case (yes if the macro is `#defined` to
-1, no if it's undefined.).
+Google Test is thread-safe where the pthread library is available. After
+`#include "gtest/gtest.h"`, you can check the `GTEST_IS_THREADSAFE` macro to see
+whether this is the case (yes if the macro is `#defined` to 1, no if it's
+undefined.).
 
-If Google Test doesn't correctly detect whether pthread is available
-in your environment, you can force it with
+If Google Test doesn't correctly detect whether pthread is available in your
+environment, you can force it with
 
     -DGTEST_HAS_PTHREAD=1
 
@@ -183,26 +284,24 @@
 
     -DGTEST_HAS_PTHREAD=0
 
-When Google Test uses pthread, you may need to add flags to your
-compiler and/or linker to select the pthread library, or you'll get
-link errors.  If you use the CMake script or the deprecated Autotools
-script, this is taken care of for you.  If you use your own build
-script, you'll need to read your compiler and linker's manual to
-figure out what flags to add.
+When Google Test uses pthread, you may need to add flags to your compiler and/or
+linker to select the pthread library, or you'll get link errors. If you use the
+CMake script or the deprecated Autotools script, this is taken care of for you.
+If you use your own build script, you'll need to read your compiler and linker's
+manual to figure out what flags to add.
 
-### As a Shared Library (DLL) ###
+### As a Shared Library (DLL)
 
-Google Test is compact, so most users can build and link it as a
-static library for the simplicity.  You can choose to use Google Test
-as a shared library (known as a DLL on Windows) if you prefer.
+Google Test is compact, so most users can build and link it as a static library
+for the simplicity. You can choose to use Google Test as a shared library (known
+as a DLL on Windows) if you prefer.
 
 To compile *gtest* as a shared library, add
 
     -DGTEST_CREATE_SHARED_LIBRARY=1
 
-to the compiler flags.  You'll also need to tell the linker to produce
-a shared library instead - consult your linker's manual for how to do
-it.
+to the compiler flags. You'll also need to tell the linker to produce a shared
+library instead - consult your linker's manual for how to do it.
 
 To compile your *tests* that use the gtest shared library, add
 
@@ -210,31 +309,28 @@
 
 to the compiler flags.
 
-Note: while the above steps aren't technically necessary today when
-using some compilers (e.g. GCC), they may become necessary in the
-future, if we decide to improve the speed of loading the library (see
-<http://gcc.gnu.org/wiki/Visibility> for details).  Therefore you are
-recommended to always add the above flags when using Google Test as a
-shared library.  Otherwise a future release of Google Test may break
-your build script.
+Note: while the above steps aren't technically necessary today when using some
+compilers (e.g. GCC), they may become necessary in the future, if we decide to
+improve the speed of loading the library (see
+<http://gcc.gnu.org/wiki/Visibility> for details). Therefore you are recommended
+to always add the above flags when using Google Test as a shared library.
+Otherwise a future release of Google Test may break your build script.
 
-### Avoiding Macro Name Clashes ###
+### Avoiding Macro Name Clashes
 
-In C++, macros don't obey namespaces.  Therefore two libraries that
-both define a macro of the same name will clash if you `#include` both
-definitions.  In case a Google Test macro clashes with another
-library, you can force Google Test to rename its macro to avoid the
-conflict.
+In C++, macros don't obey namespaces. Therefore two libraries that both define a
+macro of the same name will clash if you `#include` both definitions. In case a
+Google Test macro clashes with another library, you can force Google Test to
+rename its macro to avoid the conflict.
 
-Specifically, if both Google Test and some other code define macro
-FOO, you can add
+Specifically, if both Google Test and some other code define macro FOO, you can
+add
 
     -DGTEST_DONT_DEFINE_FOO=1
 
-to the compiler flags to tell Google Test to change the macro's name
-from `FOO` to `GTEST_FOO`.  Currently `FOO` can be `FAIL`, `SUCCEED`,
-or `TEST`.  For example, with `-DGTEST_DONT_DEFINE_TEST=1`, you'll
-need to write
+to the compiler flags to tell Google Test to change the macro's name from `FOO`
+to `GTEST_FOO`. Currently `FOO` can be `FAIL`, `SUCCEED`, or `TEST`. For
+example, with `-DGTEST_DONT_DEFINE_TEST=1`, you'll need to write
 
     GTEST_TEST(SomeTest, DoesThis) { ... }
 
@@ -243,38 +339,3 @@
     TEST(SomeTest, DoesThis) { ... }
 
 in order to define a test.
-
-## Developing Google Test ##
-
-This section discusses how to make your own changes to Google Test.
-
-### Testing Google Test Itself ###
-
-To make sure your changes work as intended and don't break existing
-functionality, you'll want to compile and run Google Test's own tests.
-For that you can use CMake:
-
-    mkdir mybuild
-    cd mybuild
-    cmake -Dgtest_build_tests=ON ${GTEST_DIR}
-
-Make sure you have Python installed, as some of Google Test's tests
-are written in Python.  If the cmake command complains about not being
-able to find Python (`Could NOT find PythonInterp (missing:
-PYTHON_EXECUTABLE)`), try telling it explicitly where your Python
-executable can be found:
-
-    cmake -DPYTHON_EXECUTABLE=path/to/python -Dgtest_build_tests=ON ${GTEST_DIR}
-
-Next, you can build Google Test and all of its own tests.  On \*nix,
-this is usually done by 'make'.  To run the tests, do
-
-    make test
-
-All tests should pass.
-
-Normally you don't need to worry about regenerating the source files,
-unless you need to modify them.  In that case, you should modify the
-corresponding .pump files instead and run the pump.py Python script to
-regenerate them.  You can find pump.py in the [scripts/](scripts/) directory.
-Read the [Pump manual](docs/PumpManual.md) for how to use it.
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-death-test.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-death-test.h
index 957a69c..20c54d8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-death-test.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-death-test.h
@@ -26,14 +26,14 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
 //
-// Author: wan@google.com (Zhanyong Wan)
-//
-// The Google C++ Testing Framework (Google Test)
+// The Google C++ Testing and Mocking Framework (Google Test)
 //
 // This header file defines the public API for death tests.  It is
 // #included by gtest.h so a user doesn't need to include this
 // directly.
+// GOOGLETEST_CM0001 DO NOT DELETE
 
 #ifndef GTEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_
 #define GTEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_
@@ -99,10 +99,11 @@
 //
 // On the regular expressions used in death tests:
 //
+//   GOOGLETEST_CM0005 DO NOT DELETE
 //   On POSIX-compliant systems (*nix), we use the <regex.h> library,
 //   which uses the POSIX extended regex syntax.
 //
-//   On other platforms (e.g. Windows), we only support a simple regex
+//   On other platforms (e.g. Windows or Mac), we only support a simple regex
 //   syntax implemented as part of Google Test.  This limited
 //   implementation should be enough most of the time when writing
 //   death tests; though it lacks many features you can find in PCRE
@@ -160,7 +161,7 @@
 //   is rarely a problem as people usually don't put the test binary
 //   directory in PATH.
 //
-// TODO(wan@google.com): make thread-safe death tests search the PATH.
+// FIXME: make thread-safe death tests search the PATH.
 
 // Asserts that a given statement causes the program to exit, with an
 // integer exit status that satisfies predicate, and emitting error output
@@ -198,9 +199,10 @@
   const int exit_code_;
 };
 
-# if !GTEST_OS_WINDOWS
+# if !GTEST_OS_WINDOWS && !GTEST_OS_FUCHSIA
 // Tests that an exit code describes an exit due to termination by a
 // given signal.
+// GOOGLETEST_CM0006 DO NOT DELETE
 class GTEST_API_ KilledBySignal {
  public:
   explicit KilledBySignal(int signum);
@@ -272,6 +274,54 @@
 # endif  // NDEBUG for EXPECT_DEBUG_DEATH
 #endif  // GTEST_HAS_DEATH_TEST
 
+// This macro is used for implementing macros such as
+// EXPECT_DEATH_IF_SUPPORTED and ASSERT_DEATH_IF_SUPPORTED on systems where
+// death tests are not supported. Those macros must compile on such systems
+// iff EXPECT_DEATH and ASSERT_DEATH compile with the same parameters on
+// systems that support death tests. This allows one to write such a macro
+// on a system that does not support death tests and be sure that it will
+// compile on a death-test supporting system. It is exposed publicly so that
+// systems that have death-tests with stricter requirements than
+// GTEST_HAS_DEATH_TEST can write their own equivalent of
+// EXPECT_DEATH_IF_SUPPORTED and ASSERT_DEATH_IF_SUPPORTED.
+//
+// Parameters:
+//   statement -  A statement that a macro such as EXPECT_DEATH would test
+//                for program termination. This macro has to make sure this
+//                statement is compiled but not executed, to ensure that
+//                EXPECT_DEATH_IF_SUPPORTED compiles with a certain
+//                parameter iff EXPECT_DEATH compiles with it.
+//   regex     -  A regex that a macro such as EXPECT_DEATH would use to test
+//                the output of statement.  This parameter has to be
+//                compiled but not evaluated by this macro, to ensure that
+//                this macro only accepts expressions that a macro such as
+//                EXPECT_DEATH would accept.
+//   terminator - Must be an empty statement for EXPECT_DEATH_IF_SUPPORTED
+//                and a return statement for ASSERT_DEATH_IF_SUPPORTED.
+//                This ensures that ASSERT_DEATH_IF_SUPPORTED will not
+//                compile inside functions where ASSERT_DEATH doesn't
+//                compile.
+//
+//  The branch that has an always false condition is used to ensure that
+//  statement and regex are compiled (and thus syntactically correct) but
+//  never executed. The unreachable code macro protects the terminator
+//  statement from generating an 'unreachable code' warning in case
+//  statement unconditionally returns or throws. The Message constructor at
+//  the end allows the syntax of streaming additional messages into the
+//  macro, for compilational compatibility with EXPECT_DEATH/ASSERT_DEATH.
+# define GTEST_UNSUPPORTED_DEATH_TEST(statement, regex, terminator) \
+    GTEST_AMBIGUOUS_ELSE_BLOCKER_ \
+    if (::testing::internal::AlwaysTrue()) { \
+      GTEST_LOG_(WARNING) \
+          << "Death tests are not supported on this platform.\n" \
+          << "Statement '" #statement "' cannot be verified."; \
+    } else if (::testing::internal::AlwaysFalse()) { \
+      ::testing::internal::RE::PartialMatch(".*", (regex)); \
+      GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \
+      terminator; \
+    } else \
+      ::testing::Message()
+
 // EXPECT_DEATH_IF_SUPPORTED(statement, regex) and
 // ASSERT_DEATH_IF_SUPPORTED(statement, regex) expand to real death tests if
 // death tests are supported; otherwise they just issue a warning.  This is
@@ -284,9 +334,9 @@
     ASSERT_DEATH(statement, regex)
 #else
 # define EXPECT_DEATH_IF_SUPPORTED(statement, regex) \
-    GTEST_UNSUPPORTED_DEATH_TEST_(statement, regex, )
+    GTEST_UNSUPPORTED_DEATH_TEST(statement, regex, )
 # define ASSERT_DEATH_IF_SUPPORTED(statement, regex) \
-    GTEST_UNSUPPORTED_DEATH_TEST_(statement, regex, return)
+    GTEST_UNSUPPORTED_DEATH_TEST(statement, regex, return)
 #endif
 
 }  // namespace testing
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-message.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-message.h
index fe879bc..5ca0416 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-message.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-message.h
@@ -26,10 +26,9 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
 //
-// Author: wan@google.com (Zhanyong Wan)
-//
-// The Google C++ Testing Framework (Google Test)
+// The Google C++ Testing and Mocking Framework (Google Test)
 //
 // This header file defines the Message class.
 //
@@ -43,6 +42,8 @@
 // to CHANGE WITHOUT NOTICE.  Therefore DO NOT DEPEND ON IT in a user
 // program!
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_GTEST_MESSAGE_H_
 #define GTEST_INCLUDE_GTEST_GTEST_MESSAGE_H_
 
@@ -50,6 +51,9 @@
 
 #include "gtest/internal/gtest-port.h"
 
+GTEST_DISABLE_MSC_WARNINGS_PUSH_(4251 \
+/* class A needs to have dll-interface to be used by clients of class B */)
+
 // Ensures that there is at least one operator<< in the global namespace.
 // See Message& operator<<(...) below for why.
 void operator<<(const testing::internal::Secret&, int);
@@ -196,7 +200,6 @@
   std::string GetString() const;
 
  private:
-
 #if GTEST_OS_SYMBIAN
   // These are needed as the Nokia Symbian Compiler cannot decide between
   // const T& and const T* in a function template. The Nokia compiler _can_
@@ -247,4 +250,6 @@
 }  // namespace internal
 }  // namespace testing
 
+GTEST_DISABLE_MSC_WARNINGS_POP_()  //  4251
+
 #endif  // GTEST_INCLUDE_GTEST_GTEST_MESSAGE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-param-test.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-param-test.h
index 038f9ba..3e95e43 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-param-test.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-param-test.h
@@ -31,13 +31,12 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 //
-// Authors: vladl@google.com (Vlad Losev)
-//
 // Macros and functions for implementing parameterized tests
-// in Google C++ Testing Framework (Google Test)
+// in Google C++ Testing and Mocking Framework (Google Test)
 //
 // This file is generated by a SCRIPT.  DO NOT EDIT BY HAND!
 //
+// GOOGLETEST_CM0001 DO NOT DELETE
 #ifndef GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_
 #define GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_
 
@@ -79,7 +78,7 @@
 // Finally, you can use INSTANTIATE_TEST_CASE_P to instantiate the test
 // case with any set of parameters you want. Google Test defines a number
 // of functions for generating test parameters. They return what we call
-// (surprise!) parameter generators. Here is a  summary of them, which
+// (surprise!) parameter generators. Here is a summary of them, which
 // are all in the testing namespace:
 //
 //
@@ -185,15 +184,10 @@
 # include <utility>
 #endif
 
-// scripts/fuse_gtest.py depends on gtest's own header being #included
-// *unconditionally*.  Therefore these #includes cannot be moved
-// inside #if GTEST_HAS_PARAM_TEST.
 #include "gtest/internal/gtest-internal.h"
 #include "gtest/internal/gtest-param-util.h"
 #include "gtest/internal/gtest-param-util-generated.h"
 
-#if GTEST_HAS_PARAM_TEST
-
 namespace testing {
 
 // Functions producing parameter generators.
@@ -273,7 +267,7 @@
 // each with C-string values of "foo", "bar", and "baz":
 //
 // const char* strings[] = {"foo", "bar", "baz"};
-// INSTANTIATE_TEST_CASE_P(StringSequence, SrtingTest, ValuesIn(strings));
+// INSTANTIATE_TEST_CASE_P(StringSequence, StringTest, ValuesIn(strings));
 //
 // This instantiates tests from test case StlStringTest
 // each with STL strings with values "a" and "b":
@@ -1375,8 +1369,6 @@
 }
 # endif  // GTEST_HAS_COMBINE
 
-
-
 # define TEST_P(test_case_name, test_name) \
   class GTEST_TEST_CLASS_NAME_(test_case_name, test_name) \
       : public test_case_name { \
@@ -1390,8 +1382,8 @@
               #test_case_name, \
               ::testing::internal::CodeLocation(\
                   __FILE__, __LINE__))->AddTestPattern(\
-                      #test_case_name, \
-                      #test_name, \
+                      GTEST_STRINGIFY_(test_case_name), \
+                      GTEST_STRINGIFY_(test_name), \
                       new ::testing::internal::TestMetaFactory< \
                           GTEST_TEST_CLASS_NAME_(\
                               test_case_name, test_name)>()); \
@@ -1412,21 +1404,21 @@
 // type testing::TestParamInfo<class ParamType>, and return std::string.
 //
 // testing::PrintToStringParamName is a builtin test suffix generator that
-// returns the value of testing::PrintToString(GetParam()). It does not work
-// for std::string or C strings.
+// returns the value of testing::PrintToString(GetParam()).
 //
 // Note: test names must be non-empty, unique, and may only contain ASCII
-// alphanumeric characters or underscore.
+// alphanumeric characters or underscore. Because PrintToString adds quotes
+// to std::string and C strings, it won't work for these types.
 
 # define INSTANTIATE_TEST_CASE_P(prefix, test_case_name, generator, ...) \
-  ::testing::internal::ParamGenerator<test_case_name::ParamType> \
+  static ::testing::internal::ParamGenerator<test_case_name::ParamType> \
       gtest_##prefix##test_case_name##_EvalGenerator_() { return generator; } \
-  ::std::string gtest_##prefix##test_case_name##_EvalGenerateName_( \
+  static ::std::string gtest_##prefix##test_case_name##_EvalGenerateName_( \
       const ::testing::TestParamInfo<test_case_name::ParamType>& info) { \
     return ::testing::internal::GetParamNameGen<test_case_name::ParamType> \
         (__VA_ARGS__)(info); \
   } \
-  int gtest_##prefix##test_case_name##_dummy_ GTEST_ATTRIBUTE_UNUSED_ = \
+  static int gtest_##prefix##test_case_name##_dummy_ GTEST_ATTRIBUTE_UNUSED_ = \
       ::testing::UnitTest::GetInstance()->parameterized_test_registry(). \
           GetTestCasePatternHolder<test_case_name>(\
               #test_case_name, \
@@ -1439,6 +1431,4 @@
 
 }  // namespace testing
 
-#endif  // GTEST_HAS_PARAM_TEST
-
 #endif  // GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-param-test.h.pump b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-param-test.h.pump
index 3078d6d..274f2b3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-param-test.h.pump
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-param-test.h.pump
@@ -30,13 +30,12 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 //
-// Authors: vladl@google.com (Vlad Losev)
-//
 // Macros and functions for implementing parameterized tests
-// in Google C++ Testing Framework (Google Test)
+// in Google C++ Testing and Mocking Framework (Google Test)
 //
 // This file is generated by a SCRIPT.  DO NOT EDIT BY HAND!
 //
+// GOOGLETEST_CM0001 DO NOT DELETE
 #ifndef GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_
 #define GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_
 
@@ -78,7 +77,7 @@
 // Finally, you can use INSTANTIATE_TEST_CASE_P to instantiate the test
 // case with any set of parameters you want. Google Test defines a number
 // of functions for generating test parameters. They return what we call
-// (surprise!) parameter generators. Here is a  summary of them, which
+// (surprise!) parameter generators. Here is a summary of them, which
 // are all in the testing namespace:
 //
 //
@@ -184,15 +183,10 @@
 # include <utility>
 #endif
 
-// scripts/fuse_gtest.py depends on gtest's own header being #included
-// *unconditionally*.  Therefore these #includes cannot be moved
-// inside #if GTEST_HAS_PARAM_TEST.
 #include "gtest/internal/gtest-internal.h"
 #include "gtest/internal/gtest-param-util.h"
 #include "gtest/internal/gtest-param-util-generated.h"
 
-#if GTEST_HAS_PARAM_TEST
-
 namespace testing {
 
 // Functions producing parameter generators.
@@ -272,7 +266,7 @@
 // each with C-string values of "foo", "bar", and "baz":
 //
 // const char* strings[] = {"foo", "bar", "baz"};
-// INSTANTIATE_TEST_CASE_P(StringSequence, SrtingTest, ValuesIn(strings));
+// INSTANTIATE_TEST_CASE_P(StringSequence, StringTest, ValuesIn(strings));
 //
 // This instantiates tests from test case StlStringTest
 // each with STL strings with values "a" and "b":
@@ -441,8 +435,6 @@
 ]]
 # endif  // GTEST_HAS_COMBINE
 
-
-
 # define TEST_P(test_case_name, test_name) \
   class GTEST_TEST_CLASS_NAME_(test_case_name, test_name) \
       : public test_case_name { \
@@ -456,8 +448,8 @@
               #test_case_name, \
               ::testing::internal::CodeLocation(\
                   __FILE__, __LINE__))->AddTestPattern(\
-                      #test_case_name, \
-                      #test_name, \
+                      GTEST_STRINGIFY_(test_case_name), \
+                      GTEST_STRINGIFY_(test_name), \
                       new ::testing::internal::TestMetaFactory< \
                           GTEST_TEST_CLASS_NAME_(\
                               test_case_name, test_name)>()); \
@@ -485,14 +477,14 @@
 // to std::string and C strings, it won't work for these types.
 
 # define INSTANTIATE_TEST_CASE_P(prefix, test_case_name, generator, ...) \
-  ::testing::internal::ParamGenerator<test_case_name::ParamType> \
+  static ::testing::internal::ParamGenerator<test_case_name::ParamType> \
       gtest_##prefix##test_case_name##_EvalGenerator_() { return generator; } \
-  ::std::string gtest_##prefix##test_case_name##_EvalGenerateName_( \
+  static ::std::string gtest_##prefix##test_case_name##_EvalGenerateName_( \
       const ::testing::TestParamInfo<test_case_name::ParamType>& info) { \
     return ::testing::internal::GetParamNameGen<test_case_name::ParamType> \
         (__VA_ARGS__)(info); \
   } \
-  int gtest_##prefix##test_case_name##_dummy_ GTEST_ATTRIBUTE_UNUSED_ = \
+  static int gtest_##prefix##test_case_name##_dummy_ GTEST_ATTRIBUTE_UNUSED_ = \
       ::testing::UnitTest::GetInstance()->parameterized_test_registry(). \
           GetTestCasePatternHolder<test_case_name>(\
               #test_case_name, \
@@ -505,6 +497,4 @@
 
 }  // namespace testing
 
-#endif  // GTEST_HAS_PARAM_TEST
-
 #endif  // GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-printers.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-printers.h
index 8a33164..51865f8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-printers.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-printers.h
@@ -26,10 +26,9 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: wan@google.com (Zhanyong Wan)
 
-// Google Test - The Google C++ Testing Framework
+
+// Google Test - The Google C++ Testing and Mocking Framework
 //
 // This file implements a universal value printer that can print a
 // value of any type T:
@@ -46,6 +45,10 @@
 //   2. operator<<(ostream&, const T&) defined in either foo or the
 //      global namespace.
 //
+// However if T is an STL-style container then it is printed element-wise
+// unless foo::PrintTo(const T&, ostream*) is defined. Note that
+// operator<<() is ignored for container types.
+//
 // If none of the above is defined, it will print the debug string of
 // the value if it is a protocol buffer, or print the raw bytes in the
 // value otherwise.
@@ -92,6 +95,8 @@
 // being defined as many user-defined container types don't have
 // value_type.
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_
 #define GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_
 
@@ -107,6 +112,12 @@
 # include <tuple>
 #endif
 
+#if GTEST_HAS_ABSL
+#include "absl/strings/string_view.h"
+#include "absl/types/optional.h"
+#include "absl/types/variant.h"
+#endif  // GTEST_HAS_ABSL
+
 namespace testing {
 
 // Definitions in the 'internal' and 'internal2' name spaces are
@@ -125,7 +136,11 @@
   kProtobuf,              // a protobuf type
   kConvertibleToInteger,  // a type implicitly convertible to BiggestInt
                           // (e.g. a named or unnamed enum type)
-  kOtherType              // anything else
+#if GTEST_HAS_ABSL
+  kConvertibleToStringView,  // a type implicitly convertible to
+                             // absl::string_view
+#endif
+  kOtherType  // anything else
 };
 
 // TypeWithoutFormatter<T, kTypeKind>::PrintValue(value, os) is called
@@ -137,7 +152,8 @@
  public:
   // This default version is called when kTypeKind is kOtherType.
   static void PrintValue(const T& value, ::std::ostream* os) {
-    PrintBytesInObjectTo(reinterpret_cast<const unsigned char*>(&value),
+    PrintBytesInObjectTo(static_cast<const unsigned char*>(
+                             reinterpret_cast<const void*>(&value)),
                          sizeof(value), os);
   }
 };
@@ -151,10 +167,10 @@
 class TypeWithoutFormatter<T, kProtobuf> {
  public:
   static void PrintValue(const T& value, ::std::ostream* os) {
-    const ::testing::internal::string short_str = value.ShortDebugString();
-    const ::testing::internal::string pretty_str =
-        short_str.length() <= kProtobufOneLinerMaxLength ?
-        short_str : ("\n" + value.DebugString());
+    std::string pretty_str = value.ShortDebugString();
+    if (pretty_str.length() > kProtobufOneLinerMaxLength) {
+      pretty_str = "\n" + value.DebugString();
+    }
     *os << ("<" + pretty_str + ">");
   }
 };
@@ -175,6 +191,19 @@
   }
 };
 
+#if GTEST_HAS_ABSL
+template <typename T>
+class TypeWithoutFormatter<T, kConvertibleToStringView> {
+ public:
+  // Since T has neither operator<< nor PrintTo() but can be implicitly
+  // converted to absl::string_view, we print it as a absl::string_view.
+  //
+  // Note: the implementation is further below, as it depends on
+  // internal::PrintTo symbol which is defined later in the file.
+  static void PrintValue(const T& value, ::std::ostream* os);
+};
+#endif
+
 // Prints the given value to the given ostream.  If the value is a
 // protocol message, its debug string is printed; if it's an enum or
 // of a type implicitly convertible to BiggestInt, it's printed as an
@@ -202,10 +231,19 @@
 template <typename Char, typename CharTraits, typename T>
 ::std::basic_ostream<Char, CharTraits>& operator<<(
     ::std::basic_ostream<Char, CharTraits>& os, const T& x) {
-  TypeWithoutFormatter<T,
-      (internal::IsAProtocolMessage<T>::value ? kProtobuf :
-       internal::ImplicitlyConvertible<const T&, internal::BiggestInt>::value ?
-       kConvertibleToInteger : kOtherType)>::PrintValue(x, &os);
+  TypeWithoutFormatter<T, (internal::IsAProtocolMessage<T>::value
+                               ? kProtobuf
+                               : internal::ImplicitlyConvertible<
+                                     const T&, internal::BiggestInt>::value
+                                     ? kConvertibleToInteger
+                                     :
+#if GTEST_HAS_ABSL
+                                     internal::ImplicitlyConvertible<
+                                         const T&, absl::string_view>::value
+                                         ? kConvertibleToStringView
+                                         :
+#endif
+                                         kOtherType)>::PrintValue(x, &os);
   return os;
 }
 
@@ -364,11 +402,18 @@
 template <typename T>
 void UniversalPrint(const T& value, ::std::ostream* os);
 
+enum DefaultPrinterType {
+  kPrintContainer,
+  kPrintPointer,
+  kPrintFunctionPointer,
+  kPrintOther,
+};
+template <DefaultPrinterType type> struct WrapPrinterType {};
+
 // Used to print an STL-style container when the user doesn't define
 // a PrintTo() for it.
 template <typename C>
-void DefaultPrintTo(IsContainer /* dummy */,
-                    false_type /* is not a pointer */,
+void DefaultPrintTo(WrapPrinterType<kPrintContainer> /* dummy */,
                     const C& container, ::std::ostream* os) {
   const size_t kMaxCount = 32;  // The maximum number of elements to print.
   *os << '{';
@@ -401,40 +446,34 @@
 // implementation-defined.  Therefore they will be printed as raw
 // bytes.)
 template <typename T>
-void DefaultPrintTo(IsNotContainer /* dummy */,
-                    true_type /* is a pointer */,
+void DefaultPrintTo(WrapPrinterType<kPrintPointer> /* dummy */,
                     T* p, ::std::ostream* os) {
   if (p == NULL) {
     *os << "NULL";
   } else {
-    // C++ doesn't allow casting from a function pointer to any object
-    // pointer.
-    //
-    // IsTrue() silences warnings: "Condition is always true",
-    // "unreachable code".
-    if (IsTrue(ImplicitlyConvertible<T*, const void*>::value)) {
-      // T is not a function type.  We just call << to print p,
-      // relying on ADL to pick up user-defined << for their pointer
-      // types, if any.
-      *os << p;
-    } else {
-      // T is a function type, so '*os << p' doesn't do what we want
-      // (it just prints p as bool).  We want to print p as a const
-      // void*.  However, we cannot cast it to const void* directly,
-      // even using reinterpret_cast, as earlier versions of gcc
-      // (e.g. 3.4.5) cannot compile the cast when p is a function
-      // pointer.  Casting to UInt64 first solves the problem.
-      *os << reinterpret_cast<const void*>(
-          reinterpret_cast<internal::UInt64>(p));
-    }
+    // T is not a function type.  We just call << to print p,
+    // relying on ADL to pick up user-defined << for their pointer
+    // types, if any.
+    *os << p;
+  }
+}
+template <typename T>
+void DefaultPrintTo(WrapPrinterType<kPrintFunctionPointer> /* dummy */,
+                    T* p, ::std::ostream* os) {
+  if (p == NULL) {
+    *os << "NULL";
+  } else {
+    // T is a function type, so '*os << p' doesn't do what we want
+    // (it just prints p as bool).  We want to print p as a const
+    // void*.
+    *os << reinterpret_cast<const void*>(p);
   }
 }
 
 // Used to print a non-container, non-pointer value when the user
 // doesn't define PrintTo() for it.
 template <typename T>
-void DefaultPrintTo(IsNotContainer /* dummy */,
-                    false_type /* is not a pointer */,
+void DefaultPrintTo(WrapPrinterType<kPrintOther> /* dummy */,
                     const T& value, ::std::ostream* os) {
   ::testing_internal::DefaultPrintNonContainerTo(value, os);
 }
@@ -452,11 +491,8 @@
 // wants).
 template <typename T>
 void PrintTo(const T& value, ::std::ostream* os) {
-  // DefaultPrintTo() is overloaded.  The type of its first two
-  // arguments determine which version will be picked.  If T is an
-  // STL-style container, the version for container will be called; if
-  // T is a pointer, the pointer version will be called; otherwise the
-  // generic version will be called.
+  // DefaultPrintTo() is overloaded.  The type of its first argument
+  // determines which version will be picked.
   //
   // Note that we check for container types here, prior to we check
   // for protocol message types in our operator<<.  The rationale is:
@@ -468,13 +504,27 @@
   // elements; therefore we check for container types here to ensure
   // that our format is used.
   //
-  // The second argument of DefaultPrintTo() is needed to bypass a bug
-  // in Symbian's C++ compiler that prevents it from picking the right
-  // overload between:
-  //
-  //   PrintTo(const T& x, ...);
-  //   PrintTo(T* x, ...);
-  DefaultPrintTo(IsContainerTest<T>(0), is_pointer<T>(), value, os);
+  // Note that MSVC and clang-cl do allow an implicit conversion from
+  // pointer-to-function to pointer-to-object, but clang-cl warns on it.
+  // So don't use ImplicitlyConvertible if it can be helped since it will
+  // cause this warning, and use a separate overload of DefaultPrintTo for
+  // function pointers so that the `*os << p` in the object pointer overload
+  // doesn't cause that warning either.
+  DefaultPrintTo(
+      WrapPrinterType <
+                  (sizeof(IsContainerTest<T>(0)) == sizeof(IsContainer)) &&
+              !IsRecursiveContainer<T>::value
+          ? kPrintContainer
+          : !is_pointer<T>::value
+                ? kPrintOther
+#if GTEST_LANG_CXX11
+                : std::is_function<typename std::remove_pointer<T>::type>::value
+#else
+                : !internal::ImplicitlyConvertible<T, const void*>::value
+#endif
+                      ? kPrintFunctionPointer
+                      : kPrintPointer > (),
+      value, os);
 }
 
 // The following list of PrintTo() overloads tells
@@ -581,6 +631,17 @@
 }
 #endif  // GTEST_HAS_STD_WSTRING
 
+#if GTEST_HAS_ABSL
+// Overload for absl::string_view.
+inline void PrintTo(absl::string_view sp, ::std::ostream* os) {
+  PrintTo(::std::string(sp), os);
+}
+#endif  // GTEST_HAS_ABSL
+
+#if GTEST_LANG_CXX11
+inline void PrintTo(std::nullptr_t, ::std::ostream* os) { *os << "(nullptr)"; }
+#endif  // GTEST_LANG_CXX11
+
 #if GTEST_HAS_TR1_TUPLE || GTEST_HAS_STD_TUPLE_
 // Helper function for printing a tuple.  T must be instantiated with
 // a tuple type.
@@ -710,6 +771,48 @@
   GTEST_DISABLE_MSC_WARNINGS_POP_()
 };
 
+#if GTEST_HAS_ABSL
+
+// Printer for absl::optional
+
+template <typename T>
+class UniversalPrinter<::absl::optional<T>> {
+ public:
+  static void Print(const ::absl::optional<T>& value, ::std::ostream* os) {
+    *os << '(';
+    if (!value) {
+      *os << "nullopt";
+    } else {
+      UniversalPrint(*value, os);
+    }
+    *os << ')';
+  }
+};
+
+// Printer for absl::variant
+
+template <typename... T>
+class UniversalPrinter<::absl::variant<T...>> {
+ public:
+  static void Print(const ::absl::variant<T...>& value, ::std::ostream* os) {
+    *os << '(';
+    absl::visit(Visitor{os}, value);
+    *os << ')';
+  }
+
+ private:
+  struct Visitor {
+    template <typename U>
+    void operator()(const U& u) const {
+      *os << "'" << GetTypeName<U>() << "' with value ";
+      UniversalPrint(u, os);
+    }
+    ::std::ostream* os;
+  };
+};
+
+#endif  // GTEST_HAS_ABSL
+
 // UniversalPrintArray(begin, len, os) prints an array of 'len'
 // elements, starting at address 'begin'.
 template <typename T>
@@ -723,7 +826,7 @@
     // If the array has more than kThreshold elements, we'll have to
     // omit some details by printing only the first and the last
     // kChunkSize elements.
-    // TODO(wan@google.com): let the user control the threshold using a flag.
+    // FIXME: let the user control the threshold using a flag.
     if (len <= kThreshold) {
       PrintRawArrayTo(begin, len, os);
     } else {
@@ -805,7 +908,7 @@
     if (str == NULL) {
       *os << "NULL";
     } else {
-      UniversalPrint(string(str), os);
+      UniversalPrint(std::string(str), os);
     }
   }
 };
@@ -856,7 +959,7 @@
   UniversalPrinter<T1>::Print(value, os);
 }
 
-typedef ::std::vector<string> Strings;
+typedef ::std::vector< ::std::string> Strings;
 
 // TuplePolicy<TupleT> must provide:
 // - tuple_size
@@ -875,12 +978,13 @@
   static const size_t tuple_size = ::std::tr1::tuple_size<Tuple>::value;
 
   template <size_t I>
-  struct tuple_element : ::std::tr1::tuple_element<I, Tuple> {};
+  struct tuple_element : ::std::tr1::tuple_element<static_cast<int>(I), Tuple> {
+  };
 
   template <size_t I>
-  static typename AddReference<
-      const typename ::std::tr1::tuple_element<I, Tuple>::type>::type get(
-      const Tuple& tuple) {
+  static typename AddReference<const typename ::std::tr1::tuple_element<
+      static_cast<int>(I), Tuple>::type>::type
+  get(const Tuple& tuple) {
     return ::std::tr1::get<I>(tuple);
   }
 };
@@ -976,6 +1080,16 @@
 
 }  // namespace internal
 
+#if GTEST_HAS_ABSL
+namespace internal2 {
+template <typename T>
+void TypeWithoutFormatter<T, kConvertibleToStringView>::PrintValue(
+    const T& value, ::std::ostream* os) {
+  internal::PrintTo(absl::string_view(value), os);
+}
+}  // namespace internal2
+#endif
+
 template <typename T>
 ::std::string PrintToString(const T& value) {
   ::std::stringstream ss;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-spi.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-spi.h
index f63fa9a..1e89839 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-spi.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-spi.h
@@ -26,17 +26,21 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: wan@google.com (Zhanyong Wan)
+
 //
 // Utilities for testing Google Test itself and code that uses Google Test
 // (e.g. frameworks built on top of Google Test).
 
+// GOOGLETEST_CM0004 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_GTEST_SPI_H_
 #define GTEST_INCLUDE_GTEST_GTEST_SPI_H_
 
 #include "gtest/gtest.h"
 
+GTEST_DISABLE_MSC_WARNINGS_PUSH_(4251 \
+/* class A needs to have dll-interface to be used by clients of class B */)
+
 namespace testing {
 
 // This helper class can be used to mock out Google Test failure reporting
@@ -97,13 +101,12 @@
  public:
   // The constructor remembers the arguments.
   SingleFailureChecker(const TestPartResultArray* results,
-                       TestPartResult::Type type,
-                       const string& substr);
+                       TestPartResult::Type type, const std::string& substr);
   ~SingleFailureChecker();
  private:
   const TestPartResultArray* const results_;
   const TestPartResult::Type type_;
-  const string substr_;
+  const std::string substr_;
 
   GTEST_DISALLOW_COPY_AND_ASSIGN_(SingleFailureChecker);
 };
@@ -112,6 +115,8 @@
 
 }  // namespace testing
 
+GTEST_DISABLE_MSC_WARNINGS_POP_()  //  4251
+
 // A set of macros for testing Google Test assertions or code that's expected
 // to generate Google Test fatal failures.  It verifies that the given
 // statement will cause exactly one fatal Google Test failure with 'substr'
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-test-part.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-test-part.h
index 77eb844..1c7b89e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-test-part.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-test-part.h
@@ -27,8 +27,7 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 //
-// Author: mheule@google.com (Markus Heule)
-//
+// GOOGLETEST_CM0001 DO NOT DELETE
 
 #ifndef GTEST_INCLUDE_GTEST_GTEST_TEST_PART_H_
 #define GTEST_INCLUDE_GTEST_GTEST_TEST_PART_H_
@@ -38,6 +37,9 @@
 #include "gtest/internal/gtest-internal.h"
 #include "gtest/internal/gtest-string.h"
 
+GTEST_DISABLE_MSC_WARNINGS_PUSH_(4251 \
+/* class A needs to have dll-interface to be used by clients of class B */)
+
 namespace testing {
 
 // A copyable object representing the result of a test part (i.e. an
@@ -143,7 +145,7 @@
 };
 
 // This interface knows how to report a test part result.
-class TestPartResultReporterInterface {
+class GTEST_API_ TestPartResultReporterInterface {
  public:
   virtual ~TestPartResultReporterInterface() {}
 
@@ -176,4 +178,6 @@
 
 }  // namespace testing
 
+GTEST_DISABLE_MSC_WARNINGS_POP_()  //  4251
+
 #endif  // GTEST_INCLUDE_GTEST_GTEST_TEST_PART_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-typed-test.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-typed-test.h
index 5f69d56..74bce46 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-typed-test.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest-typed-test.h
@@ -26,8 +26,9 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: wan@google.com (Zhanyong Wan)
+
+
+// GOOGLETEST_CM0001 DO NOT DELETE
 
 #ifndef GTEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_
 #define GTEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_
@@ -82,6 +83,24 @@
 
 TYPED_TEST(FooTest, HasPropertyA) { ... }
 
+// TYPED_TEST_CASE takes an optional third argument which allows specifying a
+// class that generates custom test name suffixes based on the type. This should
+// be a class which has a static template function GetName(int index) returning
+// a string for each type. The provided integer index equals the index of the
+// type in the provided type list. In many cases the index can be ignored.
+//
+// For example:
+//   class MyTypeNames {
+//    public:
+//     template <typename T>
+//     static std::string GetName(int) {
+//       if (std::is_same<T, char>()) return "char";
+//       if (std::is_same<T, int>()) return "int";
+//       if (std::is_same<T, unsigned int>()) return "unsignedInt";
+//     }
+//   };
+//   TYPED_TEST_CASE(FooTest, MyTypes, MyTypeNames);
+
 #endif  // 0
 
 // Type-parameterized tests are abstract test patterns parameterized
@@ -143,6 +162,11 @@
 // If the type list contains only one type, you can write that type
 // directly without Types<...>:
 //   INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, int);
+//
+// Similar to the optional argument of TYPED_TEST_CASE above,
+// INSTANTIATE_TYPED_TEST_CASE_P takes an optional fourth argument which allows
+// generating custom names.
+//   INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, MyTypes, MyTypeNames);
 
 #endif  // 0
 
@@ -159,32 +183,46 @@
 // given test case.
 # define GTEST_TYPE_PARAMS_(TestCaseName) gtest_type_params_##TestCaseName##_
 
+// Expands to the name of the typedef for the NameGenerator, responsible for
+// creating the suffixes of the name.
+#define GTEST_NAME_GENERATOR_(TestCaseName) \
+  gtest_type_params_##TestCaseName##_NameGenerator
+
 // The 'Types' template argument below must have spaces around it
 // since some compilers may choke on '>>' when passing a template
 // instance (e.g. Types<int>)
-# define TYPED_TEST_CASE(CaseName, Types) \
-  typedef ::testing::internal::TypeList< Types >::type \
-      GTEST_TYPE_PARAMS_(CaseName)
+# define TYPED_TEST_CASE(CaseName, Types, ...)                             \
+  typedef ::testing::internal::TypeList< Types >::type GTEST_TYPE_PARAMS_( \
+      CaseName);                                                           \
+  typedef ::testing::internal::NameGeneratorSelector<__VA_ARGS__>::type    \
+      GTEST_NAME_GENERATOR_(CaseName)
 
-# define TYPED_TEST(CaseName, TestName) \
-  template <typename gtest_TypeParam_> \
-  class GTEST_TEST_CLASS_NAME_(CaseName, TestName) \
-      : public CaseName<gtest_TypeParam_> { \
-   private: \
-    typedef CaseName<gtest_TypeParam_> TestFixture; \
-    typedef gtest_TypeParam_ TypeParam; \
-    virtual void TestBody(); \
-  }; \
-  bool gtest_##CaseName##_##TestName##_registered_ GTEST_ATTRIBUTE_UNUSED_ = \
-      ::testing::internal::TypeParameterizedTest< \
-          CaseName, \
-          ::testing::internal::TemplateSel< \
-              GTEST_TEST_CLASS_NAME_(CaseName, TestName)>, \
-          GTEST_TYPE_PARAMS_(CaseName)>::Register(\
-              "", ::testing::internal::CodeLocation(__FILE__, __LINE__), \
-              #CaseName, #TestName, 0); \
-  template <typename gtest_TypeParam_> \
-  void GTEST_TEST_CLASS_NAME_(CaseName, TestName)<gtest_TypeParam_>::TestBody()
+# define TYPED_TEST(CaseName, TestName)                                       \
+  template <typename gtest_TypeParam_>                                        \
+  class GTEST_TEST_CLASS_NAME_(CaseName, TestName)                            \
+      : public CaseName<gtest_TypeParam_> {                                   \
+   private:                                                                   \
+    typedef CaseName<gtest_TypeParam_> TestFixture;                           \
+    typedef gtest_TypeParam_ TypeParam;                                       \
+    virtual void TestBody();                                                  \
+  };                                                                          \
+  static bool gtest_##CaseName##_##TestName##_registered_                     \
+        GTEST_ATTRIBUTE_UNUSED_ =                                             \
+      ::testing::internal::TypeParameterizedTest<                             \
+          CaseName,                                                           \
+          ::testing::internal::TemplateSel<GTEST_TEST_CLASS_NAME_(CaseName,   \
+                                                                  TestName)>, \
+          GTEST_TYPE_PARAMS_(                                                 \
+              CaseName)>::Register("",                                        \
+                                   ::testing::internal::CodeLocation(         \
+                                       __FILE__, __LINE__),                   \
+                                   #CaseName, #TestName, 0,                   \
+                                   ::testing::internal::GenerateNames<        \
+                                       GTEST_NAME_GENERATOR_(CaseName),       \
+                                       GTEST_TYPE_PARAMS_(CaseName)>());      \
+  template <typename gtest_TypeParam_>                                        \
+  void GTEST_TEST_CLASS_NAME_(CaseName,                                       \
+                              TestName)<gtest_TypeParam_>::TestBody()
 
 #endif  // GTEST_HAS_TYPED_TEST
 
@@ -241,22 +279,27 @@
   namespace GTEST_CASE_NAMESPACE_(CaseName) { \
   typedef ::testing::internal::Templates<__VA_ARGS__>::type gtest_AllTests_; \
   } \
-  static const char* const GTEST_REGISTERED_TEST_NAMES_(CaseName) = \
-      GTEST_TYPED_TEST_CASE_P_STATE_(CaseName).VerifyRegisteredTestNames(\
-          __FILE__, __LINE__, #__VA_ARGS__)
+  static const char* const GTEST_REGISTERED_TEST_NAMES_(CaseName) \
+      GTEST_ATTRIBUTE_UNUSED_ = \
+          GTEST_TYPED_TEST_CASE_P_STATE_(CaseName).VerifyRegisteredTestNames( \
+              __FILE__, __LINE__, #__VA_ARGS__)
 
 // The 'Types' template argument below must have spaces around it
 // since some compilers may choke on '>>' when passing a template
 // instance (e.g. Types<int>)
-# define INSTANTIATE_TYPED_TEST_CASE_P(Prefix, CaseName, Types) \
-  bool gtest_##Prefix##_##CaseName GTEST_ATTRIBUTE_UNUSED_ = \
-      ::testing::internal::TypeParameterizedTestCase<CaseName, \
-          GTEST_CASE_NAMESPACE_(CaseName)::gtest_AllTests_, \
-          ::testing::internal::TypeList< Types >::type>::Register(\
-              #Prefix, \
-              ::testing::internal::CodeLocation(__FILE__, __LINE__), \
-              &GTEST_TYPED_TEST_CASE_P_STATE_(CaseName), \
-              #CaseName, GTEST_REGISTERED_TEST_NAMES_(CaseName))
+# define INSTANTIATE_TYPED_TEST_CASE_P(Prefix, CaseName, Types, ...)      \
+  static bool gtest_##Prefix##_##CaseName GTEST_ATTRIBUTE_UNUSED_ =       \
+      ::testing::internal::TypeParameterizedTestCase<                     \
+          CaseName, GTEST_CASE_NAMESPACE_(CaseName)::gtest_AllTests_,     \
+          ::testing::internal::TypeList< Types >::type>::                 \
+          Register(#Prefix,                                               \
+                   ::testing::internal::CodeLocation(__FILE__, __LINE__), \
+                   &GTEST_TYPED_TEST_CASE_P_STATE_(CaseName), #CaseName,  \
+                   GTEST_REGISTERED_TEST_NAMES_(CaseName),                \
+                   ::testing::internal::GenerateNames<                    \
+                       ::testing::internal::NameGeneratorSelector<        \
+                           __VA_ARGS__>::type,                            \
+                       ::testing::internal::TypeList< Types >::type>())
 
 #endif  // GTEST_HAS_TYPED_TEST_P
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest.h
index f846c5b..3b4bb1e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest.h
@@ -26,10 +26,9 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
 //
-// Author: wan@google.com (Zhanyong Wan)
-//
-// The Google C++ Testing Framework (Google Test)
+// The Google C++ Testing and Mocking Framework (Google Test)
 //
 // This header file defines the public API for Google Test.  It should be
 // included by any test program that uses Google Test.
@@ -48,6 +47,8 @@
 // registration from Barthelemy Dagenais' (barthelemy@prologique.com)
 // easyUnit framework.
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_GTEST_H_
 #define GTEST_INCLUDE_GTEST_GTEST_H_
 
@@ -65,6 +66,9 @@
 #include "gtest/gtest-test-part.h"
 #include "gtest/gtest-typed-test.h"
 
+GTEST_DISABLE_MSC_WARNINGS_PUSH_(4251 \
+/* class A needs to have dll-interface to be used by clients of class B */)
+
 // Depending on the platform, different string classes are available.
 // On Linux, in addition to ::std::string, Google also makes use of
 // class ::string, which has the same interface as ::std::string, but
@@ -82,6 +86,15 @@
 
 namespace testing {
 
+// Silence C4100 (unreferenced formal parameter) and 4805
+// unsafe mix of type 'const int' and type 'const bool'
+#ifdef _MSC_VER
+# pragma warning(push)
+# pragma warning(disable:4805)
+# pragma warning(disable:4100)
+#endif
+
+
 // Declares the flags.
 
 // This flag temporary enables the disabled tests.
@@ -103,6 +116,10 @@
 // the tests to run. If the filter is not given all tests are executed.
 GTEST_DECLARE_string_(filter);
 
+// This flag controls whether Google Test installs a signal handler that dumps
+// debugging information when fatal signals are raised.
+GTEST_DECLARE_bool_(install_failure_signal_handler);
+
 // This flag causes the Google Test to list tests. None of the tests listed
 // are actually run if the flag is provided.
 GTEST_DECLARE_bool_(list_tests);
@@ -115,6 +132,9 @@
 // test.
 GTEST_DECLARE_bool_(print_time);
 
+// This flag controls whether Google Test prints UTF8 characters as text.
+GTEST_DECLARE_bool_(print_utf8);
+
 // This flag specifies the random number seed.
 GTEST_DECLARE_int32_(random_seed);
 
@@ -135,7 +155,7 @@
 
 // When this flag is specified, a failed assertion will throw an
 // exception if exceptions are enabled, or exit the program with a
-// non-zero code otherwise.
+// non-zero code otherwise. For use with an external test framework.
 GTEST_DECLARE_bool_(throw_on_failure);
 
 // When this flag is set with a "host:port" string, on supported
@@ -143,6 +163,10 @@
 // the specified host machine.
 GTEST_DECLARE_string_(stream_result_to);
 
+#if GTEST_USE_OWN_FLAGFILE_FLAG_
+GTEST_DECLARE_string_(flagfile);
+#endif  // GTEST_USE_OWN_FLAGFILE_FLAG_
+
 // The upper limit for valid stack trace depths.
 const int kMaxStackTraceDepth = 100;
 
@@ -160,6 +184,7 @@
 class TestEventRepeater;
 class UnitTestRecordPropertyTestHelper;
 class WindowsDeathTest;
+class FuchsiaDeathTest;
 class UnitTestImpl* GetUnitTestImpl();
 void ReportFailureInUnknownLocation(TestPartResult::Type result_type,
                                     const std::string& message);
@@ -259,7 +284,9 @@
   // Used in EXPECT_TRUE/FALSE(assertion_result).
   AssertionResult(const AssertionResult& other);
 
+#if defined(_MSC_VER) && _MSC_VER < 1910
   GTEST_DISABLE_MSC_WARNINGS_PUSH_(4800 /* forcing value to bool */)
+#endif
 
   // Used in the EXPECT_TRUE/FALSE(bool_expression).
   //
@@ -276,7 +303,9 @@
           /*enabler*/ = NULL)
       : success_(success) {}
 
+#if defined(_MSC_VER) && _MSC_VER < 1910
   GTEST_DISABLE_MSC_WARNINGS_POP_()
+#endif
 
   // Assignment operator.
   AssertionResult& operator=(AssertionResult other) {
@@ -297,7 +326,7 @@
   const char* message() const {
     return message_.get() != NULL ?  message_->c_str() : "";
   }
-  // TODO(vladl@google.com): Remove this after making sure no clients use it.
+  // FIXME: Remove this after making sure no clients use it.
   // Deprecated; please use message() instead.
   const char* failure_message() const { return message(); }
 
@@ -345,6 +374,15 @@
 // Deprecated; use AssertionFailure() << msg.
 GTEST_API_ AssertionResult AssertionFailure(const Message& msg);
 
+}  // namespace testing
+
+// Includes the auto-generated header that implements a family of generic
+// predicate assertion macros. This include comes late because it relies on
+// APIs declared above.
+#include "gtest/gtest_pred_impl.h"
+
+namespace testing {
+
 // The abstract class that all tests inherit from.
 //
 // In Google Test, a unit test program contains one or many TestCases, and
@@ -355,7 +393,7 @@
 // this for you.
 //
 // The only time you derive from Test is when defining a test fixture
-// to be used a TEST_F.  For example:
+// to be used in a TEST_F.  For example:
 //
 //   class FooTest : public testing::Test {
 //    protected:
@@ -550,9 +588,8 @@
   // Returns the elapsed time, in milliseconds.
   TimeInMillis elapsed_time() const { return elapsed_time_; }
 
-  // Returns the i-th test part result among all the results. i can range
-  // from 0 to test_property_count() - 1. If i is not in that range, aborts
-  // the program.
+  // Returns the i-th test part result among all the results. i can range from 0
+  // to total_part_count() - 1. If i is not in that range, aborts the program.
   const TestPartResult& GetTestPartResult(int i) const;
 
   // Returns the i-th test property. i can range from 0 to
@@ -569,6 +606,7 @@
   friend class internal::TestResultAccessor;
   friend class internal::UnitTestImpl;
   friend class internal::WindowsDeathTest;
+  friend class internal::FuchsiaDeathTest;
 
   // Gets the vector of TestPartResults.
   const std::vector<TestPartResult>& test_part_results() const {
@@ -594,7 +632,7 @@
 
   // Adds a failure if the key is a reserved attribute of Google Test
   // testcase tags.  Returns true if the property is valid.
-  // TODO(russr): Validate attribute names are legal and human readable.
+  // FIXME: Validate attribute names are legal and human readable.
   static bool ValidateTestProperty(const std::string& xml_element,
                                    const TestProperty& test_property);
 
@@ -675,6 +713,9 @@
   // Returns the line where this test is defined.
   int line() const { return location_.line; }
 
+  // Return true if this test should not be run because it's in another shard.
+  bool is_in_another_shard() const { return is_in_another_shard_; }
+
   // Returns true if this test should run, that is if the test is not
   // disabled (or it is disabled but the also_run_disabled_tests flag has
   // been specified) and its full name matches the user-specified filter.
@@ -695,10 +736,9 @@
 
   // Returns true iff this test will appear in the XML report.
   bool is_reportable() const {
-    // For now, the XML report includes all tests matching the filter.
-    // In the future, we may trim tests that are excluded because of
-    // sharding.
-    return matches_filter_;
+    // The XML report includes tests matching the filter, excluding those
+    // run in other shards.
+    return matches_filter_ && !is_in_another_shard_;
   }
 
   // Returns the result of the test.
@@ -762,6 +802,7 @@
   bool is_disabled_;                // True iff this test is disabled
   bool matches_filter_;             // True if this test matches the
                                     // user-specified filter.
+  bool is_in_another_shard_;        // Will be run in another shard.
   internal::TestFactoryBase* const factory_;  // The factory that creates
                                               // the test object
 
@@ -986,6 +1027,18 @@
   virtual Setup_should_be_spelled_SetUp* Setup() { return NULL; }
 };
 
+#if GTEST_HAS_EXCEPTIONS
+
+// Exception which can be thrown from TestEventListener::OnTestPartResult.
+class GTEST_API_ AssertionException
+    : public internal::GoogleTestFailureException {
+ public:
+  explicit AssertionException(const TestPartResult& result)
+      : GoogleTestFailureException(result) {}
+};
+
+#endif  // GTEST_HAS_EXCEPTIONS
+
 // The interface for tracing execution of tests. The methods are organized in
 // the order the corresponding events are fired.
 class TestEventListener {
@@ -1014,6 +1067,8 @@
   virtual void OnTestStart(const TestInfo& test_info) = 0;
 
   // Fired after a failed assertion or a SUCCEED() invocation.
+  // If you want to throw an exception from this function to skip to the next
+  // TEST, it must be AssertionException defined above, or inherited from it.
   virtual void OnTestPartResult(const TestPartResult& test_part_result) = 0;
 
   // Fired after the test ends.
@@ -1180,14 +1235,12 @@
   // Returns the random seed used at the start of the current test run.
   int random_seed() const;
 
-#if GTEST_HAS_PARAM_TEST
   // Returns the ParameterizedTestCaseRegistry object used to keep track of
   // value-parameterized tests and instantiate and register them.
   //
   // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.
   internal::ParameterizedTestCaseRegistry& parameterized_test_registry()
       GTEST_LOCK_EXCLUDED_(mutex_);
-#endif  // GTEST_HAS_PARAM_TEST
 
   // Gets the number of successful test cases.
   int successful_test_case_count() const;
@@ -1287,11 +1340,11 @@
   internal::UnitTestImpl* impl() { return impl_; }
   const internal::UnitTestImpl* impl() const { return impl_; }
 
-  // These classes and funcions are friends as they need to access private
+  // These classes and functions are friends as they need to access private
   // members of UnitTest.
+  friend class ScopedTrace;
   friend class Test;
   friend class internal::AssertHelper;
-  friend class internal::ScopedTrace;
   friend class internal::StreamingListenerTest;
   friend class internal::UnitTestRecordPropertyTestHelper;
   friend Environment* AddGlobalTestEnvironment(Environment* env);
@@ -1388,11 +1441,9 @@
                             const char* rhs_expression,
                             const T1& lhs,
                             const T2& rhs) {
-GTEST_DISABLE_MSC_WARNINGS_PUSH_(4389 /* signed/unsigned mismatch */)
   if (lhs == rhs) {
     return AssertionSuccess();
   }
-GTEST_DISABLE_MSC_WARNINGS_POP_()
 
   return CmpHelperEQFailure(lhs_expression, rhs_expression, lhs, rhs);
 }
@@ -1706,7 +1757,6 @@
 
 }  // namespace internal
 
-#if GTEST_HAS_PARAM_TEST
 // The pure interface class that all value-parameterized tests inherit from.
 // A value-parameterized class must inherit from both ::testing::Test and
 // ::testing::WithParamInterface. In most cases that just means inheriting
@@ -1748,11 +1798,8 @@
   virtual ~WithParamInterface() {}
 
   // The current parameter value. Is also available in the test fixture's
-  // constructor. This member function is non-static, even though it only
-  // references static data, to reduce the opportunity for incorrect uses
-  // like writing 'WithParamInterface<bool>::GetParam()' for a test that
-  // uses a fixture whose parameter type is int.
-  const ParamType& GetParam() const {
+  // constructor.
+  static const ParamType& GetParam() {
     GTEST_CHECK_(parameter_ != NULL)
         << "GetParam() can only be called inside a value-parameterized test "
         << "-- did you intend to write TEST_P instead of TEST_F?";
@@ -1783,8 +1830,6 @@
 class TestWithParam : public Test, public WithParamInterface<T> {
 };
 
-#endif  // GTEST_HAS_PARAM_TEST
-
 // Macros for indicating success/failure in test code.
 
 // ADD_FAILURE unconditionally adds a failure to the current test.
@@ -1857,22 +1902,18 @@
 // AssertionResult. For more information on how to use AssertionResult with
 // these macros see comments on that class.
 #define EXPECT_TRUE(condition) \
-  GTEST_TEST_BOOLEAN_((condition), #condition, false, true, \
+  GTEST_TEST_BOOLEAN_(condition, #condition, false, true, \
                       GTEST_NONFATAL_FAILURE_)
 #define EXPECT_FALSE(condition) \
   GTEST_TEST_BOOLEAN_(!(condition), #condition, true, false, \
                       GTEST_NONFATAL_FAILURE_)
 #define ASSERT_TRUE(condition) \
-  GTEST_TEST_BOOLEAN_((condition), #condition, false, true, \
+  GTEST_TEST_BOOLEAN_(condition, #condition, false, true, \
                       GTEST_FATAL_FAILURE_)
 #define ASSERT_FALSE(condition) \
   GTEST_TEST_BOOLEAN_(!(condition), #condition, true, false, \
                       GTEST_FATAL_FAILURE_)
 
-// Includes the auto-generated header that implements a family of
-// generic predicate assertion macros.
-#include "gtest/gtest_pred_impl.h"
-
 // Macros for testing equalities and inequalities.
 //
 //    * {ASSERT|EXPECT}_EQ(v1, v2): Tests that v1 == v2
@@ -1914,8 +1955,8 @@
 //
 // Examples:
 //
-//   EXPECT_NE(5, Foo());
-//   EXPECT_EQ(NULL, a_pointer);
+//   EXPECT_NE(Foo(), 5);
+//   EXPECT_EQ(a_pointer, NULL);
 //   ASSERT_LT(i, array_size);
 //   ASSERT_GT(records.size(), 0) << "There is no record left.";
 
@@ -2101,6 +2142,57 @@
 #define EXPECT_NO_FATAL_FAILURE(statement) \
     GTEST_TEST_NO_FATAL_FAILURE_(statement, GTEST_NONFATAL_FAILURE_)
 
+// Causes a trace (including the given source file path and line number,
+// and the given message) to be included in every test failure message generated
+// by code in the scope of the lifetime of an instance of this class. The effect
+// is undone with the destruction of the instance.
+//
+// The message argument can be anything streamable to std::ostream.
+//
+// Example:
+//   testing::ScopedTrace trace("file.cc", 123, "message");
+//
+class GTEST_API_ ScopedTrace {
+ public:
+  // The c'tor pushes the given source file location and message onto
+  // a trace stack maintained by Google Test.
+
+  // Template version. Uses Message() to convert the values into strings.
+  // Slow, but flexible.
+  template <typename T>
+  ScopedTrace(const char* file, int line, const T& message) {
+    PushTrace(file, line, (Message() << message).GetString());
+  }
+
+  // Optimize for some known types.
+  ScopedTrace(const char* file, int line, const char* message) {
+    PushTrace(file, line, message ? message : "(null)");
+  }
+
+#if GTEST_HAS_GLOBAL_STRING
+  ScopedTrace(const char* file, int line, const ::string& message) {
+    PushTrace(file, line, message);
+  }
+#endif
+
+  ScopedTrace(const char* file, int line, const std::string& message) {
+    PushTrace(file, line, message);
+  }
+
+  // The d'tor pops the info pushed by the c'tor.
+  //
+  // Note that the d'tor is not virtual in order to be efficient.
+  // Don't inherit from ScopedTrace!
+  ~ScopedTrace();
+
+ private:
+  void PushTrace(const char* file, int line, std::string message);
+
+  GTEST_DISALLOW_COPY_AND_ASSIGN_(ScopedTrace);
+} GTEST_ATTRIBUTE_UNUSED_;  // A ScopedTrace object does its job in its
+                            // c'tor and d'tor.  Therefore it doesn't
+                            // need to be used otherwise.
+
 // Causes a trace (including the source file path, the current line
 // number, and the given message) to be included in every test failure
 // message generated by code in the current scope.  The effect is
@@ -2112,9 +2204,14 @@
 // of the dummy variable name, thus allowing multiple SCOPED_TRACE()s
 // to appear in the same block - as long as they are on different
 // lines.
+//
+// Each thread maintains its own stack of traces. Therefore, a
+// SCOPED_TRACE() (correctly) affects only the assertions in its
+// own thread.
 #define SCOPED_TRACE(message) \
-  ::testing::internal::ScopedTrace GTEST_CONCAT_TOKEN_(gtest_trace_, __LINE__)(\
-    __FILE__, __LINE__, ::testing::Message() << (message))
+  ::testing::ScopedTrace GTEST_CONCAT_TOKEN_(gtest_trace_, __LINE__)(\
+    __FILE__, __LINE__, (message))
+
 
 // Compile-time assertion for type equality.
 // StaticAssertTypeEq<type1, type2>() compiles iff type1 and type2 are
@@ -2194,7 +2291,7 @@
 // name of the test within the test case.
 //
 // A test fixture class must be declared earlier.  The user should put
-// his test code between braces after using this macro.  Example:
+// the test code between braces after using this macro.  Example:
 //
 //   class FooTest : public testing::Test {
 //    protected:
@@ -2209,14 +2306,22 @@
 //   }
 //
 //   TEST_F(FooTest, ReturnsElementCountCorrectly) {
-//     EXPECT_EQ(0, a_.size());
-//     EXPECT_EQ(1, b_.size());
+//     EXPECT_EQ(a_.size(), 0);
+//     EXPECT_EQ(b_.size(), 1);
 //   }
 
 #define TEST_F(test_fixture, test_name)\
   GTEST_TEST_(test_fixture, test_name, test_fixture, \
               ::testing::internal::GetTypeId<test_fixture>())
 
+// Returns a path to temporary directory.
+// Tries to determine an appropriate directory for the platform.
+GTEST_API_ std::string TempDir();
+
+#ifdef _MSC_VER
+#  pragma warning(pop)
+#endif
+
 }  // namespace testing
 
 // Use this function in main() to run all tests.  It returns 0 if all
@@ -2233,4 +2338,6 @@
   return ::testing::UnitTest::GetInstance()->Run();
 }
 
+GTEST_DISABLE_MSC_WARNINGS_POP_()  //  4251
+
 #endif  // GTEST_INCLUDE_GTEST_GTEST_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest_pred_impl.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest_pred_impl.h
index 30ae712..0c1105c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest_pred_impl.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest_pred_impl.h
@@ -27,18 +27,19 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-// This file is AUTOMATICALLY GENERATED on 10/31/2011 by command
+// This file is AUTOMATICALLY GENERATED on 01/02/2018 by command
 // 'gen_gtest_pred_impl.py 5'.  DO NOT EDIT BY HAND!
 //
 // Implements a family of generic predicate assertion macros.
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_
 #define GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_
 
-// Makes sure this header is not included before gtest.h.
-#ifndef GTEST_INCLUDE_GTEST_GTEST_H_
-# error Do not include gtest_pred_impl.h directly.  Include gtest.h instead.
-#endif  // GTEST_INCLUDE_GTEST_GTEST_H_
+#include "gtest/gtest.h"
+
+namespace testing {
 
 // This header implements a family of generic predicate assertion
 // macros:
@@ -66,8 +67,6 @@
 // We also define the EXPECT_* variations.
 //
 // For now we only support predicates whose arity is at most 5.
-// Please email googletestframework@googlegroups.com if you need
-// support for higher arities.
 
 // GTEST_ASSERT_ is the basic statement to which all of the assertions
 // in this file reduce.  Don't use this in your code.
@@ -355,4 +354,6 @@
 
 
 
+}  // namespace testing
+
 #endif  // GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest_prod.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest_prod.h
index da80ddc..e651671 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest_prod.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/gtest_prod.h
@@ -26,10 +26,10 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
 //
-// Author: wan@google.com (Zhanyong Wan)
-//
-// Google C++ Testing Framework definitions useful in production code.
+// Google C++ Testing and Mocking Framework definitions useful in production code.
+// GOOGLETEST_CM0003 DO NOT DELETE
 
 #ifndef GTEST_INCLUDE_GTEST_GTEST_PROD_H_
 #define GTEST_INCLUDE_GTEST_GTEST_PROD_H_
@@ -40,17 +40,20 @@
 //
 // class MyClass {
 //  private:
-//   void MyMethod();
-//   FRIEND_TEST(MyClassTest, MyMethod);
+//   void PrivateMethod();
+//   FRIEND_TEST(MyClassTest, PrivateMethodWorks);
 // };
 //
 // class MyClassTest : public testing::Test {
 //   // ...
 // };
 //
-// TEST_F(MyClassTest, MyMethod) {
-//   // Can call MyClass::MyMethod() here.
+// TEST_F(MyClassTest, PrivateMethodWorks) {
+//   // Can call MyClass::PrivateMethod() here.
 // }
+//
+// Note: The test class must be in the same namespace as the class being tested.
+// For example, putting MyClassTest in an anonymous namespace will not work.
 
 #define FRIEND_TEST(test_case_name, test_name)\
 friend class test_case_name##_##test_name##_Test
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/custom/README.md b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/custom/README.md
new file mode 100644
index 0000000..ff391fb
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/custom/README.md
@@ -0,0 +1,56 @@
+# Customization Points
+
+The custom directory is an injection point for custom user configurations.
+
+## Header `gtest.h`
+
+### The following macros can be defined:
+
+*   `GTEST_OS_STACK_TRACE_GETTER_` - The name of an implementation of
+    `OsStackTraceGetterInterface`.
+*   `GTEST_CUSTOM_TEMPDIR_FUNCTION_` - An override for `testing::TempDir()`. See
+    `testing::TempDir` for semantics and signature.
+
+## Header `gtest-port.h`
+
+The following macros can be defined:
+
+### Flag related macros:
+
+*   `GTEST_FLAG(flag_name)`
+*   `GTEST_USE_OWN_FLAGFILE_FLAG_` - Define to 0 when the system provides its
+    own flagfile flag parsing.
+*   `GTEST_DECLARE_bool_(name)`
+*   `GTEST_DECLARE_int32_(name)`
+*   `GTEST_DECLARE_string_(name)`
+*   `GTEST_DEFINE_bool_(name, default_val, doc)`
+*   `GTEST_DEFINE_int32_(name, default_val, doc)`
+*   `GTEST_DEFINE_string_(name, default_val, doc)`
+
+### Logging:
+
+*   `GTEST_LOG_(severity)`
+*   `GTEST_CHECK_(condition)`
+*   Functions `LogToStderr()` and `FlushInfoLog()` have to be provided too.
+
+### Threading:
+
+*   `GTEST_HAS_NOTIFICATION_` - Enabled if Notification is already provided.
+*   `GTEST_HAS_MUTEX_AND_THREAD_LOCAL_` - Enabled if `Mutex` and `ThreadLocal`
+    are already provided. Must also provide `GTEST_DECLARE_STATIC_MUTEX_(mutex)`
+    and `GTEST_DEFINE_STATIC_MUTEX_(mutex)`
+*   `GTEST_EXCLUSIVE_LOCK_REQUIRED_(locks)`
+*   `GTEST_LOCK_EXCLUDED_(locks)`
+
+### Underlying library support features
+
+*   `GTEST_HAS_CXXABI_H_`
+
+### Exporting API symbols:
+
+*   `GTEST_API_` - Specifier for exported symbols.
+
+## Header `gtest-printers.h`
+
+*   See documentation at `gtest/gtest-printers.h` for details on how to define a
+    custom printer.
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/custom/gtest-port.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/custom/gtest-port.h
index 7e744bd..cd85d95 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/custom/gtest-port.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/custom/gtest-port.h
@@ -27,39 +27,7 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 //
-// Injection point for custom user configurations.
-// The following macros can be defined:
-//
-//   Flag related macros:
-//     GTEST_FLAG(flag_name)
-//     GTEST_USE_OWN_FLAGFILE_FLAG_  - Define to 0 when the system provides its
-//                                     own flagfile flag parsing.
-//     GTEST_DECLARE_bool_(name)
-//     GTEST_DECLARE_int32_(name)
-//     GTEST_DECLARE_string_(name)
-//     GTEST_DEFINE_bool_(name, default_val, doc)
-//     GTEST_DEFINE_int32_(name, default_val, doc)
-//     GTEST_DEFINE_string_(name, default_val, doc)
-//
-//   Test filtering:
-//     GTEST_TEST_FILTER_ENV_VAR_ - The name of an environment variable that
-//                                  will be used if --GTEST_FLAG(test_filter)
-//                                  is not provided.
-//
-//   Logging:
-//     GTEST_LOG_(severity)
-//     GTEST_CHECK_(condition)
-//     Functions LogToStderr() and FlushInfoLog() have to be provided too.
-//
-//   Threading:
-//     GTEST_HAS_NOTIFICATION_ - Enabled if Notification is already provided.
-//     GTEST_HAS_MUTEX_AND_THREAD_LOCAL_ - Enabled if Mutex and ThreadLocal are
-//                                         already provided.
-//     Must also provide GTEST_DECLARE_STATIC_MUTEX_(mutex) and
-//     GTEST_DEFINE_STATIC_MUTEX_(mutex)
-//
-//     GTEST_EXCLUSIVE_LOCK_REQUIRED_(locks)
-//     GTEST_LOCK_EXCLUDED_(locks)
+// Injection point for custom user configurations. See README for details
 //
 // ** Custom implementation starts here **
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/custom/gtest-printers.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/custom/gtest-printers.h
index 60c1ea0..eb4467a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/custom/gtest-printers.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/custom/gtest-printers.h
@@ -31,8 +31,8 @@
 // installation of gTest.
 // It will be included from gtest-printers.h and the overrides in this file
 // will be visible to everyone.
-// See documentation at gtest/gtest-printers.h for details on how to define a
-// custom printer.
+//
+// Injection point for custom user configurations. See README for details
 //
 // ** Custom implementation starts here **
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/custom/gtest.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/custom/gtest.h
index c27412a..4c8e07b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/custom/gtest.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/custom/gtest.h
@@ -27,11 +27,7 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 //
-// Injection point for custom user configurations.
-// The following macros can be defined:
-//
-// GTEST_OS_STACK_TRACE_GETTER_  - The name of an implementation of
-//                                 OsStackTraceGetterInterface.
+// Injection point for custom user configurations. See README for details
 //
 // ** Custom implementation starts here **
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-death-test-internal.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-death-test-internal.h
index 2b3a78f..0a9b42c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-death-test-internal.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-death-test-internal.h
@@ -27,12 +27,11 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 //
-// Authors: wan@google.com (Zhanyong Wan), eefacm@gmail.com (Sean Mcafee)
-//
-// The Google C++ Testing Framework (Google Test)
+// The Google C++ Testing and Mocking Framework (Google Test)
 //
 // This header file defines internal utilities needed for implementing
 // death tests.  They are subject to change without notice.
+// GOOGLETEST_CM0001 DO NOT DELETE
 
 #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_
 #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_
@@ -53,6 +52,9 @@
 
 #if GTEST_HAS_DEATH_TEST
 
+GTEST_DISABLE_MSC_WARNINGS_PUSH_(4251 \
+/* class A needs to have dll-interface to be used by clients of class B */)
+
 // DeathTest is a class that hides much of the complexity of the
 // GTEST_DEATH_TEST_ macro.  It is abstract; its static Create method
 // returns a concrete class that depends on the prevailing death test
@@ -136,6 +138,8 @@
   GTEST_DISALLOW_COPY_AND_ASSIGN_(DeathTest);
 };
 
+GTEST_DISABLE_MSC_WARNINGS_POP_()  //  4251
+
 // Factory interface for death tests.  May be mocked out for testing.
 class DeathTestFactory {
  public:
@@ -218,14 +222,18 @@
 // can be streamed.
 
 // This macro is for implementing ASSERT/EXPECT_DEBUG_DEATH when compiled in
-// NDEBUG mode. In this case we need the statements to be executed, the regex is
-// ignored, and the macro must accept a streamed message even though the message
-// is never printed.
-# define GTEST_EXECUTE_STATEMENT_(statement, regex) \
-  GTEST_AMBIGUOUS_ELSE_BLOCKER_ \
-  if (::testing::internal::AlwaysTrue()) { \
-     GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \
-  } else \
+// NDEBUG mode. In this case we need the statements to be executed and the macro
+// must accept a streamed message even though the message is never printed.
+// The regex object is not evaluated, but it is used to prevent "unused"
+// warnings and to avoid an expression that doesn't compile in debug mode.
+#define GTEST_EXECUTE_STATEMENT_(statement, regex)             \
+  GTEST_AMBIGUOUS_ELSE_BLOCKER_                                \
+  if (::testing::internal::AlwaysTrue()) {                     \
+    GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \
+  } else if (!::testing::internal::AlwaysTrue()) {             \
+    const ::testing::internal::RE& gtest_regex = (regex);      \
+    static_cast<void>(gtest_regex);                            \
+  } else                                                       \
     ::testing::Message()
 
 // A class representing the parsed contents of the
@@ -264,53 +272,6 @@
 // the flag is specified; otherwise returns NULL.
 InternalRunDeathTestFlag* ParseInternalRunDeathTestFlag();
 
-#else  // GTEST_HAS_DEATH_TEST
-
-// This macro is used for implementing macros such as
-// EXPECT_DEATH_IF_SUPPORTED and ASSERT_DEATH_IF_SUPPORTED on systems where
-// death tests are not supported. Those macros must compile on such systems
-// iff EXPECT_DEATH and ASSERT_DEATH compile with the same parameters on
-// systems that support death tests. This allows one to write such a macro
-// on a system that does not support death tests and be sure that it will
-// compile on a death-test supporting system.
-//
-// Parameters:
-//   statement -  A statement that a macro such as EXPECT_DEATH would test
-//                for program termination. This macro has to make sure this
-//                statement is compiled but not executed, to ensure that
-//                EXPECT_DEATH_IF_SUPPORTED compiles with a certain
-//                parameter iff EXPECT_DEATH compiles with it.
-//   regex     -  A regex that a macro such as EXPECT_DEATH would use to test
-//                the output of statement.  This parameter has to be
-//                compiled but not evaluated by this macro, to ensure that
-//                this macro only accepts expressions that a macro such as
-//                EXPECT_DEATH would accept.
-//   terminator - Must be an empty statement for EXPECT_DEATH_IF_SUPPORTED
-//                and a return statement for ASSERT_DEATH_IF_SUPPORTED.
-//                This ensures that ASSERT_DEATH_IF_SUPPORTED will not
-//                compile inside functions where ASSERT_DEATH doesn't
-//                compile.
-//
-//  The branch that has an always false condition is used to ensure that
-//  statement and regex are compiled (and thus syntactically correct) but
-//  never executed. The unreachable code macro protects the terminator
-//  statement from generating an 'unreachable code' warning in case
-//  statement unconditionally returns or throws. The Message constructor at
-//  the end allows the syntax of streaming additional messages into the
-//  macro, for compilational compatibility with EXPECT_DEATH/ASSERT_DEATH.
-# define GTEST_UNSUPPORTED_DEATH_TEST_(statement, regex, terminator) \
-    GTEST_AMBIGUOUS_ELSE_BLOCKER_ \
-    if (::testing::internal::AlwaysTrue()) { \
-      GTEST_LOG_(WARNING) \
-          << "Death tests are not supported on this platform.\n" \
-          << "Statement '" #statement "' cannot be verified."; \
-    } else if (::testing::internal::AlwaysFalse()) { \
-      ::testing::internal::RE::PartialMatch(".*", (regex)); \
-      GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \
-      terminator; \
-    } else \
-      ::testing::Message()
-
 #endif  // GTEST_HAS_DEATH_TEST
 
 }  // namespace internal
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-filepath.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-filepath.h
index 7a13b4b..ae38d95 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-filepath.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-filepath.h
@@ -27,21 +27,24 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 //
-// Author: keith.ray@gmail.com (Keith Ray)
-//
 // Google Test filepath utilities
 //
 // This header file declares classes and functions used internally by
 // Google Test.  They are subject to change without notice.
 //
-// This file is #included in <gtest/internal/gtest-internal.h>.
+// This file is #included in gtest/internal/gtest-internal.h.
 // Do not include this header file separately!
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_
 #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_
 
 #include "gtest/internal/gtest-string.h"
 
+GTEST_DISABLE_MSC_WARNINGS_PUSH_(4251 \
+/* class A needs to have dll-interface to be used by clients of class B */)
+
 namespace testing {
 namespace internal {
 
@@ -203,4 +206,6 @@
 }  // namespace internal
 }  // namespace testing
 
+GTEST_DISABLE_MSC_WARNINGS_POP_()  //  4251
+
 #endif  // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-internal.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-internal.h
index ebd1cf6..b762f61 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-internal.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-internal.h
@@ -27,13 +27,13 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 //
-// Authors: wan@google.com (Zhanyong Wan), eefacm@gmail.com (Sean Mcafee)
-//
-// The Google C++ Testing Framework (Google Test)
+// The Google C++ Testing and Mocking Framework (Google Test)
 //
 // This header file declares functions and macros used internally by
 // Google Test.  They are subject to change without notice.
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_
 #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_
 
@@ -61,8 +61,8 @@
 #include <vector>
 
 #include "gtest/gtest-message.h"
-#include "gtest/internal/gtest-string.h"
 #include "gtest/internal/gtest-filepath.h"
+#include "gtest/internal/gtest-string.h"
 #include "gtest/internal/gtest-type-util.h"
 
 // Due to C++ preprocessor weirdness, we need double indirection to
@@ -76,6 +76,9 @@
 #define GTEST_CONCAT_TOKEN_(foo, bar) GTEST_CONCAT_TOKEN_IMPL_(foo, bar)
 #define GTEST_CONCAT_TOKEN_IMPL_(foo, bar) foo ## bar
 
+// Stringifies its argument.
+#define GTEST_STRINGIFY_(name) #name
+
 class ProtocolMessage;
 namespace proto2 { class Message; }
 
@@ -96,7 +99,6 @@
 namespace internal {
 
 struct TraceInfo;                      // Information about a trace point.
-class ScopedTrace;                     // Implements scoped trace.
 class TestInfoImpl;                    // Opaque implementation of TestInfo
 class UnitTestImpl;                    // Opaque implementation of UnitTest
 
@@ -139,6 +141,9 @@
 
 #if GTEST_HAS_EXCEPTIONS
 
+GTEST_DISABLE_MSC_WARNINGS_PUSH_(4275 \
+/* an exported class was derived from a class that was not exported */)
+
 // This exception is thrown by (and only by) a failed Google Test
 // assertion when GTEST_FLAG(throw_on_failure) is true (if exceptions
 // are enabled).  We derive it from std::runtime_error, which is for
@@ -150,32 +155,15 @@
   explicit GoogleTestFailureException(const TestPartResult& failure);
 };
 
+GTEST_DISABLE_MSC_WARNINGS_POP_()  //  4275
+
 #endif  // GTEST_HAS_EXCEPTIONS
 
-// A helper class for creating scoped traces in user programs.
-class GTEST_API_ ScopedTrace {
- public:
-  // The c'tor pushes the given source file location and message onto
-  // a trace stack maintained by Google Test.
-  ScopedTrace(const char* file, int line, const Message& message);
-
-  // The d'tor pops the info pushed by the c'tor.
-  //
-  // Note that the d'tor is not virtual in order to be efficient.
-  // Don't inherit from ScopedTrace!
-  ~ScopedTrace();
-
- private:
-  GTEST_DISALLOW_COPY_AND_ASSIGN_(ScopedTrace);
-} GTEST_ATTRIBUTE_UNUSED_;  // A ScopedTrace object does its job in its
-                            // c'tor and d'tor.  Therefore it doesn't
-                            // need to be used otherwise.
-
 namespace edit_distance {
 // Returns the optimal edits to go from 'left' to 'right'.
 // All edits cost the same, with replace having lower priority than
 // add/remove.
-// Simple implementation of the Wagner–Fischer algorithm.
+// Simple implementation of the Wagner-Fischer algorithm.
 // See http://en.wikipedia.org/wiki/Wagner-Fischer_algorithm
 enum EditType { kMatch, kAdd, kRemove, kReplace };
 GTEST_API_ std::vector<EditType> CalculateOptimalEdits(
@@ -502,9 +490,10 @@
 typedef void (*TearDownTestCaseFunc)();
 
 struct CodeLocation {
-  CodeLocation(const string& a_file, int a_line) : file(a_file), line(a_line) {}
+  CodeLocation(const std::string& a_file, int a_line)
+      : file(a_file), line(a_line) {}
 
-  string file;
+  std::string file;
   int line;
 };
 
@@ -544,6 +533,9 @@
 
 #if GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P
 
+GTEST_DISABLE_MSC_WARNINGS_PUSH_(4251 \
+/* class A needs to have dll-interface to be used by clients of class B */)
+
 // State of the definition of a type-parameterized test case.
 class GTEST_API_ TypedTestCasePState {
  public:
@@ -589,6 +581,8 @@
   RegisteredTestsMap registered_tests_;
 };
 
+GTEST_DISABLE_MSC_WARNINGS_POP_()  //  4251
+
 // Skips to the first non-space char after the first comma in 'str';
 // returns NULL if no comma is found in 'str'.
 inline const char* SkipComma(const char* str) {
@@ -612,6 +606,37 @@
 void SplitString(const ::std::string& str, char delimiter,
                  ::std::vector< ::std::string>* dest);
 
+// The default argument to the template below for the case when the user does
+// not provide a name generator.
+struct DefaultNameGenerator {
+  template <typename T>
+  static std::string GetName(int i) {
+    return StreamableToString(i);
+  }
+};
+
+template <typename Provided = DefaultNameGenerator>
+struct NameGeneratorSelector {
+  typedef Provided type;
+};
+
+template <typename NameGenerator>
+void GenerateNamesRecursively(Types0, std::vector<std::string>*, int) {}
+
+template <typename NameGenerator, typename Types>
+void GenerateNamesRecursively(Types, std::vector<std::string>* result, int i) {
+  result->push_back(NameGenerator::template GetName<typename Types::Head>(i));
+  GenerateNamesRecursively<NameGenerator>(typename Types::Tail(), result,
+                                          i + 1);
+}
+
+template <typename NameGenerator, typename Types>
+std::vector<std::string> GenerateNames() {
+  std::vector<std::string> result;
+  GenerateNamesRecursively<NameGenerator>(Types(), &result, 0);
+  return result;
+}
+
 // TypeParameterizedTest<Fixture, TestSel, Types>::Register()
 // registers a list of type-parameterized tests with Google Test.  The
 // return value is insignificant - we just need to return something
@@ -626,10 +651,10 @@
   // specified in INSTANTIATE_TYPED_TEST_CASE_P(Prefix, TestCase,
   // Types).  Valid values for 'index' are [0, N - 1] where N is the
   // length of Types.
-  static bool Register(const char* prefix,
-                       CodeLocation code_location,
-                       const char* case_name, const char* test_names,
-                       int index) {
+  static bool Register(const char* prefix, const CodeLocation& code_location,
+                       const char* case_name, const char* test_names, int index,
+                       const std::vector<std::string>& type_names =
+                           GenerateNames<DefaultNameGenerator, Types>()) {
     typedef typename Types::Head Type;
     typedef Fixture<Type> FixtureClass;
     typedef typename GTEST_BIND_(TestSel, Type) TestClass;
@@ -637,20 +662,23 @@
     // First, registers the first type-parameterized test in the type
     // list.
     MakeAndRegisterTestInfo(
-        (std::string(prefix) + (prefix[0] == '\0' ? "" : "/") + case_name + "/"
-         + StreamableToString(index)).c_str(),
+        (std::string(prefix) + (prefix[0] == '\0' ? "" : "/") + case_name +
+         "/" + type_names[index])
+            .c_str(),
         StripTrailingSpaces(GetPrefixUntilComma(test_names)).c_str(),
         GetTypeName<Type>().c_str(),
         NULL,  // No value parameter.
-        code_location,
-        GetTypeId<FixtureClass>(),
-        TestClass::SetUpTestCase,
-        TestClass::TearDownTestCase,
-        new TestFactoryImpl<TestClass>);
+        code_location, GetTypeId<FixtureClass>(), TestClass::SetUpTestCase,
+        TestClass::TearDownTestCase, new TestFactoryImpl<TestClass>);
 
     // Next, recurses (at compile time) with the tail of the type list.
-    return TypeParameterizedTest<Fixture, TestSel, typename Types::Tail>
-        ::Register(prefix, code_location, case_name, test_names, index + 1);
+    return TypeParameterizedTest<Fixture, TestSel,
+                                 typename Types::Tail>::Register(prefix,
+                                                                 code_location,
+                                                                 case_name,
+                                                                 test_names,
+                                                                 index + 1,
+                                                                 type_names);
   }
 };
 
@@ -658,9 +686,11 @@
 template <GTEST_TEMPLATE_ Fixture, class TestSel>
 class TypeParameterizedTest<Fixture, TestSel, Types0> {
  public:
-  static bool Register(const char* /*prefix*/, CodeLocation,
+  static bool Register(const char* /*prefix*/, const CodeLocation&,
                        const char* /*case_name*/, const char* /*test_names*/,
-                       int /*index*/) {
+                       int /*index*/,
+                       const std::vector<std::string>& =
+                           std::vector<std::string>() /*type_names*/) {
     return true;
   }
 };
@@ -673,8 +703,10 @@
 class TypeParameterizedTestCase {
  public:
   static bool Register(const char* prefix, CodeLocation code_location,
-                       const TypedTestCasePState* state,
-                       const char* case_name, const char* test_names) {
+                       const TypedTestCasePState* state, const char* case_name,
+                       const char* test_names,
+                       const std::vector<std::string>& type_names =
+                           GenerateNames<DefaultNameGenerator, Types>()) {
     std::string test_name = StripTrailingSpaces(
         GetPrefixUntilComma(test_names));
     if (!state->TestExists(test_name)) {
@@ -691,12 +723,14 @@
 
     // First, register the first test in 'Test' for each type in 'Types'.
     TypeParameterizedTest<Fixture, Head, Types>::Register(
-        prefix, test_location, case_name, test_names, 0);
+        prefix, test_location, case_name, test_names, 0, type_names);
 
     // Next, recurses (at compile time) with the tail of the test list.
-    return TypeParameterizedTestCase<Fixture, typename Tests::Tail, Types>
-        ::Register(prefix, code_location, state,
-                   case_name, SkipComma(test_names));
+    return TypeParameterizedTestCase<Fixture, typename Tests::Tail,
+                                     Types>::Register(prefix, code_location,
+                                                      state, case_name,
+                                                      SkipComma(test_names),
+                                                      type_names);
   }
 };
 
@@ -704,9 +738,11 @@
 template <GTEST_TEMPLATE_ Fixture, typename Types>
 class TypeParameterizedTestCase<Fixture, Templates0, Types> {
  public:
-  static bool Register(const char* /*prefix*/, CodeLocation,
+  static bool Register(const char* /*prefix*/, const CodeLocation&,
                        const TypedTestCasePState* /*state*/,
-                       const char* /*case_name*/, const char* /*test_names*/) {
+                       const char* /*case_name*/, const char* /*test_names*/,
+                       const std::vector<std::string>& =
+                           std::vector<std::string>() /*type_names*/) {
     return true;
   }
 };
@@ -823,31 +859,6 @@
 #define GTEST_REMOVE_REFERENCE_AND_CONST_(T) \
     GTEST_REMOVE_CONST_(GTEST_REMOVE_REFERENCE_(T))
 
-// Adds reference to a type if it is not a reference type,
-// otherwise leaves it unchanged.  This is the same as
-// tr1::add_reference, which is not widely available yet.
-template <typename T>
-struct AddReference { typedef T& type; };  // NOLINT
-template <typename T>
-struct AddReference<T&> { typedef T& type; };  // NOLINT
-
-// A handy wrapper around AddReference that works when the argument T
-// depends on template parameters.
-#define GTEST_ADD_REFERENCE_(T) \
-    typename ::testing::internal::AddReference<T>::type
-
-// Adds a reference to const on top of T as necessary.  For example,
-// it transforms
-//
-//   char         ==> const char&
-//   const char   ==> const char&
-//   char&        ==> const char&
-//   const char&  ==> const char&
-//
-// The argument T must depend on some template parameters.
-#define GTEST_REFERENCE_TO_CONST_(T) \
-    GTEST_ADD_REFERENCE_(const GTEST_REMOVE_REFERENCE_(T))
-
 // ImplicitlyConvertible<From, To>::value is a compile-time bool
 // constant that's true iff type From can be implicitly converted to
 // type To.
@@ -917,8 +928,11 @@
 // a container class by checking the type of IsContainerTest<C>(0).
 // The value of the expression is insignificant.
 //
-// Note that we look for both C::iterator and C::const_iterator.  The
-// reason is that C++ injects the name of a class as a member of the
+// In C++11 mode we check the existence of a const_iterator and that an
+// iterator is properly implemented for the container.
+//
+// For pre-C++11 that we look for both C::iterator and C::const_iterator.
+// The reason is that C++ injects the name of a class as a member of the
 // class itself (e.g. you can refer to class iterator as either
 // 'iterator' or 'iterator::iterator').  If we look for C::iterator
 // only, for example, we would mistakenly think that a class named
@@ -928,17 +942,96 @@
 // IsContainerTest(typename C::const_iterator*) and
 // IsContainerTest(...) doesn't work with Visual Age C++ and Sun C++.
 typedef int IsContainer;
+#if GTEST_LANG_CXX11
+template <class C,
+          class Iterator = decltype(::std::declval<const C&>().begin()),
+          class = decltype(::std::declval<const C&>().end()),
+          class = decltype(++::std::declval<Iterator&>()),
+          class = decltype(*::std::declval<Iterator>()),
+          class = typename C::const_iterator>
+IsContainer IsContainerTest(int /* dummy */) {
+  return 0;
+}
+#else
 template <class C>
 IsContainer IsContainerTest(int /* dummy */,
                             typename C::iterator* /* it */ = NULL,
                             typename C::const_iterator* /* const_it */ = NULL) {
   return 0;
 }
+#endif  // GTEST_LANG_CXX11
 
 typedef char IsNotContainer;
 template <class C>
 IsNotContainer IsContainerTest(long /* dummy */) { return '\0'; }
 
+// Trait to detect whether a type T is a hash table.
+// The heuristic used is that the type contains an inner type `hasher` and does
+// not contain an inner type `reverse_iterator`.
+// If the container is iterable in reverse, then order might actually matter.
+template <typename T>
+struct IsHashTable {
+ private:
+  template <typename U>
+  static char test(typename U::hasher*, typename U::reverse_iterator*);
+  template <typename U>
+  static int test(typename U::hasher*, ...);
+  template <typename U>
+  static char test(...);
+
+ public:
+  static const bool value = sizeof(test<T>(0, 0)) == sizeof(int);
+};
+
+template <typename T>
+const bool IsHashTable<T>::value;
+
+template<typename T>
+struct VoidT {
+    typedef void value_type;
+};
+
+template <typename T, typename = void>
+struct HasValueType : false_type {};
+template <typename T>
+struct HasValueType<T, VoidT<typename T::value_type> > : true_type {
+};
+
+template <typename C,
+          bool = sizeof(IsContainerTest<C>(0)) == sizeof(IsContainer),
+          bool = HasValueType<C>::value>
+struct IsRecursiveContainerImpl;
+
+template <typename C, bool HV>
+struct IsRecursiveContainerImpl<C, false, HV> : public false_type {};
+
+// Since the IsRecursiveContainerImpl depends on the IsContainerTest we need to
+// obey the same inconsistencies as the IsContainerTest, namely check if
+// something is a container is relying on only const_iterator in C++11 and
+// is relying on both const_iterator and iterator otherwise
+template <typename C>
+struct IsRecursiveContainerImpl<C, true, false> : public false_type {};
+
+template <typename C>
+struct IsRecursiveContainerImpl<C, true, true> {
+  #if GTEST_LANG_CXX11
+  typedef typename IteratorTraits<typename C::const_iterator>::value_type
+      value_type;
+#else
+  typedef typename IteratorTraits<typename C::iterator>::value_type value_type;
+#endif
+  typedef is_same<value_type, C> type;
+};
+
+// IsRecursiveContainer<Type> is a unary compile-time predicate that
+// evaluates whether C is a recursive container type. A recursive container
+// type is a container type whose value_type is equal to the container type
+// itself. An example for a recursive container type is
+// boost::filesystem::path, whose iterator has a value_type that is equal to
+// boost::filesystem::path.
+template <typename C>
+struct IsRecursiveContainer : public IsRecursiveContainerImpl<C>::type {};
+
 // EnableIf<condition>::type is void when 'Cond' is true, and
 // undefined when 'Cond' is false.  To use SFINAE to make a function
 // overload only apply when a particular expression is true, add
@@ -1070,7 +1163,7 @@
  private:
   enum {
     kCheckTypeIsNotConstOrAReference = StaticAssertTypeEqHelper<
-        Element, GTEST_REMOVE_REFERENCE_AND_CONST_(Element)>::value,
+        Element, GTEST_REMOVE_REFERENCE_AND_CONST_(Element)>::value
   };
 
   // Initializes this object with a copy of the input.
@@ -1115,7 +1208,7 @@
 #define GTEST_SUCCESS_(message) \
   GTEST_MESSAGE_(message, ::testing::TestPartResult::kSuccess)
 
-// Suppresses MSVC warnings 4072 (unreachable code) for the code following
+// Suppress MSVC warning 4702 (unreachable code) for the code following
 // statement if it returns or throws (or doesn't return or throw in some
 // situations).
 #define GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement) \
@@ -1235,4 +1328,3 @@
 void GTEST_TEST_CLASS_NAME_(test_case_name, test_name)::TestBody()
 
 #endif  // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_
-
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-linked_ptr.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-linked_ptr.h
index 3602942..082b872 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-linked_ptr.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-linked_ptr.h
@@ -27,8 +27,6 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 //
-// Authors: Dan Egnor (egnor@google.com)
-//
 // A "smart" pointer type with reference tracking.  Every pointer to a
 // particular object is kept on a circular linked list.  When the last pointer
 // to an object is destroyed or reassigned, the object is deleted.
@@ -62,9 +60,11 @@
 //       raw pointer (e.g. via get()) concurrently, and
 //     - it's safe to write to two linked_ptrs that point to the same
 //       shared object concurrently.
-// TODO(wan@google.com): rename this to safe_linked_ptr to avoid
+// FIXME: rename this to safe_linked_ptr to avoid
 // confusion with normal linked_ptr.
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_LINKED_PTR_H_
 #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_LINKED_PTR_H_
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-param-util-generated.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-param-util-generated.h
index 4d1d81d..4fac8c0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-param-util-generated.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-param-util-generated.h
@@ -30,8 +30,7 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: vladl@google.com (Vlad Losev)
+
 
 // Type and function utilities for implementing parameterized tests.
 // This file is generated by a SCRIPT.  DO NOT EDIT BY HAND!
@@ -43,17 +42,14 @@
 // by the maximum arity of the implementation of tuple which is
 // currently set at 10.
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_
 #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_
 
-// scripts/fuse_gtest.py depends on gtest's own header being #included
-// *unconditionally*.  Therefore these #includes cannot be moved
-// inside #if GTEST_HAS_PARAM_TEST.
 #include "gtest/internal/gtest-param-util.h"
 #include "gtest/internal/gtest-port.h"
 
-#if GTEST_HAS_PARAM_TEST
-
 namespace testing {
 
 // Forward declarations of ValuesIn(), which is implemented in
@@ -84,6 +80,8 @@
     return ValuesIn(array);
   }
 
+  ValueArray1(const ValueArray1& other) : v1_(other.v1_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray1& other);
@@ -102,6 +100,8 @@
     return ValuesIn(array);
   }
 
+  ValueArray2(const ValueArray2& other) : v1_(other.v1_), v2_(other.v2_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray2& other);
@@ -122,6 +122,9 @@
     return ValuesIn(array);
   }
 
+  ValueArray3(const ValueArray3& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray3& other);
@@ -144,6 +147,9 @@
     return ValuesIn(array);
   }
 
+  ValueArray4(const ValueArray4& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray4& other);
@@ -167,6 +173,9 @@
     return ValuesIn(array);
   }
 
+  ValueArray5(const ValueArray5& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray5& other);
@@ -193,6 +202,9 @@
     return ValuesIn(array);
   }
 
+  ValueArray6(const ValueArray6& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray6& other);
@@ -220,6 +232,10 @@
     return ValuesIn(array);
   }
 
+  ValueArray7(const ValueArray7& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray7& other);
@@ -249,6 +265,10 @@
     return ValuesIn(array);
   }
 
+  ValueArray8(const ValueArray8& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray8& other);
@@ -280,6 +300,10 @@
     return ValuesIn(array);
   }
 
+  ValueArray9(const ValueArray9& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray9& other);
@@ -312,6 +336,10 @@
     return ValuesIn(array);
   }
 
+  ValueArray10(const ValueArray10& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray10& other);
@@ -346,6 +374,11 @@
     return ValuesIn(array);
   }
 
+  ValueArray11(const ValueArray11& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray11& other);
@@ -382,6 +415,11 @@
     return ValuesIn(array);
   }
 
+  ValueArray12(const ValueArray12& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray12& other);
@@ -420,6 +458,11 @@
     return ValuesIn(array);
   }
 
+  ValueArray13(const ValueArray13& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray13& other);
@@ -459,6 +502,11 @@
     return ValuesIn(array);
   }
 
+  ValueArray14(const ValueArray14& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray14& other);
@@ -500,6 +548,12 @@
     return ValuesIn(array);
   }
 
+  ValueArray15(const ValueArray15& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray15& other);
@@ -544,6 +598,12 @@
     return ValuesIn(array);
   }
 
+  ValueArray16(const ValueArray16& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray16& other);
@@ -589,6 +649,12 @@
     return ValuesIn(array);
   }
 
+  ValueArray17(const ValueArray17& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray17& other);
@@ -636,6 +702,12 @@
     return ValuesIn(array);
   }
 
+  ValueArray18(const ValueArray18& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray18& other);
@@ -684,6 +756,13 @@
     return ValuesIn(array);
   }
 
+  ValueArray19(const ValueArray19& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray19& other);
@@ -734,6 +813,13 @@
     return ValuesIn(array);
   }
 
+  ValueArray20(const ValueArray20& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray20& other);
@@ -787,6 +873,13 @@
     return ValuesIn(array);
   }
 
+  ValueArray21(const ValueArray21& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray21& other);
@@ -841,6 +934,13 @@
     return ValuesIn(array);
   }
 
+  ValueArray22(const ValueArray22& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray22& other);
@@ -897,6 +997,14 @@
     return ValuesIn(array);
   }
 
+  ValueArray23(const ValueArray23& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray23& other);
@@ -955,6 +1063,14 @@
     return ValuesIn(array);
   }
 
+  ValueArray24(const ValueArray24& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray24& other);
@@ -1014,6 +1130,14 @@
     return ValuesIn(array);
   }
 
+  ValueArray25(const ValueArray25& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray25& other);
@@ -1075,6 +1199,14 @@
     return ValuesIn(array);
   }
 
+  ValueArray26(const ValueArray26& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray26& other);
@@ -1139,6 +1271,15 @@
     return ValuesIn(array);
   }
 
+  ValueArray27(const ValueArray27& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray27& other);
@@ -1204,6 +1345,15 @@
     return ValuesIn(array);
   }
 
+  ValueArray28(const ValueArray28& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray28& other);
@@ -1270,6 +1420,15 @@
     return ValuesIn(array);
   }
 
+  ValueArray29(const ValueArray29& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray29& other);
@@ -1339,6 +1498,15 @@
     return ValuesIn(array);
   }
 
+  ValueArray30(const ValueArray30& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray30& other);
@@ -1410,6 +1578,16 @@
     return ValuesIn(array);
   }
 
+  ValueArray31(const ValueArray31& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray31& other);
@@ -1482,6 +1660,16 @@
     return ValuesIn(array);
   }
 
+  ValueArray32(const ValueArray32& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray32& other);
@@ -1557,6 +1745,16 @@
     return ValuesIn(array);
   }
 
+  ValueArray33(const ValueArray33& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray33& other);
@@ -1633,6 +1831,16 @@
     return ValuesIn(array);
   }
 
+  ValueArray34(const ValueArray34& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray34& other);
@@ -1710,6 +1918,17 @@
     return ValuesIn(array);
   }
 
+  ValueArray35(const ValueArray35& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray35& other);
@@ -1790,6 +2009,17 @@
     return ValuesIn(array);
   }
 
+  ValueArray36(const ValueArray36& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_), v36_(other.v36_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray36& other);
@@ -1872,6 +2102,17 @@
     return ValuesIn(array);
   }
 
+  ValueArray37(const ValueArray37& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_), v36_(other.v36_), v37_(other.v37_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray37& other);
@@ -1955,6 +2196,17 @@
     return ValuesIn(array);
   }
 
+  ValueArray38(const ValueArray38& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_), v36_(other.v36_), v37_(other.v37_), v38_(other.v38_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray38& other);
@@ -2040,6 +2292,18 @@
     return ValuesIn(array);
   }
 
+  ValueArray39(const ValueArray39& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_), v36_(other.v36_), v37_(other.v37_), v38_(other.v38_),
+      v39_(other.v39_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray39& other);
@@ -2127,6 +2391,18 @@
     return ValuesIn(array);
   }
 
+  ValueArray40(const ValueArray40& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_), v36_(other.v36_), v37_(other.v37_), v38_(other.v38_),
+      v39_(other.v39_), v40_(other.v40_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray40& other);
@@ -2216,6 +2492,18 @@
     return ValuesIn(array);
   }
 
+  ValueArray41(const ValueArray41& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_), v36_(other.v36_), v37_(other.v37_), v38_(other.v38_),
+      v39_(other.v39_), v40_(other.v40_), v41_(other.v41_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray41& other);
@@ -2307,6 +2595,18 @@
     return ValuesIn(array);
   }
 
+  ValueArray42(const ValueArray42& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_), v36_(other.v36_), v37_(other.v37_), v38_(other.v38_),
+      v39_(other.v39_), v40_(other.v40_), v41_(other.v41_), v42_(other.v42_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray42& other);
@@ -2399,6 +2699,19 @@
     return ValuesIn(array);
   }
 
+  ValueArray43(const ValueArray43& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_), v36_(other.v36_), v37_(other.v37_), v38_(other.v38_),
+      v39_(other.v39_), v40_(other.v40_), v41_(other.v41_), v42_(other.v42_),
+      v43_(other.v43_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray43& other);
@@ -2493,6 +2806,19 @@
     return ValuesIn(array);
   }
 
+  ValueArray44(const ValueArray44& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_), v36_(other.v36_), v37_(other.v37_), v38_(other.v38_),
+      v39_(other.v39_), v40_(other.v40_), v41_(other.v41_), v42_(other.v42_),
+      v43_(other.v43_), v44_(other.v44_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray44& other);
@@ -2589,6 +2915,19 @@
     return ValuesIn(array);
   }
 
+  ValueArray45(const ValueArray45& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_), v36_(other.v36_), v37_(other.v37_), v38_(other.v38_),
+      v39_(other.v39_), v40_(other.v40_), v41_(other.v41_), v42_(other.v42_),
+      v43_(other.v43_), v44_(other.v44_), v45_(other.v45_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray45& other);
@@ -2687,6 +3026,19 @@
     return ValuesIn(array);
   }
 
+  ValueArray46(const ValueArray46& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_), v36_(other.v36_), v37_(other.v37_), v38_(other.v38_),
+      v39_(other.v39_), v40_(other.v40_), v41_(other.v41_), v42_(other.v42_),
+      v43_(other.v43_), v44_(other.v44_), v45_(other.v45_), v46_(other.v46_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray46& other);
@@ -2787,6 +3139,20 @@
     return ValuesIn(array);
   }
 
+  ValueArray47(const ValueArray47& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_), v36_(other.v36_), v37_(other.v37_), v38_(other.v38_),
+      v39_(other.v39_), v40_(other.v40_), v41_(other.v41_), v42_(other.v42_),
+      v43_(other.v43_), v44_(other.v44_), v45_(other.v45_), v46_(other.v46_),
+      v47_(other.v47_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray47& other);
@@ -2889,6 +3255,20 @@
     return ValuesIn(array);
   }
 
+  ValueArray48(const ValueArray48& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_), v36_(other.v36_), v37_(other.v37_), v38_(other.v38_),
+      v39_(other.v39_), v40_(other.v40_), v41_(other.v41_), v42_(other.v42_),
+      v43_(other.v43_), v44_(other.v44_), v45_(other.v45_), v46_(other.v46_),
+      v47_(other.v47_), v48_(other.v48_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray48& other);
@@ -2992,6 +3372,20 @@
     return ValuesIn(array);
   }
 
+  ValueArray49(const ValueArray49& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_), v36_(other.v36_), v37_(other.v37_), v38_(other.v38_),
+      v39_(other.v39_), v40_(other.v40_), v41_(other.v41_), v42_(other.v42_),
+      v43_(other.v43_), v44_(other.v44_), v45_(other.v45_), v46_(other.v46_),
+      v47_(other.v47_), v48_(other.v48_), v49_(other.v49_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray49& other);
@@ -3096,6 +3490,20 @@
     return ValuesIn(array);
   }
 
+  ValueArray50(const ValueArray50& other) : v1_(other.v1_), v2_(other.v2_),
+      v3_(other.v3_), v4_(other.v4_), v5_(other.v5_), v6_(other.v6_),
+      v7_(other.v7_), v8_(other.v8_), v9_(other.v9_), v10_(other.v10_),
+      v11_(other.v11_), v12_(other.v12_), v13_(other.v13_), v14_(other.v14_),
+      v15_(other.v15_), v16_(other.v16_), v17_(other.v17_), v18_(other.v18_),
+      v19_(other.v19_), v20_(other.v20_), v21_(other.v21_), v22_(other.v22_),
+      v23_(other.v23_), v24_(other.v24_), v25_(other.v25_), v26_(other.v26_),
+      v27_(other.v27_), v28_(other.v28_), v29_(other.v29_), v30_(other.v30_),
+      v31_(other.v31_), v32_(other.v32_), v33_(other.v33_), v34_(other.v34_),
+      v35_(other.v35_), v36_(other.v36_), v37_(other.v37_), v38_(other.v38_),
+      v39_(other.v39_), v40_(other.v40_), v41_(other.v41_), v42_(other.v42_),
+      v43_(other.v43_), v44_(other.v44_), v45_(other.v45_), v46_(other.v46_),
+      v47_(other.v47_), v48_(other.v48_), v49_(other.v49_), v50_(other.v50_) {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray50& other);
@@ -3208,7 +3616,7 @@
     virtual ParamIteratorInterface<ParamType>* Clone() const {
       return new Iterator(*this);
     }
-    virtual const ParamType* Current() const { return &current_value_; }
+    virtual const ParamType* Current() const { return current_value_.get(); }
     virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {
       // Having the same base generator guarantees that the other
       // iterator is of the same type and we can downcast.
@@ -3240,7 +3648,7 @@
 
     void ComputeCurrentValue() {
       if (!AtEnd())
-        current_value_ = ParamType(*current1_, *current2_);
+        current_value_.reset(new ParamType(*current1_, *current2_));
     }
     bool AtEnd() const {
       // We must report iterator past the end of the range when either of the
@@ -3262,7 +3670,7 @@
     const typename ParamGenerator<T2>::iterator begin2_;
     const typename ParamGenerator<T2>::iterator end2_;
     typename ParamGenerator<T2>::iterator current2_;
-    ParamType current_value_;
+    linked_ptr<ParamType> current_value_;
   };  // class CartesianProductGenerator2::Iterator
 
   // No implementation - assignment is unsupported.
@@ -3331,7 +3739,7 @@
     virtual ParamIteratorInterface<ParamType>* Clone() const {
       return new Iterator(*this);
     }
-    virtual const ParamType* Current() const { return &current_value_; }
+    virtual const ParamType* Current() const { return current_value_.get(); }
     virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {
       // Having the same base generator guarantees that the other
       // iterator is of the same type and we can downcast.
@@ -3367,7 +3775,7 @@
 
     void ComputeCurrentValue() {
       if (!AtEnd())
-        current_value_ = ParamType(*current1_, *current2_, *current3_);
+        current_value_.reset(new ParamType(*current1_, *current2_, *current3_));
     }
     bool AtEnd() const {
       // We must report iterator past the end of the range when either of the
@@ -3393,7 +3801,7 @@
     const typename ParamGenerator<T3>::iterator begin3_;
     const typename ParamGenerator<T3>::iterator end3_;
     typename ParamGenerator<T3>::iterator current3_;
-    ParamType current_value_;
+    linked_ptr<ParamType> current_value_;
   };  // class CartesianProductGenerator3::Iterator
 
   // No implementation - assignment is unsupported.
@@ -3472,7 +3880,7 @@
     virtual ParamIteratorInterface<ParamType>* Clone() const {
       return new Iterator(*this);
     }
-    virtual const ParamType* Current() const { return &current_value_; }
+    virtual const ParamType* Current() const { return current_value_.get(); }
     virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {
       // Having the same base generator guarantees that the other
       // iterator is of the same type and we can downcast.
@@ -3512,8 +3920,8 @@
 
     void ComputeCurrentValue() {
       if (!AtEnd())
-        current_value_ = ParamType(*current1_, *current2_, *current3_,
-            *current4_);
+        current_value_.reset(new ParamType(*current1_, *current2_, *current3_,
+            *current4_));
     }
     bool AtEnd() const {
       // We must report iterator past the end of the range when either of the
@@ -3543,7 +3951,7 @@
     const typename ParamGenerator<T4>::iterator begin4_;
     const typename ParamGenerator<T4>::iterator end4_;
     typename ParamGenerator<T4>::iterator current4_;
-    ParamType current_value_;
+    linked_ptr<ParamType> current_value_;
   };  // class CartesianProductGenerator4::Iterator
 
   // No implementation - assignment is unsupported.
@@ -3630,7 +4038,7 @@
     virtual ParamIteratorInterface<ParamType>* Clone() const {
       return new Iterator(*this);
     }
-    virtual const ParamType* Current() const { return &current_value_; }
+    virtual const ParamType* Current() const { return current_value_.get(); }
     virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {
       // Having the same base generator guarantees that the other
       // iterator is of the same type and we can downcast.
@@ -3674,8 +4082,8 @@
 
     void ComputeCurrentValue() {
       if (!AtEnd())
-        current_value_ = ParamType(*current1_, *current2_, *current3_,
-            *current4_, *current5_);
+        current_value_.reset(new ParamType(*current1_, *current2_, *current3_,
+            *current4_, *current5_));
     }
     bool AtEnd() const {
       // We must report iterator past the end of the range when either of the
@@ -3709,7 +4117,7 @@
     const typename ParamGenerator<T5>::iterator begin5_;
     const typename ParamGenerator<T5>::iterator end5_;
     typename ParamGenerator<T5>::iterator current5_;
-    ParamType current_value_;
+    linked_ptr<ParamType> current_value_;
   };  // class CartesianProductGenerator5::Iterator
 
   // No implementation - assignment is unsupported.
@@ -3807,7 +4215,7 @@
     virtual ParamIteratorInterface<ParamType>* Clone() const {
       return new Iterator(*this);
     }
-    virtual const ParamType* Current() const { return &current_value_; }
+    virtual const ParamType* Current() const { return current_value_.get(); }
     virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {
       // Having the same base generator guarantees that the other
       // iterator is of the same type and we can downcast.
@@ -3855,8 +4263,8 @@
 
     void ComputeCurrentValue() {
       if (!AtEnd())
-        current_value_ = ParamType(*current1_, *current2_, *current3_,
-            *current4_, *current5_, *current6_);
+        current_value_.reset(new ParamType(*current1_, *current2_, *current3_,
+            *current4_, *current5_, *current6_));
     }
     bool AtEnd() const {
       // We must report iterator past the end of the range when either of the
@@ -3894,7 +4302,7 @@
     const typename ParamGenerator<T6>::iterator begin6_;
     const typename ParamGenerator<T6>::iterator end6_;
     typename ParamGenerator<T6>::iterator current6_;
-    ParamType current_value_;
+    linked_ptr<ParamType> current_value_;
   };  // class CartesianProductGenerator6::Iterator
 
   // No implementation - assignment is unsupported.
@@ -4001,7 +4409,7 @@
     virtual ParamIteratorInterface<ParamType>* Clone() const {
       return new Iterator(*this);
     }
-    virtual const ParamType* Current() const { return &current_value_; }
+    virtual const ParamType* Current() const { return current_value_.get(); }
     virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {
       // Having the same base generator guarantees that the other
       // iterator is of the same type and we can downcast.
@@ -4053,8 +4461,8 @@
 
     void ComputeCurrentValue() {
       if (!AtEnd())
-        current_value_ = ParamType(*current1_, *current2_, *current3_,
-            *current4_, *current5_, *current6_, *current7_);
+        current_value_.reset(new ParamType(*current1_, *current2_, *current3_,
+            *current4_, *current5_, *current6_, *current7_));
     }
     bool AtEnd() const {
       // We must report iterator past the end of the range when either of the
@@ -4096,7 +4504,7 @@
     const typename ParamGenerator<T7>::iterator begin7_;
     const typename ParamGenerator<T7>::iterator end7_;
     typename ParamGenerator<T7>::iterator current7_;
-    ParamType current_value_;
+    linked_ptr<ParamType> current_value_;
   };  // class CartesianProductGenerator7::Iterator
 
   // No implementation - assignment is unsupported.
@@ -4214,7 +4622,7 @@
     virtual ParamIteratorInterface<ParamType>* Clone() const {
       return new Iterator(*this);
     }
-    virtual const ParamType* Current() const { return &current_value_; }
+    virtual const ParamType* Current() const { return current_value_.get(); }
     virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {
       // Having the same base generator guarantees that the other
       // iterator is of the same type and we can downcast.
@@ -4270,8 +4678,8 @@
 
     void ComputeCurrentValue() {
       if (!AtEnd())
-        current_value_ = ParamType(*current1_, *current2_, *current3_,
-            *current4_, *current5_, *current6_, *current7_, *current8_);
+        current_value_.reset(new ParamType(*current1_, *current2_, *current3_,
+            *current4_, *current5_, *current6_, *current7_, *current8_));
     }
     bool AtEnd() const {
       // We must report iterator past the end of the range when either of the
@@ -4317,7 +4725,7 @@
     const typename ParamGenerator<T8>::iterator begin8_;
     const typename ParamGenerator<T8>::iterator end8_;
     typename ParamGenerator<T8>::iterator current8_;
-    ParamType current_value_;
+    linked_ptr<ParamType> current_value_;
   };  // class CartesianProductGenerator8::Iterator
 
   // No implementation - assignment is unsupported.
@@ -4443,7 +4851,7 @@
     virtual ParamIteratorInterface<ParamType>* Clone() const {
       return new Iterator(*this);
     }
-    virtual const ParamType* Current() const { return &current_value_; }
+    virtual const ParamType* Current() const { return current_value_.get(); }
     virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {
       // Having the same base generator guarantees that the other
       // iterator is of the same type and we can downcast.
@@ -4503,9 +4911,9 @@
 
     void ComputeCurrentValue() {
       if (!AtEnd())
-        current_value_ = ParamType(*current1_, *current2_, *current3_,
+        current_value_.reset(new ParamType(*current1_, *current2_, *current3_,
             *current4_, *current5_, *current6_, *current7_, *current8_,
-            *current9_);
+            *current9_));
     }
     bool AtEnd() const {
       // We must report iterator past the end of the range when either of the
@@ -4555,7 +4963,7 @@
     const typename ParamGenerator<T9>::iterator begin9_;
     const typename ParamGenerator<T9>::iterator end9_;
     typename ParamGenerator<T9>::iterator current9_;
-    ParamType current_value_;
+    linked_ptr<ParamType> current_value_;
   };  // class CartesianProductGenerator9::Iterator
 
   // No implementation - assignment is unsupported.
@@ -4690,7 +5098,7 @@
     virtual ParamIteratorInterface<ParamType>* Clone() const {
       return new Iterator(*this);
     }
-    virtual const ParamType* Current() const { return &current_value_; }
+    virtual const ParamType* Current() const { return current_value_.get(); }
     virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {
       // Having the same base generator guarantees that the other
       // iterator is of the same type and we can downcast.
@@ -4754,9 +5162,9 @@
 
     void ComputeCurrentValue() {
       if (!AtEnd())
-        current_value_ = ParamType(*current1_, *current2_, *current3_,
+        current_value_.reset(new ParamType(*current1_, *current2_, *current3_,
             *current4_, *current5_, *current6_, *current7_, *current8_,
-            *current9_, *current10_);
+            *current9_, *current10_));
     }
     bool AtEnd() const {
       // We must report iterator past the end of the range when either of the
@@ -4810,7 +5218,7 @@
     const typename ParamGenerator<T10>::iterator begin10_;
     const typename ParamGenerator<T10>::iterator end10_;
     typename ParamGenerator<T10>::iterator current10_;
-    ParamType current_value_;
+    linked_ptr<ParamType> current_value_;
   };  // class CartesianProductGenerator10::Iterator
 
   // No implementation - assignment is unsupported.
@@ -5141,6 +5549,4 @@
 }  // namespace internal
 }  // namespace testing
 
-#endif  //  GTEST_HAS_PARAM_TEST
-
 #endif  // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-param-util-generated.h.pump b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-param-util-generated.h.pump
index 5c7c47a..30dffe4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-param-util-generated.h.pump
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-param-util-generated.h.pump
@@ -29,8 +29,7 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: vladl@google.com (Vlad Losev)
+
 
 // Type and function utilities for implementing parameterized tests.
 // This file is generated by a SCRIPT.  DO NOT EDIT BY HAND!
@@ -42,17 +41,14 @@
 // by the maximum arity of the implementation of tuple which is
 // currently set at $maxtuple.
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_
 #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_
 
-// scripts/fuse_gtest.py depends on gtest's own header being #included
-// *unconditionally*.  Therefore these #includes cannot be moved
-// inside #if GTEST_HAS_PARAM_TEST.
 #include "gtest/internal/gtest-param-util.h"
 #include "gtest/internal/gtest-port.h"
 
-#if GTEST_HAS_PARAM_TEST
-
 namespace testing {
 
 // Forward declarations of ValuesIn(), which is implemented in
@@ -87,6 +83,8 @@
     return ValuesIn(array);
   }
 
+  ValueArray$i(const ValueArray$i& other) : $for j, [[v$(j)_(other.v$(j)_)]] {}
+
  private:
   // No implementation - assignment is unsupported.
   void operator=(const ValueArray$i& other);
@@ -165,7 +163,7 @@
     virtual ParamIteratorInterface<ParamType>* Clone() const {
       return new Iterator(*this);
     }
-    virtual const ParamType* Current() const { return &current_value_; }
+    virtual const ParamType* Current() const { return current_value_.get(); }
     virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {
       // Having the same base generator guarantees that the other
       // iterator is of the same type and we can downcast.
@@ -197,7 +195,7 @@
 
     void ComputeCurrentValue() {
       if (!AtEnd())
-        current_value_ = ParamType($for j, [[*current$(j)_]]);
+        current_value_.reset(new ParamType($for j, [[*current$(j)_]]));
     }
     bool AtEnd() const {
       // We must report iterator past the end of the range when either of the
@@ -222,7 +220,7 @@
     typename ParamGenerator<T$j>::iterator current$(j)_;
 ]]
 
-    ParamType current_value_;
+    linked_ptr<ParamType> current_value_;
   };  // class CartesianProductGenerator$i::Iterator
 
   // No implementation - assignment is unsupported.
@@ -281,6 +279,4 @@
 }  // namespace internal
 }  // namespace testing
 
-#endif  //  GTEST_HAS_PARAM_TEST
-
 #endif  // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-param-util.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-param-util.h
index 82cab9b..d64f620 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-param-util.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-param-util.h
@@ -26,11 +26,12 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: vladl@google.com (Vlad Losev)
+
 
 // Type and function utilities for implementing parameterized tests.
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_
 #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_
 
@@ -41,16 +42,11 @@
 #include <utility>
 #include <vector>
 
-// scripts/fuse_gtest.py depends on gtest's own header being #included
-// *unconditionally*.  Therefore these #includes cannot be moved
-// inside #if GTEST_HAS_PARAM_TEST.
 #include "gtest/internal/gtest-internal.h"
 #include "gtest/internal/gtest-linked_ptr.h"
 #include "gtest/internal/gtest-port.h"
 #include "gtest/gtest-printers.h"
 
-#if GTEST_HAS_PARAM_TEST
-
 namespace testing {
 
 // Input to a parameterized test name generator, describing a test parameter.
@@ -472,7 +468,7 @@
   virtual ~ParameterizedTestCaseInfoBase() {}
 
   // Base part of test case name for display purposes.
-  virtual const string& GetTestCaseName() const = 0;
+  virtual const std::string& GetTestCaseName() const = 0;
   // Test case id to verify identity.
   virtual TypeId GetTestCaseTypeId() const = 0;
   // UnitTest class invokes this method to register tests in this
@@ -511,7 +507,7 @@
       : test_case_name_(name), code_location_(code_location) {}
 
   // Test case base name for display purposes.
-  virtual const string& GetTestCaseName() const { return test_case_name_; }
+  virtual const std::string& GetTestCaseName() const { return test_case_name_; }
   // Test case id to verify identity.
   virtual TypeId GetTestCaseTypeId() const { return GetTypeId<TestCase>(); }
   // TEST_P macro uses AddTestPattern() to record information
@@ -529,11 +525,10 @@
   }
   // INSTANTIATE_TEST_CASE_P macro uses AddGenerator() to record information
   // about a generator.
-  int AddTestCaseInstantiation(const string& instantiation_name,
+  int AddTestCaseInstantiation(const std::string& instantiation_name,
                                GeneratorCreationFunc* func,
                                ParamNameGeneratorFunc* name_func,
-                               const char* file,
-                               int line) {
+                               const char* file, int line) {
     instantiations_.push_back(
         InstantiationInfo(instantiation_name, func, name_func, file, line));
     return 0;  // Return value used only to run this method in namespace scope.
@@ -550,13 +545,13 @@
       for (typename InstantiationContainer::iterator gen_it =
                instantiations_.begin(); gen_it != instantiations_.end();
                ++gen_it) {
-        const string& instantiation_name = gen_it->name;
+        const std::string& instantiation_name = gen_it->name;
         ParamGenerator<ParamType> generator((*gen_it->generator)());
         ParamNameGeneratorFunc* name_func = gen_it->name_func;
         const char* file = gen_it->file;
         int line = gen_it->line;
 
-        string test_case_name;
+        std::string test_case_name;
         if ( !instantiation_name.empty() )
           test_case_name = instantiation_name + "/";
         test_case_name += test_info->test_case_base_name;
@@ -609,8 +604,8 @@
         test_base_name(a_test_base_name),
         test_meta_factory(a_test_meta_factory) {}
 
-    const string test_case_base_name;
-    const string test_base_name;
+    const std::string test_case_base_name;
+    const std::string test_base_name;
     const scoped_ptr<TestMetaFactoryBase<ParamType> > test_meta_factory;
   };
   typedef ::std::vector<linked_ptr<TestInfo> > TestInfoContainer;
@@ -651,7 +646,7 @@
     return true;
   }
 
-  const string test_case_name_;
+  const std::string test_case_name_;
   CodeLocation code_location_;
   TestInfoContainer tests_;
   InstantiationContainer instantiations_;
@@ -726,6 +721,4 @@
 }  // namespace internal
 }  // namespace testing
 
-#endif  //  GTEST_HAS_PARAM_TEST
-
 #endif  // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-port-arch.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-port-arch.h
index 74ab949..f83700e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-port-arch.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-port-arch.h
@@ -27,7 +27,7 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 //
-// The Google C++ Testing Framework (Google Test)
+// The Google C++ Testing and Mocking Framework (Google Test)
 //
 // This header file defines the GTEST_OS_* macro.
 // It is separate from gtest-port.h so that custom/gtest-port.h can include it.
@@ -54,6 +54,9 @@
 #   define GTEST_OS_WINDOWS_PHONE 1
 #  elif WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_APP)
 #   define GTEST_OS_WINDOWS_RT 1
+#  elif WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_TV_TITLE)
+#   define GTEST_OS_WINDOWS_PHONE 1
+#   define GTEST_OS_WINDOWS_TV_TITLE 1
 #  else
     // WINAPI_FAMILY defined but no known partition matched.
     // Default to desktop.
@@ -69,6 +72,8 @@
 # endif
 #elif defined __FreeBSD__
 # define GTEST_OS_FREEBSD 1
+#elif defined __Fuchsia__
+# define GTEST_OS_FUCHSIA 1
 #elif defined __linux__
 # define GTEST_OS_LINUX 1
 # if defined __ANDROID__
@@ -84,6 +89,8 @@
 # define GTEST_OS_HPUX 1
 #elif defined __native_client__
 # define GTEST_OS_NACL 1
+#elif defined __NetBSD__
+# define GTEST_OS_NETBSD 1
 #elif defined __OpenBSD__
 # define GTEST_OS_OPENBSD 1
 #elif defined __QNX__
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-port.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-port.h
index da57e65..786497d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-port.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-port.h
@@ -27,8 +27,6 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 //
-// Authors: wan@google.com (Zhanyong Wan)
-//
 // Low-level types and utilities for porting Google Test to various
 // platforms.  All macros ending with _ and symbols defined in an
 // internal namespace are subject to change without notice.  Code
@@ -40,6 +38,8 @@
 // files are expected to #include this.  Therefore, it cannot #include
 // any other Google Test header.
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_
 #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_
 
@@ -73,11 +73,9 @@
 //   GTEST_HAS_EXCEPTIONS     - Define it to 1/0 to indicate that exceptions
 //                              are enabled.
 //   GTEST_HAS_GLOBAL_STRING  - Define it to 1/0 to indicate that ::string
-//                              is/isn't available (some systems define
-//                              ::string, which is different to std::string).
-//   GTEST_HAS_GLOBAL_WSTRING - Define it to 1/0 to indicate that ::string
-//                              is/isn't available (some systems define
-//                              ::wstring, which is different to std::wstring).
+//                              is/isn't available
+//   GTEST_HAS_GLOBAL_WSTRING - Define it to 1/0 to indicate that ::wstring
+//                              is/isn't available
 //   GTEST_HAS_POSIX_RE       - Define it to 1/0 to indicate that POSIX regular
 //                              expressions are/aren't available.
 //   GTEST_HAS_PTHREAD        - Define it to 1/0 to indicate that <pthread.h>
@@ -109,6 +107,12 @@
 //   GTEST_CREATE_SHARED_LIBRARY
 //                            - Define to 1 when compiling Google Test itself
 //                              as a shared library.
+//   GTEST_DEFAULT_DEATH_TEST_STYLE
+//                            - The default value of --gtest_death_test_style.
+//                              The legacy default has been "fast" in the open
+//                              source version since 2008. The recommended value
+//                              is "threadsafe", and can be set in
+//                              custom/gtest-port.h.
 
 // Platform-indicating macros
 // --------------------------
@@ -122,12 +126,14 @@
 //   GTEST_OS_AIX      - IBM AIX
 //   GTEST_OS_CYGWIN   - Cygwin
 //   GTEST_OS_FREEBSD  - FreeBSD
+//   GTEST_OS_FUCHSIA  - Fuchsia
 //   GTEST_OS_HPUX     - HP-UX
 //   GTEST_OS_LINUX    - Linux
 //     GTEST_OS_LINUX_ANDROID - Google Android
 //   GTEST_OS_MAC      - Mac OS X
 //     GTEST_OS_IOS    - iOS
 //   GTEST_OS_NACL     - Google Native Client (NaCl)
+//   GTEST_OS_NETBSD   - NetBSD
 //   GTEST_OS_OPENBSD  - OpenBSD
 //   GTEST_OS_QNX      - QNX
 //   GTEST_OS_SOLARIS  - Sun Solaris
@@ -169,15 +175,15 @@
 //   GTEST_HAS_COMBINE      - the Combine() function (for value-parameterized
 //                            tests)
 //   GTEST_HAS_DEATH_TEST   - death tests
-//   GTEST_HAS_PARAM_TEST   - value-parameterized tests
 //   GTEST_HAS_TYPED_TEST   - typed tests
 //   GTEST_HAS_TYPED_TEST_P - type-parameterized tests
 //   GTEST_IS_THREADSAFE    - Google Test is thread-safe.
+//   GOOGLETEST_CM0007 DO NOT DELETE
 //   GTEST_USES_POSIX_RE    - enhanced POSIX regex is used. Do not confuse with
 //                            GTEST_HAS_POSIX_RE (see above) which users can
 //                            define themselves.
 //   GTEST_USES_SIMPLE_RE   - our own simple regex is used;
-//                            the above two are mutually exclusive.
+//                            the above RE\b(s) are mutually exclusive.
 //   GTEST_CAN_COMPARE_NULL - accepts untyped NULL in EXPECT_EQ().
 
 // Misc public macros
@@ -206,6 +212,7 @@
 //
 // C++11 feature wrappers:
 //
+//   testing::internal::forward - portability wrapper for std::forward.
 //   testing::internal::move  - portability wrapper for std::move.
 //
 // Synchronization:
@@ -222,10 +229,10 @@
 //
 // Regular expressions:
 //   RE             - a simple regular expression class using the POSIX
-//                    Extended Regular Expression syntax on UNIX-like
-//                    platforms, or a reduced regular exception syntax on
-//                    other platforms, including Windows.
-//
+//                    Extended Regular Expression syntax on UNIX-like platforms
+//                    GOOGLETEST_CM0008 DO NOT DELETE
+//                    or a reduced regular exception syntax on other
+//                    platforms, including Windows.
 // Logging:
 //   GTEST_LOG_()   - logs messages at the specified severity level.
 //   LogToStderr()  - directs all log messages to stderr.
@@ -271,10 +278,12 @@
 # include <TargetConditionals.h>
 #endif
 
+// Brings in the definition of HAS_GLOBAL_STRING.  This must be done
+// BEFORE we test HAS_GLOBAL_STRING.
+#include <string>  // NOLINT
 #include <algorithm>  // NOLINT
 #include <iostream>  // NOLINT
 #include <sstream>  // NOLINT
-#include <string>  // NOLINT
 #include <utility>
 #include <vector>  // NOLINT
 
@@ -306,7 +315,7 @@
 //   GTEST_DISABLE_MSC_WARNINGS_PUSH_(4800 4385)
 //   /* code that triggers warnings C4800 and C4385 */
 //   GTEST_DISABLE_MSC_WARNINGS_POP_()
-#if _MSC_VER >= 1500
+#if _MSC_VER >= 1400
 # define GTEST_DISABLE_MSC_WARNINGS_PUSH_(warnings) \
     __pragma(warning(push))                        \
     __pragma(warning(disable: warnings))
@@ -318,12 +327,28 @@
 # define GTEST_DISABLE_MSC_WARNINGS_POP_()
 #endif
 
+// Clang on Windows does not understand MSVC's pragma warning.
+// We need clang-specific way to disable function deprecation warning.
+#ifdef __clang__
+# define GTEST_DISABLE_MSC_DEPRECATED_PUSH_()                         \
+    _Pragma("clang diagnostic push")                                  \
+    _Pragma("clang diagnostic ignored \"-Wdeprecated-declarations\"") \
+    _Pragma("clang diagnostic ignored \"-Wdeprecated-implementations\"")
+#define GTEST_DISABLE_MSC_DEPRECATED_POP_() \
+    _Pragma("clang diagnostic pop")
+#else
+# define GTEST_DISABLE_MSC_DEPRECATED_PUSH_() \
+    GTEST_DISABLE_MSC_WARNINGS_PUSH_(4996)
+# define GTEST_DISABLE_MSC_DEPRECATED_POP_() \
+    GTEST_DISABLE_MSC_WARNINGS_POP_()
+#endif
+
 #ifndef GTEST_LANG_CXX11
 // gcc and clang define __GXX_EXPERIMENTAL_CXX0X__ when
 // -std={c,gnu}++{0x,11} is passed.  The C++11 standard specifies a
 // value for __cplusplus, and recent versions of clang, gcc, and
 // probably other compilers set that too in C++11 mode.
-# if __GXX_EXPERIMENTAL_CXX0X__ || __cplusplus >= 201103L
+# if __GXX_EXPERIMENTAL_CXX0X__ || __cplusplus >= 201103L || _MSC_VER >= 1900
 // Compiling in at least C++11 mode.
 #  define GTEST_LANG_CXX11 1
 # else
@@ -355,12 +380,16 @@
 #if GTEST_STDLIB_CXX11
 # define GTEST_HAS_STD_BEGIN_AND_END_ 1
 # define GTEST_HAS_STD_FORWARD_LIST_ 1
-# define GTEST_HAS_STD_FUNCTION_ 1
+# if !defined(_MSC_VER) || (_MSC_FULL_VER >= 190023824)
+// works only with VS2015U2 and better
+#   define GTEST_HAS_STD_FUNCTION_ 1
+# endif
 # define GTEST_HAS_STD_INITIALIZER_LIST_ 1
 # define GTEST_HAS_STD_MOVE_ 1
-# define GTEST_HAS_STD_SHARED_PTR_ 1
-# define GTEST_HAS_STD_TYPE_TRAITS_ 1
 # define GTEST_HAS_STD_UNIQUE_PTR_ 1
+# define GTEST_HAS_STD_SHARED_PTR_ 1
+# define GTEST_HAS_UNORDERED_MAP_ 1
+# define GTEST_HAS_UNORDERED_SET_ 1
 #endif
 
 // C++11 specifies that <tuple> provides std::tuple.
@@ -368,7 +397,8 @@
 #if GTEST_LANG_CXX11
 # define GTEST_HAS_STD_TUPLE_ 1
 # if defined(__clang__)
-// Inspired by http://clang.llvm.org/docs/LanguageExtensions.html#__has_include
+// Inspired by
+// https://clang.llvm.org/docs/LanguageExtensions.html#include-file-checking-macros
 #  if defined(__has_include) && !__has_include(<tuple>)
 #   undef GTEST_HAS_STD_TUPLE_
 #  endif
@@ -380,7 +410,7 @@
 # elif defined(__GLIBCXX__)
 // Inspired by boost/config/stdlib/libstdcpp3.hpp,
 // http://gcc.gnu.org/gcc-4.2/changes.html and
-// http://gcc.gnu.org/onlinedocs/libstdc++/manual/bk01pt01ch01.html#manual.intro.status.standard.200x
+// https://web.archive.org/web/20140227044429/gcc.gnu.org/onlinedocs/libstdc++/manual/bk01pt01ch01.html#manual.intro.status.standard.200x
 #  if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 2)
 #   undef GTEST_HAS_STD_TUPLE_
 #  endif
@@ -396,10 +426,16 @@
 #  include <io.h>
 # endif
 // In order to avoid having to include <windows.h>, use forward declaration
-// assuming CRITICAL_SECTION is a typedef of _RTL_CRITICAL_SECTION.
+#if GTEST_OS_WINDOWS_MINGW && !defined(__MINGW64_VERSION_MAJOR)
+// MinGW defined _CRITICAL_SECTION and _RTL_CRITICAL_SECTION as two
+// separate (equivalent) structs, instead of using typedef
+typedef struct _CRITICAL_SECTION GTEST_CRITICAL_SECTION;
+#else
+// Assume CRITICAL_SECTION is a typedef of _RTL_CRITICAL_SECTION.
 // This assumption is verified by
 // WindowsTypesTest.CRITICAL_SECTIONIs_RTL_CRITICAL_SECTION.
-struct _RTL_CRITICAL_SECTION;
+typedef struct _RTL_CRITICAL_SECTION GTEST_CRITICAL_SECTION;
+#endif
 #else
 // This assumes that non-Windows OSes provide unistd.h. For OSes where this
 // is not the case, we need to include headers that provide the functions
@@ -453,8 +489,11 @@
 #ifndef GTEST_HAS_EXCEPTIONS
 // The user didn't tell us whether exceptions are enabled, so we need
 // to figure it out.
-# if defined(_MSC_VER) || defined(__BORLANDC__)
-// MSVC's and C++Builder's implementations of the STL use the _HAS_EXCEPTIONS
+# if defined(_MSC_VER) && defined(_CPPUNWIND)
+// MSVC defines _CPPUNWIND to 1 iff exceptions are enabled.
+#  define GTEST_HAS_EXCEPTIONS 1
+# elif defined(__BORLANDC__)
+// C++Builder's implementation of the STL uses the _HAS_EXCEPTIONS
 // macro to enable exceptions, so we'll do the same.
 // Assumes that exceptions are enabled by default.
 #  ifndef _HAS_EXCEPTIONS
@@ -498,21 +537,17 @@
 # define GTEST_HAS_STD_STRING 1
 #elif !GTEST_HAS_STD_STRING
 // The user told us that ::std::string isn't available.
-# error "Google Test cannot be used where ::std::string isn't available."
+# error "::std::string isn't available."
 #endif  // !defined(GTEST_HAS_STD_STRING)
 
 #ifndef GTEST_HAS_GLOBAL_STRING
-// The user didn't tell us whether ::string is available, so we need
-// to figure it out.
-
 # define GTEST_HAS_GLOBAL_STRING 0
-
 #endif  // GTEST_HAS_GLOBAL_STRING
 
 #ifndef GTEST_HAS_STD_WSTRING
 // The user didn't tell us whether ::std::wstring is available, so we need
 // to figure it out.
-// TODO(wan@google.com): uses autoconf to detect whether ::std::wstring
+// FIXME: uses autoconf to detect whether ::std::wstring
 //   is available.
 
 // Cygwin 1.7 and below doesn't support ::std::wstring.
@@ -600,8 +635,9 @@
 //
 // To disable threading support in Google Test, add -DGTEST_HAS_PTHREAD=0
 // to your compiler flags.
-# define GTEST_HAS_PTHREAD (GTEST_OS_LINUX || GTEST_OS_MAC || GTEST_OS_HPUX \
-    || GTEST_OS_QNX || GTEST_OS_FREEBSD || GTEST_OS_NACL)
+#define GTEST_HAS_PTHREAD                                             \
+  (GTEST_OS_LINUX || GTEST_OS_MAC || GTEST_OS_HPUX || GTEST_OS_QNX || \
+   GTEST_OS_FREEBSD || GTEST_OS_NACL || GTEST_OS_NETBSD || GTEST_OS_FUCHSIA)
 #endif  // GTEST_HAS_PTHREAD
 
 #if GTEST_HAS_PTHREAD
@@ -616,7 +652,7 @@
 // Determines if hash_map/hash_set are available.
 // Only used for testing against those containers.
 #if !defined(GTEST_HAS_HASH_MAP_)
-# if _MSC_VER
+# if defined(_MSC_VER) && (_MSC_VER < 1900)
 #  define GTEST_HAS_HASH_MAP_ 1  // Indicates that hash_map is available.
 #  define GTEST_HAS_HASH_SET_ 1  // Indicates that hash_set is available.
 # endif  // _MSC_VER
@@ -629,6 +665,14 @@
 # if GTEST_OS_LINUX_ANDROID && defined(_STLPORT_MAJOR)
 // STLport, provided with the Android NDK, has neither <tr1/tuple> or <tuple>.
 #  define GTEST_HAS_TR1_TUPLE 0
+# elif defined(_MSC_VER) && (_MSC_VER >= 1910)
+// Prevent `warning C4996: 'std::tr1': warning STL4002:
+// The non-Standard std::tr1 namespace and TR1-only machinery
+// are deprecated and will be REMOVED.`
+#  define GTEST_HAS_TR1_TUPLE 0
+# elif GTEST_LANG_CXX11 && defined(_LIBCPP_VERSION)
+// libc++ doesn't support TR1.
+#  define GTEST_HAS_TR1_TUPLE 0
 # else
 // The user didn't tell us not to do it, so we assume it's OK.
 #  define GTEST_HAS_TR1_TUPLE 1
@@ -638,6 +682,10 @@
 // Determines whether Google Test's own tr1 tuple implementation
 // should be used.
 #ifndef GTEST_USE_OWN_TR1_TUPLE
+// We use our own tuple implementation on Symbian.
+# if GTEST_OS_SYMBIAN
+#  define GTEST_USE_OWN_TR1_TUPLE 1
+# else
 // The user didn't tell us, so we need to figure it out.
 
 // We use our own TR1 tuple if we aren't sure the user has an
@@ -651,7 +699,8 @@
 // support TR1 tuple.  libc++ only provides std::tuple, in C++11 mode,
 // and it can be used with some compilers that define __GNUC__.
 # if (defined(__GNUC__) && !defined(__CUDACC__) && (GTEST_GCC_VER_ >= 40000) \
-      && !GTEST_OS_QNX && !defined(_LIBCPP_VERSION)) || _MSC_VER >= 1600
+      && !GTEST_OS_QNX && !defined(_LIBCPP_VERSION)) \
+      || (_MSC_VER >= 1600 && _MSC_VER < 1900)
 #  define GTEST_ENV_HAS_TR1_TUPLE_ 1
 # endif
 
@@ -667,12 +716,11 @@
 # else
 #  define GTEST_USE_OWN_TR1_TUPLE 1
 # endif
-
+# endif  // GTEST_OS_SYMBIAN
 #endif  // GTEST_USE_OWN_TR1_TUPLE
 
-// To avoid conditional compilation everywhere, we make it
-// gtest-port.h's responsibility to #include the header implementing
-// tuple.
+// To avoid conditional compilation we make it gtest-port.h's responsibility
+// to #include the header implementing tuple.
 #if GTEST_HAS_STD_TUPLE_
 # include <tuple>  // IWYU pragma: export
 # define GTEST_TUPLE_NAMESPACE_ ::std
@@ -687,22 +735,6 @@
 
 # if GTEST_USE_OWN_TR1_TUPLE
 #  include "gtest/internal/gtest-tuple.h"  // IWYU pragma: export  // NOLINT
-# elif GTEST_ENV_HAS_STD_TUPLE_
-#  include <tuple>
-// C++11 puts its tuple into the ::std namespace rather than
-// ::std::tr1.  gtest expects tuple to live in ::std::tr1, so put it there.
-// This causes undefined behavior, but supported compilers react in
-// the way we intend.
-namespace std {
-namespace tr1 {
-using ::std::get;
-using ::std::make_tuple;
-using ::std::tuple;
-using ::std::tuple_element;
-using ::std::tuple_size;
-}
-}
-
 # elif GTEST_OS_SYMBIAN
 
 // On Symbian, BOOST_HAS_TR1_TUPLE causes Boost's TR1 tuple library to
@@ -727,20 +759,22 @@
 // Until version 4.3.2, gcc has a bug that causes <tr1/functional>,
 // which is #included by <tr1/tuple>, to not compile when RTTI is
 // disabled.  _TR1_FUNCTIONAL is the header guard for
-// <tr1/functional>.  Hence the following #define is a hack to prevent
+// <tr1/functional>.  Hence the following #define is used to prevent
 // <tr1/functional> from being included.
 #   define _TR1_FUNCTIONAL 1
 #   include <tr1/tuple>
 #   undef _TR1_FUNCTIONAL  // Allows the user to #include
-                        // <tr1/functional> if he chooses to.
+                        // <tr1/functional> if they choose to.
 #  else
 #   include <tr1/tuple>  // NOLINT
 #  endif  // !GTEST_HAS_RTTI && GTEST_GCC_VER_ < 40302
 
-# else
-// If the compiler is not GCC 4.0+, we assume the user is using a
-// spec-conforming TR1 implementation.
+// VS 2010 now has tr1 support.
+# elif _MSC_VER >= 1600
 #  include <tuple>  // IWYU pragma: export  // NOLINT
+
+# else  // GTEST_USE_OWN_TR1_TUPLE
+#  include <tr1/tuple>  // IWYU pragma: export  // NOLINT
 # endif  // GTEST_USE_OWN_TR1_TUPLE
 
 #endif  // GTEST_HAS_TR1_TUPLE
@@ -754,8 +788,12 @@
 
 # if GTEST_OS_LINUX && !defined(__ia64__)
 #  if GTEST_OS_LINUX_ANDROID
-// On Android, clone() is only available on ARM starting with Gingerbread.
-#    if defined(__arm__) && __ANDROID_API__ >= 9
+// On Android, clone() became available at different API levels for each 32-bit
+// architecture.
+#    if defined(__LP64__) || \
+        (defined(__arm__) && __ANDROID_API__ >= 9) || \
+        (defined(__mips__) && __ANDROID_API__ >= 12) || \
+        (defined(__i386__) && __ANDROID_API__ >= 17)
 #     define GTEST_HAS_CLONE 1
 #    else
 #     define GTEST_HAS_CLONE 0
@@ -786,19 +824,15 @@
 // Google Test does not support death tests for VC 7.1 and earlier as
 // abort() in a VC 7.1 application compiled as GUI in debug config
 // pops up a dialog window that cannot be suppressed programmatically.
-#if (GTEST_OS_LINUX || GTEST_OS_CYGWIN || GTEST_OS_SOLARIS || \
-     (GTEST_OS_MAC && !GTEST_OS_IOS) || \
-     (GTEST_OS_WINDOWS_DESKTOP && _MSC_VER >= 1400) || \
+#if (GTEST_OS_LINUX || GTEST_OS_CYGWIN || GTEST_OS_SOLARIS ||   \
+     (GTEST_OS_MAC && !GTEST_OS_IOS) ||                         \
+     (GTEST_OS_WINDOWS_DESKTOP && _MSC_VER >= 1400) ||          \
      GTEST_OS_WINDOWS_MINGW || GTEST_OS_AIX || GTEST_OS_HPUX || \
-     GTEST_OS_OPENBSD || GTEST_OS_QNX || GTEST_OS_FREEBSD)
+     GTEST_OS_OPENBSD || GTEST_OS_QNX || GTEST_OS_FREEBSD || \
+     GTEST_OS_NETBSD || GTEST_OS_FUCHSIA)
 # define GTEST_HAS_DEATH_TEST 1
 #endif
 
-// We don't support MSVC 7.1 with exceptions disabled now.  Therefore
-// all the compilers we care about are adequate for supporting
-// value-parameterized tests.
-#define GTEST_HAS_PARAM_TEST 1
-
 // Determines whether to support type-driven tests.
 
 // Typed tests need <typeinfo> and variadic macros, which GCC, VC++ 8.0,
@@ -813,7 +847,7 @@
 // value-parameterized tests are enabled.  The implementation doesn't
 // work on Sun Studio since it doesn't understand templated conversion
 // operators.
-#if GTEST_HAS_PARAM_TEST && GTEST_HAS_TR1_TUPLE && !defined(__SUNPRO_CC)
+#if (GTEST_HAS_TR1_TUPLE || GTEST_HAS_STD_TUPLE_) && !defined(__SUNPRO_CC)
 # define GTEST_HAS_COMBINE 1
 #endif
 
@@ -864,15 +898,39 @@
 # define GTEST_ATTRIBUTE_UNUSED_
 #endif
 
+#if GTEST_LANG_CXX11
+# define GTEST_CXX11_EQUALS_DELETE_ = delete
+#else  // GTEST_LANG_CXX11
+# define GTEST_CXX11_EQUALS_DELETE_
+#endif  // GTEST_LANG_CXX11
+
+// Use this annotation before a function that takes a printf format string.
+#if (defined(__GNUC__) || defined(__clang__)) && !defined(COMPILER_ICC)
+# if defined(__MINGW_PRINTF_FORMAT)
+// MinGW has two different printf implementations. Ensure the format macro
+// matches the selected implementation. See
+// https://sourceforge.net/p/mingw-w64/wiki2/gnu%20printf/.
+#  define GTEST_ATTRIBUTE_PRINTF_(string_index, first_to_check) \
+       __attribute__((__format__(__MINGW_PRINTF_FORMAT, string_index, \
+                                 first_to_check)))
+# else
+#  define GTEST_ATTRIBUTE_PRINTF_(string_index, first_to_check) \
+       __attribute__((__format__(__printf__, string_index, first_to_check)))
+# endif
+#else
+# define GTEST_ATTRIBUTE_PRINTF_(string_index, first_to_check)
+#endif
+
+
 // A macro to disallow operator=
 // This should be used in the private: declarations for a class.
-#define GTEST_DISALLOW_ASSIGN_(type)\
-  void operator=(type const &)
+#define GTEST_DISALLOW_ASSIGN_(type) \
+  void operator=(type const &) GTEST_CXX11_EQUALS_DELETE_
 
 // A macro to disallow copy constructor and operator=
 // This should be used in the private: declarations for a class.
-#define GTEST_DISALLOW_COPY_AND_ASSIGN_(type)\
-  type(type const &);\
+#define GTEST_DISALLOW_COPY_AND_ASSIGN_(type) \
+  type(type const &) GTEST_CXX11_EQUALS_DELETE_; \
   GTEST_DISALLOW_ASSIGN_(type)
 
 // Tell the compiler to warn about unused return values for functions declared
@@ -920,6 +978,11 @@
 
 #endif  // GTEST_HAS_SEH
 
+// GTEST_API_ qualifies all symbols that must be exported. The definitions below
+// are guarded by #ifndef to give embedders a chance to define GTEST_API_ in
+// gtest/internal/custom/gtest-port.h
+#ifndef GTEST_API_
+
 #ifdef _MSC_VER
 # if GTEST_LINKED_AS_SHARED_LIBRARY
 #  define GTEST_API_ __declspec(dllimport)
@@ -928,11 +991,17 @@
 # endif
 #elif __GNUC__ >= 4 || defined(__clang__)
 # define GTEST_API_ __attribute__((visibility ("default")))
-#endif // _MSC_VER
+#endif  // _MSC_VER
+
+#endif  // GTEST_API_
 
 #ifndef GTEST_API_
 # define GTEST_API_
-#endif
+#endif  // GTEST_API_
+
+#ifndef GTEST_DEFAULT_DEATH_TEST_STYLE
+# define GTEST_DEFAULT_DEATH_TEST_STYLE  "fast"
+#endif  // GTEST_DEFAULT_DEATH_TEST_STYLE
 
 #ifdef __GNUC__
 // Ask the compiler to never inline a given function.
@@ -942,10 +1011,12 @@
 #endif
 
 // _LIBCPP_VERSION is defined by the libc++ library from the LLVM project.
-#if defined(__GLIBCXX__) || defined(_LIBCPP_VERSION)
-# define GTEST_HAS_CXXABI_H_ 1
-#else
-# define GTEST_HAS_CXXABI_H_ 0
+#if !defined(GTEST_HAS_CXXABI_H_)
+# if defined(__GLIBCXX__) || (defined(_LIBCPP_VERSION) && !defined(_MSC_VER))
+#  define GTEST_HAS_CXXABI_H_ 1
+# else
+#  define GTEST_HAS_CXXABI_H_ 0
+# endif
 #endif
 
 // A function level attribute to disable checking for use of uninitialized
@@ -985,19 +1056,6 @@
 # define GTEST_ATTRIBUTE_NO_SANITIZE_THREAD_
 #endif  // __clang__
 
-// A function level attribute to disable UndefinedBehaviorSanitizer's (defined)
-// unsigned integer overflow instrumentation.
-#if defined(__clang__)
-# if defined(__has_attribute) && __has_attribute(no_sanitize)
-#  define GTEST_ATTRIBUTE_NO_SANITIZE_UNSIGNED_OVERFLOW_ \
-       __attribute__((no_sanitize("unsigned-integer-overflow")))
-# else
-#  define GTEST_ATTRIBUTE_NO_SANITIZE_UNSIGNED_OVERFLOW_
-# endif  // defined(__has_attribute) && __has_attribute(no_sanitize)
-#else
-# define GTEST_ATTRIBUTE_NO_SANITIZE_UNSIGNED_OVERFLOW_
-#endif  // __clang__
-
 namespace testing {
 
 class Message;
@@ -1101,6 +1159,16 @@
   enum { value = true };
 };
 
+// Same as std::is_same<>.
+template <typename T, typename U>
+struct IsSame {
+  enum { value = false };
+};
+template <typename T>
+struct IsSame<T, T> {
+  enum { value = true };
+};
+
 // Evaluates to the number of elements in 'array'.
 #define GTEST_ARRAY_SIZE_(array) (sizeof(array) / sizeof(array[0]))
 
@@ -1164,6 +1232,10 @@
 
 // Defines RE.
 
+#if GTEST_USES_PCRE
+// if used, PCRE is injected by custom/gtest-port.h
+#elif GTEST_USES_POSIX_RE || GTEST_USES_SIMPLE_RE
+
 // A simple C++ wrapper for <regex.h>.  It uses the POSIX Extended
 // Regular Expression syntax.
 class GTEST_API_ RE {
@@ -1175,11 +1247,11 @@
   // Constructs an RE from a string.
   RE(const ::std::string& regex) { Init(regex.c_str()); }  // NOLINT
 
-#if GTEST_HAS_GLOBAL_STRING
+# if GTEST_HAS_GLOBAL_STRING
 
   RE(const ::string& regex) { Init(regex.c_str()); }  // NOLINT
 
-#endif  // GTEST_HAS_GLOBAL_STRING
+# endif  // GTEST_HAS_GLOBAL_STRING
 
   RE(const char* regex) { Init(regex); }  // NOLINT
   ~RE();
@@ -1192,7 +1264,7 @@
   // PartialMatch(str, re) returns true iff regular expression re
   // matches a substring of str (including str itself).
   //
-  // TODO(wan@google.com): make FullMatch() and PartialMatch() work
+  // FIXME: make FullMatch() and PartialMatch() work
   // when str contains NUL characters.
   static bool FullMatch(const ::std::string& str, const RE& re) {
     return FullMatch(str.c_str(), re);
@@ -1201,7 +1273,7 @@
     return PartialMatch(str.c_str(), re);
   }
 
-#if GTEST_HAS_GLOBAL_STRING
+# if GTEST_HAS_GLOBAL_STRING
 
   static bool FullMatch(const ::string& str, const RE& re) {
     return FullMatch(str.c_str(), re);
@@ -1210,7 +1282,7 @@
     return PartialMatch(str.c_str(), re);
   }
 
-#endif  // GTEST_HAS_GLOBAL_STRING
+# endif  // GTEST_HAS_GLOBAL_STRING
 
   static bool FullMatch(const char* str, const RE& re);
   static bool PartialMatch(const char* str, const RE& re);
@@ -1219,25 +1291,27 @@
   void Init(const char* regex);
 
   // We use a const char* instead of an std::string, as Google Test used to be
-  // used where std::string is not available.  TODO(wan@google.com): change to
+  // used where std::string is not available.  FIXME: change to
   // std::string.
   const char* pattern_;
   bool is_valid_;
 
-#if GTEST_USES_POSIX_RE
+# if GTEST_USES_POSIX_RE
 
   regex_t full_regex_;     // For FullMatch().
   regex_t partial_regex_;  // For PartialMatch().
 
-#else  // GTEST_USES_SIMPLE_RE
+# else  // GTEST_USES_SIMPLE_RE
 
   const char* full_pattern_;  // For FullMatch();
 
-#endif
+# endif
 
   GTEST_DISALLOW_ASSIGN_(RE);
 };
 
+#endif  // GTEST_USES_PCRE
+
 // Formats a source file path and a line number as they would appear
 // in an error message from the compiler used to compile this code.
 GTEST_API_ ::std::string FormatFileLocation(const char* file, int line);
@@ -1323,13 +1397,59 @@
     GTEST_LOG_(FATAL) << #posix_call << "failed with error " \
                       << gtest_error
 
+// Adds reference to a type if it is not a reference type,
+// otherwise leaves it unchanged.  This is the same as
+// tr1::add_reference, which is not widely available yet.
+template <typename T>
+struct AddReference { typedef T& type; };  // NOLINT
+template <typename T>
+struct AddReference<T&> { typedef T& type; };  // NOLINT
+
+// A handy wrapper around AddReference that works when the argument T
+// depends on template parameters.
+#define GTEST_ADD_REFERENCE_(T) \
+    typename ::testing::internal::AddReference<T>::type
+
+// Transforms "T" into "const T&" according to standard reference collapsing
+// rules (this is only needed as a backport for C++98 compilers that do not
+// support reference collapsing). Specifically, it transforms:
+//
+//   char         ==> const char&
+//   const char   ==> const char&
+//   char&        ==> char&
+//   const char&  ==> const char&
+//
+// Note that the non-const reference will not have "const" added. This is
+// standard, and necessary so that "T" can always bind to "const T&".
+template <typename T>
+struct ConstRef { typedef const T& type; };
+template <typename T>
+struct ConstRef<T&> { typedef T& type; };
+
+// The argument T must depend on some template parameters.
+#define GTEST_REFERENCE_TO_CONST_(T) \
+  typename ::testing::internal::ConstRef<T>::type
+
 #if GTEST_HAS_STD_MOVE_
+using std::forward;
 using std::move;
+
+template <typename T>
+struct RvalueRef {
+  typedef T&& type;
+};
 #else  // GTEST_HAS_STD_MOVE_
 template <typename T>
 const T& move(const T& t) {
   return t;
 }
+template <typename T>
+GTEST_ADD_REFERENCE_(T) forward(GTEST_ADD_REFERENCE_(T) t) { return t; }
+
+template <typename T>
+struct RvalueRef {
+  typedef const T& type;
+};
 #endif  // GTEST_HAS_STD_MOVE_
 
 // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.
@@ -1430,10 +1550,6 @@
 GTEST_API_ std::string GetCapturedStderr();
 
 #endif  // GTEST_HAS_STREAM_REDIRECTION
-
-// Returns a path to temporary directory.
-GTEST_API_ std::string TempDir();
-
 // Returns the size (in bytes) of a file.
 GTEST_API_ size_t GetFileSize(FILE* file);
 
@@ -1441,14 +1557,18 @@
 GTEST_API_ std::string ReadEntireFile(FILE* file);
 
 // All command line arguments.
-GTEST_API_ const ::std::vector<testing::internal::string>& GetArgvs();
+GTEST_API_ std::vector<std::string> GetArgvs();
 
 #if GTEST_HAS_DEATH_TEST
 
-const ::std::vector<testing::internal::string>& GetInjectableArgvs();
-void SetInjectableArgvs(const ::std::vector<testing::internal::string>*
-                             new_argvs);
-
+std::vector<std::string> GetInjectableArgvs();
+// Deprecated: pass the args vector by value instead.
+void SetInjectableArgvs(const std::vector<std::string>* new_argvs);
+void SetInjectableArgvs(const std::vector<std::string>& new_argvs);
+#if GTEST_HAS_GLOBAL_STRING
+void SetInjectableArgvs(const std::vector< ::string>& new_argvs);
+#endif  // GTEST_HAS_GLOBAL_STRING
+void ClearInjectableArgvs();
 
 #endif  // GTEST_HAS_DEATH_TEST
 
@@ -1698,7 +1818,7 @@
   // Initializes owner_thread_id_ and critical_section_ in static mutexes.
   void ThreadSafeLazyInit();
 
-  // Per http://blogs.msdn.com/b/oldnewthing/archive/2004/02/23/78395.aspx,
+  // Per https://blogs.msdn.microsoft.com/oldnewthing/20040223-00/?p=40503,
   // we assume that 0 is an invalid value for thread IDs.
   unsigned int owner_thread_id_;
 
@@ -1706,7 +1826,7 @@
   // by the linker.
   MutexType type_;
   long critical_section_init_phase_;  // NOLINT
-  _RTL_CRITICAL_SECTION* critical_section_;
+  GTEST_CRITICAL_SECTION* critical_section_;
 
   GTEST_DISALLOW_COPY_AND_ASSIGN_(Mutex);
 };
@@ -1982,8 +2102,13 @@
      extern ::testing::internal::MutexBase mutex
 
 // Defines and statically (i.e. at link time) initializes a static mutex.
-#  define GTEST_DEFINE_STATIC_MUTEX_(mutex) \
-     ::testing::internal::MutexBase mutex = { PTHREAD_MUTEX_INITIALIZER, false, pthread_t() }
+// The initialization list here does not explicitly initialize each field,
+// instead relying on default initialization for the unspecified fields. In
+// particular, the owner_ field (a pthread_t) is not explicitly initialized.
+// This allows initialization to work whether pthread_t is a scalar or struct.
+// The flag -Wmissing-field-initializers must not be specified for this to work.
+#define GTEST_DEFINE_STATIC_MUTEX_(mutex) \
+  ::testing::internal::MutexBase mutex = {PTHREAD_MUTEX_INITIALIZER, false, 0}
 
 // The Mutex class can only be used for mutexes created at runtime. It
 // shares its API with MutexBase otherwise.
@@ -2040,7 +2165,7 @@
 
 // Implements thread-local storage on pthreads-based systems.
 template <typename T>
-class ThreadLocal {
+class GTEST_API_ ThreadLocal {
  public:
   ThreadLocal()
       : key_(CreateKey()), default_factory_(new DefaultValueHolderFactory()) {}
@@ -2172,7 +2297,7 @@
 typedef GTestMutexLock MutexLock;
 
 template <typename T>
-class ThreadLocal {
+class GTEST_API_ ThreadLocal {
  public:
   ThreadLocal() : value_() {}
   explicit ThreadLocal(const T& value) : value_(value) {}
@@ -2191,12 +2316,13 @@
 GTEST_API_ size_t GetThreadCount();
 
 // Passing non-POD classes through ellipsis (...) crashes the ARM
-// compiler and generates a warning in Sun Studio.  The Nokia Symbian
+// compiler and generates a warning in Sun Studio before 12u4. The Nokia Symbian
 // and the IBM XL C/C++ compiler try to instantiate a copy constructor
 // for objects passed through ellipsis (...), failing for uncopyable
 // objects.  We define this to ensure that only POD is passed through
 // ellipsis on these systems.
-#if defined(__SYMBIAN32__) || defined(__IBMCPP__) || defined(__SUNPRO_CC)
+#if defined(__SYMBIAN32__) || defined(__IBMCPP__) || \
+     (defined(__SUNPRO_CC) && __SUNPRO_CC < 0x5130)
 // We lose support for NULL detection where the compiler doesn't like
 // passing non-POD classes through ellipsis (...).
 # define GTEST_ELLIPSIS_NEEDS_POD_ 1
@@ -2222,6 +2348,13 @@
 typedef bool_constant<false> false_type;
 typedef bool_constant<true> true_type;
 
+template <typename T, typename U>
+struct is_same : public false_type {};
+
+template <typename T>
+struct is_same<T, T> : public true_type {};
+
+
 template <typename T>
 struct is_pointer : public false_type {};
 
@@ -2233,6 +2366,7 @@
   typedef typename Iterator::value_type value_type;
 };
 
+
 template <typename T>
 struct IteratorTraits<T*> {
   typedef T value_type;
@@ -2364,7 +2498,7 @@
 
 // Functions deprecated by MSVC 8.0.
 
-GTEST_DISABLE_MSC_WARNINGS_PUSH_(4996 /* deprecated function */)
+GTEST_DISABLE_MSC_DEPRECATED_PUSH_()
 
 inline const char* StrNCpy(char* dest, const char* src, size_t n) {
   return strncpy(dest, src, n);
@@ -2398,7 +2532,7 @@
 inline const char* StrError(int errnum) { return strerror(errnum); }
 #endif
 inline const char* GetEnv(const char* name) {
-#if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_WINDOWS_PHONE | GTEST_OS_WINDOWS_RT
+#if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_WINDOWS_PHONE || GTEST_OS_WINDOWS_RT
   // We are on Windows CE, which has no environment variables.
   static_cast<void>(name);  // To prevent 'unused argument' warning.
   return NULL;
@@ -2412,7 +2546,7 @@
 #endif
 }
 
-GTEST_DISABLE_MSC_WARNINGS_POP_()
+GTEST_DISABLE_MSC_DEPRECATED_POP_()
 
 #if GTEST_OS_WINDOWS_MOBILE
 // Windows CE has no C library. The abort() function is used in
@@ -2528,15 +2662,15 @@
 # define GTEST_DECLARE_bool_(name) GTEST_API_ extern bool GTEST_FLAG(name)
 # define GTEST_DECLARE_int32_(name) \
     GTEST_API_ extern ::testing::internal::Int32 GTEST_FLAG(name)
-#define GTEST_DECLARE_string_(name) \
+# define GTEST_DECLARE_string_(name) \
     GTEST_API_ extern ::std::string GTEST_FLAG(name)
 
 // Macros for defining flags.
-#define GTEST_DEFINE_bool_(name, default_val, doc) \
+# define GTEST_DEFINE_bool_(name, default_val, doc) \
     GTEST_API_ bool GTEST_FLAG(name) = (default_val)
-#define GTEST_DEFINE_int32_(name, default_val, doc) \
+# define GTEST_DEFINE_int32_(name, default_val, doc) \
     GTEST_API_ ::testing::internal::Int32 GTEST_FLAG(name) = (default_val)
-#define GTEST_DEFINE_string_(name, default_val, doc) \
+# define GTEST_DEFINE_string_(name, default_val, doc) \
     GTEST_API_ ::std::string GTEST_FLAG(name) = (default_val)
 
 #endif  // !defined(GTEST_DECLARE_bool_)
@@ -2550,7 +2684,7 @@
 // Parses 'str' for a 32-bit signed integer.  If successful, writes the result
 // to *value and returns true; otherwise leaves *value unchanged and returns
 // false.
-// TODO(chandlerc): Find a better way to refactor flag and environment parsing
+// FIXME: Find a better way to refactor flag and environment parsing
 // out of both gtest-port.cc and gtest.cc to avoid exporting this utility
 // function.
 bool ParseInt32(const Message& src_text, const char* str, Int32* value);
@@ -2559,7 +2693,8 @@
 // corresponding to the given Google Test flag.
 bool BoolFromGTestEnv(const char* flag, bool default_val);
 GTEST_API_ Int32 Int32FromGTestEnv(const char* flag, Int32 default_val);
-std::string StringFromGTestEnv(const char* flag, const char* default_val);
+std::string OutputFlagAlsoCheckEnvVar();
+const char* StringFromGTestEnv(const char* flag, const char* default_val);
 
 }  // namespace internal
 }  // namespace testing
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-string.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-string.h
index 97f1a7f..4c9b626 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-string.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-string.h
@@ -27,17 +27,17 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 //
-// Authors: wan@google.com (Zhanyong Wan), eefacm@gmail.com (Sean Mcafee)
-//
-// The Google C++ Testing Framework (Google Test)
+// The Google C++ Testing and Mocking Framework (Google Test)
 //
 // This header file declares the String class and functions used internally by
 // Google Test.  They are subject to change without notice. They should not used
 // by code external to Google Test.
 //
-// This header file is #included by <gtest/internal/gtest-internal.h>.
+// This header file is #included by gtest-internal.h.
 // It should not be #included by other files.
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_
 #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-tuple.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-tuple.h
index e9b4053..78a3a6a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-tuple.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-tuple.h
@@ -30,11 +30,12 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: wan@google.com (Zhanyong Wan)
+
 
 // Implements a subset of TR1 tuple needed by Google Test and Google Mock.
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_
 #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_
 
@@ -42,7 +43,7 @@
 
 // The compiler used in Symbian has a bug that prevents us from declaring the
 // tuple template as a friend (it complains that tuple is redefined).  This
-// hack bypasses the bug by declaring the members that should otherwise be
+// bypasses the bug by declaring the members that should otherwise be
 // private as public.
 // Sun Studio versions < 12 also have the above bug.
 #if defined(__SYMBIAN32__) || (defined(__SUNPRO_CC) && __SUNPRO_CC < 0x590)
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-tuple.h.pump b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-tuple.h.pump
index 429ddfe..bb626e0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-tuple.h.pump
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-tuple.h.pump
@@ -29,11 +29,12 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: wan@google.com (Zhanyong Wan)
+
 
 // Implements a subset of TR1 tuple needed by Google Test and Google Mock.
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_
 #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_
 
@@ -41,7 +42,7 @@
 
 // The compiler used in Symbian has a bug that prevents us from declaring the
 // tuple template as a friend (it complains that tuple is redefined).  This
-// hack bypasses the bug by declaring the members that should otherwise be
+// bypasses the bug by declaring the members that should otherwise be
 // private as public.
 // Sun Studio versions < 12 also have the above bug.
 #if defined(__SYMBIAN32__) || (defined(__SUNPRO_CC) && __SUNPRO_CC < 0x590)
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-type-util.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-type-util.h
index e46f7cf..28e4112 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-type-util.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-type-util.h
@@ -30,8 +30,7 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: wan@google.com (Zhanyong Wan)
+
 
 // Type utilities needed for implementing typed and type-parameterized
 // tests.  This file is generated by a SCRIPT.  DO NOT EDIT BY HAND!
@@ -41,6 +40,8 @@
 // Please contact googletestframework@googlegroups.com if you need
 // more.
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_
 #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_
 
@@ -57,6 +58,22 @@
 namespace testing {
 namespace internal {
 
+// Canonicalizes a given name with respect to the Standard C++ Library.
+// This handles removing the inline namespace within `std` that is
+// used by various standard libraries (e.g., `std::__1`).  Names outside
+// of namespace std are returned unmodified.
+inline std::string CanonicalizeForStdLibVersioning(std::string s) {
+  static const char prefix[] = "std::__";
+  if (s.compare(0, strlen(prefix), prefix) == 0) {
+    std::string::size_type end = s.find("::", strlen(prefix));
+    if (end != s.npos) {
+      // Erase everything between the initial `std` and the second `::`.
+      s.erase(strlen("std"), end - strlen("std"));
+    }
+  }
+  return s;
+}
+
 // GetTypeName<T>() returns a human-readable name of type T.
 // NB: This function is also used in Google Mock, so don't move it inside of
 // the typed-test-only section below.
@@ -75,7 +92,7 @@
   char* const readable_name = __cxa_demangle(name, 0, 0, &status);
   const std::string name_str(status == 0 ? readable_name : name);
   free(readable_name);
-  return name_str;
+  return CanonicalizeForStdLibVersioning(name_str);
 #  else
   return name;
 #  endif  // GTEST_HAS_CXXABI_H_ || __HP_aCC
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-type-util.h.pump b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-type-util.h.pump
index 251fdf0..0001a5d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-type-util.h.pump
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/include/gtest/internal/gtest-type-util.h.pump
@@ -28,8 +28,7 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: wan@google.com (Zhanyong Wan)
+
 
 // Type utilities needed for implementing typed and type-parameterized
 // tests.  This file is generated by a SCRIPT.  DO NOT EDIT BY HAND!
@@ -39,6 +38,8 @@
 // Please contact googletestframework@googlegroups.com if you need
 // more.
 
+// GOOGLETEST_CM0001 DO NOT DELETE
+
 #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_
 #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_
 
@@ -55,6 +56,22 @@
 namespace testing {
 namespace internal {
 
+// Canonicalizes a given name with respect to the Standard C++ Library.
+// This handles removing the inline namespace within `std` that is
+// used by various standard libraries (e.g., `std::__1`).  Names outside
+// of namespace std are returned unmodified.
+inline std::string CanonicalizeForStdLibVersioning(std::string s) {
+  static const char prefix[] = "std::__";
+  if (s.compare(0, strlen(prefix), prefix) == 0) {
+    std::string::size_type end = s.find("::", strlen(prefix));
+    if (end != s.npos) {
+      // Erase everything between the initial `std` and the second `::`.
+      s.erase(strlen("std"), end - strlen("std"));
+    }
+  }
+  return s;
+}
+
 // GetTypeName<T>() returns a human-readable name of type T.
 // NB: This function is also used in Google Mock, so don't move it inside of
 // the typed-test-only section below.
@@ -73,7 +90,7 @@
   char* const readable_name = __cxa_demangle(name, 0, 0, &status);
   const std::string name_str(status == 0 ? readable_name : name);
   free(readable_name);
-  return name_str;
+  return CanonicalizeForStdLibVersioning(name_str);
 #  else
   return name;
 #  endif  // GTEST_HAS_CXXABI_H_ || __HP_aCC
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-all.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-all.cc
index 0a9cee5..b217a18 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-all.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-all.cc
@@ -26,10 +26,9 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
 //
-// Author: mheule@google.com (Markus Heule)
-//
-// Google C++ Testing Framework (Google Test)
+// Google C++ Testing and Mocking Framework (Google Test)
 //
 // Sometimes it's desirable to build Google Test by compiling a single file.
 // This file serves this purpose.
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-death-test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-death-test.cc
index a01a369..0908355 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-death-test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-death-test.cc
@@ -26,8 +26,7 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: wan@google.com (Zhanyong Wan), vladl@google.com (Vlad Losev)
+
 //
 // This file implements death tests.
 
@@ -62,26 +61,30 @@
 #  include <spawn.h>
 # endif  // GTEST_OS_QNX
 
+# if GTEST_OS_FUCHSIA
+#  include <lib/fdio/io.h>
+#  include <lib/fdio/spawn.h>
+#  include <zircon/processargs.h>
+#  include <zircon/syscalls.h>
+#  include <zircon/syscalls/port.h>
+# endif  // GTEST_OS_FUCHSIA
+
 #endif  // GTEST_HAS_DEATH_TEST
 
 #include "gtest/gtest-message.h"
 #include "gtest/internal/gtest-string.h"
-
-// Indicates that this translation unit is part of Google Test's
-// implementation.  It must come before gtest-internal-inl.h is
-// included, or there will be a compiler error.  This trick exists to
-// prevent the accidental inclusion of gtest-internal-inl.h in the
-// user's code.
-#define GTEST_IMPLEMENTATION_ 1
 #include "src/gtest-internal-inl.h"
-#undef GTEST_IMPLEMENTATION_
 
 namespace testing {
 
 // Constants.
 
 // The default death test style.
-static const char kDefaultDeathTestStyle[] = "fast";
+//
+// This is defined in internal/gtest-port.h as "fast", but can be overridden by
+// a definition in internal/custom/gtest-port.h. The recommended value, which is
+// used internally at Google, is "threadsafe".
+static const char kDefaultDeathTestStyle[] = GTEST_DEFAULT_DEATH_TEST_STYLE;
 
 GTEST_DEFINE_string_(
     death_test_style,
@@ -121,7 +124,7 @@
 
 // Valid only for fast death tests. Indicates the code is running in the
 // child process of a fast style death test.
-# if !GTEST_OS_WINDOWS
+# if !GTEST_OS_WINDOWS && !GTEST_OS_FUCHSIA
 static bool g_in_fast_death_test_child = false;
 # endif
 
@@ -131,10 +134,10 @@
 // tests.  IMPORTANT: This is an internal utility.  Using it may break the
 // implementation of death tests.  User code MUST NOT use it.
 bool InDeathTestChild() {
-# if GTEST_OS_WINDOWS
+# if GTEST_OS_WINDOWS || GTEST_OS_FUCHSIA
 
-  // On Windows, death tests are thread-safe regardless of the value of the
-  // death_test_style flag.
+  // On Windows and Fuchsia, death tests are thread-safe regardless of the value
+  // of the death_test_style flag.
   return !GTEST_FLAG(internal_run_death_test).empty();
 
 # else
@@ -154,7 +157,7 @@
 
 // ExitedWithCode function-call operator.
 bool ExitedWithCode::operator()(int exit_status) const {
-# if GTEST_OS_WINDOWS
+# if GTEST_OS_WINDOWS || GTEST_OS_FUCHSIA
 
   return exit_status == exit_code_;
 
@@ -162,10 +165,10 @@
 
   return WIFEXITED(exit_status) && WEXITSTATUS(exit_status) == exit_code_;
 
-# endif  // GTEST_OS_WINDOWS
+# endif  // GTEST_OS_WINDOWS || GTEST_OS_FUCHSIA
 }
 
-# if !GTEST_OS_WINDOWS
+# if !GTEST_OS_WINDOWS && !GTEST_OS_FUCHSIA
 // KilledBySignal constructor.
 KilledBySignal::KilledBySignal(int signum) : signum_(signum) {
 }
@@ -182,7 +185,7 @@
 #  endif  // defined(GTEST_KILLED_BY_SIGNAL_OVERRIDE_)
   return WIFSIGNALED(exit_status) && WTERMSIG(exit_status) == signum_;
 }
-# endif  // !GTEST_OS_WINDOWS
+# endif  // !GTEST_OS_WINDOWS && !GTEST_OS_FUCHSIA
 
 namespace internal {
 
@@ -193,7 +196,7 @@
 static std::string ExitSummary(int exit_code) {
   Message m;
 
-# if GTEST_OS_WINDOWS
+# if GTEST_OS_WINDOWS || GTEST_OS_FUCHSIA
 
   m << "Exited with exit status " << exit_code;
 
@@ -209,7 +212,7 @@
     m << " (core dumped)";
   }
 #  endif
-# endif  // GTEST_OS_WINDOWS
+# endif  // GTEST_OS_WINDOWS || GTEST_OS_FUCHSIA
 
   return m.GetString();
 }
@@ -220,7 +223,7 @@
   return !ExitedWithCode(0)(exit_status);
 }
 
-# if !GTEST_OS_WINDOWS
+# if !GTEST_OS_WINDOWS && !GTEST_OS_FUCHSIA
 // Generates a textual failure message when a death test finds more than
 // one thread running, or cannot determine the number of threads, prior
 // to executing the given statement.  It is the responsibility of the
@@ -229,13 +232,19 @@
   Message msg;
   msg << "Death tests use fork(), which is unsafe particularly"
       << " in a threaded context. For this test, " << GTEST_NAME_ << " ";
-  if (thread_count == 0)
+  if (thread_count == 0) {
     msg << "couldn't detect the number of threads.";
-  else
+  } else {
     msg << "detected " << thread_count << " threads.";
+  }
+  msg << " See "
+         "https://github.com/google/googletest/blob/master/googletest/docs/"
+         "advanced.md#death-tests-and-threads"
+      << " for more explanation and suggested solutions, especially if"
+      << " this is the last message you see before your test times out.";
   return msg.GetString();
 }
-# endif  // !GTEST_OS_WINDOWS
+# endif  // !GTEST_OS_WINDOWS && !GTEST_OS_FUCHSIA
 
 // Flag characters for reporting a death test that did not die.
 static const char kDeathTestLived = 'L';
@@ -243,6 +252,13 @@
 static const char kDeathTestThrew = 'T';
 static const char kDeathTestInternalError = 'I';
 
+#if GTEST_OS_FUCHSIA
+
+// File descriptor used for the pipe in the child process.
+static const int kFuchsiaReadPipeFd = 3;
+
+#endif
+
 // An enumeration describing all of the possible ways that a death test can
 // conclude.  DIED means that the process died while executing the test
 // code; LIVED means that process lived beyond the end of the test code;
@@ -250,7 +266,7 @@
 // statement, which is not allowed; THREW means that the test statement
 // returned control by throwing an exception.  IN_PROGRESS means the test
 // has not yet concluded.
-// TODO(vladl@google.com): Unify names and possibly values for
+// FIXME: Unify names and possibly values for
 // AbortReason, DeathTestOutcome, and flag characters above.
 enum DeathTestOutcome { IN_PROGRESS, DIED, LIVED, RETURNED, THREW };
 
@@ -259,7 +275,7 @@
 // message is propagated back to the parent process.  Otherwise, the
 // message is simply printed to stderr.  In either case, the program
 // then exits with status 1.
-void DeathTestAbort(const std::string& message) {
+static void DeathTestAbort(const std::string& message) {
   // On a POSIX system, this function may be called from a threadsafe-style
   // death test child process, which operates on a very small stack.  Use
   // the heap for any additional non-minuscule memory requirements.
@@ -563,7 +579,12 @@
       break;
     case DIED:
       if (status_ok) {
+# if GTEST_USES_PCRE
+        // PCRE regexes support embedded NULs.
+        const bool matched = RE::PartialMatch(error_message, *regex());
+# else
         const bool matched = RE::PartialMatch(error_message.c_str(), *regex());
+# endif  // GTEST_USES_PCRE
         if (matched) {
           success = true;
         } else {
@@ -779,7 +800,200 @@
   set_spawned(true);
   return OVERSEE_TEST;
 }
-# else  // We are not on Windows.
+
+# elif GTEST_OS_FUCHSIA
+
+class FuchsiaDeathTest : public DeathTestImpl {
+ public:
+  FuchsiaDeathTest(const char* a_statement,
+                   const RE* a_regex,
+                   const char* file,
+                   int line)
+      : DeathTestImpl(a_statement, a_regex), file_(file), line_(line) {}
+  virtual ~FuchsiaDeathTest() {
+    zx_status_t status = zx_handle_close(child_process_);
+    GTEST_DEATH_TEST_CHECK_(status == ZX_OK);
+    status = zx_handle_close(port_);
+    GTEST_DEATH_TEST_CHECK_(status == ZX_OK);
+  }
+
+  // All of these virtual functions are inherited from DeathTest.
+  virtual int Wait();
+  virtual TestRole AssumeRole();
+
+ private:
+  // The name of the file in which the death test is located.
+  const char* const file_;
+  // The line number on which the death test is located.
+  const int line_;
+
+  zx_handle_t child_process_ = ZX_HANDLE_INVALID;
+  zx_handle_t port_ = ZX_HANDLE_INVALID;
+};
+
+// Utility class for accumulating command-line arguments.
+class Arguments {
+ public:
+  Arguments() {
+    args_.push_back(NULL);
+  }
+
+  ~Arguments() {
+    for (std::vector<char*>::iterator i = args_.begin(); i != args_.end();
+         ++i) {
+      free(*i);
+    }
+  }
+  void AddArgument(const char* argument) {
+    args_.insert(args_.end() - 1, posix::StrDup(argument));
+  }
+
+  template <typename Str>
+  void AddArguments(const ::std::vector<Str>& arguments) {
+    for (typename ::std::vector<Str>::const_iterator i = arguments.begin();
+         i != arguments.end();
+         ++i) {
+      args_.insert(args_.end() - 1, posix::StrDup(i->c_str()));
+    }
+  }
+  char* const* Argv() {
+    return &args_[0];
+  }
+
+  int size() {
+    return args_.size() - 1;
+  }
+
+ private:
+  std::vector<char*> args_;
+};
+
+// Waits for the child in a death test to exit, returning its exit
+// status, or 0 if no child process exists.  As a side effect, sets the
+// outcome data member.
+int FuchsiaDeathTest::Wait() {
+  if (!spawned())
+    return 0;
+
+  // Register to wait for the child process to terminate.
+  zx_status_t status_zx;
+  status_zx = zx_object_wait_async(child_process_,
+                                   port_,
+                                   0 /* key */,
+                                   ZX_PROCESS_TERMINATED,
+                                   ZX_WAIT_ASYNC_ONCE);
+  GTEST_DEATH_TEST_CHECK_(status_zx == ZX_OK);
+
+  // Wait for it to terminate, or an exception to be received.
+  zx_port_packet_t packet;
+  status_zx = zx_port_wait(port_, ZX_TIME_INFINITE, &packet);
+  GTEST_DEATH_TEST_CHECK_(status_zx == ZX_OK);
+
+  if (ZX_PKT_IS_EXCEPTION(packet.type)) {
+    // Process encountered an exception. Kill it directly rather than letting
+    // other handlers process the event.
+    status_zx = zx_task_kill(child_process_);
+    GTEST_DEATH_TEST_CHECK_(status_zx == ZX_OK);
+
+    // Now wait for |child_process_| to terminate.
+    zx_signals_t signals = 0;
+    status_zx = zx_object_wait_one(
+        child_process_, ZX_PROCESS_TERMINATED, ZX_TIME_INFINITE, &signals);
+    GTEST_DEATH_TEST_CHECK_(status_zx == ZX_OK);
+    GTEST_DEATH_TEST_CHECK_(signals & ZX_PROCESS_TERMINATED);
+  } else {
+    // Process terminated.
+    GTEST_DEATH_TEST_CHECK_(ZX_PKT_IS_SIGNAL_ONE(packet.type));
+    GTEST_DEATH_TEST_CHECK_(packet.signal.observed & ZX_PROCESS_TERMINATED);
+  }
+
+  ReadAndInterpretStatusByte();
+
+  zx_info_process_t buffer;
+  status_zx = zx_object_get_info(
+      child_process_,
+      ZX_INFO_PROCESS,
+      &buffer,
+      sizeof(buffer),
+      nullptr,
+      nullptr);
+  GTEST_DEATH_TEST_CHECK_(status_zx == ZX_OK);
+
+  GTEST_DEATH_TEST_CHECK_(buffer.exited);
+  set_status(buffer.return_code);
+  return status();
+}
+
+// The AssumeRole process for a Fuchsia death test.  It creates a child
+// process with the same executable as the current process to run the
+// death test.  The child process is given the --gtest_filter and
+// --gtest_internal_run_death_test flags such that it knows to run the
+// current death test only.
+DeathTest::TestRole FuchsiaDeathTest::AssumeRole() {
+  const UnitTestImpl* const impl = GetUnitTestImpl();
+  const InternalRunDeathTestFlag* const flag =
+      impl->internal_run_death_test_flag();
+  const TestInfo* const info = impl->current_test_info();
+  const int death_test_index = info->result()->death_test_count();
+
+  if (flag != NULL) {
+    // ParseInternalRunDeathTestFlag() has performed all the necessary
+    // processing.
+    set_write_fd(kFuchsiaReadPipeFd);
+    return EXECUTE_TEST;
+  }
+
+  CaptureStderr();
+  // Flush the log buffers since the log streams are shared with the child.
+  FlushInfoLog();
+
+  // Build the child process command line.
+  const std::string filter_flag =
+      std::string("--") + GTEST_FLAG_PREFIX_ + kFilterFlag + "="
+      + info->test_case_name() + "." + info->name();
+  const std::string internal_flag =
+      std::string("--") + GTEST_FLAG_PREFIX_ + kInternalRunDeathTestFlag + "="
+      + file_ + "|"
+      + StreamableToString(line_) + "|"
+      + StreamableToString(death_test_index);
+  Arguments args;
+  args.AddArguments(GetInjectableArgvs());
+  args.AddArgument(filter_flag.c_str());
+  args.AddArgument(internal_flag.c_str());
+
+  // Build the pipe for communication with the child.
+  zx_status_t status;
+  zx_handle_t child_pipe_handle;
+  uint32_t type;
+  status = fdio_pipe_half(&child_pipe_handle, &type);
+  GTEST_DEATH_TEST_CHECK_(status >= 0);
+  set_read_fd(status);
+
+  // Set the pipe handle for the child.
+  fdio_spawn_action_t add_handle_action = {};
+  add_handle_action.action = FDIO_SPAWN_ACTION_ADD_HANDLE;
+  add_handle_action.h.id = PA_HND(type, kFuchsiaReadPipeFd);
+  add_handle_action.h.handle = child_pipe_handle;
+
+  // Spawn the child process.
+  status = fdio_spawn_etc(ZX_HANDLE_INVALID, FDIO_SPAWN_CLONE_ALL,
+                          args.Argv()[0], args.Argv(), nullptr, 1,
+                          &add_handle_action, &child_process_, nullptr);
+  GTEST_DEATH_TEST_CHECK_(status == ZX_OK);
+
+  // Create an exception port and attach it to the |child_process_|, to allow
+  // us to suppress the system default exception handler from firing.
+  status = zx_port_create(0, &port_);
+  GTEST_DEATH_TEST_CHECK_(status == ZX_OK);
+  status = zx_task_bind_exception_port(
+      child_process_, port_, 0 /* key */, 0 /*options */);
+  GTEST_DEATH_TEST_CHECK_(status == ZX_OK);
+
+  set_spawned(true);
+  return OVERSEE_TEST;
+}
+
+#else  // We are neither on Windows, nor on Fuchsia.
 
 // ForkingDeathTest provides implementations for most of the abstract
 // methods of the DeathTest interface.  Only the AssumeRole method is
@@ -883,11 +1097,10 @@
       ForkingDeathTest(a_statement, a_regex), file_(file), line_(line) { }
   virtual TestRole AssumeRole();
  private:
-  static ::std::vector<testing::internal::string>
-  GetArgvsForDeathTestChildProcess() {
-    ::std::vector<testing::internal::string> args = GetInjectableArgvs();
+  static ::std::vector<std::string> GetArgvsForDeathTestChildProcess() {
+    ::std::vector<std::string> args = GetInjectableArgvs();
 #  if defined(GTEST_EXTRA_DEATH_TEST_COMMAND_LINE_ARGS_)
-    ::std::vector<testing::internal::string> extra_args =
+    ::std::vector<std::string> extra_args =
         GTEST_EXTRA_DEATH_TEST_COMMAND_LINE_ARGS_();
     args.insert(args.end(), extra_args.begin(), extra_args.end());
 #  endif  // defined(GTEST_EXTRA_DEATH_TEST_COMMAND_LINE_ARGS_)
@@ -986,6 +1199,7 @@
 }
 #  endif  // !GTEST_OS_QNX
 
+#  if GTEST_HAS_CLONE
 // Two utility routines that together determine the direction the stack
 // grows.
 // This could be accomplished more elegantly by a single recursive
@@ -995,20 +1209,22 @@
 // GTEST_NO_INLINE_ is required to prevent GCC 4.6 from inlining
 // StackLowerThanAddress into StackGrowsDown, which then doesn't give
 // correct answer.
-void StackLowerThanAddress(const void* ptr, bool* result) GTEST_NO_INLINE_;
-void StackLowerThanAddress(const void* ptr, bool* result) {
+static void StackLowerThanAddress(const void* ptr,
+                                  bool* result) GTEST_NO_INLINE_;
+static void StackLowerThanAddress(const void* ptr, bool* result) {
   int dummy;
   *result = (&dummy < ptr);
 }
 
 // Make sure AddressSanitizer does not tamper with the stack here.
 GTEST_ATTRIBUTE_NO_SANITIZE_ADDRESS_
-bool StackGrowsDown() {
+static bool StackGrowsDown() {
   int dummy;
   bool result;
   StackLowerThanAddress(&dummy, &result);
   return result;
 }
+#  endif  // GTEST_HAS_CLONE
 
 // Spawns a child process with the same executable as the current process in
 // a thread-safe manner and instructs it to run the death test.  The
@@ -1200,6 +1416,13 @@
     *test = new WindowsDeathTest(statement, regex, file, line);
   }
 
+# elif GTEST_OS_FUCHSIA
+
+  if (GTEST_FLAG(death_test_style) == "threadsafe" ||
+      GTEST_FLAG(death_test_style) == "fast") {
+    *test = new FuchsiaDeathTest(statement, regex, file, line);
+  }
+
 # else
 
   if (GTEST_FLAG(death_test_style) == "threadsafe") {
@@ -1224,7 +1447,7 @@
 // Recreates the pipe and event handles from the provided parameters,
 // signals the event, and returns a file descriptor wrapped around the pipe
 // handle. This function is called in the child process only.
-int GetStatusFileDescriptor(unsigned int parent_process_id,
+static int GetStatusFileDescriptor(unsigned int parent_process_id,
                             size_t write_handle_as_size_t,
                             size_t event_handle_as_size_t) {
   AutoHandle parent_process_handle(::OpenProcess(PROCESS_DUP_HANDLE,
@@ -1235,7 +1458,7 @@
                    StreamableToString(parent_process_id));
   }
 
-  // TODO(vladl@google.com): Replace the following check with a
+  // FIXME: Replace the following check with a
   // compile-time assertion when available.
   GTEST_CHECK_(sizeof(HANDLE) <= sizeof(size_t));
 
@@ -1243,7 +1466,7 @@
       reinterpret_cast<HANDLE>(write_handle_as_size_t);
   HANDLE dup_write_handle;
 
-  // The newly initialized handle is accessible only in in the parent
+  // The newly initialized handle is accessible only in the parent
   // process. To obtain one accessible within the child, we need to use
   // DuplicateHandle.
   if (!::DuplicateHandle(parent_process_handle.Get(), write_handle,
@@ -1320,6 +1543,16 @@
   write_fd = GetStatusFileDescriptor(parent_process_id,
                                      write_handle_as_size_t,
                                      event_handle_as_size_t);
+
+# elif GTEST_OS_FUCHSIA
+
+  if (fields.size() != 3
+      || !ParseNaturalNumber(fields[1], &line)
+      || !ParseNaturalNumber(fields[2], &index)) {
+    DeathTestAbort("Bad --gtest_internal_run_death_test flag: "
+        + GTEST_FLAG(internal_run_death_test));
+  }
+
 # else
 
   if (fields.size() != 4
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-filepath.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-filepath.cc
index 0292dc1..a7e65c0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-filepath.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-filepath.cc
@@ -26,14 +26,12 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Authors: keith.ray@gmail.com (Keith Ray)
 
-#include "gtest/gtest-message.h"
 #include "gtest/internal/gtest-filepath.h"
-#include "gtest/internal/gtest-port.h"
 
 #include <stdlib.h>
+#include "gtest/internal/gtest-port.h"
+#include "gtest/gtest-message.h"
 
 #if GTEST_OS_WINDOWS_MOBILE
 # include <windows.h>
@@ -48,6 +46,8 @@
 # include <climits>  // Some Linux distributions define PATH_MAX here.
 #endif  // GTEST_OS_WINDOWS_MOBILE
 
+#include "gtest/internal/gtest-string.h"
+
 #if GTEST_OS_WINDOWS
 # define GTEST_PATH_MAX_ _MAX_PATH
 #elif defined(PATH_MAX)
@@ -58,8 +58,6 @@
 # define GTEST_PATH_MAX_ _POSIX_PATH_MAX
 #endif  // GTEST_OS_WINDOWS
 
-#include "gtest/internal/gtest-string.h"
-
 namespace testing {
 namespace internal {
 
@@ -130,7 +128,7 @@
   return *this;
 }
 
-// Returns a pointer to the last occurence of a valid path separator in
+// Returns a pointer to the last occurrence of a valid path separator in
 // the FilePath. On Windows, for example, both '/' and '\' are valid path
 // separators. Returns NULL if no path separator was found.
 const char* FilePath::FindLastPathSeparator() const {
@@ -252,7 +250,7 @@
 // root directory per disk drive.)
 bool FilePath::IsRootDirectory() const {
 #if GTEST_OS_WINDOWS
-  // TODO(wan@google.com): on Windows a network share like
+  // FIXME: on Windows a network share like
   // \\server\share can be a root directory, although it cannot be the
   // current directory.  Handle this properly.
   return pathname_.length() == 3 && IsAbsolutePath();
@@ -352,7 +350,7 @@
 // Removes any redundant separators that might be in the pathname.
 // For example, "bar///foo" becomes "bar/foo". Does not eliminate other
 // redundancies that might be in a pathname involving "." or "..".
-// TODO(wan@google.com): handle Windows network shares (e.g. \\server\share).
+// FIXME: handle Windows network shares (e.g. \\server\share).
 void FilePath::Normalize() {
   if (pathname_.c_str() == NULL) {
     pathname_ = "";
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-internal-inl.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-internal-inl.h
index ed8a682..4790041 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-internal-inl.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-internal-inl.h
@@ -27,24 +27,13 @@
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
-// Utility functions and classes used by the Google C++ testing framework.
-//
-// Author: wan@google.com (Zhanyong Wan)
-//
+// Utility functions and classes used by the Google C++ testing framework.//
 // This file contains purely Google Test's internal implementation.  Please
 // DO NOT #INCLUDE IT IN A USER PROGRAM.
 
 #ifndef GTEST_SRC_GTEST_INTERNAL_INL_H_
 #define GTEST_SRC_GTEST_INTERNAL_INL_H_
 
-// GTEST_IMPLEMENTATION_ is defined to 1 iff the current translation unit is
-// part of Google Test's implementation; otherwise it's undefined.
-#if !GTEST_IMPLEMENTATION_
-// If this file is included from the user's code, just say no.
-# error "gtest-internal-inl.h is part of Google Test's internal implementation."
-# error "It must not be included except by Google Test itself."
-#endif  // GTEST_IMPLEMENTATION_
-
 #ifndef _WIN32_WCE
 # include <errno.h>
 #endif  // !_WIN32_WCE
@@ -67,9 +56,12 @@
 # include <windows.h>  // NOLINT
 #endif  // GTEST_OS_WINDOWS
 
-#include "gtest/gtest.h"  // NOLINT
+#include "gtest/gtest.h"
 #include "gtest/gtest-spi.h"
 
+GTEST_DISABLE_MSC_WARNINGS_PUSH_(4251 \
+/* class A needs to have dll-interface to be used by clients of class B */)
+
 namespace testing {
 
 // Declares the flags.
@@ -94,6 +86,7 @@
 const char kListTestsFlag[] = "list_tests";
 const char kOutputFlag[] = "output";
 const char kPrintTimeFlag[] = "print_time";
+const char kPrintUTF8Flag[] = "print_utf8";
 const char kRandomSeedFlag[] = "random_seed";
 const char kRepeatFlag[] = "repeat";
 const char kShuffleFlag[] = "shuffle";
@@ -174,6 +167,7 @@
     list_tests_ = GTEST_FLAG(list_tests);
     output_ = GTEST_FLAG(output);
     print_time_ = GTEST_FLAG(print_time);
+    print_utf8_ = GTEST_FLAG(print_utf8);
     random_seed_ = GTEST_FLAG(random_seed);
     repeat_ = GTEST_FLAG(repeat);
     shuffle_ = GTEST_FLAG(shuffle);
@@ -195,6 +189,7 @@
     GTEST_FLAG(list_tests) = list_tests_;
     GTEST_FLAG(output) = output_;
     GTEST_FLAG(print_time) = print_time_;
+    GTEST_FLAG(print_utf8) = print_utf8_;
     GTEST_FLAG(random_seed) = random_seed_;
     GTEST_FLAG(repeat) = repeat_;
     GTEST_FLAG(shuffle) = shuffle_;
@@ -216,6 +211,7 @@
   bool list_tests_;
   std::string output_;
   bool print_time_;
+  bool print_utf8_;
   internal::Int32 random_seed_;
   internal::Int32 repeat_;
   bool shuffle_;
@@ -426,7 +422,7 @@
   //                in the trace.
   //   skip_count - the number of top frames to be skipped; doesn't count
   //                against max_depth.
-  virtual string CurrentStackTrace(int max_depth, int skip_count) = 0;
+  virtual std::string CurrentStackTrace(int max_depth, int skip_count) = 0;
 
   // UponLeavingGTest() should be called immediately before Google Test calls
   // user code. It saves some information about the current stack that
@@ -446,10 +442,20 @@
  public:
   OsStackTraceGetter() {}
 
-  virtual string CurrentStackTrace(int max_depth, int skip_count);
+  virtual std::string CurrentStackTrace(int max_depth, int skip_count);
   virtual void UponLeavingGTest();
 
  private:
+#if GTEST_HAS_ABSL
+  Mutex mutex_;  // Protects all internal state.
+
+  // We save the stack frame below the frame that calls user code.
+  // We do this because the address of the frame immediately below
+  // the user code changes between the call to UponLeavingGTest()
+  // and any calls to the stack trace code from within the user code.
+  void* caller_frame_ = nullptr;
+#endif  // GTEST_HAS_ABSL
+
   GTEST_DISALLOW_COPY_AND_ASSIGN_(OsStackTraceGetter);
 };
 
@@ -664,13 +670,11 @@
                 tear_down_tc)->AddTestInfo(test_info);
   }
 
-#if GTEST_HAS_PARAM_TEST
   // Returns ParameterizedTestCaseRegistry object used to keep track of
   // value-parameterized tests and instantiate and register them.
   internal::ParameterizedTestCaseRegistry& parameterized_test_registry() {
     return parameterized_test_registry_;
   }
-#endif  // GTEST_HAS_PARAM_TEST
 
   // Sets the TestCase object for the test that's currently running.
   void set_current_test_case(TestCase* a_current_test_case) {
@@ -845,14 +849,12 @@
   // shuffled order.
   std::vector<int> test_case_indices_;
 
-#if GTEST_HAS_PARAM_TEST
   // ParameterizedTestRegistry object used to register value-parameterized
   // tests.
   internal::ParameterizedTestCaseRegistry parameterized_test_registry_;
 
   // Indicates whether RegisterParameterizedTests() has been called already.
   bool parameterized_tests_registered_;
-#endif  // GTEST_HAS_PARAM_TEST
 
   // Index of the last death test case registered.  Initially -1.
   int last_death_test_case_;
@@ -992,7 +994,7 @@
 
   const bool parse_success = *end == '\0' && errno == 0;
 
-  // TODO(vladl@google.com): Convert this to compile time assertion when it is
+  // FIXME: Convert this to compile time assertion when it is
   // available.
   GTEST_CHECK_(sizeof(Integer) <= sizeof(parsed));
 
@@ -1032,7 +1034,7 @@
 #if GTEST_CAN_STREAM_RESULTS_
 
 // Streams test results to the given port on the given host machine.
-class GTEST_API_ StreamingListener : public EmptyTestEventListener {
+class StreamingListener : public EmptyTestEventListener {
  public:
   // Abstract base class for writing strings to a socket.
   class AbstractSocketWriter {
@@ -1040,21 +1042,19 @@
     virtual ~AbstractSocketWriter() {}
 
     // Sends a string to the socket.
-    virtual void Send(const string& message) = 0;
+    virtual void Send(const std::string& message) = 0;
 
     // Closes the socket.
     virtual void CloseConnection() {}
 
     // Sends a string and a newline to the socket.
-    void SendLn(const string& message) {
-      Send(message + "\n");
-    }
+    void SendLn(const std::string& message) { Send(message + "\n"); }
   };
 
   // Concrete class for actually writing strings to a socket.
   class SocketWriter : public AbstractSocketWriter {
    public:
-    SocketWriter(const string& host, const string& port)
+    SocketWriter(const std::string& host, const std::string& port)
         : sockfd_(-1), host_name_(host), port_num_(port) {
       MakeConnection();
     }
@@ -1065,7 +1065,7 @@
     }
 
     // Sends a string to the socket.
-    virtual void Send(const string& message) {
+    virtual void Send(const std::string& message) {
       GTEST_CHECK_(sockfd_ != -1)
           << "Send() can be called only when there is a connection.";
 
@@ -1091,17 +1091,19 @@
     }
 
     int sockfd_;  // socket file descriptor
-    const string host_name_;
-    const string port_num_;
+    const std::string host_name_;
+    const std::string port_num_;
 
     GTEST_DISALLOW_COPY_AND_ASSIGN_(SocketWriter);
   };  // class SocketWriter
 
   // Escapes '=', '&', '%', and '\n' characters in str as "%xx".
-  static string UrlEncode(const char* str);
+  static std::string UrlEncode(const char* str);
 
-  StreamingListener(const string& host, const string& port)
-      : socket_writer_(new SocketWriter(host, port)) { Start(); }
+  StreamingListener(const std::string& host, const std::string& port)
+      : socket_writer_(new SocketWriter(host, port)) {
+    Start();
+  }
 
   explicit StreamingListener(AbstractSocketWriter* socket_writer)
       : socket_writer_(socket_writer) { Start(); }
@@ -1162,13 +1164,13 @@
 
  private:
   // Sends the given message and a newline to the socket.
-  void SendLn(const string& message) { socket_writer_->SendLn(message); }
+  void SendLn(const std::string& message) { socket_writer_->SendLn(message); }
 
   // Called at the start of streaming to notify the receiver what
   // protocol we are using.
   void Start() { SendLn("gtest_streaming_protocol_version=1.0"); }
 
-  string FormatBool(bool value) { return value ? "1" : "0"; }
+  std::string FormatBool(bool value) { return value ? "1" : "0"; }
 
   const scoped_ptr<AbstractSocketWriter> socket_writer_;
 
@@ -1180,4 +1182,6 @@
 }  // namespace internal
 }  // namespace testing
 
+GTEST_DISABLE_MSC_WARNINGS_POP_()  //  4251
+
 #endif  // GTEST_SRC_GTEST_INTERNAL_INL_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-port.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-port.cc
index e5bf3dd..fecb5d1 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-port.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-port.cc
@@ -26,8 +26,7 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: wan@google.com (Zhanyong Wan)
+
 
 #include "gtest/internal/gtest-port.h"
 
@@ -63,19 +62,16 @@
 # include <sys/types.h>
 #endif  // GTEST_OS_AIX
 
+#if GTEST_OS_FUCHSIA
+# include <zircon/process.h>
+# include <zircon/syscalls.h>
+#endif  // GTEST_OS_FUCHSIA
+
 #include "gtest/gtest-spi.h"
 #include "gtest/gtest-message.h"
 #include "gtest/internal/gtest-internal.h"
 #include "gtest/internal/gtest-string.h"
-
-// Indicates that this translation unit is part of Google Test's
-// implementation.  It must come before gtest-internal-inl.h is
-// included, or there will be a compiler error.  This trick exists to
-// prevent the accidental inclusion of gtest-internal-inl.h in the
-// user's code.
-#define GTEST_IMPLEMENTATION_ 1
 #include "src/gtest-internal-inl.h"
-#undef GTEST_IMPLEMENTATION_
 
 namespace testing {
 namespace internal {
@@ -93,7 +89,7 @@
 
 namespace {
 template <typename T>
-T ReadProcFileField(const string& filename, int field) {
+T ReadProcFileField(const std::string& filename, int field) {
   std::string dummy;
   std::ifstream file(filename.c_str());
   while (field-- > 0) {
@@ -107,7 +103,7 @@
 
 // Returns the number of active threads, or 0 when there is an error.
 size_t GetThreadCount() {
-  const string filename =
+  const std::string filename =
       (Message() << "/proc/" << getpid() << "/stat").GetString();
   return ReadProcFileField<int>(filename, 19);
 }
@@ -164,6 +160,25 @@
   }
 }
 
+#elif GTEST_OS_FUCHSIA
+
+size_t GetThreadCount() {
+  int dummy_buffer;
+  size_t avail;
+  zx_status_t status = zx_object_get_info(
+      zx_process_self(),
+      ZX_INFO_PROCESS_THREADS,
+      &dummy_buffer,
+      0,
+      nullptr,
+      &avail);
+  if (status == ZX_OK) {
+    return avail;
+  } else {
+    return 0;
+  }
+}
+
 #else
 
 size_t GetThreadCount() {
@@ -246,9 +261,9 @@
 Mutex::~Mutex() {
   // Static mutexes are leaked intentionally. It is not thread-safe to try
   // to clean them up.
-  // TODO(yukawa): Switch to Slim Reader/Writer (SRW) Locks, which requires
+  // FIXME: Switch to Slim Reader/Writer (SRW) Locks, which requires
   // nothing to clean it up but is available only on Vista and later.
-  // http://msdn.microsoft.com/en-us/library/windows/desktop/aa904937.aspx
+  // https://docs.microsoft.com/en-us/windows/desktop/Sync/slim-reader-writer--srw--locks
   if (type_ == kDynamic) {
     ::DeleteCriticalSection(critical_section_);
     delete critical_section_;
@@ -279,6 +294,43 @@
       << "The current thread is not holding the mutex @" << this;
 }
 
+namespace {
+
+// Use the RAII idiom to flag mem allocs that are intentionally never
+// deallocated. The motivation is to silence the false positive mem leaks
+// that are reported by the debug version of MS's CRT which can only detect
+// if an alloc is missing a matching deallocation.
+// Example:
+//    MemoryIsNotDeallocated memory_is_not_deallocated;
+//    critical_section_ = new CRITICAL_SECTION;
+//
+class MemoryIsNotDeallocated
+{
+ public:
+  MemoryIsNotDeallocated() : old_crtdbg_flag_(0) {
+#ifdef _MSC_VER
+    old_crtdbg_flag_ = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG);
+    // Set heap allocation block type to _IGNORE_BLOCK so that MS debug CRT
+    // doesn't report mem leak if there's no matching deallocation.
+    _CrtSetDbgFlag(old_crtdbg_flag_ & ~_CRTDBG_ALLOC_MEM_DF);
+#endif  //  _MSC_VER
+  }
+
+  ~MemoryIsNotDeallocated() {
+#ifdef _MSC_VER
+    // Restore the original _CRTDBG_ALLOC_MEM_DF flag
+    _CrtSetDbgFlag(old_crtdbg_flag_);
+#endif  //  _MSC_VER
+  }
+
+ private:
+  int old_crtdbg_flag_;
+
+  GTEST_DISALLOW_COPY_AND_ASSIGN_(MemoryIsNotDeallocated);
+};
+
+}  // namespace
+
 // Initializes owner_thread_id_ and critical_section_ in static mutexes.
 void Mutex::ThreadSafeLazyInit() {
   // Dynamic mutexes are initialized in the constructor.
@@ -289,7 +341,11 @@
         // If critical_section_init_phase_ was 0 before the exchange, we
         // are the first to test it and need to perform the initialization.
         owner_thread_id_ = 0;
-        critical_section_ = new CRITICAL_SECTION;
+        {
+          // Use RAII to flag that following mem alloc is never deallocated.
+          MemoryIsNotDeallocated memory_is_not_deallocated;
+          critical_section_ = new CRITICAL_SECTION;
+        }
         ::InitializeCriticalSection(critical_section_);
         // Updates the critical_section_init_phase_ to 2 to signal
         // initialization complete.
@@ -328,7 +384,7 @@
                              Notification* thread_can_start) {
     ThreadMainParam* param = new ThreadMainParam(runnable, thread_can_start);
     DWORD thread_id;
-    // TODO(yukawa): Consider to use _beginthreadex instead.
+    // FIXME: Consider to use _beginthreadex instead.
     HANDLE thread_handle = ::CreateThread(
         NULL,    // Default security.
         0,       // Default stack size.
@@ -496,7 +552,7 @@
                                  FALSE,
                                  thread_id);
     GTEST_CHECK_(thread != NULL);
-    // We need to to pass a valid thread ID pointer into CreateThread for it
+    // We need to pass a valid thread ID pointer into CreateThread for it
     // to work correctly under Win98.
     DWORD watcher_thread_id;
     HANDLE watcher_thread = ::CreateThread(
@@ -531,7 +587,8 @@
   // Returns map of thread local instances.
   static ThreadIdToThreadLocals* GetThreadLocalsMapLocked() {
     mutex_.AssertHeld();
-    static ThreadIdToThreadLocals* map = new ThreadIdToThreadLocals;
+    MemoryIsNotDeallocated memory_is_not_deallocated;
+    static ThreadIdToThreadLocals* map = new ThreadIdToThreadLocals();
     return map;
   }
 
@@ -671,7 +728,7 @@
 }
 
 // Helper function used by ValidateRegex() to format error messages.
-std::string FormatRegexSyntaxError(const char* regex, int index) {
+static std::string FormatRegexSyntaxError(const char* regex, int index) {
   return (Message() << "Syntax error at index " << index
           << " in simple regular expression \"" << regex << "\": ").GetString();
 }
@@ -680,7 +737,7 @@
 // otherwise returns true.
 bool ValidateRegex(const char* regex) {
   if (regex == NULL) {
-    // TODO(wan@google.com): fix the source file location in the
+    // FIXME: fix the source file location in the
     // assertion failures to match where the regex is used in user
     // code.
     ADD_FAILURE() << "NULL is not a valid simple regular expression.";
@@ -923,9 +980,10 @@
     posix::Abort();
   }
 }
+
 // Disable Microsoft deprecation warnings for POSIX functions called from
 // this class (creat, dup, dup2, and close)
-GTEST_DISABLE_MSC_WARNINGS_PUSH_(4996)
+GTEST_DISABLE_MSC_DEPRECATED_PUSH_()
 
 #if GTEST_HAS_STREAM_REDIRECTION
 
@@ -1009,13 +1067,14 @@
   GTEST_DISALLOW_COPY_AND_ASSIGN_(CapturedStream);
 };
 
-GTEST_DISABLE_MSC_WARNINGS_POP_()
+GTEST_DISABLE_MSC_DEPRECATED_POP_()
 
 static CapturedStream* g_captured_stderr = NULL;
 static CapturedStream* g_captured_stdout = NULL;
 
 // Starts capturing an output stream (stdout/stderr).
-void CaptureStream(int fd, const char* stream_name, CapturedStream** stream) {
+static void CaptureStream(int fd, const char* stream_name,
+                          CapturedStream** stream) {
   if (*stream != NULL) {
     GTEST_LOG_(FATAL) << "Only one " << stream_name
                       << " capturer can exist at a time.";
@@ -1024,7 +1083,7 @@
 }
 
 // Stops capturing the output stream and returns the captured string.
-std::string GetCapturedStream(CapturedStream** captured_stream) {
+static std::string GetCapturedStream(CapturedStream** captured_stream) {
   const std::string content = (*captured_stream)->GetCapturedString();
 
   delete *captured_stream;
@@ -1055,23 +1114,9 @@
 
 #endif  // GTEST_HAS_STREAM_REDIRECTION
 
-std::string TempDir() {
-#if GTEST_OS_WINDOWS_MOBILE
-  return "\\temp\\";
-#elif GTEST_OS_WINDOWS
-  const char* temp_dir = posix::GetEnv("TEMP");
-  if (temp_dir == NULL || temp_dir[0] == '\0')
-    return "\\temp\\";
-  else if (temp_dir[strlen(temp_dir) - 1] == '\\')
-    return temp_dir;
-  else
-    return std::string(temp_dir) + "\\";
-#elif GTEST_OS_LINUX_ANDROID
-  return "/sdcard/";
-#else
-  return "/tmp/";
-#endif  // GTEST_OS_WINDOWS_MOBILE
-}
+
+
+
 
 size_t GetFileSize(FILE* file) {
   fseek(file, 0, SEEK_END);
@@ -1101,22 +1146,36 @@
 }
 
 #if GTEST_HAS_DEATH_TEST
+static const std::vector<std::string>* g_injected_test_argvs = NULL;  // Owned.
 
-static const ::std::vector<testing::internal::string>* g_injected_test_argvs =
-                                        NULL;  // Owned.
-
-void SetInjectableArgvs(const ::std::vector<testing::internal::string>* argvs) {
-  if (g_injected_test_argvs != argvs)
-    delete g_injected_test_argvs;
-  g_injected_test_argvs = argvs;
-}
-
-const ::std::vector<testing::internal::string>& GetInjectableArgvs() {
+std::vector<std::string> GetInjectableArgvs() {
   if (g_injected_test_argvs != NULL) {
     return *g_injected_test_argvs;
   }
   return GetArgvs();
 }
+
+void SetInjectableArgvs(const std::vector<std::string>* new_argvs) {
+  if (g_injected_test_argvs != new_argvs) delete g_injected_test_argvs;
+  g_injected_test_argvs = new_argvs;
+}
+
+void SetInjectableArgvs(const std::vector<std::string>& new_argvs) {
+  SetInjectableArgvs(
+      new std::vector<std::string>(new_argvs.begin(), new_argvs.end()));
+}
+
+#if GTEST_HAS_GLOBAL_STRING
+void SetInjectableArgvs(const std::vector< ::string>& new_argvs) {
+  SetInjectableArgvs(
+      new std::vector<std::string>(new_argvs.begin(), new_argvs.end()));
+}
+#endif  // GTEST_HAS_GLOBAL_STRING
+
+void ClearInjectableArgvs() {
+  delete g_injected_test_argvs;
+  g_injected_test_argvs = NULL;
+}
 #endif  // GTEST_HAS_DEATH_TEST
 
 #if GTEST_OS_WINDOWS_MOBILE
@@ -1191,11 +1250,12 @@
 bool BoolFromGTestEnv(const char* flag, bool default_value) {
 #if defined(GTEST_GET_BOOL_FROM_ENV_)
   return GTEST_GET_BOOL_FROM_ENV_(flag, default_value);
-#endif  // defined(GTEST_GET_BOOL_FROM_ENV_)
+#else
   const std::string env_var = FlagToEnvVar(flag);
   const char* const string_value = posix::GetEnv(env_var.c_str());
   return string_value == NULL ?
       default_value : strcmp(string_value, "0") != 0;
+#endif  // defined(GTEST_GET_BOOL_FROM_ENV_)
 }
 
 // Reads and returns a 32-bit integer stored in the environment
@@ -1204,7 +1264,7 @@
 Int32 Int32FromGTestEnv(const char* flag, Int32 default_value) {
 #if defined(GTEST_GET_INT32_FROM_ENV_)
   return GTEST_GET_INT32_FROM_ENV_(flag, default_value);
-#endif  // defined(GTEST_GET_INT32_FROM_ENV_)
+#else
   const std::string env_var = FlagToEnvVar(flag);
   const char* const string_value = posix::GetEnv(env_var.c_str());
   if (string_value == NULL) {
@@ -1222,37 +1282,36 @@
   }
 
   return result;
+#endif  // defined(GTEST_GET_INT32_FROM_ENV_)
+}
+
+// As a special case for the 'output' flag, if GTEST_OUTPUT is not
+// set, we look for XML_OUTPUT_FILE, which is set by the Bazel build
+// system.  The value of XML_OUTPUT_FILE is a filename without the
+// "xml:" prefix of GTEST_OUTPUT.
+// Note that this is meant to be called at the call site so it does
+// not check that the flag is 'output'
+// In essence this checks an env variable called XML_OUTPUT_FILE
+// and if it is set we prepend "xml:" to its value, if it not set we return ""
+std::string OutputFlagAlsoCheckEnvVar(){
+  std::string default_value_for_output_flag = "";
+  const char* xml_output_file_env = posix::GetEnv("XML_OUTPUT_FILE");
+  if (NULL != xml_output_file_env) {
+    default_value_for_output_flag = std::string("xml:") + xml_output_file_env;
+  }
+  return default_value_for_output_flag;
 }
 
 // Reads and returns the string environment variable corresponding to
 // the given flag; if it's not set, returns default_value.
-std::string StringFromGTestEnv(const char* flag, const char* default_value) {
+const char* StringFromGTestEnv(const char* flag, const char* default_value) {
 #if defined(GTEST_GET_STRING_FROM_ENV_)
   return GTEST_GET_STRING_FROM_ENV_(flag, default_value);
-#endif  // defined(GTEST_GET_STRING_FROM_ENV_)
+#else
   const std::string env_var = FlagToEnvVar(flag);
-  const char* value = posix::GetEnv(env_var.c_str());
-  if (value != NULL) {
-    return value;
-  }
-
-  // As a special case for the 'output' flag, if GTEST_OUTPUT is not
-  // set, we look for XML_OUTPUT_FILE, which is set by the Bazel build
-  // system.  The value of XML_OUTPUT_FILE is a filename without the
-  // "xml:" prefix of GTEST_OUTPUT.
-  //
-  // The net priority order after flag processing is thus:
-  //   --gtest_output command line flag
-  //   GTEST_OUTPUT environment variable
-  //   XML_OUTPUT_FILE environment variable
-  //   'default_value'
-  if (strcmp(flag, "output") == 0) {
-    value = posix::GetEnv("XML_OUTPUT_FILE");
-    if (value != NULL) {
-      return std::string("xml:") + value;
-    }
-  }
-  return default_value;
+  const char* const value = posix::GetEnv(env_var.c_str());
+  return value == NULL ? default_value : value;
+#endif  // defined(GTEST_GET_STRING_FROM_ENV_)
 }
 
 }  // namespace internal
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-printers.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-printers.cc
index a2df412..b502254 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-printers.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-printers.cc
@@ -26,10 +26,9 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: wan@google.com (Zhanyong Wan)
 
-// Google Test - The Google C++ Testing Framework
+
+// Google Test - The Google C++ Testing and Mocking Framework
 //
 // This file implements a universal value printer that can print a
 // value of any type T:
@@ -43,12 +42,13 @@
 // defines Foo.
 
 #include "gtest/gtest-printers.h"
-#include <ctype.h>
 #include <stdio.h>
+#include <cctype>
 #include <cwchar>
 #include <ostream>  // NOLINT
 #include <string>
 #include "gtest/internal/gtest-port.h"
+#include "src/gtest-internal-inl.h"
 
 namespace testing {
 
@@ -89,7 +89,7 @@
   // If the object size is bigger than kThreshold, we'll have to omit
   // some details by printing only the first and the last kChunkSize
   // bytes.
-  // TODO(wan): let the user control the threshold using a flag.
+  // FIXME: let the user control the threshold using a flag.
   if (count < kThreshold) {
     PrintByteSegmentInObjectTo(obj_bytes, 0, count, os);
   } else {
@@ -123,7 +123,7 @@
 // Depending on the value of a char (or wchar_t), we print it in one
 // of three formats:
 //   - as is if it's a printable ASCII (e.g. 'a', '2', ' '),
-//   - as a hexidecimal escape sequence (e.g. '\x7F'), or
+//   - as a hexadecimal escape sequence (e.g. '\x7F'), or
 //   - as a special escape sequence (e.g. '\r', '\n').
 enum CharFormat {
   kAsIs,
@@ -180,7 +180,10 @@
         *os << static_cast<char>(c);
         return kAsIs;
       } else {
-        *os << "\\x" + String::FormatHexInt(static_cast<UnsignedChar>(c));
+        ostream::fmtflags flags = os->flags();
+        *os << "\\x" << std::hex << std::uppercase
+            << static_cast<int>(static_cast<UnsignedChar>(c));
+        os->flags(flags);
         return kHexEscape;
       }
   }
@@ -227,7 +230,7 @@
     return;
   *os << " (" << static_cast<int>(c);
 
-  // For more convenience, we print c's code again in hexidecimal,
+  // For more convenience, we print c's code again in hexadecimal,
   // unless c was already printed in the form '\x##' or the code is in
   // [1, 9].
   if (format == kHexEscape || (1 <= c && c <= 9)) {
@@ -259,11 +262,12 @@
 GTEST_ATTRIBUTE_NO_SANITIZE_MEMORY_
 GTEST_ATTRIBUTE_NO_SANITIZE_ADDRESS_
 GTEST_ATTRIBUTE_NO_SANITIZE_THREAD_
-static void PrintCharsAsStringTo(
+static CharFormat PrintCharsAsStringTo(
     const CharType* begin, size_t len, ostream* os) {
   const char* const kQuoteBegin = sizeof(CharType) == 1 ? "\"" : "L\"";
   *os << kQuoteBegin;
   bool is_previous_hex = false;
+  CharFormat print_format = kAsIs;
   for (size_t index = 0; index < len; ++index) {
     const CharType cur = begin[index];
     if (is_previous_hex && IsXDigit(cur)) {
@@ -273,8 +277,13 @@
       *os << "\" " << kQuoteBegin;
     }
     is_previous_hex = PrintAsStringLiteralTo(cur, os) == kHexEscape;
+    // Remember if any characters required hex escaping.
+    if (is_previous_hex) {
+      print_format = kHexEscape;
+    }
   }
   *os << "\"";
+  return print_format;
 }
 
 // Prints a (const) char/wchar_t array of 'len' elements, starting at address
@@ -339,20 +348,95 @@
     *os << "NULL";
   } else {
     *os << ImplicitCast_<const void*>(s) << " pointing to ";
-    PrintCharsAsStringTo(s, std::wcslen(s), os);
+    PrintCharsAsStringTo(s, wcslen(s), os);
   }
 }
 #endif  // wchar_t is native
 
+namespace {
+
+bool ContainsUnprintableControlCodes(const char* str, size_t length) {
+  const unsigned char *s = reinterpret_cast<const unsigned char *>(str);
+
+  for (size_t i = 0; i < length; i++) {
+    unsigned char ch = *s++;
+    if (std::iscntrl(ch)) {
+        switch (ch) {
+        case '\t':
+        case '\n':
+        case '\r':
+          break;
+        default:
+          return true;
+        }
+      }
+  }
+  return false;
+}
+
+bool IsUTF8TrailByte(unsigned char t) { return 0x80 <= t && t<= 0xbf; }
+
+bool IsValidUTF8(const char* str, size_t length) {
+  const unsigned char *s = reinterpret_cast<const unsigned char *>(str);
+
+  for (size_t i = 0; i < length;) {
+    unsigned char lead = s[i++];
+
+    if (lead <= 0x7f) {
+      continue;  // single-byte character (ASCII) 0..7F
+    }
+    if (lead < 0xc2) {
+      return false;  // trail byte or non-shortest form
+    } else if (lead <= 0xdf && (i + 1) <= length && IsUTF8TrailByte(s[i])) {
+      ++i;  // 2-byte character
+    } else if (0xe0 <= lead && lead <= 0xef && (i + 2) <= length &&
+               IsUTF8TrailByte(s[i]) &&
+               IsUTF8TrailByte(s[i + 1]) &&
+               // check for non-shortest form and surrogate
+               (lead != 0xe0 || s[i] >= 0xa0) &&
+               (lead != 0xed || s[i] < 0xa0)) {
+      i += 2;  // 3-byte character
+    } else if (0xf0 <= lead && lead <= 0xf4 && (i + 3) <= length &&
+               IsUTF8TrailByte(s[i]) &&
+               IsUTF8TrailByte(s[i + 1]) &&
+               IsUTF8TrailByte(s[i + 2]) &&
+               // check for non-shortest form
+               (lead != 0xf0 || s[i] >= 0x90) &&
+               (lead != 0xf4 || s[i] < 0x90)) {
+      i += 3;  // 4-byte character
+    } else {
+      return false;
+    }
+  }
+  return true;
+}
+
+void ConditionalPrintAsText(const char* str, size_t length, ostream* os) {
+  if (!ContainsUnprintableControlCodes(str, length) &&
+      IsValidUTF8(str, length)) {
+    *os << "\n    As Text: \"" << str << "\"";
+  }
+}
+
+}  // anonymous namespace
+
 // Prints a ::string object.
 #if GTEST_HAS_GLOBAL_STRING
 void PrintStringTo(const ::string& s, ostream* os) {
-  PrintCharsAsStringTo(s.data(), s.size(), os);
+  if (PrintCharsAsStringTo(s.data(), s.size(), os) == kHexEscape) {
+    if (GTEST_FLAG(print_utf8)) {
+      ConditionalPrintAsText(s.data(), s.size(), os);
+    }
+  }
 }
 #endif  // GTEST_HAS_GLOBAL_STRING
 
 void PrintStringTo(const ::std::string& s, ostream* os) {
-  PrintCharsAsStringTo(s.data(), s.size(), os);
+  if (PrintCharsAsStringTo(s.data(), s.size(), os) == kHexEscape) {
+    if (GTEST_FLAG(print_utf8)) {
+      ConditionalPrintAsText(s.data(), s.size(), os);
+    }
+  }
 }
 
 // Prints a ::wstring object.
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-test-part.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-test-part.cc
index fb0e354..c88860d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-test-part.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-test-part.cc
@@ -26,21 +26,12 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
 //
-// Author: mheule@google.com (Markus Heule)
-//
-// The Google C++ Testing Framework (Google Test)
+// The Google C++ Testing and Mocking Framework (Google Test)
 
 #include "gtest/gtest-test-part.h"
-
-// Indicates that this translation unit is part of Google Test's
-// implementation.  It must come before gtest-internal-inl.h is
-// included, or there will be a compiler error.  This trick exists to
-// prevent the accidental inclusion of gtest-internal-inl.h in the
-// user's code.
-#define GTEST_IMPLEMENTATION_ 1
 #include "src/gtest-internal-inl.h"
-#undef GTEST_IMPLEMENTATION_
 
 namespace testing {
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-typed-test.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-typed-test.cc
index df1eef4..1dc2ad3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-typed-test.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest-typed-test.cc
@@ -26,10 +26,10 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-//
-// Author: wan@google.com (Zhanyong Wan)
+
 
 #include "gtest/gtest-typed-test.h"
+
 #include "gtest/gtest.h"
 
 namespace testing {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest.cc
index 5a8932c..96b07c6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest.cc
@@ -26,10 +26,9 @@
 // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
 //
-// Author: wan@google.com (Zhanyong Wan)
-//
-// The Google C++ Testing Framework (Google Test)
+// The Google C++ Testing and Mocking Framework (Google Test)
 
 #include "gtest/gtest.h"
 #include "gtest/internal/custom/gtest.h"
@@ -55,7 +54,7 @@
 
 #if GTEST_OS_LINUX
 
-// TODO(kenton@google.com): Use autoconf to detect availability of
+// FIXME: Use autoconf to detect availability of
 // gettimeofday().
 # define GTEST_HAS_GETTIMEOFDAY_ 1
 
@@ -94,9 +93,9 @@
 
 # if GTEST_OS_WINDOWS_MINGW
 // MinGW has gettimeofday() but not _ftime64().
-// TODO(kenton@google.com): Use autoconf to detect availability of
+// FIXME: Use autoconf to detect availability of
 //   gettimeofday().
-// TODO(kenton@google.com): There are other ways to get the time on
+// FIXME: There are other ways to get the time on
 //   Windows, like GetTickCount() or GetSystemTimeAsFileTime().  MinGW
 //   supports these.  consider using them instead.
 #  define GTEST_HAS_GETTIMEOFDAY_ 1
@@ -111,7 +110,7 @@
 #else
 
 // Assume other platforms have gettimeofday().
-// TODO(kenton@google.com): Use autoconf to detect availability of
+// FIXME: Use autoconf to detect availability of
 //   gettimeofday().
 # define GTEST_HAS_GETTIMEOFDAY_ 1
 
@@ -133,19 +132,25 @@
 # include <sys/types.h>  // NOLINT
 #endif
 
-// Indicates that this translation unit is part of Google Test's
-// implementation.  It must come before gtest-internal-inl.h is
-// included, or there will be a compiler error.  This trick is to
-// prevent a user from accidentally including gtest-internal-inl.h in
-// his code.
-#define GTEST_IMPLEMENTATION_ 1
 #include "src/gtest-internal-inl.h"
-#undef GTEST_IMPLEMENTATION_
 
 #if GTEST_OS_WINDOWS
 # define vsnprintf _vsnprintf
 #endif  // GTEST_OS_WINDOWS
 
+#if GTEST_OS_MAC
+#ifndef GTEST_OS_IOS
+#include <crt_externs.h>
+#endif
+#endif
+
+#if GTEST_HAS_ABSL
+#include "absl/debugging/failure_signal_handler.h"
+#include "absl/debugging/stacktrace.h"
+#include "absl/debugging/symbolize.h"
+#include "absl/strings/str_cat.h"
+#endif  // GTEST_HAS_ABSL
+
 namespace testing {
 
 using internal::CountIf;
@@ -167,8 +172,10 @@
 // A test filter that matches everything.
 static const char kUniversalFilter[] = "*";
 
-// The default output file for XML output.
-static const char kDefaultOutputFile[] = "test_detail.xml";
+// The default output format.
+static const char kDefaultOutputFormat[] = "xml";
+// The default output file.
+static const char kDefaultOutputFile[] = "test_detail";
 
 // The environment variable name for the test shard index.
 static const char kTestShardIndex[] = "GTEST_SHARD_INDEX";
@@ -187,15 +194,31 @@
 // specified on the command line.
 bool g_help_flag = false;
 
+// Utility function to open a file for writing.
+static FILE* OpenFileForWriting(const std::string& output_file) {
+  FILE* fileout = NULL;
+  FilePath output_file_path(output_file);
+  FilePath output_dir(output_file_path.RemoveFileName());
+
+  if (output_dir.CreateDirectoriesRecursively()) {
+    fileout = posix::FOpen(output_file.c_str(), "w");
+  }
+  if (fileout == NULL) {
+    GTEST_LOG_(FATAL) << "Unable to open file \"" << output_file << "\"";
+  }
+  return fileout;
+}
+
 }  // namespace internal
 
+// Bazel passes in the argument to '--test_filter' via the TESTBRIDGE_TEST_ONLY
+// environment variable.
 static const char* GetDefaultFilter() {
-#ifdef GTEST_TEST_FILTER_ENV_VAR_
-  const char* const testbridge_test_only = getenv(GTEST_TEST_FILTER_ENV_VAR_);
+  const char* const testbridge_test_only =
+      internal::posix::GetEnv("TESTBRIDGE_TEST_ONLY");
   if (testbridge_test_only != NULL) {
     return testbridge_test_only;
   }
-#endif  // GTEST_TEST_FILTER_ENV_VAR_
   return kUniversalFilter;
 }
 
@@ -232,15 +255,28 @@
     "exclude).  A test is run if it matches one of the positive "
     "patterns and does not match any of the negative patterns.");
 
+GTEST_DEFINE_bool_(
+    install_failure_signal_handler,
+    internal::BoolFromGTestEnv("install_failure_signal_handler", false),
+    "If true and supported on the current platform, " GTEST_NAME_ " should "
+    "install a signal handler that dumps debugging information when fatal "
+    "signals are raised.");
+
 GTEST_DEFINE_bool_(list_tests, false,
                    "List all tests without running them.");
 
+// The net priority order after flag processing is thus:
+//   --gtest_output command line flag
+//   GTEST_OUTPUT environment variable
+//   XML_OUTPUT_FILE environment variable
+//   ''
 GTEST_DEFINE_string_(
     output,
-    internal::StringFromGTestEnv("output", ""),
-    "A format (currently must be \"xml\"), optionally followed "
-    "by a colon and an output file name or directory. A directory "
-    "is indicated by a trailing pathname separator. "
+    internal::StringFromGTestEnv("output",
+      internal::OutputFlagAlsoCheckEnvVar().c_str()),
+    "A format (defaults to \"xml\" but can be specified to be \"json\"), "
+    "optionally followed by a colon and an output file name or directory. "
+    "A directory is indicated by a trailing pathname separator. "
     "Examples: \"xml:filename.xml\", \"xml::directoryname/\". "
     "If a directory is specified, output files will be created "
     "within that directory, with file-names based on the test "
@@ -253,6 +289,12 @@
     "True iff " GTEST_NAME_
     " should display elapsed time in text output.");
 
+GTEST_DEFINE_bool_(
+    print_utf8,
+    internal::BoolFromGTestEnv("print_utf8", true),
+    "True iff " GTEST_NAME_
+    " prints UTF8 characters as text.");
+
 GTEST_DEFINE_int32_(
     random_seed,
     internal::Int32FromGTestEnv("random_seed", 0),
@@ -294,7 +336,7 @@
     internal::BoolFromGTestEnv("throw_on_failure", false),
     "When this flag is specified, a failed assertion will throw an exception "
     "if exceptions are enabled or exit the program with a non-zero code "
-    "otherwise.");
+    "otherwise. For use with an external test framework.");
 
 #if GTEST_USE_OWN_FLAGFILE_FLAG_
 GTEST_DEFINE_string_(
@@ -308,10 +350,10 @@
 // Generates a random number from [0, range), using a Linear
 // Congruential Generator (LCG).  Crashes if 'range' is 0 or greater
 // than kMaxRange.
-GTEST_ATTRIBUTE_NO_SANITIZE_UNSIGNED_OVERFLOW_
 UInt32 Random::Generate(UInt32 range) {
   // These constants are the same as are used in glibc's rand(3).
-  state_ = (1103515245U*state_ + 12345U) % kMaxRange;
+  // Use wider types than necessary to prevent unsigned overflow diagnostics.
+  state_ = static_cast<UInt32>(1103515245ULL*state_ + 12345U) % kMaxRange;
 
   GTEST_CHECK_(range > 0)
       << "Cannot generate a number in the range [0, 0).";
@@ -385,12 +427,15 @@
 GTEST_API_ GTEST_DEFINE_STATIC_MUTEX_(g_linked_ptr_mutex);
 
 // A copy of all command line arguments.  Set by InitGoogleTest().
-::std::vector<testing::internal::string> g_argvs;
+static ::std::vector<std::string> g_argvs;
 
-const ::std::vector<testing::internal::string>& GetArgvs() {
+::std::vector<std::string> GetArgvs() {
 #if defined(GTEST_CUSTOM_GET_ARGVS_)
-  return GTEST_CUSTOM_GET_ARGVS_();
-#else  // defined(GTEST_CUSTOM_GET_ARGVS_)
+  // GTEST_CUSTOM_GET_ARGVS_() may return a container of std::string or
+  // ::string. This code converts it to the appropriate type.
+  const auto& custom = GTEST_CUSTOM_GET_ARGVS_();
+  return ::std::vector<std::string>(custom.begin(), custom.end());
+#else   // defined(GTEST_CUSTOM_GET_ARGVS_)
   return g_argvs;
 #endif  // defined(GTEST_CUSTOM_GET_ARGVS_)
 }
@@ -414,8 +459,6 @@
 // Returns the output format, or "" for normal printed output.
 std::string UnitTestOptions::GetOutputFormat() {
   const char* const gtest_output_flag = GTEST_FLAG(output).c_str();
-  if (gtest_output_flag == NULL) return std::string("");
-
   const char* const colon = strchr(gtest_output_flag, ':');
   return (colon == NULL) ?
       std::string(gtest_output_flag) :
@@ -426,19 +469,22 @@
 // was explicitly specified.
 std::string UnitTestOptions::GetAbsolutePathToOutputFile() {
   const char* const gtest_output_flag = GTEST_FLAG(output).c_str();
-  if (gtest_output_flag == NULL)
-    return "";
+
+  std::string format = GetOutputFormat();
+  if (format.empty())
+    format = std::string(kDefaultOutputFormat);
 
   const char* const colon = strchr(gtest_output_flag, ':');
   if (colon == NULL)
-    return internal::FilePath::ConcatPaths(
+    return internal::FilePath::MakeFileName(
         internal::FilePath(
             UnitTest::GetInstance()->original_working_dir()),
-        internal::FilePath(kDefaultOutputFile)).string();
+        internal::FilePath(kDefaultOutputFile), 0,
+        format.c_str()).string();
 
   internal::FilePath output_name(colon + 1);
   if (!output_name.IsAbsolutePath())
-    // TODO(wan@google.com): on Windows \some\path is not an absolute
+    // FIXME: on Windows \some\path is not an absolute
     // path (as its meaning depends on the current drive), yet the
     // following logic for turning it into an absolute path is wrong.
     // Fix it.
@@ -629,12 +675,12 @@
 // This predicate-formatter checks that 'results' contains a test part
 // failure of the given type and that the failure message contains the
 // given substring.
-AssertionResult HasOneFailure(const char* /* results_expr */,
-                              const char* /* type_expr */,
-                              const char* /* substr_expr */,
-                              const TestPartResultArray& results,
-                              TestPartResult::Type type,
-                              const string& substr) {
+static AssertionResult HasOneFailure(const char* /* results_expr */,
+                                     const char* /* type_expr */,
+                                     const char* /* substr_expr */,
+                                     const TestPartResultArray& results,
+                                     TestPartResult::Type type,
+                                     const std::string& substr) {
   const std::string expected(type == TestPartResult::kFatalFailure ?
                         "1 fatal failure" :
                         "1 non-fatal failure");
@@ -668,13 +714,10 @@
 // The constructor of SingleFailureChecker remembers where to look up
 // test part results, what type of failure we expect, and what
 // substring the failure message should contain.
-SingleFailureChecker:: SingleFailureChecker(
-    const TestPartResultArray* results,
-    TestPartResult::Type type,
-    const string& substr)
-    : results_(results),
-      type_(type),
-      substr_(substr) {}
+SingleFailureChecker::SingleFailureChecker(const TestPartResultArray* results,
+                                           TestPartResult::Type type,
+                                           const std::string& substr)
+    : results_(results), type_(type), substr_(substr) {}
 
 // The destructor of SingleFailureChecker verifies that the given
 // TestPartResultArray contains exactly one failure that has the given
@@ -815,7 +858,7 @@
   SYSTEMTIME now_systime;
   FILETIME now_filetime;
   ULARGE_INTEGER now_int64;
-  // TODO(kenton@google.com): Shouldn't this just use
+  // FIXME: Shouldn't this just use
   //   GetSystemTimeAsFileTime()?
   GetSystemTime(&now_systime);
   if (SystemTimeToFileTime(&now_systime, &now_filetime)) {
@@ -831,11 +874,11 @@
 
   // MSVC 8 deprecates _ftime64(), so we want to suppress warning 4996
   // (deprecated function) there.
-  // TODO(kenton@google.com): Use GetTickCount()?  Or use
+  // FIXME: Use GetTickCount()?  Or use
   //   SystemTimeToFileTime()
-  GTEST_DISABLE_MSC_WARNINGS_PUSH_(4996)
+  GTEST_DISABLE_MSC_DEPRECATED_PUSH_()
   _ftime64(&now);
-  GTEST_DISABLE_MSC_WARNINGS_POP_()
+  GTEST_DISABLE_MSC_DEPRECATED_POP_()
 
   return static_cast<TimeInMillis>(now.time) * 1000 + now.millitm;
 #elif GTEST_HAS_GETTIMEOFDAY_
@@ -1172,7 +1215,7 @@
   // Print a unified diff header for one hunk.
   // The format is
   //   "@@ -<left_start>,<left_length> +<right_start>,<right_length> @@"
-  // where the left/right parts are ommitted if unnecessary.
+  // where the left/right parts are omitted if unnecessary.
   void PrintHeader(std::ostream* ss) const {
     *ss << "@@ ";
     if (removes_) {
@@ -1316,13 +1359,14 @@
                           const std::string& rhs_value,
                           bool ignoring_case) {
   Message msg;
-  msg << "      Expected: " << lhs_expression;
+  msg << "Expected equality of these values:";
+  msg << "\n  " << lhs_expression;
   if (lhs_value != lhs_expression) {
-    msg << "\n      Which is: " << lhs_value;
+    msg << "\n    Which is: " << lhs_value;
   }
-  msg << "\nTo be equal to: " << rhs_expression;
+  msg << "\n  " << rhs_expression;
   if (rhs_value != rhs_expression) {
-    msg << "\n      Which is: " << rhs_value;
+    msg << "\n    Which is: " << rhs_value;
   }
 
   if (ignoring_case) {
@@ -1369,7 +1413,7 @@
   const double diff = fabs(val1 - val2);
   if (diff <= abs_error) return AssertionSuccess();
 
-  // TODO(wan): do not print the value of an expression if it's
+  // FIXME: do not print the value of an expression if it's
   // already a literal.
   return AssertionFailure()
       << "The difference between " << expr1 << " and " << expr2
@@ -1664,7 +1708,7 @@
 AssertionResult HRESULTFailureHelper(const char* expr,
                                      const char* expected,
                                      long hr) {  // NOLINT
-# if GTEST_OS_WINDOWS_MOBILE
+# if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_WINDOWS_TV_TITLE
 
   // Windows CE doesn't support FormatMessage.
   const char error_text[] = "";
@@ -1721,7 +1765,7 @@
 // Utility functions for encoding Unicode text (wide strings) in
 // UTF-8.
 
-// A Unicode code-point can have upto 21 bits, and is encoded in UTF-8
+// A Unicode code-point can have up to 21 bits, and is encoded in UTF-8
 // like this:
 //
 // Code-point length   Encoding
@@ -1785,7 +1829,7 @@
   return str;
 }
 
-// The following two functions only make sense if the the system
+// The following two functions only make sense if the system
 // uses UTF-16 for wide string encoding. All supported systems
 // with 16 bit wchar_t (Windows, Cygwin, Symbian OS) do use UTF-16.
 
@@ -2097,13 +2141,8 @@
 
 // The list of reserved attributes used in the <testcase> element of XML output.
 static const char* const kReservedTestCaseAttributes[] = {
-  "classname",
-  "name",
-  "status",
-  "time",
-  "type_param",
-  "value_param"
-};
+    "classname",  "name",        "status", "time",
+    "type_param", "value_param", "file",   "line"};
 
 template <int kSize>
 std::vector<std::string> ArrayAsVector(const char* const (&array)[kSize]) {
@@ -2139,8 +2178,9 @@
   return word_list.GetString();
 }
 
-bool ValidateTestPropertyName(const std::string& property_name,
-                              const std::vector<std::string>& reserved_names) {
+static bool ValidateTestPropertyName(
+    const std::string& property_name,
+    const std::vector<std::string>& reserved_names) {
   if (std::find(reserved_names.begin(), reserved_names.end(), property_name) !=
           reserved_names.end()) {
     ADD_FAILURE() << "Reserved key used in RecordProperty(): " << property_name
@@ -2437,6 +2477,8 @@
 #if GTEST_HAS_EXCEPTIONS
     try {
       return HandleSehExceptionsInMethodIfSupported(object, method, location);
+    } catch (const AssertionException&) {  // NOLINT
+      // This failure was reported already.
     } catch (const internal::GoogleTestFailureException&) {  // NOLINT
       // This exception type can only be thrown by a failed Google
       // Test assertion with the intention of letting another testing
@@ -2558,7 +2600,6 @@
   return test_info;
 }
 
-#if GTEST_HAS_PARAM_TEST
 void ReportInvalidTestCaseType(const char* test_case_name,
                                CodeLocation code_location) {
   Message errors;
@@ -2572,13 +2613,10 @@
       << "probably rename one of the classes to put the tests into different\n"
       << "test cases.";
 
-  fprintf(stderr, "%s %s",
-          FormatFileLocation(code_location.file.c_str(),
-                             code_location.line).c_str(),
-          errors.GetString().c_str());
+  GTEST_LOG_(ERROR) << FormatFileLocation(code_location.file.c_str(),
+                                          code_location.line)
+                    << " " << errors.GetString();
 }
-#endif  // GTEST_HAS_PARAM_TEST
-
 }  // namespace internal
 
 namespace {
@@ -2616,12 +2654,10 @@
 // and INSTANTIATE_TEST_CASE_P into regular tests and registers those.
 // This will be done just once during the program runtime.
 void UnitTestImpl::RegisterParameterizedTests() {
-#if GTEST_HAS_PARAM_TEST
   if (!parameterized_tests_registered_) {
     parameterized_test_registry_.RegisterTests();
     parameterized_tests_registered_ = true;
   }
-#endif
 }
 
 }  // namespace internal
@@ -2649,18 +2685,18 @@
       factory_, &internal::TestFactoryBase::CreateTest,
       "the test fixture's constructor");
 
-  // Runs the test only if the test object was created and its
-  // constructor didn't generate a fatal failure.
-  if ((test != NULL) && !Test::HasFatalFailure()) {
+  // Runs the test if the constructor didn't generate a fatal failure.
+  // Note that the test object will not be null.
+  if (!Test::HasFatalFailure()) {
     // This doesn't throw as all user code that can throw are wrapped into
     // exception handling code.
     test->Run();
   }
 
-  // Deletes the test object.
-  impl->os_stack_trace_getter()->UponLeavingGTest();
-  internal::HandleExceptionsInMethodIfSupported(
-      test, &Test::DeleteSelf_, "the test fixture's destructor");
+  // Deletes the test object.
+  impl->os_stack_trace_getter()->UponLeavingGTest();
+  internal::HandleExceptionsInMethodIfSupported(
+      test, &Test::DeleteSelf_, "the test fixture's destructor");
 
   result_.set_elapsed_time(internal::GetTimeInMillis() - start);
 
@@ -2886,10 +2922,10 @@
 };
 
 #if GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE && \
-    !GTEST_OS_WINDOWS_PHONE && !GTEST_OS_WINDOWS_RT
+    !GTEST_OS_WINDOWS_PHONE && !GTEST_OS_WINDOWS_RT && !GTEST_OS_WINDOWS_MINGW
 
 // Returns the character attribute for the given color.
-WORD GetColorAttribute(GTestColor color) {
+static WORD GetColorAttribute(GTestColor color) {
   switch (color) {
     case COLOR_RED:    return FOREGROUND_RED;
     case COLOR_GREEN:  return FOREGROUND_GREEN;
@@ -2898,11 +2934,42 @@
   }
 }
 
+static int GetBitOffset(WORD color_mask) {
+  if (color_mask == 0) return 0;
+
+  int bitOffset = 0;
+  while ((color_mask & 1) == 0) {
+    color_mask >>= 1;
+    ++bitOffset;
+  }
+  return bitOffset;
+}
+
+static WORD GetNewColor(GTestColor color, WORD old_color_attrs) {
+  // Let's reuse the BG
+  static const WORD background_mask = BACKGROUND_BLUE | BACKGROUND_GREEN |
+                                      BACKGROUND_RED | BACKGROUND_INTENSITY;
+  static const WORD foreground_mask = FOREGROUND_BLUE | FOREGROUND_GREEN |
+                                      FOREGROUND_RED | FOREGROUND_INTENSITY;
+  const WORD existing_bg = old_color_attrs & background_mask;
+
+  WORD new_color =
+      GetColorAttribute(color) | existing_bg | FOREGROUND_INTENSITY;
+  static const int bg_bitOffset = GetBitOffset(background_mask);
+  static const int fg_bitOffset = GetBitOffset(foreground_mask);
+
+  if (((new_color & background_mask) >> bg_bitOffset) ==
+      ((new_color & foreground_mask) >> fg_bitOffset)) {
+    new_color ^= FOREGROUND_INTENSITY;  // invert intensity
+  }
+  return new_color;
+}
+
 #else
 
 // Returns the ANSI color code for the given color.  COLOR_DEFAULT is
 // an invalid input.
-const char* GetAnsiColorCode(GTestColor color) {
+static const char* GetAnsiColorCode(GTestColor color) {
   switch (color) {
     case COLOR_RED:     return "1";
     case COLOR_GREEN:   return "2";
@@ -2918,7 +2985,7 @@
   const char* const gtest_color = GTEST_FLAG(color).c_str();
 
   if (String::CaseInsensitiveCStringEquals(gtest_color, "auto")) {
-#if GTEST_OS_WINDOWS
+#if GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MINGW
     // On Windows the TERM variable is usually not set, but the
     // console there does support colors.
     return stdout_is_tty;
@@ -2954,7 +3021,7 @@
 // cannot simply emit special characters and have the terminal change colors.
 // This routine must actually emit the characters rather than return a string
 // that would be colored when printed, as can be done on Linux.
-void ColoredPrintf(GTestColor color, const char* fmt, ...) {
+static void ColoredPrintf(GTestColor color, const char* fmt, ...) {
   va_list args;
   va_start(args, fmt);
 
@@ -2975,20 +3042,21 @@
   }
 
 #if GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE && \
-    !GTEST_OS_WINDOWS_PHONE && !GTEST_OS_WINDOWS_RT
+    !GTEST_OS_WINDOWS_PHONE && !GTEST_OS_WINDOWS_RT && !GTEST_OS_WINDOWS_MINGW
   const HANDLE stdout_handle = GetStdHandle(STD_OUTPUT_HANDLE);
 
   // Gets the current text color.
   CONSOLE_SCREEN_BUFFER_INFO buffer_info;
   GetConsoleScreenBufferInfo(stdout_handle, &buffer_info);
   const WORD old_color_attrs = buffer_info.wAttributes;
+  const WORD new_color = GetNewColor(color, old_color_attrs);
 
   // We need to flush the stream buffers into the console before each
   // SetConsoleTextAttribute call lest it affect the text that is already
   // printed but has not yet reached the console.
   fflush(stdout);
-  SetConsoleTextAttribute(stdout_handle,
-                          GetColorAttribute(color) | FOREGROUND_INTENSITY);
+  SetConsoleTextAttribute(stdout_handle, new_color);
+
   vprintf(fmt, args);
 
   fflush(stdout);
@@ -3002,12 +3070,12 @@
   va_end(args);
 }
 
-// Text printed in Google Test's text output and --gunit_list_tests
+// Text printed in Google Test's text output and --gtest_list_tests
 // output to label the type parameter and value parameter for a test.
 static const char kTypeParamLabel[] = "TypeParam";
 static const char kValueParamLabel[] = "GetParam()";
 
-void PrintFullTestCommentIfPresent(const TestInfo& test_info) {
+static void PrintFullTestCommentIfPresent(const TestInfo& test_info) {
   const char* const type_param = test_info.type_param();
   const char* const value_param = test_info.value_param();
 
@@ -3278,7 +3346,7 @@
   listeners_.push_back(listener);
 }
 
-// TODO(vladl@google.com): Factor the search functionality into Vector::Find.
+// FIXME: Factor the search functionality into Vector::Find.
 TestEventListener* TestEventRepeater::Release(TestEventListener *listener) {
   for (size_t i = 0; i < listeners_.size(); ++i) {
     if (listeners_[i] == listener) {
@@ -3352,6 +3420,11 @@
   explicit XmlUnitTestResultPrinter(const char* output_file);
 
   virtual void OnTestIterationEnd(const UnitTest& unit_test, int iteration);
+  void ListTestsMatchingFilter(const std::vector<TestCase*>& test_cases);
+
+  // Prints an XML summary of all unit tests.
+  static void PrintXmlTestsList(std::ostream* stream,
+                                const std::vector<TestCase*>& test_cases);
 
  private:
   // Is c a whitespace character that is normalized to a space character
@@ -3413,6 +3486,11 @@
   // to delimit this attribute from prior attributes.
   static std::string TestPropertiesAsXmlAttributes(const TestResult& result);
 
+  // Streams an XML representation of the test properties of a TestResult
+  // object.
+  static void OutputXmlTestProperties(std::ostream* stream,
+                                      const TestResult& result);
+
   // The output file.
   const std::string output_file_;
 
@@ -3422,46 +3500,30 @@
 // Creates a new XmlUnitTestResultPrinter.
 XmlUnitTestResultPrinter::XmlUnitTestResultPrinter(const char* output_file)
     : output_file_(output_file) {
-  if (output_file_.c_str() == NULL || output_file_.empty()) {
-    fprintf(stderr, "XML output file may not be null\n");
-    fflush(stderr);
-    exit(EXIT_FAILURE);
+  if (output_file_.empty()) {
+    GTEST_LOG_(FATAL) << "XML output file may not be null";
   }
 }
 
 // Called after the unit test ends.
 void XmlUnitTestResultPrinter::OnTestIterationEnd(const UnitTest& unit_test,
                                                   int /*iteration*/) {
-  FILE* xmlout = NULL;
-  FilePath output_file(output_file_);
-  FilePath output_dir(output_file.RemoveFileName());
-
-  if (output_dir.CreateDirectoriesRecursively()) {
-    xmlout = posix::FOpen(output_file_.c_str(), "w");
-  }
-  if (xmlout == NULL) {
-    // TODO(wan): report the reason of the failure.
-    //
-    // We don't do it for now as:
-    //
-    //   1. There is no urgent need for it.
-    //   2. It's a bit involved to make the errno variable thread-safe on
-    //      all three operating systems (Linux, Windows, and Mac OS).
-    //   3. To interpret the meaning of errno in a thread-safe way,
-    //      we need the strerror_r() function, which is not available on
-    //      Windows.
-    fprintf(stderr,
-            "Unable to open file \"%s\"\n",
-            output_file_.c_str());
-    fflush(stderr);
-    exit(EXIT_FAILURE);
-  }
+  FILE* xmlout = OpenFileForWriting(output_file_);
   std::stringstream stream;
   PrintXmlUnitTest(&stream, unit_test);
   fprintf(xmlout, "%s", StringStreamToString(&stream).c_str());
   fclose(xmlout);
 }
 
+void XmlUnitTestResultPrinter::ListTestsMatchingFilter(
+    const std::vector<TestCase*>& test_cases) {
+  FILE* xmlout = OpenFileForWriting(output_file_);
+  std::stringstream stream;
+  PrintXmlTestsList(&stream, test_cases);
+  fprintf(xmlout, "%s", StringStreamToString(&stream).c_str());
+  fclose(xmlout);
+}
+
 // Returns an XML-escaped copy of the input string str.  If is_attribute
 // is true, the text is meant to appear as an attribute value, and
 // normalizable whitespace is preserved by replacing it with character
@@ -3472,7 +3534,7 @@
 // module will consist of ordinary English text.
 // If this module is ever modified to produce version 1.1 XML output,
 // most invalid characters can be retained using character references.
-// TODO(wan): It might be nice to have a minimally invasive, human-readable
+// FIXME: It might be nice to have a minimally invasive, human-readable
 // escaping scheme for invalid characters, rather than dropping them.
 std::string XmlUnitTestResultPrinter::EscapeXml(
     const std::string& str, bool is_attribute) {
@@ -3533,6 +3595,7 @@
 
 // The following routines generate an XML representation of a UnitTest
 // object.
+// GOOGLETEST_CM0009 DO NOT DELETE
 //
 // This is how Google Test concepts map to the DTD:
 //
@@ -3622,13 +3685,17 @@
 }
 
 // Prints an XML representation of a TestInfo object.
-// TODO(wan): There is also value in printing properties with the plain printer.
+// FIXME: There is also value in printing properties with the plain printer.
 void XmlUnitTestResultPrinter::OutputXmlTestInfo(::std::ostream* stream,
                                                  const char* test_case_name,
                                                  const TestInfo& test_info) {
   const TestResult& result = *test_info.result();
   const std::string kTestcase = "testcase";
 
+  if (test_info.is_in_another_shard()) {
+    return;
+  }
+
   *stream << "    <testcase";
   OutputXmlAttribute(stream, kTestcase, "name", test_info.name());
 
@@ -3639,13 +3706,19 @@
   if (test_info.type_param() != NULL) {
     OutputXmlAttribute(stream, kTestcase, "type_param", test_info.type_param());
   }
+  if (GTEST_FLAG(list_tests)) {
+    OutputXmlAttribute(stream, kTestcase, "file", test_info.file());
+    OutputXmlAttribute(stream, kTestcase, "line",
+                       StreamableToString(test_info.line()));
+    *stream << " />\n";
+    return;
+  }
 
   OutputXmlAttribute(stream, kTestcase, "status",
                      test_info.should_run() ? "run" : "notrun");
   OutputXmlAttribute(stream, kTestcase, "time",
                      FormatTimeInMillisAsSeconds(result.elapsed_time()));
   OutputXmlAttribute(stream, kTestcase, "classname", test_case_name);
-  *stream << TestPropertiesAsXmlAttributes(result);
 
   int failures = 0;
   for (int i = 0; i < result.total_part_count(); ++i) {
@@ -3654,22 +3727,28 @@
       if (++failures == 1) {
         *stream << ">\n";
       }
-      const string location = internal::FormatCompilerIndependentFileLocation(
-          part.file_name(), part.line_number());
-      const string summary = location + "\n" + part.summary();
+      const std::string location =
+          internal::FormatCompilerIndependentFileLocation(part.file_name(),
+                                                          part.line_number());
+      const std::string summary = location + "\n" + part.summary();
       *stream << "      <failure message=\""
               << EscapeXmlAttribute(summary.c_str())
               << "\" type=\"\">";
-      const string detail = location + "\n" + part.message();
+      const std::string detail = location + "\n" + part.message();
       OutputXmlCDataSection(stream, RemoveInvalidXmlCharacters(detail).c_str());
       *stream << "</failure>\n";
     }
   }
 
-  if (failures == 0)
+  if (failures == 0 && result.test_property_count() == 0) {
     *stream << " />\n";
-  else
+  } else {
+    if (failures == 0) {
+      *stream << ">\n";
+    }
+    OutputXmlTestProperties(stream, result);
     *stream << "    </testcase>\n";
+  }
 }
 
 // Prints an XML representation of a TestCase object
@@ -3680,17 +3759,18 @@
   OutputXmlAttribute(stream, kTestsuite, "name", test_case.name());
   OutputXmlAttribute(stream, kTestsuite, "tests",
                      StreamableToString(test_case.reportable_test_count()));
-  OutputXmlAttribute(stream, kTestsuite, "failures",
-                     StreamableToString(test_case.failed_test_count()));
-  OutputXmlAttribute(
-      stream, kTestsuite, "disabled",
-      StreamableToString(test_case.reportable_disabled_test_count()));
-  OutputXmlAttribute(stream, kTestsuite, "errors", "0");
-  OutputXmlAttribute(stream, kTestsuite, "time",
-                     FormatTimeInMillisAsSeconds(test_case.elapsed_time()));
-  *stream << TestPropertiesAsXmlAttributes(test_case.ad_hoc_test_result())
-          << ">\n";
-
+  if (!GTEST_FLAG(list_tests)) {
+    OutputXmlAttribute(stream, kTestsuite, "failures",
+                       StreamableToString(test_case.failed_test_count()));
+    OutputXmlAttribute(
+        stream, kTestsuite, "disabled",
+        StreamableToString(test_case.reportable_disabled_test_count()));
+    OutputXmlAttribute(stream, kTestsuite, "errors", "0");
+    OutputXmlAttribute(stream, kTestsuite, "time",
+                       FormatTimeInMillisAsSeconds(test_case.elapsed_time()));
+    *stream << TestPropertiesAsXmlAttributes(test_case.ad_hoc_test_result());
+  }
+  *stream << ">\n";
   for (int i = 0; i < test_case.total_test_count(); ++i) {
     if (test_case.GetTestInfo(i)->is_reportable())
       OutputXmlTestInfo(stream, test_case.name(), *test_case.GetTestInfo(i));
@@ -3724,7 +3804,6 @@
     OutputXmlAttribute(stream, kTestsuites, "random_seed",
                        StreamableToString(unit_test.random_seed()));
   }
-
   *stream << TestPropertiesAsXmlAttributes(unit_test.ad_hoc_test_result());
 
   OutputXmlAttribute(stream, kTestsuites, "name", "AllTests");
@@ -3737,6 +3816,28 @@
   *stream << "</" << kTestsuites << ">\n";
 }
 
+void XmlUnitTestResultPrinter::PrintXmlTestsList(
+    std::ostream* stream, const std::vector<TestCase*>& test_cases) {
+  const std::string kTestsuites = "testsuites";
+
+  *stream << "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
+  *stream << "<" << kTestsuites;
+
+  int total_tests = 0;
+  for (size_t i = 0; i < test_cases.size(); ++i) {
+    total_tests += test_cases[i]->total_test_count();
+  }
+  OutputXmlAttribute(stream, kTestsuites, "tests",
+                     StreamableToString(total_tests));
+  OutputXmlAttribute(stream, kTestsuites, "name", "AllTests");
+  *stream << ">\n";
+
+  for (size_t i = 0; i < test_cases.size(); ++i) {
+    PrintXmlTestCase(stream, *test_cases[i]);
+  }
+  *stream << "</" << kTestsuites << ">\n";
+}
+
 // Produces a string representing the test properties in a result as space
 // delimited XML attributes based on the property key="value" pairs.
 std::string XmlUnitTestResultPrinter::TestPropertiesAsXmlAttributes(
@@ -3750,8 +3851,390 @@
   return attributes.GetString();
 }
 
+void XmlUnitTestResultPrinter::OutputXmlTestProperties(
+    std::ostream* stream, const TestResult& result) {
+  const std::string kProperties = "properties";
+  const std::string kProperty = "property";
+
+  if (result.test_property_count() <= 0) {
+    return;
+  }
+
+  *stream << "<" << kProperties << ">\n";
+  for (int i = 0; i < result.test_property_count(); ++i) {
+    const TestProperty& property = result.GetTestProperty(i);
+    *stream << "<" << kProperty;
+    *stream << " name=\"" << EscapeXmlAttribute(property.key()) << "\"";
+    *stream << " value=\"" << EscapeXmlAttribute(property.value()) << "\"";
+    *stream << "/>\n";
+  }
+  *stream << "</" << kProperties << ">\n";
+}
+
 // End XmlUnitTestResultPrinter
 
+// This class generates a JSON output file.
+class JsonUnitTestResultPrinter : public EmptyTestEventListener {
+ public:
+  explicit JsonUnitTestResultPrinter(const char* output_file);
+
+  virtual void OnTestIterationEnd(const UnitTest& unit_test, int iteration);
+
+  // Prints a JSON summary of all unit tests.
+  static void PrintJsonTestList(::std::ostream* stream,
+                                const std::vector<TestCase*>& test_cases);
+
+ private:
+  // Returns a JSON-escaped copy of the input string str.
+  static std::string EscapeJson(const std::string& str);
+
+  // Verifies that the given attribute belongs to the given element and
+  // streams the attribute as JSON.
+  static void OutputJsonKey(std::ostream* stream,
+                            const std::string& element_name,
+                            const std::string& name,
+                            const std::string& value,
+                            const std::string& indent,
+                            bool comma = true);
+  static void OutputJsonKey(std::ostream* stream,
+                            const std::string& element_name,
+                            const std::string& name,
+                            int value,
+                            const std::string& indent,
+                            bool comma = true);
+
+  // Streams a JSON representation of a TestInfo object.
+  static void OutputJsonTestInfo(::std::ostream* stream,
+                                 const char* test_case_name,
+                                 const TestInfo& test_info);
+
+  // Prints a JSON representation of a TestCase object
+  static void PrintJsonTestCase(::std::ostream* stream,
+                                const TestCase& test_case);
+
+  // Prints a JSON summary of unit_test to output stream out.
+  static void PrintJsonUnitTest(::std::ostream* stream,
+                                const UnitTest& unit_test);
+
+  // Produces a string representing the test properties in a result as
+  // a JSON dictionary.
+  static std::string TestPropertiesAsJson(const TestResult& result,
+                                          const std::string& indent);
+
+  // The output file.
+  const std::string output_file_;
+
+  GTEST_DISALLOW_COPY_AND_ASSIGN_(JsonUnitTestResultPrinter);
+};
+
+// Creates a new JsonUnitTestResultPrinter.
+JsonUnitTestResultPrinter::JsonUnitTestResultPrinter(const char* output_file)
+    : output_file_(output_file) {
+  if (output_file_.empty()) {
+    GTEST_LOG_(FATAL) << "JSON output file may not be null";
+  }
+}
+
+void JsonUnitTestResultPrinter::OnTestIterationEnd(const UnitTest& unit_test,
+                                                  int /*iteration*/) {
+  FILE* jsonout = OpenFileForWriting(output_file_);
+  std::stringstream stream;
+  PrintJsonUnitTest(&stream, unit_test);
+  fprintf(jsonout, "%s", StringStreamToString(&stream).c_str());
+  fclose(jsonout);
+}
+
+// Returns a JSON-escaped copy of the input string str.
+std::string JsonUnitTestResultPrinter::EscapeJson(const std::string& str) {
+  Message m;
+
+  for (size_t i = 0; i < str.size(); ++i) {
+    const char ch = str[i];
+    switch (ch) {
+      case '\\':
+      case '"':
+      case '/':
+        m << '\\' << ch;
+        break;
+      case '\b':
+        m << "\\b";
+        break;
+      case '\t':
+        m << "\\t";
+        break;
+      case '\n':
+        m << "\\n";
+        break;
+      case '\f':
+        m << "\\f";
+        break;
+      case '\r':
+        m << "\\r";
+        break;
+      default:
+        if (ch < ' ') {
+          m << "\\u00" << String::FormatByte(static_cast<unsigned char>(ch));
+        } else {
+          m << ch;
+        }
+        break;
+    }
+  }
+
+  return m.GetString();
+}
+
+// The following routines generate a JSON representation of a UnitTest
+// object.
+
+// Formats the given time in milliseconds as seconds.
+static std::string FormatTimeInMillisAsDuration(TimeInMillis ms) {
+  ::std::stringstream ss;
+  ss << (static_cast<double>(ms) * 1e-3) << "s";
+  return ss.str();
+}
+
+// Converts the given epoch time in milliseconds to a date string in the
+// RFC3339 format, without the timezone information.
+static std::string FormatEpochTimeInMillisAsRFC3339(TimeInMillis ms) {
+  struct tm time_struct;
+  if (!PortableLocaltime(static_cast<time_t>(ms / 1000), &time_struct))
+    return "";
+  // YYYY-MM-DDThh:mm:ssZ
+  return StreamableToString(time_struct.tm_year + 1900) + "-" +
+      String::FormatIntWidth2(time_struct.tm_mon + 1) + "-" +
+      String::FormatIntWidth2(time_struct.tm_mday) + "T" +
+      String::FormatIntWidth2(time_struct.tm_hour) + ":" +
+      String::FormatIntWidth2(time_struct.tm_min) + ":" +
+      String::FormatIntWidth2(time_struct.tm_sec) + "Z";
+}
+
+static inline std::string Indent(int width) {
+  return std::string(width, ' ');
+}
+
+void JsonUnitTestResultPrinter::OutputJsonKey(
+    std::ostream* stream,
+    const std::string& element_name,
+    const std::string& name,
+    const std::string& value,
+    const std::string& indent,
+    bool comma) {
+  const std::vector<std::string>& allowed_names =
+      GetReservedAttributesForElement(element_name);
+
+  GTEST_CHECK_(std::find(allowed_names.begin(), allowed_names.end(), name) !=
+                   allowed_names.end())
+      << "Key \"" << name << "\" is not allowed for value \"" << element_name
+      << "\".";
+
+  *stream << indent << "\"" << name << "\": \"" << EscapeJson(value) << "\"";
+  if (comma)
+    *stream << ",\n";
+}
+
+void JsonUnitTestResultPrinter::OutputJsonKey(
+    std::ostream* stream,
+    const std::string& element_name,
+    const std::string& name,
+    int value,
+    const std::string& indent,
+    bool comma) {
+  const std::vector<std::string>& allowed_names =
+      GetReservedAttributesForElement(element_name);
+
+  GTEST_CHECK_(std::find(allowed_names.begin(), allowed_names.end(), name) !=
+                   allowed_names.end())
+      << "Key \"" << name << "\" is not allowed for value \"" << element_name
+      << "\".";
+
+  *stream << indent << "\"" << name << "\": " << StreamableToString(value);
+  if (comma)
+    *stream << ",\n";
+}
+
+// Prints a JSON representation of a TestInfo object.
+void JsonUnitTestResultPrinter::OutputJsonTestInfo(::std::ostream* stream,
+                                                   const char* test_case_name,
+                                                   const TestInfo& test_info) {
+  const TestResult& result = *test_info.result();
+  const std::string kTestcase = "testcase";
+  const std::string kIndent = Indent(10);
+
+  *stream << Indent(8) << "{\n";
+  OutputJsonKey(stream, kTestcase, "name", test_info.name(), kIndent);
+
+  if (test_info.value_param() != NULL) {
+    OutputJsonKey(stream, kTestcase, "value_param",
+                  test_info.value_param(), kIndent);
+  }
+  if (test_info.type_param() != NULL) {
+    OutputJsonKey(stream, kTestcase, "type_param", test_info.type_param(),
+                  kIndent);
+  }
+  if (GTEST_FLAG(list_tests)) {
+    OutputJsonKey(stream, kTestcase, "file", test_info.file(), kIndent);
+    OutputJsonKey(stream, kTestcase, "line", test_info.line(), kIndent, false);
+    *stream << "\n" << Indent(8) << "}";
+    return;
+  }
+
+  OutputJsonKey(stream, kTestcase, "status",
+                test_info.should_run() ? "RUN" : "NOTRUN", kIndent);
+  OutputJsonKey(stream, kTestcase, "time",
+                FormatTimeInMillisAsDuration(result.elapsed_time()), kIndent);
+  OutputJsonKey(stream, kTestcase, "classname", test_case_name, kIndent, false);
+  *stream << TestPropertiesAsJson(result, kIndent);
+
+  int failures = 0;
+  for (int i = 0; i < result.total_part_count(); ++i) {
+    const TestPartResult& part = result.GetTestPartResult(i);
+    if (part.failed()) {
+      *stream << ",\n";
+      if (++failures == 1) {
+        *stream << kIndent << "\"" << "failures" << "\": [\n";
+      }
+      const std::string location =
+          internal::FormatCompilerIndependentFileLocation(part.file_name(),
+                                                          part.line_number());
+      const std::string message = EscapeJson(location + "\n" + part.message());
+      *stream << kIndent << "  {\n"
+              << kIndent << "    \"failure\": \"" << message << "\",\n"
+              << kIndent << "    \"type\": \"\"\n"
+              << kIndent << "  }";
+    }
+  }
+
+  if (failures > 0)
+    *stream << "\n" << kIndent << "]";
+  *stream << "\n" << Indent(8) << "}";
+}
+
+// Prints a JSON representation of a TestCase object.
+void JsonUnitTestResultPrinter::PrintJsonTestCase(std::ostream* stream,
+                                                  const TestCase& test_case) {
+  const std::string kTestsuite = "testsuite";
+  const std::string kIndent = Indent(6);
+
+  *stream << Indent(4) << "{\n";
+  OutputJsonKey(stream, kTestsuite, "name", test_case.name(), kIndent);
+  OutputJsonKey(stream, kTestsuite, "tests", test_case.reportable_test_count(),
+                kIndent);
+  if (!GTEST_FLAG(list_tests)) {
+    OutputJsonKey(stream, kTestsuite, "failures", test_case.failed_test_count(),
+                  kIndent);
+    OutputJsonKey(stream, kTestsuite, "disabled",
+                  test_case.reportable_disabled_test_count(), kIndent);
+    OutputJsonKey(stream, kTestsuite, "errors", 0, kIndent);
+    OutputJsonKey(stream, kTestsuite, "time",
+                  FormatTimeInMillisAsDuration(test_case.elapsed_time()),
+                  kIndent, false);
+    *stream << TestPropertiesAsJson(test_case.ad_hoc_test_result(), kIndent)
+            << ",\n";
+  }
+
+  *stream << kIndent << "\"" << kTestsuite << "\": [\n";
+
+  bool comma = false;
+  for (int i = 0; i < test_case.total_test_count(); ++i) {
+    if (test_case.GetTestInfo(i)->is_reportable()) {
+      if (comma) {
+        *stream << ",\n";
+      } else {
+        comma = true;
+      }
+      OutputJsonTestInfo(stream, test_case.name(), *test_case.GetTestInfo(i));
+    }
+  }
+  *stream << "\n" << kIndent << "]\n" << Indent(4) << "}";
+}
+
+// Prints a JSON summary of unit_test to output stream out.
+void JsonUnitTestResultPrinter::PrintJsonUnitTest(std::ostream* stream,
+                                                  const UnitTest& unit_test) {
+  const std::string kTestsuites = "testsuites";
+  const std::string kIndent = Indent(2);
+  *stream << "{\n";
+
+  OutputJsonKey(stream, kTestsuites, "tests", unit_test.reportable_test_count(),
+                kIndent);
+  OutputJsonKey(stream, kTestsuites, "failures", unit_test.failed_test_count(),
+                kIndent);
+  OutputJsonKey(stream, kTestsuites, "disabled",
+                unit_test.reportable_disabled_test_count(), kIndent);
+  OutputJsonKey(stream, kTestsuites, "errors", 0, kIndent);
+  if (GTEST_FLAG(shuffle)) {
+    OutputJsonKey(stream, kTestsuites, "random_seed", unit_test.random_seed(),
+                  kIndent);
+  }
+  OutputJsonKey(stream, kTestsuites, "timestamp",
+                FormatEpochTimeInMillisAsRFC3339(unit_test.start_timestamp()),
+                kIndent);
+  OutputJsonKey(stream, kTestsuites, "time",
+                FormatTimeInMillisAsDuration(unit_test.elapsed_time()), kIndent,
+                false);
+
+  *stream << TestPropertiesAsJson(unit_test.ad_hoc_test_result(), kIndent)
+          << ",\n";
+
+  OutputJsonKey(stream, kTestsuites, "name", "AllTests", kIndent);
+  *stream << kIndent << "\"" << kTestsuites << "\": [\n";
+
+  bool comma = false;
+  for (int i = 0; i < unit_test.total_test_case_count(); ++i) {
+    if (unit_test.GetTestCase(i)->reportable_test_count() > 0) {
+      if (comma) {
+        *stream << ",\n";
+      } else {
+        comma = true;
+      }
+      PrintJsonTestCase(stream, *unit_test.GetTestCase(i));
+    }
+  }
+
+  *stream << "\n" << kIndent << "]\n" << "}\n";
+}
+
+void JsonUnitTestResultPrinter::PrintJsonTestList(
+    std::ostream* stream, const std::vector<TestCase*>& test_cases) {
+  const std::string kTestsuites = "testsuites";
+  const std::string kIndent = Indent(2);
+  *stream << "{\n";
+  int total_tests = 0;
+  for (size_t i = 0; i < test_cases.size(); ++i) {
+    total_tests += test_cases[i]->total_test_count();
+  }
+  OutputJsonKey(stream, kTestsuites, "tests", total_tests, kIndent);
+
+  OutputJsonKey(stream, kTestsuites, "name", "AllTests", kIndent);
+  *stream << kIndent << "\"" << kTestsuites << "\": [\n";
+
+  for (size_t i = 0; i < test_cases.size(); ++i) {
+    if (i != 0) {
+      *stream << ",\n";
+    }
+    PrintJsonTestCase(stream, *test_cases[i]);
+  }
+
+  *stream << "\n"
+          << kIndent << "]\n"
+          << "}\n";
+}
+
+// Produces a string representing the test properties in a result as
+// a JSON dictionary.
+std::string JsonUnitTestResultPrinter::TestPropertiesAsJson(
+    const TestResult& result, const std::string& indent) {
+  Message attributes;
+  for (int i = 0; i < result.test_property_count(); ++i) {
+    const TestProperty& property = result.GetTestProperty(i);
+    attributes << ",\n" << indent << "\"" << property.key() << "\": "
+               << "\"" << EscapeJson(property.value()) << "\"";
+  }
+  return attributes.GetString();
+}
+
+// End JsonUnitTestResultPrinter
+
 #if GTEST_CAN_STREAM_RESULTS_
 
 // Checks if str contains '=', '&', '%' or '\n' characters. If yes,
@@ -3759,8 +4242,8 @@
 // example, replaces "=" with "%3D".  This algorithm is O(strlen(str))
 // in both time and space -- important as the input str may contain an
 // arbitrarily long test failure message and stack trace.
-string StreamingListener::UrlEncode(const char* str) {
-  string result;
+std::string StreamingListener::UrlEncode(const char* str) {
+  std::string result;
   result.reserve(strlen(str) + 1);
   for (char ch = *str; ch != '\0'; ch = *++str) {
     switch (ch) {
@@ -3822,47 +4305,82 @@
 // End of class Streaming Listener
 #endif  // GTEST_CAN_STREAM_RESULTS__
 
-// Class ScopedTrace
-
-// Pushes the given source file location and message onto a per-thread
-// trace stack maintained by Google Test.
-ScopedTrace::ScopedTrace(const char* file, int line, const Message& message)
-    GTEST_LOCK_EXCLUDED_(&UnitTest::mutex_) {
-  TraceInfo trace;
-  trace.file = file;
-  trace.line = line;
-  trace.message = message.GetString();
-
-  UnitTest::GetInstance()->PushGTestTrace(trace);
-}
-
-// Pops the info pushed by the c'tor.
-ScopedTrace::~ScopedTrace()
-    GTEST_LOCK_EXCLUDED_(&UnitTest::mutex_) {
-  UnitTest::GetInstance()->PopGTestTrace();
-}
-
-
 // class OsStackTraceGetter
 
 const char* const OsStackTraceGetterInterface::kElidedFramesMarker =
     "... " GTEST_NAME_ " internal frames ...";
 
-string OsStackTraceGetter::CurrentStackTrace(int /*max_depth*/,
-                                             int /*skip_count*/) {
+std::string OsStackTraceGetter::CurrentStackTrace(int max_depth, int skip_count)
+    GTEST_LOCK_EXCLUDED_(mutex_) {
+#if GTEST_HAS_ABSL
+  std::string result;
+
+  if (max_depth <= 0) {
+    return result;
+  }
+
+  max_depth = std::min(max_depth, kMaxStackTraceDepth);
+
+  std::vector<void*> raw_stack(max_depth);
+  // Skips the frames requested by the caller, plus this function.
+  const int raw_stack_size =
+      absl::GetStackTrace(&raw_stack[0], max_depth, skip_count + 1);
+
+  void* caller_frame = nullptr;
+  {
+    MutexLock lock(&mutex_);
+    caller_frame = caller_frame_;
+  }
+
+  for (int i = 0; i < raw_stack_size; ++i) {
+    if (raw_stack[i] == caller_frame &&
+        !GTEST_FLAG(show_internal_stack_frames)) {
+      // Add a marker to the trace and stop adding frames.
+      absl::StrAppend(&result, kElidedFramesMarker, "\n");
+      break;
+    }
+
+    char tmp[1024];
+    const char* symbol = "(unknown)";
+    if (absl::Symbolize(raw_stack[i], tmp, sizeof(tmp))) {
+      symbol = tmp;
+    }
+
+    char line[1024];
+    snprintf(line, sizeof(line), "  %p: %s\n", raw_stack[i], symbol);
+    result += line;
+  }
+
+  return result;
+
+#else  // !GTEST_HAS_ABSL
+  static_cast<void>(max_depth);
+  static_cast<void>(skip_count);
   return "";
+#endif  // GTEST_HAS_ABSL
 }
 
-void OsStackTraceGetter::UponLeavingGTest() {}
+void OsStackTraceGetter::UponLeavingGTest() GTEST_LOCK_EXCLUDED_(mutex_) {
+#if GTEST_HAS_ABSL
+  void* caller_frame = nullptr;
+  if (absl::GetStackTrace(&caller_frame, 1, 3) <= 0) {
+    caller_frame = nullptr;
+  }
+
+  MutexLock lock(&mutex_);
+  caller_frame_ = caller_frame;
+#endif  // GTEST_HAS_ABSL
+}
 
 // A helper class that creates the premature-exit file in its
 // constructor and deletes the file in its destructor.
 class ScopedPrematureExitFile {
  public:
   explicit ScopedPrematureExitFile(const char* premature_exit_filepath)
-      : premature_exit_filepath_(premature_exit_filepath) {
+      : premature_exit_filepath_(premature_exit_filepath ?
+                                 premature_exit_filepath : "") {
     // If a path to the premature-exit file is specified...
-    if (premature_exit_filepath != NULL && *premature_exit_filepath != '\0') {
+    if (!premature_exit_filepath_.empty()) {
       // create the file with a single "0" character in it.  I/O
       // errors are ignored as there's nothing better we can do and we
       // don't want to fail the test because of this.
@@ -3873,13 +4391,18 @@
   }
 
   ~ScopedPrematureExitFile() {
-    if (premature_exit_filepath_ != NULL && *premature_exit_filepath_ != '\0') {
-      remove(premature_exit_filepath_);
+    if (!premature_exit_filepath_.empty()) {
+      int retval = remove(premature_exit_filepath_.c_str());
+      if (retval) {
+        GTEST_LOG_(ERROR) << "Failed to remove premature exit filepath \""
+                          << premature_exit_filepath_ << "\" with error "
+                          << retval;
+      }
     }
   }
 
  private:
-  const char* const premature_exit_filepath_;
+  const std::string premature_exit_filepath_;
 
   GTEST_DISALLOW_COPY_AND_ASSIGN_(ScopedPrematureExitFile);
 };
@@ -4149,6 +4672,11 @@
       // when a failure happens and both the --gtest_break_on_failure and
       // the --gtest_catch_exceptions flags are specified.
       DebugBreak();
+#elif (!defined(__native_client__)) &&            \
+    ((defined(__clang__) || defined(__GNUC__)) && \
+     (defined(__x86_64__) || defined(__i386__)))
+      // With clang/gcc, we can achieve the same effect on x86 by invoking int3.
+      asm("int3");
 #else
       // Dereference NULL through a volatile pointer to prevent the compiler
       // from removing. We use this rather than abort() or __builtin_trap() for
@@ -4216,7 +4744,7 @@
   // used for the duration of the program.
   impl()->set_catch_exceptions(GTEST_FLAG(catch_exceptions));
 
-#if GTEST_HAS_SEH
+#if GTEST_OS_WINDOWS
   // Either the user wants Google Test to catch exceptions thrown by the
   // tests or this is executing in the context of death test child
   // process. In either case the user does not want to see pop-up dialogs
@@ -4245,7 +4773,7 @@
     // VC++ doesn't define _set_abort_behavior() prior to the version 8.0.
     // Users of prior VC versions shall suffer the agony and pain of
     // clicking through the countless debug dialogs.
-    // TODO(vladl@google.com): find a way to suppress the abort dialog() in the
+    // FIXME: find a way to suppress the abort dialog() in the
     // debug mode when compiled with VC 7.1 or lower.
     if (!GTEST_FLAG(break_on_failure))
       _set_abort_behavior(
@@ -4253,7 +4781,7 @@
           _WRITE_ABORT_MSG | _CALL_REPORTFAULT);  // pop-up window, core dump.
 # endif
   }
-#endif  // GTEST_HAS_SEH
+#endif  // GTEST_OS_WINDOWS
 
   return internal::HandleExceptionsInMethodIfSupported(
       impl(),
@@ -4286,7 +4814,6 @@
 // Returns the random seed used at the start of the current test run.
 int UnitTest::random_seed() const { return impl_->random_seed(); }
 
-#if GTEST_HAS_PARAM_TEST
 // Returns ParameterizedTestCaseRegistry object used to keep track of
 // value-parameterized tests and instantiate and register them.
 internal::ParameterizedTestCaseRegistry&
@@ -4294,7 +4821,6 @@
         GTEST_LOCK_EXCLUDED_(mutex_) {
   return impl_->parameterized_test_registry();
 }
-#endif  // GTEST_HAS_PARAM_TEST
 
 // Creates an empty UnitTest.
 UnitTest::UnitTest() {
@@ -4333,10 +4859,8 @@
           &default_global_test_part_result_reporter_),
       per_thread_test_part_result_reporter_(
           &default_per_thread_test_part_result_reporter_),
-#if GTEST_HAS_PARAM_TEST
       parameterized_test_registry_(),
       parameterized_tests_registered_(false),
-#endif  // GTEST_HAS_PARAM_TEST
       last_death_test_case_(-1),
       current_test_case_(NULL),
       current_test_info_(NULL),
@@ -4403,10 +4927,12 @@
   if (output_format == "xml") {
     listeners()->SetDefaultXmlGenerator(new XmlUnitTestResultPrinter(
         UnitTestOptions::GetAbsolutePathToOutputFile().c_str()));
+  } else if (output_format == "json") {
+    listeners()->SetDefaultXmlGenerator(new JsonUnitTestResultPrinter(
+        UnitTestOptions::GetAbsolutePathToOutputFile().c_str()));
   } else if (output_format != "") {
-    printf("WARNING: unrecognized output format \"%s\" ignored.\n",
-           output_format.c_str());
-    fflush(stdout);
+    GTEST_LOG_(WARNING) << "WARNING: unrecognized output format \""
+                        << output_format << "\" ignored.";
   }
 }
 
@@ -4421,9 +4947,8 @@
       listeners()->Append(new StreamingListener(target.substr(0, pos),
                                                 target.substr(pos+1)));
     } else {
-      printf("WARNING: unrecognized streaming target \"%s\" ignored.\n",
-             target.c_str());
-      fflush(stdout);
+      GTEST_LOG_(WARNING) << "unrecognized streaming target \"" << target
+                          << "\" ignored.";
     }
   }
 }
@@ -4462,6 +4987,13 @@
     // Configures listeners for streaming test results to the specified server.
     ConfigureStreamingOutput();
 #endif  // GTEST_CAN_STREAM_RESULTS_
+
+#if GTEST_HAS_ABSL
+    if (GTEST_FLAG(install_failure_signal_handler)) {
+      absl::FailureSignalHandlerOptions options;
+      absl::InstallFailureSignalHandler(options);
+    }
+#endif  // GTEST_HAS_ABSL
   }
 }
 
@@ -4505,11 +5037,11 @@
                                     Test::SetUpTestCaseFunc set_up_tc,
                                     Test::TearDownTestCaseFunc tear_down_tc) {
   // Can we find a TestCase with the given name?
-  const std::vector<TestCase*>::const_iterator test_case =
-      std::find_if(test_cases_.begin(), test_cases_.end(),
+  const std::vector<TestCase*>::const_reverse_iterator test_case =
+      std::find_if(test_cases_.rbegin(), test_cases_.rend(),
                    TestCaseNameIs(test_case_name));
 
-  if (test_case != test_cases_.end())
+  if (test_case != test_cases_.rend())
     return *test_case;
 
   // No.  Let's create one.
@@ -4550,13 +5082,8 @@
 // All other functions called from RunAllTests() may safely assume that
 // parameterized tests are ready to be counted and run.
 bool UnitTestImpl::RunAllTests() {
-  // Makes sure InitGoogleTest() was called.
-  if (!GTestIsInitialized()) {
-    printf("%s",
-           "\nThis test program did NOT call ::testing::InitGoogleTest "
-           "before calling RUN_ALL_TESTS().  Please fix it.\n");
-    return false;
-  }
+  // True iff Google Test is initialized before RUN_ALL_TESTS() is called.
+  const bool gtest_is_initialized_before_run_all_tests = GTestIsInitialized();
 
   // Do not run any test if the --help flag was specified.
   if (g_help_flag)
@@ -4684,6 +5211,20 @@
 
   repeater->OnTestProgramEnd(*parent_);
 
+  if (!gtest_is_initialized_before_run_all_tests) {
+    ColoredPrintf(
+        COLOR_RED,
+        "\nIMPORTANT NOTICE - DO NOT IGNORE:\n"
+        "This test program did NOT call " GTEST_INIT_GOOGLE_TEST_NAME_
+        "() before calling RUN_ALL_TESTS(). This is INVALID. Soon " GTEST_NAME_
+        " will start to enforce the valid usage. "
+        "Please fix it ASAP, or IT WILL START TO FAIL.\n");  // NOLINT
+#if GTEST_FOR_GOOGLE_
+    ColoredPrintf(COLOR_RED,
+                  "For more details, see http://wiki/Main/ValidGUnitMain.\n");
+#endif  // GTEST_FOR_GOOGLE_
+  }
+
   return !failed;
 }
 
@@ -4785,8 +5326,8 @@
 // each TestCase and TestInfo object.
 // If shard_tests == true, further filters tests based on sharding
 // variables in the environment - see
-// http://code.google.com/p/googletest/wiki/GoogleTestAdvancedGuide.
-// Returns the number of tests that should run.
+// https://github.com/google/googletest/blob/master/googletest/docs/advanced.md
+// Returns the number of tests that should run.
 int UnitTestImpl::FilterTests(ReactionToSharding shard_tests) {
   const Int32 total_shards = shard_tests == HONOR_SHARDING_PROTOCOL ?
       Int32FromEnvOrDie(kTestTotalShards, -1) : -1;
@@ -4825,10 +5366,11 @@
           (GTEST_FLAG(also_run_disabled_tests) || !is_disabled) &&
           matches_filter;
 
-      const bool is_selected = is_runnable &&
-          (shard_tests == IGNORE_SHARDING_PROTOCOL ||
-           ShouldRunTestOnShard(total_shards, shard_index,
-                                num_runnable_tests));
+      const bool is_in_another_shard =
+          shard_tests != IGNORE_SHARDING_PROTOCOL &&
+          !ShouldRunTestOnShard(total_shards, shard_index, num_runnable_tests);
+      test_info->is_in_another_shard_ = is_in_another_shard;
+      const bool is_selected = is_runnable && !is_in_another_shard;
 
       num_runnable_tests += is_runnable;
       num_selected_tests += is_selected;
@@ -4898,6 +5440,23 @@
     }
   }
   fflush(stdout);
+  const std::string& output_format = UnitTestOptions::GetOutputFormat();
+  if (output_format == "xml" || output_format == "json") {
+    FILE* fileout = OpenFileForWriting(
+        UnitTestOptions::GetAbsolutePathToOutputFile().c_str());
+    std::stringstream stream;
+    if (output_format == "xml") {
+      XmlUnitTestResultPrinter(
+          UnitTestOptions::GetAbsolutePathToOutputFile().c_str())
+          .PrintXmlTestsList(&stream, test_cases_);
+    } else if (output_format == "json") {
+      JsonUnitTestResultPrinter(
+          UnitTestOptions::GetAbsolutePathToOutputFile().c_str())
+          .PrintJsonTestList(&stream, test_cases_);
+    }
+    fprintf(fileout, "%s", StringStreamToString(&stream).c_str());
+    fclose(fileout);
+  }
 }
 
 // Sets the OS stack trace getter.
@@ -4928,11 +5487,15 @@
   return os_stack_trace_getter_;
 }
 
-// Returns the TestResult for the test that's currently running, or
-// the TestResult for the ad hoc test if no test is running.
+// Returns the most specific TestResult currently running.
 TestResult* UnitTestImpl::current_test_result() {
-  return current_test_info_ ?
-      &(current_test_info_->result_) : &ad_hoc_test_result_;
+  if (current_test_info_ != NULL) {
+    return &current_test_info_->result_;
+  }
+  if (current_test_case_ != NULL) {
+    return &current_test_case_->ad_hoc_test_result_;
+  }
+  return &ad_hoc_test_result_;
 }
 
 // Shuffles all test cases, and the tests within each test case,
@@ -5013,9 +5576,8 @@
 // part can be omitted.
 //
 // Returns the value of the flag, or NULL if the parsing failed.
-const char* ParseFlagValue(const char* str,
-                           const char* flag,
-                           bool def_optional) {
+static const char* ParseFlagValue(const char* str, const char* flag,
+                                  bool def_optional) {
   // str and flag must not be NULL.
   if (str == NULL || flag == NULL) return NULL;
 
@@ -5051,7 +5613,7 @@
 //
 // On success, stores the value of the flag in *value, and returns
 // true.  On failure, returns false without changing *value.
-bool ParseBoolFlag(const char* str, const char* flag, bool* value) {
+static bool ParseBoolFlag(const char* str, const char* flag, bool* value) {
   // Gets the value of the flag as a string.
   const char* const value_str = ParseFlagValue(str, flag, true);
 
@@ -5085,7 +5647,8 @@
 //
 // On success, stores the value of the flag in *value, and returns
 // true.  On failure, returns false without changing *value.
-bool ParseStringFlag(const char* str, const char* flag, std::string* value) {
+template <typename String>
+static bool ParseStringFlag(const char* str, const char* flag, String* value) {
   // Gets the value of the flag as a string.
   const char* const value_str = ParseFlagValue(str, flag, false);
 
@@ -5121,7 +5684,7 @@
 //   @Y    changes the color to yellow.
 //   @D    changes to the default terminal text color.
 //
-// TODO(wan@google.com): Write tests for this once we add stdout
+// FIXME: Write tests for this once we add stdout
 // capturing to Google Test.
 static void PrintColorEncoded(const char* str) {
   GTestColor color = COLOR_DEFAULT;  // The current color.
@@ -5187,24 +5750,25 @@
 "      Enable/disable colored output. The default is @Gauto@D.\n"
 "  -@G-" GTEST_FLAG_PREFIX_ "print_time=0@D\n"
 "      Don't print the elapsed time of each test.\n"
-"  @G--" GTEST_FLAG_PREFIX_ "output=xml@Y[@G:@YDIRECTORY_PATH@G"
+"  @G--" GTEST_FLAG_PREFIX_ "output=@Y(@Gjson@Y|@Gxml@Y)[@G:@YDIRECTORY_PATH@G"
     GTEST_PATH_SEP_ "@Y|@G:@YFILE_PATH]@D\n"
-"      Generate an XML report in the given directory or with the given file\n"
-"      name. @YFILE_PATH@D defaults to @Gtest_details.xml@D.\n"
-#if GTEST_CAN_STREAM_RESULTS_
+"      Generate a JSON or XML report in the given directory or with the given\n"
+"      file name. @YFILE_PATH@D defaults to @Gtest_details.xml@D.\n"
+# if GTEST_CAN_STREAM_RESULTS_
 "  @G--" GTEST_FLAG_PREFIX_ "stream_result_to=@YHOST@G:@YPORT@D\n"
 "      Stream test results to the given server.\n"
-#endif  // GTEST_CAN_STREAM_RESULTS_
+# endif  // GTEST_CAN_STREAM_RESULTS_
 "\n"
 "Assertion Behavior:\n"
-#if GTEST_HAS_DEATH_TEST && !GTEST_OS_WINDOWS
+# if GTEST_HAS_DEATH_TEST && !GTEST_OS_WINDOWS
 "  @G--" GTEST_FLAG_PREFIX_ "death_test_style=@Y(@Gfast@Y|@Gthreadsafe@Y)@D\n"
 "      Set the default death test style.\n"
-#endif  // GTEST_HAS_DEATH_TEST && !GTEST_OS_WINDOWS
+# endif  // GTEST_HAS_DEATH_TEST && !GTEST_OS_WINDOWS
 "  @G--" GTEST_FLAG_PREFIX_ "break_on_failure@D\n"
 "      Turn assertion failures into debugger break-points.\n"
 "  @G--" GTEST_FLAG_PREFIX_ "throw_on_failure@D\n"
-"      Turn assertion failures into C++ exceptions.\n"
+"      Turn assertion failures into C++ exceptions for use by an external\n"
+"      test framework.\n"
 "  @G--" GTEST_FLAG_PREFIX_ "catch_exceptions=0@D\n"
 "      Do not report exceptions as test failures. Instead, allow them\n"
 "      to crash the program or throw a pop-up (on Windows).\n"
@@ -5221,7 +5785,7 @@
 "(not one in your own code or tests), please report it to\n"
 "@G<" GTEST_DEV_EMAIL_ ">@D.\n";
 
-bool ParseGoogleTestFlag(const char* const arg) {
+static bool ParseGoogleTestFlag(const char* const arg) {
   return ParseBoolFlag(arg, kAlsoRunDisabledTestsFlag,
                        &GTEST_FLAG(also_run_disabled_tests)) ||
       ParseBoolFlag(arg, kBreakOnFailureFlag,
@@ -5239,6 +5803,7 @@
       ParseBoolFlag(arg, kListTestsFlag, &GTEST_FLAG(list_tests)) ||
       ParseStringFlag(arg, kOutputFlag, &GTEST_FLAG(output)) ||
       ParseBoolFlag(arg, kPrintTimeFlag, &GTEST_FLAG(print_time)) ||
+      ParseBoolFlag(arg, kPrintUTF8Flag, &GTEST_FLAG(print_utf8)) ||
       ParseInt32Flag(arg, kRandomSeedFlag, &GTEST_FLAG(random_seed)) ||
       ParseInt32Flag(arg, kRepeatFlag, &GTEST_FLAG(repeat)) ||
       ParseBoolFlag(arg, kShuffleFlag, &GTEST_FLAG(shuffle)) ||
@@ -5251,14 +5816,11 @@
 }
 
 #if GTEST_USE_OWN_FLAGFILE_FLAG_
-void LoadFlagsFromFile(const std::string& path) {
+static void LoadFlagsFromFile(const std::string& path) {
   FILE* flagfile = posix::FOpen(path.c_str(), "r");
   if (!flagfile) {
-    fprintf(stderr,
-            "Unable to open file \"%s\"\n",
-            GTEST_FLAG(flagfile).c_str());
-    fflush(stderr);
-    exit(EXIT_FAILURE);
+    GTEST_LOG_(FATAL) << "Unable to open file \"" << GTEST_FLAG(flagfile)
+                      << "\"";
   }
   std::string contents(ReadEntireFile(flagfile));
   posix::FClose(flagfile);
@@ -5332,6 +5894,17 @@
 // other parts of Google Test.
 void ParseGoogleTestFlagsOnly(int* argc, char** argv) {
   ParseGoogleTestFlagsOnlyImpl(argc, argv);
+
+  // Fix the value of *_NSGetArgc() on macOS, but iff
+  // *_NSGetArgv() == argv
+  // Only applicable to char** version of argv
+#if GTEST_OS_MAC
+#ifndef GTEST_OS_IOS
+  if (*_NSGetArgv() == argv) {
+    *_NSGetArgc() = *argc;
+  }
+#endif
+#endif
 }
 void ParseGoogleTestFlagsOnly(int* argc, wchar_t** argv) {
   ParseGoogleTestFlagsOnlyImpl(argc, argv);
@@ -5353,6 +5926,10 @@
     g_argvs.push_back(StreamableToString(argv[i]));
   }
 
+#if GTEST_HAS_ABSL
+  absl::InitializeSymbolizer(g_argvs[0].c_str());
+#endif  // GTEST_HAS_ABSL
+
   ParseGoogleTestFlagsOnly(argc, argv);
   GetUnitTestImpl()->PostFlagParsingInit();
 }
@@ -5386,4 +5963,45 @@
 #endif  // defined(GTEST_CUSTOM_INIT_GOOGLE_TEST_FUNCTION_)
 }
 
+std::string TempDir() {
+#if defined(GTEST_CUSTOM_TEMPDIR_FUNCTION_)
+  return GTEST_CUSTOM_TEMPDIR_FUNCTION_();
+#endif
+
+#if GTEST_OS_WINDOWS_MOBILE
+  return "\\temp\\";
+#elif GTEST_OS_WINDOWS
+  const char* temp_dir = internal::posix::GetEnv("TEMP");
+  if (temp_dir == NULL || temp_dir[0] == '\0')
+    return "\\temp\\";
+  else if (temp_dir[strlen(temp_dir) - 1] == '\\')
+    return temp_dir;
+  else
+    return std::string(temp_dir) + "\\";
+#elif GTEST_OS_LINUX_ANDROID
+  return "/sdcard/";
+#else
+  return "/tmp/";
+#endif  // GTEST_OS_WINDOWS_MOBILE
+}
+
+// Class ScopedTrace
+
+// Pushes the given source file location and message onto a per-thread
+// trace stack maintained by Google Test.
+void ScopedTrace::PushTrace(const char* file, int line, std::string message) {
+  internal::TraceInfo trace;
+  trace.file = file;
+  trace.line = line;
+  trace.message.swap(message);
+
+  UnitTest::GetInstance()->PushGTestTrace(trace);
+}
+
+// Pops the info pushed by the c'tor.
+ScopedTrace::~ScopedTrace()
+    GTEST_LOCK_EXCLUDED_(&UnitTest::mutex_) {
+  UnitTest::GetInstance()->PopGTestTrace();
+}
+
 }  // namespace testing
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest_main.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest_main.cc
index f302822..2113f62 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest_main.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/googletest/src/src/gtest_main.cc
@@ -28,11 +28,10 @@
 // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 #include <stdio.h>
-
 #include "gtest/gtest.h"
 
 GTEST_API_ int main(int argc, char **argv) {
-  printf("Running main() from gtest_main.cc\n");
+  printf("Running main() from %s\n", __FILE__);
   testing::InitGoogleTest(&argc, argv);
   return RUN_ALL_TESTS();
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/Android.mk b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/Android.mk
index 8149a08..b46ba10 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/Android.mk
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/Android.mk
@@ -3,7 +3,7 @@
 include $(CLEAR_VARS)
 LOCAL_MODULE:= libwebm
 LOCAL_CPPFLAGS:=-D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS
-LOCAL_CPPFLAGS+=-D__STDC_LIMIT_MACROS -Wno-extern-c-compat
+LOCAL_CPPFLAGS+=-D__STDC_LIMIT_MACROS -std=c++11
 LOCAL_C_INCLUDES:= $(LOCAL_PATH)
 LOCAL_EXPORT_C_INCLUDES:= $(LOCAL_PATH)
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/README.libvpx b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/README.libvpx
index ebb5ff2..1e87afd 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/README.libvpx
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/README.libvpx
@@ -1,5 +1,5 @@
 URL: https://chromium.googlesource.com/webm/libwebm
-Version: 0ae757087f5e6eb01dfea16cc09205b2425cfb74
+Version: 37d9b860ebbf40cb0f6dcb7a6fef452d798062da
 License: BSD
 License File: LICENSE.txt
 
@@ -7,4 +7,14 @@
 libwebm is used to handle WebM container I/O.
 
 Local Changes:
-* <none>
+Only keep:
+ - Android.mk
+ - AUTHORS.TXT
+ - common/
+    file_util.cc/h
+    hdr_util.cc/h
+    webmids.h
+ - LICENSE.TXT
+ - mkvmuxer/
+ - mkvparser/
+ - PATENTS.TXT
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/file_util.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/file_util.cc
index 6dab146..6eb6428 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/file_util.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/file_util.cc
@@ -17,14 +17,15 @@
 #include <cstring>
 #include <fstream>
 #include <ios>
+#include <string>
 
 namespace libwebm {
 
 std::string GetTempFileName() {
 #if !defined _MSC_VER && !defined __MINGW32__
   std::string temp_file_name_template_str =
-      std::string(std::getenv("TEST_TMPDIR") ? std::getenv("TEST_TMPDIR") :
-                                               ".") +
+      std::string(std::getenv("TEST_TMPDIR") ? std::getenv("TEST_TMPDIR")
+                                             : ".") +
       "/libwebm_temp.XXXXXX";
   char* temp_file_name_template =
       new char[temp_file_name_template_str.length() + 1];
@@ -41,7 +42,12 @@
   return temp_file_name;
 #else
   char tmp_file_name[_MAX_PATH];
+#if defined _MSC_VER || defined MINGW_HAS_SECURE_API
   errno_t err = tmpnam_s(tmp_file_name);
+#else
+  char* fname_pointer = tmpnam(tmp_file_name);
+  int err = (fname_pointer == &tmp_file_name[0]) ? 0 : -1;
+#endif
   if (err == 0) {
     return std::string(tmp_file_name);
   }
@@ -65,6 +71,15 @@
   return file_size;
 }
 
+bool GetFileContents(const std::string& file_name, std::string* contents) {
+  std::ifstream file(file_name.c_str());
+  *contents = std::string(static_cast<size_t>(GetFileSize(file_name)), 0);
+  if (file.good() && contents->size()) {
+    file.read(&(*contents)[0], contents->size());
+  }
+  return !file.fail();
+}
+
 TempFileDeleter::TempFileDeleter() { file_name_ = GetTempFileName(); }
 
 TempFileDeleter::~TempFileDeleter() {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/file_util.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/file_util.h
index 0e71eac..a873734 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/file_util.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/file_util.h
@@ -22,6 +22,9 @@
 // Returns size of file specified by |file_name|, or 0 upon failure.
 uint64_t GetFileSize(const std::string& file_name);
 
+// Gets the contents file_name as a string. Returns false on error.
+bool GetFileContents(const std::string& file_name, std::string* contents);
+
 // Manages life of temporary file specified at time of construction. Deletes
 // file upon destruction.
 class TempFileDeleter {
@@ -38,4 +41,4 @@
 
 }  // namespace libwebm
 
-#endif  // LIBWEBM_COMMON_FILE_UTIL_H_
\ No newline at end of file
+#endif  // LIBWEBM_COMMON_FILE_UTIL_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/hdr_util.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/hdr_util.cc
index e1618ce..916f717 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/hdr_util.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/hdr_util.cc
@@ -36,10 +36,10 @@
   if (MasteringMetadataValuePresent(parser_mm.luminance_min))
     muxer_mm->set_luminance_min(parser_mm.luminance_min);
 
-  PrimaryChromaticityPtr r_ptr(NULL);
-  PrimaryChromaticityPtr g_ptr(NULL);
-  PrimaryChromaticityPtr b_ptr(NULL);
-  PrimaryChromaticityPtr wp_ptr(NULL);
+  PrimaryChromaticityPtr r_ptr(nullptr);
+  PrimaryChromaticityPtr g_ptr(nullptr);
+  PrimaryChromaticityPtr b_ptr(nullptr);
+  PrimaryChromaticityPtr wp_ptr(nullptr);
 
   if (parser_mm.r) {
     if (!CopyPrimaryChromaticity(*parser_mm.r, &r_ptr))
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/hdr_util.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/hdr_util.h
index 3ef5388..78e2eeb 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/hdr_util.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/hdr_util.h
@@ -47,15 +47,7 @@
   int chroma_subsampling;
 };
 
-// disable deprecation warnings for auto_ptr
-#if defined(__GNUC__) && __GNUC__ >= 5
-#pragma GCC diagnostic push
-#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
-#endif
-typedef std::auto_ptr<mkvmuxer::PrimaryChromaticity> PrimaryChromaticityPtr;
-#if defined(__GNUC__) && __GNUC__ >= 5
-#pragma GCC diagnostic pop
-#endif
+typedef std::unique_ptr<mkvmuxer::PrimaryChromaticity> PrimaryChromaticityPtr;
 
 bool CopyPrimaryChromaticity(const mkvparser::PrimaryChromaticity& parser_pc,
                              PrimaryChromaticityPtr* muxer_pc);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/webmids.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/webmids.h
index 89d722a..fc0c208 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/webmids.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/common/webmids.h
@@ -93,6 +93,7 @@
   kMkvDisplayHeight = 0x54BA,
   kMkvDisplayUnit = 0x54B2,
   kMkvAspectRatioType = 0x54B3,
+  kMkvColourSpace = 0x2EB524,
   kMkvFrameRate = 0x2383E3,
   // end video
   // colour
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxer.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxer.cc
index 15b9a90..5120312 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxer.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxer.cc
@@ -8,6 +8,8 @@
 
 #include "mkvmuxer/mkvmuxer.h"
 
+#include <stdint.h>
+
 #include <cfloat>
 #include <climits>
 #include <cstdio>
@@ -24,11 +26,6 @@
 #include "mkvmuxer/mkvwriter.h"
 #include "mkvparser/mkvparser.h"
 
-// disable deprecation warnings for auto_ptr
-#if defined(__GNUC__) && __GNUC__ >= 5
-#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
-#endif
-
 namespace mkvmuxer {
 
 const float PrimaryChromaticity::kChromaticityMin = 0.0f;
@@ -72,7 +69,7 @@
   return true;
 }
 
-typedef std::auto_ptr<PrimaryChromaticity> PrimaryChromaticityPtr;
+typedef std::unique_ptr<PrimaryChromaticity> PrimaryChromaticityPtr;
 bool CopyChromaticity(const PrimaryChromaticity* src,
                       PrimaryChromaticityPtr* dst) {
   if (!dst)
@@ -776,6 +773,14 @@
   if (!type_ || !codec_id_)
     return false;
 
+  // AV1 tracks require a CodecPrivate. See
+  // https://github.com/Matroska-Org/matroska-specification/blob/av1-mappin/codec/av1.md
+  // TODO(tomfinegan): Update the above link to the AV1 Matroska mappings to
+  // point to a stable version once it is finalized, or our own WebM mappings
+  // page on webmproject.org should we decide to release them.
+  if (!strcmp(codec_id_, Tracks::kAv1CodecId) && !codec_private_)
+    return false;
+
   // |size| may be bigger than what is written out in this function because
   // derived classes may write out more data in the Track element.
   const uint64_t payload_size = PayloadSize();
@@ -1030,19 +1035,16 @@
       !WriteEbmlElement(writer, libwebm::kMkvLuminanceMin, luminance_min_)) {
     return false;
   }
-  if (r_ &&
-      !r_->Write(writer, libwebm::kMkvPrimaryRChromaticityX,
-                 libwebm::kMkvPrimaryRChromaticityY)) {
+  if (r_ && !r_->Write(writer, libwebm::kMkvPrimaryRChromaticityX,
+                       libwebm::kMkvPrimaryRChromaticityY)) {
     return false;
   }
-  if (g_ &&
-      !g_->Write(writer, libwebm::kMkvPrimaryGChromaticityX,
-                 libwebm::kMkvPrimaryGChromaticityY)) {
+  if (g_ && !g_->Write(writer, libwebm::kMkvPrimaryGChromaticityX,
+                       libwebm::kMkvPrimaryGChromaticityY)) {
     return false;
   }
-  if (b_ &&
-      !b_->Write(writer, libwebm::kMkvPrimaryBChromaticityX,
-                 libwebm::kMkvPrimaryBChromaticityY)) {
+  if (b_ && !b_->Write(writer, libwebm::kMkvPrimaryBChromaticityX,
+                       libwebm::kMkvPrimaryBChromaticityY)) {
     return false;
   }
   if (white_point_ &&
@@ -1057,22 +1059,22 @@
 bool MasteringMetadata::SetChromaticity(
     const PrimaryChromaticity* r, const PrimaryChromaticity* g,
     const PrimaryChromaticity* b, const PrimaryChromaticity* white_point) {
-  PrimaryChromaticityPtr r_ptr(NULL);
+  PrimaryChromaticityPtr r_ptr(nullptr);
   if (r) {
     if (!CopyChromaticity(r, &r_ptr))
       return false;
   }
-  PrimaryChromaticityPtr g_ptr(NULL);
+  PrimaryChromaticityPtr g_ptr(nullptr);
   if (g) {
     if (!CopyChromaticity(g, &g_ptr))
       return false;
   }
-  PrimaryChromaticityPtr b_ptr(NULL);
+  PrimaryChromaticityPtr b_ptr(nullptr);
   if (b) {
     if (!CopyChromaticity(b, &b_ptr))
       return false;
   }
-  PrimaryChromaticityPtr wp_ptr(NULL);
+  PrimaryChromaticityPtr wp_ptr(nullptr);
   if (white_point) {
     if (!CopyChromaticity(white_point, &wp_ptr))
       return false;
@@ -1238,7 +1240,7 @@
 }
 
 bool Colour::SetMasteringMetadata(const MasteringMetadata& mastering_metadata) {
-  std::auto_ptr<MasteringMetadata> mm_ptr(new MasteringMetadata());
+  std::unique_ptr<MasteringMetadata> mm_ptr(new MasteringMetadata());
   if (!mm_ptr.get())
     return false;
 
@@ -1424,6 +1426,7 @@
       stereo_mode_(0),
       alpha_mode_(0),
       width_(0),
+      colour_space_(NULL),
       colour_(NULL),
       projection_(NULL) {}
 
@@ -1521,6 +1524,10 @@
                           static_cast<uint64>(alpha_mode_)))
       return false;
   }
+  if (colour_space_) {
+    if (!WriteEbmlElement(writer, libwebm::kMkvColourSpace, colour_space_))
+      return false;
+  }
   if (frame_rate_ > 0.0) {
     if (!WriteEbmlElement(writer, libwebm::kMkvFrameRate,
                           static_cast<float>(frame_rate_))) {
@@ -1545,8 +1552,24 @@
   return true;
 }
 
+void VideoTrack::set_colour_space(const char* colour_space) {
+  if (colour_space) {
+    delete[] colour_space_;
+
+    const size_t length = strlen(colour_space) + 1;
+    colour_space_ = new (std::nothrow) char[length];  // NOLINT
+    if (colour_space_) {
+#ifdef _MSC_VER
+      strcpy_s(colour_space_, length, colour_space);
+#else
+      strcpy(colour_space_, colour_space);
+#endif
+    }
+  }
+}
+
 bool VideoTrack::SetColour(const Colour& colour) {
-  std::auto_ptr<Colour> colour_ptr(new Colour());
+  std::unique_ptr<Colour> colour_ptr(new Colour());
   if (!colour_ptr.get())
     return false;
 
@@ -1574,7 +1597,7 @@
 }
 
 bool VideoTrack::SetProjection(const Projection& projection) {
-  std::auto_ptr<Projection> projection_ptr(new Projection());
+  std::unique_ptr<Projection> projection_ptr(new Projection());
   if (!projection_ptr.get())
     return false;
 
@@ -1628,6 +1651,8 @@
   if (frame_rate_ > 0.0)
     size += EbmlElementSize(libwebm::kMkvFrameRate,
                             static_cast<float>(frame_rate_));
+  if (colour_space_)
+    size += EbmlElementSize(libwebm::kMkvColourSpace, colour_space_);
   if (colour_)
     size += colour_->ColourSize();
   if (projection_)
@@ -1705,9 +1730,9 @@
 
 const char Tracks::kOpusCodecId[] = "A_OPUS";
 const char Tracks::kVorbisCodecId[] = "A_VORBIS";
+const char Tracks::kAv1CodecId[] = "V_AV1";
 const char Tracks::kVp8CodecId[] = "V_VP8";
 const char Tracks::kVp9CodecId[] = "V_VP9";
-const char Tracks::kVp10CodecId[] = "V_VP10";
 const char Tracks::kWebVttCaptionsId[] = "D_WEBVTT/CAPTIONS";
 const char Tracks::kWebVttDescriptionsId[] = "D_WEBVTT/DESCRIPTIONS";
 const char Tracks::kWebVttMetadataId[] = "D_WEBVTT/METADATA";
@@ -2666,7 +2691,7 @@
   // and write it if it is okay to do so (i.e.) no other track has an held back
   // frame with timestamp <= the timestamp of the frame in question.
   std::vector<std::list<Frame*>::iterator> frames_to_erase;
-  for (std::list<Frame *>::iterator
+  for (std::list<Frame*>::iterator
            current_track_iterator = stored_frames_[track_number].begin(),
            end = --stored_frames_[track_number].end();
        current_track_iterator != end; ++current_track_iterator) {
@@ -4168,8 +4193,8 @@
   // TODO(vigneshv): Tweak .clang-format.
   const char* kWebmCodecIds[kNumCodecIds] = {
       Tracks::kOpusCodecId,          Tracks::kVorbisCodecId,
-      Tracks::kVp8CodecId,           Tracks::kVp9CodecId,
-      Tracks::kVp10CodecId,          Tracks::kWebVttCaptionsId,
+      Tracks::kAv1CodecId,           Tracks::kVp8CodecId,
+      Tracks::kVp9CodecId,           Tracks::kWebVttCaptionsId,
       Tracks::kWebVttDescriptionsId, Tracks::kWebVttMetadataId,
       Tracks::kWebVttSubtitlesId};
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxer.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxer.h
index 46b0029..f2db377 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxer.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxer.h
@@ -795,6 +795,8 @@
   uint64_t alpha_mode() { return alpha_mode_; }
   void set_width(uint64_t width) { width_ = width; }
   uint64_t width() const { return width_; }
+  void set_colour_space(const char* colour_space);
+  const char* colour_space() const { return colour_space_; }
 
   Colour* colour() { return colour_; }
 
@@ -824,6 +826,7 @@
   uint64_t stereo_mode_;
   uint64_t alpha_mode_;
   uint64_t width_;
+  char* colour_space_;
 
   Colour* colour_;
   Projection* projection_;
@@ -871,9 +874,9 @@
 
   static const char kOpusCodecId[];
   static const char kVorbisCodecId[];
+  static const char kAv1CodecId[];
   static const char kVp8CodecId[];
   static const char kVp9CodecId[];
-  static const char kVp10CodecId[];
   static const char kWebVttCaptionsId[];
   static const char kWebVttDescriptionsId[];
   static const char kWebVttMetadataId[];
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxerutil.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxerutil.cc
index 355d4e2..6436817 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxerutil.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxerutil.cc
@@ -136,9 +136,8 @@
     return false;
   }
 
-  if (!frame->is_key() &&
-      !WriteEbmlElement(writer, libwebm::kMkvReferenceBlock,
-                        reference_block_timestamp)) {
+  if (!frame->is_key() && !WriteEbmlElement(writer, libwebm::kMkvReferenceBlock,
+                                            reference_block_timestamp)) {
     return false;
   }
 
@@ -509,7 +508,7 @@
   if (WriteUInt(writer, length))
     return false;
 
-  if (writer->Write(value, static_cast<const uint32>(length)))
+  if (writer->Write(value, static_cast<uint32>(length)))
     return false;
 
   return true;
@@ -563,10 +562,10 @@
   if (relative_timecode < 0 || relative_timecode > kMaxBlockTimecode)
     return 0;
 
-  return frame->CanBeSimpleBlock() ?
-             WriteSimpleBlock(writer, frame, relative_timecode) :
-             WriteBlock(writer, frame, relative_timecode,
-                        cluster->timecode_scale());
+  return frame->CanBeSimpleBlock()
+             ? WriteSimpleBlock(writer, frame, relative_timecode)
+             : WriteBlock(writer, frame, relative_timecode,
+                          cluster->timecode_scale());
 }
 
 uint64 WriteVoidElement(IMkvWriter* writer, uint64 size) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxerutil.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxerutil.h
index 132388d..3355428 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxerutil.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvmuxerutil.h
@@ -31,6 +31,9 @@
 // Writes out |value| in Big Endian order. Returns 0 on success.
 int32 SerializeInt(IMkvWriter* writer, int64 value, int32 size);
 
+// Writes out |f| in Big Endian order. Returns 0 on success.
+int32 SerializeFloat(IMkvWriter* writer, float f);
+
 // Returns the size in bytes of the element.
 int32 GetUIntSize(uint64 value);
 int32 GetIntSize(int64 value);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvwriter.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvwriter.cc
index 84655d8..d668384 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvwriter.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvmuxer/mkvwriter.cc
@@ -78,6 +78,8 @@
 
 #ifdef _MSC_VER
   return _fseeki64(file_, position, SEEK_SET);
+#elif defined(_WIN32)
+  return fseeko64(file_, static_cast<off_t>(position), SEEK_SET);
 #else
   return fseeko(file_, static_cast<off_t>(position), SEEK_SET);
 #endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvparser/mkvparser.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvparser/mkvparser.cc
index 37f230d..ace65bd 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvparser/mkvparser.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvparser/mkvparser.cc
@@ -22,12 +22,8 @@
 
 #include "common/webmids.h"
 
-// disable deprecation warnings for auto_ptr
-#if defined(__GNUC__) && __GNUC__ >= 5
-#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
-#endif
-
 namespace mkvparser {
+const long long kStringElementSizeLimit = 20 * 1000 * 1000;
 const float MasteringMetadata::kValueNotPresent = FLT_MAX;
 const long long Colour::kValueNotPresent = LLONG_MAX;
 const float Projection::kValueNotPresent = FLT_MAX;
@@ -40,8 +36,6 @@
 inline bool isinf(double val) { return std::isinf(val); }
 #endif  // MSC_COMPAT
 
-IMkvReader::~IMkvReader() {}
-
 template <typename Type>
 Type* SafeArrayAlloc(unsigned long long num_elements,
                      unsigned long long element_size) {
@@ -330,7 +324,7 @@
   delete[] str;
   str = NULL;
 
-  if (size >= LONG_MAX || size < 0)
+  if (size >= LONG_MAX || size < 0 || size > kStringElementSizeLimit)
     return E_FILE_FORMAT_INVALID;
 
   // +1 for '\0' terminator
@@ -4236,6 +4230,7 @@
         new (std::nothrow) ContentEncryption*[encryption_count];
     if (!encryption_entries_) {
       delete[] compression_entries_;
+      compression_entries_ = NULL;
       return -1;
     }
     encryption_entries_end_ = encryption_entries_;
@@ -4267,6 +4262,7 @@
         delete compression;
         return status;
       }
+      assert(compression_count > 0);
       *compression_entries_end_++ = compression;
     } else if (id == libwebm::kMkvContentEncryption) {
       ContentEncryption* const encryption =
@@ -4279,6 +4275,7 @@
         delete encryption;
         return status;
       }
+      assert(encryption_count > 0);
       *encryption_entries_end_++ = encryption;
     }
 
@@ -4331,6 +4328,12 @@
         return status;
       }
 
+      // There should be only one settings element per content compression.
+      if (compression->settings != NULL) {
+        delete[] buf;
+        return E_FILE_FORMAT_INVALID;
+      }
+
       compression->settings = buf;
       compression->settings_len = buflen;
     }
@@ -5015,7 +5018,7 @@
   if (!reader || *mm)
     return false;
 
-  std::auto_ptr<MasteringMetadata> mm_ptr(new MasteringMetadata());
+  std::unique_ptr<MasteringMetadata> mm_ptr(new MasteringMetadata());
   if (!mm_ptr.get())
     return false;
 
@@ -5035,6 +5038,10 @@
       double value = 0;
       const long long value_parse_status =
           UnserializeFloat(reader, read_pos, child_size, value);
+      if (value < -FLT_MAX || value > FLT_MAX ||
+          (value > 0.0 && value < FLT_MIN)) {
+        return false;
+      }
       mm_ptr->luminance_max = static_cast<float>(value);
       if (value_parse_status < 0 || mm_ptr->luminance_max < 0.0 ||
           mm_ptr->luminance_max > 9999.99) {
@@ -5044,6 +5051,10 @@
       double value = 0;
       const long long value_parse_status =
           UnserializeFloat(reader, read_pos, child_size, value);
+      if (value < -FLT_MAX || value > FLT_MAX ||
+          (value > 0.0 && value < FLT_MIN)) {
+        return false;
+      }
       mm_ptr->luminance_min = static_cast<float>(value);
       if (value_parse_status < 0 || mm_ptr->luminance_min < 0.0 ||
           mm_ptr->luminance_min > 999.9999) {
@@ -5096,7 +5107,7 @@
   if (!reader || *colour)
     return false;
 
-  std::auto_ptr<Colour> colour_ptr(new Colour());
+  std::unique_ptr<Colour> colour_ptr(new Colour());
   if (!colour_ptr.get())
     return false;
 
@@ -5194,7 +5205,7 @@
   if (!reader || *projection)
     return false;
 
-  std::auto_ptr<Projection> projection_ptr(new Projection());
+  std::unique_ptr<Projection> projection_ptr(new Projection());
   if (!projection_ptr.get())
     return false;
 
@@ -5270,6 +5281,7 @@
 VideoTrack::VideoTrack(Segment* pSegment, long long element_start,
                        long long element_size)
     : Track(pSegment, element_start, element_size),
+      m_colour_space(NULL),
       m_colour(NULL),
       m_projection(NULL) {}
 
@@ -5295,6 +5307,7 @@
   long long stereo_mode = 0;
 
   double rate = 0.0;
+  char* colour_space = NULL;
 
   IMkvReader* const pReader = pSegment->m_pReader;
 
@@ -5307,8 +5320,8 @@
 
   const long long stop = pos + s.size;
 
-  Colour* colour = NULL;
-  Projection* projection = NULL;
+  std::unique_ptr<Colour> colour_ptr;
+  std::unique_ptr<Projection> projection_ptr;
 
   while (pos < stop) {
     long long id, size;
@@ -5357,11 +5370,23 @@
       if (rate <= 0)
         return E_FILE_FORMAT_INVALID;
     } else if (id == libwebm::kMkvColour) {
-      if (!Colour::Parse(pReader, pos, size, &colour))
+      Colour* colour = NULL;
+      if (!Colour::Parse(pReader, pos, size, &colour)) {
         return E_FILE_FORMAT_INVALID;
+      } else {
+        colour_ptr.reset(colour);
+      }
     } else if (id == libwebm::kMkvProjection) {
-      if (!Projection::Parse(pReader, pos, size, &projection))
+      Projection* projection = NULL;
+      if (!Projection::Parse(pReader, pos, size, &projection)) {
         return E_FILE_FORMAT_INVALID;
+      } else {
+        projection_ptr.reset(projection);
+      }
+    } else if (id == libwebm::kMkvColourSpace) {
+      const long status = UnserializeString(pReader, pos, size, colour_space);
+      if (status < 0)
+        return status;
     }
 
     pos += size;  // consume payload
@@ -5392,8 +5417,9 @@
   pTrack->m_display_unit = display_unit;
   pTrack->m_stereo_mode = stereo_mode;
   pTrack->m_rate = rate;
-  pTrack->m_colour = colour;
-  pTrack->m_projection = projection;
+  pTrack->m_colour = colour_ptr.release();
+  pTrack->m_colour_space = colour_space;
+  pTrack->m_projection = projection_ptr.release();
 
   pResult = pTrack;
   return 0;  // success
@@ -7903,6 +7929,10 @@
         return E_FILE_FORMAT_INVALID;
 
       curr.len = static_cast<long>(frame_size);
+      // Check if size + curr.len could overflow.
+      if (size > LLONG_MAX - curr.len) {
+        return E_FILE_FORMAT_INVALID;
+      }
       size += curr.len;  // contribution of this frame
 
       --frame_count;
@@ -7964,6 +7994,11 @@
   const long long tc0 = pCluster->GetTimeCode();
   assert(tc0 >= 0);
 
+  // Check if tc0 + m_timecode would overflow.
+  if (tc0 < 0 || LLONG_MAX - tc0 < m_timecode) {
+    return -1;
+  }
+
   const long long tc = tc0 + m_timecode;
 
   return tc;  // unscaled timecode units
@@ -7981,6 +8016,10 @@
   const long long scale = pInfo->GetTimeCodeScale();
   assert(scale >= 1);
 
+  // Check if tc * scale could overflow.
+  if (tc != 0 && scale > LLONG_MAX / tc) {
+    return -1;
+  }
   const long long ns = tc * scale;
 
   return ns;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvparser/mkvparser.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvparser/mkvparser.h
index 26c2b7e..848d01f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvparser/mkvparser.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvparser/mkvparser.h
@@ -22,7 +22,7 @@
   virtual int Length(long long* total, long long* available) = 0;
 
  protected:
-  virtual ~IMkvReader();
+  virtual ~IMkvReader() {}
 };
 
 template <typename Type>
@@ -527,6 +527,8 @@
 
   Projection* GetProjection() const;
 
+  const char* GetColourSpace() const { return m_colour_space; }
+
  private:
   long long m_width;
   long long m_height;
@@ -534,7 +536,7 @@
   long long m_display_height;
   long long m_display_unit;
   long long m_stereo_mode;
-
+  char* m_colour_space;
   double m_rate;
 
   Colour* m_colour;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvparser/mkvreader.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvparser/mkvreader.cc
index 23d68f5..9d19c1b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvparser/mkvreader.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libwebm/mkvparser/mkvreader.cc
@@ -118,6 +118,8 @@
 
   if (status)
     return -1;  // error
+#elif defined(_WIN32)
+  fseeko64(m_file, static_cast<off_t>(offset), SEEK_SET);
 #else
   fseeko(m_file, static_cast<off_t>(offset), SEEK_SET);
 #endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/LICENSE b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/LICENSE
new file mode 100644
index 0000000..c911747
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/LICENSE
@@ -0,0 +1,29 @@
+Copyright 2011 The LibYuv Project Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+  * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+
+  * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+
+  * Neither the name of Google nor the names of its contributors may
+    be used to endorse or promote products derived from this software
+    without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/README.libvpx b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/README.libvpx
index 485f79c..9519dc4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/README.libvpx
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/README.libvpx
@@ -1,6 +1,6 @@
 Name: libyuv
 URL: https://chromium.googlesource.com/libyuv/libyuv
-Version: de944ed8c74909ea6fbd743a22efe1e55e851b83
+Version: a37e7bfece9e0676ae90a1700b0ec85b0f4f22a1
 License: BSD
 License File: LICENSE
 
@@ -8,15 +8,16 @@
 libyuv is an open source project that includes YUV conversion and scaling
 functionality.
 
-The optimized scaler in libyuv is used in multiple resolution encoder example,
-which down-samples the original input video (f.g. 1280x720) a number of times
-in order to encode multiple resolution bit streams.
+The optimized scaler in libyuv is used in the multiple resolution encoder
+example which down-samples the original input video (f.g. 1280x720) a number of
+times in order to encode multiple resolution bit streams.
 
 Local Modifications:
-rm -rf .gitignore .gn AUTHORS Android.mk BUILD.gn CMakeLists.txt DEPS LICENSE \
-  LICENSE_THIRD_PARTY OWNERS PATENTS PRESUBMIT.py README.chromium README.md \
-  all.gyp build_overrides/ chromium/ codereview.settings docs/ \
-  download_vs_toolchain.py gyp_libyuv gyp_libyuv.py include/libyuv.h \
-  include/libyuv/compare_row.h libyuv.gyp libyuv.gypi libyuv_nacl.gyp \
-  libyuv_test.gyp linux.mk public.mk setup_links.py sync_chromium.py \
-  third_party/ tools/ unit_test/ util/ winarm.mk
+Disable ARGBToRGB24Row_AVX512VBMI due to build failure on Mac.
+rm libyuv/include/libyuv.h libyuv/include/libyuv/compare_row.h
+mv libyuv/include tmp/
+mv libyuv/source tmp/
+mv libyuv/LICENSE tmp/
+rm -rf libyuv
+
+mv tmp/* third_party/libyuv/
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/basic_types.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/basic_types.h
index 54a2181..01d9dfc 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/basic_types.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/basic_types.h
@@ -8,82 +8,36 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_BASIC_TYPES_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_BASIC_TYPES_H_
 #define INCLUDE_LIBYUV_BASIC_TYPES_H_
 
-#include <stddef.h>  // for NULL, size_t
+#include <stddef.h>  // For size_t and NULL
+
+#if !defined(INT_TYPES_DEFINED) && !defined(GG_LONGLONG)
+#define INT_TYPES_DEFINED
 
 #if defined(_MSC_VER) && (_MSC_VER < 1600)
 #include <sys/types.h>  // for uintptr_t on x86
+typedef unsigned __int64 uint64_t;
+typedef __int64 int64_t;
+typedef unsigned int uint32_t;
+typedef int int32_t;
+typedef unsigned short uint16_t;
+typedef short int16_t;
+typedef unsigned char uint8_t;
+typedef signed char int8_t;
 #else
-#include <stdint.h>  // for uintptr_t
-#endif
-
-#ifndef GG_LONGLONG
-#ifndef INT_TYPES_DEFINED
-#define INT_TYPES_DEFINED
-#ifdef COMPILER_MSVC
-typedef unsigned __int64 uint64;
-typedef __int64 int64;
-#ifndef INT64_C
-#define INT64_C(x) x ## I64
-#endif
-#ifndef UINT64_C
-#define UINT64_C(x) x ## UI64
-#endif
-#define INT64_F "I64"
-#else  // COMPILER_MSVC
-#if defined(__LP64__) && !defined(__OpenBSD__) && !defined(__APPLE__)
-typedef unsigned long uint64;  // NOLINT
-typedef long int64;  // NOLINT
-#ifndef INT64_C
-#define INT64_C(x) x ## L
-#endif
-#ifndef UINT64_C
-#define UINT64_C(x) x ## UL
-#endif
-#define INT64_F "l"
-#else  // defined(__LP64__) && !defined(__OpenBSD__) && !defined(__APPLE__)
-typedef unsigned long long uint64;  // NOLINT
-typedef long long int64;  // NOLINT
-#ifndef INT64_C
-#define INT64_C(x) x ## LL
-#endif
-#ifndef UINT64_C
-#define UINT64_C(x) x ## ULL
-#endif
-#define INT64_F "ll"
-#endif  // __LP64__
-#endif  // COMPILER_MSVC
-typedef unsigned int uint32;
-typedef int int32;
-typedef unsigned short uint16;  // NOLINT
-typedef short int16;  // NOLINT
-typedef unsigned char uint8;
-typedef signed char int8;
+#include <stdint.h>  // for uintptr_t and C99 types
+#endif               // defined(_MSC_VER) && (_MSC_VER < 1600)
+typedef uint64_t uint64;
+typedef int64_t int64;
+typedef uint32_t uint32;
+typedef int32_t int32;
+typedef uint16_t uint16;
+typedef int16_t int16;
+typedef uint8_t uint8;
+typedef int8_t int8;
 #endif  // INT_TYPES_DEFINED
-#endif  // GG_LONGLONG
-
-// Detect compiler is for x86 or x64.
-#if defined(__x86_64__) || defined(_M_X64) || \
-    defined(__i386__) || defined(_M_IX86)
-#define CPU_X86 1
-#endif
-// Detect compiler is for ARM.
-#if defined(__arm__) || defined(_M_ARM)
-#define CPU_ARM 1
-#endif
-
-#ifndef ALIGNP
-#ifdef __cplusplus
-#define ALIGNP(p, t) \
-    (reinterpret_cast<uint8*>(((reinterpret_cast<uintptr_t>(p) + \
-    ((t) - 1)) & ~((t) - 1))))
-#else
-#define ALIGNP(p, t) \
-    ((uint8*)((((uintptr_t)(p) + ((t) - 1)) & ~((t) - 1))))  /* NOLINT */
-#endif
-#endif
 
 #if !defined(LIBYUV_API)
 #if defined(_WIN32) || defined(__CYGWIN__)
@@ -95,24 +49,17 @@
 #define LIBYUV_API
 #endif  // LIBYUV_BUILDING_SHARED_LIBRARY
 #elif defined(__GNUC__) && (__GNUC__ >= 4) && !defined(__APPLE__) && \
-    (defined(LIBYUV_BUILDING_SHARED_LIBRARY) || \
-    defined(LIBYUV_USING_SHARED_LIBRARY))
-#define LIBYUV_API __attribute__ ((visibility ("default")))
+    (defined(LIBYUV_BUILDING_SHARED_LIBRARY) ||                      \
+     defined(LIBYUV_USING_SHARED_LIBRARY))
+#define LIBYUV_API __attribute__((visibility("default")))
 #else
 #define LIBYUV_API
 #endif  // __GNUC__
 #endif  // LIBYUV_API
 
+// TODO(fbarchard): Remove bool macros.
 #define LIBYUV_BOOL int
 #define LIBYUV_FALSE 0
 #define LIBYUV_TRUE 1
 
-// Visual C x86 or GCC little endian.
-#if defined(__x86_64__) || defined(_M_X64) || \
-  defined(__i386__) || defined(_M_IX86) || \
-  defined(__arm__) || defined(_M_ARM) || \
-  (defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
-#define LIBYUV_LITTLE_ENDIAN
-#endif
-
-#endif  // INCLUDE_LIBYUV_BASIC_TYPES_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_BASIC_TYPES_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/compare.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/compare.h
index 08b2bb2..3353ad7 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/compare.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/compare.h
@@ -8,7 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_COMPARE_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_COMPARE_H_
 #define INCLUDE_LIBYUV_COMPARE_H_
 
 #include "libyuv/basic_types.h"
@@ -20,59 +20,92 @@
 
 // Compute a hash for specified memory. Seed of 5381 recommended.
 LIBYUV_API
-uint32 HashDjb2(const uint8* src, uint64 count, uint32 seed);
+uint32_t HashDjb2(const uint8_t* src, uint64_t count, uint32_t seed);
+
+// Hamming Distance
+LIBYUV_API
+uint64_t ComputeHammingDistance(const uint8_t* src_a,
+                                const uint8_t* src_b,
+                                int count);
 
 // Scan an opaque argb image and return fourcc based on alpha offset.
 // Returns FOURCC_ARGB, FOURCC_BGRA, or 0 if unknown.
 LIBYUV_API
-uint32 ARGBDetect(const uint8* argb, int stride_argb, int width, int height);
+uint32_t ARGBDetect(const uint8_t* argb,
+                    int stride_argb,
+                    int width,
+                    int height);
 
 // Sum Square Error - used to compute Mean Square Error or PSNR.
 LIBYUV_API
-uint64 ComputeSumSquareError(const uint8* src_a,
-                             const uint8* src_b, int count);
+uint64_t ComputeSumSquareError(const uint8_t* src_a,
+                               const uint8_t* src_b,
+                               int count);
 
 LIBYUV_API
-uint64 ComputeSumSquareErrorPlane(const uint8* src_a, int stride_a,
-                                  const uint8* src_b, int stride_b,
-                                  int width, int height);
+uint64_t ComputeSumSquareErrorPlane(const uint8_t* src_a,
+                                    int stride_a,
+                                    const uint8_t* src_b,
+                                    int stride_b,
+                                    int width,
+                                    int height);
 
 static const int kMaxPsnr = 128;
 
 LIBYUV_API
-double SumSquareErrorToPsnr(uint64 sse, uint64 count);
+double SumSquareErrorToPsnr(uint64_t sse, uint64_t count);
 
 LIBYUV_API
-double CalcFramePsnr(const uint8* src_a, int stride_a,
-                     const uint8* src_b, int stride_b,
-                     int width, int height);
+double CalcFramePsnr(const uint8_t* src_a,
+                     int stride_a,
+                     const uint8_t* src_b,
+                     int stride_b,
+                     int width,
+                     int height);
 
 LIBYUV_API
-double I420Psnr(const uint8* src_y_a, int stride_y_a,
-                const uint8* src_u_a, int stride_u_a,
-                const uint8* src_v_a, int stride_v_a,
-                const uint8* src_y_b, int stride_y_b,
-                const uint8* src_u_b, int stride_u_b,
-                const uint8* src_v_b, int stride_v_b,
-                int width, int height);
+double I420Psnr(const uint8_t* src_y_a,
+                int stride_y_a,
+                const uint8_t* src_u_a,
+                int stride_u_a,
+                const uint8_t* src_v_a,
+                int stride_v_a,
+                const uint8_t* src_y_b,
+                int stride_y_b,
+                const uint8_t* src_u_b,
+                int stride_u_b,
+                const uint8_t* src_v_b,
+                int stride_v_b,
+                int width,
+                int height);
 
 LIBYUV_API
-double CalcFrameSsim(const uint8* src_a, int stride_a,
-                     const uint8* src_b, int stride_b,
-                     int width, int height);
+double CalcFrameSsim(const uint8_t* src_a,
+                     int stride_a,
+                     const uint8_t* src_b,
+                     int stride_b,
+                     int width,
+                     int height);
 
 LIBYUV_API
-double I420Ssim(const uint8* src_y_a, int stride_y_a,
-                const uint8* src_u_a, int stride_u_a,
-                const uint8* src_v_a, int stride_v_a,
-                const uint8* src_y_b, int stride_y_b,
-                const uint8* src_u_b, int stride_u_b,
-                const uint8* src_v_b, int stride_v_b,
-                int width, int height);
+double I420Ssim(const uint8_t* src_y_a,
+                int stride_y_a,
+                const uint8_t* src_u_a,
+                int stride_u_a,
+                const uint8_t* src_v_a,
+                int stride_v_a,
+                const uint8_t* src_y_b,
+                int stride_y_b,
+                const uint8_t* src_u_b,
+                int stride_u_b,
+                const uint8_t* src_v_b,
+                int stride_v_b,
+                int width,
+                int height);
 
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
 #endif
 
-#endif  // INCLUDE_LIBYUV_COMPARE_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_COMPARE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert.h
index fcfcf54..d12ef24 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert.h
@@ -8,7 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_CONVERT_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_CONVERT_H_
 #define INCLUDE_LIBYUV_CONVERT_H_
 
 #include "libyuv/basic_types.h"
@@ -16,8 +16,8 @@
 #include "libyuv/rotate.h"  // For enum RotationMode.
 
 // TODO(fbarchard): fix WebRTC source to include following libyuv headers:
-#include "libyuv/convert_argb.h"  // For WebRTC I420ToARGB. b/620
-#include "libyuv/convert_from.h"  // For WebRTC ConvertFromI420. b/620
+#include "libyuv/convert_argb.h"      // For WebRTC I420ToARGB. b/620
+#include "libyuv/convert_from.h"      // For WebRTC ConvertFromI420. b/620
 #include "libyuv/planar_functions.h"  // For WebRTC I420Rect, CopyPlane. b/618
 
 #ifdef __cplusplus
@@ -27,195 +27,335 @@
 
 // Convert I444 to I420.
 LIBYUV_API
-int I444ToI420(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int I444ToI420(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // Convert I422 to I420.
 LIBYUV_API
-int I422ToI420(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
-
-// Convert I411 to I420.
-LIBYUV_API
-int I411ToI420(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int I422ToI420(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // Copy I420 to I420.
 #define I420ToI420 I420Copy
 LIBYUV_API
-int I420Copy(const uint8* src_y, int src_stride_y,
-             const uint8* src_u, int src_stride_u,
-             const uint8* src_v, int src_stride_v,
-             uint8* dst_y, int dst_stride_y,
-             uint8* dst_u, int dst_stride_u,
-             uint8* dst_v, int dst_stride_v,
-             int width, int height);
+int I420Copy(const uint8_t* src_y,
+             int src_stride_y,
+             const uint8_t* src_u,
+             int src_stride_u,
+             const uint8_t* src_v,
+             int src_stride_v,
+             uint8_t* dst_y,
+             int dst_stride_y,
+             uint8_t* dst_u,
+             int dst_stride_u,
+             uint8_t* dst_v,
+             int dst_stride_v,
+             int width,
+             int height);
+
+// Copy I010 to I010
+#define I010ToI010 I010Copy
+#define H010ToH010 I010Copy
+LIBYUV_API
+int I010Copy(const uint16_t* src_y,
+             int src_stride_y,
+             const uint16_t* src_u,
+             int src_stride_u,
+             const uint16_t* src_v,
+             int src_stride_v,
+             uint16_t* dst_y,
+             int dst_stride_y,
+             uint16_t* dst_u,
+             int dst_stride_u,
+             uint16_t* dst_v,
+             int dst_stride_v,
+             int width,
+             int height);
+
+// Convert 10 bit YUV to 8 bit
+#define H010ToH420 I010ToI420
+LIBYUV_API
+int I010ToI420(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // Convert I400 (grey) to I420.
 LIBYUV_API
-int I400ToI420(const uint8* src_y, int src_stride_y,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int I400ToI420(const uint8_t* src_y,
+               int src_stride_y,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 #define J400ToJ420 I400ToI420
 
 // Convert NV12 to I420.
 LIBYUV_API
-int NV12ToI420(const uint8* src_y, int src_stride_y,
-               const uint8* src_uv, int src_stride_uv,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int NV12ToI420(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_uv,
+               int src_stride_uv,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // Convert NV21 to I420.
 LIBYUV_API
-int NV21ToI420(const uint8* src_y, int src_stride_y,
-               const uint8* src_vu, int src_stride_vu,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int NV21ToI420(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_vu,
+               int src_stride_vu,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // Convert YUY2 to I420.
 LIBYUV_API
-int YUY2ToI420(const uint8* src_yuy2, int src_stride_yuy2,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int YUY2ToI420(const uint8_t* src_yuy2,
+               int src_stride_yuy2,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // Convert UYVY to I420.
 LIBYUV_API
-int UYVYToI420(const uint8* src_uyvy, int src_stride_uyvy,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int UYVYToI420(const uint8_t* src_uyvy,
+               int src_stride_uyvy,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // Convert M420 to I420.
 LIBYUV_API
-int M420ToI420(const uint8* src_m420, int src_stride_m420,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int M420ToI420(const uint8_t* src_m420,
+               int src_stride_m420,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // Convert Android420 to I420.
 LIBYUV_API
-int Android420ToI420(const uint8* src_y, int src_stride_y,
-                     const uint8* src_u, int src_stride_u,
-                     const uint8* src_v, int src_stride_v,
-                     int pixel_stride_uv,
-                     uint8* dst_y, int dst_stride_y,
-                     uint8* dst_u, int dst_stride_u,
-                     uint8* dst_v, int dst_stride_v,
-                     int width, int height);
+int Android420ToI420(const uint8_t* src_y,
+                     int src_stride_y,
+                     const uint8_t* src_u,
+                     int src_stride_u,
+                     const uint8_t* src_v,
+                     int src_stride_v,
+                     int src_pixel_stride_uv,
+                     uint8_t* dst_y,
+                     int dst_stride_y,
+                     uint8_t* dst_u,
+                     int dst_stride_u,
+                     uint8_t* dst_v,
+                     int dst_stride_v,
+                     int width,
+                     int height);
 
 // ARGB little endian (bgra in memory) to I420.
 LIBYUV_API
-int ARGBToI420(const uint8* src_frame, int src_stride_frame,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int ARGBToI420(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // BGRA little endian (argb in memory) to I420.
 LIBYUV_API
-int BGRAToI420(const uint8* src_frame, int src_stride_frame,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int BGRAToI420(const uint8_t* src_bgra,
+               int src_stride_bgra,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // ABGR little endian (rgba in memory) to I420.
 LIBYUV_API
-int ABGRToI420(const uint8* src_frame, int src_stride_frame,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int ABGRToI420(const uint8_t* src_abgr,
+               int src_stride_abgr,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // RGBA little endian (abgr in memory) to I420.
 LIBYUV_API
-int RGBAToI420(const uint8* src_frame, int src_stride_frame,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int RGBAToI420(const uint8_t* src_rgba,
+               int src_stride_rgba,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // RGB little endian (bgr in memory) to I420.
 LIBYUV_API
-int RGB24ToI420(const uint8* src_frame, int src_stride_frame,
-                uint8* dst_y, int dst_stride_y,
-                uint8* dst_u, int dst_stride_u,
-                uint8* dst_v, int dst_stride_v,
-                int width, int height);
+int RGB24ToI420(const uint8_t* src_rgb24,
+                int src_stride_rgb24,
+                uint8_t* dst_y,
+                int dst_stride_y,
+                uint8_t* dst_u,
+                int dst_stride_u,
+                uint8_t* dst_v,
+                int dst_stride_v,
+                int width,
+                int height);
 
 // RGB big endian (rgb in memory) to I420.
 LIBYUV_API
-int RAWToI420(const uint8* src_frame, int src_stride_frame,
-              uint8* dst_y, int dst_stride_y,
-              uint8* dst_u, int dst_stride_u,
-              uint8* dst_v, int dst_stride_v,
-              int width, int height);
+int RAWToI420(const uint8_t* src_raw,
+              int src_stride_raw,
+              uint8_t* dst_y,
+              int dst_stride_y,
+              uint8_t* dst_u,
+              int dst_stride_u,
+              uint8_t* dst_v,
+              int dst_stride_v,
+              int width,
+              int height);
 
 // RGB16 (RGBP fourcc) little endian to I420.
 LIBYUV_API
-int RGB565ToI420(const uint8* src_frame, int src_stride_frame,
-                 uint8* dst_y, int dst_stride_y,
-                 uint8* dst_u, int dst_stride_u,
-                 uint8* dst_v, int dst_stride_v,
-                 int width, int height);
+int RGB565ToI420(const uint8_t* src_rgb565,
+                 int src_stride_rgb565,
+                 uint8_t* dst_y,
+                 int dst_stride_y,
+                 uint8_t* dst_u,
+                 int dst_stride_u,
+                 uint8_t* dst_v,
+                 int dst_stride_v,
+                 int width,
+                 int height);
 
 // RGB15 (RGBO fourcc) little endian to I420.
 LIBYUV_API
-int ARGB1555ToI420(const uint8* src_frame, int src_stride_frame,
-                   uint8* dst_y, int dst_stride_y,
-                   uint8* dst_u, int dst_stride_u,
-                   uint8* dst_v, int dst_stride_v,
-                   int width, int height);
+int ARGB1555ToI420(const uint8_t* src_argb1555,
+                   int src_stride_argb1555,
+                   uint8_t* dst_y,
+                   int dst_stride_y,
+                   uint8_t* dst_u,
+                   int dst_stride_u,
+                   uint8_t* dst_v,
+                   int dst_stride_v,
+                   int width,
+                   int height);
 
 // RGB12 (R444 fourcc) little endian to I420.
 LIBYUV_API
-int ARGB4444ToI420(const uint8* src_frame, int src_stride_frame,
-                   uint8* dst_y, int dst_stride_y,
-                   uint8* dst_u, int dst_stride_u,
-                   uint8* dst_v, int dst_stride_v,
-                   int width, int height);
+int ARGB4444ToI420(const uint8_t* src_argb4444,
+                   int src_stride_argb4444,
+                   uint8_t* dst_y,
+                   int dst_stride_y,
+                   uint8_t* dst_u,
+                   int dst_stride_u,
+                   uint8_t* dst_v,
+                   int dst_stride_v,
+                   int width,
+                   int height);
 
 #ifdef HAVE_JPEG
 // src_width/height provided by capture.
 // dst_width/height for clipping determine final size.
 LIBYUV_API
-int MJPGToI420(const uint8* sample, size_t sample_size,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int src_width, int src_height,
-               int dst_width, int dst_height);
+int MJPGToI420(const uint8_t* sample,
+               size_t sample_size,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int src_width,
+               int src_height,
+               int dst_width,
+               int dst_height);
 
 // Query size of MJPG in pixels.
 LIBYUV_API
-int MJPGSize(const uint8* sample, size_t sample_size,
-             int* width, int* height);
+int MJPGSize(const uint8_t* sample,
+             size_t sample_size,
+             int* width,
+             int* height);
 #endif
 
 // Convert camera sample to I420 with cropping, rotation and vertical flip.
@@ -238,22 +378,29 @@
 //    Must be less than or equal to src_width/src_height
 //    Cropping parameters are pre-rotation.
 // "rotation" can be 0, 90, 180 or 270.
-// "format" is a fourcc. ie 'I420', 'YUY2'
+// "fourcc" is a fourcc. ie 'I420', 'YUY2'
 // Returns 0 for successful; -1 for invalid parameter. Non-zero for failure.
 LIBYUV_API
-int ConvertToI420(const uint8* src_frame, size_t src_size,
-                  uint8* dst_y, int dst_stride_y,
-                  uint8* dst_u, int dst_stride_u,
-                  uint8* dst_v, int dst_stride_v,
-                  int crop_x, int crop_y,
-                  int src_width, int src_height,
-                  int crop_width, int crop_height,
+int ConvertToI420(const uint8_t* sample,
+                  size_t sample_size,
+                  uint8_t* dst_y,
+                  int dst_stride_y,
+                  uint8_t* dst_u,
+                  int dst_stride_u,
+                  uint8_t* dst_v,
+                  int dst_stride_v,
+                  int crop_x,
+                  int crop_y,
+                  int src_width,
+                  int src_height,
+                  int crop_width,
+                  int crop_height,
                   enum RotationMode rotation,
-                  uint32 format);
+                  uint32_t fourcc);
 
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
 #endif
 
-#endif  // INCLUDE_LIBYUV_CONVERT_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_CONVERT_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert_argb.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert_argb.h
index 19672f3..ab772b6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert_argb.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert_argb.h
@@ -8,7 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_CONVERT_ARGB_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_CONVERT_ARGB_H_
 #define INCLUDE_LIBYUV_CONVERT_ARGB_H_
 
 #include "libyuv/basic_types.h"
@@ -30,258 +30,621 @@
 
 // Copy ARGB to ARGB.
 LIBYUV_API
-int ARGBCopy(const uint8* src_argb, int src_stride_argb,
-             uint8* dst_argb, int dst_stride_argb,
-             int width, int height);
+int ARGBCopy(const uint8_t* src_argb,
+             int src_stride_argb,
+             uint8_t* dst_argb,
+             int dst_stride_argb,
+             int width,
+             int height);
 
 // Convert I420 to ARGB.
 LIBYUV_API
-int I420ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int I420ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // Duplicate prototype for function in convert_from.h for remoting.
 LIBYUV_API
-int I420ToABGR(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int I420ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height);
+
+// Convert I010 to ARGB.
+LIBYUV_API
+int I010ToARGB(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
+
+// Convert I010 to ARGB.
+LIBYUV_API
+int I010ToARGB(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
+
+// Convert I010 to ABGR.
+LIBYUV_API
+int I010ToABGR(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height);
+
+// Convert H010 to ARGB.
+LIBYUV_API
+int H010ToARGB(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
+
+// Convert H010 to ABGR.
+LIBYUV_API
+int H010ToABGR(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height);
 
 // Convert I422 to ARGB.
 LIBYUV_API
-int I422ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int I422ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // Convert I444 to ARGB.
 LIBYUV_API
-int I444ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int I444ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // Convert J444 to ARGB.
 LIBYUV_API
-int J444ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int J444ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // Convert I444 to ABGR.
 LIBYUV_API
-int I444ToABGR(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_abgr, int dst_stride_abgr,
-               int width, int height);
-
-// Convert I411 to ARGB.
-LIBYUV_API
-int I411ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int I444ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height);
 
 // Convert I420 with Alpha to preattenuated ARGB.
 LIBYUV_API
-int I420AlphaToARGB(const uint8* src_y, int src_stride_y,
-                    const uint8* src_u, int src_stride_u,
-                    const uint8* src_v, int src_stride_v,
-                    const uint8* src_a, int src_stride_a,
-                    uint8* dst_argb, int dst_stride_argb,
-                    int width, int height, int attenuate);
+int I420AlphaToARGB(const uint8_t* src_y,
+                    int src_stride_y,
+                    const uint8_t* src_u,
+                    int src_stride_u,
+                    const uint8_t* src_v,
+                    int src_stride_v,
+                    const uint8_t* src_a,
+                    int src_stride_a,
+                    uint8_t* dst_argb,
+                    int dst_stride_argb,
+                    int width,
+                    int height,
+                    int attenuate);
 
 // Convert I420 with Alpha to preattenuated ABGR.
 LIBYUV_API
-int I420AlphaToABGR(const uint8* src_y, int src_stride_y,
-                    const uint8* src_u, int src_stride_u,
-                    const uint8* src_v, int src_stride_v,
-                    const uint8* src_a, int src_stride_a,
-                    uint8* dst_abgr, int dst_stride_abgr,
-                    int width, int height, int attenuate);
+int I420AlphaToABGR(const uint8_t* src_y,
+                    int src_stride_y,
+                    const uint8_t* src_u,
+                    int src_stride_u,
+                    const uint8_t* src_v,
+                    int src_stride_v,
+                    const uint8_t* src_a,
+                    int src_stride_a,
+                    uint8_t* dst_abgr,
+                    int dst_stride_abgr,
+                    int width,
+                    int height,
+                    int attenuate);
 
 // Convert I400 (grey) to ARGB.  Reverse of ARGBToI400.
 LIBYUV_API
-int I400ToARGB(const uint8* src_y, int src_stride_y,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int I400ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // Convert J400 (jpeg grey) to ARGB.
 LIBYUV_API
-int J400ToARGB(const uint8* src_y, int src_stride_y,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int J400ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // Alias.
 #define YToARGB I400ToARGB
 
 // Convert NV12 to ARGB.
 LIBYUV_API
-int NV12ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_uv, int src_stride_uv,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int NV12ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_uv,
+               int src_stride_uv,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // Convert NV21 to ARGB.
 LIBYUV_API
-int NV21ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_vu, int src_stride_vu,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int NV21ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_vu,
+               int src_stride_vu,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
+
+// Convert NV12 to ABGR.
+int NV12ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_uv,
+               int src_stride_uv,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height);
+
+// Convert NV21 to ABGR.
+LIBYUV_API
+int NV21ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_vu,
+               int src_stride_vu,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height);
+
+// Convert NV12 to RGB24.
+LIBYUV_API
+int NV12ToRGB24(const uint8_t* src_y,
+                int src_stride_y,
+                const uint8_t* src_uv,
+                int src_stride_uv,
+                uint8_t* dst_rgb24,
+                int dst_stride_rgb24,
+                int width,
+                int height);
+
+// Convert NV21 to RGB24.
+LIBYUV_API
+int NV21ToRGB24(const uint8_t* src_y,
+                int src_stride_y,
+                const uint8_t* src_vu,
+                int src_stride_vu,
+                uint8_t* dst_rgb24,
+                int dst_stride_rgb24,
+                int width,
+                int height);
 
 // Convert M420 to ARGB.
 LIBYUV_API
-int M420ToARGB(const uint8* src_m420, int src_stride_m420,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int M420ToARGB(const uint8_t* src_m420,
+               int src_stride_m420,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // Convert YUY2 to ARGB.
 LIBYUV_API
-int YUY2ToARGB(const uint8* src_yuy2, int src_stride_yuy2,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int YUY2ToARGB(const uint8_t* src_yuy2,
+               int src_stride_yuy2,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // Convert UYVY to ARGB.
 LIBYUV_API
-int UYVYToARGB(const uint8* src_uyvy, int src_stride_uyvy,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int UYVYToARGB(const uint8_t* src_uyvy,
+               int src_stride_uyvy,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // Convert J420 to ARGB.
 LIBYUV_API
-int J420ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int J420ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // Convert J422 to ARGB.
 LIBYUV_API
-int J422ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int J422ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // Convert J420 to ABGR.
 LIBYUV_API
-int J420ToABGR(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_abgr, int dst_stride_abgr,
-               int width, int height);
+int J420ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height);
 
 // Convert J422 to ABGR.
 LIBYUV_API
-int J422ToABGR(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_abgr, int dst_stride_abgr,
-               int width, int height);
+int J422ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height);
 
 // Convert H420 to ARGB.
 LIBYUV_API
-int H420ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int H420ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // Convert H422 to ARGB.
 LIBYUV_API
-int H422ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int H422ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // Convert H420 to ABGR.
 LIBYUV_API
-int H420ToABGR(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_abgr, int dst_stride_abgr,
-               int width, int height);
+int H420ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height);
 
 // Convert H422 to ABGR.
 LIBYUV_API
-int H422ToABGR(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_abgr, int dst_stride_abgr,
-               int width, int height);
+int H422ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height);
+
+// Convert H010 to ARGB.
+LIBYUV_API
+int H010ToARGB(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
+
+// Convert I010 to AR30.
+LIBYUV_API
+int I010ToAR30(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_ar30,
+               int dst_stride_ar30,
+               int width,
+               int height);
+
+// Convert H010 to AR30.
+LIBYUV_API
+int H010ToAR30(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_ar30,
+               int dst_stride_ar30,
+               int width,
+               int height);
+
+// Convert I010 to AB30.
+LIBYUV_API
+int I010ToAB30(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_ab30,
+               int dst_stride_ab30,
+               int width,
+               int height);
+
+// Convert H010 to AB30.
+LIBYUV_API
+int H010ToAB30(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_ab30,
+               int dst_stride_ab30,
+               int width,
+               int height);
 
 // BGRA little endian (argb in memory) to ARGB.
 LIBYUV_API
-int BGRAToARGB(const uint8* src_frame, int src_stride_frame,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int BGRAToARGB(const uint8_t* src_bgra,
+               int src_stride_bgra,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // ABGR little endian (rgba in memory) to ARGB.
 LIBYUV_API
-int ABGRToARGB(const uint8* src_frame, int src_stride_frame,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int ABGRToARGB(const uint8_t* src_abgr,
+               int src_stride_abgr,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // RGBA little endian (abgr in memory) to ARGB.
 LIBYUV_API
-int RGBAToARGB(const uint8* src_frame, int src_stride_frame,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int RGBAToARGB(const uint8_t* src_rgba,
+               int src_stride_rgba,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 // Deprecated function name.
 #define BG24ToARGB RGB24ToARGB
 
 // RGB little endian (bgr in memory) to ARGB.
 LIBYUV_API
-int RGB24ToARGB(const uint8* src_frame, int src_stride_frame,
-                uint8* dst_argb, int dst_stride_argb,
-                int width, int height);
+int RGB24ToARGB(const uint8_t* src_rgb24,
+                int src_stride_rgb24,
+                uint8_t* dst_argb,
+                int dst_stride_argb,
+                int width,
+                int height);
 
 // RGB big endian (rgb in memory) to ARGB.
 LIBYUV_API
-int RAWToARGB(const uint8* src_frame, int src_stride_frame,
-              uint8* dst_argb, int dst_stride_argb,
-              int width, int height);
+int RAWToARGB(const uint8_t* src_raw,
+              int src_stride_raw,
+              uint8_t* dst_argb,
+              int dst_stride_argb,
+              int width,
+              int height);
 
 // RGB16 (RGBP fourcc) little endian to ARGB.
 LIBYUV_API
-int RGB565ToARGB(const uint8* src_frame, int src_stride_frame,
-                 uint8* dst_argb, int dst_stride_argb,
-                 int width, int height);
+int RGB565ToARGB(const uint8_t* src_rgb565,
+                 int src_stride_rgb565,
+                 uint8_t* dst_argb,
+                 int dst_stride_argb,
+                 int width,
+                 int height);
 
 // RGB15 (RGBO fourcc) little endian to ARGB.
 LIBYUV_API
-int ARGB1555ToARGB(const uint8* src_frame, int src_stride_frame,
-                   uint8* dst_argb, int dst_stride_argb,
-                   int width, int height);
+int ARGB1555ToARGB(const uint8_t* src_argb1555,
+                   int src_stride_argb1555,
+                   uint8_t* dst_argb,
+                   int dst_stride_argb,
+                   int width,
+                   int height);
 
 // RGB12 (R444 fourcc) little endian to ARGB.
 LIBYUV_API
-int ARGB4444ToARGB(const uint8* src_frame, int src_stride_frame,
-                   uint8* dst_argb, int dst_stride_argb,
-                   int width, int height);
+int ARGB4444ToARGB(const uint8_t* src_argb4444,
+                   int src_stride_argb4444,
+                   uint8_t* dst_argb,
+                   int dst_stride_argb,
+                   int width,
+                   int height);
+
+// Aliases
+#define AB30ToARGB AR30ToABGR
+#define AB30ToABGR AR30ToARGB
+#define AB30ToAR30 AR30ToAB30
+
+// Convert AR30 To ARGB.
+LIBYUV_API
+int AR30ToARGB(const uint8_t* src_ar30,
+               int src_stride_ar30,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
+
+// Convert AR30 To ABGR.
+LIBYUV_API
+int AR30ToABGR(const uint8_t* src_ar30,
+               int src_stride_ar30,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height);
+
+// Convert AR30 To AB30.
+LIBYUV_API
+int AR30ToAB30(const uint8_t* src_ar30,
+               int src_stride_ar30,
+               uint8_t* dst_ab30,
+               int dst_stride_ab30,
+               int width,
+               int height);
 
 #ifdef HAVE_JPEG
 // src_width/height provided by capture
 // dst_width/height for clipping determine final size.
 LIBYUV_API
-int MJPGToARGB(const uint8* sample, size_t sample_size,
-               uint8* dst_argb, int dst_stride_argb,
-               int src_width, int src_height,
-               int dst_width, int dst_height);
+int MJPGToARGB(const uint8_t* sample,
+               size_t sample_size,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int src_width,
+               int src_height,
+               int dst_width,
+               int dst_height);
 #endif
 
+// Convert Android420 to ARGB.
+LIBYUV_API
+int Android420ToARGB(const uint8_t* src_y,
+                     int src_stride_y,
+                     const uint8_t* src_u,
+                     int src_stride_u,
+                     const uint8_t* src_v,
+                     int src_stride_v,
+                     int src_pixel_stride_uv,
+                     uint8_t* dst_argb,
+                     int dst_stride_argb,
+                     int width,
+                     int height);
+
+// Convert Android420 to ABGR.
+LIBYUV_API
+int Android420ToABGR(const uint8_t* src_y,
+                     int src_stride_y,
+                     const uint8_t* src_u,
+                     int src_stride_u,
+                     const uint8_t* src_v,
+                     int src_stride_v,
+                     int src_pixel_stride_uv,
+                     uint8_t* dst_abgr,
+                     int dst_stride_abgr,
+                     int width,
+                     int height);
+
 // Convert camera sample to ARGB with cropping, rotation and vertical flip.
-// "src_size" is needed to parse MJPG.
+// "sample_size" is needed to parse MJPG.
 // "dst_stride_argb" number of bytes in a row of the dst_argb plane.
 //   Normally this would be the same as dst_width, with recommended alignment
 //   to 16 bytes for better efficiency.
@@ -300,20 +663,25 @@
 //    Must be less than or equal to src_width/src_height
 //    Cropping parameters are pre-rotation.
 // "rotation" can be 0, 90, 180 or 270.
-// "format" is a fourcc. ie 'I420', 'YUY2'
+// "fourcc" is a fourcc. ie 'I420', 'YUY2'
 // Returns 0 for successful; -1 for invalid parameter. Non-zero for failure.
 LIBYUV_API
-int ConvertToARGB(const uint8* src_frame, size_t src_size,
-                  uint8* dst_argb, int dst_stride_argb,
-                  int crop_x, int crop_y,
-                  int src_width, int src_height,
-                  int crop_width, int crop_height,
+int ConvertToARGB(const uint8_t* sample,
+                  size_t sample_size,
+                  uint8_t* dst_argb,
+                  int dst_stride_argb,
+                  int crop_x,
+                  int crop_y,
+                  int src_width,
+                  int src_height,
+                  int crop_width,
+                  int crop_height,
                   enum RotationMode rotation,
-                  uint32 format);
+                  uint32_t fourcc);
 
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
 #endif
 
-#endif  // INCLUDE_LIBYUV_CONVERT_ARGB_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_CONVERT_ARGB_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert_from.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert_from.h
index 39e1578..5cd8a4b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert_from.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert_from.h
@@ -8,7 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_CONVERT_FROM_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_CONVERT_FROM_H_
 #define INCLUDE_LIBYUV_CONVERT_FROM_H_
 
 #include "libyuv/basic_types.h"
@@ -21,159 +21,322 @@
 
 // See Also convert.h for conversions from formats to I420.
 
-// I420Copy in convert to I420ToI420.
+// Convert 8 bit YUV to 10 bit.
+#define H420ToH010 I420ToI010
+int I420ToI010(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint16_t* dst_y,
+               int dst_stride_y,
+               uint16_t* dst_u,
+               int dst_stride_u,
+               uint16_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 LIBYUV_API
-int I420ToI422(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int I420ToI422(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 LIBYUV_API
-int I420ToI444(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
-
-LIBYUV_API
-int I420ToI411(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int I420ToI444(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // Copy to I400. Source can be I420, I422, I444, I400, NV12 or NV21.
 LIBYUV_API
-int I400Copy(const uint8* src_y, int src_stride_y,
-             uint8* dst_y, int dst_stride_y,
-             int width, int height);
+int I400Copy(const uint8_t* src_y,
+             int src_stride_y,
+             uint8_t* dst_y,
+             int dst_stride_y,
+             int width,
+             int height);
 
 LIBYUV_API
-int I420ToNV12(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_uv, int dst_stride_uv,
-               int width, int height);
+int I420ToNV12(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_uv,
+               int dst_stride_uv,
+               int width,
+               int height);
 
 LIBYUV_API
-int I420ToNV21(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_vu, int dst_stride_vu,
-               int width, int height);
+int I420ToNV21(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_vu,
+               int dst_stride_vu,
+               int width,
+               int height);
 
 LIBYUV_API
-int I420ToYUY2(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_frame, int dst_stride_frame,
-               int width, int height);
+int I420ToYUY2(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_yuy2,
+               int dst_stride_yuy2,
+               int width,
+               int height);
 
 LIBYUV_API
-int I420ToUYVY(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_frame, int dst_stride_frame,
-               int width, int height);
+int I420ToUYVY(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_uyvy,
+               int dst_stride_uyvy,
+               int width,
+               int height);
 
 LIBYUV_API
-int I420ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int I420ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
 
 LIBYUV_API
-int I420ToBGRA(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int I420ToBGRA(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_bgra,
+               int dst_stride_bgra,
+               int width,
+               int height);
 
 LIBYUV_API
-int I420ToABGR(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
+int I420ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height);
 
 LIBYUV_API
-int I420ToRGBA(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_rgba, int dst_stride_rgba,
-               int width, int height);
+int I420ToRGBA(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_rgba,
+               int dst_stride_rgba,
+               int width,
+               int height);
 
 LIBYUV_API
-int I420ToRGB24(const uint8* src_y, int src_stride_y,
-                const uint8* src_u, int src_stride_u,
-                const uint8* src_v, int src_stride_v,
-                uint8* dst_frame, int dst_stride_frame,
-                int width, int height);
+int I420ToRGB24(const uint8_t* src_y,
+                int src_stride_y,
+                const uint8_t* src_u,
+                int src_stride_u,
+                const uint8_t* src_v,
+                int src_stride_v,
+                uint8_t* dst_rgb24,
+                int dst_stride_rgb24,
+                int width,
+                int height);
 
 LIBYUV_API
-int I420ToRAW(const uint8* src_y, int src_stride_y,
-              const uint8* src_u, int src_stride_u,
-              const uint8* src_v, int src_stride_v,
-              uint8* dst_frame, int dst_stride_frame,
-              int width, int height);
+int I420ToRAW(const uint8_t* src_y,
+              int src_stride_y,
+              const uint8_t* src_u,
+              int src_stride_u,
+              const uint8_t* src_v,
+              int src_stride_v,
+              uint8_t* dst_raw,
+              int dst_stride_raw,
+              int width,
+              int height);
 
 LIBYUV_API
-int I420ToRGB565(const uint8* src_y, int src_stride_y,
-                 const uint8* src_u, int src_stride_u,
-                 const uint8* src_v, int src_stride_v,
-                 uint8* dst_frame, int dst_stride_frame,
-                 int width, int height);
+int H420ToRGB24(const uint8_t* src_y,
+                int src_stride_y,
+                const uint8_t* src_u,
+                int src_stride_u,
+                const uint8_t* src_v,
+                int src_stride_v,
+                uint8_t* dst_rgb24,
+                int dst_stride_rgb24,
+                int width,
+                int height);
+
+LIBYUV_API
+int H420ToRAW(const uint8_t* src_y,
+              int src_stride_y,
+              const uint8_t* src_u,
+              int src_stride_u,
+              const uint8_t* src_v,
+              int src_stride_v,
+              uint8_t* dst_raw,
+              int dst_stride_raw,
+              int width,
+              int height);
+
+LIBYUV_API
+int I420ToRGB565(const uint8_t* src_y,
+                 int src_stride_y,
+                 const uint8_t* src_u,
+                 int src_stride_u,
+                 const uint8_t* src_v,
+                 int src_stride_v,
+                 uint8_t* dst_rgb565,
+                 int dst_stride_rgb565,
+                 int width,
+                 int height);
+
+LIBYUV_API
+int I422ToRGB565(const uint8_t* src_y,
+                 int src_stride_y,
+                 const uint8_t* src_u,
+                 int src_stride_u,
+                 const uint8_t* src_v,
+                 int src_stride_v,
+                 uint8_t* dst_rgb565,
+                 int dst_stride_rgb565,
+                 int width,
+                 int height);
 
 // Convert I420 To RGB565 with 4x4 dither matrix (16 bytes).
 // Values in dither matrix from 0 to 7 recommended.
 // The order of the dither matrix is first byte is upper left.
 
 LIBYUV_API
-int I420ToRGB565Dither(const uint8* src_y, int src_stride_y,
-                       const uint8* src_u, int src_stride_u,
-                       const uint8* src_v, int src_stride_v,
-                       uint8* dst_frame, int dst_stride_frame,
-                       const uint8* dither4x4, int width, int height);
+int I420ToRGB565Dither(const uint8_t* src_y,
+                       int src_stride_y,
+                       const uint8_t* src_u,
+                       int src_stride_u,
+                       const uint8_t* src_v,
+                       int src_stride_v,
+                       uint8_t* dst_rgb565,
+                       int dst_stride_rgb565,
+                       const uint8_t* dither4x4,
+                       int width,
+                       int height);
 
 LIBYUV_API
-int I420ToARGB1555(const uint8* src_y, int src_stride_y,
-                   const uint8* src_u, int src_stride_u,
-                   const uint8* src_v, int src_stride_v,
-                   uint8* dst_frame, int dst_stride_frame,
-                   int width, int height);
+int I420ToARGB1555(const uint8_t* src_y,
+                   int src_stride_y,
+                   const uint8_t* src_u,
+                   int src_stride_u,
+                   const uint8_t* src_v,
+                   int src_stride_v,
+                   uint8_t* dst_argb1555,
+                   int dst_stride_argb1555,
+                   int width,
+                   int height);
 
 LIBYUV_API
-int I420ToARGB4444(const uint8* src_y, int src_stride_y,
-                   const uint8* src_u, int src_stride_u,
-                   const uint8* src_v, int src_stride_v,
-                   uint8* dst_frame, int dst_stride_frame,
-                   int width, int height);
+int I420ToARGB4444(const uint8_t* src_y,
+                   int src_stride_y,
+                   const uint8_t* src_u,
+                   int src_stride_u,
+                   const uint8_t* src_v,
+                   int src_stride_v,
+                   uint8_t* dst_argb4444,
+                   int dst_stride_argb4444,
+                   int width,
+                   int height);
+
+// Convert I420 to AR30.
+LIBYUV_API
+int I420ToAR30(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_ar30,
+               int dst_stride_ar30,
+               int width,
+               int height);
+
+// Convert H420 to AR30.
+LIBYUV_API
+int H420ToAR30(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_ar30,
+               int dst_stride_ar30,
+               int width,
+               int height);
 
 // Convert I420 to specified format.
 // "dst_sample_stride" is bytes in a row for the destination. Pass 0 if the
 //    buffer has contiguous rows. Can be negative. A multiple of 16 is optimal.
 LIBYUV_API
-int ConvertFromI420(const uint8* y, int y_stride,
-                    const uint8* u, int u_stride,
-                    const uint8* v, int v_stride,
-                    uint8* dst_sample, int dst_sample_stride,
-                    int width, int height,
-                    uint32 format);
+int ConvertFromI420(const uint8_t* y,
+                    int y_stride,
+                    const uint8_t* u,
+                    int u_stride,
+                    const uint8_t* v,
+                    int v_stride,
+                    uint8_t* dst_sample,
+                    int dst_sample_stride,
+                    int width,
+                    int height,
+                    uint32_t fourcc);
 
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
 #endif
 
-#endif  // INCLUDE_LIBYUV_CONVERT_FROM_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_CONVERT_FROM_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert_from_argb.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert_from_argb.h
index 1df5320..05c815a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert_from_argb.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/convert_from_argb.h
@@ -8,7 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_CONVERT_FROM_ARGB_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_CONVERT_FROM_ARGB_H_
 #define INCLUDE_LIBYUV_CONVERT_FROM_ARGB_H_
 
 #include "libyuv/basic_types.h"
@@ -21,170 +21,267 @@
 // Copy ARGB to ARGB.
 #define ARGBToARGB ARGBCopy
 LIBYUV_API
-int ARGBCopy(const uint8* src_argb, int src_stride_argb,
-             uint8* dst_argb, int dst_stride_argb,
-             int width, int height);
+int ARGBCopy(const uint8_t* src_argb,
+             int src_stride_argb,
+             uint8_t* dst_argb,
+             int dst_stride_argb,
+             int width,
+             int height);
 
 // Convert ARGB To BGRA.
 LIBYUV_API
-int ARGBToBGRA(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_bgra, int dst_stride_bgra,
-               int width, int height);
+int ARGBToBGRA(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_bgra,
+               int dst_stride_bgra,
+               int width,
+               int height);
 
 // Convert ARGB To ABGR.
 LIBYUV_API
-int ARGBToABGR(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_abgr, int dst_stride_abgr,
-               int width, int height);
+int ARGBToABGR(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height);
 
 // Convert ARGB To RGBA.
 LIBYUV_API
-int ARGBToRGBA(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_rgba, int dst_stride_rgba,
-               int width, int height);
+int ARGBToRGBA(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_rgba,
+               int dst_stride_rgba,
+               int width,
+               int height);
+
+// Aliases
+#define ARGBToAB30 ABGRToAR30
+#define ABGRToAB30 ARGBToAR30
+
+// Convert ABGR To AR30.
+LIBYUV_API
+int ABGRToAR30(const uint8_t* src_abgr,
+               int src_stride_abgr,
+               uint8_t* dst_ar30,
+               int dst_stride_ar30,
+               int width,
+               int height);
+
+// Convert ARGB To AR30.
+LIBYUV_API
+int ARGBToAR30(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_ar30,
+               int dst_stride_ar30,
+               int width,
+               int height);
 
 // Convert ARGB To RGB24.
 LIBYUV_API
-int ARGBToRGB24(const uint8* src_argb, int src_stride_argb,
-                uint8* dst_rgb24, int dst_stride_rgb24,
-                int width, int height);
+int ARGBToRGB24(const uint8_t* src_argb,
+                int src_stride_argb,
+                uint8_t* dst_rgb24,
+                int dst_stride_rgb24,
+                int width,
+                int height);
 
 // Convert ARGB To RAW.
 LIBYUV_API
-int ARGBToRAW(const uint8* src_argb, int src_stride_argb,
-              uint8* dst_rgb, int dst_stride_rgb,
-              int width, int height);
+int ARGBToRAW(const uint8_t* src_argb,
+              int src_stride_argb,
+              uint8_t* dst_raw,
+              int dst_stride_raw,
+              int width,
+              int height);
 
 // Convert ARGB To RGB565.
 LIBYUV_API
-int ARGBToRGB565(const uint8* src_argb, int src_stride_argb,
-                 uint8* dst_rgb565, int dst_stride_rgb565,
-                 int width, int height);
+int ARGBToRGB565(const uint8_t* src_argb,
+                 int src_stride_argb,
+                 uint8_t* dst_rgb565,
+                 int dst_stride_rgb565,
+                 int width,
+                 int height);
 
 // Convert ARGB To RGB565 with 4x4 dither matrix (16 bytes).
 // Values in dither matrix from 0 to 7 recommended.
 // The order of the dither matrix is first byte is upper left.
 // TODO(fbarchard): Consider pointer to 2d array for dither4x4.
-// const uint8(*dither)[4][4];
+// const uint8_t(*dither)[4][4];
 LIBYUV_API
-int ARGBToRGB565Dither(const uint8* src_argb, int src_stride_argb,
-                       uint8* dst_rgb565, int dst_stride_rgb565,
-                       const uint8* dither4x4, int width, int height);
+int ARGBToRGB565Dither(const uint8_t* src_argb,
+                       int src_stride_argb,
+                       uint8_t* dst_rgb565,
+                       int dst_stride_rgb565,
+                       const uint8_t* dither4x4,
+                       int width,
+                       int height);
 
 // Convert ARGB To ARGB1555.
 LIBYUV_API
-int ARGBToARGB1555(const uint8* src_argb, int src_stride_argb,
-                   uint8* dst_argb1555, int dst_stride_argb1555,
-                   int width, int height);
+int ARGBToARGB1555(const uint8_t* src_argb,
+                   int src_stride_argb,
+                   uint8_t* dst_argb1555,
+                   int dst_stride_argb1555,
+                   int width,
+                   int height);
 
 // Convert ARGB To ARGB4444.
 LIBYUV_API
-int ARGBToARGB4444(const uint8* src_argb, int src_stride_argb,
-                   uint8* dst_argb4444, int dst_stride_argb4444,
-                   int width, int height);
+int ARGBToARGB4444(const uint8_t* src_argb,
+                   int src_stride_argb,
+                   uint8_t* dst_argb4444,
+                   int dst_stride_argb4444,
+                   int width,
+                   int height);
 
 // Convert ARGB To I444.
 LIBYUV_API
-int ARGBToI444(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int ARGBToI444(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // Convert ARGB To I422.
 LIBYUV_API
-int ARGBToI422(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int ARGBToI422(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // Convert ARGB To I420. (also in convert.h)
 LIBYUV_API
-int ARGBToI420(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int ARGBToI420(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // Convert ARGB to J420. (JPeg full range I420).
 LIBYUV_API
-int ARGBToJ420(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_yj, int dst_stride_yj,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int ARGBToJ420(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_yj,
+               int dst_stride_yj,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // Convert ARGB to J422.
 LIBYUV_API
-int ARGBToJ422(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_yj, int dst_stride_yj,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
-
-// Convert ARGB To I411.
-LIBYUV_API
-int ARGBToI411(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
+int ARGBToJ422(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_yj,
+               int dst_stride_yj,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
 
 // Convert ARGB to J400. (JPeg full range).
 LIBYUV_API
-int ARGBToJ400(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_yj, int dst_stride_yj,
-               int width, int height);
+int ARGBToJ400(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_yj,
+               int dst_stride_yj,
+               int width,
+               int height);
 
 // Convert ARGB to I400.
 LIBYUV_API
-int ARGBToI400(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_y, int dst_stride_y,
-               int width, int height);
+int ARGBToI400(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               int width,
+               int height);
 
 // Convert ARGB to G. (Reverse of J400toARGB, which replicates G back to ARGB)
 LIBYUV_API
-int ARGBToG(const uint8* src_argb, int src_stride_argb,
-            uint8* dst_g, int dst_stride_g,
-            int width, int height);
+int ARGBToG(const uint8_t* src_argb,
+            int src_stride_argb,
+            uint8_t* dst_g,
+            int dst_stride_g,
+            int width,
+            int height);
 
 // Convert ARGB To NV12.
 LIBYUV_API
-int ARGBToNV12(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_uv, int dst_stride_uv,
-               int width, int height);
+int ARGBToNV12(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_uv,
+               int dst_stride_uv,
+               int width,
+               int height);
 
 // Convert ARGB To NV21.
 LIBYUV_API
-int ARGBToNV21(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_vu, int dst_stride_vu,
-               int width, int height);
+int ARGBToNV21(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_vu,
+               int dst_stride_vu,
+               int width,
+               int height);
 
 // Convert ARGB To NV21.
 LIBYUV_API
-int ARGBToNV21(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_vu, int dst_stride_vu,
-               int width, int height);
+int ARGBToNV21(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_vu,
+               int dst_stride_vu,
+               int width,
+               int height);
 
 // Convert ARGB To YUY2.
 LIBYUV_API
-int ARGBToYUY2(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_yuy2, int dst_stride_yuy2,
-               int width, int height);
+int ARGBToYUY2(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_yuy2,
+               int dst_stride_yuy2,
+               int width,
+               int height);
 
 // Convert ARGB To UYVY.
 LIBYUV_API
-int ARGBToUYVY(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_uyvy, int dst_stride_uyvy,
-               int width, int height);
+int ARGBToUYVY(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_uyvy,
+               int dst_stride_uyvy,
+               int width,
+               int height);
 
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
 #endif
 
-#endif  // INCLUDE_LIBYUV_CONVERT_FROM_ARGB_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_CONVERT_FROM_ARGB_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/cpu_id.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/cpu_id.h
index dfb7445..0229cb5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/cpu_id.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/cpu_id.h
@@ -8,7 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_CPU_ID_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_CPU_ID_H_
 #define INCLUDE_LIBYUV_CPU_ID_H_
 
 #include "libyuv/basic_types.h"
@@ -31,50 +31,89 @@
 static const int kCpuHasSSE2 = 0x20;
 static const int kCpuHasSSSE3 = 0x40;
 static const int kCpuHasSSE41 = 0x80;
-static const int kCpuHasSSE42 = 0x100;
+static const int kCpuHasSSE42 = 0x100;  // unused at this time.
 static const int kCpuHasAVX = 0x200;
 static const int kCpuHasAVX2 = 0x400;
 static const int kCpuHasERMS = 0x800;
 static const int kCpuHasFMA3 = 0x1000;
-static const int kCpuHasAVX3 = 0x2000;
-// 0x2000, 0x4000, 0x8000 reserved for future X86 flags.
+static const int kCpuHasF16C = 0x2000;
+static const int kCpuHasGFNI = 0x4000;
+static const int kCpuHasAVX512BW = 0x8000;
+static const int kCpuHasAVX512VL = 0x10000;
+static const int kCpuHasAVX512VBMI = 0x20000;
+static const int kCpuHasAVX512VBMI2 = 0x40000;
+static const int kCpuHasAVX512VBITALG = 0x80000;
+static const int kCpuHasAVX512VPOPCNTDQ = 0x100000;
 
 // These flags are only valid on MIPS processors.
-static const int kCpuHasMIPS = 0x10000;
-static const int kCpuHasDSPR2 = 0x20000;
+static const int kCpuHasMIPS = 0x200000;
+static const int kCpuHasMSA = 0x400000;
 
-// Internal function used to auto-init.
+// Optional init function. TestCpuFlag does an auto-init.
+// Returns cpu_info flags.
 LIBYUV_API
 int InitCpuFlags(void);
 
+// Detect CPU has SSE2 etc.
+// Test_flag parameter should be one of kCpuHas constants above.
+// Returns non-zero if instruction set is detected
+static __inline int TestCpuFlag(int test_flag) {
+  LIBYUV_API extern int cpu_info_;
+#ifdef __ATOMIC_RELAXED
+  int cpu_info = __atomic_load_n(&cpu_info_, __ATOMIC_RELAXED);
+#else
+  int cpu_info = cpu_info_;
+#endif
+  return (!cpu_info ? InitCpuFlags() : cpu_info) & test_flag;
+}
+
 // Internal function for parsing /proc/cpuinfo.
 LIBYUV_API
 int ArmCpuCaps(const char* cpuinfo_name);
 
-// Detect CPU has SSE2 etc.
-// Test_flag parameter should be one of kCpuHas constants above.
-// returns non-zero if instruction set is detected
-static __inline int TestCpuFlag(int test_flag) {
-  LIBYUV_API extern int cpu_info_;
-  return (!cpu_info_ ? InitCpuFlags() : cpu_info_) & test_flag;
-}
-
 // For testing, allow CPU flags to be disabled.
 // ie MaskCpuFlags(~kCpuHasSSSE3) to disable SSSE3.
 // MaskCpuFlags(-1) to enable all cpu specific optimizations.
 // MaskCpuFlags(1) to disable all cpu specific optimizations.
+// MaskCpuFlags(0) to reset state so next call will auto init.
+// Returns cpu_info flags.
 LIBYUV_API
-void MaskCpuFlags(int enable_flags);
+int MaskCpuFlags(int enable_flags);
+
+// Sets the CPU flags to |cpu_flags|, bypassing the detection code. |cpu_flags|
+// should be a valid combination of the kCpuHas constants above and include
+// kCpuInitialized. Use this method when running in a sandboxed process where
+// the detection code might fail (as it might access /proc/cpuinfo). In such
+// cases the cpu_info can be obtained from a non sandboxed process by calling
+// InitCpuFlags() and passed to the sandboxed process (via command line
+// parameters, IPC...) which can then call this method to initialize the CPU
+// flags.
+// Notes:
+// - when specifying 0 for |cpu_flags|, the auto initialization is enabled
+//   again.
+// - enabling CPU features that are not supported by the CPU will result in
+//   undefined behavior.
+// TODO(fbarchard): consider writing a helper function that translates from
+// other library CPU info to libyuv CPU info and add a .md doc that explains
+// CPU detection.
+static __inline void SetCpuFlags(int cpu_flags) {
+  LIBYUV_API extern int cpu_info_;
+#ifdef __ATOMIC_RELAXED
+  __atomic_store_n(&cpu_info_, cpu_flags, __ATOMIC_RELAXED);
+#else
+  cpu_info_ = cpu_flags;
+#endif
+}
 
 // Low level cpuid for X86. Returns zeros on other CPUs.
 // eax is the info type that you want.
 // ecx is typically the cpu number, and should normally be zero.
 LIBYUV_API
-void CpuId(uint32 eax, uint32 ecx, uint32* cpu_info);
+void CpuId(int info_eax, int info_ecx, int* cpu_info);
 
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
 #endif
 
-#endif  // INCLUDE_LIBYUV_CPU_ID_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_CPU_ID_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/macros_msa.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/macros_msa.h
new file mode 100644
index 0000000..bba0e8a
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/macros_msa.h
@@ -0,0 +1,233 @@
+/*
+ *  Copyright 2016 The LibYuv Project Authors. All rights reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS. All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef INCLUDE_LIBYUV_MACROS_MSA_H_
+#define INCLUDE_LIBYUV_MACROS_MSA_H_
+
+#if !defined(LIBYUV_DISABLE_MSA) && defined(__mips_msa)
+#include <msa.h>
+#include <stdint.h>
+
+#if (__mips_isa_rev >= 6)
+#define LW(psrc)                                        \
+  ({                                                    \
+    const uint8_t* psrc_lw_m = (const uint8_t*)(psrc);  \
+    uint32_t val_m;                                     \
+    asm volatile("lw  %[val_m],  %[psrc_lw_m]  \n"      \
+                 : [val_m] "=r"(val_m)                  \
+                 : [psrc_lw_m] "m"(*psrc_lw_m));        \
+    val_m;                                              \
+  })
+
+#if (__mips == 64)
+#define LD(psrc)                                        \
+  ({                                                    \
+    const uint8_t* psrc_ld_m = (const uint8_t*)(psrc);  \
+    uint64_t val_m = 0;                                 \
+    asm volatile("ld  %[val_m],  %[psrc_ld_m]  \n"      \
+                 : [val_m] "=r"(val_m)                  \
+                 : [psrc_ld_m] "m"(*psrc_ld_m));        \
+    val_m;                                              \
+  })
+#else  // !(__mips == 64)
+#define LD(psrc)                                                         \
+  ({                                                                     \
+    const uint8_t* psrc_ld_m = (const uint8_t*)(psrc);                   \
+    uint32_t val0_m, val1_m;                                             \
+    uint64_t val_m = 0;                                                  \
+    val0_m = LW(psrc_ld_m);                                              \
+    val1_m = LW(psrc_ld_m + 4);                                          \
+    val_m = (uint64_t)(val1_m);                             /* NOLINT */ \
+    val_m = (uint64_t)((val_m << 32) & 0xFFFFFFFF00000000); /* NOLINT */ \
+    val_m = (uint64_t)(val_m | (uint64_t)val0_m);           /* NOLINT */ \
+    val_m;                                                               \
+  })
+#endif  // (__mips == 64)
+
+#define SW(val, pdst)                                   \
+  ({                                                    \
+    uint8_t* pdst_sw_m = (uint8_t*)(pdst); /* NOLINT */ \
+    uint32_t val_m = (val);                             \
+    asm volatile("sw  %[val_m],  %[pdst_sw_m]  \n"      \
+                 : [pdst_sw_m] "=m"(*pdst_sw_m)         \
+                 : [val_m] "r"(val_m));                 \
+  })
+
+#if (__mips == 64)
+#define SD(val, pdst)                                   \
+  ({                                                    \
+    uint8_t* pdst_sd_m = (uint8_t*)(pdst); /* NOLINT */ \
+    uint64_t val_m = (val);                             \
+    asm volatile("sd  %[val_m],  %[pdst_sd_m]  \n"      \
+                 : [pdst_sd_m] "=m"(*pdst_sd_m)         \
+                 : [val_m] "r"(val_m));                 \
+  })
+#else  // !(__mips == 64)
+#define SD(val, pdst)                                        \
+  ({                                                         \
+    uint8_t* pdst_sd_m = (uint8_t*)(pdst); /* NOLINT */      \
+    uint32_t val0_m, val1_m;                                 \
+    val0_m = (uint32_t)((val)&0x00000000FFFFFFFF);           \
+    val1_m = (uint32_t)(((val) >> 32) & 0x00000000FFFFFFFF); \
+    SW(val0_m, pdst_sd_m);                                   \
+    SW(val1_m, pdst_sd_m + 4);                               \
+  })
+#endif  // !(__mips == 64)
+#else   // !(__mips_isa_rev >= 6)
+#define LW(psrc)                                        \
+  ({                                                    \
+    const uint8_t* psrc_lw_m = (const uint8_t*)(psrc);  \
+    uint32_t val_m;                                     \
+    asm volatile("ulw  %[val_m],  %[psrc_lw_m]  \n"     \
+                 : [val_m] "=r"(val_m)                  \
+                 : [psrc_lw_m] "m"(*psrc_lw_m));        \
+    val_m;                                              \
+  })
+
+#if (__mips == 64)
+#define LD(psrc)                                        \
+  ({                                                    \
+    const uint8_t* psrc_ld_m = (const uint8_t*)(psrc);  \
+    uint64_t val_m = 0;                                 \
+    asm volatile("uld  %[val_m],  %[psrc_ld_m]  \n"     \
+                 : [val_m] "=r"(val_m)                  \
+                 : [psrc_ld_m] "m"(*psrc_ld_m));        \
+    val_m;                                              \
+  })
+#else  // !(__mips == 64)
+#define LD(psrc)                                                         \
+  ({                                                                     \
+    const uint8_t* psrc_ld_m = (const uint8_t*)(psrc);                   \
+    uint32_t val0_m, val1_m;                                             \
+    uint64_t val_m = 0;                                                  \
+    val0_m = LW(psrc_ld_m);                                              \
+    val1_m = LW(psrc_ld_m + 4);                                          \
+    val_m = (uint64_t)(val1_m);                             /* NOLINT */ \
+    val_m = (uint64_t)((val_m << 32) & 0xFFFFFFFF00000000); /* NOLINT */ \
+    val_m = (uint64_t)(val_m | (uint64_t)val0_m);           /* NOLINT */ \
+    val_m;                                                               \
+  })
+#endif  // (__mips == 64)
+
+#define SW(val, pdst)                                   \
+  ({                                                    \
+    uint8_t* pdst_sw_m = (uint8_t*)(pdst); /* NOLINT */ \
+    uint32_t val_m = (val);                             \
+    asm volatile("usw  %[val_m],  %[pdst_sw_m]  \n"     \
+                 : [pdst_sw_m] "=m"(*pdst_sw_m)         \
+                 : [val_m] "r"(val_m));                 \
+  })
+
+#define SD(val, pdst)                                        \
+  ({                                                         \
+    uint8_t* pdst_sd_m = (uint8_t*)(pdst); /* NOLINT */      \
+    uint32_t val0_m, val1_m;                                 \
+    val0_m = (uint32_t)((val)&0x00000000FFFFFFFF);           \
+    val1_m = (uint32_t)(((val) >> 32) & 0x00000000FFFFFFFF); \
+    SW(val0_m, pdst_sd_m);                                   \
+    SW(val1_m, pdst_sd_m + 4);                               \
+  })
+#endif  // (__mips_isa_rev >= 6)
+
+// TODO(fbarchard): Consider removing __VAR_ARGS versions.
+#define LD_B(RTYPE, psrc) *((RTYPE*)(psrc)) /* NOLINT */
+#define LD_UB(...) LD_B(const v16u8, __VA_ARGS__)
+
+#define ST_B(RTYPE, in, pdst) *((RTYPE*)(pdst)) = (in) /* NOLINT */
+#define ST_UB(...) ST_B(v16u8, __VA_ARGS__)
+
+#define ST_H(RTYPE, in, pdst) *((RTYPE*)(pdst)) = (in) /* NOLINT */
+#define ST_UH(...) ST_H(v8u16, __VA_ARGS__)
+
+/* Description : Load two vectors with 16 'byte' sized elements
+   Arguments   : Inputs  - psrc, stride
+                 Outputs - out0, out1
+                 Return Type - as per RTYPE
+   Details     : Load 16 byte elements in 'out0' from (psrc)
+                 Load 16 byte elements in 'out1' from (psrc + stride)
+*/
+#define LD_B2(RTYPE, psrc, stride, out0, out1) \
+  {                                            \
+    out0 = LD_B(RTYPE, (psrc));                \
+    out1 = LD_B(RTYPE, (psrc) + stride);       \
+  }
+#define LD_UB2(...) LD_B2(const v16u8, __VA_ARGS__)
+
+#define LD_B4(RTYPE, psrc, stride, out0, out1, out2, out3) \
+  {                                                        \
+    LD_B2(RTYPE, (psrc), stride, out0, out1);              \
+    LD_B2(RTYPE, (psrc) + 2 * stride, stride, out2, out3); \
+  }
+#define LD_UB4(...) LD_B4(const v16u8, __VA_ARGS__)
+
+/* Description : Store two vectors with stride each having 16 'byte' sized
+                 elements
+   Arguments   : Inputs - in0, in1, pdst, stride
+   Details     : Store 16 byte elements from 'in0' to (pdst)
+                 Store 16 byte elements from 'in1' to (pdst + stride)
+*/
+#define ST_B2(RTYPE, in0, in1, pdst, stride) \
+  {                                          \
+    ST_B(RTYPE, in0, (pdst));                \
+    ST_B(RTYPE, in1, (pdst) + stride);       \
+  }
+#define ST_UB2(...) ST_B2(v16u8, __VA_ARGS__)
+
+#define ST_B4(RTYPE, in0, in1, in2, in3, pdst, stride)   \
+  {                                                      \
+    ST_B2(RTYPE, in0, in1, (pdst), stride);              \
+    ST_B2(RTYPE, in2, in3, (pdst) + 2 * stride, stride); \
+  }
+#define ST_UB4(...) ST_B4(v16u8, __VA_ARGS__)
+
+/* Description : Store vectors of 8 halfword elements with stride
+   Arguments   : Inputs - in0, in1, pdst, stride
+   Details     : Store 8 halfword elements from 'in0' to (pdst)
+                 Store 8 halfword elements from 'in1' to (pdst + stride)
+*/
+#define ST_H2(RTYPE, in0, in1, pdst, stride) \
+  {                                          \
+    ST_H(RTYPE, in0, (pdst));                \
+    ST_H(RTYPE, in1, (pdst) + stride);       \
+  }
+#define ST_UH2(...) ST_H2(v8u16, __VA_ARGS__)
+
+// TODO(fbarchard): Consider using __msa_vshf_b and __msa_ilvr_b directly.
+/* Description : Shuffle byte vector elements as per mask vector
+   Arguments   : Inputs  - in0, in1, in2, in3, mask0, mask1
+                 Outputs - out0, out1
+                 Return Type - as per RTYPE
+   Details     : Byte elements from 'in0' & 'in1' are copied selectively to
+                 'out0' as per control vector 'mask0'
+*/
+#define VSHF_B2(RTYPE, in0, in1, in2, in3, mask0, mask1, out0, out1)  \
+  {                                                                   \
+    out0 = (RTYPE)__msa_vshf_b((v16i8)mask0, (v16i8)in1, (v16i8)in0); \
+    out1 = (RTYPE)__msa_vshf_b((v16i8)mask1, (v16i8)in3, (v16i8)in2); \
+  }
+#define VSHF_B2_UB(...) VSHF_B2(v16u8, __VA_ARGS__)
+
+/* Description : Interleave both left and right half of input vectors
+   Arguments   : Inputs  - in0, in1
+                 Outputs - out0, out1
+                 Return Type - as per RTYPE
+   Details     : Right half of byte elements from 'in0' and 'in1' are
+                 interleaved and written to 'out0'
+*/
+#define ILVRL_B2(RTYPE, in0, in1, out0, out1)           \
+  {                                                     \
+    out0 = (RTYPE)__msa_ilvr_b((v16i8)in0, (v16i8)in1); \
+    out1 = (RTYPE)__msa_ilvl_b((v16i8)in0, (v16i8)in1); \
+  }
+#define ILVRL_B2_UB(...) ILVRL_B2(v16u8, __VA_ARGS__)
+
+#endif /* !defined(LIBYUV_DISABLE_MSA) && defined(__mips_msa) */
+
+#endif  // INCLUDE_LIBYUV_MACROS_MSA_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/mjpeg_decoder.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/mjpeg_decoder.h
index 8423121..275f8d4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/mjpeg_decoder.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/mjpeg_decoder.h
@@ -8,7 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_MJPEG_DECODER_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_MJPEG_DECODER_H_
 #define INCLUDE_LIBYUV_MJPEG_DECODER_H_
 
 #include "libyuv/basic_types.h"
@@ -26,25 +26,24 @@
 extern "C" {
 #endif
 
-LIBYUV_BOOL ValidateJpeg(const uint8* sample, size_t sample_size);
+LIBYUV_BOOL ValidateJpeg(const uint8_t* sample, size_t sample_size);
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-static const uint32 kUnknownDataSize = 0xFFFFFFFF;
+static const uint32_t kUnknownDataSize = 0xFFFFFFFF;
 
 enum JpegSubsamplingType {
   kJpegYuv420,
   kJpegYuv422,
-  kJpegYuv411,
   kJpegYuv444,
   kJpegYuv400,
   kJpegUnknown
 };
 
 struct Buffer {
-  const uint8* data;
+  const uint8_t* data;
   int len;
 };
 
@@ -66,7 +65,7 @@
 class LIBYUV_API MJpegDecoder {
  public:
   typedef void (*CallbackFunction)(void* opaque,
-                                   const uint8* const* data,
+                                   const uint8_t* const* data,
                                    const int* strides,
                                    int rows);
 
@@ -86,7 +85,7 @@
   // If return value is LIBYUV_TRUE, then the values for all the following
   // getters are populated.
   // src_len is the size of the compressed mjpeg frame in bytes.
-  LIBYUV_BOOL LoadFrame(const uint8* src, size_t src_len);
+  LIBYUV_BOOL LoadFrame(const uint8_t* src, size_t src_len);
 
   // Returns width of the last loaded frame in pixels.
   int GetWidth();
@@ -139,18 +138,22 @@
   // at least GetComponentSize(i). The pointers in planes are incremented
   // to point to after the end of the written data.
   // TODO(fbarchard): Add dst_x, dst_y to allow specific rect to be decoded.
-  LIBYUV_BOOL DecodeToBuffers(uint8** planes, int dst_width, int dst_height);
+  LIBYUV_BOOL DecodeToBuffers(uint8_t** planes, int dst_width, int dst_height);
 
   // Decodes the entire image and passes the data via repeated calls to a
   // callback function. Each call will get the data for a whole number of
   // image scanlines.
   // TODO(fbarchard): Add dst_x, dst_y to allow specific rect to be decoded.
-  LIBYUV_BOOL DecodeToCallback(CallbackFunction fn, void* opaque,
-                        int dst_width, int dst_height);
+  LIBYUV_BOOL DecodeToCallback(CallbackFunction fn,
+                               void* opaque,
+                               int dst_width,
+                               int dst_height);
 
   // The helper function which recognizes the jpeg sub-sampling type.
   static JpegSubsamplingType JpegSubsamplingTypeHelper(
-     int* subsample_x, int* subsample_y, int number_of_components);
+      int* subsample_x,
+      int* subsample_y,
+      int number_of_components);
 
  private:
   void AllocOutputBuffers(int num_outbufs);
@@ -159,7 +162,7 @@
   LIBYUV_BOOL StartDecode();
   LIBYUV_BOOL FinishDecode();
 
-  void SetScanlinePointers(uint8** data);
+  void SetScanlinePointers(uint8_t** data);
   LIBYUV_BOOL DecodeImcuRow();
 
   int GetComponentScanlinePadding(int component);
@@ -178,15 +181,15 @@
 
   // Temporaries used to point to scanline outputs.
   int num_outbufs_;  // Outermost size of all arrays below.
-  uint8*** scanlines_;
+  uint8_t*** scanlines_;
   int* scanlines_sizes_;
   // Temporary buffer used for decoding when we can't decode directly to the
   // output buffers. Large enough for just one iMCU row.
-  uint8** databuf_;
+  uint8_t** databuf_;
   int* databuf_strides_;
 };
 
 }  // namespace libyuv
 
 #endif  //  __cplusplus
-#endif  // INCLUDE_LIBYUV_MJPEG_DECODER_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_MJPEG_DECODER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/planar_functions.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/planar_functions.h
index 9662516..91137ba 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/planar_functions.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/planar_functions.h
@@ -8,7 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_PLANAR_FUNCTIONS_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_PLANAR_FUNCTIONS_H_
 #define INCLUDE_LIBYUV_PLANAR_FUNCTIONS_H_
 
 #include "libyuv/basic_types.h"
@@ -22,449 +22,10 @@
 extern "C" {
 #endif
 
-// Copy a plane of data.
-LIBYUV_API
-void CopyPlane(const uint8* src_y, int src_stride_y,
-               uint8* dst_y, int dst_stride_y,
-               int width, int height);
-
-LIBYUV_API
-void CopyPlane_16(const uint16* src_y, int src_stride_y,
-                  uint16* dst_y, int dst_stride_y,
-                  int width, int height);
-
-// Set a plane of data to a 32 bit value.
-LIBYUV_API
-void SetPlane(uint8* dst_y, int dst_stride_y,
-              int width, int height,
-              uint32 value);
-
-// Split interleaved UV plane into separate U and V planes.
-LIBYUV_API
-void SplitUVPlane(const uint8* src_uv, int src_stride_uv,
-                  uint8* dst_u, int dst_stride_u,
-                  uint8* dst_v, int dst_stride_v,
-                  int width, int height);
-
-// Merge separate U and V planes into one interleaved UV plane.
-LIBYUV_API
-void MergeUVPlane(const uint8* src_u, int src_stride_u,
-                  const uint8* src_v, int src_stride_v,
-                  uint8* dst_uv, int dst_stride_uv,
-                  int width, int height);
-
-// Copy I400.  Supports inverting.
-LIBYUV_API
-int I400ToI400(const uint8* src_y, int src_stride_y,
-               uint8* dst_y, int dst_stride_y,
-               int width, int height);
-
-#define J400ToJ400 I400ToI400
-
-// Copy I422 to I422.
-#define I422ToI422 I422Copy
-LIBYUV_API
-int I422Copy(const uint8* src_y, int src_stride_y,
-             const uint8* src_u, int src_stride_u,
-             const uint8* src_v, int src_stride_v,
-             uint8* dst_y, int dst_stride_y,
-             uint8* dst_u, int dst_stride_u,
-             uint8* dst_v, int dst_stride_v,
-             int width, int height);
-
-// Copy I444 to I444.
-#define I444ToI444 I444Copy
-LIBYUV_API
-int I444Copy(const uint8* src_y, int src_stride_y,
-             const uint8* src_u, int src_stride_u,
-             const uint8* src_v, int src_stride_v,
-             uint8* dst_y, int dst_stride_y,
-             uint8* dst_u, int dst_stride_u,
-             uint8* dst_v, int dst_stride_v,
-             int width, int height);
-
-// Convert YUY2 to I422.
-LIBYUV_API
-int YUY2ToI422(const uint8* src_yuy2, int src_stride_yuy2,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
-
-// Convert UYVY to I422.
-LIBYUV_API
-int UYVYToI422(const uint8* src_uyvy, int src_stride_uyvy,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
-
-LIBYUV_API
-int YUY2ToNV12(const uint8* src_yuy2, int src_stride_yuy2,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_uv, int dst_stride_uv,
-               int width, int height);
-
-LIBYUV_API
-int UYVYToNV12(const uint8* src_uyvy, int src_stride_uyvy,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_uv, int dst_stride_uv,
-               int width, int height);
-
-// Convert I420 to I400. (calls CopyPlane ignoring u/v).
-LIBYUV_API
-int I420ToI400(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               int width, int height);
-
-// Alias
-#define J420ToJ400 I420ToI400
-#define I420ToI420Mirror I420Mirror
-
-// I420 mirror.
-LIBYUV_API
-int I420Mirror(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height);
-
-// Alias
-#define I400ToI400Mirror I400Mirror
-
-// I400 mirror.  A single plane is mirrored horizontally.
-// Pass negative height to achieve 180 degree rotation.
-LIBYUV_API
-int I400Mirror(const uint8* src_y, int src_stride_y,
-               uint8* dst_y, int dst_stride_y,
-               int width, int height);
-
-// Alias
-#define ARGBToARGBMirror ARGBMirror
-
-// ARGB mirror.
-LIBYUV_API
-int ARGBMirror(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
-
-// Convert NV12 to RGB565.
-LIBYUV_API
-int NV12ToRGB565(const uint8* src_y, int src_stride_y,
-                 const uint8* src_uv, int src_stride_uv,
-                 uint8* dst_rgb565, int dst_stride_rgb565,
-                 int width, int height);
-
-// I422ToARGB is in convert_argb.h
-// Convert I422 to BGRA.
-LIBYUV_API
-int I422ToBGRA(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_bgra, int dst_stride_bgra,
-               int width, int height);
-
-// Convert I422 to ABGR.
-LIBYUV_API
-int I422ToABGR(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_abgr, int dst_stride_abgr,
-               int width, int height);
-
-// Convert I422 to RGBA.
-LIBYUV_API
-int I422ToRGBA(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_rgba, int dst_stride_rgba,
-               int width, int height);
-
-// Alias
-#define RGB24ToRAW RAWToRGB24
-
-LIBYUV_API
-int RAWToRGB24(const uint8* src_raw, int src_stride_raw,
-               uint8* dst_rgb24, int dst_stride_rgb24,
-               int width, int height);
-
-// Draw a rectangle into I420.
-LIBYUV_API
-int I420Rect(uint8* dst_y, int dst_stride_y,
-             uint8* dst_u, int dst_stride_u,
-             uint8* dst_v, int dst_stride_v,
-             int x, int y, int width, int height,
-             int value_y, int value_u, int value_v);
-
-// Draw a rectangle into ARGB.
-LIBYUV_API
-int ARGBRect(uint8* dst_argb, int dst_stride_argb,
-             int x, int y, int width, int height, uint32 value);
-
-// Convert ARGB to gray scale ARGB.
-LIBYUV_API
-int ARGBGrayTo(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height);
-
-// Make a rectangle of ARGB gray scale.
-LIBYUV_API
-int ARGBGray(uint8* dst_argb, int dst_stride_argb,
-             int x, int y, int width, int height);
-
-// Make a rectangle of ARGB Sepia tone.
-LIBYUV_API
-int ARGBSepia(uint8* dst_argb, int dst_stride_argb,
-              int x, int y, int width, int height);
-
-// Apply a matrix rotation to each ARGB pixel.
-// matrix_argb is 4 signed ARGB values. -128 to 127 representing -2 to 2.
-// The first 4 coefficients apply to B, G, R, A and produce B of the output.
-// The next 4 coefficients apply to B, G, R, A and produce G of the output.
-// The next 4 coefficients apply to B, G, R, A and produce R of the output.
-// The last 4 coefficients apply to B, G, R, A and produce A of the output.
-LIBYUV_API
-int ARGBColorMatrix(const uint8* src_argb, int src_stride_argb,
-                    uint8* dst_argb, int dst_stride_argb,
-                    const int8* matrix_argb,
-                    int width, int height);
-
-// Deprecated. Use ARGBColorMatrix instead.
-// Apply a matrix rotation to each ARGB pixel.
-// matrix_argb is 3 signed ARGB values. -128 to 127 representing -1 to 1.
-// The first 4 coefficients apply to B, G, R, A and produce B of the output.
-// The next 4 coefficients apply to B, G, R, A and produce G of the output.
-// The last 4 coefficients apply to B, G, R, A and produce R of the output.
-LIBYUV_API
-int RGBColorMatrix(uint8* dst_argb, int dst_stride_argb,
-                   const int8* matrix_rgb,
-                   int x, int y, int width, int height);
-
-// Apply a color table each ARGB pixel.
-// Table contains 256 ARGB values.
-LIBYUV_API
-int ARGBColorTable(uint8* dst_argb, int dst_stride_argb,
-                   const uint8* table_argb,
-                   int x, int y, int width, int height);
-
-// Apply a color table each ARGB pixel but preserve destination alpha.
-// Table contains 256 ARGB values.
-LIBYUV_API
-int RGBColorTable(uint8* dst_argb, int dst_stride_argb,
-                  const uint8* table_argb,
-                  int x, int y, int width, int height);
-
-// Apply a luma/color table each ARGB pixel but preserve destination alpha.
-// Table contains 32768 values indexed by [Y][C] where 7 it 7 bit luma from
-// RGB (YJ style) and C is an 8 bit color component (R, G or B).
-LIBYUV_API
-int ARGBLumaColorTable(const uint8* src_argb, int src_stride_argb,
-                       uint8* dst_argb, int dst_stride_argb,
-                       const uint8* luma_rgb_table,
-                       int width, int height);
-
-// Apply a 3 term polynomial to ARGB values.
-// poly points to a 4x4 matrix.  The first row is constants.  The 2nd row is
-// coefficients for b, g, r and a.  The 3rd row is coefficients for b squared,
-// g squared, r squared and a squared.  The 4rd row is coefficients for b to
-// the 3, g to the 3, r to the 3 and a to the 3.  The values are summed and
-// result clamped to 0 to 255.
-// A polynomial approximation can be dirived using software such as 'R'.
-
-LIBYUV_API
-int ARGBPolynomial(const uint8* src_argb, int src_stride_argb,
-                   uint8* dst_argb, int dst_stride_argb,
-                   const float* poly,
-                   int width, int height);
-
-// Quantize a rectangle of ARGB. Alpha unaffected.
-// scale is a 16 bit fractional fixed point scaler between 0 and 65535.
-// interval_size should be a value between 1 and 255.
-// interval_offset should be a value between 0 and 255.
-LIBYUV_API
-int ARGBQuantize(uint8* dst_argb, int dst_stride_argb,
-                 int scale, int interval_size, int interval_offset,
-                 int x, int y, int width, int height);
-
-// Copy ARGB to ARGB.
-LIBYUV_API
-int ARGBCopy(const uint8* src_argb, int src_stride_argb,
-             uint8* dst_argb, int dst_stride_argb,
-             int width, int height);
-
-// Copy Alpha channel of ARGB to alpha of ARGB.
-LIBYUV_API
-int ARGBCopyAlpha(const uint8* src_argb, int src_stride_argb,
-                  uint8* dst_argb, int dst_stride_argb,
-                  int width, int height);
-
-// Extract the alpha channel from ARGB.
-LIBYUV_API
-int ARGBExtractAlpha(const uint8* src_argb, int src_stride_argb,
-                     uint8* dst_a, int dst_stride_a,
-                     int width, int height);
-
-// Copy Y channel to Alpha of ARGB.
-LIBYUV_API
-int ARGBCopyYToAlpha(const uint8* src_y, int src_stride_y,
-                     uint8* dst_argb, int dst_stride_argb,
-                     int width, int height);
-
-typedef void (*ARGBBlendRow)(const uint8* src_argb0, const uint8* src_argb1,
-                             uint8* dst_argb, int width);
-
-// Get function to Alpha Blend ARGB pixels and store to destination.
-LIBYUV_API
-ARGBBlendRow GetARGBBlend();
-
-// Alpha Blend ARGB images and store to destination.
-// Source is pre-multiplied by alpha using ARGBAttenuate.
-// Alpha of destination is set to 255.
-LIBYUV_API
-int ARGBBlend(const uint8* src_argb0, int src_stride_argb0,
-              const uint8* src_argb1, int src_stride_argb1,
-              uint8* dst_argb, int dst_stride_argb,
-              int width, int height);
-
-// Alpha Blend plane and store to destination.
-// Source is not pre-multiplied by alpha.
-LIBYUV_API
-int BlendPlane(const uint8* src_y0, int src_stride_y0,
-               const uint8* src_y1, int src_stride_y1,
-               const uint8* alpha, int alpha_stride,
-               uint8* dst_y, int dst_stride_y,
-               int width, int height);
-
-// Alpha Blend YUV images and store to destination.
-// Source is not pre-multiplied by alpha.
-// Alpha is full width x height and subsampled to half size to apply to UV.
-LIBYUV_API
-int I420Blend(const uint8* src_y0, int src_stride_y0,
-              const uint8* src_u0, int src_stride_u0,
-              const uint8* src_v0, int src_stride_v0,
-              const uint8* src_y1, int src_stride_y1,
-              const uint8* src_u1, int src_stride_u1,
-              const uint8* src_v1, int src_stride_v1,
-              const uint8* alpha, int alpha_stride,
-              uint8* dst_y, int dst_stride_y,
-              uint8* dst_u, int dst_stride_u,
-              uint8* dst_v, int dst_stride_v,
-              int width, int height);
-
-// Multiply ARGB image by ARGB image. Shifted down by 8. Saturates to 255.
-LIBYUV_API
-int ARGBMultiply(const uint8* src_argb0, int src_stride_argb0,
-                 const uint8* src_argb1, int src_stride_argb1,
-                 uint8* dst_argb, int dst_stride_argb,
-                 int width, int height);
-
-// Add ARGB image with ARGB image. Saturates to 255.
-LIBYUV_API
-int ARGBAdd(const uint8* src_argb0, int src_stride_argb0,
-            const uint8* src_argb1, int src_stride_argb1,
-            uint8* dst_argb, int dst_stride_argb,
-            int width, int height);
-
-// Subtract ARGB image (argb1) from ARGB image (argb0). Saturates to 0.
-LIBYUV_API
-int ARGBSubtract(const uint8* src_argb0, int src_stride_argb0,
-                 const uint8* src_argb1, int src_stride_argb1,
-                 uint8* dst_argb, int dst_stride_argb,
-                 int width, int height);
-
-// Convert I422 to YUY2.
-LIBYUV_API
-int I422ToYUY2(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_frame, int dst_stride_frame,
-               int width, int height);
-
-// Convert I422 to UYVY.
-LIBYUV_API
-int I422ToUYVY(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_frame, int dst_stride_frame,
-               int width, int height);
-
-// Convert unattentuated ARGB to preattenuated ARGB.
-LIBYUV_API
-int ARGBAttenuate(const uint8* src_argb, int src_stride_argb,
-                  uint8* dst_argb, int dst_stride_argb,
-                  int width, int height);
-
-// Convert preattentuated ARGB to unattenuated ARGB.
-LIBYUV_API
-int ARGBUnattenuate(const uint8* src_argb, int src_stride_argb,
-                    uint8* dst_argb, int dst_stride_argb,
-                    int width, int height);
-
-// Internal function - do not call directly.
-// Computes table of cumulative sum for image where the value is the sum
-// of all values above and to the left of the entry. Used by ARGBBlur.
-LIBYUV_API
-int ARGBComputeCumulativeSum(const uint8* src_argb, int src_stride_argb,
-                             int32* dst_cumsum, int dst_stride32_cumsum,
-                             int width, int height);
-
-// Blur ARGB image.
-// dst_cumsum table of width * (height + 1) * 16 bytes aligned to
-//   16 byte boundary.
-// dst_stride32_cumsum is number of ints in a row (width * 4).
-// radius is number of pixels around the center.  e.g. 1 = 3x3. 2=5x5.
-// Blur is optimized for radius of 5 (11x11) or less.
-LIBYUV_API
-int ARGBBlur(const uint8* src_argb, int src_stride_argb,
-             uint8* dst_argb, int dst_stride_argb,
-             int32* dst_cumsum, int dst_stride32_cumsum,
-             int width, int height, int radius);
-
-// Multiply ARGB image by ARGB value.
-LIBYUV_API
-int ARGBShade(const uint8* src_argb, int src_stride_argb,
-              uint8* dst_argb, int dst_stride_argb,
-              int width, int height, uint32 value);
-
-// Interpolate between two images using specified amount of interpolation
-// (0 to 255) and store to destination.
-// 'interpolation' is specified as 8 bit fraction where 0 means 100% src0
-// and 255 means 1% src0 and 99% src1.
-LIBYUV_API
-int InterpolatePlane(const uint8* src0, int src_stride0,
-                     const uint8* src1, int src_stride1,
-                     uint8* dst, int dst_stride,
-                     int width, int height, int interpolation);
-
-// Interpolate between two ARGB images using specified amount of interpolation
-// Internally calls InterpolatePlane with width * 4 (bpp).
-LIBYUV_API
-int ARGBInterpolate(const uint8* src_argb0, int src_stride_argb0,
-                    const uint8* src_argb1, int src_stride_argb1,
-                    uint8* dst_argb, int dst_stride_argb,
-                    int width, int height, int interpolation);
-
-// Interpolate between two YUV images using specified amount of interpolation
-// Internally calls InterpolatePlane on each plane where the U and V planes
-// are half width and half height.
-LIBYUV_API
-int I420Interpolate(const uint8* src0_y, int src0_stride_y,
-                    const uint8* src0_u, int src0_stride_u,
-                    const uint8* src0_v, int src0_stride_v,
-                    const uint8* src1_y, int src1_stride_y,
-                    const uint8* src1_u, int src1_stride_u,
-                    const uint8* src1_v, int src1_stride_v,
-                    uint8* dst_y, int dst_stride_y,
-                    uint8* dst_u, int dst_stride_u,
-                    uint8* dst_v, int dst_stride_v,
-                    int width, int height, int interpolation);
-
-#if defined(__pnacl__) || defined(__CLR_VER) || \
-    (defined(__i386__) && !defined(__SSE2__))
+// TODO(fbarchard): Move cpu macros to row.h
+#if defined(__pnacl__) || defined(__CLR_VER) ||            \
+    (defined(__native_client__) && defined(__x86_64__)) || \
+    (defined(__i386__) && !defined(__SSE__) && !defined(__clang__))
 #define LIBYUV_DISABLE_X86
 #endif
 // MemorySanitizer does not support assembly code yet. http://crbug.com/344505
@@ -479,43 +40,808 @@
 #define HAS_ARGBAFFINEROW_SSE2
 #endif
 
+// Copy a plane of data.
+LIBYUV_API
+void CopyPlane(const uint8_t* src_y,
+               int src_stride_y,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               int width,
+               int height);
+
+LIBYUV_API
+void CopyPlane_16(const uint16_t* src_y,
+                  int src_stride_y,
+                  uint16_t* dst_y,
+                  int dst_stride_y,
+                  int width,
+                  int height);
+
+LIBYUV_API
+void Convert16To8Plane(const uint16_t* src_y,
+                       int src_stride_y,
+                       uint8_t* dst_y,
+                       int dst_stride_y,
+                       int scale,  // 16384 for 10 bits
+                       int width,
+                       int height);
+
+LIBYUV_API
+void Convert8To16Plane(const uint8_t* src_y,
+                       int src_stride_y,
+                       uint16_t* dst_y,
+                       int dst_stride_y,
+                       int scale,  // 1024 for 10 bits
+                       int width,
+                       int height);
+
+// Set a plane of data to a 32 bit value.
+LIBYUV_API
+void SetPlane(uint8_t* dst_y,
+              int dst_stride_y,
+              int width,
+              int height,
+              uint32_t value);
+
+// Split interleaved UV plane into separate U and V planes.
+LIBYUV_API
+void SplitUVPlane(const uint8_t* src_uv,
+                  int src_stride_uv,
+                  uint8_t* dst_u,
+                  int dst_stride_u,
+                  uint8_t* dst_v,
+                  int dst_stride_v,
+                  int width,
+                  int height);
+
+// Merge separate U and V planes into one interleaved UV plane.
+LIBYUV_API
+void MergeUVPlane(const uint8_t* src_u,
+                  int src_stride_u,
+                  const uint8_t* src_v,
+                  int src_stride_v,
+                  uint8_t* dst_uv,
+                  int dst_stride_uv,
+                  int width,
+                  int height);
+
+// Split interleaved RGB plane into separate R, G and B planes.
+LIBYUV_API
+void SplitRGBPlane(const uint8_t* src_rgb,
+                   int src_stride_rgb,
+                   uint8_t* dst_r,
+                   int dst_stride_r,
+                   uint8_t* dst_g,
+                   int dst_stride_g,
+                   uint8_t* dst_b,
+                   int dst_stride_b,
+                   int width,
+                   int height);
+
+// Merge separate R, G and B planes into one interleaved RGB plane.
+LIBYUV_API
+void MergeRGBPlane(const uint8_t* src_r,
+                   int src_stride_r,
+                   const uint8_t* src_g,
+                   int src_stride_g,
+                   const uint8_t* src_b,
+                   int src_stride_b,
+                   uint8_t* dst_rgb,
+                   int dst_stride_rgb,
+                   int width,
+                   int height);
+
+// Copy I400.  Supports inverting.
+LIBYUV_API
+int I400ToI400(const uint8_t* src_y,
+               int src_stride_y,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               int width,
+               int height);
+
+#define J400ToJ400 I400ToI400
+
+// Copy I422 to I422.
+#define I422ToI422 I422Copy
+LIBYUV_API
+int I422Copy(const uint8_t* src_y,
+             int src_stride_y,
+             const uint8_t* src_u,
+             int src_stride_u,
+             const uint8_t* src_v,
+             int src_stride_v,
+             uint8_t* dst_y,
+             int dst_stride_y,
+             uint8_t* dst_u,
+             int dst_stride_u,
+             uint8_t* dst_v,
+             int dst_stride_v,
+             int width,
+             int height);
+
+// Copy I444 to I444.
+#define I444ToI444 I444Copy
+LIBYUV_API
+int I444Copy(const uint8_t* src_y,
+             int src_stride_y,
+             const uint8_t* src_u,
+             int src_stride_u,
+             const uint8_t* src_v,
+             int src_stride_v,
+             uint8_t* dst_y,
+             int dst_stride_y,
+             uint8_t* dst_u,
+             int dst_stride_u,
+             uint8_t* dst_v,
+             int dst_stride_v,
+             int width,
+             int height);
+
+// Convert YUY2 to I422.
+LIBYUV_API
+int YUY2ToI422(const uint8_t* src_yuy2,
+               int src_stride_yuy2,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
+
+// Convert UYVY to I422.
+LIBYUV_API
+int UYVYToI422(const uint8_t* src_uyvy,
+               int src_stride_uyvy,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
+
+LIBYUV_API
+int YUY2ToNV12(const uint8_t* src_yuy2,
+               int src_stride_yuy2,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_uv,
+               int dst_stride_uv,
+               int width,
+               int height);
+
+LIBYUV_API
+int UYVYToNV12(const uint8_t* src_uyvy,
+               int src_stride_uyvy,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_uv,
+               int dst_stride_uv,
+               int width,
+               int height);
+
+LIBYUV_API
+int YUY2ToY(const uint8_t* src_yuy2,
+            int src_stride_yuy2,
+            uint8_t* dst_y,
+            int dst_stride_y,
+            int width,
+            int height);
+
+// Convert I420 to I400. (calls CopyPlane ignoring u/v).
+LIBYUV_API
+int I420ToI400(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               int width,
+               int height);
+
+// Alias
+#define J420ToJ400 I420ToI400
+#define I420ToI420Mirror I420Mirror
+
+// I420 mirror.
+LIBYUV_API
+int I420Mirror(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height);
+
+// Alias
+#define I400ToI400Mirror I400Mirror
+
+// I400 mirror.  A single plane is mirrored horizontally.
+// Pass negative height to achieve 180 degree rotation.
+LIBYUV_API
+int I400Mirror(const uint8_t* src_y,
+               int src_stride_y,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               int width,
+               int height);
+
+// Alias
+#define ARGBToARGBMirror ARGBMirror
+
+// ARGB mirror.
+LIBYUV_API
+int ARGBMirror(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
+
+// Convert NV12 to RGB565.
+LIBYUV_API
+int NV12ToRGB565(const uint8_t* src_y,
+                 int src_stride_y,
+                 const uint8_t* src_uv,
+                 int src_stride_uv,
+                 uint8_t* dst_rgb565,
+                 int dst_stride_rgb565,
+                 int width,
+                 int height);
+
+// I422ToARGB is in convert_argb.h
+// Convert I422 to BGRA.
+LIBYUV_API
+int I422ToBGRA(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_bgra,
+               int dst_stride_bgra,
+               int width,
+               int height);
+
+// Convert I422 to ABGR.
+LIBYUV_API
+int I422ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height);
+
+// Convert I422 to RGBA.
+LIBYUV_API
+int I422ToRGBA(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_rgba,
+               int dst_stride_rgba,
+               int width,
+               int height);
+
+// Alias
+#define RGB24ToRAW RAWToRGB24
+
+LIBYUV_API
+int RAWToRGB24(const uint8_t* src_raw,
+               int src_stride_raw,
+               uint8_t* dst_rgb24,
+               int dst_stride_rgb24,
+               int width,
+               int height);
+
+// Draw a rectangle into I420.
+LIBYUV_API
+int I420Rect(uint8_t* dst_y,
+             int dst_stride_y,
+             uint8_t* dst_u,
+             int dst_stride_u,
+             uint8_t* dst_v,
+             int dst_stride_v,
+             int x,
+             int y,
+             int width,
+             int height,
+             int value_y,
+             int value_u,
+             int value_v);
+
+// Draw a rectangle into ARGB.
+LIBYUV_API
+int ARGBRect(uint8_t* dst_argb,
+             int dst_stride_argb,
+             int dst_x,
+             int dst_y,
+             int width,
+             int height,
+             uint32_t value);
+
+// Convert ARGB to gray scale ARGB.
+LIBYUV_API
+int ARGBGrayTo(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height);
+
+// Make a rectangle of ARGB gray scale.
+LIBYUV_API
+int ARGBGray(uint8_t* dst_argb,
+             int dst_stride_argb,
+             int dst_x,
+             int dst_y,
+             int width,
+             int height);
+
+// Make a rectangle of ARGB Sepia tone.
+LIBYUV_API
+int ARGBSepia(uint8_t* dst_argb,
+              int dst_stride_argb,
+              int dst_x,
+              int dst_y,
+              int width,
+              int height);
+
+// Apply a matrix rotation to each ARGB pixel.
+// matrix_argb is 4 signed ARGB values. -128 to 127 representing -2 to 2.
+// The first 4 coefficients apply to B, G, R, A and produce B of the output.
+// The next 4 coefficients apply to B, G, R, A and produce G of the output.
+// The next 4 coefficients apply to B, G, R, A and produce R of the output.
+// The last 4 coefficients apply to B, G, R, A and produce A of the output.
+LIBYUV_API
+int ARGBColorMatrix(const uint8_t* src_argb,
+                    int src_stride_argb,
+                    uint8_t* dst_argb,
+                    int dst_stride_argb,
+                    const int8_t* matrix_argb,
+                    int width,
+                    int height);
+
+// Deprecated. Use ARGBColorMatrix instead.
+// Apply a matrix rotation to each ARGB pixel.
+// matrix_argb is 3 signed ARGB values. -128 to 127 representing -1 to 1.
+// The first 4 coefficients apply to B, G, R, A and produce B of the output.
+// The next 4 coefficients apply to B, G, R, A and produce G of the output.
+// The last 4 coefficients apply to B, G, R, A and produce R of the output.
+LIBYUV_API
+int RGBColorMatrix(uint8_t* dst_argb,
+                   int dst_stride_argb,
+                   const int8_t* matrix_rgb,
+                   int dst_x,
+                   int dst_y,
+                   int width,
+                   int height);
+
+// Apply a color table to each ARGB pixel.
+// Table contains 256 ARGB values.
+LIBYUV_API
+int ARGBColorTable(uint8_t* dst_argb,
+                   int dst_stride_argb,
+                   const uint8_t* table_argb,
+                   int dst_x,
+                   int dst_y,
+                   int width,
+                   int height);
+
+// Apply a color table to each ARGB pixel but preserve destination alpha.
+// Table contains 256 ARGB values.
+LIBYUV_API
+int RGBColorTable(uint8_t* dst_argb,
+                  int dst_stride_argb,
+                  const uint8_t* table_argb,
+                  int dst_x,
+                  int dst_y,
+                  int width,
+                  int height);
+
+// Apply a luma/color table to each ARGB pixel but preserve destination alpha.
+// Table contains 32768 values indexed by [Y][C] where Y is 7 bit luma from
+// RGB (YJ style) and C is an 8 bit color component (R, G or B).
+LIBYUV_API
+int ARGBLumaColorTable(const uint8_t* src_argb,
+                       int src_stride_argb,
+                       uint8_t* dst_argb,
+                       int dst_stride_argb,
+                       const uint8_t* luma,
+                       int width,
+                       int height);
+
+// Apply a 3 term polynomial to ARGB values.
+// poly points to a 4x4 matrix.  The first row is constants.  The 2nd row is
+// coefficients for b, g, r and a.  The 3rd row is coefficients for b squared,
+// g squared, r squared and a squared.  The 4th row is coefficients for b to
+// the 3, g to the 3, r to the 3 and a to the 3.  The values are summed and
+// result clamped to 0 to 255.
+// A polynomial approximation can be derived using software such as 'R'.
+
+LIBYUV_API
+int ARGBPolynomial(const uint8_t* src_argb,
+                   int src_stride_argb,
+                   uint8_t* dst_argb,
+                   int dst_stride_argb,
+                   const float* poly,
+                   int width,
+                   int height);
+
+// Convert plane of 16 bit shorts to half floats.
+// Source values are multiplied by scale before storing as half float.
+LIBYUV_API
+int HalfFloatPlane(const uint16_t* src_y,
+                   int src_stride_y,
+                   uint16_t* dst_y,
+                   int dst_stride_y,
+                   float scale,
+                   int width,
+                   int height);
+
+// Convert a buffer of bytes to floats, scale the values and store as floats.
+LIBYUV_API
+int ByteToFloat(const uint8_t* src_y, float* dst_y, float scale, int width);
+
+// Quantize a rectangle of ARGB. Alpha unaffected.
+// scale is a 16 bit fractional fixed point scaler between 0 and 65535.
+// interval_size should be a value between 1 and 255.
+// interval_offset should be a value between 0 and 255.
+LIBYUV_API
+int ARGBQuantize(uint8_t* dst_argb,
+                 int dst_stride_argb,
+                 int scale,
+                 int interval_size,
+                 int interval_offset,
+                 int dst_x,
+                 int dst_y,
+                 int width,
+                 int height);
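The three quantize parameters compose a simple posterize step. One plausible scalar reading, hedged as an approximation (the row implementation governs exact rounding; the helper name is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Posterize one channel: apply the Q16 scale, then snap to
 * interval_size-wide buckets and shift by interval_offset. */
static uint8_t quantize_channel(uint8_t v, int scale,
                                int interval_size, int interval_offset) {
  int scaled = (v * scale) >> 16;  /* scale is 16 bit fixed point */
  return (uint8_t)(scaled / interval_size * interval_size + interval_offset);
}
```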
+
+// Copy ARGB to ARGB.
+LIBYUV_API
+int ARGBCopy(const uint8_t* src_argb,
+             int src_stride_argb,
+             uint8_t* dst_argb,
+             int dst_stride_argb,
+             int width,
+             int height);
+
+// Copy Alpha channel of ARGB to alpha of ARGB.
+LIBYUV_API
+int ARGBCopyAlpha(const uint8_t* src_argb,
+                  int src_stride_argb,
+                  uint8_t* dst_argb,
+                  int dst_stride_argb,
+                  int width,
+                  int height);
+
+// Extract the alpha channel from ARGB.
+LIBYUV_API
+int ARGBExtractAlpha(const uint8_t* src_argb,
+                     int src_stride_argb,
+                     uint8_t* dst_a,
+                     int dst_stride_a,
+                     int width,
+                     int height);
+
+// Copy Y channel to Alpha of ARGB.
+LIBYUV_API
+int ARGBCopyYToAlpha(const uint8_t* src_y,
+                     int src_stride_y,
+                     uint8_t* dst_argb,
+                     int dst_stride_argb,
+                     int width,
+                     int height);
+
+typedef void (*ARGBBlendRow)(const uint8_t* src_argb0,
+                             const uint8_t* src_argb1,
+                             uint8_t* dst_argb,
+                             int width);
+
+// Get function to Alpha Blend ARGB pixels and store to destination.
+LIBYUV_API
+ARGBBlendRow GetARGBBlend();
+
+// Alpha Blend ARGB images and store to destination.
+// Source is pre-multiplied by alpha using ARGBAttenuate.
+// Alpha of destination is set to 255.
+LIBYUV_API
+int ARGBBlend(const uint8_t* src_argb0,
+              int src_stride_argb0,
+              const uint8_t* src_argb1,
+              int src_stride_argb1,
+              uint8_t* dst_argb,
+              int dst_stride_argb,
+              int width,
+              int height);
+
+// Alpha Blend plane and store to destination.
+// Source is not pre-multiplied by alpha.
+LIBYUV_API
+int BlendPlane(const uint8_t* src_y0,
+               int src_stride_y0,
+               const uint8_t* src_y1,
+               int src_stride_y1,
+               const uint8_t* alpha,
+               int alpha_stride,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               int width,
+               int height);
+
+// Alpha Blend YUV images and store to destination.
+// Source is not pre-multiplied by alpha.
+// Alpha is full width x height and subsampled to half size to apply to UV.
+LIBYUV_API
+int I420Blend(const uint8_t* src_y0,
+              int src_stride_y0,
+              const uint8_t* src_u0,
+              int src_stride_u0,
+              const uint8_t* src_v0,
+              int src_stride_v0,
+              const uint8_t* src_y1,
+              int src_stride_y1,
+              const uint8_t* src_u1,
+              int src_stride_u1,
+              const uint8_t* src_v1,
+              int src_stride_v1,
+              const uint8_t* alpha,
+              int alpha_stride,
+              uint8_t* dst_y,
+              int dst_stride_y,
+              uint8_t* dst_u,
+              int dst_stride_u,
+              uint8_t* dst_v,
+              int dst_stride_v,
+              int width,
+              int height);
+
+// Multiply ARGB image by ARGB image. Shifted down by 8. Saturates to 255.
+LIBYUV_API
+int ARGBMultiply(const uint8_t* src_argb0,
+                 int src_stride_argb0,
+                 const uint8_t* src_argb1,
+                 int src_stride_argb1,
+                 uint8_t* dst_argb,
+                 int dst_stride_argb,
+                 int width,
+                 int height);
+
+// Add ARGB image with ARGB image. Saturates to 255.
+LIBYUV_API
+int ARGBAdd(const uint8_t* src_argb0,
+            int src_stride_argb0,
+            const uint8_t* src_argb1,
+            int src_stride_argb1,
+            uint8_t* dst_argb,
+            int dst_stride_argb,
+            int width,
+            int height);
+
+// Subtract ARGB image (argb1) from ARGB image (argb0). Saturates to 0.
+LIBYUV_API
+int ARGBSubtract(const uint8_t* src_argb0,
+                 int src_stride_argb0,
+                 const uint8_t* src_argb1,
+                 int src_stride_argb1,
+                 uint8_t* dst_argb,
+                 int dst_stride_argb,
+                 int width,
+                 int height);
+
+// Convert I422 to YUY2.
+LIBYUV_API
+int I422ToYUY2(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_yuy2,
+               int dst_stride_yuy2,
+               int width,
+               int height);
+
+// Convert I422 to UYVY.
+LIBYUV_API
+int I422ToUYVY(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_uyvy,
+               int dst_stride_uyvy,
+               int width,
+               int height);
+
+// Convert unattenuated ARGB to preattenuated ARGB.
+LIBYUV_API
+int ARGBAttenuate(const uint8_t* src_argb,
+                  int src_stride_argb,
+                  uint8_t* dst_argb,
+                  int dst_stride_argb,
+                  int width,
+                  int height);
+
+// Convert preattenuated ARGB to unattenuated ARGB.
+LIBYUV_API
+int ARGBUnattenuate(const uint8_t* src_argb,
+                    int src_stride_argb,
+                    uint8_t* dst_argb,
+                    int dst_stride_argb,
+                    int width,
+                    int height);
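For reviewers unfamiliar with alpha pre-multiplication, the attenuate/unattenuate pair above can be modeled per channel with the following self-contained sketch (hypothetical helper names; libyuv's optimized row functions may round differently, and unattenuation is inherently lossy when alpha is small):

```c
#include <assert.h>
#include <stdint.h>

/* Pre-multiply one channel by alpha, as ARGBAttenuate does conceptually. */
static uint8_t attenuate_channel(uint8_t v, uint8_t a) {
  return (uint8_t)((v * a + 127) / 255);
}

/* Approximate inverse; exact recovery is impossible when a == 0. */
static uint8_t unattenuate_channel(uint8_t v, uint8_t a) {
  if (a == 0) return 0;
  unsigned r = (v * 255u + a / 2) / a;
  return (uint8_t)(r > 255 ? 255 : r);
}
```

This is why ARGBBlend below documents that its sources are expected to be pre-multiplied: blending attenuated pixels is a plain add of weighted terms.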
+
+// Internal function - do not call directly.
+// Computes table of cumulative sum for image where the value is the sum
+// of all values above and to the left of the entry. Used by ARGBBlur.
+LIBYUV_API
+int ARGBComputeCumulativeSum(const uint8_t* src_argb,
+                             int src_stride_argb,
+                             int32_t* dst_cumsum,
+                             int dst_stride32_cumsum,
+                             int width,
+                             int height);
+
+// Blur ARGB image.
+// dst_cumsum table of width * (height + 1) * 16 bytes aligned to
+//   16 byte boundary.
+// dst_stride32_cumsum is number of ints in a row (width * 4).
+// radius is number of pixels around the center.  e.g. 1 = 3x3. 2=5x5.
+// Blur is optimized for radius of 5 (11x11) or less.
+LIBYUV_API
+int ARGBBlur(const uint8_t* src_argb,
+             int src_stride_argb,
+             uint8_t* dst_argb,
+             int dst_stride_argb,
+             int32_t* dst_cumsum,
+             int dst_stride32_cumsum,
+             int width,
+             int height,
+             int radius);
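The cumulative-sum table that ARGBBlur consumes is a standard summed-area table: each entry is the sum of everything above and to the left, inclusive, so any box sum needs only four lookups. A single-channel illustration (illustrative code, not the library's implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Build an inclusive summed-area table for a single-channel image.
 * cumsum[y * width + x] holds the sum of all pixels with
 * row <= y and col <= x. */
static void compute_cumulative_sum(const uint8_t* src, int width, int height,
                                   int32_t* cumsum) {
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      int32_t above = y > 0 ? cumsum[(y - 1) * width + x] : 0;
      int32_t left = x > 0 ? cumsum[y * width + (x - 1)] : 0;
      int32_t diag = (y > 0 && x > 0) ? cumsum[(y - 1) * width + (x - 1)] : 0;
      cumsum[y * width + x] = src[y * width + x] + above + left - diag;
    }
  }
}
```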
+
+// Multiply ARGB image by ARGB value.
+LIBYUV_API
+int ARGBShade(const uint8_t* src_argb,
+              int src_stride_argb,
+              uint8_t* dst_argb,
+              int dst_stride_argb,
+              int width,
+              int height,
+              uint32_t value);
+
+// Interpolate between two images using specified amount of interpolation
+// (0 to 255) and store to destination.
+// 'interpolation' is specified as 8 bit fraction where 0 means 100% src0
+// and 255 means 1% src0 and 99% src1.
+LIBYUV_API
+int InterpolatePlane(const uint8_t* src0,
+                     int src_stride0,
+                     const uint8_t* src1,
+                     int src_stride1,
+                     uint8_t* dst,
+                     int dst_stride,
+                     int width,
+                     int height,
+                     int interpolation);
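The 8 bit fraction described above corresponds to the usual fixed-point lerp. A scalar reference for review purposes (the helper name is hypothetical and libyuv's row kernels choose their own rounding):

```c
#include <assert.h>
#include <stdint.h>

/* Blend two pixels with an 8 bit fraction: 0 selects src0 entirely,
 * larger fractions weight src1 more heavily. */
static uint8_t interpolate_pixel(uint8_t s0, uint8_t s1, int fraction) {
  return (uint8_t)((s0 * (256 - fraction) + s1 * fraction) >> 8);
}
```

ARGBInterpolate and I420Interpolate below are thin wrappers that apply this per-plane, with ARGB treated as a plane of width * 4 bytes.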
+
+// Interpolate between two ARGB images using specified amount of interpolation
+// Internally calls InterpolatePlane with width * 4 (bpp).
+LIBYUV_API
+int ARGBInterpolate(const uint8_t* src_argb0,
+                    int src_stride_argb0,
+                    const uint8_t* src_argb1,
+                    int src_stride_argb1,
+                    uint8_t* dst_argb,
+                    int dst_stride_argb,
+                    int width,
+                    int height,
+                    int interpolation);
+
+// Interpolate between two YUV images using specified amount of interpolation
+// Internally calls InterpolatePlane on each plane where the U and V planes
+// are half width and half height.
+LIBYUV_API
+int I420Interpolate(const uint8_t* src0_y,
+                    int src0_stride_y,
+                    const uint8_t* src0_u,
+                    int src0_stride_u,
+                    const uint8_t* src0_v,
+                    int src0_stride_v,
+                    const uint8_t* src1_y,
+                    int src1_stride_y,
+                    const uint8_t* src1_u,
+                    int src1_stride_u,
+                    const uint8_t* src1_v,
+                    int src1_stride_v,
+                    uint8_t* dst_y,
+                    int dst_stride_y,
+                    uint8_t* dst_u,
+                    int dst_stride_u,
+                    uint8_t* dst_v,
+                    int dst_stride_v,
+                    int width,
+                    int height,
+                    int interpolation);
+
 // Row function for copying pixels from a source with a slope to a row
 // of destination. Useful for scaling, rotation, mirror, texture mapping.
 LIBYUV_API
-void ARGBAffineRow_C(const uint8* src_argb, int src_argb_stride,
-                     uint8* dst_argb, const float* uv_dudv, int width);
+void ARGBAffineRow_C(const uint8_t* src_argb,
+                     int src_argb_stride,
+                     uint8_t* dst_argb,
+                     const float* uv_dudv,
+                     int width);
+// TODO(fbarchard): Move ARGBAffineRow_SSE2 to row.h
 LIBYUV_API
-void ARGBAffineRow_SSE2(const uint8* src_argb, int src_argb_stride,
-                        uint8* dst_argb, const float* uv_dudv, int width);
+void ARGBAffineRow_SSE2(const uint8_t* src_argb,
+                        int src_argb_stride,
+                        uint8_t* dst_argb,
+                        const float* uv_dudv,
+                        int width);
 
 // Shuffle ARGB channel order.  e.g. BGRA to ARGB.
 // shuffler is 16 bytes and must be aligned.
 LIBYUV_API
-int ARGBShuffle(const uint8* src_bgra, int src_stride_bgra,
-                uint8* dst_argb, int dst_stride_argb,
-                const uint8* shuffler, int width, int height);
+int ARGBShuffle(const uint8_t* src_bgra,
+                int src_stride_bgra,
+                uint8_t* dst_argb,
+                int dst_stride_argb,
+                const uint8_t* shuffler,
+                int width,
+                int height);
 
 // Sobel ARGB effect with planar output.
 LIBYUV_API
-int ARGBSobelToPlane(const uint8* src_argb, int src_stride_argb,
-                     uint8* dst_y, int dst_stride_y,
-                     int width, int height);
+int ARGBSobelToPlane(const uint8_t* src_argb,
+                     int src_stride_argb,
+                     uint8_t* dst_y,
+                     int dst_stride_y,
+                     int width,
+                     int height);
 
 // Sobel ARGB effect.
 LIBYUV_API
-int ARGBSobel(const uint8* src_argb, int src_stride_argb,
-              uint8* dst_argb, int dst_stride_argb,
-              int width, int height);
+int ARGBSobel(const uint8_t* src_argb,
+              int src_stride_argb,
+              uint8_t* dst_argb,
+              int dst_stride_argb,
+              int width,
+              int height);
 
 // Sobel ARGB effect w/ Sobel X, Sobel, Sobel Y in ARGB.
 LIBYUV_API
-int ARGBSobelXY(const uint8* src_argb, int src_stride_argb,
-                uint8* dst_argb, int dst_stride_argb,
-                int width, int height);
+int ARGBSobelXY(const uint8_t* src_argb,
+                int src_stride_argb,
+                uint8_t* dst_argb,
+                int dst_stride_argb,
+                int width,
+                int height);
 
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
 #endif
 
-#endif  // INCLUDE_LIBYUV_PLANAR_FUNCTIONS_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_PLANAR_FUNCTIONS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/rotate.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/rotate.h
index 8af60b8..76b692b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/rotate.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/rotate.h
@@ -8,7 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_ROTATE_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_ROTATE_H_
 #define INCLUDE_LIBYUV_ROTATE_H_
 
 #include "libyuv/basic_types.h"
@@ -20,8 +20,8 @@
 
 // Supported rotation.
 typedef enum RotationMode {
-  kRotate0 = 0,  // No rotation.
-  kRotate90 = 90,  // Rotate 90 degrees clockwise.
+  kRotate0 = 0,      // No rotation.
+  kRotate90 = 90,    // Rotate 90 degrees clockwise.
   kRotate180 = 180,  // Rotate 180 degrees.
   kRotate270 = 270,  // Rotate 270 degrees clockwise.
 
@@ -33,85 +33,132 @@
 
 // Rotate I420 frame.
 LIBYUV_API
-int I420Rotate(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int src_width, int src_height, enum RotationMode mode);
+int I420Rotate(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height,
+               enum RotationMode mode);
 
 // Rotate NV12 input and store in I420.
 LIBYUV_API
-int NV12ToI420Rotate(const uint8* src_y, int src_stride_y,
-                     const uint8* src_uv, int src_stride_uv,
-                     uint8* dst_y, int dst_stride_y,
-                     uint8* dst_u, int dst_stride_u,
-                     uint8* dst_v, int dst_stride_v,
-                     int src_width, int src_height, enum RotationMode mode);
+int NV12ToI420Rotate(const uint8_t* src_y,
+                     int src_stride_y,
+                     const uint8_t* src_uv,
+                     int src_stride_uv,
+                     uint8_t* dst_y,
+                     int dst_stride_y,
+                     uint8_t* dst_u,
+                     int dst_stride_u,
+                     uint8_t* dst_v,
+                     int dst_stride_v,
+                     int width,
+                     int height,
+                     enum RotationMode mode);
 
 // Rotate a plane by 0, 90, 180, or 270.
 LIBYUV_API
-int RotatePlane(const uint8* src, int src_stride,
-                uint8* dst, int dst_stride,
-                int src_width, int src_height, enum RotationMode mode);
+int RotatePlane(const uint8_t* src,
+                int src_stride,
+                uint8_t* dst,
+                int dst_stride,
+                int width,
+                int height,
+                enum RotationMode mode);
 
 // Rotate planes by 90, 180, 270. Deprecated.
 LIBYUV_API
-void RotatePlane90(const uint8* src, int src_stride,
-                   uint8* dst, int dst_stride,
-                   int width, int height);
+void RotatePlane90(const uint8_t* src,
+                   int src_stride,
+                   uint8_t* dst,
+                   int dst_stride,
+                   int width,
+                   int height);
 
 LIBYUV_API
-void RotatePlane180(const uint8* src, int src_stride,
-                    uint8* dst, int dst_stride,
-                    int width, int height);
+void RotatePlane180(const uint8_t* src,
+                    int src_stride,
+                    uint8_t* dst,
+                    int dst_stride,
+                    int width,
+                    int height);
 
 LIBYUV_API
-void RotatePlane270(const uint8* src, int src_stride,
-                    uint8* dst, int dst_stride,
-                    int width, int height);
+void RotatePlane270(const uint8_t* src,
+                    int src_stride,
+                    uint8_t* dst,
+                    int dst_stride,
+                    int width,
+                    int height);
 
 LIBYUV_API
-void RotateUV90(const uint8* src, int src_stride,
-                uint8* dst_a, int dst_stride_a,
-                uint8* dst_b, int dst_stride_b,
-                int width, int height);
+void RotateUV90(const uint8_t* src,
+                int src_stride,
+                uint8_t* dst_a,
+                int dst_stride_a,
+                uint8_t* dst_b,
+                int dst_stride_b,
+                int width,
+                int height);
 
 // Rotations for when U and V are interleaved.
 // These functions take one input pointer and
 // split the data into two buffers while
 // rotating them. Deprecated.
 LIBYUV_API
-void RotateUV180(const uint8* src, int src_stride,
-                 uint8* dst_a, int dst_stride_a,
-                 uint8* dst_b, int dst_stride_b,
-                 int width, int height);
+void RotateUV180(const uint8_t* src,
+                 int src_stride,
+                 uint8_t* dst_a,
+                 int dst_stride_a,
+                 uint8_t* dst_b,
+                 int dst_stride_b,
+                 int width,
+                 int height);
 
 LIBYUV_API
-void RotateUV270(const uint8* src, int src_stride,
-                 uint8* dst_a, int dst_stride_a,
-                 uint8* dst_b, int dst_stride_b,
-                 int width, int height);
+void RotateUV270(const uint8_t* src,
+                 int src_stride,
+                 uint8_t* dst_a,
+                 int dst_stride_a,
+                 uint8_t* dst_b,
+                 int dst_stride_b,
+                 int width,
+                 int height);
 
 // The 90 and 270 functions are based on transposes.
 // Doing a transpose with reversing the read/write
 // order will result in a rotation by +- 90 degrees.
 // Deprecated.
 LIBYUV_API
-void TransposePlane(const uint8* src, int src_stride,
-                    uint8* dst, int dst_stride,
-                    int width, int height);
+void TransposePlane(const uint8_t* src,
+                    int src_stride,
+                    uint8_t* dst,
+                    int dst_stride,
+                    int width,
+                    int height);
 
 LIBYUV_API
-void TransposeUV(const uint8* src, int src_stride,
-                 uint8* dst_a, int dst_stride_a,
-                 uint8* dst_b, int dst_stride_b,
-                 int width, int height);
+void TransposeUV(const uint8_t* src,
+                 int src_stride,
+                 uint8_t* dst_a,
+                 int dst_stride_a,
+                 uint8_t* dst_b,
+                 int dst_stride_b,
+                 int width,
+                 int height);
 
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
 #endif
 
-#endif  // INCLUDE_LIBYUV_ROTATE_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_ROTATE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/rotate_argb.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/rotate_argb.h
index 660ff55..2043294 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/rotate_argb.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/rotate_argb.h
@@ -8,7 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_ROTATE_ARGB_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_ROTATE_ARGB_H_
 #define INCLUDE_LIBYUV_ROTATE_ARGB_H_
 
 #include "libyuv/basic_types.h"
@@ -21,13 +21,17 @@
 
 // Rotate ARGB frame
 LIBYUV_API
-int ARGBRotate(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_argb, int dst_stride_argb,
-               int src_width, int src_height, enum RotationMode mode);
+int ARGBRotate(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int src_width,
+               int src_height,
+               enum RotationMode mode);
 
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
 #endif
 
-#endif  // INCLUDE_LIBYUV_ROTATE_ARGB_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_ROTATE_ARGB_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/rotate_row.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/rotate_row.h
index ebc487f..5edc0fc 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/rotate_row.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/rotate_row.h
@@ -8,7 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_ROTATE_ROW_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_ROTATE_ROW_H_
 #define INCLUDE_LIBYUV_ROTATE_ROW_H_
 
 #include "libyuv/basic_types.h"
@@ -18,10 +18,14 @@
 extern "C" {
 #endif
 
-#if defined(__pnacl__) || defined(__CLR_VER) || \
-    (defined(__i386__) && !defined(__SSE2__))
+#if defined(__pnacl__) || defined(__CLR_VER) ||            \
+    (defined(__native_client__) && defined(__x86_64__)) || \
+    (defined(__i386__) && !defined(__SSE__) && !defined(__clang__))
 #define LIBYUV_DISABLE_X86
 #endif
+#if defined(__native_client__)
+#define LIBYUV_DISABLE_NEON
+#endif
 // MemorySanitizer does not support assembly code yet. http://crbug.com/344505
 #if defined(__has_feature)
 #if __has_feature(memory_sanitizer)
@@ -29,93 +33,162 @@
 #endif
 #endif
 // The following are available for Visual C and clangcl 32 bit:
-#if !defined(LIBYUV_DISABLE_X86) && defined(_M_IX86)
+#if !defined(LIBYUV_DISABLE_X86) && defined(_M_IX86) && defined(_MSC_VER)
 #define HAS_TRANSPOSEWX8_SSSE3
 #define HAS_TRANSPOSEUVWX8_SSE2
 #endif
 
-// The following are available for GCC 32 or 64 bit but not NaCL for 64 bit:
-#if !defined(LIBYUV_DISABLE_X86) && \
-    (defined(__i386__) || (defined(__x86_64__) && !defined(__native_client__)))
+// The following are available for GCC 32 or 64 bit:
+#if !defined(LIBYUV_DISABLE_X86) && (defined(__i386__) || defined(__x86_64__))
 #define HAS_TRANSPOSEWX8_SSSE3
 #endif
 
-// The following are available for 64 bit GCC but not NaCL:
-#if !defined(LIBYUV_DISABLE_X86) && !defined(__native_client__) && \
-    defined(__x86_64__)
+// The following are available for 64 bit GCC:
+#if !defined(LIBYUV_DISABLE_X86) && defined(__x86_64__)
 #define HAS_TRANSPOSEWX8_FAST_SSSE3
 #define HAS_TRANSPOSEUVWX8_SSE2
 #endif
 
-#if !defined(LIBYUV_DISABLE_NEON) && !defined(__native_client__) && \
+#if !defined(LIBYUV_DISABLE_NEON) && \
     (defined(__ARM_NEON__) || defined(LIBYUV_NEON) || defined(__aarch64__))
 #define HAS_TRANSPOSEWX8_NEON
 #define HAS_TRANSPOSEUVWX8_NEON
 #endif
 
-#if !defined(LIBYUV_DISABLE_MIPS) && !defined(__native_client__) && \
-    defined(__mips__) && \
-    defined(__mips_dsp) && (__mips_dsp_rev >= 2)
-#define HAS_TRANSPOSEWX8_DSPR2
-#define HAS_TRANSPOSEUVWX8_DSPR2
-#endif  // defined(__mips__)
+#if !defined(LIBYUV_DISABLE_MSA) && defined(__mips_msa)
+#define HAS_TRANSPOSEWX16_MSA
+#define HAS_TRANSPOSEUVWX16_MSA
+#endif
 
-void TransposeWxH_C(const uint8* src, int src_stride,
-                    uint8* dst, int dst_stride, int width, int height);
+void TransposeWxH_C(const uint8_t* src,
+                    int src_stride,
+                    uint8_t* dst,
+                    int dst_stride,
+                    int width,
+                    int height);
 
-void TransposeWx8_C(const uint8* src, int src_stride,
-                    uint8* dst, int dst_stride, int width);
-void TransposeWx8_NEON(const uint8* src, int src_stride,
-                       uint8* dst, int dst_stride, int width);
-void TransposeWx8_SSSE3(const uint8* src, int src_stride,
-                        uint8* dst, int dst_stride, int width);
-void TransposeWx8_Fast_SSSE3(const uint8* src, int src_stride,
-                             uint8* dst, int dst_stride, int width);
-void TransposeWx8_DSPR2(const uint8* src, int src_stride,
-                        uint8* dst, int dst_stride, int width);
-void TransposeWx8_Fast_DSPR2(const uint8* src, int src_stride,
-                             uint8* dst, int dst_stride, int width);
+void TransposeWx8_C(const uint8_t* src,
+                    int src_stride,
+                    uint8_t* dst,
+                    int dst_stride,
+                    int width);
+void TransposeWx16_C(const uint8_t* src,
+                     int src_stride,
+                     uint8_t* dst,
+                     int dst_stride,
+                     int width);
+void TransposeWx8_NEON(const uint8_t* src,
+                       int src_stride,
+                       uint8_t* dst,
+                       int dst_stride,
+                       int width);
+void TransposeWx8_SSSE3(const uint8_t* src,
+                        int src_stride,
+                        uint8_t* dst,
+                        int dst_stride,
+                        int width);
+void TransposeWx8_Fast_SSSE3(const uint8_t* src,
+                             int src_stride,
+                             uint8_t* dst,
+                             int dst_stride,
+                             int width);
+void TransposeWx16_MSA(const uint8_t* src,
+                       int src_stride,
+                       uint8_t* dst,
+                       int dst_stride,
+                       int width);
 
-void TransposeWx8_Any_NEON(const uint8* src, int src_stride,
-                           uint8* dst, int dst_stride, int width);
-void TransposeWx8_Any_SSSE3(const uint8* src, int src_stride,
-                            uint8* dst, int dst_stride, int width);
-void TransposeWx8_Fast_Any_SSSE3(const uint8* src, int src_stride,
-                                 uint8* dst, int dst_stride, int width);
-void TransposeWx8_Any_DSPR2(const uint8* src, int src_stride,
-                            uint8* dst, int dst_stride, int width);
+void TransposeWx8_Any_NEON(const uint8_t* src,
+                           int src_stride,
+                           uint8_t* dst,
+                           int dst_stride,
+                           int width);
+void TransposeWx8_Any_SSSE3(const uint8_t* src,
+                            int src_stride,
+                            uint8_t* dst,
+                            int dst_stride,
+                            int width);
+void TransposeWx8_Fast_Any_SSSE3(const uint8_t* src,
+                                 int src_stride,
+                                 uint8_t* dst,
+                                 int dst_stride,
+                                 int width);
+void TransposeWx16_Any_MSA(const uint8_t* src,
+                           int src_stride,
+                           uint8_t* dst,
+                           int dst_stride,
+                           int width);
 
-void TransposeUVWxH_C(const uint8* src, int src_stride,
-                      uint8* dst_a, int dst_stride_a,
-                      uint8* dst_b, int dst_stride_b,
-                      int width, int height);
+void TransposeUVWxH_C(const uint8_t* src,
+                      int src_stride,
+                      uint8_t* dst_a,
+                      int dst_stride_a,
+                      uint8_t* dst_b,
+                      int dst_stride_b,
+                      int width,
+                      int height);
 
-void TransposeUVWx8_C(const uint8* src, int src_stride,
-                      uint8* dst_a, int dst_stride_a,
-                      uint8* dst_b, int dst_stride_b, int width);
-void TransposeUVWx8_SSE2(const uint8* src, int src_stride,
-                         uint8* dst_a, int dst_stride_a,
-                         uint8* dst_b, int dst_stride_b, int width);
-void TransposeUVWx8_NEON(const uint8* src, int src_stride,
-                         uint8* dst_a, int dst_stride_a,
-                         uint8* dst_b, int dst_stride_b, int width);
-void TransposeUVWx8_DSPR2(const uint8* src, int src_stride,
-                          uint8* dst_a, int dst_stride_a,
-                          uint8* dst_b, int dst_stride_b, int width);
+void TransposeUVWx8_C(const uint8_t* src,
+                      int src_stride,
+                      uint8_t* dst_a,
+                      int dst_stride_a,
+                      uint8_t* dst_b,
+                      int dst_stride_b,
+                      int width);
+void TransposeUVWx16_C(const uint8_t* src,
+                       int src_stride,
+                       uint8_t* dst_a,
+                       int dst_stride_a,
+                       uint8_t* dst_b,
+                       int dst_stride_b,
+                       int width);
+void TransposeUVWx8_SSE2(const uint8_t* src,
+                         int src_stride,
+                         uint8_t* dst_a,
+                         int dst_stride_a,
+                         uint8_t* dst_b,
+                         int dst_stride_b,
+                         int width);
+void TransposeUVWx8_NEON(const uint8_t* src,
+                         int src_stride,
+                         uint8_t* dst_a,
+                         int dst_stride_a,
+                         uint8_t* dst_b,
+                         int dst_stride_b,
+                         int width);
+void TransposeUVWx16_MSA(const uint8_t* src,
+                         int src_stride,
+                         uint8_t* dst_a,
+                         int dst_stride_a,
+                         uint8_t* dst_b,
+                         int dst_stride_b,
+                         int width);
 
-void TransposeUVWx8_Any_SSE2(const uint8* src, int src_stride,
-                             uint8* dst_a, int dst_stride_a,
-                             uint8* dst_b, int dst_stride_b, int width);
-void TransposeUVWx8_Any_NEON(const uint8* src, int src_stride,
-                             uint8* dst_a, int dst_stride_a,
-                             uint8* dst_b, int dst_stride_b, int width);
-void TransposeUVWx8_Any_DSPR2(const uint8* src, int src_stride,
-                              uint8* dst_a, int dst_stride_a,
-                              uint8* dst_b, int dst_stride_b, int width);
+void TransposeUVWx8_Any_SSE2(const uint8_t* src,
+                             int src_stride,
+                             uint8_t* dst_a,
+                             int dst_stride_a,
+                             uint8_t* dst_b,
+                             int dst_stride_b,
+                             int width);
+void TransposeUVWx8_Any_NEON(const uint8_t* src,
+                             int src_stride,
+                             uint8_t* dst_a,
+                             int dst_stride_a,
+                             uint8_t* dst_b,
+                             int dst_stride_b,
+                             int width);
+void TransposeUVWx16_Any_MSA(const uint8_t* src,
+                             int src_stride,
+                             uint8_t* dst_a,
+                             int dst_stride_a,
+                             uint8_t* dst_b,
+                             int dst_stride_b,
+                             int width);
 
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
 #endif
 
-#endif  // INCLUDE_LIBYUV_ROTATE_ROW_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_ROTATE_ROW_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/row.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/row.h
index 013a7e5..65ef448 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/row.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/row.h
@@ -8,7 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_ROW_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_ROW_H_
 #define INCLUDE_LIBYUV_ROW_H_
 
 #include <stdlib.h>  // For malloc.
@@ -20,41 +20,20 @@
 extern "C" {
 #endif
 
-#define IS_ALIGNED(p, a) (!((uintptr_t)(p) & ((a) - 1)))
-
-#ifdef __cplusplus
-#define align_buffer_64(var, size)                                             \
-  uint8* var##_mem = reinterpret_cast<uint8*>(malloc((size) + 63));            \
-  uint8* var = reinterpret_cast<uint8*>                                        \
-      ((reinterpret_cast<intptr_t>(var##_mem) + 63) & ~63)
-#else
-#define align_buffer_64(var, size)                                             \
-  uint8* var##_mem = (uint8*)(malloc((size) + 63));               /* NOLINT */ \
-  uint8* var = (uint8*)(((intptr_t)(var##_mem) + 63) & ~63)       /* NOLINT */
-#endif
-
-#define free_aligned_buffer_64(var) \
-  free(var##_mem);  \
-  var = 0
-
-#if defined(__pnacl__) || defined(__CLR_VER) || \
-    (defined(__i386__) && !defined(__SSE2__))
+#if defined(__pnacl__) || defined(__CLR_VER) ||            \
+    (defined(__native_client__) && defined(__x86_64__)) || \
+    (defined(__i386__) && !defined(__SSE__) && !defined(__clang__))
 #define LIBYUV_DISABLE_X86
 #endif
+#if defined(__native_client__)
+#define LIBYUV_DISABLE_NEON
+#endif
 // MemorySanitizer does not support assembly code yet. http://crbug.com/344505
 #if defined(__has_feature)
 #if __has_feature(memory_sanitizer)
 #define LIBYUV_DISABLE_X86
 #endif
 #endif
-// True if compiling for SSSE3 as a requirement.
-#if defined(__SSSE3__) || (defined(_M_IX86_FP) && (_M_IX86_FP >= 3))
-#define LIBYUV_SSSE3_ONLY
-#endif
-
-#if defined(__native_client__)
-#define LIBYUV_DISABLE_NEON
-#endif
 // clang >= 3.5.0 required for Arm64.
 #if defined(__clang__) && defined(__aarch64__) && !defined(LIBYUV_DISABLE_NEON)
 #if (__clang_major__ < 3) || (__clang_major__ == 3 && (__clang_minor__ < 5))
@@ -76,9 +55,19 @@
 #endif  // clang >= 3.4
 #endif  // __clang__
 
+// clang >= 6.0.0 required for AVX512.
+// TODO(fbarchard): fix xcode 9 ios b/789.
+#if 0  // Build fails in libvpx on Mac
+#if defined(__clang__) && (defined(__x86_64__) || defined(__i386__))
+#if (__clang_major__ >= 7) && !defined(__APPLE_EMBEDDED_SIMULATOR__)
+#define CLANG_HAS_AVX512 1
+#endif  // clang >= 7
+#endif  // __clang__
+#endif  // 0
+
 // Visual C 2012 required for AVX2.
-#if defined(_M_IX86) && !defined(__clang__) && \
-    defined(_MSC_VER) && _MSC_VER >= 1700
+#if defined(_M_IX86) && !defined(__clang__) && defined(_MSC_VER) && \
+    _MSC_VER >= 1700
 #define VISUALC_HAS_AVX2 1
 #endif  // VisualStudio >= 2012
 
@@ -90,8 +79,8 @@
 #define HAS_ABGRTOYROW_SSSE3
 #define HAS_ARGB1555TOARGBROW_SSE2
 #define HAS_ARGB4444TOARGBROW_SSE2
+#define HAS_ARGBEXTRACTALPHAROW_SSE2
 #define HAS_ARGBSETROW_X86
-#define HAS_ARGBSHUFFLEROW_SSE2
 #define HAS_ARGBSHUFFLEROW_SSSE3
 #define HAS_ARGBTOARGB1555ROW_SSE2
 #define HAS_ARGBTOARGB4444ROW_SSE2
@@ -104,12 +93,12 @@
 #define HAS_ARGBTOUVROW_SSSE3
 #define HAS_ARGBTOYJROW_SSSE3
 #define HAS_ARGBTOYROW_SSSE3
-#define HAS_ARGBEXTRACTALPHAROW_SSE2
 #define HAS_BGRATOUVROW_SSSE3
 #define HAS_BGRATOYROW_SSSE3
 #define HAS_COPYROW_ERMS
 #define HAS_COPYROW_SSE2
 #define HAS_H422TOARGBROW_SSSE3
+#define HAS_HALFFLOATROW_SSE2
 #define HAS_I400TOARGBROW_SSE2
 #define HAS_I422TOARGB1555ROW_SSSE3
 #define HAS_I422TOARGB4444ROW_SSSE3
@@ -126,8 +115,10 @@
 #define HAS_MIRRORROW_SSSE3
 #define HAS_MIRRORUVROW_SSSE3
 #define HAS_NV12TOARGBROW_SSSE3
+#define HAS_NV12TORGB24ROW_SSSE3
 #define HAS_NV12TORGB565ROW_SSSE3
 #define HAS_NV21TOARGBROW_SSSE3
+#define HAS_NV21TORGB24ROW_SSSE3
 #define HAS_RAWTOARGBROW_SSSE3
 #define HAS_RAWTORGB24ROW_SSSE3
 #define HAS_RAWTOYROW_SSSE3
@@ -180,11 +171,8 @@
 
 // The following functions fail on gcc/clang 32 bit with fpic and framepointer.
 // caveat: clangcl uses row_win.cc which works.
-#if defined(NDEBUG) || !(defined(_DEBUG) && defined(__i386__)) || \
-    !defined(__i386__) || defined(_MSC_VER)
-// TODO(fbarchard): fix build error on x86 debug
-// https://code.google.com/p/libyuv/issues/detail?id=524
-#define HAS_I411TOARGBROW_SSSE3
+#if defined(__x86_64__) || !defined(__pic__) || defined(__clang__) || \
+    defined(_MSC_VER)
 // TODO(fbarchard): fix build error on android_full_debug=1
 // https://code.google.com/p/libyuv/issues/detail?id=517
 #define HAS_I422ALPHATOARGBROW_SSSE3
@@ -193,11 +181,12 @@
 
 // The following are available on all x86 platforms, but
 // require VS2012, clang 3.4 or gcc 4.7.
-// The code supports NaCL but requires a new compiler and validator.
-#if !defined(LIBYUV_DISABLE_X86) && (defined(VISUALC_HAS_AVX2) || \
-    defined(CLANG_HAS_AVX2) || defined(GCC_HAS_AVX2))
+#if !defined(LIBYUV_DISABLE_X86) &&                          \
+    (defined(VISUALC_HAS_AVX2) || defined(CLANG_HAS_AVX2) || \
+     defined(GCC_HAS_AVX2))
 #define HAS_ARGBCOPYALPHAROW_AVX2
 #define HAS_ARGBCOPYYTOALPHAROW_AVX2
+#define HAS_ARGBEXTRACTALPHAROW_AVX2
 #define HAS_ARGBMIRRORROW_AVX2
 #define HAS_ARGBPOLYNOMIALROW_AVX2
 #define HAS_ARGBSHUFFLEROW_AVX2
@@ -208,13 +197,9 @@
 #define HAS_ARGBTOYROW_AVX2
 #define HAS_COPYROW_AVX
 #define HAS_H422TOARGBROW_AVX2
+#define HAS_HALFFLOATROW_AVX2
+//  #define HAS_HALFFLOATROW_F16C  // Enable to test halffloat cast
 #define HAS_I400TOARGBROW_AVX2
-#if !(defined(_DEBUG) && defined(__i386__))
-// TODO(fbarchard): fix build error on android_full_debug=1
-// https://code.google.com/p/libyuv/issues/detail?id=517
-#define HAS_I422ALPHATOARGBROW_AVX2
-#endif
-#define HAS_I411TOARGBROW_AVX2
 #define HAS_I422TOARGB1555ROW_AVX2
 #define HAS_I422TOARGB4444ROW_AVX2
 #define HAS_I422TOARGBROW_AVX2
@@ -227,8 +212,10 @@
 #define HAS_MERGEUVROW_AVX2
 #define HAS_MIRRORROW_AVX2
 #define HAS_NV12TOARGBROW_AVX2
+#define HAS_NV12TORGB24ROW_AVX2
 #define HAS_NV12TORGB565ROW_AVX2
 #define HAS_NV21TOARGBROW_AVX2
+#define HAS_NV21TORGB24ROW_AVX2
 #define HAS_SPLITUVROW_AVX2
 #define HAS_UYVYTOARGBROW_AVX2
 #define HAS_UYVYTOUV422ROW_AVX2
@@ -246,11 +233,18 @@
 #define HAS_ARGBSUBTRACTROW_AVX2
 #define HAS_ARGBUNATTENUATEROW_AVX2
 #define HAS_BLENDPLANEROW_AVX2
+
+#if defined(__x86_64__) || !defined(__pic__) || defined(__clang__) || \
+    defined(_MSC_VER)
+// TODO(fbarchard): fix build error on android_full_debug=1
+// https://code.google.com/p/libyuv/issues/detail?id=517
+#define HAS_I422ALPHATOARGBROW_AVX2
+#endif
 #endif
 
 // The following are available for AVX2 Visual C and clangcl 32 bit:
 // TODO(fbarchard): Port to gcc.
-#if !defined(LIBYUV_DISABLE_X86) && defined(_M_IX86) && \
+#if !defined(LIBYUV_DISABLE_X86) && defined(_M_IX86) && defined(_MSC_VER) && \
     (defined(VISUALC_HAS_AVX2) || defined(CLANG_HAS_AVX2))
 #define HAS_ARGB1555TOARGBROW_AVX2
 #define HAS_ARGB4444TOARGBROW_AVX2
@@ -268,6 +262,51 @@
 #define HAS_I422TOARGBROW_SSSE3
 #endif
 
+// The following are available for gcc/clang x86 platforms:
+// TODO(fbarchard): Port to Visual C
+#if !defined(LIBYUV_DISABLE_X86) && \
+    (defined(__x86_64__) || (defined(__i386__) && !defined(_MSC_VER)))
+#define HAS_ABGRTOAR30ROW_SSSE3
+#define HAS_ARGBTOAR30ROW_SSSE3
+#define HAS_CONVERT16TO8ROW_SSSE3
+#define HAS_CONVERT8TO16ROW_SSE2
+// I210 is for H010.  2 = 422.  I for 601 vs H for 709.
+#define HAS_I210TOAR30ROW_SSSE3
+#define HAS_I210TOARGBROW_SSSE3
+#define HAS_I422TOAR30ROW_SSSE3
+#define HAS_MERGERGBROW_SSSE3
+#define HAS_SPLITRGBROW_SSSE3
+#endif
+
+// The following are available for AVX2 gcc/clang x86 platforms:
+// TODO(fbarchard): Port to Visual C
+#if !defined(LIBYUV_DISABLE_X86) &&                                       \
+    (defined(__x86_64__) || (defined(__i386__) && !defined(_MSC_VER))) && \
+    (defined(CLANG_HAS_AVX2) || defined(GCC_HAS_AVX2))
+#define HAS_ABGRTOAR30ROW_AVX2
+#define HAS_ARGBTOAR30ROW_AVX2
+#define HAS_ARGBTORAWROW_AVX2
+#define HAS_ARGBTORGB24ROW_AVX2
+#define HAS_CONVERT16TO8ROW_AVX2
+#define HAS_CONVERT8TO16ROW_AVX2
+#define HAS_I210TOAR30ROW_AVX2
+#define HAS_I210TOARGBROW_AVX2
+#define HAS_I422TOAR30ROW_AVX2
+#define HAS_I422TOUYVYROW_AVX2
+#define HAS_I422TOYUY2ROW_AVX2
+#define HAS_MERGEUVROW_16_AVX2
+#define HAS_MULTIPLYROW_16_AVX2
+#endif
+
+// The following are available for AVX512 clang x86 platforms:
+// TODO(fbarchard): Port to GCC and Visual C
+// TODO(fbarchard): re-enable HAS_ARGBTORGB24ROW_AVX512VBMI. Issue libyuv:789
+#if !defined(LIBYUV_DISABLE_X86) &&                                       \
+    (defined(__x86_64__) || (defined(__i386__) && !defined(_MSC_VER))) && \
+    (defined(CLANG_HAS_AVX512))
+#define HAS_ARGBTORGB24ROW_AVX512VBMI
+#endif
+
 // The following are available on Neon platforms:
 #if !defined(LIBYUV_DISABLE_NEON) && \
     (defined(__aarch64__) || defined(__ARM_NEON__) || defined(LIBYUV_NEON))
@@ -279,6 +318,7 @@
 #define HAS_ARGB4444TOARGBROW_NEON
 #define HAS_ARGB4444TOUVROW_NEON
 #define HAS_ARGB4444TOYROW_NEON
+#define HAS_ARGBEXTRACTALPHAROW_NEON
 #define HAS_ARGBSETROW_NEON
 #define HAS_ARGBTOARGB1555ROW_NEON
 #define HAS_ARGBTOARGB4444ROW_NEON
@@ -286,18 +326,17 @@
 #define HAS_ARGBTORGB24ROW_NEON
 #define HAS_ARGBTORGB565DITHERROW_NEON
 #define HAS_ARGBTORGB565ROW_NEON
-#define HAS_ARGBTOUV411ROW_NEON
 #define HAS_ARGBTOUV444ROW_NEON
 #define HAS_ARGBTOUVJROW_NEON
 #define HAS_ARGBTOUVROW_NEON
 #define HAS_ARGBTOYJROW_NEON
 #define HAS_ARGBTOYROW_NEON
-#define HAS_ARGBEXTRACTALPHAROW_NEON
 #define HAS_BGRATOUVROW_NEON
 #define HAS_BGRATOYROW_NEON
+#define HAS_BYTETOFLOATROW_NEON
 #define HAS_COPYROW_NEON
+#define HAS_HALFFLOATROW_NEON
 #define HAS_I400TOARGBROW_NEON
-#define HAS_I411TOARGBROW_NEON
 #define HAS_I422ALPHATOARGBROW_NEON
 #define HAS_I422TOARGB1555ROW_NEON
 #define HAS_I422TOARGB4444ROW_NEON
@@ -313,8 +352,10 @@
 #define HAS_MIRRORROW_NEON
 #define HAS_MIRRORUVROW_NEON
 #define HAS_NV12TOARGBROW_NEON
+#define HAS_NV12TORGB24ROW_NEON
 #define HAS_NV12TORGB565ROW_NEON
 #define HAS_NV21TOARGBROW_NEON
+#define HAS_NV21TORGB24ROW_NEON
 #define HAS_RAWTOARGBROW_NEON
 #define HAS_RAWTORGB24ROW_NEON
 #define HAS_RAWTOUVROW_NEON
@@ -328,6 +369,7 @@
 #define HAS_RGBATOUVROW_NEON
 #define HAS_RGBATOYROW_NEON
 #define HAS_SETROW_NEON
+#define HAS_SPLITRGBROW_NEON
 #define HAS_SPLITUVROW_NEON
 #define HAS_UYVYTOARGBROW_NEON
 #define HAS_UYVYTOUV422ROW_NEON
@@ -359,17 +401,87 @@
 #define HAS_SOBELYROW_NEON
 #endif
 
-// The following are available on Mips platforms:
-#if !defined(LIBYUV_DISABLE_MIPS) && defined(__mips__) && \
-    (_MIPS_SIM == _MIPS_SIM_ABI32) && (__mips_isa_rev < 6)
-#define HAS_COPYROW_MIPS
-#if defined(__mips_dsp) && (__mips_dsp_rev >= 2)
-#define HAS_I422TOARGBROW_DSPR2
-#define HAS_INTERPOLATEROW_DSPR2
-#define HAS_MIRRORROW_DSPR2
-#define HAS_MIRRORUVROW_DSPR2
-#define HAS_SPLITUVROW_DSPR2
+// The following are available on AArch64 platforms:
+#if !defined(LIBYUV_DISABLE_NEON) && defined(__aarch64__)
+#define HAS_SCALESUMSAMPLES_NEON
 #endif
+#if !defined(LIBYUV_DISABLE_MSA) && defined(__mips_msa)
+#define HAS_ABGRTOUVROW_MSA
+#define HAS_ABGRTOYROW_MSA
+#define HAS_ARGB1555TOARGBROW_MSA
+#define HAS_ARGB1555TOUVROW_MSA
+#define HAS_ARGB1555TOYROW_MSA
+#define HAS_ARGB4444TOARGBROW_MSA
+#define HAS_ARGBADDROW_MSA
+#define HAS_ARGBATTENUATEROW_MSA
+#define HAS_ARGBBLENDROW_MSA
+#define HAS_ARGBCOLORMATRIXROW_MSA
+#define HAS_ARGBEXTRACTALPHAROW_MSA
+#define HAS_ARGBGRAYROW_MSA
+#define HAS_ARGBMIRRORROW_MSA
+#define HAS_ARGBMULTIPLYROW_MSA
+#define HAS_ARGBQUANTIZEROW_MSA
+#define HAS_ARGBSEPIAROW_MSA
+#define HAS_ARGBSETROW_MSA
+#define HAS_ARGBSHADEROW_MSA
+#define HAS_ARGBSHUFFLEROW_MSA
+#define HAS_ARGBSUBTRACTROW_MSA
+#define HAS_ARGBTOARGB1555ROW_MSA
+#define HAS_ARGBTOARGB4444ROW_MSA
+#define HAS_ARGBTORAWROW_MSA
+#define HAS_ARGBTORGB24ROW_MSA
+#define HAS_ARGBTORGB565DITHERROW_MSA
+#define HAS_ARGBTORGB565ROW_MSA
+#define HAS_ARGBTOUV444ROW_MSA
+#define HAS_ARGBTOUVJROW_MSA
+#define HAS_ARGBTOUVROW_MSA
+#define HAS_ARGBTOYJROW_MSA
+#define HAS_ARGBTOYROW_MSA
+#define HAS_BGRATOUVROW_MSA
+#define HAS_BGRATOYROW_MSA
+#define HAS_HALFFLOATROW_MSA
+#define HAS_I400TOARGBROW_MSA
+#define HAS_I422ALPHATOARGBROW_MSA
+#define HAS_I422TOARGBROW_MSA
+#define HAS_I422TORGB24ROW_MSA
+#define HAS_I422TORGBAROW_MSA
+#define HAS_I422TOUYVYROW_MSA
+#define HAS_I422TOYUY2ROW_MSA
+#define HAS_I444TOARGBROW_MSA
+#define HAS_INTERPOLATEROW_MSA
+#define HAS_J400TOARGBROW_MSA
+#define HAS_MERGEUVROW_MSA
+#define HAS_MIRRORROW_MSA
+#define HAS_MIRRORUVROW_MSA
+#define HAS_NV12TOARGBROW_MSA
+#define HAS_NV12TORGB565ROW_MSA
+#define HAS_NV21TOARGBROW_MSA
+#define HAS_RAWTOARGBROW_MSA
+#define HAS_RAWTORGB24ROW_MSA
+#define HAS_RAWTOUVROW_MSA
+#define HAS_RAWTOYROW_MSA
+#define HAS_RGB24TOARGBROW_MSA
+#define HAS_RGB24TOUVROW_MSA
+#define HAS_RGB24TOYROW_MSA
+#define HAS_RGB565TOARGBROW_MSA
+#define HAS_RGB565TOUVROW_MSA
+#define HAS_RGB565TOYROW_MSA
+#define HAS_RGBATOUVROW_MSA
+#define HAS_RGBATOYROW_MSA
+#define HAS_SETROW_MSA
+#define HAS_SOBELROW_MSA
+#define HAS_SOBELTOPLANEROW_MSA
+#define HAS_SOBELXROW_MSA
+#define HAS_SOBELXYROW_MSA
+#define HAS_SOBELYROW_MSA
+#define HAS_SPLITUVROW_MSA
+#define HAS_UYVYTOARGBROW_MSA
+#define HAS_UYVYTOUVROW_MSA
+#define HAS_UYVYTOYROW_MSA
+#define HAS_YUY2TOARGBROW_MSA
+#define HAS_YUY2TOUV422ROW_MSA
+#define HAS_YUY2TOUVROW_MSA
+#define HAS_YUY2TOYROW_MSA
 #endif
 
 #if defined(_MSC_VER) && !defined(__CLR_VER) && !defined(__clang__)
@@ -378,18 +490,18 @@
 #else
 #define SIMD_ALIGNED(var) __declspec(align(16)) var
 #endif
-typedef __declspec(align(16)) int16 vec16[8];
-typedef __declspec(align(16)) int32 vec32[4];
-typedef __declspec(align(16)) int8 vec8[16];
-typedef __declspec(align(16)) uint16 uvec16[8];
-typedef __declspec(align(16)) uint32 uvec32[4];
-typedef __declspec(align(16)) uint8 uvec8[16];
-typedef __declspec(align(32)) int16 lvec16[16];
-typedef __declspec(align(32)) int32 lvec32[8];
-typedef __declspec(align(32)) int8 lvec8[32];
-typedef __declspec(align(32)) uint16 ulvec16[16];
-typedef __declspec(align(32)) uint32 ulvec32[8];
-typedef __declspec(align(32)) uint8 ulvec8[32];
+typedef __declspec(align(16)) int16_t vec16[8];
+typedef __declspec(align(16)) int32_t vec32[4];
+typedef __declspec(align(16)) int8_t vec8[16];
+typedef __declspec(align(16)) uint16_t uvec16[8];
+typedef __declspec(align(16)) uint32_t uvec32[4];
+typedef __declspec(align(16)) uint8_t uvec8[16];
+typedef __declspec(align(32)) int16_t lvec16[16];
+typedef __declspec(align(32)) int32_t lvec32[8];
+typedef __declspec(align(32)) int8_t lvec8[32];
+typedef __declspec(align(32)) uint16_t ulvec16[16];
+typedef __declspec(align(32)) uint32_t ulvec32[8];
+typedef __declspec(align(32)) uint8_t ulvec8[32];
 #elif !defined(__pnacl__) && (defined(__GNUC__) || defined(__clang__))
 // Caveat GCC 4.2 to 4.7 have a known issue using vectors with const.
 #if defined(CLANG_HAS_AVX2) || defined(GCC_HAS_AVX2)
@@ -397,32 +509,32 @@
 #else
 #define SIMD_ALIGNED(var) var __attribute__((aligned(16)))
 #endif
-typedef int16 __attribute__((vector_size(16))) vec16;
-typedef int32 __attribute__((vector_size(16))) vec32;
-typedef int8 __attribute__((vector_size(16))) vec8;
-typedef uint16 __attribute__((vector_size(16))) uvec16;
-typedef uint32 __attribute__((vector_size(16))) uvec32;
-typedef uint8 __attribute__((vector_size(16))) uvec8;
-typedef int16 __attribute__((vector_size(32))) lvec16;
-typedef int32 __attribute__((vector_size(32))) lvec32;
-typedef int8 __attribute__((vector_size(32))) lvec8;
-typedef uint16 __attribute__((vector_size(32))) ulvec16;
-typedef uint32 __attribute__((vector_size(32))) ulvec32;
-typedef uint8 __attribute__((vector_size(32))) ulvec8;
+typedef int16_t __attribute__((vector_size(16))) vec16;
+typedef int32_t __attribute__((vector_size(16))) vec32;
+typedef int8_t __attribute__((vector_size(16))) vec8;
+typedef uint16_t __attribute__((vector_size(16))) uvec16;
+typedef uint32_t __attribute__((vector_size(16))) uvec32;
+typedef uint8_t __attribute__((vector_size(16))) uvec8;
+typedef int16_t __attribute__((vector_size(32))) lvec16;
+typedef int32_t __attribute__((vector_size(32))) lvec32;
+typedef int8_t __attribute__((vector_size(32))) lvec8;
+typedef uint16_t __attribute__((vector_size(32))) ulvec16;
+typedef uint32_t __attribute__((vector_size(32))) ulvec32;
+typedef uint8_t __attribute__((vector_size(32))) ulvec8;
 #else
 #define SIMD_ALIGNED(var) var
-typedef int16 vec16[8];
-typedef int32 vec32[4];
-typedef int8 vec8[16];
-typedef uint16 uvec16[8];
-typedef uint32 uvec32[4];
-typedef uint8 uvec8[16];
-typedef int16 lvec16[16];
-typedef int32 lvec32[8];
-typedef int8 lvec8[32];
-typedef uint16 ulvec16[16];
-typedef uint32 ulvec32[8];
-typedef uint8 ulvec8[32];
+typedef int16_t vec16[8];
+typedef int32_t vec32[4];
+typedef int8_t vec8[16];
+typedef uint16_t uvec16[8];
+typedef uint32_t uvec32[4];
+typedef uint8_t uvec8[16];
+typedef int16_t lvec16[16];
+typedef int32_t lvec32[8];
+typedef int8_t lvec8[32];
+typedef uint16_t ulvec16[16];
+typedef uint32_t ulvec32[8];
+typedef uint8_t ulvec8[32];
 #endif
 
 #if defined(__aarch64__)
@@ -446,23 +558,23 @@
 #else
 // This struct is for Intel color conversion.
 struct YuvConstants {
-  int8 kUVToB[32];
-  int8 kUVToG[32];
-  int8 kUVToR[32];
-  int16 kUVBiasB[16];
-  int16 kUVBiasG[16];
-  int16 kUVBiasR[16];
-  int16 kYToRgb[16];
+  int8_t kUVToB[32];
+  int8_t kUVToG[32];
+  int8_t kUVToR[32];
+  int16_t kUVBiasB[16];
+  int16_t kUVBiasG[16];
+  int16_t kUVBiasR[16];
+  int16_t kYToRgb[16];
 };
 
 // Offsets into YuvConstants structure
-#define KUVTOB   0
-#define KUVTOG   32
-#define KUVTOR   64
+#define KUVTOB 0
+#define KUVTOG 32
+#define KUVTOR 64
 #define KUVBIASB 96
 #define KUVBIASG 128
 #define KUVBIASR 160
-#define KYTORGB  192
+#define KYTORGB 192
 #endif
 
 // Conversion matrix for YUV to RGB
@@ -475,6 +587,16 @@
 extern const struct YuvConstants SIMD_ALIGNED(kYvuJPEGConstants);  // JPeg
 extern const struct YuvConstants SIMD_ALIGNED(kYvuH709Constants);  // BT.709
 
+#define IS_ALIGNED(p, a) (!((uintptr_t)(p) & ((a)-1)))
+
+#define align_buffer_64(var, size)                                           \
+  uint8_t* var##_mem = (uint8_t*)(malloc((size) + 63));         /* NOLINT */ \
+  uint8_t* var = (uint8_t*)(((intptr_t)(var##_mem) + 63) & ~63) /* NOLINT */
+
+#define free_aligned_buffer_64(var) \
+  free(var##_mem);                  \
+  var = 0
+
 #if defined(__APPLE__) || defined(__x86_64__) || defined(__llvm__)
 #define OMITFP
 #else
@@ -487,1458 +609,2863 @@
 #else
 #define LABELALIGN
 #endif
-#if defined(__native_client__) && defined(__x86_64__)
-// r14 is used for MEMOP macros.
-#define NACL_R14 "r14",
-#define BUNDLELOCK ".bundle_lock\n"
-#define BUNDLEUNLOCK ".bundle_unlock\n"
-#define MEMACCESS(base) "%%nacl:(%%r15,%q" #base ")"
-#define MEMACCESS2(offset, base) "%%nacl:" #offset "(%%r15,%q" #base ")"
-#define MEMLEA(offset, base) #offset "(%q" #base ")"
-#define MEMLEA3(offset, index, scale) \
-    #offset "(,%q" #index "," #scale ")"
-#define MEMLEA4(offset, base, index, scale) \
-    #offset "(%q" #base ",%q" #index "," #scale ")"
-#define MEMMOVESTRING(s, d) "%%nacl:(%q" #s "),%%nacl:(%q" #d "), %%r15"
-#define MEMSTORESTRING(reg, d) "%%" #reg ",%%nacl:(%q" #d "), %%r15"
-#define MEMOPREG(opcode, offset, base, index, scale, reg) \
-    BUNDLELOCK \
-    "lea " #offset "(%q" #base ",%q" #index "," #scale "),%%r14d\n" \
-    #opcode " (%%r15,%%r14),%%" #reg "\n" \
-    BUNDLEUNLOCK
-#define MEMOPMEM(opcode, reg, offset, base, index, scale) \
-    BUNDLELOCK \
-    "lea " #offset "(%q" #base ",%q" #index "," #scale "),%%r14d\n" \
-    #opcode " %%" #reg ",(%%r15,%%r14)\n" \
-    BUNDLEUNLOCK
-#define MEMOPARG(opcode, offset, base, index, scale, arg) \
-    BUNDLELOCK \
-    "lea " #offset "(%q" #base ",%q" #index "," #scale "),%%r14d\n" \
-    #opcode " (%%r15,%%r14),%" #arg "\n" \
-    BUNDLEUNLOCK
-#define VMEMOPREG(opcode, offset, base, index, scale, reg1, reg2) \
-    BUNDLELOCK \
-    "lea " #offset "(%q" #base ",%q" #index "," #scale "),%%r14d\n" \
-    #opcode " (%%r15,%%r14),%%" #reg1 ",%%" #reg2 "\n" \
-    BUNDLEUNLOCK
-#define VEXTOPMEM(op, sel, reg, offset, base, index, scale) \
-    BUNDLELOCK \
-    "lea " #offset "(%q" #base ",%q" #index "," #scale "),%%r14d\n" \
-    #op " $" #sel ",%%" #reg ",(%%r15,%%r14)\n" \
-    BUNDLEUNLOCK
-#else  // defined(__native_client__) && defined(__x86_64__)
-#define NACL_R14
-#define BUNDLEALIGN
-#define MEMACCESS(base) "(%" #base ")"
-#define MEMACCESS2(offset, base) #offset "(%" #base ")"
-#define MEMLEA(offset, base) #offset "(%" #base ")"
-#define MEMLEA3(offset, index, scale) \
-    #offset "(,%" #index "," #scale ")"
-#define MEMLEA4(offset, base, index, scale) \
-    #offset "(%" #base ",%" #index "," #scale ")"
-#define MEMMOVESTRING(s, d)
-#define MEMSTORESTRING(reg, d)
-#define MEMOPREG(opcode, offset, base, index, scale, reg) \
-    #opcode " " #offset "(%" #base ",%" #index "," #scale "),%%" #reg "\n"
-#define MEMOPMEM(opcode, reg, offset, base, index, scale) \
-    #opcode " %%" #reg ","#offset "(%" #base ",%" #index "," #scale ")\n"
-#define MEMOPARG(opcode, offset, base, index, scale, arg) \
-    #opcode " " #offset "(%" #base ",%" #index "," #scale "),%" #arg "\n"
-#define VMEMOPREG(opcode, offset, base, index, scale, reg1, reg2) \
-    #opcode " " #offset "(%" #base ",%" #index "," #scale "),%%" #reg1 ",%%" \
-    #reg2 "\n"
-#define VEXTOPMEM(op, sel, reg, offset, base, index, scale) \
-    #op " $" #sel ",%%" #reg ","#offset "(%" #base ",%" #index "," #scale ")\n"
-#endif  // defined(__native_client__) && defined(__x86_64__)
 
-#if defined(__arm__) || defined(__aarch64__)
-#undef MEMACCESS
-#if defined(__native_client__)
-#define MEMACCESS(base) ".p2align 3\nbic %" #base ", #0xc0000000\n"
-#else
-#define MEMACCESS(base)
-#endif
+// Intel Code Analyzer markers.  Insert IACA_START IACA_END around code to be
+// measured and then run with iaca -64 libyuv_unittest.
+// IACA_ASM_START and IACA_ASM_END are equivalents that can be used within
+// inline assembly blocks.
+// Example of iaca:
+// ~/iaca-lin64/bin/iaca.sh -64 -analysis LATENCY out/Release/libyuv_unittest
+
+#if defined(__x86_64__) || defined(__i386__)
+
+#define IACA_ASM_START  \
+  ".byte 0x0F, 0x0B\n"  \
+  " movl $111, %%ebx\n" \
+  ".byte 0x64, 0x67, 0x90\n"
+
+#define IACA_ASM_END         \
+  " movl $222, %%ebx\n"      \
+  ".byte 0x64, 0x67, 0x90\n" \
+  ".byte 0x0F, 0x0B\n"
+
+#define IACA_SSC_MARK(MARK_ID)                        \
+  __asm__ __volatile__("\n\t  movl $" #MARK_ID        \
+                       ", %%ebx"                      \
+                       "\n\t  .byte 0x64, 0x67, 0x90" \
+                       :                              \
+                       :                              \
+                       : "memory");
+
+#define IACA_UD_BYTES __asm__ __volatile__("\n\t .byte 0x0F, 0x0B");
+
+#else /* Visual C */
+#define IACA_UD_BYTES \
+  { __asm _emit 0x0F __asm _emit 0x0B }
+
+#define IACA_SSC_MARK(x) \
+  { __asm mov ebx, x __asm _emit 0x64 __asm _emit 0x67 __asm _emit 0x90 }
+
+#define IACA_VC64_START __writegsbyte(111, 111);
+#define IACA_VC64_END __writegsbyte(222, 222);
 #endif
 
-void I444ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
+#define IACA_START     \
+  {                    \
+    IACA_UD_BYTES      \
+    IACA_SSC_MARK(111) \
+  }
+#define IACA_END       \
+  {                    \
+    IACA_SSC_MARK(222) \
+    IACA_UD_BYTES      \
+  }
+
+void I444ToARGBRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width);
-void I422ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
+void I422ToARGBRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width);
-void I422AlphaToARGBRow_NEON(const uint8* y_buf,
-                             const uint8* u_buf,
-                             const uint8* v_buf,
-                             const uint8* a_buf,
-                             uint8* dst_argb,
+void I422AlphaToARGBRow_NEON(const uint8_t* src_y,
+                             const uint8_t* src_u,
+                             const uint8_t* src_v,
+                             const uint8_t* src_a,
+                             uint8_t* dst_argb,
                              const struct YuvConstants* yuvconstants,
                              int width);
-void I422ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
+void I422ToARGBRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width);
-void I411ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
+void I422ToRGBARow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_rgba,
                         const struct YuvConstants* yuvconstants,
                         int width);
-void I422ToRGBARow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_rgba,
-                        const struct YuvConstants* yuvconstants,
-                        int width);
-void I422ToRGB24Row_NEON(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_rgb24,
+void I422ToRGB24Row_NEON(const uint8_t* src_y,
+                         const uint8_t* src_u,
+                         const uint8_t* src_v,
+                         uint8_t* dst_rgb24,
                          const struct YuvConstants* yuvconstants,
                          int width);
-void I422ToRGB565Row_NEON(const uint8* src_y,
-                          const uint8* src_u,
-                          const uint8* src_v,
-                          uint8* dst_rgb565,
+void I422ToRGB565Row_NEON(const uint8_t* src_y,
+                          const uint8_t* src_u,
+                          const uint8_t* src_v,
+                          uint8_t* dst_rgb565,
                           const struct YuvConstants* yuvconstants,
                           int width);
-void I422ToARGB1555Row_NEON(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb1555,
+void I422ToARGB1555Row_NEON(const uint8_t* src_y,
+                            const uint8_t* src_u,
+                            const uint8_t* src_v,
+                            uint8_t* dst_argb1555,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void I422ToARGB4444Row_NEON(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb4444,
+void I422ToARGB4444Row_NEON(const uint8_t* src_y,
+                            const uint8_t* src_u,
+                            const uint8_t* src_v,
+                            uint8_t* dst_argb4444,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void NV12ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_uv,
-                        uint8* dst_argb,
+void NV12ToARGBRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_uv,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width);
-void NV12ToRGB565Row_NEON(const uint8* src_y,
-                          const uint8* src_uv,
-                          uint8* dst_rgb565,
+void NV12ToRGB565Row_NEON(const uint8_t* src_y,
+                          const uint8_t* src_uv,
+                          uint8_t* dst_rgb565,
                           const struct YuvConstants* yuvconstants,
                           int width);
-void NV21ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_vu,
-                        uint8* dst_argb,
+void NV21ToARGBRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_vu,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width);
-void YUY2ToARGBRow_NEON(const uint8* src_yuy2,
-                        uint8* dst_argb,
-                        const struct YuvConstants* yuvconstants,
-                        int width);
-void UYVYToARGBRow_NEON(const uint8* src_uyvy,
-                        uint8* dst_argb,
-                        const struct YuvConstants* yuvconstants,
-                        int width);
-
-void ARGBToYRow_AVX2(const uint8* src_argb, uint8* dst_y, int width);
-void ARGBToYRow_Any_AVX2(const uint8* src_argb, uint8* dst_y, int width);
-void ARGBToYRow_SSSE3(const uint8* src_argb, uint8* dst_y, int width);
-void ARGBToYJRow_AVX2(const uint8* src_argb, uint8* dst_y, int width);
-void ARGBToYJRow_Any_AVX2(const uint8* src_argb, uint8* dst_y, int width);
-void ARGBToYJRow_SSSE3(const uint8* src_argb, uint8* dst_y, int width);
-void BGRAToYRow_SSSE3(const uint8* src_bgra, uint8* dst_y, int width);
-void ABGRToYRow_SSSE3(const uint8* src_abgr, uint8* dst_y, int width);
-void RGBAToYRow_SSSE3(const uint8* src_rgba, uint8* dst_y, int width);
-void RGB24ToYRow_SSSE3(const uint8* src_rgb24, uint8* dst_y, int width);
-void RAWToYRow_SSSE3(const uint8* src_raw, uint8* dst_y, int width);
-void ARGBToYRow_NEON(const uint8* src_argb, uint8* dst_y, int width);
-void ARGBToYJRow_NEON(const uint8* src_argb, uint8* dst_y, int width);
-void ARGBToUV444Row_NEON(const uint8* src_argb, uint8* dst_u, uint8* dst_v,
+void NV12ToRGB24Row_NEON(const uint8_t* src_y,
+                         const uint8_t* src_uv,
+                         uint8_t* dst_rgb24,
+                         const struct YuvConstants* yuvconstants,
                          int width);
-void ARGBToUV411Row_NEON(const uint8* src_argb, uint8* dst_u, uint8* dst_v,
+void NV21ToRGB24Row_NEON(const uint8_t* src_y,
+                         const uint8_t* src_vu,
+                         uint8_t* dst_rgb24,
+                         const struct YuvConstants* yuvconstants,
                          int width);
-void ARGBToUVRow_NEON(const uint8* src_argb, int src_stride_argb,
-                      uint8* dst_u, uint8* dst_v, int width);
-void ARGBToUVJRow_NEON(const uint8* src_argb, int src_stride_argb,
-                       uint8* dst_u, uint8* dst_v, int width);
-void BGRAToUVRow_NEON(const uint8* src_bgra, int src_stride_bgra,
-                      uint8* dst_u, uint8* dst_v, int width);
-void ABGRToUVRow_NEON(const uint8* src_abgr, int src_stride_abgr,
-                      uint8* dst_u, uint8* dst_v, int width);
-void RGBAToUVRow_NEON(const uint8* src_rgba, int src_stride_rgba,
-                      uint8* dst_u, uint8* dst_v, int width);
-void RGB24ToUVRow_NEON(const uint8* src_rgb24, int src_stride_rgb24,
-                       uint8* dst_u, uint8* dst_v, int width);
-void RAWToUVRow_NEON(const uint8* src_raw, int src_stride_raw,
-                     uint8* dst_u, uint8* dst_v, int width);
-void RGB565ToUVRow_NEON(const uint8* src_rgb565, int src_stride_rgb565,
-                        uint8* dst_u, uint8* dst_v, int width);
-void ARGB1555ToUVRow_NEON(const uint8* src_argb1555, int src_stride_argb1555,
-                          uint8* dst_u, uint8* dst_v, int width);
-void ARGB4444ToUVRow_NEON(const uint8* src_argb4444, int src_stride_argb4444,
-                          uint8* dst_u, uint8* dst_v, int width);
-void BGRAToYRow_NEON(const uint8* src_bgra, uint8* dst_y, int width);
-void ABGRToYRow_NEON(const uint8* src_abgr, uint8* dst_y, int width);
-void RGBAToYRow_NEON(const uint8* src_rgba, uint8* dst_y, int width);
-void RGB24ToYRow_NEON(const uint8* src_rgb24, uint8* dst_y, int width);
-void RAWToYRow_NEON(const uint8* src_raw, uint8* dst_y, int width);
-void RGB565ToYRow_NEON(const uint8* src_rgb565, uint8* dst_y, int width);
-void ARGB1555ToYRow_NEON(const uint8* src_argb1555, uint8* dst_y, int width);
-void ARGB4444ToYRow_NEON(const uint8* src_argb4444, uint8* dst_y, int width);
-void ARGBToYRow_C(const uint8* src_argb, uint8* dst_y, int width);
-void ARGBToYJRow_C(const uint8* src_argb, uint8* dst_y, int width);
-void BGRAToYRow_C(const uint8* src_bgra, uint8* dst_y, int width);
-void ABGRToYRow_C(const uint8* src_abgr, uint8* dst_y, int width);
-void RGBAToYRow_C(const uint8* src_rgba, uint8* dst_y, int width);
-void RGB24ToYRow_C(const uint8* src_rgb24, uint8* dst_y, int width);
-void RAWToYRow_C(const uint8* src_raw, uint8* dst_y, int width);
-void RGB565ToYRow_C(const uint8* src_rgb565, uint8* dst_y, int width);
-void ARGB1555ToYRow_C(const uint8* src_argb1555, uint8* dst_y, int width);
-void ARGB4444ToYRow_C(const uint8* src_argb4444, uint8* dst_y, int width);
-void ARGBToYRow_Any_SSSE3(const uint8* src_argb, uint8* dst_y, int width);
-void ARGBToYJRow_Any_SSSE3(const uint8* src_argb, uint8* dst_y, int width);
-void BGRAToYRow_Any_SSSE3(const uint8* src_bgra, uint8* dst_y, int width);
-void ABGRToYRow_Any_SSSE3(const uint8* src_abgr, uint8* dst_y, int width);
-void RGBAToYRow_Any_SSSE3(const uint8* src_rgba, uint8* dst_y, int width);
-void RGB24ToYRow_Any_SSSE3(const uint8* src_rgb24, uint8* dst_y, int width);
-void RAWToYRow_Any_SSSE3(const uint8* src_raw, uint8* dst_y, int width);
-void ARGBToYRow_Any_NEON(const uint8* src_argb, uint8* dst_y, int width);
-void ARGBToYJRow_Any_NEON(const uint8* src_argb, uint8* dst_y, int width);
-void BGRAToYRow_Any_NEON(const uint8* src_bgra, uint8* dst_y, int width);
-void ABGRToYRow_Any_NEON(const uint8* src_abgr, uint8* dst_y, int width);
-void RGBAToYRow_Any_NEON(const uint8* src_rgba, uint8* dst_y, int width);
-void RGB24ToYRow_Any_NEON(const uint8* src_rgb24, uint8* dst_y, int width);
-void RAWToYRow_Any_NEON(const uint8* src_raw, uint8* dst_y, int width);
-void RGB565ToYRow_Any_NEON(const uint8* src_rgb565, uint8* dst_y, int width);
-void ARGB1555ToYRow_Any_NEON(const uint8* src_argb1555, uint8* dst_y,
-                             int width);
-void ARGB4444ToYRow_Any_NEON(const uint8* src_argb4444, uint8* dst_y,
-                             int width);
-
-void ARGBToUVRow_AVX2(const uint8* src_argb, int src_stride_argb,
-                      uint8* dst_u, uint8* dst_v, int width);
-void ARGBToUVJRow_AVX2(const uint8* src_argb, int src_stride_argb,
-                       uint8* dst_u, uint8* dst_v, int width);
-void ARGBToUVRow_SSSE3(const uint8* src_argb, int src_stride_argb,
-                       uint8* dst_u, uint8* dst_v, int width);
-void ARGBToUVJRow_SSSE3(const uint8* src_argb, int src_stride_argb,
-                        uint8* dst_u, uint8* dst_v, int width);
-void BGRAToUVRow_SSSE3(const uint8* src_bgra, int src_stride_bgra,
-                       uint8* dst_u, uint8* dst_v, int width);
-void ABGRToUVRow_SSSE3(const uint8* src_abgr, int src_stride_abgr,
-                       uint8* dst_u, uint8* dst_v, int width);
-void RGBAToUVRow_SSSE3(const uint8* src_rgba, int src_stride_rgba,
-                       uint8* dst_u, uint8* dst_v, int width);
-void ARGBToUVRow_Any_AVX2(const uint8* src_argb, int src_stride_argb,
-                          uint8* dst_u, uint8* dst_v, int width);
-void ARGBToUVJRow_Any_AVX2(const uint8* src_argb, int src_stride_argb,
-                           uint8* dst_u, uint8* dst_v, int width);
-void ARGBToUVRow_Any_SSSE3(const uint8* src_argb, int src_stride_argb,
-                           uint8* dst_u, uint8* dst_v, int width);
-void ARGBToUVJRow_Any_SSSE3(const uint8* src_argb, int src_stride_argb,
-                            uint8* dst_u, uint8* dst_v, int width);
-void BGRAToUVRow_Any_SSSE3(const uint8* src_bgra, int src_stride_bgra,
-                           uint8* dst_u, uint8* dst_v, int width);
-void ABGRToUVRow_Any_SSSE3(const uint8* src_abgr, int src_stride_abgr,
-                           uint8* dst_u, uint8* dst_v, int width);
-void RGBAToUVRow_Any_SSSE3(const uint8* src_rgba, int src_stride_rgba,
-                           uint8* dst_u, uint8* dst_v, int width);
-void ARGBToUV444Row_Any_NEON(const uint8* src_argb, uint8* dst_u, uint8* dst_v,
-                             int width);
-void ARGBToUV411Row_Any_NEON(const uint8* src_argb, uint8* dst_u, uint8* dst_v,
-                             int width);
-void ARGBToUVRow_Any_NEON(const uint8* src_argb, int src_stride_argb,
-                          uint8* dst_u, uint8* dst_v, int width);
-void ARGBToUVJRow_Any_NEON(const uint8* src_argb, int src_stride_argb,
-                           uint8* dst_u, uint8* dst_v, int width);
-void BGRAToUVRow_Any_NEON(const uint8* src_bgra, int src_stride_bgra,
-                          uint8* dst_u, uint8* dst_v, int width);
-void ABGRToUVRow_Any_NEON(const uint8* src_abgr, int src_stride_abgr,
-                          uint8* dst_u, uint8* dst_v, int width);
-void RGBAToUVRow_Any_NEON(const uint8* src_rgba, int src_stride_rgba,
-                          uint8* dst_u, uint8* dst_v, int width);
-void RGB24ToUVRow_Any_NEON(const uint8* src_rgb24, int src_stride_rgb24,
-                           uint8* dst_u, uint8* dst_v, int width);
-void RAWToUVRow_Any_NEON(const uint8* src_raw, int src_stride_raw,
-                         uint8* dst_u, uint8* dst_v, int width);
-void RGB565ToUVRow_Any_NEON(const uint8* src_rgb565, int src_stride_rgb565,
-                            uint8* dst_u, uint8* dst_v, int width);
-void ARGB1555ToUVRow_Any_NEON(const uint8* src_argb1555,
-                              int src_stride_argb1555,
-                              uint8* dst_u, uint8* dst_v, int width);
-void ARGB4444ToUVRow_Any_NEON(const uint8* src_argb4444,
-                              int src_stride_argb4444,
-                              uint8* dst_u, uint8* dst_v, int width);
-void ARGBToUVRow_C(const uint8* src_argb, int src_stride_argb,
-                   uint8* dst_u, uint8* dst_v, int width);
-void ARGBToUVJRow_C(const uint8* src_argb, int src_stride_argb,
-                    uint8* dst_u, uint8* dst_v, int width);
-void BGRAToUVRow_C(const uint8* src_bgra, int src_stride_bgra,
-                   uint8* dst_u, uint8* dst_v, int width);
-void ABGRToUVRow_C(const uint8* src_abgr, int src_stride_abgr,
-                   uint8* dst_u, uint8* dst_v, int width);
-void RGBAToUVRow_C(const uint8* src_rgba, int src_stride_rgba,
-                   uint8* dst_u, uint8* dst_v, int width);
-void RGB24ToUVRow_C(const uint8* src_rgb24, int src_stride_rgb24,
-                    uint8* dst_u, uint8* dst_v, int width);
-void RAWToUVRow_C(const uint8* src_raw, int src_stride_raw,
-                  uint8* dst_u, uint8* dst_v, int width);
-void RGB565ToUVRow_C(const uint8* src_rgb565, int src_stride_rgb565,
-                     uint8* dst_u, uint8* dst_v, int width);
-void ARGB1555ToUVRow_C(const uint8* src_argb1555, int src_stride_argb1555,
-                       uint8* dst_u, uint8* dst_v, int width);
-void ARGB4444ToUVRow_C(const uint8* src_argb4444, int src_stride_argb4444,
-                       uint8* dst_u, uint8* dst_v, int width);
-
-void ARGBToUV444Row_SSSE3(const uint8* src_argb,
-                          uint8* dst_u, uint8* dst_v, int width);
-void ARGBToUV444Row_Any_SSSE3(const uint8* src_argb,
-                              uint8* dst_u, uint8* dst_v, int width);
-
-void ARGBToUV444Row_C(const uint8* src_argb,
-                      uint8* dst_u, uint8* dst_v, int width);
-void ARGBToUV411Row_C(const uint8* src_argb,
-                      uint8* dst_u, uint8* dst_v, int width);
-
-void MirrorRow_AVX2(const uint8* src, uint8* dst, int width);
-void MirrorRow_SSSE3(const uint8* src, uint8* dst, int width);
-void MirrorRow_NEON(const uint8* src, uint8* dst, int width);
-void MirrorRow_DSPR2(const uint8* src, uint8* dst, int width);
-void MirrorRow_C(const uint8* src, uint8* dst, int width);
-void MirrorRow_Any_AVX2(const uint8* src, uint8* dst, int width);
-void MirrorRow_Any_SSSE3(const uint8* src, uint8* dst, int width);
-void MirrorRow_Any_SSE2(const uint8* src, uint8* dst, int width);
-void MirrorRow_Any_NEON(const uint8* src, uint8* dst, int width);
-
-void MirrorUVRow_SSSE3(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
+void YUY2ToARGBRow_NEON(const uint8_t* src_yuy2,
+                        uint8_t* dst_argb,
+                        const struct YuvConstants* yuvconstants,
+                        int width);
+void UYVYToARGBRow_NEON(const uint8_t* src_uyvy,
+                        uint8_t* dst_argb,
+                        const struct YuvConstants* yuvconstants,
+                        int width);
+void I444ToARGBRow_MSA(const uint8_t* src_y,
+                       const uint8_t* src_u,
+                       const uint8_t* src_v,
+                       uint8_t* dst_argb,
+                       const struct YuvConstants* yuvconstants,
                        int width);
-void MirrorUVRow_NEON(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
-                      int width);
-void MirrorUVRow_DSPR2(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
+
+void I422ToARGBRow_MSA(const uint8_t* src_y,
+                       const uint8_t* src_u,
+                       const uint8_t* src_v,
+                       uint8_t* dst_argb,
+                       const struct YuvConstants* yuvconstants,
                        int width);
-void MirrorUVRow_C(const uint8* src_uv, uint8* dst_u, uint8* dst_v, int width);
+void I422ToRGBARow_MSA(const uint8_t* src_y,
+                       const uint8_t* src_u,
+                       const uint8_t* src_v,
+                       uint8_t* dst_argb,
+                       const struct YuvConstants* yuvconstants,
+                       int width);
+void I422AlphaToARGBRow_MSA(const uint8_t* src_y,
+                            const uint8_t* src_u,
+                            const uint8_t* src_v,
+                            const uint8_t* src_a,
+                            uint8_t* dst_argb,
+                            const struct YuvConstants* yuvconstants,
+                            int width);
+void I422ToRGB24Row_MSA(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_argb,
+                        const struct YuvConstants* yuvconstants,
+                        int width);
+void I422ToRGB565Row_MSA(const uint8_t* src_y,
+                         const uint8_t* src_u,
+                         const uint8_t* src_v,
+                         uint8_t* dst_rgb565,
+                         const struct YuvConstants* yuvconstants,
+                         int width);
+void I422ToARGB4444Row_MSA(const uint8_t* src_y,
+                           const uint8_t* src_u,
+                           const uint8_t* src_v,
+                           uint8_t* dst_argb4444,
+                           const struct YuvConstants* yuvconstants,
+                           int width);
+void I422ToARGB1555Row_MSA(const uint8_t* src_y,
+                           const uint8_t* src_u,
+                           const uint8_t* src_v,
+                           uint8_t* dst_argb1555,
+                           const struct YuvConstants* yuvconstants,
+                           int width);
+void NV12ToARGBRow_MSA(const uint8_t* src_y,
+                       const uint8_t* src_uv,
+                       uint8_t* dst_argb,
+                       const struct YuvConstants* yuvconstants,
+                       int width);
+void NV12ToRGB565Row_MSA(const uint8_t* src_y,
+                         const uint8_t* src_uv,
+                         uint8_t* dst_rgb565,
+                         const struct YuvConstants* yuvconstants,
+                         int width);
+void NV21ToARGBRow_MSA(const uint8_t* src_y,
+                       const uint8_t* src_vu,
+                       uint8_t* dst_argb,
+                       const struct YuvConstants* yuvconstants,
+                       int width);
+void YUY2ToARGBRow_MSA(const uint8_t* src_yuy2,
+                       uint8_t* dst_argb,
+                       const struct YuvConstants* yuvconstants,
+                       int width);
+void UYVYToARGBRow_MSA(const uint8_t* src_uyvy,
+                       uint8_t* dst_argb,
+                       const struct YuvConstants* yuvconstants,
+                       int width);
 
-void ARGBMirrorRow_AVX2(const uint8* src, uint8* dst, int width);
-void ARGBMirrorRow_SSE2(const uint8* src, uint8* dst, int width);
-void ARGBMirrorRow_NEON(const uint8* src, uint8* dst, int width);
-void ARGBMirrorRow_C(const uint8* src, uint8* dst, int width);
-void ARGBMirrorRow_Any_AVX2(const uint8* src, uint8* dst, int width);
-void ARGBMirrorRow_Any_SSE2(const uint8* src, uint8* dst, int width);
-void ARGBMirrorRow_Any_NEON(const uint8* src, uint8* dst, int width);
-
-void SplitUVRow_C(const uint8* src_uv, uint8* dst_u, uint8* dst_v, int width);
-void SplitUVRow_SSE2(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
-                     int width);
-void SplitUVRow_AVX2(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
-                     int width);
-void SplitUVRow_NEON(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
-                     int width);
-void SplitUVRow_DSPR2(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
+void ARGBToYRow_AVX2(const uint8_t* src_argb, uint8_t* dst_y, int width);
+void ARGBToYRow_Any_AVX2(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void ARGBToYRow_SSSE3(const uint8_t* src_argb, uint8_t* dst_y, int width);
+void ARGBToYJRow_AVX2(const uint8_t* src_argb, uint8_t* dst_y, int width);
+void ARGBToYJRow_Any_AVX2(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void ARGBToYJRow_SSSE3(const uint8_t* src_argb, uint8_t* dst_y, int width);
+void BGRAToYRow_SSSE3(const uint8_t* src_bgra, uint8_t* dst_y, int width);
+void ABGRToYRow_SSSE3(const uint8_t* src_abgr, uint8_t* dst_y, int width);
+void RGBAToYRow_SSSE3(const uint8_t* src_rgba, uint8_t* dst_y, int width);
+void RGB24ToYRow_SSSE3(const uint8_t* src_rgb24, uint8_t* dst_y, int width);
+void RAWToYRow_SSSE3(const uint8_t* src_raw, uint8_t* dst_y, int width);
+void ARGBToYRow_NEON(const uint8_t* src_argb, uint8_t* dst_y, int width);
+void ARGBToYJRow_NEON(const uint8_t* src_argb, uint8_t* dst_y, int width);
+void ARGBToYRow_MSA(const uint8_t* src_argb0, uint8_t* dst_y, int width);
+void ARGBToYJRow_MSA(const uint8_t* src_argb0, uint8_t* dst_y, int width);
+void ARGBToUV444Row_NEON(const uint8_t* src_argb,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width);
+void ARGBToUVRow_NEON(const uint8_t* src_argb,
+                      int src_stride_argb,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
                       int width);
-void SplitUVRow_Any_SSE2(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
-                         int width);
-void SplitUVRow_Any_AVX2(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
-                         int width);
-void SplitUVRow_Any_NEON(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
-                         int width);
-void SplitUVRow_Any_DSPR2(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
+void ARGBToUV444Row_MSA(const uint8_t* src_argb,
+                        uint8_t* dst_u,
+                        uint8_t* dst_v,
+                        int width);
+void ARGBToUVRow_MSA(const uint8_t* src_argb0,
+                     int src_stride_argb,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width);
+void ARGBToUVJRow_NEON(const uint8_t* src_argb,
+                       int src_stride_argb,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width);
+void BGRAToUVRow_NEON(const uint8_t* src_bgra,
+                      int src_stride_bgra,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void ABGRToUVRow_NEON(const uint8_t* src_abgr,
+                      int src_stride_abgr,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void RGBAToUVRow_NEON(const uint8_t* src_rgba,
+                      int src_stride_rgba,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void RGB24ToUVRow_NEON(const uint8_t* src_rgb24,
+                       int src_stride_rgb24,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width);
+void RAWToUVRow_NEON(const uint8_t* src_raw,
+                     int src_stride_raw,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width);
+void RGB565ToUVRow_NEON(const uint8_t* src_rgb565,
+                        int src_stride_rgb565,
+                        uint8_t* dst_u,
+                        uint8_t* dst_v,
+                        int width);
+void ARGB1555ToUVRow_NEON(const uint8_t* src_argb1555,
+                          int src_stride_argb1555,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
                           int width);
+void ARGB4444ToUVRow_NEON(const uint8_t* src_argb4444,
+                          int src_stride_argb4444,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width);
+void ARGBToUVJRow_MSA(const uint8_t* src_rgb0,
+                      int src_stride_rgb,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void BGRAToUVRow_MSA(const uint8_t* src_rgb0,
+                     int src_stride_rgb,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width);
+void ABGRToUVRow_MSA(const uint8_t* src_rgb0,
+                     int src_stride_rgb,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width);
+void RGBAToUVRow_MSA(const uint8_t* src_rgb0,
+                     int src_stride_rgb,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width);
+void RGB24ToUVRow_MSA(const uint8_t* src_rgb0,
+                      int src_stride_rgb,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void RAWToUVRow_MSA(const uint8_t* src_rgb0,
+                    int src_stride_rgb,
+                    uint8_t* dst_u,
+                    uint8_t* dst_v,
+                    int width);
+void RGB565ToUVRow_MSA(const uint8_t* src_rgb565,
+                       int src_stride_rgb565,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width);
+void ARGB1555ToUVRow_MSA(const uint8_t* src_argb1555,
+                         int src_stride_argb1555,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width);
+void BGRAToYRow_NEON(const uint8_t* src_bgra, uint8_t* dst_y, int width);
+void ABGRToYRow_NEON(const uint8_t* src_abgr, uint8_t* dst_y, int width);
+void RGBAToYRow_NEON(const uint8_t* src_rgba, uint8_t* dst_y, int width);
+void RGB24ToYRow_NEON(const uint8_t* src_rgb24, uint8_t* dst_y, int width);
+void RAWToYRow_NEON(const uint8_t* src_raw, uint8_t* dst_y, int width);
+void RGB565ToYRow_NEON(const uint8_t* src_rgb565, uint8_t* dst_y, int width);
+void ARGB1555ToYRow_NEON(const uint8_t* src_argb1555,
+                         uint8_t* dst_y,
+                         int width);
+void ARGB4444ToYRow_NEON(const uint8_t* src_argb4444,
+                         uint8_t* dst_y,
+                         int width);
+void BGRAToYRow_MSA(const uint8_t* src_argb0, uint8_t* dst_y, int width);
+void ABGRToYRow_MSA(const uint8_t* src_argb0, uint8_t* dst_y, int width);
+void RGBAToYRow_MSA(const uint8_t* src_argb0, uint8_t* dst_y, int width);
+void RGB24ToYRow_MSA(const uint8_t* src_argb0, uint8_t* dst_y, int width);
+void RAWToYRow_MSA(const uint8_t* src_argb0, uint8_t* dst_y, int width);
+void RGB565ToYRow_MSA(const uint8_t* src_rgb565, uint8_t* dst_y, int width);
+void ARGB1555ToYRow_MSA(const uint8_t* src_argb1555, uint8_t* dst_y, int width);
+void ARGBToYRow_C(const uint8_t* src_argb0, uint8_t* dst_y, int width);
+void ARGBToYJRow_C(const uint8_t* src_argb0, uint8_t* dst_y, int width);
+void BGRAToYRow_C(const uint8_t* src_argb0, uint8_t* dst_y, int width);
+void ABGRToYRow_C(const uint8_t* src_argb0, uint8_t* dst_y, int width);
+void RGBAToYRow_C(const uint8_t* src_argb0, uint8_t* dst_y, int width);
+void RGB24ToYRow_C(const uint8_t* src_argb0, uint8_t* dst_y, int width);
+void RAWToYRow_C(const uint8_t* src_argb0, uint8_t* dst_y, int width);
+void RGB565ToYRow_C(const uint8_t* src_rgb565, uint8_t* dst_y, int width);
+void ARGB1555ToYRow_C(const uint8_t* src_argb1555, uint8_t* dst_y, int width);
+void ARGB4444ToYRow_C(const uint8_t* src_argb4444, uint8_t* dst_y, int width);
+void ARGBToYRow_Any_SSSE3(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void ARGBToYJRow_Any_SSSE3(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void BGRAToYRow_Any_SSSE3(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void ABGRToYRow_Any_SSSE3(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void RGBAToYRow_Any_SSSE3(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void RGB24ToYRow_Any_SSSE3(const uint8_t* src_rgb24, uint8_t* dst_y, int width);
+void RAWToYRow_Any_SSSE3(const uint8_t* src_raw, uint8_t* dst_y, int width);
+void ARGBToYRow_Any_NEON(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void ARGBToYJRow_Any_NEON(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void BGRAToYRow_Any_NEON(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void ABGRToYRow_Any_NEON(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void RGBAToYRow_Any_NEON(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void RGB24ToYRow_Any_NEON(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void RAWToYRow_Any_NEON(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void RGB565ToYRow_Any_NEON(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void ARGB1555ToYRow_Any_NEON(const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
+                             int width);
+void ARGB4444ToYRow_Any_NEON(const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
+                             int width);
+void BGRAToYRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void ABGRToYRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void RGBAToYRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void ARGBToYJRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void ARGBToYRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void RGB24ToYRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void RAWToYRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void RGB565ToYRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void ARGB1555ToYRow_Any_MSA(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
 
-void MergeUVRow_C(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
+void ARGBToUVRow_AVX2(const uint8_t* src_argb0,
+                      int src_stride_argb,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void ARGBToUVJRow_AVX2(const uint8_t* src_argb0,
+                       int src_stride_argb,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width);
+void ARGBToUVRow_SSSE3(const uint8_t* src_argb0,
+                       int src_stride_argb,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width);
+void ARGBToUVJRow_SSSE3(const uint8_t* src_argb0,
+                        int src_stride_argb,
+                        uint8_t* dst_u,
+                        uint8_t* dst_v,
+                        int width);
+void BGRAToUVRow_SSSE3(const uint8_t* src_bgra0,
+                       int src_stride_bgra,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width);
+void ABGRToUVRow_SSSE3(const uint8_t* src_abgr0,
+                       int src_stride_abgr,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width);
+void RGBAToUVRow_SSSE3(const uint8_t* src_rgba0,
+                       int src_stride_rgba,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width);
+void ARGBToUVRow_Any_AVX2(const uint8_t* src_ptr,
+                          int src_stride_ptr,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width);
+void ARGBToUVJRow_Any_AVX2(const uint8_t* src_ptr,
+                           int src_stride_ptr,
+                           uint8_t* dst_u,
+                           uint8_t* dst_v,
+                           int width);
+void ARGBToUVRow_Any_SSSE3(const uint8_t* src_ptr,
+                           int src_stride_ptr,
+                           uint8_t* dst_u,
+                           uint8_t* dst_v,
+                           int width);
+void ARGBToUVJRow_Any_SSSE3(const uint8_t* src_ptr,
+                            int src_stride_ptr,
+                            uint8_t* dst_u,
+                            uint8_t* dst_v,
+                            int width);
+void BGRAToUVRow_Any_SSSE3(const uint8_t* src_ptr,
+                           int src_stride_ptr,
+                           uint8_t* dst_u,
+                           uint8_t* dst_v,
+                           int width);
+void ABGRToUVRow_Any_SSSE3(const uint8_t* src_ptr,
+                           int src_stride_ptr,
+                           uint8_t* dst_u,
+                           uint8_t* dst_v,
+                           int width);
+void RGBAToUVRow_Any_SSSE3(const uint8_t* src_ptr,
+                           int src_stride_ptr,
+                           uint8_t* dst_u,
+                           uint8_t* dst_v,
+                           int width);
+void ARGBToUV444Row_Any_NEON(const uint8_t* src_ptr,
+                             uint8_t* dst_u,
+                             uint8_t* dst_v,
+                             int width);
+void ARGBToUVRow_Any_NEON(const uint8_t* src_ptr,
+                          int src_stride_ptr,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width);
+void ARGBToUV444Row_Any_MSA(const uint8_t* src_ptr,
+                            uint8_t* dst_u,
+                            uint8_t* dst_v,
+                            int width);
+void ARGBToUVRow_Any_MSA(const uint8_t* src_ptr,
+                         int src_stride_ptr,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width);
+void ARGBToUVJRow_Any_NEON(const uint8_t* src_ptr,
+                           int src_stride_ptr,
+                           uint8_t* dst_u,
+                           uint8_t* dst_v,
+                           int width);
+void BGRAToUVRow_Any_NEON(const uint8_t* src_ptr,
+                          int src_stride_ptr,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width);
+void ABGRToUVRow_Any_NEON(const uint8_t* src_ptr,
+                          int src_stride_ptr,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width);
+void RGBAToUVRow_Any_NEON(const uint8_t* src_ptr,
+                          int src_stride_ptr,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width);
+void RGB24ToUVRow_Any_NEON(const uint8_t* src_ptr,
+                           int src_stride_ptr,
+                           uint8_t* dst_u,
+                           uint8_t* dst_v,
+                           int width);
+void RAWToUVRow_Any_NEON(const uint8_t* src_ptr,
+                         int src_stride_ptr,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width);
+void RGB565ToUVRow_Any_NEON(const uint8_t* src_ptr,
+                            int src_stride_ptr,
+                            uint8_t* dst_u,
+                            uint8_t* dst_v,
+                            int width);
+void ARGB1555ToUVRow_Any_NEON(const uint8_t* src_ptr,
+                              int src_stride_ptr,
+                              uint8_t* dst_u,
+                              uint8_t* dst_v,
+                              int width);
+void ARGB4444ToUVRow_Any_NEON(const uint8_t* src_ptr,
+                              int src_stride_ptr,
+                              uint8_t* dst_u,
+                              uint8_t* dst_v,
+                              int width);
+void ARGBToUVJRow_Any_MSA(const uint8_t* src_ptr,
+                          int src_stride_ptr,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width);
+void BGRAToUVRow_Any_MSA(const uint8_t* src_ptr,
+                         int src_stride_ptr,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width);
+void ABGRToUVRow_Any_MSA(const uint8_t* src_ptr,
+                         int src_stride_ptr,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width);
+void RGBAToUVRow_Any_MSA(const uint8_t* src_ptr,
+                         int src_stride_ptr,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width);
+void RGB24ToUVRow_Any_MSA(const uint8_t* src_ptr,
+                          int src_stride_ptr,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width);
+void RAWToUVRow_Any_MSA(const uint8_t* src_ptr,
+                        int src_stride_ptr,
+                        uint8_t* dst_u,
+                        uint8_t* dst_v,
+                        int width);
+void RGB565ToUVRow_Any_MSA(const uint8_t* src_ptr,
+                           int src_stride_ptr,
+                           uint8_t* dst_u,
+                           uint8_t* dst_v,
+                           int width);
+void ARGB1555ToUVRow_Any_MSA(const uint8_t* src_ptr,
+                             int src_stride_ptr,
+                             uint8_t* dst_u,
+                             uint8_t* dst_v,
+                             int width);
+void ARGBToUVRow_C(const uint8_t* src_rgb0,
+                   int src_stride_rgb,
+                   uint8_t* dst_u,
+                   uint8_t* dst_v,
+                   int width);
+void ARGBToUVJRow_C(const uint8_t* src_rgb0,
+                    int src_stride_rgb,
+                    uint8_t* dst_u,
+                    uint8_t* dst_v,
+                    int width);
+void ARGBToUVRow_C(const uint8_t* src_rgb0,
+                   int src_stride_rgb,
+                   uint8_t* dst_u,
+                   uint8_t* dst_v,
+                   int width);
+void ARGBToUVJRow_C(const uint8_t* src_rgb0,
+                    int src_stride_rgb,
+                    uint8_t* dst_u,
+                    uint8_t* dst_v,
+                    int width);
+void BGRAToUVRow_C(const uint8_t* src_rgb0,
+                   int src_stride_rgb,
+                   uint8_t* dst_u,
+                   uint8_t* dst_v,
+                   int width);
+void ABGRToUVRow_C(const uint8_t* src_rgb0,
+                   int src_stride_rgb,
+                   uint8_t* dst_u,
+                   uint8_t* dst_v,
+                   int width);
+void RGBAToUVRow_C(const uint8_t* src_rgb0,
+                   int src_stride_rgb,
+                   uint8_t* dst_u,
+                   uint8_t* dst_v,
+                   int width);
+void RGB24ToUVRow_C(const uint8_t* src_rgb0,
+                    int src_stride_rgb,
+                    uint8_t* dst_u,
+                    uint8_t* dst_v,
+                    int width);
+void RAWToUVRow_C(const uint8_t* src_rgb0,
+                  int src_stride_rgb,
+                  uint8_t* dst_u,
+                  uint8_t* dst_v,
                   int width);
-void MergeUVRow_SSE2(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
+void RGB565ToUVRow_C(const uint8_t* src_rgb565,
+                     int src_stride_rgb565,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
                      int width);
-void MergeUVRow_AVX2(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
+void ARGB1555ToUVRow_C(const uint8_t* src_argb1555,
+                       int src_stride_argb1555,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width);
+void ARGB4444ToUVRow_C(const uint8_t* src_argb4444,
+                       int src_stride_argb4444,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width);
+
+void ARGBToUV444Row_SSSE3(const uint8_t* src_argb,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width);
+void ARGBToUV444Row_Any_SSSE3(const uint8_t* src_ptr,
+                              uint8_t* dst_u,
+                              uint8_t* dst_v,
+                              int width);
+
+void ARGBToUV444Row_C(const uint8_t* src_argb,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+
+void MirrorRow_AVX2(const uint8_t* src, uint8_t* dst, int width);
+void MirrorRow_SSSE3(const uint8_t* src, uint8_t* dst, int width);
+void MirrorRow_NEON(const uint8_t* src, uint8_t* dst, int width);
+void MirrorRow_MSA(const uint8_t* src, uint8_t* dst, int width);
+void MirrorRow_C(const uint8_t* src, uint8_t* dst, int width);
+void MirrorRow_Any_AVX2(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void MirrorRow_Any_SSSE3(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void MirrorRow_Any_SSE2(const uint8_t* src, uint8_t* dst, int width);
+void MirrorRow_Any_NEON(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void MirrorRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+
+void MirrorUVRow_SSSE3(const uint8_t* src,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width);
+void MirrorUVRow_NEON(const uint8_t* src_uv,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void MirrorUVRow_MSA(const uint8_t* src_uv,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
                      int width);
-void MergeUVRow_NEON(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
+void MirrorUVRow_C(const uint8_t* src_uv,
+                   uint8_t* dst_u,
+                   uint8_t* dst_v,
+                   int width);
+
+void ARGBMirrorRow_AVX2(const uint8_t* src, uint8_t* dst, int width);
+void ARGBMirrorRow_SSE2(const uint8_t* src, uint8_t* dst, int width);
+void ARGBMirrorRow_NEON(const uint8_t* src, uint8_t* dst, int width);
+void ARGBMirrorRow_MSA(const uint8_t* src, uint8_t* dst, int width);
+void ARGBMirrorRow_C(const uint8_t* src, uint8_t* dst, int width);
+void ARGBMirrorRow_Any_AVX2(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
+void ARGBMirrorRow_Any_SSE2(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
+void ARGBMirrorRow_Any_NEON(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
+void ARGBMirrorRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+
+void SplitUVRow_C(const uint8_t* src_uv,
+                  uint8_t* dst_u,
+                  uint8_t* dst_v,
+                  int width);
+void SplitUVRow_SSE2(const uint8_t* src_uv,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
                      int width);
-void MergeUVRow_Any_SSE2(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
+void SplitUVRow_AVX2(const uint8_t* src_uv,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width);
+void SplitUVRow_NEON(const uint8_t* src_uv,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width);
+void SplitUVRow_MSA(const uint8_t* src_uv,
+                    uint8_t* dst_u,
+                    uint8_t* dst_v,
+                    int width);
+void SplitUVRow_Any_SSE2(const uint8_t* src_ptr,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
                          int width);
-void MergeUVRow_Any_AVX2(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
+void SplitUVRow_Any_AVX2(const uint8_t* src_ptr,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
                          int width);
-void MergeUVRow_Any_NEON(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
+void SplitUVRow_Any_NEON(const uint8_t* src_ptr,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
                          int width);
+void SplitUVRow_Any_MSA(const uint8_t* src_ptr,
+                        uint8_t* dst_u,
+                        uint8_t* dst_v,
+                        int width);
 
-void CopyRow_SSE2(const uint8* src, uint8* dst, int count);
-void CopyRow_AVX(const uint8* src, uint8* dst, int count);
-void CopyRow_ERMS(const uint8* src, uint8* dst, int count);
-void CopyRow_NEON(const uint8* src, uint8* dst, int count);
-void CopyRow_MIPS(const uint8* src, uint8* dst, int count);
-void CopyRow_C(const uint8* src, uint8* dst, int count);
-void CopyRow_Any_SSE2(const uint8* src, uint8* dst, int count);
-void CopyRow_Any_AVX(const uint8* src, uint8* dst, int count);
-void CopyRow_Any_NEON(const uint8* src, uint8* dst, int count);
+void MergeUVRow_C(const uint8_t* src_u,
+                  const uint8_t* src_v,
+                  uint8_t* dst_uv,
+                  int width);
+void MergeUVRow_SSE2(const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* dst_uv,
+                     int width);
+void MergeUVRow_AVX2(const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* dst_uv,
+                     int width);
+void MergeUVRow_NEON(const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* dst_uv,
+                     int width);
+void MergeUVRow_MSA(const uint8_t* src_u,
+                    const uint8_t* src_v,
+                    uint8_t* dst_uv,
+                    int width);
+void MergeUVRow_Any_SSE2(const uint8_t* y_buf,
+                         const uint8_t* uv_buf,
+                         uint8_t* dst_ptr,
+                         int width);
+void MergeUVRow_Any_AVX2(const uint8_t* y_buf,
+                         const uint8_t* uv_buf,
+                         uint8_t* dst_ptr,
+                         int width);
+void MergeUVRow_Any_NEON(const uint8_t* y_buf,
+                         const uint8_t* uv_buf,
+                         uint8_t* dst_ptr,
+                         int width);
+void MergeUVRow_Any_MSA(const uint8_t* y_buf,
+                        const uint8_t* uv_buf,
+                        uint8_t* dst_ptr,
+                        int width);
 
-void CopyRow_16_C(const uint16* src, uint16* dst, int count);
+void SplitRGBRow_C(const uint8_t* src_rgb,
+                   uint8_t* dst_r,
+                   uint8_t* dst_g,
+                   uint8_t* dst_b,
+                   int width);
+void SplitRGBRow_SSSE3(const uint8_t* src_rgb,
+                       uint8_t* dst_r,
+                       uint8_t* dst_g,
+                       uint8_t* dst_b,
+                       int width);
+void SplitRGBRow_NEON(const uint8_t* src_rgb,
+                      uint8_t* dst_r,
+                      uint8_t* dst_g,
+                      uint8_t* dst_b,
+                      int width);
+void SplitRGBRow_Any_SSSE3(const uint8_t* src_ptr,
+                           uint8_t* dst_r,
+                           uint8_t* dst_g,
+                           uint8_t* dst_b,
+                           int width);
+void SplitRGBRow_Any_NEON(const uint8_t* src_ptr,
+                          uint8_t* dst_r,
+                          uint8_t* dst_g,
+                          uint8_t* dst_b,
+                          int width);
 
-void ARGBCopyAlphaRow_C(const uint8* src_argb, uint8* dst_argb, int width);
-void ARGBCopyAlphaRow_SSE2(const uint8* src_argb, uint8* dst_argb, int width);
-void ARGBCopyAlphaRow_AVX2(const uint8* src_argb, uint8* dst_argb, int width);
-void ARGBCopyAlphaRow_Any_SSE2(const uint8* src_argb, uint8* dst_argb,
+void MergeRGBRow_C(const uint8_t* src_r,
+                   const uint8_t* src_g,
+                   const uint8_t* src_b,
+                   uint8_t* dst_rgb,
+                   int width);
+void MergeRGBRow_SSSE3(const uint8_t* src_r,
+                       const uint8_t* src_g,
+                       const uint8_t* src_b,
+                       uint8_t* dst_rgb,
+                       int width);
+void MergeRGBRow_NEON(const uint8_t* src_r,
+                      const uint8_t* src_g,
+                      const uint8_t* src_b,
+                      uint8_t* dst_rgb,
+                      int width);
+void MergeRGBRow_Any_SSSE3(const uint8_t* y_buf,
+                           const uint8_t* u_buf,
+                           const uint8_t* v_buf,
+                           uint8_t* dst_ptr,
+                           int width);
+void MergeRGBRow_Any_NEON(const uint8_t* src_r,
+                          const uint8_t* src_g,
+                          const uint8_t* src_b,
+                          uint8_t* dst_rgb,
+                          int width);
+
+void MergeUVRow_16_C(const uint16_t* src_u,
+                     const uint16_t* src_v,
+                     uint16_t* dst_uv,
+                     int scale, /* 64 for 10 bit */
+                     int width);
+void MergeUVRow_16_AVX2(const uint16_t* src_u,
+                        const uint16_t* src_v,
+                        uint16_t* dst_uv,
+                        int scale,
+                        int width);
+
+void MultiplyRow_16_AVX2(const uint16_t* src_y,
+                         uint16_t* dst_y,
+                         int scale,
+                         int width);
+void MultiplyRow_16_C(const uint16_t* src_y,
+                      uint16_t* dst_y,
+                      int scale,
+                      int width);
+
+void Convert8To16Row_C(const uint8_t* src_y,
+                       uint16_t* dst_y,
+                       int scale,
+                       int width);
+void Convert8To16Row_SSE2(const uint8_t* src_y,
+                          uint16_t* dst_y,
+                          int scale,
+                          int width);
+void Convert8To16Row_AVX2(const uint8_t* src_y,
+                          uint16_t* dst_y,
+                          int scale,
+                          int width);
+void Convert8To16Row_Any_SSE2(const uint8_t* src_ptr,
+                              uint16_t* dst_ptr,
+                              int scale,
+                              int width);
+void Convert8To16Row_Any_AVX2(const uint8_t* src_ptr,
+                              uint16_t* dst_ptr,
+                              int scale,
+                              int width);
+
+void Convert16To8Row_C(const uint16_t* src_y,
+                       uint8_t* dst_y,
+                       int scale,
+                       int width);
+void Convert16To8Row_SSSE3(const uint16_t* src_y,
+                           uint8_t* dst_y,
+                           int scale,
+                           int width);
+void Convert16To8Row_AVX2(const uint16_t* src_y,
+                          uint8_t* dst_y,
+                          int scale,
+                          int width);
+void Convert16To8Row_Any_SSSE3(const uint16_t* src_ptr,
+                               uint8_t* dst_ptr,
+                               int scale,
                                int width);
-void ARGBCopyAlphaRow_Any_AVX2(const uint8* src_argb, uint8* dst_argb,
+void Convert16To8Row_Any_AVX2(const uint16_t* src_ptr,
+                              uint8_t* dst_ptr,
+                              int scale,
+                              int width);
+
+void CopyRow_SSE2(const uint8_t* src, uint8_t* dst, int width);
+void CopyRow_AVX(const uint8_t* src, uint8_t* dst, int width);
+void CopyRow_ERMS(const uint8_t* src, uint8_t* dst, int width);
+void CopyRow_NEON(const uint8_t* src, uint8_t* dst, int width);
+void CopyRow_MIPS(const uint8_t* src, uint8_t* dst, int count);
+void CopyRow_C(const uint8_t* src, uint8_t* dst, int count);
+void CopyRow_Any_SSE2(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void CopyRow_Any_AVX(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void CopyRow_Any_NEON(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+
+void CopyRow_16_C(const uint16_t* src, uint16_t* dst, int count);
+
+void ARGBCopyAlphaRow_C(const uint8_t* src, uint8_t* dst, int width);
+void ARGBCopyAlphaRow_SSE2(const uint8_t* src, uint8_t* dst, int width);
+void ARGBCopyAlphaRow_AVX2(const uint8_t* src, uint8_t* dst, int width);
+void ARGBCopyAlphaRow_Any_SSE2(const uint8_t* src_ptr,
+                               uint8_t* dst_ptr,
+                               int width);
+void ARGBCopyAlphaRow_Any_AVX2(const uint8_t* src_ptr,
+                               uint8_t* dst_ptr,
                                int width);
 
-void ARGBExtractAlphaRow_C(const uint8* src_argb, uint8* dst_a, int width);
-void ARGBExtractAlphaRow_SSE2(const uint8* src_argb, uint8* dst_a, int width);
-void ARGBExtractAlphaRow_NEON(const uint8* src_argb, uint8* dst_a, int width);
-void ARGBExtractAlphaRow_Any_SSE2(const uint8* src_argb, uint8* dst_a,
+void ARGBExtractAlphaRow_C(const uint8_t* src_argb, uint8_t* dst_a, int width);
+void ARGBExtractAlphaRow_SSE2(const uint8_t* src_argb,
+                              uint8_t* dst_a,
+                              int width);
+void ARGBExtractAlphaRow_AVX2(const uint8_t* src_argb,
+                              uint8_t* dst_a,
+                              int width);
+void ARGBExtractAlphaRow_NEON(const uint8_t* src_argb,
+                              uint8_t* dst_a,
+                              int width);
+void ARGBExtractAlphaRow_MSA(const uint8_t* src_argb,
+                             uint8_t* dst_a,
+                             int width);
+void ARGBExtractAlphaRow_Any_SSE2(const uint8_t* src_ptr,
+                                  uint8_t* dst_ptr,
                                   int width);
-void ARGBExtractAlphaRow_Any_NEON(const uint8* src_argb, uint8* dst_a,
+void ARGBExtractAlphaRow_Any_AVX2(const uint8_t* src_ptr,
+                                  uint8_t* dst_ptr,
+                                  int width);
+void ARGBExtractAlphaRow_Any_NEON(const uint8_t* src_ptr,
+                                  uint8_t* dst_ptr,
+                                  int width);
+void ARGBExtractAlphaRow_Any_MSA(const uint8_t* src_ptr,
+                                 uint8_t* dst_ptr,
+                                 int width);
+
+void ARGBCopyYToAlphaRow_C(const uint8_t* src, uint8_t* dst, int width);
+void ARGBCopyYToAlphaRow_SSE2(const uint8_t* src, uint8_t* dst, int width);
+void ARGBCopyYToAlphaRow_AVX2(const uint8_t* src, uint8_t* dst, int width);
+void ARGBCopyYToAlphaRow_Any_SSE2(const uint8_t* src_ptr,
+                                  uint8_t* dst_ptr,
+                                  int width);
+void ARGBCopyYToAlphaRow_Any_AVX2(const uint8_t* src_ptr,
+                                  uint8_t* dst_ptr,
                                   int width);
 
-void ARGBCopyYToAlphaRow_C(const uint8* src_y, uint8* dst_argb, int width);
-void ARGBCopyYToAlphaRow_SSE2(const uint8* src_y, uint8* dst_argb, int width);
-void ARGBCopyYToAlphaRow_AVX2(const uint8* src_y, uint8* dst_argb, int width);
-void ARGBCopyYToAlphaRow_Any_SSE2(const uint8* src_y, uint8* dst_argb,
-                                  int width);
-void ARGBCopyYToAlphaRow_Any_AVX2(const uint8* src_y, uint8* dst_argb,
-                                  int width);
+void SetRow_C(uint8_t* dst, uint8_t v8, int width);
+void SetRow_MSA(uint8_t* dst, uint8_t v8, int width);
+void SetRow_X86(uint8_t* dst, uint8_t v8, int width);
+void SetRow_ERMS(uint8_t* dst, uint8_t v8, int width);
+void SetRow_NEON(uint8_t* dst, uint8_t v8, int width);
+void SetRow_Any_X86(uint8_t* dst_ptr, uint8_t v32, int width);
+void SetRow_Any_NEON(uint8_t* dst_ptr, uint8_t v32, int width);
 
-void SetRow_C(uint8* dst, uint8 v8, int count);
-void SetRow_X86(uint8* dst, uint8 v8, int count);
-void SetRow_ERMS(uint8* dst, uint8 v8, int count);
-void SetRow_NEON(uint8* dst, uint8 v8, int count);
-void SetRow_Any_X86(uint8* dst, uint8 v8, int count);
-void SetRow_Any_NEON(uint8* dst, uint8 v8, int count);
-
-void ARGBSetRow_C(uint8* dst_argb, uint32 v32, int count);
-void ARGBSetRow_X86(uint8* dst_argb, uint32 v32, int count);
-void ARGBSetRow_NEON(uint8* dst_argb, uint32 v32, int count);
-void ARGBSetRow_Any_NEON(uint8* dst_argb, uint32 v32, int count);
+void ARGBSetRow_C(uint8_t* dst_argb, uint32_t v32, int width);
+void ARGBSetRow_X86(uint8_t* dst_argb, uint32_t v32, int width);
+void ARGBSetRow_NEON(uint8_t* dst, uint32_t v32, int width);
+void ARGBSetRow_Any_NEON(uint8_t* dst_ptr, uint32_t v32, int width);
+void ARGBSetRow_MSA(uint8_t* dst_argb, uint32_t v32, int width);
+void ARGBSetRow_Any_MSA(uint8_t* dst_ptr, uint32_t v32, int width);
 
 // ARGBShufflers for BGRAToARGB etc.
-void ARGBShuffleRow_C(const uint8* src_argb, uint8* dst_argb,
-                      const uint8* shuffler, int width);
-void ARGBShuffleRow_SSE2(const uint8* src_argb, uint8* dst_argb,
-                         const uint8* shuffler, int width);
-void ARGBShuffleRow_SSSE3(const uint8* src_argb, uint8* dst_argb,
-                          const uint8* shuffler, int width);
-void ARGBShuffleRow_AVX2(const uint8* src_argb, uint8* dst_argb,
-                         const uint8* shuffler, int width);
-void ARGBShuffleRow_NEON(const uint8* src_argb, uint8* dst_argb,
-                         const uint8* shuffler, int width);
-void ARGBShuffleRow_Any_SSE2(const uint8* src_argb, uint8* dst_argb,
-                             const uint8* shuffler, int width);
-void ARGBShuffleRow_Any_SSSE3(const uint8* src_argb, uint8* dst_argb,
-                              const uint8* shuffler, int width);
-void ARGBShuffleRow_Any_AVX2(const uint8* src_argb, uint8* dst_argb,
-                             const uint8* shuffler, int width);
-void ARGBShuffleRow_Any_NEON(const uint8* src_argb, uint8* dst_argb,
-                             const uint8* shuffler, int width);
-
-void RGB24ToARGBRow_SSSE3(const uint8* src_rgb24, uint8* dst_argb, int width);
-void RAWToARGBRow_SSSE3(const uint8* src_raw, uint8* dst_argb, int width);
-void RAWToRGB24Row_SSSE3(const uint8* src_raw, uint8* dst_rgb24, int width);
-void RGB565ToARGBRow_SSE2(const uint8* src_rgb565, uint8* dst_argb, int width);
-void ARGB1555ToARGBRow_SSE2(const uint8* src_argb1555, uint8* dst_argb,
-                            int width);
-void ARGB4444ToARGBRow_SSE2(const uint8* src_argb4444, uint8* dst_argb,
-                            int width);
-void RGB565ToARGBRow_AVX2(const uint8* src_rgb565, uint8* dst_argb, int width);
-void ARGB1555ToARGBRow_AVX2(const uint8* src_argb1555, uint8* dst_argb,
-                            int width);
-void ARGB4444ToARGBRow_AVX2(const uint8* src_argb4444, uint8* dst_argb,
-                            int width);
-
-void RGB24ToARGBRow_NEON(const uint8* src_rgb24, uint8* dst_argb, int width);
-void RAWToARGBRow_NEON(const uint8* src_raw, uint8* dst_argb, int width);
-void RAWToRGB24Row_NEON(const uint8* src_raw, uint8* dst_rgb24, int width);
-void RGB565ToARGBRow_NEON(const uint8* src_rgb565, uint8* dst_argb, int width);
-void ARGB1555ToARGBRow_NEON(const uint8* src_argb1555, uint8* dst_argb,
-                            int width);
-void ARGB4444ToARGBRow_NEON(const uint8* src_argb4444, uint8* dst_argb,
-                            int width);
-void RGB24ToARGBRow_C(const uint8* src_rgb24, uint8* dst_argb, int width);
-void RAWToARGBRow_C(const uint8* src_raw, uint8* dst_argb, int width);
-void RAWToRGB24Row_C(const uint8* src_raw, uint8* dst_rgb24, int width);
-void RGB565ToARGBRow_C(const uint8* src_rgb, uint8* dst_argb, int width);
-void ARGB1555ToARGBRow_C(const uint8* src_argb, uint8* dst_argb, int width);
-void ARGB4444ToARGBRow_C(const uint8* src_argb, uint8* dst_argb, int width);
-void RGB24ToARGBRow_Any_SSSE3(const uint8* src_rgb24, uint8* dst_argb,
+void ARGBShuffleRow_C(const uint8_t* src_argb,
+                      uint8_t* dst_argb,
+                      const uint8_t* shuffler,
+                      int width);
+void ARGBShuffleRow_SSSE3(const uint8_t* src_argb,
+                          uint8_t* dst_argb,
+                          const uint8_t* shuffler,
+                          int width);
+void ARGBShuffleRow_AVX2(const uint8_t* src_argb,
+                         uint8_t* dst_argb,
+                         const uint8_t* shuffler,
+                         int width);
+void ARGBShuffleRow_NEON(const uint8_t* src_argb,
+                         uint8_t* dst_argb,
+                         const uint8_t* shuffler,
+                         int width);
+void ARGBShuffleRow_MSA(const uint8_t* src_argb,
+                        uint8_t* dst_argb,
+                        const uint8_t* shuffler,
+                        int width);
+void ARGBShuffleRow_Any_SSSE3(const uint8_t* src_ptr,
+                              uint8_t* dst_ptr,
+                              const uint8_t* param,
                               int width);
-void RAWToARGBRow_Any_SSSE3(const uint8* src_raw, uint8* dst_argb, int width);
-void RAWToRGB24Row_Any_SSSE3(const uint8* src_raw, uint8* dst_rgb24, int width);
-
-void RGB565ToARGBRow_Any_SSE2(const uint8* src_rgb565, uint8* dst_argb,
-                              int width);
-void ARGB1555ToARGBRow_Any_SSE2(const uint8* src_argb1555, uint8* dst_argb,
-                                int width);
-void ARGB4444ToARGBRow_Any_SSE2(const uint8* src_argb4444, uint8* dst_argb,
-                                int width);
-void RGB565ToARGBRow_Any_AVX2(const uint8* src_rgb565, uint8* dst_argb,
-                              int width);
-void ARGB1555ToARGBRow_Any_AVX2(const uint8* src_argb1555, uint8* dst_argb,
-                                int width);
-void ARGB4444ToARGBRow_Any_AVX2(const uint8* src_argb4444, uint8* dst_argb,
-                                int width);
-
-void RGB24ToARGBRow_Any_NEON(const uint8* src_rgb24, uint8* dst_argb,
+void ARGBShuffleRow_Any_AVX2(const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
+                             const uint8_t* param,
                              int width);
-void RAWToARGBRow_Any_NEON(const uint8* src_raw, uint8* dst_argb, int width);
-void RAWToRGB24Row_Any_NEON(const uint8* src_raw, uint8* dst_rgb24, int width);
-void RGB565ToARGBRow_Any_NEON(const uint8* src_rgb565, uint8* dst_argb,
+void ARGBShuffleRow_Any_NEON(const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
+                             const uint8_t* param,
+                             int width);
+void ARGBShuffleRow_Any_MSA(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            const uint8_t* param,
+                            int width);
+
+void RGB24ToARGBRow_SSSE3(const uint8_t* src_rgb24,
+                          uint8_t* dst_argb,
+                          int width);
+void RAWToARGBRow_SSSE3(const uint8_t* src_raw, uint8_t* dst_argb, int width);
+void RAWToRGB24Row_SSSE3(const uint8_t* src_raw, uint8_t* dst_rgb24, int width);
+void RGB565ToARGBRow_SSE2(const uint8_t* src, uint8_t* dst, int width);
+void ARGB1555ToARGBRow_SSE2(const uint8_t* src, uint8_t* dst, int width);
+void ARGB4444ToARGBRow_SSE2(const uint8_t* src, uint8_t* dst, int width);
+void RGB565ToARGBRow_AVX2(const uint8_t* src_rgb565,
+                          uint8_t* dst_argb,
+                          int width);
+void ARGB1555ToARGBRow_AVX2(const uint8_t* src_argb1555,
+                            uint8_t* dst_argb,
+                            int width);
+void ARGB4444ToARGBRow_AVX2(const uint8_t* src_argb4444,
+                            uint8_t* dst_argb,
+                            int width);
+
+void RGB24ToARGBRow_NEON(const uint8_t* src_rgb24,
+                         uint8_t* dst_argb,
+                         int width);
+void RGB24ToARGBRow_MSA(const uint8_t* src_rgb24, uint8_t* dst_argb, int width);
+void RAWToARGBRow_NEON(const uint8_t* src_raw, uint8_t* dst_argb, int width);
+void RAWToARGBRow_MSA(const uint8_t* src_raw, uint8_t* dst_argb, int width);
+void RAWToRGB24Row_NEON(const uint8_t* src_raw, uint8_t* dst_rgb24, int width);
+void RAWToRGB24Row_MSA(const uint8_t* src_raw, uint8_t* dst_rgb24, int width);
+void RGB565ToARGBRow_NEON(const uint8_t* src_rgb565,
+                          uint8_t* dst_argb,
+                          int width);
+void RGB565ToARGBRow_MSA(const uint8_t* src_rgb565,
+                         uint8_t* dst_argb,
+                         int width);
+void ARGB1555ToARGBRow_NEON(const uint8_t* src_argb1555,
+                            uint8_t* dst_argb,
+                            int width);
+void ARGB1555ToARGBRow_MSA(const uint8_t* src_argb1555,
+                           uint8_t* dst_argb,
+                           int width);
+void ARGB4444ToARGBRow_NEON(const uint8_t* src_argb4444,
+                            uint8_t* dst_argb,
+                            int width);
+void ARGB4444ToARGBRow_MSA(const uint8_t* src_argb4444,
+                           uint8_t* dst_argb,
+                           int width);
+void RGB24ToARGBRow_C(const uint8_t* src_rgb24, uint8_t* dst_argb, int width);
+void RAWToARGBRow_C(const uint8_t* src_raw, uint8_t* dst_argb, int width);
+void RAWToRGB24Row_C(const uint8_t* src_raw, uint8_t* dst_rgb24, int width);
+void RGB565ToARGBRow_C(const uint8_t* src_rgb565, uint8_t* dst_argb, int width);
+void ARGB1555ToARGBRow_C(const uint8_t* src_argb1555,
+                         uint8_t* dst_argb,
+                         int width);
+void ARGB4444ToARGBRow_C(const uint8_t* src_argb4444,
+                         uint8_t* dst_argb,
+                         int width);
+void AR30ToARGBRow_C(const uint8_t* src_ar30, uint8_t* dst_argb, int width);
+void AR30ToABGRRow_C(const uint8_t* src_ar30, uint8_t* dst_abgr, int width);
+void ARGBToAR30Row_C(const uint8_t* src_argb, uint8_t* dst_ar30, int width);
+void AR30ToAB30Row_C(const uint8_t* src_ar30, uint8_t* dst_ab30, int width);
+
+void RGB24ToARGBRow_Any_SSSE3(const uint8_t* src_ptr,
+                              uint8_t* dst_ptr,
                               int width);
-void ARGB1555ToARGBRow_Any_NEON(const uint8* src_argb1555, uint8* dst_argb,
+void RAWToARGBRow_Any_SSSE3(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
+void RAWToRGB24Row_Any_SSSE3(const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
+                             int width);
+
+void RGB565ToARGBRow_Any_SSE2(const uint8_t* src_ptr,
+                              uint8_t* dst_ptr,
+                              int width);
+void ARGB1555ToARGBRow_Any_SSE2(const uint8_t* src_ptr,
+                                uint8_t* dst_ptr,
                                 int width);
-void ARGB4444ToARGBRow_Any_NEON(const uint8* src_argb4444, uint8* dst_argb,
+void ARGB4444ToARGBRow_Any_SSE2(const uint8_t* src_ptr,
+                                uint8_t* dst_ptr,
+                                int width);
+void RGB565ToARGBRow_Any_AVX2(const uint8_t* src_ptr,
+                              uint8_t* dst_ptr,
+                              int width);
+void ARGB1555ToARGBRow_Any_AVX2(const uint8_t* src_ptr,
+                                uint8_t* dst_ptr,
+                                int width);
+void ARGB4444ToARGBRow_Any_AVX2(const uint8_t* src_ptr,
+                                uint8_t* dst_ptr,
                                 int width);
 
-void ARGBToRGB24Row_SSSE3(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToRAWRow_SSSE3(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToRGB565Row_SSE2(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToARGB1555Row_SSE2(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToARGB4444Row_SSE2(const uint8* src_argb, uint8* dst_rgb, int width);
+void RGB24ToARGBRow_Any_NEON(const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
+                             int width);
+void RGB24ToARGBRow_Any_MSA(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
+void RAWToARGBRow_Any_NEON(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void RAWToARGBRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void RAWToRGB24Row_Any_NEON(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
+void RAWToRGB24Row_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void RGB565ToARGBRow_Any_NEON(const uint8_t* src_ptr,
+                              uint8_t* dst_ptr,
+                              int width);
+void RGB565ToARGBRow_Any_MSA(const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
+                             int width);
+void ARGB1555ToARGBRow_Any_NEON(const uint8_t* src_ptr,
+                                uint8_t* dst_ptr,
+                                int width);
+void ARGB1555ToARGBRow_Any_MSA(const uint8_t* src_ptr,
+                               uint8_t* dst_ptr,
+                               int width);
+void ARGB4444ToARGBRow_Any_NEON(const uint8_t* src_ptr,
+                                uint8_t* dst_ptr,
+                                int width);
 
-void ARGBToRGB565DitherRow_C(const uint8* src_argb, uint8* dst_rgb,
-                             const uint32 dither4, int width);
-void ARGBToRGB565DitherRow_SSE2(const uint8* src_argb, uint8* dst_rgb,
-                                const uint32 dither4, int width);
-void ARGBToRGB565DitherRow_AVX2(const uint8* src_argb, uint8* dst_rgb,
-                                const uint32 dither4, int width);
+void ARGB4444ToARGBRow_Any_MSA(const uint8_t* src_ptr,
+                               uint8_t* dst_ptr,
+                               int width);
 
-void ARGBToRGB565Row_AVX2(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToARGB1555Row_AVX2(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToARGB4444Row_AVX2(const uint8* src_argb, uint8* dst_rgb, int width);
+void ARGBToRGB24Row_SSSE3(const uint8_t* src, uint8_t* dst, int width);
+void ARGBToRAWRow_SSSE3(const uint8_t* src, uint8_t* dst, int width);
+void ARGBToRGB565Row_SSE2(const uint8_t* src, uint8_t* dst, int width);
+void ARGBToARGB1555Row_SSE2(const uint8_t* src, uint8_t* dst, int width);
+void ARGBToARGB4444Row_SSE2(const uint8_t* src, uint8_t* dst, int width);
+void ABGRToAR30Row_SSSE3(const uint8_t* src, uint8_t* dst, int width);
+void ARGBToAR30Row_SSSE3(const uint8_t* src, uint8_t* dst, int width);
 
-void ARGBToRGB24Row_NEON(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToRAWRow_NEON(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToRGB565Row_NEON(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToARGB1555Row_NEON(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToARGB4444Row_NEON(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToRGB565DitherRow_NEON(const uint8* src_argb, uint8* dst_rgb,
-                                const uint32 dither4, int width);
+void ARGBToRAWRow_AVX2(const uint8_t* src, uint8_t* dst, int width);
+void ARGBToRGB24Row_AVX2(const uint8_t* src, uint8_t* dst, int width);
 
-void ARGBToRGBARow_C(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToRGB24Row_C(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToRAWRow_C(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToRGB565Row_C(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToARGB1555Row_C(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToARGB4444Row_C(const uint8* src_argb, uint8* dst_rgb, int width);
+void ARGBToRGB24Row_AVX512VBMI(const uint8_t* src, uint8_t* dst, int width);
 
-void J400ToARGBRow_SSE2(const uint8* src_y, uint8* dst_argb, int width);
-void J400ToARGBRow_AVX2(const uint8* src_y, uint8* dst_argb, int width);
-void J400ToARGBRow_NEON(const uint8* src_y, uint8* dst_argb, int width);
-void J400ToARGBRow_C(const uint8* src_y, uint8* dst_argb, int width);
-void J400ToARGBRow_Any_SSE2(const uint8* src_y, uint8* dst_argb, int width);
-void J400ToARGBRow_Any_AVX2(const uint8* src_y, uint8* dst_argb, int width);
-void J400ToARGBRow_Any_NEON(const uint8* src_y, uint8* dst_argb, int width);
+void ARGBToRGB565DitherRow_C(const uint8_t* src_argb,
+                             uint8_t* dst_rgb,
+                             const uint32_t dither4,
+                             int width);
+void ARGBToRGB565DitherRow_SSE2(const uint8_t* src,
+                                uint8_t* dst,
+                                const uint32_t dither4,
+                                int width);
+void ARGBToRGB565DitherRow_AVX2(const uint8_t* src,
+                                uint8_t* dst,
+                                const uint32_t dither4,
+                                int width);
 
-void I444ToARGBRow_C(const uint8* src_y,
-                     const uint8* src_u,
-                     const uint8* src_v,
-                     uint8* dst_argb,
+void ARGBToRGB565Row_AVX2(const uint8_t* src_argb, uint8_t* dst_rgb, int width);
+void ARGBToARGB1555Row_AVX2(const uint8_t* src_argb,
+                            uint8_t* dst_rgb,
+                            int width);
+void ARGBToARGB4444Row_AVX2(const uint8_t* src_argb,
+                            uint8_t* dst_rgb,
+                            int width);
+void ABGRToAR30Row_AVX2(const uint8_t* src, uint8_t* dst, int width);
+void ARGBToAR30Row_AVX2(const uint8_t* src, uint8_t* dst, int width);
+
+void ARGBToRGB24Row_NEON(const uint8_t* src_argb,
+                         uint8_t* dst_rgb24,
+                         int width);
+void ARGBToRAWRow_NEON(const uint8_t* src_argb, uint8_t* dst_raw, int width);
+void ARGBToRGB565Row_NEON(const uint8_t* src_argb,
+                          uint8_t* dst_rgb565,
+                          int width);
+void ARGBToARGB1555Row_NEON(const uint8_t* src_argb,
+                            uint8_t* dst_argb1555,
+                            int width);
+void ARGBToARGB4444Row_NEON(const uint8_t* src_argb,
+                            uint8_t* dst_argb4444,
+                            int width);
+void ARGBToRGB565DitherRow_NEON(const uint8_t* src_argb,
+                                uint8_t* dst_rgb,
+                                const uint32_t dither4,
+                                int width);
+void ARGBToRGB24Row_MSA(const uint8_t* src_argb, uint8_t* dst_rgb, int width);
+void ARGBToRAWRow_MSA(const uint8_t* src_argb, uint8_t* dst_rgb, int width);
+void ARGBToRGB565Row_MSA(const uint8_t* src_argb, uint8_t* dst_rgb, int width);
+void ARGBToARGB1555Row_MSA(const uint8_t* src_argb,
+                           uint8_t* dst_rgb,
+                           int width);
+void ARGBToARGB4444Row_MSA(const uint8_t* src_argb,
+                           uint8_t* dst_rgb,
+                           int width);
+void ARGBToRGB565DitherRow_MSA(const uint8_t* src_argb,
+                               uint8_t* dst_rgb,
+                               const uint32_t dither4,
+                               int width);
+
+void ARGBToRGBARow_C(const uint8_t* src_argb, uint8_t* dst_rgb, int width);
+void ARGBToRGB24Row_C(const uint8_t* src_argb, uint8_t* dst_rgb, int width);
+void ARGBToRAWRow_C(const uint8_t* src_argb, uint8_t* dst_rgb, int width);
+void ARGBToRGB565Row_C(const uint8_t* src_argb, uint8_t* dst_rgb, int width);
+void ARGBToARGB1555Row_C(const uint8_t* src_argb, uint8_t* dst_rgb, int width);
+void ARGBToARGB4444Row_C(const uint8_t* src_argb, uint8_t* dst_rgb, int width);
+void ABGRToAR30Row_C(const uint8_t* src_abgr, uint8_t* dst_ar30, int width);
+void ARGBToAR30Row_C(const uint8_t* src_argb, uint8_t* dst_ar30, int width);
+
+void J400ToARGBRow_SSE2(const uint8_t* src_y, uint8_t* dst_argb, int width);
+void J400ToARGBRow_AVX2(const uint8_t* src_y, uint8_t* dst_argb, int width);
+void J400ToARGBRow_NEON(const uint8_t* src_y, uint8_t* dst_argb, int width);
+void J400ToARGBRow_MSA(const uint8_t* src_y, uint8_t* dst_argb, int width);
+void J400ToARGBRow_C(const uint8_t* src_y, uint8_t* dst_argb, int width);
+void J400ToARGBRow_Any_SSE2(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
+void J400ToARGBRow_Any_AVX2(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
+void J400ToARGBRow_Any_NEON(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
+void J400ToARGBRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+
+void I444ToARGBRow_C(const uint8_t* src_y,
+                     const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* rgb_buf,
                      const struct YuvConstants* yuvconstants,
                      int width);
-void I422ToARGBRow_C(const uint8* src_y,
-                     const uint8* src_u,
-                     const uint8* src_v,
-                     uint8* dst_argb,
+void I422ToARGBRow_C(const uint8_t* src_y,
+                     const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* rgb_buf,
                      const struct YuvConstants* yuvconstants,
                      int width);
-void I422ToARGBRow_C(const uint8* src_y,
-                     const uint8* src_u,
-                     const uint8* src_v,
-                     uint8* dst_argb,
+void I422ToAR30Row_C(const uint8_t* src_y,
+                     const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* rgb_buf,
                      const struct YuvConstants* yuvconstants,
                      int width);
-void I422AlphaToARGBRow_C(const uint8* y_buf,
-                          const uint8* u_buf,
-                          const uint8* v_buf,
-                          const uint8* a_buf,
-                          uint8* dst_argb,
+void I210ToAR30Row_C(const uint16_t* src_y,
+                     const uint16_t* src_u,
+                     const uint16_t* src_v,
+                     uint8_t* rgb_buf,
+                     const struct YuvConstants* yuvconstants,
+                     int width);
+void I210ToARGBRow_C(const uint16_t* src_y,
+                     const uint16_t* src_u,
+                     const uint16_t* src_v,
+                     uint8_t* rgb_buf,
+                     const struct YuvConstants* yuvconstants,
+                     int width);
+void I422AlphaToARGBRow_C(const uint8_t* src_y,
+                          const uint8_t* src_u,
+                          const uint8_t* src_v,
+                          const uint8_t* src_a,
+                          uint8_t* rgb_buf,
                           const struct YuvConstants* yuvconstants,
                           int width);
-void I411ToARGBRow_C(const uint8* src_y,
-                     const uint8* src_u,
-                     const uint8* src_v,
-                     uint8* dst_argb,
+void NV12ToARGBRow_C(const uint8_t* src_y,
+                     const uint8_t* src_uv,
+                     uint8_t* rgb_buf,
                      const struct YuvConstants* yuvconstants,
                      int width);
-void NV12ToARGBRow_C(const uint8* src_y,
-                     const uint8* src_uv,
-                     uint8* dst_argb,
-                     const struct YuvConstants* yuvconstants,
-                     int width);
-void NV12ToRGB565Row_C(const uint8* src_y,
-                       const uint8* src_uv,
-                       uint8* dst_argb,
+void NV12ToRGB565Row_C(const uint8_t* src_y,
+                       const uint8_t* src_uv,
+                       uint8_t* dst_rgb565,
                        const struct YuvConstants* yuvconstants,
                        int width);
-void NV21ToARGBRow_C(const uint8* src_y,
-                     const uint8* src_uv,
-                     uint8* dst_argb,
+void NV21ToARGBRow_C(const uint8_t* src_y,
+                     const uint8_t* src_vu,
+                     uint8_t* rgb_buf,
                      const struct YuvConstants* yuvconstants,
                      int width);
-void YUY2ToARGBRow_C(const uint8* src_yuy2,
-                     uint8* dst_argb,
-                     const struct YuvConstants* yuvconstants,
-                     int width);
-void UYVYToARGBRow_C(const uint8* src_uyvy,
-                     uint8* dst_argb,
-                     const struct YuvConstants* yuvconstants,
-                     int width);
-void I422ToRGBARow_C(const uint8* src_y,
-                     const uint8* src_u,
-                     const uint8* src_v,
-                     uint8* dst_rgba,
-                     const struct YuvConstants* yuvconstants,
-                     int width);
-void I422ToRGB24Row_C(const uint8* src_y,
-                      const uint8* src_u,
-                      const uint8* src_v,
-                      uint8* dst_rgb24,
+void NV12ToRGB24Row_C(const uint8_t* src_y,
+                      const uint8_t* src_uv,
+                      uint8_t* rgb_buf,
                       const struct YuvConstants* yuvconstants,
                       int width);
-void I422ToARGB4444Row_C(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_argb4444,
+void NV21ToRGB24Row_C(const uint8_t* src_y,
+                      const uint8_t* src_vu,
+                      uint8_t* rgb_buf,
+                      const struct YuvConstants* yuvconstants,
+                      int width);
+void YUY2ToARGBRow_C(const uint8_t* src_yuy2,
+                     uint8_t* rgb_buf,
+                     const struct YuvConstants* yuvconstants,
+                     int width);
+void UYVYToARGBRow_C(const uint8_t* src_uyvy,
+                     uint8_t* rgb_buf,
+                     const struct YuvConstants* yuvconstants,
+                     int width);
+void I422ToRGBARow_C(const uint8_t* src_y,
+                     const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* rgb_buf,
+                     const struct YuvConstants* yuvconstants,
+                     int width);
+void I422ToRGB24Row_C(const uint8_t* src_y,
+                      const uint8_t* src_u,
+                      const uint8_t* src_v,
+                      uint8_t* rgb_buf,
+                      const struct YuvConstants* yuvconstants,
+                      int width);
+void I422ToARGB4444Row_C(const uint8_t* src_y,
+                         const uint8_t* src_u,
+                         const uint8_t* src_v,
+                         uint8_t* dst_argb4444,
                          const struct YuvConstants* yuvconstants,
                          int width);
-void I422ToARGB1555Row_C(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_argb4444,
+void I422ToARGB1555Row_C(const uint8_t* src_y,
+                         const uint8_t* src_u,
+                         const uint8_t* src_v,
+                         uint8_t* dst_argb1555,
                          const struct YuvConstants* yuvconstants,
                          int width);
-void I422ToRGB565Row_C(const uint8* src_y,
-                       const uint8* src_u,
-                       const uint8* src_v,
-                       uint8* dst_rgb565,
+void I422ToRGB565Row_C(const uint8_t* src_y,
+                       const uint8_t* src_u,
+                       const uint8_t* src_v,
+                       uint8_t* dst_rgb565,
                        const struct YuvConstants* yuvconstants,
                        int width);
-void I422ToARGBRow_AVX2(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
+void I422ToARGBRow_AVX2(const uint8_t* y_buf,
+                        const uint8_t* u_buf,
+                        const uint8_t* v_buf,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width);
-void I422ToARGBRow_AVX2(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
+void I422ToRGBARow_AVX2(const uint8_t* y_buf,
+                        const uint8_t* u_buf,
+                        const uint8_t* v_buf,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width);
-void I422ToRGBARow_AVX2(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
-                        const struct YuvConstants* yuvconstants,
-                        int width);
-void I444ToARGBRow_SSSE3(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_argb,
+void I444ToARGBRow_SSSE3(const uint8_t* y_buf,
+                         const uint8_t* u_buf,
+                         const uint8_t* v_buf,
+                         uint8_t* dst_argb,
                          const struct YuvConstants* yuvconstants,
                          int width);
-void I444ToARGBRow_AVX2(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
+void I444ToARGBRow_AVX2(const uint8_t* y_buf,
+                        const uint8_t* u_buf,
+                        const uint8_t* v_buf,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width);
-void I444ToARGBRow_SSSE3(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_argb,
+void I444ToARGBRow_SSSE3(const uint8_t* y_buf,
+                         const uint8_t* u_buf,
+                         const uint8_t* v_buf,
+                         uint8_t* dst_argb,
                          const struct YuvConstants* yuvconstants,
                          int width);
-void I444ToARGBRow_AVX2(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
+void I444ToARGBRow_AVX2(const uint8_t* y_buf,
+                        const uint8_t* u_buf,
+                        const uint8_t* v_buf,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width);
-void I422ToARGBRow_SSSE3(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_argb,
+void I422ToARGBRow_SSSE3(const uint8_t* y_buf,
+                         const uint8_t* u_buf,
+                         const uint8_t* v_buf,
+                         uint8_t* dst_argb,
                          const struct YuvConstants* yuvconstants,
                          int width);
-void I422AlphaToARGBRow_SSSE3(const uint8* y_buf,
-                              const uint8* u_buf,
-                              const uint8* v_buf,
-                              const uint8* a_buf,
-                              uint8* dst_argb,
+
+void I422ToAR30Row_SSSE3(const uint8_t* y_buf,
+                         const uint8_t* u_buf,
+                         const uint8_t* v_buf,
+                         uint8_t* dst_ar30,
+                         const struct YuvConstants* yuvconstants,
+                         int width);
+void I210ToAR30Row_SSSE3(const uint16_t* y_buf,
+                         const uint16_t* u_buf,
+                         const uint16_t* v_buf,
+                         uint8_t* dst_ar30,
+                         const struct YuvConstants* yuvconstants,
+                         int width);
+void I210ToARGBRow_SSSE3(const uint16_t* y_buf,
+                         const uint16_t* u_buf,
+                         const uint16_t* v_buf,
+                         uint8_t* dst_argb,
+                         const struct YuvConstants* yuvconstants,
+                         int width);
+void I422ToAR30Row_AVX2(const uint8_t* y_buf,
+                        const uint8_t* u_buf,
+                        const uint8_t* v_buf,
+                        uint8_t* dst_ar30,
+                        const struct YuvConstants* yuvconstants,
+                        int width);
+void I210ToARGBRow_AVX2(const uint16_t* y_buf,
+                        const uint16_t* u_buf,
+                        const uint16_t* v_buf,
+                        uint8_t* dst_argb,
+                        const struct YuvConstants* yuvconstants,
+                        int width);
+void I210ToAR30Row_AVX2(const uint16_t* y_buf,
+                        const uint16_t* u_buf,
+                        const uint16_t* v_buf,
+                        uint8_t* dst_ar30,
+                        const struct YuvConstants* yuvconstants,
+                        int width);
+void I422AlphaToARGBRow_SSSE3(const uint8_t* y_buf,
+                              const uint8_t* u_buf,
+                              const uint8_t* v_buf,
+                              const uint8_t* a_buf,
+                              uint8_t* dst_argb,
                               const struct YuvConstants* yuvconstants,
                               int width);
-void I422AlphaToARGBRow_AVX2(const uint8* y_buf,
-                             const uint8* u_buf,
-                             const uint8* v_buf,
-                             const uint8* a_buf,
-                             uint8* dst_argb,
+void I422AlphaToARGBRow_AVX2(const uint8_t* y_buf,
+                             const uint8_t* u_buf,
+                             const uint8_t* v_buf,
+                             const uint8_t* a_buf,
+                             uint8_t* dst_argb,
                              const struct YuvConstants* yuvconstants,
                              int width);
-void I422ToARGBRow_SSSE3(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_argb,
+void NV12ToARGBRow_SSSE3(const uint8_t* y_buf,
+                         const uint8_t* uv_buf,
+                         uint8_t* dst_argb,
                          const struct YuvConstants* yuvconstants,
                          int width);
-void I411ToARGBRow_SSSE3(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_argb,
-                         const struct YuvConstants* yuvconstants,
-                         int width);
-void I411ToARGBRow_AVX2(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
+void NV12ToARGBRow_AVX2(const uint8_t* y_buf,
+                        const uint8_t* uv_buf,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width);
-void NV12ToARGBRow_SSSE3(const uint8* src_y,
-                         const uint8* src_uv,
-                         uint8* dst_argb,
-                         const struct YuvConstants* yuvconstants,
-                         int width);
-void NV12ToARGBRow_AVX2(const uint8* src_y,
-                        const uint8* src_uv,
-                        uint8* dst_argb,
-                        const struct YuvConstants* yuvconstants,
-                        int width);
-void NV12ToRGB565Row_SSSE3(const uint8* src_y,
-                           const uint8* src_uv,
-                           uint8* dst_argb,
+void NV12ToRGB24Row_SSSE3(const uint8_t* src_y,
+                          const uint8_t* src_uv,
+                          uint8_t* dst_rgb24,
+                          const struct YuvConstants* yuvconstants,
+                          int width);
+void NV21ToRGB24Row_SSSE3(const uint8_t* src_y,
+                          const uint8_t* src_vu,
+                          uint8_t* dst_rgb24,
+                          const struct YuvConstants* yuvconstants,
+                          int width);
+void NV12ToRGB565Row_SSSE3(const uint8_t* src_y,
+                           const uint8_t* src_uv,
+                           uint8_t* dst_rgb565,
                            const struct YuvConstants* yuvconstants,
                            int width);
-void NV12ToRGB565Row_AVX2(const uint8* src_y,
-                          const uint8* src_uv,
-                          uint8* dst_argb,
+void NV12ToRGB24Row_AVX2(const uint8_t* src_y,
+                         const uint8_t* src_uv,
+                         uint8_t* dst_rgb24,
+                         const struct YuvConstants* yuvconstants,
+                         int width);
+void NV21ToRGB24Row_AVX2(const uint8_t* src_y,
+                         const uint8_t* src_vu,
+                         uint8_t* dst_rgb24,
+                         const struct YuvConstants* yuvconstants,
+                         int width);
+void NV12ToRGB565Row_AVX2(const uint8_t* src_y,
+                          const uint8_t* src_uv,
+                          uint8_t* dst_rgb565,
                           const struct YuvConstants* yuvconstants,
                           int width);
-void NV21ToARGBRow_SSSE3(const uint8* src_y,
-                         const uint8* src_uv,
-                         uint8* dst_argb,
+void NV21ToARGBRow_SSSE3(const uint8_t* y_buf,
+                         const uint8_t* vu_buf,
+                         uint8_t* dst_argb,
                          const struct YuvConstants* yuvconstants,
                          int width);
-void NV21ToARGBRow_AVX2(const uint8* src_y,
-                        const uint8* src_uv,
-                        uint8* dst_argb,
+void NV21ToARGBRow_AVX2(const uint8_t* y_buf,
+                        const uint8_t* vu_buf,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width);
-void YUY2ToARGBRow_SSSE3(const uint8* src_yuy2,
-                         uint8* dst_argb,
+void YUY2ToARGBRow_SSSE3(const uint8_t* yuy2_buf,
+                         uint8_t* dst_argb,
                          const struct YuvConstants* yuvconstants,
                          int width);
-void UYVYToARGBRow_SSSE3(const uint8* src_uyvy,
-                         uint8* dst_argb,
+void UYVYToARGBRow_SSSE3(const uint8_t* uyvy_buf,
+                         uint8_t* dst_argb,
                          const struct YuvConstants* yuvconstants,
                          int width);
-void YUY2ToARGBRow_AVX2(const uint8* src_yuy2,
-                        uint8* dst_argb,
+void YUY2ToARGBRow_AVX2(const uint8_t* yuy2_buf,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width);
-void UYVYToARGBRow_AVX2(const uint8* src_uyvy,
-                        uint8* dst_argb,
+void UYVYToARGBRow_AVX2(const uint8_t* uyvy_buf,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width);
-void I422ToRGBARow_SSSE3(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_rgba,
+void I422ToRGBARow_SSSE3(const uint8_t* y_buf,
+                         const uint8_t* u_buf,
+                         const uint8_t* v_buf,
+                         uint8_t* dst_rgba,
                          const struct YuvConstants* yuvconstants,
                          int width);
-void I422ToARGB4444Row_SSSE3(const uint8* src_y,
-                             const uint8* src_u,
-                             const uint8* src_v,
-                             uint8* dst_argb,
+void I422ToARGB4444Row_SSSE3(const uint8_t* src_y,
+                             const uint8_t* src_u,
+                             const uint8_t* src_v,
+                             uint8_t* dst_argb4444,
                              const struct YuvConstants* yuvconstants,
                              int width);
-void I422ToARGB4444Row_AVX2(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb,
+void I422ToARGB4444Row_AVX2(const uint8_t* src_y,
+                            const uint8_t* src_u,
+                            const uint8_t* src_v,
+                            uint8_t* dst_argb4444,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void I422ToARGB1555Row_SSSE3(const uint8* src_y,
-                             const uint8* src_u,
-                             const uint8* src_v,
-                             uint8* dst_argb,
+void I422ToARGB1555Row_SSSE3(const uint8_t* src_y,
+                             const uint8_t* src_u,
+                             const uint8_t* src_v,
+                             uint8_t* dst_argb1555,
                              const struct YuvConstants* yuvconstants,
                              int width);
-void I422ToARGB1555Row_AVX2(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb,
+void I422ToARGB1555Row_AVX2(const uint8_t* src_y,
+                            const uint8_t* src_u,
+                            const uint8_t* src_v,
+                            uint8_t* dst_argb1555,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void I422ToRGB565Row_SSSE3(const uint8* src_y,
-                           const uint8* src_u,
-                           const uint8* src_v,
-                           uint8* dst_argb,
+void I422ToRGB565Row_SSSE3(const uint8_t* src_y,
+                           const uint8_t* src_u,
+                           const uint8_t* src_v,
+                           uint8_t* dst_rgb565,
                            const struct YuvConstants* yuvconstants,
                            int width);
-void I422ToRGB565Row_AVX2(const uint8* src_y,
-                          const uint8* src_u,
-                          const uint8* src_v,
-                          uint8* dst_argb,
+void I422ToRGB565Row_AVX2(const uint8_t* src_y,
+                          const uint8_t* src_u,
+                          const uint8_t* src_v,
+                          uint8_t* dst_rgb565,
                           const struct YuvConstants* yuvconstants,
                           int width);
-void I422ToRGB24Row_SSSE3(const uint8* src_y,
-                          const uint8* src_u,
-                          const uint8* src_v,
-                          uint8* dst_rgb24,
+void I422ToRGB24Row_SSSE3(const uint8_t* y_buf,
+                          const uint8_t* u_buf,
+                          const uint8_t* v_buf,
+                          uint8_t* dst_rgb24,
                           const struct YuvConstants* yuvconstants,
                           int width);
-void I422ToRGB24Row_AVX2(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_rgb24,
+void I422ToRGB24Row_AVX2(const uint8_t* src_y,
+                         const uint8_t* src_u,
+                         const uint8_t* src_v,
+                         uint8_t* dst_rgb24,
                          const struct YuvConstants* yuvconstants,
                          int width);
-void I422ToARGBRow_Any_AVX2(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb,
+void I422ToARGBRow_Any_AVX2(const uint8_t* y_buf,
+                            const uint8_t* u_buf,
+                            const uint8_t* v_buf,
+                            uint8_t* dst_ptr,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void I422ToRGBARow_Any_AVX2(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb,
+void I422ToRGBARow_Any_AVX2(const uint8_t* y_buf,
+                            const uint8_t* u_buf,
+                            const uint8_t* v_buf,
+                            uint8_t* dst_ptr,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void I444ToARGBRow_Any_SSSE3(const uint8* src_y,
-                             const uint8* src_u,
-                             const uint8* src_v,
-                             uint8* dst_argb,
+void I444ToARGBRow_Any_SSSE3(const uint8_t* y_buf,
+                             const uint8_t* u_buf,
+                             const uint8_t* v_buf,
+                             uint8_t* dst_ptr,
                              const struct YuvConstants* yuvconstants,
                              int width);
-void I444ToARGBRow_Any_AVX2(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb,
+void I444ToARGBRow_Any_AVX2(const uint8_t* y_buf,
+                            const uint8_t* u_buf,
+                            const uint8_t* v_buf,
+                            uint8_t* dst_ptr,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void I422ToARGBRow_Any_SSSE3(const uint8* src_y,
-                             const uint8* src_u,
-                             const uint8* src_v,
-                             uint8* dst_argb,
+void I422ToARGBRow_Any_SSSE3(const uint8_t* y_buf,
+                             const uint8_t* u_buf,
+                             const uint8_t* v_buf,
+                             uint8_t* dst_ptr,
                              const struct YuvConstants* yuvconstants,
                              int width);
-void I422AlphaToARGBRow_Any_SSSE3(const uint8* y_buf,
-                                  const uint8* u_buf,
-                                  const uint8* v_buf,
-                                  const uint8* a_buf,
-                                  uint8* dst_argb,
+void I422ToAR30Row_Any_SSSE3(const uint8_t* y_buf,
+                             const uint8_t* u_buf,
+                             const uint8_t* v_buf,
+                             uint8_t* dst_ptr,
+                             const struct YuvConstants* yuvconstants,
+                             int width);
+void I210ToAR30Row_Any_SSSE3(const uint16_t* y_buf,
+                             const uint16_t* u_buf,
+                             const uint16_t* v_buf,
+                             uint8_t* dst_ptr,
+                             const struct YuvConstants* yuvconstants,
+                             int width);
+void I210ToARGBRow_Any_SSSE3(const uint16_t* y_buf,
+                             const uint16_t* u_buf,
+                             const uint16_t* v_buf,
+                             uint8_t* dst_ptr,
+                             const struct YuvConstants* yuvconstants,
+                             int width);
+void I422ToAR30Row_Any_AVX2(const uint8_t* y_buf,
+                            const uint8_t* u_buf,
+                            const uint8_t* v_buf,
+                            uint8_t* dst_ptr,
+                            const struct YuvConstants* yuvconstants,
+                            int width);
+void I210ToARGBRow_Any_AVX2(const uint16_t* y_buf,
+                            const uint16_t* u_buf,
+                            const uint16_t* v_buf,
+                            uint8_t* dst_ptr,
+                            const struct YuvConstants* yuvconstants,
+                            int width);
+void I210ToAR30Row_Any_AVX2(const uint16_t* y_buf,
+                            const uint16_t* u_buf,
+                            const uint16_t* v_buf,
+                            uint8_t* dst_ptr,
+                            const struct YuvConstants* yuvconstants,
+                            int width);
+void I422AlphaToARGBRow_Any_SSSE3(const uint8_t* y_buf,
+                                  const uint8_t* u_buf,
+                                  const uint8_t* v_buf,
+                                  const uint8_t* a_buf,
+                                  uint8_t* dst_ptr,
                                   const struct YuvConstants* yuvconstants,
                                   int width);
-void I422AlphaToARGBRow_Any_AVX2(const uint8* y_buf,
-                                 const uint8* u_buf,
-                                 const uint8* v_buf,
-                                 const uint8* a_buf,
-                                 uint8* dst_argb,
+void I422AlphaToARGBRow_Any_AVX2(const uint8_t* y_buf,
+                                 const uint8_t* u_buf,
+                                 const uint8_t* v_buf,
+                                 const uint8_t* a_buf,
+                                 uint8_t* dst_ptr,
                                  const struct YuvConstants* yuvconstants,
                                  int width);
-void I411ToARGBRow_Any_SSSE3(const uint8* src_y,
-                             const uint8* src_u,
-                             const uint8* src_v,
-                             uint8* dst_argb,
+void NV12ToARGBRow_Any_SSSE3(const uint8_t* y_buf,
+                             const uint8_t* uv_buf,
+                             uint8_t* dst_ptr,
                              const struct YuvConstants* yuvconstants,
                              int width);
-void I411ToARGBRow_Any_AVX2(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb,
+void NV12ToARGBRow_Any_AVX2(const uint8_t* y_buf,
+                            const uint8_t* uv_buf,
+                            uint8_t* dst_ptr,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void NV12ToARGBRow_Any_SSSE3(const uint8* src_y,
-                             const uint8* src_uv,
-                             uint8* dst_argb,
+void NV21ToARGBRow_Any_SSSE3(const uint8_t* y_buf,
+                             const uint8_t* uv_buf,
+                             uint8_t* dst_ptr,
                              const struct YuvConstants* yuvconstants,
                              int width);
-void NV12ToARGBRow_Any_AVX2(const uint8* src_y,
-                            const uint8* src_uv,
-                            uint8* dst_argb,
+void NV21ToARGBRow_Any_AVX2(const uint8_t* y_buf,
+                            const uint8_t* uv_buf,
+                            uint8_t* dst_ptr,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void NV21ToARGBRow_Any_SSSE3(const uint8* src_y,
-                             const uint8* src_vu,
-                             uint8* dst_argb,
+void NV12ToRGB24Row_Any_SSSE3(const uint8_t* y_buf,
+                              const uint8_t* uv_buf,
+                              uint8_t* dst_ptr,
+                              const struct YuvConstants* yuvconstants,
+                              int width);
+void NV21ToRGB24Row_Any_SSSE3(const uint8_t* y_buf,
+                              const uint8_t* uv_buf,
+                              uint8_t* dst_ptr,
+                              const struct YuvConstants* yuvconstants,
+                              int width);
+void NV12ToRGB24Row_Any_AVX2(const uint8_t* y_buf,
+                             const uint8_t* uv_buf,
+                             uint8_t* dst_ptr,
                              const struct YuvConstants* yuvconstants,
                              int width);
-void NV21ToARGBRow_Any_AVX2(const uint8* src_y,
-                            const uint8* src_vu,
-                            uint8* dst_argb,
-                            const struct YuvConstants* yuvconstants,
-                            int width);
-void NV12ToRGB565Row_Any_SSSE3(const uint8* src_y,
-                               const uint8* src_uv,
-                               uint8* dst_argb,
+void NV21ToRGB24Row_Any_AVX2(const uint8_t* y_buf,
+                             const uint8_t* uv_buf,
+                             uint8_t* dst_ptr,
+                             const struct YuvConstants* yuvconstants,
+                             int width);
+void NV12ToRGB565Row_Any_SSSE3(const uint8_t* y_buf,
+                               const uint8_t* uv_buf,
+                               uint8_t* dst_ptr,
                                const struct YuvConstants* yuvconstants,
                                int width);
-void NV12ToRGB565Row_Any_AVX2(const uint8* src_y,
-                              const uint8* src_uv,
-                              uint8* dst_argb,
+void NV12ToRGB565Row_Any_AVX2(const uint8_t* y_buf,
+                              const uint8_t* uv_buf,
+                              uint8_t* dst_ptr,
                               const struct YuvConstants* yuvconstants,
                               int width);
-void YUY2ToARGBRow_Any_SSSE3(const uint8* src_yuy2,
-                             uint8* dst_argb,
+void YUY2ToARGBRow_Any_SSSE3(const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
                              const struct YuvConstants* yuvconstants,
                              int width);
-void UYVYToARGBRow_Any_SSSE3(const uint8* src_uyvy,
-                             uint8* dst_argb,
+void UYVYToARGBRow_Any_SSSE3(const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
                              const struct YuvConstants* yuvconstants,
                              int width);
-void YUY2ToARGBRow_Any_AVX2(const uint8* src_yuy2,
-                            uint8* dst_argb,
+void YUY2ToARGBRow_Any_AVX2(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void UYVYToARGBRow_Any_AVX2(const uint8* src_uyvy,
-                            uint8* dst_argb,
+void UYVYToARGBRow_Any_AVX2(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void I422ToRGBARow_Any_SSSE3(const uint8* src_y,
-                             const uint8* src_u,
-                             const uint8* src_v,
-                             uint8* dst_rgba,
+void I422ToRGBARow_Any_SSSE3(const uint8_t* y_buf,
+                             const uint8_t* u_buf,
+                             const uint8_t* v_buf,
+                             uint8_t* dst_ptr,
                              const struct YuvConstants* yuvconstants,
                              int width);
-void I422ToARGB4444Row_Any_SSSE3(const uint8* src_y,
-                                 const uint8* src_u,
-                                 const uint8* src_v,
-                                 uint8* dst_rgba,
+void I422ToARGB4444Row_Any_SSSE3(const uint8_t* y_buf,
+                                 const uint8_t* u_buf,
+                                 const uint8_t* v_buf,
+                                 uint8_t* dst_ptr,
                                  const struct YuvConstants* yuvconstants,
                                  int width);
-void I422ToARGB4444Row_Any_AVX2(const uint8* src_y,
-                                const uint8* src_u,
-                                const uint8* src_v,
-                                uint8* dst_rgba,
+void I422ToARGB4444Row_Any_AVX2(const uint8_t* y_buf,
+                                const uint8_t* u_buf,
+                                const uint8_t* v_buf,
+                                uint8_t* dst_ptr,
                                 const struct YuvConstants* yuvconstants,
                                 int width);
-void I422ToARGB1555Row_Any_SSSE3(const uint8* src_y,
-                                 const uint8* src_u,
-                                 const uint8* src_v,
-                                 uint8* dst_rgba,
+void I422ToARGB1555Row_Any_SSSE3(const uint8_t* y_buf,
+                                 const uint8_t* u_buf,
+                                 const uint8_t* v_buf,
+                                 uint8_t* dst_ptr,
                                  const struct YuvConstants* yuvconstants,
                                  int width);
-void I422ToARGB1555Row_Any_AVX2(const uint8* src_y,
-                                const uint8* src_u,
-                                const uint8* src_v,
-                                uint8* dst_rgba,
+void I422ToARGB1555Row_Any_AVX2(const uint8_t* y_buf,
+                                const uint8_t* u_buf,
+                                const uint8_t* v_buf,
+                                uint8_t* dst_ptr,
                                 const struct YuvConstants* yuvconstants,
                                 int width);
-void I422ToRGB565Row_Any_SSSE3(const uint8* src_y,
-                               const uint8* src_u,
-                               const uint8* src_v,
-                               uint8* dst_rgba,
+void I422ToRGB565Row_Any_SSSE3(const uint8_t* y_buf,
+                               const uint8_t* u_buf,
+                               const uint8_t* v_buf,
+                               uint8_t* dst_ptr,
                                const struct YuvConstants* yuvconstants,
                                int width);
-void I422ToRGB565Row_Any_AVX2(const uint8* src_y,
-                              const uint8* src_u,
-                              const uint8* src_v,
-                              uint8* dst_rgba,
+void I422ToRGB565Row_Any_AVX2(const uint8_t* y_buf,
+                              const uint8_t* u_buf,
+                              const uint8_t* v_buf,
+                              uint8_t* dst_ptr,
                               const struct YuvConstants* yuvconstants,
                               int width);
-void I422ToRGB24Row_Any_SSSE3(const uint8* src_y,
-                              const uint8* src_u,
-                              const uint8* src_v,
-                              uint8* dst_argb,
+void I422ToRGB24Row_Any_SSSE3(const uint8_t* y_buf,
+                              const uint8_t* u_buf,
+                              const uint8_t* v_buf,
+                              uint8_t* dst_ptr,
                               const struct YuvConstants* yuvconstants,
                               int width);
-void I422ToRGB24Row_Any_AVX2(const uint8* src_y,
-                             const uint8* src_u,
-                             const uint8* src_v,
-                             uint8* dst_argb,
+void I422ToRGB24Row_Any_AVX2(const uint8_t* y_buf,
+                             const uint8_t* u_buf,
+                             const uint8_t* v_buf,
+                             uint8_t* dst_ptr,
                              const struct YuvConstants* yuvconstants,
                              int width);
 
-void I400ToARGBRow_C(const uint8* src_y, uint8* dst_argb, int width);
-void I400ToARGBRow_SSE2(const uint8* src_y, uint8* dst_argb, int width);
-void I400ToARGBRow_AVX2(const uint8* src_y, uint8* dst_argb, int width);
-void I400ToARGBRow_NEON(const uint8* src_y, uint8* dst_argb, int width);
-void I400ToARGBRow_Any_SSE2(const uint8* src_y, uint8* dst_argb, int width);
-void I400ToARGBRow_Any_AVX2(const uint8* src_y, uint8* dst_argb, int width);
-void I400ToARGBRow_Any_NEON(const uint8* src_y, uint8* dst_argb, int width);
+void I400ToARGBRow_C(const uint8_t* src_y, uint8_t* rgb_buf, int width);
+void I400ToARGBRow_SSE2(const uint8_t* y_buf, uint8_t* dst_argb, int width);
+void I400ToARGBRow_AVX2(const uint8_t* y_buf, uint8_t* dst_argb, int width);
+void I400ToARGBRow_NEON(const uint8_t* src_y, uint8_t* dst_argb, int width);
+void I400ToARGBRow_MSA(const uint8_t* src_y, uint8_t* dst_argb, int width);
+void I400ToARGBRow_Any_SSE2(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
+void I400ToARGBRow_Any_AVX2(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
+void I400ToARGBRow_Any_NEON(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
+void I400ToARGBRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
 
 // ARGB preattenuated alpha blend.
-void ARGBBlendRow_SSSE3(const uint8* src_argb, const uint8* src_argb1,
-                        uint8* dst_argb, int width);
-void ARGBBlendRow_NEON(const uint8* src_argb, const uint8* src_argb1,
-                       uint8* dst_argb, int width);
-void ARGBBlendRow_C(const uint8* src_argb, const uint8* src_argb1,
-                    uint8* dst_argb, int width);
+void ARGBBlendRow_SSSE3(const uint8_t* src_argb0,
+                        const uint8_t* src_argb1,
+                        uint8_t* dst_argb,
+                        int width);
+void ARGBBlendRow_NEON(const uint8_t* src_argb0,
+                       const uint8_t* src_argb1,
+                       uint8_t* dst_argb,
+                       int width);
+void ARGBBlendRow_MSA(const uint8_t* src_argb0,
+                      const uint8_t* src_argb1,
+                      uint8_t* dst_argb,
+                      int width);
+void ARGBBlendRow_C(const uint8_t* src_argb0,
+                    const uint8_t* src_argb1,
+                    uint8_t* dst_argb,
+                    int width);
 
 // Unattenuated planar alpha blend.
-void BlendPlaneRow_SSSE3(const uint8* src0, const uint8* src1,
-                         const uint8* alpha, uint8* dst, int width);
-void BlendPlaneRow_Any_SSSE3(const uint8* src0, const uint8* src1,
-                             const uint8* alpha, uint8* dst, int width);
-void BlendPlaneRow_AVX2(const uint8* src0, const uint8* src1,
-                        const uint8* alpha, uint8* dst, int width);
-void BlendPlaneRow_Any_AVX2(const uint8* src0, const uint8* src1,
-                            const uint8* alpha, uint8* dst, int width);
-void BlendPlaneRow_C(const uint8* src0, const uint8* src1,
-                     const uint8* alpha, uint8* dst, int width);
+void BlendPlaneRow_SSSE3(const uint8_t* src0,
+                         const uint8_t* src1,
+                         const uint8_t* alpha,
+                         uint8_t* dst,
+                         int width);
+void BlendPlaneRow_Any_SSSE3(const uint8_t* y_buf,
+                             const uint8_t* u_buf,
+                             const uint8_t* v_buf,
+                             uint8_t* dst_ptr,
+                             int width);
+void BlendPlaneRow_AVX2(const uint8_t* src0,
+                        const uint8_t* src1,
+                        const uint8_t* alpha,
+                        uint8_t* dst,
+                        int width);
+void BlendPlaneRow_Any_AVX2(const uint8_t* y_buf,
+                            const uint8_t* u_buf,
+                            const uint8_t* v_buf,
+                            uint8_t* dst_ptr,
+                            int width);
+void BlendPlaneRow_C(const uint8_t* src0,
+                     const uint8_t* src1,
+                     const uint8_t* alpha,
+                     uint8_t* dst,
+                     int width);
 
 // ARGB multiply images. Same API as Blend, but these require
 // pointer and width alignment for SSE2.
-void ARGBMultiplyRow_C(const uint8* src_argb, const uint8* src_argb1,
-                       uint8* dst_argb, int width);
-void ARGBMultiplyRow_SSE2(const uint8* src_argb, const uint8* src_argb1,
-                          uint8* dst_argb, int width);
-void ARGBMultiplyRow_Any_SSE2(const uint8* src_argb, const uint8* src_argb1,
-                              uint8* dst_argb, int width);
-void ARGBMultiplyRow_AVX2(const uint8* src_argb, const uint8* src_argb1,
-                          uint8* dst_argb, int width);
-void ARGBMultiplyRow_Any_AVX2(const uint8* src_argb, const uint8* src_argb1,
-                              uint8* dst_argb, int width);
-void ARGBMultiplyRow_NEON(const uint8* src_argb, const uint8* src_argb1,
-                          uint8* dst_argb, int width);
-void ARGBMultiplyRow_Any_NEON(const uint8* src_argb, const uint8* src_argb1,
-                              uint8* dst_argb, int width);
+void ARGBMultiplyRow_C(const uint8_t* src_argb0,
+                       const uint8_t* src_argb1,
+                       uint8_t* dst_argb,
+                       int width);
+void ARGBMultiplyRow_SSE2(const uint8_t* src_argb0,
+                          const uint8_t* src_argb1,
+                          uint8_t* dst_argb,
+                          int width);
+void ARGBMultiplyRow_Any_SSE2(const uint8_t* y_buf,
+                              const uint8_t* uv_buf,
+                              uint8_t* dst_ptr,
+                              int width);
+void ARGBMultiplyRow_AVX2(const uint8_t* src_argb0,
+                          const uint8_t* src_argb1,
+                          uint8_t* dst_argb,
+                          int width);
+void ARGBMultiplyRow_Any_AVX2(const uint8_t* y_buf,
+                              const uint8_t* uv_buf,
+                              uint8_t* dst_ptr,
+                              int width);
+void ARGBMultiplyRow_NEON(const uint8_t* src_argb0,
+                          const uint8_t* src_argb1,
+                          uint8_t* dst_argb,
+                          int width);
+void ARGBMultiplyRow_Any_NEON(const uint8_t* y_buf,
+                              const uint8_t* uv_buf,
+                              uint8_t* dst_ptr,
+                              int width);
+void ARGBMultiplyRow_MSA(const uint8_t* src_argb0,
+                         const uint8_t* src_argb1,
+                         uint8_t* dst_argb,
+                         int width);
+void ARGBMultiplyRow_Any_MSA(const uint8_t* y_buf,
+                             const uint8_t* uv_buf,
+                             uint8_t* dst_ptr,
+                             int width);
 
 // ARGB add images.
-void ARGBAddRow_C(const uint8* src_argb, const uint8* src_argb1,
-                  uint8* dst_argb, int width);
-void ARGBAddRow_SSE2(const uint8* src_argb, const uint8* src_argb1,
-                     uint8* dst_argb, int width);
-void ARGBAddRow_Any_SSE2(const uint8* src_argb, const uint8* src_argb1,
-                         uint8* dst_argb, int width);
-void ARGBAddRow_AVX2(const uint8* src_argb, const uint8* src_argb1,
-                     uint8* dst_argb, int width);
-void ARGBAddRow_Any_AVX2(const uint8* src_argb, const uint8* src_argb1,
-                         uint8* dst_argb, int width);
-void ARGBAddRow_NEON(const uint8* src_argb, const uint8* src_argb1,
-                     uint8* dst_argb, int width);
-void ARGBAddRow_Any_NEON(const uint8* src_argb, const uint8* src_argb1,
-                         uint8* dst_argb, int width);
+void ARGBAddRow_C(const uint8_t* src_argb0,
+                  const uint8_t* src_argb1,
+                  uint8_t* dst_argb,
+                  int width);
+void ARGBAddRow_SSE2(const uint8_t* src_argb0,
+                     const uint8_t* src_argb1,
+                     uint8_t* dst_argb,
+                     int width);
+void ARGBAddRow_Any_SSE2(const uint8_t* y_buf,
+                         const uint8_t* uv_buf,
+                         uint8_t* dst_ptr,
+                         int width);
+void ARGBAddRow_AVX2(const uint8_t* src_argb0,
+                     const uint8_t* src_argb1,
+                     uint8_t* dst_argb,
+                     int width);
+void ARGBAddRow_Any_AVX2(const uint8_t* y_buf,
+                         const uint8_t* uv_buf,
+                         uint8_t* dst_ptr,
+                         int width);
+void ARGBAddRow_NEON(const uint8_t* src_argb0,
+                     const uint8_t* src_argb1,
+                     uint8_t* dst_argb,
+                     int width);
+void ARGBAddRow_Any_NEON(const uint8_t* y_buf,
+                         const uint8_t* uv_buf,
+                         uint8_t* dst_ptr,
+                         int width);
+void ARGBAddRow_MSA(const uint8_t* src_argb0,
+                    const uint8_t* src_argb1,
+                    uint8_t* dst_argb,
+                    int width);
+void ARGBAddRow_Any_MSA(const uint8_t* y_buf,
+                        const uint8_t* uv_buf,
+                        uint8_t* dst_ptr,
+                        int width);
 
 // ARGB subtract images. Same API as Blend, but these require
 // pointer and width alignment for SSE2.
-void ARGBSubtractRow_C(const uint8* src_argb, const uint8* src_argb1,
-                       uint8* dst_argb, int width);
-void ARGBSubtractRow_SSE2(const uint8* src_argb, const uint8* src_argb1,
-                          uint8* dst_argb, int width);
-void ARGBSubtractRow_Any_SSE2(const uint8* src_argb, const uint8* src_argb1,
-                              uint8* dst_argb, int width);
-void ARGBSubtractRow_AVX2(const uint8* src_argb, const uint8* src_argb1,
-                          uint8* dst_argb, int width);
-void ARGBSubtractRow_Any_AVX2(const uint8* src_argb, const uint8* src_argb1,
-                              uint8* dst_argb, int width);
-void ARGBSubtractRow_NEON(const uint8* src_argb, const uint8* src_argb1,
-                          uint8* dst_argb, int width);
-void ARGBSubtractRow_Any_NEON(const uint8* src_argb, const uint8* src_argb1,
-                              uint8* dst_argb, int width);
+void ARGBSubtractRow_C(const uint8_t* src_argb0,
+                       const uint8_t* src_argb1,
+                       uint8_t* dst_argb,
+                       int width);
+void ARGBSubtractRow_SSE2(const uint8_t* src_argb0,
+                          const uint8_t* src_argb1,
+                          uint8_t* dst_argb,
+                          int width);
+void ARGBSubtractRow_Any_SSE2(const uint8_t* y_buf,
+                              const uint8_t* uv_buf,
+                              uint8_t* dst_ptr,
+                              int width);
+void ARGBSubtractRow_AVX2(const uint8_t* src_argb0,
+                          const uint8_t* src_argb1,
+                          uint8_t* dst_argb,
+                          int width);
+void ARGBSubtractRow_Any_AVX2(const uint8_t* y_buf,
+                              const uint8_t* uv_buf,
+                              uint8_t* dst_ptr,
+                              int width);
+void ARGBSubtractRow_NEON(const uint8_t* src_argb0,
+                          const uint8_t* src_argb1,
+                          uint8_t* dst_argb,
+                          int width);
+void ARGBSubtractRow_Any_NEON(const uint8_t* y_buf,
+                              const uint8_t* uv_buf,
+                              uint8_t* dst_ptr,
+                              int width);
+void ARGBSubtractRow_MSA(const uint8_t* src_argb0,
+                         const uint8_t* src_argb1,
+                         uint8_t* dst_argb,
+                         int width);
+void ARGBSubtractRow_Any_MSA(const uint8_t* y_buf,
+                             const uint8_t* uv_buf,
+                             uint8_t* dst_ptr,
+                             int width);
 
-void ARGBToRGB24Row_Any_SSSE3(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToRAWRow_Any_SSSE3(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToRGB565Row_Any_SSE2(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToARGB1555Row_Any_SSE2(const uint8* src_argb, uint8* dst_rgb,
+void ARGBToRGB24Row_Any_SSSE3(const uint8_t* src_ptr,
+                              uint8_t* dst_ptr,
+                              int width);
+void ARGBToRAWRow_Any_SSSE3(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
+void ARGBToRGB565Row_Any_SSE2(const uint8_t* src_ptr,
+                              uint8_t* dst_ptr,
+                              int width);
+void ARGBToARGB1555Row_Any_SSE2(const uint8_t* src_ptr,
+                                uint8_t* dst_ptr,
                                 int width);
-void ARGBToARGB4444Row_Any_SSE2(const uint8* src_argb, uint8* dst_rgb,
+void ARGBToARGB4444Row_Any_SSE2(const uint8_t* src_ptr,
+                                uint8_t* dst_ptr,
                                 int width);
+void ABGRToAR30Row_Any_SSSE3(const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
+                             int width);
+void ARGBToAR30Row_Any_SSSE3(const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
+                             int width);
+void ARGBToRAWRow_Any_AVX2(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void ARGBToRGB24Row_Any_AVX2(const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
+                             int width);
+void ARGBToRGB24Row_Any_AVX512VBMI(const uint8_t* src_ptr,
+                                   uint8_t* dst_ptr,
+                                   int width);
+void ARGBToRGB565DitherRow_Any_SSE2(const uint8_t* src_ptr,
+                                    uint8_t* dst_ptr,
+                                    const uint32_t param,
+                                    int width);
+void ARGBToRGB565DitherRow_Any_AVX2(const uint8_t* src_ptr,
+                                    uint8_t* dst_ptr,
+                                    const uint32_t param,
+                                    int width);
 
-void ARGBToRGB565DitherRow_Any_SSE2(const uint8* src_argb, uint8* dst_rgb,
-                                    const uint32 dither4, int width);
-void ARGBToRGB565DitherRow_Any_AVX2(const uint8* src_argb, uint8* dst_rgb,
-                                    const uint32 dither4, int width);
+void ARGBToRGB565Row_Any_AVX2(const uint8_t* src_ptr,
+                              uint8_t* dst_ptr,
+                              int width);
+void ARGBToARGB1555Row_Any_AVX2(const uint8_t* src_ptr,
+                                uint8_t* dst_ptr,
+                                int width);
+void ARGBToARGB4444Row_Any_AVX2(const uint8_t* src_ptr,
+                                uint8_t* dst_ptr,
+                                int width);
+void ABGRToAR30Row_Any_AVX2(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
+void ARGBToAR30Row_Any_AVX2(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
 
-void ARGBToRGB565Row_Any_AVX2(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToARGB1555Row_Any_AVX2(const uint8* src_argb, uint8* dst_rgb,
+void ARGBToRGB24Row_Any_NEON(const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
+                             int width);
+void ARGBToRAWRow_Any_NEON(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void ARGBToRGB565Row_Any_NEON(const uint8_t* src_ptr,
+                              uint8_t* dst_ptr,
+                              int width);
+void ARGBToARGB1555Row_Any_NEON(const uint8_t* src_ptr,
+                                uint8_t* dst_ptr,
                                 int width);
-void ARGBToARGB4444Row_Any_AVX2(const uint8* src_argb, uint8* dst_rgb,
+void ARGBToARGB4444Row_Any_NEON(const uint8_t* src_ptr,
+                                uint8_t* dst_ptr,
                                 int width);
+void ARGBToRGB565DitherRow_Any_NEON(const uint8_t* src_ptr,
+                                    uint8_t* dst_ptr,
+                                    const uint32_t param,
+                                    int width);
+void ARGBToRGB24Row_Any_MSA(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
+                            int width);
+void ARGBToRAWRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void ARGBToRGB565Row_Any_MSA(const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
+                             int width);
+void ARGBToARGB1555Row_Any_MSA(const uint8_t* src_ptr,
+                               uint8_t* dst_ptr,
+                               int width);
+void ARGBToARGB4444Row_Any_MSA(const uint8_t* src_ptr,
+                               uint8_t* dst_ptr,
+                               int width);
+void ARGBToRGB565DitherRow_Any_MSA(const uint8_t* src_ptr,
+                                   uint8_t* dst_ptr,
+                                   const uint32_t param,
+                                   int width);
 
-void ARGBToRGB24Row_Any_NEON(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToRAWRow_Any_NEON(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToRGB565Row_Any_NEON(const uint8* src_argb, uint8* dst_rgb, int width);
-void ARGBToARGB1555Row_Any_NEON(const uint8* src_argb, uint8* dst_rgb,
-                                int width);
-void ARGBToARGB4444Row_Any_NEON(const uint8* src_argb, uint8* dst_rgb,
-                                int width);
-void ARGBToRGB565DitherRow_Any_NEON(const uint8* src_argb, uint8* dst_rgb,
-                                    const uint32 dither4, int width);
-
-void I444ToARGBRow_Any_NEON(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb,
+void I444ToARGBRow_Any_NEON(const uint8_t* y_buf,
+                            const uint8_t* u_buf,
+                            const uint8_t* v_buf,
+                            uint8_t* dst_ptr,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void I422ToARGBRow_Any_NEON(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb,
+void I422ToARGBRow_Any_NEON(const uint8_t* y_buf,
+                            const uint8_t* u_buf,
+                            const uint8_t* v_buf,
+                            uint8_t* dst_ptr,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void I422AlphaToARGBRow_Any_NEON(const uint8* src_y,
-                                 const uint8* src_u,
-                                 const uint8* src_v,
-                                 const uint8* src_a,
-                                 uint8* dst_argb,
+void I422AlphaToARGBRow_Any_NEON(const uint8_t* y_buf,
+                                 const uint8_t* u_buf,
+                                 const uint8_t* v_buf,
+                                 const uint8_t* a_buf,
+                                 uint8_t* dst_ptr,
                                  const struct YuvConstants* yuvconstants,
                                  int width);
-void I411ToARGBRow_Any_NEON(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb,
+void I422ToRGBARow_Any_NEON(const uint8_t* y_buf,
+                            const uint8_t* u_buf,
+                            const uint8_t* v_buf,
+                            uint8_t* dst_ptr,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void I422ToRGBARow_Any_NEON(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb,
-                            const struct YuvConstants* yuvconstants,
-                            int width);
-void I422ToRGB24Row_Any_NEON(const uint8* src_y,
-                             const uint8* src_u,
-                             const uint8* src_v,
-                             uint8* dst_argb,
+void I422ToRGB24Row_Any_NEON(const uint8_t* y_buf,
+                             const uint8_t* u_buf,
+                             const uint8_t* v_buf,
+                             uint8_t* dst_ptr,
                              const struct YuvConstants* yuvconstants,
                              int width);
-void I422ToARGB4444Row_Any_NEON(const uint8* src_y,
-                                const uint8* src_u,
-                                const uint8* src_v,
-                                uint8* dst_argb,
+void I422ToARGB4444Row_Any_NEON(const uint8_t* y_buf,
+                                const uint8_t* u_buf,
+                                const uint8_t* v_buf,
+                                uint8_t* dst_ptr,
                                 const struct YuvConstants* yuvconstants,
                                 int width);
-void I422ToARGB1555Row_Any_NEON(const uint8* src_y,
-                                const uint8* src_u,
-                                const uint8* src_v,
-                                uint8* dst_argb,
+void I422ToARGB1555Row_Any_NEON(const uint8_t* y_buf,
+                                const uint8_t* u_buf,
+                                const uint8_t* v_buf,
+                                uint8_t* dst_ptr,
                                 const struct YuvConstants* yuvconstants,
                                 int width);
-void I422ToRGB565Row_Any_NEON(const uint8* src_y,
-                              const uint8* src_u,
-                              const uint8* src_v,
-                              uint8* dst_argb,
+void I422ToRGB565Row_Any_NEON(const uint8_t* y_buf,
+                              const uint8_t* u_buf,
+                              const uint8_t* v_buf,
+                              uint8_t* dst_ptr,
                               const struct YuvConstants* yuvconstants,
                               int width);
-void NV12ToARGBRow_Any_NEON(const uint8* src_y,
-                            const uint8* src_uv,
-                            uint8* dst_argb,
+void NV12ToARGBRow_Any_NEON(const uint8_t* y_buf,
+                            const uint8_t* uv_buf,
+                            uint8_t* dst_ptr,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void NV21ToARGBRow_Any_NEON(const uint8* src_y,
-                            const uint8* src_vu,
-                            uint8* dst_argb,
+void NV21ToARGBRow_Any_NEON(const uint8_t* y_buf,
+                            const uint8_t* uv_buf,
+                            uint8_t* dst_ptr,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void NV12ToRGB565Row_Any_NEON(const uint8* src_y,
-                              const uint8* src_uv,
-                              uint8* dst_argb,
+void NV12ToRGB24Row_Any_NEON(const uint8_t* y_buf,
+                             const uint8_t* uv_buf,
+                             uint8_t* dst_ptr,
+                             const struct YuvConstants* yuvconstants,
+                             int width);
+void NV21ToRGB24Row_Any_NEON(const uint8_t* y_buf,
+                             const uint8_t* uv_buf,
+                             uint8_t* dst_ptr,
+                             const struct YuvConstants* yuvconstants,
+                             int width);
+void NV12ToRGB565Row_Any_NEON(const uint8_t* y_buf,
+                              const uint8_t* uv_buf,
+                              uint8_t* dst_ptr,
                               const struct YuvConstants* yuvconstants,
                               int width);
-void YUY2ToARGBRow_Any_NEON(const uint8* src_yuy2,
-                            uint8* dst_argb,
+void YUY2ToARGBRow_Any_NEON(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void UYVYToARGBRow_Any_NEON(const uint8* src_uyvy,
-                            uint8* dst_argb,
+void UYVYToARGBRow_Any_NEON(const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
                             const struct YuvConstants* yuvconstants,
                             int width);
-void I422ToARGBRow_DSPR2(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_argb,
-                         const struct YuvConstants* yuvconstants,
+void I444ToARGBRow_Any_MSA(const uint8_t* y_buf,
+                           const uint8_t* u_buf,
+                           const uint8_t* v_buf,
+                           uint8_t* dst_ptr,
+                           const struct YuvConstants* yuvconstants,
+                           int width);
+void I422ToARGBRow_Any_MSA(const uint8_t* y_buf,
+                           const uint8_t* u_buf,
+                           const uint8_t* v_buf,
+                           uint8_t* dst_ptr,
+                           const struct YuvConstants* yuvconstants,
+                           int width);
+void I422ToRGBARow_Any_MSA(const uint8_t* y_buf,
+                           const uint8_t* u_buf,
+                           const uint8_t* v_buf,
+                           uint8_t* dst_ptr,
+                           const struct YuvConstants* yuvconstants,
+                           int width);
+void I422AlphaToARGBRow_Any_MSA(const uint8_t* y_buf,
+                                const uint8_t* u_buf,
+                                const uint8_t* v_buf,
+                                const uint8_t* a_buf,
+                                uint8_t* dst_ptr,
+                                const struct YuvConstants* yuvconstants,
+                                int width);
+void I422ToRGB24Row_Any_MSA(const uint8_t* y_buf,
+                            const uint8_t* u_buf,
+                            const uint8_t* v_buf,
+                            uint8_t* dst_ptr,
+                            const struct YuvConstants* yuvconstants,
+                            int width);
+void I422ToRGB565Row_Any_MSA(const uint8_t* y_buf,
+                             const uint8_t* u_buf,
+                             const uint8_t* v_buf,
+                             uint8_t* dst_ptr,
+                             const struct YuvConstants* yuvconstants,
+                             int width);
+void I422ToARGB4444Row_Any_MSA(const uint8_t* y_buf,
+                               const uint8_t* u_buf,
+                               const uint8_t* v_buf,
+                               uint8_t* dst_ptr,
+                               const struct YuvConstants* yuvconstants,
+                               int width);
+void I422ToARGB1555Row_Any_MSA(const uint8_t* y_buf,
+                               const uint8_t* u_buf,
+                               const uint8_t* v_buf,
+                               uint8_t* dst_ptr,
+                               const struct YuvConstants* yuvconstants,
+                               int width);
+void NV12ToARGBRow_Any_MSA(const uint8_t* y_buf,
+                           const uint8_t* uv_buf,
+                           uint8_t* dst_ptr,
+                           const struct YuvConstants* yuvconstants,
+                           int width);
+void NV12ToRGB565Row_Any_MSA(const uint8_t* y_buf,
+                             const uint8_t* uv_buf,
+                             uint8_t* dst_ptr,
+                             const struct YuvConstants* yuvconstants,
+                             int width);
+void NV21ToARGBRow_Any_MSA(const uint8_t* y_buf,
+                           const uint8_t* uv_buf,
+                           uint8_t* dst_ptr,
+                           const struct YuvConstants* yuvconstants,
+                           int width);
+void YUY2ToARGBRow_Any_MSA(const uint8_t* src_ptr,
+                           uint8_t* dst_ptr,
+                           const struct YuvConstants* yuvconstants,
+                           int width);
+void UYVYToARGBRow_Any_MSA(const uint8_t* src_ptr,
+                           uint8_t* dst_ptr,
+                           const struct YuvConstants* yuvconstants,
+                           int width);
+
+void YUY2ToYRow_AVX2(const uint8_t* src_yuy2, uint8_t* dst_y, int width);
+void YUY2ToUVRow_AVX2(const uint8_t* src_yuy2,
+                      int stride_yuy2,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void YUY2ToUV422Row_AVX2(const uint8_t* src_yuy2,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
                          int width);
-void I422ToARGBRow_DSPR2(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_argb,
-                         const struct YuvConstants* yuvconstants,
+void YUY2ToYRow_SSE2(const uint8_t* src_yuy2, uint8_t* dst_y, int width);
+void YUY2ToUVRow_SSE2(const uint8_t* src_yuy2,
+                      int stride_yuy2,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void YUY2ToUV422Row_SSE2(const uint8_t* src_yuy2,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
                          int width);
+void YUY2ToYRow_NEON(const uint8_t* src_yuy2, uint8_t* dst_y, int width);
+void YUY2ToUVRow_NEON(const uint8_t* src_yuy2,
+                      int stride_yuy2,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void YUY2ToUV422Row_NEON(const uint8_t* src_yuy2,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width);
+void YUY2ToYRow_MSA(const uint8_t* src_yuy2, uint8_t* dst_y, int width);
+void YUY2ToUVRow_MSA(const uint8_t* src_yuy2,
+                     int src_stride_yuy2,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width);
+void YUY2ToUV422Row_MSA(const uint8_t* src_yuy2,
+                        uint8_t* dst_u,
+                        uint8_t* dst_v,
+                        int width);
+void YUY2ToYRow_C(const uint8_t* src_yuy2, uint8_t* dst_y, int width);
+void YUY2ToUVRow_C(const uint8_t* src_yuy2,
+                   int src_stride_yuy2,
+                   uint8_t* dst_u,
+                   uint8_t* dst_v,
+                   int width);
+void YUY2ToUV422Row_C(const uint8_t* src_yuy2,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void YUY2ToYRow_Any_AVX2(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void YUY2ToUVRow_Any_AVX2(const uint8_t* src_ptr,
+                          int src_stride_ptr,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width);
+void YUY2ToUV422Row_Any_AVX2(const uint8_t* src_ptr,
+                             uint8_t* dst_u,
+                             uint8_t* dst_v,
+                             int width);
+void YUY2ToYRow_Any_SSE2(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void YUY2ToUVRow_Any_SSE2(const uint8_t* src_ptr,
+                          int src_stride_ptr,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width);
+void YUY2ToUV422Row_Any_SSE2(const uint8_t* src_ptr,
+                             uint8_t* dst_u,
+                             uint8_t* dst_v,
+                             int width);
+void YUY2ToYRow_Any_NEON(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void YUY2ToUVRow_Any_NEON(const uint8_t* src_ptr,
+                          int src_stride_ptr,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width);
+void YUY2ToUV422Row_Any_NEON(const uint8_t* src_ptr,
+                             uint8_t* dst_u,
+                             uint8_t* dst_v,
+                             int width);
+void YUY2ToYRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void YUY2ToUVRow_Any_MSA(const uint8_t* src_ptr,
+                         int src_stride_ptr,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width);
+void YUY2ToUV422Row_Any_MSA(const uint8_t* src_ptr,
+                            uint8_t* dst_u,
+                            uint8_t* dst_v,
+                            int width);
+void UYVYToYRow_AVX2(const uint8_t* src_uyvy, uint8_t* dst_y, int width);
+void UYVYToUVRow_AVX2(const uint8_t* src_uyvy,
+                      int stride_uyvy,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void UYVYToUV422Row_AVX2(const uint8_t* src_uyvy,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width);
+void UYVYToYRow_SSE2(const uint8_t* src_uyvy, uint8_t* dst_y, int width);
+void UYVYToUVRow_SSE2(const uint8_t* src_uyvy,
+                      int stride_uyvy,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void UYVYToUV422Row_SSE2(const uint8_t* src_uyvy,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width);
+void UYVYToYRow_AVX2(const uint8_t* src_uyvy, uint8_t* dst_y, int width);
+void UYVYToUVRow_AVX2(const uint8_t* src_uyvy,
+                      int stride_uyvy,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void UYVYToUV422Row_AVX2(const uint8_t* src_uyvy,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width);
+void UYVYToYRow_NEON(const uint8_t* src_uyvy, uint8_t* dst_y, int width);
+void UYVYToUVRow_NEON(const uint8_t* src_uyvy,
+                      int stride_uyvy,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void UYVYToUV422Row_NEON(const uint8_t* src_uyvy,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width);
+void UYVYToYRow_MSA(const uint8_t* src_uyvy, uint8_t* dst_y, int width);
+void UYVYToUVRow_MSA(const uint8_t* src_uyvy,
+                     int src_stride_uyvy,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width);
+void UYVYToUV422Row_MSA(const uint8_t* src_uyvy,
+                        uint8_t* dst_u,
+                        uint8_t* dst_v,
+                        int width);
 
-void YUY2ToYRow_AVX2(const uint8* src_yuy2, uint8* dst_y, int width);
-void YUY2ToUVRow_AVX2(const uint8* src_yuy2, int stride_yuy2,
-                      uint8* dst_u, uint8* dst_v, int width);
-void YUY2ToUV422Row_AVX2(const uint8* src_yuy2,
-                         uint8* dst_u, uint8* dst_v, int width);
-void YUY2ToYRow_SSE2(const uint8* src_yuy2, uint8* dst_y, int width);
-void YUY2ToUVRow_SSE2(const uint8* src_yuy2, int stride_yuy2,
-                      uint8* dst_u, uint8* dst_v, int width);
-void YUY2ToUV422Row_SSE2(const uint8* src_yuy2,
-                         uint8* dst_u, uint8* dst_v, int width);
-void YUY2ToYRow_NEON(const uint8* src_yuy2, uint8* dst_y, int width);
-void YUY2ToUVRow_NEON(const uint8* src_yuy2, int stride_yuy2,
-                      uint8* dst_u, uint8* dst_v, int width);
-void YUY2ToUV422Row_NEON(const uint8* src_yuy2,
-                         uint8* dst_u, uint8* dst_v, int width);
-void YUY2ToYRow_C(const uint8* src_yuy2, uint8* dst_y, int width);
-void YUY2ToUVRow_C(const uint8* src_yuy2, int stride_yuy2,
-                   uint8* dst_u, uint8* dst_v, int width);
-void YUY2ToUV422Row_C(const uint8* src_yuy2,
-                      uint8* dst_u, uint8* dst_v, int width);
-void YUY2ToYRow_Any_AVX2(const uint8* src_yuy2, uint8* dst_y, int width);
-void YUY2ToUVRow_Any_AVX2(const uint8* src_yuy2, int stride_yuy2,
-                          uint8* dst_u, uint8* dst_v, int width);
-void YUY2ToUV422Row_Any_AVX2(const uint8* src_yuy2,
-                             uint8* dst_u, uint8* dst_v, int width);
-void YUY2ToYRow_Any_SSE2(const uint8* src_yuy2, uint8* dst_y, int width);
-void YUY2ToUVRow_Any_SSE2(const uint8* src_yuy2, int stride_yuy2,
-                          uint8* dst_u, uint8* dst_v, int width);
-void YUY2ToUV422Row_Any_SSE2(const uint8* src_yuy2,
-                             uint8* dst_u, uint8* dst_v, int width);
-void YUY2ToYRow_Any_NEON(const uint8* src_yuy2, uint8* dst_y, int width);
-void YUY2ToUVRow_Any_NEON(const uint8* src_yuy2, int stride_yuy2,
-                          uint8* dst_u, uint8* dst_v, int width);
-void YUY2ToUV422Row_Any_NEON(const uint8* src_yuy2,
-                             uint8* dst_u, uint8* dst_v, int width);
-void UYVYToYRow_AVX2(const uint8* src_uyvy, uint8* dst_y, int width);
-void UYVYToUVRow_AVX2(const uint8* src_uyvy, int stride_uyvy,
-                      uint8* dst_u, uint8* dst_v, int width);
-void UYVYToUV422Row_AVX2(const uint8* src_uyvy,
-                         uint8* dst_u, uint8* dst_v, int width);
-void UYVYToYRow_SSE2(const uint8* src_uyvy, uint8* dst_y, int width);
-void UYVYToUVRow_SSE2(const uint8* src_uyvy, int stride_uyvy,
-                      uint8* dst_u, uint8* dst_v, int width);
-void UYVYToUV422Row_SSE2(const uint8* src_uyvy,
-                         uint8* dst_u, uint8* dst_v, int width);
-void UYVYToYRow_AVX2(const uint8* src_uyvy, uint8* dst_y, int width);
-void UYVYToUVRow_AVX2(const uint8* src_uyvy, int stride_uyvy,
-                      uint8* dst_u, uint8* dst_v, int width);
-void UYVYToUV422Row_AVX2(const uint8* src_uyvy,
-                         uint8* dst_u, uint8* dst_v, int width);
-void UYVYToYRow_NEON(const uint8* src_uyvy, uint8* dst_y, int width);
-void UYVYToUVRow_NEON(const uint8* src_uyvy, int stride_uyvy,
-                      uint8* dst_u, uint8* dst_v, int width);
-void UYVYToUV422Row_NEON(const uint8* src_uyvy,
-                         uint8* dst_u, uint8* dst_v, int width);
+void UYVYToYRow_C(const uint8_t* src_uyvy, uint8_t* dst_y, int width);
+void UYVYToUVRow_C(const uint8_t* src_uyvy,
+                   int src_stride_uyvy,
+                   uint8_t* dst_u,
+                   uint8_t* dst_v,
+                   int width);
+void UYVYToUV422Row_C(const uint8_t* src_uyvy,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width);
+void UYVYToYRow_Any_AVX2(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void UYVYToUVRow_Any_AVX2(const uint8_t* src_ptr,
+                          int src_stride_ptr,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width);
+void UYVYToUV422Row_Any_AVX2(const uint8_t* src_ptr,
+                             uint8_t* dst_u,
+                             uint8_t* dst_v,
+                             int width);
+void UYVYToYRow_Any_SSE2(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void UYVYToUVRow_Any_SSE2(const uint8_t* src_ptr,
+                          int src_stride_ptr,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width);
+void UYVYToUV422Row_Any_SSE2(const uint8_t* src_ptr,
+                             uint8_t* dst_u,
+                             uint8_t* dst_v,
+                             int width);
+void UYVYToYRow_Any_NEON(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void UYVYToUVRow_Any_NEON(const uint8_t* src_ptr,
+                          int src_stride_ptr,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width);
+void UYVYToUV422Row_Any_NEON(const uint8_t* src_ptr,
+                             uint8_t* dst_u,
+                             uint8_t* dst_v,
+                             int width);
+void UYVYToYRow_Any_MSA(const uint8_t* src_ptr, uint8_t* dst_ptr, int width);
+void UYVYToUVRow_Any_MSA(const uint8_t* src_ptr,
+                         int src_stride_ptr,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width);
+void UYVYToUV422Row_Any_MSA(const uint8_t* src_ptr,
+                            uint8_t* dst_u,
+                            uint8_t* dst_v,
+                            int width);
 
-void UYVYToYRow_C(const uint8* src_uyvy, uint8* dst_y, int width);
-void UYVYToUVRow_C(const uint8* src_uyvy, int stride_uyvy,
-                   uint8* dst_u, uint8* dst_v, int width);
-void UYVYToUV422Row_C(const uint8* src_uyvy,
-                      uint8* dst_u, uint8* dst_v, int width);
-void UYVYToYRow_Any_AVX2(const uint8* src_uyvy, uint8* dst_y, int width);
-void UYVYToUVRow_Any_AVX2(const uint8* src_uyvy, int stride_uyvy,
-                          uint8* dst_u, uint8* dst_v, int width);
-void UYVYToUV422Row_Any_AVX2(const uint8* src_uyvy,
-                             uint8* dst_u, uint8* dst_v, int width);
-void UYVYToYRow_Any_SSE2(const uint8* src_uyvy, uint8* dst_y, int width);
-void UYVYToUVRow_Any_SSE2(const uint8* src_uyvy, int stride_uyvy,
-                          uint8* dst_u, uint8* dst_v, int width);
-void UYVYToUV422Row_Any_SSE2(const uint8* src_uyvy,
-                             uint8* dst_u, uint8* dst_v, int width);
-void UYVYToYRow_Any_NEON(const uint8* src_uyvy, uint8* dst_y, int width);
-void UYVYToUVRow_Any_NEON(const uint8* src_uyvy, int stride_uyvy,
-                          uint8* dst_u, uint8* dst_v, int width);
-void UYVYToUV422Row_Any_NEON(const uint8* src_uyvy,
-                             uint8* dst_u, uint8* dst_v, int width);
-
-void I422ToYUY2Row_C(const uint8* src_y,
-                     const uint8* src_u,
-                     const uint8* src_v,
-                     uint8* dst_yuy2, int width);
-void I422ToUYVYRow_C(const uint8* src_y,
-                     const uint8* src_u,
-                     const uint8* src_v,
-                     uint8* dst_uyvy, int width);
-void I422ToYUY2Row_SSE2(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_yuy2, int width);
-void I422ToUYVYRow_SSE2(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_uyvy, int width);
-void I422ToYUY2Row_Any_SSE2(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_yuy2, int width);
-void I422ToUYVYRow_Any_SSE2(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_uyvy, int width);
-void I422ToYUY2Row_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_yuy2, int width);
-void I422ToUYVYRow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_uyvy, int width);
-void I422ToYUY2Row_Any_NEON(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_yuy2, int width);
-void I422ToUYVYRow_Any_NEON(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_uyvy, int width);
+void I422ToYUY2Row_C(const uint8_t* src_y,
+                     const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* dst_frame,
+                     int width);
+void I422ToUYVYRow_C(const uint8_t* src_y,
+                     const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* dst_frame,
+                     int width);
+void I422ToYUY2Row_SSE2(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_yuy2,
+                        int width);
+void I422ToUYVYRow_SSE2(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_uyvy,
+                        int width);
+void I422ToYUY2Row_Any_SSE2(const uint8_t* y_buf,
+                            const uint8_t* u_buf,
+                            const uint8_t* v_buf,
+                            uint8_t* dst_ptr,
+                            int width);
+void I422ToUYVYRow_Any_SSE2(const uint8_t* y_buf,
+                            const uint8_t* u_buf,
+                            const uint8_t* v_buf,
+                            uint8_t* dst_ptr,
+                            int width);
+void I422ToYUY2Row_AVX2(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_yuy2,
+                        int width);
+void I422ToUYVYRow_AVX2(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_uyvy,
+                        int width);
+void I422ToYUY2Row_Any_AVX2(const uint8_t* y_buf,
+                            const uint8_t* u_buf,
+                            const uint8_t* v_buf,
+                            uint8_t* dst_ptr,
+                            int width);
+void I422ToUYVYRow_Any_AVX2(const uint8_t* y_buf,
+                            const uint8_t* u_buf,
+                            const uint8_t* v_buf,
+                            uint8_t* dst_ptr,
+                            int width);
+void I422ToYUY2Row_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_yuy2,
+                        int width);
+void I422ToUYVYRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_uyvy,
+                        int width);
+void I422ToYUY2Row_Any_NEON(const uint8_t* y_buf,
+                            const uint8_t* u_buf,
+                            const uint8_t* v_buf,
+                            uint8_t* dst_ptr,
+                            int width);
+void I422ToUYVYRow_Any_NEON(const uint8_t* y_buf,
+                            const uint8_t* u_buf,
+                            const uint8_t* v_buf,
+                            uint8_t* dst_ptr,
+                            int width);
+void I422ToYUY2Row_MSA(const uint8_t* src_y,
+                       const uint8_t* src_u,
+                       const uint8_t* src_v,
+                       uint8_t* dst_yuy2,
+                       int width);
+void I422ToUYVYRow_MSA(const uint8_t* src_y,
+                       const uint8_t* src_u,
+                       const uint8_t* src_v,
+                       uint8_t* dst_uyvy,
+                       int width);
+void I422ToYUY2Row_Any_MSA(const uint8_t* y_buf,
+                           const uint8_t* u_buf,
+                           const uint8_t* v_buf,
+                           uint8_t* dst_ptr,
+                           int width);
+void I422ToUYVYRow_Any_MSA(const uint8_t* y_buf,
+                           const uint8_t* u_buf,
+                           const uint8_t* v_buf,
+                           uint8_t* dst_ptr,
+                           int width);
 
 // Effects related row functions.
-void ARGBAttenuateRow_C(const uint8* src_argb, uint8* dst_argb, int width);
-void ARGBAttenuateRow_SSSE3(const uint8* src_argb, uint8* dst_argb, int width);
-void ARGBAttenuateRow_AVX2(const uint8* src_argb, uint8* dst_argb, int width);
-void ARGBAttenuateRow_NEON(const uint8* src_argb, uint8* dst_argb, int width);
-void ARGBAttenuateRow_Any_SSE2(const uint8* src_argb, uint8* dst_argb,
-                               int width);
-void ARGBAttenuateRow_Any_SSSE3(const uint8* src_argb, uint8* dst_argb,
+void ARGBAttenuateRow_C(const uint8_t* src_argb, uint8_t* dst_argb, int width);
+void ARGBAttenuateRow_SSSE3(const uint8_t* src_argb,
+                            uint8_t* dst_argb,
+                            int width);
+void ARGBAttenuateRow_AVX2(const uint8_t* src_argb,
+                           uint8_t* dst_argb,
+                           int width);
+void ARGBAttenuateRow_NEON(const uint8_t* src_argb,
+                           uint8_t* dst_argb,
+                           int width);
+void ARGBAttenuateRow_MSA(const uint8_t* src_argb,
+                          uint8_t* dst_argb,
+                          int width);
+void ARGBAttenuateRow_Any_SSSE3(const uint8_t* src_ptr,
+                                uint8_t* dst_ptr,
                                 int width);
-void ARGBAttenuateRow_Any_AVX2(const uint8* src_argb, uint8* dst_argb,
+void ARGBAttenuateRow_Any_AVX2(const uint8_t* src_ptr,
+                               uint8_t* dst_ptr,
                                int width);
-void ARGBAttenuateRow_Any_NEON(const uint8* src_argb, uint8* dst_argb,
+void ARGBAttenuateRow_Any_NEON(const uint8_t* src_ptr,
+                               uint8_t* dst_ptr,
                                int width);
+void ARGBAttenuateRow_Any_MSA(const uint8_t* src_ptr,
+                              uint8_t* dst_ptr,
+                              int width);
 
 // Inverse table for unattenuate, shared by C and SSE2.
-extern const uint32 fixed_invtbl8[256];
-void ARGBUnattenuateRow_C(const uint8* src_argb, uint8* dst_argb, int width);
-void ARGBUnattenuateRow_SSE2(const uint8* src_argb, uint8* dst_argb, int width);
-void ARGBUnattenuateRow_AVX2(const uint8* src_argb, uint8* dst_argb, int width);
-void ARGBUnattenuateRow_Any_SSE2(const uint8* src_argb, uint8* dst_argb,
+extern const uint32_t fixed_invtbl8[256];
+void ARGBUnattenuateRow_C(const uint8_t* src_argb,
+                          uint8_t* dst_argb,
+                          int width);
+void ARGBUnattenuateRow_SSE2(const uint8_t* src_argb,
+                             uint8_t* dst_argb,
+                             int width);
+void ARGBUnattenuateRow_AVX2(const uint8_t* src_argb,
+                             uint8_t* dst_argb,
+                             int width);
+void ARGBUnattenuateRow_Any_SSE2(const uint8_t* src_ptr,
+                                 uint8_t* dst_ptr,
                                  int width);
-void ARGBUnattenuateRow_Any_AVX2(const uint8* src_argb, uint8* dst_argb,
+void ARGBUnattenuateRow_Any_AVX2(const uint8_t* src_ptr,
+                                 uint8_t* dst_ptr,
                                  int width);
 
-void ARGBGrayRow_C(const uint8* src_argb, uint8* dst_argb, int width);
-void ARGBGrayRow_SSSE3(const uint8* src_argb, uint8* dst_argb, int width);
-void ARGBGrayRow_NEON(const uint8* src_argb, uint8* dst_argb, int width);
+void ARGBGrayRow_C(const uint8_t* src_argb, uint8_t* dst_argb, int width);
+void ARGBGrayRow_SSSE3(const uint8_t* src_argb, uint8_t* dst_argb, int width);
+void ARGBGrayRow_NEON(const uint8_t* src_argb, uint8_t* dst_argb, int width);
+void ARGBGrayRow_MSA(const uint8_t* src_argb, uint8_t* dst_argb, int width);
 
-void ARGBSepiaRow_C(uint8* dst_argb, int width);
-void ARGBSepiaRow_SSSE3(uint8* dst_argb, int width);
-void ARGBSepiaRow_NEON(uint8* dst_argb, int width);
+void ARGBSepiaRow_C(uint8_t* dst_argb, int width);
+void ARGBSepiaRow_SSSE3(uint8_t* dst_argb, int width);
+void ARGBSepiaRow_NEON(uint8_t* dst_argb, int width);
+void ARGBSepiaRow_MSA(uint8_t* dst_argb, int width);
 
-void ARGBColorMatrixRow_C(const uint8* src_argb, uint8* dst_argb,
-                          const int8* matrix_argb, int width);
-void ARGBColorMatrixRow_SSSE3(const uint8* src_argb, uint8* dst_argb,
-                              const int8* matrix_argb, int width);
-void ARGBColorMatrixRow_NEON(const uint8* src_argb, uint8* dst_argb,
-                             const int8* matrix_argb, int width);
+void ARGBColorMatrixRow_C(const uint8_t* src_argb,
+                          uint8_t* dst_argb,
+                          const int8_t* matrix_argb,
+                          int width);
+void ARGBColorMatrixRow_SSSE3(const uint8_t* src_argb,
+                              uint8_t* dst_argb,
+                              const int8_t* matrix_argb,
+                              int width);
+void ARGBColorMatrixRow_NEON(const uint8_t* src_argb,
+                             uint8_t* dst_argb,
+                             const int8_t* matrix_argb,
+                             int width);
+void ARGBColorMatrixRow_MSA(const uint8_t* src_argb,
+                            uint8_t* dst_argb,
+                            const int8_t* matrix_argb,
+                            int width);
 
-void ARGBColorTableRow_C(uint8* dst_argb, const uint8* table_argb, int width);
-void ARGBColorTableRow_X86(uint8* dst_argb, const uint8* table_argb, int width);
+void ARGBColorTableRow_C(uint8_t* dst_argb,
+                         const uint8_t* table_argb,
+                         int width);
+void ARGBColorTableRow_X86(uint8_t* dst_argb,
+                           const uint8_t* table_argb,
+                           int width);
 
-void RGBColorTableRow_C(uint8* dst_argb, const uint8* table_argb, int width);
-void RGBColorTableRow_X86(uint8* dst_argb, const uint8* table_argb, int width);
+void RGBColorTableRow_C(uint8_t* dst_argb,
+                        const uint8_t* table_argb,
+                        int width);
+void RGBColorTableRow_X86(uint8_t* dst_argb,
+                          const uint8_t* table_argb,
+                          int width);
 
-void ARGBQuantizeRow_C(uint8* dst_argb, int scale, int interval_size,
-                       int interval_offset, int width);
-void ARGBQuantizeRow_SSE2(uint8* dst_argb, int scale, int interval_size,
-                          int interval_offset, int width);
-void ARGBQuantizeRow_NEON(uint8* dst_argb, int scale, int interval_size,
-                          int interval_offset, int width);
+void ARGBQuantizeRow_C(uint8_t* dst_argb,
+                       int scale,
+                       int interval_size,
+                       int interval_offset,
+                       int width);
+void ARGBQuantizeRow_SSE2(uint8_t* dst_argb,
+                          int scale,
+                          int interval_size,
+                          int interval_offset,
+                          int width);
+void ARGBQuantizeRow_NEON(uint8_t* dst_argb,
+                          int scale,
+                          int interval_size,
+                          int interval_offset,
+                          int width);
+void ARGBQuantizeRow_MSA(uint8_t* dst_argb,
+                         int scale,
+                         int interval_size,
+                         int interval_offset,
+                         int width);
 
-void ARGBShadeRow_C(const uint8* src_argb, uint8* dst_argb, int width,
-                    uint32 value);
-void ARGBShadeRow_SSE2(const uint8* src_argb, uint8* dst_argb, int width,
-                       uint32 value);
-void ARGBShadeRow_NEON(const uint8* src_argb, uint8* dst_argb, int width,
-                       uint32 value);
+void ARGBShadeRow_C(const uint8_t* src_argb,
+                    uint8_t* dst_argb,
+                    int width,
+                    uint32_t value);
+void ARGBShadeRow_SSE2(const uint8_t* src_argb,
+                       uint8_t* dst_argb,
+                       int width,
+                       uint32_t value);
+void ARGBShadeRow_NEON(const uint8_t* src_argb,
+                       uint8_t* dst_argb,
+                       int width,
+                       uint32_t value);
+void ARGBShadeRow_MSA(const uint8_t* src_argb,
+                      uint8_t* dst_argb,
+                      int width,
+                      uint32_t value);
 
 // Used for blur.
-void CumulativeSumToAverageRow_SSE2(const int32* topleft, const int32* botleft,
-                                    int width, int area, uint8* dst, int count);
-void ComputeCumulativeSumRow_SSE2(const uint8* row, int32* cumsum,
-                                  const int32* previous_cumsum, int width);
+void CumulativeSumToAverageRow_SSE2(const int32_t* topleft,
+                                    const int32_t* botleft,
+                                    int width,
+                                    int area,
+                                    uint8_t* dst,
+                                    int count);
+void ComputeCumulativeSumRow_SSE2(const uint8_t* row,
+                                  int32_t* cumsum,
+                                  const int32_t* previous_cumsum,
+                                  int width);
 
-void CumulativeSumToAverageRow_C(const int32* topleft, const int32* botleft,
-                                 int width, int area, uint8* dst, int count);
-void ComputeCumulativeSumRow_C(const uint8* row, int32* cumsum,
-                               const int32* previous_cumsum, int width);
+void CumulativeSumToAverageRow_C(const int32_t* tl,
+                                 const int32_t* bl,
+                                 int w,
+                                 int area,
+                                 uint8_t* dst,
+                                 int count);
+void ComputeCumulativeSumRow_C(const uint8_t* row,
+                               int32_t* cumsum,
+                               const int32_t* previous_cumsum,
+                               int width);
 
 LIBYUV_API
-void ARGBAffineRow_C(const uint8* src_argb, int src_argb_stride,
-                     uint8* dst_argb, const float* uv_dudv, int width);
+void ARGBAffineRow_C(const uint8_t* src_argb,
+                     int src_argb_stride,
+                     uint8_t* dst_argb,
+                     const float* uv_dudv,
+                     int width);
 LIBYUV_API
-void ARGBAffineRow_SSE2(const uint8* src_argb, int src_argb_stride,
-                        uint8* dst_argb, const float* uv_dudv, int width);
+void ARGBAffineRow_SSE2(const uint8_t* src_argb,
+                        int src_argb_stride,
+                        uint8_t* dst_argb,
+                        const float* src_dudv,
+                        int width);
 
 // Used for I420Scale, ARGBScale, and ARGBInterpolate.
-void InterpolateRow_C(uint8* dst_ptr, const uint8* src_ptr,
-                      ptrdiff_t src_stride_ptr,
-                      int width, int source_y_fraction);
-void InterpolateRow_SSSE3(uint8* dst_ptr, const uint8* src_ptr,
-                          ptrdiff_t src_stride_ptr, int width,
+void InterpolateRow_C(uint8_t* dst_ptr,
+                      const uint8_t* src_ptr,
+                      ptrdiff_t src_stride,
+                      int width,
+                      int source_y_fraction);
+void InterpolateRow_SSSE3(uint8_t* dst_ptr,
+                          const uint8_t* src_ptr,
+                          ptrdiff_t src_stride,
+                          int dst_width,
                           int source_y_fraction);
-void InterpolateRow_AVX2(uint8* dst_ptr, const uint8* src_ptr,
-                         ptrdiff_t src_stride_ptr, int width,
+void InterpolateRow_AVX2(uint8_t* dst_ptr,
+                         const uint8_t* src_ptr,
+                         ptrdiff_t src_stride,
+                         int dst_width,
                          int source_y_fraction);
-void InterpolateRow_NEON(uint8* dst_ptr, const uint8* src_ptr,
-                         ptrdiff_t src_stride_ptr, int width,
+void InterpolateRow_NEON(uint8_t* dst_ptr,
+                         const uint8_t* src_ptr,
+                         ptrdiff_t src_stride,
+                         int dst_width,
                          int source_y_fraction);
-void InterpolateRow_DSPR2(uint8* dst_ptr, const uint8* src_ptr,
-                          ptrdiff_t src_stride_ptr, int width,
-                          int source_y_fraction);
-void InterpolateRow_Any_NEON(uint8* dst_ptr, const uint8* src_ptr,
-                             ptrdiff_t src_stride_ptr, int width,
+void InterpolateRow_MSA(uint8_t* dst_ptr,
+                        const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        int width,
+                        int source_y_fraction);
+void InterpolateRow_Any_NEON(uint8_t* dst_ptr,
+                             const uint8_t* src_ptr,
+                             ptrdiff_t src_stride_ptr,
+                             int width,
                              int source_y_fraction);
-void InterpolateRow_Any_SSSE3(uint8* dst_ptr, const uint8* src_ptr,
-                              ptrdiff_t src_stride_ptr, int width,
+void InterpolateRow_Any_SSSE3(uint8_t* dst_ptr,
+                              const uint8_t* src_ptr,
+                              ptrdiff_t src_stride_ptr,
+                              int width,
                               int source_y_fraction);
-void InterpolateRow_Any_AVX2(uint8* dst_ptr, const uint8* src_ptr,
-                             ptrdiff_t src_stride_ptr, int width,
+void InterpolateRow_Any_AVX2(uint8_t* dst_ptr,
+                             const uint8_t* src_ptr,
+                             ptrdiff_t src_stride_ptr,
+                             int width,
                              int source_y_fraction);
-void InterpolateRow_Any_DSPR2(uint8* dst_ptr, const uint8* src_ptr,
-                              ptrdiff_t src_stride_ptr, int width,
-                              int source_y_fraction);
+void InterpolateRow_Any_MSA(uint8_t* dst_ptr,
+                            const uint8_t* src_ptr,
+                            ptrdiff_t src_stride_ptr,
+                            int width,
+                            int source_y_fraction);
 
-void InterpolateRow_16_C(uint16* dst_ptr, const uint16* src_ptr,
-                         ptrdiff_t src_stride_ptr,
-                         int width, int source_y_fraction);
+void InterpolateRow_16_C(uint16_t* dst_ptr,
+                         const uint16_t* src_ptr,
+                         ptrdiff_t src_stride,
+                         int width,
+                         int source_y_fraction);
 
 // Sobel images.
-void SobelXRow_C(const uint8* src_y0, const uint8* src_y1, const uint8* src_y2,
-                 uint8* dst_sobelx, int width);
-void SobelXRow_SSE2(const uint8* src_y0, const uint8* src_y1,
-                    const uint8* src_y2, uint8* dst_sobelx, int width);
-void SobelXRow_NEON(const uint8* src_y0, const uint8* src_y1,
-                    const uint8* src_y2, uint8* dst_sobelx, int width);
-void SobelYRow_C(const uint8* src_y0, const uint8* src_y1,
-                 uint8* dst_sobely, int width);
-void SobelYRow_SSE2(const uint8* src_y0, const uint8* src_y1,
-                    uint8* dst_sobely, int width);
-void SobelYRow_NEON(const uint8* src_y0, const uint8* src_y1,
-                    uint8* dst_sobely, int width);
-void SobelRow_C(const uint8* src_sobelx, const uint8* src_sobely,
-                uint8* dst_argb, int width);
-void SobelRow_SSE2(const uint8* src_sobelx, const uint8* src_sobely,
-                   uint8* dst_argb, int width);
-void SobelRow_NEON(const uint8* src_sobelx, const uint8* src_sobely,
-                   uint8* dst_argb, int width);
-void SobelToPlaneRow_C(const uint8* src_sobelx, const uint8* src_sobely,
-                       uint8* dst_y, int width);
-void SobelToPlaneRow_SSE2(const uint8* src_sobelx, const uint8* src_sobely,
-                          uint8* dst_y, int width);
-void SobelToPlaneRow_NEON(const uint8* src_sobelx, const uint8* src_sobely,
-                          uint8* dst_y, int width);
-void SobelXYRow_C(const uint8* src_sobelx, const uint8* src_sobely,
-                  uint8* dst_argb, int width);
-void SobelXYRow_SSE2(const uint8* src_sobelx, const uint8* src_sobely,
-                     uint8* dst_argb, int width);
-void SobelXYRow_NEON(const uint8* src_sobelx, const uint8* src_sobely,
-                     uint8* dst_argb, int width);
-void SobelRow_Any_SSE2(const uint8* src_sobelx, const uint8* src_sobely,
-                       uint8* dst_argb, int width);
-void SobelRow_Any_NEON(const uint8* src_sobelx, const uint8* src_sobely,
-                       uint8* dst_argb, int width);
-void SobelToPlaneRow_Any_SSE2(const uint8* src_sobelx, const uint8* src_sobely,
-                              uint8* dst_y, int width);
-void SobelToPlaneRow_Any_NEON(const uint8* src_sobelx, const uint8* src_sobely,
-                              uint8* dst_y, int width);
-void SobelXYRow_Any_SSE2(const uint8* src_sobelx, const uint8* src_sobely,
-                         uint8* dst_argb, int width);
-void SobelXYRow_Any_NEON(const uint8* src_sobelx, const uint8* src_sobely,
-                         uint8* dst_argb, int width);
-
-void ARGBPolynomialRow_C(const uint8* src_argb,
-                         uint8* dst_argb, const float* poly,
+void SobelXRow_C(const uint8_t* src_y0,
+                 const uint8_t* src_y1,
+                 const uint8_t* src_y2,
+                 uint8_t* dst_sobelx,
+                 int width);
+void SobelXRow_SSE2(const uint8_t* src_y0,
+                    const uint8_t* src_y1,
+                    const uint8_t* src_y2,
+                    uint8_t* dst_sobelx,
+                    int width);
+void SobelXRow_NEON(const uint8_t* src_y0,
+                    const uint8_t* src_y1,
+                    const uint8_t* src_y2,
+                    uint8_t* dst_sobelx,
+                    int width);
+void SobelXRow_MSA(const uint8_t* src_y0,
+                   const uint8_t* src_y1,
+                   const uint8_t* src_y2,
+                   uint8_t* dst_sobelx,
+                   int width);
+void SobelYRow_C(const uint8_t* src_y0,
+                 const uint8_t* src_y1,
+                 uint8_t* dst_sobely,
+                 int width);
+void SobelYRow_SSE2(const uint8_t* src_y0,
+                    const uint8_t* src_y1,
+                    uint8_t* dst_sobely,
+                    int width);
+void SobelYRow_NEON(const uint8_t* src_y0,
+                    const uint8_t* src_y1,
+                    uint8_t* dst_sobely,
+                    int width);
+void SobelYRow_MSA(const uint8_t* src_y0,
+                   const uint8_t* src_y1,
+                   uint8_t* dst_sobely,
+                   int width);
+void SobelRow_C(const uint8_t* src_sobelx,
+                const uint8_t* src_sobely,
+                uint8_t* dst_argb,
+                int width);
+void SobelRow_SSE2(const uint8_t* src_sobelx,
+                   const uint8_t* src_sobely,
+                   uint8_t* dst_argb,
+                   int width);
+void SobelRow_NEON(const uint8_t* src_sobelx,
+                   const uint8_t* src_sobely,
+                   uint8_t* dst_argb,
+                   int width);
+void SobelRow_MSA(const uint8_t* src_sobelx,
+                  const uint8_t* src_sobely,
+                  uint8_t* dst_argb,
+                  int width);
+void SobelToPlaneRow_C(const uint8_t* src_sobelx,
+                       const uint8_t* src_sobely,
+                       uint8_t* dst_y,
+                       int width);
+void SobelToPlaneRow_SSE2(const uint8_t* src_sobelx,
+                          const uint8_t* src_sobely,
+                          uint8_t* dst_y,
+                          int width);
+void SobelToPlaneRow_NEON(const uint8_t* src_sobelx,
+                          const uint8_t* src_sobely,
+                          uint8_t* dst_y,
+                          int width);
+void SobelToPlaneRow_MSA(const uint8_t* src_sobelx,
+                         const uint8_t* src_sobely,
+                         uint8_t* dst_y,
                          int width);
-void ARGBPolynomialRow_SSE2(const uint8* src_argb,
-                            uint8* dst_argb, const float* poly,
+void SobelXYRow_C(const uint8_t* src_sobelx,
+                  const uint8_t* src_sobely,
+                  uint8_t* dst_argb,
+                  int width);
+void SobelXYRow_SSE2(const uint8_t* src_sobelx,
+                     const uint8_t* src_sobely,
+                     uint8_t* dst_argb,
+                     int width);
+void SobelXYRow_NEON(const uint8_t* src_sobelx,
+                     const uint8_t* src_sobely,
+                     uint8_t* dst_argb,
+                     int width);
+void SobelXYRow_MSA(const uint8_t* src_sobelx,
+                    const uint8_t* src_sobely,
+                    uint8_t* dst_argb,
+                    int width);
+void SobelRow_Any_SSE2(const uint8_t* y_buf,
+                       const uint8_t* uv_buf,
+                       uint8_t* dst_ptr,
+                       int width);
+void SobelRow_Any_NEON(const uint8_t* y_buf,
+                       const uint8_t* uv_buf,
+                       uint8_t* dst_ptr,
+                       int width);
+void SobelRow_Any_MSA(const uint8_t* y_buf,
+                      const uint8_t* uv_buf,
+                      uint8_t* dst_ptr,
+                      int width);
+void SobelToPlaneRow_Any_SSE2(const uint8_t* y_buf,
+                              const uint8_t* uv_buf,
+                              uint8_t* dst_ptr,
+                              int width);
+void SobelToPlaneRow_Any_NEON(const uint8_t* y_buf,
+                              const uint8_t* uv_buf,
+                              uint8_t* dst_ptr,
+                              int width);
+void SobelToPlaneRow_Any_MSA(const uint8_t* y_buf,
+                             const uint8_t* uv_buf,
+                             uint8_t* dst_ptr,
+                             int width);
+void SobelXYRow_Any_SSE2(const uint8_t* y_buf,
+                         const uint8_t* uv_buf,
+                         uint8_t* dst_ptr,
+                         int width);
+void SobelXYRow_Any_NEON(const uint8_t* y_buf,
+                         const uint8_t* uv_buf,
+                         uint8_t* dst_ptr,
+                         int width);
+void SobelXYRow_Any_MSA(const uint8_t* y_buf,
+                        const uint8_t* uv_buf,
+                        uint8_t* dst_ptr,
+                        int width);
+
+void ARGBPolynomialRow_C(const uint8_t* src_argb,
+                         uint8_t* dst_argb,
+                         const float* poly,
+                         int width);
+void ARGBPolynomialRow_SSE2(const uint8_t* src_argb,
+                            uint8_t* dst_argb,
+                            const float* poly,
                             int width);
-void ARGBPolynomialRow_AVX2(const uint8* src_argb,
-                            uint8* dst_argb, const float* poly,
+void ARGBPolynomialRow_AVX2(const uint8_t* src_argb,
+                            uint8_t* dst_argb,
+                            const float* poly,
                             int width);
 
-void ARGBLumaColorTableRow_C(const uint8* src_argb, uint8* dst_argb, int width,
-                             const uint8* luma, uint32 lumacoeff);
-void ARGBLumaColorTableRow_SSSE3(const uint8* src_argb, uint8* dst_argb,
+// Scale and convert to half float.
+void HalfFloatRow_C(const uint16_t* src, uint16_t* dst, float scale, int width);
+void HalfFloatRow_SSE2(const uint16_t* src,
+                       uint16_t* dst,
+                       float scale,
+                       int width);
+void HalfFloatRow_Any_SSE2(const uint16_t* src_ptr,
+                           uint16_t* dst_ptr,
+                           float param,
+                           int width);
+void HalfFloatRow_AVX2(const uint16_t* src,
+                       uint16_t* dst,
+                       float scale,
+                       int width);
+void HalfFloatRow_Any_AVX2(const uint16_t* src_ptr,
+                           uint16_t* dst_ptr,
+                           float param,
+                           int width);
+void HalfFloatRow_F16C(const uint16_t* src,
+                       uint16_t* dst,
+                       float scale,
+                       int width);
+void HalfFloatRow_Any_F16C(const uint16_t* src,
+                           uint16_t* dst,
+                           float scale,
+                           int width);
+void HalfFloat1Row_F16C(const uint16_t* src,
+                        uint16_t* dst,
+                        float scale,
+                        int width);
+void HalfFloat1Row_Any_F16C(const uint16_t* src,
+                            uint16_t* dst,
+                            float scale,
+                            int width);
+void HalfFloatRow_NEON(const uint16_t* src,
+                       uint16_t* dst,
+                       float scale,
+                       int width);
+void HalfFloatRow_Any_NEON(const uint16_t* src_ptr,
+                           uint16_t* dst_ptr,
+                           float param,
+                           int width);
+void HalfFloat1Row_NEON(const uint16_t* src,
+                        uint16_t* dst,
+                        float scale,
+                        int width);
+void HalfFloat1Row_Any_NEON(const uint16_t* src_ptr,
+                            uint16_t* dst_ptr,
+                            float param,
+                            int width);
+void HalfFloatRow_MSA(const uint16_t* src,
+                      uint16_t* dst,
+                      float scale,
+                      int width);
+void HalfFloatRow_Any_MSA(const uint16_t* src_ptr,
+                          uint16_t* dst_ptr,
+                          float param,
+                          int width);
+void ByteToFloatRow_C(const uint8_t* src, float* dst, float scale, int width);
+void ByteToFloatRow_NEON(const uint8_t* src,
+                         float* dst,
+                         float scale,
+                         int width);
+void ByteToFloatRow_Any_NEON(const uint8_t* src_ptr,
+                             float* dst_ptr,
+                             float param,
+                             int width);
+
+void ARGBLumaColorTableRow_C(const uint8_t* src_argb,
+                             uint8_t* dst_argb,
+                             int width,
+                             const uint8_t* luma,
+                             uint32_t lumacoeff);
+void ARGBLumaColorTableRow_SSSE3(const uint8_t* src_argb,
+                                 uint8_t* dst_argb,
                                  int width,
-                                 const uint8* luma, uint32 lumacoeff);
+                                 const uint8_t* luma,
+                                 uint32_t lumacoeff);
+
+float ScaleMaxSamples_C(const float* src, float* dst, float scale, int width);
+float ScaleMaxSamples_NEON(const float* src,
+                           float* dst,
+                           float scale,
+                           int width);
+float ScaleSumSamples_C(const float* src, float* dst, float scale, int width);
+float ScaleSumSamples_NEON(const float* src,
+                           float* dst,
+                           float scale,
+                           int width);
+void ScaleSamples_C(const float* src, float* dst, float scale, int width);
+void ScaleSamples_NEON(const float* src, float* dst, float scale, int width);
 
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
 #endif
 
-#endif  // INCLUDE_LIBYUV_ROW_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_ROW_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/scale.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/scale.h
index 102158d..b937d34 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/scale.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/scale.h
@@ -8,7 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_SCALE_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_SCALE_H_
 #define INCLUDE_LIBYUV_SCALE_H_
 
 #include "libyuv/basic_types.h"
@@ -20,25 +20,33 @@
 
 // Supported filtering.
 typedef enum FilterMode {
-  kFilterNone = 0,  // Point sample; Fastest.
-  kFilterLinear = 1,  // Filter horizontally only.
+  kFilterNone = 0,      // Point sample; Fastest.
+  kFilterLinear = 1,    // Filter horizontally only.
   kFilterBilinear = 2,  // Faster than box, but lower quality scaling down.
-  kFilterBox = 3  // Highest quality.
+  kFilterBox = 3        // Highest quality.
 } FilterModeEnum;
 
 // Scale a YUV plane.
 LIBYUV_API
-void ScalePlane(const uint8* src, int src_stride,
-                int src_width, int src_height,
-                uint8* dst, int dst_stride,
-                int dst_width, int dst_height,
+void ScalePlane(const uint8_t* src,
+                int src_stride,
+                int src_width,
+                int src_height,
+                uint8_t* dst,
+                int dst_stride,
+                int dst_width,
+                int dst_height,
                 enum FilterMode filtering);
 
 LIBYUV_API
-void ScalePlane_16(const uint16* src, int src_stride,
-                   int src_width, int src_height,
-                   uint16* dst, int dst_stride,
-                   int dst_width, int dst_height,
+void ScalePlane_16(const uint16_t* src,
+                   int src_stride,
+                   int src_width,
+                   int src_height,
+                   uint16_t* dst,
+                   int dst_stride,
+                   int dst_width,
+                   int dst_height,
                    enum FilterMode filtering);
 
 // Scales a YUV 4:2:0 image from the src width and height to the
@@ -52,44 +60,64 @@
 // Returns 0 if successful.
 
 LIBYUV_API
-int I420Scale(const uint8* src_y, int src_stride_y,
-              const uint8* src_u, int src_stride_u,
-              const uint8* src_v, int src_stride_v,
-              int src_width, int src_height,
-              uint8* dst_y, int dst_stride_y,
-              uint8* dst_u, int dst_stride_u,
-              uint8* dst_v, int dst_stride_v,
-              int dst_width, int dst_height,
+int I420Scale(const uint8_t* src_y,
+              int src_stride_y,
+              const uint8_t* src_u,
+              int src_stride_u,
+              const uint8_t* src_v,
+              int src_stride_v,
+              int src_width,
+              int src_height,
+              uint8_t* dst_y,
+              int dst_stride_y,
+              uint8_t* dst_u,
+              int dst_stride_u,
+              uint8_t* dst_v,
+              int dst_stride_v,
+              int dst_width,
+              int dst_height,
               enum FilterMode filtering);
 
 LIBYUV_API
-int I420Scale_16(const uint16* src_y, int src_stride_y,
-                 const uint16* src_u, int src_stride_u,
-                 const uint16* src_v, int src_stride_v,
-                 int src_width, int src_height,
-                 uint16* dst_y, int dst_stride_y,
-                 uint16* dst_u, int dst_stride_u,
-                 uint16* dst_v, int dst_stride_v,
-                 int dst_width, int dst_height,
+int I420Scale_16(const uint16_t* src_y,
+                 int src_stride_y,
+                 const uint16_t* src_u,
+                 int src_stride_u,
+                 const uint16_t* src_v,
+                 int src_stride_v,
+                 int src_width,
+                 int src_height,
+                 uint16_t* dst_y,
+                 int dst_stride_y,
+                 uint16_t* dst_u,
+                 int dst_stride_u,
+                 uint16_t* dst_v,
+                 int dst_stride_v,
+                 int dst_width,
+                 int dst_height,
                  enum FilterMode filtering);
 
 #ifdef __cplusplus
 // Legacy API.  Deprecated.
 LIBYUV_API
-int Scale(const uint8* src_y, const uint8* src_u, const uint8* src_v,
-          int src_stride_y, int src_stride_u, int src_stride_v,
-          int src_width, int src_height,
-          uint8* dst_y, uint8* dst_u, uint8* dst_v,
-          int dst_stride_y, int dst_stride_u, int dst_stride_v,
-          int dst_width, int dst_height,
+int Scale(const uint8_t* src_y,
+          const uint8_t* src_u,
+          const uint8_t* src_v,
+          int src_stride_y,
+          int src_stride_u,
+          int src_stride_v,
+          int src_width,
+          int src_height,
+          uint8_t* dst_y,
+          uint8_t* dst_u,
+          uint8_t* dst_v,
+          int dst_stride_y,
+          int dst_stride_u,
+          int dst_stride_v,
+          int dst_width,
+          int dst_height,
           LIBYUV_BOOL interpolate);
 
-// Legacy API.  Deprecated.
-LIBYUV_API
-int ScaleOffset(const uint8* src_i420, int src_width, int src_height,
-                uint8* dst_i420, int dst_width, int dst_height, int dst_yoffset,
-                LIBYUV_BOOL interpolate);
-
 // For testing, allow disabling of specialized scalers.
 LIBYUV_API
 void SetUseReferenceImpl(LIBYUV_BOOL use);
@@ -100,4 +128,4 @@
 }  // namespace libyuv
 #endif
 
-#endif  // INCLUDE_LIBYUV_SCALE_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_SCALE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/scale_argb.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/scale_argb.h
index b56cf52..7641f18 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/scale_argb.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/scale_argb.h
@@ -8,7 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_SCALE_ARGB_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_SCALE_ARGB_H_
 #define INCLUDE_LIBYUV_SCALE_ARGB_H_
 
 #include "libyuv/basic_types.h"
@@ -20,32 +20,52 @@
 #endif
 
 LIBYUV_API
-int ARGBScale(const uint8* src_argb, int src_stride_argb,
-              int src_width, int src_height,
-              uint8* dst_argb, int dst_stride_argb,
-              int dst_width, int dst_height,
+int ARGBScale(const uint8_t* src_argb,
+              int src_stride_argb,
+              int src_width,
+              int src_height,
+              uint8_t* dst_argb,
+              int dst_stride_argb,
+              int dst_width,
+              int dst_height,
               enum FilterMode filtering);
 
 // Clipped scale takes destination rectangle coordinates for clip values.
 LIBYUV_API
-int ARGBScaleClip(const uint8* src_argb, int src_stride_argb,
-                  int src_width, int src_height,
-                  uint8* dst_argb, int dst_stride_argb,
-                  int dst_width, int dst_height,
-                  int clip_x, int clip_y, int clip_width, int clip_height,
+int ARGBScaleClip(const uint8_t* src_argb,
+                  int src_stride_argb,
+                  int src_width,
+                  int src_height,
+                  uint8_t* dst_argb,
+                  int dst_stride_argb,
+                  int dst_width,
+                  int dst_height,
+                  int clip_x,
+                  int clip_y,
+                  int clip_width,
+                  int clip_height,
                   enum FilterMode filtering);
 
 // Scale with YUV conversion to ARGB and clipping.
 LIBYUV_API
-int YUVToARGBScaleClip(const uint8* src_y, int src_stride_y,
-                       const uint8* src_u, int src_stride_u,
-                       const uint8* src_v, int src_stride_v,
-                       uint32 src_fourcc,
-                       int src_width, int src_height,
-                       uint8* dst_argb, int dst_stride_argb,
-                       uint32 dst_fourcc,
-                       int dst_width, int dst_height,
-                       int clip_x, int clip_y, int clip_width, int clip_height,
+int YUVToARGBScaleClip(const uint8_t* src_y,
+                       int src_stride_y,
+                       const uint8_t* src_u,
+                       int src_stride_u,
+                       const uint8_t* src_v,
+                       int src_stride_v,
+                       uint32_t src_fourcc,
+                       int src_width,
+                       int src_height,
+                       uint8_t* dst_argb,
+                       int dst_stride_argb,
+                       uint32_t dst_fourcc,
+                       int dst_width,
+                       int dst_height,
+                       int clip_x,
+                       int clip_y,
+                       int clip_width,
+                       int clip_height,
                        enum FilterMode filtering);
 
 #ifdef __cplusplus
@@ -53,4 +73,4 @@
 }  // namespace libyuv
 #endif
 
-#endif  // INCLUDE_LIBYUV_SCALE_ARGB_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_SCALE_ARGB_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/scale_row.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/scale_row.h
index df699e6..7194ba0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/scale_row.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/scale_row.h
@@ -8,7 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_SCALE_ROW_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_SCALE_ROW_H_
 #define INCLUDE_LIBYUV_SCALE_ROW_H_
 
 #include "libyuv/basic_types.h"
@@ -19,17 +19,20 @@
 extern "C" {
 #endif
 
-#if defined(__pnacl__) || defined(__CLR_VER) || \
-    (defined(__i386__) && !defined(__SSE2__))
+#if defined(__pnacl__) || defined(__CLR_VER) ||            \
+    (defined(__native_client__) && defined(__x86_64__)) || \
+    (defined(__i386__) && !defined(__SSE__) && !defined(__clang__))
 #define LIBYUV_DISABLE_X86
 #endif
+#if defined(__native_client__)
+#define LIBYUV_DISABLE_NEON
+#endif
 // MemorySanitizer does not support assembly code yet. http://crbug.com/344505
 #if defined(__has_feature)
 #if __has_feature(memory_sanitizer)
 #define LIBYUV_DISABLE_X86
 #endif
 #endif
-
 // GCC >= 4.7.0 required for AVX2.
 #if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
 #if (__GNUC__ > 4) || (__GNUC__ == 4 && (__GNUC_MINOR__ >= 7))
@@ -45,8 +48,8 @@
 #endif  // __clang__
 
 // Visual C 2012 required for AVX2.
-#if defined(_M_IX86) && !defined(__clang__) && \
-    defined(_MSC_VER) && _MSC_VER >= 1700
+#if defined(_M_IX86) && !defined(__clang__) && defined(_MSC_VER) && \
+    _MSC_VER >= 1700
 #define VISUALC_HAS_AVX2 1
 #endif  // VisualStudio >= 2012
 
@@ -72,15 +75,16 @@
 // The following are available on all x86 platforms, but
 // require VS2012, clang 3.4 or gcc 4.7.
 // The code supports NaCL but requires a new compiler and validator.
-#if !defined(LIBYUV_DISABLE_X86) && (defined(VISUALC_HAS_AVX2) || \
-    defined(CLANG_HAS_AVX2) || defined(GCC_HAS_AVX2))
+#if !defined(LIBYUV_DISABLE_X86) &&                          \
+    (defined(VISUALC_HAS_AVX2) || defined(CLANG_HAS_AVX2) || \
+     defined(GCC_HAS_AVX2))
 #define HAS_SCALEADDROW_AVX2
 #define HAS_SCALEROWDOWN2_AVX2
 #define HAS_SCALEROWDOWN4_AVX2
 #endif
 
 // The following are available on Neon platforms:
-#if !defined(LIBYUV_DISABLE_NEON) && !defined(__native_client__) && \
+#if !defined(LIBYUV_DISABLE_NEON) && \
     (defined(__ARM_NEON__) || defined(LIBYUV_NEON) || defined(__aarch64__))
 #define HAS_SCALEARGBCOLS_NEON
 #define HAS_SCALEARGBROWDOWN2_NEON
@@ -93,33 +97,51 @@
 #define HAS_SCALEARGBFILTERCOLS_NEON
 #endif
 
-// The following are available on Mips platforms:
-#if !defined(LIBYUV_DISABLE_MIPS) && !defined(__native_client__) && \
-    defined(__mips__) && defined(__mips_dsp) && (__mips_dsp_rev >= 2)
-#define HAS_SCALEROWDOWN2_DSPR2
-#define HAS_SCALEROWDOWN4_DSPR2
-#define HAS_SCALEROWDOWN34_DSPR2
-#define HAS_SCALEROWDOWN38_DSPR2
+#if !defined(LIBYUV_DISABLE_MSA) && defined(__mips_msa)
+#define HAS_SCALEADDROW_MSA
+#define HAS_SCALEARGBCOLS_MSA
+#define HAS_SCALEARGBFILTERCOLS_MSA
+#define HAS_SCALEARGBROWDOWN2_MSA
+#define HAS_SCALEARGBROWDOWNEVEN_MSA
+#define HAS_SCALEFILTERCOLS_MSA
+#define HAS_SCALEROWDOWN2_MSA
+#define HAS_SCALEROWDOWN34_MSA
+#define HAS_SCALEROWDOWN38_MSA
+#define HAS_SCALEROWDOWN4_MSA
 #endif
 
 // Scale ARGB vertically with bilinear interpolation.
 void ScalePlaneVertical(int src_height,
-                        int dst_width, int dst_height,
-                        int src_stride, int dst_stride,
-                        const uint8* src_argb, uint8* dst_argb,
-                        int x, int y, int dy,
-                        int bpp, enum FilterMode filtering);
+                        int dst_width,
+                        int dst_height,
+                        int src_stride,
+                        int dst_stride,
+                        const uint8_t* src_argb,
+                        uint8_t* dst_argb,
+                        int x,
+                        int y,
+                        int dy,
+                        int bpp,
+                        enum FilterMode filtering);
 
 void ScalePlaneVertical_16(int src_height,
-                           int dst_width, int dst_height,
-                           int src_stride, int dst_stride,
-                           const uint16* src_argb, uint16* dst_argb,
-                           int x, int y, int dy,
-                           int wpp, enum FilterMode filtering);
+                           int dst_width,
+                           int dst_height,
+                           int src_stride,
+                           int dst_stride,
+                           const uint16_t* src_argb,
+                           uint16_t* dst_argb,
+                           int x,
+                           int y,
+                           int dy,
+                           int wpp,
+                           enum FilterMode filtering);
 
 // Simplify the filtering based on scale factors.
-enum FilterMode ScaleFilterReduce(int src_width, int src_height,
-                                  int dst_width, int dst_height,
+enum FilterMode ScaleFilterReduce(int src_width,
+                                  int src_height,
+                                  int dst_width,
+                                  int dst_height,
                                   enum FilterMode filtering);
 
 // Divide num by div and return as 16.16 fixed point result.
@@ -137,367 +159,786 @@
 #endif
 
 // Compute slope values for stepping.
-void ScaleSlope(int src_width, int src_height,
-                int dst_width, int dst_height,
+void ScaleSlope(int src_width,
+                int src_height,
+                int dst_width,
+                int dst_height,
                 enum FilterMode filtering,
-                int* x, int* y, int* dx, int* dy);
+                int* x,
+                int* y,
+                int* dx,
+                int* dy);
 
-void ScaleRowDown2_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                     uint8* dst, int dst_width);
-void ScaleRowDown2_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                        uint16* dst, int dst_width);
-void ScaleRowDown2Linear_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst, int dst_width);
-void ScaleRowDown2Linear_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                              uint16* dst, int dst_width);
-void ScaleRowDown2Box_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst, int dst_width);
-void ScaleRowDown2Box_Odd_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst, int dst_width);
-void ScaleRowDown2Box_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                           uint16* dst, int dst_width);
-void ScaleRowDown4_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                     uint8* dst, int dst_width);
-void ScaleRowDown4_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                        uint16* dst, int dst_width);
-void ScaleRowDown4Box_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst, int dst_width);
-void ScaleRowDown4Box_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                           uint16* dst, int dst_width);
-void ScaleRowDown34_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                      uint8* dst, int dst_width);
-void ScaleRowDown34_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                         uint16* dst, int dst_width);
-void ScaleRowDown34_0_Box_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* d, int dst_width);
-void ScaleRowDown34_0_Box_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                               uint16* d, int dst_width);
-void ScaleRowDown34_1_Box_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* d, int dst_width);
-void ScaleRowDown34_1_Box_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                               uint16* d, int dst_width);
-void ScaleCols_C(uint8* dst_ptr, const uint8* src_ptr,
-                 int dst_width, int x, int dx);
-void ScaleCols_16_C(uint16* dst_ptr, const uint16* src_ptr,
-                    int dst_width, int x, int dx);
-void ScaleColsUp2_C(uint8* dst_ptr, const uint8* src_ptr,
-                    int dst_width, int, int);
-void ScaleColsUp2_16_C(uint16* dst_ptr, const uint16* src_ptr,
-                       int dst_width, int, int);
-void ScaleFilterCols_C(uint8* dst_ptr, const uint8* src_ptr,
-                       int dst_width, int x, int dx);
-void ScaleFilterCols_16_C(uint16* dst_ptr, const uint16* src_ptr,
-                          int dst_width, int x, int dx);
-void ScaleFilterCols64_C(uint8* dst_ptr, const uint8* src_ptr,
-                         int dst_width, int x, int dx);
-void ScaleFilterCols64_16_C(uint16* dst_ptr, const uint16* src_ptr,
-                            int dst_width, int x, int dx);
-void ScaleRowDown38_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                      uint8* dst, int dst_width);
-void ScaleRowDown38_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                         uint16* dst, int dst_width);
-void ScaleRowDown38_3_Box_C(const uint8* src_ptr,
+void ScaleRowDown2_C(const uint8_t* src_ptr,
+                     ptrdiff_t src_stride,
+                     uint8_t* dst,
+                     int dst_width);
+void ScaleRowDown2_16_C(const uint16_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint16_t* dst,
+                        int dst_width);
+void ScaleRowDown2Linear_C(const uint8_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst,
+                           int dst_width);
+void ScaleRowDown2Linear_16_C(const uint16_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint16_t* dst,
+                              int dst_width);
+void ScaleRowDown2Box_C(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        int dst_width);
+void ScaleRowDown2Box_Odd_C(const uint8_t* src_ptr,
                             ptrdiff_t src_stride,
-                            uint8* dst_ptr, int dst_width);
-void ScaleRowDown38_3_Box_16_C(const uint16* src_ptr,
-                               ptrdiff_t src_stride,
-                               uint16* dst_ptr, int dst_width);
-void ScaleRowDown38_2_Box_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst_ptr, int dst_width);
-void ScaleRowDown38_2_Box_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                               uint16* dst_ptr, int dst_width);
-void ScaleAddRow_C(const uint8* src_ptr, uint16* dst_ptr, int src_width);
-void ScaleAddRow_16_C(const uint16* src_ptr, uint32* dst_ptr, int src_width);
-void ScaleARGBRowDown2_C(const uint8* src_argb,
+                            uint8_t* dst,
+                            int dst_width);
+void ScaleRowDown2Box_16_C(const uint16_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint16_t* dst,
+                           int dst_width);
+void ScaleRowDown4_C(const uint8_t* src_ptr,
+                     ptrdiff_t src_stride,
+                     uint8_t* dst,
+                     int dst_width);
+void ScaleRowDown4_16_C(const uint16_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint16_t* dst,
+                        int dst_width);
+void ScaleRowDown4Box_C(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        int dst_width);
+void ScaleRowDown4Box_16_C(const uint16_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint16_t* dst,
+                           int dst_width);
+void ScaleRowDown34_C(const uint8_t* src_ptr,
+                      ptrdiff_t src_stride,
+                      uint8_t* dst,
+                      int dst_width);
+void ScaleRowDown34_16_C(const uint16_t* src_ptr,
                          ptrdiff_t src_stride,
-                         uint8* dst_argb, int dst_width);
-void ScaleARGBRowDown2Linear_C(const uint8* src_argb,
+                         uint16_t* dst,
+                         int dst_width);
+void ScaleRowDown34_0_Box_C(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* d,
+                            int dst_width);
+void ScaleRowDown34_0_Box_16_C(const uint16_t* src_ptr,
                                ptrdiff_t src_stride,
-                               uint8* dst_argb, int dst_width);
-void ScaleARGBRowDown2Box_C(const uint8* src_argb, ptrdiff_t src_stride,
-                            uint8* dst_argb, int dst_width);
-void ScaleARGBRowDownEven_C(const uint8* src_argb, ptrdiff_t src_stride,
+                               uint16_t* d,
+                               int dst_width);
+void ScaleRowDown34_1_Box_C(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* d,
+                            int dst_width);
+void ScaleRowDown34_1_Box_16_C(const uint16_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint16_t* d,
+                               int dst_width);
+void ScaleCols_C(uint8_t* dst_ptr,
+                 const uint8_t* src_ptr,
+                 int dst_width,
+                 int x,
+                 int dx);
+void ScaleCols_16_C(uint16_t* dst_ptr,
+                    const uint16_t* src_ptr,
+                    int dst_width,
+                    int x,
+                    int dx);
+void ScaleColsUp2_C(uint8_t* dst_ptr,
+                    const uint8_t* src_ptr,
+                    int dst_width,
+                    int,
+                    int);
+void ScaleColsUp2_16_C(uint16_t* dst_ptr,
+                       const uint16_t* src_ptr,
+                       int dst_width,
+                       int,
+                       int);
+void ScaleFilterCols_C(uint8_t* dst_ptr,
+                       const uint8_t* src_ptr,
+                       int dst_width,
+                       int x,
+                       int dx);
+void ScaleFilterCols_16_C(uint16_t* dst_ptr,
+                          const uint16_t* src_ptr,
+                          int dst_width,
+                          int x,
+                          int dx);
+void ScaleFilterCols64_C(uint8_t* dst_ptr,
+                         const uint8_t* src_ptr,
+                         int dst_width,
+                         int x32,
+                         int dx);
+void ScaleFilterCols64_16_C(uint16_t* dst_ptr,
+                            const uint16_t* src_ptr,
+                            int dst_width,
+                            int x32,
+                            int dx);
+void ScaleRowDown38_C(const uint8_t* src_ptr,
+                      ptrdiff_t src_stride,
+                      uint8_t* dst,
+                      int dst_width);
+void ScaleRowDown38_16_C(const uint16_t* src_ptr,
+                         ptrdiff_t src_stride,
+                         uint16_t* dst,
+                         int dst_width);
+void ScaleRowDown38_3_Box_C(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_ptr,
+                            int dst_width);
+void ScaleRowDown38_3_Box_16_C(const uint16_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint16_t* dst_ptr,
+                               int dst_width);
+void ScaleRowDown38_2_Box_C(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_ptr,
+                            int dst_width);
+void ScaleRowDown38_2_Box_16_C(const uint16_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint16_t* dst_ptr,
+                               int dst_width);
+void ScaleAddRow_C(const uint8_t* src_ptr, uint16_t* dst_ptr, int src_width);
+void ScaleAddRow_16_C(const uint16_t* src_ptr,
+                      uint32_t* dst_ptr,
+                      int src_width);
+void ScaleARGBRowDown2_C(const uint8_t* src_argb,
+                         ptrdiff_t src_stride,
+                         uint8_t* dst_argb,
+                         int dst_width);
+void ScaleARGBRowDown2Linear_C(const uint8_t* src_argb,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst_argb,
+                               int dst_width);
+void ScaleARGBRowDown2Box_C(const uint8_t* src_argb,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_argb,
+                            int dst_width);
+void ScaleARGBRowDownEven_C(const uint8_t* src_argb,
+                            ptrdiff_t src_stride,
                             int src_stepx,
-                            uint8* dst_argb, int dst_width);
-void ScaleARGBRowDownEvenBox_C(const uint8* src_argb,
+                            uint8_t* dst_argb,
+                            int dst_width);
+void ScaleARGBRowDownEvenBox_C(const uint8_t* src_argb,
                                ptrdiff_t src_stride,
                                int src_stepx,
-                               uint8* dst_argb, int dst_width);
-void ScaleARGBCols_C(uint8* dst_argb, const uint8* src_argb,
-                     int dst_width, int x, int dx);
-void ScaleARGBCols64_C(uint8* dst_argb, const uint8* src_argb,
-                       int dst_width, int x, int dx);
-void ScaleARGBColsUp2_C(uint8* dst_argb, const uint8* src_argb,
-                        int dst_width, int, int);
-void ScaleARGBFilterCols_C(uint8* dst_argb, const uint8* src_argb,
-                           int dst_width, int x, int dx);
-void ScaleARGBFilterCols64_C(uint8* dst_argb, const uint8* src_argb,
-                             int dst_width, int x, int dx);
+                               uint8_t* dst_argb,
+                               int dst_width);
+void ScaleARGBCols_C(uint8_t* dst_argb,
+                     const uint8_t* src_argb,
+                     int dst_width,
+                     int x,
+                     int dx);
+void ScaleARGBCols64_C(uint8_t* dst_argb,
+                       const uint8_t* src_argb,
+                       int dst_width,
+                       int x32,
+                       int dx);
+void ScaleARGBColsUp2_C(uint8_t* dst_argb,
+                        const uint8_t* src_argb,
+                        int dst_width,
+                        int,
+                        int);
+void ScaleARGBFilterCols_C(uint8_t* dst_argb,
+                           const uint8_t* src_argb,
+                           int dst_width,
+                           int x,
+                           int dx);
+void ScaleARGBFilterCols64_C(uint8_t* dst_argb,
+                             const uint8_t* src_argb,
+                             int dst_width,
+                             int x32,
+                             int dx);
 
 // Specialized scalers for x86.
-void ScaleRowDown2_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                         uint8* dst_ptr, int dst_width);
-void ScaleRowDown2Linear_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width);
-void ScaleRowDown2Box_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst_ptr, int dst_width);
-void ScaleRowDown2_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst_ptr, int dst_width);
-void ScaleRowDown2Linear_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                              uint8* dst_ptr, int dst_width);
-void ScaleRowDown2Box_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst_ptr, int dst_width);
-void ScaleRowDown4_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                         uint8* dst_ptr, int dst_width);
-void ScaleRowDown4Box_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst_ptr, int dst_width);
-void ScaleRowDown4_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst_ptr, int dst_width);
-void ScaleRowDown4Box_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst_ptr, int dst_width);
+void ScaleRowDown2_SSSE3(const uint8_t* src_ptr,
+                         ptrdiff_t src_stride,
+                         uint8_t* dst_ptr,
+                         int dst_width);
+void ScaleRowDown2Linear_SSSE3(const uint8_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst_ptr,
+                               int dst_width);
+void ScaleRowDown2Box_SSSE3(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_ptr,
+                            int dst_width);
+void ScaleRowDown2_AVX2(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst_ptr,
+                        int dst_width);
+void ScaleRowDown2Linear_AVX2(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst_ptr,
+                              int dst_width);
+void ScaleRowDown2Box_AVX2(const uint8_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst_ptr,
+                           int dst_width);
+void ScaleRowDown4_SSSE3(const uint8_t* src_ptr,
+                         ptrdiff_t src_stride,
+                         uint8_t* dst_ptr,
+                         int dst_width);
+void ScaleRowDown4Box_SSSE3(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_ptr,
+                            int dst_width);
+void ScaleRowDown4_AVX2(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst_ptr,
+                        int dst_width);
+void ScaleRowDown4Box_AVX2(const uint8_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst_ptr,
+                           int dst_width);
 
-void ScaleRowDown34_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                          uint8* dst_ptr, int dst_width);
-void ScaleRowDown34_1_Box_SSSE3(const uint8* src_ptr,
+void ScaleRowDown34_SSSE3(const uint8_t* src_ptr,
+                          ptrdiff_t src_stride,
+                          uint8_t* dst_ptr,
+                          int dst_width);
+void ScaleRowDown34_1_Box_SSSE3(const uint8_t* src_ptr,
                                 ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width);
-void ScaleRowDown34_0_Box_SSSE3(const uint8* src_ptr,
+                                uint8_t* dst_ptr,
+                                int dst_width);
+void ScaleRowDown34_0_Box_SSSE3(const uint8_t* src_ptr,
                                 ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width);
-void ScaleRowDown38_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                          uint8* dst_ptr, int dst_width);
-void ScaleRowDown38_3_Box_SSSE3(const uint8* src_ptr,
+                                uint8_t* dst_ptr,
+                                int dst_width);
+void ScaleRowDown38_SSSE3(const uint8_t* src_ptr,
+                          ptrdiff_t src_stride,
+                          uint8_t* dst_ptr,
+                          int dst_width);
+void ScaleRowDown38_3_Box_SSSE3(const uint8_t* src_ptr,
                                 ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width);
-void ScaleRowDown38_2_Box_SSSE3(const uint8* src_ptr,
+                                uint8_t* dst_ptr,
+                                int dst_width);
+void ScaleRowDown38_2_Box_SSSE3(const uint8_t* src_ptr,
                                 ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width);
-void ScaleRowDown2_Any_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                             uint8* dst_ptr, int dst_width);
-void ScaleRowDown2Linear_Any_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                                   uint8* dst_ptr, int dst_width);
-void ScaleRowDown2Box_Any_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width);
-void ScaleRowDown2Box_Odd_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width);
-void ScaleRowDown2_Any_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst_ptr, int dst_width);
-void ScaleRowDown2Linear_Any_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                                  uint8* dst_ptr, int dst_width);
-void ScaleRowDown2Box_Any_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width);
-void ScaleRowDown2Box_Odd_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width);
-void ScaleRowDown4_Any_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                             uint8* dst_ptr, int dst_width);
-void ScaleRowDown4Box_Any_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width);
-void ScaleRowDown4_Any_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst_ptr, int dst_width);
-void ScaleRowDown4Box_Any_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width);
+                                uint8_t* dst_ptr,
+                                int dst_width);
+void ScaleRowDown2_Any_SSSE3(const uint8_t* src_ptr,
+                             ptrdiff_t src_stride,
+                             uint8_t* dst_ptr,
+                             int dst_width);
+void ScaleRowDown2Linear_Any_SSSE3(const uint8_t* src_ptr,
+                                   ptrdiff_t src_stride,
+                                   uint8_t* dst_ptr,
+                                   int dst_width);
+void ScaleRowDown2Box_Any_SSSE3(const uint8_t* src_ptr,
+                                ptrdiff_t src_stride,
+                                uint8_t* dst_ptr,
+                                int dst_width);
+void ScaleRowDown2Box_Odd_SSSE3(const uint8_t* src_ptr,
+                                ptrdiff_t src_stride,
+                                uint8_t* dst_ptr,
+                                int dst_width);
+void ScaleRowDown2_Any_AVX2(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_ptr,
+                            int dst_width);
+void ScaleRowDown2Linear_Any_AVX2(const uint8_t* src_ptr,
+                                  ptrdiff_t src_stride,
+                                  uint8_t* dst_ptr,
+                                  int dst_width);
+void ScaleRowDown2Box_Any_AVX2(const uint8_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst_ptr,
+                               int dst_width);
+void ScaleRowDown2Box_Odd_AVX2(const uint8_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst_ptr,
+                               int dst_width);
+void ScaleRowDown4_Any_SSSE3(const uint8_t* src_ptr,
+                             ptrdiff_t src_stride,
+                             uint8_t* dst_ptr,
+                             int dst_width);
+void ScaleRowDown4Box_Any_SSSE3(const uint8_t* src_ptr,
+                                ptrdiff_t src_stride,
+                                uint8_t* dst_ptr,
+                                int dst_width);
+void ScaleRowDown4_Any_AVX2(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_ptr,
+                            int dst_width);
+void ScaleRowDown4Box_Any_AVX2(const uint8_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst_ptr,
+                               int dst_width);
 
-void ScaleRowDown34_Any_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                              uint8* dst_ptr, int dst_width);
-void ScaleRowDown34_1_Box_Any_SSSE3(const uint8* src_ptr,
+void ScaleRowDown34_Any_SSSE3(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst_ptr,
+                              int dst_width);
+void ScaleRowDown34_1_Box_Any_SSSE3(const uint8_t* src_ptr,
                                     ptrdiff_t src_stride,
-                                    uint8* dst_ptr, int dst_width);
-void ScaleRowDown34_0_Box_Any_SSSE3(const uint8* src_ptr,
+                                    uint8_t* dst_ptr,
+                                    int dst_width);
+void ScaleRowDown34_0_Box_Any_SSSE3(const uint8_t* src_ptr,
                                     ptrdiff_t src_stride,
-                                    uint8* dst_ptr, int dst_width);
-void ScaleRowDown38_Any_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                              uint8* dst_ptr, int dst_width);
-void ScaleRowDown38_3_Box_Any_SSSE3(const uint8* src_ptr,
+                                    uint8_t* dst_ptr,
+                                    int dst_width);
+void ScaleRowDown38_Any_SSSE3(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst_ptr,
+                              int dst_width);
+void ScaleRowDown38_3_Box_Any_SSSE3(const uint8_t* src_ptr,
                                     ptrdiff_t src_stride,
-                                    uint8* dst_ptr, int dst_width);
-void ScaleRowDown38_2_Box_Any_SSSE3(const uint8* src_ptr,
+                                    uint8_t* dst_ptr,
+                                    int dst_width);
+void ScaleRowDown38_2_Box_Any_SSSE3(const uint8_t* src_ptr,
                                     ptrdiff_t src_stride,
-                                    uint8* dst_ptr, int dst_width);
+                                    uint8_t* dst_ptr,
+                                    int dst_width);
 
-void ScaleAddRow_SSE2(const uint8* src_ptr, uint16* dst_ptr, int src_width);
-void ScaleAddRow_AVX2(const uint8* src_ptr, uint16* dst_ptr, int src_width);
-void ScaleAddRow_Any_SSE2(const uint8* src_ptr, uint16* dst_ptr, int src_width);
-void ScaleAddRow_Any_AVX2(const uint8* src_ptr, uint16* dst_ptr, int src_width);
+void ScaleAddRow_SSE2(const uint8_t* src_ptr, uint16_t* dst_ptr, int src_width);
+void ScaleAddRow_AVX2(const uint8_t* src_ptr, uint16_t* dst_ptr, int src_width);
+void ScaleAddRow_Any_SSE2(const uint8_t* src_ptr,
+                          uint16_t* dst_ptr,
+                          int src_width);
+void ScaleAddRow_Any_AVX2(const uint8_t* src_ptr,
+                          uint16_t* dst_ptr,
+                          int src_width);
 
-void ScaleFilterCols_SSSE3(uint8* dst_ptr, const uint8* src_ptr,
-                           int dst_width, int x, int dx);
-void ScaleColsUp2_SSE2(uint8* dst_ptr, const uint8* src_ptr,
-                       int dst_width, int x, int dx);
-
+void ScaleFilterCols_SSSE3(uint8_t* dst_ptr,
+                           const uint8_t* src_ptr,
+                           int dst_width,
+                           int x,
+                           int dx);
+void ScaleColsUp2_SSE2(uint8_t* dst_ptr,
+                       const uint8_t* src_ptr,
+                       int dst_width,
+                       int x,
+                       int dx);
 
 // ARGB Column functions
-void ScaleARGBCols_SSE2(uint8* dst_argb, const uint8* src_argb,
-                        int dst_width, int x, int dx);
-void ScaleARGBFilterCols_SSSE3(uint8* dst_argb, const uint8* src_argb,
-                               int dst_width, int x, int dx);
-void ScaleARGBColsUp2_SSE2(uint8* dst_argb, const uint8* src_argb,
-                           int dst_width, int x, int dx);
-void ScaleARGBFilterCols_NEON(uint8* dst_argb, const uint8* src_argb,
-                              int dst_width, int x, int dx);
-void ScaleARGBCols_NEON(uint8* dst_argb, const uint8* src_argb,
-                        int dst_width, int x, int dx);
-void ScaleARGBFilterCols_Any_NEON(uint8* dst_argb, const uint8* src_argb,
-                                  int dst_width, int x, int dx);
-void ScaleARGBCols_Any_NEON(uint8* dst_argb, const uint8* src_argb,
-                            int dst_width, int x, int dx);
+void ScaleARGBCols_SSE2(uint8_t* dst_argb,
+                        const uint8_t* src_argb,
+                        int dst_width,
+                        int x,
+                        int dx);
+void ScaleARGBFilterCols_SSSE3(uint8_t* dst_argb,
+                               const uint8_t* src_argb,
+                               int dst_width,
+                               int x,
+                               int dx);
+void ScaleARGBColsUp2_SSE2(uint8_t* dst_argb,
+                           const uint8_t* src_argb,
+                           int dst_width,
+                           int x,
+                           int dx);
+void ScaleARGBFilterCols_NEON(uint8_t* dst_argb,
+                              const uint8_t* src_argb,
+                              int dst_width,
+                              int x,
+                              int dx);
+void ScaleARGBCols_NEON(uint8_t* dst_argb,
+                        const uint8_t* src_argb,
+                        int dst_width,
+                        int x,
+                        int dx);
+void ScaleARGBFilterCols_Any_NEON(uint8_t* dst_ptr,
+                                  const uint8_t* src_ptr,
+                                  int dst_width,
+                                  int x,
+                                  int dx);
+void ScaleARGBCols_Any_NEON(uint8_t* dst_ptr,
+                            const uint8_t* src_ptr,
+                            int dst_width,
+                            int x,
+                            int dx);
+void ScaleARGBFilterCols_MSA(uint8_t* dst_argb,
+                             const uint8_t* src_argb,
+                             int dst_width,
+                             int x,
+                             int dx);
+void ScaleARGBCols_MSA(uint8_t* dst_argb,
+                       const uint8_t* src_argb,
+                       int dst_width,
+                       int x,
+                       int dx);
+void ScaleARGBFilterCols_Any_MSA(uint8_t* dst_ptr,
+                                 const uint8_t* src_ptr,
+                                 int dst_width,
+                                 int x,
+                                 int dx);
+void ScaleARGBCols_Any_MSA(uint8_t* dst_ptr,
+                           const uint8_t* src_ptr,
+                           int dst_width,
+                           int x,
+                           int dx);
 
 // ARGB Row functions
-void ScaleARGBRowDown2_SSE2(const uint8* src_argb, ptrdiff_t src_stride,
-                            uint8* dst_argb, int dst_width);
-void ScaleARGBRowDown2Linear_SSE2(const uint8* src_argb, ptrdiff_t src_stride,
-                                  uint8* dst_argb, int dst_width);
-void ScaleARGBRowDown2Box_SSE2(const uint8* src_argb, ptrdiff_t src_stride,
-                               uint8* dst_argb, int dst_width);
-void ScaleARGBRowDown2_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst, int dst_width);
-void ScaleARGBRowDown2Linear_NEON(const uint8* src_argb, ptrdiff_t src_stride,
-                                  uint8* dst_argb, int dst_width);
-void ScaleARGBRowDown2Box_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                               uint8* dst, int dst_width);
-void ScaleARGBRowDown2_Any_SSE2(const uint8* src_argb, ptrdiff_t src_stride,
-                                uint8* dst_argb, int dst_width);
-void ScaleARGBRowDown2Linear_Any_SSE2(const uint8* src_argb,
+void ScaleARGBRowDown2_SSE2(const uint8_t* src_argb,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_argb,
+                            int dst_width);
+void ScaleARGBRowDown2Linear_SSE2(const uint8_t* src_argb,
+                                  ptrdiff_t src_stride,
+                                  uint8_t* dst_argb,
+                                  int dst_width);
+void ScaleARGBRowDown2Box_SSE2(const uint8_t* src_argb,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst_argb,
+                               int dst_width);
+void ScaleARGBRowDown2_NEON(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst,
+                            int dst_width);
+void ScaleARGBRowDown2Linear_NEON(const uint8_t* src_argb,
+                                  ptrdiff_t src_stride,
+                                  uint8_t* dst_argb,
+                                  int dst_width);
+void ScaleARGBRowDown2Box_NEON(const uint8_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst,
+                               int dst_width);
+void ScaleARGBRowDown2_MSA(const uint8_t* src_argb,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst_argb,
+                           int dst_width);
+void ScaleARGBRowDown2Linear_MSA(const uint8_t* src_argb,
+                                 ptrdiff_t src_stride,
+                                 uint8_t* dst_argb,
+                                 int dst_width);
+void ScaleARGBRowDown2Box_MSA(const uint8_t* src_argb,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst_argb,
+                              int dst_width);
+void ScaleARGBRowDown2_Any_SSE2(const uint8_t* src_ptr,
+                                ptrdiff_t src_stride,
+                                uint8_t* dst_ptr,
+                                int dst_width);
+void ScaleARGBRowDown2Linear_Any_SSE2(const uint8_t* src_ptr,
                                       ptrdiff_t src_stride,
-                                      uint8* dst_argb, int dst_width);
-void ScaleARGBRowDown2Box_Any_SSE2(const uint8* src_argb, ptrdiff_t src_stride,
-                                   uint8* dst_argb, int dst_width);
-void ScaleARGBRowDown2_Any_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                                uint8* dst, int dst_width);
-void ScaleARGBRowDown2Linear_Any_NEON(const uint8* src_argb,
+                                      uint8_t* dst_ptr,
+                                      int dst_width);
+void ScaleARGBRowDown2Box_Any_SSE2(const uint8_t* src_ptr,
+                                   ptrdiff_t src_stride,
+                                   uint8_t* dst_ptr,
+                                   int dst_width);
+void ScaleARGBRowDown2_Any_NEON(const uint8_t* src_ptr,
+                                ptrdiff_t src_stride,
+                                uint8_t* dst_ptr,
+                                int dst_width);
+void ScaleARGBRowDown2Linear_Any_NEON(const uint8_t* src_ptr,
                                       ptrdiff_t src_stride,
-                                      uint8* dst_argb, int dst_width);
-void ScaleARGBRowDown2Box_Any_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                                   uint8* dst, int dst_width);
+                                      uint8_t* dst_ptr,
+                                      int dst_width);
+void ScaleARGBRowDown2Box_Any_NEON(const uint8_t* src_ptr,
+                                   ptrdiff_t src_stride,
+                                   uint8_t* dst_ptr,
+                                   int dst_width);
+void ScaleARGBRowDown2_Any_MSA(const uint8_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst_ptr,
+                               int dst_width);
+void ScaleARGBRowDown2Linear_Any_MSA(const uint8_t* src_ptr,
+                                     ptrdiff_t src_stride,
+                                     uint8_t* dst_ptr,
+                                     int dst_width);
+void ScaleARGBRowDown2Box_Any_MSA(const uint8_t* src_ptr,
+                                  ptrdiff_t src_stride,
+                                  uint8_t* dst_ptr,
+                                  int dst_width);
 
-void ScaleARGBRowDownEven_SSE2(const uint8* src_argb, ptrdiff_t src_stride,
-                               int src_stepx, uint8* dst_argb, int dst_width);
-void ScaleARGBRowDownEvenBox_SSE2(const uint8* src_argb, ptrdiff_t src_stride,
-                                  int src_stepx,
-                                  uint8* dst_argb, int dst_width);
-void ScaleARGBRowDownEven_NEON(const uint8* src_argb, ptrdiff_t src_stride,
+void ScaleARGBRowDownEven_SSE2(const uint8_t* src_argb,
+                               ptrdiff_t src_stride,
                                int src_stepx,
-                               uint8* dst_argb, int dst_width);
-void ScaleARGBRowDownEvenBox_NEON(const uint8* src_argb, ptrdiff_t src_stride,
+                               uint8_t* dst_argb,
+                               int dst_width);
+void ScaleARGBRowDownEvenBox_SSE2(const uint8_t* src_argb,
+                                  ptrdiff_t src_stride,
                                   int src_stepx,
-                                  uint8* dst_argb, int dst_width);
-void ScaleARGBRowDownEven_Any_SSE2(const uint8* src_argb, ptrdiff_t src_stride,
+                                  uint8_t* dst_argb,
+                                  int dst_width);
+void ScaleARGBRowDownEven_NEON(const uint8_t* src_argb,
+                               ptrdiff_t src_stride,
+                               int src_stepx,
+                               uint8_t* dst_argb,
+                               int dst_width);
+void ScaleARGBRowDownEvenBox_NEON(const uint8_t* src_argb,
+                                  ptrdiff_t src_stride,
+                                  int src_stepx,
+                                  uint8_t* dst_argb,
+                                  int dst_width);
+void ScaleARGBRowDownEven_MSA(const uint8_t* src_argb,
+                              ptrdiff_t src_stride,
+                              int32_t src_stepx,
+                              uint8_t* dst_argb,
+                              int dst_width);
+void ScaleARGBRowDownEvenBox_MSA(const uint8_t* src_argb,
+                                 ptrdiff_t src_stride,
+                                 int src_stepx,
+                                 uint8_t* dst_argb,
+                                 int dst_width);
+void ScaleARGBRowDownEven_Any_SSE2(const uint8_t* src_ptr,
+                                   ptrdiff_t src_stride,
                                    int src_stepx,
-                                   uint8* dst_argb, int dst_width);
-void ScaleARGBRowDownEvenBox_Any_SSE2(const uint8* src_argb,
+                                   uint8_t* dst_ptr,
+                                   int dst_width);
+void ScaleARGBRowDownEvenBox_Any_SSE2(const uint8_t* src_ptr,
                                       ptrdiff_t src_stride,
                                       int src_stepx,
-                                      uint8* dst_argb, int dst_width);
-void ScaleARGBRowDownEven_Any_NEON(const uint8* src_argb, ptrdiff_t src_stride,
+                                      uint8_t* dst_ptr,
+                                      int dst_width);
+void ScaleARGBRowDownEven_Any_NEON(const uint8_t* src_ptr,
+                                   ptrdiff_t src_stride,
                                    int src_stepx,
-                                   uint8* dst_argb, int dst_width);
-void ScaleARGBRowDownEvenBox_Any_NEON(const uint8* src_argb,
+                                   uint8_t* dst_ptr,
+                                   int dst_width);
+void ScaleARGBRowDownEvenBox_Any_NEON(const uint8_t* src_ptr,
                                       ptrdiff_t src_stride,
                                       int src_stepx,
-                                      uint8* dst_argb, int dst_width);
+                                      uint8_t* dst_ptr,
+                                      int dst_width);
+void ScaleARGBRowDownEven_Any_MSA(const uint8_t* src_ptr,
+                                  ptrdiff_t src_stride,
+                                  int32_t src_stepx,
+                                  uint8_t* dst_ptr,
+                                  int dst_width);
+void ScaleARGBRowDownEvenBox_Any_MSA(const uint8_t* src_ptr,
+                                     ptrdiff_t src_stride,
+                                     int src_stepx,
+                                     uint8_t* dst_ptr,
+                                     int dst_width);
 
 // ScaleRowDown2Box also used by planar functions
 // NEON downscalers with interpolation.
 
 // Note - not static due to reuse in convert for 444 to 420.
-void ScaleRowDown2_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst, int dst_width);
-void ScaleRowDown2Linear_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                              uint8* dst, int dst_width);
-void ScaleRowDown2Box_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst, int dst_width);
+void ScaleRowDown2_NEON(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        int dst_width);
+void ScaleRowDown2Linear_NEON(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst,
+                              int dst_width);
+void ScaleRowDown2Box_NEON(const uint8_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst,
+                           int dst_width);
 
-void ScaleRowDown4_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst_ptr, int dst_width);
-void ScaleRowDown4Box_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst_ptr, int dst_width);
+void ScaleRowDown4_NEON(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst_ptr,
+                        int dst_width);
+void ScaleRowDown4Box_NEON(const uint8_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst_ptr,
+                           int dst_width);
 
 // Down scale from 4 to 3 pixels. Use the neon multilane read/write
 //  to load up the every 4th pixel into a 4 different registers.
 // Point samples 32 pixels to 24 pixels.
-void ScaleRowDown34_NEON(const uint8* src_ptr,
+void ScaleRowDown34_NEON(const uint8_t* src_ptr,
                          ptrdiff_t src_stride,
-                         uint8* dst_ptr, int dst_width);
-void ScaleRowDown34_0_Box_NEON(const uint8* src_ptr,
+                         uint8_t* dst_ptr,
+                         int dst_width);
+void ScaleRowDown34_0_Box_NEON(const uint8_t* src_ptr,
                                ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width);
-void ScaleRowDown34_1_Box_NEON(const uint8* src_ptr,
+                               uint8_t* dst_ptr,
+                               int dst_width);
+void ScaleRowDown34_1_Box_NEON(const uint8_t* src_ptr,
                                ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width);
+                               uint8_t* dst_ptr,
+                               int dst_width);
 
 // 32 -> 12
-void ScaleRowDown38_NEON(const uint8* src_ptr,
+void ScaleRowDown38_NEON(const uint8_t* src_ptr,
                          ptrdiff_t src_stride,
-                         uint8* dst_ptr, int dst_width);
+                         uint8_t* dst_ptr,
+                         int dst_width);
 // 32x3 -> 12x1
-void ScaleRowDown38_3_Box_NEON(const uint8* src_ptr,
+void ScaleRowDown38_3_Box_NEON(const uint8_t* src_ptr,
                                ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width);
+                               uint8_t* dst_ptr,
+                               int dst_width);
 // 32x2 -> 12x1
-void ScaleRowDown38_2_Box_NEON(const uint8* src_ptr,
+void ScaleRowDown38_2_Box_NEON(const uint8_t* src_ptr,
                                ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width);
+                               uint8_t* dst_ptr,
+                               int dst_width);
 
-void ScaleRowDown2_Any_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst, int dst_width);
-void ScaleRowDown2Linear_Any_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                                  uint8* dst, int dst_width);
-void ScaleRowDown2Box_Any_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                               uint8* dst, int dst_width);
-void ScaleRowDown2Box_Odd_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                               uint8* dst, int dst_width);
-void ScaleRowDown4_Any_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst_ptr, int dst_width);
-void ScaleRowDown4Box_Any_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width);
-void ScaleRowDown34_Any_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                             uint8* dst_ptr, int dst_width);
-void ScaleRowDown34_0_Box_Any_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                                   uint8* dst_ptr, int dst_width);
-void ScaleRowDown34_1_Box_Any_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                                   uint8* dst_ptr, int dst_width);
+void ScaleRowDown2_Any_NEON(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_ptr,
+                            int dst_width);
+void ScaleRowDown2Linear_Any_NEON(const uint8_t* src_ptr,
+                                  ptrdiff_t src_stride,
+                                  uint8_t* dst_ptr,
+                                  int dst_width);
+void ScaleRowDown2Box_Any_NEON(const uint8_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst_ptr,
+                               int dst_width);
+void ScaleRowDown2Box_Odd_NEON(const uint8_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst_ptr,
+                               int dst_width);
+void ScaleRowDown4_Any_NEON(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_ptr,
+                            int dst_width);
+void ScaleRowDown4Box_Any_NEON(const uint8_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst_ptr,
+                               int dst_width);
+void ScaleRowDown34_Any_NEON(const uint8_t* src_ptr,
+                             ptrdiff_t src_stride,
+                             uint8_t* dst_ptr,
+                             int dst_width);
+void ScaleRowDown34_0_Box_Any_NEON(const uint8_t* src_ptr,
+                                   ptrdiff_t src_stride,
+                                   uint8_t* dst_ptr,
+                                   int dst_width);
+void ScaleRowDown34_1_Box_Any_NEON(const uint8_t* src_ptr,
+                                   ptrdiff_t src_stride,
+                                   uint8_t* dst_ptr,
+                                   int dst_width);
 // 32 -> 12
-void ScaleRowDown38_Any_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                             uint8* dst_ptr, int dst_width);
+void ScaleRowDown38_Any_NEON(const uint8_t* src_ptr,
+                             ptrdiff_t src_stride,
+                             uint8_t* dst_ptr,
+                             int dst_width);
 // 32x3 -> 12x1
-void ScaleRowDown38_3_Box_Any_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width);
+void ScaleRowDown38_3_Box_Any_NEON(const uint8_t* src_ptr,
+                                   ptrdiff_t src_stride,
+                                   uint8_t* dst_ptr,
+                                   int dst_width);
 // 32x2 -> 12x1
-void ScaleRowDown38_2_Box_Any_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width);
+void ScaleRowDown38_2_Box_Any_NEON(const uint8_t* src_ptr,
+                                   ptrdiff_t src_stride,
+                                   uint8_t* dst_ptr,
+                                   int dst_width);
 
-void ScaleAddRow_NEON(const uint8* src_ptr, uint16* dst_ptr, int src_width);
-void ScaleAddRow_Any_NEON(const uint8* src_ptr, uint16* dst_ptr, int src_width);
+void ScaleAddRow_NEON(const uint8_t* src_ptr, uint16_t* dst_ptr, int src_width);
+void ScaleAddRow_Any_NEON(const uint8_t* src_ptr,
+                          uint16_t* dst_ptr,
+                          int src_width);
 
-void ScaleFilterCols_NEON(uint8* dst_ptr, const uint8* src_ptr,
-                          int dst_width, int x, int dx);
+void ScaleFilterCols_NEON(uint8_t* dst_ptr,
+                          const uint8_t* src_ptr,
+                          int dst_width,
+                          int x,
+                          int dx);
 
-void ScaleFilterCols_Any_NEON(uint8* dst_ptr, const uint8* src_ptr,
-                              int dst_width, int x, int dx);
+void ScaleFilterCols_Any_NEON(uint8_t* dst_ptr,
+                              const uint8_t* src_ptr,
+                              int dst_width,
+                              int x,
+                              int dx);
 
-void ScaleRowDown2_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                         uint8* dst, int dst_width);
-void ScaleRowDown2Box_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst, int dst_width);
-void ScaleRowDown4_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                         uint8* dst, int dst_width);
-void ScaleRowDown4Box_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst, int dst_width);
-void ScaleRowDown34_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                          uint8* dst, int dst_width);
-void ScaleRowDown34_0_Box_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                                uint8* d, int dst_width);
-void ScaleRowDown34_1_Box_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                                uint8* d, int dst_width);
-void ScaleRowDown38_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                          uint8* dst, int dst_width);
-void ScaleRowDown38_2_Box_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width);
-void ScaleRowDown38_3_Box_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width);
+void ScaleRowDown2_MSA(const uint8_t* src_ptr,
+                       ptrdiff_t src_stride,
+                       uint8_t* dst,
+                       int dst_width);
+void ScaleRowDown2Linear_MSA(const uint8_t* src_ptr,
+                             ptrdiff_t src_stride,
+                             uint8_t* dst,
+                             int dst_width);
+void ScaleRowDown2Box_MSA(const uint8_t* src_ptr,
+                          ptrdiff_t src_stride,
+                          uint8_t* dst,
+                          int dst_width);
+void ScaleRowDown4_MSA(const uint8_t* src_ptr,
+                       ptrdiff_t src_stride,
+                       uint8_t* dst,
+                       int dst_width);
+void ScaleRowDown4Box_MSA(const uint8_t* src_ptr,
+                          ptrdiff_t src_stride,
+                          uint8_t* dst,
+                          int dst_width);
+void ScaleRowDown38_MSA(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        int dst_width);
+void ScaleRowDown38_2_Box_MSA(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst_ptr,
+                              int dst_width);
+void ScaleRowDown38_3_Box_MSA(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst_ptr,
+                              int dst_width);
+void ScaleAddRow_MSA(const uint8_t* src_ptr, uint16_t* dst_ptr, int src_width);
+void ScaleFilterCols_MSA(uint8_t* dst_ptr,
+                         const uint8_t* src_ptr,
+                         int dst_width,
+                         int x,
+                         int dx);
+void ScaleRowDown34_MSA(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        int dst_width);
+void ScaleRowDown34_0_Box_MSA(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* d,
+                              int dst_width);
+void ScaleRowDown34_1_Box_MSA(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* d,
+                              int dst_width);
+
+void ScaleRowDown2_Any_MSA(const uint8_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst_ptr,
+                           int dst_width);
+void ScaleRowDown2Linear_Any_MSA(const uint8_t* src_ptr,
+                                 ptrdiff_t src_stride,
+                                 uint8_t* dst_ptr,
+                                 int dst_width);
+void ScaleRowDown2Box_Any_MSA(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst_ptr,
+                              int dst_width);
+void ScaleRowDown4_Any_MSA(const uint8_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst_ptr,
+                           int dst_width);
+void ScaleRowDown4Box_Any_MSA(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst_ptr,
+                              int dst_width);
+void ScaleRowDown38_Any_MSA(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_ptr,
+                            int dst_width);
+void ScaleRowDown38_2_Box_Any_MSA(const uint8_t* src_ptr,
+                                  ptrdiff_t src_stride,
+                                  uint8_t* dst_ptr,
+                                  int dst_width);
+void ScaleRowDown38_3_Box_Any_MSA(const uint8_t* src_ptr,
+                                  ptrdiff_t src_stride,
+                                  uint8_t* dst_ptr,
+                                  int dst_width);
+void ScaleAddRow_Any_MSA(const uint8_t* src_ptr,
+                         uint16_t* dst_ptr,
+                         int src_width);
+void ScaleFilterCols_Any_MSA(uint8_t* dst_ptr,
+                             const uint8_t* src_ptr,
+                             int dst_width,
+                             int x,
+                             int dx);
+void ScaleRowDown34_Any_MSA(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_ptr,
+                            int dst_width);
+void ScaleRowDown34_0_Box_Any_MSA(const uint8_t* src_ptr,
+                                  ptrdiff_t src_stride,
+                                  uint8_t* dst_ptr,
+                                  int dst_width);
+void ScaleRowDown34_1_Box_Any_MSA(const uint8_t* src_ptr,
+                                  ptrdiff_t src_stride,
+                                  uint8_t* dst_ptr,
+                                  int dst_width);
 
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
 #endif
 
-#endif  // INCLUDE_LIBYUV_SCALE_ROW_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_SCALE_ROW_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/version.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/version.h
index 0fbdc02..7022785 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/version.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/version.h
@@ -8,9 +8,9 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef INCLUDE_LIBYUV_VERSION_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_VERSION_H_
 #define INCLUDE_LIBYUV_VERSION_H_
 
-#define LIBYUV_VERSION 1616
+#define LIBYUV_VERSION 1711
 
-#endif  // INCLUDE_LIBYUV_VERSION_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_VERSION_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/video_common.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/video_common.h
index ad934e4..bcef378 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/video_common.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/include/libyuv/video_common.h
@@ -10,7 +10,7 @@
 
 // Common definitions for video, including fourcc and VideoFormat.
 
-#ifndef INCLUDE_LIBYUV_VIDEO_COMMON_H_  // NOLINT
+#ifndef INCLUDE_LIBYUV_VIDEO_COMMON_H_
 #define INCLUDE_LIBYUV_VIDEO_COMMON_H_
 
 #include "libyuv/basic_types.h"
@@ -28,13 +28,13 @@
 // Needs to be a macro otherwise the OS X compiler complains when the kFormat*
 // constants are used in a switch.
 #ifdef __cplusplus
-#define FOURCC(a, b, c, d) ( \
-    (static_cast<uint32>(a)) | (static_cast<uint32>(b) << 8) | \
-    (static_cast<uint32>(c) << 16) | (static_cast<uint32>(d) << 24))
+#define FOURCC(a, b, c, d)                                        \
+  ((static_cast<uint32_t>(a)) | (static_cast<uint32_t>(b) << 8) | \
+   (static_cast<uint32_t>(c) << 16) | (static_cast<uint32_t>(d) << 24))
 #else
-#define FOURCC(a, b, c, d) ( \
-    ((uint32)(a)) | ((uint32)(b) << 8) | /* NOLINT */ \
-    ((uint32)(c) << 16) | ((uint32)(d) << 24))  /* NOLINT */
+#define FOURCC(a, b, c, d)                                     \
+  (((uint32_t)(a)) | ((uint32_t)(b) << 8) |       /* NOLINT */ \
+   ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24)) /* NOLINT */
 #endif
 
 // Some pages discussing FourCC codes:
@@ -53,38 +53,33 @@
   FOURCC_I420 = FOURCC('I', '4', '2', '0'),
   FOURCC_I422 = FOURCC('I', '4', '2', '2'),
   FOURCC_I444 = FOURCC('I', '4', '4', '4'),
-  FOURCC_I411 = FOURCC('I', '4', '1', '1'),
   FOURCC_I400 = FOURCC('I', '4', '0', '0'),
   FOURCC_NV21 = FOURCC('N', 'V', '2', '1'),
   FOURCC_NV12 = FOURCC('N', 'V', '1', '2'),
   FOURCC_YUY2 = FOURCC('Y', 'U', 'Y', '2'),
   FOURCC_UYVY = FOURCC('U', 'Y', 'V', 'Y'),
+  FOURCC_H010 = FOURCC('H', '0', '1', '0'),  // unofficial fourcc. 10 bit lsb
 
-  // 2 Secondary YUV formats: row biplanar.
+  // 1 Secondary YUV format: row biplanar.
   FOURCC_M420 = FOURCC('M', '4', '2', '0'),
-  FOURCC_Q420 = FOURCC('Q', '4', '2', '0'),  // deprecated.
 
-  // 9 Primary RGB formats: 4 32 bpp, 2 24 bpp, 3 16 bpp.
+  // 11 Primary RGB formats: 4 32 bpp, 2 24 bpp, 3 16 bpp, 1 10 bpc
   FOURCC_ARGB = FOURCC('A', 'R', 'G', 'B'),
   FOURCC_BGRA = FOURCC('B', 'G', 'R', 'A'),
   FOURCC_ABGR = FOURCC('A', 'B', 'G', 'R'),
+  FOURCC_AR30 = FOURCC('A', 'R', '3', '0'),  // 10 bit per channel. 2101010.
+  FOURCC_AB30 = FOURCC('A', 'B', '3', '0'),  // ABGR version of 10 bit
   FOURCC_24BG = FOURCC('2', '4', 'B', 'G'),
-  FOURCC_RAW  = FOURCC('r', 'a', 'w', ' '),
+  FOURCC_RAW = FOURCC('r', 'a', 'w', ' '),
   FOURCC_RGBA = FOURCC('R', 'G', 'B', 'A'),
   FOURCC_RGBP = FOURCC('R', 'G', 'B', 'P'),  // rgb565 LE.
   FOURCC_RGBO = FOURCC('R', 'G', 'B', 'O'),  // argb1555 LE.
   FOURCC_R444 = FOURCC('R', '4', '4', '4'),  // argb4444 LE.
 
-  // 4 Secondary RGB formats: 4 Bayer Patterns. deprecated.
-  FOURCC_RGGB = FOURCC('R', 'G', 'G', 'B'),
-  FOURCC_BGGR = FOURCC('B', 'G', 'G', 'R'),
-  FOURCC_GRBG = FOURCC('G', 'R', 'B', 'G'),
-  FOURCC_GBRG = FOURCC('G', 'B', 'R', 'G'),
-
   // 1 Primary Compressed YUV format.
   FOURCC_MJPG = FOURCC('M', 'J', 'P', 'G'),
 
-  // 5 Auxiliary YUV variations: 3 with U and V planes are swapped, 1 Alias.
+  // 7 Auxiliary YUV variations: 3 with U and V planes are swapped, 1 Alias.
   FOURCC_YV12 = FOURCC('Y', 'V', '1', '2'),
   FOURCC_YV16 = FOURCC('Y', 'V', '1', '6'),
   FOURCC_YV24 = FOURCC('Y', 'V', '2', '4'),
@@ -112,7 +107,13 @@
   FOURCC_L565 = FOURCC('L', '5', '6', '5'),  // Alias for RGBP.
   FOURCC_5551 = FOURCC('5', '5', '5', '1'),  // Alias for RGBO.
 
-  // 1 Auxiliary compressed YUV format set aside for capturer.
+  // deprecated formats.  Not supported, but defined for backward compatibility.
+  FOURCC_I411 = FOURCC('I', '4', '1', '1'),
+  FOURCC_Q420 = FOURCC('Q', '4', '2', '0'),
+  FOURCC_RGGB = FOURCC('R', 'G', 'G', 'B'),
+  FOURCC_BGGR = FOURCC('B', 'G', 'G', 'R'),
+  FOURCC_GRBG = FOURCC('G', 'R', 'B', 'G'),
+  FOURCC_GBRG = FOURCC('G', 'B', 'R', 'G'),
   FOURCC_H264 = FOURCC('H', '2', '6', '4'),
 
   // Match any fourcc.
@@ -136,8 +137,10 @@
   FOURCC_BPP_BGRA = 32,
   FOURCC_BPP_ABGR = 32,
   FOURCC_BPP_RGBA = 32,
+  FOURCC_BPP_AR30 = 32,
+  FOURCC_BPP_AB30 = 32,
   FOURCC_BPP_24BG = 24,
-  FOURCC_BPP_RAW  = 24,
+  FOURCC_BPP_RAW = 24,
   FOURCC_BPP_RGBP = 16,
   FOURCC_BPP_RGBO = 16,
   FOURCC_BPP_R444 = 16,
@@ -152,6 +155,7 @@
   FOURCC_BPP_J420 = 12,
   FOURCC_BPP_J400 = 8,
   FOURCC_BPP_H420 = 12,
+  FOURCC_BPP_H010 = 24,
   FOURCC_BPP_MJPG = 0,  // 0 means unknown.
   FOURCC_BPP_H264 = 0,
   FOURCC_BPP_IYUV = 12,
@@ -170,15 +174,15 @@
   FOURCC_BPP_CM24 = 24,
 
   // Match any fourcc.
-  FOURCC_BPP_ANY  = 0,  // 0 means unknown.
+  FOURCC_BPP_ANY = 0,  // 0 means unknown.
 };
 
 // Converts fourcc aliases into canonical ones.
-LIBYUV_API uint32 CanonicalFourCC(uint32 fourcc);
+LIBYUV_API uint32_t CanonicalFourCC(uint32_t fourcc);
 
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
 #endif
 
-#endif  // INCLUDE_LIBYUV_VIDEO_COMMON_H_  NOLINT
+#endif  // INCLUDE_LIBYUV_VIDEO_COMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare.cc
index e3846bd..50e3abd 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare.cc
@@ -29,10 +29,10 @@
 
 // hash seed of 5381 recommended.
 LIBYUV_API
-uint32 HashDjb2(const uint8* src, uint64 count, uint32 seed) {
+uint32_t HashDjb2(const uint8_t* src, uint64_t count, uint32_t seed) {
   const int kBlockSize = 1 << 15;  // 32768;
   int remainder;
-  uint32 (*HashDjb2_SSE)(const uint8* src, int count, uint32 seed) =
+  uint32_t (*HashDjb2_SSE)(const uint8_t* src, int count, uint32_t seed) =
       HashDjb2_C;
 #if defined(HAS_HASHDJB2_SSE41)
   if (TestCpuFlag(kCpuHasSSE41)) {
@@ -45,25 +45,25 @@
   }
 #endif
 
-  while (count >= (uint64)(kBlockSize)) {
+  while (count >= (uint64_t)(kBlockSize)) {
     seed = HashDjb2_SSE(src, kBlockSize, seed);
     src += kBlockSize;
     count -= kBlockSize;
   }
-  remainder = (int)(count) & ~15;
+  remainder = (int)count & ~15;
   if (remainder) {
     seed = HashDjb2_SSE(src, remainder, seed);
     src += remainder;
     count -= remainder;
   }
-  remainder = (int)(count) & 15;
+  remainder = (int)count & 15;
   if (remainder) {
     seed = HashDjb2_C(src, remainder, seed);
   }
   return seed;
 }
 
-static uint32 ARGBDetectRow_C(const uint8* argb, int width) {
+static uint32_t ARGBDetectRow_C(const uint8_t* argb, int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
     if (argb[0] != 255) {  // First byte is not Alpha of 255, so not ARGB.
@@ -94,8 +94,11 @@
 // Scan an opaque argb image and return fourcc based on alpha offset.
 // Returns FOURCC_ARGB, FOURCC_BGRA, or 0 if unknown.
 LIBYUV_API
-uint32 ARGBDetect(const uint8* argb, int stride_argb, int width, int height) {
-  uint32 fourcc = 0;
+uint32_t ARGBDetect(const uint8_t* argb,
+                    int stride_argb,
+                    int width,
+                    int height) {
+  uint32_t fourcc = 0;
   int h;
 
   // Coalesce rows.
@@ -111,19 +114,80 @@
   return fourcc;
 }
 
+// NEON version accumulates in 16 bit shorts which overflow at 65536 bytes.
+// So actual maximum is 1 less loop, which is 64436 - 32 bytes.
+
+LIBYUV_API
+uint64_t ComputeHammingDistance(const uint8_t* src_a,
+                                const uint8_t* src_b,
+                                int count) {
+  const int kBlockSize = 1 << 15;  // 32768;
+  const int kSimdSize = 64;
+  // SIMD for multiple of 64, and C for remainder
+  int remainder = count & (kBlockSize - 1) & ~(kSimdSize - 1);
+  uint64_t diff = 0;
+  int i;
+  uint32_t (*HammingDistance)(const uint8_t* src_a, const uint8_t* src_b,
+                              int count) = HammingDistance_C;
+#if defined(HAS_HAMMINGDISTANCE_NEON)
+  if (TestCpuFlag(kCpuHasNEON)) {
+    HammingDistance = HammingDistance_NEON;
+  }
+#endif
+#if defined(HAS_HAMMINGDISTANCE_SSSE3)
+  if (TestCpuFlag(kCpuHasSSSE3)) {
+    HammingDistance = HammingDistance_SSSE3;
+  }
+#endif
+#if defined(HAS_HAMMINGDISTANCE_SSE42)
+  if (TestCpuFlag(kCpuHasSSE42)) {
+    HammingDistance = HammingDistance_SSE42;
+  }
+#endif
+#if defined(HAS_HAMMINGDISTANCE_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    HammingDistance = HammingDistance_AVX2;
+  }
+#endif
+#if defined(HAS_HAMMINGDISTANCE_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    HammingDistance = HammingDistance_MSA;
+  }
+#endif
+#ifdef _OPENMP
+#pragma omp parallel for reduction(+ : diff)
+#endif
+  for (i = 0; i < (count - (kBlockSize - 1)); i += kBlockSize) {
+    diff += HammingDistance(src_a + i, src_b + i, kBlockSize);
+  }
+  src_a += count & ~(kBlockSize - 1);
+  src_b += count & ~(kBlockSize - 1);
+  if (remainder) {
+    diff += HammingDistance(src_a, src_b, remainder);
+    src_a += remainder;
+    src_b += remainder;
+  }
+  remainder = count & (kSimdSize - 1);
+  if (remainder) {
+    diff += HammingDistance_C(src_a, src_b, remainder);
+  }
+  return diff;
+}
+
 // TODO(fbarchard): Refactor into row function.
 LIBYUV_API
-uint64 ComputeSumSquareError(const uint8* src_a, const uint8* src_b,
-                             int count) {
+uint64_t ComputeSumSquareError(const uint8_t* src_a,
+                               const uint8_t* src_b,
+                               int count) {
   // SumSquareError returns values 0 to 65535 for each squared difference.
-  // Up to 65536 of those can be summed and remain within a uint32.
-  // After each block of 65536 pixels, accumulate into a uint64.
+  // Up to 65536 of those can be summed and remain within a uint32_t.
+  // After each block of 65536 pixels, accumulate into a uint64_t.
   const int kBlockSize = 65536;
   int remainder = count & (kBlockSize - 1) & ~31;
-  uint64 sse = 0;
+  uint64_t sse = 0;
   int i;
-  uint32 (*SumSquareError)(const uint8* src_a, const uint8* src_b, int count) =
-      SumSquareError_C;
+  uint32_t (*SumSquareError)(const uint8_t* src_a, const uint8_t* src_b,
+                             int count) = SumSquareError_C;
 #if defined(HAS_SUMSQUAREERROR_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     SumSquareError = SumSquareError_NEON;
@@ -141,8 +205,13 @@
     SumSquareError = SumSquareError_AVX2;
   }
 #endif
+#if defined(HAS_SUMSQUAREERROR_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    SumSquareError = SumSquareError_MSA;
+  }
+#endif
 #ifdef _OPENMP
-#pragma omp parallel for reduction(+: sse)
+#pragma omp parallel for reduction(+ : sse)
 #endif
   for (i = 0; i < (count - (kBlockSize - 1)); i += kBlockSize) {
     sse += SumSquareError(src_a + i, src_b + i, kBlockSize);
@@ -162,14 +231,16 @@
 }
 
 LIBYUV_API
-uint64 ComputeSumSquareErrorPlane(const uint8* src_a, int stride_a,
-                                  const uint8* src_b, int stride_b,
-                                  int width, int height) {
-  uint64 sse = 0;
+uint64_t ComputeSumSquareErrorPlane(const uint8_t* src_a,
+                                    int stride_a,
+                                    const uint8_t* src_b,
+                                    int stride_b,
+                                    int width,
+                                    int height) {
+  uint64_t sse = 0;
   int h;
   // Coalesce rows.
-  if (stride_a == width &&
-      stride_b == width) {
+  if (stride_a == width && stride_b == width) {
     width *= height;
     height = 1;
     stride_a = stride_b = 0;
@@ -183,66 +254,76 @@
 }
 
 LIBYUV_API
-double SumSquareErrorToPsnr(uint64 sse, uint64 count) {
+double SumSquareErrorToPsnr(uint64_t sse, uint64_t count) {
   double psnr;
   if (sse > 0) {
-    double mse = (double)(count) / (double)(sse);
+    double mse = (double)count / (double)sse;
     psnr = 10.0 * log10(255.0 * 255.0 * mse);
   } else {
-    psnr = kMaxPsnr;      // Limit to prevent divide by 0
+    psnr = kMaxPsnr;  // Limit to prevent divide by 0
   }
 
-  if (psnr > kMaxPsnr)
+  if (psnr > kMaxPsnr) {
     psnr = kMaxPsnr;
+  }
 
   return psnr;
 }
 
 LIBYUV_API
-double CalcFramePsnr(const uint8* src_a, int stride_a,
-                     const uint8* src_b, int stride_b,
-                     int width, int height) {
-  const uint64 samples = width * height;
-  const uint64 sse = ComputeSumSquareErrorPlane(src_a, stride_a,
-                                                src_b, stride_b,
-                                                width, height);
+double CalcFramePsnr(const uint8_t* src_a,
+                     int stride_a,
+                     const uint8_t* src_b,
+                     int stride_b,
+                     int width,
+                     int height) {
+  const uint64_t samples = (uint64_t)width * (uint64_t)height;
+  const uint64_t sse = ComputeSumSquareErrorPlane(src_a, stride_a, src_b,
+                                                  stride_b, width, height);
   return SumSquareErrorToPsnr(sse, samples);
 }
 
 LIBYUV_API
-double I420Psnr(const uint8* src_y_a, int stride_y_a,
-                const uint8* src_u_a, int stride_u_a,
-                const uint8* src_v_a, int stride_v_a,
-                const uint8* src_y_b, int stride_y_b,
-                const uint8* src_u_b, int stride_u_b,
-                const uint8* src_v_b, int stride_v_b,
-                int width, int height) {
-  const uint64 sse_y = ComputeSumSquareErrorPlane(src_y_a, stride_y_a,
-                                                  src_y_b, stride_y_b,
-                                                  width, height);
+double I420Psnr(const uint8_t* src_y_a,
+                int stride_y_a,
+                const uint8_t* src_u_a,
+                int stride_u_a,
+                const uint8_t* src_v_a,
+                int stride_v_a,
+                const uint8_t* src_y_b,
+                int stride_y_b,
+                const uint8_t* src_u_b,
+                int stride_u_b,
+                const uint8_t* src_v_b,
+                int stride_v_b,
+                int width,
+                int height) {
+  const uint64_t sse_y = ComputeSumSquareErrorPlane(
+      src_y_a, stride_y_a, src_y_b, stride_y_b, width, height);
   const int width_uv = (width + 1) >> 1;
   const int height_uv = (height + 1) >> 1;
-  const uint64 sse_u = ComputeSumSquareErrorPlane(src_u_a, stride_u_a,
-                                                  src_u_b, stride_u_b,
-                                                  width_uv, height_uv);
-  const uint64 sse_v = ComputeSumSquareErrorPlane(src_v_a, stride_v_a,
-                                                  src_v_b, stride_v_b,
-                                                  width_uv, height_uv);
-  const uint64 samples = width * height + 2 * (width_uv * height_uv);
-  const uint64 sse = sse_y + sse_u + sse_v;
+  const uint64_t sse_u = ComputeSumSquareErrorPlane(
+      src_u_a, stride_u_a, src_u_b, stride_u_b, width_uv, height_uv);
+  const uint64_t sse_v = ComputeSumSquareErrorPlane(
+      src_v_a, stride_v_a, src_v_b, stride_v_b, width_uv, height_uv);
+  const uint64_t samples = (uint64_t)width * (uint64_t)height +
+                           2 * ((uint64_t)width_uv * (uint64_t)height_uv);
+  const uint64_t sse = sse_y + sse_u + sse_v;
   return SumSquareErrorToPsnr(sse, samples);
 }
 
-static const int64 cc1 =  26634;  // (64^2*(.01*255)^2
-static const int64 cc2 = 239708;  // (64^2*(.03*255)^2
+static const int64_t cc1 = 26634;   // 64^2 * (0.01 * 255)^2
+static const int64_t cc2 = 239708;  // 64^2 * (0.03 * 255)^2
 
-static double Ssim8x8_C(const uint8* src_a, int stride_a,
-                        const uint8* src_b, int stride_b) {
-  int64 sum_a = 0;
-  int64 sum_b = 0;
-  int64 sum_sq_a = 0;
-  int64 sum_sq_b = 0;
-  int64 sum_axb = 0;
+static double Ssim8x8_C(const uint8_t* src_a,
+                        int stride_a,
+                        const uint8_t* src_b,
+                        int stride_b) {
+  int64_t sum_a = 0;
+  int64_t sum_b = 0;
+  int64_t sum_sq_a = 0;
+  int64_t sum_sq_b = 0;
+  int64_t sum_axb = 0;
 
   int i;
   for (i = 0; i < 8; ++i) {
@@ -260,22 +341,22 @@
   }
 
   {
-    const int64 count = 64;
+    const int64_t count = 64;
     // scale the constants by number of pixels
-    const int64 c1 = (cc1 * count * count) >> 12;
-    const int64 c2 = (cc2 * count * count) >> 12;
+    const int64_t c1 = (cc1 * count * count) >> 12;
+    const int64_t c2 = (cc2 * count * count) >> 12;
 
-    const int64 sum_a_x_sum_b = sum_a * sum_b;
+    const int64_t sum_a_x_sum_b = sum_a * sum_b;
 
-    const int64 ssim_n = (2 * sum_a_x_sum_b + c1) *
-                         (2 * count * sum_axb - 2 * sum_a_x_sum_b + c2);
+    const int64_t ssim_n = (2 * sum_a_x_sum_b + c1) *
+                           (2 * count * sum_axb - 2 * sum_a_x_sum_b + c2);
 
-    const int64 sum_a_sq = sum_a*sum_a;
-    const int64 sum_b_sq = sum_b*sum_b;
+    const int64_t sum_a_sq = sum_a * sum_a;
+    const int64_t sum_b_sq = sum_b * sum_b;
 
-    const int64 ssim_d = (sum_a_sq + sum_b_sq + c1) *
-                         (count * sum_sq_a - sum_a_sq +
-                          count * sum_sq_b - sum_b_sq + c2);
+    const int64_t ssim_d =
+        (sum_a_sq + sum_b_sq + c1) *
+        (count * sum_sq_a - sum_a_sq + count * sum_sq_b - sum_b_sq + c2);
 
     if (ssim_d == 0.0) {
       return DBL_MAX;
@@ -288,13 +369,16 @@
 // on the 4x4 pixel grid. Such arrangement allows the windows to overlap
 // block boundaries to penalize blocking artifacts.
 LIBYUV_API
-double CalcFrameSsim(const uint8* src_a, int stride_a,
-                     const uint8* src_b, int stride_b,
-                     int width, int height) {
+double CalcFrameSsim(const uint8_t* src_a,
+                     int stride_a,
+                     const uint8_t* src_b,
+                     int stride_b,
+                     int width,
+                     int height) {
   int samples = 0;
   double ssim_total = 0;
-  double (*Ssim8x8)(const uint8* src_a, int stride_a,
-                    const uint8* src_b, int stride_b) = Ssim8x8_C;
+  double (*Ssim8x8)(const uint8_t* src_a, int stride_a, const uint8_t* src_b,
+                    int stride_b) = Ssim8x8_C;
 
   // sample point start with each 4x4 location
   int i;
@@ -314,22 +398,27 @@
 }
 
 LIBYUV_API
-double I420Ssim(const uint8* src_y_a, int stride_y_a,
-                const uint8* src_u_a, int stride_u_a,
-                const uint8* src_v_a, int stride_v_a,
-                const uint8* src_y_b, int stride_y_b,
-                const uint8* src_u_b, int stride_u_b,
-                const uint8* src_v_b, int stride_v_b,
-                int width, int height) {
-  const double ssim_y = CalcFrameSsim(src_y_a, stride_y_a,
-                                      src_y_b, stride_y_b, width, height);
+double I420Ssim(const uint8_t* src_y_a,
+                int stride_y_a,
+                const uint8_t* src_u_a,
+                int stride_u_a,
+                const uint8_t* src_v_a,
+                int stride_v_a,
+                const uint8_t* src_y_b,
+                int stride_y_b,
+                const uint8_t* src_u_b,
+                int stride_u_b,
+                const uint8_t* src_v_b,
+                int stride_v_b,
+                int width,
+                int height) {
+  const double ssim_y =
+      CalcFrameSsim(src_y_a, stride_y_a, src_y_b, stride_y_b, width, height);
   const int width_uv = (width + 1) >> 1;
   const int height_uv = (height + 1) >> 1;
-  const double ssim_u = CalcFrameSsim(src_u_a, stride_u_a,
-                                      src_u_b, stride_u_b,
+  const double ssim_u = CalcFrameSsim(src_u_a, stride_u_a, src_u_b, stride_u_b,
                                       width_uv, height_uv);
-  const double ssim_v = CalcFrameSsim(src_v_a, stride_v_a,
-                                      src_v_b, stride_v_b,
+  const double ssim_v = CalcFrameSsim(src_v_a, stride_v_a, src_v_b, stride_v_b,
                                       width_uv, height_uv);
   return ssim_y * 0.8 + 0.1 * (ssim_u + ssim_v);
 }
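The PSNR conversion above can be restated outside the diff. This is an illustrative standalone sketch, not the patched function: the 128 dB cap stands in for libyuv's `kMaxPsnr` (an assumption here), and the formula matches the code's `10 * log10(255^2 * count / sse)` with the divide-by-zero guard for identical planes.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Illustrative cap for identical frames; the exact kMaxPsnr value used
// by libyuv is not shown in this hunk, so 128 dB is an assumption.
static const double kMaxPsnrSketch = 128.0;

// PSNR = 10 * log10(MAX^2 * N / SSE), with MAX = 255 for 8-bit samples.
double SumSquareErrorToPsnrSketch(uint64_t sse, uint64_t count) {
  if (sse == 0) {
    return kMaxPsnrSketch;  // Identical planes: prevent divide by zero.
  }
  double psnr = 10.0 * log10(255.0 * 255.0 * (double)count / (double)sse);
  return psnr > kMaxPsnrSketch ? kMaxPsnrSketch : psnr;
}
```

When `sse == 255^2 * count` (every sample maximally wrong), the ratio inside the log is 1 and the PSNR is 0 dB, which is a handy sanity check for the sign convention.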
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_common.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_common.cc
index 42fc589..d4b170a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_common.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_common.cc
@@ -17,20 +17,80 @@
 extern "C" {
 #endif
 
-uint32 SumSquareError_C(const uint8* src_a, const uint8* src_b, int count) {
-  uint32 sse = 0u;
+#if ORIGINAL_OPT
+uint32_t HammingDistance_C1(const uint8_t* src_a,
+                            const uint8_t* src_b,
+                            int count) {
+  uint32_t diff = 0u;
+
+  int i;
+  for (i = 0; i < count; ++i) {
+    int x = src_a[i] ^ src_b[i];
+    if (x & 1)
+      ++diff;
+    if (x & 2)
+      ++diff;
+    if (x & 4)
+      ++diff;
+    if (x & 8)
+      ++diff;
+    if (x & 16)
+      ++diff;
+    if (x & 32)
+      ++diff;
+    if (x & 64)
+      ++diff;
+    if (x & 128)
+      ++diff;
+  }
+  return diff;
+}
+#endif
+
+// Hakmem method for hamming distance.
+uint32_t HammingDistance_C(const uint8_t* src_a,
+                           const uint8_t* src_b,
+                           int count) {
+  uint32_t diff = 0u;
+
+  int i;
+  for (i = 0; i < count - 3; i += 4) {
+    uint32_t x = *((const uint32_t*)src_a) ^ *((const uint32_t*)src_b);
+    uint32_t u = x - ((x >> 1) & 0x55555555);
+    u = ((u >> 2) & 0x33333333) + (u & 0x33333333);
+    diff += ((((u + (u >> 4)) & 0x0f0f0f0f) * 0x01010101) >> 24);
+    src_a += 4;
+    src_b += 4;
+  }
+
+  for (; i < count; ++i) {
+    uint32_t x = *src_a ^ *src_b;
+    uint32_t u = x - ((x >> 1) & 0x55);
+    u = ((u >> 2) & 0x33) + (u & 0x33);
+    diff += (u + (u >> 4)) & 0x0f;
+    src_a += 1;
+    src_b += 1;
+  }
+
+  return diff;
+}
+
+uint32_t SumSquareError_C(const uint8_t* src_a,
+                          const uint8_t* src_b,
+                          int count) {
+  uint32_t sse = 0u;
   int i;
   for (i = 0; i < count; ++i) {
     int diff = src_a[i] - src_b[i];
-    sse += (uint32)(diff * diff);
+    sse += (uint32_t)(diff * diff);
   }
   return sse;
 }
 
 // hash seed of 5381 recommended.
 // Internal C version of HashDjb2 with int sized count for efficiency.
-uint32 HashDjb2_C(const uint8* src, int count, uint32 seed) {
-  uint32 hash = seed;
+uint32_t HashDjb2_C(const uint8_t* src, int count, uint32_t seed) {
+  uint32_t hash = seed;
   int i;
   for (i = 0; i < count; ++i) {
     hash += (hash << 5) + src[i];
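The "Hakmem method" comment in `HammingDistance_C` above refers to the classic SWAR population count applied to each 32-bit XOR of the inputs. A minimal restatement of just that bit-counting step, with the constants taken verbatim from the patched loop body:

```cpp
#include <cassert>
#include <cstdint>

// SWAR ("Hakmem-style") popcount, as used per 32-bit word in
// HammingDistance_C: fold 1-bit counts into 2-bit sums, then 4-bit
// sums, then gather the per-byte totals into the top byte.
uint32_t Popcount32Sketch(uint32_t x) {
  uint32_t u = x - ((x >> 1) & 0x55555555);        // 2-bit partial sums
  u = ((u >> 2) & 0x33333333) + (u & 0x33333333);  // 4-bit partial sums
  return (((u + (u >> 4)) & 0x0f0f0f0f) * 0x01010101) >> 24;
}
```

The `* 0x01010101 >> 24` trick sums the four byte counts via the carry-free multiply, which is why the tail loop in the patch uses the narrower 8-bit variant of the same constants.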
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_gcc.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_gcc.cc
index 1b83edb..676527c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_gcc.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_gcc.cc
@@ -22,124 +22,334 @@
 #if !defined(LIBYUV_DISABLE_X86) && \
     (defined(__x86_64__) || (defined(__i386__) && !defined(_MSC_VER)))
 
-uint32 SumSquareError_SSE2(const uint8* src_a, const uint8* src_b, int count) {
-  uint32 sse;
-  asm volatile (
-    "pxor      %%xmm0,%%xmm0                   \n"
-    "pxor      %%xmm5,%%xmm5                   \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm1         \n"
-    "lea       " MEMLEA(0x10, 0) ",%0          \n"
-    "movdqu    " MEMACCESS(1) ",%%xmm2         \n"
-    "lea       " MEMLEA(0x10, 1) ",%1          \n"
-    "movdqa    %%xmm1,%%xmm3                   \n"
-    "psubusb   %%xmm2,%%xmm1                   \n"
-    "psubusb   %%xmm3,%%xmm2                   \n"
-    "por       %%xmm2,%%xmm1                   \n"
-    "movdqa    %%xmm1,%%xmm2                   \n"
-    "punpcklbw %%xmm5,%%xmm1                   \n"
-    "punpckhbw %%xmm5,%%xmm2                   \n"
-    "pmaddwd   %%xmm1,%%xmm1                   \n"
-    "pmaddwd   %%xmm2,%%xmm2                   \n"
-    "paddd     %%xmm1,%%xmm0                   \n"
-    "paddd     %%xmm2,%%xmm0                   \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
+#if defined(__x86_64__)
+uint32_t HammingDistance_SSE42(const uint8_t* src_a,
+                               const uint8_t* src_b,
+                               int count) {
+  uint64_t diff = 0u;
 
-    "pshufd    $0xee,%%xmm0,%%xmm1             \n"
-    "paddd     %%xmm1,%%xmm0                   \n"
-    "pshufd    $0x1,%%xmm0,%%xmm1              \n"
-    "paddd     %%xmm1,%%xmm0                   \n"
-    "movd      %%xmm0,%3                       \n"
+  asm volatile(
+      "xor        %3,%3                          \n"
+      "xor        %%r8,%%r8                      \n"
+      "xor        %%r9,%%r9                      \n"
+      "xor        %%r10,%%r10                    \n"
 
-  : "+r"(src_a),      // %0
-    "+r"(src_b),      // %1
-    "+r"(count),      // %2
-    "=g"(sse)         // %3
-  :: "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5"
-  );
+      // Process 32 bytes per loop.
+      LABELALIGN
+      "1:                                        \n"
+      "mov        (%0),%%rcx                     \n"
+      "mov        0x8(%0),%%rdx                  \n"
+      "xor        (%1),%%rcx                     \n"
+      "xor        0x8(%1),%%rdx                  \n"
+      "popcnt     %%rcx,%%rcx                    \n"
+      "popcnt     %%rdx,%%rdx                    \n"
+      "mov        0x10(%0),%%rsi                 \n"
+      "mov        0x18(%0),%%rdi                 \n"
+      "xor        0x10(%1),%%rsi                 \n"
+      "xor        0x18(%1),%%rdi                 \n"
+      "popcnt     %%rsi,%%rsi                    \n"
+      "popcnt     %%rdi,%%rdi                    \n"
+      "add        $0x20,%0                       \n"
+      "add        $0x20,%1                       \n"
+      "add        %%rcx,%3                       \n"
+      "add        %%rdx,%%r8                     \n"
+      "add        %%rsi,%%r9                     \n"
+      "add        %%rdi,%%r10                    \n"
+      "sub        $0x20,%2                       \n"
+      "jg         1b                             \n"
+
+      "add        %%r8, %3                       \n"
+      "add        %%r9, %3                       \n"
+      "add        %%r10, %3                      \n"
+      : "+r"(src_a),  // %0
+        "+r"(src_b),  // %1
+        "+r"(count),  // %2
+        "=r"(diff)    // %3
+      :
+      : "memory", "cc", "rcx", "rdx", "rsi", "rdi", "r8", "r9", "r10");
+
+  return static_cast<uint32_t>(diff);
+}
+#else
+uint32_t HammingDistance_SSE42(const uint8_t* src_a,
+                               const uint8_t* src_b,
+                               int count) {
+  uint32_t diff = 0u;
+
+  asm volatile(
+      // Process 16 bytes per loop.
+      LABELALIGN
+      "1:                                        \n"
+      "mov        (%0),%%ecx                     \n"
+      "mov        0x4(%0),%%edx                  \n"
+      "xor        (%1),%%ecx                     \n"
+      "xor        0x4(%1),%%edx                  \n"
+      "popcnt     %%ecx,%%ecx                    \n"
+      "add        %%ecx,%3                       \n"
+      "popcnt     %%edx,%%edx                    \n"
+      "add        %%edx,%3                       \n"
+      "mov        0x8(%0),%%ecx                  \n"
+      "mov        0xc(%0),%%edx                  \n"
+      "xor        0x8(%1),%%ecx                  \n"
+      "xor        0xc(%1),%%edx                  \n"
+      "popcnt     %%ecx,%%ecx                    \n"
+      "add        %%ecx,%3                       \n"
+      "popcnt     %%edx,%%edx                    \n"
+      "add        %%edx,%3                       \n"
+      "add        $0x10,%0                       \n"
+      "add        $0x10,%1                       \n"
+      "sub        $0x10,%2                       \n"
+      "jg         1b                             \n"
+      : "+r"(src_a),  // %0
+        "+r"(src_b),  // %1
+        "+r"(count),  // %2
+        "+r"(diff)    // %3
+      :
+      : "memory", "cc", "ecx", "edx");
+
+  return diff;
+}
+#endif
+
+static const vec8 kNibbleMask = {15, 15, 15, 15, 15, 15, 15, 15,
+                                 15, 15, 15, 15, 15, 15, 15, 15};
+static const vec8 kBitCount = {0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4};
+
+uint32_t HammingDistance_SSSE3(const uint8_t* src_a,
+                               const uint8_t* src_b,
+                               int count) {
+  uint32_t diff = 0u;
+
+  asm volatile(
+      "movdqa     %4,%%xmm2                      \n"
+      "movdqa     %5,%%xmm3                      \n"
+      "pxor       %%xmm0,%%xmm0                  \n"
+      "pxor       %%xmm1,%%xmm1                  \n"
+      "sub        %0,%1                          \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqa     (%0),%%xmm4                    \n"
+      "movdqa     0x10(%0), %%xmm5               \n"
+      "pxor       (%0,%1), %%xmm4                \n"
+      "movdqa     %%xmm4,%%xmm6                  \n"
+      "pand       %%xmm2,%%xmm6                  \n"
+      "psrlw      $0x4,%%xmm4                    \n"
+      "movdqa     %%xmm3,%%xmm7                  \n"
+      "pshufb     %%xmm6,%%xmm7                  \n"
+      "pand       %%xmm2,%%xmm4                  \n"
+      "movdqa     %%xmm3,%%xmm6                  \n"
+      "pshufb     %%xmm4,%%xmm6                  \n"
+      "paddb      %%xmm7,%%xmm6                  \n"
+      "pxor       0x10(%0,%1),%%xmm5             \n"
+      "add        $0x20,%0                       \n"
+      "movdqa     %%xmm5,%%xmm4                  \n"
+      "pand       %%xmm2,%%xmm5                  \n"
+      "psrlw      $0x4,%%xmm4                    \n"
+      "movdqa     %%xmm3,%%xmm7                  \n"
+      "pshufb     %%xmm5,%%xmm7                  \n"
+      "pand       %%xmm2,%%xmm4                  \n"
+      "movdqa     %%xmm3,%%xmm5                  \n"
+      "pshufb     %%xmm4,%%xmm5                  \n"
+      "paddb      %%xmm7,%%xmm5                  \n"
+      "paddb      %%xmm5,%%xmm6                  \n"
+      "psadbw     %%xmm1,%%xmm6                  \n"
+      "paddd      %%xmm6,%%xmm0                  \n"
+      "sub        $0x20,%2                       \n"
+      "jg         1b                             \n"
+
+      "pshufd     $0xaa,%%xmm0,%%xmm1            \n"
+      "paddd      %%xmm1,%%xmm0                  \n"
+      "movd       %%xmm0, %3                     \n"
+      : "+r"(src_a),       // %0
+        "+r"(src_b),       // %1
+        "+r"(count),       // %2
+        "=r"(diff)         // %3
+      : "m"(kNibbleMask),  // %4
+        "m"(kBitCount)     // %5
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
+
+  return diff;
+}
+
+#ifdef HAS_HAMMINGDISTANCE_AVX2
+uint32_t HammingDistance_AVX2(const uint8_t* src_a,
+                              const uint8_t* src_b,
+                              int count) {
+  uint32_t diff = 0u;
+
+  asm volatile(
+      "vbroadcastf128 %4,%%ymm2                  \n"
+      "vbroadcastf128 %5,%%ymm3                  \n"
+      "vpxor      %%ymm0,%%ymm0,%%ymm0           \n"
+      "vpxor      %%ymm1,%%ymm1,%%ymm1           \n"
+      "sub        %0,%1                          \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqa    (%0),%%ymm4                    \n"
+      "vmovdqa    0x20(%0), %%ymm5               \n"
+      "vpxor      (%0,%1), %%ymm4, %%ymm4        \n"
+      "vpand      %%ymm2,%%ymm4,%%ymm6           \n"
+      "vpsrlw     $0x4,%%ymm4,%%ymm4             \n"
+      "vpshufb    %%ymm6,%%ymm3,%%ymm6           \n"
+      "vpand      %%ymm2,%%ymm4,%%ymm4           \n"
+      "vpshufb    %%ymm4,%%ymm3,%%ymm4           \n"
+      "vpaddb     %%ymm4,%%ymm6,%%ymm6           \n"
+      "vpxor      0x20(%0,%1),%%ymm5,%%ymm4      \n"
+      "add        $0x40,%0                       \n"
+      "vpand      %%ymm2,%%ymm4,%%ymm5           \n"
+      "vpsrlw     $0x4,%%ymm4,%%ymm4             \n"
+      "vpshufb    %%ymm5,%%ymm3,%%ymm5           \n"
+      "vpand      %%ymm2,%%ymm4,%%ymm4           \n"
+      "vpshufb    %%ymm4,%%ymm3,%%ymm4           \n"
+      "vpaddb     %%ymm5,%%ymm4,%%ymm4           \n"
+      "vpaddb     %%ymm6,%%ymm4,%%ymm4           \n"
+      "vpsadbw    %%ymm1,%%ymm4,%%ymm4           \n"
+      "vpaddd     %%ymm0,%%ymm4,%%ymm0           \n"
+      "sub        $0x40,%2                       \n"
+      "jg         1b                             \n"
+
+      "vpermq     $0xb1,%%ymm0,%%ymm1            \n"
+      "vpaddd     %%ymm1,%%ymm0,%%ymm0           \n"
+      "vpermq     $0xaa,%%ymm0,%%ymm1            \n"
+      "vpaddd     %%ymm1,%%ymm0,%%ymm0           \n"
+      "vmovd      %%xmm0, %3                     \n"
+      "vzeroupper                                \n"
+      : "+r"(src_a),       // %0
+        "+r"(src_b),       // %1
+        "+r"(count),       // %2
+        "=r"(diff)         // %3
+      : "m"(kNibbleMask),  // %4
+        "m"(kBitCount)     // %5
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6");
+
+  return diff;
+}
+#endif  // HAS_HAMMINGDISTANCE_AVX2
+
+uint32_t SumSquareError_SSE2(const uint8_t* src_a,
+                             const uint8_t* src_b,
+                             int count) {
+  uint32_t sse;
+  asm volatile(
+      "pxor      %%xmm0,%%xmm0                   \n"
+      "pxor      %%xmm5,%%xmm5                   \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm1                     \n"
+      "lea       0x10(%0),%0                     \n"
+      "movdqu    (%1),%%xmm2                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "movdqa    %%xmm1,%%xmm3                   \n"
+      "psubusb   %%xmm2,%%xmm1                   \n"
+      "psubusb   %%xmm3,%%xmm2                   \n"
+      "por       %%xmm2,%%xmm1                   \n"
+      "movdqa    %%xmm1,%%xmm2                   \n"
+      "punpcklbw %%xmm5,%%xmm1                   \n"
+      "punpckhbw %%xmm5,%%xmm2                   \n"
+      "pmaddwd   %%xmm1,%%xmm1                   \n"
+      "pmaddwd   %%xmm2,%%xmm2                   \n"
+      "paddd     %%xmm1,%%xmm0                   \n"
+      "paddd     %%xmm2,%%xmm0                   \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+
+      "pshufd    $0xee,%%xmm0,%%xmm1             \n"
+      "paddd     %%xmm1,%%xmm0                   \n"
+      "pshufd    $0x1,%%xmm0,%%xmm1              \n"
+      "paddd     %%xmm1,%%xmm0                   \n"
+      "movd      %%xmm0,%3                       \n"
+
+      : "+r"(src_a),  // %0
+        "+r"(src_b),  // %1
+        "+r"(count),  // %2
+        "=g"(sse)     // %3
+        ::"memory",
+        "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5");
   return sse;
 }
 
-static uvec32 kHash16x33 = { 0x92d9e201, 0, 0, 0 };  // 33 ^ 16
-static uvec32 kHashMul0 = {
-  0x0c3525e1,  // 33 ^ 15
-  0xa3476dc1,  // 33 ^ 14
-  0x3b4039a1,  // 33 ^ 13
-  0x4f5f0981,  // 33 ^ 12
+static const uvec32 kHash16x33 = {0x92d9e201, 0, 0, 0};  // 33 ^ 16
+static const uvec32 kHashMul0 = {
+    0x0c3525e1,  // 33 ^ 15
+    0xa3476dc1,  // 33 ^ 14
+    0x3b4039a1,  // 33 ^ 13
+    0x4f5f0981,  // 33 ^ 12
 };
-static uvec32 kHashMul1 = {
-  0x30f35d61,  // 33 ^ 11
-  0x855cb541,  // 33 ^ 10
-  0x040a9121,  // 33 ^ 9
-  0x747c7101,  // 33 ^ 8
+static const uvec32 kHashMul1 = {
+    0x30f35d61,  // 33 ^ 11
+    0x855cb541,  // 33 ^ 10
+    0x040a9121,  // 33 ^ 9
+    0x747c7101,  // 33 ^ 8
 };
-static uvec32 kHashMul2 = {
-  0xec41d4e1,  // 33 ^ 7
-  0x4cfa3cc1,  // 33 ^ 6
-  0x025528a1,  // 33 ^ 5
-  0x00121881,  // 33 ^ 4
+static const uvec32 kHashMul2 = {
+    0xec41d4e1,  // 33 ^ 7
+    0x4cfa3cc1,  // 33 ^ 6
+    0x025528a1,  // 33 ^ 5
+    0x00121881,  // 33 ^ 4
 };
-static uvec32 kHashMul3 = {
-  0x00008c61,  // 33 ^ 3
-  0x00000441,  // 33 ^ 2
-  0x00000021,  // 33 ^ 1
-  0x00000001,  // 33 ^ 0
+static const uvec32 kHashMul3 = {
+    0x00008c61,  // 33 ^ 3
+    0x00000441,  // 33 ^ 2
+    0x00000021,  // 33 ^ 1
+    0x00000001,  // 33 ^ 0
 };
 
-uint32 HashDjb2_SSE41(const uint8* src, int count, uint32 seed) {
-  uint32 hash;
-  asm volatile (
-    "movd      %2,%%xmm0                       \n"
-    "pxor      %%xmm7,%%xmm7                   \n"
-    "movdqa    %4,%%xmm6                       \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm1         \n"
-    "lea       " MEMLEA(0x10, 0) ",%0          \n"
-    "pmulld    %%xmm6,%%xmm0                   \n"
-    "movdqa    %5,%%xmm5                       \n"
-    "movdqa    %%xmm1,%%xmm2                   \n"
-    "punpcklbw %%xmm7,%%xmm2                   \n"
-    "movdqa    %%xmm2,%%xmm3                   \n"
-    "punpcklwd %%xmm7,%%xmm3                   \n"
-    "pmulld    %%xmm5,%%xmm3                   \n"
-    "movdqa    %6,%%xmm5                       \n"
-    "movdqa    %%xmm2,%%xmm4                   \n"
-    "punpckhwd %%xmm7,%%xmm4                   \n"
-    "pmulld    %%xmm5,%%xmm4                   \n"
-    "movdqa    %7,%%xmm5                       \n"
-    "punpckhbw %%xmm7,%%xmm1                   \n"
-    "movdqa    %%xmm1,%%xmm2                   \n"
-    "punpcklwd %%xmm7,%%xmm2                   \n"
-    "pmulld    %%xmm5,%%xmm2                   \n"
-    "movdqa    %8,%%xmm5                       \n"
-    "punpckhwd %%xmm7,%%xmm1                   \n"
-    "pmulld    %%xmm5,%%xmm1                   \n"
-    "paddd     %%xmm4,%%xmm3                   \n"
-    "paddd     %%xmm2,%%xmm1                   \n"
-    "paddd     %%xmm3,%%xmm1                   \n"
-    "pshufd    $0xe,%%xmm1,%%xmm2              \n"
-    "paddd     %%xmm2,%%xmm1                   \n"
-    "pshufd    $0x1,%%xmm1,%%xmm2              \n"
-    "paddd     %%xmm2,%%xmm1                   \n"
-    "paddd     %%xmm1,%%xmm0                   \n"
-    "sub       $0x10,%1                        \n"
-    "jg        1b                              \n"
-    "movd      %%xmm0,%3                       \n"
-  : "+r"(src),        // %0
-    "+r"(count),      // %1
-    "+rm"(seed),      // %2
-    "=g"(hash)        // %3
-  : "m"(kHash16x33),  // %4
-    "m"(kHashMul0),   // %5
-    "m"(kHashMul1),   // %6
-    "m"(kHashMul2),   // %7
-    "m"(kHashMul3)    // %8
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+uint32_t HashDjb2_SSE41(const uint8_t* src, int count, uint32_t seed) {
+  uint32_t hash;
+  asm volatile(
+      "movd      %2,%%xmm0                       \n"
+      "pxor      %%xmm7,%%xmm7                   \n"
+      "movdqa    %4,%%xmm6                       \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm1                     \n"
+      "lea       0x10(%0),%0                     \n"
+      "pmulld    %%xmm6,%%xmm0                   \n"
+      "movdqa    %5,%%xmm5                       \n"
+      "movdqa    %%xmm1,%%xmm2                   \n"
+      "punpcklbw %%xmm7,%%xmm2                   \n"
+      "movdqa    %%xmm2,%%xmm3                   \n"
+      "punpcklwd %%xmm7,%%xmm3                   \n"
+      "pmulld    %%xmm5,%%xmm3                   \n"
+      "movdqa    %6,%%xmm5                       \n"
+      "movdqa    %%xmm2,%%xmm4                   \n"
+      "punpckhwd %%xmm7,%%xmm4                   \n"
+      "pmulld    %%xmm5,%%xmm4                   \n"
+      "movdqa    %7,%%xmm5                       \n"
+      "punpckhbw %%xmm7,%%xmm1                   \n"
+      "movdqa    %%xmm1,%%xmm2                   \n"
+      "punpcklwd %%xmm7,%%xmm2                   \n"
+      "pmulld    %%xmm5,%%xmm2                   \n"
+      "movdqa    %8,%%xmm5                       \n"
+      "punpckhwd %%xmm7,%%xmm1                   \n"
+      "pmulld    %%xmm5,%%xmm1                   \n"
+      "paddd     %%xmm4,%%xmm3                   \n"
+      "paddd     %%xmm2,%%xmm1                   \n"
+      "paddd     %%xmm3,%%xmm1                   \n"
+      "pshufd    $0xe,%%xmm1,%%xmm2              \n"
+      "paddd     %%xmm2,%%xmm1                   \n"
+      "pshufd    $0x1,%%xmm1,%%xmm2              \n"
+      "paddd     %%xmm2,%%xmm1                   \n"
+      "paddd     %%xmm1,%%xmm0                   \n"
+      "sub       $0x10,%1                        \n"
+      "jg        1b                              \n"
+      "movd      %%xmm0,%3                       \n"
+      : "+r"(src),        // %0
+        "+r"(count),      // %1
+        "+rm"(seed),      // %2
+        "=g"(hash)        // %3
+      : "m"(kHash16x33),  // %4
+        "m"(kHashMul0),   // %5
+        "m"(kHashMul1),   // %6
+        "m"(kHashMul2),   // %7
+        "m"(kHashMul3)    // %8
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
   return hash;
 }
 #endif  // defined(__x86_64__) || (defined(__i386__) && !defined(__pic__)))
@@ -148,4 +358,3 @@
 }  // extern "C"
 }  // namespace libyuv
 #endif
-
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_msa.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_msa.cc
new file mode 100644
index 0000000..0b807d3
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_msa.cc
@@ -0,0 +1,97 @@
+/*
+ *  Copyright 2017 The LibYuv Project Authors. All rights reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS. All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include "libyuv/basic_types.h"
+
+#include "libyuv/compare_row.h"
+#include "libyuv/row.h"
+
+// This module is for GCC MSA
+#if !defined(LIBYUV_DISABLE_MSA) && defined(__mips_msa)
+#include "libyuv/macros_msa.h"
+
+#ifdef __cplusplus
+namespace libyuv {
+extern "C" {
+#endif
+
+uint32_t HammingDistance_MSA(const uint8_t* src_a,
+                             const uint8_t* src_b,
+                             int count) {
+  uint32_t diff = 0u;
+  int i;
+  v16u8 src0, src1, src2, src3;
+  v2i64 vec0 = {0}, vec1 = {0};
+
+  for (i = 0; i < count; i += 32) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)src_a, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)src_a, 16);
+    src2 = (v16u8)__msa_ld_b((v16i8*)src_b, 0);
+    src3 = (v16u8)__msa_ld_b((v16i8*)src_b, 16);
+    src0 ^= src2;
+    src1 ^= src3;
+    vec0 += __msa_pcnt_d((v2i64)src0);
+    vec1 += __msa_pcnt_d((v2i64)src1);
+    src_a += 32;
+    src_b += 32;
+  }
+
+  vec0 += vec1;
+  diff = (uint32_t)__msa_copy_u_w((v4i32)vec0, 0);
+  diff += (uint32_t)__msa_copy_u_w((v4i32)vec0, 2);
+  return diff;
+}
+
+uint32_t SumSquareError_MSA(const uint8_t* src_a,
+                            const uint8_t* src_b,
+                            int count) {
+  uint32_t sse = 0u;
+  int i;
+  v16u8 src0, src1, src2, src3;
+  v8i16 vec0, vec1, vec2, vec3;
+  v4i32 reg0 = {0}, reg1 = {0}, reg2 = {0}, reg3 = {0};
+  v2i64 tmp0;
+
+  for (i = 0; i < count; i += 32) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)src_a, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)src_a, 16);
+    src2 = (v16u8)__msa_ld_b((v16i8*)src_b, 0);
+    src3 = (v16u8)__msa_ld_b((v16i8*)src_b, 16);
+    vec0 = (v8i16)__msa_ilvr_b((v16i8)src2, (v16i8)src0);
+    vec1 = (v8i16)__msa_ilvl_b((v16i8)src2, (v16i8)src0);
+    vec2 = (v8i16)__msa_ilvr_b((v16i8)src3, (v16i8)src1);
+    vec3 = (v8i16)__msa_ilvl_b((v16i8)src3, (v16i8)src1);
+    vec0 = __msa_hsub_u_h((v16u8)vec0, (v16u8)vec0);
+    vec1 = __msa_hsub_u_h((v16u8)vec1, (v16u8)vec1);
+    vec2 = __msa_hsub_u_h((v16u8)vec2, (v16u8)vec2);
+    vec3 = __msa_hsub_u_h((v16u8)vec3, (v16u8)vec3);
+    reg0 = __msa_dpadd_s_w(reg0, vec0, vec0);
+    reg1 = __msa_dpadd_s_w(reg1, vec1, vec1);
+    reg2 = __msa_dpadd_s_w(reg2, vec2, vec2);
+    reg3 = __msa_dpadd_s_w(reg3, vec3, vec3);
+    src_a += 32;
+    src_b += 32;
+  }
+
+  reg0 += reg1;
+  reg2 += reg3;
+  reg0 += reg2;
+  tmp0 = __msa_hadd_s_d(reg0, reg0);
+  sse = (uint32_t)__msa_copy_u_w((v4i32)tmp0, 0);
+  sse += (uint32_t)__msa_copy_u_w((v4i32)tmp0, 2);
+  return sse;
+}
+
+#ifdef __cplusplus
+}  // extern "C"
+}  // namespace libyuv
+#endif
+
+#endif  // !defined(LIBYUV_DISABLE_MSA) && defined(__mips_msa)
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_neon.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_neon.cc
index 49aa3b4..2a2181e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_neon.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_neon.cc
@@ -21,40 +21,70 @@
 #if !defined(LIBYUV_DISABLE_NEON) && defined(__ARM_NEON__) && \
     !defined(__aarch64__)
 
-uint32 SumSquareError_NEON(const uint8* src_a, const uint8* src_b, int count) {
-  volatile uint32 sse;
-  asm volatile (
-    "vmov.u8    q8, #0                         \n"
-    "vmov.u8    q10, #0                        \n"
-    "vmov.u8    q9, #0                         \n"
-    "vmov.u8    q11, #0                        \n"
+// 256 bits at a time
+// uses short accumulator which restricts count to 131 KB
+uint32_t HammingDistance_NEON(const uint8_t* src_a,
+                              const uint8_t* src_b,
+                              int count) {
+  uint32_t diff;
 
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"
-    MEMACCESS(1)
-    "vld1.8     {q1}, [%1]!                    \n"
-    "subs       %2, %2, #16                    \n"
-    "vsubl.u8   q2, d0, d2                     \n"
-    "vsubl.u8   q3, d1, d3                     \n"
-    "vmlal.s16  q8, d4, d4                     \n"
-    "vmlal.s16  q9, d6, d6                     \n"
-    "vmlal.s16  q10, d5, d5                    \n"
-    "vmlal.s16  q11, d7, d7                    \n"
-    "bgt        1b                             \n"
+  asm volatile(
+      "vmov.u16   q4, #0                         \n"  // accumulator
 
-    "vadd.u32   q8, q8, q9                     \n"
-    "vadd.u32   q10, q10, q11                  \n"
-    "vadd.u32   q11, q8, q10                   \n"
-    "vpaddl.u32 q1, q11                        \n"
-    "vadd.u64   d0, d2, d3                     \n"
-    "vmov.32    %3, d0[0]                      \n"
-    : "+r"(src_a),
-      "+r"(src_b),
-      "+r"(count),
-      "=r"(sse)
-    :
-    : "memory", "cc", "q0", "q1", "q2", "q3", "q8", "q9", "q10", "q11");
+      "1:                                        \n"
+      "vld1.8     {q0, q1}, [%0]!                \n"
+      "vld1.8     {q2, q3}, [%1]!                \n"
+      "veor.32    q0, q0, q2                     \n"
+      "veor.32    q1, q1, q3                     \n"
+      "vcnt.i8    q0, q0                         \n"
+      "vcnt.i8    q1, q1                         \n"
+      "subs       %2, %2, #32                    \n"
+      "vadd.u8    q0, q0, q1                     \n"  // 16 byte counts
+      "vpadal.u8  q4, q0                         \n"  // 8 shorts
+      "bgt        1b                             \n"
+
+      "vpaddl.u16 q0, q4                         \n"  // 4 ints
+      "vpadd.u32  d0, d0, d1                     \n"
+      "vpadd.u32  d0, d0, d0                     \n"
+      "vmov.32    %3, d0[0]                      \n"
+
+      : "+r"(src_a), "+r"(src_b), "+r"(count), "=r"(diff)
+      :
+      : "cc", "q0", "q1", "q2", "q3", "q4");
+  return diff;
+}
+
+uint32_t SumSquareError_NEON(const uint8_t* src_a,
+                             const uint8_t* src_b,
+                             int count) {
+  uint32_t sse;
+  asm volatile(
+      "vmov.u8    q8, #0                         \n"
+      "vmov.u8    q10, #0                        \n"
+      "vmov.u8    q9, #0                         \n"
+      "vmov.u8    q11, #0                        \n"
+
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0]!                    \n"
+      "vld1.8     {q1}, [%1]!                    \n"
+      "subs       %2, %2, #16                    \n"
+      "vsubl.u8   q2, d0, d2                     \n"
+      "vsubl.u8   q3, d1, d3                     \n"
+      "vmlal.s16  q8, d4, d4                     \n"
+      "vmlal.s16  q9, d6, d6                     \n"
+      "vmlal.s16  q10, d5, d5                    \n"
+      "vmlal.s16  q11, d7, d7                    \n"
+      "bgt        1b                             \n"
+
+      "vadd.u32   q8, q8, q9                     \n"
+      "vadd.u32   q10, q10, q11                  \n"
+      "vadd.u32   q11, q8, q10                   \n"
+      "vpaddl.u32 q1, q11                        \n"
+      "vadd.u64   d0, d2, d3                     \n"
+      "vmov.32    %3, d0[0]                      \n"
+      : "+r"(src_a), "+r"(src_b), "+r"(count), "=r"(sse)
+      :
+      : "memory", "cc", "q0", "q1", "q2", "q3", "q8", "q9", "q10", "q11");
   return sse;
 }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_neon64.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_neon64.cc
index f9c7df9..6e8f672 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_neon64.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_neon64.cc
@@ -20,39 +20,65 @@
 
 #if !defined(LIBYUV_DISABLE_NEON) && defined(__aarch64__)
 
-uint32 SumSquareError_NEON(const uint8* src_a, const uint8* src_b, int count) {
-  volatile uint32 sse;
-  asm volatile (
-    "eor        v16.16b, v16.16b, v16.16b      \n"
-    "eor        v18.16b, v18.16b, v18.16b      \n"
-    "eor        v17.16b, v17.16b, v17.16b      \n"
-    "eor        v19.16b, v19.16b, v19.16b      \n"
+// 256 bits at a time
+// uses short accumulator which restricts count to 131 KB
+uint32_t HammingDistance_NEON(const uint8_t* src_a,
+                              const uint8_t* src_b,
+                              int count) {
+  uint32_t diff;
+  asm volatile(
+      "movi       v4.8h, #0                      \n"
 
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"
-    MEMACCESS(1)
-    "ld1        {v1.16b}, [%1], #16            \n"
-    "subs       %w2, %w2, #16                  \n"
-    "usubl      v2.8h, v0.8b, v1.8b            \n"
-    "usubl2     v3.8h, v0.16b, v1.16b          \n"
-    "smlal      v16.4s, v2.4h, v2.4h           \n"
-    "smlal      v17.4s, v3.4h, v3.4h           \n"
-    "smlal2     v18.4s, v2.8h, v2.8h           \n"
-    "smlal2     v19.4s, v3.8h, v3.8h           \n"
-    "b.gt       1b                             \n"
+      "1:                                        \n"
+      "ld1        {v0.16b, v1.16b}, [%0], #32    \n"
+      "ld1        {v2.16b, v3.16b}, [%1], #32    \n"
+      "eor        v0.16b, v0.16b, v2.16b         \n"
+      "eor        v1.16b, v1.16b, v3.16b         \n"
+      "cnt        v0.16b, v0.16b                 \n"
+      "cnt        v1.16b, v1.16b                 \n"
+      "subs       %w2, %w2, #32                  \n"
+      "add        v0.16b, v0.16b, v1.16b         \n"
+      "uadalp     v4.8h, v0.16b                  \n"
+      "b.gt       1b                             \n"
 
-    "add        v16.4s, v16.4s, v17.4s         \n"
-    "add        v18.4s, v18.4s, v19.4s         \n"
-    "add        v19.4s, v16.4s, v18.4s         \n"
-    "addv       s0, v19.4s                     \n"
-    "fmov       %w3, s0                        \n"
-    : "+r"(src_a),
-      "+r"(src_b),
-      "+r"(count),
-      "=r"(sse)
-    :
-    : "cc", "v0", "v1", "v2", "v3", "v16", "v17", "v18", "v19");
+      "uaddlv     s4, v4.8h                      \n"
+      "fmov       %w3, s4                        \n"
+      : "+r"(src_a), "+r"(src_b), "+r"(count), "=r"(diff)
+      :
+      : "cc", "v0", "v1", "v2", "v3", "v4");
+  return diff;
+}
+
+uint32_t SumSquareError_NEON(const uint8_t* src_a,
+                             const uint8_t* src_b,
+                             int count) {
+  uint32_t sse;
+  asm volatile(
+      "eor        v16.16b, v16.16b, v16.16b      \n"
+      "eor        v18.16b, v18.16b, v18.16b      \n"
+      "eor        v17.16b, v17.16b, v17.16b      \n"
+      "eor        v19.16b, v19.16b, v19.16b      \n"
+
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], #16            \n"
+      "ld1        {v1.16b}, [%1], #16            \n"
+      "subs       %w2, %w2, #16                  \n"
+      "usubl      v2.8h, v0.8b, v1.8b            \n"
+      "usubl2     v3.8h, v0.16b, v1.16b          \n"
+      "smlal      v16.4s, v2.4h, v2.4h           \n"
+      "smlal      v17.4s, v3.4h, v3.4h           \n"
+      "smlal2     v18.4s, v2.8h, v2.8h           \n"
+      "smlal2     v19.4s, v3.8h, v3.8h           \n"
+      "b.gt       1b                             \n"
+
+      "add        v16.4s, v16.4s, v17.4s         \n"
+      "add        v18.4s, v18.4s, v19.4s         \n"
+      "add        v19.4s, v16.4s, v18.4s         \n"
+      "addv       s0, v19.4s                     \n"
+      "fmov       %w3, s0                        \n"
+      : "+r"(src_a), "+r"(src_b), "+r"(count), "=r"(sse)
+      :
+      : "cc", "v0", "v1", "v2", "v3", "v16", "v17", "v18", "v19");
   return sse;
 }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_win.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_win.cc
index dc86fe2..d57d3d9 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_win.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/compare_win.cc
@@ -13,20 +13,39 @@
 #include "libyuv/compare_row.h"
 #include "libyuv/row.h"
 
+#if defined(_MSC_VER)
+#include <intrin.h>  // For __popcnt
+#endif
+
 #ifdef __cplusplus
 namespace libyuv {
 extern "C" {
 #endif
 
 // This module is for 32 bit Visual C x86 and clangcl
-#if !defined(LIBYUV_DISABLE_X86) && defined(_M_IX86)
+#if !defined(LIBYUV_DISABLE_X86) && defined(_M_IX86) && defined(_MSC_VER)
 
-__declspec(naked)
-uint32 SumSquareError_SSE2(const uint8* src_a, const uint8* src_b, int count) {
+uint32_t HammingDistance_SSE42(const uint8_t* src_a,
+                               const uint8_t* src_b,
+                               int count) {
+  uint32_t diff = 0u;
+
+  int i;
+  for (i = 0; i < count - 3; i += 4) {
+    uint32_t x = *((uint32_t*)src_a) ^ *((uint32_t*)src_b);  // NOLINT
+    src_a += 4;
+    src_b += 4;
+    diff += __popcnt(x);
+  }
+  return diff;
+}
+
+__declspec(naked) uint32_t
+    SumSquareError_SSE2(const uint8_t* src_a, const uint8_t* src_b, int count) {
   __asm {
-    mov        eax, [esp + 4]    // src_a
-    mov        edx, [esp + 8]    // src_b
-    mov        ecx, [esp + 12]   // count
+    mov        eax, [esp + 4]  // src_a
+    mov        edx, [esp + 8]  // src_b
+    mov        ecx, [esp + 12]  // count
     pxor       xmm0, xmm0
     pxor       xmm5, xmm5
 
@@ -61,13 +80,13 @@
 // Visual C 2012 required for AVX2.
 #if _MSC_VER >= 1700
 // C4752: found Intel(R) Advanced Vector Extensions; consider using /arch:AVX.
-#pragma warning(disable: 4752)
-__declspec(naked)
-uint32 SumSquareError_AVX2(const uint8* src_a, const uint8* src_b, int count) {
+#pragma warning(disable : 4752)
+__declspec(naked) uint32_t
+    SumSquareError_AVX2(const uint8_t* src_a, const uint8_t* src_b, int count) {
   __asm {
-    mov        eax, [esp + 4]    // src_a
-    mov        edx, [esp + 8]    // src_b
-    mov        ecx, [esp + 12]   // count
+    mov        eax, [esp + 4]  // src_a
+    mov        edx, [esp + 8]  // src_b
+    mov        ecx, [esp + 12]  // count
     vpxor      ymm0, ymm0, ymm0  // sum
     vpxor      ymm5, ymm5, ymm5  // constant 0 for unpck
     sub        edx, eax
@@ -101,65 +120,65 @@
 }
 #endif  // _MSC_VER >= 1700
 
-uvec32 kHash16x33 = { 0x92d9e201, 0, 0, 0 };  // 33 ^ 16
+uvec32 kHash16x33 = {0x92d9e201, 0, 0, 0};  // 33 ^ 16
 uvec32 kHashMul0 = {
-  0x0c3525e1,  // 33 ^ 15
-  0xa3476dc1,  // 33 ^ 14
-  0x3b4039a1,  // 33 ^ 13
-  0x4f5f0981,  // 33 ^ 12
+    0x0c3525e1,  // 33 ^ 15
+    0xa3476dc1,  // 33 ^ 14
+    0x3b4039a1,  // 33 ^ 13
+    0x4f5f0981,  // 33 ^ 12
 };
 uvec32 kHashMul1 = {
-  0x30f35d61,  // 33 ^ 11
-  0x855cb541,  // 33 ^ 10
-  0x040a9121,  // 33 ^ 9
-  0x747c7101,  // 33 ^ 8
+    0x30f35d61,  // 33 ^ 11
+    0x855cb541,  // 33 ^ 10
+    0x040a9121,  // 33 ^ 9
+    0x747c7101,  // 33 ^ 8
 };
 uvec32 kHashMul2 = {
-  0xec41d4e1,  // 33 ^ 7
-  0x4cfa3cc1,  // 33 ^ 6
-  0x025528a1,  // 33 ^ 5
-  0x00121881,  // 33 ^ 4
+    0xec41d4e1,  // 33 ^ 7
+    0x4cfa3cc1,  // 33 ^ 6
+    0x025528a1,  // 33 ^ 5
+    0x00121881,  // 33 ^ 4
 };
 uvec32 kHashMul3 = {
-  0x00008c61,  // 33 ^ 3
-  0x00000441,  // 33 ^ 2
-  0x00000021,  // 33 ^ 1
-  0x00000001,  // 33 ^ 0
+    0x00008c61,  // 33 ^ 3
+    0x00000441,  // 33 ^ 2
+    0x00000021,  // 33 ^ 1
+    0x00000001,  // 33 ^ 0
 };
 
-__declspec(naked)
-uint32 HashDjb2_SSE41(const uint8* src, int count, uint32 seed) {
+__declspec(naked) uint32_t
+    HashDjb2_SSE41(const uint8_t* src, int count, uint32_t seed) {
   __asm {
-    mov        eax, [esp + 4]    // src
-    mov        ecx, [esp + 8]    // count
+    mov        eax, [esp + 4]  // src
+    mov        ecx, [esp + 8]  // count
     movd       xmm0, [esp + 12]  // seed
 
-    pxor       xmm7, xmm7        // constant 0 for unpck
+    pxor       xmm7, xmm7  // constant 0 for unpck
     movdqa     xmm6, xmmword ptr kHash16x33
 
   wloop:
-    movdqu     xmm1, [eax]       // src[0-15]
+    movdqu     xmm1, [eax]  // src[0-15]
     lea        eax, [eax + 16]
-    pmulld     xmm0, xmm6        // hash *= 33 ^ 16
+    pmulld     xmm0, xmm6  // hash *= 33 ^ 16
     movdqa     xmm5, xmmword ptr kHashMul0
     movdqa     xmm2, xmm1
-    punpcklbw  xmm2, xmm7        // src[0-7]
+    punpcklbw  xmm2, xmm7  // src[0-7]
     movdqa     xmm3, xmm2
-    punpcklwd  xmm3, xmm7        // src[0-3]
+    punpcklwd  xmm3, xmm7  // src[0-3]
     pmulld     xmm3, xmm5
     movdqa     xmm5, xmmword ptr kHashMul1
     movdqa     xmm4, xmm2
-    punpckhwd  xmm4, xmm7        // src[4-7]
+    punpckhwd  xmm4, xmm7  // src[4-7]
     pmulld     xmm4, xmm5
     movdqa     xmm5, xmmword ptr kHashMul2
-    punpckhbw  xmm1, xmm7        // src[8-15]
+    punpckhbw  xmm1, xmm7  // src[8-15]
     movdqa     xmm2, xmm1
-    punpcklwd  xmm2, xmm7        // src[8-11]
+    punpcklwd  xmm2, xmm7  // src[8-11]
     pmulld     xmm2, xmm5
     movdqa     xmm5, xmmword ptr kHashMul3
-    punpckhwd  xmm1, xmm7        // src[12-15]
+    punpckhwd  xmm1, xmm7  // src[12-15]
     pmulld     xmm1, xmm5
-    paddd      xmm3, xmm4        // add 16 results
+    paddd      xmm3, xmm4  // add 16 results
     paddd      xmm1, xmm2
     paddd      xmm1, xmm3
 
@@ -171,18 +190,18 @@
     sub        ecx, 16
     jg         wloop
 
-    movd       eax, xmm0         // return hash
+    movd       eax, xmm0  // return hash
     ret
   }
 }
 
 // Visual C 2012 required for AVX2.
 #if _MSC_VER >= 1700
-__declspec(naked)
-uint32 HashDjb2_AVX2(const uint8* src, int count, uint32 seed) {
+__declspec(naked) uint32_t
+    HashDjb2_AVX2(const uint8_t* src, int count, uint32_t seed) {
   __asm {
-    mov        eax, [esp + 4]    // src
-    mov        ecx, [esp + 8]    // count
+    mov        eax, [esp + 4]  // src
+    mov        ecx, [esp + 8]  // count
     vmovd      xmm0, [esp + 12]  // seed
 
   wloop:
@@ -196,7 +215,7 @@
     vpmulld    xmm2, xmm2, xmmword ptr kHashMul2
     lea        eax, [eax + 16]
     vpmulld    xmm1, xmm1, xmmword ptr kHashMul3
-    vpaddd     xmm3, xmm3, xmm4        // add 16 results
+    vpaddd     xmm3, xmm3, xmm4  // add 16 results
     vpaddd     xmm1, xmm1, xmm2
     vpaddd     xmm1, xmm1, xmm3
     vpshufd    xmm2, xmm1, 0x0e  // upper 2 dwords
@@ -207,7 +226,7 @@
     sub        ecx, 16
     jg         wloop
 
-    vmovd      eax, xmm0         // return hash
+    vmovd      eax, xmm0  // return hash
     vzeroupper
     ret
   }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert.cc
index a33742d..375cc73 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert.cc
@@ -14,8 +14,8 @@
 #include "libyuv/cpu_id.h"
 #include "libyuv/planar_functions.h"
 #include "libyuv/rotate.h"
-#include "libyuv/scale.h"  // For ScalePlane()
 #include "libyuv/row.h"
+#include "libyuv/scale.h"  // For ScalePlane()
 
 #ifdef __cplusplus
 namespace libyuv {
@@ -28,14 +28,22 @@
 }
 
 // Any I4xx To I420 format with mirroring.
-static int I4xxToI420(const uint8* src_y, int src_stride_y,
-                      const uint8* src_u, int src_stride_u,
-                      const uint8* src_v, int src_stride_v,
-                      uint8* dst_y, int dst_stride_y,
-                      uint8* dst_u, int dst_stride_u,
-                      uint8* dst_v, int dst_stride_v,
-                      int src_y_width, int src_y_height,
-                      int src_uv_width, int src_uv_height) {
+static int I4xxToI420(const uint8_t* src_y,
+                      int src_stride_y,
+                      const uint8_t* src_u,
+                      int src_stride_u,
+                      const uint8_t* src_v,
+                      int src_stride_v,
+                      uint8_t* dst_y,
+                      int dst_stride_y,
+                      uint8_t* dst_u,
+                      int dst_stride_u,
+                      uint8_t* dst_v,
+                      int dst_stride_v,
+                      int src_y_width,
+                      int src_y_height,
+                      int src_uv_width,
+                      int src_uv_height) {
   const int dst_y_width = Abs(src_y_width);
   const int dst_y_height = Abs(src_y_height);
   const int dst_uv_width = SUBSAMPLE(dst_y_width, 1, 1);
@@ -44,35 +52,37 @@
     return -1;
   }
   if (dst_y) {
-    ScalePlane(src_y, src_stride_y, src_y_width, src_y_height,
-               dst_y, dst_stride_y, dst_y_width, dst_y_height,
-               kFilterBilinear);
+    ScalePlane(src_y, src_stride_y, src_y_width, src_y_height, dst_y,
+               dst_stride_y, dst_y_width, dst_y_height, kFilterBilinear);
   }
-  ScalePlane(src_u, src_stride_u, src_uv_width, src_uv_height,
-             dst_u, dst_stride_u, dst_uv_width, dst_uv_height,
-             kFilterBilinear);
-  ScalePlane(src_v, src_stride_v, src_uv_width, src_uv_height,
-             dst_v, dst_stride_v, dst_uv_width, dst_uv_height,
-             kFilterBilinear);
+  ScalePlane(src_u, src_stride_u, src_uv_width, src_uv_height, dst_u,
+             dst_stride_u, dst_uv_width, dst_uv_height, kFilterBilinear);
+  ScalePlane(src_v, src_stride_v, src_uv_width, src_uv_height, dst_v,
+             dst_stride_v, dst_uv_width, dst_uv_height, kFilterBilinear);
   return 0;
 }
 
-// Copy I420 with optional flipping
+// Copy I420 with optional flipping.
 // TODO(fbarchard): Use Scale plane which supports mirroring, but ensure
 // is does row coalescing.
 LIBYUV_API
-int I420Copy(const uint8* src_y, int src_stride_y,
-             const uint8* src_u, int src_stride_u,
-             const uint8* src_v, int src_stride_v,
-             uint8* dst_y, int dst_stride_y,
-             uint8* dst_u, int dst_stride_u,
-             uint8* dst_v, int dst_stride_v,
-             int width, int height) {
+int I420Copy(const uint8_t* src_y,
+             int src_stride_y,
+             const uint8_t* src_u,
+             int src_stride_u,
+             const uint8_t* src_v,
+             int src_stride_v,
+             uint8_t* dst_y,
+             int dst_stride_y,
+             uint8_t* dst_u,
+             int dst_stride_u,
+             uint8_t* dst_v,
+             int dst_stride_v,
+             int width,
+             int height) {
   int halfwidth = (width + 1) >> 1;
   int halfheight = (height + 1) >> 1;
-  if (!src_u || !src_v ||
-      !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_u || !src_v || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -96,79 +106,152 @@
   return 0;
 }
 
+// Copy I010 with optional flipping.
+LIBYUV_API
+int I010Copy(const uint16_t* src_y,
+             int src_stride_y,
+             const uint16_t* src_u,
+             int src_stride_u,
+             const uint16_t* src_v,
+             int src_stride_v,
+             uint16_t* dst_y,
+             int dst_stride_y,
+             uint16_t* dst_u,
+             int dst_stride_u,
+             uint16_t* dst_v,
+             int dst_stride_v,
+             int width,
+             int height) {
+  int halfwidth = (width + 1) >> 1;
+  int halfheight = (height + 1) >> 1;
+  if (!src_u || !src_v || !dst_u || !dst_v || width <= 0 || height == 0) {
+    return -1;
+  }
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    halfheight = (height + 1) >> 1;
+    src_y = src_y + (height - 1) * src_stride_y;
+    src_u = src_u + (halfheight - 1) * src_stride_u;
+    src_v = src_v + (halfheight - 1) * src_stride_v;
+    src_stride_y = -src_stride_y;
+    src_stride_u = -src_stride_u;
+    src_stride_v = -src_stride_v;
+  }
+
+  if (dst_y) {
+    CopyPlane_16(src_y, src_stride_y, dst_y, dst_stride_y, width, height);
+  }
+  // Copy UV planes.
+  CopyPlane_16(src_u, src_stride_u, dst_u, dst_stride_u, halfwidth, halfheight);
+  CopyPlane_16(src_v, src_stride_v, dst_v, dst_stride_v, halfwidth, halfheight);
+  return 0;
+}
+
+// Convert 10 bit YUV to 8 bit.
+LIBYUV_API
+int I010ToI420(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
+  int halfwidth = (width + 1) >> 1;
+  int halfheight = (height + 1) >> 1;
+  if (!src_u || !src_v || !dst_u || !dst_v || width <= 0 || height == 0) {
+    return -1;
+  }
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    halfheight = (height + 1) >> 1;
+    src_y = src_y + (height - 1) * src_stride_y;
+    src_u = src_u + (halfheight - 1) * src_stride_u;
+    src_v = src_v + (halfheight - 1) * src_stride_v;
+    src_stride_y = -src_stride_y;
+    src_stride_u = -src_stride_u;
+    src_stride_v = -src_stride_v;
+  }
+
+  // Convert Y plane.
+  Convert16To8Plane(src_y, src_stride_y, dst_y, dst_stride_y, 16384, width,
+                    height);
+  // Convert UV planes.
+  Convert16To8Plane(src_u, src_stride_u, dst_u, dst_stride_u, 16384, halfwidth,
+                    halfheight);
+  Convert16To8Plane(src_v, src_stride_v, dst_v, dst_stride_v, 16384, halfwidth,
+                    halfheight);
+  return 0;
+}
+
 // 422 chroma is 1/2 width, 1x height
 // 420 chroma is 1/2 width, 1/2 height
 LIBYUV_API
-int I422ToI420(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int I422ToI420(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   const int src_uv_width = SUBSAMPLE(width, 1, 1);
-  return I4xxToI420(src_y, src_stride_y,
-                    src_u, src_stride_u,
-                    src_v, src_stride_v,
-                    dst_y, dst_stride_y,
-                    dst_u, dst_stride_u,
-                    dst_v, dst_stride_v,
-                    width, height,
-                    src_uv_width, height);
+  return I4xxToI420(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                    src_stride_v, dst_y, dst_stride_y, dst_u, dst_stride_u,
+                    dst_v, dst_stride_v, width, height, src_uv_width, height);
 }
 
 // 444 chroma is 1x width, 1x height
 // 420 chroma is 1/2 width, 1/2 height
 LIBYUV_API
-int I444ToI420(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
-  return I4xxToI420(src_y, src_stride_y,
-                    src_u, src_stride_u,
-                    src_v, src_stride_v,
-                    dst_y, dst_stride_y,
-                    dst_u, dst_stride_u,
-                    dst_v, dst_stride_v,
-                    width, height,
-                    width, height);
-}
-
-// 411 chroma is 1/4 width, 1x height
-// 420 chroma is 1/2 width, 1/2 height
-LIBYUV_API
-int I411ToI420(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
-  const int src_uv_width = SUBSAMPLE(width, 3, 2);
-  return I4xxToI420(src_y, src_stride_y,
-                    src_u, src_stride_u,
-                    src_v, src_stride_v,
-                    dst_y, dst_stride_y,
-                    dst_u, dst_stride_u,
-                    dst_v, dst_stride_v,
-                    width, height,
-                    src_uv_width, height);
+int I444ToI420(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
+  return I4xxToI420(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                    src_stride_v, dst_y, dst_stride_y, dst_u, dst_stride_u,
+                    dst_v, dst_stride_v, width, height, width, height);
 }
 
 // I400 is greyscale typically used in MJPG
 LIBYUV_API
-int I400ToI420(const uint8* src_y, int src_stride_y,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int I400ToI420(const uint8_t* src_y,
+               int src_stride_y,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   int halfwidth = (width + 1) >> 1;
   int halfheight = (height + 1) >> 1;
-  if (!dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -186,11 +269,15 @@
   return 0;
 }
 
-static void CopyPlane2(const uint8* src, int src_stride_0, int src_stride_1,
-                       uint8* dst, int dst_stride,
-                       int width, int height) {
+static void CopyPlane2(const uint8_t* src,
+                       int src_stride_0,
+                       int src_stride_1,
+                       uint8_t* dst,
+                       int dst_stride,
+                       int width,
+                       int height) {
   int y;
-  void (*CopyRow)(const uint8* src, uint8* dst, int width) = CopyRow_C;
+  void (*CopyRow)(const uint8_t* src, uint8_t* dst, int width) = CopyRow_C;
 #if defined(HAS_COPYROW_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
     CopyRow = IS_ALIGNED(width, 32) ? CopyRow_SSE2 : CopyRow_Any_SSE2;
@@ -211,11 +298,6 @@
     CopyRow = IS_ALIGNED(width, 32) ? CopyRow_NEON : CopyRow_Any_NEON;
   }
 #endif
-#if defined(HAS_COPYROW_MIPS)
-  if (TestCpuFlag(kCpuHasMIPS)) {
-    CopyRow = CopyRow_MIPS;
-  }
-#endif
 
   // Copy plane
   for (y = 0; y < height - 1; y += 2) {
@@ -238,17 +320,22 @@
 // src_stride_m420 is row planar. Normally this will be the width in pixels.
 //   The UV plane is half width, but 2 values, so src_stride_m420 applies to
 //   this as well as the two Y planes.
-static int X420ToI420(const uint8* src_y,
-                      int src_stride_y0, int src_stride_y1,
-                      const uint8* src_uv, int src_stride_uv,
-                      uint8* dst_y, int dst_stride_y,
-                      uint8* dst_u, int dst_stride_u,
-                      uint8* dst_v, int dst_stride_v,
-                      int width, int height) {
+static int X420ToI420(const uint8_t* src_y,
+                      int src_stride_y0,
+                      int src_stride_y1,
+                      const uint8_t* src_uv,
+                      int src_stride_uv,
+                      uint8_t* dst_y,
+                      int dst_stride_y,
+                      uint8_t* dst_u,
+                      int dst_stride_u,
+                      uint8_t* dst_v,
+                      int dst_stride_v,
+                      int width,
+                      int height) {
   int halfwidth = (width + 1) >> 1;
   int halfheight = (height + 1) >> 1;
-  if (!src_uv || !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_uv || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -265,16 +352,14 @@
     dst_stride_v = -dst_stride_v;
   }
   // Coalesce rows.
-  if (src_stride_y0 == width &&
-      src_stride_y1 == width &&
+  if (src_stride_y0 == width && src_stride_y1 == width &&
       dst_stride_y == width) {
     width *= height;
     height = 1;
     src_stride_y0 = src_stride_y1 = dst_stride_y = 0;
   }
   // Coalesce rows.
-  if (src_stride_uv == halfwidth * 2 &&
-      dst_stride_u == halfwidth &&
+  if (src_stride_uv == halfwidth * 2 && dst_stride_u == halfwidth &&
       dst_stride_v == halfwidth) {
     halfwidth *= halfheight;
     halfheight = 1;
@@ -299,63 +384,78 @@
 
 // Convert NV12 to I420.
 LIBYUV_API
-int NV12ToI420(const uint8* src_y, int src_stride_y,
-               const uint8* src_uv, int src_stride_uv,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
-  return X420ToI420(src_y, src_stride_y, src_stride_y,
-                    src_uv, src_stride_uv,
-                    dst_y, dst_stride_y,
-                    dst_u, dst_stride_u,
-                    dst_v, dst_stride_v,
-                    width, height);
+int NV12ToI420(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_uv,
+               int src_stride_uv,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
+  return X420ToI420(src_y, src_stride_y, src_stride_y, src_uv, src_stride_uv,
+                    dst_y, dst_stride_y, dst_u, dst_stride_u, dst_v,
+                    dst_stride_v, width, height);
 }
 
 // Convert NV21 to I420.  Same as NV12 but u and v pointers swapped.
 LIBYUV_API
-int NV21ToI420(const uint8* src_y, int src_stride_y,
-               const uint8* src_vu, int src_stride_vu,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
-  return X420ToI420(src_y, src_stride_y, src_stride_y,
-                    src_vu, src_stride_vu,
-                    dst_y, dst_stride_y,
-                    dst_v, dst_stride_v,
-                    dst_u, dst_stride_u,
-                    width, height);
+int NV21ToI420(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_vu,
+               int src_stride_vu,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
+  return X420ToI420(src_y, src_stride_y, src_stride_y, src_vu, src_stride_vu,
+                    dst_y, dst_stride_y, dst_v, dst_stride_v, dst_u,
+                    dst_stride_u, width, height);
 }
 
 // Convert M420 to I420.
 LIBYUV_API
-int M420ToI420(const uint8* src_m420, int src_stride_m420,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int M420ToI420(const uint8_t* src_m420,
+               int src_stride_m420,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   return X420ToI420(src_m420, src_stride_m420, src_stride_m420 * 2,
-                    src_m420 + src_stride_m420 * 2, src_stride_m420 * 3,
-                    dst_y, dst_stride_y,
-                    dst_u, dst_stride_u,
-                    dst_v, dst_stride_v,
+                    src_m420 + src_stride_m420 * 2, src_stride_m420 * 3, dst_y,
+                    dst_stride_y, dst_u, dst_stride_u, dst_v, dst_stride_v,
                     width, height);
 }
 
 // Convert YUY2 to I420.
 LIBYUV_API
-int YUY2ToI420(const uint8* src_yuy2, int src_stride_yuy2,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int YUY2ToI420(const uint8_t* src_yuy2,
+               int src_stride_yuy2,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   int y;
-  void (*YUY2ToUVRow)(const uint8* src_yuy2, int src_stride_yuy2,
-      uint8* dst_u, uint8* dst_v, int width) = YUY2ToUVRow_C;
-  void (*YUY2ToYRow)(const uint8* src_yuy2,
-      uint8* dst_y, int width) = YUY2ToYRow_C;
+  void (*YUY2ToUVRow)(const uint8_t* src_yuy2, int src_stride_yuy2,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      YUY2ToUVRow_C;
+  void (*YUY2ToYRow)(const uint8_t* src_yuy2, uint8_t* dst_y, int width) =
+      YUY2ToYRow_C;
   // Negative height means invert the image.
   if (height < 0) {
     height = -height;
@@ -392,6 +492,16 @@
     }
   }
 #endif
+#if defined(HAS_YUY2TOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    YUY2ToYRow = YUY2ToYRow_Any_MSA;
+    YUY2ToUVRow = YUY2ToUVRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      YUY2ToYRow = YUY2ToYRow_MSA;
+      YUY2ToUVRow = YUY2ToUVRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height - 1; y += 2) {
     YUY2ToUVRow(src_yuy2, src_stride_yuy2, dst_u, dst_v, width);
@@ -411,16 +521,22 @@
 
 // Convert UYVY to I420.
 LIBYUV_API
-int UYVYToI420(const uint8* src_uyvy, int src_stride_uyvy,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int UYVYToI420(const uint8_t* src_uyvy,
+               int src_stride_uyvy,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   int y;
-  void (*UYVYToUVRow)(const uint8* src_uyvy, int src_stride_uyvy,
-      uint8* dst_u, uint8* dst_v, int width) = UYVYToUVRow_C;
-  void (*UYVYToYRow)(const uint8* src_uyvy,
-      uint8* dst_y, int width) = UYVYToYRow_C;
+  void (*UYVYToUVRow)(const uint8_t* src_uyvy, int src_stride_uyvy,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      UYVYToUVRow_C;
+  void (*UYVYToYRow)(const uint8_t* src_uyvy, uint8_t* dst_y, int width) =
+      UYVYToYRow_C;
   // Negative height means invert the image.
   if (height < 0) {
     height = -height;
@@ -457,6 +573,16 @@
     }
   }
 #endif
+#if defined(HAS_UYVYTOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    UYVYToYRow = UYVYToYRow_Any_MSA;
+    UYVYToUVRow = UYVYToUVRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      UYVYToYRow = UYVYToYRow_MSA;
+      UYVYToUVRow = UYVYToUVRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height - 1; y += 2) {
     UYVYToUVRow(src_uyvy, src_stride_uyvy, dst_u, dst_v, width);
@@ -476,19 +602,23 @@
 
 // Convert ARGB to I420.
 LIBYUV_API
-int ARGBToI420(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int ARGBToI420(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   int y;
-  void (*ARGBToUVRow)(const uint8* src_argb0, int src_stride_argb,
-      uint8* dst_u, uint8* dst_v, int width) = ARGBToUVRow_C;
-  void (*ARGBToYRow)(const uint8* src_argb, uint8* dst_y, int width) =
+  void (*ARGBToUVRow)(const uint8_t* src_argb0, int src_stride_argb,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ARGBToUVRow_C;
+  void (*ARGBToYRow)(const uint8_t* src_argb, uint8_t* dst_y, int width) =
       ARGBToYRow_C;
-  if (!src_argb ||
-      !dst_y || !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_argb || !dst_y || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -533,6 +663,22 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToYRow = ARGBToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToYRow = ARGBToYRow_MSA;
+    }
+  }
+#endif
+#if defined(HAS_ARGBTOUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToUVRow = ARGBToUVRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      ARGBToUVRow = ARGBToUVRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height - 1; y += 2) {
     ARGBToUVRow(src_argb, src_stride_argb, dst_u, dst_v, width);
@@ -552,19 +698,23 @@
 
 // Convert BGRA to I420.
 LIBYUV_API
-int BGRAToI420(const uint8* src_bgra, int src_stride_bgra,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int BGRAToI420(const uint8_t* src_bgra,
+               int src_stride_bgra,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   int y;
-  void (*BGRAToUVRow)(const uint8* src_bgra0, int src_stride_bgra,
-      uint8* dst_u, uint8* dst_v, int width) = BGRAToUVRow_C;
-  void (*BGRAToYRow)(const uint8* src_bgra, uint8* dst_y, int width) =
+  void (*BGRAToUVRow)(const uint8_t* src_bgra0, int src_stride_bgra,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      BGRAToUVRow_C;
+  void (*BGRAToYRow)(const uint8_t* src_bgra, uint8_t* dst_y, int width) =
       BGRAToYRow_C;
-  if (!src_bgra ||
-      !dst_y || !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_bgra || !dst_y || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -592,12 +742,28 @@
   }
 #endif
 #if defined(HAS_BGRATOUVROW_NEON)
-    if (TestCpuFlag(kCpuHasNEON)) {
-      BGRAToUVRow = BGRAToUVRow_Any_NEON;
-      if (IS_ALIGNED(width, 16)) {
-        BGRAToUVRow = BGRAToUVRow_NEON;
-      }
+  if (TestCpuFlag(kCpuHasNEON)) {
+    BGRAToUVRow = BGRAToUVRow_Any_NEON;
+    if (IS_ALIGNED(width, 16)) {
+      BGRAToUVRow = BGRAToUVRow_NEON;
     }
+  }
+#endif
+#if defined(HAS_BGRATOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    BGRAToYRow = BGRAToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      BGRAToYRow = BGRAToYRow_MSA;
+    }
+  }
+#endif
+#if defined(HAS_BGRATOUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    BGRAToUVRow = BGRAToUVRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      BGRAToUVRow = BGRAToUVRow_MSA;
+    }
+  }
 #endif
 
   for (y = 0; y < height - 1; y += 2) {
@@ -618,19 +784,23 @@
 
 // Convert ABGR to I420.
 LIBYUV_API
-int ABGRToI420(const uint8* src_abgr, int src_stride_abgr,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int ABGRToI420(const uint8_t* src_abgr,
+               int src_stride_abgr,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   int y;
-  void (*ABGRToUVRow)(const uint8* src_abgr0, int src_stride_abgr,
-      uint8* dst_u, uint8* dst_v, int width) = ABGRToUVRow_C;
-  void (*ABGRToYRow)(const uint8* src_abgr, uint8* dst_y, int width) =
+  void (*ABGRToUVRow)(const uint8_t* src_abgr0, int src_stride_abgr,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ABGRToUVRow_C;
+  void (*ABGRToYRow)(const uint8_t* src_abgr, uint8_t* dst_y, int width) =
       ABGRToYRow_C;
-  if (!src_abgr ||
-      !dst_y || !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_abgr || !dst_y || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -665,6 +835,22 @@
     }
   }
 #endif
+#if defined(HAS_ABGRTOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ABGRToYRow = ABGRToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ABGRToYRow = ABGRToYRow_MSA;
+    }
+  }
+#endif
+#if defined(HAS_ABGRTOUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ABGRToUVRow = ABGRToUVRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ABGRToUVRow = ABGRToUVRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height - 1; y += 2) {
     ABGRToUVRow(src_abgr, src_stride_abgr, dst_u, dst_v, width);
@@ -684,19 +870,23 @@
 
 // Convert RGBA to I420.
 LIBYUV_API
-int RGBAToI420(const uint8* src_rgba, int src_stride_rgba,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int RGBAToI420(const uint8_t* src_rgba,
+               int src_stride_rgba,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   int y;
-  void (*RGBAToUVRow)(const uint8* src_rgba0, int src_stride_rgba,
-      uint8* dst_u, uint8* dst_v, int width) = RGBAToUVRow_C;
-  void (*RGBAToYRow)(const uint8* src_rgba, uint8* dst_y, int width) =
+  void (*RGBAToUVRow)(const uint8_t* src_rgba0, int src_stride_rgba,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      RGBAToUVRow_C;
+  void (*RGBAToYRow)(const uint8_t* src_rgba, uint8_t* dst_y, int width) =
       RGBAToYRow_C;
-  if (!src_rgba ||
-      !dst_y || !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_rgba || !dst_y || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -731,6 +921,22 @@
     }
   }
 #endif
+#if defined(HAS_RGBATOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    RGBAToYRow = RGBAToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      RGBAToYRow = RGBAToYRow_MSA;
+    }
+  }
+#endif
+#if defined(HAS_RGBATOUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    RGBAToUVRow = RGBAToUVRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      RGBAToUVRow = RGBAToUVRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height - 1; y += 2) {
     RGBAToUVRow(src_rgba, src_stride_rgba, dst_u, dst_v, width);
@@ -750,27 +956,33 @@
 
 // Convert RGB24 to I420.
 LIBYUV_API
-int RGB24ToI420(const uint8* src_rgb24, int src_stride_rgb24,
-                uint8* dst_y, int dst_stride_y,
-                uint8* dst_u, int dst_stride_u,
-                uint8* dst_v, int dst_stride_v,
-                int width, int height) {
+int RGB24ToI420(const uint8_t* src_rgb24,
+                int src_stride_rgb24,
+                uint8_t* dst_y,
+                int dst_stride_y,
+                uint8_t* dst_u,
+                int dst_stride_u,
+                uint8_t* dst_v,
+                int dst_stride_v,
+                int width,
+                int height) {
   int y;
-#if defined(HAS_RGB24TOYROW_NEON)
-  void (*RGB24ToUVRow)(const uint8* src_rgb24, int src_stride_rgb24,
-      uint8* dst_u, uint8* dst_v, int width) = RGB24ToUVRow_C;
-  void (*RGB24ToYRow)(const uint8* src_rgb24, uint8* dst_y, int width) =
+#if (defined(HAS_RGB24TOYROW_NEON) || defined(HAS_RGB24TOYROW_MSA))
+  void (*RGB24ToUVRow)(const uint8_t* src_rgb24, int src_stride_rgb24,
+                       uint8_t* dst_u, uint8_t* dst_v, int width) =
+      RGB24ToUVRow_C;
+  void (*RGB24ToYRow)(const uint8_t* src_rgb24, uint8_t* dst_y, int width) =
       RGB24ToYRow_C;
 #else
-  void (*RGB24ToARGBRow)(const uint8* src_rgb, uint8* dst_argb, int width) =
+  void (*RGB24ToARGBRow)(const uint8_t* src_rgb, uint8_t* dst_argb, int width) =
       RGB24ToARGBRow_C;
-  void (*ARGBToUVRow)(const uint8* src_argb0, int src_stride_argb,
-      uint8* dst_u, uint8* dst_v, int width) = ARGBToUVRow_C;
-  void (*ARGBToYRow)(const uint8* src_argb, uint8* dst_y, int width) =
+  void (*ARGBToUVRow)(const uint8_t* src_argb0, int src_stride_argb,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ARGBToUVRow_C;
+  void (*ARGBToYRow)(const uint8_t* src_argb, uint8_t* dst_y, int width) =
       ARGBToYRow_C;
 #endif
-  if (!src_rgb24 || !dst_y || !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_rgb24 || !dst_y || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -792,6 +1004,15 @@
       }
     }
   }
+#elif defined(HAS_RGB24TOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    RGB24ToUVRow = RGB24ToUVRow_Any_MSA;
+    RGB24ToYRow = RGB24ToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      RGB24ToYRow = RGB24ToYRow_MSA;
+      RGB24ToUVRow = RGB24ToUVRow_MSA;
+    }
+  }
 // Other platforms do intermediate conversion from RGB24 to ARGB.
 #else
 #if defined(HAS_RGB24TOARGBROW_SSSE3)
@@ -822,14 +1043,17 @@
     }
   }
 #endif
+#endif
+
   {
+#if !(defined(HAS_RGB24TOYROW_NEON) || defined(HAS_RGB24TOYROW_MSA))
     // Allocate 2 rows of ARGB.
     const int kRowSize = (width * 4 + 31) & ~31;
     align_buffer_64(row, kRowSize * 2);
 #endif
 
     for (y = 0; y < height - 1; y += 2) {
-#if defined(HAS_RGB24TOYROW_NEON)
+#if (defined(HAS_RGB24TOYROW_NEON) || defined(HAS_RGB24TOYROW_MSA))
       RGB24ToUVRow(src_rgb24, src_stride_rgb24, dst_u, dst_v, width);
       RGB24ToYRow(src_rgb24, dst_y, width);
       RGB24ToYRow(src_rgb24 + src_stride_rgb24, dst_y + dst_stride_y, width);
@@ -846,7 +1070,7 @@
       dst_v += dst_stride_v;
     }
     if (height & 1) {
-#if defined(HAS_RGB24TOYROW_NEON)
+#if (defined(HAS_RGB24TOYROW_NEON) || defined(HAS_RGB24TOYROW_MSA))
       RGB24ToUVRow(src_rgb24, 0, dst_u, dst_v, width);
       RGB24ToYRow(src_rgb24, dst_y, width);
 #else
@@ -855,36 +1079,41 @@
       ARGBToYRow(row, dst_y, width);
 #endif
     }
-#if !defined(HAS_RGB24TOYROW_NEON)
+#if !(defined(HAS_RGB24TOYROW_NEON) || defined(HAS_RGB24TOYROW_MSA))
     free_aligned_buffer_64(row);
-  }
 #endif
+  }
   return 0;
 }
 
 // Convert RAW to I420.
 LIBYUV_API
-int RAWToI420(const uint8* src_raw, int src_stride_raw,
-              uint8* dst_y, int dst_stride_y,
-              uint8* dst_u, int dst_stride_u,
-              uint8* dst_v, int dst_stride_v,
-              int width, int height) {
+int RAWToI420(const uint8_t* src_raw,
+              int src_stride_raw,
+              uint8_t* dst_y,
+              int dst_stride_y,
+              uint8_t* dst_u,
+              int dst_stride_u,
+              uint8_t* dst_v,
+              int dst_stride_v,
+              int width,
+              int height) {
   int y;
-#if defined(HAS_RAWTOYROW_NEON)
-  void (*RAWToUVRow)(const uint8* src_raw, int src_stride_raw,
-      uint8* dst_u, uint8* dst_v, int width) = RAWToUVRow_C;
-  void (*RAWToYRow)(const uint8* src_raw, uint8* dst_y, int width) =
+#if (defined(HAS_RAWTOYROW_NEON) || defined(HAS_RAWTOYROW_MSA))
+  void (*RAWToUVRow)(const uint8_t* src_raw, int src_stride_raw, uint8_t* dst_u,
+                     uint8_t* dst_v, int width) = RAWToUVRow_C;
+  void (*RAWToYRow)(const uint8_t* src_raw, uint8_t* dst_y, int width) =
       RAWToYRow_C;
 #else
-  void (*RAWToARGBRow)(const uint8* src_rgb, uint8* dst_argb, int width) =
+  void (*RAWToARGBRow)(const uint8_t* src_rgb, uint8_t* dst_argb, int width) =
       RAWToARGBRow_C;
-  void (*ARGBToUVRow)(const uint8* src_argb0, int src_stride_argb,
-      uint8* dst_u, uint8* dst_v, int width) = ARGBToUVRow_C;
-  void (*ARGBToYRow)(const uint8* src_argb, uint8* dst_y, int width) =
+  void (*ARGBToUVRow)(const uint8_t* src_argb0, int src_stride_argb,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ARGBToUVRow_C;
+  void (*ARGBToYRow)(const uint8_t* src_argb, uint8_t* dst_y, int width) =
       ARGBToYRow_C;
 #endif
-  if (!src_raw || !dst_y || !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_raw || !dst_y || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -906,6 +1135,15 @@
       }
     }
   }
+#elif defined(HAS_RAWTOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    RAWToUVRow = RAWToUVRow_Any_MSA;
+    RAWToYRow = RAWToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      RAWToYRow = RAWToYRow_MSA;
+      RAWToUVRow = RAWToUVRow_MSA;
+    }
+  }
 // Other platforms do intermediate conversion from RAW to ARGB.
 #else
 #if defined(HAS_RAWTOARGBROW_SSSE3)
@@ -936,14 +1174,17 @@
     }
   }
 #endif
+#endif
+
   {
+#if !(defined(HAS_RAWTOYROW_NEON) || defined(HAS_RAWTOYROW_MSA))
     // Allocate 2 rows of ARGB.
     const int kRowSize = (width * 4 + 31) & ~31;
     align_buffer_64(row, kRowSize * 2);
 #endif
 
     for (y = 0; y < height - 1; y += 2) {
-#if defined(HAS_RAWTOYROW_NEON)
+#if (defined(HAS_RAWTOYROW_NEON) || defined(HAS_RAWTOYROW_MSA))
       RAWToUVRow(src_raw, src_stride_raw, dst_u, dst_v, width);
       RAWToYRow(src_raw, dst_y, width);
       RAWToYRow(src_raw + src_stride_raw, dst_y + dst_stride_y, width);
@@ -960,7 +1201,7 @@
       dst_v += dst_stride_v;
     }
     if (height & 1) {
-#if defined(HAS_RAWTOYROW_NEON)
+#if (defined(HAS_RAWTOYROW_NEON) || defined(HAS_RAWTOYROW_MSA))
       RAWToUVRow(src_raw, 0, dst_u, dst_v, width);
       RAWToYRow(src_raw, dst_y, width);
 #else
@@ -969,36 +1210,42 @@
       ARGBToYRow(row, dst_y, width);
 #endif
     }
-#if !defined(HAS_RAWTOYROW_NEON)
+#if !(defined(HAS_RAWTOYROW_NEON) || defined(HAS_RAWTOYROW_MSA))
     free_aligned_buffer_64(row);
-  }
 #endif
+  }
   return 0;
 }
 
 // Convert RGB565 to I420.
 LIBYUV_API
-int RGB565ToI420(const uint8* src_rgb565, int src_stride_rgb565,
-                 uint8* dst_y, int dst_stride_y,
-                 uint8* dst_u, int dst_stride_u,
-                 uint8* dst_v, int dst_stride_v,
-                 int width, int height) {
+int RGB565ToI420(const uint8_t* src_rgb565,
+                 int src_stride_rgb565,
+                 uint8_t* dst_y,
+                 int dst_stride_y,
+                 uint8_t* dst_u,
+                 int dst_stride_u,
+                 uint8_t* dst_v,
+                 int dst_stride_v,
+                 int width,
+                 int height) {
   int y;
-#if defined(HAS_RGB565TOYROW_NEON)
-  void (*RGB565ToUVRow)(const uint8* src_rgb565, int src_stride_rgb565,
-      uint8* dst_u, uint8* dst_v, int width) = RGB565ToUVRow_C;
-  void (*RGB565ToYRow)(const uint8* src_rgb565, uint8* dst_y, int width) =
+#if (defined(HAS_RGB565TOYROW_NEON) || defined(HAS_RGB565TOYROW_MSA))
+  void (*RGB565ToUVRow)(const uint8_t* src_rgb565, int src_stride_rgb565,
+                        uint8_t* dst_u, uint8_t* dst_v, int width) =
+      RGB565ToUVRow_C;
+  void (*RGB565ToYRow)(const uint8_t* src_rgb565, uint8_t* dst_y, int width) =
       RGB565ToYRow_C;
 #else
-  void (*RGB565ToARGBRow)(const uint8* src_rgb, uint8* dst_argb, int width) =
-      RGB565ToARGBRow_C;
-  void (*ARGBToUVRow)(const uint8* src_argb0, int src_stride_argb,
-      uint8* dst_u, uint8* dst_v, int width) = ARGBToUVRow_C;
-  void (*ARGBToYRow)(const uint8* src_argb, uint8* dst_y, int width) =
+  void (*RGB565ToARGBRow)(const uint8_t* src_rgb, uint8_t* dst_argb,
+                          int width) = RGB565ToARGBRow_C;
+  void (*ARGBToUVRow)(const uint8_t* src_argb0, int src_stride_argb,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ARGBToUVRow_C;
+  void (*ARGBToYRow)(const uint8_t* src_argb, uint8_t* dst_y, int width) =
       ARGBToYRow_C;
 #endif
-  if (!src_rgb565 || !dst_y || !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_rgb565 || !dst_y || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1020,6 +1267,15 @@
       }
     }
   }
+#elif defined(HAS_RGB565TOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    RGB565ToUVRow = RGB565ToUVRow_Any_MSA;
+    RGB565ToYRow = RGB565ToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      RGB565ToYRow = RGB565ToYRow_MSA;
+      RGB565ToUVRow = RGB565ToUVRow_MSA;
+    }
+  }
 // Other platforms do intermediate conversion from RGB565 to ARGB.
 #else
 #if defined(HAS_RGB565TOARGBROW_SSE2)
@@ -1058,14 +1314,15 @@
     }
   }
 #endif
+#endif
   {
+#if !(defined(HAS_RGB565TOYROW_NEON) || defined(HAS_RGB565TOYROW_MSA))
     // Allocate 2 rows of ARGB.
     const int kRowSize = (width * 4 + 31) & ~31;
     align_buffer_64(row, kRowSize * 2);
 #endif
-
     for (y = 0; y < height - 1; y += 2) {
-#if defined(HAS_RGB565TOYROW_NEON)
+#if (defined(HAS_RGB565TOYROW_NEON) || defined(HAS_RGB565TOYROW_MSA))
       RGB565ToUVRow(src_rgb565, src_stride_rgb565, dst_u, dst_v, width);
       RGB565ToYRow(src_rgb565, dst_y, width);
       RGB565ToYRow(src_rgb565 + src_stride_rgb565, dst_y + dst_stride_y, width);
@@ -1082,7 +1339,7 @@
       dst_v += dst_stride_v;
     }
     if (height & 1) {
-#if defined(HAS_RGB565TOYROW_NEON)
+#if (defined(HAS_RGB565TOYROW_NEON) || defined(HAS_RGB565TOYROW_MSA))
       RGB565ToUVRow(src_rgb565, 0, dst_u, dst_v, width);
       RGB565ToYRow(src_rgb565, dst_y, width);
 #else
@@ -1091,36 +1348,43 @@
       ARGBToYRow(row, dst_y, width);
 #endif
     }
-#if !defined(HAS_RGB565TOYROW_NEON)
+#if !(defined(HAS_RGB565TOYROW_NEON) || defined(HAS_RGB565TOYROW_MSA))
     free_aligned_buffer_64(row);
-  }
 #endif
+  }
   return 0;
 }
 
 // Convert ARGB1555 to I420.
 LIBYUV_API
-int ARGB1555ToI420(const uint8* src_argb1555, int src_stride_argb1555,
-                   uint8* dst_y, int dst_stride_y,
-                   uint8* dst_u, int dst_stride_u,
-                   uint8* dst_v, int dst_stride_v,
-                   int width, int height) {
+int ARGB1555ToI420(const uint8_t* src_argb1555,
+                   int src_stride_argb1555,
+                   uint8_t* dst_y,
+                   int dst_stride_y,
+                   uint8_t* dst_u,
+                   int dst_stride_u,
+                   uint8_t* dst_v,
+                   int dst_stride_v,
+                   int width,
+                   int height) {
   int y;
-#if defined(HAS_ARGB1555TOYROW_NEON)
-  void (*ARGB1555ToUVRow)(const uint8* src_argb1555, int src_stride_argb1555,
-      uint8* dst_u, uint8* dst_v, int width) = ARGB1555ToUVRow_C;
-  void (*ARGB1555ToYRow)(const uint8* src_argb1555, uint8* dst_y, int width) =
-      ARGB1555ToYRow_C;
+#if (defined(HAS_ARGB1555TOYROW_NEON) || defined(HAS_ARGB1555TOYROW_MSA))
+  void (*ARGB1555ToUVRow)(const uint8_t* src_argb1555, int src_stride_argb1555,
+                          uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ARGB1555ToUVRow_C;
+  void (*ARGB1555ToYRow)(const uint8_t* src_argb1555, uint8_t* dst_y,
+                         int width) = ARGB1555ToYRow_C;
 #else
-  void (*ARGB1555ToARGBRow)(const uint8* src_rgb, uint8* dst_argb, int width) =
-      ARGB1555ToARGBRow_C;
-  void (*ARGBToUVRow)(const uint8* src_argb0, int src_stride_argb,
-      uint8* dst_u, uint8* dst_v, int width) = ARGBToUVRow_C;
-  void (*ARGBToYRow)(const uint8* src_argb, uint8* dst_y, int width) =
+  void (*ARGB1555ToARGBRow)(const uint8_t* src_rgb, uint8_t* dst_argb,
+                            int width) = ARGB1555ToARGBRow_C;
+  void (*ARGBToUVRow)(const uint8_t* src_argb0, int src_stride_argb,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ARGBToUVRow_C;
+  void (*ARGBToYRow)(const uint8_t* src_argb, uint8_t* dst_y, int width) =
       ARGBToYRow_C;
 #endif
-  if (!src_argb1555 || !dst_y || !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_argb1555 || !dst_y || !dst_u || !dst_v || width <= 0 ||
+      height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1142,6 +1406,15 @@
       }
     }
   }
+#elif defined(HAS_ARGB1555TOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGB1555ToUVRow = ARGB1555ToUVRow_Any_MSA;
+    ARGB1555ToYRow = ARGB1555ToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGB1555ToYRow = ARGB1555ToYRow_MSA;
+      ARGB1555ToUVRow = ARGB1555ToUVRow_MSA;
+    }
+  }
 // Other platforms do intermediate conversion from ARGB1555 to ARGB.
 #else
 #if defined(HAS_ARGB1555TOARGBROW_SSE2)
@@ -1180,14 +1453,16 @@
     }
   }
 #endif
+#endif
   {
+#if !(defined(HAS_ARGB1555TOYROW_NEON) || defined(HAS_ARGB1555TOYROW_MSA))
     // Allocate 2 rows of ARGB.
     const int kRowSize = (width * 4 + 31) & ~31;
     align_buffer_64(row, kRowSize * 2);
 #endif
 
     for (y = 0; y < height - 1; y += 2) {
-#if defined(HAS_ARGB1555TOYROW_NEON)
+#if (defined(HAS_ARGB1555TOYROW_NEON) || defined(HAS_ARGB1555TOYROW_MSA))
       ARGB1555ToUVRow(src_argb1555, src_stride_argb1555, dst_u, dst_v, width);
       ARGB1555ToYRow(src_argb1555, dst_y, width);
       ARGB1555ToYRow(src_argb1555 + src_stride_argb1555, dst_y + dst_stride_y,
@@ -1206,7 +1481,7 @@
       dst_v += dst_stride_v;
     }
     if (height & 1) {
-#if defined(HAS_ARGB1555TOYROW_NEON)
+#if (defined(HAS_ARGB1555TOYROW_NEON) || defined(HAS_ARGB1555TOYROW_MSA))
       ARGB1555ToUVRow(src_argb1555, 0, dst_u, dst_v, width);
       ARGB1555ToYRow(src_argb1555, dst_y, width);
 #else
@@ -1215,36 +1490,43 @@
       ARGBToYRow(row, dst_y, width);
 #endif
     }
-#if !defined(HAS_ARGB1555TOYROW_NEON)
+#if !(defined(HAS_ARGB1555TOYROW_NEON) || defined(HAS_ARGB1555TOYROW_MSA))
     free_aligned_buffer_64(row);
-  }
 #endif
+  }
   return 0;
 }
 
 // Convert ARGB4444 to I420.
 LIBYUV_API
-int ARGB4444ToI420(const uint8* src_argb4444, int src_stride_argb4444,
-                   uint8* dst_y, int dst_stride_y,
-                   uint8* dst_u, int dst_stride_u,
-                   uint8* dst_v, int dst_stride_v,
-                   int width, int height) {
+int ARGB4444ToI420(const uint8_t* src_argb4444,
+                   int src_stride_argb4444,
+                   uint8_t* dst_y,
+                   int dst_stride_y,
+                   uint8_t* dst_u,
+                   int dst_stride_u,
+                   uint8_t* dst_v,
+                   int dst_stride_v,
+                   int width,
+                   int height) {
   int y;
 #if defined(HAS_ARGB4444TOYROW_NEON)
-  void (*ARGB4444ToUVRow)(const uint8* src_argb4444, int src_stride_argb4444,
-      uint8* dst_u, uint8* dst_v, int width) = ARGB4444ToUVRow_C;
-  void (*ARGB4444ToYRow)(const uint8* src_argb4444, uint8* dst_y, int width) =
-      ARGB4444ToYRow_C;
+  void (*ARGB4444ToUVRow)(const uint8_t* src_argb4444, int src_stride_argb4444,
+                          uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ARGB4444ToUVRow_C;
+  void (*ARGB4444ToYRow)(const uint8_t* src_argb4444, uint8_t* dst_y,
+                         int width) = ARGB4444ToYRow_C;
 #else
-  void (*ARGB4444ToARGBRow)(const uint8* src_rgb, uint8* dst_argb, int width) =
-      ARGB4444ToARGBRow_C;
-  void (*ARGBToUVRow)(const uint8* src_argb0, int src_stride_argb,
-      uint8* dst_u, uint8* dst_v, int width) = ARGBToUVRow_C;
-  void (*ARGBToYRow)(const uint8* src_argb, uint8* dst_y, int width) =
+  void (*ARGB4444ToARGBRow)(const uint8_t* src_rgb, uint8_t* dst_argb,
+                            int width) = ARGB4444ToARGBRow_C;
+  void (*ARGBToUVRow)(const uint8_t* src_argb0, int src_stride_argb,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ARGBToUVRow_C;
+  void (*ARGBToYRow)(const uint8_t* src_argb, uint8_t* dst_y, int width) =
       ARGBToYRow_C;
 #endif
-  if (!src_argb4444 || !dst_y || !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_argb4444 || !dst_y || !dst_u || !dst_v || width <= 0 ||
+      height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1284,6 +1566,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGB4444TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGB4444ToARGBRow = ARGB4444ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGB4444ToARGBRow = ARGB4444ToARGBRow_MSA;
+    }
+  }
+#endif
 #if defined(HAS_ARGBTOYROW_SSSE3) && defined(HAS_ARGBTOUVROW_SSSE3)
   if (TestCpuFlag(kCpuHasSSSE3)) {
     ARGBToUVRow = ARGBToUVRow_Any_SSSE3;
@@ -1304,7 +1594,22 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToUVRow = ARGBToUVRow_Any_MSA;
+    ARGBToYRow = ARGBToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToYRow = ARGBToYRow_MSA;
+      if (IS_ALIGNED(width, 32)) {
+        ARGBToUVRow = ARGBToUVRow_MSA;
+      }
+    }
+  }
+#endif
+#endif
+
   {
+#if !defined(HAS_ARGB4444TOYROW_NEON)
     // Allocate 2 rows of ARGB.
     const int kRowSize = (width * 4 + 31) & ~31;
     align_buffer_64(row, kRowSize * 2);
@@ -1341,13 +1646,15 @@
     }
 #if !defined(HAS_ARGB4444TOYROW_NEON)
     free_aligned_buffer_64(row);
-  }
 #endif
+  }
   return 0;
 }
 
-static void SplitPixels(const uint8* src_u, int src_pixel_stride_uv,
-                        uint8* dst_u, int width) {
+static void SplitPixels(const uint8_t* src_u,
+                        int src_pixel_stride_uv,
+                        uint8_t* dst_u,
+                        int width) {
   int i;
   for (i = 0; i < width; ++i) {
     *dst_u = *src_u;
@@ -1358,21 +1665,26 @@
 
 // Convert Android420 to I420.
 LIBYUV_API
-int Android420ToI420(const uint8* src_y, int src_stride_y,
-                     const uint8* src_u, int src_stride_u,
-                     const uint8* src_v, int src_stride_v,
+int Android420ToI420(const uint8_t* src_y,
+                     int src_stride_y,
+                     const uint8_t* src_u,
+                     int src_stride_u,
+                     const uint8_t* src_v,
+                     int src_stride_v,
                      int src_pixel_stride_uv,
-                     uint8* dst_y, int dst_stride_y,
-                     uint8* dst_u, int dst_stride_u,
-                     uint8* dst_v, int dst_stride_v,
-                     int width, int height) {
+                     uint8_t* dst_y,
+                     int dst_stride_y,
+                     uint8_t* dst_u,
+                     int dst_stride_u,
+                     uint8_t* dst_v,
+                     int dst_stride_v,
+                     int width,
+                     int height) {
   int y;
-  const int vu_off = src_v - src_u;
+  const ptrdiff_t vu_off = src_v - src_u;
   int halfwidth = (width + 1) >> 1;
   int halfheight = (height + 1) >> 1;
-  if (!src_u || !src_v ||
-      !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_u || !src_v || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1396,15 +1708,16 @@
     CopyPlane(src_u, src_stride_u, dst_u, dst_stride_u, halfwidth, halfheight);
     CopyPlane(src_v, src_stride_v, dst_v, dst_stride_v, halfwidth, halfheight);
     return 0;
-  // Split UV planes - NV21
-  } else if (src_pixel_stride_uv == 2 && vu_off == -1 &&
-             src_stride_u == src_stride_v) {
+    // Split UV planes - NV21
+  }
+  if (src_pixel_stride_uv == 2 && vu_off == -1 &&
+      src_stride_u == src_stride_v) {
     SplitUVPlane(src_v, src_stride_v, dst_v, dst_stride_v, dst_u, dst_stride_u,
                  halfwidth, halfheight);
     return 0;
-  // Split UV planes - NV12
-  } else if (src_pixel_stride_uv == 2 && vu_off == 1 &&
-             src_stride_u == src_stride_v) {
+    // Split UV planes - NV12
+  }
+  if (src_pixel_stride_uv == 2 && vu_off == 1 && src_stride_u == src_stride_v) {
     SplitUVPlane(src_u, src_stride_u, dst_u, dst_stride_u, dst_v, dst_stride_v,
                  halfwidth, halfheight);
     return 0;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_argb.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_argb.cc
index fb9582d..f2fe474 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_argb.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_argb.cc
@@ -26,11 +26,13 @@
 
 // Copy ARGB with optional flipping
 LIBYUV_API
-int ARGBCopy(const uint8* src_argb, int src_stride_argb,
-             uint8* dst_argb, int dst_stride_argb,
-             int width, int height) {
-  if (!src_argb || !dst_argb ||
-      width <= 0 || height == 0) {
+int ARGBCopy(const uint8_t* src_argb,
+             int src_stride_argb,
+             uint8_t* dst_argb,
+             int dst_stride_argb,
+             int width,
+             int height) {
+  if (!src_argb || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -40,27 +42,29 @@
     src_stride_argb = -src_stride_argb;
   }
 
-  CopyPlane(src_argb, src_stride_argb, dst_argb, dst_stride_argb,
-            width * 4, height);
+  CopyPlane(src_argb, src_stride_argb, dst_argb, dst_stride_argb, width * 4,
+            height);
   return 0;
 }
 
-// Convert I422 to ARGB with matrix
-static int I420ToARGBMatrix(const uint8* src_y, int src_stride_y,
-                            const uint8* src_u, int src_stride_u,
-                            const uint8* src_v, int src_stride_v,
-                            uint8* dst_argb, int dst_stride_argb,
+// Convert I420 to ARGB with matrix
+static int I420ToARGBMatrix(const uint8_t* src_y,
+                            int src_stride_y,
+                            const uint8_t* src_u,
+                            int src_stride_u,
+                            const uint8_t* src_v,
+                            int src_stride_v,
+                            uint8_t* dst_argb,
+                            int dst_stride_argb,
                             const struct YuvConstants* yuvconstants,
-                            int width, int height) {
+                            int width,
+                            int height) {
   int y;
-  void (*I422ToARGBRow)(const uint8* y_buf,
-                        const uint8* u_buf,
-                        const uint8* v_buf,
-                        uint8* rgb_buf,
-                        const struct YuvConstants* yuvconstants,
-                        int width) = I422ToARGBRow_C;
-  if (!src_y || !src_u || !src_v || !dst_argb ||
-      width <= 0 || height == 0) {
+  void (*I422ToARGBRow)(const uint8_t* y_buf, const uint8_t* u_buf,
+                        const uint8_t* v_buf, uint8_t* rgb_buf,
+                        const struct YuvConstants* yuvconstants, int width) =
+      I422ToARGBRow_C;
+  if (!src_y || !src_u || !src_v || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -93,13 +97,12 @@
     }
   }
 #endif
-#if defined(HAS_I422TOARGBROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && IS_ALIGNED(width, 4) &&
-      IS_ALIGNED(src_y, 4) && IS_ALIGNED(src_stride_y, 4) &&
-      IS_ALIGNED(src_u, 2) && IS_ALIGNED(src_stride_u, 2) &&
-      IS_ALIGNED(src_v, 2) && IS_ALIGNED(src_stride_v, 2) &&
-      IS_ALIGNED(dst_argb, 4) && IS_ALIGNED(dst_stride_argb, 4)) {
-    I422ToARGBRow = I422ToARGBRow_DSPR2;
+#if defined(HAS_I422TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToARGBRow = I422ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      I422ToARGBRow = I422ToARGBRow_MSA;
+    }
   }
 #endif
 
@@ -117,111 +120,130 @@
 
 // Convert I420 to ARGB.
 LIBYUV_API
-int I420ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
-  return I420ToARGBMatrix(src_y, src_stride_y,
-                          src_u, src_stride_u,
-                          src_v, src_stride_v,
-                          dst_argb, dst_stride_argb,
-                          &kYuvI601Constants,
-                          width, height);
+int I420ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return I420ToARGBMatrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_argb, dst_stride_argb,
+                          &kYuvI601Constants, width, height);
 }
 
 // Convert I420 to ABGR.
 LIBYUV_API
-int I420ToABGR(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_abgr, int dst_stride_abgr,
-               int width, int height) {
-  return I420ToARGBMatrix(src_y, src_stride_y,
-                          src_v, src_stride_v,  // Swap U and V
-                          src_u, src_stride_u,
-                          dst_abgr, dst_stride_abgr,
+int I420ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height) {
+  return I420ToARGBMatrix(src_y, src_stride_y, src_v,
+                          src_stride_v,  // Swap U and V
+                          src_u, src_stride_u, dst_abgr, dst_stride_abgr,
                           &kYvuI601Constants,  // Use Yvu matrix
                           width, height);
 }
 
 // Convert J420 to ARGB.
 LIBYUV_API
-int J420ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
-  return I420ToARGBMatrix(src_y, src_stride_y,
-                          src_u, src_stride_u,
-                          src_v, src_stride_v,
-                          dst_argb, dst_stride_argb,
-                          &kYuvJPEGConstants,
-                          width, height);
+int J420ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return I420ToARGBMatrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_argb, dst_stride_argb,
+                          &kYuvJPEGConstants, width, height);
 }
 
 // Convert J420 to ABGR.
 LIBYUV_API
-int J420ToABGR(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_abgr, int dst_stride_abgr,
-               int width, int height) {
-  return I420ToARGBMatrix(src_y, src_stride_y,
-                          src_v, src_stride_v,  // Swap U and V
-                          src_u, src_stride_u,
-                          dst_abgr, dst_stride_abgr,
+int J420ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height) {
+  return I420ToARGBMatrix(src_y, src_stride_y, src_v,
+                          src_stride_v,  // Swap U and V
+                          src_u, src_stride_u, dst_abgr, dst_stride_abgr,
                           &kYvuJPEGConstants,  // Use Yvu matrix
                           width, height);
 }
 
 // Convert H420 to ARGB.
 LIBYUV_API
-int H420ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
-  return I420ToARGBMatrix(src_y, src_stride_y,
-                          src_u, src_stride_u,
-                          src_v, src_stride_v,
-                          dst_argb, dst_stride_argb,
-                          &kYuvH709Constants,
-                          width, height);
+int H420ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return I420ToARGBMatrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_argb, dst_stride_argb,
+                          &kYuvH709Constants, width, height);
 }
 
 // Convert H420 to ABGR.
 LIBYUV_API
-int H420ToABGR(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_abgr, int dst_stride_abgr,
-               int width, int height) {
-  return I420ToARGBMatrix(src_y, src_stride_y,
-                          src_v, src_stride_v,  // Swap U and V
-                          src_u, src_stride_u,
-                          dst_abgr, dst_stride_abgr,
+int H420ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height) {
+  return I420ToARGBMatrix(src_y, src_stride_y, src_v,
+                          src_stride_v,  // Swap U and V
+                          src_u, src_stride_u, dst_abgr, dst_stride_abgr,
                           &kYvuH709Constants,  // Use Yvu matrix
                           width, height);
 }
 
 // Convert I422 to ARGB with matrix
-static int I422ToARGBMatrix(const uint8* src_y, int src_stride_y,
-                            const uint8* src_u, int src_stride_u,
-                            const uint8* src_v, int src_stride_v,
-                            uint8* dst_argb, int dst_stride_argb,
+static int I422ToARGBMatrix(const uint8_t* src_y,
+                            int src_stride_y,
+                            const uint8_t* src_u,
+                            int src_stride_u,
+                            const uint8_t* src_v,
+                            int src_stride_v,
+                            uint8_t* dst_argb,
+                            int dst_stride_argb,
                             const struct YuvConstants* yuvconstants,
-                            int width, int height) {
+                            int width,
+                            int height) {
   int y;
-  void (*I422ToARGBRow)(const uint8* y_buf,
-                        const uint8* u_buf,
-                        const uint8* v_buf,
-                        uint8* rgb_buf,
-                        const struct YuvConstants* yuvconstants,
-                        int width) = I422ToARGBRow_C;
-  if (!src_y || !src_u || !src_v ||
-      !dst_argb ||
-      width <= 0 || height == 0) {
+  void (*I422ToARGBRow)(const uint8_t* y_buf, const uint8_t* u_buf,
+                        const uint8_t* v_buf, uint8_t* rgb_buf,
+                        const struct YuvConstants* yuvconstants, int width) =
+      I422ToARGBRow_C;
+  if (!src_y || !src_u || !src_v || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -231,10 +253,8 @@
     dst_stride_argb = -dst_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_y == width &&
-      src_stride_u * 2 == width &&
-      src_stride_v * 2 == width &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_y == width && src_stride_u * 2 == width &&
+      src_stride_v * 2 == width && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_y = src_stride_u = src_stride_v = dst_stride_argb = 0;
@@ -263,13 +283,12 @@
     }
   }
 #endif
-#if defined(HAS_I422TOARGBROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && IS_ALIGNED(width, 4) &&
-      IS_ALIGNED(src_y, 4) && IS_ALIGNED(src_stride_y, 4) &&
-      IS_ALIGNED(src_u, 2) && IS_ALIGNED(src_stride_u, 2) &&
-      IS_ALIGNED(src_v, 2) && IS_ALIGNED(src_stride_v, 2) &&
-      IS_ALIGNED(dst_argb, 4) && IS_ALIGNED(dst_stride_argb, 4)) {
-    I422ToARGBRow = I422ToARGBRow_DSPR2;
+#if defined(HAS_I422TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToARGBRow = I422ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      I422ToARGBRow = I422ToARGBRow_MSA;
+    }
   }
 #endif
 
@@ -285,111 +304,380 @@
 
 // Convert I422 to ARGB.
 LIBYUV_API
-int I422ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
-  return I422ToARGBMatrix(src_y, src_stride_y,
-                          src_u, src_stride_u,
-                          src_v, src_stride_v,
-                          dst_argb, dst_stride_argb,
-                          &kYuvI601Constants,
-                          width, height);
+int I422ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return I422ToARGBMatrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_argb, dst_stride_argb,
+                          &kYuvI601Constants, width, height);
 }
 
 // Convert I422 to ABGR.
 LIBYUV_API
-int I422ToABGR(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_abgr, int dst_stride_abgr,
-               int width, int height) {
-  return I422ToARGBMatrix(src_y, src_stride_y,
-                          src_v, src_stride_v,  // Swap U and V
-                          src_u, src_stride_u,
-                          dst_abgr, dst_stride_abgr,
+int I422ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height) {
+  return I422ToARGBMatrix(src_y, src_stride_y, src_v,
+                          src_stride_v,  // Swap U and V
+                          src_u, src_stride_u, dst_abgr, dst_stride_abgr,
                           &kYvuI601Constants,  // Use Yvu matrix
                           width, height);
 }
 
 // Convert J422 to ARGB.
 LIBYUV_API
-int J422ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
-  return I422ToARGBMatrix(src_y, src_stride_y,
-                          src_u, src_stride_u,
-                          src_v, src_stride_v,
-                          dst_argb, dst_stride_argb,
-                          &kYuvJPEGConstants,
-                          width, height);
+int J422ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return I422ToARGBMatrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_argb, dst_stride_argb,
+                          &kYuvJPEGConstants, width, height);
 }
 
 // Convert J422 to ABGR.
 LIBYUV_API
-int J422ToABGR(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_abgr, int dst_stride_abgr,
-               int width, int height) {
-  return I422ToARGBMatrix(src_y, src_stride_y,
-                          src_v, src_stride_v,  // Swap U and V
-                          src_u, src_stride_u,
-                          dst_abgr, dst_stride_abgr,
+int J422ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height) {
+  return I422ToARGBMatrix(src_y, src_stride_y, src_v,
+                          src_stride_v,  // Swap U and V
+                          src_u, src_stride_u, dst_abgr, dst_stride_abgr,
                           &kYvuJPEGConstants,  // Use Yvu matrix
                           width, height);
 }
 
 // Convert H422 to ARGB.
 LIBYUV_API
-int H422ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
-  return I422ToARGBMatrix(src_y, src_stride_y,
-                          src_u, src_stride_u,
-                          src_v, src_stride_v,
-                          dst_argb, dst_stride_argb,
-                          &kYuvH709Constants,
-                          width, height);
+int H422ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return I422ToARGBMatrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_argb, dst_stride_argb,
+                          &kYuvH709Constants, width, height);
 }
 
 // Convert H422 to ABGR.
 LIBYUV_API
-int H422ToABGR(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_abgr, int dst_stride_abgr,
-               int width, int height) {
-  return I422ToARGBMatrix(src_y, src_stride_y,
-                          src_v, src_stride_v,  // Swap U and V
-                          src_u, src_stride_u,
-                          dst_abgr, dst_stride_abgr,
+int H422ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height) {
+  return I422ToARGBMatrix(src_y, src_stride_y, src_v,
+                          src_stride_v,  // Swap U and V
+                          src_u, src_stride_u, dst_abgr, dst_stride_abgr,
+                          &kYvuH709Constants,  // Use Yvu matrix
+                          width, height);
+}
+
+// Convert 10 bit YUV to ARGB with matrix
+// TODO(fbarchard): Consider passing scale multiplier to I210ToARGB to
+// multiply 10 bit yuv into high bits to allow any number of bits.
+static int I010ToAR30Matrix(const uint16_t* src_y,
+                            int src_stride_y,
+                            const uint16_t* src_u,
+                            int src_stride_u,
+                            const uint16_t* src_v,
+                            int src_stride_v,
+                            uint8_t* dst_ar30,
+                            int dst_stride_ar30,
+                            const struct YuvConstants* yuvconstants,
+                            int width,
+                            int height) {
+  int y;
+  void (*I210ToAR30Row)(const uint16_t* y_buf, const uint16_t* u_buf,
+                        const uint16_t* v_buf, uint8_t* rgb_buf,
+                        const struct YuvConstants* yuvconstants, int width) =
+      I210ToAR30Row_C;
+  if (!src_y || !src_u || !src_v || !dst_ar30 || width <= 0 || height == 0) {
+    return -1;
+  }
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    dst_ar30 = dst_ar30 + (height - 1) * dst_stride_ar30;
+    dst_stride_ar30 = -dst_stride_ar30;
+  }
+#if defined(HAS_I210TOAR30ROW_SSSE3)
+  if (TestCpuFlag(kCpuHasSSSE3)) {
+    I210ToAR30Row = I210ToAR30Row_Any_SSSE3;
+    if (IS_ALIGNED(width, 8)) {
+      I210ToAR30Row = I210ToAR30Row_SSSE3;
+    }
+  }
+#endif
+#if defined(HAS_I210TOAR30ROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    I210ToAR30Row = I210ToAR30Row_Any_AVX2;
+    if (IS_ALIGNED(width, 16)) {
+      I210ToAR30Row = I210ToAR30Row_AVX2;
+    }
+  }
+#endif
+  for (y = 0; y < height; ++y) {
+    I210ToAR30Row(src_y, src_u, src_v, dst_ar30, yuvconstants, width);
+    dst_ar30 += dst_stride_ar30;
+    src_y += src_stride_y;
+    if (y & 1) {
+      src_u += src_stride_u;
+      src_v += src_stride_v;
+    }
+  }
+  return 0;
+}
+
+// Convert I010 to AR30.
+LIBYUV_API
+int I010ToAR30(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_ar30,
+               int dst_stride_ar30,
+               int width,
+               int height) {
+  return I010ToAR30Matrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_ar30, dst_stride_ar30,
+                          &kYuvI601Constants, width, height);
+}
+
+// Convert H010 to AR30.
+LIBYUV_API
+int H010ToAR30(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_ar30,
+               int dst_stride_ar30,
+               int width,
+               int height) {
+  return I010ToAR30Matrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_ar30, dst_stride_ar30,
+                          &kYuvH709Constants, width, height);
+}
+
+// Convert I010 to AB30.
+LIBYUV_API
+int I010ToAB30(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_ab30,
+               int dst_stride_ab30,
+               int width,
+               int height) {
+  return I010ToAR30Matrix(src_y, src_stride_y, src_v, src_stride_v, src_u,
+                          src_stride_u, dst_ab30, dst_stride_ab30,
+                          &kYvuI601Constants, width, height);
+}
+
+// Convert H010 to AB30.
+LIBYUV_API
+int H010ToAB30(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_ab30,
+               int dst_stride_ab30,
+               int width,
+               int height) {
+  return I010ToAR30Matrix(src_y, src_stride_y, src_v, src_stride_v, src_u,
+                          src_stride_u, dst_ab30, dst_stride_ab30,
+                          &kYvuH709Constants, width, height);
+}
+
+// Convert 10-bit YUV to ARGB with matrix
+static int I010ToARGBMatrix(const uint16_t* src_y,
+                            int src_stride_y,
+                            const uint16_t* src_u,
+                            int src_stride_u,
+                            const uint16_t* src_v,
+                            int src_stride_v,
+                            uint8_t* dst_argb,
+                            int dst_stride_argb,
+                            const struct YuvConstants* yuvconstants,
+                            int width,
+                            int height) {
+  int y;
+  void (*I210ToARGBRow)(const uint16_t* y_buf, const uint16_t* u_buf,
+                        const uint16_t* v_buf, uint8_t* rgb_buf,
+                        const struct YuvConstants* yuvconstants, int width) =
+      I210ToARGBRow_C;
+  if (!src_y || !src_u || !src_v || !dst_argb || width <= 0 || height == 0) {
+    return -1;
+  }
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    dst_argb = dst_argb + (height - 1) * dst_stride_argb;
+    dst_stride_argb = -dst_stride_argb;
+  }
+#if defined(HAS_I210TOARGBROW_SSSE3)
+  if (TestCpuFlag(kCpuHasSSSE3)) {
+    I210ToARGBRow = I210ToARGBRow_Any_SSSE3;
+    if (IS_ALIGNED(width, 8)) {
+      I210ToARGBRow = I210ToARGBRow_SSSE3;
+    }
+  }
+#endif
+#if defined(HAS_I210TOARGBROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    I210ToARGBRow = I210ToARGBRow_Any_AVX2;
+    if (IS_ALIGNED(width, 16)) {
+      I210ToARGBRow = I210ToARGBRow_AVX2;
+    }
+  }
+#endif
+  for (y = 0; y < height; ++y) {
+    I210ToARGBRow(src_y, src_u, src_v, dst_argb, yuvconstants, width);
+    dst_argb += dst_stride_argb;
+    src_y += src_stride_y;
+    if (y & 1) {
+      src_u += src_stride_u;
+      src_v += src_stride_v;
+    }
+  }
+  return 0;
+}
+
+// Convert I010 to ARGB.
+LIBYUV_API
+int I010ToARGB(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return I010ToARGBMatrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_argb, dst_stride_argb,
+                          &kYuvI601Constants, width, height);
+}
+
+// Convert I010 to ABGR.
+LIBYUV_API
+int I010ToABGR(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height) {
+  return I010ToARGBMatrix(src_y, src_stride_y, src_v,
+                          src_stride_v,  // Swap U and V
+                          src_u, src_stride_u, dst_abgr, dst_stride_abgr,
+                          &kYvuI601Constants,  // Use Yvu matrix
+                          width, height);
+}
+
+// Convert H010 to ARGB.
+LIBYUV_API
+int H010ToARGB(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return I010ToARGBMatrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_argb, dst_stride_argb,
+                          &kYuvH709Constants, width, height);
+}
+
+// Convert H010 to ABGR.
+LIBYUV_API
+int H010ToABGR(const uint16_t* src_y,
+               int src_stride_y,
+               const uint16_t* src_u,
+               int src_stride_u,
+               const uint16_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height) {
+  return I010ToARGBMatrix(src_y, src_stride_y, src_v,
+                          src_stride_v,  // Swap U and V
+                          src_u, src_stride_u, dst_abgr, dst_stride_abgr,
                           &kYvuH709Constants,  // Use Yvu matrix
                           width, height);
 }
 
 // Convert I444 to ARGB with matrix
-static int I444ToARGBMatrix(const uint8* src_y, int src_stride_y,
-                            const uint8* src_u, int src_stride_u,
-                            const uint8* src_v, int src_stride_v,
-                            uint8* dst_argb, int dst_stride_argb,
+static int I444ToARGBMatrix(const uint8_t* src_y,
+                            int src_stride_y,
+                            const uint8_t* src_u,
+                            int src_stride_u,
+                            const uint8_t* src_v,
+                            int src_stride_v,
+                            uint8_t* dst_argb,
+                            int dst_stride_argb,
                             const struct YuvConstants* yuvconstants,
-                            int width, int height) {
+                            int width,
+                            int height) {
   int y;
-  void (*I444ToARGBRow)(const uint8* y_buf,
-                        const uint8* u_buf,
-                        const uint8* v_buf,
-                        uint8* rgb_buf,
-                        const struct YuvConstants* yuvconstants,
-                        int width) = I444ToARGBRow_C;
-  if (!src_y || !src_u || !src_v ||
-      !dst_argb ||
-      width <= 0 || height == 0) {
+  void (*I444ToARGBRow)(const uint8_t* y_buf, const uint8_t* u_buf,
+                        const uint8_t* v_buf, uint8_t* rgb_buf,
+                        const struct YuvConstants* yuvconstants, int width) =
+      I444ToARGBRow_C;
+  if (!src_y || !src_u || !src_v || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -399,9 +687,7 @@
     dst_stride_argb = -dst_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_y == width &&
-      src_stride_u == width &&
-      src_stride_v == width &&
+  if (src_stride_y == width && src_stride_u == width && src_stride_v == width &&
       dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
@@ -431,6 +717,14 @@
     }
   }
 #endif
+#if defined(HAS_I444TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I444ToARGBRow = I444ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      I444ToARGBRow = I444ToARGBRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     I444ToARGBRow(src_y, src_u, src_v, dst_argb, yuvconstants, width);
@@ -444,138 +738,81 @@
 
 // Convert I444 to ARGB.
 LIBYUV_API
-int I444ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
-  return I444ToARGBMatrix(src_y, src_stride_y,
-                          src_u, src_stride_u,
-                          src_v, src_stride_v,
-                          dst_argb, dst_stride_argb,
-                          &kYuvI601Constants,
-                          width, height);
+int I444ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return I444ToARGBMatrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_argb, dst_stride_argb,
+                          &kYuvI601Constants, width, height);
 }
 
 // Convert I444 to ABGR.
 LIBYUV_API
-int I444ToABGR(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_abgr, int dst_stride_abgr,
-               int width, int height) {
-  return I444ToARGBMatrix(src_y, src_stride_y,
-                          src_v, src_stride_v,  // Swap U and V
-                          src_u, src_stride_u,
-                          dst_abgr, dst_stride_abgr,
+int I444ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height) {
+  return I444ToARGBMatrix(src_y, src_stride_y, src_v,
+                          src_stride_v,  // Swap U and V
+                          src_u, src_stride_u, dst_abgr, dst_stride_abgr,
                           &kYvuI601Constants,  // Use Yvu matrix
                           width, height);
 }
 
 // Convert J444 to ARGB.
 LIBYUV_API
-int J444ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
-  return I444ToARGBMatrix(src_y, src_stride_y,
-                          src_u, src_stride_u,
-                          src_v, src_stride_v,
-                          dst_argb, dst_stride_argb,
-                          &kYuvJPEGConstants,
-                          width, height);
-}
-
-// Convert I411 to ARGB.
-LIBYUV_API
-int I411ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
-  int y;
-  void (*I411ToARGBRow)(const uint8* y_buf,
-                        const uint8* u_buf,
-                        const uint8* v_buf,
-                        uint8* rgb_buf,
-                        const struct YuvConstants* yuvconstants,
-                        int width) = I411ToARGBRow_C;
-  if (!src_y || !src_u || !src_v ||
-      !dst_argb ||
-      width <= 0 || height == 0) {
-    return -1;
-  }
-  // Negative height means invert the image.
-  if (height < 0) {
-    height = -height;
-    dst_argb = dst_argb + (height - 1) * dst_stride_argb;
-    dst_stride_argb = -dst_stride_argb;
-  }
-  // Coalesce rows.
-  if (src_stride_y == width &&
-      src_stride_u * 4 == width &&
-      src_stride_v * 4 == width &&
-      dst_stride_argb == width * 4) {
-    width *= height;
-    height = 1;
-    src_stride_y = src_stride_u = src_stride_v = dst_stride_argb = 0;
-  }
-#if defined(HAS_I411TOARGBROW_SSSE3)
-  if (TestCpuFlag(kCpuHasSSSE3)) {
-    I411ToARGBRow = I411ToARGBRow_Any_SSSE3;
-    if (IS_ALIGNED(width, 8)) {
-      I411ToARGBRow = I411ToARGBRow_SSSE3;
-    }
-  }
-#endif
-#if defined(HAS_I411TOARGBROW_AVX2)
-  if (TestCpuFlag(kCpuHasAVX2)) {
-    I411ToARGBRow = I411ToARGBRow_Any_AVX2;
-    if (IS_ALIGNED(width, 16)) {
-      I411ToARGBRow = I411ToARGBRow_AVX2;
-    }
-  }
-#endif
-#if defined(HAS_I411TOARGBROW_NEON)
-  if (TestCpuFlag(kCpuHasNEON)) {
-    I411ToARGBRow = I411ToARGBRow_Any_NEON;
-    if (IS_ALIGNED(width, 8)) {
-      I411ToARGBRow = I411ToARGBRow_NEON;
-    }
-  }
-#endif
-
-  for (y = 0; y < height; ++y) {
-    I411ToARGBRow(src_y, src_u, src_v, dst_argb, &kYuvI601Constants, width);
-    dst_argb += dst_stride_argb;
-    src_y += src_stride_y;
-    src_u += src_stride_u;
-    src_v += src_stride_v;
-  }
-  return 0;
+int J444ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return I444ToARGBMatrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_argb, dst_stride_argb,
+                          &kYuvJPEGConstants, width, height);
 }
 
 // Convert I420 with Alpha to preattenuated ARGB.
-static int I420AlphaToARGBMatrix(const uint8* src_y, int src_stride_y,
-                                 const uint8* src_u, int src_stride_u,
-                                 const uint8* src_v, int src_stride_v,
-                                 const uint8* src_a, int src_stride_a,
-                                 uint8* dst_argb, int dst_stride_argb,
+static int I420AlphaToARGBMatrix(const uint8_t* src_y,
+                                 int src_stride_y,
+                                 const uint8_t* src_u,
+                                 int src_stride_u,
+                                 const uint8_t* src_v,
+                                 int src_stride_v,
+                                 const uint8_t* src_a,
+                                 int src_stride_a,
+                                 uint8_t* dst_argb,
+                                 int dst_stride_argb,
                                  const struct YuvConstants* yuvconstants,
-                                 int width, int height, int attenuate) {
+                                 int width,
+                                 int height,
+                                 int attenuate) {
   int y;
-  void (*I422AlphaToARGBRow)(const uint8* y_buf,
-                             const uint8* u_buf,
-                             const uint8* v_buf,
-                             const uint8* a_buf,
-                             uint8* dst_argb,
+  void (*I422AlphaToARGBRow)(const uint8_t* y_buf, const uint8_t* u_buf,
+                             const uint8_t* v_buf, const uint8_t* a_buf,
+                             uint8_t* dst_argb,
                              const struct YuvConstants* yuvconstants,
                              int width) = I422AlphaToARGBRow_C;
-  void (*ARGBAttenuateRow)(const uint8* src_argb, uint8* dst_argb,
+  void (*ARGBAttenuateRow)(const uint8_t* src_argb, uint8_t* dst_argb,
                            int width) = ARGBAttenuateRow_C;
-  if (!src_y || !src_u || !src_v || !dst_argb ||
-      width <= 0 || height == 0) {
+  if (!src_y || !src_u || !src_v || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -608,13 +845,12 @@
     }
   }
 #endif
-#if defined(HAS_I422ALPHATOARGBROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && IS_ALIGNED(width, 4) &&
-      IS_ALIGNED(src_y, 4) && IS_ALIGNED(src_stride_y, 4) &&
-      IS_ALIGNED(src_u, 2) && IS_ALIGNED(src_stride_u, 2) &&
-      IS_ALIGNED(src_v, 2) && IS_ALIGNED(src_stride_v, 2) &&
-      IS_ALIGNED(dst_argb, 4) && IS_ALIGNED(dst_stride_argb, 4)) {
-    I422AlphaToARGBRow = I422AlphaToARGBRow_DSPR2;
+#if defined(HAS_I422ALPHATOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422AlphaToARGBRow = I422AlphaToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      I422AlphaToARGBRow = I422AlphaToARGBRow_MSA;
+    }
   }
 #endif
 #if defined(HAS_ARGBATTENUATEROW_SSSE3)
@@ -641,6 +877,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBATTENUATEROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBAttenuateRow = ARGBAttenuateRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      ARGBAttenuateRow = ARGBAttenuateRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     I422AlphaToARGBRow(src_y, src_u, src_v, src_a, dst_argb, yuvconstants,
@@ -661,49 +905,59 @@
 
 // Convert I420 with Alpha to ARGB.
 LIBYUV_API
-int I420AlphaToARGB(const uint8* src_y, int src_stride_y,
-                    const uint8* src_u, int src_stride_u,
-                    const uint8* src_v, int src_stride_v,
-                    const uint8* src_a, int src_stride_a,
-                    uint8* dst_argb, int dst_stride_argb,
-                    int width, int height, int attenuate) {
-  return I420AlphaToARGBMatrix(src_y, src_stride_y,
-                               src_u, src_stride_u,
-                               src_v, src_stride_v,
-                               src_a, src_stride_a,
-                               dst_argb, dst_stride_argb,
-                               &kYuvI601Constants,
-                               width, height, attenuate);
+int I420AlphaToARGB(const uint8_t* src_y,
+                    int src_stride_y,
+                    const uint8_t* src_u,
+                    int src_stride_u,
+                    const uint8_t* src_v,
+                    int src_stride_v,
+                    const uint8_t* src_a,
+                    int src_stride_a,
+                    uint8_t* dst_argb,
+                    int dst_stride_argb,
+                    int width,
+                    int height,
+                    int attenuate) {
+  return I420AlphaToARGBMatrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                               src_stride_v, src_a, src_stride_a, dst_argb,
+                               dst_stride_argb, &kYuvI601Constants, width,
+                               height, attenuate);
 }
 
 // Convert I420 with Alpha to ABGR.
 LIBYUV_API
-int I420AlphaToABGR(const uint8* src_y, int src_stride_y,
-                    const uint8* src_u, int src_stride_u,
-                    const uint8* src_v, int src_stride_v,
-                    const uint8* src_a, int src_stride_a,
-                    uint8* dst_abgr, int dst_stride_abgr,
-                    int width, int height, int attenuate) {
-  return I420AlphaToARGBMatrix(src_y, src_stride_y,
-                               src_v, src_stride_v,  // Swap U and V
-                               src_u, src_stride_u,
-                               src_a, src_stride_a,
-                               dst_abgr, dst_stride_abgr,
-                               &kYvuI601Constants,  // Use Yvu matrix
-                               width, height, attenuate);
+int I420AlphaToABGR(const uint8_t* src_y,
+                    int src_stride_y,
+                    const uint8_t* src_u,
+                    int src_stride_u,
+                    const uint8_t* src_v,
+                    int src_stride_v,
+                    const uint8_t* src_a,
+                    int src_stride_a,
+                    uint8_t* dst_abgr,
+                    int dst_stride_abgr,
+                    int width,
+                    int height,
+                    int attenuate) {
+  return I420AlphaToARGBMatrix(
+      src_y, src_stride_y, src_v, src_stride_v,  // Swap U and V
+      src_u, src_stride_u, src_a, src_stride_a, dst_abgr, dst_stride_abgr,
+      &kYvuI601Constants,  // Use Yvu matrix
+      width, height, attenuate);
 }
 
 // Convert I400 to ARGB.
 LIBYUV_API
-int I400ToARGB(const uint8* src_y, int src_stride_y,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
+int I400ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
   int y;
-  void (*I400ToARGBRow)(const uint8* y_buf,
-                     uint8* rgb_buf,
-                     int width) = I400ToARGBRow_C;
-  if (!src_y || !dst_argb ||
-      width <= 0 || height == 0) {
+  void (*I400ToARGBRow)(const uint8_t* y_buf, uint8_t* rgb_buf, int width) =
+      I400ToARGBRow_C;
+  if (!src_y || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -713,8 +967,7 @@
     dst_stride_argb = -dst_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_y == width &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_y == width && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_y = dst_stride_argb = 0;
@@ -743,6 +996,14 @@
     }
   }
 #endif
+#if defined(HAS_I400TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I400ToARGBRow = I400ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      I400ToARGBRow = I400ToARGBRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     I400ToARGBRow(src_y, dst_argb, width);
@@ -754,14 +1015,16 @@
 
 // Convert J400 to ARGB.
 LIBYUV_API
-int J400ToARGB(const uint8* src_y, int src_stride_y,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
+int J400ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
   int y;
-  void (*J400ToARGBRow)(const uint8* src_y, uint8* dst_argb, int width) =
+  void (*J400ToARGBRow)(const uint8_t* src_y, uint8_t* dst_argb, int width) =
       J400ToARGBRow_C;
-  if (!src_y || !dst_argb ||
-      width <= 0 || height == 0) {
+  if (!src_y || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -771,8 +1034,7 @@
     src_stride_y = -src_stride_y;
   }
   // Coalesce rows.
-  if (src_stride_y == width &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_y == width && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_y = dst_stride_argb = 0;
@@ -801,6 +1063,14 @@
     }
   }
 #endif
+#if defined(HAS_J400TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    J400ToARGBRow = J400ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      J400ToARGBRow = J400ToARGBRow_MSA;
+    }
+  }
+#endif
   for (y = 0; y < height; ++y) {
     J400ToARGBRow(src_y, dst_argb, width);
     src_y += src_stride_y;
@@ -810,85 +1080,89 @@
 }
 
 // Shuffle table for converting BGRA to ARGB.
-static uvec8 kShuffleMaskBGRAToARGB = {
-  3u, 2u, 1u, 0u, 7u, 6u, 5u, 4u, 11u, 10u, 9u, 8u, 15u, 14u, 13u, 12u
-};
+static const uvec8 kShuffleMaskBGRAToARGB = {
+    3u, 2u, 1u, 0u, 7u, 6u, 5u, 4u, 11u, 10u, 9u, 8u, 15u, 14u, 13u, 12u};
 
 // Shuffle table for converting ABGR to ARGB.
-static uvec8 kShuffleMaskABGRToARGB = {
-  2u, 1u, 0u, 3u, 6u, 5u, 4u, 7u, 10u, 9u, 8u, 11u, 14u, 13u, 12u, 15u
-};
+static const uvec8 kShuffleMaskABGRToARGB = {
+    2u, 1u, 0u, 3u, 6u, 5u, 4u, 7u, 10u, 9u, 8u, 11u, 14u, 13u, 12u, 15u};
 
 // Shuffle table for converting RGBA to ARGB.
-static uvec8 kShuffleMaskRGBAToARGB = {
-  1u, 2u, 3u, 0u, 5u, 6u, 7u, 4u, 9u, 10u, 11u, 8u, 13u, 14u, 15u, 12u
-};
+static const uvec8 kShuffleMaskRGBAToARGB = {
+    1u, 2u, 3u, 0u, 5u, 6u, 7u, 4u, 9u, 10u, 11u, 8u, 13u, 14u, 15u, 12u};
 
 // Convert BGRA to ARGB.
 LIBYUV_API
-int BGRAToARGB(const uint8* src_bgra, int src_stride_bgra,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
-  return ARGBShuffle(src_bgra, src_stride_bgra,
-                     dst_argb, dst_stride_argb,
-                     (const uint8*)(&kShuffleMaskBGRAToARGB),
-                     width, height);
+int BGRAToARGB(const uint8_t* src_bgra,
+               int src_stride_bgra,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return ARGBShuffle(src_bgra, src_stride_bgra, dst_argb, dst_stride_argb,
+                     (const uint8_t*)(&kShuffleMaskBGRAToARGB), width, height);
 }
 
 // Convert ARGB to BGRA (same as BGRAToARGB).
 LIBYUV_API
-int ARGBToBGRA(const uint8* src_bgra, int src_stride_bgra,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
-  return ARGBShuffle(src_bgra, src_stride_bgra,
-                     dst_argb, dst_stride_argb,
-                     (const uint8*)(&kShuffleMaskBGRAToARGB),
-                     width, height);
+int ARGBToBGRA(const uint8_t* src_bgra,
+               int src_stride_bgra,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return ARGBShuffle(src_bgra, src_stride_bgra, dst_argb, dst_stride_argb,
+                     (const uint8_t*)(&kShuffleMaskBGRAToARGB), width, height);
 }
 
 // Convert ABGR to ARGB.
 LIBYUV_API
-int ABGRToARGB(const uint8* src_abgr, int src_stride_abgr,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
-  return ARGBShuffle(src_abgr, src_stride_abgr,
-                     dst_argb, dst_stride_argb,
-                     (const uint8*)(&kShuffleMaskABGRToARGB),
-                     width, height);
+int ABGRToARGB(const uint8_t* src_abgr,
+               int src_stride_abgr,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return ARGBShuffle(src_abgr, src_stride_abgr, dst_argb, dst_stride_argb,
+                     (const uint8_t*)(&kShuffleMaskABGRToARGB), width, height);
 }
 
 // Convert ARGB to ABGR (same as ABGRToARGB).
 LIBYUV_API
-int ARGBToABGR(const uint8* src_abgr, int src_stride_abgr,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
-  return ARGBShuffle(src_abgr, src_stride_abgr,
-                     dst_argb, dst_stride_argb,
-                     (const uint8*)(&kShuffleMaskABGRToARGB),
-                     width, height);
+int ARGBToABGR(const uint8_t* src_abgr,
+               int src_stride_abgr,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return ARGBShuffle(src_abgr, src_stride_abgr, dst_argb, dst_stride_argb,
+                     (const uint8_t*)(&kShuffleMaskABGRToARGB), width, height);
 }
 
 // Convert RGBA to ARGB.
 LIBYUV_API
-int RGBAToARGB(const uint8* src_rgba, int src_stride_rgba,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
-  return ARGBShuffle(src_rgba, src_stride_rgba,
-                     dst_argb, dst_stride_argb,
-                     (const uint8*)(&kShuffleMaskRGBAToARGB),
-                     width, height);
+int RGBAToARGB(const uint8_t* src_rgba,
+               int src_stride_rgba,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return ARGBShuffle(src_rgba, src_stride_rgba, dst_argb, dst_stride_argb,
+                     (const uint8_t*)(&kShuffleMaskRGBAToARGB), width, height);
 }
 
 // Convert RGB24 to ARGB.
 LIBYUV_API
-int RGB24ToARGB(const uint8* src_rgb24, int src_stride_rgb24,
-                uint8* dst_argb, int dst_stride_argb,
-                int width, int height) {
+int RGB24ToARGB(const uint8_t* src_rgb24,
+                int src_stride_rgb24,
+                uint8_t* dst_argb,
+                int dst_stride_argb,
+                int width,
+                int height) {
   int y;
-  void (*RGB24ToARGBRow)(const uint8* src_rgb, uint8* dst_argb, int width) =
+  void (*RGB24ToARGBRow)(const uint8_t* src_rgb, uint8_t* dst_argb, int width) =
       RGB24ToARGBRow_C;
-  if (!src_rgb24 || !dst_argb ||
-      width <= 0 || height == 0) {
+  if (!src_rgb24 || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -898,8 +1172,7 @@
     src_stride_rgb24 = -src_stride_rgb24;
   }
   // Coalesce rows.
-  if (src_stride_rgb24 == width * 3 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_rgb24 == width * 3 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_rgb24 = dst_stride_argb = 0;
@@ -920,6 +1193,14 @@
     }
   }
 #endif
+#if defined(HAS_RGB24TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    RGB24ToARGBRow = RGB24ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      RGB24ToARGBRow = RGB24ToARGBRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     RGB24ToARGBRow(src_rgb24, dst_argb, width);
@@ -931,14 +1212,16 @@
 
 // Convert RAW to ARGB.
 LIBYUV_API
-int RAWToARGB(const uint8* src_raw, int src_stride_raw,
-              uint8* dst_argb, int dst_stride_argb,
-              int width, int height) {
+int RAWToARGB(const uint8_t* src_raw,
+              int src_stride_raw,
+              uint8_t* dst_argb,
+              int dst_stride_argb,
+              int width,
+              int height) {
   int y;
-  void (*RAWToARGBRow)(const uint8* src_rgb, uint8* dst_argb, int width) =
+  void (*RAWToARGBRow)(const uint8_t* src_rgb, uint8_t* dst_argb, int width) =
       RAWToARGBRow_C;
-  if (!src_raw || !dst_argb ||
-      width <= 0 || height == 0) {
+  if (!src_raw || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -948,8 +1231,7 @@
     src_stride_raw = -src_stride_raw;
   }
   // Coalesce rows.
-  if (src_stride_raw == width * 3 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_raw == width * 3 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_raw = dst_stride_argb = 0;
@@ -970,6 +1252,14 @@
     }
   }
 #endif
+#if defined(HAS_RAWTOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    RAWToARGBRow = RAWToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      RAWToARGBRow = RAWToARGBRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     RAWToARGBRow(src_raw, dst_argb, width);
@@ -981,14 +1271,16 @@
 
 // Convert RGB565 to ARGB.
 LIBYUV_API
-int RGB565ToARGB(const uint8* src_rgb565, int src_stride_rgb565,
-                 uint8* dst_argb, int dst_stride_argb,
-                 int width, int height) {
+int RGB565ToARGB(const uint8_t* src_rgb565,
+                 int src_stride_rgb565,
+                 uint8_t* dst_argb,
+                 int dst_stride_argb,
+                 int width,
+                 int height) {
   int y;
-  void (*RGB565ToARGBRow)(const uint8* src_rgb565, uint8* dst_argb, int width) =
-      RGB565ToARGBRow_C;
-  if (!src_rgb565 || !dst_argb ||
-      width <= 0 || height == 0) {
+  void (*RGB565ToARGBRow)(const uint8_t* src_rgb565, uint8_t* dst_argb,
+                          int width) = RGB565ToARGBRow_C;
+  if (!src_rgb565 || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -998,8 +1290,7 @@
     src_stride_rgb565 = -src_stride_rgb565;
   }
   // Coalesce rows.
-  if (src_stride_rgb565 == width * 2 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_rgb565 == width * 2 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_rgb565 = dst_stride_argb = 0;
@@ -1028,6 +1319,14 @@
     }
   }
 #endif
+#if defined(HAS_RGB565TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    RGB565ToARGBRow = RGB565ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      RGB565ToARGBRow = RGB565ToARGBRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     RGB565ToARGBRow(src_rgb565, dst_argb, width);
@@ -1039,14 +1338,16 @@
 
 // Convert ARGB1555 to ARGB.
 LIBYUV_API
-int ARGB1555ToARGB(const uint8* src_argb1555, int src_stride_argb1555,
-                   uint8* dst_argb, int dst_stride_argb,
-                   int width, int height) {
+int ARGB1555ToARGB(const uint8_t* src_argb1555,
+                   int src_stride_argb1555,
+                   uint8_t* dst_argb,
+                   int dst_stride_argb,
+                   int width,
+                   int height) {
   int y;
-  void (*ARGB1555ToARGBRow)(const uint8* src_argb1555, uint8* dst_argb,
-      int width) = ARGB1555ToARGBRow_C;
-  if (!src_argb1555 || !dst_argb ||
-      width <= 0 || height == 0) {
+  void (*ARGB1555ToARGBRow)(const uint8_t* src_argb1555, uint8_t* dst_argb,
+                            int width) = ARGB1555ToARGBRow_C;
+  if (!src_argb1555 || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1056,8 +1357,7 @@
     src_stride_argb1555 = -src_stride_argb1555;
   }
   // Coalesce rows.
-  if (src_stride_argb1555 == width * 2 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_argb1555 == width * 2 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_argb1555 = dst_stride_argb = 0;
@@ -1086,6 +1386,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGB1555TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGB1555ToARGBRow = ARGB1555ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGB1555ToARGBRow = ARGB1555ToARGBRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     ARGB1555ToARGBRow(src_argb1555, dst_argb, width);
@@ -1097,14 +1405,16 @@
 
 // Convert ARGB4444 to ARGB.
 LIBYUV_API
-int ARGB4444ToARGB(const uint8* src_argb4444, int src_stride_argb4444,
-                   uint8* dst_argb, int dst_stride_argb,
-                   int width, int height) {
+int ARGB4444ToARGB(const uint8_t* src_argb4444,
+                   int src_stride_argb4444,
+                   uint8_t* dst_argb,
+                   int dst_stride_argb,
+                   int width,
+                   int height) {
   int y;
-  void (*ARGB4444ToARGBRow)(const uint8* src_argb4444, uint8* dst_argb,
-      int width) = ARGB4444ToARGBRow_C;
-  if (!src_argb4444 || !dst_argb ||
-      width <= 0 || height == 0) {
+  void (*ARGB4444ToARGBRow)(const uint8_t* src_argb4444, uint8_t* dst_argb,
+                            int width) = ARGB4444ToARGBRow_C;
+  if (!src_argb4444 || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1114,8 +1424,7 @@
     src_stride_argb4444 = -src_stride_argb4444;
   }
   // Coalesce rows.
-  if (src_stride_argb4444 == width * 2 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_argb4444 == width * 2 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_argb4444 = dst_stride_argb = 0;
@@ -1144,6 +1453,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGB4444TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGB4444ToARGBRow = ARGB4444ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGB4444ToARGBRow = ARGB4444ToARGBRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     ARGB4444ToARGBRow(src_argb4444, dst_argb, width);
@@ -1153,20 +1470,117 @@
   return 0;
 }
 
-// Convert NV12 to ARGB.
+// Convert AR30 to ARGB.
 LIBYUV_API
-int NV12ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_uv, int src_stride_uv,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
+int AR30ToARGB(const uint8_t* src_ar30,
+               int src_stride_ar30,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
   int y;
-  void (*NV12ToARGBRow)(const uint8* y_buf,
-                        const uint8* uv_buf,
-                        uint8* rgb_buf,
-                        const struct YuvConstants* yuvconstants,
-                        int width) = NV12ToARGBRow_C;
-  if (!src_y || !src_uv || !dst_argb ||
-      width <= 0 || height == 0) {
+  if (!src_ar30 || !dst_argb || width <= 0 || height == 0) {
+    return -1;
+  }
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    src_ar30 = src_ar30 + (height - 1) * src_stride_ar30;
+    src_stride_ar30 = -src_stride_ar30;
+  }
+  // Coalesce rows.
+  if (src_stride_ar30 == width * 4 && dst_stride_argb == width * 4) {
+    width *= height;
+    height = 1;
+    src_stride_ar30 = dst_stride_argb = 0;
+  }
+  for (y = 0; y < height; ++y) {
+    AR30ToARGBRow_C(src_ar30, dst_argb, width);
+    src_ar30 += src_stride_ar30;
+    dst_argb += dst_stride_argb;
+  }
+  return 0;
+}
+
+// Convert AR30 to ABGR.
+LIBYUV_API
+int AR30ToABGR(const uint8_t* src_ar30,
+               int src_stride_ar30,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height) {
+  int y;
+  if (!src_ar30 || !dst_abgr || width <= 0 || height == 0) {
+    return -1;
+  }
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    src_ar30 = src_ar30 + (height - 1) * src_stride_ar30;
+    src_stride_ar30 = -src_stride_ar30;
+  }
+  // Coalesce rows.
+  if (src_stride_ar30 == width * 4 && dst_stride_abgr == width * 4) {
+    width *= height;
+    height = 1;
+    src_stride_ar30 = dst_stride_abgr = 0;
+  }
+  for (y = 0; y < height; ++y) {
+    AR30ToABGRRow_C(src_ar30, dst_abgr, width);
+    src_ar30 += src_stride_ar30;
+    dst_abgr += dst_stride_abgr;
+  }
+  return 0;
+}
+
+// Convert AR30 to AB30.
+LIBYUV_API
+int AR30ToAB30(const uint8_t* src_ar30,
+               int src_stride_ar30,
+               uint8_t* dst_ab30,
+               int dst_stride_ab30,
+               int width,
+               int height) {
+  int y;
+  if (!src_ar30 || !dst_ab30 || width <= 0 || height == 0) {
+    return -1;
+  }
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    src_ar30 = src_ar30 + (height - 1) * src_stride_ar30;
+    src_stride_ar30 = -src_stride_ar30;
+  }
+  // Coalesce rows.
+  if (src_stride_ar30 == width * 4 && dst_stride_ab30 == width * 4) {
+    width *= height;
+    height = 1;
+    src_stride_ar30 = dst_stride_ab30 = 0;
+  }
+  for (y = 0; y < height; ++y) {
+    AR30ToAB30Row_C(src_ar30, dst_ab30, width);
+    src_ar30 += src_stride_ar30;
+    dst_ab30 += dst_stride_ab30;
+  }
+  return 0;
+}
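The AR30 converters above repeat two idioms shared by every function in this file: a negative height selects a bottom-up copy, and equal packed strides let the loop treat the whole image as one long row. A standalone sketch of that shared prologue (the `prologue` helper name is hypothetical, not part of libyuv):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper condensing the shared converter prologue:
   a negative height flips vertically by starting at the last source
   row with a negated stride; when both strides equal the packed row
   width, rows are coalesced into a single long row (the real
   converters then also zero both strides). */
static void prologue(const uint8_t** src, int* src_stride, int src_bpp,
                     int dst_stride, int dst_bpp, int* width, int* height) {
  if (*height < 0) {
    *height = -*height;
    *src += (*height - 1) * (*src_stride);
    *src_stride = -*src_stride;
  }
  if (*src_stride == *width * src_bpp && dst_stride == *width * dst_bpp) {
    *width *= *height;
    *height = 1;
  }
}
```

Note the order matters: flipping negates the source stride, which then fails the coalesce test, so inverted images are still processed row by row.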
+
+// Convert NV12 to ARGB with matrix
+static int NV12ToARGBMatrix(const uint8_t* src_y,
+                            int src_stride_y,
+                            const uint8_t* src_uv,
+                            int src_stride_uv,
+                            uint8_t* dst_argb,
+                            int dst_stride_argb,
+                            const struct YuvConstants* yuvconstants,
+                            int width,
+                            int height) {
+  int y;
+  void (*NV12ToARGBRow)(
+      const uint8_t* y_buf, const uint8_t* uv_buf, uint8_t* rgb_buf,
+      const struct YuvConstants* yuvconstants, int width) = NV12ToARGBRow_C;
+  if (!src_y || !src_uv || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1199,9 +1613,17 @@
     }
   }
 #endif
+#if defined(HAS_NV12TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    NV12ToARGBRow = NV12ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      NV12ToARGBRow = NV12ToARGBRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
-    NV12ToARGBRow(src_y, src_uv, dst_argb, &kYuvI601Constants, width);
+    NV12ToARGBRow(src_y, src_uv, dst_argb, yuvconstants, width);
     dst_argb += dst_stride_argb;
     src_y += src_stride_y;
     if (y & 1) {
@@ -1211,20 +1633,21 @@
   return 0;
 }
 
-// Convert NV21 to ARGB.
-LIBYUV_API
-int NV21ToARGB(const uint8* src_y, int src_stride_y,
-               const uint8* src_uv, int src_stride_uv,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
+// Convert NV21 to ARGB with matrix
+static int NV21ToARGBMatrix(const uint8_t* src_y,
+                            int src_stride_y,
+                            const uint8_t* src_vu,
+                            int src_stride_vu,
+                            uint8_t* dst_argb,
+                            int dst_stride_argb,
+                            const struct YuvConstants* yuvconstants,
+                            int width,
+                            int height) {
   int y;
-  void (*NV21ToARGBRow)(const uint8* y_buf,
-                        const uint8* uv_buf,
-                        uint8* rgb_buf,
-                        const struct YuvConstants* yuvconstants,
-                        int width) = NV21ToARGBRow_C;
-  if (!src_y || !src_uv || !dst_argb ||
-      width <= 0 || height == 0) {
+  void (*NV21ToARGBRow)(
+      const uint8_t* y_buf, const uint8_t* uv_buf, uint8_t* rgb_buf,
+      const struct YuvConstants* yuvconstants, int width) = NV21ToARGBRow_C;
+  if (!src_y || !src_vu || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1257,31 +1680,246 @@
     }
   }
 #endif
+#if defined(HAS_NV21TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    NV21ToARGBRow = NV21ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      NV21ToARGBRow = NV21ToARGBRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
-    NV21ToARGBRow(src_y, src_uv, dst_argb, &kYuvI601Constants, width);
+    NV21ToARGBRow(src_y, src_vu, dst_argb, yuvconstants, width);
     dst_argb += dst_stride_argb;
     src_y += src_stride_y;
     if (y & 1) {
+      src_vu += src_stride_vu;
+    }
+  }
+  return 0;
+}
+
+// Convert NV12 to ARGB.
+LIBYUV_API
+int NV12ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_uv,
+               int src_stride_uv,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return NV12ToARGBMatrix(src_y, src_stride_y, src_uv, src_stride_uv, dst_argb,
+                          dst_stride_argb, &kYuvI601Constants, width, height);
+}
+
+// Convert NV21 to ARGB.
+LIBYUV_API
+int NV21ToARGB(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_vu,
+               int src_stride_vu,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
+  return NV21ToARGBMatrix(src_y, src_stride_y, src_vu, src_stride_vu, dst_argb,
+                          dst_stride_argb, &kYuvI601Constants, width, height);
+}
+
+// Convert NV12 to ABGR.
+// To output ABGR instead of ARGB swap the UV and use a mirrored yuv matrix.
+// To swap the UV use NV12 instead of NV21.
+LIBYUV_API
+int NV12ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_uv,
+               int src_stride_uv,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height) {
+  return NV21ToARGBMatrix(src_y, src_stride_y, src_uv, src_stride_uv, dst_abgr,
+                          dst_stride_abgr, &kYvuI601Constants, width, height);
+}
+
+// Convert NV21 to ABGR.
+LIBYUV_API
+int NV21ToABGR(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_vu,
+               int src_stride_vu,
+               uint8_t* dst_abgr,
+               int dst_stride_abgr,
+               int width,
+               int height) {
+  return NV12ToARGBMatrix(src_y, src_stride_y, src_vu, src_stride_vu, dst_abgr,
+                          dst_stride_abgr, &kYvuI601Constants, width, height);
+}
+
+// TODO(fbarchard): Consider SSSE3 2 step conversion.
+// Convert NV12 to RGB24 with matrix
+static int NV12ToRGB24Matrix(const uint8_t* src_y,
+                             int src_stride_y,
+                             const uint8_t* src_uv,
+                             int src_stride_uv,
+                             uint8_t* dst_rgb24,
+                             int dst_stride_rgb24,
+                             const struct YuvConstants* yuvconstants,
+                             int width,
+                             int height) {
+  int y;
+  void (*NV12ToRGB24Row)(
+      const uint8_t* y_buf, const uint8_t* uv_buf, uint8_t* rgb_buf,
+      const struct YuvConstants* yuvconstants, int width) = NV12ToRGB24Row_C;
+  if (!src_y || !src_uv || !dst_rgb24 || width <= 0 || height == 0) {
+    return -1;
+  }
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    dst_rgb24 = dst_rgb24 + (height - 1) * dst_stride_rgb24;
+    dst_stride_rgb24 = -dst_stride_rgb24;
+  }
+#if defined(HAS_NV12TORGB24ROW_NEON)
+  if (TestCpuFlag(kCpuHasNEON)) {
+    NV12ToRGB24Row = NV12ToRGB24Row_Any_NEON;
+    if (IS_ALIGNED(width, 8)) {
+      NV12ToRGB24Row = NV12ToRGB24Row_NEON;
+    }
+  }
+#endif
+#if defined(HAS_NV12TORGB24ROW_SSSE3)
+  if (TestCpuFlag(kCpuHasSSSE3)) {
+    NV12ToRGB24Row = NV12ToRGB24Row_Any_SSSE3;
+    if (IS_ALIGNED(width, 16)) {
+      NV12ToRGB24Row = NV12ToRGB24Row_SSSE3;
+    }
+  }
+#endif
+#if defined(HAS_NV12TORGB24ROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    NV12ToRGB24Row = NV12ToRGB24Row_Any_AVX2;
+    if (IS_ALIGNED(width, 32)) {
+      NV12ToRGB24Row = NV12ToRGB24Row_AVX2;
+    }
+  }
+#endif
+
+  for (y = 0; y < height; ++y) {
+    NV12ToRGB24Row(src_y, src_uv, dst_rgb24, yuvconstants, width);
+    dst_rgb24 += dst_stride_rgb24;
+    src_y += src_stride_y;
+    if (y & 1) {
       src_uv += src_stride_uv;
     }
   }
   return 0;
 }
 
+// Convert NV21 to RGB24 with matrix
+static int NV21ToRGB24Matrix(const uint8_t* src_y,
+                             int src_stride_y,
+                             const uint8_t* src_vu,
+                             int src_stride_vu,
+                             uint8_t* dst_rgb24,
+                             int dst_stride_rgb24,
+                             const struct YuvConstants* yuvconstants,
+                             int width,
+                             int height) {
+  int y;
+  void (*NV21ToRGB24Row)(
+      const uint8_t* y_buf, const uint8_t* uv_buf, uint8_t* rgb_buf,
+      const struct YuvConstants* yuvconstants, int width) = NV21ToRGB24Row_C;
+  if (!src_y || !src_vu || !dst_rgb24 || width <= 0 || height == 0) {
+    return -1;
+  }
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    dst_rgb24 = dst_rgb24 + (height - 1) * dst_stride_rgb24;
+    dst_stride_rgb24 = -dst_stride_rgb24;
+  }
+#if defined(HAS_NV21TORGB24ROW_NEON)
+  if (TestCpuFlag(kCpuHasNEON)) {
+    NV21ToRGB24Row = NV21ToRGB24Row_Any_NEON;
+    if (IS_ALIGNED(width, 8)) {
+      NV21ToRGB24Row = NV21ToRGB24Row_NEON;
+    }
+  }
+#endif
+#if defined(HAS_NV21TORGB24ROW_SSSE3)
+  if (TestCpuFlag(kCpuHasSSSE3)) {
+    NV21ToRGB24Row = NV21ToRGB24Row_Any_SSSE3;
+    if (IS_ALIGNED(width, 16)) {
+      NV21ToRGB24Row = NV21ToRGB24Row_SSSE3;
+    }
+  }
+#endif
+#if defined(HAS_NV21TORGB24ROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    NV21ToRGB24Row = NV21ToRGB24Row_Any_AVX2;
+    if (IS_ALIGNED(width, 32)) {
+      NV21ToRGB24Row = NV21ToRGB24Row_AVX2;
+    }
+  }
+#endif
+
+  for (y = 0; y < height; ++y) {
+    NV21ToRGB24Row(src_y, src_vu, dst_rgb24, yuvconstants, width);
+    dst_rgb24 += dst_stride_rgb24;
+    src_y += src_stride_y;
+    if (y & 1) {
+      src_vu += src_stride_vu;
+    }
+  }
+  return 0;
+}
+
+// TODO(fbarchard): NV12ToRAW can be implemented by mirrored matrix.
+// Convert NV12 to RGB24.
+LIBYUV_API
+int NV12ToRGB24(const uint8_t* src_y,
+                int src_stride_y,
+                const uint8_t* src_uv,
+                int src_stride_uv,
+                uint8_t* dst_rgb24,
+                int dst_stride_rgb24,
+                int width,
+                int height) {
+  return NV12ToRGB24Matrix(src_y, src_stride_y, src_uv, src_stride_uv,
+                           dst_rgb24, dst_stride_rgb24, &kYuvI601Constants,
+                           width, height);
+}
+
+// Convert NV21 to RGB24.
+LIBYUV_API
+int NV21ToRGB24(const uint8_t* src_y,
+                int src_stride_y,
+                const uint8_t* src_vu,
+                int src_stride_vu,
+                uint8_t* dst_rgb24,
+                int dst_stride_rgb24,
+                int width,
+                int height) {
+  return NV21ToRGB24Matrix(src_y, src_stride_y, src_vu, src_stride_vu,
+                           dst_rgb24, dst_stride_rgb24, &kYuvI601Constants,
+                           width, height);
+}
+
 // Convert M420 to ARGB.
 LIBYUV_API
-int M420ToARGB(const uint8* src_m420, int src_stride_m420,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
+int M420ToARGB(const uint8_t* src_m420,
+               int src_stride_m420,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
   int y;
-  void (*NV12ToARGBRow)(const uint8* y_buf,
-                        const uint8* uv_buf,
-                        uint8* rgb_buf,
-                        const struct YuvConstants* yuvconstants,
-                        int width) = NV12ToARGBRow_C;
-  if (!src_m420 || !dst_argb ||
-      width <= 0 || height == 0) {
+  void (*NV12ToARGBRow)(
+      const uint8_t* y_buf, const uint8_t* uv_buf, uint8_t* rgb_buf,
+      const struct YuvConstants* yuvconstants, int width) = NV12ToARGBRow_C;
+  if (!src_m420 || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1314,6 +1952,14 @@
     }
   }
 #endif
+#if defined(HAS_NV12TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    NV12ToARGBRow = NV12ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      NV12ToARGBRow = NV12ToARGBRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height - 1; y += 2) {
     NV12ToARGBRow(src_m420, src_m420 + src_stride_m420 * 2, dst_argb,
@@ -1332,17 +1978,17 @@
 
 // Convert YUY2 to ARGB.
 LIBYUV_API
-int YUY2ToARGB(const uint8* src_yuy2, int src_stride_yuy2,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
+int YUY2ToARGB(const uint8_t* src_yuy2,
+               int src_stride_yuy2,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
   int y;
-  void (*YUY2ToARGBRow)(const uint8* src_yuy2,
-                        uint8* dst_argb,
-                        const struct YuvConstants* yuvconstants,
-                        int width) =
+  void (*YUY2ToARGBRow)(const uint8_t* src_yuy2, uint8_t* dst_argb,
+                        const struct YuvConstants* yuvconstants, int width) =
       YUY2ToARGBRow_C;
-  if (!src_yuy2 || !dst_argb ||
-      width <= 0 || height == 0) {
+  if (!src_yuy2 || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1352,8 +1998,7 @@
     src_stride_yuy2 = -src_stride_yuy2;
   }
   // Coalesce rows.
-  if (src_stride_yuy2 == width * 2 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_yuy2 == width * 2 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_yuy2 = dst_stride_argb = 0;
@@ -1382,6 +2027,14 @@
     }
   }
 #endif
+#if defined(HAS_YUY2TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    YUY2ToARGBRow = YUY2ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      YUY2ToARGBRow = YUY2ToARGBRow_MSA;
+    }
+  }
+#endif
   for (y = 0; y < height; ++y) {
     YUY2ToARGBRow(src_yuy2, dst_argb, &kYuvI601Constants, width);
     src_yuy2 += src_stride_yuy2;
@@ -1392,17 +2045,17 @@
 
 // Convert UYVY to ARGB.
 LIBYUV_API
-int UYVYToARGB(const uint8* src_uyvy, int src_stride_uyvy,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
+int UYVYToARGB(const uint8_t* src_uyvy,
+               int src_stride_uyvy,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
   int y;
-  void (*UYVYToARGBRow)(const uint8* src_uyvy,
-                        uint8* dst_argb,
-                        const struct YuvConstants* yuvconstants,
-                        int width) =
+  void (*UYVYToARGBRow)(const uint8_t* src_uyvy, uint8_t* dst_argb,
+                        const struct YuvConstants* yuvconstants, int width) =
       UYVYToARGBRow_C;
-  if (!src_uyvy || !dst_argb ||
-      width <= 0 || height == 0) {
+  if (!src_uyvy || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1412,8 +2065,7 @@
     src_stride_uyvy = -src_stride_uyvy;
   }
   // Coalesce rows.
-  if (src_stride_uyvy == width * 2 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_uyvy == width * 2 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_uyvy = dst_stride_argb = 0;
@@ -1442,6 +2094,14 @@
     }
   }
 #endif
+#if defined(HAS_UYVYTOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    UYVYToARGBRow = UYVYToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      UYVYToARGBRow = UYVYToARGBRow_MSA;
+    }
+  }
+#endif
   for (y = 0; y < height; ++y) {
     UYVYToARGBRow(src_uyvy, dst_argb, &kYuvI601Constants, width);
     src_uyvy += src_stride_uyvy;
@@ -1449,6 +2109,121 @@
   }
   return 0;
 }
+static void WeavePixels(const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        int src_pixel_stride_uv,
+                        uint8_t* dst_uv,
+                        int width) {
+  int i;
+  for (i = 0; i < width; ++i) {
+    dst_uv[0] = *src_u;
+    dst_uv[1] = *src_v;
+    dst_uv += 2;
+    src_u += src_pixel_stride_uv;
+    src_v += src_pixel_stride_uv;
+  }
+}
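For illustration, a standalone copy of the WeavePixels helper added above, showing how a U plane and a V plane with a pixel stride of 2 (as Android flexible YUV can report) are interleaved into NV12-style UV pairs:

```c
#include <assert.h>
#include <stdint.h>

/* Standalone copy of the WeavePixels helper from the patch: takes one
   sample from each of the U and V planes per output pixel, stepping
   both sources by src_pixel_stride_uv, and writes interleaved UV. */
static void WeavePixels(const uint8_t* src_u,
                        const uint8_t* src_v,
                        int src_pixel_stride_uv,
                        uint8_t* dst_uv,
                        int width) {
  int i;
  for (i = 0; i < width; ++i) {
    dst_uv[0] = *src_u;
    dst_uv[1] = *src_v;
    dst_uv += 2;
    src_u += src_pixel_stride_uv;
    src_v += src_pixel_stride_uv;
  }
}
```

With a pixel stride of 2 every other source byte is skipped, so two already-interleaved half-planes collapse back into one packed UV plane.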
+
+// Convert Android420 to ARGB.
+LIBYUV_API
+int Android420ToARGBMatrix(const uint8_t* src_y,
+                           int src_stride_y,
+                           const uint8_t* src_u,
+                           int src_stride_u,
+                           const uint8_t* src_v,
+                           int src_stride_v,
+                           int src_pixel_stride_uv,
+                           uint8_t* dst_argb,
+                           int dst_stride_argb,
+                           const struct YuvConstants* yuvconstants,
+                           int width,
+                           int height) {
+  int y;
+  uint8_t* dst_uv;
+  const ptrdiff_t vu_off = src_v - src_u;
+  int halfwidth = (width + 1) >> 1;
+  int halfheight = (height + 1) >> 1;
+  if (!src_y || !src_u || !src_v || !dst_argb || width <= 0 || height == 0) {
+    return -1;
+  }
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    halfheight = (height + 1) >> 1;
+    dst_argb = dst_argb + (height - 1) * dst_stride_argb;
+    dst_stride_argb = -dst_stride_argb;
+  }
+
+  // I420
+  if (src_pixel_stride_uv == 1) {
+    return I420ToARGBMatrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                            src_stride_v, dst_argb, dst_stride_argb,
+                            yuvconstants, width, height);
+    // NV21
+  }
+  if (src_pixel_stride_uv == 2 && vu_off == -1 &&
+      src_stride_u == src_stride_v) {
+    return NV21ToARGBMatrix(src_y, src_stride_y, src_v, src_stride_v, dst_argb,
+                            dst_stride_argb, yuvconstants, width, height);
+    // NV12
+  }
+  if (src_pixel_stride_uv == 2 && vu_off == 1 && src_stride_u == src_stride_v) {
+    return NV12ToARGBMatrix(src_y, src_stride_y, src_u, src_stride_u, dst_argb,
+                            dst_stride_argb, yuvconstants, width, height);
+  }
+
+  // General case fallback creates NV12
+  align_buffer_64(plane_uv, halfwidth * 2 * halfheight);
+  dst_uv = plane_uv;
+  for (y = 0; y < halfheight; ++y) {
+    WeavePixels(src_u, src_v, src_pixel_stride_uv, dst_uv, halfwidth);
+    src_u += src_stride_u;
+    src_v += src_stride_v;
+    dst_uv += halfwidth * 2;
+  }
+  NV12ToARGBMatrix(src_y, src_stride_y, plane_uv, halfwidth * 2, dst_argb,
+                   dst_stride_argb, yuvconstants, width, height);
+  free_aligned_buffer_64(plane_uv);
+  return 0;
+}
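The Android420 dispatch above reduces to a small classification: a UV pixel stride of 1 is planar I420; a stride of 2 with V one byte before or after U is NV21 or NV12; anything else falls back to weaving a temporary NV12 plane. A hypothetical standalone classifier (the enum and function names are illustrative, not libyuv API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum Layout { kI420, kNV12, kNV21, kGeneric };

/* Hypothetical classifier mirroring the Android420ToARGBMatrix
   dispatch: the flexible-YUV pixel stride plus the U/V pointer offset
   identify the planar and semi-planar fast paths; every other layout
   takes the generic WeavePixels path. */
static enum Layout ClassifyAndroid420(const uint8_t* src_u,
                                      const uint8_t* src_v,
                                      int src_pixel_stride_uv,
                                      int src_stride_u,
                                      int src_stride_v) {
  const ptrdiff_t vu_off = src_v - src_u;
  if (src_pixel_stride_uv == 1) {
    return kI420;
  }
  if (src_pixel_stride_uv == 2 && src_stride_u == src_stride_v) {
    if (vu_off == -1) {
      return kNV21;
    }
    if (vu_off == 1) {
      return kNV12;
    }
  }
  return kGeneric;
}
```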
+
+// Convert Android420 to ARGB.
+LIBYUV_API
+int Android420ToARGB(const uint8_t* src_y,
+                     int src_stride_y,
+                     const uint8_t* src_u,
+                     int src_stride_u,
+                     const uint8_t* src_v,
+                     int src_stride_v,
+                     int src_pixel_stride_uv,
+                     uint8_t* dst_argb,
+                     int dst_stride_argb,
+                     int width,
+                     int height) {
+  return Android420ToARGBMatrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                                src_stride_v, src_pixel_stride_uv, dst_argb,
+                                dst_stride_argb, &kYuvI601Constants, width,
+                                height);
+}
+
+// Convert Android420 to ABGR.
+LIBYUV_API
+int Android420ToABGR(const uint8_t* src_y,
+                     int src_stride_y,
+                     const uint8_t* src_u,
+                     int src_stride_u,
+                     const uint8_t* src_v,
+                     int src_stride_v,
+                     int src_pixel_stride_uv,
+                     uint8_t* dst_abgr,
+                     int dst_stride_abgr,
+                     int width,
+                     int height) {
+  return Android420ToARGBMatrix(src_y, src_stride_y, src_v, src_stride_v, src_u,
+                                src_stride_u, src_pixel_stride_uv, dst_abgr,
+                                dst_stride_abgr, &kYvuI601Constants, width,
+                                height);
+}
 
 #ifdef __cplusplus
 }  // extern "C"
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_from.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_from.cc
index 3b2dca8..6fa2532 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_from.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_from.cc
@@ -15,9 +15,9 @@
 #include "libyuv/cpu_id.h"
 #include "libyuv/planar_functions.h"
 #include "libyuv/rotate.h"
+#include "libyuv/row.h"
 #include "libyuv/scale.h"  // For ScalePlane()
 #include "libyuv/video_common.h"
-#include "libyuv/row.h"
 
 #ifdef __cplusplus
 namespace libyuv {
@@ -30,109 +30,144 @@
 }
 
 // I420 To any I4xx YUV format with mirroring.
-static int I420ToI4xx(const uint8* src_y, int src_stride_y,
-                      const uint8* src_u, int src_stride_u,
-                      const uint8* src_v, int src_stride_v,
-                      uint8* dst_y, int dst_stride_y,
-                      uint8* dst_u, int dst_stride_u,
-                      uint8* dst_v, int dst_stride_v,
-                      int src_y_width, int src_y_height,
-                      int dst_uv_width, int dst_uv_height) {
+static int I420ToI4xx(const uint8_t* src_y,
+                      int src_stride_y,
+                      const uint8_t* src_u,
+                      int src_stride_u,
+                      const uint8_t* src_v,
+                      int src_stride_v,
+                      uint8_t* dst_y,
+                      int dst_stride_y,
+                      uint8_t* dst_u,
+                      int dst_stride_u,
+                      uint8_t* dst_v,
+                      int dst_stride_v,
+                      int src_y_width,
+                      int src_y_height,
+                      int dst_uv_width,
+                      int dst_uv_height) {
   const int dst_y_width = Abs(src_y_width);
   const int dst_y_height = Abs(src_y_height);
   const int src_uv_width = SUBSAMPLE(src_y_width, 1, 1);
   const int src_uv_height = SUBSAMPLE(src_y_height, 1, 1);
-  if (src_y_width == 0 || src_y_height == 0 ||
-      dst_uv_width <= 0 || dst_uv_height <= 0) {
+  if (src_y_width == 0 || src_y_height == 0 || dst_uv_width <= 0 ||
+      dst_uv_height <= 0) {
     return -1;
   }
   if (dst_y) {
-    ScalePlane(src_y, src_stride_y, src_y_width, src_y_height,
-               dst_y, dst_stride_y, dst_y_width, dst_y_height,
-               kFilterBilinear);
+    ScalePlane(src_y, src_stride_y, src_y_width, src_y_height, dst_y,
+               dst_stride_y, dst_y_width, dst_y_height, kFilterBilinear);
   }
-  ScalePlane(src_u, src_stride_u, src_uv_width, src_uv_height,
-             dst_u, dst_stride_u, dst_uv_width, dst_uv_height,
-             kFilterBilinear);
-  ScalePlane(src_v, src_stride_v, src_uv_width, src_uv_height,
-             dst_v, dst_stride_v, dst_uv_width, dst_uv_height,
-             kFilterBilinear);
+  ScalePlane(src_u, src_stride_u, src_uv_width, src_uv_height, dst_u,
+             dst_stride_u, dst_uv_width, dst_uv_height, kFilterBilinear);
+  ScalePlane(src_v, src_stride_v, src_uv_width, src_uv_height, dst_v,
+             dst_stride_v, dst_uv_width, dst_uv_height, kFilterBilinear);
+  return 0;
+}
+
+// Convert 8 bit YUV to 10 bit.
+LIBYUV_API
+int I420ToI010(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint16_t* dst_y,
+               int dst_stride_y,
+               uint16_t* dst_u,
+               int dst_stride_u,
+               uint16_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
+  int halfwidth = (width + 1) >> 1;
+  int halfheight = (height + 1) >> 1;
+  if (!src_u || !src_v || !dst_u || !dst_v || width <= 0 || height == 0) {
+    return -1;
+  }
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    halfheight = (height + 1) >> 1;
+    src_y = src_y + (height - 1) * src_stride_y;
+    src_u = src_u + (halfheight - 1) * src_stride_u;
+    src_v = src_v + (halfheight - 1) * src_stride_v;
+    src_stride_y = -src_stride_y;
+    src_stride_u = -src_stride_u;
+    src_stride_v = -src_stride_v;
+  }
+
+  // Convert Y plane.
+  Convert8To16Plane(src_y, src_stride_y, dst_y, dst_stride_y, 1024, width,
+                    height);
+  // Convert UV planes.
+  Convert8To16Plane(src_u, src_stride_u, dst_u, dst_stride_u, 1024, halfwidth,
+                    halfheight);
+  Convert8To16Plane(src_v, src_stride_v, dst_v, dst_stride_v, 1024, halfwidth,
+                    halfheight);
   return 0;
 }
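The I420ToI010 path above widens each plane with Convert8To16Plane and a scale of 1024. A sketch of the per-row arithmetic, under the assumption that it follows libyuv's C reference rows (replicate the byte, multiply by the scale, shift right 16), which maps 0 to 0 and full-range 255 to full-range 1023:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of an 8-to-16-bit widening row; the replicate-then-shift
   form is an assumption about Convert8To16Plane's reference behavior.
   Replicating the byte (v * 0x0101 == (v << 8) | v) before scaling
   makes 8-bit 255 land exactly on 10-bit 1023 when scale is 1024. */
static void Convert8To16RowSketch(const uint8_t* src, uint16_t* dst,
                                  int scale, int width) {
  int x;
  scale *= 0x0101;  /* replicate the byte into both halves */
  for (x = 0; x < width; ++x) {
    dst[x] = (uint16_t)((src[x] * scale) >> 16);
  }
}
```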
 
 // 420 chroma is 1/2 width, 1/2 height
 // 422 chroma is 1/2 width, 1x height
 LIBYUV_API
-int I420ToI422(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int I420ToI422(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   const int dst_uv_width = (Abs(width) + 1) >> 1;
   const int dst_uv_height = Abs(height);
-  return I420ToI4xx(src_y, src_stride_y,
-                    src_u, src_stride_u,
-                    src_v, src_stride_v,
-                    dst_y, dst_stride_y,
-                    dst_u, dst_stride_u,
-                    dst_v, dst_stride_v,
-                    width, height,
-                    dst_uv_width, dst_uv_height);
+  return I420ToI4xx(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                    src_stride_v, dst_y, dst_stride_y, dst_u, dst_stride_u,
+                    dst_v, dst_stride_v, width, height, dst_uv_width,
+                    dst_uv_height);
 }
 
 // 420 chroma is 1/2 width, 1/2 height
 // 444 chroma is 1x width, 1x height
 LIBYUV_API
-int I420ToI444(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int I420ToI444(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   const int dst_uv_width = Abs(width);
   const int dst_uv_height = Abs(height);
-  return I420ToI4xx(src_y, src_stride_y,
-                    src_u, src_stride_u,
-                    src_v, src_stride_v,
-                    dst_y, dst_stride_y,
-                    dst_u, dst_stride_u,
-                    dst_v, dst_stride_v,
-                    width, height,
-                    dst_uv_width, dst_uv_height);
-}
-
-// 420 chroma is 1/2 width, 1/2 height
-// 411 chroma is 1/4 width, 1x height
-LIBYUV_API
-int I420ToI411(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
-  const int dst_uv_width = (Abs(width) + 3) >> 2;
-  const int dst_uv_height = Abs(height);
-  return I420ToI4xx(src_y, src_stride_y,
-                    src_u, src_stride_u,
-                    src_v, src_stride_v,
-                    dst_y, dst_stride_y,
-                    dst_u, dst_stride_u,
-                    dst_v, dst_stride_v,
-                    width, height,
-                    dst_uv_width, dst_uv_height);
+  return I420ToI4xx(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                    src_stride_v, dst_y, dst_stride_y, dst_u, dst_stride_u,
+                    dst_v, dst_stride_v, width, height, dst_uv_width,
+                    dst_uv_height);
 }
 
 // Copy to I400. Source can be I420,422,444,400,NV12,NV21
 LIBYUV_API
-int I400Copy(const uint8* src_y, int src_stride_y,
-             uint8* dst_y, int dst_stride_y,
-             int width, int height) {
-  if (!src_y || !dst_y ||
-      width <= 0 || height == 0) {
+int I400Copy(const uint8_t* src_y,
+             int src_stride_y,
+             uint8_t* dst_y,
+             int dst_stride_y,
+             int width,
+             int height) {
+  if (!src_y || !dst_y || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -146,17 +181,21 @@
 }
 
 LIBYUV_API
-int I422ToYUY2(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_yuy2, int dst_stride_yuy2,
-               int width, int height) {
+int I422ToYUY2(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_yuy2,
+               int dst_stride_yuy2,
+               int width,
+               int height) {
   int y;
-  void (*I422ToYUY2Row)(const uint8* src_y, const uint8* src_u,
-                        const uint8* src_v, uint8* dst_yuy2, int width) =
+  void (*I422ToYUY2Row)(const uint8_t* src_y, const uint8_t* src_u,
+                        const uint8_t* src_v, uint8_t* dst_yuy2, int width) =
       I422ToYUY2Row_C;
-  if (!src_y || !src_u || !src_v || !dst_yuy2 ||
-      width <= 0 || height == 0) {
+  if (!src_y || !src_u || !src_v || !dst_yuy2 || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -166,10 +205,8 @@
     dst_stride_yuy2 = -dst_stride_yuy2;
   }
   // Coalesce rows.
-  if (src_stride_y == width &&
-      src_stride_u * 2 == width &&
-      src_stride_v * 2 == width &&
-      dst_stride_yuy2 == width * 2) {
+  if (src_stride_y == width && src_stride_u * 2 == width &&
+      src_stride_v * 2 == width && dst_stride_yuy2 == width * 2) {
     width *= height;
     height = 1;
     src_stride_y = src_stride_u = src_stride_v = dst_stride_yuy2 = 0;
@@ -182,6 +219,14 @@
     }
   }
 #endif
+#if defined(HAS_I422TOYUY2ROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    I422ToYUY2Row = I422ToYUY2Row_Any_AVX2;
+    if (IS_ALIGNED(width, 32)) {
+      I422ToYUY2Row = I422ToYUY2Row_AVX2;
+    }
+  }
+#endif
 #if defined(HAS_I422TOYUY2ROW_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     I422ToYUY2Row = I422ToYUY2Row_Any_NEON;
@@ -202,17 +247,21 @@
 }
 
 LIBYUV_API
-int I420ToYUY2(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_yuy2, int dst_stride_yuy2,
-               int width, int height) {
+int I420ToYUY2(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_yuy2,
+               int dst_stride_yuy2,
+               int width,
+               int height) {
   int y;
-  void (*I422ToYUY2Row)(const uint8* src_y, const uint8* src_u,
-                        const uint8* src_v, uint8* dst_yuy2, int width) =
+  void (*I422ToYUY2Row)(const uint8_t* src_y, const uint8_t* src_u,
+                        const uint8_t* src_v, uint8_t* dst_yuy2, int width) =
       I422ToYUY2Row_C;
-  if (!src_y || !src_u || !src_v || !dst_yuy2 ||
-      width <= 0 || height == 0) {
+  if (!src_y || !src_u || !src_v || !dst_yuy2 || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -229,6 +278,14 @@
     }
   }
 #endif
+#if defined(HAS_I422TOYUY2ROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    I422ToYUY2Row = I422ToYUY2Row_Any_AVX2;
+    if (IS_ALIGNED(width, 32)) {
+      I422ToYUY2Row = I422ToYUY2Row_AVX2;
+    }
+  }
+#endif
 #if defined(HAS_I422TOYUY2ROW_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     I422ToYUY2Row = I422ToYUY2Row_Any_NEON;
@@ -237,6 +294,14 @@
     }
   }
 #endif
+#if defined(HAS_I422TOYUY2ROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToYUY2Row = I422ToYUY2Row_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      I422ToYUY2Row = I422ToYUY2Row_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height - 1; y += 2) {
     I422ToYUY2Row(src_y, src_u, src_v, dst_yuy2, width);
@@ -254,17 +319,21 @@
 }
 
 LIBYUV_API
-int I422ToUYVY(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_uyvy, int dst_stride_uyvy,
-               int width, int height) {
+int I422ToUYVY(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_uyvy,
+               int dst_stride_uyvy,
+               int width,
+               int height) {
   int y;
-  void (*I422ToUYVYRow)(const uint8* src_y, const uint8* src_u,
-                        const uint8* src_v, uint8* dst_uyvy, int width) =
+  void (*I422ToUYVYRow)(const uint8_t* src_y, const uint8_t* src_u,
+                        const uint8_t* src_v, uint8_t* dst_uyvy, int width) =
       I422ToUYVYRow_C;
-  if (!src_y || !src_u || !src_v || !dst_uyvy ||
-      width <= 0 || height == 0) {
+  if (!src_y || !src_u || !src_v || !dst_uyvy || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -274,10 +343,8 @@
     dst_stride_uyvy = -dst_stride_uyvy;
   }
   // Coalesce rows.
-  if (src_stride_y == width &&
-      src_stride_u * 2 == width &&
-      src_stride_v * 2 == width &&
-      dst_stride_uyvy == width * 2) {
+  if (src_stride_y == width && src_stride_u * 2 == width &&
+      src_stride_v * 2 == width && dst_stride_uyvy == width * 2) {
     width *= height;
     height = 1;
     src_stride_y = src_stride_u = src_stride_v = dst_stride_uyvy = 0;
@@ -290,6 +357,14 @@
     }
   }
 #endif
+#if defined(HAS_I422TOUYVYROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    I422ToUYVYRow = I422ToUYVYRow_Any_AVX2;
+    if (IS_ALIGNED(width, 32)) {
+      I422ToUYVYRow = I422ToUYVYRow_AVX2;
+    }
+  }
+#endif
 #if defined(HAS_I422TOUYVYROW_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     I422ToUYVYRow = I422ToUYVYRow_Any_NEON;
@@ -298,6 +373,14 @@
     }
   }
 #endif
+#if defined(HAS_I422TOUYVYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToUYVYRow = I422ToUYVYRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      I422ToUYVYRow = I422ToUYVYRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     I422ToUYVYRow(src_y, src_u, src_v, dst_uyvy, width);
@@ -310,17 +393,21 @@
 }
 
 LIBYUV_API
-int I420ToUYVY(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_uyvy, int dst_stride_uyvy,
-               int width, int height) {
+int I420ToUYVY(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_uyvy,
+               int dst_stride_uyvy,
+               int width,
+               int height) {
   int y;
-  void (*I422ToUYVYRow)(const uint8* src_y, const uint8* src_u,
-                        const uint8* src_v, uint8* dst_uyvy, int width) =
+  void (*I422ToUYVYRow)(const uint8_t* src_y, const uint8_t* src_u,
+                        const uint8_t* src_v, uint8_t* dst_uyvy, int width) =
       I422ToUYVYRow_C;
-  if (!src_y || !src_u || !src_v || !dst_uyvy ||
-      width <= 0 || height == 0) {
+  if (!src_y || !src_u || !src_v || !dst_uyvy || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -337,6 +424,14 @@
     }
   }
 #endif
+#if defined(HAS_I422TOUYVYROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    I422ToUYVYRow = I422ToUYVYRow_Any_AVX2;
+    if (IS_ALIGNED(width, 32)) {
+      I422ToUYVYRow = I422ToUYVYRow_AVX2;
+    }
+  }
+#endif
 #if defined(HAS_I422TOUYVYROW_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     I422ToUYVYRow = I422ToUYVYRow_Any_NEON;
@@ -345,6 +440,14 @@
     }
   }
 #endif
+#if defined(HAS_I422TOUYVYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToUYVYRow = I422ToUYVYRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      I422ToUYVYRow = I422ToUYVYRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height - 1; y += 2) {
     I422ToUYVYRow(src_y, src_u, src_v, dst_uyvy, width);
@@ -363,14 +466,20 @@
 
 // TODO(fbarchard): test negative height for invert.
 LIBYUV_API
-int I420ToNV12(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_uv, int dst_stride_uv,
-               int width, int height) {
-  if (!src_y || !src_u || !src_v || !dst_y || !dst_uv ||
-      width <= 0 || height == 0) {
+int I420ToNV12(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_uv,
+               int dst_stride_uv,
+               int width,
+               int height) {
+  if (!src_y || !src_u || !src_v || !dst_y || !dst_uv || width <= 0 ||
+      height == 0) {
     return -1;
   }
   int halfwidth = (width + 1) / 2;
@@ -378,44 +487,47 @@
   if (dst_y) {
     CopyPlane(src_y, src_stride_y, dst_y, dst_stride_y, width, height);
   }
-  MergeUVPlane(src_u, src_stride_u,
-               src_v, src_stride_v,
-               dst_uv, dst_stride_uv,
+  MergeUVPlane(src_u, src_stride_u, src_v, src_stride_v, dst_uv, dst_stride_uv,
                halfwidth, halfheight);
   return 0;
 }
 
 LIBYUV_API
-int I420ToNV21(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_vu, int dst_stride_vu,
-               int width, int height) {
-  return I420ToNV12(src_y, src_stride_y,
-                    src_v, src_stride_v,
-                    src_u, src_stride_u,
-                    dst_y, dst_stride_y,
-                    dst_vu, dst_stride_vu,
+int I420ToNV21(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_vu,
+               int dst_stride_vu,
+               int width,
+               int height) {
+  return I420ToNV12(src_y, src_stride_y, src_v, src_stride_v, src_u,
+                    src_stride_u, dst_y, dst_stride_y, dst_vu, dst_stride_vu,
                     width, height);
 }
 
// Convert I420 to RGBA with matrix
-static int I420ToRGBAMatrix(const uint8* src_y, int src_stride_y,
-                            const uint8* src_u, int src_stride_u,
-                            const uint8* src_v, int src_stride_v,
-                            uint8* dst_rgba, int dst_stride_rgba,
+static int I420ToRGBAMatrix(const uint8_t* src_y,
+                            int src_stride_y,
+                            const uint8_t* src_u,
+                            int src_stride_u,
+                            const uint8_t* src_v,
+                            int src_stride_v,
+                            uint8_t* dst_rgba,
+                            int dst_stride_rgba,
                             const struct YuvConstants* yuvconstants,
-                            int width, int height) {
+                            int width,
+                            int height) {
   int y;
-  void (*I422ToRGBARow)(const uint8* y_buf,
-                        const uint8* u_buf,
-                        const uint8* v_buf,
-                        uint8* rgb_buf,
-                        const struct YuvConstants* yuvconstants,
-                        int width) = I422ToRGBARow_C;
-  if (!src_y || !src_u || !src_v || !dst_rgba ||
-      width <= 0 || height == 0) {
+  void (*I422ToRGBARow)(const uint8_t* y_buf, const uint8_t* u_buf,
+                        const uint8_t* v_buf, uint8_t* rgb_buf,
+                        const struct YuvConstants* yuvconstants, int width) =
+      I422ToRGBARow_C;
+  if (!src_y || !src_u || !src_v || !dst_rgba || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -448,13 +560,12 @@
     }
   }
 #endif
-#if defined(HAS_I422TORGBAROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && IS_ALIGNED(width, 4) &&
-      IS_ALIGNED(src_y, 4) && IS_ALIGNED(src_stride_y, 4) &&
-      IS_ALIGNED(src_u, 2) && IS_ALIGNED(src_stride_u, 2) &&
-      IS_ALIGNED(src_v, 2) && IS_ALIGNED(src_stride_v, 2) &&
-      IS_ALIGNED(dst_rgba, 4) && IS_ALIGNED(dst_stride_rgba, 4)) {
-    I422ToRGBARow = I422ToRGBARow_DSPR2;
+#if defined(HAS_I422TORGBAROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToRGBARow = I422ToRGBARow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      I422ToRGBARow = I422ToRGBARow_MSA;
+    }
   }
 #endif
 
@@ -472,50 +583,58 @@
 
 // Convert I420 to RGBA.
 LIBYUV_API
-int I420ToRGBA(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_rgba, int dst_stride_rgba,
-               int width, int height) {
-  return I420ToRGBAMatrix(src_y, src_stride_y,
-                          src_u, src_stride_u,
-                          src_v, src_stride_v,
-                          dst_rgba, dst_stride_rgba,
-                          &kYuvI601Constants,
-                          width, height);
+int I420ToRGBA(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_rgba,
+               int dst_stride_rgba,
+               int width,
+               int height) {
+  return I420ToRGBAMatrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_rgba, dst_stride_rgba,
+                          &kYuvI601Constants, width, height);
 }
 
 // Convert I420 to BGRA.
 LIBYUV_API
-int I420ToBGRA(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_bgra, int dst_stride_bgra,
-               int width, int height) {
-  return I420ToRGBAMatrix(src_y, src_stride_y,
-                          src_v, src_stride_v,  // Swap U and V
-                          src_u, src_stride_u,
-                          dst_bgra, dst_stride_bgra,
+int I420ToBGRA(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_bgra,
+               int dst_stride_bgra,
+               int width,
+               int height) {
+  return I420ToRGBAMatrix(src_y, src_stride_y, src_v,
+                          src_stride_v,  // Swap U and V
+                          src_u, src_stride_u, dst_bgra, dst_stride_bgra,
                           &kYvuI601Constants,  // Use Yvu matrix
                           width, height);
 }
 
 // Convert I420 to RGB24 with matrix
-static int I420ToRGB24Matrix(const uint8* src_y, int src_stride_y,
-                             const uint8* src_u, int src_stride_u,
-                             const uint8* src_v, int src_stride_v,
-                             uint8* dst_rgb24, int dst_stride_rgb24,
+static int I420ToRGB24Matrix(const uint8_t* src_y,
+                             int src_stride_y,
+                             const uint8_t* src_u,
+                             int src_stride_u,
+                             const uint8_t* src_v,
+                             int src_stride_v,
+                             uint8_t* dst_rgb24,
+                             int dst_stride_rgb24,
                              const struct YuvConstants* yuvconstants,
-                             int width, int height) {
+                             int width,
+                             int height) {
   int y;
-  void (*I422ToRGB24Row)(const uint8* y_buf,
-                         const uint8* u_buf,
-                         const uint8* v_buf,
-                         uint8* rgb_buf,
-                         const struct YuvConstants* yuvconstants,
-                         int width) = I422ToRGB24Row_C;
-  if (!src_y || !src_u || !src_v || !dst_rgb24 ||
-      width <= 0 || height == 0) {
+  void (*I422ToRGB24Row)(const uint8_t* y_buf, const uint8_t* u_buf,
+                         const uint8_t* v_buf, uint8_t* rgb_buf,
+                         const struct YuvConstants* yuvconstants, int width) =
+      I422ToRGB24Row_C;
+  if (!src_y || !src_u || !src_v || !dst_rgb24 || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -548,6 +667,14 @@
     }
   }
 #endif
+#if defined(HAS_I422TORGB24ROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToRGB24Row = I422ToRGB24Row_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      I422ToRGB24Row = I422ToRGB24Row_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     I422ToRGB24Row(src_y, src_u, src_v, dst_rgb24, yuvconstants, width);
@@ -563,50 +690,95 @@
 
 // Convert I420 to RGB24.
 LIBYUV_API
-int I420ToRGB24(const uint8* src_y, int src_stride_y,
-                const uint8* src_u, int src_stride_u,
-                const uint8* src_v, int src_stride_v,
-                uint8* dst_rgb24, int dst_stride_rgb24,
-                int width, int height) {
-  return I420ToRGB24Matrix(src_y, src_stride_y,
-                           src_u, src_stride_u,
-                           src_v, src_stride_v,
-                           dst_rgb24, dst_stride_rgb24,
-                           &kYuvI601Constants,
-                           width, height);
+int I420ToRGB24(const uint8_t* src_y,
+                int src_stride_y,
+                const uint8_t* src_u,
+                int src_stride_u,
+                const uint8_t* src_v,
+                int src_stride_v,
+                uint8_t* dst_rgb24,
+                int dst_stride_rgb24,
+                int width,
+                int height) {
+  return I420ToRGB24Matrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                           src_stride_v, dst_rgb24, dst_stride_rgb24,
+                           &kYuvI601Constants, width, height);
 }
 
 // Convert I420 to RAW.
 LIBYUV_API
-int I420ToRAW(const uint8* src_y, int src_stride_y,
-              const uint8* src_u, int src_stride_u,
-              const uint8* src_v, int src_stride_v,
-              uint8* dst_raw, int dst_stride_raw,
-              int width, int height) {
-  return I420ToRGB24Matrix(src_y, src_stride_y,
-                           src_v, src_stride_v,  // Swap U and V
-                           src_u, src_stride_u,
-                           dst_raw, dst_stride_raw,
+int I420ToRAW(const uint8_t* src_y,
+              int src_stride_y,
+              const uint8_t* src_u,
+              int src_stride_u,
+              const uint8_t* src_v,
+              int src_stride_v,
+              uint8_t* dst_raw,
+              int dst_stride_raw,
+              int width,
+              int height) {
+  return I420ToRGB24Matrix(src_y, src_stride_y, src_v,
+                           src_stride_v,  // Swap U and V
+                           src_u, src_stride_u, dst_raw, dst_stride_raw,
                            &kYvuI601Constants,  // Use Yvu matrix
                            width, height);
 }
 
+// Convert H420 to RGB24.
+LIBYUV_API
+int H420ToRGB24(const uint8_t* src_y,
+                int src_stride_y,
+                const uint8_t* src_u,
+                int src_stride_u,
+                const uint8_t* src_v,
+                int src_stride_v,
+                uint8_t* dst_rgb24,
+                int dst_stride_rgb24,
+                int width,
+                int height) {
+  return I420ToRGB24Matrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                           src_stride_v, dst_rgb24, dst_stride_rgb24,
+                           &kYuvH709Constants, width, height);
+}
+
+// Convert H420 to RAW.
+LIBYUV_API
+int H420ToRAW(const uint8_t* src_y,
+              int src_stride_y,
+              const uint8_t* src_u,
+              int src_stride_u,
+              const uint8_t* src_v,
+              int src_stride_v,
+              uint8_t* dst_raw,
+              int dst_stride_raw,
+              int width,
+              int height) {
+  return I420ToRGB24Matrix(src_y, src_stride_y, src_v,
+                           src_stride_v,  // Swap U and V
+                           src_u, src_stride_u, dst_raw, dst_stride_raw,
+                           &kYvuH709Constants,  // Use Yvu matrix
+                           width, height);
+}
+
 // Convert I420 to ARGB1555.
 LIBYUV_API
-int I420ToARGB1555(const uint8* src_y, int src_stride_y,
-                   const uint8* src_u, int src_stride_u,
-                   const uint8* src_v, int src_stride_v,
-                   uint8* dst_argb1555, int dst_stride_argb1555,
-                   int width, int height) {
+int I420ToARGB1555(const uint8_t* src_y,
+                   int src_stride_y,
+                   const uint8_t* src_u,
+                   int src_stride_u,
+                   const uint8_t* src_v,
+                   int src_stride_v,
+                   uint8_t* dst_argb1555,
+                   int dst_stride_argb1555,
+                   int width,
+                   int height) {
   int y;
-  void (*I422ToARGB1555Row)(const uint8* y_buf,
-                            const uint8* u_buf,
-                            const uint8* v_buf,
-                            uint8* rgb_buf,
+  void (*I422ToARGB1555Row)(const uint8_t* y_buf, const uint8_t* u_buf,
+                            const uint8_t* v_buf, uint8_t* rgb_buf,
                             const struct YuvConstants* yuvconstants,
                             int width) = I422ToARGB1555Row_C;
-  if (!src_y || !src_u || !src_v || !dst_argb1555 ||
-      width <= 0 || height == 0) {
+  if (!src_y || !src_u || !src_v || !dst_argb1555 || width <= 0 ||
+      height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -639,6 +811,14 @@
     }
   }
 #endif
+#if defined(HAS_I422TOARGB1555ROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToARGB1555Row = I422ToARGB1555Row_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      I422ToARGB1555Row = I422ToARGB1555Row_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     I422ToARGB1555Row(src_y, src_u, src_v, dst_argb1555, &kYuvI601Constants,
@@ -653,23 +833,25 @@
   return 0;
 }
 
-
 // Convert I420 to ARGB4444.
 LIBYUV_API
-int I420ToARGB4444(const uint8* src_y, int src_stride_y,
-                   const uint8* src_u, int src_stride_u,
-                   const uint8* src_v, int src_stride_v,
-                   uint8* dst_argb4444, int dst_stride_argb4444,
-                   int width, int height) {
+int I420ToARGB4444(const uint8_t* src_y,
+                   int src_stride_y,
+                   const uint8_t* src_u,
+                   int src_stride_u,
+                   const uint8_t* src_v,
+                   int src_stride_v,
+                   uint8_t* dst_argb4444,
+                   int dst_stride_argb4444,
+                   int width,
+                   int height) {
   int y;
-  void (*I422ToARGB4444Row)(const uint8* y_buf,
-                            const uint8* u_buf,
-                            const uint8* v_buf,
-                            uint8* rgb_buf,
+  void (*I422ToARGB4444Row)(const uint8_t* y_buf, const uint8_t* u_buf,
+                            const uint8_t* v_buf, uint8_t* rgb_buf,
                             const struct YuvConstants* yuvconstants,
                             int width) = I422ToARGB4444Row_C;
-  if (!src_y || !src_u || !src_v || !dst_argb4444 ||
-      width <= 0 || height == 0) {
+  if (!src_y || !src_u || !src_v || !dst_argb4444 || width <= 0 ||
+      height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -702,6 +884,14 @@
     }
   }
 #endif
+#if defined(HAS_I422TOARGB4444ROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToARGB4444Row = I422ToARGB4444Row_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      I422ToARGB4444Row = I422ToARGB4444Row_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     I422ToARGB4444Row(src_y, src_u, src_v, dst_argb4444, &kYuvI601Constants,
@@ -718,20 +908,22 @@
 
 // Convert I420 to RGB565.
 LIBYUV_API
-int I420ToRGB565(const uint8* src_y, int src_stride_y,
-                 const uint8* src_u, int src_stride_u,
-                 const uint8* src_v, int src_stride_v,
-                 uint8* dst_rgb565, int dst_stride_rgb565,
-                 int width, int height) {
+int I420ToRGB565(const uint8_t* src_y,
+                 int src_stride_y,
+                 const uint8_t* src_u,
+                 int src_stride_u,
+                 const uint8_t* src_v,
+                 int src_stride_v,
+                 uint8_t* dst_rgb565,
+                 int dst_stride_rgb565,
+                 int width,
+                 int height) {
   int y;
-  void (*I422ToRGB565Row)(const uint8* y_buf,
-                          const uint8* u_buf,
-                          const uint8* v_buf,
-                          uint8* rgb_buf,
-                          const struct YuvConstants* yuvconstants,
-                          int width) = I422ToRGB565Row_C;
-  if (!src_y || !src_u || !src_v || !dst_rgb565 ||
-      width <= 0 || height == 0) {
+  void (*I422ToRGB565Row)(const uint8_t* y_buf, const uint8_t* u_buf,
+                          const uint8_t* v_buf, uint8_t* rgb_buf,
+                          const struct YuvConstants* yuvconstants, int width) =
+      I422ToRGB565Row_C;
+  if (!src_y || !src_u || !src_v || !dst_rgb565 || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -764,6 +956,14 @@
     }
   }
 #endif
+#if defined(HAS_I422TORGB565ROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToRGB565Row = I422ToRGB565Row_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      I422ToRGB565Row = I422ToRGB565Row_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     I422ToRGB565Row(src_y, src_u, src_v, dst_rgb565, &kYuvI601Constants, width);
@@ -777,32 +977,102 @@
   return 0;
 }
 
+// Convert I422 to RGB565.
+LIBYUV_API
+int I422ToRGB565(const uint8_t* src_y,
+                 int src_stride_y,
+                 const uint8_t* src_u,
+                 int src_stride_u,
+                 const uint8_t* src_v,
+                 int src_stride_v,
+                 uint8_t* dst_rgb565,
+                 int dst_stride_rgb565,
+                 int width,
+                 int height) {
+  int y;
+  void (*I422ToRGB565Row)(const uint8_t* y_buf, const uint8_t* u_buf,
+                          const uint8_t* v_buf, uint8_t* rgb_buf,
+                          const struct YuvConstants* yuvconstants, int width) =
+      I422ToRGB565Row_C;
+  if (!src_y || !src_u || !src_v || !dst_rgb565 || width <= 0 || height == 0) {
+    return -1;
+  }
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    dst_rgb565 = dst_rgb565 + (height - 1) * dst_stride_rgb565;
+    dst_stride_rgb565 = -dst_stride_rgb565;
+  }
+#if defined(HAS_I422TORGB565ROW_SSSE3)
+  if (TestCpuFlag(kCpuHasSSSE3)) {
+    I422ToRGB565Row = I422ToRGB565Row_Any_SSSE3;
+    if (IS_ALIGNED(width, 8)) {
+      I422ToRGB565Row = I422ToRGB565Row_SSSE3;
+    }
+  }
+#endif
+#if defined(HAS_I422TORGB565ROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    I422ToRGB565Row = I422ToRGB565Row_Any_AVX2;
+    if (IS_ALIGNED(width, 16)) {
+      I422ToRGB565Row = I422ToRGB565Row_AVX2;
+    }
+  }
+#endif
+#if defined(HAS_I422TORGB565ROW_NEON)
+  if (TestCpuFlag(kCpuHasNEON)) {
+    I422ToRGB565Row = I422ToRGB565Row_Any_NEON;
+    if (IS_ALIGNED(width, 8)) {
+      I422ToRGB565Row = I422ToRGB565Row_NEON;
+    }
+  }
+#endif
+#if defined(HAS_I422TORGB565ROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToRGB565Row = I422ToRGB565Row_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      I422ToRGB565Row = I422ToRGB565Row_MSA;
+    }
+  }
+#endif
+
+  for (y = 0; y < height; ++y) {
+    I422ToRGB565Row(src_y, src_u, src_v, dst_rgb565, &kYuvI601Constants, width);
+    dst_rgb565 += dst_stride_rgb565;
+    src_y += src_stride_y;
+    src_u += src_stride_u;
+    src_v += src_stride_v;
+  }
+  return 0;
+}
+
 // Ordered 8x8 dither for 888 to 565.  Values from 0 to 7.
-static const uint8 kDither565_4x4[16] = {
-  0, 4, 1, 5,
-  6, 2, 7, 3,
-  1, 5, 0, 4,
-  7, 3, 6, 2,
+static const uint8_t kDither565_4x4[16] = {
+    0, 4, 1, 5, 6, 2, 7, 3, 1, 5, 0, 4, 7, 3, 6, 2,
 };
 
 // Convert I420 to RGB565 with dithering.
 LIBYUV_API
-int I420ToRGB565Dither(const uint8* src_y, int src_stride_y,
-                       const uint8* src_u, int src_stride_u,
-                       const uint8* src_v, int src_stride_v,
-                       uint8* dst_rgb565, int dst_stride_rgb565,
-                       const uint8* dither4x4, int width, int height) {
+int I420ToRGB565Dither(const uint8_t* src_y,
+                       int src_stride_y,
+                       const uint8_t* src_u,
+                       int src_stride_u,
+                       const uint8_t* src_v,
+                       int src_stride_v,
+                       uint8_t* dst_rgb565,
+                       int dst_stride_rgb565,
+                       const uint8_t* dither4x4,
+                       int width,
+                       int height) {
   int y;
-  void (*I422ToARGBRow)(const uint8* y_buf,
-                        const uint8* u_buf,
-                        const uint8* v_buf,
-                        uint8* rgb_buf,
-                        const struct YuvConstants* yuvconstants,
-                        int width) = I422ToARGBRow_C;
-  void (*ARGBToRGB565DitherRow)(const uint8* src_argb, uint8* dst_rgb,
-      const uint32 dither4, int width) = ARGBToRGB565DitherRow_C;
-  if (!src_y || !src_u || !src_v || !dst_rgb565 ||
-      width <= 0 || height == 0) {
+  void (*I422ToARGBRow)(const uint8_t* y_buf, const uint8_t* u_buf,
+                        const uint8_t* v_buf, uint8_t* rgb_buf,
+                        const struct YuvConstants* yuvconstants, int width) =
+      I422ToARGBRow_C;
+  void (*ARGBToRGB565DitherRow)(const uint8_t* src_argb, uint8_t* dst_rgb,
+                                const uint32_t dither4, int width) =
+      ARGBToRGB565DitherRow_C;
+  if (!src_y || !src_u || !src_v || !dst_rgb565 || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -838,12 +1108,12 @@
     }
   }
 #endif
-#if defined(HAS_I422TOARGBROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && IS_ALIGNED(width, 4) &&
-      IS_ALIGNED(src_y, 4) && IS_ALIGNED(src_stride_y, 4) &&
-      IS_ALIGNED(src_u, 2) && IS_ALIGNED(src_stride_u, 2) &&
-      IS_ALIGNED(src_v, 2) && IS_ALIGNED(src_stride_v, 2)) {
-    I422ToARGBRow = I422ToARGBRow_DSPR2;
+#if defined(HAS_I422TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToARGBRow = I422ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      I422ToARGBRow = I422ToARGBRow_MSA;
+    }
   }
 #endif
 #if defined(HAS_ARGBTORGB565DITHERROW_SSE2)
@@ -870,13 +1140,22 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTORGB565DITHERROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToRGB565DitherRow = ARGBToRGB565DitherRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      ARGBToRGB565DitherRow = ARGBToRGB565DitherRow_MSA;
+    }
+  }
+#endif
   {
     // Allocate a row of argb.
     align_buffer_64(row_argb, width * 4);
     for (y = 0; y < height; ++y) {
       I422ToARGBRow(src_y, src_u, src_v, row_argb, &kYuvI601Constants, width);
       ARGBToRGB565DitherRow(row_argb, dst_rgb565,
-                            *(uint32*)(dither4x4 + ((y & 3) << 2)), width);
+                            *(const uint32_t*)(dither4x4 + ((y & 3) << 2)),
+                            width);
       dst_rgb565 += dst_stride_rgb565;
       src_y += src_stride_y;
       if (y & 1) {
@@ -889,220 +1168,254 @@
   return 0;
 }
 
+// Convert I420 to AR30 with matrix
+static int I420ToAR30Matrix(const uint8_t* src_y,
+                            int src_stride_y,
+                            const uint8_t* src_u,
+                            int src_stride_u,
+                            const uint8_t* src_v,
+                            int src_stride_v,
+                            uint8_t* dst_ar30,
+                            int dst_stride_ar30,
+                            const struct YuvConstants* yuvconstants,
+                            int width,
+                            int height) {
+  int y;
+  void (*I422ToAR30Row)(const uint8_t* y_buf, const uint8_t* u_buf,
+                        const uint8_t* v_buf, uint8_t* rgb_buf,
+                        const struct YuvConstants* yuvconstants, int width) =
+      I422ToAR30Row_C;
+
+  if (!src_y || !src_u || !src_v || !dst_ar30 || width <= 0 || height == 0) {
+    return -1;
+  }
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    dst_ar30 = dst_ar30 + (height - 1) * dst_stride_ar30;
+    dst_stride_ar30 = -dst_stride_ar30;
+  }
+
+#if defined(HAS_I422TOAR30ROW_SSSE3)
+  if (TestCpuFlag(kCpuHasSSSE3)) {
+    I422ToAR30Row = I422ToAR30Row_Any_SSSE3;
+    if (IS_ALIGNED(width, 8)) {
+      I422ToAR30Row = I422ToAR30Row_SSSE3;
+    }
+  }
+#endif
+#if defined(HAS_I422TOAR30ROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    I422ToAR30Row = I422ToAR30Row_Any_AVX2;
+    if (IS_ALIGNED(width, 16)) {
+      I422ToAR30Row = I422ToAR30Row_AVX2;
+    }
+  }
+#endif
+
+  for (y = 0; y < height; ++y) {
+    I422ToAR30Row(src_y, src_u, src_v, dst_ar30, yuvconstants, width);
+    dst_ar30 += dst_stride_ar30;
+    src_y += src_stride_y;
+    if (y & 1) {
+      src_u += src_stride_u;
+      src_v += src_stride_v;
+    }
+  }
+  return 0;
+}
+
+// Convert I420 to AR30.
+LIBYUV_API
+int I420ToAR30(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_ar30,
+               int dst_stride_ar30,
+               int width,
+               int height) {
+  return I420ToAR30Matrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_ar30, dst_stride_ar30,
+                          &kYuvI601Constants, width, height);
+}
+
+// Convert H420 to AR30.
+LIBYUV_API
+int H420ToAR30(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_ar30,
+               int dst_stride_ar30,
+               int width,
+               int height) {
+  return I420ToAR30Matrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_ar30, dst_stride_ar30,
+                          &kYvuH709Constants, width, height);
+}
+
 // Convert I420 to specified format
 LIBYUV_API
-int ConvertFromI420(const uint8* y, int y_stride,
-                    const uint8* u, int u_stride,
-                    const uint8* v, int v_stride,
-                    uint8* dst_sample, int dst_sample_stride,
-                    int width, int height,
-                    uint32 fourcc) {
-  uint32 format = CanonicalFourCC(fourcc);
+int ConvertFromI420(const uint8_t* y,
+                    int y_stride,
+                    const uint8_t* u,
+                    int u_stride,
+                    const uint8_t* v,
+                    int v_stride,
+                    uint8_t* dst_sample,
+                    int dst_sample_stride,
+                    int width,
+                    int height,
+                    uint32_t fourcc) {
+  uint32_t format = CanonicalFourCC(fourcc);
   int r = 0;
-  if (!y || !u|| !v || !dst_sample ||
-      width <= 0 || height == 0) {
+  if (!y || !u || !v || !dst_sample || width <= 0 || height == 0) {
     return -1;
   }
   switch (format) {
     // Single plane formats
     case FOURCC_YUY2:
-      r = I420ToYUY2(y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     dst_sample,
-                     dst_sample_stride ? dst_sample_stride : width * 2,
-                     width, height);
+      r = I420ToYUY2(y, y_stride, u, u_stride, v, v_stride, dst_sample,
+                     dst_sample_stride ? dst_sample_stride : width * 2, width,
+                     height);
       break;
     case FOURCC_UYVY:
-      r = I420ToUYVY(y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     dst_sample,
-                     dst_sample_stride ? dst_sample_stride : width * 2,
-                     width, height);
+      r = I420ToUYVY(y, y_stride, u, u_stride, v, v_stride, dst_sample,
+                     dst_sample_stride ? dst_sample_stride : width * 2, width,
+                     height);
       break;
     case FOURCC_RGBP:
-      r = I420ToRGB565(y, y_stride,
-                       u, u_stride,
-                       v, v_stride,
-                       dst_sample,
-                       dst_sample_stride ? dst_sample_stride : width * 2,
-                       width, height);
+      r = I420ToRGB565(y, y_stride, u, u_stride, v, v_stride, dst_sample,
+                       dst_sample_stride ? dst_sample_stride : width * 2, width,
+                       height);
       break;
     case FOURCC_RGBO:
-      r = I420ToARGB1555(y, y_stride,
-                         u, u_stride,
-                         v, v_stride,
-                         dst_sample,
+      r = I420ToARGB1555(y, y_stride, u, u_stride, v, v_stride, dst_sample,
                          dst_sample_stride ? dst_sample_stride : width * 2,
                          width, height);
       break;
     case FOURCC_R444:
-      r = I420ToARGB4444(y, y_stride,
-                         u, u_stride,
-                         v, v_stride,
-                         dst_sample,
+      r = I420ToARGB4444(y, y_stride, u, u_stride, v, v_stride, dst_sample,
                          dst_sample_stride ? dst_sample_stride : width * 2,
                          width, height);
       break;
     case FOURCC_24BG:
-      r = I420ToRGB24(y, y_stride,
-                      u, u_stride,
-                      v, v_stride,
-                      dst_sample,
-                      dst_sample_stride ? dst_sample_stride : width * 3,
-                      width, height);
+      r = I420ToRGB24(y, y_stride, u, u_stride, v, v_stride, dst_sample,
+                      dst_sample_stride ? dst_sample_stride : width * 3, width,
+                      height);
       break;
     case FOURCC_RAW:
-      r = I420ToRAW(y, y_stride,
-                    u, u_stride,
-                    v, v_stride,
-                    dst_sample,
-                    dst_sample_stride ? dst_sample_stride : width * 3,
-                    width, height);
+      r = I420ToRAW(y, y_stride, u, u_stride, v, v_stride, dst_sample,
+                    dst_sample_stride ? dst_sample_stride : width * 3, width,
+                    height);
       break;
     case FOURCC_ARGB:
-      r = I420ToARGB(y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     dst_sample,
-                     dst_sample_stride ? dst_sample_stride : width * 4,
-                     width, height);
+      r = I420ToARGB(y, y_stride, u, u_stride, v, v_stride, dst_sample,
+                     dst_sample_stride ? dst_sample_stride : width * 4, width,
+                     height);
       break;
     case FOURCC_BGRA:
-      r = I420ToBGRA(y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     dst_sample,
-                     dst_sample_stride ? dst_sample_stride : width * 4,
-                     width, height);
+      r = I420ToBGRA(y, y_stride, u, u_stride, v, v_stride, dst_sample,
+                     dst_sample_stride ? dst_sample_stride : width * 4, width,
+                     height);
       break;
     case FOURCC_ABGR:
-      r = I420ToABGR(y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     dst_sample,
-                     dst_sample_stride ? dst_sample_stride : width * 4,
-                     width, height);
+      r = I420ToABGR(y, y_stride, u, u_stride, v, v_stride, dst_sample,
+                     dst_sample_stride ? dst_sample_stride : width * 4, width,
+                     height);
       break;
     case FOURCC_RGBA:
-      r = I420ToRGBA(y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     dst_sample,
-                     dst_sample_stride ? dst_sample_stride : width * 4,
-                     width, height);
+      r = I420ToRGBA(y, y_stride, u, u_stride, v, v_stride, dst_sample,
+                     dst_sample_stride ? dst_sample_stride : width * 4, width,
+                     height);
+      break;
+    case FOURCC_AR30:
+      r = I420ToAR30(y, y_stride, u, u_stride, v, v_stride, dst_sample,
+                     dst_sample_stride ? dst_sample_stride : width * 4, width,
+                     height);
       break;
     case FOURCC_I400:
-      r = I400Copy(y, y_stride,
-                   dst_sample,
-                   dst_sample_stride ? dst_sample_stride : width,
-                   width, height);
+      r = I400Copy(y, y_stride, dst_sample,
+                   dst_sample_stride ? dst_sample_stride : width, width,
+                   height);
       break;
     case FOURCC_NV12: {
-      uint8* dst_uv = dst_sample + width * height;
-      r = I420ToNV12(y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     dst_sample,
-                     dst_sample_stride ? dst_sample_stride : width,
-                     dst_uv,
-                     dst_sample_stride ? dst_sample_stride : width,
-                     width, height);
+      uint8_t* dst_uv = dst_sample + width * height;
+      r = I420ToNV12(y, y_stride, u, u_stride, v, v_stride, dst_sample,
+                     dst_sample_stride ? dst_sample_stride : width, dst_uv,
+                     dst_sample_stride ? dst_sample_stride : width, width,
+                     height);
       break;
     }
     case FOURCC_NV21: {
-      uint8* dst_vu = dst_sample + width * height;
-      r = I420ToNV21(y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     dst_sample,
-                     dst_sample_stride ? dst_sample_stride : width,
-                     dst_vu,
-                     dst_sample_stride ? dst_sample_stride : width,
-                     width, height);
+      uint8_t* dst_vu = dst_sample + width * height;
+      r = I420ToNV21(y, y_stride, u, u_stride, v, v_stride, dst_sample,
+                     dst_sample_stride ? dst_sample_stride : width, dst_vu,
+                     dst_sample_stride ? dst_sample_stride : width, width,
+                     height);
       break;
     }
     // TODO(fbarchard): Add M420.
     // Triplanar formats
-    // TODO(fbarchard): halfstride instead of halfwidth
     case FOURCC_I420:
     case FOURCC_YV12: {
-      int halfwidth = (width + 1) / 2;
+      dst_sample_stride = dst_sample_stride ? dst_sample_stride : width;
+      int halfstride = (dst_sample_stride + 1) / 2;
       int halfheight = (height + 1) / 2;
-      uint8* dst_u;
-      uint8* dst_v;
+      uint8_t* dst_u;
+      uint8_t* dst_v;
       if (format == FOURCC_YV12) {
-        dst_v = dst_sample + width * height;
-        dst_u = dst_v + halfwidth * halfheight;
+        dst_v = dst_sample + dst_sample_stride * height;
+        dst_u = dst_v + halfstride * halfheight;
       } else {
-        dst_u = dst_sample + width * height;
-        dst_v = dst_u + halfwidth * halfheight;
+        dst_u = dst_sample + dst_sample_stride * height;
+        dst_v = dst_u + halfstride * halfheight;
       }
-      r = I420Copy(y, y_stride,
-                   u, u_stride,
-                   v, v_stride,
-                   dst_sample, width,
-                   dst_u, halfwidth,
-                   dst_v, halfwidth,
+      r = I420Copy(y, y_stride, u, u_stride, v, v_stride, dst_sample,
+                   dst_sample_stride, dst_u, halfstride, dst_v, halfstride,
                    width, height);
       break;
     }
     case FOURCC_I422:
     case FOURCC_YV16: {
-      int halfwidth = (width + 1) / 2;
-      uint8* dst_u;
-      uint8* dst_v;
+      dst_sample_stride = dst_sample_stride ? dst_sample_stride : width;
+      int halfstride = (dst_sample_stride + 1) / 2;
+      uint8_t* dst_u;
+      uint8_t* dst_v;
       if (format == FOURCC_YV16) {
-        dst_v = dst_sample + width * height;
-        dst_u = dst_v + halfwidth * height;
+        dst_v = dst_sample + dst_sample_stride * height;
+        dst_u = dst_v + halfstride * height;
       } else {
-        dst_u = dst_sample + width * height;
-        dst_v = dst_u + halfwidth * height;
+        dst_u = dst_sample + dst_sample_stride * height;
+        dst_v = dst_u + halfstride * height;
       }
-      r = I420ToI422(y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     dst_sample, width,
-                     dst_u, halfwidth,
-                     dst_v, halfwidth,
+      r = I420ToI422(y, y_stride, u, u_stride, v, v_stride, dst_sample,
+                     dst_sample_stride, dst_u, halfstride, dst_v, halfstride,
                      width, height);
       break;
     }
     case FOURCC_I444:
     case FOURCC_YV24: {
-      uint8* dst_u;
-      uint8* dst_v;
+      dst_sample_stride = dst_sample_stride ? dst_sample_stride : width;
+      uint8_t* dst_u;
+      uint8_t* dst_v;
       if (format == FOURCC_YV24) {
-        dst_v = dst_sample + width * height;
-        dst_u = dst_v + width * height;
+        dst_v = dst_sample + dst_sample_stride * height;
+        dst_u = dst_v + dst_sample_stride * height;
       } else {
-        dst_u = dst_sample + width * height;
-        dst_v = dst_u + width * height;
+        dst_u = dst_sample + dst_sample_stride * height;
+        dst_v = dst_u + dst_sample_stride * height;
       }
-      r = I420ToI444(y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     dst_sample, width,
-                     dst_u, width,
-                     dst_v, width,
-                     width, height);
+      r = I420ToI444(y, y_stride, u, u_stride, v, v_stride, dst_sample,
+                     dst_sample_stride, dst_u, dst_sample_stride, dst_v,
+                     dst_sample_stride, width, height);
       break;
     }
-    case FOURCC_I411: {
-      int quarterwidth = (width + 3) / 4;
-      uint8* dst_u = dst_sample + width * height;
-      uint8* dst_v = dst_u + quarterwidth * height;
-      r = I420ToI411(y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     dst_sample, width,
-                     dst_u, quarterwidth,
-                     dst_v, quarterwidth,
-                     width, height);
-      break;
-    }
-
     // Formats not supported - MJPG, biplanar, some rgb formats.
     default:
       return -1;  // unknown fourcc - return failure code.
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_from_argb.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_from_argb.cc
index 2a8682b..c8d9125 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_from_argb.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_from_argb.cc
@@ -22,16 +22,21 @@
 
 // ARGB little endian (bgra in memory) to I444
 LIBYUV_API
-int ARGBToI444(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int ARGBToI444(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   int y;
-  void (*ARGBToYRow)(const uint8* src_argb, uint8* dst_y, int width) =
+  void (*ARGBToYRow)(const uint8_t* src_argb, uint8_t* dst_y, int width) =
       ARGBToYRow_C;
-  void (*ARGBToUV444Row)(const uint8* src_argb, uint8* dst_u, uint8* dst_v,
-      int width) = ARGBToUV444Row_C;
+  void (*ARGBToUV444Row)(const uint8_t* src_argb, uint8_t* dst_u,
+                         uint8_t* dst_v, int width) = ARGBToUV444Row_C;
   if (!src_argb || !dst_y || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
@@ -41,20 +46,18 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_y == width &&
-      dst_stride_u == width &&
-      dst_stride_v == width) {
+  if (src_stride_argb == width * 4 && dst_stride_y == width &&
+      dst_stride_u == width && dst_stride_v == width) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_y = dst_stride_u = dst_stride_v = 0;
   }
 #if defined(HAS_ARGBTOUV444ROW_SSSE3)
-    if (TestCpuFlag(kCpuHasSSSE3)) {
-      ARGBToUV444Row = ARGBToUV444Row_Any_SSSE3;
-      if (IS_ALIGNED(width, 16)) {
-        ARGBToUV444Row = ARGBToUV444Row_SSSE3;
-      }
+  if (TestCpuFlag(kCpuHasSSSE3)) {
+    ARGBToUV444Row = ARGBToUV444Row_Any_SSSE3;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToUV444Row = ARGBToUV444Row_SSSE3;
+    }
   }
 #endif
 #if defined(HAS_ARGBTOUV444ROW_NEON)
@@ -65,6 +68,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTOUV444ROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToUV444Row = ARGBToUV444Row_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToUV444Row = ARGBToUV444Row_MSA;
+    }
+  }
+#endif
 #if defined(HAS_ARGBTOYROW_SSSE3)
   if (TestCpuFlag(kCpuHasSSSE3)) {
     ARGBToYRow = ARGBToYRow_Any_SSSE3;
@@ -89,6 +100,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToYRow = ARGBToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToYRow = ARGBToYRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     ARGBToUV444Row(src_argb, dst_u, dst_v, width);
@@ -103,19 +122,23 @@
 
 // ARGB little endian (bgra in memory) to I422
 LIBYUV_API
-int ARGBToI422(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int ARGBToI422(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   int y;
-  void (*ARGBToUVRow)(const uint8* src_argb0, int src_stride_argb,
-      uint8* dst_u, uint8* dst_v, int width) = ARGBToUVRow_C;
-  void (*ARGBToYRow)(const uint8* src_argb, uint8* dst_y, int width) =
+  void (*ARGBToUVRow)(const uint8_t* src_argb0, int src_stride_argb,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ARGBToUVRow_C;
+  void (*ARGBToYRow)(const uint8_t* src_argb, uint8_t* dst_y, int width) =
       ARGBToYRow_C;
-  if (!src_argb ||
-      !dst_y || !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_argb || !dst_y || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -125,10 +148,8 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_y == width &&
-      dst_stride_u * 2 == width &&
-      dst_stride_v * 2 == width) {
+  if (src_stride_argb == width * 4 && dst_stride_y == width &&
+      dst_stride_u * 2 == width && dst_stride_v * 2 == width) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_y = dst_stride_u = dst_stride_v = 0;
@@ -170,6 +191,23 @@
   }
 #endif
 
+#if defined(HAS_ARGBTOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToYRow = ARGBToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToYRow = ARGBToYRow_MSA;
+    }
+  }
+#endif
+#if defined(HAS_ARGBTOUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToUVRow = ARGBToUVRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      ARGBToUVRow = ARGBToUVRow_MSA;
+    }
+  }
+#endif
+
   for (y = 0; y < height; ++y) {
     ARGBToUVRow(src_argb, 0, dst_u, dst_v, width);
     ARGBToYRow(src_argb, dst_y, width);
@@ -181,95 +219,25 @@
   return 0;
 }
 
-// ARGB little endian (bgra in memory) to I411
 LIBYUV_API
-int ARGBToI411(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
-  int y;
-  void (*ARGBToUV411Row)(const uint8* src_argb, uint8* dst_u, uint8* dst_v,
-      int width) = ARGBToUV411Row_C;
-  void (*ARGBToYRow)(const uint8* src_argb, uint8* dst_y, int width) =
-      ARGBToYRow_C;
-  if (!src_argb || !dst_y || !dst_u || !dst_v || width <= 0 || height == 0) {
-    return -1;
-  }
-  if (height < 0) {
-    height = -height;
-    src_argb = src_argb + (height - 1) * src_stride_argb;
-    src_stride_argb = -src_stride_argb;
-  }
-  // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_y == width &&
-      dst_stride_u * 4 == width &&
-      dst_stride_v * 4 == width) {
-    width *= height;
-    height = 1;
-    src_stride_argb = dst_stride_y = dst_stride_u = dst_stride_v = 0;
-  }
-#if defined(HAS_ARGBTOYROW_SSSE3)
-  if (TestCpuFlag(kCpuHasSSSE3)) {
-    ARGBToYRow = ARGBToYRow_Any_SSSE3;
-    if (IS_ALIGNED(width, 16)) {
-      ARGBToYRow = ARGBToYRow_SSSE3;
-    }
-  }
-#endif
-#if defined(HAS_ARGBTOYROW_AVX2)
-  if (TestCpuFlag(kCpuHasAVX2)) {
-    ARGBToYRow = ARGBToYRow_Any_AVX2;
-    if (IS_ALIGNED(width, 32)) {
-      ARGBToYRow = ARGBToYRow_AVX2;
-    }
-  }
-#endif
-#if defined(HAS_ARGBTOYROW_NEON)
-  if (TestCpuFlag(kCpuHasNEON)) {
-    ARGBToYRow = ARGBToYRow_Any_NEON;
-    if (IS_ALIGNED(width, 8)) {
-      ARGBToYRow = ARGBToYRow_NEON;
-    }
-  }
-#endif
-#if defined(HAS_ARGBTOUV411ROW_NEON)
-  if (TestCpuFlag(kCpuHasNEON)) {
-    ARGBToUV411Row = ARGBToUV411Row_Any_NEON;
-    if (IS_ALIGNED(width, 32)) {
-      ARGBToUV411Row = ARGBToUV411Row_NEON;
-    }
-  }
-#endif
-
-  for (y = 0; y < height; ++y) {
-    ARGBToUV411Row(src_argb, dst_u, dst_v, width);
-    ARGBToYRow(src_argb, dst_y, width);
-    src_argb += src_stride_argb;
-    dst_y += dst_stride_y;
-    dst_u += dst_stride_u;
-    dst_v += dst_stride_v;
-  }
-  return 0;
-}
-
-LIBYUV_API
-int ARGBToNV12(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_uv, int dst_stride_uv,
-               int width, int height) {
+int ARGBToNV12(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_uv,
+               int dst_stride_uv,
+               int width,
+               int height) {
   int y;
   int halfwidth = (width + 1) >> 1;
-  void (*ARGBToUVRow)(const uint8* src_argb0, int src_stride_argb,
-                      uint8* dst_u, uint8* dst_v, int width) = ARGBToUVRow_C;
-  void (*ARGBToYRow)(const uint8* src_argb, uint8* dst_y, int width) =
+  void (*ARGBToUVRow)(const uint8_t* src_argb0, int src_stride_argb,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ARGBToUVRow_C;
+  void (*ARGBToYRow)(const uint8_t* src_argb, uint8_t* dst_y, int width) =
       ARGBToYRow_C;
-  void (*MergeUVRow_)(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
-                      int width) = MergeUVRow_C;
-  if (!src_argb ||
-      !dst_y || !dst_uv ||
-      width <= 0 || height == 0) {
+  void (*MergeUVRow_)(const uint8_t* src_u, const uint8_t* src_v,
+                      uint8_t* dst_uv, int width) = MergeUVRow_C;
+  if (!src_argb || !dst_y || !dst_uv || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -314,6 +282,22 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToYRow = ARGBToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToYRow = ARGBToYRow_MSA;
+    }
+  }
+#endif
+#if defined(HAS_ARGBTOUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToUVRow = ARGBToUVRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      ARGBToUVRow = ARGBToUVRow_MSA;
+    }
+  }
+#endif
 #if defined(HAS_MERGEUVROW_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
     MergeUVRow_ = MergeUVRow_Any_SSE2;
@@ -338,10 +322,18 @@
     }
   }
 #endif
+#if defined(HAS_MERGEUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    MergeUVRow_ = MergeUVRow_Any_MSA;
+    if (IS_ALIGNED(halfwidth, 16)) {
+      MergeUVRow_ = MergeUVRow_MSA;
+    }
+  }
+#endif
   {
     // Allocate a rows of uv.
     align_buffer_64(row_u, ((halfwidth + 31) & ~31) * 2);
-    uint8* row_v = row_u + ((halfwidth + 31) & ~31);
+    uint8_t* row_v = row_u + ((halfwidth + 31) & ~31);
 
     for (y = 0; y < height - 1; y += 2) {
       ARGBToUVRow(src_argb, src_stride_argb, row_u, row_v, width);
@@ -364,21 +356,24 @@
 
 // Same as NV12 but U and V swapped.
 LIBYUV_API
-int ARGBToNV21(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_uv, int dst_stride_uv,
-               int width, int height) {
+int ARGBToNV21(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_vu,
+               int dst_stride_vu,
+               int width,
+               int height) {
   int y;
   int halfwidth = (width + 1) >> 1;
-  void (*ARGBToUVRow)(const uint8* src_argb0, int src_stride_argb,
-                      uint8* dst_u, uint8* dst_v, int width) = ARGBToUVRow_C;
-  void (*ARGBToYRow)(const uint8* src_argb, uint8* dst_y, int width) =
+  void (*ARGBToUVRow)(const uint8_t* src_argb0, int src_stride_argb,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ARGBToUVRow_C;
+  void (*ARGBToYRow)(const uint8_t* src_argb, uint8_t* dst_y, int width) =
       ARGBToYRow_C;
-  void (*MergeUVRow_)(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
-                      int width) = MergeUVRow_C;
-  if (!src_argb ||
-      !dst_y || !dst_uv ||
-      width <= 0 || height == 0) {
+  void (*MergeUVRow_)(const uint8_t* src_u, const uint8_t* src_v,
+                      uint8_t* dst_vu, int width) = MergeUVRow_C;
+  if (!src_argb || !dst_y || !dst_vu || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -423,6 +418,22 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToYRow = ARGBToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToYRow = ARGBToYRow_MSA;
+    }
+  }
+#endif
+#if defined(HAS_ARGBTOUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToUVRow = ARGBToUVRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      ARGBToUVRow = ARGBToUVRow_MSA;
+    }
+  }
+#endif
 #if defined(HAS_MERGEUVROW_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
     MergeUVRow_ = MergeUVRow_Any_SSE2;
@@ -447,23 +458,31 @@
     }
   }
 #endif
+#if defined(HAS_MERGEUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    MergeUVRow_ = MergeUVRow_Any_MSA;
+    if (IS_ALIGNED(halfwidth, 16)) {
+      MergeUVRow_ = MergeUVRow_MSA;
+    }
+  }
+#endif
   {
     // Allocate a rows of uv.
     align_buffer_64(row_u, ((halfwidth + 31) & ~31) * 2);
-    uint8* row_v = row_u + ((halfwidth + 31) & ~31);
+    uint8_t* row_v = row_u + ((halfwidth + 31) & ~31);
 
     for (y = 0; y < height - 1; y += 2) {
       ARGBToUVRow(src_argb, src_stride_argb, row_u, row_v, width);
-      MergeUVRow_(row_v, row_u, dst_uv, halfwidth);
+      MergeUVRow_(row_v, row_u, dst_vu, halfwidth);
       ARGBToYRow(src_argb, dst_y, width);
       ARGBToYRow(src_argb + src_stride_argb, dst_y + dst_stride_y, width);
       src_argb += src_stride_argb * 2;
       dst_y += dst_stride_y * 2;
-      dst_uv += dst_stride_uv;
+      dst_vu += dst_stride_vu;
     }
     if (height & 1) {
       ARGBToUVRow(src_argb, 0, row_u, row_v, width);
-      MergeUVRow_(row_v, row_u, dst_uv, halfwidth);
+      MergeUVRow_(row_v, row_u, dst_vu, halfwidth);
       ARGBToYRow(src_argb, dst_y, width);
     }
     free_aligned_buffer_64(row_u);
@@ -473,19 +492,23 @@
 
 // Convert ARGB to YUY2.
 LIBYUV_API
-int ARGBToYUY2(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_yuy2, int dst_stride_yuy2,
-               int width, int height) {
+int ARGBToYUY2(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_yuy2,
+               int dst_stride_yuy2,
+               int width,
+               int height) {
   int y;
-  void (*ARGBToUVRow)(const uint8* src_argb, int src_stride_argb,
-      uint8* dst_u, uint8* dst_v, int width) = ARGBToUVRow_C;
-  void (*ARGBToYRow)(const uint8* src_argb, uint8* dst_y, int width) =
+  void (*ARGBToUVRow)(const uint8_t* src_argb, int src_stride_argb,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ARGBToUVRow_C;
+  void (*ARGBToYRow)(const uint8_t* src_argb, uint8_t* dst_y, int width) =
       ARGBToYRow_C;
-  void (*I422ToYUY2Row)(const uint8* src_y, const uint8* src_u,
-      const uint8* src_v, uint8* dst_yuy2, int width) = I422ToYUY2Row_C;
+  void (*I422ToYUY2Row)(const uint8_t* src_y, const uint8_t* src_u,
+                        const uint8_t* src_v, uint8_t* dst_yuy2, int width) =
+      I422ToYUY2Row_C;
 
-  if (!src_argb || !dst_yuy2 ||
-      width <= 0 || height == 0) {
+  if (!src_argb || !dst_yuy2 || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -495,8 +518,7 @@
     dst_stride_yuy2 = -dst_stride_yuy2;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_yuy2 == width * 2) {
+  if (src_stride_argb == width * 4 && dst_stride_yuy2 == width * 2) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_yuy2 = 0;
@@ -537,6 +559,22 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToYRow = ARGBToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToYRow = ARGBToYRow_MSA;
+    }
+  }
+#endif
+#if defined(HAS_ARGBTOUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToUVRow = ARGBToUVRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      ARGBToUVRow = ARGBToUVRow_MSA;
+    }
+  }
+#endif
 #if defined(HAS_I422TOYUY2ROW_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
     I422ToYUY2Row = I422ToYUY2Row_Any_SSE2;
@@ -545,6 +583,14 @@
     }
   }
 #endif
+#if defined(HAS_I422TOYUY2ROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    I422ToYUY2Row = I422ToYUY2Row_Any_AVX2;
+    if (IS_ALIGNED(width, 32)) {
+      I422ToYUY2Row = I422ToYUY2Row_AVX2;
+    }
+  }
+#endif
 #if defined(HAS_I422TOYUY2ROW_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     I422ToYUY2Row = I422ToYUY2Row_Any_NEON;
@@ -553,12 +599,20 @@
     }
   }
 #endif
+#if defined(HAS_I422TOYUY2ROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToYUY2Row = I422ToYUY2Row_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      I422ToYUY2Row = I422ToYUY2Row_MSA;
+    }
+  }
+#endif
 
   {
     // Allocate a rows of yuv.
     align_buffer_64(row_y, ((width + 63) & ~63) * 2);
-    uint8* row_u = row_y + ((width + 63) & ~63);
-    uint8* row_v = row_u + ((width + 63) & ~63) / 2;
+    uint8_t* row_u = row_y + ((width + 63) & ~63);
+    uint8_t* row_v = row_u + ((width + 63) & ~63) / 2;
 
     for (y = 0; y < height; ++y) {
       ARGBToUVRow(src_argb, 0, row_u, row_v, width);
@@ -575,19 +629,23 @@
 
 // Convert ARGB to UYVY.
 LIBYUV_API
-int ARGBToUYVY(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_uyvy, int dst_stride_uyvy,
-               int width, int height) {
+int ARGBToUYVY(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_uyvy,
+               int dst_stride_uyvy,
+               int width,
+               int height) {
   int y;
-  void (*ARGBToUVRow)(const uint8* src_argb, int src_stride_argb,
-      uint8* dst_u, uint8* dst_v, int width) = ARGBToUVRow_C;
-  void (*ARGBToYRow)(const uint8* src_argb, uint8* dst_y, int width) =
+  void (*ARGBToUVRow)(const uint8_t* src_argb, int src_stride_argb,
+                      uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ARGBToUVRow_C;
+  void (*ARGBToYRow)(const uint8_t* src_argb, uint8_t* dst_y, int width) =
       ARGBToYRow_C;
-  void (*I422ToUYVYRow)(const uint8* src_y, const uint8* src_u,
-      const uint8* src_v, uint8* dst_uyvy, int width) = I422ToUYVYRow_C;
+  void (*I422ToUYVYRow)(const uint8_t* src_y, const uint8_t* src_u,
+                        const uint8_t* src_v, uint8_t* dst_uyvy, int width) =
+      I422ToUYVYRow_C;
 
-  if (!src_argb || !dst_uyvy ||
-      width <= 0 || height == 0) {
+  if (!src_argb || !dst_uyvy || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -597,8 +655,7 @@
     dst_stride_uyvy = -dst_stride_uyvy;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_uyvy == width * 2) {
+  if (src_stride_argb == width * 4 && dst_stride_uyvy == width * 2) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_uyvy = 0;
@@ -639,6 +696,22 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToYRow = ARGBToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToYRow = ARGBToYRow_MSA;
+    }
+  }
+#endif
+#if defined(HAS_ARGBTOUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToUVRow = ARGBToUVRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      ARGBToUVRow = ARGBToUVRow_MSA;
+    }
+  }
+#endif
 #if defined(HAS_I422TOUYVYROW_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
     I422ToUYVYRow = I422ToUYVYRow_Any_SSE2;
@@ -647,6 +720,14 @@
     }
   }
 #endif
+#if defined(HAS_I422TOUYVYROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    I422ToUYVYRow = I422ToUYVYRow_Any_AVX2;
+    if (IS_ALIGNED(width, 32)) {
+      I422ToUYVYRow = I422ToUYVYRow_AVX2;
+    }
+  }
+#endif
 #if defined(HAS_I422TOUYVYROW_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     I422ToUYVYRow = I422ToUYVYRow_Any_NEON;
@@ -655,12 +736,20 @@
     }
   }
 #endif
+#if defined(HAS_I422TOUYVYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToUYVYRow = I422ToUYVYRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      I422ToUYVYRow = I422ToUYVYRow_MSA;
+    }
+  }
+#endif
 
   {
     // Allocate a rows of yuv.
     align_buffer_64(row_y, ((width + 63) & ~63) * 2);
-    uint8* row_u = row_y + ((width + 63) & ~63);
-    uint8* row_v = row_u + ((width + 63) & ~63) / 2;
+    uint8_t* row_u = row_y + ((width + 63) & ~63);
+    uint8_t* row_v = row_u + ((width + 63) & ~63) / 2;
 
     for (y = 0; y < height; ++y) {
       ARGBToUVRow(src_argb, 0, row_u, row_v, width);
@@ -677,11 +766,14 @@
 
 // Convert ARGB to I400.
 LIBYUV_API
-int ARGBToI400(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_y, int dst_stride_y,
-               int width, int height) {
+int ARGBToI400(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               int width,
+               int height) {
   int y;
-  void (*ARGBToYRow)(const uint8* src_argb, uint8* dst_y, int width) =
+  void (*ARGBToYRow)(const uint8_t* src_argb, uint8_t* dst_y, int width) =
       ARGBToYRow_C;
   if (!src_argb || !dst_y || width <= 0 || height == 0) {
     return -1;
@@ -692,8 +784,7 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_y == width) {
+  if (src_stride_argb == width * 4 && dst_stride_y == width) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_y = 0;
@@ -722,6 +813,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToYRow = ARGBToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToYRow = ARGBToYRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     ARGBToYRow(src_argb, dst_y, width);
@@ -732,28 +831,31 @@
 }
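
The MSA and AVX2 hunks above all follow libyuv's runtime-dispatch idiom: each row-function pointer starts at the portable `_C` implementation, is upgraded to an `_Any_` SIMD variant (which vectorizes the bulk of the row and handles the remainder scalar), and is upgraded again to the fully vectorized variant only when the width meets the alignment requirement. A minimal sketch of the idiom, with hypothetical flag and row-function names (the real `TestCpuFlag`/`IS_ALIGNED` live in libyuv's `cpu_id.h`/`basic_types`):

```c
#include <assert.h>
#include <stddef.h>

#define IS_ALIGNED(p, a) (!((ptrdiff_t)(p) & ((a)-1)))

/* Hypothetical stand-ins for libyuv's row functions and CPU probe. */
static void RowFn_C(const unsigned char* s, unsigned char* d, int w) {
  for (int i = 0; i < w; ++i) d[i] = s[i]; /* portable scalar copy */
}
static void RowFn_Any_SIMD(const unsigned char* s, unsigned char* d, int w) {
  /* would run the SIMD body, then a scalar tail for the leftover pixels */
  RowFn_C(s, d, w);
}
static void RowFn_SIMD(const unsigned char* s, unsigned char* d, int w) {
  /* full-speed path; caller guarantees w is a multiple of 16 */
  for (int i = 0; i < w; ++i) d[i] = s[i];
}
static int TestCpuFlag(int flag) { (void)flag; return 1; } /* pretend present */
enum { kCpuHasSIMD = 1 };

typedef void (*RowFn)(const unsigned char*, unsigned char*, int);

static RowFn PickRowFn(int width) {
  RowFn fn = RowFn_C;            /* 1. portable fallback */
  if (TestCpuFlag(kCpuHasSIMD)) {
    fn = RowFn_Any_SIMD;         /* 2. SIMD + scalar tail, any width */
    if (IS_ALIGNED(width, 16)) {
      fn = RowFn_SIMD;           /* 3. aligned fast path */
    }
  }
  return fn;
}
```

This three-step ladder is why every new `#if defined(HAS_...)` block in the patch has the same shape: assign the `_Any_` variant first, then conditionally the aligned one.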
 
 // Shuffle table for converting ARGB to RGBA.
-static uvec8 kShuffleMaskARGBToRGBA = {
-  3u, 0u, 1u, 2u, 7u, 4u, 5u, 6u, 11u, 8u, 9u, 10u, 15u, 12u, 13u, 14u
-};
+static const uvec8 kShuffleMaskARGBToRGBA = {
+    3u, 0u, 1u, 2u, 7u, 4u, 5u, 6u, 11u, 8u, 9u, 10u, 15u, 12u, 13u, 14u};
 
 // Convert ARGB to RGBA.
 LIBYUV_API
-int ARGBToRGBA(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_rgba, int dst_stride_rgba,
-               int width, int height) {
-  return ARGBShuffle(src_argb, src_stride_argb,
-                     dst_rgba, dst_stride_rgba,
-                     (const uint8*)(&kShuffleMaskARGBToRGBA),
-                     width, height);
+int ARGBToRGBA(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_rgba,
+               int dst_stride_rgba,
+               int width,
+               int height) {
+  return ARGBShuffle(src_argb, src_stride_argb, dst_rgba, dst_stride_rgba,
+                     (const uint8_t*)(&kShuffleMaskARGBToRGBA), width, height);
 }
 
 // Convert ARGB To RGB24.
 LIBYUV_API
-int ARGBToRGB24(const uint8* src_argb, int src_stride_argb,
-                uint8* dst_rgb24, int dst_stride_rgb24,
-                int width, int height) {
+int ARGBToRGB24(const uint8_t* src_argb,
+                int src_stride_argb,
+                uint8_t* dst_rgb24,
+                int dst_stride_rgb24,
+                int width,
+                int height) {
   int y;
-  void (*ARGBToRGB24Row)(const uint8* src_argb, uint8* dst_rgb, int width) =
+  void (*ARGBToRGB24Row)(const uint8_t* src_argb, uint8_t* dst_rgb, int width) =
       ARGBToRGB24Row_C;
   if (!src_argb || !dst_rgb24 || width <= 0 || height == 0) {
     return -1;
@@ -764,8 +866,7 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_rgb24 == width * 3) {
+  if (src_stride_argb == width * 4 && dst_stride_rgb24 == width * 3) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_rgb24 = 0;
@@ -778,6 +879,22 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTORGB24ROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    ARGBToRGB24Row = ARGBToRGB24Row_Any_AVX2;
+    if (IS_ALIGNED(width, 32)) {
+      ARGBToRGB24Row = ARGBToRGB24Row_AVX2;
+    }
+  }
+#endif
+#if defined(HAS_ARGBTORGB24ROW_AVX512VBMI)
+  if (TestCpuFlag(kCpuHasAVX512VBMI)) {
+    ARGBToRGB24Row = ARGBToRGB24Row_Any_AVX512VBMI;
+    if (IS_ALIGNED(width, 32)) {
+      ARGBToRGB24Row = ARGBToRGB24Row_AVX512VBMI;
+    }
+  }
+#endif
 #if defined(HAS_ARGBTORGB24ROW_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     ARGBToRGB24Row = ARGBToRGB24Row_Any_NEON;
@@ -786,6 +903,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTORGB24ROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToRGB24Row = ARGBToRGB24Row_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToRGB24Row = ARGBToRGB24Row_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     ARGBToRGB24Row(src_argb, dst_rgb24, width);
@@ -797,11 +922,14 @@
 
 // Convert ARGB To RAW.
 LIBYUV_API
-int ARGBToRAW(const uint8* src_argb, int src_stride_argb,
-              uint8* dst_raw, int dst_stride_raw,
-              int width, int height) {
+int ARGBToRAW(const uint8_t* src_argb,
+              int src_stride_argb,
+              uint8_t* dst_raw,
+              int dst_stride_raw,
+              int width,
+              int height) {
   int y;
-  void (*ARGBToRAWRow)(const uint8* src_argb, uint8* dst_rgb, int width) =
+  void (*ARGBToRAWRow)(const uint8_t* src_argb, uint8_t* dst_rgb, int width) =
       ARGBToRAWRow_C;
   if (!src_argb || !dst_raw || width <= 0 || height == 0) {
     return -1;
@@ -812,8 +940,7 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_raw == width * 3) {
+  if (src_stride_argb == width * 4 && dst_stride_raw == width * 3) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_raw = 0;
@@ -826,6 +953,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTORAWROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    ARGBToRAWRow = ARGBToRAWRow_Any_AVX2;
+    if (IS_ALIGNED(width, 32)) {
+      ARGBToRAWRow = ARGBToRAWRow_AVX2;
+    }
+  }
+#endif
 #if defined(HAS_ARGBTORAWROW_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     ARGBToRAWRow = ARGBToRAWRow_Any_NEON;
@@ -834,6 +969,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTORAWROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToRAWRow = ARGBToRAWRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToRAWRow = ARGBToRAWRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     ARGBToRAWRow(src_argb, dst_raw, width);
@@ -844,21 +987,23 @@
 }
 
 // Ordered 8x8 dither for 888 to 565.  Values from 0 to 7.
-static const uint8 kDither565_4x4[16] = {
-  0, 4, 1, 5,
-  6, 2, 7, 3,
-  1, 5, 0, 4,
-  7, 3, 6, 2,
+static const uint8_t kDither565_4x4[16] = {
+    0, 4, 1, 5, 6, 2, 7, 3, 1, 5, 0, 4, 7, 3, 6, 2,
 };
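
The reflowed `kDither565_4x4` table is a 4x4 ordered-dither matrix stored row-major: each pixel receives a position-dependent bias of 0..7 on each 8-bit channel before truncation to 5/6/5 bits, trading banding for high-frequency noise. The row functions receive one table row per scanline via `*(const uint32_t*)(dither4x4 + ((y & 3) << 2))`. A scalar sketch of the per-pixel math (the function name is hypothetical; the SIMD rows implement an equivalent clamp-add-truncate):

```c
#include <assert.h>
#include <stdint.h>

static const uint8_t kDither565_4x4[16] = {
    0, 4, 1, 5, 6, 2, 7, 3, 1, 5, 0, 4, 7, 3, 6, 2,
};

/* Add the dither value with saturation, then truncate 8-8-8 to 5-6-5. */
static uint16_t DitherTo565(uint8_t r, uint8_t g, uint8_t b, int x, int y) {
  int d = kDither565_4x4[((y & 3) << 2) | (x & 3)];
  int r8 = r + d; if (r8 > 255) r8 = 255;
  int g8 = g + d; if (g8 > 255) g8 = 255;
  int b8 = b + d; if (b8 > 255) b8 = 255;
  return (uint16_t)(((r8 >> 3) << 11) | ((g8 >> 2) << 5) | (b8 >> 3));
}
```

Because the bias varies with `(x & 3, y & 3)`, adjacent pixels round in different directions, which visually averages out to the missing low bits.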
 
 // Convert ARGB To RGB565 with 4x4 dither matrix (16 bytes).
 LIBYUV_API
-int ARGBToRGB565Dither(const uint8* src_argb, int src_stride_argb,
-                       uint8* dst_rgb565, int dst_stride_rgb565,
-                       const uint8* dither4x4, int width, int height) {
+int ARGBToRGB565Dither(const uint8_t* src_argb,
+                       int src_stride_argb,
+                       uint8_t* dst_rgb565,
+                       int dst_stride_rgb565,
+                       const uint8_t* dither4x4,
+                       int width,
+                       int height) {
   int y;
-  void (*ARGBToRGB565DitherRow)(const uint8* src_argb, uint8* dst_rgb,
-      const uint32 dither4, int width) = ARGBToRGB565DitherRow_C;
+  void (*ARGBToRGB565DitherRow)(const uint8_t* src_argb, uint8_t* dst_rgb,
+                                const uint32_t dither4, int width) =
+      ARGBToRGB565DitherRow_C;
   if (!src_argb || !dst_rgb565 || width <= 0 || height == 0) {
     return -1;
   }
@@ -894,9 +1039,19 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTORGB565DITHERROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToRGB565DitherRow = ARGBToRGB565DitherRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      ARGBToRGB565DitherRow = ARGBToRGB565DitherRow_MSA;
+    }
+  }
+#endif
+
   for (y = 0; y < height; ++y) {
     ARGBToRGB565DitherRow(src_argb, dst_rgb565,
-                          *(uint32*)(dither4x4 + ((y & 3) << 2)), width);
+                          *(const uint32_t*)(dither4x4 + ((y & 3) << 2)),
+                          width);
     src_argb += src_stride_argb;
     dst_rgb565 += dst_stride_rgb565;
   }
@@ -906,12 +1061,15 @@
 // Convert ARGB To RGB565.
 // TODO(fbarchard): Consider using dither function low level with zeros.
 LIBYUV_API
-int ARGBToRGB565(const uint8* src_argb, int src_stride_argb,
-                 uint8* dst_rgb565, int dst_stride_rgb565,
-                 int width, int height) {
+int ARGBToRGB565(const uint8_t* src_argb,
+                 int src_stride_argb,
+                 uint8_t* dst_rgb565,
+                 int dst_stride_rgb565,
+                 int width,
+                 int height) {
   int y;
-  void (*ARGBToRGB565Row)(const uint8* src_argb, uint8* dst_rgb, int width) =
-      ARGBToRGB565Row_C;
+  void (*ARGBToRGB565Row)(const uint8_t* src_argb, uint8_t* dst_rgb,
+                          int width) = ARGBToRGB565Row_C;
   if (!src_argb || !dst_rgb565 || width <= 0 || height == 0) {
     return -1;
   }
@@ -921,8 +1079,7 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_rgb565 == width * 2) {
+  if (src_stride_argb == width * 4 && dst_stride_rgb565 == width * 2) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_rgb565 = 0;
@@ -951,6 +1108,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTORGB565ROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToRGB565Row = ARGBToRGB565Row_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      ARGBToRGB565Row = ARGBToRGB565Row_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     ARGBToRGB565Row(src_argb, dst_rgb565, width);
@@ -962,12 +1127,15 @@
 
 // Convert ARGB To ARGB1555.
 LIBYUV_API
-int ARGBToARGB1555(const uint8* src_argb, int src_stride_argb,
-                   uint8* dst_argb1555, int dst_stride_argb1555,
-                   int width, int height) {
+int ARGBToARGB1555(const uint8_t* src_argb,
+                   int src_stride_argb,
+                   uint8_t* dst_argb1555,
+                   int dst_stride_argb1555,
+                   int width,
+                   int height) {
   int y;
-  void (*ARGBToARGB1555Row)(const uint8* src_argb, uint8* dst_rgb, int width) =
-      ARGBToARGB1555Row_C;
+  void (*ARGBToARGB1555Row)(const uint8_t* src_argb, uint8_t* dst_rgb,
+                            int width) = ARGBToARGB1555Row_C;
   if (!src_argb || !dst_argb1555 || width <= 0 || height == 0) {
     return -1;
   }
@@ -977,8 +1145,7 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_argb1555 == width * 2) {
+  if (src_stride_argb == width * 4 && dst_stride_argb1555 == width * 2) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_argb1555 = 0;
@@ -1007,6 +1174,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTOARGB1555ROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToARGB1555Row = ARGBToARGB1555Row_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      ARGBToARGB1555Row = ARGBToARGB1555Row_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     ARGBToARGB1555Row(src_argb, dst_argb1555, width);
@@ -1018,12 +1193,15 @@
 
 // Convert ARGB To ARGB4444.
 LIBYUV_API
-int ARGBToARGB4444(const uint8* src_argb, int src_stride_argb,
-                   uint8* dst_argb4444, int dst_stride_argb4444,
-                   int width, int height) {
+int ARGBToARGB4444(const uint8_t* src_argb,
+                   int src_stride_argb,
+                   uint8_t* dst_argb4444,
+                   int dst_stride_argb4444,
+                   int width,
+                   int height) {
   int y;
-  void (*ARGBToARGB4444Row)(const uint8* src_argb, uint8* dst_rgb, int width) =
-      ARGBToARGB4444Row_C;
+  void (*ARGBToARGB4444Row)(const uint8_t* src_argb, uint8_t* dst_rgb,
+                            int width) = ARGBToARGB4444Row_C;
   if (!src_argb || !dst_argb4444 || width <= 0 || height == 0) {
     return -1;
   }
@@ -1033,8 +1211,7 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_argb4444 == width * 2) {
+  if (src_stride_argb == width * 4 && dst_stride_argb4444 == width * 2) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_argb4444 = 0;
@@ -1063,6 +1240,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTOARGB4444ROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToARGB4444Row = ARGBToARGB4444Row_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      ARGBToARGB4444Row = ARGBToARGB4444Row_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     ARGBToARGB4444Row(src_argb, dst_argb4444, width);
@@ -1072,21 +1257,123 @@
   return 0;
 }
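
Every converter in this file carries the same "Coalesce rows" block: when both strides exactly equal the bytes-per-row, the image is contiguous in memory, so it can be processed as a single row of `width * height` pixels with the strides zeroed, running the inner loop once. A sketch of that transformation factored into a helper (the helper itself is illustrative, not part of libyuv):

```c
#include <assert.h>

/* Mirrors the "Coalesce rows" blocks above: if src and dst are both
   contiguous (stride == width * bytes-per-pixel), fold the whole image
   into one long row and zero the strides. */
static void CoalesceRows(int* width, int* height,
                         int* src_stride, int src_bpp,
                         int* dst_stride, int dst_bpp) {
  if (*src_stride == *width * src_bpp && *dst_stride == *width * dst_bpp) {
    *width *= *height;
    *height = 1;
    *src_stride = *dst_stride = 0;
  }
}
```

Zeroing the strides is safe precisely because the loop body then advances by `stride == 0`, i.e. never moves past the single coalesced row.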
 
+// Convert ABGR To AR30.
+LIBYUV_API
+int ABGRToAR30(const uint8_t* src_abgr,
+               int src_stride_abgr,
+               uint8_t* dst_ar30,
+               int dst_stride_ar30,
+               int width,
+               int height) {
+  int y;
+  void (*ABGRToAR30Row)(const uint8_t* src_abgr, uint8_t* dst_rgb, int width) =
+      ABGRToAR30Row_C;
+  if (!src_abgr || !dst_ar30 || width <= 0 || height == 0) {
+    return -1;
+  }
+  if (height < 0) {
+    height = -height;
+    src_abgr = src_abgr + (height - 1) * src_stride_abgr;
+    src_stride_abgr = -src_stride_abgr;
+  }
+  // Coalesce rows.
+  if (src_stride_abgr == width * 4 && dst_stride_ar30 == width * 4) {
+    width *= height;
+    height = 1;
+    src_stride_abgr = dst_stride_ar30 = 0;
+  }
+#if defined(HAS_ABGRTOAR30ROW_SSSE3)
+  if (TestCpuFlag(kCpuHasSSSE3)) {
+    ABGRToAR30Row = ABGRToAR30Row_Any_SSSE3;
+    if (IS_ALIGNED(width, 4)) {
+      ABGRToAR30Row = ABGRToAR30Row_SSSE3;
+    }
+  }
+#endif
+#if defined(HAS_ABGRTOAR30ROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    ABGRToAR30Row = ABGRToAR30Row_Any_AVX2;
+    if (IS_ALIGNED(width, 8)) {
+      ABGRToAR30Row = ABGRToAR30Row_AVX2;
+    }
+  }
+#endif
+  for (y = 0; y < height; ++y) {
+    ABGRToAR30Row(src_abgr, dst_ar30, width);
+    src_abgr += src_stride_abgr;
+    dst_ar30 += dst_stride_ar30;
+  }
+  return 0;
+}
+
+// Convert ARGB To AR30.
+LIBYUV_API
+int ARGBToAR30(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_ar30,
+               int dst_stride_ar30,
+               int width,
+               int height) {
+  int y;
+  void (*ARGBToAR30Row)(const uint8_t* src_argb, uint8_t* dst_rgb, int width) =
+      ARGBToAR30Row_C;
+  if (!src_argb || !dst_ar30 || width <= 0 || height == 0) {
+    return -1;
+  }
+  if (height < 0) {
+    height = -height;
+    src_argb = src_argb + (height - 1) * src_stride_argb;
+    src_stride_argb = -src_stride_argb;
+  }
+  // Coalesce rows.
+  if (src_stride_argb == width * 4 && dst_stride_ar30 == width * 4) {
+    width *= height;
+    height = 1;
+    src_stride_argb = dst_stride_ar30 = 0;
+  }
+#if defined(HAS_ARGBTOAR30ROW_SSSE3)
+  if (TestCpuFlag(kCpuHasSSSE3)) {
+    ARGBToAR30Row = ARGBToAR30Row_Any_SSSE3;
+    if (IS_ALIGNED(width, 4)) {
+      ARGBToAR30Row = ARGBToAR30Row_SSSE3;
+    }
+  }
+#endif
+#if defined(HAS_ARGBTOAR30ROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    ARGBToAR30Row = ARGBToAR30Row_Any_AVX2;
+    if (IS_ALIGNED(width, 8)) {
+      ARGBToAR30Row = ARGBToAR30Row_AVX2;
+    }
+  }
+#endif
+  for (y = 0; y < height; ++y) {
+    ARGBToAR30Row(src_argb, dst_ar30, width);
+    src_argb += src_stride_argb;
+    dst_ar30 += dst_stride_ar30;
+  }
+  return 0;
+}
+
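
The new `ABGRToAR30`/`ARGBToAR30` converters target the 2:10:10:10 format: assuming the usual AR30 layout (blue in the low 10 bits, then green, red, and 2 bits of alpha on top), each 8-bit channel is widened to 10 bits by replicating its top two bits, so 0x00 maps to 0x000 and 0xFF to 0x3FF. A scalar sketch of what the SSSE3/AVX2 rows compute per pixel (helper names hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Widen an 8-bit channel to 10 bits by bit replication. */
static uint32_t To10(uint32_t v8) { return (v8 << 2) | (v8 >> 6); }

/* Pack one pixel's channels into a 2:10:10:10 AR30 word,
   blue in the least-significant bits, 2-bit alpha on top. */
static uint32_t PixelToAR30(uint8_t b, uint8_t g, uint8_t r, uint8_t a) {
  return To10(b) | (To10(g) << 10) | (To10(r) << 20) |
         ((uint32_t)(a >> 6) << 30);
}
```

Bit replication (rather than a plain `<< 2`) keeps full-scale values at full scale: without it, 0xFF would widen to 0x3FC and white would no longer be the brightest representable value.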
 // Convert ARGB to J420. (JPeg full range I420).
 LIBYUV_API
-int ARGBToJ420(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_yj, int dst_stride_yj,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int ARGBToJ420(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_yj,
+               int dst_stride_yj,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   int y;
-  void (*ARGBToUVJRow)(const uint8* src_argb0, int src_stride_argb,
-                       uint8* dst_u, uint8* dst_v, int width) = ARGBToUVJRow_C;
-  void (*ARGBToYJRow)(const uint8* src_argb, uint8* dst_yj, int width) =
+  void (*ARGBToUVJRow)(const uint8_t* src_argb0, int src_stride_argb,
+                       uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ARGBToUVJRow_C;
+  void (*ARGBToYJRow)(const uint8_t* src_argb, uint8_t* dst_yj, int width) =
       ARGBToYJRow_C;
-  if (!src_argb ||
-      !dst_yj || !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_argb || !dst_yj || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1129,6 +1416,22 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTOYJROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToYJRow = ARGBToYJRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToYJRow = ARGBToYJRow_MSA;
+    }
+  }
+#endif
+#if defined(HAS_ARGBTOUVJROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToUVJRow = ARGBToUVJRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      ARGBToUVJRow = ARGBToUVJRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height - 1; y += 2) {
     ARGBToUVJRow(src_argb, src_stride_argb, dst_u, dst_v, width);
@@ -1148,19 +1451,23 @@
 
 // Convert ARGB to J422. (JPeg full range I422).
 LIBYUV_API
-int ARGBToJ422(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_yj, int dst_stride_yj,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int ARGBToJ422(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_yj,
+               int dst_stride_yj,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   int y;
-  void (*ARGBToUVJRow)(const uint8* src_argb0, int src_stride_argb,
-                       uint8* dst_u, uint8* dst_v, int width) = ARGBToUVJRow_C;
-  void (*ARGBToYJRow)(const uint8* src_argb, uint8* dst_yj, int width) =
+  void (*ARGBToUVJRow)(const uint8_t* src_argb0, int src_stride_argb,
+                       uint8_t* dst_u, uint8_t* dst_v, int width) =
+      ARGBToUVJRow_C;
+  void (*ARGBToYJRow)(const uint8_t* src_argb, uint8_t* dst_yj, int width) =
       ARGBToYJRow_C;
-  if (!src_argb ||
-      !dst_yj || !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_argb || !dst_yj || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1170,10 +1477,8 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_yj == width &&
-      dst_stride_u * 2 == width &&
-      dst_stride_v * 2 == width) {
+  if (src_stride_argb == width * 4 && dst_stride_yj == width &&
+      dst_stride_u * 2 == width && dst_stride_v * 2 == width) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_yj = dst_stride_u = dst_stride_v = 0;
@@ -1212,6 +1517,22 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTOYJROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToYJRow = ARGBToYJRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToYJRow = ARGBToYJRow_MSA;
+    }
+  }
+#endif
+#if defined(HAS_ARGBTOUVJROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToUVJRow = ARGBToUVJRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      ARGBToUVJRow = ARGBToUVJRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     ARGBToUVJRow(src_argb, 0, dst_u, dst_v, width);
@@ -1226,11 +1547,14 @@
 
 // Convert ARGB to J400.
 LIBYUV_API
-int ARGBToJ400(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_yj, int dst_stride_yj,
-               int width, int height) {
+int ARGBToJ400(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_yj,
+               int dst_stride_yj,
+               int width,
+               int height) {
   int y;
-  void (*ARGBToYJRow)(const uint8* src_argb, uint8* dst_yj, int width) =
+  void (*ARGBToYJRow)(const uint8_t* src_argb, uint8_t* dst_yj, int width) =
       ARGBToYJRow_C;
   if (!src_argb || !dst_yj || width <= 0 || height == 0) {
     return -1;
@@ -1241,8 +1565,7 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_yj == width) {
+  if (src_stride_argb == width * 4 && dst_stride_yj == width) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_yj = 0;
@@ -1271,6 +1594,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTOYJROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToYJRow = ARGBToYJRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToYJRow = ARGBToYJRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     ARGBToYJRow(src_argb, dst_yj, width);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_jpeg.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_jpeg.cc
index 90f550a..ae3cc18 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_jpeg.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_jpeg.cc
@@ -22,28 +22,24 @@
 
 #ifdef HAVE_JPEG
 struct I420Buffers {
-  uint8* y;
+  uint8_t* y;
   int y_stride;
-  uint8* u;
+  uint8_t* u;
   int u_stride;
-  uint8* v;
+  uint8_t* v;
   int v_stride;
   int w;
   int h;
 };
 
 static void JpegCopyI420(void* opaque,
-                         const uint8* const* data,
+                         const uint8_t* const* data,
                          const int* strides,
                          int rows) {
   I420Buffers* dest = (I420Buffers*)(opaque);
-  I420Copy(data[0], strides[0],
-           data[1], strides[1],
-           data[2], strides[2],
-           dest->y, dest->y_stride,
-           dest->u, dest->u_stride,
-           dest->v, dest->v_stride,
-           dest->w, rows);
+  I420Copy(data[0], strides[0], data[1], strides[1], data[2], strides[2],
+           dest->y, dest->y_stride, dest->u, dest->u_stride, dest->v,
+           dest->v_stride, dest->w, rows);
   dest->y += rows * dest->y_stride;
   dest->u += ((rows + 1) >> 1) * dest->u_stride;
   dest->v += ((rows + 1) >> 1) * dest->v_stride;
@@ -51,17 +47,13 @@
 }
 
 static void JpegI422ToI420(void* opaque,
-                           const uint8* const* data,
+                           const uint8_t* const* data,
                            const int* strides,
                            int rows) {
   I420Buffers* dest = (I420Buffers*)(opaque);
-  I422ToI420(data[0], strides[0],
-             data[1], strides[1],
-             data[2], strides[2],
-             dest->y, dest->y_stride,
-             dest->u, dest->u_stride,
-             dest->v, dest->v_stride,
-             dest->w, rows);
+  I422ToI420(data[0], strides[0], data[1], strides[1], data[2], strides[2],
+             dest->y, dest->y_stride, dest->u, dest->u_stride, dest->v,
+             dest->v_stride, dest->w, rows);
   dest->y += rows * dest->y_stride;
   dest->u += ((rows + 1) >> 1) * dest->u_stride;
   dest->v += ((rows + 1) >> 1) * dest->v_stride;
@@ -69,35 +61,13 @@
 }
 
 static void JpegI444ToI420(void* opaque,
-                           const uint8* const* data,
+                           const uint8_t* const* data,
                            const int* strides,
                            int rows) {
   I420Buffers* dest = (I420Buffers*)(opaque);
-  I444ToI420(data[0], strides[0],
-             data[1], strides[1],
-             data[2], strides[2],
-             dest->y, dest->y_stride,
-             dest->u, dest->u_stride,
-             dest->v, dest->v_stride,
-             dest->w, rows);
-  dest->y += rows * dest->y_stride;
-  dest->u += ((rows + 1) >> 1) * dest->u_stride;
-  dest->v += ((rows + 1) >> 1) * dest->v_stride;
-  dest->h -= rows;
-}
-
-static void JpegI411ToI420(void* opaque,
-                           const uint8* const* data,
-                           const int* strides,
-                           int rows) {
-  I420Buffers* dest = (I420Buffers*)(opaque);
-  I411ToI420(data[0], strides[0],
-             data[1], strides[1],
-             data[2], strides[2],
-             dest->y, dest->y_stride,
-             dest->u, dest->u_stride,
-             dest->v, dest->v_stride,
-             dest->w, rows);
+  I444ToI420(data[0], strides[0], data[1], strides[1], data[2], strides[2],
+             dest->y, dest->y_stride, dest->u, dest->u_stride, dest->v,
+             dest->v_stride, dest->w, rows);
   dest->y += rows * dest->y_stride;
   dest->u += ((rows + 1) >> 1) * dest->u_stride;
   dest->v += ((rows + 1) >> 1) * dest->v_stride;
@@ -105,15 +75,12 @@
 }
 
 static void JpegI400ToI420(void* opaque,
-                           const uint8* const* data,
+                           const uint8_t* const* data,
                            const int* strides,
                            int rows) {
   I420Buffers* dest = (I420Buffers*)(opaque);
-  I400ToI420(data[0], strides[0],
-             dest->y, dest->y_stride,
-             dest->u, dest->u_stride,
-             dest->v, dest->v_stride,
-             dest->w, rows);
+  I400ToI420(data[0], strides[0], dest->y, dest->y_stride, dest->u,
+             dest->u_stride, dest->v, dest->v_stride, dest->w, rows);
   dest->y += rows * dest->y_stride;
   dest->u += ((rows + 1) >> 1) * dest->u_stride;
   dest->v += ((rows + 1) >> 1) * dest->v_stride;
@@ -122,8 +89,10 @@
 
 // Query size of MJPG in pixels.
 LIBYUV_API
-int MJPGSize(const uint8* sample, size_t sample_size,
-             int* width, int* height) {
+int MJPGSize(const uint8_t* sample,
+             size_t sample_size,
+             int* width,
+             int* height) {
   MJpegDecoder mjpeg_decoder;
   LIBYUV_BOOL ret = mjpeg_decoder.LoadFrame(sample, sample_size);
   if (ret) {
@@ -135,15 +104,21 @@
 }
 
 // MJPG (Motion JPeg) to I420
-// TODO(fbarchard): review w and h requirement. dw and dh may be enough.
+// TODO(fbarchard): review src_width and src_height requirement. dst_width and
+// dst_height may be enough.
 LIBYUV_API
-int MJPGToI420(const uint8* sample,
+int MJPGToI420(const uint8_t* sample,
                size_t sample_size,
-               uint8* y, int y_stride,
-               uint8* u, int u_stride,
-               uint8* v, int v_stride,
-               int w, int h,
-               int dw, int dh) {
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int src_width,
+               int src_height,
+               int dst_width,
+               int dst_height) {
   if (sample_size == kUnknownDataSize) {
     // ERROR: MJPEG frame size unknown
     return -1;
@@ -152,17 +127,17 @@
   // TODO(fbarchard): Port MJpeg to C.
   MJpegDecoder mjpeg_decoder;
   LIBYUV_BOOL ret = mjpeg_decoder.LoadFrame(sample, sample_size);
-  if (ret && (mjpeg_decoder.GetWidth() != w ||
-              mjpeg_decoder.GetHeight() != h)) {
+  if (ret && (mjpeg_decoder.GetWidth() != src_width ||
+              mjpeg_decoder.GetHeight() != src_height)) {
     // ERROR: MJPEG frame has unexpected dimensions
     mjpeg_decoder.UnloadFrame();
     return 1;  // runtime failure
   }
   if (ret) {
-    I420Buffers bufs = { y, y_stride, u, u_stride, v, v_stride, dw, dh };
+    I420Buffers bufs = {dst_y, dst_stride_y, dst_u,     dst_stride_u,
+                        dst_v, dst_stride_v, dst_width, dst_height};
     // YUV420
-    if (mjpeg_decoder.GetColorSpace() ==
-            MJpegDecoder::kColorSpaceYCbCr &&
+    if (mjpeg_decoder.GetColorSpace() == MJpegDecoder::kColorSpaceYCbCr &&
         mjpeg_decoder.GetNumComponents() == 3 &&
         mjpeg_decoder.GetVertSampFactor(0) == 2 &&
         mjpeg_decoder.GetHorizSampFactor(0) == 2 &&
@@ -170,8 +145,9 @@
         mjpeg_decoder.GetHorizSampFactor(1) == 1 &&
         mjpeg_decoder.GetVertSampFactor(2) == 1 &&
         mjpeg_decoder.GetHorizSampFactor(2) == 1) {
-      ret = mjpeg_decoder.DecodeToCallback(&JpegCopyI420, &bufs, dw, dh);
-    // YUV422
+      ret = mjpeg_decoder.DecodeToCallback(&JpegCopyI420, &bufs, dst_width,
+                                           dst_height);
+      // YUV422
     } else if (mjpeg_decoder.GetColorSpace() ==
                    MJpegDecoder::kColorSpaceYCbCr &&
                mjpeg_decoder.GetNumComponents() == 3 &&
@@ -181,8 +157,9 @@
                mjpeg_decoder.GetHorizSampFactor(1) == 1 &&
                mjpeg_decoder.GetVertSampFactor(2) == 1 &&
                mjpeg_decoder.GetHorizSampFactor(2) == 1) {
-      ret = mjpeg_decoder.DecodeToCallback(&JpegI422ToI420, &bufs, dw, dh);
-    // YUV444
+      ret = mjpeg_decoder.DecodeToCallback(&JpegI422ToI420, &bufs, dst_width,
+                                           dst_height);
+      // YUV444
     } else if (mjpeg_decoder.GetColorSpace() ==
                    MJpegDecoder::kColorSpaceYCbCr &&
                mjpeg_decoder.GetNumComponents() == 3 &&
@@ -192,28 +169,19 @@
                mjpeg_decoder.GetHorizSampFactor(1) == 1 &&
                mjpeg_decoder.GetVertSampFactor(2) == 1 &&
                mjpeg_decoder.GetHorizSampFactor(2) == 1) {
-      ret = mjpeg_decoder.DecodeToCallback(&JpegI444ToI420, &bufs, dw, dh);
-    // YUV411
-    } else if (mjpeg_decoder.GetColorSpace() ==
-                   MJpegDecoder::kColorSpaceYCbCr &&
-               mjpeg_decoder.GetNumComponents() == 3 &&
-               mjpeg_decoder.GetVertSampFactor(0) == 1 &&
-               mjpeg_decoder.GetHorizSampFactor(0) == 4 &&
-               mjpeg_decoder.GetVertSampFactor(1) == 1 &&
-               mjpeg_decoder.GetHorizSampFactor(1) == 1 &&
-               mjpeg_decoder.GetVertSampFactor(2) == 1 &&
-               mjpeg_decoder.GetHorizSampFactor(2) == 1) {
-      ret = mjpeg_decoder.DecodeToCallback(&JpegI411ToI420, &bufs, dw, dh);
-    // YUV400
+      ret = mjpeg_decoder.DecodeToCallback(&JpegI444ToI420, &bufs, dst_width,
+                                           dst_height);
+      // YUV400
     } else if (mjpeg_decoder.GetColorSpace() ==
                    MJpegDecoder::kColorSpaceGrayscale &&
                mjpeg_decoder.GetNumComponents() == 1 &&
                mjpeg_decoder.GetVertSampFactor(0) == 1 &&
                mjpeg_decoder.GetHorizSampFactor(0) == 1) {
-      ret = mjpeg_decoder.DecodeToCallback(&JpegI400ToI420, &bufs, dw, dh);
+      ret = mjpeg_decoder.DecodeToCallback(&JpegI400ToI420, &bufs, dst_width,
+                                           dst_height);
     } else {
       // TODO(fbarchard): Implement conversion for any other colorspace/sample
-      // factors that occur in practice. 411 is supported by libjpeg
+      // factors that occur in practice.
       // ERROR: Unable to convert MJPEG frame because format is not supported
       mjpeg_decoder.UnloadFrame();
       return 1;
@@ -224,88 +192,67 @@
 
 #ifdef HAVE_JPEG
 struct ARGBBuffers {
-  uint8* argb;
+  uint8_t* argb;
   int argb_stride;
   int w;
   int h;
 };
 
 static void JpegI420ToARGB(void* opaque,
-                         const uint8* const* data,
-                         const int* strides,
-                         int rows) {
+                           const uint8_t* const* data,
+                           const int* strides,
+                           int rows) {
   ARGBBuffers* dest = (ARGBBuffers*)(opaque);
-  I420ToARGB(data[0], strides[0],
-             data[1], strides[1],
-             data[2], strides[2],
-             dest->argb, dest->argb_stride,
-             dest->w, rows);
+  I420ToARGB(data[0], strides[0], data[1], strides[1], data[2], strides[2],
+             dest->argb, dest->argb_stride, dest->w, rows);
   dest->argb += rows * dest->argb_stride;
   dest->h -= rows;
 }
 
 static void JpegI422ToARGB(void* opaque,
-                           const uint8* const* data,
+                           const uint8_t* const* data,
                            const int* strides,
                            int rows) {
   ARGBBuffers* dest = (ARGBBuffers*)(opaque);
-  I422ToARGB(data[0], strides[0],
-             data[1], strides[1],
-             data[2], strides[2],
-             dest->argb, dest->argb_stride,
-             dest->w, rows);
+  I422ToARGB(data[0], strides[0], data[1], strides[1], data[2], strides[2],
+             dest->argb, dest->argb_stride, dest->w, rows);
   dest->argb += rows * dest->argb_stride;
   dest->h -= rows;
 }
 
 static void JpegI444ToARGB(void* opaque,
-                           const uint8* const* data,
+                           const uint8_t* const* data,
                            const int* strides,
                            int rows) {
   ARGBBuffers* dest = (ARGBBuffers*)(opaque);
-  I444ToARGB(data[0], strides[0],
-             data[1], strides[1],
-             data[2], strides[2],
-             dest->argb, dest->argb_stride,
-             dest->w, rows);
-  dest->argb += rows * dest->argb_stride;
-  dest->h -= rows;
-}
-
-static void JpegI411ToARGB(void* opaque,
-                           const uint8* const* data,
-                           const int* strides,
-                           int rows) {
-  ARGBBuffers* dest = (ARGBBuffers*)(opaque);
-  I411ToARGB(data[0], strides[0],
-             data[1], strides[1],
-             data[2], strides[2],
-             dest->argb, dest->argb_stride,
-             dest->w, rows);
+  I444ToARGB(data[0], strides[0], data[1], strides[1], data[2], strides[2],
+             dest->argb, dest->argb_stride, dest->w, rows);
   dest->argb += rows * dest->argb_stride;
   dest->h -= rows;
 }
 
 static void JpegI400ToARGB(void* opaque,
-                           const uint8* const* data,
+                           const uint8_t* const* data,
                            const int* strides,
                            int rows) {
   ARGBBuffers* dest = (ARGBBuffers*)(opaque);
-  I400ToARGB(data[0], strides[0],
-             dest->argb, dest->argb_stride,
-             dest->w, rows);
+  I400ToARGB(data[0], strides[0], dest->argb, dest->argb_stride, dest->w, rows);
   dest->argb += rows * dest->argb_stride;
   dest->h -= rows;
 }
 
 // MJPG (Motion JPeg) to ARGB
-// TODO(fbarchard): review w and h requirement. dw and dh may be enough.
+// TODO(fbarchard): review src_width and src_height requirement. dst_width and
+// dst_height may be enough.
 LIBYUV_API
-int MJPGToARGB(const uint8* sample,
+int MJPGToARGB(const uint8_t* sample,
                size_t sample_size,
-               uint8* argb, int argb_stride,
-               int w, int h,
-               int dw, int dh) {
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int src_width,
+               int src_height,
+               int dst_width,
+               int dst_height) {
   if (sample_size == kUnknownDataSize) {
     // ERROR: MJPEG frame size unknown
     return -1;
@@ -314,17 +261,16 @@
   // TODO(fbarchard): Port MJpeg to C.
   MJpegDecoder mjpeg_decoder;
   LIBYUV_BOOL ret = mjpeg_decoder.LoadFrame(sample, sample_size);
-  if (ret && (mjpeg_decoder.GetWidth() != w ||
-              mjpeg_decoder.GetHeight() != h)) {
+  if (ret && (mjpeg_decoder.GetWidth() != src_width ||
+              mjpeg_decoder.GetHeight() != src_height)) {
     // ERROR: MJPEG frame has unexpected dimensions
     mjpeg_decoder.UnloadFrame();
     return 1;  // runtime failure
   }
   if (ret) {
-    ARGBBuffers bufs = { argb, argb_stride, dw, dh };
+    ARGBBuffers bufs = {dst_argb, dst_stride_argb, dst_width, dst_height};
     // YUV420
-    if (mjpeg_decoder.GetColorSpace() ==
-            MJpegDecoder::kColorSpaceYCbCr &&
+    if (mjpeg_decoder.GetColorSpace() == MJpegDecoder::kColorSpaceYCbCr &&
         mjpeg_decoder.GetNumComponents() == 3 &&
         mjpeg_decoder.GetVertSampFactor(0) == 2 &&
         mjpeg_decoder.GetHorizSampFactor(0) == 2 &&
@@ -332,8 +278,9 @@
         mjpeg_decoder.GetHorizSampFactor(1) == 1 &&
         mjpeg_decoder.GetVertSampFactor(2) == 1 &&
         mjpeg_decoder.GetHorizSampFactor(2) == 1) {
-      ret = mjpeg_decoder.DecodeToCallback(&JpegI420ToARGB, &bufs, dw, dh);
-    // YUV422
+      ret = mjpeg_decoder.DecodeToCallback(&JpegI420ToARGB, &bufs, dst_width,
+                                           dst_height);
+      // YUV422
     } else if (mjpeg_decoder.GetColorSpace() ==
                    MJpegDecoder::kColorSpaceYCbCr &&
                mjpeg_decoder.GetNumComponents() == 3 &&
@@ -343,8 +290,9 @@
                mjpeg_decoder.GetHorizSampFactor(1) == 1 &&
                mjpeg_decoder.GetVertSampFactor(2) == 1 &&
                mjpeg_decoder.GetHorizSampFactor(2) == 1) {
-      ret = mjpeg_decoder.DecodeToCallback(&JpegI422ToARGB, &bufs, dw, dh);
-    // YUV444
+      ret = mjpeg_decoder.DecodeToCallback(&JpegI422ToARGB, &bufs, dst_width,
+                                           dst_height);
+      // YUV444
     } else if (mjpeg_decoder.GetColorSpace() ==
                    MJpegDecoder::kColorSpaceYCbCr &&
                mjpeg_decoder.GetNumComponents() == 3 &&
@@ -354,28 +302,19 @@
                mjpeg_decoder.GetHorizSampFactor(1) == 1 &&
                mjpeg_decoder.GetVertSampFactor(2) == 1 &&
                mjpeg_decoder.GetHorizSampFactor(2) == 1) {
-      ret = mjpeg_decoder.DecodeToCallback(&JpegI444ToARGB, &bufs, dw, dh);
-    // YUV411
-    } else if (mjpeg_decoder.GetColorSpace() ==
-                   MJpegDecoder::kColorSpaceYCbCr &&
-               mjpeg_decoder.GetNumComponents() == 3 &&
-               mjpeg_decoder.GetVertSampFactor(0) == 1 &&
-               mjpeg_decoder.GetHorizSampFactor(0) == 4 &&
-               mjpeg_decoder.GetVertSampFactor(1) == 1 &&
-               mjpeg_decoder.GetHorizSampFactor(1) == 1 &&
-               mjpeg_decoder.GetVertSampFactor(2) == 1 &&
-               mjpeg_decoder.GetHorizSampFactor(2) == 1) {
-      ret = mjpeg_decoder.DecodeToCallback(&JpegI411ToARGB, &bufs, dw, dh);
-    // YUV400
+      ret = mjpeg_decoder.DecodeToCallback(&JpegI444ToARGB, &bufs, dst_width,
+                                           dst_height);
+      // YUV400
     } else if (mjpeg_decoder.GetColorSpace() ==
                    MJpegDecoder::kColorSpaceGrayscale &&
                mjpeg_decoder.GetNumComponents() == 1 &&
                mjpeg_decoder.GetVertSampFactor(0) == 1 &&
                mjpeg_decoder.GetHorizSampFactor(0) == 1) {
-      ret = mjpeg_decoder.DecodeToCallback(&JpegI400ToARGB, &bufs, dw, dh);
+      ret = mjpeg_decoder.DecodeToCallback(&JpegI400ToARGB, &bufs, dst_width,
+                                           dst_height);
     } else {
       // TODO(fbarchard): Implement conversion for any other colorspace/sample
-      // factors that occur in practice. 411 is supported by libjpeg
+      // factors that occur in practice.
       // ERROR: Unable to convert MJPEG frame because format is not supported
       mjpeg_decoder.UnloadFrame();
       return 1;
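The `Jpeg*ToI420` callbacks above all end with the same pointer bookkeeping: after a band of `rows` decoded luma rows, the Y plane advances by `rows` strides while the half-height U/V planes advance by `(rows + 1) >> 1` strides, i.e. half the rows rounded up. A minimal standalone sketch of that bookkeeping (the `I420Cursor` helper is hypothetical, not part of libyuv; it only mirrors the arithmetic in the callbacks):

```cpp
#include <cassert>

// Mirrors the plane-advance logic of the Jpeg*ToI420 callbacks: after
// converting `rows` luma rows of a band, Y moves down `rows` strides and
// the subsampled U/V planes move down ceil(rows / 2) strides each.
struct I420Cursor {
  int y_off, u_off, v_off;   // byte offsets into each destination plane
  int h;                     // rows remaining
  int y_stride, u_stride, v_stride;

  void Advance(int rows) {
    y_off += rows * y_stride;
    u_off += ((rows + 1) >> 1) * u_stride;  // round up for odd bands
    v_off += ((rows + 1) >> 1) * v_stride;
    h -= rows;
  }
};
```

The round-up matters only for odd band heights; for the even bands libjpeg normally emits (MCU-aligned), two 8-row bands of a 16-row frame advance U/V by exactly 8 half-rows total.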
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_to_argb.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_to_argb.cc
index aecdc80..6748452 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_to_argb.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_to_argb.cc
@@ -28,36 +28,50 @@
 // src_height is used to compute location of planes, and indicate inversion
 // sample_size is measured in bytes and is the size of the frame.
 //   With MJPEG it is the compressed size of the frame.
+
+// TODO(fbarchard): Add the following:
+// H010ToARGB
+// H420ToARGB
+// H422ToARGB
+// I010ToARGB
+// J400ToARGB
+// J422ToARGB
+// J444ToARGB
+
 LIBYUV_API
-int ConvertToARGB(const uint8* sample, size_t sample_size,
-                  uint8* crop_argb, int argb_stride,
-                  int crop_x, int crop_y,
-                  int src_width, int src_height,
-                  int crop_width, int crop_height,
+int ConvertToARGB(const uint8_t* sample,
+                  size_t sample_size,
+                  uint8_t* dst_argb,
+                  int dst_stride_argb,
+                  int crop_x,
+                  int crop_y,
+                  int src_width,
+                  int src_height,
+                  int crop_width,
+                  int crop_height,
                   enum RotationMode rotation,
-                  uint32 fourcc) {
-  uint32 format = CanonicalFourCC(fourcc);
+                  uint32_t fourcc) {
+  uint32_t format = CanonicalFourCC(fourcc);
   int aligned_src_width = (src_width + 1) & ~1;
-  const uint8* src;
-  const uint8* src_uv;
+  const uint8_t* src;
+  const uint8_t* src_uv;
   int abs_src_height = (src_height < 0) ? -src_height : src_height;
   int inv_crop_height = (crop_height < 0) ? -crop_height : crop_height;
   int r = 0;
 
   // One pass rotation is available for some formats. For the rest, convert
-  // to I420 (with optional vertical flipping) into a temporary I420 buffer,
-  // and then rotate the I420 to the final destination buffer.
-  // For in-place conversion, if destination crop_argb is same as source sample,
+  // to ARGB (with optional vertical flipping) into a temporary ARGB buffer,
+  // and then rotate the ARGB to the final destination buffer.
+  // For in-place conversion, if destination dst_argb is same as source sample,
   // also enable temporary buffer.
-  LIBYUV_BOOL need_buf = (rotation && format != FOURCC_ARGB) ||
-      crop_argb == sample;
-  uint8* dest_argb = crop_argb;
-  int dest_argb_stride = argb_stride;
-  uint8* rotate_buffer = NULL;
+  LIBYUV_BOOL need_buf =
+      (rotation && format != FOURCC_ARGB) || dst_argb == sample;
+  uint8_t* dest_argb = dst_argb;
+  int dest_dst_stride_argb = dst_stride_argb;
+  uint8_t* rotate_buffer = NULL;
   int abs_crop_height = (crop_height < 0) ? -crop_height : crop_height;
 
-  if (crop_argb == NULL || sample == NULL ||
-      src_width <= 0 || crop_width <= 0 ||
+  if (dst_argb == NULL || sample == NULL || src_width <= 0 || crop_width <= 0 ||
       src_height == 0 || crop_height == 0) {
     return -1;
   }
@@ -67,187 +81,174 @@
 
   if (need_buf) {
     int argb_size = crop_width * 4 * abs_crop_height;
-    rotate_buffer = (uint8*)malloc(argb_size);
+    rotate_buffer = (uint8_t*)malloc(argb_size); /* NOLINT */
     if (!rotate_buffer) {
       return 1;  // Out of memory runtime error.
     }
-    crop_argb = rotate_buffer;
-    argb_stride = crop_width * 4;
+    dst_argb = rotate_buffer;
+    dst_stride_argb = crop_width * 4;
   }
 
   switch (format) {
     // Single plane formats
     case FOURCC_YUY2:
       src = sample + (aligned_src_width * crop_y + crop_x) * 2;
-      r = YUY2ToARGB(src, aligned_src_width * 2,
-                     crop_argb, argb_stride,
+      r = YUY2ToARGB(src, aligned_src_width * 2, dst_argb, dst_stride_argb,
                      crop_width, inv_crop_height);
       break;
     case FOURCC_UYVY:
       src = sample + (aligned_src_width * crop_y + crop_x) * 2;
-      r = UYVYToARGB(src, aligned_src_width * 2,
-                     crop_argb, argb_stride,
+      r = UYVYToARGB(src, aligned_src_width * 2, dst_argb, dst_stride_argb,
                      crop_width, inv_crop_height);
       break;
     case FOURCC_24BG:
       src = sample + (src_width * crop_y + crop_x) * 3;
-      r = RGB24ToARGB(src, src_width * 3,
-                      crop_argb, argb_stride,
-                      crop_width, inv_crop_height);
+      r = RGB24ToARGB(src, src_width * 3, dst_argb, dst_stride_argb, crop_width,
+                      inv_crop_height);
       break;
     case FOURCC_RAW:
       src = sample + (src_width * crop_y + crop_x) * 3;
-      r = RAWToARGB(src, src_width * 3,
-                    crop_argb, argb_stride,
-                    crop_width, inv_crop_height);
+      r = RAWToARGB(src, src_width * 3, dst_argb, dst_stride_argb, crop_width,
+                    inv_crop_height);
       break;
     case FOURCC_ARGB:
-      src = sample + (src_width * crop_y + crop_x) * 4;
-      r = ARGBToARGB(src, src_width * 4,
-                     crop_argb, argb_stride,
-                     crop_width, inv_crop_height);
+      if (!need_buf && !rotation) {
+        src = sample + (src_width * crop_y + crop_x) * 4;
+        r = ARGBToARGB(src, src_width * 4, dst_argb, dst_stride_argb,
+                       crop_width, inv_crop_height);
+      }
       break;
     case FOURCC_BGRA:
       src = sample + (src_width * crop_y + crop_x) * 4;
-      r = BGRAToARGB(src, src_width * 4,
-                     crop_argb, argb_stride,
-                     crop_width, inv_crop_height);
+      r = BGRAToARGB(src, src_width * 4, dst_argb, dst_stride_argb, crop_width,
+                     inv_crop_height);
       break;
     case FOURCC_ABGR:
       src = sample + (src_width * crop_y + crop_x) * 4;
-      r = ABGRToARGB(src, src_width * 4,
-                     crop_argb, argb_stride,
-                     crop_width, inv_crop_height);
+      r = ABGRToARGB(src, src_width * 4, dst_argb, dst_stride_argb, crop_width,
+                     inv_crop_height);
       break;
     case FOURCC_RGBA:
       src = sample + (src_width * crop_y + crop_x) * 4;
-      r = RGBAToARGB(src, src_width * 4,
-                     crop_argb, argb_stride,
-                     crop_width, inv_crop_height);
+      r = RGBAToARGB(src, src_width * 4, dst_argb, dst_stride_argb, crop_width,
+                     inv_crop_height);
+      break;
+    case FOURCC_AR30:
+      src = sample + (src_width * crop_y + crop_x) * 4;
+      r = AR30ToARGB(src, src_width * 4, dst_argb, dst_stride_argb, crop_width,
+                     inv_crop_height);
+      break;
+    case FOURCC_AB30:
+      src = sample + (src_width * crop_y + crop_x) * 4;
+      r = AB30ToARGB(src, src_width * 4, dst_argb, dst_stride_argb, crop_width,
+                     inv_crop_height);
       break;
     case FOURCC_RGBP:
       src = sample + (src_width * crop_y + crop_x) * 2;
-      r = RGB565ToARGB(src, src_width * 2,
-                       crop_argb, argb_stride,
+      r = RGB565ToARGB(src, src_width * 2, dst_argb, dst_stride_argb,
                        crop_width, inv_crop_height);
       break;
     case FOURCC_RGBO:
       src = sample + (src_width * crop_y + crop_x) * 2;
-      r = ARGB1555ToARGB(src, src_width * 2,
-                         crop_argb, argb_stride,
+      r = ARGB1555ToARGB(src, src_width * 2, dst_argb, dst_stride_argb,
                          crop_width, inv_crop_height);
       break;
     case FOURCC_R444:
       src = sample + (src_width * crop_y + crop_x) * 2;
-      r = ARGB4444ToARGB(src, src_width * 2,
-                         crop_argb, argb_stride,
+      r = ARGB4444ToARGB(src, src_width * 2, dst_argb, dst_stride_argb,
                          crop_width, inv_crop_height);
       break;
     case FOURCC_I400:
       src = sample + src_width * crop_y + crop_x;
-      r = I400ToARGB(src, src_width,
-                     crop_argb, argb_stride,
-                     crop_width, inv_crop_height);
+      r = I400ToARGB(src, src_width, dst_argb, dst_stride_argb, crop_width,
+                     inv_crop_height);
       break;
 
     // Biplanar formats
     case FOURCC_NV12:
       src = sample + (src_width * crop_y + crop_x);
-      src_uv = sample + aligned_src_width * (src_height + crop_y / 2) + crop_x;
-      r = NV12ToARGB(src, src_width,
-                     src_uv, aligned_src_width,
-                     crop_argb, argb_stride,
-                     crop_width, inv_crop_height);
+      src_uv = sample + aligned_src_width * (abs_src_height + crop_y / 2) + crop_x;
+      r = NV12ToARGB(src, src_width, src_uv, aligned_src_width, dst_argb,
+                     dst_stride_argb, crop_width, inv_crop_height);
       break;
     case FOURCC_NV21:
       src = sample + (src_width * crop_y + crop_x);
-      src_uv = sample + aligned_src_width * (src_height + crop_y / 2) + crop_x;
+      src_uv = sample + aligned_src_width * (abs_src_height + crop_y / 2) + crop_x;
       // Call NV12 but with u and v parameters swapped.
-      r = NV21ToARGB(src, src_width,
-                     src_uv, aligned_src_width,
-                     crop_argb, argb_stride,
-                     crop_width, inv_crop_height);
+      r = NV21ToARGB(src, src_width, src_uv, aligned_src_width, dst_argb,
+                     dst_stride_argb, crop_width, inv_crop_height);
       break;
     case FOURCC_M420:
       src = sample + (src_width * crop_y) * 12 / 8 + crop_x;
-      r = M420ToARGB(src, src_width,
-                     crop_argb, argb_stride,
-                     crop_width, inv_crop_height);
+      r = M420ToARGB(src, src_width, dst_argb, dst_stride_argb, crop_width,
+                     inv_crop_height);
       break;
+
     // Triplanar formats
     case FOURCC_I420:
     case FOURCC_YV12: {
-      const uint8* src_y = sample + (src_width * crop_y + crop_x);
-      const uint8* src_u;
-      const uint8* src_v;
+      const uint8_t* src_y = sample + (src_width * crop_y + crop_x);
+      const uint8_t* src_u;
+      const uint8_t* src_v;
       int halfwidth = (src_width + 1) / 2;
       int halfheight = (abs_src_height + 1) / 2;
       if (format == FOURCC_YV12) {
         src_v = sample + src_width * abs_src_height +
-            (halfwidth * crop_y + crop_x) / 2;
+                (halfwidth * crop_y + crop_x) / 2;
         src_u = sample + src_width * abs_src_height +
-            halfwidth * (halfheight + crop_y / 2) + crop_x / 2;
+                halfwidth * (halfheight + crop_y / 2) + crop_x / 2;
       } else {
         src_u = sample + src_width * abs_src_height +
-            (halfwidth * crop_y + crop_x) / 2;
+                (halfwidth * crop_y + crop_x) / 2;
         src_v = sample + src_width * abs_src_height +
-            halfwidth * (halfheight + crop_y / 2) + crop_x / 2;
+                halfwidth * (halfheight + crop_y / 2) + crop_x / 2;
       }
-      r = I420ToARGB(src_y, src_width,
-                     src_u, halfwidth,
-                     src_v, halfwidth,
-                     crop_argb, argb_stride,
-                     crop_width, inv_crop_height);
+      r = I420ToARGB(src_y, src_width, src_u, halfwidth, src_v, halfwidth,
+                     dst_argb, dst_stride_argb, crop_width, inv_crop_height);
       break;
     }
 
     case FOURCC_J420: {
-      const uint8* src_y = sample + (src_width * crop_y + crop_x);
-      const uint8* src_u;
-      const uint8* src_v;
+      const uint8_t* src_y = sample + (src_width * crop_y + crop_x);
+      const uint8_t* src_u;
+      const uint8_t* src_v;
       int halfwidth = (src_width + 1) / 2;
       int halfheight = (abs_src_height + 1) / 2;
       src_u = sample + src_width * abs_src_height +
-          (halfwidth * crop_y + crop_x) / 2;
+              (halfwidth * crop_y + crop_x) / 2;
       src_v = sample + src_width * abs_src_height +
-          halfwidth * (halfheight + crop_y / 2) + crop_x / 2;
-      r = J420ToARGB(src_y, src_width,
-                     src_u, halfwidth,
-                     src_v, halfwidth,
-                     crop_argb, argb_stride,
-                     crop_width, inv_crop_height);
+              halfwidth * (halfheight + crop_y / 2) + crop_x / 2;
+      r = J420ToARGB(src_y, src_width, src_u, halfwidth, src_v, halfwidth,
+                     dst_argb, dst_stride_argb, crop_width, inv_crop_height);
       break;
     }
 
     case FOURCC_I422:
     case FOURCC_YV16: {
-      const uint8* src_y = sample + src_width * crop_y + crop_x;
-      const uint8* src_u;
-      const uint8* src_v;
+      const uint8_t* src_y = sample + src_width * crop_y + crop_x;
+      const uint8_t* src_u;
+      const uint8_t* src_v;
       int halfwidth = (src_width + 1) / 2;
       if (format == FOURCC_YV16) {
-        src_v = sample + src_width * abs_src_height +
-            halfwidth * crop_y + crop_x / 2;
+        src_v = sample + src_width * abs_src_height + halfwidth * crop_y +
+                crop_x / 2;
         src_u = sample + src_width * abs_src_height +
-            halfwidth * (abs_src_height + crop_y) + crop_x / 2;
+                halfwidth * (abs_src_height + crop_y) + crop_x / 2;
       } else {
-        src_u = sample + src_width * abs_src_height +
-            halfwidth * crop_y + crop_x / 2;
+        src_u = sample + src_width * abs_src_height + halfwidth * crop_y +
+                crop_x / 2;
         src_v = sample + src_width * abs_src_height +
-            halfwidth * (abs_src_height + crop_y) + crop_x / 2;
+                halfwidth * (abs_src_height + crop_y) + crop_x / 2;
       }
-      r = I422ToARGB(src_y, src_width,
-                     src_u, halfwidth,
-                     src_v, halfwidth,
-                     crop_argb, argb_stride,
-                     crop_width, inv_crop_height);
+      r = I422ToARGB(src_y, src_width, src_u, halfwidth, src_v, halfwidth,
+                     dst_argb, dst_stride_argb, crop_width, inv_crop_height);
       break;
     }
     case FOURCC_I444:
     case FOURCC_YV24: {
-      const uint8* src_y = sample + src_width * crop_y + crop_x;
-      const uint8* src_u;
-      const uint8* src_v;
+      const uint8_t* src_y = sample + src_width * crop_y + crop_x;
+      const uint8_t* src_u;
+      const uint8_t* src_v;
       if (format == FOURCC_YV24) {
         src_v = sample + src_width * (abs_src_height + crop_y) + crop_x;
         src_u = sample + src_width * (abs_src_height * 2 + crop_y) + crop_x;
@@ -255,32 +256,14 @@
         src_u = sample + src_width * (abs_src_height + crop_y) + crop_x;
         src_v = sample + src_width * (abs_src_height * 2 + crop_y) + crop_x;
       }
-      r = I444ToARGB(src_y, src_width,
-                     src_u, src_width,
-                     src_v, src_width,
-                     crop_argb, argb_stride,
-                     crop_width, inv_crop_height);
-      break;
-    }
-    case FOURCC_I411: {
-      int quarterwidth = (src_width + 3) / 4;
-      const uint8* src_y = sample + src_width * crop_y + crop_x;
-      const uint8* src_u = sample + src_width * abs_src_height +
-          quarterwidth * crop_y + crop_x / 4;
-      const uint8* src_v = sample + src_width * abs_src_height +
-          quarterwidth * (abs_src_height + crop_y) + crop_x / 4;
-      r = I411ToARGB(src_y, src_width,
-                     src_u, quarterwidth,
-                     src_v, quarterwidth,
-                     crop_argb, argb_stride,
-                     crop_width, inv_crop_height);
+      r = I444ToARGB(src_y, src_width, src_u, src_width, src_v, src_width,
+                     dst_argb, dst_stride_argb, crop_width, inv_crop_height);
       break;
     }
 #ifdef HAVE_JPEG
     case FOURCC_MJPG:
-      r = MJPGToARGB(sample, sample_size,
-                     crop_argb, argb_stride,
-                     src_width, abs_src_height, crop_width, inv_crop_height);
+      r = MJPGToARGB(sample, sample_size, dst_argb, dst_stride_argb, src_width,
+                     abs_src_height, crop_width, inv_crop_height);
       break;
 #endif
     default:
@@ -289,11 +272,14 @@
 
   if (need_buf) {
     if (!r) {
-      r = ARGBRotate(crop_argb, argb_stride,
-                     dest_argb, dest_argb_stride,
+      r = ARGBRotate(dst_argb, dst_stride_argb, dest_argb, dest_dst_stride_argb,
                      crop_width, abs_crop_height, rotation);
     }
     free(rotate_buffer);
+  } else if (rotation) {
+    src = sample + (src_width * crop_y + crop_x) * 4;
+    r = ARGBRotate(src, src_width * 4, dst_argb, dst_stride_argb, crop_width,
+                   inv_crop_height, rotation);
   }
 
   return r;
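Beyond the uint8 to uint8_t and clang-format churn, one substantive fix in this file is the NV12/NV21 chroma offset: it now uses `abs_src_height` instead of `src_height`, so a negative `src_height` (the convention for requesting a vertical flip) no longer computes a UV offset in front of the buffer. A self-contained sketch of the corrected expression (the `Nv12UvOffset` helper is hypothetical, written here only to show the arithmetic from the hunk above):

```cpp
#include <cassert>
#include <cstddef>

// Offset of the interleaved UV plane inside an NV12 sample, following the
// corrected expression in ConvertToARGB: the Y plane occupies
// abs(src_height) rows of aligned_src_width bytes, regardless of the sign
// used to request vertical flipping.
std::size_t Nv12UvOffset(int src_width, int src_height, int crop_x, int crop_y) {
  int aligned_src_width = (src_width + 1) & ~1;  // round width up to even
  int abs_src_height = (src_height < 0) ? -src_height : src_height;
  return static_cast<std::size_t>(aligned_src_width) *
             (abs_src_height + crop_y / 2) +
         crop_x;
}
```

With `src_height = -480` the old form, `src_height + crop_y / 2`, went negative; the corrected form yields the same offset as `+480`, which is where the UV plane actually sits in memory.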
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_to_i420.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_to_i420.cc
index e5f307c..df08309 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_to_i420.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/convert_to_i420.cc
@@ -25,251 +25,216 @@
 // sample_size is measured in bytes and is the size of the frame.
 //   With MJPEG it is the compressed size of the frame.
 LIBYUV_API
-int ConvertToI420(const uint8* sample,
+int ConvertToI420(const uint8_t* sample,
                   size_t sample_size,
-                  uint8* y, int y_stride,
-                  uint8* u, int u_stride,
-                  uint8* v, int v_stride,
-                  int crop_x, int crop_y,
-                  int src_width, int src_height,
-                  int crop_width, int crop_height,
+                  uint8_t* dst_y,
+                  int dst_stride_y,
+                  uint8_t* dst_u,
+                  int dst_stride_u,
+                  uint8_t* dst_v,
+                  int dst_stride_v,
+                  int crop_x,
+                  int crop_y,
+                  int src_width,
+                  int src_height,
+                  int crop_width,
+                  int crop_height,
                   enum RotationMode rotation,
-                  uint32 fourcc) {
-  uint32 format = CanonicalFourCC(fourcc);
+                  uint32_t fourcc) {
+  uint32_t format = CanonicalFourCC(fourcc);
   int aligned_src_width = (src_width + 1) & ~1;
-  const uint8* src;
-  const uint8* src_uv;
+  const uint8_t* src;
+  const uint8_t* src_uv;
   const int abs_src_height = (src_height < 0) ? -src_height : src_height;
   // TODO(nisse): Why allow crop_height < 0?
   const int abs_crop_height = (crop_height < 0) ? -crop_height : crop_height;
   int r = 0;
-  LIBYUV_BOOL need_buf = (rotation && format != FOURCC_I420 &&
-      format != FOURCC_NV12 && format != FOURCC_NV21 &&
-      format != FOURCC_YV12) || y == sample;
-  uint8* tmp_y = y;
-  uint8* tmp_u = u;
-  uint8* tmp_v = v;
-  int tmp_y_stride = y_stride;
-  int tmp_u_stride = u_stride;
-  int tmp_v_stride = v_stride;
-  uint8* rotate_buffer = NULL;
+  LIBYUV_BOOL need_buf =
+      (rotation && format != FOURCC_I420 && format != FOURCC_NV12 &&
+       format != FOURCC_NV21 && format != FOURCC_YV12) ||
+      dst_y == sample;
+  uint8_t* tmp_y = dst_y;
+  uint8_t* tmp_u = dst_u;
+  uint8_t* tmp_v = dst_v;
+  int tmp_y_stride = dst_stride_y;
+  int tmp_u_stride = dst_stride_u;
+  int tmp_v_stride = dst_stride_v;
+  uint8_t* rotate_buffer = NULL;
   const int inv_crop_height =
       (src_height < 0) ? -abs_crop_height : abs_crop_height;
 
-  if (!y || !u || !v || !sample ||
-      src_width <= 0 || crop_width <= 0  ||
-      src_height == 0 || crop_height == 0) {
+  if (!dst_y || !dst_u || !dst_v || !sample || src_width <= 0 ||
+      crop_width <= 0 || src_height == 0 || crop_height == 0) {
     return -1;
   }
 
   // One pass rotation is available for some formats. For the rest, convert
   // to I420 (with optional vertical flipping) into a temporary I420 buffer,
   // and then rotate the I420 to the final destination buffer.
-  // For in-place conversion, if destination y is same as source sample,
+  // For in-place conversion, if destination dst_y is same as source sample,
   // also enable temporary buffer.
   if (need_buf) {
     int y_size = crop_width * abs_crop_height;
     int uv_size = ((crop_width + 1) / 2) * ((abs_crop_height + 1) / 2);
-    rotate_buffer = (uint8*)malloc(y_size + uv_size * 2);
+    rotate_buffer = (uint8_t*)malloc(y_size + uv_size * 2); /* NOLINT */
     if (!rotate_buffer) {
       return 1;  // Out of memory runtime error.
     }
-    y = rotate_buffer;
-    u = y + y_size;
-    v = u + uv_size;
-    y_stride = crop_width;
-    u_stride = v_stride = ((crop_width + 1) / 2);
+    dst_y = rotate_buffer;
+    dst_u = dst_y + y_size;
+    dst_v = dst_u + uv_size;
+    dst_stride_y = crop_width;
+    dst_stride_u = dst_stride_v = ((crop_width + 1) / 2);
   }
 
   switch (format) {
     // Single plane formats
     case FOURCC_YUY2:
       src = sample + (aligned_src_width * crop_y + crop_x) * 2;
-      r = YUY2ToI420(src, aligned_src_width * 2,
-                     y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     crop_width, inv_crop_height);
+      r = YUY2ToI420(src, aligned_src_width * 2, dst_y, dst_stride_y, dst_u,
+                     dst_stride_u, dst_v, dst_stride_v, crop_width,
+                     inv_crop_height);
       break;
     case FOURCC_UYVY:
       src = sample + (aligned_src_width * crop_y + crop_x) * 2;
-      r = UYVYToI420(src, aligned_src_width * 2,
-                     y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     crop_width, inv_crop_height);
+      r = UYVYToI420(src, aligned_src_width * 2, dst_y, dst_stride_y, dst_u,
+                     dst_stride_u, dst_v, dst_stride_v, crop_width,
+                     inv_crop_height);
       break;
     case FOURCC_RGBP:
       src = sample + (src_width * crop_y + crop_x) * 2;
-      r = RGB565ToI420(src, src_width * 2,
-                       y, y_stride,
-                       u, u_stride,
-                       v, v_stride,
-                       crop_width, inv_crop_height);
+      r = RGB565ToI420(src, src_width * 2, dst_y, dst_stride_y, dst_u,
+                       dst_stride_u, dst_v, dst_stride_v, crop_width,
+                       inv_crop_height);
       break;
     case FOURCC_RGBO:
       src = sample + (src_width * crop_y + crop_x) * 2;
-      r = ARGB1555ToI420(src, src_width * 2,
-                         y, y_stride,
-                         u, u_stride,
-                         v, v_stride,
-                         crop_width, inv_crop_height);
+      r = ARGB1555ToI420(src, src_width * 2, dst_y, dst_stride_y, dst_u,
+                         dst_stride_u, dst_v, dst_stride_v, crop_width,
+                         inv_crop_height);
       break;
     case FOURCC_R444:
       src = sample + (src_width * crop_y + crop_x) * 2;
-      r = ARGB4444ToI420(src, src_width * 2,
-                         y, y_stride,
-                         u, u_stride,
-                         v, v_stride,
-                         crop_width, inv_crop_height);
+      r = ARGB4444ToI420(src, src_width * 2, dst_y, dst_stride_y, dst_u,
+                         dst_stride_u, dst_v, dst_stride_v, crop_width,
+                         inv_crop_height);
       break;
     case FOURCC_24BG:
       src = sample + (src_width * crop_y + crop_x) * 3;
-      r = RGB24ToI420(src, src_width * 3,
-                      y, y_stride,
-                      u, u_stride,
-                      v, v_stride,
-                      crop_width, inv_crop_height);
+      r = RGB24ToI420(src, src_width * 3, dst_y, dst_stride_y, dst_u,
+                      dst_stride_u, dst_v, dst_stride_v, crop_width,
+                      inv_crop_height);
       break;
     case FOURCC_RAW:
       src = sample + (src_width * crop_y + crop_x) * 3;
-      r = RAWToI420(src, src_width * 3,
-                    y, y_stride,
-                    u, u_stride,
-                    v, v_stride,
-                    crop_width, inv_crop_height);
+      r = RAWToI420(src, src_width * 3, dst_y, dst_stride_y, dst_u,
+                    dst_stride_u, dst_v, dst_stride_v, crop_width,
+                    inv_crop_height);
       break;
     case FOURCC_ARGB:
       src = sample + (src_width * crop_y + crop_x) * 4;
-      r = ARGBToI420(src, src_width * 4,
-                     y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     crop_width, inv_crop_height);
+      r = ARGBToI420(src, src_width * 4, dst_y, dst_stride_y, dst_u,
+                     dst_stride_u, dst_v, dst_stride_v, crop_width,
+                     inv_crop_height);
       break;
     case FOURCC_BGRA:
       src = sample + (src_width * crop_y + crop_x) * 4;
-      r = BGRAToI420(src, src_width * 4,
-                     y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     crop_width, inv_crop_height);
+      r = BGRAToI420(src, src_width * 4, dst_y, dst_stride_y, dst_u,
+                     dst_stride_u, dst_v, dst_stride_v, crop_width,
+                     inv_crop_height);
       break;
     case FOURCC_ABGR:
       src = sample + (src_width * crop_y + crop_x) * 4;
-      r = ABGRToI420(src, src_width * 4,
-                     y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     crop_width, inv_crop_height);
+      r = ABGRToI420(src, src_width * 4, dst_y, dst_stride_y, dst_u,
+                     dst_stride_u, dst_v, dst_stride_v, crop_width,
+                     inv_crop_height);
       break;
     case FOURCC_RGBA:
       src = sample + (src_width * crop_y + crop_x) * 4;
-      r = RGBAToI420(src, src_width * 4,
-                     y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     crop_width, inv_crop_height);
+      r = RGBAToI420(src, src_width * 4, dst_y, dst_stride_y, dst_u,
+                     dst_stride_u, dst_v, dst_stride_v, crop_width,
+                     inv_crop_height);
       break;
+    // TODO(fbarchard): Add AR30 and AB30
     case FOURCC_I400:
       src = sample + src_width * crop_y + crop_x;
-      r = I400ToI420(src, src_width,
-                     y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     crop_width, inv_crop_height);
+      r = I400ToI420(src, src_width, dst_y, dst_stride_y, dst_u, dst_stride_u,
+                     dst_v, dst_stride_v, crop_width, inv_crop_height);
       break;
     // Biplanar formats
     case FOURCC_NV12:
       src = sample + (src_width * crop_y + crop_x);
-      src_uv = sample + (src_width * src_height) +
-        ((crop_y / 2) * aligned_src_width) + ((crop_x / 2) * 2);
-      r = NV12ToI420Rotate(src, src_width,
-                           src_uv, aligned_src_width,
-                           y, y_stride,
-                           u, u_stride,
-                           v, v_stride,
-                           crop_width, inv_crop_height, rotation);
+      src_uv = sample + (src_width * abs_src_height) +
+               ((crop_y / 2) * aligned_src_width) + ((crop_x / 2) * 2);
+      r = NV12ToI420Rotate(src, src_width, src_uv, aligned_src_width, dst_y,
+                           dst_stride_y, dst_u, dst_stride_u, dst_v,
+                           dst_stride_v, crop_width, inv_crop_height, rotation);
       break;
     case FOURCC_NV21:
       src = sample + (src_width * crop_y + crop_x);
-      src_uv = sample + (src_width * src_height) +
-        ((crop_y / 2) * aligned_src_width) + ((crop_x / 2) * 2);
-      // Call NV12 but with u and v parameters swapped.
-      r = NV12ToI420Rotate(src, src_width,
-                           src_uv, aligned_src_width,
-                           y, y_stride,
-                           v, v_stride,
-                           u, u_stride,
-                           crop_width, inv_crop_height, rotation);
+      src_uv = sample + (src_width * abs_src_height) +
+               ((crop_y / 2) * aligned_src_width) + ((crop_x / 2) * 2);
+      // Call NV12 but with dst_u and dst_v parameters swapped.
+      r = NV12ToI420Rotate(src, src_width, src_uv, aligned_src_width, dst_y,
+                           dst_stride_y, dst_v, dst_stride_v, dst_u,
+                           dst_stride_u, crop_width, inv_crop_height, rotation);
       break;
     case FOURCC_M420:
       src = sample + (src_width * crop_y) * 12 / 8 + crop_x;
-      r = M420ToI420(src, src_width,
-                     y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     crop_width, inv_crop_height);
+      r = M420ToI420(src, src_width, dst_y, dst_stride_y, dst_u, dst_stride_u,
+                     dst_v, dst_stride_v, crop_width, inv_crop_height);
       break;
     // Triplanar formats
     case FOURCC_I420:
     case FOURCC_YV12: {
-      const uint8* src_y = sample + (src_width * crop_y + crop_x);
-      const uint8* src_u;
-      const uint8* src_v;
+      const uint8_t* src_y = sample + (src_width * crop_y + crop_x);
+      const uint8_t* src_u;
+      const uint8_t* src_v;
       int halfwidth = (src_width + 1) / 2;
       int halfheight = (abs_src_height + 1) / 2;
       if (format == FOURCC_YV12) {
         src_v = sample + src_width * abs_src_height +
-            (halfwidth * crop_y + crop_x) / 2;
+                (halfwidth * crop_y + crop_x) / 2;
         src_u = sample + src_width * abs_src_height +
-            halfwidth * (halfheight + crop_y / 2) + crop_x / 2;
+                halfwidth * (halfheight + crop_y / 2) + crop_x / 2;
       } else {
         src_u = sample + src_width * abs_src_height +
-            (halfwidth * crop_y + crop_x) / 2;
+                (halfwidth * crop_y + crop_x) / 2;
         src_v = sample + src_width * abs_src_height +
-            halfwidth * (halfheight + crop_y / 2) + crop_x / 2;
+                halfwidth * (halfheight + crop_y / 2) + crop_x / 2;
       }
-      r = I420Rotate(src_y, src_width,
-                     src_u, halfwidth,
-                     src_v, halfwidth,
-                     y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     crop_width, inv_crop_height, rotation);
+      r = I420Rotate(src_y, src_width, src_u, halfwidth, src_v, halfwidth,
+                     dst_y, dst_stride_y, dst_u, dst_stride_u, dst_v,
+                     dst_stride_v, crop_width, inv_crop_height, rotation);
       break;
     }
     case FOURCC_I422:
     case FOURCC_YV16: {
-      const uint8* src_y = sample + src_width * crop_y + crop_x;
-      const uint8* src_u;
-      const uint8* src_v;
+      const uint8_t* src_y = sample + src_width * crop_y + crop_x;
+      const uint8_t* src_u;
+      const uint8_t* src_v;
       int halfwidth = (src_width + 1) / 2;
       if (format == FOURCC_YV16) {
-        src_v = sample + src_width * abs_src_height +
-            halfwidth * crop_y + crop_x / 2;
+        src_v = sample + src_width * abs_src_height + halfwidth * crop_y +
+                crop_x / 2;
         src_u = sample + src_width * abs_src_height +
-            halfwidth * (abs_src_height + crop_y) + crop_x / 2;
+                halfwidth * (abs_src_height + crop_y) + crop_x / 2;
       } else {
-        src_u = sample + src_width * abs_src_height +
-            halfwidth * crop_y + crop_x / 2;
+        src_u = sample + src_width * abs_src_height + halfwidth * crop_y +
+                crop_x / 2;
         src_v = sample + src_width * abs_src_height +
-            halfwidth * (abs_src_height + crop_y) + crop_x / 2;
+                halfwidth * (abs_src_height + crop_y) + crop_x / 2;
       }
-      r = I422ToI420(src_y, src_width,
-                     src_u, halfwidth,
-                     src_v, halfwidth,
-                     y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     crop_width, inv_crop_height);
+      r = I422ToI420(src_y, src_width, src_u, halfwidth, src_v, halfwidth,
+                     dst_y, dst_stride_y, dst_u, dst_stride_u, dst_v,
+                     dst_stride_v, crop_width, inv_crop_height);
       break;
     }
     case FOURCC_I444:
     case FOURCC_YV24: {
-      const uint8* src_y = sample + src_width * crop_y + crop_x;
-      const uint8* src_u;
-      const uint8* src_v;
+      const uint8_t* src_y = sample + src_width * crop_y + crop_x;
+      const uint8_t* src_u;
+      const uint8_t* src_v;
       if (format == FOURCC_YV24) {
         src_v = sample + src_width * (abs_src_height + crop_y) + crop_x;
         src_u = sample + src_width * (abs_src_height * 2 + crop_y) + crop_x;
@@ -277,38 +242,16 @@
         src_u = sample + src_width * (abs_src_height + crop_y) + crop_x;
         src_v = sample + src_width * (abs_src_height * 2 + crop_y) + crop_x;
       }
-      r = I444ToI420(src_y, src_width,
-                     src_u, src_width,
-                     src_v, src_width,
-                     y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     crop_width, inv_crop_height);
-      break;
-    }
-    case FOURCC_I411: {
-      int quarterwidth = (src_width + 3) / 4;
-      const uint8* src_y = sample + src_width * crop_y + crop_x;
-      const uint8* src_u = sample + src_width * abs_src_height +
-          quarterwidth * crop_y + crop_x / 4;
-      const uint8* src_v = sample + src_width * abs_src_height +
-          quarterwidth * (abs_src_height + crop_y) + crop_x / 4;
-      r = I411ToI420(src_y, src_width,
-                     src_u, quarterwidth,
-                     src_v, quarterwidth,
-                     y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     crop_width, inv_crop_height);
+      r = I444ToI420(src_y, src_width, src_u, src_width, src_v, src_width,
+                     dst_y, dst_stride_y, dst_u, dst_stride_u, dst_v,
+                     dst_stride_v, crop_width, inv_crop_height);
       break;
     }
 #ifdef HAVE_JPEG
     case FOURCC_MJPG:
-      r = MJPGToI420(sample, sample_size,
-                     y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     src_width, abs_src_height, crop_width, inv_crop_height);
+      r = MJPGToI420(sample, sample_size, dst_y, dst_stride_y, dst_u,
+                     dst_stride_u, dst_v, dst_stride_v, src_width,
+                     abs_src_height, crop_width, inv_crop_height);
       break;
 #endif
     default:
@@ -317,13 +260,10 @@
 
   if (need_buf) {
     if (!r) {
-      r = I420Rotate(y, y_stride,
-                     u, u_stride,
-                     v, v_stride,
-                     tmp_y, tmp_y_stride,
-                     tmp_u, tmp_u_stride,
-                     tmp_v, tmp_v_stride,
-                     crop_width, abs_crop_height, rotation);
+      r = I420Rotate(dst_y, dst_stride_y, dst_u, dst_stride_u, dst_v,
+                     dst_stride_v, tmp_y, tmp_y_stride, tmp_u, tmp_u_stride,
+                     tmp_v, tmp_v_stride, crop_width, abs_crop_height,
+                     rotation);
     }
     free(rotate_buffer);
   }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/cpu_id.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/cpu_id.cc
index 84927eb..31e24b6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/cpu_id.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/cpu_id.cc
@@ -13,22 +13,16 @@
 #if defined(_MSC_VER)
 #include <intrin.h>  // For __cpuidex()
 #endif
-#if !defined(__pnacl__) && !defined(__CLR_VER) && \
+#if !defined(__pnacl__) && !defined(__CLR_VER) &&                           \
     !defined(__native_client__) && (defined(_M_IX86) || defined(_M_X64)) && \
     defined(_MSC_FULL_VER) && (_MSC_FULL_VER >= 160040219)
 #include <immintrin.h>  // For _xgetbv()
 #endif
 
-#if !defined(__native_client__)
-#include <stdlib.h>  // For getenv()
-#endif
-
 // For ArmCpuCaps() but unittested on all platforms
 #include <stdio.h>
 #include <string.h>
 
-#include "libyuv/basic_types.h"  // For CPU_X86
-
 #ifdef __cplusplus
 namespace libyuv {
 extern "C" {
@@ -43,16 +37,20 @@
 #define SAFEBUFFERS
 #endif
 
+// cpu_info_ variable for SIMD instruction sets detected.
+LIBYUV_API int cpu_info_ = 0;
+
+// TODO(fbarchard): Consider using int for cpuid so casting is not needed.
 // Low level cpuid for X86.
-#if (defined(_M_IX86) || defined(_M_X64) || \
-    defined(__i386__) || defined(__x86_64__)) && \
+#if (defined(_M_IX86) || defined(_M_X64) || defined(__i386__) || \
+     defined(__x86_64__)) &&                                     \
     !defined(__pnacl__) && !defined(__CLR_VER)
 LIBYUV_API
-void CpuId(uint32 info_eax, uint32 info_ecx, uint32* cpu_info) {
+void CpuId(int info_eax, int info_ecx, int* cpu_info) {
 #if defined(_MSC_VER)
 // Visual C version uses intrinsic or inline x86 assembly.
 #if defined(_MSC_FULL_VER) && (_MSC_FULL_VER >= 160040219)
-  __cpuidex((int*)(cpu_info), info_eax, info_ecx);
+  __cpuidex(cpu_info, info_eax, info_ecx);
 #elif defined(_M_IX86)
   __asm {
     mov        eax, info_eax
@@ -66,26 +64,26 @@
   }
 #else  // Visual C but not x86
   if (info_ecx == 0) {
-    __cpuid((int*)(cpu_info), info_eax);
+    __cpuid(cpu_info, info_eax);
   } else {
-    cpu_info[3] = cpu_info[2] = cpu_info[1] = cpu_info[0] = 0;
+    cpu_info[3] = cpu_info[2] = cpu_info[1] = cpu_info[0] = 0u;
   }
 #endif
 // GCC version uses inline x86 assembly.
 #else  // defined(_MSC_VER)
-  uint32 info_ebx, info_edx;
-  asm volatile (
-#if defined( __i386__) && defined(__PIC__)
-    // Preserve ebx for fpic 32 bit.
-    "mov %%ebx, %%edi                          \n"
-    "cpuid                                     \n"
-    "xchg %%edi, %%ebx                         \n"
-    : "=D" (info_ebx),
+  int info_ebx, info_edx;
+  asm volatile(
+#if defined(__i386__) && defined(__PIC__)
+      // Preserve ebx for fpic 32 bit.
+      "mov %%ebx, %%edi                          \n"
+      "cpuid                                     \n"
+      "xchg %%edi, %%ebx                         \n"
+      : "=D"(info_ebx),
 #else
-    "cpuid                                     \n"
-    : "=b" (info_ebx),
+      "cpuid                                     \n"
+      : "=b"(info_ebx),
 #endif  //  defined( __i386__) && defined(__PIC__)
-      "+a" (info_eax), "+c" (info_ecx), "=d" (info_edx));
+        "+a"(info_eax), "+c"(info_ecx), "=d"(info_edx));
   cpu_info[0] = info_eax;
   cpu_info[1] = info_ebx;
   cpu_info[2] = info_ecx;
@@ -94,7 +92,9 @@
 }
 #else  // (defined(_M_IX86) || defined(_M_X64) ...
 LIBYUV_API
-void CpuId(uint32 eax, uint32 ecx, uint32* cpu_info) {
+void CpuId(int eax, int ecx, int* cpu_info) {
+  (void)eax;
+  (void)ecx;
   cpu_info[0] = cpu_info[1] = cpu_info[2] = cpu_info[3] = 0;
 }
 #endif
@@ -111,20 +111,22 @@
 #if defined(_M_IX86) && (_MSC_VER < 1900)
 #pragma optimize("g", off)
 #endif
-#if (defined(_M_IX86) || defined(_M_X64) || \
-    defined(__i386__) || defined(__x86_64__)) && \
+#if (defined(_M_IX86) || defined(_M_X64) || defined(__i386__) || \
+     defined(__x86_64__)) &&                                     \
     !defined(__pnacl__) && !defined(__CLR_VER) && !defined(__native_client__)
-#define HAS_XGETBV
 // X86 CPUs have xgetbv to detect OS saves high parts of ymm registers.
 int GetXCR0() {
-  uint32 xcr0 = 0u;
+  int xcr0 = 0;
 #if defined(_MSC_FULL_VER) && (_MSC_FULL_VER >= 160040219)
-  xcr0 = (uint32)(_xgetbv(0));  // VS2010 SP1 required.
+  xcr0 = (int)_xgetbv(0);  // VS2010 SP1 required.  NOLINT
 #elif defined(__i386__) || defined(__x86_64__)
-  asm(".byte 0x0f, 0x01, 0xd0" : "=a" (xcr0) : "c" (0) : "%edx");
+  asm(".byte 0x0f, 0x01, 0xd0" : "=a"(xcr0) : "c"(0) : "%edx");
 #endif  // defined(__i386__) || defined(__x86_64__)
   return xcr0;
 }
+#else
+// xgetbv unavailable to query for OSSave support.  Return 0.
+#define GetXCR0() 0
 #endif  // defined(_M_IX86) || defined(_M_X64) ..
 // Return optimization to previous setting.
 #if defined(_M_IX86) && (_MSC_VER < 1900)
@@ -133,8 +135,7 @@
 
 // based on libvpx arm_cpudetect.c
 // For Arm, but public to allow testing on any CPU
-LIBYUV_API SAFEBUFFERS
-int ArmCpuCaps(const char* cpuinfo_name) {
+LIBYUV_API SAFEBUFFERS int ArmCpuCaps(const char* cpuinfo_name) {
   char cpuinfo_line[512];
   FILE* f = fopen(cpuinfo_name, "r");
   if (!f) {
@@ -151,7 +152,7 @@
       }
       // aarch64 uses asimd for Neon.
       p = strstr(cpuinfo_line, " asimd");
-      if (p && (p[6] == ' ' || p[6] == '\n')) {
+      if (p) {
         fclose(f);
         return kCpuHasNEON;
       }
@@ -161,103 +162,78 @@
   return 0;
 }
 
-// CPU detect function for SIMD instruction sets.
-LIBYUV_API
-int cpu_info_ = 0;  // cpu_info is not initialized yet.
-
-// Test environment variable for disabling CPU features. Any non-zero value
-// to disable. Zero ignored to make it easy to set the variable on/off.
-#if !defined(__native_client__) && !defined(_M_ARM)
-
-static LIBYUV_BOOL TestEnv(const char* name) {
-  const char* var = getenv(name);
-  if (var) {
-    if (var[0] != '0') {
-      return LIBYUV_TRUE;
+// TODO(fbarchard): Consider read_msa_ir().
+// TODO(fbarchard): Add unittest.
+LIBYUV_API SAFEBUFFERS int MipsCpuCaps(const char* cpuinfo_name,
+                                       const char ase[]) {
+  char cpuinfo_line[512];
+  FILE* f = fopen(cpuinfo_name, "r");
+  if (!f) {
+    // ase enabled if /proc/cpuinfo is unavailable.
+    if (strcmp(ase, " msa") == 0) {
+      return kCpuHasMSA;
+    }
+    return 0;
+  }
+  while (fgets(cpuinfo_line, sizeof(cpuinfo_line) - 1, f)) {
+    if (memcmp(cpuinfo_line, "ASEs implemented", 16) == 0) {
+      char* p = strstr(cpuinfo_line, ase);
+      if (p) {
+        fclose(f);
+        if (strcmp(ase, " msa") == 0) {
+          return kCpuHasMSA;
+        }
+        return 0;
+      }
     }
   }
-  return LIBYUV_FALSE;
+  fclose(f);
+  return 0;
 }
-#else  // nacl does not support getenv().
-static LIBYUV_BOOL TestEnv(const char*) {
-  return LIBYUV_FALSE;
-}
-#endif
 
-LIBYUV_API SAFEBUFFERS
-int InitCpuFlags(void) {
-  // TODO(fbarchard): swap kCpuInit logic so 0 means uninitialized.
+static SAFEBUFFERS int GetCpuFlags(void) {
   int cpu_info = 0;
-#if !defined(__pnacl__) && !defined(__CLR_VER) && defined(CPU_X86)
-  uint32 cpu_info0[4] = { 0, 0, 0, 0 };
-  uint32 cpu_info1[4] = { 0, 0, 0, 0 };
-  uint32 cpu_info7[4] = { 0, 0, 0, 0 };
+#if !defined(__pnacl__) && !defined(__CLR_VER) &&                   \
+    (defined(__x86_64__) || defined(_M_X64) || defined(__i386__) || \
+     defined(_M_IX86))
+  int cpu_info0[4] = {0, 0, 0, 0};
+  int cpu_info1[4] = {0, 0, 0, 0};
+  int cpu_info7[4] = {0, 0, 0, 0};
   CpuId(0, 0, cpu_info0);
   CpuId(1, 0, cpu_info1);
   if (cpu_info0[0] >= 7) {
     CpuId(7, 0, cpu_info7);
   }
-  cpu_info = ((cpu_info1[3] & 0x04000000) ? kCpuHasSSE2 : 0) |
+  cpu_info = kCpuHasX86 | ((cpu_info1[3] & 0x04000000) ? kCpuHasSSE2 : 0) |
              ((cpu_info1[2] & 0x00000200) ? kCpuHasSSSE3 : 0) |
              ((cpu_info1[2] & 0x00080000) ? kCpuHasSSE41 : 0) |
              ((cpu_info1[2] & 0x00100000) ? kCpuHasSSE42 : 0) |
-             ((cpu_info7[1] & 0x00000200) ? kCpuHasERMS : 0) |
-             ((cpu_info1[2] & 0x00001000) ? kCpuHasFMA3 : 0) |
-             kCpuHasX86;
+             ((cpu_info7[1] & 0x00000200) ? kCpuHasERMS : 0);
 
-#ifdef HAS_XGETBV
-  // AVX requires CPU has AVX, XSAVE and OSXSave for xgetbv
+  // AVX requires OS saves YMM registers.
   if (((cpu_info1[2] & 0x1c000000) == 0x1c000000) &&  // AVX and OSXSave
       ((GetXCR0() & 6) == 6)) {  // Test OS saves YMM registers
-    cpu_info |= ((cpu_info7[1] & 0x00000020) ? kCpuHasAVX2 : 0) | kCpuHasAVX;
+    cpu_info |= kCpuHasAVX | ((cpu_info7[1] & 0x00000020) ? kCpuHasAVX2 : 0) |
+                ((cpu_info1[2] & 0x00001000) ? kCpuHasFMA3 : 0) |
+                ((cpu_info1[2] & 0x20000000) ? kCpuHasF16C : 0);
 
     // Detect AVX512bw
     if ((GetXCR0() & 0xe0) == 0xe0) {
-      cpu_info |= (cpu_info7[1] & 0x40000000) ? kCpuHasAVX3 : 0;
+      cpu_info |= (cpu_info7[1] & 0x40000000) ? kCpuHasAVX512BW : 0;
+      cpu_info |= (cpu_info7[1] & 0x80000000) ? kCpuHasAVX512VL : 0;
+      cpu_info |= (cpu_info7[2] & 0x00000002) ? kCpuHasAVX512VBMI : 0;
+      cpu_info |= (cpu_info7[2] & 0x00000040) ? kCpuHasAVX512VBMI2 : 0;
+      cpu_info |= (cpu_info7[2] & 0x00001000) ? kCpuHasAVX512VBITALG : 0;
+      cpu_info |= (cpu_info7[2] & 0x00004000) ? kCpuHasAVX512VPOPCNTDQ : 0;
+      cpu_info |= (cpu_info7[2] & 0x00000100) ? kCpuHasGFNI : 0;
     }
   }
 #endif
-
-  // Environment variable overrides for testing.
-  if (TestEnv("LIBYUV_DISABLE_X86")) {
-    cpu_info &= ~kCpuHasX86;
-  }
-  if (TestEnv("LIBYUV_DISABLE_SSE2")) {
-    cpu_info &= ~kCpuHasSSE2;
-  }
-  if (TestEnv("LIBYUV_DISABLE_SSSE3")) {
-    cpu_info &= ~kCpuHasSSSE3;
-  }
-  if (TestEnv("LIBYUV_DISABLE_SSE41")) {
-    cpu_info &= ~kCpuHasSSE41;
-  }
-  if (TestEnv("LIBYUV_DISABLE_SSE42")) {
-    cpu_info &= ~kCpuHasSSE42;
-  }
-  if (TestEnv("LIBYUV_DISABLE_AVX")) {
-    cpu_info &= ~kCpuHasAVX;
-  }
-  if (TestEnv("LIBYUV_DISABLE_AVX2")) {
-    cpu_info &= ~kCpuHasAVX2;
-  }
-  if (TestEnv("LIBYUV_DISABLE_ERMS")) {
-    cpu_info &= ~kCpuHasERMS;
-  }
-  if (TestEnv("LIBYUV_DISABLE_FMA3")) {
-    cpu_info &= ~kCpuHasFMA3;
-  }
-  if (TestEnv("LIBYUV_DISABLE_AVX3")) {
-    cpu_info &= ~kCpuHasAVX3;
-  }
-#endif
 #if defined(__mips__) && defined(__linux__)
-#if defined(__mips_dspr2)
-  cpu_info |= kCpuHasDSPR2;
+#if defined(__mips_msa)
+  cpu_info = MipsCpuCaps("/proc/cpuinfo", " msa");
 #endif
   cpu_info |= kCpuHasMIPS;
-  if (getenv("LIBYUV_DISABLE_DSPR2")) {
-    cpu_info &= ~kCpuHasDSPR2;
-  }
 #endif
 #if defined(__arm__) || defined(__aarch64__)
 // gcc -mfpu=neon defines __ARM_NEON__
@@ -276,22 +252,22 @@
   cpu_info = ArmCpuCaps("/proc/cpuinfo");
 #endif
   cpu_info |= kCpuHasARM;
-  if (TestEnv("LIBYUV_DISABLE_NEON")) {
-    cpu_info &= ~kCpuHasNEON;
-  }
 #endif  // __arm__
-  if (TestEnv("LIBYUV_DISABLE_ASM")) {
-    cpu_info = 0;
-  }
-  cpu_info  |= kCpuInitialized;
-  cpu_info_ = cpu_info;
+  cpu_info |= kCpuInitialized;
   return cpu_info;
 }
 
 // Note that use of this function is not thread safe.
 LIBYUV_API
-void MaskCpuFlags(int enable_flags) {
-  cpu_info_ = InitCpuFlags() & enable_flags;
+int MaskCpuFlags(int enable_flags) {
+  int cpu_info = GetCpuFlags() & enable_flags;
+  SetCpuFlags(cpu_info);
+  return cpu_info;
+}
+
+LIBYUV_API
+int InitCpuFlags(void) {
+  return MaskCpuFlags(-1);
 }
 
 #ifdef __cplusplus
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/mjpeg_decoder.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/mjpeg_decoder.cc
index 22025ad..eaf2530 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/mjpeg_decoder.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/mjpeg_decoder.cc
@@ -21,7 +21,7 @@
 
 #if defined(_MSC_VER)
 // disable warning 4324: structure was padded due to __declspec(align())
-#pragma warning(disable:4324)
+#pragma warning(disable : 4324)
 #endif
 
 #endif
@@ -102,7 +102,7 @@
   DestroyOutputBuffers();
 }
 
-LIBYUV_BOOL MJpegDecoder::LoadFrame(const uint8* src, size_t src_len) {
+LIBYUV_BOOL MJpegDecoder::LoadFrame(const uint8_t* src, size_t src_len) {
   if (!ValidateJpeg(src, src_len)) {
     return LIBYUV_FALSE;
   }
@@ -129,7 +129,7 @@
       if (scanlines_[i]) {
         delete scanlines_[i];
       }
-      scanlines_[i] = new uint8* [scanlines_size];
+      scanlines_[i] = new uint8_t*[scanlines_size];
       scanlines_sizes_[i] = scanlines_size;
     }
 
@@ -145,7 +145,7 @@
       if (databuf_[i]) {
         delete databuf_[i];
       }
-      databuf_[i] = new uint8[databuf_size];
+      databuf_[i] = new uint8_t[databuf_size];
       databuf_strides_[i] = databuf_stride;
     }
 
@@ -195,13 +195,11 @@
 }
 
 int MJpegDecoder::GetHorizSubSampFactor(int component) {
-  return decompress_struct_->max_h_samp_factor /
-      GetHorizSampFactor(component);
+  return decompress_struct_->max_h_samp_factor / GetHorizSampFactor(component);
 }
 
 int MJpegDecoder::GetVertSubSampFactor(int component) {
-  return decompress_struct_->max_v_samp_factor /
-      GetVertSampFactor(component);
+  return decompress_struct_->max_v_samp_factor / GetVertSampFactor(component);
 }
 
 int MJpegDecoder::GetImageScanlinesPerImcuRow() {
@@ -245,10 +243,10 @@
 }
 
 // TODO(fbarchard): Allow rectangle to be specified: x, y, width, height.
-LIBYUV_BOOL MJpegDecoder::DecodeToBuffers(
-    uint8** planes, int dst_width, int dst_height) {
-  if (dst_width != GetWidth() ||
-      dst_height > GetHeight()) {
+LIBYUV_BOOL MJpegDecoder::DecodeToBuffers(uint8_t** planes,
+                                          int dst_width,
+                                          int dst_height) {
+  if (dst_width != GetWidth() || dst_height > GetHeight()) {
     // ERROR: Bad dimensions
     return LIBYUV_FALSE;
   }
@@ -289,14 +287,13 @@
       for (int i = 0; i < num_outbufs_; ++i) {
         // TODO(fbarchard): Compute skip to avoid this
         assert(skip % GetVertSubSampFactor(i) == 0);
-        int rows_to_skip =
-            DivideAndRoundDown(skip, GetVertSubSampFactor(i));
-        int scanlines_to_copy = GetComponentScanlinesPerImcuRow(i) -
-                                rows_to_skip;
+        int rows_to_skip = DivideAndRoundDown(skip, GetVertSubSampFactor(i));
+        int scanlines_to_copy =
+            GetComponentScanlinesPerImcuRow(i) - rows_to_skip;
         int data_to_skip = rows_to_skip * GetComponentStride(i);
-        CopyPlane(databuf_[i] + data_to_skip, GetComponentStride(i),
-                  planes[i], GetComponentWidth(i),
-                  GetComponentWidth(i), scanlines_to_copy);
+        CopyPlane(databuf_[i] + data_to_skip, GetComponentStride(i), planes[i],
+                  GetComponentWidth(i), GetComponentWidth(i),
+                  scanlines_to_copy);
         planes[i] += scanlines_to_copy * GetComponentWidth(i);
       }
       lines_left -= (GetImageScanlinesPerImcuRow() - skip);
@@ -305,16 +302,15 @@
 
   // Read full MCUs but cropped horizontally
   for (; lines_left > GetImageScanlinesPerImcuRow();
-         lines_left -= GetImageScanlinesPerImcuRow()) {
+       lines_left -= GetImageScanlinesPerImcuRow()) {
     if (!DecodeImcuRow()) {
       FinishDecode();
       return LIBYUV_FALSE;
     }
     for (int i = 0; i < num_outbufs_; ++i) {
       int scanlines_to_copy = GetComponentScanlinesPerImcuRow(i);
-      CopyPlane(databuf_[i], GetComponentStride(i),
-                planes[i], GetComponentWidth(i),
-                GetComponentWidth(i), scanlines_to_copy);
+      CopyPlane(databuf_[i], GetComponentStride(i), planes[i],
+                GetComponentWidth(i), GetComponentWidth(i), scanlines_to_copy);
       planes[i] += scanlines_to_copy * GetComponentWidth(i);
     }
   }
@@ -328,19 +324,19 @@
     for (int i = 0; i < num_outbufs_; ++i) {
       int scanlines_to_copy =
           DivideAndRoundUp(lines_left, GetVertSubSampFactor(i));
-      CopyPlane(databuf_[i], GetComponentStride(i),
-                planes[i], GetComponentWidth(i),
-                GetComponentWidth(i), scanlines_to_copy);
+      CopyPlane(databuf_[i], GetComponentStride(i), planes[i],
+                GetComponentWidth(i), GetComponentWidth(i), scanlines_to_copy);
       planes[i] += scanlines_to_copy * GetComponentWidth(i);
     }
   }
   return FinishDecode();
 }
 
-LIBYUV_BOOL MJpegDecoder::DecodeToCallback(CallbackFunction fn, void* opaque,
-    int dst_width, int dst_height) {
-  if (dst_width != GetWidth() ||
-      dst_height > GetHeight()) {
+LIBYUV_BOOL MJpegDecoder::DecodeToCallback(CallbackFunction fn,
+                                           void* opaque,
+                                           int dst_width,
+                                           int dst_height) {
+  if (dst_width != GetWidth() || dst_height > GetHeight()) {
     // ERROR: Bad dimensions
     return LIBYUV_FALSE;
   }
@@ -395,7 +391,7 @@
   }
   // Read full MCUs until we get to the crop point.
   for (; lines_left >= GetImageScanlinesPerImcuRow();
-         lines_left -= GetImageScanlinesPerImcuRow()) {
+       lines_left -= GetImageScanlinesPerImcuRow()) {
     if (!DecodeImcuRow()) {
       FinishDecode();
       return LIBYUV_FALSE;
@@ -435,22 +431,22 @@
 }
 
 void term_source(j_decompress_ptr cinfo) {
-  // Nothing to do.
+  (void)cinfo;  // Nothing to do.
 }
 
 #ifdef HAVE_SETJMP
 void ErrorHandler(j_common_ptr cinfo) {
-  // This is called when a jpeglib command experiences an error. Unfortunately
-  // jpeglib's error handling model is not very flexible, because it expects the
-  // error handler to not return--i.e., it wants the program to terminate. To
-  // recover from errors we use setjmp() as shown in their example. setjmp() is
-  // C's implementation for the "call with current continuation" functionality
-  // seen in some functional programming languages.
-  // A formatted message can be output, but is unsafe for release.
+// This is called when a jpeglib command experiences an error. Unfortunately
+// jpeglib's error handling model is not very flexible, because it expects the
+// error handler to not return--i.e., it wants the program to terminate. To
+// recover from errors we use setjmp() as shown in their example. setjmp() is
+// C's implementation for the "call with current continuation" functionality
+// seen in some functional programming languages.
+// A formatted message can be output, but is unsafe for release.
 #ifdef DEBUG
   char buf[JMSG_LENGTH_MAX];
   (*cinfo->err->format_message)(cinfo, buf);
-  // ERROR: Error in jpeglib: buf
+// ERROR: Error in jpeglib: buf
 #endif
 
   SetJmpErrorMgr* mgr = reinterpret_cast<SetJmpErrorMgr*>(cinfo->err);
@@ -459,8 +455,9 @@
   longjmp(mgr->setjmp_buffer, 1);
 }
 
+// Suppress fprintf warnings.
 void OutputHandler(j_common_ptr cinfo) {
-  // Suppress fprintf warnings.
+  (void)cinfo;
 }
 
 #endif  // HAVE_SETJMP
@@ -472,9 +469,9 @@
     // it.
     DestroyOutputBuffers();
 
-    scanlines_ = new uint8** [num_outbufs];
+    scanlines_ = new uint8_t**[num_outbufs];
     scanlines_sizes_ = new int[num_outbufs];
-    databuf_ = new uint8* [num_outbufs];
+    databuf_ = new uint8_t*[num_outbufs];
     databuf_strides_ = new int[num_outbufs];
 
     for (int i = 0; i < num_outbufs; ++i) {
@@ -490,13 +487,13 @@
 
 void MJpegDecoder::DestroyOutputBuffers() {
   for (int i = 0; i < num_outbufs_; ++i) {
-    delete [] scanlines_[i];
-    delete [] databuf_[i];
+    delete[] scanlines_[i];
+    delete[] databuf_[i];
   }
-  delete [] scanlines_;
-  delete [] databuf_;
-  delete [] scanlines_sizes_;
-  delete [] databuf_strides_;
+  delete[] scanlines_;
+  delete[] databuf_;
+  delete[] scanlines_sizes_;
+  delete[] databuf_strides_;
   scanlines_ = NULL;
   databuf_ = NULL;
   scanlines_sizes_ = NULL;
@@ -530,9 +527,9 @@
   return LIBYUV_TRUE;
 }
 
-void MJpegDecoder::SetScanlinePointers(uint8** data) {
+void MJpegDecoder::SetScanlinePointers(uint8_t** data) {
   for (int i = 0; i < num_outbufs_; ++i) {
-    uint8* data_i = data[i];
+    uint8_t* data_i = data[i];
     for (int j = 0; j < scanlines_sizes_[i]; ++j) {
       scanlines_[i][j] = data_i;
       data_i += GetComponentStride(i);
@@ -542,26 +539,26 @@
 
 inline LIBYUV_BOOL MJpegDecoder::DecodeImcuRow() {
   return (unsigned int)(GetImageScanlinesPerImcuRow()) ==
-      jpeg_read_raw_data(decompress_struct_,
-                         scanlines_,
-                         GetImageScanlinesPerImcuRow());
+         jpeg_read_raw_data(decompress_struct_, scanlines_,
+                            GetImageScanlinesPerImcuRow());
 }
 
 // The helper function which recognizes the jpeg sub-sampling type.
 JpegSubsamplingType MJpegDecoder::JpegSubsamplingTypeHelper(
-    int* subsample_x, int* subsample_y, int number_of_components) {
+    int* subsample_x,
+    int* subsample_y,
+    int number_of_components) {
   if (number_of_components == 3) {  // Color images.
-    if (subsample_x[0] == 1 && subsample_y[0] == 1 &&
-        subsample_x[1] == 2 && subsample_y[1] == 2 &&
-        subsample_x[2] == 2 && subsample_y[2] == 2) {
+    if (subsample_x[0] == 1 && subsample_y[0] == 1 && subsample_x[1] == 2 &&
+        subsample_y[1] == 2 && subsample_x[2] == 2 && subsample_y[2] == 2) {
       return kJpegYuv420;
-    } else if (subsample_x[0] == 1 && subsample_y[0] == 1 &&
-        subsample_x[1] == 2 && subsample_y[1] == 1 &&
-        subsample_x[2] == 2 && subsample_y[2] == 1) {
+    }
+    if (subsample_x[0] == 1 && subsample_y[0] == 1 && subsample_x[1] == 2 &&
+        subsample_y[1] == 1 && subsample_x[2] == 2 && subsample_y[2] == 1) {
       return kJpegYuv422;
-    } else if (subsample_x[0] == 1 && subsample_y[0] == 1 &&
-        subsample_x[1] == 1 && subsample_y[1] == 1 &&
-        subsample_x[2] == 1 && subsample_y[2] == 1) {
+    }
+    if (subsample_x[0] == 1 && subsample_y[0] == 1 && subsample_x[1] == 1 &&
+        subsample_y[1] == 1 && subsample_x[2] == 1 && subsample_y[2] == 1) {
       return kJpegYuv444;
     }
   } else if (number_of_components == 1) {  // Grey-scale images.
@@ -574,4 +571,3 @@
 
 }  // namespace libyuv
 #endif  // HAVE_JPEG
-
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/mjpeg_validate.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/mjpeg_validate.cc
index 9c48832..80c2cc0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/mjpeg_validate.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/mjpeg_validate.cc
@@ -18,13 +18,13 @@
 #endif
 
 // Helper function to scan for EOI marker (0xff 0xd9).
-static LIBYUV_BOOL ScanEOI(const uint8* sample, size_t sample_size) {
+static LIBYUV_BOOL ScanEOI(const uint8_t* sample, size_t sample_size) {
   if (sample_size >= 2) {
-    const uint8* end = sample + sample_size - 1;
-    const uint8* it = sample;
+    const uint8_t* end = sample + sample_size - 1;
+    const uint8_t* it = sample;
     while (it < end) {
       // TODO(fbarchard): scan for 0xd9 instead.
-      it = static_cast<const uint8 *>(memchr(it, 0xff, end - it));
+      it = (const uint8_t*)(memchr(it, 0xff, end - it));
       if (it == NULL) {
         break;
       }
@@ -39,7 +39,7 @@
 }
 
 // Helper function to validate the jpeg appears intact.
-LIBYUV_BOOL ValidateJpeg(const uint8* sample, size_t sample_size) {
+LIBYUV_BOOL ValidateJpeg(const uint8_t* sample, size_t sample_size) {
   // Maximum size that ValidateJpeg will consider valid.
   const size_t kMaxJpegSize = 0x7fffffffull;
   const size_t kBackSearchSize = 1024;
@@ -68,4 +68,3 @@
 }  // extern "C"
 }  // namespace libyuv
 #endif
-
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/planar_functions.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/planar_functions.cc
index a764f8d..5eae3f7 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/planar_functions.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/planar_functions.cc
@@ -26,11 +26,14 @@
 
 // Copy a plane of data
 LIBYUV_API
-void CopyPlane(const uint8* src_y, int src_stride_y,
-               uint8* dst_y, int dst_stride_y,
-               int width, int height) {
+void CopyPlane(const uint8_t* src_y,
+               int src_stride_y,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               int width,
+               int height) {
   int y;
-  void (*CopyRow)(const uint8* src, uint8* dst, int width) = CopyRow_C;
+  void (*CopyRow)(const uint8_t* src, uint8_t* dst, int width) = CopyRow_C;
   // Negative height means invert the image.
   if (height < 0) {
     height = -height;
@@ -38,8 +41,7 @@
     dst_stride_y = -dst_stride_y;
   }
   // Coalesce rows.
-  if (src_stride_y == width &&
-      dst_stride_y == width) {
+  if (src_stride_y == width && dst_stride_y == width) {
     width *= height;
     height = 1;
     src_stride_y = dst_stride_y = 0;
@@ -48,6 +50,7 @@
   if (src_y == dst_y && src_stride_y == dst_stride_y) {
     return;
   }
+
 #if defined(HAS_COPYROW_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
     CopyRow = IS_ALIGNED(width, 32) ? CopyRow_SSE2 : CopyRow_Any_SSE2;
@@ -68,11 +71,6 @@
     CopyRow = IS_ALIGNED(width, 32) ? CopyRow_NEON : CopyRow_Any_NEON;
   }
 #endif
-#if defined(HAS_COPYROW_MIPS)
-  if (TestCpuFlag(kCpuHasMIPS)) {
-    CopyRow = CopyRow_MIPS;
-  }
-#endif
 
   // Copy plane
   for (y = 0; y < height; ++y) {
@@ -83,15 +81,18 @@
 }
 
 // TODO(fbarchard): Consider support for negative height.
+// TODO(fbarchard): Consider stride measured in bytes.
 LIBYUV_API
-void CopyPlane_16(const uint16* src_y, int src_stride_y,
-                  uint16* dst_y, int dst_stride_y,
-                  int width, int height) {
+void CopyPlane_16(const uint16_t* src_y,
+                  int src_stride_y,
+                  uint16_t* dst_y,
+                  int dst_stride_y,
+                  int width,
+                  int height) {
   int y;
-  void (*CopyRow)(const uint16* src, uint16* dst, int width) = CopyRow_16_C;
+  void (*CopyRow)(const uint16_t* src, uint16_t* dst, int width) = CopyRow_16_C;
   // Coalesce rows.
-  if (src_stride_y == width &&
-      dst_stride_y == width) {
+  if (src_stride_y == width && dst_stride_y == width) {
     width *= height;
     height = 1;
     src_stride_y = dst_stride_y = 0;
@@ -111,11 +112,6 @@
     CopyRow = CopyRow_16_NEON;
   }
 #endif
-#if defined(HAS_COPYROW_16_MIPS)
-  if (TestCpuFlag(kCpuHasMIPS)) {
-    CopyRow = CopyRow_16_MIPS;
-  }
-#endif
 
   // Copy plane
   for (y = 0; y < height; ++y) {
@@ -125,19 +121,124 @@
   }
 }
 
+// Convert a plane of 16 bit data to 8 bit
+LIBYUV_API
+void Convert16To8Plane(const uint16_t* src_y,
+                       int src_stride_y,
+                       uint8_t* dst_y,
+                       int dst_stride_y,
+                       int scale,  // 16384 for 10 bits
+                       int width,
+                       int height) {
+  int y;
+  void (*Convert16To8Row)(const uint16_t* src_y, uint8_t* dst_y, int scale,
+                          int width) = Convert16To8Row_C;
+
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    dst_y = dst_y + (height - 1) * dst_stride_y;
+    dst_stride_y = -dst_stride_y;
+  }
+  // Coalesce rows.
+  if (src_stride_y == width && dst_stride_y == width) {
+    width *= height;
+    height = 1;
+    src_stride_y = dst_stride_y = 0;
+  }
+#if defined(HAS_CONVERT16TO8ROW_SSSE3)
+  if (TestCpuFlag(kCpuHasSSSE3)) {
+    Convert16To8Row = Convert16To8Row_Any_SSSE3;
+    if (IS_ALIGNED(width, 16)) {
+      Convert16To8Row = Convert16To8Row_SSSE3;
+    }
+  }
+#endif
+#if defined(HAS_CONVERT16TO8ROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    Convert16To8Row = Convert16To8Row_Any_AVX2;
+    if (IS_ALIGNED(width, 32)) {
+      Convert16To8Row = Convert16To8Row_AVX2;
+    }
+  }
+#endif
+
+  // Convert plane
+  for (y = 0; y < height; ++y) {
+    Convert16To8Row(src_y, dst_y, scale, width);
+    src_y += src_stride_y;
+    dst_y += dst_stride_y;
+  }
+}
+
+// Convert a plane of 8 bit data to 16 bit
+LIBYUV_API
+void Convert8To16Plane(const uint8_t* src_y,
+                       int src_stride_y,
+                       uint16_t* dst_y,
+                       int dst_stride_y,
+                       int scale,  // 16384 for 10 bits
+                       int width,
+                       int height) {
+  int y;
+  void (*Convert8To16Row)(const uint8_t* src_y, uint16_t* dst_y, int scale,
+                          int width) = Convert8To16Row_C;
+
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    dst_y = dst_y + (height - 1) * dst_stride_y;
+    dst_stride_y = -dst_stride_y;
+  }
+  // Coalesce rows.
+  if (src_stride_y == width && dst_stride_y == width) {
+    width *= height;
+    height = 1;
+    src_stride_y = dst_stride_y = 0;
+  }
+#if defined(HAS_CONVERT8TO16ROW_SSE2)
+  if (TestCpuFlag(kCpuHasSSE2)) {
+    Convert8To16Row = Convert8To16Row_Any_SSE2;
+    if (IS_ALIGNED(width, 16)) {
+      Convert8To16Row = Convert8To16Row_SSE2;
+    }
+  }
+#endif
+#if defined(HAS_CONVERT8TO16ROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    Convert8To16Row = Convert8To16Row_Any_AVX2;
+    if (IS_ALIGNED(width, 32)) {
+      Convert8To16Row = Convert8To16Row_AVX2;
+    }
+  }
+#endif
+
+  // Convert plane
+  for (y = 0; y < height; ++y) {
+    Convert8To16Row(src_y, dst_y, scale, width);
+    src_y += src_stride_y;
+    dst_y += dst_stride_y;
+  }
+}
+
 // Copy I422.
 LIBYUV_API
-int I422Copy(const uint8* src_y, int src_stride_y,
-             const uint8* src_u, int src_stride_u,
-             const uint8* src_v, int src_stride_v,
-             uint8* dst_y, int dst_stride_y,
-             uint8* dst_u, int dst_stride_u,
-             uint8* dst_v, int dst_stride_v,
-             int width, int height) {
+int I422Copy(const uint8_t* src_y,
+             int src_stride_y,
+             const uint8_t* src_u,
+             int src_stride_u,
+             const uint8_t* src_v,
+             int src_stride_v,
+             uint8_t* dst_y,
+             int dst_stride_y,
+             uint8_t* dst_u,
+             int dst_stride_u,
+             uint8_t* dst_v,
+             int dst_stride_v,
+             int width,
+             int height) {
   int halfwidth = (width + 1) >> 1;
-  if (!src_u || !src_v ||
-      !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_u || !src_v || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -161,16 +262,21 @@
 
 // Copy I444.
 LIBYUV_API
-int I444Copy(const uint8* src_y, int src_stride_y,
-             const uint8* src_u, int src_stride_u,
-             const uint8* src_v, int src_stride_v,
-             uint8* dst_y, int dst_stride_y,
-             uint8* dst_u, int dst_stride_u,
-             uint8* dst_v, int dst_stride_v,
-             int width, int height) {
-  if (!src_u || !src_v ||
-      !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+int I444Copy(const uint8_t* src_y,
+             int src_stride_y,
+             const uint8_t* src_u,
+             int src_stride_u,
+             const uint8_t* src_v,
+             int src_stride_v,
+             uint8_t* dst_y,
+             int dst_stride_y,
+             uint8_t* dst_u,
+             int dst_stride_u,
+             uint8_t* dst_v,
+             int dst_stride_v,
+             int width,
+             int height) {
+  if (!src_u || !src_v || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -194,9 +300,12 @@
 
 // Copy I400.
 LIBYUV_API
-int I400ToI400(const uint8* src_y, int src_stride_y,
-               uint8* dst_y, int dst_stride_y,
-               int width, int height) {
+int I400ToI400(const uint8_t* src_y,
+               int src_stride_y,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               int width,
+               int height) {
   if (!src_y || !dst_y || width <= 0 || height == 0) {
     return -1;
   }
@@ -212,11 +321,20 @@
 
 // Convert I420 to I400.
 LIBYUV_API
-int I420ToI400(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               int width, int height) {
+int I420ToI400(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               int width,
+               int height) {
+  (void)src_u;
+  (void)src_stride_u;
+  (void)src_v;
+  (void)src_stride_v;
   if (!src_y || !dst_y || width <= 0 || height == 0) {
     return -1;
   }
@@ -234,12 +352,16 @@
 // Support function for NV12 etc UV channels.
 // Width and height are plane sizes (typically half pixel width).
 LIBYUV_API
-void SplitUVPlane(const uint8* src_uv, int src_stride_uv,
-                  uint8* dst_u, int dst_stride_u,
-                  uint8* dst_v, int dst_stride_v,
-                  int width, int height) {
+void SplitUVPlane(const uint8_t* src_uv,
+                  int src_stride_uv,
+                  uint8_t* dst_u,
+                  int dst_stride_u,
+                  uint8_t* dst_v,
+                  int dst_stride_v,
+                  int width,
+                  int height) {
   int y;
-  void (*SplitUVRow)(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
+  void (*SplitUVRow)(const uint8_t* src_uv, uint8_t* dst_u, uint8_t* dst_v,
                      int width) = SplitUVRow_C;
   // Negative height means invert the image.
   if (height < 0) {
@@ -250,8 +372,7 @@
     dst_stride_v = -dst_stride_v;
   }
   // Coalesce rows.
-  if (src_stride_uv == width * 2 &&
-      dst_stride_u == width &&
+  if (src_stride_uv == width * 2 && dst_stride_u == width &&
       dst_stride_v == width) {
     width *= height;
     height = 1;
@@ -281,13 +402,11 @@
     }
   }
 #endif
-#if defined(HAS_SPLITUVROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) &&
-      IS_ALIGNED(dst_u, 4) && IS_ALIGNED(dst_stride_u, 4) &&
-      IS_ALIGNED(dst_v, 4) && IS_ALIGNED(dst_stride_v, 4)) {
-    SplitUVRow = SplitUVRow_Any_DSPR2;
-    if (IS_ALIGNED(width, 16)) {
-      SplitUVRow = SplitUVRow_DSPR2;
+#if defined(HAS_SPLITUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    SplitUVRow = SplitUVRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      SplitUVRow = SplitUVRow_MSA;
     }
   }
 #endif
@@ -302,13 +421,17 @@
 }
 
 LIBYUV_API
-void MergeUVPlane(const uint8* src_u, int src_stride_u,
-                  const uint8* src_v, int src_stride_v,
-                  uint8* dst_uv, int dst_stride_uv,
-                  int width, int height) {
+void MergeUVPlane(const uint8_t* src_u,
+                  int src_stride_u,
+                  const uint8_t* src_v,
+                  int src_stride_v,
+                  uint8_t* dst_uv,
+                  int dst_stride_uv,
+                  int width,
+                  int height) {
   int y;
-  void (*MergeUVRow)(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
-      int width) = MergeUVRow_C;
+  void (*MergeUVRow)(const uint8_t* src_u, const uint8_t* src_v,
+                     uint8_t* dst_uv, int width) = MergeUVRow_C;
   // Coalesce rows.
   // Negative height means invert the image.
   if (height < 0) {
@@ -317,8 +440,7 @@
     dst_stride_uv = -dst_stride_uv;
   }
   // Coalesce rows.
-  if (src_stride_u == width &&
-      src_stride_v == width &&
+  if (src_stride_u == width && src_stride_v == width &&
       dst_stride_uv == width * 2) {
     width *= height;
     height = 1;
@@ -348,6 +470,14 @@
     }
   }
 #endif
+#if defined(HAS_MERGEUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    MergeUVRow = MergeUVRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      MergeUVRow = MergeUVRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     // Merge a row of U and V into a row of UV.
@@ -358,12 +488,131 @@
   }
 }
 
-// Mirror a plane of data.
-void MirrorPlane(const uint8* src_y, int src_stride_y,
-                 uint8* dst_y, int dst_stride_y,
-                 int width, int height) {
+// Support function for NV12 etc RGB channels.
+// Width and height are plane sizes (typically half pixel width).
+LIBYUV_API
+void SplitRGBPlane(const uint8_t* src_rgb,
+                   int src_stride_rgb,
+                   uint8_t* dst_r,
+                   int dst_stride_r,
+                   uint8_t* dst_g,
+                   int dst_stride_g,
+                   uint8_t* dst_b,
+                   int dst_stride_b,
+                   int width,
+                   int height) {
   int y;
-  void (*MirrorRow)(const uint8* src, uint8* dst, int width) = MirrorRow_C;
+  void (*SplitRGBRow)(const uint8_t* src_rgb, uint8_t* dst_r, uint8_t* dst_g,
+                      uint8_t* dst_b, int width) = SplitRGBRow_C;
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    dst_r = dst_r + (height - 1) * dst_stride_r;
+    dst_g = dst_g + (height - 1) * dst_stride_g;
+    dst_b = dst_b + (height - 1) * dst_stride_b;
+    dst_stride_r = -dst_stride_r;
+    dst_stride_g = -dst_stride_g;
+    dst_stride_b = -dst_stride_b;
+  }
+  // Coalesce rows.
+  if (src_stride_rgb == width * 3 && dst_stride_r == width &&
+      dst_stride_g == width && dst_stride_b == width) {
+    width *= height;
+    height = 1;
+    src_stride_rgb = dst_stride_r = dst_stride_g = dst_stride_b = 0;
+  }
+#if defined(HAS_SPLITRGBROW_SSSE3)
+  if (TestCpuFlag(kCpuHasSSSE3)) {
+    SplitRGBRow = SplitRGBRow_Any_SSSE3;
+    if (IS_ALIGNED(width, 16)) {
+      SplitRGBRow = SplitRGBRow_SSSE3;
+    }
+  }
+#endif
+#if defined(HAS_SPLITRGBROW_NEON)
+  if (TestCpuFlag(kCpuHasNEON)) {
+    SplitRGBRow = SplitRGBRow_Any_NEON;
+    if (IS_ALIGNED(width, 16)) {
+      SplitRGBRow = SplitRGBRow_NEON;
+    }
+  }
+#endif
+
+  for (y = 0; y < height; ++y) {
+    // Copy a row of RGB.
+    SplitRGBRow(src_rgb, dst_r, dst_g, dst_b, width);
+    dst_r += dst_stride_r;
+    dst_g += dst_stride_g;
+    dst_b += dst_stride_b;
+    src_rgb += src_stride_rgb;
+  }
+}
+
+LIBYUV_API
+void MergeRGBPlane(const uint8_t* src_r,
+                   int src_stride_r,
+                   const uint8_t* src_g,
+                   int src_stride_g,
+                   const uint8_t* src_b,
+                   int src_stride_b,
+                   uint8_t* dst_rgb,
+                   int dst_stride_rgb,
+                   int width,
+                   int height) {
+  int y;
+  void (*MergeRGBRow)(const uint8_t* src_r, const uint8_t* src_g,
+                      const uint8_t* src_b, uint8_t* dst_rgb, int width) =
+      MergeRGBRow_C;
+  // Coalesce rows.
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    dst_rgb = dst_rgb + (height - 1) * dst_stride_rgb;
+    dst_stride_rgb = -dst_stride_rgb;
+  }
+  // Coalesce rows.
+  if (src_stride_r == width && src_stride_g == width && src_stride_b == width &&
+      dst_stride_rgb == width * 3) {
+    width *= height;
+    height = 1;
+    src_stride_r = src_stride_g = src_stride_b = dst_stride_rgb = 0;
+  }
+#if defined(HAS_MERGERGBROW_SSSE3)
+  if (TestCpuFlag(kCpuHasSSSE3)) {
+    MergeRGBRow = MergeRGBRow_Any_SSSE3;
+    if (IS_ALIGNED(width, 16)) {
+      MergeRGBRow = MergeRGBRow_SSSE3;
+    }
+  }
+#endif
+#if defined(HAS_MERGERGBROW_NEON)
+  if (TestCpuFlag(kCpuHasNEON)) {
+    MergeRGBRow = MergeRGBRow_Any_NEON;
+    if (IS_ALIGNED(width, 16)) {
+      MergeRGBRow = MergeRGBRow_NEON;
+    }
+  }
+#endif
+
+  for (y = 0; y < height; ++y) {
+    // Merge a row of U and V into a row of RGB.
+    MergeRGBRow(src_r, src_g, src_b, dst_rgb, width);
+    src_r += src_stride_r;
+    src_g += src_stride_g;
+    src_b += src_stride_b;
+    dst_rgb += dst_stride_rgb;
+  }
+}
+
+// Mirror a plane of data.
+void MirrorPlane(const uint8_t* src_y,
+                 int src_stride_y,
+                 uint8_t* dst_y,
+                 int dst_stride_y,
+                 int width,
+                 int height) {
+  int y;
+  void (*MirrorRow)(const uint8_t* src, uint8_t* dst, int width) = MirrorRow_C;
   // Negative height means invert the image.
   if (height < 0) {
     height = -height;
@@ -394,12 +643,12 @@
     }
   }
 #endif
-// TODO(fbarchard): Mirror on mips handle unaligned memory.
-#if defined(HAS_MIRRORROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) &&
-      IS_ALIGNED(src_y, 4) && IS_ALIGNED(src_stride_y, 4) &&
-      IS_ALIGNED(dst_y, 4) && IS_ALIGNED(dst_stride_y, 4)) {
-    MirrorRow = MirrorRow_DSPR2;
+#if defined(HAS_MIRRORROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    MirrorRow = MirrorRow_Any_MSA;
+    if (IS_ALIGNED(width, 64)) {
+      MirrorRow = MirrorRow_MSA;
+    }
   }
 #endif
 
@@ -413,17 +662,24 @@
 
 // Convert YUY2 to I422.
 LIBYUV_API
-int YUY2ToI422(const uint8* src_yuy2, int src_stride_yuy2,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int YUY2ToI422(const uint8_t* src_yuy2,
+               int src_stride_yuy2,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   int y;
-  void (*YUY2ToUV422Row)(const uint8* src_yuy2,
-                         uint8* dst_u, uint8* dst_v, int width) =
-      YUY2ToUV422Row_C;
-  void (*YUY2ToYRow)(const uint8* src_yuy2, uint8* dst_y, int width) =
+  void (*YUY2ToUV422Row)(const uint8_t* src_yuy2, uint8_t* dst_u,
+                         uint8_t* dst_v, int width) = YUY2ToUV422Row_C;
+  void (*YUY2ToYRow)(const uint8_t* src_yuy2, uint8_t* dst_y, int width) =
       YUY2ToYRow_C;
+  if (!src_yuy2 || !dst_y || !dst_u || !dst_v || width <= 0 || height == 0) {
+    return -1;
+  }
   // Negative height means invert the image.
   if (height < 0) {
     height = -height;
@@ -431,10 +687,9 @@
     src_stride_yuy2 = -src_stride_yuy2;
   }
   // Coalesce rows.
-  if (src_stride_yuy2 == width * 2 &&
-      dst_stride_y == width &&
-      dst_stride_u * 2 == width &&
-      dst_stride_v * 2 == width) {
+  if (src_stride_yuy2 == width * 2 && dst_stride_y == width &&
+      dst_stride_u * 2 == width && dst_stride_v * 2 == width &&
+      width * height <= 32768) {
     width *= height;
     height = 1;
     src_stride_yuy2 = dst_stride_y = dst_stride_u = dst_stride_v = 0;
@@ -462,15 +717,23 @@
 #if defined(HAS_YUY2TOYROW_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     YUY2ToYRow = YUY2ToYRow_Any_NEON;
-    if (width >= 16) {
-      YUY2ToUV422Row = YUY2ToUV422Row_Any_NEON;
-    }
+    YUY2ToUV422Row = YUY2ToUV422Row_Any_NEON;
     if (IS_ALIGNED(width, 16)) {
       YUY2ToYRow = YUY2ToYRow_NEON;
       YUY2ToUV422Row = YUY2ToUV422Row_NEON;
     }
   }
 #endif
+#if defined(HAS_YUY2TOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    YUY2ToYRow = YUY2ToYRow_Any_MSA;
+    YUY2ToUV422Row = YUY2ToUV422Row_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      YUY2ToYRow = YUY2ToYRow_MSA;
+      YUY2ToUV422Row = YUY2ToUV422Row_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     YUY2ToUV422Row(src_yuy2, dst_u, dst_v, width);
@@ -485,17 +748,24 @@
 
 // Convert UYVY to I422.
 LIBYUV_API
-int UYVYToI422(const uint8* src_uyvy, int src_stride_uyvy,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int UYVYToI422(const uint8_t* src_uyvy,
+               int src_stride_uyvy,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   int y;
-  void (*UYVYToUV422Row)(const uint8* src_uyvy,
-                         uint8* dst_u, uint8* dst_v, int width) =
-      UYVYToUV422Row_C;
-  void (*UYVYToYRow)(const uint8* src_uyvy,
-                     uint8* dst_y, int width) = UYVYToYRow_C;
+  void (*UYVYToUV422Row)(const uint8_t* src_uyvy, uint8_t* dst_u,
+                         uint8_t* dst_v, int width) = UYVYToUV422Row_C;
+  void (*UYVYToYRow)(const uint8_t* src_uyvy, uint8_t* dst_y, int width) =
+      UYVYToYRow_C;
+  if (!src_uyvy || !dst_y || !dst_u || !dst_v || width <= 0 || height == 0) {
+    return -1;
+  }
   // Negative height means invert the image.
   if (height < 0) {
     height = -height;
@@ -503,10 +773,9 @@
     src_stride_uyvy = -src_stride_uyvy;
   }
   // Coalesce rows.
-  if (src_stride_uyvy == width * 2 &&
-      dst_stride_y == width &&
-      dst_stride_u * 2 == width &&
-      dst_stride_v * 2 == width) {
+  if (src_stride_uyvy == width * 2 && dst_stride_y == width &&
+      dst_stride_u * 2 == width && dst_stride_v * 2 == width &&
+      width * height <= 32768) {
     width *= height;
     height = 1;
     src_stride_uyvy = dst_stride_y = dst_stride_u = dst_stride_v = 0;
@@ -534,15 +803,23 @@
 #if defined(HAS_UYVYTOYROW_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     UYVYToYRow = UYVYToYRow_Any_NEON;
-    if (width >= 16) {
-      UYVYToUV422Row = UYVYToUV422Row_Any_NEON;
-    }
+    UYVYToUV422Row = UYVYToUV422Row_Any_NEON;
     if (IS_ALIGNED(width, 16)) {
       UYVYToYRow = UYVYToYRow_NEON;
       UYVYToUV422Row = UYVYToUV422Row_NEON;
     }
   }
 #endif
+#if defined(HAS_UYVYTOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    UYVYToYRow = UYVYToYRow_Any_MSA;
+    UYVYToUV422Row = UYVYToUV422Row_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      UYVYToYRow = UYVYToYRow_MSA;
+      UYVYToUV422Row = UYVYToUV422Row_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     UYVYToUV422Row(src_uyvy, dst_u, dst_v, width);
@@ -555,13 +832,82 @@
   return 0;
 }
 
+// Convert YUY2 to Y.
+LIBYUV_API
+int YUY2ToY(const uint8_t* src_yuy2,
+            int src_stride_yuy2,
+            uint8_t* dst_y,
+            int dst_stride_y,
+            int width,
+            int height) {
+  int y;
+  void (*YUY2ToYRow)(const uint8_t* src_yuy2, uint8_t* dst_y, int width) =
+      YUY2ToYRow_C;
+  if (!src_yuy2 || !dst_y || width <= 0 || height == 0) {
+    return -1;
+  }
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    src_yuy2 = src_yuy2 + (height - 1) * src_stride_yuy2;
+    src_stride_yuy2 = -src_stride_yuy2;
+  }
+  // Coalesce rows.
+  if (src_stride_yuy2 == width * 2 && dst_stride_y == width) {
+    width *= height;
+    height = 1;
+    src_stride_yuy2 = dst_stride_y = 0;
+  }
+#if defined(HAS_YUY2TOYROW_SSE2)
+  if (TestCpuFlag(kCpuHasSSE2)) {
+    YUY2ToYRow = YUY2ToYRow_Any_SSE2;
+    if (IS_ALIGNED(width, 16)) {
+      YUY2ToYRow = YUY2ToYRow_SSE2;
+    }
+  }
+#endif
+#if defined(HAS_YUY2TOYROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    YUY2ToYRow = YUY2ToYRow_Any_AVX2;
+    if (IS_ALIGNED(width, 32)) {
+      YUY2ToYRow = YUY2ToYRow_AVX2;
+    }
+  }
+#endif
+#if defined(HAS_YUY2TOYROW_NEON)
+  if (TestCpuFlag(kCpuHasNEON)) {
+    YUY2ToYRow = YUY2ToYRow_Any_NEON;
+    if (IS_ALIGNED(width, 16)) {
+      YUY2ToYRow = YUY2ToYRow_NEON;
+    }
+  }
+#endif
+#if defined(HAS_YUY2TOYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    YUY2ToYRow = YUY2ToYRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      YUY2ToYRow = YUY2ToYRow_MSA;
+    }
+  }
+#endif
+
+  for (y = 0; y < height; ++y) {
+    YUY2ToYRow(src_yuy2, dst_y, width);
+    src_yuy2 += src_stride_yuy2;
+    dst_y += dst_stride_y;
+  }
+  return 0;
+}
+
 // Mirror I400 with optional flipping
 LIBYUV_API
-int I400Mirror(const uint8* src_y, int src_stride_y,
-               uint8* dst_y, int dst_stride_y,
-               int width, int height) {
-  if (!src_y || !dst_y ||
-      width <= 0 || height == 0) {
+int I400Mirror(const uint8_t* src_y,
+               int src_stride_y,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               int width,
+               int height) {
+  if (!src_y || !dst_y || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -577,17 +923,24 @@
 
 // Mirror I420 with optional flipping
 LIBYUV_API
-int I420Mirror(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height) {
+int I420Mirror(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height) {
   int halfwidth = (width + 1) >> 1;
   int halfheight = (height + 1) >> 1;
-  if (!src_y || !src_u || !src_v || !dst_y || !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src_y || !src_u || !src_v || !dst_y || !dst_u || !dst_v || width <= 0 ||
+      height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -612,11 +965,14 @@
 
 // ARGB mirror.
 LIBYUV_API
-int ARGBMirror(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
+int ARGBMirror(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
   int y;
-  void (*ARGBMirrorRow)(const uint8* src, uint8* dst, int width) =
+  void (*ARGBMirrorRow)(const uint8_t* src, uint8_t* dst, int width) =
       ARGBMirrorRow_C;
   if (!src_argb || !dst_argb || width <= 0 || height == 0) {
     return -1;
@@ -651,6 +1007,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBMIRRORROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBMirrorRow = ARGBMirrorRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBMirrorRow = ARGBMirrorRow_MSA;
+    }
+  }
+#endif
 
   // Mirror plane
   for (y = 0; y < height; ++y) {
@@ -666,8 +1030,8 @@
 // the same blend function for all pixels if possible.
 LIBYUV_API
 ARGBBlendRow GetARGBBlend() {
-  void (*ARGBBlendRow)(const uint8* src_argb, const uint8* src_argb1,
-                       uint8* dst_argb, int width) = ARGBBlendRow_C;
+  void (*ARGBBlendRow)(const uint8_t* src_argb, const uint8_t* src_argb1,
+                       uint8_t* dst_argb, int width) = ARGBBlendRow_C;
 #if defined(HAS_ARGBBLENDROW_SSSE3)
   if (TestCpuFlag(kCpuHasSSSE3)) {
     ARGBBlendRow = ARGBBlendRow_SSSE3;
@@ -679,18 +1043,27 @@
     ARGBBlendRow = ARGBBlendRow_NEON;
   }
 #endif
+#if defined(HAS_ARGBBLENDROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBBlendRow = ARGBBlendRow_MSA;
+  }
+#endif
   return ARGBBlendRow;
 }
 
 // Alpha Blend 2 ARGB images and store to destination.
 LIBYUV_API
-int ARGBBlend(const uint8* src_argb0, int src_stride_argb0,
-              const uint8* src_argb1, int src_stride_argb1,
-              uint8* dst_argb, int dst_stride_argb,
-              int width, int height) {
+int ARGBBlend(const uint8_t* src_argb0,
+              int src_stride_argb0,
+              const uint8_t* src_argb1,
+              int src_stride_argb1,
+              uint8_t* dst_argb,
+              int dst_stride_argb,
+              int width,
+              int height) {
   int y;
-  void (*ARGBBlendRow)(const uint8* src_argb, const uint8* src_argb1,
-                       uint8* dst_argb, int width) = GetARGBBlend();
+  void (*ARGBBlendRow)(const uint8_t* src_argb, const uint8_t* src_argb1,
+                       uint8_t* dst_argb, int width) = GetARGBBlend();
   if (!src_argb0 || !src_argb1 || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
@@ -701,8 +1074,7 @@
     dst_stride_argb = -dst_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb0 == width * 4 &&
-      src_stride_argb1 == width * 4 &&
+  if (src_stride_argb0 == width * 4 && src_stride_argb1 == width * 4 &&
       dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
@@ -720,14 +1092,20 @@
 
 // Alpha Blend plane and store to destination.
 LIBYUV_API
-int BlendPlane(const uint8* src_y0, int src_stride_y0,
-               const uint8* src_y1, int src_stride_y1,
-               const uint8* alpha, int alpha_stride,
-               uint8* dst_y, int dst_stride_y,
-               int width, int height) {
+int BlendPlane(const uint8_t* src_y0,
+               int src_stride_y0,
+               const uint8_t* src_y1,
+               int src_stride_y1,
+               const uint8_t* alpha,
+               int alpha_stride,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               int width,
+               int height) {
   int y;
-  void (*BlendPlaneRow)(const uint8* src0, const uint8* src1,
-      const uint8* alpha, uint8* dst, int width) = BlendPlaneRow_C;
+  void (*BlendPlaneRow)(const uint8_t* src0, const uint8_t* src1,
+                        const uint8_t* alpha, uint8_t* dst, int width) =
+      BlendPlaneRow_C;
   if (!src_y0 || !src_y1 || !alpha || !dst_y || width <= 0 || height == 0) {
     return -1;
   }
@@ -739,10 +1117,8 @@
   }
 
   // Coalesce rows for Y plane.
-  if (src_stride_y0 == width &&
-      src_stride_y1 == width &&
-      alpha_stride == width &&
-      dst_stride_y == width) {
+  if (src_stride_y0 == width && src_stride_y1 == width &&
+      alpha_stride == width && dst_stride_y == width) {
     width *= height;
     height = 1;
     src_stride_y0 = src_stride_y1 = alpha_stride = dst_stride_y = 0;
@@ -750,7 +1126,7 @@
 
 #if defined(HAS_BLENDPLANEROW_SSSE3)
   if (TestCpuFlag(kCpuHasSSSE3)) {
-  BlendPlaneRow = BlendPlaneRow_Any_SSSE3;
+    BlendPlaneRow = BlendPlaneRow_Any_SSSE3;
     if (IS_ALIGNED(width, 8)) {
       BlendPlaneRow = BlendPlaneRow_SSSE3;
     }
@@ -758,7 +1134,7 @@
 #endif
 #if defined(HAS_BLENDPLANEROW_AVX2)
   if (TestCpuFlag(kCpuHasAVX2)) {
-  BlendPlaneRow = BlendPlaneRow_Any_AVX2;
+    BlendPlaneRow = BlendPlaneRow_Any_AVX2;
     if (IS_ALIGNED(width, 32)) {
       BlendPlaneRow = BlendPlaneRow_AVX2;
     }
@@ -778,24 +1154,36 @@
 #define MAXTWIDTH 2048
 // Alpha Blend YUV images and store to destination.
 LIBYUV_API
-int I420Blend(const uint8* src_y0, int src_stride_y0,
-              const uint8* src_u0, int src_stride_u0,
-              const uint8* src_v0, int src_stride_v0,
-              const uint8* src_y1, int src_stride_y1,
-              const uint8* src_u1, int src_stride_u1,
-              const uint8* src_v1, int src_stride_v1,
-              const uint8* alpha, int alpha_stride,
-              uint8* dst_y, int dst_stride_y,
-              uint8* dst_u, int dst_stride_u,
-              uint8* dst_v, int dst_stride_v,
-              int width, int height) {
+int I420Blend(const uint8_t* src_y0,
+              int src_stride_y0,
+              const uint8_t* src_u0,
+              int src_stride_u0,
+              const uint8_t* src_v0,
+              int src_stride_v0,
+              const uint8_t* src_y1,
+              int src_stride_y1,
+              const uint8_t* src_u1,
+              int src_stride_u1,
+              const uint8_t* src_v1,
+              int src_stride_v1,
+              const uint8_t* alpha,
+              int alpha_stride,
+              uint8_t* dst_y,
+              int dst_stride_y,
+              uint8_t* dst_u,
+              int dst_stride_u,
+              uint8_t* dst_v,
+              int dst_stride_v,
+              int width,
+              int height) {
   int y;
   // Half width/height for UV.
   int halfwidth = (width + 1) >> 1;
-  void (*BlendPlaneRow)(const uint8* src0, const uint8* src1,
-      const uint8* alpha, uint8* dst, int width) = BlendPlaneRow_C;
-  void (*ScaleRowDown2)(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst_ptr, int dst_width) = ScaleRowDown2Box_C;
+  void (*BlendPlaneRow)(const uint8_t* src0, const uint8_t* src1,
+                        const uint8_t* alpha, uint8_t* dst, int width) =
+      BlendPlaneRow_C;
+  void (*ScaleRowDown2)(const uint8_t* src_ptr, ptrdiff_t src_stride,
+                        uint8_t* dst_ptr, int dst_width) = ScaleRowDown2Box_C;
   if (!src_y0 || !src_u0 || !src_v0 || !src_y1 || !src_u1 || !src_v1 ||
       !alpha || !dst_y || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
@@ -809,11 +1197,8 @@
   }
 
   // Blend Y plane.
-  BlendPlane(src_y0, src_stride_y0,
-             src_y1, src_stride_y1,
-             alpha, alpha_stride,
-             dst_y, dst_stride_y,
-             width, height);
+  BlendPlane(src_y0, src_stride_y0, src_y1, src_stride_y1, alpha, alpha_stride,
+             dst_y, dst_stride_y, width, height);
 
 #if defined(HAS_BLENDPLANEROW_SSSE3)
   if (TestCpuFlag(kCpuHasSSSE3)) {
@@ -893,13 +1278,17 @@
 
 // Multiply 2 ARGB images and store to destination.
 LIBYUV_API
-int ARGBMultiply(const uint8* src_argb0, int src_stride_argb0,
-                 const uint8* src_argb1, int src_stride_argb1,
-                 uint8* dst_argb, int dst_stride_argb,
-                 int width, int height) {
+int ARGBMultiply(const uint8_t* src_argb0,
+                 int src_stride_argb0,
+                 const uint8_t* src_argb1,
+                 int src_stride_argb1,
+                 uint8_t* dst_argb,
+                 int dst_stride_argb,
+                 int width,
+                 int height) {
   int y;
-  void (*ARGBMultiplyRow)(const uint8* src0, const uint8* src1, uint8* dst,
-                          int width) = ARGBMultiplyRow_C;
+  void (*ARGBMultiplyRow)(const uint8_t* src0, const uint8_t* src1,
+                          uint8_t* dst, int width) = ARGBMultiplyRow_C;
   if (!src_argb0 || !src_argb1 || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
@@ -910,8 +1299,7 @@
     dst_stride_argb = -dst_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb0 == width * 4 &&
-      src_stride_argb1 == width * 4 &&
+  if (src_stride_argb0 == width * 4 && src_stride_argb1 == width * 4 &&
       dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
@@ -941,6 +1329,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBMULTIPLYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBMultiplyRow = ARGBMultiplyRow_Any_MSA;
+    if (IS_ALIGNED(width, 4)) {
+      ARGBMultiplyRow = ARGBMultiplyRow_MSA;
+    }
+  }
+#endif
 
   // Multiply plane
   for (y = 0; y < height; ++y) {
@@ -954,12 +1350,16 @@
 
 // Add 2 ARGB images and store to destination.
 LIBYUV_API
-int ARGBAdd(const uint8* src_argb0, int src_stride_argb0,
-            const uint8* src_argb1, int src_stride_argb1,
-            uint8* dst_argb, int dst_stride_argb,
-            int width, int height) {
+int ARGBAdd(const uint8_t* src_argb0,
+            int src_stride_argb0,
+            const uint8_t* src_argb1,
+            int src_stride_argb1,
+            uint8_t* dst_argb,
+            int dst_stride_argb,
+            int width,
+            int height) {
   int y;
-  void (*ARGBAddRow)(const uint8* src0, const uint8* src1, uint8* dst,
+  void (*ARGBAddRow)(const uint8_t* src0, const uint8_t* src1, uint8_t* dst,
                      int width) = ARGBAddRow_C;
   if (!src_argb0 || !src_argb1 || !dst_argb || width <= 0 || height == 0) {
     return -1;
@@ -971,8 +1371,7 @@
     dst_stride_argb = -dst_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb0 == width * 4 &&
-      src_stride_argb1 == width * 4 &&
+  if (src_stride_argb0 == width * 4 && src_stride_argb1 == width * 4 &&
       dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
@@ -1007,6 +1406,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBADDROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBAddRow = ARGBAddRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      ARGBAddRow = ARGBAddRow_MSA;
+    }
+  }
+#endif
 
   // Add plane
   for (y = 0; y < height; ++y) {
@@ -1020,13 +1427,17 @@
 
 // Subtract 2 ARGB images and store to destination.
 LIBYUV_API
-int ARGBSubtract(const uint8* src_argb0, int src_stride_argb0,
-                 const uint8* src_argb1, int src_stride_argb1,
-                 uint8* dst_argb, int dst_stride_argb,
-                 int width, int height) {
+int ARGBSubtract(const uint8_t* src_argb0,
+                 int src_stride_argb0,
+                 const uint8_t* src_argb1,
+                 int src_stride_argb1,
+                 uint8_t* dst_argb,
+                 int dst_stride_argb,
+                 int width,
+                 int height) {
   int y;
-  void (*ARGBSubtractRow)(const uint8* src0, const uint8* src1, uint8* dst,
-                          int width) = ARGBSubtractRow_C;
+  void (*ARGBSubtractRow)(const uint8_t* src0, const uint8_t* src1,
+                          uint8_t* dst, int width) = ARGBSubtractRow_C;
   if (!src_argb0 || !src_argb1 || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
@@ -1037,8 +1448,7 @@
     dst_stride_argb = -dst_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb0 == width * 4 &&
-      src_stride_argb1 == width * 4 &&
+  if (src_stride_argb0 == width * 4 && src_stride_argb1 == width * 4 &&
       dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
@@ -1068,6 +1478,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBSUBTRACTROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBSubtractRow = ARGBSubtractRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      ARGBSubtractRow = ARGBSubtractRow_MSA;
+    }
+  }
+#endif
 
   // Subtract plane
   for (y = 0; y < height; ++y) {
@@ -1079,21 +1497,23 @@
   return 0;
 }
 // Convert I422 to RGBA with matrix
-static int I422ToRGBAMatrix(const uint8* src_y, int src_stride_y,
-                            const uint8* src_u, int src_stride_u,
-                            const uint8* src_v, int src_stride_v,
-                            uint8* dst_rgba, int dst_stride_rgba,
+static int I422ToRGBAMatrix(const uint8_t* src_y,
+                            int src_stride_y,
+                            const uint8_t* src_u,
+                            int src_stride_u,
+                            const uint8_t* src_v,
+                            int src_stride_v,
+                            uint8_t* dst_rgba,
+                            int dst_stride_rgba,
                             const struct YuvConstants* yuvconstants,
-                            int width, int height) {
+                            int width,
+                            int height) {
   int y;
-  void (*I422ToRGBARow)(const uint8* y_buf,
-                        const uint8* u_buf,
-                        const uint8* v_buf,
-                        uint8* rgb_buf,
-                        const struct YuvConstants* yuvconstants,
-                        int width) = I422ToRGBARow_C;
-  if (!src_y || !src_u || !src_v || !dst_rgba ||
-      width <= 0 || height == 0) {
+  void (*I422ToRGBARow)(const uint8_t* y_buf, const uint8_t* u_buf,
+                        const uint8_t* v_buf, uint8_t* rgb_buf,
+                        const struct YuvConstants* yuvconstants, int width) =
+      I422ToRGBARow_C;
+  if (!src_y || !src_u || !src_v || !dst_rgba || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1126,13 +1546,12 @@
     }
   }
 #endif
-#if defined(HAS_I422TORGBAROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && IS_ALIGNED(width, 4) &&
-      IS_ALIGNED(src_y, 4) && IS_ALIGNED(src_stride_y, 4) &&
-      IS_ALIGNED(src_u, 2) && IS_ALIGNED(src_stride_u, 2) &&
-      IS_ALIGNED(src_v, 2) && IS_ALIGNED(src_stride_v, 2) &&
-      IS_ALIGNED(dst_rgba, 4) && IS_ALIGNED(dst_stride_rgba, 4)) {
-    I422ToRGBARow = I422ToRGBARow_DSPR2;
+#if defined(HAS_I422TORGBAROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToRGBARow = I422ToRGBARow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      I422ToRGBARow = I422ToRGBARow_MSA;
+    }
   }
 #endif
 
@@ -1148,48 +1567,55 @@
 
 // Convert I422 to RGBA.
 LIBYUV_API
-int I422ToRGBA(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_rgba, int dst_stride_rgba,
-               int width, int height) {
-  return I422ToRGBAMatrix(src_y, src_stride_y,
-                          src_u, src_stride_u,
-                          src_v, src_stride_v,
-                          dst_rgba, dst_stride_rgba,
-                          &kYuvI601Constants,
-                          width, height);
+int I422ToRGBA(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_rgba,
+               int dst_stride_rgba,
+               int width,
+               int height) {
+  return I422ToRGBAMatrix(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                          src_stride_v, dst_rgba, dst_stride_rgba,
+                          &kYuvI601Constants, width, height);
 }
 
 // Convert I422 to BGRA.
 LIBYUV_API
-int I422ToBGRA(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_bgra, int dst_stride_bgra,
-               int width, int height) {
-  return I422ToRGBAMatrix(src_y, src_stride_y,
-                          src_v, src_stride_v,  // Swap U and V
-                          src_u, src_stride_u,
-                          dst_bgra, dst_stride_bgra,
+int I422ToBGRA(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_bgra,
+               int dst_stride_bgra,
+               int width,
+               int height) {
+  return I422ToRGBAMatrix(src_y, src_stride_y, src_v,
+                          src_stride_v,  // Swap U and V
+                          src_u, src_stride_u, dst_bgra, dst_stride_bgra,
                           &kYvuI601Constants,  // Use Yvu matrix
                           width, height);
 }
 
 // Convert NV12 to RGB565.
 LIBYUV_API
-int NV12ToRGB565(const uint8* src_y, int src_stride_y,
-                 const uint8* src_uv, int src_stride_uv,
-                 uint8* dst_rgb565, int dst_stride_rgb565,
-                 int width, int height) {
+int NV12ToRGB565(const uint8_t* src_y,
+                 int src_stride_y,
+                 const uint8_t* src_uv,
+                 int src_stride_uv,
+                 uint8_t* dst_rgb565,
+                 int dst_stride_rgb565,
+                 int width,
+                 int height) {
   int y;
-  void (*NV12ToRGB565Row)(const uint8* y_buf,
-                          const uint8* uv_buf,
-                          uint8* rgb_buf,
-                          const struct YuvConstants* yuvconstants,
-                          int width) = NV12ToRGB565Row_C;
-  if (!src_y || !src_uv || !dst_rgb565 ||
-      width <= 0 || height == 0) {
+  void (*NV12ToRGB565Row)(
+      const uint8_t* y_buf, const uint8_t* uv_buf, uint8_t* rgb_buf,
+      const struct YuvConstants* yuvconstants, int width) = NV12ToRGB565Row_C;
+  if (!src_y || !src_uv || !dst_rgb565 || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1222,6 +1648,14 @@
     }
   }
 #endif
+#if defined(HAS_NV12TORGB565ROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    NV12ToRGB565Row = NV12ToRGB565Row_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      NV12ToRGB565Row = NV12ToRGB565Row_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     NV12ToRGB565Row(src_y, src_uv, dst_rgb565, &kYuvI601Constants, width);
@@ -1236,14 +1670,16 @@
 
 // Convert RAW to RGB24.
 LIBYUV_API
-int RAWToRGB24(const uint8* src_raw, int src_stride_raw,
-               uint8* dst_rgb24, int dst_stride_rgb24,
-               int width, int height) {
+int RAWToRGB24(const uint8_t* src_raw,
+               int src_stride_raw,
+               uint8_t* dst_rgb24,
+               int dst_stride_rgb24,
+               int width,
+               int height) {
   int y;
-  void (*RAWToRGB24Row)(const uint8* src_rgb, uint8* dst_rgb24, int width) =
+  void (*RAWToRGB24Row)(const uint8_t* src_rgb, uint8_t* dst_rgb24, int width) =
       RAWToRGB24Row_C;
-  if (!src_raw || !dst_rgb24 ||
-      width <= 0 || height == 0) {
+  if (!src_raw || !dst_rgb24 || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -1253,8 +1689,7 @@
     src_stride_raw = -src_stride_raw;
   }
   // Coalesce rows.
-  if (src_stride_raw == width * 3 &&
-      dst_stride_rgb24 == width * 3) {
+  if (src_stride_raw == width * 3 && dst_stride_rgb24 == width * 3) {
     width *= height;
     height = 1;
     src_stride_raw = dst_stride_rgb24 = 0;
@@ -1275,6 +1710,14 @@
     }
   }
 #endif
+#if defined(HAS_RAWTORGB24ROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    RAWToRGB24Row = RAWToRGB24Row_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      RAWToRGB24Row = RAWToRGB24Row_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     RAWToRGB24Row(src_raw, dst_rgb24, width);
@@ -1285,11 +1728,13 @@
 }
 
 LIBYUV_API
-void SetPlane(uint8* dst_y, int dst_stride_y,
-              int width, int height,
-              uint32 value) {
+void SetPlane(uint8_t* dst_y,
+              int dst_stride_y,
+              int width,
+              int height,
+              uint32_t value) {
   int y;
-  void (*SetRow)(uint8* dst, uint8 value, int width) = SetRow_C;
+  void (*SetRow)(uint8_t * dst, uint8_t value, int width) = SetRow_C;
   if (height < 0) {
     height = -height;
     dst_y = dst_y + (height - 1) * dst_stride_y;
@@ -1322,6 +1767,11 @@
     SetRow = SetRow_ERMS;
   }
 #endif
+#if defined(HAS_SETROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA) && IS_ALIGNED(width, 16)) {
+    SetRow = SetRow_MSA;
+  }
+#endif
 
   // Set plane
   for (y = 0; y < height; ++y) {
@@ -1332,22 +1782,26 @@
 
 // Draw a rectangle into I420
 LIBYUV_API
-int I420Rect(uint8* dst_y, int dst_stride_y,
-             uint8* dst_u, int dst_stride_u,
-             uint8* dst_v, int dst_stride_v,
-             int x, int y,
-             int width, int height,
-             int value_y, int value_u, int value_v) {
+int I420Rect(uint8_t* dst_y,
+             int dst_stride_y,
+             uint8_t* dst_u,
+             int dst_stride_u,
+             uint8_t* dst_v,
+             int dst_stride_v,
+             int x,
+             int y,
+             int width,
+             int height,
+             int value_y,
+             int value_u,
+             int value_v) {
   int halfwidth = (width + 1) >> 1;
   int halfheight = (height + 1) >> 1;
-  uint8* start_y = dst_y + y * dst_stride_y + x;
-  uint8* start_u = dst_u + (y / 2) * dst_stride_u + (x / 2);
-  uint8* start_v = dst_v + (y / 2) * dst_stride_v + (x / 2);
-  if (!dst_y || !dst_u || !dst_v ||
-      width <= 0 || height == 0 ||
-      x < 0 || y < 0 ||
-      value_y < 0 || value_y > 255 ||
-      value_u < 0 || value_u > 255 ||
+  uint8_t* start_y = dst_y + y * dst_stride_y + x;
+  uint8_t* start_u = dst_u + (y / 2) * dst_stride_u + (x / 2);
+  uint8_t* start_v = dst_v + (y / 2) * dst_stride_v + (x / 2);
+  if (!dst_y || !dst_u || !dst_v || width <= 0 || height == 0 || x < 0 ||
+      y < 0 || value_y < 0 || value_y > 255 || value_u < 0 || value_u > 255 ||
       value_v < 0 || value_v > 255) {
     return -1;
   }
@@ -1360,15 +1814,17 @@
 
 // Draw a rectangle into ARGB
 LIBYUV_API
-int ARGBRect(uint8* dst_argb, int dst_stride_argb,
-             int dst_x, int dst_y,
-             int width, int height,
-             uint32 value) {
+int ARGBRect(uint8_t* dst_argb,
+             int dst_stride_argb,
+             int dst_x,
+             int dst_y,
+             int width,
+             int height,
+             uint32_t value) {
   int y;
-  void (*ARGBSetRow)(uint8* dst_argb, uint32 value, int width) = ARGBSetRow_C;
-  if (!dst_argb ||
-      width <= 0 || height == 0 ||
-      dst_x < 0 || dst_y < 0) {
+  void (*ARGBSetRow)(uint8_t * dst_argb, uint32_t value, int width) =
+      ARGBSetRow_C;
+  if (!dst_argb || width <= 0 || height == 0 || dst_x < 0 || dst_y < 0) {
     return -1;
   }
   if (height < 0) {
@@ -1397,6 +1853,14 @@
     ARGBSetRow = ARGBSetRow_X86;
   }
 #endif
+#if defined(HAS_ARGBSETROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBSetRow = ARGBSetRow_Any_MSA;
+    if (IS_ALIGNED(width, 4)) {
+      ARGBSetRow = ARGBSetRow_MSA;
+    }
+  }
+#endif
 
   // Set plane
   for (y = 0; y < height; ++y) {
@@ -1420,11 +1884,14 @@
 //   f is foreground pixel premultiplied by alpha
 
 LIBYUV_API
-int ARGBAttenuate(const uint8* src_argb, int src_stride_argb,
-                  uint8* dst_argb, int dst_stride_argb,
-                  int width, int height) {
+int ARGBAttenuate(const uint8_t* src_argb,
+                  int src_stride_argb,
+                  uint8_t* dst_argb,
+                  int dst_stride_argb,
+                  int width,
+                  int height) {
   int y;
-  void (*ARGBAttenuateRow)(const uint8* src_argb, uint8* dst_argb,
+  void (*ARGBAttenuateRow)(const uint8_t* src_argb, uint8_t* dst_argb,
                            int width) = ARGBAttenuateRow_C;
   if (!src_argb || !dst_argb || width <= 0 || height == 0) {
     return -1;
@@ -1435,8 +1902,7 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_argb == width * 4 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_argb = 0;
@@ -1465,6 +1931,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBATTENUATEROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBAttenuateRow = ARGBAttenuateRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      ARGBAttenuateRow = ARGBAttenuateRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     ARGBAttenuateRow(src_argb, dst_argb, width);
@@ -1476,11 +1950,14 @@
 
// Convert preattenuated ARGB to unattenuated ARGB.
 LIBYUV_API
-int ARGBUnattenuate(const uint8* src_argb, int src_stride_argb,
-                    uint8* dst_argb, int dst_stride_argb,
-                    int width, int height) {
+int ARGBUnattenuate(const uint8_t* src_argb,
+                    int src_stride_argb,
+                    uint8_t* dst_argb,
+                    int dst_stride_argb,
+                    int width,
+                    int height) {
   int y;
-  void (*ARGBUnattenuateRow)(const uint8* src_argb, uint8* dst_argb,
+  void (*ARGBUnattenuateRow)(const uint8_t* src_argb, uint8_t* dst_argb,
                              int width) = ARGBUnattenuateRow_C;
   if (!src_argb || !dst_argb || width <= 0 || height == 0) {
     return -1;
@@ -1491,8 +1968,7 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_argb == width * 4 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_argb = 0;
@@ -1513,7 +1989,7 @@
     }
   }
 #endif
-// TODO(fbarchard): Neon version.
+  // TODO(fbarchard): Neon version.
 
   for (y = 0; y < height; ++y) {
     ARGBUnattenuateRow(src_argb, dst_argb, width);
@@ -1525,12 +2001,15 @@
 
 // Convert ARGB to Grayed ARGB.
 LIBYUV_API
-int ARGBGrayTo(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_argb, int dst_stride_argb,
-               int width, int height) {
+int ARGBGrayTo(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height) {
   int y;
-  void (*ARGBGrayRow)(const uint8* src_argb, uint8* dst_argb,
-                      int width) = ARGBGrayRow_C;
+  void (*ARGBGrayRow)(const uint8_t* src_argb, uint8_t* dst_argb, int width) =
+      ARGBGrayRow_C;
   if (!src_argb || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
@@ -1540,8 +2019,7 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_argb == width * 4 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_argb = 0;
@@ -1556,6 +2034,11 @@
     ARGBGrayRow = ARGBGrayRow_NEON;
   }
 #endif
+#if defined(HAS_ARGBGRAYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA) && IS_ALIGNED(width, 8)) {
+    ARGBGrayRow = ARGBGrayRow_MSA;
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     ARGBGrayRow(src_argb, dst_argb, width);
@@ -1567,13 +2050,16 @@
 
 // Make a rectangle of ARGB gray scale.
 LIBYUV_API
-int ARGBGray(uint8* dst_argb, int dst_stride_argb,
-             int dst_x, int dst_y,
-             int width, int height) {
+int ARGBGray(uint8_t* dst_argb,
+             int dst_stride_argb,
+             int dst_x,
+             int dst_y,
+             int width,
+             int height) {
   int y;
-  void (*ARGBGrayRow)(const uint8* src_argb, uint8* dst_argb,
-                      int width) = ARGBGrayRow_C;
-  uint8* dst = dst_argb + dst_y * dst_stride_argb + dst_x * 4;
+  void (*ARGBGrayRow)(const uint8_t* src_argb, uint8_t* dst_argb, int width) =
+      ARGBGrayRow_C;
+  uint8_t* dst = dst_argb + dst_y * dst_stride_argb + dst_x * 4;
   if (!dst_argb || width <= 0 || height <= 0 || dst_x < 0 || dst_y < 0) {
     return -1;
   }
@@ -1593,6 +2079,12 @@
     ARGBGrayRow = ARGBGrayRow_NEON;
   }
 #endif
+#if defined(HAS_ARGBGRAYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA) && IS_ALIGNED(width, 8)) {
+    ARGBGrayRow = ARGBGrayRow_MSA;
+  }
+#endif
+
   for (y = 0; y < height; ++y) {
     ARGBGrayRow(dst, dst, width);
     dst += dst_stride_argb;
@@ -1602,11 +2094,15 @@
 
 // Make a rectangle of ARGB Sepia tone.
 LIBYUV_API
-int ARGBSepia(uint8* dst_argb, int dst_stride_argb,
-              int dst_x, int dst_y, int width, int height) {
+int ARGBSepia(uint8_t* dst_argb,
+              int dst_stride_argb,
+              int dst_x,
+              int dst_y,
+              int width,
+              int height) {
   int y;
-  void (*ARGBSepiaRow)(uint8* dst_argb, int width) = ARGBSepiaRow_C;
-  uint8* dst = dst_argb + dst_y * dst_stride_argb + dst_x * 4;
+  void (*ARGBSepiaRow)(uint8_t * dst_argb, int width) = ARGBSepiaRow_C;
+  uint8_t* dst = dst_argb + dst_y * dst_stride_argb + dst_x * 4;
   if (!dst_argb || width <= 0 || height <= 0 || dst_x < 0 || dst_y < 0) {
     return -1;
   }
@@ -1626,6 +2122,12 @@
     ARGBSepiaRow = ARGBSepiaRow_NEON;
   }
 #endif
+#if defined(HAS_ARGBSEPIAROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA) && IS_ALIGNED(width, 8)) {
+    ARGBSepiaRow = ARGBSepiaRow_MSA;
+  }
+#endif
+
   for (y = 0; y < height; ++y) {
     ARGBSepiaRow(dst, width);
     dst += dst_stride_argb;
@@ -1636,13 +2138,17 @@
 // Apply a 4x4 matrix to each ARGB pixel.
 // Note: Normally for shading, but can be used to swizzle or invert.
 LIBYUV_API
-int ARGBColorMatrix(const uint8* src_argb, int src_stride_argb,
-                    uint8* dst_argb, int dst_stride_argb,
-                    const int8* matrix_argb,
-                    int width, int height) {
+int ARGBColorMatrix(const uint8_t* src_argb,
+                    int src_stride_argb,
+                    uint8_t* dst_argb,
+                    int dst_stride_argb,
+                    const int8_t* matrix_argb,
+                    int width,
+                    int height) {
   int y;
-  void (*ARGBColorMatrixRow)(const uint8* src_argb, uint8* dst_argb,
-      const int8* matrix_argb, int width) = ARGBColorMatrixRow_C;
+  void (*ARGBColorMatrixRow)(const uint8_t* src_argb, uint8_t* dst_argb,
+                             const int8_t* matrix_argb, int width) =
+      ARGBColorMatrixRow_C;
   if (!src_argb || !dst_argb || !matrix_argb || width <= 0 || height == 0) {
     return -1;
   }
@@ -1652,8 +2158,7 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_argb == width * 4 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_argb = 0;
@@ -1668,6 +2173,11 @@
     ARGBColorMatrixRow = ARGBColorMatrixRow_NEON;
   }
 #endif
+#if defined(HAS_ARGBCOLORMATRIXROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA) && IS_ALIGNED(width, 8)) {
+    ARGBColorMatrixRow = ARGBColorMatrixRow_MSA;
+  }
+#endif
   for (y = 0; y < height; ++y) {
     ARGBColorMatrixRow(src_argb, dst_argb, matrix_argb, width);
     src_argb += src_stride_argb;
@@ -1679,13 +2189,17 @@
 // Apply a 4x3 matrix to each ARGB pixel.
 // Deprecated.
 LIBYUV_API
-int RGBColorMatrix(uint8* dst_argb, int dst_stride_argb,
-                   const int8* matrix_rgb,
-                   int dst_x, int dst_y, int width, int height) {
-  SIMD_ALIGNED(int8 matrix_argb[16]);
-  uint8* dst = dst_argb + dst_y * dst_stride_argb + dst_x * 4;
-  if (!dst_argb || !matrix_rgb || width <= 0 || height <= 0 ||
-      dst_x < 0 || dst_y < 0) {
+int RGBColorMatrix(uint8_t* dst_argb,
+                   int dst_stride_argb,
+                   const int8_t* matrix_rgb,
+                   int dst_x,
+                   int dst_y,
+                   int width,
+                   int height) {
+  SIMD_ALIGNED(int8_t matrix_argb[16]);
+  uint8_t* dst = dst_argb + dst_y * dst_stride_argb + dst_x * 4;
+  if (!dst_argb || !matrix_rgb || width <= 0 || height <= 0 || dst_x < 0 ||
+      dst_y < 0) {
     return -1;
   }
 
@@ -1705,23 +2219,26 @@
   matrix_argb[14] = matrix_argb[13] = matrix_argb[12] = 0;
   matrix_argb[15] = 64;  // 1.0
 
-  return ARGBColorMatrix((const uint8*)(dst), dst_stride_argb,
-                         dst, dst_stride_argb,
-                         &matrix_argb[0], width, height);
+  return ARGBColorMatrix((const uint8_t*)(dst), dst_stride_argb, dst,
+                         dst_stride_argb, &matrix_argb[0], width, height);
 }
 
 // Apply a color table each ARGB pixel.
 // Table contains 256 ARGB values.
 LIBYUV_API
-int ARGBColorTable(uint8* dst_argb, int dst_stride_argb,
-                   const uint8* table_argb,
-                   int dst_x, int dst_y, int width, int height) {
+int ARGBColorTable(uint8_t* dst_argb,
+                   int dst_stride_argb,
+                   const uint8_t* table_argb,
+                   int dst_x,
+                   int dst_y,
+                   int width,
+                   int height) {
   int y;
-  void (*ARGBColorTableRow)(uint8* dst_argb, const uint8* table_argb,
+  void (*ARGBColorTableRow)(uint8_t * dst_argb, const uint8_t* table_argb,
                             int width) = ARGBColorTableRow_C;
-  uint8* dst = dst_argb + dst_y * dst_stride_argb + dst_x * 4;
-  if (!dst_argb || !table_argb || width <= 0 || height <= 0 ||
-      dst_x < 0 || dst_y < 0) {
+  uint8_t* dst = dst_argb + dst_y * dst_stride_argb + dst_x * 4;
+  if (!dst_argb || !table_argb || width <= 0 || height <= 0 || dst_x < 0 ||
+      dst_y < 0) {
     return -1;
   }
   // Coalesce rows.
@@ -1745,15 +2262,19 @@
 // Apply a color table each ARGB pixel but preserve destination alpha.
 // Table contains 256 ARGB values.
 LIBYUV_API
-int RGBColorTable(uint8* dst_argb, int dst_stride_argb,
-                  const uint8* table_argb,
-                  int dst_x, int dst_y, int width, int height) {
+int RGBColorTable(uint8_t* dst_argb,
+                  int dst_stride_argb,
+                  const uint8_t* table_argb,
+                  int dst_x,
+                  int dst_y,
+                  int width,
+                  int height) {
   int y;
-  void (*RGBColorTableRow)(uint8* dst_argb, const uint8* table_argb,
+  void (*RGBColorTableRow)(uint8_t * dst_argb, const uint8_t* table_argb,
                            int width) = RGBColorTableRow_C;
-  uint8* dst = dst_argb + dst_y * dst_stride_argb + dst_x * 4;
-  if (!dst_argb || !table_argb || width <= 0 || height <= 0 ||
-      dst_x < 0 || dst_y < 0) {
+  uint8_t* dst = dst_argb + dst_y * dst_stride_argb + dst_x * 4;
+  if (!dst_argb || !table_argb || width <= 0 || height <= 0 || dst_x < 0 ||
+      dst_y < 0) {
     return -1;
   }
   // Coalesce rows.
@@ -1784,13 +2305,19 @@
 // Caveat - although SSE2 saturates, the C function does not and should be used
 // with care if doing anything but quantization.
 LIBYUV_API
-int ARGBQuantize(uint8* dst_argb, int dst_stride_argb,
-                 int scale, int interval_size, int interval_offset,
-                 int dst_x, int dst_y, int width, int height) {
+int ARGBQuantize(uint8_t* dst_argb,
+                 int dst_stride_argb,
+                 int scale,
+                 int interval_size,
+                 int interval_offset,
+                 int dst_x,
+                 int dst_y,
+                 int width,
+                 int height) {
   int y;
-  void (*ARGBQuantizeRow)(uint8* dst_argb, int scale, int interval_size,
+  void (*ARGBQuantizeRow)(uint8_t * dst_argb, int scale, int interval_size,
                           int interval_offset, int width) = ARGBQuantizeRow_C;
-  uint8* dst = dst_argb + dst_y * dst_stride_argb + dst_x * 4;
+  uint8_t* dst = dst_argb + dst_y * dst_stride_argb + dst_x * 4;
   if (!dst_argb || width <= 0 || height <= 0 || dst_x < 0 || dst_y < 0 ||
       interval_size < 1 || interval_size > 255) {
     return -1;
@@ -1811,6 +2338,11 @@
     ARGBQuantizeRow = ARGBQuantizeRow_NEON;
   }
 #endif
+#if defined(HAS_ARGBQUANTIZEROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA) && IS_ALIGNED(width, 8)) {
+    ARGBQuantizeRow = ARGBQuantizeRow_MSA;
+  }
+#endif
   for (y = 0; y < height; ++y) {
     ARGBQuantizeRow(dst, scale, interval_size, interval_offset, width);
     dst += dst_stride_argb;
@@ -1821,13 +2353,17 @@
 // Computes table of cumulative sum for image where the value is the sum
 // of all values above and to the left of the entry. Used by ARGBBlur.
 LIBYUV_API
-int ARGBComputeCumulativeSum(const uint8* src_argb, int src_stride_argb,
-                             int32* dst_cumsum, int dst_stride32_cumsum,
-                             int width, int height) {
+int ARGBComputeCumulativeSum(const uint8_t* src_argb,
+                             int src_stride_argb,
+                             int32_t* dst_cumsum,
+                             int dst_stride32_cumsum,
+                             int width,
+                             int height) {
   int y;
-  void (*ComputeCumulativeSumRow)(const uint8* row, int32* cumsum,
-      const int32* previous_cumsum, int width) = ComputeCumulativeSumRow_C;
-  int32* previous_cumsum = dst_cumsum;
+  void (*ComputeCumulativeSumRow)(const uint8_t* row, int32_t* cumsum,
+                                  const int32_t* previous_cumsum, int width) =
+      ComputeCumulativeSumRow_C;
+  int32_t* previous_cumsum = dst_cumsum;
   if (!dst_cumsum || !src_argb || width <= 0 || height <= 0) {
     return -1;
   }
@@ -1851,18 +2387,25 @@
 // aligned to 16 byte boundary. height can be radius * 2 + 2 to save memory
 // as the buffer is treated as circular.
 LIBYUV_API
-int ARGBBlur(const uint8* src_argb, int src_stride_argb,
-             uint8* dst_argb, int dst_stride_argb,
-             int32* dst_cumsum, int dst_stride32_cumsum,
-             int width, int height, int radius) {
+int ARGBBlur(const uint8_t* src_argb,
+             int src_stride_argb,
+             uint8_t* dst_argb,
+             int dst_stride_argb,
+             int32_t* dst_cumsum,
+             int dst_stride32_cumsum,
+             int width,
+             int height,
+             int radius) {
   int y;
-  void (*ComputeCumulativeSumRow)(const uint8 *row, int32 *cumsum,
-      const int32* previous_cumsum, int width) = ComputeCumulativeSumRow_C;
-  void (*CumulativeSumToAverageRow)(const int32* topleft, const int32* botleft,
-      int width, int area, uint8* dst, int count) = CumulativeSumToAverageRow_C;
-  int32* cumsum_bot_row;
-  int32* max_cumsum_bot_row;
-  int32* cumsum_top_row;
+  void (*ComputeCumulativeSumRow)(const uint8_t* row, int32_t* cumsum,
+                                  const int32_t* previous_cumsum, int width) =
+      ComputeCumulativeSumRow_C;
+  void (*CumulativeSumToAverageRow)(
+      const int32_t* topleft, const int32_t* botleft, int width, int area,
+      uint8_t* dst, int count) = CumulativeSumToAverageRow_C;
+  int32_t* cumsum_bot_row;
+  int32_t* max_cumsum_bot_row;
+  int32_t* cumsum_top_row;
 
   if (!src_argb || !dst_argb || width <= 0 || height == 0) {
     return -1;
@@ -1889,9 +2432,8 @@
 #endif
   // Compute enough CumulativeSum for first row to be blurred. After this
   // one row of CumulativeSum is updated at a time.
-  ARGBComputeCumulativeSum(src_argb, src_stride_argb,
-                           dst_cumsum, dst_stride32_cumsum,
-                           width, radius);
+  ARGBComputeCumulativeSum(src_argb, src_stride_argb, dst_cumsum,
+                           dst_stride32_cumsum, width, radius);
 
   src_argb = src_argb + radius * src_stride_argb;
   cumsum_bot_row = &dst_cumsum[(radius - 1) * dst_stride32_cumsum];
@@ -1917,7 +2459,7 @@
     // Increment cumsum_bot_row pointer with circular buffer wrap around and
     // then fill in a row of CumulativeSum.
     if ((y + radius) < height) {
-      const int32* prev_cumsum_bot_row = cumsum_bot_row;
+      const int32_t* prev_cumsum_bot_row = cumsum_bot_row;
       cumsum_bot_row += dst_stride32_cumsum;
       if (cumsum_bot_row >= max_cumsum_bot_row) {
         cumsum_bot_row = dst_cumsum;
@@ -1929,24 +2471,24 @@
 
     // Left clipped.
     for (x = 0; x < radius + 1; ++x) {
-      CumulativeSumToAverageRow(cumsum_top_row, cumsum_bot_row,
-                                boxwidth, area, &dst_argb[x * 4], 1);
+      CumulativeSumToAverageRow(cumsum_top_row, cumsum_bot_row, boxwidth, area,
+                                &dst_argb[x * 4], 1);
       area += (bot_y - top_y);
       boxwidth += 4;
     }
 
     // Middle unclipped.
     n = (width - 1) - radius - x + 1;
-    CumulativeSumToAverageRow(cumsum_top_row, cumsum_bot_row,
-                              boxwidth, area, &dst_argb[x * 4], n);
+    CumulativeSumToAverageRow(cumsum_top_row, cumsum_bot_row, boxwidth, area,
+                              &dst_argb[x * 4], n);
 
     // Right clipped.
     for (x += n; x <= width - 1; ++x) {
       area -= (bot_y - top_y);
       boxwidth -= 4;
       CumulativeSumToAverageRow(cumsum_top_row + (x - radius - 1) * 4,
-                                cumsum_bot_row + (x - radius - 1) * 4,
-                                boxwidth, area, &dst_argb[x * 4], 1);
+                                cumsum_bot_row + (x - radius - 1) * 4, boxwidth,
+                                area, &dst_argb[x * 4], 1);
     }
     dst_argb += dst_stride_argb;
   }
@@ -1955,12 +2497,16 @@
 
 // Multiply ARGB image by a specified ARGB value.
 LIBYUV_API
-int ARGBShade(const uint8* src_argb, int src_stride_argb,
-              uint8* dst_argb, int dst_stride_argb,
-              int width, int height, uint32 value) {
+int ARGBShade(const uint8_t* src_argb,
+              int src_stride_argb,
+              uint8_t* dst_argb,
+              int dst_stride_argb,
+              int width,
+              int height,
+              uint32_t value) {
   int y;
-  void (*ARGBShadeRow)(const uint8* src_argb, uint8* dst_argb,
-                       int width, uint32 value) = ARGBShadeRow_C;
+  void (*ARGBShadeRow)(const uint8_t* src_argb, uint8_t* dst_argb, int width,
+                       uint32_t value) = ARGBShadeRow_C;
   if (!src_argb || !dst_argb || width <= 0 || height == 0 || value == 0u) {
     return -1;
   }
@@ -1970,8 +2516,7 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_argb == width * 4 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_argb = 0;
@@ -1986,6 +2531,11 @@
     ARGBShadeRow = ARGBShadeRow_NEON;
   }
 #endif
+#if defined(HAS_ARGBSHADEROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA) && IS_ALIGNED(width, 4)) {
+    ARGBShadeRow = ARGBShadeRow_MSA;
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     ARGBShadeRow(src_argb, dst_argb, width, value);
@@ -1997,12 +2547,17 @@
 
 // Interpolate 2 planes by specified amount (0 to 255).
 LIBYUV_API
-int InterpolatePlane(const uint8* src0, int src_stride0,
-                     const uint8* src1, int src_stride1,
-                     uint8* dst, int dst_stride,
-                     int width, int height, int interpolation) {
+int InterpolatePlane(const uint8_t* src0,
+                     int src_stride0,
+                     const uint8_t* src1,
+                     int src_stride1,
+                     uint8_t* dst,
+                     int dst_stride,
+                     int width,
+                     int height,
+                     int interpolation) {
   int y;
-  void (*InterpolateRow)(uint8* dst_ptr, const uint8* src_ptr,
+  void (*InterpolateRow)(uint8_t * dst_ptr, const uint8_t* src_ptr,
                          ptrdiff_t src_stride, int dst_width,
                          int source_y_fraction) = InterpolateRow_C;
   if (!src0 || !src1 || !dst || width <= 0 || height == 0) {
@@ -2015,9 +2570,7 @@
     dst_stride = -dst_stride;
   }
   // Coalesce rows.
-  if (src_stride0 == width &&
-      src_stride1 == width &&
-      dst_stride == width) {
+  if (src_stride0 == width && src_stride1 == width && dst_stride == width) {
     width *= height;
     height = 1;
     src_stride0 = src_stride1 = dst_stride = 0;
@@ -2046,13 +2599,12 @@
     }
   }
 #endif
-#if defined(HAS_INTERPOLATEROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) &&
-      IS_ALIGNED(src0, 4) && IS_ALIGNED(src_stride0, 4) &&
-      IS_ALIGNED(src1, 4) && IS_ALIGNED(src_stride1, 4) &&
-      IS_ALIGNED(dst, 4) && IS_ALIGNED(dst_stride, 4) &&
-      IS_ALIGNED(width, 4)) {
-    InterpolateRow = InterpolateRow_DSPR2;
+#if defined(HAS_INTERPOLATEROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    InterpolateRow = InterpolateRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      InterpolateRow = InterpolateRow_MSA;
+    }
   }
 #endif
 
@@ -2067,61 +2619,71 @@
 
 // Interpolate 2 ARGB images by specified amount (0 to 255).
 LIBYUV_API
-int ARGBInterpolate(const uint8* src_argb0, int src_stride_argb0,
-                    const uint8* src_argb1, int src_stride_argb1,
-                    uint8* dst_argb, int dst_stride_argb,
-                    int width, int height, int interpolation) {
-  return InterpolatePlane(src_argb0, src_stride_argb0,
-                          src_argb1, src_stride_argb1,
-                          dst_argb, dst_stride_argb,
+int ARGBInterpolate(const uint8_t* src_argb0,
+                    int src_stride_argb0,
+                    const uint8_t* src_argb1,
+                    int src_stride_argb1,
+                    uint8_t* dst_argb,
+                    int dst_stride_argb,
+                    int width,
+                    int height,
+                    int interpolation) {
+  return InterpolatePlane(src_argb0, src_stride_argb0, src_argb1,
+                          src_stride_argb1, dst_argb, dst_stride_argb,
                           width * 4, height, interpolation);
 }
 
 // Interpolate 2 YUV images by specified amount (0 to 255).
 LIBYUV_API
-int I420Interpolate(const uint8* src0_y, int src0_stride_y,
-                    const uint8* src0_u, int src0_stride_u,
-                    const uint8* src0_v, int src0_stride_v,
-                    const uint8* src1_y, int src1_stride_y,
-                    const uint8* src1_u, int src1_stride_u,
-                    const uint8* src1_v, int src1_stride_v,
-                    uint8* dst_y, int dst_stride_y,
-                    uint8* dst_u, int dst_stride_u,
-                    uint8* dst_v, int dst_stride_v,
-                    int width, int height, int interpolation) {
+int I420Interpolate(const uint8_t* src0_y,
+                    int src0_stride_y,
+                    const uint8_t* src0_u,
+                    int src0_stride_u,
+                    const uint8_t* src0_v,
+                    int src0_stride_v,
+                    const uint8_t* src1_y,
+                    int src1_stride_y,
+                    const uint8_t* src1_u,
+                    int src1_stride_u,
+                    const uint8_t* src1_v,
+                    int src1_stride_v,
+                    uint8_t* dst_y,
+                    int dst_stride_y,
+                    uint8_t* dst_u,
+                    int dst_stride_u,
+                    uint8_t* dst_v,
+                    int dst_stride_v,
+                    int width,
+                    int height,
+                    int interpolation) {
   int halfwidth = (width + 1) >> 1;
   int halfheight = (height + 1) >> 1;
-  if (!src0_y || !src0_u || !src0_v ||
-      !src1_y || !src1_u || !src1_v ||
-      !dst_y || !dst_u || !dst_v ||
-      width <= 0 || height == 0) {
+  if (!src0_y || !src0_u || !src0_v || !src1_y || !src1_u || !src1_v ||
+      !dst_y || !dst_u || !dst_v || width <= 0 || height == 0) {
     return -1;
   }
-  InterpolatePlane(src0_y, src0_stride_y,
-                   src1_y, src1_stride_y,
-                   dst_y, dst_stride_y,
-                   width, height, interpolation);
-  InterpolatePlane(src0_u, src0_stride_u,
-                   src1_u, src1_stride_u,
-                   dst_u, dst_stride_u,
-                   halfwidth, halfheight, interpolation);
-  InterpolatePlane(src0_v, src0_stride_v,
-                   src1_v, src1_stride_v,
-                   dst_v, dst_stride_v,
-                   halfwidth, halfheight, interpolation);
+  InterpolatePlane(src0_y, src0_stride_y, src1_y, src1_stride_y, dst_y,
+                   dst_stride_y, width, height, interpolation);
+  InterpolatePlane(src0_u, src0_stride_u, src1_u, src1_stride_u, dst_u,
+                   dst_stride_u, halfwidth, halfheight, interpolation);
+  InterpolatePlane(src0_v, src0_stride_v, src1_v, src1_stride_v, dst_v,
+                   dst_stride_v, halfwidth, halfheight, interpolation);
   return 0;
 }
 
 // Shuffle ARGB channel order.  e.g. BGRA to ARGB.
 LIBYUV_API
-int ARGBShuffle(const uint8* src_bgra, int src_stride_bgra,
-                uint8* dst_argb, int dst_stride_argb,
-                const uint8* shuffler, int width, int height) {
+int ARGBShuffle(const uint8_t* src_bgra,
+                int src_stride_bgra,
+                uint8_t* dst_argb,
+                int dst_stride_argb,
+                const uint8_t* shuffler,
+                int width,
+                int height) {
   int y;
-  void (*ARGBShuffleRow)(const uint8* src_bgra, uint8* dst_argb,
-                         const uint8* shuffler, int width) = ARGBShuffleRow_C;
-  if (!src_bgra || !dst_argb ||
-      width <= 0 || height == 0) {
+  void (*ARGBShuffleRow)(const uint8_t* src_bgra, uint8_t* dst_argb,
+                         const uint8_t* shuffler, int width) = ARGBShuffleRow_C;
+  if (!src_bgra || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -2131,20 +2693,11 @@
     src_stride_bgra = -src_stride_bgra;
   }
   // Coalesce rows.
-  if (src_stride_bgra == width * 4 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_bgra == width * 4 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_bgra = dst_stride_argb = 0;
   }
-#if defined(HAS_ARGBSHUFFLEROW_SSE2)
-  if (TestCpuFlag(kCpuHasSSE2)) {
-    ARGBShuffleRow = ARGBShuffleRow_Any_SSE2;
-    if (IS_ALIGNED(width, 4)) {
-      ARGBShuffleRow = ARGBShuffleRow_SSE2;
-    }
-  }
-#endif
 #if defined(HAS_ARGBSHUFFLEROW_SSSE3)
   if (TestCpuFlag(kCpuHasSSSE3)) {
     ARGBShuffleRow = ARGBShuffleRow_Any_SSSE3;
@@ -2169,6 +2722,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBSHUFFLEROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBShuffleRow = ARGBShuffleRow_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      ARGBShuffleRow = ARGBShuffleRow_MSA;
+    }
+  }
+#endif
 
   for (y = 0; y < height; ++y) {
     ARGBShuffleRow(src_bgra, dst_argb, shuffler, width);
@@ -2179,28 +2740,32 @@
 }
 
 // Sobel ARGB effect.
-static int ARGBSobelize(const uint8* src_argb, int src_stride_argb,
-                        uint8* dst_argb, int dst_stride_argb,
-                        int width, int height,
-                        void (*SobelRow)(const uint8* src_sobelx,
-                                         const uint8* src_sobely,
-                                         uint8* dst, int width)) {
+static int ARGBSobelize(const uint8_t* src_argb,
+                        int src_stride_argb,
+                        uint8_t* dst_argb,
+                        int dst_stride_argb,
+                        int width,
+                        int height,
+                        void (*SobelRow)(const uint8_t* src_sobelx,
+                                         const uint8_t* src_sobely,
+                                         uint8_t* dst,
+                                         int width)) {
   int y;
-  void (*ARGBToYJRow)(const uint8* src_argb, uint8* dst_g, int width) =
+  void (*ARGBToYJRow)(const uint8_t* src_argb, uint8_t* dst_g, int width) =
       ARGBToYJRow_C;
-  void (*SobelYRow)(const uint8* src_y0, const uint8* src_y1,
-                    uint8* dst_sobely, int width) = SobelYRow_C;
-  void (*SobelXRow)(const uint8* src_y0, const uint8* src_y1,
-                    const uint8* src_y2, uint8* dst_sobely, int width) =
+  void (*SobelYRow)(const uint8_t* src_y0, const uint8_t* src_y1,
+                    uint8_t* dst_sobely, int width) = SobelYRow_C;
+  void (*SobelXRow)(const uint8_t* src_y0, const uint8_t* src_y1,
+                    const uint8_t* src_y2, uint8_t* dst_sobely, int width) =
       SobelXRow_C;
   const int kEdge = 16;  // Extra pixels at start of row for extrude/align.
-  if (!src_argb  || !dst_argb || width <= 0 || height == 0) {
+  if (!src_argb || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
   if (height < 0) {
     height = -height;
-    src_argb  = src_argb  + (height - 1) * src_stride_argb;
+    src_argb = src_argb + (height - 1) * src_stride_argb;
     src_stride_argb = -src_stride_argb;
   }
 
@@ -2228,6 +2793,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBTOYJROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBToYJRow = ARGBToYJRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBToYJRow = ARGBToYJRow_MSA;
+    }
+  }
+#endif
 
 #if defined(HAS_SOBELYROW_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
@@ -2239,6 +2812,11 @@
     SobelYRow = SobelYRow_NEON;
   }
 #endif
+#if defined(HAS_SOBELYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    SobelYRow = SobelYRow_MSA;
+  }
+#endif
 #if defined(HAS_SOBELXROW_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
     SobelXRow = SobelXRow_SSE2;
@@ -2249,18 +2827,23 @@
     SobelXRow = SobelXRow_NEON;
   }
 #endif
+#if defined(HAS_SOBELXROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    SobelXRow = SobelXRow_MSA;
+  }
+#endif
   {
     // 3 rows with edges before/after.
     const int kRowSize = (width + kEdge + 31) & ~31;
     align_buffer_64(rows, kRowSize * 2 + (kEdge + kRowSize * 3 + kEdge));
-    uint8* row_sobelx = rows;
-    uint8* row_sobely = rows + kRowSize;
-    uint8* row_y = rows + kRowSize * 2;
+    uint8_t* row_sobelx = rows;
+    uint8_t* row_sobely = rows + kRowSize;
+    uint8_t* row_y = rows + kRowSize * 2;
 
     // Convert first row.
-    uint8* row_y0 = row_y + kEdge;
-    uint8* row_y1 = row_y0 + kRowSize;
-    uint8* row_y2 = row_y1 + kRowSize;
+    uint8_t* row_y0 = row_y + kEdge;
+    uint8_t* row_y1 = row_y0 + kRowSize;
+    uint8_t* row_y2 = row_y1 + kRowSize;
     ARGBToYJRow(src_argb, row_y0, width);
     row_y0[-1] = row_y0[0];
     memset(row_y0 + width, row_y0[width - 1], 16);  // Extrude 16 for valgrind.
@@ -2284,7 +2867,7 @@
 
       // Cycle thru circular queue of 3 row_y buffers.
       {
-        uint8* row_yt = row_y0;
+        uint8_t* row_yt = row_y0;
         row_y0 = row_y1;
         row_y1 = row_y2;
         row_y2 = row_yt;
@@ -2299,11 +2882,14 @@
 
 // Sobel ARGB effect.
 LIBYUV_API
-int ARGBSobel(const uint8* src_argb, int src_stride_argb,
-              uint8* dst_argb, int dst_stride_argb,
-              int width, int height) {
-  void (*SobelRow)(const uint8* src_sobelx, const uint8* src_sobely,
-                   uint8* dst_argb, int width) = SobelRow_C;
+int ARGBSobel(const uint8_t* src_argb,
+              int src_stride_argb,
+              uint8_t* dst_argb,
+              int dst_stride_argb,
+              int width,
+              int height) {
+  void (*SobelRow)(const uint8_t* src_sobelx, const uint8_t* src_sobely,
+                   uint8_t* dst_argb, int width) = SobelRow_C;
 #if defined(HAS_SOBELROW_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
     SobelRow = SobelRow_Any_SSE2;
@@ -2320,17 +2906,28 @@
     }
   }
 #endif
+#if defined(HAS_SOBELROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    SobelRow = SobelRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      SobelRow = SobelRow_MSA;
+    }
+  }
+#endif
   return ARGBSobelize(src_argb, src_stride_argb, dst_argb, dst_stride_argb,
                       width, height, SobelRow);
 }
 
 // Sobel ARGB effect with planar output.
 LIBYUV_API
-int ARGBSobelToPlane(const uint8* src_argb, int src_stride_argb,
-                     uint8* dst_y, int dst_stride_y,
-                     int width, int height) {
-  void (*SobelToPlaneRow)(const uint8* src_sobelx, const uint8* src_sobely,
-                          uint8* dst_, int width) = SobelToPlaneRow_C;
+int ARGBSobelToPlane(const uint8_t* src_argb,
+                     int src_stride_argb,
+                     uint8_t* dst_y,
+                     int dst_stride_y,
+                     int width,
+                     int height) {
+  void (*SobelToPlaneRow)(const uint8_t* src_sobelx, const uint8_t* src_sobely,
+                          uint8_t* dst_, int width) = SobelToPlaneRow_C;
 #if defined(HAS_SOBELTOPLANEROW_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
     SobelToPlaneRow = SobelToPlaneRow_Any_SSE2;
@@ -2347,18 +2944,29 @@
     }
   }
 #endif
-  return ARGBSobelize(src_argb, src_stride_argb, dst_y, dst_stride_y,
-                      width, height, SobelToPlaneRow);
+#if defined(HAS_SOBELTOPLANEROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    SobelToPlaneRow = SobelToPlaneRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      SobelToPlaneRow = SobelToPlaneRow_MSA;
+    }
+  }
+#endif
+  return ARGBSobelize(src_argb, src_stride_argb, dst_y, dst_stride_y, width,
+                      height, SobelToPlaneRow);
 }
 
 // SobelXY ARGB effect.
 // Similar to Sobel, but also stores Sobel X in R and Sobel Y in B.  G = Sobel.
 LIBYUV_API
-int ARGBSobelXY(const uint8* src_argb, int src_stride_argb,
-                uint8* dst_argb, int dst_stride_argb,
-                int width, int height) {
-  void (*SobelXYRow)(const uint8* src_sobelx, const uint8* src_sobely,
-                     uint8* dst_argb, int width) = SobelXYRow_C;
+int ARGBSobelXY(const uint8_t* src_argb,
+                int src_stride_argb,
+                uint8_t* dst_argb,
+                int dst_stride_argb,
+                int width,
+                int height) {
+  void (*SobelXYRow)(const uint8_t* src_sobelx, const uint8_t* src_sobely,
+                     uint8_t* dst_argb, int width) = SobelXYRow_C;
 #if defined(HAS_SOBELXYROW_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
     SobelXYRow = SobelXYRow_Any_SSE2;
@@ -2375,32 +2983,41 @@
     }
   }
 #endif
+#if defined(HAS_SOBELXYROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    SobelXYRow = SobelXYRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      SobelXYRow = SobelXYRow_MSA;
+    }
+  }
+#endif
   return ARGBSobelize(src_argb, src_stride_argb, dst_argb, dst_stride_argb,
                       width, height, SobelXYRow);
 }
 
 // Apply a 4x4 polynomial to each ARGB pixel.
 LIBYUV_API
-int ARGBPolynomial(const uint8* src_argb, int src_stride_argb,
-                   uint8* dst_argb, int dst_stride_argb,
+int ARGBPolynomial(const uint8_t* src_argb,
+                   int src_stride_argb,
+                   uint8_t* dst_argb,
+                   int dst_stride_argb,
                    const float* poly,
-                   int width, int height) {
+                   int width,
+                   int height) {
   int y;
-  void (*ARGBPolynomialRow)(const uint8* src_argb,
-                            uint8* dst_argb, const float* poly,
-                            int width) = ARGBPolynomialRow_C;
+  void (*ARGBPolynomialRow)(const uint8_t* src_argb, uint8_t* dst_argb,
+                            const float* poly, int width) = ARGBPolynomialRow_C;
   if (!src_argb || !dst_argb || !poly || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
   if (height < 0) {
     height = -height;
-    src_argb  = src_argb  + (height - 1) * src_stride_argb;
+    src_argb = src_argb + (height - 1) * src_stride_argb;
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_argb == width * 4 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_argb = 0;
@@ -2425,28 +3042,132 @@
   return 0;
 }
 
+// Convert plane of 16 bit shorts to half floats.
+// Source values are multiplied by scale before storing as half float.
+LIBYUV_API
+int HalfFloatPlane(const uint16_t* src_y,
+                   int src_stride_y,
+                   uint16_t* dst_y,
+                   int dst_stride_y,
+                   float scale,
+                   int width,
+                   int height) {
+  int y;
+  void (*HalfFloatRow)(const uint16_t* src, uint16_t* dst, float scale,
+                       int width) = HalfFloatRow_C;
+  if (!src_y || !dst_y || width <= 0 || height == 0) {
+    return -1;
+  }
+  src_stride_y >>= 1;
+  dst_stride_y >>= 1;
+  // Negative height means invert the image.
+  if (height < 0) {
+    height = -height;
+    src_y = src_y + (height - 1) * src_stride_y;
+    src_stride_y = -src_stride_y;
+  }
+  // Coalesce rows.
+  if (src_stride_y == width && dst_stride_y == width) {
+    width *= height;
+    height = 1;
+    src_stride_y = dst_stride_y = 0;
+  }
+#if defined(HAS_HALFFLOATROW_SSE2)
+  if (TestCpuFlag(kCpuHasSSE2)) {
+    HalfFloatRow = HalfFloatRow_Any_SSE2;
+    if (IS_ALIGNED(width, 8)) {
+      HalfFloatRow = HalfFloatRow_SSE2;
+    }
+  }
+#endif
+#if defined(HAS_HALFFLOATROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    HalfFloatRow = HalfFloatRow_Any_AVX2;
+    if (IS_ALIGNED(width, 16)) {
+      HalfFloatRow = HalfFloatRow_AVX2;
+    }
+  }
+#endif
+#if defined(HAS_HALFFLOATROW_F16C)
+  if (TestCpuFlag(kCpuHasAVX2) && TestCpuFlag(kCpuHasF16C)) {
+    HalfFloatRow =
+        (scale == 1.0f) ? HalfFloat1Row_Any_F16C : HalfFloatRow_Any_F16C;
+    if (IS_ALIGNED(width, 16)) {
+      HalfFloatRow = (scale == 1.0f) ? HalfFloat1Row_F16C : HalfFloatRow_F16C;
+    }
+  }
+#endif
+#if defined(HAS_HALFFLOATROW_NEON)
+  if (TestCpuFlag(kCpuHasNEON)) {
+    HalfFloatRow =
+        (scale == 1.0f) ? HalfFloat1Row_Any_NEON : HalfFloatRow_Any_NEON;
+    if (IS_ALIGNED(width, 8)) {
+      HalfFloatRow = (scale == 1.0f) ? HalfFloat1Row_NEON : HalfFloatRow_NEON;
+    }
+  }
+#endif
+#if defined(HAS_HALFFLOATROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    HalfFloatRow = HalfFloatRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      HalfFloatRow = HalfFloatRow_MSA;
+    }
+  }
+#endif
+
+  for (y = 0; y < height; ++y) {
+    HalfFloatRow(src_y, dst_y, scale, width);
+    src_y += src_stride_y;
+    dst_y += dst_stride_y;
+  }
+  return 0;
+}
+
+// Convert a buffer of bytes to floats, scale the values and store as floats.
+LIBYUV_API
+int ByteToFloat(const uint8_t* src_y, float* dst_y, float scale, int width) {
+  void (*ByteToFloatRow)(const uint8_t* src, float* dst, float scale,
+                         int width) = ByteToFloatRow_C;
+  if (!src_y || !dst_y || width <= 0) {
+    return -1;
+  }
+#if defined(HAS_BYTETOFLOATROW_NEON)
+  if (TestCpuFlag(kCpuHasNEON)) {
+    ByteToFloatRow = ByteToFloatRow_Any_NEON;
+    if (IS_ALIGNED(width, 8)) {
+      ByteToFloatRow = ByteToFloatRow_NEON;
+    }
+  }
+#endif
+
+  ByteToFloatRow(src_y, dst_y, scale, width);
+  return 0;
+}
+
 // Apply a lumacolortable to each ARGB pixel.
 LIBYUV_API
-int ARGBLumaColorTable(const uint8* src_argb, int src_stride_argb,
-                       uint8* dst_argb, int dst_stride_argb,
-                       const uint8* luma,
-                       int width, int height) {
+int ARGBLumaColorTable(const uint8_t* src_argb,
+                       int src_stride_argb,
+                       uint8_t* dst_argb,
+                       int dst_stride_argb,
+                       const uint8_t* luma,
+                       int width,
+                       int height) {
   int y;
-  void (*ARGBLumaColorTableRow)(const uint8* src_argb, uint8* dst_argb,
-      int width, const uint8* luma, const uint32 lumacoeff) =
-      ARGBLumaColorTableRow_C;
+  void (*ARGBLumaColorTableRow)(
+      const uint8_t* src_argb, uint8_t* dst_argb, int width,
+      const uint8_t* luma, const uint32_t lumacoeff) = ARGBLumaColorTableRow_C;
   if (!src_argb || !dst_argb || !luma || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
   if (height < 0) {
     height = -height;
-    src_argb  = src_argb  + (height - 1) * src_stride_argb;
+    src_argb = src_argb + (height - 1) * src_stride_argb;
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_argb == width * 4 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_argb = 0;
@@ -2467,12 +3188,15 @@
 
 // Copy Alpha from one ARGB image to another.
 LIBYUV_API
-int ARGBCopyAlpha(const uint8* src_argb, int src_stride_argb,
-                  uint8* dst_argb, int dst_stride_argb,
-                  int width, int height) {
+int ARGBCopyAlpha(const uint8_t* src_argb,
+                  int src_stride_argb,
+                  uint8_t* dst_argb,
+                  int dst_stride_argb,
+                  int width,
+                  int height) {
   int y;
-  void (*ARGBCopyAlphaRow)(const uint8* src_argb, uint8* dst_argb, int width) =
-      ARGBCopyAlphaRow_C;
+  void (*ARGBCopyAlphaRow)(const uint8_t* src_argb, uint8_t* dst_argb,
+                           int width) = ARGBCopyAlphaRow_C;
   if (!src_argb || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
@@ -2483,8 +3207,7 @@
     src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride_argb == width * 4 &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_argb == width * 4 && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_argb = dst_stride_argb = 0;
@@ -2516,55 +3239,73 @@
 
 // Extract just the alpha channel from ARGB.
 LIBYUV_API
-int ARGBExtractAlpha(const uint8* src_argb, int src_stride,
-                     uint8* dst_a, int dst_stride,
-                     int width, int height) {
+int ARGBExtractAlpha(const uint8_t* src_argb,
+                     int src_stride_argb,
+                     uint8_t* dst_a,
+                     int dst_stride_a,
+                     int width,
+                     int height) {
   if (!src_argb || !dst_a || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
   if (height < 0) {
     height = -height;
-    src_argb += (height - 1) * src_stride;
-    src_stride = -src_stride;
+    src_argb += (height - 1) * src_stride_argb;
+    src_stride_argb = -src_stride_argb;
   }
   // Coalesce rows.
-  if (src_stride == width * 4 && dst_stride == width) {
+  if (src_stride_argb == width * 4 && dst_stride_a == width) {
     width *= height;
     height = 1;
-    src_stride = dst_stride = 0;
+    src_stride_argb = dst_stride_a = 0;
   }
-  void (*ARGBExtractAlphaRow)(const uint8 *src_argb, uint8 *dst_a, int width) =
-      ARGBExtractAlphaRow_C;
+  void (*ARGBExtractAlphaRow)(const uint8_t* src_argb, uint8_t* dst_a,
+                              int width) = ARGBExtractAlphaRow_C;
 #if defined(HAS_ARGBEXTRACTALPHAROW_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
     ARGBExtractAlphaRow = IS_ALIGNED(width, 8) ? ARGBExtractAlphaRow_SSE2
                                                : ARGBExtractAlphaRow_Any_SSE2;
   }
 #endif
+#if defined(HAS_ARGBEXTRACTALPHAROW_AVX2)
+  if (TestCpuFlag(kCpuHasAVX2)) {
+    ARGBExtractAlphaRow = IS_ALIGNED(width, 32) ? ARGBExtractAlphaRow_AVX2
+                                                : ARGBExtractAlphaRow_Any_AVX2;
+  }
+#endif
 #if defined(HAS_ARGBEXTRACTALPHAROW_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     ARGBExtractAlphaRow = IS_ALIGNED(width, 16) ? ARGBExtractAlphaRow_NEON
                                                 : ARGBExtractAlphaRow_Any_NEON;
   }
 #endif
+#if defined(HAS_ARGBEXTRACTALPHAROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBExtractAlphaRow = IS_ALIGNED(width, 16) ? ARGBExtractAlphaRow_MSA
+                                                : ARGBExtractAlphaRow_Any_MSA;
+  }
+#endif
 
   for (int y = 0; y < height; ++y) {
     ARGBExtractAlphaRow(src_argb, dst_a, width);
-    src_argb += src_stride;
-    dst_a += dst_stride;
+    src_argb += src_stride_argb;
+    dst_a += dst_stride_a;
   }
   return 0;
 }
 
 // Copy a planar Y channel to the alpha channel of a destination ARGB image.
 LIBYUV_API
-int ARGBCopyYToAlpha(const uint8* src_y, int src_stride_y,
-                     uint8* dst_argb, int dst_stride_argb,
-                     int width, int height) {
+int ARGBCopyYToAlpha(const uint8_t* src_y,
+                     int src_stride_y,
+                     uint8_t* dst_argb,
+                     int dst_stride_argb,
+                     int width,
+                     int height) {
   int y;
-  void (*ARGBCopyYToAlphaRow)(const uint8* src_y, uint8* dst_argb, int width) =
-      ARGBCopyYToAlphaRow_C;
+  void (*ARGBCopyYToAlphaRow)(const uint8_t* src_y, uint8_t* dst_argb,
+                              int width) = ARGBCopyYToAlphaRow_C;
   if (!src_y || !dst_argb || width <= 0 || height == 0) {
     return -1;
   }
@@ -2575,8 +3316,7 @@
     src_stride_y = -src_stride_y;
   }
   // Coalesce rows.
-  if (src_stride_y == width &&
-      dst_stride_argb == width * 4) {
+  if (src_stride_y == width && dst_stride_argb == width * 4) {
     width *= height;
     height = 1;
     src_stride_y = dst_stride_argb = 0;
@@ -2610,20 +3350,22 @@
 // directly. A SplitUVRow_Odd function could copy the remaining chroma.
 
 LIBYUV_API
-int YUY2ToNV12(const uint8* src_yuy2, int src_stride_yuy2,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_uv, int dst_stride_uv,
-               int width, int height) {
+int YUY2ToNV12(const uint8_t* src_yuy2,
+               int src_stride_yuy2,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_uv,
+               int dst_stride_uv,
+               int width,
+               int height) {
   int y;
   int halfwidth = (width + 1) >> 1;
-  void (*SplitUVRow)(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
+  void (*SplitUVRow)(const uint8_t* src_uv, uint8_t* dst_u, uint8_t* dst_v,
                      int width) = SplitUVRow_C;
-  void (*InterpolateRow)(uint8* dst_ptr, const uint8* src_ptr,
+  void (*InterpolateRow)(uint8_t * dst_ptr, const uint8_t* src_ptr,
                          ptrdiff_t src_stride, int dst_width,
                          int source_y_fraction) = InterpolateRow_C;
-  if (!src_yuy2 ||
-      !dst_y || !dst_uv ||
-      width <= 0 || height == 0) {
+  if (!src_yuy2 || !dst_y || !dst_uv || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -2656,6 +3398,14 @@
     }
   }
 #endif
+#if defined(HAS_SPLITUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    SplitUVRow = SplitUVRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      SplitUVRow = SplitUVRow_MSA;
+    }
+  }
+#endif
 #if defined(HAS_INTERPOLATEROW_SSSE3)
   if (TestCpuFlag(kCpuHasSSSE3)) {
     InterpolateRow = InterpolateRow_Any_SSSE3;
@@ -2680,6 +3430,14 @@
     }
   }
 #endif
+#if defined(HAS_INTERPOLATEROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    InterpolateRow = InterpolateRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      InterpolateRow = InterpolateRow_MSA;
+    }
+  }
+#endif
 
   {
     int awidth = halfwidth * 2;
@@ -2708,20 +3466,22 @@
 }
 
 LIBYUV_API
-int UYVYToNV12(const uint8* src_uyvy, int src_stride_uyvy,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_uv, int dst_stride_uv,
-               int width, int height) {
+int UYVYToNV12(const uint8_t* src_uyvy,
+               int src_stride_uyvy,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_uv,
+               int dst_stride_uv,
+               int width,
+               int height) {
   int y;
   int halfwidth = (width + 1) >> 1;
-  void (*SplitUVRow)(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
+  void (*SplitUVRow)(const uint8_t* src_uv, uint8_t* dst_u, uint8_t* dst_v,
                      int width) = SplitUVRow_C;
-  void (*InterpolateRow)(uint8* dst_ptr, const uint8* src_ptr,
+  void (*InterpolateRow)(uint8_t * dst_ptr, const uint8_t* src_ptr,
                          ptrdiff_t src_stride, int dst_width,
                          int source_y_fraction) = InterpolateRow_C;
-  if (!src_uyvy ||
-      !dst_y || !dst_uv ||
-      width <= 0 || height == 0) {
+  if (!src_uyvy || !dst_y || !dst_uv || width <= 0 || height == 0) {
     return -1;
   }
   // Negative height means invert the image.
@@ -2754,6 +3514,14 @@
     }
   }
 #endif
+#if defined(HAS_SPLITUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    SplitUVRow = SplitUVRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      SplitUVRow = SplitUVRow_MSA;
+    }
+  }
+#endif
 #if defined(HAS_INTERPOLATEROW_SSSE3)
   if (TestCpuFlag(kCpuHasSSSE3)) {
     InterpolateRow = InterpolateRow_Any_SSSE3;
@@ -2778,6 +3546,14 @@
     }
   }
 #endif
+#if defined(HAS_INTERPOLATEROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    InterpolateRow = InterpolateRow_Any_MSA;
+    if (IS_ALIGNED(width, 32)) {
+      InterpolateRow = InterpolateRow_MSA;
+    }
+  }
+#endif
 
   {
     int awidth = halfwidth * 2;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate.cc
index 01ea5c4..f2bed85 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate.cc
@@ -10,8 +10,8 @@
 
 #include "libyuv/rotate.h"
 
-#include "libyuv/cpu_id.h"
 #include "libyuv/convert.h"
+#include "libyuv/cpu_id.h"
 #include "libyuv/planar_functions.h"
 #include "libyuv/rotate_row.h"
 #include "libyuv/row.h"
@@ -22,12 +22,20 @@
 #endif
 
 LIBYUV_API
-void TransposePlane(const uint8* src, int src_stride,
-                    uint8* dst, int dst_stride,
-                    int width, int height) {
+void TransposePlane(const uint8_t* src,
+                    int src_stride,
+                    uint8_t* dst,
+                    int dst_stride,
+                    int width,
+                    int height) {
   int i = height;
-  void (*TransposeWx8)(const uint8* src, int src_stride,
-                       uint8* dst, int dst_stride, int width) = TransposeWx8_C;
+#if defined(HAS_TRANSPOSEWX16_MSA)
+  void (*TransposeWx16)(const uint8_t* src, int src_stride, uint8_t* dst,
+                        int dst_stride, int width) = TransposeWx16_C;
+#else
+  void (*TransposeWx8)(const uint8_t* src, int src_stride, uint8_t* dst,
+                       int dst_stride, int width) = TransposeWx8_C;
+#endif
 #if defined(HAS_TRANSPOSEWX8_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     TransposeWx8 = TransposeWx8_NEON;
@@ -49,24 +57,32 @@
     }
   }
 #endif
-#if defined(HAS_TRANSPOSEWX8_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2)) {
-    if (IS_ALIGNED(width, 4) &&
-        IS_ALIGNED(src, 4) && IS_ALIGNED(src_stride, 4)) {
-      TransposeWx8 = TransposeWx8_Fast_DSPR2;
-    } else {
-      TransposeWx8 = TransposeWx8_DSPR2;
+#if defined(HAS_TRANSPOSEWX16_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    TransposeWx16 = TransposeWx16_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      TransposeWx16 = TransposeWx16_MSA;
     }
   }
 #endif
 
+#if defined(HAS_TRANSPOSEWX16_MSA)
+  // Work across the source in 16x16 tiles
+  while (i >= 16) {
+    TransposeWx16(src, src_stride, dst, dst_stride, width);
+    src += 16 * src_stride;  // Go down 16 rows.
+    dst += 16;               // Move over 16 columns.
+    i -= 16;
+  }
+#else
   // Work across the source in 8x8 tiles
   while (i >= 8) {
     TransposeWx8(src, src_stride, dst, dst_stride, width);
-    src += 8 * src_stride;    // Go down 8 rows.
-    dst += 8;                 // Move over 8 columns.
+    src += 8 * src_stride;  // Go down 8 rows.
+    dst += 8;               // Move over 8 columns.
     i -= 8;
   }
+#endif
 
   if (i > 0) {
     TransposeWxH_C(src, src_stride, dst, dst_stride, width, i);
@@ -74,9 +90,12 @@
 }
 
 LIBYUV_API
-void RotatePlane90(const uint8* src, int src_stride,
-                   uint8* dst, int dst_stride,
-                   int width, int height) {
+void RotatePlane90(const uint8_t* src,
+                   int src_stride,
+                   uint8_t* dst,
+                   int dst_stride,
+                   int width,
+                   int height) {
   // Rotate by 90 is a transpose with the source read
   // from bottom to top. So set the source pointer to the end
   // of the buffer and flip the sign of the source stride.
@@ -86,9 +105,12 @@
 }
 
 LIBYUV_API
-void RotatePlane270(const uint8* src, int src_stride,
-                    uint8* dst, int dst_stride,
-                    int width, int height) {
+void RotatePlane270(const uint8_t* src,
+                    int src_stride,
+                    uint8_t* dst,
+                    int dst_stride,
+                    int width,
+                    int height) {
   // Rotate by 270 is a transpose with the destination written
   // from bottom to top. So set the destination pointer to the end
   // of the buffer and flip the sign of the destination stride.
@@ -98,17 +120,20 @@
 }
 
 LIBYUV_API
-void RotatePlane180(const uint8* src, int src_stride,
-                    uint8* dst, int dst_stride,
-                    int width, int height) {
+void RotatePlane180(const uint8_t* src,
+                    int src_stride,
+                    uint8_t* dst,
+                    int dst_stride,
+                    int width,
+                    int height) {
   // Swap first and last row and mirror the content. Uses a temporary row.
   align_buffer_64(row, width);
-  const uint8* src_bot = src + src_stride * (height - 1);
-  uint8* dst_bot = dst + dst_stride * (height - 1);
+  const uint8_t* src_bot = src + src_stride * (height - 1);
+  uint8_t* dst_bot = dst + dst_stride * (height - 1);
   int half_height = (height + 1) >> 1;
   int y;
-  void (*MirrorRow)(const uint8* src, uint8* dst, int width) = MirrorRow_C;
-  void (*CopyRow)(const uint8* src, uint8* dst, int width) = CopyRow_C;
+  void (*MirrorRow)(const uint8_t* src, uint8_t* dst, int width) = MirrorRow_C;
+  void (*CopyRow)(const uint8_t* src, uint8_t* dst, int width) = CopyRow_C;
 #if defined(HAS_MIRRORROW_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     MirrorRow = MirrorRow_Any_NEON;
@@ -133,12 +158,12 @@
     }
   }
 #endif
-// TODO(fbarchard): Mirror on mips handle unaligned memory.
-#if defined(HAS_MIRRORROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) &&
-      IS_ALIGNED(src, 4) && IS_ALIGNED(src_stride, 4) &&
-      IS_ALIGNED(dst, 4) && IS_ALIGNED(dst_stride, 4)) {
-    MirrorRow = MirrorRow_DSPR2;
+#if defined(HAS_MIRRORROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    MirrorRow = MirrorRow_Any_MSA;
+    if (IS_ALIGNED(width, 64)) {
+      MirrorRow = MirrorRow_MSA;
+    }
   }
 #endif
 #if defined(HAS_COPYROW_SSE2)
@@ -161,11 +186,6 @@
     CopyRow = IS_ALIGNED(width, 32) ? CopyRow_NEON : CopyRow_Any_NEON;
   }
 #endif
-#if defined(HAS_COPYROW_MIPS)
-  if (TestCpuFlag(kCpuHasMIPS)) {
-    CopyRow = CopyRow_MIPS;
-  }
-#endif
 
   // Odd height will harmlessly mirror the middle row twice.
   for (y = 0; y < half_height; ++y) {
@@ -181,15 +201,24 @@
 }
 
 LIBYUV_API
-void TransposeUV(const uint8* src, int src_stride,
-                 uint8* dst_a, int dst_stride_a,
-                 uint8* dst_b, int dst_stride_b,
-                 int width, int height) {
+void TransposeUV(const uint8_t* src,
+                 int src_stride,
+                 uint8_t* dst_a,
+                 int dst_stride_a,
+                 uint8_t* dst_b,
+                 int dst_stride_b,
+                 int width,
+                 int height) {
   int i = height;
-  void (*TransposeUVWx8)(const uint8* src, int src_stride,
-                         uint8* dst_a, int dst_stride_a,
-                         uint8* dst_b, int dst_stride_b,
+#if defined(HAS_TRANSPOSEUVWX16_MSA)
+  void (*TransposeUVWx16)(const uint8_t* src, int src_stride, uint8_t* dst_a,
+                          int dst_stride_a, uint8_t* dst_b, int dst_stride_b,
+                          int width) = TransposeUVWx16_C;
+#else
+  void (*TransposeUVWx8)(const uint8_t* src, int src_stride, uint8_t* dst_a,
+                         int dst_stride_a, uint8_t* dst_b, int dst_stride_b,
                          int width) = TransposeUVWx8_C;
+#endif
 #if defined(HAS_TRANSPOSEUVWX8_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     TransposeUVWx8 = TransposeUVWx8_NEON;
@@ -203,72 +232,90 @@
     }
   }
 #endif
-#if defined(HAS_TRANSPOSEUVWX8_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && IS_ALIGNED(width, 2) &&
-      IS_ALIGNED(src, 4) && IS_ALIGNED(src_stride, 4)) {
-    TransposeUVWx8 = TransposeUVWx8_DSPR2;
+#if defined(HAS_TRANSPOSEUVWX16_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    TransposeUVWx16 = TransposeUVWx16_Any_MSA;
+    if (IS_ALIGNED(width, 8)) {
+      TransposeUVWx16 = TransposeUVWx16_MSA;
+    }
   }
 #endif
 
+#if defined(HAS_TRANSPOSEUVWX16_MSA)
+  // Work through the source in 16x16 tiles.
+  while (i >= 16) {
+    TransposeUVWx16(src, src_stride, dst_a, dst_stride_a, dst_b, dst_stride_b,
+                    width);
+    src += 16 * src_stride;  // Go down 16 rows.
+    dst_a += 16;             // Move over 16 columns.
+    dst_b += 16;             // Move over 16 columns.
+    i -= 16;
+  }
+#else
   // Work through the source in 8x8 tiles.
   while (i >= 8) {
-    TransposeUVWx8(src, src_stride,
-                   dst_a, dst_stride_a,
-                   dst_b, dst_stride_b,
+    TransposeUVWx8(src, src_stride, dst_a, dst_stride_a, dst_b, dst_stride_b,
                    width);
-    src += 8 * src_stride;    // Go down 8 rows.
-    dst_a += 8;               // Move over 8 columns.
-    dst_b += 8;               // Move over 8 columns.
+    src += 8 * src_stride;  // Go down 8 rows.
+    dst_a += 8;             // Move over 8 columns.
+    dst_b += 8;             // Move over 8 columns.
     i -= 8;
   }
+#endif
 
   if (i > 0) {
-    TransposeUVWxH_C(src, src_stride,
-                     dst_a, dst_stride_a,
-                     dst_b, dst_stride_b,
+    TransposeUVWxH_C(src, src_stride, dst_a, dst_stride_a, dst_b, dst_stride_b,
                      width, i);
   }
 }
 
 LIBYUV_API
-void RotateUV90(const uint8* src, int src_stride,
-                uint8* dst_a, int dst_stride_a,
-                uint8* dst_b, int dst_stride_b,
-                int width, int height) {
+void RotateUV90(const uint8_t* src,
+                int src_stride,
+                uint8_t* dst_a,
+                int dst_stride_a,
+                uint8_t* dst_b,
+                int dst_stride_b,
+                int width,
+                int height) {
   src += src_stride * (height - 1);
   src_stride = -src_stride;
 
-  TransposeUV(src, src_stride,
-              dst_a, dst_stride_a,
-              dst_b, dst_stride_b,
-              width, height);
+  TransposeUV(src, src_stride, dst_a, dst_stride_a, dst_b, dst_stride_b, width,
+              height);
 }
 
 LIBYUV_API
-void RotateUV270(const uint8* src, int src_stride,
-                 uint8* dst_a, int dst_stride_a,
-                 uint8* dst_b, int dst_stride_b,
-                 int width, int height) {
+void RotateUV270(const uint8_t* src,
+                 int src_stride,
+                 uint8_t* dst_a,
+                 int dst_stride_a,
+                 uint8_t* dst_b,
+                 int dst_stride_b,
+                 int width,
+                 int height) {
   dst_a += dst_stride_a * (width - 1);
   dst_b += dst_stride_b * (width - 1);
   dst_stride_a = -dst_stride_a;
   dst_stride_b = -dst_stride_b;
 
-  TransposeUV(src, src_stride,
-              dst_a, dst_stride_a,
-              dst_b, dst_stride_b,
-              width, height);
+  TransposeUV(src, src_stride, dst_a, dst_stride_a, dst_b, dst_stride_b, width,
+              height);
 }
 
 // Rotate 180 is a horizontal and vertical flip.
 LIBYUV_API
-void RotateUV180(const uint8* src, int src_stride,
-                 uint8* dst_a, int dst_stride_a,
-                 uint8* dst_b, int dst_stride_b,
-                 int width, int height) {
+void RotateUV180(const uint8_t* src,
+                 int src_stride,
+                 uint8_t* dst_a,
+                 int dst_stride_a,
+                 uint8_t* dst_b,
+                 int dst_stride_b,
+                 int width,
+                 int height) {
   int i;
-  void (*MirrorUVRow)(const uint8* src, uint8* dst_u, uint8* dst_v, int width) =
-      MirrorUVRow_C;
+  void (*MirrorUVRow)(const uint8_t* src, uint8_t* dst_u, uint8_t* dst_v,
+                      int width) = MirrorUVRow_C;
 #if defined(HAS_MIRRORUVROW_NEON)
   if (TestCpuFlag(kCpuHasNEON) && IS_ALIGNED(width, 8)) {
     MirrorUVRow = MirrorUVRow_NEON;
@@ -279,10 +326,9 @@
     MirrorUVRow = MirrorUVRow_SSSE3;
   }
 #endif
-#if defined(HAS_MIRRORUVROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) &&
-      IS_ALIGNED(src, 4) && IS_ALIGNED(src_stride, 4)) {
-    MirrorUVRow = MirrorUVRow_DSPR2;
+#if defined(HAS_MIRRORUVROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA) && IS_ALIGNED(width, 32)) {
+    MirrorUVRow = MirrorUVRow_MSA;
   }
 #endif
 
@@ -298,9 +344,12 @@
 }
 
 LIBYUV_API
-int RotatePlane(const uint8* src, int src_stride,
-                uint8* dst, int dst_stride,
-                int width, int height,
+int RotatePlane(const uint8_t* src,
+                int src_stride,
+                uint8_t* dst,
+                int dst_stride,
+                int width,
+                int height,
                 enum RotationMode mode) {
   if (!src || width <= 0 || height == 0 || !dst) {
     return -1;
@@ -316,24 +365,16 @@
   switch (mode) {
     case kRotate0:
       // copy frame
-      CopyPlane(src, src_stride,
-                dst, dst_stride,
-                width, height);
+      CopyPlane(src, src_stride, dst, dst_stride, width, height);
       return 0;
     case kRotate90:
-      RotatePlane90(src, src_stride,
-                    dst, dst_stride,
-                    width, height);
+      RotatePlane90(src, src_stride, dst, dst_stride, width, height);
       return 0;
     case kRotate270:
-      RotatePlane270(src, src_stride,
-                     dst, dst_stride,
-                     width, height);
+      RotatePlane270(src, src_stride, dst, dst_stride, width, height);
       return 0;
     case kRotate180:
-      RotatePlane180(src, src_stride,
-                     dst, dst_stride,
-                     width, height);
+      RotatePlane180(src, src_stride, dst, dst_stride, width, height);
       return 0;
     default:
       break;
@@ -342,18 +383,25 @@
 }
 
 LIBYUV_API
-int I420Rotate(const uint8* src_y, int src_stride_y,
-               const uint8* src_u, int src_stride_u,
-               const uint8* src_v, int src_stride_v,
-               uint8* dst_y, int dst_stride_y,
-               uint8* dst_u, int dst_stride_u,
-               uint8* dst_v, int dst_stride_v,
-               int width, int height,
+int I420Rotate(const uint8_t* src_y,
+               int src_stride_y,
+               const uint8_t* src_u,
+               int src_stride_u,
+               const uint8_t* src_v,
+               int src_stride_v,
+               uint8_t* dst_y,
+               int dst_stride_y,
+               uint8_t* dst_u,
+               int dst_stride_u,
+               uint8_t* dst_v,
+               int dst_stride_v,
+               int width,
+               int height,
                enum RotationMode mode) {
   int halfwidth = (width + 1) >> 1;
   int halfheight = (height + 1) >> 1;
-  if (!src_y || !src_u || !src_v || width <= 0 || height == 0 ||
-      !dst_y || !dst_u || !dst_v) {
+  if (!src_y || !src_u || !src_v || width <= 0 || height == 0 || !dst_y ||
+      !dst_u || !dst_v) {
     return -1;
   }
 
@@ -372,45 +420,29 @@
   switch (mode) {
     case kRotate0:
       // copy frame
-      return I420Copy(src_y, src_stride_y,
-                      src_u, src_stride_u,
-                      src_v, src_stride_v,
-                      dst_y, dst_stride_y,
-                      dst_u, dst_stride_u,
-                      dst_v, dst_stride_v,
-                      width, height);
+      return I420Copy(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                      src_stride_v, dst_y, dst_stride_y, dst_u, dst_stride_u,
+                      dst_v, dst_stride_v, width, height);
     case kRotate90:
-      RotatePlane90(src_y, src_stride_y,
-                    dst_y, dst_stride_y,
-                    width, height);
-      RotatePlane90(src_u, src_stride_u,
-                    dst_u, dst_stride_u,
-                    halfwidth, halfheight);
-      RotatePlane90(src_v, src_stride_v,
-                    dst_v, dst_stride_v,
-                    halfwidth, halfheight);
+      RotatePlane90(src_y, src_stride_y, dst_y, dst_stride_y, width, height);
+      RotatePlane90(src_u, src_stride_u, dst_u, dst_stride_u, halfwidth,
+                    halfheight);
+      RotatePlane90(src_v, src_stride_v, dst_v, dst_stride_v, halfwidth,
+                    halfheight);
       return 0;
     case kRotate270:
-      RotatePlane270(src_y, src_stride_y,
-                     dst_y, dst_stride_y,
-                     width, height);
-      RotatePlane270(src_u, src_stride_u,
-                     dst_u, dst_stride_u,
-                     halfwidth, halfheight);
-      RotatePlane270(src_v, src_stride_v,
-                     dst_v, dst_stride_v,
-                     halfwidth, halfheight);
+      RotatePlane270(src_y, src_stride_y, dst_y, dst_stride_y, width, height);
+      RotatePlane270(src_u, src_stride_u, dst_u, dst_stride_u, halfwidth,
+                     halfheight);
+      RotatePlane270(src_v, src_stride_v, dst_v, dst_stride_v, halfwidth,
+                     halfheight);
       return 0;
     case kRotate180:
-      RotatePlane180(src_y, src_stride_y,
-                     dst_y, dst_stride_y,
-                     width, height);
-      RotatePlane180(src_u, src_stride_u,
-                     dst_u, dst_stride_u,
-                     halfwidth, halfheight);
-      RotatePlane180(src_v, src_stride_v,
-                     dst_v, dst_stride_v,
-                     halfwidth, halfheight);
+      RotatePlane180(src_y, src_stride_y, dst_y, dst_stride_y, width, height);
+      RotatePlane180(src_u, src_stride_u, dst_u, dst_stride_u, halfwidth,
+                     halfheight);
+      RotatePlane180(src_v, src_stride_v, dst_v, dst_stride_v, halfwidth,
+                     halfheight);
       return 0;
     default:
       break;
@@ -419,17 +451,23 @@
 }
 
 LIBYUV_API
-int NV12ToI420Rotate(const uint8* src_y, int src_stride_y,
-                     const uint8* src_uv, int src_stride_uv,
-                     uint8* dst_y, int dst_stride_y,
-                     uint8* dst_u, int dst_stride_u,
-                     uint8* dst_v, int dst_stride_v,
-                     int width, int height,
+int NV12ToI420Rotate(const uint8_t* src_y,
+                     int src_stride_y,
+                     const uint8_t* src_uv,
+                     int src_stride_uv,
+                     uint8_t* dst_y,
+                     int dst_stride_y,
+                     uint8_t* dst_u,
+                     int dst_stride_u,
+                     uint8_t* dst_v,
+                     int dst_stride_v,
+                     int width,
+                     int height,
                      enum RotationMode mode) {
   int halfwidth = (width + 1) >> 1;
   int halfheight = (height + 1) >> 1;
-  if (!src_y || !src_uv || width <= 0 || height == 0 ||
-      !dst_y || !dst_u || !dst_v) {
+  if (!src_y || !src_uv || width <= 0 || height == 0 || !dst_y || !dst_u ||
+      !dst_v) {
     return -1;
   }
 
@@ -446,38 +484,23 @@
   switch (mode) {
     case kRotate0:
       // copy frame
-      return NV12ToI420(src_y, src_stride_y,
-                        src_uv, src_stride_uv,
-                        dst_y, dst_stride_y,
-                        dst_u, dst_stride_u,
-                        dst_v, dst_stride_v,
+      return NV12ToI420(src_y, src_stride_y, src_uv, src_stride_uv, dst_y,
+                        dst_stride_y, dst_u, dst_stride_u, dst_v, dst_stride_v,
                         width, height);
     case kRotate90:
-      RotatePlane90(src_y, src_stride_y,
-                    dst_y, dst_stride_y,
-                    width, height);
-      RotateUV90(src_uv, src_stride_uv,
-                 dst_u, dst_stride_u,
-                 dst_v, dst_stride_v,
-                 halfwidth, halfheight);
+      RotatePlane90(src_y, src_stride_y, dst_y, dst_stride_y, width, height);
+      RotateUV90(src_uv, src_stride_uv, dst_u, dst_stride_u, dst_v,
+                 dst_stride_v, halfwidth, halfheight);
       return 0;
     case kRotate270:
-      RotatePlane270(src_y, src_stride_y,
-                     dst_y, dst_stride_y,
-                     width, height);
-      RotateUV270(src_uv, src_stride_uv,
-                  dst_u, dst_stride_u,
-                  dst_v, dst_stride_v,
-                  halfwidth, halfheight);
+      RotatePlane270(src_y, src_stride_y, dst_y, dst_stride_y, width, height);
+      RotateUV270(src_uv, src_stride_uv, dst_u, dst_stride_u, dst_v,
+                  dst_stride_v, halfwidth, halfheight);
       return 0;
     case kRotate180:
-      RotatePlane180(src_y, src_stride_y,
-                     dst_y, dst_stride_y,
-                     width, height);
-      RotateUV180(src_uv, src_stride_uv,
-                  dst_u, dst_stride_u,
-                  dst_v, dst_stride_v,
-                  halfwidth, halfheight);
+      RotatePlane180(src_y, src_stride_y, dst_y, dst_stride_y, width, height);
+      RotateUV180(src_uv, src_stride_uv, dst_u, dst_stride_u, dst_v,
+                  dst_stride_v, halfwidth, halfheight);
       return 0;
     default:
       break;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_any.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_any.cc
index 31a74c3..c2752e6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_any.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_any.cc
@@ -18,16 +18,16 @@
 extern "C" {
 #endif
 
-#define TANY(NAMEANY, TPOS_SIMD, MASK)                                         \
-    void NAMEANY(const uint8* src, int src_stride,                             \
-                 uint8* dst, int dst_stride, int width) {                      \
-      int r = width & MASK;                                                    \
-      int n = width - r;                                                       \
-      if (n > 0) {                                                             \
-        TPOS_SIMD(src, src_stride, dst, dst_stride, n);                        \
-      }                                                                        \
-      TransposeWx8_C(src + n, src_stride, dst + n * dst_stride, dst_stride, r);\
-    }
+#define TANY(NAMEANY, TPOS_SIMD, MASK)                                        \
+  void NAMEANY(const uint8_t* src, int src_stride, uint8_t* dst,              \
+               int dst_stride, int width) {                                   \
+    int r = width & MASK;                                                     \
+    int n = width - r;                                                        \
+    if (n > 0) {                                                              \
+      TPOS_SIMD(src, src_stride, dst, dst_stride, n);                         \
+    }                                                                         \
+    TransposeWx8_C(src + n, src_stride, dst + n * dst_stride, dst_stride, r); \
+  }
 
 #ifdef HAS_TRANSPOSEWX8_NEON
 TANY(TransposeWx8_Any_NEON, TransposeWx8_NEON, 7)
@@ -38,25 +38,23 @@
 #ifdef HAS_TRANSPOSEWX8_FAST_SSSE3
 TANY(TransposeWx8_Fast_Any_SSSE3, TransposeWx8_Fast_SSSE3, 15)
 #endif
-#ifdef HAS_TRANSPOSEWX8_DSPR2
-TANY(TransposeWx8_Any_DSPR2, TransposeWx8_DSPR2, 7)
+#ifdef HAS_TRANSPOSEWX16_MSA
+TANY(TransposeWx16_Any_MSA, TransposeWx16_MSA, 15)
 #endif
 #undef TANY
 
 #define TUVANY(NAMEANY, TPOS_SIMD, MASK)                                       \
-    void NAMEANY(const uint8* src, int src_stride,                             \
-                uint8* dst_a, int dst_stride_a,                                \
-                uint8* dst_b, int dst_stride_b, int width) {                   \
-      int r = width & MASK;                                                    \
-      int n = width - r;                                                       \
-      if (n > 0) {                                                             \
-        TPOS_SIMD(src, src_stride, dst_a, dst_stride_a, dst_b, dst_stride_b,   \
-                  n);                                                          \
-      }                                                                        \
-      TransposeUVWx8_C(src + n * 2, src_stride,                                \
-                       dst_a + n * dst_stride_a, dst_stride_a,                 \
-                       dst_b + n * dst_stride_b, dst_stride_b, r);             \
-    }
+  void NAMEANY(const uint8_t* src, int src_stride, uint8_t* dst_a,             \
+               int dst_stride_a, uint8_t* dst_b, int dst_stride_b,             \
+               int width) {                                                    \
+    int r = width & MASK;                                                      \
+    int n = width - r;                                                         \
+    if (n > 0) {                                                               \
+      TPOS_SIMD(src, src_stride, dst_a, dst_stride_a, dst_b, dst_stride_b, n); \
+    }                                                                          \
+    TransposeUVWx8_C(src + n * 2, src_stride, dst_a + n * dst_stride_a,        \
+                     dst_stride_a, dst_b + n * dst_stride_b, dst_stride_b, r); \
+  }
 
 #ifdef HAS_TRANSPOSEUVWX8_NEON
 TUVANY(TransposeUVWx8_Any_NEON, TransposeUVWx8_NEON, 7)
@@ -64,8 +62,8 @@
 #ifdef HAS_TRANSPOSEUVWX8_SSE2
 TUVANY(TransposeUVWx8_Any_SSE2, TransposeUVWx8_SSE2, 7)
 #endif
-#ifdef HAS_TRANSPOSEUVWX8_DSPR2
-TUVANY(TransposeUVWx8_Any_DSPR2, TransposeUVWx8_DSPR2, 7)
+#ifdef HAS_TRANSPOSEUVWX16_MSA
+TUVANY(TransposeUVWx16_Any_MSA, TransposeUVWx16_MSA, 7)
 #endif
 #undef TUVANY
 
@@ -73,8 +71,3 @@
 }  // extern "C"
 }  // namespace libyuv
 #endif
-
-
-
-
-
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_argb.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_argb.cc
index 787c0ad..5a6e053 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_argb.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_argb.cc
@@ -10,90 +10,106 @@
 
 #include "libyuv/rotate.h"
 
-#include "libyuv/cpu_id.h"
 #include "libyuv/convert.h"
+#include "libyuv/cpu_id.h"
 #include "libyuv/planar_functions.h"
 #include "libyuv/row.h"
+#include "libyuv/scale_row.h" /* for ScaleARGBRowDownEven_ */
 
 #ifdef __cplusplus
 namespace libyuv {
 extern "C" {
 #endif
 
-// ARGBScale has a function to copy pixels to a row, striding each source
-// pixel by a constant.
-#if !defined(LIBYUV_DISABLE_X86) && \
-    (defined(_M_IX86) || \
-    (defined(__x86_64__) && !defined(__native_client__)) || defined(__i386__))
-#define HAS_SCALEARGBROWDOWNEVEN_SSE2
-void ScaleARGBRowDownEven_SSE2(const uint8* src_ptr, int src_stride,
-                               int src_stepx, uint8* dst_ptr, int dst_width);
-#endif
-#if !defined(LIBYUV_DISABLE_NEON) && !defined(__native_client__) && \
-    (defined(__ARM_NEON__) || defined(LIBYUV_NEON) || defined(__aarch64__))
-#define HAS_SCALEARGBROWDOWNEVEN_NEON
-void ScaleARGBRowDownEven_NEON(const uint8* src_ptr, int src_stride,
-                               int src_stepx, uint8* dst_ptr, int dst_width);
-#endif
-
-void ScaleARGBRowDownEven_C(const uint8* src_ptr, int,
-                            int src_stepx, uint8* dst_ptr, int dst_width);
-
-static void ARGBTranspose(const uint8* src, int src_stride,
-                          uint8* dst, int dst_stride, int width, int height) {
+static void ARGBTranspose(const uint8_t* src_argb,
+                          int src_stride_argb,
+                          uint8_t* dst_argb,
+                          int dst_stride_argb,
+                          int width,
+                          int height) {
   int i;
-  int src_pixel_step = src_stride >> 2;
-  void (*ScaleARGBRowDownEven)(const uint8* src_ptr, int src_stride,
-      int src_step, uint8* dst_ptr, int dst_width) = ScaleARGBRowDownEven_C;
+  int src_pixel_step = src_stride_argb >> 2;
+  void (*ScaleARGBRowDownEven)(
+      const uint8_t* src_argb, ptrdiff_t src_stride_argb, int src_step,
+      uint8_t* dst_argb, int dst_width) = ScaleARGBRowDownEven_C;
 #if defined(HAS_SCALEARGBROWDOWNEVEN_SSE2)
-  if (TestCpuFlag(kCpuHasSSE2) && IS_ALIGNED(height, 4)) {  // Width of dest.
-    ScaleARGBRowDownEven = ScaleARGBRowDownEven_SSE2;
+  if (TestCpuFlag(kCpuHasSSE2)) {
+    ScaleARGBRowDownEven = ScaleARGBRowDownEven_Any_SSE2;
+    if (IS_ALIGNED(height, 4)) {  // Width of dest.
+      ScaleARGBRowDownEven = ScaleARGBRowDownEven_SSE2;
+    }
   }
 #endif
 #if defined(HAS_SCALEARGBROWDOWNEVEN_NEON)
-  if (TestCpuFlag(kCpuHasNEON) && IS_ALIGNED(height, 4)) {  // Width of dest.
-    ScaleARGBRowDownEven = ScaleARGBRowDownEven_NEON;
+  if (TestCpuFlag(kCpuHasNEON)) {
+    ScaleARGBRowDownEven = ScaleARGBRowDownEven_Any_NEON;
+    if (IS_ALIGNED(height, 4)) {  // Width of dest.
+      ScaleARGBRowDownEven = ScaleARGBRowDownEven_NEON;
+    }
+  }
+#endif
+#if defined(HAS_SCALEARGBROWDOWNEVEN_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ScaleARGBRowDownEven = ScaleARGBRowDownEven_Any_MSA;
+    if (IS_ALIGNED(height, 4)) {  // Width of dest.
+      ScaleARGBRowDownEven = ScaleARGBRowDownEven_MSA;
+    }
   }
 #endif
 
   for (i = 0; i < width; ++i) {  // column of source to row of dest.
-    ScaleARGBRowDownEven(src, 0, src_pixel_step, dst, height);
-    dst += dst_stride;
-    src += 4;
+    ScaleARGBRowDownEven(src_argb, 0, src_pixel_step, dst_argb, height);
+    dst_argb += dst_stride_argb;
+    src_argb += 4;
   }
 }
 
-void ARGBRotate90(const uint8* src, int src_stride,
-                  uint8* dst, int dst_stride, int width, int height) {
+void ARGBRotate90(const uint8_t* src_argb,
+                  int src_stride_argb,
+                  uint8_t* dst_argb,
+                  int dst_stride_argb,
+                  int width,
+                  int height) {
   // Rotate by 90 is a ARGBTranspose with the source read
   // from bottom to top. So set the source pointer to the end
   // of the buffer and flip the sign of the source stride.
-  src += src_stride * (height - 1);
-  src_stride = -src_stride;
-  ARGBTranspose(src, src_stride, dst, dst_stride, width, height);
+  src_argb += src_stride_argb * (height - 1);
+  src_stride_argb = -src_stride_argb;
+  ARGBTranspose(src_argb, src_stride_argb, dst_argb, dst_stride_argb, width,
+                height);
 }
 
-void ARGBRotate270(const uint8* src, int src_stride,
-                    uint8* dst, int dst_stride, int width, int height) {
+void ARGBRotate270(const uint8_t* src_argb,
+                   int src_stride_argb,
+                   uint8_t* dst_argb,
+                   int dst_stride_argb,
+                   int width,
+                   int height) {
   // Rotate by 270 is a ARGBTranspose with the destination written
   // from bottom to top. So set the destination pointer to the end
   // of the buffer and flip the sign of the destination stride.
-  dst += dst_stride * (width - 1);
-  dst_stride = -dst_stride;
-  ARGBTranspose(src, src_stride, dst, dst_stride, width, height);
+  dst_argb += dst_stride_argb * (width - 1);
+  dst_stride_argb = -dst_stride_argb;
+  ARGBTranspose(src_argb, src_stride_argb, dst_argb, dst_stride_argb, width,
+                height);
 }
 
-void ARGBRotate180(const uint8* src, int src_stride,
-                   uint8* dst, int dst_stride, int width, int height) {
+void ARGBRotate180(const uint8_t* src_argb,
+                   int src_stride_argb,
+                   uint8_t* dst_argb,
+                   int dst_stride_argb,
+                   int width,
+                   int height) {
   // Swap first and last row and mirror the content. Uses a temporary row.
   align_buffer_64(row, width * 4);
-  const uint8* src_bot = src + src_stride * (height - 1);
-  uint8* dst_bot = dst + dst_stride * (height - 1);
+  const uint8_t* src_bot = src_argb + src_stride_argb * (height - 1);
+  uint8_t* dst_bot = dst_argb + dst_stride_argb * (height - 1);
   int half_height = (height + 1) >> 1;
   int y;
-  void (*ARGBMirrorRow)(const uint8* src, uint8* dst, int width) =
+  void (*ARGBMirrorRow)(const uint8_t* src_argb, uint8_t* dst_argb, int width) =
       ARGBMirrorRow_C;
-  void (*CopyRow)(const uint8* src, uint8* dst, int width) = CopyRow_C;
+  void (*CopyRow)(const uint8_t* src_argb, uint8_t* dst_argb, int width) =
+      CopyRow_C;
 #if defined(HAS_ARGBMIRRORROW_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
     ARGBMirrorRow = ARGBMirrorRow_Any_NEON;
@@ -118,6 +134,14 @@
     }
   }
 #endif
+#if defined(HAS_ARGBMIRRORROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ARGBMirrorRow = ARGBMirrorRow_Any_MSA;
+    if (IS_ALIGNED(width, 16)) {
+      ARGBMirrorRow = ARGBMirrorRow_MSA;
+    }
+  }
+#endif
 #if defined(HAS_COPYROW_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
     CopyRow = IS_ALIGNED(width * 4, 32) ? CopyRow_SSE2 : CopyRow_Any_SSE2;
@@ -138,28 +162,27 @@
     CopyRow = IS_ALIGNED(width * 4, 32) ? CopyRow_NEON : CopyRow_Any_NEON;
   }
 #endif
-#if defined(HAS_COPYROW_MIPS)
-  if (TestCpuFlag(kCpuHasMIPS)) {
-    CopyRow = CopyRow_MIPS;
-  }
-#endif
 
   // Odd height will harmlessly mirror the middle row twice.
   for (y = 0; y < half_height; ++y) {
-    ARGBMirrorRow(src, row, width);  // Mirror first row into a buffer
-    ARGBMirrorRow(src_bot, dst, width);  // Mirror last row into first row
+    ARGBMirrorRow(src_argb, row, width);      // Mirror first row into a buffer
+    ARGBMirrorRow(src_bot, dst_argb, width);  // Mirror last row into first row
     CopyRow(row, dst_bot, width * 4);  // Copy first mirrored row into last
-    src += src_stride;
-    dst += dst_stride;
-    src_bot -= src_stride;
-    dst_bot -= dst_stride;
+    src_argb += src_stride_argb;
+    dst_argb += dst_stride_argb;
+    src_bot -= src_stride_argb;
+    dst_bot -= dst_stride_argb;
   }
   free_aligned_buffer_64(row);
 }
 
 LIBYUV_API
-int ARGBRotate(const uint8* src_argb, int src_stride_argb,
-               uint8* dst_argb, int dst_stride_argb, int width, int height,
+int ARGBRotate(const uint8_t* src_argb,
+               int src_stride_argb,
+               uint8_t* dst_argb,
+               int dst_stride_argb,
+               int width,
+               int height,
                enum RotationMode mode) {
   if (!src_argb || width <= 0 || height == 0 || !dst_argb) {
     return -1;
@@ -175,23 +198,19 @@
   switch (mode) {
     case kRotate0:
       // copy frame
-      return ARGBCopy(src_argb, src_stride_argb,
-                      dst_argb, dst_stride_argb,
+      return ARGBCopy(src_argb, src_stride_argb, dst_argb, dst_stride_argb,
                       width, height);
     case kRotate90:
-      ARGBRotate90(src_argb, src_stride_argb,
-                   dst_argb, dst_stride_argb,
-                   width, height);
+      ARGBRotate90(src_argb, src_stride_argb, dst_argb, dst_stride_argb, width,
+                   height);
       return 0;
     case kRotate270:
-      ARGBRotate270(src_argb, src_stride_argb,
-                    dst_argb, dst_stride_argb,
-                    width, height);
+      ARGBRotate270(src_argb, src_stride_argb, dst_argb, dst_stride_argb, width,
+                    height);
       return 0;
     case kRotate180:
-      ARGBRotate180(src_argb, src_stride_argb,
-                    dst_argb, dst_stride_argb,
-                    width, height);
+      ARGBRotate180(src_argb, src_stride_argb, dst_argb, dst_stride_argb, width,
+                    height);
       return 0;
     default:
       break;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_common.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_common.cc
index b33a9a0..ff212ad 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_common.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_common.cc
@@ -8,16 +8,19 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#include "libyuv/row.h"
 #include "libyuv/rotate_row.h"
+#include "libyuv/row.h"
 
 #ifdef __cplusplus
 namespace libyuv {
 extern "C" {
 #endif
 
-void TransposeWx8_C(const uint8* src, int src_stride,
-                    uint8* dst, int dst_stride, int width) {
+void TransposeWx8_C(const uint8_t* src,
+                    int src_stride,
+                    uint8_t* dst,
+                    int dst_stride,
+                    int width) {
   int i;
   for (i = 0; i < width; ++i) {
     dst[0] = src[0 * src_stride];
@@ -33,9 +36,13 @@
   }
 }
 
-void TransposeUVWx8_C(const uint8* src, int src_stride,
-                      uint8* dst_a, int dst_stride_a,
-                      uint8* dst_b, int dst_stride_b, int width) {
+void TransposeUVWx8_C(const uint8_t* src,
+                      int src_stride,
+                      uint8_t* dst_a,
+                      int dst_stride_a,
+                      uint8_t* dst_b,
+                      int dst_stride_b,
+                      int width) {
   int i;
   for (i = 0; i < width; ++i) {
     dst_a[0] = src[0 * src_stride + 0];
@@ -60,9 +67,12 @@
   }
 }
 
-void TransposeWxH_C(const uint8* src, int src_stride,
-                    uint8* dst, int dst_stride,
-                    int width, int height) {
+void TransposeWxH_C(const uint8_t* src,
+                    int src_stride,
+                    uint8_t* dst,
+                    int dst_stride,
+                    int width,
+                    int height) {
   int i;
   for (i = 0; i < width; ++i) {
     int j;
@@ -72,10 +82,14 @@
   }
 }
 
-void TransposeUVWxH_C(const uint8* src, int src_stride,
-                      uint8* dst_a, int dst_stride_a,
-                      uint8* dst_b, int dst_stride_b,
-                      int width, int height) {
+void TransposeUVWxH_C(const uint8_t* src,
+                      int src_stride,
+                      uint8_t* dst_a,
+                      int dst_stride_a,
+                      uint8_t* dst_b,
+                      int dst_stride_b,
+                      int width,
+                      int height) {
   int i;
   for (i = 0; i < width * 2; i += 2) {
     int j;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_gcc.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_gcc.cc
index cbe870c..04e19e2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_gcc.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_gcc.cc
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#include "libyuv/row.h"
 #include "libyuv/rotate_row.h"
+#include "libyuv/row.h"
 
 #ifdef __cplusplus
 namespace libyuv {
@@ -22,342 +22,348 @@
 
 // Transpose 8x8. 32 or 64 bit, but not NaCL for 64 bit.
 #if defined(HAS_TRANSPOSEWX8_SSSE3)
-void TransposeWx8_SSSE3(const uint8* src, int src_stride,
-                        uint8* dst, int dst_stride, int width) {
-  asm volatile (
-    // Read in the data from the source pointer.
-    // First round of bit swap.
-    LABELALIGN
-  "1:                                            \n"
-    "movq       (%0),%%xmm0                      \n"
-    "movq       (%0,%3),%%xmm1                   \n"
-    "lea        (%0,%3,2),%0                     \n"
-    "punpcklbw  %%xmm1,%%xmm0                    \n"
-    "movq       (%0),%%xmm2                      \n"
-    "movdqa     %%xmm0,%%xmm1                    \n"
-    "palignr    $0x8,%%xmm1,%%xmm1               \n"
-    "movq       (%0,%3),%%xmm3                   \n"
-    "lea        (%0,%3,2),%0                     \n"
-    "punpcklbw  %%xmm3,%%xmm2                    \n"
-    "movdqa     %%xmm2,%%xmm3                    \n"
-    "movq       (%0),%%xmm4                      \n"
-    "palignr    $0x8,%%xmm3,%%xmm3               \n"
-    "movq       (%0,%3),%%xmm5                   \n"
-    "lea        (%0,%3,2),%0                     \n"
-    "punpcklbw  %%xmm5,%%xmm4                    \n"
-    "movdqa     %%xmm4,%%xmm5                    \n"
-    "movq       (%0),%%xmm6                      \n"
-    "palignr    $0x8,%%xmm5,%%xmm5               \n"
-    "movq       (%0,%3),%%xmm7                   \n"
-    "lea        (%0,%3,2),%0                     \n"
-    "punpcklbw  %%xmm7,%%xmm6                    \n"
-    "neg        %3                               \n"
-    "movdqa     %%xmm6,%%xmm7                    \n"
-    "lea        0x8(%0,%3,8),%0                  \n"
-    "palignr    $0x8,%%xmm7,%%xmm7               \n"
-    "neg        %3                               \n"
-     // Second round of bit swap.
-    "punpcklwd  %%xmm2,%%xmm0                    \n"
-    "punpcklwd  %%xmm3,%%xmm1                    \n"
-    "movdqa     %%xmm0,%%xmm2                    \n"
-    "movdqa     %%xmm1,%%xmm3                    \n"
-    "palignr    $0x8,%%xmm2,%%xmm2               \n"
-    "palignr    $0x8,%%xmm3,%%xmm3               \n"
-    "punpcklwd  %%xmm6,%%xmm4                    \n"
-    "punpcklwd  %%xmm7,%%xmm5                    \n"
-    "movdqa     %%xmm4,%%xmm6                    \n"
-    "movdqa     %%xmm5,%%xmm7                    \n"
-    "palignr    $0x8,%%xmm6,%%xmm6               \n"
-    "palignr    $0x8,%%xmm7,%%xmm7               \n"
-    // Third round of bit swap.
-    // Write to the destination pointer.
-    "punpckldq  %%xmm4,%%xmm0                    \n"
-    "movq       %%xmm0,(%1)                      \n"
-    "movdqa     %%xmm0,%%xmm4                    \n"
-    "palignr    $0x8,%%xmm4,%%xmm4               \n"
-    "movq       %%xmm4,(%1,%4)                   \n"
-    "lea        (%1,%4,2),%1                     \n"
-    "punpckldq  %%xmm6,%%xmm2                    \n"
-    "movdqa     %%xmm2,%%xmm6                    \n"
-    "movq       %%xmm2,(%1)                      \n"
-    "palignr    $0x8,%%xmm6,%%xmm6               \n"
-    "punpckldq  %%xmm5,%%xmm1                    \n"
-    "movq       %%xmm6,(%1,%4)                   \n"
-    "lea        (%1,%4,2),%1                     \n"
-    "movdqa     %%xmm1,%%xmm5                    \n"
-    "movq       %%xmm1,(%1)                      \n"
-    "palignr    $0x8,%%xmm5,%%xmm5               \n"
-    "movq       %%xmm5,(%1,%4)                   \n"
-    "lea        (%1,%4,2),%1                     \n"
-    "punpckldq  %%xmm7,%%xmm3                    \n"
-    "movq       %%xmm3,(%1)                      \n"
-    "movdqa     %%xmm3,%%xmm7                    \n"
-    "palignr    $0x8,%%xmm7,%%xmm7               \n"
-    "sub        $0x8,%2                          \n"
-    "movq       %%xmm7,(%1,%4)                   \n"
-    "lea        (%1,%4,2),%1                     \n"
-    "jg         1b                               \n"
-    : "+r"(src),    // %0
-      "+r"(dst),    // %1
-      "+r"(width)   // %2
-    : "r"((intptr_t)(src_stride)),  // %3
-      "r"((intptr_t)(dst_stride))   // %4
-    : "memory", "cc",
-      "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+void TransposeWx8_SSSE3(const uint8_t* src,
+                        int src_stride,
+                        uint8_t* dst,
+                        int dst_stride,
+                        int width) {
+  asm volatile(
+      // Read in the data from the source pointer.
+      // First round of bit swap.
+      LABELALIGN
+      "1:                                          \n"
+      "movq       (%0),%%xmm0                      \n"
+      "movq       (%0,%3),%%xmm1                   \n"
+      "lea        (%0,%3,2),%0                     \n"
+      "punpcklbw  %%xmm1,%%xmm0                    \n"
+      "movq       (%0),%%xmm2                      \n"
+      "movdqa     %%xmm0,%%xmm1                    \n"
+      "palignr    $0x8,%%xmm1,%%xmm1               \n"
+      "movq       (%0,%3),%%xmm3                   \n"
+      "lea        (%0,%3,2),%0                     \n"
+      "punpcklbw  %%xmm3,%%xmm2                    \n"
+      "movdqa     %%xmm2,%%xmm3                    \n"
+      "movq       (%0),%%xmm4                      \n"
+      "palignr    $0x8,%%xmm3,%%xmm3               \n"
+      "movq       (%0,%3),%%xmm5                   \n"
+      "lea        (%0,%3,2),%0                     \n"
+      "punpcklbw  %%xmm5,%%xmm4                    \n"
+      "movdqa     %%xmm4,%%xmm5                    \n"
+      "movq       (%0),%%xmm6                      \n"
+      "palignr    $0x8,%%xmm5,%%xmm5               \n"
+      "movq       (%0,%3),%%xmm7                   \n"
+      "lea        (%0,%3,2),%0                     \n"
+      "punpcklbw  %%xmm7,%%xmm6                    \n"
+      "neg        %3                               \n"
+      "movdqa     %%xmm6,%%xmm7                    \n"
+      "lea        0x8(%0,%3,8),%0                  \n"
+      "palignr    $0x8,%%xmm7,%%xmm7               \n"
+      "neg        %3                               \n"
+      // Second round of bit swap.
+      "punpcklwd  %%xmm2,%%xmm0                    \n"
+      "punpcklwd  %%xmm3,%%xmm1                    \n"
+      "movdqa     %%xmm0,%%xmm2                    \n"
+      "movdqa     %%xmm1,%%xmm3                    \n"
+      "palignr    $0x8,%%xmm2,%%xmm2               \n"
+      "palignr    $0x8,%%xmm3,%%xmm3               \n"
+      "punpcklwd  %%xmm6,%%xmm4                    \n"
+      "punpcklwd  %%xmm7,%%xmm5                    \n"
+      "movdqa     %%xmm4,%%xmm6                    \n"
+      "movdqa     %%xmm5,%%xmm7                    \n"
+      "palignr    $0x8,%%xmm6,%%xmm6               \n"
+      "palignr    $0x8,%%xmm7,%%xmm7               \n"
+      // Third round of bit swap.
+      // Write to the destination pointer.
+      "punpckldq  %%xmm4,%%xmm0                    \n"
+      "movq       %%xmm0,(%1)                      \n"
+      "movdqa     %%xmm0,%%xmm4                    \n"
+      "palignr    $0x8,%%xmm4,%%xmm4               \n"
+      "movq       %%xmm4,(%1,%4)                   \n"
+      "lea        (%1,%4,2),%1                     \n"
+      "punpckldq  %%xmm6,%%xmm2                    \n"
+      "movdqa     %%xmm2,%%xmm6                    \n"
+      "movq       %%xmm2,(%1)                      \n"
+      "palignr    $0x8,%%xmm6,%%xmm6               \n"
+      "punpckldq  %%xmm5,%%xmm1                    \n"
+      "movq       %%xmm6,(%1,%4)                   \n"
+      "lea        (%1,%4,2),%1                     \n"
+      "movdqa     %%xmm1,%%xmm5                    \n"
+      "movq       %%xmm1,(%1)                      \n"
+      "palignr    $0x8,%%xmm5,%%xmm5               \n"
+      "movq       %%xmm5,(%1,%4)                   \n"
+      "lea        (%1,%4,2),%1                     \n"
+      "punpckldq  %%xmm7,%%xmm3                    \n"
+      "movq       %%xmm3,(%1)                      \n"
+      "movdqa     %%xmm3,%%xmm7                    \n"
+      "palignr    $0x8,%%xmm7,%%xmm7               \n"
+      "sub        $0x8,%2                          \n"
+      "movq       %%xmm7,(%1,%4)                   \n"
+      "lea        (%1,%4,2),%1                     \n"
+      "jg         1b                               \n"
+      : "+r"(src),                    // %0
+        "+r"(dst),                    // %1
+        "+r"(width)                   // %2
+      : "r"((intptr_t)(src_stride)),  // %3
+        "r"((intptr_t)(dst_stride))   // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 #endif  // defined(HAS_TRANSPOSEWX8_SSSE3)
 
 // Transpose 16x8. 64 bit
 #if defined(HAS_TRANSPOSEWX8_FAST_SSSE3)
-void TransposeWx8_Fast_SSSE3(const uint8* src, int src_stride,
-                             uint8* dst, int dst_stride, int width) {
-  asm volatile (
-    // Read in the data from the source pointer.
-    // First round of bit swap.
-    LABELALIGN
-  "1:                                            \n"
-    "movdqu     (%0),%%xmm0                      \n"
-    "movdqu     (%0,%3),%%xmm1                   \n"
-    "lea        (%0,%3,2),%0                     \n"
-    "movdqa     %%xmm0,%%xmm8                    \n"
-    "punpcklbw  %%xmm1,%%xmm0                    \n"
-    "punpckhbw  %%xmm1,%%xmm8                    \n"
-    "movdqu     (%0),%%xmm2                      \n"
-    "movdqa     %%xmm0,%%xmm1                    \n"
-    "movdqa     %%xmm8,%%xmm9                    \n"
-    "palignr    $0x8,%%xmm1,%%xmm1               \n"
-    "palignr    $0x8,%%xmm9,%%xmm9               \n"
-    "movdqu     (%0,%3),%%xmm3                   \n"
-    "lea        (%0,%3,2),%0                     \n"
-    "movdqa     %%xmm2,%%xmm10                   \n"
-    "punpcklbw  %%xmm3,%%xmm2                    \n"
-    "punpckhbw  %%xmm3,%%xmm10                   \n"
-    "movdqa     %%xmm2,%%xmm3                    \n"
-    "movdqa     %%xmm10,%%xmm11                  \n"
-    "movdqu     (%0),%%xmm4                      \n"
-    "palignr    $0x8,%%xmm3,%%xmm3               \n"
-    "palignr    $0x8,%%xmm11,%%xmm11             \n"
-    "movdqu     (%0,%3),%%xmm5                   \n"
-    "lea        (%0,%3,2),%0                     \n"
-    "movdqa     %%xmm4,%%xmm12                   \n"
-    "punpcklbw  %%xmm5,%%xmm4                    \n"
-    "punpckhbw  %%xmm5,%%xmm12                   \n"
-    "movdqa     %%xmm4,%%xmm5                    \n"
-    "movdqa     %%xmm12,%%xmm13                  \n"
-    "movdqu     (%0),%%xmm6                      \n"
-    "palignr    $0x8,%%xmm5,%%xmm5               \n"
-    "palignr    $0x8,%%xmm13,%%xmm13             \n"
-    "movdqu     (%0,%3),%%xmm7                   \n"
-    "lea        (%0,%3,2),%0                     \n"
-    "movdqa     %%xmm6,%%xmm14                   \n"
-    "punpcklbw  %%xmm7,%%xmm6                    \n"
-    "punpckhbw  %%xmm7,%%xmm14                   \n"
-    "neg        %3                               \n"
-    "movdqa     %%xmm6,%%xmm7                    \n"
-    "movdqa     %%xmm14,%%xmm15                  \n"
-    "lea        0x10(%0,%3,8),%0                 \n"
-    "palignr    $0x8,%%xmm7,%%xmm7               \n"
-    "palignr    $0x8,%%xmm15,%%xmm15             \n"
-    "neg        %3                               \n"
-     // Second round of bit swap.
-    "punpcklwd  %%xmm2,%%xmm0                    \n"
-    "punpcklwd  %%xmm3,%%xmm1                    \n"
-    "movdqa     %%xmm0,%%xmm2                    \n"
-    "movdqa     %%xmm1,%%xmm3                    \n"
-    "palignr    $0x8,%%xmm2,%%xmm2               \n"
-    "palignr    $0x8,%%xmm3,%%xmm3               \n"
-    "punpcklwd  %%xmm6,%%xmm4                    \n"
-    "punpcklwd  %%xmm7,%%xmm5                    \n"
-    "movdqa     %%xmm4,%%xmm6                    \n"
-    "movdqa     %%xmm5,%%xmm7                    \n"
-    "palignr    $0x8,%%xmm6,%%xmm6               \n"
-    "palignr    $0x8,%%xmm7,%%xmm7               \n"
-    "punpcklwd  %%xmm10,%%xmm8                   \n"
-    "punpcklwd  %%xmm11,%%xmm9                   \n"
-    "movdqa     %%xmm8,%%xmm10                   \n"
-    "movdqa     %%xmm9,%%xmm11                   \n"
-    "palignr    $0x8,%%xmm10,%%xmm10             \n"
-    "palignr    $0x8,%%xmm11,%%xmm11             \n"
-    "punpcklwd  %%xmm14,%%xmm12                  \n"
-    "punpcklwd  %%xmm15,%%xmm13                  \n"
-    "movdqa     %%xmm12,%%xmm14                  \n"
-    "movdqa     %%xmm13,%%xmm15                  \n"
-    "palignr    $0x8,%%xmm14,%%xmm14             \n"
-    "palignr    $0x8,%%xmm15,%%xmm15             \n"
-    // Third round of bit swap.
-    // Write to the destination pointer.
-    "punpckldq  %%xmm4,%%xmm0                    \n"
-    "movq       %%xmm0,(%1)                      \n"
-    "movdqa     %%xmm0,%%xmm4                    \n"
-    "palignr    $0x8,%%xmm4,%%xmm4               \n"
-    "movq       %%xmm4,(%1,%4)                   \n"
-    "lea        (%1,%4,2),%1                     \n"
-    "punpckldq  %%xmm6,%%xmm2                    \n"
-    "movdqa     %%xmm2,%%xmm6                    \n"
-    "movq       %%xmm2,(%1)                      \n"
-    "palignr    $0x8,%%xmm6,%%xmm6               \n"
-    "punpckldq  %%xmm5,%%xmm1                    \n"
-    "movq       %%xmm6,(%1,%4)                   \n"
-    "lea        (%1,%4,2),%1                     \n"
-    "movdqa     %%xmm1,%%xmm5                    \n"
-    "movq       %%xmm1,(%1)                      \n"
-    "palignr    $0x8,%%xmm5,%%xmm5               \n"
-    "movq       %%xmm5,(%1,%4)                   \n"
-    "lea        (%1,%4,2),%1                     \n"
-    "punpckldq  %%xmm7,%%xmm3                    \n"
-    "movq       %%xmm3,(%1)                      \n"
-    "movdqa     %%xmm3,%%xmm7                    \n"
-    "palignr    $0x8,%%xmm7,%%xmm7               \n"
-    "movq       %%xmm7,(%1,%4)                   \n"
-    "lea        (%1,%4,2),%1                     \n"
-    "punpckldq  %%xmm12,%%xmm8                   \n"
-    "movq       %%xmm8,(%1)                      \n"
-    "movdqa     %%xmm8,%%xmm12                   \n"
-    "palignr    $0x8,%%xmm12,%%xmm12             \n"
-    "movq       %%xmm12,(%1,%4)                  \n"
-    "lea        (%1,%4,2),%1                     \n"
-    "punpckldq  %%xmm14,%%xmm10                  \n"
-    "movdqa     %%xmm10,%%xmm14                  \n"
-    "movq       %%xmm10,(%1)                     \n"
-    "palignr    $0x8,%%xmm14,%%xmm14             \n"
-    "punpckldq  %%xmm13,%%xmm9                   \n"
-    "movq       %%xmm14,(%1,%4)                  \n"
-    "lea        (%1,%4,2),%1                     \n"
-    "movdqa     %%xmm9,%%xmm13                   \n"
-    "movq       %%xmm9,(%1)                      \n"
-    "palignr    $0x8,%%xmm13,%%xmm13             \n"
-    "movq       %%xmm13,(%1,%4)                  \n"
-    "lea        (%1,%4,2),%1                     \n"
-    "punpckldq  %%xmm15,%%xmm11                  \n"
-    "movq       %%xmm11,(%1)                     \n"
-    "movdqa     %%xmm11,%%xmm15                  \n"
-    "palignr    $0x8,%%xmm15,%%xmm15             \n"
-    "sub        $0x10,%2                         \n"
-    "movq       %%xmm15,(%1,%4)                  \n"
-    "lea        (%1,%4,2),%1                     \n"
-    "jg         1b                               \n"
-    : "+r"(src),    // %0
-      "+r"(dst),    // %1
-      "+r"(width)   // %2
-    : "r"((intptr_t)(src_stride)),  // %3
-      "r"((intptr_t)(dst_stride))   // %4
-    : "memory", "cc",
-      "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7",
-      "xmm8", "xmm9", "xmm10", "xmm11", "xmm12", "xmm13",  "xmm14",  "xmm15"
-  );
+void TransposeWx8_Fast_SSSE3(const uint8_t* src,
+                             int src_stride,
+                             uint8_t* dst,
+                             int dst_stride,
+                             int width) {
+  asm volatile(
+      // Read in the data from the source pointer.
+      // First round of bit swap.
+      LABELALIGN
+      "1:                                          \n"
+      "movdqu     (%0),%%xmm0                      \n"
+      "movdqu     (%0,%3),%%xmm1                   \n"
+      "lea        (%0,%3,2),%0                     \n"
+      "movdqa     %%xmm0,%%xmm8                    \n"
+      "punpcklbw  %%xmm1,%%xmm0                    \n"
+      "punpckhbw  %%xmm1,%%xmm8                    \n"
+      "movdqu     (%0),%%xmm2                      \n"
+      "movdqa     %%xmm0,%%xmm1                    \n"
+      "movdqa     %%xmm8,%%xmm9                    \n"
+      "palignr    $0x8,%%xmm1,%%xmm1               \n"
+      "palignr    $0x8,%%xmm9,%%xmm9               \n"
+      "movdqu     (%0,%3),%%xmm3                   \n"
+      "lea        (%0,%3,2),%0                     \n"
+      "movdqa     %%xmm2,%%xmm10                   \n"
+      "punpcklbw  %%xmm3,%%xmm2                    \n"
+      "punpckhbw  %%xmm3,%%xmm10                   \n"
+      "movdqa     %%xmm2,%%xmm3                    \n"
+      "movdqa     %%xmm10,%%xmm11                  \n"
+      "movdqu     (%0),%%xmm4                      \n"
+      "palignr    $0x8,%%xmm3,%%xmm3               \n"
+      "palignr    $0x8,%%xmm11,%%xmm11             \n"
+      "movdqu     (%0,%3),%%xmm5                   \n"
+      "lea        (%0,%3,2),%0                     \n"
+      "movdqa     %%xmm4,%%xmm12                   \n"
+      "punpcklbw  %%xmm5,%%xmm4                    \n"
+      "punpckhbw  %%xmm5,%%xmm12                   \n"
+      "movdqa     %%xmm4,%%xmm5                    \n"
+      "movdqa     %%xmm12,%%xmm13                  \n"
+      "movdqu     (%0),%%xmm6                      \n"
+      "palignr    $0x8,%%xmm5,%%xmm5               \n"
+      "palignr    $0x8,%%xmm13,%%xmm13             \n"
+      "movdqu     (%0,%3),%%xmm7                   \n"
+      "lea        (%0,%3,2),%0                     \n"
+      "movdqa     %%xmm6,%%xmm14                   \n"
+      "punpcklbw  %%xmm7,%%xmm6                    \n"
+      "punpckhbw  %%xmm7,%%xmm14                   \n"
+      "neg        %3                               \n"
+      "movdqa     %%xmm6,%%xmm7                    \n"
+      "movdqa     %%xmm14,%%xmm15                  \n"
+      "lea        0x10(%0,%3,8),%0                 \n"
+      "palignr    $0x8,%%xmm7,%%xmm7               \n"
+      "palignr    $0x8,%%xmm15,%%xmm15             \n"
+      "neg        %3                               \n"
+      // Second round of bit swap.
+      "punpcklwd  %%xmm2,%%xmm0                    \n"
+      "punpcklwd  %%xmm3,%%xmm1                    \n"
+      "movdqa     %%xmm0,%%xmm2                    \n"
+      "movdqa     %%xmm1,%%xmm3                    \n"
+      "palignr    $0x8,%%xmm2,%%xmm2               \n"
+      "palignr    $0x8,%%xmm3,%%xmm3               \n"
+      "punpcklwd  %%xmm6,%%xmm4                    \n"
+      "punpcklwd  %%xmm7,%%xmm5                    \n"
+      "movdqa     %%xmm4,%%xmm6                    \n"
+      "movdqa     %%xmm5,%%xmm7                    \n"
+      "palignr    $0x8,%%xmm6,%%xmm6               \n"
+      "palignr    $0x8,%%xmm7,%%xmm7               \n"
+      "punpcklwd  %%xmm10,%%xmm8                   \n"
+      "punpcklwd  %%xmm11,%%xmm9                   \n"
+      "movdqa     %%xmm8,%%xmm10                   \n"
+      "movdqa     %%xmm9,%%xmm11                   \n"
+      "palignr    $0x8,%%xmm10,%%xmm10             \n"
+      "palignr    $0x8,%%xmm11,%%xmm11             \n"
+      "punpcklwd  %%xmm14,%%xmm12                  \n"
+      "punpcklwd  %%xmm15,%%xmm13                  \n"
+      "movdqa     %%xmm12,%%xmm14                  \n"
+      "movdqa     %%xmm13,%%xmm15                  \n"
+      "palignr    $0x8,%%xmm14,%%xmm14             \n"
+      "palignr    $0x8,%%xmm15,%%xmm15             \n"
+      // Third round of bit swap.
+      // Write to the destination pointer.
+      "punpckldq  %%xmm4,%%xmm0                    \n"
+      "movq       %%xmm0,(%1)                      \n"
+      "movdqa     %%xmm0,%%xmm4                    \n"
+      "palignr    $0x8,%%xmm4,%%xmm4               \n"
+      "movq       %%xmm4,(%1,%4)                   \n"
+      "lea        (%1,%4,2),%1                     \n"
+      "punpckldq  %%xmm6,%%xmm2                    \n"
+      "movdqa     %%xmm2,%%xmm6                    \n"
+      "movq       %%xmm2,(%1)                      \n"
+      "palignr    $0x8,%%xmm6,%%xmm6               \n"
+      "punpckldq  %%xmm5,%%xmm1                    \n"
+      "movq       %%xmm6,(%1,%4)                   \n"
+      "lea        (%1,%4,2),%1                     \n"
+      "movdqa     %%xmm1,%%xmm5                    \n"
+      "movq       %%xmm1,(%1)                      \n"
+      "palignr    $0x8,%%xmm5,%%xmm5               \n"
+      "movq       %%xmm5,(%1,%4)                   \n"
+      "lea        (%1,%4,2),%1                     \n"
+      "punpckldq  %%xmm7,%%xmm3                    \n"
+      "movq       %%xmm3,(%1)                      \n"
+      "movdqa     %%xmm3,%%xmm7                    \n"
+      "palignr    $0x8,%%xmm7,%%xmm7               \n"
+      "movq       %%xmm7,(%1,%4)                   \n"
+      "lea        (%1,%4,2),%1                     \n"
+      "punpckldq  %%xmm12,%%xmm8                   \n"
+      "movq       %%xmm8,(%1)                      \n"
+      "movdqa     %%xmm8,%%xmm12                   \n"
+      "palignr    $0x8,%%xmm12,%%xmm12             \n"
+      "movq       %%xmm12,(%1,%4)                  \n"
+      "lea        (%1,%4,2),%1                     \n"
+      "punpckldq  %%xmm14,%%xmm10                  \n"
+      "movdqa     %%xmm10,%%xmm14                  \n"
+      "movq       %%xmm10,(%1)                     \n"
+      "palignr    $0x8,%%xmm14,%%xmm14             \n"
+      "punpckldq  %%xmm13,%%xmm9                   \n"
+      "movq       %%xmm14,(%1,%4)                  \n"
+      "lea        (%1,%4,2),%1                     \n"
+      "movdqa     %%xmm9,%%xmm13                   \n"
+      "movq       %%xmm9,(%1)                      \n"
+      "palignr    $0x8,%%xmm13,%%xmm13             \n"
+      "movq       %%xmm13,(%1,%4)                  \n"
+      "lea        (%1,%4,2),%1                     \n"
+      "punpckldq  %%xmm15,%%xmm11                  \n"
+      "movq       %%xmm11,(%1)                     \n"
+      "movdqa     %%xmm11,%%xmm15                  \n"
+      "palignr    $0x8,%%xmm15,%%xmm15             \n"
+      "sub        $0x10,%2                         \n"
+      "movq       %%xmm15,(%1,%4)                  \n"
+      "lea        (%1,%4,2),%1                     \n"
+      "jg         1b                               \n"
+      : "+r"(src),                    // %0
+        "+r"(dst),                    // %1
+        "+r"(width)                   // %2
+      : "r"((intptr_t)(src_stride)),  // %3
+        "r"((intptr_t)(dst_stride))   // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7", "xmm8", "xmm9", "xmm10", "xmm11", "xmm12", "xmm13", "xmm14",
+        "xmm15");
 }
 #endif  // defined(HAS_TRANSPOSEWX8_FAST_SSSE3)
 
 // Transpose UV 8x8.  64 bit.
 #if defined(HAS_TRANSPOSEUVWX8_SSE2)
-void TransposeUVWx8_SSE2(const uint8* src, int src_stride,
-                         uint8* dst_a, int dst_stride_a,
-                         uint8* dst_b, int dst_stride_b, int width) {
-  asm volatile (
-    // Read in the data from the source pointer.
-    // First round of bit swap.
-    LABELALIGN
-  "1:                                            \n"
-    "movdqu     (%0),%%xmm0                      \n"
-    "movdqu     (%0,%4),%%xmm1                   \n"
-    "lea        (%0,%4,2),%0                     \n"
-    "movdqa     %%xmm0,%%xmm8                    \n"
-    "punpcklbw  %%xmm1,%%xmm0                    \n"
-    "punpckhbw  %%xmm1,%%xmm8                    \n"
-    "movdqa     %%xmm8,%%xmm1                    \n"
-    "movdqu     (%0),%%xmm2                      \n"
-    "movdqu     (%0,%4),%%xmm3                   \n"
-    "lea        (%0,%4,2),%0                     \n"
-    "movdqa     %%xmm2,%%xmm8                    \n"
-    "punpcklbw  %%xmm3,%%xmm2                    \n"
-    "punpckhbw  %%xmm3,%%xmm8                    \n"
-    "movdqa     %%xmm8,%%xmm3                    \n"
-    "movdqu     (%0),%%xmm4                      \n"
-    "movdqu     (%0,%4),%%xmm5                   \n"
-    "lea        (%0,%4,2),%0                     \n"
-    "movdqa     %%xmm4,%%xmm8                    \n"
-    "punpcklbw  %%xmm5,%%xmm4                    \n"
-    "punpckhbw  %%xmm5,%%xmm8                    \n"
-    "movdqa     %%xmm8,%%xmm5                    \n"
-    "movdqu     (%0),%%xmm6                      \n"
-    "movdqu     (%0,%4),%%xmm7                   \n"
-    "lea        (%0,%4,2),%0                     \n"
-    "movdqa     %%xmm6,%%xmm8                    \n"
-    "punpcklbw  %%xmm7,%%xmm6                    \n"
-    "neg        %4                               \n"
-    "lea        0x10(%0,%4,8),%0                 \n"
-    "punpckhbw  %%xmm7,%%xmm8                    \n"
-    "movdqa     %%xmm8,%%xmm7                    \n"
-    "neg        %4                               \n"
-     // Second round of bit swap.
-    "movdqa     %%xmm0,%%xmm8                    \n"
-    "movdqa     %%xmm1,%%xmm9                    \n"
-    "punpckhwd  %%xmm2,%%xmm8                    \n"
-    "punpckhwd  %%xmm3,%%xmm9                    \n"
-    "punpcklwd  %%xmm2,%%xmm0                    \n"
-    "punpcklwd  %%xmm3,%%xmm1                    \n"
-    "movdqa     %%xmm8,%%xmm2                    \n"
-    "movdqa     %%xmm9,%%xmm3                    \n"
-    "movdqa     %%xmm4,%%xmm8                    \n"
-    "movdqa     %%xmm5,%%xmm9                    \n"
-    "punpckhwd  %%xmm6,%%xmm8                    \n"
-    "punpckhwd  %%xmm7,%%xmm9                    \n"
-    "punpcklwd  %%xmm6,%%xmm4                    \n"
-    "punpcklwd  %%xmm7,%%xmm5                    \n"
-    "movdqa     %%xmm8,%%xmm6                    \n"
-    "movdqa     %%xmm9,%%xmm7                    \n"
-    // Third round of bit swap.
-    // Write to the destination pointer.
-    "movdqa     %%xmm0,%%xmm8                    \n"
-    "punpckldq  %%xmm4,%%xmm0                    \n"
-    "movlpd     %%xmm0,(%1)                      \n"  // Write back U channel
-    "movhpd     %%xmm0,(%2)                      \n"  // Write back V channel
-    "punpckhdq  %%xmm4,%%xmm8                    \n"
-    "movlpd     %%xmm8,(%1,%5)                   \n"
-    "lea        (%1,%5,2),%1                     \n"
-    "movhpd     %%xmm8,(%2,%6)                   \n"
-    "lea        (%2,%6,2),%2                     \n"
-    "movdqa     %%xmm2,%%xmm8                    \n"
-    "punpckldq  %%xmm6,%%xmm2                    \n"
-    "movlpd     %%xmm2,(%1)                      \n"
-    "movhpd     %%xmm2,(%2)                      \n"
-    "punpckhdq  %%xmm6,%%xmm8                    \n"
-    "movlpd     %%xmm8,(%1,%5)                   \n"
-    "lea        (%1,%5,2),%1                     \n"
-    "movhpd     %%xmm8,(%2,%6)                   \n"
-    "lea        (%2,%6,2),%2                     \n"
-    "movdqa     %%xmm1,%%xmm8                    \n"
-    "punpckldq  %%xmm5,%%xmm1                    \n"
-    "movlpd     %%xmm1,(%1)                      \n"
-    "movhpd     %%xmm1,(%2)                      \n"
-    "punpckhdq  %%xmm5,%%xmm8                    \n"
-    "movlpd     %%xmm8,(%1,%5)                   \n"
-    "lea        (%1,%5,2),%1                     \n"
-    "movhpd     %%xmm8,(%2,%6)                   \n"
-    "lea        (%2,%6,2),%2                     \n"
-    "movdqa     %%xmm3,%%xmm8                    \n"
-    "punpckldq  %%xmm7,%%xmm3                    \n"
-    "movlpd     %%xmm3,(%1)                      \n"
-    "movhpd     %%xmm3,(%2)                      \n"
-    "punpckhdq  %%xmm7,%%xmm8                    \n"
-    "sub        $0x8,%3                          \n"
-    "movlpd     %%xmm8,(%1,%5)                   \n"
-    "lea        (%1,%5,2),%1                     \n"
-    "movhpd     %%xmm8,(%2,%6)                   \n"
-    "lea        (%2,%6,2),%2                     \n"
-    "jg         1b                               \n"
-    : "+r"(src),    // %0
-      "+r"(dst_a),  // %1
-      "+r"(dst_b),  // %2
-      "+r"(width)   // %3
-    : "r"((intptr_t)(src_stride)),    // %4
-      "r"((intptr_t)(dst_stride_a)),  // %5
-      "r"((intptr_t)(dst_stride_b))   // %6
-    : "memory", "cc",
-      "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7",
-      "xmm8", "xmm9"
-  );
+void TransposeUVWx8_SSE2(const uint8_t* src,
+                         int src_stride,
+                         uint8_t* dst_a,
+                         int dst_stride_a,
+                         uint8_t* dst_b,
+                         int dst_stride_b,
+                         int width) {
+  asm volatile(
+      // Read in the data from the source pointer.
+      // First round of bit swap.
+      LABELALIGN
+      "1:                                          \n"
+      "movdqu     (%0),%%xmm0                      \n"
+      "movdqu     (%0,%4),%%xmm1                   \n"
+      "lea        (%0,%4,2),%0                     \n"
+      "movdqa     %%xmm0,%%xmm8                    \n"
+      "punpcklbw  %%xmm1,%%xmm0                    \n"
+      "punpckhbw  %%xmm1,%%xmm8                    \n"
+      "movdqa     %%xmm8,%%xmm1                    \n"
+      "movdqu     (%0),%%xmm2                      \n"
+      "movdqu     (%0,%4),%%xmm3                   \n"
+      "lea        (%0,%4,2),%0                     \n"
+      "movdqa     %%xmm2,%%xmm8                    \n"
+      "punpcklbw  %%xmm3,%%xmm2                    \n"
+      "punpckhbw  %%xmm3,%%xmm8                    \n"
+      "movdqa     %%xmm8,%%xmm3                    \n"
+      "movdqu     (%0),%%xmm4                      \n"
+      "movdqu     (%0,%4),%%xmm5                   \n"
+      "lea        (%0,%4,2),%0                     \n"
+      "movdqa     %%xmm4,%%xmm8                    \n"
+      "punpcklbw  %%xmm5,%%xmm4                    \n"
+      "punpckhbw  %%xmm5,%%xmm8                    \n"
+      "movdqa     %%xmm8,%%xmm5                    \n"
+      "movdqu     (%0),%%xmm6                      \n"
+      "movdqu     (%0,%4),%%xmm7                   \n"
+      "lea        (%0,%4,2),%0                     \n"
+      "movdqa     %%xmm6,%%xmm8                    \n"
+      "punpcklbw  %%xmm7,%%xmm6                    \n"
+      "neg        %4                               \n"
+      "lea        0x10(%0,%4,8),%0                 \n"
+      "punpckhbw  %%xmm7,%%xmm8                    \n"
+      "movdqa     %%xmm8,%%xmm7                    \n"
+      "neg        %4                               \n"
+      // Second round of bit swap.
+      "movdqa     %%xmm0,%%xmm8                    \n"
+      "movdqa     %%xmm1,%%xmm9                    \n"
+      "punpckhwd  %%xmm2,%%xmm8                    \n"
+      "punpckhwd  %%xmm3,%%xmm9                    \n"
+      "punpcklwd  %%xmm2,%%xmm0                    \n"
+      "punpcklwd  %%xmm3,%%xmm1                    \n"
+      "movdqa     %%xmm8,%%xmm2                    \n"
+      "movdqa     %%xmm9,%%xmm3                    \n"
+      "movdqa     %%xmm4,%%xmm8                    \n"
+      "movdqa     %%xmm5,%%xmm9                    \n"
+      "punpckhwd  %%xmm6,%%xmm8                    \n"
+      "punpckhwd  %%xmm7,%%xmm9                    \n"
+      "punpcklwd  %%xmm6,%%xmm4                    \n"
+      "punpcklwd  %%xmm7,%%xmm5                    \n"
+      "movdqa     %%xmm8,%%xmm6                    \n"
+      "movdqa     %%xmm9,%%xmm7                    \n"
+      // Third round of bit swap.
+      // Write to the destination pointer.
+      "movdqa     %%xmm0,%%xmm8                    \n"
+      "punpckldq  %%xmm4,%%xmm0                    \n"
+      "movlpd     %%xmm0,(%1)                      \n"  // Write back U channel
+      "movhpd     %%xmm0,(%2)                      \n"  // Write back V channel
+      "punpckhdq  %%xmm4,%%xmm8                    \n"
+      "movlpd     %%xmm8,(%1,%5)                   \n"
+      "lea        (%1,%5,2),%1                     \n"
+      "movhpd     %%xmm8,(%2,%6)                   \n"
+      "lea        (%2,%6,2),%2                     \n"
+      "movdqa     %%xmm2,%%xmm8                    \n"
+      "punpckldq  %%xmm6,%%xmm2                    \n"
+      "movlpd     %%xmm2,(%1)                      \n"
+      "movhpd     %%xmm2,(%2)                      \n"
+      "punpckhdq  %%xmm6,%%xmm8                    \n"
+      "movlpd     %%xmm8,(%1,%5)                   \n"
+      "lea        (%1,%5,2),%1                     \n"
+      "movhpd     %%xmm8,(%2,%6)                   \n"
+      "lea        (%2,%6,2),%2                     \n"
+      "movdqa     %%xmm1,%%xmm8                    \n"
+      "punpckldq  %%xmm5,%%xmm1                    \n"
+      "movlpd     %%xmm1,(%1)                      \n"
+      "movhpd     %%xmm1,(%2)                      \n"
+      "punpckhdq  %%xmm5,%%xmm8                    \n"
+      "movlpd     %%xmm8,(%1,%5)                   \n"
+      "lea        (%1,%5,2),%1                     \n"
+      "movhpd     %%xmm8,(%2,%6)                   \n"
+      "lea        (%2,%6,2),%2                     \n"
+      "movdqa     %%xmm3,%%xmm8                    \n"
+      "punpckldq  %%xmm7,%%xmm3                    \n"
+      "movlpd     %%xmm3,(%1)                      \n"
+      "movhpd     %%xmm3,(%2)                      \n"
+      "punpckhdq  %%xmm7,%%xmm8                    \n"
+      "sub        $0x8,%3                          \n"
+      "movlpd     %%xmm8,(%1,%5)                   \n"
+      "lea        (%1,%5,2),%1                     \n"
+      "movhpd     %%xmm8,(%2,%6)                   \n"
+      "lea        (%2,%6,2),%2                     \n"
+      "jg         1b                               \n"
+      : "+r"(src),                      // %0
+        "+r"(dst_a),                    // %1
+        "+r"(dst_b),                    // %2
+        "+r"(width)                     // %3
+      : "r"((intptr_t)(src_stride)),    // %4
+        "r"((intptr_t)(dst_stride_a)),  // %5
+        "r"((intptr_t)(dst_stride_b))   // %6
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7", "xmm8", "xmm9");
 }
 #endif  // defined(HAS_TRANSPOSEUVWX8_SSE2)
 #endif  // defined(__x86_64__) || defined(__i386__)
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_mips.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_mips.cc
deleted file mode 100644
index 1e8ce25..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_mips.cc
+++ /dev/null
@@ -1,484 +0,0 @@
-/*
- *  Copyright 2011 The LibYuv Project Authors. All rights reserved.
- *
- *  Use of this source code is governed by a BSD-style license
- *  that can be found in the LICENSE file in the root of the source
- *  tree. An additional intellectual property rights grant can be found
- *  in the file PATENTS. All contributing project authors may
- *  be found in the AUTHORS file in the root of the source tree.
- */
-
-#include "libyuv/row.h"
-#include "libyuv/rotate_row.h"
-
-#include "libyuv/basic_types.h"
-
-#ifdef __cplusplus
-namespace libyuv {
-extern "C" {
-#endif
-
-#if !defined(LIBYUV_DISABLE_MIPS) && \
-    defined(__mips_dsp) && (__mips_dsp_rev >= 2) && \
-    (_MIPS_SIM == _MIPS_SIM_ABI32)
-
-void TransposeWx8_DSPR2(const uint8* src, int src_stride,
-                        uint8* dst, int dst_stride, int width) {
-   __asm__ __volatile__ (
-      ".set push                                         \n"
-      ".set noreorder                                    \n"
-      "sll              $t2, %[src_stride], 0x1          \n" // src_stride x 2
-      "sll              $t4, %[src_stride], 0x2          \n" // src_stride x 4
-      "sll              $t9, %[src_stride], 0x3          \n" // src_stride x 8
-      "addu             $t3, $t2, %[src_stride]          \n"
-      "addu             $t5, $t4, %[src_stride]          \n"
-      "addu             $t6, $t2, $t4                    \n"
-      "andi             $t0, %[dst], 0x3                 \n"
-      "andi             $t1, %[dst_stride], 0x3          \n"
-      "or               $t0, $t0, $t1                    \n"
-      "bnez             $t0, 11f                         \n"
-      " subu            $t7, $t9, %[src_stride]          \n"
-//dst + dst_stride word aligned
-    "1:                                                  \n"
-      "lbu              $t0, 0(%[src])                   \n"
-      "lbux             $t1, %[src_stride](%[src])       \n"
-      "lbux             $t8, $t2(%[src])                 \n"
-      "lbux             $t9, $t3(%[src])                 \n"
-      "sll              $t1, $t1, 16                     \n"
-      "sll              $t9, $t9, 16                     \n"
-      "or               $t0, $t0, $t1                    \n"
-      "or               $t8, $t8, $t9                    \n"
-      "precr.qb.ph      $s0, $t8, $t0                    \n"
-      "lbux             $t0, $t4(%[src])                 \n"
-      "lbux             $t1, $t5(%[src])                 \n"
-      "lbux             $t8, $t6(%[src])                 \n"
-      "lbux             $t9, $t7(%[src])                 \n"
-      "sll              $t1, $t1, 16                     \n"
-      "sll              $t9, $t9, 16                     \n"
-      "or               $t0, $t0, $t1                    \n"
-      "or               $t8, $t8, $t9                    \n"
-      "precr.qb.ph      $s1, $t8, $t0                    \n"
-      "sw               $s0, 0(%[dst])                   \n"
-      "addiu            %[width], -1                     \n"
-      "addiu            %[src], 1                        \n"
-      "sw               $s1, 4(%[dst])                   \n"
-      "bnez             %[width], 1b                     \n"
-      " addu            %[dst], %[dst], %[dst_stride]    \n"
-      "b                2f                               \n"
-//dst + dst_stride unaligned
-   "11:                                                  \n"
-      "lbu              $t0, 0(%[src])                   \n"
-      "lbux             $t1, %[src_stride](%[src])       \n"
-      "lbux             $t8, $t2(%[src])                 \n"
-      "lbux             $t9, $t3(%[src])                 \n"
-      "sll              $t1, $t1, 16                     \n"
-      "sll              $t9, $t9, 16                     \n"
-      "or               $t0, $t0, $t1                    \n"
-      "or               $t8, $t8, $t9                    \n"
-      "precr.qb.ph      $s0, $t8, $t0                    \n"
-      "lbux             $t0, $t4(%[src])                 \n"
-      "lbux             $t1, $t5(%[src])                 \n"
-      "lbux             $t8, $t6(%[src])                 \n"
-      "lbux             $t9, $t7(%[src])                 \n"
-      "sll              $t1, $t1, 16                     \n"
-      "sll              $t9, $t9, 16                     \n"
-      "or               $t0, $t0, $t1                    \n"
-      "or               $t8, $t8, $t9                    \n"
-      "precr.qb.ph      $s1, $t8, $t0                    \n"
-      "swr              $s0, 0(%[dst])                   \n"
-      "swl              $s0, 3(%[dst])                   \n"
-      "addiu            %[width], -1                     \n"
-      "addiu            %[src], 1                        \n"
-      "swr              $s1, 4(%[dst])                   \n"
-      "swl              $s1, 7(%[dst])                   \n"
-      "bnez             %[width], 11b                    \n"
-       "addu             %[dst], %[dst], %[dst_stride]   \n"
-    "2:                                                  \n"
-      ".set pop                                          \n"
-      :[src] "+r" (src),
-       [dst] "+r" (dst),
-       [width] "+r" (width)
-      :[src_stride] "r" (src_stride),
-       [dst_stride] "r" (dst_stride)
-      : "t0", "t1",  "t2", "t3", "t4", "t5",
-        "t6", "t7", "t8", "t9",
-        "s0", "s1"
-  );
-}
-
-void TransposeWx8_Fast_DSPR2(const uint8* src, int src_stride,
-                             uint8* dst, int dst_stride, int width) {
-  __asm__ __volatile__ (
-      ".set noat                                         \n"
-      ".set push                                         \n"
-      ".set noreorder                                    \n"
-      "beqz             %[width], 2f                     \n"
-      " sll             $t2, %[src_stride], 0x1          \n"  // src_stride x 2
-      "sll              $t4, %[src_stride], 0x2          \n"  // src_stride x 4
-      "sll              $t9, %[src_stride], 0x3          \n"  // src_stride x 8
-      "addu             $t3, $t2, %[src_stride]          \n"
-      "addu             $t5, $t4, %[src_stride]          \n"
-      "addu             $t6, $t2, $t4                    \n"
-
-      "srl              $AT, %[width], 0x2               \n"
-      "andi             $t0, %[dst], 0x3                 \n"
-      "andi             $t1, %[dst_stride], 0x3          \n"
-      "or               $t0, $t0, $t1                    \n"
-      "bnez             $t0, 11f                         \n"
-      " subu            $t7, $t9, %[src_stride]          \n"
-//dst + dst_stride word aligned
-      "1:                                                \n"
-      "lw               $t0, 0(%[src])                   \n"
-      "lwx              $t1, %[src_stride](%[src])       \n"
-      "lwx              $t8, $t2(%[src])                 \n"
-      "lwx              $t9, $t3(%[src])                 \n"
-
-// t0 = | 30 | 20 | 10 | 00 |
-// t1 = | 31 | 21 | 11 | 01 |
-// t8 = | 32 | 22 | 12 | 02 |
-// t9 = | 33 | 23 | 13 | 03 |
-
-      "precr.qb.ph     $s0, $t1, $t0                     \n"
-      "precr.qb.ph     $s1, $t9, $t8                     \n"
-      "precrq.qb.ph    $s2, $t1, $t0                     \n"
-      "precrq.qb.ph    $s3, $t9, $t8                     \n"
-
-  // s0 = | 21 | 01 | 20 | 00 |
-  // s1 = | 23 | 03 | 22 | 02 |
-  // s2 = | 31 | 11 | 30 | 10 |
-  // s3 = | 33 | 13 | 32 | 12 |
-
-      "precr.qb.ph     $s4, $s1, $s0                     \n"
-      "precrq.qb.ph    $s5, $s1, $s0                     \n"
-      "precr.qb.ph     $s6, $s3, $s2                     \n"
-      "precrq.qb.ph    $s7, $s3, $s2                     \n"
-
-  // s4 = | 03 | 02 | 01 | 00 |
-  // s5 = | 23 | 22 | 21 | 20 |
-  // s6 = | 13 | 12 | 11 | 10 |
-  // s7 = | 33 | 32 | 31 | 30 |
-
-      "lwx              $t0, $t4(%[src])                 \n"
-      "lwx              $t1, $t5(%[src])                 \n"
-      "lwx              $t8, $t6(%[src])                 \n"
-      "lwx              $t9, $t7(%[src])                 \n"
-
-// t0 = | 34 | 24 | 14 | 04 |
-// t1 = | 35 | 25 | 15 | 05 |
-// t8 = | 36 | 26 | 16 | 06 |
-// t9 = | 37 | 27 | 17 | 07 |
-
-      "precr.qb.ph     $s0, $t1, $t0                     \n"
-      "precr.qb.ph     $s1, $t9, $t8                     \n"
-      "precrq.qb.ph    $s2, $t1, $t0                     \n"
-      "precrq.qb.ph    $s3, $t9, $t8                     \n"
-
-  // s0 = | 25 | 05 | 24 | 04 |
-  // s1 = | 27 | 07 | 26 | 06 |
-  // s2 = | 35 | 15 | 34 | 14 |
-  // s3 = | 37 | 17 | 36 | 16 |
-
-      "precr.qb.ph     $t0, $s1, $s0                     \n"
-      "precrq.qb.ph    $t1, $s1, $s0                     \n"
-      "precr.qb.ph     $t8, $s3, $s2                     \n"
-      "precrq.qb.ph    $t9, $s3, $s2                     \n"
-
-  // t0 = | 07 | 06 | 05 | 04 |
-  // t1 = | 27 | 26 | 25 | 24 |
-  // t8 = | 17 | 16 | 15 | 14 |
-  // t9 = | 37 | 36 | 35 | 34 |
-
-      "addu            $s0, %[dst], %[dst_stride]        \n"
-      "addu            $s1, $s0, %[dst_stride]           \n"
-      "addu            $s2, $s1, %[dst_stride]           \n"
-
-      "sw              $s4, 0(%[dst])                    \n"
-      "sw              $t0, 4(%[dst])                    \n"
-      "sw              $s6, 0($s0)                       \n"
-      "sw              $t8, 4($s0)                       \n"
-      "sw              $s5, 0($s1)                       \n"
-      "sw              $t1, 4($s1)                       \n"
-      "sw              $s7, 0($s2)                       \n"
-      "sw              $t9, 4($s2)                       \n"
-
-      "addiu            $AT, -1                          \n"
-      "addiu            %[src], 4                        \n"
-
-      "bnez             $AT, 1b                          \n"
-      " addu            %[dst], $s2, %[dst_stride]       \n"
-      "b                2f                               \n"
-//dst + dst_stride unaligned
-      "11:                                               \n"
-      "lw               $t0, 0(%[src])                   \n"
-      "lwx              $t1, %[src_stride](%[src])       \n"
-      "lwx              $t8, $t2(%[src])                 \n"
-      "lwx              $t9, $t3(%[src])                 \n"
-
-// t0 = | 30 | 20 | 10 | 00 |
-// t1 = | 31 | 21 | 11 | 01 |
-// t8 = | 32 | 22 | 12 | 02 |
-// t9 = | 33 | 23 | 13 | 03 |
-
-      "precr.qb.ph     $s0, $t1, $t0                     \n"
-      "precr.qb.ph     $s1, $t9, $t8                     \n"
-      "precrq.qb.ph    $s2, $t1, $t0                     \n"
-      "precrq.qb.ph    $s3, $t9, $t8                     \n"
-
-  // s0 = | 21 | 01 | 20 | 00 |
-  // s1 = | 23 | 03 | 22 | 02 |
-  // s2 = | 31 | 11 | 30 | 10 |
-  // s3 = | 33 | 13 | 32 | 12 |
-
-      "precr.qb.ph     $s4, $s1, $s0                     \n"
-      "precrq.qb.ph    $s5, $s1, $s0                     \n"
-      "precr.qb.ph     $s6, $s3, $s2                     \n"
-      "precrq.qb.ph    $s7, $s3, $s2                     \n"
-
-  // s4 = | 03 | 02 | 01 | 00 |
-  // s5 = | 23 | 22 | 21 | 20 |
-  // s6 = | 13 | 12 | 11 | 10 |
-  // s7 = | 33 | 32 | 31 | 30 |
-
-      "lwx              $t0, $t4(%[src])                 \n"
-      "lwx              $t1, $t5(%[src])                 \n"
-      "lwx              $t8, $t6(%[src])                 \n"
-      "lwx              $t9, $t7(%[src])                 \n"
-
-// t0 = | 34 | 24 | 14 | 04 |
-// t1 = | 35 | 25 | 15 | 05 |
-// t8 = | 36 | 26 | 16 | 06 |
-// t9 = | 37 | 27 | 17 | 07 |
-
-      "precr.qb.ph     $s0, $t1, $t0                     \n"
-      "precr.qb.ph     $s1, $t9, $t8                     \n"
-      "precrq.qb.ph    $s2, $t1, $t0                     \n"
-      "precrq.qb.ph    $s3, $t9, $t8                     \n"
-
-  // s0 = | 25 | 05 | 24 | 04 |
-  // s1 = | 27 | 07 | 26 | 06 |
-  // s2 = | 35 | 15 | 34 | 14 |
-  // s3 = | 37 | 17 | 36 | 16 |
-
-      "precr.qb.ph     $t0, $s1, $s0                     \n"
-      "precrq.qb.ph    $t1, $s1, $s0                     \n"
-      "precr.qb.ph     $t8, $s3, $s2                     \n"
-      "precrq.qb.ph    $t9, $s3, $s2                     \n"
-
-  // t0 = | 07 | 06 | 05 | 04 |
-  // t1 = | 27 | 26 | 25 | 24 |
-  // t8 = | 17 | 16 | 15 | 14 |
-  // t9 = | 37 | 36 | 35 | 34 |
-
-      "addu            $s0, %[dst], %[dst_stride]        \n"
-      "addu            $s1, $s0, %[dst_stride]           \n"
-      "addu            $s2, $s1, %[dst_stride]           \n"
-
-      "swr              $s4, 0(%[dst])                   \n"
-      "swl              $s4, 3(%[dst])                   \n"
-      "swr              $t0, 4(%[dst])                   \n"
-      "swl              $t0, 7(%[dst])                   \n"
-      "swr              $s6, 0($s0)                      \n"
-      "swl              $s6, 3($s0)                      \n"
-      "swr              $t8, 4($s0)                      \n"
-      "swl              $t8, 7($s0)                      \n"
-      "swr              $s5, 0($s1)                      \n"
-      "swl              $s5, 3($s1)                      \n"
-      "swr              $t1, 4($s1)                      \n"
-      "swl              $t1, 7($s1)                      \n"
-      "swr              $s7, 0($s2)                      \n"
-      "swl              $s7, 3($s2)                      \n"
-      "swr              $t9, 4($s2)                      \n"
-      "swl              $t9, 7($s2)                      \n"
-
-      "addiu            $AT, -1                          \n"
-      "addiu            %[src], 4                        \n"
-
-      "bnez             $AT, 11b                         \n"
-      " addu            %[dst], $s2, %[dst_stride]       \n"
-      "2:                                                \n"
-      ".set pop                                          \n"
-      ".set at                                           \n"
-      :[src] "+r" (src),
-       [dst] "+r" (dst),
-       [width] "+r" (width)
-      :[src_stride] "r" (src_stride),
-       [dst_stride] "r" (dst_stride)
-      : "t0", "t1", "t2", "t3", "t4", "t5", "t6", "t7", "t8", "t9",
-        "s0", "s1", "s2", "s3", "s4", "s5", "s6", "s7"
-  );
-}
-
-void TransposeUVWx8_DSPR2(const uint8* src, int src_stride,
-                          uint8* dst_a, int dst_stride_a,
-                          uint8* dst_b, int dst_stride_b,
-                          int width) {
-  __asm__ __volatile__ (
-      ".set push                                         \n"
-      ".set noreorder                                    \n"
-      "beqz            %[width], 2f                      \n"
-      " sll            $t2, %[src_stride], 0x1           \n" // src_stride x 2
-      "sll             $t4, %[src_stride], 0x2           \n" // src_stride x 4
-      "sll             $t9, %[src_stride], 0x3           \n" // src_stride x 8
-      "addu            $t3, $t2, %[src_stride]           \n"
-      "addu            $t5, $t4, %[src_stride]           \n"
-      "addu            $t6, $t2, $t4                     \n"
-      "subu            $t7, $t9, %[src_stride]           \n"
-      "srl             $t1, %[width], 1                  \n"
-
-// check word aligment for dst_a, dst_b, dst_stride_a and dst_stride_b
-      "andi            $t0, %[dst_a], 0x3                \n"
-      "andi            $t8, %[dst_b], 0x3                \n"
-      "or              $t0, $t0, $t8                     \n"
-      "andi            $t8, %[dst_stride_a], 0x3         \n"
-      "andi            $s5, %[dst_stride_b], 0x3         \n"
-      "or              $t8, $t8, $s5                     \n"
-      "or              $t0, $t0, $t8                     \n"
-      "bnez            $t0, 11f                          \n"
-      " nop                                              \n"
-// dst + dst_stride word aligned (both, a & b dst addresses)
-    "1:                                                  \n"
-      "lw              $t0, 0(%[src])                    \n" // |B0|A0|b0|a0|
-      "lwx             $t8, %[src_stride](%[src])        \n" // |B1|A1|b1|a1|
-      "addu            $s5, %[dst_a], %[dst_stride_a]    \n"
-      "lwx             $t9, $t2(%[src])                  \n" // |B2|A2|b2|a2|
-      "lwx             $s0, $t3(%[src])                  \n" // |B3|A3|b3|a3|
-      "addu            $s6, %[dst_b], %[dst_stride_b]    \n"
-
-      "precrq.ph.w     $s1, $t8, $t0                     \n" // |B1|A1|B0|A0|
-      "precrq.ph.w     $s2, $s0, $t9                     \n" // |B3|A3|B2|A2|
-      "precr.qb.ph     $s3, $s2, $s1                     \n" // |A3|A2|A1|A0|
-      "precrq.qb.ph    $s4, $s2, $s1                     \n" // |B3|B2|B1|B0|
-
-      "sll             $t0, $t0, 16                      \n"
-      "packrl.ph       $s1, $t8, $t0                     \n" // |b1|a1|b0|a0|
-      "sll             $t9, $t9, 16                      \n"
-      "packrl.ph       $s2, $s0, $t9                     \n" // |b3|a3|b2|a2|
-
-      "sw              $s3, 0($s5)                       \n"
-      "sw              $s4, 0($s6)                       \n"
-
-      "precr.qb.ph     $s3, $s2, $s1                     \n" // |a3|a2|a1|a0|
-      "precrq.qb.ph    $s4, $s2, $s1                     \n" // |b3|b2|b1|b0|
-
-      "lwx             $t0, $t4(%[src])                  \n" // |B4|A4|b4|a4|
-      "lwx             $t8, $t5(%[src])                  \n" // |B5|A5|b5|a5|
-      "lwx             $t9, $t6(%[src])                  \n" // |B6|A6|b6|a6|
-      "lwx             $s0, $t7(%[src])                  \n" // |B7|A7|b7|a7|
-      "sw              $s3, 0(%[dst_a])                  \n"
-      "sw              $s4, 0(%[dst_b])                  \n"
-
-      "precrq.ph.w     $s1, $t8, $t0                     \n" // |B5|A5|B4|A4|
-      "precrq.ph.w     $s2, $s0, $t9                     \n" // |B6|A6|B7|A7|
-      "precr.qb.ph     $s3, $s2, $s1                     \n" // |A7|A6|A5|A4|
-      "precrq.qb.ph    $s4, $s2, $s1                     \n" // |B7|B6|B5|B4|
-
-      "sll             $t0, $t0, 16                      \n"
-      "packrl.ph       $s1, $t8, $t0                     \n" // |b5|a5|b4|a4|
-      "sll             $t9, $t9, 16                      \n"
-      "packrl.ph       $s2, $s0, $t9                     \n" // |b7|a7|b6|a6|
-      "sw              $s3, 4($s5)                       \n"
-      "sw              $s4, 4($s6)                       \n"
-
-      "precr.qb.ph     $s3, $s2, $s1                     \n" // |a7|a6|a5|a4|
-      "precrq.qb.ph    $s4, $s2, $s1                     \n" // |b7|b6|b5|b4|
-
-      "addiu           %[src], 4                         \n"
-      "addiu           $t1, -1                           \n"
-      "sll             $t0, %[dst_stride_a], 1           \n"
-      "sll             $t8, %[dst_stride_b], 1           \n"
-      "sw              $s3, 4(%[dst_a])                  \n"
-      "sw              $s4, 4(%[dst_b])                  \n"
-      "addu            %[dst_a], %[dst_a], $t0           \n"
-      "bnez            $t1, 1b                           \n"
-      " addu           %[dst_b], %[dst_b], $t8           \n"
-      "b               2f                                \n"
-      " nop                                              \n"
-
-// dst_a or dst_b or dst_stride_a or dst_stride_b not word aligned
-   "11:                                                  \n"
-      "lw              $t0, 0(%[src])                    \n" // |B0|A0|b0|a0|
-      "lwx             $t8, %[src_stride](%[src])        \n" // |B1|A1|b1|a1|
-      "addu            $s5, %[dst_a], %[dst_stride_a]    \n"
-      "lwx             $t9, $t2(%[src])                  \n" // |B2|A2|b2|a2|
-      "lwx             $s0, $t3(%[src])                  \n" // |B3|A3|b3|a3|
-      "addu            $s6, %[dst_b], %[dst_stride_b]    \n"
-
-      "precrq.ph.w     $s1, $t8, $t0                     \n" // |B1|A1|B0|A0|
-      "precrq.ph.w     $s2, $s0, $t9                     \n" // |B3|A3|B2|A2|
-      "precr.qb.ph     $s3, $s2, $s1                     \n" // |A3|A2|A1|A0|
-      "precrq.qb.ph    $s4, $s2, $s1                     \n" // |B3|B2|B1|B0|
-
-      "sll             $t0, $t0, 16                      \n"
-      "packrl.ph       $s1, $t8, $t0                     \n" // |b1|a1|b0|a0|
-      "sll             $t9, $t9, 16                      \n"
-      "packrl.ph       $s2, $s0, $t9                     \n" // |b3|a3|b2|a2|
-
-      "swr             $s3, 0($s5)                       \n"
-      "swl             $s3, 3($s5)                       \n"
-      "swr             $s4, 0($s6)                       \n"
-      "swl             $s4, 3($s6)                       \n"
-
-      "precr.qb.ph     $s3, $s2, $s1                     \n" // |a3|a2|a1|a0|
-      "precrq.qb.ph    $s4, $s2, $s1                     \n" // |b3|b2|b1|b0|
-
-      "lwx             $t0, $t4(%[src])                  \n" // |B4|A4|b4|a4|
-      "lwx             $t8, $t5(%[src])                  \n" // |B5|A5|b5|a5|
-      "lwx             $t9, $t6(%[src])                  \n" // |B6|A6|b6|a6|
-      "lwx             $s0, $t7(%[src])                  \n" // |B7|A7|b7|a7|
-      "swr             $s3, 0(%[dst_a])                  \n"
-      "swl             $s3, 3(%[dst_a])                  \n"
-      "swr             $s4, 0(%[dst_b])                  \n"
-      "swl             $s4, 3(%[dst_b])                  \n"
-
-      "precrq.ph.w     $s1, $t8, $t0                     \n" // |B5|A5|B4|A4|
-      "precrq.ph.w     $s2, $s0, $t9                     \n" // |B6|A6|B7|A7|
-      "precr.qb.ph     $s3, $s2, $s1                     \n" // |A7|A6|A5|A4|
-      "precrq.qb.ph    $s4, $s2, $s1                     \n" // |B7|B6|B5|B4|
-
-      "sll             $t0, $t0, 16                      \n"
-      "packrl.ph       $s1, $t8, $t0                     \n" // |b5|a5|b4|a4|
-      "sll             $t9, $t9, 16                      \n"
-      "packrl.ph       $s2, $s0, $t9                     \n" // |b7|a7|b6|a6|
-
-      "swr             $s3, 4($s5)                       \n"
-      "swl             $s3, 7($s5)                       \n"
-      "swr             $s4, 4($s6)                       \n"
-      "swl             $s4, 7($s6)                       \n"
-
-      "precr.qb.ph     $s3, $s2, $s1                     \n" // |a7|a6|a5|a4|
-      "precrq.qb.ph    $s4, $s2, $s1                     \n" // |b7|b6|b5|b4|
-
-      "addiu           %[src], 4                         \n"
-      "addiu           $t1, -1                           \n"
-      "sll             $t0, %[dst_stride_a], 1           \n"
-      "sll             $t8, %[dst_stride_b], 1           \n"
-      "swr             $s3, 4(%[dst_a])                  \n"
-      "swl             $s3, 7(%[dst_a])                  \n"
-      "swr             $s4, 4(%[dst_b])                  \n"
-      "swl             $s4, 7(%[dst_b])                  \n"
-      "addu            %[dst_a], %[dst_a], $t0           \n"
-      "bnez            $t1, 11b                          \n"
-      " addu           %[dst_b], %[dst_b], $t8           \n"
-
-      "2:                                                \n"
-      ".set pop                                          \n"
-      : [src] "+r" (src),
-        [dst_a] "+r" (dst_a),
-        [dst_b] "+r" (dst_b),
-        [width] "+r" (width),
-        [src_stride] "+r" (src_stride)
-      : [dst_stride_a] "r" (dst_stride_a),
-        [dst_stride_b] "r" (dst_stride_b)
-      : "t0", "t1",  "t2", "t3",  "t4", "t5",
-        "t6", "t7", "t8", "t9",
-        "s0", "s1", "s2", "s3",
-        "s4", "s5", "s6"
-  );
-}
-
-#endif  // defined(__mips_dsp) && (__mips_dsp_rev >= 2)
-
-#ifdef __cplusplus
-}  // extern "C"
-}  // namespace libyuv
-#endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_msa.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_msa.cc
new file mode 100644
index 0000000..99bdca6
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_msa.cc
@@ -0,0 +1,250 @@
+/*
+ *  Copyright 2016 The LibYuv Project Authors. All rights reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS. All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include "libyuv/rotate_row.h"
+
+// This module is for GCC MSA
+#if !defined(LIBYUV_DISABLE_MSA) && defined(__mips_msa)
+#include "libyuv/macros_msa.h"
+
+#ifdef __cplusplus
+namespace libyuv {
+extern "C" {
+#endif
+
+#define ILVRL_B(in0, in1, in2, in3, out0, out1, out2, out3) \
+  {                                                         \
+    out0 = (v16u8)__msa_ilvr_b((v16i8)in1, (v16i8)in0);     \
+    out1 = (v16u8)__msa_ilvl_b((v16i8)in1, (v16i8)in0);     \
+    out2 = (v16u8)__msa_ilvr_b((v16i8)in3, (v16i8)in2);     \
+    out3 = (v16u8)__msa_ilvl_b((v16i8)in3, (v16i8)in2);     \
+  }
+
+#define ILVRL_H(in0, in1, in2, in3, out0, out1, out2, out3) \
+  {                                                         \
+    out0 = (v16u8)__msa_ilvr_h((v8i16)in1, (v8i16)in0);     \
+    out1 = (v16u8)__msa_ilvl_h((v8i16)in1, (v8i16)in0);     \
+    out2 = (v16u8)__msa_ilvr_h((v8i16)in3, (v8i16)in2);     \
+    out3 = (v16u8)__msa_ilvl_h((v8i16)in3, (v8i16)in2);     \
+  }
+
+#define ILVRL_W(in0, in1, in2, in3, out0, out1, out2, out3) \
+  {                                                         \
+    out0 = (v16u8)__msa_ilvr_w((v4i32)in1, (v4i32)in0);     \
+    out1 = (v16u8)__msa_ilvl_w((v4i32)in1, (v4i32)in0);     \
+    out2 = (v16u8)__msa_ilvr_w((v4i32)in3, (v4i32)in2);     \
+    out3 = (v16u8)__msa_ilvl_w((v4i32)in3, (v4i32)in2);     \
+  }
+
+#define ILVRL_D(in0, in1, in2, in3, out0, out1, out2, out3) \
+  {                                                         \
+    out0 = (v16u8)__msa_ilvr_d((v2i64)in1, (v2i64)in0);     \
+    out1 = (v16u8)__msa_ilvl_d((v2i64)in1, (v2i64)in0);     \
+    out2 = (v16u8)__msa_ilvr_d((v2i64)in3, (v2i64)in2);     \
+    out3 = (v16u8)__msa_ilvl_d((v2i64)in3, (v2i64)in2);     \
+  }
+
+void TransposeWx16_C(const uint8_t* src,
+                     int src_stride,
+                     uint8_t* dst,
+                     int dst_stride,
+                     int width) {
+  TransposeWx8_C(src, src_stride, dst, dst_stride, width);
+  TransposeWx8_C((src + 8 * src_stride), src_stride, (dst + 8), dst_stride,
+                 width);
+}
+
+void TransposeUVWx16_C(const uint8_t* src,
+                       int src_stride,
+                       uint8_t* dst_a,
+                       int dst_stride_a,
+                       uint8_t* dst_b,
+                       int dst_stride_b,
+                       int width) {
+  TransposeUVWx8_C(src, src_stride, dst_a, dst_stride_a, dst_b, dst_stride_b,
+                   width);
+  TransposeUVWx8_C((src + 8 * src_stride), src_stride, (dst_a + 8),
+                   dst_stride_a, (dst_b + 8), dst_stride_b, width);
+}
+
+void TransposeWx16_MSA(const uint8_t* src,
+                       int src_stride,
+                       uint8_t* dst,
+                       int dst_stride,
+                       int width) {
+  int x;
+  const uint8_t* s;
+  v16u8 src0, src1, src2, src3, dst0, dst1, dst2, dst3, vec0, vec1, vec2, vec3;
+  v16u8 reg0, reg1, reg2, reg3, reg4, reg5, reg6, reg7;
+  v16u8 res0, res1, res2, res3, res4, res5, res6, res7, res8, res9;
+
+  for (x = 0; x < width; x += 16) {
+    s = src;
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src2 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src3 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    ILVRL_B(src0, src1, src2, src3, vec0, vec1, vec2, vec3);
+    ILVRL_H(vec0, vec2, vec1, vec3, reg0, reg1, reg2, reg3);
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src2 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src3 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    ILVRL_B(src0, src1, src2, src3, vec0, vec1, vec2, vec3);
+    ILVRL_H(vec0, vec2, vec1, vec3, reg4, reg5, reg6, reg7);
+    ILVRL_W(reg0, reg4, reg1, reg5, res0, res1, res2, res3);
+    ILVRL_W(reg2, reg6, reg3, reg7, res4, res5, res6, res7);
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src2 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src3 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    ILVRL_B(src0, src1, src2, src3, vec0, vec1, vec2, vec3);
+    ILVRL_H(vec0, vec2, vec1, vec3, reg0, reg1, reg2, reg3);
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src2 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src3 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    ILVRL_B(src0, src1, src2, src3, vec0, vec1, vec2, vec3);
+    ILVRL_H(vec0, vec2, vec1, vec3, reg4, reg5, reg6, reg7);
+    res8 = (v16u8)__msa_ilvr_w((v4i32)reg4, (v4i32)reg0);
+    res9 = (v16u8)__msa_ilvl_w((v4i32)reg4, (v4i32)reg0);
+    ILVRL_D(res0, res8, res1, res9, dst0, dst1, dst2, dst3);
+    ST_UB4(dst0, dst1, dst2, dst3, dst, dst_stride);
+    dst += dst_stride * 4;
+    res8 = (v16u8)__msa_ilvr_w((v4i32)reg5, (v4i32)reg1);
+    res9 = (v16u8)__msa_ilvl_w((v4i32)reg5, (v4i32)reg1);
+    ILVRL_D(res2, res8, res3, res9, dst0, dst1, dst2, dst3);
+    ST_UB4(dst0, dst1, dst2, dst3, dst, dst_stride);
+    dst += dst_stride * 4;
+    res8 = (v16u8)__msa_ilvr_w((v4i32)reg6, (v4i32)reg2);
+    res9 = (v16u8)__msa_ilvl_w((v4i32)reg6, (v4i32)reg2);
+    ILVRL_D(res4, res8, res5, res9, dst0, dst1, dst2, dst3);
+    ST_UB4(dst0, dst1, dst2, dst3, dst, dst_stride);
+    dst += dst_stride * 4;
+    res8 = (v16u8)__msa_ilvr_w((v4i32)reg7, (v4i32)reg3);
+    res9 = (v16u8)__msa_ilvl_w((v4i32)reg7, (v4i32)reg3);
+    ILVRL_D(res6, res8, res7, res9, dst0, dst1, dst2, dst3);
+    ST_UB4(dst0, dst1, dst2, dst3, dst, dst_stride);
+    src += 16;
+    dst += dst_stride * 4;
+  }
+}
+
+void TransposeUVWx16_MSA(const uint8_t* src,
+                         int src_stride,
+                         uint8_t* dst_a,
+                         int dst_stride_a,
+                         uint8_t* dst_b,
+                         int dst_stride_b,
+                         int width) {
+  int x;
+  const uint8_t* s;
+  v16u8 src0, src1, src2, src3, dst0, dst1, dst2, dst3, vec0, vec1, vec2, vec3;
+  v16u8 reg0, reg1, reg2, reg3, reg4, reg5, reg6, reg7;
+  v16u8 res0, res1, res2, res3, res4, res5, res6, res7, res8, res9;
+
+  for (x = 0; x < width; x += 8) {
+    s = src;
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src2 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src3 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    ILVRL_B(src0, src1, src2, src3, vec0, vec1, vec2, vec3);
+    ILVRL_H(vec0, vec2, vec1, vec3, reg0, reg1, reg2, reg3);
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src2 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src3 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    ILVRL_B(src0, src1, src2, src3, vec0, vec1, vec2, vec3);
+    ILVRL_H(vec0, vec2, vec1, vec3, reg4, reg5, reg6, reg7);
+    ILVRL_W(reg0, reg4, reg1, reg5, res0, res1, res2, res3);
+    ILVRL_W(reg2, reg6, reg3, reg7, res4, res5, res6, res7);
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src2 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src3 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    ILVRL_B(src0, src1, src2, src3, vec0, vec1, vec2, vec3);
+    ILVRL_H(vec0, vec2, vec1, vec3, reg0, reg1, reg2, reg3);
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src2 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    src3 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    s += src_stride;
+    ILVRL_B(src0, src1, src2, src3, vec0, vec1, vec2, vec3);
+    ILVRL_H(vec0, vec2, vec1, vec3, reg4, reg5, reg6, reg7);
+    res8 = (v16u8)__msa_ilvr_w((v4i32)reg4, (v4i32)reg0);
+    res9 = (v16u8)__msa_ilvl_w((v4i32)reg4, (v4i32)reg0);
+    ILVRL_D(res0, res8, res1, res9, dst0, dst1, dst2, dst3);
+    ST_UB2(dst0, dst2, dst_a, dst_stride_a);
+    ST_UB2(dst1, dst3, dst_b, dst_stride_b);
+    dst_a += dst_stride_a * 2;
+    dst_b += dst_stride_b * 2;
+    res8 = (v16u8)__msa_ilvr_w((v4i32)reg5, (v4i32)reg1);
+    res9 = (v16u8)__msa_ilvl_w((v4i32)reg5, (v4i32)reg1);
+    ILVRL_D(res2, res8, res3, res9, dst0, dst1, dst2, dst3);
+    ST_UB2(dst0, dst2, dst_a, dst_stride_a);
+    ST_UB2(dst1, dst3, dst_b, dst_stride_b);
+    dst_a += dst_stride_a * 2;
+    dst_b += dst_stride_b * 2;
+    res8 = (v16u8)__msa_ilvr_w((v4i32)reg6, (v4i32)reg2);
+    res9 = (v16u8)__msa_ilvl_w((v4i32)reg6, (v4i32)reg2);
+    ILVRL_D(res4, res8, res5, res9, dst0, dst1, dst2, dst3);
+    ST_UB2(dst0, dst2, dst_a, dst_stride_a);
+    ST_UB2(dst1, dst3, dst_b, dst_stride_b);
+    dst_a += dst_stride_a * 2;
+    dst_b += dst_stride_b * 2;
+    res8 = (v16u8)__msa_ilvr_w((v4i32)reg7, (v4i32)reg3);
+    res9 = (v16u8)__msa_ilvl_w((v4i32)reg7, (v4i32)reg3);
+    ILVRL_D(res6, res8, res7, res9, dst0, dst1, dst2, dst3);
+    ST_UB2(dst0, dst2, dst_a, dst_stride_a);
+    ST_UB2(dst1, dst3, dst_b, dst_stride_b);
+    src += 16;
+    dst_a += dst_stride_a * 2;
+    dst_b += dst_stride_b * 2;
+  }
+}
+
+#ifdef __cplusplus
+}  // extern "C"
+}  // namespace libyuv
+#endif
+
+#endif  // !defined(LIBYUV_DISABLE_MSA) && defined(__mips_msa)
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_neon.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_neon.cc
index 1c22b47..fdc0dd4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_neon.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_neon.cc
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#include "libyuv/row.h"
 #include "libyuv/rotate_row.h"
+#include "libyuv/row.h"
 
 #include "libyuv/basic_types.h"
 
@@ -21,38 +21,32 @@
 #if !defined(LIBYUV_DISABLE_NEON) && defined(__ARM_NEON__) && \
     !defined(__aarch64__)
 
-static uvec8 kVTbl4x4Transpose =
-  { 0,  4,  8, 12,  1,  5,  9, 13,  2,  6, 10, 14,  3,  7, 11, 15 };
+static const uvec8 kVTbl4x4Transpose = {0, 4, 8,  12, 1, 5, 9,  13,
+                                        2, 6, 10, 14, 3, 7, 11, 15};
 
-void TransposeWx8_NEON(const uint8* src, int src_stride,
-                       uint8* dst, int dst_stride,
+void TransposeWx8_NEON(const uint8_t* src,
+                       int src_stride,
+                       uint8_t* dst,
+                       int dst_stride,
                        int width) {
-  const uint8* src_temp;
-  asm volatile (
-    // loops are on blocks of 8. loop will stop when
-    // counter gets to or below 0. starting the counter
-    // at w-8 allow for this
-    "sub         %5, #8                        \n"
+  const uint8_t* src_temp;
+  asm volatile(
+      // loops are on blocks of 8. loop will stop when
+      // counter gets to or below 0. starting the counter
+      // at w-8 allow for this
+      "sub         %5, #8                        \n"
 
-    // handle 8x8 blocks. this should be the majority of the plane
-    "1:                                        \n"
+      // handle 8x8 blocks. this should be the majority of the plane
+      "1:                                        \n"
       "mov         %0, %1                      \n"
 
-      MEMACCESS(0)
       "vld1.8      {d0}, [%0], %2              \n"
-      MEMACCESS(0)
       "vld1.8      {d1}, [%0], %2              \n"
-      MEMACCESS(0)
       "vld1.8      {d2}, [%0], %2              \n"
-      MEMACCESS(0)
       "vld1.8      {d3}, [%0], %2              \n"
-      MEMACCESS(0)
       "vld1.8      {d4}, [%0], %2              \n"
-      MEMACCESS(0)
       "vld1.8      {d5}, [%0], %2              \n"
-      MEMACCESS(0)
       "vld1.8      {d6}, [%0], %2              \n"
-      MEMACCESS(0)
       "vld1.8      {d7}, [%0]                  \n"
 
       "vtrn.8      d1, d0                      \n"
@@ -77,21 +71,13 @@
 
       "mov         %0, %3                      \n"
 
-    MEMACCESS(0)
       "vst1.8      {d1}, [%0], %4              \n"
-    MEMACCESS(0)
       "vst1.8      {d0}, [%0], %4              \n"
-    MEMACCESS(0)
       "vst1.8      {d3}, [%0], %4              \n"
-    MEMACCESS(0)
       "vst1.8      {d2}, [%0], %4              \n"
-    MEMACCESS(0)
       "vst1.8      {d5}, [%0], %4              \n"
-    MEMACCESS(0)
       "vst1.8      {d4}, [%0], %4              \n"
-    MEMACCESS(0)
       "vst1.8      {d7}, [%0], %4              \n"
-    MEMACCESS(0)
       "vst1.8      {d6}, [%0]                  \n"
 
       "add         %1, #8                      \n"  // src += 8
@@ -99,180 +85,138 @@
       "subs        %5,  #8                     \n"  // w   -= 8
       "bge         1b                          \n"
 
-    // add 8 back to counter. if the result is 0 there are
-    // no residuals.
-    "adds        %5, #8                        \n"
-    "beq         4f                            \n"
+      // add 8 back to counter. if the result is 0 there are
+      // no residuals.
+      "adds        %5, #8                        \n"
+      "beq         4f                            \n"
 
-    // some residual, so between 1 and 7 lines left to transpose
-    "cmp         %5, #2                        \n"
-    "blt         3f                            \n"
+      // some residual, so between 1 and 7 lines left to transpose
+      "cmp         %5, #2                        \n"
+      "blt         3f                            \n"
 
-    "cmp         %5, #4                        \n"
-    "blt         2f                            \n"
+      "cmp         %5, #4                        \n"
+      "blt         2f                            \n"
 
-    // 4x8 block
-    "mov         %0, %1                        \n"
-    MEMACCESS(0)
-    "vld1.32     {d0[0]}, [%0], %2             \n"
-    MEMACCESS(0)
-    "vld1.32     {d0[1]}, [%0], %2             \n"
-    MEMACCESS(0)
-    "vld1.32     {d1[0]}, [%0], %2             \n"
-    MEMACCESS(0)
-    "vld1.32     {d1[1]}, [%0], %2             \n"
-    MEMACCESS(0)
-    "vld1.32     {d2[0]}, [%0], %2             \n"
-    MEMACCESS(0)
-    "vld1.32     {d2[1]}, [%0], %2             \n"
-    MEMACCESS(0)
-    "vld1.32     {d3[0]}, [%0], %2             \n"
-    MEMACCESS(0)
-    "vld1.32     {d3[1]}, [%0]                 \n"
+      // 4x8 block
+      "mov         %0, %1                        \n"
+      "vld1.32     {d0[0]}, [%0], %2             \n"
+      "vld1.32     {d0[1]}, [%0], %2             \n"
+      "vld1.32     {d1[0]}, [%0], %2             \n"
+      "vld1.32     {d1[1]}, [%0], %2             \n"
+      "vld1.32     {d2[0]}, [%0], %2             \n"
+      "vld1.32     {d2[1]}, [%0], %2             \n"
+      "vld1.32     {d3[0]}, [%0], %2             \n"
+      "vld1.32     {d3[1]}, [%0]                 \n"
 
-    "mov         %0, %3                        \n"
+      "mov         %0, %3                        \n"
 
-    MEMACCESS(6)
-    "vld1.8      {q3}, [%6]                    \n"
+      "vld1.8      {q3}, [%6]                    \n"
 
-    "vtbl.8      d4, {d0, d1}, d6              \n"
-    "vtbl.8      d5, {d0, d1}, d7              \n"
-    "vtbl.8      d0, {d2, d3}, d6              \n"
-    "vtbl.8      d1, {d2, d3}, d7              \n"
+      "vtbl.8      d4, {d0, d1}, d6              \n"
+      "vtbl.8      d5, {d0, d1}, d7              \n"
+      "vtbl.8      d0, {d2, d3}, d6              \n"
+      "vtbl.8      d1, {d2, d3}, d7              \n"
 
-    // TODO(frkoenig): Rework shuffle above to
-    // write out with 4 instead of 8 writes.
-    MEMACCESS(0)
-    "vst1.32     {d4[0]}, [%0], %4             \n"
-    MEMACCESS(0)
-    "vst1.32     {d4[1]}, [%0], %4             \n"
-    MEMACCESS(0)
-    "vst1.32     {d5[0]}, [%0], %4             \n"
-    MEMACCESS(0)
-    "vst1.32     {d5[1]}, [%0]                 \n"
+      // TODO(frkoenig): Rework shuffle above to
+      // write out with 4 instead of 8 writes.
+      "vst1.32     {d4[0]}, [%0], %4             \n"
+      "vst1.32     {d4[1]}, [%0], %4             \n"
+      "vst1.32     {d5[0]}, [%0], %4             \n"
+      "vst1.32     {d5[1]}, [%0]                 \n"
 
-    "add         %0, %3, #4                    \n"
-    MEMACCESS(0)
-    "vst1.32     {d0[0]}, [%0], %4             \n"
-    MEMACCESS(0)
-    "vst1.32     {d0[1]}, [%0], %4             \n"
-    MEMACCESS(0)
-    "vst1.32     {d1[0]}, [%0], %4             \n"
-    MEMACCESS(0)
-    "vst1.32     {d1[1]}, [%0]                 \n"
+      "add         %0, %3, #4                    \n"
+      "vst1.32     {d0[0]}, [%0], %4             \n"
+      "vst1.32     {d0[1]}, [%0], %4             \n"
+      "vst1.32     {d1[0]}, [%0], %4             \n"
+      "vst1.32     {d1[1]}, [%0]                 \n"
 
-    "add         %1, #4                        \n"  // src += 4
-    "add         %3, %3, %4, lsl #2            \n"  // dst += 4 * dst_stride
-    "subs        %5,  #4                       \n"  // w   -= 4
-    "beq         4f                            \n"
+      "add         %1, #4                        \n"  // src += 4
+      "add         %3, %3, %4, lsl #2            \n"  // dst += 4 * dst_stride
+      "subs        %5,  #4                       \n"  // w   -= 4
+      "beq         4f                            \n"
 
-    // some residual, check to see if it includes a 2x8 block,
-    // or less
-    "cmp         %5, #2                        \n"
-    "blt         3f                            \n"
+      // some residual, check to see if it includes a 2x8 block,
+      // or less
+      "cmp         %5, #2                        \n"
+      "blt         3f                            \n"
 
-    // 2x8 block
-    "2:                                        \n"
-    "mov         %0, %1                        \n"
-    MEMACCESS(0)
-    "vld1.16     {d0[0]}, [%0], %2             \n"
-    MEMACCESS(0)
-    "vld1.16     {d1[0]}, [%0], %2             \n"
-    MEMACCESS(0)
-    "vld1.16     {d0[1]}, [%0], %2             \n"
-    MEMACCESS(0)
-    "vld1.16     {d1[1]}, [%0], %2             \n"
-    MEMACCESS(0)
-    "vld1.16     {d0[2]}, [%0], %2             \n"
-    MEMACCESS(0)
-    "vld1.16     {d1[2]}, [%0], %2             \n"
-    MEMACCESS(0)
-    "vld1.16     {d0[3]}, [%0], %2             \n"
-    MEMACCESS(0)
-    "vld1.16     {d1[3]}, [%0]                 \n"
+      // 2x8 block
+      "2:                                        \n"
+      "mov         %0, %1                        \n"
+      "vld1.16     {d0[0]}, [%0], %2             \n"
+      "vld1.16     {d1[0]}, [%0], %2             \n"
+      "vld1.16     {d0[1]}, [%0], %2             \n"
+      "vld1.16     {d1[1]}, [%0], %2             \n"
+      "vld1.16     {d0[2]}, [%0], %2             \n"
+      "vld1.16     {d1[2]}, [%0], %2             \n"
+      "vld1.16     {d0[3]}, [%0], %2             \n"
+      "vld1.16     {d1[3]}, [%0]                 \n"
 
-    "vtrn.8      d0, d1                        \n"
+      "vtrn.8      d0, d1                        \n"
 
-    "mov         %0, %3                        \n"
+      "mov         %0, %3                        \n"
 
-    MEMACCESS(0)
-    "vst1.64     {d0}, [%0], %4                \n"
-    MEMACCESS(0)
-    "vst1.64     {d1}, [%0]                    \n"
+      "vst1.64     {d0}, [%0], %4                \n"
+      "vst1.64     {d1}, [%0]                    \n"
 
-    "add         %1, #2                        \n"  // src += 2
-    "add         %3, %3, %4, lsl #1            \n"  // dst += 2 * dst_stride
-    "subs        %5,  #2                       \n"  // w   -= 2
-    "beq         4f                            \n"
+      "add         %1, #2                        \n"  // src += 2
+      "add         %3, %3, %4, lsl #1            \n"  // dst += 2 * dst_stride
+      "subs        %5,  #2                       \n"  // w   -= 2
+      "beq         4f                            \n"
 
-    // 1x8 block
-    "3:                                        \n"
-    MEMACCESS(1)
-    "vld1.8      {d0[0]}, [%1], %2             \n"
-    MEMACCESS(1)
-    "vld1.8      {d0[1]}, [%1], %2             \n"
-    MEMACCESS(1)
-    "vld1.8      {d0[2]}, [%1], %2             \n"
-    MEMACCESS(1)
-    "vld1.8      {d0[3]}, [%1], %2             \n"
-    MEMACCESS(1)
-    "vld1.8      {d0[4]}, [%1], %2             \n"
-    MEMACCESS(1)
-    "vld1.8      {d0[5]}, [%1], %2             \n"
-    MEMACCESS(1)
-    "vld1.8      {d0[6]}, [%1], %2             \n"
-    MEMACCESS(1)
-    "vld1.8      {d0[7]}, [%1]                 \n"
+      // 1x8 block
+      "3:                                        \n"
+      "vld1.8      {d0[0]}, [%1], %2             \n"
+      "vld1.8      {d0[1]}, [%1], %2             \n"
+      "vld1.8      {d0[2]}, [%1], %2             \n"
+      "vld1.8      {d0[3]}, [%1], %2             \n"
+      "vld1.8      {d0[4]}, [%1], %2             \n"
+      "vld1.8      {d0[5]}, [%1], %2             \n"
+      "vld1.8      {d0[6]}, [%1], %2             \n"
+      "vld1.8      {d0[7]}, [%1]                 \n"
 
-    MEMACCESS(3)
-    "vst1.64     {d0}, [%3]                    \n"
+      "vst1.64     {d0}, [%3]                    \n"
 
-    "4:                                        \n"
+      "4:                                        \n"
 
-    : "=&r"(src_temp),         // %0
-      "+r"(src),               // %1
-      "+r"(src_stride),        // %2
-      "+r"(dst),               // %3
-      "+r"(dst_stride),        // %4
-      "+r"(width)              // %5
-    : "r"(&kVTbl4x4Transpose)  // %6
-    : "memory", "cc", "q0", "q1", "q2", "q3"
-  );
+      : "=&r"(src_temp),         // %0
+        "+r"(src),               // %1
+        "+r"(src_stride),        // %2
+        "+r"(dst),               // %3
+        "+r"(dst_stride),        // %4
+        "+r"(width)              // %5
+      : "r"(&kVTbl4x4Transpose)  // %6
+      : "memory", "cc", "q0", "q1", "q2", "q3");
 }
 
-static uvec8 kVTbl4x4TransposeDi =
-  { 0,  8,  1,  9,  2, 10,  3, 11,  4, 12,  5, 13,  6, 14,  7, 15 };
+static const uvec8 kVTbl4x4TransposeDi = {0, 8,  1, 9,  2, 10, 3, 11,
+                                          4, 12, 5, 13, 6, 14, 7, 15};
 
-void TransposeUVWx8_NEON(const uint8* src, int src_stride,
-                         uint8* dst_a, int dst_stride_a,
-                         uint8* dst_b, int dst_stride_b,
+void TransposeUVWx8_NEON(const uint8_t* src,
+                         int src_stride,
+                         uint8_t* dst_a,
+                         int dst_stride_a,
+                         uint8_t* dst_b,
+                         int dst_stride_b,
                          int width) {
-  const uint8* src_temp;
-  asm volatile (
-    // loops are on blocks of 8. loop will stop when
-    // counter gets to or below 0. starting the counter
-    // at w-8 allow for this
-    "sub         %7, #8                        \n"
+  const uint8_t* src_temp;
+  asm volatile(
+      // loops are on blocks of 8. loop will stop when
+      // counter gets to or below 0. starting the counter
+      // at w-8 allow for this
+      "sub         %7, #8                        \n"
 
-    // handle 8x8 blocks. this should be the majority of the plane
-    "1:                                        \n"
+      // handle 8x8 blocks. this should be the majority of the plane
+      "1:                                        \n"
       "mov         %0, %1                      \n"
 
-      MEMACCESS(0)
       "vld2.8      {d0,  d1},  [%0], %2        \n"
-      MEMACCESS(0)
       "vld2.8      {d2,  d3},  [%0], %2        \n"
-      MEMACCESS(0)
       "vld2.8      {d4,  d5},  [%0], %2        \n"
-      MEMACCESS(0)
       "vld2.8      {d6,  d7},  [%0], %2        \n"
-      MEMACCESS(0)
       "vld2.8      {d16, d17}, [%0], %2        \n"
-      MEMACCESS(0)
       "vld2.8      {d18, d19}, [%0], %2        \n"
-      MEMACCESS(0)
       "vld2.8      {d20, d21}, [%0], %2        \n"
-      MEMACCESS(0)
       "vld2.8      {d22, d23}, [%0]            \n"
 
       "vtrn.8      q1, q0                      \n"
@@ -301,40 +245,24 @@
 
       "mov         %0, %3                      \n"
 
-    MEMACCESS(0)
       "vst1.8      {d2},  [%0], %4             \n"
-    MEMACCESS(0)
       "vst1.8      {d0},  [%0], %4             \n"
-    MEMACCESS(0)
       "vst1.8      {d6},  [%0], %4             \n"
-    MEMACCESS(0)
       "vst1.8      {d4},  [%0], %4             \n"
-    MEMACCESS(0)
       "vst1.8      {d18}, [%0], %4             \n"
-    MEMACCESS(0)
       "vst1.8      {d16}, [%0], %4             \n"
-    MEMACCESS(0)
       "vst1.8      {d22}, [%0], %4             \n"
-    MEMACCESS(0)
       "vst1.8      {d20}, [%0]                 \n"
 
       "mov         %0, %5                      \n"
 
-    MEMACCESS(0)
       "vst1.8      {d3},  [%0], %6             \n"
-    MEMACCESS(0)
       "vst1.8      {d1},  [%0], %6             \n"
-    MEMACCESS(0)
       "vst1.8      {d7},  [%0], %6             \n"
-    MEMACCESS(0)
       "vst1.8      {d5},  [%0], %6             \n"
-    MEMACCESS(0)
       "vst1.8      {d19}, [%0], %6             \n"
-    MEMACCESS(0)
       "vst1.8      {d17}, [%0], %6             \n"
-    MEMACCESS(0)
       "vst1.8      {d23}, [%0], %6             \n"
-    MEMACCESS(0)
       "vst1.8      {d21}, [%0]                 \n"
 
       "add         %1, #8*2                    \n"  // src   += 8*2
@@ -343,187 +271,142 @@
       "subs        %7,  #8                     \n"  // w     -= 8
       "bge         1b                          \n"
 
-    // add 8 back to counter. if the result is 0 there are
-    // no residuals.
-    "adds        %7, #8                        \n"
-    "beq         4f                            \n"
+      // add 8 back to counter. if the result is 0 there are
+      // no residuals.
+      "adds        %7, #8                        \n"
+      "beq         4f                            \n"
 
-    // some residual, so between 1 and 7 lines left to transpose
-    "cmp         %7, #2                        \n"
-    "blt         3f                            \n"
+      // some residual, so between 1 and 7 lines left to transpose
+      "cmp         %7, #2                        \n"
+      "blt         3f                            \n"
 
-    "cmp         %7, #4                        \n"
-    "blt         2f                            \n"
+      "cmp         %7, #4                        \n"
+      "blt         2f                            \n"
 
-    // TODO(frkoenig): Clean this up
-    // 4x8 block
-    "mov         %0, %1                        \n"
-    MEMACCESS(0)
-    "vld1.64     {d0}, [%0], %2                \n"
-    MEMACCESS(0)
-    "vld1.64     {d1}, [%0], %2                \n"
-    MEMACCESS(0)
-    "vld1.64     {d2}, [%0], %2                \n"
-    MEMACCESS(0)
-    "vld1.64     {d3}, [%0], %2                \n"
-    MEMACCESS(0)
-    "vld1.64     {d4}, [%0], %2                \n"
-    MEMACCESS(0)
-    "vld1.64     {d5}, [%0], %2                \n"
-    MEMACCESS(0)
-    "vld1.64     {d6}, [%0], %2                \n"
-    MEMACCESS(0)
-    "vld1.64     {d7}, [%0]                    \n"
+      // TODO(frkoenig): Clean this up
+      // 4x8 block
+      "mov         %0, %1                        \n"
+      "vld1.64     {d0}, [%0], %2                \n"
+      "vld1.64     {d1}, [%0], %2                \n"
+      "vld1.64     {d2}, [%0], %2                \n"
+      "vld1.64     {d3}, [%0], %2                \n"
+      "vld1.64     {d4}, [%0], %2                \n"
+      "vld1.64     {d5}, [%0], %2                \n"
+      "vld1.64     {d6}, [%0], %2                \n"
+      "vld1.64     {d7}, [%0]                    \n"
 
-    MEMACCESS(8)
-    "vld1.8      {q15}, [%8]                   \n"
+      "vld1.8      {q15}, [%8]                   \n"
 
-    "vtrn.8      q0, q1                        \n"
-    "vtrn.8      q2, q3                        \n"
+      "vtrn.8      q0, q1                        \n"
+      "vtrn.8      q2, q3                        \n"
 
-    "vtbl.8      d16, {d0, d1}, d30            \n"
-    "vtbl.8      d17, {d0, d1}, d31            \n"
-    "vtbl.8      d18, {d2, d3}, d30            \n"
-    "vtbl.8      d19, {d2, d3}, d31            \n"
-    "vtbl.8      d20, {d4, d5}, d30            \n"
-    "vtbl.8      d21, {d4, d5}, d31            \n"
-    "vtbl.8      d22, {d6, d7}, d30            \n"
-    "vtbl.8      d23, {d6, d7}, d31            \n"
+      "vtbl.8      d16, {d0, d1}, d30            \n"
+      "vtbl.8      d17, {d0, d1}, d31            \n"
+      "vtbl.8      d18, {d2, d3}, d30            \n"
+      "vtbl.8      d19, {d2, d3}, d31            \n"
+      "vtbl.8      d20, {d4, d5}, d30            \n"
+      "vtbl.8      d21, {d4, d5}, d31            \n"
+      "vtbl.8      d22, {d6, d7}, d30            \n"
+      "vtbl.8      d23, {d6, d7}, d31            \n"
 
-    "mov         %0, %3                        \n"
+      "mov         %0, %3                        \n"
 
-    MEMACCESS(0)
-    "vst1.32     {d16[0]},  [%0], %4           \n"
-    MEMACCESS(0)
-    "vst1.32     {d16[1]},  [%0], %4           \n"
-    MEMACCESS(0)
-    "vst1.32     {d17[0]},  [%0], %4           \n"
-    MEMACCESS(0)
-    "vst1.32     {d17[1]},  [%0], %4           \n"
+      "vst1.32     {d16[0]},  [%0], %4           \n"
+      "vst1.32     {d16[1]},  [%0], %4           \n"
+      "vst1.32     {d17[0]},  [%0], %4           \n"
+      "vst1.32     {d17[1]},  [%0], %4           \n"
 
-    "add         %0, %3, #4                    \n"
-    MEMACCESS(0)
-    "vst1.32     {d20[0]}, [%0], %4            \n"
-    MEMACCESS(0)
-    "vst1.32     {d20[1]}, [%0], %4            \n"
-    MEMACCESS(0)
-    "vst1.32     {d21[0]}, [%0], %4            \n"
-    MEMACCESS(0)
-    "vst1.32     {d21[1]}, [%0]                \n"
+      "add         %0, %3, #4                    \n"
+      "vst1.32     {d20[0]}, [%0], %4            \n"
+      "vst1.32     {d20[1]}, [%0], %4            \n"
+      "vst1.32     {d21[0]}, [%0], %4            \n"
+      "vst1.32     {d21[1]}, [%0]                \n"
 
-    "mov         %0, %5                        \n"
+      "mov         %0, %5                        \n"
 
-    MEMACCESS(0)
-    "vst1.32     {d18[0]}, [%0], %6            \n"
-    MEMACCESS(0)
-    "vst1.32     {d18[1]}, [%0], %6            \n"
-    MEMACCESS(0)
-    "vst1.32     {d19[0]}, [%0], %6            \n"
-    MEMACCESS(0)
-    "vst1.32     {d19[1]}, [%0], %6            \n"
+      "vst1.32     {d18[0]}, [%0], %6            \n"
+      "vst1.32     {d18[1]}, [%0], %6            \n"
+      "vst1.32     {d19[0]}, [%0], %6            \n"
+      "vst1.32     {d19[1]}, [%0], %6            \n"
 
-    "add         %0, %5, #4                    \n"
-    MEMACCESS(0)
-    "vst1.32     {d22[0]},  [%0], %6           \n"
-    MEMACCESS(0)
-    "vst1.32     {d22[1]},  [%0], %6           \n"
-    MEMACCESS(0)
-    "vst1.32     {d23[0]},  [%0], %6           \n"
-    MEMACCESS(0)
-    "vst1.32     {d23[1]},  [%0]               \n"
+      "add         %0, %5, #4                    \n"
+      "vst1.32     {d22[0]},  [%0], %6           \n"
+      "vst1.32     {d22[1]},  [%0], %6           \n"
+      "vst1.32     {d23[0]},  [%0], %6           \n"
+      "vst1.32     {d23[1]},  [%0]               \n"
 
-    "add         %1, #4*2                      \n"  // src   += 4 * 2
-    "add         %3, %3, %4, lsl #2            \n"  // dst_a += 4 * dst_stride_a
-    "add         %5, %5, %6, lsl #2            \n"  // dst_b += 4 * dst_stride_b
-    "subs        %7,  #4                       \n"  // w     -= 4
-    "beq         4f                            \n"
+      "add         %1, #4*2                      \n"  // src   += 4 * 2
+      "add         %3, %3, %4, lsl #2            \n"  // dst_a += 4 *
+                                                      // dst_stride_a
+      "add         %5, %5, %6, lsl #2            \n"  // dst_b += 4 *
+                                                      // dst_stride_b
+      "subs        %7,  #4                       \n"  // w     -= 4
+      "beq         4f                            \n"
 
-    // some residual, check to see if it includes a 2x8 block,
-    // or less
-    "cmp         %7, #2                        \n"
-    "blt         3f                            \n"
+      // some residual, check to see if it includes a 2x8 block,
+      // or less
+      "cmp         %7, #2                        \n"
+      "blt         3f                            \n"
 
-    // 2x8 block
-    "2:                                        \n"
-    "mov         %0, %1                        \n"
-    MEMACCESS(0)
-    "vld2.16     {d0[0], d2[0]}, [%0], %2      \n"
-    MEMACCESS(0)
-    "vld2.16     {d1[0], d3[0]}, [%0], %2      \n"
-    MEMACCESS(0)
-    "vld2.16     {d0[1], d2[1]}, [%0], %2      \n"
-    MEMACCESS(0)
-    "vld2.16     {d1[1], d3[1]}, [%0], %2      \n"
-    MEMACCESS(0)
-    "vld2.16     {d0[2], d2[2]}, [%0], %2      \n"
-    MEMACCESS(0)
-    "vld2.16     {d1[2], d3[2]}, [%0], %2      \n"
-    MEMACCESS(0)
-    "vld2.16     {d0[3], d2[3]}, [%0], %2      \n"
-    MEMACCESS(0)
-    "vld2.16     {d1[3], d3[3]}, [%0]          \n"
+      // 2x8 block
+      "2:                                        \n"
+      "mov         %0, %1                        \n"
+      "vld2.16     {d0[0], d2[0]}, [%0], %2      \n"
+      "vld2.16     {d1[0], d3[0]}, [%0], %2      \n"
+      "vld2.16     {d0[1], d2[1]}, [%0], %2      \n"
+      "vld2.16     {d1[1], d3[1]}, [%0], %2      \n"
+      "vld2.16     {d0[2], d2[2]}, [%0], %2      \n"
+      "vld2.16     {d1[2], d3[2]}, [%0], %2      \n"
+      "vld2.16     {d0[3], d2[3]}, [%0], %2      \n"
+      "vld2.16     {d1[3], d3[3]}, [%0]          \n"
 
-    "vtrn.8      d0, d1                        \n"
-    "vtrn.8      d2, d3                        \n"
+      "vtrn.8      d0, d1                        \n"
+      "vtrn.8      d2, d3                        \n"
 
-    "mov         %0, %3                        \n"
+      "mov         %0, %3                        \n"
 
-    MEMACCESS(0)
-    "vst1.64     {d0}, [%0], %4                \n"
-    MEMACCESS(0)
-    "vst1.64     {d2}, [%0]                    \n"
+      "vst1.64     {d0}, [%0], %4                \n"
+      "vst1.64     {d2}, [%0]                    \n"
 
-    "mov         %0, %5                        \n"
+      "mov         %0, %5                        \n"
 
-    MEMACCESS(0)
-    "vst1.64     {d1}, [%0], %6                \n"
-    MEMACCESS(0)
-    "vst1.64     {d3}, [%0]                    \n"
+      "vst1.64     {d1}, [%0], %6                \n"
+      "vst1.64     {d3}, [%0]                    \n"
 
-    "add         %1, #2*2                      \n"  // src   += 2 * 2
-    "add         %3, %3, %4, lsl #1            \n"  // dst_a += 2 * dst_stride_a
-    "add         %5, %5, %6, lsl #1            \n"  // dst_b += 2 * dst_stride_b
-    "subs        %7,  #2                       \n"  // w     -= 2
-    "beq         4f                            \n"
+      "add         %1, #2*2                      \n"  // src   += 2 * 2
+      "add         %3, %3, %4, lsl #1            \n"  // dst_a += 2 *
+                                                      // dst_stride_a
+      "add         %5, %5, %6, lsl #1            \n"  // dst_b += 2 *
+                                                      // dst_stride_b
+      "subs        %7,  #2                       \n"  // w     -= 2
+      "beq         4f                            \n"
 
-    // 1x8 block
-    "3:                                        \n"
-    MEMACCESS(1)
-    "vld2.8      {d0[0], d1[0]}, [%1], %2      \n"
-    MEMACCESS(1)
-    "vld2.8      {d0[1], d1[1]}, [%1], %2      \n"
-    MEMACCESS(1)
-    "vld2.8      {d0[2], d1[2]}, [%1], %2      \n"
-    MEMACCESS(1)
-    "vld2.8      {d0[3], d1[3]}, [%1], %2      \n"
-    MEMACCESS(1)
-    "vld2.8      {d0[4], d1[4]}, [%1], %2      \n"
-    MEMACCESS(1)
-    "vld2.8      {d0[5], d1[5]}, [%1], %2      \n"
-    MEMACCESS(1)
-    "vld2.8      {d0[6], d1[6]}, [%1], %2      \n"
-    MEMACCESS(1)
-    "vld2.8      {d0[7], d1[7]}, [%1]          \n"
+      // 1x8 block
+      "3:                                        \n"
+      "vld2.8      {d0[0], d1[0]}, [%1], %2      \n"
+      "vld2.8      {d0[1], d1[1]}, [%1], %2      \n"
+      "vld2.8      {d0[2], d1[2]}, [%1], %2      \n"
+      "vld2.8      {d0[3], d1[3]}, [%1], %2      \n"
+      "vld2.8      {d0[4], d1[4]}, [%1], %2      \n"
+      "vld2.8      {d0[5], d1[5]}, [%1], %2      \n"
+      "vld2.8      {d0[6], d1[6]}, [%1], %2      \n"
+      "vld2.8      {d0[7], d1[7]}, [%1]          \n"
 
-    MEMACCESS(3)
-    "vst1.64     {d0}, [%3]                    \n"
-    MEMACCESS(5)
-    "vst1.64     {d1}, [%5]                    \n"
+      "vst1.64     {d0}, [%3]                    \n"
+      "vst1.64     {d1}, [%5]                    \n"
 
-    "4:                                        \n"
+      "4:                                        \n"
 
-    : "=&r"(src_temp),           // %0
-      "+r"(src),                 // %1
-      "+r"(src_stride),          // %2
-      "+r"(dst_a),               // %3
-      "+r"(dst_stride_a),        // %4
-      "+r"(dst_b),               // %5
-      "+r"(dst_stride_b),        // %6
-      "+r"(width)                // %7
-    : "r"(&kVTbl4x4TransposeDi)  // %8
-    : "memory", "cc",
-      "q0", "q1", "q2", "q3", "q8", "q9", "q10", "q11"
-  );
+      : "=&r"(src_temp),           // %0
+        "+r"(src),                 // %1
+        "+r"(src_stride),          // %2
+        "+r"(dst_a),               // %3
+        "+r"(dst_stride_a),        // %4
+        "+r"(dst_b),               // %5
+        "+r"(dst_stride_b),        // %6
+        "+r"(width)                // %7
+      : "r"(&kVTbl4x4TransposeDi)  // %8
+      : "memory", "cc", "q0", "q1", "q2", "q3", "q8", "q9", "q10", "q11");
 }
 #endif  // defined(__ARM_NEON__) && !defined(__aarch64__)
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_neon64.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_neon64.cc
index 1ab448f..f469baa 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_neon64.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_neon64.cc
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#include "libyuv/row.h"
 #include "libyuv/rotate_row.h"
+#include "libyuv/row.h"
 
 #include "libyuv/basic_types.h"
 
@@ -21,38 +21,32 @@
 // This module is for GCC Neon armv8 64 bit.
 #if !defined(LIBYUV_DISABLE_NEON) && defined(__aarch64__)
 
-static uvec8 kVTbl4x4Transpose =
-  { 0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15 };
+static const uvec8 kVTbl4x4Transpose = {0, 4, 8,  12, 1, 5, 9,  13,
+                                        2, 6, 10, 14, 3, 7, 11, 15};
 
-void TransposeWx8_NEON(const uint8* src, int src_stride,
-                       uint8* dst, int dst_stride, int width) {
-  const uint8* src_temp;
-  int64 width64 = (int64) width;  // Work around clang 3.4 warning.
-  asm volatile (
-    // loops are on blocks of 8. loop will stop when
-    // counter gets to or below 0. starting the counter
-    // at w-8 allow for this
-    "sub         %3, %3, #8                      \n"
+void TransposeWx8_NEON(const uint8_t* src,
+                       int src_stride,
+                       uint8_t* dst,
+                       int dst_stride,
+                       int width) {
+  const uint8_t* src_temp;
+  asm volatile(
+      // loops are on blocks of 8. loop will stop when
+      // counter gets to or below 0. starting the counter
+      // at w-8 allow for this
+      "sub         %w3, %w3, #8                     \n"
 
-    // handle 8x8 blocks. this should be the majority of the plane
-    "1:                                          \n"
+      // handle 8x8 blocks. this should be the majority of the plane
+      "1:                                          \n"
       "mov         %0, %1                        \n"
 
-      MEMACCESS(0)
       "ld1        {v0.8b}, [%0], %5              \n"
-      MEMACCESS(0)
       "ld1        {v1.8b}, [%0], %5              \n"
-      MEMACCESS(0)
       "ld1        {v2.8b}, [%0], %5              \n"
-      MEMACCESS(0)
       "ld1        {v3.8b}, [%0], %5              \n"
-      MEMACCESS(0)
       "ld1        {v4.8b}, [%0], %5              \n"
-      MEMACCESS(0)
       "ld1        {v5.8b}, [%0], %5              \n"
-      MEMACCESS(0)
       "ld1        {v6.8b}, [%0], %5              \n"
-      MEMACCESS(0)
       "ld1        {v7.8b}, [%0]                  \n"
 
       "trn2     v16.8b, v0.8b, v1.8b             \n"
@@ -84,456 +78,345 @@
 
       "mov         %0, %2                        \n"
 
-    MEMACCESS(0)
       "st1      {v17.8b}, [%0], %6               \n"
-    MEMACCESS(0)
       "st1      {v16.8b}, [%0], %6               \n"
-    MEMACCESS(0)
       "st1      {v19.8b}, [%0], %6               \n"
-    MEMACCESS(0)
       "st1      {v18.8b}, [%0], %6               \n"
-    MEMACCESS(0)
       "st1      {v21.8b}, [%0], %6               \n"
-    MEMACCESS(0)
       "st1      {v20.8b}, [%0], %6               \n"
-    MEMACCESS(0)
       "st1      {v23.8b}, [%0], %6               \n"
-    MEMACCESS(0)
       "st1      {v22.8b}, [%0]                   \n"
 
       "add         %1, %1, #8                    \n"  // src += 8
       "add         %2, %2, %6, lsl #3            \n"  // dst += 8 * dst_stride
-      "subs        %3, %3, #8                    \n"  // w   -= 8
+      "subs        %w3, %w3, #8                  \n"  // w   -= 8
       "b.ge        1b                            \n"
 
-    // add 8 back to counter. if the result is 0 there are
-    // no residuals.
-    "adds        %3, %3, #8                      \n"
-    "b.eq        4f                              \n"
+      // add 8 back to counter. if the result is 0 there are
+      // no residuals.
+      "adds        %w3, %w3, #8                    \n"
+      "b.eq        4f                              \n"
 
-    // some residual, so between 1 and 7 lines left to transpose
-    "cmp         %3, #2                          \n"
-    "b.lt        3f                              \n"
+      // some residual, so between 1 and 7 lines left to transpose
+      "cmp         %w3, #2                          \n"
+      "b.lt        3f                              \n"
 
-    "cmp         %3, #4                          \n"
-    "b.lt        2f                              \n"
+      "cmp         %w3, #4                          \n"
+      "b.lt        2f                              \n"
 
-    // 4x8 block
-    "mov         %0, %1                          \n"
-    MEMACCESS(0)
-    "ld1     {v0.s}[0], [%0], %5                 \n"
-    MEMACCESS(0)
-    "ld1     {v0.s}[1], [%0], %5                 \n"
-    MEMACCESS(0)
-    "ld1     {v0.s}[2], [%0], %5                 \n"
-    MEMACCESS(0)
-    "ld1     {v0.s}[3], [%0], %5                 \n"
-    MEMACCESS(0)
-    "ld1     {v1.s}[0], [%0], %5                 \n"
-    MEMACCESS(0)
-    "ld1     {v1.s}[1], [%0], %5                 \n"
-    MEMACCESS(0)
-    "ld1     {v1.s}[2], [%0], %5                 \n"
-    MEMACCESS(0)
-    "ld1     {v1.s}[3], [%0]                     \n"
+      // 4x8 block
+      "mov         %0, %1                          \n"
+      "ld1     {v0.s}[0], [%0], %5                 \n"
+      "ld1     {v0.s}[1], [%0], %5                 \n"
+      "ld1     {v0.s}[2], [%0], %5                 \n"
+      "ld1     {v0.s}[3], [%0], %5                 \n"
+      "ld1     {v1.s}[0], [%0], %5                 \n"
+      "ld1     {v1.s}[1], [%0], %5                 \n"
+      "ld1     {v1.s}[2], [%0], %5                 \n"
+      "ld1     {v1.s}[3], [%0]                     \n"
 
-    "mov         %0, %2                          \n"
+      "mov         %0, %2                          \n"
 
-    MEMACCESS(4)
-    "ld1      {v2.16b}, [%4]                     \n"
+      "ld1      {v2.16b}, [%4]                     \n"
 
-    "tbl      v3.16b, {v0.16b}, v2.16b           \n"
-    "tbl      v0.16b, {v1.16b}, v2.16b           \n"
+      "tbl      v3.16b, {v0.16b}, v2.16b           \n"
+      "tbl      v0.16b, {v1.16b}, v2.16b           \n"
 
-    // TODO(frkoenig): Rework shuffle above to
-    // write out with 4 instead of 8 writes.
-    MEMACCESS(0)
-    "st1 {v3.s}[0], [%0], %6                     \n"
-    MEMACCESS(0)
-    "st1 {v3.s}[1], [%0], %6                     \n"
-    MEMACCESS(0)
-    "st1 {v3.s}[2], [%0], %6                     \n"
-    MEMACCESS(0)
-    "st1 {v3.s}[3], [%0]                         \n"
+      // TODO(frkoenig): Rework shuffle above to
+      // write out with 4 instead of 8 writes.
+      "st1 {v3.s}[0], [%0], %6                     \n"
+      "st1 {v3.s}[1], [%0], %6                     \n"
+      "st1 {v3.s}[2], [%0], %6                     \n"
+      "st1 {v3.s}[3], [%0]                         \n"
 
-    "add         %0, %2, #4                      \n"
-    MEMACCESS(0)
-    "st1 {v0.s}[0], [%0], %6                     \n"
-    MEMACCESS(0)
-    "st1 {v0.s}[1], [%0], %6                     \n"
-    MEMACCESS(0)
-    "st1 {v0.s}[2], [%0], %6                     \n"
-    MEMACCESS(0)
-    "st1 {v0.s}[3], [%0]                         \n"
+      "add         %0, %2, #4                      \n"
+      "st1 {v0.s}[0], [%0], %6                     \n"
+      "st1 {v0.s}[1], [%0], %6                     \n"
+      "st1 {v0.s}[2], [%0], %6                     \n"
+      "st1 {v0.s}[3], [%0]                         \n"
 
-    "add         %1, %1, #4                      \n"  // src += 4
-    "add         %2, %2, %6, lsl #2              \n"  // dst += 4 * dst_stride
-    "subs        %3, %3, #4                      \n"  // w   -= 4
-    "b.eq        4f                              \n"
+      "add         %1, %1, #4                      \n"  // src += 4
+      "add         %2, %2, %6, lsl #2              \n"  // dst += 4 * dst_stride
+      "subs        %w3, %w3, #4                    \n"  // w   -= 4
+      "b.eq        4f                              \n"
 
-    // some residual, check to see if it includes a 2x8 block,
-    // or less
-    "cmp         %3, #2                          \n"
-    "b.lt        3f                              \n"
+      // some residual, check to see if it includes a 2x8 block,
+      // or less
+      "cmp         %w3, #2                         \n"
+      "b.lt        3f                              \n"
 
-    // 2x8 block
-    "2:                                          \n"
-    "mov         %0, %1                          \n"
-    MEMACCESS(0)
-    "ld1     {v0.h}[0], [%0], %5                 \n"
-    MEMACCESS(0)
-    "ld1     {v1.h}[0], [%0], %5                 \n"
-    MEMACCESS(0)
-    "ld1     {v0.h}[1], [%0], %5                 \n"
-    MEMACCESS(0)
-    "ld1     {v1.h}[1], [%0], %5                 \n"
-    MEMACCESS(0)
-    "ld1     {v0.h}[2], [%0], %5                 \n"
-    MEMACCESS(0)
-    "ld1     {v1.h}[2], [%0], %5                 \n"
-    MEMACCESS(0)
-    "ld1     {v0.h}[3], [%0], %5                 \n"
-    MEMACCESS(0)
-    "ld1     {v1.h}[3], [%0]                     \n"
+      // 2x8 block
+      "2:                                          \n"
+      "mov         %0, %1                          \n"
+      "ld1     {v0.h}[0], [%0], %5                 \n"
+      "ld1     {v1.h}[0], [%0], %5                 \n"
+      "ld1     {v0.h}[1], [%0], %5                 \n"
+      "ld1     {v1.h}[1], [%0], %5                 \n"
+      "ld1     {v0.h}[2], [%0], %5                 \n"
+      "ld1     {v1.h}[2], [%0], %5                 \n"
+      "ld1     {v0.h}[3], [%0], %5                 \n"
+      "ld1     {v1.h}[3], [%0]                     \n"
 
-    "trn2    v2.8b, v0.8b, v1.8b                 \n"
-    "trn1    v3.8b, v0.8b, v1.8b                 \n"
+      "trn2    v2.8b, v0.8b, v1.8b                 \n"
+      "trn1    v3.8b, v0.8b, v1.8b                 \n"
 
-    "mov         %0, %2                          \n"
+      "mov         %0, %2                          \n"
 
-    MEMACCESS(0)
-    "st1     {v3.8b}, [%0], %6                   \n"
-    MEMACCESS(0)
-    "st1     {v2.8b}, [%0]                       \n"
+      "st1     {v3.8b}, [%0], %6                   \n"
+      "st1     {v2.8b}, [%0]                       \n"
 
-    "add         %1, %1, #2                      \n"  // src += 2
-    "add         %2, %2, %6, lsl #1              \n"  // dst += 2 * dst_stride
-    "subs        %3, %3,  #2                     \n"  // w   -= 2
-    "b.eq        4f                              \n"
+      "add         %1, %1, #2                      \n"  // src += 2
+      "add         %2, %2, %6, lsl #1              \n"  // dst += 2 * dst_stride
+      "subs        %w3, %w3,  #2                   \n"  // w   -= 2
+      "b.eq        4f                              \n"
 
-    // 1x8 block
-    "3:                                          \n"
-    MEMACCESS(1)
-    "ld1         {v0.b}[0], [%1], %5             \n"
-    MEMACCESS(1)
-    "ld1         {v0.b}[1], [%1], %5             \n"
-    MEMACCESS(1)
-    "ld1         {v0.b}[2], [%1], %5             \n"
-    MEMACCESS(1)
-    "ld1         {v0.b}[3], [%1], %5             \n"
-    MEMACCESS(1)
-    "ld1         {v0.b}[4], [%1], %5             \n"
-    MEMACCESS(1)
-    "ld1         {v0.b}[5], [%1], %5             \n"
-    MEMACCESS(1)
-    "ld1         {v0.b}[6], [%1], %5             \n"
-    MEMACCESS(1)
-    "ld1         {v0.b}[7], [%1]                 \n"
+      // 1x8 block
+      "3:                                          \n"
+      "ld1         {v0.b}[0], [%1], %5             \n"
+      "ld1         {v0.b}[1], [%1], %5             \n"
+      "ld1         {v0.b}[2], [%1], %5             \n"
+      "ld1         {v0.b}[3], [%1], %5             \n"
+      "ld1         {v0.b}[4], [%1], %5             \n"
+      "ld1         {v0.b}[5], [%1], %5             \n"
+      "ld1         {v0.b}[6], [%1], %5             \n"
+      "ld1         {v0.b}[7], [%1]                 \n"
 
-    MEMACCESS(2)
-    "st1         {v0.8b}, [%2]                   \n"
+      "st1         {v0.8b}, [%2]                   \n"
 
-    "4:                                          \n"
+      "4:                                          \n"
 
-    : "=&r"(src_temp),                            // %0
-      "+r"(src),                                  // %1
-      "+r"(dst),                                  // %2
-      "+r"(width64)                               // %3
-    : "r"(&kVTbl4x4Transpose),                    // %4
-      "r"(static_cast<ptrdiff_t>(src_stride)),    // %5
-      "r"(static_cast<ptrdiff_t>(dst_stride))     // %6
-    : "memory", "cc", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16",
-      "v17", "v18", "v19", "v20", "v21", "v22", "v23"
-  );
+      : "=&r"(src_temp),                          // %0
+        "+r"(src),                                // %1
+        "+r"(dst),                                // %2
+        "+r"(width)                               // %3
+      : "r"(&kVTbl4x4Transpose),                  // %4
+        "r"(static_cast<ptrdiff_t>(src_stride)),  // %5
+        "r"(static_cast<ptrdiff_t>(dst_stride))   // %6
+      : "memory", "cc", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16",
+        "v17", "v18", "v19", "v20", "v21", "v22", "v23");
 }
 
-static uint8 kVTbl4x4TransposeDi[32] =
-  { 0,  16, 32, 48,  2, 18, 34, 50,  4, 20, 36, 52,  6, 22, 38, 54,
-    1,  17, 33, 49,  3, 19, 35, 51,  5, 21, 37, 53,  7, 23, 39, 55};
+static const uint8_t kVTbl4x4TransposeDi[32] = {
+    0, 16, 32, 48, 2, 18, 34, 50, 4, 20, 36, 52, 6, 22, 38, 54,
+    1, 17, 33, 49, 3, 19, 35, 51, 5, 21, 37, 53, 7, 23, 39, 55};
 
-void TransposeUVWx8_NEON(const uint8* src, int src_stride,
-                         uint8* dst_a, int dst_stride_a,
-                         uint8* dst_b, int dst_stride_b,
+void TransposeUVWx8_NEON(const uint8_t* src,
+                         int src_stride,
+                         uint8_t* dst_a,
+                         int dst_stride_a,
+                         uint8_t* dst_b,
+                         int dst_stride_b,
                          int width) {
-  const uint8* src_temp;
-  int64 width64 = (int64) width;  // Work around clang 3.4 warning.
-  asm volatile (
-    // loops are on blocks of 8. loop will stop when
-    // counter gets to or below 0. starting the counter
-    // at w-8 allow for this
-    "sub       %4, %4, #8                      \n"
+  const uint8_t* src_temp;
+  asm volatile(
+      // loops are on blocks of 8. loop will stop when
+      // counter gets to or below 0. starting the counter
+      // at w-8 allow for this
+      "sub       %w4, %w4, #8                    \n"
 
-    // handle 8x8 blocks. this should be the majority of the plane
-    "1:                                        \n"
-    "mov       %0, %1                          \n"
+      // handle 8x8 blocks. this should be the majority of the plane
+      "1:                                        \n"
+      "mov       %0, %1                          \n"
 
-    MEMACCESS(0)
-    "ld1       {v0.16b}, [%0], %5              \n"
-    MEMACCESS(0)
-    "ld1       {v1.16b}, [%0], %5              \n"
-    MEMACCESS(0)
-    "ld1       {v2.16b}, [%0], %5              \n"
-    MEMACCESS(0)
-    "ld1       {v3.16b}, [%0], %5              \n"
-    MEMACCESS(0)
-    "ld1       {v4.16b}, [%0], %5              \n"
-    MEMACCESS(0)
-    "ld1       {v5.16b}, [%0], %5              \n"
-    MEMACCESS(0)
-    "ld1       {v6.16b}, [%0], %5              \n"
-    MEMACCESS(0)
-    "ld1       {v7.16b}, [%0]                  \n"
+      "ld1       {v0.16b}, [%0], %5              \n"
+      "ld1       {v1.16b}, [%0], %5              \n"
+      "ld1       {v2.16b}, [%0], %5              \n"
+      "ld1       {v3.16b}, [%0], %5              \n"
+      "ld1       {v4.16b}, [%0], %5              \n"
+      "ld1       {v5.16b}, [%0], %5              \n"
+      "ld1       {v6.16b}, [%0], %5              \n"
+      "ld1       {v7.16b}, [%0]                  \n"
 
-    "trn1      v16.16b, v0.16b, v1.16b         \n"
-    "trn2      v17.16b, v0.16b, v1.16b         \n"
-    "trn1      v18.16b, v2.16b, v3.16b         \n"
-    "trn2      v19.16b, v2.16b, v3.16b         \n"
-    "trn1      v20.16b, v4.16b, v5.16b         \n"
-    "trn2      v21.16b, v4.16b, v5.16b         \n"
-    "trn1      v22.16b, v6.16b, v7.16b         \n"
-    "trn2      v23.16b, v6.16b, v7.16b         \n"
+      "trn1      v16.16b, v0.16b, v1.16b         \n"
+      "trn2      v17.16b, v0.16b, v1.16b         \n"
+      "trn1      v18.16b, v2.16b, v3.16b         \n"
+      "trn2      v19.16b, v2.16b, v3.16b         \n"
+      "trn1      v20.16b, v4.16b, v5.16b         \n"
+      "trn2      v21.16b, v4.16b, v5.16b         \n"
+      "trn1      v22.16b, v6.16b, v7.16b         \n"
+      "trn2      v23.16b, v6.16b, v7.16b         \n"
 
-    "trn1      v0.8h, v16.8h, v18.8h           \n"
-    "trn2      v1.8h, v16.8h, v18.8h           \n"
-    "trn1      v2.8h, v20.8h, v22.8h           \n"
-    "trn2      v3.8h, v20.8h, v22.8h           \n"
-    "trn1      v4.8h, v17.8h, v19.8h           \n"
-    "trn2      v5.8h, v17.8h, v19.8h           \n"
-    "trn1      v6.8h, v21.8h, v23.8h           \n"
-    "trn2      v7.8h, v21.8h, v23.8h           \n"
+      "trn1      v0.8h, v16.8h, v18.8h           \n"
+      "trn2      v1.8h, v16.8h, v18.8h           \n"
+      "trn1      v2.8h, v20.8h, v22.8h           \n"
+      "trn2      v3.8h, v20.8h, v22.8h           \n"
+      "trn1      v4.8h, v17.8h, v19.8h           \n"
+      "trn2      v5.8h, v17.8h, v19.8h           \n"
+      "trn1      v6.8h, v21.8h, v23.8h           \n"
+      "trn2      v7.8h, v21.8h, v23.8h           \n"
 
-    "trn1      v16.4s, v0.4s, v2.4s            \n"
-    "trn2      v17.4s, v0.4s, v2.4s            \n"
-    "trn1      v18.4s, v1.4s, v3.4s            \n"
-    "trn2      v19.4s, v1.4s, v3.4s            \n"
-    "trn1      v20.4s, v4.4s, v6.4s            \n"
-    "trn2      v21.4s, v4.4s, v6.4s            \n"
-    "trn1      v22.4s, v5.4s, v7.4s            \n"
-    "trn2      v23.4s, v5.4s, v7.4s            \n"
+      "trn1      v16.4s, v0.4s, v2.4s            \n"
+      "trn2      v17.4s, v0.4s, v2.4s            \n"
+      "trn1      v18.4s, v1.4s, v3.4s            \n"
+      "trn2      v19.4s, v1.4s, v3.4s            \n"
+      "trn1      v20.4s, v4.4s, v6.4s            \n"
+      "trn2      v21.4s, v4.4s, v6.4s            \n"
+      "trn1      v22.4s, v5.4s, v7.4s            \n"
+      "trn2      v23.4s, v5.4s, v7.4s            \n"
 
-    "mov       %0, %2                          \n"
+      "mov       %0, %2                          \n"
 
-    MEMACCESS(0)
-    "st1       {v16.d}[0], [%0], %6            \n"
-    MEMACCESS(0)
-    "st1       {v18.d}[0], [%0], %6            \n"
-    MEMACCESS(0)
-    "st1       {v17.d}[0], [%0], %6            \n"
-    MEMACCESS(0)
-    "st1       {v19.d}[0], [%0], %6            \n"
-    MEMACCESS(0)
-    "st1       {v16.d}[1], [%0], %6            \n"
-    MEMACCESS(0)
-    "st1       {v18.d}[1], [%0], %6            \n"
-    MEMACCESS(0)
-    "st1       {v17.d}[1], [%0], %6            \n"
-    MEMACCESS(0)
-    "st1       {v19.d}[1], [%0]                \n"
+      "st1       {v16.d}[0], [%0], %6            \n"
+      "st1       {v18.d}[0], [%0], %6            \n"
+      "st1       {v17.d}[0], [%0], %6            \n"
+      "st1       {v19.d}[0], [%0], %6            \n"
+      "st1       {v16.d}[1], [%0], %6            \n"
+      "st1       {v18.d}[1], [%0], %6            \n"
+      "st1       {v17.d}[1], [%0], %6            \n"
+      "st1       {v19.d}[1], [%0]                \n"
 
-    "mov       %0, %3                          \n"
+      "mov       %0, %3                          \n"
 
-    MEMACCESS(0)
-    "st1       {v20.d}[0], [%0], %7            \n"
-    MEMACCESS(0)
-    "st1       {v22.d}[0], [%0], %7            \n"
-    MEMACCESS(0)
-    "st1       {v21.d}[0], [%0], %7            \n"
-    MEMACCESS(0)
-    "st1       {v23.d}[0], [%0], %7            \n"
-    MEMACCESS(0)
-    "st1       {v20.d}[1], [%0], %7            \n"
-    MEMACCESS(0)
-    "st1       {v22.d}[1], [%0], %7            \n"
-    MEMACCESS(0)
-    "st1       {v21.d}[1], [%0], %7            \n"
-    MEMACCESS(0)
-    "st1       {v23.d}[1], [%0]                \n"
+      "st1       {v20.d}[0], [%0], %7            \n"
+      "st1       {v22.d}[0], [%0], %7            \n"
+      "st1       {v21.d}[0], [%0], %7            \n"
+      "st1       {v23.d}[0], [%0], %7            \n"
+      "st1       {v20.d}[1], [%0], %7            \n"
+      "st1       {v22.d}[1], [%0], %7            \n"
+      "st1       {v21.d}[1], [%0], %7            \n"
+      "st1       {v23.d}[1], [%0]                \n"
 
-    "add       %1, %1, #16                     \n"  // src   += 8*2
-    "add       %2, %2, %6, lsl #3              \n"  // dst_a += 8 * dst_stride_a
-    "add       %3, %3, %7, lsl #3              \n"  // dst_b += 8 * dst_stride_b
-    "subs      %4, %4,  #8                     \n"  // w     -= 8
-    "b.ge      1b                              \n"
+      "add       %1, %1, #16                     \n"  // src   += 8*2
+      "add       %2, %2, %6, lsl #3              \n"  // dst_a += 8 *
+                                                      // dst_stride_a
+      "add       %3, %3, %7, lsl #3              \n"  // dst_b += 8 *
+                                                      // dst_stride_b
+      "subs      %w4, %w4,  #8                   \n"  // w     -= 8
+      "b.ge      1b                              \n"
 
-    // add 8 back to counter. if the result is 0 there are
-    // no residuals.
-    "adds      %4, %4, #8                      \n"
-    "b.eq      4f                              \n"
+      // add 8 back to counter. if the result is 0 there are
+      // no residuals.
+      "adds      %w4, %w4, #8                    \n"
+      "b.eq      4f                              \n"
 
-    // some residual, so between 1 and 7 lines left to transpose
-    "cmp       %4, #2                          \n"
-    "b.lt      3f                              \n"
+      // some residual, so between 1 and 7 lines left to transpose
+      "cmp       %w4, #2                         \n"
+      "b.lt      3f                              \n"
 
-    "cmp       %4, #4                          \n"
-    "b.lt      2f                              \n"
+      "cmp       %w4, #4                         \n"
+      "b.lt      2f                              \n"
 
-    // TODO(frkoenig): Clean this up
-    // 4x8 block
-    "mov       %0, %1                          \n"
-    MEMACCESS(0)
-    "ld1       {v0.8b}, [%0], %5               \n"
-    MEMACCESS(0)
-    "ld1       {v1.8b}, [%0], %5               \n"
-    MEMACCESS(0)
-    "ld1       {v2.8b}, [%0], %5               \n"
-    MEMACCESS(0)
-    "ld1       {v3.8b}, [%0], %5               \n"
-    MEMACCESS(0)
-    "ld1       {v4.8b}, [%0], %5               \n"
-    MEMACCESS(0)
-    "ld1       {v5.8b}, [%0], %5               \n"
-    MEMACCESS(0)
-    "ld1       {v6.8b}, [%0], %5               \n"
-    MEMACCESS(0)
-    "ld1       {v7.8b}, [%0]                   \n"
+      // TODO(frkoenig): Clean this up
+      // 4x8 block
+      "mov       %0, %1                          \n"
+      "ld1       {v0.8b}, [%0], %5               \n"
+      "ld1       {v1.8b}, [%0], %5               \n"
+      "ld1       {v2.8b}, [%0], %5               \n"
+      "ld1       {v3.8b}, [%0], %5               \n"
+      "ld1       {v4.8b}, [%0], %5               \n"
+      "ld1       {v5.8b}, [%0], %5               \n"
+      "ld1       {v6.8b}, [%0], %5               \n"
+      "ld1       {v7.8b}, [%0]                   \n"
 
-    MEMACCESS(8)
-    "ld1       {v30.16b}, [%8], #16            \n"
-    "ld1       {v31.16b}, [%8]                 \n"
+      "ld1       {v30.16b}, [%8], #16            \n"
+      "ld1       {v31.16b}, [%8]                 \n"
 
-    "tbl       v16.16b, {v0.16b, v1.16b, v2.16b, v3.16b}, v30.16b  \n"
-    "tbl       v17.16b, {v0.16b, v1.16b, v2.16b, v3.16b}, v31.16b  \n"
-    "tbl       v18.16b, {v4.16b, v5.16b, v6.16b, v7.16b}, v30.16b  \n"
-    "tbl       v19.16b, {v4.16b, v5.16b, v6.16b, v7.16b}, v31.16b  \n"
+      "tbl       v16.16b, {v0.16b, v1.16b, v2.16b, v3.16b}, v30.16b  \n"
+      "tbl       v17.16b, {v0.16b, v1.16b, v2.16b, v3.16b}, v31.16b  \n"
+      "tbl       v18.16b, {v4.16b, v5.16b, v6.16b, v7.16b}, v30.16b  \n"
+      "tbl       v19.16b, {v4.16b, v5.16b, v6.16b, v7.16b}, v31.16b  \n"
 
-    "mov       %0, %2                          \n"
+      "mov       %0, %2                          \n"
 
-    MEMACCESS(0)
-    "st1       {v16.s}[0],  [%0], %6           \n"
-    MEMACCESS(0)
-    "st1       {v16.s}[1],  [%0], %6           \n"
-    MEMACCESS(0)
-    "st1       {v16.s}[2],  [%0], %6           \n"
-    MEMACCESS(0)
-    "st1       {v16.s}[3],  [%0], %6           \n"
+      "st1       {v16.s}[0],  [%0], %6           \n"
+      "st1       {v16.s}[1],  [%0], %6           \n"
+      "st1       {v16.s}[2],  [%0], %6           \n"
+      "st1       {v16.s}[3],  [%0], %6           \n"
 
-    "add       %0, %2, #4                      \n"
-    MEMACCESS(0)
-    "st1       {v18.s}[0], [%0], %6            \n"
-    MEMACCESS(0)
-    "st1       {v18.s}[1], [%0], %6            \n"
-    MEMACCESS(0)
-    "st1       {v18.s}[2], [%0], %6            \n"
-    MEMACCESS(0)
-    "st1       {v18.s}[3], [%0]                \n"
+      "add       %0, %2, #4                      \n"
+      "st1       {v18.s}[0], [%0], %6            \n"
+      "st1       {v18.s}[1], [%0], %6            \n"
+      "st1       {v18.s}[2], [%0], %6            \n"
+      "st1       {v18.s}[3], [%0]                \n"
 
-    "mov       %0, %3                          \n"
+      "mov       %0, %3                          \n"
 
-    MEMACCESS(0)
-    "st1       {v17.s}[0], [%0], %7            \n"
-    MEMACCESS(0)
-    "st1       {v17.s}[1], [%0], %7            \n"
-    MEMACCESS(0)
-    "st1       {v17.s}[2], [%0], %7            \n"
-    MEMACCESS(0)
-    "st1       {v17.s}[3], [%0], %7            \n"
+      "st1       {v17.s}[0], [%0], %7            \n"
+      "st1       {v17.s}[1], [%0], %7            \n"
+      "st1       {v17.s}[2], [%0], %7            \n"
+      "st1       {v17.s}[3], [%0], %7            \n"
 
-    "add       %0, %3, #4                      \n"
-    MEMACCESS(0)
-    "st1       {v19.s}[0],  [%0], %7           \n"
-    MEMACCESS(0)
-    "st1       {v19.s}[1],  [%0], %7           \n"
-    MEMACCESS(0)
-    "st1       {v19.s}[2],  [%0], %7           \n"
-    MEMACCESS(0)
-    "st1       {v19.s}[3],  [%0]               \n"
+      "add       %0, %3, #4                      \n"
+      "st1       {v19.s}[0],  [%0], %7           \n"
+      "st1       {v19.s}[1],  [%0], %7           \n"
+      "st1       {v19.s}[2],  [%0], %7           \n"
+      "st1       {v19.s}[3],  [%0]               \n"
 
-    "add       %1, %1, #8                      \n"  // src   += 4 * 2
-    "add       %2, %2, %6, lsl #2              \n"  // dst_a += 4 * dst_stride_a
-    "add       %3, %3, %7, lsl #2              \n"  // dst_b += 4 * dst_stride_b
-    "subs      %4,  %4,  #4                    \n"  // w     -= 4
-    "b.eq      4f                              \n"
+      "add       %1, %1, #8                      \n"  // src   += 4 * 2
+      "add       %2, %2, %6, lsl #2              \n"  // dst_a += 4 *
+                                                      // dst_stride_a
+      "add       %3, %3, %7, lsl #2              \n"  // dst_b += 4 *
+                                                      // dst_stride_b
+      "subs      %w4,  %w4,  #4                  \n"  // w     -= 4
+      "b.eq      4f                              \n"
 
-    // some residual, check to see if it includes a 2x8 block,
-    // or less
-    "cmp       %4, #2                          \n"
-    "b.lt      3f                              \n"
+      // some residual, check to see if it includes a 2x8 block,
+      // or less
+      "cmp       %w4, #2                         \n"
+      "b.lt      3f                              \n"
 
-    // 2x8 block
-    "2:                                        \n"
-    "mov       %0, %1                          \n"
-    MEMACCESS(0)
-    "ld2       {v0.h, v1.h}[0], [%0], %5       \n"
-    MEMACCESS(0)
-    "ld2       {v2.h, v3.h}[0], [%0], %5       \n"
-    MEMACCESS(0)
-    "ld2       {v0.h, v1.h}[1], [%0], %5       \n"
-    MEMACCESS(0)
-    "ld2       {v2.h, v3.h}[1], [%0], %5       \n"
-    MEMACCESS(0)
-    "ld2       {v0.h, v1.h}[2], [%0], %5       \n"
-    MEMACCESS(0)
-    "ld2       {v2.h, v3.h}[2], [%0], %5       \n"
-    MEMACCESS(0)
-    "ld2       {v0.h, v1.h}[3], [%0], %5       \n"
-    MEMACCESS(0)
-    "ld2       {v2.h, v3.h}[3], [%0]           \n"
+      // 2x8 block
+      "2:                                        \n"
+      "mov       %0, %1                          \n"
+      "ld2       {v0.h, v1.h}[0], [%0], %5       \n"
+      "ld2       {v2.h, v3.h}[0], [%0], %5       \n"
+      "ld2       {v0.h, v1.h}[1], [%0], %5       \n"
+      "ld2       {v2.h, v3.h}[1], [%0], %5       \n"
+      "ld2       {v0.h, v1.h}[2], [%0], %5       \n"
+      "ld2       {v2.h, v3.h}[2], [%0], %5       \n"
+      "ld2       {v0.h, v1.h}[3], [%0], %5       \n"
+      "ld2       {v2.h, v3.h}[3], [%0]           \n"
 
-    "trn1      v4.8b, v0.8b, v2.8b             \n"
-    "trn2      v5.8b, v0.8b, v2.8b             \n"
-    "trn1      v6.8b, v1.8b, v3.8b             \n"
-    "trn2      v7.8b, v1.8b, v3.8b             \n"
+      "trn1      v4.8b, v0.8b, v2.8b             \n"
+      "trn2      v5.8b, v0.8b, v2.8b             \n"
+      "trn1      v6.8b, v1.8b, v3.8b             \n"
+      "trn2      v7.8b, v1.8b, v3.8b             \n"
 
-    "mov       %0, %2                          \n"
+      "mov       %0, %2                          \n"
 
-    MEMACCESS(0)
-    "st1       {v4.d}[0], [%0], %6             \n"
-    MEMACCESS(0)
-    "st1       {v6.d}[0], [%0]                 \n"
+      "st1       {v4.d}[0], [%0], %6             \n"
+      "st1       {v6.d}[0], [%0]                 \n"
 
-    "mov       %0, %3                          \n"
+      "mov       %0, %3                          \n"
 
-    MEMACCESS(0)
-    "st1       {v5.d}[0], [%0], %7             \n"
-    MEMACCESS(0)
-    "st1       {v7.d}[0], [%0]                 \n"
+      "st1       {v5.d}[0], [%0], %7             \n"
+      "st1       {v7.d}[0], [%0]                 \n"
 
-    "add       %1, %1, #4                      \n"  // src   += 2 * 2
-    "add       %2, %2, %6, lsl #1              \n"  // dst_a += 2 * dst_stride_a
-    "add       %3, %3, %7, lsl #1              \n"  // dst_b += 2 * dst_stride_b
-    "subs      %4,  %4,  #2                    \n"  // w     -= 2
-    "b.eq      4f                              \n"
+      "add       %1, %1, #4                      \n"  // src   += 2 * 2
+      "add       %2, %2, %6, lsl #1              \n"  // dst_a += 2 *
+                                                      // dst_stride_a
+      "add       %3, %3, %7, lsl #1              \n"  // dst_b += 2 *
+                                                      // dst_stride_b
+      "subs      %w4,  %w4,  #2                  \n"  // w     -= 2
+      "b.eq      4f                              \n"
 
-    // 1x8 block
-    "3:                                        \n"
-    MEMACCESS(1)
-    "ld2       {v0.b, v1.b}[0], [%1], %5       \n"
-    MEMACCESS(1)
-    "ld2       {v0.b, v1.b}[1], [%1], %5       \n"
-    MEMACCESS(1)
-    "ld2       {v0.b, v1.b}[2], [%1], %5       \n"
-    MEMACCESS(1)
-    "ld2       {v0.b, v1.b}[3], [%1], %5       \n"
-    MEMACCESS(1)
-    "ld2       {v0.b, v1.b}[4], [%1], %5       \n"
-    MEMACCESS(1)
-    "ld2       {v0.b, v1.b}[5], [%1], %5       \n"
-    MEMACCESS(1)
-    "ld2       {v0.b, v1.b}[6], [%1], %5       \n"
-    MEMACCESS(1)
-    "ld2       {v0.b, v1.b}[7], [%1]           \n"
+      // 1x8 block
+      "3:                                        \n"
+      "ld2       {v0.b, v1.b}[0], [%1], %5       \n"
+      "ld2       {v0.b, v1.b}[1], [%1], %5       \n"
+      "ld2       {v0.b, v1.b}[2], [%1], %5       \n"
+      "ld2       {v0.b, v1.b}[3], [%1], %5       \n"
+      "ld2       {v0.b, v1.b}[4], [%1], %5       \n"
+      "ld2       {v0.b, v1.b}[5], [%1], %5       \n"
+      "ld2       {v0.b, v1.b}[6], [%1], %5       \n"
+      "ld2       {v0.b, v1.b}[7], [%1]           \n"
 
-    MEMACCESS(2)
-    "st1       {v0.d}[0], [%2]                 \n"
-    MEMACCESS(3)
-    "st1       {v1.d}[0], [%3]                 \n"
+      "st1       {v0.d}[0], [%2]                 \n"
+      "st1       {v1.d}[0], [%3]                 \n"
 
-    "4:                                        \n"
+      "4:                                        \n"
 
-    : "=&r"(src_temp),                            // %0
-      "+r"(src),                                  // %1
-      "+r"(dst_a),                                // %2
-      "+r"(dst_b),                                // %3
-      "+r"(width64)                               // %4
-    : "r"(static_cast<ptrdiff_t>(src_stride)),    // %5
-      "r"(static_cast<ptrdiff_t>(dst_stride_a)),  // %6
-      "r"(static_cast<ptrdiff_t>(dst_stride_b)),  // %7
-      "r"(&kVTbl4x4TransposeDi)                   // %8
-    : "memory", "cc",
-      "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7",
-      "v16", "v17", "v18", "v19", "v20", "v21", "v22", "v23",
-      "v30", "v31"
-  );
+      : "=&r"(src_temp),                            // %0
+        "+r"(src),                                  // %1
+        "+r"(dst_a),                                // %2
+        "+r"(dst_b),                                // %3
+        "+r"(width)                                 // %4
+      : "r"(static_cast<ptrdiff_t>(src_stride)),    // %5
+        "r"(static_cast<ptrdiff_t>(dst_stride_a)),  // %6
+        "r"(static_cast<ptrdiff_t>(dst_stride_b)),  // %7
+        "r"(&kVTbl4x4TransposeDi)                   // %8
+      : "memory", "cc", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16",
+        "v17", "v18", "v19", "v20", "v21", "v22", "v23", "v30", "v31");
 }
 #endif  // !defined(LIBYUV_DISABLE_NEON) && defined(__aarch64__)
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_win.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_win.cc
index 1300fc0..e887dd5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_win.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/rotate_win.cc
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#include "libyuv/row.h"
 #include "libyuv/rotate_row.h"
+#include "libyuv/row.h"
 
 #ifdef __cplusplus
 namespace libyuv {
@@ -17,17 +17,19 @@
 #endif
 
 // This module is for 32 bit Visual C x86 and clangcl
-#if !defined(LIBYUV_DISABLE_X86) && defined(_M_IX86)
+#if !defined(LIBYUV_DISABLE_X86) && defined(_M_IX86) && defined(_MSC_VER)
 
-__declspec(naked)
-void TransposeWx8_SSSE3(const uint8* src, int src_stride,
-                        uint8* dst, int dst_stride, int width) {
+__declspec(naked) void TransposeWx8_SSSE3(const uint8_t* src,
+                                          int src_stride,
+                                          uint8_t* dst,
+                                          int dst_stride,
+                                          int width) {
   __asm {
     push      edi
     push      esi
     push      ebp
-    mov       eax, [esp + 12 + 4]   // src
-    mov       edi, [esp + 12 + 8]   // src_stride
+    mov       eax, [esp + 12 + 4]  // src
+    mov       edi, [esp + 12 + 8]  // src_stride
     mov       edx, [esp + 12 + 12]  // dst
     mov       esi, [esp + 12 + 16]  // dst_stride
     mov       ecx, [esp + 12 + 20]  // width
@@ -110,18 +112,20 @@
   }
 }
 
-__declspec(naked)
-void TransposeUVWx8_SSE2(const uint8* src, int src_stride,
-                         uint8* dst_a, int dst_stride_a,
-                         uint8* dst_b, int dst_stride_b,
-                         int w) {
+__declspec(naked) void TransposeUVWx8_SSE2(const uint8_t* src,
+                                           int src_stride,
+                                           uint8_t* dst_a,
+                                           int dst_stride_a,
+                                           uint8_t* dst_b,
+                                           int dst_stride_b,
+                                           int w) {
   __asm {
     push      ebx
     push      esi
     push      edi
     push      ebp
-    mov       eax, [esp + 16 + 4]   // src
-    mov       edi, [esp + 16 + 8]   // src_stride
+    mov       eax, [esp + 16 + 4]  // src
+    mov       edi, [esp + 16 + 8]  // src_stride
     mov       edx, [esp + 16 + 12]  // dst_a
     mov       esi, [esp + 16 + 16]  // dst_stride_a
     mov       ebx, [esp + 16 + 20]  // dst_b
@@ -133,9 +137,9 @@
     mov       ecx, [ecx + 16 + 28]  // w
 
     align      4
- convertloop:
     // Read in the data from the source pointer.
     // First round of bit swap.
+  convertloop:
     movdqu    xmm0, [eax]
     movdqu    xmm1, [eax + edi]
     lea       eax, [eax + 2 * edi]
@@ -162,13 +166,13 @@
     lea       eax, [eax + 2 * edi]
     movdqu    [esp], xmm5  // backup xmm5
     neg       edi
-    movdqa    xmm5, xmm6   // use xmm5 as temp register.
+    movdqa    xmm5, xmm6  // use xmm5 as temp register.
     punpcklbw xmm6, xmm7
     punpckhbw xmm5, xmm7
     movdqa    xmm7, xmm5
     lea       eax, [eax + 8 * edi + 16]
     neg       edi
-    // Second round of bit swap.
+        // Second round of bit swap.
     movdqa    xmm5, xmm0
     punpcklwd xmm0, xmm2
     punpckhwd xmm5, xmm2
@@ -183,12 +187,13 @@
     movdqa    xmm6, xmm5
     movdqu    xmm5, [esp]  // restore xmm5
     movdqu    [esp], xmm6  // backup xmm6
-    movdqa    xmm6, xmm5    // use xmm6 as temp register.
+    movdqa    xmm6, xmm5  // use xmm6 as temp register.
     punpcklwd xmm5, xmm7
     punpckhwd xmm6, xmm7
     movdqa    xmm7, xmm6
-    // Third round of bit swap.
-    // Write to the destination pointer.
+
+        // Third round of bit swap.
+        // Write to the destination pointer.
     movdqa    xmm6, xmm0
     punpckldq xmm0, xmm4
     punpckhdq xmm6, xmm4
@@ -200,7 +205,7 @@
     lea       edx, [edx + 2 * esi]
     movhpd    qword ptr [ebx + ebp], xmm4
     lea       ebx, [ebx + 2 * ebp]
-    movdqa    xmm0, xmm2   // use xmm0 as the temp register.
+    movdqa    xmm0, xmm2  // use xmm0 as the temp register.
     punpckldq xmm2, xmm6
     movlpd    qword ptr [edx], xmm2
     movhpd    qword ptr [ebx], xmm2
@@ -209,7 +214,7 @@
     lea       edx, [edx + 2 * esi]
     movhpd    qword ptr [ebx + ebp], xmm0
     lea       ebx, [ebx + 2 * ebp]
-    movdqa    xmm0, xmm1   // use xmm0 as the temp register.
+    movdqa    xmm0, xmm1  // use xmm0 as the temp register.
     punpckldq xmm1, xmm5
     movlpd    qword ptr [edx], xmm1
     movhpd    qword ptr [ebx], xmm1
@@ -218,7 +223,7 @@
     lea       edx, [edx + 2 * esi]
     movhpd    qword ptr [ebx + ebp], xmm0
     lea       ebx, [ebx + 2 * ebp]
-    movdqa    xmm0, xmm3   // use xmm0 as the temp register.
+    movdqa    xmm0, xmm3  // use xmm0 as the temp register.
     punpckldq xmm3, xmm7
     movlpd    qword ptr [edx], xmm3
     movhpd    qword ptr [ebx], xmm3
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_any.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_any.cc
index 494164f..e91560c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_any.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_any.cc
@@ -19,30 +19,38 @@
 extern "C" {
 #endif
 
+// memset for temp is meant to clear the source buffer (not dest) so that
+// SIMD that reads full multiple of 16 bytes will not trigger msan errors.
+// memset is not needed for production, as the garbage values are processed but
+// not used, although there may be edge cases for subsampling.
+// The size of the buffer is based on the largest read, which can be inferred
+// by the source type (e.g. ARGB) and the mask (last parameter), or by examining
+// the source code for how much the source pointers are advanced.
+
 // Subsampled source needs to be increased by 1 if not even.
 #define SS(width, shift) (((width) + (1 << (shift)) - 1) >> (shift))
 
 // Any 4 planes to 1 with yuvconstants
-#define ANY41C(NAMEANY, ANY_SIMD, UVSHIFT, DUVSHIFT, BPP, MASK)                \
-    void NAMEANY(const uint8* y_buf, const uint8* u_buf, const uint8* v_buf,   \
-                 const uint8* a_buf, uint8* dst_ptr,                           \
-                 const struct YuvConstants* yuvconstants,  int width) {        \
-      SIMD_ALIGNED(uint8 temp[64 * 5]);                                        \
-      memset(temp, 0, 64 * 4);  /* for msan */                                 \
-      int r = width & MASK;                                                    \
-      int n = width & ~MASK;                                                   \
-      if (n > 0) {                                                             \
-        ANY_SIMD(y_buf, u_buf, v_buf, a_buf, dst_ptr, yuvconstants, n);        \
-      }                                                                        \
-      memcpy(temp, y_buf + n, r);                                              \
-      memcpy(temp + 64, u_buf + (n >> UVSHIFT), SS(r, UVSHIFT));               \
-      memcpy(temp + 128, v_buf + (n >> UVSHIFT), SS(r, UVSHIFT));              \
-      memcpy(temp + 192, a_buf + n, r);                                        \
-      ANY_SIMD(temp, temp + 64, temp + 128, temp + 192, temp + 256,            \
-               yuvconstants, MASK + 1);                                        \
-      memcpy(dst_ptr + (n >> DUVSHIFT) * BPP, temp + 256,                      \
-             SS(r, DUVSHIFT) * BPP);                                           \
-    }
+#define ANY41C(NAMEANY, ANY_SIMD, UVSHIFT, DUVSHIFT, BPP, MASK)              \
+  void NAMEANY(const uint8_t* y_buf, const uint8_t* u_buf,                   \
+               const uint8_t* v_buf, const uint8_t* a_buf, uint8_t* dst_ptr, \
+               const struct YuvConstants* yuvconstants, int width) {         \
+    SIMD_ALIGNED(uint8_t temp[64 * 5]);                                      \
+    memset(temp, 0, 64 * 4); /* for msan */                                  \
+    int r = width & MASK;                                                    \
+    int n = width & ~MASK;                                                   \
+    if (n > 0) {                                                             \
+      ANY_SIMD(y_buf, u_buf, v_buf, a_buf, dst_ptr, yuvconstants, n);        \
+    }                                                                        \
+    memcpy(temp, y_buf + n, r);                                              \
+    memcpy(temp + 64, u_buf + (n >> UVSHIFT), SS(r, UVSHIFT));               \
+    memcpy(temp + 128, v_buf + (n >> UVSHIFT), SS(r, UVSHIFT));              \
+    memcpy(temp + 192, a_buf + n, r);                                        \
+    ANY_SIMD(temp, temp + 64, temp + 128, temp + 192, temp + 256,            \
+             yuvconstants, MASK + 1);                                        \
+    memcpy(dst_ptr + (n >> DUVSHIFT) * BPP, temp + 256,                      \
+           SS(r, DUVSHIFT) * BPP);                                           \
+  }
 
 #ifdef HAS_I422ALPHATOARGBROW_SSSE3
 ANY41C(I422AlphaToARGBRow_Any_SSSE3, I422AlphaToARGBRow_SSSE3, 1, 0, 4, 7)
@@ -53,36 +61,57 @@
 #ifdef HAS_I422ALPHATOARGBROW_NEON
 ANY41C(I422AlphaToARGBRow_Any_NEON, I422AlphaToARGBRow_NEON, 1, 0, 4, 7)
 #endif
+#ifdef HAS_I422ALPHATOARGBROW_MSA
+ANY41C(I422AlphaToARGBRow_Any_MSA, I422AlphaToARGBRow_MSA, 1, 0, 4, 7)
+#endif
 #undef ANY41C
 
 // Any 3 planes to 1.
-#define ANY31(NAMEANY, ANY_SIMD, UVSHIFT, DUVSHIFT, BPP, MASK)                 \
-    void NAMEANY(const uint8* y_buf, const uint8* u_buf, const uint8* v_buf,   \
-                 uint8* dst_ptr, int width) {                                  \
-      SIMD_ALIGNED(uint8 temp[64 * 4]);                                        \
-      memset(temp, 0, 64 * 3);  /* for YUY2 and msan */                        \
-      int r = width & MASK;                                                    \
-      int n = width & ~MASK;                                                   \
-      if (n > 0) {                                                             \
-        ANY_SIMD(y_buf, u_buf, v_buf, dst_ptr, n);                             \
-      }                                                                        \
-      memcpy(temp, y_buf + n, r);                                              \
-      memcpy(temp + 64, u_buf + (n >> UVSHIFT), SS(r, UVSHIFT));               \
-      memcpy(temp + 128, v_buf + (n >> UVSHIFT), SS(r, UVSHIFT));              \
-      ANY_SIMD(temp, temp + 64, temp + 128, temp + 192, MASK + 1);             \
-      memcpy(dst_ptr + (n >> DUVSHIFT) * BPP, temp + 192,                      \
-             SS(r, DUVSHIFT) * BPP);                                           \
-    }
+#define ANY31(NAMEANY, ANY_SIMD, UVSHIFT, DUVSHIFT, BPP, MASK)      \
+  void NAMEANY(const uint8_t* y_buf, const uint8_t* u_buf,          \
+               const uint8_t* v_buf, uint8_t* dst_ptr, int width) { \
+    SIMD_ALIGNED(uint8_t temp[64 * 4]);                             \
+    memset(temp, 0, 64 * 3); /* for YUY2 and msan */                \
+    int r = width & MASK;                                           \
+    int n = width & ~MASK;                                          \
+    if (n > 0) {                                                    \
+      ANY_SIMD(y_buf, u_buf, v_buf, dst_ptr, n);                    \
+    }                                                               \
+    memcpy(temp, y_buf + n, r);                                     \
+    memcpy(temp + 64, u_buf + (n >> UVSHIFT), SS(r, UVSHIFT));      \
+    memcpy(temp + 128, v_buf + (n >> UVSHIFT), SS(r, UVSHIFT));     \
+    ANY_SIMD(temp, temp + 64, temp + 128, temp + 192, MASK + 1);    \
+    memcpy(dst_ptr + (n >> DUVSHIFT) * BPP, temp + 192,             \
+           SS(r, DUVSHIFT) * BPP);                                  \
+  }
+
+// Merge functions.
+#ifdef HAS_MERGERGBROW_SSSE3
+ANY31(MergeRGBRow_Any_SSSE3, MergeRGBRow_SSSE3, 0, 0, 3, 15)
+#endif
+#ifdef HAS_MERGERGBROW_NEON
+ANY31(MergeRGBRow_Any_NEON, MergeRGBRow_NEON, 0, 0, 3, 15)
+#endif
 #ifdef HAS_I422TOYUY2ROW_SSE2
 ANY31(I422ToYUY2Row_Any_SSE2, I422ToYUY2Row_SSE2, 1, 1, 4, 15)
 ANY31(I422ToUYVYRow_Any_SSE2, I422ToUYVYRow_SSE2, 1, 1, 4, 15)
 #endif
+#ifdef HAS_I422TOYUY2ROW_AVX2
+ANY31(I422ToYUY2Row_Any_AVX2, I422ToYUY2Row_AVX2, 1, 1, 4, 31)
+ANY31(I422ToUYVYRow_Any_AVX2, I422ToUYVYRow_AVX2, 1, 1, 4, 31)
+#endif
 #ifdef HAS_I422TOYUY2ROW_NEON
 ANY31(I422ToYUY2Row_Any_NEON, I422ToYUY2Row_NEON, 1, 1, 4, 15)
 #endif
+#ifdef HAS_I422TOYUY2ROW_MSA
+ANY31(I422ToYUY2Row_Any_MSA, I422ToYUY2Row_MSA, 1, 1, 4, 31)
+#endif
 #ifdef HAS_I422TOUYVYROW_NEON
 ANY31(I422ToUYVYRow_Any_NEON, I422ToUYVYRow_NEON, 1, 1, 4, 15)
 #endif
+#ifdef HAS_I422TOUYVYROW_MSA
+ANY31(I422ToUYVYRow_Any_MSA, I422ToUYVYRow_MSA, 1, 1, 4, 31)
+#endif
 #ifdef HAS_BLENDPLANEROW_AVX2
 ANY31(BlendPlaneRow_Any_AVX2, BlendPlaneRow_AVX2, 0, 0, 1, 31)
 #endif
@@ -94,35 +123,38 @@
 // Note that odd width replication includes 444 due to implementation
 // on arm that subsamples 444 to 422 internally.
 // Any 3 planes to 1 with yuvconstants
-#define ANY31C(NAMEANY, ANY_SIMD, UVSHIFT, DUVSHIFT, BPP, MASK)                \
-    void NAMEANY(const uint8* y_buf, const uint8* u_buf, const uint8* v_buf,   \
-                 uint8* dst_ptr, const struct YuvConstants* yuvconstants,      \
-                 int width) {                                                  \
-      SIMD_ALIGNED(uint8 temp[64 * 4]);                                        \
-      memset(temp, 0, 64 * 3);  /* for YUY2 and msan */                        \
-      int r = width & MASK;                                                    \
-      int n = width & ~MASK;                                                   \
-      if (n > 0) {                                                             \
-        ANY_SIMD(y_buf, u_buf, v_buf, dst_ptr, yuvconstants, n);               \
-      }                                                                        \
-      memcpy(temp, y_buf + n, r);                                              \
-      memcpy(temp + 64, u_buf + (n >> UVSHIFT), SS(r, UVSHIFT));               \
-      memcpy(temp + 128, v_buf + (n >> UVSHIFT), SS(r, UVSHIFT));              \
-      if (width & 1) {                                                         \
-        temp[64 + SS(r, UVSHIFT)] = temp[64 + SS(r, UVSHIFT) - 1];             \
-        temp[128 + SS(r, UVSHIFT)] = temp[128 + SS(r, UVSHIFT) - 1];           \
-      }                                                                        \
-      ANY_SIMD(temp, temp + 64, temp + 128, temp + 192,                        \
-               yuvconstants, MASK + 1);                                        \
-      memcpy(dst_ptr + (n >> DUVSHIFT) * BPP, temp + 192,                      \
-             SS(r, DUVSHIFT) * BPP);                                           \
-    }
+#define ANY31C(NAMEANY, ANY_SIMD, UVSHIFT, DUVSHIFT, BPP, MASK)      \
+  void NAMEANY(const uint8_t* y_buf, const uint8_t* u_buf,           \
+               const uint8_t* v_buf, uint8_t* dst_ptr,               \
+               const struct YuvConstants* yuvconstants, int width) { \
+    SIMD_ALIGNED(uint8_t temp[128 * 4]);                             \
+    memset(temp, 0, 128 * 3); /* for YUY2 and msan */                \
+    int r = width & MASK;                                            \
+    int n = width & ~MASK;                                           \
+    if (n > 0) {                                                     \
+      ANY_SIMD(y_buf, u_buf, v_buf, dst_ptr, yuvconstants, n);       \
+    }                                                                \
+    memcpy(temp, y_buf + n, r);                                      \
+    memcpy(temp + 128, u_buf + (n >> UVSHIFT), SS(r, UVSHIFT));      \
+    memcpy(temp + 256, v_buf + (n >> UVSHIFT), SS(r, UVSHIFT));      \
+    if (width & 1) {                                                 \
+      temp[128 + SS(r, UVSHIFT)] = temp[128 + SS(r, UVSHIFT) - 1];   \
+      temp[256 + SS(r, UVSHIFT)] = temp[256 + SS(r, UVSHIFT) - 1];   \
+    }                                                                \
+    ANY_SIMD(temp, temp + 128, temp + 256, temp + 384, yuvconstants, \
+             MASK + 1);                                              \
+    memcpy(dst_ptr + (n >> DUVSHIFT) * BPP, temp + 384,              \
+           SS(r, DUVSHIFT) * BPP);                                   \
+  }
 
 #ifdef HAS_I422TOARGBROW_SSSE3
 ANY31C(I422ToARGBRow_Any_SSSE3, I422ToARGBRow_SSSE3, 1, 0, 4, 7)
 #endif
-#ifdef HAS_I411TOARGBROW_SSSE3
-ANY31C(I411ToARGBRow_Any_SSSE3, I411ToARGBRow_SSSE3, 2, 0, 4, 7)
+#ifdef HAS_I422TOAR30ROW_SSSE3
+ANY31C(I422ToAR30Row_Any_SSSE3, I422ToAR30Row_SSSE3, 1, 0, 4, 7)
+#endif
+#ifdef HAS_I422TOAR30ROW_AVX2
+ANY31C(I422ToAR30Row_Any_AVX2, I422ToAR30Row_AVX2, 1, 0, 4, 15)
 #endif
 #ifdef HAS_I444TOARGBROW_SSSE3
 ANY31C(I444ToARGBRow_Any_SSSE3, I444ToARGBRow_SSSE3, 0, 0, 4, 7)
@@ -130,10 +162,10 @@
 ANY31C(I422ToARGB4444Row_Any_SSSE3, I422ToARGB4444Row_SSSE3, 1, 0, 2, 7)
 ANY31C(I422ToARGB1555Row_Any_SSSE3, I422ToARGB1555Row_SSSE3, 1, 0, 2, 7)
 ANY31C(I422ToRGB565Row_Any_SSSE3, I422ToRGB565Row_SSSE3, 1, 0, 2, 7)
-ANY31C(I422ToRGB24Row_Any_SSSE3, I422ToRGB24Row_SSSE3, 1, 0, 3, 7)
+ANY31C(I422ToRGB24Row_Any_SSSE3, I422ToRGB24Row_SSSE3, 1, 0, 3, 15)
 #endif  // HAS_I444TOARGBROW_SSSE3
 #ifdef HAS_I422TORGB24ROW_AVX2
-ANY31C(I422ToRGB24Row_Any_AVX2, I422ToRGB24Row_AVX2, 1, 0, 3, 15)
+ANY31C(I422ToRGB24Row_Any_AVX2, I422ToRGB24Row_AVX2, 1, 0, 3, 31)
 #endif
 #ifdef HAS_I422TOARGBROW_AVX2
 ANY31C(I422ToARGBRow_Any_AVX2, I422ToARGBRow_AVX2, 1, 0, 4, 15)
@@ -144,47 +176,87 @@
 #ifdef HAS_I444TOARGBROW_AVX2
 ANY31C(I444ToARGBRow_Any_AVX2, I444ToARGBRow_AVX2, 0, 0, 4, 15)
 #endif
-#ifdef HAS_I411TOARGBROW_AVX2
-ANY31C(I411ToARGBRow_Any_AVX2, I411ToARGBRow_AVX2, 2, 0, 4, 15)
-#endif
 #ifdef HAS_I422TOARGB4444ROW_AVX2
-ANY31C(I422ToARGB4444Row_Any_AVX2, I422ToARGB4444Row_AVX2, 1, 0, 2, 7)
+ANY31C(I422ToARGB4444Row_Any_AVX2, I422ToARGB4444Row_AVX2, 1, 0, 2, 15)
 #endif
 #ifdef HAS_I422TOARGB1555ROW_AVX2
-ANY31C(I422ToARGB1555Row_Any_AVX2, I422ToARGB1555Row_AVX2, 1, 0, 2, 7)
+ANY31C(I422ToARGB1555Row_Any_AVX2, I422ToARGB1555Row_AVX2, 1, 0, 2, 15)
 #endif
 #ifdef HAS_I422TORGB565ROW_AVX2
-ANY31C(I422ToRGB565Row_Any_AVX2, I422ToRGB565Row_AVX2, 1, 0, 2, 7)
+ANY31C(I422ToRGB565Row_Any_AVX2, I422ToRGB565Row_AVX2, 1, 0, 2, 15)
 #endif
 #ifdef HAS_I422TOARGBROW_NEON
 ANY31C(I444ToARGBRow_Any_NEON, I444ToARGBRow_NEON, 0, 0, 4, 7)
 ANY31C(I422ToARGBRow_Any_NEON, I422ToARGBRow_NEON, 1, 0, 4, 7)
-ANY31C(I411ToARGBRow_Any_NEON, I411ToARGBRow_NEON, 2, 0, 4, 7)
 ANY31C(I422ToRGBARow_Any_NEON, I422ToRGBARow_NEON, 1, 0, 4, 7)
 ANY31C(I422ToRGB24Row_Any_NEON, I422ToRGB24Row_NEON, 1, 0, 3, 7)
 ANY31C(I422ToARGB4444Row_Any_NEON, I422ToARGB4444Row_NEON, 1, 0, 2, 7)
 ANY31C(I422ToARGB1555Row_Any_NEON, I422ToARGB1555Row_NEON, 1, 0, 2, 7)
 ANY31C(I422ToRGB565Row_Any_NEON, I422ToRGB565Row_NEON, 1, 0, 2, 7)
 #endif
+#ifdef HAS_I422TOARGBROW_MSA
+ANY31C(I444ToARGBRow_Any_MSA, I444ToARGBRow_MSA, 0, 0, 4, 7)
+ANY31C(I422ToARGBRow_Any_MSA, I422ToARGBRow_MSA, 1, 0, 4, 7)
+ANY31C(I422ToRGBARow_Any_MSA, I422ToRGBARow_MSA, 1, 0, 4, 7)
+ANY31C(I422ToRGB24Row_Any_MSA, I422ToRGB24Row_MSA, 1, 0, 3, 15)
+ANY31C(I422ToARGB4444Row_Any_MSA, I422ToARGB4444Row_MSA, 1, 0, 2, 7)
+ANY31C(I422ToARGB1555Row_Any_MSA, I422ToARGB1555Row_MSA, 1, 0, 2, 7)
+ANY31C(I422ToRGB565Row_Any_MSA, I422ToRGB565Row_MSA, 1, 0, 2, 7)
+#endif
 #undef ANY31C
 
+// Any 3 planes of 16 bit to 1 with yuvconstants
+// TODO(fbarchard): consider sharing this code with ANY31C
+#define ANY31CT(NAMEANY, ANY_SIMD, UVSHIFT, DUVSHIFT, T, SBPP, BPP, MASK) \
+  void NAMEANY(const T* y_buf, const T* u_buf, const T* v_buf,            \
+               uint8_t* dst_ptr, const struct YuvConstants* yuvconstants, \
+               int width) {                                               \
+    SIMD_ALIGNED(T temp[16 * 3]);                                         \
+    SIMD_ALIGNED(uint8_t out[64]);                                        \
+    memset(temp, 0, 16 * 3 * SBPP); /* for YUY2 and msan */               \
+    int r = width & MASK;                                                 \
+    int n = width & ~MASK;                                                \
+    if (n > 0) {                                                          \
+      ANY_SIMD(y_buf, u_buf, v_buf, dst_ptr, yuvconstants, n);            \
+    }                                                                     \
+    memcpy(temp, y_buf + n, r * SBPP);                                    \
+    memcpy(temp + 16, u_buf + (n >> UVSHIFT), SS(r, UVSHIFT) * SBPP);     \
+    memcpy(temp + 32, v_buf + (n >> UVSHIFT), SS(r, UVSHIFT) * SBPP);     \
+    ANY_SIMD(temp, temp + 16, temp + 32, out, yuvconstants, MASK + 1);    \
+    memcpy(dst_ptr + (n >> DUVSHIFT) * BPP, out, SS(r, DUVSHIFT) * BPP);  \
+  }
+
+#ifdef HAS_I210TOAR30ROW_SSSE3
+ANY31CT(I210ToAR30Row_Any_SSSE3, I210ToAR30Row_SSSE3, 1, 0, uint16_t, 2, 4, 7)
+#endif
+#ifdef HAS_I210TOARGBROW_SSSE3
+ANY31CT(I210ToARGBRow_Any_SSSE3, I210ToARGBRow_SSSE3, 1, 0, uint16_t, 2, 4, 7)
+#endif
+#ifdef HAS_I210TOARGBROW_AVX2
+ANY31CT(I210ToARGBRow_Any_AVX2, I210ToARGBRow_AVX2, 1, 0, uint16_t, 2, 4, 15)
+#endif
+#ifdef HAS_I210TOAR30ROW_AVX2
+ANY31CT(I210ToAR30Row_Any_AVX2, I210ToAR30Row_AVX2, 1, 0, uint16_t, 2, 4, 15)
+#endif
+#undef ANY31CT
+
 // Any 2 planes to 1.
-#define ANY21(NAMEANY, ANY_SIMD, UVSHIFT, SBPP, SBPP2, BPP, MASK)              \
-    void NAMEANY(const uint8* y_buf, const uint8* uv_buf,                      \
-                 uint8* dst_ptr, int width) {                                  \
-      SIMD_ALIGNED(uint8 temp[64 * 3]);                                        \
-      memset(temp, 0, 64 * 2);  /* for msan */                                 \
-      int r = width & MASK;                                                    \
-      int n = width & ~MASK;                                                   \
-      if (n > 0) {                                                             \
-        ANY_SIMD(y_buf, uv_buf, dst_ptr, n);                                   \
-      }                                                                        \
-      memcpy(temp, y_buf + n * SBPP, r * SBPP);                                \
-      memcpy(temp + 64, uv_buf + (n >> UVSHIFT) * SBPP2,                       \
-             SS(r, UVSHIFT) * SBPP2);                                          \
-      ANY_SIMD(temp, temp + 64, temp + 128, MASK + 1);                         \
-      memcpy(dst_ptr + n * BPP, temp + 128, r * BPP);                          \
-    }
+#define ANY21(NAMEANY, ANY_SIMD, UVSHIFT, SBPP, SBPP2, BPP, MASK)             \
+  void NAMEANY(const uint8_t* y_buf, const uint8_t* uv_buf, uint8_t* dst_ptr, \
+               int width) {                                                   \
+    SIMD_ALIGNED(uint8_t temp[64 * 3]);                                       \
+    memset(temp, 0, 64 * 2); /* for msan */                                   \
+    int r = width & MASK;                                                     \
+    int n = width & ~MASK;                                                    \
+    if (n > 0) {                                                              \
+      ANY_SIMD(y_buf, uv_buf, dst_ptr, n);                                    \
+    }                                                                         \
+    memcpy(temp, y_buf + n * SBPP, r * SBPP);                                 \
+    memcpy(temp + 64, uv_buf + (n >> UVSHIFT) * SBPP2,                        \
+           SS(r, UVSHIFT) * SBPP2);                                           \
+    ANY_SIMD(temp, temp + 64, temp + 128, MASK + 1);                          \
+    memcpy(dst_ptr + n * BPP, temp + 128, r * BPP);                           \
+  }
 
 // Merge functions.
 #ifdef HAS_MERGEUVROW_SSE2
@@ -196,6 +268,9 @@
 #ifdef HAS_MERGEUVROW_NEON
 ANY21(MergeUVRow_Any_NEON, MergeUVRow_NEON, 0, 1, 1, 2, 15)
 #endif
+#ifdef HAS_MERGEUVROW_MSA
+ANY21(MergeUVRow_Any_MSA, MergeUVRow_MSA, 0, 1, 1, 2, 15)
+#endif
 
 // Math functions.
 #ifdef HAS_ARGBMULTIPLYROW_SSE2
@@ -225,44 +300,61 @@
 #ifdef HAS_ARGBSUBTRACTROW_NEON
 ANY21(ARGBSubtractRow_Any_NEON, ARGBSubtractRow_NEON, 0, 4, 4, 4, 7)
 #endif
+#ifdef HAS_ARGBMULTIPLYROW_MSA
+ANY21(ARGBMultiplyRow_Any_MSA, ARGBMultiplyRow_MSA, 0, 4, 4, 4, 3)
+#endif
+#ifdef HAS_ARGBADDROW_MSA
+ANY21(ARGBAddRow_Any_MSA, ARGBAddRow_MSA, 0, 4, 4, 4, 7)
+#endif
+#ifdef HAS_ARGBSUBTRACTROW_MSA
+ANY21(ARGBSubtractRow_Any_MSA, ARGBSubtractRow_MSA, 0, 4, 4, 4, 7)
+#endif
 #ifdef HAS_SOBELROW_SSE2
 ANY21(SobelRow_Any_SSE2, SobelRow_SSE2, 0, 1, 1, 4, 15)
 #endif
 #ifdef HAS_SOBELROW_NEON
 ANY21(SobelRow_Any_NEON, SobelRow_NEON, 0, 1, 1, 4, 7)
 #endif
+#ifdef HAS_SOBELROW_MSA
+ANY21(SobelRow_Any_MSA, SobelRow_MSA, 0, 1, 1, 4, 15)
+#endif
 #ifdef HAS_SOBELTOPLANEROW_SSE2
 ANY21(SobelToPlaneRow_Any_SSE2, SobelToPlaneRow_SSE2, 0, 1, 1, 1, 15)
 #endif
 #ifdef HAS_SOBELTOPLANEROW_NEON
 ANY21(SobelToPlaneRow_Any_NEON, SobelToPlaneRow_NEON, 0, 1, 1, 1, 15)
 #endif
+#ifdef HAS_SOBELTOPLANEROW_MSA
+ANY21(SobelToPlaneRow_Any_MSA, SobelToPlaneRow_MSA, 0, 1, 1, 1, 31)
+#endif
 #ifdef HAS_SOBELXYROW_SSE2
 ANY21(SobelXYRow_Any_SSE2, SobelXYRow_SSE2, 0, 1, 1, 4, 15)
 #endif
 #ifdef HAS_SOBELXYROW_NEON
 ANY21(SobelXYRow_Any_NEON, SobelXYRow_NEON, 0, 1, 1, 4, 7)
 #endif
+#ifdef HAS_SOBELXYROW_MSA
+ANY21(SobelXYRow_Any_MSA, SobelXYRow_MSA, 0, 1, 1, 4, 15)
+#endif
 #undef ANY21
 
 // Any 2 planes to 1 with yuvconstants
-#define ANY21C(NAMEANY, ANY_SIMD, UVSHIFT, SBPP, SBPP2, BPP, MASK)             \
-    void NAMEANY(const uint8* y_buf, const uint8* uv_buf,                      \
-                 uint8* dst_ptr, const struct YuvConstants* yuvconstants,      \
-                 int width) {                                                  \
-      SIMD_ALIGNED(uint8 temp[64 * 3]);                                        \
-      memset(temp, 0, 64 * 2);  /* for msan */                                 \
-      int r = width & MASK;                                                    \
-      int n = width & ~MASK;                                                   \
-      if (n > 0) {                                                             \
-        ANY_SIMD(y_buf, uv_buf, dst_ptr, yuvconstants, n);                     \
-      }                                                                        \
-      memcpy(temp, y_buf + n * SBPP, r * SBPP);                                \
-      memcpy(temp + 64, uv_buf + (n >> UVSHIFT) * SBPP2,                       \
-             SS(r, UVSHIFT) * SBPP2);                                          \
-      ANY_SIMD(temp, temp + 64, temp + 128, yuvconstants, MASK + 1);           \
-      memcpy(dst_ptr + n * BPP, temp + 128, r * BPP);                          \
-    }
+#define ANY21C(NAMEANY, ANY_SIMD, UVSHIFT, SBPP, SBPP2, BPP, MASK)            \
+  void NAMEANY(const uint8_t* y_buf, const uint8_t* uv_buf, uint8_t* dst_ptr, \
+               const struct YuvConstants* yuvconstants, int width) {          \
+    SIMD_ALIGNED(uint8_t temp[128 * 3]);                                      \
+    memset(temp, 0, 128 * 2); /* for msan */                                  \
+    int r = width & MASK;                                                     \
+    int n = width & ~MASK;                                                    \
+    if (n > 0) {                                                              \
+      ANY_SIMD(y_buf, uv_buf, dst_ptr, yuvconstants, n);                      \
+    }                                                                         \
+    memcpy(temp, y_buf + n * SBPP, r * SBPP);                                 \
+    memcpy(temp + 128, uv_buf + (n >> UVSHIFT) * SBPP2,                       \
+           SS(r, UVSHIFT) * SBPP2);                                           \
+    ANY_SIMD(temp, temp + 128, temp + 256, yuvconstants, MASK + 1);           \
+    memcpy(dst_ptr + n * BPP, temp + 256, r * BPP);                           \
+  }
 
 // Biplanar to RGB.
 #ifdef HAS_NV12TOARGBROW_SSSE3
@@ -274,6 +366,9 @@
 #ifdef HAS_NV12TOARGBROW_NEON
 ANY21C(NV12ToARGBRow_Any_NEON, NV12ToARGBRow_NEON, 1, 1, 2, 4, 7)
 #endif
+#ifdef HAS_NV12TOARGBROW_MSA
+ANY21C(NV12ToARGBRow_Any_MSA, NV12ToARGBRow_MSA, 1, 1, 2, 4, 7)
+#endif
 #ifdef HAS_NV21TOARGBROW_SSSE3
 ANY21C(NV21ToARGBRow_Any_SSSE3, NV21ToARGBRow_SSSE3, 1, 1, 2, 4, 7)
 #endif
@@ -283,6 +378,27 @@
 #ifdef HAS_NV21TOARGBROW_NEON
 ANY21C(NV21ToARGBRow_Any_NEON, NV21ToARGBRow_NEON, 1, 1, 2, 4, 7)
 #endif
+#ifdef HAS_NV21TOARGBROW_MSA
+ANY21C(NV21ToARGBRow_Any_MSA, NV21ToARGBRow_MSA, 1, 1, 2, 4, 7)
+#endif
+#ifdef HAS_NV12TORGB24ROW_NEON
+ANY21C(NV12ToRGB24Row_Any_NEON, NV12ToRGB24Row_NEON, 1, 1, 2, 3, 7)
+#endif
+#ifdef HAS_NV21TORGB24ROW_NEON
+ANY21C(NV21ToRGB24Row_Any_NEON, NV21ToRGB24Row_NEON, 1, 1, 2, 3, 7)
+#endif
+#ifdef HAS_NV12TORGB24ROW_SSSE3
+ANY21C(NV12ToRGB24Row_Any_SSSE3, NV12ToRGB24Row_SSSE3, 1, 1, 2, 3, 15)
+#endif
+#ifdef HAS_NV21TORGB24ROW_SSSE3
+ANY21C(NV21ToRGB24Row_Any_SSSE3, NV21ToRGB24Row_SSSE3, 1, 1, 2, 3, 15)
+#endif
+#ifdef HAS_NV12TORGB24ROW_AVX2
+ANY21C(NV12ToRGB24Row_Any_AVX2, NV12ToRGB24Row_AVX2, 1, 1, 2, 3, 31)
+#endif
+#ifdef HAS_NV21TORGB24ROW_AVX2
+ANY21C(NV21ToRGB24Row_Any_AVX2, NV21ToRGB24Row_AVX2, 1, 1, 2, 3, 31)
+#endif
 #ifdef HAS_NV12TORGB565ROW_SSSE3
 ANY21C(NV12ToRGB565Row_Any_SSSE3, NV12ToRGB565Row_SSSE3, 1, 1, 2, 2, 7)
 #endif
@@ -292,22 +408,25 @@
 #ifdef HAS_NV12TORGB565ROW_NEON
 ANY21C(NV12ToRGB565Row_Any_NEON, NV12ToRGB565Row_NEON, 1, 1, 2, 2, 7)
 #endif
+#ifdef HAS_NV12TORGB565ROW_MSA
+ANY21C(NV12ToRGB565Row_Any_MSA, NV12ToRGB565Row_MSA, 1, 1, 2, 2, 7)
+#endif
 #undef ANY21C
 
 // Any 1 to 1.
-#define ANY11(NAMEANY, ANY_SIMD, UVSHIFT, SBPP, BPP, MASK)                     \
-    void NAMEANY(const uint8* src_ptr, uint8* dst_ptr, int width) {            \
-      SIMD_ALIGNED(uint8 temp[128 * 2]);                                       \
-      memset(temp, 0, 128);  /* for YUY2 and msan */                           \
-      int r = width & MASK;                                                    \
-      int n = width & ~MASK;                                                   \
-      if (n > 0) {                                                             \
-        ANY_SIMD(src_ptr, dst_ptr, n);                                         \
-      }                                                                        \
-      memcpy(temp, src_ptr + (n >> UVSHIFT) * SBPP, SS(r, UVSHIFT) * SBPP);    \
-      ANY_SIMD(temp, temp + 128, MASK + 1);                                    \
-      memcpy(dst_ptr + n * BPP, temp + 128, r * BPP);                          \
-    }
+#define ANY11(NAMEANY, ANY_SIMD, UVSHIFT, SBPP, BPP, MASK)                \
+  void NAMEANY(const uint8_t* src_ptr, uint8_t* dst_ptr, int width) {     \
+    SIMD_ALIGNED(uint8_t temp[128 * 2]);                                  \
+    memset(temp, 0, 128); /* for YUY2 and msan */                         \
+    int r = width & MASK;                                                 \
+    int n = width & ~MASK;                                                \
+    if (n > 0) {                                                          \
+      ANY_SIMD(src_ptr, dst_ptr, n);                                      \
+    }                                                                     \
+    memcpy(temp, src_ptr + (n >> UVSHIFT) * SBPP, SS(r, UVSHIFT) * SBPP); \
+    ANY_SIMD(temp, temp + 128, MASK + 1);                                 \
+    memcpy(dst_ptr + n * BPP, temp + 128, r * BPP);                       \
+  }
 
 #ifdef HAS_COPYROW_AVX
 ANY11(CopyRow_Any_AVX, CopyRow_AVX, 0, 1, 1, 63)
@@ -325,6 +444,15 @@
 ANY11(ARGBToARGB1555Row_Any_SSE2, ARGBToARGB1555Row_SSE2, 0, 4, 2, 3)
 ANY11(ARGBToARGB4444Row_Any_SSE2, ARGBToARGB4444Row_SSE2, 0, 4, 2, 3)
 #endif
+#if defined(HAS_ARGBTORGB24ROW_AVX2)
+ANY11(ARGBToRGB24Row_Any_AVX2, ARGBToRGB24Row_AVX2, 0, 4, 3, 31)
+#endif
+#if defined(HAS_ARGBTORGB24ROW_AVX512VBMI)
+ANY11(ARGBToRGB24Row_Any_AVX512VBMI, ARGBToRGB24Row_AVX512VBMI, 0, 4, 3, 31)
+#endif
+#if defined(HAS_ARGBTORAWROW_AVX2)
+ANY11(ARGBToRAWRow_Any_AVX2, ARGBToRAWRow_AVX2, 0, 4, 3, 31)
+#endif
 #if defined(HAS_ARGBTORGB565ROW_AVX2)
 ANY11(ARGBToRGB565Row_Any_AVX2, ARGBToRGB565Row_AVX2, 0, 4, 2, 7)
 #endif
@@ -332,6 +460,18 @@
 ANY11(ARGBToARGB1555Row_Any_AVX2, ARGBToARGB1555Row_AVX2, 0, 4, 2, 7)
 ANY11(ARGBToARGB4444Row_Any_AVX2, ARGBToARGB4444Row_AVX2, 0, 4, 2, 7)
 #endif
+#if defined(HAS_ABGRTOAR30ROW_SSSE3)
+ANY11(ABGRToAR30Row_Any_SSSE3, ABGRToAR30Row_SSSE3, 0, 4, 4, 3)
+#endif
+#if defined(HAS_ARGBTOAR30ROW_SSSE3)
+ANY11(ARGBToAR30Row_Any_SSSE3, ARGBToAR30Row_SSSE3, 0, 4, 4, 3)
+#endif
+#if defined(HAS_ABGRTOAR30ROW_AVX2)
+ANY11(ABGRToAR30Row_Any_AVX2, ABGRToAR30Row_AVX2, 0, 4, 4, 7)
+#endif
+#if defined(HAS_ARGBTOAR30ROW_AVX2)
+ANY11(ARGBToAR30Row_Any_AVX2, ARGBToAR30Row_AVX2, 0, 4, 4, 7)
+#endif
 #if defined(HAS_J400TOARGBROW_SSE2)
 ANY11(J400ToARGBRow_Any_SSE2, J400ToARGBRow_SSE2, 0, 1, 4, 7)
 #endif
@@ -372,9 +512,21 @@
 ANY11(J400ToARGBRow_Any_NEON, J400ToARGBRow_NEON, 0, 1, 4, 7)
 ANY11(I400ToARGBRow_Any_NEON, I400ToARGBRow_NEON, 0, 1, 4, 7)
 #endif
+#if defined(HAS_ARGBTORGB24ROW_MSA)
+ANY11(ARGBToRGB24Row_Any_MSA, ARGBToRGB24Row_MSA, 0, 4, 3, 15)
+ANY11(ARGBToRAWRow_Any_MSA, ARGBToRAWRow_MSA, 0, 4, 3, 15)
+ANY11(ARGBToRGB565Row_Any_MSA, ARGBToRGB565Row_MSA, 0, 4, 2, 7)
+ANY11(ARGBToARGB1555Row_Any_MSA, ARGBToARGB1555Row_MSA, 0, 4, 2, 7)
+ANY11(ARGBToARGB4444Row_Any_MSA, ARGBToARGB4444Row_MSA, 0, 4, 2, 7)
+ANY11(J400ToARGBRow_Any_MSA, J400ToARGBRow_MSA, 0, 1, 4, 15)
+ANY11(I400ToARGBRow_Any_MSA, I400ToARGBRow_MSA, 0, 1, 4, 15)
+#endif
 #if defined(HAS_RAWTORGB24ROW_NEON)
 ANY11(RAWToRGB24Row_Any_NEON, RAWToRGB24Row_NEON, 0, 3, 3, 7)
 #endif
+#if defined(HAS_RAWTORGB24ROW_MSA)
+ANY11(RAWToRGB24Row_Any_MSA, RAWToRGB24Row_MSA, 0, 3, 3, 15)
+#endif
 #ifdef HAS_ARGBTOYROW_AVX2
 ANY11(ARGBToYRow_Any_AVX2, ARGBToYRow_AVX2, 0, 4, 1, 31)
 #endif
@@ -403,30 +555,57 @@
 #ifdef HAS_ARGBTOYROW_NEON
 ANY11(ARGBToYRow_Any_NEON, ARGBToYRow_NEON, 0, 4, 1, 7)
 #endif
+#ifdef HAS_ARGBTOYROW_MSA
+ANY11(ARGBToYRow_Any_MSA, ARGBToYRow_MSA, 0, 4, 1, 15)
+#endif
 #ifdef HAS_ARGBTOYJROW_NEON
 ANY11(ARGBToYJRow_Any_NEON, ARGBToYJRow_NEON, 0, 4, 1, 7)
 #endif
+#ifdef HAS_ARGBTOYJROW_MSA
+ANY11(ARGBToYJRow_Any_MSA, ARGBToYJRow_MSA, 0, 4, 1, 15)
+#endif
 #ifdef HAS_BGRATOYROW_NEON
 ANY11(BGRAToYRow_Any_NEON, BGRAToYRow_NEON, 0, 4, 1, 7)
 #endif
+#ifdef HAS_BGRATOYROW_MSA
+ANY11(BGRAToYRow_Any_MSA, BGRAToYRow_MSA, 0, 4, 1, 15)
+#endif
 #ifdef HAS_ABGRTOYROW_NEON
 ANY11(ABGRToYRow_Any_NEON, ABGRToYRow_NEON, 0, 4, 1, 7)
 #endif
+#ifdef HAS_ABGRTOYROW_MSA
+ANY11(ABGRToYRow_Any_MSA, ABGRToYRow_MSA, 0, 4, 1, 7)
+#endif
 #ifdef HAS_RGBATOYROW_NEON
 ANY11(RGBAToYRow_Any_NEON, RGBAToYRow_NEON, 0, 4, 1, 7)
 #endif
+#ifdef HAS_RGBATOYROW_MSA
+ANY11(RGBAToYRow_Any_MSA, RGBAToYRow_MSA, 0, 4, 1, 15)
+#endif
 #ifdef HAS_RGB24TOYROW_NEON
 ANY11(RGB24ToYRow_Any_NEON, RGB24ToYRow_NEON, 0, 3, 1, 7)
 #endif
+#ifdef HAS_RGB24TOYROW_MSA
+ANY11(RGB24ToYRow_Any_MSA, RGB24ToYRow_MSA, 0, 3, 1, 15)
+#endif
 #ifdef HAS_RAWTOYROW_NEON
 ANY11(RAWToYRow_Any_NEON, RAWToYRow_NEON, 0, 3, 1, 7)
 #endif
+#ifdef HAS_RAWTOYROW_MSA
+ANY11(RAWToYRow_Any_MSA, RAWToYRow_MSA, 0, 3, 1, 15)
+#endif
 #ifdef HAS_RGB565TOYROW_NEON
 ANY11(RGB565ToYRow_Any_NEON, RGB565ToYRow_NEON, 0, 2, 1, 7)
 #endif
+#ifdef HAS_RGB565TOYROW_MSA
+ANY11(RGB565ToYRow_Any_MSA, RGB565ToYRow_MSA, 0, 2, 1, 15)
+#endif
 #ifdef HAS_ARGB1555TOYROW_NEON
 ANY11(ARGB1555ToYRow_Any_NEON, ARGB1555ToYRow_NEON, 0, 2, 1, 7)
 #endif
+#ifdef HAS_ARGB1555TOYROW_MSA
+ANY11(ARGB1555ToYRow_Any_MSA, ARGB1555ToYRow_MSA, 0, 2, 1, 15)
+#endif
 #ifdef HAS_ARGB4444TOYROW_NEON
 ANY11(ARGB4444ToYRow_Any_NEON, ARGB4444ToYRow_NEON, 0, 2, 1, 7)
 #endif
@@ -434,23 +613,44 @@
 ANY11(YUY2ToYRow_Any_NEON, YUY2ToYRow_NEON, 1, 4, 1, 15)
 #endif
 #ifdef HAS_UYVYTOYROW_NEON
-ANY11(UYVYToYRow_Any_NEON, UYVYToYRow_NEON, 0, 2, 1, 15)
+ANY11(UYVYToYRow_Any_NEON, UYVYToYRow_NEON, 1, 4, 1, 15)
+#endif
+#ifdef HAS_YUY2TOYROW_MSA
+ANY11(YUY2ToYRow_Any_MSA, YUY2ToYRow_MSA, 1, 4, 1, 31)
+#endif
+#ifdef HAS_UYVYTOYROW_MSA
+ANY11(UYVYToYRow_Any_MSA, UYVYToYRow_MSA, 1, 4, 1, 31)
 #endif
 #ifdef HAS_RGB24TOARGBROW_NEON
 ANY11(RGB24ToARGBRow_Any_NEON, RGB24ToARGBRow_NEON, 0, 3, 4, 7)
 #endif
+#ifdef HAS_RGB24TOARGBROW_MSA
+ANY11(RGB24ToARGBRow_Any_MSA, RGB24ToARGBRow_MSA, 0, 3, 4, 15)
+#endif
 #ifdef HAS_RAWTOARGBROW_NEON
 ANY11(RAWToARGBRow_Any_NEON, RAWToARGBRow_NEON, 0, 3, 4, 7)
 #endif
+#ifdef HAS_RAWTOARGBROW_MSA
+ANY11(RAWToARGBRow_Any_MSA, RAWToARGBRow_MSA, 0, 3, 4, 15)
+#endif
 #ifdef HAS_RGB565TOARGBROW_NEON
 ANY11(RGB565ToARGBRow_Any_NEON, RGB565ToARGBRow_NEON, 0, 2, 4, 7)
 #endif
+#ifdef HAS_RGB565TOARGBROW_MSA
+ANY11(RGB565ToARGBRow_Any_MSA, RGB565ToARGBRow_MSA, 0, 2, 4, 15)
+#endif
 #ifdef HAS_ARGB1555TOARGBROW_NEON
 ANY11(ARGB1555ToARGBRow_Any_NEON, ARGB1555ToARGBRow_NEON, 0, 2, 4, 7)
 #endif
+#ifdef HAS_ARGB1555TOARGBROW_MSA
+ANY11(ARGB1555ToARGBRow_Any_MSA, ARGB1555ToARGBRow_MSA, 0, 2, 4, 15)
+#endif
 #ifdef HAS_ARGB4444TOARGBROW_NEON
 ANY11(ARGB4444ToARGBRow_Any_NEON, ARGB4444ToARGBRow_NEON, 0, 2, 4, 7)
 #endif
+#ifdef HAS_ARGB4444TOARGBROW_MSA
+ANY11(ARGB4444ToARGBRow_Any_MSA, ARGB4444ToARGBRow_MSA, 0, 2, 4, 15)
+#endif
 #ifdef HAS_ARGBATTENUATEROW_SSSE3
 ANY11(ARGBAttenuateRow_Any_SSSE3, ARGBAttenuateRow_SSSE3, 0, 4, 4, 3)
 #endif
@@ -466,29 +666,38 @@
 #ifdef HAS_ARGBATTENUATEROW_NEON
 ANY11(ARGBAttenuateRow_Any_NEON, ARGBAttenuateRow_NEON, 0, 4, 4, 7)
 #endif
+#ifdef HAS_ARGBATTENUATEROW_MSA
+ANY11(ARGBAttenuateRow_Any_MSA, ARGBAttenuateRow_MSA, 0, 4, 4, 7)
+#endif
 #ifdef HAS_ARGBEXTRACTALPHAROW_SSE2
 ANY11(ARGBExtractAlphaRow_Any_SSE2, ARGBExtractAlphaRow_SSE2, 0, 4, 1, 7)
 #endif
+#ifdef HAS_ARGBEXTRACTALPHAROW_AVX2
+ANY11(ARGBExtractAlphaRow_Any_AVX2, ARGBExtractAlphaRow_AVX2, 0, 4, 1, 31)
+#endif
 #ifdef HAS_ARGBEXTRACTALPHAROW_NEON
 ANY11(ARGBExtractAlphaRow_Any_NEON, ARGBExtractAlphaRow_NEON, 0, 4, 1, 15)
 #endif
+#ifdef HAS_ARGBEXTRACTALPHAROW_MSA
+ANY11(ARGBExtractAlphaRow_Any_MSA, ARGBExtractAlphaRow_MSA, 0, 4, 1, 15)
+#endif
 #undef ANY11
 
 // Any 1 to 1 blended.  Destination is read, modify, write.
-#define ANY11B(NAMEANY, ANY_SIMD, UVSHIFT, SBPP, BPP, MASK)                    \
-    void NAMEANY(const uint8* src_ptr, uint8* dst_ptr, int width) {            \
-      SIMD_ALIGNED(uint8 temp[128 * 2]);                                       \
-      memset(temp, 0, 128 * 2);  /* for YUY2 and msan */                       \
-      int r = width & MASK;                                                    \
-      int n = width & ~MASK;                                                   \
-      if (n > 0) {                                                             \
-        ANY_SIMD(src_ptr, dst_ptr, n);                                         \
-      }                                                                        \
-      memcpy(temp, src_ptr + (n >> UVSHIFT) * SBPP, SS(r, UVSHIFT) * SBPP);    \
-      memcpy(temp + 128, dst_ptr + n * BPP, r * BPP);                          \
-      ANY_SIMD(temp, temp + 128, MASK + 1);                                    \
-      memcpy(dst_ptr + n * BPP, temp + 128, r * BPP);                          \
-    }
+#define ANY11B(NAMEANY, ANY_SIMD, UVSHIFT, SBPP, BPP, MASK)               \
+  void NAMEANY(const uint8_t* src_ptr, uint8_t* dst_ptr, int width) {     \
+    SIMD_ALIGNED(uint8_t temp[64 * 2]);                                   \
+    memset(temp, 0, 64 * 2); /* for msan */                               \
+    int r = width & MASK;                                                 \
+    int n = width & ~MASK;                                                \
+    if (n > 0) {                                                          \
+      ANY_SIMD(src_ptr, dst_ptr, n);                                      \
+    }                                                                     \
+    memcpy(temp, src_ptr + (n >> UVSHIFT) * SBPP, SS(r, UVSHIFT) * SBPP); \
+    memcpy(temp + 64, dst_ptr + n * BPP, r * BPP);                        \
+    ANY_SIMD(temp, temp + 64, MASK + 1);                                  \
+    memcpy(dst_ptr + n * BPP, temp + 64, r * BPP);                        \
+  }
 
 #ifdef HAS_ARGBCOPYALPHAROW_AVX2
 ANY11B(ARGBCopyAlphaRow_Any_AVX2, ARGBCopyAlphaRow_AVX2, 0, 4, 4, 15)
@@ -506,61 +715,184 @@
 
 // Any 1 to 1 with parameter.
 #define ANY11P(NAMEANY, ANY_SIMD, T, SBPP, BPP, MASK)                          \
-    void NAMEANY(const uint8* src_ptr, uint8* dst_ptr,                         \
-                 T shuffler, int width) {                                      \
-      SIMD_ALIGNED(uint8 temp[64 * 2]);                                        \
-      memset(temp, 0, 64);  /* for msan */                                     \
-      int r = width & MASK;                                                    \
-      int n = width & ~MASK;                                                   \
-      if (n > 0) {                                                             \
-        ANY_SIMD(src_ptr, dst_ptr, shuffler, n);                               \
-      }                                                                        \
-      memcpy(temp, src_ptr + n * SBPP, r * SBPP);                              \
-      ANY_SIMD(temp, temp + 64, shuffler, MASK + 1);                           \
-      memcpy(dst_ptr + n * BPP, temp + 64, r * BPP);                           \
-    }
+  void NAMEANY(const uint8_t* src_ptr, uint8_t* dst_ptr, T param, int width) { \
+    SIMD_ALIGNED(uint8_t temp[64 * 2]);                                        \
+    memset(temp, 0, 64); /* for msan */                                        \
+    int r = width & MASK;                                                      \
+    int n = width & ~MASK;                                                     \
+    if (n > 0) {                                                               \
+      ANY_SIMD(src_ptr, dst_ptr, param, n);                                    \
+    }                                                                          \
+    memcpy(temp, src_ptr + n * SBPP, r * SBPP);                                \
+    ANY_SIMD(temp, temp + 64, param, MASK + 1);                                \
+    memcpy(dst_ptr + n * BPP, temp + 64, r * BPP);                             \
+  }
 
 #if defined(HAS_ARGBTORGB565DITHERROW_SSE2)
-ANY11P(ARGBToRGB565DitherRow_Any_SSE2, ARGBToRGB565DitherRow_SSE2,
-       const uint32, 4, 2, 3)
+ANY11P(ARGBToRGB565DitherRow_Any_SSE2,
+       ARGBToRGB565DitherRow_SSE2,
+       const uint32_t,
+       4,
+       2,
+       3)
 #endif
 #if defined(HAS_ARGBTORGB565DITHERROW_AVX2)
-ANY11P(ARGBToRGB565DitherRow_Any_AVX2, ARGBToRGB565DitherRow_AVX2,
-       const uint32, 4, 2, 7)
+ANY11P(ARGBToRGB565DitherRow_Any_AVX2,
+       ARGBToRGB565DitherRow_AVX2,
+       const uint32_t,
+       4,
+       2,
+       7)
 #endif
 #if defined(HAS_ARGBTORGB565DITHERROW_NEON)
-ANY11P(ARGBToRGB565DitherRow_Any_NEON, ARGBToRGB565DitherRow_NEON,
-       const uint32, 4, 2, 7)
+ANY11P(ARGBToRGB565DitherRow_Any_NEON,
+       ARGBToRGB565DitherRow_NEON,
+       const uint32_t,
+       4,
+       2,
+       7)
 #endif
-#ifdef HAS_ARGBSHUFFLEROW_SSE2
-ANY11P(ARGBShuffleRow_Any_SSE2, ARGBShuffleRow_SSE2, const uint8*, 4, 4, 3)
+#if defined(HAS_ARGBTORGB565DITHERROW_MSA)
+ANY11P(ARGBToRGB565DitherRow_Any_MSA,
+       ARGBToRGB565DitherRow_MSA,
+       const uint32_t,
+       4,
+       2,
+       7)
 #endif
 #ifdef HAS_ARGBSHUFFLEROW_SSSE3
-ANY11P(ARGBShuffleRow_Any_SSSE3, ARGBShuffleRow_SSSE3, const uint8*, 4, 4, 7)
+ANY11P(ARGBShuffleRow_Any_SSSE3, ARGBShuffleRow_SSSE3, const uint8_t*, 4, 4, 7)
 #endif
 #ifdef HAS_ARGBSHUFFLEROW_AVX2
-ANY11P(ARGBShuffleRow_Any_AVX2, ARGBShuffleRow_AVX2, const uint8*, 4, 4, 15)
+ANY11P(ARGBShuffleRow_Any_AVX2, ARGBShuffleRow_AVX2, const uint8_t*, 4, 4, 15)
 #endif
 #ifdef HAS_ARGBSHUFFLEROW_NEON
-ANY11P(ARGBShuffleRow_Any_NEON, ARGBShuffleRow_NEON, const uint8*, 4, 4, 3)
+ANY11P(ARGBShuffleRow_Any_NEON, ARGBShuffleRow_NEON, const uint8_t*, 4, 4, 3)
+#endif
+#ifdef HAS_ARGBSHUFFLEROW_MSA
+ANY11P(ARGBShuffleRow_Any_MSA, ARGBShuffleRow_MSA, const uint8_t*, 4, 4, 7)
 #endif
 #undef ANY11P
 
+// Any 1 to 1 with parameter and shorts.  BPP measures in shorts.
+#define ANY11C(NAMEANY, ANY_SIMD, SBPP, BPP, STYPE, DTYPE, MASK)             \
+  void NAMEANY(const STYPE* src_ptr, DTYPE* dst_ptr, int scale, int width) { \
+    SIMD_ALIGNED(STYPE temp[32]);                                            \
+    SIMD_ALIGNED(DTYPE out[32]);                                             \
+    memset(temp, 0, 32 * SBPP); /* for msan */                               \
+    int r = width & MASK;                                                    \
+    int n = width & ~MASK;                                                   \
+    if (n > 0) {                                                             \
+      ANY_SIMD(src_ptr, dst_ptr, scale, n);                                  \
+    }                                                                        \
+    memcpy(temp, src_ptr + n, r * SBPP);                                     \
+    ANY_SIMD(temp, out, scale, MASK + 1);                                    \
+    memcpy(dst_ptr + n, out, r * BPP);                                       \
+  }
+
+#ifdef HAS_CONVERT16TO8ROW_SSSE3
+ANY11C(Convert16To8Row_Any_SSSE3,
+       Convert16To8Row_SSSE3,
+       2,
+       1,
+       uint16_t,
+       uint8_t,
+       15)
+#endif
+#ifdef HAS_CONVERT16TO8ROW_AVX2
+ANY11C(Convert16To8Row_Any_AVX2,
+       Convert16To8Row_AVX2,
+       2,
+       1,
+       uint16_t,
+       uint8_t,
+       31)
+#endif
+#ifdef HAS_CONVERT8TO16ROW_SSE2
+ANY11C(Convert8To16Row_Any_SSE2,
+       Convert8To16Row_SSE2,
+       1,
+       2,
+       uint8_t,
+       uint16_t,
+       15)
+#endif
+#ifdef HAS_CONVERT8TO16ROW_AVX2
+ANY11C(Convert8To16Row_Any_AVX2,
+       Convert8To16Row_AVX2,
+       1,
+       2,
+       uint8_t,
+       uint16_t,
+       31)
+#endif
+#undef ANY11C
+
+// Any 1 to 1 with parameter and shorts to byte.  BPP measures in shorts.
+#define ANY11P16(NAMEANY, ANY_SIMD, ST, T, SBPP, BPP, MASK)             \
+  void NAMEANY(const ST* src_ptr, T* dst_ptr, float param, int width) { \
+    SIMD_ALIGNED(ST temp[32]);                                          \
+    SIMD_ALIGNED(T out[32]);                                            \
+    memset(temp, 0, SBPP * 32); /* for msan */                          \
+    int r = width & MASK;                                               \
+    int n = width & ~MASK;                                              \
+    if (n > 0) {                                                        \
+      ANY_SIMD(src_ptr, dst_ptr, param, n);                             \
+    }                                                                   \
+    memcpy(temp, src_ptr + n, r * SBPP);                                \
+    ANY_SIMD(temp, out, param, MASK + 1);                               \
+    memcpy(dst_ptr + n, out, r * BPP);                                  \
+  }
+
+#ifdef HAS_HALFFLOATROW_SSE2
+ANY11P16(HalfFloatRow_Any_SSE2, HalfFloatRow_SSE2, uint16_t, uint16_t, 2, 2, 7)
+#endif
+#ifdef HAS_HALFFLOATROW_AVX2
+ANY11P16(HalfFloatRow_Any_AVX2, HalfFloatRow_AVX2, uint16_t, uint16_t, 2, 2, 15)
+#endif
+#ifdef HAS_HALFFLOATROW_F16C
+ANY11P16(HalfFloatRow_Any_F16C, HalfFloatRow_F16C, uint16_t, uint16_t, 2, 2, 15)
+ANY11P16(HalfFloat1Row_Any_F16C,
+         HalfFloat1Row_F16C,
+         uint16_t,
+         uint16_t,
+         2,
+         2,
+         15)
+#endif
+#ifdef HAS_HALFFLOATROW_NEON
+ANY11P16(HalfFloatRow_Any_NEON, HalfFloatRow_NEON, uint16_t, uint16_t, 2, 2, 7)
+ANY11P16(HalfFloat1Row_Any_NEON,
+         HalfFloat1Row_NEON,
+         uint16_t,
+         uint16_t,
+         2,
+         2,
+         7)
+#endif
+#ifdef HAS_HALFFLOATROW_MSA
+ANY11P16(HalfFloatRow_Any_MSA, HalfFloatRow_MSA, uint16_t, uint16_t, 2, 2, 31)
+#endif
+#ifdef HAS_BYTETOFLOATROW_NEON
+ANY11P16(ByteToFloatRow_Any_NEON, ByteToFloatRow_NEON, uint8_t, float, 1, 3, 7)
+#endif
+#undef ANY11P16
+
 // Any 1 to 1 with yuvconstants
-#define ANY11C(NAMEANY, ANY_SIMD, UVSHIFT, SBPP, BPP, MASK)                    \
-    void NAMEANY(const uint8* src_ptr, uint8* dst_ptr,                         \
-                 const struct YuvConstants* yuvconstants, int width) {         \
-      SIMD_ALIGNED(uint8 temp[128 * 2]);                                       \
-      memset(temp, 0, 128);  /* for YUY2 and msan */                           \
-      int r = width & MASK;                                                    \
-      int n = width & ~MASK;                                                   \
-      if (n > 0) {                                                             \
-        ANY_SIMD(src_ptr, dst_ptr, yuvconstants, n);                           \
-      }                                                                        \
-      memcpy(temp, src_ptr + (n >> UVSHIFT) * SBPP, SS(r, UVSHIFT) * SBPP);    \
-      ANY_SIMD(temp, temp + 128, yuvconstants, MASK + 1);                      \
-      memcpy(dst_ptr + n * BPP, temp + 128, r * BPP);                          \
-    }
+#define ANY11C(NAMEANY, ANY_SIMD, UVSHIFT, SBPP, BPP, MASK)               \
+  void NAMEANY(const uint8_t* src_ptr, uint8_t* dst_ptr,                  \
+               const struct YuvConstants* yuvconstants, int width) {      \
+    SIMD_ALIGNED(uint8_t temp[128 * 2]);                                  \
+    memset(temp, 0, 128); /* for YUY2 and msan */                         \
+    int r = width & MASK;                                                 \
+    int n = width & ~MASK;                                                \
+    if (n > 0) {                                                          \
+      ANY_SIMD(src_ptr, dst_ptr, yuvconstants, n);                        \
+    }                                                                     \
+    memcpy(temp, src_ptr + (n >> UVSHIFT) * SBPP, SS(r, UVSHIFT) * SBPP); \
+    ANY_SIMD(temp, temp + 128, yuvconstants, MASK + 1);                   \
+    memcpy(dst_ptr + n * BPP, temp + 128, r * BPP);                       \
+  }
 #if defined(HAS_YUY2TOARGBROW_SSSE3)
 ANY11C(YUY2ToARGBRow_Any_SSSE3, YUY2ToARGBRow_SSSE3, 1, 4, 4, 15)
 ANY11C(UYVYToARGBRow_Any_SSSE3, UYVYToARGBRow_SSSE3, 1, 4, 4, 15)
@@ -573,25 +905,28 @@
 ANY11C(YUY2ToARGBRow_Any_NEON, YUY2ToARGBRow_NEON, 1, 4, 4, 7)
 ANY11C(UYVYToARGBRow_Any_NEON, UYVYToARGBRow_NEON, 1, 4, 4, 7)
 #endif
+#if defined(HAS_YUY2TOARGBROW_MSA)
+ANY11C(YUY2ToARGBRow_Any_MSA, YUY2ToARGBRow_MSA, 1, 4, 4, 7)
+ANY11C(UYVYToARGBRow_Any_MSA, UYVYToARGBRow_MSA, 1, 4, 4, 7)
+#endif
 #undef ANY11C
 
 // Any 1 to 1 interpolate.  Takes 2 rows of source via stride.
-#define ANY11T(NAMEANY, ANY_SIMD, SBPP, BPP, MASK)                             \
-    void NAMEANY(uint8* dst_ptr, const uint8* src_ptr,                         \
-                 ptrdiff_t src_stride_ptr, int width,                          \
-                 int source_y_fraction) {                                      \
-      SIMD_ALIGNED(uint8 temp[64 * 3]);                                        \
-      memset(temp, 0, 64 * 2);  /* for msan */                                 \
-      int r = width & MASK;                                                    \
-      int n = width & ~MASK;                                                   \
-      if (n > 0) {                                                             \
-        ANY_SIMD(dst_ptr, src_ptr, src_stride_ptr, n, source_y_fraction);      \
-      }                                                                        \
-      memcpy(temp, src_ptr + n * SBPP, r * SBPP);                              \
-      memcpy(temp + 64, src_ptr + src_stride_ptr + n * SBPP, r * SBPP);        \
-      ANY_SIMD(temp + 128, temp, 64, MASK + 1, source_y_fraction);             \
-      memcpy(dst_ptr + n * BPP, temp + 128, r * BPP);                          \
-    }
+#define ANY11T(NAMEANY, ANY_SIMD, SBPP, BPP, MASK)                           \
+  void NAMEANY(uint8_t* dst_ptr, const uint8_t* src_ptr,                     \
+               ptrdiff_t src_stride_ptr, int width, int source_y_fraction) { \
+    SIMD_ALIGNED(uint8_t temp[64 * 3]);                                      \
+    memset(temp, 0, 64 * 2); /* for msan */                                  \
+    int r = width & MASK;                                                    \
+    int n = width & ~MASK;                                                   \
+    if (n > 0) {                                                             \
+      ANY_SIMD(dst_ptr, src_ptr, src_stride_ptr, n, source_y_fraction);      \
+    }                                                                        \
+    memcpy(temp, src_ptr + n * SBPP, r * SBPP);                              \
+    memcpy(temp + 64, src_ptr + src_stride_ptr + n * SBPP, r * SBPP);        \
+    ANY_SIMD(temp + 128, temp, 64, MASK + 1, source_y_fraction);             \
+    memcpy(dst_ptr + n * BPP, temp + 128, r * BPP);                          \
+  }
 
 #ifdef HAS_INTERPOLATEROW_AVX2
 ANY11T(InterpolateRow_Any_AVX2, InterpolateRow_AVX2, 1, 1, 31)
@@ -602,25 +937,25 @@
 #ifdef HAS_INTERPOLATEROW_NEON
 ANY11T(InterpolateRow_Any_NEON, InterpolateRow_NEON, 1, 1, 15)
 #endif
-#ifdef HAS_INTERPOLATEROW_DSPR2
-ANY11T(InterpolateRow_Any_DSPR2, InterpolateRow_DSPR2, 1, 1, 3)
+#ifdef HAS_INTERPOLATEROW_MSA
+ANY11T(InterpolateRow_Any_MSA, InterpolateRow_MSA, 1, 1, 31)
 #endif
 #undef ANY11T
 
 // Any 1 to 1 mirror.
-#define ANY11M(NAMEANY, ANY_SIMD, BPP, MASK)                                   \
-    void NAMEANY(const uint8* src_ptr, uint8* dst_ptr, int width) {            \
-      SIMD_ALIGNED(uint8 temp[64 * 2]);                                        \
-      memset(temp, 0, 64);  /* for msan */                                     \
-      int r = width & MASK;                                                    \
-      int n = width & ~MASK;                                                   \
-      if (n > 0) {                                                             \
-        ANY_SIMD(src_ptr + r * BPP, dst_ptr, n);                               \
-      }                                                                        \
-      memcpy(temp, src_ptr, r * BPP);                                          \
-      ANY_SIMD(temp, temp + 64, MASK + 1);                                     \
-      memcpy(dst_ptr + n * BPP, temp + 64 + (MASK + 1 - r) * BPP, r * BPP);    \
-    }
+#define ANY11M(NAMEANY, ANY_SIMD, BPP, MASK)                              \
+  void NAMEANY(const uint8_t* src_ptr, uint8_t* dst_ptr, int width) {     \
+    SIMD_ALIGNED(uint8_t temp[64 * 2]);                                   \
+    memset(temp, 0, 64); /* for msan */                                   \
+    int r = width & MASK;                                                 \
+    int n = width & ~MASK;                                                \
+    if (n > 0) {                                                          \
+      ANY_SIMD(src_ptr + r * BPP, dst_ptr, n);                            \
+    }                                                                     \
+    memcpy(temp, src_ptr, r* BPP);                                        \
+    ANY_SIMD(temp, temp + 64, MASK + 1);                                  \
+    memcpy(dst_ptr + n * BPP, temp + 64 + (MASK + 1 - r) * BPP, r * BPP); \
+  }
 
 #ifdef HAS_MIRRORROW_AVX2
 ANY11M(MirrorRow_Any_AVX2, MirrorRow_AVX2, 1, 31)
@@ -631,6 +966,9 @@
 #ifdef HAS_MIRRORROW_NEON
 ANY11M(MirrorRow_Any_NEON, MirrorRow_NEON, 1, 15)
 #endif
+#ifdef HAS_MIRRORROW_MSA
+ANY11M(MirrorRow_Any_MSA, MirrorRow_MSA, 1, 63)
+#endif
 #ifdef HAS_ARGBMIRRORROW_AVX2
 ANY11M(ARGBMirrorRow_Any_AVX2, ARGBMirrorRow_AVX2, 4, 7)
 #endif
@@ -640,67 +978,54 @@
 #ifdef HAS_ARGBMIRRORROW_NEON
 ANY11M(ARGBMirrorRow_Any_NEON, ARGBMirrorRow_NEON, 4, 3)
 #endif
+#ifdef HAS_ARGBMIRRORROW_MSA
+ANY11M(ARGBMirrorRow_Any_MSA, ARGBMirrorRow_MSA, 4, 15)
+#endif
 #undef ANY11M
 
 // Any 1 plane. (memset)
-#define ANY1(NAMEANY, ANY_SIMD, T, BPP, MASK)                                  \
-    void NAMEANY(uint8* dst_ptr, T v32, int width) {                           \
-      SIMD_ALIGNED(uint8 temp[64]);                                            \
-      int r = width & MASK;                                                    \
-      int n = width & ~MASK;                                                   \
-      if (n > 0) {                                                             \
-        ANY_SIMD(dst_ptr, v32, n);                                             \
-      }                                                                        \
-      ANY_SIMD(temp, v32, MASK + 1);                                           \
-      memcpy(dst_ptr + n * BPP, temp, r * BPP);                                \
-    }
+#define ANY1(NAMEANY, ANY_SIMD, T, BPP, MASK)        \
+  void NAMEANY(uint8_t* dst_ptr, T v32, int width) { \
+    SIMD_ALIGNED(uint8_t temp[64]);                  \
+    int r = width & MASK;                            \
+    int n = width & ~MASK;                           \
+    if (n > 0) {                                     \
+      ANY_SIMD(dst_ptr, v32, n);                     \
+    }                                                \
+    ANY_SIMD(temp, v32, MASK + 1);                   \
+    memcpy(dst_ptr + n * BPP, temp, r * BPP);        \
+  }
 
 #ifdef HAS_SETROW_X86
-ANY1(SetRow_Any_X86, SetRow_X86, uint8, 1, 3)
+ANY1(SetRow_Any_X86, SetRow_X86, uint8_t, 1, 3)
 #endif
 #ifdef HAS_SETROW_NEON
-ANY1(SetRow_Any_NEON, SetRow_NEON, uint8, 1, 15)
+ANY1(SetRow_Any_NEON, SetRow_NEON, uint8_t, 1, 15)
 #endif
 #ifdef HAS_ARGBSETROW_NEON
-ANY1(ARGBSetRow_Any_NEON, ARGBSetRow_NEON, uint32, 4, 3)
+ANY1(ARGBSetRow_Any_NEON, ARGBSetRow_NEON, uint32_t, 4, 3)
+#endif
+#ifdef HAS_ARGBSETROW_MSA
+ANY1(ARGBSetRow_Any_MSA, ARGBSetRow_MSA, uint32_t, 4, 3)
 #endif
 #undef ANY1
 
 // Any 1 to 2.  Outputs UV planes.
-#define ANY12(NAMEANY, ANY_SIMD, UVSHIFT, BPP, DUVSHIFT, MASK)                 \
-    void NAMEANY(const uint8* src_ptr, uint8* dst_u, uint8* dst_v, int width) {\
-      SIMD_ALIGNED(uint8 temp[128 * 3]);                                       \
-      memset(temp, 0, 128);  /* for msan */                                    \
-      int r = width & MASK;                                                    \
-      int n = width & ~MASK;                                                   \
-      if (n > 0) {                                                             \
-        ANY_SIMD(src_ptr, dst_u, dst_v, n);                                    \
-      }                                                                        \
-      memcpy(temp, src_ptr  + (n >> UVSHIFT) * BPP, SS(r, UVSHIFT) * BPP);     \
-      /* repeat last 4 bytes for 422 subsampler */                             \
-      if ((width & 1) && BPP == 4 && DUVSHIFT == 1) {                          \
-        memcpy(temp + SS(r, UVSHIFT) * BPP,                                    \
-               temp + SS(r, UVSHIFT) * BPP - BPP, BPP);                        \
-      }                                                                        \
-      /* repeat last 4 - 12 bytes for 411 subsampler */                        \
-      if (((width & 3) == 1) && BPP == 4 && DUVSHIFT == 2) {                   \
-        memcpy(temp + SS(r, UVSHIFT) * BPP,                                    \
-               temp + SS(r, UVSHIFT) * BPP - BPP, BPP);                        \
-        memcpy(temp + SS(r, UVSHIFT) * BPP + BPP,                              \
-               temp + SS(r, UVSHIFT) * BPP - BPP, BPP * 2);                    \
-      }                                                                        \
-      if (((width & 3) == 2) && BPP == 4 && DUVSHIFT == 2) {                   \
-        memcpy(temp + SS(r, UVSHIFT) * BPP,                                    \
-               temp + SS(r, UVSHIFT) * BPP - BPP * 2, BPP * 2);                \
-      }                                                                        \
-      if (((width & 3) == 3) && BPP == 4 && DUVSHIFT == 2) {                   \
-        memcpy(temp + SS(r, UVSHIFT) * BPP,                                    \
-               temp + SS(r, UVSHIFT) * BPP - BPP, BPP);                        \
-      }                                                                        \
-      ANY_SIMD(temp, temp + 128, temp + 256, MASK + 1);                        \
-      memcpy(dst_u + (n >> DUVSHIFT), temp + 128, SS(r, DUVSHIFT));            \
-      memcpy(dst_v + (n >> DUVSHIFT), temp + 256, SS(r, DUVSHIFT));            \
-    }
+#define ANY12(NAMEANY, ANY_SIMD, UVSHIFT, BPP, DUVSHIFT, MASK)          \
+  void NAMEANY(const uint8_t* src_ptr, uint8_t* dst_u, uint8_t* dst_v,  \
+               int width) {                                             \
+    SIMD_ALIGNED(uint8_t temp[128 * 3]);                                \
+    memset(temp, 0, 128); /* for msan */                                \
+    int r = width & MASK;                                               \
+    int n = width & ~MASK;                                              \
+    if (n > 0) {                                                        \
+      ANY_SIMD(src_ptr, dst_u, dst_v, n);                               \
+    }                                                                   \
+    memcpy(temp, src_ptr + (n >> UVSHIFT) * BPP, SS(r, UVSHIFT) * BPP); \
+    ANY_SIMD(temp, temp + 128, temp + 256, MASK + 1);                   \
+    memcpy(dst_u + (n >> DUVSHIFT), temp + 128, SS(r, DUVSHIFT));       \
+    memcpy(dst_v + (n >> DUVSHIFT), temp + 256, SS(r, DUVSHIFT));       \
+  }
 
 #ifdef HAS_SPLITUVROW_SSE2
 ANY12(SplitUVRow_Any_SSE2, SplitUVRow_SSE2, 0, 2, 0, 15)
@@ -711,8 +1036,8 @@
 #ifdef HAS_SPLITUVROW_NEON
 ANY12(SplitUVRow_Any_NEON, SplitUVRow_NEON, 0, 2, 0, 15)
 #endif
-#ifdef HAS_SPLITUVROW_DSPR2
-ANY12(SplitUVRow_Any_DSPR2, SplitUVRow_DSPR2, 0, 2, 0, 15)
+#ifdef HAS_SPLITUVROW_MSA
+ANY12(SplitUVRow_Any_MSA, SplitUVRow_MSA, 0, 2, 0, 31)
 #endif
 #ifdef HAS_ARGBTOUV444ROW_SSSE3
 ANY12(ARGBToUV444Row_Any_SSSE3, ARGBToUV444Row_SSSE3, 0, 4, 0, 15)
@@ -727,37 +1052,66 @@
 #endif
 #ifdef HAS_YUY2TOUV422ROW_NEON
 ANY12(ARGBToUV444Row_Any_NEON, ARGBToUV444Row_NEON, 0, 4, 0, 7)
-ANY12(ARGBToUV411Row_Any_NEON, ARGBToUV411Row_NEON, 0, 4, 2, 31)
 ANY12(YUY2ToUV422Row_Any_NEON, YUY2ToUV422Row_NEON, 1, 4, 1, 15)
 ANY12(UYVYToUV422Row_Any_NEON, UYVYToUV422Row_NEON, 1, 4, 1, 15)
 #endif
+#ifdef HAS_YUY2TOUV422ROW_MSA
+ANY12(ARGBToUV444Row_Any_MSA, ARGBToUV444Row_MSA, 0, 4, 0, 15)
+ANY12(YUY2ToUV422Row_Any_MSA, YUY2ToUV422Row_MSA, 1, 4, 1, 31)
+ANY12(UYVYToUV422Row_Any_MSA, UYVYToUV422Row_MSA, 1, 4, 1, 31)
+#endif
 #undef ANY12
 
+// Any 1 to 3.  Outputs RGB planes.
+#define ANY13(NAMEANY, ANY_SIMD, BPP, MASK)                                \
+  void NAMEANY(const uint8_t* src_ptr, uint8_t* dst_r, uint8_t* dst_g,     \
+               uint8_t* dst_b, int width) {                                \
+    SIMD_ALIGNED(uint8_t temp[16 * 6]);                                    \
+    memset(temp, 0, 16 * 3); /* for msan */                                \
+    int r = width & MASK;                                                  \
+    int n = width & ~MASK;                                                 \
+    if (n > 0) {                                                           \
+      ANY_SIMD(src_ptr, dst_r, dst_g, dst_b, n);                           \
+    }                                                                      \
+    memcpy(temp, src_ptr + n * BPP, r * BPP);                              \
+    ANY_SIMD(temp, temp + 16 * 3, temp + 16 * 4, temp + 16 * 5, MASK + 1); \
+    memcpy(dst_r + n, temp + 16 * 3, r);                                   \
+    memcpy(dst_g + n, temp + 16 * 4, r);                                   \
+    memcpy(dst_b + n, temp + 16 * 5, r);                                   \
+  }
+
+#ifdef HAS_SPLITRGBROW_SSSE3
+ANY13(SplitRGBRow_Any_SSSE3, SplitRGBRow_SSSE3, 3, 15)
+#endif
+#ifdef HAS_SPLITRGBROW_NEON
+ANY13(SplitRGBRow_Any_NEON, SplitRGBRow_NEON, 3, 15)
+#endif
+
 // Any 1 to 2 with source stride (2 rows of source).  Outputs UV planes.
 // 128 byte row allows for 32 avx ARGB pixels.
-#define ANY12S(NAMEANY, ANY_SIMD, UVSHIFT, BPP, MASK)                          \
-    void NAMEANY(const uint8* src_ptr, int src_stride_ptr,                     \
-                 uint8* dst_u, uint8* dst_v, int width) {                      \
-      SIMD_ALIGNED(uint8 temp[128 * 4]);                                       \
-      memset(temp, 0, 128 * 2);  /* for msan */                                \
-      int r = width & MASK;                                                    \
-      int n = width & ~MASK;                                                   \
-      if (n > 0) {                                                             \
-        ANY_SIMD(src_ptr, src_stride_ptr, dst_u, dst_v, n);                    \
-      }                                                                        \
-      memcpy(temp, src_ptr  + (n >> UVSHIFT) * BPP, SS(r, UVSHIFT) * BPP);     \
-      memcpy(temp + 128, src_ptr  + src_stride_ptr + (n >> UVSHIFT) * BPP,     \
-             SS(r, UVSHIFT) * BPP);                                            \
-      if ((width & 1) && UVSHIFT == 0) {  /* repeat last pixel for subsample */\
-        memcpy(temp + SS(r, UVSHIFT) * BPP,                                    \
-               temp + SS(r, UVSHIFT) * BPP - BPP, BPP);                        \
-        memcpy(temp + 128 + SS(r, UVSHIFT) * BPP,                              \
-               temp + 128 + SS(r, UVSHIFT) * BPP - BPP, BPP);                  \
-      }                                                                        \
-      ANY_SIMD(temp, 128, temp + 256, temp + 384, MASK + 1);                   \
-      memcpy(dst_u + (n >> 1), temp + 256, SS(r, 1));                          \
-      memcpy(dst_v + (n >> 1), temp + 384, SS(r, 1));                          \
-    }
+#define ANY12S(NAMEANY, ANY_SIMD, UVSHIFT, BPP, MASK)                        \
+  void NAMEANY(const uint8_t* src_ptr, int src_stride_ptr, uint8_t* dst_u,   \
+               uint8_t* dst_v, int width) {                                  \
+    SIMD_ALIGNED(uint8_t temp[128 * 4]);                                     \
+    memset(temp, 0, 128 * 2); /* for msan */                                 \
+    int r = width & MASK;                                                    \
+    int n = width & ~MASK;                                                   \
+    if (n > 0) {                                                             \
+      ANY_SIMD(src_ptr, src_stride_ptr, dst_u, dst_v, n);                    \
+    }                                                                        \
+    memcpy(temp, src_ptr + (n >> UVSHIFT) * BPP, SS(r, UVSHIFT) * BPP);      \
+    memcpy(temp + 128, src_ptr + src_stride_ptr + (n >> UVSHIFT) * BPP,      \
+           SS(r, UVSHIFT) * BPP);                                            \
+    if ((width & 1) && UVSHIFT == 0) { /* repeat last pixel for subsample */ \
+      memcpy(temp + SS(r, UVSHIFT) * BPP, temp + SS(r, UVSHIFT) * BPP - BPP, \
+             BPP);                                                           \
+      memcpy(temp + 128 + SS(r, UVSHIFT) * BPP,                              \
+             temp + 128 + SS(r, UVSHIFT) * BPP - BPP, BPP);                  \
+    }                                                                        \
+    ANY_SIMD(temp, 128, temp + 256, temp + 384, MASK + 1);                   \
+    memcpy(dst_u + (n >> 1), temp + 256, SS(r, 1));                          \
+    memcpy(dst_v + (n >> 1), temp + 384, SS(r, 1));                          \
+  }
 
 #ifdef HAS_ARGBTOUVROW_AVX2
 ANY12S(ARGBToUVRow_Any_AVX2, ARGBToUVRow_AVX2, 0, 4, 31)
@@ -783,30 +1137,57 @@
 #ifdef HAS_ARGBTOUVROW_NEON
 ANY12S(ARGBToUVRow_Any_NEON, ARGBToUVRow_NEON, 0, 4, 15)
 #endif
+#ifdef HAS_ARGBTOUVROW_MSA
+ANY12S(ARGBToUVRow_Any_MSA, ARGBToUVRow_MSA, 0, 4, 31)
+#endif
 #ifdef HAS_ARGBTOUVJROW_NEON
 ANY12S(ARGBToUVJRow_Any_NEON, ARGBToUVJRow_NEON, 0, 4, 15)
 #endif
+#ifdef HAS_ARGBTOUVJROW_MSA
+ANY12S(ARGBToUVJRow_Any_MSA, ARGBToUVJRow_MSA, 0, 4, 31)
+#endif
 #ifdef HAS_BGRATOUVROW_NEON
 ANY12S(BGRAToUVRow_Any_NEON, BGRAToUVRow_NEON, 0, 4, 15)
 #endif
+#ifdef HAS_BGRATOUVROW_MSA
+ANY12S(BGRAToUVRow_Any_MSA, BGRAToUVRow_MSA, 0, 4, 31)
+#endif
 #ifdef HAS_ABGRTOUVROW_NEON
 ANY12S(ABGRToUVRow_Any_NEON, ABGRToUVRow_NEON, 0, 4, 15)
 #endif
+#ifdef HAS_ABGRTOUVROW_MSA
+ANY12S(ABGRToUVRow_Any_MSA, ABGRToUVRow_MSA, 0, 4, 31)
+#endif
 #ifdef HAS_RGBATOUVROW_NEON
 ANY12S(RGBAToUVRow_Any_NEON, RGBAToUVRow_NEON, 0, 4, 15)
 #endif
+#ifdef HAS_RGBATOUVROW_MSA
+ANY12S(RGBAToUVRow_Any_MSA, RGBAToUVRow_MSA, 0, 4, 31)
+#endif
 #ifdef HAS_RGB24TOUVROW_NEON
 ANY12S(RGB24ToUVRow_Any_NEON, RGB24ToUVRow_NEON, 0, 3, 15)
 #endif
+#ifdef HAS_RGB24TOUVROW_MSA
+ANY12S(RGB24ToUVRow_Any_MSA, RGB24ToUVRow_MSA, 0, 3, 15)
+#endif
 #ifdef HAS_RAWTOUVROW_NEON
 ANY12S(RAWToUVRow_Any_NEON, RAWToUVRow_NEON, 0, 3, 15)
 #endif
+#ifdef HAS_RAWTOUVROW_MSA
+ANY12S(RAWToUVRow_Any_MSA, RAWToUVRow_MSA, 0, 3, 15)
+#endif
 #ifdef HAS_RGB565TOUVROW_NEON
 ANY12S(RGB565ToUVRow_Any_NEON, RGB565ToUVRow_NEON, 0, 2, 15)
 #endif
+#ifdef HAS_RGB565TOUVROW_MSA
+ANY12S(RGB565ToUVRow_Any_MSA, RGB565ToUVRow_MSA, 0, 2, 15)
+#endif
 #ifdef HAS_ARGB1555TOUVROW_NEON
 ANY12S(ARGB1555ToUVRow_Any_NEON, ARGB1555ToUVRow_NEON, 0, 2, 15)
 #endif
+#ifdef HAS_ARGB1555TOUVROW_MSA
+ANY12S(ARGB1555ToUVRow_Any_MSA, ARGB1555ToUVRow_MSA, 0, 2, 15)
+#endif
 #ifdef HAS_ARGB4444TOUVROW_NEON
 ANY12S(ARGB4444ToUVRow_Any_NEON, ARGB4444ToUVRow_NEON, 0, 2, 15)
 #endif
@@ -816,6 +1197,12 @@
 #ifdef HAS_UYVYTOUVROW_NEON
 ANY12S(UYVYToUVRow_Any_NEON, UYVYToUVRow_NEON, 1, 4, 15)
 #endif
+#ifdef HAS_YUY2TOUVROW_MSA
+ANY12S(YUY2ToUVRow_Any_MSA, YUY2ToUVRow_MSA, 1, 4, 31)
+#endif
+#ifdef HAS_UYVYTOUVROW_MSA
+ANY12S(UYVYToUVRow_Any_MSA, UYVYToUVRow_MSA, 1, 4, 31)
+#endif
 #undef ANY12S
 
 #ifdef __cplusplus
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_common.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_common.cc
index aefa38c..2bbc5ad 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_common.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_common.cc
@@ -10,6 +10,7 @@
 
 #include "libyuv/row.h"
 
+#include <stdio.h>
 #include <string.h>  // For memcpy and memset.
 
 #include "libyuv/basic_types.h"
@@ -23,59 +24,69 @@
 
 #define USE_BRANCHLESS 1
 #if USE_BRANCHLESS
-static __inline int32 clamp0(int32 v) {
+static __inline int32_t clamp0(int32_t v) {
   return ((-(v) >> 31) & (v));
 }
 
-static __inline int32 clamp255(int32 v) {
+static __inline int32_t clamp255(int32_t v) {
   return (((255 - (v)) >> 31) | (v)) & 255;
 }
 
-static __inline uint32 Clamp(int32 val) {
-  int v = clamp0(val);
-  return (uint32)(clamp255(v));
+static __inline int32_t clamp1023(int32_t v) {
+  return (((1023 - (v)) >> 31) | (v)) & 1023;
 }
 
-static __inline uint32 Abs(int32 v) {
+static __inline uint32_t Abs(int32_t v) {
   int m = v >> 31;
   return (v + m) ^ m;
 }
-#else  // USE_BRANCHLESS
-static __inline int32 clamp0(int32 v) {
+#else   // USE_BRANCHLESS
+static __inline int32_t clamp0(int32_t v) {
   return (v < 0) ? 0 : v;
 }
 
-static __inline int32 clamp255(int32 v) {
+static __inline int32_t clamp255(int32_t v) {
   return (v > 255) ? 255 : v;
 }
 
-static __inline uint32 Clamp(int32 val) {
-  int v = clamp0(val);
-  return (uint32)(clamp255(v));
+static __inline int32_t clamp1023(int32_t v) {
+  return (v > 1023) ? 1023 : v;
 }
 
-static __inline uint32 Abs(int32 v) {
+static __inline uint32_t Abs(int32_t v) {
   return (v < 0) ? -v : v;
 }
 #endif  // USE_BRANCHLESS
+static __inline uint32_t Clamp(int32_t val) {
+  int v = clamp0(val);
+  return (uint32_t)(clamp255(v));
+}
 
-#ifdef LIBYUV_LITTLE_ENDIAN
-#define WRITEWORD(p, v) *(uint32*)(p) = v
+static __inline uint32_t Clamp10(int32_t val) {
+  int v = clamp0(val);
+  return (uint32_t)(clamp1023(v));
+}
+
+// Little Endian
+#if defined(__x86_64__) || defined(_M_X64) || defined(__i386__) || \
+    defined(_M_IX86) || defined(__arm__) || defined(_M_ARM) ||     \
+    (defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
+#define WRITEWORD(p, v) *(uint32_t*)(p) = v
 #else
-static inline void WRITEWORD(uint8* p, uint32 v) {
-  p[0] = (uint8)(v & 255);
-  p[1] = (uint8)((v >> 8) & 255);
-  p[2] = (uint8)((v >> 16) & 255);
-  p[3] = (uint8)((v >> 24) & 255);
+static inline void WRITEWORD(uint8_t* p, uint32_t v) {
+  p[0] = (uint8_t)(v & 255);
+  p[1] = (uint8_t)((v >> 8) & 255);
+  p[2] = (uint8_t)((v >> 16) & 255);
+  p[3] = (uint8_t)((v >> 24) & 255);
 }
 #endif
 
-void RGB24ToARGBRow_C(const uint8* src_rgb24, uint8* dst_argb, int width) {
+void RGB24ToARGBRow_C(const uint8_t* src_rgb24, uint8_t* dst_argb, int width) {
   int x;
   for (x = 0; x < width; ++x) {
-    uint8 b = src_rgb24[0];
-    uint8 g = src_rgb24[1];
-    uint8 r = src_rgb24[2];
+    uint8_t b = src_rgb24[0];
+    uint8_t g = src_rgb24[1];
+    uint8_t r = src_rgb24[2];
     dst_argb[0] = b;
     dst_argb[1] = g;
     dst_argb[2] = r;
@@ -85,12 +96,12 @@
   }
 }
 
-void RAWToARGBRow_C(const uint8* src_raw, uint8* dst_argb, int width) {
+void RAWToARGBRow_C(const uint8_t* src_raw, uint8_t* dst_argb, int width) {
   int x;
   for (x = 0; x < width; ++x) {
-    uint8 r = src_raw[0];
-    uint8 g = src_raw[1];
-    uint8 b = src_raw[2];
+    uint8_t r = src_raw[0];
+    uint8_t g = src_raw[1];
+    uint8_t b = src_raw[2];
     dst_argb[0] = b;
     dst_argb[1] = g;
     dst_argb[2] = r;
@@ -100,12 +111,12 @@
   }
 }
 
-void RAWToRGB24Row_C(const uint8* src_raw, uint8* dst_rgb24, int width) {
+void RAWToRGB24Row_C(const uint8_t* src_raw, uint8_t* dst_rgb24, int width) {
   int x;
   for (x = 0; x < width; ++x) {
-    uint8 r = src_raw[0];
-    uint8 g = src_raw[1];
-    uint8 b = src_raw[2];
+    uint8_t r = src_raw[0];
+    uint8_t g = src_raw[1];
+    uint8_t b = src_raw[2];
     dst_rgb24[0] = b;
     dst_rgb24[1] = g;
     dst_rgb24[2] = r;
@@ -114,12 +125,14 @@
   }
 }
 
-void RGB565ToARGBRow_C(const uint8* src_rgb565, uint8* dst_argb, int width) {
+void RGB565ToARGBRow_C(const uint8_t* src_rgb565,
+                       uint8_t* dst_argb,
+                       int width) {
   int x;
   for (x = 0; x < width; ++x) {
-    uint8 b = src_rgb565[0] & 0x1f;
-    uint8 g = (src_rgb565[0] >> 5) | ((src_rgb565[1] & 0x07) << 3);
-    uint8 r = src_rgb565[1] >> 3;
+    uint8_t b = src_rgb565[0] & 0x1f;
+    uint8_t g = (src_rgb565[0] >> 5) | ((src_rgb565[1] & 0x07) << 3);
+    uint8_t r = src_rgb565[1] >> 3;
     dst_argb[0] = (b << 3) | (b >> 2);
     dst_argb[1] = (g << 2) | (g >> 4);
     dst_argb[2] = (r << 3) | (r >> 2);
@@ -129,14 +142,15 @@
   }
 }
 
-void ARGB1555ToARGBRow_C(const uint8* src_argb1555, uint8* dst_argb,
+void ARGB1555ToARGBRow_C(const uint8_t* src_argb1555,
+                         uint8_t* dst_argb,
                          int width) {
   int x;
   for (x = 0; x < width; ++x) {
-    uint8 b = src_argb1555[0] & 0x1f;
-    uint8 g = (src_argb1555[0] >> 5) | ((src_argb1555[1] & 0x03) << 3);
-    uint8 r = (src_argb1555[1] & 0x7c) >> 2;
-    uint8 a = src_argb1555[1] >> 7;
+    uint8_t b = src_argb1555[0] & 0x1f;
+    uint8_t g = (src_argb1555[0] >> 5) | ((src_argb1555[1] & 0x03) << 3);
+    uint8_t r = (src_argb1555[1] & 0x7c) >> 2;
+    uint8_t a = src_argb1555[1] >> 7;
     dst_argb[0] = (b << 3) | (b >> 2);
     dst_argb[1] = (g << 3) | (g >> 2);
     dst_argb[2] = (r << 3) | (r >> 2);
@@ -146,14 +160,15 @@
   }
 }
 
-void ARGB4444ToARGBRow_C(const uint8* src_argb4444, uint8* dst_argb,
+void ARGB4444ToARGBRow_C(const uint8_t* src_argb4444,
+                         uint8_t* dst_argb,
                          int width) {
   int x;
   for (x = 0; x < width; ++x) {
-    uint8 b = src_argb4444[0] & 0x0f;
-    uint8 g = src_argb4444[0] >> 4;
-    uint8 r = src_argb4444[1] & 0x0f;
-    uint8 a = src_argb4444[1] >> 4;
+    uint8_t b = src_argb4444[0] & 0x0f;
+    uint8_t g = src_argb4444[0] >> 4;
+    uint8_t r = src_argb4444[1] & 0x0f;
+    uint8_t a = src_argb4444[1] >> 4;
     dst_argb[0] = (b << 4) | b;
     dst_argb[1] = (g << 4) | g;
     dst_argb[2] = (r << 4) | r;
@@ -163,12 +178,53 @@
   }
 }
 
-void ARGBToRGB24Row_C(const uint8* src_argb, uint8* dst_rgb, int width) {
+void AR30ToARGBRow_C(const uint8_t* src_ar30, uint8_t* dst_argb, int width) {
   int x;
   for (x = 0; x < width; ++x) {
-    uint8 b = src_argb[0];
-    uint8 g = src_argb[1];
-    uint8 r = src_argb[2];
+    uint32_t ar30 = *(const uint32_t*)src_ar30;
+    uint32_t b = (ar30 >> 2) & 0xff;
+    uint32_t g = (ar30 >> 12) & 0xff;
+    uint32_t r = (ar30 >> 22) & 0xff;
+    uint32_t a = (ar30 >> 30) * 0x55;  // Replicate 2 bits to 8 bits.
+    *(uint32_t*)(dst_argb) = b | (g << 8) | (r << 16) | (a << 24);
+    dst_argb += 4;
+    src_ar30 += 4;
+  }
+}
+
+void AR30ToABGRRow_C(const uint8_t* src_ar30, uint8_t* dst_abgr, int width) {
+  int x;
+  for (x = 0; x < width; ++x) {
+    uint32_t ar30 = *(const uint32_t*)src_ar30;
+    uint32_t b = (ar30 >> 2) & 0xff;
+    uint32_t g = (ar30 >> 12) & 0xff;
+    uint32_t r = (ar30 >> 22) & 0xff;
+    uint32_t a = (ar30 >> 30) * 0x55;  // Replicate 2 bits to 8 bits.
+    *(uint32_t*)(dst_abgr) = r | (g << 8) | (b << 16) | (a << 24);
+    dst_abgr += 4;
+    src_ar30 += 4;
+  }
+}
+
+void AR30ToAB30Row_C(const uint8_t* src_ar30, uint8_t* dst_ab30, int width) {
+  int x;
+  for (x = 0; x < width; ++x) {
+    uint32_t ar30 = *(const uint32_t*)src_ar30;
+    uint32_t b = ar30 & 0x3ff;
+    uint32_t ga = ar30 & 0xc00ffc00;
+    uint32_t r = (ar30 >> 20) & 0x3ff;
+    *(uint32_t*)(dst_ab30) = r | ga | (b << 20);
+    dst_ab30 += 4;
+    src_ar30 += 4;
+  }
+}
+
+void ARGBToRGB24Row_C(const uint8_t* src_argb, uint8_t* dst_rgb, int width) {
+  int x;
+  for (x = 0; x < width; ++x) {
+    uint8_t b = src_argb[0];
+    uint8_t g = src_argb[1];
+    uint8_t r = src_argb[2];
     dst_rgb[0] = b;
     dst_rgb[1] = g;
     dst_rgb[2] = r;
@@ -177,12 +233,12 @@
   }
 }
 
-void ARGBToRAWRow_C(const uint8* src_argb, uint8* dst_rgb, int width) {
+void ARGBToRAWRow_C(const uint8_t* src_argb, uint8_t* dst_rgb, int width) {
   int x;
   for (x = 0; x < width; ++x) {
-    uint8 b = src_argb[0];
-    uint8 g = src_argb[1];
-    uint8 r = src_argb[2];
+    uint8_t b = src_argb[0];
+    uint8_t g = src_argb[1];
+    uint8_t r = src_argb[2];
     dst_rgb[0] = r;
     dst_rgb[1] = g;
     dst_rgb[2] = b;
@@ -191,25 +247,25 @@
   }
 }
 
-void ARGBToRGB565Row_C(const uint8* src_argb, uint8* dst_rgb, int width) {
+void ARGBToRGB565Row_C(const uint8_t* src_argb, uint8_t* dst_rgb, int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    uint8 b0 = src_argb[0] >> 3;
-    uint8 g0 = src_argb[1] >> 2;
-    uint8 r0 = src_argb[2] >> 3;
-    uint8 b1 = src_argb[4] >> 3;
-    uint8 g1 = src_argb[5] >> 2;
-    uint8 r1 = src_argb[6] >> 3;
-    WRITEWORD(dst_rgb, b0 | (g0 << 5) | (r0 << 11) |
-              (b1 << 16) | (g1 << 21) | (r1 << 27));
+    uint8_t b0 = src_argb[0] >> 3;
+    uint8_t g0 = src_argb[1] >> 2;
+    uint8_t r0 = src_argb[2] >> 3;
+    uint8_t b1 = src_argb[4] >> 3;
+    uint8_t g1 = src_argb[5] >> 2;
+    uint8_t r1 = src_argb[6] >> 3;
+    WRITEWORD(dst_rgb, b0 | (g0 << 5) | (r0 << 11) | (b1 << 16) | (g1 << 21) |
+                           (r1 << 27));
     dst_rgb += 4;
     src_argb += 8;
   }
   if (width & 1) {
-    uint8 b0 = src_argb[0] >> 3;
-    uint8 g0 = src_argb[1] >> 2;
-    uint8 r0 = src_argb[2] >> 3;
-    *(uint16*)(dst_rgb) = b0 | (g0 << 5) | (r0 << 11);
+    uint8_t b0 = src_argb[0] >> 3;
+    uint8_t g0 = src_argb[1] >> 2;
+    uint8_t r0 = src_argb[2] >> 3;
+    *(uint16_t*)(dst_rgb) = b0 | (g0 << 5) | (r0 << 11);
   }
 }
 
@@ -221,132 +277,160 @@
 // endian will not affect order of the original matrix.  But the dither4
 // will containing the first pixel in the lower byte for little endian
 // or the upper byte for big endian.
-void ARGBToRGB565DitherRow_C(const uint8* src_argb, uint8* dst_rgb,
-                             const uint32 dither4, int width) {
+void ARGBToRGB565DitherRow_C(const uint8_t* src_argb,
+                             uint8_t* dst_rgb,
+                             const uint32_t dither4,
+                             int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
     int dither0 = ((const unsigned char*)(&dither4))[x & 3];
     int dither1 = ((const unsigned char*)(&dither4))[(x + 1) & 3];
-    uint8 b0 = clamp255(src_argb[0] + dither0) >> 3;
-    uint8 g0 = clamp255(src_argb[1] + dither0) >> 2;
-    uint8 r0 = clamp255(src_argb[2] + dither0) >> 3;
-    uint8 b1 = clamp255(src_argb[4] + dither1) >> 3;
-    uint8 g1 = clamp255(src_argb[5] + dither1) >> 2;
-    uint8 r1 = clamp255(src_argb[6] + dither1) >> 3;
-    WRITEWORD(dst_rgb, b0 | (g0 << 5) | (r0 << 11) |
-              (b1 << 16) | (g1 << 21) | (r1 << 27));
+    uint8_t b0 = clamp255(src_argb[0] + dither0) >> 3;
+    uint8_t g0 = clamp255(src_argb[1] + dither0) >> 2;
+    uint8_t r0 = clamp255(src_argb[2] + dither0) >> 3;
+    uint8_t b1 = clamp255(src_argb[4] + dither1) >> 3;
+    uint8_t g1 = clamp255(src_argb[5] + dither1) >> 2;
+    uint8_t r1 = clamp255(src_argb[6] + dither1) >> 3;
+    WRITEWORD(dst_rgb, b0 | (g0 << 5) | (r0 << 11) | (b1 << 16) | (g1 << 21) |
+                           (r1 << 27));
     dst_rgb += 4;
     src_argb += 8;
   }
   if (width & 1) {
     int dither0 = ((const unsigned char*)(&dither4))[(width - 1) & 3];
-    uint8 b0 = clamp255(src_argb[0] + dither0) >> 3;
-    uint8 g0 = clamp255(src_argb[1] + dither0) >> 2;
-    uint8 r0 = clamp255(src_argb[2] + dither0) >> 3;
-    *(uint16*)(dst_rgb) = b0 | (g0 << 5) | (r0 << 11);
+    uint8_t b0 = clamp255(src_argb[0] + dither0) >> 3;
+    uint8_t g0 = clamp255(src_argb[1] + dither0) >> 2;
+    uint8_t r0 = clamp255(src_argb[2] + dither0) >> 3;
+    *(uint16_t*)(dst_rgb) = b0 | (g0 << 5) | (r0 << 11);
   }
 }
 
-void ARGBToARGB1555Row_C(const uint8* src_argb, uint8* dst_rgb, int width) {
+void ARGBToARGB1555Row_C(const uint8_t* src_argb, uint8_t* dst_rgb, int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    uint8 b0 = src_argb[0] >> 3;
-    uint8 g0 = src_argb[1] >> 3;
-    uint8 r0 = src_argb[2] >> 3;
-    uint8 a0 = src_argb[3] >> 7;
-    uint8 b1 = src_argb[4] >> 3;
-    uint8 g1 = src_argb[5] >> 3;
-    uint8 r1 = src_argb[6] >> 3;
-    uint8 a1 = src_argb[7] >> 7;
-    *(uint32*)(dst_rgb) =
-        b0 | (g0 << 5) | (r0 << 10) | (a0 << 15) |
-        (b1 << 16) | (g1 << 21) | (r1 << 26) | (a1 << 31);
+    uint8_t b0 = src_argb[0] >> 3;
+    uint8_t g0 = src_argb[1] >> 3;
+    uint8_t r0 = src_argb[2] >> 3;
+    uint8_t a0 = src_argb[3] >> 7;
+    uint8_t b1 = src_argb[4] >> 3;
+    uint8_t g1 = src_argb[5] >> 3;
+    uint8_t r1 = src_argb[6] >> 3;
+    uint8_t a1 = src_argb[7] >> 7;
+    *(uint32_t*)(dst_rgb) = b0 | (g0 << 5) | (r0 << 10) | (a0 << 15) |
+                            (b1 << 16) | (g1 << 21) | (r1 << 26) | (a1 << 31);
     dst_rgb += 4;
     src_argb += 8;
   }
   if (width & 1) {
-    uint8 b0 = src_argb[0] >> 3;
-    uint8 g0 = src_argb[1] >> 3;
-    uint8 r0 = src_argb[2] >> 3;
-    uint8 a0 = src_argb[3] >> 7;
-    *(uint16*)(dst_rgb) =
-        b0 | (g0 << 5) | (r0 << 10) | (a0 << 15);
+    uint8_t b0 = src_argb[0] >> 3;
+    uint8_t g0 = src_argb[1] >> 3;
+    uint8_t r0 = src_argb[2] >> 3;
+    uint8_t a0 = src_argb[3] >> 7;
+    *(uint16_t*)(dst_rgb) = b0 | (g0 << 5) | (r0 << 10) | (a0 << 15);
   }
 }
 
-void ARGBToARGB4444Row_C(const uint8* src_argb, uint8* dst_rgb, int width) {
+void ARGBToARGB4444Row_C(const uint8_t* src_argb, uint8_t* dst_rgb, int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    uint8 b0 = src_argb[0] >> 4;
-    uint8 g0 = src_argb[1] >> 4;
-    uint8 r0 = src_argb[2] >> 4;
-    uint8 a0 = src_argb[3] >> 4;
-    uint8 b1 = src_argb[4] >> 4;
-    uint8 g1 = src_argb[5] >> 4;
-    uint8 r1 = src_argb[6] >> 4;
-    uint8 a1 = src_argb[7] >> 4;
-    *(uint32*)(dst_rgb) =
-        b0 | (g0 << 4) | (r0 << 8) | (a0 << 12) |
-        (b1 << 16) | (g1 << 20) | (r1 << 24) | (a1 << 28);
+    uint8_t b0 = src_argb[0] >> 4;
+    uint8_t g0 = src_argb[1] >> 4;
+    uint8_t r0 = src_argb[2] >> 4;
+    uint8_t a0 = src_argb[3] >> 4;
+    uint8_t b1 = src_argb[4] >> 4;
+    uint8_t g1 = src_argb[5] >> 4;
+    uint8_t r1 = src_argb[6] >> 4;
+    uint8_t a1 = src_argb[7] >> 4;
+    *(uint32_t*)(dst_rgb) = b0 | (g0 << 4) | (r0 << 8) | (a0 << 12) |
+                            (b1 << 16) | (g1 << 20) | (r1 << 24) | (a1 << 28);
     dst_rgb += 4;
     src_argb += 8;
   }
   if (width & 1) {
-    uint8 b0 = src_argb[0] >> 4;
-    uint8 g0 = src_argb[1] >> 4;
-    uint8 r0 = src_argb[2] >> 4;
-    uint8 a0 = src_argb[3] >> 4;
-    *(uint16*)(dst_rgb) =
-        b0 | (g0 << 4) | (r0 << 8) | (a0 << 12);
+    uint8_t b0 = src_argb[0] >> 4;
+    uint8_t g0 = src_argb[1] >> 4;
+    uint8_t r0 = src_argb[2] >> 4;
+    uint8_t a0 = src_argb[3] >> 4;
+    *(uint16_t*)(dst_rgb) = b0 | (g0 << 4) | (r0 << 8) | (a0 << 12);
   }
 }
 
-static __inline int RGBToY(uint8 r, uint8 g, uint8 b) {
-  return (66 * r + 129 * g +  25 * b + 0x1080) >> 8;
+void ABGRToAR30Row_C(const uint8_t* src_abgr, uint8_t* dst_ar30, int width) {
+  int x;
+  for (x = 0; x < width; ++x) {
+    uint32_t b0 = (src_abgr[0] >> 6) | ((uint32_t)(src_abgr[0]) << 2);
+    uint32_t g0 = (src_abgr[1] >> 6) | ((uint32_t)(src_abgr[1]) << 2);
+    uint32_t r0 = (src_abgr[2] >> 6) | ((uint32_t)(src_abgr[2]) << 2);
+    uint32_t a0 = (src_abgr[3] >> 6);
+    *(uint32_t*)(dst_ar30) = r0 | (g0 << 10) | (b0 << 20) | (a0 << 30);
+    dst_ar30 += 4;
+    src_abgr += 4;
+  }
 }
 
-static __inline int RGBToU(uint8 r, uint8 g, uint8 b) {
+void ARGBToAR30Row_C(const uint8_t* src_argb, uint8_t* dst_ar30, int width) {
+  int x;
+  for (x = 0; x < width; ++x) {
+    uint32_t b0 = (src_argb[0] >> 6) | ((uint32_t)(src_argb[0]) << 2);
+    uint32_t g0 = (src_argb[1] >> 6) | ((uint32_t)(src_argb[1]) << 2);
+    uint32_t r0 = (src_argb[2] >> 6) | ((uint32_t)(src_argb[2]) << 2);
+    uint32_t a0 = (src_argb[3] >> 6);
+    *(uint32_t*)(dst_ar30) = b0 | (g0 << 10) | (r0 << 20) | (a0 << 30);
+    dst_ar30 += 4;
+    src_argb += 4;
+  }
+}
+
+static __inline int RGBToY(uint8_t r, uint8_t g, uint8_t b) {
+  return (66 * r + 129 * g + 25 * b + 0x1080) >> 8;
+}
+
+static __inline int RGBToU(uint8_t r, uint8_t g, uint8_t b) {
   return (112 * b - 74 * g - 38 * r + 0x8080) >> 8;
 }
-static __inline int RGBToV(uint8 r, uint8 g, uint8 b) {
+static __inline int RGBToV(uint8_t r, uint8_t g, uint8_t b) {
   return (112 * r - 94 * g - 18 * b + 0x8080) >> 8;
 }
 
-#define MAKEROWY(NAME, R, G, B, BPP) \
-void NAME ## ToYRow_C(const uint8* src_argb0, uint8* dst_y, int width) {       \
-  int x;                                                                       \
-  for (x = 0; x < width; ++x) {                                                \
-    dst_y[0] = RGBToY(src_argb0[R], src_argb0[G], src_argb0[B]);               \
-    src_argb0 += BPP;                                                          \
-    dst_y += 1;                                                                \
-  }                                                                            \
-}                                                                              \
-void NAME ## ToUVRow_C(const uint8* src_rgb0, int src_stride_rgb,              \
-                       uint8* dst_u, uint8* dst_v, int width) {                \
-  const uint8* src_rgb1 = src_rgb0 + src_stride_rgb;                           \
-  int x;                                                                       \
-  for (x = 0; x < width - 1; x += 2) {                                         \
-    uint8 ab = (src_rgb0[B] + src_rgb0[B + BPP] +                              \
-               src_rgb1[B] + src_rgb1[B + BPP]) >> 2;                          \
-    uint8 ag = (src_rgb0[G] + src_rgb0[G + BPP] +                              \
-               src_rgb1[G] + src_rgb1[G + BPP]) >> 2;                          \
-    uint8 ar = (src_rgb0[R] + src_rgb0[R + BPP] +                              \
-               src_rgb1[R] + src_rgb1[R + BPP]) >> 2;                          \
-    dst_u[0] = RGBToU(ar, ag, ab);                                             \
-    dst_v[0] = RGBToV(ar, ag, ab);                                             \
-    src_rgb0 += BPP * 2;                                                       \
-    src_rgb1 += BPP * 2;                                                       \
-    dst_u += 1;                                                                \
-    dst_v += 1;                                                                \
-  }                                                                            \
-  if (width & 1) {                                                             \
-    uint8 ab = (src_rgb0[B] + src_rgb1[B]) >> 1;                               \
-    uint8 ag = (src_rgb0[G] + src_rgb1[G]) >> 1;                               \
-    uint8 ar = (src_rgb0[R] + src_rgb1[R]) >> 1;                               \
-    dst_u[0] = RGBToU(ar, ag, ab);                                             \
-    dst_v[0] = RGBToV(ar, ag, ab);                                             \
-  }                                                                            \
-}
+// ARGBToY_C and ARGBToUV_C
+#define MAKEROWY(NAME, R, G, B, BPP)                                         \
+  void NAME##ToYRow_C(const uint8_t* src_argb0, uint8_t* dst_y, int width) { \
+    int x;                                                                   \
+    for (x = 0; x < width; ++x) {                                            \
+      dst_y[0] = RGBToY(src_argb0[R], src_argb0[G], src_argb0[B]);           \
+      src_argb0 += BPP;                                                      \
+      dst_y += 1;                                                            \
+    }                                                                        \
+  }                                                                          \
+  void NAME##ToUVRow_C(const uint8_t* src_rgb0, int src_stride_rgb,          \
+                       uint8_t* dst_u, uint8_t* dst_v, int width) {          \
+    const uint8_t* src_rgb1 = src_rgb0 + src_stride_rgb;                     \
+    int x;                                                                   \
+    for (x = 0; x < width - 1; x += 2) {                                     \
+      uint8_t ab = (src_rgb0[B] + src_rgb0[B + BPP] + src_rgb1[B] +          \
+                    src_rgb1[B + BPP]) >>                                    \
+                   2;                                                        \
+      uint8_t ag = (src_rgb0[G] + src_rgb0[G + BPP] + src_rgb1[G] +          \
+                    src_rgb1[G + BPP]) >>                                    \
+                   2;                                                        \
+      uint8_t ar = (src_rgb0[R] + src_rgb0[R + BPP] + src_rgb1[R] +          \
+                    src_rgb1[R + BPP]) >>                                    \
+                   2;                                                        \
+      dst_u[0] = RGBToU(ar, ag, ab);                                         \
+      dst_v[0] = RGBToV(ar, ag, ab);                                         \
+      src_rgb0 += BPP * 2;                                                   \
+      src_rgb1 += BPP * 2;                                                   \
+      dst_u += 1;                                                            \
+      dst_v += 1;                                                            \
+    }                                                                        \
+    if (width & 1) {                                                         \
+      uint8_t ab = (src_rgb0[B] + src_rgb1[B]) >> 1;                         \
+      uint8_t ag = (src_rgb0[G] + src_rgb1[G]) >> 1;                         \
+      uint8_t ar = (src_rgb0[R] + src_rgb1[R]) >> 1;                         \
+      dst_u[0] = RGBToU(ar, ag, ab);                                         \
+      dst_v[0] = RGBToV(ar, ag, ab);                                         \
+    }                                                                        \
+  }
 
 MAKEROWY(ARGB, 2, 1, 0, 4)
 MAKEROWY(BGRA, 1, 2, 3, 4)
@@ -381,64 +465,65 @@
 // g -0.41869 * 255 = -106.76595 = -107
 // r  0.50000 * 255 = 127.5 = 127
 
-static __inline int RGBToYJ(uint8 r, uint8 g, uint8 b) {
-  return (38 * r + 75 * g +  15 * b + 64) >> 7;
+static __inline int RGBToYJ(uint8_t r, uint8_t g, uint8_t b) {
+  return (38 * r + 75 * g + 15 * b + 64) >> 7;
 }
 
-static __inline int RGBToUJ(uint8 r, uint8 g, uint8 b) {
+static __inline int RGBToUJ(uint8_t r, uint8_t g, uint8_t b) {
   return (127 * b - 84 * g - 43 * r + 0x8080) >> 8;
 }
-static __inline int RGBToVJ(uint8 r, uint8 g, uint8 b) {
+static __inline int RGBToVJ(uint8_t r, uint8_t g, uint8_t b) {
   return (127 * r - 107 * g - 20 * b + 0x8080) >> 8;
 }
 
 #define AVGB(a, b) (((a) + (b) + 1) >> 1)
 
-#define MAKEROWYJ(NAME, R, G, B, BPP) \
-void NAME ## ToYJRow_C(const uint8* src_argb0, uint8* dst_y, int width) {      \
-  int x;                                                                       \
-  for (x = 0; x < width; ++x) {                                                \
-    dst_y[0] = RGBToYJ(src_argb0[R], src_argb0[G], src_argb0[B]);              \
-    src_argb0 += BPP;                                                          \
-    dst_y += 1;                                                                \
-  }                                                                            \
-}                                                                              \
-void NAME ## ToUVJRow_C(const uint8* src_rgb0, int src_stride_rgb,             \
-                        uint8* dst_u, uint8* dst_v, int width) {               \
-  const uint8* src_rgb1 = src_rgb0 + src_stride_rgb;                           \
-  int x;                                                                       \
-  for (x = 0; x < width - 1; x += 2) {                                         \
-    uint8 ab = AVGB(AVGB(src_rgb0[B], src_rgb1[B]),                            \
-                    AVGB(src_rgb0[B + BPP], src_rgb1[B + BPP]));               \
-    uint8 ag = AVGB(AVGB(src_rgb0[G], src_rgb1[G]),                            \
-                    AVGB(src_rgb0[G + BPP], src_rgb1[G + BPP]));               \
-    uint8 ar = AVGB(AVGB(src_rgb0[R], src_rgb1[R]),                            \
-                    AVGB(src_rgb0[R + BPP], src_rgb1[R + BPP]));               \
-    dst_u[0] = RGBToUJ(ar, ag, ab);                                            \
-    dst_v[0] = RGBToVJ(ar, ag, ab);                                            \
-    src_rgb0 += BPP * 2;                                                       \
-    src_rgb1 += BPP * 2;                                                       \
-    dst_u += 1;                                                                \
-    dst_v += 1;                                                                \
-  }                                                                            \
-  if (width & 1) {                                                             \
-    uint8 ab = AVGB(src_rgb0[B], src_rgb1[B]);                                 \
-    uint8 ag = AVGB(src_rgb0[G], src_rgb1[G]);                                 \
-    uint8 ar = AVGB(src_rgb0[R], src_rgb1[R]);                                 \
-    dst_u[0] = RGBToUJ(ar, ag, ab);                                            \
-    dst_v[0] = RGBToVJ(ar, ag, ab);                                            \
-  }                                                                            \
-}
+// ARGBToYJ_C and ARGBToUVJ_C
+#define MAKEROWYJ(NAME, R, G, B, BPP)                                         \
+  void NAME##ToYJRow_C(const uint8_t* src_argb0, uint8_t* dst_y, int width) { \
+    int x;                                                                    \
+    for (x = 0; x < width; ++x) {                                             \
+      dst_y[0] = RGBToYJ(src_argb0[R], src_argb0[G], src_argb0[B]);           \
+      src_argb0 += BPP;                                                       \
+      dst_y += 1;                                                             \
+    }                                                                         \
+  }                                                                           \
+  void NAME##ToUVJRow_C(const uint8_t* src_rgb0, int src_stride_rgb,          \
+                        uint8_t* dst_u, uint8_t* dst_v, int width) {          \
+    const uint8_t* src_rgb1 = src_rgb0 + src_stride_rgb;                      \
+    int x;                                                                    \
+    for (x = 0; x < width - 1; x += 2) {                                      \
+      uint8_t ab = AVGB(AVGB(src_rgb0[B], src_rgb1[B]),                       \
+                        AVGB(src_rgb0[B + BPP], src_rgb1[B + BPP]));          \
+      uint8_t ag = AVGB(AVGB(src_rgb0[G], src_rgb1[G]),                       \
+                        AVGB(src_rgb0[G + BPP], src_rgb1[G + BPP]));          \
+      uint8_t ar = AVGB(AVGB(src_rgb0[R], src_rgb1[R]),                       \
+                        AVGB(src_rgb0[R + BPP], src_rgb1[R + BPP]));          \
+      dst_u[0] = RGBToUJ(ar, ag, ab);                                         \
+      dst_v[0] = RGBToVJ(ar, ag, ab);                                         \
+      src_rgb0 += BPP * 2;                                                    \
+      src_rgb1 += BPP * 2;                                                    \
+      dst_u += 1;                                                             \
+      dst_v += 1;                                                             \
+    }                                                                         \
+    if (width & 1) {                                                          \
+      uint8_t ab = AVGB(src_rgb0[B], src_rgb1[B]);                            \
+      uint8_t ag = AVGB(src_rgb0[G], src_rgb1[G]);                            \
+      uint8_t ar = AVGB(src_rgb0[R], src_rgb1[R]);                            \
+      dst_u[0] = RGBToUJ(ar, ag, ab);                                         \
+      dst_v[0] = RGBToVJ(ar, ag, ab);                                         \
+    }                                                                         \
+  }
 
 MAKEROWYJ(ARGB, 2, 1, 0, 4)
 #undef MAKEROWYJ
 
-void RGB565ToYRow_C(const uint8* src_rgb565, uint8* dst_y, int width) {
+void RGB565ToYRow_C(const uint8_t* src_rgb565, uint8_t* dst_y, int width) {
   int x;
   for (x = 0; x < width; ++x) {
-    uint8 b = src_rgb565[0] & 0x1f;
-    uint8 g = (src_rgb565[0] >> 5) | ((src_rgb565[1] & 0x07) << 3);
-    uint8 r = src_rgb565[1] >> 3;
+    uint8_t b = src_rgb565[0] & 0x1f;
+    uint8_t g = (src_rgb565[0] >> 5) | ((src_rgb565[1] & 0x07) << 3);
+    uint8_t r = src_rgb565[1] >> 3;
     b = (b << 3) | (b >> 2);
     g = (g << 2) | (g >> 4);
     r = (r << 3) | (r >> 2);
@@ -448,12 +533,12 @@
   }
 }
 
-void ARGB1555ToYRow_C(const uint8* src_argb1555, uint8* dst_y, int width) {
+void ARGB1555ToYRow_C(const uint8_t* src_argb1555, uint8_t* dst_y, int width) {
   int x;
   for (x = 0; x < width; ++x) {
-    uint8 b = src_argb1555[0] & 0x1f;
-    uint8 g = (src_argb1555[0] >> 5) | ((src_argb1555[1] & 0x03) << 3);
-    uint8 r = (src_argb1555[1] & 0x7c) >> 2;
+    uint8_t b = src_argb1555[0] & 0x1f;
+    uint8_t g = (src_argb1555[0] >> 5) | ((src_argb1555[1] & 0x03) << 3);
+    uint8_t r = (src_argb1555[1] & 0x7c) >> 2;
     b = (b << 3) | (b >> 2);
     g = (g << 3) | (g >> 2);
     r = (r << 3) | (r >> 2);
@@ -463,12 +548,12 @@
   }
 }
 
-void ARGB4444ToYRow_C(const uint8* src_argb4444, uint8* dst_y, int width) {
+void ARGB4444ToYRow_C(const uint8_t* src_argb4444, uint8_t* dst_y, int width) {
   int x;
   for (x = 0; x < width; ++x) {
-    uint8 b = src_argb4444[0] & 0x0f;
-    uint8 g = src_argb4444[0] >> 4;
-    uint8 r = src_argb4444[1] & 0x0f;
+    uint8_t b = src_argb4444[0] & 0x0f;
+    uint8_t g = src_argb4444[0] >> 4;
+    uint8_t r = src_argb4444[1] & 0x0f;
     b = (b << 4) | b;
     g = (g << 4) | g;
     r = (r << 4) | r;
@@ -478,26 +563,29 @@
   }
 }
 
-void RGB565ToUVRow_C(const uint8* src_rgb565, int src_stride_rgb565,
-                     uint8* dst_u, uint8* dst_v, int width) {
-  const uint8* next_rgb565 = src_rgb565 + src_stride_rgb565;
+void RGB565ToUVRow_C(const uint8_t* src_rgb565,
+                     int src_stride_rgb565,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width) {
+  const uint8_t* next_rgb565 = src_rgb565 + src_stride_rgb565;
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    uint8 b0 = src_rgb565[0] & 0x1f;
-    uint8 g0 = (src_rgb565[0] >> 5) | ((src_rgb565[1] & 0x07) << 3);
-    uint8 r0 = src_rgb565[1] >> 3;
-    uint8 b1 = src_rgb565[2] & 0x1f;
-    uint8 g1 = (src_rgb565[2] >> 5) | ((src_rgb565[3] & 0x07) << 3);
-    uint8 r1 = src_rgb565[3] >> 3;
-    uint8 b2 = next_rgb565[0] & 0x1f;
-    uint8 g2 = (next_rgb565[0] >> 5) | ((next_rgb565[1] & 0x07) << 3);
-    uint8 r2 = next_rgb565[1] >> 3;
-    uint8 b3 = next_rgb565[2] & 0x1f;
-    uint8 g3 = (next_rgb565[2] >> 5) | ((next_rgb565[3] & 0x07) << 3);
-    uint8 r3 = next_rgb565[3] >> 3;
-    uint8 b = (b0 + b1 + b2 + b3);  // 565 * 4 = 787.
-    uint8 g = (g0 + g1 + g2 + g3);
-    uint8 r = (r0 + r1 + r2 + r3);
+    uint8_t b0 = src_rgb565[0] & 0x1f;
+    uint8_t g0 = (src_rgb565[0] >> 5) | ((src_rgb565[1] & 0x07) << 3);
+    uint8_t r0 = src_rgb565[1] >> 3;
+    uint8_t b1 = src_rgb565[2] & 0x1f;
+    uint8_t g1 = (src_rgb565[2] >> 5) | ((src_rgb565[3] & 0x07) << 3);
+    uint8_t r1 = src_rgb565[3] >> 3;
+    uint8_t b2 = next_rgb565[0] & 0x1f;
+    uint8_t g2 = (next_rgb565[0] >> 5) | ((next_rgb565[1] & 0x07) << 3);
+    uint8_t r2 = next_rgb565[1] >> 3;
+    uint8_t b3 = next_rgb565[2] & 0x1f;
+    uint8_t g3 = (next_rgb565[2] >> 5) | ((next_rgb565[3] & 0x07) << 3);
+    uint8_t r3 = next_rgb565[3] >> 3;
+    uint8_t b = (b0 + b1 + b2 + b3);  // 565 * 4 = 787.
+    uint8_t g = (g0 + g1 + g2 + g3);
+    uint8_t r = (r0 + r1 + r2 + r3);
     b = (b << 1) | (b >> 6);  // 787 -> 888.
     r = (r << 1) | (r >> 6);
     dst_u[0] = RGBToU(r, g, b);
@@ -508,15 +596,15 @@
     dst_v += 1;
   }
   if (width & 1) {
-    uint8 b0 = src_rgb565[0] & 0x1f;
-    uint8 g0 = (src_rgb565[0] >> 5) | ((src_rgb565[1] & 0x07) << 3);
-    uint8 r0 = src_rgb565[1] >> 3;
-    uint8 b2 = next_rgb565[0] & 0x1f;
-    uint8 g2 = (next_rgb565[0] >> 5) | ((next_rgb565[1] & 0x07) << 3);
-    uint8 r2 = next_rgb565[1] >> 3;
-    uint8 b = (b0 + b2);  // 565 * 2 = 676.
-    uint8 g = (g0 + g2);
-    uint8 r = (r0 + r2);
+    uint8_t b0 = src_rgb565[0] & 0x1f;
+    uint8_t g0 = (src_rgb565[0] >> 5) | ((src_rgb565[1] & 0x07) << 3);
+    uint8_t r0 = src_rgb565[1] >> 3;
+    uint8_t b2 = next_rgb565[0] & 0x1f;
+    uint8_t g2 = (next_rgb565[0] >> 5) | ((next_rgb565[1] & 0x07) << 3);
+    uint8_t r2 = next_rgb565[1] >> 3;
+    uint8_t b = (b0 + b2);  // 565 * 2 = 676.
+    uint8_t g = (g0 + g2);
+    uint8_t r = (r0 + r2);
     b = (b << 2) | (b >> 4);  // 676 -> 888
     g = (g << 1) | (g >> 6);
     r = (r << 2) | (r >> 4);
@@ -525,26 +613,29 @@
   }
 }
 
-void ARGB1555ToUVRow_C(const uint8* src_argb1555, int src_stride_argb1555,
-                       uint8* dst_u, uint8* dst_v, int width) {
-  const uint8* next_argb1555 = src_argb1555 + src_stride_argb1555;
+void ARGB1555ToUVRow_C(const uint8_t* src_argb1555,
+                       int src_stride_argb1555,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width) {
+  const uint8_t* next_argb1555 = src_argb1555 + src_stride_argb1555;
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    uint8 b0 = src_argb1555[0] & 0x1f;
-    uint8 g0 = (src_argb1555[0] >> 5) | ((src_argb1555[1] & 0x03) << 3);
-    uint8 r0 = (src_argb1555[1] & 0x7c) >> 2;
-    uint8 b1 = src_argb1555[2] & 0x1f;
-    uint8 g1 = (src_argb1555[2] >> 5) | ((src_argb1555[3] & 0x03) << 3);
-    uint8 r1 = (src_argb1555[3] & 0x7c) >> 2;
-    uint8 b2 = next_argb1555[0] & 0x1f;
-    uint8 g2 = (next_argb1555[0] >> 5) | ((next_argb1555[1] & 0x03) << 3);
-    uint8 r2 = (next_argb1555[1] & 0x7c) >> 2;
-    uint8 b3 = next_argb1555[2] & 0x1f;
-    uint8 g3 = (next_argb1555[2] >> 5) | ((next_argb1555[3] & 0x03) << 3);
-    uint8 r3 = (next_argb1555[3] & 0x7c) >> 2;
-    uint8 b = (b0 + b1 + b2 + b3);  // 555 * 4 = 777.
-    uint8 g = (g0 + g1 + g2 + g3);
-    uint8 r = (r0 + r1 + r2 + r3);
+    uint8_t b0 = src_argb1555[0] & 0x1f;
+    uint8_t g0 = (src_argb1555[0] >> 5) | ((src_argb1555[1] & 0x03) << 3);
+    uint8_t r0 = (src_argb1555[1] & 0x7c) >> 2;
+    uint8_t b1 = src_argb1555[2] & 0x1f;
+    uint8_t g1 = (src_argb1555[2] >> 5) | ((src_argb1555[3] & 0x03) << 3);
+    uint8_t r1 = (src_argb1555[3] & 0x7c) >> 2;
+    uint8_t b2 = next_argb1555[0] & 0x1f;
+    uint8_t g2 = (next_argb1555[0] >> 5) | ((next_argb1555[1] & 0x03) << 3);
+    uint8_t r2 = (next_argb1555[1] & 0x7c) >> 2;
+    uint8_t b3 = next_argb1555[2] & 0x1f;
+    uint8_t g3 = (next_argb1555[2] >> 5) | ((next_argb1555[3] & 0x03) << 3);
+    uint8_t r3 = (next_argb1555[3] & 0x7c) >> 2;
+    uint8_t b = (b0 + b1 + b2 + b3);  // 555 * 4 = 777.
+    uint8_t g = (g0 + g1 + g2 + g3);
+    uint8_t r = (r0 + r1 + r2 + r3);
     b = (b << 1) | (b >> 6);  // 777 -> 888.
     g = (g << 1) | (g >> 6);
     r = (r << 1) | (r >> 6);
@@ -556,15 +647,15 @@
     dst_v += 1;
   }
   if (width & 1) {
-    uint8 b0 = src_argb1555[0] & 0x1f;
-    uint8 g0 = (src_argb1555[0] >> 5) | ((src_argb1555[1] & 0x03) << 3);
-    uint8 r0 = (src_argb1555[1] & 0x7c) >> 2;
-    uint8 b2 = next_argb1555[0] & 0x1f;
-    uint8 g2 = (next_argb1555[0] >> 5) | ((next_argb1555[1] & 0x03) << 3);
-    uint8 r2 = next_argb1555[1] >> 3;
-    uint8 b = (b0 + b2);  // 555 * 2 = 666.
-    uint8 g = (g0 + g2);
-    uint8 r = (r0 + r2);
+    uint8_t b0 = src_argb1555[0] & 0x1f;
+    uint8_t g0 = (src_argb1555[0] >> 5) | ((src_argb1555[1] & 0x03) << 3);
+    uint8_t r0 = (src_argb1555[1] & 0x7c) >> 2;
+    uint8_t b2 = next_argb1555[0] & 0x1f;
+    uint8_t g2 = (next_argb1555[0] >> 5) | ((next_argb1555[1] & 0x03) << 3);
+    uint8_t r2 = next_argb1555[1] >> 3;
+    uint8_t b = (b0 + b2);  // 555 * 2 = 666.
+    uint8_t g = (g0 + g2);
+    uint8_t r = (r0 + r2);
     b = (b << 2) | (b >> 4);  // 666 -> 888.
     g = (g << 2) | (g >> 4);
     r = (r << 2) | (r >> 4);
@@ -573,26 +664,29 @@
   }
 }
 
-void ARGB4444ToUVRow_C(const uint8* src_argb4444, int src_stride_argb4444,
-                       uint8* dst_u, uint8* dst_v, int width) {
-  const uint8* next_argb4444 = src_argb4444 + src_stride_argb4444;
+void ARGB4444ToUVRow_C(const uint8_t* src_argb4444,
+                       int src_stride_argb4444,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width) {
+  const uint8_t* next_argb4444 = src_argb4444 + src_stride_argb4444;
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    uint8 b0 = src_argb4444[0] & 0x0f;
-    uint8 g0 = src_argb4444[0] >> 4;
-    uint8 r0 = src_argb4444[1] & 0x0f;
-    uint8 b1 = src_argb4444[2] & 0x0f;
-    uint8 g1 = src_argb4444[2] >> 4;
-    uint8 r1 = src_argb4444[3] & 0x0f;
-    uint8 b2 = next_argb4444[0] & 0x0f;
-    uint8 g2 = next_argb4444[0] >> 4;
-    uint8 r2 = next_argb4444[1] & 0x0f;
-    uint8 b3 = next_argb4444[2] & 0x0f;
-    uint8 g3 = next_argb4444[2] >> 4;
-    uint8 r3 = next_argb4444[3] & 0x0f;
-    uint8 b = (b0 + b1 + b2 + b3);  // 444 * 4 = 666.
-    uint8 g = (g0 + g1 + g2 + g3);
-    uint8 r = (r0 + r1 + r2 + r3);
+    uint8_t b0 = src_argb4444[0] & 0x0f;
+    uint8_t g0 = src_argb4444[0] >> 4;
+    uint8_t r0 = src_argb4444[1] & 0x0f;
+    uint8_t b1 = src_argb4444[2] & 0x0f;
+    uint8_t g1 = src_argb4444[2] >> 4;
+    uint8_t r1 = src_argb4444[3] & 0x0f;
+    uint8_t b2 = next_argb4444[0] & 0x0f;
+    uint8_t g2 = next_argb4444[0] >> 4;
+    uint8_t r2 = next_argb4444[1] & 0x0f;
+    uint8_t b3 = next_argb4444[2] & 0x0f;
+    uint8_t g3 = next_argb4444[2] >> 4;
+    uint8_t r3 = next_argb4444[3] & 0x0f;
+    uint8_t b = (b0 + b1 + b2 + b3);  // 444 * 4 = 666.
+    uint8_t g = (g0 + g1 + g2 + g3);
+    uint8_t r = (r0 + r1 + r2 + r3);
     b = (b << 2) | (b >> 4);  // 666 -> 888.
     g = (g << 2) | (g >> 4);
     r = (r << 2) | (r >> 4);
@@ -604,15 +698,15 @@
     dst_v += 1;
   }
   if (width & 1) {
-    uint8 b0 = src_argb4444[0] & 0x0f;
-    uint8 g0 = src_argb4444[0] >> 4;
-    uint8 r0 = src_argb4444[1] & 0x0f;
-    uint8 b2 = next_argb4444[0] & 0x0f;
-    uint8 g2 = next_argb4444[0] >> 4;
-    uint8 r2 = next_argb4444[1] & 0x0f;
-    uint8 b = (b0 + b2);  // 444 * 2 = 555.
-    uint8 g = (g0 + g2);
-    uint8 r = (r0 + r2);
+    uint8_t b0 = src_argb4444[0] & 0x0f;
+    uint8_t g0 = src_argb4444[0] >> 4;
+    uint8_t r0 = src_argb4444[1] & 0x0f;
+    uint8_t b2 = next_argb4444[0] & 0x0f;
+    uint8_t g2 = next_argb4444[0] >> 4;
+    uint8_t r2 = next_argb4444[1] & 0x0f;
+    uint8_t b = (b0 + b2);  // 444 * 2 = 555.
+    uint8_t g = (g0 + g2);
+    uint8_t r = (r0 + r2);
     b = (b << 3) | (b >> 2);  // 555 -> 888.
     g = (g << 3) | (g >> 2);
     r = (r << 3) | (r >> 2);
@@ -621,13 +715,15 @@
   }
 }
 
-void ARGBToUV444Row_C(const uint8* src_argb,
-                      uint8* dst_u, uint8* dst_v, int width) {
+void ARGBToUV444Row_C(const uint8_t* src_argb,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
   int x;
   for (x = 0; x < width; ++x) {
-    uint8 ab = src_argb[0];
-    uint8 ag = src_argb[1];
-    uint8 ar = src_argb[2];
+    uint8_t ab = src_argb[0];
+    uint8_t ag = src_argb[1];
+    uint8_t ar = src_argb[2];
     dst_u[0] = RGBToU(ar, ag, ab);
     dst_v[0] = RGBToV(ar, ag, ab);
     src_argb += 4;
@@ -636,45 +732,10 @@
   }
 }
 
-void ARGBToUV411Row_C(const uint8* src_argb,
-                      uint8* dst_u, uint8* dst_v, int width) {
-  int x;
-  for (x = 0; x < width - 3; x += 4) {
-    uint8 ab = (src_argb[0] + src_argb[4] + src_argb[8] + src_argb[12]) >> 2;
-    uint8 ag = (src_argb[1] + src_argb[5] + src_argb[9] + src_argb[13]) >> 2;
-    uint8 ar = (src_argb[2] + src_argb[6] + src_argb[10] + src_argb[14]) >> 2;
-    dst_u[0] = RGBToU(ar, ag, ab);
-    dst_v[0] = RGBToV(ar, ag, ab);
-    src_argb += 16;
-    dst_u += 1;
-    dst_v += 1;
-  }
-  // Odd width handling mimics 'any' function which replicates last pixel.
-  if ((width & 3) == 3) {
-    uint8 ab = (src_argb[0] + src_argb[4] + src_argb[8] + src_argb[8]) >> 2;
-    uint8 ag = (src_argb[1] + src_argb[5] + src_argb[9] + src_argb[9]) >> 2;
-    uint8 ar = (src_argb[2] + src_argb[6] + src_argb[10] + src_argb[10]) >> 2;
-    dst_u[0] = RGBToU(ar, ag, ab);
-    dst_v[0] = RGBToV(ar, ag, ab);
-  } else if ((width & 3) == 2) {
-    uint8 ab = (src_argb[0] + src_argb[4]) >> 1;
-    uint8 ag = (src_argb[1] + src_argb[5]) >> 1;
-    uint8 ar = (src_argb[2] + src_argb[6]) >> 1;
-    dst_u[0] = RGBToU(ar, ag, ab);
-    dst_v[0] = RGBToV(ar, ag, ab);
-  } else if ((width & 3) == 1) {
-    uint8 ab = src_argb[0];
-    uint8 ag = src_argb[1];
-    uint8 ar = src_argb[2];
-    dst_u[0] = RGBToU(ar, ag, ab);
-    dst_v[0] = RGBToV(ar, ag, ab);
-  }
-}
-
-void ARGBGrayRow_C(const uint8* src_argb, uint8* dst_argb, int width) {
+void ARGBGrayRow_C(const uint8_t* src_argb, uint8_t* dst_argb, int width) {
   int x;
   for (x = 0; x < width; ++x) {
-    uint8 y = RGBToYJ(src_argb[2], src_argb[1], src_argb[0]);
+    uint8_t y = RGBToYJ(src_argb[2], src_argb[1], src_argb[0]);
     dst_argb[2] = dst_argb[1] = dst_argb[0] = y;
     dst_argb[3] = src_argb[3];
     dst_argb += 4;
@@ -683,7 +744,7 @@
 }
 
 // Convert a row of image to Sepia tone.
-void ARGBSepiaRow_C(uint8* dst_argb, int width) {
+void ARGBSepiaRow_C(uint8_t* dst_argb, int width) {
   int x;
   for (x = 0; x < width; ++x) {
     int b = dst_argb[0];
@@ -702,22 +763,28 @@
 
 // Apply color matrix to a row of image. Matrix is signed.
 // TODO(fbarchard): Consider adding rounding (+32).
-void ARGBColorMatrixRow_C(const uint8* src_argb, uint8* dst_argb,
-                          const int8* matrix_argb, int width) {
+void ARGBColorMatrixRow_C(const uint8_t* src_argb,
+                          uint8_t* dst_argb,
+                          const int8_t* matrix_argb,
+                          int width) {
   int x;
   for (x = 0; x < width; ++x) {
     int b = src_argb[0];
     int g = src_argb[1];
     int r = src_argb[2];
     int a = src_argb[3];
-    int sb = (b * matrix_argb[0] + g * matrix_argb[1] +
-              r * matrix_argb[2] + a * matrix_argb[3]) >> 6;
-    int sg = (b * matrix_argb[4] + g * matrix_argb[5] +
-              r * matrix_argb[6] + a * matrix_argb[7]) >> 6;
-    int sr = (b * matrix_argb[8] + g * matrix_argb[9] +
-              r * matrix_argb[10] + a * matrix_argb[11]) >> 6;
-    int sa = (b * matrix_argb[12] + g * matrix_argb[13] +
-              r * matrix_argb[14] + a * matrix_argb[15]) >> 6;
+    int sb = (b * matrix_argb[0] + g * matrix_argb[1] + r * matrix_argb[2] +
+              a * matrix_argb[3]) >>
+             6;
+    int sg = (b * matrix_argb[4] + g * matrix_argb[5] + r * matrix_argb[6] +
+              a * matrix_argb[7]) >>
+             6;
+    int sr = (b * matrix_argb[8] + g * matrix_argb[9] + r * matrix_argb[10] +
+              a * matrix_argb[11]) >>
+             6;
+    int sa = (b * matrix_argb[12] + g * matrix_argb[13] + r * matrix_argb[14] +
+              a * matrix_argb[15]) >>
+             6;
     dst_argb[0] = Clamp(sb);
     dst_argb[1] = Clamp(sg);
     dst_argb[2] = Clamp(sr);
@@ -728,7 +795,9 @@
 }
 
 // Apply color table to a row of image.
-void ARGBColorTableRow_C(uint8* dst_argb, const uint8* table_argb, int width) {
+void ARGBColorTableRow_C(uint8_t* dst_argb,
+                         const uint8_t* table_argb,
+                         int width) {
   int x;
   for (x = 0; x < width; ++x) {
     int b = dst_argb[0];
@@ -744,7 +813,9 @@
 }
 
 // Apply color table to a row of image.
-void RGBColorTableRow_C(uint8* dst_argb, const uint8* table_argb, int width) {
+void RGBColorTableRow_C(uint8_t* dst_argb,
+                        const uint8_t* table_argb,
+                        int width) {
   int x;
   for (x = 0; x < width; ++x) {
     int b = dst_argb[0];
@@ -757,8 +828,11 @@
   }
 }
 
-void ARGBQuantizeRow_C(uint8* dst_argb, int scale, int interval_size,
-                       int interval_offset, int width) {
+void ARGBQuantizeRow_C(uint8_t* dst_argb,
+                       int scale,
+                       int interval_size,
+                       int interval_offset,
+                       int width) {
   int x;
   for (x = 0; x < width; ++x) {
     int b = dst_argb[0];
@@ -772,21 +846,23 @@
 }
 
 #define REPEAT8(v) (v) | ((v) << 8)
-#define SHADE(f, v) v * f >> 24
+#define SHADE(f, v) v* f >> 24
 
-void ARGBShadeRow_C(const uint8* src_argb, uint8* dst_argb, int width,
-                    uint32 value) {
-  const uint32 b_scale = REPEAT8(value & 0xff);
-  const uint32 g_scale = REPEAT8((value >> 8) & 0xff);
-  const uint32 r_scale = REPEAT8((value >> 16) & 0xff);
-  const uint32 a_scale = REPEAT8(value >> 24);
+void ARGBShadeRow_C(const uint8_t* src_argb,
+                    uint8_t* dst_argb,
+                    int width,
+                    uint32_t value) {
+  const uint32_t b_scale = REPEAT8(value & 0xff);
+  const uint32_t g_scale = REPEAT8((value >> 8) & 0xff);
+  const uint32_t r_scale = REPEAT8((value >> 16) & 0xff);
+  const uint32_t a_scale = REPEAT8(value >> 24);
 
   int i;
   for (i = 0; i < width; ++i) {
-    const uint32 b = REPEAT8(src_argb[0]);
-    const uint32 g = REPEAT8(src_argb[1]);
-    const uint32 r = REPEAT8(src_argb[2]);
-    const uint32 a = REPEAT8(src_argb[3]);
+    const uint32_t b = REPEAT8(src_argb[0]);
+    const uint32_t g = REPEAT8(src_argb[1]);
+    const uint32_t r = REPEAT8(src_argb[2]);
+    const uint32_t a = REPEAT8(src_argb[3]);
     dst_argb[0] = SHADE(b, b_scale);
     dst_argb[1] = SHADE(g, g_scale);
     dst_argb[2] = SHADE(r, r_scale);
@@ -799,20 +875,22 @@
 #undef SHADE
 
 #define REPEAT8(v) (v) | ((v) << 8)
-#define SHADE(f, v) v * f >> 16
+#define SHADE(f, v) v* f >> 16
 
-void ARGBMultiplyRow_C(const uint8* src_argb0, const uint8* src_argb1,
-                       uint8* dst_argb, int width) {
+void ARGBMultiplyRow_C(const uint8_t* src_argb0,
+                       const uint8_t* src_argb1,
+                       uint8_t* dst_argb,
+                       int width) {
   int i;
   for (i = 0; i < width; ++i) {
-    const uint32 b = REPEAT8(src_argb0[0]);
-    const uint32 g = REPEAT8(src_argb0[1]);
-    const uint32 r = REPEAT8(src_argb0[2]);
-    const uint32 a = REPEAT8(src_argb0[3]);
-    const uint32 b_scale = src_argb1[0];
-    const uint32 g_scale = src_argb1[1];
-    const uint32 r_scale = src_argb1[2];
-    const uint32 a_scale = src_argb1[3];
+    const uint32_t b = REPEAT8(src_argb0[0]);
+    const uint32_t g = REPEAT8(src_argb0[1]);
+    const uint32_t r = REPEAT8(src_argb0[2]);
+    const uint32_t a = REPEAT8(src_argb0[3]);
+    const uint32_t b_scale = src_argb1[0];
+    const uint32_t g_scale = src_argb1[1];
+    const uint32_t r_scale = src_argb1[2];
+    const uint32_t a_scale = src_argb1[3];
     dst_argb[0] = SHADE(b, b_scale);
     dst_argb[1] = SHADE(g, g_scale);
     dst_argb[2] = SHADE(r, r_scale);
@@ -827,8 +905,10 @@
 
 #define SHADE(f, v) clamp255(v + f)
 
-void ARGBAddRow_C(const uint8* src_argb0, const uint8* src_argb1,
-                  uint8* dst_argb, int width) {
+void ARGBAddRow_C(const uint8_t* src_argb0,
+                  const uint8_t* src_argb1,
+                  uint8_t* dst_argb,
+                  int width) {
   int i;
   for (i = 0; i < width; ++i) {
     const int b = src_argb0[0];
@@ -852,8 +932,10 @@
 
 #define SHADE(f, v) clamp0(f - v)
 
-void ARGBSubtractRow_C(const uint8* src_argb0, const uint8* src_argb1,
-                       uint8* dst_argb, int width) {
+void ARGBSubtractRow_C(const uint8_t* src_argb0,
+                       const uint8_t* src_argb1,
+                       uint8_t* dst_argb,
+                       int width) {
   int i;
   for (i = 0; i < width; ++i) {
     const int b = src_argb0[0];
@@ -876,8 +958,11 @@
 #undef SHADE
 
 // Sobel functions which mimics SSSE3.
-void SobelXRow_C(const uint8* src_y0, const uint8* src_y1, const uint8* src_y2,
-                 uint8* dst_sobelx, int width) {
+void SobelXRow_C(const uint8_t* src_y0,
+                 const uint8_t* src_y1,
+                 const uint8_t* src_y2,
+                 uint8_t* dst_sobelx,
+                 int width) {
   int i;
   for (i = 0; i < width; ++i) {
     int a = src_y0[i];
@@ -890,12 +975,14 @@
     int b_diff = b - b_sub;
     int c_diff = c - c_sub;
     int sobel = Abs(a_diff + b_diff * 2 + c_diff);
-    dst_sobelx[i] = (uint8)(clamp255(sobel));
+    dst_sobelx[i] = (uint8_t)(clamp255(sobel));
   }
 }
 
-void SobelYRow_C(const uint8* src_y0, const uint8* src_y1,
-                 uint8* dst_sobely, int width) {
+void SobelYRow_C(const uint8_t* src_y0,
+                 const uint8_t* src_y1,
+                 uint8_t* dst_sobely,
+                 int width) {
   int i;
   for (i = 0; i < width; ++i) {
     int a = src_y0[i + 0];
@@ -908,56 +995,62 @@
     int b_diff = b - b_sub;
     int c_diff = c - c_sub;
     int sobel = Abs(a_diff + b_diff * 2 + c_diff);
-    dst_sobely[i] = (uint8)(clamp255(sobel));
+    dst_sobely[i] = (uint8_t)(clamp255(sobel));
   }
 }
 
-void SobelRow_C(const uint8* src_sobelx, const uint8* src_sobely,
-                uint8* dst_argb, int width) {
+void SobelRow_C(const uint8_t* src_sobelx,
+                const uint8_t* src_sobely,
+                uint8_t* dst_argb,
+                int width) {
   int i;
   for (i = 0; i < width; ++i) {
     int r = src_sobelx[i];
     int b = src_sobely[i];
     int s = clamp255(r + b);
-    dst_argb[0] = (uint8)(s);
-    dst_argb[1] = (uint8)(s);
-    dst_argb[2] = (uint8)(s);
-    dst_argb[3] = (uint8)(255u);
+    dst_argb[0] = (uint8_t)(s);
+    dst_argb[1] = (uint8_t)(s);
+    dst_argb[2] = (uint8_t)(s);
+    dst_argb[3] = (uint8_t)(255u);
     dst_argb += 4;
   }
 }
 
-void SobelToPlaneRow_C(const uint8* src_sobelx, const uint8* src_sobely,
-                       uint8* dst_y, int width) {
+void SobelToPlaneRow_C(const uint8_t* src_sobelx,
+                       const uint8_t* src_sobely,
+                       uint8_t* dst_y,
+                       int width) {
   int i;
   for (i = 0; i < width; ++i) {
     int r = src_sobelx[i];
     int b = src_sobely[i];
     int s = clamp255(r + b);
-    dst_y[i] = (uint8)(s);
+    dst_y[i] = (uint8_t)(s);
   }
 }
 
-void SobelXYRow_C(const uint8* src_sobelx, const uint8* src_sobely,
-                  uint8* dst_argb, int width) {
+void SobelXYRow_C(const uint8_t* src_sobelx,
+                  const uint8_t* src_sobely,
+                  uint8_t* dst_argb,
+                  int width) {
   int i;
   for (i = 0; i < width; ++i) {
     int r = src_sobelx[i];
     int b = src_sobely[i];
     int g = clamp255(r + b);
-    dst_argb[0] = (uint8)(b);
-    dst_argb[1] = (uint8)(g);
-    dst_argb[2] = (uint8)(r);
-    dst_argb[3] = (uint8)(255u);
+    dst_argb[0] = (uint8_t)(b);
+    dst_argb[1] = (uint8_t)(g);
+    dst_argb[2] = (uint8_t)(r);
+    dst_argb[3] = (uint8_t)(255u);
     dst_argb += 4;
   }
 }
 
-void J400ToARGBRow_C(const uint8* src_y, uint8* dst_argb, int width) {
+void J400ToARGBRow_C(const uint8_t* src_y, uint8_t* dst_argb, int width) {
   // Copy a Y to RGB.
   int x;
   for (x = 0; x < width; ++x) {
-    uint8 y = src_y[0];
+    uint8_t y = src_y[0];
     dst_argb[2] = dst_argb[1] = dst_argb[0] = y;
     dst_argb[3] = 255u;
     dst_argb += 4;
@@ -974,75 +1067,69 @@
 //  B = (Y - 16) * 1.164 - U * -2.018
 
 // Y contribution to R,G,B.  Scale and bias.
-#define YG 18997 /* round(1.164 * 64 * 256 * 256 / 257) */
+#define YG 18997  /* round(1.164 * 64 * 256 * 256 / 257) */
 #define YGB -1160 /* 1.164 * 64 * -16 + 64 / 2 */
 
 // U and V contributions to R,G,B.
 #define UB -128 /* max(-128, round(-2.018 * 64)) */
-#define UG 25 /* round(0.391 * 64) */
-#define VG 52 /* round(0.813 * 64) */
+#define UG 25   /* round(0.391 * 64) */
+#define VG 52   /* round(0.813 * 64) */
 #define VR -102 /* round(-1.596 * 64) */
 
 // Bias values to subtract 16 from Y and 128 from U and V.
-#define BB (UB * 128            + YGB)
+#define BB (UB * 128 + YGB)
 #define BG (UG * 128 + VG * 128 + YGB)
-#define BR            (VR * 128 + YGB)
+#define BR (VR * 128 + YGB)
 
 #if defined(__aarch64__)  // 64 bit arm
 const struct YuvConstants SIMD_ALIGNED(kYuvI601Constants) = {
-  { -UB, -VR, -UB, -VR, -UB, -VR, -UB, -VR },
-  { -UB, -VR, -UB, -VR, -UB, -VR, -UB, -VR },
-  { UG, VG, UG, VG, UG, VG, UG, VG },
-  { UG, VG, UG, VG, UG, VG, UG, VG },
-  { BB, BG, BR, 0, 0, 0, 0, 0 },
-  { 0x0101 * YG, 0, 0, 0 }
-};
+    {-UB, -VR, -UB, -VR, -UB, -VR, -UB, -VR},
+    {-UB, -VR, -UB, -VR, -UB, -VR, -UB, -VR},
+    {UG, VG, UG, VG, UG, VG, UG, VG},
+    {UG, VG, UG, VG, UG, VG, UG, VG},
+    {BB, BG, BR, 0, 0, 0, 0, 0},
+    {0x0101 * YG, 0, 0, 0}};
 const struct YuvConstants SIMD_ALIGNED(kYvuI601Constants) = {
-  { -VR, -UB, -VR, -UB, -VR, -UB, -VR, -UB },
-  { -VR, -UB, -VR, -UB, -VR, -UB, -VR, -UB },
-  { VG, UG, VG, UG, VG, UG, VG, UG },
-  { VG, UG, VG, UG, VG, UG, VG, UG },
-  { BR, BG, BB, 0, 0, 0, 0, 0 },
-  { 0x0101 * YG, 0, 0, 0 }
-};
+    {-VR, -UB, -VR, -UB, -VR, -UB, -VR, -UB},
+    {-VR, -UB, -VR, -UB, -VR, -UB, -VR, -UB},
+    {VG, UG, VG, UG, VG, UG, VG, UG},
+    {VG, UG, VG, UG, VG, UG, VG, UG},
+    {BR, BG, BB, 0, 0, 0, 0, 0},
+    {0x0101 * YG, 0, 0, 0}};
 #elif defined(__arm__)  // 32 bit arm
 const struct YuvConstants SIMD_ALIGNED(kYuvI601Constants) = {
-  { -UB, -UB, -UB, -UB, -VR, -VR, -VR, -VR, 0, 0, 0, 0, 0, 0, 0, 0 },
-  { UG, UG, UG, UG, VG, VG, VG, VG, 0, 0, 0, 0, 0, 0, 0, 0 },
-  { BB, BG, BR, 0, 0, 0, 0, 0 },
-  { 0x0101 * YG, 0, 0, 0 }
-};
+    {-UB, -UB, -UB, -UB, -VR, -VR, -VR, -VR, 0, 0, 0, 0, 0, 0, 0, 0},
+    {UG, UG, UG, UG, VG, VG, VG, VG, 0, 0, 0, 0, 0, 0, 0, 0},
+    {BB, BG, BR, 0, 0, 0, 0, 0},
+    {0x0101 * YG, 0, 0, 0}};
 const struct YuvConstants SIMD_ALIGNED(kYvuI601Constants) = {
-  { -VR, -VR, -VR, -VR, -UB, -UB, -UB, -UB, 0, 0, 0, 0, 0, 0, 0, 0 },
-  { VG, VG, VG, VG, UG, UG, UG, UG, 0, 0, 0, 0, 0, 0, 0, 0 },
-  { BR, BG, BB, 0, 0, 0, 0, 0 },
-  { 0x0101 * YG, 0, 0, 0 }
-};
+    {-VR, -VR, -VR, -VR, -UB, -UB, -UB, -UB, 0, 0, 0, 0, 0, 0, 0, 0},
+    {VG, VG, VG, VG, UG, UG, UG, UG, 0, 0, 0, 0, 0, 0, 0, 0},
+    {BR, BG, BB, 0, 0, 0, 0, 0},
+    {0x0101 * YG, 0, 0, 0}};
 #else
 const struct YuvConstants SIMD_ALIGNED(kYuvI601Constants) = {
-  { UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0,
-    UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0 },
-  { UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG,
-    UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG },
-  { 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR,
-    0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR },
-  { BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB },
-  { BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG },
-  { BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR },
-  { YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG }
-};
+    {UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0,
+     UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0},
+    {UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG,
+     UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG},
+    {0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR,
+     0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR},
+    {BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB},
+    {BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG},
+    {BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR},
+    {YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG}};
 const struct YuvConstants SIMD_ALIGNED(kYvuI601Constants) = {
-  { VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0,
-    VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0 },
-  { VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG,
-    VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG },
-  { 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB,
-    0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB },
-  { BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR },
-  { BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG },
-  { BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB },
-  { YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG }
-};
+    {VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0,
+     VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0},
+    {VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG,
+     VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG},
+    {0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB,
+     0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB},
+    {BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR},
+    {BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG},
+    {BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB},
+    {YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG}};
 #endif
 
 #undef BB
@@ -1062,74 +1149,68 @@
 
 // Y contribution to R,G,B.  Scale and bias.
 #define YG 16320 /* round(1.000 * 64 * 256 * 256 / 257) */
-#define YGB 32  /* 64 / 2 */
+#define YGB 32   /* 64 / 2 */
 
 // U and V contributions to R,G,B.
 #define UB -113 /* round(-1.77200 * 64) */
-#define UG 22 /* round(0.34414 * 64) */
-#define VG 46 /* round(0.71414  * 64) */
-#define VR -90 /* round(-1.40200 * 64) */
+#define UG 22   /* round(0.34414 * 64) */
+#define VG 46   /* round(0.71414  * 64) */
+#define VR -90  /* round(-1.40200 * 64) */
 
 // Bias values to round, and subtract 128 from U and V.
-#define BB (UB * 128            + YGB)
+#define BB (UB * 128 + YGB)
 #define BG (UG * 128 + VG * 128 + YGB)
-#define BR            (VR * 128 + YGB)
+#define BR (VR * 128 + YGB)
 
 #if defined(__aarch64__)
 const struct YuvConstants SIMD_ALIGNED(kYuvJPEGConstants) = {
-  { -UB, -VR, -UB, -VR, -UB, -VR, -UB, -VR },
-  { -UB, -VR, -UB, -VR, -UB, -VR, -UB, -VR },
-  { UG, VG, UG, VG, UG, VG, UG, VG },
-  { UG, VG, UG, VG, UG, VG, UG, VG },
-  { BB, BG, BR, 0, 0, 0, 0, 0 },
-  { 0x0101 * YG, 0, 0, 0 }
-};
+    {-UB, -VR, -UB, -VR, -UB, -VR, -UB, -VR},
+    {-UB, -VR, -UB, -VR, -UB, -VR, -UB, -VR},
+    {UG, VG, UG, VG, UG, VG, UG, VG},
+    {UG, VG, UG, VG, UG, VG, UG, VG},
+    {BB, BG, BR, 0, 0, 0, 0, 0},
+    {0x0101 * YG, 0, 0, 0}};
 const struct YuvConstants SIMD_ALIGNED(kYvuJPEGConstants) = {
-  { -VR, -UB, -VR, -UB, -VR, -UB, -VR, -UB },
-  { -VR, -UB, -VR, -UB, -VR, -UB, -VR, -UB },
-  { VG, UG, VG, UG, VG, UG, VG, UG },
-  { VG, UG, VG, UG, VG, UG, VG, UG },
-  { BR, BG, BB, 0, 0, 0, 0, 0 },
-  { 0x0101 * YG, 0, 0, 0 }
-};
+    {-VR, -UB, -VR, -UB, -VR, -UB, -VR, -UB},
+    {-VR, -UB, -VR, -UB, -VR, -UB, -VR, -UB},
+    {VG, UG, VG, UG, VG, UG, VG, UG},
+    {VG, UG, VG, UG, VG, UG, VG, UG},
+    {BR, BG, BB, 0, 0, 0, 0, 0},
+    {0x0101 * YG, 0, 0, 0}};
 #elif defined(__arm__)
 const struct YuvConstants SIMD_ALIGNED(kYuvJPEGConstants) = {
-  { -UB, -UB, -UB, -UB, -VR, -VR, -VR, -VR, 0, 0, 0, 0, 0, 0, 0, 0 },
-  { UG, UG, UG, UG, VG, VG, VG, VG, 0, 0, 0, 0, 0, 0, 0, 0 },
-  { BB, BG, BR, 0, 0, 0, 0, 0 },
-  { 0x0101 * YG, 0, 0, 0 }
-};
+    {-UB, -UB, -UB, -UB, -VR, -VR, -VR, -VR, 0, 0, 0, 0, 0, 0, 0, 0},
+    {UG, UG, UG, UG, VG, VG, VG, VG, 0, 0, 0, 0, 0, 0, 0, 0},
+    {BB, BG, BR, 0, 0, 0, 0, 0},
+    {0x0101 * YG, 0, 0, 0}};
 const struct YuvConstants SIMD_ALIGNED(kYvuJPEGConstants) = {
-  { -VR, -VR, -VR, -VR, -UB, -UB, -UB, -UB, 0, 0, 0, 0, 0, 0, 0, 0 },
-  { VG, VG, VG, VG, UG, UG, UG, UG, 0, 0, 0, 0, 0, 0, 0, 0 },
-  { BR, BG, BB, 0, 0, 0, 0, 0 },
-  { 0x0101 * YG, 0, 0, 0 }
-};
+    {-VR, -VR, -VR, -VR, -UB, -UB, -UB, -UB, 0, 0, 0, 0, 0, 0, 0, 0},
+    {VG, VG, VG, VG, UG, UG, UG, UG, 0, 0, 0, 0, 0, 0, 0, 0},
+    {BR, BG, BB, 0, 0, 0, 0, 0},
+    {0x0101 * YG, 0, 0, 0}};
 #else
 const struct YuvConstants SIMD_ALIGNED(kYuvJPEGConstants) = {
-  { UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0,
-    UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0 },
-  { UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG,
-    UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG },
-  { 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR,
-    0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR },
-  { BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB },
-  { BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG },
-  { BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR },
-  { YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG }
-};
+    {UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0,
+     UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0},
+    {UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG,
+     UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG},
+    {0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR,
+     0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR},
+    {BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB},
+    {BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG},
+    {BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR},
+    {YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG}};
 const struct YuvConstants SIMD_ALIGNED(kYvuJPEGConstants) = {
-  { VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0,
-    VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0 },
-  { VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG,
-    VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG },
-  { 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB,
-    0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB },
-  { BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR },
-  { BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG },
-  { BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB },
-  { YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG }
-};
+    {VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0,
+     VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0},
+    {VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG,
+     VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG},
+    {0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB,
+     0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB},
+    {BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR},
+    {BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG},
+    {BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB},
+    {YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG}};
 #endif
 
 #undef BB
@@ -1143,81 +1224,76 @@
 #undef YG
 
 // BT.709 YUV to RGB reference
-// *  R = Y                - V * -1.28033
-// *  G = Y - U *  0.21482 - V *  0.38059
-// *  B = Y - U * -2.12798
+//  R = (Y - 16) * 1.164              - V * -1.793
+//  G = (Y - 16) * 1.164 - U *  0.213 - V *  0.533
+//  B = (Y - 16) * 1.164 - U * -2.112
+// See also http://www.equasys.de/colorconversion.html
 
 // Y contribution to R,G,B.  Scale and bias.
-#define YG 16320 /* round(1.000 * 64 * 256 * 256 / 257) */
-#define YGB 32  /* 64 / 2 */
+#define YG 18997  /* round(1.164 * 64 * 256 * 256 / 257) */
+#define YGB -1160 /* 1.164 * 64 * -16 + 64 / 2 */
 
-// TODO(fbarchard): Find way to express 2.12 instead of 2.0.
+// TODO(fbarchard): Find way to express 2.112 instead of 2.0.
 // U and V contributions to R,G,B.
-#define UB -128 /* max(-128, round(-2.12798 * 64)) */
-#define UG 14 /* round(0.21482 * 64) */
-#define VG 24 /* round(0.38059  * 64) */
-#define VR -82 /* round(-1.28033 * 64) */
+#define UB -128 /* max(-128, round(-2.112 * 64)) */
+#define UG 14   /* round(0.213 * 64) */
+#define VG 34   /* round(0.533  * 64) */
+#define VR -115 /* round(-1.793 * 64) */
 
 // Bias values to round, and subtract 128 from U and V.
-#define BB (UB * 128            + YGB)
+#define BB (UB * 128 + YGB)
 #define BG (UG * 128 + VG * 128 + YGB)
-#define BR            (VR * 128 + YGB)
+#define BR (VR * 128 + YGB)
 
 #if defined(__aarch64__)
 const struct YuvConstants SIMD_ALIGNED(kYuvH709Constants) = {
-  { -UB, -VR, -UB, -VR, -UB, -VR, -UB, -VR },
-  { -UB, -VR, -UB, -VR, -UB, -VR, -UB, -VR },
-  { UG, VG, UG, VG, UG, VG, UG, VG },
-  { UG, VG, UG, VG, UG, VG, UG, VG },
-  { BB, BG, BR, 0, 0, 0, 0, 0 },
-  { 0x0101 * YG, 0, 0, 0 }
-};
+    {-UB, -VR, -UB, -VR, -UB, -VR, -UB, -VR},
+    {-UB, -VR, -UB, -VR, -UB, -VR, -UB, -VR},
+    {UG, VG, UG, VG, UG, VG, UG, VG},
+    {UG, VG, UG, VG, UG, VG, UG, VG},
+    {BB, BG, BR, 0, 0, 0, 0, 0},
+    {0x0101 * YG, 0, 0, 0}};
 const struct YuvConstants SIMD_ALIGNED(kYvuH709Constants) = {
-  { -VR, -UB, -VR, -UB, -VR, -UB, -VR, -UB },
-  { -VR, -UB, -VR, -UB, -VR, -UB, -VR, -UB },
-  { VG, UG, VG, UG, VG, UG, VG, UG },
-  { VG, UG, VG, UG, VG, UG, VG, UG },
-  { BR, BG, BB, 0, 0, 0, 0, 0 },
-  { 0x0101 * YG, 0, 0, 0 }
-};
+    {-VR, -UB, -VR, -UB, -VR, -UB, -VR, -UB},
+    {-VR, -UB, -VR, -UB, -VR, -UB, -VR, -UB},
+    {VG, UG, VG, UG, VG, UG, VG, UG},
+    {VG, UG, VG, UG, VG, UG, VG, UG},
+    {BR, BG, BB, 0, 0, 0, 0, 0},
+    {0x0101 * YG, 0, 0, 0}};
 #elif defined(__arm__)
 const struct YuvConstants SIMD_ALIGNED(kYuvH709Constants) = {
-  { -UB, -UB, -UB, -UB, -VR, -VR, -VR, -VR, 0, 0, 0, 0, 0, 0, 0, 0 },
-  { UG, UG, UG, UG, VG, VG, VG, VG, 0, 0, 0, 0, 0, 0, 0, 0 },
-  { BB, BG, BR, 0, 0, 0, 0, 0 },
-  { 0x0101 * YG, 0, 0, 0 }
-};
+    {-UB, -UB, -UB, -UB, -VR, -VR, -VR, -VR, 0, 0, 0, 0, 0, 0, 0, 0},
+    {UG, UG, UG, UG, VG, VG, VG, VG, 0, 0, 0, 0, 0, 0, 0, 0},
+    {BB, BG, BR, 0, 0, 0, 0, 0},
+    {0x0101 * YG, 0, 0, 0}};
 const struct YuvConstants SIMD_ALIGNED(kYvuH709Constants) = {
-  { -VR, -VR, -VR, -VR, -UB, -UB, -UB, -UB, 0, 0, 0, 0, 0, 0, 0, 0 },
-  { VG, VG, VG, VG, UG, UG, UG, UG, 0, 0, 0, 0, 0, 0, 0, 0 },
-  { BR, BG, BB, 0, 0, 0, 0, 0 },
-  { 0x0101 * YG, 0, 0, 0 }
-};
+    {-VR, -VR, -VR, -VR, -UB, -UB, -UB, -UB, 0, 0, 0, 0, 0, 0, 0, 0},
+    {VG, VG, VG, VG, UG, UG, UG, UG, 0, 0, 0, 0, 0, 0, 0, 0},
+    {BR, BG, BB, 0, 0, 0, 0, 0},
+    {0x0101 * YG, 0, 0, 0}};
 #else
 const struct YuvConstants SIMD_ALIGNED(kYuvH709Constants) = {
-  { UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0,
-    UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0 },
-  { UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG,
-    UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG },
-  { 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR,
-    0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR },
-  { BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB },
-  { BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG },
-  { BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR },
-  { YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG }
-};
+    {UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0,
+     UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0},
+    {UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG,
+     UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG},
+    {0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR,
+     0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR},
+    {BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB},
+    {BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG},
+    {BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR},
+    {YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG}};
 const struct YuvConstants SIMD_ALIGNED(kYvuH709Constants) = {
-  { VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0,
-    VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0 },
-  { VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG,
-    VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG },
-  { 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB,
-    0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB },
-  { BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR },
-  { BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG },
-  { BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB },
-  { YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG }
-};
+    {VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0,
+     VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0, VR, 0},
+    {VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG,
+     VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG, VG, UG},
+    {0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB,
+     0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB, 0, UB},
+    {BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR, BR},
+    {BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG, BG},
+    {BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB, BB},
+    {YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG, YG}};
 #endif
 
 #undef BB
@@ -1231,8 +1307,14 @@
 #undef YG
 
 // C reference code that mimics the YUV assembly.
-static __inline void YuvPixel(uint8 y, uint8 u, uint8 v,
-                              uint8* b, uint8* g, uint8* r,
+// Reads 8 bit YUV and leaves result as 16 bit.
+
+static __inline void YuvPixel(uint8_t y,
+                              uint8_t u,
+                              uint8_t v,
+                              uint8_t* b,
+                              uint8_t* g,
+                              uint8_t* r,
                               const struct YuvConstants* yuvconstants) {
 #if defined(__aarch64__)
   int ub = -yuvconstants->kUVToRB[0];
@@ -1263,22 +1345,129 @@
   int yg = yuvconstants->kYToRgb[0];
 #endif
 
-  uint32 y1 = (uint32)(y * 0x0101 * yg) >> 16;
-  *b = Clamp((int32)(-(u * ub)          + y1 + bb) >> 6);
-  *g = Clamp((int32)(-(u * ug + v * vg) + y1 + bg) >> 6);
-  *r = Clamp((int32)         (-(v * vr) + y1 + br) >> 6);
+  uint32_t y1 = (uint32_t)(y * 0x0101 * yg) >> 16;
+  *b = Clamp((int32_t)(-(u * ub) + y1 + bb) >> 6);
+  *g = Clamp((int32_t)(-(u * ug + v * vg) + y1 + bg) >> 6);
+  *r = Clamp((int32_t)(-(v * vr) + y1 + br) >> 6);
+}
+
+// Reads 8 bit YUV and leaves result as 16 bit.
+static __inline void YuvPixel8_16(uint8_t y,
+                                  uint8_t u,
+                                  uint8_t v,
+                                  int* b,
+                                  int* g,
+                                  int* r,
+                                  const struct YuvConstants* yuvconstants) {
+#if defined(__aarch64__)
+  int ub = -yuvconstants->kUVToRB[0];
+  int ug = yuvconstants->kUVToG[0];
+  int vg = yuvconstants->kUVToG[1];
+  int vr = -yuvconstants->kUVToRB[1];
+  int bb = yuvconstants->kUVBiasBGR[0];
+  int bg = yuvconstants->kUVBiasBGR[1];
+  int br = yuvconstants->kUVBiasBGR[2];
+  int yg = yuvconstants->kYToRgb[0] / 0x0101;
+#elif defined(__arm__)
+  int ub = -yuvconstants->kUVToRB[0];
+  int ug = yuvconstants->kUVToG[0];
+  int vg = yuvconstants->kUVToG[4];
+  int vr = -yuvconstants->kUVToRB[4];
+  int bb = yuvconstants->kUVBiasBGR[0];
+  int bg = yuvconstants->kUVBiasBGR[1];
+  int br = yuvconstants->kUVBiasBGR[2];
+  int yg = yuvconstants->kYToRgb[0] / 0x0101;
+#else
+  int ub = yuvconstants->kUVToB[0];
+  int ug = yuvconstants->kUVToG[0];
+  int vg = yuvconstants->kUVToG[1];
+  int vr = yuvconstants->kUVToR[1];
+  int bb = yuvconstants->kUVBiasB[0];
+  int bg = yuvconstants->kUVBiasG[0];
+  int br = yuvconstants->kUVBiasR[0];
+  int yg = yuvconstants->kYToRgb[0];
+#endif
+
+  uint32_t y1 = (uint32_t)(y * 0x0101 * yg) >> 16;
+  *b = (int)(-(u * ub) + y1 + bb);
+  *g = (int)(-(u * ug + v * vg) + y1 + bg);
+  *r = (int)(-(v * vr) + y1 + br);
+}
+
+// C reference code that mimics the YUV 16 bit assembly.
+// Reads 10 bit YUV and leaves result as 16 bit.
+static __inline void YuvPixel16(int16_t y,
+                                int16_t u,
+                                int16_t v,
+                                int* b,
+                                int* g,
+                                int* r,
+                                const struct YuvConstants* yuvconstants) {
+#if defined(__aarch64__)
+  int ub = -yuvconstants->kUVToRB[0];
+  int ug = yuvconstants->kUVToG[0];
+  int vg = yuvconstants->kUVToG[1];
+  int vr = -yuvconstants->kUVToRB[1];
+  int bb = yuvconstants->kUVBiasBGR[0];
+  int bg = yuvconstants->kUVBiasBGR[1];
+  int br = yuvconstants->kUVBiasBGR[2];
+  int yg = yuvconstants->kYToRgb[0] / 0x0101;
+#elif defined(__arm__)
+  int ub = -yuvconstants->kUVToRB[0];
+  int ug = yuvconstants->kUVToG[0];
+  int vg = yuvconstants->kUVToG[4];
+  int vr = -yuvconstants->kUVToRB[4];
+  int bb = yuvconstants->kUVBiasBGR[0];
+  int bg = yuvconstants->kUVBiasBGR[1];
+  int br = yuvconstants->kUVBiasBGR[2];
+  int yg = yuvconstants->kYToRgb[0] / 0x0101;
+#else
+  int ub = yuvconstants->kUVToB[0];
+  int ug = yuvconstants->kUVToG[0];
+  int vg = yuvconstants->kUVToG[1];
+  int vr = yuvconstants->kUVToR[1];
+  int bb = yuvconstants->kUVBiasB[0];
+  int bg = yuvconstants->kUVBiasG[0];
+  int br = yuvconstants->kUVBiasR[0];
+  int yg = yuvconstants->kYToRgb[0];
+#endif
+
+  uint32_t y1 = (uint32_t)((y << 6) * yg) >> 16;
+  u = clamp255(u >> 2);
+  v = clamp255(v >> 2);
+  *b = (int)(-(u * ub) + y1 + bb);
+  *g = (int)(-(u * ug + v * vg) + y1 + bg);
+  *r = (int)(-(v * vr) + y1 + br);
+}
+
+// C reference code that mimics the YUV 10 bit assembly.
+// Reads 10 bit YUV and clamps down to 8 bit RGB.
+static __inline void YuvPixel10(uint16_t y,
+                                uint16_t u,
+                                uint16_t v,
+                                uint8_t* b,
+                                uint8_t* g,
+                                uint8_t* r,
+                                const struct YuvConstants* yuvconstants) {
+  int b16;
+  int g16;
+  int r16;
+  YuvPixel16(y, u, v, &b16, &g16, &r16, yuvconstants);
+  *b = Clamp(b16 >> 6);
+  *g = Clamp(g16 >> 6);
+  *r = Clamp(r16 >> 6);
 }
 
 // Y contribution to R,G,B.  Scale and bias.
-#define YG 18997 /* round(1.164 * 64 * 256 * 256 / 257) */
+#define YG 18997  /* round(1.164 * 64 * 256 * 256 / 257) */
 #define YGB -1160 /* 1.164 * 64 * -16 + 64 / 2 */
 
 // C reference code that mimics the YUV assembly.
-static __inline void YPixel(uint8 y, uint8* b, uint8* g, uint8* r) {
-  uint32 y1 = (uint32)(y * 0x0101 * YG) >> 16;
-  *b = Clamp((int32)(y1 + YGB) >> 6);
-  *g = Clamp((int32)(y1 + YGB) >> 6);
-  *r = Clamp((int32)(y1 + YGB) >> 6);
+static __inline void YPixel(uint8_t y, uint8_t* b, uint8_t* g, uint8_t* r) {
+  uint32_t y1 = (uint32_t)(y * 0x0101 * YG) >> 16;
+  *b = Clamp((int32_t)(y1 + YGB) >> 6);
+  *g = Clamp((int32_t)(y1 + YGB) >> 6);
+  *r = Clamp((int32_t)(y1 + YGB) >> 6);
 }
 
 #undef YG
@@ -1288,16 +1477,16 @@
     (defined(__ARM_NEON__) || defined(__aarch64__) || defined(LIBYUV_NEON))
 // C mimic assembly.
 // TODO(fbarchard): Remove subsampling from Neon.
-void I444ToARGBRow_C(const uint8* src_y,
-                     const uint8* src_u,
-                     const uint8* src_v,
-                     uint8* rgb_buf,
+void I444ToARGBRow_C(const uint8_t* src_y,
+                     const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* rgb_buf,
                      const struct YuvConstants* yuvconstants,
                      int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    uint8 u = (src_u[0] + src_u[1] + 1) >> 1;
-    uint8 v = (src_v[0] + src_v[1] + 1) >> 1;
+    uint8_t u = (src_u[0] + src_u[1] + 1) >> 1;
+    uint8_t v = (src_v[0] + src_v[1] + 1) >> 1;
     YuvPixel(src_y[0], u, v, rgb_buf + 0, rgb_buf + 1, rgb_buf + 2,
              yuvconstants);
     rgb_buf[3] = 255;
@@ -1310,22 +1499,22 @@
     rgb_buf += 8;  // Advance 2 pixels.
   }
   if (width & 1) {
-    YuvPixel(src_y[0], src_u[0], src_v[0],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
+    YuvPixel(src_y[0], src_u[0], src_v[0], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
     rgb_buf[3] = 255;
   }
 }
 #else
-void I444ToARGBRow_C(const uint8* src_y,
-                     const uint8* src_u,
-                     const uint8* src_v,
-                     uint8* rgb_buf,
+void I444ToARGBRow_C(const uint8_t* src_y,
+                     const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* rgb_buf,
                      const struct YuvConstants* yuvconstants,
                      int width) {
   int x;
   for (x = 0; x < width; ++x) {
-    YuvPixel(src_y[0], src_u[0], src_v[0],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
+    YuvPixel(src_y[0], src_u[0], src_v[0], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
     rgb_buf[3] = 255;
     src_y += 1;
     src_u += 1;
@@ -1336,19 +1525,19 @@
 #endif
 
 // Also used for 420
-void I422ToARGBRow_C(const uint8* src_y,
-                     const uint8* src_u,
-                     const uint8* src_v,
-                     uint8* rgb_buf,
+void I422ToARGBRow_C(const uint8_t* src_y,
+                     const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* rgb_buf,
                      const struct YuvConstants* yuvconstants,
                      int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    YuvPixel(src_y[0], src_u[0], src_v[0],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
+    YuvPixel(src_y[0], src_u[0], src_v[0], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
     rgb_buf[3] = 255;
-    YuvPixel(src_y[1], src_u[0], src_v[0],
-             rgb_buf + 4, rgb_buf + 5, rgb_buf + 6, yuvconstants);
+    YuvPixel(src_y[1], src_u[0], src_v[0], rgb_buf + 4, rgb_buf + 5,
+             rgb_buf + 6, yuvconstants);
     rgb_buf[7] = 255;
     src_y += 2;
     src_u += 1;
@@ -1356,26 +1545,120 @@
     rgb_buf += 8;  // Advance 2 pixels.
   }
   if (width & 1) {
-    YuvPixel(src_y[0], src_u[0], src_v[0],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
+    YuvPixel(src_y[0], src_u[0], src_v[0], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
     rgb_buf[3] = 255;
   }
 }
 
-void I422AlphaToARGBRow_C(const uint8* src_y,
-                          const uint8* src_u,
-                          const uint8* src_v,
-                          const uint8* src_a,
-                          uint8* rgb_buf,
+// 10 bit YUV to ARGB
+void I210ToARGBRow_C(const uint16_t* src_y,
+                     const uint16_t* src_u,
+                     const uint16_t* src_v,
+                     uint8_t* rgb_buf,
+                     const struct YuvConstants* yuvconstants,
+                     int width) {
+  int x;
+  for (x = 0; x < width - 1; x += 2) {
+    YuvPixel10(src_y[0], src_u[0], src_v[0], rgb_buf + 0, rgb_buf + 1,
+               rgb_buf + 2, yuvconstants);
+    rgb_buf[3] = 255;
+    YuvPixel10(src_y[1], src_u[0], src_v[0], rgb_buf + 4, rgb_buf + 5,
+               rgb_buf + 6, yuvconstants);
+    rgb_buf[7] = 255;
+    src_y += 2;
+    src_u += 1;
+    src_v += 1;
+    rgb_buf += 8;  // Advance 2 pixels.
+  }
+  if (width & 1) {
+    YuvPixel10(src_y[0], src_u[0], src_v[0], rgb_buf + 0, rgb_buf + 1,
+               rgb_buf + 2, yuvconstants);
+    rgb_buf[3] = 255;
+  }
+}
+
+static void StoreAR30(uint8_t* rgb_buf, int b, int g, int r) {
+  uint32_t ar30;
+  b = b >> 4;  // convert 10.6 to 10 bit.
+  g = g >> 4;
+  r = r >> 4;
+  b = Clamp10(b);
+  g = Clamp10(g);
+  r = Clamp10(r);
+  ar30 = b | ((uint32_t)g << 10) | ((uint32_t)r << 20) | 0xc0000000;
+  (*(uint32_t*)rgb_buf) = ar30;
+}
+
+// 10 bit YUV to 10 bit AR30
+void I210ToAR30Row_C(const uint16_t* src_y,
+                     const uint16_t* src_u,
+                     const uint16_t* src_v,
+                     uint8_t* rgb_buf,
+                     const struct YuvConstants* yuvconstants,
+                     int width) {
+  int x;
+  int b;
+  int g;
+  int r;
+  for (x = 0; x < width - 1; x += 2) {
+    YuvPixel16(src_y[0], src_u[0], src_v[0], &b, &g, &r, yuvconstants);
+    StoreAR30(rgb_buf, b, g, r);
+    YuvPixel16(src_y[1], src_u[0], src_v[0], &b, &g, &r, yuvconstants);
+    StoreAR30(rgb_buf + 4, b, g, r);
+    src_y += 2;
+    src_u += 1;
+    src_v += 1;
+    rgb_buf += 8;  // Advance 2 pixels.
+  }
+  if (width & 1) {
+    YuvPixel16(src_y[0], src_u[0], src_v[0], &b, &g, &r, yuvconstants);
+    StoreAR30(rgb_buf, b, g, r);
+  }
+}
+
+// 8 bit YUV to 10 bit AR30
+// Uses same code as 10 bit YUV bit shifts the 8 bit values up to 10 bits.
+void I422ToAR30Row_C(const uint8_t* src_y,
+                     const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* rgb_buf,
+                     const struct YuvConstants* yuvconstants,
+                     int width) {
+  int x;
+  int b;
+  int g;
+  int r;
+  for (x = 0; x < width - 1; x += 2) {
+    YuvPixel8_16(src_y[0], src_u[0], src_v[0], &b, &g, &r, yuvconstants);
+    StoreAR30(rgb_buf, b, g, r);
+    YuvPixel8_16(src_y[1], src_u[0], src_v[0], &b, &g, &r, yuvconstants);
+    StoreAR30(rgb_buf + 4, b, g, r);
+    src_y += 2;
+    src_u += 1;
+    src_v += 1;
+    rgb_buf += 8;  // Advance 2 pixels.
+  }
+  if (width & 1) {
+    YuvPixel8_16(src_y[0], src_u[0], src_v[0], &b, &g, &r, yuvconstants);
+    StoreAR30(rgb_buf, b, g, r);
+  }
+}
+
+void I422AlphaToARGBRow_C(const uint8_t* src_y,
+                          const uint8_t* src_u,
+                          const uint8_t* src_v,
+                          const uint8_t* src_a,
+                          uint8_t* rgb_buf,
                           const struct YuvConstants* yuvconstants,
                           int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    YuvPixel(src_y[0], src_u[0], src_v[0],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
+    YuvPixel(src_y[0], src_u[0], src_v[0], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
     rgb_buf[3] = src_a[0];
-    YuvPixel(src_y[1], src_u[0], src_v[0],
-             rgb_buf + 4, rgb_buf + 5, rgb_buf + 6, yuvconstants);
+    YuvPixel(src_y[1], src_u[0], src_v[0], rgb_buf + 4, rgb_buf + 5,
+             rgb_buf + 6, yuvconstants);
     rgb_buf[7] = src_a[1];
     src_y += 2;
     src_u += 1;
@@ -1384,47 +1667,47 @@
     rgb_buf += 8;  // Advance 2 pixels.
   }
   if (width & 1) {
-    YuvPixel(src_y[0], src_u[0], src_v[0],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
+    YuvPixel(src_y[0], src_u[0], src_v[0], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
     rgb_buf[3] = src_a[0];
   }
 }
 
-void I422ToRGB24Row_C(const uint8* src_y,
-                      const uint8* src_u,
-                      const uint8* src_v,
-                      uint8* rgb_buf,
+void I422ToRGB24Row_C(const uint8_t* src_y,
+                      const uint8_t* src_u,
+                      const uint8_t* src_v,
+                      uint8_t* rgb_buf,
                       const struct YuvConstants* yuvconstants,
                       int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    YuvPixel(src_y[0], src_u[0], src_v[0],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
-    YuvPixel(src_y[1], src_u[0], src_v[0],
-             rgb_buf + 3, rgb_buf + 4, rgb_buf + 5, yuvconstants);
+    YuvPixel(src_y[0], src_u[0], src_v[0], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
+    YuvPixel(src_y[1], src_u[0], src_v[0], rgb_buf + 3, rgb_buf + 4,
+             rgb_buf + 5, yuvconstants);
     src_y += 2;
     src_u += 1;
     src_v += 1;
     rgb_buf += 6;  // Advance 2 pixels.
   }
   if (width & 1) {
-    YuvPixel(src_y[0], src_u[0], src_v[0],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
+    YuvPixel(src_y[0], src_u[0], src_v[0], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
   }
 }
 
-void I422ToARGB4444Row_C(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_argb4444,
+void I422ToARGB4444Row_C(const uint8_t* src_y,
+                         const uint8_t* src_u,
+                         const uint8_t* src_v,
+                         uint8_t* dst_argb4444,
                          const struct YuvConstants* yuvconstants,
                          int width) {
-  uint8 b0;
-  uint8 g0;
-  uint8 r0;
-  uint8 b1;
-  uint8 g1;
-  uint8 r1;
+  uint8_t b0;
+  uint8_t g0;
+  uint8_t r0;
+  uint8_t b1;
+  uint8_t g1;
+  uint8_t r1;
   int x;
   for (x = 0; x < width - 1; x += 2) {
     YuvPixel(src_y[0], src_u[0], src_v[0], &b0, &g0, &r0, yuvconstants);
@@ -1435,8 +1718,8 @@
     b1 = b1 >> 4;
     g1 = g1 >> 4;
     r1 = r1 >> 4;
-    *(uint32*)(dst_argb4444) = b0 | (g0 << 4) | (r0 << 8) |
-        (b1 << 16) | (g1 << 20) | (r1 << 24) | 0xf000f000;
+    *(uint32_t*)(dst_argb4444) = b0 | (g0 << 4) | (r0 << 8) | (b1 << 16) |
+                                 (g1 << 20) | (r1 << 24) | 0xf000f000;
     src_y += 2;
     src_u += 1;
     src_v += 1;
@@ -1447,23 +1730,22 @@
     b0 = b0 >> 4;
     g0 = g0 >> 4;
     r0 = r0 >> 4;
-    *(uint16*)(dst_argb4444) = b0 | (g0 << 4) | (r0 << 8) |
-        0xf000;
+    *(uint16_t*)(dst_argb4444) = b0 | (g0 << 4) | (r0 << 8) | 0xf000;
   }
 }
 
-void I422ToARGB1555Row_C(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_argb1555,
+void I422ToARGB1555Row_C(const uint8_t* src_y,
+                         const uint8_t* src_u,
+                         const uint8_t* src_v,
+                         uint8_t* dst_argb1555,
                          const struct YuvConstants* yuvconstants,
                          int width) {
-  uint8 b0;
-  uint8 g0;
-  uint8 r0;
-  uint8 b1;
-  uint8 g1;
-  uint8 r1;
+  uint8_t b0;
+  uint8_t g0;
+  uint8_t r0;
+  uint8_t b1;
+  uint8_t g1;
+  uint8_t r1;
   int x;
   for (x = 0; x < width - 1; x += 2) {
     YuvPixel(src_y[0], src_u[0], src_v[0], &b0, &g0, &r0, yuvconstants);
@@ -1474,8 +1756,8 @@
     b1 = b1 >> 3;
     g1 = g1 >> 3;
     r1 = r1 >> 3;
-    *(uint32*)(dst_argb1555) = b0 | (g0 << 5) | (r0 << 10) |
-        (b1 << 16) | (g1 << 21) | (r1 << 26) | 0x80008000;
+    *(uint32_t*)(dst_argb1555) = b0 | (g0 << 5) | (r0 << 10) | (b1 << 16) |
+                                 (g1 << 21) | (r1 << 26) | 0x80008000;
     src_y += 2;
     src_u += 1;
     src_v += 1;
@@ -1486,23 +1768,22 @@
     b0 = b0 >> 3;
     g0 = g0 >> 3;
     r0 = r0 >> 3;
-    *(uint16*)(dst_argb1555) = b0 | (g0 << 5) | (r0 << 10) |
-        0x8000;
+    *(uint16_t*)(dst_argb1555) = b0 | (g0 << 5) | (r0 << 10) | 0x8000;
   }
 }
 
-void I422ToRGB565Row_C(const uint8* src_y,
-                       const uint8* src_u,
-                       const uint8* src_v,
-                       uint8* dst_rgb565,
+void I422ToRGB565Row_C(const uint8_t* src_y,
+                       const uint8_t* src_u,
+                       const uint8_t* src_v,
+                       uint8_t* dst_rgb565,
                        const struct YuvConstants* yuvconstants,
                        int width) {
-  uint8 b0;
-  uint8 g0;
-  uint8 r0;
-  uint8 b1;
-  uint8 g1;
-  uint8 r1;
+  uint8_t b0;
+  uint8_t g0;
+  uint8_t r0;
+  uint8_t b1;
+  uint8_t g1;
+  uint8_t r1;
   int x;
   for (x = 0; x < width - 1; x += 2) {
     YuvPixel(src_y[0], src_u[0], src_v[0], &b0, &g0, &r0, yuvconstants);
@@ -1513,8 +1794,8 @@
     b1 = b1 >> 3;
     g1 = g1 >> 2;
     r1 = r1 >> 3;
-    *(uint32*)(dst_rgb565) = b0 | (g0 << 5) | (r0 << 11) |
-        (b1 << 16) | (g1 << 21) | (r1 << 27);
+    *(uint32_t*)(dst_rgb565) =
+        b0 | (g0 << 5) | (r0 << 11) | (b1 << 16) | (g1 << 21) | (r1 << 27);
     src_y += 2;
     src_u += 1;
     src_v += 1;
@@ -1525,111 +1806,111 @@
     b0 = b0 >> 3;
     g0 = g0 >> 2;
     r0 = r0 >> 3;
-    *(uint16*)(dst_rgb565) = b0 | (g0 << 5) | (r0 << 11);
+    *(uint16_t*)(dst_rgb565) = b0 | (g0 << 5) | (r0 << 11);
   }
 }
 
-void I411ToARGBRow_C(const uint8* src_y,
-                     const uint8* src_u,
-                     const uint8* src_v,
-                     uint8* rgb_buf,
-                     const struct YuvConstants* yuvconstants,
-                     int width) {
-  int x;
-  for (x = 0; x < width - 3; x += 4) {
-    YuvPixel(src_y[0], src_u[0], src_v[0],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
-    rgb_buf[3] = 255;
-    YuvPixel(src_y[1], src_u[0], src_v[0],
-             rgb_buf + 4, rgb_buf + 5, rgb_buf + 6, yuvconstants);
-    rgb_buf[7] = 255;
-    YuvPixel(src_y[2], src_u[0], src_v[0],
-             rgb_buf + 8, rgb_buf + 9, rgb_buf + 10, yuvconstants);
-    rgb_buf[11] = 255;
-    YuvPixel(src_y[3], src_u[0], src_v[0],
-             rgb_buf + 12, rgb_buf + 13, rgb_buf + 14, yuvconstants);
-    rgb_buf[15] = 255;
-    src_y += 4;
-    src_u += 1;
-    src_v += 1;
-    rgb_buf += 16;  // Advance 4 pixels.
-  }
-  if (width & 2) {
-    YuvPixel(src_y[0], src_u[0], src_v[0],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
-    rgb_buf[3] = 255;
-    YuvPixel(src_y[1], src_u[0], src_v[0],
-             rgb_buf + 4, rgb_buf + 5, rgb_buf + 6, yuvconstants);
-    rgb_buf[7] = 255;
-    src_y += 2;
-    rgb_buf += 8;  // Advance 2 pixels.
-  }
-  if (width & 1) {
-    YuvPixel(src_y[0], src_u[0], src_v[0],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
-    rgb_buf[3] = 255;
-  }
-}
-
-void NV12ToARGBRow_C(const uint8* src_y,
-                     const uint8* src_uv,
-                     uint8* rgb_buf,
+void NV12ToARGBRow_C(const uint8_t* src_y,
+                     const uint8_t* src_uv,
+                     uint8_t* rgb_buf,
                      const struct YuvConstants* yuvconstants,
                      int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    YuvPixel(src_y[0], src_uv[0], src_uv[1],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
+    YuvPixel(src_y[0], src_uv[0], src_uv[1], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
     rgb_buf[3] = 255;
-    YuvPixel(src_y[1], src_uv[0], src_uv[1],
-             rgb_buf + 4, rgb_buf + 5, rgb_buf + 6, yuvconstants);
+    YuvPixel(src_y[1], src_uv[0], src_uv[1], rgb_buf + 4, rgb_buf + 5,
+             rgb_buf + 6, yuvconstants);
     rgb_buf[7] = 255;
     src_y += 2;
     src_uv += 2;
     rgb_buf += 8;  // Advance 2 pixels.
   }
   if (width & 1) {
-    YuvPixel(src_y[0], src_uv[0], src_uv[1],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
+    YuvPixel(src_y[0], src_uv[0], src_uv[1], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
     rgb_buf[3] = 255;
   }
 }
 
-void NV21ToARGBRow_C(const uint8* src_y,
-                     const uint8* src_vu,
-                     uint8* rgb_buf,
+void NV21ToARGBRow_C(const uint8_t* src_y,
+                     const uint8_t* src_vu,
+                     uint8_t* rgb_buf,
                      const struct YuvConstants* yuvconstants,
                      int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    YuvPixel(src_y[0], src_vu[1], src_vu[0],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
+    YuvPixel(src_y[0], src_vu[1], src_vu[0], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
     rgb_buf[3] = 255;
-    YuvPixel(src_y[1], src_vu[1], src_vu[0],
-             rgb_buf + 4, rgb_buf + 5, rgb_buf + 6, yuvconstants);
+    YuvPixel(src_y[1], src_vu[1], src_vu[0], rgb_buf + 4, rgb_buf + 5,
+             rgb_buf + 6, yuvconstants);
     rgb_buf[7] = 255;
     src_y += 2;
     src_vu += 2;
     rgb_buf += 8;  // Advance 2 pixels.
   }
   if (width & 1) {
-    YuvPixel(src_y[0], src_vu[1], src_vu[0],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
+    YuvPixel(src_y[0], src_vu[1], src_vu[0], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
     rgb_buf[3] = 255;
   }
 }
 
-void NV12ToRGB565Row_C(const uint8* src_y,
-                       const uint8* src_uv,
-                       uint8* dst_rgb565,
+void NV12ToRGB24Row_C(const uint8_t* src_y,
+                      const uint8_t* src_uv,
+                      uint8_t* rgb_buf,
+                      const struct YuvConstants* yuvconstants,
+                      int width) {
+  int x;
+  for (x = 0; x < width - 1; x += 2) {
+    YuvPixel(src_y[0], src_uv[0], src_uv[1], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
+    YuvPixel(src_y[1], src_uv[0], src_uv[1], rgb_buf + 3, rgb_buf + 4,
+             rgb_buf + 5, yuvconstants);
+    src_y += 2;
+    src_uv += 2;
+    rgb_buf += 6;  // Advance 2 pixels.
+  }
+  if (width & 1) {
+    YuvPixel(src_y[0], src_uv[0], src_uv[1], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
+  }
+}
+
+void NV21ToRGB24Row_C(const uint8_t* src_y,
+                      const uint8_t* src_vu,
+                      uint8_t* rgb_buf,
+                      const struct YuvConstants* yuvconstants,
+                      int width) {
+  int x;
+  for (x = 0; x < width - 1; x += 2) {
+    YuvPixel(src_y[0], src_vu[1], src_vu[0], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
+    YuvPixel(src_y[1], src_vu[1], src_vu[0], rgb_buf + 3, rgb_buf + 4,
+             rgb_buf + 5, yuvconstants);
+    src_y += 2;
+    src_vu += 2;
+    rgb_buf += 6;  // Advance 2 pixels.
+  }
+  if (width & 1) {
+    YuvPixel(src_y[0], src_vu[1], src_vu[0], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
+  }
+}
+
+void NV12ToRGB565Row_C(const uint8_t* src_y,
+                       const uint8_t* src_uv,
+                       uint8_t* dst_rgb565,
                        const struct YuvConstants* yuvconstants,
                        int width) {
-  uint8 b0;
-  uint8 g0;
-  uint8 r0;
-  uint8 b1;
-  uint8 g1;
-  uint8 r1;
+  uint8_t b0;
+  uint8_t g0;
+  uint8_t r0;
+  uint8_t b1;
+  uint8_t g1;
+  uint8_t r1;
   int x;
   for (x = 0; x < width - 1; x += 2) {
     YuvPixel(src_y[0], src_uv[0], src_uv[1], &b0, &g0, &r0, yuvconstants);
@@ -1640,8 +1921,8 @@
     b1 = b1 >> 3;
     g1 = g1 >> 2;
     r1 = r1 >> 3;
-    *(uint32*)(dst_rgb565) = b0 | (g0 << 5) | (r0 << 11) |
-        (b1 << 16) | (g1 << 21) | (r1 << 27);
+    *(uint32_t*)(dst_rgb565) =
+        b0 | (g0 << 5) | (r0 << 11) | (b1 << 16) | (g1 << 21) | (r1 << 27);
     src_y += 2;
     src_uv += 2;
     dst_rgb565 += 4;  // Advance 2 pixels.
@@ -1651,67 +1932,67 @@
     b0 = b0 >> 3;
     g0 = g0 >> 2;
     r0 = r0 >> 3;
-    *(uint16*)(dst_rgb565) = b0 | (g0 << 5) | (r0 << 11);
+    *(uint16_t*)(dst_rgb565) = b0 | (g0 << 5) | (r0 << 11);
   }
 }
 
-void YUY2ToARGBRow_C(const uint8* src_yuy2,
-                     uint8* rgb_buf,
+void YUY2ToARGBRow_C(const uint8_t* src_yuy2,
+                     uint8_t* rgb_buf,
                      const struct YuvConstants* yuvconstants,
                      int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    YuvPixel(src_yuy2[0], src_yuy2[1], src_yuy2[3],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
+    YuvPixel(src_yuy2[0], src_yuy2[1], src_yuy2[3], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
     rgb_buf[3] = 255;
-    YuvPixel(src_yuy2[2], src_yuy2[1], src_yuy2[3],
-             rgb_buf + 4, rgb_buf + 5, rgb_buf + 6, yuvconstants);
+    YuvPixel(src_yuy2[2], src_yuy2[1], src_yuy2[3], rgb_buf + 4, rgb_buf + 5,
+             rgb_buf + 6, yuvconstants);
     rgb_buf[7] = 255;
     src_yuy2 += 4;
     rgb_buf += 8;  // Advance 2 pixels.
   }
   if (width & 1) {
-    YuvPixel(src_yuy2[0], src_yuy2[1], src_yuy2[3],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
+    YuvPixel(src_yuy2[0], src_yuy2[1], src_yuy2[3], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
     rgb_buf[3] = 255;
   }
 }
 
-void UYVYToARGBRow_C(const uint8* src_uyvy,
-                     uint8* rgb_buf,
+void UYVYToARGBRow_C(const uint8_t* src_uyvy,
+                     uint8_t* rgb_buf,
                      const struct YuvConstants* yuvconstants,
                      int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    YuvPixel(src_uyvy[1], src_uyvy[0], src_uyvy[2],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
+    YuvPixel(src_uyvy[1], src_uyvy[0], src_uyvy[2], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
     rgb_buf[3] = 255;
-    YuvPixel(src_uyvy[3], src_uyvy[0], src_uyvy[2],
-             rgb_buf + 4, rgb_buf + 5, rgb_buf + 6, yuvconstants);
+    YuvPixel(src_uyvy[3], src_uyvy[0], src_uyvy[2], rgb_buf + 4, rgb_buf + 5,
+             rgb_buf + 6, yuvconstants);
     rgb_buf[7] = 255;
     src_uyvy += 4;
     rgb_buf += 8;  // Advance 2 pixels.
   }
   if (width & 1) {
-    YuvPixel(src_uyvy[1], src_uyvy[0], src_uyvy[2],
-             rgb_buf + 0, rgb_buf + 1, rgb_buf + 2, yuvconstants);
+    YuvPixel(src_uyvy[1], src_uyvy[0], src_uyvy[2], rgb_buf + 0, rgb_buf + 1,
+             rgb_buf + 2, yuvconstants);
     rgb_buf[3] = 255;
   }
 }
 
-void I422ToRGBARow_C(const uint8* src_y,
-                     const uint8* src_u,
-                     const uint8* src_v,
-                     uint8* rgb_buf,
+void I422ToRGBARow_C(const uint8_t* src_y,
+                     const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* rgb_buf,
                      const struct YuvConstants* yuvconstants,
                      int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    YuvPixel(src_y[0], src_u[0], src_v[0],
-             rgb_buf + 1, rgb_buf + 2, rgb_buf + 3, yuvconstants);
+    YuvPixel(src_y[0], src_u[0], src_v[0], rgb_buf + 1, rgb_buf + 2,
+             rgb_buf + 3, yuvconstants);
     rgb_buf[0] = 255;
-    YuvPixel(src_y[1], src_u[0], src_v[0],
-             rgb_buf + 5, rgb_buf + 6, rgb_buf + 7, yuvconstants);
+    YuvPixel(src_y[1], src_u[0], src_v[0], rgb_buf + 5, rgb_buf + 6,
+             rgb_buf + 7, yuvconstants);
     rgb_buf[4] = 255;
     src_y += 2;
     src_u += 1;
@@ -1719,13 +2000,13 @@
     rgb_buf += 8;  // Advance 2 pixels.
   }
   if (width & 1) {
-    YuvPixel(src_y[0], src_u[0], src_v[0],
-             rgb_buf + 1, rgb_buf + 2, rgb_buf + 3, yuvconstants);
+    YuvPixel(src_y[0], src_u[0], src_v[0], rgb_buf + 1, rgb_buf + 2,
+             rgb_buf + 3, yuvconstants);
     rgb_buf[0] = 255;
   }
 }
 
-void I400ToARGBRow_C(const uint8* src_y, uint8* rgb_buf, int width) {
+void I400ToARGBRow_C(const uint8_t* src_y, uint8_t* rgb_buf, int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
     YPixel(src_y[0], rgb_buf + 0, rgb_buf + 1, rgb_buf + 2);
@@ -1741,7 +2022,7 @@
   }
 }
 
-void MirrorRow_C(const uint8* src, uint8* dst, int width) {
+void MirrorRow_C(const uint8_t* src, uint8_t* dst, int width) {
   int x;
   src += width - 1;
   for (x = 0; x < width - 1; x += 2) {
@@ -1754,7 +2035,10 @@
   }
 }
 
-void MirrorUVRow_C(const uint8* src_uv, uint8* dst_u, uint8* dst_v, int width) {
+void MirrorUVRow_C(const uint8_t* src_uv,
+                   uint8_t* dst_u,
+                   uint8_t* dst_v,
+                   int width) {
   int x;
   src_uv += (width - 1) << 1;
   for (x = 0; x < width - 1; x += 2) {
@@ -1770,10 +2054,10 @@
   }
 }
 
-void ARGBMirrorRow_C(const uint8* src, uint8* dst, int width) {
+void ARGBMirrorRow_C(const uint8_t* src, uint8_t* dst, int width) {
   int x;
-  const uint32* src32 = (const uint32*)(src);
-  uint32* dst32 = (uint32*)(dst);
+  const uint32_t* src32 = (const uint32_t*)(src);
+  uint32_t* dst32 = (uint32_t*)(dst);
   src32 += width - 1;
   for (x = 0; x < width - 1; x += 2) {
     dst32[x] = src32[0];
@@ -1785,7 +2069,10 @@
   }
 }
 
-void SplitUVRow_C(const uint8* src_uv, uint8* dst_u, uint8* dst_v, int width) {
+void SplitUVRow_C(const uint8_t* src_uv,
+                  uint8_t* dst_u,
+                  uint8_t* dst_v,
+                  int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
     dst_u[x] = src_uv[0];
@@ -1800,7 +2087,9 @@
   }
 }
 
-void MergeUVRow_C(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
+void MergeUVRow_C(const uint8_t* src_u,
+                  const uint8_t* src_v,
+                  uint8_t* dst_uv,
                   int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
@@ -1816,20 +2105,110 @@
   }
 }
 
-void CopyRow_C(const uint8* src, uint8* dst, int count) {
+void SplitRGBRow_C(const uint8_t* src_rgb,
+                   uint8_t* dst_r,
+                   uint8_t* dst_g,
+                   uint8_t* dst_b,
+                   int width) {
+  int x;
+  for (x = 0; x < width; ++x) {
+    dst_r[x] = src_rgb[0];
+    dst_g[x] = src_rgb[1];
+    dst_b[x] = src_rgb[2];
+    src_rgb += 3;
+  }
+}
+
+void MergeRGBRow_C(const uint8_t* src_r,
+                   const uint8_t* src_g,
+                   const uint8_t* src_b,
+                   uint8_t* dst_rgb,
+                   int width) {
+  int x;
+  for (x = 0; x < width; ++x) {
+    dst_rgb[0] = src_r[x];
+    dst_rgb[1] = src_g[x];
+    dst_rgb[2] = src_b[x];
+    dst_rgb += 3;
+  }
+}
+
+// Use scale to convert lsb formats to msb, depending on how many bits there are:
+// 128 = 9 bits
+// 64 = 10 bits
+// 16 = 12 bits
+// 1 = 16 bits
+void MergeUVRow_16_C(const uint16_t* src_u,
+                     const uint16_t* src_v,
+                     uint16_t* dst_uv,
+                     int scale,
+                     int width) {
+  int x;
+  for (x = 0; x < width - 1; x += 2) {
+    dst_uv[0] = src_u[x] * scale;
+    dst_uv[1] = src_v[x] * scale;
+    dst_uv[2] = src_u[x + 1] * scale;
+    dst_uv[3] = src_v[x + 1] * scale;
+    dst_uv += 4;
+  }
+  if (width & 1) {
+    dst_uv[0] = src_u[width - 1] * scale;
+    dst_uv[1] = src_v[width - 1] * scale;
+  }
+}
+
+void MultiplyRow_16_C(const uint16_t* src_y,
+                      uint16_t* dst_y,
+                      int scale,
+                      int width) {
+  int x;
+  for (x = 0; x < width; ++x) {
+    dst_y[x] = src_y[x] * scale;
+  }
+}
+
+// Use scale to convert lsb formats to msb, depending on how many bits there are:
+// 32768 = 9 bits
+// 16384 = 10 bits
+// 4096 = 12 bits
+// 256 = 16 bits
+void Convert16To8Row_C(const uint16_t* src_y,
+                       uint8_t* dst_y,
+                       int scale,
+                       int width) {
+  int x;
+  for (x = 0; x < width; ++x) {
+    dst_y[x] = clamp255((src_y[x] * scale) >> 16);
+  }
+}
+
+// Use scale to convert lsb formats to msb, depending on how many bits there are:
+// 1024 = 10 bits
+void Convert8To16Row_C(const uint8_t* src_y,
+                       uint16_t* dst_y,
+                       int scale,
+                       int width) {
+  int x;
+  scale *= 0x0101;  // replicates the byte.
+  for (x = 0; x < width; ++x) {
+    dst_y[x] = (src_y[x] * scale) >> 16;
+  }
+}
+
+void CopyRow_C(const uint8_t* src, uint8_t* dst, int count) {
   memcpy(dst, src, count);
 }
 
-void CopyRow_16_C(const uint16* src, uint16* dst, int count) {
+void CopyRow_16_C(const uint16_t* src, uint16_t* dst, int count) {
   memcpy(dst, src, count * 2);
 }
 
-void SetRow_C(uint8* dst, uint8 v8, int width) {
+void SetRow_C(uint8_t* dst, uint8_t v8, int width) {
   memset(dst, v8, width);
 }
 
-void ARGBSetRow_C(uint8* dst_argb, uint32 v32, int width) {
-  uint32* d = (uint32*)(dst_argb);
+void ARGBSetRow_C(uint8_t* dst_argb, uint32_t v32, int width) {
+  uint32_t* d = (uint32_t*)(dst_argb);
   int x;
   for (x = 0; x < width; ++x) {
     d[x] = v32;
@@ -1837,8 +2216,11 @@
 }
 
 // Filter 2 rows of YUY2 UV's (422) into U and V (420).
-void YUY2ToUVRow_C(const uint8* src_yuy2, int src_stride_yuy2,
-                   uint8* dst_u, uint8* dst_v, int width) {
+void YUY2ToUVRow_C(const uint8_t* src_yuy2,
+                   int src_stride_yuy2,
+                   uint8_t* dst_u,
+                   uint8_t* dst_v,
+                   int width) {
   // Output a row of UV values, filtering 2 rows of YUY2.
   int x;
   for (x = 0; x < width; x += 2) {
@@ -1851,8 +2233,10 @@
 }
 
 // Copy row of YUY2 UV's (422) into U and V (422).
-void YUY2ToUV422Row_C(const uint8* src_yuy2,
-                      uint8* dst_u, uint8* dst_v, int width) {
+void YUY2ToUV422Row_C(const uint8_t* src_yuy2,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
   // Output a row of UV values.
   int x;
   for (x = 0; x < width; x += 2) {
@@ -1865,7 +2249,7 @@
 }
 
 // Copy row of YUY2 Y's (422) into Y (420/422).
-void YUY2ToYRow_C(const uint8* src_yuy2, uint8* dst_y, int width) {
+void YUY2ToYRow_C(const uint8_t* src_yuy2, uint8_t* dst_y, int width) {
   // Output a row of Y values.
   int x;
   for (x = 0; x < width - 1; x += 2) {
@@ -1879,8 +2263,11 @@
 }
 
 // Filter 2 rows of UYVY UV's (422) into U and V (420).
-void UYVYToUVRow_C(const uint8* src_uyvy, int src_stride_uyvy,
-                   uint8* dst_u, uint8* dst_v, int width) {
+void UYVYToUVRow_C(const uint8_t* src_uyvy,
+                   int src_stride_uyvy,
+                   uint8_t* dst_u,
+                   uint8_t* dst_v,
+                   int width) {
   // Output a row of UV values.
   int x;
   for (x = 0; x < width; x += 2) {
@@ -1893,8 +2280,10 @@
 }
 
 // Copy row of UYVY UV's (422) into U and V (422).
-void UYVYToUV422Row_C(const uint8* src_uyvy,
-                      uint8* dst_u, uint8* dst_v, int width) {
+void UYVYToUV422Row_C(const uint8_t* src_uyvy,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
   // Output a row of UV values.
   int x;
   for (x = 0; x < width; x += 2) {
@@ -1907,7 +2296,7 @@
 }
 
 // Copy row of UYVY Y's (422) into Y (420/422).
-void UYVYToYRow_C(const uint8* src_uyvy, uint8* dst_y, int width) {
+void UYVYToYRow_C(const uint8_t* src_uyvy, uint8_t* dst_y, int width) {
   // Output a row of Y values.
   int x;
   for (x = 0; x < width - 1; x += 2) {
@@ -1925,17 +2314,19 @@
 // Blend src_argb0 over src_argb1 and store to dst_argb.
 // dst_argb may be src_argb0 or src_argb1.
 // This code mimics the SSSE3 version for better testability.
-void ARGBBlendRow_C(const uint8* src_argb0, const uint8* src_argb1,
-                    uint8* dst_argb, int width) {
+void ARGBBlendRow_C(const uint8_t* src_argb0,
+                    const uint8_t* src_argb1,
+                    uint8_t* dst_argb,
+                    int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
-    uint32 fb = src_argb0[0];
-    uint32 fg = src_argb0[1];
-    uint32 fr = src_argb0[2];
-    uint32 a = src_argb0[3];
-    uint32 bb = src_argb1[0];
-    uint32 bg = src_argb1[1];
-    uint32 br = src_argb1[2];
+    uint32_t fb = src_argb0[0];
+    uint32_t fg = src_argb0[1];
+    uint32_t fr = src_argb0[2];
+    uint32_t a = src_argb0[3];
+    uint32_t bb = src_argb1[0];
+    uint32_t bg = src_argb1[1];
+    uint32_t br = src_argb1[2];
     dst_argb[0] = BLEND(fb, bb, a);
     dst_argb[1] = BLEND(fg, bg, a);
     dst_argb[2] = BLEND(fr, br, a);
@@ -1958,13 +2349,13 @@
   }
 
   if (width & 1) {
-    uint32 fb = src_argb0[0];
-    uint32 fg = src_argb0[1];
-    uint32 fr = src_argb0[2];
-    uint32 a = src_argb0[3];
-    uint32 bb = src_argb1[0];
-    uint32 bg = src_argb1[1];
-    uint32 br = src_argb1[2];
+    uint32_t fb = src_argb0[0];
+    uint32_t fg = src_argb0[1];
+    uint32_t fr = src_argb0[2];
+    uint32_t a = src_argb0[3];
+    uint32_t bb = src_argb1[0];
+    uint32_t bg = src_argb1[1];
+    uint32_t br = src_argb1[2];
     dst_argb[0] = BLEND(fb, bb, a);
     dst_argb[1] = BLEND(fg, bg, a);
     dst_argb[2] = BLEND(fr, br, a);
@@ -1973,9 +2364,12 @@
 }
 #undef BLEND
 
-#define UBLEND(f, b, a) (((a) * f) + ((255 - a) * b) + 255) >> 8
-void BlendPlaneRow_C(const uint8* src0, const uint8* src1,
-                     const uint8* alpha, uint8* dst, int width) {
+#define UBLEND(f, b, a) (((a)*f) + ((255 - a) * b) + 255) >> 8
+void BlendPlaneRow_C(const uint8_t* src0,
+                     const uint8_t* src1,
+                     const uint8_t* alpha,
+                     uint8_t* dst,
+                     int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
     dst[0] = UBLEND(src0[0], src1[0], alpha[0]);
@@ -1995,13 +2389,13 @@
 
 // Multiply source RGB by alpha and store to destination.
 // This code mimics the SSSE3 version for better testability.
-void ARGBAttenuateRow_C(const uint8* src_argb, uint8* dst_argb, int width) {
+void ARGBAttenuateRow_C(const uint8_t* src_argb, uint8_t* dst_argb, int width) {
   int i;
   for (i = 0; i < width - 1; i += 2) {
-    uint32 b = src_argb[0];
-    uint32 g = src_argb[1];
-    uint32 r = src_argb[2];
-    uint32 a = src_argb[3];
+    uint32_t b = src_argb[0];
+    uint32_t g = src_argb[1];
+    uint32_t r = src_argb[2];
+    uint32_t a = src_argb[3];
     dst_argb[0] = ATTENUATE(b, a);
     dst_argb[1] = ATTENUATE(g, a);
     dst_argb[2] = ATTENUATE(r, a);
@@ -2019,10 +2413,10 @@
   }
 
   if (width & 1) {
-    const uint32 b = src_argb[0];
-    const uint32 g = src_argb[1];
-    const uint32 r = src_argb[2];
-    const uint32 a = src_argb[3];
+    const uint32_t b = src_argb[0];
+    const uint32_t g = src_argb[1];
+    const uint32_t r = src_argb[2];
+    const uint32_t a = src_argb[3];
     dst_argb[0] = ATTENUATE(b, a);
     dst_argb[1] = ATTENUATE(g, a);
     dst_argb[2] = ATTENUATE(r, a);
@@ -2038,49 +2432,56 @@
 // Reciprocal method is off by 1 on some values. ie 125
 // 8.8 fixed point inverse table with 1.0 in upper short and 1 / a in lower.
 #define T(a) 0x01000000 + (0x10000 / a)
-const uint32 fixed_invtbl8[256] = {
-  0x01000000, 0x0100ffff, T(0x02), T(0x03), T(0x04), T(0x05), T(0x06), T(0x07),
-  T(0x08), T(0x09), T(0x0a), T(0x0b), T(0x0c), T(0x0d), T(0x0e), T(0x0f),
-  T(0x10), T(0x11), T(0x12), T(0x13), T(0x14), T(0x15), T(0x16), T(0x17),
-  T(0x18), T(0x19), T(0x1a), T(0x1b), T(0x1c), T(0x1d), T(0x1e), T(0x1f),
-  T(0x20), T(0x21), T(0x22), T(0x23), T(0x24), T(0x25), T(0x26), T(0x27),
-  T(0x28), T(0x29), T(0x2a), T(0x2b), T(0x2c), T(0x2d), T(0x2e), T(0x2f),
-  T(0x30), T(0x31), T(0x32), T(0x33), T(0x34), T(0x35), T(0x36), T(0x37),
-  T(0x38), T(0x39), T(0x3a), T(0x3b), T(0x3c), T(0x3d), T(0x3e), T(0x3f),
-  T(0x40), T(0x41), T(0x42), T(0x43), T(0x44), T(0x45), T(0x46), T(0x47),
-  T(0x48), T(0x49), T(0x4a), T(0x4b), T(0x4c), T(0x4d), T(0x4e), T(0x4f),
-  T(0x50), T(0x51), T(0x52), T(0x53), T(0x54), T(0x55), T(0x56), T(0x57),
-  T(0x58), T(0x59), T(0x5a), T(0x5b), T(0x5c), T(0x5d), T(0x5e), T(0x5f),
-  T(0x60), T(0x61), T(0x62), T(0x63), T(0x64), T(0x65), T(0x66), T(0x67),
-  T(0x68), T(0x69), T(0x6a), T(0x6b), T(0x6c), T(0x6d), T(0x6e), T(0x6f),
-  T(0x70), T(0x71), T(0x72), T(0x73), T(0x74), T(0x75), T(0x76), T(0x77),
-  T(0x78), T(0x79), T(0x7a), T(0x7b), T(0x7c), T(0x7d), T(0x7e), T(0x7f),
-  T(0x80), T(0x81), T(0x82), T(0x83), T(0x84), T(0x85), T(0x86), T(0x87),
-  T(0x88), T(0x89), T(0x8a), T(0x8b), T(0x8c), T(0x8d), T(0x8e), T(0x8f),
-  T(0x90), T(0x91), T(0x92), T(0x93), T(0x94), T(0x95), T(0x96), T(0x97),
-  T(0x98), T(0x99), T(0x9a), T(0x9b), T(0x9c), T(0x9d), T(0x9e), T(0x9f),
-  T(0xa0), T(0xa1), T(0xa2), T(0xa3), T(0xa4), T(0xa5), T(0xa6), T(0xa7),
-  T(0xa8), T(0xa9), T(0xaa), T(0xab), T(0xac), T(0xad), T(0xae), T(0xaf),
-  T(0xb0), T(0xb1), T(0xb2), T(0xb3), T(0xb4), T(0xb5), T(0xb6), T(0xb7),
-  T(0xb8), T(0xb9), T(0xba), T(0xbb), T(0xbc), T(0xbd), T(0xbe), T(0xbf),
-  T(0xc0), T(0xc1), T(0xc2), T(0xc3), T(0xc4), T(0xc5), T(0xc6), T(0xc7),
-  T(0xc8), T(0xc9), T(0xca), T(0xcb), T(0xcc), T(0xcd), T(0xce), T(0xcf),
-  T(0xd0), T(0xd1), T(0xd2), T(0xd3), T(0xd4), T(0xd5), T(0xd6), T(0xd7),
-  T(0xd8), T(0xd9), T(0xda), T(0xdb), T(0xdc), T(0xdd), T(0xde), T(0xdf),
-  T(0xe0), T(0xe1), T(0xe2), T(0xe3), T(0xe4), T(0xe5), T(0xe6), T(0xe7),
-  T(0xe8), T(0xe9), T(0xea), T(0xeb), T(0xec), T(0xed), T(0xee), T(0xef),
-  T(0xf0), T(0xf1), T(0xf2), T(0xf3), T(0xf4), T(0xf5), T(0xf6), T(0xf7),
-  T(0xf8), T(0xf9), T(0xfa), T(0xfb), T(0xfc), T(0xfd), T(0xfe), 0x01000100 };
+const uint32_t fixed_invtbl8[256] = {
+    0x01000000, 0x0100ffff, T(0x02), T(0x03),   T(0x04), T(0x05), T(0x06),
+    T(0x07),    T(0x08),    T(0x09), T(0x0a),   T(0x0b), T(0x0c), T(0x0d),
+    T(0x0e),    T(0x0f),    T(0x10), T(0x11),   T(0x12), T(0x13), T(0x14),
+    T(0x15),    T(0x16),    T(0x17), T(0x18),   T(0x19), T(0x1a), T(0x1b),
+    T(0x1c),    T(0x1d),    T(0x1e), T(0x1f),   T(0x20), T(0x21), T(0x22),
+    T(0x23),    T(0x24),    T(0x25), T(0x26),   T(0x27), T(0x28), T(0x29),
+    T(0x2a),    T(0x2b),    T(0x2c), T(0x2d),   T(0x2e), T(0x2f), T(0x30),
+    T(0x31),    T(0x32),    T(0x33), T(0x34),   T(0x35), T(0x36), T(0x37),
+    T(0x38),    T(0x39),    T(0x3a), T(0x3b),   T(0x3c), T(0x3d), T(0x3e),
+    T(0x3f),    T(0x40),    T(0x41), T(0x42),   T(0x43), T(0x44), T(0x45),
+    T(0x46),    T(0x47),    T(0x48), T(0x49),   T(0x4a), T(0x4b), T(0x4c),
+    T(0x4d),    T(0x4e),    T(0x4f), T(0x50),   T(0x51), T(0x52), T(0x53),
+    T(0x54),    T(0x55),    T(0x56), T(0x57),   T(0x58), T(0x59), T(0x5a),
+    T(0x5b),    T(0x5c),    T(0x5d), T(0x5e),   T(0x5f), T(0x60), T(0x61),
+    T(0x62),    T(0x63),    T(0x64), T(0x65),   T(0x66), T(0x67), T(0x68),
+    T(0x69),    T(0x6a),    T(0x6b), T(0x6c),   T(0x6d), T(0x6e), T(0x6f),
+    T(0x70),    T(0x71),    T(0x72), T(0x73),   T(0x74), T(0x75), T(0x76),
+    T(0x77),    T(0x78),    T(0x79), T(0x7a),   T(0x7b), T(0x7c), T(0x7d),
+    T(0x7e),    T(0x7f),    T(0x80), T(0x81),   T(0x82), T(0x83), T(0x84),
+    T(0x85),    T(0x86),    T(0x87), T(0x88),   T(0x89), T(0x8a), T(0x8b),
+    T(0x8c),    T(0x8d),    T(0x8e), T(0x8f),   T(0x90), T(0x91), T(0x92),
+    T(0x93),    T(0x94),    T(0x95), T(0x96),   T(0x97), T(0x98), T(0x99),
+    T(0x9a),    T(0x9b),    T(0x9c), T(0x9d),   T(0x9e), T(0x9f), T(0xa0),
+    T(0xa1),    T(0xa2),    T(0xa3), T(0xa4),   T(0xa5), T(0xa6), T(0xa7),
+    T(0xa8),    T(0xa9),    T(0xaa), T(0xab),   T(0xac), T(0xad), T(0xae),
+    T(0xaf),    T(0xb0),    T(0xb1), T(0xb2),   T(0xb3), T(0xb4), T(0xb5),
+    T(0xb6),    T(0xb7),    T(0xb8), T(0xb9),   T(0xba), T(0xbb), T(0xbc),
+    T(0xbd),    T(0xbe),    T(0xbf), T(0xc0),   T(0xc1), T(0xc2), T(0xc3),
+    T(0xc4),    T(0xc5),    T(0xc6), T(0xc7),   T(0xc8), T(0xc9), T(0xca),
+    T(0xcb),    T(0xcc),    T(0xcd), T(0xce),   T(0xcf), T(0xd0), T(0xd1),
+    T(0xd2),    T(0xd3),    T(0xd4), T(0xd5),   T(0xd6), T(0xd7), T(0xd8),
+    T(0xd9),    T(0xda),    T(0xdb), T(0xdc),   T(0xdd), T(0xde), T(0xdf),
+    T(0xe0),    T(0xe1),    T(0xe2), T(0xe3),   T(0xe4), T(0xe5), T(0xe6),
+    T(0xe7),    T(0xe8),    T(0xe9), T(0xea),   T(0xeb), T(0xec), T(0xed),
+    T(0xee),    T(0xef),    T(0xf0), T(0xf1),   T(0xf2), T(0xf3), T(0xf4),
+    T(0xf5),    T(0xf6),    T(0xf7), T(0xf8),   T(0xf9), T(0xfa), T(0xfb),
+    T(0xfc),    T(0xfd),    T(0xfe), 0x01000100};
 #undef T
 
-void ARGBUnattenuateRow_C(const uint8* src_argb, uint8* dst_argb, int width) {
+void ARGBUnattenuateRow_C(const uint8_t* src_argb,
+                          uint8_t* dst_argb,
+                          int width) {
   int i;
   for (i = 0; i < width; ++i) {
-    uint32 b = src_argb[0];
-    uint32 g = src_argb[1];
-    uint32 r = src_argb[2];
-    const uint32 a = src_argb[3];
-    const uint32 ia = fixed_invtbl8[a] & 0xffff;  // 8.8 fixed point
+    uint32_t b = src_argb[0];
+    uint32_t g = src_argb[1];
+    uint32_t r = src_argb[2];
+    const uint32_t a = src_argb[3];
+    const uint32_t ia = fixed_invtbl8[a] & 0xffff;  // 8.8 fixed point
     b = (b * ia) >> 8;
     g = (g * ia) >> 8;
     r = (r * ia) >> 8;
@@ -2094,31 +2495,37 @@
   }
 }
 
-void ComputeCumulativeSumRow_C(const uint8* row, int32* cumsum,
-                               const int32* previous_cumsum, int width) {
-  int32 row_sum[4] = {0, 0, 0, 0};
+void ComputeCumulativeSumRow_C(const uint8_t* row,
+                               int32_t* cumsum,
+                               const int32_t* previous_cumsum,
+                               int width) {
+  int32_t row_sum[4] = {0, 0, 0, 0};
   int x;
   for (x = 0; x < width; ++x) {
     row_sum[0] += row[x * 4 + 0];
     row_sum[1] += row[x * 4 + 1];
     row_sum[2] += row[x * 4 + 2];
     row_sum[3] += row[x * 4 + 3];
-    cumsum[x * 4 + 0] = row_sum[0]  + previous_cumsum[x * 4 + 0];
-    cumsum[x * 4 + 1] = row_sum[1]  + previous_cumsum[x * 4 + 1];
-    cumsum[x * 4 + 2] = row_sum[2]  + previous_cumsum[x * 4 + 2];
-    cumsum[x * 4 + 3] = row_sum[3]  + previous_cumsum[x * 4 + 3];
+    cumsum[x * 4 + 0] = row_sum[0] + previous_cumsum[x * 4 + 0];
+    cumsum[x * 4 + 1] = row_sum[1] + previous_cumsum[x * 4 + 1];
+    cumsum[x * 4 + 2] = row_sum[2] + previous_cumsum[x * 4 + 2];
+    cumsum[x * 4 + 3] = row_sum[3] + previous_cumsum[x * 4 + 3];
   }
 }
 
-void CumulativeSumToAverageRow_C(const int32* tl, const int32* bl,
-                                int w, int area, uint8* dst, int count) {
+void CumulativeSumToAverageRow_C(const int32_t* tl,
+                                 const int32_t* bl,
+                                 int w,
+                                 int area,
+                                 uint8_t* dst,
+                                 int count) {
   float ooa = 1.0f / area;
   int i;
   for (i = 0; i < count; ++i) {
-    dst[0] = (uint8)((bl[w + 0] + tl[0] - bl[0] - tl[w + 0]) * ooa);
-    dst[1] = (uint8)((bl[w + 1] + tl[1] - bl[1] - tl[w + 1]) * ooa);
-    dst[2] = (uint8)((bl[w + 2] + tl[2] - bl[2] - tl[w + 2]) * ooa);
-    dst[3] = (uint8)((bl[w + 3] + tl[3] - bl[3] - tl[w + 3]) * ooa);
+    dst[0] = (uint8_t)((bl[w + 0] + tl[0] - bl[0] - tl[w + 0]) * ooa);
+    dst[1] = (uint8_t)((bl[w + 1] + tl[1] - bl[1] - tl[w + 1]) * ooa);
+    dst[2] = (uint8_t)((bl[w + 2] + tl[2] - bl[2] - tl[w + 2]) * ooa);
+    dst[3] = (uint8_t)((bl[w + 3] + tl[3] - bl[3] - tl[w + 3]) * ooa);
     dst += 4;
     tl += 4;
     bl += 4;
@@ -2127,8 +2534,11 @@
 
 // Copy pixels from rotated source to destination row with a slope.
 LIBYUV_API
-void ARGBAffineRow_C(const uint8* src_argb, int src_argb_stride,
-                     uint8* dst_argb, const float* uv_dudv, int width) {
+void ARGBAffineRow_C(const uint8_t* src_argb,
+                     int src_argb_stride,
+                     uint8_t* dst_argb,
+                     const float* uv_dudv,
+                     int width) {
   int i;
   // Render a row of pixels from source into a buffer.
   float uv[2];
@@ -2137,9 +2547,8 @@
   for (i = 0; i < width; ++i) {
     int x = (int)(uv[0]);
     int y = (int)(uv[1]);
-    *(uint32*)(dst_argb) =
-        *(const uint32*)(src_argb + y * src_argb_stride +
-                                         x * 4);
+    *(uint32_t*)(dst_argb) =
+        *(const uint32_t*)(src_argb + y * src_argb_stride + x * 4);
     dst_argb += 4;
     uv[0] += uv_dudv[2];
     uv[1] += uv_dudv[3];
@@ -2147,16 +2556,20 @@
 }
 
 // Blend 2 rows into 1.
-static void HalfRow_C(const uint8* src_uv, ptrdiff_t src_uv_stride,
-                      uint8* dst_uv, int width) {
+static void HalfRow_C(const uint8_t* src_uv,
+                      ptrdiff_t src_uv_stride,
+                      uint8_t* dst_uv,
+                      int width) {
   int x;
   for (x = 0; x < width; ++x) {
     dst_uv[x] = (src_uv[x] + src_uv[src_uv_stride + x] + 1) >> 1;
   }
 }
 
-static void HalfRow_16_C(const uint16* src_uv, ptrdiff_t src_uv_stride,
-                         uint16* dst_uv, int width) {
+static void HalfRow_16_C(const uint16_t* src_uv,
+                         ptrdiff_t src_uv_stride,
+                         uint16_t* dst_uv,
+                         int width) {
   int x;
   for (x = 0; x < width; ++x) {
     dst_uv[x] = (src_uv[x] + src_uv[src_uv_stride + x] + 1) >> 1;
@@ -2164,12 +2577,14 @@
 }
 
 // C version 2x2 -> 2x1.
-void InterpolateRow_C(uint8* dst_ptr, const uint8* src_ptr,
+void InterpolateRow_C(uint8_t* dst_ptr,
+                      const uint8_t* src_ptr,
                       ptrdiff_t src_stride,
-                      int width, int source_y_fraction) {
+                      int width,
+                      int source_y_fraction) {
   int y1_fraction = source_y_fraction;
   int y0_fraction = 256 - y1_fraction;
-  const uint8* src_ptr1 = src_ptr + src_stride;
+  const uint8_t* src_ptr1 = src_ptr + src_stride;
   int x;
   if (y1_fraction == 0) {
     memcpy(dst_ptr, src_ptr, width);
@@ -2194,12 +2609,14 @@
   }
 }
 
-void InterpolateRow_16_C(uint16* dst_ptr, const uint16* src_ptr,
+void InterpolateRow_16_C(uint16_t* dst_ptr,
+                         const uint16_t* src_ptr,
                          ptrdiff_t src_stride,
-                         int width, int source_y_fraction) {
+                         int width,
+                         int source_y_fraction) {
   int y1_fraction = source_y_fraction;
   int y0_fraction = 256 - y1_fraction;
-  const uint16* src_ptr1 = src_ptr + src_stride;
+  const uint16_t* src_ptr1 = src_ptr + src_stride;
   int x;
   if (source_y_fraction == 0) {
     memcpy(dst_ptr, src_ptr, width * 2);
@@ -2222,8 +2639,10 @@
 }
 
 // Use first 4 shuffler values to reorder ARGB channels.
-void ARGBShuffleRow_C(const uint8* src_argb, uint8* dst_argb,
-                      const uint8* shuffler, int width) {
+void ARGBShuffleRow_C(const uint8_t* src_argb,
+                      uint8_t* dst_argb,
+                      const uint8_t* shuffler,
+                      int width) {
   int index0 = shuffler[0];
   int index1 = shuffler[1];
   int index2 = shuffler[2];
@@ -2232,10 +2651,10 @@
   int x;
   for (x = 0; x < width; ++x) {
     // To support in-place conversion.
-    uint8 b = src_argb[index0];
-    uint8 g = src_argb[index1];
-    uint8 r = src_argb[index2];
-    uint8 a = src_argb[index3];
+    uint8_t b = src_argb[index0];
+    uint8_t g = src_argb[index1];
+    uint8_t r = src_argb[index2];
+    uint8_t a = src_argb[index3];
     dst_argb[0] = b;
     dst_argb[1] = g;
     dst_argb[2] = r;
@@ -2245,10 +2664,11 @@
   }
 }
 
-void I422ToYUY2Row_C(const uint8* src_y,
-                     const uint8* src_u,
-                     const uint8* src_v,
-                     uint8* dst_frame, int width) {
+void I422ToYUY2Row_C(const uint8_t* src_y,
+                     const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* dst_frame,
+                     int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
     dst_frame[0] = src_y[0];
@@ -2268,10 +2688,11 @@
   }
 }
 
-void I422ToUYVYRow_C(const uint8* src_y,
-                     const uint8* src_u,
-                     const uint8* src_v,
-                     uint8* dst_frame, int width) {
+void I422ToUYVYRow_C(const uint8_t* src_y,
+                     const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* dst_frame,
+                     int width) {
   int x;
   for (x = 0; x < width - 1; x += 2) {
     dst_frame[0] = src_u[0];
@@ -2291,9 +2712,8 @@
   }
 }
 
-
-void ARGBPolynomialRow_C(const uint8* src_argb,
-                         uint8* dst_argb,
+void ARGBPolynomialRow_C(const uint8_t* src_argb,
+                         uint8_t* dst_argb,
                          const float* poly,
                          int width) {
   int i;
@@ -2323,33 +2743,75 @@
     dr += poly[14] * r3;
     da += poly[15] * a3;
 
-    dst_argb[0] = Clamp((int32)(db));
-    dst_argb[1] = Clamp((int32)(dg));
-    dst_argb[2] = Clamp((int32)(dr));
-    dst_argb[3] = Clamp((int32)(da));
+    dst_argb[0] = Clamp((int32_t)(db));
+    dst_argb[1] = Clamp((int32_t)(dg));
+    dst_argb[2] = Clamp((int32_t)(dr));
+    dst_argb[3] = Clamp((int32_t)(da));
     src_argb += 4;
     dst_argb += 4;
   }
 }
 
-void ARGBLumaColorTableRow_C(const uint8* src_argb, uint8* dst_argb, int width,
-                             const uint8* luma, uint32 lumacoeff) {
-  uint32 bc = lumacoeff & 0xff;
-  uint32 gc = (lumacoeff >> 8) & 0xff;
-  uint32 rc = (lumacoeff >> 16) & 0xff;
+// Samples are assumed to be unsigned, in the low 9, 10 or 12 bits.  The scale
+// factor adjusts the source integer range to the desired half-float range.
+
+// This magic constant is 2^-112. Multiplying by this
+// is the same as subtracting 112 from the exponent, which
+// is the difference in exponent bias between 32-bit and
+// 16-bit floats. Once we've done this subtraction, we can
+// simply extract the low bits of the exponent and the high
+// bits of the mantissa from our float and we're done.
+
+// Work around GCC 7 punning warning -Wstrict-aliasing
+#if defined(__GNUC__)
+typedef uint32_t __attribute__((__may_alias__)) uint32_alias_t;
+#else
+typedef uint32_t uint32_alias_t;
+#endif
+
+void HalfFloatRow_C(const uint16_t* src,
+                    uint16_t* dst,
+                    float scale,
+                    int width) {
+  int i;
+  float mult = 1.9259299444e-34f * scale;
+  for (i = 0; i < width; ++i) {
+    float value = src[i] * mult;
+    dst[i] = (uint16_t)((*(const uint32_alias_t*)&value) >> 13);
+  }
+}
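The `HalfFloatRow_C` addition above relies on a float bit trick: multiplying by 2^-112 (the `1.9259299444e-34f` constant) rebases the exponent from float32's bias of 127 to float16's bias of 15, after which the half-float bits can be read directly from bits 13..28 of the float32 representation. A standalone single-sample sketch of that trick, using `memcpy` instead of the aliasing cast purely for illustration:

```c
#include <stdint.h>
#include <string.h>

/* Convert one unsigned sample to IEEE binary16 bits, mirroring
 * HalfFloatRow_C: scale into the half-float range, rebase the exponent
 * by multiplying with 2^-112, then extract bits 13..28. */
static uint16_t SampleToHalf(uint16_t sample, float scale) {
  float value = sample * (1.9259299444e-34f * scale); /* 2^-112 * scale */
  uint32_t bits;
  memcpy(&bits, &value, sizeof(bits)); /* strict-aliasing-safe load */
  return (uint16_t)(bits >> 13);
}
```

With `scale = 1.0f / 1024`, a sample of 1024 maps to half-float 1.0 (`0x3C00`) and 2048 maps to 2.0 (`0x4000`).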
+
+void ByteToFloatRow_C(const uint8_t* src, float* dst, float scale, int width) {
+  int i;
+  for (i = 0; i < width; ++i) {
+    float value = src[i] * scale;
+    dst[i] = value;
+  }
+}
+
+void ARGBLumaColorTableRow_C(const uint8_t* src_argb,
+                             uint8_t* dst_argb,
+                             int width,
+                             const uint8_t* luma,
+                             uint32_t lumacoeff) {
+  uint32_t bc = lumacoeff & 0xff;
+  uint32_t gc = (lumacoeff >> 8) & 0xff;
+  uint32_t rc = (lumacoeff >> 16) & 0xff;
 
   int i;
   for (i = 0; i < width - 1; i += 2) {
     // Luminance in rows, color values in columns.
-    const uint8* luma0 = ((src_argb[0] * bc + src_argb[1] * gc +
-                           src_argb[2] * rc) & 0x7F00u) + luma;
-    const uint8* luma1;
+    const uint8_t* luma0 =
+        ((src_argb[0] * bc + src_argb[1] * gc + src_argb[2] * rc) & 0x7F00u) +
+        luma;
+    const uint8_t* luma1;
     dst_argb[0] = luma0[src_argb[0]];
     dst_argb[1] = luma0[src_argb[1]];
     dst_argb[2] = luma0[src_argb[2]];
     dst_argb[3] = src_argb[3];
-    luma1 = ((src_argb[4] * bc + src_argb[5] * gc +
-              src_argb[6] * rc) & 0x7F00u) + luma;
+    luma1 =
+        ((src_argb[4] * bc + src_argb[5] * gc + src_argb[6] * rc) & 0x7F00u) +
+        luma;
     dst_argb[4] = luma1[src_argb[4]];
     dst_argb[5] = luma1[src_argb[5]];
     dst_argb[6] = luma1[src_argb[6]];
@@ -2359,8 +2821,9 @@
   }
   if (width & 1) {
     // Luminance in rows, color values in columns.
-    const uint8* luma0 = ((src_argb[0] * bc + src_argb[1] * gc +
-                           src_argb[2] * rc) & 0x7F00u) + luma;
+    const uint8_t* luma0 =
+        ((src_argb[0] * bc + src_argb[1] * gc + src_argb[2] * rc) & 0x7F00u) +
+        luma;
     dst_argb[0] = luma0[src_argb[0]];
     dst_argb[1] = luma0[src_argb[1]];
     dst_argb[2] = luma0[src_argb[2]];
@@ -2368,7 +2831,7 @@
   }
 }
 
-void ARGBCopyAlphaRow_C(const uint8* src, uint8* dst, int width) {
+void ARGBCopyAlphaRow_C(const uint8_t* src, uint8_t* dst, int width) {
   int i;
   for (i = 0; i < width - 1; i += 2) {
     dst[3] = src[3];
@@ -2381,7 +2844,7 @@
   }
 }
 
-void ARGBExtractAlphaRow_C(const uint8* src_argb, uint8* dst_a, int width) {
+void ARGBExtractAlphaRow_C(const uint8_t* src_argb, uint8_t* dst_a, int width) {
   int i;
   for (i = 0; i < width - 1; i += 2) {
     dst_a[0] = src_argb[3];
@@ -2394,7 +2857,7 @@
   }
 }
 
-void ARGBCopyYToAlphaRow_C(const uint8* src, uint8* dst, int width) {
+void ARGBCopyYToAlphaRow_C(const uint8_t* src, uint8_t* dst, int width) {
   int i;
   for (i = 0; i < width - 1; i += 2) {
     dst[3] = src[0];
@@ -2413,13 +2876,13 @@
 #if !(defined(_MSC_VER) && defined(_M_IX86)) && \
     defined(HAS_I422TORGB565ROW_SSSE3)
 // row_win.cc has asm version, but GCC uses 2 step wrapper.
-void I422ToRGB565Row_SSSE3(const uint8* src_y,
-                           const uint8* src_u,
-                           const uint8* src_v,
-                           uint8* dst_rgb565,
+void I422ToRGB565Row_SSSE3(const uint8_t* src_y,
+                           const uint8_t* src_u,
+                           const uint8_t* src_v,
+                           uint8_t* dst_rgb565,
                            const struct YuvConstants* yuvconstants,
                            int width) {
-  SIMD_ALIGNED(uint8 row[MAXTWIDTH * 4]);
+  SIMD_ALIGNED(uint8_t row[MAXTWIDTH * 4]);
   while (width > 0) {
     int twidth = width > MAXTWIDTH ? MAXTWIDTH : width;
     I422ToARGBRow_SSSE3(src_y, src_u, src_v, row, yuvconstants, twidth);
@@ -2434,14 +2897,14 @@
 #endif
 
 #if defined(HAS_I422TOARGB1555ROW_SSSE3)
-void I422ToARGB1555Row_SSSE3(const uint8* src_y,
-                             const uint8* src_u,
-                             const uint8* src_v,
-                             uint8* dst_argb1555,
+void I422ToARGB1555Row_SSSE3(const uint8_t* src_y,
+                             const uint8_t* src_u,
+                             const uint8_t* src_v,
+                             uint8_t* dst_argb1555,
                              const struct YuvConstants* yuvconstants,
                              int width) {
   // Row buffer for intermediate ARGB pixels.
-  SIMD_ALIGNED(uint8 row[MAXTWIDTH * 4]);
+  SIMD_ALIGNED(uint8_t row[MAXTWIDTH * 4]);
   while (width > 0) {
     int twidth = width > MAXTWIDTH ? MAXTWIDTH : width;
     I422ToARGBRow_SSSE3(src_y, src_u, src_v, row, yuvconstants, twidth);
@@ -2456,14 +2919,14 @@
 #endif
 
 #if defined(HAS_I422TOARGB4444ROW_SSSE3)
-void I422ToARGB4444Row_SSSE3(const uint8* src_y,
-                             const uint8* src_u,
-                             const uint8* src_v,
-                             uint8* dst_argb4444,
+void I422ToARGB4444Row_SSSE3(const uint8_t* src_y,
+                             const uint8_t* src_u,
+                             const uint8_t* src_v,
+                             uint8_t* dst_argb4444,
                              const struct YuvConstants* yuvconstants,
                              int width) {
   // Row buffer for intermediate ARGB pixels.
-  SIMD_ALIGNED(uint8 row[MAXTWIDTH * 4]);
+  SIMD_ALIGNED(uint8_t row[MAXTWIDTH * 4]);
   while (width > 0) {
     int twidth = width > MAXTWIDTH ? MAXTWIDTH : width;
     I422ToARGBRow_SSSE3(src_y, src_u, src_v, row, yuvconstants, twidth);
@@ -2478,13 +2941,13 @@
 #endif
 
 #if defined(HAS_NV12TORGB565ROW_SSSE3)
-void NV12ToRGB565Row_SSSE3(const uint8* src_y,
-                           const uint8* src_uv,
-                           uint8* dst_rgb565,
+void NV12ToRGB565Row_SSSE3(const uint8_t* src_y,
+                           const uint8_t* src_uv,
+                           uint8_t* dst_rgb565,
                            const struct YuvConstants* yuvconstants,
                            int width) {
   // Row buffer for intermediate ARGB pixels.
-  SIMD_ALIGNED(uint8 row[MAXTWIDTH * 4]);
+  SIMD_ALIGNED(uint8_t row[MAXTWIDTH * 4]);
   while (width > 0) {
     int twidth = width > MAXTWIDTH ? MAXTWIDTH : width;
     NV12ToARGBRow_SSSE3(src_y, src_uv, row, yuvconstants, twidth);
@@ -2497,14 +2960,102 @@
 }
 #endif
 
-#if defined(HAS_I422TORGB565ROW_AVX2)
-void I422ToRGB565Row_AVX2(const uint8* src_y,
-                          const uint8* src_u,
-                          const uint8* src_v,
-                          uint8* dst_rgb565,
+#if defined(HAS_NV12TORGB24ROW_SSSE3)
+void NV12ToRGB24Row_SSSE3(const uint8_t* src_y,
+                          const uint8_t* src_uv,
+                          uint8_t* dst_rgb24,
                           const struct YuvConstants* yuvconstants,
                           int width) {
-  SIMD_ALIGNED(uint8 row[MAXTWIDTH * 4]);
+  // Row buffer for intermediate ARGB pixels.
+  SIMD_ALIGNED(uint8_t row[MAXTWIDTH * 4]);
+  while (width > 0) {
+    int twidth = width > MAXTWIDTH ? MAXTWIDTH : width;
+    NV12ToARGBRow_SSSE3(src_y, src_uv, row, yuvconstants, twidth);
+    ARGBToRGB24Row_SSSE3(row, dst_rgb24, twidth);
+    src_y += twidth;
+    src_uv += twidth;
+    dst_rgb24 += twidth * 3;
+    width -= twidth;
+  }
+}
+#endif
+
+#if defined(HAS_NV21TORGB24ROW_SSSE3)
+void NV21ToRGB24Row_SSSE3(const uint8_t* src_y,
+                          const uint8_t* src_vu,
+                          uint8_t* dst_rgb24,
+                          const struct YuvConstants* yuvconstants,
+                          int width) {
+  // Row buffer for intermediate ARGB pixels.
+  SIMD_ALIGNED(uint8_t row[MAXTWIDTH * 4]);
+  while (width > 0) {
+    int twidth = width > MAXTWIDTH ? MAXTWIDTH : width;
+    NV21ToARGBRow_SSSE3(src_y, src_vu, row, yuvconstants, twidth);
+    ARGBToRGB24Row_SSSE3(row, dst_rgb24, twidth);
+    src_y += twidth;
+    src_vu += twidth;
+    dst_rgb24 += twidth * 3;
+    width -= twidth;
+  }
+}
+#endif
+
+#if defined(HAS_NV12TORGB24ROW_AVX2)
+void NV12ToRGB24Row_AVX2(const uint8_t* src_y,
+                         const uint8_t* src_uv,
+                         uint8_t* dst_rgb24,
+                         const struct YuvConstants* yuvconstants,
+                         int width) {
+  // Row buffer for intermediate ARGB pixels.
+  SIMD_ALIGNED(uint8_t row[MAXTWIDTH * 4]);
+  while (width > 0) {
+    int twidth = width > MAXTWIDTH ? MAXTWIDTH : width;
+    NV12ToARGBRow_AVX2(src_y, src_uv, row, yuvconstants, twidth);
+#if defined(HAS_ARGBTORGB24ROW_AVX2)
+    ARGBToRGB24Row_AVX2(row, dst_rgb24, twidth);
+#else
+    ARGBToRGB24Row_SSSE3(row, dst_rgb24, twidth);
+#endif
+    src_y += twidth;
+    src_uv += twidth;
+    dst_rgb24 += twidth * 3;
+    width -= twidth;
+  }
+}
+#endif
+
+#if defined(HAS_NV21TORGB24ROW_AVX2)
+void NV21ToRGB24Row_AVX2(const uint8_t* src_y,
+                         const uint8_t* src_vu,
+                         uint8_t* dst_rgb24,
+                         const struct YuvConstants* yuvconstants,
+                         int width) {
+  // Row buffer for intermediate ARGB pixels.
+  SIMD_ALIGNED(uint8_t row[MAXTWIDTH * 4]);
+  while (width > 0) {
+    int twidth = width > MAXTWIDTH ? MAXTWIDTH : width;
+    NV21ToARGBRow_AVX2(src_y, src_vu, row, yuvconstants, twidth);
+#if defined(HAS_ARGBTORGB24ROW_AVX2)
+    ARGBToRGB24Row_AVX2(row, dst_rgb24, twidth);
+#else
+    ARGBToRGB24Row_SSSE3(row, dst_rgb24, twidth);
+#endif
+    src_y += twidth;
+    src_vu += twidth;
+    dst_rgb24 += twidth * 3;
+    width -= twidth;
+  }
+}
+#endif
+
+#if defined(HAS_I422TORGB565ROW_AVX2)
+void I422ToRGB565Row_AVX2(const uint8_t* src_y,
+                          const uint8_t* src_u,
+                          const uint8_t* src_v,
+                          uint8_t* dst_rgb565,
+                          const struct YuvConstants* yuvconstants,
+                          int width) {
+  SIMD_ALIGNED(uint8_t row[MAXTWIDTH * 4]);
   while (width > 0) {
     int twidth = width > MAXTWIDTH ? MAXTWIDTH : width;
     I422ToARGBRow_AVX2(src_y, src_u, src_v, row, yuvconstants, twidth);
@@ -2523,14 +3074,14 @@
 #endif
 
 #if defined(HAS_I422TOARGB1555ROW_AVX2)
-void I422ToARGB1555Row_AVX2(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb1555,
+void I422ToARGB1555Row_AVX2(const uint8_t* src_y,
+                            const uint8_t* src_u,
+                            const uint8_t* src_v,
+                            uint8_t* dst_argb1555,
                             const struct YuvConstants* yuvconstants,
                             int width) {
   // Row buffer for intermediate ARGB pixels.
-  SIMD_ALIGNED(uint8 row[MAXTWIDTH * 4]);
+  SIMD_ALIGNED(uint8_t row[MAXTWIDTH * 4]);
   while (width > 0) {
     int twidth = width > MAXTWIDTH ? MAXTWIDTH : width;
     I422ToARGBRow_AVX2(src_y, src_u, src_v, row, yuvconstants, twidth);
@@ -2549,14 +3100,14 @@
 #endif
 
 #if defined(HAS_I422TOARGB4444ROW_AVX2)
-void I422ToARGB4444Row_AVX2(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb4444,
+void I422ToARGB4444Row_AVX2(const uint8_t* src_y,
+                            const uint8_t* src_u,
+                            const uint8_t* src_v,
+                            uint8_t* dst_argb4444,
                             const struct YuvConstants* yuvconstants,
                             int width) {
   // Row buffer for intermediate ARGB pixels.
-  SIMD_ALIGNED(uint8 row[MAXTWIDTH * 4]);
+  SIMD_ALIGNED(uint8_t row[MAXTWIDTH * 4]);
   while (width > 0) {
     int twidth = width > MAXTWIDTH ? MAXTWIDTH : width;
     I422ToARGBRow_AVX2(src_y, src_u, src_v, row, yuvconstants, twidth);
@@ -2575,19 +3126,22 @@
 #endif
 
 #if defined(HAS_I422TORGB24ROW_AVX2)
-void I422ToRGB24Row_AVX2(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_rgb24,
-                            const struct YuvConstants* yuvconstants,
-                            int width) {
+void I422ToRGB24Row_AVX2(const uint8_t* src_y,
+                         const uint8_t* src_u,
+                         const uint8_t* src_v,
+                         uint8_t* dst_rgb24,
+                         const struct YuvConstants* yuvconstants,
+                         int width) {
   // Row buffer for intermediate ARGB pixels.
-  SIMD_ALIGNED(uint8 row[MAXTWIDTH * 4]);
+  SIMD_ALIGNED(uint8_t row[MAXTWIDTH * 4]);
   while (width > 0) {
     int twidth = width > MAXTWIDTH ? MAXTWIDTH : width;
     I422ToARGBRow_AVX2(src_y, src_u, src_v, row, yuvconstants, twidth);
-    // TODO(fbarchard): ARGBToRGB24Row_AVX2
+#if defined(HAS_ARGBTORGB24ROW_AVX2)
+    ARGBToRGB24Row_AVX2(row, dst_rgb24, twidth);
+#else
     ARGBToRGB24Row_SSSE3(row, dst_rgb24, twidth);
+#endif
     src_y += twidth;
     src_u += twidth / 2;
     src_v += twidth / 2;
@@ -2598,13 +3152,13 @@
 #endif
 
 #if defined(HAS_NV12TORGB565ROW_AVX2)
-void NV12ToRGB565Row_AVX2(const uint8* src_y,
-                          const uint8* src_uv,
-                          uint8* dst_rgb565,
+void NV12ToRGB565Row_AVX2(const uint8_t* src_y,
+                          const uint8_t* src_uv,
+                          uint8_t* dst_rgb565,
                           const struct YuvConstants* yuvconstants,
                           int width) {
   // Row buffer for intermediate ARGB pixels.
-  SIMD_ALIGNED(uint8 row[MAXTWIDTH * 4]);
+  SIMD_ALIGNED(uint8_t row[MAXTWIDTH * 4]);
   while (width > 0) {
     int twidth = width > MAXTWIDTH ? MAXTWIDTH : width;
     NV12ToARGBRow_AVX2(src_y, src_uv, row, yuvconstants, twidth);
@@ -2621,6 +3175,62 @@
 }
 #endif
 
+float ScaleSumSamples_C(const float* src, float* dst, float scale, int width) {
+  float fsum = 0.f;
+  int i;
+#if defined(__clang__)
+#pragma clang loop vectorize_width(4)
+#endif
+  for (i = 0; i < width; ++i) {
+    float v = *src++;
+    fsum += v * v;
+    *dst++ = v * scale;
+  }
+  return fsum;
+}
+
+float ScaleMaxSamples_C(const float* src, float* dst, float scale, int width) {
+  float fmax = 0.f;
+  int i;
+  for (i = 0; i < width; ++i) {
+    float v = *src++;
+    float vs = v * scale;
+    fmax = (v > fmax) ? v : fmax;
+    *dst++ = vs;
+  }
+  return fmax;
+}
+
+void ScaleSamples_C(const float* src, float* dst, float scale, int width) {
+  int i;
+  for (i = 0; i < width; ++i) {
+    *dst++ = *src++ * scale;
+  }
+}
+
+void GaussRow_C(const uint32_t* src, uint16_t* dst, int width) {
+  int i;
+  for (i = 0; i < width; ++i) {
+    *dst++ =
+        (src[0] + src[1] * 4 + src[2] * 6 + src[3] * 4 + src[4] + 128) >> 8;
+    ++src;
+  }
+}
+
+// filter 5 rows with 1, 4, 6, 4, 1 coefficients to produce 1 row.
+void GaussCol_C(const uint16_t* src0,
+                const uint16_t* src1,
+                const uint16_t* src2,
+                const uint16_t* src3,
+                const uint16_t* src4,
+                uint32_t* dst,
+                int width) {
+  int i;
+  for (i = 0; i < width; ++i) {
+    *dst++ = *src0++ + *src1++ * 4 + *src2++ * 6 + *src3++ * 4 + *src4++;
+  }
+}
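The new `GaussCol_C`/`GaussRow_C` pair implements a separable 5-tap [1 4 6 4 1] Gaussian: the column pass accumulates five rows into 32-bit sums, and the row pass applies the same taps horizontally, rounding by the combined kernel weight 16 * 16 = 256 via `(+128) >> 8`. A minimal single-pixel sketch of the two passes:

```c
#include <stdint.h>

/* Vertical pass: weight five rows by 1,4,6,4,1 (sum fits in uint32_t). */
static uint32_t GaussColPixel(uint16_t s0, uint16_t s1, uint16_t s2,
                              uint16_t s3, uint16_t s4) {
  return s0 + s1 * 4u + s2 * 6u + s3 * 4u + s4;
}

/* Horizontal pass over five column sums, rounding to nearest by the
 * combined kernel weight 256. */
static uint16_t GaussRowPixel(const uint32_t* src) {
  return (uint16_t)((src[0] + src[1] * 4 + src[2] * 6 + src[3] * 4 +
                     src[4] + 128) >> 8);
}
```

A flat region passes through unchanged: constant input 100 gives column sums of 1600, and the row pass yields (1600 * 16 + 128) >> 8 = 100.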
+
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_gcc.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_gcc.cc
index 1ac7ef1..8d3cb81 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_gcc.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_gcc.cc
@@ -1,4 +1,3 @@
-// VERSION 2
 /*
  *  Copyright 2011 The LibYuv Project Authors. All rights reserved.
  *
@@ -23,1663 +22,2001 @@
 #if defined(HAS_ARGBTOYROW_SSSE3) || defined(HAS_ARGBGRAYROW_SSSE3)
 
 // Constants for ARGB
-static vec8 kARGBToY = {
-  13, 65, 33, 0, 13, 65, 33, 0, 13, 65, 33, 0, 13, 65, 33, 0
-};
+static const vec8 kARGBToY = {13, 65, 33, 0, 13, 65, 33, 0,
+                              13, 65, 33, 0, 13, 65, 33, 0};
 
 // JPeg full range.
-static vec8 kARGBToYJ = {
-  15, 75, 38, 0, 15, 75, 38, 0, 15, 75, 38, 0, 15, 75, 38, 0
-};
+static const vec8 kARGBToYJ = {15, 75, 38, 0, 15, 75, 38, 0,
+                               15, 75, 38, 0, 15, 75, 38, 0};
 #endif  // defined(HAS_ARGBTOYROW_SSSE3) || defined(HAS_ARGBGRAYROW_SSSE3)
 
 #if defined(HAS_ARGBTOYROW_SSSE3) || defined(HAS_I422TOARGBROW_SSSE3)
 
-static vec8 kARGBToU = {
-  112, -74, -38, 0, 112, -74, -38, 0, 112, -74, -38, 0, 112, -74, -38, 0
-};
+static const vec8 kARGBToU = {112, -74, -38, 0, 112, -74, -38, 0,
+                              112, -74, -38, 0, 112, -74, -38, 0};
 
-static vec8 kARGBToUJ = {
-  127, -84, -43, 0, 127, -84, -43, 0, 127, -84, -43, 0, 127, -84, -43, 0
-};
+static const vec8 kARGBToUJ = {127, -84, -43, 0, 127, -84, -43, 0,
+                               127, -84, -43, 0, 127, -84, -43, 0};
 
-static vec8 kARGBToV = {
-  -18, -94, 112, 0, -18, -94, 112, 0, -18, -94, 112, 0, -18, -94, 112, 0,
-};
+static const vec8 kARGBToV = {-18, -94, 112, 0, -18, -94, 112, 0,
+                              -18, -94, 112, 0, -18, -94, 112, 0};
 
-static vec8 kARGBToVJ = {
-  -20, -107, 127, 0, -20, -107, 127, 0, -20, -107, 127, 0, -20, -107, 127, 0
-};
+static const vec8 kARGBToVJ = {-20, -107, 127, 0, -20, -107, 127, 0,
+                               -20, -107, 127, 0, -20, -107, 127, 0};
 
 // Constants for BGRA
-static vec8 kBGRAToY = {
-  0, 33, 65, 13, 0, 33, 65, 13, 0, 33, 65, 13, 0, 33, 65, 13
-};
+static const vec8 kBGRAToY = {0, 33, 65, 13, 0, 33, 65, 13,
+                              0, 33, 65, 13, 0, 33, 65, 13};
 
-static vec8 kBGRAToU = {
-  0, -38, -74, 112, 0, -38, -74, 112, 0, -38, -74, 112, 0, -38, -74, 112
-};
+static const vec8 kBGRAToU = {0, -38, -74, 112, 0, -38, -74, 112,
+                              0, -38, -74, 112, 0, -38, -74, 112};
 
-static vec8 kBGRAToV = {
-  0, 112, -94, -18, 0, 112, -94, -18, 0, 112, -94, -18, 0, 112, -94, -18
-};
+static const vec8 kBGRAToV = {0, 112, -94, -18, 0, 112, -94, -18,
+                              0, 112, -94, -18, 0, 112, -94, -18};
 
 // Constants for ABGR
-static vec8 kABGRToY = {
-  33, 65, 13, 0, 33, 65, 13, 0, 33, 65, 13, 0, 33, 65, 13, 0
-};
+static const vec8 kABGRToY = {33, 65, 13, 0, 33, 65, 13, 0,
+                              33, 65, 13, 0, 33, 65, 13, 0};
 
-static vec8 kABGRToU = {
-  -38, -74, 112, 0, -38, -74, 112, 0, -38, -74, 112, 0, -38, -74, 112, 0
-};
+static const vec8 kABGRToU = {-38, -74, 112, 0, -38, -74, 112, 0,
+                              -38, -74, 112, 0, -38, -74, 112, 0};
 
-static vec8 kABGRToV = {
-  112, -94, -18, 0, 112, -94, -18, 0, 112, -94, -18, 0, 112, -94, -18, 0
-};
+static const vec8 kABGRToV = {112, -94, -18, 0, 112, -94, -18, 0,
+                              112, -94, -18, 0, 112, -94, -18, 0};
 
 // Constants for RGBA.
-static vec8 kRGBAToY = {
-  0, 13, 65, 33, 0, 13, 65, 33, 0, 13, 65, 33, 0, 13, 65, 33
-};
+static const vec8 kRGBAToY = {0, 13, 65, 33, 0, 13, 65, 33,
+                              0, 13, 65, 33, 0, 13, 65, 33};
 
-static vec8 kRGBAToU = {
-  0, 112, -74, -38, 0, 112, -74, -38, 0, 112, -74, -38, 0, 112, -74, -38
-};
+static const vec8 kRGBAToU = {0, 112, -74, -38, 0, 112, -74, -38,
+                              0, 112, -74, -38, 0, 112, -74, -38};
 
-static vec8 kRGBAToV = {
-  0, -18, -94, 112, 0, -18, -94, 112, 0, -18, -94, 112, 0, -18, -94, 112
-};
+static const vec8 kRGBAToV = {0, -18, -94, 112, 0, -18, -94, 112,
+                              0, -18, -94, 112, 0, -18, -94, 112};
 
-static uvec8 kAddY16 = {
-  16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u
-};
+static const uvec8 kAddY16 = {16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u,
+                              16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u};
 
 // 7 bit fixed point 0.5.
-static vec16 kAddYJ64 = {
-  64, 64, 64, 64, 64, 64, 64, 64
-};
+static const vec16 kAddYJ64 = {64, 64, 64, 64, 64, 64, 64, 64};
 
-static uvec8 kAddUV128 = {
-  128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u,
-  128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u
-};
+static const uvec8 kAddUV128 = {128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u,
+                                128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u};
 
-static uvec16 kAddUVJ128 = {
-  0x8080u, 0x8080u, 0x8080u, 0x8080u, 0x8080u, 0x8080u, 0x8080u, 0x8080u
-};
+static const uvec16 kAddUVJ128 = {0x8080u, 0x8080u, 0x8080u, 0x8080u,
+                                  0x8080u, 0x8080u, 0x8080u, 0x8080u};
 #endif  // defined(HAS_ARGBTOYROW_SSSE3) || defined(HAS_I422TOARGBROW_SSSE3)
 
 #ifdef HAS_RGB24TOARGBROW_SSSE3
 
 // Shuffle table for converting RGB24 to ARGB.
-static uvec8 kShuffleMaskRGB24ToARGB = {
-  0u, 1u, 2u, 12u, 3u, 4u, 5u, 13u, 6u, 7u, 8u, 14u, 9u, 10u, 11u, 15u
-};
+static const uvec8 kShuffleMaskRGB24ToARGB = {
+    0u, 1u, 2u, 12u, 3u, 4u, 5u, 13u, 6u, 7u, 8u, 14u, 9u, 10u, 11u, 15u};
 
 // Shuffle table for converting RAW to ARGB.
-static uvec8 kShuffleMaskRAWToARGB = {
-  2u, 1u, 0u, 12u, 5u, 4u, 3u, 13u, 8u, 7u, 6u, 14u, 11u, 10u, 9u, 15u
-};
+static const uvec8 kShuffleMaskRAWToARGB = {2u, 1u, 0u, 12u, 5u,  4u,  3u, 13u,
+                                            8u, 7u, 6u, 14u, 11u, 10u, 9u, 15u};
 
 // Shuffle table for converting RAW to RGB24.  First 8.
 static const uvec8 kShuffleMaskRAWToRGB24_0 = {
-  2u, 1u, 0u, 5u, 4u, 3u, 8u, 7u,
-  128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u
-};
+    2u,   1u,   0u,   5u,   4u,   3u,   8u,   7u,
+    128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u};
 
 // Shuffle table for converting RAW to RGB24.  Middle 8.
 static const uvec8 kShuffleMaskRAWToRGB24_1 = {
-  2u, 7u, 6u, 5u, 10u, 9u, 8u, 13u,
-  128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u
-};
+    2u,   7u,   6u,   5u,   10u,  9u,   8u,   13u,
+    128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u};
 
 // Shuffle table for converting RAW to RGB24.  Last 8.
 static const uvec8 kShuffleMaskRAWToRGB24_2 = {
-  8u, 7u, 12u, 11u, 10u, 15u, 14u, 13u,
-  128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u
-};
+    8u,   7u,   12u,  11u,  10u,  15u,  14u,  13u,
+    128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u};
 
 // Shuffle table for converting ARGB to RGB24.
-static uvec8 kShuffleMaskARGBToRGB24 = {
-  0u, 1u, 2u, 4u, 5u, 6u, 8u, 9u, 10u, 12u, 13u, 14u, 128u, 128u, 128u, 128u
-};
+static const uvec8 kShuffleMaskARGBToRGB24 = {
+    0u, 1u, 2u, 4u, 5u, 6u, 8u, 9u, 10u, 12u, 13u, 14u, 128u, 128u, 128u, 128u};
 
 // Shuffle table for converting ARGB to RAW.
-static uvec8 kShuffleMaskARGBToRAW = {
-  2u, 1u, 0u, 6u, 5u, 4u, 10u, 9u, 8u, 14u, 13u, 12u, 128u, 128u, 128u, 128u
-};
+static const uvec8 kShuffleMaskARGBToRAW = {
+    2u, 1u, 0u, 6u, 5u, 4u, 10u, 9u, 8u, 14u, 13u, 12u, 128u, 128u, 128u, 128u};
 
 // Shuffle table for converting ARGBToRGB24 for I422ToRGB24.  First 8 + next 4
-static uvec8 kShuffleMaskARGBToRGB24_0 = {
-  0u, 1u, 2u, 4u, 5u, 6u, 8u, 9u, 128u, 128u, 128u, 128u, 10u, 12u, 13u, 14u
-};
+static const uvec8 kShuffleMaskARGBToRGB24_0 = {
+    0u, 1u, 2u, 4u, 5u, 6u, 8u, 9u, 128u, 128u, 128u, 128u, 10u, 12u, 13u, 14u};
 
 // YUY2 shuf 16 Y to 32 Y.
-static const lvec8 kShuffleYUY2Y = {
-  0, 0, 2, 2, 4, 4, 6, 6, 8, 8, 10, 10, 12, 12, 14, 14,
-  0, 0, 2, 2, 4, 4, 6, 6, 8, 8, 10, 10, 12, 12, 14, 14
-};
+static const lvec8 kShuffleYUY2Y = {0,  0,  2,  2,  4,  4,  6,  6,  8,  8, 10,
+                                    10, 12, 12, 14, 14, 0,  0,  2,  2,  4, 4,
+                                    6,  6,  8,  8,  10, 10, 12, 12, 14, 14};
 
 // YUY2 shuf 8 UV to 16 UV.
-static const lvec8 kShuffleYUY2UV = {
-  1, 3, 1, 3, 5, 7, 5, 7, 9, 11, 9, 11, 13, 15, 13, 15,
-  1, 3, 1, 3, 5, 7, 5, 7, 9, 11, 9, 11, 13, 15, 13, 15
-};
+static const lvec8 kShuffleYUY2UV = {1,  3,  1,  3,  5,  7,  5,  7,  9,  11, 9,
+                                     11, 13, 15, 13, 15, 1,  3,  1,  3,  5,  7,
+                                     5,  7,  9,  11, 9,  11, 13, 15, 13, 15};
 
 // UYVY shuf 16 Y to 32 Y.
-static const lvec8 kShuffleUYVYY = {
-  1, 1, 3, 3, 5, 5, 7, 7, 9, 9, 11, 11, 13, 13, 15, 15,
-  1, 1, 3, 3, 5, 5, 7, 7, 9, 9, 11, 11, 13, 13, 15, 15
-};
+static const lvec8 kShuffleUYVYY = {1,  1,  3,  3,  5,  5,  7,  7,  9,  9, 11,
+                                    11, 13, 13, 15, 15, 1,  1,  3,  3,  5, 5,
+                                    7,  7,  9,  9,  11, 11, 13, 13, 15, 15};
 
 // UYVY shuf 8 UV to 16 UV.
-static const lvec8 kShuffleUYVYUV = {
-  0, 2, 0, 2, 4, 6, 4, 6, 8, 10, 8, 10, 12, 14, 12, 14,
-  0, 2, 0, 2, 4, 6, 4, 6, 8, 10, 8, 10, 12, 14, 12, 14
-};
+static const lvec8 kShuffleUYVYUV = {0,  2,  0,  2,  4,  6,  4,  6,  8,  10, 8,
+                                     10, 12, 14, 12, 14, 0,  2,  0,  2,  4,  6,
+                                     4,  6,  8,  10, 8,  10, 12, 14, 12, 14};
 
 // NV21 shuf 8 VU to 16 UV.
 static const lvec8 kShuffleNV21 = {
-  1, 0, 1, 0, 3, 2, 3, 2, 5, 4, 5, 4, 7, 6, 7, 6,
-  1, 0, 1, 0, 3, 2, 3, 2, 5, 4, 5, 4, 7, 6, 7, 6,
+    1, 0, 1, 0, 3, 2, 3, 2, 5, 4, 5, 4, 7, 6, 7, 6,
+    1, 0, 1, 0, 3, 2, 3, 2, 5, 4, 5, 4, 7, 6, 7, 6,
 };
 #endif  // HAS_RGB24TOARGBROW_SSSE3
 
 #ifdef HAS_J400TOARGBROW_SSE2
-void J400ToARGBRow_SSE2(const uint8* src_y, uint8* dst_argb, int width) {
-  asm volatile (
-    "pcmpeqb   %%xmm5,%%xmm5                   \n"
-    "pslld     $0x18,%%xmm5                    \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movq      " MEMACCESS(0) ",%%xmm0         \n"
-    "lea       " MEMLEA(0x8,0) ",%0            \n"
-    "punpcklbw %%xmm0,%%xmm0                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "punpcklwd %%xmm0,%%xmm0                   \n"
-    "punpckhwd %%xmm1,%%xmm1                   \n"
-    "por       %%xmm5,%%xmm0                   \n"
-    "por       %%xmm5,%%xmm1                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "movdqu    %%xmm1," MEMACCESS2(0x10,1) "   \n"
-    "lea       " MEMLEA(0x20,1) ",%1           \n"
-    "sub       $0x8,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_y),     // %0
-    "+r"(dst_argb),  // %1
-    "+r"(width)        // %2
-  :: "memory", "cc", "xmm0", "xmm1", "xmm5"
-  );
+void J400ToARGBRow_SSE2(const uint8_t* src_y, uint8_t* dst_argb, int width) {
+  asm volatile(
+      "pcmpeqb   %%xmm5,%%xmm5                   \n"
+      "pslld     $0x18,%%xmm5                    \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movq      (%0),%%xmm0                     \n"
+      "lea       0x8(%0),%0                      \n"
+      "punpcklbw %%xmm0,%%xmm0                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "punpcklwd %%xmm0,%%xmm0                   \n"
+      "punpckhwd %%xmm1,%%xmm1                   \n"
+      "por       %%xmm5,%%xmm0                   \n"
+      "por       %%xmm5,%%xmm1                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "movdqu    %%xmm1,0x10(%1)                 \n"
+      "lea       0x20(%1),%1                     \n"
+      "sub       $0x8,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_y),     // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+        ::"memory",
+        "cc", "xmm0", "xmm1", "xmm5");
 }
 #endif  // HAS_J400TOARGBROW_SSE2
 
 #ifdef HAS_RGB24TOARGBROW_SSSE3
-void RGB24ToARGBRow_SSSE3(const uint8* src_rgb24, uint8* dst_argb, int width) {
-  asm volatile (
-    "pcmpeqb   %%xmm5,%%xmm5                   \n"  // generate mask 0xff000000
-    "pslld     $0x18,%%xmm5                    \n"
-    "movdqa    %3,%%xmm4                       \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm3   \n"
-    "lea       " MEMLEA(0x30,0) ",%0           \n"
-    "movdqa    %%xmm3,%%xmm2                   \n"
-    "palignr   $0x8,%%xmm1,%%xmm2              \n"
-    "pshufb    %%xmm4,%%xmm2                   \n"
-    "por       %%xmm5,%%xmm2                   \n"
-    "palignr   $0xc,%%xmm0,%%xmm1              \n"
-    "pshufb    %%xmm4,%%xmm0                   \n"
-    "movdqu    %%xmm2," MEMACCESS2(0x20,1) "   \n"
-    "por       %%xmm5,%%xmm0                   \n"
-    "pshufb    %%xmm4,%%xmm1                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "por       %%xmm5,%%xmm1                   \n"
-    "palignr   $0x4,%%xmm3,%%xmm3              \n"
-    "pshufb    %%xmm4,%%xmm3                   \n"
-    "movdqu    %%xmm1," MEMACCESS2(0x10,1) "   \n"
-    "por       %%xmm5,%%xmm3                   \n"
-    "movdqu    %%xmm3," MEMACCESS2(0x30,1) "   \n"
-    "lea       " MEMLEA(0x40,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_rgb24),  // %0
-    "+r"(dst_argb),  // %1
-    "+r"(width)        // %2
-  : "m"(kShuffleMaskRGB24ToARGB)  // %3
-  : "memory", "cc" , "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+void RGB24ToARGBRow_SSSE3(const uint8_t* src_rgb24,
+                          uint8_t* dst_argb,
+                          int width) {
+  asm volatile(
+      "pcmpeqb   %%xmm5,%%xmm5                   \n"  // 0xff000000
+      "pslld     $0x18,%%xmm5                    \n"
+      "movdqa    %3,%%xmm4                       \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x20(%0),%%xmm3                 \n"
+      "lea       0x30(%0),%0                     \n"
+      "movdqa    %%xmm3,%%xmm2                   \n"
+      "palignr   $0x8,%%xmm1,%%xmm2              \n"
+      "pshufb    %%xmm4,%%xmm2                   \n"
+      "por       %%xmm5,%%xmm2                   \n"
+      "palignr   $0xc,%%xmm0,%%xmm1              \n"
+      "pshufb    %%xmm4,%%xmm0                   \n"
+      "movdqu    %%xmm2,0x20(%1)                 \n"
+      "por       %%xmm5,%%xmm0                   \n"
+      "pshufb    %%xmm4,%%xmm1                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "por       %%xmm5,%%xmm1                   \n"
+      "palignr   $0x4,%%xmm3,%%xmm3              \n"
+      "pshufb    %%xmm4,%%xmm3                   \n"
+      "movdqu    %%xmm1,0x10(%1)                 \n"
+      "por       %%xmm5,%%xmm3                   \n"
+      "movdqu    %%xmm3,0x30(%1)                 \n"
+      "lea       0x40(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_rgb24),              // %0
+        "+r"(dst_argb),               // %1
+        "+r"(width)                   // %2
+      : "m"(kShuffleMaskRGB24ToARGB)  // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 
-void RAWToARGBRow_SSSE3(const uint8* src_raw, uint8* dst_argb, int width) {
-  asm volatile (
-    "pcmpeqb   %%xmm5,%%xmm5                   \n"  // generate mask 0xff000000
-    "pslld     $0x18,%%xmm5                    \n"
-    "movdqa    %3,%%xmm4                       \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm3   \n"
-    "lea       " MEMLEA(0x30,0) ",%0           \n"
-    "movdqa    %%xmm3,%%xmm2                   \n"
-    "palignr   $0x8,%%xmm1,%%xmm2              \n"
-    "pshufb    %%xmm4,%%xmm2                   \n"
-    "por       %%xmm5,%%xmm2                   \n"
-    "palignr   $0xc,%%xmm0,%%xmm1              \n"
-    "pshufb    %%xmm4,%%xmm0                   \n"
-    "movdqu    %%xmm2," MEMACCESS2(0x20,1) "   \n"
-    "por       %%xmm5,%%xmm0                   \n"
-    "pshufb    %%xmm4,%%xmm1                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "por       %%xmm5,%%xmm1                   \n"
-    "palignr   $0x4,%%xmm3,%%xmm3              \n"
-    "pshufb    %%xmm4,%%xmm3                   \n"
-    "movdqu    %%xmm1," MEMACCESS2(0x10,1) "   \n"
-    "por       %%xmm5,%%xmm3                   \n"
-    "movdqu    %%xmm3," MEMACCESS2(0x30,1) "   \n"
-    "lea       " MEMLEA(0x40,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_raw),   // %0
-    "+r"(dst_argb),  // %1
-    "+r"(width)        // %2
-  : "m"(kShuffleMaskRAWToARGB)  // %3
-  : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+void RAWToARGBRow_SSSE3(const uint8_t* src_raw, uint8_t* dst_argb, int width) {
+  asm volatile(
+      "pcmpeqb   %%xmm5,%%xmm5                   \n"  // 0xff000000
+      "pslld     $0x18,%%xmm5                    \n"
+      "movdqa    %3,%%xmm4                       \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x20(%0),%%xmm3                 \n"
+      "lea       0x30(%0),%0                     \n"
+      "movdqa    %%xmm3,%%xmm2                   \n"
+      "palignr   $0x8,%%xmm1,%%xmm2              \n"
+      "pshufb    %%xmm4,%%xmm2                   \n"
+      "por       %%xmm5,%%xmm2                   \n"
+      "palignr   $0xc,%%xmm0,%%xmm1              \n"
+      "pshufb    %%xmm4,%%xmm0                   \n"
+      "movdqu    %%xmm2,0x20(%1)                 \n"
+      "por       %%xmm5,%%xmm0                   \n"
+      "pshufb    %%xmm4,%%xmm1                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "por       %%xmm5,%%xmm1                   \n"
+      "palignr   $0x4,%%xmm3,%%xmm3              \n"
+      "pshufb    %%xmm4,%%xmm3                   \n"
+      "movdqu    %%xmm1,0x10(%1)                 \n"
+      "por       %%xmm5,%%xmm3                   \n"
+      "movdqu    %%xmm3,0x30(%1)                 \n"
+      "lea       0x40(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_raw),              // %0
+        "+r"(dst_argb),             // %1
+        "+r"(width)                 // %2
+      : "m"(kShuffleMaskRAWToARGB)  // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 
-void RAWToRGB24Row_SSSE3(const uint8* src_raw, uint8* dst_rgb24, int width) {
-  asm volatile (
-   "movdqa     %3,%%xmm3                       \n"
-   "movdqa     %4,%%xmm4                       \n"
-   "movdqa     %5,%%xmm5                       \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x4,0) ",%%xmm1    \n"
-    "movdqu    " MEMACCESS2(0x8,0) ",%%xmm2    \n"
-    "lea       " MEMLEA(0x18,0) ",%0           \n"
-    "pshufb    %%xmm3,%%xmm0                   \n"
-    "pshufb    %%xmm4,%%xmm1                   \n"
-    "pshufb    %%xmm5,%%xmm2                   \n"
-    "movq      %%xmm0," MEMACCESS(1) "         \n"
-    "movq      %%xmm1," MEMACCESS2(0x8,1) "    \n"
-    "movq      %%xmm2," MEMACCESS2(0x10,1) "   \n"
-    "lea       " MEMLEA(0x18,1) ",%1           \n"
-    "sub       $0x8,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_raw),    // %0
-    "+r"(dst_rgb24),  // %1
-    "+r"(width)       // %2
-  : "m"(kShuffleMaskRAWToRGB24_0),  // %3
-    "m"(kShuffleMaskRAWToRGB24_1),  // %4
-    "m"(kShuffleMaskRAWToRGB24_2)   // %5
-  : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+void RAWToRGB24Row_SSSE3(const uint8_t* src_raw,
+                         uint8_t* dst_rgb24,
+                         int width) {
+  asm volatile(
+      "movdqa     %3,%%xmm3                       \n"
+      "movdqa     %4,%%xmm4                       \n"
+      "movdqa     %5,%%xmm5                       \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x4(%0),%%xmm1                  \n"
+      "movdqu    0x8(%0),%%xmm2                  \n"
+      "lea       0x18(%0),%0                     \n"
+      "pshufb    %%xmm3,%%xmm0                   \n"
+      "pshufb    %%xmm4,%%xmm1                   \n"
+      "pshufb    %%xmm5,%%xmm2                   \n"
+      "movq      %%xmm0,(%1)                     \n"
+      "movq      %%xmm1,0x8(%1)                  \n"
+      "movq      %%xmm2,0x10(%1)                 \n"
+      "lea       0x18(%1),%1                     \n"
+      "sub       $0x8,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_raw),                  // %0
+        "+r"(dst_rgb24),                // %1
+        "+r"(width)                     // %2
+      : "m"(kShuffleMaskRAWToRGB24_0),  // %3
+        "m"(kShuffleMaskRAWToRGB24_1),  // %4
+        "m"(kShuffleMaskRAWToRGB24_2)   // %5
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 
-void RGB565ToARGBRow_SSE2(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-    "mov       $0x1080108,%%eax                \n"
-    "movd      %%eax,%%xmm5                    \n"
-    "pshufd    $0x0,%%xmm5,%%xmm5              \n"
-    "mov       $0x20802080,%%eax               \n"
-    "movd      %%eax,%%xmm6                    \n"
-    "pshufd    $0x0,%%xmm6,%%xmm6              \n"
-    "pcmpeqb   %%xmm3,%%xmm3                   \n"
-    "psllw     $0xb,%%xmm3                     \n"
-    "pcmpeqb   %%xmm4,%%xmm4                   \n"
-    "psllw     $0xa,%%xmm4                     \n"
-    "psrlw     $0x5,%%xmm4                     \n"
-    "pcmpeqb   %%xmm7,%%xmm7                   \n"
-    "psllw     $0x8,%%xmm7                     \n"
-    "sub       %0,%1                           \n"
-    "sub       %0,%1                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "movdqa    %%xmm0,%%xmm2                   \n"
-    "pand      %%xmm3,%%xmm1                   \n"
-    "psllw     $0xb,%%xmm2                     \n"
-    "pmulhuw   %%xmm5,%%xmm1                   \n"
-    "pmulhuw   %%xmm5,%%xmm2                   \n"
-    "psllw     $0x8,%%xmm1                     \n"
-    "por       %%xmm2,%%xmm1                   \n"
-    "pand      %%xmm4,%%xmm0                   \n"
-    "pmulhuw   %%xmm6,%%xmm0                   \n"
-    "por       %%xmm7,%%xmm0                   \n"
-    "movdqa    %%xmm1,%%xmm2                   \n"
-    "punpcklbw %%xmm0,%%xmm1                   \n"
-    "punpckhbw %%xmm0,%%xmm2                   \n"
-    MEMOPMEM(movdqu,xmm1,0x00,1,0,2)           //  movdqu  %%xmm1,(%1,%0,2)
-    MEMOPMEM(movdqu,xmm2,0x10,1,0,2)           //  movdqu  %%xmm2,0x10(%1,%0,2)
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "sub       $0x8,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src),  // %0
-    "+r"(dst),  // %1
-    "+r"(width)   // %2
-  :
-  : "memory", "cc", "eax", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+void RGB565ToARGBRow_SSE2(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "mov       $0x1080108,%%eax                \n"
+      "movd      %%eax,%%xmm5                    \n"
+      "pshufd    $0x0,%%xmm5,%%xmm5              \n"
+      "mov       $0x20802080,%%eax               \n"
+      "movd      %%eax,%%xmm6                    \n"
+      "pshufd    $0x0,%%xmm6,%%xmm6              \n"
+      "pcmpeqb   %%xmm3,%%xmm3                   \n"
+      "psllw     $0xb,%%xmm3                     \n"
+      "pcmpeqb   %%xmm4,%%xmm4                   \n"
+      "psllw     $0xa,%%xmm4                     \n"
+      "psrlw     $0x5,%%xmm4                     \n"
+      "pcmpeqb   %%xmm7,%%xmm7                   \n"
+      "psllw     $0x8,%%xmm7                     \n"
+      "sub       %0,%1                           \n"
+      "sub       %0,%1                           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "movdqa    %%xmm0,%%xmm2                   \n"
+      "pand      %%xmm3,%%xmm1                   \n"
+      "psllw     $0xb,%%xmm2                     \n"
+      "pmulhuw   %%xmm5,%%xmm1                   \n"
+      "pmulhuw   %%xmm5,%%xmm2                   \n"
+      "psllw     $0x8,%%xmm1                     \n"
+      "por       %%xmm2,%%xmm1                   \n"
+      "pand      %%xmm4,%%xmm0                   \n"
+      "pmulhuw   %%xmm6,%%xmm0                   \n"
+      "por       %%xmm7,%%xmm0                   \n"
+      "movdqa    %%xmm1,%%xmm2                   \n"
+      "punpcklbw %%xmm0,%%xmm1                   \n"
+      "punpckhbw %%xmm0,%%xmm2                   \n"
+      "movdqu    %%xmm1,0x00(%1,%0,2)            \n"
+      "movdqu    %%xmm2,0x10(%1,%0,2)            \n"
+      "lea       0x10(%0),%0                     \n"
+      "sub       $0x8,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      :
+      : "memory", "cc", "eax", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5",
+        "xmm6", "xmm7");
 }
 
-void ARGB1555ToARGBRow_SSE2(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-    "mov       $0x1080108,%%eax                \n"
-    "movd      %%eax,%%xmm5                    \n"
-    "pshufd    $0x0,%%xmm5,%%xmm5              \n"
-    "mov       $0x42004200,%%eax               \n"
-    "movd      %%eax,%%xmm6                    \n"
-    "pshufd    $0x0,%%xmm6,%%xmm6              \n"
-    "pcmpeqb   %%xmm3,%%xmm3                   \n"
-    "psllw     $0xb,%%xmm3                     \n"
-    "movdqa    %%xmm3,%%xmm4                   \n"
-    "psrlw     $0x6,%%xmm4                     \n"
-    "pcmpeqb   %%xmm7,%%xmm7                   \n"
-    "psllw     $0x8,%%xmm7                     \n"
-    "sub       %0,%1                           \n"
-    "sub       %0,%1                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "movdqa    %%xmm0,%%xmm2                   \n"
-    "psllw     $0x1,%%xmm1                     \n"
-    "psllw     $0xb,%%xmm2                     \n"
-    "pand      %%xmm3,%%xmm1                   \n"
-    "pmulhuw   %%xmm5,%%xmm2                   \n"
-    "pmulhuw   %%xmm5,%%xmm1                   \n"
-    "psllw     $0x8,%%xmm1                     \n"
-    "por       %%xmm2,%%xmm1                   \n"
-    "movdqa    %%xmm0,%%xmm2                   \n"
-    "pand      %%xmm4,%%xmm0                   \n"
-    "psraw     $0x8,%%xmm2                     \n"
-    "pmulhuw   %%xmm6,%%xmm0                   \n"
-    "pand      %%xmm7,%%xmm2                   \n"
-    "por       %%xmm2,%%xmm0                   \n"
-    "movdqa    %%xmm1,%%xmm2                   \n"
-    "punpcklbw %%xmm0,%%xmm1                   \n"
-    "punpckhbw %%xmm0,%%xmm2                   \n"
-    MEMOPMEM(movdqu,xmm1,0x00,1,0,2)           //  movdqu  %%xmm1,(%1,%0,2)
-    MEMOPMEM(movdqu,xmm2,0x10,1,0,2)           //  movdqu  %%xmm2,0x10(%1,%0,2)
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "sub       $0x8,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src),  // %0
-    "+r"(dst),  // %1
-    "+r"(width)   // %2
-  :
-  : "memory", "cc", "eax", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+void ARGB1555ToARGBRow_SSE2(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "mov       $0x1080108,%%eax                \n"
+      "movd      %%eax,%%xmm5                    \n"
+      "pshufd    $0x0,%%xmm5,%%xmm5              \n"
+      "mov       $0x42004200,%%eax               \n"
+      "movd      %%eax,%%xmm6                    \n"
+      "pshufd    $0x0,%%xmm6,%%xmm6              \n"
+      "pcmpeqb   %%xmm3,%%xmm3                   \n"
+      "psllw     $0xb,%%xmm3                     \n"
+      "movdqa    %%xmm3,%%xmm4                   \n"
+      "psrlw     $0x6,%%xmm4                     \n"
+      "pcmpeqb   %%xmm7,%%xmm7                   \n"
+      "psllw     $0x8,%%xmm7                     \n"
+      "sub       %0,%1                           \n"
+      "sub       %0,%1                           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "movdqa    %%xmm0,%%xmm2                   \n"
+      "psllw     $0x1,%%xmm1                     \n"
+      "psllw     $0xb,%%xmm2                     \n"
+      "pand      %%xmm3,%%xmm1                   \n"
+      "pmulhuw   %%xmm5,%%xmm2                   \n"
+      "pmulhuw   %%xmm5,%%xmm1                   \n"
+      "psllw     $0x8,%%xmm1                     \n"
+      "por       %%xmm2,%%xmm1                   \n"
+      "movdqa    %%xmm0,%%xmm2                   \n"
+      "pand      %%xmm4,%%xmm0                   \n"
+      "psraw     $0x8,%%xmm2                     \n"
+      "pmulhuw   %%xmm6,%%xmm0                   \n"
+      "pand      %%xmm7,%%xmm2                   \n"
+      "por       %%xmm2,%%xmm0                   \n"
+      "movdqa    %%xmm1,%%xmm2                   \n"
+      "punpcklbw %%xmm0,%%xmm1                   \n"
+      "punpckhbw %%xmm0,%%xmm2                   \n"
+      "movdqu    %%xmm1,0x00(%1,%0,2)            \n"
+      "movdqu    %%xmm2,0x10(%1,%0,2)            \n"
+      "lea       0x10(%0),%0                     \n"
+      "sub       $0x8,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      :
+      : "memory", "cc", "eax", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5",
+        "xmm6", "xmm7");
 }
 
-void ARGB4444ToARGBRow_SSE2(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-    "mov       $0xf0f0f0f,%%eax                \n"
-    "movd      %%eax,%%xmm4                    \n"
-    "pshufd    $0x0,%%xmm4,%%xmm4              \n"
-    "movdqa    %%xmm4,%%xmm5                   \n"
-    "pslld     $0x4,%%xmm5                     \n"
-    "sub       %0,%1                           \n"
-    "sub       %0,%1                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqa    %%xmm0,%%xmm2                   \n"
-    "pand      %%xmm4,%%xmm0                   \n"
-    "pand      %%xmm5,%%xmm2                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "movdqa    %%xmm2,%%xmm3                   \n"
-    "psllw     $0x4,%%xmm1                     \n"
-    "psrlw     $0x4,%%xmm3                     \n"
-    "por       %%xmm1,%%xmm0                   \n"
-    "por       %%xmm3,%%xmm2                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "punpcklbw %%xmm2,%%xmm0                   \n"
-    "punpckhbw %%xmm2,%%xmm1                   \n"
-    MEMOPMEM(movdqu,xmm0,0x00,1,0,2)           //  movdqu  %%xmm0,(%1,%0,2)
-    MEMOPMEM(movdqu,xmm1,0x10,1,0,2)           //  movdqu  %%xmm1,0x10(%1,%0,2)
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "sub       $0x8,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src),  // %0
-    "+r"(dst),  // %1
-    "+r"(width)   // %2
-  :
-  : "memory", "cc", "eax", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+void ARGB4444ToARGBRow_SSE2(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "mov       $0xf0f0f0f,%%eax                \n"
+      "movd      %%eax,%%xmm4                    \n"
+      "pshufd    $0x0,%%xmm4,%%xmm4              \n"
+      "movdqa    %%xmm4,%%xmm5                   \n"
+      "pslld     $0x4,%%xmm5                     \n"
+      "sub       %0,%1                           \n"
+      "sub       %0,%1                           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqa    %%xmm0,%%xmm2                   \n"
+      "pand      %%xmm4,%%xmm0                   \n"
+      "pand      %%xmm5,%%xmm2                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "movdqa    %%xmm2,%%xmm3                   \n"
+      "psllw     $0x4,%%xmm1                     \n"
+      "psrlw     $0x4,%%xmm3                     \n"
+      "por       %%xmm1,%%xmm0                   \n"
+      "por       %%xmm3,%%xmm2                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "punpcklbw %%xmm2,%%xmm0                   \n"
+      "punpckhbw %%xmm2,%%xmm1                   \n"
+      "movdqu    %%xmm0,0x00(%1,%0,2)            \n"
+      "movdqu    %%xmm1,0x10(%1,%0,2)            \n"
+      "lea       0x10(%0),%0                     \n"
+      "sub       $0x8,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      :
+      : "memory", "cc", "eax", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 
-void ARGBToRGB24Row_SSSE3(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-    "movdqa    %3,%%xmm6                       \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm3   \n"
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "pshufb    %%xmm6,%%xmm0                   \n"
-    "pshufb    %%xmm6,%%xmm1                   \n"
-    "pshufb    %%xmm6,%%xmm2                   \n"
-    "pshufb    %%xmm6,%%xmm3                   \n"
-    "movdqa    %%xmm1,%%xmm4                   \n"
-    "psrldq    $0x4,%%xmm1                     \n"
-    "pslldq    $0xc,%%xmm4                     \n"
-    "movdqa    %%xmm2,%%xmm5                   \n"
-    "por       %%xmm4,%%xmm0                   \n"
-    "pslldq    $0x8,%%xmm5                     \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "por       %%xmm5,%%xmm1                   \n"
-    "psrldq    $0x8,%%xmm2                     \n"
-    "pslldq    $0x4,%%xmm3                     \n"
-    "por       %%xmm3,%%xmm2                   \n"
-    "movdqu    %%xmm1," MEMACCESS2(0x10,1) "   \n"
-    "movdqu    %%xmm2," MEMACCESS2(0x20,1) "   \n"
-    "lea       " MEMLEA(0x30,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src),  // %0
-    "+r"(dst),  // %1
-    "+r"(width)   // %2
-  : "m"(kShuffleMaskARGBToRGB24)  // %3
-  : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6"
-  );
+void ARGBToRGB24Row_SSSE3(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+
+      "movdqa    %3,%%xmm6                       \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x30(%0),%%xmm3                 \n"
+      "lea       0x40(%0),%0                     \n"
+      "pshufb    %%xmm6,%%xmm0                   \n"
+      "pshufb    %%xmm6,%%xmm1                   \n"
+      "pshufb    %%xmm6,%%xmm2                   \n"
+      "pshufb    %%xmm6,%%xmm3                   \n"
+      "movdqa    %%xmm1,%%xmm4                   \n"
+      "psrldq    $0x4,%%xmm1                     \n"
+      "pslldq    $0xc,%%xmm4                     \n"
+      "movdqa    %%xmm2,%%xmm5                   \n"
+      "por       %%xmm4,%%xmm0                   \n"
+      "pslldq    $0x8,%%xmm5                     \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "por       %%xmm5,%%xmm1                   \n"
+      "psrldq    $0x8,%%xmm2                     \n"
+      "pslldq    $0x4,%%xmm3                     \n"
+      "por       %%xmm3,%%xmm2                   \n"
+      "movdqu    %%xmm1,0x10(%1)                 \n"
+      "movdqu    %%xmm2,0x20(%1)                 \n"
+      "lea       0x30(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src),                    // %0
+        "+r"(dst),                    // %1
+        "+r"(width)                   // %2
+      : "m"(kShuffleMaskARGBToRGB24)  // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6");
 }
 
-void ARGBToRAWRow_SSSE3(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-    "movdqa    %3,%%xmm6                       \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm3   \n"
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "pshufb    %%xmm6,%%xmm0                   \n"
-    "pshufb    %%xmm6,%%xmm1                   \n"
-    "pshufb    %%xmm6,%%xmm2                   \n"
-    "pshufb    %%xmm6,%%xmm3                   \n"
-    "movdqa    %%xmm1,%%xmm4                   \n"
-    "psrldq    $0x4,%%xmm1                     \n"
-    "pslldq    $0xc,%%xmm4                     \n"
-    "movdqa    %%xmm2,%%xmm5                   \n"
-    "por       %%xmm4,%%xmm0                   \n"
-    "pslldq    $0x8,%%xmm5                     \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "por       %%xmm5,%%xmm1                   \n"
-    "psrldq    $0x8,%%xmm2                     \n"
-    "pslldq    $0x4,%%xmm3                     \n"
-    "por       %%xmm3,%%xmm2                   \n"
-    "movdqu    %%xmm1," MEMACCESS2(0x10,1) "   \n"
-    "movdqu    %%xmm2," MEMACCESS2(0x20,1) "   \n"
-    "lea       " MEMLEA(0x30,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src),  // %0
-    "+r"(dst),  // %1
-    "+r"(width)   // %2
-  : "m"(kShuffleMaskARGBToRAW)  // %3
-  : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6"
-  );
+void ARGBToRAWRow_SSSE3(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+
+      "movdqa    %3,%%xmm6                       \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x30(%0),%%xmm3                 \n"
+      "lea       0x40(%0),%0                     \n"
+      "pshufb    %%xmm6,%%xmm0                   \n"
+      "pshufb    %%xmm6,%%xmm1                   \n"
+      "pshufb    %%xmm6,%%xmm2                   \n"
+      "pshufb    %%xmm6,%%xmm3                   \n"
+      "movdqa    %%xmm1,%%xmm4                   \n"
+      "psrldq    $0x4,%%xmm1                     \n"
+      "pslldq    $0xc,%%xmm4                     \n"
+      "movdqa    %%xmm2,%%xmm5                   \n"
+      "por       %%xmm4,%%xmm0                   \n"
+      "pslldq    $0x8,%%xmm5                     \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "por       %%xmm5,%%xmm1                   \n"
+      "psrldq    $0x8,%%xmm2                     \n"
+      "pslldq    $0x4,%%xmm3                     \n"
+      "por       %%xmm3,%%xmm2                   \n"
+      "movdqu    %%xmm1,0x10(%1)                 \n"
+      "movdqu    %%xmm2,0x20(%1)                 \n"
+      "lea       0x30(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src),                  // %0
+        "+r"(dst),                  // %1
+        "+r"(width)                 // %2
+      : "m"(kShuffleMaskARGBToRAW)  // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6");
 }
 
-void ARGBToRGB565Row_SSE2(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-    "pcmpeqb   %%xmm3,%%xmm3                   \n"
-    "psrld     $0x1b,%%xmm3                    \n"
-    "pcmpeqb   %%xmm4,%%xmm4                   \n"
-    "psrld     $0x1a,%%xmm4                    \n"
-    "pslld     $0x5,%%xmm4                     \n"
-    "pcmpeqb   %%xmm5,%%xmm5                   \n"
-    "pslld     $0xb,%%xmm5                     \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "movdqa    %%xmm0,%%xmm2                   \n"
-    "pslld     $0x8,%%xmm0                     \n"
-    "psrld     $0x3,%%xmm1                     \n"
-    "psrld     $0x5,%%xmm2                     \n"
-    "psrad     $0x10,%%xmm0                    \n"
-    "pand      %%xmm3,%%xmm1                   \n"
-    "pand      %%xmm4,%%xmm2                   \n"
-    "pand      %%xmm5,%%xmm0                   \n"
-    "por       %%xmm2,%%xmm1                   \n"
-    "por       %%xmm1,%%xmm0                   \n"
-    "packssdw  %%xmm0,%%xmm0                   \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "movq      %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $0x4,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src),  // %0
-    "+r"(dst),  // %1
-    "+r"(width)   // %2
-  :: "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+#ifdef HAS_ARGBTORGB24ROW_AVX2
+// vpermd for 12+12 to 24
+static const lvec32 kPermdRGB24_AVX = {0, 1, 2, 4, 5, 6, 3, 7};
+
+void ARGBToRGB24Row_AVX2(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "vbroadcastf128 %3,%%ymm6                  \n"
+      "vmovdqa    %4,%%ymm7                      \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "vmovdqu    0x20(%0),%%ymm1                \n"
+      "vmovdqu    0x40(%0),%%ymm2                \n"
+      "vmovdqu    0x60(%0),%%ymm3                \n"
+      "lea        0x80(%0),%0                    \n"
+      "vpshufb    %%ymm6,%%ymm0,%%ymm0           \n"  // xxx0yyy0
+      "vpshufb    %%ymm6,%%ymm1,%%ymm1           \n"
+      "vpshufb    %%ymm6,%%ymm2,%%ymm2           \n"
+      "vpshufb    %%ymm6,%%ymm3,%%ymm3           \n"
+      "vpermd     %%ymm0,%%ymm7,%%ymm0           \n"  // pack to 24 bytes
+      "vpermd     %%ymm1,%%ymm7,%%ymm1           \n"
+      "vpermd     %%ymm2,%%ymm7,%%ymm2           \n"
+      "vpermd     %%ymm3,%%ymm7,%%ymm3           \n"
+      "vpermq     $0x3f,%%ymm1,%%ymm4            \n"  // combine 24 + 8
+      "vpor       %%ymm4,%%ymm0,%%ymm0           \n"
+      "vmovdqu    %%ymm0,(%1)                    \n"
+      "vpermq     $0xf9,%%ymm1,%%ymm1            \n"  // combine 16 + 16
+      "vpermq     $0x4f,%%ymm2,%%ymm4            \n"
+      "vpor       %%ymm4,%%ymm1,%%ymm1           \n"
+      "vmovdqu    %%ymm1,0x20(%1)                \n"
+      "vpermq     $0xfe,%%ymm2,%%ymm2            \n"  // combine 8 + 24
+      "vpermq     $0x93,%%ymm3,%%ymm3            \n"
+      "vpor       %%ymm3,%%ymm2,%%ymm2           \n"
+      "vmovdqu    %%ymm2,0x40(%1)                \n"
+      "lea        0x60(%1),%1                    \n"
+      "sub        $0x20,%2                       \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+      : "+r"(src),                     // %0
+        "+r"(dst),                     // %1
+        "+r"(width)                    // %2
+      : "m"(kShuffleMaskARGBToRGB24),  // %3
+        "m"(kPermdRGB24_AVX)           // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
+}
+#endif
+
+#ifdef HAS_ARGBTORGB24ROW_AVX512VBMI
+// Shuffle table for converting ARGBToRGB24
+static const ulvec8 kPermARGBToRGB24_0 = {
+    0u,  1u,  2u,  4u,  5u,  6u,  8u,  9u,  10u, 12u, 13u,
+    14u, 16u, 17u, 18u, 20u, 21u, 22u, 24u, 25u, 26u, 28u,
+    29u, 30u, 32u, 33u, 34u, 36u, 37u, 38u, 40u, 41u};
+static const ulvec8 kPermARGBToRGB24_1 = {
+    10u, 12u, 13u, 14u, 16u, 17u, 18u, 20u, 21u, 22u, 24u,
+    25u, 26u, 28u, 29u, 30u, 32u, 33u, 34u, 36u, 37u, 38u,
+    40u, 41u, 42u, 44u, 45u, 46u, 48u, 49u, 50u, 52u};
+static const ulvec8 kPermARGBToRGB24_2 = {
+    21u, 22u, 24u, 25u, 26u, 28u, 29u, 30u, 32u, 33u, 34u,
+    36u, 37u, 38u, 40u, 41u, 42u, 44u, 45u, 46u, 48u, 49u,
+    50u, 52u, 53u, 54u, 56u, 57u, 58u, 60u, 61u, 62u};
+
+void ARGBToRGB24Row_AVX512VBMI(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "vmovdqa    %3,%%ymm5                      \n"
+      "vmovdqa    %4,%%ymm6                      \n"
+      "vmovdqa    %5,%%ymm7                      \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "vmovdqu    0x20(%0),%%ymm1                \n"
+      "vmovdqu    0x40(%0),%%ymm2                \n"
+      "vmovdqu    0x60(%0),%%ymm3                \n"
+      "lea        0x80(%0),%0                    \n"
+      "vpermt2b   %%ymm1,%%ymm5,%%ymm0           \n"
+      "vpermt2b   %%ymm2,%%ymm6,%%ymm1           \n"
+      "vpermt2b   %%ymm3,%%ymm7,%%ymm2           \n"
+      "vmovdqu    %%ymm0,(%1)                    \n"
+      "vmovdqu    %%ymm1,0x20(%1)                \n"
+      "vmovdqu    %%ymm2,0x40(%1)                \n"
+      "lea        0x60(%1),%1                    \n"
+      "sub        $0x20,%2                       \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+      : "+r"(src),                // %0
+        "+r"(dst),                // %1
+        "+r"(width)               // %2
+      : "m"(kPermARGBToRGB24_0),  // %3
+        "m"(kPermARGBToRGB24_1),  // %4
+        "m"(kPermARGBToRGB24_2)   // %5
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5", "xmm6", "xmm7");
+}
+#endif
+
+#ifdef HAS_ARGBTORAWROW_AVX2
+void ARGBToRAWRow_AVX2(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "vbroadcastf128 %3,%%ymm6                  \n"
+      "vmovdqa    %4,%%ymm7                      \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "vmovdqu    0x20(%0),%%ymm1                \n"
+      "vmovdqu    0x40(%0),%%ymm2                \n"
+      "vmovdqu    0x60(%0),%%ymm3                \n"
+      "lea        0x80(%0),%0                    \n"
+      "vpshufb    %%ymm6,%%ymm0,%%ymm0           \n"  // xxx0yyy0
+      "vpshufb    %%ymm6,%%ymm1,%%ymm1           \n"
+      "vpshufb    %%ymm6,%%ymm2,%%ymm2           \n"
+      "vpshufb    %%ymm6,%%ymm3,%%ymm3           \n"
+      "vpermd     %%ymm0,%%ymm7,%%ymm0           \n"  // pack to 24 bytes
+      "vpermd     %%ymm1,%%ymm7,%%ymm1           \n"
+      "vpermd     %%ymm2,%%ymm7,%%ymm2           \n"
+      "vpermd     %%ymm3,%%ymm7,%%ymm3           \n"
+      "vpermq     $0x3f,%%ymm1,%%ymm4            \n"  // combine 24 + 8
+      "vpor       %%ymm4,%%ymm0,%%ymm0           \n"
+      "vmovdqu    %%ymm0,(%1)                    \n"
+      "vpermq     $0xf9,%%ymm1,%%ymm1            \n"  // combine 16 + 16
+      "vpermq     $0x4f,%%ymm2,%%ymm4            \n"
+      "vpor       %%ymm4,%%ymm1,%%ymm1           \n"
+      "vmovdqu    %%ymm1,0x20(%1)                \n"
+      "vpermq     $0xfe,%%ymm2,%%ymm2            \n"  // combine 8 + 24
+      "vpermq     $0x93,%%ymm3,%%ymm3            \n"
+      "vpor       %%ymm3,%%ymm2,%%ymm2           \n"
+      "vmovdqu    %%ymm2,0x40(%1)                \n"
+      "lea        0x60(%1),%1                    \n"
+      "sub        $0x20,%2                       \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+      : "+r"(src),                   // %0
+        "+r"(dst),                   // %1
+        "+r"(width)                  // %2
+      : "m"(kShuffleMaskARGBToRAW),  // %3
+        "m"(kPermdRGB24_AVX)         // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
+}
+#endif
+
+void ARGBToRGB565Row_SSE2(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "pcmpeqb   %%xmm3,%%xmm3                   \n"
+      "psrld     $0x1b,%%xmm3                    \n"
+      "pcmpeqb   %%xmm4,%%xmm4                   \n"
+      "psrld     $0x1a,%%xmm4                    \n"
+      "pslld     $0x5,%%xmm4                     \n"
+      "pcmpeqb   %%xmm5,%%xmm5                   \n"
+      "pslld     $0xb,%%xmm5                     \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "movdqa    %%xmm0,%%xmm2                   \n"
+      "pslld     $0x8,%%xmm0                     \n"
+      "psrld     $0x3,%%xmm1                     \n"
+      "psrld     $0x5,%%xmm2                     \n"
+      "psrad     $0x10,%%xmm0                    \n"
+      "pand      %%xmm3,%%xmm1                   \n"
+      "pand      %%xmm4,%%xmm2                   \n"
+      "pand      %%xmm5,%%xmm0                   \n"
+      "por       %%xmm2,%%xmm1                   \n"
+      "por       %%xmm1,%%xmm0                   \n"
+      "packssdw  %%xmm0,%%xmm0                   \n"
+      "lea       0x10(%0),%0                     \n"
+      "movq      %%xmm0,(%1)                     \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $0x4,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+        ::"memory",
+        "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 
-void ARGBToRGB565DitherRow_SSE2(const uint8* src, uint8* dst,
-                                const uint32 dither4, int width) {
-  asm volatile (
-    "movd       %3,%%xmm6                      \n"
-    "punpcklbw  %%xmm6,%%xmm6                  \n"
-    "movdqa     %%xmm6,%%xmm7                  \n"
-    "punpcklwd  %%xmm6,%%xmm6                  \n"
-    "punpckhwd  %%xmm7,%%xmm7                  \n"
-    "pcmpeqb    %%xmm3,%%xmm3                  \n"
-    "psrld      $0x1b,%%xmm3                   \n"
-    "pcmpeqb    %%xmm4,%%xmm4                  \n"
-    "psrld      $0x1a,%%xmm4                   \n"
-    "pslld      $0x5,%%xmm4                    \n"
-    "pcmpeqb    %%xmm5,%%xmm5                  \n"
-    "pslld      $0xb,%%xmm5                    \n"
+void ARGBToRGB565DitherRow_SSE2(const uint8_t* src,
+                                uint8_t* dst,
+                                const uint32_t dither4,
+                                int width) {
+  asm volatile(
+      "movd       %3,%%xmm6                      \n"
+      "punpcklbw  %%xmm6,%%xmm6                  \n"
+      "movdqa     %%xmm6,%%xmm7                  \n"
+      "punpcklwd  %%xmm6,%%xmm6                  \n"
+      "punpckhwd  %%xmm7,%%xmm7                  \n"
+      "pcmpeqb    %%xmm3,%%xmm3                  \n"
+      "psrld      $0x1b,%%xmm3                   \n"
+      "pcmpeqb    %%xmm4,%%xmm4                  \n"
+      "psrld      $0x1a,%%xmm4                   \n"
+      "pslld      $0x5,%%xmm4                    \n"
+      "pcmpeqb    %%xmm5,%%xmm5                  \n"
+      "pslld      $0xb,%%xmm5                    \n"
 
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu     (%0),%%xmm0                    \n"
-    "paddusb    %%xmm6,%%xmm0                  \n"
-    "movdqa     %%xmm0,%%xmm1                  \n"
-    "movdqa     %%xmm0,%%xmm2                  \n"
-    "pslld      $0x8,%%xmm0                    \n"
-    "psrld      $0x3,%%xmm1                    \n"
-    "psrld      $0x5,%%xmm2                    \n"
-    "psrad      $0x10,%%xmm0                   \n"
-    "pand       %%xmm3,%%xmm1                  \n"
-    "pand       %%xmm4,%%xmm2                  \n"
-    "pand       %%xmm5,%%xmm0                  \n"
-    "por        %%xmm2,%%xmm1                  \n"
-    "por        %%xmm1,%%xmm0                  \n"
-    "packssdw   %%xmm0,%%xmm0                  \n"
-    "lea        0x10(%0),%0                    \n"
-    "movq       %%xmm0,(%1)                    \n"
-    "lea        0x8(%1),%1                     \n"
-    "sub        $0x4,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src),  // %0
-    "+r"(dst),  // %1
-    "+r"(width)   // %2
-  : "m"(dither4) // %3
-  : "memory", "cc",
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu     (%0),%%xmm0                    \n"
+      "paddusb    %%xmm6,%%xmm0                  \n"
+      "movdqa     %%xmm0,%%xmm1                  \n"
+      "movdqa     %%xmm0,%%xmm2                  \n"
+      "pslld      $0x8,%%xmm0                    \n"
+      "psrld      $0x3,%%xmm1                    \n"
+      "psrld      $0x5,%%xmm2                    \n"
+      "psrad      $0x10,%%xmm0                   \n"
+      "pand       %%xmm3,%%xmm1                  \n"
+      "pand       %%xmm4,%%xmm2                  \n"
+      "pand       %%xmm5,%%xmm0                  \n"
+      "por        %%xmm2,%%xmm1                  \n"
+      "por        %%xmm1,%%xmm0                  \n"
+      "packssdw   %%xmm0,%%xmm0                  \n"
+      "lea        0x10(%0),%0                    \n"
+      "movq       %%xmm0,(%1)                    \n"
+      "lea        0x8(%1),%1                     \n"
+      "sub        $0x4,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src),    // %0
+        "+r"(dst),    // %1
+        "+r"(width)   // %2
+      : "m"(dither4)  // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 
 #ifdef HAS_ARGBTORGB565DITHERROW_AVX2
-void ARGBToRGB565DitherRow_AVX2(const uint8* src, uint8* dst,
-                                const uint32 dither4, int width) {
-  asm volatile (
-    "vbroadcastss %3,%%xmm6                    \n"
-    "vpunpcklbw %%xmm6,%%xmm6,%%xmm6           \n"
-    "vpermq     $0xd8,%%ymm6,%%ymm6            \n"
-    "vpunpcklwd %%ymm6,%%ymm6,%%ymm6           \n"
-    "vpcmpeqb   %%ymm3,%%ymm3,%%ymm3           \n"
-    "vpsrld     $0x1b,%%ymm3,%%ymm3            \n"
-    "vpcmpeqb   %%ymm4,%%ymm4,%%ymm4           \n"
-    "vpsrld     $0x1a,%%ymm4,%%ymm4            \n"
-    "vpslld     $0x5,%%ymm4,%%ymm4             \n"
-    "vpslld     $0xb,%%ymm3,%%ymm5             \n"
+void ARGBToRGB565DitherRow_AVX2(const uint8_t* src,
+                                uint8_t* dst,
+                                const uint32_t dither4,
+                                int width) {
+  asm volatile(
+      "vbroadcastss %3,%%xmm6                    \n"
+      "vpunpcklbw %%xmm6,%%xmm6,%%xmm6           \n"
+      "vpermq     $0xd8,%%ymm6,%%ymm6            \n"
+      "vpunpcklwd %%ymm6,%%ymm6,%%ymm6           \n"
+      "vpcmpeqb   %%ymm3,%%ymm3,%%ymm3           \n"
+      "vpsrld     $0x1b,%%ymm3,%%ymm3            \n"
+      "vpcmpeqb   %%ymm4,%%ymm4,%%ymm4           \n"
+      "vpsrld     $0x1a,%%ymm4,%%ymm4            \n"
+      "vpslld     $0x5,%%ymm4,%%ymm4             \n"
+      "vpslld     $0xb,%%ymm3,%%ymm5             \n"
 
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    (%0),%%ymm0                    \n"
-    "vpaddusb   %%ymm6,%%ymm0,%%ymm0           \n"
-    "vpsrld     $0x5,%%ymm0,%%ymm2             \n"
-    "vpsrld     $0x3,%%ymm0,%%ymm1             \n"
-    "vpsrld     $0x8,%%ymm0,%%ymm0             \n"
-    "vpand      %%ymm4,%%ymm2,%%ymm2           \n"
-    "vpand      %%ymm3,%%ymm1,%%ymm1           \n"
-    "vpand      %%ymm5,%%ymm0,%%ymm0           \n"
-    "vpor       %%ymm2,%%ymm1,%%ymm1           \n"
-    "vpor       %%ymm1,%%ymm0,%%ymm0           \n"
-    "vpackusdw  %%ymm0,%%ymm0,%%ymm0           \n"
-    "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
-    "lea        0x20(%0),%0                    \n"
-    "vmovdqu    %%xmm0,(%1)                    \n"
-    "lea        0x10(%1),%1                    \n"
-    "sub        $0x8,%2                        \n"
-    "jg         1b                             \n"
-    "vzeroupper                                \n"
-  : "+r"(src),  // %0
-    "+r"(dst),  // %1
-    "+r"(width)   // %2
-  : "m"(dither4) // %3
-  : "memory", "cc",
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "vpaddusb   %%ymm6,%%ymm0,%%ymm0           \n"
+      "vpsrld     $0x5,%%ymm0,%%ymm2             \n"
+      "vpsrld     $0x3,%%ymm0,%%ymm1             \n"
+      "vpsrld     $0x8,%%ymm0,%%ymm0             \n"
+      "vpand      %%ymm4,%%ymm2,%%ymm2           \n"
+      "vpand      %%ymm3,%%ymm1,%%ymm1           \n"
+      "vpand      %%ymm5,%%ymm0,%%ymm0           \n"
+      "vpor       %%ymm2,%%ymm1,%%ymm1           \n"
+      "vpor       %%ymm1,%%ymm0,%%ymm0           \n"
+      "vpackusdw  %%ymm0,%%ymm0,%%ymm0           \n"
+      "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
+      "lea        0x20(%0),%0                    \n"
+      "vmovdqu    %%xmm0,(%1)                    \n"
+      "lea        0x10(%1),%1                    \n"
+      "sub        $0x8,%2                        \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+      : "+r"(src),    // %0
+        "+r"(dst),    // %1
+        "+r"(width)   // %2
+      : "m"(dither4)  // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 #endif  // HAS_ARGBTORGB565DITHERROW_AVX2
 
+void ARGBToARGB1555Row_SSE2(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "pcmpeqb   %%xmm4,%%xmm4                   \n"
+      "psrld     $0x1b,%%xmm4                    \n"
+      "movdqa    %%xmm4,%%xmm5                   \n"
+      "pslld     $0x5,%%xmm5                     \n"
+      "movdqa    %%xmm4,%%xmm6                   \n"
+      "pslld     $0xa,%%xmm6                     \n"
+      "pcmpeqb   %%xmm7,%%xmm7                   \n"
+      "pslld     $0xf,%%xmm7                     \n"
 
-void ARGBToARGB1555Row_SSE2(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-    "pcmpeqb   %%xmm4,%%xmm4                   \n"
-    "psrld     $0x1b,%%xmm4                    \n"
-    "movdqa    %%xmm4,%%xmm5                   \n"
-    "pslld     $0x5,%%xmm5                     \n"
-    "movdqa    %%xmm4,%%xmm6                   \n"
-    "pslld     $0xa,%%xmm6                     \n"
-    "pcmpeqb   %%xmm7,%%xmm7                   \n"
-    "pslld     $0xf,%%xmm7                     \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "movdqa    %%xmm0,%%xmm2                   \n"
-    "movdqa    %%xmm0,%%xmm3                   \n"
-    "psrad     $0x10,%%xmm0                    \n"
-    "psrld     $0x3,%%xmm1                     \n"
-    "psrld     $0x6,%%xmm2                     \n"
-    "psrld     $0x9,%%xmm3                     \n"
-    "pand      %%xmm7,%%xmm0                   \n"
-    "pand      %%xmm4,%%xmm1                   \n"
-    "pand      %%xmm5,%%xmm2                   \n"
-    "pand      %%xmm6,%%xmm3                   \n"
-    "por       %%xmm1,%%xmm0                   \n"
-    "por       %%xmm3,%%xmm2                   \n"
-    "por       %%xmm2,%%xmm0                   \n"
-    "packssdw  %%xmm0,%%xmm0                   \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "movq      %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $0x4,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src),  // %0
-    "+r"(dst),  // %1
-    "+r"(width)   // %2
-  :: "memory", "cc",
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "movdqa    %%xmm0,%%xmm2                   \n"
+      "movdqa    %%xmm0,%%xmm3                   \n"
+      "psrad     $0x10,%%xmm0                    \n"
+      "psrld     $0x3,%%xmm1                     \n"
+      "psrld     $0x6,%%xmm2                     \n"
+      "psrld     $0x9,%%xmm3                     \n"
+      "pand      %%xmm7,%%xmm0                   \n"
+      "pand      %%xmm4,%%xmm1                   \n"
+      "pand      %%xmm5,%%xmm2                   \n"
+      "pand      %%xmm6,%%xmm3                   \n"
+      "por       %%xmm1,%%xmm0                   \n"
+      "por       %%xmm3,%%xmm2                   \n"
+      "por       %%xmm2,%%xmm0                   \n"
+      "packssdw  %%xmm0,%%xmm0                   \n"
+      "lea       0x10(%0),%0                     \n"
+      "movq      %%xmm0,(%1)                     \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $0x4,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+        ::"memory",
+        "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7");
 }
 
-void ARGBToARGB4444Row_SSE2(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-    "pcmpeqb   %%xmm4,%%xmm4                   \n"
-    "psllw     $0xc,%%xmm4                     \n"
-    "movdqa    %%xmm4,%%xmm3                   \n"
-    "psrlw     $0x8,%%xmm3                     \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "pand      %%xmm3,%%xmm0                   \n"
-    "pand      %%xmm4,%%xmm1                   \n"
-    "psrlq     $0x4,%%xmm0                     \n"
-    "psrlq     $0x8,%%xmm1                     \n"
-    "por       %%xmm1,%%xmm0                   \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "movq      %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $0x4,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src),  // %0
-    "+r"(dst),  // %1
-    "+r"(width)   // %2
-  :: "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4"
-  );
+void ARGBToARGB4444Row_SSE2(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "pcmpeqb   %%xmm4,%%xmm4                   \n"
+      "psllw     $0xc,%%xmm4                     \n"
+      "movdqa    %%xmm4,%%xmm3                   \n"
+      "psrlw     $0x8,%%xmm3                     \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "pand      %%xmm3,%%xmm0                   \n"
+      "pand      %%xmm4,%%xmm1                   \n"
+      "psrlq     $0x4,%%xmm0                     \n"
+      "psrlq     $0x8,%%xmm1                     \n"
+      "por       %%xmm1,%%xmm0                   \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
+      "lea       0x10(%0),%0                     \n"
+      "movq      %%xmm0,(%1)                     \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $0x4,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+        ::"memory",
+        "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4");
 }
 #endif  // HAS_RGB24TOARGBROW_SSSE3
 
+/*
+
+ARGBToAR30Row:
+
+Red Blue
+With the 8-bit value in the upper bits of a short, vpmulhuw by (1024+4) will
+produce a 10-bit value in the low 10 bits of each 16-bit value. This is what's
+wanted for the blue channel. The red needs to be shifted 4 left, so multiply by
+(1024+4)*16 for red.
+
+Alpha Green
+Alpha and Green are already in the high bits, so vpand can zero out the other
+bits, keeping just the 2 upper bits of alpha and the 8-bit green. The same
+multiplier, (1024+4), could be used for green, putting the 10-bit green in the
+lsb. Alpha needs a simple multiplier to shift it into position: it wants a gap
+of 10 bits above the green. Green is 10 bits, so there are 6 bits in the low
+short; 4 more are needed, so a multiplier of 4 gets the 2 bits into the upper
+16 bits, and then a shift of 4 is a multiply by 16, so (4*16) = 64. Then shift
+the result left 10 to position the A and G channels.
+*/
+
+// Shuffle tables to place the B and R channels in the high bytes of 16-bit
+// lanes (low bytes zeroed) ahead of the pmulhuw by kMulRB10.
+static const uvec8 kShuffleRB30 = {128u, 0u, 128u, 2u,  128u, 4u,  128u, 6u,
+                                   128u, 8u, 128u, 10u, 128u, 12u, 128u, 14u};
+
+static const uvec8 kShuffleBR30 = {128u, 2u,  128u, 0u, 128u, 6u,  128u, 4u,
+                                   128u, 10u, 128u, 8u, 128u, 14u, 128u, 12u};
+
+static const uint32_t kMulRB10 = 1028 * 16 * 65536 + 1028;
+static const uint32_t kMaskRB10 = 0x3ff003ff;
+static const uint32_t kMaskAG10 = 0xc000ff00;
+static const uint32_t kMulAG10 = 64 * 65536 + 1028;
+
+void ARGBToAR30Row_SSSE3(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "movdqa     %3,%%xmm2                     \n"  // shuffler for RB
+      "movd       %4,%%xmm3                     \n"  // multiplier for RB
+      "movd       %5,%%xmm4                     \n"  // mask for R10 B10
+      "movd       %6,%%xmm5                     \n"  // mask for AG
+      "movd       %7,%%xmm6                     \n"  // multiplier for AG
+      "pshufd     $0x0,%%xmm3,%%xmm3            \n"
+      "pshufd     $0x0,%%xmm4,%%xmm4            \n"
+      "pshufd     $0x0,%%xmm5,%%xmm5            \n"
+      "pshufd     $0x0,%%xmm6,%%xmm6            \n"
+      "sub        %0,%1                         \n"
+
+      "1:                                       \n"
+      "movdqu     (%0),%%xmm0                   \n"  // fetch 4 ARGB pixels
+      "movdqa     %%xmm0,%%xmm1                 \n"
+      "pshufb     %%xmm2,%%xmm1                 \n"  // R0B0
+      "pand       %%xmm5,%%xmm0                 \n"  // A0G0
+      "pmulhuw    %%xmm3,%%xmm1                 \n"  // X2 R16 X4  B10
+      "pmulhuw    %%xmm6,%%xmm0                 \n"  // X10 A2 X10 G10
+      "pand       %%xmm4,%%xmm1                 \n"  // X2 R10 X10 B10
+      "pslld      $10,%%xmm0                    \n"  // A2 x10 G10 x10
+      "por        %%xmm1,%%xmm0                 \n"  // A2 R10 G10 B10
+      "movdqu     %%xmm0,(%1,%0)                \n"  // store 4 AR30 pixels
+      "add        $0x10,%0                      \n"
+      "sub        $0x4,%2                       \n"
+      "jg         1b                            \n"
+
+      : "+r"(src),          // %0
+        "+r"(dst),          // %1
+        "+r"(width)         // %2
+      : "m"(kShuffleRB30),  // %3
+        "m"(kMulRB10),      // %4
+        "m"(kMaskRB10),     // %5
+        "m"(kMaskAG10),     // %6
+        "m"(kMulAG10)       // %7
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6");
+}
+
+void ABGRToAR30Row_SSSE3(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "movdqa     %3,%%xmm2                     \n"  // shuffler for RB
+      "movd       %4,%%xmm3                     \n"  // multiplier for RB
+      "movd       %5,%%xmm4                     \n"  // mask for R10 B10
+      "movd       %6,%%xmm5                     \n"  // mask for AG
+      "movd       %7,%%xmm6                     \n"  // multiplier for AG
+      "pshufd     $0x0,%%xmm3,%%xmm3            \n"
+      "pshufd     $0x0,%%xmm4,%%xmm4            \n"
+      "pshufd     $0x0,%%xmm5,%%xmm5            \n"
+      "pshufd     $0x0,%%xmm6,%%xmm6            \n"
+      "sub        %0,%1                         \n"
+
+      "1:                                       \n"
+      "movdqu     (%0),%%xmm0                   \n"  // fetch 4 ABGR pixels
+      "movdqa     %%xmm0,%%xmm1                 \n"
+      "pshufb     %%xmm2,%%xmm1                 \n"  // R0B0
+      "pand       %%xmm5,%%xmm0                 \n"  // A0G0
+      "pmulhuw    %%xmm3,%%xmm1                 \n"  // X2 R16 X4  B10
+      "pmulhuw    %%xmm6,%%xmm0                 \n"  // X10 A2 X10 G10
+      "pand       %%xmm4,%%xmm1                 \n"  // X2 R10 X10 B10
+      "pslld      $10,%%xmm0                    \n"  // A2 x10 G10 x10
+      "por        %%xmm1,%%xmm0                 \n"  // A2 R10 G10 B10
+      "movdqu     %%xmm0,(%1,%0)                \n"  // store 4 AR30 pixels
+      "add        $0x10,%0                      \n"
+      "sub        $0x4,%2                       \n"
+      "jg         1b                            \n"
+
+      : "+r"(src),          // %0
+        "+r"(dst),          // %1
+        "+r"(width)         // %2
+      : "m"(kShuffleBR30),  // %3  reversed shuffler
+        "m"(kMulRB10),      // %4
+        "m"(kMaskRB10),     // %5
+        "m"(kMaskAG10),     // %6
+        "m"(kMulAG10)       // %7
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6");
+}
+
+#ifdef HAS_ARGBTOAR30ROW_AVX2
+void ARGBToAR30Row_AVX2(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "vbroadcastf128 %3,%%ymm2                  \n"  // shuffler for RB
+      "vbroadcastss  %4,%%ymm3                   \n"  // multiplier for RB
+      "vbroadcastss  %5,%%ymm4                   \n"  // mask for R10 B10
+      "vbroadcastss  %6,%%ymm5                   \n"  // mask for AG
+      "vbroadcastss  %7,%%ymm6                   \n"  // multiplier for AG
+      "sub        %0,%1                          \n"
+
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"  // fetch 8 ARGB pixels
+      "vpshufb    %%ymm2,%%ymm0,%%ymm1           \n"  // R0B0
+      "vpand      %%ymm5,%%ymm0,%%ymm0           \n"  // A0G0
+      "vpmulhuw   %%ymm3,%%ymm1,%%ymm1           \n"  // X2 R16 X4  B10
+      "vpmulhuw   %%ymm6,%%ymm0,%%ymm0           \n"  // X10 A2 X10 G10
+      "vpand      %%ymm4,%%ymm1,%%ymm1           \n"  // X2 R10 X10 B10
+      "vpslld     $10,%%ymm0,%%ymm0              \n"  // A2 x10 G10 x10
+      "vpor       %%ymm1,%%ymm0,%%ymm0           \n"  // A2 R10 G10 B10
+      "vmovdqu    %%ymm0,(%1,%0)                 \n"  // store 8 AR30 pixels
+      "add        $0x20,%0                       \n"
+      "sub        $0x8,%2                        \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+
+      : "+r"(src),          // %0
+        "+r"(dst),          // %1
+        "+r"(width)         // %2
+      : "m"(kShuffleRB30),  // %3
+        "m"(kMulRB10),      // %4
+        "m"(kMaskRB10),     // %5
+        "m"(kMaskAG10),     // %6
+        "m"(kMulAG10)       // %7
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6");
+}
+#endif
+
+#ifdef HAS_ABGRTOAR30ROW_AVX2
+void ABGRToAR30Row_AVX2(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "vbroadcastf128 %3,%%ymm2                  \n"  // shuffler for RB
+      "vbroadcastss  %4,%%ymm3                   \n"  // multiplier for RB
+      "vbroadcastss  %5,%%ymm4                   \n"  // mask for R10 B10
+      "vbroadcastss  %6,%%ymm5                   \n"  // mask for AG
+      "vbroadcastss  %7,%%ymm6                   \n"  // multiplier for AG
+      "sub        %0,%1                          \n"
+
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"  // fetch 8 ABGR pixels
+      "vpshufb    %%ymm2,%%ymm0,%%ymm1           \n"  // R0B0
+      "vpand      %%ymm5,%%ymm0,%%ymm0           \n"  // A0G0
+      "vpmulhuw   %%ymm3,%%ymm1,%%ymm1           \n"  // X2 R16 X4  B10
+      "vpmulhuw   %%ymm6,%%ymm0,%%ymm0           \n"  // X10 A2 X10 G10
+      "vpand      %%ymm4,%%ymm1,%%ymm1           \n"  // X2 R10 X10 B10
+      "vpslld     $10,%%ymm0,%%ymm0              \n"  // A2 x10 G10 x10
+      "vpor       %%ymm1,%%ymm0,%%ymm0           \n"  // A2 R10 G10 B10
+      "vmovdqu    %%ymm0,(%1,%0)                 \n"  // store 8 AR30 pixels
+      "add        $0x20,%0                       \n"
+      "sub        $0x8,%2                        \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+
+      : "+r"(src),          // %0
+        "+r"(dst),          // %1
+        "+r"(width)         // %2
+      : "m"(kShuffleBR30),  // %3  reversed shuffler
+        "m"(kMulRB10),      // %4
+        "m"(kMaskRB10),     // %5
+        "m"(kMaskAG10),     // %6
+        "m"(kMulAG10)       // %7
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6");
+}
+#endif
+
 #ifdef HAS_ARGBTOYROW_SSSE3
 // Convert 16 ARGB pixels (64 bytes) to 16 Y values.
-void ARGBToYRow_SSSE3(const uint8* src_argb, uint8* dst_y, int width) {
-  asm volatile (
-    "movdqa    %3,%%xmm4                       \n"
-    "movdqa    %4,%%xmm5                       \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm3   \n"
-    "pmaddubsw %%xmm4,%%xmm0                   \n"
-    "pmaddubsw %%xmm4,%%xmm1                   \n"
-    "pmaddubsw %%xmm4,%%xmm2                   \n"
-    "pmaddubsw %%xmm4,%%xmm3                   \n"
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "phaddw    %%xmm1,%%xmm0                   \n"
-    "phaddw    %%xmm3,%%xmm2                   \n"
-    "psrlw     $0x7,%%xmm0                     \n"
-    "psrlw     $0x7,%%xmm2                     \n"
-    "packuswb  %%xmm2,%%xmm0                   \n"
-    "paddb     %%xmm5,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  : "m"(kARGBToY),   // %3
-    "m"(kAddY16)     // %4
-  : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+void ARGBToYRow_SSSE3(const uint8_t* src_argb, uint8_t* dst_y, int width) {
+  asm volatile(
+      "movdqa    %3,%%xmm4                       \n"
+      "movdqa    %4,%%xmm5                       \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x30(%0),%%xmm3                 \n"
+      "pmaddubsw %%xmm4,%%xmm0                   \n"
+      "pmaddubsw %%xmm4,%%xmm1                   \n"
+      "pmaddubsw %%xmm4,%%xmm2                   \n"
+      "pmaddubsw %%xmm4,%%xmm3                   \n"
+      "lea       0x40(%0),%0                     \n"
+      "phaddw    %%xmm1,%%xmm0                   \n"
+      "phaddw    %%xmm3,%%xmm2                   \n"
+      "psrlw     $0x7,%%xmm0                     \n"
+      "psrlw     $0x7,%%xmm2                     \n"
+      "packuswb  %%xmm2,%%xmm0                   \n"
+      "paddb     %%xmm5,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      : "m"(kARGBToY),   // %3
+        "m"(kAddY16)     // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 #endif  // HAS_ARGBTOYROW_SSSE3
 
 #ifdef HAS_ARGBTOYJROW_SSSE3
 // Convert 16 ARGB pixels (64 bytes) to 16 YJ values.
 // Same as ARGBToYRow but different coefficients, no add 16, but do rounding.
-void ARGBToYJRow_SSSE3(const uint8* src_argb, uint8* dst_y, int width) {
-  asm volatile (
-    "movdqa    %3,%%xmm4                       \n"
-    "movdqa    %4,%%xmm5                       \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm3   \n"
-    "pmaddubsw %%xmm4,%%xmm0                   \n"
-    "pmaddubsw %%xmm4,%%xmm1                   \n"
-    "pmaddubsw %%xmm4,%%xmm2                   \n"
-    "pmaddubsw %%xmm4,%%xmm3                   \n"
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "phaddw    %%xmm1,%%xmm0                   \n"
-    "phaddw    %%xmm3,%%xmm2                   \n"
-    "paddw     %%xmm5,%%xmm0                   \n"
-    "paddw     %%xmm5,%%xmm2                   \n"
-    "psrlw     $0x7,%%xmm0                     \n"
-    "psrlw     $0x7,%%xmm2                     \n"
-    "packuswb  %%xmm2,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  : "m"(kARGBToYJ),  // %3
-    "m"(kAddYJ64)    // %4
-  : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+void ARGBToYJRow_SSSE3(const uint8_t* src_argb, uint8_t* dst_y, int width) {
+  asm volatile(
+      "movdqa    %3,%%xmm4                       \n"
+      "movdqa    %4,%%xmm5                       \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x30(%0),%%xmm3                 \n"
+      "pmaddubsw %%xmm4,%%xmm0                   \n"
+      "pmaddubsw %%xmm4,%%xmm1                   \n"
+      "pmaddubsw %%xmm4,%%xmm2                   \n"
+      "pmaddubsw %%xmm4,%%xmm3                   \n"
+      "lea       0x40(%0),%0                     \n"
+      "phaddw    %%xmm1,%%xmm0                   \n"
+      "phaddw    %%xmm3,%%xmm2                   \n"
+      "paddw     %%xmm5,%%xmm0                   \n"
+      "paddw     %%xmm5,%%xmm2                   \n"
+      "psrlw     $0x7,%%xmm0                     \n"
+      "psrlw     $0x7,%%xmm2                     \n"
+      "packuswb  %%xmm2,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      : "m"(kARGBToYJ),  // %3
+        "m"(kAddYJ64)    // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 #endif  // HAS_ARGBTOYJROW_SSSE3
 
 #ifdef HAS_ARGBTOYROW_AVX2
 // vpermd for vphaddw + vpackuswb vpermd.
-static const lvec32 kPermdARGBToY_AVX = {
-  0, 4, 1, 5, 2, 6, 3, 7
-};
+static const lvec32 kPermdARGBToY_AVX = {0, 4, 1, 5, 2, 6, 3, 7};
 
 // Convert 32 ARGB pixels (128 bytes) to 32 Y values.
-void ARGBToYRow_AVX2(const uint8* src_argb, uint8* dst_y, int width) {
-  asm volatile (
-    "vbroadcastf128 %3,%%ymm4                  \n"
-    "vbroadcastf128 %4,%%ymm5                  \n"
-    "vmovdqu    %5,%%ymm6                      \n"
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    " MEMACCESS(0) ",%%ymm0        \n"
-    "vmovdqu    " MEMACCESS2(0x20,0) ",%%ymm1  \n"
-    "vmovdqu    " MEMACCESS2(0x40,0) ",%%ymm2  \n"
-    "vmovdqu    " MEMACCESS2(0x60,0) ",%%ymm3  \n"
-    "vpmaddubsw %%ymm4,%%ymm0,%%ymm0           \n"
-    "vpmaddubsw %%ymm4,%%ymm1,%%ymm1           \n"
-    "vpmaddubsw %%ymm4,%%ymm2,%%ymm2           \n"
-    "vpmaddubsw %%ymm4,%%ymm3,%%ymm3           \n"
-    "lea       " MEMLEA(0x80,0) ",%0           \n"
-    "vphaddw    %%ymm1,%%ymm0,%%ymm0           \n"  // mutates.
-    "vphaddw    %%ymm3,%%ymm2,%%ymm2           \n"
-    "vpsrlw     $0x7,%%ymm0,%%ymm0             \n"
-    "vpsrlw     $0x7,%%ymm2,%%ymm2             \n"
-    "vpackuswb  %%ymm2,%%ymm0,%%ymm0           \n"  // mutates.
-    "vpermd     %%ymm0,%%ymm6,%%ymm0           \n"  // unmutate.
-    "vpaddb     %%ymm5,%%ymm0,%%ymm0           \n"  // add 16 for Y
-    "vmovdqu    %%ymm0," MEMACCESS(1) "        \n"
-    "lea       " MEMLEA(0x20,1) ",%1           \n"
-    "sub       $0x20,%2                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  : "m"(kARGBToY),   // %3
-    "m"(kAddY16),    // %4
-    "m"(kPermdARGBToY_AVX)  // %5
-  : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6"
-  );
+void ARGBToYRow_AVX2(const uint8_t* src_argb, uint8_t* dst_y, int width) {
+  asm volatile(
+      "vbroadcastf128 %3,%%ymm4                  \n"
+      "vbroadcastf128 %4,%%ymm5                  \n"
+      "vmovdqu    %5,%%ymm6                      \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "vmovdqu    0x20(%0),%%ymm1                \n"
+      "vmovdqu    0x40(%0),%%ymm2                \n"
+      "vmovdqu    0x60(%0),%%ymm3                \n"
+      "vpmaddubsw %%ymm4,%%ymm0,%%ymm0           \n"
+      "vpmaddubsw %%ymm4,%%ymm1,%%ymm1           \n"
+      "vpmaddubsw %%ymm4,%%ymm2,%%ymm2           \n"
+      "vpmaddubsw %%ymm4,%%ymm3,%%ymm3           \n"
+      "lea       0x80(%0),%0                     \n"
+      "vphaddw    %%ymm1,%%ymm0,%%ymm0           \n"  // mutates.
+      "vphaddw    %%ymm3,%%ymm2,%%ymm2           \n"
+      "vpsrlw     $0x7,%%ymm0,%%ymm0             \n"
+      "vpsrlw     $0x7,%%ymm2,%%ymm2             \n"
+      "vpackuswb  %%ymm2,%%ymm0,%%ymm0           \n"  // mutates.
+      "vpermd     %%ymm0,%%ymm6,%%ymm0           \n"  // unmutate.
+      "vpaddb     %%ymm5,%%ymm0,%%ymm0           \n"  // add 16 for Y
+      "vmovdqu    %%ymm0,(%1)                    \n"
+      "lea       0x20(%1),%1                     \n"
+      "sub       $0x20,%2                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_argb),         // %0
+        "+r"(dst_y),            // %1
+        "+r"(width)             // %2
+      : "m"(kARGBToY),          // %3
+        "m"(kAddY16),           // %4
+        "m"(kPermdARGBToY_AVX)  // %5
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6");
 }
 #endif  // HAS_ARGBTOYROW_AVX2
 
 #ifdef HAS_ARGBTOYJROW_AVX2
 // Convert 32 ARGB pixels (128 bytes) to 32 Y values.
-void ARGBToYJRow_AVX2(const uint8* src_argb, uint8* dst_y, int width) {
-  asm volatile (
-    "vbroadcastf128 %3,%%ymm4                  \n"
-    "vbroadcastf128 %4,%%ymm5                  \n"
-    "vmovdqu    %5,%%ymm6                      \n"
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    " MEMACCESS(0) ",%%ymm0        \n"
-    "vmovdqu    " MEMACCESS2(0x20,0) ",%%ymm1  \n"
-    "vmovdqu    " MEMACCESS2(0x40,0) ",%%ymm2  \n"
-    "vmovdqu    " MEMACCESS2(0x60,0) ",%%ymm3  \n"
-    "vpmaddubsw %%ymm4,%%ymm0,%%ymm0           \n"
-    "vpmaddubsw %%ymm4,%%ymm1,%%ymm1           \n"
-    "vpmaddubsw %%ymm4,%%ymm2,%%ymm2           \n"
-    "vpmaddubsw %%ymm4,%%ymm3,%%ymm3           \n"
-    "lea       " MEMLEA(0x80,0) ",%0           \n"
-    "vphaddw    %%ymm1,%%ymm0,%%ymm0           \n"  // mutates.
-    "vphaddw    %%ymm3,%%ymm2,%%ymm2           \n"
-    "vpaddw     %%ymm5,%%ymm0,%%ymm0           \n"  // Add .5 for rounding.
-    "vpaddw     %%ymm5,%%ymm2,%%ymm2           \n"
-    "vpsrlw     $0x7,%%ymm0,%%ymm0             \n"
-    "vpsrlw     $0x7,%%ymm2,%%ymm2             \n"
-    "vpackuswb  %%ymm2,%%ymm0,%%ymm0           \n"  // mutates.
-    "vpermd     %%ymm0,%%ymm6,%%ymm0           \n"  // unmutate.
-    "vmovdqu    %%ymm0," MEMACCESS(1) "        \n"
-    "lea       " MEMLEA(0x20,1) ",%1           \n"
-    "sub       $0x20,%2                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  : "m"(kARGBToYJ),   // %3
-    "m"(kAddYJ64),    // %4
-    "m"(kPermdARGBToY_AVX)  // %5
-  : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6"
-  );
+void ARGBToYJRow_AVX2(const uint8_t* src_argb, uint8_t* dst_y, int width) {
+  asm volatile(
+      "vbroadcastf128 %3,%%ymm4                  \n"
+      "vbroadcastf128 %4,%%ymm5                  \n"
+      "vmovdqu    %5,%%ymm6                      \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "vmovdqu    0x20(%0),%%ymm1                \n"
+      "vmovdqu    0x40(%0),%%ymm2                \n"
+      "vmovdqu    0x60(%0),%%ymm3                \n"
+      "vpmaddubsw %%ymm4,%%ymm0,%%ymm0           \n"
+      "vpmaddubsw %%ymm4,%%ymm1,%%ymm1           \n"
+      "vpmaddubsw %%ymm4,%%ymm2,%%ymm2           \n"
+      "vpmaddubsw %%ymm4,%%ymm3,%%ymm3           \n"
+      "lea       0x80(%0),%0                     \n"
+      "vphaddw    %%ymm1,%%ymm0,%%ymm0           \n"  // mutates.
+      "vphaddw    %%ymm3,%%ymm2,%%ymm2           \n"
+      "vpaddw     %%ymm5,%%ymm0,%%ymm0           \n"  // Add .5 for rounding.
+      "vpaddw     %%ymm5,%%ymm2,%%ymm2           \n"
+      "vpsrlw     $0x7,%%ymm0,%%ymm0             \n"
+      "vpsrlw     $0x7,%%ymm2,%%ymm2             \n"
+      "vpackuswb  %%ymm2,%%ymm0,%%ymm0           \n"  // mutates.
+      "vpermd     %%ymm0,%%ymm6,%%ymm0           \n"  // unmutate.
+      "vmovdqu    %%ymm0,(%1)                    \n"
+      "lea       0x20(%1),%1                     \n"
+      "sub       $0x20,%2                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_argb),         // %0
+        "+r"(dst_y),            // %1
+        "+r"(width)             // %2
+      : "m"(kARGBToYJ),         // %3
+        "m"(kAddYJ64),          // %4
+        "m"(kPermdARGBToY_AVX)  // %5
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6");
 }
 #endif  // HAS_ARGBTOYJROW_AVX2
 
 #ifdef HAS_ARGBTOUVROW_SSSE3
-void ARGBToUVRow_SSSE3(const uint8* src_argb0, int src_stride_argb,
-                       uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "movdqa    %5,%%xmm3                       \n"
-    "movdqa    %6,%%xmm4                       \n"
-    "movdqa    %7,%%xmm5                       \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    MEMOPREG(movdqu,0x00,0,4,1,xmm7)            //  movdqu (%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm0                   \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    MEMOPREG(movdqu,0x10,0,4,1,xmm7)            //  movdqu 0x10(%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm1                   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    MEMOPREG(movdqu,0x20,0,4,1,xmm7)            //  movdqu 0x20(%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm2                   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm6   \n"
-    MEMOPREG(movdqu,0x30,0,4,1,xmm7)            //  movdqu 0x30(%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm6                   \n"
+void ARGBToUVRow_SSSE3(const uint8_t* src_argb0,
+                       int src_stride_argb,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width) {
+  asm volatile(
+      "movdqa    %5,%%xmm3                       \n"
+      "movdqa    %6,%%xmm4                       \n"
+      "movdqa    %7,%%xmm5                       \n"
+      "sub       %1,%2                           \n"
 
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "movdqa    %%xmm0,%%xmm7                   \n"
-    "shufps    $0x88,%%xmm1,%%xmm0             \n"
-    "shufps    $0xdd,%%xmm1,%%xmm7             \n"
-    "pavgb     %%xmm7,%%xmm0                   \n"
-    "movdqa    %%xmm2,%%xmm7                   \n"
-    "shufps    $0x88,%%xmm6,%%xmm2             \n"
-    "shufps    $0xdd,%%xmm6,%%xmm7             \n"
-    "pavgb     %%xmm7,%%xmm2                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "movdqa    %%xmm2,%%xmm6                   \n"
-    "pmaddubsw %%xmm4,%%xmm0                   \n"
-    "pmaddubsw %%xmm4,%%xmm2                   \n"
-    "pmaddubsw %%xmm3,%%xmm1                   \n"
-    "pmaddubsw %%xmm3,%%xmm6                   \n"
-    "phaddw    %%xmm2,%%xmm0                   \n"
-    "phaddw    %%xmm6,%%xmm1                   \n"
-    "psraw     $0x8,%%xmm0                     \n"
-    "psraw     $0x8,%%xmm1                     \n"
-    "packsswb  %%xmm1,%%xmm0                   \n"
-    "paddb     %%xmm5,%%xmm0                   \n"
-    "movlps    %%xmm0," MEMACCESS(1) "         \n"
-    MEMOPMEM(movhps,xmm0,0x00,1,2,1)           //  movhps    %%xmm0,(%1,%2,1)
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $0x10,%3                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb0),       // %0
-    "+r"(dst_u),           // %1
-    "+r"(dst_v),           // %2
-    "+rm"(width)           // %3
-  : "r"((intptr_t)(src_stride_argb)), // %4
-    "m"(kARGBToV),  // %5
-    "m"(kARGBToU),  // %6
-    "m"(kAddUV128)  // %7
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm6", "xmm7"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x00(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm0                   \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x10(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm1                   \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x20(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm2                   \n"
+      "movdqu    0x30(%0),%%xmm6                 \n"
+      "movdqu    0x30(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm6                   \n"
+
+      "lea       0x40(%0),%0                     \n"
+      "movdqa    %%xmm0,%%xmm7                   \n"
+      "shufps    $0x88,%%xmm1,%%xmm0             \n"
+      "shufps    $0xdd,%%xmm1,%%xmm7             \n"
+      "pavgb     %%xmm7,%%xmm0                   \n"
+      "movdqa    %%xmm2,%%xmm7                   \n"
+      "shufps    $0x88,%%xmm6,%%xmm2             \n"
+      "shufps    $0xdd,%%xmm6,%%xmm7             \n"
+      "pavgb     %%xmm7,%%xmm2                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "movdqa    %%xmm2,%%xmm6                   \n"
+      "pmaddubsw %%xmm4,%%xmm0                   \n"
+      "pmaddubsw %%xmm4,%%xmm2                   \n"
+      "pmaddubsw %%xmm3,%%xmm1                   \n"
+      "pmaddubsw %%xmm3,%%xmm6                   \n"
+      "phaddw    %%xmm2,%%xmm0                   \n"
+      "phaddw    %%xmm6,%%xmm1                   \n"
+      "psraw     $0x8,%%xmm0                     \n"
+      "psraw     $0x8,%%xmm1                     \n"
+      "packsswb  %%xmm1,%%xmm0                   \n"
+      "paddb     %%xmm5,%%xmm0                   \n"
+      "movlps    %%xmm0,(%1)                     \n"
+      "movhps    %%xmm0,0x00(%1,%2,1)            \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $0x10,%3                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb0),                   // %0
+        "+r"(dst_u),                       // %1
+        "+r"(dst_v),                       // %2
+        "+rm"(width)                       // %3
+      : "r"((intptr_t)(src_stride_argb)),  // %4
+        "m"(kARGBToV),                     // %5
+        "m"(kARGBToU),                     // %6
+        "m"(kAddUV128)                     // %7
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm6", "xmm7");
 }
 #endif  // HAS_ARGBTOUVROW_SSSE3
 
 #ifdef HAS_ARGBTOUVROW_AVX2
 // vpshufb for vphaddw + vpackuswb packed to shorts.
 static const lvec8 kShufARGBToUV_AVX = {
-  0, 1, 8, 9, 2, 3, 10, 11, 4, 5, 12, 13, 6, 7, 14, 15,
-  0, 1, 8, 9, 2, 3, 10, 11, 4, 5, 12, 13, 6, 7, 14, 15
-};
-void ARGBToUVRow_AVX2(const uint8* src_argb0, int src_stride_argb,
-                      uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "vbroadcastf128 %5,%%ymm5                  \n"
-    "vbroadcastf128 %6,%%ymm6                  \n"
-    "vbroadcastf128 %7,%%ymm7                  \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    " MEMACCESS(0) ",%%ymm0        \n"
-    "vmovdqu    " MEMACCESS2(0x20,0) ",%%ymm1  \n"
-    "vmovdqu    " MEMACCESS2(0x40,0) ",%%ymm2  \n"
-    "vmovdqu    " MEMACCESS2(0x60,0) ",%%ymm3  \n"
-    VMEMOPREG(vpavgb,0x00,0,4,1,ymm0,ymm0)     // vpavgb (%0,%4,1),%%ymm0,%%ymm0
-    VMEMOPREG(vpavgb,0x20,0,4,1,ymm1,ymm1)
-    VMEMOPREG(vpavgb,0x40,0,4,1,ymm2,ymm2)
-    VMEMOPREG(vpavgb,0x60,0,4,1,ymm3,ymm3)
-    "lea       " MEMLEA(0x80,0) ",%0           \n"
-    "vshufps    $0x88,%%ymm1,%%ymm0,%%ymm4     \n"
-    "vshufps    $0xdd,%%ymm1,%%ymm0,%%ymm0     \n"
-    "vpavgb     %%ymm4,%%ymm0,%%ymm0           \n"
-    "vshufps    $0x88,%%ymm3,%%ymm2,%%ymm4     \n"
-    "vshufps    $0xdd,%%ymm3,%%ymm2,%%ymm2     \n"
-    "vpavgb     %%ymm4,%%ymm2,%%ymm2           \n"
+    0, 1, 8, 9, 2, 3, 10, 11, 4, 5, 12, 13, 6, 7, 14, 15,
+    0, 1, 8, 9, 2, 3, 10, 11, 4, 5, 12, 13, 6, 7, 14, 15};
+void ARGBToUVRow_AVX2(const uint8_t* src_argb0,
+                      int src_stride_argb,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  asm volatile(
+      "vbroadcastf128 %5,%%ymm5                  \n"
+      "vbroadcastf128 %6,%%ymm6                  \n"
+      "vbroadcastf128 %7,%%ymm7                  \n"
+      "sub        %1,%2                          \n"
 
-    "vpmaddubsw %%ymm7,%%ymm0,%%ymm1           \n"
-    "vpmaddubsw %%ymm7,%%ymm2,%%ymm3           \n"
-    "vpmaddubsw %%ymm6,%%ymm0,%%ymm0           \n"
-    "vpmaddubsw %%ymm6,%%ymm2,%%ymm2           \n"
-    "vphaddw    %%ymm3,%%ymm1,%%ymm1           \n"
-    "vphaddw    %%ymm2,%%ymm0,%%ymm0           \n"
-    "vpsraw     $0x8,%%ymm1,%%ymm1             \n"
-    "vpsraw     $0x8,%%ymm0,%%ymm0             \n"
-    "vpacksswb  %%ymm0,%%ymm1,%%ymm0           \n"
-    "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
-    "vpshufb    %8,%%ymm0,%%ymm0               \n"
-    "vpaddb     %%ymm5,%%ymm0,%%ymm0           \n"
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "vmovdqu    0x20(%0),%%ymm1                \n"
+      "vmovdqu    0x40(%0),%%ymm2                \n"
+      "vmovdqu    0x60(%0),%%ymm3                \n"
+      "vpavgb    0x00(%0,%4,1),%%ymm0,%%ymm0     \n"
+      "vpavgb    0x20(%0,%4,1),%%ymm1,%%ymm1     \n"
+      "vpavgb    0x40(%0,%4,1),%%ymm2,%%ymm2     \n"
+      "vpavgb    0x60(%0,%4,1),%%ymm3,%%ymm3     \n"
+      "lea        0x80(%0),%0                    \n"
+      "vshufps    $0x88,%%ymm1,%%ymm0,%%ymm4     \n"
+      "vshufps    $0xdd,%%ymm1,%%ymm0,%%ymm0     \n"
+      "vpavgb     %%ymm4,%%ymm0,%%ymm0           \n"
+      "vshufps    $0x88,%%ymm3,%%ymm2,%%ymm4     \n"
+      "vshufps    $0xdd,%%ymm3,%%ymm2,%%ymm2     \n"
+      "vpavgb     %%ymm4,%%ymm2,%%ymm2           \n"
 
-    "vextractf128 $0x0,%%ymm0," MEMACCESS(1) " \n"
-    VEXTOPMEM(vextractf128,1,ymm0,0x0,1,2,1) // vextractf128 $1,%%ymm0,(%1,%2,1)
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x20,%3                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_argb0),       // %0
-    "+r"(dst_u),           // %1
-    "+r"(dst_v),           // %2
-    "+rm"(width)           // %3
-  : "r"((intptr_t)(src_stride_argb)), // %4
-    "m"(kAddUV128),  // %5
-    "m"(kARGBToV),   // %6
-    "m"(kARGBToU),   // %7
-    "m"(kShufARGBToUV_AVX)  // %8
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+      "vpmaddubsw %%ymm7,%%ymm0,%%ymm1           \n"
+      "vpmaddubsw %%ymm7,%%ymm2,%%ymm3           \n"
+      "vpmaddubsw %%ymm6,%%ymm0,%%ymm0           \n"
+      "vpmaddubsw %%ymm6,%%ymm2,%%ymm2           \n"
+      "vphaddw    %%ymm3,%%ymm1,%%ymm1           \n"
+      "vphaddw    %%ymm2,%%ymm0,%%ymm0           \n"
+      "vpsraw     $0x8,%%ymm1,%%ymm1             \n"
+      "vpsraw     $0x8,%%ymm0,%%ymm0             \n"
+      "vpacksswb  %%ymm0,%%ymm1,%%ymm0           \n"
+      "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
+      "vpshufb    %8,%%ymm0,%%ymm0               \n"
+      "vpaddb     %%ymm5,%%ymm0,%%ymm0           \n"
+
+      "vextractf128 $0x0,%%ymm0,(%1)             \n"
+      "vextractf128 $0x1,%%ymm0,0x0(%1,%2,1)     \n"
+      "lea        0x10(%1),%1                    \n"
+      "sub        $0x20,%3                       \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+      : "+r"(src_argb0),                   // %0
+        "+r"(dst_u),                       // %1
+        "+r"(dst_v),                       // %2
+        "+rm"(width)                       // %3
+      : "r"((intptr_t)(src_stride_argb)),  // %4
+        "m"(kAddUV128),                    // %5
+        "m"(kARGBToV),                     // %6
+        "m"(kARGBToU),                     // %7
+        "m"(kShufARGBToUV_AVX)             // %8
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 #endif  // HAS_ARGBTOUVROW_AVX2
 
 #ifdef HAS_ARGBTOUVJROW_AVX2
-void ARGBToUVJRow_AVX2(const uint8* src_argb0, int src_stride_argb,
-                       uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "vbroadcastf128 %5,%%ymm5                  \n"
-    "vbroadcastf128 %6,%%ymm6                  \n"
-    "vbroadcastf128 %7,%%ymm7                  \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    " MEMACCESS(0) ",%%ymm0        \n"
-    "vmovdqu    " MEMACCESS2(0x20,0) ",%%ymm1  \n"
-    "vmovdqu    " MEMACCESS2(0x40,0) ",%%ymm2  \n"
-    "vmovdqu    " MEMACCESS2(0x60,0) ",%%ymm3  \n"
-    VMEMOPREG(vpavgb,0x00,0,4,1,ymm0,ymm0)     // vpavgb (%0,%4,1),%%ymm0,%%ymm0
-    VMEMOPREG(vpavgb,0x20,0,4,1,ymm1,ymm1)
-    VMEMOPREG(vpavgb,0x40,0,4,1,ymm2,ymm2)
-    VMEMOPREG(vpavgb,0x60,0,4,1,ymm3,ymm3)
-    "lea       " MEMLEA(0x80,0) ",%0           \n"
-    "vshufps    $0x88,%%ymm1,%%ymm0,%%ymm4     \n"
-    "vshufps    $0xdd,%%ymm1,%%ymm0,%%ymm0     \n"
-    "vpavgb     %%ymm4,%%ymm0,%%ymm0           \n"
-    "vshufps    $0x88,%%ymm3,%%ymm2,%%ymm4     \n"
-    "vshufps    $0xdd,%%ymm3,%%ymm2,%%ymm2     \n"
-    "vpavgb     %%ymm4,%%ymm2,%%ymm2           \n"
+void ARGBToUVJRow_AVX2(const uint8_t* src_argb0,
+                       int src_stride_argb,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width) {
+  asm volatile(
+      "vbroadcastf128 %5,%%ymm5                  \n"
+      "vbroadcastf128 %6,%%ymm6                  \n"
+      "vbroadcastf128 %7,%%ymm7                  \n"
+      "sub        %1,%2                          \n"
 
-    "vpmaddubsw %%ymm7,%%ymm0,%%ymm1           \n"
-    "vpmaddubsw %%ymm7,%%ymm2,%%ymm3           \n"
-    "vpmaddubsw %%ymm6,%%ymm0,%%ymm0           \n"
-    "vpmaddubsw %%ymm6,%%ymm2,%%ymm2           \n"
-    "vphaddw    %%ymm3,%%ymm1,%%ymm1           \n"
-    "vphaddw    %%ymm2,%%ymm0,%%ymm0           \n"
-    "vpaddw     %%ymm5,%%ymm0,%%ymm0           \n"
-    "vpaddw     %%ymm5,%%ymm1,%%ymm1           \n"
-    "vpsraw     $0x8,%%ymm1,%%ymm1             \n"
-    "vpsraw     $0x8,%%ymm0,%%ymm0             \n"
-    "vpacksswb  %%ymm0,%%ymm1,%%ymm0           \n"
-    "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
-    "vpshufb    %8,%%ymm0,%%ymm0               \n"
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "vmovdqu    0x20(%0),%%ymm1                \n"
+      "vmovdqu    0x40(%0),%%ymm2                \n"
+      "vmovdqu    0x60(%0),%%ymm3                \n"
+      "vpavgb    0x00(%0,%4,1),%%ymm0,%%ymm0     \n"
+      "vpavgb    0x20(%0,%4,1),%%ymm1,%%ymm1     \n"
+      "vpavgb    0x40(%0,%4,1),%%ymm2,%%ymm2     \n"
+      "vpavgb    0x60(%0,%4,1),%%ymm3,%%ymm3     \n"
+      "lea       0x80(%0),%0                     \n"
+      "vshufps    $0x88,%%ymm1,%%ymm0,%%ymm4     \n"
+      "vshufps    $0xdd,%%ymm1,%%ymm0,%%ymm0     \n"
+      "vpavgb     %%ymm4,%%ymm0,%%ymm0           \n"
+      "vshufps    $0x88,%%ymm3,%%ymm2,%%ymm4     \n"
+      "vshufps    $0xdd,%%ymm3,%%ymm2,%%ymm2     \n"
+      "vpavgb     %%ymm4,%%ymm2,%%ymm2           \n"
 
-    "vextractf128 $0x0,%%ymm0," MEMACCESS(1) " \n"
-    VEXTOPMEM(vextractf128,1,ymm0,0x0,1,2,1) // vextractf128 $1,%%ymm0,(%1,%2,1)
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x20,%3                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_argb0),       // %0
-    "+r"(dst_u),           // %1
-    "+r"(dst_v),           // %2
-    "+rm"(width)           // %3
-  : "r"((intptr_t)(src_stride_argb)), // %4
-    "m"(kAddUVJ128),  // %5
-    "m"(kARGBToVJ),  // %6
-    "m"(kARGBToUJ),  // %7
-    "m"(kShufARGBToUV_AVX)  // %8
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+      "vpmaddubsw %%ymm7,%%ymm0,%%ymm1           \n"
+      "vpmaddubsw %%ymm7,%%ymm2,%%ymm3           \n"
+      "vpmaddubsw %%ymm6,%%ymm0,%%ymm0           \n"
+      "vpmaddubsw %%ymm6,%%ymm2,%%ymm2           \n"
+      "vphaddw    %%ymm3,%%ymm1,%%ymm1           \n"
+      "vphaddw    %%ymm2,%%ymm0,%%ymm0           \n"
+      "vpaddw     %%ymm5,%%ymm0,%%ymm0           \n"
+      "vpaddw     %%ymm5,%%ymm1,%%ymm1           \n"
+      "vpsraw     $0x8,%%ymm1,%%ymm1             \n"
+      "vpsraw     $0x8,%%ymm0,%%ymm0             \n"
+      "vpacksswb  %%ymm0,%%ymm1,%%ymm0           \n"
+      "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
+      "vpshufb    %8,%%ymm0,%%ymm0               \n"
+
+      "vextractf128 $0x0,%%ymm0,(%1)             \n"
+      "vextractf128 $0x1,%%ymm0,0x0(%1,%2,1)     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x20,%3                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_argb0),                   // %0
+        "+r"(dst_u),                       // %1
+        "+r"(dst_v),                       // %2
+        "+rm"(width)                       // %3
+      : "r"((intptr_t)(src_stride_argb)),  // %4
+        "m"(kAddUVJ128),                   // %5
+        "m"(kARGBToVJ),                    // %6
+        "m"(kARGBToUJ),                    // %7
+        "m"(kShufARGBToUV_AVX)             // %8
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 #endif  // HAS_ARGBTOUVJROW_AVX2
 
 #ifdef HAS_ARGBTOUVJROW_SSSE3
-void ARGBToUVJRow_SSSE3(const uint8* src_argb0, int src_stride_argb,
-                        uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "movdqa    %5,%%xmm3                       \n"
-    "movdqa    %6,%%xmm4                       \n"
-    "movdqa    %7,%%xmm5                       \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    MEMOPREG(movdqu,0x00,0,4,1,xmm7)            //  movdqu (%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm0                   \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    MEMOPREG(movdqu,0x10,0,4,1,xmm7)            //  movdqu 0x10(%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm1                   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    MEMOPREG(movdqu,0x20,0,4,1,xmm7)            //  movdqu 0x20(%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm2                   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm6   \n"
-    MEMOPREG(movdqu,0x30,0,4,1,xmm7)            //  movdqu 0x30(%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm6                   \n"
+void ARGBToUVJRow_SSSE3(const uint8_t* src_argb0,
+                        int src_stride_argb,
+                        uint8_t* dst_u,
+                        uint8_t* dst_v,
+                        int width) {
+  asm volatile(
+      "movdqa    %5,%%xmm3                       \n"
+      "movdqa    %6,%%xmm4                       \n"
+      "movdqa    %7,%%xmm5                       \n"
+      "sub       %1,%2                           \n"
 
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "movdqa    %%xmm0,%%xmm7                   \n"
-    "shufps    $0x88,%%xmm1,%%xmm0             \n"
-    "shufps    $0xdd,%%xmm1,%%xmm7             \n"
-    "pavgb     %%xmm7,%%xmm0                   \n"
-    "movdqa    %%xmm2,%%xmm7                   \n"
-    "shufps    $0x88,%%xmm6,%%xmm2             \n"
-    "shufps    $0xdd,%%xmm6,%%xmm7             \n"
-    "pavgb     %%xmm7,%%xmm2                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "movdqa    %%xmm2,%%xmm6                   \n"
-    "pmaddubsw %%xmm4,%%xmm0                   \n"
-    "pmaddubsw %%xmm4,%%xmm2                   \n"
-    "pmaddubsw %%xmm3,%%xmm1                   \n"
-    "pmaddubsw %%xmm3,%%xmm6                   \n"
-    "phaddw    %%xmm2,%%xmm0                   \n"
-    "phaddw    %%xmm6,%%xmm1                   \n"
-    "paddw     %%xmm5,%%xmm0                   \n"
-    "paddw     %%xmm5,%%xmm1                   \n"
-    "psraw     $0x8,%%xmm0                     \n"
-    "psraw     $0x8,%%xmm1                     \n"
-    "packsswb  %%xmm1,%%xmm0                   \n"
-    "movlps    %%xmm0," MEMACCESS(1) "         \n"
-    MEMOPMEM(movhps,xmm0,0x00,1,2,1)           //  movhps  %%xmm0,(%1,%2,1)
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $0x10,%3                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb0),       // %0
-    "+r"(dst_u),           // %1
-    "+r"(dst_v),           // %2
-    "+rm"(width)           // %3
-  : "r"((intptr_t)(src_stride_argb)), // %4
-    "m"(kARGBToVJ),  // %5
-    "m"(kARGBToUJ),  // %6
-    "m"(kAddUVJ128)  // %7
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm6", "xmm7"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x00(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm0                   \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x10(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm1                   \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x20(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm2                   \n"
+      "movdqu    0x30(%0),%%xmm6                 \n"
+      "movdqu    0x30(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm6                   \n"
+
+      "lea       0x40(%0),%0                     \n"
+      "movdqa    %%xmm0,%%xmm7                   \n"
+      "shufps    $0x88,%%xmm1,%%xmm0             \n"
+      "shufps    $0xdd,%%xmm1,%%xmm7             \n"
+      "pavgb     %%xmm7,%%xmm0                   \n"
+      "movdqa    %%xmm2,%%xmm7                   \n"
+      "shufps    $0x88,%%xmm6,%%xmm2             \n"
+      "shufps    $0xdd,%%xmm6,%%xmm7             \n"
+      "pavgb     %%xmm7,%%xmm2                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "movdqa    %%xmm2,%%xmm6                   \n"
+      "pmaddubsw %%xmm4,%%xmm0                   \n"
+      "pmaddubsw %%xmm4,%%xmm2                   \n"
+      "pmaddubsw %%xmm3,%%xmm1                   \n"
+      "pmaddubsw %%xmm3,%%xmm6                   \n"
+      "phaddw    %%xmm2,%%xmm0                   \n"
+      "phaddw    %%xmm6,%%xmm1                   \n"
+      "paddw     %%xmm5,%%xmm0                   \n"
+      "paddw     %%xmm5,%%xmm1                   \n"
+      "psraw     $0x8,%%xmm0                     \n"
+      "psraw     $0x8,%%xmm1                     \n"
+      "packsswb  %%xmm1,%%xmm0                   \n"
+      "movlps    %%xmm0,(%1)                     \n"
+      "movhps    %%xmm0,0x00(%1,%2,1)            \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $0x10,%3                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb0),                   // %0
+        "+r"(dst_u),                       // %1
+        "+r"(dst_v),                       // %2
+        "+rm"(width)                       // %3
+      : "r"((intptr_t)(src_stride_argb)),  // %4
+        "m"(kARGBToVJ),                    // %5
+        "m"(kARGBToUJ),                    // %6
+        "m"(kAddUVJ128)                    // %7
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm6", "xmm7");
 }
 #endif  // HAS_ARGBTOUVJROW_SSSE3
 
 #ifdef HAS_ARGBTOUV444ROW_SSSE3
-void ARGBToUV444Row_SSSE3(const uint8* src_argb, uint8* dst_u, uint8* dst_v,
+void ARGBToUV444Row_SSSE3(const uint8_t* src_argb,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
                           int width) {
-  asm volatile (
-    "movdqa    %4,%%xmm3                       \n"
-    "movdqa    %5,%%xmm4                       \n"
-    "movdqa    %6,%%xmm5                       \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm6   \n"
-    "pmaddubsw %%xmm4,%%xmm0                   \n"
-    "pmaddubsw %%xmm4,%%xmm1                   \n"
-    "pmaddubsw %%xmm4,%%xmm2                   \n"
-    "pmaddubsw %%xmm4,%%xmm6                   \n"
-    "phaddw    %%xmm1,%%xmm0                   \n"
-    "phaddw    %%xmm6,%%xmm2                   \n"
-    "psraw     $0x8,%%xmm0                     \n"
-    "psraw     $0x8,%%xmm2                     \n"
-    "packsswb  %%xmm2,%%xmm0                   \n"
-    "paddb     %%xmm5,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm6   \n"
-    "pmaddubsw %%xmm3,%%xmm0                   \n"
-    "pmaddubsw %%xmm3,%%xmm1                   \n"
-    "pmaddubsw %%xmm3,%%xmm2                   \n"
-    "pmaddubsw %%xmm3,%%xmm6                   \n"
-    "phaddw    %%xmm1,%%xmm0                   \n"
-    "phaddw    %%xmm6,%%xmm2                   \n"
-    "psraw     $0x8,%%xmm0                     \n"
-    "psraw     $0x8,%%xmm2                     \n"
-    "packsswb  %%xmm2,%%xmm0                   \n"
-    "paddb     %%xmm5,%%xmm0                   \n"
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    MEMOPMEM(movdqu,xmm0,0x00,1,2,1)           //  movdqu  %%xmm0,(%1,%2,1)
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x10,%3                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),        // %0
-    "+r"(dst_u),           // %1
-    "+r"(dst_v),           // %2
-    "+rm"(width)           // %3
-  : "m"(kARGBToV),  // %4
-    "m"(kARGBToU),  // %5
-    "m"(kAddUV128)  // %6
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm6"
-  );
+  asm volatile(
+      "movdqa    %4,%%xmm3                       \n"
+      "movdqa    %5,%%xmm4                       \n"
+      "movdqa    %6,%%xmm5                       \n"
+      "sub       %1,%2                           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x30(%0),%%xmm6                 \n"
+      "pmaddubsw %%xmm4,%%xmm0                   \n"
+      "pmaddubsw %%xmm4,%%xmm1                   \n"
+      "pmaddubsw %%xmm4,%%xmm2                   \n"
+      "pmaddubsw %%xmm4,%%xmm6                   \n"
+      "phaddw    %%xmm1,%%xmm0                   \n"
+      "phaddw    %%xmm6,%%xmm2                   \n"
+      "psraw     $0x8,%%xmm0                     \n"
+      "psraw     $0x8,%%xmm2                     \n"
+      "packsswb  %%xmm2,%%xmm0                   \n"
+      "paddb     %%xmm5,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x30(%0),%%xmm6                 \n"
+      "pmaddubsw %%xmm3,%%xmm0                   \n"
+      "pmaddubsw %%xmm3,%%xmm1                   \n"
+      "pmaddubsw %%xmm3,%%xmm2                   \n"
+      "pmaddubsw %%xmm3,%%xmm6                   \n"
+      "phaddw    %%xmm1,%%xmm0                   \n"
+      "phaddw    %%xmm6,%%xmm2                   \n"
+      "psraw     $0x8,%%xmm0                     \n"
+      "psraw     $0x8,%%xmm2                     \n"
+      "packsswb  %%xmm2,%%xmm0                   \n"
+      "paddb     %%xmm5,%%xmm0                   \n"
+      "lea       0x40(%0),%0                     \n"
+      "movdqu    %%xmm0,0x00(%1,%2,1)            \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x10,%3                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_u),     // %1
+        "+r"(dst_v),     // %2
+        "+rm"(width)     // %3
+      : "m"(kARGBToV),   // %4
+        "m"(kARGBToU),   // %5
+        "m"(kAddUV128)   // %6
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm6");
 }
 #endif  // HAS_ARGBTOUV444ROW_SSSE3
 
-void BGRAToYRow_SSSE3(const uint8* src_bgra, uint8* dst_y, int width) {
-  asm volatile (
-    "movdqa    %4,%%xmm5                       \n"
-    "movdqa    %3,%%xmm4                       \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm3   \n"
-    "pmaddubsw %%xmm4,%%xmm0                   \n"
-    "pmaddubsw %%xmm4,%%xmm1                   \n"
-    "pmaddubsw %%xmm4,%%xmm2                   \n"
-    "pmaddubsw %%xmm4,%%xmm3                   \n"
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "phaddw    %%xmm1,%%xmm0                   \n"
-    "phaddw    %%xmm3,%%xmm2                   \n"
-    "psrlw     $0x7,%%xmm0                     \n"
-    "psrlw     $0x7,%%xmm2                     \n"
-    "packuswb  %%xmm2,%%xmm0                   \n"
-    "paddb     %%xmm5,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_bgra),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  : "m"(kBGRAToY),   // %3
-    "m"(kAddY16)     // %4
-  : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+void BGRAToYRow_SSSE3(const uint8_t* src_bgra, uint8_t* dst_y, int width) {
+  asm volatile(
+      "movdqa    %4,%%xmm5                       \n"
+      "movdqa    %3,%%xmm4                       \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x30(%0),%%xmm3                 \n"
+      "pmaddubsw %%xmm4,%%xmm0                   \n"
+      "pmaddubsw %%xmm4,%%xmm1                   \n"
+      "pmaddubsw %%xmm4,%%xmm2                   \n"
+      "pmaddubsw %%xmm4,%%xmm3                   \n"
+      "lea       0x40(%0),%0                     \n"
+      "phaddw    %%xmm1,%%xmm0                   \n"
+      "phaddw    %%xmm3,%%xmm2                   \n"
+      "psrlw     $0x7,%%xmm0                     \n"
+      "psrlw     $0x7,%%xmm2                     \n"
+      "packuswb  %%xmm2,%%xmm0                   \n"
+      "paddb     %%xmm5,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_bgra),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      : "m"(kBGRAToY),   // %3
+        "m"(kAddY16)     // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 
-void BGRAToUVRow_SSSE3(const uint8* src_bgra0, int src_stride_bgra,
-                       uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "movdqa    %5,%%xmm3                       \n"
-    "movdqa    %6,%%xmm4                       \n"
-    "movdqa    %7,%%xmm5                       \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    MEMOPREG(movdqu,0x00,0,4,1,xmm7)            //  movdqu (%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm0                   \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    MEMOPREG(movdqu,0x10,0,4,1,xmm7)            //  movdqu 0x10(%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm1                   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    MEMOPREG(movdqu,0x20,0,4,1,xmm7)            //  movdqu 0x20(%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm2                   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm6   \n"
-    MEMOPREG(movdqu,0x30,0,4,1,xmm7)            //  movdqu 0x30(%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm6                   \n"
+void BGRAToUVRow_SSSE3(const uint8_t* src_bgra0,
+                       int src_stride_bgra,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width) {
+  asm volatile(
+      "movdqa    %5,%%xmm3                       \n"
+      "movdqa    %6,%%xmm4                       \n"
+      "movdqa    %7,%%xmm5                       \n"
+      "sub       %1,%2                           \n"
 
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "movdqa    %%xmm0,%%xmm7                   \n"
-    "shufps    $0x88,%%xmm1,%%xmm0             \n"
-    "shufps    $0xdd,%%xmm1,%%xmm7             \n"
-    "pavgb     %%xmm7,%%xmm0                   \n"
-    "movdqa    %%xmm2,%%xmm7                   \n"
-    "shufps    $0x88,%%xmm6,%%xmm2             \n"
-    "shufps    $0xdd,%%xmm6,%%xmm7             \n"
-    "pavgb     %%xmm7,%%xmm2                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "movdqa    %%xmm2,%%xmm6                   \n"
-    "pmaddubsw %%xmm4,%%xmm0                   \n"
-    "pmaddubsw %%xmm4,%%xmm2                   \n"
-    "pmaddubsw %%xmm3,%%xmm1                   \n"
-    "pmaddubsw %%xmm3,%%xmm6                   \n"
-    "phaddw    %%xmm2,%%xmm0                   \n"
-    "phaddw    %%xmm6,%%xmm1                   \n"
-    "psraw     $0x8,%%xmm0                     \n"
-    "psraw     $0x8,%%xmm1                     \n"
-    "packsswb  %%xmm1,%%xmm0                   \n"
-    "paddb     %%xmm5,%%xmm0                   \n"
-    "movlps    %%xmm0," MEMACCESS(1) "         \n"
-    MEMOPMEM(movhps,xmm0,0x00,1,2,1)           //  movhps  %%xmm0,(%1,%2,1)
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $0x10,%3                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_bgra0),       // %0
-    "+r"(dst_u),           // %1
-    "+r"(dst_v),           // %2
-    "+rm"(width)           // %3
-  : "r"((intptr_t)(src_stride_bgra)), // %4
-    "m"(kBGRAToV),  // %5
-    "m"(kBGRAToU),  // %6
-    "m"(kAddUV128)  // %7
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm6", "xmm7"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x00(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm0                   \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x10(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm1                   \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x20(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm2                   \n"
+      "movdqu    0x30(%0),%%xmm6                 \n"
+      "movdqu    0x30(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm6                   \n"
+
+      "lea       0x40(%0),%0                     \n"
+      "movdqa    %%xmm0,%%xmm7                   \n"
+      "shufps    $0x88,%%xmm1,%%xmm0             \n"
+      "shufps    $0xdd,%%xmm1,%%xmm7             \n"
+      "pavgb     %%xmm7,%%xmm0                   \n"
+      "movdqa    %%xmm2,%%xmm7                   \n"
+      "shufps    $0x88,%%xmm6,%%xmm2             \n"
+      "shufps    $0xdd,%%xmm6,%%xmm7             \n"
+      "pavgb     %%xmm7,%%xmm2                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "movdqa    %%xmm2,%%xmm6                   \n"
+      "pmaddubsw %%xmm4,%%xmm0                   \n"
+      "pmaddubsw %%xmm4,%%xmm2                   \n"
+      "pmaddubsw %%xmm3,%%xmm1                   \n"
+      "pmaddubsw %%xmm3,%%xmm6                   \n"
+      "phaddw    %%xmm2,%%xmm0                   \n"
+      "phaddw    %%xmm6,%%xmm1                   \n"
+      "psraw     $0x8,%%xmm0                     \n"
+      "psraw     $0x8,%%xmm1                     \n"
+      "packsswb  %%xmm1,%%xmm0                   \n"
+      "paddb     %%xmm5,%%xmm0                   \n"
+      "movlps    %%xmm0,(%1)                     \n"
+      "movhps    %%xmm0,0x00(%1,%2,1)            \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $0x10,%3                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_bgra0),                   // %0
+        "+r"(dst_u),                       // %1
+        "+r"(dst_v),                       // %2
+        "+rm"(width)                       // %3
+      : "r"((intptr_t)(src_stride_bgra)),  // %4
+        "m"(kBGRAToV),                     // %5
+        "m"(kBGRAToU),                     // %6
+        "m"(kAddUV128)                     // %7
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm6", "xmm7");
 }
 
-void ABGRToYRow_SSSE3(const uint8* src_abgr, uint8* dst_y, int width) {
-  asm volatile (
-    "movdqa    %4,%%xmm5                       \n"
-    "movdqa    %3,%%xmm4                       \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm3   \n"
-    "pmaddubsw %%xmm4,%%xmm0                   \n"
-    "pmaddubsw %%xmm4,%%xmm1                   \n"
-    "pmaddubsw %%xmm4,%%xmm2                   \n"
-    "pmaddubsw %%xmm4,%%xmm3                   \n"
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "phaddw    %%xmm1,%%xmm0                   \n"
-    "phaddw    %%xmm3,%%xmm2                   \n"
-    "psrlw     $0x7,%%xmm0                     \n"
-    "psrlw     $0x7,%%xmm2                     \n"
-    "packuswb  %%xmm2,%%xmm0                   \n"
-    "paddb     %%xmm5,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_abgr),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  : "m"(kABGRToY),   // %3
-    "m"(kAddY16)     // %4
-  : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+void ABGRToYRow_SSSE3(const uint8_t* src_abgr, uint8_t* dst_y, int width) {
+  asm volatile(
+      "movdqa    %4,%%xmm5                       \n"
+      "movdqa    %3,%%xmm4                       \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x30(%0),%%xmm3                 \n"
+      "pmaddubsw %%xmm4,%%xmm0                   \n"
+      "pmaddubsw %%xmm4,%%xmm1                   \n"
+      "pmaddubsw %%xmm4,%%xmm2                   \n"
+      "pmaddubsw %%xmm4,%%xmm3                   \n"
+      "lea       0x40(%0),%0                     \n"
+      "phaddw    %%xmm1,%%xmm0                   \n"
+      "phaddw    %%xmm3,%%xmm2                   \n"
+      "psrlw     $0x7,%%xmm0                     \n"
+      "psrlw     $0x7,%%xmm2                     \n"
+      "packuswb  %%xmm2,%%xmm0                   \n"
+      "paddb     %%xmm5,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_abgr),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      : "m"(kABGRToY),   // %3
+        "m"(kAddY16)     // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 
-void RGBAToYRow_SSSE3(const uint8* src_rgba, uint8* dst_y, int width) {
-  asm volatile (
-    "movdqa    %4,%%xmm5                       \n"
-    "movdqa    %3,%%xmm4                       \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm3   \n"
-    "pmaddubsw %%xmm4,%%xmm0                   \n"
-    "pmaddubsw %%xmm4,%%xmm1                   \n"
-    "pmaddubsw %%xmm4,%%xmm2                   \n"
-    "pmaddubsw %%xmm4,%%xmm3                   \n"
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "phaddw    %%xmm1,%%xmm0                   \n"
-    "phaddw    %%xmm3,%%xmm2                   \n"
-    "psrlw     $0x7,%%xmm0                     \n"
-    "psrlw     $0x7,%%xmm2                     \n"
-    "packuswb  %%xmm2,%%xmm0                   \n"
-    "paddb     %%xmm5,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_rgba),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  : "m"(kRGBAToY),   // %3
-    "m"(kAddY16)     // %4
-  : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+void RGBAToYRow_SSSE3(const uint8_t* src_rgba, uint8_t* dst_y, int width) {
+  asm volatile(
+      "movdqa    %4,%%xmm5                       \n"
+      "movdqa    %3,%%xmm4                       \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x30(%0),%%xmm3                 \n"
+      "pmaddubsw %%xmm4,%%xmm0                   \n"
+      "pmaddubsw %%xmm4,%%xmm1                   \n"
+      "pmaddubsw %%xmm4,%%xmm2                   \n"
+      "pmaddubsw %%xmm4,%%xmm3                   \n"
+      "lea       0x40(%0),%0                     \n"
+      "phaddw    %%xmm1,%%xmm0                   \n"
+      "phaddw    %%xmm3,%%xmm2                   \n"
+      "psrlw     $0x7,%%xmm0                     \n"
+      "psrlw     $0x7,%%xmm2                     \n"
+      "packuswb  %%xmm2,%%xmm0                   \n"
+      "paddb     %%xmm5,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_rgba),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      : "m"(kRGBAToY),   // %3
+        "m"(kAddY16)     // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 
-void ABGRToUVRow_SSSE3(const uint8* src_abgr0, int src_stride_abgr,
-                       uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "movdqa    %5,%%xmm3                       \n"
-    "movdqa    %6,%%xmm4                       \n"
-    "movdqa    %7,%%xmm5                       \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    MEMOPREG(movdqu,0x00,0,4,1,xmm7)            //  movdqu (%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm0                   \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    MEMOPREG(movdqu,0x10,0,4,1,xmm7)            //  movdqu 0x10(%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm1                   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    MEMOPREG(movdqu,0x20,0,4,1,xmm7)            //  movdqu 0x20(%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm2                   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm6   \n"
-    MEMOPREG(movdqu,0x30,0,4,1,xmm7)            //  movdqu 0x30(%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm6                   \n"
+void ABGRToUVRow_SSSE3(const uint8_t* src_abgr0,
+                       int src_stride_abgr,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width) {
+  asm volatile(
+      "movdqa    %5,%%xmm3                       \n"
+      "movdqa    %6,%%xmm4                       \n"
+      "movdqa    %7,%%xmm5                       \n"
+      "sub       %1,%2                           \n"
 
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "movdqa    %%xmm0,%%xmm7                   \n"
-    "shufps    $0x88,%%xmm1,%%xmm0             \n"
-    "shufps    $0xdd,%%xmm1,%%xmm7             \n"
-    "pavgb     %%xmm7,%%xmm0                   \n"
-    "movdqa    %%xmm2,%%xmm7                   \n"
-    "shufps    $0x88,%%xmm6,%%xmm2             \n"
-    "shufps    $0xdd,%%xmm6,%%xmm7             \n"
-    "pavgb     %%xmm7,%%xmm2                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "movdqa    %%xmm2,%%xmm6                   \n"
-    "pmaddubsw %%xmm4,%%xmm0                   \n"
-    "pmaddubsw %%xmm4,%%xmm2                   \n"
-    "pmaddubsw %%xmm3,%%xmm1                   \n"
-    "pmaddubsw %%xmm3,%%xmm6                   \n"
-    "phaddw    %%xmm2,%%xmm0                   \n"
-    "phaddw    %%xmm6,%%xmm1                   \n"
-    "psraw     $0x8,%%xmm0                     \n"
-    "psraw     $0x8,%%xmm1                     \n"
-    "packsswb  %%xmm1,%%xmm0                   \n"
-    "paddb     %%xmm5,%%xmm0                   \n"
-    "movlps    %%xmm0," MEMACCESS(1) "         \n"
-    MEMOPMEM(movhps,xmm0,0x00,1,2,1)           //  movhps  %%xmm0,(%1,%2,1)
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $0x10,%3                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_abgr0),       // %0
-    "+r"(dst_u),           // %1
-    "+r"(dst_v),           // %2
-    "+rm"(width)           // %3
-  : "r"((intptr_t)(src_stride_abgr)), // %4
-    "m"(kABGRToV),  // %5
-    "m"(kABGRToU),  // %6
-    "m"(kAddUV128)  // %7
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm6", "xmm7"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x00(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm0                   \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x10(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm1                   \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x20(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm2                   \n"
+      "movdqu    0x30(%0),%%xmm6                 \n"
+      "movdqu    0x30(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm6                   \n"
+
+      "lea       0x40(%0),%0                     \n"
+      "movdqa    %%xmm0,%%xmm7                   \n"
+      "shufps    $0x88,%%xmm1,%%xmm0             \n"
+      "shufps    $0xdd,%%xmm1,%%xmm7             \n"
+      "pavgb     %%xmm7,%%xmm0                   \n"
+      "movdqa    %%xmm2,%%xmm7                   \n"
+      "shufps    $0x88,%%xmm6,%%xmm2             \n"
+      "shufps    $0xdd,%%xmm6,%%xmm7             \n"
+      "pavgb     %%xmm7,%%xmm2                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "movdqa    %%xmm2,%%xmm6                   \n"
+      "pmaddubsw %%xmm4,%%xmm0                   \n"
+      "pmaddubsw %%xmm4,%%xmm2                   \n"
+      "pmaddubsw %%xmm3,%%xmm1                   \n"
+      "pmaddubsw %%xmm3,%%xmm6                   \n"
+      "phaddw    %%xmm2,%%xmm0                   \n"
+      "phaddw    %%xmm6,%%xmm1                   \n"
+      "psraw     $0x8,%%xmm0                     \n"
+      "psraw     $0x8,%%xmm1                     \n"
+      "packsswb  %%xmm1,%%xmm0                   \n"
+      "paddb     %%xmm5,%%xmm0                   \n"
+      "movlps    %%xmm0,(%1)                     \n"
+      "movhps    %%xmm0,0x00(%1,%2,1)            \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $0x10,%3                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_abgr0),                   // %0
+        "+r"(dst_u),                       // %1
+        "+r"(dst_v),                       // %2
+        "+rm"(width)                       // %3
+      : "r"((intptr_t)(src_stride_abgr)),  // %4
+        "m"(kABGRToV),                     // %5
+        "m"(kABGRToU),                     // %6
+        "m"(kAddUV128)                     // %7
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm6", "xmm7");
 }
 
-void RGBAToUVRow_SSSE3(const uint8* src_rgba0, int src_stride_rgba,
-                       uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "movdqa    %5,%%xmm3                       \n"
-    "movdqa    %6,%%xmm4                       \n"
-    "movdqa    %7,%%xmm5                       \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    MEMOPREG(movdqu,0x00,0,4,1,xmm7)            //  movdqu (%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm0                   \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    MEMOPREG(movdqu,0x10,0,4,1,xmm7)            //  movdqu 0x10(%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm1                   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    MEMOPREG(movdqu,0x20,0,4,1,xmm7)            //  movdqu 0x20(%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm2                   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm6   \n"
-    MEMOPREG(movdqu,0x30,0,4,1,xmm7)            //  movdqu 0x30(%0,%4,1),%%xmm7
-    "pavgb     %%xmm7,%%xmm6                   \n"
+void RGBAToUVRow_SSSE3(const uint8_t* src_rgba0,
+                       int src_stride_rgba,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width) {
+  asm volatile(
+      "movdqa    %5,%%xmm3                       \n"
+      "movdqa    %6,%%xmm4                       \n"
+      "movdqa    %7,%%xmm5                       \n"
+      "sub       %1,%2                           \n"
 
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "movdqa    %%xmm0,%%xmm7                   \n"
-    "shufps    $0x88,%%xmm1,%%xmm0             \n"
-    "shufps    $0xdd,%%xmm1,%%xmm7             \n"
-    "pavgb     %%xmm7,%%xmm0                   \n"
-    "movdqa    %%xmm2,%%xmm7                   \n"
-    "shufps    $0x88,%%xmm6,%%xmm2             \n"
-    "shufps    $0xdd,%%xmm6,%%xmm7             \n"
-    "pavgb     %%xmm7,%%xmm2                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "movdqa    %%xmm2,%%xmm6                   \n"
-    "pmaddubsw %%xmm4,%%xmm0                   \n"
-    "pmaddubsw %%xmm4,%%xmm2                   \n"
-    "pmaddubsw %%xmm3,%%xmm1                   \n"
-    "pmaddubsw %%xmm3,%%xmm6                   \n"
-    "phaddw    %%xmm2,%%xmm0                   \n"
-    "phaddw    %%xmm6,%%xmm1                   \n"
-    "psraw     $0x8,%%xmm0                     \n"
-    "psraw     $0x8,%%xmm1                     \n"
-    "packsswb  %%xmm1,%%xmm0                   \n"
-    "paddb     %%xmm5,%%xmm0                   \n"
-    "movlps    %%xmm0," MEMACCESS(1) "         \n"
-    MEMOPMEM(movhps,xmm0,0x00,1,2,1)           //  movhps  %%xmm0,(%1,%2,1)
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $0x10,%3                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_rgba0),       // %0
-    "+r"(dst_u),           // %1
-    "+r"(dst_v),           // %2
-    "+rm"(width)           // %3
-  : "r"((intptr_t)(src_stride_rgba)), // %4
-    "m"(kRGBAToV),  // %5
-    "m"(kRGBAToU),  // %6
-    "m"(kAddUV128)  // %7
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm6", "xmm7"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x00(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm0                   \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x10(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm1                   \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x20(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm2                   \n"
+      "movdqu    0x30(%0),%%xmm6                 \n"
+      "movdqu    0x30(%0,%4,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm6                   \n"
+
+      "lea       0x40(%0),%0                     \n"
+      "movdqa    %%xmm0,%%xmm7                   \n"
+      "shufps    $0x88,%%xmm1,%%xmm0             \n"
+      "shufps    $0xdd,%%xmm1,%%xmm7             \n"
+      "pavgb     %%xmm7,%%xmm0                   \n"
+      "movdqa    %%xmm2,%%xmm7                   \n"
+      "shufps    $0x88,%%xmm6,%%xmm2             \n"
+      "shufps    $0xdd,%%xmm6,%%xmm7             \n"
+      "pavgb     %%xmm7,%%xmm2                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "movdqa    %%xmm2,%%xmm6                   \n"
+      "pmaddubsw %%xmm4,%%xmm0                   \n"
+      "pmaddubsw %%xmm4,%%xmm2                   \n"
+      "pmaddubsw %%xmm3,%%xmm1                   \n"
+      "pmaddubsw %%xmm3,%%xmm6                   \n"
+      "phaddw    %%xmm2,%%xmm0                   \n"
+      "phaddw    %%xmm6,%%xmm1                   \n"
+      "psraw     $0x8,%%xmm0                     \n"
+      "psraw     $0x8,%%xmm1                     \n"
+      "packsswb  %%xmm1,%%xmm0                   \n"
+      "paddb     %%xmm5,%%xmm0                   \n"
+      "movlps    %%xmm0,(%1)                     \n"
+      "movhps    %%xmm0,0x00(%1,%2,1)            \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $0x10,%3                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_rgba0),                   // %0
+        "+r"(dst_u),                       // %1
+        "+r"(dst_v),                       // %2
+        "+rm"(width)                       // %3
+      : "r"((intptr_t)(src_stride_rgba)),  // %4
+        "m"(kRGBAToV),                     // %5
+        "m"(kRGBAToU),                     // %6
+        "m"(kAddUV128)                     // %7
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm6", "xmm7");
 }
 
 #if defined(HAS_I422TOARGBROW_SSSE3) || defined(HAS_I422TOARGBROW_AVX2)
 
 // Read 8 UV from 444
-#define READYUV444                                                             \
-    "movq       " MEMACCESS([u_buf]) ",%%xmm0                   \n"            \
-    MEMOPREG(movq, 0x00, [u_buf], [v_buf], 1, xmm1)                            \
-    "lea        " MEMLEA(0x8, [u_buf]) ",%[u_buf]               \n"            \
-    "punpcklbw  %%xmm1,%%xmm0                                   \n"            \
-    "movq       " MEMACCESS([y_buf]) ",%%xmm4                   \n"            \
-    "punpcklbw  %%xmm4,%%xmm4                                   \n"            \
-    "lea        " MEMLEA(0x8, [y_buf]) ",%[y_buf]               \n"
+#define READYUV444                                                \
+  "movq       (%[u_buf]),%%xmm0                               \n" \
+  "movq       0x00(%[u_buf],%[v_buf],1),%%xmm1                \n" \
+  "lea        0x8(%[u_buf]),%[u_buf]                          \n" \
+  "punpcklbw  %%xmm1,%%xmm0                                   \n" \
+  "movq       (%[y_buf]),%%xmm4                               \n" \
+  "punpcklbw  %%xmm4,%%xmm4                                   \n" \
+  "lea        0x8(%[y_buf]),%[y_buf]                          \n"
 
 // Read 4 UV from 422, upsample to 8 UV
-#define READYUV422                                                             \
-    "movd       " MEMACCESS([u_buf]) ",%%xmm0                   \n"            \
-    MEMOPREG(movd, 0x00, [u_buf], [v_buf], 1, xmm1)                            \
-    "lea        " MEMLEA(0x4, [u_buf]) ",%[u_buf]               \n"            \
-    "punpcklbw  %%xmm1,%%xmm0                                   \n"            \
-    "punpcklwd  %%xmm0,%%xmm0                                   \n"            \
-    "movq       " MEMACCESS([y_buf]) ",%%xmm4                   \n"            \
-    "punpcklbw  %%xmm4,%%xmm4                                   \n"            \
-    "lea        " MEMLEA(0x8, [y_buf]) ",%[y_buf]               \n"
+#define READYUV422                                                \
+  "movd       (%[u_buf]),%%xmm0                               \n" \
+  "movd       0x00(%[u_buf],%[v_buf],1),%%xmm1                \n" \
+  "lea        0x4(%[u_buf]),%[u_buf]                          \n" \
+  "punpcklbw  %%xmm1,%%xmm0                                   \n" \
+  "punpcklwd  %%xmm0,%%xmm0                                   \n" \
+  "movq       (%[y_buf]),%%xmm4                               \n" \
+  "punpcklbw  %%xmm4,%%xmm4                                   \n" \
+  "lea        0x8(%[y_buf]),%[y_buf]                          \n"
+
+// Read 4 UV from 422 10 bit, upsample to 8 UV
+// TODO(fbarchard): Consider shufb to replace pack/unpack
+// TODO(fbarchard): Consider pmulhuw to replace psraw
+// TODO(fbarchard): Consider pmullw to replace psllw and allow different bits.
+#define READYUV210                                                \
+  "movq       (%[u_buf]),%%xmm0                               \n" \
+  "movq       0x00(%[u_buf],%[v_buf],1),%%xmm1                \n" \
+  "lea        0x8(%[u_buf]),%[u_buf]                          \n" \
+  "punpcklwd  %%xmm1,%%xmm0                                   \n" \
+  "psraw      $0x2,%%xmm0                                     \n" \
+  "packuswb   %%xmm0,%%xmm0                                   \n" \
+  "punpcklwd  %%xmm0,%%xmm0                                   \n" \
+  "movdqu     (%[y_buf]),%%xmm4                               \n" \
+  "psllw      $0x6,%%xmm4                                     \n" \
+  "lea        0x10(%[y_buf]),%[y_buf]                         \n"
 
 // Read 4 UV from 422, upsample to 8 UV.  With 8 Alpha.
-#define READYUVA422                                                            \
-    "movd       " MEMACCESS([u_buf]) ",%%xmm0                   \n"            \
-    MEMOPREG(movd, 0x00, [u_buf], [v_buf], 1, xmm1)                            \
-    "lea        " MEMLEA(0x4, [u_buf]) ",%[u_buf]               \n"            \
-    "punpcklbw  %%xmm1,%%xmm0                                   \n"            \
-    "punpcklwd  %%xmm0,%%xmm0                                   \n"            \
-    "movq       " MEMACCESS([y_buf]) ",%%xmm4                   \n"            \
-    "punpcklbw  %%xmm4,%%xmm4                                   \n"            \
-    "lea        " MEMLEA(0x8, [y_buf]) ",%[y_buf]               \n"            \
-    "movq       " MEMACCESS([a_buf]) ",%%xmm5                   \n"            \
-    "lea        " MEMLEA(0x8, [a_buf]) ",%[a_buf]               \n"
-
-// Read 2 UV from 411, upsample to 8 UV.
-// reading 4 bytes is an msan violation.
-//    "movd       " MEMACCESS([u_buf]) ",%%xmm0                   \n"
-//    MEMOPREG(movd, 0x00, [u_buf], [v_buf], 1, xmm1)
-// pinsrw fails with drmemory
-//  __asm pinsrw     xmm0, [esi], 0        /* U */
-//  __asm pinsrw     xmm1, [esi + edi], 0  /* V */
-#define READYUV411_TEMP                                                        \
-    "movzwl     " MEMACCESS([u_buf]) ",%[temp]                  \n"            \
-    "movd       %[temp],%%xmm0                                  \n"            \
-    MEMOPARG(movzwl, 0x00, [u_buf], [v_buf], 1, [temp]) "       \n"            \
-    "movd       %[temp],%%xmm1                                  \n"            \
-    "lea        " MEMLEA(0x2, [u_buf]) ",%[u_buf]               \n"            \
-    "punpcklbw  %%xmm1,%%xmm0                                   \n"            \
-    "punpcklwd  %%xmm0,%%xmm0                                   \n"            \
-    "punpckldq  %%xmm0,%%xmm0                                   \n"            \
-    "movq       " MEMACCESS([y_buf]) ",%%xmm4                   \n"            \
-    "punpcklbw  %%xmm4,%%xmm4                                   \n"            \
-    "lea        " MEMLEA(0x8, [y_buf]) ",%[y_buf]               \n"
+#define READYUVA422                                               \
+  "movd       (%[u_buf]),%%xmm0                               \n" \
+  "movd       0x00(%[u_buf],%[v_buf],1),%%xmm1                \n" \
+  "lea        0x4(%[u_buf]),%[u_buf]                          \n" \
+  "punpcklbw  %%xmm1,%%xmm0                                   \n" \
+  "punpcklwd  %%xmm0,%%xmm0                                   \n" \
+  "movq       (%[y_buf]),%%xmm4                               \n" \
+  "punpcklbw  %%xmm4,%%xmm4                                   \n" \
+  "lea        0x8(%[y_buf]),%[y_buf]                          \n" \
+  "movq       (%[a_buf]),%%xmm5                               \n" \
+  "lea        0x8(%[a_buf]),%[a_buf]                          \n"
 
 // Read 4 UV from NV12, upsample to 8 UV
-#define READNV12                                                               \
-    "movq       " MEMACCESS([uv_buf]) ",%%xmm0                  \n"            \
-    "lea        " MEMLEA(0x8, [uv_buf]) ",%[uv_buf]             \n"            \
-    "punpcklwd  %%xmm0,%%xmm0                                   \n"            \
-    "movq       " MEMACCESS([y_buf]) ",%%xmm4                   \n"            \
-    "punpcklbw  %%xmm4,%%xmm4                                   \n"            \
-    "lea        " MEMLEA(0x8, [y_buf]) ",%[y_buf]               \n"
+#define READNV12                                                  \
+  "movq       (%[uv_buf]),%%xmm0                              \n" \
+  "lea        0x8(%[uv_buf]),%[uv_buf]                        \n" \
+  "punpcklwd  %%xmm0,%%xmm0                                   \n" \
+  "movq       (%[y_buf]),%%xmm4                               \n" \
+  "punpcklbw  %%xmm4,%%xmm4                                   \n" \
+  "lea        0x8(%[y_buf]),%[y_buf]                          \n"
 
 // Read 4 VU from NV21, upsample to 8 UV
-#define READNV21                                                               \
-    "movq       " MEMACCESS([vu_buf]) ",%%xmm0                  \n"            \
-    "lea        " MEMLEA(0x8, [vu_buf]) ",%[vu_buf]             \n"            \
-    "pshufb     %[kShuffleNV21], %%xmm0                         \n"            \
-    "movq       " MEMACCESS([y_buf]) ",%%xmm4                   \n"            \
-    "punpcklbw  %%xmm4,%%xmm4                                   \n"            \
-    "lea        " MEMLEA(0x8, [y_buf]) ",%[y_buf]               \n"
+#define READNV21                                                  \
+  "movq       (%[vu_buf]),%%xmm0                              \n" \
+  "lea        0x8(%[vu_buf]),%[vu_buf]                        \n" \
+  "pshufb     %[kShuffleNV21], %%xmm0                         \n" \
+  "movq       (%[y_buf]),%%xmm4                               \n" \
+  "punpcklbw  %%xmm4,%%xmm4                                   \n" \
+  "lea        0x8(%[y_buf]),%[y_buf]                          \n"
 
 // Read 4 YUY2 with 8 Y and update 4 UV to 8 UV.
-#define READYUY2                                                               \
-    "movdqu     " MEMACCESS([yuy2_buf]) ",%%xmm4                \n"            \
-    "pshufb     %[kShuffleYUY2Y], %%xmm4                        \n"            \
-    "movdqu     " MEMACCESS([yuy2_buf]) ",%%xmm0                \n"            \
-    "pshufb     %[kShuffleYUY2UV], %%xmm0                       \n"            \
-    "lea        " MEMLEA(0x10, [yuy2_buf]) ",%[yuy2_buf]        \n"
+#define READYUY2                                                  \
+  "movdqu     (%[yuy2_buf]),%%xmm4                            \n" \
+  "pshufb     %[kShuffleYUY2Y], %%xmm4                        \n" \
+  "movdqu     (%[yuy2_buf]),%%xmm0                            \n" \
+  "pshufb     %[kShuffleYUY2UV], %%xmm0                       \n" \
+  "lea        0x10(%[yuy2_buf]),%[yuy2_buf]                   \n"
 
 // Read 4 UYVY with 8 Y and update 4 UV to 8 UV.
-#define READUYVY                                                               \
-    "movdqu     " MEMACCESS([uyvy_buf]) ",%%xmm4                \n"            \
-    "pshufb     %[kShuffleUYVYY], %%xmm4                        \n"            \
-    "movdqu     " MEMACCESS([uyvy_buf]) ",%%xmm0                \n"            \
-    "pshufb     %[kShuffleUYVYUV], %%xmm0                       \n"            \
-    "lea        " MEMLEA(0x10, [uyvy_buf]) ",%[uyvy_buf]        \n"
+#define READUYVY                                                  \
+  "movdqu     (%[uyvy_buf]),%%xmm4                            \n" \
+  "pshufb     %[kShuffleUYVYY], %%xmm4                        \n" \
+  "movdqu     (%[uyvy_buf]),%%xmm0                            \n" \
+  "pshufb     %[kShuffleUYVYUV], %%xmm0                       \n" \
+  "lea        0x10(%[uyvy_buf]),%[uyvy_buf]                   \n"
 
 #if defined(__x86_64__)
-#define YUVTORGB_SETUP(yuvconstants)                                           \
-    "movdqa     " MEMACCESS([yuvconstants]) ",%%xmm8            \n"            \
-    "movdqa     " MEMACCESS2(32, [yuvconstants]) ",%%xmm9       \n"            \
-    "movdqa     " MEMACCESS2(64, [yuvconstants]) ",%%xmm10      \n"            \
-    "movdqa     " MEMACCESS2(96, [yuvconstants]) ",%%xmm11      \n"            \
-    "movdqa     " MEMACCESS2(128, [yuvconstants]) ",%%xmm12     \n"            \
-    "movdqa     " MEMACCESS2(160, [yuvconstants]) ",%%xmm13     \n"            \
-    "movdqa     " MEMACCESS2(192, [yuvconstants]) ",%%xmm14     \n"
+#define YUVTORGB_SETUP(yuvconstants)                              \
+  "movdqa     (%[yuvconstants]),%%xmm8                        \n" \
+  "movdqa     32(%[yuvconstants]),%%xmm9                      \n" \
+  "movdqa     64(%[yuvconstants]),%%xmm10                     \n" \
+  "movdqa     96(%[yuvconstants]),%%xmm11                     \n" \
+  "movdqa     128(%[yuvconstants]),%%xmm12                    \n" \
+  "movdqa     160(%[yuvconstants]),%%xmm13                    \n" \
+  "movdqa     192(%[yuvconstants]),%%xmm14                    \n"
 // Convert 8 pixels: 8 UV and 8 Y
-#define YUVTORGB(yuvconstants)                                                 \
-    "movdqa     %%xmm0,%%xmm1                                   \n"            \
-    "movdqa     %%xmm0,%%xmm2                                   \n"            \
-    "movdqa     %%xmm0,%%xmm3                                   \n"            \
-    "movdqa     %%xmm11,%%xmm0                                  \n"            \
-    "pmaddubsw  %%xmm8,%%xmm1                                   \n"            \
-    "psubw      %%xmm1,%%xmm0                                   \n"            \
-    "movdqa     %%xmm12,%%xmm1                                  \n"            \
-    "pmaddubsw  %%xmm9,%%xmm2                                   \n"            \
-    "psubw      %%xmm2,%%xmm1                                   \n"            \
-    "movdqa     %%xmm13,%%xmm2                                  \n"            \
-    "pmaddubsw  %%xmm10,%%xmm3                                  \n"            \
-    "psubw      %%xmm3,%%xmm2                                   \n"            \
-    "pmulhuw    %%xmm14,%%xmm4                                  \n"            \
-    "paddsw     %%xmm4,%%xmm0                                   \n"            \
-    "paddsw     %%xmm4,%%xmm1                                   \n"            \
-    "paddsw     %%xmm4,%%xmm2                                   \n"            \
-    "psraw      $0x6,%%xmm0                                     \n"            \
-    "psraw      $0x6,%%xmm1                                     \n"            \
-    "psraw      $0x6,%%xmm2                                     \n"            \
-    "packuswb   %%xmm0,%%xmm0                                   \n"            \
-    "packuswb   %%xmm1,%%xmm1                                   \n"            \
-    "packuswb   %%xmm2,%%xmm2                                   \n"
+#define YUVTORGB16(yuvconstants)                                  \
+  "movdqa     %%xmm0,%%xmm1                                   \n" \
+  "movdqa     %%xmm0,%%xmm2                                   \n" \
+  "movdqa     %%xmm0,%%xmm3                                   \n" \
+  "movdqa     %%xmm11,%%xmm0                                  \n" \
+  "pmaddubsw  %%xmm8,%%xmm1                                   \n" \
+  "psubw      %%xmm1,%%xmm0                                   \n" \
+  "movdqa     %%xmm12,%%xmm1                                  \n" \
+  "pmaddubsw  %%xmm9,%%xmm2                                   \n" \
+  "psubw      %%xmm2,%%xmm1                                   \n" \
+  "movdqa     %%xmm13,%%xmm2                                  \n" \
+  "pmaddubsw  %%xmm10,%%xmm3                                  \n" \
+  "psubw      %%xmm3,%%xmm2                                   \n" \
+  "pmulhuw    %%xmm14,%%xmm4                                  \n" \
+  "paddsw     %%xmm4,%%xmm0                                   \n" \
+  "paddsw     %%xmm4,%%xmm1                                   \n" \
+  "paddsw     %%xmm4,%%xmm2                                   \n"
 #define YUVTORGB_REGS \
-    "xmm8", "xmm9", "xmm10", "xmm11", "xmm12", "xmm13", "xmm14",
+  "xmm8", "xmm9", "xmm10", "xmm11", "xmm12", "xmm13", "xmm14",
 
 #else
 #define YUVTORGB_SETUP(yuvconstants)
 // Convert 8 pixels: 8 UV and 8 Y
-#define YUVTORGB(yuvconstants)                                                 \
-    "movdqa     %%xmm0,%%xmm1                                   \n"            \
-    "movdqa     %%xmm0,%%xmm2                                   \n"            \
-    "movdqa     %%xmm0,%%xmm3                                   \n"            \
-    "movdqa     " MEMACCESS2(96, [yuvconstants]) ",%%xmm0       \n"            \
-    "pmaddubsw  " MEMACCESS([yuvconstants]) ",%%xmm1            \n"            \
-    "psubw      %%xmm1,%%xmm0                                   \n"            \
-    "movdqa     " MEMACCESS2(128, [yuvconstants]) ",%%xmm1      \n"            \
-    "pmaddubsw  " MEMACCESS2(32, [yuvconstants]) ",%%xmm2       \n"            \
-    "psubw      %%xmm2,%%xmm1                                   \n"            \
-    "movdqa     " MEMACCESS2(160, [yuvconstants]) ",%%xmm2      \n"            \
-    "pmaddubsw  " MEMACCESS2(64, [yuvconstants]) ",%%xmm3       \n"            \
-    "psubw      %%xmm3,%%xmm2                                   \n"            \
-    "pmulhuw    " MEMACCESS2(192, [yuvconstants]) ",%%xmm4      \n"            \
-    "paddsw     %%xmm4,%%xmm0                                   \n"            \
-    "paddsw     %%xmm4,%%xmm1                                   \n"            \
-    "paddsw     %%xmm4,%%xmm2                                   \n"            \
-    "psraw      $0x6,%%xmm0                                     \n"            \
-    "psraw      $0x6,%%xmm1                                     \n"            \
-    "psraw      $0x6,%%xmm2                                     \n"            \
-    "packuswb   %%xmm0,%%xmm0                                   \n"            \
-    "packuswb   %%xmm1,%%xmm1                                   \n"            \
-    "packuswb   %%xmm2,%%xmm2                                   \n"
+#define YUVTORGB16(yuvconstants)                                  \
+  "movdqa     %%xmm0,%%xmm1                                   \n" \
+  "movdqa     %%xmm0,%%xmm2                                   \n" \
+  "movdqa     %%xmm0,%%xmm3                                   \n" \
+  "movdqa     96(%[yuvconstants]),%%xmm0                      \n" \
+  "pmaddubsw  (%[yuvconstants]),%%xmm1                        \n" \
+  "psubw      %%xmm1,%%xmm0                                   \n" \
+  "movdqa     128(%[yuvconstants]),%%xmm1                     \n" \
+  "pmaddubsw  32(%[yuvconstants]),%%xmm2                      \n" \
+  "psubw      %%xmm2,%%xmm1                                   \n" \
+  "movdqa     160(%[yuvconstants]),%%xmm2                     \n" \
+  "pmaddubsw  64(%[yuvconstants]),%%xmm3                      \n" \
+  "psubw      %%xmm3,%%xmm2                                   \n" \
+  "pmulhuw    192(%[yuvconstants]),%%xmm4                     \n" \
+  "paddsw     %%xmm4,%%xmm0                                   \n" \
+  "paddsw     %%xmm4,%%xmm1                                   \n" \
+  "paddsw     %%xmm4,%%xmm2                                   \n"
 #define YUVTORGB_REGS
 #endif
 
+#define YUVTORGB(yuvconstants)                                    \
+  YUVTORGB16(yuvconstants)                                        \
+  "psraw      $0x6,%%xmm0                                     \n" \
+  "psraw      $0x6,%%xmm1                                     \n" \
+  "psraw      $0x6,%%xmm2                                     \n" \
+  "packuswb   %%xmm0,%%xmm0                                   \n" \
+  "packuswb   %%xmm1,%%xmm1                                   \n" \
+  "packuswb   %%xmm2,%%xmm2                                   \n"
+
 // Store 8 ARGB values.
-#define STOREARGB                                                              \
-    "punpcklbw  %%xmm1,%%xmm0                                    \n"           \
-    "punpcklbw  %%xmm5,%%xmm2                                    \n"           \
-    "movdqa     %%xmm0,%%xmm1                                    \n"           \
-    "punpcklwd  %%xmm2,%%xmm0                                    \n"           \
-    "punpckhwd  %%xmm2,%%xmm1                                    \n"           \
-    "movdqu     %%xmm0," MEMACCESS([dst_argb]) "                 \n"           \
-    "movdqu     %%xmm1," MEMACCESS2(0x10, [dst_argb]) "          \n"           \
-    "lea        " MEMLEA(0x20, [dst_argb]) ", %[dst_argb]        \n"
+#define STOREARGB                                                  \
+  "punpcklbw  %%xmm1,%%xmm0                                    \n" \
+  "punpcklbw  %%xmm5,%%xmm2                                    \n" \
+  "movdqa     %%xmm0,%%xmm1                                    \n" \
+  "punpcklwd  %%xmm2,%%xmm0                                    \n" \
+  "punpckhwd  %%xmm2,%%xmm1                                    \n" \
+  "movdqu     %%xmm0,(%[dst_argb])                             \n" \
+  "movdqu     %%xmm1,0x10(%[dst_argb])                         \n" \
+  "lea        0x20(%[dst_argb]), %[dst_argb]                   \n"
 
 // Store 8 RGBA values.
-#define STORERGBA                                                              \
-    "pcmpeqb   %%xmm5,%%xmm5                                     \n"           \
-    "punpcklbw %%xmm2,%%xmm1                                     \n"           \
-    "punpcklbw %%xmm0,%%xmm5                                     \n"           \
-    "movdqa    %%xmm5,%%xmm0                                     \n"           \
-    "punpcklwd %%xmm1,%%xmm5                                     \n"           \
-    "punpckhwd %%xmm1,%%xmm0                                     \n"           \
-    "movdqu    %%xmm5," MEMACCESS([dst_rgba]) "                  \n"           \
-    "movdqu    %%xmm0," MEMACCESS2(0x10, [dst_rgba]) "           \n"           \
-    "lea       " MEMLEA(0x20, [dst_rgba]) ",%[dst_rgba]          \n"
+#define STORERGBA                                                  \
+  "pcmpeqb   %%xmm5,%%xmm5                                     \n" \
+  "punpcklbw %%xmm2,%%xmm1                                     \n" \
+  "punpcklbw %%xmm0,%%xmm5                                     \n" \
+  "movdqa    %%xmm5,%%xmm0                                     \n" \
+  "punpcklwd %%xmm1,%%xmm5                                     \n" \
+  "punpckhwd %%xmm1,%%xmm0                                     \n" \
+  "movdqu    %%xmm5,(%[dst_rgba])                              \n" \
+  "movdqu    %%xmm0,0x10(%[dst_rgba])                          \n" \
+  "lea       0x20(%[dst_rgba]),%[dst_rgba]                     \n"
 
-void OMITFP I444ToARGBRow_SSSE3(const uint8* y_buf,
-                                const uint8* u_buf,
-                                const uint8* v_buf,
-                                uint8* dst_argb,
+// Store 8 AR30 values.
+#define STOREAR30                                                  \
+  "psraw      $0x4,%%xmm0                                      \n" \
+  "psraw      $0x4,%%xmm1                                      \n" \
+  "psraw      $0x4,%%xmm2                                      \n" \
+  "pminsw     %%xmm7,%%xmm0                                    \n" \
+  "pminsw     %%xmm7,%%xmm1                                    \n" \
+  "pminsw     %%xmm7,%%xmm2                                    \n" \
+  "pmaxsw     %%xmm6,%%xmm0                                    \n" \
+  "pmaxsw     %%xmm6,%%xmm1                                    \n" \
+  "pmaxsw     %%xmm6,%%xmm2                                    \n" \
+  "psllw      $0x4,%%xmm2                                      \n" \
+  "movdqa     %%xmm0,%%xmm3                                    \n" \
+  "punpcklwd  %%xmm2,%%xmm0                                    \n" \
+  "punpckhwd  %%xmm2,%%xmm3                                    \n" \
+  "movdqa     %%xmm1,%%xmm2                                    \n" \
+  "punpcklwd  %%xmm5,%%xmm1                                    \n" \
+  "punpckhwd  %%xmm5,%%xmm2                                    \n" \
+  "pslld      $0xa,%%xmm1                                      \n" \
+  "pslld      $0xa,%%xmm2                                      \n" \
+  "por        %%xmm1,%%xmm0                                    \n" \
+  "por        %%xmm2,%%xmm3                                    \n" \
+  "movdqu     %%xmm0,(%[dst_ar30])                             \n" \
+  "movdqu     %%xmm3,0x10(%[dst_ar30])                         \n" \
+  "lea        0x20(%[dst_ar30]), %[dst_ar30]                   \n"
+
+void OMITFP I444ToARGBRow_SSSE3(const uint8_t* y_buf,
+                                const uint8_t* u_buf,
+                                const uint8_t* v_buf,
+                                uint8_t* dst_argb,
                                 const struct YuvConstants* yuvconstants,
                                 int width) {
   asm volatile (
     YUVTORGB_SETUP(yuvconstants)
     "sub       %[u_buf],%[v_buf]               \n"
     "pcmpeqb   %%xmm5,%%xmm5                   \n"
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
     READYUV444
     YUVTORGB(yuvconstants)
     STOREARGB
@@ -1691,15 +2028,15 @@
     [dst_argb]"+r"(dst_argb),  // %[dst_argb]
     [width]"+rm"(width)    // %[width]
   : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
-  : "memory", "cc", NACL_R14 YUVTORGB_REGS
+  : "memory", "cc", YUVTORGB_REGS
     "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
 }
 
-void OMITFP I422ToRGB24Row_SSSE3(const uint8* y_buf,
-                                 const uint8* u_buf,
-                                 const uint8* v_buf,
-                                 uint8* dst_rgb24,
+void OMITFP I422ToRGB24Row_SSSE3(const uint8_t* y_buf,
+                                 const uint8_t* u_buf,
+                                 const uint8_t* v_buf,
+                                 uint8_t* dst_rgb24,
                                  const struct YuvConstants* yuvconstants,
                                  int width) {
   asm volatile (
@@ -1707,8 +2044,9 @@
     "movdqa    %[kShuffleMaskARGBToRGB24_0],%%xmm5 \n"
     "movdqa    %[kShuffleMaskARGBToRGB24],%%xmm6   \n"
     "sub       %[u_buf],%[v_buf]               \n"
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
     READYUV422
     YUVTORGB(yuvconstants)
     "punpcklbw %%xmm1,%%xmm0                   \n"
@@ -1719,16 +2057,16 @@
     "pshufb    %%xmm5,%%xmm0                   \n"
     "pshufb    %%xmm6,%%xmm1                   \n"
     "palignr   $0xc,%%xmm0,%%xmm1              \n"
-    "movq      %%xmm0," MEMACCESS([dst_rgb24]) "\n"
-    "movdqu    %%xmm1," MEMACCESS2(0x8,[dst_rgb24]) "\n"
-    "lea       " MEMLEA(0x18,[dst_rgb24]) ",%[dst_rgb24] \n"
+    "movq      %%xmm0,(%[dst_rgb24])           \n"
+    "movdqu    %%xmm1,0x8(%[dst_rgb24])        \n"
+    "lea       0x18(%[dst_rgb24]),%[dst_rgb24] \n"
     "subl      $0x8,%[width]                   \n"
     "jg        1b                              \n"
   : [y_buf]"+r"(y_buf),    // %[y_buf]
     [u_buf]"+r"(u_buf),    // %[u_buf]
     [v_buf]"+r"(v_buf),    // %[v_buf]
     [dst_rgb24]"+r"(dst_rgb24),  // %[dst_rgb24]
-#if defined(__i386__) && defined(__pic__)
+#if defined(__i386__)
     [width]"+m"(width)     // %[width]
 #else
     [width]"+rm"(width)    // %[width]
@@ -1736,23 +2074,24 @@
   : [yuvconstants]"r"(yuvconstants),  // %[yuvconstants]
     [kShuffleMaskARGBToRGB24_0]"m"(kShuffleMaskARGBToRGB24_0),
     [kShuffleMaskARGBToRGB24]"m"(kShuffleMaskARGBToRGB24)
-  : "memory", "cc", NACL_R14 YUVTORGB_REGS
+  : "memory", "cc", YUVTORGB_REGS
     "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6"
   );
 }
 
-void OMITFP I422ToARGBRow_SSSE3(const uint8* y_buf,
-                                const uint8* u_buf,
-                                const uint8* v_buf,
-                                uint8* dst_argb,
+void OMITFP I422ToARGBRow_SSSE3(const uint8_t* y_buf,
+                                const uint8_t* u_buf,
+                                const uint8_t* v_buf,
+                                uint8_t* dst_argb,
                                 const struct YuvConstants* yuvconstants,
                                 int width) {
   asm volatile (
     YUVTORGB_SETUP(yuvconstants)
     "sub       %[u_buf],%[v_buf]               \n"
     "pcmpeqb   %%xmm5,%%xmm5                   \n"
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
     READYUV422
     YUVTORGB(yuvconstants)
     STOREARGB
@@ -1764,24 +2103,125 @@
     [dst_argb]"+r"(dst_argb),  // %[dst_argb]
     [width]"+rm"(width)    // %[width]
   : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
-  : "memory", "cc", NACL_R14 YUVTORGB_REGS
+  : "memory", "cc", YUVTORGB_REGS
     "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
 }
 
-#ifdef HAS_I422ALPHATOARGBROW_SSSE3
-void OMITFP I422AlphaToARGBRow_SSSE3(const uint8* y_buf,
-                                     const uint8* u_buf,
-                                     const uint8* v_buf,
-                                     const uint8* a_buf,
-                                     uint8* dst_argb,
-                                     const struct YuvConstants* yuvconstants,
-                                     int width) {
+void OMITFP I422ToAR30Row_SSSE3(const uint8_t* y_buf,
+                                const uint8_t* u_buf,
+                                const uint8_t* v_buf,
+                                uint8_t* dst_ar30,
+                                const struct YuvConstants* yuvconstants,
+                                int width) {
   asm volatile (
     YUVTORGB_SETUP(yuvconstants)
     "sub       %[u_buf],%[v_buf]               \n"
+    "pcmpeqb   %%xmm5,%%xmm5                   \n"  // AR30 constants
+    "psrlw     $14,%%xmm5                      \n"
+    "psllw     $4,%%xmm5                       \n"  // 2 alpha bits
+    "pxor      %%xmm6,%%xmm6                   \n"
+    "pcmpeqb   %%xmm7,%%xmm7                   \n"  // 0 for min
+    "psrlw     $6,%%xmm7                       \n"  // 1023 for max
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
+    READYUV422
+    YUVTORGB16(yuvconstants)
+    STOREAR30
+    "sub       $0x8,%[width]                   \n"
+    "jg        1b                              \n"
+  : [y_buf]"+r"(y_buf),    // %[y_buf]
+    [u_buf]"+r"(u_buf),    // %[u_buf]
+    [v_buf]"+r"(v_buf),    // %[v_buf]
+    [dst_ar30]"+r"(dst_ar30),  // %[dst_ar30]
+    [width]"+rm"(width)    // %[width]
+  : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
+  : "memory", "cc", YUVTORGB_REGS
+    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
+  );
+}
+
+// 10 bit YUV to ARGB
+void OMITFP I210ToARGBRow_SSSE3(const uint16_t* y_buf,
+                                const uint16_t* u_buf,
+                                const uint16_t* v_buf,
+                                uint8_t* dst_argb,
+                                const struct YuvConstants* yuvconstants,
+                                int width) {
+  asm volatile (
+    YUVTORGB_SETUP(yuvconstants)
+    "sub       %[u_buf],%[v_buf]               \n"
+    "pcmpeqb   %%xmm5,%%xmm5                   \n"
+
+    LABELALIGN
+    "1:                                        \n"
+    READYUV210
+    YUVTORGB(yuvconstants)
+    STOREARGB
+    "sub       $0x8,%[width]                   \n"
+    "jg        1b                              \n"
+  : [y_buf]"+r"(y_buf),    // %[y_buf]
+    [u_buf]"+r"(u_buf),    // %[u_buf]
+    [v_buf]"+r"(v_buf),    // %[v_buf]
+    [dst_argb]"+r"(dst_argb),  // %[dst_argb]
+    [width]"+rm"(width)    // %[width]
+  : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
+  : "memory", "cc", YUVTORGB_REGS
+    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
+  );
+}
+
+// 10 bit YUV to AR30
+void OMITFP I210ToAR30Row_SSSE3(const uint16_t* y_buf,
+                                const uint16_t* u_buf,
+                                const uint16_t* v_buf,
+                                uint8_t* dst_ar30,
+                                const struct YuvConstants* yuvconstants,
+                                int width) {
+  asm volatile (
+    YUVTORGB_SETUP(yuvconstants)
+    "sub       %[u_buf],%[v_buf]               \n"
+    "pcmpeqb   %%xmm5,%%xmm5                   \n"
+    "psrlw     $14,%%xmm5                      \n"
+    "psllw     $4,%%xmm5                       \n"  // 2 alpha bits
+    "pxor      %%xmm6,%%xmm6                   \n"
+    "pcmpeqb   %%xmm7,%%xmm7                   \n"  // 0 for min
+    "psrlw     $6,%%xmm7                       \n"  // 1023 for max
+
+    LABELALIGN
+    "1:                                        \n"
+    READYUV210
+    YUVTORGB16(yuvconstants)
+    STOREAR30
+    "sub       $0x8,%[width]                   \n"
+    "jg        1b                              \n"
+  : [y_buf]"+r"(y_buf),    // %[y_buf]
+    [u_buf]"+r"(u_buf),    // %[u_buf]
+    [v_buf]"+r"(v_buf),    // %[v_buf]
+    [dst_ar30]"+r"(dst_ar30),  // %[dst_ar30]
+    [width]"+rm"(width)    // %[width]
+  : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
+  : "memory", "cc", YUVTORGB_REGS
+    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
+  );
+}
+
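For readers unfamiliar with the AR30 output format these new rows emit: a scalar Python sketch of the 2:10:10:10 packing that the STOREAR30 macro performs with SIMD shifts and word unpacks. The clamp mirrors the xmm6 (zero) and xmm7 (1023) min/max constants set up above; the channel placement is inferred from the shift amounts in the macro, so treat it as illustrative rather than authoritative.

```python
def pack_ar30(r10, g10, b10):
    """Pack three 10-bit channels into one AR30 pixel with opaque 2-bit alpha.

    Mirrors STOREAR30: clamp each channel to [0, 1023] (the vpminsw/vpmaxsw
    against the 1023 and 0 constants), then place B in bits 0-9, G in bits
    10-19, R in bits 20-29, and the two alpha bits on top.
    """
    clamp = lambda v: max(0, min(1023, v))
    return (0x3 << 30) | (clamp(r10) << 20) | (clamp(g10) << 10) | clamp(b10)
```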
+#ifdef HAS_I422ALPHATOARGBROW_SSSE3
+void OMITFP I422AlphaToARGBRow_SSSE3(const uint8_t* y_buf,
+                                     const uint8_t* u_buf,
+                                     const uint8_t* v_buf,
+                                     const uint8_t* a_buf,
+                                     uint8_t* dst_argb,
+                                     const struct YuvConstants* yuvconstants,
+                                     int width) {
+  // clang-format off
+  asm volatile (
+    YUVTORGB_SETUP(yuvconstants)
+    "sub       %[u_buf],%[v_buf]               \n"
+
+    LABELALIGN
+    "1:                                        \n"
     READYUVA422
     YUVTORGB(yuvconstants)
     STOREARGB
@@ -1792,64 +2232,31 @@
     [v_buf]"+r"(v_buf),    // %[v_buf]
     [a_buf]"+r"(a_buf),    // %[a_buf]
     [dst_argb]"+r"(dst_argb),  // %[dst_argb]
-#if defined(__i386__) && defined(__pic__)
+#if defined(__i386__)
     [width]"+m"(width)     // %[width]
 #else
     [width]"+rm"(width)    // %[width]
 #endif
   : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
-  : "memory", "cc", NACL_R14 YUVTORGB_REGS
+  : "memory", "cc", YUVTORGB_REGS
     "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
+  // clang-format on
 }
 #endif  // HAS_I422ALPHATOARGBROW_SSSE3
 
-#ifdef HAS_I411TOARGBROW_SSSE3
-void OMITFP I411ToARGBRow_SSSE3(const uint8* y_buf,
-                                const uint8* u_buf,
-                                const uint8* v_buf,
-                                uint8* dst_argb,
+void OMITFP NV12ToARGBRow_SSSE3(const uint8_t* y_buf,
+                                const uint8_t* uv_buf,
+                                uint8_t* dst_argb,
                                 const struct YuvConstants* yuvconstants,
                                 int width) {
-  int temp;
+  // clang-format off
   asm volatile (
     YUVTORGB_SETUP(yuvconstants)
-    "sub       %[u_buf],%[v_buf]               \n"
     "pcmpeqb   %%xmm5,%%xmm5                   \n"
-    LABELALIGN
-  "1:                                          \n"
-    READYUV411_TEMP
-    YUVTORGB(yuvconstants)
-    STOREARGB
-    "subl      $0x8,%[width]                   \n"
-    "jg        1b                              \n"
-  : [y_buf]"+r"(y_buf),        // %[y_buf]
-    [u_buf]"+r"(u_buf),        // %[u_buf]
-    [v_buf]"+r"(v_buf),        // %[v_buf]
-    [dst_argb]"+r"(dst_argb),  // %[dst_argb]
-    [temp]"=&r"(temp),         // %[temp]
-#if defined(__i386__) && defined(__pic__)
-    [width]"+m"(width)         // %[width]
-#else
-    [width]"+rm"(width)        // %[width]
-#endif
-  : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
-  : "memory", "cc", NACL_R14 YUVTORGB_REGS
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
-}
-#endif
 
-void OMITFP NV12ToARGBRow_SSSE3(const uint8* y_buf,
-                                const uint8* uv_buf,
-                                uint8* dst_argb,
-                                const struct YuvConstants* yuvconstants,
-                                int width) {
-  asm volatile (
-    YUVTORGB_SETUP(yuvconstants)
-    "pcmpeqb   %%xmm5,%%xmm5                   \n"
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
     READNV12
     YUVTORGB(yuvconstants)
     STOREARGB
@@ -1860,21 +2267,24 @@
     [dst_argb]"+r"(dst_argb),  // %[dst_argb]
     [width]"+rm"(width)    // %[width]
   : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
-    : "memory", "cc", YUVTORGB_REGS  // Does not use r14.
+    : "memory", "cc", YUVTORGB_REGS
       "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
+  // clang-format on
 }
 
-void OMITFP NV21ToARGBRow_SSSE3(const uint8* y_buf,
-                                const uint8* vu_buf,
-                                uint8* dst_argb,
+void OMITFP NV21ToARGBRow_SSSE3(const uint8_t* y_buf,
+                                const uint8_t* vu_buf,
+                                uint8_t* dst_argb,
                                 const struct YuvConstants* yuvconstants,
                                 int width) {
+  // clang-format off
   asm volatile (
     YUVTORGB_SETUP(yuvconstants)
     "pcmpeqb   %%xmm5,%%xmm5                   \n"
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
     READNV21
     YUVTORGB(yuvconstants)
     STOREARGB
@@ -1886,20 +2296,23 @@
     [width]"+rm"(width)    // %[width]
   : [yuvconstants]"r"(yuvconstants), // %[yuvconstants]
     [kShuffleNV21]"m"(kShuffleNV21)
-    : "memory", "cc", YUVTORGB_REGS  // Does not use r14.
+    : "memory", "cc", YUVTORGB_REGS
       "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
+  // clang-format on
 }
 
-void OMITFP YUY2ToARGBRow_SSSE3(const uint8* yuy2_buf,
-                                uint8* dst_argb,
+void OMITFP YUY2ToARGBRow_SSSE3(const uint8_t* yuy2_buf,
+                                uint8_t* dst_argb,
                                 const struct YuvConstants* yuvconstants,
                                 int width) {
+  // clang-format off
   asm volatile (
     YUVTORGB_SETUP(yuvconstants)
     "pcmpeqb   %%xmm5,%%xmm5                   \n"
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
     READYUY2
     YUVTORGB(yuvconstants)
     STOREARGB
@@ -1911,20 +2324,23 @@
   : [yuvconstants]"r"(yuvconstants), // %[yuvconstants]
     [kShuffleYUY2Y]"m"(kShuffleYUY2Y),
     [kShuffleYUY2UV]"m"(kShuffleYUY2UV)
-    : "memory", "cc", YUVTORGB_REGS  // Does not use r14.
+    : "memory", "cc", YUVTORGB_REGS
       "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
+  // clang-format on
 }
 
-void OMITFP UYVYToARGBRow_SSSE3(const uint8* uyvy_buf,
-                                uint8* dst_argb,
+void OMITFP UYVYToARGBRow_SSSE3(const uint8_t* uyvy_buf,
+                                uint8_t* dst_argb,
                                 const struct YuvConstants* yuvconstants,
                                 int width) {
+  // clang-format off
   asm volatile (
     YUVTORGB_SETUP(yuvconstants)
     "pcmpeqb   %%xmm5,%%xmm5                   \n"
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
     READUYVY
     YUVTORGB(yuvconstants)
     STOREARGB
@@ -1936,23 +2352,25 @@
   : [yuvconstants]"r"(yuvconstants), // %[yuvconstants]
     [kShuffleUYVYY]"m"(kShuffleUYVYY),
     [kShuffleUYVYUV]"m"(kShuffleUYVYUV)
-    : "memory", "cc", YUVTORGB_REGS  // Does not use r14.
+    : "memory", "cc", YUVTORGB_REGS
       "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
+  // clang-format on
 }
 
-void OMITFP I422ToRGBARow_SSSE3(const uint8* y_buf,
-                                const uint8* u_buf,
-                                const uint8* v_buf,
-                                uint8* dst_rgba,
+void OMITFP I422ToRGBARow_SSSE3(const uint8_t* y_buf,
+                                const uint8_t* u_buf,
+                                const uint8_t* v_buf,
+                                uint8_t* dst_rgba,
                                 const struct YuvConstants* yuvconstants,
                                 int width) {
   asm volatile (
     YUVTORGB_SETUP(yuvconstants)
     "sub       %[u_buf],%[v_buf]               \n"
     "pcmpeqb   %%xmm5,%%xmm5                   \n"
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
     READYUV422
     YUVTORGB(yuvconstants)
     STORERGBA
@@ -1964,7 +2382,7 @@
     [dst_rgba]"+r"(dst_rgba),  // %[dst_rgba]
     [width]"+rm"(width)    // %[width]
   : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
-  : "memory", "cc", NACL_R14 YUVTORGB_REGS
+  : "memory", "cc", YUVTORGB_REGS
     "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
 }
@@ -1972,179 +2390,211 @@
 #endif  // HAS_I422TOARGBROW_SSSE3
 
 // Read 16 UV from 444
-#define READYUV444_AVX2                                                        \
-    "vmovdqu    " MEMACCESS([u_buf]) ",%%xmm0                       \n"        \
-    MEMOPREG(vmovdqu, 0x00, [u_buf], [v_buf], 1, xmm1)                         \
-    "lea        " MEMLEA(0x10, [u_buf]) ",%[u_buf]                  \n"        \
-    "vpermq     $0xd8,%%ymm0,%%ymm0                                 \n"        \
-    "vpermq     $0xd8,%%ymm1,%%ymm1                                 \n"        \
-    "vpunpcklbw %%ymm1,%%ymm0,%%ymm0                                \n"        \
-    "vmovdqu    " MEMACCESS([y_buf]) ",%%xmm4                       \n"        \
-    "vpermq     $0xd8,%%ymm4,%%ymm4                                 \n"        \
-    "vpunpcklbw %%ymm4,%%ymm4,%%ymm4                                \n"        \
-    "lea        " MEMLEA(0x10, [y_buf]) ",%[y_buf]                  \n"
+#define READYUV444_AVX2                                               \
+  "vmovdqu    (%[u_buf]),%%xmm0                                   \n" \
+  "vmovdqu    0x00(%[u_buf],%[v_buf],1),%%xmm1                    \n" \
+  "lea        0x10(%[u_buf]),%[u_buf]                             \n" \
+  "vpermq     $0xd8,%%ymm0,%%ymm0                                 \n" \
+  "vpermq     $0xd8,%%ymm1,%%ymm1                                 \n" \
+  "vpunpcklbw %%ymm1,%%ymm0,%%ymm0                                \n" \
+  "vmovdqu    (%[y_buf]),%%xmm4                                   \n" \
+  "vpermq     $0xd8,%%ymm4,%%ymm4                                 \n" \
+  "vpunpcklbw %%ymm4,%%ymm4,%%ymm4                                \n" \
+  "lea        0x10(%[y_buf]),%[y_buf]                             \n"
 
 // Read 8 UV from 422, upsample to 16 UV.
-#define READYUV422_AVX2                                                        \
-    "vmovq      " MEMACCESS([u_buf]) ",%%xmm0                       \n"        \
-    MEMOPREG(vmovq, 0x00, [u_buf], [v_buf], 1, xmm1)                           \
-    "lea        " MEMLEA(0x8, [u_buf]) ",%[u_buf]                   \n"        \
-    "vpunpcklbw %%ymm1,%%ymm0,%%ymm0                                \n"        \
-    "vpermq     $0xd8,%%ymm0,%%ymm0                                 \n"        \
-    "vpunpcklwd %%ymm0,%%ymm0,%%ymm0                                \n"        \
-    "vmovdqu    " MEMACCESS([y_buf]) ",%%xmm4                       \n"        \
-    "vpermq     $0xd8,%%ymm4,%%ymm4                                 \n"        \
-    "vpunpcklbw %%ymm4,%%ymm4,%%ymm4                                \n"        \
-    "lea        " MEMLEA(0x10, [y_buf]) ",%[y_buf]                  \n"
+#define READYUV422_AVX2                                               \
+  "vmovq      (%[u_buf]),%%xmm0                                   \n" \
+  "vmovq      0x00(%[u_buf],%[v_buf],1),%%xmm1                    \n" \
+  "lea        0x8(%[u_buf]),%[u_buf]                              \n" \
+  "vpunpcklbw %%ymm1,%%ymm0,%%ymm0                                \n" \
+  "vpermq     $0xd8,%%ymm0,%%ymm0                                 \n" \
+  "vpunpcklwd %%ymm0,%%ymm0,%%ymm0                                \n" \
+  "vmovdqu    (%[y_buf]),%%xmm4                                   \n" \
+  "vpermq     $0xd8,%%ymm4,%%ymm4                                 \n" \
+  "vpunpcklbw %%ymm4,%%ymm4,%%ymm4                                \n" \
+  "lea        0x10(%[y_buf]),%[y_buf]                             \n"
+
+// Read 8 UV from 210 10 bit, upsample to 16 UV
+// TODO(fbarchard): Consider vshufb to replace pack/unpack
+// TODO(fbarchard): Consider vunpcklpd to combine the 2 registers into 1.
+#define READYUV210_AVX2                                            \
+  "vmovdqu    (%[u_buf]),%%xmm0                                \n" \
+  "vmovdqu    0x00(%[u_buf],%[v_buf],1),%%xmm1                 \n" \
+  "lea        0x10(%[u_buf]),%[u_buf]                          \n" \
+  "vpermq     $0xd8,%%ymm0,%%ymm0                              \n" \
+  "vpermq     $0xd8,%%ymm1,%%ymm1                              \n" \
+  "vpunpcklwd %%ymm1,%%ymm0,%%ymm0                             \n" \
+  "vpsraw     $0x2,%%ymm0,%%ymm0                               \n" \
+  "vpackuswb  %%ymm0,%%ymm0,%%ymm0                             \n" \
+  "vpunpcklwd %%ymm0,%%ymm0,%%ymm0                             \n" \
+  "vmovdqu    (%[y_buf]),%%ymm4                                \n" \
+  "vpsllw     $0x6,%%ymm4,%%ymm4                               \n" \
+  "lea        0x20(%[y_buf]),%[y_buf]                          \n"
 
 // Read 8 UV from 422, upsample to 16 UV.  With 16 Alpha.
-#define READYUVA422_AVX2                                                       \
-    "vmovq      " MEMACCESS([u_buf]) ",%%xmm0                       \n"        \
-    MEMOPREG(vmovq, 0x00, [u_buf], [v_buf], 1, xmm1)                           \
-    "lea        " MEMLEA(0x8, [u_buf]) ",%[u_buf]                   \n"        \
-    "vpunpcklbw %%ymm1,%%ymm0,%%ymm0                                \n"        \
-    "vpermq     $0xd8,%%ymm0,%%ymm0                                 \n"        \
-    "vpunpcklwd %%ymm0,%%ymm0,%%ymm0                                \n"        \
-    "vmovdqu    " MEMACCESS([y_buf]) ",%%xmm4                       \n"        \
-    "vpermq     $0xd8,%%ymm4,%%ymm4                                 \n"        \
-    "vpunpcklbw %%ymm4,%%ymm4,%%ymm4                                \n"        \
-    "lea        " MEMLEA(0x10, [y_buf]) ",%[y_buf]                  \n"        \
-    "vmovdqu    " MEMACCESS([a_buf]) ",%%xmm5                       \n"        \
-    "vpermq     $0xd8,%%ymm5,%%ymm5                                 \n"        \
-    "lea        " MEMLEA(0x10, [a_buf]) ",%[a_buf]                  \n"
-
-// Read 4 UV from 411, upsample to 16 UV.
-#define READYUV411_AVX2                                                        \
-    "vmovd      " MEMACCESS([u_buf]) ",%%xmm0                       \n"        \
-    MEMOPREG(vmovd, 0x00, [u_buf], [v_buf], 1, xmm1)                           \
-    "lea        " MEMLEA(0x4, [u_buf]) ",%[u_buf]                   \n"        \
-    "vpunpcklbw %%ymm1,%%ymm0,%%ymm0                                \n"        \
-    "vpunpcklwd %%ymm0,%%ymm0,%%ymm0                                \n"        \
-    "vpermq     $0xd8,%%ymm0,%%ymm0                                 \n"        \
-    "vpunpckldq %%ymm0,%%ymm0,%%ymm0                                \n"        \
-    "vmovdqu    " MEMACCESS([y_buf]) ",%%xmm4                       \n"        \
-    "vpermq     $0xd8,%%ymm4,%%ymm4                                 \n"        \
-    "vpunpcklbw %%ymm4,%%ymm4,%%ymm4                                \n"        \
-    "lea        " MEMLEA(0x10, [y_buf]) ",%[y_buf]                  \n"
+#define READYUVA422_AVX2                                              \
+  "vmovq      (%[u_buf]),%%xmm0                                   \n" \
+  "vmovq      0x00(%[u_buf],%[v_buf],1),%%xmm1                    \n" \
+  "lea        0x8(%[u_buf]),%[u_buf]                              \n" \
+  "vpunpcklbw %%ymm1,%%ymm0,%%ymm0                                \n" \
+  "vpermq     $0xd8,%%ymm0,%%ymm0                                 \n" \
+  "vpunpcklwd %%ymm0,%%ymm0,%%ymm0                                \n" \
+  "vmovdqu    (%[y_buf]),%%xmm4                                   \n" \
+  "vpermq     $0xd8,%%ymm4,%%ymm4                                 \n" \
+  "vpunpcklbw %%ymm4,%%ymm4,%%ymm4                                \n" \
+  "lea        0x10(%[y_buf]),%[y_buf]                             \n" \
+  "vmovdqu    (%[a_buf]),%%xmm5                                   \n" \
+  "vpermq     $0xd8,%%ymm5,%%ymm5                                 \n" \
+  "lea        0x10(%[a_buf]),%[a_buf]                             \n"
 
 // Read 8 UV from NV12, upsample to 16 UV.
-#define READNV12_AVX2                                                          \
-    "vmovdqu    " MEMACCESS([uv_buf]) ",%%xmm0                      \n"        \
-    "lea        " MEMLEA(0x10, [uv_buf]) ",%[uv_buf]                \n"        \
-    "vpermq     $0xd8,%%ymm0,%%ymm0                                 \n"        \
-    "vpunpcklwd %%ymm0,%%ymm0,%%ymm0                                \n"        \
-    "vmovdqu    " MEMACCESS([y_buf]) ",%%xmm4                       \n"        \
-    "vpermq     $0xd8,%%ymm4,%%ymm4                                 \n"        \
-    "vpunpcklbw %%ymm4,%%ymm4,%%ymm4                                \n"        \
-    "lea        " MEMLEA(0x10, [y_buf]) ",%[y_buf]                  \n"
+#define READNV12_AVX2                                                 \
+  "vmovdqu    (%[uv_buf]),%%xmm0                                  \n" \
+  "lea        0x10(%[uv_buf]),%[uv_buf]                           \n" \
+  "vpermq     $0xd8,%%ymm0,%%ymm0                                 \n" \
+  "vpunpcklwd %%ymm0,%%ymm0,%%ymm0                                \n" \
+  "vmovdqu    (%[y_buf]),%%xmm4                                   \n" \
+  "vpermq     $0xd8,%%ymm4,%%ymm4                                 \n" \
+  "vpunpcklbw %%ymm4,%%ymm4,%%ymm4                                \n" \
+  "lea        0x10(%[y_buf]),%[y_buf]                             \n"
 
 // Read 8 VU from NV21, upsample to 16 UV.
-#define READNV21_AVX2                                                          \
-    "vmovdqu    " MEMACCESS([vu_buf]) ",%%xmm0                      \n"        \
-    "lea        " MEMLEA(0x10, [vu_buf]) ",%[vu_buf]                \n"        \
-    "vpermq     $0xd8,%%ymm0,%%ymm0                                 \n"        \
-    "vpshufb     %[kShuffleNV21], %%ymm0, %%ymm0                    \n"        \
-    "vmovdqu    " MEMACCESS([y_buf]) ",%%xmm4                       \n"        \
-    "vpermq     $0xd8,%%ymm4,%%ymm4                                 \n"        \
-    "vpunpcklbw %%ymm4,%%ymm4,%%ymm4                                \n"        \
-    "lea        " MEMLEA(0x10, [y_buf]) ",%[y_buf]                  \n"
+#define READNV21_AVX2                                                 \
+  "vmovdqu    (%[vu_buf]),%%xmm0                                  \n" \
+  "lea        0x10(%[vu_buf]),%[vu_buf]                           \n" \
+  "vpermq     $0xd8,%%ymm0,%%ymm0                                 \n" \
+  "vpshufb     %[kShuffleNV21], %%ymm0, %%ymm0                    \n" \
+  "vmovdqu    (%[y_buf]),%%xmm4                                   \n" \
+  "vpermq     $0xd8,%%ymm4,%%ymm4                                 \n" \
+  "vpunpcklbw %%ymm4,%%ymm4,%%ymm4                                \n" \
+  "lea        0x10(%[y_buf]),%[y_buf]                             \n"
 
 // Read 8 YUY2 with 16 Y and upsample 8 UV to 16 UV.
-#define READYUY2_AVX2                                                          \
-    "vmovdqu    " MEMACCESS([yuy2_buf]) ",%%ymm4                    \n"        \
-    "vpshufb    %[kShuffleYUY2Y], %%ymm4, %%ymm4                    \n"        \
-    "vmovdqu    " MEMACCESS([yuy2_buf]) ",%%ymm0                    \n"        \
-    "vpshufb    %[kShuffleYUY2UV], %%ymm0, %%ymm0                   \n"        \
-    "lea        " MEMLEA(0x20, [yuy2_buf]) ",%[yuy2_buf]            \n"
+#define READYUY2_AVX2                                                 \
+  "vmovdqu    (%[yuy2_buf]),%%ymm4                                \n" \
+  "vpshufb    %[kShuffleYUY2Y], %%ymm4, %%ymm4                    \n" \
+  "vmovdqu    (%[yuy2_buf]),%%ymm0                                \n" \
+  "vpshufb    %[kShuffleYUY2UV], %%ymm0, %%ymm0                   \n" \
+  "lea        0x20(%[yuy2_buf]),%[yuy2_buf]                       \n"
 
 // Read 8 UYVY with 16 Y and upsample 8 UV to 16 UV.
-#define READUYVY_AVX2                                                          \
-    "vmovdqu     " MEMACCESS([uyvy_buf]) ",%%ymm4                   \n"        \
-    "vpshufb     %[kShuffleUYVYY], %%ymm4, %%ymm4                   \n"        \
-    "vmovdqu     " MEMACCESS([uyvy_buf]) ",%%ymm0                   \n"        \
-    "vpshufb     %[kShuffleUYVYUV], %%ymm0, %%ymm0                  \n"        \
-    "lea        " MEMLEA(0x20, [uyvy_buf]) ",%[uyvy_buf]            \n"
+#define READUYVY_AVX2                                                 \
+  "vmovdqu    (%[uyvy_buf]),%%ymm4                                \n" \
+  "vpshufb    %[kShuffleUYVYY], %%ymm4, %%ymm4                    \n" \
+  "vmovdqu    (%[uyvy_buf]),%%ymm0                                \n" \
+  "vpshufb    %[kShuffleUYVYUV], %%ymm0, %%ymm0                   \n" \
+  "lea        0x20(%[uyvy_buf]),%[uyvy_buf]                       \n"
 
 #if defined(__x86_64__)
-#define YUVTORGB_SETUP_AVX2(yuvconstants)                                      \
-    "vmovdqa     " MEMACCESS([yuvconstants]) ",%%ymm8            \n"           \
-    "vmovdqa     " MEMACCESS2(32, [yuvconstants]) ",%%ymm9       \n"           \
-    "vmovdqa     " MEMACCESS2(64, [yuvconstants]) ",%%ymm10      \n"           \
-    "vmovdqa     " MEMACCESS2(96, [yuvconstants]) ",%%ymm11      \n"           \
-    "vmovdqa     " MEMACCESS2(128, [yuvconstants]) ",%%ymm12     \n"           \
-    "vmovdqa     " MEMACCESS2(160, [yuvconstants]) ",%%ymm13     \n"           \
-    "vmovdqa     " MEMACCESS2(192, [yuvconstants]) ",%%ymm14     \n"
-#define YUVTORGB_AVX2(yuvconstants)                                            \
-    "vpmaddubsw  %%ymm10,%%ymm0,%%ymm2                              \n"        \
-    "vpmaddubsw  %%ymm9,%%ymm0,%%ymm1                               \n"        \
-    "vpmaddubsw  %%ymm8,%%ymm0,%%ymm0                               \n"        \
-    "vpsubw      %%ymm2,%%ymm13,%%ymm2                              \n"        \
-    "vpsubw      %%ymm1,%%ymm12,%%ymm1                              \n"        \
-    "vpsubw      %%ymm0,%%ymm11,%%ymm0                              \n"        \
-    "vpmulhuw    %%ymm14,%%ymm4,%%ymm4                              \n"        \
-    "vpaddsw     %%ymm4,%%ymm0,%%ymm0                               \n"        \
-    "vpaddsw     %%ymm4,%%ymm1,%%ymm1                               \n"        \
-    "vpaddsw     %%ymm4,%%ymm2,%%ymm2                               \n"        \
-    "vpsraw      $0x6,%%ymm0,%%ymm0                                 \n"        \
-    "vpsraw      $0x6,%%ymm1,%%ymm1                                 \n"        \
-    "vpsraw      $0x6,%%ymm2,%%ymm2                                 \n"        \
-    "vpackuswb   %%ymm0,%%ymm0,%%ymm0                               \n"        \
-    "vpackuswb   %%ymm1,%%ymm1,%%ymm1                               \n"        \
-    "vpackuswb   %%ymm2,%%ymm2,%%ymm2                               \n"
+#define YUVTORGB_SETUP_AVX2(yuvconstants)                            \
+  "vmovdqa     (%[yuvconstants]),%%ymm8                          \n" \
+  "vmovdqa     32(%[yuvconstants]),%%ymm9                        \n" \
+  "vmovdqa     64(%[yuvconstants]),%%ymm10                       \n" \
+  "vmovdqa     96(%[yuvconstants]),%%ymm11                       \n" \
+  "vmovdqa     128(%[yuvconstants]),%%ymm12                      \n" \
+  "vmovdqa     160(%[yuvconstants]),%%ymm13                      \n" \
+  "vmovdqa     192(%[yuvconstants]),%%ymm14                      \n"
+
+#define YUVTORGB16_AVX2(yuvconstants)                                 \
+  "vpmaddubsw  %%ymm10,%%ymm0,%%ymm2                              \n" \
+  "vpmaddubsw  %%ymm9,%%ymm0,%%ymm1                               \n" \
+  "vpmaddubsw  %%ymm8,%%ymm0,%%ymm0                               \n" \
+  "vpsubw      %%ymm2,%%ymm13,%%ymm2                              \n" \
+  "vpsubw      %%ymm1,%%ymm12,%%ymm1                              \n" \
+  "vpsubw      %%ymm0,%%ymm11,%%ymm0                              \n" \
+  "vpmulhuw    %%ymm14,%%ymm4,%%ymm4                              \n" \
+  "vpaddsw     %%ymm4,%%ymm0,%%ymm0                               \n" \
+  "vpaddsw     %%ymm4,%%ymm1,%%ymm1                               \n" \
+  "vpaddsw     %%ymm4,%%ymm2,%%ymm2                               \n"
+
 #define YUVTORGB_REGS_AVX2 \
-    "xmm8", "xmm9", "xmm10", "xmm11", "xmm12", "xmm13", "xmm14",
+  "xmm8", "xmm9", "xmm10", "xmm11", "xmm12", "xmm13", "xmm14",
+
 #else  // Convert 16 pixels: 16 UV and 16 Y.
+
 #define YUVTORGB_SETUP_AVX2(yuvconstants)
-#define YUVTORGB_AVX2(yuvconstants)                                            \
-    "vpmaddubsw  " MEMACCESS2(64, [yuvconstants]) ",%%ymm0,%%ymm2   \n"        \
-    "vpmaddubsw  " MEMACCESS2(32, [yuvconstants]) ",%%ymm0,%%ymm1   \n"        \
-    "vpmaddubsw  " MEMACCESS([yuvconstants]) ",%%ymm0,%%ymm0        \n"        \
-    "vmovdqu     " MEMACCESS2(160, [yuvconstants]) ",%%ymm3         \n"        \
-    "vpsubw      %%ymm2,%%ymm3,%%ymm2                               \n"        \
-    "vmovdqu     " MEMACCESS2(128, [yuvconstants]) ",%%ymm3         \n"        \
-    "vpsubw      %%ymm1,%%ymm3,%%ymm1                               \n"        \
-    "vmovdqu     " MEMACCESS2(96, [yuvconstants]) ",%%ymm3          \n"        \
-    "vpsubw      %%ymm0,%%ymm3,%%ymm0                               \n"        \
-    "vpmulhuw    " MEMACCESS2(192, [yuvconstants]) ",%%ymm4,%%ymm4  \n"        \
-    "vpaddsw     %%ymm4,%%ymm0,%%ymm0                               \n"        \
-    "vpaddsw     %%ymm4,%%ymm1,%%ymm1                               \n"        \
-    "vpaddsw     %%ymm4,%%ymm2,%%ymm2                               \n"        \
-    "vpsraw      $0x6,%%ymm0,%%ymm0                                 \n"        \
-    "vpsraw      $0x6,%%ymm1,%%ymm1                                 \n"        \
-    "vpsraw      $0x6,%%ymm2,%%ymm2                                 \n"        \
-    "vpackuswb   %%ymm0,%%ymm0,%%ymm0                               \n"        \
-    "vpackuswb   %%ymm1,%%ymm1,%%ymm1                               \n"        \
-    "vpackuswb   %%ymm2,%%ymm2,%%ymm2                               \n"
+#define YUVTORGB16_AVX2(yuvconstants)                                 \
+  "vpmaddubsw  64(%[yuvconstants]),%%ymm0,%%ymm2                  \n" \
+  "vpmaddubsw  32(%[yuvconstants]),%%ymm0,%%ymm1                  \n" \
+  "vpmaddubsw  (%[yuvconstants]),%%ymm0,%%ymm0                    \n" \
+  "vmovdqu     160(%[yuvconstants]),%%ymm3                        \n" \
+  "vpsubw      %%ymm2,%%ymm3,%%ymm2                               \n" \
+  "vmovdqu     128(%[yuvconstants]),%%ymm3                        \n" \
+  "vpsubw      %%ymm1,%%ymm3,%%ymm1                               \n" \
+  "vmovdqu     96(%[yuvconstants]),%%ymm3                         \n" \
+  "vpsubw      %%ymm0,%%ymm3,%%ymm0                               \n" \
+  "vpmulhuw    192(%[yuvconstants]),%%ymm4,%%ymm4                 \n" \
+  "vpaddsw     %%ymm4,%%ymm0,%%ymm0                               \n" \
+  "vpaddsw     %%ymm4,%%ymm1,%%ymm1                               \n" \
+  "vpaddsw     %%ymm4,%%ymm2,%%ymm2                               \n"
 #define YUVTORGB_REGS_AVX2
 #endif
 
+#define YUVTORGB_AVX2(yuvconstants)                                   \
+  YUVTORGB16_AVX2(yuvconstants)                                       \
+  "vpsraw      $0x6,%%ymm0,%%ymm0                                 \n" \
+  "vpsraw      $0x6,%%ymm1,%%ymm1                                 \n" \
+  "vpsraw      $0x6,%%ymm2,%%ymm2                                 \n" \
+  "vpackuswb   %%ymm0,%%ymm0,%%ymm0                               \n" \
+  "vpackuswb   %%ymm1,%%ymm1,%%ymm1                               \n" \
+  "vpackuswb   %%ymm2,%%ymm2,%%ymm2                               \n"
+
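The refactor above splits the old YUVTORGB_AVX2 into a 16-bit core (YUVTORGB16_AVX2, which keeps 6 fraction bits for the AR30 path) plus an 8-bit finishing step. That finishing step, sketched per channel in scalar form:

```python
def finish_8bit(chan16):
    # Matches the vpsraw $0x6 + vpackuswb tail of YUVTORGB_AVX2: drop the
    # 6 fraction bits with an arithmetic shift, then saturate the signed
    # 16-bit result to an unsigned byte.
    v = chan16 >> 6  # Python's >> on ints is arithmetic, like vpsraw
    return max(0, min(255, v))
```

The 10-bit AR30 path skips this step and instead consumes the 16-bit intermediates directly, which is why STOREAR30 does its own `$0x4` shift and clamp.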
 // Store 16 ARGB values.
-#define STOREARGB_AVX2                                                         \
-    "vpunpcklbw %%ymm1,%%ymm0,%%ymm0                                \n"        \
-    "vpermq     $0xd8,%%ymm0,%%ymm0                                 \n"        \
-    "vpunpcklbw %%ymm5,%%ymm2,%%ymm2                                \n"        \
-    "vpermq     $0xd8,%%ymm2,%%ymm2                                 \n"        \
-    "vpunpcklwd %%ymm2,%%ymm0,%%ymm1                                \n"        \
-    "vpunpckhwd %%ymm2,%%ymm0,%%ymm0                                \n"        \
-    "vmovdqu    %%ymm1," MEMACCESS([dst_argb]) "                    \n"        \
-    "vmovdqu    %%ymm0," MEMACCESS2(0x20, [dst_argb]) "             \n"        \
-    "lea       " MEMLEA(0x40, [dst_argb]) ", %[dst_argb]            \n"
+#define STOREARGB_AVX2                                                \
+  "vpunpcklbw %%ymm1,%%ymm0,%%ymm0                                \n" \
+  "vpermq     $0xd8,%%ymm0,%%ymm0                                 \n" \
+  "vpunpcklbw %%ymm5,%%ymm2,%%ymm2                                \n" \
+  "vpermq     $0xd8,%%ymm2,%%ymm2                                 \n" \
+  "vpunpcklwd %%ymm2,%%ymm0,%%ymm1                                \n" \
+  "vpunpckhwd %%ymm2,%%ymm0,%%ymm0                                \n" \
+  "vmovdqu    %%ymm1,(%[dst_argb])                                \n" \
+  "vmovdqu    %%ymm0,0x20(%[dst_argb])                            \n" \
+  "lea       0x40(%[dst_argb]), %[dst_argb]                       \n"
+
+// Store 16 AR30 values.
+#define STOREAR30_AVX2                                                \
+  "vpsraw     $0x4,%%ymm0,%%ymm0                                  \n" \
+  "vpsraw     $0x4,%%ymm1,%%ymm1                                  \n" \
+  "vpsraw     $0x4,%%ymm2,%%ymm2                                  \n" \
+  "vpminsw    %%ymm7,%%ymm0,%%ymm0                                \n" \
+  "vpminsw    %%ymm7,%%ymm1,%%ymm1                                \n" \
+  "vpminsw    %%ymm7,%%ymm2,%%ymm2                                \n" \
+  "vpmaxsw    %%ymm6,%%ymm0,%%ymm0                                \n" \
+  "vpmaxsw    %%ymm6,%%ymm1,%%ymm1                                \n" \
+  "vpmaxsw    %%ymm6,%%ymm2,%%ymm2                                \n" \
+  "vpsllw     $0x4,%%ymm2,%%ymm2                                  \n" \
+  "vpermq     $0xd8,%%ymm0,%%ymm0                                 \n" \
+  "vpermq     $0xd8,%%ymm1,%%ymm1                                 \n" \
+  "vpermq     $0xd8,%%ymm2,%%ymm2                                 \n" \
+  "vpunpckhwd %%ymm2,%%ymm0,%%ymm3                                \n" \
+  "vpunpcklwd %%ymm2,%%ymm0,%%ymm0                                \n" \
+  "vpunpckhwd %%ymm5,%%ymm1,%%ymm2                                \n" \
+  "vpunpcklwd %%ymm5,%%ymm1,%%ymm1                                \n" \
+  "vpslld     $0xa,%%ymm1,%%ymm1                                  \n" \
+  "vpslld     $0xa,%%ymm2,%%ymm2                                  \n" \
+  "vpor       %%ymm1,%%ymm0,%%ymm0                                \n" \
+  "vpor       %%ymm2,%%ymm3,%%ymm3                                \n" \
+  "vmovdqu    %%ymm0,(%[dst_ar30])                                \n" \
+  "vmovdqu    %%ymm3,0x20(%[dst_ar30])                            \n" \
+  "lea        0x40(%[dst_ar30]), %[dst_ar30]                      \n"
 
 #ifdef HAS_I444TOARGBROW_AVX2
 // 16 pixels
 // 16 UV values with 16 Y producing 16 ARGB (64 bytes).
-void OMITFP I444ToARGBRow_AVX2(const uint8* y_buf,
-                               const uint8* u_buf,
-                               const uint8* v_buf,
-                               uint8* dst_argb,
+void OMITFP I444ToARGBRow_AVX2(const uint8_t* y_buf,
+                               const uint8_t* u_buf,
+                               const uint8_t* v_buf,
+                               uint8_t* dst_argb,
                                const struct YuvConstants* yuvconstants,
                                int width) {
   asm volatile (
     YUVTORGB_SETUP_AVX2(yuvconstants)
     "sub       %[u_buf],%[v_buf]               \n"
     "vpcmpeqb  %%ymm5,%%ymm5,%%ymm5            \n"
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
     READYUV444_AVX2
     YUVTORGB_AVX2(yuvconstants)
     STOREARGB_AVX2
@@ -2157,65 +2607,34 @@
     [dst_argb]"+r"(dst_argb),  // %[dst_argb]
     [width]"+rm"(width)    // %[width]
   : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
-  : "memory", "cc", NACL_R14 YUVTORGB_REGS_AVX2
+  : "memory", "cc", YUVTORGB_REGS_AVX2
     "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
 }
 #endif  // HAS_I444TOARGBROW_AVX2
 
-#ifdef HAS_I411TOARGBROW_AVX2
-// 16 pixels
-// 4 UV values upsampled to 16 UV, mixed with 16 Y producing 16 ARGB (64 bytes).
-void OMITFP I411ToARGBRow_AVX2(const uint8* y_buf,
-                               const uint8* u_buf,
-                               const uint8* v_buf,
-                               uint8* dst_argb,
-                               const struct YuvConstants* yuvconstants,
-                               int width) {
-  asm volatile (
-    YUVTORGB_SETUP_AVX2(yuvconstants)
-    "sub       %[u_buf],%[v_buf]               \n"
-    "vpcmpeqb  %%ymm5,%%ymm5,%%ymm5            \n"
-    LABELALIGN
-  "1:                                          \n"
-    READYUV411_AVX2
-    YUVTORGB_AVX2(yuvconstants)
-    STOREARGB_AVX2
-    "sub       $0x10,%[width]                  \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : [y_buf]"+r"(y_buf),    // %[y_buf]
-    [u_buf]"+r"(u_buf),    // %[u_buf]
-    [v_buf]"+r"(v_buf),    // %[v_buf]
-    [dst_argb]"+r"(dst_argb),  // %[dst_argb]
-    [width]"+rm"(width)    // %[width]
-  : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
-  : "memory", "cc", NACL_R14 YUVTORGB_REGS_AVX2
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
-}
-#endif  // HAS_I411TOARGBROW_AVX2
-
 #if defined(HAS_I422TOARGBROW_AVX2)
 // 16 pixels
 // 8 UV values upsampled to 16 UV, mixed with 16 Y producing 16 ARGB (64 bytes).
-void OMITFP I422ToARGBRow_AVX2(const uint8* y_buf,
-                               const uint8* u_buf,
-                               const uint8* v_buf,
-                               uint8* dst_argb,
+void OMITFP I422ToARGBRow_AVX2(const uint8_t* y_buf,
+                               const uint8_t* u_buf,
+                               const uint8_t* v_buf,
+                               uint8_t* dst_argb,
                                const struct YuvConstants* yuvconstants,
                                int width) {
   asm volatile (
     YUVTORGB_SETUP_AVX2(yuvconstants)
     "sub       %[u_buf],%[v_buf]               \n"
     "vpcmpeqb  %%ymm5,%%ymm5,%%ymm5            \n"
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
     READYUV422_AVX2
     YUVTORGB_AVX2(yuvconstants)
     STOREARGB_AVX2
     "sub       $0x10,%[width]                  \n"
     "jg        1b                              \n"
+
     "vzeroupper                                \n"
   : [y_buf]"+r"(y_buf),    // %[y_buf]
     [u_buf]"+r"(u_buf),    // %[u_buf]
@@ -2223,27 +2642,144 @@
     [dst_argb]"+r"(dst_argb),  // %[dst_argb]
     [width]"+rm"(width)    // %[width]
   : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
-  : "memory", "cc", NACL_R14 YUVTORGB_REGS_AVX2
+  : "memory", "cc", YUVTORGB_REGS_AVX2
     "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
 }
 #endif  // HAS_I422TOARGBROW_AVX2
 
-#if defined(HAS_I422ALPHATOARGBROW_AVX2)
+#if defined(HAS_I422TOAR30ROW_AVX2)
 // 16 pixels
-// 8 UV values upsampled to 16 UV, mixed with 16 Y and 16 A producing 16 ARGB.
-void OMITFP I422AlphaToARGBRow_AVX2(const uint8* y_buf,
-                               const uint8* u_buf,
-                               const uint8* v_buf,
-                               const uint8* a_buf,
-                               uint8* dst_argb,
+// 8 UV values upsampled to 16 UV, mixed with 16 Y producing 16 AR30 (64 bytes).
+void OMITFP I422ToAR30Row_AVX2(const uint8_t* y_buf,
+                               const uint8_t* u_buf,
+                               const uint8_t* v_buf,
+                               uint8_t* dst_ar30,
                                const struct YuvConstants* yuvconstants,
                                int width) {
   asm volatile (
     YUVTORGB_SETUP_AVX2(yuvconstants)
     "sub       %[u_buf],%[v_buf]               \n"
+    "vpcmpeqb  %%ymm5,%%ymm5,%%ymm5            \n"  // AR30 constants
+    "vpsrlw    $14,%%ymm5,%%ymm5               \n"
+    "vpsllw    $4,%%ymm5,%%ymm5                \n"  // 2 alpha bits
+    "vpxor     %%ymm6,%%ymm6,%%ymm6            \n"  // 0 for min
+    "vpcmpeqb  %%ymm7,%%ymm7,%%ymm7            \n"  // 1023 for max
+    "vpsrlw    $6,%%ymm7,%%ymm7                \n"
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
+    READYUV422_AVX2
+    YUVTORGB16_AVX2(yuvconstants)
+    STOREAR30_AVX2
+    "sub       $0x10,%[width]                  \n"
+    "jg        1b                              \n"
+
+    "vzeroupper                                \n"
+  : [y_buf]"+r"(y_buf),    // %[y_buf]
+    [u_buf]"+r"(u_buf),    // %[u_buf]
+    [v_buf]"+r"(v_buf),    // %[v_buf]
+    [dst_ar30]"+r"(dst_ar30),  // %[dst_ar30]
+    [width]"+rm"(width)    // %[width]
+  : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
+  : "memory", "cc", YUVTORGB_REGS_AVX2
+    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
+  );
+}
+#endif  // HAS_I422TOAR30ROW_AVX2
+
+#if defined(HAS_I210TOARGBROW_AVX2)
+// 16 pixels
+// 8 UV values upsampled to 16 UV, mixed with 16 Y producing 16 ARGB (64 bytes).
+void OMITFP I210ToARGBRow_AVX2(const uint16_t* y_buf,
+                               const uint16_t* u_buf,
+                               const uint16_t* v_buf,
+                               uint8_t* dst_argb,
+                               const struct YuvConstants* yuvconstants,
+                               int width) {
+  asm volatile (
+    YUVTORGB_SETUP_AVX2(yuvconstants)
+    "sub       %[u_buf],%[v_buf]               \n"
+    "vpcmpeqb  %%ymm5,%%ymm5,%%ymm5            \n"
+
+    LABELALIGN
+    "1:                                        \n"
+    READYUV210_AVX2
+    YUVTORGB_AVX2(yuvconstants)
+    STOREARGB_AVX2
+    "sub       $0x10,%[width]                  \n"
+    "jg        1b                              \n"
+
+    "vzeroupper                                \n"
+  : [y_buf]"+r"(y_buf),    // %[y_buf]
+    [u_buf]"+r"(u_buf),    // %[u_buf]
+    [v_buf]"+r"(v_buf),    // %[v_buf]
+    [dst_argb]"+r"(dst_argb),  // %[dst_argb]
+    [width]"+rm"(width)    // %[width]
+  : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
+  : "memory", "cc", YUVTORGB_REGS_AVX2
+    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
+  );
+}
+#endif  // HAS_I210TOARGBROW_AVX2
+
+#if defined(HAS_I210TOAR30ROW_AVX2)
+// 16 pixels
+// 8 UV values upsampled to 16 UV, mixed with 16 Y producing 16 AR30 (64 bytes).
+void OMITFP I210ToAR30Row_AVX2(const uint16_t* y_buf,
+                               const uint16_t* u_buf,
+                               const uint16_t* v_buf,
+                               uint8_t* dst_ar30,
+                               const struct YuvConstants* yuvconstants,
+                               int width) {
+  asm volatile (
+    YUVTORGB_SETUP_AVX2(yuvconstants)
+    "sub       %[u_buf],%[v_buf]               \n"
+    "vpcmpeqb  %%ymm5,%%ymm5,%%ymm5            \n"  // AR30 constants
+    "vpsrlw    $14,%%ymm5,%%ymm5               \n"
+    "vpsllw    $4,%%ymm5,%%ymm5                \n"  // 2 alpha bits
+    "vpxor     %%ymm6,%%ymm6,%%ymm6            \n"  // 0 for min
+    "vpcmpeqb  %%ymm7,%%ymm7,%%ymm7            \n"  // 1023 for max
+    "vpsrlw    $6,%%ymm7,%%ymm7                \n"
+
+    LABELALIGN
+    "1:                                        \n"
+    READYUV210_AVX2
+    YUVTORGB16_AVX2(yuvconstants)
+    STOREAR30_AVX2
+    "sub       $0x10,%[width]                  \n"
+    "jg        1b                              \n"
+
+    "vzeroupper                                \n"
+  : [y_buf]"+r"(y_buf),    // %[y_buf]
+    [u_buf]"+r"(u_buf),    // %[u_buf]
+    [v_buf]"+r"(v_buf),    // %[v_buf]
+    [dst_ar30]"+r"(dst_ar30),  // %[dst_ar30]
+    [width]"+rm"(width)    // %[width]
+  : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
+  : "memory", "cc", YUVTORGB_REGS_AVX2
+    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
+  );
+}
+#endif  // HAS_I210TOAR30ROW_AVX2
+
+#if defined(HAS_I422ALPHATOARGBROW_AVX2)
+// 16 pixels
+// 8 UV values upsampled to 16 UV, mixed with 16 Y and 16 A producing 16 ARGB.
+void OMITFP I422AlphaToARGBRow_AVX2(const uint8_t* y_buf,
+                                    const uint8_t* u_buf,
+                                    const uint8_t* v_buf,
+                                    const uint8_t* a_buf,
+                                    uint8_t* dst_argb,
+                                    const struct YuvConstants* yuvconstants,
+                                    int width) {
+  // clang-format off
+  asm volatile (
+    YUVTORGB_SETUP_AVX2(yuvconstants)
+    "sub       %[u_buf],%[v_buf]               \n"
+
+    LABELALIGN
+    "1:                                        \n"
     READYUVA422_AVX2
     YUVTORGB_AVX2(yuvconstants)
     STOREARGB_AVX2
@@ -2255,33 +2791,35 @@
     [v_buf]"+r"(v_buf),    // %[v_buf]
     [a_buf]"+r"(a_buf),    // %[a_buf]
     [dst_argb]"+r"(dst_argb),  // %[dst_argb]
-#if defined(__i386__) && defined(__pic__)
+#if defined(__i386__)
     [width]"+m"(width)     // %[width]
 #else
     [width]"+rm"(width)    // %[width]
 #endif
   : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
-  : "memory", "cc", NACL_R14 YUVTORGB_REGS_AVX2
+  : "memory", "cc", YUVTORGB_REGS_AVX2
     "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
+  // clang-format on
 }
 #endif  // HAS_I422ALPHATOARGBROW_AVX2
 
 #if defined(HAS_I422TORGBAROW_AVX2)
 // 16 pixels
 // 8 UV values upsampled to 16 UV, mixed with 16 Y producing 16 RGBA (64 bytes).
-void OMITFP I422ToRGBARow_AVX2(const uint8* y_buf,
-                               const uint8* u_buf,
-                               const uint8* v_buf,
-                               uint8* dst_argb,
+void OMITFP I422ToRGBARow_AVX2(const uint8_t* y_buf,
+                               const uint8_t* u_buf,
+                               const uint8_t* v_buf,
+                               uint8_t* dst_argb,
                                const struct YuvConstants* yuvconstants,
                                int width) {
   asm volatile (
     YUVTORGB_SETUP_AVX2(yuvconstants)
     "sub       %[u_buf],%[v_buf]               \n"
     "vpcmpeqb   %%ymm5,%%ymm5,%%ymm5           \n"
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
     READYUV422_AVX2
     YUVTORGB_AVX2(yuvconstants)
 
@@ -2292,11 +2830,11 @@
     "vpermq     $0xd8,%%ymm2,%%ymm2            \n"
     "vpunpcklwd %%ymm1,%%ymm2,%%ymm0           \n"
     "vpunpckhwd %%ymm1,%%ymm2,%%ymm1           \n"
-    "vmovdqu    %%ymm0," MEMACCESS([dst_argb]) "\n"
-    "vmovdqu    %%ymm1," MEMACCESS2(0x20,[dst_argb]) "\n"
-    "lea       " MEMLEA(0x40,[dst_argb]) ",%[dst_argb] \n"
-    "sub       $0x10,%[width]                  \n"
-    "jg        1b                              \n"
+    "vmovdqu    %%ymm0,(%[dst_argb])           \n"
+    "vmovdqu    %%ymm1,0x20(%[dst_argb])       \n"
+    "lea        0x40(%[dst_argb]),%[dst_argb]  \n"
+    "sub        $0x10,%[width]                 \n"
+    "jg         1b                             \n"
     "vzeroupper                                \n"
   : [y_buf]"+r"(y_buf),    // %[y_buf]
     [u_buf]"+r"(u_buf),    // %[u_buf]
@@ -2304,7 +2842,7 @@
     [dst_argb]"+r"(dst_argb),  // %[dst_argb]
     [width]"+rm"(width)    // %[width]
   : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
-  : "memory", "cc", NACL_R14 YUVTORGB_REGS_AVX2
+  : "memory", "cc", YUVTORGB_REGS_AVX2
     "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
 }
@@ -2313,16 +2851,18 @@
 #if defined(HAS_NV12TOARGBROW_AVX2)
 // 16 pixels.
 // 8 UV values upsampled to 16 UV, mixed with 16 Y producing 16 ARGB (64 bytes).
-void OMITFP NV12ToARGBRow_AVX2(const uint8* y_buf,
-                               const uint8* uv_buf,
-                               uint8* dst_argb,
+void OMITFP NV12ToARGBRow_AVX2(const uint8_t* y_buf,
+                               const uint8_t* uv_buf,
+                               uint8_t* dst_argb,
                                const struct YuvConstants* yuvconstants,
                                int width) {
+  // clang-format off
   asm volatile (
     YUVTORGB_SETUP_AVX2(yuvconstants)
     "vpcmpeqb   %%ymm5,%%ymm5,%%ymm5           \n"
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
     READNV12_AVX2
     YUVTORGB_AVX2(yuvconstants)
     STOREARGB_AVX2
@@ -2334,25 +2874,28 @@
     [dst_argb]"+r"(dst_argb),  // %[dst_argb]
     [width]"+rm"(width)    // %[width]
   : [yuvconstants]"r"(yuvconstants)  // %[yuvconstants]
-    : "memory", "cc", YUVTORGB_REGS_AVX2  // Does not use r14.
+    : "memory", "cc", YUVTORGB_REGS_AVX2
     "xmm0", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
+  // clang-format on
 }
 #endif  // HAS_NV12TOARGBROW_AVX2
 
 #if defined(HAS_NV21TOARGBROW_AVX2)
 // 16 pixels.
 // 8 VU values upsampled to 16 UV, mixed with 16 Y producing 16 ARGB (64 bytes).
-void OMITFP NV21ToARGBRow_AVX2(const uint8* y_buf,
-                               const uint8* vu_buf,
-                               uint8* dst_argb,
+void OMITFP NV21ToARGBRow_AVX2(const uint8_t* y_buf,
+                               const uint8_t* vu_buf,
+                               uint8_t* dst_argb,
                                const struct YuvConstants* yuvconstants,
                                int width) {
+  // clang-format off
   asm volatile (
     YUVTORGB_SETUP_AVX2(yuvconstants)
     "vpcmpeqb   %%ymm5,%%ymm5,%%ymm5           \n"
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
     READNV21_AVX2
     YUVTORGB_AVX2(yuvconstants)
     STOREARGB_AVX2
@@ -2365,24 +2908,27 @@
     [width]"+rm"(width)    // %[width]
   : [yuvconstants]"r"(yuvconstants), // %[yuvconstants]
     [kShuffleNV21]"m"(kShuffleNV21)
-    : "memory", "cc", YUVTORGB_REGS_AVX2  // Does not use r14.
+    : "memory", "cc", YUVTORGB_REGS_AVX2
       "xmm0", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
+  // clang-format on
 }
 #endif  // HAS_NV21TOARGBROW_AVX2
 
 #if defined(HAS_YUY2TOARGBROW_AVX2)
 // 16 pixels.
 // 8 YUY2 values with 16 Y and 8 UV producing 16 ARGB (64 bytes).
-void OMITFP YUY2ToARGBRow_AVX2(const uint8* yuy2_buf,
-                               uint8* dst_argb,
+void OMITFP YUY2ToARGBRow_AVX2(const uint8_t* yuy2_buf,
+                               uint8_t* dst_argb,
                                const struct YuvConstants* yuvconstants,
                                int width) {
+  // clang-format off
   asm volatile (
     YUVTORGB_SETUP_AVX2(yuvconstants)
     "vpcmpeqb   %%ymm5,%%ymm5,%%ymm5           \n"
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
     READYUY2_AVX2
     YUVTORGB_AVX2(yuvconstants)
     STOREARGB_AVX2
@@ -2395,24 +2941,27 @@
   : [yuvconstants]"r"(yuvconstants), // %[yuvconstants]
     [kShuffleYUY2Y]"m"(kShuffleYUY2Y),
     [kShuffleYUY2UV]"m"(kShuffleYUY2UV)
-    : "memory", "cc", YUVTORGB_REGS_AVX2  // Does not use r14.
+    : "memory", "cc", YUVTORGB_REGS_AVX2
       "xmm0", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
+  // clang-format on
 }
 #endif  // HAS_YUY2TOARGBROW_AVX2
 
 #if defined(HAS_UYVYTOARGBROW_AVX2)
 // 16 pixels.
 // 8 UYVY values with 16 Y and 8 UV producing 16 ARGB (64 bytes).
-void OMITFP UYVYToARGBRow_AVX2(const uint8* uyvy_buf,
-                               uint8* dst_argb,
+void OMITFP UYVYToARGBRow_AVX2(const uint8_t* uyvy_buf,
+                               uint8_t* dst_argb,
                                const struct YuvConstants* yuvconstants,
                                int width) {
+  // clang-format off
   asm volatile (
     YUVTORGB_SETUP_AVX2(yuvconstants)
     "vpcmpeqb   %%ymm5,%%ymm5,%%ymm5           \n"
+
     LABELALIGN
-  "1:                                          \n"
+    "1:                                        \n"
     READUYVY_AVX2
     YUVTORGB_AVX2(yuvconstants)
     STOREARGB_AVX2
@@ -2425,1131 +2974,1603 @@
   : [yuvconstants]"r"(yuvconstants), // %[yuvconstants]
     [kShuffleUYVYY]"m"(kShuffleUYVYY),
     [kShuffleUYVYUV]"m"(kShuffleUYVYUV)
-    : "memory", "cc", YUVTORGB_REGS_AVX2  // Does not use r14.
+    : "memory", "cc", YUVTORGB_REGS_AVX2
       "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
   );
+  // clang-format on
 }
 #endif  // HAS_UYVYTOARGBROW_AVX2
 
 #ifdef HAS_I400TOARGBROW_SSE2
-void I400ToARGBRow_SSE2(const uint8* y_buf, uint8* dst_argb, int width) {
-  asm volatile (
-    "mov       $0x4a354a35,%%eax               \n"  // 4a35 = 18997 = 1.164
-    "movd      %%eax,%%xmm2                    \n"
-    "pshufd    $0x0,%%xmm2,%%xmm2              \n"
-    "mov       $0x04880488,%%eax               \n"  // 0488 = 1160 = 1.164 * 16
-    "movd      %%eax,%%xmm3                    \n"
-    "pshufd    $0x0,%%xmm3,%%xmm3              \n"
-    "pcmpeqb   %%xmm4,%%xmm4                   \n"
-    "pslld     $0x18,%%xmm4                    \n"
-    LABELALIGN
-  "1:                                          \n"
-    // Step 1: Scale Y contribution to 8 G values. G = (y - 16) * 1.164
-    "movq      " MEMACCESS(0) ",%%xmm0         \n"
-    "lea       " MEMLEA(0x8,0) ",%0            \n"
-    "punpcklbw %%xmm0,%%xmm0                   \n"
-    "pmulhuw   %%xmm2,%%xmm0                   \n"
-    "psubusw   %%xmm3,%%xmm0                   \n"
-    "psrlw     $6, %%xmm0                      \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
+void I400ToARGBRow_SSE2(const uint8_t* y_buf, uint8_t* dst_argb, int width) {
+  asm volatile(
+      "mov       $0x4a354a35,%%eax               \n"  // 4a35 = 18997 = 1.164
+      "movd      %%eax,%%xmm2                    \n"
+      "pshufd    $0x0,%%xmm2,%%xmm2              \n"
+      "mov       $0x04880488,%%eax               \n"  // 0488 = 1160 = 1.164 *
+                                                      // 16
+      "movd      %%eax,%%xmm3                    \n"
+      "pshufd    $0x0,%%xmm3,%%xmm3              \n"
+      "pcmpeqb   %%xmm4,%%xmm4                   \n"
+      "pslld     $0x18,%%xmm4                    \n"
 
-    // Step 2: Weave into ARGB
-    "punpcklbw %%xmm0,%%xmm0                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "punpcklwd %%xmm0,%%xmm0                   \n"
-    "punpckhwd %%xmm1,%%xmm1                   \n"
-    "por       %%xmm4,%%xmm0                   \n"
-    "por       %%xmm4,%%xmm1                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "movdqu    %%xmm1," MEMACCESS2(0x10,1) "   \n"
-    "lea       " MEMLEA(0x20,1) ",%1           \n"
+      LABELALIGN
+      "1:                                        \n"
+      // Step 1: Scale Y contribution to 8 G values. G = (y - 16) * 1.164
+      "movq      (%0),%%xmm0                     \n"
+      "lea       0x8(%0),%0                      \n"
+      "punpcklbw %%xmm0,%%xmm0                   \n"
+      "pmulhuw   %%xmm2,%%xmm0                   \n"
+      "psubusw   %%xmm3,%%xmm0                   \n"
+      "psrlw     $6, %%xmm0                      \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
 
-    "sub       $0x8,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(y_buf),     // %0
-    "+r"(dst_argb),  // %1
-    "+rm"(width)     // %2
-  :
-  : "memory", "cc", "eax"
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm4"
-  );
+      // Step 2: Weave into ARGB
+      "punpcklbw %%xmm0,%%xmm0                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "punpcklwd %%xmm0,%%xmm0                   \n"
+      "punpckhwd %%xmm1,%%xmm1                   \n"
+      "por       %%xmm4,%%xmm0                   \n"
+      "por       %%xmm4,%%xmm1                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "movdqu    %%xmm1,0x10(%1)                 \n"
+      "lea       0x20(%1),%1                     \n"
+
+      "sub       $0x8,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(y_buf),     // %0
+        "+r"(dst_argb),  // %1
+        "+rm"(width)     // %2
+      :
+      : "memory", "cc", "eax", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4");
 }
 #endif  // HAS_I400TOARGBROW_SSE2
 
 #ifdef HAS_I400TOARGBROW_AVX2
 // 16 pixels of Y converted to 16 pixels of ARGB (64 bytes).
 // note: vpunpcklbw mutates and vpackuswb unmutates.
-void I400ToARGBRow_AVX2(const uint8* y_buf, uint8* dst_argb, int width) {
-  asm volatile (
-    "mov        $0x4a354a35,%%eax              \n" // 0488 = 1160 = 1.164 * 16
-    "vmovd      %%eax,%%xmm2                   \n"
-    "vbroadcastss %%xmm2,%%ymm2                \n"
-    "mov        $0x4880488,%%eax               \n" // 4a35 = 18997 = 1.164
-    "vmovd      %%eax,%%xmm3                   \n"
-    "vbroadcastss %%xmm3,%%ymm3                \n"
-    "vpcmpeqb   %%ymm4,%%ymm4,%%ymm4           \n"
-    "vpslld     $0x18,%%ymm4,%%ymm4            \n"
+void I400ToARGBRow_AVX2(const uint8_t* y_buf, uint8_t* dst_argb, int width) {
+  asm volatile(
+      "mov        $0x4a354a35,%%eax              \n"  // 4a35 = 18997 = 1.164
+      "vmovd      %%eax,%%xmm2                   \n"
+      "vbroadcastss %%xmm2,%%ymm2                \n"
+      "mov        $0x4880488,%%eax               \n"  // 0488 = 1160 = 1.164 *
+                                                      // 16
+      "vmovd      %%eax,%%xmm3                   \n"
+      "vbroadcastss %%xmm3,%%ymm3                \n"
+      "vpcmpeqb   %%ymm4,%%ymm4,%%ymm4           \n"
+      "vpslld     $0x18,%%ymm4,%%ymm4            \n"
 
-    LABELALIGN
-  "1:                                          \n"
-    // Step 1: Scale Y contribution to 16 G values. G = (y - 16) * 1.164
-    "vmovdqu    " MEMACCESS(0) ",%%xmm0        \n"
-    "lea        " MEMLEA(0x10,0) ",%0          \n"
-    "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
-    "vpunpcklbw %%ymm0,%%ymm0,%%ymm0           \n"
-    "vpmulhuw   %%ymm2,%%ymm0,%%ymm0           \n"
-    "vpsubusw   %%ymm3,%%ymm0,%%ymm0           \n"
-    "vpsrlw     $0x6,%%ymm0,%%ymm0             \n"
-    "vpackuswb  %%ymm0,%%ymm0,%%ymm0           \n"
-    "vpunpcklbw %%ymm0,%%ymm0,%%ymm1           \n"
-    "vpermq     $0xd8,%%ymm1,%%ymm1            \n"
-    "vpunpcklwd %%ymm1,%%ymm1,%%ymm0           \n"
-    "vpunpckhwd %%ymm1,%%ymm1,%%ymm1           \n"
-    "vpor       %%ymm4,%%ymm0,%%ymm0           \n"
-    "vpor       %%ymm4,%%ymm1,%%ymm1           \n"
-    "vmovdqu    %%ymm0," MEMACCESS(1) "        \n"
-    "vmovdqu    %%ymm1," MEMACCESS2(0x20,1) "  \n"
-    "lea       " MEMLEA(0x40,1) ",%1           \n"
-    "sub        $0x10,%2                       \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(y_buf),     // %0
-    "+r"(dst_argb),  // %1
-    "+rm"(width)     // %2
-  :
-  : "memory", "cc", "eax"
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm4"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      // Step 1: Scale Y contribution to 16 G values. G = (y - 16) * 1.164
+      "vmovdqu    (%0),%%xmm0                    \n"
+      "lea        0x10(%0),%0                    \n"
+      "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
+      "vpunpcklbw %%ymm0,%%ymm0,%%ymm0           \n"
+      "vpmulhuw   %%ymm2,%%ymm0,%%ymm0           \n"
+      "vpsubusw   %%ymm3,%%ymm0,%%ymm0           \n"
+      "vpsrlw     $0x6,%%ymm0,%%ymm0             \n"
+      "vpackuswb  %%ymm0,%%ymm0,%%ymm0           \n"
+      "vpunpcklbw %%ymm0,%%ymm0,%%ymm1           \n"
+      "vpermq     $0xd8,%%ymm1,%%ymm1            \n"
+      "vpunpcklwd %%ymm1,%%ymm1,%%ymm0           \n"
+      "vpunpckhwd %%ymm1,%%ymm1,%%ymm1           \n"
+      "vpor       %%ymm4,%%ymm0,%%ymm0           \n"
+      "vpor       %%ymm4,%%ymm1,%%ymm1           \n"
+      "vmovdqu    %%ymm0,(%1)                    \n"
+      "vmovdqu    %%ymm1,0x20(%1)                \n"
+      "lea       0x40(%1),%1                     \n"
+      "sub        $0x10,%2                       \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(y_buf),     // %0
+        "+r"(dst_argb),  // %1
+        "+rm"(width)     // %2
+      :
+      : "memory", "cc", "eax", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4");
 }
 #endif  // HAS_I400TOARGBROW_AVX2
 
 #ifdef HAS_MIRRORROW_SSSE3
 // Shuffle table for reversing the bytes.
-static uvec8 kShuffleMirror = {
-  15u, 14u, 13u, 12u, 11u, 10u, 9u, 8u, 7u, 6u, 5u, 4u, 3u, 2u, 1u, 0u
-};
+static const uvec8 kShuffleMirror = {15u, 14u, 13u, 12u, 11u, 10u, 9u, 8u,
+                                     7u,  6u,  5u,  4u,  3u,  2u,  1u, 0u};
 
-void MirrorRow_SSSE3(const uint8* src, uint8* dst, int width) {
+void MirrorRow_SSSE3(const uint8_t* src, uint8_t* dst, int width) {
   intptr_t temp_width = (intptr_t)(width);
-  asm volatile (
-    "movdqa    %3,%%xmm5                       \n"
-    LABELALIGN
-  "1:                                          \n"
-    MEMOPREG(movdqu,-0x10,0,2,1,xmm0)          //  movdqu -0x10(%0,%2),%%xmm0
-    "pshufb    %%xmm5,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src),  // %0
-    "+r"(dst),  // %1
-    "+r"(temp_width)  // %2
-  : "m"(kShuffleMirror) // %3
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm5"
-  );
+  asm volatile(
+
+      "movdqa    %3,%%xmm5                       \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    -0x10(%0,%2,1),%%xmm0           \n"
+      "pshufb    %%xmm5,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src),           // %0
+        "+r"(dst),           // %1
+        "+r"(temp_width)     // %2
+      : "m"(kShuffleMirror)  // %3
+      : "memory", "cc", "xmm0", "xmm5");
 }
 #endif  // HAS_MIRRORROW_SSSE3
 
 #ifdef HAS_MIRRORROW_AVX2
-void MirrorRow_AVX2(const uint8* src, uint8* dst, int width) {
+void MirrorRow_AVX2(const uint8_t* src, uint8_t* dst, int width) {
   intptr_t temp_width = (intptr_t)(width);
-  asm volatile (
-    "vbroadcastf128 %3,%%ymm5                  \n"
-    LABELALIGN
-  "1:                                          \n"
-    MEMOPREG(vmovdqu,-0x20,0,2,1,ymm0)         //  vmovdqu -0x20(%0,%2),%%ymm0
-    "vpshufb    %%ymm5,%%ymm0,%%ymm0           \n"
-    "vpermq     $0x4e,%%ymm0,%%ymm0            \n"
-    "vmovdqu    %%ymm0," MEMACCESS(1) "        \n"
-    "lea       " MEMLEA(0x20,1) ",%1           \n"
-    "sub       $0x20,%2                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src),  // %0
-    "+r"(dst),  // %1
-    "+r"(temp_width)  // %2
-  : "m"(kShuffleMirror) // %3
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm5"
-  );
+  asm volatile(
+
+      "vbroadcastf128 %3,%%ymm5                  \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    -0x20(%0,%2,1),%%ymm0          \n"
+      "vpshufb    %%ymm5,%%ymm0,%%ymm0           \n"
+      "vpermq     $0x4e,%%ymm0,%%ymm0            \n"
+      "vmovdqu    %%ymm0,(%1)                    \n"
+      "lea       0x20(%1),%1                     \n"
+      "sub       $0x20,%2                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src),           // %0
+        "+r"(dst),           // %1
+        "+r"(temp_width)     // %2
+      : "m"(kShuffleMirror)  // %3
+      : "memory", "cc", "xmm0", "xmm5");
 }
 #endif  // HAS_MIRRORROW_AVX2
 
 #ifdef HAS_MIRRORUVROW_SSSE3
 // Shuffle table for reversing the bytes of UV channels.
-static uvec8 kShuffleMirrorUV = {
-  14u, 12u, 10u, 8u, 6u, 4u, 2u, 0u, 15u, 13u, 11u, 9u, 7u, 5u, 3u, 1u
-};
-void MirrorUVRow_SSSE3(const uint8* src, uint8* dst_u, uint8* dst_v,
+static const uvec8 kShuffleMirrorUV = {14u, 12u, 10u, 8u, 6u, 4u, 2u, 0u,
+                                       15u, 13u, 11u, 9u, 7u, 5u, 3u, 1u};
+void MirrorUVRow_SSSE3(const uint8_t* src,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
                        int width) {
   intptr_t temp_width = (intptr_t)(width);
-  asm volatile (
-    "movdqa    %4,%%xmm1                       \n"
-    "lea       " MEMLEA4(-0x10,0,3,2) ",%0     \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "lea       " MEMLEA(-0x10,0) ",%0          \n"
-    "pshufb    %%xmm1,%%xmm0                   \n"
-    "movlpd    %%xmm0," MEMACCESS(1) "         \n"
-    MEMOPMEM(movhpd,xmm0,0x00,1,2,1)           //  movhpd    %%xmm0,(%1,%2)
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $8,%3                           \n"
-    "jg        1b                              \n"
-  : "+r"(src),      // %0
-    "+r"(dst_u),    // %1
-    "+r"(dst_v),    // %2
-    "+r"(temp_width)  // %3
-  : "m"(kShuffleMirrorUV)  // %4
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1"
-  );
+  asm volatile(
+      "movdqa    %4,%%xmm1                       \n"
+      "lea       -0x10(%0,%3,2),%0               \n"
+      "sub       %1,%2                           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "lea       -0x10(%0),%0                    \n"
+      "pshufb    %%xmm1,%%xmm0                   \n"
+      "movlpd    %%xmm0,(%1)                     \n"
+      "movhpd    %%xmm0,0x00(%1,%2,1)            \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $8,%3                           \n"
+      "jg        1b                              \n"
+      : "+r"(src),             // %0
+        "+r"(dst_u),           // %1
+        "+r"(dst_v),           // %2
+        "+r"(temp_width)       // %3
+      : "m"(kShuffleMirrorUV)  // %4
+      : "memory", "cc", "xmm0", "xmm1");
 }
 #endif  // HAS_MIRRORUVROW_SSSE3
 
 #ifdef HAS_ARGBMIRRORROW_SSE2
 
-void ARGBMirrorRow_SSE2(const uint8* src, uint8* dst, int width) {
+void ARGBMirrorRow_SSE2(const uint8_t* src, uint8_t* dst, int width) {
   intptr_t temp_width = (intptr_t)(width);
-  asm volatile (
-    "lea       " MEMLEA4(-0x10,0,2,4) ",%0     \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "pshufd    $0x1b,%%xmm0,%%xmm0             \n"
-    "lea       " MEMLEA(-0x10,0) ",%0          \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x4,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src),  // %0
-    "+r"(dst),  // %1
-    "+r"(temp_width)  // %2
-  :
-  : "memory", "cc"
-    , "xmm0"
-  );
+  asm volatile(
+
+      "lea       -0x10(%0,%2,4),%0               \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "pshufd    $0x1b,%%xmm0,%%xmm0             \n"
+      "lea       -0x10(%0),%0                    \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x4,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src),        // %0
+        "+r"(dst),        // %1
+        "+r"(temp_width)  // %2
+      :
+      : "memory", "cc", "xmm0");
 }
 #endif  // HAS_ARGBMIRRORROW_SSE2
 
 #ifdef HAS_ARGBMIRRORROW_AVX2
 // Shuffle table for reversing the bytes.
-static const ulvec32 kARGBShuffleMirror_AVX2 = {
-  7u, 6u, 5u, 4u, 3u, 2u, 1u, 0u
-};
-void ARGBMirrorRow_AVX2(const uint8* src, uint8* dst, int width) {
+static const ulvec32 kARGBShuffleMirror_AVX2 = {7u, 6u, 5u, 4u, 3u, 2u, 1u, 0u};
+void ARGBMirrorRow_AVX2(const uint8_t* src, uint8_t* dst, int width) {
   intptr_t temp_width = (intptr_t)(width);
-  asm volatile (
-    "vmovdqu    %3,%%ymm5                      \n"
-    LABELALIGN
-  "1:                                          \n"
-    VMEMOPREG(vpermd,-0x20,0,2,4,ymm5,ymm0) // vpermd -0x20(%0,%2,4),ymm5,ymm0
-    "vmovdqu    %%ymm0," MEMACCESS(1) "        \n"
-    "lea        " MEMLEA(0x20,1) ",%1          \n"
-    "sub        $0x8,%2                        \n"
-    "jg         1b                             \n"
-    "vzeroupper                                \n"
-  : "+r"(src),  // %0
-    "+r"(dst),  // %1
-    "+r"(temp_width)  // %2
-  : "m"(kARGBShuffleMirror_AVX2) // %3
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm5"
-  );
+  asm volatile(
+
+      "vmovdqu    %3,%%ymm5                      \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vpermd    -0x20(%0,%2,4),%%ymm5,%%ymm0    \n"
+      "vmovdqu    %%ymm0,(%1)                    \n"
+      "lea        0x20(%1),%1                    \n"
+      "sub        $0x8,%2                        \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+      : "+r"(src),                    // %0
+        "+r"(dst),                    // %1
+        "+r"(temp_width)              // %2
+      : "m"(kARGBShuffleMirror_AVX2)  // %3
+      : "memory", "cc", "xmm0", "xmm5");
 }
 #endif  // HAS_ARGBMIRRORROW_AVX2
 
 #ifdef HAS_SPLITUVROW_AVX2
-void SplitUVRow_AVX2(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
+void SplitUVRow_AVX2(const uint8_t* src_uv,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
                      int width) {
-  asm volatile (
-    "vpcmpeqb   %%ymm5,%%ymm5,%%ymm5             \n"
-    "vpsrlw     $0x8,%%ymm5,%%ymm5               \n"
-    "sub        %1,%2                            \n"
-    LABELALIGN
-  "1:                                            \n"
-    "vmovdqu    " MEMACCESS(0) ",%%ymm0          \n"
-    "vmovdqu    " MEMACCESS2(0x20,0) ",%%ymm1    \n"
-    "lea        " MEMLEA(0x40,0) ",%0            \n"
-    "vpsrlw     $0x8,%%ymm0,%%ymm2               \n"
-    "vpsrlw     $0x8,%%ymm1,%%ymm3               \n"
-    "vpand      %%ymm5,%%ymm0,%%ymm0             \n"
-    "vpand      %%ymm5,%%ymm1,%%ymm1             \n"
-    "vpackuswb  %%ymm1,%%ymm0,%%ymm0             \n"
-    "vpackuswb  %%ymm3,%%ymm2,%%ymm2             \n"
-    "vpermq     $0xd8,%%ymm0,%%ymm0              \n"
-    "vpermq     $0xd8,%%ymm2,%%ymm2              \n"
-    "vmovdqu    %%ymm0," MEMACCESS(1) "          \n"
-    MEMOPMEM(vmovdqu,ymm2,0x00,1,2,1)             //  vmovdqu %%ymm2,(%1,%2)
-    "lea        " MEMLEA(0x20,1) ",%1            \n"
-    "sub        $0x20,%3                         \n"
-    "jg         1b                               \n"
-    "vzeroupper                                  \n"
-  : "+r"(src_uv),     // %0
-    "+r"(dst_u),      // %1
-    "+r"(dst_v),      // %2
-    "+r"(width)         // %3
-  :
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm5"
-  );
+  asm volatile(
+      "vpcmpeqb   %%ymm5,%%ymm5,%%ymm5           \n"
+      "vpsrlw     $0x8,%%ymm5,%%ymm5             \n"
+      "sub        %1,%2                          \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "vmovdqu    0x20(%0),%%ymm1                \n"
+      "lea        0x40(%0),%0                    \n"
+      "vpsrlw     $0x8,%%ymm0,%%ymm2             \n"
+      "vpsrlw     $0x8,%%ymm1,%%ymm3             \n"
+      "vpand      %%ymm5,%%ymm0,%%ymm0           \n"
+      "vpand      %%ymm5,%%ymm1,%%ymm1           \n"
+      "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
+      "vpackuswb  %%ymm3,%%ymm2,%%ymm2           \n"
+      "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
+      "vpermq     $0xd8,%%ymm2,%%ymm2            \n"
+      "vmovdqu    %%ymm0,(%1)                    \n"
+      "vmovdqu    %%ymm2,0x00(%1,%2,1)           \n"
+      "lea        0x20(%1),%1                    \n"
+      "sub        $0x20,%3                       \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+      : "+r"(src_uv),  // %0
+        "+r"(dst_u),   // %1
+        "+r"(dst_v),   // %2
+        "+r"(width)    // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5");
 }
 #endif  // HAS_SPLITUVROW_AVX2
 
 #ifdef HAS_SPLITUVROW_SSE2
-void SplitUVRow_SSE2(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
+void SplitUVRow_SSE2(const uint8_t* src_uv,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
                      int width) {
-  asm volatile (
-    "pcmpeqb    %%xmm5,%%xmm5                    \n"
-    "psrlw      $0x8,%%xmm5                      \n"
-    "sub        %1,%2                            \n"
-    LABELALIGN
-  "1:                                            \n"
-    "movdqu     " MEMACCESS(0) ",%%xmm0          \n"
-    "movdqu     " MEMACCESS2(0x10,0) ",%%xmm1    \n"
-    "lea        " MEMLEA(0x20,0) ",%0            \n"
-    "movdqa     %%xmm0,%%xmm2                    \n"
-    "movdqa     %%xmm1,%%xmm3                    \n"
-    "pand       %%xmm5,%%xmm0                    \n"
-    "pand       %%xmm5,%%xmm1                    \n"
-    "packuswb   %%xmm1,%%xmm0                    \n"
-    "psrlw      $0x8,%%xmm2                      \n"
-    "psrlw      $0x8,%%xmm3                      \n"
-    "packuswb   %%xmm3,%%xmm2                    \n"
-    "movdqu     %%xmm0," MEMACCESS(1) "          \n"
-    MEMOPMEM(movdqu,xmm2,0x00,1,2,1)             //  movdqu     %%xmm2,(%1,%2)
-    "lea        " MEMLEA(0x10,1) ",%1            \n"
-    "sub        $0x10,%3                         \n"
-    "jg         1b                               \n"
-  : "+r"(src_uv),     // %0
-    "+r"(dst_u),      // %1
-    "+r"(dst_v),      // %2
-    "+r"(width)         // %3
-  :
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm5"
-  );
+  asm volatile(
+      "pcmpeqb    %%xmm5,%%xmm5                  \n"
+      "psrlw      $0x8,%%xmm5                    \n"
+      "sub        %1,%2                          \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu     (%0),%%xmm0                    \n"
+      "movdqu     0x10(%0),%%xmm1                \n"
+      "lea        0x20(%0),%0                    \n"
+      "movdqa     %%xmm0,%%xmm2                  \n"
+      "movdqa     %%xmm1,%%xmm3                  \n"
+      "pand       %%xmm5,%%xmm0                  \n"
+      "pand       %%xmm5,%%xmm1                  \n"
+      "packuswb   %%xmm1,%%xmm0                  \n"
+      "psrlw      $0x8,%%xmm2                    \n"
+      "psrlw      $0x8,%%xmm3                    \n"
+      "packuswb   %%xmm3,%%xmm2                  \n"
+      "movdqu     %%xmm0,(%1)                    \n"
+      "movdqu     %%xmm2,0x00(%1,%2,1)           \n"
+      "lea        0x10(%1),%1                    \n"
+      "sub        $0x10,%3                       \n"
+      "jg         1b                             \n"
+      : "+r"(src_uv),  // %0
+        "+r"(dst_u),   // %1
+        "+r"(dst_v),   // %2
+        "+r"(width)    // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5");
 }
 #endif  // HAS_SPLITUVROW_SSE2
 
 #ifdef HAS_MERGEUVROW_AVX2
-void MergeUVRow_AVX2(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
+void MergeUVRow_AVX2(const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* dst_uv,
                      int width) {
-  asm volatile (
-    "sub       %0,%1                             \n"
-    LABELALIGN
-  "1:                                            \n"
-    "vmovdqu   " MEMACCESS(0) ",%%ymm0           \n"
-    MEMOPREG(vmovdqu,0x00,0,1,1,ymm1)             //  vmovdqu (%0,%1,1),%%ymm1
-    "lea       " MEMLEA(0x20,0) ",%0             \n"
-    "vpunpcklbw %%ymm1,%%ymm0,%%ymm2             \n"
-    "vpunpckhbw %%ymm1,%%ymm0,%%ymm0             \n"
-    "vextractf128 $0x0,%%ymm2," MEMACCESS(2) "   \n"
-    "vextractf128 $0x0,%%ymm0," MEMACCESS2(0x10,2) "\n"
-    "vextractf128 $0x1,%%ymm2," MEMACCESS2(0x20,2) "\n"
-    "vextractf128 $0x1,%%ymm0," MEMACCESS2(0x30,2) "\n"
-    "lea       " MEMLEA(0x40,2) ",%2             \n"
-    "sub       $0x20,%3                          \n"
-    "jg        1b                                \n"
-    "vzeroupper                                  \n"
-  : "+r"(src_u),     // %0
-    "+r"(src_v),     // %1
-    "+r"(dst_uv),    // %2
-    "+r"(width)      // %3
-  :
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2"
-  );
+  asm volatile(
+
+      "sub       %0,%1                           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu   (%0),%%ymm0                     \n"
+      "vmovdqu    0x00(%0,%1,1),%%ymm1           \n"
+      "lea       0x20(%0),%0                     \n"
+      "vpunpcklbw %%ymm1,%%ymm0,%%ymm2           \n"
+      "vpunpckhbw %%ymm1,%%ymm0,%%ymm0           \n"
+      "vextractf128 $0x0,%%ymm2,(%2)             \n"
+      "vextractf128 $0x0,%%ymm0,0x10(%2)         \n"
+      "vextractf128 $0x1,%%ymm2,0x20(%2)         \n"
+      "vextractf128 $0x1,%%ymm0,0x30(%2)         \n"
+      "lea       0x40(%2),%2                     \n"
+      "sub       $0x20,%3                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_u),   // %0
+        "+r"(src_v),   // %1
+        "+r"(dst_uv),  // %2
+        "+r"(width)    // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2");
 }
 #endif  // HAS_MERGEUVROW_AVX2
 
 #ifdef HAS_MERGEUVROW_SSE2
-void MergeUVRow_SSE2(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
+void MergeUVRow_SSE2(const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* dst_uv,
                      int width) {
-  asm volatile (
-    "sub       %0,%1                             \n"
-    LABELALIGN
-  "1:                                            \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0           \n"
-    MEMOPREG(movdqu,0x00,0,1,1,xmm1)             //  movdqu    (%0,%1,1),%%xmm1
-    "lea       " MEMLEA(0x10,0) ",%0             \n"
-    "movdqa    %%xmm0,%%xmm2                     \n"
-    "punpcklbw %%xmm1,%%xmm0                     \n"
-    "punpckhbw %%xmm1,%%xmm2                     \n"
-    "movdqu    %%xmm0," MEMACCESS(2) "           \n"
-    "movdqu    %%xmm2," MEMACCESS2(0x10,2) "     \n"
-    "lea       " MEMLEA(0x20,2) ",%2             \n"
-    "sub       $0x10,%3                          \n"
-    "jg        1b                                \n"
-  : "+r"(src_u),     // %0
-    "+r"(src_v),     // %1
-    "+r"(dst_uv),    // %2
-    "+r"(width)      // %3
-  :
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2"
-  );
+  asm volatile(
+
+      "sub       %0,%1                           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x00(%0,%1,1),%%xmm1            \n"
+      "lea       0x10(%0),%0                     \n"
+      "movdqa    %%xmm0,%%xmm2                   \n"
+      "punpcklbw %%xmm1,%%xmm0                   \n"
+      "punpckhbw %%xmm1,%%xmm2                   \n"
+      "movdqu    %%xmm0,(%2)                     \n"
+      "movdqu    %%xmm2,0x10(%2)                 \n"
+      "lea       0x20(%2),%2                     \n"
+      "sub       $0x10,%3                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_u),   // %0
+        "+r"(src_v),   // %1
+        "+r"(dst_uv),  // %2
+        "+r"(width)    // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2");
 }
 #endif  // HAS_MERGEUVROW_SSE2
 
-#ifdef HAS_COPYROW_SSE2
-void CopyRow_SSE2(const uint8* src, uint8* dst, int count) {
+// Use scale to convert lsb formats to msb, depending on how many bits there are:
+// 128 = 9 bits
+// 64 = 10 bits
+// 16 = 12 bits
+// 1 = 16 bits
+#ifdef HAS_MERGEUVROW_16_AVX2
+void MergeUVRow_16_AVX2(const uint16_t* src_u,
+                        const uint16_t* src_v,
+                        uint16_t* dst_uv,
+                        int scale,
+                        int width) {
+  // clang-format off
   asm volatile (
-    "test       $0xf,%0                        \n"
-    "jne        2f                             \n"
-    "test       $0xf,%1                        \n"
-    "jne        2f                             \n"
+    "vmovd      %4,%%xmm3                      \n"
+    "vpunpcklwd %%xmm3,%%xmm3,%%xmm3           \n"
+    "vbroadcastss %%xmm3,%%ymm3                \n"
+    "sub       %0,%1                           \n"
+
+    // 16 pixels per loop.
     LABELALIGN
-  "1:                                          \n"
-    "movdqa    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqa    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "movdqa    %%xmm0," MEMACCESS(1) "         \n"
-    "movdqa    %%xmm1," MEMACCESS2(0x10,1) "   \n"
-    "lea       " MEMLEA(0x20,1) ",%1           \n"
+    "1:                                        \n"
+    "vmovdqu   (%0),%%ymm0                     \n"
+    "vmovdqu   (%0,%1,1),%%ymm1                \n"
+    "add        $0x20,%0                       \n"
+
+    "vpmullw   %%ymm3,%%ymm0,%%ymm0            \n"
+    "vpmullw   %%ymm3,%%ymm1,%%ymm1            \n"
+    "vpunpcklwd %%ymm1,%%ymm0,%%ymm2           \n"  // mutates
+    "vpunpckhwd %%ymm1,%%ymm0,%%ymm0           \n"
+    "vextractf128 $0x0,%%ymm2,(%2)             \n"
+    "vextractf128 $0x0,%%ymm0,0x10(%2)         \n"
+    "vextractf128 $0x1,%%ymm2,0x20(%2)         \n"
+    "vextractf128 $0x1,%%ymm0,0x30(%2)         \n"
+    "add       $0x40,%2                        \n"
+    "sub       $0x10,%3                        \n"
+    "jg        1b                              \n"
+    "vzeroupper                                \n"
+  : "+r"(src_u),   // %0
+    "+r"(src_v),   // %1
+    "+r"(dst_uv),  // %2
+    "+r"(width)    // %3
+  : "r"(scale)     // %4
+  : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3");
+  // clang-format on
+}
+#endif  // HAS_MERGEUVROW_16_AVX2
+
+// Use scale to convert lsb formats to msb, depending on how many bits there are:
+// 128 = 9 bits
+// 64 = 10 bits
+// 16 = 12 bits
+// 1 = 16 bits
+#ifdef HAS_MULTIPLYROW_16_AVX2
+void MultiplyRow_16_AVX2(const uint16_t* src_y,
+                         uint16_t* dst_y,
+                         int scale,
+                         int width) {
+  // clang-format off
+  asm volatile (
+    "vmovd      %3,%%xmm3                      \n"
+    "vpunpcklwd %%xmm3,%%xmm3,%%xmm3           \n"
+    "vbroadcastss %%xmm3,%%ymm3                \n"
+    "sub       %0,%1                           \n"
+
+    // 32 pixels per loop.
+    LABELALIGN
+    "1:                                        \n"
+    "vmovdqu   (%0),%%ymm0                     \n"
+    "vmovdqu   0x20(%0),%%ymm1                 \n"
+    "vpmullw   %%ymm3,%%ymm0,%%ymm0            \n"
+    "vpmullw   %%ymm3,%%ymm1,%%ymm1            \n"
+    "vmovdqu   %%ymm0,(%0,%1)                  \n"
+    "vmovdqu   %%ymm1,0x20(%0,%1)              \n"
+    "add        $0x40,%0                       \n"
     "sub       $0x20,%2                        \n"
     "jg        1b                              \n"
-    "jmp       9f                              \n"
+    "vzeroupper                                \n"
+  : "+r"(src_y),   // %0
+    "+r"(dst_y),   // %1
+    "+r"(width)    // %2
+  : "r"(scale)     // %3
+  : "memory", "cc", "xmm0", "xmm1", "xmm3");
+  // clang-format on
+}
+#endif  // HAS_MULTIPLYROW_16_AVX2
+
+// Use scale to convert lsb formats to msb, depending on how many bits there are:
+// 32768 = 9 bits
+// 16384 = 10 bits
+// 4096 = 12 bits
+// 256 = 16 bits
+void Convert16To8Row_SSSE3(const uint16_t* src_y,
+                           uint8_t* dst_y,
+                           int scale,
+                           int width) {
+  // clang-format off
+  asm volatile (
+    "movd      %3,%%xmm2                      \n"
+    "punpcklwd %%xmm2,%%xmm2                  \n"
+    "pshufd    $0x0,%%xmm2,%%xmm2             \n"
+
+    // 16 pixels per loop.
     LABELALIGN
-  "2:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "movdqu    %%xmm1," MEMACCESS2(0x10,1) "   \n"
-    "lea       " MEMLEA(0x20,1) ",%1           \n"
+    "1:                                       \n"
+    "movdqu    (%0),%%xmm0                    \n"
+    "movdqu    0x10(%0),%%xmm1                \n"
+    "add       $0x20,%0                       \n"
+    "pmulhuw   %%xmm2,%%xmm0                  \n"
+    "pmulhuw   %%xmm2,%%xmm1                  \n"
+    "packuswb  %%xmm1,%%xmm0                  \n"
+    "movdqu    %%xmm0,(%1)                    \n"
+    "add       $0x10,%1                       \n"
+    "sub       $0x10,%2                       \n"
+    "jg        1b                             \n"
+  : "+r"(src_y),   // %0
+    "+r"(dst_y),   // %1
+    "+r"(width)    // %2
+  : "r"(scale)     // %3
+  : "memory", "cc", "xmm0", "xmm1", "xmm2");
+  // clang-format on
+}
+
+#ifdef HAS_CONVERT16TO8ROW_AVX2
+void Convert16To8Row_AVX2(const uint16_t* src_y,
+                          uint8_t* dst_y,
+                          int scale,
+                          int width) {
+  // clang-format off
+  asm volatile (
+    "vmovd      %3,%%xmm2                      \n"
+    "vpunpcklwd %%xmm2,%%xmm2,%%xmm2           \n"
+    "vbroadcastss %%xmm2,%%ymm2                \n"
+
+    // 32 pixels per loop.
+    LABELALIGN
+    "1:                                        \n"
+    "vmovdqu   (%0),%%ymm0                     \n"
+    "vmovdqu   0x20(%0),%%ymm1                 \n"
+    "add       $0x40,%0                        \n"
+    "vpmulhuw  %%ymm2,%%ymm0,%%ymm0            \n"
+    "vpmulhuw  %%ymm2,%%ymm1,%%ymm1            \n"
+    "vpackuswb %%ymm1,%%ymm0,%%ymm0            \n"  // mutates
+    "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
+    "vmovdqu   %%ymm0,(%1)                     \n"
+    "add       $0x20,%1                        \n"
     "sub       $0x20,%2                        \n"
-    "jg        2b                              \n"
-  "9:                                          \n"
-  : "+r"(src),   // %0
-    "+r"(dst),   // %1
-    "+r"(count)  // %2
-  :
-  : "memory", "cc"
-    , "xmm0", "xmm1"
-  );
+    "jg        1b                              \n"
+    "vzeroupper                                \n"
+  : "+r"(src_y),   // %0
+    "+r"(dst_y),   // %1
+    "+r"(width)    // %2
+  : "r"(scale)     // %3
+  : "memory", "cc", "xmm0", "xmm1", "xmm2");
+  // clang-format on
+}
+#endif  // HAS_CONVERT16TO8ROW_AVX2
+
+// Use scale to convert to lsb formats depending on how many bits there are:
+// 512 = 9 bits
+// 1024 = 10 bits
+// 4096 = 12 bits
+// TODO(fbarchard): reduce to SSE2
+void Convert8To16Row_SSE2(const uint8_t* src_y,
+                          uint16_t* dst_y,
+                          int scale,
+                          int width) {
+  // clang-format off
+  asm volatile (
+    "movd      %3,%%xmm2                      \n"
+    "punpcklwd %%xmm2,%%xmm2                  \n"
+    "pshufd    $0x0,%%xmm2,%%xmm2             \n"
+
+    // 16 pixels per loop.
+    LABELALIGN
+    "1:                                       \n"
+    "movdqu    (%0),%%xmm0                    \n"
+    "movdqa    %%xmm0,%%xmm1                  \n"
+    "punpcklbw %%xmm0,%%xmm0                  \n"
+    "punpckhbw %%xmm1,%%xmm1                  \n"
+    "add       $0x10,%0                       \n"
+    "pmulhuw   %%xmm2,%%xmm0                  \n"
+    "pmulhuw   %%xmm2,%%xmm1                  \n"
+    "movdqu    %%xmm0,(%1)                    \n"
+    "movdqu    %%xmm1,0x10(%1)                \n"
+    "add       $0x20,%1                       \n"
+    "sub       $0x10,%2                       \n"
+    "jg        1b                             \n"
+  : "+r"(src_y),   // %0
+    "+r"(dst_y),   // %1
+    "+r"(width)    // %2
+  : "r"(scale)     // %3
+  : "memory", "cc", "xmm0", "xmm1", "xmm2");
+  // clang-format on
+}
+
+#ifdef HAS_CONVERT8TO16ROW_AVX2
+void Convert8To16Row_AVX2(const uint8_t* src_y,
+                          uint16_t* dst_y,
+                          int scale,
+                          int width) {
+  // clang-format off
+  asm volatile (
+    "vmovd      %3,%%xmm2                      \n"
+    "vpunpcklwd %%xmm2,%%xmm2,%%xmm2           \n"
+    "vbroadcastss %%xmm2,%%ymm2                \n"
+
+    // 32 pixels per loop.
+    LABELALIGN
+    "1:                                        \n"
+    "vmovdqu   (%0),%%ymm0                     \n"
+    "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
+    "add       $0x20,%0                        \n"
+    "vpunpckhbw %%ymm0,%%ymm0,%%ymm1           \n"
+    "vpunpcklbw %%ymm0,%%ymm0,%%ymm0           \n"
+    "vpmulhuw  %%ymm2,%%ymm0,%%ymm0            \n"
+    "vpmulhuw  %%ymm2,%%ymm1,%%ymm1            \n"
+    "vmovdqu   %%ymm0,(%1)                     \n"
+    "vmovdqu   %%ymm1,0x20(%1)                 \n"
+    "add       $0x40,%1                        \n"
+    "sub       $0x20,%2                        \n"
+    "jg        1b                              \n"
+    "vzeroupper                                \n"
+  : "+r"(src_y),   // %0
+    "+r"(dst_y),   // %1
+    "+r"(width)    // %2
+  : "r"(scale)     // %3
+  : "memory", "cc", "xmm0", "xmm1", "xmm2");
+  // clang-format on
+}
+#endif  // HAS_CONVERT8TO16ROW_AVX2
+
+#ifdef HAS_SPLITRGBROW_SSSE3
+
+// Shuffle table for converting RGB to Planar.
+static const uvec8 kShuffleMaskRGBToR0 = {0u,   3u,   6u,   9u,   12u,  15u,
+                                          128u, 128u, 128u, 128u, 128u, 128u,
+                                          128u, 128u, 128u, 128u};
+static const uvec8 kShuffleMaskRGBToR1 = {128u, 128u, 128u, 128u, 128u, 128u,
+                                          2u,   5u,   8u,   11u,  14u,  128u,
+                                          128u, 128u, 128u, 128u};
+static const uvec8 kShuffleMaskRGBToR2 = {128u, 128u, 128u, 128u, 128u, 128u,
+                                          128u, 128u, 128u, 128u, 128u, 1u,
+                                          4u,   7u,   10u,  13u};
+
+static const uvec8 kShuffleMaskRGBToG0 = {1u,   4u,   7u,   10u,  13u,  128u,
+                                          128u, 128u, 128u, 128u, 128u, 128u,
+                                          128u, 128u, 128u, 128u};
+static const uvec8 kShuffleMaskRGBToG1 = {128u, 128u, 128u, 128u, 128u, 0u,
+                                          3u,   6u,   9u,   12u,  15u,  128u,
+                                          128u, 128u, 128u, 128u};
+static const uvec8 kShuffleMaskRGBToG2 = {128u, 128u, 128u, 128u, 128u, 128u,
+                                          128u, 128u, 128u, 128u, 128u, 2u,
+                                          5u,   8u,   11u,  14u};
+
+static const uvec8 kShuffleMaskRGBToB0 = {2u,   5u,   8u,   11u,  14u,  128u,
+                                          128u, 128u, 128u, 128u, 128u, 128u,
+                                          128u, 128u, 128u, 128u};
+static const uvec8 kShuffleMaskRGBToB1 = {128u, 128u, 128u, 128u, 128u, 1u,
+                                          4u,   7u,   10u,  13u,  128u, 128u,
+                                          128u, 128u, 128u, 128u};
+static const uvec8 kShuffleMaskRGBToB2 = {128u, 128u, 128u, 128u, 128u, 128u,
+                                          128u, 128u, 128u, 128u, 0u,   3u,
+                                          6u,   9u,   12u,  15u};
+
+void SplitRGBRow_SSSE3(const uint8_t* src_rgb,
+                       uint8_t* dst_r,
+                       uint8_t* dst_g,
+                       uint8_t* dst_b,
+                       int width) {
+  asm volatile(
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu     (%0),%%xmm0                    \n"
+      "movdqu     0x10(%0),%%xmm1                \n"
+      "movdqu     0x20(%0),%%xmm2                \n"
+      "pshufb     %5, %%xmm0                     \n"
+      "pshufb     %6, %%xmm1                     \n"
+      "pshufb     %7, %%xmm2                     \n"
+      "por        %%xmm1,%%xmm0                  \n"
+      "por        %%xmm2,%%xmm0                  \n"
+      "movdqu     %%xmm0,(%1)                    \n"
+      "lea        0x10(%1),%1                    \n"
+
+      "movdqu     (%0),%%xmm0                    \n"
+      "movdqu     0x10(%0),%%xmm1                \n"
+      "movdqu     0x20(%0),%%xmm2                \n"
+      "pshufb     %8, %%xmm0                     \n"
+      "pshufb     %9, %%xmm1                     \n"
+      "pshufb     %10, %%xmm2                    \n"
+      "por        %%xmm1,%%xmm0                  \n"
+      "por        %%xmm2,%%xmm0                  \n"
+      "movdqu     %%xmm0,(%2)                    \n"
+      "lea        0x10(%2),%2                    \n"
+
+      "movdqu     (%0),%%xmm0                    \n"
+      "movdqu     0x10(%0),%%xmm1                \n"
+      "movdqu     0x20(%0),%%xmm2                \n"
+      "pshufb     %11, %%xmm0                    \n"
+      "pshufb     %12, %%xmm1                    \n"
+      "pshufb     %13, %%xmm2                    \n"
+      "por        %%xmm1,%%xmm0                  \n"
+      "por        %%xmm2,%%xmm0                  \n"
+      "movdqu     %%xmm0,(%3)                    \n"
+      "lea        0x10(%3),%3                    \n"
+      "lea        0x30(%0),%0                    \n"
+      "sub        $0x10,%4                       \n"
+      "jg         1b                             \n"
+      : "+r"(src_rgb),             // %0
+        "+r"(dst_r),               // %1
+        "+r"(dst_g),               // %2
+        "+r"(dst_b),               // %3
+        "+r"(width)                // %4
+      : "m"(kShuffleMaskRGBToR0),  // %5
+        "m"(kShuffleMaskRGBToR1),  // %6
+        "m"(kShuffleMaskRGBToR2),  // %7
+        "m"(kShuffleMaskRGBToG0),  // %8
+        "m"(kShuffleMaskRGBToG1),  // %9
+        "m"(kShuffleMaskRGBToG2),  // %10
+        "m"(kShuffleMaskRGBToB0),  // %11
+        "m"(kShuffleMaskRGBToB1),  // %12
+        "m"(kShuffleMaskRGBToB2)   // %13
+      : "memory", "cc", "xmm0", "xmm1", "xmm2");
+}
+#endif  // HAS_SPLITRGBROW_SSSE3
+
+#ifdef HAS_MERGERGBROW_SSSE3
+
+// Shuffle table for converting Planar to RGB.
+static const uvec8 kShuffleMaskRToRGB0 = {0u, 128u, 128u, 1u, 128u, 128u,
+                                          2u, 128u, 128u, 3u, 128u, 128u,
+                                          4u, 128u, 128u, 5u};
+static const uvec8 kShuffleMaskGToRGB0 = {128u, 0u, 128u, 128u, 1u, 128u,
+                                          128u, 2u, 128u, 128u, 3u, 128u,
+                                          128u, 4u, 128u, 128u};
+static const uvec8 kShuffleMaskBToRGB0 = {128u, 128u, 0u, 128u, 128u, 1u,
+                                          128u, 128u, 2u, 128u, 128u, 3u,
+                                          128u, 128u, 4u, 128u};
+
+static const uvec8 kShuffleMaskGToRGB1 = {5u, 128u, 128u, 6u, 128u, 128u,
+                                          7u, 128u, 128u, 8u, 128u, 128u,
+                                          9u, 128u, 128u, 10u};
+static const uvec8 kShuffleMaskBToRGB1 = {128u, 5u, 128u, 128u, 6u, 128u,
+                                          128u, 7u, 128u, 128u, 8u, 128u,
+                                          128u, 9u, 128u, 128u};
+static const uvec8 kShuffleMaskRToRGB1 = {128u, 128u, 6u,  128u, 128u, 7u,
+                                          128u, 128u, 8u,  128u, 128u, 9u,
+                                          128u, 128u, 10u, 128u};
+
+static const uvec8 kShuffleMaskBToRGB2 = {10u, 128u, 128u, 11u, 128u, 128u,
+                                          12u, 128u, 128u, 13u, 128u, 128u,
+                                          14u, 128u, 128u, 15u};
+static const uvec8 kShuffleMaskRToRGB2 = {128u, 11u, 128u, 128u, 12u, 128u,
+                                          128u, 13u, 128u, 128u, 14u, 128u,
+                                          128u, 15u, 128u, 128u};
+static const uvec8 kShuffleMaskGToRGB2 = {128u, 128u, 11u, 128u, 128u, 12u,
+                                          128u, 128u, 13u, 128u, 128u, 14u,
+                                          128u, 128u, 15u, 128u};
+
+void MergeRGBRow_SSSE3(const uint8_t* src_r,
+                       const uint8_t* src_g,
+                       const uint8_t* src_b,
+                       uint8_t* dst_rgb,
+                       int width) {
+  asm volatile(
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu     (%0),%%xmm0                    \n"
+      "movdqu     (%1),%%xmm1                    \n"
+      "movdqu     (%2),%%xmm2                    \n"
+      "pshufb     %5, %%xmm0                     \n"
+      "pshufb     %6, %%xmm1                     \n"
+      "pshufb     %7, %%xmm2                     \n"
+      "por        %%xmm1,%%xmm0                  \n"
+      "por        %%xmm2,%%xmm0                  \n"
+      "movdqu     %%xmm0,(%3)                    \n"
+
+      "movdqu     (%0),%%xmm0                    \n"
+      "movdqu     (%1),%%xmm1                    \n"
+      "movdqu     (%2),%%xmm2                    \n"
+      "pshufb     %8, %%xmm0                     \n"
+      "pshufb     %9, %%xmm1                     \n"
+      "pshufb     %10, %%xmm2                    \n"
+      "por        %%xmm1,%%xmm0                  \n"
+      "por        %%xmm2,%%xmm0                  \n"
+      "movdqu     %%xmm0,16(%3)                  \n"
+
+      "movdqu     (%0),%%xmm0                    \n"
+      "movdqu     (%1),%%xmm1                    \n"
+      "movdqu     (%2),%%xmm2                    \n"
+      "pshufb     %11, %%xmm0                    \n"
+      "pshufb     %12, %%xmm1                    \n"
+      "pshufb     %13, %%xmm2                    \n"
+      "por        %%xmm1,%%xmm0                  \n"
+      "por        %%xmm2,%%xmm0                  \n"
+      "movdqu     %%xmm0,32(%3)                  \n"
+
+      "lea        0x10(%0),%0                    \n"
+      "lea        0x10(%1),%1                    \n"
+      "lea        0x10(%2),%2                    \n"
+      "lea        0x30(%3),%3                    \n"
+      "sub        $0x10,%4                       \n"
+      "jg         1b                             \n"
+      : "+r"(src_r),               // %0
+        "+r"(src_g),               // %1
+        "+r"(src_b),               // %2
+        "+r"(dst_rgb),             // %3
+        "+r"(width)                // %4
+      : "m"(kShuffleMaskRToRGB0),  // %5
+        "m"(kShuffleMaskGToRGB0),  // %6
+        "m"(kShuffleMaskBToRGB0),  // %7
+        "m"(kShuffleMaskRToRGB1),  // %8
+        "m"(kShuffleMaskGToRGB1),  // %9
+        "m"(kShuffleMaskBToRGB1),  // %10
+        "m"(kShuffleMaskRToRGB2),  // %11
+        "m"(kShuffleMaskGToRGB2),  // %12
+        "m"(kShuffleMaskBToRGB2)   // %13
+      : "memory", "cc", "xmm0", "xmm1", "xmm2");
+}
+#endif  // HAS_MERGERGBROW_SSSE3
+
+#ifdef HAS_COPYROW_SSE2
+void CopyRow_SSE2(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "test       $0xf,%0                        \n"
+      "jne        2f                             \n"
+      "test       $0xf,%1                        \n"
+      "jne        2f                             \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqa    (%0),%%xmm0                     \n"
+      "movdqa    0x10(%0),%%xmm1                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "movdqa    %%xmm0,(%1)                     \n"
+      "movdqa    %%xmm1,0x10(%1)                 \n"
+      "lea       0x20(%1),%1                     \n"
+      "sub       $0x20,%2                        \n"
+      "jg        1b                              \n"
+      "jmp       9f                              \n"
+
+      LABELALIGN
+      "2:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "movdqu    %%xmm1,0x10(%1)                 \n"
+      "lea       0x20(%1),%1                     \n"
+      "sub       $0x20,%2                        \n"
+      "jg        2b                              \n"
+
+      LABELALIGN "9:                             \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      :
+      : "memory", "cc", "xmm0", "xmm1");
 }
 #endif  // HAS_COPYROW_SSE2
 
 #ifdef HAS_COPYROW_AVX
-void CopyRow_AVX(const uint8* src, uint8* dst, int count) {
-  asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu   " MEMACCESS(0) ",%%ymm0         \n"
-    "vmovdqu   " MEMACCESS2(0x20,0) ",%%ymm1   \n"
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "vmovdqu   %%ymm0," MEMACCESS(1) "         \n"
-    "vmovdqu   %%ymm1," MEMACCESS2(0x20,1) "   \n"
-    "lea       " MEMLEA(0x40,1) ",%1           \n"
-    "sub       $0x40,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src),   // %0
-    "+r"(dst),   // %1
-    "+r"(count)  // %2
-  :
-  : "memory", "cc"
-    , "xmm0", "xmm1"
-  );
+void CopyRow_AVX(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu   (%0),%%ymm0                     \n"
+      "vmovdqu   0x20(%0),%%ymm1                 \n"
+      "lea       0x40(%0),%0                     \n"
+      "vmovdqu   %%ymm0,(%1)                     \n"
+      "vmovdqu   %%ymm1,0x20(%1)                 \n"
+      "lea       0x40(%1),%1                     \n"
+      "sub       $0x40,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      :
+      : "memory", "cc", "xmm0", "xmm1");
 }
 #endif  // HAS_COPYROW_AVX
 
 #ifdef HAS_COPYROW_ERMS
 // Multiple of 1.
-void CopyRow_ERMS(const uint8* src, uint8* dst, int width) {
+void CopyRow_ERMS(const uint8_t* src, uint8_t* dst, int width) {
   size_t width_tmp = (size_t)(width);
-  asm volatile (
-    "rep movsb " MEMMOVESTRING(0,1) "          \n"
-  : "+S"(src),  // %0
-    "+D"(dst),  // %1
-    "+c"(width_tmp) // %2
-  :
-  : "memory", "cc"
-  );
+  asm volatile(
+
+      "rep movsb                      \n"
+      : "+S"(src),       // %0
+        "+D"(dst),       // %1
+        "+c"(width_tmp)  // %2
+      :
+      : "memory", "cc");
 }
 #endif  // HAS_COPYROW_ERMS
 
 #ifdef HAS_ARGBCOPYALPHAROW_SSE2
 // width in pixels
-void ARGBCopyAlphaRow_SSE2(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-    "pcmpeqb   %%xmm0,%%xmm0                   \n"
-    "pslld     $0x18,%%xmm0                    \n"
-    "pcmpeqb   %%xmm1,%%xmm1                   \n"
-    "psrld     $0x8,%%xmm1                     \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm2         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm3   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "movdqu    " MEMACCESS(1) ",%%xmm4         \n"
-    "movdqu    " MEMACCESS2(0x10,1) ",%%xmm5   \n"
-    "pand      %%xmm0,%%xmm2                   \n"
-    "pand      %%xmm0,%%xmm3                   \n"
-    "pand      %%xmm1,%%xmm4                   \n"
-    "pand      %%xmm1,%%xmm5                   \n"
-    "por       %%xmm4,%%xmm2                   \n"
-    "por       %%xmm5,%%xmm3                   \n"
-    "movdqu    %%xmm2," MEMACCESS(1) "         \n"
-    "movdqu    %%xmm3," MEMACCESS2(0x10,1) "   \n"
-    "lea       " MEMLEA(0x20,1) ",%1           \n"
-    "sub       $0x8,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src),   // %0
-    "+r"(dst),   // %1
-    "+r"(width)  // %2
-  :
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+void ARGBCopyAlphaRow_SSE2(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "pcmpeqb   %%xmm0,%%xmm0                   \n"
+      "pslld     $0x18,%%xmm0                    \n"
+      "pcmpeqb   %%xmm1,%%xmm1                   \n"
+      "psrld     $0x8,%%xmm1                     \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm2                     \n"
+      "movdqu    0x10(%0),%%xmm3                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "movdqu    (%1),%%xmm4                     \n"
+      "movdqu    0x10(%1),%%xmm5                 \n"
+      "pand      %%xmm0,%%xmm2                   \n"
+      "pand      %%xmm0,%%xmm3                   \n"
+      "pand      %%xmm1,%%xmm4                   \n"
+      "pand      %%xmm1,%%xmm5                   \n"
+      "por       %%xmm4,%%xmm2                   \n"
+      "por       %%xmm5,%%xmm3                   \n"
+      "movdqu    %%xmm2,(%1)                     \n"
+      "movdqu    %%xmm3,0x10(%1)                 \n"
+      "lea       0x20(%1),%1                     \n"
+      "sub       $0x8,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 #endif  // HAS_ARGBCOPYALPHAROW_SSE2
 
 #ifdef HAS_ARGBCOPYALPHAROW_AVX2
 // width in pixels
-void ARGBCopyAlphaRow_AVX2(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-    "vpcmpeqb  %%ymm0,%%ymm0,%%ymm0            \n"
-    "vpsrld    $0x8,%%ymm0,%%ymm0              \n"
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu   " MEMACCESS(0) ",%%ymm1         \n"
-    "vmovdqu   " MEMACCESS2(0x20,0) ",%%ymm2   \n"
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "vpblendvb %%ymm0," MEMACCESS(1) ",%%ymm1,%%ymm1        \n"
-    "vpblendvb %%ymm0," MEMACCESS2(0x20,1) ",%%ymm2,%%ymm2  \n"
-    "vmovdqu   %%ymm1," MEMACCESS(1) "         \n"
-    "vmovdqu   %%ymm2," MEMACCESS2(0x20,1) "   \n"
-    "lea       " MEMLEA(0x40,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src),   // %0
-    "+r"(dst),   // %1
-    "+r"(width)  // %2
-  :
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2"
-  );
+void ARGBCopyAlphaRow_AVX2(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "vpcmpeqb  %%ymm0,%%ymm0,%%ymm0            \n"
+      "vpsrld    $0x8,%%ymm0,%%ymm0              \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu   (%0),%%ymm1                     \n"
+      "vmovdqu   0x20(%0),%%ymm2                 \n"
+      "lea       0x40(%0),%0                     \n"
+      "vpblendvb %%ymm0,(%1),%%ymm1,%%ymm1       \n"
+      "vpblendvb %%ymm0,0x20(%1),%%ymm2,%%ymm2   \n"
+      "vmovdqu   %%ymm1,(%1)                     \n"
+      "vmovdqu   %%ymm2,0x20(%1)                 \n"
+      "lea       0x40(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2");
 }
 #endif  // HAS_ARGBCOPYALPHAROW_AVX2
 
 #ifdef HAS_ARGBEXTRACTALPHAROW_SSE2
 // width in pixels
-void ARGBExtractAlphaRow_SSE2(const uint8* src_argb, uint8* dst_a, int width) {
- asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ", %%xmm0        \n"
-    "movdqu    " MEMACCESS2(0x10, 0) ", %%xmm1 \n"
-    "lea       " MEMLEA(0x20, 0) ", %0         \n"
-    "psrld     $0x18, %%xmm0                   \n"
-    "psrld     $0x18, %%xmm1                   \n"
-    "packssdw  %%xmm1, %%xmm0                  \n"
-    "packuswb  %%xmm0, %%xmm0                  \n"
-    "movq      %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x8, 1) ", %1          \n"
-    "sub       $0x8, %2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_a),     // %1
-    "+rm"(width)     // %2
-  :
-  : "memory", "cc"
-    , "xmm0", "xmm1"
-  );
+void ARGBExtractAlphaRow_SSE2(const uint8_t* src_argb,
+                              uint8_t* dst_a,
+                              int width) {
+  asm volatile(
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0), %%xmm0                    \n"
+      "movdqu    0x10(%0), %%xmm1                \n"
+      "lea       0x20(%0), %0                    \n"
+      "psrld     $0x18, %%xmm0                   \n"
+      "psrld     $0x18, %%xmm1                   \n"
+      "packssdw  %%xmm1, %%xmm0                  \n"
+      "packuswb  %%xmm0, %%xmm0                  \n"
+      "movq      %%xmm0,(%1)                     \n"
+      "lea       0x8(%1), %1                     \n"
+      "sub       $0x8, %2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_a),     // %1
+        "+rm"(width)     // %2
+      :
+      : "memory", "cc", "xmm0", "xmm1");
 }
 #endif  // HAS_ARGBEXTRACTALPHAROW_SSE2
 
+#ifdef HAS_ARGBEXTRACTALPHAROW_AVX2
+static const uvec8 kShuffleAlphaShort_AVX2 = {
+    3u,  128u, 128u, 128u, 7u,  128u, 128u, 128u,
+    11u, 128u, 128u, 128u, 15u, 128u, 128u, 128u};
+
+void ARGBExtractAlphaRow_AVX2(const uint8_t* src_argb,
+                              uint8_t* dst_a,
+                              int width) {
+  asm volatile(
+      "vmovdqa    %3,%%ymm4                      \n"
+      "vbroadcastf128 %4,%%ymm5                  \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu   (%0), %%ymm0                    \n"
+      "vmovdqu   0x20(%0), %%ymm1                \n"
+      "vpshufb    %%ymm5,%%ymm0,%%ymm0           \n"  // vpsrld $0x18, %%ymm0
+      "vpshufb    %%ymm5,%%ymm1,%%ymm1           \n"
+      "vmovdqu   0x40(%0), %%ymm2                \n"
+      "vmovdqu   0x60(%0), %%ymm3                \n"
+      "lea       0x80(%0), %0                    \n"
+      "vpackssdw  %%ymm1, %%ymm0, %%ymm0         \n"  // mutates
+      "vpshufb    %%ymm5,%%ymm2,%%ymm2           \n"
+      "vpshufb    %%ymm5,%%ymm3,%%ymm3           \n"
+      "vpackssdw  %%ymm3, %%ymm2, %%ymm2         \n"  // mutates
+      "vpackuswb  %%ymm2,%%ymm0,%%ymm0           \n"  // mutates.
+      "vpermd     %%ymm0,%%ymm4,%%ymm0           \n"  // unmutate.
+      "vmovdqu    %%ymm0,(%1)                    \n"
+      "lea       0x20(%1),%1                     \n"
+      "sub        $0x20, %2                      \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+      : "+r"(src_argb),               // %0
+        "+r"(dst_a),                  // %1
+        "+rm"(width)                  // %2
+      : "m"(kPermdARGBToY_AVX),       // %3
+        "m"(kShuffleAlphaShort_AVX2)  // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
+}
+#endif  // HAS_ARGBEXTRACTALPHAROW_AVX2
+
 #ifdef HAS_ARGBCOPYYTOALPHAROW_SSE2
 // width in pixels
-void ARGBCopyYToAlphaRow_SSE2(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-    "pcmpeqb   %%xmm0,%%xmm0                   \n"
-    "pslld     $0x18,%%xmm0                    \n"
-    "pcmpeqb   %%xmm1,%%xmm1                   \n"
-    "psrld     $0x8,%%xmm1                     \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movq      " MEMACCESS(0) ",%%xmm2         \n"
-    "lea       " MEMLEA(0x8,0) ",%0            \n"
-    "punpcklbw %%xmm2,%%xmm2                   \n"
-    "punpckhwd %%xmm2,%%xmm3                   \n"
-    "punpcklwd %%xmm2,%%xmm2                   \n"
-    "movdqu    " MEMACCESS(1) ",%%xmm4         \n"
-    "movdqu    " MEMACCESS2(0x10,1) ",%%xmm5   \n"
-    "pand      %%xmm0,%%xmm2                   \n"
-    "pand      %%xmm0,%%xmm3                   \n"
-    "pand      %%xmm1,%%xmm4                   \n"
-    "pand      %%xmm1,%%xmm5                   \n"
-    "por       %%xmm4,%%xmm2                   \n"
-    "por       %%xmm5,%%xmm3                   \n"
-    "movdqu    %%xmm2," MEMACCESS(1) "         \n"
-    "movdqu    %%xmm3," MEMACCESS2(0x10,1) "   \n"
-    "lea       " MEMLEA(0x20,1) ",%1           \n"
-    "sub       $0x8,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src),   // %0
-    "+r"(dst),   // %1
-    "+r"(width)  // %2
-  :
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+void ARGBCopyYToAlphaRow_SSE2(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "pcmpeqb   %%xmm0,%%xmm0                   \n"
+      "pslld     $0x18,%%xmm0                    \n"
+      "pcmpeqb   %%xmm1,%%xmm1                   \n"
+      "psrld     $0x8,%%xmm1                     \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movq      (%0),%%xmm2                     \n"
+      "lea       0x8(%0),%0                      \n"
+      "punpcklbw %%xmm2,%%xmm2                   \n"
+      "punpckhwd %%xmm2,%%xmm3                   \n"
+      "punpcklwd %%xmm2,%%xmm2                   \n"
+      "movdqu    (%1),%%xmm4                     \n"
+      "movdqu    0x10(%1),%%xmm5                 \n"
+      "pand      %%xmm0,%%xmm2                   \n"
+      "pand      %%xmm0,%%xmm3                   \n"
+      "pand      %%xmm1,%%xmm4                   \n"
+      "pand      %%xmm1,%%xmm5                   \n"
+      "por       %%xmm4,%%xmm2                   \n"
+      "por       %%xmm5,%%xmm3                   \n"
+      "movdqu    %%xmm2,(%1)                     \n"
+      "movdqu    %%xmm3,0x10(%1)                 \n"
+      "lea       0x20(%1),%1                     \n"
+      "sub       $0x8,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 #endif  // HAS_ARGBCOPYYTOALPHAROW_SSE2
 
 #ifdef HAS_ARGBCOPYYTOALPHAROW_AVX2
 // width in pixels
-void ARGBCopyYToAlphaRow_AVX2(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-    "vpcmpeqb  %%ymm0,%%ymm0,%%ymm0            \n"
-    "vpsrld    $0x8,%%ymm0,%%ymm0              \n"
-    LABELALIGN
-  "1:                                          \n"
-    "vpmovzxbd " MEMACCESS(0) ",%%ymm1         \n"
-    "vpmovzxbd " MEMACCESS2(0x8,0) ",%%ymm2    \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "vpslld    $0x18,%%ymm1,%%ymm1             \n"
-    "vpslld    $0x18,%%ymm2,%%ymm2             \n"
-    "vpblendvb %%ymm0," MEMACCESS(1) ",%%ymm1,%%ymm1        \n"
-    "vpblendvb %%ymm0," MEMACCESS2(0x20,1) ",%%ymm2,%%ymm2  \n"
-    "vmovdqu   %%ymm1," MEMACCESS(1) "         \n"
-    "vmovdqu   %%ymm2," MEMACCESS2(0x20,1) "   \n"
-    "lea       " MEMLEA(0x40,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src),   // %0
-    "+r"(dst),   // %1
-    "+r"(width)  // %2
-  :
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2"
-  );
+void ARGBCopyYToAlphaRow_AVX2(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "vpcmpeqb  %%ymm0,%%ymm0,%%ymm0            \n"
+      "vpsrld    $0x8,%%ymm0,%%ymm0              \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vpmovzxbd (%0),%%ymm1                     \n"
+      "vpmovzxbd 0x8(%0),%%ymm2                  \n"
+      "lea       0x10(%0),%0                     \n"
+      "vpslld    $0x18,%%ymm1,%%ymm1             \n"
+      "vpslld    $0x18,%%ymm2,%%ymm2             \n"
+      "vpblendvb %%ymm0,(%1),%%ymm1,%%ymm1       \n"
+      "vpblendvb %%ymm0,0x20(%1),%%ymm2,%%ymm2   \n"
+      "vmovdqu   %%ymm1,(%1)                     \n"
+      "vmovdqu   %%ymm2,0x20(%1)                 \n"
+      "lea       0x40(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2");
 }
 #endif  // HAS_ARGBCOPYYTOALPHAROW_AVX2
 
 #ifdef HAS_SETROW_X86
-void SetRow_X86(uint8* dst, uint8 v8, int width) {
+void SetRow_X86(uint8_t* dst, uint8_t v8, int width) {
   size_t width_tmp = (size_t)(width >> 2);
-  const uint32 v32 = v8 * 0x01010101u;  // Duplicate byte to all bytes.
-  asm volatile (
-    "rep stosl " MEMSTORESTRING(eax,0) "       \n"
-    : "+D"(dst),       // %0
-      "+c"(width_tmp)  // %1
-    : "a"(v32)         // %2
-    : "memory", "cc");
+  const uint32_t v32 = v8 * 0x01010101u;  // Duplicate byte to all bytes.
+  asm volatile(
+
+      "rep stosl                      \n"
+      : "+D"(dst),       // %0
+        "+c"(width_tmp)  // %1
+      : "a"(v32)         // %2
+      : "memory", "cc");
 }
 
-void SetRow_ERMS(uint8* dst, uint8 v8, int width) {
+void SetRow_ERMS(uint8_t* dst, uint8_t v8, int width) {
   size_t width_tmp = (size_t)(width);
-  asm volatile (
-    "rep stosb " MEMSTORESTRING(al,0) "        \n"
-    : "+D"(dst),       // %0
-      "+c"(width_tmp)  // %1
-    : "a"(v8)          // %2
-    : "memory", "cc");
+  asm volatile(
+
+      "rep stosb                      \n"
+      : "+D"(dst),       // %0
+        "+c"(width_tmp)  // %1
+      : "a"(v8)          // %2
+      : "memory", "cc");
 }
 
-void ARGBSetRow_X86(uint8* dst_argb, uint32 v32, int width) {
+void ARGBSetRow_X86(uint8_t* dst_argb, uint32_t v32, int width) {
   size_t width_tmp = (size_t)(width);
-  asm volatile (
-    "rep stosl " MEMSTORESTRING(eax,0) "       \n"
-    : "+D"(dst_argb),  // %0
-      "+c"(width_tmp)  // %1
-    : "a"(v32)         // %2
-    : "memory", "cc");
+  asm volatile(
+
+      "rep stosl                      \n"
+      : "+D"(dst_argb),  // %0
+        "+c"(width_tmp)  // %1
+      : "a"(v32)         // %2
+      : "memory", "cc");
 }
 #endif  // HAS_SETROW_X86
 
 #ifdef HAS_YUY2TOYROW_SSE2
-void YUY2ToYRow_SSE2(const uint8* src_yuy2, uint8* dst_y, int width) {
-  asm volatile (
-    "pcmpeqb   %%xmm5,%%xmm5                   \n"
-    "psrlw     $0x8,%%xmm5                     \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "pand      %%xmm5,%%xmm0                   \n"
-    "pand      %%xmm5,%%xmm1                   \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_yuy2),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm5"
-  );
+void YUY2ToYRow_SSE2(const uint8_t* src_yuy2, uint8_t* dst_y, int width) {
+  asm volatile(
+      "pcmpeqb   %%xmm5,%%xmm5                   \n"
+      "psrlw     $0x8,%%xmm5                     \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "pand      %%xmm5,%%xmm0                   \n"
+      "pand      %%xmm5,%%xmm1                   \n"
+      "packuswb  %%xmm1,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_yuy2),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm5");
 }
 
-void YUY2ToUVRow_SSE2(const uint8* src_yuy2, int stride_yuy2,
-                      uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "pcmpeqb   %%xmm5,%%xmm5                   \n"
-    "psrlw     $0x8,%%xmm5                     \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    MEMOPREG(movdqu,0x00,0,4,1,xmm2)           //  movdqu  (%0,%4,1),%%xmm2
-    MEMOPREG(movdqu,0x10,0,4,1,xmm3)           //  movdqu  0x10(%0,%4,1),%%xmm3
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "pavgb     %%xmm2,%%xmm0                   \n"
-    "pavgb     %%xmm3,%%xmm1                   \n"
-    "psrlw     $0x8,%%xmm0                     \n"
-    "psrlw     $0x8,%%xmm1                     \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "pand      %%xmm5,%%xmm0                   \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
-    "psrlw     $0x8,%%xmm1                     \n"
-    "packuswb  %%xmm1,%%xmm1                   \n"
-    "movq      %%xmm0," MEMACCESS(1) "         \n"
-    MEMOPMEM(movq,xmm1,0x00,1,2,1)             //  movq    %%xmm1,(%1,%2)
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $0x10,%3                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_yuy2),    // %0
-    "+r"(dst_u),       // %1
-    "+r"(dst_v),       // %2
-    "+r"(width)          // %3
-  : "r"((intptr_t)(stride_yuy2))  // %4
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm5"
-  );
+void YUY2ToUVRow_SSE2(const uint8_t* src_yuy2,
+                      int stride_yuy2,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  asm volatile(
+      "pcmpeqb   %%xmm5,%%xmm5                   \n"
+      "psrlw     $0x8,%%xmm5                     \n"
+      "sub       %1,%2                           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x00(%0,%4,1),%%xmm2            \n"
+      "movdqu    0x10(%0,%4,1),%%xmm3            \n"
+      "lea       0x20(%0),%0                     \n"
+      "pavgb     %%xmm2,%%xmm0                   \n"
+      "pavgb     %%xmm3,%%xmm1                   \n"
+      "psrlw     $0x8,%%xmm0                     \n"
+      "psrlw     $0x8,%%xmm1                     \n"
+      "packuswb  %%xmm1,%%xmm0                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "pand      %%xmm5,%%xmm0                   \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
+      "psrlw     $0x8,%%xmm1                     \n"
+      "packuswb  %%xmm1,%%xmm1                   \n"
+      "movq      %%xmm0,(%1)                     \n"
+      "movq    %%xmm1,0x00(%1,%2,1)              \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $0x10,%3                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_yuy2),               // %0
+        "+r"(dst_u),                  // %1
+        "+r"(dst_v),                  // %2
+        "+r"(width)                   // %3
+      : "r"((intptr_t)(stride_yuy2))  // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5");
 }
 
-void YUY2ToUV422Row_SSE2(const uint8* src_yuy2,
-                         uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "pcmpeqb   %%xmm5,%%xmm5                   \n"
-    "psrlw     $0x8,%%xmm5                     \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "psrlw     $0x8,%%xmm0                     \n"
-    "psrlw     $0x8,%%xmm1                     \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "pand      %%xmm5,%%xmm0                   \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
-    "psrlw     $0x8,%%xmm1                     \n"
-    "packuswb  %%xmm1,%%xmm1                   \n"
-    "movq      %%xmm0," MEMACCESS(1) "         \n"
-    MEMOPMEM(movq,xmm1,0x00,1,2,1)             //  movq    %%xmm1,(%1,%2)
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $0x10,%3                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_yuy2),    // %0
-    "+r"(dst_u),       // %1
-    "+r"(dst_v),       // %2
-    "+r"(width)          // %3
-  :
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm5"
-  );
+void YUY2ToUV422Row_SSE2(const uint8_t* src_yuy2,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width) {
+  asm volatile(
+      "pcmpeqb   %%xmm5,%%xmm5                   \n"
+      "psrlw     $0x8,%%xmm5                     \n"
+      "sub       %1,%2                           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "psrlw     $0x8,%%xmm0                     \n"
+      "psrlw     $0x8,%%xmm1                     \n"
+      "packuswb  %%xmm1,%%xmm0                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "pand      %%xmm5,%%xmm0                   \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
+      "psrlw     $0x8,%%xmm1                     \n"
+      "packuswb  %%xmm1,%%xmm1                   \n"
+      "movq      %%xmm0,(%1)                     \n"
+      "movq    %%xmm1,0x00(%1,%2,1)              \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $0x10,%3                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_yuy2),  // %0
+        "+r"(dst_u),     // %1
+        "+r"(dst_v),     // %2
+        "+r"(width)      // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm5");
 }
 
-void UYVYToYRow_SSE2(const uint8* src_uyvy, uint8* dst_y, int width) {
-  asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "psrlw     $0x8,%%xmm0                     \n"
-    "psrlw     $0x8,%%xmm1                     \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_uyvy),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "memory", "cc"
-    , "xmm0", "xmm1"
-  );
+void UYVYToYRow_SSE2(const uint8_t* src_uyvy, uint8_t* dst_y, int width) {
+  asm volatile(
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "psrlw     $0x8,%%xmm0                     \n"
+      "psrlw     $0x8,%%xmm1                     \n"
+      "packuswb  %%xmm1,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_uyvy),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "memory", "cc", "xmm0", "xmm1");
 }
 
-void UYVYToUVRow_SSE2(const uint8* src_uyvy, int stride_uyvy,
-                      uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "pcmpeqb   %%xmm5,%%xmm5                   \n"
-    "psrlw     $0x8,%%xmm5                     \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    MEMOPREG(movdqu,0x00,0,4,1,xmm2)           //  movdqu  (%0,%4,1),%%xmm2
-    MEMOPREG(movdqu,0x10,0,4,1,xmm3)           //  movdqu  0x10(%0,%4,1),%%xmm3
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "pavgb     %%xmm2,%%xmm0                   \n"
-    "pavgb     %%xmm3,%%xmm1                   \n"
-    "pand      %%xmm5,%%xmm0                   \n"
-    "pand      %%xmm5,%%xmm1                   \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "pand      %%xmm5,%%xmm0                   \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
-    "psrlw     $0x8,%%xmm1                     \n"
-    "packuswb  %%xmm1,%%xmm1                   \n"
-    "movq      %%xmm0," MEMACCESS(1) "         \n"
-    MEMOPMEM(movq,xmm1,0x00,1,2,1)             //  movq    %%xmm1,(%1,%2)
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $0x10,%3                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_uyvy),    // %0
-    "+r"(dst_u),       // %1
-    "+r"(dst_v),       // %2
-    "+r"(width)          // %3
-  : "r"((intptr_t)(stride_uyvy))  // %4
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm5"
-  );
+void UYVYToUVRow_SSE2(const uint8_t* src_uyvy,
+                      int stride_uyvy,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  asm volatile(
+      "pcmpeqb   %%xmm5,%%xmm5                   \n"
+      "psrlw     $0x8,%%xmm5                     \n"
+      "sub       %1,%2                           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x00(%0,%4,1),%%xmm2            \n"
+      "movdqu    0x10(%0,%4,1),%%xmm3            \n"
+      "lea       0x20(%0),%0                     \n"
+      "pavgb     %%xmm2,%%xmm0                   \n"
+      "pavgb     %%xmm3,%%xmm1                   \n"
+      "pand      %%xmm5,%%xmm0                   \n"
+      "pand      %%xmm5,%%xmm1                   \n"
+      "packuswb  %%xmm1,%%xmm0                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "pand      %%xmm5,%%xmm0                   \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
+      "psrlw     $0x8,%%xmm1                     \n"
+      "packuswb  %%xmm1,%%xmm1                   \n"
+      "movq      %%xmm0,(%1)                     \n"
+      "movq    %%xmm1,0x00(%1,%2,1)              \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $0x10,%3                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_uyvy),               // %0
+        "+r"(dst_u),                  // %1
+        "+r"(dst_v),                  // %2
+        "+r"(width)                   // %3
+      : "r"((intptr_t)(stride_uyvy))  // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5");
 }
 
-void UYVYToUV422Row_SSE2(const uint8* src_uyvy,
-                         uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "pcmpeqb   %%xmm5,%%xmm5                   \n"
-    "psrlw     $0x8,%%xmm5                     \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "pand      %%xmm5,%%xmm0                   \n"
-    "pand      %%xmm5,%%xmm1                   \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "pand      %%xmm5,%%xmm0                   \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
-    "psrlw     $0x8,%%xmm1                     \n"
-    "packuswb  %%xmm1,%%xmm1                   \n"
-    "movq      %%xmm0," MEMACCESS(1) "         \n"
-    MEMOPMEM(movq,xmm1,0x00,1,2,1)             //  movq    %%xmm1,(%1,%2)
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $0x10,%3                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_uyvy),    // %0
-    "+r"(dst_u),       // %1
-    "+r"(dst_v),       // %2
-    "+r"(width)          // %3
-  :
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm5"
-  );
+void UYVYToUV422Row_SSE2(const uint8_t* src_uyvy,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width) {
+  asm volatile(
+      "pcmpeqb   %%xmm5,%%xmm5                   \n"
+      "psrlw     $0x8,%%xmm5                     \n"
+      "sub       %1,%2                           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "pand      %%xmm5,%%xmm0                   \n"
+      "pand      %%xmm5,%%xmm1                   \n"
+      "packuswb  %%xmm1,%%xmm0                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "pand      %%xmm5,%%xmm0                   \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
+      "psrlw     $0x8,%%xmm1                     \n"
+      "packuswb  %%xmm1,%%xmm1                   \n"
+      "movq      %%xmm0,(%1)                     \n"
+      "movq    %%xmm1,0x00(%1,%2,1)              \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $0x10,%3                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_uyvy),  // %0
+        "+r"(dst_u),     // %1
+        "+r"(dst_v),     // %2
+        "+r"(width)      // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm5");
 }
 #endif  // HAS_YUY2TOYROW_SSE2
 
 #ifdef HAS_YUY2TOYROW_AVX2
-void YUY2ToYRow_AVX2(const uint8* src_yuy2, uint8* dst_y, int width) {
-  asm volatile (
-    "vpcmpeqb  %%ymm5,%%ymm5,%%ymm5            \n"
-    "vpsrlw    $0x8,%%ymm5,%%ymm5              \n"
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu   " MEMACCESS(0) ",%%ymm0         \n"
-    "vmovdqu   " MEMACCESS2(0x20,0) ",%%ymm1   \n"
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "vpand     %%ymm5,%%ymm0,%%ymm0            \n"
-    "vpand     %%ymm5,%%ymm1,%%ymm1            \n"
-    "vpackuswb %%ymm1,%%ymm0,%%ymm0            \n"
-    "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
-    "vmovdqu   %%ymm0," MEMACCESS(1) "         \n"
-    "lea      " MEMLEA(0x20,1) ",%1            \n"
-    "sub       $0x20,%2                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_yuy2),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm5"
-  );
+void YUY2ToYRow_AVX2(const uint8_t* src_yuy2, uint8_t* dst_y, int width) {
+  asm volatile(
+      "vpcmpeqb  %%ymm5,%%ymm5,%%ymm5            \n"
+      "vpsrlw    $0x8,%%ymm5,%%ymm5              \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu   (%0),%%ymm0                     \n"
+      "vmovdqu   0x20(%0),%%ymm1                 \n"
+      "lea       0x40(%0),%0                     \n"
+      "vpand     %%ymm5,%%ymm0,%%ymm0            \n"
+      "vpand     %%ymm5,%%ymm1,%%ymm1            \n"
+      "vpackuswb %%ymm1,%%ymm0,%%ymm0            \n"
+      "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
+      "vmovdqu   %%ymm0,(%1)                     \n"
+      "lea      0x20(%1),%1                      \n"
+      "sub       $0x20,%2                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_yuy2),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm5");
 }
 
-void YUY2ToUVRow_AVX2(const uint8* src_yuy2, int stride_yuy2,
-                      uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "vpcmpeqb  %%ymm5,%%ymm5,%%ymm5            \n"
-    "vpsrlw    $0x8,%%ymm5,%%ymm5              \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu   " MEMACCESS(0) ",%%ymm0         \n"
-    "vmovdqu   " MEMACCESS2(0x20,0) ",%%ymm1   \n"
-    VMEMOPREG(vpavgb,0x00,0,4,1,ymm0,ymm0)     // vpavgb (%0,%4,1),%%ymm0,%%ymm0
-    VMEMOPREG(vpavgb,0x20,0,4,1,ymm1,ymm1)
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "vpsrlw    $0x8,%%ymm0,%%ymm0              \n"
-    "vpsrlw    $0x8,%%ymm1,%%ymm1              \n"
-    "vpackuswb %%ymm1,%%ymm0,%%ymm0            \n"
-    "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
-    "vpand     %%ymm5,%%ymm0,%%ymm1            \n"
-    "vpsrlw    $0x8,%%ymm0,%%ymm0              \n"
-    "vpackuswb %%ymm1,%%ymm1,%%ymm1            \n"
-    "vpackuswb %%ymm0,%%ymm0,%%ymm0            \n"
-    "vpermq    $0xd8,%%ymm1,%%ymm1             \n"
-    "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
-    "vextractf128 $0x0,%%ymm1," MEMACCESS(1) " \n"
-    VEXTOPMEM(vextractf128,0,ymm0,0x00,1,2,1) // vextractf128 $0x0,%%ymm0,(%1,%2,1)
-    "lea      " MEMLEA(0x10,1) ",%1            \n"
-    "sub       $0x20,%3                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_yuy2),    // %0
-    "+r"(dst_u),       // %1
-    "+r"(dst_v),       // %2
-    "+r"(width)          // %3
-  : "r"((intptr_t)(stride_yuy2))  // %4
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm5"
-  );
+void YUY2ToUVRow_AVX2(const uint8_t* src_yuy2,
+                      int stride_yuy2,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  asm volatile(
+      "vpcmpeqb  %%ymm5,%%ymm5,%%ymm5            \n"
+      "vpsrlw    $0x8,%%ymm5,%%ymm5              \n"
+      "sub       %1,%2                           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu   (%0),%%ymm0                     \n"
+      "vmovdqu   0x20(%0),%%ymm1                 \n"
+      "vpavgb    0x00(%0,%4,1),%%ymm0,%%ymm0     \n"
+      "vpavgb    0x20(%0,%4,1),%%ymm1,%%ymm1     \n"
+      "lea       0x40(%0),%0                     \n"
+      "vpsrlw    $0x8,%%ymm0,%%ymm0              \n"
+      "vpsrlw    $0x8,%%ymm1,%%ymm1              \n"
+      "vpackuswb %%ymm1,%%ymm0,%%ymm0            \n"
+      "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
+      "vpand     %%ymm5,%%ymm0,%%ymm1            \n"
+      "vpsrlw    $0x8,%%ymm0,%%ymm0              \n"
+      "vpackuswb %%ymm1,%%ymm1,%%ymm1            \n"
+      "vpackuswb %%ymm0,%%ymm0,%%ymm0            \n"
+      "vpermq    $0xd8,%%ymm1,%%ymm1             \n"
+      "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
+      "vextractf128 $0x0,%%ymm1,(%1)             \n"
+      "vextractf128 $0x0,%%ymm0,0x00(%1,%2,1)    \n"
+      "lea      0x10(%1),%1                      \n"
+      "sub       $0x20,%3                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_yuy2),               // %0
+        "+r"(dst_u),                  // %1
+        "+r"(dst_v),                  // %2
+        "+r"(width)                   // %3
+      : "r"((intptr_t)(stride_yuy2))  // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm5");
 }
 
-void YUY2ToUV422Row_AVX2(const uint8* src_yuy2,
-                         uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "vpcmpeqb  %%ymm5,%%ymm5,%%ymm5            \n"
-    "vpsrlw    $0x8,%%ymm5,%%ymm5              \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu   " MEMACCESS(0) ",%%ymm0         \n"
-    "vmovdqu   " MEMACCESS2(0x20,0) ",%%ymm1   \n"
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "vpsrlw    $0x8,%%ymm0,%%ymm0              \n"
-    "vpsrlw    $0x8,%%ymm1,%%ymm1              \n"
-    "vpackuswb %%ymm1,%%ymm0,%%ymm0            \n"
-    "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
-    "vpand     %%ymm5,%%ymm0,%%ymm1            \n"
-    "vpsrlw    $0x8,%%ymm0,%%ymm0              \n"
-    "vpackuswb %%ymm1,%%ymm1,%%ymm1            \n"
-    "vpackuswb %%ymm0,%%ymm0,%%ymm0            \n"
-    "vpermq    $0xd8,%%ymm1,%%ymm1             \n"
-    "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
-    "vextractf128 $0x0,%%ymm1," MEMACCESS(1) " \n"
-    VEXTOPMEM(vextractf128,0,ymm0,0x00,1,2,1) // vextractf128 $0x0,%%ymm0,(%1,%2,1)
-    "lea      " MEMLEA(0x10,1) ",%1            \n"
-    "sub       $0x20,%3                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_yuy2),    // %0
-    "+r"(dst_u),       // %1
-    "+r"(dst_v),       // %2
-    "+r"(width)          // %3
-  :
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm5"
-  );
+void YUY2ToUV422Row_AVX2(const uint8_t* src_yuy2,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width) {
+  asm volatile(
+      "vpcmpeqb  %%ymm5,%%ymm5,%%ymm5            \n"
+      "vpsrlw    $0x8,%%ymm5,%%ymm5              \n"
+      "sub       %1,%2                           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu   (%0),%%ymm0                     \n"
+      "vmovdqu   0x20(%0),%%ymm1                 \n"
+      "lea       0x40(%0),%0                     \n"
+      "vpsrlw    $0x8,%%ymm0,%%ymm0              \n"
+      "vpsrlw    $0x8,%%ymm1,%%ymm1              \n"
+      "vpackuswb %%ymm1,%%ymm0,%%ymm0            \n"
+      "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
+      "vpand     %%ymm5,%%ymm0,%%ymm1            \n"
+      "vpsrlw    $0x8,%%ymm0,%%ymm0              \n"
+      "vpackuswb %%ymm1,%%ymm1,%%ymm1            \n"
+      "vpackuswb %%ymm0,%%ymm0,%%ymm0            \n"
+      "vpermq    $0xd8,%%ymm1,%%ymm1             \n"
+      "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
+      "vextractf128 $0x0,%%ymm1,(%1)             \n"
+      "vextractf128 $0x0,%%ymm0,0x00(%1,%2,1)    \n"
+      "lea      0x10(%1),%1                      \n"
+      "sub       $0x20,%3                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_yuy2),  // %0
+        "+r"(dst_u),     // %1
+        "+r"(dst_v),     // %2
+        "+r"(width)      // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm5");
 }
 
-void UYVYToYRow_AVX2(const uint8* src_uyvy, uint8* dst_y, int width) {
-  asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu   " MEMACCESS(0) ",%%ymm0         \n"
-    "vmovdqu   " MEMACCESS2(0x20,0) ",%%ymm1   \n"
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "vpsrlw    $0x8,%%ymm0,%%ymm0              \n"
-    "vpsrlw    $0x8,%%ymm1,%%ymm1              \n"
-    "vpackuswb %%ymm1,%%ymm0,%%ymm0            \n"
-    "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
-    "vmovdqu   %%ymm0," MEMACCESS(1) "         \n"
-    "lea      " MEMLEA(0x20,1) ",%1            \n"
-    "sub       $0x20,%2                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_uyvy),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm5"
-  );
-}
-void UYVYToUVRow_AVX2(const uint8* src_uyvy, int stride_uyvy,
-                      uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "vpcmpeqb  %%ymm5,%%ymm5,%%ymm5            \n"
-    "vpsrlw    $0x8,%%ymm5,%%ymm5              \n"
-    "sub       %1,%2                           \n"
+void UYVYToYRow_AVX2(const uint8_t* src_uyvy, uint8_t* dst_y, int width) {
+  asm volatile(
 
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu   " MEMACCESS(0) ",%%ymm0         \n"
-    "vmovdqu   " MEMACCESS2(0x20,0) ",%%ymm1   \n"
-    VMEMOPREG(vpavgb,0x00,0,4,1,ymm0,ymm0)     // vpavgb (%0,%4,1),%%ymm0,%%ymm0
-    VMEMOPREG(vpavgb,0x20,0,4,1,ymm1,ymm1)
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "vpand     %%ymm5,%%ymm0,%%ymm0            \n"
-    "vpand     %%ymm5,%%ymm1,%%ymm1            \n"
-    "vpackuswb %%ymm1,%%ymm0,%%ymm0            \n"
-    "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
-    "vpand     %%ymm5,%%ymm0,%%ymm1            \n"
-    "vpsrlw    $0x8,%%ymm0,%%ymm0              \n"
-    "vpackuswb %%ymm1,%%ymm1,%%ymm1            \n"
-    "vpackuswb %%ymm0,%%ymm0,%%ymm0            \n"
-    "vpermq    $0xd8,%%ymm1,%%ymm1             \n"
-    "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
-    "vextractf128 $0x0,%%ymm1," MEMACCESS(1) " \n"
-    VEXTOPMEM(vextractf128,0,ymm0,0x00,1,2,1) // vextractf128 $0x0,%%ymm0,(%1,%2,1)
-    "lea      " MEMLEA(0x10,1) ",%1            \n"
-    "sub       $0x20,%3                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_uyvy),    // %0
-    "+r"(dst_u),       // %1
-    "+r"(dst_v),       // %2
-    "+r"(width)          // %3
-  : "r"((intptr_t)(stride_uyvy))  // %4
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm5"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu   (%0),%%ymm0                     \n"
+      "vmovdqu   0x20(%0),%%ymm1                 \n"
+      "lea       0x40(%0),%0                     \n"
+      "vpsrlw    $0x8,%%ymm0,%%ymm0              \n"
+      "vpsrlw    $0x8,%%ymm1,%%ymm1              \n"
+      "vpackuswb %%ymm1,%%ymm0,%%ymm0            \n"
+      "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
+      "vmovdqu   %%ymm0,(%1)                     \n"
+      "lea      0x20(%1),%1                      \n"
+      "sub       $0x20,%2                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_uyvy),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm5");
+}
+void UYVYToUVRow_AVX2(const uint8_t* src_uyvy,
+                      int stride_uyvy,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  asm volatile(
+      "vpcmpeqb  %%ymm5,%%ymm5,%%ymm5            \n"
+      "vpsrlw    $0x8,%%ymm5,%%ymm5              \n"
+      "sub       %1,%2                           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu   (%0),%%ymm0                     \n"
+      "vmovdqu   0x20(%0),%%ymm1                 \n"
+      "vpavgb    0x00(%0,%4,1),%%ymm0,%%ymm0     \n"
+      "vpavgb    0x20(%0,%4,1),%%ymm1,%%ymm1     \n"
+      "lea       0x40(%0),%0                     \n"
+      "vpand     %%ymm5,%%ymm0,%%ymm0            \n"
+      "vpand     %%ymm5,%%ymm1,%%ymm1            \n"
+      "vpackuswb %%ymm1,%%ymm0,%%ymm0            \n"
+      "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
+      "vpand     %%ymm5,%%ymm0,%%ymm1            \n"
+      "vpsrlw    $0x8,%%ymm0,%%ymm0              \n"
+      "vpackuswb %%ymm1,%%ymm1,%%ymm1            \n"
+      "vpackuswb %%ymm0,%%ymm0,%%ymm0            \n"
+      "vpermq    $0xd8,%%ymm1,%%ymm1             \n"
+      "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
+      "vextractf128 $0x0,%%ymm1,(%1)             \n"
+      "vextractf128 $0x0,%%ymm0,0x00(%1,%2,1)    \n"
+      "lea      0x10(%1),%1                      \n"
+      "sub       $0x20,%3                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_uyvy),               // %0
+        "+r"(dst_u),                  // %1
+        "+r"(dst_v),                  // %2
+        "+r"(width)                   // %3
+      : "r"((intptr_t)(stride_uyvy))  // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm5");
 }
 
-void UYVYToUV422Row_AVX2(const uint8* src_uyvy,
-                         uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "vpcmpeqb   %%ymm5,%%ymm5,%%ymm5           \n"
-    "vpsrlw     $0x8,%%ymm5,%%ymm5             \n"
-    "sub       %1,%2                           \n"
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu   " MEMACCESS(0) ",%%ymm0         \n"
-    "vmovdqu   " MEMACCESS2(0x20,0) ",%%ymm1   \n"
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "vpand     %%ymm5,%%ymm0,%%ymm0            \n"
-    "vpand     %%ymm5,%%ymm1,%%ymm1            \n"
-    "vpackuswb %%ymm1,%%ymm0,%%ymm0            \n"
-    "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
-    "vpand     %%ymm5,%%ymm0,%%ymm1            \n"
-    "vpsrlw    $0x8,%%ymm0,%%ymm0              \n"
-    "vpackuswb %%ymm1,%%ymm1,%%ymm1            \n"
-    "vpackuswb %%ymm0,%%ymm0,%%ymm0            \n"
-    "vpermq    $0xd8,%%ymm1,%%ymm1             \n"
-    "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
-    "vextractf128 $0x0,%%ymm1," MEMACCESS(1) " \n"
-    VEXTOPMEM(vextractf128,0,ymm0,0x00,1,2,1) // vextractf128 $0x0,%%ymm0,(%1,%2,1)
-    "lea      " MEMLEA(0x10,1) ",%1            \n"
-    "sub       $0x20,%3                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_uyvy),    // %0
-    "+r"(dst_u),       // %1
-    "+r"(dst_v),       // %2
-    "+r"(width)          // %3
-  :
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm5"
-  );
+void UYVYToUV422Row_AVX2(const uint8_t* src_uyvy,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width) {
+  asm volatile(
+      "vpcmpeqb   %%ymm5,%%ymm5,%%ymm5           \n"
+      "vpsrlw     $0x8,%%ymm5,%%ymm5             \n"
+      "sub       %1,%2                           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu   (%0),%%ymm0                     \n"
+      "vmovdqu   0x20(%0),%%ymm1                 \n"
+      "lea       0x40(%0),%0                     \n"
+      "vpand     %%ymm5,%%ymm0,%%ymm0            \n"
+      "vpand     %%ymm5,%%ymm1,%%ymm1            \n"
+      "vpackuswb %%ymm1,%%ymm0,%%ymm0            \n"
+      "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
+      "vpand     %%ymm5,%%ymm0,%%ymm1            \n"
+      "vpsrlw    $0x8,%%ymm0,%%ymm0              \n"
+      "vpackuswb %%ymm1,%%ymm1,%%ymm1            \n"
+      "vpackuswb %%ymm0,%%ymm0,%%ymm0            \n"
+      "vpermq    $0xd8,%%ymm1,%%ymm1             \n"
+      "vpermq    $0xd8,%%ymm0,%%ymm0             \n"
+      "vextractf128 $0x0,%%ymm1,(%1)             \n"
+      "vextractf128 $0x0,%%ymm0,0x00(%1,%2,1)    \n"
+      "lea      0x10(%1),%1                      \n"
+      "sub       $0x20,%3                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_uyvy),  // %0
+        "+r"(dst_u),     // %1
+        "+r"(dst_v),     // %2
+        "+r"(width)      // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm5");
 }
 #endif  // HAS_YUY2TOYROW_AVX2
 
 #ifdef HAS_ARGBBLENDROW_SSSE3
 // Shuffle table for isolating alpha.
-static uvec8 kShuffleAlpha = {
-  3u, 0x80, 3u, 0x80, 7u, 0x80, 7u, 0x80,
-  11u, 0x80, 11u, 0x80, 15u, 0x80, 15u, 0x80
-};
+static const uvec8 kShuffleAlpha = {3u,  0x80, 3u,  0x80, 7u,  0x80, 7u,  0x80,
+                                    11u, 0x80, 11u, 0x80, 15u, 0x80, 15u, 0x80};
 
 // Blend 8 pixels at a time
-void ARGBBlendRow_SSSE3(const uint8* src_argb0, const uint8* src_argb1,
-                        uint8* dst_argb, int width) {
-  asm volatile (
-    "pcmpeqb   %%xmm7,%%xmm7                   \n"
-    "psrlw     $0xf,%%xmm7                     \n"
-    "pcmpeqb   %%xmm6,%%xmm6                   \n"
-    "psrlw     $0x8,%%xmm6                     \n"
-    "pcmpeqb   %%xmm5,%%xmm5                   \n"
-    "psllw     $0x8,%%xmm5                     \n"
-    "pcmpeqb   %%xmm4,%%xmm4                   \n"
-    "pslld     $0x18,%%xmm4                    \n"
-    "sub       $0x4,%3                         \n"
-    "jl        49f                             \n"
+void ARGBBlendRow_SSSE3(const uint8_t* src_argb0,
+                        const uint8_t* src_argb1,
+                        uint8_t* dst_argb,
+                        int width) {
+  asm volatile(
+      "pcmpeqb   %%xmm7,%%xmm7                   \n"
+      "psrlw     $0xf,%%xmm7                     \n"
+      "pcmpeqb   %%xmm6,%%xmm6                   \n"
+      "psrlw     $0x8,%%xmm6                     \n"
+      "pcmpeqb   %%xmm5,%%xmm5                   \n"
+      "psllw     $0x8,%%xmm5                     \n"
+      "pcmpeqb   %%xmm4,%%xmm4                   \n"
+      "pslld     $0x18,%%xmm4                    \n"
+      "sub       $0x4,%3                         \n"
+      "jl        49f                             \n"
 
-    // 4 pixel loop.
-    LABELALIGN
-  "40:                                         \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm3         \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "movdqa    %%xmm3,%%xmm0                   \n"
-    "pxor      %%xmm4,%%xmm3                   \n"
-    "movdqu    " MEMACCESS(1) ",%%xmm2         \n"
-    "pshufb    %4,%%xmm3                       \n"
-    "pand      %%xmm6,%%xmm2                   \n"
-    "paddw     %%xmm7,%%xmm3                   \n"
-    "pmullw    %%xmm3,%%xmm2                   \n"
-    "movdqu    " MEMACCESS(1) ",%%xmm1         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "psrlw     $0x8,%%xmm1                     \n"
-    "por       %%xmm4,%%xmm0                   \n"
-    "pmullw    %%xmm3,%%xmm1                   \n"
-    "psrlw     $0x8,%%xmm2                     \n"
-    "paddusb   %%xmm2,%%xmm0                   \n"
-    "pand      %%xmm5,%%xmm1                   \n"
-    "paddusb   %%xmm1,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(2) "         \n"
-    "lea       " MEMLEA(0x10,2) ",%2           \n"
-    "sub       $0x4,%3                         \n"
-    "jge       40b                             \n"
+      // 4 pixel loop.
+      LABELALIGN
+      "40:                                       \n"
+      "movdqu    (%0),%%xmm3                     \n"
+      "lea       0x10(%0),%0                     \n"
+      "movdqa    %%xmm3,%%xmm0                   \n"
+      "pxor      %%xmm4,%%xmm3                   \n"
+      "movdqu    (%1),%%xmm2                     \n"
+      "pshufb    %4,%%xmm3                       \n"
+      "pand      %%xmm6,%%xmm2                   \n"
+      "paddw     %%xmm7,%%xmm3                   \n"
+      "pmullw    %%xmm3,%%xmm2                   \n"
+      "movdqu    (%1),%%xmm1                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "psrlw     $0x8,%%xmm1                     \n"
+      "por       %%xmm4,%%xmm0                   \n"
+      "pmullw    %%xmm3,%%xmm1                   \n"
+      "psrlw     $0x8,%%xmm2                     \n"
+      "paddusb   %%xmm2,%%xmm0                   \n"
+      "pand      %%xmm5,%%xmm1                   \n"
+      "paddusb   %%xmm1,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%2)                     \n"
+      "lea       0x10(%2),%2                     \n"
+      "sub       $0x4,%3                         \n"
+      "jge       40b                             \n"
 
-  "49:                                         \n"
-    "add       $0x3,%3                         \n"
-    "jl        99f                             \n"
+      "49:                                       \n"
+      "add       $0x3,%3                         \n"
+      "jl        99f                             \n"
 
-    // 1 pixel loop.
-  "91:                                         \n"
-    "movd      " MEMACCESS(0) ",%%xmm3         \n"
-    "lea       " MEMLEA(0x4,0) ",%0            \n"
-    "movdqa    %%xmm3,%%xmm0                   \n"
-    "pxor      %%xmm4,%%xmm3                   \n"
-    "movd      " MEMACCESS(1) ",%%xmm2         \n"
-    "pshufb    %4,%%xmm3                       \n"
-    "pand      %%xmm6,%%xmm2                   \n"
-    "paddw     %%xmm7,%%xmm3                   \n"
-    "pmullw    %%xmm3,%%xmm2                   \n"
-    "movd      " MEMACCESS(1) ",%%xmm1         \n"
-    "lea       " MEMLEA(0x4,1) ",%1            \n"
-    "psrlw     $0x8,%%xmm1                     \n"
-    "por       %%xmm4,%%xmm0                   \n"
-    "pmullw    %%xmm3,%%xmm1                   \n"
-    "psrlw     $0x8,%%xmm2                     \n"
-    "paddusb   %%xmm2,%%xmm0                   \n"
-    "pand      %%xmm5,%%xmm1                   \n"
-    "paddusb   %%xmm1,%%xmm0                   \n"
-    "movd      %%xmm0," MEMACCESS(2) "         \n"
-    "lea       " MEMLEA(0x4,2) ",%2            \n"
-    "sub       $0x1,%3                         \n"
-    "jge       91b                             \n"
-  "99:                                         \n"
-  : "+r"(src_argb0),    // %0
-    "+r"(src_argb1),    // %1
-    "+r"(dst_argb),     // %2
-    "+r"(width)         // %3
-  : "m"(kShuffleAlpha)  // %4
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+      // 1 pixel loop.
+      "91:                                       \n"
+      "movd      (%0),%%xmm3                     \n"
+      "lea       0x4(%0),%0                      \n"
+      "movdqa    %%xmm3,%%xmm0                   \n"
+      "pxor      %%xmm4,%%xmm3                   \n"
+      "movd      (%1),%%xmm2                     \n"
+      "pshufb    %4,%%xmm3                       \n"
+      "pand      %%xmm6,%%xmm2                   \n"
+      "paddw     %%xmm7,%%xmm3                   \n"
+      "pmullw    %%xmm3,%%xmm2                   \n"
+      "movd      (%1),%%xmm1                     \n"
+      "lea       0x4(%1),%1                      \n"
+      "psrlw     $0x8,%%xmm1                     \n"
+      "por       %%xmm4,%%xmm0                   \n"
+      "pmullw    %%xmm3,%%xmm1                   \n"
+      "psrlw     $0x8,%%xmm2                     \n"
+      "paddusb   %%xmm2,%%xmm0                   \n"
+      "pand      %%xmm5,%%xmm1                   \n"
+      "paddusb   %%xmm1,%%xmm0                   \n"
+      "movd      %%xmm0,(%2)                     \n"
+      "lea       0x4(%2),%2                      \n"
+      "sub       $0x1,%3                         \n"
+      "jge       91b                             \n"
+      "99:                                       \n"
+      : "+r"(src_argb0),    // %0
+        "+r"(src_argb1),    // %1
+        "+r"(dst_argb),     // %2
+        "+r"(width)         // %3
+      : "m"(kShuffleAlpha)  // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 #endif  // HAS_ARGBBLENDROW_SSSE3
 
@@ -3559,46 +4580,49 @@
 // =((A2*C2)+(B2*(255-C2))+255)/256
 // signed version of math
 // =(((A2-128)*C2)+((B2-128)*(255-C2))+32768+127)/256
-void BlendPlaneRow_SSSE3(const uint8* src0, const uint8* src1,
-                         const uint8* alpha, uint8* dst, int width) {
-  asm volatile (
-    "pcmpeqb    %%xmm5,%%xmm5                  \n"
-    "psllw      $0x8,%%xmm5                    \n"
-    "mov        $0x80808080,%%eax              \n"
-    "movd       %%eax,%%xmm6                   \n"
-    "pshufd     $0x0,%%xmm6,%%xmm6             \n"
-    "mov        $0x807f807f,%%eax              \n"
-    "movd       %%eax,%%xmm7                   \n"
-    "pshufd     $0x0,%%xmm7,%%xmm7             \n"
-    "sub        %2,%0                          \n"
-    "sub        %2,%1                          \n"
-    "sub        %2,%3                          \n"
+void BlendPlaneRow_SSSE3(const uint8_t* src0,
+                         const uint8_t* src1,
+                         const uint8_t* alpha,
+                         uint8_t* dst,
+                         int width) {
+  asm volatile(
+      "pcmpeqb    %%xmm5,%%xmm5                  \n"
+      "psllw      $0x8,%%xmm5                    \n"
+      "mov        $0x80808080,%%eax              \n"
+      "movd       %%eax,%%xmm6                   \n"
+      "pshufd     $0x0,%%xmm6,%%xmm6             \n"
+      "mov        $0x807f807f,%%eax              \n"
+      "movd       %%eax,%%xmm7                   \n"
+      "pshufd     $0x0,%%xmm7,%%xmm7             \n"
+      "sub        %2,%0                          \n"
+      "sub        %2,%1                          \n"
+      "sub        %2,%3                          \n"
 
-    // 8 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movq       (%2),%%xmm0                    \n"
-    "punpcklbw  %%xmm0,%%xmm0                  \n"
-    "pxor       %%xmm5,%%xmm0                  \n"
-    "movq       (%0,%2,1),%%xmm1               \n"
-    "movq       (%1,%2,1),%%xmm2               \n"
-    "punpcklbw  %%xmm2,%%xmm1                  \n"
-    "psubb      %%xmm6,%%xmm1                  \n"
-    "pmaddubsw  %%xmm1,%%xmm0                  \n"
-    "paddw      %%xmm7,%%xmm0                  \n"
-    "psrlw      $0x8,%%xmm0                    \n"
-    "packuswb   %%xmm0,%%xmm0                  \n"
-    "movq       %%xmm0,(%3,%2,1)               \n"
-    "lea        0x8(%2),%2                     \n"
-    "sub        $0x8,%4                        \n"
-    "jg        1b                              \n"
-  : "+r"(src0),       // %0
-    "+r"(src1),       // %1
-    "+r"(alpha),      // %2
-    "+r"(dst),        // %3
-    "+rm"(width)      // %4
-  :: "memory", "cc", "eax", "xmm0", "xmm1", "xmm2", "xmm5", "xmm6", "xmm7"
-  );
+      // 8 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movq       (%2),%%xmm0                    \n"
+      "punpcklbw  %%xmm0,%%xmm0                  \n"
+      "pxor       %%xmm5,%%xmm0                  \n"
+      "movq       (%0,%2,1),%%xmm1               \n"
+      "movq       (%1,%2,1),%%xmm2               \n"
+      "punpcklbw  %%xmm2,%%xmm1                  \n"
+      "psubb      %%xmm6,%%xmm1                  \n"
+      "pmaddubsw  %%xmm1,%%xmm0                  \n"
+      "paddw      %%xmm7,%%xmm0                  \n"
+      "psrlw      $0x8,%%xmm0                    \n"
+      "packuswb   %%xmm0,%%xmm0                  \n"
+      "movq       %%xmm0,(%3,%2,1)               \n"
+      "lea        0x8(%2),%2                     \n"
+      "sub        $0x8,%4                        \n"
+      "jg        1b                              \n"
+      : "+r"(src0),   // %0
+        "+r"(src1),   // %1
+        "+r"(alpha),  // %2
+        "+r"(dst),    // %3
+        "+rm"(width)  // %4
+        ::"memory",
+        "cc", "eax", "xmm0", "xmm1", "xmm2", "xmm5", "xmm6", "xmm7");
 }
 #endif  // HAS_BLENDPLANEROW_SSSE3
 
@@ -3608,312 +4632,308 @@
 // =((A2*C2)+(B2*(255-C2))+255)/256
 // signed version of math
 // =(((A2-128)*C2)+((B2-128)*(255-C2))+32768+127)/256
-void BlendPlaneRow_AVX2(const uint8* src0, const uint8* src1,
-                        const uint8* alpha, uint8* dst, int width) {
-  asm volatile (
-    "vpcmpeqb   %%ymm5,%%ymm5,%%ymm5           \n"
-    "vpsllw     $0x8,%%ymm5,%%ymm5             \n"
-    "mov        $0x80808080,%%eax              \n"
-    "vmovd      %%eax,%%xmm6                   \n"
-    "vbroadcastss %%xmm6,%%ymm6                \n"
-    "mov        $0x807f807f,%%eax              \n"
-    "vmovd      %%eax,%%xmm7                   \n"
-    "vbroadcastss %%xmm7,%%ymm7                \n"
-    "sub        %2,%0                          \n"
-    "sub        %2,%1                          \n"
-    "sub        %2,%3                          \n"
+void BlendPlaneRow_AVX2(const uint8_t* src0,
+                        const uint8_t* src1,
+                        const uint8_t* alpha,
+                        uint8_t* dst,
+                        int width) {
+  asm volatile(
+      "vpcmpeqb   %%ymm5,%%ymm5,%%ymm5           \n"
+      "vpsllw     $0x8,%%ymm5,%%ymm5             \n"
+      "mov        $0x80808080,%%eax              \n"
+      "vmovd      %%eax,%%xmm6                   \n"
+      "vbroadcastss %%xmm6,%%ymm6                \n"
+      "mov        $0x807f807f,%%eax              \n"
+      "vmovd      %%eax,%%xmm7                   \n"
+      "vbroadcastss %%xmm7,%%ymm7                \n"
+      "sub        %2,%0                          \n"
+      "sub        %2,%1                          \n"
+      "sub        %2,%3                          \n"
 
-    // 32 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    (%2),%%ymm0                    \n"
-    "vpunpckhbw %%ymm0,%%ymm0,%%ymm3           \n"
-    "vpunpcklbw %%ymm0,%%ymm0,%%ymm0           \n"
-    "vpxor      %%ymm5,%%ymm3,%%ymm3           \n"
-    "vpxor      %%ymm5,%%ymm0,%%ymm0           \n"
-    "vmovdqu    (%0,%2,1),%%ymm1               \n"
-    "vmovdqu    (%1,%2,1),%%ymm2               \n"
-    "vpunpckhbw %%ymm2,%%ymm1,%%ymm4           \n"
-    "vpunpcklbw %%ymm2,%%ymm1,%%ymm1           \n"
-    "vpsubb     %%ymm6,%%ymm4,%%ymm4           \n"
-    "vpsubb     %%ymm6,%%ymm1,%%ymm1           \n"
-    "vpmaddubsw %%ymm4,%%ymm3,%%ymm3           \n"
-    "vpmaddubsw %%ymm1,%%ymm0,%%ymm0           \n"
-    "vpaddw     %%ymm7,%%ymm3,%%ymm3           \n"
-    "vpaddw     %%ymm7,%%ymm0,%%ymm0           \n"
-    "vpsrlw     $0x8,%%ymm3,%%ymm3             \n"
-    "vpsrlw     $0x8,%%ymm0,%%ymm0             \n"
-    "vpackuswb  %%ymm3,%%ymm0,%%ymm0           \n"
-    "vmovdqu    %%ymm0,(%3,%2,1)               \n"
-    "lea        0x20(%2),%2                    \n"
-    "sub        $0x20,%4                       \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src0),       // %0
-    "+r"(src1),       // %1
-    "+r"(alpha),      // %2
-    "+r"(dst),        // %3
-    "+rm"(width)      // %4
-  :: "memory", "cc", "eax",
-     "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+      // 32 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%2),%%ymm0                    \n"
+      "vpunpckhbw %%ymm0,%%ymm0,%%ymm3           \n"
+      "vpunpcklbw %%ymm0,%%ymm0,%%ymm0           \n"
+      "vpxor      %%ymm5,%%ymm3,%%ymm3           \n"
+      "vpxor      %%ymm5,%%ymm0,%%ymm0           \n"
+      "vmovdqu    (%0,%2,1),%%ymm1               \n"
+      "vmovdqu    (%1,%2,1),%%ymm2               \n"
+      "vpunpckhbw %%ymm2,%%ymm1,%%ymm4           \n"
+      "vpunpcklbw %%ymm2,%%ymm1,%%ymm1           \n"
+      "vpsubb     %%ymm6,%%ymm4,%%ymm4           \n"
+      "vpsubb     %%ymm6,%%ymm1,%%ymm1           \n"
+      "vpmaddubsw %%ymm4,%%ymm3,%%ymm3           \n"
+      "vpmaddubsw %%ymm1,%%ymm0,%%ymm0           \n"
+      "vpaddw     %%ymm7,%%ymm3,%%ymm3           \n"
+      "vpaddw     %%ymm7,%%ymm0,%%ymm0           \n"
+      "vpsrlw     $0x8,%%ymm3,%%ymm3             \n"
+      "vpsrlw     $0x8,%%ymm0,%%ymm0             \n"
+      "vpackuswb  %%ymm3,%%ymm0,%%ymm0           \n"
+      "vmovdqu    %%ymm0,(%3,%2,1)               \n"
+      "lea        0x20(%2),%2                    \n"
+      "sub        $0x20,%4                       \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src0),   // %0
+        "+r"(src1),   // %1
+        "+r"(alpha),  // %2
+        "+r"(dst),    // %3
+        "+rm"(width)  // %4
+        ::"memory",
+        "cc", "eax", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 #endif  // HAS_BLENDPLANEROW_AVX2
 
 #ifdef HAS_ARGBATTENUATEROW_SSSE3
 // Shuffle table duplicating alpha
-static uvec8 kShuffleAlpha0 = {
-  3u, 3u, 3u, 3u, 3u, 3u, 128u, 128u, 7u, 7u, 7u, 7u, 7u, 7u, 128u, 128u
-};
-static uvec8 kShuffleAlpha1 = {
-  11u, 11u, 11u, 11u, 11u, 11u, 128u, 128u,
-  15u, 15u, 15u, 15u, 15u, 15u, 128u, 128u
-};
+static const uvec8 kShuffleAlpha0 = {3u, 3u, 3u, 3u, 3u, 3u, 128u, 128u,
+                                     7u, 7u, 7u, 7u, 7u, 7u, 128u, 128u};
+static const uvec8 kShuffleAlpha1 = {11u, 11u, 11u, 11u, 11u, 11u, 128u, 128u,
+                                     15u, 15u, 15u, 15u, 15u, 15u, 128u, 128u};
 // Attenuate 4 pixels at a time.
-void ARGBAttenuateRow_SSSE3(const uint8* src_argb, uint8* dst_argb, int width) {
-  asm volatile (
-    "pcmpeqb   %%xmm3,%%xmm3                   \n"
-    "pslld     $0x18,%%xmm3                    \n"
-    "movdqa    %3,%%xmm4                       \n"
-    "movdqa    %4,%%xmm5                       \n"
+void ARGBAttenuateRow_SSSE3(const uint8_t* src_argb,
+                            uint8_t* dst_argb,
+                            int width) {
+  asm volatile(
+      "pcmpeqb   %%xmm3,%%xmm3                   \n"
+      "pslld     $0x18,%%xmm3                    \n"
+      "movdqa    %3,%%xmm4                       \n"
+      "movdqa    %4,%%xmm5                       \n"
 
-    // 4 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "pshufb    %%xmm4,%%xmm0                   \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm1         \n"
-    "punpcklbw %%xmm1,%%xmm1                   \n"
-    "pmulhuw   %%xmm1,%%xmm0                   \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm1         \n"
-    "pshufb    %%xmm5,%%xmm1                   \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm2         \n"
-    "punpckhbw %%xmm2,%%xmm2                   \n"
-    "pmulhuw   %%xmm2,%%xmm1                   \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm2         \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "pand      %%xmm3,%%xmm2                   \n"
-    "psrlw     $0x8,%%xmm0                     \n"
-    "psrlw     $0x8,%%xmm1                     \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "por       %%xmm2,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x4,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),    // %0
-    "+r"(dst_argb),    // %1
-    "+r"(width)        // %2
-  : "m"(kShuffleAlpha0),  // %3
-    "m"(kShuffleAlpha1)  // %4
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+      // 4 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "pshufb    %%xmm4,%%xmm0                   \n"
+      "movdqu    (%0),%%xmm1                     \n"
+      "punpcklbw %%xmm1,%%xmm1                   \n"
+      "pmulhuw   %%xmm1,%%xmm0                   \n"
+      "movdqu    (%0),%%xmm1                     \n"
+      "pshufb    %%xmm5,%%xmm1                   \n"
+      "movdqu    (%0),%%xmm2                     \n"
+      "punpckhbw %%xmm2,%%xmm2                   \n"
+      "pmulhuw   %%xmm2,%%xmm1                   \n"
+      "movdqu    (%0),%%xmm2                     \n"
+      "lea       0x10(%0),%0                     \n"
+      "pand      %%xmm3,%%xmm2                   \n"
+      "psrlw     $0x8,%%xmm0                     \n"
+      "psrlw     $0x8,%%xmm1                     \n"
+      "packuswb  %%xmm1,%%xmm0                   \n"
+      "por       %%xmm2,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x4,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),       // %0
+        "+r"(dst_argb),       // %1
+        "+r"(width)           // %2
+      : "m"(kShuffleAlpha0),  // %3
+        "m"(kShuffleAlpha1)   // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 #endif  // HAS_ARGBATTENUATEROW_SSSE3
 
 #ifdef HAS_ARGBATTENUATEROW_AVX2
 // Shuffle table duplicating alpha.
-static const uvec8 kShuffleAlpha_AVX2 = {
-  6u, 7u, 6u, 7u, 6u, 7u, 128u, 128u, 14u, 15u, 14u, 15u, 14u, 15u, 128u, 128u
-};
+static const uvec8 kShuffleAlpha_AVX2 = {6u,   7u,   6u,   7u,  6u,  7u,
+                                         128u, 128u, 14u,  15u, 14u, 15u,
+                                         14u,  15u,  128u, 128u};
 // Attenuate 8 pixels at a time.
-void ARGBAttenuateRow_AVX2(const uint8* src_argb, uint8* dst_argb, int width) {
-  asm volatile (
-    "vbroadcastf128 %3,%%ymm4                  \n"
-    "vpcmpeqb   %%ymm5,%%ymm5,%%ymm5           \n"
-    "vpslld     $0x18,%%ymm5,%%ymm5            \n"
-    "sub        %0,%1                          \n"
+void ARGBAttenuateRow_AVX2(const uint8_t* src_argb,
+                           uint8_t* dst_argb,
+                           int width) {
+  asm volatile(
+      "vbroadcastf128 %3,%%ymm4                  \n"
+      "vpcmpeqb   %%ymm5,%%ymm5,%%ymm5           \n"
+      "vpslld     $0x18,%%ymm5,%%ymm5            \n"
+      "sub        %0,%1                          \n"
 
-    // 8 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    " MEMACCESS(0) ",%%ymm6        \n"
-    "vpunpcklbw %%ymm6,%%ymm6,%%ymm0           \n"
-    "vpunpckhbw %%ymm6,%%ymm6,%%ymm1           \n"
-    "vpshufb    %%ymm4,%%ymm0,%%ymm2           \n"
-    "vpshufb    %%ymm4,%%ymm1,%%ymm3           \n"
-    "vpmulhuw   %%ymm2,%%ymm0,%%ymm0           \n"
-    "vpmulhuw   %%ymm3,%%ymm1,%%ymm1           \n"
-    "vpand      %%ymm5,%%ymm6,%%ymm6           \n"
-    "vpsrlw     $0x8,%%ymm0,%%ymm0             \n"
-    "vpsrlw     $0x8,%%ymm1,%%ymm1             \n"
-    "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
-    "vpor       %%ymm6,%%ymm0,%%ymm0           \n"
-    MEMOPMEM(vmovdqu,ymm0,0x00,0,1,1)          //  vmovdqu %%ymm0,(%0,%1)
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "sub        $0x8,%2                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_argb),    // %0
-    "+r"(dst_argb),    // %1
-    "+r"(width)        // %2
-  : "m"(kShuffleAlpha_AVX2)  // %3
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6"
-  );
+      // 8 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm6                    \n"
+      "vpunpcklbw %%ymm6,%%ymm6,%%ymm0           \n"
+      "vpunpckhbw %%ymm6,%%ymm6,%%ymm1           \n"
+      "vpshufb    %%ymm4,%%ymm0,%%ymm2           \n"
+      "vpshufb    %%ymm4,%%ymm1,%%ymm3           \n"
+      "vpmulhuw   %%ymm2,%%ymm0,%%ymm0           \n"
+      "vpmulhuw   %%ymm3,%%ymm1,%%ymm1           \n"
+      "vpand      %%ymm5,%%ymm6,%%ymm6           \n"
+      "vpsrlw     $0x8,%%ymm0,%%ymm0             \n"
+      "vpsrlw     $0x8,%%ymm1,%%ymm1             \n"
+      "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
+      "vpor       %%ymm6,%%ymm0,%%ymm0           \n"
+      "vmovdqu    %%ymm0,0x00(%0,%1,1)           \n"
+      "lea       0x20(%0),%0                     \n"
+      "sub        $0x8,%2                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_argb),          // %0
+        "+r"(dst_argb),          // %1
+        "+r"(width)              // %2
+      : "m"(kShuffleAlpha_AVX2)  // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6");
 }
 #endif  // HAS_ARGBATTENUATEROW_AVX2
 
 #ifdef HAS_ARGBUNATTENUATEROW_SSE2
 // Unattenuate 4 pixels at a time.
-void ARGBUnattenuateRow_SSE2(const uint8* src_argb, uint8* dst_argb,
+void ARGBUnattenuateRow_SSE2(const uint8_t* src_argb,
+                             uint8_t* dst_argb,
                              int width) {
   uintptr_t alpha;
-  asm volatile (
-    // 4 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movzb     " MEMACCESS2(0x03,0) ",%3       \n"
-    "punpcklbw %%xmm0,%%xmm0                   \n"
-    MEMOPREG(movd,0x00,4,3,4,xmm2)             //  movd      0x0(%4,%3,4),%%xmm2
-    "movzb     " MEMACCESS2(0x07,0) ",%3       \n"
-    MEMOPREG(movd,0x00,4,3,4,xmm3)             //  movd      0x0(%4,%3,4),%%xmm3
-    "pshuflw   $0x40,%%xmm2,%%xmm2             \n"
-    "pshuflw   $0x40,%%xmm3,%%xmm3             \n"
-    "movlhps   %%xmm3,%%xmm2                   \n"
-    "pmulhuw   %%xmm2,%%xmm0                   \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm1         \n"
-    "movzb     " MEMACCESS2(0x0b,0) ",%3       \n"
-    "punpckhbw %%xmm1,%%xmm1                   \n"
-    MEMOPREG(movd,0x00,4,3,4,xmm2)             //  movd      0x0(%4,%3,4),%%xmm2
-    "movzb     " MEMACCESS2(0x0f,0) ",%3       \n"
-    MEMOPREG(movd,0x00,4,3,4,xmm3)             //  movd      0x0(%4,%3,4),%%xmm3
-    "pshuflw   $0x40,%%xmm2,%%xmm2             \n"
-    "pshuflw   $0x40,%%xmm3,%%xmm3             \n"
-    "movlhps   %%xmm3,%%xmm2                   \n"
-    "pmulhuw   %%xmm2,%%xmm1                   \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x4,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),     // %0
-    "+r"(dst_argb),     // %1
-    "+r"(width),        // %2
-    "=&r"(alpha)        // %3
-  : "r"(fixed_invtbl8)  // %4
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+  asm volatile(
+      // 4 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movzb     0x03(%0),%3                     \n"
+      "punpcklbw %%xmm0,%%xmm0                   \n"
+      "movd      0x00(%4,%3,4),%%xmm2            \n"
+      "movzb     0x07(%0),%3                     \n"
+      "movd      0x00(%4,%3,4),%%xmm3            \n"
+      "pshuflw   $0x40,%%xmm2,%%xmm2             \n"
+      "pshuflw   $0x40,%%xmm3,%%xmm3             \n"
+      "movlhps   %%xmm3,%%xmm2                   \n"
+      "pmulhuw   %%xmm2,%%xmm0                   \n"
+      "movdqu    (%0),%%xmm1                     \n"
+      "movzb     0x0b(%0),%3                     \n"
+      "punpckhbw %%xmm1,%%xmm1                   \n"
+      "movd      0x00(%4,%3,4),%%xmm2            \n"
+      "movzb     0x0f(%0),%3                     \n"
+      "movd      0x00(%4,%3,4),%%xmm3            \n"
+      "pshuflw   $0x40,%%xmm2,%%xmm2             \n"
+      "pshuflw   $0x40,%%xmm3,%%xmm3             \n"
+      "movlhps   %%xmm3,%%xmm2                   \n"
+      "pmulhuw   %%xmm2,%%xmm1                   \n"
+      "lea       0x10(%0),%0                     \n"
+      "packuswb  %%xmm1,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x4,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),     // %0
+        "+r"(dst_argb),     // %1
+        "+r"(width),        // %2
+        "=&r"(alpha)        // %3
+      : "r"(fixed_invtbl8)  // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 #endif  // HAS_ARGBUNATTENUATEROW_SSE2
 
 #ifdef HAS_ARGBUNATTENUATEROW_AVX2
 // Shuffle table duplicating alpha.
 static const uvec8 kUnattenShuffleAlpha_AVX2 = {
-  0u, 1u, 0u, 1u, 0u, 1u, 6u, 7u, 8u, 9u, 8u, 9u, 8u, 9u, 14u, 15u
-};
+    0u, 1u, 0u, 1u, 0u, 1u, 6u, 7u, 8u, 9u, 8u, 9u, 8u, 9u, 14u, 15u};
 // Unattenuate 8 pixels at a time.
-void ARGBUnattenuateRow_AVX2(const uint8* src_argb, uint8* dst_argb,
+void ARGBUnattenuateRow_AVX2(const uint8_t* src_argb,
+                             uint8_t* dst_argb,
                              int width) {
   uintptr_t alpha;
-  asm volatile (
-    "sub        %0,%1                          \n"
-    "vbroadcastf128 %5,%%ymm5                  \n"
+  asm volatile(
+      "sub        %0,%1                          \n"
+      "vbroadcastf128 %5,%%ymm5                  \n"
 
-    // 8 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    // replace VPGATHER
-    "movzb     " MEMACCESS2(0x03,0) ",%3       \n"
-    MEMOPREG(vmovd,0x00,4,3,4,xmm0)             //  vmovd 0x0(%4,%3,4),%%xmm0
-    "movzb     " MEMACCESS2(0x07,0) ",%3       \n"
-    MEMOPREG(vmovd,0x00,4,3,4,xmm1)             //  vmovd 0x0(%4,%3,4),%%xmm1
-    "movzb     " MEMACCESS2(0x0b,0) ",%3       \n"
-    "vpunpckldq %%xmm1,%%xmm0,%%xmm6           \n"
-    MEMOPREG(vmovd,0x00,4,3,4,xmm2)             //  vmovd 0x0(%4,%3,4),%%xmm2
-    "movzb     " MEMACCESS2(0x0f,0) ",%3       \n"
-    MEMOPREG(vmovd,0x00,4,3,4,xmm3)             //  vmovd 0x0(%4,%3,4),%%xmm3
-    "movzb     " MEMACCESS2(0x13,0) ",%3       \n"
-    "vpunpckldq %%xmm3,%%xmm2,%%xmm7           \n"
-    MEMOPREG(vmovd,0x00,4,3,4,xmm0)             //  vmovd 0x0(%4,%3,4),%%xmm0
-    "movzb     " MEMACCESS2(0x17,0) ",%3       \n"
-    MEMOPREG(vmovd,0x00,4,3,4,xmm1)             //  vmovd 0x0(%4,%3,4),%%xmm1
-    "movzb     " MEMACCESS2(0x1b,0) ",%3       \n"
-    "vpunpckldq %%xmm1,%%xmm0,%%xmm0           \n"
-    MEMOPREG(vmovd,0x00,4,3,4,xmm2)             //  vmovd 0x0(%4,%3,4),%%xmm2
-    "movzb     " MEMACCESS2(0x1f,0) ",%3       \n"
-    MEMOPREG(vmovd,0x00,4,3,4,xmm3)             //  vmovd 0x0(%4,%3,4),%%xmm3
-    "vpunpckldq %%xmm3,%%xmm2,%%xmm2           \n"
-    "vpunpcklqdq %%xmm7,%%xmm6,%%xmm3          \n"
-    "vpunpcklqdq %%xmm2,%%xmm0,%%xmm0          \n"
-    "vinserti128 $0x1,%%xmm0,%%ymm3,%%ymm3     \n"
-    // end of VPGATHER
+      // 8 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      // replace VPGATHER
+      "movzb     0x03(%0),%3                     \n"
+      "vmovd     0x00(%4,%3,4),%%xmm0            \n"
+      "movzb     0x07(%0),%3                     \n"
+      "vmovd     0x00(%4,%3,4),%%xmm1            \n"
+      "movzb     0x0b(%0),%3                     \n"
+      "vpunpckldq %%xmm1,%%xmm0,%%xmm6           \n"
+      "vmovd     0x00(%4,%3,4),%%xmm2            \n"
+      "movzb     0x0f(%0),%3                     \n"
+      "vmovd     0x00(%4,%3,4),%%xmm3            \n"
+      "movzb     0x13(%0),%3                     \n"
+      "vpunpckldq %%xmm3,%%xmm2,%%xmm7           \n"
+      "vmovd     0x00(%4,%3,4),%%xmm0            \n"
+      "movzb     0x17(%0),%3                     \n"
+      "vmovd     0x00(%4,%3,4),%%xmm1            \n"
+      "movzb     0x1b(%0),%3                     \n"
+      "vpunpckldq %%xmm1,%%xmm0,%%xmm0           \n"
+      "vmovd     0x00(%4,%3,4),%%xmm2            \n"
+      "movzb     0x1f(%0),%3                     \n"
+      "vmovd     0x00(%4,%3,4),%%xmm3            \n"
+      "vpunpckldq %%xmm3,%%xmm2,%%xmm2           \n"
+      "vpunpcklqdq %%xmm7,%%xmm6,%%xmm3          \n"
+      "vpunpcklqdq %%xmm2,%%xmm0,%%xmm0          \n"
+      "vinserti128 $0x1,%%xmm0,%%ymm3,%%ymm3     \n"
+      // end of VPGATHER
 
-    "vmovdqu    " MEMACCESS(0) ",%%ymm6        \n"
-    "vpunpcklbw %%ymm6,%%ymm6,%%ymm0           \n"
-    "vpunpckhbw %%ymm6,%%ymm6,%%ymm1           \n"
-    "vpunpcklwd %%ymm3,%%ymm3,%%ymm2           \n"
-    "vpunpckhwd %%ymm3,%%ymm3,%%ymm3           \n"
-    "vpshufb    %%ymm5,%%ymm2,%%ymm2           \n"
-    "vpshufb    %%ymm5,%%ymm3,%%ymm3           \n"
-    "vpmulhuw   %%ymm2,%%ymm0,%%ymm0           \n"
-    "vpmulhuw   %%ymm3,%%ymm1,%%ymm1           \n"
-    "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
-    MEMOPMEM(vmovdqu,ymm0,0x00,0,1,1)          //  vmovdqu %%ymm0,(%0,%1)
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "sub        $0x8,%2                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_argb),      // %0
-    "+r"(dst_argb),      // %1
-    "+r"(width),         // %2
-    "=&r"(alpha)         // %3
-  : "r"(fixed_invtbl8),  // %4
-    "m"(kUnattenShuffleAlpha_AVX2)  // %5
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+      "vmovdqu    (%0),%%ymm6                    \n"
+      "vpunpcklbw %%ymm6,%%ymm6,%%ymm0           \n"
+      "vpunpckhbw %%ymm6,%%ymm6,%%ymm1           \n"
+      "vpunpcklwd %%ymm3,%%ymm3,%%ymm2           \n"
+      "vpunpckhwd %%ymm3,%%ymm3,%%ymm3           \n"
+      "vpshufb    %%ymm5,%%ymm2,%%ymm2           \n"
+      "vpshufb    %%ymm5,%%ymm3,%%ymm3           \n"
+      "vpmulhuw   %%ymm2,%%ymm0,%%ymm0           \n"
+      "vpmulhuw   %%ymm3,%%ymm1,%%ymm1           \n"
+      "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
+      "vmovdqu    %%ymm0,0x00(%0,%1,1)           \n"
+      "lea       0x20(%0),%0                     \n"
+      "sub        $0x8,%2                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_argb),                 // %0
+        "+r"(dst_argb),                 // %1
+        "+r"(width),                    // %2
+        "=&r"(alpha)                    // %3
+      : "r"(fixed_invtbl8),             // %4
+        "m"(kUnattenShuffleAlpha_AVX2)  // %5
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 #endif  // HAS_ARGBUNATTENUATEROW_AVX2
 
 #ifdef HAS_ARGBGRAYROW_SSSE3
 // Convert 8 ARGB pixels (64 bytes) to 8 Gray ARGB pixels
-void ARGBGrayRow_SSSE3(const uint8* src_argb, uint8* dst_argb, int width) {
-  asm volatile (
-    "movdqa    %3,%%xmm4                       \n"
-    "movdqa    %4,%%xmm5                       \n"
+void ARGBGrayRow_SSSE3(const uint8_t* src_argb, uint8_t* dst_argb, int width) {
+  asm volatile(
+      "movdqa    %3,%%xmm4                       \n"
+      "movdqa    %4,%%xmm5                       \n"
 
-    // 8 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "pmaddubsw %%xmm4,%%xmm0                   \n"
-    "pmaddubsw %%xmm4,%%xmm1                   \n"
-    "phaddw    %%xmm1,%%xmm0                   \n"
-    "paddw     %%xmm5,%%xmm0                   \n"
-    "psrlw     $0x7,%%xmm0                     \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm2         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm3   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "psrld     $0x18,%%xmm2                    \n"
-    "psrld     $0x18,%%xmm3                    \n"
-    "packuswb  %%xmm3,%%xmm2                   \n"
-    "packuswb  %%xmm2,%%xmm2                   \n"
-    "movdqa    %%xmm0,%%xmm3                   \n"
-    "punpcklbw %%xmm0,%%xmm0                   \n"
-    "punpcklbw %%xmm2,%%xmm3                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "punpcklwd %%xmm3,%%xmm0                   \n"
-    "punpckhwd %%xmm3,%%xmm1                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "movdqu    %%xmm1," MEMACCESS2(0x10,1) "   \n"
-    "lea       " MEMLEA(0x20,1) ",%1           \n"
-    "sub       $0x8,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),   // %0
-    "+r"(dst_argb),   // %1
-    "+r"(width)       // %2
-  : "m"(kARGBToYJ),   // %3
-    "m"(kAddYJ64)     // %4
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+      // 8 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "pmaddubsw %%xmm4,%%xmm0                   \n"
+      "pmaddubsw %%xmm4,%%xmm1                   \n"
+      "phaddw    %%xmm1,%%xmm0                   \n"
+      "paddw     %%xmm5,%%xmm0                   \n"
+      "psrlw     $0x7,%%xmm0                     \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
+      "movdqu    (%0),%%xmm2                     \n"
+      "movdqu    0x10(%0),%%xmm3                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "psrld     $0x18,%%xmm2                    \n"
+      "psrld     $0x18,%%xmm3                    \n"
+      "packuswb  %%xmm3,%%xmm2                   \n"
+      "packuswb  %%xmm2,%%xmm2                   \n"
+      "movdqa    %%xmm0,%%xmm3                   \n"
+      "punpcklbw %%xmm0,%%xmm0                   \n"
+      "punpcklbw %%xmm2,%%xmm3                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "punpcklwd %%xmm3,%%xmm0                   \n"
+      "punpckhwd %%xmm3,%%xmm1                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "movdqu    %%xmm1,0x10(%1)                 \n"
+      "lea       0x20(%1),%1                     \n"
+      "sub       $0x8,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      : "m"(kARGBToYJ),  // %3
+        "m"(kAddYJ64)    // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 #endif  // HAS_ARGBGRAYROW_SSSE3
 
@@ -3922,412 +4942,415 @@
 //    g = (r * 45 + g * 88 + b * 22) >> 7
 //    r = (r * 50 + g * 98 + b * 24) >> 7
 // Constant for ARGB color to sepia tone
-static vec8 kARGBToSepiaB = {
-  17, 68, 35, 0, 17, 68, 35, 0, 17, 68, 35, 0, 17, 68, 35, 0
-};
+static const vec8 kARGBToSepiaB = {17, 68, 35, 0, 17, 68, 35, 0,
+                                   17, 68, 35, 0, 17, 68, 35, 0};
 
-static vec8 kARGBToSepiaG = {
-  22, 88, 45, 0, 22, 88, 45, 0, 22, 88, 45, 0, 22, 88, 45, 0
-};
+static const vec8 kARGBToSepiaG = {22, 88, 45, 0, 22, 88, 45, 0,
+                                   22, 88, 45, 0, 22, 88, 45, 0};
 
-static vec8 kARGBToSepiaR = {
-  24, 98, 50, 0, 24, 98, 50, 0, 24, 98, 50, 0, 24, 98, 50, 0
-};
+static const vec8 kARGBToSepiaR = {24, 98, 50, 0, 24, 98, 50, 0,
+                                   24, 98, 50, 0, 24, 98, 50, 0};
 
 // Convert 8 ARGB pixels (32 bytes) to 8 Sepia ARGB pixels.
-void ARGBSepiaRow_SSSE3(uint8* dst_argb, int width) {
-  asm volatile (
-    "movdqa    %2,%%xmm2                       \n"
-    "movdqa    %3,%%xmm3                       \n"
-    "movdqa    %4,%%xmm4                       \n"
+void ARGBSepiaRow_SSSE3(uint8_t* dst_argb, int width) {
+  asm volatile(
+      "movdqa    %2,%%xmm2                       \n"
+      "movdqa    %3,%%xmm3                       \n"
+      "movdqa    %4,%%xmm4                       \n"
 
-    // 8 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm6   \n"
-    "pmaddubsw %%xmm2,%%xmm0                   \n"
-    "pmaddubsw %%xmm2,%%xmm6                   \n"
-    "phaddw    %%xmm6,%%xmm0                   \n"
-    "psrlw     $0x7,%%xmm0                     \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm5         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "pmaddubsw %%xmm3,%%xmm5                   \n"
-    "pmaddubsw %%xmm3,%%xmm1                   \n"
-    "phaddw    %%xmm1,%%xmm5                   \n"
-    "psrlw     $0x7,%%xmm5                     \n"
-    "packuswb  %%xmm5,%%xmm5                   \n"
-    "punpcklbw %%xmm5,%%xmm0                   \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm5         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "pmaddubsw %%xmm4,%%xmm5                   \n"
-    "pmaddubsw %%xmm4,%%xmm1                   \n"
-    "phaddw    %%xmm1,%%xmm5                   \n"
-    "psrlw     $0x7,%%xmm5                     \n"
-    "packuswb  %%xmm5,%%xmm5                   \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm6         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "psrld     $0x18,%%xmm6                    \n"
-    "psrld     $0x18,%%xmm1                    \n"
-    "packuswb  %%xmm1,%%xmm6                   \n"
-    "packuswb  %%xmm6,%%xmm6                   \n"
-    "punpcklbw %%xmm6,%%xmm5                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "punpcklwd %%xmm5,%%xmm0                   \n"
-    "punpckhwd %%xmm5,%%xmm1                   \n"
-    "movdqu    %%xmm0," MEMACCESS(0) "         \n"
-    "movdqu    %%xmm1," MEMACCESS2(0x10,0) "   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "sub       $0x8,%1                         \n"
-    "jg        1b                              \n"
-  : "+r"(dst_argb),      // %0
-    "+r"(width)          // %1
-  : "m"(kARGBToSepiaB),  // %2
-    "m"(kARGBToSepiaG),  // %3
-    "m"(kARGBToSepiaR)   // %4
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6"
-  );
+      // 8 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm6                 \n"
+      "pmaddubsw %%xmm2,%%xmm0                   \n"
+      "pmaddubsw %%xmm2,%%xmm6                   \n"
+      "phaddw    %%xmm6,%%xmm0                   \n"
+      "psrlw     $0x7,%%xmm0                     \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
+      "movdqu    (%0),%%xmm5                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "pmaddubsw %%xmm3,%%xmm5                   \n"
+      "pmaddubsw %%xmm3,%%xmm1                   \n"
+      "phaddw    %%xmm1,%%xmm5                   \n"
+      "psrlw     $0x7,%%xmm5                     \n"
+      "packuswb  %%xmm5,%%xmm5                   \n"
+      "punpcklbw %%xmm5,%%xmm0                   \n"
+      "movdqu    (%0),%%xmm5                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "pmaddubsw %%xmm4,%%xmm5                   \n"
+      "pmaddubsw %%xmm4,%%xmm1                   \n"
+      "phaddw    %%xmm1,%%xmm5                   \n"
+      "psrlw     $0x7,%%xmm5                     \n"
+      "packuswb  %%xmm5,%%xmm5                   \n"
+      "movdqu    (%0),%%xmm6                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "psrld     $0x18,%%xmm6                    \n"
+      "psrld     $0x18,%%xmm1                    \n"
+      "packuswb  %%xmm1,%%xmm6                   \n"
+      "packuswb  %%xmm6,%%xmm6                   \n"
+      "punpcklbw %%xmm6,%%xmm5                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "punpcklwd %%xmm5,%%xmm0                   \n"
+      "punpckhwd %%xmm5,%%xmm1                   \n"
+      "movdqu    %%xmm0,(%0)                     \n"
+      "movdqu    %%xmm1,0x10(%0)                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "sub       $0x8,%1                         \n"
+      "jg        1b                              \n"
+      : "+r"(dst_argb),      // %0
+        "+r"(width)          // %1
+      : "m"(kARGBToSepiaB),  // %2
+        "m"(kARGBToSepiaG),  // %3
+        "m"(kARGBToSepiaR)   // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6");
 }
 #endif  // HAS_ARGBSEPIAROW_SSSE3
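The SSSE3 sepia kernel above encodes, via `pmaddubsw`/`phaddw`, a weighted sum of the B, G, R bytes per channel followed by a `>> 7`. A scalar sketch of that math (the helper names here are ours for illustration, not libyuv API) looks like this:

```c
#include <assert.h>
#include <stdint.h>

static uint8_t clamp255(int v) { return v > 255 ? 255 : (uint8_t)v; }

// Scalar equivalent of ARGBSepiaRow_SSSE3's arithmetic: each output
// channel is a dot product of (B, G, R) with the kARGBToSepia* weights,
// shifted down by 7. Alpha is carried through unchanged, matching the
// psrld $0x18 / punpcklbw alpha path in the asm.
static void argb_sepia_c(uint8_t* argb, int width) {
  for (int i = 0; i < width; ++i) {
    uint8_t* p = argb + i * 4;  // little-endian ARGB: B, G, R, A
    int b = p[0], g = p[1], r = p[2];
    p[0] = clamp255((b * 17 + g * 68 + r * 35) >> 7);
    p[1] = clamp255((b * 22 + g * 88 + r * 45) >> 7);
    p[2] = clamp255((b * 24 + g * 98 + r * 50) >> 7);
  }
}
```

For a pure-red input pixel (0, 0, 255) this yields (69, 89, 99), the expected warm sepia tint.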
 
 #ifdef HAS_ARGBCOLORMATRIXROW_SSSE3
 // Transform 8 ARGB pixels (32 bytes) with color matrix.
 // Same as Sepia except matrix is provided.
-void ARGBColorMatrixRow_SSSE3(const uint8* src_argb, uint8* dst_argb,
-                              const int8* matrix_argb, int width) {
-  asm volatile (
-    "movdqu    " MEMACCESS(3) ",%%xmm5         \n"
-    "pshufd    $0x00,%%xmm5,%%xmm2             \n"
-    "pshufd    $0x55,%%xmm5,%%xmm3             \n"
-    "pshufd    $0xaa,%%xmm5,%%xmm4             \n"
-    "pshufd    $0xff,%%xmm5,%%xmm5             \n"
+void ARGBColorMatrixRow_SSSE3(const uint8_t* src_argb,
+                              uint8_t* dst_argb,
+                              const int8_t* matrix_argb,
+                              int width) {
+  asm volatile(
+      "movdqu    (%3),%%xmm5                     \n"
+      "pshufd    $0x00,%%xmm5,%%xmm2             \n"
+      "pshufd    $0x55,%%xmm5,%%xmm3             \n"
+      "pshufd    $0xaa,%%xmm5,%%xmm4             \n"
+      "pshufd    $0xff,%%xmm5,%%xmm5             \n"
 
-    // 8 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm7   \n"
-    "pmaddubsw %%xmm2,%%xmm0                   \n"
-    "pmaddubsw %%xmm2,%%xmm7                   \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm6         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "pmaddubsw %%xmm3,%%xmm6                   \n"
-    "pmaddubsw %%xmm3,%%xmm1                   \n"
-    "phaddsw   %%xmm7,%%xmm0                   \n"
-    "phaddsw   %%xmm1,%%xmm6                   \n"
-    "psraw     $0x6,%%xmm0                     \n"
-    "psraw     $0x6,%%xmm6                     \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
-    "packuswb  %%xmm6,%%xmm6                   \n"
-    "punpcklbw %%xmm6,%%xmm0                   \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm1         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm7   \n"
-    "pmaddubsw %%xmm4,%%xmm1                   \n"
-    "pmaddubsw %%xmm4,%%xmm7                   \n"
-    "phaddsw   %%xmm7,%%xmm1                   \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm6         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm7   \n"
-    "pmaddubsw %%xmm5,%%xmm6                   \n"
-    "pmaddubsw %%xmm5,%%xmm7                   \n"
-    "phaddsw   %%xmm7,%%xmm6                   \n"
-    "psraw     $0x6,%%xmm1                     \n"
-    "psraw     $0x6,%%xmm6                     \n"
-    "packuswb  %%xmm1,%%xmm1                   \n"
-    "packuswb  %%xmm6,%%xmm6                   \n"
-    "punpcklbw %%xmm6,%%xmm1                   \n"
-    "movdqa    %%xmm0,%%xmm6                   \n"
-    "punpcklwd %%xmm1,%%xmm0                   \n"
-    "punpckhwd %%xmm1,%%xmm6                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "movdqu    %%xmm6," MEMACCESS2(0x10,1) "   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "lea       " MEMLEA(0x20,1) ",%1           \n"
-    "sub       $0x8,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),      // %0
-    "+r"(dst_argb),      // %1
-    "+r"(width)          // %2
-  : "r"(matrix_argb)     // %3
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+      // 8 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm7                 \n"
+      "pmaddubsw %%xmm2,%%xmm0                   \n"
+      "pmaddubsw %%xmm2,%%xmm7                   \n"
+      "movdqu    (%0),%%xmm6                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "pmaddubsw %%xmm3,%%xmm6                   \n"
+      "pmaddubsw %%xmm3,%%xmm1                   \n"
+      "phaddsw   %%xmm7,%%xmm0                   \n"
+      "phaddsw   %%xmm1,%%xmm6                   \n"
+      "psraw     $0x6,%%xmm0                     \n"
+      "psraw     $0x6,%%xmm6                     \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
+      "packuswb  %%xmm6,%%xmm6                   \n"
+      "punpcklbw %%xmm6,%%xmm0                   \n"
+      "movdqu    (%0),%%xmm1                     \n"
+      "movdqu    0x10(%0),%%xmm7                 \n"
+      "pmaddubsw %%xmm4,%%xmm1                   \n"
+      "pmaddubsw %%xmm4,%%xmm7                   \n"
+      "phaddsw   %%xmm7,%%xmm1                   \n"
+      "movdqu    (%0),%%xmm6                     \n"
+      "movdqu    0x10(%0),%%xmm7                 \n"
+      "pmaddubsw %%xmm5,%%xmm6                   \n"
+      "pmaddubsw %%xmm5,%%xmm7                   \n"
+      "phaddsw   %%xmm7,%%xmm6                   \n"
+      "psraw     $0x6,%%xmm1                     \n"
+      "psraw     $0x6,%%xmm6                     \n"
+      "packuswb  %%xmm1,%%xmm1                   \n"
+      "packuswb  %%xmm6,%%xmm6                   \n"
+      "punpcklbw %%xmm6,%%xmm1                   \n"
+      "movdqa    %%xmm0,%%xmm6                   \n"
+      "punpcklwd %%xmm1,%%xmm0                   \n"
+      "punpckhwd %%xmm1,%%xmm6                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "movdqu    %%xmm6,0x10(%1)                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "lea       0x20(%1),%1                     \n"
+      "sub       $0x8,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),   // %0
+        "+r"(dst_argb),   // %1
+        "+r"(width)       // %2
+      : "r"(matrix_argb)  // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 #endif  // HAS_ARGBCOLORMATRIXROW_SSSE3
 
 #ifdef HAS_ARGBQUANTIZEROW_SSE2
 // Quantize 4 ARGB pixels (16 bytes).
-void ARGBQuantizeRow_SSE2(uint8* dst_argb, int scale, int interval_size,
-                          int interval_offset, int width) {
-  asm volatile (
-    "movd      %2,%%xmm2                       \n"
-    "movd      %3,%%xmm3                       \n"
-    "movd      %4,%%xmm4                       \n"
-    "pshuflw   $0x40,%%xmm2,%%xmm2             \n"
-    "pshufd    $0x44,%%xmm2,%%xmm2             \n"
-    "pshuflw   $0x40,%%xmm3,%%xmm3             \n"
-    "pshufd    $0x44,%%xmm3,%%xmm3             \n"
-    "pshuflw   $0x40,%%xmm4,%%xmm4             \n"
-    "pshufd    $0x44,%%xmm4,%%xmm4             \n"
-    "pxor      %%xmm5,%%xmm5                   \n"
-    "pcmpeqb   %%xmm6,%%xmm6                   \n"
-    "pslld     $0x18,%%xmm6                    \n"
+void ARGBQuantizeRow_SSE2(uint8_t* dst_argb,
+                          int scale,
+                          int interval_size,
+                          int interval_offset,
+                          int width) {
+  asm volatile(
+      "movd      %2,%%xmm2                       \n"
+      "movd      %3,%%xmm3                       \n"
+      "movd      %4,%%xmm4                       \n"
+      "pshuflw   $0x40,%%xmm2,%%xmm2             \n"
+      "pshufd    $0x44,%%xmm2,%%xmm2             \n"
+      "pshuflw   $0x40,%%xmm3,%%xmm3             \n"
+      "pshufd    $0x44,%%xmm3,%%xmm3             \n"
+      "pshuflw   $0x40,%%xmm4,%%xmm4             \n"
+      "pshufd    $0x44,%%xmm4,%%xmm4             \n"
+      "pxor      %%xmm5,%%xmm5                   \n"
+      "pcmpeqb   %%xmm6,%%xmm6                   \n"
+      "pslld     $0x18,%%xmm6                    \n"
 
-    // 4 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "punpcklbw %%xmm5,%%xmm0                   \n"
-    "pmulhuw   %%xmm2,%%xmm0                   \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm1         \n"
-    "punpckhbw %%xmm5,%%xmm1                   \n"
-    "pmulhuw   %%xmm2,%%xmm1                   \n"
-    "pmullw    %%xmm3,%%xmm0                   \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm7         \n"
-    "pmullw    %%xmm3,%%xmm1                   \n"
-    "pand      %%xmm6,%%xmm7                   \n"
-    "paddw     %%xmm4,%%xmm0                   \n"
-    "paddw     %%xmm4,%%xmm1                   \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "por       %%xmm7,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(0) "         \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "sub       $0x4,%1                         \n"
-    "jg        1b                              \n"
-  : "+r"(dst_argb),       // %0
-    "+r"(width)           // %1
-  : "r"(scale),           // %2
-    "r"(interval_size),   // %3
-    "r"(interval_offset)  // %4
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+      // 4 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "punpcklbw %%xmm5,%%xmm0                   \n"
+      "pmulhuw   %%xmm2,%%xmm0                   \n"
+      "movdqu    (%0),%%xmm1                     \n"
+      "punpckhbw %%xmm5,%%xmm1                   \n"
+      "pmulhuw   %%xmm2,%%xmm1                   \n"
+      "pmullw    %%xmm3,%%xmm0                   \n"
+      "movdqu    (%0),%%xmm7                     \n"
+      "pmullw    %%xmm3,%%xmm1                   \n"
+      "pand      %%xmm6,%%xmm7                   \n"
+      "paddw     %%xmm4,%%xmm0                   \n"
+      "paddw     %%xmm4,%%xmm1                   \n"
+      "packuswb  %%xmm1,%%xmm0                   \n"
+      "por       %%xmm7,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%0)                     \n"
+      "lea       0x10(%0),%0                     \n"
+      "sub       $0x4,%1                         \n"
+      "jg        1b                              \n"
+      : "+r"(dst_argb),       // %0
+        "+r"(width)           // %1
+      : "r"(scale),           // %2
+        "r"(interval_size),   // %3
+        "r"(interval_offset)  // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 #endif  // HAS_ARGBQUANTIZEROW_SSE2
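Per channel, the SSE2 quantize kernel computes `((v * scale) >> 16) * interval_size + interval_offset` (`pmulhuw`, `pmullw`, `paddw`), saturating on repack and masking alpha back in. A scalar sketch, with a hypothetical helper name, assuming intermediates fit in 16 bits as the asm requires:

```c
#include <assert.h>
#include <stdint.h>

// Scalar equivalent of ARGBQuantizeRow_SSE2 for one color channel.
// Alpha bytes are preserved in the asm via the pand/por alpha mask
// and are not passed through this function.
static uint8_t quantize_channel(uint8_t v, int scale, int interval_size,
                                int interval_offset) {
  int q = ((v * scale) >> 16) * interval_size + interval_offset;
  return q > 255 ? 255 : (uint8_t)q;  // packuswb saturates
}
```

With `scale = 65536 / interval_size` the shift snaps each value to the bottom of its interval; e.g. `interval_size = 8`, `scale = 8192`, `offset = 4` maps 200 to 204.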
 
 #ifdef HAS_ARGBSHADEROW_SSE2
 // Shade 4 pixels at a time by specified value.
-void ARGBShadeRow_SSE2(const uint8* src_argb, uint8* dst_argb, int width,
-                       uint32 value) {
-  asm volatile (
-    "movd      %3,%%xmm2                       \n"
-    "punpcklbw %%xmm2,%%xmm2                   \n"
-    "punpcklqdq %%xmm2,%%xmm2                  \n"
+void ARGBShadeRow_SSE2(const uint8_t* src_argb,
+                       uint8_t* dst_argb,
+                       int width,
+                       uint32_t value) {
+  asm volatile(
+      "movd      %3,%%xmm2                       \n"
+      "punpcklbw %%xmm2,%%xmm2                   \n"
+      "punpcklqdq %%xmm2,%%xmm2                  \n"
 
-    // 4 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "punpcklbw %%xmm0,%%xmm0                   \n"
-    "punpckhbw %%xmm1,%%xmm1                   \n"
-    "pmulhuw   %%xmm2,%%xmm0                   \n"
-    "pmulhuw   %%xmm2,%%xmm1                   \n"
-    "psrlw     $0x8,%%xmm0                     \n"
-    "psrlw     $0x8,%%xmm1                     \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x4,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_argb),  // %1
-    "+r"(width)      // %2
-  : "r"(value)       // %3
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2"
-  );
+      // 4 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "lea       0x10(%0),%0                     \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "punpcklbw %%xmm0,%%xmm0                   \n"
+      "punpckhbw %%xmm1,%%xmm1                   \n"
+      "pmulhuw   %%xmm2,%%xmm0                   \n"
+      "pmulhuw   %%xmm2,%%xmm1                   \n"
+      "psrlw     $0x8,%%xmm0                     \n"
+      "psrlw     $0x8,%%xmm1                     \n"
+      "packuswb  %%xmm1,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x4,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      : "r"(value)       // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2");
 }
 #endif  // HAS_ARGBSHADEROW_SSE2
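The shade kernel widens each byte by duplication (`punpcklbw x,x` turns `b` into the 16-bit value `b * 257`), does the same to the matching byte of `value`, then takes the high 16 bits of the product and shifts down 8 more. A scalar sketch of one channel (helper name is ours, not libyuv's):

```c
#include <assert.h>
#include <stdint.h>

// Scalar equivalent of ARGBShadeRow_SSE2's per-byte math:
// result = (b*257 * v*257) >> 24, a close approximation of b*v/255
// that maps 255*255 to 255 exactly.
static uint8_t shade_channel(uint8_t b, uint8_t v) {
  uint16_t bw = (uint16_t)(b * 257);  // punpcklbw b,b
  uint16_t vw = (uint16_t)(v * 257);  // value byte duplicated the same way
  return (uint8_t)((((uint32_t)bw * vw) >> 16) >> 8);  // pmulhuw + psrlw $8
}
```

Shading by 255 is the identity and shading by 128 halves the channel, which is why byte duplication is used instead of zero extension: it makes full-scale `value` lossless.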
 
 #ifdef HAS_ARGBMULTIPLYROW_SSE2
 // Multiply 2 rows of ARGB pixels together, 4 pixels at a time.
-void ARGBMultiplyRow_SSE2(const uint8* src_argb0, const uint8* src_argb1,
-                          uint8* dst_argb, int width) {
-  asm volatile (
-    "pxor      %%xmm5,%%xmm5                  \n"
+void ARGBMultiplyRow_SSE2(const uint8_t* src_argb0,
+                          const uint8_t* src_argb1,
+                          uint8_t* dst_argb,
+                          int width) {
+  asm volatile(
 
-    // 4 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "movdqu    " MEMACCESS(1) ",%%xmm2         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "movdqu    %%xmm0,%%xmm1                   \n"
-    "movdqu    %%xmm2,%%xmm3                   \n"
-    "punpcklbw %%xmm0,%%xmm0                   \n"
-    "punpckhbw %%xmm1,%%xmm1                   \n"
-    "punpcklbw %%xmm5,%%xmm2                   \n"
-    "punpckhbw %%xmm5,%%xmm3                   \n"
-    "pmulhuw   %%xmm2,%%xmm0                   \n"
-    "pmulhuw   %%xmm3,%%xmm1                   \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(2) "         \n"
-    "lea       " MEMLEA(0x10,2) ",%2           \n"
-    "sub       $0x4,%3                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb0),  // %0
-    "+r"(src_argb1),  // %1
-    "+r"(dst_argb),   // %2
-    "+r"(width)       // %3
-  :
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm5"
-  );
+      "pxor      %%xmm5,%%xmm5                   \n"
+
+      // 4 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "lea       0x10(%0),%0                     \n"
+      "movdqu    (%1),%%xmm2                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "movdqu    %%xmm0,%%xmm1                   \n"
+      "movdqu    %%xmm2,%%xmm3                   \n"
+      "punpcklbw %%xmm0,%%xmm0                   \n"
+      "punpckhbw %%xmm1,%%xmm1                   \n"
+      "punpcklbw %%xmm5,%%xmm2                   \n"
+      "punpckhbw %%xmm5,%%xmm3                   \n"
+      "pmulhuw   %%xmm2,%%xmm0                   \n"
+      "pmulhuw   %%xmm3,%%xmm1                   \n"
+      "packuswb  %%xmm1,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%2)                     \n"
+      "lea       0x10(%2),%2                     \n"
+      "sub       $0x4,%3                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb0),  // %0
+        "+r"(src_argb1),  // %1
+        "+r"(dst_argb),   // %2
+        "+r"(width)       // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5");
 }
 #endif  // HAS_ARGBMULTIPLYROW_SSE2
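The multiply kernel widens `src_argb0` bytes by duplication (`a * 257`) and zero-extends `src_argb1` bytes, so `pmulhuw` yields `(a*257*b) >> 16`, an approximation of `a*b/255` that can be off by one at the top of the range. A scalar sketch (helper name is ours):

```c
#include <assert.h>
#include <stdint.h>

// Scalar equivalent of ARGBMultiplyRow_SSE2's per-byte math. Note the
// approximation: 255*255 yields 254, not 255, matching the asm exactly.
static uint8_t mul_approx8(uint8_t a, uint8_t b) {
  uint16_t aw = (uint16_t)(a * 257);           // punpcklbw a,a
  return (uint8_t)(((uint32_t)aw * b) >> 16);  // punpcklbw zero,b; pmulhuw
}
```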
 
 #ifdef HAS_ARGBMULTIPLYROW_AVX2
 // Multiply 2 rows of ARGB pixels together, 8 pixels at a time.
-void ARGBMultiplyRow_AVX2(const uint8* src_argb0, const uint8* src_argb1,
-                          uint8* dst_argb, int width) {
-  asm volatile (
-    "vpxor      %%ymm5,%%ymm5,%%ymm5           \n"
+void ARGBMultiplyRow_AVX2(const uint8_t* src_argb0,
+                          const uint8_t* src_argb1,
+                          uint8_t* dst_argb,
+                          int width) {
+  asm volatile(
 
-    // 4 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    " MEMACCESS(0) ",%%ymm1        \n"
-    "lea        " MEMLEA(0x20,0) ",%0          \n"
-    "vmovdqu    " MEMACCESS(1) ",%%ymm3        \n"
-    "lea        " MEMLEA(0x20,1) ",%1          \n"
-    "vpunpcklbw %%ymm1,%%ymm1,%%ymm0           \n"
-    "vpunpckhbw %%ymm1,%%ymm1,%%ymm1           \n"
-    "vpunpcklbw %%ymm5,%%ymm3,%%ymm2           \n"
-    "vpunpckhbw %%ymm5,%%ymm3,%%ymm3           \n"
-    "vpmulhuw   %%ymm2,%%ymm0,%%ymm0           \n"
-    "vpmulhuw   %%ymm3,%%ymm1,%%ymm1           \n"
-    "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
-    "vmovdqu    %%ymm0," MEMACCESS(2) "        \n"
-    "lea       " MEMLEA(0x20,2) ",%2           \n"
-    "sub        $0x8,%3                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_argb0),  // %0
-    "+r"(src_argb1),  // %1
-    "+r"(dst_argb),   // %2
-    "+r"(width)       // %3
-  :
-  : "memory", "cc"
+      "vpxor      %%ymm5,%%ymm5,%%ymm5           \n"
+
+      // 8 pixel loop.

+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm1                    \n"
+      "lea        0x20(%0),%0                    \n"
+      "vmovdqu    (%1),%%ymm3                    \n"
+      "lea        0x20(%1),%1                    \n"
+      "vpunpcklbw %%ymm1,%%ymm1,%%ymm0           \n"
+      "vpunpckhbw %%ymm1,%%ymm1,%%ymm1           \n"
+      "vpunpcklbw %%ymm5,%%ymm3,%%ymm2           \n"
+      "vpunpckhbw %%ymm5,%%ymm3,%%ymm3           \n"
+      "vpmulhuw   %%ymm2,%%ymm0,%%ymm0           \n"
+      "vpmulhuw   %%ymm3,%%ymm1,%%ymm1           \n"
+      "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
+      "vmovdqu    %%ymm0,(%2)                    \n"
+      "lea       0x20(%2),%2                     \n"
+      "sub        $0x8,%3                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_argb0),  // %0
+        "+r"(src_argb1),  // %1
+        "+r"(dst_argb),   // %2
+        "+r"(width)       // %3
+      :
+      : "memory", "cc"
 #if defined(__AVX2__)
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm5"
+        ,
+        "xmm0", "xmm1", "xmm2", "xmm3", "xmm5"
 #endif
-  );
+      );
 }
 #endif  // HAS_ARGBMULTIPLYROW_AVX2
 
 #ifdef HAS_ARGBADDROW_SSE2
 // Add 2 rows of ARGB pixels together, 4 pixels at a time.
-void ARGBAddRow_SSE2(const uint8* src_argb0, const uint8* src_argb1,
-                     uint8* dst_argb, int width) {
-  asm volatile (
-    // 4 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "movdqu    " MEMACCESS(1) ",%%xmm1         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "paddusb   %%xmm1,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(2) "         \n"
-    "lea       " MEMLEA(0x10,2) ",%2           \n"
-    "sub       $0x4,%3                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb0),  // %0
-    "+r"(src_argb1),  // %1
-    "+r"(dst_argb),   // %2
-    "+r"(width)       // %3
-  :
-  : "memory", "cc"
-    , "xmm0", "xmm1"
-  );
+void ARGBAddRow_SSE2(const uint8_t* src_argb0,
+                     const uint8_t* src_argb1,
+                     uint8_t* dst_argb,
+                     int width) {
+  asm volatile(
+      // 4 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "lea       0x10(%0),%0                     \n"
+      "movdqu    (%1),%%xmm1                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "paddusb   %%xmm1,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%2)                     \n"
+      "lea       0x10(%2),%2                     \n"
+      "sub       $0x4,%3                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb0),  // %0
+        "+r"(src_argb1),  // %1
+        "+r"(dst_argb),   // %2
+        "+r"(width)       // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1");
 }
 #endif  // HAS_ARGBADDROW_SSE2
 
 #ifdef HAS_ARGBADDROW_AVX2
 // Add 2 rows of ARGB pixels together, 8 pixels at a time.
-void ARGBAddRow_AVX2(const uint8* src_argb0, const uint8* src_argb1,
-                     uint8* dst_argb, int width) {
-  asm volatile (
-    // 4 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    " MEMACCESS(0) ",%%ymm0        \n"
-    "lea        " MEMLEA(0x20,0) ",%0          \n"
-    "vpaddusb   " MEMACCESS(1) ",%%ymm0,%%ymm0 \n"
-    "lea        " MEMLEA(0x20,1) ",%1          \n"
-    "vmovdqu    %%ymm0," MEMACCESS(2) "        \n"
-    "lea        " MEMLEA(0x20,2) ",%2          \n"
-    "sub        $0x8,%3                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_argb0),  // %0
-    "+r"(src_argb1),  // %1
-    "+r"(dst_argb),   // %2
-    "+r"(width)       // %3
-  :
-  : "memory", "cc"
-    , "xmm0"
-  );
+void ARGBAddRow_AVX2(const uint8_t* src_argb0,
+                     const uint8_t* src_argb1,
+                     uint8_t* dst_argb,
+                     int width) {
+  asm volatile(
+      // 8 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "lea        0x20(%0),%0                    \n"
+      "vpaddusb   (%1),%%ymm0,%%ymm0             \n"
+      "lea        0x20(%1),%1                    \n"
+      "vmovdqu    %%ymm0,(%2)                    \n"
+      "lea        0x20(%2),%2                    \n"
+      "sub        $0x8,%3                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_argb0),  // %0
+        "+r"(src_argb1),  // %1
+        "+r"(dst_argb),   // %2
+        "+r"(width)       // %3
+      :
+      : "memory", "cc", "xmm0");
 }
 #endif  // HAS_ARGBADDROW_AVX2
 
 #ifdef HAS_ARGBSUBTRACTROW_SSE2
 // Subtract 2 rows of ARGB pixels, 4 pixels at a time.
-void ARGBSubtractRow_SSE2(const uint8* src_argb0, const uint8* src_argb1,
-                          uint8* dst_argb, int width) {
-  asm volatile (
-    // 4 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "movdqu    " MEMACCESS(1) ",%%xmm1         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "psubusb   %%xmm1,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(2) "         \n"
-    "lea       " MEMLEA(0x10,2) ",%2           \n"
-    "sub       $0x4,%3                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb0),  // %0
-    "+r"(src_argb1),  // %1
-    "+r"(dst_argb),   // %2
-    "+r"(width)       // %3
-  :
-  : "memory", "cc"
-    , "xmm0", "xmm1"
-  );
+void ARGBSubtractRow_SSE2(const uint8_t* src_argb0,
+                          const uint8_t* src_argb1,
+                          uint8_t* dst_argb,
+                          int width) {
+  asm volatile(
+      // 4 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "lea       0x10(%0),%0                     \n"
+      "movdqu    (%1),%%xmm1                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "psubusb   %%xmm1,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%2)                     \n"
+      "lea       0x10(%2),%2                     \n"
+      "sub       $0x4,%3                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb0),  // %0
+        "+r"(src_argb1),  // %1
+        "+r"(dst_argb),   // %2
+        "+r"(width)       // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1");
 }
 #endif  // HAS_ARGBSUBTRACTROW_SSE2
 
 #ifdef HAS_ARGBSUBTRACTROW_AVX2
 // Subtract 2 rows of ARGB pixels, 8 pixels at a time.
-void ARGBSubtractRow_AVX2(const uint8* src_argb0, const uint8* src_argb1,
-                          uint8* dst_argb, int width) {
-  asm volatile (
-    // 4 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    " MEMACCESS(0) ",%%ymm0        \n"
-    "lea        " MEMLEA(0x20,0) ",%0          \n"
-    "vpsubusb   " MEMACCESS(1) ",%%ymm0,%%ymm0 \n"
-    "lea        " MEMLEA(0x20,1) ",%1          \n"
-    "vmovdqu    %%ymm0," MEMACCESS(2) "        \n"
-    "lea        " MEMLEA(0x20,2) ",%2          \n"
-    "sub        $0x8,%3                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_argb0),  // %0
-    "+r"(src_argb1),  // %1
-    "+r"(dst_argb),   // %2
-    "+r"(width)       // %3
-  :
-  : "memory", "cc"
-    , "xmm0"
-  );
+void ARGBSubtractRow_AVX2(const uint8_t* src_argb0,
+                          const uint8_t* src_argb1,
+                          uint8_t* dst_argb,
+                          int width) {
+  asm volatile(
+      // 8 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "lea        0x20(%0),%0                    \n"
+      "vpsubusb   (%1),%%ymm0,%%ymm0             \n"
+      "lea        0x20(%1),%1                    \n"
+      "vmovdqu    %%ymm0,(%2)                    \n"
+      "lea        0x20(%2),%2                    \n"
+      "sub        $0x8,%3                        \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+      : "+r"(src_argb0),  // %0
+        "+r"(src_argb1),  // %1
+        "+r"(dst_argb),   // %2
+        "+r"(width)       // %3
+      :
+      : "memory", "cc", "xmm0");
 }
 #endif  // HAS_ARGBSUBTRACTROW_AVX2
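The add and subtract rows above reduce to the semantics of `paddusb`/`psubusb`: per-byte unsigned arithmetic that saturates at 0 and 255 instead of wrapping. A scalar sketch of those two instructions (helper names are ours for illustration):

```c
#include <assert.h>
#include <stdint.h>

// Scalar semantics of paddusb as used by ARGBAddRow_SSE2/AVX2.
static uint8_t addus8(uint8_t a, uint8_t b) {
  int s = a + b;
  return s > 255 ? 255 : (uint8_t)s;
}

// Scalar semantics of psubusb as used by ARGBSubtractRow_SSE2/AVX2.
static uint8_t subus8(uint8_t a, uint8_t b) {
  int d = a - b;
  return d < 0 ? 0 : (uint8_t)d;
}
```

Saturation is what makes these usable for image compositing: overflowing channels clamp to full white rather than wrapping to near-black.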
 
@@ -4336,52 +5359,53 @@
 // -1  0  1
 // -2  0  2
 // -1  0  1
-void SobelXRow_SSE2(const uint8* src_y0, const uint8* src_y1,
-                    const uint8* src_y2, uint8* dst_sobelx, int width) {
-  asm volatile (
-    "sub       %0,%1                           \n"
-    "sub       %0,%2                           \n"
-    "sub       %0,%3                           \n"
-    "pxor      %%xmm5,%%xmm5                   \n"
+void SobelXRow_SSE2(const uint8_t* src_y0,
+                    const uint8_t* src_y1,
+                    const uint8_t* src_y2,
+                    uint8_t* dst_sobelx,
+                    int width) {
+  asm volatile(
+      "sub       %0,%1                           \n"
+      "sub       %0,%2                           \n"
+      "sub       %0,%3                           \n"
+      "pxor      %%xmm5,%%xmm5                   \n"
 
-    // 8 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movq      " MEMACCESS(0) ",%%xmm0         \n"
-    "movq      " MEMACCESS2(0x2,0) ",%%xmm1    \n"
-    "punpcklbw %%xmm5,%%xmm0                   \n"
-    "punpcklbw %%xmm5,%%xmm1                   \n"
-    "psubw     %%xmm1,%%xmm0                   \n"
-    MEMOPREG(movq,0x00,0,1,1,xmm1)             //  movq      (%0,%1,1),%%xmm1
-    MEMOPREG(movq,0x02,0,1,1,xmm2)             //  movq      0x2(%0,%1,1),%%xmm2
-    "punpcklbw %%xmm5,%%xmm1                   \n"
-    "punpcklbw %%xmm5,%%xmm2                   \n"
-    "psubw     %%xmm2,%%xmm1                   \n"
-    MEMOPREG(movq,0x00,0,2,1,xmm2)             //  movq      (%0,%2,1),%%xmm2
-    MEMOPREG(movq,0x02,0,2,1,xmm3)             //  movq      0x2(%0,%2,1),%%xmm3
-    "punpcklbw %%xmm5,%%xmm2                   \n"
-    "punpcklbw %%xmm5,%%xmm3                   \n"
-    "psubw     %%xmm3,%%xmm2                   \n"
-    "paddw     %%xmm2,%%xmm0                   \n"
-    "paddw     %%xmm1,%%xmm0                   \n"
-    "paddw     %%xmm1,%%xmm0                   \n"
-    "pxor      %%xmm1,%%xmm1                   \n"
-    "psubw     %%xmm0,%%xmm1                   \n"
-    "pmaxsw    %%xmm1,%%xmm0                   \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
-    MEMOPMEM(movq,xmm0,0x00,0,3,1)             //  movq      %%xmm0,(%0,%3,1)
-    "lea       " MEMLEA(0x8,0) ",%0            \n"
-    "sub       $0x8,%4                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_y0),      // %0
-    "+r"(src_y1),      // %1
-    "+r"(src_y2),      // %2
-    "+r"(dst_sobelx),  // %3
-    "+r"(width)        // %4
-  :
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm5"
-  );
+      // 8 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movq      (%0),%%xmm0                     \n"
+      "movq      0x2(%0),%%xmm1                  \n"
+      "punpcklbw %%xmm5,%%xmm0                   \n"
+      "punpcklbw %%xmm5,%%xmm1                   \n"
+      "psubw     %%xmm1,%%xmm0                   \n"
+      "movq      0x00(%0,%1,1),%%xmm1            \n"
+      "movq      0x02(%0,%1,1),%%xmm2            \n"
+      "punpcklbw %%xmm5,%%xmm1                   \n"
+      "punpcklbw %%xmm5,%%xmm2                   \n"
+      "psubw     %%xmm2,%%xmm1                   \n"
+      "movq      0x00(%0,%2,1),%%xmm2            \n"
+      "movq      0x02(%0,%2,1),%%xmm3            \n"
+      "punpcklbw %%xmm5,%%xmm2                   \n"
+      "punpcklbw %%xmm5,%%xmm3                   \n"
+      "psubw     %%xmm3,%%xmm2                   \n"
+      "paddw     %%xmm2,%%xmm0                   \n"
+      "paddw     %%xmm1,%%xmm0                   \n"
+      "paddw     %%xmm1,%%xmm0                   \n"
+      "pxor      %%xmm1,%%xmm1                   \n"
+      "psubw     %%xmm0,%%xmm1                   \n"
+      "pmaxsw    %%xmm1,%%xmm0                   \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
+      "movq      %%xmm0,0x00(%0,%3,1)            \n"
+      "lea       0x8(%0),%0                      \n"
+      "sub       $0x8,%4                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_y0),      // %0
+        "+r"(src_y1),      // %1
+        "+r"(src_y2),      // %2
+        "+r"(dst_sobelx),  // %3
+        "+r"(width)        // %4
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5");
 }
 #endif  // HAS_SOBELXROW_SSE2
 
@@ -4390,50 +5414,50 @@
 // -1 -2 -1
 //  0  0  0
 //  1  2  1
-void SobelYRow_SSE2(const uint8* src_y0, const uint8* src_y1,
-                    uint8* dst_sobely, int width) {
-  asm volatile (
-    "sub       %0,%1                           \n"
-    "sub       %0,%2                           \n"
-    "pxor      %%xmm5,%%xmm5                   \n"
+void SobelYRow_SSE2(const uint8_t* src_y0,
+                    const uint8_t* src_y1,
+                    uint8_t* dst_sobely,
+                    int width) {
+  asm volatile(
+      "sub       %0,%1                           \n"
+      "sub       %0,%2                           \n"
+      "pxor      %%xmm5,%%xmm5                   \n"
 
-    // 8 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movq      " MEMACCESS(0) ",%%xmm0         \n"
-    MEMOPREG(movq,0x00,0,1,1,xmm1)             //  movq      (%0,%1,1),%%xmm1
-    "punpcklbw %%xmm5,%%xmm0                   \n"
-    "punpcklbw %%xmm5,%%xmm1                   \n"
-    "psubw     %%xmm1,%%xmm0                   \n"
-    "movq      " MEMACCESS2(0x1,0) ",%%xmm1    \n"
-    MEMOPREG(movq,0x01,0,1,1,xmm2)             //  movq      0x1(%0,%1,1),%%xmm2
-    "punpcklbw %%xmm5,%%xmm1                   \n"
-    "punpcklbw %%xmm5,%%xmm2                   \n"
-    "psubw     %%xmm2,%%xmm1                   \n"
-    "movq      " MEMACCESS2(0x2,0) ",%%xmm2    \n"
-    MEMOPREG(movq,0x02,0,1,1,xmm3)             //  movq      0x2(%0,%1,1),%%xmm3
-    "punpcklbw %%xmm5,%%xmm2                   \n"
-    "punpcklbw %%xmm5,%%xmm3                   \n"
-    "psubw     %%xmm3,%%xmm2                   \n"
-    "paddw     %%xmm2,%%xmm0                   \n"
-    "paddw     %%xmm1,%%xmm0                   \n"
-    "paddw     %%xmm1,%%xmm0                   \n"
-    "pxor      %%xmm1,%%xmm1                   \n"
-    "psubw     %%xmm0,%%xmm1                   \n"
-    "pmaxsw    %%xmm1,%%xmm0                   \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
-    MEMOPMEM(movq,xmm0,0x00,0,2,1)             //  movq      %%xmm0,(%0,%2,1)
-    "lea       " MEMLEA(0x8,0) ",%0            \n"
-    "sub       $0x8,%3                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_y0),      // %0
-    "+r"(src_y1),      // %1
-    "+r"(dst_sobely),  // %2
-    "+r"(width)        // %3
-  :
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm5"
-  );
+      // 8 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movq      (%0),%%xmm0                     \n"
+      "movq      0x00(%0,%1,1),%%xmm1            \n"
+      "punpcklbw %%xmm5,%%xmm0                   \n"
+      "punpcklbw %%xmm5,%%xmm1                   \n"
+      "psubw     %%xmm1,%%xmm0                   \n"
+      "movq      0x1(%0),%%xmm1                  \n"
+      "movq      0x01(%0,%1,1),%%xmm2            \n"
+      "punpcklbw %%xmm5,%%xmm1                   \n"
+      "punpcklbw %%xmm5,%%xmm2                   \n"
+      "psubw     %%xmm2,%%xmm1                   \n"
+      "movq      0x2(%0),%%xmm2                  \n"
+      "movq      0x02(%0,%1,1),%%xmm3            \n"
+      "punpcklbw %%xmm5,%%xmm2                   \n"
+      "punpcklbw %%xmm5,%%xmm3                   \n"
+      "psubw     %%xmm3,%%xmm2                   \n"
+      "paddw     %%xmm2,%%xmm0                   \n"
+      "paddw     %%xmm1,%%xmm0                   \n"
+      "paddw     %%xmm1,%%xmm0                   \n"
+      "pxor      %%xmm1,%%xmm1                   \n"
+      "psubw     %%xmm0,%%xmm1                   \n"
+      "pmaxsw    %%xmm1,%%xmm0                   \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
+      "movq      %%xmm0,0x00(%0,%2,1)            \n"
+      "lea       0x8(%0),%0                      \n"
+      "sub       $0x8,%3                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_y0),      // %0
+        "+r"(src_y1),      // %1
+        "+r"(dst_sobely),  // %2
+        "+r"(width)        // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5");
 }
 #endif  // HAS_SOBELYROW_SSE2
 
@@ -4443,79 +5467,79 @@
 // R = Sobel
 // G = Sobel
 // B = Sobel
-void SobelRow_SSE2(const uint8* src_sobelx, const uint8* src_sobely,
-                   uint8* dst_argb, int width) {
-  asm volatile (
-    "sub       %0,%1                           \n"
-    "pcmpeqb   %%xmm5,%%xmm5                   \n"
-    "pslld     $0x18,%%xmm5                    \n"
+void SobelRow_SSE2(const uint8_t* src_sobelx,
+                   const uint8_t* src_sobely,
+                   uint8_t* dst_argb,
+                   int width) {
+  asm volatile(
+      "sub       %0,%1                           \n"
+      "pcmpeqb   %%xmm5,%%xmm5                   \n"
+      "pslld     $0x18,%%xmm5                    \n"
 
-    // 8 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    MEMOPREG(movdqu,0x00,0,1,1,xmm1)           //  movdqu    (%0,%1,1),%%xmm1
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "paddusb   %%xmm1,%%xmm0                   \n"
-    "movdqa    %%xmm0,%%xmm2                   \n"
-    "punpcklbw %%xmm0,%%xmm2                   \n"
-    "punpckhbw %%xmm0,%%xmm0                   \n"
-    "movdqa    %%xmm2,%%xmm1                   \n"
-    "punpcklwd %%xmm2,%%xmm1                   \n"
-    "punpckhwd %%xmm2,%%xmm2                   \n"
-    "por       %%xmm5,%%xmm1                   \n"
-    "por       %%xmm5,%%xmm2                   \n"
-    "movdqa    %%xmm0,%%xmm3                   \n"
-    "punpcklwd %%xmm0,%%xmm3                   \n"
-    "punpckhwd %%xmm0,%%xmm0                   \n"
-    "por       %%xmm5,%%xmm3                   \n"
-    "por       %%xmm5,%%xmm0                   \n"
-    "movdqu    %%xmm1," MEMACCESS(2) "         \n"
-    "movdqu    %%xmm2," MEMACCESS2(0x10,2) "   \n"
-    "movdqu    %%xmm3," MEMACCESS2(0x20,2) "   \n"
-    "movdqu    %%xmm0," MEMACCESS2(0x30,2) "   \n"
-    "lea       " MEMLEA(0x40,2) ",%2           \n"
-    "sub       $0x10,%3                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_sobelx),  // %0
-    "+r"(src_sobely),  // %1
-    "+r"(dst_argb),    // %2
-    "+r"(width)        // %3
-  :
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm5"
-  );
+      // 8 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x00(%0,%1,1),%%xmm1            \n"
+      "lea       0x10(%0),%0                     \n"
+      "paddusb   %%xmm1,%%xmm0                   \n"
+      "movdqa    %%xmm0,%%xmm2                   \n"
+      "punpcklbw %%xmm0,%%xmm2                   \n"
+      "punpckhbw %%xmm0,%%xmm0                   \n"
+      "movdqa    %%xmm2,%%xmm1                   \n"
+      "punpcklwd %%xmm2,%%xmm1                   \n"
+      "punpckhwd %%xmm2,%%xmm2                   \n"
+      "por       %%xmm5,%%xmm1                   \n"
+      "por       %%xmm5,%%xmm2                   \n"
+      "movdqa    %%xmm0,%%xmm3                   \n"
+      "punpcklwd %%xmm0,%%xmm3                   \n"
+      "punpckhwd %%xmm0,%%xmm0                   \n"
+      "por       %%xmm5,%%xmm3                   \n"
+      "por       %%xmm5,%%xmm0                   \n"
+      "movdqu    %%xmm1,(%2)                     \n"
+      "movdqu    %%xmm2,0x10(%2)                 \n"
+      "movdqu    %%xmm3,0x20(%2)                 \n"
+      "movdqu    %%xmm0,0x30(%2)                 \n"
+      "lea       0x40(%2),%2                     \n"
+      "sub       $0x10,%3                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_sobelx),  // %0
+        "+r"(src_sobely),  // %1
+        "+r"(dst_argb),    // %2
+        "+r"(width)        // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5");
 }
 #endif  // HAS_SOBELROW_SSE2
 
 #ifdef HAS_SOBELTOPLANEROW_SSE2
 // Adds Sobel X and Sobel Y and stores Sobel into a plane.
-void SobelToPlaneRow_SSE2(const uint8* src_sobelx, const uint8* src_sobely,
-                          uint8* dst_y, int width) {
-  asm volatile (
-    "sub       %0,%1                           \n"
-    "pcmpeqb   %%xmm5,%%xmm5                   \n"
-    "pslld     $0x18,%%xmm5                    \n"
+void SobelToPlaneRow_SSE2(const uint8_t* src_sobelx,
+                          const uint8_t* src_sobely,
+                          uint8_t* dst_y,
+                          int width) {
+  asm volatile(
+      "sub       %0,%1                           \n"
+      "pcmpeqb   %%xmm5,%%xmm5                   \n"
+      "pslld     $0x18,%%xmm5                    \n"
 
-    // 8 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    MEMOPREG(movdqu,0x00,0,1,1,xmm1)           //  movdqu    (%0,%1,1),%%xmm1
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "paddusb   %%xmm1,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(2) "         \n"
-    "lea       " MEMLEA(0x10,2) ",%2           \n"
-    "sub       $0x10,%3                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_sobelx),  // %0
-    "+r"(src_sobely),  // %1
-    "+r"(dst_y),       // %2
-    "+r"(width)        // %3
-  :
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1"
-  );
+      // 8 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x00(%0,%1,1),%%xmm1            \n"
+      "lea       0x10(%0),%0                     \n"
+      "paddusb   %%xmm1,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%2)                     \n"
+      "lea       0x10(%2),%2                     \n"
+      "sub       $0x10,%3                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_sobelx),  // %0
+        "+r"(src_sobely),  // %1
+        "+r"(dst_y),       // %2
+        "+r"(width)        // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1");
 }
 #endif  // HAS_SOBELTOPLANEROW_SSE2
 
@@ -4525,1004 +5549,1123 @@
 // R = Sobel X
 // G = Sobel
 // B = Sobel Y
-void SobelXYRow_SSE2(const uint8* src_sobelx, const uint8* src_sobely,
-                     uint8* dst_argb, int width) {
-  asm volatile (
-    "sub       %0,%1                           \n"
-    "pcmpeqb   %%xmm5,%%xmm5                   \n"
+void SobelXYRow_SSE2(const uint8_t* src_sobelx,
+                     const uint8_t* src_sobely,
+                     uint8_t* dst_argb,
+                     int width) {
+  asm volatile(
+      "sub       %0,%1                           \n"
+      "pcmpeqb   %%xmm5,%%xmm5                   \n"
 
-    // 8 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    MEMOPREG(movdqu,0x00,0,1,1,xmm1)           //  movdqu    (%0,%1,1),%%xmm1
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "movdqa    %%xmm0,%%xmm2                   \n"
-    "paddusb   %%xmm1,%%xmm2                   \n"
-    "movdqa    %%xmm0,%%xmm3                   \n"
-    "punpcklbw %%xmm5,%%xmm3                   \n"
-    "punpckhbw %%xmm5,%%xmm0                   \n"
-    "movdqa    %%xmm1,%%xmm4                   \n"
-    "punpcklbw %%xmm2,%%xmm4                   \n"
-    "punpckhbw %%xmm2,%%xmm1                   \n"
-    "movdqa    %%xmm4,%%xmm6                   \n"
-    "punpcklwd %%xmm3,%%xmm6                   \n"
-    "punpckhwd %%xmm3,%%xmm4                   \n"
-    "movdqa    %%xmm1,%%xmm7                   \n"
-    "punpcklwd %%xmm0,%%xmm7                   \n"
-    "punpckhwd %%xmm0,%%xmm1                   \n"
-    "movdqu    %%xmm6," MEMACCESS(2) "         \n"
-    "movdqu    %%xmm4," MEMACCESS2(0x10,2) "   \n"
-    "movdqu    %%xmm7," MEMACCESS2(0x20,2) "   \n"
-    "movdqu    %%xmm1," MEMACCESS2(0x30,2) "   \n"
-    "lea       " MEMLEA(0x40,2) ",%2           \n"
-    "sub       $0x10,%3                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_sobelx),  // %0
-    "+r"(src_sobely),  // %1
-    "+r"(dst_argb),    // %2
-    "+r"(width)        // %3
-  :
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+      // 8 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x00(%0,%1,1),%%xmm1            \n"
+      "lea       0x10(%0),%0                     \n"
+      "movdqa    %%xmm0,%%xmm2                   \n"
+      "paddusb   %%xmm1,%%xmm2                   \n"
+      "movdqa    %%xmm0,%%xmm3                   \n"
+      "punpcklbw %%xmm5,%%xmm3                   \n"
+      "punpckhbw %%xmm5,%%xmm0                   \n"
+      "movdqa    %%xmm1,%%xmm4                   \n"
+      "punpcklbw %%xmm2,%%xmm4                   \n"
+      "punpckhbw %%xmm2,%%xmm1                   \n"
+      "movdqa    %%xmm4,%%xmm6                   \n"
+      "punpcklwd %%xmm3,%%xmm6                   \n"
+      "punpckhwd %%xmm3,%%xmm4                   \n"
+      "movdqa    %%xmm1,%%xmm7                   \n"
+      "punpcklwd %%xmm0,%%xmm7                   \n"
+      "punpckhwd %%xmm0,%%xmm1                   \n"
+      "movdqu    %%xmm6,(%2)                     \n"
+      "movdqu    %%xmm4,0x10(%2)                 \n"
+      "movdqu    %%xmm7,0x20(%2)                 \n"
+      "movdqu    %%xmm1,0x30(%2)                 \n"
+      "lea       0x40(%2),%2                     \n"
+      "sub       $0x10,%3                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_sobelx),  // %0
+        "+r"(src_sobely),  // %1
+        "+r"(dst_argb),    // %2
+        "+r"(width)        // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 #endif  // HAS_SOBELXYROW_SSE2
 
 #ifdef HAS_COMPUTECUMULATIVESUMROW_SSE2
 // Creates a table of cumulative sums where each value is a sum of all values
 // above and to the left of the value, inclusive of the value.
-void ComputeCumulativeSumRow_SSE2(const uint8* row, int32* cumsum,
-                                  const int32* previous_cumsum, int width) {
-  asm volatile (
-    "pxor      %%xmm0,%%xmm0                   \n"
-    "pxor      %%xmm1,%%xmm1                   \n"
-    "sub       $0x4,%3                         \n"
-    "jl        49f                             \n"
-    "test      $0xf,%1                         \n"
-    "jne       49f                             \n"
+void ComputeCumulativeSumRow_SSE2(const uint8_t* row,
+                                  int32_t* cumsum,
+                                  const int32_t* previous_cumsum,
+                                  int width) {
+  asm volatile(
+      "pxor      %%xmm0,%%xmm0                   \n"
+      "pxor      %%xmm1,%%xmm1                   \n"
+      "sub       $0x4,%3                         \n"
+      "jl        49f                             \n"
+      "test      $0xf,%1                         \n"
+      "jne       49f                             \n"
 
-  // 4 pixel loop                              \n"
-    LABELALIGN
-  "40:                                         \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm2         \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "movdqa    %%xmm2,%%xmm4                   \n"
-    "punpcklbw %%xmm1,%%xmm2                   \n"
-    "movdqa    %%xmm2,%%xmm3                   \n"
-    "punpcklwd %%xmm1,%%xmm2                   \n"
-    "punpckhwd %%xmm1,%%xmm3                   \n"
-    "punpckhbw %%xmm1,%%xmm4                   \n"
-    "movdqa    %%xmm4,%%xmm5                   \n"
-    "punpcklwd %%xmm1,%%xmm4                   \n"
-    "punpckhwd %%xmm1,%%xmm5                   \n"
-    "paddd     %%xmm2,%%xmm0                   \n"
-    "movdqu    " MEMACCESS(2) ",%%xmm2         \n"
-    "paddd     %%xmm0,%%xmm2                   \n"
-    "paddd     %%xmm3,%%xmm0                   \n"
-    "movdqu    " MEMACCESS2(0x10,2) ",%%xmm3   \n"
-    "paddd     %%xmm0,%%xmm3                   \n"
-    "paddd     %%xmm4,%%xmm0                   \n"
-    "movdqu    " MEMACCESS2(0x20,2) ",%%xmm4   \n"
-    "paddd     %%xmm0,%%xmm4                   \n"
-    "paddd     %%xmm5,%%xmm0                   \n"
-    "movdqu    " MEMACCESS2(0x30,2) ",%%xmm5   \n"
-    "lea       " MEMLEA(0x40,2) ",%2           \n"
-    "paddd     %%xmm0,%%xmm5                   \n"
-    "movdqu    %%xmm2," MEMACCESS(1) "         \n"
-    "movdqu    %%xmm3," MEMACCESS2(0x10,1) "   \n"
-    "movdqu    %%xmm4," MEMACCESS2(0x20,1) "   \n"
-    "movdqu    %%xmm5," MEMACCESS2(0x30,1) "   \n"
-    "lea       " MEMLEA(0x40,1) ",%1           \n"
-    "sub       $0x4,%3                         \n"
-    "jge       40b                             \n"
+      // 4 pixel loop.
+      LABELALIGN
+      "40:                                       \n"
+      "movdqu    (%0),%%xmm2                     \n"
+      "lea       0x10(%0),%0                     \n"
+      "movdqa    %%xmm2,%%xmm4                   \n"
+      "punpcklbw %%xmm1,%%xmm2                   \n"
+      "movdqa    %%xmm2,%%xmm3                   \n"
+      "punpcklwd %%xmm1,%%xmm2                   \n"
+      "punpckhwd %%xmm1,%%xmm3                   \n"
+      "punpckhbw %%xmm1,%%xmm4                   \n"
+      "movdqa    %%xmm4,%%xmm5                   \n"
+      "punpcklwd %%xmm1,%%xmm4                   \n"
+      "punpckhwd %%xmm1,%%xmm5                   \n"
+      "paddd     %%xmm2,%%xmm0                   \n"
+      "movdqu    (%2),%%xmm2                     \n"
+      "paddd     %%xmm0,%%xmm2                   \n"
+      "paddd     %%xmm3,%%xmm0                   \n"
+      "movdqu    0x10(%2),%%xmm3                 \n"
+      "paddd     %%xmm0,%%xmm3                   \n"
+      "paddd     %%xmm4,%%xmm0                   \n"
+      "movdqu    0x20(%2),%%xmm4                 \n"
+      "paddd     %%xmm0,%%xmm4                   \n"
+      "paddd     %%xmm5,%%xmm0                   \n"
+      "movdqu    0x30(%2),%%xmm5                 \n"
+      "lea       0x40(%2),%2                     \n"
+      "paddd     %%xmm0,%%xmm5                   \n"
+      "movdqu    %%xmm2,(%1)                     \n"
+      "movdqu    %%xmm3,0x10(%1)                 \n"
+      "movdqu    %%xmm4,0x20(%1)                 \n"
+      "movdqu    %%xmm5,0x30(%1)                 \n"
+      "lea       0x40(%1),%1                     \n"
+      "sub       $0x4,%3                         \n"
+      "jge       40b                             \n"
 
-  "49:                                         \n"
-    "add       $0x3,%3                         \n"
-    "jl        19f                             \n"
+      "49:                                       \n"
+      "add       $0x3,%3                         \n"
+      "jl        19f                             \n"
 
-  // 1 pixel loop                              \n"
-    LABELALIGN
-  "10:                                         \n"
-    "movd      " MEMACCESS(0) ",%%xmm2         \n"
-    "lea       " MEMLEA(0x4,0) ",%0            \n"
-    "punpcklbw %%xmm1,%%xmm2                   \n"
-    "punpcklwd %%xmm1,%%xmm2                   \n"
-    "paddd     %%xmm2,%%xmm0                   \n"
-    "movdqu    " MEMACCESS(2) ",%%xmm2         \n"
-    "lea       " MEMLEA(0x10,2) ",%2           \n"
-    "paddd     %%xmm0,%%xmm2                   \n"
-    "movdqu    %%xmm2," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x1,%3                         \n"
-    "jge       10b                             \n"
+      // 1 pixel loop.
+      LABELALIGN
+      "10:                                       \n"
+      "movd      (%0),%%xmm2                     \n"
+      "lea       0x4(%0),%0                      \n"
+      "punpcklbw %%xmm1,%%xmm2                   \n"
+      "punpcklwd %%xmm1,%%xmm2                   \n"
+      "paddd     %%xmm2,%%xmm0                   \n"
+      "movdqu    (%2),%%xmm2                     \n"
+      "lea       0x10(%2),%2                     \n"
+      "paddd     %%xmm0,%%xmm2                   \n"
+      "movdqu    %%xmm2,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x1,%3                         \n"
+      "jge       10b                             \n"
 
-  "19:                                         \n"
-  : "+r"(row),  // %0
-    "+r"(cumsum),  // %1
-    "+r"(previous_cumsum),  // %2
-    "+r"(width)  // %3
-  :
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+      "19:                                       \n"
+      : "+r"(row),              // %0
+        "+r"(cumsum),           // %1
+        "+r"(previous_cumsum),  // %2
+        "+r"(width)             // %3
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 #endif  // HAS_COMPUTECUMULATIVESUMROW_SSE2
 
 #ifdef HAS_CUMULATIVESUMTOAVERAGEROW_SSE2
-void CumulativeSumToAverageRow_SSE2(const int32* topleft, const int32* botleft,
-                                    int width, int area, uint8* dst,
+void CumulativeSumToAverageRow_SSE2(const int32_t* topleft,
+                                    const int32_t* botleft,
+                                    int width,
+                                    int area,
+                                    uint8_t* dst,
                                     int count) {
-  asm volatile (
-    "movd      %5,%%xmm5                       \n"
-    "cvtdq2ps  %%xmm5,%%xmm5                   \n"
-    "rcpss     %%xmm5,%%xmm4                   \n"
-    "pshufd    $0x0,%%xmm4,%%xmm4              \n"
-    "sub       $0x4,%3                         \n"
-    "jl        49f                             \n"
-    "cmpl      $0x80,%5                        \n"
-    "ja        40f                             \n"
+  asm volatile(
+      "movd      %5,%%xmm5                       \n"
+      "cvtdq2ps  %%xmm5,%%xmm5                   \n"
+      "rcpss     %%xmm5,%%xmm4                   \n"
+      "pshufd    $0x0,%%xmm4,%%xmm4              \n"
+      "sub       $0x4,%3                         \n"
+      "jl        49f                             \n"
+      "cmpl      $0x80,%5                        \n"
+      "ja        40f                             \n"
 
-    "pshufd    $0x0,%%xmm5,%%xmm5              \n"
-    "pcmpeqb   %%xmm6,%%xmm6                   \n"
-    "psrld     $0x10,%%xmm6                    \n"
-    "cvtdq2ps  %%xmm6,%%xmm6                   \n"
-    "addps     %%xmm6,%%xmm5                   \n"
-    "mulps     %%xmm4,%%xmm5                   \n"
-    "cvtps2dq  %%xmm5,%%xmm5                   \n"
-    "packssdw  %%xmm5,%%xmm5                   \n"
+      "pshufd    $0x0,%%xmm5,%%xmm5              \n"
+      "pcmpeqb   %%xmm6,%%xmm6                   \n"
+      "psrld     $0x10,%%xmm6                    \n"
+      "cvtdq2ps  %%xmm6,%%xmm6                   \n"
+      "addps     %%xmm6,%%xmm5                   \n"
+      "mulps     %%xmm4,%%xmm5                   \n"
+      "cvtps2dq  %%xmm5,%%xmm5                   \n"
+      "packssdw  %%xmm5,%%xmm5                   \n"
 
-  // 4 pixel small loop                        \n"
-    LABELALIGN
-  "4:                                         \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm3   \n"
-    MEMOPREG(psubd,0x00,0,4,4,xmm0)            // psubd    0x00(%0,%4,4),%%xmm0
-    MEMOPREG(psubd,0x10,0,4,4,xmm1)            // psubd    0x10(%0,%4,4),%%xmm1
-    MEMOPREG(psubd,0x20,0,4,4,xmm2)            // psubd    0x20(%0,%4,4),%%xmm2
-    MEMOPREG(psubd,0x30,0,4,4,xmm3)            // psubd    0x30(%0,%4,4),%%xmm3
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "psubd     " MEMACCESS(1) ",%%xmm0         \n"
-    "psubd     " MEMACCESS2(0x10,1) ",%%xmm1   \n"
-    "psubd     " MEMACCESS2(0x20,1) ",%%xmm2   \n"
-    "psubd     " MEMACCESS2(0x30,1) ",%%xmm3   \n"
-    MEMOPREG(paddd,0x00,1,4,4,xmm0)            // paddd    0x00(%1,%4,4),%%xmm0
-    MEMOPREG(paddd,0x10,1,4,4,xmm1)            // paddd    0x10(%1,%4,4),%%xmm1
-    MEMOPREG(paddd,0x20,1,4,4,xmm2)            // paddd    0x20(%1,%4,4),%%xmm2
-    MEMOPREG(paddd,0x30,1,4,4,xmm3)            // paddd    0x30(%1,%4,4),%%xmm3
-    "lea       " MEMLEA(0x40,1) ",%1           \n"
-    "packssdw  %%xmm1,%%xmm0                   \n"
-    "packssdw  %%xmm3,%%xmm2                   \n"
-    "pmulhuw   %%xmm5,%%xmm0                   \n"
-    "pmulhuw   %%xmm5,%%xmm2                   \n"
-    "packuswb  %%xmm2,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(2) "         \n"
-    "lea       " MEMLEA(0x10,2) ",%2           \n"
-    "sub       $0x4,%3                         \n"
-    "jge       4b                              \n"
-    "jmp       49f                             \n"
+      // 4 pixel small loop.
+      LABELALIGN
+      "4:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x30(%0),%%xmm3                 \n"
+      "psubd     0x00(%0,%4,4),%%xmm0            \n"
+      "psubd     0x10(%0,%4,4),%%xmm1            \n"
+      "psubd     0x20(%0,%4,4),%%xmm2            \n"
+      "psubd     0x30(%0,%4,4),%%xmm3            \n"
+      "lea       0x40(%0),%0                     \n"
+      "psubd     (%1),%%xmm0                     \n"
+      "psubd     0x10(%1),%%xmm1                 \n"
+      "psubd     0x20(%1),%%xmm2                 \n"
+      "psubd     0x30(%1),%%xmm3                 \n"
+      "paddd     0x00(%1,%4,4),%%xmm0            \n"
+      "paddd     0x10(%1,%4,4),%%xmm1            \n"
+      "paddd     0x20(%1,%4,4),%%xmm2            \n"
+      "paddd     0x30(%1,%4,4),%%xmm3            \n"
+      "lea       0x40(%1),%1                     \n"
+      "packssdw  %%xmm1,%%xmm0                   \n"
+      "packssdw  %%xmm3,%%xmm2                   \n"
+      "pmulhuw   %%xmm5,%%xmm0                   \n"
+      "pmulhuw   %%xmm5,%%xmm2                   \n"
+      "packuswb  %%xmm2,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%2)                     \n"
+      "lea       0x10(%2),%2                     \n"
+      "sub       $0x4,%3                         \n"
+      "jge       4b                              \n"
+      "jmp       49f                             \n"
 
-  // 4 pixel loop                              \n"
-    LABELALIGN
-  "40:                                         \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "movdqu    " MEMACCESS2(0x20,0) ",%%xmm2   \n"
-    "movdqu    " MEMACCESS2(0x30,0) ",%%xmm3   \n"
-    MEMOPREG(psubd,0x00,0,4,4,xmm0)            // psubd    0x00(%0,%4,4),%%xmm0
-    MEMOPREG(psubd,0x10,0,4,4,xmm1)            // psubd    0x10(%0,%4,4),%%xmm1
-    MEMOPREG(psubd,0x20,0,4,4,xmm2)            // psubd    0x20(%0,%4,4),%%xmm2
-    MEMOPREG(psubd,0x30,0,4,4,xmm3)            // psubd    0x30(%0,%4,4),%%xmm3
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "psubd     " MEMACCESS(1) ",%%xmm0         \n"
-    "psubd     " MEMACCESS2(0x10,1) ",%%xmm1   \n"
-    "psubd     " MEMACCESS2(0x20,1) ",%%xmm2   \n"
-    "psubd     " MEMACCESS2(0x30,1) ",%%xmm3   \n"
-    MEMOPREG(paddd,0x00,1,4,4,xmm0)            // paddd    0x00(%1,%4,4),%%xmm0
-    MEMOPREG(paddd,0x10,1,4,4,xmm1)            // paddd    0x10(%1,%4,4),%%xmm1
-    MEMOPREG(paddd,0x20,1,4,4,xmm2)            // paddd    0x20(%1,%4,4),%%xmm2
-    MEMOPREG(paddd,0x30,1,4,4,xmm3)            // paddd    0x30(%1,%4,4),%%xmm3
-    "lea       " MEMLEA(0x40,1) ",%1           \n"
-    "cvtdq2ps  %%xmm0,%%xmm0                   \n"
-    "cvtdq2ps  %%xmm1,%%xmm1                   \n"
-    "mulps     %%xmm4,%%xmm0                   \n"
-    "mulps     %%xmm4,%%xmm1                   \n"
-    "cvtdq2ps  %%xmm2,%%xmm2                   \n"
-    "cvtdq2ps  %%xmm3,%%xmm3                   \n"
-    "mulps     %%xmm4,%%xmm2                   \n"
-    "mulps     %%xmm4,%%xmm3                   \n"
-    "cvtps2dq  %%xmm0,%%xmm0                   \n"
-    "cvtps2dq  %%xmm1,%%xmm1                   \n"
-    "cvtps2dq  %%xmm2,%%xmm2                   \n"
-    "cvtps2dq  %%xmm3,%%xmm3                   \n"
-    "packssdw  %%xmm1,%%xmm0                   \n"
-    "packssdw  %%xmm3,%%xmm2                   \n"
-    "packuswb  %%xmm2,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(2) "         \n"
-    "lea       " MEMLEA(0x10,2) ",%2           \n"
-    "sub       $0x4,%3                         \n"
-    "jge       40b                             \n"
+      // 4 pixel loop
+      LABELALIGN
+      "40:                                       \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x20(%0),%%xmm2                 \n"
+      "movdqu    0x30(%0),%%xmm3                 \n"
+      "psubd     0x00(%0,%4,4),%%xmm0            \n"
+      "psubd     0x10(%0,%4,4),%%xmm1            \n"
+      "psubd     0x20(%0,%4,4),%%xmm2            \n"
+      "psubd     0x30(%0,%4,4),%%xmm3            \n"
+      "lea       0x40(%0),%0                     \n"
+      "psubd     (%1),%%xmm0                     \n"
+      "psubd     0x10(%1),%%xmm1                 \n"
+      "psubd     0x20(%1),%%xmm2                 \n"
+      "psubd     0x30(%1),%%xmm3                 \n"
+      "paddd     0x00(%1,%4,4),%%xmm0            \n"
+      "paddd     0x10(%1,%4,4),%%xmm1            \n"
+      "paddd     0x20(%1,%4,4),%%xmm2            \n"
+      "paddd     0x30(%1,%4,4),%%xmm3            \n"
+      "lea       0x40(%1),%1                     \n"
+      "cvtdq2ps  %%xmm0,%%xmm0                   \n"
+      "cvtdq2ps  %%xmm1,%%xmm1                   \n"
+      "mulps     %%xmm4,%%xmm0                   \n"
+      "mulps     %%xmm4,%%xmm1                   \n"
+      "cvtdq2ps  %%xmm2,%%xmm2                   \n"
+      "cvtdq2ps  %%xmm3,%%xmm3                   \n"
+      "mulps     %%xmm4,%%xmm2                   \n"
+      "mulps     %%xmm4,%%xmm3                   \n"
+      "cvtps2dq  %%xmm0,%%xmm0                   \n"
+      "cvtps2dq  %%xmm1,%%xmm1                   \n"
+      "cvtps2dq  %%xmm2,%%xmm2                   \n"
+      "cvtps2dq  %%xmm3,%%xmm3                   \n"
+      "packssdw  %%xmm1,%%xmm0                   \n"
+      "packssdw  %%xmm3,%%xmm2                   \n"
+      "packuswb  %%xmm2,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%2)                     \n"
+      "lea       0x10(%2),%2                     \n"
+      "sub       $0x4,%3                         \n"
+      "jge       40b                             \n"
 
-  "49:                                         \n"
-    "add       $0x3,%3                         \n"
-    "jl        19f                             \n"
+      "49:                                       \n"
+      "add       $0x3,%3                         \n"
+      "jl        19f                             \n"
 
-  // 1 pixel loop                              \n"
-    LABELALIGN
-  "10:                                         \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    MEMOPREG(psubd,0x00,0,4,4,xmm0)            // psubd    0x00(%0,%4,4),%%xmm0
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "psubd     " MEMACCESS(1) ",%%xmm0         \n"
-    MEMOPREG(paddd,0x00,1,4,4,xmm0)            // paddd    0x00(%1,%4,4),%%xmm0
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "cvtdq2ps  %%xmm0,%%xmm0                   \n"
-    "mulps     %%xmm4,%%xmm0                   \n"
-    "cvtps2dq  %%xmm0,%%xmm0                   \n"
-    "packssdw  %%xmm0,%%xmm0                   \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
-    "movd      %%xmm0," MEMACCESS(2) "         \n"
-    "lea       " MEMLEA(0x4,2) ",%2            \n"
-    "sub       $0x1,%3                         \n"
-    "jge       10b                             \n"
-  "19:                                         \n"
-  : "+r"(topleft),  // %0
-    "+r"(botleft),  // %1
-    "+r"(dst),      // %2
-    "+rm"(count)    // %3
-  : "r"((intptr_t)(width)),  // %4
-    "rm"(area)     // %5
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6"
-  );
+      // 1 pixel loop
+      LABELALIGN
+      "10:                                       \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "psubd     0x00(%0,%4,4),%%xmm0            \n"
+      "lea       0x10(%0),%0                     \n"
+      "psubd     (%1),%%xmm0                     \n"
+      "paddd     0x00(%1,%4,4),%%xmm0            \n"
+      "lea       0x10(%1),%1                     \n"
+      "cvtdq2ps  %%xmm0,%%xmm0                   \n"
+      "mulps     %%xmm4,%%xmm0                   \n"
+      "cvtps2dq  %%xmm0,%%xmm0                   \n"
+      "packssdw  %%xmm0,%%xmm0                   \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
+      "movd      %%xmm0,(%2)                     \n"
+      "lea       0x4(%2),%2                      \n"
+      "sub       $0x1,%3                         \n"
+      "jge       10b                             \n"
+      "19:                                       \n"
+      : "+r"(topleft),           // %0
+        "+r"(botleft),           // %1
+        "+r"(dst),               // %2
+        "+rm"(count)             // %3
+      : "r"((intptr_t)(width)),  // %4
+        "rm"(area)               // %5
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6");
 }
 #endif  // HAS_CUMULATIVESUMTOAVERAGEROW_SSE2
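The SSE2 kernel above computes a box average from summed-area (cumulative-sum) rows: for each pixel it forms `TL - TR - BL + BR` and scales by the reciprocal of the box area. A scalar sketch of that per-channel step follows; `box_average` is a hypothetical helper for illustration, not part of libyuv's API, and the round-to-nearest here only approximates the `cvtps2dq` rounding the asm uses.

```c
#include <stdint.h>

// Scalar sketch of one step of CumulativeSumToAverageRow_SSE2 (hypothetical
// helper): topleft/botleft point at the cumulative-sum rows above and below
// the box, w is the box width, area its pixel count.
static uint8_t box_average(const int32_t* topleft, const int32_t* botleft,
                           int w, int area, int i) {
  // Summed-area-table identity: sum over the box = TL - TR - BL + BR.
  int32_t sum = topleft[i] - topleft[i + w] - botleft[i] + botleft[i + w];
  // The asm multiplies by a precomputed 1/area; round and clamp to 8 bits.
  int32_t avg = (int32_t)(sum * (1.0f / area) + 0.5f);
  if (avg < 0) avg = 0;
  if (avg > 255) avg = 255;
  return (uint8_t)avg;
}
```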
 
 #ifdef HAS_ARGBAFFINEROW_SSE2
 // Copy ARGB pixels from source image with slope to a row of destination.
 LIBYUV_API
-void ARGBAffineRow_SSE2(const uint8* src_argb, int src_argb_stride,
-                        uint8* dst_argb, const float* src_dudv, int width) {
+void ARGBAffineRow_SSE2(const uint8_t* src_argb,
+                        int src_argb_stride,
+                        uint8_t* dst_argb,
+                        const float* src_dudv,
+                        int width) {
   intptr_t src_argb_stride_temp = src_argb_stride;
   intptr_t temp;
-  asm volatile (
-    "movq      " MEMACCESS(3) ",%%xmm2         \n"
-    "movq      " MEMACCESS2(0x08,3) ",%%xmm7   \n"
-    "shl       $0x10,%1                        \n"
-    "add       $0x4,%1                         \n"
-    "movd      %1,%%xmm5                       \n"
-    "sub       $0x4,%4                         \n"
-    "jl        49f                             \n"
+  asm volatile(
+      "movq      (%3),%%xmm2                     \n"
+      "movq      0x08(%3),%%xmm7                 \n"
+      "shl       $0x10,%1                        \n"
+      "add       $0x4,%1                         \n"
+      "movd      %1,%%xmm5                       \n"
+      "sub       $0x4,%4                         \n"
+      "jl        49f                             \n"
 
-    "pshufd    $0x44,%%xmm7,%%xmm7             \n"
-    "pshufd    $0x0,%%xmm5,%%xmm5              \n"
-    "movdqa    %%xmm2,%%xmm0                   \n"
-    "addps     %%xmm7,%%xmm0                   \n"
-    "movlhps   %%xmm0,%%xmm2                   \n"
-    "movdqa    %%xmm7,%%xmm4                   \n"
-    "addps     %%xmm4,%%xmm4                   \n"
-    "movdqa    %%xmm2,%%xmm3                   \n"
-    "addps     %%xmm4,%%xmm3                   \n"
-    "addps     %%xmm4,%%xmm4                   \n"
+      "pshufd    $0x44,%%xmm7,%%xmm7             \n"
+      "pshufd    $0x0,%%xmm5,%%xmm5              \n"
+      "movdqa    %%xmm2,%%xmm0                   \n"
+      "addps     %%xmm7,%%xmm0                   \n"
+      "movlhps   %%xmm0,%%xmm2                   \n"
+      "movdqa    %%xmm7,%%xmm4                   \n"
+      "addps     %%xmm4,%%xmm4                   \n"
+      "movdqa    %%xmm2,%%xmm3                   \n"
+      "addps     %%xmm4,%%xmm3                   \n"
+      "addps     %%xmm4,%%xmm4                   \n"
 
-  // 4 pixel loop                              \n"
-    LABELALIGN
-  "40:                                         \n"
-    "cvttps2dq %%xmm2,%%xmm0                   \n"  // x, y float to int first 2
-    "cvttps2dq %%xmm3,%%xmm1                   \n"  // x, y float to int next 2
-    "packssdw  %%xmm1,%%xmm0                   \n"  // x, y as 8 shorts
-    "pmaddwd   %%xmm5,%%xmm0                   \n"  // off = x * 4 + y * stride
-    "movd      %%xmm0,%k1                      \n"
-    "pshufd    $0x39,%%xmm0,%%xmm0             \n"
-    "movd      %%xmm0,%k5                      \n"
-    "pshufd    $0x39,%%xmm0,%%xmm0             \n"
-    MEMOPREG(movd,0x00,0,1,1,xmm1)             //  movd      (%0,%1,1),%%xmm1
-    MEMOPREG(movd,0x00,0,5,1,xmm6)             //  movd      (%0,%5,1),%%xmm6
-    "punpckldq %%xmm6,%%xmm1                   \n"
-    "addps     %%xmm4,%%xmm2                   \n"
-    "movq      %%xmm1," MEMACCESS(2) "         \n"
-    "movd      %%xmm0,%k1                      \n"
-    "pshufd    $0x39,%%xmm0,%%xmm0             \n"
-    "movd      %%xmm0,%k5                      \n"
-    MEMOPREG(movd,0x00,0,1,1,xmm0)             //  movd      (%0,%1,1),%%xmm0
-    MEMOPREG(movd,0x00,0,5,1,xmm6)             //  movd      (%0,%5,1),%%xmm6
-    "punpckldq %%xmm6,%%xmm0                   \n"
-    "addps     %%xmm4,%%xmm3                   \n"
-    "movq      %%xmm0," MEMACCESS2(0x08,2) "   \n"
-    "lea       " MEMLEA(0x10,2) ",%2           \n"
-    "sub       $0x4,%4                         \n"
-    "jge       40b                             \n"
+      // 4 pixel loop
+      LABELALIGN
+      "40:                                       \n"
+      "cvttps2dq %%xmm2,%%xmm0                   \n"  // x,y float->int first 2
+      "cvttps2dq %%xmm3,%%xmm1                   \n"  // x,y float->int next 2
+      "packssdw  %%xmm1,%%xmm0                   \n"  // x, y as 8 shorts
+      "pmaddwd   %%xmm5,%%xmm0                   \n"  // off = x*4 + y*stride
+      "movd      %%xmm0,%k1                      \n"
+      "pshufd    $0x39,%%xmm0,%%xmm0             \n"
+      "movd      %%xmm0,%k5                      \n"
+      "pshufd    $0x39,%%xmm0,%%xmm0             \n"
+      "movd      0x00(%0,%1,1),%%xmm1            \n"
+      "movd      0x00(%0,%5,1),%%xmm6            \n"
+      "punpckldq %%xmm6,%%xmm1                   \n"
+      "addps     %%xmm4,%%xmm2                   \n"
+      "movq      %%xmm1,(%2)                     \n"
+      "movd      %%xmm0,%k1                      \n"
+      "pshufd    $0x39,%%xmm0,%%xmm0             \n"
+      "movd      %%xmm0,%k5                      \n"
+      "movd      0x00(%0,%1,1),%%xmm0            \n"
+      "movd      0x00(%0,%5,1),%%xmm6            \n"
+      "punpckldq %%xmm6,%%xmm0                   \n"
+      "addps     %%xmm4,%%xmm3                   \n"
+      "movq      %%xmm0,0x08(%2)                 \n"
+      "lea       0x10(%2),%2                     \n"
+      "sub       $0x4,%4                         \n"
+      "jge       40b                             \n"
 
-  "49:                                         \n"
-    "add       $0x3,%4                         \n"
-    "jl        19f                             \n"
+      "49:                                       \n"
+      "add       $0x3,%4                         \n"
+      "jl        19f                             \n"
 
-  // 1 pixel loop                              \n"
-    LABELALIGN
-  "10:                                         \n"
-    "cvttps2dq %%xmm2,%%xmm0                   \n"
-    "packssdw  %%xmm0,%%xmm0                   \n"
-    "pmaddwd   %%xmm5,%%xmm0                   \n"
-    "addps     %%xmm7,%%xmm2                   \n"
-    "movd      %%xmm0,%k1                      \n"
-    MEMOPREG(movd,0x00,0,1,1,xmm0)             //  movd      (%0,%1,1),%%xmm0
-    "movd      %%xmm0," MEMACCESS(2) "         \n"
-    "lea       " MEMLEA(0x04,2) ",%2           \n"
-    "sub       $0x1,%4                         \n"
-    "jge       10b                             \n"
-  "19:                                         \n"
-  : "+r"(src_argb),  // %0
-    "+r"(src_argb_stride_temp),  // %1
-    "+r"(dst_argb),  // %2
-    "+r"(src_dudv),  // %3
-    "+rm"(width),    // %4
-    "=&r"(temp)      // %5
-  :
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+      // 1 pixel loop
+      LABELALIGN
+      "10:                                       \n"
+      "cvttps2dq %%xmm2,%%xmm0                   \n"
+      "packssdw  %%xmm0,%%xmm0                   \n"
+      "pmaddwd   %%xmm5,%%xmm0                   \n"
+      "addps     %%xmm7,%%xmm2                   \n"
+      "movd      %%xmm0,%k1                      \n"
+      "movd      0x00(%0,%1,1),%%xmm0            \n"
+      "movd      %%xmm0,(%2)                     \n"
+      "lea       0x04(%2),%2                     \n"
+      "sub       $0x1,%4                         \n"
+      "jge       10b                             \n"
+      "19:                                       \n"
+      : "+r"(src_argb),              // %0
+        "+r"(src_argb_stride_temp),  // %1
+        "+r"(dst_argb),              // %2
+        "+r"(src_dudv),              // %3
+        "+rm"(width),                // %4
+        "=&r"(temp)                  // %5
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 #endif  // HAS_ARGBAFFINEROW_SSE2
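ARGBAffineRow_SSE2 walks a (u, v) sample position across the source with a constant per-pixel step and fetches one ARGB pixel per destination slot. A scalar sketch, assuming the `src_dudv` layout `{u0, v0, du, dv}` that the asm's `movq (%3)` / `movq 0x08(%3)` loads imply; `argb_affine_row_c` is an illustrative name, not libyuv's C fallback verbatim.

```c
#include <stdint.h>
#include <string.h>

// Scalar sketch of ARGBAffineRow (hypothetical helper): sample the source
// along an affine path and copy one 4-byte ARGB pixel per output slot.
static void argb_affine_row_c(const uint8_t* src_argb, int src_stride,
                              uint8_t* dst_argb, const float* src_dudv,
                              int width) {
  float u = src_dudv[0], v = src_dudv[1];
  for (int x = 0; x < width; ++x) {
    int xi = (int)u, yi = (int)v;  // truncate toward zero, as cvttps2dq does
    memcpy(dst_argb + x * 4, src_argb + yi * src_stride + xi * 4, 4);
    u += src_dudv[2];  // du per destination pixel
    v += src_dudv[3];  // dv per destination pixel
  }
}
```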
 
 #ifdef HAS_INTERPOLATEROW_SSSE3
 // Bilinear filter 16x2 -> 16x1
-void InterpolateRow_SSSE3(uint8* dst_ptr, const uint8* src_ptr,
-                          ptrdiff_t src_stride, int dst_width,
+void InterpolateRow_SSSE3(uint8_t* dst_ptr,
+                          const uint8_t* src_ptr,
+                          ptrdiff_t src_stride,
+                          int dst_width,
                           int source_y_fraction) {
-  asm volatile (
-    "sub       %1,%0                           \n"
-    "cmp       $0x0,%3                         \n"
-    "je        100f                            \n"
-    "cmp       $0x80,%3                        \n"
-    "je        50f                             \n"
+  asm volatile(
+      "sub       %1,%0                           \n"
+      "cmp       $0x0,%3                         \n"
+      "je        100f                            \n"
+      "cmp       $0x80,%3                        \n"
+      "je        50f                             \n"
 
-    "movd      %3,%%xmm0                       \n"
-    "neg       %3                              \n"
-    "add       $0x100,%3                       \n"
-    "movd      %3,%%xmm5                       \n"
-    "punpcklbw %%xmm0,%%xmm5                   \n"
-    "punpcklwd %%xmm5,%%xmm5                   \n"
-    "pshufd    $0x0,%%xmm5,%%xmm5              \n"
-    "mov       $0x80808080,%%eax               \n"
-    "movd      %%eax,%%xmm4                    \n"
-    "pshufd    $0x0,%%xmm4,%%xmm4              \n"
+      "movd      %3,%%xmm0                       \n"
+      "neg       %3                              \n"
+      "add       $0x100,%3                       \n"
+      "movd      %3,%%xmm5                       \n"
+      "punpcklbw %%xmm0,%%xmm5                   \n"
+      "punpcklwd %%xmm5,%%xmm5                   \n"
+      "pshufd    $0x0,%%xmm5,%%xmm5              \n"
+      "mov       $0x80808080,%%eax               \n"
+      "movd      %%eax,%%xmm4                    \n"
+      "pshufd    $0x0,%%xmm4,%%xmm4              \n"
 
-    // General purpose row blend.
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(1) ",%%xmm0         \n"
-    MEMOPREG(movdqu,0x00,1,4,1,xmm2)
-    "movdqa     %%xmm0,%%xmm1                  \n"
-    "punpcklbw  %%xmm2,%%xmm0                  \n"
-    "punpckhbw  %%xmm2,%%xmm1                  \n"
-    "psubb      %%xmm4,%%xmm0                  \n"
-    "psubb      %%xmm4,%%xmm1                  \n"
-    "movdqa     %%xmm5,%%xmm2                  \n"
-    "movdqa     %%xmm5,%%xmm3                  \n"
-    "pmaddubsw  %%xmm0,%%xmm2                  \n"
-    "pmaddubsw  %%xmm1,%%xmm3                  \n"
-    "paddw      %%xmm4,%%xmm2                  \n"
-    "paddw      %%xmm4,%%xmm3                  \n"
-    "psrlw      $0x8,%%xmm2                    \n"
-    "psrlw      $0x8,%%xmm3                    \n"
-    "packuswb   %%xmm3,%%xmm2                  \n"
-    MEMOPMEM(movdqu,xmm2,0x00,1,0,1)
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-    "jmp       99f                             \n"
+      // General purpose row blend.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%1),%%xmm0                     \n"
+      "movdqu    0x00(%1,%4,1),%%xmm2            \n"
+      "movdqa     %%xmm0,%%xmm1                  \n"
+      "punpcklbw  %%xmm2,%%xmm0                  \n"
+      "punpckhbw  %%xmm2,%%xmm1                  \n"
+      "psubb      %%xmm4,%%xmm0                  \n"
+      "psubb      %%xmm4,%%xmm1                  \n"
+      "movdqa     %%xmm5,%%xmm2                  \n"
+      "movdqa     %%xmm5,%%xmm3                  \n"
+      "pmaddubsw  %%xmm0,%%xmm2                  \n"
+      "pmaddubsw  %%xmm1,%%xmm3                  \n"
+      "paddw      %%xmm4,%%xmm2                  \n"
+      "paddw      %%xmm4,%%xmm3                  \n"
+      "psrlw      $0x8,%%xmm2                    \n"
+      "psrlw      $0x8,%%xmm3                    \n"
+      "packuswb   %%xmm3,%%xmm2                  \n"
+      "movdqu    %%xmm2,0x00(%1,%0,1)            \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      "jmp       99f                             \n"
 
-    // Blend 50 / 50.
-    LABELALIGN
-  "50:                                         \n"
-    "movdqu    " MEMACCESS(1) ",%%xmm0         \n"
-    MEMOPREG(movdqu,0x00,1,4,1,xmm1)
-    "pavgb     %%xmm1,%%xmm0                   \n"
-    MEMOPMEM(movdqu,xmm0,0x00,1,0,1)
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        50b                             \n"
-    "jmp       99f                             \n"
+      // Blend 50 / 50.
+      LABELALIGN
+      "50:                                       \n"
+      "movdqu    (%1),%%xmm0                     \n"
+      "movdqu    0x00(%1,%4,1),%%xmm1            \n"
+      "pavgb     %%xmm1,%%xmm0                   \n"
+      "movdqu    %%xmm0,0x00(%1,%0,1)            \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        50b                             \n"
+      "jmp       99f                             \n"
 
-    // Blend 100 / 0 - Copy row unchanged.
-    LABELALIGN
-  "100:                                        \n"
-    "movdqu    " MEMACCESS(1) ",%%xmm0         \n"
-    MEMOPMEM(movdqu,xmm0,0x00,1,0,1)
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        100b                            \n"
+      // Blend 100 / 0 - Copy row unchanged.
+      LABELALIGN
+      "100:                                      \n"
+      "movdqu    (%1),%%xmm0                     \n"
+      "movdqu    %%xmm0,0x00(%1,%0,1)            \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        100b                            \n"
 
-  "99:                                         \n"
-  : "+r"(dst_ptr),     // %0
-    "+r"(src_ptr),     // %1
-    "+rm"(dst_width),  // %2
-    "+r"(source_y_fraction)  // %3
-  : "r"((intptr_t)(src_stride))  // %4
-  : "memory", "cc", "eax", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+      "99:                                       \n"
+      : "+r"(dst_ptr),               // %0
+        "+r"(src_ptr),               // %1
+        "+rm"(dst_width),            // %2
+        "+r"(source_y_fraction)      // %3
+      : "r"((intptr_t)(src_stride))  // %4
+      : "memory", "cc", "eax", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 #endif  // HAS_INTERPOLATEROW_SSSE3
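The SSSE3 kernel (and the AVX2 one below) blends two rows with an 8.8 fixed-point fraction, special-casing 0 (copy top row) and 128 (pavgb 50/50). A scalar sketch of the general blend, assuming the usual `(a*(256-f) + b*f + 128) >> 8` rounding; the exact bias the pmaddubsw path uses differs slightly, so treat this as the intended arithmetic rather than a bit-exact mirror.

```c
#include <stdint.h>
#include <stddef.h>

// Scalar sketch of InterpolateRow (hypothetical helper): blend the row at
// src with the row src_stride bytes later, weighted by fraction/256.
static void interpolate_row_c(uint8_t* dst, const uint8_t* src,
                              ptrdiff_t src_stride, int width, int fraction) {
  const uint8_t* src1 = src + src_stride;
  int y1 = fraction;        // weight of the second row
  int y0 = 256 - fraction;  // weight of the first row
  for (int x = 0; x < width; ++x) {
    dst[x] = (uint8_t)((src[x] * y0 + src1[x] * y1 + 128) >> 8);
  }
}
```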
 
 #ifdef HAS_INTERPOLATEROW_AVX2
 // Bilinear filter 32x2 -> 32x1
-void InterpolateRow_AVX2(uint8* dst_ptr, const uint8* src_ptr,
-                         ptrdiff_t src_stride, int dst_width,
+void InterpolateRow_AVX2(uint8_t* dst_ptr,
+                         const uint8_t* src_ptr,
+                         ptrdiff_t src_stride,
+                         int dst_width,
                          int source_y_fraction) {
-  asm volatile (
-    "cmp       $0x0,%3                         \n"
-    "je        100f                            \n"
-    "sub       %1,%0                           \n"
-    "cmp       $0x80,%3                        \n"
-    "je        50f                             \n"
+  asm volatile(
+      "cmp       $0x0,%3                         \n"
+      "je        100f                            \n"
+      "sub       %1,%0                           \n"
+      "cmp       $0x80,%3                        \n"
+      "je        50f                             \n"
 
-    "vmovd      %3,%%xmm0                      \n"
-    "neg        %3                             \n"
-    "add        $0x100,%3                      \n"
-    "vmovd      %3,%%xmm5                      \n"
-    "vpunpcklbw %%xmm0,%%xmm5,%%xmm5           \n"
-    "vpunpcklwd %%xmm5,%%xmm5,%%xmm5           \n"
-    "vbroadcastss %%xmm5,%%ymm5                \n"
-    "mov        $0x80808080,%%eax              \n"
-    "vmovd      %%eax,%%xmm4                   \n"
-    "vbroadcastss %%xmm4,%%ymm4                \n"
+      "vmovd      %3,%%xmm0                      \n"
+      "neg        %3                             \n"
+      "add        $0x100,%3                      \n"
+      "vmovd      %3,%%xmm5                      \n"
+      "vpunpcklbw %%xmm0,%%xmm5,%%xmm5           \n"
+      "vpunpcklwd %%xmm5,%%xmm5,%%xmm5           \n"
+      "vbroadcastss %%xmm5,%%ymm5                \n"
+      "mov        $0x80808080,%%eax              \n"
+      "vmovd      %%eax,%%xmm4                   \n"
+      "vbroadcastss %%xmm4,%%ymm4                \n"
 
-    // General purpose row blend.
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    " MEMACCESS(1) ",%%ymm0        \n"
-    MEMOPREG(vmovdqu,0x00,1,4,1,ymm2)
-    "vpunpckhbw %%ymm2,%%ymm0,%%ymm1           \n"
-    "vpunpcklbw %%ymm2,%%ymm0,%%ymm0           \n"
-    "vpsubb     %%ymm4,%%ymm1,%%ymm1           \n"
-    "vpsubb     %%ymm4,%%ymm0,%%ymm0           \n"
-    "vpmaddubsw %%ymm1,%%ymm5,%%ymm1           \n"
-    "vpmaddubsw %%ymm0,%%ymm5,%%ymm0           \n"
-    "vpaddw     %%ymm4,%%ymm1,%%ymm1           \n"
-    "vpaddw     %%ymm4,%%ymm0,%%ymm0           \n"
-    "vpsrlw     $0x8,%%ymm1,%%ymm1             \n"
-    "vpsrlw     $0x8,%%ymm0,%%ymm0             \n"
-    "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
-    MEMOPMEM(vmovdqu,ymm0,0x00,1,0,1)
-    "lea       " MEMLEA(0x20,1) ",%1           \n"
-    "sub       $0x20,%2                        \n"
-    "jg        1b                              \n"
-    "jmp       99f                             \n"
+      // General purpose row blend.
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%1),%%ymm0                    \n"
+      "vmovdqu    0x00(%1,%4,1),%%ymm2           \n"
+      "vpunpckhbw %%ymm2,%%ymm0,%%ymm1           \n"
+      "vpunpcklbw %%ymm2,%%ymm0,%%ymm0           \n"
+      "vpsubb     %%ymm4,%%ymm1,%%ymm1           \n"
+      "vpsubb     %%ymm4,%%ymm0,%%ymm0           \n"
+      "vpmaddubsw %%ymm1,%%ymm5,%%ymm1           \n"
+      "vpmaddubsw %%ymm0,%%ymm5,%%ymm0           \n"
+      "vpaddw     %%ymm4,%%ymm1,%%ymm1           \n"
+      "vpaddw     %%ymm4,%%ymm0,%%ymm0           \n"
+      "vpsrlw     $0x8,%%ymm1,%%ymm1             \n"
+      "vpsrlw     $0x8,%%ymm0,%%ymm0             \n"
+      "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
+      "vmovdqu    %%ymm0,0x00(%1,%0,1)           \n"
+      "lea        0x20(%1),%1                    \n"
+      "sub        $0x20,%2                       \n"
+      "jg         1b                             \n"
+      "jmp        99f                            \n"
 
-    // Blend 50 / 50.
-    LABELALIGN
-  "50:                                         \n"
-    "vmovdqu    " MEMACCESS(1) ",%%ymm0        \n"
-    VMEMOPREG(vpavgb,0x00,1,4,1,ymm0,ymm0)     // vpavgb (%1,%4,1),%%ymm0,%%ymm0
-    MEMOPMEM(vmovdqu,ymm0,0x00,1,0,1)
-    "lea       " MEMLEA(0x20,1) ",%1           \n"
-    "sub       $0x20,%2                        \n"
-    "jg        50b                             \n"
-    "jmp       99f                             \n"
+      // Blend 50 / 50.
+      LABELALIGN
+      "50:                                       \n"
+      "vmovdqu   (%1),%%ymm0                     \n"
+      "vpavgb    0x00(%1,%4,1),%%ymm0,%%ymm0     \n"
+      "vmovdqu   %%ymm0,0x00(%1,%0,1)            \n"
+      "lea       0x20(%1),%1                     \n"
+      "sub       $0x20,%2                        \n"
+      "jg        50b                             \n"
+      "jmp       99f                             \n"
 
-    // Blend 100 / 0 - Copy row unchanged.
-    LABELALIGN
-  "100:                                        \n"
-    "rep movsb " MEMMOVESTRING(1,0) "          \n"
-    "jmp       999f                            \n"
+      // Blend 100 / 0 - Copy row unchanged.
+      LABELALIGN
+      "100:                                      \n"
+      "rep movsb                                 \n"
+      "jmp       999f                            \n"
 
-  "99:                                         \n"
-    "vzeroupper                                \n"
-  "999:                                        \n"
-  : "+D"(dst_ptr),    // %0
-    "+S"(src_ptr),    // %1
-    "+cm"(dst_width),  // %2
-    "+r"(source_y_fraction)  // %3
-  : "r"((intptr_t)(src_stride))  // %4
-  : "memory", "cc", "eax", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm4", "xmm5"
-  );
+      "99:                                       \n"
+      "vzeroupper                                \n"
+      "999:                                      \n"
+      : "+D"(dst_ptr),               // %0
+        "+S"(src_ptr),               // %1
+        "+cm"(dst_width),            // %2
+        "+r"(source_y_fraction)      // %3
+      : "r"((intptr_t)(src_stride))  // %4
+      : "memory", "cc", "eax", "xmm0", "xmm1", "xmm2", "xmm4", "xmm5");
 }
 #endif  // HAS_INTERPOLATEROW_AVX2
 
 #ifdef HAS_ARGBSHUFFLEROW_SSSE3
 // For BGRAToARGB, ABGRToARGB, RGBAToARGB, and ARGBToRGBA.
-void ARGBShuffleRow_SSSE3(const uint8* src_argb, uint8* dst_argb,
-                          const uint8* shuffler, int width) {
-  asm volatile (
-    "movdqu    " MEMACCESS(3) ",%%xmm5         \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "pshufb    %%xmm5,%%xmm0                   \n"
-    "pshufb    %%xmm5,%%xmm1                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "movdqu    %%xmm1," MEMACCESS2(0x10,1) "   \n"
-    "lea       " MEMLEA(0x20,1) ",%1           \n"
-    "sub       $0x8,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_argb),  // %1
-    "+r"(width)        // %2
-  : "r"(shuffler)    // %3
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm5"
-  );
+void ARGBShuffleRow_SSSE3(const uint8_t* src_argb,
+                          uint8_t* dst_argb,
+                          const uint8_t* shuffler,
+                          int width) {
+  asm volatile(
+
+      "movdqu    (%3),%%xmm5                     \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "pshufb    %%xmm5,%%xmm0                   \n"
+      "pshufb    %%xmm5,%%xmm1                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "movdqu    %%xmm1,0x10(%1)                 \n"
+      "lea       0x20(%1),%1                     \n"
+      "sub       $0x8,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      : "r"(shuffler)    // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm5");
 }
 #endif  // HAS_ARGBSHUFFLEROW_SSSE3
 
 #ifdef HAS_ARGBSHUFFLEROW_AVX2
 // For BGRAToARGB, ABGRToARGB, RGBAToARGB, and ARGBToRGBA.
-void ARGBShuffleRow_AVX2(const uint8* src_argb, uint8* dst_argb,
-                         const uint8* shuffler, int width) {
-  asm volatile (
-    "vbroadcastf128 " MEMACCESS(3) ",%%ymm5    \n"
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu   " MEMACCESS(0) ",%%ymm0         \n"
-    "vmovdqu   " MEMACCESS2(0x20,0) ",%%ymm1   \n"
-    "lea       " MEMLEA(0x40,0) ",%0           \n"
-    "vpshufb   %%ymm5,%%ymm0,%%ymm0            \n"
-    "vpshufb   %%ymm5,%%ymm1,%%ymm1            \n"
-    "vmovdqu   %%ymm0," MEMACCESS(1) "         \n"
-    "vmovdqu   %%ymm1," MEMACCESS2(0x20,1) "   \n"
-    "lea       " MEMLEA(0x40,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_argb),  // %1
-    "+r"(width)        // %2
-  : "r"(shuffler)    // %3
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm5"
-  );
+void ARGBShuffleRow_AVX2(const uint8_t* src_argb,
+                         uint8_t* dst_argb,
+                         const uint8_t* shuffler,
+                         int width) {
+  asm volatile(
+
+      "vbroadcastf128 (%3),%%ymm5                \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu   (%0),%%ymm0                     \n"
+      "vmovdqu   0x20(%0),%%ymm1                 \n"
+      "lea       0x40(%0),%0                     \n"
+      "vpshufb   %%ymm5,%%ymm0,%%ymm0            \n"
+      "vpshufb   %%ymm5,%%ymm1,%%ymm1            \n"
+      "vmovdqu   %%ymm0,(%1)                     \n"
+      "vmovdqu   %%ymm1,0x20(%1)                 \n"
+      "lea       0x40(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      : "r"(shuffler)    // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm5");
 }
 #endif  // HAS_ARGBSHUFFLEROW_AVX2
 
-#ifdef HAS_ARGBSHUFFLEROW_SSE2
-// For BGRAToARGB, ABGRToARGB, RGBAToARGB, and ARGBToRGBA.
-void ARGBShuffleRow_SSE2(const uint8* src_argb, uint8* dst_argb,
-                         const uint8* shuffler, int width) {
-  uintptr_t pixel_temp;
-  asm volatile (
-    "pxor      %%xmm5,%%xmm5                   \n"
-    "mov       " MEMACCESS(4) ",%k2            \n"
-    "cmp       $0x3000102,%k2                  \n"
-    "je        3012f                           \n"
-    "cmp       $0x10203,%k2                    \n"
-    "je        123f                            \n"
-    "cmp       $0x30201,%k2                    \n"
-    "je        321f                            \n"
-    "cmp       $0x2010003,%k2                  \n"
-    "je        2103f                           \n"
-
-    LABELALIGN
-  "1:                                          \n"
-    "movzb     " MEMACCESS(4) ",%2             \n"
-    MEMOPARG(movzb,0x00,0,2,1,2) "             \n"  //  movzb     (%0,%2,1),%2
-    "mov       %b2," MEMACCESS(1) "            \n"
-    "movzb     " MEMACCESS2(0x1,4) ",%2        \n"
-    MEMOPARG(movzb,0x00,0,2,1,2) "             \n"  //  movzb     (%0,%2,1),%2
-    "mov       %b2," MEMACCESS2(0x1,1) "       \n"
-    "movzb     " MEMACCESS2(0x2,4) ",%2        \n"
-    MEMOPARG(movzb,0x00,0,2,1,2) "             \n"  //  movzb     (%0,%2,1),%2
-    "mov       %b2," MEMACCESS2(0x2,1) "       \n"
-    "movzb     " MEMACCESS2(0x3,4) ",%2        \n"
-    MEMOPARG(movzb,0x00,0,2,1,2) "             \n"  //  movzb     (%0,%2,1),%2
-    "mov       %b2," MEMACCESS2(0x3,1) "       \n"
-    "lea       " MEMLEA(0x4,0) ",%0            \n"
-    "lea       " MEMLEA(0x4,1) ",%1            \n"
-    "sub       $0x1,%3                         \n"
-    "jg        1b                              \n"
-    "jmp       99f                             \n"
-
-    LABELALIGN
-  "123:                                        \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "punpcklbw %%xmm5,%%xmm0                   \n"
-    "punpckhbw %%xmm5,%%xmm1                   \n"
-    "pshufhw   $0x1b,%%xmm0,%%xmm0             \n"
-    "pshuflw   $0x1b,%%xmm0,%%xmm0             \n"
-    "pshufhw   $0x1b,%%xmm1,%%xmm1             \n"
-    "pshuflw   $0x1b,%%xmm1,%%xmm1             \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x4,%3                         \n"
-    "jg        123b                            \n"
-    "jmp       99f                             \n"
-
-    LABELALIGN
-  "321:                                        \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "punpcklbw %%xmm5,%%xmm0                   \n"
-    "punpckhbw %%xmm5,%%xmm1                   \n"
-    "pshufhw   $0x39,%%xmm0,%%xmm0             \n"
-    "pshuflw   $0x39,%%xmm0,%%xmm0             \n"
-    "pshufhw   $0x39,%%xmm1,%%xmm1             \n"
-    "pshuflw   $0x39,%%xmm1,%%xmm1             \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x4,%3                         \n"
-    "jg        321b                            \n"
-    "jmp       99f                             \n"
-
-    LABELALIGN
-  "2103:                                       \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "punpcklbw %%xmm5,%%xmm0                   \n"
-    "punpckhbw %%xmm5,%%xmm1                   \n"
-    "pshufhw   $0x93,%%xmm0,%%xmm0             \n"
-    "pshuflw   $0x93,%%xmm0,%%xmm0             \n"
-    "pshufhw   $0x93,%%xmm1,%%xmm1             \n"
-    "pshuflw   $0x93,%%xmm1,%%xmm1             \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x4,%3                         \n"
-    "jg        2103b                           \n"
-    "jmp       99f                             \n"
-
-    LABELALIGN
-  "3012:                                       \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "punpcklbw %%xmm5,%%xmm0                   \n"
-    "punpckhbw %%xmm5,%%xmm1                   \n"
-    "pshufhw   $0xc6,%%xmm0,%%xmm0             \n"
-    "pshuflw   $0xc6,%%xmm0,%%xmm0             \n"
-    "pshufhw   $0xc6,%%xmm1,%%xmm1             \n"
-    "pshuflw   $0xc6,%%xmm1,%%xmm1             \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x4,%3                         \n"
-    "jg        3012b                           \n"
-
-  "99:                                         \n"
-  : "+r"(src_argb),     // %0
-    "+r"(dst_argb),     // %1
-    "=&d"(pixel_temp),  // %2
-    "+r"(width)         // %3
-  : "r"(shuffler)       // %4
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm5"
-  );
-}
-#endif  // HAS_ARGBSHUFFLEROW_SSE2
-
 #ifdef HAS_I422TOYUY2ROW_SSE2
-void I422ToYUY2Row_SSE2(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_frame, int width) {
- asm volatile (
-    "sub       %1,%2                             \n"
-    LABELALIGN
-  "1:                                            \n"
-    "movq      " MEMACCESS(1) ",%%xmm2           \n"
-    MEMOPREG(movq,0x00,1,2,1,xmm3)               //  movq    (%1,%2,1),%%xmm3
-    "lea       " MEMLEA(0x8,1) ",%1              \n"
-    "punpcklbw %%xmm3,%%xmm2                     \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0           \n"
-    "lea       " MEMLEA(0x10,0) ",%0             \n"
-    "movdqa    %%xmm0,%%xmm1                     \n"
-    "punpcklbw %%xmm2,%%xmm0                     \n"
-    "punpckhbw %%xmm2,%%xmm1                     \n"
-    "movdqu    %%xmm0," MEMACCESS(3) "           \n"
-    "movdqu    %%xmm1," MEMACCESS2(0x10,3) "     \n"
-    "lea       " MEMLEA(0x20,3) ",%3             \n"
-    "sub       $0x10,%4                          \n"
-    "jg         1b                               \n"
-    : "+r"(src_y),  // %0
-      "+r"(src_u),  // %1
-      "+r"(src_v),  // %2
-      "+r"(dst_frame),  // %3
-      "+rm"(width)  // %4
-    :
-    : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3"
-  );
+void I422ToYUY2Row_SSE2(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_yuy2,
+                        int width) {
+  asm volatile(
+
+      "sub       %1,%2                             \n"
+
+      LABELALIGN
+      "1:                                          \n"
+      "movq      (%1),%%xmm2                       \n"
+      "movq      0x00(%1,%2,1),%%xmm1              \n"
+      "add       $0x8,%1                           \n"
+      "punpcklbw %%xmm1,%%xmm2                     \n"
+      "movdqu    (%0),%%xmm0                       \n"
+      "add       $0x10,%0                          \n"
+      "movdqa    %%xmm0,%%xmm1                     \n"
+      "punpcklbw %%xmm2,%%xmm0                     \n"
+      "punpckhbw %%xmm2,%%xmm1                     \n"
+      "movdqu    %%xmm0,(%3)                       \n"
+      "movdqu    %%xmm1,0x10(%3)                   \n"
+      "lea       0x20(%3),%3                       \n"
+      "sub       $0x10,%4                          \n"
+      "jg         1b                               \n"
+      : "+r"(src_y),     // %0
+        "+r"(src_u),     // %1
+        "+r"(src_v),     // %2
+        "+r"(dst_yuy2),  // %3
+        "+rm"(width)     // %4
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2");
 }
 #endif  // HAS_I422TOYUY2ROW_SSE2
 
 #ifdef HAS_I422TOUYVYROW_SSE2
-void I422ToUYVYRow_SSE2(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_frame, int width) {
- asm volatile (
-    "sub        %1,%2                            \n"
-    LABELALIGN
-  "1:                                            \n"
-    "movq      " MEMACCESS(1) ",%%xmm2           \n"
-    MEMOPREG(movq,0x00,1,2,1,xmm3)               //  movq    (%1,%2,1),%%xmm3
-    "lea       " MEMLEA(0x8,1) ",%1              \n"
-    "punpcklbw %%xmm3,%%xmm2                     \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0           \n"
-    "movdqa    %%xmm2,%%xmm1                     \n"
-    "lea       " MEMLEA(0x10,0) ",%0             \n"
-    "punpcklbw %%xmm0,%%xmm1                     \n"
-    "punpckhbw %%xmm0,%%xmm2                     \n"
-    "movdqu    %%xmm1," MEMACCESS(3) "           \n"
-    "movdqu    %%xmm2," MEMACCESS2(0x10,3) "     \n"
-    "lea       " MEMLEA(0x20,3) ",%3             \n"
-    "sub       $0x10,%4                          \n"
-    "jg         1b                               \n"
-    : "+r"(src_y),  // %0
-      "+r"(src_u),  // %1
-      "+r"(src_v),  // %2
-      "+r"(dst_frame),  // %3
-      "+rm"(width)  // %4
-    :
-    : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3"
-  );
+void I422ToUYVYRow_SSE2(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_uyvy,
+                        int width) {
+  asm volatile(
+
+      "sub        %1,%2                            \n"
+
+      LABELALIGN
+      "1:                                          \n"
+      "movq      (%1),%%xmm2                       \n"
+      "movq      0x00(%1,%2,1),%%xmm1              \n"
+      "add       $0x8,%1                           \n"
+      "punpcklbw %%xmm1,%%xmm2                     \n"
+      "movdqu    (%0),%%xmm0                       \n"
+      "movdqa    %%xmm2,%%xmm1                     \n"
+      "add       $0x10,%0                          \n"
+      "punpcklbw %%xmm0,%%xmm1                     \n"
+      "punpckhbw %%xmm0,%%xmm2                     \n"
+      "movdqu    %%xmm1,(%3)                       \n"
+      "movdqu    %%xmm2,0x10(%3)                   \n"
+      "lea       0x20(%3),%3                       \n"
+      "sub       $0x10,%4                          \n"
+      "jg         1b                               \n"
+      : "+r"(src_y),     // %0
+        "+r"(src_u),     // %1
+        "+r"(src_v),     // %2
+        "+r"(dst_uyvy),  // %3
+        "+rm"(width)     // %4
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2");
 }
 #endif  // HAS_I422TOUYVYROW_SSE2
 
-#ifdef HAS_ARGBPOLYNOMIALROW_SSE2
-void ARGBPolynomialRow_SSE2(const uint8* src_argb,
-                            uint8* dst_argb, const float* poly,
-                            int width) {
-  asm volatile (
-    "pxor      %%xmm3,%%xmm3                   \n"
+#ifdef HAS_I422TOYUY2ROW_AVX2
+void I422ToYUY2Row_AVX2(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_yuy2,
+                        int width) {
+  asm volatile(
 
-    // 2 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movq      " MEMACCESS(0) ",%%xmm0         \n"
-    "lea       " MEMLEA(0x8,0) ",%0            \n"
-    "punpcklbw %%xmm3,%%xmm0                   \n"
-    "movdqa    %%xmm0,%%xmm4                   \n"
-    "punpcklwd %%xmm3,%%xmm0                   \n"
-    "punpckhwd %%xmm3,%%xmm4                   \n"
-    "cvtdq2ps  %%xmm0,%%xmm0                   \n"
-    "cvtdq2ps  %%xmm4,%%xmm4                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "movdqa    %%xmm4,%%xmm5                   \n"
-    "mulps     " MEMACCESS2(0x10,3) ",%%xmm0   \n"
-    "mulps     " MEMACCESS2(0x10,3) ",%%xmm4   \n"
-    "addps     " MEMACCESS(3) ",%%xmm0         \n"
-    "addps     " MEMACCESS(3) ",%%xmm4         \n"
-    "movdqa    %%xmm1,%%xmm2                   \n"
-    "movdqa    %%xmm5,%%xmm6                   \n"
-    "mulps     %%xmm1,%%xmm2                   \n"
-    "mulps     %%xmm5,%%xmm6                   \n"
-    "mulps     %%xmm2,%%xmm1                   \n"
-    "mulps     %%xmm6,%%xmm5                   \n"
-    "mulps     " MEMACCESS2(0x20,3) ",%%xmm2   \n"
-    "mulps     " MEMACCESS2(0x20,3) ",%%xmm6   \n"
-    "mulps     " MEMACCESS2(0x30,3) ",%%xmm1   \n"
-    "mulps     " MEMACCESS2(0x30,3) ",%%xmm5   \n"
-    "addps     %%xmm2,%%xmm0                   \n"
-    "addps     %%xmm6,%%xmm4                   \n"
-    "addps     %%xmm1,%%xmm0                   \n"
-    "addps     %%xmm5,%%xmm4                   \n"
-    "cvttps2dq %%xmm0,%%xmm0                   \n"
-    "cvttps2dq %%xmm4,%%xmm4                   \n"
-    "packuswb  %%xmm4,%%xmm0                   \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
-    "movq      %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $0x2,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_argb),  // %1
-    "+r"(width)      // %2
-  : "r"(poly)        // %3
-  : "memory", "cc"
-    , "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6"
-  );
+      "sub       %1,%2                             \n"
+
+      LABELALIGN
+      "1:                                          \n"
+      "vpmovzxbw  (%1),%%ymm1                      \n"
+      "vpmovzxbw  0x00(%1,%2,1),%%ymm2             \n"
+      "add        $0x10,%1                         \n"
+      "vpsllw     $0x8,%%ymm2,%%ymm2               \n"
+      "vpor       %%ymm1,%%ymm2,%%ymm2             \n"
+      "vmovdqu    (%0),%%ymm0                      \n"
+      "add        $0x20,%0                         \n"
+      "vpunpcklbw %%ymm2,%%ymm0,%%ymm1             \n"
+      "vpunpckhbw %%ymm2,%%ymm0,%%ymm2             \n"
+      "vextractf128 $0x0,%%ymm1,(%3)               \n"
+      "vextractf128 $0x0,%%ymm2,0x10(%3)           \n"
+      "vextractf128 $0x1,%%ymm1,0x20(%3)           \n"
+      "vextractf128 $0x1,%%ymm2,0x30(%3)           \n"
+      "lea        0x40(%3),%3                      \n"
+      "sub        $0x20,%4                         \n"
+      "jg         1b                               \n"
+      "vzeroupper                                  \n"
+      : "+r"(src_y),     // %0
+        "+r"(src_u),     // %1
+        "+r"(src_v),     // %2
+        "+r"(dst_yuy2),  // %3
+        "+rm"(width)     // %4
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2");
+}
+#endif  // HAS_I422TOYUY2ROW_AVX2
+
+#ifdef HAS_I422TOUYVYROW_AVX2
+void I422ToUYVYRow_AVX2(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_uyvy,
+                        int width) {
+  asm volatile(
+
+      "sub        %1,%2                            \n"
+
+      LABELALIGN
+      "1:                                          \n"
+      "vpmovzxbw  (%1),%%ymm1                      \n"
+      "vpmovzxbw  0x00(%1,%2,1),%%ymm2             \n"
+      "add        $0x10,%1                         \n"
+      "vpsllw     $0x8,%%ymm2,%%ymm2               \n"
+      "vpor       %%ymm1,%%ymm2,%%ymm2             \n"
+      "vmovdqu    (%0),%%ymm0                      \n"
+      "add        $0x20,%0                         \n"
+      "vpunpcklbw %%ymm0,%%ymm2,%%ymm1             \n"
+      "vpunpckhbw %%ymm0,%%ymm2,%%ymm2             \n"
+      "vextractf128 $0x0,%%ymm1,(%3)               \n"
+      "vextractf128 $0x0,%%ymm2,0x10(%3)           \n"
+      "vextractf128 $0x1,%%ymm1,0x20(%3)           \n"
+      "vextractf128 $0x1,%%ymm2,0x30(%3)           \n"
+      "lea        0x40(%3),%3                      \n"
+      "sub        $0x20,%4                         \n"
+      "jg         1b                               \n"
+      "vzeroupper                                  \n"
+      : "+r"(src_y),     // %0
+        "+r"(src_u),     // %1
+        "+r"(src_v),     // %2
+        "+r"(dst_uyvy),  // %3
+        "+rm"(width)     // %4
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2");
+}
+#endif  // HAS_I422TOUYVYROW_AVX2
+
+#ifdef HAS_ARGBPOLYNOMIALROW_SSE2
+void ARGBPolynomialRow_SSE2(const uint8_t* src_argb,
+                            uint8_t* dst_argb,
+                            const float* poly,
+                            int width) {
+  asm volatile(
+
+      "pxor      %%xmm3,%%xmm3                   \n"
+
+      // 2 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movq      (%0),%%xmm0                     \n"
+      "lea       0x8(%0),%0                      \n"
+      "punpcklbw %%xmm3,%%xmm0                   \n"
+      "movdqa    %%xmm0,%%xmm4                   \n"
+      "punpcklwd %%xmm3,%%xmm0                   \n"
+      "punpckhwd %%xmm3,%%xmm4                   \n"
+      "cvtdq2ps  %%xmm0,%%xmm0                   \n"
+      "cvtdq2ps  %%xmm4,%%xmm4                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "movdqa    %%xmm4,%%xmm5                   \n"
+      "mulps     0x10(%3),%%xmm0                 \n"
+      "mulps     0x10(%3),%%xmm4                 \n"
+      "addps     (%3),%%xmm0                     \n"
+      "addps     (%3),%%xmm4                     \n"
+      "movdqa    %%xmm1,%%xmm2                   \n"
+      "movdqa    %%xmm5,%%xmm6                   \n"
+      "mulps     %%xmm1,%%xmm2                   \n"
+      "mulps     %%xmm5,%%xmm6                   \n"
+      "mulps     %%xmm2,%%xmm1                   \n"
+      "mulps     %%xmm6,%%xmm5                   \n"
+      "mulps     0x20(%3),%%xmm2                 \n"
+      "mulps     0x20(%3),%%xmm6                 \n"
+      "mulps     0x30(%3),%%xmm1                 \n"
+      "mulps     0x30(%3),%%xmm5                 \n"
+      "addps     %%xmm2,%%xmm0                   \n"
+      "addps     %%xmm6,%%xmm4                   \n"
+      "addps     %%xmm1,%%xmm0                   \n"
+      "addps     %%xmm5,%%xmm4                   \n"
+      "cvttps2dq %%xmm0,%%xmm0                   \n"
+      "cvttps2dq %%xmm4,%%xmm4                   \n"
+      "packuswb  %%xmm4,%%xmm0                   \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
+      "movq      %%xmm0,(%1)                     \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $0x2,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      : "r"(poly)        // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6");
 }
 #endif  // HAS_ARGBPOLYNOMIALROW_SSE2
 
 #ifdef HAS_ARGBPOLYNOMIALROW_AVX2
-void ARGBPolynomialRow_AVX2(const uint8* src_argb,
-                            uint8* dst_argb, const float* poly,
+void ARGBPolynomialRow_AVX2(const uint8_t* src_argb,
+                            uint8_t* dst_argb,
+                            const float* poly,
                             int width) {
-  asm volatile (
-    "vbroadcastf128 " MEMACCESS(3) ",%%ymm4     \n"
-    "vbroadcastf128 " MEMACCESS2(0x10,3) ",%%ymm5 \n"
-    "vbroadcastf128 " MEMACCESS2(0x20,3) ",%%ymm6 \n"
-    "vbroadcastf128 " MEMACCESS2(0x30,3) ",%%ymm7 \n"
+  asm volatile(
+      "vbroadcastf128 (%3),%%ymm4                \n"
+      "vbroadcastf128 0x10(%3),%%ymm5            \n"
+      "vbroadcastf128 0x20(%3),%%ymm6            \n"
+      "vbroadcastf128 0x30(%3),%%ymm7            \n"
 
-    // 2 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "vpmovzxbd   " MEMACCESS(0) ",%%ymm0       \n"  // 2 ARGB pixels
-    "lea         " MEMLEA(0x8,0) ",%0          \n"
-    "vcvtdq2ps   %%ymm0,%%ymm0                 \n"  // X 8 floats
-    "vmulps      %%ymm0,%%ymm0,%%ymm2          \n"  // X * X
-    "vmulps      %%ymm7,%%ymm0,%%ymm3          \n"  // C3 * X
-    "vfmadd132ps %%ymm5,%%ymm4,%%ymm0          \n"  // result = C0 + C1 * X
-    "vfmadd231ps %%ymm6,%%ymm2,%%ymm0          \n"  // result += C2 * X * X
-    "vfmadd231ps %%ymm3,%%ymm2,%%ymm0          \n"  // result += C3 * X * X * X
-    "vcvttps2dq  %%ymm0,%%ymm0                 \n"
-    "vpackusdw   %%ymm0,%%ymm0,%%ymm0          \n"
-    "vpermq      $0xd8,%%ymm0,%%ymm0           \n"
-    "vpackuswb   %%xmm0,%%xmm0,%%xmm0          \n"
-    "vmovq       %%xmm0," MEMACCESS(1) "       \n"
-    "lea         " MEMLEA(0x8,1) ",%1          \n"
-    "sub         $0x2,%2                       \n"
-    "jg          1b                            \n"
-    "vzeroupper                                \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_argb),  // %1
-    "+r"(width)      // %2
-  : "r"(poly)        // %3
-  : "memory", "cc",
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+      // 2 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "vpmovzxbd   (%0),%%ymm0                   \n"  // 2 ARGB pixels
+      "lea         0x8(%0),%0                    \n"
+      "vcvtdq2ps   %%ymm0,%%ymm0                 \n"  // X 8 floats
+      "vmulps      %%ymm0,%%ymm0,%%ymm2          \n"  // X * X
+      "vmulps      %%ymm7,%%ymm0,%%ymm3          \n"  // C3 * X
+      "vfmadd132ps %%ymm5,%%ymm4,%%ymm0          \n"  // result = C0 + C1 * X
+      "vfmadd231ps %%ymm6,%%ymm2,%%ymm0          \n"  // result += C2 * X * X
+      "vfmadd231ps %%ymm3,%%ymm2,%%ymm0          \n"  // result += C3 * X * X *
+                                                      // X
+      "vcvttps2dq  %%ymm0,%%ymm0                 \n"
+      "vpackusdw   %%ymm0,%%ymm0,%%ymm0          \n"
+      "vpermq      $0xd8,%%ymm0,%%ymm0           \n"
+      "vpackuswb   %%xmm0,%%xmm0,%%xmm0          \n"
+      "vmovq       %%xmm0,(%1)                   \n"
+      "lea         0x8(%1),%1                    \n"
+      "sub         $0x2,%2                       \n"
+      "jg          1b                            \n"
+      "vzeroupper                                \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      : "r"(poly)        // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 #endif  // HAS_ARGBPOLYNOMIALROW_AVX2
 
+#ifdef HAS_HALFFLOATROW_SSE2
+static float kScaleBias = 1.9259299444e-34f;
+void HalfFloatRow_SSE2(const uint16_t* src,
+                       uint16_t* dst,
+                       float scale,
+                       int width) {
+  scale *= kScaleBias;
+  asm volatile(
+      "movd        %3,%%xmm4                     \n"
+      "pshufd      $0x0,%%xmm4,%%xmm4            \n"
+      "pxor        %%xmm5,%%xmm5                 \n"
+      "sub         %0,%1                         \n"
+
+      // 16 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu      (%0),%%xmm2                   \n"  // 8 shorts
+      "add         $0x10,%0                      \n"
+      "movdqa      %%xmm2,%%xmm3                 \n"
+      "punpcklwd   %%xmm5,%%xmm2                 \n"  // 8 ints in xmm2/1
+      "cvtdq2ps    %%xmm2,%%xmm2                 \n"  // 8 floats
+      "punpckhwd   %%xmm5,%%xmm3                 \n"
+      "cvtdq2ps    %%xmm3,%%xmm3                 \n"
+      "mulps       %%xmm4,%%xmm2                 \n"
+      "mulps       %%xmm4,%%xmm3                 \n"
+      "psrld       $0xd,%%xmm2                   \n"
+      "psrld       $0xd,%%xmm3                   \n"
+      "packssdw    %%xmm3,%%xmm2                 \n"
+      "movdqu      %%xmm2,-0x10(%0,%1,1)         \n"
+      "sub         $0x8,%2                       \n"
+      "jg          1b                            \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      : "m"(scale)   // %3
+      : "memory", "cc", "xmm2", "xmm3", "xmm4", "xmm5");
+}
+#endif  // HAS_HALFFLOATROW_SSE2
+
+#ifdef HAS_HALFFLOATROW_AVX2
+void HalfFloatRow_AVX2(const uint16_t* src,
+                       uint16_t* dst,
+                       float scale,
+                       int width) {
+  scale *= kScaleBias;
+  asm volatile(
+      "vbroadcastss  %3, %%ymm4                  \n"
+      "vpxor      %%ymm5,%%ymm5,%%ymm5           \n"
+      "sub        %0,%1                          \n"
+
+      // 16 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm2                    \n"  // 16 shorts
+      "add        $0x20,%0                       \n"
+      "vpunpckhwd %%ymm5,%%ymm2,%%ymm3           \n"  // mutates
+      "vpunpcklwd %%ymm5,%%ymm2,%%ymm2           \n"
+      "vcvtdq2ps  %%ymm3,%%ymm3                  \n"
+      "vcvtdq2ps  %%ymm2,%%ymm2                  \n"
+      "vmulps     %%ymm3,%%ymm4,%%ymm3           \n"
+      "vmulps     %%ymm2,%%ymm4,%%ymm2           \n"
+      "vpsrld     $0xd,%%ymm3,%%ymm3             \n"
+      "vpsrld     $0xd,%%ymm2,%%ymm2             \n"
+      "vpackssdw  %%ymm3, %%ymm2, %%ymm2         \n"  // unmutates
+      "vmovdqu    %%ymm2,-0x20(%0,%1,1)          \n"
+      "sub        $0x10,%2                       \n"
+      "jg         1b                             \n"
+
+      "vzeroupper                                \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+#if defined(__x86_64__)
+      : "x"(scale)  // %3
+#else
+      : "m"(scale)  // %3
+#endif
+      : "memory", "cc", "xmm2", "xmm3", "xmm4", "xmm5");
+}
+#endif  // HAS_HALFFLOATROW_AVX2
+
+#ifdef HAS_HALFFLOATROW_F16C
+void HalfFloatRow_F16C(const uint16_t* src,
+                       uint16_t* dst,
+                       float scale,
+                       int width) {
+  asm volatile(
+      "vbroadcastss  %3, %%ymm4                  \n"
+      "sub        %0,%1                          \n"
+
+      // 16 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "vpmovzxwd   (%0),%%ymm2                   \n"  // 16 shorts -> 16 ints
+      "vpmovzxwd   0x10(%0),%%ymm3               \n"
+      "vcvtdq2ps   %%ymm2,%%ymm2                 \n"
+      "vcvtdq2ps   %%ymm3,%%ymm3                 \n"
+      "vmulps      %%ymm2,%%ymm4,%%ymm2          \n"
+      "vmulps      %%ymm3,%%ymm4,%%ymm3          \n"
+      "vcvtps2ph   $3, %%ymm2, %%xmm2            \n"
+      "vcvtps2ph   $3, %%ymm3, %%xmm3            \n"
+      "vmovdqu     %%xmm2,0x00(%0,%1,1)          \n"
+      "vmovdqu     %%xmm3,0x10(%0,%1,1)          \n"
+      "add         $0x20,%0                      \n"
+      "sub         $0x10,%2                      \n"
+      "jg          1b                            \n"
+      "vzeroupper                                \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+#if defined(__x86_64__)
+      : "x"(scale)  // %3
+#else
+      : "m"(scale)  // %3
+#endif
+      : "memory", "cc", "xmm2", "xmm3", "xmm4");
+}
+#endif  // HAS_HALFFLOATROW_F16C
+
+#ifdef HAS_HALFFLOATROW_F16C
+void HalfFloat1Row_F16C(const uint16_t* src, uint16_t* dst, float, int width) {
+  asm volatile(
+      "sub        %0,%1                          \n"
+      // 16 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "vpmovzxwd   (%0),%%ymm2                   \n"  // 16 shorts -> 16 ints
+      "vpmovzxwd   0x10(%0),%%ymm3               \n"
+      "vcvtdq2ps   %%ymm2,%%ymm2                 \n"
+      "vcvtdq2ps   %%ymm3,%%ymm3                 \n"
+      "vcvtps2ph   $3, %%ymm2, %%xmm2            \n"
+      "vcvtps2ph   $3, %%ymm3, %%xmm3            \n"
+      "vmovdqu     %%xmm2,0x00(%0,%1,1)          \n"
+      "vmovdqu     %%xmm3,0x10(%0,%1,1)          \n"
+      "add         $0x20,%0                      \n"
+      "sub         $0x10,%2                      \n"
+      "jg          1b                            \n"
+      "vzeroupper                                \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      :
+      : "memory", "cc", "xmm2", "xmm3");
+}
+#endif  // HAS_HALFFLOATROW_F16C
+
 #ifdef HAS_ARGBCOLORTABLEROW_X86
 // Tranform ARGB pixels with color table.
-void ARGBColorTableRow_X86(uint8* dst_argb, const uint8* table_argb,
+void ARGBColorTableRow_X86(uint8_t* dst_argb,
+                           const uint8_t* table_argb,
                            int width) {
   uintptr_t pixel_temp;
-  asm volatile (
-    // 1 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movzb     " MEMACCESS(0) ",%1             \n"
-    "lea       " MEMLEA(0x4,0) ",%0            \n"
-    MEMOPARG(movzb,0x00,3,1,4,1) "             \n"  // movzb (%3,%1,4),%1
-    "mov       %b1," MEMACCESS2(-0x4,0) "      \n"
-    "movzb     " MEMACCESS2(-0x3,0) ",%1       \n"
-    MEMOPARG(movzb,0x01,3,1,4,1) "             \n"  // movzb 0x1(%3,%1,4),%1
-    "mov       %b1," MEMACCESS2(-0x3,0) "      \n"
-    "movzb     " MEMACCESS2(-0x2,0) ",%1       \n"
-    MEMOPARG(movzb,0x02,3,1,4,1) "             \n"  // movzb 0x2(%3,%1,4),%1
-    "mov       %b1," MEMACCESS2(-0x2,0) "      \n"
-    "movzb     " MEMACCESS2(-0x1,0) ",%1       \n"
-    MEMOPARG(movzb,0x03,3,1,4,1) "             \n"  // movzb 0x3(%3,%1,4),%1
-    "mov       %b1," MEMACCESS2(-0x1,0) "      \n"
-    "dec       %2                              \n"
-    "jg        1b                              \n"
-  : "+r"(dst_argb),     // %0
-    "=&d"(pixel_temp),  // %1
-    "+r"(width)         // %2
-  : "r"(table_argb)     // %3
-  : "memory", "cc");
+  asm volatile(
+      // 1 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movzb     (%0),%1                         \n"
+      "lea       0x4(%0),%0                      \n"
+      "movzb     0x00(%3,%1,4),%1                \n"
+      "mov       %b1,-0x4(%0)                    \n"
+      "movzb     -0x3(%0),%1                     \n"
+      "movzb     0x01(%3,%1,4),%1                \n"
+      "mov       %b1,-0x3(%0)                    \n"
+      "movzb     -0x2(%0),%1                     \n"
+      "movzb     0x02(%3,%1,4),%1                \n"
+      "mov       %b1,-0x2(%0)                    \n"
+      "movzb     -0x1(%0),%1                     \n"
+      "movzb     0x03(%3,%1,4),%1                \n"
+      "mov       %b1,-0x1(%0)                    \n"
+      "dec       %2                              \n"
+      "jg        1b                              \n"
+      : "+r"(dst_argb),     // %0
+        "=&d"(pixel_temp),  // %1
+        "+r"(width)         // %2
+      : "r"(table_argb)     // %3
+      : "memory", "cc");
 }
 #endif  // HAS_ARGBCOLORTABLEROW_X86
 
 #ifdef HAS_RGBCOLORTABLEROW_X86
 // Tranform RGB pixels with color table.
-void RGBColorTableRow_X86(uint8* dst_argb, const uint8* table_argb, int width) {
+void RGBColorTableRow_X86(uint8_t* dst_argb,
+                          const uint8_t* table_argb,
+                          int width) {
   uintptr_t pixel_temp;
-  asm volatile (
-    // 1 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movzb     " MEMACCESS(0) ",%1             \n"
-    "lea       " MEMLEA(0x4,0) ",%0            \n"
-    MEMOPARG(movzb,0x00,3,1,4,1) "             \n"  // movzb (%3,%1,4),%1
-    "mov       %b1," MEMACCESS2(-0x4,0) "      \n"
-    "movzb     " MEMACCESS2(-0x3,0) ",%1       \n"
-    MEMOPARG(movzb,0x01,3,1,4,1) "             \n"  // movzb 0x1(%3,%1,4),%1
-    "mov       %b1," MEMACCESS2(-0x3,0) "      \n"
-    "movzb     " MEMACCESS2(-0x2,0) ",%1       \n"
-    MEMOPARG(movzb,0x02,3,1,4,1) "             \n"  // movzb 0x2(%3,%1,4),%1
-    "mov       %b1," MEMACCESS2(-0x2,0) "      \n"
-    "dec       %2                              \n"
-    "jg        1b                              \n"
-  : "+r"(dst_argb),     // %0
-    "=&d"(pixel_temp),  // %1
-    "+r"(width)         // %2
-  : "r"(table_argb)     // %3
-  : "memory", "cc");
+  asm volatile(
+      // 1 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movzb     (%0),%1                         \n"
+      "lea       0x4(%0),%0                      \n"
+      "movzb     0x00(%3,%1,4),%1                \n"
+      "mov       %b1,-0x4(%0)                    \n"
+      "movzb     -0x3(%0),%1                     \n"
+      "movzb     0x01(%3,%1,4),%1                \n"
+      "mov       %b1,-0x3(%0)                    \n"
+      "movzb     -0x2(%0),%1                     \n"
+      "movzb     0x02(%3,%1,4),%1                \n"
+      "mov       %b1,-0x2(%0)                    \n"
+      "dec       %2                              \n"
+      "jg        1b                              \n"
+      : "+r"(dst_argb),     // %0
+        "=&d"(pixel_temp),  // %1
+        "+r"(width)         // %2
+      : "r"(table_argb)     // %3
+      : "memory", "cc");
 }
 #endif  // HAS_RGBCOLORTABLEROW_X86
 
 #ifdef HAS_ARGBLUMACOLORTABLEROW_SSSE3
 // Transform RGB pixels with luma table.
-void ARGBLumaColorTableRow_SSSE3(const uint8* src_argb, uint8* dst_argb,
+void ARGBLumaColorTableRow_SSSE3(const uint8_t* src_argb,
+                                 uint8_t* dst_argb,
                                  int width,
-                                 const uint8* luma, uint32 lumacoeff) {
+                                 const uint8_t* luma,
+                                 uint32_t lumacoeff) {
   uintptr_t pixel_temp;
   uintptr_t table_temp;
-  asm volatile (
-    "movd      %6,%%xmm3                       \n"
-    "pshufd    $0x0,%%xmm3,%%xmm3              \n"
-    "pcmpeqb   %%xmm4,%%xmm4                   \n"
-    "psllw     $0x8,%%xmm4                     \n"
-    "pxor      %%xmm5,%%xmm5                   \n"
+  asm volatile(
+      "movd      %6,%%xmm3                       \n"
+      "pshufd    $0x0,%%xmm3,%%xmm3              \n"
+      "pcmpeqb   %%xmm4,%%xmm4                   \n"
+      "psllw     $0x8,%%xmm4                     \n"
+      "pxor      %%xmm5,%%xmm5                   \n"
 
-    // 4 pixel loop.
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(2) ",%%xmm0         \n"
-    "pmaddubsw %%xmm3,%%xmm0                   \n"
-    "phaddw    %%xmm0,%%xmm0                   \n"
-    "pand      %%xmm4,%%xmm0                   \n"
-    "punpcklwd %%xmm5,%%xmm0                   \n"
-    "movd      %%xmm0,%k1                      \n"  // 32 bit offset
-    "add       %5,%1                           \n"
-    "pshufd    $0x39,%%xmm0,%%xmm0             \n"
+      // 4 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%2),%%xmm0                     \n"
+      "pmaddubsw %%xmm3,%%xmm0                   \n"
+      "phaddw    %%xmm0,%%xmm0                   \n"
+      "pand      %%xmm4,%%xmm0                   \n"
+      "punpcklwd %%xmm5,%%xmm0                   \n"
+      "movd      %%xmm0,%k1                      \n"  // 32 bit offset
+      "add       %5,%1                           \n"
+      "pshufd    $0x39,%%xmm0,%%xmm0             \n"
 
-    "movzb     " MEMACCESS(2) ",%0             \n"
-    MEMOPARG(movzb,0x00,1,0,1,0) "             \n"  // movzb     (%1,%0,1),%0
-    "mov       %b0," MEMACCESS(3) "            \n"
-    "movzb     " MEMACCESS2(0x1,2) ",%0        \n"
-    MEMOPARG(movzb,0x00,1,0,1,0) "             \n"  // movzb     (%1,%0,1),%0
-    "mov       %b0," MEMACCESS2(0x1,3) "       \n"
-    "movzb     " MEMACCESS2(0x2,2) ",%0        \n"
-    MEMOPARG(movzb,0x00,1,0,1,0) "             \n"  // movzb     (%1,%0,1),%0
-    "mov       %b0," MEMACCESS2(0x2,3) "       \n"
-    "movzb     " MEMACCESS2(0x3,2) ",%0        \n"
-    "mov       %b0," MEMACCESS2(0x3,3) "       \n"
+      "movzb     (%2),%0                         \n"
+      "movzb     0x00(%1,%0,1),%0                \n"
+      "mov       %b0,(%3)                        \n"
+      "movzb     0x1(%2),%0                      \n"
+      "movzb     0x00(%1,%0,1),%0                \n"
+      "mov       %b0,0x1(%3)                     \n"
+      "movzb     0x2(%2),%0                      \n"
+      "movzb     0x00(%1,%0,1),%0                \n"
+      "mov       %b0,0x2(%3)                     \n"
+      "movzb     0x3(%2),%0                      \n"
+      "mov       %b0,0x3(%3)                     \n"
 
-    "movd      %%xmm0,%k1                      \n"  // 32 bit offset
-    "add       %5,%1                           \n"
-    "pshufd    $0x39,%%xmm0,%%xmm0             \n"
+      "movd      %%xmm0,%k1                      \n"  // 32 bit offset
+      "add       %5,%1                           \n"
+      "pshufd    $0x39,%%xmm0,%%xmm0             \n"
 
-    "movzb     " MEMACCESS2(0x4,2) ",%0        \n"
-    MEMOPARG(movzb,0x00,1,0,1,0) "             \n"  // movzb     (%1,%0,1),%0
-    "mov       %b0," MEMACCESS2(0x4,3) "       \n"
-    "movzb     " MEMACCESS2(0x5,2) ",%0        \n"
-    MEMOPARG(movzb,0x00,1,0,1,0) "             \n"  // movzb     (%1,%0,1),%0
-    "mov       %b0," MEMACCESS2(0x5,3) "       \n"
-    "movzb     " MEMACCESS2(0x6,2) ",%0        \n"
-    MEMOPARG(movzb,0x00,1,0,1,0) "             \n"  // movzb     (%1,%0,1),%0
-    "mov       %b0," MEMACCESS2(0x6,3) "       \n"
-    "movzb     " MEMACCESS2(0x7,2) ",%0        \n"
-    "mov       %b0," MEMACCESS2(0x7,3) "       \n"
+      "movzb     0x4(%2),%0                      \n"
+      "movzb     0x00(%1,%0,1),%0                \n"
+      "mov       %b0,0x4(%3)                     \n"
+      "movzb     0x5(%2),%0                      \n"
+      "movzb     0x00(%1,%0,1),%0                \n"
+      "mov       %b0,0x5(%3)                     \n"
+      "movzb     0x6(%2),%0                      \n"
+      "movzb     0x00(%1,%0,1),%0                \n"
+      "mov       %b0,0x6(%3)                     \n"
+      "movzb     0x7(%2),%0                      \n"
+      "mov       %b0,0x7(%3)                     \n"
 
-    "movd      %%xmm0,%k1                      \n"  // 32 bit offset
-    "add       %5,%1                           \n"
-    "pshufd    $0x39,%%xmm0,%%xmm0             \n"
+      "movd      %%xmm0,%k1                      \n"  // 32 bit offset
+      "add       %5,%1                           \n"
+      "pshufd    $0x39,%%xmm0,%%xmm0             \n"
 
-    "movzb     " MEMACCESS2(0x8,2) ",%0        \n"
-    MEMOPARG(movzb,0x00,1,0,1,0) "             \n"  // movzb     (%1,%0,1),%0
-    "mov       %b0," MEMACCESS2(0x8,3) "       \n"
-    "movzb     " MEMACCESS2(0x9,2) ",%0        \n"
-    MEMOPARG(movzb,0x00,1,0,1,0) "             \n"  // movzb     (%1,%0,1),%0
-    "mov       %b0," MEMACCESS2(0x9,3) "       \n"
-    "movzb     " MEMACCESS2(0xa,2) ",%0        \n"
-    MEMOPARG(movzb,0x00,1,0,1,0) "             \n"  // movzb     (%1,%0,1),%0
-    "mov       %b0," MEMACCESS2(0xa,3) "       \n"
-    "movzb     " MEMACCESS2(0xb,2) ",%0        \n"
-    "mov       %b0," MEMACCESS2(0xb,3) "       \n"
+      "movzb     0x8(%2),%0                      \n"
+      "movzb     0x00(%1,%0,1),%0                \n"
+      "mov       %b0,0x8(%3)                     \n"
+      "movzb     0x9(%2),%0                      \n"
+      "movzb     0x00(%1,%0,1),%0                \n"
+      "mov       %b0,0x9(%3)                     \n"
+      "movzb     0xa(%2),%0                      \n"
+      "movzb     0x00(%1,%0,1),%0                \n"
+      "mov       %b0,0xa(%3)                     \n"
+      "movzb     0xb(%2),%0                      \n"
+      "mov       %b0,0xb(%3)                     \n"
 
-    "movd      %%xmm0,%k1                      \n"  // 32 bit offset
-    "add       %5,%1                           \n"
+      "movd      %%xmm0,%k1                      \n"  // 32 bit offset
+      "add       %5,%1                           \n"
 
-    "movzb     " MEMACCESS2(0xc,2) ",%0        \n"
-    MEMOPARG(movzb,0x00,1,0,1,0) "             \n"  // movzb     (%1,%0,1),%0
-    "mov       %b0," MEMACCESS2(0xc,3) "       \n"
-    "movzb     " MEMACCESS2(0xd,2) ",%0        \n"
-    MEMOPARG(movzb,0x00,1,0,1,0) "             \n"  // movzb     (%1,%0,1),%0
-    "mov       %b0," MEMACCESS2(0xd,3) "       \n"
-    "movzb     " MEMACCESS2(0xe,2) ",%0        \n"
-    MEMOPARG(movzb,0x00,1,0,1,0) "             \n"  // movzb     (%1,%0,1),%0
-    "mov       %b0," MEMACCESS2(0xe,3) "       \n"
-    "movzb     " MEMACCESS2(0xf,2) ",%0        \n"
-    "mov       %b0," MEMACCESS2(0xf,3) "       \n"
-    "lea       " MEMLEA(0x10,2) ",%2           \n"
-    "lea       " MEMLEA(0x10,3) ",%3           \n"
-    "sub       $0x4,%4                         \n"
-    "jg        1b                              \n"
-  : "=&d"(pixel_temp),  // %0
-    "=&a"(table_temp),  // %1
-    "+r"(src_argb),     // %2
-    "+r"(dst_argb),     // %3
-    "+rm"(width)        // %4
-  : "r"(luma),          // %5
-    "rm"(lumacoeff)     // %6
-  : "memory", "cc", "xmm0", "xmm3", "xmm4", "xmm5"
-  );
+      "movzb     0xc(%2),%0                      \n"
+      "movzb     0x00(%1,%0,1),%0                \n"
+      "mov       %b0,0xc(%3)                     \n"
+      "movzb     0xd(%2),%0                      \n"
+      "movzb     0x00(%1,%0,1),%0                \n"
+      "mov       %b0,0xd(%3)                     \n"
+      "movzb     0xe(%2),%0                      \n"
+      "movzb     0x00(%1,%0,1),%0                \n"
+      "mov       %b0,0xe(%3)                     \n"
+      "movzb     0xf(%2),%0                      \n"
+      "mov       %b0,0xf(%3)                     \n"
+      "lea       0x10(%2),%2                     \n"
+      "lea       0x10(%3),%3                     \n"
+      "sub       $0x4,%4                         \n"
+      "jg        1b                              \n"
+      : "=&d"(pixel_temp),  // %0
+        "=&a"(table_temp),  // %1
+        "+r"(src_argb),     // %2
+        "+r"(dst_argb),     // %3
+        "+rm"(width)        // %4
+      : "r"(luma),          // %5
+        "rm"(lumacoeff)     // %6
+      : "memory", "cc", "xmm0", "xmm3", "xmm4", "xmm5");
 }
 #endif  // HAS_ARGBLUMACOLORTABLEROW_SSSE3
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_mips.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_mips.cc
deleted file mode 100644
index 285f0b5..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_mips.cc
+++ /dev/null
@@ -1,782 +0,0 @@
-/*
- *  Copyright (c) 2012 The LibYuv project authors. All Rights Reserved.
- *
- *  Use of this source code is governed by a BSD-style license
- *  that can be found in the LICENSE file in the root of the source
- *  tree. An additional intellectual property rights grant can be found
- *  in the file PATENTS. All contributing project authors may
- *  be found in the AUTHORS file in the root of the source tree.
- */
-
-#include "libyuv/row.h"
-
-#ifdef __cplusplus
-namespace libyuv {
-extern "C" {
-#endif
-
-// The following are available on Mips platforms:
-#if !defined(LIBYUV_DISABLE_MIPS) && defined(__mips__) && \
-    (_MIPS_SIM == _MIPS_SIM_ABI32)
-
-#ifdef HAS_COPYROW_MIPS
-void CopyRow_MIPS(const uint8* src, uint8* dst, int count) {
-  __asm__ __volatile__ (
-    ".set      noreorder                         \n"
-    ".set      noat                              \n"
-    "slti      $at, %[count], 8                  \n"
-    "bne       $at ,$zero, $last8                \n"
-    "xor       $t8, %[src], %[dst]               \n"
-    "andi      $t8, $t8, 0x3                     \n"
-
-    "bne       $t8, $zero, unaligned             \n"
-    "negu      $a3, %[dst]                       \n"
-    // make dst/src aligned
-    "andi      $a3, $a3, 0x3                     \n"
-    "beq       $a3, $zero, $chk16w               \n"
-    // word-aligned now count is the remining bytes count
-    "subu     %[count], %[count], $a3            \n"
-
-    "lwr       $t8, 0(%[src])                    \n"
-    "addu      %[src], %[src], $a3               \n"
-    "swr       $t8, 0(%[dst])                    \n"
-    "addu      %[dst], %[dst], $a3               \n"
-
-    // Now the dst/src are mutually word-aligned with word-aligned addresses
-    "$chk16w:                                    \n"
-    "andi      $t8, %[count], 0x3f               \n"  // whole 64-B chunks?
-    // t8 is the byte count after 64-byte chunks
-    "beq       %[count], $t8, chk8w              \n"
-    // There will be at most 1 32-byte chunk after it
-    "subu      $a3, %[count], $t8                \n"  // the reminder
-    // Here a3 counts bytes in 16w chunks
-    "addu      $a3, %[dst], $a3                  \n"
-    // Now a3 is the final dst after 64-byte chunks
-    "addu      $t0, %[dst], %[count]             \n"
-    // t0 is the "past the end" address
-
-    // When in the loop we exercise "pref 30,x(a1)", the a1+x should not be past
-    // the "t0-32" address
-    // This means: for x=128 the last "safe" a1 address is "t0-160"
-    // Alternatively, for x=64 the last "safe" a1 address is "t0-96"
-    // we will use "pref 30,128(a1)", so "t0-160" is the limit
-    "subu      $t9, $t0, 160                     \n"
-    // t9 is the "last safe pref 30,128(a1)" address
-    "pref      0, 0(%[src])                      \n"  // first line of src
-    "pref      0, 32(%[src])                     \n"  // second line of src
-    "pref      0, 64(%[src])                     \n"
-    "pref      30, 32(%[dst])                    \n"
-    // In case the a1 > t9 don't use "pref 30" at all
-    "sgtu      $v1, %[dst], $t9                  \n"
-    "bgtz      $v1, $loop16w                     \n"
-    "nop                                         \n"
-    // otherwise, start with using pref30
-    "pref      30, 64(%[dst])                    \n"
-    "$loop16w:                                    \n"
-    "pref      0, 96(%[src])                     \n"
-    "lw        $t0, 0(%[src])                    \n"
-    "bgtz      $v1, $skip_pref30_96              \n"  // skip
-    "lw        $t1, 4(%[src])                    \n"
-    "pref      30, 96(%[dst])                    \n"  // continue
-    "$skip_pref30_96:                            \n"
-    "lw        $t2, 8(%[src])                    \n"
-    "lw        $t3, 12(%[src])                   \n"
-    "lw        $t4, 16(%[src])                   \n"
-    "lw        $t5, 20(%[src])                   \n"
-    "lw        $t6, 24(%[src])                   \n"
-    "lw        $t7, 28(%[src])                   \n"
-    "pref      0, 128(%[src])                    \n"
-    //  bring the next lines of src, addr 128
-    "sw        $t0, 0(%[dst])                    \n"
-    "sw        $t1, 4(%[dst])                    \n"
-    "sw        $t2, 8(%[dst])                    \n"
-    "sw        $t3, 12(%[dst])                   \n"
-    "sw        $t4, 16(%[dst])                   \n"
-    "sw        $t5, 20(%[dst])                   \n"
-    "sw        $t6, 24(%[dst])                   \n"
-    "sw        $t7, 28(%[dst])                   \n"
-    "lw        $t0, 32(%[src])                   \n"
-    "bgtz      $v1, $skip_pref30_128             \n"  // skip pref 30,128(a1)
-    "lw        $t1, 36(%[src])                   \n"
-    "pref      30, 128(%[dst])                   \n"  // set dest, addr 128
-    "$skip_pref30_128:                           \n"
-    "lw        $t2, 40(%[src])                   \n"
-    "lw        $t3, 44(%[src])                   \n"
-    "lw        $t4, 48(%[src])                   \n"
-    "lw        $t5, 52(%[src])                   \n"
-    "lw        $t6, 56(%[src])                   \n"
-    "lw        $t7, 60(%[src])                   \n"
-    "pref      0, 160(%[src])                    \n"
-    // bring the next lines of src, addr 160
-    "sw        $t0, 32(%[dst])                   \n"
-    "sw        $t1, 36(%[dst])                   \n"
-    "sw        $t2, 40(%[dst])                   \n"
-    "sw        $t3, 44(%[dst])                   \n"
-    "sw        $t4, 48(%[dst])                   \n"
-    "sw        $t5, 52(%[dst])                   \n"
-    "sw        $t6, 56(%[dst])                   \n"
-    "sw        $t7, 60(%[dst])                   \n"
-
-    "addiu     %[dst], %[dst], 64                \n"  // adding 64 to dest
-    "sgtu      $v1, %[dst], $t9                  \n"
-    "bne       %[dst], $a3, $loop16w             \n"
-    " addiu    %[src], %[src], 64                \n"  // adding 64 to src
-    "move      %[count], $t8                     \n"
-
-    // Here we have src and dest word-aligned but less than 64-bytes to go
-
-    "chk8w:                                      \n"
-    "pref      0, 0x0(%[src])                    \n"
-    "andi      $t8, %[count], 0x1f               \n"  // 32-byte chunk?
-    // the t8 is the reminder count past 32-bytes
-    "beq       %[count], $t8, chk1w              \n"
-    // count=t8,no 32-byte chunk
-    " nop                                        \n"
-
-    "lw        $t0, 0(%[src])                    \n"
-    "lw        $t1, 4(%[src])                    \n"
-    "lw        $t2, 8(%[src])                    \n"
-    "lw        $t3, 12(%[src])                   \n"
-    "lw        $t4, 16(%[src])                   \n"
-    "lw        $t5, 20(%[src])                   \n"
-    "lw        $t6, 24(%[src])                   \n"
-    "lw        $t7, 28(%[src])                   \n"
-    "addiu     %[src], %[src], 32                \n"
-
-    "sw        $t0, 0(%[dst])                    \n"
-    "sw        $t1, 4(%[dst])                    \n"
-    "sw        $t2, 8(%[dst])                    \n"
-    "sw        $t3, 12(%[dst])                   \n"
-    "sw        $t4, 16(%[dst])                   \n"
-    "sw        $t5, 20(%[dst])                   \n"
-    "sw        $t6, 24(%[dst])                   \n"
-    "sw        $t7, 28(%[dst])                   \n"
-    "addiu     %[dst], %[dst], 32                \n"
-
-    "chk1w:                                      \n"
-    "andi      %[count], $t8, 0x3                \n"
-    // now count is the reminder past 1w chunks
-    "beq       %[count], $t8, $last8             \n"
-    " subu     $a3, $t8, %[count]                \n"
-    // a3 is count of bytes in 1w chunks
-    "addu      $a3, %[dst], $a3                  \n"
-    // now a3 is the dst address past the 1w chunks
-    // copying in words (4-byte chunks)
-    "$wordCopy_loop:                             \n"
-    "lw        $t3, 0(%[src])                    \n"
-    // the first t3 may be equal t0 ... optimize?
-    "addiu     %[src], %[src],4                  \n"
-    "addiu     %[dst], %[dst],4                  \n"
-    "bne       %[dst], $a3,$wordCopy_loop        \n"
-    " sw       $t3, -4(%[dst])                   \n"
-
-    // For the last (<8) bytes
-    "$last8:                                     \n"
-    "blez      %[count], leave                   \n"
-    " addu     $a3, %[dst], %[count]             \n"  // a3 -last dst address
-    "$last8loop:                                 \n"
-    "lb        $v1, 0(%[src])                    \n"
-    "addiu     %[src], %[src], 1                 \n"
-    "addiu     %[dst], %[dst], 1                 \n"
-    "bne       %[dst], $a3, $last8loop           \n"
-    " sb       $v1, -1(%[dst])                   \n"
-
-    "leave:                                      \n"
-    "  j       $ra                               \n"
-    "  nop                                       \n"
-
-    //
-    // UNALIGNED case
-    //
-
-    "unaligned:                                  \n"
-    // got here with a3="negu a1"
-    "andi      $a3, $a3, 0x3                     \n"  // a1 is word aligned?
-    "beqz      $a3, $ua_chk16w                   \n"
-    " subu     %[count], %[count], $a3           \n"
-    // bytes left after initial a3 bytes
-    "lwr       $v1, 0(%[src])                    \n"
-    "lwl       $v1, 3(%[src])                    \n"
-    "addu      %[src], %[src], $a3               \n"  // a3 may be 1, 2 or 3
-    "swr       $v1, 0(%[dst])                    \n"
-    "addu      %[dst], %[dst], $a3               \n"
-    // below the dst will be word aligned (NOTE1)
-    "$ua_chk16w:                                 \n"
-    "andi      $t8, %[count], 0x3f               \n"  // whole 64-B chunks?
-    // t8 is the byte count after 64-byte chunks
-    "beq       %[count], $t8, ua_chk8w           \n"
-    // if a2==t8, no 64-byte chunks
-    // There will be at most 1 32-byte chunk after it
-    "subu      $a3, %[count], $t8                \n"  // the reminder
-    // Here a3 counts bytes in 16w chunks
-    "addu      $a3, %[dst], $a3                  \n"
-    // Now a3 is the final dst after 64-byte chunks
-    "addu      $t0, %[dst], %[count]             \n"  // t0 "past the end"
-    "subu      $t9, $t0, 160                     \n"
-    // t9 is the "last safe pref 30,128(a1)" address
-    "pref      0, 0(%[src])                      \n"  // first line of src
-    "pref      0, 32(%[src])                     \n"  // second line  addr 32
-    "pref      0, 64(%[src])                     \n"
-    "pref      30, 32(%[dst])                    \n"
-    // safe, as we have at least 64 bytes ahead
-    // In case the a1 > t9 don't use "pref 30" at all
-    "sgtu      $v1, %[dst], $t9                  \n"
-    "bgtz      $v1, $ua_loop16w                  \n"
-    // skip "pref 30,64(a1)" for too short arrays
-    " nop                                        \n"
-    // otherwise, start with using pref30
-    "pref      30, 64(%[dst])                    \n"
-    "$ua_loop16w:                                \n"
-    "pref      0, 96(%[src])                     \n"
-    "lwr       $t0, 0(%[src])                    \n"
-    "lwl       $t0, 3(%[src])                    \n"
-    "lwr       $t1, 4(%[src])                    \n"
-    "bgtz      $v1, $ua_skip_pref30_96           \n"
-    " lwl      $t1, 7(%[src])                    \n"
-    "pref      30, 96(%[dst])                    \n"
-    // continue setting up the dest, addr 96
-    "$ua_skip_pref30_96:                         \n"
-    "lwr       $t2, 8(%[src])                    \n"
-    "lwl       $t2, 11(%[src])                   \n"
-    "lwr       $t3, 12(%[src])                   \n"
-    "lwl       $t3, 15(%[src])                   \n"
-    "lwr       $t4, 16(%[src])                   \n"
-    "lwl       $t4, 19(%[src])                   \n"
-    "lwr       $t5, 20(%[src])                   \n"
-    "lwl       $t5, 23(%[src])                   \n"
-    "lwr       $t6, 24(%[src])                   \n"
-    "lwl       $t6, 27(%[src])                   \n"
-    "lwr       $t7, 28(%[src])                   \n"
-    "lwl       $t7, 31(%[src])                   \n"
-    "pref      0, 128(%[src])                    \n"
-    // bring the next lines of src, addr 128
-    "sw        $t0, 0(%[dst])                    \n"
-    "sw        $t1, 4(%[dst])                    \n"
-    "sw        $t2, 8(%[dst])                    \n"
-    "sw        $t3, 12(%[dst])                   \n"
-    "sw        $t4, 16(%[dst])                   \n"
-    "sw        $t5, 20(%[dst])                   \n"
-    "sw        $t6, 24(%[dst])                   \n"
-    "sw        $t7, 28(%[dst])                   \n"
-    "lwr       $t0, 32(%[src])                   \n"
-    "lwl       $t0, 35(%[src])                   \n"
-    "lwr       $t1, 36(%[src])                   \n"
-    "bgtz      $v1, ua_skip_pref30_128           \n"
-    " lwl      $t1, 39(%[src])                   \n"
-    "pref      30, 128(%[dst])                   \n"
-    // continue setting up the dest, addr 128
-    "ua_skip_pref30_128:                         \n"
-
-    "lwr       $t2, 40(%[src])                   \n"
-    "lwl       $t2, 43(%[src])                   \n"
-    "lwr       $t3, 44(%[src])                   \n"
-    "lwl       $t3, 47(%[src])                   \n"
-    "lwr       $t4, 48(%[src])                   \n"
-    "lwl       $t4, 51(%[src])                   \n"
-    "lwr       $t5, 52(%[src])                   \n"
-    "lwl       $t5, 55(%[src])                   \n"
-    "lwr       $t6, 56(%[src])                   \n"
-    "lwl       $t6, 59(%[src])                   \n"
-    "lwr       $t7, 60(%[src])                   \n"
-    "lwl       $t7, 63(%[src])                   \n"
-    "pref      0, 160(%[src])                    \n"
-    // bring the next lines of src, addr 160
-    "sw        $t0, 32(%[dst])                   \n"
-    "sw        $t1, 36(%[dst])                   \n"
-    "sw        $t2, 40(%[dst])                   \n"
-    "sw        $t3, 44(%[dst])                   \n"
-    "sw        $t4, 48(%[dst])                   \n"
-    "sw        $t5, 52(%[dst])                   \n"
-    "sw        $t6, 56(%[dst])                   \n"
-    "sw        $t7, 60(%[dst])                   \n"
-
-    "addiu     %[dst],%[dst],64                  \n"  // adding 64 to dest
-    "sgtu      $v1,%[dst],$t9                    \n"
-    "bne       %[dst],$a3,$ua_loop16w            \n"
-    " addiu    %[src],%[src],64                  \n"  // adding 64 to src
-    "move      %[count],$t8                      \n"
-
-    // Here we have src and dest word-aligned but less than 64-bytes to go
-
-    "ua_chk8w:                                   \n"
-    "pref      0, 0x0(%[src])                    \n"
-    "andi      $t8, %[count], 0x1f               \n"  // 32-byte chunk?
-    // the t8 is the reminder count
-    "beq       %[count], $t8, $ua_chk1w          \n"
-    // when count==t8, no 32-byte chunk
-
-    "lwr       $t0, 0(%[src])                    \n"
-    "lwl       $t0, 3(%[src])                    \n"
-    "lwr       $t1, 4(%[src])                    \n"
-    "lwl       $t1, 7(%[src])                    \n"
-    "lwr       $t2, 8(%[src])                    \n"
-    "lwl       $t2, 11(%[src])                   \n"
-    "lwr       $t3, 12(%[src])                   \n"
-    "lwl       $t3, 15(%[src])                   \n"
-    "lwr       $t4, 16(%[src])                   \n"
-    "lwl       $t4, 19(%[src])                   \n"
-    "lwr       $t5, 20(%[src])                   \n"
-    "lwl       $t5, 23(%[src])                   \n"
-    "lwr       $t6, 24(%[src])                   \n"
-    "lwl       $t6, 27(%[src])                   \n"
-    "lwr       $t7, 28(%[src])                   \n"
-    "lwl       $t7, 31(%[src])                   \n"
-    "addiu     %[src], %[src], 32                \n"
-
-    "sw        $t0, 0(%[dst])                    \n"
-    "sw        $t1, 4(%[dst])                    \n"
-    "sw        $t2, 8(%[dst])                    \n"
-    "sw        $t3, 12(%[dst])                   \n"
-    "sw        $t4, 16(%[dst])                   \n"
-    "sw        $t5, 20(%[dst])                   \n"
-    "sw        $t6, 24(%[dst])                   \n"
-    "sw        $t7, 28(%[dst])                   \n"
-    "addiu     %[dst], %[dst], 32                \n"
-
-    "$ua_chk1w:                                  \n"
-    "andi      %[count], $t8, 0x3                \n"
-    // now count is the reminder past 1w chunks
-    "beq       %[count], $t8, ua_smallCopy       \n"
-    "subu      $a3, $t8, %[count]                \n"
-    // a3 is count of bytes in 1w chunks
-    "addu      $a3, %[dst], $a3                  \n"
-    // now a3 is the dst address past the 1w chunks
-
-    // copying in words (4-byte chunks)
-    "$ua_wordCopy_loop:                          \n"
-    "lwr       $v1, 0(%[src])                    \n"
-    "lwl       $v1, 3(%[src])                    \n"
-    "addiu     %[src], %[src], 4                 \n"
-    "addiu     %[dst], %[dst], 4                 \n"
-    // note: dst=a1 is word aligned here, see NOTE1
-    "bne       %[dst], $a3, $ua_wordCopy_loop    \n"
-    " sw       $v1,-4(%[dst])                    \n"
-
-    // Now less than 4 bytes (value in count) left to copy
-    "ua_smallCopy:                               \n"
-    "beqz      %[count], leave                   \n"
-    " addu     $a3, %[dst], %[count]             \n" // a3 = last dst address
-    "$ua_smallCopy_loop:                         \n"
-    "lb        $v1, 0(%[src])                    \n"
-    "addiu     %[src], %[src], 1                 \n"
-    "addiu     %[dst], %[dst], 1                 \n"
-    "bne       %[dst],$a3,$ua_smallCopy_loop     \n"
-    " sb       $v1, -1(%[dst])                   \n"
-
-    "j         $ra                               \n"
-    " nop                                        \n"
-    ".set      at                                \n"
-    ".set      reorder                           \n"
-       : [dst] "+r" (dst), [src] "+r" (src)
-       : [count] "r" (count)
-       : "t0", "t1", "t2", "t3", "t4", "t5", "t6", "t7",
-       "t8", "t9", "a3", "v1", "at"
-  );
-}
-#endif  // HAS_COPYROW_MIPS
-
-// DSPR2 functions
-#if !defined(LIBYUV_DISABLE_MIPS) && defined(__mips_dsp) && \
-    (__mips_dsp_rev >= 2) && \
-    (_MIPS_SIM == _MIPS_SIM_ABI32) && (__mips_isa_rev < 6)
-
-void SplitUVRow_DSPR2(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
-                      int width) {
-  __asm__ __volatile__ (
-    ".set push                                     \n"
-    ".set noreorder                                \n"
-    "srl             $t4, %[width], 4              \n"  // multiplies of 16
-    "blez            $t4, 2f                       \n"
-    " andi           %[width], %[width], 0xf       \n"  // residual
-
-  "1:                                              \n"
-    "addiu           $t4, $t4, -1                  \n"
-    "lw              $t0, 0(%[src_uv])             \n"  // V1 | U1 | V0 | U0
-    "lw              $t1, 4(%[src_uv])             \n"  // V3 | U3 | V2 | U2
-    "lw              $t2, 8(%[src_uv])             \n"  // V5 | U5 | V4 | U4
-    "lw              $t3, 12(%[src_uv])            \n"  // V7 | U7 | V6 | U6
-    "lw              $t5, 16(%[src_uv])            \n"  // V9 | U9 | V8 | U8
-    "lw              $t6, 20(%[src_uv])            \n"  // V11 | U11 | V10 | U10
-    "lw              $t7, 24(%[src_uv])            \n"  // V13 | U13 | V12 | U12
-    "lw              $t8, 28(%[src_uv])            \n"  // V15 | U15 | V14 | U14
-    "addiu           %[src_uv], %[src_uv], 32      \n"
-    "precrq.qb.ph    $t9, $t1, $t0                 \n"  // V3 | V2 | V1 | V0
-    "precr.qb.ph     $t0, $t1, $t0                 \n"  // U3 | U2 | U1 | U0
-    "precrq.qb.ph    $t1, $t3, $t2                 \n"  // V7 | V6 | V5 | V4
-    "precr.qb.ph     $t2, $t3, $t2                 \n"  // U7 | U6 | U5 | U4
-    "precrq.qb.ph    $t3, $t6, $t5                 \n"  // V11 | V10 | V9 | V8
-    "precr.qb.ph     $t5, $t6, $t5                 \n"  // U11 | U10 | U9 | U8
-    "precrq.qb.ph    $t6, $t8, $t7                 \n"  // V15 | V14 | V13 | V12
-    "precr.qb.ph     $t7, $t8, $t7                 \n"  // U15 | U14 | U13 | U12
-    "sw              $t9, 0(%[dst_v])              \n"
-    "sw              $t0, 0(%[dst_u])              \n"
-    "sw              $t1, 4(%[dst_v])              \n"
-    "sw              $t2, 4(%[dst_u])              \n"
-    "sw              $t3, 8(%[dst_v])              \n"
-    "sw              $t5, 8(%[dst_u])              \n"
-    "sw              $t6, 12(%[dst_v])             \n"
-    "sw              $t7, 12(%[dst_u])             \n"
-    "addiu           %[dst_v], %[dst_v], 16        \n"
-    "bgtz            $t4, 1b                       \n"
-    " addiu          %[dst_u], %[dst_u], 16        \n"
-
-    "beqz            %[width], 3f                  \n"
-    " nop                                          \n"
-
-  "2:                                              \n"
-    "lbu             $t0, 0(%[src_uv])             \n"
-    "lbu             $t1, 1(%[src_uv])             \n"
-    "addiu           %[src_uv], %[src_uv], 2       \n"
-    "addiu           %[width], %[width], -1        \n"
-    "sb              $t0, 0(%[dst_u])              \n"
-    "sb              $t1, 0(%[dst_v])              \n"
-    "addiu           %[dst_u], %[dst_u], 1         \n"
-    "bgtz            %[width], 2b                  \n"
-    " addiu          %[dst_v], %[dst_v], 1         \n"
-
-  "3:                                              \n"
-    ".set pop                                      \n"
-     : [src_uv] "+r" (src_uv),
-       [width] "+r" (width),
-       [dst_u] "+r" (dst_u),
-       [dst_v] "+r" (dst_v)
-     :
-     : "t0", "t1", "t2", "t3",
-     "t4", "t5", "t6", "t7", "t8", "t9"
-  );
-}
-
-void MirrorRow_DSPR2(const uint8* src, uint8* dst, int width) {
-  __asm__ __volatile__ (
-    ".set push                             \n"
-    ".set noreorder                        \n"
-
-    "srl       $t4, %[width], 4            \n"  // multiplies of 16
-    "andi      $t5, %[width], 0xf          \n"
-    "blez      $t4, 2f                     \n"
-    " addu     %[src], %[src], %[width]    \n"  // src += width
-
-   "1:                                     \n"
-    "lw        $t0, -16(%[src])            \n"  // |3|2|1|0|
-    "lw        $t1, -12(%[src])            \n"  // |7|6|5|4|
-    "lw        $t2, -8(%[src])             \n"  // |11|10|9|8|
-    "lw        $t3, -4(%[src])             \n"  // |15|14|13|12|
-    "wsbh      $t0, $t0                    \n"  // |2|3|0|1|
-    "wsbh      $t1, $t1                    \n"  // |6|7|4|5|
-    "wsbh      $t2, $t2                    \n"  // |10|11|8|9|
-    "wsbh      $t3, $t3                    \n"  // |14|15|12|13|
-    "rotr      $t0, $t0, 16                \n"  // |0|1|2|3|
-    "rotr      $t1, $t1, 16                \n"  // |4|5|6|7|
-    "rotr      $t2, $t2, 16                \n"  // |8|9|10|11|
-    "rotr      $t3, $t3, 16                \n"  // |12|13|14|15|
-    "addiu     %[src], %[src], -16         \n"
-    "addiu     $t4, $t4, -1                \n"
-    "sw        $t3, 0(%[dst])              \n"  // |15|14|13|12|
-    "sw        $t2, 4(%[dst])              \n"  // |11|10|9|8|
-    "sw        $t1, 8(%[dst])              \n"  // |7|6|5|4|
-    "sw        $t0, 12(%[dst])             \n"  // |3|2|1|0|
-    "bgtz      $t4, 1b                     \n"
-    " addiu    %[dst], %[dst], 16          \n"
-    "beqz      $t5, 3f                     \n"
-    " nop                                  \n"
-
-   "2:                                     \n"
-    "lbu       $t0, -1(%[src])             \n"
-    "addiu     $t5, $t5, -1                \n"
-    "addiu     %[src], %[src], -1          \n"
-    "sb        $t0, 0(%[dst])              \n"
-    "bgez      $t5, 2b                     \n"
-    " addiu    %[dst], %[dst], 1           \n"
-
-   "3:                                     \n"
-    ".set pop                              \n"
-      : [src] "+r" (src), [dst] "+r" (dst)
-      : [width] "r" (width)
-      : "t0", "t1", "t2", "t3", "t4", "t5"
-  );
-}
-
-void MirrorUVRow_DSPR2(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
-                       int width) {
-  int x;
-  int y;
-  __asm__ __volatile__ (
-    ".set push                                    \n"
-    ".set noreorder                               \n"
-
-    "addu            $t4, %[width], %[width]      \n"
-    "srl             %[x], %[width], 4            \n"
-    "andi            %[y], %[width], 0xf          \n"
-    "blez            %[x], 2f                     \n"
-    " addu           %[src_uv], %[src_uv], $t4    \n"
-
-   "1:                                            \n"
-    "lw              $t0, -32(%[src_uv])          \n"  // |3|2|1|0|
-    "lw              $t1, -28(%[src_uv])          \n"  // |7|6|5|4|
-    "lw              $t2, -24(%[src_uv])          \n"  // |11|10|9|8|
-    "lw              $t3, -20(%[src_uv])          \n"  // |15|14|13|12|
-    "lw              $t4, -16(%[src_uv])          \n"  // |19|18|17|16|
-    "lw              $t6, -12(%[src_uv])          \n"  // |23|22|21|20|
-    "lw              $t7, -8(%[src_uv])           \n"  // |27|26|25|24|
-    "lw              $t8, -4(%[src_uv])           \n"  // |31|30|29|28|
-
-    "rotr            $t0, $t0, 16                 \n"  // |1|0|3|2|
-    "rotr            $t1, $t1, 16                 \n"  // |5|4|7|6|
-    "rotr            $t2, $t2, 16                 \n"  // |9|8|11|10|
-    "rotr            $t3, $t3, 16                 \n"  // |13|12|15|14|
-    "rotr            $t4, $t4, 16                 \n"  // |17|16|19|18|
-    "rotr            $t6, $t6, 16                 \n"  // |21|20|23|22|
-    "rotr            $t7, $t7, 16                 \n"  // |25|24|27|26|
-    "rotr            $t8, $t8, 16                 \n"  // |29|28|31|30|
-    "precr.qb.ph     $t9, $t0, $t1                \n"  // |0|2|4|6|
-    "precrq.qb.ph    $t5, $t0, $t1                \n"  // |1|3|5|7|
-    "precr.qb.ph     $t0, $t2, $t3                \n"  // |8|10|12|14|
-    "precrq.qb.ph    $t1, $t2, $t3                \n"  // |9|11|13|15|
-    "precr.qb.ph     $t2, $t4, $t6                \n"  // |16|18|20|22|
-    "precrq.qb.ph    $t3, $t4, $t6                \n"  // |17|19|21|23|
-    "precr.qb.ph     $t4, $t7, $t8                \n"  // |24|26|28|30|
-    "precrq.qb.ph    $t6, $t7, $t8                \n"  // |25|27|29|31|
-    "addiu           %[src_uv], %[src_uv], -32    \n"
-    "addiu           %[x], %[x], -1               \n"
-    "swr             $t4, 0(%[dst_u])             \n"
-    "swl             $t4, 3(%[dst_u])             \n"  // |30|28|26|24|
-    "swr             $t6, 0(%[dst_v])             \n"
-    "swl             $t6, 3(%[dst_v])             \n"  // |31|29|27|25|
-    "swr             $t2, 4(%[dst_u])             \n"
-    "swl             $t2, 7(%[dst_u])             \n"  // |22|20|18|16|
-    "swr             $t3, 4(%[dst_v])             \n"
-    "swl             $t3, 7(%[dst_v])             \n"  // |23|21|19|17|
-    "swr             $t0, 8(%[dst_u])             \n"
-    "swl             $t0, 11(%[dst_u])            \n"  // |14|12|10|8|
-    "swr             $t1, 8(%[dst_v])             \n"
-    "swl             $t1, 11(%[dst_v])            \n"  // |15|13|11|9|
-    "swr             $t9, 12(%[dst_u])            \n"
-    "swl             $t9, 15(%[dst_u])            \n"  // |6|4|2|0|
-    "swr             $t5, 12(%[dst_v])            \n"
-    "swl             $t5, 15(%[dst_v])            \n"  // |7|5|3|1|
-    "addiu           %[dst_v], %[dst_v], 16       \n"
-    "bgtz            %[x], 1b                     \n"
-    " addiu          %[dst_u], %[dst_u], 16       \n"
-    "beqz            %[y], 3f                     \n"
-    " nop                                         \n"
-    "b               2f                           \n"
-    " nop                                         \n"
-
-   "2:                                            \n"
-    "lbu             $t0, -2(%[src_uv])           \n"
-    "lbu             $t1, -1(%[src_uv])           \n"
-    "addiu           %[src_uv], %[src_uv], -2     \n"
-    "addiu           %[y], %[y], -1               \n"
-    "sb              $t0, 0(%[dst_u])             \n"
-    "sb              $t1, 0(%[dst_v])             \n"
-    "addiu           %[dst_u], %[dst_u], 1        \n"
-    "bgtz            %[y], 2b                     \n"
-    " addiu          %[dst_v], %[dst_v], 1        \n"
-
-   "3:                                            \n"
-    ".set pop                                     \n"
-      : [src_uv] "+r" (src_uv),
-        [dst_u] "+r" (dst_u),
-        [dst_v] "+r" (dst_v),
-        [x] "=&r" (x),
-        [y] "=&r" (y)
-      : [width] "r" (width)
-      : "t0", "t1", "t2", "t3", "t4",
-      "t5", "t7", "t8", "t9"
-  );
-}
-
-// Convert (4 Y and 2 VU) I422 and arrange RGB values into
-// t5 = | 0 | B0 | 0 | b0 |
-// t4 = | 0 | B1 | 0 | b1 |
-// t9 = | 0 | G0 | 0 | g0 |
-// t8 = | 0 | G1 | 0 | g1 |
-// t2 = | 0 | R0 | 0 | r0 |
-// t1 = | 0 | R1 | 0 | r1 |
-#define YUVTORGB                                                               \
-      "lw                $t0, 0(%[y_buf])       \n"                            \
-      "lhu               $t1, 0(%[u_buf])       \n"                            \
-      "lhu               $t2, 0(%[v_buf])       \n"                            \
-      "preceu.ph.qbr     $t1, $t1               \n"                            \
-      "preceu.ph.qbr     $t2, $t2               \n"                            \
-      "preceu.ph.qbra    $t3, $t0               \n"                            \
-      "preceu.ph.qbla    $t0, $t0               \n"                            \
-      "subu.ph           $t1, $t1, $s5          \n"                            \
-      "subu.ph           $t2, $t2, $s5          \n"                            \
-      "subu.ph           $t3, $t3, $s4          \n"                            \
-      "subu.ph           $t0, $t0, $s4          \n"                            \
-      "mul.ph            $t3, $t3, $s0          \n"                            \
-      "mul.ph            $t0, $t0, $s0          \n"                            \
-      "shll.ph           $t4, $t1, 0x7          \n"                            \
-      "subu.ph           $t4, $t4, $t1          \n"                            \
-      "mul.ph            $t6, $t1, $s1          \n"                            \
-      "mul.ph            $t1, $t2, $s2          \n"                            \
-      "addq_s.ph         $t5, $t4, $t3          \n"                            \
-      "addq_s.ph         $t4, $t4, $t0          \n"                            \
-      "shra.ph           $t5, $t5, 6            \n"                            \
-      "shra.ph           $t4, $t4, 6            \n"                            \
-      "addiu             %[u_buf], 2            \n"                            \
-      "addiu             %[v_buf], 2            \n"                            \
-      "addu.ph           $t6, $t6, $t1          \n"                            \
-      "mul.ph            $t1, $t2, $s3          \n"                            \
-      "addu.ph           $t9, $t6, $t3          \n"                            \
-      "addu.ph           $t8, $t6, $t0          \n"                            \
-      "shra.ph           $t9, $t9, 6            \n"                            \
-      "shra.ph           $t8, $t8, 6            \n"                            \
-      "addu.ph           $t2, $t1, $t3          \n"                            \
-      "addu.ph           $t1, $t1, $t0          \n"                            \
-      "shra.ph           $t2, $t2, 6            \n"                            \
-      "shra.ph           $t1, $t1, 6            \n"                            \
-      "subu.ph           $t5, $t5, $s5          \n"                            \
-      "subu.ph           $t4, $t4, $s5          \n"                            \
-      "subu.ph           $t9, $t9, $s5          \n"                            \
-      "subu.ph           $t8, $t8, $s5          \n"                            \
-      "subu.ph           $t2, $t2, $s5          \n"                            \
-      "subu.ph           $t1, $t1, $s5          \n"                            \
-      "shll_s.ph         $t5, $t5, 8            \n"                            \
-      "shll_s.ph         $t4, $t4, 8            \n"                            \
-      "shll_s.ph         $t9, $t9, 8            \n"                            \
-      "shll_s.ph         $t8, $t8, 8            \n"                            \
-      "shll_s.ph         $t2, $t2, 8            \n"                            \
-      "shll_s.ph         $t1, $t1, 8            \n"                            \
-      "shra.ph           $t5, $t5, 8            \n"                            \
-      "shra.ph           $t4, $t4, 8            \n"                            \
-      "shra.ph           $t9, $t9, 8            \n"                            \
-      "shra.ph           $t8, $t8, 8            \n"                            \
-      "shra.ph           $t2, $t2, 8            \n"                            \
-      "shra.ph           $t1, $t1, 8            \n"                            \
-      "addu.ph           $t5, $t5, $s5          \n"                            \
-      "addu.ph           $t4, $t4, $s5          \n"                            \
-      "addu.ph           $t9, $t9, $s5          \n"                            \
-      "addu.ph           $t8, $t8, $s5          \n"                            \
-      "addu.ph           $t2, $t2, $s5          \n"                            \
-      "addu.ph           $t1, $t1, $s5          \n"
-
-// TODO(fbarchard): accept yuv conversion constants.
-void I422ToARGBRow_DSPR2(const uint8* y_buf,
-                         const uint8* u_buf,
-                         const uint8* v_buf,
-                         uint8* rgb_buf,
-                         const struct YuvConstants* yuvconstants,
-                         int width) {
-  __asm__ __volatile__ (
-    ".set push                                \n"
-    ".set noreorder                           \n"
-    "beqz              %[width], 2f           \n"
-    " repl.ph          $s0, 74                \n"  // |YG|YG| = |74|74|
-    "repl.ph           $s1, -25               \n"  // |UG|UG| = |-25|-25|
-    "repl.ph           $s2, -52               \n"  // |VG|VG| = |-52|-52|
-    "repl.ph           $s3, 102               \n"  // |VR|VR| = |102|102|
-    "repl.ph           $s4, 16                \n"  // |0|16|0|16|
-    "repl.ph           $s5, 128               \n"  // |128|128| // clipping
-    "lui               $s6, 0xff00            \n"
-    "ori               $s6, 0xff00            \n"  // |ff|00|ff|00|ff|
-
-   "1:                                        \n"
-      YUVTORGB
-// Arranging into argb format
-    "precr.qb.ph       $t4, $t8, $t4          \n"  // |G1|g1|B1|b1|
-    "precr.qb.ph       $t5, $t9, $t5          \n"  // |G0|g0|B0|b0|
-    "addiu             %[width], -4           \n"
-    "precrq.qb.ph      $t8, $t4, $t5          \n"  // |G1|B1|G0|B0|
-    "precr.qb.ph       $t9, $t4, $t5          \n"  // |g1|b1|g0|b0|
-    "precr.qb.ph       $t2, $t1, $t2          \n"  // |R1|r1|R0|r0|
-
-    "addiu             %[y_buf], 4            \n"
-    "preceu.ph.qbla    $t1, $t2               \n"  // |0 |R1|0 |R0|
-    "preceu.ph.qbra    $t2, $t2               \n"  // |0 |r1|0 |r0|
-    "or                $t1, $t1, $s6          \n"  // |ff|R1|ff|R0|
-    "or                $t2, $t2, $s6          \n"  // |ff|r1|ff|r0|
-    "precrq.ph.w       $t0, $t2, $t9          \n"  // |ff|r1|g1|b1|
-    "precrq.ph.w       $t3, $t1, $t8          \n"  // |ff|R1|G1|B1|
-    "sll               $t9, $t9, 16           \n"
-    "sll               $t8, $t8, 16           \n"
-    "packrl.ph         $t2, $t2, $t9          \n"  // |ff|r0|g0|b0|
-    "packrl.ph         $t1, $t1, $t8          \n"  // |ff|R0|G0|B0|
-// Store results.
-    "sw                $t2, 0(%[rgb_buf])     \n"
-    "sw                $t0, 4(%[rgb_buf])     \n"
-    "sw                $t1, 8(%[rgb_buf])     \n"
-    "sw                $t3, 12(%[rgb_buf])    \n"
-    "bnez              %[width], 1b           \n"
-    " addiu            %[rgb_buf], 16         \n"
-   "2:                                        \n"
-    ".set pop                                 \n"
-      :[y_buf] "+r" (y_buf),
-       [u_buf] "+r" (u_buf),
-       [v_buf] "+r" (v_buf),
-       [width] "+r" (width),
-       [rgb_buf] "+r" (rgb_buf)
-      :
-      : "t0", "t1",  "t2", "t3",  "t4", "t5",
-      "t6", "t7", "t8", "t9",
-      "s0", "s1", "s2", "s3",
-      "s4", "s5", "s6"
-  );
-}
-
-// Bilinear filter 8x2 -> 8x1
-void InterpolateRow_DSPR2(uint8* dst_ptr, const uint8* src_ptr,
-                          ptrdiff_t src_stride, int dst_width,
-                          int source_y_fraction) {
-    int y0_fraction = 256 - source_y_fraction;
-    const uint8* src_ptr1 = src_ptr + src_stride;
-
-  __asm__ __volatile__ (
-     ".set push                                           \n"
-     ".set noreorder                                      \n"
-
-     "replv.ph          $t0, %[y0_fraction]               \n"
-     "replv.ph          $t1, %[source_y_fraction]         \n"
-
-   "1:                                                    \n"
-     "lw                $t2, 0(%[src_ptr])                \n"
-     "lw                $t3, 0(%[src_ptr1])               \n"
-     "lw                $t4, 4(%[src_ptr])                \n"
-     "lw                $t5, 4(%[src_ptr1])               \n"
-     "muleu_s.ph.qbl    $t6, $t2, $t0                     \n"
-     "muleu_s.ph.qbr    $t7, $t2, $t0                     \n"
-     "muleu_s.ph.qbl    $t8, $t3, $t1                     \n"
-     "muleu_s.ph.qbr    $t9, $t3, $t1                     \n"
-     "muleu_s.ph.qbl    $t2, $t4, $t0                     \n"
-     "muleu_s.ph.qbr    $t3, $t4, $t0                     \n"
-     "muleu_s.ph.qbl    $t4, $t5, $t1                     \n"
-     "muleu_s.ph.qbr    $t5, $t5, $t1                     \n"
-     "addq.ph           $t6, $t6, $t8                     \n"
-     "addq.ph           $t7, $t7, $t9                     \n"
-     "addq.ph           $t2, $t2, $t4                     \n"
-     "addq.ph           $t3, $t3, $t5                     \n"
-     "shra.ph           $t6, $t6, 8                       \n"
-     "shra.ph           $t7, $t7, 8                       \n"
-     "shra.ph           $t2, $t2, 8                       \n"
-     "shra.ph           $t3, $t3, 8                       \n"
-     "precr.qb.ph       $t6, $t6, $t7                     \n"
-     "precr.qb.ph       $t2, $t2, $t3                     \n"
-     "addiu             %[src_ptr], %[src_ptr], 8         \n"
-     "addiu             %[src_ptr1], %[src_ptr1], 8       \n"
-     "addiu             %[dst_width], %[dst_width], -8    \n"
-     "sw                $t6, 0(%[dst_ptr])                \n"
-     "sw                $t2, 4(%[dst_ptr])                \n"
-     "bgtz              %[dst_width], 1b                  \n"
-     " addiu            %[dst_ptr], %[dst_ptr], 8         \n"
-
-     ".set pop                                            \n"
-  : [dst_ptr] "+r" (dst_ptr),
-    [src_ptr1] "+r" (src_ptr1),
-    [src_ptr] "+r" (src_ptr),
-    [dst_width] "+r" (dst_width)
-  : [source_y_fraction] "r" (source_y_fraction),
-    [y0_fraction] "r" (y0_fraction),
-    [src_stride] "r" (src_stride)
-  : "t0", "t1", "t2", "t3", "t4", "t5",
-    "t6", "t7", "t8", "t9"
-  );
-}
-#endif  // __mips_dsp_rev >= 2
-
-#endif  // defined(__mips__)
-
-#ifdef __cplusplus
-}  // extern "C"
-}  // namespace libyuv
-#endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_msa.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_msa.cc
new file mode 100644
index 0000000..4fb2631
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_msa.cc
@@ -0,0 +1,3512 @@
+/*
+ *  Copyright 2016 The LibYuv Project Authors. All rights reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS. All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <string.h>
+
+#include "libyuv/row.h"
+
+// This module is for GCC MSA
+#if !defined(LIBYUV_DISABLE_MSA) && defined(__mips_msa)
+#include "libyuv/macros_msa.h"
+
+#ifdef __cplusplus
+namespace libyuv {
+extern "C" {
+#endif
+
+#define ALPHA_VAL (-1)
+
+// Fill YUV -> RGB conversion constants into vectors
+#define YUVTORGB_SETUP(yuvconst, ub, vr, ug, vg, bb, bg, br, yg) \
+  {                                                              \
+    ub = __msa_fill_w(yuvconst->kUVToB[0]);                      \
+    vr = __msa_fill_w(yuvconst->kUVToR[1]);                      \
+    ug = __msa_fill_w(yuvconst->kUVToG[0]);                      \
+    vg = __msa_fill_w(yuvconst->kUVToG[1]);                      \
+    bb = __msa_fill_w(yuvconst->kUVBiasB[0]);                    \
+    bg = __msa_fill_w(yuvconst->kUVBiasG[0]);                    \
+    br = __msa_fill_w(yuvconst->kUVBiasR[0]);                    \
+    yg = __msa_fill_w(yuvconst->kYToRgb[0]);                     \
+  }
+
+// Load YUV 422 pixel data
+#define READYUV422(psrc_y, psrc_u, psrc_v, out_y, out_u, out_v)    \
+  {                                                                \
+    uint64_t y_m;                                                  \
+    uint32_t u_m, v_m;                                             \
+    v4i32 zero_m = {0};                                            \
+    y_m = LD(psrc_y);                                              \
+    u_m = LW(psrc_u);                                              \
+    v_m = LW(psrc_v);                                              \
+    out_y = (v16u8)__msa_insert_d((v2i64)zero_m, 0, (int64_t)y_m); \
+    out_u = (v16u8)__msa_insert_w(zero_m, 0, (int32_t)u_m);        \
+    out_v = (v16u8)__msa_insert_w(zero_m, 0, (int32_t)v_m);        \
+  }
+
+// Clip input vector elements between 0 to 255
+#define CLIP_0TO255(in0, in1, in2, in3, in4, in5) \
+  {                                               \
+    v4i32 max_m = __msa_ldi_w(0xFF);              \
+                                                  \
+    in0 = __msa_maxi_s_w(in0, 0);                 \
+    in1 = __msa_maxi_s_w(in1, 0);                 \
+    in2 = __msa_maxi_s_w(in2, 0);                 \
+    in3 = __msa_maxi_s_w(in3, 0);                 \
+    in4 = __msa_maxi_s_w(in4, 0);                 \
+    in5 = __msa_maxi_s_w(in5, 0);                 \
+    in0 = __msa_min_s_w(max_m, in0);              \
+    in1 = __msa_min_s_w(max_m, in1);              \
+    in2 = __msa_min_s_w(max_m, in2);              \
+    in3 = __msa_min_s_w(max_m, in3);              \
+    in4 = __msa_min_s_w(max_m, in4);              \
+    in5 = __msa_min_s_w(max_m, in5);              \
+  }
+
+// Convert 8 pixels of YUV 420 to RGB.
+#define YUVTORGB(in_y, in_uv, ubvr, ugvg, bb, bg, br, yg, out_b, out_g, out_r) \
+  {                                                                            \
+    v8i16 vec0_m, vec1_m;                                                      \
+    v4i32 reg0_m, reg1_m, reg2_m, reg3_m, reg4_m;                              \
+    v4i32 reg5_m, reg6_m, reg7_m;                                              \
+    v16i8 zero_m = {0};                                                        \
+                                                                               \
+    vec0_m = (v8i16)__msa_ilvr_b((v16i8)in_y, (v16i8)in_y);                    \
+    vec1_m = (v8i16)__msa_ilvr_b((v16i8)zero_m, (v16i8)in_uv);                 \
+    reg0_m = (v4i32)__msa_ilvr_h((v8i16)zero_m, (v8i16)vec0_m);                \
+    reg1_m = (v4i32)__msa_ilvl_h((v8i16)zero_m, (v8i16)vec0_m);                \
+    reg2_m = (v4i32)__msa_ilvr_h((v8i16)zero_m, (v8i16)vec1_m);                \
+    reg3_m = (v4i32)__msa_ilvl_h((v8i16)zero_m, (v8i16)vec1_m);                \
+    reg0_m *= yg;                                                              \
+    reg1_m *= yg;                                                              \
+    reg2_m *= ubvr;                                                            \
+    reg3_m *= ubvr;                                                            \
+    reg0_m = __msa_srai_w(reg0_m, 16);                                         \
+    reg1_m = __msa_srai_w(reg1_m, 16);                                         \
+    reg4_m = __msa_dotp_s_w((v8i16)vec1_m, (v8i16)ugvg);                       \
+    reg5_m = __msa_ilvev_w(reg2_m, reg2_m);                                    \
+    reg6_m = __msa_ilvev_w(reg3_m, reg3_m);                                    \
+    reg7_m = __msa_ilvr_w(reg4_m, reg4_m);                                     \
+    reg2_m = __msa_ilvod_w(reg2_m, reg2_m);                                    \
+    reg3_m = __msa_ilvod_w(reg3_m, reg3_m);                                    \
+    reg4_m = __msa_ilvl_w(reg4_m, reg4_m);                                     \
+    reg5_m = reg0_m - reg5_m;                                                  \
+    reg6_m = reg1_m - reg6_m;                                                  \
+    reg2_m = reg0_m - reg2_m;                                                  \
+    reg3_m = reg1_m - reg3_m;                                                  \
+    reg7_m = reg0_m - reg7_m;                                                  \
+    reg4_m = reg1_m - reg4_m;                                                  \
+    reg5_m += bb;                                                              \
+    reg6_m += bb;                                                              \
+    reg7_m += bg;                                                              \
+    reg4_m += bg;                                                              \
+    reg2_m += br;                                                              \
+    reg3_m += br;                                                              \
+    reg5_m = __msa_srai_w(reg5_m, 6);                                          \
+    reg6_m = __msa_srai_w(reg6_m, 6);                                          \
+    reg7_m = __msa_srai_w(reg7_m, 6);                                          \
+    reg4_m = __msa_srai_w(reg4_m, 6);                                          \
+    reg2_m = __msa_srai_w(reg2_m, 6);                                          \
+    reg3_m = __msa_srai_w(reg3_m, 6);                                          \
+    CLIP_0TO255(reg5_m, reg6_m, reg7_m, reg4_m, reg2_m, reg3_m);               \
+    out_b = __msa_pckev_h((v8i16)reg6_m, (v8i16)reg5_m);                       \
+    out_g = __msa_pckev_h((v8i16)reg4_m, (v8i16)reg7_m);                       \
+    out_r = __msa_pckev_h((v8i16)reg3_m, (v8i16)reg2_m);                       \
+  }
+
+// Pack and Store 8 ARGB values.
+#define STOREARGB(in0, in1, in2, in3, pdst_argb)           \
+  {                                                        \
+    v8i16 vec0_m, vec1_m;                                  \
+    v16u8 dst0_m, dst1_m;                                  \
+    vec0_m = (v8i16)__msa_ilvev_b((v16i8)in1, (v16i8)in0); \
+    vec1_m = (v8i16)__msa_ilvev_b((v16i8)in3, (v16i8)in2); \
+    dst0_m = (v16u8)__msa_ilvr_h(vec1_m, vec0_m);          \
+    dst1_m = (v16u8)__msa_ilvl_h(vec1_m, vec0_m);          \
+    ST_UB2(dst0_m, dst1_m, pdst_argb, 16);                 \
+  }
+
+// Takes ARGB input and calculates Y.
+#define ARGBTOY(argb0, argb1, argb2, argb3, const0, const1, const2, shift, \
+                y_out)                                                     \
+  {                                                                        \
+    v16u8 vec0_m, vec1_m, vec2_m, vec3_m;                                  \
+    v8u16 reg0_m, reg1_m;                                                  \
+                                                                           \
+    vec0_m = (v16u8)__msa_pckev_h((v8i16)argb1, (v8i16)argb0);             \
+    vec1_m = (v16u8)__msa_pckev_h((v8i16)argb3, (v8i16)argb2);             \
+    vec2_m = (v16u8)__msa_pckod_h((v8i16)argb1, (v8i16)argb0);             \
+    vec3_m = (v16u8)__msa_pckod_h((v8i16)argb3, (v8i16)argb2);             \
+    reg0_m = __msa_dotp_u_h(vec0_m, const0);                               \
+    reg1_m = __msa_dotp_u_h(vec1_m, const0);                               \
+    reg0_m = __msa_dpadd_u_h(reg0_m, vec2_m, const1);                      \
+    reg1_m = __msa_dpadd_u_h(reg1_m, vec3_m, const1);                      \
+    reg0_m += const2;                                                      \
+    reg1_m += const2;                                                      \
+    reg0_m = (v8u16)__msa_srai_h((v8i16)reg0_m, shift);                    \
+    reg1_m = (v8u16)__msa_srai_h((v8i16)reg1_m, shift);                    \
+    y_out = (v16u8)__msa_pckev_b((v16i8)reg1_m, (v16i8)reg0_m);            \
+  }
+
+// Loads current and next row of ARGB input and averages it to calculate U and V
+#define READ_ARGB(s_ptr, t_ptr, argb0, argb1, argb2, argb3)               \
+  {                                                                       \
+    v16u8 src0_m, src1_m, src2_m, src3_m, src4_m, src5_m, src6_m, src7_m; \
+    v16u8 vec0_m, vec1_m, vec2_m, vec3_m, vec4_m, vec5_m, vec6_m, vec7_m; \
+    v16u8 vec8_m, vec9_m;                                                 \
+    v8u16 reg0_m, reg1_m, reg2_m, reg3_m, reg4_m, reg5_m, reg6_m, reg7_m; \
+    v8u16 reg8_m, reg9_m;                                                 \
+                                                                          \
+    src0_m = (v16u8)__msa_ld_b((v16i8*)s, 0);                             \
+    src1_m = (v16u8)__msa_ld_b((v16i8*)s, 16);                            \
+    src2_m = (v16u8)__msa_ld_b((v16i8*)s, 32);                            \
+    src3_m = (v16u8)__msa_ld_b((v16i8*)s, 48);                            \
+    src4_m = (v16u8)__msa_ld_b((v16i8*)t, 0);                             \
+    src5_m = (v16u8)__msa_ld_b((v16i8*)t, 16);                            \
+    src6_m = (v16u8)__msa_ld_b((v16i8*)t, 32);                            \
+    src7_m = (v16u8)__msa_ld_b((v16i8*)t, 48);                            \
+    vec0_m = (v16u8)__msa_ilvr_b((v16i8)src0_m, (v16i8)src4_m);           \
+    vec1_m = (v16u8)__msa_ilvr_b((v16i8)src1_m, (v16i8)src5_m);           \
+    vec2_m = (v16u8)__msa_ilvr_b((v16i8)src2_m, (v16i8)src6_m);           \
+    vec3_m = (v16u8)__msa_ilvr_b((v16i8)src3_m, (v16i8)src7_m);           \
+    vec4_m = (v16u8)__msa_ilvl_b((v16i8)src0_m, (v16i8)src4_m);           \
+    vec5_m = (v16u8)__msa_ilvl_b((v16i8)src1_m, (v16i8)src5_m);           \
+    vec6_m = (v16u8)__msa_ilvl_b((v16i8)src2_m, (v16i8)src6_m);           \
+    vec7_m = (v16u8)__msa_ilvl_b((v16i8)src3_m, (v16i8)src7_m);           \
+    reg0_m = __msa_hadd_u_h(vec0_m, vec0_m);                              \
+    reg1_m = __msa_hadd_u_h(vec1_m, vec1_m);                              \
+    reg2_m = __msa_hadd_u_h(vec2_m, vec2_m);                              \
+    reg3_m = __msa_hadd_u_h(vec3_m, vec3_m);                              \
+    reg4_m = __msa_hadd_u_h(vec4_m, vec4_m);                              \
+    reg5_m = __msa_hadd_u_h(vec5_m, vec5_m);                              \
+    reg6_m = __msa_hadd_u_h(vec6_m, vec6_m);                              \
+    reg7_m = __msa_hadd_u_h(vec7_m, vec7_m);                              \
+    reg8_m = (v8u16)__msa_pckev_d((v2i64)reg4_m, (v2i64)reg0_m);          \
+    reg9_m = (v8u16)__msa_pckev_d((v2i64)reg5_m, (v2i64)reg1_m);          \
+    reg8_m += (v8u16)__msa_pckod_d((v2i64)reg4_m, (v2i64)reg0_m);         \
+    reg9_m += (v8u16)__msa_pckod_d((v2i64)reg5_m, (v2i64)reg1_m);         \
+    reg0_m = (v8u16)__msa_pckev_d((v2i64)reg6_m, (v2i64)reg2_m);          \
+    reg1_m = (v8u16)__msa_pckev_d((v2i64)reg7_m, (v2i64)reg3_m);          \
+    reg0_m += (v8u16)__msa_pckod_d((v2i64)reg6_m, (v2i64)reg2_m);         \
+    reg1_m += (v8u16)__msa_pckod_d((v2i64)reg7_m, (v2i64)reg3_m);         \
+    reg8_m = (v8u16)__msa_srai_h((v8i16)reg8_m, 2);                       \
+    reg9_m = (v8u16)__msa_srai_h((v8i16)reg9_m, 2);                       \
+    reg0_m = (v8u16)__msa_srai_h((v8i16)reg0_m, 2);                       \
+    reg1_m = (v8u16)__msa_srai_h((v8i16)reg1_m, 2);                       \
+    argb0 = (v16u8)__msa_pckev_b((v16i8)reg9_m, (v16i8)reg8_m);           \
+    argb1 = (v16u8)__msa_pckev_b((v16i8)reg1_m, (v16i8)reg0_m);           \
+    src0_m = (v16u8)__msa_ld_b((v16i8*)s, 64);                            \
+    src1_m = (v16u8)__msa_ld_b((v16i8*)s, 80);                            \
+    src2_m = (v16u8)__msa_ld_b((v16i8*)s, 96);                            \
+    src3_m = (v16u8)__msa_ld_b((v16i8*)s, 112);                           \
+    src4_m = (v16u8)__msa_ld_b((v16i8*)t, 64);                            \
+    src5_m = (v16u8)__msa_ld_b((v16i8*)t, 80);                            \
+    src6_m = (v16u8)__msa_ld_b((v16i8*)t, 96);                            \
+    src7_m = (v16u8)__msa_ld_b((v16i8*)t, 112);                           \
+    vec2_m = (v16u8)__msa_ilvr_b((v16i8)src0_m, (v16i8)src4_m);           \
+    vec3_m = (v16u8)__msa_ilvr_b((v16i8)src1_m, (v16i8)src5_m);           \
+    vec4_m = (v16u8)__msa_ilvr_b((v16i8)src2_m, (v16i8)src6_m);           \
+    vec5_m = (v16u8)__msa_ilvr_b((v16i8)src3_m, (v16i8)src7_m);           \
+    vec6_m = (v16u8)__msa_ilvl_b((v16i8)src0_m, (v16i8)src4_m);           \
+    vec7_m = (v16u8)__msa_ilvl_b((v16i8)src1_m, (v16i8)src5_m);           \
+    vec8_m = (v16u8)__msa_ilvl_b((v16i8)src2_m, (v16i8)src6_m);           \
+    vec9_m = (v16u8)__msa_ilvl_b((v16i8)src3_m, (v16i8)src7_m);           \
+    reg0_m = __msa_hadd_u_h(vec2_m, vec2_m);                              \
+    reg1_m = __msa_hadd_u_h(vec3_m, vec3_m);                              \
+    reg2_m = __msa_hadd_u_h(vec4_m, vec4_m);                              \
+    reg3_m = __msa_hadd_u_h(vec5_m, vec5_m);                              \
+    reg4_m = __msa_hadd_u_h(vec6_m, vec6_m);                              \
+    reg5_m = __msa_hadd_u_h(vec7_m, vec7_m);                              \
+    reg6_m = __msa_hadd_u_h(vec8_m, vec8_m);                              \
+    reg7_m = __msa_hadd_u_h(vec9_m, vec9_m);                              \
+    reg8_m = (v8u16)__msa_pckev_d((v2i64)reg4_m, (v2i64)reg0_m);          \
+    reg9_m = (v8u16)__msa_pckev_d((v2i64)reg5_m, (v2i64)reg1_m);          \
+    reg8_m += (v8u16)__msa_pckod_d((v2i64)reg4_m, (v2i64)reg0_m);         \
+    reg9_m += (v8u16)__msa_pckod_d((v2i64)reg5_m, (v2i64)reg1_m);         \
+    reg0_m = (v8u16)__msa_pckev_d((v2i64)reg6_m, (v2i64)reg2_m);          \
+    reg1_m = (v8u16)__msa_pckev_d((v2i64)reg7_m, (v2i64)reg3_m);          \
+    reg0_m += (v8u16)__msa_pckod_d((v2i64)reg6_m, (v2i64)reg2_m);         \
+    reg1_m += (v8u16)__msa_pckod_d((v2i64)reg7_m, (v2i64)reg3_m);         \
+    reg8_m = (v8u16)__msa_srai_h((v8i16)reg8_m, 2);                       \
+    reg9_m = (v8u16)__msa_srai_h((v8i16)reg9_m, 2);                       \
+    reg0_m = (v8u16)__msa_srai_h((v8i16)reg0_m, 2);                       \
+    reg1_m = (v8u16)__msa_srai_h((v8i16)reg1_m, 2);                       \
+    argb2 = (v16u8)__msa_pckev_b((v16i8)reg9_m, (v16i8)reg8_m);           \
+    argb3 = (v16u8)__msa_pckev_b((v16i8)reg1_m, (v16i8)reg0_m);           \
+  }
+
+// Takes ARGB input and calculates U and V.
+#define ARGBTOUV(argb0, argb1, argb2, argb3, const0, const1, const2, const3, \
+                 shf0, shf1, shf2, shf3, v_out, u_out)                       \
+  {                                                                          \
+    v16u8 vec0_m, vec1_m, vec2_m, vec3_m, vec4_m, vec5_m, vec6_m, vec7_m;    \
+    v8u16 reg0_m, reg1_m, reg2_m, reg3_m;                                    \
+                                                                             \
+    vec0_m = (v16u8)__msa_vshf_b(shf0, (v16i8)argb1, (v16i8)argb0);          \
+    vec1_m = (v16u8)__msa_vshf_b(shf0, (v16i8)argb3, (v16i8)argb2);          \
+    vec2_m = (v16u8)__msa_vshf_b(shf1, (v16i8)argb1, (v16i8)argb0);          \
+    vec3_m = (v16u8)__msa_vshf_b(shf1, (v16i8)argb3, (v16i8)argb2);          \
+    vec4_m = (v16u8)__msa_vshf_b(shf2, (v16i8)argb1, (v16i8)argb0);          \
+    vec5_m = (v16u8)__msa_vshf_b(shf2, (v16i8)argb3, (v16i8)argb2);          \
+    vec6_m = (v16u8)__msa_vshf_b(shf3, (v16i8)argb1, (v16i8)argb0);          \
+    vec7_m = (v16u8)__msa_vshf_b(shf3, (v16i8)argb3, (v16i8)argb2);          \
+    reg0_m = __msa_dotp_u_h(vec0_m, const1);                                 \
+    reg1_m = __msa_dotp_u_h(vec1_m, const1);                                 \
+    reg2_m = __msa_dotp_u_h(vec4_m, const1);                                 \
+    reg3_m = __msa_dotp_u_h(vec5_m, const1);                                 \
+    reg0_m += const3;                                                        \
+    reg1_m += const3;                                                        \
+    reg2_m += const3;                                                        \
+    reg3_m += const3;                                                        \
+    reg0_m -= __msa_dotp_u_h(vec2_m, const0);                                \
+    reg1_m -= __msa_dotp_u_h(vec3_m, const0);                                \
+    reg2_m -= __msa_dotp_u_h(vec6_m, const2);                                \
+    reg3_m -= __msa_dotp_u_h(vec7_m, const2);                                \
+    v_out = (v16u8)__msa_pckod_b((v16i8)reg1_m, (v16i8)reg0_m);              \
+    u_out = (v16u8)__msa_pckod_b((v16i8)reg3_m, (v16i8)reg2_m);              \
+  }
+
+// Load I444 pixel data
+#define READI444(psrc_y, psrc_u, psrc_v, out_y, out_u, out_v) \
+  {                                                           \
+    uint64_t y_m, u_m, v_m;                                   \
+    v2i64 zero_m = {0};                                       \
+    y_m = LD(psrc_y);                                         \
+    u_m = LD(psrc_u);                                         \
+    v_m = LD(psrc_v);                                         \
+    out_y = (v16u8)__msa_insert_d(zero_m, 0, (int64_t)y_m);   \
+    out_u = (v16u8)__msa_insert_d(zero_m, 0, (int64_t)u_m);   \
+    out_v = (v16u8)__msa_insert_d(zero_m, 0, (int64_t)v_m);   \
+  }
+
+void MirrorRow_MSA(const uint8_t* src, uint8_t* dst, int width) {
+  int x;
+  v16u8 src0, src1, src2, src3;
+  v16u8 dst0, dst1, dst2, dst3;
+  v16i8 shuffler = {15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0};
+  src += width - 64;
+
+  for (x = 0; x < width; x += 64) {
+    LD_UB4(src, 16, src3, src2, src1, src0);
+    VSHF_B2_UB(src3, src3, src2, src2, shuffler, shuffler, dst3, dst2);
+    VSHF_B2_UB(src1, src1, src0, src0, shuffler, shuffler, dst1, dst0);
+    ST_UB4(dst0, dst1, dst2, dst3, dst, 16);
+    dst += 64;
+    src -= 64;
+  }
+}
+
+void ARGBMirrorRow_MSA(const uint8_t* src, uint8_t* dst, int width) {
+  int x;
+  v16u8 src0, src1, src2, src3;
+  v16u8 dst0, dst1, dst2, dst3;
+  v16i8 shuffler = {12, 13, 14, 15, 8, 9, 10, 11, 4, 5, 6, 7, 0, 1, 2, 3};
+  src += width * 4 - 64;
+
+  for (x = 0; x < width; x += 16) {
+    LD_UB4(src, 16, src3, src2, src1, src0);
+    VSHF_B2_UB(src3, src3, src2, src2, shuffler, shuffler, dst3, dst2);
+    VSHF_B2_UB(src1, src1, src0, src0, shuffler, shuffler, dst1, dst0);
+    ST_UB4(dst0, dst1, dst2, dst3, dst, 16);
+    dst += 64;
+    src -= 64;
+  }
+}
+
+void I422ToYUY2Row_MSA(const uint8_t* src_y,
+                       const uint8_t* src_u,
+                       const uint8_t* src_v,
+                       uint8_t* dst_yuy2,
+                       int width) {
+  int x;
+  v16u8 src_u0, src_v0, src_y0, src_y1, vec_uv0, vec_uv1;
+  v16u8 dst_yuy2_0, dst_yuy2_1, dst_yuy2_2, dst_yuy2_3;
+
+  for (x = 0; x < width; x += 32) {
+    src_u0 = LD_UB(src_u);
+    src_v0 = LD_UB(src_v);
+    LD_UB2(src_y, 16, src_y0, src_y1);
+    ILVRL_B2_UB(src_v0, src_u0, vec_uv0, vec_uv1);
+    ILVRL_B2_UB(vec_uv0, src_y0, dst_yuy2_0, dst_yuy2_1);
+    ILVRL_B2_UB(vec_uv1, src_y1, dst_yuy2_2, dst_yuy2_3);
+    ST_UB4(dst_yuy2_0, dst_yuy2_1, dst_yuy2_2, dst_yuy2_3, dst_yuy2, 16);
+    src_u += 16;
+    src_v += 16;
+    src_y += 32;
+    dst_yuy2 += 64;
+  }
+}
+
+void I422ToUYVYRow_MSA(const uint8_t* src_y,
+                       const uint8_t* src_u,
+                       const uint8_t* src_v,
+                       uint8_t* dst_uyvy,
+                       int width) {
+  int x;
+  v16u8 src_u0, src_v0, src_y0, src_y1, vec_uv0, vec_uv1;
+  v16u8 dst_uyvy0, dst_uyvy1, dst_uyvy2, dst_uyvy3;
+
+  for (x = 0; x < width; x += 32) {
+    src_u0 = LD_UB(src_u);
+    src_v0 = LD_UB(src_v);
+    LD_UB2(src_y, 16, src_y0, src_y1);
+    ILVRL_B2_UB(src_v0, src_u0, vec_uv0, vec_uv1);
+    ILVRL_B2_UB(src_y0, vec_uv0, dst_uyvy0, dst_uyvy1);
+    ILVRL_B2_UB(src_y1, vec_uv1, dst_uyvy2, dst_uyvy3);
+    ST_UB4(dst_uyvy0, dst_uyvy1, dst_uyvy2, dst_uyvy3, dst_uyvy, 16);
+    src_u += 16;
+    src_v += 16;
+    src_y += 32;
+    dst_uyvy += 64;
+  }
+}
+
+void I422ToARGBRow_MSA(const uint8_t* src_y,
+                       const uint8_t* src_u,
+                       const uint8_t* src_v,
+                       uint8_t* dst_argb,
+                       const struct YuvConstants* yuvconstants,
+                       int width) {
+  int x;
+  v16u8 src0, src1, src2;
+  v8i16 vec0, vec1, vec2;
+  v4i32 vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg, vec_br, vec_yg;
+  v4i32 vec_ubvr, vec_ugvg;
+  v16u8 alpha = (v16u8)__msa_ldi_b(ALPHA_VAL);
+
+  YUVTORGB_SETUP(yuvconstants, vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg,
+                 vec_br, vec_yg);
+  vec_ubvr = __msa_ilvr_w(vec_vr, vec_ub);
+  vec_ugvg = (v4i32)__msa_ilvev_h((v8i16)vec_vg, (v8i16)vec_ug);
+
+  for (x = 0; x < width; x += 8) {
+    READYUV422(src_y, src_u, src_v, src0, src1, src2);
+    src1 = (v16u8)__msa_ilvr_b((v16i8)src2, (v16i8)src1);
+    YUVTORGB(src0, src1, vec_ubvr, vec_ugvg, vec_bb, vec_bg, vec_br, vec_yg,
+             vec0, vec1, vec2);
+    STOREARGB(vec0, vec1, vec2, alpha, dst_argb);
+    src_y += 8;
+    src_u += 4;
+    src_v += 4;
+    dst_argb += 32;
+  }
+}
+
+void I422ToRGBARow_MSA(const uint8_t* src_y,
+                       const uint8_t* src_u,
+                       const uint8_t* src_v,
+                       uint8_t* dst_argb,
+                       const struct YuvConstants* yuvconstants,
+                       int width) {
+  int x;
+  v16u8 src0, src1, src2;
+  v8i16 vec0, vec1, vec2;
+  v4i32 vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg, vec_br, vec_yg;
+  v4i32 vec_ubvr, vec_ugvg;
+  v16u8 alpha = (v16u8)__msa_ldi_b(ALPHA_VAL);
+
+  YUVTORGB_SETUP(yuvconstants, vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg,
+                 vec_br, vec_yg);
+  vec_ubvr = __msa_ilvr_w(vec_vr, vec_ub);
+  vec_ugvg = (v4i32)__msa_ilvev_h((v8i16)vec_vg, (v8i16)vec_ug);
+
+  for (x = 0; x < width; x += 8) {
+    READYUV422(src_y, src_u, src_v, src0, src1, src2);
+    src1 = (v16u8)__msa_ilvr_b((v16i8)src2, (v16i8)src1);
+    YUVTORGB(src0, src1, vec_ubvr, vec_ugvg, vec_bb, vec_bg, vec_br, vec_yg,
+             vec0, vec1, vec2);
+    STOREARGB(alpha, vec0, vec1, vec2, dst_argb);
+    src_y += 8;
+    src_u += 4;
+    src_v += 4;
+    dst_argb += 32;
+  }
+}
+
+void I422AlphaToARGBRow_MSA(const uint8_t* src_y,
+                            const uint8_t* src_u,
+                            const uint8_t* src_v,
+                            const uint8_t* src_a,
+                            uint8_t* dst_argb,
+                            const struct YuvConstants* yuvconstants,
+                            int width) {
+  int x;
+  int64_t data_a;
+  v16u8 src0, src1, src2, src3;
+  v8i16 vec0, vec1, vec2;
+  v4i32 vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg, vec_br, vec_yg;
+  v4i32 vec_ubvr, vec_ugvg;
+  v4i32 zero = {0};
+
+  YUVTORGB_SETUP(yuvconstants, vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg,
+                 vec_br, vec_yg);
+  vec_ubvr = __msa_ilvr_w(vec_vr, vec_ub);
+  vec_ugvg = (v4i32)__msa_ilvev_h((v8i16)vec_vg, (v8i16)vec_ug);
+
+  for (x = 0; x < width; x += 8) {
+    data_a = LD(src_a);
+    READYUV422(src_y, src_u, src_v, src0, src1, src2);
+    src1 = (v16u8)__msa_ilvr_b((v16i8)src2, (v16i8)src1);
+    src3 = (v16u8)__msa_insert_d((v2i64)zero, 0, data_a);
+    YUVTORGB(src0, src1, vec_ubvr, vec_ugvg, vec_bb, vec_bg, vec_br, vec_yg,
+             vec0, vec1, vec2);
+    src3 = (v16u8)__msa_ilvr_b((v16i8)src3, (v16i8)src3);
+    STOREARGB(vec0, vec1, vec2, src3, dst_argb);
+    src_y += 8;
+    src_u += 4;
+    src_v += 4;
+    src_a += 8;
+    dst_argb += 32;
+  }
+}
+
+void I422ToRGB24Row_MSA(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_argb,
+                        const struct YuvConstants* yuvconstants,
+                        int32_t width) {
+  int x;
+  int64_t data_u, data_v;
+  v16u8 src0, src1, src2, src3, src4, dst0, dst1, dst2;
+  v8i16 vec0, vec1, vec2, vec3, vec4, vec5;
+  v4i32 vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg, vec_br, vec_yg;
+  v4i32 vec_ubvr, vec_ugvg;
+  v16u8 reg0, reg1, reg2, reg3;
+  v2i64 zero = {0};
+  v16i8 shuffler0 = {0, 1, 16, 2, 3, 17, 4, 5, 18, 6, 7, 19, 8, 9, 20, 10};
+  v16i8 shuffler1 = {0, 21, 1, 2, 22, 3, 4, 23, 5, 6, 24, 7, 8, 25, 9, 10};
+  v16i8 shuffler2 = {26, 6,  7,  27, 8,  9,  28, 10,
+                     11, 29, 12, 13, 30, 14, 15, 31};
+
+  YUVTORGB_SETUP(yuvconstants, vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg,
+                 vec_br, vec_yg);
+  vec_ubvr = __msa_ilvr_w(vec_vr, vec_ub);
+  vec_ugvg = (v4i32)__msa_ilvev_h((v8i16)vec_vg, (v8i16)vec_ug);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((v16u8*)src_y, 0);
+    data_u = LD(src_u);
+    data_v = LD(src_v);
+    src1 = (v16u8)__msa_insert_d(zero, 0, data_u);
+    src2 = (v16u8)__msa_insert_d(zero, 0, data_v);
+    src1 = (v16u8)__msa_ilvr_b((v16i8)src2, (v16i8)src1);
+    src3 = (v16u8)__msa_sldi_b((v16i8)src0, (v16i8)src0, 8);
+    src4 = (v16u8)__msa_sldi_b((v16i8)src1, (v16i8)src1, 8);
+    YUVTORGB(src0, src1, vec_ubvr, vec_ugvg, vec_bb, vec_bg, vec_br, vec_yg,
+             vec0, vec1, vec2);
+    YUVTORGB(src3, src4, vec_ubvr, vec_ugvg, vec_bb, vec_bg, vec_br, vec_yg,
+             vec3, vec4, vec5);
+    reg0 = (v16u8)__msa_ilvev_b((v16i8)vec1, (v16i8)vec0);
+    reg2 = (v16u8)__msa_ilvev_b((v16i8)vec4, (v16i8)vec3);
+    reg3 = (v16u8)__msa_pckev_b((v16i8)vec5, (v16i8)vec2);
+    reg1 = (v16u8)__msa_sldi_b((v16i8)reg2, (v16i8)reg0, 11);
+    dst0 = (v16u8)__msa_vshf_b(shuffler0, (v16i8)reg3, (v16i8)reg0);
+    dst1 = (v16u8)__msa_vshf_b(shuffler1, (v16i8)reg3, (v16i8)reg1);
+    dst2 = (v16u8)__msa_vshf_b(shuffler2, (v16i8)reg3, (v16i8)reg2);
+    ST_UB2(dst0, dst1, dst_argb, 16);
+    ST_UB(dst2, (dst_argb + 32));
+    src_y += 16;
+    src_u += 8;
+    src_v += 8;
+    dst_argb += 48;
+  }
+}
+
+// TODO(fbarchard): Consider AND instead of shift to isolate 5 upper bits of R.
+void I422ToRGB565Row_MSA(const uint8_t* src_y,
+                         const uint8_t* src_u,
+                         const uint8_t* src_v,
+                         uint8_t* dst_rgb565,
+                         const struct YuvConstants* yuvconstants,
+                         int width) {
+  int x;
+  v16u8 src0, src1, src2, dst0;
+  v8i16 vec0, vec1, vec2;
+  v4i32 vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg, vec_br, vec_yg;
+  v4i32 vec_ubvr, vec_ugvg;
+
+  YUVTORGB_SETUP(yuvconstants, vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg,
+                 vec_br, vec_yg);
+  vec_ubvr = __msa_ilvr_w(vec_vr, vec_ub);
+  vec_ugvg = (v4i32)__msa_ilvev_h((v8i16)vec_vg, (v8i16)vec_ug);
+
+  for (x = 0; x < width; x += 8) {
+    READYUV422(src_y, src_u, src_v, src0, src1, src2);
+    src1 = (v16u8)__msa_ilvr_b((v16i8)src2, (v16i8)src1);
+    YUVTORGB(src0, src1, vec_ubvr, vec_ugvg, vec_bb, vec_bg, vec_br, vec_yg,
+             vec0, vec2, vec1);
+    vec0 = __msa_srai_h(vec0, 3);
+    vec1 = __msa_srai_h(vec1, 3);
+    vec2 = __msa_srai_h(vec2, 2);
+    vec1 = __msa_slli_h(vec1, 11);
+    vec2 = __msa_slli_h(vec2, 5);
+    vec0 |= vec1;
+    dst0 = (v16u8)(vec2 | vec0);
+    ST_UB(dst0, dst_rgb565);
+    src_y += 8;
+    src_u += 4;
+    src_v += 4;
+    dst_rgb565 += 16;
+  }
+}
+
+// TODO(fbarchard): Consider AND instead of shift to isolate 4 upper bits of G.
+void I422ToARGB4444Row_MSA(const uint8_t* src_y,
+                           const uint8_t* src_u,
+                           const uint8_t* src_v,
+                           uint8_t* dst_argb4444,
+                           const struct YuvConstants* yuvconstants,
+                           int width) {
+  int x;
+  v16u8 src0, src1, src2, dst0;
+  v8i16 vec0, vec1, vec2;
+  v8u16 reg0, reg1, reg2;
+  v4i32 vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg, vec_br, vec_yg;
+  v4i32 vec_ubvr, vec_ugvg;
+  v8u16 const_0xF000 = (v8u16)__msa_fill_h(0xF000);
+
+  YUVTORGB_SETUP(yuvconstants, vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg,
+                 vec_br, vec_yg);
+  vec_ubvr = __msa_ilvr_w(vec_vr, vec_ub);
+  vec_ugvg = (v4i32)__msa_ilvev_h((v8i16)vec_vg, (v8i16)vec_ug);
+
+  for (x = 0; x < width; x += 8) {
+    READYUV422(src_y, src_u, src_v, src0, src1, src2);
+    src1 = (v16u8)__msa_ilvr_b((v16i8)src2, (v16i8)src1);
+    YUVTORGB(src0, src1, vec_ubvr, vec_ugvg, vec_bb, vec_bg, vec_br, vec_yg,
+             vec0, vec1, vec2);
+    reg0 = (v8u16)__msa_srai_h(vec0, 4);
+    reg1 = (v8u16)__msa_srai_h(vec1, 4);
+    reg2 = (v8u16)__msa_srai_h(vec2, 4);
+    reg1 = (v8u16)__msa_slli_h((v8i16)reg1, 4);
+    reg2 = (v8u16)__msa_slli_h((v8i16)reg2, 8);
+    reg1 |= const_0xF000;
+    reg0 |= reg2;
+    dst0 = (v16u8)(reg1 | reg0);
+    ST_UB(dst0, dst_argb4444);
+    src_y += 8;
+    src_u += 4;
+    src_v += 4;
+    dst_argb4444 += 16;
+  }
+}
+
+void I422ToARGB1555Row_MSA(const uint8_t* src_y,
+                           const uint8_t* src_u,
+                           const uint8_t* src_v,
+                           uint8_t* dst_argb1555,
+                           const struct YuvConstants* yuvconstants,
+                           int width) {
+  int x;
+  v16u8 src0, src1, src2, dst0;
+  v8i16 vec0, vec1, vec2;
+  v8u16 reg0, reg1, reg2;
+  v4i32 vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg, vec_br, vec_yg;
+  v4i32 vec_ubvr, vec_ugvg;
+  v8u16 const_0x8000 = (v8u16)__msa_fill_h(0x8000);
+
+  YUVTORGB_SETUP(yuvconstants, vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg,
+                 vec_br, vec_yg);
+  vec_ubvr = __msa_ilvr_w(vec_vr, vec_ub);
+  vec_ugvg = (v4i32)__msa_ilvev_h((v8i16)vec_vg, (v8i16)vec_ug);
+
+  for (x = 0; x < width; x += 8) {
+    READYUV422(src_y, src_u, src_v, src0, src1, src2);
+    src1 = (v16u8)__msa_ilvr_b((v16i8)src2, (v16i8)src1);
+    YUVTORGB(src0, src1, vec_ubvr, vec_ugvg, vec_bb, vec_bg, vec_br, vec_yg,
+             vec0, vec1, vec2);
+    reg0 = (v8u16)__msa_srai_h(vec0, 3);
+    reg1 = (v8u16)__msa_srai_h(vec1, 3);
+    reg2 = (v8u16)__msa_srai_h(vec2, 3);
+    reg1 = (v8u16)__msa_slli_h((v8i16)reg1, 5);
+    reg2 = (v8u16)__msa_slli_h((v8i16)reg2, 10);
+    reg1 |= const_0x8000;
+    reg0 |= reg2;
+    dst0 = (v16u8)(reg1 | reg0);
+    ST_UB(dst0, dst_argb1555);
+    src_y += 8;
+    src_u += 4;
+    src_v += 4;
+    dst_argb1555 += 16;
+  }
+}
+
+void YUY2ToYRow_MSA(const uint8_t* src_yuy2, uint8_t* dst_y, int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0, dst1;
+
+  for (x = 0; x < width; x += 32) {
+    LD_UB4(src_yuy2, 16, src0, src1, src2, src3);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)src1, (v16i8)src0);
+    dst1 = (v16u8)__msa_pckev_b((v16i8)src3, (v16i8)src2);
+    ST_UB2(dst0, dst1, dst_y, 16);
+    src_yuy2 += 64;
+    dst_y += 32;
+  }
+}
+
+void YUY2ToUVRow_MSA(const uint8_t* src_yuy2,
+                     int src_stride_yuy2,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width) {
+  const uint8_t* src_yuy2_next = src_yuy2 + src_stride_yuy2;
+  int x;
+  v16u8 src0, src1, src2, src3, src4, src5, src6, src7;
+  v16u8 vec0, vec1, dst0, dst1;
+
+  for (x = 0; x < width; x += 32) {
+    LD_UB4(src_yuy2, 16, src0, src1, src2, src3);
+    LD_UB4(src_yuy2_next, 16, src4, src5, src6, src7);
+    src0 = (v16u8)__msa_pckod_b((v16i8)src1, (v16i8)src0);
+    src1 = (v16u8)__msa_pckod_b((v16i8)src3, (v16i8)src2);
+    src2 = (v16u8)__msa_pckod_b((v16i8)src5, (v16i8)src4);
+    src3 = (v16u8)__msa_pckod_b((v16i8)src7, (v16i8)src6);
+    vec0 = __msa_aver_u_b(src0, src2);
+    vec1 = __msa_aver_u_b(src1, src3);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    dst1 = (v16u8)__msa_pckod_b((v16i8)vec1, (v16i8)vec0);
+    ST_UB(dst0, dst_u);
+    ST_UB(dst1, dst_v);
+    src_yuy2 += 64;
+    src_yuy2_next += 64;
+    dst_u += 16;
+    dst_v += 16;
+  }
+}
+
+void YUY2ToUV422Row_MSA(const uint8_t* src_yuy2,
+                        uint8_t* dst_u,
+                        uint8_t* dst_v,
+                        int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0, dst1;
+
+  for (x = 0; x < width; x += 32) {
+    LD_UB4(src_yuy2, 16, src0, src1, src2, src3);
+    src0 = (v16u8)__msa_pckod_b((v16i8)src1, (v16i8)src0);
+    src1 = (v16u8)__msa_pckod_b((v16i8)src3, (v16i8)src2);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)src1, (v16i8)src0);
+    dst1 = (v16u8)__msa_pckod_b((v16i8)src1, (v16i8)src0);
+    ST_UB(dst0, dst_u);
+    ST_UB(dst1, dst_v);
+    src_yuy2 += 64;
+    dst_u += 16;
+    dst_v += 16;
+  }
+}
+
+void UYVYToYRow_MSA(const uint8_t* src_uyvy, uint8_t* dst_y, int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0, dst1;
+
+  for (x = 0; x < width; x += 32) {
+    LD_UB4(src_uyvy, 16, src0, src1, src2, src3);
+    dst0 = (v16u8)__msa_pckod_b((v16i8)src1, (v16i8)src0);
+    dst1 = (v16u8)__msa_pckod_b((v16i8)src3, (v16i8)src2);
+    ST_UB2(dst0, dst1, dst_y, 16);
+    src_uyvy += 64;
+    dst_y += 32;
+  }
+}
+
+void UYVYToUVRow_MSA(const uint8_t* src_uyvy,
+                     int src_stride_uyvy,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width) {
+  const uint8_t* src_uyvy_next = src_uyvy + src_stride_uyvy;
+  int x;
+  v16u8 src0, src1, src2, src3, src4, src5, src6, src7;
+  v16u8 vec0, vec1, dst0, dst1;
+
+  for (x = 0; x < width; x += 32) {
+    LD_UB4(src_uyvy, 16, src0, src1, src2, src3);
+    LD_UB4(src_uyvy_next, 16, src4, src5, src6, src7);
+    src0 = (v16u8)__msa_pckev_b((v16i8)src1, (v16i8)src0);
+    src1 = (v16u8)__msa_pckev_b((v16i8)src3, (v16i8)src2);
+    src2 = (v16u8)__msa_pckev_b((v16i8)src5, (v16i8)src4);
+    src3 = (v16u8)__msa_pckev_b((v16i8)src7, (v16i8)src6);
+    vec0 = __msa_aver_u_b(src0, src2);
+    vec1 = __msa_aver_u_b(src1, src3);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    dst1 = (v16u8)__msa_pckod_b((v16i8)vec1, (v16i8)vec0);
+    ST_UB(dst0, dst_u);
+    ST_UB(dst1, dst_v);
+    src_uyvy += 64;
+    src_uyvy_next += 64;
+    dst_u += 16;
+    dst_v += 16;
+  }
+}
+
+void UYVYToUV422Row_MSA(const uint8_t* src_uyvy,
+                        uint8_t* dst_u,
+                        uint8_t* dst_v,
+                        int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0, dst1;
+
+  for (x = 0; x < width; x += 32) {
+    LD_UB4(src_uyvy, 16, src0, src1, src2, src3);
+    src0 = (v16u8)__msa_pckev_b((v16i8)src1, (v16i8)src0);
+    src1 = (v16u8)__msa_pckev_b((v16i8)src3, (v16i8)src2);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)src1, (v16i8)src0);
+    dst1 = (v16u8)__msa_pckod_b((v16i8)src1, (v16i8)src0);
+    ST_UB(dst0, dst_u);
+    ST_UB(dst1, dst_v);
+    src_uyvy += 64;
+    dst_u += 16;
+    dst_v += 16;
+  }
+}
+
+void ARGBToYRow_MSA(const uint8_t* src_argb0, uint8_t* dst_y, int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, vec0, vec1, vec2, vec3, dst0;
+  v8u16 reg0, reg1, reg2, reg3, reg4, reg5;
+  v16i8 zero = {0};
+  v8u16 const_0x19 = (v8u16)__msa_ldi_h(0x19);
+  v8u16 const_0x81 = (v8u16)__msa_ldi_h(0x81);
+  v8u16 const_0x42 = (v8u16)__msa_ldi_h(0x42);
+  v8u16 const_0x1080 = (v8u16)__msa_fill_h(0x1080);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((v16u8*)src_argb0, 0);
+    src1 = (v16u8)__msa_ld_b((v16u8*)src_argb0, 16);
+    src2 = (v16u8)__msa_ld_b((v16u8*)src_argb0, 32);
+    src3 = (v16u8)__msa_ld_b((v16u8*)src_argb0, 48);
+    vec0 = (v16u8)__msa_pckev_b((v16i8)src1, (v16i8)src0);
+    vec1 = (v16u8)__msa_pckev_b((v16i8)src3, (v16i8)src2);
+    vec2 = (v16u8)__msa_pckod_b((v16i8)src1, (v16i8)src0);
+    vec3 = (v16u8)__msa_pckod_b((v16i8)src3, (v16i8)src2);
+    reg0 = (v8u16)__msa_ilvev_b(zero, (v16i8)vec0);
+    reg1 = (v8u16)__msa_ilvev_b(zero, (v16i8)vec1);
+    reg2 = (v8u16)__msa_ilvev_b(zero, (v16i8)vec2);
+    reg3 = (v8u16)__msa_ilvev_b(zero, (v16i8)vec3);
+    reg4 = (v8u16)__msa_ilvod_b(zero, (v16i8)vec0);
+    reg5 = (v8u16)__msa_ilvod_b(zero, (v16i8)vec1);
+    reg0 *= const_0x19;
+    reg1 *= const_0x19;
+    reg2 *= const_0x81;
+    reg3 *= const_0x81;
+    reg4 *= const_0x42;
+    reg5 *= const_0x42;
+    reg0 += reg2;
+    reg1 += reg3;
+    reg0 += reg4;
+    reg1 += reg5;
+    reg0 += const_0x1080;
+    reg1 += const_0x1080;
+    reg0 = (v8u16)__msa_srai_h((v8i16)reg0, 8);
+    reg1 = (v8u16)__msa_srai_h((v8i16)reg1, 8);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)reg1, (v16i8)reg0);
+    ST_UB(dst0, dst_y);
+    src_argb0 += 64;
+    dst_y += 16;
+  }
+}
+
+void ARGBToUVRow_MSA(const uint8_t* src_argb0,
+                     int src_stride_argb,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width) {
+  int x;
+  const uint8_t* src_argb0_next = src_argb0 + src_stride_argb;
+  v16u8 src0, src1, src2, src3, src4, src5, src6, src7;
+  v16u8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7, vec8, vec9;
+  v8u16 reg0, reg1, reg2, reg3, reg4, reg5, reg6, reg7, reg8, reg9;
+  v16u8 dst0, dst1;
+  v8u16 const_0x70 = (v8u16)__msa_ldi_h(0x70);
+  v8u16 const_0x4A = (v8u16)__msa_ldi_h(0x4A);
+  v8u16 const_0x26 = (v8u16)__msa_ldi_h(0x26);
+  v8u16 const_0x5E = (v8u16)__msa_ldi_h(0x5E);
+  v8u16 const_0x12 = (v8u16)__msa_ldi_h(0x12);
+  v8u16 const_0x8080 = (v8u16)__msa_fill_h(0x8080);
+
+  for (x = 0; x < width; x += 32) {
+    src0 = (v16u8)__msa_ld_b((v16u8*)src_argb0, 0);
+    src1 = (v16u8)__msa_ld_b((v16u8*)src_argb0, 16);
+    src2 = (v16u8)__msa_ld_b((v16u8*)src_argb0, 32);
+    src3 = (v16u8)__msa_ld_b((v16u8*)src_argb0, 48);
+    src4 = (v16u8)__msa_ld_b((v16u8*)src_argb0, 64);
+    src5 = (v16u8)__msa_ld_b((v16u8*)src_argb0, 80);
+    src6 = (v16u8)__msa_ld_b((v16u8*)src_argb0, 96);
+    src7 = (v16u8)__msa_ld_b((v16u8*)src_argb0, 112);
+    vec0 = (v16u8)__msa_pckev_b((v16i8)src1, (v16i8)src0);
+    vec1 = (v16u8)__msa_pckev_b((v16i8)src3, (v16i8)src2);
+    vec2 = (v16u8)__msa_pckev_b((v16i8)src5, (v16i8)src4);
+    vec3 = (v16u8)__msa_pckev_b((v16i8)src7, (v16i8)src6);
+    vec4 = (v16u8)__msa_pckod_b((v16i8)src1, (v16i8)src0);
+    vec5 = (v16u8)__msa_pckod_b((v16i8)src3, (v16i8)src2);
+    vec6 = (v16u8)__msa_pckod_b((v16i8)src5, (v16i8)src4);
+    vec7 = (v16u8)__msa_pckod_b((v16i8)src7, (v16i8)src6);
+    vec8 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    vec9 = (v16u8)__msa_pckev_b((v16i8)vec3, (v16i8)vec2);
+    vec4 = (v16u8)__msa_pckev_b((v16i8)vec5, (v16i8)vec4);
+    vec5 = (v16u8)__msa_pckev_b((v16i8)vec7, (v16i8)vec6);
+    vec0 = (v16u8)__msa_pckod_b((v16i8)vec1, (v16i8)vec0);
+    vec1 = (v16u8)__msa_pckod_b((v16i8)vec3, (v16i8)vec2);
+    reg0 = __msa_hadd_u_h(vec8, vec8);
+    reg1 = __msa_hadd_u_h(vec9, vec9);
+    reg2 = __msa_hadd_u_h(vec4, vec4);
+    reg3 = __msa_hadd_u_h(vec5, vec5);
+    reg4 = __msa_hadd_u_h(vec0, vec0);
+    reg5 = __msa_hadd_u_h(vec1, vec1);
+    src0 = (v16u8)__msa_ld_b((v16u8*)src_argb0_next, 0);
+    src1 = (v16u8)__msa_ld_b((v16u8*)src_argb0_next, 16);
+    src2 = (v16u8)__msa_ld_b((v16u8*)src_argb0_next, 32);
+    src3 = (v16u8)__msa_ld_b((v16u8*)src_argb0_next, 48);
+    src4 = (v16u8)__msa_ld_b((v16u8*)src_argb0_next, 64);
+    src5 = (v16u8)__msa_ld_b((v16u8*)src_argb0_next, 80);
+    src6 = (v16u8)__msa_ld_b((v16u8*)src_argb0_next, 96);
+    src7 = (v16u8)__msa_ld_b((v16u8*)src_argb0_next, 112);
+    vec0 = (v16u8)__msa_pckev_b((v16i8)src1, (v16i8)src0);
+    vec1 = (v16u8)__msa_pckev_b((v16i8)src3, (v16i8)src2);
+    vec2 = (v16u8)__msa_pckev_b((v16i8)src5, (v16i8)src4);
+    vec3 = (v16u8)__msa_pckev_b((v16i8)src7, (v16i8)src6);
+    vec4 = (v16u8)__msa_pckod_b((v16i8)src1, (v16i8)src0);
+    vec5 = (v16u8)__msa_pckod_b((v16i8)src3, (v16i8)src2);
+    vec6 = (v16u8)__msa_pckod_b((v16i8)src5, (v16i8)src4);
+    vec7 = (v16u8)__msa_pckod_b((v16i8)src7, (v16i8)src6);
+    vec8 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    vec9 = (v16u8)__msa_pckev_b((v16i8)vec3, (v16i8)vec2);
+    vec4 = (v16u8)__msa_pckev_b((v16i8)vec5, (v16i8)vec4);
+    vec5 = (v16u8)__msa_pckev_b((v16i8)vec7, (v16i8)vec6);
+    vec0 = (v16u8)__msa_pckod_b((v16i8)vec1, (v16i8)vec0);
+    vec1 = (v16u8)__msa_pckod_b((v16i8)vec3, (v16i8)vec2);
+    reg0 += __msa_hadd_u_h(vec8, vec8);
+    reg1 += __msa_hadd_u_h(vec9, vec9);
+    reg2 += __msa_hadd_u_h(vec4, vec4);
+    reg3 += __msa_hadd_u_h(vec5, vec5);
+    reg4 += __msa_hadd_u_h(vec0, vec0);
+    reg5 += __msa_hadd_u_h(vec1, vec1);
+    reg0 = (v8u16)__msa_srai_h((v8i16)reg0, 2);
+    reg1 = (v8u16)__msa_srai_h((v8i16)reg1, 2);
+    reg2 = (v8u16)__msa_srai_h((v8i16)reg2, 2);
+    reg3 = (v8u16)__msa_srai_h((v8i16)reg3, 2);
+    reg4 = (v8u16)__msa_srai_h((v8i16)reg4, 2);
+    reg5 = (v8u16)__msa_srai_h((v8i16)reg5, 2);
+    reg6 = reg0 * const_0x70;
+    reg7 = reg1 * const_0x70;
+    reg8 = reg2 * const_0x4A;
+    reg9 = reg3 * const_0x4A;
+    reg6 += const_0x8080;
+    reg7 += const_0x8080;
+    reg8 += reg4 * const_0x26;
+    reg9 += reg5 * const_0x26;
+    reg0 *= const_0x12;
+    reg1 *= const_0x12;
+    reg2 *= const_0x5E;
+    reg3 *= const_0x5E;
+    reg4 *= const_0x70;
+    reg5 *= const_0x70;
+    reg2 += reg0;
+    reg3 += reg1;
+    reg4 += const_0x8080;
+    reg5 += const_0x8080;
+    reg6 -= reg8;
+    reg7 -= reg9;
+    reg4 -= reg2;
+    reg5 -= reg3;
+    reg6 = (v8u16)__msa_srai_h((v8i16)reg6, 8);
+    reg7 = (v8u16)__msa_srai_h((v8i16)reg7, 8);
+    reg4 = (v8u16)__msa_srai_h((v8i16)reg4, 8);
+    reg5 = (v8u16)__msa_srai_h((v8i16)reg5, 8);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)reg7, (v16i8)reg6);
+    dst1 = (v16u8)__msa_pckev_b((v16i8)reg5, (v16i8)reg4);
+    ST_UB(dst0, dst_u);
+    ST_UB(dst1, dst_v);
+    src_argb0 += 128;
+    src_argb0_next += 128;
+    dst_u += 16;
+    dst_v += 16;
+  }
+}
+
+void ARGBToRGB24Row_MSA(const uint8_t* src_argb, uint8_t* dst_rgb, int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0, dst1, dst2;
+  v16i8 shuffler0 = {0, 1, 2, 4, 5, 6, 8, 9, 10, 12, 13, 14, 16, 17, 18, 20};
+  v16i8 shuffler1 = {5,  6,  8,  9,  10, 12, 13, 14,
+                     16, 17, 18, 20, 21, 22, 24, 25};
+  v16i8 shuffler2 = {10, 12, 13, 14, 16, 17, 18, 20,
+                     21, 22, 24, 25, 26, 28, 29, 30};
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 32);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 48);
+    dst0 = (v16u8)__msa_vshf_b(shuffler0, (v16i8)src1, (v16i8)src0);
+    dst1 = (v16u8)__msa_vshf_b(shuffler1, (v16i8)src2, (v16i8)src1);
+    dst2 = (v16u8)__msa_vshf_b(shuffler2, (v16i8)src3, (v16i8)src2);
+    ST_UB2(dst0, dst1, dst_rgb, 16);
+    ST_UB(dst2, (dst_rgb + 32));
+    src_argb += 64;
+    dst_rgb += 48;
+  }
+}
+
+void ARGBToRAWRow_MSA(const uint8_t* src_argb, uint8_t* dst_rgb, int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0, dst1, dst2;
+  v16i8 shuffler0 = {2, 1, 0, 6, 5, 4, 10, 9, 8, 14, 13, 12, 18, 17, 16, 22};
+  v16i8 shuffler1 = {5,  4,  10, 9,  8,  14, 13, 12,
+                     18, 17, 16, 22, 21, 20, 26, 25};
+  v16i8 shuffler2 = {8,  14, 13, 12, 18, 17, 16, 22,
+                     21, 20, 26, 25, 24, 30, 29, 28};
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 32);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 48);
+    dst0 = (v16u8)__msa_vshf_b(shuffler0, (v16i8)src1, (v16i8)src0);
+    dst1 = (v16u8)__msa_vshf_b(shuffler1, (v16i8)src2, (v16i8)src1);
+    dst2 = (v16u8)__msa_vshf_b(shuffler2, (v16i8)src3, (v16i8)src2);
+    ST_UB2(dst0, dst1, dst_rgb, 16);
+    ST_UB(dst2, (dst_rgb + 32));
+    src_argb += 64;
+    dst_rgb += 48;
+  }
+}
+
+void ARGBToRGB565Row_MSA(const uint8_t* src_argb, uint8_t* dst_rgb, int width) {
+  int x;
+  v16u8 src0, src1, dst0;
+  v16u8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7;
+  v16i8 zero = {0};
+
+  for (x = 0; x < width; x += 8) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 16);
+    vec0 = (v16u8)__msa_srai_b((v16i8)src0, 3);
+    vec1 = (v16u8)__msa_slli_b((v16i8)src0, 3);
+    vec2 = (v16u8)__msa_srai_b((v16i8)src0, 5);
+    vec4 = (v16u8)__msa_srai_b((v16i8)src1, 3);
+    vec5 = (v16u8)__msa_slli_b((v16i8)src1, 3);
+    vec6 = (v16u8)__msa_srai_b((v16i8)src1, 5);
+    vec1 = (v16u8)__msa_sldi_b(zero, (v16i8)vec1, 1);
+    vec2 = (v16u8)__msa_sldi_b(zero, (v16i8)vec2, 1);
+    vec5 = (v16u8)__msa_sldi_b(zero, (v16i8)vec5, 1);
+    vec6 = (v16u8)__msa_sldi_b(zero, (v16i8)vec6, 1);
+    vec3 = (v16u8)__msa_sldi_b(zero, (v16i8)src0, 2);
+    vec7 = (v16u8)__msa_sldi_b(zero, (v16i8)src1, 2);
+    vec0 = __msa_binsli_b(vec0, vec1, 2);
+    vec1 = __msa_binsli_b(vec2, vec3, 4);
+    vec4 = __msa_binsli_b(vec4, vec5, 2);
+    vec5 = __msa_binsli_b(vec6, vec7, 4);
+    vec0 = (v16u8)__msa_ilvev_b((v16i8)vec1, (v16i8)vec0);
+    vec4 = (v16u8)__msa_ilvev_b((v16i8)vec5, (v16i8)vec4);
+    dst0 = (v16u8)__msa_pckev_h((v8i16)vec4, (v8i16)vec0);
+    ST_UB(dst0, dst_rgb);
+    src_argb += 32;
+    dst_rgb += 16;
+  }
+}
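The shifts and `binsli` bit-inserts above vectorize the standard ARGB→RGB565 pack: keep the top 5/6/5 bits of blue, green, and red, with blue in the low bits of the little-endian 16-bit pixel. A scalar sketch of the same per-pixel math (the helper name is illustrative, not part of this patch):

```c
#include <stdint.h>

/* Scalar equivalent of ARGBToRGB565Row: pack little-endian ARGB
 * (memory order B,G,R,A) into RGB565 with blue in the low 5 bits. */
static void argb_to_rgb565_row_c(const uint8_t* src_argb,
                                 uint8_t* dst_rgb,
                                 int width) {
  for (int x = 0; x < width; ++x) {
    uint16_t b = src_argb[0] >> 3; /* keep top 5 bits of blue  */
    uint16_t g = src_argb[1] >> 2; /* keep top 6 bits of green */
    uint16_t r = src_argb[2] >> 3; /* keep top 5 bits of red   */
    uint16_t pixel = (uint16_t)(b | (g << 5) | (r << 11));
    dst_rgb[0] = (uint8_t)(pixel & 0xFF);
    dst_rgb[1] = (uint8_t)(pixel >> 8);
    src_argb += 4;
    dst_rgb += 2;
  }
}
```

The MSA version produces the same layout 8 pixels at a time by building the three fields in byte lanes and interleaving them.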
+
+void ARGBToARGB1555Row_MSA(const uint8_t* src_argb,
+                           uint8_t* dst_rgb,
+                           int width) {
+  int x;
+  v16u8 src0, src1, dst0;
+  v16u8 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7, vec8, vec9;
+  v16i8 zero = {0};
+
+  for (x = 0; x < width; x += 8) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 16);
+    vec0 = (v16u8)__msa_srai_b((v16i8)src0, 3);
+    vec1 = (v16u8)__msa_slli_b((v16i8)src0, 2);
+    vec2 = (v16u8)__msa_srai_b((v16i8)vec0, 3);
+    vec1 = (v16u8)__msa_sldi_b(zero, (v16i8)vec1, 1);
+    vec2 = (v16u8)__msa_sldi_b(zero, (v16i8)vec2, 1);
+    vec3 = (v16u8)__msa_srai_b((v16i8)src0, 1);
+    vec5 = (v16u8)__msa_srai_b((v16i8)src1, 3);
+    vec6 = (v16u8)__msa_slli_b((v16i8)src1, 2);
+    vec7 = (v16u8)__msa_srai_b((v16i8)vec5, 3);
+    vec6 = (v16u8)__msa_sldi_b(zero, (v16i8)vec6, 1);
+    vec7 = (v16u8)__msa_sldi_b(zero, (v16i8)vec7, 1);
+    vec8 = (v16u8)__msa_srai_b((v16i8)src1, 1);
+    vec3 = (v16u8)__msa_sldi_b(zero, (v16i8)vec3, 2);
+    vec8 = (v16u8)__msa_sldi_b(zero, (v16i8)vec8, 2);
+    vec4 = (v16u8)__msa_sldi_b(zero, (v16i8)src0, 3);
+    vec9 = (v16u8)__msa_sldi_b(zero, (v16i8)src1, 3);
+    vec0 = __msa_binsli_b(vec0, vec1, 2);
+    vec5 = __msa_binsli_b(vec5, vec6, 2);
+    vec1 = __msa_binsli_b(vec2, vec3, 5);
+    vec6 = __msa_binsli_b(vec7, vec8, 5);
+    vec1 = __msa_binsli_b(vec1, vec4, 0);
+    vec6 = __msa_binsli_b(vec6, vec9, 0);
+    vec0 = (v16u8)__msa_ilvev_b((v16i8)vec1, (v16i8)vec0);
+    vec1 = (v16u8)__msa_ilvev_b((v16i8)vec6, (v16i8)vec5);
+    dst0 = (v16u8)__msa_pckev_h((v8i16)vec1, (v8i16)vec0);
+    ST_UB(dst0, dst_rgb);
+    src_argb += 32;
+    dst_rgb += 16;
+  }
+}
+
+void ARGBToARGB4444Row_MSA(const uint8_t* src_argb,
+                           uint8_t* dst_rgb,
+                           int width) {
+  int x;
+  v16u8 src0, src1;
+  v16u8 vec0, vec1;
+  v16u8 dst0;
+  v16i8 zero = {0};
+
+  for (x = 0; x < width; x += 8) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 16);
+    vec0 = (v16u8)__msa_srai_b((v16i8)src0, 4);
+    vec1 = (v16u8)__msa_srai_b((v16i8)src1, 4);
+    src0 = (v16u8)__msa_sldi_b(zero, (v16i8)src0, 1);
+    src1 = (v16u8)__msa_sldi_b(zero, (v16i8)src1, 1);
+    vec0 = __msa_binsli_b(vec0, src0, 3);
+    vec1 = __msa_binsli_b(vec1, src1, 3);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    ST_UB(dst0, dst_rgb);
+    src_argb += 32;
+    dst_rgb += 16;
+  }
+}
+
+void ARGBToUV444Row_MSA(const uint8_t* src_argb,
+                        uint8_t* dst_u,
+                        uint8_t* dst_v,
+                        int32_t width) {
+  int32_t x;
+  v16u8 src0, src1, src2, src3, reg0, reg1, reg2, reg3, dst0, dst1;
+  v8u16 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7;
+  v8u16 vec8, vec9, vec10, vec11;
+  v8u16 const_112 = (v8u16)__msa_ldi_h(112);
+  v8u16 const_74 = (v8u16)__msa_ldi_h(74);
+  v8u16 const_38 = (v8u16)__msa_ldi_h(38);
+  v8u16 const_94 = (v8u16)__msa_ldi_h(94);
+  v8u16 const_18 = (v8u16)__msa_ldi_h(18);
+  v8u16 const_32896 = (v8u16)__msa_fill_h(32896);
+  v16i8 zero = {0};
+
+  for (x = width; x > 0; x -= 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 32);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 48);
+    reg0 = (v16u8)__msa_pckev_b((v16i8)src1, (v16i8)src0);
+    reg1 = (v16u8)__msa_pckev_b((v16i8)src3, (v16i8)src2);
+    reg2 = (v16u8)__msa_pckod_b((v16i8)src1, (v16i8)src0);
+    reg3 = (v16u8)__msa_pckod_b((v16i8)src3, (v16i8)src2);
+    src0 = (v16u8)__msa_pckev_b((v16i8)reg1, (v16i8)reg0);
+    src1 = (v16u8)__msa_pckev_b((v16i8)reg3, (v16i8)reg2);
+    src2 = (v16u8)__msa_pckod_b((v16i8)reg1, (v16i8)reg0);
+    vec0 = (v8u16)__msa_ilvr_b(zero, (v16i8)src0);
+    vec1 = (v8u16)__msa_ilvl_b(zero, (v16i8)src0);
+    vec2 = (v8u16)__msa_ilvr_b(zero, (v16i8)src1);
+    vec3 = (v8u16)__msa_ilvl_b(zero, (v16i8)src1);
+    vec4 = (v8u16)__msa_ilvr_b(zero, (v16i8)src2);
+    vec5 = (v8u16)__msa_ilvl_b(zero, (v16i8)src2);
+    vec10 = vec0 * const_18;
+    vec11 = vec1 * const_18;
+    vec8 = vec2 * const_94;
+    vec9 = vec3 * const_94;
+    vec6 = vec4 * const_112;
+    vec7 = vec5 * const_112;
+    vec0 *= const_112;
+    vec1 *= const_112;
+    vec2 *= const_74;
+    vec3 *= const_74;
+    vec4 *= const_38;
+    vec5 *= const_38;
+    vec8 += vec10;
+    vec9 += vec11;
+    vec6 += const_32896;
+    vec7 += const_32896;
+    vec0 += const_32896;
+    vec1 += const_32896;
+    vec2 += vec4;
+    vec3 += vec5;
+    vec0 -= vec2;
+    vec1 -= vec3;
+    vec6 -= vec8;
+    vec7 -= vec9;
+    vec0 = (v8u16)__msa_srai_h((v8i16)vec0, 8);
+    vec1 = (v8u16)__msa_srai_h((v8i16)vec1, 8);
+    vec6 = (v8u16)__msa_srai_h((v8i16)vec6, 8);
+    vec7 = (v8u16)__msa_srai_h((v8i16)vec7, 8);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    dst1 = (v16u8)__msa_pckev_b((v16i8)vec7, (v16i8)vec6);
+    ST_UB(dst0, dst_u);
+    ST_UB(dst1, dst_v);
+    src_argb += 64;
+    dst_u += 16;
+    dst_v += 16;
+  }
+}
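The constants 112, 74, 38, 94, and 18 above are libyuv's fixed-point BT.601 chroma coefficients, and `const_32896` is the combined `0x8080` rounding-plus-128 bias. A scalar sketch of the per-pixel U/V math these vectors implement (helper name illustrative only):

```c
#include <stdint.h>

/* Scalar equivalent of ARGBToUV444Row: one U/V sample per pixel.
 * Memory order of src_argb is B,G,R,A; 0x8080 = 128 bias << 8 plus
 * 0x80 rounding, matching const_32896 in the MSA code. */
static void argb_to_uv444_row_c(const uint8_t* src_argb,
                                uint8_t* dst_u,
                                uint8_t* dst_v,
                                int width) {
  for (int x = 0; x < width; ++x) {
    int b = src_argb[0], g = src_argb[1], r = src_argb[2];
    dst_u[x] = (uint8_t)((112 * b - 74 * g - 38 * r + 0x8080) >> 8);
    dst_v[x] = (uint8_t)((112 * r - 94 * g - 18 * b + 0x8080) >> 8);
    src_argb += 4;
  }
}
```

Note 112 - 74 - 38 = 0 and 112 - 94 - 18 = 0, so any gray input maps to the neutral chroma value 128.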
+
+void ARGBMultiplyRow_MSA(const uint8_t* src_argb0,
+                         const uint8_t* src_argb1,
+                         uint8_t* dst_argb,
+                         int width) {
+  int x;
+  v16u8 src0, src1, dst0;
+  v8u16 vec0, vec1, vec2, vec3;
+  v4u32 reg0, reg1, reg2, reg3;
+  v8i16 zero = {0};
+
+  for (x = 0; x < width; x += 4) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb1, 0);
+    vec0 = (v8u16)__msa_ilvr_b((v16i8)src0, (v16i8)src0);
+    vec1 = (v8u16)__msa_ilvl_b((v16i8)src0, (v16i8)src0);
+    vec2 = (v8u16)__msa_ilvr_b((v16i8)zero, (v16i8)src1);
+    vec3 = (v8u16)__msa_ilvl_b((v16i8)zero, (v16i8)src1);
+    reg0 = (v4u32)__msa_ilvr_h(zero, (v8i16)vec0);
+    reg1 = (v4u32)__msa_ilvl_h(zero, (v8i16)vec0);
+    reg2 = (v4u32)__msa_ilvr_h(zero, (v8i16)vec1);
+    reg3 = (v4u32)__msa_ilvl_h(zero, (v8i16)vec1);
+    reg0 *= (v4u32)__msa_ilvr_h(zero, (v8i16)vec2);
+    reg1 *= (v4u32)__msa_ilvl_h(zero, (v8i16)vec2);
+    reg2 *= (v4u32)__msa_ilvr_h(zero, (v8i16)vec3);
+    reg3 *= (v4u32)__msa_ilvl_h(zero, (v8i16)vec3);
+    reg0 = (v4u32)__msa_srai_w((v4i32)reg0, 16);
+    reg1 = (v4u32)__msa_srai_w((v4i32)reg1, 16);
+    reg2 = (v4u32)__msa_srai_w((v4i32)reg2, 16);
+    reg3 = (v4u32)__msa_srai_w((v4i32)reg3, 16);
+    vec0 = (v8u16)__msa_pckev_h((v8i16)reg1, (v8i16)reg0);
+    vec1 = (v8u16)__msa_pckev_h((v8i16)reg3, (v8i16)reg2);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    ST_UB(dst0, dst_argb);
+    src_argb0 += 16;
+    src_argb1 += 16;
+    dst_argb += 16;
+  }
+}
+
+void ARGBAddRow_MSA(const uint8_t* src_argb0,
+                    const uint8_t* src_argb1,
+                    uint8_t* dst_argb,
+                    int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0, dst1;
+
+  for (x = 0; x < width; x += 8) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_argb1, 0);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)src_argb1, 16);
+    dst0 = __msa_adds_u_b(src0, src2);
+    dst1 = __msa_adds_u_b(src1, src3);
+    ST_UB2(dst0, dst1, dst_argb, 16);
+    src_argb0 += 32;
+    src_argb1 += 32;
+    dst_argb += 32;
+  }
+}
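`__msa_adds_u_b` is an unsigned saturating byte add, so the row never wraps past 255. The scalar behavior it vectorizes, for reference (helper name illustrative only):

```c
#include <stdint.h>

/* Scalar equivalent of ARGBAddRow: per-byte saturating add of two
 * ARGB rows, clamping each channel (and alpha) at 255. */
static void argb_add_row_c(const uint8_t* src_argb0,
                           const uint8_t* src_argb1,
                           uint8_t* dst_argb,
                           int width) {
  for (int i = 0; i < width * 4; ++i) {
    int sum = src_argb0[i] + src_argb1[i];
    dst_argb[i] = (uint8_t)(sum > 255 ? 255 : sum); /* saturate */
  }
}
```

`ARGBSubtractRow_MSA` below is the mirror image, with `__msa_subs_u_b` clamping at 0 instead.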
+
+void ARGBSubtractRow_MSA(const uint8_t* src_argb0,
+                         const uint8_t* src_argb1,
+                         uint8_t* dst_argb,
+                         int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0, dst1;
+
+  for (x = 0; x < width; x += 8) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_argb1, 0);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)src_argb1, 16);
+    dst0 = __msa_subs_u_b(src0, src2);
+    dst1 = __msa_subs_u_b(src1, src3);
+    ST_UB2(dst0, dst1, dst_argb, 16);
+    src_argb0 += 32;
+    src_argb1 += 32;
+    dst_argb += 32;
+  }
+}
+
+void ARGBAttenuateRow_MSA(const uint8_t* src_argb,
+                          uint8_t* dst_argb,
+                          int width) {
+  int x;
+  v16u8 src0, src1, dst0, dst1;
+  v8u16 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7, vec8, vec9;
+  v4u32 reg0, reg1, reg2, reg3, reg4, reg5, reg6, reg7;
+  v8i16 zero = {0};
+  v16u8 mask = {0, 0, 0, 255, 0, 0, 0, 255, 0, 0, 0, 255, 0, 0, 0, 255};
+
+  for (x = 0; x < width; x += 8) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 16);
+    vec0 = (v8u16)__msa_ilvr_b((v16i8)src0, (v16i8)src0);
+    vec1 = (v8u16)__msa_ilvl_b((v16i8)src0, (v16i8)src0);
+    vec2 = (v8u16)__msa_ilvr_b((v16i8)src1, (v16i8)src1);
+    vec3 = (v8u16)__msa_ilvl_b((v16i8)src1, (v16i8)src1);
+    vec4 = (v8u16)__msa_fill_h(vec0[3]);
+    vec5 = (v8u16)__msa_fill_h(vec0[7]);
+    vec6 = (v8u16)__msa_fill_h(vec1[3]);
+    vec7 = (v8u16)__msa_fill_h(vec1[7]);
+    vec4 = (v8u16)__msa_pckev_d((v2i64)vec5, (v2i64)vec4);
+    vec5 = (v8u16)__msa_pckev_d((v2i64)vec7, (v2i64)vec6);
+    vec6 = (v8u16)__msa_fill_h(vec2[3]);
+    vec7 = (v8u16)__msa_fill_h(vec2[7]);
+    vec8 = (v8u16)__msa_fill_h(vec3[3]);
+    vec9 = (v8u16)__msa_fill_h(vec3[7]);
+    vec6 = (v8u16)__msa_pckev_d((v2i64)vec7, (v2i64)vec6);
+    vec7 = (v8u16)__msa_pckev_d((v2i64)vec9, (v2i64)vec8);
+    reg0 = (v4u32)__msa_ilvr_h(zero, (v8i16)vec4);
+    reg1 = (v4u32)__msa_ilvl_h(zero, (v8i16)vec4);
+    reg2 = (v4u32)__msa_ilvr_h(zero, (v8i16)vec5);
+    reg3 = (v4u32)__msa_ilvl_h(zero, (v8i16)vec5);
+    reg4 = (v4u32)__msa_ilvr_h(zero, (v8i16)vec6);
+    reg5 = (v4u32)__msa_ilvl_h(zero, (v8i16)vec6);
+    reg6 = (v4u32)__msa_ilvr_h(zero, (v8i16)vec7);
+    reg7 = (v4u32)__msa_ilvl_h(zero, (v8i16)vec7);
+    reg0 *= (v4u32)__msa_ilvr_h(zero, (v8i16)vec0);
+    reg1 *= (v4u32)__msa_ilvl_h(zero, (v8i16)vec0);
+    reg2 *= (v4u32)__msa_ilvr_h(zero, (v8i16)vec1);
+    reg3 *= (v4u32)__msa_ilvl_h(zero, (v8i16)vec1);
+    reg4 *= (v4u32)__msa_ilvr_h(zero, (v8i16)vec2);
+    reg5 *= (v4u32)__msa_ilvl_h(zero, (v8i16)vec2);
+    reg6 *= (v4u32)__msa_ilvr_h(zero, (v8i16)vec3);
+    reg7 *= (v4u32)__msa_ilvl_h(zero, (v8i16)vec3);
+    reg0 = (v4u32)__msa_srai_w((v4i32)reg0, 24);
+    reg1 = (v4u32)__msa_srai_w((v4i32)reg1, 24);
+    reg2 = (v4u32)__msa_srai_w((v4i32)reg2, 24);
+    reg3 = (v4u32)__msa_srai_w((v4i32)reg3, 24);
+    reg4 = (v4u32)__msa_srai_w((v4i32)reg4, 24);
+    reg5 = (v4u32)__msa_srai_w((v4i32)reg5, 24);
+    reg6 = (v4u32)__msa_srai_w((v4i32)reg6, 24);
+    reg7 = (v4u32)__msa_srai_w((v4i32)reg7, 24);
+    vec0 = (v8u16)__msa_pckev_h((v8i16)reg1, (v8i16)reg0);
+    vec1 = (v8u16)__msa_pckev_h((v8i16)reg3, (v8i16)reg2);
+    vec2 = (v8u16)__msa_pckev_h((v8i16)reg5, (v8i16)reg4);
+    vec3 = (v8u16)__msa_pckev_h((v8i16)reg7, (v8i16)reg6);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    dst1 = (v16u8)__msa_pckev_b((v16i8)vec3, (v16i8)vec2);
+    dst0 = __msa_bmnz_v(dst0, src0, mask);
+    dst1 = __msa_bmnz_v(dst1, src1, mask);
+    ST_UB2(dst0, dst1, dst_argb, 16);
+    src_argb += 32;
+    dst_argb += 32;
+  }
+}
+
+void ARGBToRGB565DitherRow_MSA(const uint8_t* src_argb,
+                               uint8_t* dst_rgb,
+                               uint32_t dither4,
+                               int width) {
+  int x;
+  v16u8 src0, src1, dst0, vec0, vec1;
+  v8i16 vec_d0;
+  v8i16 reg0, reg1, reg2;
+  v16i8 zero = {0};
+  v8i16 max = __msa_ldi_h(0xFF);
+
+  vec_d0 = (v8i16)__msa_fill_w(dither4);
+  vec_d0 = (v8i16)__msa_ilvr_b(zero, (v16i8)vec_d0);
+
+  for (x = 0; x < width; x += 8) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 16);
+    vec0 = (v16u8)__msa_pckev_b((v16i8)src1, (v16i8)src0);
+    vec1 = (v16u8)__msa_pckod_b((v16i8)src1, (v16i8)src0);
+    reg0 = (v8i16)__msa_ilvev_b(zero, (v16i8)vec0);
+    reg1 = (v8i16)__msa_ilvev_b(zero, (v16i8)vec1);
+    reg2 = (v8i16)__msa_ilvod_b(zero, (v16i8)vec0);
+    reg0 += vec_d0;
+    reg1 += vec_d0;
+    reg2 += vec_d0;
+    reg0 = __msa_maxi_s_h((v8i16)reg0, 0);
+    reg1 = __msa_maxi_s_h((v8i16)reg1, 0);
+    reg2 = __msa_maxi_s_h((v8i16)reg2, 0);
+    reg0 = __msa_min_s_h((v8i16)max, (v8i16)reg0);
+    reg1 = __msa_min_s_h((v8i16)max, (v8i16)reg1);
+    reg2 = __msa_min_s_h((v8i16)max, (v8i16)reg2);
+    reg0 = __msa_srai_h(reg0, 3);
+    reg2 = __msa_srai_h(reg2, 3);
+    reg1 = __msa_srai_h(reg1, 2);
+    reg2 = __msa_slli_h(reg2, 11);
+    reg1 = __msa_slli_h(reg1, 5);
+    reg0 |= reg1;
+    dst0 = (v16u8)(reg0 | reg2);
+    ST_UB(dst0, dst_rgb);
+    src_argb += 32;
+    dst_rgb += 16;
+  }
+}
+
+void ARGBShuffleRow_MSA(const uint8_t* src_argb,
+                        uint8_t* dst_argb,
+                        const uint8_t* shuffler,
+                        int width) {
+  int x;
+  v16u8 src0, src1, dst0, dst1;
+  v16i8 vec0;
+  v16i8 shuffler_vec = {0, 0, 0, 0, 4, 4, 4, 4, 8, 8, 8, 8, 12, 12, 12, 12};
+  int32_t val = LW((int32_t*)shuffler);
+
+  vec0 = (v16i8)__msa_fill_w(val);
+  shuffler_vec += vec0;
+
+  for (x = 0; x < width; x += 8) {
+    src0 = (v16u8)__msa_ld_b((const v16u8*)src_argb, 0);
+    src1 = (v16u8)__msa_ld_b((const v16u8*)src_argb, 16);
+    dst0 = (v16u8)__msa_vshf_b(shuffler_vec, (v16i8)src0, (v16i8)src0);
+    dst1 = (v16u8)__msa_vshf_b(shuffler_vec, (v16i8)src1, (v16i8)src1);
+    ST_UB2(dst0, dst1, dst_argb, 16);
+    src_argb += 32;
+    dst_argb += 32;
+  }
+}
+
+void ARGBShadeRow_MSA(const uint8_t* src_argb,
+                      uint8_t* dst_argb,
+                      int width,
+                      uint32_t value) {
+  int x;
+  v16u8 src0, dst0;
+  v8u16 vec0, vec1;
+  v4u32 reg0, reg1, reg2, reg3, rgba_scale;
+  v8i16 zero = {0};
+
+  rgba_scale[0] = value;
+  rgba_scale = (v4u32)__msa_ilvr_b((v16i8)rgba_scale, (v16i8)rgba_scale);
+  rgba_scale = (v4u32)__msa_ilvr_h(zero, (v8i16)rgba_scale);
+
+  for (x = 0; x < width; x += 4) {
+    src0 = (v16u8)__msa_ld_b((const v16u8*)src_argb, 0);
+    vec0 = (v8u16)__msa_ilvr_b((v16i8)src0, (v16i8)src0);
+    vec1 = (v8u16)__msa_ilvl_b((v16i8)src0, (v16i8)src0);
+    reg0 = (v4u32)__msa_ilvr_h(zero, (v8i16)vec0);
+    reg1 = (v4u32)__msa_ilvl_h(zero, (v8i16)vec0);
+    reg2 = (v4u32)__msa_ilvr_h(zero, (v8i16)vec1);
+    reg3 = (v4u32)__msa_ilvl_h(zero, (v8i16)vec1);
+    reg0 *= rgba_scale;
+    reg1 *= rgba_scale;
+    reg2 *= rgba_scale;
+    reg3 *= rgba_scale;
+    reg0 = (v4u32)__msa_srai_w((v4i32)reg0, 24);
+    reg1 = (v4u32)__msa_srai_w((v4i32)reg1, 24);
+    reg2 = (v4u32)__msa_srai_w((v4i32)reg2, 24);
+    reg3 = (v4u32)__msa_srai_w((v4i32)reg3, 24);
+    vec0 = (v8u16)__msa_pckev_h((v8i16)reg1, (v8i16)reg0);
+    vec1 = (v8u16)__msa_pckev_h((v8i16)reg3, (v8i16)reg2);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    ST_UB(dst0, dst_argb);
+    src_argb += 16;
+    dst_argb += 16;
+  }
+}
+
+void ARGBGrayRow_MSA(const uint8_t* src_argb, uint8_t* dst_argb, int width) {
+  int x;
+  v16u8 src0, src1, vec0, vec1, dst0, dst1;
+  v8u16 reg0;
+  v16u8 const_0x26 = (v16u8)__msa_ldi_h(0x26);
+  v16u8 const_0x4B0F = (v16u8)__msa_fill_h(0x4B0F);
+
+  for (x = 0; x < width; x += 8) {
+    src0 = (v16u8)__msa_ld_b((const v16u8*)src_argb, 0);
+    src1 = (v16u8)__msa_ld_b((const v16u8*)src_argb, 16);
+    vec0 = (v16u8)__msa_pckev_h((v8i16)src1, (v8i16)src0);
+    vec1 = (v16u8)__msa_pckod_h((v8i16)src1, (v8i16)src0);
+    reg0 = __msa_dotp_u_h(vec0, const_0x4B0F);
+    reg0 = __msa_dpadd_u_h(reg0, vec1, const_0x26);
+    reg0 = (v8u16)__msa_srari_h((v8i16)reg0, 7);
+    vec0 = (v16u8)__msa_ilvev_b((v16i8)reg0, (v16i8)reg0);
+    vec1 = (v16u8)__msa_ilvod_b((v16i8)vec1, (v16i8)vec0);
+    dst0 = (v16u8)__msa_ilvr_b((v16i8)vec1, (v16i8)vec0);
+    dst1 = (v16u8)__msa_ilvl_b((v16i8)vec1, (v16i8)vec0);
+    ST_UB2(dst0, dst1, dst_argb, 16);
+    src_argb += 32;
+    dst_argb += 32;
+  }
+}
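The dot products above use `const_0x4B0F` (75 and 15 in adjacent byte lanes) and `const_0x26` (38), i.e. luma weights 15/75/38 that sum to 128, with `srari` providing the +64 rounding before the shift by 7. A scalar sketch of the same gray conversion (helper name illustrative only):

```c
#include <stdint.h>

/* Scalar equivalent of ARGBGrayRow: replace B,G,R with a weighted
 * luma (weights 15 + 75 + 38 = 128, so >>7 keeps full range) and
 * pass alpha through unchanged. */
static void argb_gray_row_c(const uint8_t* src_argb,
                            uint8_t* dst_argb,
                            int width) {
  for (int x = 0; x < width; ++x) {
    int b = src_argb[0], g = src_argb[1], r = src_argb[2];
    uint8_t y = (uint8_t)((15 * b + 75 * g + 38 * r + 64) >> 7);
    dst_argb[0] = dst_argb[1] = dst_argb[2] = y;
    dst_argb[3] = src_argb[3]; /* alpha preserved */
    src_argb += 4;
    dst_argb += 4;
  }
}
```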
+
+void ARGBSepiaRow_MSA(uint8_t* dst_argb, int width) {
+  int x;
+  v16u8 src0, src1, dst0, dst1, vec0, vec1, vec2, vec3, vec4, vec5;
+  v8u16 reg0, reg1, reg2;
+  v16u8 const_0x4411 = (v16u8)__msa_fill_h(0x4411);
+  v16u8 const_0x23 = (v16u8)__msa_ldi_h(0x23);
+  v16u8 const_0x5816 = (v16u8)__msa_fill_h(0x5816);
+  v16u8 const_0x2D = (v16u8)__msa_ldi_h(0x2D);
+  v16u8 const_0x6218 = (v16u8)__msa_fill_h(0x6218);
+  v16u8 const_0x32 = (v16u8)__msa_ldi_h(0x32);
+  v8u16 const_0xFF = (v8u16)__msa_ldi_h(0xFF);
+
+  for (x = 0; x < width; x += 8) {
+    src0 = (v16u8)__msa_ld_b((v16u8*)dst_argb, 0);
+    src1 = (v16u8)__msa_ld_b((v16u8*)dst_argb, 16);
+    vec0 = (v16u8)__msa_pckev_h((v8i16)src1, (v8i16)src0);
+    vec1 = (v16u8)__msa_pckod_h((v8i16)src1, (v8i16)src0);
+    vec3 = (v16u8)__msa_pckod_b((v16i8)vec1, (v16i8)vec1);
+    reg0 = (v8u16)__msa_dotp_u_h(vec0, const_0x4411);
+    reg1 = (v8u16)__msa_dotp_u_h(vec0, const_0x5816);
+    reg2 = (v8u16)__msa_dotp_u_h(vec0, const_0x6218);
+    reg0 = (v8u16)__msa_dpadd_u_h(reg0, vec1, const_0x23);
+    reg1 = (v8u16)__msa_dpadd_u_h(reg1, vec1, const_0x2D);
+    reg2 = (v8u16)__msa_dpadd_u_h(reg2, vec1, const_0x32);
+    reg0 = (v8u16)__msa_srai_h((v8i16)reg0, 7);
+    reg1 = (v8u16)__msa_srai_h((v8i16)reg1, 7);
+    reg2 = (v8u16)__msa_srai_h((v8i16)reg2, 7);
+    reg1 = (v8u16)__msa_min_u_h((v8u16)reg1, const_0xFF);
+    reg2 = (v8u16)__msa_min_u_h((v8u16)reg2, const_0xFF);
+    vec0 = (v16u8)__msa_pckev_b((v16i8)reg0, (v16i8)reg0);
+    vec1 = (v16u8)__msa_pckev_b((v16i8)reg1, (v16i8)reg1);
+    vec2 = (v16u8)__msa_pckev_b((v16i8)reg2, (v16i8)reg2);
+    vec4 = (v16u8)__msa_ilvr_b((v16i8)vec2, (v16i8)vec0);
+    vec5 = (v16u8)__msa_ilvr_b((v16i8)vec3, (v16i8)vec1);
+    dst0 = (v16u8)__msa_ilvr_b((v16i8)vec5, (v16i8)vec4);
+    dst1 = (v16u8)__msa_ilvl_b((v16i8)vec5, (v16i8)vec4);
+    ST_UB2(dst0, dst1, dst_argb, 16);
+    dst_argb += 32;
+  }
+}
+
+void ARGB4444ToARGBRow_MSA(const uint8_t* src_argb4444,
+                           uint8_t* dst_argb,
+                           int width) {
+  int x;
+  v16u8 src0, src1;
+  v8u16 vec0, vec1, vec2, vec3;
+  v16u8 dst0, dst1, dst2, dst3;
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16u8*)src_argb4444, 0);
+    src1 = (v16u8)__msa_ld_b((const v16u8*)src_argb4444, 16);
+    vec0 = (v8u16)__msa_andi_b(src0, 0x0F);
+    vec1 = (v8u16)__msa_andi_b(src1, 0x0F);
+    vec2 = (v8u16)__msa_andi_b(src0, 0xF0);
+    vec3 = (v8u16)__msa_andi_b(src1, 0xF0);
+    vec0 |= (v8u16)__msa_slli_b((v16i8)vec0, 4);
+    vec1 |= (v8u16)__msa_slli_b((v16i8)vec1, 4);
+    vec2 |= (v8u16)__msa_srli_b((v16i8)vec2, 4);
+    vec3 |= (v8u16)__msa_srli_b((v16i8)vec3, 4);
+    dst0 = (v16u8)__msa_ilvr_b((v16i8)vec2, (v16i8)vec0);
+    dst1 = (v16u8)__msa_ilvl_b((v16i8)vec2, (v16i8)vec0);
+    dst2 = (v16u8)__msa_ilvr_b((v16i8)vec3, (v16i8)vec1);
+    dst3 = (v16u8)__msa_ilvl_b((v16i8)vec3, (v16i8)vec1);
+    ST_UB4(dst0, dst1, dst2, dst3, dst_argb, 16);
+    src_argb4444 += 32;
+    dst_argb += 64;
+  }
+}
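The `andi`/`slli`/`srli` combination above expands each 4-bit field by replicating it into both nibbles, so 0xF maps to exactly 0xFF rather than 0xF0. A scalar sketch of this ARGB4444→ARGB expansion, assuming libyuv's little-endian layout (byte 0 holds green|blue nibbles, byte 1 holds alpha|red; helper name illustrative only):

```c
#include <stdint.h>

/* Scalar equivalent of ARGB4444ToARGBRow: expand each 4-bit channel
 * to 8 bits by nibble replication, e.g. 0xA -> 0xAA. */
static void argb4444_to_argb_row_c(const uint8_t* src_argb4444,
                                   uint8_t* dst_argb,
                                   int width) {
  for (int x = 0; x < width; ++x) {
    uint8_t lo = src_argb4444[0]; /* green nibble | blue nibble */
    uint8_t hi = src_argb4444[1]; /* alpha nibble | red nibble  */
    dst_argb[0] = (uint8_t)(((lo & 0x0F) << 4) | (lo & 0x0F)); /* B */
    dst_argb[1] = (uint8_t)((lo & 0xF0) | (lo >> 4));          /* G */
    dst_argb[2] = (uint8_t)(((hi & 0x0F) << 4) | (hi & 0x0F)); /* R */
    dst_argb[3] = (uint8_t)((hi & 0xF0) | (hi >> 4));          /* A */
    src_argb4444 += 2;
    dst_argb += 4;
  }
}
```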
+
+void ARGB1555ToARGBRow_MSA(const uint8_t* src_argb1555,
+                           uint8_t* dst_argb,
+                           int width) {
+  int x;
+  v8u16 src0, src1;
+  v8u16 vec0, vec1, vec2, vec3, vec4, vec5;
+  v16u8 reg0, reg1, reg2, reg3, reg4, reg5, reg6;
+  v16u8 dst0, dst1, dst2, dst3;
+  v8u16 const_0x1F = (v8u16)__msa_ldi_h(0x1F);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v8u16)__msa_ld_h((const v8u16*)src_argb1555, 0);
+    src1 = (v8u16)__msa_ld_h((const v8u16*)src_argb1555, 16);
+    vec0 = src0 & const_0x1F;
+    vec1 = src1 & const_0x1F;
+    src0 = (v8u16)__msa_srli_h((v8i16)src0, 5);
+    src1 = (v8u16)__msa_srli_h((v8i16)src1, 5);
+    vec2 = src0 & const_0x1F;
+    vec3 = src1 & const_0x1F;
+    src0 = (v8u16)__msa_srli_h((v8i16)src0, 5);
+    src1 = (v8u16)__msa_srli_h((v8i16)src1, 5);
+    vec4 = src0 & const_0x1F;
+    vec5 = src1 & const_0x1F;
+    src0 = (v8u16)__msa_srli_h((v8i16)src0, 5);
+    src1 = (v8u16)__msa_srli_h((v8i16)src1, 5);
+    reg0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    reg1 = (v16u8)__msa_pckev_b((v16i8)vec3, (v16i8)vec2);
+    reg2 = (v16u8)__msa_pckev_b((v16i8)vec5, (v16i8)vec4);
+    reg3 = (v16u8)__msa_pckev_b((v16i8)src1, (v16i8)src0);
+    reg4 = (v16u8)__msa_slli_b((v16i8)reg0, 3);
+    reg5 = (v16u8)__msa_slli_b((v16i8)reg1, 3);
+    reg6 = (v16u8)__msa_slli_b((v16i8)reg2, 3);
+    reg4 |= (v16u8)__msa_srai_b((v16i8)reg0, 2);
+    reg5 |= (v16u8)__msa_srai_b((v16i8)reg1, 2);
+    reg6 |= (v16u8)__msa_srai_b((v16i8)reg2, 2);
+    reg3 = -reg3;
+    reg0 = (v16u8)__msa_ilvr_b((v16i8)reg6, (v16i8)reg4);
+    reg1 = (v16u8)__msa_ilvl_b((v16i8)reg6, (v16i8)reg4);
+    reg2 = (v16u8)__msa_ilvr_b((v16i8)reg3, (v16i8)reg5);
+    reg3 = (v16u8)__msa_ilvl_b((v16i8)reg3, (v16i8)reg5);
+    dst0 = (v16u8)__msa_ilvr_b((v16i8)reg2, (v16i8)reg0);
+    dst1 = (v16u8)__msa_ilvl_b((v16i8)reg2, (v16i8)reg0);
+    dst2 = (v16u8)__msa_ilvr_b((v16i8)reg3, (v16i8)reg1);
+    dst3 = (v16u8)__msa_ilvl_b((v16i8)reg3, (v16i8)reg1);
+    ST_UB4(dst0, dst1, dst2, dst3, dst_argb, 16);
+    src_argb1555 += 32;
+    dst_argb += 64;
+  }
+}
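Here the 5-bit channels are widened by bit replication (`slli` by 3 OR'd with a shift right by 2), and the single alpha bit is turned into 0x00/0xFF via the `reg3 = -reg3` negation trick. A scalar sketch of the same ARGB1555→ARGB expansion (helper name illustrative only):

```c
#include <stdint.h>

/* Scalar equivalent of ARGB1555ToARGBRow: blue in the low 5 bits,
 * then green, red, and a 1-bit alpha in the top bit. */
static void argb1555_to_argb_row_c(const uint8_t* src_argb1555,
                                   uint8_t* dst_argb,
                                   int width) {
  for (int x = 0; x < width; ++x) {
    uint16_t p = (uint16_t)(src_argb1555[0] | (src_argb1555[1] << 8));
    uint8_t b5 = p & 0x1F;
    uint8_t g5 = (p >> 5) & 0x1F;
    uint8_t r5 = (p >> 10) & 0x1F;
    uint8_t a1 = (uint8_t)(p >> 15);
    /* Replicate the top bits into the low bits: 0x1F -> 0xFF exactly. */
    dst_argb[0] = (uint8_t)((b5 << 3) | (b5 >> 2));
    dst_argb[1] = (uint8_t)((g5 << 3) | (g5 >> 2));
    dst_argb[2] = (uint8_t)((r5 << 3) | (r5 >> 2));
    dst_argb[3] = (uint8_t)(a1 ? 255 : 0); /* the -reg3 trick, scalar form */
    src_argb1555 += 2;
    dst_argb += 4;
  }
}
```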
+
+void RGB565ToARGBRow_MSA(const uint8_t* src_rgb565,
+                         uint8_t* dst_argb,
+                         int width) {
+  int x;
+  v8u16 src0, src1, vec0, vec1, vec2, vec3, vec4, vec5;
+  v8u16 reg0, reg1, reg2, reg3, reg4, reg5;
+  v16u8 res0, res1, res2, res3, dst0, dst1, dst2, dst3;
+  v16u8 alpha = (v16u8)__msa_ldi_b(ALPHA_VAL);
+  v8u16 const_0x1F = (v8u16)__msa_ldi_h(0x1F);
+  v8u16 const_0x7E0 = (v8u16)__msa_fill_h(0x7E0);
+  v8u16 const_0xF800 = (v8u16)__msa_fill_h(0xF800);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v8u16)__msa_ld_h((const v8u16*)src_rgb565, 0);
+    src1 = (v8u16)__msa_ld_h((const v8u16*)src_rgb565, 16);
+    vec0 = src0 & const_0x1F;
+    vec1 = src0 & const_0x7E0;
+    vec2 = src0 & const_0xF800;
+    vec3 = src1 & const_0x1F;
+    vec4 = src1 & const_0x7E0;
+    vec5 = src1 & const_0xF800;
+    reg0 = (v8u16)__msa_slli_h((v8i16)vec0, 3);
+    reg1 = (v8u16)__msa_srli_h((v8i16)vec1, 3);
+    reg2 = (v8u16)__msa_srli_h((v8i16)vec2, 8);
+    reg3 = (v8u16)__msa_slli_h((v8i16)vec3, 3);
+    reg4 = (v8u16)__msa_srli_h((v8i16)vec4, 3);
+    reg5 = (v8u16)__msa_srli_h((v8i16)vec5, 8);
+    reg0 |= (v8u16)__msa_srli_h((v8i16)vec0, 2);
+    reg1 |= (v8u16)__msa_srli_h((v8i16)vec1, 9);
+    reg2 |= (v8u16)__msa_srli_h((v8i16)vec2, 13);
+    reg3 |= (v8u16)__msa_srli_h((v8i16)vec3, 2);
+    reg4 |= (v8u16)__msa_srli_h((v8i16)vec4, 9);
+    reg5 |= (v8u16)__msa_srli_h((v8i16)vec5, 13);
+    res0 = (v16u8)__msa_ilvev_b((v16i8)reg2, (v16i8)reg0);
+    res1 = (v16u8)__msa_ilvev_b((v16i8)alpha, (v16i8)reg1);
+    res2 = (v16u8)__msa_ilvev_b((v16i8)reg5, (v16i8)reg3);
+    res3 = (v16u8)__msa_ilvev_b((v16i8)alpha, (v16i8)reg4);
+    dst0 = (v16u8)__msa_ilvr_b((v16i8)res1, (v16i8)res0);
+    dst1 = (v16u8)__msa_ilvl_b((v16i8)res1, (v16i8)res0);
+    dst2 = (v16u8)__msa_ilvr_b((v16i8)res3, (v16i8)res2);
+    dst3 = (v16u8)__msa_ilvl_b((v16i8)res3, (v16i8)res2);
+    ST_UB4(dst0, dst1, dst2, dst3, dst_argb, 16);
+    src_rgb565 += 32;
+    dst_argb += 64;
+  }
+}
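The masks `0x1F`/`0x7E0`/`0xF800` isolate the three RGB565 fields and the paired shifts replicate each field's top bits into its low bits, so full-scale 5- and 6-bit values expand to exactly 255. A scalar sketch, assuming `ALPHA_VAL` is the opaque constant 255 (helper name illustrative only):

```c
#include <stdint.h>

/* Scalar equivalent of RGB565ToARGBRow: expand 5/6/5 fields with
 * bit replication and append an opaque alpha byte. */
static void rgb565_to_argb_row_c(const uint8_t* src_rgb565,
                                 uint8_t* dst_argb,
                                 int width) {
  for (int x = 0; x < width; ++x) {
    uint16_t p = (uint16_t)(src_rgb565[0] | (src_rgb565[1] << 8));
    uint8_t b5 = p & 0x1F;
    uint8_t g6 = (p >> 5) & 0x3F;
    uint8_t r5 = (p >> 11) & 0x1F;
    dst_argb[0] = (uint8_t)((b5 << 3) | (b5 >> 2)); /* 5 -> 8 bits */
    dst_argb[1] = (uint8_t)((g6 << 2) | (g6 >> 4)); /* 6 -> 8 bits */
    dst_argb[2] = (uint8_t)((r5 << 3) | (r5 >> 2)); /* 5 -> 8 bits */
    dst_argb[3] = 255; /* ALPHA_VAL assumed opaque */
    src_rgb565 += 2;
    dst_argb += 4;
  }
}
```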
+
+void RGB24ToARGBRow_MSA(const uint8_t* src_rgb24,
+                        uint8_t* dst_argb,
+                        int width) {
+  int x;
+  v16u8 src0, src1, src2;
+  v16u8 vec0, vec1, vec2;
+  v16u8 dst0, dst1, dst2, dst3;
+  v16u8 alpha = (v16u8)__msa_ldi_b(ALPHA_VAL);
+  v16i8 shuffler = {0, 1, 2, 16, 3, 4, 5, 17, 6, 7, 8, 18, 9, 10, 11, 19};
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_rgb24, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_rgb24, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_rgb24, 32);
+    vec0 = (v16u8)__msa_sldi_b((v16i8)src1, (v16i8)src0, 12);
+    vec1 = (v16u8)__msa_sldi_b((v16i8)src2, (v16i8)src1, 8);
+    vec2 = (v16u8)__msa_sldi_b((v16i8)src2, (v16i8)src2, 4);
+    dst0 = (v16u8)__msa_vshf_b(shuffler, (v16i8)alpha, (v16i8)src0);
+    dst1 = (v16u8)__msa_vshf_b(shuffler, (v16i8)alpha, (v16i8)vec0);
+    dst2 = (v16u8)__msa_vshf_b(shuffler, (v16i8)alpha, (v16i8)vec1);
+    dst3 = (v16u8)__msa_vshf_b(shuffler, (v16i8)alpha, (v16i8)vec2);
+    ST_UB4(dst0, dst1, dst2, dst3, dst_argb, 16);
+    src_rgb24 += 48;
+    dst_argb += 64;
+  }
+}
+
+void RAWToARGBRow_MSA(const uint8_t* src_raw, uint8_t* dst_argb, int width) {
+  int x;
+  v16u8 src0, src1, src2;
+  v16u8 vec0, vec1, vec2;
+  v16u8 dst0, dst1, dst2, dst3;
+  v16u8 alpha = (v16u8)__msa_ldi_b(ALPHA_VAL);
+  v16i8 mask = {2, 1, 0, 16, 5, 4, 3, 17, 8, 7, 6, 18, 11, 10, 9, 19};
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_raw, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_raw, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_raw, 32);
+    vec0 = (v16u8)__msa_sldi_b((v16i8)src1, (v16i8)src0, 12);
+    vec1 = (v16u8)__msa_sldi_b((v16i8)src2, (v16i8)src1, 8);
+    vec2 = (v16u8)__msa_sldi_b((v16i8)src2, (v16i8)src2, 4);
+    dst0 = (v16u8)__msa_vshf_b(mask, (v16i8)alpha, (v16i8)src0);
+    dst1 = (v16u8)__msa_vshf_b(mask, (v16i8)alpha, (v16i8)vec0);
+    dst2 = (v16u8)__msa_vshf_b(mask, (v16i8)alpha, (v16i8)vec1);
+    dst3 = (v16u8)__msa_vshf_b(mask, (v16i8)alpha, (v16i8)vec2);
+    ST_UB4(dst0, dst1, dst2, dst3, dst_argb, 16);
+    src_raw += 48;
+    dst_argb += 64;
+  }
+}
+
+void ARGB1555ToYRow_MSA(const uint8_t* src_argb1555,
+                        uint8_t* dst_y,
+                        int width) {
+  int x;
+  v8u16 src0, src1, vec0, vec1, vec2, vec3, vec4, vec5;
+  v8u16 reg0, reg1, reg2, reg3, reg4, reg5;
+  v16u8 dst0;
+  v8u16 const_0x19 = (v8u16)__msa_ldi_h(0x19);
+  v8u16 const_0x81 = (v8u16)__msa_ldi_h(0x81);
+  v8u16 const_0x42 = (v8u16)__msa_ldi_h(0x42);
+  v8u16 const_0x1F = (v8u16)__msa_ldi_h(0x1F);
+  v8u16 const_0x1080 = (v8u16)__msa_fill_h(0x1080);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v8u16)__msa_ld_b((const v8i16*)src_argb1555, 0);
+    src1 = (v8u16)__msa_ld_b((const v8i16*)src_argb1555, 16);
+    vec0 = src0 & const_0x1F;
+    vec1 = src1 & const_0x1F;
+    src0 = (v8u16)__msa_srai_h((v8i16)src0, 5);
+    src1 = (v8u16)__msa_srai_h((v8i16)src1, 5);
+    vec2 = src0 & const_0x1F;
+    vec3 = src1 & const_0x1F;
+    src0 = (v8u16)__msa_srai_h((v8i16)src0, 5);
+    src1 = (v8u16)__msa_srai_h((v8i16)src1, 5);
+    vec4 = src0 & const_0x1F;
+    vec5 = src1 & const_0x1F;
+    reg0 = (v8u16)__msa_slli_h((v8i16)vec0, 3);
+    reg1 = (v8u16)__msa_slli_h((v8i16)vec1, 3);
+    reg0 |= (v8u16)__msa_srai_h((v8i16)vec0, 2);
+    reg1 |= (v8u16)__msa_srai_h((v8i16)vec1, 2);
+    reg2 = (v8u16)__msa_slli_h((v8i16)vec2, 3);
+    reg3 = (v8u16)__msa_slli_h((v8i16)vec3, 3);
+    reg2 |= (v8u16)__msa_srai_h((v8i16)vec2, 2);
+    reg3 |= (v8u16)__msa_srai_h((v8i16)vec3, 2);
+    reg4 = (v8u16)__msa_slli_h((v8i16)vec4, 3);
+    reg5 = (v8u16)__msa_slli_h((v8i16)vec5, 3);
+    reg4 |= (v8u16)__msa_srai_h((v8i16)vec4, 2);
+    reg5 |= (v8u16)__msa_srai_h((v8i16)vec5, 2);
+    reg0 *= const_0x19;
+    reg1 *= const_0x19;
+    reg2 *= const_0x81;
+    reg3 *= const_0x81;
+    reg4 *= const_0x42;
+    reg5 *= const_0x42;
+    reg0 += reg2;
+    reg1 += reg3;
+    reg0 += reg4;
+    reg1 += reg5;
+    reg0 += const_0x1080;
+    reg1 += const_0x1080;
+    reg0 = (v8u16)__msa_srai_h((v8i16)reg0, 8);
+    reg1 = (v8u16)__msa_srai_h((v8i16)reg1, 8);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)reg1, (v16i8)reg0);
+    ST_UB(dst0, dst_y);
+    src_argb1555 += 32;
+    dst_y += 16;
+  }
+}
+
+void RGB565ToYRow_MSA(const uint8_t* src_rgb565, uint8_t* dst_y, int width) {
+  int x;
+  v8u16 src0, src1, vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7;
+  v8u16 reg0, reg1, reg2, reg3, reg4, reg5;
+  v4u32 res0, res1, res2, res3;
+  v16u8 dst0;
+  v4u32 const_0x810019 = (v4u32)__msa_fill_w(0x810019);
+  v4u32 const_0x010042 = (v4u32)__msa_fill_w(0x010042);
+  v8i16 const_0x1080 = __msa_fill_h(0x1080);
+  v8u16 const_0x1F = (v8u16)__msa_ldi_h(0x1F);
+  v8u16 const_0x7E0 = (v8u16)__msa_fill_h(0x7E0);
+  v8u16 const_0xF800 = (v8u16)__msa_fill_h(0xF800);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v8u16)__msa_ld_b((const v8i16*)src_rgb565, 0);
+    src1 = (v8u16)__msa_ld_b((const v8i16*)src_rgb565, 16);
+    vec0 = src0 & const_0x1F;
+    vec1 = src0 & const_0x7E0;
+    vec2 = src0 & const_0xF800;
+    vec3 = src1 & const_0x1F;
+    vec4 = src1 & const_0x7E0;
+    vec5 = src1 & const_0xF800;
+    reg0 = (v8u16)__msa_slli_h((v8i16)vec0, 3);
+    reg1 = (v8u16)__msa_srli_h((v8i16)vec1, 3);
+    reg2 = (v8u16)__msa_srli_h((v8i16)vec2, 8);
+    reg3 = (v8u16)__msa_slli_h((v8i16)vec3, 3);
+    reg4 = (v8u16)__msa_srli_h((v8i16)vec4, 3);
+    reg5 = (v8u16)__msa_srli_h((v8i16)vec5, 8);
+    reg0 |= (v8u16)__msa_srli_h((v8i16)vec0, 2);
+    reg1 |= (v8u16)__msa_srli_h((v8i16)vec1, 9);
+    reg2 |= (v8u16)__msa_srli_h((v8i16)vec2, 13);
+    reg3 |= (v8u16)__msa_srli_h((v8i16)vec3, 2);
+    reg4 |= (v8u16)__msa_srli_h((v8i16)vec4, 9);
+    reg5 |= (v8u16)__msa_srli_h((v8i16)vec5, 13);
+    vec0 = (v8u16)__msa_ilvr_h((v8i16)reg1, (v8i16)reg0);
+    vec1 = (v8u16)__msa_ilvl_h((v8i16)reg1, (v8i16)reg0);
+    vec2 = (v8u16)__msa_ilvr_h((v8i16)reg4, (v8i16)reg3);
+    vec3 = (v8u16)__msa_ilvl_h((v8i16)reg4, (v8i16)reg3);
+    vec4 = (v8u16)__msa_ilvr_h(const_0x1080, (v8i16)reg2);
+    vec5 = (v8u16)__msa_ilvl_h(const_0x1080, (v8i16)reg2);
+    vec6 = (v8u16)__msa_ilvr_h(const_0x1080, (v8i16)reg5);
+    vec7 = (v8u16)__msa_ilvl_h(const_0x1080, (v8i16)reg5);
+    res0 = __msa_dotp_u_w(vec0, (v8u16)const_0x810019);
+    res1 = __msa_dotp_u_w(vec1, (v8u16)const_0x810019);
+    res2 = __msa_dotp_u_w(vec2, (v8u16)const_0x810019);
+    res3 = __msa_dotp_u_w(vec3, (v8u16)const_0x810019);
+    res0 = __msa_dpadd_u_w(res0, vec4, (v8u16)const_0x010042);
+    res1 = __msa_dpadd_u_w(res1, vec5, (v8u16)const_0x010042);
+    res2 = __msa_dpadd_u_w(res2, vec6, (v8u16)const_0x010042);
+    res3 = __msa_dpadd_u_w(res3, vec7, (v8u16)const_0x010042);
+    res0 = (v4u32)__msa_srai_w((v4i32)res0, 8);
+    res1 = (v4u32)__msa_srai_w((v4i32)res1, 8);
+    res2 = (v4u32)__msa_srai_w((v4i32)res2, 8);
+    res3 = (v4u32)__msa_srai_w((v4i32)res3, 8);
+    vec0 = (v8u16)__msa_pckev_h((v8i16)res1, (v8i16)res0);
+    vec1 = (v8u16)__msa_pckev_h((v8i16)res3, (v8i16)res2);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    ST_UB(dst0, dst_y);
+    src_rgb565 += 32;
+    dst_y += 16;
+  }
+}
+
+void RGB24ToYRow_MSA(const uint8_t* src_argb0, uint8_t* dst_y, int width) {
+  int x;
+  v16u8 src0, src1, src2, reg0, reg1, reg2, reg3, dst0;
+  v8u16 vec0, vec1, vec2, vec3;
+  v8u16 const_0x8119 = (v8u16)__msa_fill_h(0x8119);
+  v8u16 const_0x42 = (v8u16)__msa_fill_h(0x42);
+  v8u16 const_0x1080 = (v8u16)__msa_fill_h(0x1080);
+  v16i8 mask0 = {0, 1, 2, 3, 3, 4, 5, 6, 6, 7, 8, 9, 9, 10, 11, 12};
+  v16i8 mask1 = {12, 13, 14, 15, 15, 16, 17, 18,
+                 18, 19, 20, 21, 21, 22, 23, 24};
+  v16i8 mask2 = {8, 9, 10, 11, 11, 12, 13, 14, 14, 15, 16, 17, 17, 18, 19, 20};
+  v16i8 mask3 = {4, 5, 6, 7, 7, 8, 9, 10, 10, 11, 12, 13, 13, 14, 15, 16};
+  v16i8 zero = {0};
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 32);
+    reg0 = (v16u8)__msa_vshf_b(mask0, zero, (v16i8)src0);
+    reg1 = (v16u8)__msa_vshf_b(mask1, (v16i8)src1, (v16i8)src0);
+    reg2 = (v16u8)__msa_vshf_b(mask2, (v16i8)src2, (v16i8)src1);
+    reg3 = (v16u8)__msa_vshf_b(mask3, zero, (v16i8)src2);
+    vec0 = (v8u16)__msa_pckev_h((v8i16)reg1, (v8i16)reg0);
+    vec1 = (v8u16)__msa_pckev_h((v8i16)reg3, (v8i16)reg2);
+    vec2 = (v8u16)__msa_pckod_h((v8i16)reg1, (v8i16)reg0);
+    vec3 = (v8u16)__msa_pckod_h((v8i16)reg3, (v8i16)reg2);
+    vec0 = __msa_dotp_u_h((v16u8)vec0, (v16u8)const_0x8119);
+    vec1 = __msa_dotp_u_h((v16u8)vec1, (v16u8)const_0x8119);
+    vec0 = __msa_dpadd_u_h(vec0, (v16u8)vec2, (v16u8)const_0x42);
+    vec1 = __msa_dpadd_u_h(vec1, (v16u8)vec3, (v16u8)const_0x42);
+    vec0 += const_0x1080;
+    vec1 += const_0x1080;
+    vec0 = (v8u16)__msa_srai_h((v8i16)vec0, 8);
+    vec1 = (v8u16)__msa_srai_h((v8i16)vec1, 8);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    ST_UB(dst0, dst_y);
+    src_argb0 += 48;
+    dst_y += 16;
+  }
+}
+
+void RAWToYRow_MSA(const uint8_t* src_argb0, uint8_t* dst_y, int width) {
+  int x;
+  v16u8 src0, src1, src2, reg0, reg1, reg2, reg3, dst0;
+  v8u16 vec0, vec1, vec2, vec3;
+  v8u16 const_0x8142 = (v8u16)__msa_fill_h(0x8142);
+  v8u16 const_0x19 = (v8u16)__msa_fill_h(0x19);
+  v8u16 const_0x1080 = (v8u16)__msa_fill_h(0x1080);
+  v16i8 mask0 = {0, 1, 2, 3, 3, 4, 5, 6, 6, 7, 8, 9, 9, 10, 11, 12};
+  v16i8 mask1 = {12, 13, 14, 15, 15, 16, 17, 18,
+                 18, 19, 20, 21, 21, 22, 23, 24};
+  v16i8 mask2 = {8, 9, 10, 11, 11, 12, 13, 14, 14, 15, 16, 17, 17, 18, 19, 20};
+  v16i8 mask3 = {4, 5, 6, 7, 7, 8, 9, 10, 10, 11, 12, 13, 13, 14, 15, 16};
+  v16i8 zero = {0};
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 32);
+    reg0 = (v16u8)__msa_vshf_b(mask0, zero, (v16i8)src0);
+    reg1 = (v16u8)__msa_vshf_b(mask1, (v16i8)src1, (v16i8)src0);
+    reg2 = (v16u8)__msa_vshf_b(mask2, (v16i8)src2, (v16i8)src1);
+    reg3 = (v16u8)__msa_vshf_b(mask3, zero, (v16i8)src2);
+    vec0 = (v8u16)__msa_pckev_h((v8i16)reg1, (v8i16)reg0);
+    vec1 = (v8u16)__msa_pckev_h((v8i16)reg3, (v8i16)reg2);
+    vec2 = (v8u16)__msa_pckod_h((v8i16)reg1, (v8i16)reg0);
+    vec3 = (v8u16)__msa_pckod_h((v8i16)reg3, (v8i16)reg2);
+    vec0 = __msa_dotp_u_h((v16u8)vec0, (v16u8)const_0x8142);
+    vec1 = __msa_dotp_u_h((v16u8)vec1, (v16u8)const_0x8142);
+    vec0 = __msa_dpadd_u_h(vec0, (v16u8)vec2, (v16u8)const_0x19);
+    vec1 = __msa_dpadd_u_h(vec1, (v16u8)vec3, (v16u8)const_0x19);
+    vec0 += const_0x1080;
+    vec1 += const_0x1080;
+    vec0 = (v8u16)__msa_srai_h((v8i16)vec0, 8);
+    vec1 = (v8u16)__msa_srai_h((v8i16)vec1, 8);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    ST_UB(dst0, dst_y);
+    src_argb0 += 48;
+    dst_y += 16;
+  }
+}
+
+void ARGB1555ToUVRow_MSA(const uint8_t* src_argb1555,
+                         int src_stride_argb1555,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width) {
+  int x;
+  const uint16_t* s = (const uint16_t*)src_argb1555;
+  const uint16_t* t = (const uint16_t*)(src_argb1555 + src_stride_argb1555);
+  int64_t res0, res1;
+  v8u16 src0, src1, src2, src3, reg0, reg1, reg2, reg3;
+  v8u16 vec0, vec1, vec2, vec3, vec4, vec5, vec6;
+  v16u8 dst0;
+  v8u16 const_0x70 = (v8u16)__msa_ldi_h(0x70);
+  v8u16 const_0x4A = (v8u16)__msa_ldi_h(0x4A);
+  v8u16 const_0x26 = (v8u16)__msa_ldi_h(0x26);
+  v8u16 const_0x5E = (v8u16)__msa_ldi_h(0x5E);
+  v8u16 const_0x12 = (v8u16)__msa_ldi_h(0x12);
+  v8u16 const_0x8080 = (v8u16)__msa_fill_h(0x8080);
+  v8u16 const_0x1F = (v8u16)__msa_ldi_h(0x1F);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v8u16)__msa_ld_b((v8i16*)s, 0);
+    src1 = (v8u16)__msa_ld_b((v8i16*)s, 16);
+    src2 = (v8u16)__msa_ld_b((v8i16*)t, 0);
+    src3 = (v8u16)__msa_ld_b((v8i16*)t, 16);
+    vec0 = src0 & const_0x1F;
+    vec1 = src1 & const_0x1F;
+    vec0 += src2 & const_0x1F;
+    vec1 += src3 & const_0x1F;
+    vec0 = (v8u16)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    src0 = (v8u16)__msa_srai_h((v8i16)src0, 5);
+    src1 = (v8u16)__msa_srai_h((v8i16)src1, 5);
+    src2 = (v8u16)__msa_srai_h((v8i16)src2, 5);
+    src3 = (v8u16)__msa_srai_h((v8i16)src3, 5);
+    vec2 = src0 & const_0x1F;
+    vec3 = src1 & const_0x1F;
+    vec2 += src2 & const_0x1F;
+    vec3 += src3 & const_0x1F;
+    vec2 = (v8u16)__msa_pckev_b((v16i8)vec3, (v16i8)vec2);
+    src0 = (v8u16)__msa_srai_h((v8i16)src0, 5);
+    src1 = (v8u16)__msa_srai_h((v8i16)src1, 5);
+    src2 = (v8u16)__msa_srai_h((v8i16)src2, 5);
+    src3 = (v8u16)__msa_srai_h((v8i16)src3, 5);
+    vec4 = src0 & const_0x1F;
+    vec5 = src1 & const_0x1F;
+    vec4 += src2 & const_0x1F;
+    vec5 += src3 & const_0x1F;
+    vec4 = (v8u16)__msa_pckev_b((v16i8)vec5, (v16i8)vec4);
+    vec0 = __msa_hadd_u_h((v16u8)vec0, (v16u8)vec0);
+    vec2 = __msa_hadd_u_h((v16u8)vec2, (v16u8)vec2);
+    vec4 = __msa_hadd_u_h((v16u8)vec4, (v16u8)vec4);
+    vec6 = (v8u16)__msa_slli_h((v8i16)vec0, 1);
+    vec6 |= (v8u16)__msa_srai_h((v8i16)vec0, 6);
+    vec0 = (v8u16)__msa_slli_h((v8i16)vec2, 1);
+    vec0 |= (v8u16)__msa_srai_h((v8i16)vec2, 6);
+    vec2 = (v8u16)__msa_slli_h((v8i16)vec4, 1);
+    vec2 |= (v8u16)__msa_srai_h((v8i16)vec4, 6);
+    reg0 = vec6 * const_0x70;
+    reg1 = vec0 * const_0x4A;
+    reg2 = vec2 * const_0x70;
+    reg3 = vec0 * const_0x5E;
+    reg0 += const_0x8080;
+    reg1 += vec2 * const_0x26;
+    reg2 += const_0x8080;
+    reg3 += vec6 * const_0x12;
+    reg0 -= reg1;
+    reg2 -= reg3;
+    reg0 = (v8u16)__msa_srai_h((v8i16)reg0, 8);
+    reg2 = (v8u16)__msa_srai_h((v8i16)reg2, 8);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)reg2, (v16i8)reg0);
+    res0 = __msa_copy_u_d((v2i64)dst0, 0);
+    res1 = __msa_copy_u_d((v2i64)dst0, 1);
+    SD(res0, dst_u);
+    SD(res1, dst_v);
+    s += 16;
+    t += 16;
+    dst_u += 8;
+    dst_v += 8;
+  }
+}
+
+void RGB565ToUVRow_MSA(const uint8_t* src_rgb565,
+                       int src_stride_rgb565,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width) {
+  int x;
+  const uint16_t* s = (const uint16_t*)src_rgb565;
+  const uint16_t* t = (const uint16_t*)(src_rgb565 + src_stride_rgb565);
+  int64_t res0, res1;
+  v8u16 src0, src1, src2, src3, reg0, reg1, reg2, reg3;
+  v8u16 vec0, vec1, vec2, vec3, vec4, vec5;
+  v16u8 dst0;
+  v8u16 const_0x70 = (v8u16)__msa_ldi_h(0x70);
+  v8u16 const_0x4A = (v8u16)__msa_ldi_h(0x4A);
+  v8u16 const_0x26 = (v8u16)__msa_ldi_h(0x26);
+  v8u16 const_0x5E = (v8u16)__msa_ldi_h(0x5E);
+  v8u16 const_0x12 = (v8u16)__msa_ldi_h(0x12);
+  v8u16 const_32896 = (v8u16)__msa_fill_h(0x8080);
+  v8u16 const_0x1F = (v8u16)__msa_ldi_h(0x1F);
+  v8u16 const_0x3F = (v8u16)__msa_fill_h(0x3F);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v8u16)__msa_ld_b((v8i16*)s, 0);
+    src1 = (v8u16)__msa_ld_b((v8i16*)s, 16);
+    src2 = (v8u16)__msa_ld_b((v8i16*)t, 0);
+    src3 = (v8u16)__msa_ld_b((v8i16*)t, 16);
+    vec0 = src0 & const_0x1F;
+    vec1 = src1 & const_0x1F;
+    vec0 += src2 & const_0x1F;
+    vec1 += src3 & const_0x1F;
+    vec0 = (v8u16)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    src0 = (v8u16)__msa_srai_h((v8i16)src0, 5);
+    src1 = (v8u16)__msa_srai_h((v8i16)src1, 5);
+    src2 = (v8u16)__msa_srai_h((v8i16)src2, 5);
+    src3 = (v8u16)__msa_srai_h((v8i16)src3, 5);
+    vec2 = src0 & const_0x3F;
+    vec3 = src1 & const_0x3F;
+    vec2 += src2 & const_0x3F;
+    vec3 += src3 & const_0x3F;
+    vec1 = (v8u16)__msa_pckev_b((v16i8)vec3, (v16i8)vec2);
+    src0 = (v8u16)__msa_srai_h((v8i16)src0, 6);
+    src1 = (v8u16)__msa_srai_h((v8i16)src1, 6);
+    src2 = (v8u16)__msa_srai_h((v8i16)src2, 6);
+    src3 = (v8u16)__msa_srai_h((v8i16)src3, 6);
+    vec4 = src0 & const_0x1F;
+    vec5 = src1 & const_0x1F;
+    vec4 += src2 & const_0x1F;
+    vec5 += src3 & const_0x1F;
+    vec2 = (v8u16)__msa_pckev_b((v16i8)vec5, (v16i8)vec4);
+    vec0 = __msa_hadd_u_h((v16u8)vec0, (v16u8)vec0);
+    vec1 = __msa_hadd_u_h((v16u8)vec1, (v16u8)vec1);
+    vec2 = __msa_hadd_u_h((v16u8)vec2, (v16u8)vec2);
+    vec3 = (v8u16)__msa_slli_h((v8i16)vec0, 1);
+    vec3 |= (v8u16)__msa_srai_h((v8i16)vec0, 6);
+    vec4 = (v8u16)__msa_slli_h((v8i16)vec2, 1);
+    vec4 |= (v8u16)__msa_srai_h((v8i16)vec2, 6);
+    reg0 = vec3 * const_0x70;
+    reg1 = vec1 * const_0x4A;
+    reg2 = vec4 * const_0x70;
+    reg3 = vec1 * const_0x5E;
+    reg0 += const_32896;
+    reg1 += vec4 * const_0x26;
+    reg2 += const_32896;
+    reg3 += vec3 * const_0x12;
+    reg0 -= reg1;
+    reg2 -= reg3;
+    reg0 = (v8u16)__msa_srai_h((v8i16)reg0, 8);
+    reg2 = (v8u16)__msa_srai_h((v8i16)reg2, 8);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)reg2, (v16i8)reg0);
+    res0 = __msa_copy_u_d((v2i64)dst0, 0);
+    res1 = __msa_copy_u_d((v2i64)dst0, 1);
+    SD(res0, dst_u);
+    SD(res1, dst_v);
+    s += 16;
+    t += 16;
+    dst_u += 8;
+    dst_v += 8;
+  }
+}
+
+void RGB24ToUVRow_MSA(const uint8_t* src_rgb0,
+                      int src_stride_rgb,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  int x;
+  const uint8_t* s = src_rgb0;
+  const uint8_t* t = src_rgb0 + src_stride_rgb;
+  int64_t res0, res1;
+  v16u8 src0, src1, src2, src3, src4, src5, src6, src7;
+  v16u8 inp0, inp1, inp2, inp3, inp4, inp5;
+  v8u16 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7;
+  v8i16 reg0, reg1, reg2, reg3;
+  v16u8 dst0;
+  v8u16 const_0x70 = (v8u16)__msa_fill_h(0x70);
+  v8u16 const_0x4A = (v8u16)__msa_fill_h(0x4A);
+  v8u16 const_0x26 = (v8u16)__msa_fill_h(0x26);
+  v8u16 const_0x5E = (v8u16)__msa_fill_h(0x5E);
+  v8u16 const_0x12 = (v8u16)__msa_fill_h(0x12);
+  v8u16 const_0x8080 = (v8u16)__msa_fill_h(0x8080);
+  v16i8 mask = {0, 1, 2, 16, 3, 4, 5, 17, 6, 7, 8, 18, 9, 10, 11, 19};
+  v16i8 zero = {0};
+
+  for (x = 0; x < width; x += 16) {
+    inp0 = (v16u8)__msa_ld_b((const v16i8*)s, 0);
+    inp1 = (v16u8)__msa_ld_b((const v16i8*)s, 16);
+    inp2 = (v16u8)__msa_ld_b((const v16i8*)s, 32);
+    inp3 = (v16u8)__msa_ld_b((const v16i8*)t, 0);
+    inp4 = (v16u8)__msa_ld_b((const v16i8*)t, 16);
+    inp5 = (v16u8)__msa_ld_b((const v16i8*)t, 32);
+    src1 = (v16u8)__msa_sldi_b((v16i8)inp1, (v16i8)inp0, 12);
+    src5 = (v16u8)__msa_sldi_b((v16i8)inp4, (v16i8)inp3, 12);
+    src2 = (v16u8)__msa_sldi_b((v16i8)inp2, (v16i8)inp1, 8);
+    src6 = (v16u8)__msa_sldi_b((v16i8)inp5, (v16i8)inp4, 8);
+    src3 = (v16u8)__msa_sldi_b((v16i8)inp2, (v16i8)inp2, 4);
+    src7 = (v16u8)__msa_sldi_b((v16i8)inp5, (v16i8)inp5, 4);
+    src0 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)inp0);
+    src1 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)src1);
+    src2 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)src2);
+    src3 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)src3);
+    src4 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)inp3);
+    src5 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)src5);
+    src6 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)src6);
+    src7 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)src7);
+    vec0 = (v8u16)__msa_ilvr_b((v16i8)src4, (v16i8)src0);
+    vec1 = (v8u16)__msa_ilvl_b((v16i8)src4, (v16i8)src0);
+    vec2 = (v8u16)__msa_ilvr_b((v16i8)src5, (v16i8)src1);
+    vec3 = (v8u16)__msa_ilvl_b((v16i8)src5, (v16i8)src1);
+    vec4 = (v8u16)__msa_ilvr_b((v16i8)src6, (v16i8)src2);
+    vec5 = (v8u16)__msa_ilvl_b((v16i8)src6, (v16i8)src2);
+    vec6 = (v8u16)__msa_ilvr_b((v16i8)src7, (v16i8)src3);
+    vec7 = (v8u16)__msa_ilvl_b((v16i8)src7, (v16i8)src3);
+    vec0 = (v8u16)__msa_hadd_u_h((v16u8)vec0, (v16u8)vec0);
+    vec1 = (v8u16)__msa_hadd_u_h((v16u8)vec1, (v16u8)vec1);
+    vec2 = (v8u16)__msa_hadd_u_h((v16u8)vec2, (v16u8)vec2);
+    vec3 = (v8u16)__msa_hadd_u_h((v16u8)vec3, (v16u8)vec3);
+    vec4 = (v8u16)__msa_hadd_u_h((v16u8)vec4, (v16u8)vec4);
+    vec5 = (v8u16)__msa_hadd_u_h((v16u8)vec5, (v16u8)vec5);
+    vec6 = (v8u16)__msa_hadd_u_h((v16u8)vec6, (v16u8)vec6);
+    vec7 = (v8u16)__msa_hadd_u_h((v16u8)vec7, (v16u8)vec7);
+    reg0 = (v8i16)__msa_pckev_d((v2i64)vec1, (v2i64)vec0);
+    reg1 = (v8i16)__msa_pckev_d((v2i64)vec3, (v2i64)vec2);
+    reg2 = (v8i16)__msa_pckev_d((v2i64)vec5, (v2i64)vec4);
+    reg3 = (v8i16)__msa_pckev_d((v2i64)vec7, (v2i64)vec6);
+    reg0 += (v8i16)__msa_pckod_d((v2i64)vec1, (v2i64)vec0);
+    reg1 += (v8i16)__msa_pckod_d((v2i64)vec3, (v2i64)vec2);
+    reg2 += (v8i16)__msa_pckod_d((v2i64)vec5, (v2i64)vec4);
+    reg3 += (v8i16)__msa_pckod_d((v2i64)vec7, (v2i64)vec6);
+    reg0 = __msa_srai_h((v8i16)reg0, 2);
+    reg1 = __msa_srai_h((v8i16)reg1, 2);
+    reg2 = __msa_srai_h((v8i16)reg2, 2);
+    reg3 = __msa_srai_h((v8i16)reg3, 2);
+    vec4 = (v8u16)__msa_pckev_h(reg1, reg0);
+    vec5 = (v8u16)__msa_pckev_h(reg3, reg2);
+    vec6 = (v8u16)__msa_pckod_h(reg1, reg0);
+    vec7 = (v8u16)__msa_pckod_h(reg3, reg2);
+    vec0 = (v8u16)__msa_pckev_h((v8i16)vec5, (v8i16)vec4);
+    vec1 = (v8u16)__msa_pckev_h((v8i16)vec7, (v8i16)vec6);
+    vec2 = (v8u16)__msa_pckod_h((v8i16)vec5, (v8i16)vec4);
+    vec3 = vec0 * const_0x70;
+    vec4 = vec1 * const_0x4A;
+    vec5 = vec2 * const_0x26;
+    vec2 *= const_0x70;
+    vec1 *= const_0x5E;
+    vec0 *= const_0x12;
+    reg0 = __msa_subv_h((v8i16)vec3, (v8i16)vec4);
+    reg1 = __msa_subv_h((v8i16)const_0x8080, (v8i16)vec5);
+    reg2 = __msa_subv_h((v8i16)vec2, (v8i16)vec1);
+    reg3 = __msa_subv_h((v8i16)const_0x8080, (v8i16)vec0);
+    reg0 += reg1;
+    reg2 += reg3;
+    reg0 = __msa_srai_h(reg0, 8);
+    reg2 = __msa_srai_h(reg2, 8);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)reg2, (v16i8)reg0);
+    res0 = __msa_copy_u_d((v2i64)dst0, 0);
+    res1 = __msa_copy_u_d((v2i64)dst0, 1);
+    SD(res0, dst_u);
+    SD(res1, dst_v);
+    t += 48;
+    s += 48;
+    dst_u += 8;
+    dst_v += 8;
+  }
+}
+
+void RAWToUVRow_MSA(const uint8_t* src_rgb0,
+                    int src_stride_rgb,
+                    uint8_t* dst_u,
+                    uint8_t* dst_v,
+                    int width) {
+  int x;
+  const uint8_t* s = src_rgb0;
+  const uint8_t* t = src_rgb0 + src_stride_rgb;
+  int64_t res0, res1;
+  v16u8 inp0, inp1, inp2, inp3, inp4, inp5;
+  v16u8 src0, src1, src2, src3, src4, src5, src6, src7;
+  v8u16 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7;
+  v8i16 reg0, reg1, reg2, reg3;
+  v16u8 dst0;
+  v8u16 const_0x70 = (v8u16)__msa_fill_h(0x70);
+  v8u16 const_0x4A = (v8u16)__msa_fill_h(0x4A);
+  v8u16 const_0x26 = (v8u16)__msa_fill_h(0x26);
+  v8u16 const_0x5E = (v8u16)__msa_fill_h(0x5E);
+  v8u16 const_0x12 = (v8u16)__msa_fill_h(0x12);
+  v8u16 const_0x8080 = (v8u16)__msa_fill_h(0x8080);
+  v16i8 mask = {0, 1, 2, 16, 3, 4, 5, 17, 6, 7, 8, 18, 9, 10, 11, 19};
+  v16i8 zero = {0};
+
+  for (x = 0; x < width; x += 16) {
+    inp0 = (v16u8)__msa_ld_b((const v16i8*)s, 0);
+    inp1 = (v16u8)__msa_ld_b((const v16i8*)s, 16);
+    inp2 = (v16u8)__msa_ld_b((const v16i8*)s, 32);
+    inp3 = (v16u8)__msa_ld_b((const v16i8*)t, 0);
+    inp4 = (v16u8)__msa_ld_b((const v16i8*)t, 16);
+    inp5 = (v16u8)__msa_ld_b((const v16i8*)t, 32);
+    src1 = (v16u8)__msa_sldi_b((v16i8)inp1, (v16i8)inp0, 12);
+    src5 = (v16u8)__msa_sldi_b((v16i8)inp4, (v16i8)inp3, 12);
+    src2 = (v16u8)__msa_sldi_b((v16i8)inp2, (v16i8)inp1, 8);
+    src6 = (v16u8)__msa_sldi_b((v16i8)inp5, (v16i8)inp4, 8);
+    src3 = (v16u8)__msa_sldi_b((v16i8)inp2, (v16i8)inp2, 4);
+    src7 = (v16u8)__msa_sldi_b((v16i8)inp5, (v16i8)inp5, 4);
+    src0 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)inp0);
+    src1 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)src1);
+    src2 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)src2);
+    src3 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)src3);
+    src4 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)inp3);
+    src5 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)src5);
+    src6 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)src6);
+    src7 = (v16u8)__msa_vshf_b(mask, (v16i8)zero, (v16i8)src7);
+    vec0 = (v8u16)__msa_ilvr_b((v16i8)src4, (v16i8)src0);
+    vec1 = (v8u16)__msa_ilvl_b((v16i8)src4, (v16i8)src0);
+    vec2 = (v8u16)__msa_ilvr_b((v16i8)src5, (v16i8)src1);
+    vec3 = (v8u16)__msa_ilvl_b((v16i8)src5, (v16i8)src1);
+    vec4 = (v8u16)__msa_ilvr_b((v16i8)src6, (v16i8)src2);
+    vec5 = (v8u16)__msa_ilvl_b((v16i8)src6, (v16i8)src2);
+    vec6 = (v8u16)__msa_ilvr_b((v16i8)src7, (v16i8)src3);
+    vec7 = (v8u16)__msa_ilvl_b((v16i8)src7, (v16i8)src3);
+    vec0 = (v8u16)__msa_hadd_u_h((v16u8)vec0, (v16u8)vec0);
+    vec1 = (v8u16)__msa_hadd_u_h((v16u8)vec1, (v16u8)vec1);
+    vec2 = (v8u16)__msa_hadd_u_h((v16u8)vec2, (v16u8)vec2);
+    vec3 = (v8u16)__msa_hadd_u_h((v16u8)vec3, (v16u8)vec3);
+    vec4 = (v8u16)__msa_hadd_u_h((v16u8)vec4, (v16u8)vec4);
+    vec5 = (v8u16)__msa_hadd_u_h((v16u8)vec5, (v16u8)vec5);
+    vec6 = (v8u16)__msa_hadd_u_h((v16u8)vec6, (v16u8)vec6);
+    vec7 = (v8u16)__msa_hadd_u_h((v16u8)vec7, (v16u8)vec7);
+    reg0 = (v8i16)__msa_pckev_d((v2i64)vec1, (v2i64)vec0);
+    reg1 = (v8i16)__msa_pckev_d((v2i64)vec3, (v2i64)vec2);
+    reg2 = (v8i16)__msa_pckev_d((v2i64)vec5, (v2i64)vec4);
+    reg3 = (v8i16)__msa_pckev_d((v2i64)vec7, (v2i64)vec6);
+    reg0 += (v8i16)__msa_pckod_d((v2i64)vec1, (v2i64)vec0);
+    reg1 += (v8i16)__msa_pckod_d((v2i64)vec3, (v2i64)vec2);
+    reg2 += (v8i16)__msa_pckod_d((v2i64)vec5, (v2i64)vec4);
+    reg3 += (v8i16)__msa_pckod_d((v2i64)vec7, (v2i64)vec6);
+    reg0 = __msa_srai_h(reg0, 2);
+    reg1 = __msa_srai_h(reg1, 2);
+    reg2 = __msa_srai_h(reg2, 2);
+    reg3 = __msa_srai_h(reg3, 2);
+    vec4 = (v8u16)__msa_pckev_h((v8i16)reg1, (v8i16)reg0);
+    vec5 = (v8u16)__msa_pckev_h((v8i16)reg3, (v8i16)reg2);
+    vec6 = (v8u16)__msa_pckod_h((v8i16)reg1, (v8i16)reg0);
+    vec7 = (v8u16)__msa_pckod_h((v8i16)reg3, (v8i16)reg2);
+    vec0 = (v8u16)__msa_pckod_h((v8i16)vec5, (v8i16)vec4);
+    vec1 = (v8u16)__msa_pckev_h((v8i16)vec7, (v8i16)vec6);
+    vec2 = (v8u16)__msa_pckev_h((v8i16)vec5, (v8i16)vec4);
+    vec3 = vec0 * const_0x70;
+    vec4 = vec1 * const_0x4A;
+    vec5 = vec2 * const_0x26;
+    vec2 *= const_0x70;
+    vec1 *= const_0x5E;
+    vec0 *= const_0x12;
+    reg0 = __msa_subv_h((v8i16)vec3, (v8i16)vec4);
+    reg1 = __msa_subv_h((v8i16)const_0x8080, (v8i16)vec5);
+    reg2 = __msa_subv_h((v8i16)vec2, (v8i16)vec1);
+    reg3 = __msa_subv_h((v8i16)const_0x8080, (v8i16)vec0);
+    reg0 += reg1;
+    reg2 += reg3;
+    reg0 = __msa_srai_h(reg0, 8);
+    reg2 = __msa_srai_h(reg2, 8);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)reg2, (v16i8)reg0);
+    res0 = __msa_copy_u_d((v2i64)dst0, 0);
+    res1 = __msa_copy_u_d((v2i64)dst0, 1);
+    SD(res0, dst_u);
+    SD(res1, dst_v);
+    t += 48;
+    s += 48;
+    dst_u += 8;
+    dst_v += 8;
+  }
+}
+
+void NV12ToARGBRow_MSA(const uint8_t* src_y,
+                       const uint8_t* src_uv,
+                       uint8_t* dst_argb,
+                       const struct YuvConstants* yuvconstants,
+                       int width) {
+  int x;
+  uint64_t val0, val1;
+  v16u8 src0, src1, res0, res1, dst0, dst1;
+  v8i16 vec0, vec1, vec2;
+  v4i32 vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg, vec_br, vec_yg;
+  v4i32 vec_ubvr, vec_ugvg;
+  v16u8 zero = {0};
+  v16u8 alpha = (v16u8)__msa_ldi_b(ALPHA_VAL);
+
+  YUVTORGB_SETUP(yuvconstants, vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg,
+                 vec_br, vec_yg);
+  vec_ubvr = __msa_ilvr_w(vec_vr, vec_ub);
+  vec_ugvg = (v4i32)__msa_ilvev_h((v8i16)vec_vg, (v8i16)vec_ug);
+
+  for (x = 0; x < width; x += 8) {
+    val0 = LD(src_y);
+    val1 = LD(src_uv);
+    src0 = (v16u8)__msa_insert_d((v2i64)zero, 0, val0);
+    src1 = (v16u8)__msa_insert_d((v2i64)zero, 0, val1);
+    YUVTORGB(src0, src1, vec_ubvr, vec_ugvg, vec_bb, vec_bg, vec_br, vec_yg,
+             vec0, vec1, vec2);
+    res0 = (v16u8)__msa_ilvev_b((v16i8)vec2, (v16i8)vec0);
+    res1 = (v16u8)__msa_ilvev_b((v16i8)alpha, (v16i8)vec1);
+    dst0 = (v16u8)__msa_ilvr_b((v16i8)res1, (v16i8)res0);
+    dst1 = (v16u8)__msa_ilvl_b((v16i8)res1, (v16i8)res0);
+    ST_UB2(dst0, dst1, dst_argb, 16);
+    src_y += 8;
+    src_uv += 8;
+    dst_argb += 32;
+  }
+}
+
+void NV12ToRGB565Row_MSA(const uint8_t* src_y,
+                         const uint8_t* src_uv,
+                         uint8_t* dst_rgb565,
+                         const struct YuvConstants* yuvconstants,
+                         int width) {
+  int x;
+  uint64_t val0, val1;
+  v16u8 src0, src1, dst0;
+  v8i16 vec0, vec1, vec2;
+  v4i32 vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg, vec_br, vec_yg;
+  v4i32 vec_ubvr, vec_ugvg;
+  v16u8 zero = {0};
+
+  YUVTORGB_SETUP(yuvconstants, vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg,
+                 vec_br, vec_yg);
+  vec_ubvr = __msa_ilvr_w(vec_vr, vec_ub);
+  vec_ugvg = (v4i32)__msa_ilvev_h((v8i16)vec_vg, (v8i16)vec_ug);
+
+  for (x = 0; x < width; x += 8) {
+    val0 = LD(src_y);
+    val1 = LD(src_uv);
+    src0 = (v16u8)__msa_insert_d((v2i64)zero, 0, val0);
+    src1 = (v16u8)__msa_insert_d((v2i64)zero, 0, val1);
+    YUVTORGB(src0, src1, vec_ubvr, vec_ugvg, vec_bb, vec_bg, vec_br, vec_yg,
+             vec0, vec1, vec2);
+    vec0 = vec0 >> 3;
+    vec1 = (vec1 >> 2) << 5;
+    vec2 = (vec2 >> 3) << 11;
+    dst0 = (v16u8)(vec0 | vec1 | vec2);
+    ST_UB(dst0, dst_rgb565);
+    src_y += 8;
+    src_uv += 8;
+    dst_rgb565 += 16;
+  }
+}
+
+void NV21ToARGBRow_MSA(const uint8_t* src_y,
+                       const uint8_t* src_vu,
+                       uint8_t* dst_argb,
+                       const struct YuvConstants* yuvconstants,
+                       int width) {
+  int x;
+  uint64_t val0, val1;
+  v16u8 src0, src1, res0, res1, dst0, dst1;
+  v8i16 vec0, vec1, vec2;
+  v4i32 vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg, vec_br, vec_yg;
+  v4i32 vec_ubvr, vec_ugvg;
+  v16u8 alpha = (v16u8)__msa_ldi_b(ALPHA_VAL);
+  v16u8 zero = {0};
+  v16i8 shuffler = {1, 0, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14};
+
+  YUVTORGB_SETUP(yuvconstants, vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg,
+                 vec_br, vec_yg);
+  vec_ubvr = __msa_ilvr_w(vec_vr, vec_ub);
+  vec_ugvg = (v4i32)__msa_ilvev_h((v8i16)vec_vg, (v8i16)vec_ug);
+
+  for (x = 0; x < width; x += 8) {
+    val0 = LD(src_y);
+    val1 = LD(src_vu);
+    src0 = (v16u8)__msa_insert_d((v2i64)zero, 0, val0);
+    src1 = (v16u8)__msa_insert_d((v2i64)zero, 0, val1);
+    src1 = (v16u8)__msa_vshf_b(shuffler, (v16i8)src1, (v16i8)src1);
+    YUVTORGB(src0, src1, vec_ubvr, vec_ugvg, vec_bb, vec_bg, vec_br, vec_yg,
+             vec0, vec1, vec2);
+    res0 = (v16u8)__msa_ilvev_b((v16i8)vec2, (v16i8)vec0);
+    res1 = (v16u8)__msa_ilvev_b((v16i8)alpha, (v16i8)vec1);
+    dst0 = (v16u8)__msa_ilvr_b((v16i8)res1, (v16i8)res0);
+    dst1 = (v16u8)__msa_ilvl_b((v16i8)res1, (v16i8)res0);
+    ST_UB2(dst0, dst1, dst_argb, 16);
+    src_y += 8;
+    src_vu += 8;
+    dst_argb += 32;
+  }
+}
+
+void SobelRow_MSA(const uint8_t* src_sobelx,
+                  const uint8_t* src_sobely,
+                  uint8_t* dst_argb,
+                  int width) {
+  int x;
+  v16u8 src0, src1, vec0, dst0, dst1, dst2, dst3;
+  v16i8 mask0 = {0, 0, 0, 16, 1, 1, 1, 16, 2, 2, 2, 16, 3, 3, 3, 16};
+  v16i8 const_0x4 = __msa_ldi_b(0x4);
+  v16i8 mask1 = mask0 + const_0x4;
+  v16i8 mask2 = mask1 + const_0x4;
+  v16i8 mask3 = mask2 + const_0x4;
+  v16u8 alpha = (v16u8)__msa_ldi_b(ALPHA_VAL);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_sobelx, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_sobely, 0);
+    vec0 = __msa_adds_u_b(src0, src1);
+    dst0 = (v16u8)__msa_vshf_b(mask0, (v16i8)alpha, (v16i8)vec0);
+    dst1 = (v16u8)__msa_vshf_b(mask1, (v16i8)alpha, (v16i8)vec0);
+    dst2 = (v16u8)__msa_vshf_b(mask2, (v16i8)alpha, (v16i8)vec0);
+    dst3 = (v16u8)__msa_vshf_b(mask3, (v16i8)alpha, (v16i8)vec0);
+    ST_UB4(dst0, dst1, dst2, dst3, dst_argb, 16);
+    src_sobelx += 16;
+    src_sobely += 16;
+    dst_argb += 64;
+  }
+}
+
+void SobelToPlaneRow_MSA(const uint8_t* src_sobelx,
+                         const uint8_t* src_sobely,
+                         uint8_t* dst_y,
+                         int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0, dst1;
+
+  for (x = 0; x < width; x += 32) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_sobelx, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_sobelx, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_sobely, 0);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)src_sobely, 16);
+    dst0 = __msa_adds_u_b(src0, src2);
+    dst1 = __msa_adds_u_b(src1, src3);
+    ST_UB2(dst0, dst1, dst_y, 16);
+    src_sobelx += 32;
+    src_sobely += 32;
+    dst_y += 32;
+  }
+}
+
+void SobelXYRow_MSA(const uint8_t* src_sobelx,
+                    const uint8_t* src_sobely,
+                    uint8_t* dst_argb,
+                    int width) {
+  int x;
+  v16u8 src0, src1, vec0, vec1, vec2;
+  v16u8 reg0, reg1, dst0, dst1, dst2, dst3;
+  v16u8 alpha = (v16u8)__msa_ldi_b(ALPHA_VAL);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_sobelx, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_sobely, 0);
+    vec0 = __msa_adds_u_b(src0, src1);
+    vec1 = (v16u8)__msa_ilvr_b((v16i8)src0, (v16i8)src1);
+    vec2 = (v16u8)__msa_ilvl_b((v16i8)src0, (v16i8)src1);
+    reg0 = (v16u8)__msa_ilvr_b((v16i8)alpha, (v16i8)vec0);
+    reg1 = (v16u8)__msa_ilvl_b((v16i8)alpha, (v16i8)vec0);
+    dst0 = (v16u8)__msa_ilvr_b((v16i8)reg0, (v16i8)vec1);
+    dst1 = (v16u8)__msa_ilvl_b((v16i8)reg0, (v16i8)vec1);
+    dst2 = (v16u8)__msa_ilvr_b((v16i8)reg1, (v16i8)vec2);
+    dst3 = (v16u8)__msa_ilvl_b((v16i8)reg1, (v16i8)vec2);
+    ST_UB4(dst0, dst1, dst2, dst3, dst_argb, 16);
+    src_sobelx += 16;
+    src_sobely += 16;
+    dst_argb += 64;
+  }
+}
+
+void ARGBToYJRow_MSA(const uint8_t* src_argb0, uint8_t* dst_y, int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0;
+  v16u8 const_0x4B0F = (v16u8)__msa_fill_h(0x4B0F);
+  v16u8 const_0x26 = (v16u8)__msa_fill_h(0x26);
+  v8u16 const_0x40 = (v8u16)__msa_fill_h(0x40);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 32);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 48);
+    ARGBTOY(src0, src1, src2, src3, const_0x4B0F, const_0x26, const_0x40, 7,
+            dst0);
+    ST_UB(dst0, dst_y);
+    src_argb0 += 64;
+    dst_y += 16;
+  }
+}
+
+void BGRAToYRow_MSA(const uint8_t* src_argb0, uint8_t* dst_y, int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0;
+  v16u8 const_0x4200 = (v16u8)__msa_fill_h(0x4200);
+  v16u8 const_0x1981 = (v16u8)__msa_fill_h(0x1981);
+  v8u16 const_0x1080 = (v8u16)__msa_fill_h(0x1080);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 32);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 48);
+    ARGBTOY(src0, src1, src2, src3, const_0x4200, const_0x1981, const_0x1080, 8,
+            dst0);
+    ST_UB(dst0, dst_y);
+    src_argb0 += 64;
+    dst_y += 16;
+  }
+}
+
+void ABGRToYRow_MSA(const uint8_t* src_argb0, uint8_t* dst_y, int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0;
+  v16u8 const_0x8142 = (v16u8)__msa_fill_h(0x8142);
+  v16u8 const_0x19 = (v16u8)__msa_fill_h(0x19);
+  v8u16 const_0x1080 = (v8u16)__msa_fill_h(0x1080);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 32);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 48);
+    ARGBTOY(src0, src1, src2, src3, const_0x8142, const_0x19, const_0x1080, 8,
+            dst0);
+    ST_UB(dst0, dst_y);
+    src_argb0 += 64;
+    dst_y += 16;
+  }
+}
+
+void RGBAToYRow_MSA(const uint8_t* src_argb0, uint8_t* dst_y, int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0;
+  v16u8 const_0x1900 = (v16u8)__msa_fill_h(0x1900);
+  v16u8 const_0x4281 = (v16u8)__msa_fill_h(0x4281);
+  v8u16 const_0x1080 = (v8u16)__msa_fill_h(0x1080);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 32);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 48);
+    ARGBTOY(src0, src1, src2, src3, const_0x1900, const_0x4281, const_0x1080, 8,
+            dst0);
+    ST_UB(dst0, dst_y);
+    src_argb0 += 64;
+    dst_y += 16;
+  }
+}
+
+void ARGBToUVJRow_MSA(const uint8_t* src_rgb0,
+                      int src_stride_rgb,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  int x;
+  const uint8_t* s = src_rgb0;
+  const uint8_t* t = src_rgb0 + src_stride_rgb;
+  v16u8 src0, src1, src2, src3, src4, src5, src6, src7;
+  v16u8 vec0, vec1, vec2, vec3;
+  v16u8 dst0, dst1;
+  v16i8 shuffler0 = {0, 1, 4, 5, 8, 9, 12, 13, 16, 17, 20, 21, 24, 25, 28, 29};
+  v16i8 shuffler1 = {2,  3,  6,  7,  10, 11, 14, 15,
+                     18, 19, 22, 23, 26, 27, 30, 31};
+  v16i8 shuffler2 = {0, 3, 4, 7, 8, 11, 12, 15, 16, 19, 20, 23, 24, 27, 28, 31};
+  v16i8 shuffler3 = {1, 2, 5, 6, 9, 10, 13, 14, 17, 18, 21, 22, 25, 26, 29, 30};
+  v16u8 const_0x7F = (v16u8)__msa_fill_h(0x7F);
+  v16u8 const_0x6B14 = (v16u8)__msa_fill_h(0x6B14);
+  v16u8 const_0x2B54 = (v16u8)__msa_fill_h(0x2B54);
+  v8u16 const_0x8080 = (v8u16)__msa_fill_h(0x8080);
+
+  for (x = 0; x < width; x += 32) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)s, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)s, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)s, 32);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)s, 48);
+    src4 = (v16u8)__msa_ld_b((const v16i8*)t, 0);
+    src5 = (v16u8)__msa_ld_b((const v16i8*)t, 16);
+    src6 = (v16u8)__msa_ld_b((const v16i8*)t, 32);
+    src7 = (v16u8)__msa_ld_b((const v16i8*)t, 48);
+    src0 = __msa_aver_u_b(src0, src4);
+    src1 = __msa_aver_u_b(src1, src5);
+    src2 = __msa_aver_u_b(src2, src6);
+    src3 = __msa_aver_u_b(src3, src7);
+    src4 = (v16u8)__msa_pckev_w((v4i32)src1, (v4i32)src0);
+    src5 = (v16u8)__msa_pckev_w((v4i32)src3, (v4i32)src2);
+    src6 = (v16u8)__msa_pckod_w((v4i32)src1, (v4i32)src0);
+    src7 = (v16u8)__msa_pckod_w((v4i32)src3, (v4i32)src2);
+    vec0 = __msa_aver_u_b(src4, src6);
+    vec1 = __msa_aver_u_b(src5, src7);
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 64);
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 80);
+    src2 = (v16u8)__msa_ld_b((v16i8*)s, 96);
+    src3 = (v16u8)__msa_ld_b((v16i8*)s, 112);
+    src4 = (v16u8)__msa_ld_b((v16i8*)t, 64);
+    src5 = (v16u8)__msa_ld_b((v16i8*)t, 80);
+    src6 = (v16u8)__msa_ld_b((v16i8*)t, 96);
+    src7 = (v16u8)__msa_ld_b((v16i8*)t, 112);
+    src0 = __msa_aver_u_b(src0, src4);
+    src1 = __msa_aver_u_b(src1, src5);
+    src2 = __msa_aver_u_b(src2, src6);
+    src3 = __msa_aver_u_b(src3, src7);
+    src4 = (v16u8)__msa_pckev_w((v4i32)src1, (v4i32)src0);
+    src5 = (v16u8)__msa_pckev_w((v4i32)src3, (v4i32)src2);
+    src6 = (v16u8)__msa_pckod_w((v4i32)src1, (v4i32)src0);
+    src7 = (v16u8)__msa_pckod_w((v4i32)src3, (v4i32)src2);
+    vec2 = __msa_aver_u_b(src4, src6);
+    vec3 = __msa_aver_u_b(src5, src7);
+    ARGBTOUV(vec0, vec1, vec2, vec3, const_0x6B14, const_0x7F, const_0x2B54,
+             const_0x8080, shuffler1, shuffler0, shuffler2, shuffler3, dst0,
+             dst1);
+    ST_UB(dst0, dst_v);
+    ST_UB(dst1, dst_u);
+    s += 128;
+    t += 128;
+    dst_v += 16;
+    dst_u += 16;
+  }
+}
+
+void BGRAToUVRow_MSA(const uint8_t* src_rgb0,
+                     int src_stride_rgb,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width) {
+  int x;
+  const uint8_t* s = src_rgb0;
+  const uint8_t* t = src_rgb0 + src_stride_rgb;
+  v16u8 dst0, dst1, vec0, vec1, vec2, vec3;
+  v16i8 shuffler0 = {0, 1, 4, 5, 8, 9, 12, 13, 16, 17, 20, 21, 24, 25, 28, 29};
+  v16i8 shuffler1 = {2,  3,  6,  7,  10, 11, 14, 15,
+                     18, 19, 22, 23, 26, 27, 30, 31};
+  v16i8 shuffler2 = {0, 3, 4, 7, 8, 11, 12, 15, 16, 19, 20, 23, 24, 27, 28, 31};
+  v16i8 shuffler3 = {2, 1, 6, 5, 10, 9, 14, 13, 18, 17, 22, 21, 26, 25, 30, 29};
+  v16u8 const_0x125E = (v16u8)__msa_fill_h(0x125E);
+  v16u8 const_0x7000 = (v16u8)__msa_fill_h(0x7000);
+  v16u8 const_0x264A = (v16u8)__msa_fill_h(0x264A);
+  v8u16 const_0x8080 = (v8u16)__msa_fill_h(0x8080);
+
+  for (x = 0; x < width; x += 32) {
+    READ_ARGB(s, t, vec0, vec1, vec2, vec3);
+    ARGBTOUV(vec0, vec1, vec2, vec3, const_0x125E, const_0x7000, const_0x264A,
+             const_0x8080, shuffler0, shuffler1, shuffler2, shuffler3, dst0,
+             dst1);
+    ST_UB(dst0, dst_v);
+    ST_UB(dst1, dst_u);
+    s += 128;
+    t += 128;
+    dst_v += 16;
+    dst_u += 16;
+  }
+}
+
+void ABGRToUVRow_MSA(const uint8_t* src_rgb0,
+                     int src_stride_rgb,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width) {
+  int x;
+  const uint8_t* s = src_rgb0;
+  const uint8_t* t = src_rgb0 + src_stride_rgb;
+  v16u8 src0, src1, src2, src3;
+  v16u8 dst0, dst1;
+  v16i8 shuffler0 = {0, 1, 4, 5, 8, 9, 12, 13, 16, 17, 20, 21, 24, 25, 28, 29};
+  v16i8 shuffler1 = {2,  3,  6,  7,  10, 11, 14, 15,
+                     18, 19, 22, 23, 26, 27, 30, 31};
+  v16i8 shuffler2 = {0, 3, 4, 7, 8, 11, 12, 15, 16, 19, 20, 23, 24, 27, 28, 31};
+  v16i8 shuffler3 = {1, 2, 5, 6, 9, 10, 13, 14, 17, 18, 21, 22, 25, 26, 29, 30};
+  v16u8 const_0x4A26 = (v16u8)__msa_fill_h(0x4A26);
+  v16u8 const_0x0070 = (v16u8)__msa_fill_h(0x0070);
+  v16u8 const_0x125E = (v16u8)__msa_fill_h(0x125E);
+  v8u16 const_0x8080 = (v8u16)__msa_fill_h(0x8080);
+
+  for (x = 0; x < width; x += 32) {
+    READ_ARGB(s, t, src0, src1, src2, src3);
+    ARGBTOUV(src0, src1, src2, src3, const_0x4A26, const_0x0070, const_0x125E,
+             const_0x8080, shuffler1, shuffler0, shuffler2, shuffler3, dst0,
+             dst1);
+    ST_UB(dst0, dst_u);
+    ST_UB(dst1, dst_v);
+    s += 128;
+    t += 128;
+    dst_u += 16;
+    dst_v += 16;
+  }
+}
+
+void RGBAToUVRow_MSA(const uint8_t* src_rgb0,
+                     int src_stride_rgb,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width) {
+  int x;
+  const uint8_t* s = src_rgb0;
+  const uint8_t* t = src_rgb0 + src_stride_rgb;
+  v16u8 dst0, dst1, vec0, vec1, vec2, vec3;
+  v16i8 shuffler0 = {0, 1, 4, 5, 8, 9, 12, 13, 16, 17, 20, 21, 24, 25, 28, 29};
+  v16i8 shuffler1 = {2,  3,  6,  7,  10, 11, 14, 15,
+                     18, 19, 22, 23, 26, 27, 30, 31};
+  v16i8 shuffler2 = {0, 3, 4, 7, 8, 11, 12, 15, 16, 19, 20, 23, 24, 27, 28, 31};
+  v16i8 shuffler3 = {2, 1, 6, 5, 10, 9, 14, 13, 18, 17, 22, 21, 26, 25, 30, 29};
+  v16u8 const_0x125E = (v16u8)__msa_fill_h(0x264A);
+  v16u8 const_0x7000 = (v16u8)__msa_fill_h(0x7000);
+  v16u8 const_0x264A = (v16u8)__msa_fill_h(0x125E);
+  v8u16 const_0x8080 = (v8u16)__msa_fill_h(0x8080);
+
+  for (x = 0; x < width; x += 32) {
+    READ_ARGB(s, t, vec0, vec1, vec2, vec3);
+    ARGBTOUV(vec0, vec1, vec2, vec3, const_0x125E, const_0x7000, const_0x264A,
+             const_0x8080, shuffler0, shuffler1, shuffler2, shuffler3, dst0,
+             dst1);
+    ST_UB(dst0, dst_u);
+    ST_UB(dst1, dst_v);
+    s += 128;
+    t += 128;
+    dst_u += 16;
+    dst_v += 16;
+  }
+}
+
+void I444ToARGBRow_MSA(const uint8_t* src_y,
+                       const uint8_t* src_u,
+                       const uint8_t* src_v,
+                       uint8_t* dst_argb,
+                       const struct YuvConstants* yuvconstants,
+                       int width) {
+  int x;
+  v16u8 src0, src1, src2, dst0, dst1;
+  v8u16 vec0, vec1, vec2;
+  v4i32 reg0, reg1, reg2, reg3, reg4, reg5, reg6, reg7, reg8, reg9;
+  v4i32 vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg, vec_br, vec_yg;
+  v16u8 alpha = (v16u8)__msa_ldi_b(ALPHA_VAL);
+  v8i16 zero = {0};
+
+  YUVTORGB_SETUP(yuvconstants, vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg,
+                 vec_br, vec_yg);
+
+  for (x = 0; x < width; x += 8) {
+    READI444(src_y, src_u, src_v, src0, src1, src2);
+    vec0 = (v8u16)__msa_ilvr_b((v16i8)src0, (v16i8)src0);
+    reg0 = (v4i32)__msa_ilvr_h((v8i16)zero, (v8i16)vec0);
+    reg1 = (v4i32)__msa_ilvl_h((v8i16)zero, (v8i16)vec0);
+    reg0 *= vec_yg;
+    reg1 *= vec_yg;
+    reg0 = __msa_srai_w(reg0, 16);
+    reg1 = __msa_srai_w(reg1, 16);
+    reg4 = reg0 + vec_br;
+    reg5 = reg1 + vec_br;
+    reg2 = reg0 + vec_bg;
+    reg3 = reg1 + vec_bg;
+    reg0 += vec_bb;
+    reg1 += vec_bb;
+    vec0 = (v8u16)__msa_ilvr_b((v16i8)zero, (v16i8)src1);
+    vec1 = (v8u16)__msa_ilvr_b((v16i8)zero, (v16i8)src2);
+    reg6 = (v4i32)__msa_ilvr_h((v8i16)zero, (v8i16)vec0);
+    reg7 = (v4i32)__msa_ilvl_h((v8i16)zero, (v8i16)vec0);
+    reg8 = (v4i32)__msa_ilvr_h((v8i16)zero, (v8i16)vec1);
+    reg9 = (v4i32)__msa_ilvl_h((v8i16)zero, (v8i16)vec1);
+    reg0 -= reg6 * vec_ub;
+    reg1 -= reg7 * vec_ub;
+    reg2 -= reg6 * vec_ug;
+    reg3 -= reg7 * vec_ug;
+    reg4 -= reg8 * vec_vr;
+    reg5 -= reg9 * vec_vr;
+    reg2 -= reg8 * vec_vg;
+    reg3 -= reg9 * vec_vg;
+    reg0 = __msa_srai_w(reg0, 6);
+    reg1 = __msa_srai_w(reg1, 6);
+    reg2 = __msa_srai_w(reg2, 6);
+    reg3 = __msa_srai_w(reg3, 6);
+    reg4 = __msa_srai_w(reg4, 6);
+    reg5 = __msa_srai_w(reg5, 6);
+    CLIP_0TO255(reg0, reg1, reg2, reg3, reg4, reg5);
+    vec0 = (v8u16)__msa_pckev_h((v8i16)reg1, (v8i16)reg0);
+    vec1 = (v8u16)__msa_pckev_h((v8i16)reg3, (v8i16)reg2);
+    vec2 = (v8u16)__msa_pckev_h((v8i16)reg5, (v8i16)reg4);
+    vec0 = (v8u16)__msa_ilvev_b((v16i8)vec1, (v16i8)vec0);
+    vec1 = (v8u16)__msa_ilvev_b((v16i8)alpha, (v16i8)vec2);
+    dst0 = (v16u8)__msa_ilvr_h((v8i16)vec1, (v8i16)vec0);
+    dst1 = (v16u8)__msa_ilvl_h((v8i16)vec1, (v8i16)vec0);
+    ST_UB2(dst0, dst1, dst_argb, 16);
+    src_y += 8;
+    src_u += 8;
+    src_v += 8;
+    dst_argb += 32;
+  }
+}
+
+void I400ToARGBRow_MSA(const uint8_t* src_y, uint8_t* dst_argb, int width) {
+  int x;
+  v16u8 src0, res0, res1, res2, res3, res4, dst0, dst1, dst2, dst3;
+  v8i16 vec0, vec1;
+  v4i32 reg0, reg1, reg2, reg3;
+  v4i32 vec_yg = __msa_fill_w(0x4A35);
+  v8i16 vec_ygb = __msa_fill_h(0xFB78);
+  v16u8 alpha = (v16u8)__msa_ldi_b(ALPHA_VAL);
+  v8i16 max = __msa_ldi_h(0xFF);
+  v8i16 zero = {0};
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_y, 0);
+    vec0 = (v8i16)__msa_ilvr_b((v16i8)src0, (v16i8)src0);
+    vec1 = (v8i16)__msa_ilvl_b((v16i8)src0, (v16i8)src0);
+    reg0 = (v4i32)__msa_ilvr_h(zero, vec0);
+    reg1 = (v4i32)__msa_ilvl_h(zero, vec0);
+    reg2 = (v4i32)__msa_ilvr_h(zero, vec1);
+    reg3 = (v4i32)__msa_ilvl_h(zero, vec1);
+    reg0 *= vec_yg;
+    reg1 *= vec_yg;
+    reg2 *= vec_yg;
+    reg3 *= vec_yg;
+    reg0 = __msa_srai_w(reg0, 16);
+    reg1 = __msa_srai_w(reg1, 16);
+    reg2 = __msa_srai_w(reg2, 16);
+    reg3 = __msa_srai_w(reg3, 16);
+    vec0 = (v8i16)__msa_pckev_h((v8i16)reg1, (v8i16)reg0);
+    vec1 = (v8i16)__msa_pckev_h((v8i16)reg3, (v8i16)reg2);
+    vec0 += vec_ygb;
+    vec1 += vec_ygb;
+    vec0 = __msa_srai_h(vec0, 6);
+    vec1 = __msa_srai_h(vec1, 6);
+    vec0 = __msa_maxi_s_h(vec0, 0);
+    vec1 = __msa_maxi_s_h(vec1, 0);
+    vec0 = __msa_min_s_h(max, vec0);
+    vec1 = __msa_min_s_h(max, vec1);
+    res0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    res1 = (v16u8)__msa_ilvr_b((v16i8)res0, (v16i8)res0);
+    res2 = (v16u8)__msa_ilvl_b((v16i8)res0, (v16i8)res0);
+    res3 = (v16u8)__msa_ilvr_b((v16i8)alpha, (v16i8)res0);
+    res4 = (v16u8)__msa_ilvl_b((v16i8)alpha, (v16i8)res0);
+    dst0 = (v16u8)__msa_ilvr_b((v16i8)res3, (v16i8)res1);
+    dst1 = (v16u8)__msa_ilvl_b((v16i8)res3, (v16i8)res1);
+    dst2 = (v16u8)__msa_ilvr_b((v16i8)res4, (v16i8)res2);
+    dst3 = (v16u8)__msa_ilvl_b((v16i8)res4, (v16i8)res2);
+    ST_UB4(dst0, dst1, dst2, dst3, dst_argb, 16);
+    src_y += 16;
+    dst_argb += 64;
+  }
+}
+
+void J400ToARGBRow_MSA(const uint8_t* src_y, uint8_t* dst_argb, int width) {
+  int x;
+  v16u8 src0, vec0, vec1, vec2, vec3, dst0, dst1, dst2, dst3;
+  v16u8 alpha = (v16u8)__msa_ldi_b(ALPHA_VAL);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_y, 0);
+    vec0 = (v16u8)__msa_ilvr_b((v16i8)src0, (v16i8)src0);
+    vec1 = (v16u8)__msa_ilvl_b((v16i8)src0, (v16i8)src0);
+    vec2 = (v16u8)__msa_ilvr_b((v16i8)alpha, (v16i8)src0);
+    vec3 = (v16u8)__msa_ilvl_b((v16i8)alpha, (v16i8)src0);
+    dst0 = (v16u8)__msa_ilvr_b((v16i8)vec2, (v16i8)vec0);
+    dst1 = (v16u8)__msa_ilvl_b((v16i8)vec2, (v16i8)vec0);
+    dst2 = (v16u8)__msa_ilvr_b((v16i8)vec3, (v16i8)vec1);
+    dst3 = (v16u8)__msa_ilvl_b((v16i8)vec3, (v16i8)vec1);
+    ST_UB4(dst0, dst1, dst2, dst3, dst_argb, 16);
+    src_y += 16;
+    dst_argb += 64;
+  }
+}
+
+void YUY2ToARGBRow_MSA(const uint8_t* src_yuy2,
+                       uint8_t* dst_argb,
+                       const struct YuvConstants* yuvconstants,
+                       int width) {
+  int x;
+  v16u8 src0, src1, src2;
+  v8i16 vec0, vec1, vec2;
+  v4i32 vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg, vec_br, vec_yg;
+  v4i32 vec_ubvr, vec_ugvg;
+  v16u8 alpha = (v16u8)__msa_ldi_b(ALPHA_VAL);
+
+  YUVTORGB_SETUP(yuvconstants, vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg,
+                 vec_br, vec_yg);
+  vec_ubvr = __msa_ilvr_w(vec_vr, vec_ub);
+  vec_ugvg = (v4i32)__msa_ilvev_h((v8i16)vec_vg, (v8i16)vec_ug);
+
+  for (x = 0; x < width; x += 8) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_yuy2, 0);
+    src1 = (v16u8)__msa_pckev_b((v16i8)src0, (v16i8)src0);
+    src2 = (v16u8)__msa_pckod_b((v16i8)src0, (v16i8)src0);
+    YUVTORGB(src1, src2, vec_ubvr, vec_ugvg, vec_bb, vec_bg, vec_br, vec_yg,
+             vec0, vec1, vec2);
+    STOREARGB(vec0, vec1, vec2, alpha, dst_argb);
+    src_yuy2 += 16;
+    dst_argb += 32;
+  }
+}
+
+void UYVYToARGBRow_MSA(const uint8_t* src_uyvy,
+                       uint8_t* dst_argb,
+                       const struct YuvConstants* yuvconstants,
+                       int width) {
+  int x;
+  v16u8 src0, src1, src2;
+  v8i16 vec0, vec1, vec2;
+  v4i32 vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg, vec_br, vec_yg;
+  v4i32 vec_ubvr, vec_ugvg;
+  v16u8 alpha = (v16u8)__msa_ldi_b(ALPHA_VAL);
+
+  YUVTORGB_SETUP(yuvconstants, vec_ub, vec_vr, vec_ug, vec_vg, vec_bb, vec_bg,
+                 vec_br, vec_yg);
+  vec_ubvr = __msa_ilvr_w(vec_vr, vec_ub);
+  vec_ugvg = (v4i32)__msa_ilvev_h((v8i16)vec_vg, (v8i16)vec_ug);
+
+  for (x = 0; x < width; x += 8) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_uyvy, 0);
+    src1 = (v16u8)__msa_pckod_b((v16i8)src0, (v16i8)src0);
+    src2 = (v16u8)__msa_pckev_b((v16i8)src0, (v16i8)src0);
+    YUVTORGB(src1, src2, vec_ubvr, vec_ugvg, vec_bb, vec_bg, vec_br, vec_yg,
+             vec0, vec1, vec2);
+    STOREARGB(vec0, vec1, vec2, alpha, dst_argb);
+    src_uyvy += 16;
+    dst_argb += 32;
+  }
+}
+
+void InterpolateRow_MSA(uint8_t* dst_ptr,
+                        const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        int width,
+                        int32_t source_y_fraction) {
+  int32_t y1_fraction = source_y_fraction;
+  int32_t y0_fraction = 256 - y1_fraction;
+  uint16_t y_fractions;
+  const uint8_t* s = src_ptr;
+  const uint8_t* t = src_ptr + src_stride;
+  int x;
+  v16u8 src0, src1, src2, src3, dst0, dst1;
+  v8u16 vec0, vec1, vec2, vec3, y_frac;
+
+  if (0 == y1_fraction) {
+    memcpy(dst_ptr, src_ptr, width);
+    return;
+  }
+
+  if (128 == y1_fraction) {
+    for (x = 0; x < width; x += 32) {
+      src0 = (v16u8)__msa_ld_b((const v16i8*)s, 0);
+      src1 = (v16u8)__msa_ld_b((const v16i8*)s, 16);
+      src2 = (v16u8)__msa_ld_b((const v16i8*)t, 0);
+      src3 = (v16u8)__msa_ld_b((const v16i8*)t, 16);
+      dst0 = __msa_aver_u_b(src0, src2);
+      dst1 = __msa_aver_u_b(src1, src3);
+      ST_UB2(dst0, dst1, dst_ptr, 16);
+      s += 32;
+      t += 32;
+      dst_ptr += 32;
+    }
+    return;
+  }
+
+  y_fractions = (uint16_t)(y0_fraction + (y1_fraction << 8));
+  y_frac = (v8u16)__msa_fill_h(y_fractions);
+
+  for (x = 0; x < width; x += 32) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)s, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)s, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)t, 0);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)t, 16);
+    vec0 = (v8u16)__msa_ilvr_b((v16i8)src2, (v16i8)src0);
+    vec1 = (v8u16)__msa_ilvl_b((v16i8)src2, (v16i8)src0);
+    vec2 = (v8u16)__msa_ilvr_b((v16i8)src3, (v16i8)src1);
+    vec3 = (v8u16)__msa_ilvl_b((v16i8)src3, (v16i8)src1);
+    vec0 = (v8u16)__msa_dotp_u_h((v16u8)vec0, (v16u8)y_frac);
+    vec1 = (v8u16)__msa_dotp_u_h((v16u8)vec1, (v16u8)y_frac);
+    vec2 = (v8u16)__msa_dotp_u_h((v16u8)vec2, (v16u8)y_frac);
+    vec3 = (v8u16)__msa_dotp_u_h((v16u8)vec3, (v16u8)y_frac);
+    vec0 = (v8u16)__msa_srari_h((v8i16)vec0, 8);
+    vec1 = (v8u16)__msa_srari_h((v8i16)vec1, 8);
+    vec2 = (v8u16)__msa_srari_h((v8i16)vec2, 8);
+    vec3 = (v8u16)__msa_srari_h((v8i16)vec3, 8);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    dst1 = (v16u8)__msa_pckev_b((v16i8)vec3, (v16i8)vec2);
+    ST_UB2(dst0, dst1, dst_ptr, 16);
+    s += 32;
+    t += 32;
+    dst_ptr += 32;
+  }
+}
+
+void ARGBSetRow_MSA(uint8_t* dst_argb, uint32_t v32, int width) {
+  int x;
+  v4i32 dst0 = __builtin_msa_fill_w(v32);
+
+  for (x = 0; x < width; x += 4) {
+    ST_UB(dst0, dst_argb);
+    dst_argb += 16;
+  }
+}
+
+void RAWToRGB24Row_MSA(const uint8_t* src_raw, uint8_t* dst_rgb24, int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, src4, dst0, dst1, dst2;
+  v16i8 shuffler0 = {2, 1, 0, 5, 4, 3, 8, 7, 6, 11, 10, 9, 14, 13, 12, 17};
+  v16i8 shuffler1 = {8,  7,  12, 11, 10, 15, 14, 13,
+                     18, 17, 16, 21, 20, 19, 24, 23};
+  v16i8 shuffler2 = {14, 19, 18, 17, 22, 21, 20, 25,
+                     24, 23, 28, 27, 26, 31, 30, 29};
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_raw, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_raw, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_raw, 32);
+    src3 = (v16u8)__msa_sldi_b((v16i8)src1, (v16i8)src0, 8);
+    src4 = (v16u8)__msa_sldi_b((v16i8)src2, (v16i8)src1, 8);
+    dst0 = (v16u8)__msa_vshf_b(shuffler0, (v16i8)src1, (v16i8)src0);
+    dst1 = (v16u8)__msa_vshf_b(shuffler1, (v16i8)src4, (v16i8)src3);
+    dst2 = (v16u8)__msa_vshf_b(shuffler2, (v16i8)src2, (v16i8)src1);
+    ST_UB2(dst0, dst1, dst_rgb24, 16);
+    ST_UB(dst2, (dst_rgb24 + 32));
+    src_raw += 48;
+    dst_rgb24 += 48;
+  }
+}
+
+void MergeUVRow_MSA(const uint8_t* src_u,
+                    const uint8_t* src_v,
+                    uint8_t* dst_uv,
+                    int width) {
+  int x;
+  v16u8 src0, src1, dst0, dst1;
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_u, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_v, 0);
+    dst0 = (v16u8)__msa_ilvr_b((v16i8)src1, (v16i8)src0);
+    dst1 = (v16u8)__msa_ilvl_b((v16i8)src1, (v16i8)src0);
+    ST_UB2(dst0, dst1, dst_uv, 16);
+    src_u += 16;
+    src_v += 16;
+    dst_uv += 32;
+  }
+}
+
+void ARGBExtractAlphaRow_MSA(const uint8_t* src_argb,
+                             uint8_t* dst_a,
+                             int width) {
+  int i;
+  v16u8 src0, src1, src2, src3, vec0, vec1, dst0;
+
+  for (i = 0; i < width; i += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 32);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 48);
+    vec0 = (v16u8)__msa_pckod_b((v16i8)src1, (v16i8)src0);
+    vec1 = (v16u8)__msa_pckod_b((v16i8)src3, (v16i8)src2);
+    dst0 = (v16u8)__msa_pckod_b((v16i8)vec1, (v16i8)vec0);
+    ST_UB(dst0, dst_a);
+    src_argb += 64;
+    dst_a += 16;
+  }
+}
+
+void ARGBBlendRow_MSA(const uint8_t* src_argb0,
+                      const uint8_t* src_argb1,
+                      uint8_t* dst_argb,
+                      int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0, dst1;
+  v8u16 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7;
+  v8u16 vec8, vec9, vec10, vec11, vec12, vec13;
+  v8u16 const_256 = (v8u16)__msa_ldi_h(256);
+  v16u8 const_255 = (v16u8)__msa_ldi_b(255);
+  v16u8 mask = {0, 0, 0, 255, 0, 0, 0, 255, 0, 0, 0, 255, 0, 0, 0, 255};
+  v16i8 zero = {0};
+
+  for (x = 0; x < width; x += 8) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb0, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_argb1, 0);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)src_argb1, 16);
+    vec0 = (v8u16)__msa_ilvr_b(zero, (v16i8)src0);
+    vec1 = (v8u16)__msa_ilvl_b(zero, (v16i8)src0);
+    vec2 = (v8u16)__msa_ilvr_b(zero, (v16i8)src1);
+    vec3 = (v8u16)__msa_ilvl_b(zero, (v16i8)src1);
+    vec4 = (v8u16)__msa_ilvr_b(zero, (v16i8)src2);
+    vec5 = (v8u16)__msa_ilvl_b(zero, (v16i8)src2);
+    vec6 = (v8u16)__msa_ilvr_b(zero, (v16i8)src3);
+    vec7 = (v8u16)__msa_ilvl_b(zero, (v16i8)src3);
+    vec8 = (v8u16)__msa_fill_h(vec0[3]);
+    vec9 = (v8u16)__msa_fill_h(vec0[7]);
+    vec10 = (v8u16)__msa_fill_h(vec1[3]);
+    vec11 = (v8u16)__msa_fill_h(vec1[7]);
+    vec8 = (v8u16)__msa_pckev_d((v2i64)vec9, (v2i64)vec8);
+    vec9 = (v8u16)__msa_pckev_d((v2i64)vec11, (v2i64)vec10);
+    vec10 = (v8u16)__msa_fill_h(vec2[3]);
+    vec11 = (v8u16)__msa_fill_h(vec2[7]);
+    vec12 = (v8u16)__msa_fill_h(vec3[3]);
+    vec13 = (v8u16)__msa_fill_h(vec3[7]);
+    vec10 = (v8u16)__msa_pckev_d((v2i64)vec11, (v2i64)vec10);
+    vec11 = (v8u16)__msa_pckev_d((v2i64)vec13, (v2i64)vec12);
+    vec8 = const_256 - vec8;
+    vec9 = const_256 - vec9;
+    vec10 = const_256 - vec10;
+    vec11 = const_256 - vec11;
+    vec8 *= vec4;
+    vec9 *= vec5;
+    vec10 *= vec6;
+    vec11 *= vec7;
+    vec8 = (v8u16)__msa_srai_h((v8i16)vec8, 8);
+    vec9 = (v8u16)__msa_srai_h((v8i16)vec9, 8);
+    vec10 = (v8u16)__msa_srai_h((v8i16)vec10, 8);
+    vec11 = (v8u16)__msa_srai_h((v8i16)vec11, 8);
+    vec0 += vec8;
+    vec1 += vec9;
+    vec2 += vec10;
+    vec3 += vec11;
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    dst1 = (v16u8)__msa_pckev_b((v16i8)vec3, (v16i8)vec2);
+    dst0 = __msa_bmnz_v(dst0, const_255, mask);
+    dst1 = __msa_bmnz_v(dst1, const_255, mask);
+    ST_UB2(dst0, dst1, dst_argb, 16);
+    src_argb0 += 32;
+    src_argb1 += 32;
+    dst_argb += 32;
+  }
+}
+
+void ARGBQuantizeRow_MSA(uint8_t* dst_argb,
+                         int scale,
+                         int interval_size,
+                         int interval_offset,
+                         int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0, dst1, dst2, dst3;
+  v8i16 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7;
+  v4i32 tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7;
+  v4i32 tmp8, tmp9, tmp10, tmp11, tmp12, tmp13, tmp14, tmp15;
+  v4i32 vec_scale = __msa_fill_w(scale);
+  v16u8 vec_int_sz = (v16u8)__msa_fill_b(interval_size);
+  v16u8 vec_int_ofst = (v16u8)__msa_fill_b(interval_offset);
+  v16i8 mask = {0, 1, 2, 19, 4, 5, 6, 23, 8, 9, 10, 27, 12, 13, 14, 31};
+  v16i8 zero = {0};
+
+  for (x = 0; x < width; x += 8) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)dst_argb, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)dst_argb, 16);
+    src2 = (v16u8)__msa_ld_b((v16i8*)dst_argb, 32);
+    src3 = (v16u8)__msa_ld_b((v16i8*)dst_argb, 48);
+    vec0 = (v8i16)__msa_ilvr_b(zero, (v16i8)src0);
+    vec1 = (v8i16)__msa_ilvl_b(zero, (v16i8)src0);
+    vec2 = (v8i16)__msa_ilvr_b(zero, (v16i8)src1);
+    vec3 = (v8i16)__msa_ilvl_b(zero, (v16i8)src1);
+    vec4 = (v8i16)__msa_ilvr_b(zero, (v16i8)src2);
+    vec5 = (v8i16)__msa_ilvl_b(zero, (v16i8)src2);
+    vec6 = (v8i16)__msa_ilvr_b(zero, (v16i8)src3);
+    vec7 = (v8i16)__msa_ilvl_b(zero, (v16i8)src3);
+    tmp0 = (v4i32)__msa_ilvr_h((v8i16)zero, (v8i16)vec0);
+    tmp1 = (v4i32)__msa_ilvl_h((v8i16)zero, (v8i16)vec0);
+    tmp2 = (v4i32)__msa_ilvr_h((v8i16)zero, (v8i16)vec1);
+    tmp3 = (v4i32)__msa_ilvl_h((v8i16)zero, (v8i16)vec1);
+    tmp4 = (v4i32)__msa_ilvr_h((v8i16)zero, (v8i16)vec2);
+    tmp5 = (v4i32)__msa_ilvl_h((v8i16)zero, (v8i16)vec2);
+    tmp6 = (v4i32)__msa_ilvr_h((v8i16)zero, (v8i16)vec3);
+    tmp7 = (v4i32)__msa_ilvl_h((v8i16)zero, (v8i16)vec3);
+    tmp8 = (v4i32)__msa_ilvr_h((v8i16)zero, (v8i16)vec4);
+    tmp9 = (v4i32)__msa_ilvl_h((v8i16)zero, (v8i16)vec4);
+    tmp10 = (v4i32)__msa_ilvr_h((v8i16)zero, (v8i16)vec5);
+    tmp11 = (v4i32)__msa_ilvl_h((v8i16)zero, (v8i16)vec5);
+    tmp12 = (v4i32)__msa_ilvr_h((v8i16)zero, (v8i16)vec6);
+    tmp13 = (v4i32)__msa_ilvl_h((v8i16)zero, (v8i16)vec6);
+    tmp14 = (v4i32)__msa_ilvr_h((v8i16)zero, (v8i16)vec7);
+    tmp15 = (v4i32)__msa_ilvl_h((v8i16)zero, (v8i16)vec7);
+    tmp0 *= vec_scale;
+    tmp1 *= vec_scale;
+    tmp2 *= vec_scale;
+    tmp3 *= vec_scale;
+    tmp4 *= vec_scale;
+    tmp5 *= vec_scale;
+    tmp6 *= vec_scale;
+    tmp7 *= vec_scale;
+    tmp8 *= vec_scale;
+    tmp9 *= vec_scale;
+    tmp10 *= vec_scale;
+    tmp11 *= vec_scale;
+    tmp12 *= vec_scale;
+    tmp13 *= vec_scale;
+    tmp14 *= vec_scale;
+    tmp15 *= vec_scale;
+    tmp0 >>= 16;
+    tmp1 >>= 16;
+    tmp2 >>= 16;
+    tmp3 >>= 16;
+    tmp4 >>= 16;
+    tmp5 >>= 16;
+    tmp6 >>= 16;
+    tmp7 >>= 16;
+    tmp8 >>= 16;
+    tmp9 >>= 16;
+    tmp10 >>= 16;
+    tmp11 >>= 16;
+    tmp12 >>= 16;
+    tmp13 >>= 16;
+    tmp14 >>= 16;
+    tmp15 >>= 16;
+    vec0 = (v8i16)__msa_pckev_h((v8i16)tmp1, (v8i16)tmp0);
+    vec1 = (v8i16)__msa_pckev_h((v8i16)tmp3, (v8i16)tmp2);
+    vec2 = (v8i16)__msa_pckev_h((v8i16)tmp5, (v8i16)tmp4);
+    vec3 = (v8i16)__msa_pckev_h((v8i16)tmp7, (v8i16)tmp6);
+    vec4 = (v8i16)__msa_pckev_h((v8i16)tmp9, (v8i16)tmp8);
+    vec5 = (v8i16)__msa_pckev_h((v8i16)tmp11, (v8i16)tmp10);
+    vec6 = (v8i16)__msa_pckev_h((v8i16)tmp13, (v8i16)tmp12);
+    vec7 = (v8i16)__msa_pckev_h((v8i16)tmp15, (v8i16)tmp14);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    dst1 = (v16u8)__msa_pckev_b((v16i8)vec3, (v16i8)vec2);
+    dst2 = (v16u8)__msa_pckev_b((v16i8)vec5, (v16i8)vec4);
+    dst3 = (v16u8)__msa_pckev_b((v16i8)vec7, (v16i8)vec6);
+    dst0 *= vec_int_sz;
+    dst1 *= vec_int_sz;
+    dst2 *= vec_int_sz;
+    dst3 *= vec_int_sz;
+    dst0 += vec_int_ofst;
+    dst1 += vec_int_ofst;
+    dst2 += vec_int_ofst;
+    dst3 += vec_int_ofst;
+    dst0 = (v16u8)__msa_vshf_b(mask, (v16i8)src0, (v16i8)dst0);
+    dst1 = (v16u8)__msa_vshf_b(mask, (v16i8)src1, (v16i8)dst1);
+    dst2 = (v16u8)__msa_vshf_b(mask, (v16i8)src2, (v16i8)dst2);
+    dst3 = (v16u8)__msa_vshf_b(mask, (v16i8)src3, (v16i8)dst3);
+    ST_UB4(dst0, dst1, dst2, dst3, dst_argb, 16);
+    dst_argb += 64;
+  }
+}
+
+void ARGBColorMatrixRow_MSA(const uint8_t* src_argb,
+                            uint8_t* dst_argb,
+                            const int8_t* matrix_argb,
+                            int width) {
+  int32_t x;
+  v16i8 src0;
+  v16u8 src1, src2, dst0, dst1;
+  v8i16 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7, vec8, vec9;
+  v8i16 vec10, vec11, vec12, vec13, vec14, vec15, vec16, vec17;
+  v4i32 tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7;
+  v4i32 tmp8, tmp9, tmp10, tmp11, tmp12, tmp13, tmp14, tmp15;
+  v16i8 zero = {0};
+  v8i16 max = __msa_ldi_h(255);
+
+  src0 = __msa_ld_b((v16i8*)matrix_argb, 0);
+  vec0 = (v8i16)__msa_ilvr_b(zero, src0);
+  vec1 = (v8i16)__msa_ilvl_b(zero, src0);
+
+  for (x = 0; x < width; x += 8) {
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 0);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_argb, 16);
+    vec2 = (v8i16)__msa_ilvr_b(zero, (v16i8)src1);
+    vec3 = (v8i16)__msa_ilvl_b(zero, (v16i8)src1);
+    vec4 = (v8i16)__msa_ilvr_b(zero, (v16i8)src2);
+    vec5 = (v8i16)__msa_ilvl_b(zero, (v16i8)src2);
+    vec6 = (v8i16)__msa_pckod_d((v2i64)vec2, (v2i64)vec2);
+    vec7 = (v8i16)__msa_pckod_d((v2i64)vec3, (v2i64)vec3);
+    vec8 = (v8i16)__msa_pckod_d((v2i64)vec4, (v2i64)vec4);
+    vec9 = (v8i16)__msa_pckod_d((v2i64)vec5, (v2i64)vec5);
+    vec2 = (v8i16)__msa_pckev_d((v2i64)vec2, (v2i64)vec2);
+    vec3 = (v8i16)__msa_pckev_d((v2i64)vec3, (v2i64)vec3);
+    vec4 = (v8i16)__msa_pckev_d((v2i64)vec4, (v2i64)vec4);
+    vec5 = (v8i16)__msa_pckev_d((v2i64)vec5, (v2i64)vec5);
+    vec10 = vec2 * vec0;
+    vec11 = vec2 * vec1;
+    vec12 = vec6 * vec0;
+    vec13 = vec6 * vec1;
+    tmp0 = __msa_hadd_s_w(vec10, vec10);
+    tmp1 = __msa_hadd_s_w(vec11, vec11);
+    tmp2 = __msa_hadd_s_w(vec12, vec12);
+    tmp3 = __msa_hadd_s_w(vec13, vec13);
+    vec14 = vec3 * vec0;
+    vec15 = vec3 * vec1;
+    vec16 = vec7 * vec0;
+    vec17 = vec7 * vec1;
+    tmp4 = __msa_hadd_s_w(vec14, vec14);
+    tmp5 = __msa_hadd_s_w(vec15, vec15);
+    tmp6 = __msa_hadd_s_w(vec16, vec16);
+    tmp7 = __msa_hadd_s_w(vec17, vec17);
+    vec10 = __msa_pckev_h((v8i16)tmp1, (v8i16)tmp0);
+    vec11 = __msa_pckev_h((v8i16)tmp3, (v8i16)tmp2);
+    vec12 = __msa_pckev_h((v8i16)tmp5, (v8i16)tmp4);
+    vec13 = __msa_pckev_h((v8i16)tmp7, (v8i16)tmp6);
+    tmp0 = __msa_hadd_s_w(vec10, vec10);
+    tmp1 = __msa_hadd_s_w(vec11, vec11);
+    tmp2 = __msa_hadd_s_w(vec12, vec12);
+    tmp3 = __msa_hadd_s_w(vec13, vec13);
+    tmp0 = __msa_srai_w(tmp0, 6);
+    tmp1 = __msa_srai_w(tmp1, 6);
+    tmp2 = __msa_srai_w(tmp2, 6);
+    tmp3 = __msa_srai_w(tmp3, 6);
+    vec2 = vec4 * vec0;
+    vec6 = vec4 * vec1;
+    vec3 = vec8 * vec0;
+    vec7 = vec8 * vec1;
+    tmp8 = __msa_hadd_s_w(vec2, vec2);
+    tmp9 = __msa_hadd_s_w(vec6, vec6);
+    tmp10 = __msa_hadd_s_w(vec3, vec3);
+    tmp11 = __msa_hadd_s_w(vec7, vec7);
+    vec4 = vec5 * vec0;
+    vec8 = vec5 * vec1;
+    vec5 = vec9 * vec0;
+    vec9 = vec9 * vec1;
+    tmp12 = __msa_hadd_s_w(vec4, vec4);
+    tmp13 = __msa_hadd_s_w(vec8, vec8);
+    tmp14 = __msa_hadd_s_w(vec5, vec5);
+    tmp15 = __msa_hadd_s_w(vec9, vec9);
+    vec14 = __msa_pckev_h((v8i16)tmp9, (v8i16)tmp8);
+    vec15 = __msa_pckev_h((v8i16)tmp11, (v8i16)tmp10);
+    vec16 = __msa_pckev_h((v8i16)tmp13, (v8i16)tmp12);
+    vec17 = __msa_pckev_h((v8i16)tmp15, (v8i16)tmp14);
+    tmp4 = __msa_hadd_s_w(vec14, vec14);
+    tmp5 = __msa_hadd_s_w(vec15, vec15);
+    tmp6 = __msa_hadd_s_w(vec16, vec16);
+    tmp7 = __msa_hadd_s_w(vec17, vec17);
+    tmp4 = __msa_srai_w(tmp4, 6);
+    tmp5 = __msa_srai_w(tmp5, 6);
+    tmp6 = __msa_srai_w(tmp6, 6);
+    tmp7 = __msa_srai_w(tmp7, 6);
+    vec10 = __msa_pckev_h((v8i16)tmp1, (v8i16)tmp0);
+    vec11 = __msa_pckev_h((v8i16)tmp3, (v8i16)tmp2);
+    vec12 = __msa_pckev_h((v8i16)tmp5, (v8i16)tmp4);
+    vec13 = __msa_pckev_h((v8i16)tmp7, (v8i16)tmp6);
+    vec10 = __msa_maxi_s_h(vec10, 0);
+    vec11 = __msa_maxi_s_h(vec11, 0);
+    vec12 = __msa_maxi_s_h(vec12, 0);
+    vec13 = __msa_maxi_s_h(vec13, 0);
+    vec10 = __msa_min_s_h(vec10, max);
+    vec11 = __msa_min_s_h(vec11, max);
+    vec12 = __msa_min_s_h(vec12, max);
+    vec13 = __msa_min_s_h(vec13, max);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec11, (v16i8)vec10);
+    dst1 = (v16u8)__msa_pckev_b((v16i8)vec13, (v16i8)vec12);
+    ST_UB2(dst0, dst1, dst_argb, 16);
+    src_argb += 32;
+    dst_argb += 32;
+  }
+}
+
+void SplitUVRow_MSA(const uint8_t* src_uv,
+                    uint8_t* dst_u,
+                    uint8_t* dst_v,
+                    int width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0, dst1, dst2, dst3;
+
+  for (x = 0; x < width; x += 32) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_uv, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_uv, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_uv, 32);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)src_uv, 48);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)src1, (v16i8)src0);
+    dst1 = (v16u8)__msa_pckev_b((v16i8)src3, (v16i8)src2);
+    dst2 = (v16u8)__msa_pckod_b((v16i8)src1, (v16i8)src0);
+    dst3 = (v16u8)__msa_pckod_b((v16i8)src3, (v16i8)src2);
+    ST_UB2(dst0, dst1, dst_u, 16);
+    ST_UB2(dst2, dst3, dst_v, 16);
+    src_uv += 64;
+    dst_u += 32;
+    dst_v += 32;
+  }
+}
+
+void SetRow_MSA(uint8_t* dst, uint8_t v8, int width) {
+  int x;
+  v16u8 dst0 = (v16u8)__msa_fill_b(v8);
+
+  for (x = 0; x < width; x += 16) {
+    ST_UB(dst0, dst);
+    dst += 16;
+  }
+}
+
+void MirrorUVRow_MSA(const uint8_t* src_uv,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width) {
+  int x;
+  v16u8 src0, src1, src2, src3;
+  v16u8 dst0, dst1, dst2, dst3;
+  v16i8 mask0 = {30, 28, 26, 24, 22, 20, 18, 16, 14, 12, 10, 8, 6, 4, 2, 0};
+  v16i8 mask1 = {31, 29, 27, 25, 23, 21, 19, 17, 15, 13, 11, 9, 7, 5, 3, 1};
+
+  src_uv += (2 * width);
+
+  for (x = 0; x < width; x += 32) {
+    src_uv -= 64;
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_uv, 0);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)src_uv, 16);
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_uv, 32);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_uv, 48);
+    dst0 = (v16u8)__msa_vshf_b(mask1, (v16i8)src1, (v16i8)src0);
+    dst1 = (v16u8)__msa_vshf_b(mask1, (v16i8)src3, (v16i8)src2);
+    dst2 = (v16u8)__msa_vshf_b(mask0, (v16i8)src1, (v16i8)src0);
+    dst3 = (v16u8)__msa_vshf_b(mask0, (v16i8)src3, (v16i8)src2);
+    ST_UB2(dst0, dst1, dst_v, 16);
+    ST_UB2(dst2, dst3, dst_u, 16);
+    dst_u += 32;
+    dst_v += 32;
+  }
+}
+
+void SobelXRow_MSA(const uint8_t* src_y0,
+                   const uint8_t* src_y1,
+                   const uint8_t* src_y2,
+                   uint8_t* dst_sobelx,
+                   int32_t width) {
+  int x;
+  v16u8 src0, src1, src2, src3, src4, src5, dst0;
+  v8i16 vec0, vec1, vec2, vec3, vec4, vec5;
+  v16i8 mask0 = {0, 2, 1, 3, 2, 4, 3, 5, 4, 6, 5, 7, 6, 8, 7, 9};
+  v16i8 tmp = __msa_ldi_b(8);
+  v16i8 mask1 = mask0 + tmp;
+  v8i16 zero = {0};
+  v8i16 max = __msa_ldi_h(255);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_y0, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_y0, 16);
+    src2 = (v16u8)__msa_ld_b((const v16i8*)src_y1, 0);
+    src3 = (v16u8)__msa_ld_b((const v16i8*)src_y1, 16);
+    src4 = (v16u8)__msa_ld_b((const v16i8*)src_y2, 0);
+    src5 = (v16u8)__msa_ld_b((const v16i8*)src_y2, 16);
+    vec0 = (v8i16)__msa_vshf_b(mask0, (v16i8)src1, (v16i8)src0);
+    vec1 = (v8i16)__msa_vshf_b(mask1, (v16i8)src1, (v16i8)src0);
+    vec2 = (v8i16)__msa_vshf_b(mask0, (v16i8)src3, (v16i8)src2);
+    vec3 = (v8i16)__msa_vshf_b(mask1, (v16i8)src3, (v16i8)src2);
+    vec4 = (v8i16)__msa_vshf_b(mask0, (v16i8)src5, (v16i8)src4);
+    vec5 = (v8i16)__msa_vshf_b(mask1, (v16i8)src5, (v16i8)src4);
+    vec0 = (v8i16)__msa_hsub_u_h((v16u8)vec0, (v16u8)vec0);
+    vec1 = (v8i16)__msa_hsub_u_h((v16u8)vec1, (v16u8)vec1);
+    vec2 = (v8i16)__msa_hsub_u_h((v16u8)vec2, (v16u8)vec2);
+    vec3 = (v8i16)__msa_hsub_u_h((v16u8)vec3, (v16u8)vec3);
+    vec4 = (v8i16)__msa_hsub_u_h((v16u8)vec4, (v16u8)vec4);
+    vec5 = (v8i16)__msa_hsub_u_h((v16u8)vec5, (v16u8)vec5);
+    vec0 += vec2;
+    vec1 += vec3;
+    vec4 += vec2;
+    vec5 += vec3;
+    vec0 += vec4;
+    vec1 += vec5;
+    vec0 = __msa_add_a_h(zero, vec0);
+    vec1 = __msa_add_a_h(zero, vec1);
+    vec0 = __msa_maxi_s_h(vec0, 0);
+    vec1 = __msa_maxi_s_h(vec1, 0);
+    vec0 = __msa_min_s_h(max, vec0);
+    vec1 = __msa_min_s_h(max, vec1);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    ST_UB(dst0, dst_sobelx);
+    src_y0 += 16;
+    src_y1 += 16;
+    src_y2 += 16;
+    dst_sobelx += 16;
+  }
+}
+
+void SobelYRow_MSA(const uint8_t* src_y0,
+                   const uint8_t* src_y1,
+                   uint8_t* dst_sobely,
+                   int32_t width) {
+  int x;
+  v16u8 src0, src1, dst0;
+  v8i16 vec0, vec1, vec2, vec3, vec4, vec5, vec6;
+  v8i16 zero = {0};
+  v8i16 max = __msa_ldi_h(255);
+
+  for (x = 0; x < width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((const v16i8*)src_y0, 0);
+    src1 = (v16u8)__msa_ld_b((const v16i8*)src_y1, 0);
+    vec0 = (v8i16)__msa_ilvr_b((v16i8)zero, (v16i8)src0);
+    vec1 = (v8i16)__msa_ilvl_b((v16i8)zero, (v16i8)src0);
+    vec2 = (v8i16)__msa_ilvr_b((v16i8)zero, (v16i8)src1);
+    vec3 = (v8i16)__msa_ilvl_b((v16i8)zero, (v16i8)src1);
+    vec0 -= vec2;
+    vec1 -= vec3;
+    vec6[0] = src_y0[16] - src_y1[16];
+    vec6[1] = src_y0[17] - src_y1[17];
+    vec2 = (v8i16)__msa_sldi_b((v16i8)vec1, (v16i8)vec0, 2);
+    vec3 = (v8i16)__msa_sldi_b((v16i8)vec6, (v16i8)vec1, 2);
+    vec4 = (v8i16)__msa_sldi_b((v16i8)vec1, (v16i8)vec0, 4);
+    vec5 = (v8i16)__msa_sldi_b((v16i8)vec6, (v16i8)vec1, 4);
+    vec0 += vec2;
+    vec1 += vec3;
+    vec4 += vec2;
+    vec5 += vec3;
+    vec0 += vec4;
+    vec1 += vec5;
+    vec0 = __msa_add_a_h(zero, vec0);
+    vec1 = __msa_add_a_h(zero, vec1);
+    vec0 = __msa_maxi_s_h(vec0, 0);
+    vec1 = __msa_maxi_s_h(vec1, 0);
+    vec0 = __msa_min_s_h(max, vec0);
+    vec1 = __msa_min_s_h(max, vec1);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    ST_UB(dst0, dst_sobely);
+    src_y0 += 16;
+    src_y1 += 16;
+    dst_sobely += 16;
+  }
+}
+
+void HalfFloatRow_MSA(const uint16_t* src,
+                      uint16_t* dst,
+                      float scale,
+                      int width) {
+  int i;
+  v8u16 src0, src1, src2, src3, dst0, dst1, dst2, dst3;
+  v4u32 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7;
+  v4f32 fvec0, fvec1, fvec2, fvec3, fvec4, fvec5, fvec6, fvec7;
+  v4f32 mult_vec;
+  v8i16 zero = {0};
+  mult_vec[0] = 1.9259299444e-34f * scale;
+  mult_vec = (v4f32)__msa_splati_w((v4i32)mult_vec, 0);
+
+  for (i = 0; i < width; i += 32) {
+    src0 = (v8u16)__msa_ld_h((v8i16*)src, 0);
+    src1 = (v8u16)__msa_ld_h((v8i16*)src, 16);
+    src2 = (v8u16)__msa_ld_h((v8i16*)src, 32);
+    src3 = (v8u16)__msa_ld_h((v8i16*)src, 48);
+    vec0 = (v4u32)__msa_ilvr_h(zero, (v8i16)src0);
+    vec1 = (v4u32)__msa_ilvl_h(zero, (v8i16)src0);
+    vec2 = (v4u32)__msa_ilvr_h(zero, (v8i16)src1);
+    vec3 = (v4u32)__msa_ilvl_h(zero, (v8i16)src1);
+    vec4 = (v4u32)__msa_ilvr_h(zero, (v8i16)src2);
+    vec5 = (v4u32)__msa_ilvl_h(zero, (v8i16)src2);
+    vec6 = (v4u32)__msa_ilvr_h(zero, (v8i16)src3);
+    vec7 = (v4u32)__msa_ilvl_h(zero, (v8i16)src3);
+    fvec0 = __msa_ffint_u_w(vec0);
+    fvec1 = __msa_ffint_u_w(vec1);
+    fvec2 = __msa_ffint_u_w(vec2);
+    fvec3 = __msa_ffint_u_w(vec3);
+    fvec4 = __msa_ffint_u_w(vec4);
+    fvec5 = __msa_ffint_u_w(vec5);
+    fvec6 = __msa_ffint_u_w(vec6);
+    fvec7 = __msa_ffint_u_w(vec7);
+    fvec0 *= mult_vec;
+    fvec1 *= mult_vec;
+    fvec2 *= mult_vec;
+    fvec3 *= mult_vec;
+    fvec4 *= mult_vec;
+    fvec5 *= mult_vec;
+    fvec6 *= mult_vec;
+    fvec7 *= mult_vec;
+    vec0 = ((v4u32)fvec0) >> 13;
+    vec1 = ((v4u32)fvec1) >> 13;
+    vec2 = ((v4u32)fvec2) >> 13;
+    vec3 = ((v4u32)fvec3) >> 13;
+    vec4 = ((v4u32)fvec4) >> 13;
+    vec5 = ((v4u32)fvec5) >> 13;
+    vec6 = ((v4u32)fvec6) >> 13;
+    vec7 = ((v4u32)fvec7) >> 13;
+    dst0 = (v8u16)__msa_pckev_h((v8i16)vec1, (v8i16)vec0);
+    dst1 = (v8u16)__msa_pckev_h((v8i16)vec3, (v8i16)vec2);
+    dst2 = (v8u16)__msa_pckev_h((v8i16)vec5, (v8i16)vec4);
+    dst3 = (v8u16)__msa_pckev_h((v8i16)vec7, (v8i16)vec6);
+    ST_UH2(dst0, dst1, dst, 8);
+    ST_UH2(dst2, dst3, dst + 16, 8);
+    src += 32;
+    dst += 32;
+  }
+}
+
+#ifdef __cplusplus
+}  // extern "C"
+}  // namespace libyuv
+#endif
+
+#endif  // !defined(LIBYUV_DISABLE_MSA) && defined(__mips_msa)
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_neon.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_neon.cc
index 909df06..ff87e74 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_neon.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_neon.cc
@@ -10,6 +10,8 @@
 
 #include "libyuv/row.h"
 
+#include <stdio.h>
+
 #ifdef __cplusplus
 namespace libyuv {
 extern "C" {
@@ -20,1446 +22,1311 @@
     !defined(__aarch64__)
 
 // Read 8 Y, 4 U and 4 V from 422
-#define READYUV422                                                             \
-    MEMACCESS(0)                                                               \
-    "vld1.8     {d0}, [%0]!                    \n"                             \
-    MEMACCESS(1)                                                               \
-    "vld1.32    {d2[0]}, [%1]!                 \n"                             \
-    MEMACCESS(2)                                                               \
-    "vld1.32    {d2[1]}, [%2]!                 \n"
-
-// Read 8 Y, 2 U and 2 V from 422
-#define READYUV411                                                             \
-    MEMACCESS(0)                                                               \
-    "vld1.8     {d0}, [%0]!                    \n"                             \
-    MEMACCESS(1)                                                               \
-    "vld1.16    {d2[0]}, [%1]!                 \n"                             \
-    MEMACCESS(2)                                                               \
-    "vld1.16    {d2[1]}, [%2]!                 \n"                             \
-    "vmov.u8    d3, d2                         \n"                             \
-    "vzip.u8    d2, d3                         \n"
+#define READYUV422                               \
+  "vld1.8     {d0}, [%0]!                    \n" \
+  "vld1.32    {d2[0]}, [%1]!                 \n" \
+  "vld1.32    {d2[1]}, [%2]!                 \n"
 
 // Read 8 Y, 8 U and 8 V from 444
-#define READYUV444                                                             \
-    MEMACCESS(0)                                                               \
-    "vld1.8     {d0}, [%0]!                    \n"                             \
-    MEMACCESS(1)                                                               \
-    "vld1.8     {d2}, [%1]!                    \n"                             \
-    MEMACCESS(2)                                                               \
-    "vld1.8     {d3}, [%2]!                    \n"                             \
-    "vpaddl.u8  q1, q1                         \n"                             \
-    "vrshrn.u16 d2, q1, #1                     \n"
+#define READYUV444                               \
+  "vld1.8     {d0}, [%0]!                    \n" \
+  "vld1.8     {d2}, [%1]!                    \n" \
+  "vld1.8     {d3}, [%2]!                    \n" \
+  "vpaddl.u8  q1, q1                         \n" \
+  "vrshrn.u16 d2, q1, #1                     \n"
 
 // Read 8 Y, and set 4 U and 4 V to 128
-#define READYUV400                                                             \
-    MEMACCESS(0)                                                               \
-    "vld1.8     {d0}, [%0]!                    \n"                             \
-    "vmov.u8    d2, #128                       \n"
+#define READYUV400                               \
+  "vld1.8     {d0}, [%0]!                    \n" \
+  "vmov.u8    d2, #128                       \n"
 
 // Read 8 Y and 4 UV from NV12
 #define READNV12                                                               \
-    MEMACCESS(0)                                                               \
-    "vld1.8     {d0}, [%0]!                    \n"                             \
-    MEMACCESS(1)                                                               \
-    "vld1.8     {d2}, [%1]!                    \n"                             \
-    "vmov.u8    d3, d2                         \n"/* split odd/even uv apart */\
-    "vuzp.u8    d2, d3                         \n"                             \
-    "vtrn.u32   d2, d3                         \n"
+  "vld1.8     {d0}, [%0]!                    \n"                               \
+  "vld1.8     {d2}, [%1]!                    \n"                               \
+  "vmov.u8    d3, d2                         \n" /* split odd/even uv apart */ \
+  "vuzp.u8    d2, d3                         \n"                               \
+  "vtrn.u32   d2, d3                         \n"
 
 // Read 8 Y and 4 VU from NV21
 #define READNV21                                                               \
-    MEMACCESS(0)                                                               \
-    "vld1.8     {d0}, [%0]!                    \n"                             \
-    MEMACCESS(1)                                                               \
-    "vld1.8     {d2}, [%1]!                    \n"                             \
-    "vmov.u8    d3, d2                         \n"/* split odd/even uv apart */\
-    "vuzp.u8    d3, d2                         \n"                             \
-    "vtrn.u32   d2, d3                         \n"
+  "vld1.8     {d0}, [%0]!                    \n"                               \
+  "vld1.8     {d2}, [%1]!                    \n"                               \
+  "vmov.u8    d3, d2                         \n" /* split odd/even uv apart */ \
+  "vuzp.u8    d3, d2                         \n"                               \
+  "vtrn.u32   d2, d3                         \n"
 
 // Read 8 YUY2
-#define READYUY2                                                               \
-    MEMACCESS(0)                                                               \
-    "vld2.8     {d0, d2}, [%0]!                \n"                             \
-    "vmov.u8    d3, d2                         \n"                             \
-    "vuzp.u8    d2, d3                         \n"                             \
-    "vtrn.u32   d2, d3                         \n"
+#define READYUY2                                 \
+  "vld2.8     {d0, d2}, [%0]!                \n" \
+  "vmov.u8    d3, d2                         \n" \
+  "vuzp.u8    d2, d3                         \n" \
+  "vtrn.u32   d2, d3                         \n"
 
 // Read 8 UYVY
-#define READUYVY                                                               \
-    MEMACCESS(0)                                                               \
-    "vld2.8     {d2, d3}, [%0]!                \n"                             \
-    "vmov.u8    d0, d3                         \n"                             \
-    "vmov.u8    d3, d2                         \n"                             \
-    "vuzp.u8    d2, d3                         \n"                             \
-    "vtrn.u32   d2, d3                         \n"
+#define READUYVY                                 \
+  "vld2.8     {d2, d3}, [%0]!                \n" \
+  "vmov.u8    d0, d3                         \n" \
+  "vmov.u8    d3, d2                         \n" \
+  "vuzp.u8    d2, d3                         \n" \
+  "vtrn.u32   d2, d3                         \n"
 
-#define YUVTORGB_SETUP                                                         \
-    MEMACCESS([kUVToRB])                                                       \
-    "vld1.8     {d24}, [%[kUVToRB]]            \n"                             \
-    MEMACCESS([kUVToG])                                                        \
-    "vld1.8     {d25}, [%[kUVToG]]             \n"                             \
-    MEMACCESS([kUVBiasBGR])                                                    \
-    "vld1.16    {d26[], d27[]}, [%[kUVBiasBGR]]! \n"                           \
-    MEMACCESS([kUVBiasBGR])                                                    \
-    "vld1.16    {d8[], d9[]}, [%[kUVBiasBGR]]!   \n"                           \
-    MEMACCESS([kUVBiasBGR])                                                    \
-    "vld1.16    {d28[], d29[]}, [%[kUVBiasBGR]]  \n"                           \
-    MEMACCESS([kYToRgb])                                                       \
-    "vld1.32    {d30[], d31[]}, [%[kYToRgb]]     \n"
+#define YUVTORGB_SETUP                             \
+  "vld1.8     {d24}, [%[kUVToRB]]            \n"   \
+  "vld1.8     {d25}, [%[kUVToG]]             \n"   \
+  "vld1.16    {d26[], d27[]}, [%[kUVBiasBGR]]! \n" \
+  "vld1.16    {d8[], d9[]}, [%[kUVBiasBGR]]!   \n" \
+  "vld1.16    {d28[], d29[]}, [%[kUVBiasBGR]]  \n" \
+  "vld1.32    {d30[], d31[]}, [%[kYToRgb]]     \n"
 
-#define YUVTORGB                                                               \
-    "vmull.u8   q8, d2, d24                    \n" /* u/v B/R component      */\
-    "vmull.u8   q9, d2, d25                    \n" /* u/v G component        */\
-    "vmovl.u8   q0, d0                         \n" /* Y                      */\
-    "vmovl.s16  q10, d1                        \n"                             \
-    "vmovl.s16  q0, d0                         \n"                             \
-    "vmul.s32   q10, q10, q15                  \n"                             \
-    "vmul.s32   q0, q0, q15                    \n"                             \
-    "vqshrun.s32 d0, q0, #16                   \n"                             \
-    "vqshrun.s32 d1, q10, #16                  \n" /* Y                      */\
-    "vadd.s16   d18, d19                       \n"                             \
-    "vshll.u16  q1, d16, #16                   \n" /* Replicate u * UB       */\
-    "vshll.u16  q10, d17, #16                  \n" /* Replicate v * VR       */\
-    "vshll.u16  q3, d18, #16                   \n" /* Replicate (v*VG + u*UG)*/\
-    "vaddw.u16  q1, q1, d16                    \n"                             \
-    "vaddw.u16  q10, q10, d17                  \n"                             \
-    "vaddw.u16  q3, q3, d18                    \n"                             \
-    "vqadd.s16  q8, q0, q13                    \n" /* B */                     \
-    "vqadd.s16  q9, q0, q14                    \n" /* R */                     \
-    "vqadd.s16  q0, q0, q4                     \n" /* G */                     \
-    "vqadd.s16  q8, q8, q1                     \n" /* B */                     \
-    "vqadd.s16  q9, q9, q10                    \n" /* R */                     \
-    "vqsub.s16  q0, q0, q3                     \n" /* G */                     \
-    "vqshrun.s16 d20, q8, #6                   \n" /* B */                     \
-    "vqshrun.s16 d22, q9, #6                   \n" /* R */                     \
-    "vqshrun.s16 d21, q0, #6                   \n" /* G */
+#define YUVTORGB                                                              \
+  "vmull.u8   q8, d2, d24                    \n" /* u/v B/R component      */ \
+  "vmull.u8   q9, d2, d25                    \n" /* u/v G component        */ \
+  "vmovl.u8   q0, d0                         \n" /* Y                      */ \
+  "vmovl.s16  q10, d1                        \n"                              \
+  "vmovl.s16  q0, d0                         \n"                              \
+  "vmul.s32   q10, q10, q15                  \n"                              \
+  "vmul.s32   q0, q0, q15                    \n"                              \
+  "vqshrun.s32 d0, q0, #16                   \n"                              \
+  "vqshrun.s32 d1, q10, #16                  \n" /* Y                      */ \
+  "vadd.s16   d18, d19                       \n"                              \
+  "vshll.u16  q1, d16, #16                   \n" /* Replicate u * UB       */ \
+  "vshll.u16  q10, d17, #16                  \n" /* Replicate v * VR       */ \
+  "vshll.u16  q3, d18, #16                   \n" /* Replicate (v*VG + u*UG)*/ \
+  "vaddw.u16  q1, q1, d16                    \n"                              \
+  "vaddw.u16  q10, q10, d17                  \n"                              \
+  "vaddw.u16  q3, q3, d18                    \n"                              \
+  "vqadd.s16  q8, q0, q13                    \n" /* B */                      \
+  "vqadd.s16  q9, q0, q14                    \n" /* R */                      \
+  "vqadd.s16  q0, q0, q4                     \n" /* G */                      \
+  "vqadd.s16  q8, q8, q1                     \n" /* B */                      \
+  "vqadd.s16  q9, q9, q10                    \n" /* R */                      \
+  "vqsub.s16  q0, q0, q3                     \n" /* G */                      \
+  "vqshrun.s16 d20, q8, #6                   \n" /* B */                      \
+  "vqshrun.s16 d22, q9, #6                   \n" /* R */                      \
+  "vqshrun.s16 d21, q0, #6                   \n" /* G */
 
-void I444ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
+void I444ToARGBRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-    "vmov.u8    d23, #255                      \n"
-  "1:                                          \n"
-    READYUV444
-    YUVTORGB
-    "subs       %4, %4, #8                     \n"
-    MEMACCESS(3)
-    "vst4.8     {d20, d21, d22, d23}, [%3]!    \n"
-    "bgt        1b                             \n"
-    : "+r"(src_y),     // %0
-      "+r"(src_u),     // %1
-      "+r"(src_v),     // %2
-      "+r"(dst_argb),  // %3
-      "+r"(width)      // %4
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "q0", "q1", "q2", "q3", "q4",
-      "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+  asm volatile(
+      YUVTORGB_SETUP
+      "vmov.u8    d23, #255                      \n"
+      "1:                                        \n" READYUV444 YUVTORGB
+      "subs       %4, %4, #8                     \n"
+      "vst4.8     {d20, d21, d22, d23}, [%3]!    \n"
+      "bgt        1b                             \n"
+      : "+r"(src_y),     // %0
+        "+r"(src_u),     // %1
+        "+r"(src_v),     // %2
+        "+r"(dst_argb),  // %3
+        "+r"(width)      // %4
+      : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+        [kUVToG] "r"(&yuvconstants->kUVToG),
+        [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+        [kYToRgb] "r"(&yuvconstants->kYToRgb)
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9", "q10", "q11",
+        "q12", "q13", "q14", "q15");
 }
 
-void I422ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
+void I422ToARGBRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-    "vmov.u8    d23, #255                      \n"
-  "1:                                          \n"
-    READYUV422
-    YUVTORGB
-    "subs       %4, %4, #8                     \n"
-    MEMACCESS(3)
-    "vst4.8     {d20, d21, d22, d23}, [%3]!    \n"
-    "bgt        1b                             \n"
-    : "+r"(src_y),     // %0
-      "+r"(src_u),     // %1
-      "+r"(src_v),     // %2
-      "+r"(dst_argb),  // %3
-      "+r"(width)      // %4
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "q0", "q1", "q2", "q3", "q4",
-      "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+  asm volatile(
+      YUVTORGB_SETUP
+      "vmov.u8    d23, #255                      \n"
+      "1:                                        \n" READYUV422 YUVTORGB
+      "subs       %4, %4, #8                     \n"
+      "vst4.8     {d20, d21, d22, d23}, [%3]!    \n"
+      "bgt        1b                             \n"
+      : "+r"(src_y),     // %0
+        "+r"(src_u),     // %1
+        "+r"(src_v),     // %2
+        "+r"(dst_argb),  // %3
+        "+r"(width)      // %4
+      : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+        [kUVToG] "r"(&yuvconstants->kUVToG),
+        [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+        [kYToRgb] "r"(&yuvconstants->kYToRgb)
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9", "q10", "q11",
+        "q12", "q13", "q14", "q15");
 }
 
-void I422AlphaToARGBRow_NEON(const uint8* src_y,
-                             const uint8* src_u,
-                             const uint8* src_v,
-                             const uint8* src_a,
-                             uint8* dst_argb,
+void I422AlphaToARGBRow_NEON(const uint8_t* src_y,
+                             const uint8_t* src_u,
+                             const uint8_t* src_v,
+                             const uint8_t* src_a,
+                             uint8_t* dst_argb,
                              const struct YuvConstants* yuvconstants,
                              int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-  "1:                                          \n"
-    READYUV422
-    YUVTORGB
-    "subs       %5, %5, #8                     \n"
-    MEMACCESS(3)
-    "vld1.8     {d23}, [%3]!                   \n"
-    MEMACCESS(4)
-    "vst4.8     {d20, d21, d22, d23}, [%4]!    \n"
-    "bgt        1b                             \n"
-    : "+r"(src_y),     // %0
-      "+r"(src_u),     // %1
-      "+r"(src_v),     // %2
-      "+r"(src_a),     // %3
-      "+r"(dst_argb),  // %4
-      "+r"(width)      // %5
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "q0", "q1", "q2", "q3", "q4",
-      "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+  asm volatile(
+      YUVTORGB_SETUP
+      "1:                                        \n" READYUV422 YUVTORGB
+      "subs       %5, %5, #8                     \n"
+      "vld1.8     {d23}, [%3]!                   \n"
+      "vst4.8     {d20, d21, d22, d23}, [%4]!    \n"
+      "bgt        1b                             \n"
+      : "+r"(src_y),     // %0
+        "+r"(src_u),     // %1
+        "+r"(src_v),     // %2
+        "+r"(src_a),     // %3
+        "+r"(dst_argb),  // %4
+        "+r"(width)      // %5
+      : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+        [kUVToG] "r"(&yuvconstants->kUVToG),
+        [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+        [kYToRgb] "r"(&yuvconstants->kYToRgb)
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9", "q10", "q11",
+        "q12", "q13", "q14", "q15");
 }
 
-void I411ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
+void I422ToRGBARow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_rgba,
                         const struct YuvConstants* yuvconstants,
                         int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-    "vmov.u8    d23, #255                      \n"
-  "1:                                          \n"
-    READYUV411
-    YUVTORGB
-    "subs       %4, %4, #8                     \n"
-    MEMACCESS(3)
-    "vst4.8     {d20, d21, d22, d23}, [%3]!    \n"
-    "bgt        1b                             \n"
-    : "+r"(src_y),     // %0
-      "+r"(src_u),     // %1
-      "+r"(src_v),     // %2
-      "+r"(dst_argb),  // %3
-      "+r"(width)      // %4
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "q0", "q1", "q2", "q3", "q4",
-      "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+  asm volatile(
+      YUVTORGB_SETUP
+      "1:                                        \n" READYUV422 YUVTORGB
+      "subs       %4, %4, #8                     \n"
+      "vmov.u8    d19, #255                      \n"  // YUVTORGB modified d19
+      "vst4.8     {d19, d20, d21, d22}, [%3]!    \n"
+      "bgt        1b                             \n"
+      : "+r"(src_y),     // %0
+        "+r"(src_u),     // %1
+        "+r"(src_v),     // %2
+        "+r"(dst_rgba),  // %3
+        "+r"(width)      // %4
+      : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+        [kUVToG] "r"(&yuvconstants->kUVToG),
+        [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+        [kYToRgb] "r"(&yuvconstants->kYToRgb)
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9", "q10", "q11",
+        "q12", "q13", "q14", "q15");
 }
 
-void I422ToRGBARow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_rgba,
-                        const struct YuvConstants* yuvconstants,
-                        int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-  "1:                                          \n"
-    READYUV422
-    YUVTORGB
-    "subs       %4, %4, #8                     \n"
-    "vmov.u8    d19, #255                      \n"  // d19 modified by YUVTORGB
-    MEMACCESS(3)
-    "vst4.8     {d19, d20, d21, d22}, [%3]!    \n"
-    "bgt        1b                             \n"
-    : "+r"(src_y),     // %0
-      "+r"(src_u),     // %1
-      "+r"(src_v),     // %2
-      "+r"(dst_rgba),  // %3
-      "+r"(width)      // %4
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "q0", "q1", "q2", "q3", "q4",
-      "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
-}
-
-void I422ToRGB24Row_NEON(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_rgb24,
+void I422ToRGB24Row_NEON(const uint8_t* src_y,
+                         const uint8_t* src_u,
+                         const uint8_t* src_v,
+                         uint8_t* dst_rgb24,
                          const struct YuvConstants* yuvconstants,
                          int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-  "1:                                          \n"
-    READYUV422
-    YUVTORGB
-    "subs       %4, %4, #8                     \n"
-    MEMACCESS(3)
-    "vst3.8     {d20, d21, d22}, [%3]!         \n"
-    "bgt        1b                             \n"
-    : "+r"(src_y),      // %0
-      "+r"(src_u),      // %1
-      "+r"(src_v),      // %2
-      "+r"(dst_rgb24),  // %3
-      "+r"(width)       // %4
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "q0", "q1", "q2", "q3", "q4",
-      "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+  asm volatile(
+      YUVTORGB_SETUP
+      "1:                                        \n" READYUV422 YUVTORGB
+      "subs       %4, %4, #8                     \n"
+      "vst3.8     {d20, d21, d22}, [%3]!         \n"
+      "bgt        1b                             \n"
+      : "+r"(src_y),      // %0
+        "+r"(src_u),      // %1
+        "+r"(src_v),      // %2
+        "+r"(dst_rgb24),  // %3
+        "+r"(width)       // %4
+      : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+        [kUVToG] "r"(&yuvconstants->kUVToG),
+        [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+        [kYToRgb] "r"(&yuvconstants->kYToRgb)
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9", "q10", "q11",
+        "q12", "q13", "q14", "q15");
 }
 
-#define ARGBTORGB565                                                           \
-    "vshll.u8    q0, d22, #8                   \n"  /* R                    */ \
-    "vshll.u8    q8, d21, #8                   \n"  /* G                    */ \
-    "vshll.u8    q9, d20, #8                   \n"  /* B                    */ \
-    "vsri.16     q0, q8, #5                    \n"  /* RG                   */ \
-    "vsri.16     q0, q9, #11                   \n"  /* RGB                  */
+#define ARGBTORGB565                                                        \
+  "vshll.u8    q0, d22, #8                   \n" /* R                    */ \
+  "vshll.u8    q8, d21, #8                   \n" /* G                    */ \
+  "vshll.u8    q9, d20, #8                   \n" /* B                    */ \
+  "vsri.16     q0, q8, #5                    \n" /* RG                   */ \
+  "vsri.16     q0, q9, #11                   \n" /* RGB                  */
 
-void I422ToRGB565Row_NEON(const uint8* src_y,
-                          const uint8* src_u,
-                          const uint8* src_v,
-                          uint8* dst_rgb565,
+void I422ToRGB565Row_NEON(const uint8_t* src_y,
+                          const uint8_t* src_u,
+                          const uint8_t* src_v,
+                          uint8_t* dst_rgb565,
                           const struct YuvConstants* yuvconstants,
                           int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-  "1:                                          \n"
-    READYUV422
-    YUVTORGB
-    "subs       %4, %4, #8                     \n"
-    ARGBTORGB565
-    MEMACCESS(3)
-    "vst1.8     {q0}, [%3]!                    \n"  // store 8 pixels RGB565.
-    "bgt        1b                             \n"
-    : "+r"(src_y),    // %0
-      "+r"(src_u),    // %1
-      "+r"(src_v),    // %2
-      "+r"(dst_rgb565),  // %3
-      "+r"(width)     // %4
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "q0", "q1", "q2", "q3", "q4",
-      "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+  asm volatile(
+      YUVTORGB_SETUP
+      "1:                                        \n" READYUV422 YUVTORGB
+      "subs       %4, %4, #8                     \n" ARGBTORGB565
+      "vst1.8     {q0}, [%3]!                    \n"  // store 8 pixels RGB565.
+      "bgt        1b                             \n"
+      : "+r"(src_y),       // %0
+        "+r"(src_u),       // %1
+        "+r"(src_v),       // %2
+        "+r"(dst_rgb565),  // %3
+        "+r"(width)        // %4
+      : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+        [kUVToG] "r"(&yuvconstants->kUVToG),
+        [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+        [kYToRgb] "r"(&yuvconstants->kYToRgb)
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9", "q10", "q11",
+        "q12", "q13", "q14", "q15");
 }
 
-#define ARGBTOARGB1555                                                         \
-    "vshll.u8    q0, d23, #8                   \n"  /* A                    */ \
-    "vshll.u8    q8, d22, #8                   \n"  /* R                    */ \
-    "vshll.u8    q9, d21, #8                   \n"  /* G                    */ \
-    "vshll.u8    q10, d20, #8                  \n"  /* B                    */ \
-    "vsri.16     q0, q8, #1                    \n"  /* AR                   */ \
-    "vsri.16     q0, q9, #6                    \n"  /* ARG                  */ \
-    "vsri.16     q0, q10, #11                  \n"  /* ARGB                 */
+#define ARGBTOARGB1555                                                      \
+  "vshll.u8    q0, d23, #8                   \n" /* A                    */ \
+  "vshll.u8    q8, d22, #8                   \n" /* R                    */ \
+  "vshll.u8    q9, d21, #8                   \n" /* G                    */ \
+  "vshll.u8    q10, d20, #8                  \n" /* B                    */ \
+  "vsri.16     q0, q8, #1                    \n" /* AR                   */ \
+  "vsri.16     q0, q9, #6                    \n" /* ARG                  */ \
+  "vsri.16     q0, q10, #11                  \n" /* ARGB                 */
 
-void I422ToARGB1555Row_NEON(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb1555,
+void I422ToARGB1555Row_NEON(const uint8_t* src_y,
+                            const uint8_t* src_u,
+                            const uint8_t* src_v,
+                            uint8_t* dst_argb1555,
                             const struct YuvConstants* yuvconstants,
                             int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-  "1:                                          \n"
-    READYUV422
-    YUVTORGB
-    "subs       %4, %4, #8                     \n"
-    "vmov.u8    d23, #255                      \n"
-    ARGBTOARGB1555
-    MEMACCESS(3)
-    "vst1.8     {q0}, [%3]!                    \n"  // store 8 pixels ARGB1555.
-    "bgt        1b                             \n"
-    : "+r"(src_y),    // %0
-      "+r"(src_u),    // %1
-      "+r"(src_v),    // %2
-      "+r"(dst_argb1555),  // %3
-      "+r"(width)     // %4
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "q0", "q1", "q2", "q3", "q4",
-      "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+  asm volatile(
+      YUVTORGB_SETUP
+      "1:                                        \n" READYUV422 YUVTORGB
+      "subs       %4, %4, #8                     \n"
+      "vmov.u8    d23, #255                      \n" ARGBTOARGB1555
+      "vst1.8     {q0}, [%3]!                    \n"  // store 8 pixels
+      "bgt        1b                             \n"
+      : "+r"(src_y),         // %0
+        "+r"(src_u),         // %1
+        "+r"(src_v),         // %2
+        "+r"(dst_argb1555),  // %3
+        "+r"(width)          // %4
+      : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+        [kUVToG] "r"(&yuvconstants->kUVToG),
+        [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+        [kYToRgb] "r"(&yuvconstants->kYToRgb)
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9", "q10", "q11",
+        "q12", "q13", "q14", "q15");
 }
 
-#define ARGBTOARGB4444                                                         \
-    "vshr.u8    d20, d20, #4                   \n"  /* B                    */ \
-    "vbic.32    d21, d21, d4                   \n"  /* G                    */ \
-    "vshr.u8    d22, d22, #4                   \n"  /* R                    */ \
-    "vbic.32    d23, d23, d4                   \n"  /* A                    */ \
-    "vorr       d0, d20, d21                   \n"  /* BG                   */ \
-    "vorr       d1, d22, d23                   \n"  /* RA                   */ \
-    "vzip.u8    d0, d1                         \n"  /* BGRA                 */
+#define ARGBTOARGB4444                                                      \
+  "vshr.u8    d20, d20, #4                   \n" /* B                    */ \
+  "vbic.32    d21, d21, d4                   \n" /* G                    */ \
+  "vshr.u8    d22, d22, #4                   \n" /* R                    */ \
+  "vbic.32    d23, d23, d4                   \n" /* A                    */ \
+  "vorr       d0, d20, d21                   \n" /* BG                   */ \
+  "vorr       d1, d22, d23                   \n" /* RA                   */ \
+  "vzip.u8    d0, d1                         \n" /* BGRA                 */
 
-void I422ToARGB4444Row_NEON(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb4444,
+void I422ToARGB4444Row_NEON(const uint8_t* src_y,
+                            const uint8_t* src_u,
+                            const uint8_t* src_v,
+                            uint8_t* dst_argb4444,
                             const struct YuvConstants* yuvconstants,
                             int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-    "vmov.u8    d4, #0x0f                      \n"  // bits to clear with vbic.
-  "1:                                          \n"
-    READYUV422
-    YUVTORGB
-    "subs       %4, %4, #8                     \n"
-    "vmov.u8    d23, #255                      \n"
-    ARGBTOARGB4444
-    MEMACCESS(3)
-    "vst1.8     {q0}, [%3]!                    \n"  // store 8 pixels ARGB4444.
-    "bgt        1b                             \n"
-    : "+r"(src_y),    // %0
-      "+r"(src_u),    // %1
-      "+r"(src_v),    // %2
-      "+r"(dst_argb4444),  // %3
-      "+r"(width)     // %4
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "q0", "q1", "q2", "q3", "q4",
-      "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+  asm volatile(
+      YUVTORGB_SETUP
+      "vmov.u8    d4, #0x0f                      \n"  // vbic bits to clear
+      "1:                                        \n"
+
+      READYUV422 YUVTORGB
+      "subs       %4, %4, #8                     \n"
+      "vmov.u8    d23, #255                      \n" ARGBTOARGB4444
+      "vst1.8     {q0}, [%3]!                    \n"  // store 8 pixels
+      "bgt        1b                             \n"
+      : "+r"(src_y),         // %0
+        "+r"(src_u),         // %1
+        "+r"(src_v),         // %2
+        "+r"(dst_argb4444),  // %3
+        "+r"(width)          // %4
+      : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+        [kUVToG] "r"(&yuvconstants->kUVToG),
+        [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+        [kYToRgb] "r"(&yuvconstants->kYToRgb)
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9", "q10", "q11",
+        "q12", "q13", "q14", "q15");
 }
 
-void I400ToARGBRow_NEON(const uint8* src_y,
-                        uint8* dst_argb,
-                        int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-    "vmov.u8    d23, #255                      \n"
-  "1:                                          \n"
-    READYUV400
-    YUVTORGB
-    "subs       %2, %2, #8                     \n"
-    MEMACCESS(1)
-    "vst4.8     {d20, d21, d22, d23}, [%1]!    \n"
-    "bgt        1b                             \n"
-    : "+r"(src_y),     // %0
-      "+r"(dst_argb),  // %1
-      "+r"(width)      // %2
-    : [kUVToRB]"r"(&kYuvI601Constants.kUVToRB),
-      [kUVToG]"r"(&kYuvI601Constants.kUVToG),
-      [kUVBiasBGR]"r"(&kYuvI601Constants.kUVBiasBGR),
-      [kYToRgb]"r"(&kYuvI601Constants.kYToRgb)
-    : "cc", "memory", "q0", "q1", "q2", "q3", "q4",
-      "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+void I400ToARGBRow_NEON(const uint8_t* src_y, uint8_t* dst_argb, int width) {
+  asm volatile(
+      YUVTORGB_SETUP
+      "vmov.u8    d23, #255                      \n"
+      "1:                                        \n" READYUV400 YUVTORGB
+      "subs       %2, %2, #8                     \n"
+      "vst4.8     {d20, d21, d22, d23}, [%1]!    \n"
+      "bgt        1b                             \n"
+      : "+r"(src_y),     // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      : [kUVToRB] "r"(&kYuvI601Constants.kUVToRB),
+        [kUVToG] "r"(&kYuvI601Constants.kUVToG),
+        [kUVBiasBGR] "r"(&kYuvI601Constants.kUVBiasBGR),
+        [kYToRgb] "r"(&kYuvI601Constants.kYToRgb)
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9", "q10", "q11",
+        "q12", "q13", "q14", "q15");
 }
 
-void J400ToARGBRow_NEON(const uint8* src_y,
-                        uint8* dst_argb,
-                        int width) {
-  asm volatile (
-    "vmov.u8    d23, #255                      \n"
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {d20}, [%0]!                   \n"
-    "vmov       d21, d20                       \n"
-    "vmov       d22, d20                       \n"
-    "subs       %2, %2, #8                     \n"
-    MEMACCESS(1)
-    "vst4.8     {d20, d21, d22, d23}, [%1]!    \n"
-    "bgt        1b                             \n"
-    : "+r"(src_y),     // %0
-      "+r"(dst_argb),  // %1
-      "+r"(width)      // %2
-    :
-    : "cc", "memory", "d20", "d21", "d22", "d23"
-  );
+void J400ToARGBRow_NEON(const uint8_t* src_y, uint8_t* dst_argb, int width) {
+  asm volatile(
+      "vmov.u8    d23, #255                      \n"
+      "1:                                        \n"
+      "vld1.8     {d20}, [%0]!                   \n"
+      "vmov       d21, d20                       \n"
+      "vmov       d22, d20                       \n"
+      "subs       %2, %2, #8                     \n"
+      "vst4.8     {d20, d21, d22, d23}, [%1]!    \n"
+      "bgt        1b                             \n"
+      : "+r"(src_y),     // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "d20", "d21", "d22", "d23");
 }
 
-void NV12ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_uv,
-                        uint8* dst_argb,
+void NV12ToARGBRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_uv,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-    "vmov.u8    d23, #255                      \n"
-  "1:                                          \n"
-    READNV12
-    YUVTORGB
-    "subs       %3, %3, #8                     \n"
-    MEMACCESS(2)
-    "vst4.8     {d20, d21, d22, d23}, [%2]!    \n"
-    "bgt        1b                             \n"
-    : "+r"(src_y),     // %0
-      "+r"(src_uv),    // %1
-      "+r"(dst_argb),  // %2
-      "+r"(width)      // %3
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "q0", "q1", "q2", "q3", "q4",
-      "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+  asm volatile(YUVTORGB_SETUP
+               "vmov.u8    d23, #255                      \n"
+               "1:                                        \n" READNV12 YUVTORGB
+               "subs       %3, %3, #8                     \n"
+               "vst4.8     {d20, d21, d22, d23}, [%2]!    \n"
+               "bgt        1b                             \n"
+               : "+r"(src_y),     // %0
+                 "+r"(src_uv),    // %1
+                 "+r"(dst_argb),  // %2
+                 "+r"(width)      // %3
+               : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+                 [kUVToG] "r"(&yuvconstants->kUVToG),
+                 [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+                 [kYToRgb] "r"(&yuvconstants->kYToRgb)
+               : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9",
+                 "q10", "q11", "q12", "q13", "q14", "q15");
 }
 
-void NV21ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_vu,
-                        uint8* dst_argb,
+void NV21ToARGBRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_vu,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-    "vmov.u8    d23, #255                      \n"
-  "1:                                          \n"
-    READNV21
-    YUVTORGB
-    "subs       %3, %3, #8                     \n"
-    MEMACCESS(2)
-    "vst4.8     {d20, d21, d22, d23}, [%2]!    \n"
-    "bgt        1b                             \n"
-    : "+r"(src_y),     // %0
-      "+r"(src_vu),    // %1
-      "+r"(dst_argb),  // %2
-      "+r"(width)      // %3
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "q0", "q1", "q2", "q3", "q4",
-      "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+  asm volatile(YUVTORGB_SETUP
+               "vmov.u8    d23, #255                      \n"
+               "1:                                        \n" READNV21 YUVTORGB
+               "subs       %3, %3, #8                     \n"
+               "vst4.8     {d20, d21, d22, d23}, [%2]!    \n"
+               "bgt        1b                             \n"
+               : "+r"(src_y),     // %0
+                 "+r"(src_vu),    // %1
+                 "+r"(dst_argb),  // %2
+                 "+r"(width)      // %3
+               : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+                 [kUVToG] "r"(&yuvconstants->kUVToG),
+                 [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+                 [kYToRgb] "r"(&yuvconstants->kYToRgb)
+               : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9",
+                 "q10", "q11", "q12", "q13", "q14", "q15");
 }
 
-void NV12ToRGB565Row_NEON(const uint8* src_y,
-                          const uint8* src_uv,
-                          uint8* dst_rgb565,
+void NV12ToRGB24Row_NEON(const uint8_t* src_y,
+                         const uint8_t* src_uv,
+                         uint8_t* dst_rgb24,
+                         const struct YuvConstants* yuvconstants,
+                         int width) {
+  asm volatile(
+
+      YUVTORGB_SETUP
+
+      "1:                                        \n"
+
+      READNV12 YUVTORGB
+      "subs       %3, %3, #8                     \n"
+      "vst3.8     {d20, d21, d22}, [%2]!         \n"
+      "bgt        1b                             \n"
+      : "+r"(src_y),      // %0
+        "+r"(src_uv),     // %1
+        "+r"(dst_rgb24),  // %2
+        "+r"(width)       // %3
+      : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+        [kUVToG] "r"(&yuvconstants->kUVToG),
+        [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+        [kYToRgb] "r"(&yuvconstants->kYToRgb)
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9", "q10", "q11",
+        "q12", "q13", "q14", "q15");
+}
+
+void NV21ToRGB24Row_NEON(const uint8_t* src_y,
+                         const uint8_t* src_vu,
+                         uint8_t* dst_rgb24,
+                         const struct YuvConstants* yuvconstants,
+                         int width) {
+  asm volatile(
+
+      YUVTORGB_SETUP
+
+      "1:                                        \n"
+
+      READNV21 YUVTORGB
+      "subs       %3, %3, #8                     \n"
+      "vst3.8     {d20, d21, d22}, [%2]!         \n"
+      "bgt        1b                             \n"
+      : "+r"(src_y),      // %0
+        "+r"(src_vu),     // %1
+        "+r"(dst_rgb24),  // %2
+        "+r"(width)       // %3
+      : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+        [kUVToG] "r"(&yuvconstants->kUVToG),
+        [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+        [kYToRgb] "r"(&yuvconstants->kYToRgb)
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9", "q10", "q11",
+        "q12", "q13", "q14", "q15");
+}
+
+void NV12ToRGB565Row_NEON(const uint8_t* src_y,
+                          const uint8_t* src_uv,
+                          uint8_t* dst_rgb565,
                           const struct YuvConstants* yuvconstants,
                           int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-  "1:                                          \n"
-    READNV12
-    YUVTORGB
-    "subs       %3, %3, #8                     \n"
-    ARGBTORGB565
-    MEMACCESS(2)
-    "vst1.8     {q0}, [%2]!                    \n"  // store 8 pixels RGB565.
-    "bgt        1b                             \n"
-    : "+r"(src_y),     // %0
-      "+r"(src_uv),    // %1
-      "+r"(dst_rgb565),  // %2
-      "+r"(width)      // %3
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "q0", "q1", "q2", "q3", "q4",
-      "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+  asm volatile(
+      YUVTORGB_SETUP
+      "1:                                        \n" READNV12 YUVTORGB
+      "subs       %3, %3, #8                     \n" ARGBTORGB565
+      "vst1.8     {q0}, [%2]!                    \n"  // store 8 pixels RGB565.
+      "bgt        1b                             \n"
+      : "+r"(src_y),       // %0
+        "+r"(src_uv),      // %1
+        "+r"(dst_rgb565),  // %2
+        "+r"(width)        // %3
+      : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+        [kUVToG] "r"(&yuvconstants->kUVToG),
+        [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+        [kYToRgb] "r"(&yuvconstants->kYToRgb)
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9", "q10", "q11",
+        "q12", "q13", "q14", "q15");
 }
 
-void YUY2ToARGBRow_NEON(const uint8* src_yuy2,
-                        uint8* dst_argb,
+void YUY2ToARGBRow_NEON(const uint8_t* src_yuy2,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-    "vmov.u8    d23, #255                      \n"
-  "1:                                          \n"
-    READYUY2
-    YUVTORGB
-    "subs       %2, %2, #8                     \n"
-    MEMACCESS(1)
-    "vst4.8     {d20, d21, d22, d23}, [%1]!    \n"
-    "bgt        1b                             \n"
-    : "+r"(src_yuy2),  // %0
-      "+r"(dst_argb),  // %1
-      "+r"(width)      // %2
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "q0", "q1", "q2", "q3", "q4",
-      "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+  asm volatile(YUVTORGB_SETUP
+               "vmov.u8    d23, #255                      \n"
+               "1:                                        \n" READYUY2 YUVTORGB
+               "subs       %2, %2, #8                     \n"
+               "vst4.8     {d20, d21, d22, d23}, [%1]!    \n"
+               "bgt        1b                             \n"
+               : "+r"(src_yuy2),  // %0
+                 "+r"(dst_argb),  // %1
+                 "+r"(width)      // %2
+               : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+                 [kUVToG] "r"(&yuvconstants->kUVToG),
+                 [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+                 [kYToRgb] "r"(&yuvconstants->kYToRgb)
+               : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9",
+                 "q10", "q11", "q12", "q13", "q14", "q15");
 }
 
-void UYVYToARGBRow_NEON(const uint8* src_uyvy,
-                        uint8* dst_argb,
+void UYVYToARGBRow_NEON(const uint8_t* src_uyvy,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-    "vmov.u8    d23, #255                      \n"
-  "1:                                          \n"
-    READUYVY
-    YUVTORGB
-    "subs       %2, %2, #8                     \n"
-    MEMACCESS(1)
-    "vst4.8     {d20, d21, d22, d23}, [%1]!    \n"
-    "bgt        1b                             \n"
-    : "+r"(src_uyvy),  // %0
-      "+r"(dst_argb),  // %1
-      "+r"(width)      // %2
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "q0", "q1", "q2", "q3", "q4",
-      "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+  asm volatile(YUVTORGB_SETUP
+               "vmov.u8    d23, #255                      \n"
+               "1:                                        \n" READUYVY YUVTORGB
+               "subs       %2, %2, #8                     \n"
+               "vst4.8     {d20, d21, d22, d23}, [%1]!    \n"
+               "bgt        1b                             \n"
+               : "+r"(src_uyvy),  // %0
+                 "+r"(dst_argb),  // %1
+                 "+r"(width)      // %2
+               : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+                 [kUVToG] "r"(&yuvconstants->kUVToG),
+                 [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+                 [kYToRgb] "r"(&yuvconstants->kYToRgb)
+               : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q8", "q9",
+                 "q10", "q11", "q12", "q13", "q14", "q15");
 }
 
 // Reads 16 pairs of UV and write even values to dst_u and odd to dst_v.
-void SplitUVRow_NEON(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
+void SplitUVRow_NEON(const uint8_t* src_uv,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
                      int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld2.8     {q0, q1}, [%0]!                \n"  // load 16 pairs of UV
-    "subs       %3, %3, #16                    \n"  // 16 processed per loop
-    MEMACCESS(1)
-    "vst1.8     {q0}, [%1]!                    \n"  // store U
-    MEMACCESS(2)
-    "vst1.8     {q1}, [%2]!                    \n"  // store V
-    "bgt        1b                             \n"
-    : "+r"(src_uv),  // %0
-      "+r"(dst_u),   // %1
-      "+r"(dst_v),   // %2
-      "+r"(width)    // %3  // Output registers
-    :                       // Input registers
-    : "cc", "memory", "q0", "q1"  // Clobber List
-  );
+  asm volatile(
+      "1:                                        \n"
+      "vld2.8     {q0, q1}, [%0]!                \n"  // load 16 pairs of UV
+      "subs       %3, %3, #16                    \n"  // 16 processed per loop
+      "vst1.8     {q0}, [%1]!                    \n"  // store U
+      "vst1.8     {q1}, [%2]!                    \n"  // store V
+      "bgt        1b                             \n"
+      : "+r"(src_uv),               // %0
+        "+r"(dst_u),                // %1
+        "+r"(dst_v),                // %2
+        "+r"(width)                 // %3  // Output registers
+      :                             // Input registers
+      : "cc", "memory", "q0", "q1"  // Clobber List
+      );
 }
 
 // Reads 16 U's and V's and writes out 16 pairs of UV.
-void MergeUVRow_NEON(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
+void MergeUVRow_NEON(const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* dst_uv,
                      int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"  // load U
-    MEMACCESS(1)
-    "vld1.8     {q1}, [%1]!                    \n"  // load V
-    "subs       %3, %3, #16                    \n"  // 16 processed per loop
-    MEMACCESS(2)
-    "vst2.u8    {q0, q1}, [%2]!                \n"  // store 16 pairs of UV
-    "bgt        1b                             \n"
-    :
-      "+r"(src_u),   // %0
-      "+r"(src_v),   // %1
-      "+r"(dst_uv),  // %2
-      "+r"(width)    // %3  // Output registers
-    :                       // Input registers
-    : "cc", "memory", "q0", "q1"  // Clobber List
-  );
+  asm volatile(
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0]!                    \n"  // load U
+      "vld1.8     {q1}, [%1]!                    \n"  // load V
+      "subs       %3, %3, #16                    \n"  // 16 processed per loop
+      "vst2.8     {q0, q1}, [%2]!                \n"  // store 16 pairs of UV
+      "bgt        1b                             \n"
+      : "+r"(src_u),                // %0
+        "+r"(src_v),                // %1
+        "+r"(dst_uv),               // %2
+        "+r"(width)                 // %3  // Output registers
+      :                             // Input registers
+      : "cc", "memory", "q0", "q1"  // Clobber List
+      );
+}
+
+// Reads 16 packed RGB and write to planar dst_r, dst_g, dst_b.
+void SplitRGBRow_NEON(const uint8_t* src_rgb,
+                      uint8_t* dst_r,
+                      uint8_t* dst_g,
+                      uint8_t* dst_b,
+                      int width) {
+  asm volatile(
+      "1:                                        \n"
+      "vld3.8     {d0, d2, d4}, [%0]!            \n"  // load 8 RGB
+      "vld3.8     {d1, d3, d5}, [%0]!            \n"  // next 8 RGB
+      "subs       %4, %4, #16                    \n"  // 16 processed per loop
+      "vst1.8     {q0}, [%1]!                    \n"  // store R
+      "vst1.8     {q1}, [%2]!                    \n"  // store G
+      "vst1.8     {q2}, [%3]!                    \n"  // store B
+      "bgt        1b                             \n"
+      : "+r"(src_rgb),                    // %0
+        "+r"(dst_r),                      // %1
+        "+r"(dst_g),                      // %2
+        "+r"(dst_b),                      // %3
+        "+r"(width)                       // %4
+      :                                   // Input registers
+      : "cc", "memory", "d0", "d1", "d2"  // Clobber List
+      );
+}
+
+// Reads 16 planar R's, G's and B's and writes out 16 packed RGB at a time
+void MergeRGBRow_NEON(const uint8_t* src_r,
+                      const uint8_t* src_g,
+                      const uint8_t* src_b,
+                      uint8_t* dst_rgb,
+                      int width) {
+  asm volatile(
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0]!                    \n"  // load R
+      "vld1.8     {q1}, [%1]!                    \n"  // load G
+      "vld1.8     {q2}, [%2]!                    \n"  // load B
+      "subs       %4, %4, #16                    \n"  // 16 processed per loop
+      "vst3.8     {d0, d2, d4}, [%3]!            \n"  // store 8 RGB
+      "vst3.8     {d1, d3, d5}, [%3]!            \n"  // next 8 RGB
+      "bgt        1b                             \n"
+      : "+r"(src_r),                      // %0
+        "+r"(src_g),                      // %1
+        "+r"(src_b),                      // %2
+        "+r"(dst_rgb),                    // %3
+        "+r"(width)                       // %4
+      :                                   // Input registers
+      : "cc", "memory", "q0", "q1", "q2"  // Clobber List
+      );
 }
 
 // Copy multiple of 32.  vld4.8  allow unaligned and is fastest on a15.
-void CopyRow_NEON(const uint8* src, uint8* dst, int count) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 32
-    "subs       %2, %2, #32                    \n"  // 32 processed per loop
-    MEMACCESS(1)
-    "vst1.8     {d0, d1, d2, d3}, [%1]!        \n"  // store 32
-    "bgt        1b                             \n"
-  : "+r"(src),   // %0
-    "+r"(dst),   // %1
-    "+r"(count)  // %2  // Output registers
-  :                     // Input registers
-  : "cc", "memory", "q0", "q1"  // Clobber List
-  );
+void CopyRow_NEON(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "1:                                        \n"
+      "vld1.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 32
+      "subs       %2, %2, #32                    \n"  // 32 processed per loop
+      "vst1.8     {d0, d1, d2, d3}, [%1]!        \n"  // store 32
+      "bgt        1b                             \n"
+      : "+r"(src),                  // %0
+        "+r"(dst),                  // %1
+        "+r"(width)                 // %2  // Output registers
+      :                             // Input registers
+      : "cc", "memory", "q0", "q1"  // Clobber List
+      );
 }
 
-// SetRow writes 'count' bytes using an 8 bit value repeated.
-void SetRow_NEON(uint8* dst, uint8 v8, int count) {
-  asm volatile (
-    "vdup.8    q0, %2                          \n"  // duplicate 16 bytes
-  "1:                                          \n"
-    "subs      %1, %1, #16                     \n"  // 16 bytes per loop
-    MEMACCESS(0)
-    "vst1.8    {q0}, [%0]!                     \n"  // store
-    "bgt       1b                              \n"
-  : "+r"(dst),   // %0
-    "+r"(count)  // %1
-  : "r"(v8)      // %2
-  : "cc", "memory", "q0"
-  );
+// SetRow writes 'width' bytes using an 8 bit value repeated.
+void SetRow_NEON(uint8_t* dst, uint8_t v8, int width) {
+  asm volatile(
+      "vdup.8    q0, %2                          \n"  // duplicate 16 bytes
+      "1:                                        \n"
+      "subs      %1, %1, #16                     \n"  // 16 bytes per loop
+      "vst1.8    {q0}, [%0]!                     \n"  // store
+      "bgt       1b                              \n"
+      : "+r"(dst),   // %0
+        "+r"(width)  // %1
+      : "r"(v8)      // %2
+      : "cc", "memory", "q0");
 }
 
-// ARGBSetRow writes 'count' pixels using an 32 bit value repeated.
-void ARGBSetRow_NEON(uint8* dst, uint32 v32, int count) {
-  asm volatile (
-    "vdup.u32  q0, %2                          \n"  // duplicate 4 ints
-  "1:                                          \n"
-    "subs      %1, %1, #4                      \n"  // 4 pixels per loop
-    MEMACCESS(0)
-    "vst1.8    {q0}, [%0]!                     \n"  // store
-    "bgt       1b                              \n"
-  : "+r"(dst),   // %0
-    "+r"(count)  // %1
-  : "r"(v32)     // %2
-  : "cc", "memory", "q0"
-  );
+// ARGBSetRow writes 'width' pixels using an 32 bit value repeated.
+void ARGBSetRow_NEON(uint8_t* dst, uint32_t v32, int width) {
+  asm volatile(
+      "vdup.u32  q0, %2                          \n"  // duplicate 4 ints
+      "1:                                        \n"
+      "subs      %1, %1, #4                      \n"  // 4 pixels per loop
+      "vst1.8    {q0}, [%0]!                     \n"  // store
+      "bgt       1b                              \n"
+      : "+r"(dst),   // %0
+        "+r"(width)  // %1
+      : "r"(v32)     // %2
+      : "cc", "memory", "q0");
 }
 
-void MirrorRow_NEON(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-    // Start at end of source row.
-    "mov        r3, #-16                       \n"
-    "add        %0, %0, %2                     \n"
-    "sub        %0, #16                        \n"
+void MirrorRow_NEON(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      // Start at end of source row.
+      "mov        r3, #-16                       \n"
+      "add        %0, %0, %2                     \n"
+      "sub        %0, #16                        \n"
 
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0], r3                 \n"  // src -= 16
-    "subs       %2, #16                        \n"  // 16 pixels per loop.
-    "vrev64.8   q0, q0                         \n"
-    MEMACCESS(1)
-    "vst1.8     {d1}, [%1]!                    \n"  // dst += 16
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"
-    "bgt        1b                             \n"
-  : "+r"(src),   // %0
-    "+r"(dst),   // %1
-    "+r"(width)  // %2
-  :
-  : "cc", "memory", "r3", "q0"
-  );
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0], r3                 \n"  // src -= 16
+      "subs       %2, #16                        \n"  // 16 pixels per loop.
+      "vrev64.8   q0, q0                         \n"
+      "vst1.8     {d1}, [%1]!                    \n"  // dst += 16
+      "vst1.8     {d0}, [%1]!                    \n"
+      "bgt        1b                             \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      :
+      : "cc", "memory", "r3", "q0");
 }
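The MirrorRow kernel above walks the source backwards 16 bytes at a time and reverses each block with `vrev64.8`; the net effect is a simple byte reversal of the row. A scalar sketch of that effect (illustrative only, not the library's actual C fallback):

```c
#include <stdint.h>

// Hypothetical scalar reference: dst is src written back-to-front,
// which is what the negative-stride load plus vrev64.8 achieves.
static void MirrorRow_C(const uint8_t* src, uint8_t* dst, int width) {
  for (int i = 0; i < width; ++i)
    dst[i] = src[width - 1 - i];
}
```

MirrorUVRow and ARGBMirrorRow follow the same pattern with 2-byte and 4-byte elements respectively, which is why they use `vrev64.8` on de-interleaved planes and `vrev64.32` on whole pixels.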
 
-void MirrorUVRow_NEON(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
+void MirrorUVRow_NEON(const uint8_t* src_uv,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
                       int width) {
-  asm volatile (
-    // Start at end of source row.
-    "mov        r12, #-16                      \n"
-    "add        %0, %0, %3, lsl #1             \n"
-    "sub        %0, #16                        \n"
+  asm volatile(
+      // Start at end of source row.
+      "mov        r12, #-16                      \n"
+      "add        %0, %0, %3, lsl #1             \n"
+      "sub        %0, #16                        \n"
 
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld2.8     {d0, d1}, [%0], r12            \n"  // src -= 16
-    "subs       %3, #8                         \n"  // 8 pixels per loop.
-    "vrev64.8   q0, q0                         \n"
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"  // dst += 8
-    MEMACCESS(2)
-    "vst1.8     {d1}, [%2]!                    \n"
-    "bgt        1b                             \n"
-  : "+r"(src_uv),  // %0
-    "+r"(dst_u),   // %1
-    "+r"(dst_v),   // %2
-    "+r"(width)    // %3
-  :
-  : "cc", "memory", "r12", "q0"
-  );
+      "1:                                        \n"
+      "vld2.8     {d0, d1}, [%0], r12            \n"  // src -= 16
+      "subs       %3, #8                         \n"  // 8 pixels per loop.
+      "vrev64.8   q0, q0                         \n"
+      "vst1.8     {d0}, [%1]!                    \n"  // dst += 8
+      "vst1.8     {d1}, [%2]!                    \n"
+      "bgt        1b                             \n"
+      : "+r"(src_uv),  // %0
+        "+r"(dst_u),   // %1
+        "+r"(dst_v),   // %2
+        "+r"(width)    // %3
+      :
+      : "cc", "memory", "r12", "q0");
 }
 
-void ARGBMirrorRow_NEON(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-    // Start at end of source row.
-    "mov        r3, #-16                       \n"
-    "add        %0, %0, %2, lsl #2             \n"
-    "sub        %0, #16                        \n"
+void ARGBMirrorRow_NEON(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      // Start at end of source row.
+      "mov        r3, #-16                       \n"
+      "add        %0, %0, %2, lsl #2             \n"
+      "sub        %0, #16                        \n"
 
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0], r3                 \n"  // src -= 16
-    "subs       %2, #4                         \n"  // 4 pixels per loop.
-    "vrev64.32  q0, q0                         \n"
-    MEMACCESS(1)
-    "vst1.8     {d1}, [%1]!                    \n"  // dst += 16
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"
-    "bgt        1b                             \n"
-  : "+r"(src),   // %0
-    "+r"(dst),   // %1
-    "+r"(width)  // %2
-  :
-  : "cc", "memory", "r3", "q0"
-  );
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0], r3                 \n"  // src -= 16
+      "subs       %2, #4                         \n"  // 4 pixels per loop.
+      "vrev64.32  q0, q0                         \n"
+      "vst1.8     {d1}, [%1]!                    \n"  // dst += 16
+      "vst1.8     {d0}, [%1]!                    \n"
+      "bgt        1b                             \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      :
+      : "cc", "memory", "r3", "q0");
 }
 
-void RGB24ToARGBRow_NEON(const uint8* src_rgb24, uint8* dst_argb, int width) {
-  asm volatile (
-    "vmov.u8    d4, #255                       \n"  // Alpha
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld3.8     {d1, d2, d3}, [%0]!            \n"  // load 8 pixels of RGB24.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    MEMACCESS(1)
-    "vst4.8     {d1, d2, d3, d4}, [%1]!        \n"  // store 8 pixels of ARGB.
-    "bgt        1b                             \n"
-  : "+r"(src_rgb24),  // %0
-    "+r"(dst_argb),   // %1
-    "+r"(width)         // %2
-  :
-  : "cc", "memory", "d1", "d2", "d3", "d4"  // Clobber List
-  );
+void RGB24ToARGBRow_NEON(const uint8_t* src_rgb24,
+                         uint8_t* dst_argb,
+                         int width) {
+  asm volatile(
+      "vmov.u8    d4, #255                       \n"  // Alpha
+      "1:                                        \n"
+      "vld3.8     {d1, d2, d3}, [%0]!            \n"  // load 8 pixels of RGB24.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vst4.8     {d1, d2, d3, d4}, [%1]!        \n"  // store 8 pixels of ARGB.
+      "bgt        1b                             \n"
+      : "+r"(src_rgb24),  // %0
+        "+r"(dst_argb),   // %1
+        "+r"(width)       // %2
+      :
+      : "cc", "memory", "d1", "d2", "d3", "d4"  // Clobber List
+      );
 }
 
-void RAWToARGBRow_NEON(const uint8* src_raw, uint8* dst_argb, int width) {
-  asm volatile (
-    "vmov.u8    d4, #255                       \n"  // Alpha
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld3.8     {d1, d2, d3}, [%0]!            \n"  // load 8 pixels of RAW.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    "vswp.u8    d1, d3                         \n"  // swap R, B
-    MEMACCESS(1)
-    "vst4.8     {d1, d2, d3, d4}, [%1]!        \n"  // store 8 pixels of ARGB.
-    "bgt        1b                             \n"
-  : "+r"(src_raw),   // %0
-    "+r"(dst_argb),  // %1
-    "+r"(width)      // %2
-  :
-  : "cc", "memory", "d1", "d2", "d3", "d4"  // Clobber List
-  );
+void RAWToARGBRow_NEON(const uint8_t* src_raw, uint8_t* dst_argb, int width) {
+  asm volatile(
+      "vmov.u8    d4, #255                       \n"  // Alpha
+      "1:                                        \n"
+      "vld3.8     {d1, d2, d3}, [%0]!            \n"  // load 8 pixels of RAW.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vswp.u8    d1, d3                         \n"  // swap R, B
+      "vst4.8     {d1, d2, d3, d4}, [%1]!        \n"  // store 8 pixels of ARGB.
+      "bgt        1b                             \n"
+      : "+r"(src_raw),   // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "d1", "d2", "d3", "d4"  // Clobber List
+      );
 }
 
-void RAWToRGB24Row_NEON(const uint8* src_raw, uint8* dst_rgb24, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld3.8     {d1, d2, d3}, [%0]!            \n"  // load 8 pixels of RAW.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    "vswp.u8    d1, d3                         \n"  // swap R, B
-    MEMACCESS(1)
-    "vst3.8     {d1, d2, d3}, [%1]!            \n"  // store 8 pixels of RGB24.
-    "bgt        1b                             \n"
-  : "+r"(src_raw),    // %0
-    "+r"(dst_rgb24),  // %1
-    "+r"(width)       // %2
-  :
-  : "cc", "memory", "d1", "d2", "d3"  // Clobber List
-  );
+void RAWToRGB24Row_NEON(const uint8_t* src_raw, uint8_t* dst_rgb24, int width) {
+  asm volatile(
+      "1:                                        \n"
+      "vld3.8     {d1, d2, d3}, [%0]!            \n"  // load 8 pixels of RAW.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vswp.u8    d1, d3                         \n"  // swap R, B
+      "vst3.8     {d1, d2, d3}, [%1]!            \n"  // store 8 pixels of
+                                                      // RGB24.
+      "bgt        1b                             \n"
+      : "+r"(src_raw),    // %0
+        "+r"(dst_rgb24),  // %1
+        "+r"(width)       // %2
+      :
+      : "cc", "memory", "d1", "d2", "d3"  // Clobber List
+      );
 }
 
-#define RGB565TOARGB                                                           \
-    "vshrn.u16  d6, q0, #5                     \n"  /* G xxGGGGGG           */ \
-    "vuzp.u8    d0, d1                         \n"  /* d0 xxxBBBBB RRRRRxxx */ \
-    "vshl.u8    d6, d6, #2                     \n"  /* G GGGGGG00 upper 6   */ \
-    "vshr.u8    d1, d1, #3                     \n"  /* R 000RRRRR lower 5   */ \
-    "vshl.u8    q0, q0, #3                     \n"  /* B,R BBBBB000 upper 5 */ \
-    "vshr.u8    q2, q0, #5                     \n"  /* B,R 00000BBB lower 3 */ \
-    "vorr.u8    d0, d0, d4                     \n"  /* B                    */ \
-    "vshr.u8    d4, d6, #6                     \n"  /* G 000000GG lower 2   */ \
-    "vorr.u8    d2, d1, d5                     \n"  /* R                    */ \
-    "vorr.u8    d1, d4, d6                     \n"  /* G                    */
+#define RGB565TOARGB                                                        \
+  "vshrn.u16  d6, q0, #5                     \n" /* G xxGGGGGG           */ \
+  "vuzp.u8    d0, d1                         \n" /* d0 xxxBBBBB RRRRRxxx */ \
+  "vshl.u8    d6, d6, #2                     \n" /* G GGGGGG00 upper 6   */ \
+  "vshr.u8    d1, d1, #3                     \n" /* R 000RRRRR lower 5   */ \
+  "vshl.u8    q0, q0, #3                     \n" /* B,R BBBBB000 upper 5 */ \
+  "vshr.u8    q2, q0, #5                     \n" /* B,R 00000BBB lower 3 */ \
+  "vorr.u8    d0, d0, d4                     \n" /* B                    */ \
+  "vshr.u8    d4, d6, #6                     \n" /* G 000000GG lower 2   */ \
+  "vorr.u8    d2, d1, d5                     \n" /* R                    */ \
+  "vorr.u8    d1, d4, d6                     \n" /* G                    */
 
-void RGB565ToARGBRow_NEON(const uint8* src_rgb565, uint8* dst_argb, int width) {
-  asm volatile (
-    "vmov.u8    d3, #255                       \n"  // Alpha
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"  // load 8 RGB565 pixels.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    RGB565TOARGB
-    MEMACCESS(1)
-    "vst4.8     {d0, d1, d2, d3}, [%1]!        \n"  // store 8 pixels of ARGB.
-    "bgt        1b                             \n"
-  : "+r"(src_rgb565),  // %0
-    "+r"(dst_argb),    // %1
-    "+r"(width)          // %2
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3"  // Clobber List
-  );
+void RGB565ToARGBRow_NEON(const uint8_t* src_rgb565,
+                          uint8_t* dst_argb,
+                          int width) {
+  asm volatile(
+      "vmov.u8    d3, #255                       \n"  // Alpha
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0]!                    \n"  // load 8 RGB565 pixels.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      RGB565TOARGB
+      "vst4.8     {d0, d1, d2, d3}, [%1]!        \n"  // store 8 pixels of ARGB.
+      "bgt        1b                             \n"
+      : "+r"(src_rgb565),  // %0
+        "+r"(dst_argb),    // %1
+        "+r"(width)        // %2
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q3"  // Clobber List
+      );
 }
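The shift/or dance in RGB565TOARGB implements standard bit replication: a 5-bit channel `c` expands to 8 bits as `(c << 3) | (c >> 2)` and the 6-bit green as `(c << 2) | (c >> 4)`, so field value 0 maps to 0 and the maximum field value maps to 255. A scalar sketch under the assumption that the low 5 bits of the 16-bit pixel are blue (the helper names are hypothetical):

```c
#include <stdint.h>

// Bit replication: copy the top bits of each field into the vacated
// low bits, matching the vshl/vshr/vorr sequence in RGB565TOARGB.
static uint8_t Expand5(uint8_t c) { return (uint8_t)((c << 3) | (c >> 2)); }
static uint8_t Expand6(uint8_t c) { return (uint8_t)((c << 2) | (c >> 4)); }

// Assumed RGB565 packing: bits [4:0] = B, [10:5] = G, [15:11] = R.
static void RGB565Pixel(uint16_t p, uint8_t* b, uint8_t* g, uint8_t* r) {
  *b = Expand5((uint8_t)(p & 0x1f));
  *g = Expand6((uint8_t)((p >> 5) & 0x3f));
  *r = Expand5((uint8_t)(p >> 11));
}
```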
 
-#define ARGB1555TOARGB                                                         \
-    "vshrn.u16  d7, q0, #8                     \n"  /* A Arrrrrxx           */ \
-    "vshr.u8    d6, d7, #2                     \n"  /* R xxxRRRRR           */ \
-    "vshrn.u16  d5, q0, #5                     \n"  /* G xxxGGGGG           */ \
-    "vmovn.u16  d4, q0                         \n"  /* B xxxBBBBB           */ \
-    "vshr.u8    d7, d7, #7                     \n"  /* A 0000000A           */ \
-    "vneg.s8    d7, d7                         \n"  /* A AAAAAAAA upper 8   */ \
-    "vshl.u8    d6, d6, #3                     \n"  /* R RRRRR000 upper 5   */ \
-    "vshr.u8    q1, q3, #5                     \n"  /* R,A 00000RRR lower 3 */ \
-    "vshl.u8    q0, q2, #3                     \n"  /* B,G BBBBB000 upper 5 */ \
-    "vshr.u8    q2, q0, #5                     \n"  /* B,G 00000BBB lower 3 */ \
-    "vorr.u8    q1, q1, q3                     \n"  /* R,A                  */ \
-    "vorr.u8    q0, q0, q2                     \n"  /* B,G                  */ \
+#define ARGB1555TOARGB                                                      \
+  "vshrn.u16  d7, q0, #8                     \n" /* A Arrrrrxx           */ \
+  "vshr.u8    d6, d7, #2                     \n" /* R xxxRRRRR           */ \
+  "vshrn.u16  d5, q0, #5                     \n" /* G xxxGGGGG           */ \
+  "vmovn.u16  d4, q0                         \n" /* B xxxBBBBB           */ \
+  "vshr.u8    d7, d7, #7                     \n" /* A 0000000A           */ \
+  "vneg.s8    d7, d7                         \n" /* A AAAAAAAA upper 8   */ \
+  "vshl.u8    d6, d6, #3                     \n" /* R RRRRR000 upper 5   */ \
+  "vshr.u8    q1, q3, #5                     \n" /* R,A 00000RRR lower 3 */ \
+  "vshl.u8    q0, q2, #3                     \n" /* B,G BBBBB000 upper 5 */ \
+  "vshr.u8    q2, q0, #5                     \n" /* B,G 00000BBB lower 3 */ \
+  "vorr.u8    q1, q1, q3                     \n" /* R,A                  */ \
+  "vorr.u8    q0, q0, q2                     \n" /* B,G                  */
 
 // RGB555TOARGB is same as ARGB1555TOARGB but ignores alpha.
-#define RGB555TOARGB                                                           \
-    "vshrn.u16  d6, q0, #5                     \n"  /* G xxxGGGGG           */ \
-    "vuzp.u8    d0, d1                         \n"  /* d0 xxxBBBBB xRRRRRxx */ \
-    "vshl.u8    d6, d6, #3                     \n"  /* G GGGGG000 upper 5   */ \
-    "vshr.u8    d1, d1, #2                     \n"  /* R 00xRRRRR lower 5   */ \
-    "vshl.u8    q0, q0, #3                     \n"  /* B,R BBBBB000 upper 5 */ \
-    "vshr.u8    q2, q0, #5                     \n"  /* B,R 00000BBB lower 3 */ \
-    "vorr.u8    d0, d0, d4                     \n"  /* B                    */ \
-    "vshr.u8    d4, d6, #5                     \n"  /* G 00000GGG lower 3   */ \
-    "vorr.u8    d2, d1, d5                     \n"  /* R                    */ \
-    "vorr.u8    d1, d4, d6                     \n"  /* G                    */
+#define RGB555TOARGB                                                        \
+  "vshrn.u16  d6, q0, #5                     \n" /* G xxxGGGGG           */ \
+  "vuzp.u8    d0, d1                         \n" /* d0 xxxBBBBB xRRRRRxx */ \
+  "vshl.u8    d6, d6, #3                     \n" /* G GGGGG000 upper 5   */ \
+  "vshr.u8    d1, d1, #2                     \n" /* R 00xRRRRR lower 5   */ \
+  "vshl.u8    q0, q0, #3                     \n" /* B,R BBBBB000 upper 5 */ \
+  "vshr.u8    q2, q0, #5                     \n" /* B,R 00000BBB lower 3 */ \
+  "vorr.u8    d0, d0, d4                     \n" /* B                    */ \
+  "vshr.u8    d4, d6, #5                     \n" /* G 00000GGG lower 3   */ \
+  "vorr.u8    d2, d1, d5                     \n" /* R                    */ \
+  "vorr.u8    d1, d4, d6                     \n" /* G                    */
 
-void ARGB1555ToARGBRow_NEON(const uint8* src_argb1555, uint8* dst_argb,
+void ARGB1555ToARGBRow_NEON(const uint8_t* src_argb1555,
+                            uint8_t* dst_argb,
                             int width) {
-  asm volatile (
-    "vmov.u8    d3, #255                       \n"  // Alpha
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"  // load 8 ARGB1555 pixels.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    ARGB1555TOARGB
-    MEMACCESS(1)
-    "vst4.8     {d0, d1, d2, d3}, [%1]!        \n"  // store 8 pixels of ARGB.
-    "bgt        1b                             \n"
-  : "+r"(src_argb1555),  // %0
-    "+r"(dst_argb),    // %1
-    "+r"(width)          // %2
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3"  // Clobber List
-  );
+  asm volatile(
+      "vmov.u8    d3, #255                       \n"  // Alpha
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0]!                    \n"  // load 8 ARGB1555 pixels.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      ARGB1555TOARGB
+      "vst4.8     {d0, d1, d2, d3}, [%1]!        \n"  // store 8 pixels of ARGB.
+      "bgt        1b                             \n"
+      : "+r"(src_argb1555),  // %0
+        "+r"(dst_argb),      // %1
+        "+r"(width)          // %2
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q3"  // Clobber List
+      );
 }
 
-#define ARGB4444TOARGB                                                         \
-    "vuzp.u8    d0, d1                         \n"  /* d0 BG, d1 RA         */ \
-    "vshl.u8    q2, q0, #4                     \n"  /* B,R BBBB0000         */ \
-    "vshr.u8    q1, q0, #4                     \n"  /* G,A 0000GGGG         */ \
-    "vshr.u8    q0, q2, #4                     \n"  /* B,R 0000BBBB         */ \
-    "vorr.u8    q0, q0, q2                     \n"  /* B,R BBBBBBBB         */ \
-    "vshl.u8    q2, q1, #4                     \n"  /* G,A GGGG0000         */ \
-    "vorr.u8    q1, q1, q2                     \n"  /* G,A GGGGGGGG         */ \
-    "vswp.u8    d1, d2                         \n"  /* B,R,G,A -> B,G,R,A   */
+#define ARGB4444TOARGB                                                      \
+  "vuzp.u8    d0, d1                         \n" /* d0 BG, d1 RA         */ \
+  "vshl.u8    q2, q0, #4                     \n" /* B,R BBBB0000         */ \
+  "vshr.u8    q1, q0, #4                     \n" /* G,A 0000GGGG         */ \
+  "vshr.u8    q0, q2, #4                     \n" /* B,R 0000BBBB         */ \
+  "vorr.u8    q0, q0, q2                     \n" /* B,R BBBBBBBB         */ \
+  "vshl.u8    q2, q1, #4                     \n" /* G,A GGGG0000         */ \
+  "vorr.u8    q1, q1, q2                     \n" /* G,A GGGGGGGG         */ \
+  "vswp.u8    d1, d2                         \n" /* B,R,G,A -> B,G,R,A   */
 
-void ARGB4444ToARGBRow_NEON(const uint8* src_argb4444, uint8* dst_argb,
+void ARGB4444ToARGBRow_NEON(const uint8_t* src_argb4444,
+                            uint8_t* dst_argb,
                             int width) {
-  asm volatile (
-    "vmov.u8    d3, #255                       \n"  // Alpha
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"  // load 8 ARGB4444 pixels.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    ARGB4444TOARGB
-    MEMACCESS(1)
-    "vst4.8     {d0, d1, d2, d3}, [%1]!        \n"  // store 8 pixels of ARGB.
-    "bgt        1b                             \n"
-  : "+r"(src_argb4444),  // %0
-    "+r"(dst_argb),    // %1
-    "+r"(width)          // %2
-  :
-  : "cc", "memory", "q0", "q1", "q2"  // Clobber List
-  );
+  asm volatile(
+      "vmov.u8    d3, #255                       \n"  // Alpha
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0]!                    \n"  // load 8 ARGB4444 pixels.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      ARGB4444TOARGB
+      "vst4.8     {d0, d1, d2, d3}, [%1]!        \n"  // store 8 pixels of ARGB.
+      "bgt        1b                             \n"
+      : "+r"(src_argb4444),  // %0
+        "+r"(dst_argb),      // %1
+        "+r"(width)          // %2
+      :
+      : "cc", "memory", "q0", "q1", "q2"  // Clobber List
+      );
 }
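ARGB4444TOARGB uses the same bit-replication idea with nibbles: `(c << 4) | c` (i.e. `c * 17`) expands a 4-bit field to 8 bits exactly. A scalar sketch, assuming the nibble order B, G, R, A from lowest to highest bits of the 16-bit pixel (the helper names and that packing assumption are mine, inferred from the vuzp/vswp sequence above):

```c
#include <stdint.h>

// Nibble replication: (c << 4) | c maps 0 -> 0 and 15 -> 255 exactly.
static uint8_t Expand4(uint8_t c) { return (uint8_t)((c << 4) | c); }

// Assumed ARGB4444 packing: bits [3:0] = B, [7:4] = G, [11:8] = R, [15:12] = A.
static void ARGB4444Pixel(uint16_t p, uint8_t* b, uint8_t* g,
                          uint8_t* r, uint8_t* a) {
  *b = Expand4((uint8_t)(p & 0xf));
  *g = Expand4((uint8_t)((p >> 4) & 0xf));
  *r = Expand4((uint8_t)((p >> 8) & 0xf));
  *a = Expand4((uint8_t)(p >> 12));
}
```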
 
-void ARGBToRGB24Row_NEON(const uint8* src_argb, uint8* dst_rgb24, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d1, d2, d3, d4}, [%0]!        \n"  // load 8 pixels of ARGB.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    MEMACCESS(1)
-    "vst3.8     {d1, d2, d3}, [%1]!            \n"  // store 8 pixels of RGB24.
-    "bgt        1b                             \n"
-  : "+r"(src_argb),   // %0
-    "+r"(dst_rgb24),  // %1
-    "+r"(width)         // %2
-  :
-  : "cc", "memory", "d1", "d2", "d3", "d4"  // Clobber List
-  );
-}
-
-void ARGBToRAWRow_NEON(const uint8* src_argb, uint8* dst_raw, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d1, d2, d3, d4}, [%0]!        \n"  // load 8 pixels of ARGB.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    "vswp.u8    d1, d3                         \n"  // swap R, B
-    MEMACCESS(1)
-    "vst3.8     {d1, d2, d3}, [%1]!            \n"  // store 8 pixels of RAW.
-    "bgt        1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_raw),   // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "d1", "d2", "d3", "d4"  // Clobber List
-  );
-}
-
-void YUY2ToYRow_NEON(const uint8* src_yuy2, uint8* dst_y, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld2.8     {q0, q1}, [%0]!                \n"  // load 16 pixels of YUY2.
-    "subs       %2, %2, #16                    \n"  // 16 processed per loop.
-    MEMACCESS(1)
-    "vst1.8     {q0}, [%1]!                    \n"  // store 16 pixels of Y.
-    "bgt        1b                             \n"
-  : "+r"(src_yuy2),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "q0", "q1"  // Clobber List
-  );
-}
-
-void UYVYToYRow_NEON(const uint8* src_uyvy, uint8* dst_y, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld2.8     {q0, q1}, [%0]!                \n"  // load 16 pixels of UYVY.
-    "subs       %2, %2, #16                    \n"  // 16 processed per loop.
-    MEMACCESS(1)
-    "vst1.8     {q1}, [%1]!                    \n"  // store 16 pixels of Y.
-    "bgt        1b                             \n"
-  : "+r"(src_uyvy),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "q0", "q1"  // Clobber List
-  );
-}
-
-void YUY2ToUV422Row_NEON(const uint8* src_yuy2, uint8* dst_u, uint8* dst_v,
+void ARGBToRGB24Row_NEON(const uint8_t* src_argb,
+                         uint8_t* dst_rgb24,
                          int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 16 pixels of YUY2.
-    "subs       %3, %3, #16                    \n"  // 16 pixels = 8 UVs.
-    MEMACCESS(1)
-    "vst1.8     {d1}, [%1]!                    \n"  // store 8 U.
-    MEMACCESS(2)
-    "vst1.8     {d3}, [%2]!                    \n"  // store 8 V.
-    "bgt        1b                             \n"
-  : "+r"(src_yuy2),  // %0
-    "+r"(dst_u),     // %1
-    "+r"(dst_v),     // %2
-    "+r"(width)        // %3
-  :
-  : "cc", "memory", "d0", "d1", "d2", "d3"  // Clobber List
-  );
+  asm volatile(
+      "1:                                        \n"
+      "vld4.8     {d1, d2, d3, d4}, [%0]!        \n"  // load 8 pixels of ARGB.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vst3.8     {d1, d2, d3}, [%1]!            \n"  // store 8 pixels of
+                                                      // RGB24.
+      "bgt        1b                             \n"
+      : "+r"(src_argb),   // %0
+        "+r"(dst_rgb24),  // %1
+        "+r"(width)       // %2
+      :
+      : "cc", "memory", "d1", "d2", "d3", "d4"  // Clobber List
+      );
 }
 
-void UYVYToUV422Row_NEON(const uint8* src_uyvy, uint8* dst_u, uint8* dst_v,
+void ARGBToRAWRow_NEON(const uint8_t* src_argb, uint8_t* dst_raw, int width) {
+  asm volatile(
+      "1:                                        \n"
+      "vld4.8     {d1, d2, d3, d4}, [%0]!        \n"  // load 8 pixels of ARGB.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vswp.u8    d1, d3                         \n"  // swap R, B
+      "vst3.8     {d1, d2, d3}, [%1]!            \n"  // store 8 pixels of RAW.
+      "bgt        1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_raw),   // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "d1", "d2", "d3", "d4"  // Clobber List
+      );
+}
+
+void YUY2ToYRow_NEON(const uint8_t* src_yuy2, uint8_t* dst_y, int width) {
+  asm volatile(
+      "1:                                        \n"
+      "vld2.8     {q0, q1}, [%0]!                \n"  // load 16 pixels of YUY2.
+      "subs       %2, %2, #16                    \n"  // 16 processed per loop.
+      "vst1.8     {q0}, [%1]!                    \n"  // store 16 pixels of Y.
+      "bgt        1b                             \n"
+      : "+r"(src_yuy2),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "q0", "q1"  // Clobber List
+      );
+}
+
+void UYVYToYRow_NEON(const uint8_t* src_uyvy, uint8_t* dst_y, int width) {
+  asm volatile(
+      "1:                                        \n"
+      "vld2.8     {q0, q1}, [%0]!                \n"  // load 16 pixels of UYVY.
+      "subs       %2, %2, #16                    \n"  // 16 processed per loop.
+      "vst1.8     {q1}, [%1]!                    \n"  // store 16 pixels of Y.
+      "bgt        1b                             \n"
+      : "+r"(src_uyvy),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "q0", "q1"  // Clobber List
+      );
+}
+
+void YUY2ToUV422Row_NEON(const uint8_t* src_yuy2,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
                          int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 16 pixels of UYVY.
-    "subs       %3, %3, #16                    \n"  // 16 pixels = 8 UVs.
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"  // store 8 U.
-    MEMACCESS(2)
-    "vst1.8     {d2}, [%2]!                    \n"  // store 8 V.
-    "bgt        1b                             \n"
-  : "+r"(src_uyvy),  // %0
-    "+r"(dst_u),     // %1
-    "+r"(dst_v),     // %2
-    "+r"(width)        // %3
-  :
-  : "cc", "memory", "d0", "d1", "d2", "d3"  // Clobber List
-  );
+  asm volatile(
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 16 pixels of YUY2.
+      "subs       %3, %3, #16                    \n"  // 16 pixels = 8 UVs.
+      "vst1.8     {d1}, [%1]!                    \n"  // store 8 U.
+      "vst1.8     {d3}, [%2]!                    \n"  // store 8 V.
+      "bgt        1b                             \n"
+      : "+r"(src_yuy2),  // %0
+        "+r"(dst_u),     // %1
+        "+r"(dst_v),     // %2
+        "+r"(width)      // %3
+      :
+      : "cc", "memory", "d0", "d1", "d2", "d3"  // Clobber List
+      );
 }
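The YUY2/UYVY kernels exploit `vld4.8`'s 4-way de-interleave: YUY2 stores pixel pairs as Y0 U Y1 V, so lanes d1/d3 carry U/V, while UYVY (U Y0 V Y1) carries them in d0/d2. A scalar sketch of the YUY2 UV422 extraction (illustrative; not the library's actual C fallback, and like the NEON path it assumes an even width):

```c
#include <stdint.h>

// Hypothetical scalar reference: YUY2 packs two pixels per 4 bytes as
// Y0 U Y1 V; each pixel pair shares one U and one V sample.
static void YUY2ToUV422Row_C(const uint8_t* src_yuy2, uint8_t* dst_u,
                             uint8_t* dst_v, int width) {
  for (int x = 0; x < width; x += 2) {
    dst_u[x / 2] = src_yuy2[x * 2 + 1];  // byte 1 of the 4-byte group
    dst_v[x / 2] = src_yuy2[x * 2 + 3];  // byte 3 of the 4-byte group
  }
}
```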
 
-void YUY2ToUVRow_NEON(const uint8* src_yuy2, int stride_yuy2,
-                      uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "add        %1, %0, %1                     \n"  // stride + src_yuy2
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 16 pixels of YUY2.
-    "subs       %4, %4, #16                    \n"  // 16 pixels = 8 UVs.
-    MEMACCESS(1)
-    "vld4.8     {d4, d5, d6, d7}, [%1]!        \n"  // load next row YUY2.
-    "vrhadd.u8  d1, d1, d5                     \n"  // average rows of U
-    "vrhadd.u8  d3, d3, d7                     \n"  // average rows of V
-    MEMACCESS(2)
-    "vst1.8     {d1}, [%2]!                    \n"  // store 8 U.
-    MEMACCESS(3)
-    "vst1.8     {d3}, [%3]!                    \n"  // store 8 V.
-    "bgt        1b                             \n"
-  : "+r"(src_yuy2),     // %0
-    "+r"(stride_yuy2),  // %1
-    "+r"(dst_u),        // %2
-    "+r"(dst_v),        // %3
-    "+r"(width)           // %4
-  :
-  : "cc", "memory", "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7"  // Clobber List
-  );
+void UYVYToUV422Row_NEON(const uint8_t* src_uyvy,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width) {
+  asm volatile(
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 16 pixels of UYVY.
+      "subs       %3, %3, #16                    \n"  // 16 pixels = 8 UVs.
+      "vst1.8     {d0}, [%1]!                    \n"  // store 8 U.
+      "vst1.8     {d2}, [%2]!                    \n"  // store 8 V.
+      "bgt        1b                             \n"
+      : "+r"(src_uyvy),  // %0
+        "+r"(dst_u),     // %1
+        "+r"(dst_v),     // %2
+        "+r"(width)      // %3
+      :
+      : "cc", "memory", "d0", "d1", "d2", "d3"  // Clobber List
+      );
 }
 
-void UYVYToUVRow_NEON(const uint8* src_uyvy, int stride_uyvy,
-                      uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "add        %1, %0, %1                     \n"  // stride + src_uyvy
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 16 pixels of UYVY.
-    "subs       %4, %4, #16                    \n"  // 16 pixels = 8 UVs.
-    MEMACCESS(1)
-    "vld4.8     {d4, d5, d6, d7}, [%1]!        \n"  // load next row UYVY.
-    "vrhadd.u8  d0, d0, d4                     \n"  // average rows of U
-    "vrhadd.u8  d2, d2, d6                     \n"  // average rows of V
-    MEMACCESS(2)
-    "vst1.8     {d0}, [%2]!                    \n"  // store 8 U.
-    MEMACCESS(3)
-    "vst1.8     {d2}, [%3]!                    \n"  // store 8 V.
-    "bgt        1b                             \n"
-  : "+r"(src_uyvy),     // %0
-    "+r"(stride_uyvy),  // %1
-    "+r"(dst_u),        // %2
-    "+r"(dst_v),        // %3
-    "+r"(width)           // %4
-  :
-  : "cc", "memory", "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7"  // Clobber List
-  );
+void YUY2ToUVRow_NEON(const uint8_t* src_yuy2,
+                      int stride_yuy2,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  asm volatile(
+      "add        %1, %0, %1                     \n"  // stride + src_yuy2
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 16 pixels of YUY2.
+      "subs       %4, %4, #16                    \n"  // 16 pixels = 8 UVs.
+      "vld4.8     {d4, d5, d6, d7}, [%1]!        \n"  // load next row YUY2.
+      "vrhadd.u8  d1, d1, d5                     \n"  // average rows of U
+      "vrhadd.u8  d3, d3, d7                     \n"  // average rows of V
+      "vst1.8     {d1}, [%2]!                    \n"  // store 8 U.
+      "vst1.8     {d3}, [%3]!                    \n"  // store 8 V.
+      "bgt        1b                             \n"
+      : "+r"(src_yuy2),     // %0
+        "+r"(stride_yuy2),  // %1
+        "+r"(dst_u),        // %2
+        "+r"(dst_v),        // %3
+        "+r"(width)         // %4
+      :
+      : "cc", "memory", "d0", "d1", "d2", "d3", "d4", "d5", "d6",
+        "d7"  // Clobber List
+      );
+}
+
+void UYVYToUVRow_NEON(const uint8_t* src_uyvy,
+                      int stride_uyvy,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  asm volatile(
+      "add        %1, %0, %1                     \n"  // stride + src_uyvy
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 16 pixels of UYVY.
+      "subs       %4, %4, #16                    \n"  // 16 pixels = 8 UVs.
+      "vld4.8     {d4, d5, d6, d7}, [%1]!        \n"  // load next row UYVY.
+      "vrhadd.u8  d0, d0, d4                     \n"  // average rows of U
+      "vrhadd.u8  d2, d2, d6                     \n"  // average rows of V
+      "vst1.8     {d0}, [%2]!                    \n"  // store 8 U.
+      "vst1.8     {d2}, [%3]!                    \n"  // store 8 V.
+      "bgt        1b                             \n"
+      : "+r"(src_uyvy),     // %0
+        "+r"(stride_uyvy),  // %1
+        "+r"(dst_u),        // %2
+        "+r"(dst_v),        // %3
+        "+r"(width)         // %4
+      :
+      : "cc", "memory", "d0", "d1", "d2", "d3", "d4", "d5", "d6",
+        "d7"  // Clobber List
+      );
 }
 
 // For BGRAToARGB, ABGRToARGB, RGBAToARGB, and ARGBToRGBA.
-void ARGBShuffleRow_NEON(const uint8* src_argb, uint8* dst_argb,
-                         const uint8* shuffler, int width) {
-  asm volatile (
-    MEMACCESS(3)
-    "vld1.8     {q2}, [%3]                     \n"  // shuffler
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"  // load 4 pixels.
-    "subs       %2, %2, #4                     \n"  // 4 processed per loop
-    "vtbl.8     d2, {d0, d1}, d4               \n"  // look up 2 first pixels
-    "vtbl.8     d3, {d0, d1}, d5               \n"  // look up 2 next pixels
-    MEMACCESS(1)
-    "vst1.8     {q1}, [%1]!                    \n"  // store 4.
-    "bgt        1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_argb),  // %1
-    "+r"(width)        // %2
-  : "r"(shuffler)    // %3
-  : "cc", "memory", "q0", "q1", "q2"  // Clobber List
-  );
+void ARGBShuffleRow_NEON(const uint8_t* src_argb,
+                         uint8_t* dst_argb,
+                         const uint8_t* shuffler,
+                         int width) {
+  asm volatile(
+      "vld1.8     {q2}, [%3]                     \n"  // shuffler
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0]!                    \n"  // load 4 pixels.
+      "subs       %2, %2, #4                     \n"  // 4 processed per loop
+      "vtbl.8     d2, {d0, d1}, d4               \n"  // look up 2 first pixels
+      "vtbl.8     d3, {d0, d1}, d5               \n"  // look up 2 next pixels
+      "vst1.8     {q1}, [%1]!                    \n"  // store 4.
+      "bgt        1b                             \n"
+      : "+r"(src_argb),                   // %0
+        "+r"(dst_argb),                   // %1
+        "+r"(width)                       // %2
+      : "r"(shuffler)                     // %3
+      : "cc", "memory", "q0", "q1", "q2"  // Clobber List
+      );
 }
 
-void I422ToYUY2Row_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_yuy2, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld2.8     {d0, d2}, [%0]!                \n"  // load 16 Ys
-    MEMACCESS(1)
-    "vld1.8     {d1}, [%1]!                    \n"  // load 8 Us
-    MEMACCESS(2)
-    "vld1.8     {d3}, [%2]!                    \n"  // load 8 Vs
-    "subs       %4, %4, #16                    \n"  // 16 pixels
-    MEMACCESS(3)
-    "vst4.8     {d0, d1, d2, d3}, [%3]!        \n"  // Store 8 YUY2/16 pixels.
-    "bgt        1b                             \n"
-  : "+r"(src_y),     // %0
-    "+r"(src_u),     // %1
-    "+r"(src_v),     // %2
-    "+r"(dst_yuy2),  // %3
-    "+r"(width)      // %4
-  :
-  : "cc", "memory", "d0", "d1", "d2", "d3"
-  );
+void I422ToYUY2Row_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_yuy2,
+                        int width) {
+  asm volatile(
+      "1:                                        \n"
+      "vld2.8     {d0, d2}, [%0]!                \n"  // load 16 Ys
+      "vld1.8     {d1}, [%1]!                    \n"  // load 8 Us
+      "vld1.8     {d3}, [%2]!                    \n"  // load 8 Vs
+      "subs       %4, %4, #16                    \n"  // 16 pixels
+      "vst4.8     {d0, d1, d2, d3}, [%3]!        \n"  // Store 8 YUY2/16 pixels.
+      "bgt        1b                             \n"
+      : "+r"(src_y),     // %0
+        "+r"(src_u),     // %1
+        "+r"(src_v),     // %2
+        "+r"(dst_yuy2),  // %3
+        "+r"(width)      // %4
+      :
+      : "cc", "memory", "d0", "d1", "d2", "d3");
 }
 
-void I422ToUYVYRow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_uyvy, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld2.8     {d1, d3}, [%0]!                \n"  // load 16 Ys
-    MEMACCESS(1)
-    "vld1.8     {d0}, [%1]!                    \n"  // load 8 Us
-    MEMACCESS(2)
-    "vld1.8     {d2}, [%2]!                    \n"  // load 8 Vs
-    "subs       %4, %4, #16                    \n"  // 16 pixels
-    MEMACCESS(3)
-    "vst4.8     {d0, d1, d2, d3}, [%3]!        \n"  // Store 8 UYVY/16 pixels.
-    "bgt        1b                             \n"
-  : "+r"(src_y),     // %0
-    "+r"(src_u),     // %1
-    "+r"(src_v),     // %2
-    "+r"(dst_uyvy),  // %3
-    "+r"(width)      // %4
-  :
-  : "cc", "memory", "d0", "d1", "d2", "d3"
-  );
+void I422ToUYVYRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_uyvy,
+                        int width) {
+  asm volatile(
+      "1:                                        \n"
+      "vld2.8     {d1, d3}, [%0]!                \n"  // load 16 Ys
+      "vld1.8     {d0}, [%1]!                    \n"  // load 8 Us
+      "vld1.8     {d2}, [%2]!                    \n"  // load 8 Vs
+      "subs       %4, %4, #16                    \n"  // 16 pixels
+      "vst4.8     {d0, d1, d2, d3}, [%3]!        \n"  // Store 8 UYVY/16 pixels.
+      "bgt        1b                             \n"
+      : "+r"(src_y),     // %0
+        "+r"(src_u),     // %1
+        "+r"(src_v),     // %2
+        "+r"(dst_uyvy),  // %3
+        "+r"(width)      // %4
+      :
+      : "cc", "memory", "d0", "d1", "d2", "d3");
 }
 
-void ARGBToRGB565Row_NEON(const uint8* src_argb, uint8* dst_rgb565, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d20, d21, d22, d23}, [%0]!    \n"  // load 8 pixels of ARGB.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    ARGBTORGB565
-    MEMACCESS(1)
-    "vst1.8     {q0}, [%1]!                    \n"  // store 8 pixels RGB565.
-    "bgt        1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_rgb565),  // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "q0", "q8", "q9", "q10", "q11"
-  );
+void ARGBToRGB565Row_NEON(const uint8_t* src_argb,
+                          uint8_t* dst_rgb565,
+                          int width) {
+  asm volatile(
+      "1:                                        \n"
+      "vld4.8     {d20, d21, d22, d23}, [%0]!    \n"  // load 8 pixels of ARGB.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      ARGBTORGB565
+      "vst1.8     {q0}, [%1]!                    \n"  // store 8 pixels RGB565.
+      "bgt        1b                             \n"
+      : "+r"(src_argb),    // %0
+        "+r"(dst_rgb565),  // %1
+        "+r"(width)        // %2
+      :
+      : "cc", "memory", "q0", "q8", "q9", "q10", "q11");
 }
 
-void ARGBToRGB565DitherRow_NEON(const uint8* src_argb, uint8* dst_rgb,
-                                const uint32 dither4, int width) {
-  asm volatile (
-    "vdup.32    d2, %2                         \n"  // dither4
-  "1:                                          \n"
-    MEMACCESS(1)
-    "vld4.8     {d20, d21, d22, d23}, [%1]!    \n"  // load 8 pixels of ARGB.
-    "subs       %3, %3, #8                     \n"  // 8 processed per loop.
-    "vqadd.u8   d20, d20, d2                   \n"
-    "vqadd.u8   d21, d21, d2                   \n"
-    "vqadd.u8   d22, d22, d2                   \n"
-    ARGBTORGB565
-    MEMACCESS(0)
-    "vst1.8     {q0}, [%0]!                    \n"  // store 8 pixels RGB565.
-    "bgt        1b                             \n"
-  : "+r"(dst_rgb)    // %0
-  : "r"(src_argb),   // %1
-    "r"(dither4),    // %2
-    "r"(width)       // %3
-  : "cc", "memory", "q0", "q1", "q8", "q9", "q10", "q11"
-  );
+void ARGBToRGB565DitherRow_NEON(const uint8_t* src_argb,
+                                uint8_t* dst_rgb,
+                                const uint32_t dither4,
+                                int width) {
+  asm volatile(
+      "vdup.32    d2, %2                         \n"  // dither4
+      "1:                                        \n"
+      "vld4.8     {d20, d21, d22, d23}, [%1]!    \n"  // load 8 pixels of ARGB.
+      "subs       %3, %3, #8                     \n"  // 8 processed per loop.
+      "vqadd.u8   d20, d20, d2                   \n"
+      "vqadd.u8   d21, d21, d2                   \n"
+      "vqadd.u8   d22, d22, d2                   \n"  // add for dither
+      ARGBTORGB565
+      "vst1.8     {q0}, [%0]!                    \n"  // store 8 RGB565.
+      "bgt        1b                             \n"
+      : "+r"(dst_rgb)   // %0
+      : "r"(src_argb),  // %1
+        "r"(dither4),   // %2
+        "r"(width)      // %3
+      : "cc", "memory", "q0", "q1", "q8", "q9", "q10", "q11");
 }
 
-void ARGBToARGB1555Row_NEON(const uint8* src_argb, uint8* dst_argb1555,
+void ARGBToARGB1555Row_NEON(const uint8_t* src_argb,
+                            uint8_t* dst_argb1555,
                             int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d20, d21, d22, d23}, [%0]!    \n"  // load 8 pixels of ARGB.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    ARGBTOARGB1555
-    MEMACCESS(1)
-    "vst1.8     {q0}, [%1]!                    \n"  // store 8 pixels ARGB1555.
-    "bgt        1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_argb1555),  // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "q0", "q8", "q9", "q10", "q11"
-  );
+  asm volatile(
+      "1:                                        \n"
+      "vld4.8     {d20, d21, d22, d23}, [%0]!    \n"  // load 8 pixels of ARGB.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      ARGBTOARGB1555
+      "vst1.8     {q0}, [%1]!                    \n"  // store 8 ARGB1555.
+      "bgt        1b                             \n"
+      : "+r"(src_argb),      // %0
+        "+r"(dst_argb1555),  // %1
+        "+r"(width)          // %2
+      :
+      : "cc", "memory", "q0", "q8", "q9", "q10", "q11");
 }
 
-void ARGBToARGB4444Row_NEON(const uint8* src_argb, uint8* dst_argb4444,
+void ARGBToARGB4444Row_NEON(const uint8_t* src_argb,
+                            uint8_t* dst_argb4444,
                             int width) {
-  asm volatile (
-    "vmov.u8    d4, #0x0f                      \n"  // bits to clear with vbic.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d20, d21, d22, d23}, [%0]!    \n"  // load 8 pixels of ARGB.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    ARGBTOARGB4444
-    MEMACCESS(1)
-    "vst1.8     {q0}, [%1]!                    \n"  // store 8 pixels ARGB4444.
-    "bgt        1b                             \n"
-  : "+r"(src_argb),      // %0
-    "+r"(dst_argb4444),  // %1
-    "+r"(width)            // %2
-  :
-  : "cc", "memory", "q0", "q8", "q9", "q10", "q11"
-  );
+  asm volatile(
+      "vmov.u8    d4, #0x0f                      \n"  // bits to clear with
+                                                      // vbic.
+      "1:                                        \n"
+      "vld4.8     {d20, d21, d22, d23}, [%0]!    \n"  // load 8 pixels of ARGB.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      ARGBTOARGB4444
+      "vst1.8     {q0}, [%1]!                    \n"  // store 8 ARGB4444.
+      "bgt        1b                             \n"
+      : "+r"(src_argb),      // %0
+        "+r"(dst_argb4444),  // %1
+        "+r"(width)          // %2
+      :
+      : "cc", "memory", "q0", "q8", "q9", "q10", "q11");
 }
 
-void ARGBToYRow_NEON(const uint8* src_argb, uint8* dst_y, int width) {
-  asm volatile (
-    "vmov.u8    d24, #13                       \n"  // B * 0.1016 coefficient
-    "vmov.u8    d25, #65                       \n"  // G * 0.5078 coefficient
-    "vmov.u8    d26, #33                       \n"  // R * 0.2578 coefficient
-    "vmov.u8    d27, #16                       \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 ARGB pixels.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    "vmull.u8   q2, d0, d24                    \n"  // B
-    "vmlal.u8   q2, d1, d25                    \n"  // G
-    "vmlal.u8   q2, d2, d26                    \n"  // R
-    "vqrshrun.s16 d0, q2, #7                   \n"  // 16 bit to 8 bit Y
-    "vqadd.u8   d0, d27                        \n"
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
-    "bgt        1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q12", "q13"
-  );
+void ARGBToYRow_NEON(const uint8_t* src_argb, uint8_t* dst_y, int width) {
+  asm volatile(
+      "vmov.u8    d24, #13                       \n"  // B * 0.1016 coefficient
+      "vmov.u8    d25, #65                       \n"  // G * 0.5078 coefficient
+      "vmov.u8    d26, #33                       \n"  // R * 0.2578 coefficient
+      "vmov.u8    d27, #16                       \n"  // Add 16 constant
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 ARGB pixels.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vmull.u8   q2, d0, d24                    \n"  // B
+      "vmlal.u8   q2, d1, d25                    \n"  // G
+      "vmlal.u8   q2, d2, d26                    \n"  // R
+      "vqrshrun.s16 d0, q2, #7                   \n"  // 16 bit to 8 bit Y
+      "vqadd.u8   d0, d27                        \n"
+      "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
+      "bgt        1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q12", "q13");
 }
 
-void ARGBExtractAlphaRow_NEON(const uint8* src_argb, uint8* dst_a, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d2, d4, d6}, [%0]!        \n"  // load 8 ARGB pixels
-    "vld4.8     {d1, d3, d5, d7}, [%0]!        \n"  // load next 8 ARGB pixels
-    "subs       %2, %2, #16                    \n"  // 16 processed per loop
-    MEMACCESS(1)
-    "vst1.8     {q3}, [%1]!                    \n"  // store 16 A's.
-    "bgt       1b                              \n"
-  : "+r"(src_argb),   // %0
-    "+r"(dst_a),      // %1
-    "+r"(width)       // %2
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3"  // Clobber List
-  );
+void ARGBExtractAlphaRow_NEON(const uint8_t* src_argb,
+                              uint8_t* dst_a,
+                              int width) {
+  asm volatile(
+      "1:                                        \n"
+      "vld4.8     {d0, d2, d4, d6}, [%0]!        \n"  // load 8 ARGB pixels
+      "vld4.8     {d1, d3, d5, d7}, [%0]!        \n"  // load next 8 ARGB pixels
+      "subs       %2, %2, #16                    \n"  // 16 processed per loop
+      "vst1.8     {q3}, [%1]!                    \n"  // store 16 A's.
+      "bgt       1b                              \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_a),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q3"  // Clobber List
+      );
 }
 
-void ARGBToYJRow_NEON(const uint8* src_argb, uint8* dst_y, int width) {
-  asm volatile (
-    "vmov.u8    d24, #15                       \n"  // B * 0.11400 coefficient
-    "vmov.u8    d25, #75                       \n"  // G * 0.58700 coefficient
-    "vmov.u8    d26, #38                       \n"  // R * 0.29900 coefficient
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 ARGB pixels.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    "vmull.u8   q2, d0, d24                    \n"  // B
-    "vmlal.u8   q2, d1, d25                    \n"  // G
-    "vmlal.u8   q2, d2, d26                    \n"  // R
-    "vqrshrun.s16 d0, q2, #7                   \n"  // 15 bit to 8 bit Y
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
-    "bgt        1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q12", "q13"
-  );
+void ARGBToYJRow_NEON(const uint8_t* src_argb, uint8_t* dst_y, int width) {
+  asm volatile(
+      "vmov.u8    d24, #15                       \n"  // B * 0.11400 coefficient
+      "vmov.u8    d25, #75                       \n"  // G * 0.58700 coefficient
+      "vmov.u8    d26, #38                       \n"  // R * 0.29900 coefficient
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 ARGB pixels.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vmull.u8   q2, d0, d24                    \n"  // B
+      "vmlal.u8   q2, d1, d25                    \n"  // G
+      "vmlal.u8   q2, d2, d26                    \n"  // R
+      "vqrshrun.s16 d0, q2, #7                   \n"  // 15 bit to 8 bit Y
+      "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
+      "bgt        1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q12", "q13");
 }
 
 // 8x1 pixels.
-void ARGBToUV444Row_NEON(const uint8* src_argb, uint8* dst_u, uint8* dst_v,
+void ARGBToUV444Row_NEON(const uint8_t* src_argb,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
                          int width) {
-  asm volatile (
-    "vmov.u8    d24, #112                      \n"  // UB / VR 0.875 coefficient
-    "vmov.u8    d25, #74                       \n"  // UG -0.5781 coefficient
-    "vmov.u8    d26, #38                       \n"  // UR -0.2969 coefficient
-    "vmov.u8    d27, #18                       \n"  // VB -0.1406 coefficient
-    "vmov.u8    d28, #94                       \n"  // VG -0.7344 coefficient
-    "vmov.u16   q15, #0x8080                   \n"  // 128.5
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 ARGB pixels.
-    "subs       %3, %3, #8                     \n"  // 8 processed per loop.
-    "vmull.u8   q2, d0, d24                    \n"  // B
-    "vmlsl.u8   q2, d1, d25                    \n"  // G
-    "vmlsl.u8   q2, d2, d26                    \n"  // R
-    "vadd.u16   q2, q2, q15                    \n"  // +128 -> unsigned
+  asm volatile(
+      "vmov.u8    d24, #112                      \n"  // UB / VR 0.875
+                                                      // coefficient
+      "vmov.u8    d25, #74                       \n"  // UG -0.5781 coefficient
+      "vmov.u8    d26, #38                       \n"  // UR -0.2969 coefficient
+      "vmov.u8    d27, #18                       \n"  // VB -0.1406 coefficient
+      "vmov.u8    d28, #94                       \n"  // VG -0.7344 coefficient
+      "vmov.u16   q15, #0x8080                   \n"  // 128.5
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 ARGB pixels.
+      "subs       %3, %3, #8                     \n"  // 8 processed per loop.
+      "vmull.u8   q2, d0, d24                    \n"  // B
+      "vmlsl.u8   q2, d1, d25                    \n"  // G
+      "vmlsl.u8   q2, d2, d26                    \n"  // R
+      "vadd.u16   q2, q2, q15                    \n"  // +128 -> unsigned
 
-    "vmull.u8   q3, d2, d24                    \n"  // R
-    "vmlsl.u8   q3, d1, d28                    \n"  // G
-    "vmlsl.u8   q3, d0, d27                    \n"  // B
-    "vadd.u16   q3, q3, q15                    \n"  // +128 -> unsigned
+      "vmull.u8   q3, d2, d24                    \n"  // R
+      "vmlsl.u8   q3, d1, d28                    \n"  // G
+      "vmlsl.u8   q3, d0, d27                    \n"  // B
+      "vadd.u16   q3, q3, q15                    \n"  // +128 -> unsigned
 
-    "vqshrn.u16  d0, q2, #8                    \n"  // 16 bit to 8 bit U
-    "vqshrn.u16  d1, q3, #8                    \n"  // 16 bit to 8 bit V
+      "vqshrn.u16  d0, q2, #8                    \n"  // 16 bit to 8 bit U
+      "vqshrn.u16  d1, q3, #8                    \n"  // 16 bit to 8 bit V
 
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels U.
-    MEMACCESS(2)
-    "vst1.8     {d1}, [%2]!                    \n"  // store 8 pixels V.
-    "bgt        1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_u),     // %1
-    "+r"(dst_v),     // %2
-    "+r"(width)        // %3
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q12", "q13", "q14", "q15"
-  );
+      "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels U.
+      "vst1.8     {d1}, [%2]!                    \n"  // store 8 pixels V.
+      "bgt        1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_u),     // %1
+        "+r"(dst_v),     // %2
+        "+r"(width)      // %3
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q12", "q13", "q14",
+        "q15");
 }
 
-// 32x1 pixels -> 8x1.  width is number of argb pixels. e.g. 32.
-void ARGBToUV411Row_NEON(const uint8* src_argb, uint8* dst_u, uint8* dst_v,
-                         int width) {
-  asm volatile (
-    "vmov.s16   q10, #112 / 2                  \n"  // UB / VR 0.875 coefficient
-    "vmov.s16   q11, #74 / 2                   \n"  // UG -0.5781 coefficient
-    "vmov.s16   q12, #38 / 2                   \n"  // UR -0.2969 coefficient
-    "vmov.s16   q13, #18 / 2                   \n"  // VB -0.1406 coefficient
-    "vmov.s16   q14, #94 / 2                   \n"  // VG -0.7344 coefficient
-    "vmov.u16   q15, #0x8080                   \n"  // 128.5
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d2, d4, d6}, [%0]!        \n"  // load 8 ARGB pixels.
-    MEMACCESS(0)
-    "vld4.8     {d1, d3, d5, d7}, [%0]!        \n"  // load next 8 ARGB pixels.
-    "vpaddl.u8  q0, q0                         \n"  // B 16 bytes -> 8 shorts.
-    "vpaddl.u8  q1, q1                         \n"  // G 16 bytes -> 8 shorts.
-    "vpaddl.u8  q2, q2                         \n"  // R 16 bytes -> 8 shorts.
-    MEMACCESS(0)
-    "vld4.8     {d8, d10, d12, d14}, [%0]!     \n"  // load 8 more ARGB pixels.
-    MEMACCESS(0)
-    "vld4.8     {d9, d11, d13, d15}, [%0]!     \n"  // load last 8 ARGB pixels.
-    "vpaddl.u8  q4, q4                         \n"  // B 16 bytes -> 8 shorts.
-    "vpaddl.u8  q5, q5                         \n"  // G 16 bytes -> 8 shorts.
-    "vpaddl.u8  q6, q6                         \n"  // R 16 bytes -> 8 shorts.
-
-    "vpadd.u16  d0, d0, d1                     \n"  // B 16 shorts -> 8 shorts.
-    "vpadd.u16  d1, d8, d9                     \n"  // B
-    "vpadd.u16  d2, d2, d3                     \n"  // G 16 shorts -> 8 shorts.
-    "vpadd.u16  d3, d10, d11                   \n"  // G
-    "vpadd.u16  d4, d4, d5                     \n"  // R 16 shorts -> 8 shorts.
-    "vpadd.u16  d5, d12, d13                   \n"  // R
-
-    "vrshr.u16  q0, q0, #1                     \n"  // 2x average
-    "vrshr.u16  q1, q1, #1                     \n"
-    "vrshr.u16  q2, q2, #1                     \n"
-
-    "subs       %3, %3, #32                    \n"  // 32 processed per loop.
-    "vmul.s16   q8, q0, q10                    \n"  // B
-    "vmls.s16   q8, q1, q11                    \n"  // G
-    "vmls.s16   q8, q2, q12                    \n"  // R
-    "vadd.u16   q8, q8, q15                    \n"  // +128 -> unsigned
-    "vmul.s16   q9, q2, q10                    \n"  // R
-    "vmls.s16   q9, q1, q14                    \n"  // G
-    "vmls.s16   q9, q0, q13                    \n"  // B
-    "vadd.u16   q9, q9, q15                    \n"  // +128 -> unsigned
-    "vqshrn.u16  d0, q8, #8                    \n"  // 16 bit to 8 bit U
-    "vqshrn.u16  d1, q9, #8                    \n"  // 16 bit to 8 bit V
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels U.
-    MEMACCESS(2)
-    "vst1.8     {d1}, [%2]!                    \n"  // store 8 pixels V.
-    "bgt        1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_u),     // %1
-    "+r"(dst_v),     // %2
-    "+r"(width)        // %3
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q5", "q6", "q7",
-    "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
-}
-
+// clang-format off
 // 16x2 pixels -> 8x1.  width is number of argb pixels. e.g. 16.
-#define RGBTOUV(QB, QG, QR) \
-    "vmul.s16   q8, " #QB ", q10               \n"  /* B                    */ \
-    "vmls.s16   q8, " #QG ", q11               \n"  /* G                    */ \
-    "vmls.s16   q8, " #QR ", q12               \n"  /* R                    */ \
-    "vadd.u16   q8, q8, q15                    \n"  /* +128 -> unsigned     */ \
-    "vmul.s16   q9, " #QR ", q10               \n"  /* R                    */ \
-    "vmls.s16   q9, " #QG ", q14               \n"  /* G                    */ \
-    "vmls.s16   q9, " #QB ", q13               \n"  /* B                    */ \
-    "vadd.u16   q9, q9, q15                    \n"  /* +128 -> unsigned     */ \
-    "vqshrn.u16  d0, q8, #8                    \n"  /* 16 bit to 8 bit U    */ \
-    "vqshrn.u16  d1, q9, #8                    \n"  /* 16 bit to 8 bit V    */
+#define RGBTOUV(QB, QG, QR)                                                 \
+  "vmul.s16   q8, " #QB ", q10               \n" /* B                    */ \
+  "vmls.s16   q8, " #QG ", q11               \n" /* G                    */ \
+  "vmls.s16   q8, " #QR ", q12               \n" /* R                    */ \
+  "vadd.u16   q8, q8, q15                    \n" /* +128 -> unsigned     */ \
+  "vmul.s16   q9, " #QR ", q10               \n" /* R                    */ \
+  "vmls.s16   q9, " #QG ", q14               \n" /* G                    */ \
+  "vmls.s16   q9, " #QB ", q13               \n" /* B                    */ \
+  "vadd.u16   q9, q9, q15                    \n" /* +128 -> unsigned     */ \
+  "vqshrn.u16  d0, q8, #8                    \n" /* 16 bit to 8 bit U    */ \
+  "vqshrn.u16  d1, q9, #8                    \n" /* 16 bit to 8 bit V    */
+// clang-format on
 
 // TODO(fbarchard): Consider vhadd vertical, then vpaddl horizontal, avoid shr.
-void ARGBToUVRow_NEON(const uint8* src_argb, int src_stride_argb,
-                      uint8* dst_u, uint8* dst_v, int width) {
+void ARGBToUVRow_NEON(const uint8_t* src_argb,
+                      int src_stride_argb,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
   asm volatile (
     "add        %1, %0, %1                     \n"  // src_stride + src_argb
     "vmov.s16   q10, #112 / 2                  \n"  // UB / VR 0.875 coefficient
@@ -1468,17 +1335,13 @@
     "vmov.s16   q13, #18 / 2                   \n"  // VB -0.1406 coefficient
     "vmov.s16   q14, #94 / 2                   \n"  // VG -0.7344 coefficient
     "vmov.u16   q15, #0x8080                   \n"  // 128.5
-  "1:                                          \n"
-    MEMACCESS(0)
+    "1:                                        \n"
     "vld4.8     {d0, d2, d4, d6}, [%0]!        \n"  // load 8 ARGB pixels.
-    MEMACCESS(0)
     "vld4.8     {d1, d3, d5, d7}, [%0]!        \n"  // load next 8 ARGB pixels.
     "vpaddl.u8  q0, q0                         \n"  // B 16 bytes -> 8 shorts.
     "vpaddl.u8  q1, q1                         \n"  // G 16 bytes -> 8 shorts.
     "vpaddl.u8  q2, q2                         \n"  // R 16 bytes -> 8 shorts.
-    MEMACCESS(1)
     "vld4.8     {d8, d10, d12, d14}, [%1]!     \n"  // load 8 more ARGB pixels.
-    MEMACCESS(1)
     "vld4.8     {d9, d11, d13, d15}, [%1]!     \n"  // load last 8 ARGB pixels.
     "vpadal.u8  q0, q4                         \n"  // B 16 bytes -> 8 shorts.
     "vpadal.u8  q1, q5                         \n"  // G 16 bytes -> 8 shorts.
@@ -1490,9 +1353,7 @@
 
     "subs       %4, %4, #16                    \n"  // 32 processed per loop.
     RGBTOUV(q0, q1, q2)
-    MEMACCESS(2)
     "vst1.8     {d0}, [%2]!                    \n"  // store 8 pixels U.
-    MEMACCESS(3)
     "vst1.8     {d1}, [%3]!                    \n"  // store 8 pixels V.
     "bgt        1b                             \n"
   : "+r"(src_argb),  // %0
@@ -1507,8 +1368,11 @@
 }
 
 // TODO(fbarchard): Subsample match C code.
-void ARGBToUVJRow_NEON(const uint8* src_argb, int src_stride_argb,
-                       uint8* dst_u, uint8* dst_v, int width) {
+void ARGBToUVJRow_NEON(const uint8_t* src_argb,
+                       int src_stride_argb,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width) {
   asm volatile (
     "add        %1, %0, %1                     \n"  // src_stride + src_argb
     "vmov.s16   q10, #127 / 2                  \n"  // UB / VR 0.500 coefficient
@@ -1517,17 +1381,13 @@
     "vmov.s16   q13, #20 / 2                   \n"  // VB -0.08131 coefficient
     "vmov.s16   q14, #107 / 2                  \n"  // VG -0.41869 coefficient
     "vmov.u16   q15, #0x8080                   \n"  // 128.5
-  "1:                                          \n"
-    MEMACCESS(0)
+    "1:                                        \n"
     "vld4.8     {d0, d2, d4, d6}, [%0]!        \n"  // load 8 ARGB pixels.
-    MEMACCESS(0)
     "vld4.8     {d1, d3, d5, d7}, [%0]!        \n"  // load next 8 ARGB pixels.
     "vpaddl.u8  q0, q0                         \n"  // B 16 bytes -> 8 shorts.
     "vpaddl.u8  q1, q1                         \n"  // G 16 bytes -> 8 shorts.
     "vpaddl.u8  q2, q2                         \n"  // R 16 bytes -> 8 shorts.
-    MEMACCESS(1)
     "vld4.8     {d8, d10, d12, d14}, [%1]!     \n"  // load 8 more ARGB pixels.
-    MEMACCESS(1)
     "vld4.8     {d9, d11, d13, d15}, [%1]!     \n"  // load last 8 ARGB pixels.
     "vpadal.u8  q0, q4                         \n"  // B 16 bytes -> 8 shorts.
     "vpadal.u8  q1, q5                         \n"  // G 16 bytes -> 8 shorts.
@@ -1539,9 +1399,7 @@
 
     "subs       %4, %4, #16                    \n"  // 32 processed per loop.
     RGBTOUV(q0, q1, q2)
-    MEMACCESS(2)
     "vst1.8     {d0}, [%2]!                    \n"  // store 8 pixels U.
-    MEMACCESS(3)
     "vst1.8     {d1}, [%3]!                    \n"  // store 8 pixels V.
     "bgt        1b                             \n"
   : "+r"(src_argb),  // %0
@@ -1555,8 +1413,11 @@
   );
 }
 
-void BGRAToUVRow_NEON(const uint8* src_bgra, int src_stride_bgra,
-                      uint8* dst_u, uint8* dst_v, int width) {
+void BGRAToUVRow_NEON(const uint8_t* src_bgra,
+                      int src_stride_bgra,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
   asm volatile (
     "add        %1, %0, %1                     \n"  // src_stride + src_bgra
     "vmov.s16   q10, #112 / 2                  \n"  // UB / VR 0.875 coefficient
@@ -1565,17 +1426,13 @@
     "vmov.s16   q13, #18 / 2                   \n"  // VB -0.1406 coefficient
     "vmov.s16   q14, #94 / 2                   \n"  // VG -0.7344 coefficient
     "vmov.u16   q15, #0x8080                   \n"  // 128.5
-  "1:                                          \n"
-    MEMACCESS(0)
+    "1:                                        \n"
     "vld4.8     {d0, d2, d4, d6}, [%0]!        \n"  // load 8 BGRA pixels.
-    MEMACCESS(0)
     "vld4.8     {d1, d3, d5, d7}, [%0]!        \n"  // load next 8 BGRA pixels.
     "vpaddl.u8  q3, q3                         \n"  // B 16 bytes -> 8 shorts.
     "vpaddl.u8  q2, q2                         \n"  // G 16 bytes -> 8 shorts.
     "vpaddl.u8  q1, q1                         \n"  // R 16 bytes -> 8 shorts.
-    MEMACCESS(1)
     "vld4.8     {d8, d10, d12, d14}, [%1]!     \n"  // load 8 more BGRA pixels.
-    MEMACCESS(1)
     "vld4.8     {d9, d11, d13, d15}, [%1]!     \n"  // load last 8 BGRA pixels.
     "vpadal.u8  q3, q7                         \n"  // B 16 bytes -> 8 shorts.
     "vpadal.u8  q2, q6                         \n"  // G 16 bytes -> 8 shorts.
@@ -1587,9 +1444,7 @@
 
     "subs       %4, %4, #16                    \n"  // 32 processed per loop.
     RGBTOUV(q3, q2, q1)
-    MEMACCESS(2)
     "vst1.8     {d0}, [%2]!                    \n"  // store 8 pixels U.
-    MEMACCESS(3)
     "vst1.8     {d1}, [%3]!                    \n"  // store 8 pixels V.
     "bgt        1b                             \n"
   : "+r"(src_bgra),  // %0
@@ -1603,8 +1458,11 @@
   );
 }
 
-void ABGRToUVRow_NEON(const uint8* src_abgr, int src_stride_abgr,
-                      uint8* dst_u, uint8* dst_v, int width) {
+void ABGRToUVRow_NEON(const uint8_t* src_abgr,
+                      int src_stride_abgr,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
   asm volatile (
     "add        %1, %0, %1                     \n"  // src_stride + src_abgr
     "vmov.s16   q10, #112 / 2                  \n"  // UB / VR 0.875 coefficient
@@ -1613,17 +1471,13 @@
     "vmov.s16   q13, #18 / 2                   \n"  // VB -0.1406 coefficient
     "vmov.s16   q14, #94 / 2                   \n"  // VG -0.7344 coefficient
     "vmov.u16   q15, #0x8080                   \n"  // 128.5
-  "1:                                          \n"
-    MEMACCESS(0)
+    "1:                                        \n"
     "vld4.8     {d0, d2, d4, d6}, [%0]!        \n"  // load 8 ABGR pixels.
-    MEMACCESS(0)
     "vld4.8     {d1, d3, d5, d7}, [%0]!        \n"  // load next 8 ABGR pixels.
     "vpaddl.u8  q2, q2                         \n"  // B 16 bytes -> 8 shorts.
     "vpaddl.u8  q1, q1                         \n"  // G 16 bytes -> 8 shorts.
     "vpaddl.u8  q0, q0                         \n"  // R 16 bytes -> 8 shorts.
-    MEMACCESS(1)
     "vld4.8     {d8, d10, d12, d14}, [%1]!     \n"  // load 8 more ABGR pixels.
-    MEMACCESS(1)
     "vld4.8     {d9, d11, d13, d15}, [%1]!     \n"  // load last 8 ABGR pixels.
     "vpadal.u8  q2, q6                         \n"  // B 16 bytes -> 8 shorts.
     "vpadal.u8  q1, q5                         \n"  // G 16 bytes -> 8 shorts.
@@ -1635,9 +1489,7 @@
 
     "subs       %4, %4, #16                    \n"  // 32 processed per loop.
     RGBTOUV(q2, q1, q0)
-    MEMACCESS(2)
     "vst1.8     {d0}, [%2]!                    \n"  // store 8 pixels U.
-    MEMACCESS(3)
     "vst1.8     {d1}, [%3]!                    \n"  // store 8 pixels V.
     "bgt        1b                             \n"
   : "+r"(src_abgr),  // %0
@@ -1651,8 +1503,11 @@
   );
 }
 
-void RGBAToUVRow_NEON(const uint8* src_rgba, int src_stride_rgba,
-                      uint8* dst_u, uint8* dst_v, int width) {
+void RGBAToUVRow_NEON(const uint8_t* src_rgba,
+                      int src_stride_rgba,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
   asm volatile (
     "add        %1, %0, %1                     \n"  // src_stride + src_rgba
     "vmov.s16   q10, #112 / 2                  \n"  // UB / VR 0.875 coefficient
@@ -1661,17 +1516,13 @@
     "vmov.s16   q13, #18 / 2                   \n"  // VB -0.1406 coefficient
     "vmov.s16   q14, #94 / 2                   \n"  // VG -0.7344 coefficient
     "vmov.u16   q15, #0x8080                   \n"  // 128.5
-  "1:                                          \n"
-    MEMACCESS(0)
+    "1:                                        \n"
     "vld4.8     {d0, d2, d4, d6}, [%0]!        \n"  // load 8 RGBA pixels.
-    MEMACCESS(0)
     "vld4.8     {d1, d3, d5, d7}, [%0]!        \n"  // load next 8 RGBA pixels.
     "vpaddl.u8  q0, q1                         \n"  // B 16 bytes -> 8 shorts.
     "vpaddl.u8  q1, q2                         \n"  // G 16 bytes -> 8 shorts.
     "vpaddl.u8  q2, q3                         \n"  // R 16 bytes -> 8 shorts.
-    MEMACCESS(1)
     "vld4.8     {d8, d10, d12, d14}, [%1]!     \n"  // load 8 more RGBA pixels.
-    MEMACCESS(1)
     "vld4.8     {d9, d11, d13, d15}, [%1]!     \n"  // load last 8 RGBA pixels.
     "vpadal.u8  q0, q5                         \n"  // B 16 bytes -> 8 shorts.
     "vpadal.u8  q1, q6                         \n"  // G 16 bytes -> 8 shorts.
@@ -1683,9 +1534,7 @@
 
     "subs       %4, %4, #16                    \n"  // 32 processed per loop.
     RGBTOUV(q0, q1, q2)
-    MEMACCESS(2)
     "vst1.8     {d0}, [%2]!                    \n"  // store 8 pixels U.
-    MEMACCESS(3)
     "vst1.8     {d1}, [%3]!                    \n"  // store 8 pixels V.
     "bgt        1b                             \n"
   : "+r"(src_rgba),  // %0
@@ -1699,8 +1548,11 @@
   );
 }
 
-void RGB24ToUVRow_NEON(const uint8* src_rgb24, int src_stride_rgb24,
-                       uint8* dst_u, uint8* dst_v, int width) {
+void RGB24ToUVRow_NEON(const uint8_t* src_rgb24,
+                       int src_stride_rgb24,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width) {
   asm volatile (
     "add        %1, %0, %1                     \n"  // src_stride + src_rgb24
     "vmov.s16   q10, #112 / 2                  \n"  // UB / VR 0.875 coefficient
@@ -1709,17 +1561,13 @@
     "vmov.s16   q13, #18 / 2                   \n"  // VB -0.1406 coefficient
     "vmov.s16   q14, #94 / 2                   \n"  // VG -0.7344 coefficient
     "vmov.u16   q15, #0x8080                   \n"  // 128.5
-  "1:                                          \n"
-    MEMACCESS(0)
+    "1:                                        \n"
     "vld3.8     {d0, d2, d4}, [%0]!            \n"  // load 8 RGB24 pixels.
-    MEMACCESS(0)
     "vld3.8     {d1, d3, d5}, [%0]!            \n"  // load next 8 RGB24 pixels.
     "vpaddl.u8  q0, q0                         \n"  // B 16 bytes -> 8 shorts.
     "vpaddl.u8  q1, q1                         \n"  // G 16 bytes -> 8 shorts.
     "vpaddl.u8  q2, q2                         \n"  // R 16 bytes -> 8 shorts.
-    MEMACCESS(1)
     "vld3.8     {d8, d10, d12}, [%1]!          \n"  // load 8 more RGB24 pixels.
-    MEMACCESS(1)
     "vld3.8     {d9, d11, d13}, [%1]!          \n"  // load last 8 RGB24 pixels.
     "vpadal.u8  q0, q4                         \n"  // B 16 bytes -> 8 shorts.
     "vpadal.u8  q1, q5                         \n"  // G 16 bytes -> 8 shorts.
@@ -1731,9 +1579,7 @@
 
     "subs       %4, %4, #16                    \n"  // 32 processed per loop.
     RGBTOUV(q0, q1, q2)
-    MEMACCESS(2)
     "vst1.8     {d0}, [%2]!                    \n"  // store 8 pixels U.
-    MEMACCESS(3)
     "vst1.8     {d1}, [%3]!                    \n"  // store 8 pixels V.
     "bgt        1b                             \n"
   : "+r"(src_rgb24),  // %0
@@ -1747,8 +1593,11 @@
   );
 }
 
-void RAWToUVRow_NEON(const uint8* src_raw, int src_stride_raw,
-                     uint8* dst_u, uint8* dst_v, int width) {
+void RAWToUVRow_NEON(const uint8_t* src_raw,
+                     int src_stride_raw,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width) {
   asm volatile (
     "add        %1, %0, %1                     \n"  // src_stride + src_raw
     "vmov.s16   q10, #112 / 2                  \n"  // UB / VR 0.875 coefficient
@@ -1757,17 +1606,13 @@
     "vmov.s16   q13, #18 / 2                   \n"  // VB -0.1406 coefficient
     "vmov.s16   q14, #94 / 2                   \n"  // VG -0.7344 coefficient
     "vmov.u16   q15, #0x8080                   \n"  // 128.5
-  "1:                                          \n"
-    MEMACCESS(0)
+    "1:                                        \n"
     "vld3.8     {d0, d2, d4}, [%0]!            \n"  // load 8 RAW pixels.
-    MEMACCESS(0)
     "vld3.8     {d1, d3, d5}, [%0]!            \n"  // load next 8 RAW pixels.
     "vpaddl.u8  q2, q2                         \n"  // B 16 bytes -> 8 shorts.
     "vpaddl.u8  q1, q1                         \n"  // G 16 bytes -> 8 shorts.
     "vpaddl.u8  q0, q0                         \n"  // R 16 bytes -> 8 shorts.
-    MEMACCESS(1)
     "vld3.8     {d8, d10, d12}, [%1]!          \n"  // load 8 more RAW pixels.
-    MEMACCESS(1)
     "vld3.8     {d9, d11, d13}, [%1]!          \n"  // load last 8 RAW pixels.
     "vpadal.u8  q2, q6                         \n"  // B 16 bytes -> 8 shorts.
     "vpadal.u8  q1, q5                         \n"  // G 16 bytes -> 8 shorts.
@@ -1779,9 +1624,7 @@
 
     "subs       %4, %4, #16                    \n"  // 32 processed per loop.
     RGBTOUV(q2, q1, q0)
-    MEMACCESS(2)
     "vst1.8     {d0}, [%2]!                    \n"  // store 8 pixels U.
-    MEMACCESS(3)
     "vst1.8     {d1}, [%3]!                    \n"  // store 8 pixels V.
     "bgt        1b                             \n"
   : "+r"(src_raw),  // %0
@@ -1796,875 +1639,815 @@
 }
 
 // 16x2 pixels -> 8x1.  width is number of argb pixels. e.g. 16.
-void RGB565ToUVRow_NEON(const uint8* src_rgb565, int src_stride_rgb565,
-                        uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "add        %1, %0, %1                     \n"  // src_stride + src_argb
-    "vmov.s16   q10, #112 / 2                  \n"  // UB / VR 0.875 coefficient
-    "vmov.s16   q11, #74 / 2                   \n"  // UG -0.5781 coefficient
-    "vmov.s16   q12, #38 / 2                   \n"  // UR -0.2969 coefficient
-    "vmov.s16   q13, #18 / 2                   \n"  // VB -0.1406 coefficient
-    "vmov.s16   q14, #94 / 2                   \n"  // VG -0.7344 coefficient
-    "vmov.u16   q15, #0x8080                   \n"  // 128.5
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"  // load 8 RGB565 pixels.
-    RGB565TOARGB
-    "vpaddl.u8  d8, d0                         \n"  // B 8 bytes -> 4 shorts.
-    "vpaddl.u8  d10, d1                        \n"  // G 8 bytes -> 4 shorts.
-    "vpaddl.u8  d12, d2                        \n"  // R 8 bytes -> 4 shorts.
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"  // next 8 RGB565 pixels.
-    RGB565TOARGB
-    "vpaddl.u8  d9, d0                         \n"  // B 8 bytes -> 4 shorts.
-    "vpaddl.u8  d11, d1                        \n"  // G 8 bytes -> 4 shorts.
-    "vpaddl.u8  d13, d2                        \n"  // R 8 bytes -> 4 shorts.
+void RGB565ToUVRow_NEON(const uint8_t* src_rgb565,
+                        int src_stride_rgb565,
+                        uint8_t* dst_u,
+                        uint8_t* dst_v,
+                        int width) {
+  asm volatile(
+      "add        %1, %0, %1                     \n"  // src_stride + src_argb
+      "vmov.s16   q10, #112 / 2                  \n"  // UB / VR 0.875
+                                                      // coefficient
+      "vmov.s16   q11, #74 / 2                   \n"  // UG -0.5781 coefficient
+      "vmov.s16   q12, #38 / 2                   \n"  // UR -0.2969 coefficient
+      "vmov.s16   q13, #18 / 2                   \n"  // VB -0.1406 coefficient
+      "vmov.s16   q14, #94 / 2                   \n"  // VG -0.7344 coefficient
+      "vmov.u16   q15, #0x8080                   \n"  // 128.5
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0]!                    \n"  // load 8 RGB565 pixels.
+      RGB565TOARGB
+      "vpaddl.u8  d8, d0                         \n"  // B 8 bytes -> 4 shorts.
+      "vpaddl.u8  d10, d1                        \n"  // G 8 bytes -> 4 shorts.
+      "vpaddl.u8  d12, d2                        \n"  // R 8 bytes -> 4 shorts.
+      "vld1.8     {q0}, [%0]!                    \n"  // next 8 RGB565 pixels.
+      RGB565TOARGB
+      "vpaddl.u8  d9, d0                         \n"  // B 8 bytes -> 4 shorts.
+      "vpaddl.u8  d11, d1                        \n"  // G 8 bytes -> 4 shorts.
+      "vpaddl.u8  d13, d2                        \n"  // R 8 bytes -> 4 shorts.
 
-    MEMACCESS(1)
-    "vld1.8     {q0}, [%1]!                    \n"  // load 8 RGB565 pixels.
-    RGB565TOARGB
-    "vpadal.u8  d8, d0                         \n"  // B 8 bytes -> 4 shorts.
-    "vpadal.u8  d10, d1                        \n"  // G 8 bytes -> 4 shorts.
-    "vpadal.u8  d12, d2                        \n"  // R 8 bytes -> 4 shorts.
-    MEMACCESS(1)
-    "vld1.8     {q0}, [%1]!                    \n"  // next 8 RGB565 pixels.
-    RGB565TOARGB
-    "vpadal.u8  d9, d0                         \n"  // B 8 bytes -> 4 shorts.
-    "vpadal.u8  d11, d1                        \n"  // G 8 bytes -> 4 shorts.
-    "vpadal.u8  d13, d2                        \n"  // R 8 bytes -> 4 shorts.
+      "vld1.8     {q0}, [%1]!                    \n"  // load 8 RGB565 pixels.
+      RGB565TOARGB
+      "vpadal.u8  d8, d0                         \n"  // B 8 bytes -> 4 shorts.
+      "vpadal.u8  d10, d1                        \n"  // G 8 bytes -> 4 shorts.
+      "vpadal.u8  d12, d2                        \n"  // R 8 bytes -> 4 shorts.
+      "vld1.8     {q0}, [%1]!                    \n"  // next 8 RGB565 pixels.
+      RGB565TOARGB
+      "vpadal.u8  d9, d0                         \n"  // B 8 bytes -> 4 shorts.
+      "vpadal.u8  d11, d1                        \n"  // G 8 bytes -> 4 shorts.
+      "vpadal.u8  d13, d2                        \n"  // R 8 bytes -> 4 shorts.
 
-    "vrshr.u16  q4, q4, #1                     \n"  // 2x average
-    "vrshr.u16  q5, q5, #1                     \n"
-    "vrshr.u16  q6, q6, #1                     \n"
+      "vrshr.u16  q4, q4, #1                     \n"  // 2x average
+      "vrshr.u16  q5, q5, #1                     \n"
+      "vrshr.u16  q6, q6, #1                     \n"
 
-    "subs       %4, %4, #16                    \n"  // 16 processed per loop.
-    "vmul.s16   q8, q4, q10                    \n"  // B
-    "vmls.s16   q8, q5, q11                    \n"  // G
-    "vmls.s16   q8, q6, q12                    \n"  // R
-    "vadd.u16   q8, q8, q15                    \n"  // +128 -> unsigned
-    "vmul.s16   q9, q6, q10                    \n"  // R
-    "vmls.s16   q9, q5, q14                    \n"  // G
-    "vmls.s16   q9, q4, q13                    \n"  // B
-    "vadd.u16   q9, q9, q15                    \n"  // +128 -> unsigned
-    "vqshrn.u16  d0, q8, #8                    \n"  // 16 bit to 8 bit U
-    "vqshrn.u16  d1, q9, #8                    \n"  // 16 bit to 8 bit V
-    MEMACCESS(2)
-    "vst1.8     {d0}, [%2]!                    \n"  // store 8 pixels U.
-    MEMACCESS(3)
-    "vst1.8     {d1}, [%3]!                    \n"  // store 8 pixels V.
-    "bgt        1b                             \n"
-  : "+r"(src_rgb565),  // %0
-    "+r"(src_stride_rgb565),  // %1
-    "+r"(dst_u),     // %2
-    "+r"(dst_v),     // %3
-    "+r"(width)        // %4
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q5", "q6", "q7",
-    "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+      "subs       %4, %4, #16                    \n"  // 16 processed per loop.
+      "vmul.s16   q8, q4, q10                    \n"  // B
+      "vmls.s16   q8, q5, q11                    \n"  // G
+      "vmls.s16   q8, q6, q12                    \n"  // R
+      "vadd.u16   q8, q8, q15                    \n"  // +128 -> unsigned
+      "vmul.s16   q9, q6, q10                    \n"  // R
+      "vmls.s16   q9, q5, q14                    \n"  // G
+      "vmls.s16   q9, q4, q13                    \n"  // B
+      "vadd.u16   q9, q9, q15                    \n"  // +128 -> unsigned
+      "vqshrn.u16  d0, q8, #8                    \n"  // 16 bit to 8 bit U
+      "vqshrn.u16  d1, q9, #8                    \n"  // 16 bit to 8 bit V
+      "vst1.8     {d0}, [%2]!                    \n"  // store 8 pixels U.
+      "vst1.8     {d1}, [%3]!                    \n"  // store 8 pixels V.
+      "bgt        1b                             \n"
+      : "+r"(src_rgb565),         // %0
+        "+r"(src_stride_rgb565),  // %1
+        "+r"(dst_u),              // %2
+        "+r"(dst_v),              // %3
+        "+r"(width)               // %4
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q5", "q6", "q7", "q8",
+        "q9", "q10", "q11", "q12", "q13", "q14", "q15");
 }
 
 // 16x2 pixels -> 8x1.  width is number of argb pixels. e.g. 16.
-void ARGB1555ToUVRow_NEON(const uint8* src_argb1555, int src_stride_argb1555,
-                        uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "add        %1, %0, %1                     \n"  // src_stride + src_argb
-    "vmov.s16   q10, #112 / 2                  \n"  // UB / VR 0.875 coefficient
-    "vmov.s16   q11, #74 / 2                   \n"  // UG -0.5781 coefficient
-    "vmov.s16   q12, #38 / 2                   \n"  // UR -0.2969 coefficient
-    "vmov.s16   q13, #18 / 2                   \n"  // VB -0.1406 coefficient
-    "vmov.s16   q14, #94 / 2                   \n"  // VG -0.7344 coefficient
-    "vmov.u16   q15, #0x8080                   \n"  // 128.5
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"  // load 8 ARGB1555 pixels.
-    RGB555TOARGB
-    "vpaddl.u8  d8, d0                         \n"  // B 8 bytes -> 4 shorts.
-    "vpaddl.u8  d10, d1                        \n"  // G 8 bytes -> 4 shorts.
-    "vpaddl.u8  d12, d2                        \n"  // R 8 bytes -> 4 shorts.
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"  // next 8 ARGB1555 pixels.
-    RGB555TOARGB
-    "vpaddl.u8  d9, d0                         \n"  // B 8 bytes -> 4 shorts.
-    "vpaddl.u8  d11, d1                        \n"  // G 8 bytes -> 4 shorts.
-    "vpaddl.u8  d13, d2                        \n"  // R 8 bytes -> 4 shorts.
+void ARGB1555ToUVRow_NEON(const uint8_t* src_argb1555,
+                          int src_stride_argb1555,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width) {
+  asm volatile(
+      "add        %1, %0, %1                     \n"  // src_stride + src_argb
+      "vmov.s16   q10, #112 / 2                  \n"  // UB / VR 0.875
+                                                      // coefficient
+      "vmov.s16   q11, #74 / 2                   \n"  // UG -0.5781 coefficient
+      "vmov.s16   q12, #38 / 2                   \n"  // UR -0.2969 coefficient
+      "vmov.s16   q13, #18 / 2                   \n"  // VB -0.1406 coefficient
+      "vmov.s16   q14, #94 / 2                   \n"  // VG -0.7344 coefficient
+      "vmov.u16   q15, #0x8080                   \n"  // 128.5
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0]!                    \n"  // load 8 ARGB1555 pixels.
+      RGB555TOARGB
+      "vpaddl.u8  d8, d0                         \n"  // B 8 bytes -> 4 shorts.
+      "vpaddl.u8  d10, d1                        \n"  // G 8 bytes -> 4 shorts.
+      "vpaddl.u8  d12, d2                        \n"  // R 8 bytes -> 4 shorts.
+      "vld1.8     {q0}, [%0]!                    \n"  // next 8 ARGB1555 pixels.
+      RGB555TOARGB
+      "vpaddl.u8  d9, d0                         \n"  // B 8 bytes -> 4 shorts.
+      "vpaddl.u8  d11, d1                        \n"  // G 8 bytes -> 4 shorts.
+      "vpaddl.u8  d13, d2                        \n"  // R 8 bytes -> 4 shorts.
 
-    MEMACCESS(1)
-    "vld1.8     {q0}, [%1]!                    \n"  // load 8 ARGB1555 pixels.
-    RGB555TOARGB
-    "vpadal.u8  d8, d0                         \n"  // B 8 bytes -> 4 shorts.
-    "vpadal.u8  d10, d1                        \n"  // G 8 bytes -> 4 shorts.
-    "vpadal.u8  d12, d2                        \n"  // R 8 bytes -> 4 shorts.
-    MEMACCESS(1)
-    "vld1.8     {q0}, [%1]!                    \n"  // next 8 ARGB1555 pixels.
-    RGB555TOARGB
-    "vpadal.u8  d9, d0                         \n"  // B 8 bytes -> 4 shorts.
-    "vpadal.u8  d11, d1                        \n"  // G 8 bytes -> 4 shorts.
-    "vpadal.u8  d13, d2                        \n"  // R 8 bytes -> 4 shorts.
+      "vld1.8     {q0}, [%1]!                    \n"  // load 8 ARGB1555 pixels.
+      RGB555TOARGB
+      "vpadal.u8  d8, d0                         \n"  // B 8 bytes -> 4 shorts.
+      "vpadal.u8  d10, d1                        \n"  // G 8 bytes -> 4 shorts.
+      "vpadal.u8  d12, d2                        \n"  // R 8 bytes -> 4 shorts.
+      "vld1.8     {q0}, [%1]!                    \n"  // next 8 ARGB1555 pixels.
+      RGB555TOARGB
+      "vpadal.u8  d9, d0                         \n"  // B 8 bytes -> 4 shorts.
+      "vpadal.u8  d11, d1                        \n"  // G 8 bytes -> 4 shorts.
+      "vpadal.u8  d13, d2                        \n"  // R 8 bytes -> 4 shorts.
 
-    "vrshr.u16  q4, q4, #1                     \n"  // 2x average
-    "vrshr.u16  q5, q5, #1                     \n"
-    "vrshr.u16  q6, q6, #1                     \n"
+      "vrshr.u16  q4, q4, #1                     \n"  // 2x average
+      "vrshr.u16  q5, q5, #1                     \n"
+      "vrshr.u16  q6, q6, #1                     \n"
 
-    "subs       %4, %4, #16                    \n"  // 16 processed per loop.
-    "vmul.s16   q8, q4, q10                    \n"  // B
-    "vmls.s16   q8, q5, q11                    \n"  // G
-    "vmls.s16   q8, q6, q12                    \n"  // R
-    "vadd.u16   q8, q8, q15                    \n"  // +128 -> unsigned
-    "vmul.s16   q9, q6, q10                    \n"  // R
-    "vmls.s16   q9, q5, q14                    \n"  // G
-    "vmls.s16   q9, q4, q13                    \n"  // B
-    "vadd.u16   q9, q9, q15                    \n"  // +128 -> unsigned
-    "vqshrn.u16  d0, q8, #8                    \n"  // 16 bit to 8 bit U
-    "vqshrn.u16  d1, q9, #8                    \n"  // 16 bit to 8 bit V
-    MEMACCESS(2)
-    "vst1.8     {d0}, [%2]!                    \n"  // store 8 pixels U.
-    MEMACCESS(3)
-    "vst1.8     {d1}, [%3]!                    \n"  // store 8 pixels V.
-    "bgt        1b                             \n"
-  : "+r"(src_argb1555),  // %0
-    "+r"(src_stride_argb1555),  // %1
-    "+r"(dst_u),     // %2
-    "+r"(dst_v),     // %3
-    "+r"(width)        // %4
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q5", "q6", "q7",
-    "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+      "subs       %4, %4, #16                    \n"  // 16 processed per loop.
+      "vmul.s16   q8, q4, q10                    \n"  // B
+      "vmls.s16   q8, q5, q11                    \n"  // G
+      "vmls.s16   q8, q6, q12                    \n"  // R
+      "vadd.u16   q8, q8, q15                    \n"  // +128 -> unsigned
+      "vmul.s16   q9, q6, q10                    \n"  // R
+      "vmls.s16   q9, q5, q14                    \n"  // G
+      "vmls.s16   q9, q4, q13                    \n"  // B
+      "vadd.u16   q9, q9, q15                    \n"  // +128 -> unsigned
+      "vqshrn.u16  d0, q8, #8                    \n"  // 16 bit to 8 bit U
+      "vqshrn.u16  d1, q9, #8                    \n"  // 16 bit to 8 bit V
+      "vst1.8     {d0}, [%2]!                    \n"  // store 8 pixels U.
+      "vst1.8     {d1}, [%3]!                    \n"  // store 8 pixels V.
+      "bgt        1b                             \n"
+      : "+r"(src_argb1555),         // %0
+        "+r"(src_stride_argb1555),  // %1
+        "+r"(dst_u),                // %2
+        "+r"(dst_v),                // %3
+        "+r"(width)                 // %4
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q5", "q6", "q7", "q8",
+        "q9", "q10", "q11", "q12", "q13", "q14", "q15");
 }
 
 // 16x2 pixels -> 8x1.  width is number of argb pixels. e.g. 16.
-void ARGB4444ToUVRow_NEON(const uint8* src_argb4444, int src_stride_argb4444,
-                          uint8* dst_u, uint8* dst_v, int width) {
-  asm volatile (
-    "add        %1, %0, %1                     \n"  // src_stride + src_argb
-    "vmov.s16   q10, #112 / 2                  \n"  // UB / VR 0.875 coefficient
-    "vmov.s16   q11, #74 / 2                   \n"  // UG -0.5781 coefficient
-    "vmov.s16   q12, #38 / 2                   \n"  // UR -0.2969 coefficient
-    "vmov.s16   q13, #18 / 2                   \n"  // VB -0.1406 coefficient
-    "vmov.s16   q14, #94 / 2                   \n"  // VG -0.7344 coefficient
-    "vmov.u16   q15, #0x8080                   \n"  // 128.5
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"  // load 8 ARGB4444 pixels.
-    ARGB4444TOARGB
-    "vpaddl.u8  d8, d0                         \n"  // B 8 bytes -> 4 shorts.
-    "vpaddl.u8  d10, d1                        \n"  // G 8 bytes -> 4 shorts.
-    "vpaddl.u8  d12, d2                        \n"  // R 8 bytes -> 4 shorts.
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"  // next 8 ARGB4444 pixels.
-    ARGB4444TOARGB
-    "vpaddl.u8  d9, d0                         \n"  // B 8 bytes -> 4 shorts.
-    "vpaddl.u8  d11, d1                        \n"  // G 8 bytes -> 4 shorts.
-    "vpaddl.u8  d13, d2                        \n"  // R 8 bytes -> 4 shorts.
+void ARGB4444ToUVRow_NEON(const uint8_t* src_argb4444,
+                          int src_stride_argb4444,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width) {
+  asm volatile(
+      "add        %1, %0, %1                     \n"  // src_stride + src_argb
+      "vmov.s16   q10, #112 / 2                  \n"  // UB / VR 0.875
+                                                      // coefficient
+      "vmov.s16   q11, #74 / 2                   \n"  // UG -0.5781 coefficient
+      "vmov.s16   q12, #38 / 2                   \n"  // UR -0.2969 coefficient
+      "vmov.s16   q13, #18 / 2                   \n"  // VB -0.1406 coefficient
+      "vmov.s16   q14, #94 / 2                   \n"  // VG -0.7344 coefficient
+      "vmov.u16   q15, #0x8080                   \n"  // 128.5
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0]!                    \n"  // load 8 ARGB4444 pixels.
+      ARGB4444TOARGB
+      "vpaddl.u8  d8, d0                         \n"  // B 8 bytes -> 4 shorts.
+      "vpaddl.u8  d10, d1                        \n"  // G 8 bytes -> 4 shorts.
+      "vpaddl.u8  d12, d2                        \n"  // R 8 bytes -> 4 shorts.
+      "vld1.8     {q0}, [%0]!                    \n"  // next 8 ARGB4444 pixels.
+      ARGB4444TOARGB
+      "vpaddl.u8  d9, d0                         \n"  // B 8 bytes -> 4 shorts.
+      "vpaddl.u8  d11, d1                        \n"  // G 8 bytes -> 4 shorts.
+      "vpaddl.u8  d13, d2                        \n"  // R 8 bytes -> 4 shorts.
 
-    MEMACCESS(1)
-    "vld1.8     {q0}, [%1]!                    \n"  // load 8 ARGB4444 pixels.
-    ARGB4444TOARGB
-    "vpadal.u8  d8, d0                         \n"  // B 8 bytes -> 4 shorts.
-    "vpadal.u8  d10, d1                        \n"  // G 8 bytes -> 4 shorts.
-    "vpadal.u8  d12, d2                        \n"  // R 8 bytes -> 4 shorts.
-    MEMACCESS(1)
-    "vld1.8     {q0}, [%1]!                    \n"  // next 8 ARGB4444 pixels.
-    ARGB4444TOARGB
-    "vpadal.u8  d9, d0                         \n"  // B 8 bytes -> 4 shorts.
-    "vpadal.u8  d11, d1                        \n"  // G 8 bytes -> 4 shorts.
-    "vpadal.u8  d13, d2                        \n"  // R 8 bytes -> 4 shorts.
+      "vld1.8     {q0}, [%1]!                    \n"  // load 8 ARGB4444 pixels.
+      ARGB4444TOARGB
+      "vpadal.u8  d8, d0                         \n"  // B 8 bytes -> 4 shorts.
+      "vpadal.u8  d10, d1                        \n"  // G 8 bytes -> 4 shorts.
+      "vpadal.u8  d12, d2                        \n"  // R 8 bytes -> 4 shorts.
+      "vld1.8     {q0}, [%1]!                    \n"  // next 8 ARGB4444 pixels.
+      ARGB4444TOARGB
+      "vpadal.u8  d9, d0                         \n"  // B 8 bytes -> 4 shorts.
+      "vpadal.u8  d11, d1                        \n"  // G 8 bytes -> 4 shorts.
+      "vpadal.u8  d13, d2                        \n"  // R 8 bytes -> 4 shorts.
 
-    "vrshr.u16  q4, q4, #1                     \n"  // 2x average
-    "vrshr.u16  q5, q5, #1                     \n"
-    "vrshr.u16  q6, q6, #1                     \n"
+      "vrshr.u16  q4, q4, #1                     \n"  // 2x average
+      "vrshr.u16  q5, q5, #1                     \n"
+      "vrshr.u16  q6, q6, #1                     \n"
 
-    "subs       %4, %4, #16                    \n"  // 16 processed per loop.
-    "vmul.s16   q8, q4, q10                    \n"  // B
-    "vmls.s16   q8, q5, q11                    \n"  // G
-    "vmls.s16   q8, q6, q12                    \n"  // R
-    "vadd.u16   q8, q8, q15                    \n"  // +128 -> unsigned
-    "vmul.s16   q9, q6, q10                    \n"  // R
-    "vmls.s16   q9, q5, q14                    \n"  // G
-    "vmls.s16   q9, q4, q13                    \n"  // B
-    "vadd.u16   q9, q9, q15                    \n"  // +128 -> unsigned
-    "vqshrn.u16  d0, q8, #8                    \n"  // 16 bit to 8 bit U
-    "vqshrn.u16  d1, q9, #8                    \n"  // 16 bit to 8 bit V
-    MEMACCESS(2)
-    "vst1.8     {d0}, [%2]!                    \n"  // store 8 pixels U.
-    MEMACCESS(3)
-    "vst1.8     {d1}, [%3]!                    \n"  // store 8 pixels V.
-    "bgt        1b                             \n"
-  : "+r"(src_argb4444),  // %0
-    "+r"(src_stride_argb4444),  // %1
-    "+r"(dst_u),     // %2
-    "+r"(dst_v),     // %3
-    "+r"(width)        // %4
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q5", "q6", "q7",
-    "q8", "q9", "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+      "subs       %4, %4, #16                    \n"  // 16 processed per loop.
+      "vmul.s16   q8, q4, q10                    \n"  // B
+      "vmls.s16   q8, q5, q11                    \n"  // G
+      "vmls.s16   q8, q6, q12                    \n"  // R
+      "vadd.u16   q8, q8, q15                    \n"  // +128 -> unsigned
+      "vmul.s16   q9, q6, q10                    \n"  // R
+      "vmls.s16   q9, q5, q14                    \n"  // G
+      "vmls.s16   q9, q4, q13                    \n"  // B
+      "vadd.u16   q9, q9, q15                    \n"  // +128 -> unsigned
+      "vqshrn.u16  d0, q8, #8                    \n"  // 16 bit to 8 bit U
+      "vqshrn.u16  d1, q9, #8                    \n"  // 16 bit to 8 bit V
+      "vst1.8     {d0}, [%2]!                    \n"  // store 8 pixels U.
+      "vst1.8     {d1}, [%3]!                    \n"  // store 8 pixels V.
+      "bgt        1b                             \n"
+      : "+r"(src_argb4444),         // %0
+        "+r"(src_stride_argb4444),  // %1
+        "+r"(dst_u),                // %2
+        "+r"(dst_v),                // %3
+        "+r"(width)                 // %4
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q4", "q5", "q6", "q7", "q8",
+        "q9", "q10", "q11", "q12", "q13", "q14", "q15");
 }
 
-void RGB565ToYRow_NEON(const uint8* src_rgb565, uint8* dst_y, int width) {
-  asm volatile (
-    "vmov.u8    d24, #13                       \n"  // B * 0.1016 coefficient
-    "vmov.u8    d25, #65                       \n"  // G * 0.5078 coefficient
-    "vmov.u8    d26, #33                       \n"  // R * 0.2578 coefficient
-    "vmov.u8    d27, #16                       \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"  // load 8 RGB565 pixels.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    RGB565TOARGB
-    "vmull.u8   q2, d0, d24                    \n"  // B
-    "vmlal.u8   q2, d1, d25                    \n"  // G
-    "vmlal.u8   q2, d2, d26                    \n"  // R
-    "vqrshrun.s16 d0, q2, #7                   \n"  // 16 bit to 8 bit Y
-    "vqadd.u8   d0, d27                        \n"
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
-    "bgt        1b                             \n"
-  : "+r"(src_rgb565),  // %0
-    "+r"(dst_y),       // %1
-    "+r"(width)          // %2
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3", "q12", "q13"
-  );
+void RGB565ToYRow_NEON(const uint8_t* src_rgb565, uint8_t* dst_y, int width) {
+  asm volatile(
+      "vmov.u8    d24, #13                       \n"  // B * 0.1016 coefficient
+      "vmov.u8    d25, #65                       \n"  // G * 0.5078 coefficient
+      "vmov.u8    d26, #33                       \n"  // R * 0.2578 coefficient
+      "vmov.u8    d27, #16                       \n"  // Add 16 constant
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0]!                    \n"  // load 8 RGB565 pixels.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      RGB565TOARGB
+      "vmull.u8   q2, d0, d24                    \n"  // B
+      "vmlal.u8   q2, d1, d25                    \n"  // G
+      "vmlal.u8   q2, d2, d26                    \n"  // R
+      "vqrshrun.s16 d0, q2, #7                   \n"  // 16 bit to 8 bit Y
+      "vqadd.u8   d0, d27                        \n"
+      "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
+      "bgt        1b                             \n"
+      : "+r"(src_rgb565),  // %0
+        "+r"(dst_y),       // %1
+        "+r"(width)        // %2
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q12", "q13");
 }
 
-void ARGB1555ToYRow_NEON(const uint8* src_argb1555, uint8* dst_y, int width) {
-  asm volatile (
-    "vmov.u8    d24, #13                       \n"  // B * 0.1016 coefficient
-    "vmov.u8    d25, #65                       \n"  // G * 0.5078 coefficient
-    "vmov.u8    d26, #33                       \n"  // R * 0.2578 coefficient
-    "vmov.u8    d27, #16                       \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"  // load 8 ARGB1555 pixels.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    ARGB1555TOARGB
-    "vmull.u8   q2, d0, d24                    \n"  // B
-    "vmlal.u8   q2, d1, d25                    \n"  // G
-    "vmlal.u8   q2, d2, d26                    \n"  // R
-    "vqrshrun.s16 d0, q2, #7                   \n"  // 16 bit to 8 bit Y
-    "vqadd.u8   d0, d27                        \n"
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
-    "bgt        1b                             \n"
-  : "+r"(src_argb1555),  // %0
-    "+r"(dst_y),         // %1
-    "+r"(width)            // %2
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3", "q12", "q13"
-  );
+void ARGB1555ToYRow_NEON(const uint8_t* src_argb1555,
+                         uint8_t* dst_y,
+                         int width) {
+  asm volatile(
+      "vmov.u8    d24, #13                       \n"  // B * 0.1016 coefficient
+      "vmov.u8    d25, #65                       \n"  // G * 0.5078 coefficient
+      "vmov.u8    d26, #33                       \n"  // R * 0.2578 coefficient
+      "vmov.u8    d27, #16                       \n"  // Add 16 constant
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0]!                    \n"  // load 8 ARGB1555 pixels.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      ARGB1555TOARGB
+      "vmull.u8   q2, d0, d24                    \n"  // B
+      "vmlal.u8   q2, d1, d25                    \n"  // G
+      "vmlal.u8   q2, d2, d26                    \n"  // R
+      "vqrshrun.s16 d0, q2, #7                   \n"  // 16 bit to 8 bit Y
+      "vqadd.u8   d0, d27                        \n"
+      "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
+      "bgt        1b                             \n"
+      : "+r"(src_argb1555),  // %0
+        "+r"(dst_y),         // %1
+        "+r"(width)          // %2
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q12", "q13");
 }
 
-void ARGB4444ToYRow_NEON(const uint8* src_argb4444, uint8* dst_y, int width) {
-  asm volatile (
-    "vmov.u8    d24, #13                       \n"  // B * 0.1016 coefficient
-    "vmov.u8    d25, #65                       \n"  // G * 0.5078 coefficient
-    "vmov.u8    d26, #33                       \n"  // R * 0.2578 coefficient
-    "vmov.u8    d27, #16                       \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"  // load 8 ARGB4444 pixels.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    ARGB4444TOARGB
-    "vmull.u8   q2, d0, d24                    \n"  // B
-    "vmlal.u8   q2, d1, d25                    \n"  // G
-    "vmlal.u8   q2, d2, d26                    \n"  // R
-    "vqrshrun.s16 d0, q2, #7                   \n"  // 16 bit to 8 bit Y
-    "vqadd.u8   d0, d27                        \n"
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
-    "bgt        1b                             \n"
-  : "+r"(src_argb4444),  // %0
-    "+r"(dst_y),         // %1
-    "+r"(width)            // %2
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3", "q12", "q13"
-  );
+void ARGB4444ToYRow_NEON(const uint8_t* src_argb4444,
+                         uint8_t* dst_y,
+                         int width) {
+  asm volatile(
+      "vmov.u8    d24, #13                       \n"  // B * 0.1016 coefficient
+      "vmov.u8    d25, #65                       \n"  // G * 0.5078 coefficient
+      "vmov.u8    d26, #33                       \n"  // R * 0.2578 coefficient
+      "vmov.u8    d27, #16                       \n"  // Add 16 constant
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0]!                    \n"  // load 8 ARGB4444 pixels.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      ARGB4444TOARGB
+      "vmull.u8   q2, d0, d24                    \n"  // B
+      "vmlal.u8   q2, d1, d25                    \n"  // G
+      "vmlal.u8   q2, d2, d26                    \n"  // R
+      "vqrshrun.s16 d0, q2, #7                   \n"  // 16 bit to 8 bit Y
+      "vqadd.u8   d0, d27                        \n"
+      "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
+      "bgt        1b                             \n"
+      : "+r"(src_argb4444),  // %0
+        "+r"(dst_y),         // %1
+        "+r"(width)          // %2
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q12", "q13");
 }
 
-void BGRAToYRow_NEON(const uint8* src_bgra, uint8* dst_y, int width) {
-  asm volatile (
-    "vmov.u8    d4, #33                        \n"  // R * 0.2578 coefficient
-    "vmov.u8    d5, #65                        \n"  // G * 0.5078 coefficient
-    "vmov.u8    d6, #13                        \n"  // B * 0.1016 coefficient
-    "vmov.u8    d7, #16                        \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 pixels of BGRA.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    "vmull.u8   q8, d1, d4                     \n"  // R
-    "vmlal.u8   q8, d2, d5                     \n"  // G
-    "vmlal.u8   q8, d3, d6                     \n"  // B
-    "vqrshrun.s16 d0, q8, #7                   \n"  // 16 bit to 8 bit Y
-    "vqadd.u8   d0, d7                         \n"
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
-    "bgt        1b                             \n"
-  : "+r"(src_bgra),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "q8"
-  );
+void BGRAToYRow_NEON(const uint8_t* src_bgra, uint8_t* dst_y, int width) {
+  asm volatile(
+      "vmov.u8    d4, #33                        \n"  // R * 0.2578 coefficient
+      "vmov.u8    d5, #65                        \n"  // G * 0.5078 coefficient
+      "vmov.u8    d6, #13                        \n"  // B * 0.1016 coefficient
+      "vmov.u8    d7, #16                        \n"  // Add 16 constant
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 pixels of BGRA.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vmull.u8   q8, d1, d4                     \n"  // R
+      "vmlal.u8   q8, d2, d5                     \n"  // G
+      "vmlal.u8   q8, d3, d6                     \n"  // B
+      "vqrshrun.s16 d0, q8, #7                   \n"  // 16 bit to 8 bit Y
+      "vqadd.u8   d0, d7                         \n"
+      "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
+      "bgt        1b                             \n"
+      : "+r"(src_bgra),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "q8");
 }
 
-void ABGRToYRow_NEON(const uint8* src_abgr, uint8* dst_y, int width) {
-  asm volatile (
-    "vmov.u8    d4, #33                        \n"  // R * 0.2578 coefficient
-    "vmov.u8    d5, #65                        \n"  // G * 0.5078 coefficient
-    "vmov.u8    d6, #13                        \n"  // B * 0.1016 coefficient
-    "vmov.u8    d7, #16                        \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 pixels of ABGR.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    "vmull.u8   q8, d0, d4                     \n"  // R
-    "vmlal.u8   q8, d1, d5                     \n"  // G
-    "vmlal.u8   q8, d2, d6                     \n"  // B
-    "vqrshrun.s16 d0, q8, #7                   \n"  // 16 bit to 8 bit Y
-    "vqadd.u8   d0, d7                         \n"
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
-    "bgt        1b                             \n"
-  : "+r"(src_abgr),  // %0
-    "+r"(dst_y),  // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "q8"
-  );
+void ABGRToYRow_NEON(const uint8_t* src_abgr, uint8_t* dst_y, int width) {
+  asm volatile(
+      "vmov.u8    d4, #33                        \n"  // R * 0.2578 coefficient
+      "vmov.u8    d5, #65                        \n"  // G * 0.5078 coefficient
+      "vmov.u8    d6, #13                        \n"  // B * 0.1016 coefficient
+      "vmov.u8    d7, #16                        \n"  // Add 16 constant
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 pixels of ABGR.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vmull.u8   q8, d0, d4                     \n"  // R
+      "vmlal.u8   q8, d1, d5                     \n"  // G
+      "vmlal.u8   q8, d2, d6                     \n"  // B
+      "vqrshrun.s16 d0, q8, #7                   \n"  // 16 bit to 8 bit Y
+      "vqadd.u8   d0, d7                         \n"
+      "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
+      "bgt        1b                             \n"
+      : "+r"(src_abgr),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "q8");
 }
 
-void RGBAToYRow_NEON(const uint8* src_rgba, uint8* dst_y, int width) {
-  asm volatile (
-    "vmov.u8    d4, #13                        \n"  // B * 0.1016 coefficient
-    "vmov.u8    d5, #65                        \n"  // G * 0.5078 coefficient
-    "vmov.u8    d6, #33                        \n"  // R * 0.2578 coefficient
-    "vmov.u8    d7, #16                        \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 pixels of RGBA.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    "vmull.u8   q8, d1, d4                     \n"  // B
-    "vmlal.u8   q8, d2, d5                     \n"  // G
-    "vmlal.u8   q8, d3, d6                     \n"  // R
-    "vqrshrun.s16 d0, q8, #7                   \n"  // 16 bit to 8 bit Y
-    "vqadd.u8   d0, d7                         \n"
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
-    "bgt        1b                             \n"
-  : "+r"(src_rgba),  // %0
-    "+r"(dst_y),  // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "q8"
-  );
+void RGBAToYRow_NEON(const uint8_t* src_rgba, uint8_t* dst_y, int width) {
+  asm volatile(
+      "vmov.u8    d4, #13                        \n"  // B * 0.1016 coefficient
+      "vmov.u8    d5, #65                        \n"  // G * 0.5078 coefficient
+      "vmov.u8    d6, #33                        \n"  // R * 0.2578 coefficient
+      "vmov.u8    d7, #16                        \n"  // Add 16 constant
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 pixels of RGBA.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vmull.u8   q8, d1, d4                     \n"  // B
+      "vmlal.u8   q8, d2, d5                     \n"  // G
+      "vmlal.u8   q8, d3, d6                     \n"  // R
+      "vqrshrun.s16 d0, q8, #7                   \n"  // 16 bit to 8 bit Y
+      "vqadd.u8   d0, d7                         \n"
+      "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
+      "bgt        1b                             \n"
+      : "+r"(src_rgba),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "q8");
 }
 
-void RGB24ToYRow_NEON(const uint8* src_rgb24, uint8* dst_y, int width) {
-  asm volatile (
-    "vmov.u8    d4, #13                        \n"  // B * 0.1016 coefficient
-    "vmov.u8    d5, #65                        \n"  // G * 0.5078 coefficient
-    "vmov.u8    d6, #33                        \n"  // R * 0.2578 coefficient
-    "vmov.u8    d7, #16                        \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld3.8     {d0, d1, d2}, [%0]!            \n"  // load 8 pixels of RGB24.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    "vmull.u8   q8, d0, d4                     \n"  // B
-    "vmlal.u8   q8, d1, d5                     \n"  // G
-    "vmlal.u8   q8, d2, d6                     \n"  // R
-    "vqrshrun.s16 d0, q8, #7                   \n"  // 16 bit to 8 bit Y
-    "vqadd.u8   d0, d7                         \n"
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
-    "bgt        1b                             \n"
-  : "+r"(src_rgb24),  // %0
-    "+r"(dst_y),  // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "q8"
-  );
+void RGB24ToYRow_NEON(const uint8_t* src_rgb24, uint8_t* dst_y, int width) {
+  asm volatile(
+      "vmov.u8    d4, #13                        \n"  // B * 0.1016 coefficient
+      "vmov.u8    d5, #65                        \n"  // G * 0.5078 coefficient
+      "vmov.u8    d6, #33                        \n"  // R * 0.2578 coefficient
+      "vmov.u8    d7, #16                        \n"  // Add 16 constant
+      "1:                                        \n"
+      "vld3.8     {d0, d1, d2}, [%0]!            \n"  // load 8 pixels of RGB24.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vmull.u8   q8, d0, d4                     \n"  // B
+      "vmlal.u8   q8, d1, d5                     \n"  // G
+      "vmlal.u8   q8, d2, d6                     \n"  // R
+      "vqrshrun.s16 d0, q8, #7                   \n"  // 16 bit to 8 bit Y
+      "vqadd.u8   d0, d7                         \n"
+      "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
+      "bgt        1b                             \n"
+      : "+r"(src_rgb24),  // %0
+        "+r"(dst_y),      // %1
+        "+r"(width)       // %2
+      :
+      : "cc", "memory", "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "q8");
 }
 
-void RAWToYRow_NEON(const uint8* src_raw, uint8* dst_y, int width) {
-  asm volatile (
-    "vmov.u8    d4, #33                        \n"  // R * 0.2578 coefficient
-    "vmov.u8    d5, #65                        \n"  // G * 0.5078 coefficient
-    "vmov.u8    d6, #13                        \n"  // B * 0.1016 coefficient
-    "vmov.u8    d7, #16                        \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld3.8     {d0, d1, d2}, [%0]!            \n"  // load 8 pixels of RAW.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    "vmull.u8   q8, d0, d4                     \n"  // B
-    "vmlal.u8   q8, d1, d5                     \n"  // G
-    "vmlal.u8   q8, d2, d6                     \n"  // R
-    "vqrshrun.s16 d0, q8, #7                   \n"  // 16 bit to 8 bit Y
-    "vqadd.u8   d0, d7                         \n"
-    MEMACCESS(1)
-    "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
-    "bgt        1b                             \n"
-  : "+r"(src_raw),  // %0
-    "+r"(dst_y),  // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "q8"
-  );
+void RAWToYRow_NEON(const uint8_t* src_raw, uint8_t* dst_y, int width) {
+  asm volatile(
+      "vmov.u8    d4, #33                        \n"  // R * 0.2578 coefficient
+      "vmov.u8    d5, #65                        \n"  // G * 0.5078 coefficient
+      "vmov.u8    d6, #13                        \n"  // B * 0.1016 coefficient
+      "vmov.u8    d7, #16                        \n"  // Add 16 constant
+      "1:                                        \n"
+      "vld3.8     {d0, d1, d2}, [%0]!            \n"  // load 8 pixels of RAW.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vmull.u8   q8, d0, d4                     \n"  // B
+      "vmlal.u8   q8, d1, d5                     \n"  // G
+      "vmlal.u8   q8, d2, d6                     \n"  // R
+      "vqrshrun.s16 d0, q8, #7                   \n"  // 16 bit to 8 bit Y
+      "vqadd.u8   d0, d7                         \n"
+      "vst1.8     {d0}, [%1]!                    \n"  // store 8 pixels Y.
+      "bgt        1b                             \n"
+      : "+r"(src_raw),  // %0
+        "+r"(dst_y),    // %1
+        "+r"(width)     // %2
+      :
+      : "cc", "memory", "d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7", "q8");
 }
 
 // Bilinear filter 16x2 -> 16x1
-void InterpolateRow_NEON(uint8* dst_ptr,
-                         const uint8* src_ptr, ptrdiff_t src_stride,
-                         int dst_width, int source_y_fraction) {
+void InterpolateRow_NEON(uint8_t* dst_ptr,
+                         const uint8_t* src_ptr,
+                         ptrdiff_t src_stride,
+                         int dst_width,
+                         int source_y_fraction) {
   int y1_fraction = source_y_fraction;
-  asm volatile (
-    "cmp        %4, #0                         \n"
-    "beq        100f                           \n"
-    "add        %2, %1                         \n"
-    "cmp        %4, #128                       \n"
-    "beq        50f                            \n"
+  asm volatile(
+      "cmp        %4, #0                         \n"
+      "beq        100f                           \n"
+      "add        %2, %1                         \n"
+      "cmp        %4, #128                       \n"
+      "beq        50f                            \n"
 
-    "vdup.8     d5, %4                         \n"
-    "rsb        %4, #256                       \n"
-    "vdup.8     d4, %4                         \n"
-    // General purpose row blend.
-  "1:                                          \n"
-    MEMACCESS(1)
-    "vld1.8     {q0}, [%1]!                    \n"
-    MEMACCESS(2)
-    "vld1.8     {q1}, [%2]!                    \n"
-    "subs       %3, %3, #16                    \n"
-    "vmull.u8   q13, d0, d4                    \n"
-    "vmull.u8   q14, d1, d4                    \n"
-    "vmlal.u8   q13, d2, d5                    \n"
-    "vmlal.u8   q14, d3, d5                    \n"
-    "vrshrn.u16 d0, q13, #8                    \n"
-    "vrshrn.u16 d1, q14, #8                    \n"
-    MEMACCESS(0)
-    "vst1.8     {q0}, [%0]!                    \n"
-    "bgt        1b                             \n"
-    "b          99f                            \n"
+      "vdup.8     d5, %4                         \n"
+      "rsb        %4, #256                       \n"
+      "vdup.8     d4, %4                         \n"
+      // General purpose row blend.
+      "1:                                        \n"
+      "vld1.8     {q0}, [%1]!                    \n"
+      "vld1.8     {q1}, [%2]!                    \n"
+      "subs       %3, %3, #16                    \n"
+      "vmull.u8   q13, d0, d4                    \n"
+      "vmull.u8   q14, d1, d4                    \n"
+      "vmlal.u8   q13, d2, d5                    \n"
+      "vmlal.u8   q14, d3, d5                    \n"
+      "vrshrn.u16 d0, q13, #8                    \n"
+      "vrshrn.u16 d1, q14, #8                    \n"
+      "vst1.8     {q0}, [%0]!                    \n"
+      "bgt        1b                             \n"
+      "b          99f                            \n"
 
-    // Blend 50 / 50.
-  "50:                                         \n"
-    MEMACCESS(1)
-    "vld1.8     {q0}, [%1]!                    \n"
-    MEMACCESS(2)
-    "vld1.8     {q1}, [%2]!                    \n"
-    "subs       %3, %3, #16                    \n"
-    "vrhadd.u8  q0, q1                         \n"
-    MEMACCESS(0)
-    "vst1.8     {q0}, [%0]!                    \n"
-    "bgt        50b                            \n"
-    "b          99f                            \n"
+      // Blend 50 / 50.
+      "50:                                       \n"
+      "vld1.8     {q0}, [%1]!                    \n"
+      "vld1.8     {q1}, [%2]!                    \n"
+      "subs       %3, %3, #16                    \n"
+      "vrhadd.u8  q0, q1                         \n"
+      "vst1.8     {q0}, [%0]!                    \n"
+      "bgt        50b                            \n"
+      "b          99f                            \n"
 
-    // Blend 100 / 0 - Copy row unchanged.
-  "100:                                        \n"
-    MEMACCESS(1)
-    "vld1.8     {q0}, [%1]!                    \n"
-    "subs       %3, %3, #16                    \n"
-    MEMACCESS(0)
-    "vst1.8     {q0}, [%0]!                    \n"
-    "bgt        100b                           \n"
+      // Blend 100 / 0 - Copy row unchanged.
+      "100:                                      \n"
+      "vld1.8     {q0}, [%1]!                    \n"
+      "subs       %3, %3, #16                    \n"
+      "vst1.8     {q0}, [%0]!                    \n"
+      "bgt        100b                           \n"
 
-  "99:                                         \n"
-  : "+r"(dst_ptr),          // %0
-    "+r"(src_ptr),          // %1
-    "+r"(src_stride),       // %2
-    "+r"(dst_width),        // %3
-    "+r"(y1_fraction)       // %4
-  :
-  : "cc", "memory", "q0", "q1", "d4", "d5", "q13", "q14"
-  );
+      "99:                                       \n"
+      : "+r"(dst_ptr),     // %0
+        "+r"(src_ptr),     // %1
+        "+r"(src_stride),  // %2
+        "+r"(dst_width),   // %3
+        "+r"(y1_fraction)  // %4
+      :
+      : "cc", "memory", "q0", "q1", "d4", "d5", "q13", "q14");
 }
 
 // dr * (256 - sa) / 256 + sr = dr - dr * sa / 256 + sr
-void ARGBBlendRow_NEON(const uint8* src_argb0, const uint8* src_argb1,
-                       uint8* dst_argb, int width) {
-  asm volatile (
-    "subs       %3, #8                         \n"
-    "blt        89f                            \n"
-    // Blend 8 pixels.
-  "8:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 pixels of ARGB0.
-    MEMACCESS(1)
-    "vld4.8     {d4, d5, d6, d7}, [%1]!        \n"  // load 8 pixels of ARGB1.
-    "subs       %3, %3, #8                     \n"  // 8 processed per loop.
-    "vmull.u8   q10, d4, d3                    \n"  // db * a
-    "vmull.u8   q11, d5, d3                    \n"  // dg * a
-    "vmull.u8   q12, d6, d3                    \n"  // dr * a
-    "vqrshrn.u16 d20, q10, #8                  \n"  // db >>= 8
-    "vqrshrn.u16 d21, q11, #8                  \n"  // dg >>= 8
-    "vqrshrn.u16 d22, q12, #8                  \n"  // dr >>= 8
-    "vqsub.u8   q2, q2, q10                    \n"  // dbg - dbg * a / 256
-    "vqsub.u8   d6, d6, d22                    \n"  // dr - dr * a / 256
-    "vqadd.u8   q0, q0, q2                     \n"  // + sbg
-    "vqadd.u8   d2, d2, d6                     \n"  // + sr
-    "vmov.u8    d3, #255                       \n"  // a = 255
-    MEMACCESS(2)
-    "vst4.8     {d0, d1, d2, d3}, [%2]!        \n"  // store 8 pixels of ARGB.
-    "bge        8b                             \n"
+void ARGBBlendRow_NEON(const uint8_t* src_argb0,
+                       const uint8_t* src_argb1,
+                       uint8_t* dst_argb,
+                       int width) {
+  asm volatile(
+      "subs       %3, #8                         \n"
+      "blt        89f                            \n"
+      // Blend 8 pixels.
+      "8:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 pixels of ARGB0.
+      "vld4.8     {d4, d5, d6, d7}, [%1]!        \n"  // load 8 pixels of ARGB1.
+      "subs       %3, %3, #8                     \n"  // 8 processed per loop.
+      "vmull.u8   q10, d4, d3                    \n"  // db * a
+      "vmull.u8   q11, d5, d3                    \n"  // dg * a
+      "vmull.u8   q12, d6, d3                    \n"  // dr * a
+      "vqrshrn.u16 d20, q10, #8                  \n"  // db >>= 8
+      "vqrshrn.u16 d21, q11, #8                  \n"  // dg >>= 8
+      "vqrshrn.u16 d22, q12, #8                  \n"  // dr >>= 8
+      "vqsub.u8   q2, q2, q10                    \n"  // dbg - dbg * a / 256
+      "vqsub.u8   d6, d6, d22                    \n"  // dr - dr * a / 256
+      "vqadd.u8   q0, q0, q2                     \n"  // + sbg
+      "vqadd.u8   d2, d2, d6                     \n"  // + sr
+      "vmov.u8    d3, #255                       \n"  // a = 255
+      "vst4.8     {d0, d1, d2, d3}, [%2]!        \n"  // store 8 pixels of ARGB.
+      "bge        8b                             \n"
 
-  "89:                                         \n"
-    "adds       %3, #8-1                       \n"
-    "blt        99f                            \n"
+      "89:                                       \n"
+      "adds       %3, #8-1                       \n"
+      "blt        99f                            \n"
 
-    // Blend 1 pixels.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0[0],d1[0],d2[0],d3[0]}, [%0]! \n"  // load 1 pixel ARGB0.
-    MEMACCESS(1)
-    "vld4.8     {d4[0],d5[0],d6[0],d7[0]}, [%1]! \n"  // load 1 pixel ARGB1.
-    "subs       %3, %3, #1                     \n"  // 1 processed per loop.
-    "vmull.u8   q10, d4, d3                    \n"  // db * a
-    "vmull.u8   q11, d5, d3                    \n"  // dg * a
-    "vmull.u8   q12, d6, d3                    \n"  // dr * a
-    "vqrshrn.u16 d20, q10, #8                  \n"  // db >>= 8
-    "vqrshrn.u16 d21, q11, #8                  \n"  // dg >>= 8
-    "vqrshrn.u16 d22, q12, #8                  \n"  // dr >>= 8
-    "vqsub.u8   q2, q2, q10                    \n"  // dbg - dbg * a / 256
-    "vqsub.u8   d6, d6, d22                    \n"  // dr - dr * a / 256
-    "vqadd.u8   q0, q0, q2                     \n"  // + sbg
-    "vqadd.u8   d2, d2, d6                     \n"  // + sr
-    "vmov.u8    d3, #255                       \n"  // a = 255
-    MEMACCESS(2)
-    "vst4.8     {d0[0],d1[0],d2[0],d3[0]}, [%2]! \n"  // store 1 pixel.
-    "bge        1b                             \n"
+      // Blend 1 pixel.
+      "1:                                        \n"
+      "vld4.8     {d0[0],d1[0],d2[0],d3[0]}, [%0]! \n"  // load 1 pixel ARGB0.
+      "vld4.8     {d4[0],d5[0],d6[0],d7[0]}, [%1]! \n"  // load 1 pixel ARGB1.
+      "subs       %3, %3, #1                     \n"    // 1 processed per loop.
+      "vmull.u8   q10, d4, d3                    \n"    // db * a
+      "vmull.u8   q11, d5, d3                    \n"    // dg * a
+      "vmull.u8   q12, d6, d3                    \n"    // dr * a
+      "vqrshrn.u16 d20, q10, #8                  \n"    // db >>= 8
+      "vqrshrn.u16 d21, q11, #8                  \n"    // dg >>= 8
+      "vqrshrn.u16 d22, q12, #8                  \n"    // dr >>= 8
+      "vqsub.u8   q2, q2, q10                    \n"    // dbg - dbg * a / 256
+      "vqsub.u8   d6, d6, d22                    \n"    // dr - dr * a / 256
+      "vqadd.u8   q0, q0, q2                     \n"    // + sbg
+      "vqadd.u8   d2, d2, d6                     \n"    // + sr
+      "vmov.u8    d3, #255                       \n"    // a = 255
+      "vst4.8     {d0[0],d1[0],d2[0],d3[0]}, [%2]! \n"  // store 1 pixel.
+      "bge        1b                             \n"
 
-  "99:                                         \n"
+      "99:                                         \n"
 
-  : "+r"(src_argb0),    // %0
-    "+r"(src_argb1),    // %1
-    "+r"(dst_argb),     // %2
-    "+r"(width)         // %3
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3", "q10", "q11", "q12"
-  );
+      : "+r"(src_argb0),  // %0
+        "+r"(src_argb1),  // %1
+        "+r"(dst_argb),   // %2
+        "+r"(width)       // %3
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q10", "q11", "q12");
 }
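For reference, the per-channel math the blend loop above performs (multiply destination by source alpha, rounding-narrow by 8, saturating subtract then saturating add) can be sketched in scalar C. `blend_channel` is a hypothetical helper name for illustration, not part of libyuv:

```c
#include <assert.h>
#include <stdint.h>

// Scalar sketch of one channel of the NEON "over" blend above:
//   dst' = saturate(src + dst - ((dst * src_alpha + 128) >> 8))
// The +128 models the rounding done by vqrshrn.u16 #8.
static uint8_t blend_channel(uint8_t s, uint8_t src_alpha, uint8_t d) {
  int faded = d - ((d * src_alpha + 128) >> 8);  // vqsub.u8 step
  int out = s + faded;                           // vqadd.u8 step
  return (uint8_t)(out > 255 ? 255 : out);
}
```

With a fully opaque source the destination contribution cancels and the (premultiplied) source wins; with zero source alpha the destination passes through.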
 
 // Attenuate 8 pixels at a time.
-void ARGBAttenuateRow_NEON(const uint8* src_argb, uint8* dst_argb, int width) {
-  asm volatile (
-    // Attenuate 8 pixels.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 pixels of ARGB.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    "vmull.u8   q10, d0, d3                    \n"  // b * a
-    "vmull.u8   q11, d1, d3                    \n"  // g * a
-    "vmull.u8   q12, d2, d3                    \n"  // r * a
-    "vqrshrn.u16 d0, q10, #8                   \n"  // b >>= 8
-    "vqrshrn.u16 d1, q11, #8                   \n"  // g >>= 8
-    "vqrshrn.u16 d2, q12, #8                   \n"  // r >>= 8
-    MEMACCESS(1)
-    "vst4.8     {d0, d1, d2, d3}, [%1]!        \n"  // store 8 pixels of ARGB.
-    "bgt        1b                             \n"
-  : "+r"(src_argb),   // %0
-    "+r"(dst_argb),   // %1
-    "+r"(width)       // %2
-  :
-  : "cc", "memory", "q0", "q1", "q10", "q11", "q12"
-  );
+void ARGBAttenuateRow_NEON(const uint8_t* src_argb,
+                           uint8_t* dst_argb,
+                           int width) {
+  asm volatile(
+      // Attenuate 8 pixels.
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 pixels of ARGB.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vmull.u8   q10, d0, d3                    \n"  // b * a
+      "vmull.u8   q11, d1, d3                    \n"  // g * a
+      "vmull.u8   q12, d2, d3                    \n"  // r * a
+      "vqrshrn.u16 d0, q10, #8                   \n"  // b >>= 8
+      "vqrshrn.u16 d1, q11, #8                   \n"  // g >>= 8
+      "vqrshrn.u16 d2, q12, #8                   \n"  // r >>= 8
+      "vst4.8     {d0, d1, d2, d3}, [%1]!        \n"  // store 8 pixels of ARGB.
+      "bgt        1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "q0", "q1", "q10", "q11", "q12");
 }
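The attenuate loop above reduces each color channel by its pixel's alpha via `vmull.u8` plus a rounding narrow (`vqrshrn.u16 #8`). A scalar equivalent, with `attenuate_channel` as a hypothetical name:

```c
#include <assert.h>
#include <stdint.h>

// Scalar equivalent of the NEON attenuate step:
//   c' = (c * a + 128) >> 8
// i.e. multiply by alpha and rounding-narrow by 8 bits.
static uint8_t attenuate_channel(uint8_t c, uint8_t a) {
  return (uint8_t)((c * a + 128) >> 8);
}
```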
 
 // Quantize 8 ARGB pixels (32 bytes).
 // dst = (dst * scale >> 16) * interval_size + interval_offset;
-void ARGBQuantizeRow_NEON(uint8* dst_argb, int scale, int interval_size,
-                          int interval_offset, int width) {
-  asm volatile (
-    "vdup.u16   q8, %2                         \n"
-    "vshr.u16   q8, q8, #1                     \n"  // scale >>= 1
-    "vdup.u16   q9, %3                         \n"  // interval multiply.
-    "vdup.u16   q10, %4                        \n"  // interval add
+void ARGBQuantizeRow_NEON(uint8_t* dst_argb,
+                          int scale,
+                          int interval_size,
+                          int interval_offset,
+                          int width) {
+  asm volatile(
+      "vdup.u16   q8, %2                         \n"
+      "vshr.u16   q8, q8, #1                     \n"  // scale >>= 1
+      "vdup.u16   q9, %3                         \n"  // interval multiply.
+      "vdup.u16   q10, %4                        \n"  // interval add
 
-    // 8 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d2, d4, d6}, [%0]         \n"  // load 8 pixels of ARGB.
-    "subs       %1, %1, #8                     \n"  // 8 processed per loop.
-    "vmovl.u8   q0, d0                         \n"  // b (0 .. 255)
-    "vmovl.u8   q1, d2                         \n"
-    "vmovl.u8   q2, d4                         \n"
-    "vqdmulh.s16 q0, q0, q8                    \n"  // b * scale
-    "vqdmulh.s16 q1, q1, q8                    \n"  // g
-    "vqdmulh.s16 q2, q2, q8                    \n"  // r
-    "vmul.u16   q0, q0, q9                     \n"  // b * interval_size
-    "vmul.u16   q1, q1, q9                     \n"  // g
-    "vmul.u16   q2, q2, q9                     \n"  // r
-    "vadd.u16   q0, q0, q10                    \n"  // b + interval_offset
-    "vadd.u16   q1, q1, q10                    \n"  // g
-    "vadd.u16   q2, q2, q10                    \n"  // r
-    "vqmovn.u16 d0, q0                         \n"
-    "vqmovn.u16 d2, q1                         \n"
-    "vqmovn.u16 d4, q2                         \n"
-    MEMACCESS(0)
-    "vst4.8     {d0, d2, d4, d6}, [%0]!        \n"  // store 8 pixels of ARGB.
-    "bgt        1b                             \n"
-  : "+r"(dst_argb),       // %0
-    "+r"(width)           // %1
-  : "r"(scale),           // %2
-    "r"(interval_size),   // %3
-    "r"(interval_offset)  // %4
-  : "cc", "memory", "q0", "q1", "q2", "q3", "q8", "q9", "q10"
-  );
+      // 8 pixel loop.
+      "1:                                        \n"
+      "vld4.8     {d0, d2, d4, d6}, [%0]         \n"  // load 8 pixels of ARGB.
+      "subs       %1, %1, #8                     \n"  // 8 processed per loop.
+      "vmovl.u8   q0, d0                         \n"  // b (0 .. 255)
+      "vmovl.u8   q1, d2                         \n"
+      "vmovl.u8   q2, d4                         \n"
+      "vqdmulh.s16 q0, q0, q8                    \n"  // b * scale
+      "vqdmulh.s16 q1, q1, q8                    \n"  // g
+      "vqdmulh.s16 q2, q2, q8                    \n"  // r
+      "vmul.u16   q0, q0, q9                     \n"  // b * interval_size
+      "vmul.u16   q1, q1, q9                     \n"  // g
+      "vmul.u16   q2, q2, q9                     \n"  // r
+      "vadd.u16   q0, q0, q10                    \n"  // b + interval_offset
+      "vadd.u16   q1, q1, q10                    \n"  // g
+      "vadd.u16   q2, q2, q10                    \n"  // r
+      "vqmovn.u16 d0, q0                         \n"
+      "vqmovn.u16 d2, q1                         \n"
+      "vqmovn.u16 d4, q2                         \n"
+      "vst4.8     {d0, d2, d4, d6}, [%0]!        \n"  // store 8 pixels of ARGB.
+      "bgt        1b                             \n"
+      : "+r"(dst_argb),       // %0
+        "+r"(width)           // %1
+      : "r"(scale),           // %2
+        "r"(interval_size),   // %3
+        "r"(interval_offset)  // %4
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q8", "q9", "q10");
 }
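A scalar form of the quantize formula quoted in the comment above (`dst = (dst * scale >> 16) * interval_size + interval_offset`). The NEON path halves `scale` up front because `vqdmulh.s16` doubles the product before taking the high half, landing on the same `>> 16` result. `quantize_channel` is a hypothetical name:

```c
#include <assert.h>
#include <stdint.h>

// Scalar sketch of the ARGB quantize step. With scale = 65536 / n,
// channels collapse onto n levels of width interval_size, shifted by
// interval_offset.
static uint8_t quantize_channel(uint8_t c, int scale, int interval_size,
                                int interval_offset) {
  int v = ((c * scale) >> 16) * interval_size + interval_offset;
  return (uint8_t)(v > 255 ? 255 : v);
}
```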
 
 // Shade 8 pixels at a time by specified value.
 // NOTE vqrdmulh.s16 q10, q10, d0[0] must use a scaler register from 0 to 8.
 // Rounding in vqrdmulh does +1 to high if high bit of low s16 is set.
-void ARGBShadeRow_NEON(const uint8* src_argb, uint8* dst_argb, int width,
-                       uint32 value) {
-  asm volatile (
-    "vdup.u32   q0, %3                         \n"  // duplicate scale value.
-    "vzip.u8    d0, d1                         \n"  // d0 aarrggbb.
-    "vshr.u16   q0, q0, #1                     \n"  // scale / 2.
+void ARGBShadeRow_NEON(const uint8_t* src_argb,
+                       uint8_t* dst_argb,
+                       int width,
+                       uint32_t value) {
+  asm volatile(
+      "vdup.u32   q0, %3                         \n"  // duplicate scale value.
+      "vzip.u8    d0, d1                         \n"  // d0 aarrggbb.
+      "vshr.u16   q0, q0, #1                     \n"  // scale / 2.
 
-    // 8 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d20, d22, d24, d26}, [%0]!    \n"  // load 8 pixels of ARGB.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    "vmovl.u8   q10, d20                       \n"  // b (0 .. 255)
-    "vmovl.u8   q11, d22                       \n"
-    "vmovl.u8   q12, d24                       \n"
-    "vmovl.u8   q13, d26                       \n"
-    "vqrdmulh.s16 q10, q10, d0[0]              \n"  // b * scale * 2
-    "vqrdmulh.s16 q11, q11, d0[1]              \n"  // g
-    "vqrdmulh.s16 q12, q12, d0[2]              \n"  // r
-    "vqrdmulh.s16 q13, q13, d0[3]              \n"  // a
-    "vqmovn.u16 d20, q10                       \n"
-    "vqmovn.u16 d22, q11                       \n"
-    "vqmovn.u16 d24, q12                       \n"
-    "vqmovn.u16 d26, q13                       \n"
-    MEMACCESS(1)
-    "vst4.8     {d20, d22, d24, d26}, [%1]!    \n"  // store 8 pixels of ARGB.
-    "bgt        1b                             \n"
-  : "+r"(src_argb),       // %0
-    "+r"(dst_argb),       // %1
-    "+r"(width)           // %2
-  : "r"(value)            // %3
-  : "cc", "memory", "q0", "q10", "q11", "q12", "q13"
-  );
+      // 8 pixel loop.
+      "1:                                        \n"
+      "vld4.8     {d20, d22, d24, d26}, [%0]!    \n"  // load 8 pixels of ARGB.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vmovl.u8   q10, d20                       \n"  // b (0 .. 255)
+      "vmovl.u8   q11, d22                       \n"
+      "vmovl.u8   q12, d24                       \n"
+      "vmovl.u8   q13, d26                       \n"
+      "vqrdmulh.s16 q10, q10, d0[0]              \n"  // b * scale * 2
+      "vqrdmulh.s16 q11, q11, d0[1]              \n"  // g
+      "vqrdmulh.s16 q12, q12, d0[2]              \n"  // r
+      "vqrdmulh.s16 q13, q13, d0[3]              \n"  // a
+      "vqmovn.u16 d20, q10                       \n"
+      "vqmovn.u16 d22, q11                       \n"
+      "vqmovn.u16 d24, q12                       \n"
+      "vqmovn.u16 d26, q13                       \n"
+      "vst4.8     {d20, d22, d24, d26}, [%1]!    \n"  // store 8 pixels of ARGB.
+      "bgt        1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      : "r"(value)       // %3
+      : "cc", "memory", "q0", "q10", "q11", "q12", "q13");
 }
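An approximate scalar model of the shade loop above. The NEON code duplicates each byte of `value` into a 16-bit lane (`vzip.u8`), halves it, then uses `vqrdmulh.s16`, which works out to roughly `c * v / 256` per channel with rounding. `shade_channel` is a hypothetical name and may differ from the NEON result by 1 at the extremes because of the `vqrdmulh` rounding noted in the comment above:

```c
#include <assert.h>
#include <stdint.h>

// Close scalar approximation of the NEON shade step: scale each
// channel by v / 256 with rounding, so v = 255 is roughly identity
// and v = 128 roughly halves the channel.
static uint8_t shade_channel(uint8_t c, uint8_t v) {
  return (uint8_t)((c * v + 128) >> 8);
}
```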
 
 // Convert 8 ARGB pixels (64 bytes) to 8 Gray ARGB pixels
 // Similar to ARGBToYJ but stores ARGB.
 // C code is (15 * b + 75 * g + 38 * r + 64) >> 7;
-void ARGBGrayRow_NEON(const uint8* src_argb, uint8* dst_argb, int width) {
-  asm volatile (
-    "vmov.u8    d24, #15                       \n"  // B * 0.11400 coefficient
-    "vmov.u8    d25, #75                       \n"  // G * 0.58700 coefficient
-    "vmov.u8    d26, #38                       \n"  // R * 0.29900 coefficient
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 ARGB pixels.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    "vmull.u8   q2, d0, d24                    \n"  // B
-    "vmlal.u8   q2, d1, d25                    \n"  // G
-    "vmlal.u8   q2, d2, d26                    \n"  // R
-    "vqrshrun.s16 d0, q2, #7                   \n"  // 15 bit to 8 bit B
-    "vmov       d1, d0                         \n"  // G
-    "vmov       d2, d0                         \n"  // R
-    MEMACCESS(1)
-    "vst4.8     {d0, d1, d2, d3}, [%1]!        \n"  // store 8 ARGB pixels.
-    "bgt        1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_argb),  // %1
-    "+r"(width)      // %2
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q12", "q13"
-  );
+void ARGBGrayRow_NEON(const uint8_t* src_argb, uint8_t* dst_argb, int width) {
+  asm volatile(
+      "vmov.u8    d24, #15                       \n"  // B * 0.11400 coefficient
+      "vmov.u8    d25, #75                       \n"  // G * 0.58700 coefficient
+      "vmov.u8    d26, #38                       \n"  // R * 0.29900 coefficient
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 ARGB pixels.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vmull.u8   q2, d0, d24                    \n"  // B
+      "vmlal.u8   q2, d1, d25                    \n"  // G
+      "vmlal.u8   q2, d2, d26                    \n"  // R
+      "vqrshrun.s16 d0, q2, #7                   \n"  // 15 bit to 8 bit B
+      "vmov       d1, d0                         \n"  // G
+      "vmov       d2, d0                         \n"  // R
+      "vst4.8     {d0, d1, d2, d3}, [%1]!        \n"  // store 8 ARGB pixels.
+      "bgt        1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q12", "q13");
 }
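The gray conversion quoted in the comment above (`(15 * b + 75 * g + 38 * r + 64) >> 7`) in scalar form; the result is stored into B, G and R while alpha passes through. `gray_value` is a hypothetical name:

```c
#include <assert.h>
#include <stdint.h>

// Scalar version of the NEON gray weighting: fixed-point BT.601-style
// luma with 7 fractional bits and rounding (the +64).
static uint8_t gray_value(uint8_t b, uint8_t g, uint8_t r) {
  return (uint8_t)((15 * b + 75 * g + 38 * r + 64) >> 7);
}
```

The weights 15 + 75 + 38 = 128, so a uniform gray input maps back to (nearly) itself.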
 
 // Convert 8 ARGB pixels (32 bytes) to 8 Sepia ARGB pixels.
 //    b = (r * 35 + g * 68 + b * 17) >> 7
 //    g = (r * 45 + g * 88 + b * 22) >> 7
 //    r = (r * 50 + g * 98 + b * 24) >> 7
-void ARGBSepiaRow_NEON(uint8* dst_argb, int width) {
-  asm volatile (
-    "vmov.u8    d20, #17                       \n"  // BB coefficient
-    "vmov.u8    d21, #68                       \n"  // BG coefficient
-    "vmov.u8    d22, #35                       \n"  // BR coefficient
-    "vmov.u8    d24, #22                       \n"  // GB coefficient
-    "vmov.u8    d25, #88                       \n"  // GG coefficient
-    "vmov.u8    d26, #45                       \n"  // GR coefficient
-    "vmov.u8    d28, #24                       \n"  // BB coefficient
-    "vmov.u8    d29, #98                       \n"  // BG coefficient
-    "vmov.u8    d30, #50                       \n"  // BR coefficient
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]         \n"  // load 8 ARGB pixels.
-    "subs       %1, %1, #8                     \n"  // 8 processed per loop.
-    "vmull.u8   q2, d0, d20                    \n"  // B to Sepia B
-    "vmlal.u8   q2, d1, d21                    \n"  // G
-    "vmlal.u8   q2, d2, d22                    \n"  // R
-    "vmull.u8   q3, d0, d24                    \n"  // B to Sepia G
-    "vmlal.u8   q3, d1, d25                    \n"  // G
-    "vmlal.u8   q3, d2, d26                    \n"  // R
-    "vmull.u8   q8, d0, d28                    \n"  // B to Sepia R
-    "vmlal.u8   q8, d1, d29                    \n"  // G
-    "vmlal.u8   q8, d2, d30                    \n"  // R
-    "vqshrn.u16 d0, q2, #7                     \n"  // 16 bit to 8 bit B
-    "vqshrn.u16 d1, q3, #7                     \n"  // 16 bit to 8 bit G
-    "vqshrn.u16 d2, q8, #7                     \n"  // 16 bit to 8 bit R
-    MEMACCESS(0)
-    "vst4.8     {d0, d1, d2, d3}, [%0]!        \n"  // store 8 ARGB pixels.
-    "bgt        1b                             \n"
-  : "+r"(dst_argb),  // %0
-    "+r"(width)      // %1
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3",
-    "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+void ARGBSepiaRow_NEON(uint8_t* dst_argb, int width) {
+  asm volatile(
+      "vmov.u8    d20, #17                       \n"  // BB coefficient
+      "vmov.u8    d21, #68                       \n"  // BG coefficient
+      "vmov.u8    d22, #35                       \n"  // BR coefficient
+      "vmov.u8    d24, #22                       \n"  // GB coefficient
+      "vmov.u8    d25, #88                       \n"  // GG coefficient
+      "vmov.u8    d26, #45                       \n"  // GR coefficient
+      "vmov.u8    d28, #24                       \n"  // RB coefficient
+      "vmov.u8    d29, #98                       \n"  // RG coefficient
+      "vmov.u8    d30, #50                       \n"  // RR coefficient
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]         \n"  // load 8 ARGB pixels.
+      "subs       %1, %1, #8                     \n"  // 8 processed per loop.
+      "vmull.u8   q2, d0, d20                    \n"  // B to Sepia B
+      "vmlal.u8   q2, d1, d21                    \n"  // G
+      "vmlal.u8   q2, d2, d22                    \n"  // R
+      "vmull.u8   q3, d0, d24                    \n"  // B to Sepia G
+      "vmlal.u8   q3, d1, d25                    \n"  // G
+      "vmlal.u8   q3, d2, d26                    \n"  // R
+      "vmull.u8   q8, d0, d28                    \n"  // B to Sepia R
+      "vmlal.u8   q8, d1, d29                    \n"  // G
+      "vmlal.u8   q8, d2, d30                    \n"  // R
+      "vqshrn.u16 d0, q2, #7                     \n"  // 16 bit to 8 bit B
+      "vqshrn.u16 d1, q3, #7                     \n"  // 16 bit to 8 bit G
+      "vqshrn.u16 d2, q8, #7                     \n"  // 16 bit to 8 bit R
+      "vst4.8     {d0, d1, d2, d3}, [%0]!        \n"  // store 8 ARGB pixels.
+      "bgt        1b                             \n"
+      : "+r"(dst_argb),  // %0
+        "+r"(width)      // %1
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q3", "q10", "q11", "q12", "q13",
+        "q14", "q15");
 }
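A scalar sketch of the sepia weights from the comment above, applied in place to one B/G/R triple with the saturating `>> 7` narrowing (`vqshrn.u16`). `sepia_pixel` and `sat255` are hypothetical names:

```c
#include <assert.h>
#include <stdint.h>

static uint8_t sat255(int v) { return (uint8_t)(v > 255 ? 255 : v); }

// Scalar form of the NEON sepia transform: three fixed-point dot
// products of the original B/G/R, each narrowed by 7 bits with
// unsigned saturation. Bright inputs clamp G and R toward 255.
static void sepia_pixel(uint8_t* b, uint8_t* g, uint8_t* r) {
  int ob = *b, og = *g, or_ = *r;
  *b = sat255((or_ * 35 + og * 68 + ob * 17) >> 7);
  *g = sat255((or_ * 45 + og * 88 + ob * 22) >> 7);
  *r = sat255((or_ * 50 + og * 98 + ob * 24) >> 7);
}
```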
 
 // Transform 8 ARGB pixels (32 bytes) with color matrix.
 // TODO(fbarchard): Was same as Sepia except matrix is provided.  This function
 // needs to saturate.  Consider doing a non-saturating version.
-void ARGBColorMatrixRow_NEON(const uint8* src_argb, uint8* dst_argb,
-                             const int8* matrix_argb, int width) {
-  asm volatile (
-    MEMACCESS(3)
-    "vld1.8     {q2}, [%3]                     \n"  // load 3 ARGB vectors.
-    "vmovl.s8   q0, d4                         \n"  // B,G coefficients s16.
-    "vmovl.s8   q1, d5                         \n"  // R,A coefficients s16.
+void ARGBColorMatrixRow_NEON(const uint8_t* src_argb,
+                             uint8_t* dst_argb,
+                             const int8_t* matrix_argb,
+                             int width) {
+  asm volatile(
+      "vld1.8     {q2}, [%3]                     \n"  // load 3 ARGB vectors.
+      "vmovl.s8   q0, d4                         \n"  // B,G coefficients s16.
+      "vmovl.s8   q1, d5                         \n"  // R,A coefficients s16.
 
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d16, d18, d20, d22}, [%0]!    \n"  // load 8 ARGB pixels.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop.
-    "vmovl.u8   q8, d16                        \n"  // b (0 .. 255) 16 bit
-    "vmovl.u8   q9, d18                        \n"  // g
-    "vmovl.u8   q10, d20                       \n"  // r
-    "vmovl.u8   q11, d22                       \n"  // a
-    "vmul.s16   q12, q8, d0[0]                 \n"  // B = B * Matrix B
-    "vmul.s16   q13, q8, d1[0]                 \n"  // G = B * Matrix G
-    "vmul.s16   q14, q8, d2[0]                 \n"  // R = B * Matrix R
-    "vmul.s16   q15, q8, d3[0]                 \n"  // A = B * Matrix A
-    "vmul.s16   q4, q9, d0[1]                  \n"  // B += G * Matrix B
-    "vmul.s16   q5, q9, d1[1]                  \n"  // G += G * Matrix G
-    "vmul.s16   q6, q9, d2[1]                  \n"  // R += G * Matrix R
-    "vmul.s16   q7, q9, d3[1]                  \n"  // A += G * Matrix A
-    "vqadd.s16  q12, q12, q4                   \n"  // Accumulate B
-    "vqadd.s16  q13, q13, q5                   \n"  // Accumulate G
-    "vqadd.s16  q14, q14, q6                   \n"  // Accumulate R
-    "vqadd.s16  q15, q15, q7                   \n"  // Accumulate A
-    "vmul.s16   q4, q10, d0[2]                 \n"  // B += R * Matrix B
-    "vmul.s16   q5, q10, d1[2]                 \n"  // G += R * Matrix G
-    "vmul.s16   q6, q10, d2[2]                 \n"  // R += R * Matrix R
-    "vmul.s16   q7, q10, d3[2]                 \n"  // A += R * Matrix A
-    "vqadd.s16  q12, q12, q4                   \n"  // Accumulate B
-    "vqadd.s16  q13, q13, q5                   \n"  // Accumulate G
-    "vqadd.s16  q14, q14, q6                   \n"  // Accumulate R
-    "vqadd.s16  q15, q15, q7                   \n"  // Accumulate A
-    "vmul.s16   q4, q11, d0[3]                 \n"  // B += A * Matrix B
-    "vmul.s16   q5, q11, d1[3]                 \n"  // G += A * Matrix G
-    "vmul.s16   q6, q11, d2[3]                 \n"  // R += A * Matrix R
-    "vmul.s16   q7, q11, d3[3]                 \n"  // A += A * Matrix A
-    "vqadd.s16  q12, q12, q4                   \n"  // Accumulate B
-    "vqadd.s16  q13, q13, q5                   \n"  // Accumulate G
-    "vqadd.s16  q14, q14, q6                   \n"  // Accumulate R
-    "vqadd.s16  q15, q15, q7                   \n"  // Accumulate A
-    "vqshrun.s16 d16, q12, #6                  \n"  // 16 bit to 8 bit B
-    "vqshrun.s16 d18, q13, #6                  \n"  // 16 bit to 8 bit G
-    "vqshrun.s16 d20, q14, #6                  \n"  // 16 bit to 8 bit R
-    "vqshrun.s16 d22, q15, #6                  \n"  // 16 bit to 8 bit A
-    MEMACCESS(1)
-    "vst4.8     {d16, d18, d20, d22}, [%1]!    \n"  // store 8 ARGB pixels.
-    "bgt        1b                             \n"
-  : "+r"(src_argb),   // %0
-    "+r"(dst_argb),   // %1
-    "+r"(width)       // %2
-  : "r"(matrix_argb)  // %3
-  : "cc", "memory", "q0", "q1", "q2", "q4", "q5", "q6", "q7", "q8", "q9",
-    "q10", "q11", "q12", "q13", "q14", "q15"
-  );
+      "1:                                        \n"
+      "vld4.8     {d16, d18, d20, d22}, [%0]!    \n"  // load 8 ARGB pixels.
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop.
+      "vmovl.u8   q8, d16                        \n"  // b (0 .. 255) 16 bit
+      "vmovl.u8   q9, d18                        \n"  // g
+      "vmovl.u8   q10, d20                       \n"  // r
+      "vmovl.u8   q11, d22                       \n"  // a
+      "vmul.s16   q12, q8, d0[0]                 \n"  // B = B * Matrix B
+      "vmul.s16   q13, q8, d1[0]                 \n"  // G = B * Matrix G
+      "vmul.s16   q14, q8, d2[0]                 \n"  // R = B * Matrix R
+      "vmul.s16   q15, q8, d3[0]                 \n"  // A = B * Matrix A
+      "vmul.s16   q4, q9, d0[1]                  \n"  // B += G * Matrix B
+      "vmul.s16   q5, q9, d1[1]                  \n"  // G += G * Matrix G
+      "vmul.s16   q6, q9, d2[1]                  \n"  // R += G * Matrix R
+      "vmul.s16   q7, q9, d3[1]                  \n"  // A += G * Matrix A
+      "vqadd.s16  q12, q12, q4                   \n"  // Accumulate B
+      "vqadd.s16  q13, q13, q5                   \n"  // Accumulate G
+      "vqadd.s16  q14, q14, q6                   \n"  // Accumulate R
+      "vqadd.s16  q15, q15, q7                   \n"  // Accumulate A
+      "vmul.s16   q4, q10, d0[2]                 \n"  // B += R * Matrix B
+      "vmul.s16   q5, q10, d1[2]                 \n"  // G += R * Matrix G
+      "vmul.s16   q6, q10, d2[2]                 \n"  // R += R * Matrix R
+      "vmul.s16   q7, q10, d3[2]                 \n"  // A += R * Matrix A
+      "vqadd.s16  q12, q12, q4                   \n"  // Accumulate B
+      "vqadd.s16  q13, q13, q5                   \n"  // Accumulate G
+      "vqadd.s16  q14, q14, q6                   \n"  // Accumulate R
+      "vqadd.s16  q15, q15, q7                   \n"  // Accumulate A
+      "vmul.s16   q4, q11, d0[3]                 \n"  // B += A * Matrix B
+      "vmul.s16   q5, q11, d1[3]                 \n"  // G += A * Matrix G
+      "vmul.s16   q6, q11, d2[3]                 \n"  // R += A * Matrix R
+      "vmul.s16   q7, q11, d3[3]                 \n"  // A += A * Matrix A
+      "vqadd.s16  q12, q12, q4                   \n"  // Accumulate B
+      "vqadd.s16  q13, q13, q5                   \n"  // Accumulate G
+      "vqadd.s16  q14, q14, q6                   \n"  // Accumulate R
+      "vqadd.s16  q15, q15, q7                   \n"  // Accumulate A
+      "vqshrun.s16 d16, q12, #6                  \n"  // 16 bit to 8 bit B
+      "vqshrun.s16 d18, q13, #6                  \n"  // 16 bit to 8 bit G
+      "vqshrun.s16 d20, q14, #6                  \n"  // 16 bit to 8 bit R
+      "vqshrun.s16 d22, q15, #6                  \n"  // 16 bit to 8 bit A
+      "vst4.8     {d16, d18, d20, d22}, [%1]!    \n"  // store 8 ARGB pixels.
+      "bgt        1b                             \n"
+      : "+r"(src_argb),   // %0
+        "+r"(dst_argb),   // %1
+        "+r"(width)       // %2
+      : "r"(matrix_argb)  // %3
+      : "cc", "memory", "q0", "q1", "q2", "q4", "q5", "q6", "q7", "q8", "q9",
+        "q10", "q11", "q12", "q13", "q14", "q15");
 }
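Scalar model of the color-matrix transform above: each output channel is a dot product of the input B/G/R/A with one row of four `int8` coefficients, then a saturating `>> 6` (`vqshrun.s16`), so a coefficient of 64 means 1.0. `color_matrix_pixel` is a hypothetical name, and the row layout (B row first, A row last) follows how the NEON code splits `matrix_argb` across d0..d3 lanes:

```c
#include <assert.h>
#include <stdint.h>

// One pixel of the color-matrix transform: out[c] is the B/G/R/A dot
// product with coefficient row m[4c .. 4c+3], clamped to 0 before the
// unsigned-saturating >> 6 narrowing.
static void color_matrix_pixel(const uint8_t in[4], const int8_t m[16],
                               uint8_t out[4]) {
  for (int c = 0; c < 4; ++c) {
    int dot = in[0] * m[4 * c + 0] + in[1] * m[4 * c + 1] +
              in[2] * m[4 * c + 2] + in[3] * m[4 * c + 3];
    if (dot < 0) dot = 0;  // vqshrun saturates negatives to zero
    int v = dot >> 6;
    out[c] = (uint8_t)(v > 255 ? 255 : v);
  }
}
```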
 
 // Multiply 2 rows of ARGB pixels together, 8 pixels at a time.
-void ARGBMultiplyRow_NEON(const uint8* src_argb0, const uint8* src_argb1,
-                          uint8* dst_argb, int width) {
-  asm volatile (
-    // 8 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d2, d4, d6}, [%0]!        \n"  // load 8 ARGB pixels.
-    MEMACCESS(1)
-    "vld4.8     {d1, d3, d5, d7}, [%1]!        \n"  // load 8 more ARGB pixels.
-    "subs       %3, %3, #8                     \n"  // 8 processed per loop.
-    "vmull.u8   q0, d0, d1                     \n"  // multiply B
-    "vmull.u8   q1, d2, d3                     \n"  // multiply G
-    "vmull.u8   q2, d4, d5                     \n"  // multiply R
-    "vmull.u8   q3, d6, d7                     \n"  // multiply A
-    "vrshrn.u16 d0, q0, #8                     \n"  // 16 bit to 8 bit B
-    "vrshrn.u16 d1, q1, #8                     \n"  // 16 bit to 8 bit G
-    "vrshrn.u16 d2, q2, #8                     \n"  // 16 bit to 8 bit R
-    "vrshrn.u16 d3, q3, #8                     \n"  // 16 bit to 8 bit A
-    MEMACCESS(2)
-    "vst4.8     {d0, d1, d2, d3}, [%2]!        \n"  // store 8 ARGB pixels.
-    "bgt        1b                             \n"
-
-  : "+r"(src_argb0),  // %0
-    "+r"(src_argb1),  // %1
-    "+r"(dst_argb),   // %2
-    "+r"(width)       // %3
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3"
-  );
+void ARGBMultiplyRow_NEON(const uint8_t* src_argb0,
+                          const uint8_t* src_argb1,
+                          uint8_t* dst_argb,
+                          int width) {
+  asm volatile(
+      // 8 pixel loop.
+      "1:                                        \n"
+      "vld4.8     {d0, d2, d4, d6}, [%0]!        \n"  // load 8 ARGB pixels.
+      "vld4.8     {d1, d3, d5, d7}, [%1]!        \n"  // load 8 more ARGB
+      "subs       %3, %3, #8                     \n"  // 8 processed per loop.
+      "vmull.u8   q0, d0, d1                     \n"  // multiply B
+      "vmull.u8   q1, d2, d3                     \n"  // multiply G
+      "vmull.u8   q2, d4, d5                     \n"  // multiply R
+      "vmull.u8   q3, d6, d7                     \n"  // multiply A
+      "vrshrn.u16 d0, q0, #8                     \n"  // 16 bit to 8 bit B
+      "vrshrn.u16 d1, q1, #8                     \n"  // 16 bit to 8 bit G
+      "vrshrn.u16 d2, q2, #8                     \n"  // 16 bit to 8 bit R
+      "vrshrn.u16 d3, q3, #8                     \n"  // 16 bit to 8 bit A
+      "vst4.8     {d0, d1, d2, d3}, [%2]!        \n"  // store 8 ARGB pixels.
+      "bgt        1b                             \n"
+      : "+r"(src_argb0),  // %0
+        "+r"(src_argb1),  // %1
+        "+r"(dst_argb),   // %2
+        "+r"(width)       // %3
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q3");
 }
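The multiply (modulate) loop above widens with `vmull.u8` and rounding-narrows with `vrshrn.u16 #8`, so 255 behaves as roughly 1.0. A scalar form, with `multiply_channel` as a hypothetical name:

```c
#include <assert.h>
#include <stdint.h>

// Scalar form of the NEON modulate step:
//   c' = (a * b + 128) >> 8
// Note 255 * 255 narrows to 254, not 255, matching the NEON result.
static uint8_t multiply_channel(uint8_t a, uint8_t b) {
  return (uint8_t)((a * b + 128) >> 8);
}
```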
 
 // Add 2 rows of ARGB pixels together, 8 pixels at a time.
-void ARGBAddRow_NEON(const uint8* src_argb0, const uint8* src_argb1,
-                     uint8* dst_argb, int width) {
-  asm volatile (
-    // 8 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 ARGB pixels.
-    MEMACCESS(1)
-    "vld4.8     {d4, d5, d6, d7}, [%1]!        \n"  // load 8 more ARGB pixels.
-    "subs       %3, %3, #8                     \n"  // 8 processed per loop.
-    "vqadd.u8   q0, q0, q2                     \n"  // add B, G
-    "vqadd.u8   q1, q1, q3                     \n"  // add R, A
-    MEMACCESS(2)
-    "vst4.8     {d0, d1, d2, d3}, [%2]!        \n"  // store 8 ARGB pixels.
-    "bgt        1b                             \n"
-
-  : "+r"(src_argb0),  // %0
-    "+r"(src_argb1),  // %1
-    "+r"(dst_argb),   // %2
-    "+r"(width)       // %3
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3"
-  );
+void ARGBAddRow_NEON(const uint8_t* src_argb0,
+                     const uint8_t* src_argb1,
+                     uint8_t* dst_argb,
+                     int width) {
+  asm volatile(
+      // 8 pixel loop.
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 ARGB pixels.
+      "vld4.8     {d4, d5, d6, d7}, [%1]!        \n"  // load 8 more ARGB
+      "subs       %3, %3, #8                     \n"  // 8 processed per loop.
+      "vqadd.u8   q0, q0, q2                     \n"  // add B, G
+      "vqadd.u8   q1, q1, q3                     \n"  // add R, A
+      "vst4.8     {d0, d1, d2, d3}, [%2]!        \n"  // store 8 ARGB pixels.
+      "bgt        1b                             \n"
+      : "+r"(src_argb0),  // %0
+        "+r"(src_argb1),  // %1
+        "+r"(dst_argb),   // %2
+        "+r"(width)       // %3
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q3");
 }
 
 // Subtract 2 rows of ARGB pixels, 8 pixels at a time.
-void ARGBSubtractRow_NEON(const uint8* src_argb0, const uint8* src_argb1,
-                          uint8* dst_argb, int width) {
-  asm volatile (
-    // 8 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 ARGB pixels.
-    MEMACCESS(1)
-    "vld4.8     {d4, d5, d6, d7}, [%1]!        \n"  // load 8 more ARGB pixels.
-    "subs       %3, %3, #8                     \n"  // 8 processed per loop.
-    "vqsub.u8   q0, q0, q2                     \n"  // subtract B, G
-    "vqsub.u8   q1, q1, q3                     \n"  // subtract R, A
-    MEMACCESS(2)
-    "vst4.8     {d0, d1, d2, d3}, [%2]!        \n"  // store 8 ARGB pixels.
-    "bgt        1b                             \n"
-
-  : "+r"(src_argb0),  // %0
-    "+r"(src_argb1),  // %1
-    "+r"(dst_argb),   // %2
-    "+r"(width)       // %3
-  :
-  : "cc", "memory", "q0", "q1", "q2", "q3"
-  );
+void ARGBSubtractRow_NEON(const uint8_t* src_argb0,
+                          const uint8_t* src_argb1,
+                          uint8_t* dst_argb,
+                          int width) {
+  asm volatile(
+      // 8 pixel loop.
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // load 8 ARGB pixels.
+      "vld4.8     {d4, d5, d6, d7}, [%1]!        \n"  // load 8 more ARGB
+      "subs       %3, %3, #8                     \n"  // 8 processed per loop.
+      "vqsub.u8   q0, q0, q2                     \n"  // subtract B, G
+      "vqsub.u8   q1, q1, q3                     \n"  // subtract R, A
+      "vst4.8     {d0, d1, d2, d3}, [%2]!        \n"  // store 8 ARGB pixels.
+      "bgt        1b                             \n"
+      : "+r"(src_argb0),  // %0
+        "+r"(src_argb1),  // %1
+        "+r"(dst_argb),   // %2
+        "+r"(width)       // %3
+      :
+      : "cc", "memory", "q0", "q1", "q2", "q3");
 }
 
 // Adds Sobel X and Sobel Y and stores Sobel into ARGB.
@@ -2672,54 +2455,50 @@
 // R = Sobel
 // G = Sobel
 // B = Sobel
-void SobelRow_NEON(const uint8* src_sobelx, const uint8* src_sobely,
-                     uint8* dst_argb, int width) {
-  asm volatile (
-    "vmov.u8    d3, #255                       \n"  // alpha
-    // 8 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {d0}, [%0]!                    \n"  // load 8 sobelx.
-    MEMACCESS(1)
-    "vld1.8     {d1}, [%1]!                    \n"  // load 8 sobely.
-    "subs       %3, %3, #8                     \n"  // 8 processed per loop.
-    "vqadd.u8   d0, d0, d1                     \n"  // add
-    "vmov.u8    d1, d0                         \n"
-    "vmov.u8    d2, d0                         \n"
-    MEMACCESS(2)
-    "vst4.8     {d0, d1, d2, d3}, [%2]!        \n"  // store 8 ARGB pixels.
-    "bgt        1b                             \n"
-  : "+r"(src_sobelx),  // %0
-    "+r"(src_sobely),  // %1
-    "+r"(dst_argb),    // %2
-    "+r"(width)        // %3
-  :
-  : "cc", "memory", "q0", "q1"
-  );
+void SobelRow_NEON(const uint8_t* src_sobelx,
+                   const uint8_t* src_sobely,
+                   uint8_t* dst_argb,
+                   int width) {
+  asm volatile(
+      "vmov.u8    d3, #255                       \n"  // alpha
+      // 8 pixel loop.
+      "1:                                        \n"
+      "vld1.8     {d0}, [%0]!                    \n"  // load 8 sobelx.
+      "vld1.8     {d1}, [%1]!                    \n"  // load 8 sobely.
+      "subs       %3, %3, #8                     \n"  // 8 processed per loop.
+      "vqadd.u8   d0, d0, d1                     \n"  // add
+      "vmov.u8    d1, d0                         \n"
+      "vmov.u8    d2, d0                         \n"
+      "vst4.8     {d0, d1, d2, d3}, [%2]!        \n"  // store 8 ARGB pixels.
+      "bgt        1b                             \n"
+      : "+r"(src_sobelx),  // %0
+        "+r"(src_sobely),  // %1
+        "+r"(dst_argb),    // %2
+        "+r"(width)        // %3
+      :
+      : "cc", "memory", "q0", "q1");
 }
 
 // Adds Sobel X and Sobel Y and stores Sobel into plane.
-void SobelToPlaneRow_NEON(const uint8* src_sobelx, const uint8* src_sobely,
-                          uint8* dst_y, int width) {
-  asm volatile (
-    // 16 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"  // load 16 sobelx.
-    MEMACCESS(1)
-    "vld1.8     {q1}, [%1]!                    \n"  // load 16 sobely.
-    "subs       %3, %3, #16                    \n"  // 16 processed per loop.
-    "vqadd.u8   q0, q0, q1                     \n"  // add
-    MEMACCESS(2)
-    "vst1.8     {q0}, [%2]!                    \n"  // store 16 pixels.
-    "bgt        1b                             \n"
-  : "+r"(src_sobelx),  // %0
-    "+r"(src_sobely),  // %1
-    "+r"(dst_y),       // %2
-    "+r"(width)        // %3
-  :
-  : "cc", "memory", "q0", "q1"
-  );
+void SobelToPlaneRow_NEON(const uint8_t* src_sobelx,
+                          const uint8_t* src_sobely,
+                          uint8_t* dst_y,
+                          int width) {
+  asm volatile(
+      // 16 pixel loop.
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0]!                    \n"  // load 16 sobelx.
+      "vld1.8     {q1}, [%1]!                    \n"  // load 16 sobely.
+      "subs       %3, %3, #16                    \n"  // 16 processed per loop.
+      "vqadd.u8   q0, q0, q1                     \n"  // add
+      "vst1.8     {q0}, [%2]!                    \n"  // store 16 pixels.
+      "bgt        1b                             \n"
+      : "+r"(src_sobelx),  // %0
+        "+r"(src_sobely),  // %1
+        "+r"(dst_y),       // %2
+        "+r"(width)        // %3
+      :
+      : "cc", "memory", "q0", "q1");
 }
 
 // Mixes Sobel X, Sobel Y and Sobel into ARGB.
@@ -2727,115 +2506,186 @@
 // R = Sobel X
 // G = Sobel
 // B = Sobel Y
-void SobelXYRow_NEON(const uint8* src_sobelx, const uint8* src_sobely,
-                     uint8* dst_argb, int width) {
-  asm volatile (
-    "vmov.u8    d3, #255                       \n"  // alpha
-    // 8 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {d2}, [%0]!                    \n"  // load 8 sobelx.
-    MEMACCESS(1)
-    "vld1.8     {d0}, [%1]!                    \n"  // load 8 sobely.
-    "subs       %3, %3, #8                     \n"  // 8 processed per loop.
-    "vqadd.u8   d1, d0, d2                     \n"  // add
-    MEMACCESS(2)
-    "vst4.8     {d0, d1, d2, d3}, [%2]!        \n"  // store 8 ARGB pixels.
-    "bgt        1b                             \n"
-  : "+r"(src_sobelx),  // %0
-    "+r"(src_sobely),  // %1
-    "+r"(dst_argb),    // %2
-    "+r"(width)        // %3
-  :
-  : "cc", "memory", "q0", "q1"
-  );
+void SobelXYRow_NEON(const uint8_t* src_sobelx,
+                     const uint8_t* src_sobely,
+                     uint8_t* dst_argb,
+                     int width) {
+  asm volatile(
+      "vmov.u8    d3, #255                       \n"  // alpha
+      // 8 pixel loop.
+      "1:                                        \n"
+      "vld1.8     {d2}, [%0]!                    \n"  // load 8 sobelx.
+      "vld1.8     {d0}, [%1]!                    \n"  // load 8 sobely.
+      "subs       %3, %3, #8                     \n"  // 8 processed per loop.
+      "vqadd.u8   d1, d0, d2                     \n"  // add
+      "vst4.8     {d0, d1, d2, d3}, [%2]!        \n"  // store 8 ARGB pixels.
+      "bgt        1b                             \n"
+      : "+r"(src_sobelx),  // %0
+        "+r"(src_sobely),  // %1
+        "+r"(dst_argb),    // %2
+        "+r"(width)        // %3
+      :
+      : "cc", "memory", "q0", "q1");
 }
 
 // SobelX as a matrix is
 // -1  0  1
 // -2  0  2
 // -1  0  1
-void SobelXRow_NEON(const uint8* src_y0, const uint8* src_y1,
-                    const uint8* src_y2, uint8* dst_sobelx, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {d0}, [%0],%5                  \n"  // top
-    MEMACCESS(0)
-    "vld1.8     {d1}, [%0],%6                  \n"
-    "vsubl.u8   q0, d0, d1                     \n"
-    MEMACCESS(1)
-    "vld1.8     {d2}, [%1],%5                  \n"  // center * 2
-    MEMACCESS(1)
-    "vld1.8     {d3}, [%1],%6                  \n"
-    "vsubl.u8   q1, d2, d3                     \n"
-    "vadd.s16   q0, q0, q1                     \n"
-    "vadd.s16   q0, q0, q1                     \n"
-    MEMACCESS(2)
-    "vld1.8     {d2}, [%2],%5                  \n"  // bottom
-    MEMACCESS(2)
-    "vld1.8     {d3}, [%2],%6                  \n"
-    "subs       %4, %4, #8                     \n"  // 8 pixels
-    "vsubl.u8   q1, d2, d3                     \n"
-    "vadd.s16   q0, q0, q1                     \n"
-    "vabs.s16   q0, q0                         \n"
-    "vqmovn.u16 d0, q0                         \n"
-    MEMACCESS(3)
-    "vst1.8     {d0}, [%3]!                    \n"  // store 8 sobelx
-    "bgt        1b                             \n"
-  : "+r"(src_y0),      // %0
-    "+r"(src_y1),      // %1
-    "+r"(src_y2),      // %2
-    "+r"(dst_sobelx),  // %3
-    "+r"(width)        // %4
-  : "r"(2),            // %5
-    "r"(6)             // %6
-  : "cc", "memory", "q0", "q1"  // Clobber List
-  );
+void SobelXRow_NEON(const uint8_t* src_y0,
+                    const uint8_t* src_y1,
+                    const uint8_t* src_y2,
+                    uint8_t* dst_sobelx,
+                    int width) {
+  asm volatile(
+      "1:                                        \n"
+      "vld1.8     {d0}, [%0],%5                  \n"  // top
+      "vld1.8     {d1}, [%0],%6                  \n"
+      "vsubl.u8   q0, d0, d1                     \n"
+      "vld1.8     {d2}, [%1],%5                  \n"  // center * 2
+      "vld1.8     {d3}, [%1],%6                  \n"
+      "vsubl.u8   q1, d2, d3                     \n"
+      "vadd.s16   q0, q0, q1                     \n"
+      "vadd.s16   q0, q0, q1                     \n"
+      "vld1.8     {d2}, [%2],%5                  \n"  // bottom
+      "vld1.8     {d3}, [%2],%6                  \n"
+      "subs       %4, %4, #8                     \n"  // 8 pixels
+      "vsubl.u8   q1, d2, d3                     \n"
+      "vadd.s16   q0, q0, q1                     \n"
+      "vabs.s16   q0, q0                         \n"
+      "vqmovn.u16 d0, q0                         \n"
+      "vst1.8     {d0}, [%3]!                    \n"  // store 8 sobelx
+      "bgt        1b                             \n"
+      : "+r"(src_y0),               // %0
+        "+r"(src_y1),               // %1
+        "+r"(src_y2),               // %2
+        "+r"(dst_sobelx),           // %3
+        "+r"(width)                 // %4
+      : "r"(2),                     // %5
+        "r"(6)                      // %6
+      : "cc", "memory", "q0", "q1"  // Clobber List
+      );
 }
 
 // SobelY as a matrix is
 // -1 -2 -1
 //  0  0  0
 //  1  2  1
-void SobelYRow_NEON(const uint8* src_y0, const uint8* src_y1,
-                    uint8* dst_sobely, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {d0}, [%0],%4                  \n"  // left
-    MEMACCESS(1)
-    "vld1.8     {d1}, [%1],%4                  \n"
-    "vsubl.u8   q0, d0, d1                     \n"
-    MEMACCESS(0)
-    "vld1.8     {d2}, [%0],%4                  \n"  // center * 2
-    MEMACCESS(1)
-    "vld1.8     {d3}, [%1],%4                  \n"
-    "vsubl.u8   q1, d2, d3                     \n"
-    "vadd.s16   q0, q0, q1                     \n"
-    "vadd.s16   q0, q0, q1                     \n"
-    MEMACCESS(0)
-    "vld1.8     {d2}, [%0],%5                  \n"  // right
-    MEMACCESS(1)
-    "vld1.8     {d3}, [%1],%5                  \n"
-    "subs       %3, %3, #8                     \n"  // 8 pixels
-    "vsubl.u8   q1, d2, d3                     \n"
-    "vadd.s16   q0, q0, q1                     \n"
-    "vabs.s16   q0, q0                         \n"
-    "vqmovn.u16 d0, q0                         \n"
-    MEMACCESS(2)
-    "vst1.8     {d0}, [%2]!                    \n"  // store 8 sobely
-    "bgt        1b                             \n"
-  : "+r"(src_y0),      // %0
-    "+r"(src_y1),      // %1
-    "+r"(dst_sobely),  // %2
-    "+r"(width)        // %3
-  : "r"(1),            // %4
-    "r"(6)             // %5
-  : "cc", "memory", "q0", "q1"  // Clobber List
-  );
+void SobelYRow_NEON(const uint8_t* src_y0,
+                    const uint8_t* src_y1,
+                    uint8_t* dst_sobely,
+                    int width) {
+  asm volatile(
+      "1:                                        \n"
+      "vld1.8     {d0}, [%0],%4                  \n"  // left
+      "vld1.8     {d1}, [%1],%4                  \n"
+      "vsubl.u8   q0, d0, d1                     \n"
+      "vld1.8     {d2}, [%0],%4                  \n"  // center * 2
+      "vld1.8     {d3}, [%1],%4                  \n"
+      "vsubl.u8   q1, d2, d3                     \n"
+      "vadd.s16   q0, q0, q1                     \n"
+      "vadd.s16   q0, q0, q1                     \n"
+      "vld1.8     {d2}, [%0],%5                  \n"  // right
+      "vld1.8     {d3}, [%1],%5                  \n"
+      "subs       %3, %3, #8                     \n"  // 8 pixels
+      "vsubl.u8   q1, d2, d3                     \n"
+      "vadd.s16   q0, q0, q1                     \n"
+      "vabs.s16   q0, q0                         \n"
+      "vqmovn.u16 d0, q0                         \n"
+      "vst1.8     {d0}, [%2]!                    \n"  // store 8 sobely
+      "bgt        1b                             \n"
+      : "+r"(src_y0),               // %0
+        "+r"(src_y1),               // %1
+        "+r"(dst_sobely),           // %2
+        "+r"(width)                 // %3
+      : "r"(1),                     // %4
+        "r"(6)                      // %5
+      : "cc", "memory", "q0", "q1"  // Clobber List
+      );
 }
-#endif  // defined(__ARM_NEON__) && !defined(__aarch64__)
+
+// %y passes a float as a scalar vector for vector * scalar multiply.
+// The register must be d0 to d15 and indexed with [0] or [1] to access
+// the first or second float of the d-reg.
+
+void HalfFloat1Row_NEON(const uint16_t* src,
+                        uint16_t* dst,
+                        float /*unused*/,
+                        int width) {
+  asm volatile(
+
+      "1:                                        \n"
+      "vld1.8     {q1}, [%0]!                    \n"  // load 8 shorts
+      "subs       %2, %2, #8                     \n"  // 8 pixels per loop
+      "vmovl.u16  q2, d2                         \n"  // 8 ints
+      "vmovl.u16  q3, d3                         \n"
+      "vcvt.f32.u32  q2, q2                      \n"  // 8 floats
+      "vcvt.f32.u32  q3, q3                      \n"
+      "vmul.f32   q2, q2, %y3                    \n"  // adjust exponent
+      "vmul.f32   q3, q3, %y3                    \n"
+      "vqshrn.u32 d2, q2, #13                    \n"  // isolate halffloat
+      "vqshrn.u32 d3, q3, #13                    \n"
+      "vst1.8     {q1}, [%1]!                    \n"
+      "bgt        1b                             \n"
+      : "+r"(src),              // %0
+        "+r"(dst),              // %1
+        "+r"(width)             // %2
+      : "w"(1.9259299444e-34f)  // %3
+      : "cc", "memory", "q1", "q2", "q3");
+}
+
+void HalfFloatRow_NEON(const uint16_t* src,
+                       uint16_t* dst,
+                       float scale,
+                       int width) {
+  asm volatile(
+
+      "1:                                        \n"
+      "vld1.8     {q1}, [%0]!                    \n"  // load 8 shorts
+      "subs       %2, %2, #8                     \n"  // 8 pixels per loop
+      "vmovl.u16  q2, d2                         \n"  // 8 ints
+      "vmovl.u16  q3, d3                         \n"
+      "vcvt.f32.u32  q2, q2                      \n"  // 8 floats
+      "vcvt.f32.u32  q3, q3                      \n"
+      "vmul.f32   q2, q2, %y3                    \n"  // adjust exponent
+      "vmul.f32   q3, q3, %y3                    \n"
+      "vqshrn.u32 d2, q2, #13                    \n"  // isolate halffloat
+      "vqshrn.u32 d3, q3, #13                    \n"
+      "vst1.8     {q1}, [%1]!                    \n"
+      "bgt        1b                             \n"
+      : "+r"(src),                      // %0
+        "+r"(dst),                      // %1
+        "+r"(width)                     // %2
+      : "w"(scale * 1.9259299444e-34f)  // %3
+      : "cc", "memory", "q1", "q2", "q3");
+}
+
+void ByteToFloatRow_NEON(const uint8_t* src,
+                         float* dst,
+                         float scale,
+                         int width) {
+  asm volatile(
+
+      "1:                                        \n"
+      "vld1.8     {d2}, [%0]!                    \n"  // load 8 bytes
+      "subs       %2, %2, #8                     \n"  // 8 pixels per loop
+      "vmovl.u8   q1, d2                         \n"  // 8 shorts
+      "vmovl.u16  q2, d2                         \n"  // 8 ints
+      "vmovl.u16  q3, d3                         \n"
+      "vcvt.f32.u32  q2, q2                      \n"  // 8 floats
+      "vcvt.f32.u32  q3, q3                      \n"
+      "vmul.f32   q2, q2, %y3                    \n"  // scale
+      "vmul.f32   q3, q3, %y3                    \n"
+      "vst1.8     {q2, q3}, [%1]!                \n"  // store 8 floats
+      "bgt        1b                             \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      : "w"(scale)   // %3
+      : "cc", "memory", "q1", "q2", "q3");
+}
+
+#endif  // !defined(LIBYUV_DISABLE_NEON) && defined(__ARM_NEON__)
 
 #ifdef __cplusplus
 }  // extern "C"
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_neon64.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_neon64.cc
index 6375d4f..24b4520 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_neon64.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_neon64.cc
@@ -19,118 +19,103 @@
 #if !defined(LIBYUV_DISABLE_NEON) && defined(__aarch64__)
 
 // Read 8 Y, 4 U and 4 V from 422
-#define READYUV422                                                             \
-    MEMACCESS(0)                                                               \
-    "ld1        {v0.8b}, [%0], #8              \n"                             \
-    MEMACCESS(1)                                                               \
-    "ld1        {v1.s}[0], [%1], #4            \n"                             \
-    MEMACCESS(2)                                                               \
-    "ld1        {v1.s}[1], [%2], #4            \n"
-
-// Read 8 Y, 2 U and 2 V from 422
-#define READYUV411                                                             \
-    MEMACCESS(0)                                                               \
-    "ld1        {v0.8b}, [%0], #8              \n"                             \
-    MEMACCESS(1)                                                               \
-    "ld1        {v2.h}[0], [%1], #2            \n"                             \
-    MEMACCESS(2)                                                               \
-    "ld1        {v2.h}[1], [%2], #2            \n"                             \
-    "zip1       v1.8b, v2.8b, v2.8b            \n"
+#define READYUV422                               \
+  "ld1        {v0.8b}, [%0], #8              \n" \
+  "ld1        {v1.s}[0], [%1], #4            \n" \
+  "ld1        {v1.s}[1], [%2], #4            \n"
 
 // Read 8 Y, 8 U and 8 V from 444
-#define READYUV444                                                             \
-    MEMACCESS(0)                                                               \
-    "ld1        {v0.8b}, [%0], #8              \n"                             \
-    MEMACCESS(1)                                                               \
-    "ld1        {v1.d}[0], [%1], #8            \n"                             \
-    MEMACCESS(2)                                                               \
-    "ld1        {v1.d}[1], [%2], #8            \n"                             \
-    "uaddlp     v1.8h, v1.16b                  \n"                             \
-    "rshrn      v1.8b, v1.8h, #1               \n"
+#define READYUV444                               \
+  "ld1        {v0.8b}, [%0], #8              \n" \
+  "ld1        {v1.d}[0], [%1], #8            \n" \
+  "ld1        {v1.d}[1], [%2], #8            \n" \
+  "uaddlp     v1.8h, v1.16b                  \n" \
+  "rshrn      v1.8b, v1.8h, #1               \n"
 
 // Read 8 Y, and set 4 U and 4 V to 128
-#define READYUV400                                                             \
-    MEMACCESS(0)                                                               \
-    "ld1        {v0.8b}, [%0], #8              \n"                             \
-    "movi       v1.8b , #128                   \n"
+#define READYUV400                               \
+  "ld1        {v0.8b}, [%0], #8              \n" \
+  "movi       v1.8b , #128                   \n"
 
 // Read 8 Y and 4 UV from NV12
-#define READNV12                                                               \
-    MEMACCESS(0)                                                               \
-    "ld1        {v0.8b}, [%0], #8              \n"                             \
-    MEMACCESS(1)                                                               \
-    "ld1        {v2.8b}, [%1], #8              \n"                             \
-    "uzp1       v1.8b, v2.8b, v2.8b            \n"                             \
-    "uzp2       v3.8b, v2.8b, v2.8b            \n"                             \
-    "ins        v1.s[1], v3.s[0]               \n"
+#define READNV12                                 \
+  "ld1        {v0.8b}, [%0], #8              \n" \
+  "ld1        {v2.8b}, [%1], #8              \n" \
+  "uzp1       v1.8b, v2.8b, v2.8b            \n" \
+  "uzp2       v3.8b, v2.8b, v2.8b            \n" \
+  "ins        v1.s[1], v3.s[0]               \n"
 
 // Read 8 Y and 4 VU from NV21
-#define READNV21                                                               \
-    MEMACCESS(0)                                                               \
-    "ld1        {v0.8b}, [%0], #8              \n"                             \
-    MEMACCESS(1)                                                               \
-    "ld1        {v2.8b}, [%1], #8              \n"                             \
-    "uzp1       v3.8b, v2.8b, v2.8b            \n"                             \
-    "uzp2       v1.8b, v2.8b, v2.8b            \n"                             \
-    "ins        v1.s[1], v3.s[0]               \n"
+#define READNV21                                 \
+  "ld1        {v0.8b}, [%0], #8              \n" \
+  "ld1        {v2.8b}, [%1], #8              \n" \
+  "uzp1       v3.8b, v2.8b, v2.8b            \n" \
+  "uzp2       v1.8b, v2.8b, v2.8b            \n" \
+  "ins        v1.s[1], v3.s[0]               \n"
 
 // Read 8 YUY2
-#define READYUY2                                                               \
-    MEMACCESS(0)                                                               \
-    "ld2        {v0.8b, v1.8b}, [%0], #16      \n"                             \
-    "uzp2       v3.8b, v1.8b, v1.8b            \n"                             \
-    "uzp1       v1.8b, v1.8b, v1.8b            \n"                             \
-    "ins        v1.s[1], v3.s[0]               \n"
+#define READYUY2                                 \
+  "ld2        {v0.8b, v1.8b}, [%0], #16      \n" \
+  "uzp2       v3.8b, v1.8b, v1.8b            \n" \
+  "uzp1       v1.8b, v1.8b, v1.8b            \n" \
+  "ins        v1.s[1], v3.s[0]               \n"
 
 // Read 8 UYVY
-#define READUYVY                                                               \
-    MEMACCESS(0)                                                               \
-    "ld2        {v2.8b, v3.8b}, [%0], #16      \n"                             \
-    "orr        v0.8b, v3.8b, v3.8b            \n"                             \
-    "uzp1       v1.8b, v2.8b, v2.8b            \n"                             \
-    "uzp2       v3.8b, v2.8b, v2.8b            \n"                             \
-    "ins        v1.s[1], v3.s[0]               \n"
+#define READUYVY                                 \
+  "ld2        {v2.8b, v3.8b}, [%0], #16      \n" \
+  "orr        v0.8b, v3.8b, v3.8b            \n" \
+  "uzp1       v1.8b, v2.8b, v2.8b            \n" \
+  "uzp2       v3.8b, v2.8b, v2.8b            \n" \
+  "ins        v1.s[1], v3.s[0]               \n"
 
-#define YUVTORGB_SETUP                                                         \
-    "ld1r       {v24.8h}, [%[kUVBiasBGR]], #2  \n"                             \
-    "ld1r       {v25.8h}, [%[kUVBiasBGR]], #2  \n"                             \
-    "ld1r       {v26.8h}, [%[kUVBiasBGR]]      \n"                             \
-    "ld1r       {v31.4s}, [%[kYToRgb]]         \n"                             \
-    "ld2        {v27.8h, v28.8h}, [%[kUVToRB]] \n"                             \
-    "ld2        {v29.8h, v30.8h}, [%[kUVToG]]  \n"
+#define YUVTORGB_SETUP                           \
+  "ld1r       {v24.8h}, [%[kUVBiasBGR]], #2  \n" \
+  "ld1r       {v25.8h}, [%[kUVBiasBGR]], #2  \n" \
+  "ld1r       {v26.8h}, [%[kUVBiasBGR]]      \n" \
+  "ld1r       {v31.4s}, [%[kYToRgb]]         \n" \
+  "ld2        {v27.8h, v28.8h}, [%[kUVToRB]] \n" \
+  "ld2        {v29.8h, v30.8h}, [%[kUVToG]]  \n"
 
-#define YUVTORGB(vR, vG, vB)                                                   \
-    "uxtl       v0.8h, v0.8b                   \n" /* Extract Y    */          \
-    "shll       v2.8h, v1.8b, #8               \n" /* Replicate UV */          \
-    "ushll2     v3.4s, v0.8h, #0               \n" /* Y */                     \
-    "ushll      v0.4s, v0.4h, #0               \n"                             \
-    "mul        v3.4s, v3.4s, v31.4s           \n"                             \
-    "mul        v0.4s, v0.4s, v31.4s           \n"                             \
-    "sqshrun    v0.4h, v0.4s, #16              \n"                             \
-    "sqshrun2   v0.8h, v3.4s, #16              \n" /* Y */                     \
-    "uaddw      v1.8h, v2.8h, v1.8b            \n" /* Replicate UV */          \
-    "mov        v2.d[0], v1.d[1]               \n" /* Extract V */             \
-    "uxtl       v2.8h, v2.8b                   \n"                             \
-    "uxtl       v1.8h, v1.8b                   \n" /* Extract U */             \
-    "mul        v3.8h, v1.8h, v27.8h           \n"                             \
-    "mul        v5.8h, v1.8h, v29.8h           \n"                             \
-    "mul        v6.8h, v2.8h, v30.8h           \n"                             \
-    "mul        v7.8h, v2.8h, v28.8h           \n"                             \
-    "sqadd      v6.8h, v6.8h, v5.8h            \n"                             \
-    "sqadd      " #vB ".8h, v24.8h, v0.8h      \n" /* B */                     \
-    "sqadd      " #vG ".8h, v25.8h, v0.8h      \n" /* G */                     \
-    "sqadd      " #vR ".8h, v26.8h, v0.8h      \n" /* R */                     \
-    "sqadd      " #vB ".8h, " #vB ".8h, v3.8h  \n" /* B */                     \
-    "sqsub      " #vG ".8h, " #vG ".8h, v6.8h  \n" /* G */                     \
-    "sqadd      " #vR ".8h, " #vR ".8h, v7.8h  \n" /* R */                     \
-    "sqshrun    " #vB ".8b, " #vB ".8h, #6     \n" /* B */                     \
-    "sqshrun    " #vG ".8b, " #vG ".8h, #6     \n" /* G */                     \
-    "sqshrun    " #vR ".8b, " #vR ".8h, #6     \n" /* R */                     \
+#define YUVTORGB(vR, vG, vB)                                        \
+  "uxtl       v0.8h, v0.8b                   \n" /* Extract Y    */ \
+  "shll       v2.8h, v1.8b, #8               \n" /* Replicate UV */ \
+  "ushll2     v3.4s, v0.8h, #0               \n" /* Y */            \
+  "ushll      v0.4s, v0.4h, #0               \n"                    \
+  "mul        v3.4s, v3.4s, v31.4s           \n"                    \
+  "mul        v0.4s, v0.4s, v31.4s           \n"                    \
+  "sqshrun    v0.4h, v0.4s, #16              \n"                    \
+  "sqshrun2   v0.8h, v3.4s, #16              \n" /* Y */            \
+  "uaddw      v1.8h, v2.8h, v1.8b            \n" /* Replicate UV */ \
+  "mov        v2.d[0], v1.d[1]               \n" /* Extract V */    \
+  "uxtl       v2.8h, v2.8b                   \n"                    \
+  "uxtl       v1.8h, v1.8b                   \n" /* Extract U */    \
+  "mul        v3.8h, v1.8h, v27.8h           \n"                    \
+  "mul        v5.8h, v1.8h, v29.8h           \n"                    \
+  "mul        v6.8h, v2.8h, v30.8h           \n"                    \
+  "mul        v7.8h, v2.8h, v28.8h           \n"                    \
+  "sqadd      v6.8h, v6.8h, v5.8h            \n"                    \
+  "sqadd      " #vB                                                 \
+  ".8h, v24.8h, v0.8h      \n" /* B */                              \
+  "sqadd      " #vG                                                 \
+  ".8h, v25.8h, v0.8h      \n" /* G */                              \
+  "sqadd      " #vR                                                 \
+  ".8h, v26.8h, v0.8h      \n" /* R */                              \
+  "sqadd      " #vB ".8h, " #vB                                     \
+  ".8h, v3.8h  \n" /* B */                                          \
+  "sqsub      " #vG ".8h, " #vG                                     \
+  ".8h, v6.8h  \n" /* G */                                          \
+  "sqadd      " #vR ".8h, " #vR                                     \
+  ".8h, v7.8h  \n" /* R */                                          \
+  "sqshrun    " #vB ".8b, " #vB                                     \
+  ".8h, #6     \n" /* B */                                          \
+  "sqshrun    " #vG ".8b, " #vG                                     \
+  ".8h, #6     \n"                               /* G */            \
+  "sqshrun    " #vR ".8b, " #vR ".8h, #6     \n" /* R */
 
-void I444ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
+void I444ToARGBRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width) {
   asm volatile (
@@ -140,7 +125,6 @@
     READYUV444
     YUVTORGB(v22, v21, v20)
     "subs       %w4, %w4, #8                   \n"
-    MEMACCESS(3)
     "st4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%3], #32 \n"
     "b.gt       1b                             \n"
     : "+r"(src_y),     // %0
@@ -157,10 +141,10 @@
   );
 }
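For reference, the `YUVTORGB` macro used above applies a fixed-point YUV-to-RGB matrix whose coefficients are loaded from `yuvconstants` by `YUVTORGB_SETUP`. A scalar sketch of the equivalent per-pixel math is below; the function name is hypothetical and the integer constants are assumed BT.601 video-range values, not necessarily the ones the NEON code reads from `YuvConstants`:

```c
#include <stdint.h>

static uint8_t Clamp255(int v) {
  return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

// Hypothetical scalar helper: converts one Y/U/V triple to a BGRA pixel,
// matching the v20=B, v21=G, v22=R, v23=A register layout of the NEON store.
// Constants are assumed BT.601 video-range (8.8 fixed point), an assumption.
static void I444ToARGBPixel(uint8_t y, uint8_t u, uint8_t v, uint8_t bgra[4]) {
  int c = ((int)y - 16) * 298;  // luma, biased and scaled
  int d = (int)u - 128;         // signed chroma U
  int e = (int)v - 128;         // signed chroma V
  bgra[0] = Clamp255((c + 516 * d + 128) >> 8);            // B
  bgra[1] = Clamp255((c - 100 * d - 208 * e + 128) >> 8);  // G
  bgra[2] = Clamp255((c + 409 * e + 128) >> 8);            // R
  bgra[3] = 255;                                           // A
}
```

The NEON version performs the same computation on 8 pixels per loop iteration, with `st4` interleaving the four channel registers into packed ARGB.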
 
-void I422ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
+void I422ToARGBRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width) {
   asm volatile (
@@ -170,7 +154,6 @@
     READYUV422
     YUVTORGB(v22, v21, v20)
     "subs       %w4, %w4, #8                   \n"
-    MEMACCESS(3)
     "st4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%3], #32     \n"
     "b.gt       1b                             \n"
     : "+r"(src_y),     // %0
@@ -187,11 +170,11 @@
   );
 }
 
-void I422AlphaToARGBRow_NEON(const uint8* src_y,
-                             const uint8* src_u,
-                             const uint8* src_v,
-                             const uint8* src_a,
-                             uint8* dst_argb,
+void I422AlphaToARGBRow_NEON(const uint8_t* src_y,
+                             const uint8_t* src_u,
+                             const uint8_t* src_v,
+                             const uint8_t* src_a,
+                             uint8_t* dst_argb,
                              const struct YuvConstants* yuvconstants,
                              int width) {
   asm volatile (
@@ -199,10 +182,8 @@
   "1:                                          \n"
     READYUV422
     YUVTORGB(v22, v21, v20)
-    MEMACCESS(3)
     "ld1        {v23.8b}, [%3], #8             \n"
     "subs       %w5, %w5, #8                   \n"
-    MEMACCESS(4)
     "st4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%4], #32     \n"
     "b.gt       1b                             \n"
     : "+r"(src_y),     // %0
@@ -220,40 +201,10 @@
   );
 }
 
-void I411ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_argb,
-                        const struct YuvConstants* yuvconstants,
-                        int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-    "movi       v23.8b, #255                   \n" /* A */
-  "1:                                          \n"
-    READYUV411
-    YUVTORGB(v22, v21, v20)
-    "subs       %w4, %w4, #8                   \n"
-    MEMACCESS(3)
-    "st4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%3], #32     \n"
-    "b.gt       1b                             \n"
-    : "+r"(src_y),     // %0
-      "+r"(src_u),     // %1
-      "+r"(src_v),     // %2
-      "+r"(dst_argb),  // %3
-      "+r"(width)      // %4
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v20",
-      "v21", "v22", "v23", "v24", "v25", "v26", "v27", "v28", "v29", "v30"
-  );
-}
-
-void I422ToRGBARow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_rgba,
+void I422ToRGBARow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_rgba,
                         const struct YuvConstants* yuvconstants,
                         int width) {
   asm volatile (
@@ -263,7 +214,6 @@
     READYUV422
     YUVTORGB(v23, v22, v21)
     "subs       %w4, %w4, #8                   \n"
-    MEMACCESS(3)
     "st4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%3], #32     \n"
     "b.gt       1b                             \n"
     : "+r"(src_y),     // %0
@@ -280,10 +230,10 @@
   );
 }
 
-void I422ToRGB24Row_NEON(const uint8* src_y,
-                         const uint8* src_u,
-                         const uint8* src_v,
-                         uint8* dst_rgb24,
+void I422ToRGB24Row_NEON(const uint8_t* src_y,
+                         const uint8_t* src_u,
+                         const uint8_t* src_v,
+                         uint8_t* dst_rgb24,
                          const struct YuvConstants* yuvconstants,
                          int width) {
   asm volatile (
@@ -292,7 +242,6 @@
     READYUV422
     YUVTORGB(v22, v21, v20)
     "subs       %w4, %w4, #8                   \n"
-    MEMACCESS(3)
     "st3        {v20.8b,v21.8b,v22.8b}, [%3], #24     \n"
     "b.gt       1b                             \n"
     : "+r"(src_y),     // %0
@@ -309,97 +258,91 @@
   );
 }
 
-#define ARGBTORGB565                                                           \
-    "shll       v0.8h,  v22.8b, #8             \n"  /* R                    */ \
-    "shll       v21.8h, v21.8b, #8             \n"  /* G                    */ \
-    "shll       v20.8h, v20.8b, #8             \n"  /* B                    */ \
-    "sri        v0.8h,  v21.8h, #5             \n"  /* RG                   */ \
-    "sri        v0.8h,  v20.8h, #11            \n"  /* RGB                  */
+#define ARGBTORGB565                                                        \
+  "shll       v0.8h,  v22.8b, #8             \n" /* R                    */ \
+  "shll       v21.8h, v21.8b, #8             \n" /* G                    */ \
+  "shll       v20.8h, v20.8b, #8             \n" /* B                    */ \
+  "sri        v0.8h,  v21.8h, #5             \n" /* RG                   */ \
+  "sri        v0.8h,  v20.8h, #11            \n" /* RGB                  */
 
-void I422ToRGB565Row_NEON(const uint8* src_y,
-                          const uint8* src_u,
-                          const uint8* src_v,
-                          uint8* dst_rgb565,
+void I422ToRGB565Row_NEON(const uint8_t* src_y,
+                          const uint8_t* src_u,
+                          const uint8_t* src_v,
+                          uint8_t* dst_rgb565,
                           const struct YuvConstants* yuvconstants,
                           int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-  "1:                                          \n"
-    READYUV422
-    YUVTORGB(v22, v21, v20)
-    "subs       %w4, %w4, #8                   \n"
-    ARGBTORGB565
-    MEMACCESS(3)
-    "st1        {v0.8h}, [%3], #16             \n"  // store 8 pixels RGB565.
-    "b.gt       1b                             \n"
-    : "+r"(src_y),    // %0
-      "+r"(src_u),    // %1
-      "+r"(src_v),    // %2
-      "+r"(dst_rgb565),  // %3
-      "+r"(width)     // %4
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v20",
-      "v21", "v22", "v23", "v24", "v25", "v26", "v27", "v28", "v29", "v30"
-  );
+  asm volatile(
+      YUVTORGB_SETUP
+      "1:                                        \n" READYUV422 YUVTORGB(
+          v22, v21,
+          v20) "subs       %w4, %w4, #8                   \n" ARGBTORGB565
+               "st1        {v0.8h}, [%3], #16             \n"  // store 8 pixels
+                                                               // RGB565.
+               "b.gt       1b                             \n"
+      : "+r"(src_y),       // %0
+        "+r"(src_u),       // %1
+        "+r"(src_v),       // %2
+        "+r"(dst_rgb565),  // %3
+        "+r"(width)        // %4
+      : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+        [kUVToG] "r"(&yuvconstants->kUVToG),
+        [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+        [kYToRgb] "r"(&yuvconstants->kYToRgb)
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v20",
+        "v21", "v22", "v23", "v24", "v25", "v26", "v27", "v28", "v29", "v30");
 }
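The `ARGBTORGB565` macro packs each pixel with a `shll`/`sri` (shift-left-long, shift-right-and-insert) sequence. A scalar sketch of the resulting bit layout, with a hypothetical helper name:

```c
#include <stdint.h>

// Hypothetical scalar equivalent of ARGBTORGB565: R in bits 15:11,
// G in bits 10:5, B in bits 4:0 of a 16-bit pixel.
static uint16_t PackRGB565(uint8_t r, uint8_t g, uint8_t b) {
  return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```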
 
-#define ARGBTOARGB1555                                                         \
-    "shll       v0.8h,  v23.8b, #8             \n"  /* A                    */ \
-    "shll       v22.8h, v22.8b, #8             \n"  /* R                    */ \
-    "shll       v21.8h, v21.8b, #8             \n"  /* G                    */ \
-    "shll       v20.8h, v20.8b, #8             \n"  /* B                    */ \
-    "sri        v0.8h,  v22.8h, #1             \n"  /* AR                   */ \
-    "sri        v0.8h,  v21.8h, #6             \n"  /* ARG                  */ \
-    "sri        v0.8h,  v20.8h, #11            \n"  /* ARGB                 */
+#define ARGBTOARGB1555                                                      \
+  "shll       v0.8h,  v23.8b, #8             \n" /* A                    */ \
+  "shll       v22.8h, v22.8b, #8             \n" /* R                    */ \
+  "shll       v21.8h, v21.8b, #8             \n" /* G                    */ \
+  "shll       v20.8h, v20.8b, #8             \n" /* B                    */ \
+  "sri        v0.8h,  v22.8h, #1             \n" /* AR                   */ \
+  "sri        v0.8h,  v21.8h, #6             \n" /* ARG                  */ \
+  "sri        v0.8h,  v20.8h, #11            \n" /* ARGB                 */
 
-void I422ToARGB1555Row_NEON(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb1555,
+void I422ToARGB1555Row_NEON(const uint8_t* src_y,
+                            const uint8_t* src_u,
+                            const uint8_t* src_v,
+                            uint8_t* dst_argb1555,
                             const struct YuvConstants* yuvconstants,
                             int width) {
-  asm volatile (
-    YUVTORGB_SETUP
-    "movi       v23.8b, #255                   \n"
-  "1:                                          \n"
-    READYUV422
-    YUVTORGB(v22, v21, v20)
-    "subs       %w4, %w4, #8                   \n"
-    ARGBTOARGB1555
-    MEMACCESS(3)
-    "st1        {v0.8h}, [%3], #16             \n"  // store 8 pixels RGB565.
-    "b.gt       1b                             \n"
-    : "+r"(src_y),    // %0
-      "+r"(src_u),    // %1
-      "+r"(src_v),    // %2
-      "+r"(dst_argb1555),  // %3
-      "+r"(width)     // %4
-    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
-      [kUVToG]"r"(&yuvconstants->kUVToG),
-      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
-      [kYToRgb]"r"(&yuvconstants->kYToRgb)
-    : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v20",
-      "v21", "v22", "v23", "v24", "v25", "v26", "v27", "v28", "v29", "v30"
-  );
+  asm volatile(
+      YUVTORGB_SETUP
+      "movi       v23.8b, #255                   \n"
+      "1:                                        \n" READYUV422 YUVTORGB(
+          v22, v21,
+          v20) "subs       %w4, %w4, #8                   \n" ARGBTOARGB1555
+               "st1        {v0.8h}, [%3], #16             \n"  // store 8 pixels
+                                                               // ARGB1555.
+               "b.gt       1b                             \n"
+      : "+r"(src_y),         // %0
+        "+r"(src_u),         // %1
+        "+r"(src_v),         // %2
+        "+r"(dst_argb1555),  // %3
+        "+r"(width)          // %4
+      : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+        [kUVToG] "r"(&yuvconstants->kUVToG),
+        [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+        [kYToRgb] "r"(&yuvconstants->kYToRgb)
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v20",
+        "v21", "v22", "v23", "v24", "v25", "v26", "v27", "v28", "v29", "v30");
 }
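`ARGBTOARGB1555` uses the same `shll`/`sri` idiom, starting from the alpha channel so the alpha bit lands in bit 15. A scalar sketch with a hypothetical helper name:

```c
#include <stdint.h>

// Hypothetical scalar equivalent of ARGBTOARGB1555: A in bit 15,
// R in bits 14:10, G in bits 9:5, B in bits 4:0.
static uint16_t PackARGB1555(uint8_t a, uint8_t r, uint8_t g, uint8_t b) {
  return (uint16_t)(((a >> 7) << 15) | ((r >> 3) << 10) | ((g >> 3) << 5) |
                    (b >> 3));
}
```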
 
-#define ARGBTOARGB4444                                                         \
-    /* Input v20.8b<=B, v21.8b<=G, v22.8b<=R, v23.8b<=A, v4.8b<=0x0f        */ \
-    "ushr       v20.8b, v20.8b, #4             \n"  /* B                    */ \
-    "bic        v21.8b, v21.8b, v4.8b          \n"  /* G                    */ \
-    "ushr       v22.8b, v22.8b, #4             \n"  /* R                    */ \
-    "bic        v23.8b, v23.8b, v4.8b          \n"  /* A                    */ \
-    "orr        v0.8b,  v20.8b, v21.8b         \n"  /* BG                   */ \
-    "orr        v1.8b,  v22.8b, v23.8b         \n"  /* RA                   */ \
-    "zip1       v0.16b, v0.16b, v1.16b         \n"  /* BGRA                 */
+#define ARGBTOARGB4444                                                       \
+  /* Input v20.8b<=B, v21.8b<=G, v22.8b<=R, v23.8b<=A, v4.8b<=0x0f        */ \
+  "ushr       v20.8b, v20.8b, #4             \n" /* B                    */  \
+  "bic        v21.8b, v21.8b, v4.8b          \n" /* G                    */  \
+  "ushr       v22.8b, v22.8b, #4             \n" /* R                    */  \
+  "bic        v23.8b, v23.8b, v4.8b          \n" /* A                    */  \
+  "orr        v0.8b,  v20.8b, v21.8b         \n" /* BG                   */  \
+  "orr        v1.8b,  v22.8b, v23.8b         \n" /* RA                   */  \
+  "zip1       v0.16b, v0.16b, v1.16b         \n" /* BGRA                 */
 
-void I422ToARGB4444Row_NEON(const uint8* src_y,
-                            const uint8* src_u,
-                            const uint8* src_v,
-                            uint8* dst_argb4444,
+void I422ToARGB4444Row_NEON(const uint8_t* src_y,
+                            const uint8_t* src_u,
+                            const uint8_t* src_v,
+                            uint8_t* dst_argb4444,
                             const struct YuvConstants* yuvconstants,
                             int width) {
   asm volatile (
@@ -411,7 +354,6 @@
     "subs       %w4, %w4, #8                   \n"
     "movi       v23.8b, #255                   \n"
     ARGBTOARGB4444
-    MEMACCESS(3)
     "st1        {v0.8h}, [%3], #16             \n"  // store 8 pixels ARGB4444.
     "b.gt       1b                             \n"
     : "+r"(src_y),    // %0
@@ -428,9 +370,7 @@
   );
 }
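`ARGBTOARGB4444` takes a different route: it masks the channels to nibbles, `orr`s them into BG and RA bytes, and `zip1` interleaves those bytes. Because the result is stored little-endian, the BG byte is the low byte of each 16-bit pixel. A scalar sketch (helper name hypothetical):

```c
#include <stdint.h>

// Hypothetical scalar equivalent of ARGBTOARGB4444 as a little-endian
// 16-bit value: low byte = (B>>4)|(G&0xF0), high byte = (R>>4)|(A&0xF0).
static uint16_t PackARGB4444(uint8_t a, uint8_t r, uint8_t g, uint8_t b) {
  return (uint16_t)((b >> 4) | (g & 0xF0) |
                    (((r >> 4) | (a & 0xF0)) << 8));
}
```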
 
-void I400ToARGBRow_NEON(const uint8* src_y,
-                        uint8* dst_argb,
-                        int width) {
+void I400ToARGBRow_NEON(const uint8_t* src_y, uint8_t* dst_argb, int width) {
   asm volatile (
     YUVTORGB_SETUP
     "movi       v23.8b, #255                   \n"
@@ -438,7 +378,6 @@
     READYUV400
     YUVTORGB(v22, v21, v20)
     "subs       %w2, %w2, #8                   \n"
-    MEMACCESS(1)
     "st4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%1], #32     \n"
     "b.gt       1b                             \n"
     : "+r"(src_y),     // %0
@@ -453,31 +392,26 @@
   );
 }
 
-void J400ToARGBRow_NEON(const uint8* src_y,
-                        uint8* dst_argb,
-                        int width) {
-  asm volatile (
-    "movi       v23.8b, #255                   \n"
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v20.8b}, [%0], #8             \n"
-    "orr        v21.8b, v20.8b, v20.8b         \n"
-    "orr        v22.8b, v20.8b, v20.8b         \n"
-    "subs       %w2, %w2, #8                   \n"
-    MEMACCESS(1)
-    "st4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%1], #32     \n"
-    "b.gt       1b                             \n"
-    : "+r"(src_y),     // %0
-      "+r"(dst_argb),  // %1
-      "+r"(width)      // %2
-    :
-    : "cc", "memory", "v20", "v21", "v22", "v23"
-  );
+void J400ToARGBRow_NEON(const uint8_t* src_y, uint8_t* dst_argb, int width) {
+  asm volatile(
+      "movi       v23.8b, #255                   \n"
+      "1:                                        \n"
+      "ld1        {v20.8b}, [%0], #8             \n"
+      "orr        v21.8b, v20.8b, v20.8b         \n"
+      "orr        v22.8b, v20.8b, v20.8b         \n"
+      "subs       %w2, %w2, #8                   \n"
+      "st4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%1], #32     \n"
+      "b.gt       1b                             \n"
+      : "+r"(src_y),     // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "v20", "v21", "v22", "v23");
 }
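Unlike `I400ToARGBRow_NEON`, which runs Y through the full `YUVTORGB` matrix, `J400ToARGBRow_NEON` simply replicates each full-range Y byte into B, G, and R (the two `orr` copies) and sets alpha to 255. A scalar sketch, with a hypothetical `_scalar` suffix to distinguish it from the library's own C fallback:

```c
#include <stdint.h>

// Hypothetical scalar equivalent of J400ToARGBRow_NEON: grey-scale
// expansion of full-range Y into BGRA with opaque alpha.
static void J400ToARGBRow_scalar(const uint8_t* src_y, uint8_t* dst_argb,
                                 int width) {
  for (int i = 0; i < width; ++i) {
    uint8_t y = src_y[i];
    dst_argb[4 * i + 0] = y;    // B
    dst_argb[4 * i + 1] = y;    // G
    dst_argb[4 * i + 2] = y;    // R
    dst_argb[4 * i + 3] = 255;  // A
  }
}
```

The NEON version does the same for 8 pixels per iteration, using `st4` to interleave the four planes.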
 
-void NV12ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_uv,
-                        uint8* dst_argb,
+void NV12ToARGBRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_uv,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width) {
   asm volatile (
@@ -487,7 +421,6 @@
     READNV12
     YUVTORGB(v22, v21, v20)
     "subs       %w3, %w3, #8                   \n"
-    MEMACCESS(2)
     "st4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%2], #32     \n"
     "b.gt       1b                             \n"
     : "+r"(src_y),     // %0
@@ -503,9 +436,9 @@
   );
 }
 
-void NV21ToARGBRow_NEON(const uint8* src_y,
-                        const uint8* src_vu,
-                        uint8* dst_argb,
+void NV21ToARGBRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_vu,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width) {
   asm volatile (
@@ -515,7 +448,6 @@
     READNV21
     YUVTORGB(v22, v21, v20)
     "subs       %w3, %w3, #8                   \n"
-    MEMACCESS(2)
     "st4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%2], #32     \n"
     "b.gt       1b                             \n"
     : "+r"(src_y),     // %0
@@ -531,24 +463,22 @@
   );
 }
 
-void NV12ToRGB565Row_NEON(const uint8* src_y,
-                          const uint8* src_uv,
-                          uint8* dst_rgb565,
-                          const struct YuvConstants* yuvconstants,
-                          int width) {
+void NV12ToRGB24Row_NEON(const uint8_t* src_y,
+                         const uint8_t* src_uv,
+                         uint8_t* dst_rgb24,
+                         const struct YuvConstants* yuvconstants,
+                         int width) {
   asm volatile (
     YUVTORGB_SETUP
   "1:                                          \n"
     READNV12
     YUVTORGB(v22, v21, v20)
     "subs       %w3, %w3, #8                   \n"
-    ARGBTORGB565
-    MEMACCESS(2)
-    "st1        {v0.8h}, [%2], 16              \n"  // store 8 pixels RGB565.
+    "st3        {v20.8b,v21.8b,v22.8b}, [%2], #24     \n"
     "b.gt       1b                             \n"
     : "+r"(src_y),     // %0
       "+r"(src_uv),    // %1
-      "+r"(dst_rgb565),  // %2
+      "+r"(dst_rgb24),  // %2
       "+r"(width)      // %3
     : [kUVToRB]"r"(&yuvconstants->kUVToRB),
       [kUVToG]"r"(&yuvconstants->kUVToG),
@@ -559,8 +489,59 @@
   );
 }
 
-void YUY2ToARGBRow_NEON(const uint8* src_yuy2,
-                        uint8* dst_argb,
+void NV21ToRGB24Row_NEON(const uint8_t* src_y,
+                         const uint8_t* src_vu,
+                         uint8_t* dst_rgb24,
+                         const struct YuvConstants* yuvconstants,
+                         int width) {
+  asm volatile (
+    YUVTORGB_SETUP
+  "1:                                          \n"
+    READNV21
+    YUVTORGB(v22, v21, v20)
+    "subs       %w3, %w3, #8                   \n"
+    "st3        {v20.8b,v21.8b,v22.8b}, [%2], #24     \n"
+    "b.gt       1b                             \n"
+    : "+r"(src_y),     // %0
+      "+r"(src_vu),    // %1
+      "+r"(dst_rgb24),  // %2
+      "+r"(width)      // %3
+    : [kUVToRB]"r"(&yuvconstants->kUVToRB),
+      [kUVToG]"r"(&yuvconstants->kUVToG),
+      [kUVBiasBGR]"r"(&yuvconstants->kUVBiasBGR),
+      [kYToRgb]"r"(&yuvconstants->kYToRgb)
+    : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v20",
+      "v21", "v22", "v23", "v24", "v25", "v26", "v27", "v28", "v29", "v30"
+  );
+}
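The only difference between the NV12 and NV21 row functions is the byte order of the interleaved chroma plane, hidden inside `READNV12` versus `READNV21`. A sketch with hypothetical helper names:

```c
#include <stdint.h>

// Hypothetical helpers illustrating the chroma byte order assumed by
// READNV12 vs. READNV21 for one UV (or VU) byte pair.
static void ReadNV12Pair(const uint8_t* src_uv, uint8_t* u, uint8_t* v) {
  *u = src_uv[0];  // NV12: U byte first
  *v = src_uv[1];
}

static void ReadNV21Pair(const uint8_t* src_vu, uint8_t* u, uint8_t* v) {
  *v = src_vu[0];  // NV21: V byte first
  *u = src_vu[1];
}
```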
+
+void NV12ToRGB565Row_NEON(const uint8_t* src_y,
+                          const uint8_t* src_uv,
+                          uint8_t* dst_rgb565,
+                          const struct YuvConstants* yuvconstants,
+                          int width) {
+  asm volatile(
+      YUVTORGB_SETUP
+      "1:                                        \n" READNV12 YUVTORGB(
+          v22, v21,
+          v20) "subs       %w3, %w3, #8                   \n" ARGBTORGB565
+               "st1        {v0.8h}, [%2], 16              \n"  // store 8 pixels
+                                                               // RGB565.
+               "b.gt       1b                             \n"
+      : "+r"(src_y),       // %0
+        "+r"(src_uv),      // %1
+        "+r"(dst_rgb565),  // %2
+        "+r"(width)        // %3
+      : [kUVToRB] "r"(&yuvconstants->kUVToRB),
+        [kUVToG] "r"(&yuvconstants->kUVToG),
+        [kUVBiasBGR] "r"(&yuvconstants->kUVBiasBGR),
+        [kYToRgb] "r"(&yuvconstants->kYToRgb)
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v20",
+        "v21", "v22", "v23", "v24", "v25", "v26", "v27", "v28", "v29", "v30");
+}
+
+void YUY2ToARGBRow_NEON(const uint8_t* src_yuy2,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width) {
   asm volatile (
@@ -570,7 +551,6 @@
     READYUY2
     YUVTORGB(v22, v21, v20)
     "subs       %w2, %w2, #8                   \n"
-    MEMACCESS(1)
     "st4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%1], #32      \n"
     "b.gt       1b                             \n"
     : "+r"(src_yuy2),  // %0
@@ -585,8 +565,8 @@
   );
 }
 
-void UYVYToARGBRow_NEON(const uint8* src_uyvy,
-                        uint8* dst_argb,
+void UYVYToARGBRow_NEON(const uint8_t* src_uyvy,
+                        uint8_t* dst_argb,
                         const struct YuvConstants* yuvconstants,
                         int width) {
   asm volatile (
@@ -596,7 +576,6 @@
     READUYVY
     YUVTORGB(v22, v21, v20)
     "subs       %w2, %w2, #8                   \n"
-    MEMACCESS(1)
     "st4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%1], 32      \n"
     "b.gt       1b                             \n"
     : "+r"(src_uyvy),  // %0
@@ -612,869 +591,819 @@
 }
 
// Reads 16 pairs of UV and writes even values to dst_u and odd to dst_v.
-void SplitUVRow_NEON(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
+void SplitUVRow_NEON(const uint8_t* src_uv,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
                      int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld2        {v0.16b,v1.16b}, [%0], #32     \n"  // load 16 pairs of UV
-    "subs       %w3, %w3, #16                  \n"  // 16 processed per loop
-    MEMACCESS(1)
-    "st1        {v0.16b}, [%1], #16            \n"  // store U
-    MEMACCESS(2)
-    "st1        {v1.16b}, [%2], #16            \n"  // store V
-    "b.gt       1b                             \n"
-    : "+r"(src_uv),  // %0
-      "+r"(dst_u),   // %1
-      "+r"(dst_v),   // %2
-      "+r"(width)    // %3  // Output registers
-    :                       // Input registers
-    : "cc", "memory", "v0", "v1"  // Clobber List
-  );
+  asm volatile(
+      "1:                                        \n"
+      "ld2        {v0.16b,v1.16b}, [%0], #32     \n"  // load 16 pairs of UV
+      "subs       %w3, %w3, #16                  \n"  // 16 processed per loop
+      "st1        {v0.16b}, [%1], #16            \n"  // store U
+      "st1        {v1.16b}, [%2], #16            \n"  // store V
+      "b.gt       1b                             \n"
+      : "+r"(src_uv),               // %0
+        "+r"(dst_u),                // %1
+        "+r"(dst_v),                // %2
+        "+r"(width)                 // %3  // Output registers
+      :                             // Input registers
+      : "cc", "memory", "v0", "v1"  // Clobber List
+      );
 }
 
 // Reads 16 U's and V's and writes out 16 pairs of UV.
-void MergeUVRow_NEON(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
+void MergeUVRow_NEON(const uint8_t* src_u,
+                     const uint8_t* src_v,
+                     uint8_t* dst_uv,
                      int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"  // load U
-    MEMACCESS(1)
-    "ld1        {v1.16b}, [%1], #16            \n"  // load V
-    "subs       %w3, %w3, #16                  \n"  // 16 processed per loop
-    MEMACCESS(2)
-    "st2        {v0.16b,v1.16b}, [%2], #32     \n"  // store 16 pairs of UV
-    "b.gt       1b                             \n"
-    :
-      "+r"(src_u),   // %0
-      "+r"(src_v),   // %1
-      "+r"(dst_uv),  // %2
-      "+r"(width)    // %3  // Output registers
-    :                       // Input registers
-    : "cc", "memory", "v0", "v1"  // Clobber List
-  );
+  asm volatile(
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], #16            \n"  // load U
+      "ld1        {v1.16b}, [%1], #16            \n"  // load V
+      "subs       %w3, %w3, #16                  \n"  // 16 processed per loop
+      "st2        {v0.16b,v1.16b}, [%2], #32     \n"  // store 16 pairs of UV
+      "b.gt       1b                             \n"
+      : "+r"(src_u),                // %0
+        "+r"(src_v),                // %1
+        "+r"(dst_uv),               // %2
+        "+r"(width)                 // %3  // Output registers
+      :                             // Input registers
+      : "cc", "memory", "v0", "v1"  // Clobber List
+      );
 }
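`SplitUVRow_NEON` and `MergeUVRow_NEON` are exact inverses: `ld2`/`st2` de-interleave and re-interleave the packed UV plane 16 pairs at a time. Scalar sketches (names carry a hypothetical `_scalar` suffix):

```c
#include <stdint.h>

// Hypothetical scalar equivalents of the ld2/st2 de-interleave and
// st2 re-interleave performed by SplitUVRow_NEON / MergeUVRow_NEON.
static void SplitUVRow_scalar(const uint8_t* src_uv, uint8_t* dst_u,
                              uint8_t* dst_v, int width) {
  for (int i = 0; i < width; ++i) {
    dst_u[i] = src_uv[2 * i];      // even bytes -> U
    dst_v[i] = src_uv[2 * i + 1];  // odd bytes  -> V
  }
}

static void MergeUVRow_scalar(const uint8_t* src_u, const uint8_t* src_v,
                              uint8_t* dst_uv, int width) {
  for (int i = 0; i < width; ++i) {
    dst_uv[2 * i] = src_u[i];
    dst_uv[2 * i + 1] = src_v[i];
  }
}
```

Splitting and then merging reproduces the original packed plane.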
 
-// Copy multiple of 32.  vld4.8  allow unaligned and is fastest on a15.
-void CopyRow_NEON(const uint8* src, uint8* dst, int count) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32       \n"  // load 32
-    "subs       %w2, %w2, #32                  \n"  // 32 processed per loop
-    MEMACCESS(1)
-    "st1        {v0.8b,v1.8b,v2.8b,v3.8b}, [%1], #32       \n"  // store 32
-    "b.gt       1b                             \n"
-  : "+r"(src),   // %0
-    "+r"(dst),   // %1
-    "+r"(count)  // %2  // Output registers
-  :                     // Input registers
-  : "cc", "memory", "v0", "v1", "v2", "v3"  // Clobber List
-  );
-}
-
-// SetRow writes 'count' bytes using an 8 bit value repeated.
-void SetRow_NEON(uint8* dst, uint8 v8, int count) {
-  asm volatile (
-    "dup        v0.16b, %w2                    \n"  // duplicate 16 bytes
-  "1:                                          \n"
-    "subs       %w1, %w1, #16                  \n"  // 16 bytes per loop
-    MEMACCESS(0)
-    "st1        {v0.16b}, [%0], #16            \n"  // store
-    "b.gt       1b                             \n"
-  : "+r"(dst),   // %0
-    "+r"(count)  // %1
-  : "r"(v8)      // %2
-  : "cc", "memory", "v0"
-  );
-}
-
-void ARGBSetRow_NEON(uint8* dst, uint32 v32, int count) {
-  asm volatile (
-    "dup        v0.4s, %w2                     \n"  // duplicate 4 ints
-  "1:                                          \n"
-    "subs       %w1, %w1, #4                   \n"  // 4 ints per loop
-    MEMACCESS(0)
-    "st1        {v0.16b}, [%0], #16            \n"  // store
-    "b.gt       1b                             \n"
-  : "+r"(dst),   // %0
-    "+r"(count)  // %1
-  : "r"(v32)     // %2
-  : "cc", "memory", "v0"
-  );
-}
-
-void MirrorRow_NEON(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-    // Start at end of source row.
-    "add        %0, %0, %w2, sxtw              \n"
-    "sub        %0, %0, #16                    \n"
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], %3             \n"  // src -= 16
-    "subs       %w2, %w2, #16                  \n"  // 16 pixels per loop.
-    "rev64      v0.16b, v0.16b                 \n"
-    MEMACCESS(1)
-    "st1        {v0.D}[1], [%1], #8            \n"  // dst += 16
-    MEMACCESS(1)
-    "st1        {v0.D}[0], [%1], #8            \n"
-    "b.gt       1b                             \n"
-  : "+r"(src),   // %0
-    "+r"(dst),   // %1
-    "+r"(width)  // %2
-  : "r"((ptrdiff_t)-16)    // %3
-  : "cc", "memory", "v0"
-  );
-}
-
-void MirrorUVRow_NEON(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
+// Reads 16 packed RGB and writes to planar dst_r, dst_g, dst_b.
+void SplitRGBRow_NEON(const uint8_t* src_rgb,
+                      uint8_t* dst_r,
+                      uint8_t* dst_g,
+                      uint8_t* dst_b,
                       int width) {
-  asm volatile (
-    // Start at end of source row.
-    "add        %0, %0, %w3, sxtw #1           \n"
-    "sub        %0, %0, #16                    \n"
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld2        {v0.8b, v1.8b}, [%0], %4       \n"  // src -= 16
-    "subs       %w3, %w3, #8                   \n"  // 8 pixels per loop.
-    "rev64      v0.8b, v0.8b                   \n"
-    "rev64      v1.8b, v1.8b                   \n"
-    MEMACCESS(1)
-    "st1        {v0.8b}, [%1], #8              \n"  // dst += 8
-    MEMACCESS(2)
-    "st1        {v1.8b}, [%2], #8              \n"
-    "b.gt       1b                             \n"
-  : "+r"(src_uv),  // %0
-    "+r"(dst_u),   // %1
-    "+r"(dst_v),   // %2
-    "+r"(width)    // %3
-  : "r"((ptrdiff_t)-16)      // %4
-  : "cc", "memory", "v0", "v1"
-  );
+  asm volatile(
+      "1:                                        \n"
+      "ld3        {v0.16b,v1.16b,v2.16b}, [%0], #48 \n"  // load 16 RGB
+      "subs       %w4, %w4, #16                  \n"  // 16 processed per loop
+      "st1        {v0.16b}, [%1], #16            \n"  // store R
+      "st1        {v1.16b}, [%2], #16            \n"  // store G
+      "st1        {v2.16b}, [%3], #16            \n"  // store B
+      "b.gt       1b                             \n"
+      : "+r"(src_rgb),                    // %0
+        "+r"(dst_r),                      // %1
+        "+r"(dst_g),                      // %2
+        "+r"(dst_b),                      // %3
+        "+r"(width)                       // %4
+      :                                   // Input registers
+      : "cc", "memory", "v0", "v1", "v2"  // Clobber List
+      );
 }
 
-void ARGBMirrorRow_NEON(const uint8* src, uint8* dst, int width) {
-  asm volatile (
-  // Start at end of source row.
-    "add        %0, %0, %w2, sxtw #2           \n"
-    "sub        %0, %0, #16                    \n"
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], %3             \n"  // src -= 16
-    "subs       %w2, %w2, #4                   \n"  // 4 pixels per loop.
-    "rev64      v0.4s, v0.4s                   \n"
-    MEMACCESS(1)
-    "st1        {v0.D}[1], [%1], #8            \n"  // dst += 16
-    MEMACCESS(1)
-    "st1        {v0.D}[0], [%1], #8            \n"
-    "b.gt       1b                             \n"
-  : "+r"(src),   // %0
-    "+r"(dst),   // %1
-    "+r"(width)  // %2
-  : "r"((ptrdiff_t)-16)    // %3
-  : "cc", "memory", "v0"
-  );
+// Reads 16 planar R's, G's and B's and writes out 16 packed RGB at a time
+void MergeRGBRow_NEON(const uint8_t* src_r,
+                      const uint8_t* src_g,
+                      const uint8_t* src_b,
+                      uint8_t* dst_rgb,
+                      int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], #16            \n"  // load R
+      "ld1        {v1.16b}, [%1], #16            \n"  // load G
+      "ld1        {v2.16b}, [%2], #16            \n"  // load B
+      "subs       %w4, %w4, #16                  \n"  // 16 processed per loop
+      "st3        {v0.16b,v1.16b,v2.16b}, [%3], #48 \n"  // store 16 RGB
+      "b.gt       1b                             \n"
+      : "+r"(src_r),                      // %0
+        "+r"(src_g),                      // %1
+        "+r"(src_b),                      // %2
+        "+r"(dst_rgb),                    // %3
+        "+r"(width)                       // %4
+      :                                   // Input registers
+      : "cc", "memory", "v0", "v1", "v2"  // Clobber List
+      );
 }
 
-void RGB24ToARGBRow_NEON(const uint8* src_rgb24, uint8* dst_argb, int width) {
-  asm volatile (
-    "movi       v4.8b, #255                    \n"  // Alpha
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld3        {v1.8b,v2.8b,v3.8b}, [%0], #24 \n"  // load 8 pixels of RGB24.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    MEMACCESS(1)
-    "st4        {v1.8b,v2.8b,v3.8b,v4.8b}, [%1], #32 \n"  // store 8 ARGB pixels
-    "b.gt       1b                             \n"
-  : "+r"(src_rgb24),  // %0
-    "+r"(dst_argb),   // %1
-    "+r"(width)       // %2
-  :
-  : "cc", "memory", "v1", "v2", "v3", "v4"  // Clobber List
-  );
+// Copy multiple of 32.
+void CopyRow_NEON(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ldp        q0, q1, [%0], #32              \n"
+      "subs       %w2, %w2, #32                  \n"  // 32 processed per loop
+      "stp        q0, q1, [%1], #32              \n"
+      "b.gt       1b                             \n"
+      : "+r"(src),                  // %0
+        "+r"(dst),                  // %1
+        "+r"(width)                 // %2  // Output registers
+      :                             // Input registers
+      : "cc", "memory", "v0", "v1"  // Clobber List
+      );
 }
 
-void RAWToARGBRow_NEON(const uint8* src_raw, uint8* dst_argb, int width) {
-  asm volatile (
-    "movi       v5.8b, #255                    \n"  // Alpha
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld3        {v0.8b,v1.8b,v2.8b}, [%0], #24 \n"  // read r g b
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    "orr        v3.8b, v1.8b, v1.8b            \n"  // move g
-    "orr        v4.8b, v0.8b, v0.8b            \n"  // move r
-    MEMACCESS(1)
-    "st4        {v2.8b,v3.8b,v4.8b,v5.8b}, [%1], #32 \n"  // store b g r a
-    "b.gt       1b                             \n"
-  : "+r"(src_raw),   // %0
-    "+r"(dst_argb),  // %1
-    "+r"(width)      // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5"  // Clobber List
-  );
+// SetRow writes 'width' bytes using an 8 bit value repeated.
+void SetRow_NEON(uint8_t* dst, uint8_t v8, int width) {
+  asm volatile(
+      "dup        v0.16b, %w2                    \n"  // duplicate 16 bytes
+      "1:                                        \n"
+      "subs       %w1, %w1, #16                  \n"  // 16 bytes per loop
+      "st1        {v0.16b}, [%0], #16            \n"  // store
+      "b.gt       1b                             \n"
+      : "+r"(dst),   // %0
+        "+r"(width)  // %1
+      : "r"(v8)      // %2
+      : "cc", "memory", "v0");
 }
 
-void RAWToRGB24Row_NEON(const uint8* src_raw, uint8* dst_rgb24, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld3        {v0.8b,v1.8b,v2.8b}, [%0], #24 \n"  // read r g b
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    "orr        v3.8b, v1.8b, v1.8b            \n"  // move g
-    "orr        v4.8b, v0.8b, v0.8b            \n"  // move r
-    MEMACCESS(1)
-    "st3        {v2.8b,v3.8b,v4.8b}, [%1], #24 \n"  // store b g r
-    "b.gt       1b                             \n"
-  : "+r"(src_raw),    // %0
-    "+r"(dst_rgb24),  // %1
-    "+r"(width)       // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4"  // Clobber List
-  );
+void ARGBSetRow_NEON(uint8_t* dst, uint32_t v32, int width) {
+  asm volatile(
+      "dup        v0.4s, %w2                     \n"  // duplicate 4 ints
+      "1:                                        \n"
+      "subs       %w1, %w1, #4                   \n"  // 4 ints per loop
+      "st1        {v0.16b}, [%0], #16            \n"  // store
+      "b.gt       1b                             \n"
+      : "+r"(dst),   // %0
+        "+r"(width)  // %1
+      : "r"(v32)     // %2
+      : "cc", "memory", "v0");
 }
 
-#define RGB565TOARGB                                                           \
-    "shrn       v6.8b, v0.8h, #5               \n"  /* G xxGGGGGG           */ \
-    "shl        v6.8b, v6.8b, #2               \n"  /* G GGGGGG00 upper 6   */ \
-    "ushr       v4.8b, v6.8b, #6               \n"  /* G 000000GG lower 2   */ \
-    "orr        v1.8b, v4.8b, v6.8b            \n"  /* G                    */ \
-    "xtn        v2.8b, v0.8h                   \n"  /* B xxxBBBBB           */ \
-    "ushr       v0.8h, v0.8h, #11              \n"  /* R 000RRRRR           */ \
-    "xtn2       v2.16b,v0.8h                   \n"  /* R in upper part      */ \
-    "shl        v2.16b, v2.16b, #3             \n"  /* R,B BBBBB000 upper 5 */ \
-    "ushr       v0.16b, v2.16b, #5             \n"  /* R,B 00000BBB lower 3 */ \
-    "orr        v0.16b, v0.16b, v2.16b         \n"  /* R,B                  */ \
-    "dup        v2.2D, v0.D[1]                 \n"  /* R                    */
-
-void RGB565ToARGBRow_NEON(const uint8* src_rgb565, uint8* dst_argb, int width) {
-  asm volatile (
-    "movi       v3.8b, #255                    \n"  // Alpha
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"  // load 8 RGB565 pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    RGB565TOARGB
-    MEMACCESS(1)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%1], #32 \n"  // store 8 ARGB pixels
-    "b.gt       1b                             \n"
-  : "+r"(src_rgb565),  // %0
-    "+r"(dst_argb),    // %1
-    "+r"(width)          // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v6"  // Clobber List
-  );
+void MirrorRow_NEON(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      // Start at end of source row.
+      "add        %0, %0, %w2, sxtw              \n"
+      "sub        %0, %0, #16                    \n"
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], %3             \n"  // src -= 16
+      "subs       %w2, %w2, #16                  \n"  // 16 pixels per loop.
+      "rev64      v0.16b, v0.16b                 \n"
+      "st1        {v0.D}[1], [%1], #8            \n"  // dst += 16
+      "st1        {v0.D}[0], [%1], #8            \n"
+      "b.gt       1b                             \n"
+      : "+r"(src),           // %0
+        "+r"(dst),           // %1
+        "+r"(width)          // %2
+      : "r"((ptrdiff_t)-16)  // %3
+      : "cc", "memory", "v0");
 }
 
-#define ARGB1555TOARGB                                                         \
-    "ushr       v2.8h, v0.8h, #10              \n"  /* R xxxRRRRR           */ \
-    "shl        v2.8h, v2.8h, #3               \n"  /* R RRRRR000 upper 5   */ \
-    "xtn        v3.8b, v2.8h                   \n"  /* RRRRR000 AAAAAAAA    */ \
-                                                                               \
-    "sshr       v2.8h, v0.8h, #15              \n"  /* A AAAAAAAA           */ \
-    "xtn2       v3.16b, v2.8h                  \n"                             \
-                                                                               \
-    "xtn        v2.8b, v0.8h                   \n"  /* B xxxBBBBB           */ \
-    "shrn2      v2.16b,v0.8h, #5               \n"  /* G xxxGGGGG           */ \
-                                                                               \
-    "ushr       v1.16b, v3.16b, #5             \n"  /* R,A 00000RRR lower 3 */ \
-    "shl        v0.16b, v2.16b, #3             \n"  /* B,G BBBBB000 upper 5 */ \
-    "ushr       v2.16b, v0.16b, #5             \n"  /* B,G 00000BBB lower 3 */ \
-                                                                               \
-    "orr        v0.16b, v0.16b, v2.16b         \n"  /* B,G                  */ \
-    "orr        v2.16b, v1.16b, v3.16b         \n"  /* R,A                  */ \
-    "dup        v1.2D, v0.D[1]                 \n"                             \
-    "dup        v3.2D, v2.D[1]                 \n"
+void MirrorUVRow_NEON(const uint8_t* src_uv,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  asm volatile(
+      // Start at end of source row.
+      "add        %0, %0, %w3, sxtw #1           \n"
+      "sub        %0, %0, #16                    \n"
+      "1:                                        \n"
+      "ld2        {v0.8b, v1.8b}, [%0], %4       \n"  // src -= 16
+      "subs       %w3, %w3, #8                   \n"  // 8 pixels per loop.
+      "rev64      v0.8b, v0.8b                   \n"
+      "rev64      v1.8b, v1.8b                   \n"
+      "st1        {v0.8b}, [%1], #8              \n"  // dst += 8
+      "st1        {v1.8b}, [%2], #8              \n"
+      "b.gt       1b                             \n"
+      : "+r"(src_uv),        // %0
+        "+r"(dst_u),         // %1
+        "+r"(dst_v),         // %2
+        "+r"(width)          // %3
+      : "r"((ptrdiff_t)-16)  // %4
+      : "cc", "memory", "v0", "v1");
+}
+
+void ARGBMirrorRow_NEON(const uint8_t* src, uint8_t* dst, int width) {
+  asm volatile(
+      // Start at end of source row.
+      "add        %0, %0, %w2, sxtw #2           \n"
+      "sub        %0, %0, #16                    \n"
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], %3             \n"  // src -= 16
+      "subs       %w2, %w2, #4                   \n"  // 4 pixels per loop.
+      "rev64      v0.4s, v0.4s                   \n"
+      "st1        {v0.D}[1], [%1], #8            \n"  // dst += 16
+      "st1        {v0.D}[0], [%1], #8            \n"
+      "b.gt       1b                             \n"
+      : "+r"(src),           // %0
+        "+r"(dst),           // %1
+        "+r"(width)          // %2
+      : "r"((ptrdiff_t)-16)  // %3
+      : "cc", "memory", "v0");
+}
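For readers not fluent in AArch64 NEON: the `ld1`/`rev64 v0.4s`/`st1` sequence above reverses four 4-byte ARGB pixels per iteration while walking the source backwards. A scalar sketch of the same per-row contract (the `_C_ref` name is hypothetical, for illustration only; libyuv's actual C fallback lives in `row_common.cc`):

```cpp
#include <cassert>
#include <cstdint>

// Scalar equivalent of ARGBMirrorRow: copy 'width' ARGB pixels in reverse
// pixel order, keeping the byte order within each 4-byte pixel intact.
void ARGBMirrorRow_C_ref(const uint8_t* src, uint8_t* dst, int width) {
  for (int x = 0; x < width; ++x) {
    const uint8_t* p = src + 4 * (width - 1 - x);  // source pixel, from the end
    dst[4 * x + 0] = p[0];
    dst[4 * x + 1] = p[1];
    dst[4 * x + 2] = p[2];
    dst[4 * x + 3] = p[3];
  }
}
```

The NEON version gets the same effect batch-wise: `rev64 v0.4s` reverses the four 32-bit lanes within the 16-byte register, and storing the high doubleword before the low one reverses the two halves.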
+
+void RGB24ToARGBRow_NEON(const uint8_t* src_rgb24,
+                         uint8_t* dst_argb,
+                         int width) {
+  asm volatile(
+      "movi       v4.8b, #255                    \n"  // Alpha
+      "1:                                        \n"
+      "ld3        {v1.8b,v2.8b,v3.8b}, [%0], #24 \n"  // load 8 pixels of RGB24.
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "st4        {v1.8b,v2.8b,v3.8b,v4.8b}, [%1], #32 \n"  // store 8 ARGB
+      "b.gt       1b                             \n"
+      : "+r"(src_rgb24),  // %0
+        "+r"(dst_argb),   // %1
+        "+r"(width)       // %2
+      :
+      : "cc", "memory", "v1", "v2", "v3", "v4"  // Clobber List
+      );
+}
+
+void RAWToARGBRow_NEON(const uint8_t* src_raw, uint8_t* dst_argb, int width) {
+  asm volatile(
+      "movi       v5.8b, #255                    \n"  // Alpha
+      "1:                                        \n"
+      "ld3        {v0.8b,v1.8b,v2.8b}, [%0], #24 \n"  // read r g b
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "orr        v3.8b, v1.8b, v1.8b            \n"  // move g
+      "orr        v4.8b, v0.8b, v0.8b            \n"  // move r
+      "st4        {v2.8b,v3.8b,v4.8b,v5.8b}, [%1], #32 \n"  // store b g r a
+      "b.gt       1b                             \n"
+      : "+r"(src_raw),   // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5"  // Clobber List
+      );
+}
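The `ld3`/`orr`/`st4` dance above is a channel swap: RAW is R,G,B byte order, while ARGB in libyuv's little-endian convention is stored B,G,R,A. A scalar sketch of the conversion (the `_C_ref` name is hypothetical, not part of libyuv):

```cpp
#include <cassert>
#include <cstdint>

// Scalar equivalent of RAWToARGBRow: expand 3-byte R,G,B pixels to 4-byte
// pixels stored B,G,R,A with alpha forced to 255.
void RAWToARGBRow_C_ref(const uint8_t* src_raw, uint8_t* dst_argb, int width) {
  for (int x = 0; x < width; ++x) {
    uint8_t r = src_raw[3 * x + 0];
    uint8_t g = src_raw[3 * x + 1];
    uint8_t b = src_raw[3 * x + 2];
    dst_argb[4 * x + 0] = b;    // swap R and B
    dst_argb[4 * x + 1] = g;
    dst_argb[4 * x + 2] = r;
    dst_argb[4 * x + 3] = 255;  // opaque alpha, like "movi v5.8b, #255"
  }
}
```

In the NEON body the `orr vN, vM, vM` instructions are just register moves, used to reorder which registers `st4` writes so the interleaved store emits b, g, r, a.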
+
+void RAWToRGB24Row_NEON(const uint8_t* src_raw, uint8_t* dst_rgb24, int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld3        {v0.8b,v1.8b,v2.8b}, [%0], #24 \n"  // read r g b
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "orr        v3.8b, v1.8b, v1.8b            \n"  // move g
+      "orr        v4.8b, v0.8b, v0.8b            \n"  // move r
+      "st3        {v2.8b,v3.8b,v4.8b}, [%1], #24 \n"  // store b g r
+      "b.gt       1b                             \n"
+      : "+r"(src_raw),    // %0
+        "+r"(dst_rgb24),  // %1
+        "+r"(width)       // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4"  // Clobber List
+      );
+}
+
+#define RGB565TOARGB                                                        \
+  "shrn       v6.8b, v0.8h, #5               \n" /* G xxGGGGGG           */ \
+  "shl        v6.8b, v6.8b, #2               \n" /* G GGGGGG00 upper 6   */ \
+  "ushr       v4.8b, v6.8b, #6               \n" /* G 000000GG lower 2   */ \
+  "orr        v1.8b, v4.8b, v6.8b            \n" /* G                    */ \
+  "xtn        v2.8b, v0.8h                   \n" /* B xxxBBBBB           */ \
+  "ushr       v0.8h, v0.8h, #11              \n" /* R 000RRRRR           */ \
+  "xtn2       v2.16b,v0.8h                   \n" /* R in upper part      */ \
+  "shl        v2.16b, v2.16b, #3             \n" /* R,B BBBBB000 upper 5 */ \
+  "ushr       v0.16b, v2.16b, #5             \n" /* R,B 00000BBB lower 3 */ \
+  "orr        v0.16b, v0.16b, v2.16b         \n" /* R,B                  */ \
+  "dup        v2.2D, v0.D[1]                 \n" /* R                    */
+
+void RGB565ToARGBRow_NEON(const uint8_t* src_rgb565,
+                          uint8_t* dst_argb,
+                          int width) {
+  asm volatile(
+      "movi       v3.8b, #255                    \n"  // Alpha
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], #16            \n"  // load 8 RGB565 pixels.
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      RGB565TOARGB
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%1], #32 \n"  // store 8 ARGB
+      "b.gt       1b                             \n"
+      : "+r"(src_rgb565),  // %0
+        "+r"(dst_argb),    // %1
+        "+r"(width)        // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v6"  // Clobber List
+      );
+}
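The shift/or pairs in the RGB565TOARGB macro implement bit replication: a 5-bit channel becomes `(v << 3) | (v >> 2)` and the 6-bit green becomes `(g << 2) | (g >> 4)`, so 0x1F maps to 255 rather than 248. A scalar sketch under the same little-endian RGB565 layout (the `_C_ref` name is hypothetical, for illustration):

```cpp
#include <cassert>
#include <cstdint>

// Scalar equivalent of RGB565ToARGBRow: unpack 16-bit RGB565 pixels to
// 4-byte B,G,R,A with bit replication to fill the full 8-bit range.
void RGB565ToARGBRow_C_ref(const uint8_t* src, uint8_t* dst, int width) {
  for (int x = 0; x < width; ++x) {
    uint16_t p =
        static_cast<uint16_t>(src[2 * x] | (src[2 * x + 1] << 8));
    uint8_t b5 = p & 0x1f;          // bits 0..4
    uint8_t g6 = (p >> 5) & 0x3f;   // bits 5..10
    uint8_t r5 = (p >> 11) & 0x1f;  // bits 11..15
    dst[4 * x + 0] = static_cast<uint8_t>((b5 << 3) | (b5 >> 2));  // B
    dst[4 * x + 1] = static_cast<uint8_t>((g6 << 2) | (g6 >> 4));  // G
    dst[4 * x + 2] = static_cast<uint8_t>((r5 << 3) | (r5 >> 2));  // R
    dst[4 * x + 3] = 255;                                          // A
  }
}
```

The macro's `shl`/`ushr`/`orr` triples on the vector registers are exactly these replications, performed on 8 pixels at once.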
+
+#define ARGB1555TOARGB                                                      \
+  "ushr       v2.8h, v0.8h, #10              \n" /* R xxxRRRRR           */ \
+  "shl        v2.8h, v2.8h, #3               \n" /* R RRRRR000 upper 5   */ \
+  "xtn        v3.8b, v2.8h                   \n" /* RRRRR000 AAAAAAAA    */ \
+                                                                            \
+  "sshr       v2.8h, v0.8h, #15              \n" /* A AAAAAAAA           */ \
+  "xtn2       v3.16b, v2.8h                  \n"                            \
+                                                                            \
+  "xtn        v2.8b, v0.8h                   \n" /* B xxxBBBBB           */ \
+  "shrn2      v2.16b,v0.8h, #5               \n" /* G xxxGGGGG           */ \
+                                                                            \
+  "ushr       v1.16b, v3.16b, #5             \n" /* R,A 00000RRR lower 3 */ \
+  "shl        v0.16b, v2.16b, #3             \n" /* B,G BBBBB000 upper 5 */ \
+  "ushr       v2.16b, v0.16b, #5             \n" /* B,G 00000BBB lower 3 */ \
+                                                                            \
+  "orr        v0.16b, v0.16b, v2.16b         \n" /* B,G                  */ \
+  "orr        v2.16b, v1.16b, v3.16b         \n" /* R,A                  */ \
+  "dup        v1.2D, v0.D[1]                 \n"                            \
+  "dup        v3.2D, v2.D[1]                 \n"
 
 // RGB555TOARGB is same as ARGB1555TOARGB but ignores alpha.
-#define RGB555TOARGB                                                           \
-    "ushr       v2.8h, v0.8h, #10              \n"  /* R xxxRRRRR           */ \
-    "shl        v2.8h, v2.8h, #3               \n"  /* R RRRRR000 upper 5   */ \
-    "xtn        v3.8b, v2.8h                   \n"  /* RRRRR000             */ \
-                                                                               \
-    "xtn        v2.8b, v0.8h                   \n"  /* B xxxBBBBB           */ \
-    "shrn2      v2.16b,v0.8h, #5               \n"  /* G xxxGGGGG           */ \
-                                                                               \
-    "ushr       v1.16b, v3.16b, #5             \n"  /* R   00000RRR lower 3 */ \
-    "shl        v0.16b, v2.16b, #3             \n"  /* B,G BBBBB000 upper 5 */ \
-    "ushr       v2.16b, v0.16b, #5             \n"  /* B,G 00000BBB lower 3 */ \
-                                                                               \
-    "orr        v0.16b, v0.16b, v2.16b         \n"  /* B,G                  */ \
-    "orr        v2.16b, v1.16b, v3.16b         \n"  /* R                    */ \
-    "dup        v1.2D, v0.D[1]                 \n"  /* G */                    \
+#define RGB555TOARGB                                                        \
+  "ushr       v2.8h, v0.8h, #10              \n" /* R xxxRRRRR           */ \
+  "shl        v2.8h, v2.8h, #3               \n" /* R RRRRR000 upper 5   */ \
+  "xtn        v3.8b, v2.8h                   \n" /* RRRRR000             */ \
+                                                                            \
+  "xtn        v2.8b, v0.8h                   \n" /* B xxxBBBBB           */ \
+  "shrn2      v2.16b,v0.8h, #5               \n" /* G xxxGGGGG           */ \
+                                                                            \
+  "ushr       v1.16b, v3.16b, #5             \n" /* R   00000RRR lower 3 */ \
+  "shl        v0.16b, v2.16b, #3             \n" /* B,G BBBBB000 upper 5 */ \
+  "ushr       v2.16b, v0.16b, #5             \n" /* B,G 00000BBB lower 3 */ \
+                                                                            \
+  "orr        v0.16b, v0.16b, v2.16b         \n" /* B,G                  */ \
+  "orr        v2.16b, v1.16b, v3.16b         \n" /* R                    */ \
+  "dup        v1.2D, v0.D[1]                 \n" /* G */
 
-void ARGB1555ToARGBRow_NEON(const uint8* src_argb1555, uint8* dst_argb,
+void ARGB1555ToARGBRow_NEON(const uint8_t* src_argb1555,
+                            uint8_t* dst_argb,
                             int width) {
-  asm volatile (
-    "movi       v3.8b, #255                    \n"  // Alpha
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"  // load 8 ARGB1555 pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    ARGB1555TOARGB
-    MEMACCESS(1)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%1], #32 \n"  // store 8 ARGB pixels
-    "b.gt       1b                             \n"
-  : "+r"(src_argb1555),  // %0
-    "+r"(dst_argb),    // %1
-    "+r"(width)          // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3"  // Clobber List
-  );
+  asm volatile(
+      "movi       v3.8b, #255                    \n"  // Alpha
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], #16            \n"  // load 8 ARGB1555 pixels.
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      ARGB1555TOARGB
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%1], #32 \n"  // store 8 ARGB
+                                                            // pixels
+      "b.gt       1b                             \n"
+      : "+r"(src_argb1555),  // %0
+        "+r"(dst_argb),      // %1
+        "+r"(width)          // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3"  // Clobber List
+      );
 }
 
-#define ARGB4444TOARGB                                                         \
-    "shrn       v1.8b,  v0.8h, #8              \n"  /* v1(l) AR             */ \
-    "xtn2       v1.16b, v0.8h                  \n"  /* v1(h) GB             */ \
-    "shl        v2.16b, v1.16b, #4             \n"  /* B,R BBBB0000         */ \
-    "ushr       v3.16b, v1.16b, #4             \n"  /* G,A 0000GGGG         */ \
-    "ushr       v0.16b, v2.16b, #4             \n"  /* B,R 0000BBBB         */ \
-    "shl        v1.16b, v3.16b, #4             \n"  /* G,A GGGG0000         */ \
-    "orr        v2.16b, v0.16b, v2.16b         \n"  /* B,R BBBBBBBB         */ \
-    "orr        v3.16b, v1.16b, v3.16b         \n"  /* G,A GGGGGGGG         */ \
-    "dup        v0.2D, v2.D[1]                 \n"                             \
-    "dup        v1.2D, v3.D[1]                 \n"
+#define ARGB4444TOARGB                                                      \
+  "shrn       v1.8b,  v0.8h, #8              \n" /* v1(l) AR             */ \
+  "xtn2       v1.16b, v0.8h                  \n" /* v1(h) GB             */ \
+  "shl        v2.16b, v1.16b, #4             \n" /* B,R BBBB0000         */ \
+  "ushr       v3.16b, v1.16b, #4             \n" /* G,A 0000GGGG         */ \
+  "ushr       v0.16b, v2.16b, #4             \n" /* B,R 0000BBBB         */ \
+  "shl        v1.16b, v3.16b, #4             \n" /* G,A GGGG0000         */ \
+  "orr        v2.16b, v0.16b, v2.16b         \n" /* B,R BBBBBBBB         */ \
+  "orr        v3.16b, v1.16b, v3.16b         \n" /* G,A GGGGGGGG         */ \
+  "dup        v0.2D, v2.D[1]                 \n"                            \
+  "dup        v1.2D, v3.D[1]                 \n"
 
-void ARGB4444ToARGBRow_NEON(const uint8* src_argb4444, uint8* dst_argb,
+void ARGB4444ToARGBRow_NEON(const uint8_t* src_argb4444,
+                            uint8_t* dst_argb,
                             int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"  // load 8 ARGB4444 pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    ARGB4444TOARGB
-    MEMACCESS(1)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%1], #32 \n"  // store 8 ARGB pixels
-    "b.gt       1b                             \n"
-  : "+r"(src_argb4444),  // %0
-    "+r"(dst_argb),    // %1
-    "+r"(width)          // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4"  // Clobber List
-  );
+  asm volatile(
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], #16            \n"  // load 8 ARGB4444 pixels.
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      ARGB4444TOARGB
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%1], #32 \n"  // store 8 ARGB
+                                                            // pixels
+      "b.gt       1b                             \n"
+      : "+r"(src_argb4444),  // %0
+        "+r"(dst_argb),      // %1
+        "+r"(width)          // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4"  // Clobber List
+      );
 }
 
-void ARGBToRGB24Row_NEON(const uint8* src_argb, uint8* dst_rgb24, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v1.8b,v2.8b,v3.8b,v4.8b}, [%0], #32 \n"  // load 8 ARGB pixels
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    MEMACCESS(1)
-    "st3        {v1.8b,v2.8b,v3.8b}, [%1], #24 \n"  // store 8 pixels of RGB24.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),   // %0
-    "+r"(dst_rgb24),  // %1
-    "+r"(width)         // %2
-  :
-  : "cc", "memory", "v1", "v2", "v3", "v4"  // Clobber List
-  );
-}
-
-void ARGBToRAWRow_NEON(const uint8* src_argb, uint8* dst_raw, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v1.8b,v2.8b,v3.8b,v4.8b}, [%0], #32 \n"  // load b g r a
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    "orr        v4.8b, v2.8b, v2.8b            \n"  // mov g
-    "orr        v5.8b, v1.8b, v1.8b            \n"  // mov b
-    MEMACCESS(1)
-    "st3        {v3.8b,v4.8b,v5.8b}, [%1], #24 \n"  // store r g b
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_raw),   // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "v1", "v2", "v3", "v4", "v5"  // Clobber List
-  );
-}
-
-void YUY2ToYRow_NEON(const uint8* src_yuy2, uint8* dst_y, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld2        {v0.16b,v1.16b}, [%0], #32     \n"  // load 16 pixels of YUY2.
-    "subs       %w2, %w2, #16                  \n"  // 16 processed per loop.
-    MEMACCESS(1)
-    "st1        {v0.16b}, [%1], #16            \n"  // store 16 pixels of Y.
-    "b.gt       1b                             \n"
-  : "+r"(src_yuy2),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "v0", "v1"  // Clobber List
-  );
-}
-
-void UYVYToYRow_NEON(const uint8* src_uyvy, uint8* dst_y, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld2        {v0.16b,v1.16b}, [%0], #32     \n"  // load 16 pixels of UYVY.
-    "subs       %w2, %w2, #16                  \n"  // 16 processed per loop.
-    MEMACCESS(1)
-    "st1        {v1.16b}, [%1], #16            \n"  // store 16 pixels of Y.
-    "b.gt       1b                             \n"
-  : "+r"(src_uyvy),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "v0", "v1"  // Clobber List
-  );
-}
-
-void YUY2ToUV422Row_NEON(const uint8* src_yuy2, uint8* dst_u, uint8* dst_v,
+void ARGBToRGB24Row_NEON(const uint8_t* src_argb,
+                         uint8_t* dst_rgb24,
                          int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 16 YUY2 pixels
-    "subs       %w3, %w3, #16                  \n"  // 16 pixels = 8 UVs.
-    MEMACCESS(1)
-    "st1        {v1.8b}, [%1], #8              \n"  // store 8 U.
-    MEMACCESS(2)
-    "st1        {v3.8b}, [%2], #8              \n"  // store 8 V.
-    "b.gt       1b                             \n"
-  : "+r"(src_yuy2),  // %0
-    "+r"(dst_u),     // %1
-    "+r"(dst_v),     // %2
-    "+r"(width)        // %3
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3"  // Clobber List
-  );
+  asm volatile(
+      "1:                                        \n"
+      "ld4        {v1.8b,v2.8b,v3.8b,v4.8b}, [%0], #32 \n"  // load 8 ARGB
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "st3        {v1.8b,v2.8b,v3.8b}, [%1], #24 \n"  // store 8 pixels of
+                                                      // RGB24.
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),   // %0
+        "+r"(dst_rgb24),  // %1
+        "+r"(width)       // %2
+      :
+      : "cc", "memory", "v1", "v2", "v3", "v4"  // Clobber List
+      );
 }
 
-void UYVYToUV422Row_NEON(const uint8* src_uyvy, uint8* dst_u, uint8* dst_v,
+void ARGBToRAWRow_NEON(const uint8_t* src_argb, uint8_t* dst_raw, int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld4        {v1.8b,v2.8b,v3.8b,v4.8b}, [%0], #32 \n"  // load b g r a
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "orr        v4.8b, v2.8b, v2.8b            \n"  // mov g
+      "orr        v5.8b, v1.8b, v1.8b            \n"  // mov b
+      "st3        {v3.8b,v4.8b,v5.8b}, [%1], #24 \n"  // store r g b
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_raw),   // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "v1", "v2", "v3", "v4", "v5"  // Clobber List
+      );
+}
+
+void YUY2ToYRow_NEON(const uint8_t* src_yuy2, uint8_t* dst_y, int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld2        {v0.16b,v1.16b}, [%0], #32     \n"  // load 16 pixels of YUY2.
+      "subs       %w2, %w2, #16                  \n"  // 16 processed per loop.
+      "st1        {v0.16b}, [%1], #16            \n"  // store 16 pixels of Y.
+      "b.gt       1b                             \n"
+      : "+r"(src_yuy2),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "v0", "v1"  // Clobber List
+      );
+}
+
+void UYVYToYRow_NEON(const uint8_t* src_uyvy, uint8_t* dst_y, int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld2        {v0.16b,v1.16b}, [%0], #32     \n"  // load 16 pixels of UYVY.
+      "subs       %w2, %w2, #16                  \n"  // 16 processed per loop.
+      "st1        {v1.16b}, [%1], #16            \n"  // store 16 pixels of Y.
+      "b.gt       1b                             \n"
+      : "+r"(src_uyvy),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "v0", "v1"  // Clobber List
+      );
+}
+
+void YUY2ToUV422Row_NEON(const uint8_t* src_yuy2,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
                          int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 16 UYVY pixels
-    "subs       %w3, %w3, #16                  \n"  // 16 pixels = 8 UVs.
-    MEMACCESS(1)
-    "st1        {v0.8b}, [%1], #8              \n"  // store 8 U.
-    MEMACCESS(2)
-    "st1        {v2.8b}, [%2], #8              \n"  // store 8 V.
-    "b.gt       1b                             \n"
-  : "+r"(src_uyvy),  // %0
-    "+r"(dst_u),     // %1
-    "+r"(dst_v),     // %2
-    "+r"(width)        // %3
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3"  // Clobber List
-  );
+  asm volatile(
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 16 YUY2
+      "subs       %w3, %w3, #16                  \n"  // 16 pixels = 8 UVs.
+      "st1        {v1.8b}, [%1], #8              \n"  // store 8 U.
+      "st1        {v3.8b}, [%2], #8              \n"  // store 8 V.
+      "b.gt       1b                             \n"
+      : "+r"(src_yuy2),  // %0
+        "+r"(dst_u),     // %1
+        "+r"(dst_v),     // %2
+        "+r"(width)      // %3
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3"  // Clobber List
+      );
 }
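The `ld4 {v0.8b-v3.8b}` above de-interleaves YUY2's Y0,U,Y1,V byte groups into four registers, after which storing v1 and v3 yields the U and V planes. A scalar sketch of the same extraction (the `_C_ref` name is hypothetical, not part of libyuv):

```cpp
#include <cassert>
#include <cstdint>

// Scalar equivalent of YUY2ToUV422Row: each 4-byte YUY2 group carries two
// pixels as Y0,U,Y1,V; emit one U and one V per pixel pair.
void YUY2ToUV422Row_C_ref(const uint8_t* src_yuy2,
                          uint8_t* dst_u,
                          uint8_t* dst_v,
                          int width) {
  for (int x = 0; x < width; x += 2) {
    dst_u[x / 2] = src_yuy2[2 * x + 1];  // U sits at byte offset 1
    dst_v[x / 2] = src_yuy2[2 * x + 3];  // V sits at byte offset 3
  }
}
```

The UYVY variant that follows is identical except the group order is U,Y0,V,Y1, which is why it stores v0 and v2 instead of v1 and v3.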
 
-void YUY2ToUVRow_NEON(const uint8* src_yuy2, int stride_yuy2,
-                      uint8* dst_u, uint8* dst_v, int width) {
-  const uint8* src_yuy2b = src_yuy2 + stride_yuy2;
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 16 pixels
-    "subs       %w4, %w4, #16                  \n"  // 16 pixels = 8 UVs.
-    MEMACCESS(1)
-    "ld4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%1], #32 \n"  // load next row
-    "urhadd     v1.8b, v1.8b, v5.8b            \n"  // average rows of U
-    "urhadd     v3.8b, v3.8b, v7.8b            \n"  // average rows of V
-    MEMACCESS(2)
-    "st1        {v1.8b}, [%2], #8              \n"  // store 8 U.
-    MEMACCESS(3)
-    "st1        {v3.8b}, [%3], #8              \n"  // store 8 V.
-    "b.gt       1b                             \n"
-  : "+r"(src_yuy2),     // %0
-    "+r"(src_yuy2b),    // %1
-    "+r"(dst_u),        // %2
-    "+r"(dst_v),        // %3
-    "+r"(width)           // %4
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4",
-    "v5", "v6", "v7"  // Clobber List
-  );
+void UYVYToUV422Row_NEON(const uint8_t* src_uyvy,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
+                         int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 16 UYVY
+      "subs       %w3, %w3, #16                  \n"  // 16 pixels = 8 UVs.
+      "st1        {v0.8b}, [%1], #8              \n"  // store 8 U.
+      "st1        {v2.8b}, [%2], #8              \n"  // store 8 V.
+      "b.gt       1b                             \n"
+      : "+r"(src_uyvy),  // %0
+        "+r"(dst_u),     // %1
+        "+r"(dst_v),     // %2
+        "+r"(width)      // %3
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3"  // Clobber List
+      );
 }
 
-void UYVYToUVRow_NEON(const uint8* src_uyvy, int stride_uyvy,
-                      uint8* dst_u, uint8* dst_v, int width) {
-  const uint8* src_uyvyb = src_uyvy + stride_uyvy;
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 16 pixels
-    "subs       %w4, %w4, #16                  \n"  // 16 pixels = 8 UVs.
-    MEMACCESS(1)
-    "ld4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%1], #32 \n"  // load next row
-    "urhadd     v0.8b, v0.8b, v4.8b            \n"  // average rows of U
-    "urhadd     v2.8b, v2.8b, v6.8b            \n"  // average rows of V
-    MEMACCESS(2)
-    "st1        {v0.8b}, [%2], #8              \n"  // store 8 U.
-    MEMACCESS(3)
-    "st1        {v2.8b}, [%3], #8              \n"  // store 8 V.
-    "b.gt       1b                             \n"
-  : "+r"(src_uyvy),     // %0
-    "+r"(src_uyvyb),    // %1
-    "+r"(dst_u),        // %2
-    "+r"(dst_v),        // %3
-    "+r"(width)           // %4
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4",
-    "v5", "v6", "v7"  // Clobber List
-  );
+void YUY2ToUVRow_NEON(const uint8_t* src_yuy2,
+                      int stride_yuy2,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  const uint8_t* src_yuy2b = src_yuy2 + stride_yuy2;
+  asm volatile(
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 16 pixels
+      "subs       %w4, %w4, #16                  \n"  // 16 pixels = 8 UVs.
+      "ld4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%1], #32 \n"  // load next row
+      "urhadd     v1.8b, v1.8b, v5.8b            \n"        // average rows of U
+      "urhadd     v3.8b, v3.8b, v7.8b            \n"        // average rows of V
+      "st1        {v1.8b}, [%2], #8              \n"        // store 8 U.
+      "st1        {v3.8b}, [%3], #8              \n"        // store 8 V.
+      "b.gt       1b                             \n"
+      : "+r"(src_yuy2),   // %0
+        "+r"(src_yuy2b),  // %1
+        "+r"(dst_u),      // %2
+        "+r"(dst_v),      // %3
+        "+r"(width)       // %4
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6",
+        "v7"  // Clobber List
+      );
+}
+
+void UYVYToUVRow_NEON(const uint8_t* src_uyvy,
+                      int stride_uyvy,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  const uint8_t* src_uyvyb = src_uyvy + stride_uyvy;
+  asm volatile(
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 16 pixels
+      "subs       %w4, %w4, #16                  \n"  // 16 pixels = 8 UVs.
+      "ld4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%1], #32 \n"  // load next row
+      "urhadd     v0.8b, v0.8b, v4.8b            \n"        // average rows of U
+      "urhadd     v2.8b, v2.8b, v6.8b            \n"        // average rows of V
+      "st1        {v0.8b}, [%2], #8              \n"        // store 8 U.
+      "st1        {v2.8b}, [%3], #8              \n"        // store 8 V.
+      "b.gt       1b                             \n"
+      : "+r"(src_uyvy),   // %0
+        "+r"(src_uyvyb),  // %1
+        "+r"(dst_u),      // %2
+        "+r"(dst_v),      // %3
+        "+r"(width)       // %4
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6",
+        "v7"  // Clobber List
+      );
 }
 
 // For BGRAToARGB, ABGRToARGB, RGBAToARGB, and ARGBToRGBA.
-void ARGBShuffleRow_NEON(const uint8* src_argb, uint8* dst_argb,
-                         const uint8* shuffler, int width) {
-  asm volatile (
-    MEMACCESS(3)
-    "ld1        {v2.16b}, [%3]                 \n"  // shuffler
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"  // load 4 pixels.
-    "subs       %w2, %w2, #4                   \n"  // 4 processed per loop
-    "tbl        v1.16b, {v0.16b}, v2.16b       \n"  // look up 4 pixels
-    MEMACCESS(1)
-    "st1        {v1.16b}, [%1], #16            \n"  // store 4.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_argb),  // %1
-    "+r"(width)        // %2
-  : "r"(shuffler)    // %3
-  : "cc", "memory", "v0", "v1", "v2"  // Clobber List
-  );
+void ARGBShuffleRow_NEON(const uint8_t* src_argb,
+                         uint8_t* dst_argb,
+                         const uint8_t* shuffler,
+                         int width) {
+  asm volatile(
+      "ld1        {v2.16b}, [%3]                 \n"  // shuffler
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], #16            \n"  // load 4 pixels.
+      "subs       %w2, %w2, #4                   \n"  // 4 processed per loop
+      "tbl        v1.16b, {v0.16b}, v2.16b       \n"  // look up 4 pixels
+      "st1        {v1.16b}, [%1], #16            \n"  // store 4.
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),                   // %0
+        "+r"(dst_argb),                   // %1
+        "+r"(width)                       // %2
+      : "r"(shuffler)                     // %3
+      : "cc", "memory", "v0", "v1", "v2"  // Clobber List
+      );
 }
 
-void I422ToYUY2Row_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_yuy2, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld2        {v0.8b, v1.8b}, [%0], #16      \n"  // load 16 Ys
-    "orr        v2.8b, v1.8b, v1.8b            \n"
-    MEMACCESS(1)
-    "ld1        {v1.8b}, [%1], #8              \n"  // load 8 Us
-    MEMACCESS(2)
-    "ld1        {v3.8b}, [%2], #8              \n"  // load 8 Vs
-    "subs       %w4, %w4, #16                  \n"  // 16 pixels
-    MEMACCESS(3)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%3], #32 \n"  // Store 16 pixels.
-    "b.gt       1b                             \n"
-  : "+r"(src_y),     // %0
-    "+r"(src_u),     // %1
-    "+r"(src_v),     // %2
-    "+r"(dst_yuy2),  // %3
-    "+r"(width)      // %4
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3"
-  );
+void I422ToYUY2Row_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_yuy2,
+                        int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld2        {v0.8b, v1.8b}, [%0], #16      \n"  // load 16 Ys
+      "orr        v2.8b, v1.8b, v1.8b            \n"
+      "ld1        {v1.8b}, [%1], #8              \n"        // load 8 Us
+      "ld1        {v3.8b}, [%2], #8              \n"        // load 8 Vs
+      "subs       %w4, %w4, #16                  \n"        // 16 pixels
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%3], #32 \n"  // Store 16 pixels.
+      "b.gt       1b                             \n"
+      : "+r"(src_y),     // %0
+        "+r"(src_u),     // %1
+        "+r"(src_v),     // %2
+        "+r"(dst_yuy2),  // %3
+        "+r"(width)      // %4
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3");
 }
 
-void I422ToUYVYRow_NEON(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_uyvy, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld2        {v1.8b,v2.8b}, [%0], #16       \n"  // load 16 Ys
-    "orr        v3.8b, v2.8b, v2.8b            \n"
-    MEMACCESS(1)
-    "ld1        {v0.8b}, [%1], #8              \n"  // load 8 Us
-    MEMACCESS(2)
-    "ld1        {v2.8b}, [%2], #8              \n"  // load 8 Vs
-    "subs       %w4, %w4, #16                  \n"  // 16 pixels
-    MEMACCESS(3)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%3], #32 \n"  // Store 16 pixels.
-    "b.gt       1b                             \n"
-  : "+r"(src_y),     // %0
-    "+r"(src_u),     // %1
-    "+r"(src_v),     // %2
-    "+r"(dst_uyvy),  // %3
-    "+r"(width)      // %4
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3"
-  );
+void I422ToUYVYRow_NEON(const uint8_t* src_y,
+                        const uint8_t* src_u,
+                        const uint8_t* src_v,
+                        uint8_t* dst_uyvy,
+                        int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld2        {v1.8b,v2.8b}, [%0], #16       \n"  // load 16 Ys
+      "orr        v3.8b, v2.8b, v2.8b            \n"
+      "ld1        {v0.8b}, [%1], #8              \n"        // load 8 Us
+      "ld1        {v2.8b}, [%2], #8              \n"        // load 8 Vs
+      "subs       %w4, %w4, #16                  \n"        // 16 pixels
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%3], #32 \n"  // Store 16 pixels.
+      "b.gt       1b                             \n"
+      : "+r"(src_y),     // %0
+        "+r"(src_u),     // %1
+        "+r"(src_v),     // %2
+        "+r"(dst_uyvy),  // %3
+        "+r"(width)      // %4
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3");
 }
 
-void ARGBToRGB565Row_NEON(const uint8* src_argb, uint8* dst_rgb565, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%0], #32 \n"  // load 8 pixels
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    ARGBTORGB565
-    MEMACCESS(1)
-    "st1        {v0.16b}, [%1], #16            \n"  // store 8 pixels RGB565.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_rgb565),  // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "v0", "v20", "v21", "v22", "v23"
-  );
+void ARGBToRGB565Row_NEON(const uint8_t* src_argb,
+                          uint8_t* dst_rgb565,
+                          int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%0], #32 \n"  // load 8 pixels
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      ARGBTORGB565
+      "st1        {v0.16b}, [%1], #16            \n"  // store 8 pixels RGB565.
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),    // %0
+        "+r"(dst_rgb565),  // %1
+        "+r"(width)        // %2
+      :
+      : "cc", "memory", "v0", "v20", "v21", "v22", "v23");
 }
 
-void ARGBToRGB565DitherRow_NEON(const uint8* src_argb, uint8* dst_rgb,
-                                const uint32 dither4, int width) {
-  asm volatile (
-    "dup        v1.4s, %w2                     \n"  // dither4
-  "1:                                          \n"
-    MEMACCESS(1)
-    "ld4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%1], #32 \n"  // load 8 pixels
-    "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
-    "uqadd      v20.8b, v20.8b, v1.8b          \n"
-    "uqadd      v21.8b, v21.8b, v1.8b          \n"
-    "uqadd      v22.8b, v22.8b, v1.8b          \n"
-    ARGBTORGB565
-    MEMACCESS(0)
-    "st1        {v0.16b}, [%0], #16            \n"  // store 8 pixels RGB565.
-    "b.gt       1b                             \n"
-  : "+r"(dst_rgb)    // %0
-  : "r"(src_argb),   // %1
-    "r"(dither4),    // %2
-    "r"(width)       // %3
-  : "cc", "memory", "v0", "v1", "v20", "v21", "v22", "v23"
-  );
+void ARGBToRGB565DitherRow_NEON(const uint8_t* src_argb,
+                                uint8_t* dst_rgb,
+                                const uint32_t dither4,
+                                int width) {
+  asm volatile(
+      "dup        v1.4s, %w2                     \n"  // dither4
+      "1:                                        \n"
+      "ld4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%1], #32 \n"  // load 8 pixels
+      "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
+      "uqadd      v20.8b, v20.8b, v1.8b          \n"
+      "uqadd      v21.8b, v21.8b, v1.8b          \n"
+      "uqadd      v22.8b, v22.8b, v1.8b          \n" ARGBTORGB565
+      "st1        {v0.16b}, [%0], #16            \n"  // store 8 pixels RGB565.
+      "b.gt       1b                             \n"
+      : "+r"(dst_rgb)   // %0
+      : "r"(src_argb),  // %1
+        "r"(dither4),   // %2
+        "r"(width)      // %3
+      : "cc", "memory", "v0", "v1", "v20", "v21", "v22", "v23");
 }
 
-void ARGBToARGB1555Row_NEON(const uint8* src_argb, uint8* dst_argb1555,
+void ARGBToARGB1555Row_NEON(const uint8_t* src_argb,
+                            uint8_t* dst_argb1555,
                             int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%0], #32 \n"  // load 8 pixels
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    ARGBTOARGB1555
-    MEMACCESS(1)
-    "st1        {v0.16b}, [%1], #16            \n"  // store 8 pixels ARGB1555.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_argb1555),  // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "v0", "v20", "v21", "v22", "v23"
-  );
+  asm volatile(
+      "1:                                        \n"
+      "ld4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%0], #32 \n"  // load 8 pixels
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      ARGBTOARGB1555
+      "st1        {v0.16b}, [%1], #16            \n"  // store 8 pixels
+                                                      // ARGB1555.
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),      // %0
+        "+r"(dst_argb1555),  // %1
+        "+r"(width)          // %2
+      :
+      : "cc", "memory", "v0", "v20", "v21", "v22", "v23");
 }
 
-void ARGBToARGB4444Row_NEON(const uint8* src_argb, uint8* dst_argb4444,
+void ARGBToARGB4444Row_NEON(const uint8_t* src_argb,
+                            uint8_t* dst_argb4444,
                             int width) {
-  asm volatile (
-    "movi       v4.16b, #0x0f                  \n"  // bits to clear with vbic.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%0], #32 \n"  // load 8 pixels
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    ARGBTOARGB4444
-    MEMACCESS(1)
-    "st1        {v0.16b}, [%1], #16            \n"  // store 8 pixels ARGB4444.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),      // %0
-    "+r"(dst_argb4444),  // %1
-    "+r"(width)            // %2
-  :
-  : "cc", "memory", "v0", "v1", "v4", "v20", "v21", "v22", "v23"
-  );
+  asm volatile(
+      "movi       v4.16b, #0x0f                  \n"  // bits to clear with
+                                                      // vbic.
+      "1:                                        \n"
+      "ld4        {v20.8b,v21.8b,v22.8b,v23.8b}, [%0], #32 \n"  // load 8 pixels
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      ARGBTOARGB4444
+      "st1        {v0.16b}, [%1], #16            \n"  // store 8 pixels
+                                                      // ARGB4444.
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),      // %0
+        "+r"(dst_argb4444),  // %1
+        "+r"(width)          // %2
+      :
+      : "cc", "memory", "v0", "v1", "v4", "v20", "v21", "v22", "v23");
 }
 
-void ARGBToYRow_NEON(const uint8* src_argb, uint8* dst_y, int width) {
-  asm volatile (
-    "movi       v4.8b, #13                     \n"  // B * 0.1016 coefficient
-    "movi       v5.8b, #65                     \n"  // G * 0.5078 coefficient
-    "movi       v6.8b, #33                     \n"  // R * 0.2578 coefficient
-    "movi       v7.8b, #16                     \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    "umull      v3.8h, v0.8b, v4.8b            \n"  // B
-    "umlal      v3.8h, v1.8b, v5.8b            \n"  // G
-    "umlal      v3.8h, v2.8b, v6.8b            \n"  // R
-    "sqrshrun   v0.8b, v3.8h, #7               \n"  // 16 bit to 8 bit Y
-    "uqadd      v0.8b, v0.8b, v7.8b            \n"
-    MEMACCESS(1)
-    "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7"
-  );
+void ARGBToYRow_NEON(const uint8_t* src_argb, uint8_t* dst_y, int width) {
+  asm volatile(
+      "movi       v4.8b, #13                     \n"  // B * 0.1016 coefficient
+      "movi       v5.8b, #65                     \n"  // G * 0.5078 coefficient
+      "movi       v6.8b, #33                     \n"  // R * 0.2578 coefficient
+      "movi       v7.8b, #16                     \n"  // Add 16 constant
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "umull      v3.8h, v0.8b, v4.8b            \n"  // B
+      "umlal      v3.8h, v1.8b, v5.8b            \n"  // G
+      "umlal      v3.8h, v2.8b, v6.8b            \n"  // R
+      "sqrshrun   v0.8b, v3.8h, #7               \n"  // 16 bit to 8 bit Y
+      "uqadd      v0.8b, v0.8b, v7.8b            \n"
+      "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7");
 }
 
-void ARGBExtractAlphaRow_NEON(const uint8* src_argb, uint8* dst_a, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.16b,v1.16b,v2.16b,v3.16b}, [%0], #64 \n"  // load row 16 pixels
-    "subs       %w2, %w2, #16                  \n"  // 16 processed per loop
-    MEMACCESS(1)
-    "st1        {v3.16b}, [%1], #16            \n"  // store 16 A's.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),   // %0
-    "+r"(dst_a),      // %1
-    "+r"(width)       // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3"  // Clobber List
-  );
+void ARGBExtractAlphaRow_NEON(const uint8_t* src_argb,
+                              uint8_t* dst_a,
+                              int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld4        {v0.16b,v1.16b,v2.16b,v3.16b}, [%0], #64 \n"  // load row 16
+                                                                // pixels
+      "subs       %w2, %w2, #16                  \n"  // 16 processed per loop
+      "st1        {v3.16b}, [%1], #16            \n"  // store 16 A's.
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_a),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3"  // Clobber List
+      );
 }
 
-void ARGBToYJRow_NEON(const uint8* src_argb, uint8* dst_y, int width) {
-  asm volatile (
-    "movi       v4.8b, #15                     \n"  // B * 0.11400 coefficient
-    "movi       v5.8b, #75                     \n"  // G * 0.58700 coefficient
-    "movi       v6.8b, #38                     \n"  // R * 0.29900 coefficient
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    "umull      v3.8h, v0.8b, v4.8b            \n"  // B
-    "umlal      v3.8h, v1.8b, v5.8b            \n"  // G
-    "umlal      v3.8h, v2.8b, v6.8b            \n"  // R
-    "sqrshrun   v0.8b, v3.8h, #7               \n"  // 15 bit to 8 bit Y
-    MEMACCESS(1)
-    "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6"
-  );
+void ARGBToYJRow_NEON(const uint8_t* src_argb, uint8_t* dst_y, int width) {
+  asm volatile(
+      "movi       v4.8b, #15                     \n"  // B * 0.11400 coefficient
+      "movi       v5.8b, #75                     \n"  // G * 0.58700 coefficient
+      "movi       v6.8b, #38                     \n"  // R * 0.29900 coefficient
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "umull      v3.8h, v0.8b, v4.8b            \n"  // B
+      "umlal      v3.8h, v1.8b, v5.8b            \n"  // G
+      "umlal      v3.8h, v2.8b, v6.8b            \n"  // R
+      "sqrshrun   v0.8b, v3.8h, #7               \n"  // 15 bit to 8 bit Y
+      "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6");
 }
 
 // 8x1 pixels.
-void ARGBToUV444Row_NEON(const uint8* src_argb, uint8* dst_u, uint8* dst_v,
+void ARGBToUV444Row_NEON(const uint8_t* src_argb,
+                         uint8_t* dst_u,
+                         uint8_t* dst_v,
                          int width) {
-  asm volatile (
-    "movi       v24.8b, #112                   \n"  // UB / VR 0.875 coefficient
-    "movi       v25.8b, #74                    \n"  // UG -0.5781 coefficient
-    "movi       v26.8b, #38                    \n"  // UR -0.2969 coefficient
-    "movi       v27.8b, #18                    \n"  // VB -0.1406 coefficient
-    "movi       v28.8b, #94                    \n"  // VG -0.7344 coefficient
-    "movi       v29.16b,#0x80                  \n"  // 128.5
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB pixels.
-    "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
-    "umull      v4.8h, v0.8b, v24.8b           \n"  // B
-    "umlsl      v4.8h, v1.8b, v25.8b           \n"  // G
-    "umlsl      v4.8h, v2.8b, v26.8b           \n"  // R
-    "add        v4.8h, v4.8h, v29.8h           \n"  // +128 -> unsigned
+  asm volatile(
+      "movi       v24.8b, #112                   \n"  // UB / VR 0.875
+                                                      // coefficient
+      "movi       v25.8b, #74                    \n"  // UG -0.5781 coefficient
+      "movi       v26.8b, #38                    \n"  // UR -0.2969 coefficient
+      "movi       v27.8b, #18                    \n"  // VB -0.1406 coefficient
+      "movi       v28.8b, #94                    \n"  // VG -0.7344 coefficient
+      "movi       v29.16b,#0x80                  \n"  // 128.5
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB
+                                                            // pixels.
+      "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
+      "umull      v4.8h, v0.8b, v24.8b           \n"  // B
+      "umlsl      v4.8h, v1.8b, v25.8b           \n"  // G
+      "umlsl      v4.8h, v2.8b, v26.8b           \n"  // R
+      "add        v4.8h, v4.8h, v29.8h           \n"  // +128 -> unsigned
 
-    "umull      v3.8h, v2.8b, v24.8b           \n"  // R
-    "umlsl      v3.8h, v1.8b, v28.8b           \n"  // G
-    "umlsl      v3.8h, v0.8b, v27.8b           \n"  // B
-    "add        v3.8h, v3.8h, v29.8h           \n"  // +128 -> unsigned
+      "umull      v3.8h, v2.8b, v24.8b           \n"  // R
+      "umlsl      v3.8h, v1.8b, v28.8b           \n"  // G
+      "umlsl      v3.8h, v0.8b, v27.8b           \n"  // B
+      "add        v3.8h, v3.8h, v29.8h           \n"  // +128 -> unsigned
 
-    "uqshrn     v0.8b, v4.8h, #8               \n"  // 16 bit to 8 bit U
-    "uqshrn     v1.8b, v3.8h, #8               \n"  // 16 bit to 8 bit V
+      "uqshrn     v0.8b, v4.8h, #8               \n"  // 16 bit to 8 bit U
+      "uqshrn     v1.8b, v3.8h, #8               \n"  // 16 bit to 8 bit V
 
-    MEMACCESS(1)
-    "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels U.
-    MEMACCESS(2)
-    "st1        {v1.8b}, [%2], #8              \n"  // store 8 pixels V.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_u),     // %1
-    "+r"(dst_v),     // %2
-    "+r"(width)        // %3
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4",
-    "v24", "v25", "v26", "v27", "v28", "v29"
-  );
+      "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels U.
+      "st1        {v1.8b}, [%2], #8              \n"  // store 8 pixels V.
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_u),     // %1
+        "+r"(dst_v),     // %2
+        "+r"(width)      // %3
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v24", "v25", "v26",
+        "v27", "v28", "v29");
 }
 
-#define RGBTOUV_SETUP_REG                                                      \
-    "movi       v20.8h, #56, lsl #0  \n"  /* UB/VR coefficient (0.875) / 2 */  \
-    "movi       v21.8h, #37, lsl #0  \n"  /* UG coefficient (-0.5781) / 2  */  \
-    "movi       v22.8h, #19, lsl #0  \n"  /* UR coefficient (-0.2969) / 2  */  \
-    "movi       v23.8h, #9,  lsl #0  \n"  /* VB coefficient (-0.1406) / 2  */  \
-    "movi       v24.8h, #47, lsl #0  \n"  /* VG coefficient (-0.7344) / 2  */  \
-    "movi       v25.16b, #0x80       \n"  /* 128.5 (0x8080 in 16-bit)      */
-
-// 32x1 pixels -> 8x1.  width is number of argb pixels. e.g. 32.
-void ARGBToUV411Row_NEON(const uint8* src_argb, uint8* dst_u, uint8* dst_v,
-                         int width) {
-  asm volatile (
-    RGBTOUV_SETUP_REG
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.16b,v1.16b,v2.16b,v3.16b}, [%0], #64 \n"  // load 16 pixels.
-    "uaddlp     v0.8h, v0.16b                  \n"  // B 16 bytes -> 8 shorts.
-    "uaddlp     v1.8h, v1.16b                  \n"  // G 16 bytes -> 8 shorts.
-    "uaddlp     v2.8h, v2.16b                  \n"  // R 16 bytes -> 8 shorts.
-    MEMACCESS(0)
-    "ld4        {v4.16b,v5.16b,v6.16b,v7.16b}, [%0], #64 \n"  // load next 16.
-    "uaddlp     v4.8h, v4.16b                  \n"  // B 16 bytes -> 8 shorts.
-    "uaddlp     v5.8h, v5.16b                  \n"  // G 16 bytes -> 8 shorts.
-    "uaddlp     v6.8h, v6.16b                  \n"  // R 16 bytes -> 8 shorts.
-
-    "addp       v0.8h, v0.8h, v4.8h            \n"  // B 16 shorts -> 8 shorts.
-    "addp       v1.8h, v1.8h, v5.8h            \n"  // G 16 shorts -> 8 shorts.
-    "addp       v2.8h, v2.8h, v6.8h            \n"  // R 16 shorts -> 8 shorts.
-
-    "urshr      v0.8h, v0.8h, #1               \n"  // 2x average
-    "urshr      v1.8h, v1.8h, #1               \n"
-    "urshr      v2.8h, v2.8h, #1               \n"
-
-    "subs       %w3, %w3, #32                  \n"  // 32 processed per loop.
-    "mul        v3.8h, v0.8h, v20.8h           \n"  // B
-    "mls        v3.8h, v1.8h, v21.8h           \n"  // G
-    "mls        v3.8h, v2.8h, v22.8h           \n"  // R
-    "add        v3.8h, v3.8h, v25.8h           \n"  // +128 -> unsigned
-    "mul        v4.8h, v2.8h, v20.8h           \n"  // R
-    "mls        v4.8h, v1.8h, v24.8h           \n"  // G
-    "mls        v4.8h, v0.8h, v23.8h           \n"  // B
-    "add        v4.8h, v4.8h, v25.8h           \n"  // +128 -> unsigned
-    "uqshrn     v0.8b, v3.8h, #8               \n"  // 16 bit to 8 bit U
-    "uqshrn     v1.8b, v4.8h, #8               \n"  // 16 bit to 8 bit V
-    MEMACCESS(1)
-    "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels U.
-    MEMACCESS(2)
-    "st1        {v1.8b}, [%2], #8              \n"  // store 8 pixels V.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_u),     // %1
-    "+r"(dst_v),     // %2
-    "+r"(width)        // %3
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7",
-    "v20", "v21", "v22", "v23", "v24", "v25"
-  );
-}
+#define RGBTOUV_SETUP_REG                                                  \
+  "movi       v20.8h, #56, lsl #0  \n" /* UB/VR coefficient (0.875) / 2 */ \
+  "movi       v21.8h, #37, lsl #0  \n" /* UG coefficient (-0.5781) / 2  */ \
+  "movi       v22.8h, #19, lsl #0  \n" /* UR coefficient (-0.2969) / 2  */ \
+  "movi       v23.8h, #9,  lsl #0  \n" /* VB coefficient (-0.1406) / 2  */ \
+  "movi       v24.8h, #47, lsl #0  \n" /* VG coefficient (-0.7344) / 2  */ \
+  "movi       v25.16b, #0x80       \n" /* 128.5 (0x8080 in 16-bit)      */
 
 // 16x2 pixels -> 8x1.  width is number of argb pixels. e.g. 16.
-#define RGBTOUV(QB, QG, QR) \
-    "mul        v3.8h, " #QB ",v20.8h          \n"  /* B                    */ \
-    "mul        v4.8h, " #QR ",v20.8h          \n"  /* R                    */ \
-    "mls        v3.8h, " #QG ",v21.8h          \n"  /* G                    */ \
-    "mls        v4.8h, " #QG ",v24.8h          \n"  /* G                    */ \
-    "mls        v3.8h, " #QR ",v22.8h          \n"  /* R                    */ \
-    "mls        v4.8h, " #QB ",v23.8h          \n"  /* B                    */ \
-    "add        v3.8h, v3.8h, v25.8h           \n"  /* +128 -> unsigned     */ \
-    "add        v4.8h, v4.8h, v25.8h           \n"  /* +128 -> unsigned     */ \
-    "uqshrn     v0.8b, v3.8h, #8               \n"  /* 16 bit to 8 bit U    */ \
-    "uqshrn     v1.8b, v4.8h, #8               \n"  /* 16 bit to 8 bit V    */
+// clang-format off
+#define RGBTOUV(QB, QG, QR)                                                 \
+  "mul        v3.8h, " #QB ",v20.8h          \n" /* B                    */ \
+  "mul        v4.8h, " #QR ",v20.8h          \n" /* R                    */ \
+  "mls        v3.8h, " #QG ",v21.8h          \n" /* G                    */ \
+  "mls        v4.8h, " #QG ",v24.8h          \n" /* G                    */ \
+  "mls        v3.8h, " #QR ",v22.8h          \n" /* R                    */ \
+  "mls        v4.8h, " #QB ",v23.8h          \n" /* B                    */ \
+  "add        v3.8h, v3.8h, v25.8h           \n" /* +128 -> unsigned     */ \
+  "add        v4.8h, v4.8h, v25.8h           \n" /* +128 -> unsigned     */ \
+  "uqshrn     v0.8b, v3.8h, #8               \n" /* 16 bit to 8 bit U    */ \
+  "uqshrn     v1.8b, v4.8h, #8               \n" /* 16 bit to 8 bit V    */
+// clang-format on
 
 // TODO(fbarchard): Consider vhadd vertical, then vpaddl horizontal, avoid shr.
 // TODO(fbarchard): consider ptrdiff_t for all strides.
 
-void ARGBToUVRow_NEON(const uint8* src_argb, int src_stride_argb,
-                      uint8* dst_u, uint8* dst_v, int width) {
-  const uint8* src_argb_1 = src_argb + src_stride_argb;
+void ARGBToUVRow_NEON(const uint8_t* src_argb,
+                      int src_stride_argb,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  const uint8_t* src_argb_1 = src_argb + src_stride_argb;
   asm volatile (
     RGBTOUV_SETUP_REG
   "1:                                          \n"
-    MEMACCESS(0)
     "ld4        {v0.16b,v1.16b,v2.16b,v3.16b}, [%0], #64 \n"  // load 16 pixels.
     "uaddlp     v0.8h, v0.16b                  \n"  // B 16 bytes -> 8 shorts.
     "uaddlp     v1.8h, v1.16b                  \n"  // G 16 bytes -> 8 shorts.
     "uaddlp     v2.8h, v2.16b                  \n"  // R 16 bytes -> 8 shorts.
 
-    MEMACCESS(1)
     "ld4        {v4.16b,v5.16b,v6.16b,v7.16b}, [%1], #64 \n"  // load next 16
     "uadalp     v0.8h, v4.16b                  \n"  // B 16 bytes -> 8 shorts.
     "uadalp     v1.8h, v5.16b                  \n"  // G 16 bytes -> 8 shorts.
@@ -1486,9 +1415,7 @@
 
     "subs       %w4, %w4, #16                  \n"  // 32 processed per loop.
     RGBTOUV(v0.8h, v1.8h, v2.8h)
-    MEMACCESS(2)
     "st1        {v0.8b}, [%2], #8              \n"  // store 8 pixels U.
-    MEMACCESS(3)
     "st1        {v1.8b}, [%3], #8              \n"  // store 8 pixels V.
     "b.gt       1b                             \n"
   : "+r"(src_argb),  // %0
@@ -1503,9 +1430,12 @@
 }
 
 // TODO(fbarchard): Subsample match C code.
-void ARGBToUVJRow_NEON(const uint8* src_argb, int src_stride_argb,
-                       uint8* dst_u, uint8* dst_v, int width) {
-  const uint8* src_argb_1 = src_argb + src_stride_argb;
+void ARGBToUVJRow_NEON(const uint8_t* src_argb,
+                       int src_stride_argb,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width) {
+  const uint8_t* src_argb_1 = src_argb + src_stride_argb;
   asm volatile (
     "movi       v20.8h, #63, lsl #0            \n"  // UB/VR coeff (0.500) / 2
     "movi       v21.8h, #42, lsl #0            \n"  // UG coeff (-0.33126) / 2
@@ -1514,12 +1444,10 @@
     "movi       v24.8h, #53, lsl #0            \n"  // VG coeff (-0.41869) / 2
     "movi       v25.16b, #0x80                 \n"  // 128.5 (0x8080 in 16-bit)
   "1:                                          \n"
-    MEMACCESS(0)
     "ld4        {v0.16b,v1.16b,v2.16b,v3.16b}, [%0], #64 \n"  // load 16 pixels.
     "uaddlp     v0.8h, v0.16b                  \n"  // B 16 bytes -> 8 shorts.
     "uaddlp     v1.8h, v1.16b                  \n"  // G 16 bytes -> 8 shorts.
     "uaddlp     v2.8h, v2.16b                  \n"  // R 16 bytes -> 8 shorts.
-    MEMACCESS(1)
     "ld4        {v4.16b,v5.16b,v6.16b,v7.16b}, [%1], #64  \n"  // load next 16
     "uadalp     v0.8h, v4.16b                  \n"  // B 16 bytes -> 8 shorts.
     "uadalp     v1.8h, v5.16b                  \n"  // G 16 bytes -> 8 shorts.
@@ -1531,9 +1459,7 @@
 
     "subs       %w4, %w4, #16                  \n"  // 32 processed per loop.
     RGBTOUV(v0.8h, v1.8h, v2.8h)
-    MEMACCESS(2)
     "st1        {v0.8b}, [%2], #8              \n"  // store 8 pixels U.
-    MEMACCESS(3)
     "st1        {v1.8b}, [%3], #8              \n"  // store 8 pixels V.
     "b.gt       1b                             \n"
   : "+r"(src_argb),  // %0
@@ -1547,18 +1473,19 @@
   );
 }
 
-void BGRAToUVRow_NEON(const uint8* src_bgra, int src_stride_bgra,
-                      uint8* dst_u, uint8* dst_v, int width) {
-  const uint8* src_bgra_1 = src_bgra + src_stride_bgra;
+void BGRAToUVRow_NEON(const uint8_t* src_bgra,
+                      int src_stride_bgra,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  const uint8_t* src_bgra_1 = src_bgra + src_stride_bgra;
   asm volatile (
     RGBTOUV_SETUP_REG
   "1:                                          \n"
-    MEMACCESS(0)
     "ld4        {v0.16b,v1.16b,v2.16b,v3.16b}, [%0], #64 \n"  // load 16 pixels.
     "uaddlp     v0.8h, v3.16b                  \n"  // B 16 bytes -> 8 shorts.
     "uaddlp     v3.8h, v2.16b                  \n"  // G 16 bytes -> 8 shorts.
     "uaddlp     v2.8h, v1.16b                  \n"  // R 16 bytes -> 8 shorts.
-    MEMACCESS(1)
     "ld4        {v4.16b,v5.16b,v6.16b,v7.16b}, [%1], #64 \n"  // load 16 more
     "uadalp     v0.8h, v7.16b                  \n"  // B 16 bytes -> 8 shorts.
     "uadalp     v3.8h, v6.16b                  \n"  // G 16 bytes -> 8 shorts.
@@ -1570,9 +1497,7 @@
 
     "subs       %w4, %w4, #16                  \n"  // 32 processed per loop.
     RGBTOUV(v0.8h, v1.8h, v2.8h)
-    MEMACCESS(2)
     "st1        {v0.8b}, [%2], #8              \n"  // store 8 pixels U.
-    MEMACCESS(3)
     "st1        {v1.8b}, [%3], #8              \n"  // store 8 pixels V.
     "b.gt       1b                             \n"
   : "+r"(src_bgra),  // %0
@@ -1586,18 +1511,19 @@
   );
 }
 
-void ABGRToUVRow_NEON(const uint8* src_abgr, int src_stride_abgr,
-                      uint8* dst_u, uint8* dst_v, int width) {
-  const uint8* src_abgr_1 = src_abgr + src_stride_abgr;
+void ABGRToUVRow_NEON(const uint8_t* src_abgr,
+                      int src_stride_abgr,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  const uint8_t* src_abgr_1 = src_abgr + src_stride_abgr;
   asm volatile (
     RGBTOUV_SETUP_REG
   "1:                                          \n"
-    MEMACCESS(0)
     "ld4        {v0.16b,v1.16b,v2.16b,v3.16b}, [%0], #64 \n"  // load 16 pixels.
     "uaddlp     v3.8h, v2.16b                  \n"  // B 16 bytes -> 8 shorts.
     "uaddlp     v2.8h, v1.16b                  \n"  // G 16 bytes -> 8 shorts.
     "uaddlp     v1.8h, v0.16b                  \n"  // R 16 bytes -> 8 shorts.
-    MEMACCESS(1)
     "ld4        {v4.16b,v5.16b,v6.16b,v7.16b}, [%1], #64 \n"  // load 16 more.
     "uadalp     v3.8h, v6.16b                  \n"  // B 16 bytes -> 8 shorts.
     "uadalp     v2.8h, v5.16b                  \n"  // G 16 bytes -> 8 shorts.
@@ -1609,9 +1535,7 @@
 
     "subs       %w4, %w4, #16                  \n"  // 32 processed per loop.
     RGBTOUV(v0.8h, v2.8h, v1.8h)
-    MEMACCESS(2)
     "st1        {v0.8b}, [%2], #8              \n"  // store 8 pixels U.
-    MEMACCESS(3)
     "st1        {v1.8b}, [%3], #8              \n"  // store 8 pixels V.
     "b.gt       1b                             \n"
   : "+r"(src_abgr),  // %0
@@ -1625,18 +1549,19 @@
   );
 }
 
-void RGBAToUVRow_NEON(const uint8* src_rgba, int src_stride_rgba,
-                      uint8* dst_u, uint8* dst_v, int width) {
-  const uint8* src_rgba_1 = src_rgba + src_stride_rgba;
+void RGBAToUVRow_NEON(const uint8_t* src_rgba,
+                      int src_stride_rgba,
+                      uint8_t* dst_u,
+                      uint8_t* dst_v,
+                      int width) {
+  const uint8_t* src_rgba_1 = src_rgba + src_stride_rgba;
   asm volatile (
     RGBTOUV_SETUP_REG
   "1:                                          \n"
-    MEMACCESS(0)
     "ld4        {v0.16b,v1.16b,v2.16b,v3.16b}, [%0], #64 \n"  // load 16 pixels.
     "uaddlp     v0.8h, v1.16b                  \n"  // B 16 bytes -> 8 shorts.
     "uaddlp     v1.8h, v2.16b                  \n"  // G 16 bytes -> 8 shorts.
     "uaddlp     v2.8h, v3.16b                  \n"  // R 16 bytes -> 8 shorts.
-    MEMACCESS(1)
     "ld4        {v4.16b,v5.16b,v6.16b,v7.16b}, [%1], #64 \n"  // load 16 more.
     "uadalp     v0.8h, v5.16b                  \n"  // B 16 bytes -> 8 shorts.
     "uadalp     v1.8h, v6.16b                  \n"  // G 16 bytes -> 8 shorts.
@@ -1648,9 +1573,7 @@
 
     "subs       %w4, %w4, #16                  \n"  // 32 processed per loop.
     RGBTOUV(v0.8h, v1.8h, v2.8h)
-    MEMACCESS(2)
     "st1        {v0.8b}, [%2], #8              \n"  // store 8 pixels U.
-    MEMACCESS(3)
     "st1        {v1.8b}, [%3], #8              \n"  // store 8 pixels V.
     "b.gt       1b                             \n"
   : "+r"(src_rgba),  // %0
@@ -1664,18 +1587,19 @@
   );
 }
 
-void RGB24ToUVRow_NEON(const uint8* src_rgb24, int src_stride_rgb24,
-                       uint8* dst_u, uint8* dst_v, int width) {
-  const uint8* src_rgb24_1 = src_rgb24 + src_stride_rgb24;
+void RGB24ToUVRow_NEON(const uint8_t* src_rgb24,
+                       int src_stride_rgb24,
+                       uint8_t* dst_u,
+                       uint8_t* dst_v,
+                       int width) {
+  const uint8_t* src_rgb24_1 = src_rgb24 + src_stride_rgb24;
   asm volatile (
     RGBTOUV_SETUP_REG
   "1:                                          \n"
-    MEMACCESS(0)
     "ld3        {v0.16b,v1.16b,v2.16b}, [%0], #48 \n"  // load 16 pixels.
     "uaddlp     v0.8h, v0.16b                  \n"  // B 16 bytes -> 8 shorts.
     "uaddlp     v1.8h, v1.16b                  \n"  // G 16 bytes -> 8 shorts.
     "uaddlp     v2.8h, v2.16b                  \n"  // R 16 bytes -> 8 shorts.
-    MEMACCESS(1)
     "ld3        {v4.16b,v5.16b,v6.16b}, [%1], #48 \n"  // load 16 more.
     "uadalp     v0.8h, v4.16b                  \n"  // B 16 bytes -> 8 shorts.
     "uadalp     v1.8h, v5.16b                  \n"  // G 16 bytes -> 8 shorts.
@@ -1687,9 +1611,7 @@
 
     "subs       %w4, %w4, #16                  \n"  // 32 processed per loop.
     RGBTOUV(v0.8h, v1.8h, v2.8h)
-    MEMACCESS(2)
     "st1        {v0.8b}, [%2], #8              \n"  // store 8 pixels U.
-    MEMACCESS(3)
     "st1        {v1.8b}, [%3], #8              \n"  // store 8 pixels V.
     "b.gt       1b                             \n"
   : "+r"(src_rgb24),  // %0
@@ -1703,18 +1625,19 @@
   );
 }
 
-void RAWToUVRow_NEON(const uint8* src_raw, int src_stride_raw,
-                     uint8* dst_u, uint8* dst_v, int width) {
-  const uint8* src_raw_1 = src_raw + src_stride_raw;
+void RAWToUVRow_NEON(const uint8_t* src_raw,
+                     int src_stride_raw,
+                     uint8_t* dst_u,
+                     uint8_t* dst_v,
+                     int width) {
+  const uint8_t* src_raw_1 = src_raw + src_stride_raw;
   asm volatile (
     RGBTOUV_SETUP_REG
   "1:                                          \n"
-    MEMACCESS(0)
     "ld3        {v0.16b,v1.16b,v2.16b}, [%0], #48 \n"  // load 8 RAW pixels.
     "uaddlp     v2.8h, v2.16b                  \n"  // B 16 bytes -> 8 shorts.
     "uaddlp     v1.8h, v1.16b                  \n"  // G 16 bytes -> 8 shorts.
     "uaddlp     v0.8h, v0.16b                  \n"  // R 16 bytes -> 8 shorts.
-    MEMACCESS(1)
     "ld3        {v4.16b,v5.16b,v6.16b}, [%1], #48 \n"  // load 8 more RAW pixels
     "uadalp     v2.8h, v6.16b                  \n"  // B 16 bytes -> 8 shorts.
     "uadalp     v1.8h, v5.16b                  \n"  // G 16 bytes -> 8 shorts.
@@ -1726,9 +1649,7 @@
 
     "subs       %w4, %w4, #16                  \n"  // 32 processed per loop.
     RGBTOUV(v2.8h, v1.8h, v0.8h)
-    MEMACCESS(2)
     "st1        {v0.8b}, [%2], #8              \n"  // store 8 pixels U.
-    MEMACCESS(3)
     "st1        {v1.8b}, [%3], #8              \n"  // store 8 pixels V.
     "b.gt       1b                             \n"
   : "+r"(src_raw),  // %0
@@ -1743,699 +1664,656 @@
 }
 
 // 16x2 pixels -> 8x1.  width is number of argb pixels. e.g. 16.
-void RGB565ToUVRow_NEON(const uint8* src_rgb565, int src_stride_rgb565,
-                        uint8* dst_u, uint8* dst_v, int width) {
-  const uint8* src_rgb565_1 = src_rgb565 + src_stride_rgb565;
-  asm volatile (
-    "movi       v22.8h, #56, lsl #0            \n"  // UB / VR coeff (0.875) / 2
-    "movi       v23.8h, #37, lsl #0            \n"  // UG coeff (-0.5781) / 2
-    "movi       v24.8h, #19, lsl #0            \n"  // UR coeff (-0.2969) / 2
-    "movi       v25.8h, #9 , lsl #0            \n"  // VB coeff (-0.1406) / 2
-    "movi       v26.8h, #47, lsl #0            \n"  // VG coeff (-0.7344) / 2
-    "movi       v27.16b, #0x80                 \n"  // 128.5 (0x8080 in 16-bit)
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"  // load 8 RGB565 pixels.
-    RGB565TOARGB
-    "uaddlp     v16.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
-    "uaddlp     v18.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
-    "uaddlp     v20.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"  // next 8 RGB565 pixels.
-    RGB565TOARGB
-    "uaddlp     v17.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
-    "uaddlp     v19.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
-    "uaddlp     v21.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
+void RGB565ToUVRow_NEON(const uint8_t* src_rgb565,
+                        int src_stride_rgb565,
+                        uint8_t* dst_u,
+                        uint8_t* dst_v,
+                        int width) {
+  const uint8_t* src_rgb565_1 = src_rgb565 + src_stride_rgb565;
+  asm volatile(
+      "movi       v22.8h, #56, lsl #0            \n"  // UB / VR coeff (0.875) /
+                                                      // 2
+      "movi       v23.8h, #37, lsl #0            \n"  // UG coeff (-0.5781) / 2
+      "movi       v24.8h, #19, lsl #0            \n"  // UR coeff (-0.2969) / 2
+      "movi       v25.8h, #9 , lsl #0            \n"  // VB coeff (-0.1406) / 2
+      "movi       v26.8h, #47, lsl #0            \n"  // VG coeff (-0.7344) / 2
+      "movi       v27.16b, #0x80                 \n"  // 128.5 0x8080 in 16bit
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], #16            \n"  // load 8 RGB565 pixels.
+      RGB565TOARGB
+      "uaddlp     v16.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
+      "uaddlp     v18.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
+      "uaddlp     v20.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
+      "ld1        {v0.16b}, [%0], #16            \n"  // next 8 RGB565 pixels.
+      RGB565TOARGB
+      "uaddlp     v17.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
+      "uaddlp     v19.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
+      "uaddlp     v21.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
 
-    MEMACCESS(1)
-    "ld1        {v0.16b}, [%1], #16            \n"  // load 8 RGB565 pixels.
-    RGB565TOARGB
-    "uadalp     v16.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
-    "uadalp     v18.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
-    "uadalp     v20.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
-    MEMACCESS(1)
-    "ld1        {v0.16b}, [%1], #16            \n"  // next 8 RGB565 pixels.
-    RGB565TOARGB
-    "uadalp     v17.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
-    "uadalp     v19.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
-    "uadalp     v21.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
+      "ld1        {v0.16b}, [%1], #16            \n"  // load 8 RGB565 pixels.
+      RGB565TOARGB
+      "uadalp     v16.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
+      "uadalp     v18.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
+      "uadalp     v20.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
+      "ld1        {v0.16b}, [%1], #16            \n"  // next 8 RGB565 pixels.
+      RGB565TOARGB
+      "uadalp     v17.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
+      "uadalp     v19.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
+      "uadalp     v21.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
 
-    "ins        v16.D[1], v17.D[0]             \n"
-    "ins        v18.D[1], v19.D[0]             \n"
-    "ins        v20.D[1], v21.D[0]             \n"
+      "ins        v16.D[1], v17.D[0]             \n"
+      "ins        v18.D[1], v19.D[0]             \n"
+      "ins        v20.D[1], v21.D[0]             \n"
 
-    "urshr      v4.8h, v16.8h, #1              \n"  // 2x average
-    "urshr      v5.8h, v18.8h, #1              \n"
-    "urshr      v6.8h, v20.8h, #1              \n"
+      "urshr      v4.8h, v16.8h, #1              \n"  // 2x average
+      "urshr      v5.8h, v18.8h, #1              \n"
+      "urshr      v6.8h, v20.8h, #1              \n"
 
-    "subs       %w4, %w4, #16                  \n"  // 16 processed per loop.
-    "mul        v16.8h, v4.8h, v22.8h          \n"  // B
-    "mls        v16.8h, v5.8h, v23.8h          \n"  // G
-    "mls        v16.8h, v6.8h, v24.8h          \n"  // R
-    "add        v16.8h, v16.8h, v27.8h         \n"  // +128 -> unsigned
-    "mul        v17.8h, v6.8h, v22.8h          \n"  // R
-    "mls        v17.8h, v5.8h, v26.8h          \n"  // G
-    "mls        v17.8h, v4.8h, v25.8h          \n"  // B
-    "add        v17.8h, v17.8h, v27.8h         \n"  // +128 -> unsigned
-    "uqshrn     v0.8b, v16.8h, #8              \n"  // 16 bit to 8 bit U
-    "uqshrn     v1.8b, v17.8h, #8              \n"  // 16 bit to 8 bit V
-    MEMACCESS(2)
-    "st1        {v0.8b}, [%2], #8              \n"  // store 8 pixels U.
-    MEMACCESS(3)
-    "st1        {v1.8b}, [%3], #8              \n"  // store 8 pixels V.
-    "b.gt       1b                             \n"
-  : "+r"(src_rgb565),  // %0
-    "+r"(src_rgb565_1),  // %1
-    "+r"(dst_u),     // %2
-    "+r"(dst_v),     // %3
-    "+r"(width)        // %4
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7",
-    "v16", "v17", "v18", "v19", "v20", "v21", "v22", "v23", "v24",
-    "v25", "v26", "v27"
-  );
+      "subs       %w4, %w4, #16                  \n"  // 16 processed per loop.
+      "mul        v16.8h, v4.8h, v22.8h          \n"  // B
+      "mls        v16.8h, v5.8h, v23.8h          \n"  // G
+      "mls        v16.8h, v6.8h, v24.8h          \n"  // R
+      "add        v16.8h, v16.8h, v27.8h         \n"  // +128 -> unsigned
+      "mul        v17.8h, v6.8h, v22.8h          \n"  // R
+      "mls        v17.8h, v5.8h, v26.8h          \n"  // G
+      "mls        v17.8h, v4.8h, v25.8h          \n"  // B
+      "add        v17.8h, v17.8h, v27.8h         \n"  // +128 -> unsigned
+      "uqshrn     v0.8b, v16.8h, #8              \n"  // 16 bit to 8 bit U
+      "uqshrn     v1.8b, v17.8h, #8              \n"  // 16 bit to 8 bit V
+      "st1        {v0.8b}, [%2], #8              \n"  // store 8 pixels U.
+      "st1        {v1.8b}, [%3], #8              \n"  // store 8 pixels V.
+      "b.gt       1b                             \n"
+      : "+r"(src_rgb565),    // %0
+        "+r"(src_rgb565_1),  // %1
+        "+r"(dst_u),         // %2
+        "+r"(dst_v),         // %3
+        "+r"(width)          // %4
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16",
+        "v17", "v18", "v19", "v20", "v21", "v22", "v23", "v24", "v25", "v26",
+        "v27");
 }
 
 // 16x2 pixels -> 8x1.  width is number of argb pixels. e.g. 16.
-void ARGB1555ToUVRow_NEON(const uint8* src_argb1555, int src_stride_argb1555,
-                        uint8* dst_u, uint8* dst_v, int width) {
-  const uint8* src_argb1555_1 = src_argb1555 + src_stride_argb1555;
-  asm volatile (
-    RGBTOUV_SETUP_REG
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"  // load 8 ARGB1555 pixels.
-    RGB555TOARGB
-    "uaddlp     v16.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
-    "uaddlp     v17.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
-    "uaddlp     v18.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"  // next 8 ARGB1555 pixels.
-    RGB555TOARGB
-    "uaddlp     v26.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
-    "uaddlp     v27.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
-    "uaddlp     v28.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
+void ARGB1555ToUVRow_NEON(const uint8_t* src_argb1555,
+                          int src_stride_argb1555,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width) {
+  const uint8_t* src_argb1555_1 = src_argb1555 + src_stride_argb1555;
+  asm volatile(
+      RGBTOUV_SETUP_REG
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], #16            \n"  // load 8 ARGB1555 pixels.
+      RGB555TOARGB
+      "uaddlp     v16.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
+      "uaddlp     v17.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
+      "uaddlp     v18.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
+      "ld1        {v0.16b}, [%0], #16            \n"  // next 8 ARGB1555 pixels.
+      RGB555TOARGB
+      "uaddlp     v26.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
+      "uaddlp     v27.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
+      "uaddlp     v28.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
 
-    MEMACCESS(1)
-    "ld1        {v0.16b}, [%1], #16            \n"  // load 8 ARGB1555 pixels.
-    RGB555TOARGB
-    "uadalp     v16.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
-    "uadalp     v17.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
-    "uadalp     v18.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
-    MEMACCESS(1)
-    "ld1        {v0.16b}, [%1], #16            \n"  // next 8 ARGB1555 pixels.
-    RGB555TOARGB
-    "uadalp     v26.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
-    "uadalp     v27.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
-    "uadalp     v28.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
+      "ld1        {v0.16b}, [%1], #16            \n"  // load 8 ARGB1555 pixels.
+      RGB555TOARGB
+      "uadalp     v16.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
+      "uadalp     v17.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
+      "uadalp     v18.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
+      "ld1        {v0.16b}, [%1], #16            \n"  // next 8 ARGB1555 pixels.
+      RGB555TOARGB
+      "uadalp     v26.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
+      "uadalp     v27.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
+      "uadalp     v28.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
 
-    "ins        v16.D[1], v26.D[0]             \n"
-    "ins        v17.D[1], v27.D[0]             \n"
-    "ins        v18.D[1], v28.D[0]             \n"
+      "ins        v16.D[1], v26.D[0]             \n"
+      "ins        v17.D[1], v27.D[0]             \n"
+      "ins        v18.D[1], v28.D[0]             \n"
 
-    "urshr      v4.8h, v16.8h, #1              \n"  // 2x average
-    "urshr      v5.8h, v17.8h, #1              \n"
-    "urshr      v6.8h, v18.8h, #1              \n"
+      "urshr      v4.8h, v16.8h, #1              \n"  // 2x average
+      "urshr      v5.8h, v17.8h, #1              \n"
+      "urshr      v6.8h, v18.8h, #1              \n"
 
-    "subs       %w4, %w4, #16                  \n"  // 16 processed per loop.
-    "mul        v2.8h, v4.8h, v20.8h           \n"  // B
-    "mls        v2.8h, v5.8h, v21.8h           \n"  // G
-    "mls        v2.8h, v6.8h, v22.8h           \n"  // R
-    "add        v2.8h, v2.8h, v25.8h           \n"  // +128 -> unsigned
-    "mul        v3.8h, v6.8h, v20.8h           \n"  // R
-    "mls        v3.8h, v5.8h, v24.8h           \n"  // G
-    "mls        v3.8h, v4.8h, v23.8h           \n"  // B
-    "add        v3.8h, v3.8h, v25.8h           \n"  // +128 -> unsigned
-    "uqshrn     v0.8b, v2.8h, #8               \n"  // 16 bit to 8 bit U
-    "uqshrn     v1.8b, v3.8h, #8               \n"  // 16 bit to 8 bit V
-    MEMACCESS(2)
-    "st1        {v0.8b}, [%2], #8              \n"  // store 8 pixels U.
-    MEMACCESS(3)
-    "st1        {v1.8b}, [%3], #8              \n"  // store 8 pixels V.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb1555),  // %0
-    "+r"(src_argb1555_1),  // %1
-    "+r"(dst_u),     // %2
-    "+r"(dst_v),     // %3
-    "+r"(width)        // %4
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6",
-    "v16", "v17", "v18", "v19", "v20", "v21", "v22", "v23", "v24", "v25",
-    "v26", "v27", "v28"
-  );
+      "subs       %w4, %w4, #16                  \n"  // 16 processed per loop.
+      "mul        v2.8h, v4.8h, v20.8h           \n"  // B
+      "mls        v2.8h, v5.8h, v21.8h           \n"  // G
+      "mls        v2.8h, v6.8h, v22.8h           \n"  // R
+      "add        v2.8h, v2.8h, v25.8h           \n"  // +128 -> unsigned
+      "mul        v3.8h, v6.8h, v20.8h           \n"  // R
+      "mls        v3.8h, v5.8h, v24.8h           \n"  // G
+      "mls        v3.8h, v4.8h, v23.8h           \n"  // B
+      "add        v3.8h, v3.8h, v25.8h           \n"  // +128 -> unsigned
+      "uqshrn     v0.8b, v2.8h, #8               \n"  // 16 bit to 8 bit U
+      "uqshrn     v1.8b, v3.8h, #8               \n"  // 16 bit to 8 bit V
+      "st1        {v0.8b}, [%2], #8              \n"  // store 8 pixels U.
+      "st1        {v1.8b}, [%3], #8              \n"  // store 8 pixels V.
+      "b.gt       1b                             \n"
+      : "+r"(src_argb1555),    // %0
+        "+r"(src_argb1555_1),  // %1
+        "+r"(dst_u),           // %2
+        "+r"(dst_v),           // %3
+        "+r"(width)            // %4
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v16", "v17",
+        "v18", "v19", "v20", "v21", "v22", "v23", "v24", "v25", "v26", "v27",
+        "v28");
 }
 
 // 16x2 pixels -> 8x1.  width is number of argb pixels. e.g. 16.
-void ARGB4444ToUVRow_NEON(const uint8* src_argb4444, int src_stride_argb4444,
-                          uint8* dst_u, uint8* dst_v, int width) {
-  const uint8* src_argb4444_1 = src_argb4444 + src_stride_argb4444;
-  asm volatile (
-    RGBTOUV_SETUP_REG
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"  // load 8 ARGB4444 pixels.
-    ARGB4444TOARGB
-    "uaddlp     v16.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
-    "uaddlp     v17.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
-    "uaddlp     v18.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"  // next 8 ARGB4444 pixels.
-    ARGB4444TOARGB
-    "uaddlp     v26.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
-    "uaddlp     v27.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
-    "uaddlp     v28.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
+void ARGB4444ToUVRow_NEON(const uint8_t* src_argb4444,
+                          int src_stride_argb4444,
+                          uint8_t* dst_u,
+                          uint8_t* dst_v,
+                          int width) {
+  const uint8_t* src_argb4444_1 = src_argb4444 + src_stride_argb4444;
+  asm volatile(
+      RGBTOUV_SETUP_REG
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], #16            \n"  // load 8 ARGB4444 pixels.
+      ARGB4444TOARGB
+      "uaddlp     v16.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
+      "uaddlp     v17.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
+      "uaddlp     v18.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
+      "ld1        {v0.16b}, [%0], #16            \n"  // next 8 ARGB4444 pixels.
+      ARGB4444TOARGB
+      "uaddlp     v26.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
+      "uaddlp     v27.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
+      "uaddlp     v28.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
 
-    MEMACCESS(1)
-    "ld1        {v0.16b}, [%1], #16            \n"  // load 8 ARGB4444 pixels.
-    ARGB4444TOARGB
-    "uadalp     v16.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
-    "uadalp     v17.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
-    "uadalp     v18.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
-    MEMACCESS(1)
-    "ld1        {v0.16b}, [%1], #16            \n"  // next 8 ARGB4444 pixels.
-    ARGB4444TOARGB
-    "uadalp     v26.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
-    "uadalp     v27.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
-    "uadalp     v28.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
+      "ld1        {v0.16b}, [%1], #16            \n"  // load 8 ARGB4444 pixels.
+      ARGB4444TOARGB
+      "uadalp     v16.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
+      "uadalp     v17.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
+      "uadalp     v18.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
+      "ld1        {v0.16b}, [%1], #16            \n"  // next 8 ARGB4444 pixels.
+      ARGB4444TOARGB
+      "uadalp     v26.4h, v0.8b                  \n"  // B 8 bytes -> 4 shorts.
+      "uadalp     v27.4h, v1.8b                  \n"  // G 8 bytes -> 4 shorts.
+      "uadalp     v28.4h, v2.8b                  \n"  // R 8 bytes -> 4 shorts.
 
-    "ins        v16.D[1], v26.D[0]             \n"
-    "ins        v17.D[1], v27.D[0]             \n"
-    "ins        v18.D[1], v28.D[0]             \n"
+      "ins        v16.D[1], v26.D[0]             \n"
+      "ins        v17.D[1], v27.D[0]             \n"
+      "ins        v18.D[1], v28.D[0]             \n"
 
-    "urshr      v4.8h, v16.8h, #1              \n"  // 2x average
-    "urshr      v5.8h, v17.8h, #1              \n"
-    "urshr      v6.8h, v18.8h, #1              \n"
+      "urshr      v4.8h, v16.8h, #1              \n"  // 2x average
+      "urshr      v5.8h, v17.8h, #1              \n"
+      "urshr      v6.8h, v18.8h, #1              \n"
 
-    "subs       %w4, %w4, #16                  \n"  // 16 processed per loop.
-    "mul        v2.8h, v4.8h, v20.8h           \n"  // B
-    "mls        v2.8h, v5.8h, v21.8h           \n"  // G
-    "mls        v2.8h, v6.8h, v22.8h           \n"  // R
-    "add        v2.8h, v2.8h, v25.8h           \n"  // +128 -> unsigned
-    "mul        v3.8h, v6.8h, v20.8h           \n"  // R
-    "mls        v3.8h, v5.8h, v24.8h           \n"  // G
-    "mls        v3.8h, v4.8h, v23.8h           \n"  // B
-    "add        v3.8h, v3.8h, v25.8h           \n"  // +128 -> unsigned
-    "uqshrn     v0.8b, v2.8h, #8               \n"  // 16 bit to 8 bit U
-    "uqshrn     v1.8b, v3.8h, #8               \n"  // 16 bit to 8 bit V
-    MEMACCESS(2)
-    "st1        {v0.8b}, [%2], #8              \n"  // store 8 pixels U.
-    MEMACCESS(3)
-    "st1        {v1.8b}, [%3], #8              \n"  // store 8 pixels V.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb4444),  // %0
-    "+r"(src_argb4444_1),  // %1
-    "+r"(dst_u),     // %2
-    "+r"(dst_v),     // %3
-    "+r"(width)        // %4
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6",
-    "v16", "v17", "v18", "v19", "v20", "v21", "v22", "v23", "v24", "v25",
-    "v26", "v27", "v28"
+      "subs       %w4, %w4, #16                  \n"  // 16 processed per loop.
+      "mul        v2.8h, v4.8h, v20.8h           \n"  // B
+      "mls        v2.8h, v5.8h, v21.8h           \n"  // G
+      "mls        v2.8h, v6.8h, v22.8h           \n"  // R
+      "add        v2.8h, v2.8h, v25.8h           \n"  // +128 -> unsigned
+      "mul        v3.8h, v6.8h, v20.8h           \n"  // R
+      "mls        v3.8h, v5.8h, v24.8h           \n"  // G
+      "mls        v3.8h, v4.8h, v23.8h           \n"  // B
+      "add        v3.8h, v3.8h, v25.8h           \n"  // +128 -> unsigned
+      "uqshrn     v0.8b, v2.8h, #8               \n"  // 16 bit to 8 bit U
+      "uqshrn     v1.8b, v3.8h, #8               \n"  // 16 bit to 8 bit V
+      "st1        {v0.8b}, [%2], #8              \n"  // store 8 pixels U.
+      "st1        {v1.8b}, [%3], #8              \n"  // store 8 pixels V.
+      "b.gt       1b                             \n"
+      : "+r"(src_argb4444),    // %0
+        "+r"(src_argb4444_1),  // %1
+        "+r"(dst_u),           // %2
+        "+r"(dst_v),           // %3
+        "+r"(width)            // %4
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v16", "v17",
+        "v18", "v19", "v20", "v21", "v22", "v23", "v24", "v25", "v26", "v27",
+        "v28"
 
-  );
+      );
 }
 
-void RGB565ToYRow_NEON(const uint8* src_rgb565, uint8* dst_y, int width) {
-  asm volatile (
-    "movi       v24.8b, #13                    \n"  // B * 0.1016 coefficient
-    "movi       v25.8b, #65                    \n"  // G * 0.5078 coefficient
-    "movi       v26.8b, #33                    \n"  // R * 0.2578 coefficient
-    "movi       v27.8b, #16                    \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"  // load 8 RGB565 pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    RGB565TOARGB
-    "umull      v3.8h, v0.8b, v24.8b           \n"  // B
-    "umlal      v3.8h, v1.8b, v25.8b           \n"  // G
-    "umlal      v3.8h, v2.8b, v26.8b           \n"  // R
-    "sqrshrun   v0.8b, v3.8h, #7               \n"  // 16 bit to 8 bit Y
-    "uqadd      v0.8b, v0.8b, v27.8b           \n"
-    MEMACCESS(1)
-    "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
-    "b.gt       1b                             \n"
-  : "+r"(src_rgb565),  // %0
-    "+r"(dst_y),       // %1
-    "+r"(width)          // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v6",
-    "v24", "v25", "v26", "v27"
-  );
+void RGB565ToYRow_NEON(const uint8_t* src_rgb565, uint8_t* dst_y, int width) {
+  asm volatile(
+      "movi       v24.8b, #13                    \n"  // B * 0.1016 coefficient
+      "movi       v25.8b, #65                    \n"  // G * 0.5078 coefficient
+      "movi       v26.8b, #33                    \n"  // R * 0.2578 coefficient
+      "movi       v27.8b, #16                    \n"  // Add 16 constant
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], #16            \n"  // load 8 RGB565 pixels.
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      RGB565TOARGB
+      "umull      v3.8h, v0.8b, v24.8b           \n"  // B
+      "umlal      v3.8h, v1.8b, v25.8b           \n"  // G
+      "umlal      v3.8h, v2.8b, v26.8b           \n"  // R
+      "sqrshrun   v0.8b, v3.8h, #7               \n"  // 16 bit to 8 bit Y
+      "uqadd      v0.8b, v0.8b, v27.8b           \n"
+      "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
+      "b.gt       1b                             \n"
+      : "+r"(src_rgb565),  // %0
+        "+r"(dst_y),       // %1
+        "+r"(width)        // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v6", "v24", "v25", "v26",
+        "v27");
 }
 
-void ARGB1555ToYRow_NEON(const uint8* src_argb1555, uint8* dst_y, int width) {
-  asm volatile (
-    "movi       v4.8b, #13                     \n"  // B * 0.1016 coefficient
-    "movi       v5.8b, #65                     \n"  // G * 0.5078 coefficient
-    "movi       v6.8b, #33                     \n"  // R * 0.2578 coefficient
-    "movi       v7.8b, #16                     \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"  // load 8 ARGB1555 pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    ARGB1555TOARGB
-    "umull      v3.8h, v0.8b, v4.8b            \n"  // B
-    "umlal      v3.8h, v1.8b, v5.8b            \n"  // G
-    "umlal      v3.8h, v2.8b, v6.8b            \n"  // R
-    "sqrshrun   v0.8b, v3.8h, #7               \n"  // 16 bit to 8 bit Y
-    "uqadd      v0.8b, v0.8b, v7.8b            \n"
-    MEMACCESS(1)
-    "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb1555),  // %0
-    "+r"(dst_y),         // %1
-    "+r"(width)            // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7"
-  );
+void ARGB1555ToYRow_NEON(const uint8_t* src_argb1555,
+                         uint8_t* dst_y,
+                         int width) {
+  asm volatile(
+      "movi       v4.8b, #13                     \n"  // B * 0.1016 coefficient
+      "movi       v5.8b, #65                     \n"  // G * 0.5078 coefficient
+      "movi       v6.8b, #33                     \n"  // R * 0.2578 coefficient
+      "movi       v7.8b, #16                     \n"  // Add 16 constant
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], #16            \n"  // load 8 ARGB1555 pixels.
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      ARGB1555TOARGB
+      "umull      v3.8h, v0.8b, v4.8b            \n"  // B
+      "umlal      v3.8h, v1.8b, v5.8b            \n"  // G
+      "umlal      v3.8h, v2.8b, v6.8b            \n"  // R
+      "sqrshrun   v0.8b, v3.8h, #7               \n"  // 16 bit to 8 bit Y
+      "uqadd      v0.8b, v0.8b, v7.8b            \n"
+      "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
+      "b.gt       1b                             \n"
+      : "+r"(src_argb1555),  // %0
+        "+r"(dst_y),         // %1
+        "+r"(width)          // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7");
 }
 
-void ARGB4444ToYRow_NEON(const uint8* src_argb4444, uint8* dst_y, int width) {
-  asm volatile (
-    "movi       v24.8b, #13                    \n"  // B * 0.1016 coefficient
-    "movi       v25.8b, #65                    \n"  // G * 0.5078 coefficient
-    "movi       v26.8b, #33                    \n"  // R * 0.2578 coefficient
-    "movi       v27.8b, #16                    \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"  // load 8 ARGB4444 pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    ARGB4444TOARGB
-    "umull      v3.8h, v0.8b, v24.8b           \n"  // B
-    "umlal      v3.8h, v1.8b, v25.8b           \n"  // G
-    "umlal      v3.8h, v2.8b, v26.8b           \n"  // R
-    "sqrshrun   v0.8b, v3.8h, #7               \n"  // 16 bit to 8 bit Y
-    "uqadd      v0.8b, v0.8b, v27.8b           \n"
-    MEMACCESS(1)
-    "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb4444),  // %0
-    "+r"(dst_y),         // %1
-    "+r"(width)            // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v24", "v25", "v26", "v27"
-  );
+void ARGB4444ToYRow_NEON(const uint8_t* src_argb4444,
+                         uint8_t* dst_y,
+                         int width) {
+  asm volatile(
+      "movi       v24.8b, #13                    \n"  // B * 0.1016 coefficient
+      "movi       v25.8b, #65                    \n"  // G * 0.5078 coefficient
+      "movi       v26.8b, #33                    \n"  // R * 0.2578 coefficient
+      "movi       v27.8b, #16                    \n"  // Add 16 constant
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], #16            \n"  // load 8 ARGB4444 pixels.
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      ARGB4444TOARGB
+      "umull      v3.8h, v0.8b, v24.8b           \n"  // B
+      "umlal      v3.8h, v1.8b, v25.8b           \n"  // G
+      "umlal      v3.8h, v2.8b, v26.8b           \n"  // R
+      "sqrshrun   v0.8b, v3.8h, #7               \n"  // 16 bit to 8 bit Y
+      "uqadd      v0.8b, v0.8b, v27.8b           \n"
+      "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
+      "b.gt       1b                             \n"
+      : "+r"(src_argb4444),  // %0
+        "+r"(dst_y),         // %1
+        "+r"(width)          // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v24", "v25", "v26", "v27");
 }
 
-void BGRAToYRow_NEON(const uint8* src_bgra, uint8* dst_y, int width) {
-  asm volatile (
-    "movi       v4.8b, #33                     \n"  // R * 0.2578 coefficient
-    "movi       v5.8b, #65                     \n"  // G * 0.5078 coefficient
-    "movi       v6.8b, #13                     \n"  // B * 0.1016 coefficient
-    "movi       v7.8b, #16                     \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    "umull      v16.8h, v1.8b, v4.8b           \n"  // R
-    "umlal      v16.8h, v2.8b, v5.8b           \n"  // G
-    "umlal      v16.8h, v3.8b, v6.8b           \n"  // B
-    "sqrshrun   v0.8b, v16.8h, #7              \n"  // 16 bit to 8 bit Y
-    "uqadd      v0.8b, v0.8b, v7.8b            \n"
-    MEMACCESS(1)
-    "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
-    "b.gt       1b                             \n"
-  : "+r"(src_bgra),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16"
-  );
+void BGRAToYRow_NEON(const uint8_t* src_bgra, uint8_t* dst_y, int width) {
+  asm volatile(
+      "movi       v4.8b, #33                     \n"  // R * 0.2578 coefficient
+      "movi       v5.8b, #65                     \n"  // G * 0.5078 coefficient
+      "movi       v6.8b, #13                     \n"  // B * 0.1016 coefficient
+      "movi       v7.8b, #16                     \n"  // Add 16 constant
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 pixels.
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "umull      v16.8h, v1.8b, v4.8b           \n"  // R
+      "umlal      v16.8h, v2.8b, v5.8b           \n"  // G
+      "umlal      v16.8h, v3.8b, v6.8b           \n"  // B
+      "sqrshrun   v0.8b, v16.8h, #7              \n"  // 16 bit to 8 bit Y
+      "uqadd      v0.8b, v0.8b, v7.8b            \n"
+      "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
+      "b.gt       1b                             \n"
+      : "+r"(src_bgra),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16");
 }
 
-void ABGRToYRow_NEON(const uint8* src_abgr, uint8* dst_y, int width) {
-  asm volatile (
-    "movi       v4.8b, #33                     \n"  // R * 0.2578 coefficient
-    "movi       v5.8b, #65                     \n"  // G * 0.5078 coefficient
-    "movi       v6.8b, #13                     \n"  // B * 0.1016 coefficient
-    "movi       v7.8b, #16                     \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    "umull      v16.8h, v0.8b, v4.8b           \n"  // R
-    "umlal      v16.8h, v1.8b, v5.8b           \n"  // G
-    "umlal      v16.8h, v2.8b, v6.8b           \n"  // B
-    "sqrshrun   v0.8b, v16.8h, #7              \n"  // 16 bit to 8 bit Y
-    "uqadd      v0.8b, v0.8b, v7.8b            \n"
-    MEMACCESS(1)
-    "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
-    "b.gt       1b                             \n"
-  : "+r"(src_abgr),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16"
-  );
+void ABGRToYRow_NEON(const uint8_t* src_abgr, uint8_t* dst_y, int width) {
+  asm volatile(
+      "movi       v4.8b, #33                     \n"  // R * 0.2578 coefficient
+      "movi       v5.8b, #65                     \n"  // G * 0.5078 coefficient
+      "movi       v6.8b, #13                     \n"  // B * 0.1016 coefficient
+      "movi       v7.8b, #16                     \n"  // Add 16 constant
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 pixels.
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "umull      v16.8h, v0.8b, v4.8b           \n"  // R
+      "umlal      v16.8h, v1.8b, v5.8b           \n"  // G
+      "umlal      v16.8h, v2.8b, v6.8b           \n"  // B
+      "sqrshrun   v0.8b, v16.8h, #7              \n"  // 16 bit to 8 bit Y
+      "uqadd      v0.8b, v0.8b, v7.8b            \n"
+      "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
+      "b.gt       1b                             \n"
+      : "+r"(src_abgr),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16");
 }
 
-void RGBAToYRow_NEON(const uint8* src_rgba, uint8* dst_y, int width) {
-  asm volatile (
-    "movi       v4.8b, #13                     \n"  // B * 0.1016 coefficient
-    "movi       v5.8b, #65                     \n"  // G * 0.5078 coefficient
-    "movi       v6.8b, #33                     \n"  // R * 0.2578 coefficient
-    "movi       v7.8b, #16                     \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    "umull      v16.8h, v1.8b, v4.8b           \n"  // B
-    "umlal      v16.8h, v2.8b, v5.8b           \n"  // G
-    "umlal      v16.8h, v3.8b, v6.8b           \n"  // R
-    "sqrshrun   v0.8b, v16.8h, #7              \n"  // 16 bit to 8 bit Y
-    "uqadd      v0.8b, v0.8b, v7.8b            \n"
-    MEMACCESS(1)
-    "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
-    "b.gt       1b                             \n"
-  : "+r"(src_rgba),  // %0
-    "+r"(dst_y),     // %1
-    "+r"(width)        // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16"
-  );
+void RGBAToYRow_NEON(const uint8_t* src_rgba, uint8_t* dst_y, int width) {
+  asm volatile(
+      "movi       v4.8b, #13                     \n"  // B * 0.1016 coefficient
+      "movi       v5.8b, #65                     \n"  // G * 0.5078 coefficient
+      "movi       v6.8b, #33                     \n"  // R * 0.2578 coefficient
+      "movi       v7.8b, #16                     \n"  // Add 16 constant
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 pixels.
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "umull      v16.8h, v1.8b, v4.8b           \n"  // B
+      "umlal      v16.8h, v2.8b, v5.8b           \n"  // G
+      "umlal      v16.8h, v3.8b, v6.8b           \n"  // R
+      "sqrshrun   v0.8b, v16.8h, #7              \n"  // 16 bit to 8 bit Y
+      "uqadd      v0.8b, v0.8b, v7.8b            \n"
+      "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
+      "b.gt       1b                             \n"
+      : "+r"(src_rgba),  // %0
+        "+r"(dst_y),     // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16");
 }
 
-void RGB24ToYRow_NEON(const uint8* src_rgb24, uint8* dst_y, int width) {
-  asm volatile (
-    "movi       v4.8b, #13                     \n"  // B * 0.1016 coefficient
-    "movi       v5.8b, #65                     \n"  // G * 0.5078 coefficient
-    "movi       v6.8b, #33                     \n"  // R * 0.2578 coefficient
-    "movi       v7.8b, #16                     \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld3        {v0.8b,v1.8b,v2.8b}, [%0], #24 \n"  // load 8 pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    "umull      v16.8h, v0.8b, v4.8b           \n"  // B
-    "umlal      v16.8h, v1.8b, v5.8b           \n"  // G
-    "umlal      v16.8h, v2.8b, v6.8b           \n"  // R
-    "sqrshrun   v0.8b, v16.8h, #7              \n"  // 16 bit to 8 bit Y
-    "uqadd      v0.8b, v0.8b, v7.8b            \n"
-    MEMACCESS(1)
-    "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
-    "b.gt       1b                             \n"
-  : "+r"(src_rgb24),  // %0
-    "+r"(dst_y),      // %1
-    "+r"(width)         // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16"
-  );
+void RGB24ToYRow_NEON(const uint8_t* src_rgb24, uint8_t* dst_y, int width) {
+  asm volatile(
+      "movi       v4.8b, #13                     \n"  // B * 0.1016 coefficient
+      "movi       v5.8b, #65                     \n"  // G * 0.5078 coefficient
+      "movi       v6.8b, #33                     \n"  // R * 0.2578 coefficient
+      "movi       v7.8b, #16                     \n"  // Add 16 constant
+      "1:                                        \n"
+      "ld3        {v0.8b,v1.8b,v2.8b}, [%0], #24 \n"  // load 8 pixels.
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "umull      v16.8h, v0.8b, v4.8b           \n"  // B
+      "umlal      v16.8h, v1.8b, v5.8b           \n"  // G
+      "umlal      v16.8h, v2.8b, v6.8b           \n"  // R
+      "sqrshrun   v0.8b, v16.8h, #7              \n"  // 16 bit to 8 bit Y
+      "uqadd      v0.8b, v0.8b, v7.8b            \n"
+      "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
+      "b.gt       1b                             \n"
+      : "+r"(src_rgb24),  // %0
+        "+r"(dst_y),      // %1
+        "+r"(width)       // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16");
 }
 
-void RAWToYRow_NEON(const uint8* src_raw, uint8* dst_y, int width) {
-  asm volatile (
-    "movi       v4.8b, #33                     \n"  // R * 0.2578 coefficient
-    "movi       v5.8b, #65                     \n"  // G * 0.5078 coefficient
-    "movi       v6.8b, #13                     \n"  // B * 0.1016 coefficient
-    "movi       v7.8b, #16                     \n"  // Add 16 constant
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld3        {v0.8b,v1.8b,v2.8b}, [%0], #24 \n"  // load 8 pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    "umull      v16.8h, v0.8b, v4.8b           \n"  // B
-    "umlal      v16.8h, v1.8b, v5.8b           \n"  // G
-    "umlal      v16.8h, v2.8b, v6.8b           \n"  // R
-    "sqrshrun   v0.8b, v16.8h, #7              \n"  // 16 bit to 8 bit Y
-    "uqadd      v0.8b, v0.8b, v7.8b            \n"
-    MEMACCESS(1)
-    "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
-    "b.gt       1b                             \n"
-  : "+r"(src_raw),  // %0
-    "+r"(dst_y),    // %1
-    "+r"(width)       // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16"
-  );
+void RAWToYRow_NEON(const uint8_t* src_raw, uint8_t* dst_y, int width) {
+  asm volatile(
+      "movi       v4.8b, #33                     \n"  // R * 0.2578 coefficient
+      "movi       v5.8b, #65                     \n"  // G * 0.5078 coefficient
+      "movi       v6.8b, #13                     \n"  // B * 0.1016 coefficient
+      "movi       v7.8b, #16                     \n"  // Add 16 constant
+      "1:                                        \n"
+      "ld3        {v0.8b,v1.8b,v2.8b}, [%0], #24 \n"  // load 8 pixels.
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "umull      v16.8h, v0.8b, v4.8b           \n"  // B
+      "umlal      v16.8h, v1.8b, v5.8b           \n"  // G
+      "umlal      v16.8h, v2.8b, v6.8b           \n"  // R
+      "sqrshrun   v0.8b, v16.8h, #7              \n"  // 16 bit to 8 bit Y
+      "uqadd      v0.8b, v0.8b, v7.8b            \n"
+      "st1        {v0.8b}, [%1], #8              \n"  // store 8 pixels Y.
+      "b.gt       1b                             \n"
+      : "+r"(src_raw),  // %0
+        "+r"(dst_y),    // %1
+        "+r"(width)     // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16");
 }
 
 // Bilinear filter 16x2 -> 16x1
-void InterpolateRow_NEON(uint8* dst_ptr,
-                         const uint8* src_ptr, ptrdiff_t src_stride,
-                         int dst_width, int source_y_fraction) {
+void InterpolateRow_NEON(uint8_t* dst_ptr,
+                         const uint8_t* src_ptr,
+                         ptrdiff_t src_stride,
+                         int dst_width,
+                         int source_y_fraction) {
   int y1_fraction = source_y_fraction;
   int y0_fraction = 256 - y1_fraction;
-  const uint8* src_ptr1 = src_ptr + src_stride;
-  asm volatile (
-    "cmp        %w4, #0                        \n"
-    "b.eq       100f                           \n"
-    "cmp        %w4, #128                      \n"
-    "b.eq       50f                            \n"
+  const uint8_t* src_ptr1 = src_ptr + src_stride;
+  asm volatile(
+      "cmp        %w4, #0                        \n"
+      "b.eq       100f                           \n"
+      "cmp        %w4, #128                      \n"
+      "b.eq       50f                            \n"
 
-    "dup        v5.16b, %w4                    \n"
-    "dup        v4.16b, %w5                    \n"
-    // General purpose row blend.
-  "1:                                          \n"
-    MEMACCESS(1)
-    "ld1        {v0.16b}, [%1], #16            \n"
-    MEMACCESS(2)
-    "ld1        {v1.16b}, [%2], #16            \n"
-    "subs       %w3, %w3, #16                  \n"
-    "umull      v2.8h, v0.8b,  v4.8b           \n"
-    "umull2     v3.8h, v0.16b, v4.16b          \n"
-    "umlal      v2.8h, v1.8b,  v5.8b           \n"
-    "umlal2     v3.8h, v1.16b, v5.16b          \n"
-    "rshrn      v0.8b,  v2.8h, #8              \n"
-    "rshrn2     v0.16b, v3.8h, #8              \n"
-    MEMACCESS(0)
-    "st1        {v0.16b}, [%0], #16            \n"
-    "b.gt       1b                             \n"
-    "b          99f                            \n"
+      "dup        v5.16b, %w4                    \n"
+      "dup        v4.16b, %w5                    \n"
+      // General purpose row blend.
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%1], #16            \n"
+      "ld1        {v1.16b}, [%2], #16            \n"
+      "subs       %w3, %w3, #16                  \n"
+      "umull      v2.8h, v0.8b,  v4.8b           \n"
+      "umull2     v3.8h, v0.16b, v4.16b          \n"
+      "umlal      v2.8h, v1.8b,  v5.8b           \n"
+      "umlal2     v3.8h, v1.16b, v5.16b          \n"
+      "rshrn      v0.8b,  v2.8h, #8              \n"
+      "rshrn2     v0.16b, v3.8h, #8              \n"
+      "st1        {v0.16b}, [%0], #16            \n"
+      "b.gt       1b                             \n"
+      "b          99f                            \n"
 
-    // Blend 50 / 50.
-  "50:                                         \n"
-    MEMACCESS(1)
-    "ld1        {v0.16b}, [%1], #16            \n"
-    MEMACCESS(2)
-    "ld1        {v1.16b}, [%2], #16            \n"
-    "subs       %w3, %w3, #16                  \n"
-    "urhadd     v0.16b, v0.16b, v1.16b         \n"
-    MEMACCESS(0)
-    "st1        {v0.16b}, [%0], #16            \n"
-    "b.gt       50b                            \n"
-    "b          99f                            \n"
+      // Blend 50 / 50.
+      "50:                                       \n"
+      "ld1        {v0.16b}, [%1], #16            \n"
+      "ld1        {v1.16b}, [%2], #16            \n"
+      "subs       %w3, %w3, #16                  \n"
+      "urhadd     v0.16b, v0.16b, v1.16b         \n"
+      "st1        {v0.16b}, [%0], #16            \n"
+      "b.gt       50b                            \n"
+      "b          99f                            \n"
 
-    // Blend 100 / 0 - Copy row unchanged.
-  "100:                                        \n"
-    MEMACCESS(1)
-    "ld1        {v0.16b}, [%1], #16            \n"
-    "subs       %w3, %w3, #16                  \n"
-    MEMACCESS(0)
-    "st1        {v0.16b}, [%0], #16            \n"
-    "b.gt       100b                           \n"
+      // Blend 100 / 0 - Copy row unchanged.
+      "100:                                      \n"
+      "ld1        {v0.16b}, [%1], #16            \n"
+      "subs       %w3, %w3, #16                  \n"
+      "st1        {v0.16b}, [%0], #16            \n"
+      "b.gt       100b                           \n"
 
-  "99:                                         \n"
-  : "+r"(dst_ptr),          // %0
-    "+r"(src_ptr),          // %1
-    "+r"(src_ptr1),         // %2
-    "+r"(dst_width),        // %3
-    "+r"(y1_fraction),      // %4
-    "+r"(y0_fraction)       // %5
-  :
-  : "cc", "memory", "v0", "v1", "v3", "v4", "v5"
-  );
+      "99:                                       \n"
+      : "+r"(dst_ptr),      // %0
+        "+r"(src_ptr),      // %1
+        "+r"(src_ptr1),     // %2
+        "+r"(dst_width),    // %3
+        "+r"(y1_fraction),  // %4
+        "+r"(y0_fraction)   // %5
+      :
+      : "cc", "memory", "v0", "v1", "v3", "v4", "v5");
 }
 
 // dr * (256 - sa) / 256 + sr = dr - dr * sa / 256 + sr
-void ARGBBlendRow_NEON(const uint8* src_argb0, const uint8* src_argb1,
-                       uint8* dst_argb, int width) {
-  asm volatile (
-    "subs       %w3, %w3, #8                   \n"
-    "b.lt       89f                            \n"
-    // Blend 8 pixels.
-  "8:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB0 pixels
-    MEMACCESS(1)
-    "ld4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%1], #32 \n"  // load 8 ARGB1 pixels
-    "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
-    "umull      v16.8h, v4.8b, v3.8b           \n"  // db * a
-    "umull      v17.8h, v5.8b, v3.8b           \n"  // dg * a
-    "umull      v18.8h, v6.8b, v3.8b           \n"  // dr * a
-    "uqrshrn    v16.8b, v16.8h, #8             \n"  // db >>= 8
-    "uqrshrn    v17.8b, v17.8h, #8             \n"  // dg >>= 8
-    "uqrshrn    v18.8b, v18.8h, #8             \n"  // dr >>= 8
-    "uqsub      v4.8b, v4.8b, v16.8b           \n"  // db - (db * a / 256)
-    "uqsub      v5.8b, v5.8b, v17.8b           \n"  // dg - (dg * a / 256)
-    "uqsub      v6.8b, v6.8b, v18.8b           \n"  // dr - (dr * a / 256)
-    "uqadd      v0.8b, v0.8b, v4.8b            \n"  // + sb
-    "uqadd      v1.8b, v1.8b, v5.8b            \n"  // + sg
-    "uqadd      v2.8b, v2.8b, v6.8b            \n"  // + sr
-    "movi       v3.8b, #255                    \n"  // a = 255
-    MEMACCESS(2)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%2], #32 \n"  // store 8 ARGB pixels
-    "b.ge       8b                             \n"
+void ARGBBlendRow_NEON(const uint8_t* src_argb0,
+                       const uint8_t* src_argb1,
+                       uint8_t* dst_argb,
+                       int width) {
+  asm volatile(
+      "subs       %w3, %w3, #8                   \n"
+      "b.lt       89f                            \n"
+      // Blend 8 pixels.
+      "8:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB0
+                                                            // pixels
+      "ld4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%1], #32 \n"  // load 8 ARGB1
+                                                            // pixels
+      "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
+      "umull      v16.8h, v4.8b, v3.8b           \n"  // db * a
+      "umull      v17.8h, v5.8b, v3.8b           \n"  // dg * a
+      "umull      v18.8h, v6.8b, v3.8b           \n"  // dr * a
+      "uqrshrn    v16.8b, v16.8h, #8             \n"  // db >>= 8
+      "uqrshrn    v17.8b, v17.8h, #8             \n"  // dg >>= 8
+      "uqrshrn    v18.8b, v18.8h, #8             \n"  // dr >>= 8
+      "uqsub      v4.8b, v4.8b, v16.8b           \n"  // db - (db * a / 256)
+      "uqsub      v5.8b, v5.8b, v17.8b           \n"  // dg - (dg * a / 256)
+      "uqsub      v6.8b, v6.8b, v18.8b           \n"  // dr - (dr * a / 256)
+      "uqadd      v0.8b, v0.8b, v4.8b            \n"  // + sb
+      "uqadd      v1.8b, v1.8b, v5.8b            \n"  // + sg
+      "uqadd      v2.8b, v2.8b, v6.8b            \n"  // + sr
+      "movi       v3.8b, #255                    \n"  // a = 255
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%2], #32 \n"  // store 8 ARGB
+                                                            // pixels
+      "b.ge       8b                             \n"
 
-  "89:                                         \n"
-    "adds       %w3, %w3, #8-1                 \n"
-    "b.lt       99f                            \n"
+      "89:                                       \n"
+      "adds       %w3, %w3, #8-1                 \n"
+      "b.lt       99f                            \n"
 
-    // Blend 1 pixels.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.b,v1.b,v2.b,v3.b}[0], [%0], #4 \n"  // load 1 pixel ARGB0.
-    MEMACCESS(1)
-    "ld4        {v4.b,v5.b,v6.b,v7.b}[0], [%1], #4 \n"  // load 1 pixel ARGB1.
-    "subs       %w3, %w3, #1                   \n"  // 1 processed per loop.
-    "umull      v16.8h, v4.8b, v3.8b           \n"  // db * a
-    "umull      v17.8h, v5.8b, v3.8b           \n"  // dg * a
-    "umull      v18.8h, v6.8b, v3.8b           \n"  // dr * a
-    "uqrshrn    v16.8b, v16.8h, #8             \n"  // db >>= 8
-    "uqrshrn    v17.8b, v17.8h, #8             \n"  // dg >>= 8
-    "uqrshrn    v18.8b, v18.8h, #8             \n"  // dr >>= 8
-    "uqsub      v4.8b, v4.8b, v16.8b           \n"  // db - (db * a / 256)
-    "uqsub      v5.8b, v5.8b, v17.8b           \n"  // dg - (dg * a / 256)
-    "uqsub      v6.8b, v6.8b, v18.8b           \n"  // dr - (dr * a / 256)
-    "uqadd      v0.8b, v0.8b, v4.8b            \n"  // + sb
-    "uqadd      v1.8b, v1.8b, v5.8b            \n"  // + sg
-    "uqadd      v2.8b, v2.8b, v6.8b            \n"  // + sr
-    "movi       v3.8b, #255                    \n"  // a = 255
-    MEMACCESS(2)
-    "st4        {v0.b,v1.b,v2.b,v3.b}[0], [%2], #4 \n"  // store 1 pixel.
-    "b.ge       1b                             \n"
+      // Blend 1 pixels.
+      "1:                                        \n"
+      "ld4        {v0.b,v1.b,v2.b,v3.b}[0], [%0], #4 \n"  // load 1 pixel ARGB0.
+      "ld4        {v4.b,v5.b,v6.b,v7.b}[0], [%1], #4 \n"  // load 1 pixel ARGB1.
+      "subs       %w3, %w3, #1                   \n"  // 1 processed per loop.
+      "umull      v16.8h, v4.8b, v3.8b           \n"  // db * a
+      "umull      v17.8h, v5.8b, v3.8b           \n"  // dg * a
+      "umull      v18.8h, v6.8b, v3.8b           \n"  // dr * a
+      "uqrshrn    v16.8b, v16.8h, #8             \n"  // db >>= 8
+      "uqrshrn    v17.8b, v17.8h, #8             \n"  // dg >>= 8
+      "uqrshrn    v18.8b, v18.8h, #8             \n"  // dr >>= 8
+      "uqsub      v4.8b, v4.8b, v16.8b           \n"  // db - (db * a / 256)
+      "uqsub      v5.8b, v5.8b, v17.8b           \n"  // dg - (dg * a / 256)
+      "uqsub      v6.8b, v6.8b, v18.8b           \n"  // dr - (dr * a / 256)
+      "uqadd      v0.8b, v0.8b, v4.8b            \n"  // + sb
+      "uqadd      v1.8b, v1.8b, v5.8b            \n"  // + sg
+      "uqadd      v2.8b, v2.8b, v6.8b            \n"  // + sr
+      "movi       v3.8b, #255                    \n"  // a = 255
+      "st4        {v0.b,v1.b,v2.b,v3.b}[0], [%2], #4 \n"  // store 1 pixel.
+      "b.ge       1b                             \n"
 
-  "99:                                         \n"
+      "99:                                       \n"
 
-  : "+r"(src_argb0),    // %0
-    "+r"(src_argb1),    // %1
-    "+r"(dst_argb),     // %2
-    "+r"(width)         // %3
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7",
-    "v16", "v17", "v18"
-  );
+      : "+r"(src_argb0),  // %0
+        "+r"(src_argb1),  // %1
+        "+r"(dst_argb),   // %2
+        "+r"(width)       // %3
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16",
+        "v17", "v18");
 }
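The blend loop above computes, per channel, src + (dst - dst * alpha / 256) using saturating NEON ops (umull, uqrshrn #8, uqsub, uqadd). A scalar C sketch of one channel, with a hypothetical helper name, not part of the patch:

```c
#include <assert.h>
#include <stdint.h>

// Scalar model of one blended channel: s = foreground, d = background,
// a = foreground alpha. Mirrors umull, uqrshrn #8 (rounded narrowing),
// uqsub (floors at 0) and uqadd (caps at 255).
static uint8_t blend_channel(uint8_t s, uint8_t d, uint8_t a) {
  int da = (d * a + 128) >> 8;  /* uqrshrn: rounded (d * a) / 256 */
  int keep = d - da;            /* uqsub: d - (d * a / 256) */
  if (keep < 0) keep = 0;
  int out = s + keep;           /* uqadd: + s, saturating */
  return out > 255 ? 255 : (uint8_t)out;
}
```

With a = 255 the background contribution cancels and the foreground passes through; with a = 0 the two inputs simply add.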
 
 // Attenuate 8 pixels at a time.
-void ARGBAttenuateRow_NEON(const uint8* src_argb, uint8* dst_argb, int width) {
-  asm volatile (
-    // Attenuate 8 pixels.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB pixels
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    "umull      v4.8h, v0.8b, v3.8b            \n"  // b * a
-    "umull      v5.8h, v1.8b, v3.8b            \n"  // g * a
-    "umull      v6.8h, v2.8b, v3.8b            \n"  // r * a
-    "uqrshrn    v0.8b, v4.8h, #8               \n"  // b >>= 8
-    "uqrshrn    v1.8b, v5.8h, #8               \n"  // g >>= 8
-    "uqrshrn    v2.8b, v6.8h, #8               \n"  // r >>= 8
-    MEMACCESS(1)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%1], #32 \n"  // store 8 ARGB pixels
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),   // %0
-    "+r"(dst_argb),   // %1
-    "+r"(width)       // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6"
-  );
+void ARGBAttenuateRow_NEON(const uint8_t* src_argb,
+                           uint8_t* dst_argb,
+                           int width) {
+  asm volatile(
+      // Attenuate 8 pixels.
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "umull      v4.8h, v0.8b, v3.8b            \n"  // b * a
+      "umull      v5.8h, v1.8b, v3.8b            \n"  // g * a
+      "umull      v6.8h, v2.8b, v3.8b            \n"  // r * a
+      "uqrshrn    v0.8b, v4.8h, #8               \n"  // b >>= 8
+      "uqrshrn    v1.8b, v5.8h, #8               \n"  // g >>= 8
+      "uqrshrn    v2.8b, v6.8h, #8               \n"  // r >>= 8
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%1], #32 \n"  // store 8 ARGB
+                                                            // pixels
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6");
 }
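ARGBAttenuateRow_NEON multiplies each color channel by the pixel's alpha with a rounded divide by 256 (umull + uqrshrn #8). A scalar sketch of that lane math (illustrative only):

```c
#include <assert.h>
#include <stdint.h>

// Rounded (c * a) / 256, as produced by umull followed by uqrshrn #8.
static uint8_t attenuate_channel(uint8_t c, uint8_t a) {
  int out = (c * a + 128) >> 8;
  return out > 255 ? 255 : (uint8_t)out;
}
```

Note the divide is by 256 rather than 255, so a fully opaque 255 channel attenuates to 254, a known property of this fast path.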
 
 // Quantize 8 ARGB pixels (32 bytes).
 // dst = (dst * scale >> 16) * interval_size + interval_offset;
-void ARGBQuantizeRow_NEON(uint8* dst_argb, int scale, int interval_size,
-                          int interval_offset, int width) {
-  asm volatile (
-    "dup        v4.8h, %w2                     \n"
-    "ushr       v4.8h, v4.8h, #1               \n"  // scale >>= 1
-    "dup        v5.8h, %w3                     \n"  // interval multiply.
-    "dup        v6.8h, %w4                     \n"  // interval add
+void ARGBQuantizeRow_NEON(uint8_t* dst_argb,
+                          int scale,
+                          int interval_size,
+                          int interval_offset,
+                          int width) {
+  asm volatile(
+      "dup        v4.8h, %w2                     \n"
+      "ushr       v4.8h, v4.8h, #1               \n"  // scale >>= 1
+      "dup        v5.8h, %w3                     \n"  // interval multiply.
+      "dup        v6.8h, %w4                     \n"  // interval add
 
-    // 8 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0]  \n"  // load 8 pixels of ARGB.
-    "subs       %w1, %w1, #8                   \n"  // 8 processed per loop.
-    "uxtl       v0.8h, v0.8b                   \n"  // b (0 .. 255)
-    "uxtl       v1.8h, v1.8b                   \n"
-    "uxtl       v2.8h, v2.8b                   \n"
-    "sqdmulh    v0.8h, v0.8h, v4.8h            \n"  // b * scale
-    "sqdmulh    v1.8h, v1.8h, v4.8h            \n"  // g
-    "sqdmulh    v2.8h, v2.8h, v4.8h            \n"  // r
-    "mul        v0.8h, v0.8h, v5.8h            \n"  // b * interval_size
-    "mul        v1.8h, v1.8h, v5.8h            \n"  // g
-    "mul        v2.8h, v2.8h, v5.8h            \n"  // r
-    "add        v0.8h, v0.8h, v6.8h            \n"  // b + interval_offset
-    "add        v1.8h, v1.8h, v6.8h            \n"  // g
-    "add        v2.8h, v2.8h, v6.8h            \n"  // r
-    "uqxtn      v0.8b, v0.8h                   \n"
-    "uqxtn      v1.8b, v1.8h                   \n"
-    "uqxtn      v2.8b, v2.8h                   \n"
-    MEMACCESS(0)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // store 8 ARGB pixels
-    "b.gt       1b                             \n"
-  : "+r"(dst_argb),       // %0
-    "+r"(width)           // %1
-  : "r"(scale),           // %2
-    "r"(interval_size),   // %3
-    "r"(interval_offset)  // %4
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6"
-  );
+      // 8 pixel loop.
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0]  \n"  // load 8 ARGB.
+      "subs       %w1, %w1, #8                   \n"    // 8 processed per loop.
+      "uxtl       v0.8h, v0.8b                   \n"    // b (0 .. 255)
+      "uxtl       v1.8h, v1.8b                   \n"
+      "uxtl       v2.8h, v2.8b                   \n"
+      "sqdmulh    v0.8h, v0.8h, v4.8h            \n"  // b * scale
+      "sqdmulh    v1.8h, v1.8h, v4.8h            \n"  // g
+      "sqdmulh    v2.8h, v2.8h, v4.8h            \n"  // r
+      "mul        v0.8h, v0.8h, v5.8h            \n"  // b * interval_size
+      "mul        v1.8h, v1.8h, v5.8h            \n"  // g
+      "mul        v2.8h, v2.8h, v5.8h            \n"  // r
+      "add        v0.8h, v0.8h, v6.8h            \n"  // b + interval_offset
+      "add        v1.8h, v1.8h, v6.8h            \n"  // g
+      "add        v2.8h, v2.8h, v6.8h            \n"  // r
+      "uqxtn      v0.8b, v0.8h                   \n"
+      "uqxtn      v1.8b, v1.8h                   \n"
+      "uqxtn      v2.8b, v2.8h                   \n"
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // store 8 ARGB
+      "b.gt       1b                             \n"
+      : "+r"(dst_argb),       // %0
+        "+r"(width)           // %1
+      : "r"(scale),           // %2
+        "r"(interval_size),   // %3
+        "r"(interval_offset)  // %4
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6");
 }
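The quantize formula from the comment, dst = (dst * scale >> 16) * interval_size + interval_offset, is realized with sqdmulh on a pre-halved scale (sqdmulh doubles its product). A scalar sketch of one channel under that assumption:

```c
#include <assert.h>
#include <stdint.h>

// Scalar model of ARGBQuantizeRow's channel math. sqdmulh with
// (scale >> 1) computes (2 * c * (scale >> 1)) >> 16, which equals
// (c * scale) >> 16 when scale is even.
static uint8_t quantize_channel(uint8_t c, int scale, int interval_size,
                                int interval_offset) {
  int q = (2 * c * (scale >> 1)) >> 16;       /* sqdmulh */
  int out = q * interval_size + interval_offset;
  return out > 255 ? 255 : (uint8_t)out;      /* uqxtn saturates */
}
```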
 
 // Shade 8 pixels at a time by specified value.
 // NOTE vqrdmulh.s16 q10, q10, d0[0] must use a scalar register from 0 to 8.
 // Rounding in vqrdmulh does +1 to high if high bit of low s16 is set.
-void ARGBShadeRow_NEON(const uint8* src_argb, uint8* dst_argb, int width,
-                       uint32 value) {
-  asm volatile (
-    "dup        v0.4s, %w3                     \n"  // duplicate scale value.
-    "zip1       v0.8b, v0.8b, v0.8b            \n"  // v0.8b aarrggbb.
-    "ushr       v0.8h, v0.8h, #1               \n"  // scale / 2.
+void ARGBShadeRow_NEON(const uint8_t* src_argb,
+                       uint8_t* dst_argb,
+                       int width,
+                       uint32_t value) {
+  asm volatile(
+      "dup        v0.4s, %w3                     \n"  // duplicate scale value.
+      "zip1       v0.8b, v0.8b, v0.8b            \n"  // v0.8b aarrggbb.
+      "ushr       v0.8h, v0.8h, #1               \n"  // scale / 2.
 
-    // 8 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%0], #32 \n"  // load 8 ARGB pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    "uxtl       v4.8h, v4.8b                   \n"  // b (0 .. 255)
-    "uxtl       v5.8h, v5.8b                   \n"
-    "uxtl       v6.8h, v6.8b                   \n"
-    "uxtl       v7.8h, v7.8b                   \n"
-    "sqrdmulh   v4.8h, v4.8h, v0.h[0]          \n"  // b * scale * 2
-    "sqrdmulh   v5.8h, v5.8h, v0.h[1]          \n"  // g
-    "sqrdmulh   v6.8h, v6.8h, v0.h[2]          \n"  // r
-    "sqrdmulh   v7.8h, v7.8h, v0.h[3]          \n"  // a
-    "uqxtn      v4.8b, v4.8h                   \n"
-    "uqxtn      v5.8b, v5.8h                   \n"
-    "uqxtn      v6.8b, v6.8h                   \n"
-    "uqxtn      v7.8b, v7.8h                   \n"
-    MEMACCESS(1)
-    "st4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%1], #32 \n"  // store 8 ARGB pixels
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),       // %0
-    "+r"(dst_argb),       // %1
-    "+r"(width)           // %2
-  : "r"(value)            // %3
-  : "cc", "memory", "v0", "v4", "v5", "v6", "v7"
-  );
+      // 8 pixel loop.
+      "1:                                        \n"
+      "ld4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%0], #32 \n"  // load 8 ARGB
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "uxtl       v4.8h, v4.8b                   \n"  // b (0 .. 255)
+      "uxtl       v5.8h, v5.8b                   \n"
+      "uxtl       v6.8h, v6.8b                   \n"
+      "uxtl       v7.8h, v7.8b                   \n"
+      "sqrdmulh   v4.8h, v4.8h, v0.h[0]          \n"  // b * scale * 2
+      "sqrdmulh   v5.8h, v5.8h, v0.h[1]          \n"  // g
+      "sqrdmulh   v6.8h, v6.8h, v0.h[2]          \n"  // r
+      "sqrdmulh   v7.8h, v7.8h, v0.h[3]          \n"  // a
+      "uqxtn      v4.8b, v4.8h                   \n"
+      "uqxtn      v5.8b, v5.8h                   \n"
+      "uqxtn      v6.8b, v6.8h                   \n"
+      "uqxtn      v7.8b, v7.8h                   \n"
+      "st4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%1], #32 \n"  // store 8 ARGB
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      : "r"(value)       // %3
+      : "cc", "memory", "v0", "v4", "v5", "v6", "v7");
 }
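ARGBShadeRow_NEON unpacks the 32-bit AARRGGBB value so each shade byte occupies a 16-bit lane as v * 257 (zip1), halves it (ushr #1), then uses the doubling, rounding sqrdmulh, which works out to roughly (c * v) / 256 per channel. A scalar sketch of that exact sequence (illustrative, not part of the patch):

```c
#include <assert.h>
#include <stdint.h>

// Scalar model of ARGBShadeRow's channel math. v is the per-channel
// shade byte taken from the packed AARRGGBB value.
static uint8_t shade_channel(uint8_t c, uint8_t v) {
  int lane = (v * 257) >> 1;                /* zip1 + ushr #1 */
  int out = (2 * c * lane + 32768) >> 16;   /* sqrdmulh: doubling, rounded */
  return out > 255 ? 255 : (uint8_t)out;    /* uqxtn saturates */
}
```

A shade byte of 255 leaves the channel essentially unchanged, which is why the scale/2 trick is safe despite sqrdmulh's signed range.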
 
 // Convert 8 ARGB pixels (64 bytes) to 8 Gray ARGB pixels
 // Similar to ARGBToYJ but stores ARGB.
 // C code is (15 * b + 75 * g + 38 * r + 64) >> 7;
-void ARGBGrayRow_NEON(const uint8* src_argb, uint8* dst_argb, int width) {
-  asm volatile (
-    "movi       v24.8b, #15                    \n"  // B * 0.11400 coefficient
-    "movi       v25.8b, #75                    \n"  // G * 0.58700 coefficient
-    "movi       v26.8b, #38                    \n"  // R * 0.29900 coefficient
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    "umull      v4.8h, v0.8b, v24.8b           \n"  // B
-    "umlal      v4.8h, v1.8b, v25.8b           \n"  // G
-    "umlal      v4.8h, v2.8b, v26.8b           \n"  // R
-    "sqrshrun   v0.8b, v4.8h, #7               \n"  // 15 bit to 8 bit B
-    "orr        v1.8b, v0.8b, v0.8b            \n"  // G
-    "orr        v2.8b, v0.8b, v0.8b            \n"  // R
-    MEMACCESS(1)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%1], #32 \n"  // store 8 pixels.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_argb),  // %1
-    "+r"(width)      // %2
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v24", "v25", "v26"
-  );
+void ARGBGrayRow_NEON(const uint8_t* src_argb, uint8_t* dst_argb, int width) {
+  asm volatile(
+      "movi       v24.8b, #15                    \n"  // B * 0.11400 coefficient
+      "movi       v25.8b, #75                    \n"  // G * 0.58700 coefficient
+      "movi       v26.8b, #38                    \n"  // R * 0.29900 coefficient
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "umull      v4.8h, v0.8b, v24.8b           \n"  // B
+      "umlal      v4.8h, v1.8b, v25.8b           \n"  // G
+      "umlal      v4.8h, v2.8b, v26.8b           \n"  // R
+      "sqrshrun   v0.8b, v4.8h, #7               \n"  // 15 bit to 8 bit B
+      "orr        v1.8b, v0.8b, v0.8b            \n"  // G
+      "orr        v2.8b, v0.8b, v0.8b            \n"  // R
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%1], #32 \n"  // store 8 pixels.
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(width)      // %2
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v24", "v25", "v26");
 }
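The gray conversion uses the weights stated in the comment, (15 * b + 75 * g + 38 * r + 64) >> 7, where the +64 is sqrshrun's rounding term. A scalar sketch:

```c
#include <assert.h>
#include <stdint.h>

// Scalar model of ARGBGrayRow: weighted sum of B, G, R with rounding,
// shifted down by 7 and saturated (sqrshrun #7).
static uint8_t gray_value(uint8_t b, uint8_t g, uint8_t r) {
  int y = (15 * b + 75 * g + 38 * r + 64) >> 7;
  return y > 255 ? 255 : (uint8_t)y;
}
```

The three weights sum to 128, so a uniform gray input maps back to (nearly) itself.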
 
 // Convert 8 ARGB pixels (32 bytes) to 8 Sepia ARGB pixels.
@@ -2443,194 +2321,180 @@
 //    g = (r * 45 + g * 88 + b * 22) >> 7
 //    r = (r * 50 + g * 98 + b * 24) >> 7
 
-void ARGBSepiaRow_NEON(uint8* dst_argb, int width) {
-  asm volatile (
-    "movi       v20.8b, #17                    \n"  // BB coefficient
-    "movi       v21.8b, #68                    \n"  // BG coefficient
-    "movi       v22.8b, #35                    \n"  // BR coefficient
-    "movi       v24.8b, #22                    \n"  // GB coefficient
-    "movi       v25.8b, #88                    \n"  // GG coefficient
-    "movi       v26.8b, #45                    \n"  // GR coefficient
-    "movi       v28.8b, #24                    \n"  // BB coefficient
-    "movi       v29.8b, #98                    \n"  // BG coefficient
-    "movi       v30.8b, #50                    \n"  // BR coefficient
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0] \n"  // load 8 ARGB pixels.
-    "subs       %w1, %w1, #8                   \n"  // 8 processed per loop.
-    "umull      v4.8h, v0.8b, v20.8b           \n"  // B to Sepia B
-    "umlal      v4.8h, v1.8b, v21.8b           \n"  // G
-    "umlal      v4.8h, v2.8b, v22.8b           \n"  // R
-    "umull      v5.8h, v0.8b, v24.8b           \n"  // B to Sepia G
-    "umlal      v5.8h, v1.8b, v25.8b           \n"  // G
-    "umlal      v5.8h, v2.8b, v26.8b           \n"  // R
-    "umull      v6.8h, v0.8b, v28.8b           \n"  // B to Sepia R
-    "umlal      v6.8h, v1.8b, v29.8b           \n"  // G
-    "umlal      v6.8h, v2.8b, v30.8b           \n"  // R
-    "uqshrn     v0.8b, v4.8h, #7               \n"  // 16 bit to 8 bit B
-    "uqshrn     v1.8b, v5.8h, #7               \n"  // 16 bit to 8 bit G
-    "uqshrn     v2.8b, v6.8h, #7               \n"  // 16 bit to 8 bit R
-    MEMACCESS(0)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // store 8 pixels.
-    "b.gt       1b                             \n"
-  : "+r"(dst_argb),  // %0
-    "+r"(width)      // %1
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7",
-    "v20", "v21", "v22", "v24", "v25", "v26", "v28", "v29", "v30"
-  );
+void ARGBSepiaRow_NEON(uint8_t* dst_argb, int width) {
+  asm volatile(
+      "movi       v20.8b, #17                    \n"  // BB coefficient
+      "movi       v21.8b, #68                    \n"  // BG coefficient
+      "movi       v22.8b, #35                    \n"  // BR coefficient
+      "movi       v24.8b, #22                    \n"  // GB coefficient
+      "movi       v25.8b, #88                    \n"  // GG coefficient
+      "movi       v26.8b, #45                    \n"  // GR coefficient
+      "movi       v28.8b, #24                    \n"  // RB coefficient
+      "movi       v29.8b, #98                    \n"  // RG coefficient
+      "movi       v30.8b, #50                    \n"  // RR coefficient
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0] \n"  // load 8 ARGB pixels.
+      "subs       %w1, %w1, #8                   \n"   // 8 processed per loop.
+      "umull      v4.8h, v0.8b, v20.8b           \n"   // B to Sepia B
+      "umlal      v4.8h, v1.8b, v21.8b           \n"   // G
+      "umlal      v4.8h, v2.8b, v22.8b           \n"   // R
+      "umull      v5.8h, v0.8b, v24.8b           \n"   // B to Sepia G
+      "umlal      v5.8h, v1.8b, v25.8b           \n"   // G
+      "umlal      v5.8h, v2.8b, v26.8b           \n"   // R
+      "umull      v6.8h, v0.8b, v28.8b           \n"   // B to Sepia R
+      "umlal      v6.8h, v1.8b, v29.8b           \n"   // G
+      "umlal      v6.8h, v2.8b, v30.8b           \n"   // R
+      "uqshrn     v0.8b, v4.8h, #7               \n"   // 16 bit to 8 bit B
+      "uqshrn     v1.8b, v5.8h, #7               \n"   // 16 bit to 8 bit G
+      "uqshrn     v2.8b, v6.8h, #7               \n"   // 16 bit to 8 bit R
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // store 8 pixels.
+      "b.gt       1b                             \n"
+      : "+r"(dst_argb),  // %0
+        "+r"(width)      // %1
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v20",
+        "v21", "v22", "v24", "v25", "v26", "v28", "v29", "v30");
 }
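Each sepia output channel is a weighted sum of the input (b, g, r) shifted down by 7 and saturated (uqshrn #7, no rounding). A scalar sketch taking the coefficient triple as parameters, matching the movi constants above (e.g. new B uses 17, 68, 35):

```c
#include <assert.h>
#include <stdint.h>

// Scalar model of one ARGBSepiaRow output channel: (b*cb + g*cg + r*cr) >> 7
// with unsigned saturation, as done by umull/umlal and uqshrn #7.
static uint8_t sepia_channel(uint8_t b, uint8_t g, uint8_t r,
                             int cb, int cg, int cr) {
  int out = (b * cb + g * cg + r * cr) >> 7;
  return out > 255 ? 255 : (uint8_t)out;
}
```

The G and R coefficient sums exceed 128, so bright inputs saturate at 255, giving sepia its warm cast.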
 
 // Transform 8 ARGB pixels (32 bytes) with color matrix.
 // TODO(fbarchard): Was same as Sepia except matrix is provided.  This function
 // needs to saturate.  Consider doing a non-saturating version.
-void ARGBColorMatrixRow_NEON(const uint8* src_argb, uint8* dst_argb,
-                             const int8* matrix_argb, int width) {
-  asm volatile (
-    MEMACCESS(3)
-    "ld1        {v2.16b}, [%3]                 \n"  // load 3 ARGB vectors.
-    "sxtl       v0.8h, v2.8b                   \n"  // B,G coefficients s16.
-    "sxtl2      v1.8h, v2.16b                  \n"  // R,A coefficients s16.
+void ARGBColorMatrixRow_NEON(const uint8_t* src_argb,
+                             uint8_t* dst_argb,
+                             const int8_t* matrix_argb,
+                             int width) {
+  asm volatile(
+      "ld1        {v2.16b}, [%3]                 \n"  // load 3 ARGB vectors.
+      "sxtl       v0.8h, v2.8b                   \n"  // B,G coefficients s16.
+      "sxtl2      v1.8h, v2.16b                  \n"  // R,A coefficients s16.
 
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v16.8b,v17.8b,v18.8b,v19.8b}, [%0], #32 \n"  // load 8 pixels.
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    "uxtl       v16.8h, v16.8b                 \n"  // b (0 .. 255) 16 bit
-    "uxtl       v17.8h, v17.8b                 \n"  // g
-    "uxtl       v18.8h, v18.8b                 \n"  // r
-    "uxtl       v19.8h, v19.8b                 \n"  // a
-    "mul        v22.8h, v16.8h, v0.h[0]        \n"  // B = B * Matrix B
-    "mul        v23.8h, v16.8h, v0.h[4]        \n"  // G = B * Matrix G
-    "mul        v24.8h, v16.8h, v1.h[0]        \n"  // R = B * Matrix R
-    "mul        v25.8h, v16.8h, v1.h[4]        \n"  // A = B * Matrix A
-    "mul        v4.8h, v17.8h, v0.h[1]         \n"  // B += G * Matrix B
-    "mul        v5.8h, v17.8h, v0.h[5]         \n"  // G += G * Matrix G
-    "mul        v6.8h, v17.8h, v1.h[1]         \n"  // R += G * Matrix R
-    "mul        v7.8h, v17.8h, v1.h[5]         \n"  // A += G * Matrix A
-    "sqadd      v22.8h, v22.8h, v4.8h          \n"  // Accumulate B
-    "sqadd      v23.8h, v23.8h, v5.8h          \n"  // Accumulate G
-    "sqadd      v24.8h, v24.8h, v6.8h          \n"  // Accumulate R
-    "sqadd      v25.8h, v25.8h, v7.8h          \n"  // Accumulate A
-    "mul        v4.8h, v18.8h, v0.h[2]         \n"  // B += R * Matrix B
-    "mul        v5.8h, v18.8h, v0.h[6]         \n"  // G += R * Matrix G
-    "mul        v6.8h, v18.8h, v1.h[2]         \n"  // R += R * Matrix R
-    "mul        v7.8h, v18.8h, v1.h[6]         \n"  // A += R * Matrix A
-    "sqadd      v22.8h, v22.8h, v4.8h          \n"  // Accumulate B
-    "sqadd      v23.8h, v23.8h, v5.8h          \n"  // Accumulate G
-    "sqadd      v24.8h, v24.8h, v6.8h          \n"  // Accumulate R
-    "sqadd      v25.8h, v25.8h, v7.8h          \n"  // Accumulate A
-    "mul        v4.8h, v19.8h, v0.h[3]         \n"  // B += A * Matrix B
-    "mul        v5.8h, v19.8h, v0.h[7]         \n"  // G += A * Matrix G
-    "mul        v6.8h, v19.8h, v1.h[3]         \n"  // R += A * Matrix R
-    "mul        v7.8h, v19.8h, v1.h[7]         \n"  // A += A * Matrix A
-    "sqadd      v22.8h, v22.8h, v4.8h          \n"  // Accumulate B
-    "sqadd      v23.8h, v23.8h, v5.8h          \n"  // Accumulate G
-    "sqadd      v24.8h, v24.8h, v6.8h          \n"  // Accumulate R
-    "sqadd      v25.8h, v25.8h, v7.8h          \n"  // Accumulate A
-    "sqshrun    v16.8b, v22.8h, #6             \n"  // 16 bit to 8 bit B
-    "sqshrun    v17.8b, v23.8h, #6             \n"  // 16 bit to 8 bit G
-    "sqshrun    v18.8b, v24.8h, #6             \n"  // 16 bit to 8 bit R
-    "sqshrun    v19.8b, v25.8h, #6             \n"  // 16 bit to 8 bit A
-    MEMACCESS(1)
-    "st4        {v16.8b,v17.8b,v18.8b,v19.8b}, [%1], #32 \n"  // store 8 pixels.
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),   // %0
-    "+r"(dst_argb),   // %1
-    "+r"(width)       // %2
-  : "r"(matrix_argb)  // %3
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16", "v17",
-    "v18", "v19", "v22", "v23", "v24", "v25"
-  );
+      "1:                                        \n"
+      "ld4        {v16.8b,v17.8b,v18.8b,v19.8b}, [%0], #32 \n"  // load 8 ARGB
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
+      "uxtl       v16.8h, v16.8b                 \n"  // b (0 .. 255) 16 bit
+      "uxtl       v17.8h, v17.8b                 \n"  // g
+      "uxtl       v18.8h, v18.8b                 \n"  // r
+      "uxtl       v19.8h, v19.8b                 \n"  // a
+      "mul        v22.8h, v16.8h, v0.h[0]        \n"  // B = B * Matrix B
+      "mul        v23.8h, v16.8h, v0.h[4]        \n"  // G = B * Matrix G
+      "mul        v24.8h, v16.8h, v1.h[0]        \n"  // R = B * Matrix R
+      "mul        v25.8h, v16.8h, v1.h[4]        \n"  // A = B * Matrix A
+      "mul        v4.8h, v17.8h, v0.h[1]         \n"  // B += G * Matrix B
+      "mul        v5.8h, v17.8h, v0.h[5]         \n"  // G += G * Matrix G
+      "mul        v6.8h, v17.8h, v1.h[1]         \n"  // R += G * Matrix R
+      "mul        v7.8h, v17.8h, v1.h[5]         \n"  // A += G * Matrix A
+      "sqadd      v22.8h, v22.8h, v4.8h          \n"  // Accumulate B
+      "sqadd      v23.8h, v23.8h, v5.8h          \n"  // Accumulate G
+      "sqadd      v24.8h, v24.8h, v6.8h          \n"  // Accumulate R
+      "sqadd      v25.8h, v25.8h, v7.8h          \n"  // Accumulate A
+      "mul        v4.8h, v18.8h, v0.h[2]         \n"  // B += R * Matrix B
+      "mul        v5.8h, v18.8h, v0.h[6]         \n"  // G += R * Matrix G
+      "mul        v6.8h, v18.8h, v1.h[2]         \n"  // R += R * Matrix R
+      "mul        v7.8h, v18.8h, v1.h[6]         \n"  // A += R * Matrix A
+      "sqadd      v22.8h, v22.8h, v4.8h          \n"  // Accumulate B
+      "sqadd      v23.8h, v23.8h, v5.8h          \n"  // Accumulate G
+      "sqadd      v24.8h, v24.8h, v6.8h          \n"  // Accumulate R
+      "sqadd      v25.8h, v25.8h, v7.8h          \n"  // Accumulate A
+      "mul        v4.8h, v19.8h, v0.h[3]         \n"  // B += A * Matrix B
+      "mul        v5.8h, v19.8h, v0.h[7]         \n"  // G += A * Matrix G
+      "mul        v6.8h, v19.8h, v1.h[3]         \n"  // R += A * Matrix R
+      "mul        v7.8h, v19.8h, v1.h[7]         \n"  // A += A * Matrix A
+      "sqadd      v22.8h, v22.8h, v4.8h          \n"  // Accumulate B
+      "sqadd      v23.8h, v23.8h, v5.8h          \n"  // Accumulate G
+      "sqadd      v24.8h, v24.8h, v6.8h          \n"  // Accumulate R
+      "sqadd      v25.8h, v25.8h, v7.8h          \n"  // Accumulate A
+      "sqshrun    v16.8b, v22.8h, #6             \n"  // 16 bit to 8 bit B
+      "sqshrun    v17.8b, v23.8h, #6             \n"  // 16 bit to 8 bit G
+      "sqshrun    v18.8b, v24.8h, #6             \n"  // 16 bit to 8 bit R
+      "sqshrun    v19.8b, v25.8h, #6             \n"  // 16 bit to 8 bit A
+      "st4        {v16.8b,v17.8b,v18.8b,v19.8b}, [%1], #32 \n"  // store 8 ARGB
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),   // %0
+        "+r"(dst_argb),   // %1
+        "+r"(width)       // %2
+      : "r"(matrix_argb)  // %3
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16",
+        "v17", "v18", "v19", "v22", "v23", "v24", "v25");
 }
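Each color-matrix output channel is a signed dot product of (b, g, r, a) with four int8 coefficients, accumulated with sqadd and narrowed by sqshrun #6 (so 64 acts as an identity coefficient). A scalar sketch of one output channel, coefficients passed individually for clarity:

```c
#include <assert.h>
#include <stdint.h>

// Scalar model of one ARGBColorMatrixRow output channel: signed dot
// product then sqshrun #6 (shift by 6, saturate to [0, 255]).
static uint8_t color_matrix_channel(uint8_t b, uint8_t g, uint8_t r,
                                    uint8_t a, int mb, int mg, int mr,
                                    int ma) {
  int sum = b * mb + g * mg + r * mr + a * ma;
  sum >>= 6;
  if (sum < 0) sum = 0;                     /* sqshrun clamps negatives */
  return sum > 255 ? 255 : (uint8_t)sum;
}
```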
 
 // TODO(fbarchard): fix vqshrun in ARGBMultiplyRow_NEON and reenable.
 // Multiply 2 rows of ARGB pixels together, 8 pixels at a time.
-void ARGBMultiplyRow_NEON(const uint8* src_argb0, const uint8* src_argb1,
-                          uint8* dst_argb, int width) {
-  asm volatile (
-    // 8 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB pixels.
-    MEMACCESS(1)
-    "ld4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%1], #32 \n"  // load 8 more pixels.
-    "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
-    "umull      v0.8h, v0.8b, v4.8b            \n"  // multiply B
-    "umull      v1.8h, v1.8b, v5.8b            \n"  // multiply G
-    "umull      v2.8h, v2.8b, v6.8b            \n"  // multiply R
-    "umull      v3.8h, v3.8b, v7.8b            \n"  // multiply A
-    "rshrn      v0.8b, v0.8h, #8               \n"  // 16 bit to 8 bit B
-    "rshrn      v1.8b, v1.8h, #8               \n"  // 16 bit to 8 bit G
-    "rshrn      v2.8b, v2.8h, #8               \n"  // 16 bit to 8 bit R
-    "rshrn      v3.8b, v3.8h, #8               \n"  // 16 bit to 8 bit A
-    MEMACCESS(2)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%2], #32 \n"  // store 8 ARGB pixels
-    "b.gt       1b                             \n"
-
-  : "+r"(src_argb0),  // %0
-    "+r"(src_argb1),  // %1
-    "+r"(dst_argb),   // %2
-    "+r"(width)       // %3
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7"
-  );
+void ARGBMultiplyRow_NEON(const uint8_t* src_argb0,
+                          const uint8_t* src_argb1,
+                          uint8_t* dst_argb,
+                          int width) {
+  asm volatile(
+      // 8 pixel loop.
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB
+      "ld4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%1], #32 \n"  // load 8 more
+      "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
+      "umull      v0.8h, v0.8b, v4.8b            \n"  // multiply B
+      "umull      v1.8h, v1.8b, v5.8b            \n"  // multiply G
+      "umull      v2.8h, v2.8b, v6.8b            \n"  // multiply R
+      "umull      v3.8h, v3.8b, v7.8b            \n"  // multiply A
+      "rshrn      v0.8b, v0.8h, #8               \n"  // 16 bit to 8 bit B
+      "rshrn      v1.8b, v1.8h, #8               \n"  // 16 bit to 8 bit G
+      "rshrn      v2.8b, v2.8h, #8               \n"  // 16 bit to 8 bit R
+      "rshrn      v3.8b, v3.8h, #8               \n"  // 16 bit to 8 bit A
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%2], #32 \n"  // store 8 ARGB
+      "b.gt       1b                             \n"
+      : "+r"(src_argb0),  // %0
+        "+r"(src_argb1),  // %1
+        "+r"(dst_argb),   // %2
+        "+r"(width)       // %3
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7");
 }
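ARGBMultiplyRow_NEON multiplies corresponding channels and narrows with a rounded shift (umull + rshrn #8). A scalar sketch of one lane:

```c
#include <assert.h>
#include <stdint.h>

// Scalar model of ARGBMultiplyRow's lane math: rounded (x * y) / 256.
// Note 255 * 255 rounds to 254, not 255, because the divisor is 256.
static uint8_t multiply_channel(uint8_t x, uint8_t y) {
  return (uint8_t)((x * y + 128) >> 8);
}
```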
 
 // Add 2 rows of ARGB pixels together, 8 pixels at a time.
-void ARGBAddRow_NEON(const uint8* src_argb0, const uint8* src_argb1,
-                     uint8* dst_argb, int width) {
-  asm volatile (
-    // 8 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB pixels.
-    MEMACCESS(1)
-    "ld4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%1], #32 \n"  // load 8 more pixels.
-    "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
-    "uqadd      v0.8b, v0.8b, v4.8b            \n"
-    "uqadd      v1.8b, v1.8b, v5.8b            \n"
-    "uqadd      v2.8b, v2.8b, v6.8b            \n"
-    "uqadd      v3.8b, v3.8b, v7.8b            \n"
-    MEMACCESS(2)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%2], #32 \n"  // store 8 ARGB pixels
-    "b.gt       1b                             \n"
-
-  : "+r"(src_argb0),  // %0
-    "+r"(src_argb1),  // %1
-    "+r"(dst_argb),   // %2
-    "+r"(width)       // %3
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7"
-  );
+void ARGBAddRow_NEON(const uint8_t* src_argb0,
+                     const uint8_t* src_argb1,
+                     uint8_t* dst_argb,
+                     int width) {
+  asm volatile(
+      // 8 pixel loop.
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB
+      "ld4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%1], #32 \n"  // load 8 more
+      "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
+      "uqadd      v0.8b, v0.8b, v4.8b            \n"
+      "uqadd      v1.8b, v1.8b, v5.8b            \n"
+      "uqadd      v2.8b, v2.8b, v6.8b            \n"
+      "uqadd      v3.8b, v3.8b, v7.8b            \n"
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%2], #32 \n"  // store 8 ARGB
+      "b.gt       1b                             \n"
+      : "+r"(src_argb0),  // %0
+        "+r"(src_argb1),  // %1
+        "+r"(dst_argb),   // %2
+        "+r"(width)       // %3
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7");
 }
 
 // Subtract 2 rows of ARGB pixels, 8 pixels at a time.
-void ARGBSubtractRow_NEON(const uint8* src_argb0, const uint8* src_argb1,
-                          uint8* dst_argb, int width) {
-  asm volatile (
-    // 8 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB pixels.
-    MEMACCESS(1)
-    "ld4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%1], #32 \n"  // load 8 more pixels.
-    "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
-    "uqsub      v0.8b, v0.8b, v4.8b            \n"
-    "uqsub      v1.8b, v1.8b, v5.8b            \n"
-    "uqsub      v2.8b, v2.8b, v6.8b            \n"
-    "uqsub      v3.8b, v3.8b, v7.8b            \n"
-    MEMACCESS(2)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%2], #32 \n"  // store 8 ARGB pixels
-    "b.gt       1b                             \n"
-
-  : "+r"(src_argb0),  // %0
-    "+r"(src_argb1),  // %1
-    "+r"(dst_argb),   // %2
-    "+r"(width)       // %3
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7"
-  );
+void ARGBSubtractRow_NEON(const uint8_t* src_argb0,
+                          const uint8_t* src_argb1,
+                          uint8_t* dst_argb,
+                          int width) {
+  asm volatile(
+      // 8 pixel loop.
+      "1:                                        \n"
+      "ld4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32 \n"  // load 8 ARGB
+      "ld4        {v4.8b,v5.8b,v6.8b,v7.8b}, [%1], #32 \n"  // load 8 more
+      "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
+      "uqsub      v0.8b, v0.8b, v4.8b            \n"
+      "uqsub      v1.8b, v1.8b, v5.8b            \n"
+      "uqsub      v2.8b, v2.8b, v6.8b            \n"
+      "uqsub      v3.8b, v3.8b, v7.8b            \n"
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%2], #32 \n"  // store 8 ARGB
+      "b.gt       1b                             \n"
+      : "+r"(src_argb0),  // %0
+        "+r"(src_argb1),  // %1
+        "+r"(dst_argb),   // %2
+        "+r"(width)       // %3
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7");
 }
 
 // Adds Sobel X and Sobel Y and stores Sobel into ARGB.
@@ -2638,54 +2502,50 @@
 // R = Sobel
 // G = Sobel
 // B = Sobel
-void SobelRow_NEON(const uint8* src_sobelx, const uint8* src_sobely,
-                     uint8* dst_argb, int width) {
-  asm volatile (
-    "movi       v3.8b, #255                    \n"  // alpha
-    // 8 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.8b}, [%0], #8              \n"  // load 8 sobelx.
-    MEMACCESS(1)
-    "ld1        {v1.8b}, [%1], #8              \n"  // load 8 sobely.
-    "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
-    "uqadd      v0.8b, v0.8b, v1.8b            \n"  // add
-    "orr        v1.8b, v0.8b, v0.8b            \n"
-    "orr        v2.8b, v0.8b, v0.8b            \n"
-    MEMACCESS(2)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%2], #32 \n"  // store 8 ARGB pixels
-    "b.gt       1b                             \n"
-  : "+r"(src_sobelx),  // %0
-    "+r"(src_sobely),  // %1
-    "+r"(dst_argb),    // %2
-    "+r"(width)        // %3
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3"
-  );
+void SobelRow_NEON(const uint8_t* src_sobelx,
+                   const uint8_t* src_sobely,
+                   uint8_t* dst_argb,
+                   int width) {
+  asm volatile(
+      "movi       v3.8b, #255                    \n"  // alpha
+      // 8 pixel loop.
+      "1:                                        \n"
+      "ld1        {v0.8b}, [%0], #8              \n"  // load 8 sobelx.
+      "ld1        {v1.8b}, [%1], #8              \n"  // load 8 sobely.
+      "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
+      "uqadd      v0.8b, v0.8b, v1.8b            \n"  // add
+      "orr        v1.8b, v0.8b, v0.8b            \n"
+      "orr        v2.8b, v0.8b, v0.8b            \n"
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%2], #32 \n"  // store 8 ARGB
+      "b.gt       1b                             \n"
+      : "+r"(src_sobelx),  // %0
+        "+r"(src_sobely),  // %1
+        "+r"(dst_argb),    // %2
+        "+r"(width)        // %3
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3");
 }
 
 // Adds Sobel X and Sobel Y and stores Sobel into plane.
-void SobelToPlaneRow_NEON(const uint8* src_sobelx, const uint8* src_sobely,
-                          uint8* dst_y, int width) {
-  asm volatile (
-    // 16 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b}, [%0], #16            \n"  // load 16 sobelx.
-    MEMACCESS(1)
-    "ld1        {v1.16b}, [%1], #16            \n"  // load 16 sobely.
-    "subs       %w3, %w3, #16                  \n"  // 16 processed per loop.
-    "uqadd      v0.16b, v0.16b, v1.16b         \n"  // add
-    MEMACCESS(2)
-    "st1        {v0.16b}, [%2], #16            \n"  // store 16 pixels.
-    "b.gt       1b                             \n"
-  : "+r"(src_sobelx),  // %0
-    "+r"(src_sobely),  // %1
-    "+r"(dst_y),       // %2
-    "+r"(width)        // %3
-  :
-  : "cc", "memory", "v0", "v1"
-  );
+void SobelToPlaneRow_NEON(const uint8_t* src_sobelx,
+                          const uint8_t* src_sobely,
+                          uint8_t* dst_y,
+                          int width) {
+  asm volatile(
+      // 16 pixel loop.
+      "1:                                        \n"
+      "ld1        {v0.16b}, [%0], #16            \n"  // load 16 sobelx.
+      "ld1        {v1.16b}, [%1], #16            \n"  // load 16 sobely.
+      "subs       %w3, %w3, #16                  \n"  // 16 processed per loop.
+      "uqadd      v0.16b, v0.16b, v1.16b         \n"  // add
+      "st1        {v0.16b}, [%2], #16            \n"  // store 16 pixels.
+      "b.gt       1b                             \n"
+      : "+r"(src_sobelx),  // %0
+        "+r"(src_sobely),  // %1
+        "+r"(dst_y),       // %2
+        "+r"(width)        // %3
+      :
+      : "cc", "memory", "v0", "v1");
 }
 
 // Mixes Sobel X, Sobel Y and Sobel into ARGB.
@@ -2693,114 +2553,329 @@
 // R = Sobel X
 // G = Sobel
 // B = Sobel Y
-void SobelXYRow_NEON(const uint8* src_sobelx, const uint8* src_sobely,
-                     uint8* dst_argb, int width) {
-  asm volatile (
-    "movi       v3.8b, #255                    \n"  // alpha
-    // 8 pixel loop.
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v2.8b}, [%0], #8              \n"  // load 8 sobelx.
-    MEMACCESS(1)
-    "ld1        {v0.8b}, [%1], #8              \n"  // load 8 sobely.
-    "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
-    "uqadd      v1.8b, v0.8b, v2.8b            \n"  // add
-    MEMACCESS(2)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%2], #32 \n"  // store 8 ARGB pixels
-    "b.gt       1b                             \n"
-  : "+r"(src_sobelx),  // %0
-    "+r"(src_sobely),  // %1
-    "+r"(dst_argb),    // %2
-    "+r"(width)        // %3
-  :
-  : "cc", "memory", "v0", "v1", "v2", "v3"
-  );
+void SobelXYRow_NEON(const uint8_t* src_sobelx,
+                     const uint8_t* src_sobely,
+                     uint8_t* dst_argb,
+                     int width) {
+  asm volatile(
+      "movi       v3.8b, #255                    \n"  // alpha
+      // 8 pixel loop.
+      "1:                                        \n"
+      "ld1        {v2.8b}, [%0], #8              \n"  // load 8 sobelx.
+      "ld1        {v0.8b}, [%1], #8              \n"  // load 8 sobely.
+      "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
+      "uqadd      v1.8b, v0.8b, v2.8b            \n"  // add
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%2], #32 \n"  // store 8 ARGB
+      "b.gt       1b                             \n"
+      : "+r"(src_sobelx),  // %0
+        "+r"(src_sobely),  // %1
+        "+r"(dst_argb),    // %2
+        "+r"(width)        // %3
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v3");
 }
 
 // SobelX as a matrix is
 // -1  0  1
 // -2  0  2
 // -1  0  1
-void SobelXRow_NEON(const uint8* src_y0, const uint8* src_y1,
-                    const uint8* src_y2, uint8* dst_sobelx, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.8b}, [%0],%5               \n"  // top
-    MEMACCESS(0)
-    "ld1        {v1.8b}, [%0],%6               \n"
-    "usubl      v0.8h, v0.8b, v1.8b            \n"
-    MEMACCESS(1)
-    "ld1        {v2.8b}, [%1],%5               \n"  // center * 2
-    MEMACCESS(1)
-    "ld1        {v3.8b}, [%1],%6               \n"
-    "usubl      v1.8h, v2.8b, v3.8b            \n"
-    "add        v0.8h, v0.8h, v1.8h            \n"
-    "add        v0.8h, v0.8h, v1.8h            \n"
-    MEMACCESS(2)
-    "ld1        {v2.8b}, [%2],%5               \n"  // bottom
-    MEMACCESS(2)
-    "ld1        {v3.8b}, [%2],%6               \n"
-    "subs       %w4, %w4, #8                   \n"  // 8 pixels
-    "usubl      v1.8h, v2.8b, v3.8b            \n"
-    "add        v0.8h, v0.8h, v1.8h            \n"
-    "abs        v0.8h, v0.8h                   \n"
-    "uqxtn      v0.8b, v0.8h                   \n"
-    MEMACCESS(3)
-    "st1        {v0.8b}, [%3], #8              \n"  // store 8 sobelx
-    "b.gt       1b                             \n"
-  : "+r"(src_y0),      // %0
-    "+r"(src_y1),      // %1
-    "+r"(src_y2),      // %2
-    "+r"(dst_sobelx),  // %3
-    "+r"(width)        // %4
-  : "r"(2LL),          // %5
-    "r"(6LL)           // %6
-  : "cc", "memory", "v0", "v1", "v2", "v3"  // Clobber List
-  );
+void SobelXRow_NEON(const uint8_t* src_y0,
+                    const uint8_t* src_y1,
+                    const uint8_t* src_y2,
+                    uint8_t* dst_sobelx,
+                    int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld1        {v0.8b}, [%0],%5               \n"  // top
+      "ld1        {v1.8b}, [%0],%6               \n"
+      "usubl      v0.8h, v0.8b, v1.8b            \n"
+      "ld1        {v2.8b}, [%1],%5               \n"  // center * 2
+      "ld1        {v3.8b}, [%1],%6               \n"
+      "usubl      v1.8h, v2.8b, v3.8b            \n"
+      "add        v0.8h, v0.8h, v1.8h            \n"
+      "add        v0.8h, v0.8h, v1.8h            \n"
+      "ld1        {v2.8b}, [%2],%5               \n"  // bottom
+      "ld1        {v3.8b}, [%2],%6               \n"
+      "subs       %w4, %w4, #8                   \n"  // 8 pixels
+      "usubl      v1.8h, v2.8b, v3.8b            \n"
+      "add        v0.8h, v0.8h, v1.8h            \n"
+      "abs        v0.8h, v0.8h                   \n"
+      "uqxtn      v0.8b, v0.8h                   \n"
+      "st1        {v0.8b}, [%3], #8              \n"  // store 8 sobelx
+      "b.gt       1b                             \n"
+      : "+r"(src_y0),                           // %0
+        "+r"(src_y1),                           // %1
+        "+r"(src_y2),                           // %2
+        "+r"(dst_sobelx),                       // %3
+        "+r"(width)                             // %4
+      : "r"(2LL),                               // %5
+        "r"(6LL)                                // %6
+      : "cc", "memory", "v0", "v1", "v2", "v3"  // Clobber List
+      );
 }
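Reading off the tap pattern in SobelXRow_NEON above (post-increments of 2 then 6 mean each row is sampled at i and i+2, and the center row is accumulated twice), the per-pixel scalar form is, as a hedged sketch with a hypothetical helper name:

```c
#include <assert.h>
#include <stdint.h>

// Scalar reading of SobelXRow_NEON's taps:
// |(y0[i]-y0[i+2]) + 2*(y1[i]-y1[i+2]) + (y2[i]-y2[i+2])|, saturated to 8 bits.
static uint8_t sobelx_c(const uint8_t* y0, const uint8_t* y1,
                        const uint8_t* y2, int i) {
  int s = (y0[i] - y0[i + 2]) + 2 * (y1[i] - y1[i + 2]) + (y2[i] - y2[i + 2]);
  if (s < 0) s = -s;                  // abs v0.8h
  return s > 255 ? 255 : (uint8_t)s;  // uqxtn saturates to a byte
}
```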
 
 // SobelY as a matrix is
 // -1 -2 -1
 //  0  0  0
 //  1  2  1
-void SobelYRow_NEON(const uint8* src_y0, const uint8* src_y1,
-                    uint8* dst_sobely, int width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.8b}, [%0],%4               \n"  // left
-    MEMACCESS(1)
-    "ld1        {v1.8b}, [%1],%4               \n"
-    "usubl      v0.8h, v0.8b, v1.8b            \n"
-    MEMACCESS(0)
-    "ld1        {v2.8b}, [%0],%4               \n"  // center * 2
-    MEMACCESS(1)
-    "ld1        {v3.8b}, [%1],%4               \n"
-    "usubl      v1.8h, v2.8b, v3.8b            \n"
-    "add        v0.8h, v0.8h, v1.8h            \n"
-    "add        v0.8h, v0.8h, v1.8h            \n"
-    MEMACCESS(0)
-    "ld1        {v2.8b}, [%0],%5               \n"  // right
-    MEMACCESS(1)
-    "ld1        {v3.8b}, [%1],%5               \n"
-    "subs       %w3, %w3, #8                   \n"  // 8 pixels
-    "usubl      v1.8h, v2.8b, v3.8b            \n"
-    "add        v0.8h, v0.8h, v1.8h            \n"
-    "abs        v0.8h, v0.8h                   \n"
-    "uqxtn      v0.8b, v0.8h                   \n"
-    MEMACCESS(2)
-    "st1        {v0.8b}, [%2], #8              \n"  // store 8 sobely
-    "b.gt       1b                             \n"
-  : "+r"(src_y0),      // %0
-    "+r"(src_y1),      // %1
-    "+r"(dst_sobely),  // %2
-    "+r"(width)        // %3
-  : "r"(1LL),          // %4
-    "r"(6LL)           // %5
-  : "cc", "memory", "v0", "v1", "v2", "v3"  // Clobber List
-  );
+void SobelYRow_NEON(const uint8_t* src_y0,
+                    const uint8_t* src_y1,
+                    uint8_t* dst_sobely,
+                    int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld1        {v0.8b}, [%0],%4               \n"  // left
+      "ld1        {v1.8b}, [%1],%4               \n"
+      "usubl      v0.8h, v0.8b, v1.8b            \n"
+      "ld1        {v2.8b}, [%0],%4               \n"  // center * 2
+      "ld1        {v3.8b}, [%1],%4               \n"
+      "usubl      v1.8h, v2.8b, v3.8b            \n"
+      "add        v0.8h, v0.8h, v1.8h            \n"
+      "add        v0.8h, v0.8h, v1.8h            \n"
+      "ld1        {v2.8b}, [%0],%5               \n"  // right
+      "ld1        {v3.8b}, [%1],%5               \n"
+      "subs       %w3, %w3, #8                   \n"  // 8 pixels
+      "usubl      v1.8h, v2.8b, v3.8b            \n"
+      "add        v0.8h, v0.8h, v1.8h            \n"
+      "abs        v0.8h, v0.8h                   \n"
+      "uqxtn      v0.8b, v0.8h                   \n"
+      "st1        {v0.8b}, [%2], #8              \n"  // store 8 sobely
+      "b.gt       1b                             \n"
+      : "+r"(src_y0),                           // %0
+        "+r"(src_y1),                           // %1
+        "+r"(dst_sobely),                       // %2
+        "+r"(width)                             // %3
+      : "r"(1LL),                               // %4
+        "r"(6LL)                                // %5
+      : "cc", "memory", "v0", "v1", "v2", "v3"  // Clobber List
+      );
 }
+
+// Caveat - rounds float to half float whereas scaling version truncates.
+void HalfFloat1Row_NEON(const uint16_t* src,
+                        uint16_t* dst,
+                        float /*unused*/,
+                        int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld1        {v1.16b}, [%0], #16            \n"  // load 8 shorts
+      "subs       %w2, %w2, #8                   \n"  // 8 pixels per loop
+      "uxtl       v2.4s, v1.4h                   \n"  // 8 int's
+      "uxtl2      v3.4s, v1.8h                   \n"
+      "scvtf      v2.4s, v2.4s                   \n"  // 8 floats
+      "scvtf      v3.4s, v3.4s                   \n"
+      "fcvtn      v1.4h, v2.4s                   \n"  // 8 half floats
+      "fcvtn2     v1.8h, v3.4s                   \n"
+      "st1        {v1.16b}, [%1], #16            \n"  // store 8 shorts
+      "b.gt       1b                             \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      :
+      : "cc", "memory", "v1", "v2", "v3");
+}
+
+void HalfFloatRow_NEON(const uint16_t* src,
+                       uint16_t* dst,
+                       float scale,
+                       int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld1        {v1.16b}, [%0], #16            \n"  // load 8 shorts
+      "subs       %w2, %w2, #8                   \n"  // 8 pixels per loop
+      "uxtl       v2.4s, v1.4h                   \n"  // 8 int's
+      "uxtl2      v3.4s, v1.8h                   \n"
+      "scvtf      v2.4s, v2.4s                   \n"  // 8 floats
+      "scvtf      v3.4s, v3.4s                   \n"
+      "fmul       v2.4s, v2.4s, %3.s[0]          \n"  // adjust exponent
+      "fmul       v3.4s, v3.4s, %3.s[0]          \n"
+      "uqshrn     v1.4h, v2.4s, #13              \n"  // isolate halffloat
+      "uqshrn2    v1.8h, v3.4s, #13              \n"
+      "st1        {v1.16b}, [%1], #16            \n"  // store 8 shorts
+      "b.gt       1b                             \n"
+      : "+r"(src),                      // %0
+        "+r"(dst),                      // %1
+        "+r"(width)                     // %2
+      : "w"(scale * 1.9259299444e-34f)  // %3
+      : "cc", "memory", "v1", "v2", "v3");
+}
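HalfFloatRow_NEON's magic constant 1.9259299444e-34f is 2^-112: multiplying by it rebases the float32 exponent bias (127) to the float16 bias (15), after which the raw float bits shifted right by 13 are the half-float bit pattern; `uqshrn #13` performs that shift with unsigned saturation. A scalar sketch of the same trick with scale fixed at 1.0 (subnormal and overflow handling elided; note it truncates, while `fcvtn` in HalfFloat1Row rounds):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

// Scalar sketch of the magic-scale float->half conversion used by
// HalfFloatRow_NEON, for nonnegative inputs and scale = 1.0.
static uint16_t float_to_half_bits(float v) {
  float rebased = v * 1.9259299444e-34f;  // * 2^-112 rebiases the exponent
  uint32_t bits;
  memcpy(&bits, &rebased, sizeof(bits));  // reinterpret the float bits
  return (uint16_t)(bits >> 13);          // uqshrn #13 keeps the top bits
}
```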
+
+void ByteToFloatRow_NEON(const uint8_t* src,
+                         float* dst,
+                         float scale,
+                         int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld1        {v1.8b}, [%0], #8              \n"  // load 8 bytes
+      "subs       %w2, %w2, #8                   \n"  // 8 pixels per loop
+      "uxtl       v1.8h, v1.8b                   \n"  // 8 shorts
+      "uxtl       v2.4s, v1.4h                   \n"  // 8 ints
+      "uxtl2      v3.4s, v1.8h                   \n"
+      "scvtf      v2.4s, v2.4s                   \n"  // 8 floats
+      "scvtf      v3.4s, v3.4s                   \n"
+      "fmul       v2.4s, v2.4s, %3.s[0]          \n"  // scale
+      "fmul       v3.4s, v3.4s, %3.s[0]          \n"
+      "st1        {v2.16b, v3.16b}, [%1], #32    \n"  // store 8 floats
+      "b.gt       1b                             \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      : "w"(scale)   // %3
+      : "cc", "memory", "v1", "v2", "v3");
+}
+
+float ScaleMaxSamples_NEON(const float* src,
+                           float* dst,
+                           float scale,
+                           int width) {
+  float fmax;
+  asm volatile(
+      "movi       v5.4s, #0                      \n"  // max
+      "movi       v6.4s, #0                      \n"
+
+      "1:                                        \n"
+      "ld1        {v1.4s, v2.4s}, [%0], #32      \n"  // load 8 samples
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop
+      "fmul       v3.4s, v1.4s, %4.s[0]          \n"  // scale
+      "fmul       v4.4s, v2.4s, %4.s[0]          \n"  // scale
+      "fmax       v5.4s, v5.4s, v1.4s            \n"  // max
+      "fmax       v6.4s, v6.4s, v2.4s            \n"
+      "st1        {v3.4s, v4.4s}, [%1], #32      \n"  // store 8 samples
+      "b.gt       1b                             \n"
+      "fmax       v5.4s, v5.4s, v6.4s            \n"  // max
+      "fmaxv      %s3, v5.4s                     \n"  // signed max acculator
+      : "+r"(src),                                    // %0
+        "+r"(dst),                                    // %1
+        "+r"(width),                                  // %2
+        "=w"(fmax)                                    // %3
+      : "w"(scale)                                    // %4
+      : "cc", "memory", "v1", "v2", "v3", "v4", "v5", "v6");
+  return fmax;
+}
+
+float ScaleSumSamples_NEON(const float* src,
+                           float* dst,
+                           float scale,
+                           int width) {
+  float fsum;
+  asm volatile(
+      "movi       v5.4s, #0                      \n"  // max
+      "movi       v6.4s, #0                      \n"  // max
+
+      "1:                                        \n"
+      "ld1        {v1.4s, v2.4s}, [%0], #32      \n"  // load 8 samples
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop
+      "fmul       v3.4s, v1.4s, %4.s[0]          \n"  // scale
+      "fmul       v4.4s, v2.4s, %4.s[0]          \n"
+      "fmla       v5.4s, v1.4s, v1.4s            \n"  // sum of squares
+      "fmla       v6.4s, v2.4s, v2.4s            \n"
+      "st1        {v3.4s, v4.4s}, [%1], #32      \n"  // store 8 samples
+      "b.gt       1b                             \n"
+      "faddp      v5.4s, v5.4s, v6.4s            \n"
+      "faddp      v5.4s, v5.4s, v5.4s            \n"
+      "faddp      %3.4s, v5.4s, v5.4s            \n"  // sum
+      : "+r"(src),                                    // %0
+        "+r"(dst),                                    // %1
+        "+r"(width),                                  // %2
+        "=w"(fsum)                                    // %3
+      : "w"(scale)                                    // %4
+      : "cc", "memory", "v1", "v2", "v3", "v4", "v5", "v6");
+  return fsum;
+}
+
+void ScaleSamples_NEON(const float* src, float* dst, float scale, int width) {
+  asm volatile(
+      "1:                                        \n"
+      "ld1        {v1.4s, v2.4s}, [%0], #32      \n"  // load 8 samples
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop
+      "fmul       v1.4s, v1.4s, %3.s[0]          \n"  // scale
+      "fmul       v2.4s, v2.4s, %3.s[0]          \n"  // scale
+      "st1        {v1.4s, v2.4s}, [%1], #32      \n"  // store 8 samples
+      "b.gt       1b                             \n"
+      : "+r"(src),   // %0
+        "+r"(dst),   // %1
+        "+r"(width)  // %2
+      : "w"(scale)   // %3
+      : "cc", "memory", "v1", "v2");
+}
+
+// filter 5 rows with 1, 4, 6, 4, 1 coefficients to produce 1 row.
+void GaussCol_NEON(const uint16_t* src0,
+                   const uint16_t* src1,
+                   const uint16_t* src2,
+                   const uint16_t* src3,
+                   const uint16_t* src4,
+                   uint32_t* dst,
+                   int width) {
+  asm volatile(
+      "movi       v6.8h, #4                      \n"  // constant 4
+      "movi       v7.8h, #6                      \n"  // constant 6
+
+      "1:                                        \n"
+      "ld1        {v1.8h}, [%0], #16             \n"  // load 8 samples, 5 rows
+      "ld1        {v2.8h}, [%4], #16             \n"
+      "uaddl      v0.4s, v1.4h, v2.4h            \n"  // * 1
+      "uaddl2     v1.4s, v1.8h, v2.8h            \n"  // * 1
+      "ld1        {v2.8h}, [%1], #16             \n"
+      "umlal      v0.4s, v2.4h, v6.4h            \n"  // * 4
+      "umlal2     v1.4s, v2.8h, v6.8h            \n"  // * 4
+      "ld1        {v2.8h}, [%2], #16             \n"
+      "umlal      v0.4s, v2.4h, v7.4h            \n"  // * 6
+      "umlal2     v1.4s, v2.8h, v7.8h            \n"  // * 6
+      "ld1        {v2.8h}, [%3], #16             \n"
+      "umlal      v0.4s, v2.4h, v6.4h            \n"  // * 4
+      "umlal2     v1.4s, v2.8h, v6.8h            \n"  // * 4
+      "subs       %w6, %w6, #8                   \n"  // 8 processed per loop
+      "st1        {v0.4s,v1.4s}, [%5], #32       \n"  // store 8 samples
+      "b.gt       1b                             \n"
+      : "+r"(src0),  // %0
+        "+r"(src1),  // %1
+        "+r"(src2),  // %2
+        "+r"(src3),  // %3
+        "+r"(src4),  // %4
+        "+r"(dst),   // %5
+        "+r"(width)  // %6
+      :
+      : "cc", "memory", "v0", "v1", "v2", "v6", "v7");
+}
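The widening multiply-accumulates in GaussCol_NEON implement a plain 5-tap binomial filter down the image. A scalar reference (hypothetical name, for illustration only):

```c
#include <assert.h>
#include <stdint.h>

// Scalar reference for the vertical 1-4-6-4-1 pass in GaussCol_NEON:
// each output is a 32-bit weighted sum of five source rows.
static void gauss_col_c(const uint16_t* s0, const uint16_t* s1,
                        const uint16_t* s2, const uint16_t* s3,
                        const uint16_t* s4, uint32_t* dst, int width) {
  for (int i = 0; i < width; ++i) {
    dst[i] = s0[i] + 4u * s1[i] + 6u * s2[i] + 4u * s3[i] + s4[i];
  }
}
```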
+
+// filter 5 adjacent samples with 1, 4, 6, 4, 1 coefficients to produce 1 row.
+void GaussRow_NEON(const uint32_t* src, uint16_t* dst, int width) {
+  const uint32_t* src1 = src + 1;
+  const uint32_t* src2 = src + 2;
+  const uint32_t* src3 = src + 3;
+  asm volatile(
+      "movi       v6.4s, #4                      \n"  // constant 4
+      "movi       v7.4s, #6                      \n"  // constant 6
+
+      "1:                                        \n"
+      "ld1        {v0.4s,v1.4s,v2.4s}, [%0], %6  \n"  // load 12 source samples
+      "add        v0.4s, v0.4s, v1.4s            \n"  // * 1
+      "add        v1.4s, v1.4s, v2.4s            \n"  // * 1
+      "ld1        {v2.4s,v3.4s}, [%2], #32       \n"
+      "mla        v0.4s, v2.4s, v7.4s            \n"  // * 6
+      "mla        v1.4s, v3.4s, v7.4s            \n"  // * 6
+      "ld1        {v2.4s,v3.4s}, [%1], #32       \n"
+      "ld1        {v4.4s,v5.4s}, [%3], #32       \n"
+      "add        v2.4s, v2.4s, v4.4s            \n"  // add rows for * 4
+      "add        v3.4s, v3.4s, v5.4s            \n"
+      "mla        v0.4s, v2.4s, v6.4s            \n"  // * 4
+      "mla        v1.4s, v3.4s, v6.4s            \n"  // * 4
+      "subs       %w5, %w5, #8                   \n"  // 8 processed per loop
+      "uqrshrn    v0.4h, v0.4s, #8               \n"  // round and pack
+      "uqrshrn2   v0.8h, v1.4s, #8               \n"
+      "st1        {v0.8h}, [%4], #16             \n"  // store 8 samples
+      "b.gt       1b                             \n"
+      : "+r"(src),   // %0
+        "+r"(src1),  // %1
+        "+r"(src2),  // %2
+        "+r"(src3),  // %3
+        "+r"(dst),   // %4
+        "+r"(width)  // %5
+      : "r"(32LL)    // %6
+      : "cc", "memory", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7");
+}
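The horizontal pass walks one row of those 32-bit column sums with the same 1-4-6-4-1 kernel (the four offset pointers supply the shifted taps), then `uqrshrn #8` rounds and divides by 256, the product of the two per-pass weight sums (16 * 16). A scalar reading of one output sample, under the assumption that the offset pointers correspond to taps at i+1..i+3 (helper name hypothetical):

```c
#include <assert.h>
#include <stdint.h>

// Scalar reading of one GaussRow_NEON output: horizontal 1-4-6-4-1 over the
// 32-bit column sums, rounded and scaled by 1/256 (uqrshrn #8).
static uint16_t gauss_row_c(const uint32_t* s, int i) {
  uint32_t v = s[i] + 4u * s[i + 1] + 6u * s[i + 2] + 4u * s[i + 3] + s[i + 4];
  v = (v + 128) >> 8;                        // round to nearest, then /256
  return v > 65535u ? 65535u : (uint16_t)v;  // unsigned saturating narrow
}
```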
+
 #endif  // !defined(LIBYUV_DISABLE_NEON) && defined(__aarch64__)
 
 #ifdef __cplusplus
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_win.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_win.cc
index 2a3da89..5500d7f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_win.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/row_win.cc
@@ -28,72 +28,71 @@
 #if defined(_M_X64)
 
 // Read 4 UV from 422, upsample to 8 UV.
-#define READYUV422                                                             \
-    xmm0 = _mm_cvtsi32_si128(*(uint32*)u_buf);                                 \
-    xmm1 = _mm_cvtsi32_si128(*(uint32*)(u_buf + offset));                      \
-    xmm0 = _mm_unpacklo_epi8(xmm0, xmm1);                                      \
-    xmm0 = _mm_unpacklo_epi16(xmm0, xmm0);                                     \
-    u_buf += 4;                                                                \
-    xmm4 = _mm_loadl_epi64((__m128i*)y_buf);                                   \
-    xmm4 = _mm_unpacklo_epi8(xmm4, xmm4);                                      \
-    y_buf += 8;
+#define READYUV422                                        \
+  xmm0 = _mm_cvtsi32_si128(*(uint32_t*)u_buf);            \
+  xmm1 = _mm_cvtsi32_si128(*(uint32_t*)(u_buf + offset)); \
+  xmm0 = _mm_unpacklo_epi8(xmm0, xmm1);                   \
+  xmm0 = _mm_unpacklo_epi16(xmm0, xmm0);                  \
+  u_buf += 4;                                             \
+  xmm4 = _mm_loadl_epi64((__m128i*)y_buf);                \
+  xmm4 = _mm_unpacklo_epi8(xmm4, xmm4);                   \
+  y_buf += 8;
 
 // Read 4 UV from 422, upsample to 8 UV.  With 8 Alpha.
-#define READYUVA422                                                            \
-    xmm0 = _mm_cvtsi32_si128(*(uint32*)u_buf);                                 \
-    xmm1 = _mm_cvtsi32_si128(*(uint32*)(u_buf + offset));                      \
-    xmm0 = _mm_unpacklo_epi8(xmm0, xmm1);                                      \
-    xmm0 = _mm_unpacklo_epi16(xmm0, xmm0);                                     \
-    u_buf += 4;                                                                \
-    xmm4 = _mm_loadl_epi64((__m128i*)y_buf);                                   \
-    xmm4 = _mm_unpacklo_epi8(xmm4, xmm4);                                      \
-    y_buf += 8;                                                                \
-    xmm5 = _mm_loadl_epi64((__m128i*)a_buf);                                   \
-    a_buf += 8;
+#define READYUVA422                                       \
+  xmm0 = _mm_cvtsi32_si128(*(uint32_t*)u_buf);            \
+  xmm1 = _mm_cvtsi32_si128(*(uint32_t*)(u_buf + offset)); \
+  xmm0 = _mm_unpacklo_epi8(xmm0, xmm1);                   \
+  xmm0 = _mm_unpacklo_epi16(xmm0, xmm0);                  \
+  u_buf += 4;                                             \
+  xmm4 = _mm_loadl_epi64((__m128i*)y_buf);                \
+  xmm4 = _mm_unpacklo_epi8(xmm4, xmm4);                   \
+  y_buf += 8;                                             \
+  xmm5 = _mm_loadl_epi64((__m128i*)a_buf);                \
+  a_buf += 8;
 
 // Convert 8 pixels: 8 UV and 8 Y.
-#define YUVTORGB(yuvconstants)                                                 \
-    xmm1 = _mm_loadu_si128(&xmm0);                                             \
-    xmm2 = _mm_loadu_si128(&xmm0);                                             \
-    xmm0 = _mm_maddubs_epi16(xmm0, *(__m128i*)yuvconstants->kUVToB);           \
-    xmm1 = _mm_maddubs_epi16(xmm1, *(__m128i*)yuvconstants->kUVToG);           \
-    xmm2 = _mm_maddubs_epi16(xmm2, *(__m128i*)yuvconstants->kUVToR);           \
-    xmm0 = _mm_sub_epi16(*(__m128i*)yuvconstants->kUVBiasB, xmm0);             \
-    xmm1 = _mm_sub_epi16(*(__m128i*)yuvconstants->kUVBiasG, xmm1);             \
-    xmm2 = _mm_sub_epi16(*(__m128i*)yuvconstants->kUVBiasR, xmm2);             \
-    xmm4 = _mm_mulhi_epu16(xmm4, *(__m128i*)yuvconstants->kYToRgb);            \
-    xmm0 = _mm_adds_epi16(xmm0, xmm4);                                         \
-    xmm1 = _mm_adds_epi16(xmm1, xmm4);                                         \
-    xmm2 = _mm_adds_epi16(xmm2, xmm4);                                         \
-    xmm0 = _mm_srai_epi16(xmm0, 6);                                            \
-    xmm1 = _mm_srai_epi16(xmm1, 6);                                            \
-    xmm2 = _mm_srai_epi16(xmm2, 6);                                            \
-    xmm0 = _mm_packus_epi16(xmm0, xmm0);                                       \
-    xmm1 = _mm_packus_epi16(xmm1, xmm1);                                       \
-    xmm2 = _mm_packus_epi16(xmm2, xmm2);
+#define YUVTORGB(yuvconstants)                                     \
+  xmm1 = _mm_loadu_si128(&xmm0);                                   \
+  xmm2 = _mm_loadu_si128(&xmm0);                                   \
+  xmm0 = _mm_maddubs_epi16(xmm0, *(__m128i*)yuvconstants->kUVToB); \
+  xmm1 = _mm_maddubs_epi16(xmm1, *(__m128i*)yuvconstants->kUVToG); \
+  xmm2 = _mm_maddubs_epi16(xmm2, *(__m128i*)yuvconstants->kUVToR); \
+  xmm0 = _mm_sub_epi16(*(__m128i*)yuvconstants->kUVBiasB, xmm0);   \
+  xmm1 = _mm_sub_epi16(*(__m128i*)yuvconstants->kUVBiasG, xmm1);   \
+  xmm2 = _mm_sub_epi16(*(__m128i*)yuvconstants->kUVBiasR, xmm2);   \
+  xmm4 = _mm_mulhi_epu16(xmm4, *(__m128i*)yuvconstants->kYToRgb);  \
+  xmm0 = _mm_adds_epi16(xmm0, xmm4);                               \
+  xmm1 = _mm_adds_epi16(xmm1, xmm4);                               \
+  xmm2 = _mm_adds_epi16(xmm2, xmm4);                               \
+  xmm0 = _mm_srai_epi16(xmm0, 6);                                  \
+  xmm1 = _mm_srai_epi16(xmm1, 6);                                  \
+  xmm2 = _mm_srai_epi16(xmm2, 6);                                  \
+  xmm0 = _mm_packus_epi16(xmm0, xmm0);                             \
+  xmm1 = _mm_packus_epi16(xmm1, xmm1);                             \
+  xmm2 = _mm_packus_epi16(xmm2, xmm2);
 
 // Store 8 ARGB values.
-#define STOREARGB                                                              \
-    xmm0 = _mm_unpacklo_epi8(xmm0, xmm1);                                      \
-    xmm2 = _mm_unpacklo_epi8(xmm2, xmm5);                                      \
-    xmm1 = _mm_loadu_si128(&xmm0);                                             \
-    xmm0 = _mm_unpacklo_epi16(xmm0, xmm2);                                     \
-    xmm1 = _mm_unpackhi_epi16(xmm1, xmm2);                                     \
-    _mm_storeu_si128((__m128i *)dst_argb, xmm0);                               \
-    _mm_storeu_si128((__m128i *)(dst_argb + 16), xmm1);                        \
-    dst_argb += 32;
-
+#define STOREARGB                                    \
+  xmm0 = _mm_unpacklo_epi8(xmm0, xmm1);              \
+  xmm2 = _mm_unpacklo_epi8(xmm2, xmm5);              \
+  xmm1 = _mm_loadu_si128(&xmm0);                     \
+  xmm0 = _mm_unpacklo_epi16(xmm0, xmm2);             \
+  xmm1 = _mm_unpackhi_epi16(xmm1, xmm2);             \
+  _mm_storeu_si128((__m128i*)dst_argb, xmm0);        \
+  _mm_storeu_si128((__m128i*)(dst_argb + 16), xmm1); \
+  dst_argb += 32;
 
 #if defined(HAS_I422TOARGBROW_SSSE3)
-void I422ToARGBRow_SSSE3(const uint8* y_buf,
-                         const uint8* u_buf,
-                         const uint8* v_buf,
-                         uint8* dst_argb,
+void I422ToARGBRow_SSSE3(const uint8_t* y_buf,
+                         const uint8_t* u_buf,
+                         const uint8_t* v_buf,
+                         uint8_t* dst_argb,
                          const struct YuvConstants* yuvconstants,
                          int width) {
   __m128i xmm0, xmm1, xmm2, xmm4;
   const __m128i xmm5 = _mm_set1_epi8(-1);
-  const ptrdiff_t offset = (uint8*)v_buf - (uint8*)u_buf;
+  const ptrdiff_t offset = (uint8_t*)v_buf - (uint8_t*)u_buf;
   while (width > 0) {
     READYUV422
     YUVTORGB(yuvconstants)
@@ -104,15 +103,15 @@
 #endif
 
 #if defined(HAS_I422ALPHATOARGBROW_SSSE3)
-void I422AlphaToARGBRow_SSSE3(const uint8* y_buf,
-                              const uint8* u_buf,
-                              const uint8* v_buf,
-                              const uint8* a_buf,
-                              uint8* dst_argb,
+void I422AlphaToARGBRow_SSSE3(const uint8_t* y_buf,
+                              const uint8_t* u_buf,
+                              const uint8_t* v_buf,
+                              const uint8_t* a_buf,
+                              uint8_t* dst_argb,
                               const struct YuvConstants* yuvconstants,
                               int width) {
   __m128i xmm0, xmm1, xmm2, xmm4, xmm5;
-  const ptrdiff_t offset = (uint8*)v_buf - (uint8*)u_buf;
+  const ptrdiff_t offset = (uint8_t*)v_buf - (uint8_t*)u_buf;
   while (width > 0) {
     READYUVA422
     YUVTORGB(yuvconstants)
@@ -127,175 +126,143 @@
 #ifdef HAS_ARGBTOYROW_SSSE3
 
 // Constants for ARGB.
-static const vec8 kARGBToY = {
-  13, 65, 33, 0, 13, 65, 33, 0, 13, 65, 33, 0, 13, 65, 33, 0
-};
+static const vec8 kARGBToY = {13, 65, 33, 0, 13, 65, 33, 0,
+                              13, 65, 33, 0, 13, 65, 33, 0};
 
 // JPeg full range.
-static const vec8 kARGBToYJ = {
-  15, 75, 38, 0, 15, 75, 38, 0, 15, 75, 38, 0, 15, 75, 38, 0
-};
+static const vec8 kARGBToYJ = {15, 75, 38, 0, 15, 75, 38, 0,
+                               15, 75, 38, 0, 15, 75, 38, 0};
 
-static const vec8 kARGBToU = {
-  112, -74, -38, 0, 112, -74, -38, 0, 112, -74, -38, 0, 112, -74, -38, 0
-};
+static const vec8 kARGBToU = {112, -74, -38, 0, 112, -74, -38, 0,
+                              112, -74, -38, 0, 112, -74, -38, 0};
 
-static const vec8 kARGBToUJ = {
-  127, -84, -43, 0, 127, -84, -43, 0, 127, -84, -43, 0, 127, -84, -43, 0
-};
+static const vec8 kARGBToUJ = {127, -84, -43, 0, 127, -84, -43, 0,
+                               127, -84, -43, 0, 127, -84, -43, 0};
 
 static const vec8 kARGBToV = {
-  -18, -94, 112, 0, -18, -94, 112, 0, -18, -94, 112, 0, -18, -94, 112, 0,
+    -18, -94, 112, 0, -18, -94, 112, 0, -18, -94, 112, 0, -18, -94, 112, 0,
 };
 
-static const vec8 kARGBToVJ = {
-  -20, -107, 127, 0, -20, -107, 127, 0, -20, -107, 127, 0, -20, -107, 127, 0
-};
+static const vec8 kARGBToVJ = {-20, -107, 127, 0, -20, -107, 127, 0,
+                               -20, -107, 127, 0, -20, -107, 127, 0};
 
 // vpshufb for vphaddw + vpackuswb packed to shorts.
 static const lvec8 kShufARGBToUV_AVX = {
-  0, 1, 8, 9, 2, 3, 10, 11, 4, 5, 12, 13, 6, 7, 14, 15,
-  0, 1, 8, 9, 2, 3, 10, 11, 4, 5, 12, 13, 6, 7, 14, 15
-};
+    0, 1, 8, 9, 2, 3, 10, 11, 4, 5, 12, 13, 6, 7, 14, 15,
+    0, 1, 8, 9, 2, 3, 10, 11, 4, 5, 12, 13, 6, 7, 14, 15};
 
 // Constants for BGRA.
-static const vec8 kBGRAToY = {
-  0, 33, 65, 13, 0, 33, 65, 13, 0, 33, 65, 13, 0, 33, 65, 13
-};
+static const vec8 kBGRAToY = {0, 33, 65, 13, 0, 33, 65, 13,
+                              0, 33, 65, 13, 0, 33, 65, 13};
 
-static const vec8 kBGRAToU = {
-  0, -38, -74, 112, 0, -38, -74, 112, 0, -38, -74, 112, 0, -38, -74, 112
-};
+static const vec8 kBGRAToU = {0, -38, -74, 112, 0, -38, -74, 112,
+                              0, -38, -74, 112, 0, -38, -74, 112};
 
-static const vec8 kBGRAToV = {
-  0, 112, -94, -18, 0, 112, -94, -18, 0, 112, -94, -18, 0, 112, -94, -18
-};
+static const vec8 kBGRAToV = {0, 112, -94, -18, 0, 112, -94, -18,
+                              0, 112, -94, -18, 0, 112, -94, -18};
 
 // Constants for ABGR.
-static const vec8 kABGRToY = {
-  33, 65, 13, 0, 33, 65, 13, 0, 33, 65, 13, 0, 33, 65, 13, 0
-};
+static const vec8 kABGRToY = {33, 65, 13, 0, 33, 65, 13, 0,
+                              33, 65, 13, 0, 33, 65, 13, 0};
 
-static const vec8 kABGRToU = {
-  -38, -74, 112, 0, -38, -74, 112, 0, -38, -74, 112, 0, -38, -74, 112, 0
-};
+static const vec8 kABGRToU = {-38, -74, 112, 0, -38, -74, 112, 0,
+                              -38, -74, 112, 0, -38, -74, 112, 0};
 
-static const vec8 kABGRToV = {
-  112, -94, -18, 0, 112, -94, -18, 0, 112, -94, -18, 0, 112, -94, -18, 0
-};
+static const vec8 kABGRToV = {112, -94, -18, 0, 112, -94, -18, 0,
+                              112, -94, -18, 0, 112, -94, -18, 0};
 
 // Constants for RGBA.
-static const vec8 kRGBAToY = {
-  0, 13, 65, 33, 0, 13, 65, 33, 0, 13, 65, 33, 0, 13, 65, 33
-};
+static const vec8 kRGBAToY = {0, 13, 65, 33, 0, 13, 65, 33,
+                              0, 13, 65, 33, 0, 13, 65, 33};
 
-static const vec8 kRGBAToU = {
-  0, 112, -74, -38, 0, 112, -74, -38, 0, 112, -74, -38, 0, 112, -74, -38
-};
+static const vec8 kRGBAToU = {0, 112, -74, -38, 0, 112, -74, -38,
+                              0, 112, -74, -38, 0, 112, -74, -38};
 
-static const vec8 kRGBAToV = {
-  0, -18, -94, 112, 0, -18, -94, 112, 0, -18, -94, 112, 0, -18, -94, 112
-};
+static const vec8 kRGBAToV = {0, -18, -94, 112, 0, -18, -94, 112,
+                              0, -18, -94, 112, 0, -18, -94, 112};
 
-static const uvec8 kAddY16 = {
-  16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u
-};
+static const uvec8 kAddY16 = {16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u,
+                              16u, 16u, 16u, 16u, 16u, 16u, 16u, 16u};
 
 // 7 bit fixed point 0.5.
-static const vec16 kAddYJ64 = {
-  64, 64, 64, 64, 64, 64, 64, 64
-};
+static const vec16 kAddYJ64 = {64, 64, 64, 64, 64, 64, 64, 64};
 
-static const uvec8 kAddUV128 = {
-  128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u,
-  128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u
-};
+static const uvec8 kAddUV128 = {128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u,
+                                128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u};
 
-static const uvec16 kAddUVJ128 = {
-  0x8080u, 0x8080u, 0x8080u, 0x8080u, 0x8080u, 0x8080u, 0x8080u, 0x8080u
-};
+static const uvec16 kAddUVJ128 = {0x8080u, 0x8080u, 0x8080u, 0x8080u,
+                                  0x8080u, 0x8080u, 0x8080u, 0x8080u};
 
 // Shuffle table for converting RGB24 to ARGB.
 static const uvec8 kShuffleMaskRGB24ToARGB = {
-  0u, 1u, 2u, 12u, 3u, 4u, 5u, 13u, 6u, 7u, 8u, 14u, 9u, 10u, 11u, 15u
-};
+    0u, 1u, 2u, 12u, 3u, 4u, 5u, 13u, 6u, 7u, 8u, 14u, 9u, 10u, 11u, 15u};
 
 // Shuffle table for converting RAW to ARGB.
-static const uvec8 kShuffleMaskRAWToARGB = {
-  2u, 1u, 0u, 12u, 5u, 4u, 3u, 13u, 8u, 7u, 6u, 14u, 11u, 10u, 9u, 15u
-};
+static const uvec8 kShuffleMaskRAWToARGB = {2u, 1u, 0u, 12u, 5u,  4u,  3u, 13u,
+                                            8u, 7u, 6u, 14u, 11u, 10u, 9u, 15u};
 
 // Shuffle table for converting RAW to RGB24.  First 8.
 static const uvec8 kShuffleMaskRAWToRGB24_0 = {
-  2u, 1u, 0u, 5u, 4u, 3u, 8u, 7u,
-  128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u
-};
+    2u,   1u,   0u,   5u,   4u,   3u,   8u,   7u,
+    128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u};
 
 // Shuffle table for converting RAW to RGB24.  Middle 8.
 static const uvec8 kShuffleMaskRAWToRGB24_1 = {
-  2u, 7u, 6u, 5u, 10u, 9u, 8u, 13u,
-  128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u
-};
+    2u,   7u,   6u,   5u,   10u,  9u,   8u,   13u,
+    128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u};
 
 // Shuffle table for converting RAW to RGB24.  Last 8.
 static const uvec8 kShuffleMaskRAWToRGB24_2 = {
-  8u, 7u, 12u, 11u, 10u, 15u, 14u, 13u,
-  128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u
-};
+    8u,   7u,   12u,  11u,  10u,  15u,  14u,  13u,
+    128u, 128u, 128u, 128u, 128u, 128u, 128u, 128u};
 
 // Shuffle table for converting ARGB to RGB24.
 static const uvec8 kShuffleMaskARGBToRGB24 = {
-  0u, 1u, 2u, 4u, 5u, 6u, 8u, 9u, 10u, 12u, 13u, 14u, 128u, 128u, 128u, 128u
-};
+    0u, 1u, 2u, 4u, 5u, 6u, 8u, 9u, 10u, 12u, 13u, 14u, 128u, 128u, 128u, 128u};
 
 // Shuffle table for converting ARGB to RAW.
 static const uvec8 kShuffleMaskARGBToRAW = {
-  2u, 1u, 0u, 6u, 5u, 4u, 10u, 9u, 8u, 14u, 13u, 12u, 128u, 128u, 128u, 128u
-};
+    2u, 1u, 0u, 6u, 5u, 4u, 10u, 9u, 8u, 14u, 13u, 12u, 128u, 128u, 128u, 128u};
 
 // Shuffle table for converting ARGBToRGB24 for I422ToRGB24.  First 8 + next 4
 static const uvec8 kShuffleMaskARGBToRGB24_0 = {
-  0u, 1u, 2u, 4u, 5u, 6u, 8u, 9u, 128u, 128u, 128u, 128u, 10u, 12u, 13u, 14u
-};
+    0u, 1u, 2u, 4u, 5u, 6u, 8u, 9u, 128u, 128u, 128u, 128u, 10u, 12u, 13u, 14u};
 
 // YUY2 shuf 16 Y to 32 Y.
-static const lvec8 kShuffleYUY2Y = {
-  0, 0, 2, 2, 4, 4, 6, 6, 8, 8, 10, 10, 12, 12, 14, 14,
-  0, 0, 2, 2, 4, 4, 6, 6, 8, 8, 10, 10, 12, 12, 14, 14
-};
+static const lvec8 kShuffleYUY2Y = {0,  0,  2,  2,  4,  4,  6,  6,  8,  8, 10,
+                                    10, 12, 12, 14, 14, 0,  0,  2,  2,  4, 4,
+                                    6,  6,  8,  8,  10, 10, 12, 12, 14, 14};
 
 // YUY2 shuf 8 UV to 16 UV.
-static const lvec8 kShuffleYUY2UV = {
-  1, 3, 1, 3, 5, 7, 5, 7, 9, 11, 9, 11, 13, 15, 13, 15,
-  1, 3, 1, 3, 5, 7, 5, 7, 9, 11, 9, 11, 13, 15, 13, 15
-};
+static const lvec8 kShuffleYUY2UV = {1,  3,  1,  3,  5,  7,  5,  7,  9,  11, 9,
+                                     11, 13, 15, 13, 15, 1,  3,  1,  3,  5,  7,
+                                     5,  7,  9,  11, 9,  11, 13, 15, 13, 15};
 
 // UYVY shuf 16 Y to 32 Y.
-static const lvec8 kShuffleUYVYY = {
-  1, 1, 3, 3, 5, 5, 7, 7, 9, 9, 11, 11, 13, 13, 15, 15,
-  1, 1, 3, 3, 5, 5, 7, 7, 9, 9, 11, 11, 13, 13, 15, 15
-};
+static const lvec8 kShuffleUYVYY = {1,  1,  3,  3,  5,  5,  7,  7,  9,  9, 11,
+                                    11, 13, 13, 15, 15, 1,  1,  3,  3,  5, 5,
+                                    7,  7,  9,  9,  11, 11, 13, 13, 15, 15};
 
 // UYVY shuf 8 UV to 16 UV.
-static const lvec8 kShuffleUYVYUV = {
-  0, 2, 0, 2, 4, 6, 4, 6, 8, 10, 8, 10, 12, 14, 12, 14,
-  0, 2, 0, 2, 4, 6, 4, 6, 8, 10, 8, 10, 12, 14, 12, 14
-};
+static const lvec8 kShuffleUYVYUV = {0,  2,  0,  2,  4,  6,  4,  6,  8,  10, 8,
+                                     10, 12, 14, 12, 14, 0,  2,  0,  2,  4,  6,
+                                     4,  6,  8,  10, 8,  10, 12, 14, 12, 14};
 
 // NV21 shuf 8 VU to 16 UV.
 static const lvec8 kShuffleNV21 = {
-  1, 0, 1, 0, 3, 2, 3, 2, 5, 4, 5, 4, 7, 6, 7, 6,
-  1, 0, 1, 0, 3, 2, 3, 2, 5, 4, 5, 4, 7, 6, 7, 6,
+    1, 0, 1, 0, 3, 2, 3, 2, 5, 4, 5, 4, 7, 6, 7, 6,
+    1, 0, 1, 0, 3, 2, 3, 2, 5, 4, 5, 4, 7, 6, 7, 6,
 };
 
 // Duplicates gray value 3 times and fills in alpha opaque.
-__declspec(naked)
-void J400ToARGBRow_SSE2(const uint8* src_y, uint8* dst_argb, int width) {
+__declspec(naked) void J400ToARGBRow_SSE2(const uint8_t* src_y,
+                                          uint8_t* dst_argb,
+                                          int width) {
   __asm {
-    mov        eax, [esp + 4]        // src_y
-    mov        edx, [esp + 8]        // dst_argb
-    mov        ecx, [esp + 12]       // width
-    pcmpeqb    xmm5, xmm5            // generate mask 0xff000000
+    mov        eax, [esp + 4]  // src_y
+    mov        edx, [esp + 8]  // dst_argb
+    mov        ecx, [esp + 12]  // width
+    pcmpeqb    xmm5, xmm5  // generate mask 0xff000000
     pslld      xmm5, 24
 
   convertloop:
@@ -318,13 +285,14 @@
 
 #ifdef HAS_J400TOARGBROW_AVX2
 // Duplicates gray value 3 times and fills in alpha opaque.
-__declspec(naked)
-void J400ToARGBRow_AVX2(const uint8* src_y, uint8* dst_argb, int width) {
+__declspec(naked) void J400ToARGBRow_AVX2(const uint8_t* src_y,
+                                          uint8_t* dst_argb,
+                                          int width) {
   __asm {
-    mov         eax, [esp + 4]        // src_y
-    mov         edx, [esp + 8]        // dst_argb
-    mov         ecx, [esp + 12]       // width
-    vpcmpeqb    ymm5, ymm5, ymm5      // generate mask 0xff000000
+    mov         eax, [esp + 4]  // src_y
+    mov         edx, [esp + 8]  // dst_argb
+    mov         ecx, [esp + 12]  // width
+    vpcmpeqb    ymm5, ymm5, ymm5  // generate mask 0xff000000
     vpslld      ymm5, ymm5, 24
 
   convertloop:
@@ -348,13 +316,14 @@
 }
 #endif  // HAS_J400TOARGBROW_AVX2
 
-__declspec(naked)
-void RGB24ToARGBRow_SSSE3(const uint8* src_rgb24, uint8* dst_argb, int width) {
+__declspec(naked) void RGB24ToARGBRow_SSSE3(const uint8_t* src_rgb24,
+                                            uint8_t* dst_argb,
+                                            int width) {
   __asm {
-    mov       eax, [esp + 4]   // src_rgb24
-    mov       edx, [esp + 8]   // dst_argb
+    mov       eax, [esp + 4]  // src_rgb24
+    mov       edx, [esp + 8]  // dst_argb
     mov       ecx, [esp + 12]  // width
-    pcmpeqb   xmm5, xmm5       // generate mask 0xff000000
+    pcmpeqb   xmm5, xmm5  // generate mask 0xff000000
     pslld     xmm5, 24
     movdqa    xmm4, xmmword ptr kShuffleMaskRGB24ToARGB
 
@@ -364,17 +333,17 @@
     movdqu    xmm3, [eax + 32]
     lea       eax, [eax + 48]
     movdqa    xmm2, xmm3
-    palignr   xmm2, xmm1, 8    // xmm2 = { xmm3[0:3] xmm1[8:15]}
+    palignr   xmm2, xmm1, 8  // xmm2 = { xmm3[0:3] xmm1[8:15]}
     pshufb    xmm2, xmm4
     por       xmm2, xmm5
-    palignr   xmm1, xmm0, 12   // xmm1 = { xmm3[0:7] xmm0[12:15]}
+    palignr   xmm1, xmm0, 12  // xmm1 = { xmm3[0:7] xmm0[12:15]}
     pshufb    xmm0, xmm4
     movdqu    [edx + 32], xmm2
     por       xmm0, xmm5
     pshufb    xmm1, xmm4
     movdqu    [edx], xmm0
     por       xmm1, xmm5
-    palignr   xmm3, xmm3, 4    // xmm3 = { xmm3[4:15]}
+    palignr   xmm3, xmm3, 4  // xmm3 = { xmm3[4:15]}
     pshufb    xmm3, xmm4
     movdqu    [edx + 16], xmm1
     por       xmm3, xmm5
@@ -386,14 +355,14 @@
   }
 }
 
-__declspec(naked)
-void RAWToARGBRow_SSSE3(const uint8* src_raw, uint8* dst_argb,
-                        int width) {
+__declspec(naked) void RAWToARGBRow_SSSE3(const uint8_t* src_raw,
+                                          uint8_t* dst_argb,
+                                          int width) {
   __asm {
-    mov       eax, [esp + 4]   // src_raw
-    mov       edx, [esp + 8]   // dst_argb
+    mov       eax, [esp + 4]  // src_raw
+    mov       edx, [esp + 8]  // dst_argb
     mov       ecx, [esp + 12]  // width
-    pcmpeqb   xmm5, xmm5       // generate mask 0xff000000
+    pcmpeqb   xmm5, xmm5  // generate mask 0xff000000
     pslld     xmm5, 24
     movdqa    xmm4, xmmword ptr kShuffleMaskRAWToARGB
 
@@ -403,17 +372,17 @@
     movdqu    xmm3, [eax + 32]
     lea       eax, [eax + 48]
     movdqa    xmm2, xmm3
-    palignr   xmm2, xmm1, 8    // xmm2 = { xmm3[0:3] xmm1[8:15]}
+    palignr   xmm2, xmm1, 8  // xmm2 = { xmm3[0:3] xmm1[8:15]}
     pshufb    xmm2, xmm4
     por       xmm2, xmm5
-    palignr   xmm1, xmm0, 12   // xmm1 = { xmm3[0:7] xmm0[12:15]}
+    palignr   xmm1, xmm0, 12  // xmm1 = { xmm3[0:7] xmm0[12:15]}
     pshufb    xmm0, xmm4
     movdqu    [edx + 32], xmm2
     por       xmm0, xmm5
     pshufb    xmm1, xmm4
     movdqu    [edx], xmm0
     por       xmm1, xmm5
-    palignr   xmm3, xmm3, 4    // xmm3 = { xmm3[4:15]}
+    palignr   xmm3, xmm3, 4  // xmm3 = { xmm3[4:15]}
     pshufb    xmm3, xmm4
     movdqu    [edx + 16], xmm1
     por       xmm3, xmm5
@@ -425,11 +394,12 @@
   }
 }
 
-__declspec(naked)
-void RAWToRGB24Row_SSSE3(const uint8* src_raw, uint8* dst_rgb24, int width) {
+__declspec(naked) void RAWToRGB24Row_SSSE3(const uint8_t* src_raw,
+                                           uint8_t* dst_rgb24,
+                                           int width) {
   __asm {
-    mov       eax, [esp + 4]   // src_raw
-    mov       edx, [esp + 8]   // dst_rgb24
+    mov       eax, [esp + 4]  // src_raw
+    mov       edx, [esp + 8]  // dst_rgb24
     mov       ecx, [esp + 12]  // width
     movdqa    xmm3, xmmword ptr kShuffleMaskRAWToRGB24_0
     movdqa    xmm4, xmmword ptr kShuffleMaskRAWToRGB24_1
@@ -460,9 +430,9 @@
 // v * (256 + 8)
 // G shift of 5 is incorporated, so shift is 5 + 8 and 5 + 3
 // 20 instructions.
-__declspec(naked)
-void RGB565ToARGBRow_SSE2(const uint8* src_rgb565, uint8* dst_argb,
-                          int width) {
+__declspec(naked) void RGB565ToARGBRow_SSE2(const uint8_t* src_rgb565,
+                                            uint8_t* dst_argb,
+                                            int width) {
   __asm {
     mov       eax, 0x01080108  // generate multiplier to repeat 5 bits
     movd      xmm5, eax
@@ -470,33 +440,33 @@
     mov       eax, 0x20802080  // multiplier shift by 5 and then repeat 6 bits
     movd      xmm6, eax
     pshufd    xmm6, xmm6, 0
-    pcmpeqb   xmm3, xmm3       // generate mask 0xf800f800 for Red
+    pcmpeqb   xmm3, xmm3  // generate mask 0xf800f800 for Red
     psllw     xmm3, 11
-    pcmpeqb   xmm4, xmm4       // generate mask 0x07e007e0 for Green
+    pcmpeqb   xmm4, xmm4  // generate mask 0x07e007e0 for Green
     psllw     xmm4, 10
     psrlw     xmm4, 5
-    pcmpeqb   xmm7, xmm7       // generate mask 0xff00ff00 for Alpha
+    pcmpeqb   xmm7, xmm7  // generate mask 0xff00ff00 for Alpha
     psllw     xmm7, 8
 
-    mov       eax, [esp + 4]   // src_rgb565
-    mov       edx, [esp + 8]   // dst_argb
+    mov       eax, [esp + 4]  // src_rgb565
+    mov       edx, [esp + 8]  // dst_argb
     mov       ecx, [esp + 12]  // width
     sub       edx, eax
     sub       edx, eax
 
  convertloop:
-    movdqu    xmm0, [eax]   // fetch 8 pixels of bgr565
+    movdqu    xmm0, [eax]  // fetch 8 pixels of bgr565
     movdqa    xmm1, xmm0
     movdqa    xmm2, xmm0
-    pand      xmm1, xmm3    // R in upper 5 bits
-    psllw     xmm2, 11      // B in upper 5 bits
-    pmulhuw   xmm1, xmm5    // * (256 + 8)
-    pmulhuw   xmm2, xmm5    // * (256 + 8)
+    pand      xmm1, xmm3  // R in upper 5 bits
+    psllw     xmm2, 11  // B in upper 5 bits
+    pmulhuw   xmm1, xmm5  // * (256 + 8)
+    pmulhuw   xmm2, xmm5  // * (256 + 8)
     psllw     xmm1, 8
-    por       xmm1, xmm2    // RB
-    pand      xmm0, xmm4    // G in middle 6 bits
-    pmulhuw   xmm0, xmm6    // << 5 * (256 + 4)
-    por       xmm0, xmm7    // AG
+    por       xmm1, xmm2  // RB
+    pand      xmm0, xmm4  // G in middle 6 bits
+    pmulhuw   xmm0, xmm6  // << 5 * (256 + 4)
+    por       xmm0, xmm7  // AG
     movdqa    xmm2, xmm1
     punpcklbw xmm1, xmm0
     punpckhbw xmm2, xmm0
@@ -516,9 +486,9 @@
 // v * 256 + v * 8
 // v * (256 + 8)
 // G shift of 5 is incorporated, so shift is 5 + 8 and 5 + 3
-__declspec(naked)
-void RGB565ToARGBRow_AVX2(const uint8* src_rgb565, uint8* dst_argb,
-                          int width) {
+__declspec(naked) void RGB565ToARGBRow_AVX2(const uint8_t* src_rgb565,
+                                            uint8_t* dst_argb,
+                                            int width) {
   __asm {
     mov        eax, 0x01080108  // generate multiplier to repeat 5 bits
     vmovd      xmm5, eax
@@ -526,32 +496,32 @@
     mov        eax, 0x20802080  // multiplier shift by 5 and then repeat 6 bits
     vmovd      xmm6, eax
     vbroadcastss ymm6, xmm6
-    vpcmpeqb   ymm3, ymm3, ymm3       // generate mask 0xf800f800 for Red
+    vpcmpeqb   ymm3, ymm3, ymm3  // generate mask 0xf800f800 for Red
     vpsllw     ymm3, ymm3, 11
-    vpcmpeqb   ymm4, ymm4, ymm4       // generate mask 0x07e007e0 for Green
+    vpcmpeqb   ymm4, ymm4, ymm4  // generate mask 0x07e007e0 for Green
     vpsllw     ymm4, ymm4, 10
     vpsrlw     ymm4, ymm4, 5
-    vpcmpeqb   ymm7, ymm7, ymm7       // generate mask 0xff00ff00 for Alpha
+    vpcmpeqb   ymm7, ymm7, ymm7  // generate mask 0xff00ff00 for Alpha
     vpsllw     ymm7, ymm7, 8
 
-    mov        eax, [esp + 4]   // src_rgb565
-    mov        edx, [esp + 8]   // dst_argb
+    mov        eax, [esp + 4]  // src_rgb565
+    mov        edx, [esp + 8]  // dst_argb
     mov        ecx, [esp + 12]  // width
     sub        edx, eax
     sub        edx, eax
 
  convertloop:
-    vmovdqu    ymm0, [eax]   // fetch 16 pixels of bgr565
-    vpand      ymm1, ymm0, ymm3    // R in upper 5 bits
-    vpsllw     ymm2, ymm0, 11      // B in upper 5 bits
-    vpmulhuw   ymm1, ymm1, ymm5    // * (256 + 8)
-    vpmulhuw   ymm2, ymm2, ymm5    // * (256 + 8)
+    vmovdqu    ymm0, [eax]  // fetch 16 pixels of bgr565
+    vpand      ymm1, ymm0, ymm3  // R in upper 5 bits
+    vpsllw     ymm2, ymm0, 11  // B in upper 5 bits
+    vpmulhuw   ymm1, ymm1, ymm5  // * (256 + 8)
+    vpmulhuw   ymm2, ymm2, ymm5  // * (256 + 8)
     vpsllw     ymm1, ymm1, 8
-    vpor       ymm1, ymm1, ymm2    // RB
-    vpand      ymm0, ymm0, ymm4    // G in middle 6 bits
-    vpmulhuw   ymm0, ymm0, ymm6    // << 5 * (256 + 4)
-    vpor       ymm0, ymm0, ymm7    // AG
-    vpermq     ymm0, ymm0, 0xd8    // mutate for unpack
+    vpor       ymm1, ymm1, ymm2  // RB
+    vpand      ymm0, ymm0, ymm4  // G in middle 6 bits
+    vpmulhuw   ymm0, ymm0, ymm6  // << 5 * (256 + 4)
+    vpor       ymm0, ymm0, ymm7  // AG
+    vpermq     ymm0, ymm0, 0xd8  // mutate for unpack
     vpermq     ymm1, ymm1, 0xd8
     vpunpckhbw ymm2, ymm1, ymm0
     vpunpcklbw ymm1, ymm1, ymm0
@@ -567,9 +537,9 @@
 #endif  // HAS_RGB565TOARGBROW_AVX2
 
 #ifdef HAS_ARGB1555TOARGBROW_AVX2
-__declspec(naked)
-void ARGB1555ToARGBRow_AVX2(const uint8* src_argb1555, uint8* dst_argb,
-                            int width) {
+__declspec(naked) void ARGB1555ToARGBRow_AVX2(const uint8_t* src_argb1555,
+                                              uint8_t* dst_argb,
+                                              int width) {
   __asm {
     mov        eax, 0x01080108  // generate multiplier to repeat 5 bits
     vmovd      xmm5, eax
@@ -577,33 +547,33 @@
     mov        eax, 0x42004200  // multiplier shift by 6 and then repeat 5 bits
     vmovd      xmm6, eax
     vbroadcastss ymm6, xmm6
-    vpcmpeqb   ymm3, ymm3, ymm3 // generate mask 0xf800f800 for Red
+    vpcmpeqb   ymm3, ymm3, ymm3  // generate mask 0xf800f800 for Red
     vpsllw     ymm3, ymm3, 11
-    vpsrlw     ymm4, ymm3, 6    // generate mask 0x03e003e0 for Green
-    vpcmpeqb   ymm7, ymm7, ymm7 // generate mask 0xff00ff00 for Alpha
+    vpsrlw     ymm4, ymm3, 6  // generate mask 0x03e003e0 for Green
+    vpcmpeqb   ymm7, ymm7, ymm7  // generate mask 0xff00ff00 for Alpha
     vpsllw     ymm7, ymm7, 8
 
-    mov        eax,  [esp + 4]   // src_argb1555
-    mov        edx,  [esp + 8]   // dst_argb
+    mov        eax,  [esp + 4]  // src_argb1555
+    mov        edx,  [esp + 8]  // dst_argb
     mov        ecx,  [esp + 12]  // width
     sub        edx,  eax
     sub        edx,  eax
 
  convertloop:
-    vmovdqu    ymm0, [eax]         // fetch 16 pixels of 1555
-    vpsllw     ymm1, ymm0, 1       // R in upper 5 bits
-    vpsllw     ymm2, ymm0, 11      // B in upper 5 bits
+    vmovdqu    ymm0, [eax]  // fetch 16 pixels of 1555
+    vpsllw     ymm1, ymm0, 1  // R in upper 5 bits
+    vpsllw     ymm2, ymm0, 11  // B in upper 5 bits
     vpand      ymm1, ymm1, ymm3
-    vpmulhuw   ymm2, ymm2, ymm5    // * (256 + 8)
-    vpmulhuw   ymm1, ymm1, ymm5    // * (256 + 8)
+    vpmulhuw   ymm2, ymm2, ymm5  // * (256 + 8)
+    vpmulhuw   ymm1, ymm1, ymm5  // * (256 + 8)
     vpsllw     ymm1, ymm1, 8
-    vpor       ymm1, ymm1, ymm2    // RB
-    vpsraw     ymm2, ymm0, 8       // A
-    vpand      ymm0, ymm0, ymm4    // G in middle 5 bits
-    vpmulhuw   ymm0, ymm0, ymm6    // << 6 * (256 + 8)
+    vpor       ymm1, ymm1, ymm2  // RB
+    vpsraw     ymm2, ymm0, 8  // A
+    vpand      ymm0, ymm0, ymm4  // G in middle 5 bits
+    vpmulhuw   ymm0, ymm0, ymm6  // << 6 * (256 + 8)
     vpand      ymm2, ymm2, ymm7
-    vpor       ymm0, ymm0, ymm2    // AG
-    vpermq     ymm0, ymm0, 0xd8    // mutate for unpack
+    vpor       ymm0, ymm0, ymm2  // AG
+    vpermq     ymm0, ymm0, 0xd8  // mutate for unpack
     vpermq     ymm1, ymm1, 0xd8
     vpunpckhbw ymm2, ymm1, ymm0
     vpunpcklbw ymm1, ymm1, ymm0
@@ -619,29 +589,29 @@
 #endif  // HAS_ARGB1555TOARGBROW_AVX2
 
 #ifdef HAS_ARGB4444TOARGBROW_AVX2
-__declspec(naked)
-void ARGB4444ToARGBRow_AVX2(const uint8* src_argb4444, uint8* dst_argb,
-                            int width) {
+__declspec(naked) void ARGB4444ToARGBRow_AVX2(const uint8_t* src_argb4444,
+                                              uint8_t* dst_argb,
+                                              int width) {
   __asm {
     mov       eax,  0x0f0f0f0f  // generate mask 0x0f0f0f0f
     vmovd     xmm4, eax
     vbroadcastss ymm4, xmm4
-    vpslld    ymm5, ymm4, 4     // 0xf0f0f0f0 for high nibbles
-    mov       eax,  [esp + 4]   // src_argb4444
-    mov       edx,  [esp + 8]   // dst_argb
+    vpslld    ymm5, ymm4, 4  // 0xf0f0f0f0 for high nibbles
+    mov       eax,  [esp + 4]  // src_argb4444
+    mov       edx,  [esp + 8]  // dst_argb
     mov       ecx,  [esp + 12]  // width
     sub       edx,  eax
     sub       edx,  eax
 
  convertloop:
-    vmovdqu    ymm0, [eax]         // fetch 16 pixels of bgra4444
-    vpand      ymm2, ymm0, ymm5    // mask high nibbles
-    vpand      ymm0, ymm0, ymm4    // mask low nibbles
+    vmovdqu    ymm0, [eax]  // fetch 16 pixels of bgra4444
+    vpand      ymm2, ymm0, ymm5  // mask high nibbles
+    vpand      ymm0, ymm0, ymm4  // mask low nibbles
     vpsrlw     ymm3, ymm2, 4
     vpsllw     ymm1, ymm0, 4
     vpor       ymm2, ymm2, ymm3
     vpor       ymm0, ymm0, ymm1
-    vpermq     ymm0, ymm0, 0xd8    // mutate for unpack
+    vpermq     ymm0, ymm0, 0xd8  // mutate for unpack
     vpermq     ymm2, ymm2, 0xd8
     vpunpckhbw ymm1, ymm0, ymm2
     vpunpcklbw ymm0, ymm0, ymm2
@@ -657,9 +627,9 @@
 #endif  // HAS_ARGB4444TOARGBROW_AVX2
 
 // 24 instructions
-__declspec(naked)
-void ARGB1555ToARGBRow_SSE2(const uint8* src_argb1555, uint8* dst_argb,
-                            int width) {
+__declspec(naked) void ARGB1555ToARGBRow_SSE2(const uint8_t* src_argb1555,
+                                              uint8_t* dst_argb,
+                                              int width) {
   __asm {
     mov       eax, 0x01080108  // generate multiplier to repeat 5 bits
     movd      xmm5, eax
@@ -667,36 +637,36 @@
     mov       eax, 0x42004200  // multiplier shift by 6 and then repeat 5 bits
     movd      xmm6, eax
     pshufd    xmm6, xmm6, 0
-    pcmpeqb   xmm3, xmm3       // generate mask 0xf800f800 for Red
+    pcmpeqb   xmm3, xmm3  // generate mask 0xf800f800 for Red
     psllw     xmm3, 11
-    movdqa    xmm4, xmm3       // generate mask 0x03e003e0 for Green
+    movdqa    xmm4, xmm3  // generate mask 0x03e003e0 for Green
     psrlw     xmm4, 6
-    pcmpeqb   xmm7, xmm7       // generate mask 0xff00ff00 for Alpha
+    pcmpeqb   xmm7, xmm7  // generate mask 0xff00ff00 for Alpha
     psllw     xmm7, 8
 
-    mov       eax, [esp + 4]   // src_argb1555
-    mov       edx, [esp + 8]   // dst_argb
+    mov       eax, [esp + 4]  // src_argb1555
+    mov       edx, [esp + 8]  // dst_argb
     mov       ecx, [esp + 12]  // width
     sub       edx, eax
     sub       edx, eax
 
  convertloop:
-    movdqu    xmm0, [eax]   // fetch 8 pixels of 1555
+    movdqu    xmm0, [eax]  // fetch 8 pixels of 1555
     movdqa    xmm1, xmm0
     movdqa    xmm2, xmm0
-    psllw     xmm1, 1       // R in upper 5 bits
-    psllw     xmm2, 11      // B in upper 5 bits
+    psllw     xmm1, 1  // R in upper 5 bits
+    psllw     xmm2, 11  // B in upper 5 bits
     pand      xmm1, xmm3
-    pmulhuw   xmm2, xmm5    // * (256 + 8)
-    pmulhuw   xmm1, xmm5    // * (256 + 8)
+    pmulhuw   xmm2, xmm5  // * (256 + 8)
+    pmulhuw   xmm1, xmm5  // * (256 + 8)
     psllw     xmm1, 8
-    por       xmm1, xmm2    // RB
+    por       xmm1, xmm2  // RB
     movdqa    xmm2, xmm0
-    pand      xmm0, xmm4    // G in middle 5 bits
-    psraw     xmm2, 8       // A
-    pmulhuw   xmm0, xmm6    // << 6 * (256 + 8)
+    pand      xmm0, xmm4  // G in middle 5 bits
+    psraw     xmm2, 8  // A
+    pmulhuw   xmm0, xmm6  // << 6 * (256 + 8)
     pand      xmm2, xmm7
-    por       xmm0, xmm2    // AG
+    por       xmm0, xmm2  // AG
     movdqa    xmm2, xmm1
     punpcklbw xmm1, xmm0
     punpckhbw xmm2, xmm0
@@ -710,26 +680,26 @@
 }
 
 // 18 instructions.
-__declspec(naked)
-void ARGB4444ToARGBRow_SSE2(const uint8* src_argb4444, uint8* dst_argb,
-                            int width) {
+__declspec(naked) void ARGB4444ToARGBRow_SSE2(const uint8_t* src_argb4444,
+                                              uint8_t* dst_argb,
+                                              int width) {
   __asm {
     mov       eax, 0x0f0f0f0f  // generate mask 0x0f0f0f0f
     movd      xmm4, eax
     pshufd    xmm4, xmm4, 0
-    movdqa    xmm5, xmm4       // 0xf0f0f0f0 for high nibbles
+    movdqa    xmm5, xmm4  // 0xf0f0f0f0 for high nibbles
     pslld     xmm5, 4
-    mov       eax, [esp + 4]   // src_argb4444
-    mov       edx, [esp + 8]   // dst_argb
+    mov       eax, [esp + 4]  // src_argb4444
+    mov       edx, [esp + 8]  // dst_argb
     mov       ecx, [esp + 12]  // width
     sub       edx, eax
     sub       edx, eax
 
  convertloop:
-    movdqu    xmm0, [eax]   // fetch 8 pixels of bgra4444
+    movdqu    xmm0, [eax]  // fetch 8 pixels of bgra4444
     movdqa    xmm2, xmm0
-    pand      xmm0, xmm4    // mask low nibbles
-    pand      xmm2, xmm5    // mask high nibbles
+    pand      xmm0, xmm4  // mask low nibbles
+    pand      xmm2, xmm5  // mask high nibbles
     movdqa    xmm1, xmm0
     movdqa    xmm3, xmm2
     psllw     xmm1, 4
@@ -748,37 +718,38 @@
   }
 }
 
-__declspec(naked)
-void ARGBToRGB24Row_SSSE3(const uint8* src_argb, uint8* dst_rgb, int width) {
+__declspec(naked) void ARGBToRGB24Row_SSSE3(const uint8_t* src_argb,
+                                            uint8_t* dst_rgb,
+                                            int width) {
   __asm {
-    mov       eax, [esp + 4]   // src_argb
-    mov       edx, [esp + 8]   // dst_rgb
+    mov       eax, [esp + 4]  // src_argb
+    mov       edx, [esp + 8]  // dst_rgb
     mov       ecx, [esp + 12]  // width
     movdqa    xmm6, xmmword ptr kShuffleMaskARGBToRGB24
 
  convertloop:
-    movdqu    xmm0, [eax]   // fetch 16 pixels of argb
+    movdqu    xmm0, [eax]  // fetch 16 pixels of argb
     movdqu    xmm1, [eax + 16]
     movdqu    xmm2, [eax + 32]
     movdqu    xmm3, [eax + 48]
     lea       eax, [eax + 64]
-    pshufb    xmm0, xmm6    // pack 16 bytes of ARGB to 12 bytes of RGB
+    pshufb    xmm0, xmm6  // pack 16 bytes of ARGB to 12 bytes of RGB
     pshufb    xmm1, xmm6
     pshufb    xmm2, xmm6
     pshufb    xmm3, xmm6
-    movdqa    xmm4, xmm1   // 4 bytes from 1 for 0
-    psrldq    xmm1, 4      // 8 bytes from 1
-    pslldq    xmm4, 12     // 4 bytes from 1 for 0
-    movdqa    xmm5, xmm2   // 8 bytes from 2 for 1
-    por       xmm0, xmm4   // 4 bytes from 1 for 0
-    pslldq    xmm5, 8      // 8 bytes from 2 for 1
+    movdqa    xmm4, xmm1  // 4 bytes from 1 for 0
+    psrldq    xmm1, 4  // 8 bytes from 1
+    pslldq    xmm4, 12  // 4 bytes from 1 for 0
+    movdqa    xmm5, xmm2  // 8 bytes from 2 for 1
+    por       xmm0, xmm4  // 4 bytes from 1 for 0
+    pslldq    xmm5, 8  // 8 bytes from 2 for 1
     movdqu    [edx], xmm0  // store 0
-    por       xmm1, xmm5   // 8 bytes from 2 for 1
-    psrldq    xmm2, 8      // 4 bytes from 2
-    pslldq    xmm3, 4      // 12 bytes from 3 for 2
-    por       xmm2, xmm3   // 12 bytes from 3 for 2
-    movdqu    [edx + 16], xmm1   // store 1
-    movdqu    [edx + 32], xmm2   // store 2
+    por       xmm1, xmm5  // 8 bytes from 2 for 1
+    psrldq    xmm2, 8  // 4 bytes from 2
+    pslldq    xmm3, 4  // 12 bytes from 3 for 2
+    por       xmm2, xmm3  // 12 bytes from 3 for 2
+    movdqu    [edx + 16], xmm1  // store 1
+    movdqu    [edx + 32], xmm2  // store 2
     lea       edx, [edx + 48]
     sub       ecx, 16
     jg        convertloop
@@ -786,37 +757,38 @@
   }
 }
 
-__declspec(naked)
-void ARGBToRAWRow_SSSE3(const uint8* src_argb, uint8* dst_rgb, int width) {
+__declspec(naked) void ARGBToRAWRow_SSSE3(const uint8_t* src_argb,
+                                          uint8_t* dst_rgb,
+                                          int width) {
   __asm {
-    mov       eax, [esp + 4]   // src_argb
-    mov       edx, [esp + 8]   // dst_rgb
+    mov       eax, [esp + 4]  // src_argb
+    mov       edx, [esp + 8]  // dst_rgb
     mov       ecx, [esp + 12]  // width
     movdqa    xmm6, xmmword ptr kShuffleMaskARGBToRAW
 
  convertloop:
-    movdqu    xmm0, [eax]   // fetch 16 pixels of argb
+    movdqu    xmm0, [eax]  // fetch 16 pixels of argb
     movdqu    xmm1, [eax + 16]
     movdqu    xmm2, [eax + 32]
     movdqu    xmm3, [eax + 48]
     lea       eax, [eax + 64]
-    pshufb    xmm0, xmm6    // pack 16 bytes of ARGB to 12 bytes of RGB
+    pshufb    xmm0, xmm6  // pack 16 bytes of ARGB to 12 bytes of RGB
     pshufb    xmm1, xmm6
     pshufb    xmm2, xmm6
     pshufb    xmm3, xmm6
-    movdqa    xmm4, xmm1   // 4 bytes from 1 for 0
-    psrldq    xmm1, 4      // 8 bytes from 1
-    pslldq    xmm4, 12     // 4 bytes from 1 for 0
-    movdqa    xmm5, xmm2   // 8 bytes from 2 for 1
-    por       xmm0, xmm4   // 4 bytes from 1 for 0
-    pslldq    xmm5, 8      // 8 bytes from 2 for 1
+    movdqa    xmm4, xmm1  // 4 bytes from 1 for 0
+    psrldq    xmm1, 4  // 8 bytes from 1
+    pslldq    xmm4, 12  // 4 bytes from 1 for 0
+    movdqa    xmm5, xmm2  // 8 bytes from 2 for 1
+    por       xmm0, xmm4  // 4 bytes from 1 for 0
+    pslldq    xmm5, 8  // 8 bytes from 2 for 1
     movdqu    [edx], xmm0  // store 0
-    por       xmm1, xmm5   // 8 bytes from 2 for 1
-    psrldq    xmm2, 8      // 4 bytes from 2
-    pslldq    xmm3, 4      // 12 bytes from 3 for 2
-    por       xmm2, xmm3   // 12 bytes from 3 for 2
-    movdqu    [edx + 16], xmm1   // store 1
-    movdqu    [edx + 32], xmm2   // store 2
+    por       xmm1, xmm5  // 8 bytes from 2 for 1
+    psrldq    xmm2, 8  // 4 bytes from 2
+    pslldq    xmm3, 4  // 12 bytes from 3 for 2
+    por       xmm2, xmm3  // 12 bytes from 3 for 2
+    movdqu    [edx + 16], xmm1  // store 1
+    movdqu    [edx + 32], xmm2  // store 2
     lea       edx, [edx + 48]
     sub       ecx, 16
     jg        convertloop
@@ -824,33 +796,34 @@
   }
 }
 
-__declspec(naked)
-void ARGBToRGB565Row_SSE2(const uint8* src_argb, uint8* dst_rgb, int width) {
+__declspec(naked) void ARGBToRGB565Row_SSE2(const uint8_t* src_argb,
+                                            uint8_t* dst_rgb,
+                                            int width) {
   __asm {
-    mov       eax, [esp + 4]   // src_argb
-    mov       edx, [esp + 8]   // dst_rgb
+    mov       eax, [esp + 4]  // src_argb
+    mov       edx, [esp + 8]  // dst_rgb
     mov       ecx, [esp + 12]  // width
-    pcmpeqb   xmm3, xmm3       // generate mask 0x0000001f
+    pcmpeqb   xmm3, xmm3  // generate mask 0x0000001f
     psrld     xmm3, 27
-    pcmpeqb   xmm4, xmm4       // generate mask 0x000007e0
+    pcmpeqb   xmm4, xmm4  // generate mask 0x000007e0
     psrld     xmm4, 26
     pslld     xmm4, 5
-    pcmpeqb   xmm5, xmm5       // generate mask 0xfffff800
+    pcmpeqb   xmm5, xmm5  // generate mask 0xfffff800
     pslld     xmm5, 11
 
  convertloop:
-    movdqu    xmm0, [eax]   // fetch 4 pixels of argb
-    movdqa    xmm1, xmm0    // B
-    movdqa    xmm2, xmm0    // G
-    pslld     xmm0, 8       // R
-    psrld     xmm1, 3       // B
-    psrld     xmm2, 5       // G
-    psrad     xmm0, 16      // R
-    pand      xmm1, xmm3    // B
-    pand      xmm2, xmm4    // G
-    pand      xmm0, xmm5    // R
-    por       xmm1, xmm2    // BG
-    por       xmm0, xmm1    // BGR
+    movdqu    xmm0, [eax]  // fetch 4 pixels of argb
+    movdqa    xmm1, xmm0  // B
+    movdqa    xmm2, xmm0  // G
+    pslld     xmm0, 8  // R
+    psrld     xmm1, 3  // B
+    psrld     xmm2, 5  // G
+    psrad     xmm0, 16  // R
+    pand      xmm1, xmm3  // B
+    pand      xmm2, xmm4  // G
+    pand      xmm0, xmm5  // R
+    por       xmm1, xmm2  // BG
+    por       xmm0, xmm1  // BGR
     packssdw  xmm0, xmm0
     lea       eax, [eax + 16]
     movq      qword ptr [edx], xmm0  // store 4 pixels of RGB565
@@ -861,41 +834,42 @@
   }
 }
 
-__declspec(naked)
-void ARGBToRGB565DitherRow_SSE2(const uint8* src_argb, uint8* dst_rgb,
-                                const uint32 dither4, int width) {
+__declspec(naked) void ARGBToRGB565DitherRow_SSE2(const uint8_t* src_argb,
+                                                  uint8_t* dst_rgb,
+                                                  const uint32_t dither4,
+                                                  int width) {
   __asm {
 
-    mov       eax, [esp + 4]   // src_argb
-    mov       edx, [esp + 8]   // dst_rgb
-    movd      xmm6, [esp + 12] // dither4
+    mov       eax, [esp + 4]  // src_argb
+    mov       edx, [esp + 8]  // dst_rgb
+    movd      xmm6, [esp + 12]  // dither4
     mov       ecx, [esp + 16]  // width
-    punpcklbw xmm6, xmm6       // make dither 16 bytes
+    punpcklbw xmm6, xmm6  // make dither 16 bytes
     movdqa    xmm7, xmm6
     punpcklwd xmm6, xmm6
     punpckhwd xmm7, xmm7
-    pcmpeqb   xmm3, xmm3       // generate mask 0x0000001f
+    pcmpeqb   xmm3, xmm3  // generate mask 0x0000001f
     psrld     xmm3, 27
-    pcmpeqb   xmm4, xmm4       // generate mask 0x000007e0
+    pcmpeqb   xmm4, xmm4  // generate mask 0x000007e0
     psrld     xmm4, 26
     pslld     xmm4, 5
-    pcmpeqb   xmm5, xmm5       // generate mask 0xfffff800
+    pcmpeqb   xmm5, xmm5  // generate mask 0xfffff800
     pslld     xmm5, 11
 
  convertloop:
-    movdqu    xmm0, [eax]   // fetch 4 pixels of argb
-    paddusb   xmm0, xmm6    // add dither
-    movdqa    xmm1, xmm0    // B
-    movdqa    xmm2, xmm0    // G
-    pslld     xmm0, 8       // R
-    psrld     xmm1, 3       // B
-    psrld     xmm2, 5       // G
-    psrad     xmm0, 16      // R
-    pand      xmm1, xmm3    // B
-    pand      xmm2, xmm4    // G
-    pand      xmm0, xmm5    // R
-    por       xmm1, xmm2    // BG
-    por       xmm0, xmm1    // BGR
+    movdqu    xmm0, [eax]  // fetch 4 pixels of argb
+    paddusb   xmm0, xmm6  // add dither
+    movdqa    xmm1, xmm0  // B
+    movdqa    xmm2, xmm0  // G
+    pslld     xmm0, 8  // R
+    psrld     xmm1, 3  // B
+    psrld     xmm2, 5  // G
+    psrad     xmm0, 16  // R
+    pand      xmm1, xmm3  // B
+    pand      xmm2, xmm4  // G
+    pand      xmm0, xmm5  // R
+    por       xmm1, xmm2  // BG
+    por       xmm0, xmm1  // BGR
     packssdw  xmm0, xmm0
     lea       eax, [eax + 16]
     movq      qword ptr [edx], xmm0  // store 4 pixels of RGB565
@@ -907,39 +881,40 @@
 }
 
 #ifdef HAS_ARGBTORGB565DITHERROW_AVX2
-__declspec(naked)
-void ARGBToRGB565DitherRow_AVX2(const uint8* src_argb, uint8* dst_rgb,
-                                const uint32 dither4, int width) {
+__declspec(naked) void ARGBToRGB565DitherRow_AVX2(const uint8_t* src_argb,
+                                                  uint8_t* dst_rgb,
+                                                  const uint32_t dither4,
+                                                  int width) {
   __asm {
-    mov        eax, [esp + 4]      // src_argb
-    mov        edx, [esp + 8]      // dst_rgb
+    mov        eax, [esp + 4]  // src_argb
+    mov        edx, [esp + 8]  // dst_rgb
     vbroadcastss xmm6, [esp + 12]  // dither4
-    mov        ecx, [esp + 16]     // width
-    vpunpcklbw xmm6, xmm6, xmm6    // make dither 32 bytes
+    mov        ecx, [esp + 16]  // width
+    vpunpcklbw xmm6, xmm6, xmm6  // make dither 32 bytes
     vpermq     ymm6, ymm6, 0xd8
     vpunpcklwd ymm6, ymm6, ymm6
-    vpcmpeqb   ymm3, ymm3, ymm3    // generate mask 0x0000001f
+    vpcmpeqb   ymm3, ymm3, ymm3  // generate mask 0x0000001f
     vpsrld     ymm3, ymm3, 27
-    vpcmpeqb   ymm4, ymm4, ymm4    // generate mask 0x000007e0
+    vpcmpeqb   ymm4, ymm4, ymm4  // generate mask 0x000007e0
     vpsrld     ymm4, ymm4, 26
     vpslld     ymm4, ymm4, 5
-    vpslld     ymm5, ymm3, 11      // generate mask 0x0000f800
+    vpslld     ymm5, ymm3, 11  // generate mask 0x0000f800
 
  convertloop:
-    vmovdqu    ymm0, [eax]         // fetch 8 pixels of argb
-    vpaddusb   ymm0, ymm0, ymm6    // add dither
-    vpsrld     ymm2, ymm0, 5       // G
-    vpsrld     ymm1, ymm0, 3       // B
-    vpsrld     ymm0, ymm0, 8       // R
-    vpand      ymm2, ymm2, ymm4    // G
-    vpand      ymm1, ymm1, ymm3    // B
-    vpand      ymm0, ymm0, ymm5    // R
-    vpor       ymm1, ymm1, ymm2    // BG
-    vpor       ymm0, ymm0, ymm1    // BGR
+    vmovdqu    ymm0, [eax]  // fetch 8 pixels of argb
+    vpaddusb   ymm0, ymm0, ymm6  // add dither
+    vpsrld     ymm2, ymm0, 5  // G
+    vpsrld     ymm1, ymm0, 3  // B
+    vpsrld     ymm0, ymm0, 8  // R
+    vpand      ymm2, ymm2, ymm4  // G
+    vpand      ymm1, ymm1, ymm3  // B
+    vpand      ymm0, ymm0, ymm5  // R
+    vpor       ymm1, ymm1, ymm2  // BG
+    vpor       ymm0, ymm0, ymm1  // BGR
     vpackusdw  ymm0, ymm0, ymm0
     vpermq     ymm0, ymm0, 0xd8
     lea        eax, [eax + 32]
-    vmovdqu    [edx], xmm0         // store 8 pixels of RGB565
+    vmovdqu    [edx], xmm0  // store 8 pixels of RGB565
     lea        edx, [edx + 16]
     sub        ecx, 8
     jg         convertloop
@@ -950,37 +925,38 @@
 #endif  // HAS_ARGBTORGB565DITHERROW_AVX2
 
 // TODO(fbarchard): Improve sign extension/packing.
-__declspec(naked)
-void ARGBToARGB1555Row_SSE2(const uint8* src_argb, uint8* dst_rgb, int width) {
+__declspec(naked) void ARGBToARGB1555Row_SSE2(const uint8_t* src_argb,
+                                              uint8_t* dst_rgb,
+                                              int width) {
   __asm {
-    mov       eax, [esp + 4]   // src_argb
-    mov       edx, [esp + 8]   // dst_rgb
+    mov       eax, [esp + 4]  // src_argb
+    mov       edx, [esp + 8]  // dst_rgb
     mov       ecx, [esp + 12]  // width
-    pcmpeqb   xmm4, xmm4       // generate mask 0x0000001f
+    pcmpeqb   xmm4, xmm4  // generate mask 0x0000001f
     psrld     xmm4, 27
-    movdqa    xmm5, xmm4       // generate mask 0x000003e0
+    movdqa    xmm5, xmm4  // generate mask 0x000003e0
     pslld     xmm5, 5
-    movdqa    xmm6, xmm4       // generate mask 0x00007c00
+    movdqa    xmm6, xmm4  // generate mask 0x00007c00
     pslld     xmm6, 10
-    pcmpeqb   xmm7, xmm7       // generate mask 0xffff8000
+    pcmpeqb   xmm7, xmm7  // generate mask 0xffff8000
     pslld     xmm7, 15
 
  convertloop:
-    movdqu    xmm0, [eax]   // fetch 4 pixels of argb
-    movdqa    xmm1, xmm0    // B
-    movdqa    xmm2, xmm0    // G
-    movdqa    xmm3, xmm0    // R
-    psrad     xmm0, 16      // A
-    psrld     xmm1, 3       // B
-    psrld     xmm2, 6       // G
-    psrld     xmm3, 9       // R
-    pand      xmm0, xmm7    // A
-    pand      xmm1, xmm4    // B
-    pand      xmm2, xmm5    // G
-    pand      xmm3, xmm6    // R
-    por       xmm0, xmm1    // BA
-    por       xmm2, xmm3    // GR
-    por       xmm0, xmm2    // BGRA
+    movdqu    xmm0, [eax]  // fetch 4 pixels of argb
+    movdqa    xmm1, xmm0  // B
+    movdqa    xmm2, xmm0  // G
+    movdqa    xmm3, xmm0  // R
+    psrad     xmm0, 16  // A
+    psrld     xmm1, 3  // B
+    psrld     xmm2, 6  // G
+    psrld     xmm3, 9  // R
+    pand      xmm0, xmm7  // A
+    pand      xmm1, xmm4  // B
+    pand      xmm2, xmm5  // G
+    pand      xmm3, xmm6  // R
+    por       xmm0, xmm1  // BA
+    por       xmm2, xmm3  // GR
+    por       xmm0, xmm2  // BGRA
     packssdw  xmm0, xmm0
     lea       eax, [eax + 16]
     movq      qword ptr [edx], xmm0  // store 4 pixels of ARGB1555
@@ -991,22 +967,23 @@
   }
 }
 
-__declspec(naked)
-void ARGBToARGB4444Row_SSE2(const uint8* src_argb, uint8* dst_rgb, int width) {
+__declspec(naked) void ARGBToARGB4444Row_SSE2(const uint8_t* src_argb,
+                                              uint8_t* dst_rgb,
+                                              int width) {
   __asm {
-    mov       eax, [esp + 4]   // src_argb
-    mov       edx, [esp + 8]   // dst_rgb
+    mov       eax, [esp + 4]  // src_argb
+    mov       edx, [esp + 8]  // dst_rgb
     mov       ecx, [esp + 12]  // width
-    pcmpeqb   xmm4, xmm4       // generate mask 0xf000f000
+    pcmpeqb   xmm4, xmm4  // generate mask 0xf000f000
     psllw     xmm4, 12
-    movdqa    xmm3, xmm4       // generate mask 0x00f000f0
+    movdqa    xmm3, xmm4  // generate mask 0x00f000f0
     psrlw     xmm3, 8
 
  convertloop:
-    movdqu    xmm0, [eax]   // fetch 4 pixels of argb
+    movdqu    xmm0, [eax]  // fetch 4 pixels of argb
     movdqa    xmm1, xmm0
-    pand      xmm0, xmm3    // low nibble
-    pand      xmm1, xmm4    // high nibble
+    pand      xmm0, xmm3  // low nibble
+    pand      xmm1, xmm4  // high nibble
     psrld     xmm0, 4
     psrld     xmm1, 8
     por       xmm0, xmm1
@@ -1021,33 +998,34 @@
 }
 
 #ifdef HAS_ARGBTORGB565ROW_AVX2
-__declspec(naked)
-void ARGBToRGB565Row_AVX2(const uint8* src_argb, uint8* dst_rgb, int width) {
+__declspec(naked) void ARGBToRGB565Row_AVX2(const uint8_t* src_argb,
+                                            uint8_t* dst_rgb,
+                                            int width) {
   __asm {
-    mov        eax, [esp + 4]      // src_argb
-    mov        edx, [esp + 8]      // dst_rgb
-    mov        ecx, [esp + 12]     // width
-    vpcmpeqb   ymm3, ymm3, ymm3    // generate mask 0x0000001f
+    mov        eax, [esp + 4]  // src_argb
+    mov        edx, [esp + 8]  // dst_rgb
+    mov        ecx, [esp + 12]  // width
+    vpcmpeqb   ymm3, ymm3, ymm3  // generate mask 0x0000001f
     vpsrld     ymm3, ymm3, 27
-    vpcmpeqb   ymm4, ymm4, ymm4    // generate mask 0x000007e0
+    vpcmpeqb   ymm4, ymm4, ymm4  // generate mask 0x000007e0
     vpsrld     ymm4, ymm4, 26
     vpslld     ymm4, ymm4, 5
-    vpslld     ymm5, ymm3, 11      // generate mask 0x0000f800
+    vpslld     ymm5, ymm3, 11  // generate mask 0x0000f800
 
  convertloop:
-    vmovdqu    ymm0, [eax]         // fetch 8 pixels of argb
-    vpsrld     ymm2, ymm0, 5       // G
-    vpsrld     ymm1, ymm0, 3       // B
-    vpsrld     ymm0, ymm0, 8       // R
-    vpand      ymm2, ymm2, ymm4    // G
-    vpand      ymm1, ymm1, ymm3    // B
-    vpand      ymm0, ymm0, ymm5    // R
-    vpor       ymm1, ymm1, ymm2    // BG
-    vpor       ymm0, ymm0, ymm1    // BGR
+    vmovdqu    ymm0, [eax]  // fetch 8 pixels of argb
+    vpsrld     ymm2, ymm0, 5  // G
+    vpsrld     ymm1, ymm0, 3  // B
+    vpsrld     ymm0, ymm0, 8  // R
+    vpand      ymm2, ymm2, ymm4  // G
+    vpand      ymm1, ymm1, ymm3  // B
+    vpand      ymm0, ymm0, ymm5  // R
+    vpor       ymm1, ymm1, ymm2  // BG
+    vpor       ymm0, ymm0, ymm1  // BGR
     vpackusdw  ymm0, ymm0, ymm0
     vpermq     ymm0, ymm0, 0xd8
     lea        eax, [eax + 32]
-    vmovdqu    [edx], xmm0         // store 8 pixels of RGB565
+    vmovdqu    [edx], xmm0  // store 8 pixels of RGB565
     lea        edx, [edx + 16]
     sub        ecx, 8
     jg         convertloop
@@ -1058,36 +1036,37 @@
 #endif  // HAS_ARGBTORGB565ROW_AVX2
 
 #ifdef HAS_ARGBTOARGB1555ROW_AVX2
-__declspec(naked)
-void ARGBToARGB1555Row_AVX2(const uint8* src_argb, uint8* dst_rgb, int width) {
+__declspec(naked) void ARGBToARGB1555Row_AVX2(const uint8_t* src_argb,
+                                              uint8_t* dst_rgb,
+                                              int width) {
   __asm {
-    mov        eax, [esp + 4]      // src_argb
-    mov        edx, [esp + 8]      // dst_rgb
-    mov        ecx, [esp + 12]     // width
+    mov        eax, [esp + 4]  // src_argb
+    mov        edx, [esp + 8]  // dst_rgb
+    mov        ecx, [esp + 12]  // width
     vpcmpeqb   ymm4, ymm4, ymm4
-    vpsrld     ymm4, ymm4, 27      // generate mask 0x0000001f
-    vpslld     ymm5, ymm4, 5       // generate mask 0x000003e0
-    vpslld     ymm6, ymm4, 10      // generate mask 0x00007c00
-    vpcmpeqb   ymm7, ymm7, ymm7    // generate mask 0xffff8000
+    vpsrld     ymm4, ymm4, 27  // generate mask 0x0000001f
+    vpslld     ymm5, ymm4, 5  // generate mask 0x000003e0
+    vpslld     ymm6, ymm4, 10  // generate mask 0x00007c00
+    vpcmpeqb   ymm7, ymm7, ymm7  // generate mask 0xffff8000
     vpslld     ymm7, ymm7, 15
 
  convertloop:
-    vmovdqu    ymm0, [eax]         // fetch 8 pixels of argb
-    vpsrld     ymm3, ymm0, 9       // R
-    vpsrld     ymm2, ymm0, 6       // G
-    vpsrld     ymm1, ymm0, 3       // B
-    vpsrad     ymm0, ymm0, 16      // A
-    vpand      ymm3, ymm3, ymm6    // R
-    vpand      ymm2, ymm2, ymm5    // G
-    vpand      ymm1, ymm1, ymm4    // B
-    vpand      ymm0, ymm0, ymm7    // A
-    vpor       ymm0, ymm0, ymm1    // BA
-    vpor       ymm2, ymm2, ymm3    // GR
-    vpor       ymm0, ymm0, ymm2    // BGRA
+    vmovdqu    ymm0, [eax]  // fetch 8 pixels of argb
+    vpsrld     ymm3, ymm0, 9  // R
+    vpsrld     ymm2, ymm0, 6  // G
+    vpsrld     ymm1, ymm0, 3  // B
+    vpsrad     ymm0, ymm0, 16  // A
+    vpand      ymm3, ymm3, ymm6  // R
+    vpand      ymm2, ymm2, ymm5  // G
+    vpand      ymm1, ymm1, ymm4  // B
+    vpand      ymm0, ymm0, ymm7  // A
+    vpor       ymm0, ymm0, ymm1  // BA
+    vpor       ymm2, ymm2, ymm3  // GR
+    vpor       ymm0, ymm0, ymm2  // BGRA
     vpackssdw  ymm0, ymm0, ymm0
     vpermq     ymm0, ymm0, 0xd8
     lea        eax, [eax + 32]
-    vmovdqu    [edx], xmm0         // store 8 pixels of ARGB1555
+    vmovdqu    [edx], xmm0  // store 8 pixels of ARGB1555
     lea        edx, [edx + 16]
     sub        ecx, 8
     jg         convertloop
@@ -1098,27 +1077,28 @@
 #endif  // HAS_ARGBTOARGB1555ROW_AVX2
 
 #ifdef HAS_ARGBTOARGB4444ROW_AVX2
-__declspec(naked)
-void ARGBToARGB4444Row_AVX2(const uint8* src_argb, uint8* dst_rgb, int width) {
+__declspec(naked) void ARGBToARGB4444Row_AVX2(const uint8_t* src_argb,
+                                              uint8_t* dst_rgb,
+                                              int width) {
   __asm {
-    mov        eax, [esp + 4]   // src_argb
-    mov        edx, [esp + 8]   // dst_rgb
+    mov        eax, [esp + 4]  // src_argb
+    mov        edx, [esp + 8]  // dst_rgb
     mov        ecx, [esp + 12]  // width
-    vpcmpeqb   ymm4, ymm4, ymm4   // generate mask 0xf000f000
+    vpcmpeqb   ymm4, ymm4, ymm4  // generate mask 0xf000f000
     vpsllw     ymm4, ymm4, 12
-    vpsrlw     ymm3, ymm4, 8      // generate mask 0x00f000f0
+    vpsrlw     ymm3, ymm4, 8  // generate mask 0x00f000f0
 
  convertloop:
-    vmovdqu    ymm0, [eax]         // fetch 8 pixels of argb
-    vpand      ymm1, ymm0, ymm4    // high nibble
-    vpand      ymm0, ymm0, ymm3    // low nibble
+    vmovdqu    ymm0, [eax]  // fetch 8 pixels of argb
+    vpand      ymm1, ymm0, ymm4  // high nibble
+    vpand      ymm0, ymm0, ymm3  // low nibble
     vpsrld     ymm1, ymm1, 8
     vpsrld     ymm0, ymm0, 4
     vpor       ymm0, ymm0, ymm1
     vpackuswb  ymm0, ymm0, ymm0
     vpermq     ymm0, ymm0, 0xd8
     lea        eax, [eax + 32]
-    vmovdqu    [edx], xmm0         // store 8 pixels of ARGB4444
+    vmovdqu    [edx], xmm0  // store 8 pixels of ARGB4444
     lea        edx, [edx + 16]
     sub        ecx, 8
     jg         convertloop
@@ -1129,12 +1109,13 @@
 #endif  // HAS_ARGBTOARGB4444ROW_AVX2
 
 // Convert 16 ARGB pixels (64 bytes) to 16 Y values.
-__declspec(naked)
-void ARGBToYRow_SSSE3(const uint8* src_argb, uint8* dst_y, int width) {
+__declspec(naked) void ARGBToYRow_SSSE3(const uint8_t* src_argb,
+                                        uint8_t* dst_y,
+                                        int width) {
   __asm {
-    mov        eax, [esp + 4]   /* src_argb */
-    mov        edx, [esp + 8]   /* dst_y */
-    mov        ecx, [esp + 12]  /* width */
+    mov        eax, [esp + 4] /* src_argb */
+    mov        edx, [esp + 8] /* dst_y */
+    mov        ecx, [esp + 12] /* width */
     movdqa     xmm4, xmmword ptr kARGBToY
     movdqa     xmm5, xmmword ptr kAddY16
 
@@ -1164,12 +1145,13 @@
 
 // Convert 16 ARGB pixels (64 bytes) to 16 YJ values.
 // Same as ARGBToYRow but different coefficients, no add 16, but do rounding.
-__declspec(naked)
-void ARGBToYJRow_SSSE3(const uint8* src_argb, uint8* dst_y, int width) {
+__declspec(naked) void ARGBToYJRow_SSSE3(const uint8_t* src_argb,
+                                         uint8_t* dst_y,
+                                         int width) {
   __asm {
-    mov        eax, [esp + 4]   /* src_argb */
-    mov        edx, [esp + 8]   /* dst_y */
-    mov        ecx, [esp + 12]  /* width */
+    mov        eax, [esp + 4] /* src_argb */
+    mov        edx, [esp + 8] /* dst_y */
+    mov        ecx, [esp + 12] /* width */
     movdqa     xmm4, xmmword ptr kARGBToYJ
     movdqa     xmm5, xmmword ptr kAddYJ64
 
@@ -1200,17 +1182,16 @@
 
 #ifdef HAS_ARGBTOYROW_AVX2
 // vpermd for vphaddw + vpackuswb vpermd.
-static const lvec32 kPermdARGBToY_AVX = {
-  0, 4, 1, 5, 2, 6, 3, 7
-};
+static const lvec32 kPermdARGBToY_AVX = {0, 4, 1, 5, 2, 6, 3, 7};
 
 // Convert 32 ARGB pixels (128 bytes) to 32 Y values.
-__declspec(naked)
-void ARGBToYRow_AVX2(const uint8* src_argb, uint8* dst_y, int width) {
+__declspec(naked) void ARGBToYRow_AVX2(const uint8_t* src_argb,
+                                       uint8_t* dst_y,
+                                       int width) {
   __asm {
-    mov        eax, [esp + 4]   /* src_argb */
-    mov        edx, [esp + 8]   /* dst_y */
-    mov        ecx, [esp + 12]  /* width */
+    mov        eax, [esp + 4] /* src_argb */
+    mov        edx, [esp + 8] /* dst_y */
+    mov        ecx, [esp + 12] /* width */
     vbroadcastf128 ymm4, xmmword ptr kARGBToY
     vbroadcastf128 ymm5, xmmword ptr kAddY16
     vmovdqu    ymm6, ymmword ptr kPermdARGBToY_AVX
@@ -1244,12 +1225,13 @@
 
 #ifdef HAS_ARGBTOYJROW_AVX2
 // Convert 32 ARGB pixels (128 bytes) to 32 Y values.
-__declspec(naked)
-void ARGBToYJRow_AVX2(const uint8* src_argb, uint8* dst_y, int width) {
+__declspec(naked) void ARGBToYJRow_AVX2(const uint8_t* src_argb,
+                                        uint8_t* dst_y,
+                                        int width) {
   __asm {
-    mov        eax, [esp + 4]   /* src_argb */
-    mov        edx, [esp + 8]   /* dst_y */
-    mov        ecx, [esp + 12]  /* width */
+    mov        eax, [esp + 4] /* src_argb */
+    mov        edx, [esp + 8] /* dst_y */
+    mov        ecx, [esp + 12] /* width */
     vbroadcastf128 ymm4, xmmword ptr kARGBToYJ
     vbroadcastf128 ymm5, xmmword ptr kAddYJ64
     vmovdqu    ymm6, ymmword ptr kPermdARGBToY_AVX
@@ -1283,12 +1265,13 @@
 }
 #endif  //  HAS_ARGBTOYJROW_AVX2
 
-__declspec(naked)
-void BGRAToYRow_SSSE3(const uint8* src_argb, uint8* dst_y, int width) {
+__declspec(naked) void BGRAToYRow_SSSE3(const uint8_t* src_argb,
+                                        uint8_t* dst_y,
+                                        int width) {
   __asm {
-    mov        eax, [esp + 4]   /* src_argb */
-    mov        edx, [esp + 8]   /* dst_y */
-    mov        ecx, [esp + 12]  /* width */
+    mov        eax, [esp + 4] /* src_argb */
+    mov        edx, [esp + 8] /* dst_y */
+    mov        ecx, [esp + 12] /* width */
     movdqa     xmm4, xmmword ptr kBGRAToY
     movdqa     xmm5, xmmword ptr kAddY16
 
@@ -1316,12 +1299,13 @@
   }
 }
 
-__declspec(naked)
-void ABGRToYRow_SSSE3(const uint8* src_argb, uint8* dst_y, int width) {
+__declspec(naked) void ABGRToYRow_SSSE3(const uint8_t* src_argb,
+                                        uint8_t* dst_y,
+                                        int width) {
   __asm {
-    mov        eax, [esp + 4]   /* src_argb */
-    mov        edx, [esp + 8]   /* dst_y */
-    mov        ecx, [esp + 12]  /* width */
+    mov        eax, [esp + 4] /* src_argb */
+    mov        edx, [esp + 8] /* dst_y */
+    mov        ecx, [esp + 12] /* width */
     movdqa     xmm4, xmmword ptr kABGRToY
     movdqa     xmm5, xmmword ptr kAddY16
 
@@ -1349,12 +1333,13 @@
   }
 }
 
-__declspec(naked)
-void RGBAToYRow_SSSE3(const uint8* src_argb, uint8* dst_y, int width) {
+__declspec(naked) void RGBAToYRow_SSSE3(const uint8_t* src_argb,
+                                        uint8_t* dst_y,
+                                        int width) {
   __asm {
-    mov        eax, [esp + 4]   /* src_argb */
-    mov        edx, [esp + 8]   /* dst_y */
-    mov        ecx, [esp + 12]  /* width */
+    mov        eax, [esp + 4] /* src_argb */
+    mov        edx, [esp + 8] /* dst_y */
+    mov        ecx, [esp + 12] /* width */
     movdqa     xmm4, xmmword ptr kRGBAToY
     movdqa     xmm5, xmmword ptr kAddY16
 
@@ -1382,24 +1367,26 @@
   }
 }
 
-__declspec(naked)
-void ARGBToUVRow_SSSE3(const uint8* src_argb0, int src_stride_argb,
-                       uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void ARGBToUVRow_SSSE3(const uint8_t* src_argb0,
+                                         int src_stride_argb,
+                                         uint8_t* dst_u,
+                                         uint8_t* dst_v,
+                                         int width) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]   // src_argb
-    mov        esi, [esp + 8 + 8]   // src_stride_argb
+    mov        eax, [esp + 8 + 4]  // src_argb
+    mov        esi, [esp + 8 + 8]  // src_stride_argb
     mov        edx, [esp + 8 + 12]  // dst_u
     mov        edi, [esp + 8 + 16]  // dst_v
     mov        ecx, [esp + 8 + 20]  // width
     movdqa     xmm5, xmmword ptr kAddUV128
     movdqa     xmm6, xmmword ptr kARGBToV
     movdqa     xmm7, xmmword ptr kARGBToU
-    sub        edi, edx             // stride from u to v
+    sub        edi, edx  // stride from u to v
 
  convertloop:
-    /* step 1 - subsample 16x2 argb pixels to 8x1 */
+         /* step 1 - subsample 16x2 argb pixels to 8x1 */
     movdqu     xmm0, [eax]
     movdqu     xmm4, [eax + esi]
     pavgb      xmm0, xmm4
@@ -1423,9 +1410,9 @@
     shufps     xmm4, xmm3, 0xdd
     pavgb      xmm2, xmm4
 
-    // step 2 - convert to U and V
-    // from here down is very similar to Y code except
-    // instead of 16 different pixels, its 8 pixels of U and 8 of V
+        // step 2 - convert to U and V
+        // from here down is very similar to Y code except
+        // instead of 16 different pixels, its 8 pixels of U and 8 of V
     movdqa     xmm1, xmm0
     movdqa     xmm3, xmm2
     pmaddubsw  xmm0, xmm7  // U
@@ -1437,11 +1424,11 @@
     psraw      xmm0, 8
     psraw      xmm1, 8
     packsswb   xmm0, xmm1
-    paddb      xmm0, xmm5            // -> unsigned
+    paddb      xmm0, xmm5  // -> unsigned
 
-    // step 3 - store 8 U and 8 V values
-    movlps     qword ptr [edx], xmm0 // U
-    movhps     qword ptr [edx + edi], xmm0 // V
+        // step 3 - store 8 U and 8 V values
+    movlps     qword ptr [edx], xmm0  // U
+    movhps     qword ptr [edx + edi], xmm0  // V
     lea        edx, [edx + 8]
     sub        ecx, 16
     jg         convertloop
@@ -1452,24 +1439,26 @@
   }
 }
 
-__declspec(naked)
-void ARGBToUVJRow_SSSE3(const uint8* src_argb0, int src_stride_argb,
-                        uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void ARGBToUVJRow_SSSE3(const uint8_t* src_argb0,
+                                          int src_stride_argb,
+                                          uint8_t* dst_u,
+                                          uint8_t* dst_v,
+                                          int width) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]   // src_argb
-    mov        esi, [esp + 8 + 8]   // src_stride_argb
+    mov        eax, [esp + 8 + 4]  // src_argb
+    mov        esi, [esp + 8 + 8]  // src_stride_argb
     mov        edx, [esp + 8 + 12]  // dst_u
     mov        edi, [esp + 8 + 16]  // dst_v
     mov        ecx, [esp + 8 + 20]  // width
     movdqa     xmm5, xmmword ptr kAddUVJ128
     movdqa     xmm6, xmmword ptr kARGBToVJ
     movdqa     xmm7, xmmword ptr kARGBToUJ
-    sub        edi, edx             // stride from u to v
+    sub        edi, edx  // stride from u to v
 
  convertloop:
-    /* step 1 - subsample 16x2 argb pixels to 8x1 */
+         /* step 1 - subsample 16x2 argb pixels to 8x1 */
     movdqu     xmm0, [eax]
     movdqu     xmm4, [eax + esi]
     pavgb      xmm0, xmm4
@@ -1493,9 +1482,9 @@
     shufps     xmm4, xmm3, 0xdd
     pavgb      xmm2, xmm4
 
-    // step 2 - convert to U and V
-    // from here down is very similar to Y code except
-    // instead of 16 different pixels, its 8 pixels of U and 8 of V
+        // step 2 - convert to U and V
+        // from here down is very similar to Y code except
+        // instead of 16 different pixels, its 8 pixels of U and 8 of V
     movdqa     xmm1, xmm0
     movdqa     xmm3, xmm2
     pmaddubsw  xmm0, xmm7  // U
@@ -1510,9 +1499,9 @@
     psraw      xmm1, 8
     packsswb   xmm0, xmm1
 
-    // step 3 - store 8 U and 8 V values
-    movlps     qword ptr [edx], xmm0 // U
-    movhps     qword ptr [edx + edi], xmm0 // V
+        // step 3 - store 8 U and 8 V values
+    movlps     qword ptr [edx], xmm0  // U
+    movhps     qword ptr [edx + edi], xmm0  // V
     lea        edx, [edx + 8]
     sub        ecx, 16
     jg         convertloop
@@ -1524,24 +1513,26 @@
 }
 
 #ifdef HAS_ARGBTOUVROW_AVX2
-__declspec(naked)
-void ARGBToUVRow_AVX2(const uint8* src_argb0, int src_stride_argb,
-                      uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void ARGBToUVRow_AVX2(const uint8_t* src_argb0,
+                                        int src_stride_argb,
+                                        uint8_t* dst_u,
+                                        uint8_t* dst_v,
+                                        int width) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]   // src_argb
-    mov        esi, [esp + 8 + 8]   // src_stride_argb
+    mov        eax, [esp + 8 + 4]  // src_argb
+    mov        esi, [esp + 8 + 8]  // src_stride_argb
     mov        edx, [esp + 8 + 12]  // dst_u
     mov        edi, [esp + 8 + 16]  // dst_v
     mov        ecx, [esp + 8 + 20]  // width
     vbroadcastf128 ymm5, xmmword ptr kAddUV128
     vbroadcastf128 ymm6, xmmword ptr kARGBToV
     vbroadcastf128 ymm7, xmmword ptr kARGBToU
-    sub        edi, edx             // stride from u to v
+    sub        edi, edx   // stride from u to v
 
  convertloop:
-    /* step 1 - subsample 32x2 argb pixels to 16x1 */
+        /* step 1 - subsample 32x2 argb pixels to 16x1 */
     vmovdqu    ymm0, [eax]
     vmovdqu    ymm1, [eax + 32]
     vmovdqu    ymm2, [eax + 64]
@@ -1558,9 +1549,9 @@
     vshufps    ymm2, ymm2, ymm3, 0xdd
     vpavgb     ymm2, ymm2, ymm4  // mutated by vshufps
 
-    // step 2 - convert to U and V
-    // from here down is very similar to Y code except
-    // instead of 32 different pixels, its 16 pixels of U and 16 of V
+        // step 2 - convert to U and V
+        // from here down is very similar to Y code except
+        // instead of 32 different pixels, its 16 pixels of U and 16 of V
     vpmaddubsw ymm1, ymm0, ymm7  // U
     vpmaddubsw ymm3, ymm2, ymm7
     vpmaddubsw ymm0, ymm0, ymm6  // V
@@ -1574,9 +1565,9 @@
     vpshufb    ymm0, ymm0, ymmword ptr kShufARGBToUV_AVX  // for vshufps/vphaddw
     vpaddb     ymm0, ymm0, ymm5  // -> unsigned
 
-    // step 3 - store 16 U and 16 V values
-    vextractf128 [edx], ymm0, 0 // U
-    vextractf128 [edx + edi], ymm0, 1 // V
+        // step 3 - store 16 U and 16 V values
+    vextractf128 [edx], ymm0, 0  // U
+    vextractf128 [edx + edi], ymm0, 1  // V
     lea        edx, [edx + 16]
     sub        ecx, 32
     jg         convertloop
@@ -1590,24 +1581,26 @@
 #endif  // HAS_ARGBTOUVROW_AVX2
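The AVX2 routine above subsamples 32x2 ARGB pixels down to 16 U/V pairs and converts them with `pmaddubsw` against the `kARGBToU`/`kARGBToV` coefficient tables. As a rough scalar model of that step (a sketch only, not libyuv's actual C fallback; the BT.601 studio-swing coefficients 112/-74/-38 and 112/-94/-18 are assumed here, since the real values live in the coefficient tables defined elsewhere in this file):

```cpp
#include <cstdint>

// Scalar sketch of ARGBToUVRow: average each 2x2 block of ARGB pixels,
// then convert the averaged B,G,R to one U and one V sample using
// fixed-point BT.601-style coefficients (assumed values; the SIMD code
// reads its coefficients from kARGBToU/kARGBToV instead).
void ARGBToUVRow_C_Sketch(const uint8_t* src_argb, int src_stride_argb,
                          uint8_t* dst_u, uint8_t* dst_v, int width) {
  for (int x = 0; x + 1 < width; x += 2) {
    const uint8_t* p0 = src_argb + x * 4;      // top-left pixel of the block
    const uint8_t* p1 = p0 + src_stride_argb;  // bottom-left pixel
    // Average B, G, R over the 2x2 block (byte order per pixel is B,G,R,A).
    int b = (p0[0] + p0[4] + p1[0] + p1[4] + 2) >> 2;
    int g = (p0[1] + p0[5] + p1[1] + p1[5] + 2) >> 2;
    int r = (p0[2] + p0[6] + p1[2] + p1[6] + 2) >> 2;
    // Fixed-point conversion, then bias by 128 to make the result unsigned,
    // mirroring the psraw/packsswb/paddb(kAddUV128) sequence in the asm.
    dst_u[x / 2] = (uint8_t)(((112 * b - 74 * g - 38 * r) >> 8) + 128);
    dst_v[x / 2] = (uint8_t)(((112 * r - 94 * g - 18 * b) >> 8) + 128);
  }
}
```

For a neutral gray input (B = G = R) the coefficient rows sum to zero, so both chroma outputs land on the unsigned midpoint 128, which is a quick sanity check for any such coefficient set.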
 
 #ifdef HAS_ARGBTOUVJROW_AVX2
-__declspec(naked)
-void ARGBToUVJRow_AVX2(const uint8* src_argb0, int src_stride_argb,
-                      uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void ARGBToUVJRow_AVX2(const uint8_t* src_argb0,
+                                         int src_stride_argb,
+                                         uint8_t* dst_u,
+                                         uint8_t* dst_v,
+                                         int width) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]   // src_argb
-    mov        esi, [esp + 8 + 8]   // src_stride_argb
+    mov        eax, [esp + 8 + 4]  // src_argb
+    mov        esi, [esp + 8 + 8]  // src_stride_argb
     mov        edx, [esp + 8 + 12]  // dst_u
     mov        edi, [esp + 8 + 16]  // dst_v
     mov        ecx, [esp + 8 + 20]  // width
     vbroadcastf128 ymm5, xmmword ptr kAddUV128
     vbroadcastf128 ymm6, xmmword ptr kARGBToV
     vbroadcastf128 ymm7, xmmword ptr kARGBToU
-    sub        edi, edx             // stride from u to v
+    sub        edi, edx   // stride from u to v
 
  convertloop:
-    /* step 1 - subsample 32x2 argb pixels to 16x1 */
+        /* step 1 - subsample 32x2 argb pixels to 16x1 */
     vmovdqu    ymm0, [eax]
     vmovdqu    ymm1, [eax + 32]
     vmovdqu    ymm2, [eax + 64]
@@ -1624,9 +1617,9 @@
     vshufps    ymm2, ymm2, ymm3, 0xdd
     vpavgb     ymm2, ymm2, ymm4  // mutated by vshufps
 
-    // step 2 - convert to U and V
-    // from here down is very similar to Y code except
-    // instead of 32 different pixels, its 16 pixels of U and 16 of V
+        // step 2 - convert to U and V
+        // from here down is very similar to Y code except
+        // instead of 32 different pixels, it's 16 pixels of U and 16 of V
     vpmaddubsw ymm1, ymm0, ymm7  // U
     vpmaddubsw ymm3, ymm2, ymm7
     vpmaddubsw ymm0, ymm0, ymm6  // V
@@ -1641,9 +1634,9 @@
     vpermq     ymm0, ymm0, 0xd8  // For vpacksswb
     vpshufb    ymm0, ymm0, ymmword ptr kShufARGBToUV_AVX  // for vshufps/vphaddw
 
-    // step 3 - store 16 U and 16 V values
-    vextractf128 [edx], ymm0, 0 // U
-    vextractf128 [edx + edi], ymm0, 1 // V
+        // step 3 - store 16 U and 16 V values
+    vextractf128 [edx], ymm0, 0  // U
+    vextractf128 [edx + edi], ymm0, 1  // V
     lea        edx, [edx + 16]
     sub        ecx, 32
     jg         convertloop
@@ -1656,23 +1649,24 @@
 }
 #endif  // HAS_ARGBTOUVJROW_AVX2
 
-__declspec(naked)
-void ARGBToUV444Row_SSSE3(const uint8* src_argb0,
-                          uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void ARGBToUV444Row_SSSE3(const uint8_t* src_argb0,
+                                            uint8_t* dst_u,
+                                            uint8_t* dst_v,
+                                            int width) {
   __asm {
     push       edi
-    mov        eax, [esp + 4 + 4]   // src_argb
-    mov        edx, [esp + 4 + 8]   // dst_u
+    mov        eax, [esp + 4 + 4]  // src_argb
+    mov        edx, [esp + 4 + 8]  // dst_u
     mov        edi, [esp + 4 + 12]  // dst_v
     mov        ecx, [esp + 4 + 16]  // width
     movdqa     xmm5, xmmword ptr kAddUV128
     movdqa     xmm6, xmmword ptr kARGBToV
     movdqa     xmm7, xmmword ptr kARGBToU
-    sub        edi, edx             // stride from u to v
+    sub        edi, edx    // stride from u to v
 
  convertloop:
-    /* convert to U and V */
-    movdqu     xmm0, [eax]          // U
+        /* convert to U and V */
+    movdqu     xmm0, [eax]  // U
     movdqu     xmm1, [eax + 16]
     movdqu     xmm2, [eax + 32]
     movdqu     xmm3, [eax + 48]
@@ -1688,7 +1682,7 @@
     paddb      xmm0, xmm5
     movdqu     [edx], xmm0
 
-    movdqu     xmm0, [eax]          // V
+    movdqu     xmm0, [eax]  // V
     movdqu     xmm1, [eax + 16]
     movdqu     xmm2, [eax + 32]
     movdqu     xmm3, [eax + 48]
@@ -1713,24 +1707,26 @@
   }
 }
 
-__declspec(naked)
-void BGRAToUVRow_SSSE3(const uint8* src_argb0, int src_stride_argb,
-                       uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void BGRAToUVRow_SSSE3(const uint8_t* src_argb0,
+                                         int src_stride_argb,
+                                         uint8_t* dst_u,
+                                         uint8_t* dst_v,
+                                         int width) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]   // src_argb
-    mov        esi, [esp + 8 + 8]   // src_stride_argb
+    mov        eax, [esp + 8 + 4]  // src_argb
+    mov        esi, [esp + 8 + 8]  // src_stride_argb
     mov        edx, [esp + 8 + 12]  // dst_u
     mov        edi, [esp + 8 + 16]  // dst_v
     mov        ecx, [esp + 8 + 20]  // width
     movdqa     xmm5, xmmword ptr kAddUV128
     movdqa     xmm6, xmmword ptr kBGRAToV
     movdqa     xmm7, xmmword ptr kBGRAToU
-    sub        edi, edx             // stride from u to v
+    sub        edi, edx  // stride from u to v
 
  convertloop:
-    /* step 1 - subsample 16x2 argb pixels to 8x1 */
+         /* step 1 - subsample 16x2 argb pixels to 8x1 */
     movdqu     xmm0, [eax]
     movdqu     xmm4, [eax + esi]
     pavgb      xmm0, xmm4
@@ -1754,9 +1750,9 @@
     shufps     xmm4, xmm3, 0xdd
     pavgb      xmm2, xmm4
 
-    // step 2 - convert to U and V
-    // from here down is very similar to Y code except
-    // instead of 16 different pixels, its 8 pixels of U and 8 of V
+        // step 2 - convert to U and V
+        // from here down is very similar to Y code except
+        // instead of 16 different pixels, it's 8 pixels of U and 8 of V
     movdqa     xmm1, xmm0
     movdqa     xmm3, xmm2
     pmaddubsw  xmm0, xmm7  // U
@@ -1768,11 +1764,11 @@
     psraw      xmm0, 8
     psraw      xmm1, 8
     packsswb   xmm0, xmm1
-    paddb      xmm0, xmm5            // -> unsigned
+    paddb      xmm0, xmm5  // -> unsigned
 
-    // step 3 - store 8 U and 8 V values
-    movlps     qword ptr [edx], xmm0 // U
-    movhps     qword ptr [edx + edi], xmm0 // V
+        // step 3 - store 8 U and 8 V values
+    movlps     qword ptr [edx], xmm0  // U
+    movhps     qword ptr [edx + edi], xmm0  // V
     lea        edx, [edx + 8]
     sub        ecx, 16
     jg         convertloop
@@ -1783,24 +1779,26 @@
   }
 }
 
-__declspec(naked)
-void ABGRToUVRow_SSSE3(const uint8* src_argb0, int src_stride_argb,
-                       uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void ABGRToUVRow_SSSE3(const uint8_t* src_argb0,
+                                         int src_stride_argb,
+                                         uint8_t* dst_u,
+                                         uint8_t* dst_v,
+                                         int width) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]   // src_argb
-    mov        esi, [esp + 8 + 8]   // src_stride_argb
+    mov        eax, [esp + 8 + 4]  // src_argb
+    mov        esi, [esp + 8 + 8]  // src_stride_argb
     mov        edx, [esp + 8 + 12]  // dst_u
     mov        edi, [esp + 8 + 16]  // dst_v
     mov        ecx, [esp + 8 + 20]  // width
     movdqa     xmm5, xmmword ptr kAddUV128
     movdqa     xmm6, xmmword ptr kABGRToV
     movdqa     xmm7, xmmword ptr kABGRToU
-    sub        edi, edx             // stride from u to v
+    sub        edi, edx  // stride from u to v
 
  convertloop:
-    /* step 1 - subsample 16x2 argb pixels to 8x1 */
+         /* step 1 - subsample 16x2 argb pixels to 8x1 */
     movdqu     xmm0, [eax]
     movdqu     xmm4, [eax + esi]
     pavgb      xmm0, xmm4
@@ -1824,9 +1822,9 @@
     shufps     xmm4, xmm3, 0xdd
     pavgb      xmm2, xmm4
 
-    // step 2 - convert to U and V
-    // from here down is very similar to Y code except
-    // instead of 16 different pixels, its 8 pixels of U and 8 of V
+        // step 2 - convert to U and V
+        // from here down is very similar to Y code except
+        // instead of 16 different pixels, it's 8 pixels of U and 8 of V
     movdqa     xmm1, xmm0
     movdqa     xmm3, xmm2
     pmaddubsw  xmm0, xmm7  // U
@@ -1838,11 +1836,11 @@
     psraw      xmm0, 8
     psraw      xmm1, 8
     packsswb   xmm0, xmm1
-    paddb      xmm0, xmm5            // -> unsigned
+    paddb      xmm0, xmm5  // -> unsigned
 
-    // step 3 - store 8 U and 8 V values
-    movlps     qword ptr [edx], xmm0 // U
-    movhps     qword ptr [edx + edi], xmm0 // V
+        // step 3 - store 8 U and 8 V values
+    movlps     qword ptr [edx], xmm0  // U
+    movhps     qword ptr [edx + edi], xmm0  // V
     lea        edx, [edx + 8]
     sub        ecx, 16
     jg         convertloop
@@ -1853,24 +1851,26 @@
   }
 }
 
-__declspec(naked)
-void RGBAToUVRow_SSSE3(const uint8* src_argb0, int src_stride_argb,
-                       uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void RGBAToUVRow_SSSE3(const uint8_t* src_argb0,
+                                         int src_stride_argb,
+                                         uint8_t* dst_u,
+                                         uint8_t* dst_v,
+                                         int width) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]   // src_argb
-    mov        esi, [esp + 8 + 8]   // src_stride_argb
+    mov        eax, [esp + 8 + 4]  // src_argb
+    mov        esi, [esp + 8 + 8]  // src_stride_argb
     mov        edx, [esp + 8 + 12]  // dst_u
     mov        edi, [esp + 8 + 16]  // dst_v
     mov        ecx, [esp + 8 + 20]  // width
     movdqa     xmm5, xmmword ptr kAddUV128
     movdqa     xmm6, xmmword ptr kRGBAToV
     movdqa     xmm7, xmmword ptr kRGBAToU
-    sub        edi, edx             // stride from u to v
+    sub        edi, edx  // stride from u to v
 
  convertloop:
-    /* step 1 - subsample 16x2 argb pixels to 8x1 */
+         /* step 1 - subsample 16x2 argb pixels to 8x1 */
     movdqu     xmm0, [eax]
     movdqu     xmm4, [eax + esi]
     pavgb      xmm0, xmm4
@@ -1894,9 +1894,9 @@
     shufps     xmm4, xmm3, 0xdd
     pavgb      xmm2, xmm4
 
-    // step 2 - convert to U and V
-    // from here down is very similar to Y code except
-    // instead of 16 different pixels, its 8 pixels of U and 8 of V
+        // step 2 - convert to U and V
+        // from here down is very similar to Y code except
+        // instead of 16 different pixels, it's 8 pixels of U and 8 of V
     movdqa     xmm1, xmm0
     movdqa     xmm3, xmm2
     pmaddubsw  xmm0, xmm7  // U
@@ -1908,11 +1908,11 @@
     psraw      xmm0, 8
     psraw      xmm1, 8
     packsswb   xmm0, xmm1
-    paddb      xmm0, xmm5            // -> unsigned
+    paddb      xmm0, xmm5  // -> unsigned
 
-    // step 3 - store 8 U and 8 V values
-    movlps     qword ptr [edx], xmm0 // U
-    movhps     qword ptr [edx + edi], xmm0 // V
+        // step 3 - store 8 U and 8 V values
+    movlps     qword ptr [edx], xmm0  // U
+    movhps     qword ptr [edx + edi], xmm0  // V
     lea        edx, [edx + 8]
     sub        ecx, 16
     jg         convertloop
@@ -1925,109 +1925,95 @@
 #endif  // HAS_ARGBTOYROW_SSSE3
 
 // Read 16 UV from 444
-#define READYUV444_AVX2 __asm {                                                \
-    __asm vmovdqu    xmm0, [esi]                  /* U */                      \
-    __asm vmovdqu    xmm1, [esi + edi]            /* V */                      \
+#define READYUV444_AVX2 \
+  __asm {                                                \
+    __asm vmovdqu    xmm0, [esi] /* U */                      \
+    __asm vmovdqu    xmm1, [esi + edi] /* V */                      \
     __asm lea        esi,  [esi + 16]                                          \
     __asm vpermq     ymm0, ymm0, 0xd8                                          \
     __asm vpermq     ymm1, ymm1, 0xd8                                          \
-    __asm vpunpcklbw ymm0, ymm0, ymm1             /* UV */                     \
-    __asm vmovdqu    xmm4, [eax]                  /* Y */                      \
+    __asm vpunpcklbw ymm0, ymm0, ymm1 /* UV */                     \
+    __asm vmovdqu    xmm4, [eax] /* Y */                      \
     __asm vpermq     ymm4, ymm4, 0xd8                                          \
     __asm vpunpcklbw ymm4, ymm4, ymm4                                          \
-    __asm lea        eax, [eax + 16]                                           \
-  }
+    __asm lea        eax, [eax + 16]}
 
 // Read 8 UV from 422, upsample to 16 UV.
-#define READYUV422_AVX2 __asm {                                                \
-    __asm vmovq      xmm0, qword ptr [esi]        /* U */                      \
-    __asm vmovq      xmm1, qword ptr [esi + edi]  /* V */                      \
+#define READYUV422_AVX2 \
+  __asm {                                                \
+    __asm vmovq      xmm0, qword ptr [esi] /* U */                      \
+    __asm vmovq      xmm1, qword ptr [esi + edi] /* V */                      \
     __asm lea        esi,  [esi + 8]                                           \
-    __asm vpunpcklbw ymm0, ymm0, ymm1             /* UV */                     \
+    __asm vpunpcklbw ymm0, ymm0, ymm1 /* UV */                     \
     __asm vpermq     ymm0, ymm0, 0xd8                                          \
-    __asm vpunpcklwd ymm0, ymm0, ymm0             /* UVUV (upsample) */        \
-    __asm vmovdqu    xmm4, [eax]                  /* Y */                      \
+    __asm vpunpcklwd ymm0, ymm0, ymm0 /* UVUV (upsample) */        \
+    __asm vmovdqu    xmm4, [eax] /* Y */                      \
     __asm vpermq     ymm4, ymm4, 0xd8                                          \
     __asm vpunpcklbw ymm4, ymm4, ymm4                                          \
-    __asm lea        eax, [eax + 16]                                           \
-  }
+    __asm lea        eax, [eax + 16]}
 
 // Read 8 UV from 422, upsample to 16 UV.  With 16 Alpha.
-#define READYUVA422_AVX2 __asm {                                               \
-    __asm vmovq      xmm0, qword ptr [esi]        /* U */                      \
-    __asm vmovq      xmm1, qword ptr [esi + edi]  /* V */                      \
+#define READYUVA422_AVX2 \
+  __asm {                                               \
+    __asm vmovq      xmm0, qword ptr [esi] /* U */                      \
+    __asm vmovq      xmm1, qword ptr [esi + edi] /* V */                      \
     __asm lea        esi,  [esi + 8]                                           \
-    __asm vpunpcklbw ymm0, ymm0, ymm1             /* UV */                     \
+    __asm vpunpcklbw ymm0, ymm0, ymm1 /* UV */                     \
     __asm vpermq     ymm0, ymm0, 0xd8                                          \
-    __asm vpunpcklwd ymm0, ymm0, ymm0             /* UVUV (upsample) */        \
-    __asm vmovdqu    xmm4, [eax]                  /* Y */                      \
+    __asm vpunpcklwd ymm0, ymm0, ymm0 /* UVUV (upsample) */        \
+    __asm vmovdqu    xmm4, [eax] /* Y */                      \
     __asm vpermq     ymm4, ymm4, 0xd8                                          \
     __asm vpunpcklbw ymm4, ymm4, ymm4                                          \
     __asm lea        eax, [eax + 16]                                           \
-    __asm vmovdqu    xmm5, [ebp]                  /* A */                      \
+    __asm vmovdqu    xmm5, [ebp] /* A */                      \
     __asm vpermq     ymm5, ymm5, 0xd8                                          \
-    __asm lea        ebp, [ebp + 16]                                           \
-  }
-
-// Read 4 UV from 411, upsample to 16 UV.
-#define READYUV411_AVX2 __asm {                                                \
-    __asm vmovd      xmm0, dword ptr [esi]        /* U */                      \
-    __asm vmovd      xmm1, dword ptr [esi + edi]  /* V */                      \
-    __asm lea        esi,  [esi + 4]                                           \
-    __asm vpunpcklbw ymm0, ymm0, ymm1             /* UV */                     \
-    __asm vpunpcklwd ymm0, ymm0, ymm0             /* UVUV (upsample) */        \
-    __asm vpermq     ymm0, ymm0, 0xd8                                          \
-    __asm vpunpckldq ymm0, ymm0, ymm0             /* UVUVUVUV (upsample) */    \
-    __asm vmovdqu    xmm4, [eax]                  /* Y */                      \
-    __asm vpermq     ymm4, ymm4, 0xd8                                          \
-    __asm vpunpcklbw ymm4, ymm4, ymm4                                          \
-    __asm lea        eax, [eax + 16]                                           \
-  }
+    __asm lea        ebp, [ebp + 16]}
 
 // Read 8 UV from NV12, upsample to 16 UV.
-#define READNV12_AVX2 __asm {                                                  \
-    __asm vmovdqu    xmm0, [esi]                  /* UV */                     \
+#define READNV12_AVX2 \
+  __asm {                                                  \
+    __asm vmovdqu    xmm0, [esi] /* UV */                     \
     __asm lea        esi,  [esi + 16]                                          \
     __asm vpermq     ymm0, ymm0, 0xd8                                          \
-    __asm vpunpcklwd ymm0, ymm0, ymm0             /* UVUV (upsample) */        \
-    __asm vmovdqu    xmm4, [eax]                  /* Y */                      \
+    __asm vpunpcklwd ymm0, ymm0, ymm0 /* UVUV (upsample) */        \
+    __asm vmovdqu    xmm4, [eax] /* Y */                      \
     __asm vpermq     ymm4, ymm4, 0xd8                                          \
     __asm vpunpcklbw ymm4, ymm4, ymm4                                          \
-    __asm lea        eax, [eax + 16]                                           \
-  }
+    __asm lea        eax, [eax + 16]}
 
 // Read 8 UV from NV21, upsample to 16 UV.
-#define READNV21_AVX2 __asm {                                                  \
-    __asm vmovdqu    xmm0, [esi]                  /* UV */                     \
+#define READNV21_AVX2 \
+  __asm {                                                  \
+    __asm vmovdqu    xmm0, [esi] /* UV */                     \
     __asm lea        esi,  [esi + 16]                                          \
     __asm vpermq     ymm0, ymm0, 0xd8                                          \
     __asm vpshufb    ymm0, ymm0, ymmword ptr kShuffleNV21                      \
-    __asm vmovdqu    xmm4, [eax]                  /* Y */                      \
+    __asm vmovdqu    xmm4, [eax] /* Y */                      \
     __asm vpermq     ymm4, ymm4, 0xd8                                          \
     __asm vpunpcklbw ymm4, ymm4, ymm4                                          \
-    __asm lea        eax, [eax + 16]                                           \
-  }
+    __asm lea        eax, [eax + 16]}
 
 // Read 8 YUY2 with 16 Y and upsample 8 UV to 16 UV.
-#define READYUY2_AVX2 __asm {                                                  \
-    __asm vmovdqu    ymm4, [eax]          /* YUY2 */                           \
+#define READYUY2_AVX2 \
+  __asm {                                                  \
+    __asm vmovdqu    ymm4, [eax] /* YUY2 */                           \
     __asm vpshufb    ymm4, ymm4, ymmword ptr kShuffleYUY2Y                     \
-    __asm vmovdqu    ymm0, [eax]          /* UV */                             \
+    __asm vmovdqu    ymm0, [eax] /* UV */                             \
     __asm vpshufb    ymm0, ymm0, ymmword ptr kShuffleYUY2UV                    \
-    __asm lea        eax, [eax + 32]                                           \
-  }
+    __asm lea        eax, [eax + 32]}
 
 // Read 8 UYVY with 16 Y and upsample 8 UV to 16 UV.
-#define READUYVY_AVX2 __asm {                                                  \
-    __asm vmovdqu    ymm4, [eax]          /* UYVY */                           \
+#define READUYVY_AVX2 \
+  __asm {                                                  \
+    __asm vmovdqu    ymm4, [eax] /* UYVY */                           \
     __asm vpshufb    ymm4, ymm4, ymmword ptr kShuffleUYVYY                     \
-    __asm vmovdqu    ymm0, [eax]          /* UV */                             \
+    __asm vmovdqu    ymm0, [eax] /* UV */                             \
     __asm vpshufb    ymm0, ymm0, ymmword ptr kShuffleUYVYUV                    \
-    __asm lea        eax, [eax + 32]                                           \
-  }
+    __asm lea        eax, [eax + 32]}
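The READYUY2/READUYVY macros deinterleave packed 4:2:2 data with `vpshufb` shuffles. In scalar terms, YUY2 stores each pixel pair as the four bytes Y0 U Y1 V (UYVY swaps the luma/chroma positions), which a plain loop can split like this (a sketch of the data layout only; `Yuy2ToYAndUv` is a hypothetical helper, not a libyuv function):

```cpp
#include <cstdint>

// Scalar sketch of the READYUY2 step: split one row of packed YUY2
// (byte order Y0, U, Y1, V per pixel pair) into a Y plane and the
// interleaved UV bytes shared by each pair of pixels.
void Yuy2ToYAndUv(const uint8_t* src_yuy2, uint8_t* dst_y,
                  uint8_t* dst_uv, int width) {
  for (int x = 0; x + 1 < width; x += 2) {
    dst_y[x] = src_yuy2[0];      // Y0
    dst_y[x + 1] = src_yuy2[2];  // Y1
    dst_uv[x] = src_yuy2[1];     // U, shared by both pixels of the pair
    dst_uv[x + 1] = src_yuy2[3]; // V, shared by both pixels of the pair
    src_yuy2 += 4;               // advance one 2-pixel group
  }
}
```

The AVX2 macros achieve the same split for 16 pixels at a time via the `kShuffleYUY2Y`/`kShuffleYUY2UV` shuffle masks rather than per-byte copies.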
 
 // Convert 16 pixels: 16 UV and 16 Y.
-#define YUVTORGB_AVX2(YuvConstants) __asm {                                    \
+#define YUVTORGB_AVX2(YuvConstants) \
+  __asm {                                    \
     __asm vpmaddubsw ymm2, ymm0, ymmword ptr [YuvConstants + KUVTOR] /* R UV */\
     __asm vpmaddubsw ymm1, ymm0, ymmword ptr [YuvConstants + KUVTOG] /* G UV */\
     __asm vpmaddubsw ymm0, ymm0, ymmword ptr [YuvConstants + KUVTOB] /* B UV */\
@@ -2036,68 +2022,67 @@
     __asm vmovdqu    ymm3, ymmword ptr [YuvConstants + KUVBIASG]               \
     __asm vpsubw     ymm1, ymm3, ymm1                                          \
     __asm vmovdqu    ymm3, ymmword ptr [YuvConstants + KUVBIASB]               \
-    __asm vpsubw     ymm0, ymm3, ymm0                                          \
-    /* Step 2: Find Y contribution to 16 R,G,B values */                       \
+    __asm vpsubw     ymm0, ymm3, ymm0 /* Step 2: Find Y contribution to 16 R,G,B values */                       \
     __asm vpmulhuw   ymm4, ymm4, ymmword ptr [YuvConstants + KYTORGB]          \
-    __asm vpaddsw    ymm0, ymm0, ymm4           /* B += Y */                   \
-    __asm vpaddsw    ymm1, ymm1, ymm4           /* G += Y */                   \
-    __asm vpaddsw    ymm2, ymm2, ymm4           /* R += Y */                   \
+    __asm vpaddsw    ymm0, ymm0, ymm4 /* B += Y */                   \
+    __asm vpaddsw    ymm1, ymm1, ymm4 /* G += Y */                   \
+    __asm vpaddsw    ymm2, ymm2, ymm4 /* R += Y */                   \
     __asm vpsraw     ymm0, ymm0, 6                                             \
     __asm vpsraw     ymm1, ymm1, 6                                             \
     __asm vpsraw     ymm2, ymm2, 6                                             \
-    __asm vpackuswb  ymm0, ymm0, ymm0           /* B */                        \
-    __asm vpackuswb  ymm1, ymm1, ymm1           /* G */                        \
-    __asm vpackuswb  ymm2, ymm2, ymm2           /* R */                        \
+    __asm vpackuswb  ymm0, ymm0, ymm0 /* B */                        \
+    __asm vpackuswb  ymm1, ymm1, ymm1 /* G */                        \
+    __asm vpackuswb  ymm2, ymm2, ymm2 /* R */                  \
   }
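The YUVTORGB_AVX2 macro applies the two steps its comments describe: chroma contributions via `vpmaddubsw` against per-channel constants, then the luma contribution via `vpmulhuw` before shifting and saturating. A per-pixel scalar model of that arithmetic (a sketch under assumed BT.601 limited-range coefficients; the SIMD macro instead loads whatever constants the `YuvConstants` table supplies):

```cpp
#include <cstdint>
#include <algorithm>

// Clamp an intermediate value to a byte, like vpackuswb saturation.
static uint8_t Clamp255(int v) { return (uint8_t)std::min(std::max(v, 0), 255); }

// Scalar sketch of the YUVTORGB step for one pixel. Coefficients are the
// commonly used BT.601 limited-range values scaled by 256 (assumed here):
// 298 ~ 1.164*256, 409 ~ 1.596*256, 516 ~ 2.018*256.
void YuvToRgbPixel(uint8_t y, uint8_t u, uint8_t v,
                   uint8_t* b, uint8_t* g, uint8_t* r) {
  int y1 = (int)(y - 16) * 298;  // expand limited-range luma [16..235]
  *b = Clamp255((y1 + 516 * (u - 128) + 128) >> 8);
  *g = Clamp255((y1 - 100 * (u - 128) - 208 * (v - 128) + 128) >> 8);
  *r = Clamp255((y1 + 409 * (v - 128) + 128) >> 8);
}
```

Reference white (Y=235, U=V=128) should map to (255, 255, 255) and reference black (Y=16, U=V=128) to (0, 0, 0), matching the saturating pack at the end of the macro.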
 
 // Store 16 ARGB values.
-#define STOREARGB_AVX2 __asm {                                                 \
-    __asm vpunpcklbw ymm0, ymm0, ymm1           /* BG */                       \
+#define STOREARGB_AVX2 \
+  __asm {                                                 \
+    __asm vpunpcklbw ymm0, ymm0, ymm1 /* BG */                       \
     __asm vpermq     ymm0, ymm0, 0xd8                                          \
-    __asm vpunpcklbw ymm2, ymm2, ymm5           /* RA */                       \
+    __asm vpunpcklbw ymm2, ymm2, ymm5 /* RA */                       \
     __asm vpermq     ymm2, ymm2, 0xd8                                          \
-    __asm vpunpcklwd ymm1, ymm0, ymm2           /* BGRA first 8 pixels */      \
-    __asm vpunpckhwd ymm0, ymm0, ymm2           /* BGRA next 8 pixels */       \
+    __asm vpunpcklwd ymm1, ymm0, ymm2 /* BGRA first 8 pixels */      \
+    __asm vpunpckhwd ymm0, ymm0, ymm2 /* BGRA next 8 pixels */       \
     __asm vmovdqu    0[edx], ymm1                                              \
     __asm vmovdqu    32[edx], ymm0                                             \
-    __asm lea        edx,  [edx + 64]                                          \
-  }
+    __asm lea        edx,  [edx + 64]}
 
 // Store 16 RGBA values.
-#define STORERGBA_AVX2 __asm {                                                 \
-    __asm vpunpcklbw ymm1, ymm1, ymm2           /* GR */                       \
+#define STORERGBA_AVX2 \
+  __asm {                                                 \
+    __asm vpunpcklbw ymm1, ymm1, ymm2 /* GR */                       \
     __asm vpermq     ymm1, ymm1, 0xd8                                          \
-    __asm vpunpcklbw ymm2, ymm5, ymm0           /* AB */                       \
+    __asm vpunpcklbw ymm2, ymm5, ymm0 /* AB */                       \
     __asm vpermq     ymm2, ymm2, 0xd8                                          \
-    __asm vpunpcklwd ymm0, ymm2, ymm1           /* ABGR first 8 pixels */      \
-    __asm vpunpckhwd ymm1, ymm2, ymm1           /* ABGR next 8 pixels */       \
+    __asm vpunpcklwd ymm0, ymm2, ymm1 /* ABGR first 8 pixels */      \
+    __asm vpunpckhwd ymm1, ymm2, ymm1 /* ABGR next 8 pixels */       \
     __asm vmovdqu    [edx], ymm0                                               \
     __asm vmovdqu    [edx + 32], ymm1                                          \
-    __asm lea        edx,  [edx + 64]                                          \
-  }
+    __asm lea        edx,  [edx + 64]}
 
 #ifdef HAS_I422TOARGBROW_AVX2
 // 16 pixels
 // 8 UV values upsampled to 16 UV, mixed with 16 Y producing 16 ARGB (64 bytes).
-__declspec(naked)
-void I422ToARGBRow_AVX2(const uint8* y_buf,
-                        const uint8* u_buf,
-                        const uint8* v_buf,
-                        uint8* dst_argb,
-                        const struct YuvConstants* yuvconstants,
-                        int width) {
+__declspec(naked) void I422ToARGBRow_AVX2(
+    const uint8_t* y_buf,
+    const uint8_t* u_buf,
+    const uint8_t* v_buf,
+    uint8_t* dst_argb,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       esi
     push       edi
     push       ebx
-    mov        eax, [esp + 12 + 4]   // Y
-    mov        esi, [esp + 12 + 8]   // U
+    mov        eax, [esp + 12 + 4]  // Y
+    mov        esi, [esp + 12 + 8]  // U
     mov        edi, [esp + 12 + 12]  // V
     mov        edx, [esp + 12 + 16]  // argb
     mov        ebx, [esp + 12 + 20]  // yuvconstants
     mov        ecx, [esp + 12 + 24]  // width
     sub        edi, esi
-    vpcmpeqb   ymm5, ymm5, ymm5     // generate 0xffffffffffffffff for alpha
+    vpcmpeqb   ymm5, ymm5, ymm5  // generate 0xffffffffffffffff for alpha
 
  convertloop:
     READYUV422_AVX2
@@ -2119,21 +2104,21 @@
 #ifdef HAS_I422ALPHATOARGBROW_AVX2
 // 16 pixels
 // 8 UV values upsampled to 16 UV, mixed with 16 Y and 16 A producing 16 ARGB.
-__declspec(naked)
-void I422AlphaToARGBRow_AVX2(const uint8* y_buf,
-                             const uint8* u_buf,
-                             const uint8* v_buf,
-                             const uint8* a_buf,
-                             uint8* dst_argb,
-                             const struct YuvConstants* yuvconstants,
-                             int width) {
+__declspec(naked) void I422AlphaToARGBRow_AVX2(
+    const uint8_t* y_buf,
+    const uint8_t* u_buf,
+    const uint8_t* v_buf,
+    const uint8_t* a_buf,
+    uint8_t* dst_argb,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       esi
     push       edi
     push       ebx
     push       ebp
-    mov        eax, [esp + 16 + 4]   // Y
-    mov        esi, [esp + 16 + 8]   // U
+    mov        eax, [esp + 16 + 4]  // Y
+    mov        esi, [esp + 16 + 8]  // U
     mov        edi, [esp + 16 + 12]  // V
     mov        ebp, [esp + 16 + 16]  // A
     mov        edx, [esp + 16 + 20]  // argb
@@ -2162,25 +2147,25 @@
 #ifdef HAS_I444TOARGBROW_AVX2
 // 16 pixels
 // 16 UV values with 16 Y producing 16 ARGB (64 bytes).
-__declspec(naked)
-void I444ToARGBRow_AVX2(const uint8* y_buf,
-                        const uint8* u_buf,
-                        const uint8* v_buf,
-                        uint8* dst_argb,
-                        const struct YuvConstants* yuvconstants,
-                        int width) {
+__declspec(naked) void I444ToARGBRow_AVX2(
+    const uint8_t* y_buf,
+    const uint8_t* u_buf,
+    const uint8_t* v_buf,
+    uint8_t* dst_argb,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       esi
     push       edi
     push       ebx
-    mov        eax, [esp + 12 + 4]   // Y
-    mov        esi, [esp + 12 + 8]   // U
+    mov        eax, [esp + 12 + 4]  // Y
+    mov        esi, [esp + 12 + 8]  // U
     mov        edi, [esp + 12 + 12]  // V
     mov        edx, [esp + 12 + 16]  // argb
     mov        ebx, [esp + 12 + 20]  // yuvconstants
     mov        ecx, [esp + 12 + 24]  // width
     sub        edi, esi
-    vpcmpeqb   ymm5, ymm5, ymm5     // generate 0xffffffffffffffff for alpha
+    vpcmpeqb   ymm5, ymm5, ymm5  // generate 0xffffffffffffffff for alpha
  convertloop:
     READYUV444_AVX2
     YUVTORGB_AVX2(ebx)
@@ -2198,64 +2183,24 @@
 }
 #endif  // HAS_I444TOARGBROW_AVX2
 
-#ifdef HAS_I411TOARGBROW_AVX2
-// 16 pixels
-// 4 UV values upsampled to 16 UV, mixed with 16 Y producing 16 ARGB (64 bytes).
-__declspec(naked)
-void I411ToARGBRow_AVX2(const uint8* y_buf,
-                        const uint8* u_buf,
-                        const uint8* v_buf,
-                        uint8* dst_argb,
-                        const struct YuvConstants* yuvconstants,
-                        int width) {
-  __asm {
-    push       esi
-    push       edi
-    push       ebx
-    mov        eax, [esp + 12 + 4]   // Y
-    mov        esi, [esp + 12 + 8]   // U
-    mov        edi, [esp + 12 + 12]  // V
-    mov        edx, [esp + 12 + 16]  // abgr
-    mov        ebx, [esp + 12 + 20]  // yuvconstants
-    mov        ecx, [esp + 12 + 24]  // width
-    sub        edi, esi
-    vpcmpeqb   ymm5, ymm5, ymm5     // generate 0xffffffffffffffff for alpha
-
- convertloop:
-    READYUV411_AVX2
-    YUVTORGB_AVX2(ebx)
-    STOREARGB_AVX2
-
-    sub        ecx, 16
-    jg         convertloop
-
-    pop        ebx
-    pop        edi
-    pop        esi
-    vzeroupper
-    ret
-  }
-}
-#endif  // HAS_I411TOARGBROW_AVX2
-
 #ifdef HAS_NV12TOARGBROW_AVX2
 // 16 pixels.
 // 8 UV values upsampled to 16 UV, mixed with 16 Y producing 16 ARGB (64 bytes).
-__declspec(naked)
-void NV12ToARGBRow_AVX2(const uint8* y_buf,
-                        const uint8* uv_buf,
-                        uint8* dst_argb,
-                        const struct YuvConstants* yuvconstants,
-                        int width) {
+__declspec(naked) void NV12ToARGBRow_AVX2(
+    const uint8_t* y_buf,
+    const uint8_t* uv_buf,
+    uint8_t* dst_argb,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       esi
     push       ebx
-    mov        eax, [esp + 8 + 4]   // Y
-    mov        esi, [esp + 8 + 8]   // UV
+    mov        eax, [esp + 8 + 4]  // Y
+    mov        esi, [esp + 8 + 8]  // UV
     mov        edx, [esp + 8 + 12]  // argb
     mov        ebx, [esp + 8 + 16]  // yuvconstants
     mov        ecx, [esp + 8 + 20]  // width
-    vpcmpeqb   ymm5, ymm5, ymm5     // generate 0xffffffffffffffff for alpha
+    vpcmpeqb   ymm5, ymm5, ymm5  // generate 0xffffffffffffffff for alpha
 
  convertloop:
     READNV12_AVX2
@@ -2276,21 +2221,21 @@
 #ifdef HAS_NV21TOARGBROW_AVX2
 // 16 pixels.
 // 8 VU values upsampled to 16 UV, mixed with 16 Y producing 16 ARGB (64 bytes).
-__declspec(naked)
-void NV21ToARGBRow_AVX2(const uint8* y_buf,
-                        const uint8* vu_buf,
-                        uint8* dst_argb,
-                        const struct YuvConstants* yuvconstants,
-                        int width) {
+__declspec(naked) void NV21ToARGBRow_AVX2(
+    const uint8_t* y_buf,
+    const uint8_t* vu_buf,
+    uint8_t* dst_argb,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       esi
     push       ebx
-    mov        eax, [esp + 8 + 4]   // Y
-    mov        esi, [esp + 8 + 8]   // VU
+    mov        eax, [esp + 8 + 4]  // Y
+    mov        esi, [esp + 8 + 8]  // VU
     mov        edx, [esp + 8 + 12]  // argb
     mov        ebx, [esp + 8 + 16]  // yuvconstants
     mov        ecx, [esp + 8 + 20]  // width
-    vpcmpeqb   ymm5, ymm5, ymm5     // generate 0xffffffffffffffff for alpha
+    vpcmpeqb   ymm5, ymm5, ymm5  // generate 0xffffffffffffffff for alpha
 
  convertloop:
     READNV21_AVX2
@@ -2311,18 +2256,18 @@
 #ifdef HAS_YUY2TOARGBROW_AVX2
 // 16 pixels.
 // 8 YUY2 values with 16 Y and 8 UV producing 16 ARGB (64 bytes).
-__declspec(naked)
-void YUY2ToARGBRow_AVX2(const uint8* src_yuy2,
-                        uint8* dst_argb,
-                        const struct YuvConstants* yuvconstants,
-                        int width) {
+__declspec(naked) void YUY2ToARGBRow_AVX2(
+    const uint8_t* src_yuy2,
+    uint8_t* dst_argb,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       ebx
-    mov        eax, [esp + 4 + 4]   // yuy2
-    mov        edx, [esp + 4 + 8]   // argb
+    mov        eax, [esp + 4 + 4]  // yuy2
+    mov        edx, [esp + 4 + 8]  // argb
     mov        ebx, [esp + 4 + 12]  // yuvconstants
     mov        ecx, [esp + 4 + 16]  // width
-    vpcmpeqb   ymm5, ymm5, ymm5     // generate 0xffffffffffffffff for alpha
+    vpcmpeqb   ymm5, ymm5, ymm5  // generate 0xffffffffffffffff for alpha
 
  convertloop:
     READYUY2_AVX2
@@ -2342,18 +2287,18 @@
 #ifdef HAS_UYVYTOARGBROW_AVX2
 // 16 pixels.
 // 8 UYVY values with 16 Y and 8 UV producing 16 ARGB (64 bytes).
-__declspec(naked)
-void UYVYToARGBRow_AVX2(const uint8* src_uyvy,
-                        uint8* dst_argb,
-                        const struct YuvConstants* yuvconstants,
-                        int width) {
+__declspec(naked) void UYVYToARGBRow_AVX2(
+    const uint8_t* src_uyvy,
+    uint8_t* dst_argb,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       ebx
-    mov        eax, [esp + 4 + 4]   // uyvy
-    mov        edx, [esp + 4 + 8]   // argb
+    mov        eax, [esp + 4 + 4]  // uyvy
+    mov        edx, [esp + 4 + 8]  // argb
     mov        ebx, [esp + 4 + 12]  // yuvconstants
     mov        ecx, [esp + 4 + 16]  // width
-    vpcmpeqb   ymm5, ymm5, ymm5     // generate 0xffffffffffffffff for alpha
+    vpcmpeqb   ymm5, ymm5, ymm5  // generate 0xffffffffffffffff for alpha
 
  convertloop:
     READUYVY_AVX2
@@ -2373,25 +2318,25 @@
 #ifdef HAS_I422TORGBAROW_AVX2
 // 16 pixels
 // 8 UV values upsampled to 16 UV, mixed with 16 Y producing 16 RGBA (64 bytes).
-__declspec(naked)
-void I422ToRGBARow_AVX2(const uint8* y_buf,
-                        const uint8* u_buf,
-                        const uint8* v_buf,
-                        uint8* dst_argb,
-                        const struct YuvConstants* yuvconstants,
-                        int width) {
+__declspec(naked) void I422ToRGBARow_AVX2(
+    const uint8_t* y_buf,
+    const uint8_t* u_buf,
+    const uint8_t* v_buf,
+    uint8_t* dst_argb,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       esi
     push       edi
     push       ebx
-    mov        eax, [esp + 12 + 4]   // Y
-    mov        esi, [esp + 12 + 8]   // U
+    mov        eax, [esp + 12 + 4]  // Y
+    mov        esi, [esp + 12 + 8]  // U
     mov        edi, [esp + 12 + 12]  // V
     mov        edx, [esp + 12 + 16]  // abgr
     mov        ebx, [esp + 12 + 20]  // yuvconstants
     mov        ecx, [esp + 12 + 24]  // width
     sub        edi, esi
-    vpcmpeqb   ymm5, ymm5, ymm5     // generate 0xffffffffffffffff for alpha
+    vpcmpeqb   ymm5, ymm5, ymm5  // generate 0xffffffffffffffff for alpha
 
  convertloop:
     READYUV422_AVX2
@@ -2415,100 +2360,83 @@
 // Allows a conversion with half size scaling.
 
 // Read 8 UV from 444.
-#define READYUV444 __asm {                                                     \
+#define READYUV444 \
+  __asm {                                                     \
     __asm movq       xmm0, qword ptr [esi] /* U */                             \
     __asm movq       xmm1, qword ptr [esi + edi] /* V */                       \
     __asm lea        esi,  [esi + 8]                                           \
-    __asm punpcklbw  xmm0, xmm1           /* UV */                             \
+    __asm punpcklbw  xmm0, xmm1 /* UV */                             \
     __asm movq       xmm4, qword ptr [eax]                                     \
     __asm punpcklbw  xmm4, xmm4                                                \
-    __asm lea        eax, [eax + 8]                                            \
-  }
+    __asm lea        eax, [eax + 8]}
 
 // Read 4 UV from 422, upsample to 8 UV.
-#define READYUV422 __asm {                                                     \
-    __asm movd       xmm0, [esi]          /* U */                              \
-    __asm movd       xmm1, [esi + edi]    /* V */                              \
+#define READYUV422 \
+  __asm {                                                     \
+    __asm movd       xmm0, [esi] /* U */                              \
+    __asm movd       xmm1, [esi + edi] /* V */                              \
     __asm lea        esi,  [esi + 4]                                           \
-    __asm punpcklbw  xmm0, xmm1           /* UV */                             \
-    __asm punpcklwd  xmm0, xmm0           /* UVUV (upsample) */                \
+    __asm punpcklbw  xmm0, xmm1 /* UV */                             \
+    __asm punpcklwd  xmm0, xmm0 /* UVUV (upsample) */                \
     __asm movq       xmm4, qword ptr [eax]                                     \
     __asm punpcklbw  xmm4, xmm4                                                \
-    __asm lea        eax, [eax + 8]                                            \
-  }
+    __asm lea        eax, [eax + 8]}
 
 // Read 4 UV from 422, upsample to 8 UV.  With 8 Alpha.
-#define READYUVA422 __asm {                                                    \
-    __asm movd       xmm0, [esi]          /* U */                              \
-    __asm movd       xmm1, [esi + edi]    /* V */                              \
+#define READYUVA422 \
+  __asm {                                                    \
+    __asm movd       xmm0, [esi] /* U */                              \
+    __asm movd       xmm1, [esi + edi] /* V */                              \
     __asm lea        esi,  [esi + 4]                                           \
-    __asm punpcklbw  xmm0, xmm1           /* UV */                             \
-    __asm punpcklwd  xmm0, xmm0           /* UVUV (upsample) */                \
-    __asm movq       xmm4, qword ptr [eax]   /* Y */                           \
+    __asm punpcklbw  xmm0, xmm1 /* UV */                             \
+    __asm punpcklwd  xmm0, xmm0 /* UVUV (upsample) */                \
+    __asm movq       xmm4, qword ptr [eax] /* Y */                           \
     __asm punpcklbw  xmm4, xmm4                                                \
     __asm lea        eax, [eax + 8]                                            \
-    __asm movq       xmm5, qword ptr [ebp]   /* A */                           \
-    __asm lea        ebp, [ebp + 8]                                            \
-  }
-
-// Read 2 UV from 411, upsample to 8 UV.
-// drmemory fails with memory fault if pinsrw used. libyuv bug: 525
-//  __asm pinsrw     xmm0, [esi], 0        /* U */
-//  __asm pinsrw     xmm1, [esi + edi], 0  /* V */
-#define READYUV411_EBX __asm {                                                 \
-    __asm movzx      ebx, word ptr [esi]        /* U */                        \
-    __asm movd       xmm0, ebx                                                 \
-    __asm movzx      ebx, word ptr [esi + edi]  /* V */                        \
-    __asm movd       xmm1, ebx                                                 \
-    __asm lea        esi,  [esi + 2]                                           \
-    __asm punpcklbw  xmm0, xmm1            /* UV */                            \
-    __asm punpcklwd  xmm0, xmm0            /* UVUV (upsample) */               \
-    __asm punpckldq  xmm0, xmm0            /* UVUVUVUV (upsample) */           \
-    __asm movq       xmm4, qword ptr [eax]                                     \
-    __asm punpcklbw  xmm4, xmm4                                                \
-    __asm lea        eax, [eax + 8]                                            \
-  }
+    __asm movq       xmm5, qword ptr [ebp] /* A */                           \
+    __asm lea        ebp, [ebp + 8]}
 
 // Read 4 UV from NV12, upsample to 8 UV.
-#define READNV12 __asm {                                                       \
+#define READNV12 \
+  __asm {                                                       \
     __asm movq       xmm0, qword ptr [esi] /* UV */                            \
     __asm lea        esi,  [esi + 8]                                           \
-    __asm punpcklwd  xmm0, xmm0           /* UVUV (upsample) */                \
+    __asm punpcklwd  xmm0, xmm0 /* UVUV (upsample) */                \
     __asm movq       xmm4, qword ptr [eax]                                     \
     __asm punpcklbw  xmm4, xmm4                                                \
-    __asm lea        eax, [eax + 8]                                            \
-  }
+    __asm lea        eax, [eax + 8]}
 
 // Read 4 VU from NV21, upsample to 8 UV.
-#define READNV21 __asm {                                                       \
+#define READNV21 \
+  __asm {                                                       \
     __asm movq       xmm0, qword ptr [esi] /* UV */                            \
     __asm lea        esi,  [esi + 8]                                           \
     __asm pshufb     xmm0, xmmword ptr kShuffleNV21                            \
     __asm movq       xmm4, qword ptr [eax]                                     \
     __asm punpcklbw  xmm4, xmm4                                                \
-    __asm lea        eax, [eax + 8]                                            \
-  }
+    __asm lea        eax, [eax + 8]}
 
 // Read 4 YUY2 with 8 Y and upsample 4 UV to 8 UV.
-#define READYUY2 __asm {                                                       \
-    __asm movdqu     xmm4, [eax]          /* YUY2 */                           \
+#define READYUY2 \
+  __asm {                                                       \
+    __asm movdqu     xmm4, [eax] /* YUY2 */                           \
     __asm pshufb     xmm4, xmmword ptr kShuffleYUY2Y                           \
-    __asm movdqu     xmm0, [eax]          /* UV */                             \
+    __asm movdqu     xmm0, [eax] /* UV */                             \
     __asm pshufb     xmm0, xmmword ptr kShuffleYUY2UV                          \
-    __asm lea        eax, [eax + 16]                                           \
-  }
+    __asm lea        eax, [eax + 16]}
 
 // Read 4 UYVY with 8 Y and upsample 4 UV to 8 UV.
-#define READUYVY __asm {                                                       \
-    __asm movdqu     xmm4, [eax]          /* UYVY */                           \
+#define READUYVY \
+  __asm {                                                       \
+    __asm movdqu     xmm4, [eax] /* UYVY */                           \
     __asm pshufb     xmm4, xmmword ptr kShuffleUYVYY                           \
-    __asm movdqu     xmm0, [eax]          /* UV */                             \
+    __asm movdqu     xmm0, [eax] /* UV */                             \
     __asm pshufb     xmm0, xmmword ptr kShuffleUYVYUV                          \
-    __asm lea        eax, [eax + 16]                                           \
-  }
+    __asm lea        eax, [eax + 16]}
 
 // Convert 8 pixels: 8 UV and 8 Y.
-#define YUVTORGB(YuvConstants) __asm {                                         \
+#define YUVTORGB(YuvConstants) \
+  __asm {                                         \
     __asm movdqa     xmm1, xmm0                                                \
     __asm movdqa     xmm2, xmm0                                                \
     __asm movdqa     xmm3, xmm0                                                \
@@ -2522,129 +2450,125 @@
     __asm pmaddubsw  xmm3, xmmword ptr [YuvConstants + KUVTOR]                 \
     __asm psubw      xmm2, xmm3                                                \
     __asm pmulhuw    xmm4, xmmword ptr [YuvConstants + KYTORGB]                \
-    __asm paddsw     xmm0, xmm4           /* B += Y */                         \
-    __asm paddsw     xmm1, xmm4           /* G += Y */                         \
-    __asm paddsw     xmm2, xmm4           /* R += Y */                         \
+    __asm paddsw     xmm0, xmm4 /* B += Y */                         \
+    __asm paddsw     xmm1, xmm4 /* G += Y */                         \
+    __asm paddsw     xmm2, xmm4 /* R += Y */                         \
     __asm psraw      xmm0, 6                                                   \
     __asm psraw      xmm1, 6                                                   \
     __asm psraw      xmm2, 6                                                   \
-    __asm packuswb   xmm0, xmm0           /* B */                              \
-    __asm packuswb   xmm1, xmm1           /* G */                              \
-    __asm packuswb   xmm2, xmm2           /* R */                              \
+    __asm packuswb   xmm0, xmm0 /* B */                              \
+    __asm packuswb   xmm1, xmm1 /* G */                              \
+    __asm packuswb   xmm2, xmm2 /* R */             \
   }
 
 // Store 8 ARGB values.
-#define STOREARGB __asm {                                                      \
-    __asm punpcklbw  xmm0, xmm1           /* BG */                             \
-    __asm punpcklbw  xmm2, xmm5           /* RA */                             \
+#define STOREARGB \
+  __asm {                                                      \
+    __asm punpcklbw  xmm0, xmm1 /* BG */                             \
+    __asm punpcklbw  xmm2, xmm5 /* RA */                             \
     __asm movdqa     xmm1, xmm0                                                \
-    __asm punpcklwd  xmm0, xmm2           /* BGRA first 4 pixels */            \
-    __asm punpckhwd  xmm1, xmm2           /* BGRA next 4 pixels */             \
+    __asm punpcklwd  xmm0, xmm2 /* BGRA first 4 pixels */            \
+    __asm punpckhwd  xmm1, xmm2 /* BGRA next 4 pixels */             \
     __asm movdqu     0[edx], xmm0                                              \
     __asm movdqu     16[edx], xmm1                                             \
-    __asm lea        edx,  [edx + 32]                                          \
-  }
+    __asm lea        edx,  [edx + 32]}
 
 // Store 8 BGRA values.
-#define STOREBGRA __asm {                                                      \
-    __asm pcmpeqb    xmm5, xmm5           /* generate 0xffffffff for alpha */  \
-    __asm punpcklbw  xmm1, xmm0           /* GB */                             \
-    __asm punpcklbw  xmm5, xmm2           /* AR */                             \
+#define STOREBGRA \
+  __asm {                                                      \
+    __asm pcmpeqb    xmm5, xmm5 /* generate 0xffffffff for alpha */  \
+    __asm punpcklbw  xmm1, xmm0 /* GB */                             \
+    __asm punpcklbw  xmm5, xmm2 /* AR */                             \
     __asm movdqa     xmm0, xmm5                                                \
-    __asm punpcklwd  xmm5, xmm1           /* BGRA first 4 pixels */            \
-    __asm punpckhwd  xmm0, xmm1           /* BGRA next 4 pixels */             \
+    __asm punpcklwd  xmm5, xmm1 /* BGRA first 4 pixels */            \
+    __asm punpckhwd  xmm0, xmm1 /* BGRA next 4 pixels */             \
     __asm movdqu     0[edx], xmm5                                              \
     __asm movdqu     16[edx], xmm0                                             \
-    __asm lea        edx,  [edx + 32]                                          \
-  }
+    __asm lea        edx,  [edx + 32]}
 
 // Store 8 RGBA values.
-#define STORERGBA __asm {                                                      \
-    __asm pcmpeqb    xmm5, xmm5           /* generate 0xffffffff for alpha */  \
-    __asm punpcklbw  xmm1, xmm2           /* GR */                             \
-    __asm punpcklbw  xmm5, xmm0           /* AB */                             \
+#define STORERGBA \
+  __asm {                                                      \
+    __asm pcmpeqb    xmm5, xmm5 /* generate 0xffffffff for alpha */  \
+    __asm punpcklbw  xmm1, xmm2 /* GR */                             \
+    __asm punpcklbw  xmm5, xmm0 /* AB */                             \
     __asm movdqa     xmm0, xmm5                                                \
-    __asm punpcklwd  xmm5, xmm1           /* RGBA first 4 pixels */            \
-    __asm punpckhwd  xmm0, xmm1           /* RGBA next 4 pixels */             \
+    __asm punpcklwd  xmm5, xmm1 /* RGBA first 4 pixels */            \
+    __asm punpckhwd  xmm0, xmm1 /* RGBA next 4 pixels */             \
     __asm movdqu     0[edx], xmm5                                              \
     __asm movdqu     16[edx], xmm0                                             \
-    __asm lea        edx,  [edx + 32]                                          \
-  }
+    __asm lea        edx,  [edx + 32]}
 
 // Store 8 RGB24 values.
-#define STORERGB24 __asm {                                                     \
-    /* Weave into RRGB */                                                      \
-    __asm punpcklbw  xmm0, xmm1           /* BG */                             \
-    __asm punpcklbw  xmm2, xmm2           /* RR */                             \
+#define STORERGB24 \
+  __asm {/* Weave into RRGB */                                                      \
+    __asm punpcklbw  xmm0, xmm1 /* BG */                             \
+    __asm punpcklbw  xmm2, xmm2 /* RR */                             \
     __asm movdqa     xmm1, xmm0                                                \
-    __asm punpcklwd  xmm0, xmm2           /* BGRR first 4 pixels */            \
-    __asm punpckhwd  xmm1, xmm2           /* BGRR next 4 pixels */             \
-    /* RRGB -> RGB24 */                                                        \
-    __asm pshufb     xmm0, xmm5           /* Pack first 8 and last 4 bytes. */ \
-    __asm pshufb     xmm1, xmm6           /* Pack first 12 bytes. */           \
-    __asm palignr    xmm1, xmm0, 12       /* last 4 bytes of xmm0 + 12 xmm1 */ \
-    __asm movq       qword ptr 0[edx], xmm0  /* First 8 bytes */               \
-    __asm movdqu     8[edx], xmm1         /* Last 16 bytes */                  \
-    __asm lea        edx,  [edx + 24]                                          \
-  }
+    __asm punpcklwd  xmm0, xmm2 /* BGRR first 4 pixels */            \
+    __asm punpckhwd  xmm1, xmm2 /* BGRR next 4 pixels */ /* RRGB -> RGB24 */                                                        \
+    __asm pshufb     xmm0, xmm5 /* Pack first 8 and last 4 bytes. */ \
+    __asm pshufb     xmm1, xmm6 /* Pack first 12 bytes. */           \
+    __asm palignr    xmm1, xmm0, 12 /* last 4 bytes of xmm0 + 12 xmm1 */ \
+    __asm movq       qword ptr 0[edx], xmm0 /* First 8 bytes */               \
+    __asm movdqu     8[edx], xmm1 /* Last 16 bytes */                  \
+    __asm lea        edx,  [edx + 24]}
 
 // Store 8 RGB565 values.
-#define STORERGB565 __asm {                                                    \
-    /* Weave into RRGB */                                                      \
-    __asm punpcklbw  xmm0, xmm1           /* BG */                             \
-    __asm punpcklbw  xmm2, xmm2           /* RR */                             \
+#define STORERGB565 \
+  __asm {/* Weave into RRGB */                                                      \
+    __asm punpcklbw  xmm0, xmm1 /* BG */                             \
+    __asm punpcklbw  xmm2, xmm2 /* RR */                             \
     __asm movdqa     xmm1, xmm0                                                \
-    __asm punpcklwd  xmm0, xmm2           /* BGRR first 4 pixels */            \
-    __asm punpckhwd  xmm1, xmm2           /* BGRR next 4 pixels */             \
-    /* RRGB -> RGB565 */                                                       \
-    __asm movdqa     xmm3, xmm0    /* B  first 4 pixels of argb */             \
-    __asm movdqa     xmm2, xmm0    /* G */                                     \
-    __asm pslld      xmm0, 8       /* R */                                     \
-    __asm psrld      xmm3, 3       /* B */                                     \
-    __asm psrld      xmm2, 5       /* G */                                     \
-    __asm psrad      xmm0, 16      /* R */                                     \
-    __asm pand       xmm3, xmm5    /* B */                                     \
-    __asm pand       xmm2, xmm6    /* G */                                     \
-    __asm pand       xmm0, xmm7    /* R */                                     \
-    __asm por        xmm3, xmm2    /* BG */                                    \
-    __asm por        xmm0, xmm3    /* BGR */                                   \
-    __asm movdqa     xmm3, xmm1    /* B  next 4 pixels of argb */              \
-    __asm movdqa     xmm2, xmm1    /* G */                                     \
-    __asm pslld      xmm1, 8       /* R */                                     \
-    __asm psrld      xmm3, 3       /* B */                                     \
-    __asm psrld      xmm2, 5       /* G */                                     \
-    __asm psrad      xmm1, 16      /* R */                                     \
-    __asm pand       xmm3, xmm5    /* B */                                     \
-    __asm pand       xmm2, xmm6    /* G */                                     \
-    __asm pand       xmm1, xmm7    /* R */                                     \
-    __asm por        xmm3, xmm2    /* BG */                                    \
-    __asm por        xmm1, xmm3    /* BGR */                                   \
+    __asm punpcklwd  xmm0, xmm2 /* BGRR first 4 pixels */            \
+    __asm punpckhwd  xmm1, xmm2 /* BGRR next 4 pixels */ /* RRGB -> RGB565 */                                                       \
+    __asm movdqa     xmm3, xmm0 /* B  first 4 pixels of argb */             \
+    __asm movdqa     xmm2, xmm0 /* G */                                     \
+    __asm pslld      xmm0, 8 /* R */                                     \
+    __asm psrld      xmm3, 3 /* B */                                     \
+    __asm psrld      xmm2, 5 /* G */                                     \
+    __asm psrad      xmm0, 16 /* R */                                     \
+    __asm pand       xmm3, xmm5 /* B */                                     \
+    __asm pand       xmm2, xmm6 /* G */                                     \
+    __asm pand       xmm0, xmm7 /* R */                                     \
+    __asm por        xmm3, xmm2 /* BG */                                    \
+    __asm por        xmm0, xmm3 /* BGR */                                   \
+    __asm movdqa     xmm3, xmm1 /* B  next 4 pixels of argb */              \
+    __asm movdqa     xmm2, xmm1 /* G */                                     \
+    __asm pslld      xmm1, 8 /* R */                                     \
+    __asm psrld      xmm3, 3 /* B */                                     \
+    __asm psrld      xmm2, 5 /* G */                                     \
+    __asm psrad      xmm1, 16 /* R */                                     \
+    __asm pand       xmm3, xmm5 /* B */                                     \
+    __asm pand       xmm2, xmm6 /* G */                                     \
+    __asm pand       xmm1, xmm7 /* R */                                     \
+    __asm por        xmm3, xmm2 /* BG */                                    \
+    __asm por        xmm1, xmm3 /* BGR */                                   \
     __asm packssdw   xmm0, xmm1                                                \
-    __asm movdqu     0[edx], xmm0  /* store 8 pixels of RGB565 */              \
-    __asm lea        edx, [edx + 16]                                           \
-  }
+    __asm movdqu     0[edx], xmm0 /* store 8 pixels of RGB565 */              \
+    __asm lea        edx, [edx + 16]}
 
 // 8 pixels.
 // 8 UV values, mixed with 8 Y producing 8 ARGB (32 bytes).
-__declspec(naked)
-void I444ToARGBRow_SSSE3(const uint8* y_buf,
-                         const uint8* u_buf,
-                         const uint8* v_buf,
-                         uint8* dst_argb,
-                         const struct YuvConstants* yuvconstants,
-                         int width) {
+__declspec(naked) void I444ToARGBRow_SSSE3(
+    const uint8_t* y_buf,
+    const uint8_t* u_buf,
+    const uint8_t* v_buf,
+    uint8_t* dst_argb,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       esi
     push       edi
     push       ebx
-    mov        eax, [esp + 12 + 4]   // Y
-    mov        esi, [esp + 12 + 8]   // U
+    mov        eax, [esp + 12 + 4]  // Y
+    mov        esi, [esp + 12 + 8]  // U
     mov        edi, [esp + 12 + 12]  // V
     mov        edx, [esp + 12 + 16]  // argb
     mov        ebx, [esp + 12 + 20]  // yuvconstants
     mov        ecx, [esp + 12 + 24]  // width
     sub        edi, esi
-    pcmpeqb    xmm5, xmm5            // generate 0xffffffff for alpha
+    pcmpeqb    xmm5, xmm5  // generate 0xffffffff for alpha
 
  convertloop:
     READYUV444
@@ -2663,19 +2587,19 @@
 
 // 8 pixels.
 // 4 UV values upsampled to 8 UV, mixed with 8 Y producing 8 RGB24 (24 bytes).
-__declspec(naked)
-void I422ToRGB24Row_SSSE3(const uint8* y_buf,
-                          const uint8* u_buf,
-                          const uint8* v_buf,
-                          uint8* dst_rgb24,
-                          const struct YuvConstants* yuvconstants,
-                          int width) {
+__declspec(naked) void I422ToRGB24Row_SSSE3(
+    const uint8_t* y_buf,
+    const uint8_t* u_buf,
+    const uint8_t* v_buf,
+    uint8_t* dst_rgb24,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       esi
     push       edi
     push       ebx
-    mov        eax, [esp + 12 + 4]   // Y
-    mov        esi, [esp + 12 + 8]   // U
+    mov        eax, [esp + 12 + 4]  // Y
+    mov        esi, [esp + 12 + 8]  // U
     mov        edi, [esp + 12 + 12]  // V
     mov        edx, [esp + 12 + 16]  // argb
     mov        ebx, [esp + 12 + 20]  // yuvconstants
@@ -2701,30 +2625,30 @@
 
 // 8 pixels
 // 4 UV values upsampled to 8 UV, mixed with 8 Y producing 8 RGB565 (16 bytes).
-__declspec(naked)
-void I422ToRGB565Row_SSSE3(const uint8* y_buf,
-                           const uint8* u_buf,
-                           const uint8* v_buf,
-                           uint8* rgb565_buf,
-                           const struct YuvConstants* yuvconstants,
-                           int width) {
+__declspec(naked) void I422ToRGB565Row_SSSE3(
+    const uint8_t* y_buf,
+    const uint8_t* u_buf,
+    const uint8_t* v_buf,
+    uint8_t* rgb565_buf,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       esi
     push       edi
     push       ebx
-    mov        eax, [esp + 12 + 4]   // Y
-    mov        esi, [esp + 12 + 8]   // U
+    mov        eax, [esp + 12 + 4]  // Y
+    mov        esi, [esp + 12 + 8]  // U
     mov        edi, [esp + 12 + 12]  // V
     mov        edx, [esp + 12 + 16]  // argb
     mov        ebx, [esp + 12 + 20]  // yuvconstants
     mov        ecx, [esp + 12 + 24]  // width
     sub        edi, esi
-    pcmpeqb    xmm5, xmm5       // generate mask 0x0000001f
+    pcmpeqb    xmm5, xmm5  // generate mask 0x0000001f
     psrld      xmm5, 27
-    pcmpeqb    xmm6, xmm6       // generate mask 0x000007e0
+    pcmpeqb    xmm6, xmm6  // generate mask 0x000007e0
     psrld      xmm6, 26
     pslld      xmm6, 5
-    pcmpeqb    xmm7, xmm7       // generate mask 0xfffff800
+    pcmpeqb    xmm7, xmm7  // generate mask 0xfffff800
     pslld      xmm7, 11
 
  convertloop:
@@ -2744,25 +2668,25 @@
 
 // 8 pixels.
 // 4 UV values upsampled to 8 UV, mixed with 8 Y producing 8 ARGB (32 bytes).
-__declspec(naked)
-void I422ToARGBRow_SSSE3(const uint8* y_buf,
-                         const uint8* u_buf,
-                         const uint8* v_buf,
-                         uint8* dst_argb,
-                         const struct YuvConstants* yuvconstants,
-                         int width) {
+__declspec(naked) void I422ToARGBRow_SSSE3(
+    const uint8_t* y_buf,
+    const uint8_t* u_buf,
+    const uint8_t* v_buf,
+    uint8_t* dst_argb,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       esi
     push       edi
     push       ebx
-    mov        eax, [esp + 12 + 4]   // Y
-    mov        esi, [esp + 12 + 8]   // U
+    mov        eax, [esp + 12 + 4]  // Y
+    mov        esi, [esp + 12 + 8]  // U
     mov        edi, [esp + 12 + 12]  // V
     mov        edx, [esp + 12 + 16]  // argb
     mov        ebx, [esp + 12 + 20]  // yuvconstants
     mov        ecx, [esp + 12 + 24]  // width
     sub        edi, esi
-    pcmpeqb    xmm5, xmm5           // generate 0xffffffff for alpha
+    pcmpeqb    xmm5, xmm5  // generate 0xffffffff for alpha
 
  convertloop:
     READYUV422
@@ -2781,21 +2705,21 @@
 
 // 8 pixels.
 // 4 UV values upsampled to 8 UV, mixed with 8 Y and 8 A producing 8 ARGB.
-__declspec(naked)
-void I422AlphaToARGBRow_SSSE3(const uint8* y_buf,
-                              const uint8* u_buf,
-                              const uint8* v_buf,
-                              const uint8* a_buf,
-                              uint8* dst_argb,
-                              const struct YuvConstants* yuvconstants,
-                              int width) {
+__declspec(naked) void I422AlphaToARGBRow_SSSE3(
+    const uint8_t* y_buf,
+    const uint8_t* u_buf,
+    const uint8_t* v_buf,
+    const uint8_t* a_buf,
+    uint8_t* dst_argb,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       esi
     push       edi
     push       ebx
     push       ebp
-    mov        eax, [esp + 16 + 4]   // Y
-    mov        esi, [esp + 16 + 8]   // U
+    mov        eax, [esp + 16 + 4]  // Y
+    mov        esi, [esp + 16 + 8]  // U
     mov        edi, [esp + 16 + 12]  // V
     mov        ebp, [esp + 16 + 16]  // A
     mov        edx, [esp + 16 + 20]  // argb
@@ -2820,62 +2744,22 @@
 }
 
 // 8 pixels.
-// 2 UV values upsampled to 8 UV, mixed with 8 Y producing 8 ARGB (32 bytes).
-// Similar to I420 but duplicate UV once more.
-__declspec(naked)
-void I411ToARGBRow_SSSE3(const uint8* y_buf,
-                         const uint8* u_buf,
-                         const uint8* v_buf,
-                         uint8* dst_argb,
-                         const struct YuvConstants* yuvconstants,
-                         int width) {
-  __asm {
-    push       esi
-    push       edi
-    push       ebx
-    push       ebp
-    mov        eax, [esp + 16 + 4]   // Y
-    mov        esi, [esp + 16 + 8]   // U
-    mov        edi, [esp + 16 + 12]  // V
-    mov        edx, [esp + 16 + 16]  // abgr
-    mov        ebp, [esp + 16 + 20]  // yuvconstants
-    mov        ecx, [esp + 16 + 24]  // width
-    sub        edi, esi
-    pcmpeqb    xmm5, xmm5            // generate 0xffffffff for alpha
-
- convertloop:
-    READYUV411_EBX
-    YUVTORGB(ebp)
-    STOREARGB
-
-    sub        ecx, 8
-    jg         convertloop
-
-    pop        ebp
-    pop        ebx
-    pop        edi
-    pop        esi
-    ret
-  }
-}
-
-// 8 pixels.
 // 4 UV values upsampled to 8 UV, mixed with 8 Y producing 8 ARGB (32 bytes).
-__declspec(naked)
-void NV12ToARGBRow_SSSE3(const uint8* y_buf,
-                         const uint8* uv_buf,
-                         uint8* dst_argb,
-                         const struct YuvConstants* yuvconstants,
-                         int width) {
+__declspec(naked) void NV12ToARGBRow_SSSE3(
+    const uint8_t* y_buf,
+    const uint8_t* uv_buf,
+    uint8_t* dst_argb,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       esi
     push       ebx
-    mov        eax, [esp + 8 + 4]   // Y
-    mov        esi, [esp + 8 + 8]   // UV
+    mov        eax, [esp + 8 + 4]  // Y
+    mov        esi, [esp + 8 + 8]  // UV
     mov        edx, [esp + 8 + 12]  // argb
     mov        ebx, [esp + 8 + 16]  // yuvconstants
     mov        ecx, [esp + 8 + 20]  // width
-    pcmpeqb    xmm5, xmm5           // generate 0xffffffff for alpha
+    pcmpeqb    xmm5, xmm5  // generate 0xffffffff for alpha
 
  convertloop:
     READNV12
@@ -2893,21 +2777,21 @@
 
 // 8 pixels.
 // 4 UV values upsampled to 8 UV, mixed with 8 Y producing 8 ARGB (32 bytes).
-__declspec(naked)
-void NV21ToARGBRow_SSSE3(const uint8* y_buf,
-                         const uint8* vu_buf,
-                         uint8* dst_argb,
-                         const struct YuvConstants* yuvconstants,
-                         int width) {
+__declspec(naked) void NV21ToARGBRow_SSSE3(
+    const uint8_t* y_buf,
+    const uint8_t* vu_buf,
+    uint8_t* dst_argb,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       esi
     push       ebx
-    mov        eax, [esp + 8 + 4]   // Y
-    mov        esi, [esp + 8 + 8]   // VU
+    mov        eax, [esp + 8 + 4]  // Y
+    mov        esi, [esp + 8 + 8]  // VU
     mov        edx, [esp + 8 + 12]  // argb
     mov        ebx, [esp + 8 + 16]  // yuvconstants
     mov        ecx, [esp + 8 + 20]  // width
-    pcmpeqb    xmm5, xmm5           // generate 0xffffffff for alpha
+    pcmpeqb    xmm5, xmm5  // generate 0xffffffff for alpha
 
  convertloop:
     READNV21
@@ -2925,18 +2809,18 @@
 
 // 8 pixels.
 // 4 YUY2 values with 8 Y and 4 UV producing 8 ARGB (32 bytes).
-__declspec(naked)
-void YUY2ToARGBRow_SSSE3(const uint8* src_yuy2,
-                         uint8* dst_argb,
-                         const struct YuvConstants* yuvconstants,
-                         int width) {
+__declspec(naked) void YUY2ToARGBRow_SSSE3(
+    const uint8_t* src_yuy2,
+    uint8_t* dst_argb,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       ebx
-    mov        eax, [esp + 4 + 4]   // yuy2
-    mov        edx, [esp + 4 + 8]   // argb
+    mov        eax, [esp + 4 + 4]  // yuy2
+    mov        edx, [esp + 4 + 8]  // argb
     mov        ebx, [esp + 4 + 12]  // yuvconstants
     mov        ecx, [esp + 4 + 16]  // width
-    pcmpeqb    xmm5, xmm5           // generate 0xffffffff for alpha
+    pcmpeqb    xmm5, xmm5  // generate 0xffffffff for alpha
 
  convertloop:
     READYUY2
@@ -2953,18 +2837,18 @@
 
 // 8 pixels.
 // 4 UYVY values with 8 Y and 4 UV producing 8 ARGB (32 bytes).
-__declspec(naked)
-void UYVYToARGBRow_SSSE3(const uint8* src_uyvy,
-                         uint8* dst_argb,
-                         const struct YuvConstants* yuvconstants,
-                         int width) {
+__declspec(naked) void UYVYToARGBRow_SSSE3(
+    const uint8_t* src_uyvy,
+    uint8_t* dst_argb,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       ebx
-    mov        eax, [esp + 4 + 4]   // uyvy
-    mov        edx, [esp + 4 + 8]   // argb
+    mov        eax, [esp + 4 + 4]  // uyvy
+    mov        edx, [esp + 4 + 8]  // argb
     mov        ebx, [esp + 4 + 12]  // yuvconstants
     mov        ecx, [esp + 4 + 16]  // width
-    pcmpeqb    xmm5, xmm5           // generate 0xffffffff for alpha
+    pcmpeqb    xmm5, xmm5  // generate 0xffffffff for alpha
 
  convertloop:
     READUYVY
@@ -2979,19 +2863,19 @@
   }
 }
 
-__declspec(naked)
-void I422ToRGBARow_SSSE3(const uint8* y_buf,
-                         const uint8* u_buf,
-                         const uint8* v_buf,
-                         uint8* dst_rgba,
-                         const struct YuvConstants* yuvconstants,
-                         int width) {
+__declspec(naked) void I422ToRGBARow_SSSE3(
+    const uint8_t* y_buf,
+    const uint8_t* u_buf,
+    const uint8_t* v_buf,
+    uint8_t* dst_rgba,
+    const struct YuvConstants* yuvconstants,
+    int width) {
   __asm {
     push       esi
     push       edi
     push       ebx
-    mov        eax, [esp + 12 + 4]   // Y
-    mov        esi, [esp + 12 + 8]   // U
+    mov        eax, [esp + 12 + 4]  // Y
+    mov        esi, [esp + 12 + 8]  // U
     mov        edi, [esp + 12 + 12]  // V
     mov        edx, [esp + 12 + 16]  // argb
     mov        ebx, [esp + 12 + 20]  // yuvconstants
@@ -3016,39 +2900,38 @@
 
 #ifdef HAS_I400TOARGBROW_SSE2
 // 8 pixels of Y converted to 8 pixels of ARGB (32 bytes).
-__declspec(naked)
-void I400ToARGBRow_SSE2(const uint8* y_buf,
-                        uint8* rgb_buf,
-                        int width) {
+__declspec(naked) void I400ToARGBRow_SSE2(const uint8_t* y_buf,
+                                          uint8_t* rgb_buf,
+                                          int width) {
   __asm {
-    mov        eax, 0x4a354a35      // 4a35 = 18997 = round(1.164 * 64 * 256)
+    mov        eax, 0x4a354a35  // 4a35 = 18997 = round(1.164 * 64 * 256)
     movd       xmm2, eax
     pshufd     xmm2, xmm2,0
-    mov        eax, 0x04880488      // 0488 = 1160 = round(1.164 * 64 * 16)
+    mov        eax, 0x04880488  // 0488 = 1160 = round(1.164 * 64 * 16)
     movd       xmm3, eax
     pshufd     xmm3, xmm3, 0
-    pcmpeqb    xmm4, xmm4           // generate mask 0xff000000
+    pcmpeqb    xmm4, xmm4  // generate mask 0xff000000
     pslld      xmm4, 24
 
-    mov        eax, [esp + 4]       // Y
-    mov        edx, [esp + 8]       // rgb
-    mov        ecx, [esp + 12]      // width
+    mov        eax, [esp + 4]  // Y
+    mov        edx, [esp + 8]  // rgb
+    mov        ecx, [esp + 12]  // width
 
  convertloop:
-    // Step 1: Scale Y contribution to 8 G values. G = (y - 16) * 1.164
+        // Step 1: Scale Y contribution to 8 G values. G = (y - 16) * 1.164
     movq       xmm0, qword ptr [eax]
     lea        eax, [eax + 8]
-    punpcklbw  xmm0, xmm0           // Y.Y
+    punpcklbw  xmm0, xmm0  // Y.Y
     pmulhuw    xmm0, xmm2
     psubusw    xmm0, xmm3
     psrlw      xmm0, 6
-    packuswb   xmm0, xmm0           // G
+    packuswb   xmm0, xmm0        // G
 
-    // Step 2: Weave into ARGB
-    punpcklbw  xmm0, xmm0           // GG
+        // Step 2: Weave into ARGB
+    punpcklbw  xmm0, xmm0  // GG
     movdqa     xmm1, xmm0
-    punpcklwd  xmm0, xmm0           // BGRA first 4 pixels
-    punpckhwd  xmm1, xmm1           // BGRA next 4 pixels
+    punpcklwd  xmm0, xmm0  // BGRA first 4 pixels
+    punpckhwd  xmm1, xmm1  // BGRA next 4 pixels
     por        xmm0, xmm4
     por        xmm1, xmm4
     movdqu     [edx], xmm0
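The I400ToARGBRow hunks above compute G = (y - 16) * 1.164 in fixed point: `punpcklbw xmm0,xmm0` widens each luma byte to the 16-bit value y*257, `pmulhuw` multiplies by 0x4a35 (18997 = round(1.164 * 64 * 256)), `psubusw` subtracts 0x0488 (1160 = round(1.164 * 64 * 16)), then a shift by 6 and a saturating pack. A hedged scalar sketch of that arithmetic (the function name is illustrative, not libyuv API):

```c
#include <stdint.h>

/* Scalar model of the fixed-point luma scaling in I400ToARGBRow_SSE2.
 * punpcklbw xmm0,xmm0 turns byte y into the 16-bit value y*257 ("Y.Y"). */
uint8_t i400_scale_y(uint8_t y) {
  uint32_t y16 = (uint32_t)y * 257u;      /* Y duplicated into both bytes */
  uint32_t g = (y16 * 18997u) >> 16;      /* pmulhuw, 18997 = round(1.164*64*256) */
  g = (g > 1160u) ? g - 1160u : 0u;       /* psubusw, 1160 = round(1.164*64*16) */
  g >>= 6;                                /* psrlw 6 */
  return (g > 255u) ? 255u : (uint8_t)g;  /* packuswb saturates to a byte */
}
```

Each of the 8 (SSE2) or 16 (AVX2) output pixels then replicates this value into B, G and R via the unpack steps, with the 0xff000000 mask supplying opaque alpha.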
@@ -3064,41 +2947,40 @@
 #ifdef HAS_I400TOARGBROW_AVX2
 // 16 pixels of Y converted to 16 pixels of ARGB (64 bytes).
 // note: vpunpcklbw mutates and vpackuswb unmutates.
-__declspec(naked)
-void I400ToARGBRow_AVX2(const uint8* y_buf,
-                        uint8* rgb_buf,
-                        int width) {
+__declspec(naked) void I400ToARGBRow_AVX2(const uint8_t* y_buf,
+                                          uint8_t* rgb_buf,
+                                          int width) {
   __asm {
-    mov        eax, 0x4a354a35      // 4a35 = 18997 = round(1.164 * 64 * 256)
+    mov        eax, 0x4a354a35  // 4a35 = 18997 = round(1.164 * 64 * 256)
     vmovd      xmm2, eax
     vbroadcastss ymm2, xmm2
-    mov        eax, 0x04880488      // 0488 = 1160 = round(1.164 * 64 * 16)
+    mov        eax, 0x04880488  // 0488 = 1160 = round(1.164 * 64 * 16)
     vmovd      xmm3, eax
     vbroadcastss ymm3, xmm3
-    vpcmpeqb   ymm4, ymm4, ymm4     // generate mask 0xff000000
+    vpcmpeqb   ymm4, ymm4, ymm4  // generate mask 0xff000000
     vpslld     ymm4, ymm4, 24
 
-    mov        eax, [esp + 4]       // Y
-    mov        edx, [esp + 8]       // rgb
-    mov        ecx, [esp + 12]      // width
+    mov        eax, [esp + 4]  // Y
+    mov        edx, [esp + 8]  // rgb
+    mov        ecx, [esp + 12]  // width
 
  convertloop:
-    // Step 1: Scale Y contriportbution to 16 G values. G = (y - 16) * 1.164
+        // Step 1: Scale Y contriportbution to 16 G values. G = (y - 16) * 1.164
     vmovdqu    xmm0, [eax]
     lea        eax, [eax + 16]
-    vpermq     ymm0, ymm0, 0xd8           // vpunpcklbw mutates
-    vpunpcklbw ymm0, ymm0, ymm0           // Y.Y
+    vpermq     ymm0, ymm0, 0xd8  // vpunpcklbw mutates
+    vpunpcklbw ymm0, ymm0, ymm0  // Y.Y
     vpmulhuw   ymm0, ymm0, ymm2
     vpsubusw   ymm0, ymm0, ymm3
     vpsrlw     ymm0, ymm0, 6
-    vpackuswb  ymm0, ymm0, ymm0           // G.  still mutated: 3120
+    vpackuswb  ymm0, ymm0, ymm0        // G.  still mutated: 3120
 
-    // TODO(fbarchard): Weave alpha with unpack.
-    // Step 2: Weave into ARGB
-    vpunpcklbw ymm1, ymm0, ymm0           // GG - mutates
+        // TODO(fbarchard): Weave alpha with unpack.
+        // Step 2: Weave into ARGB
+    vpunpcklbw ymm1, ymm0, ymm0  // GG - mutates
     vpermq     ymm1, ymm1, 0xd8
-    vpunpcklwd ymm0, ymm1, ymm1           // GGGG first 8 pixels
-    vpunpckhwd ymm1, ymm1, ymm1           // GGGG next 8 pixels
+    vpunpcklwd ymm0, ymm1, ymm1  // GGGG first 8 pixels
+    vpunpckhwd ymm1, ymm1, ymm1  // GGGG next 8 pixels
     vpor       ymm0, ymm0, ymm4
     vpor       ymm1, ymm1, ymm4
     vmovdqu    [edx], ymm0
@@ -3114,16 +2996,16 @@
 
 #ifdef HAS_MIRRORROW_SSSE3
 // Shuffle table for reversing the bytes.
-static const uvec8 kShuffleMirror = {
-  15u, 14u, 13u, 12u, 11u, 10u, 9u, 8u, 7u, 6u, 5u, 4u, 3u, 2u, 1u, 0u
-};
+static const uvec8 kShuffleMirror = {15u, 14u, 13u, 12u, 11u, 10u, 9u, 8u,
+                                     7u,  6u,  5u,  4u,  3u,  2u,  1u, 0u};
 
 // TODO(fbarchard): Replace lea with -16 offset.
-__declspec(naked)
-void MirrorRow_SSSE3(const uint8* src, uint8* dst, int width) {
+__declspec(naked) void MirrorRow_SSSE3(const uint8_t* src,
+                                       uint8_t* dst,
+                                       int width) {
   __asm {
-    mov       eax, [esp + 4]   // src
-    mov       edx, [esp + 8]   // dst
+    mov       eax, [esp + 4]  // src
+    mov       edx, [esp + 8]  // dst
     mov       ecx, [esp + 12]  // width
     movdqa    xmm5, xmmword ptr kShuffleMirror
 
@@ -3140,11 +3022,12 @@
 #endif  // HAS_MIRRORROW_SSSE3
 
 #ifdef HAS_MIRRORROW_AVX2
-__declspec(naked)
-void MirrorRow_AVX2(const uint8* src, uint8* dst, int width) {
+__declspec(naked) void MirrorRow_AVX2(const uint8_t* src,
+                                      uint8_t* dst,
+                                      int width) {
   __asm {
-    mov       eax, [esp + 4]   // src
-    mov       edx, [esp + 8]   // dst
+    mov       eax, [esp + 4]  // src
+    mov       edx, [esp + 8]  // dst
     mov       ecx, [esp + 12]  // width
     vbroadcastf128 ymm5, xmmword ptr kShuffleMirror
 
@@ -3164,17 +3047,17 @@
 
 #ifdef HAS_MIRRORUVROW_SSSE3
 // Shuffle table for reversing the bytes of UV channels.
-static const uvec8 kShuffleMirrorUV = {
-  14u, 12u, 10u, 8u, 6u, 4u, 2u, 0u, 15u, 13u, 11u, 9u, 7u, 5u, 3u, 1u
-};
+static const uvec8 kShuffleMirrorUV = {14u, 12u, 10u, 8u, 6u, 4u, 2u, 0u,
+                                       15u, 13u, 11u, 9u, 7u, 5u, 3u, 1u};
 
-__declspec(naked)
-void MirrorUVRow_SSSE3(const uint8* src, uint8* dst_u, uint8* dst_v,
-                       int width) {
+__declspec(naked) void MirrorUVRow_SSSE3(const uint8_t* src,
+                                         uint8_t* dst_u,
+                                         uint8_t* dst_v,
+                                         int width) {
   __asm {
     push      edi
-    mov       eax, [esp + 4 + 4]   // src
-    mov       edx, [esp + 4 + 8]   // dst_u
+    mov       eax, [esp + 4 + 4]  // src
+    mov       edx, [esp + 4 + 8]  // dst_u
     mov       edi, [esp + 4 + 12]  // dst_v
     mov       ecx, [esp + 4 + 16]  // width
     movdqa    xmm1, xmmword ptr kShuffleMirrorUV
@@ -3198,11 +3081,12 @@
 #endif  // HAS_MIRRORUVROW_SSSE3
 
 #ifdef HAS_ARGBMIRRORROW_SSE2
-__declspec(naked)
-void ARGBMirrorRow_SSE2(const uint8* src, uint8* dst, int width) {
+__declspec(naked) void ARGBMirrorRow_SSE2(const uint8_t* src,
+                                          uint8_t* dst,
+                                          int width) {
   __asm {
-    mov       eax, [esp + 4]   // src
-    mov       edx, [esp + 8]   // dst
+    mov       eax, [esp + 4]  // src
+    mov       edx, [esp + 8]  // dst
     mov       ecx, [esp + 12]  // width
     lea       eax, [eax - 16 + ecx * 4]  // last 4 pixels.
 
@@ -3221,15 +3105,14 @@
 
 #ifdef HAS_ARGBMIRRORROW_AVX2
 // Shuffle table for reversing the bytes.
-static const ulvec32 kARGBShuffleMirror_AVX2 = {
-  7u, 6u, 5u, 4u, 3u, 2u, 1u, 0u
-};
+static const ulvec32 kARGBShuffleMirror_AVX2 = {7u, 6u, 5u, 4u, 3u, 2u, 1u, 0u};
 
-__declspec(naked)
-void ARGBMirrorRow_AVX2(const uint8* src, uint8* dst, int width) {
+__declspec(naked) void ARGBMirrorRow_AVX2(const uint8_t* src,
+                                          uint8_t* dst,
+                                          int width) {
   __asm {
-    mov       eax, [esp + 4]   // src
-    mov       edx, [esp + 8]   // dst
+    mov       eax, [esp + 4]  // src
+    mov       edx, [esp + 8]  // dst
     mov       ecx, [esp + 12]  // width
     vmovdqu   ymm5, ymmword ptr kARGBShuffleMirror_AVX2
 
@@ -3246,16 +3129,17 @@
 #endif  // HAS_ARGBMIRRORROW_AVX2
 
 #ifdef HAS_SPLITUVROW_SSE2
-__declspec(naked)
-void SplitUVRow_SSE2(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
-                     int width) {
+__declspec(naked) void SplitUVRow_SSE2(const uint8_t* src_uv,
+                                       uint8_t* dst_u,
+                                       uint8_t* dst_v,
+                                       int width) {
   __asm {
     push       edi
-    mov        eax, [esp + 4 + 4]    // src_uv
-    mov        edx, [esp + 4 + 8]    // dst_u
-    mov        edi, [esp + 4 + 12]   // dst_v
-    mov        ecx, [esp + 4 + 16]   // width
-    pcmpeqb    xmm5, xmm5            // generate mask 0x00ff00ff
+    mov        eax, [esp + 4 + 4]  // src_uv
+    mov        edx, [esp + 4 + 8]  // dst_u
+    mov        edi, [esp + 4 + 12]  // dst_v
+    mov        ecx, [esp + 4 + 16]  // width
+    pcmpeqb    xmm5, xmm5  // generate mask 0x00ff00ff
     psrlw      xmm5, 8
     sub        edi, edx
 
@@ -3265,10 +3149,10 @@
     lea        eax,  [eax + 32]
     movdqa     xmm2, xmm0
     movdqa     xmm3, xmm1
-    pand       xmm0, xmm5   // even bytes
+    pand       xmm0, xmm5  // even bytes
     pand       xmm1, xmm5
     packuswb   xmm0, xmm1
-    psrlw      xmm2, 8      // odd bytes
+    psrlw      xmm2, 8  // odd bytes
     psrlw      xmm3, 8
     packuswb   xmm2, xmm3
     movdqu     [edx], xmm0
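The SplitUVRow loop above separates interleaved UV bytes with a 0x00ff00ff mask (`pand` keeps the even bytes for U) and a word shift (`psrlw 8` exposes the odd bytes for V) before packing. A hedged scalar reference of the same deinterleave (function name is illustrative, not libyuv API):

```c
/* Scalar reference for SplitUVRow: even bytes go to the U plane,
 * odd bytes to the V plane. */
void split_uv_ref(const unsigned char* src_uv, unsigned char* dst_u,
                  unsigned char* dst_v, int width) {
  for (int i = 0; i < width; ++i) {
    dst_u[i] = src_uv[2 * i];      /* pand with 0x00ff00ff */
    dst_v[i] = src_uv[2 * i + 1];  /* psrlw 8 */
  }
}
```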
@@ -3285,16 +3169,17 @@
 #endif  // HAS_SPLITUVROW_SSE2
 
 #ifdef HAS_SPLITUVROW_AVX2
-__declspec(naked)
-void SplitUVRow_AVX2(const uint8* src_uv, uint8* dst_u, uint8* dst_v,
-                     int width) {
+__declspec(naked) void SplitUVRow_AVX2(const uint8_t* src_uv,
+                                       uint8_t* dst_u,
+                                       uint8_t* dst_v,
+                                       int width) {
   __asm {
     push       edi
-    mov        eax, [esp + 4 + 4]    // src_uv
-    mov        edx, [esp + 4 + 8]    // dst_u
-    mov        edi, [esp + 4 + 12]   // dst_v
-    mov        ecx, [esp + 4 + 16]   // width
-    vpcmpeqb   ymm5, ymm5, ymm5      // generate mask 0x00ff00ff
+    mov        eax, [esp + 4 + 4]  // src_uv
+    mov        edx, [esp + 4 + 8]  // dst_u
+    mov        edi, [esp + 4 + 12]  // dst_v
+    mov        ecx, [esp + 4 + 16]  // width
+    vpcmpeqb   ymm5, ymm5, ymm5  // generate mask 0x00ff00ff
     vpsrlw     ymm5, ymm5, 8
     sub        edi, edx
 
@@ -3302,9 +3187,9 @@
     vmovdqu    ymm0, [eax]
     vmovdqu    ymm1, [eax + 32]
     lea        eax,  [eax + 64]
-    vpsrlw     ymm2, ymm0, 8      // odd bytes
+    vpsrlw     ymm2, ymm0, 8  // odd bytes
     vpsrlw     ymm3, ymm1, 8
-    vpand      ymm0, ymm0, ymm5   // even bytes
+    vpand      ymm0, ymm0, ymm5  // even bytes
     vpand      ymm1, ymm1, ymm5
     vpackuswb  ymm0, ymm0, ymm1
     vpackuswb  ymm2, ymm2, ymm3
@@ -3324,24 +3209,25 @@
 #endif  // HAS_SPLITUVROW_AVX2
 
 #ifdef HAS_MERGEUVROW_SSE2
-__declspec(naked)
-void MergeUVRow_SSE2(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
-                     int width) {
+__declspec(naked) void MergeUVRow_SSE2(const uint8_t* src_u,
+                                       const uint8_t* src_v,
+                                       uint8_t* dst_uv,
+                                       int width) {
   __asm {
     push       edi
-    mov        eax, [esp + 4 + 4]    // src_u
-    mov        edx, [esp + 4 + 8]    // src_v
-    mov        edi, [esp + 4 + 12]   // dst_uv
-    mov        ecx, [esp + 4 + 16]   // width
+    mov        eax, [esp + 4 + 4]  // src_u
+    mov        edx, [esp + 4 + 8]  // src_v
+    mov        edi, [esp + 4 + 12]  // dst_uv
+    mov        ecx, [esp + 4 + 16]  // width
     sub        edx, eax
 
   convertloop:
-    movdqu     xmm0, [eax]      // read 16 U's
+    movdqu     xmm0, [eax]  // read 16 U's
     movdqu     xmm1, [eax + edx]  // and 16 V's
     lea        eax,  [eax + 16]
     movdqa     xmm2, xmm0
-    punpcklbw  xmm0, xmm1       // first 8 UV pairs
-    punpckhbw  xmm2, xmm1       // next 8 UV pairs
+    punpcklbw  xmm0, xmm1  // first 8 UV pairs
+    punpckhbw  xmm2, xmm1  // next 8 UV pairs
     movdqu     [edi], xmm0
     movdqu     [edi + 16], xmm2
     lea        edi, [edi + 32]
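MergeUVRow is the inverse of the split: `punpcklbw`/`punpckhbw` interleave bytes from the U and V planes back into UVUV order. A hedged scalar reference (name is illustrative, not libyuv API):

```c
/* Scalar reference for MergeUVRow: interleave the two planes into UVUV...
 * (what punpcklbw/punpckhbw do 8 pairs at a time). */
void merge_uv_ref(const unsigned char* src_u, const unsigned char* src_v,
                  unsigned char* dst_uv, int width) {
  for (int i = 0; i < width; ++i) {
    dst_uv[2 * i] = src_u[i];
    dst_uv[2 * i + 1] = src_v[i];
  }
}
```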
@@ -3355,24 +3241,25 @@
 #endif  //  HAS_MERGEUVROW_SSE2
 
 #ifdef HAS_MERGEUVROW_AVX2
-__declspec(naked)
-void MergeUVRow_AVX2(const uint8* src_u, const uint8* src_v, uint8* dst_uv,
-                     int width) {
+__declspec(naked) void MergeUVRow_AVX2(const uint8_t* src_u,
+                                       const uint8_t* src_v,
+                                       uint8_t* dst_uv,
+                                       int width) {
   __asm {
     push       edi
-    mov        eax, [esp + 4 + 4]    // src_u
-    mov        edx, [esp + 4 + 8]    // src_v
-    mov        edi, [esp + 4 + 12]   // dst_uv
-    mov        ecx, [esp + 4 + 16]   // width
+    mov        eax, [esp + 4 + 4]  // src_u
+    mov        edx, [esp + 4 + 8]  // src_v
+    mov        edi, [esp + 4 + 12]  // dst_uv
+    mov        ecx, [esp + 4 + 16]  // width
     sub        edx, eax
 
   convertloop:
-    vmovdqu    ymm0, [eax]           // read 32 U's
-    vmovdqu    ymm1, [eax + edx]     // and 32 V's
+    vmovdqu    ymm0, [eax]  // read 32 U's
+    vmovdqu    ymm1, [eax + edx]  // and 32 V's
     lea        eax,  [eax + 32]
-    vpunpcklbw ymm2, ymm0, ymm1      // low 16 UV pairs. mutated qqword 0,2
-    vpunpckhbw ymm0, ymm0, ymm1      // high 16 UV pairs. mutated qqword 1,3
-    vextractf128 [edi], ymm2, 0       // bytes 0..15
+    vpunpcklbw ymm2, ymm0, ymm1  // low 16 UV pairs. mutated qqword 0,2
+    vpunpckhbw ymm0, ymm0, ymm1  // high 16 UV pairs. mutated qqword 1,3
+    vextractf128 [edi], ymm2, 0  // bytes 0..15
     vextractf128 [edi + 16], ymm0, 0  // bytes 16..31
     vextractf128 [edi + 32], ymm2, 1  // bytes 32..47
     vextractf128 [edi + 48], ymm0, 1  // bytes 47..63
@@ -3388,13 +3275,14 @@
 #endif  //  HAS_MERGEUVROW_AVX2
 
 #ifdef HAS_COPYROW_SSE2
-// CopyRow copys 'count' bytes using a 16 byte load/store, 32 bytes at time.
-__declspec(naked)
-void CopyRow_SSE2(const uint8* src, uint8* dst, int count) {
+// CopyRow copys 'width' bytes using a 16 byte load/store, 32 bytes at time.
+__declspec(naked) void CopyRow_SSE2(const uint8_t* src,
+                                    uint8_t* dst,
+                                    int width) {
   __asm {
-    mov        eax, [esp + 4]   // src
-    mov        edx, [esp + 8]   // dst
-    mov        ecx, [esp + 12]  // count
+    mov        eax, [esp + 4]  // src
+    mov        edx, [esp + 8]  // dst
+    mov        ecx, [esp + 12]  // width
     test       eax, 15
     jne        convertloopu
     test       edx, 15
@@ -3426,13 +3314,14 @@
 #endif  // HAS_COPYROW_SSE2
 
 #ifdef HAS_COPYROW_AVX
-// CopyRow copys 'count' bytes using a 32 byte load/store, 64 bytes at time.
-__declspec(naked)
-void CopyRow_AVX(const uint8* src, uint8* dst, int count) {
+// CopyRow copys 'width' bytes using a 32 byte load/store, 64 bytes at time.
+__declspec(naked) void CopyRow_AVX(const uint8_t* src,
+                                   uint8_t* dst,
+                                   int width) {
   __asm {
-    mov        eax, [esp + 4]   // src
-    mov        edx, [esp + 8]   // dst
-    mov        ecx, [esp + 12]  // count
+    mov        eax, [esp + 4]  // src
+    mov        edx, [esp + 8]  // dst
+    mov        ecx, [esp + 12]  // width
 
   convertloop:
     vmovdqu    ymm0, [eax]
@@ -3451,14 +3340,15 @@
 #endif  // HAS_COPYROW_AVX
 
 // Multiple of 1.
-__declspec(naked)
-void CopyRow_ERMS(const uint8* src, uint8* dst, int count) {
+__declspec(naked) void CopyRow_ERMS(const uint8_t* src,
+                                    uint8_t* dst,
+                                    int width) {
   __asm {
     mov        eax, esi
     mov        edx, edi
-    mov        esi, [esp + 4]   // src
-    mov        edi, [esp + 8]   // dst
-    mov        ecx, [esp + 12]  // count
+    mov        esi, [esp + 4]  // src
+    mov        edi, [esp + 8]  // dst
+    mov        ecx, [esp + 12]  // width
     rep movsb
     mov        edi, edx
     mov        esi, eax
@@ -3468,15 +3358,16 @@
 
 #ifdef HAS_ARGBCOPYALPHAROW_SSE2
 // width in pixels
-__declspec(naked)
-void ARGBCopyAlphaRow_SSE2(const uint8* src, uint8* dst, int width) {
+__declspec(naked) void ARGBCopyAlphaRow_SSE2(const uint8_t* src,
+                                             uint8_t* dst,
+                                             int width) {
   __asm {
-    mov        eax, [esp + 4]   // src
-    mov        edx, [esp + 8]   // dst
-    mov        ecx, [esp + 12]  // count
-    pcmpeqb    xmm0, xmm0       // generate mask 0xff000000
+    mov        eax, [esp + 4]  // src
+    mov        edx, [esp + 8]  // dst
+    mov        ecx, [esp + 12]  // width
+    pcmpeqb    xmm0, xmm0  // generate mask 0xff000000
     pslld      xmm0, 24
-    pcmpeqb    xmm1, xmm1       // generate mask 0x00ffffff
+    pcmpeqb    xmm1, xmm1  // generate mask 0x00ffffff
     psrld      xmm1, 8
 
   convertloop:
@@ -3504,14 +3395,15 @@
 
 #ifdef HAS_ARGBCOPYALPHAROW_AVX2
 // width in pixels
-__declspec(naked)
-void ARGBCopyAlphaRow_AVX2(const uint8* src, uint8* dst, int width) {
+__declspec(naked) void ARGBCopyAlphaRow_AVX2(const uint8_t* src,
+                                             uint8_t* dst,
+                                             int width) {
   __asm {
-    mov        eax, [esp + 4]   // src
-    mov        edx, [esp + 8]   // dst
-    mov        ecx, [esp + 12]  // count
+    mov        eax, [esp + 4]  // src
+    mov        edx, [esp + 8]  // dst
+    mov        ecx, [esp + 12]  // width
     vpcmpeqb   ymm0, ymm0, ymm0
-    vpsrld     ymm0, ymm0, 8    // generate mask 0x00ffffff
+    vpsrld     ymm0, ymm0, 8  // generate mask 0x00ffffff
 
   convertloop:
     vmovdqu    ymm1, [eax]
@@ -3533,11 +3425,12 @@
 
 #ifdef HAS_ARGBEXTRACTALPHAROW_SSE2
 // width in pixels
-__declspec(naked)
-void ARGBExtractAlphaRow_SSE2(const uint8* src_argb, uint8* dst_a, int width) {
+__declspec(naked) void ARGBExtractAlphaRow_SSE2(const uint8_t* src_argb,
+                                                uint8_t* dst_a,
+                                                int width) {
   __asm {
-    mov        eax, [esp + 4]   // src_argb
-    mov        edx, [esp + 8]   // dst_a
+    mov        eax, [esp + 4]  // src_argb
+    mov        edx, [esp + 8]  // dst_a
     mov        ecx, [esp + 12]  // width
 
   extractloop:
@@ -3558,17 +3451,54 @@
 }
 #endif  // HAS_ARGBEXTRACTALPHAROW_SSE2
 
+#ifdef HAS_ARGBEXTRACTALPHAROW_AVX2
+// width in pixels
+__declspec(naked) void ARGBExtractAlphaRow_AVX2(const uint8_t* src_argb,
+                                                uint8_t* dst_a,
+                                                int width) {
+  __asm {
+    mov        eax, [esp + 4]  // src_argb
+    mov        edx, [esp + 8]  // dst_a
+    mov        ecx, [esp + 12]  // width
+    vmovdqa    ymm4, ymmword ptr kPermdARGBToY_AVX
+
+  extractloop:
+    vmovdqu    ymm0, [eax]
+    vmovdqu    ymm1, [eax + 32]
+    vpsrld     ymm0, ymm0, 24
+    vpsrld     ymm1, ymm1, 24
+    vmovdqu    ymm2, [eax + 64]
+    vmovdqu    ymm3, [eax + 96]
+    lea        eax, [eax + 128]
+    vpackssdw  ymm0, ymm0, ymm1  // mutates
+    vpsrld     ymm2, ymm2, 24
+    vpsrld     ymm3, ymm3, 24
+    vpackssdw  ymm2, ymm2, ymm3  // mutates
+    vpackuswb  ymm0, ymm0, ymm2  // mutates
+    vpermd     ymm0, ymm4, ymm0  // unmutate
+    vmovdqu    [edx], ymm0
+    lea        edx, [edx + 32]
+    sub        ecx, 32
+    jg         extractloop
+
+    vzeroupper
+    ret
+  }
+}
+#endif  // HAS_ARGBEXTRACTALPHAROW_AVX2
+
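The new AVX2 alpha extractor above shifts each 32-bit pixel right by 24 (`vpsrld`) so only the alpha byte survives, then packs 32 results per iteration. A hedged scalar reference of the extraction (name is illustrative, not libyuv API):

```c
#include <stdint.h>

/* Scalar reference for ARGBExtractAlphaRow: alpha is the high byte of each
 * little-endian 32-bit ARGB pixel, i.e. byte offset 3. */
void argb_extract_alpha_ref(const uint8_t* src_argb, uint8_t* dst_a,
                            int width) {
  for (int i = 0; i < width; ++i)
    dst_a[i] = src_argb[4 * i + 3];
}
```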
 #ifdef HAS_ARGBCOPYYTOALPHAROW_SSE2
 // width in pixels
-__declspec(naked)
-void ARGBCopyYToAlphaRow_SSE2(const uint8* src, uint8* dst, int width) {
+__declspec(naked) void ARGBCopyYToAlphaRow_SSE2(const uint8_t* src,
+                                                uint8_t* dst,
+                                                int width) {
   __asm {
-    mov        eax, [esp + 4]   // src
-    mov        edx, [esp + 8]   // dst
-    mov        ecx, [esp + 12]  // count
-    pcmpeqb    xmm0, xmm0       // generate mask 0xff000000
+    mov        eax, [esp + 4]  // src
+    mov        edx, [esp + 8]  // dst
+    mov        ecx, [esp + 12]  // width
+    pcmpeqb    xmm0, xmm0  // generate mask 0xff000000
     pslld      xmm0, 24
-    pcmpeqb    xmm1, xmm1       // generate mask 0x00ffffff
+    pcmpeqb    xmm1, xmm1  // generate mask 0x00ffffff
     psrld      xmm1, 8
 
   convertloop:
@@ -3598,14 +3528,15 @@
 
 #ifdef HAS_ARGBCOPYYTOALPHAROW_AVX2
 // width in pixels
-__declspec(naked)
-void ARGBCopyYToAlphaRow_AVX2(const uint8* src, uint8* dst, int width) {
+__declspec(naked) void ARGBCopyYToAlphaRow_AVX2(const uint8_t* src,
+                                                uint8_t* dst,
+                                                int width) {
   __asm {
-    mov        eax, [esp + 4]   // src
-    mov        edx, [esp + 8]   // dst
-    mov        ecx, [esp + 12]  // count
+    mov        eax, [esp + 4]  // src
+    mov        edx, [esp + 8]  // dst
+    mov        ecx, [esp + 12]  // width
     vpcmpeqb   ymm0, ymm0, ymm0
-    vpsrld     ymm0, ymm0, 8    // generate mask 0x00ffffff
+    vpsrld     ymm0, ymm0, 8  // generate mask 0x00ffffff
 
   convertloop:
     vpmovzxbd  ymm1, qword ptr [eax]
@@ -3628,17 +3559,16 @@
 #endif  // HAS_ARGBCOPYYTOALPHAROW_AVX2
 
 #ifdef HAS_SETROW_X86
-// Write 'count' bytes using an 8 bit value repeated.
-// Count should be multiple of 4.
-__declspec(naked)
-void SetRow_X86(uint8* dst, uint8 v8, int count) {
+// Write 'width' bytes using an 8 bit value repeated.
+// width should be multiple of 4.
+__declspec(naked) void SetRow_X86(uint8_t* dst, uint8_t v8, int width) {
   __asm {
-    movzx      eax, byte ptr [esp + 8]    // v8
+    movzx      eax, byte ptr [esp + 8]  // v8
     mov        edx, 0x01010101  // Duplicate byte to all bytes.
-    mul        edx              // overwrites edx with upper part of result.
+    mul        edx  // overwrites edx with upper part of result.
     mov        edx, edi
-    mov        edi, [esp + 4]   // dst
-    mov        ecx, [esp + 12]  // count
+    mov        edi, [esp + 4]  // dst
+    mov        ecx, [esp + 12]  // width
     shr        ecx, 2
     rep stosd
     mov        edi, edx
@@ -3646,28 +3576,28 @@
   }
 }
 
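SetRow_X86 duplicates the 8-bit fill value across a 32-bit word by multiplying it by 0x01010101 (the `mul edx` above), so `rep stosd` can store four bytes per count. A hedged sketch of just that splat trick (name is illustrative, not libyuv API):

```c
#include <stdint.h>

/* The mul trick in SetRow_X86: multiplying an 8-bit value by 0x01010101
 * replicates it into all four bytes of a 32-bit word. */
uint32_t splat_byte(uint8_t v8) {
  return (uint32_t)v8 * 0x01010101u;
}
```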
-// Write 'count' bytes using an 8 bit value repeated.
-__declspec(naked)
-void SetRow_ERMS(uint8* dst, uint8 v8, int count) {
+// Write 'width' bytes using an 8 bit value repeated.
+__declspec(naked) void SetRow_ERMS(uint8_t* dst, uint8_t v8, int width) {
   __asm {
     mov        edx, edi
-    mov        edi, [esp + 4]   // dst
-    mov        eax, [esp + 8]   // v8
-    mov        ecx, [esp + 12]  // count
+    mov        edi, [esp + 4]  // dst
+    mov        eax, [esp + 8]  // v8
+    mov        ecx, [esp + 12]  // width
     rep stosb
     mov        edi, edx
     ret
   }
 }
 
-// Write 'count' 32 bit values.
-__declspec(naked)
-void ARGBSetRow_X86(uint8* dst_argb, uint32 v32, int count) {
+// Write 'width' 32 bit values.
+__declspec(naked) void ARGBSetRow_X86(uint8_t* dst_argb,
+                                      uint32_t v32,
+                                      int width) {
   __asm {
     mov        edx, edi
-    mov        edi, [esp + 4]   // dst
-    mov        eax, [esp + 8]   // v32
-    mov        ecx, [esp + 12]  // count
+    mov        edi, [esp + 4]  // dst
+    mov        eax, [esp + 8]  // v32
+    mov        ecx, [esp + 12]  // width
     rep stosd
     mov        edi, edx
     ret
@@ -3676,12 +3606,13 @@
 #endif  // HAS_SETROW_X86
 
 #ifdef HAS_YUY2TOYROW_AVX2
-__declspec(naked)
-void YUY2ToYRow_AVX2(const uint8* src_yuy2, uint8* dst_y, int width) {
+__declspec(naked) void YUY2ToYRow_AVX2(const uint8_t* src_yuy2,
+                                       uint8_t* dst_y,
+                                       int width) {
   __asm {
-    mov        eax, [esp + 4]    // src_yuy2
-    mov        edx, [esp + 8]    // dst_y
-    mov        ecx, [esp + 12]   // width
+    mov        eax, [esp + 4]  // src_yuy2
+    mov        edx, [esp + 8]  // dst_y
+    mov        ecx, [esp + 12]  // width
     vpcmpeqb   ymm5, ymm5, ymm5  // generate mask 0x00ff00ff
     vpsrlw     ymm5, ymm5, 8
 
@@ -3689,9 +3620,9 @@
     vmovdqu    ymm0, [eax]
     vmovdqu    ymm1, [eax + 32]
     lea        eax,  [eax + 64]
-    vpand      ymm0, ymm0, ymm5   // even bytes are Y
+    vpand      ymm0, ymm0, ymm5  // even bytes are Y
     vpand      ymm1, ymm1, ymm5
-    vpackuswb  ymm0, ymm0, ymm1   // mutates.
+    vpackuswb  ymm0, ymm0, ymm1  // mutates.
     vpermq     ymm0, ymm0, 0xd8
     vmovdqu    [edx], ymm0
     lea        edx, [edx + 32]
@@ -3702,18 +3633,20 @@
   }
 }
 
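YUY2ToYRow works because YUY2 packs pixels as Y0 U0 Y1 V0: luma occupies the even bytes, which is exactly what the 0x00ff00ff `vpand` mask above keeps before packing. A hedged scalar reference (name is illustrative, not libyuv API):

```c
/* Scalar reference for YUY2ToYRow: luma is the even bytes of YUY2. */
void yuy2_to_y_ref(const unsigned char* src_yuy2, unsigned char* dst_y,
                   int width) {
  for (int i = 0; i < width; ++i)
    dst_y[i] = src_yuy2[2 * i];
}
```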
-__declspec(naked)
-void YUY2ToUVRow_AVX2(const uint8* src_yuy2, int stride_yuy2,
-                      uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void YUY2ToUVRow_AVX2(const uint8_t* src_yuy2,
+                                        int stride_yuy2,
+                                        uint8_t* dst_u,
+                                        uint8_t* dst_v,
+                                        int width) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]    // src_yuy2
-    mov        esi, [esp + 8 + 8]    // stride_yuy2
-    mov        edx, [esp + 8 + 12]   // dst_u
-    mov        edi, [esp + 8 + 16]   // dst_v
-    mov        ecx, [esp + 8 + 20]   // width
-    vpcmpeqb   ymm5, ymm5, ymm5      // generate mask 0x00ff00ff
+    mov        eax, [esp + 8 + 4]  // src_yuy2
+    mov        esi, [esp + 8 + 8]  // stride_yuy2
+    mov        edx, [esp + 8 + 12]  // dst_u
+    mov        edi, [esp + 8 + 16]  // dst_v
+    mov        ecx, [esp + 8 + 20]  // width
+    vpcmpeqb   ymm5, ymm5, ymm5  // generate mask 0x00ff00ff
     vpsrlw     ymm5, ymm5, 8
     sub        edi, edx
 
@@ -3723,18 +3656,18 @@
     vpavgb     ymm0, ymm0, [eax + esi]
     vpavgb     ymm1, ymm1, [eax + esi + 32]
     lea        eax,  [eax + 64]
-    vpsrlw     ymm0, ymm0, 8      // YUYV -> UVUV
+    vpsrlw     ymm0, ymm0, 8  // YUYV -> UVUV
     vpsrlw     ymm1, ymm1, 8
-    vpackuswb  ymm0, ymm0, ymm1   // mutates.
+    vpackuswb  ymm0, ymm0, ymm1  // mutates.
     vpermq     ymm0, ymm0, 0xd8
     vpand      ymm1, ymm0, ymm5  // U
-    vpsrlw     ymm0, ymm0, 8     // V
+    vpsrlw     ymm0, ymm0, 8  // V
     vpackuswb  ymm1, ymm1, ymm1  // mutates.
     vpackuswb  ymm0, ymm0, ymm0  // mutates.
     vpermq     ymm1, ymm1, 0xd8
     vpermq     ymm0, ymm0, 0xd8
     vextractf128 [edx], ymm1, 0  // U
-    vextractf128 [edx + edi], ymm0, 0 // V
+    vextractf128 [edx + edi], ymm0, 0  // V
     lea        edx, [edx + 16]
     sub        ecx, 32
     jg         convertloop
@@ -3746,16 +3679,17 @@
   }
 }
 
-__declspec(naked)
-void YUY2ToUV422Row_AVX2(const uint8* src_yuy2,
-                         uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void YUY2ToUV422Row_AVX2(const uint8_t* src_yuy2,
+                                           uint8_t* dst_u,
+                                           uint8_t* dst_v,
+                                           int width) {
   __asm {
     push       edi
-    mov        eax, [esp + 4 + 4]    // src_yuy2
-    mov        edx, [esp + 4 + 8]    // dst_u
-    mov        edi, [esp + 4 + 12]   // dst_v
-    mov        ecx, [esp + 4 + 16]   // width
-    vpcmpeqb   ymm5, ymm5, ymm5      // generate mask 0x00ff00ff
+    mov        eax, [esp + 4 + 4]  // src_yuy2
+    mov        edx, [esp + 4 + 8]  // dst_u
+    mov        edi, [esp + 4 + 12]  // dst_v
+    mov        ecx, [esp + 4 + 16]  // width
+    vpcmpeqb   ymm5, ymm5, ymm5  // generate mask 0x00ff00ff
     vpsrlw     ymm5, ymm5, 8
     sub        edi, edx
 
@@ -3763,18 +3697,18 @@
     vmovdqu    ymm0, [eax]
     vmovdqu    ymm1, [eax + 32]
     lea        eax,  [eax + 64]
-    vpsrlw     ymm0, ymm0, 8      // YUYV -> UVUV
+    vpsrlw     ymm0, ymm0, 8  // YUYV -> UVUV
     vpsrlw     ymm1, ymm1, 8
-    vpackuswb  ymm0, ymm0, ymm1   // mutates.
+    vpackuswb  ymm0, ymm0, ymm1  // mutates.
     vpermq     ymm0, ymm0, 0xd8
     vpand      ymm1, ymm0, ymm5  // U
-    vpsrlw     ymm0, ymm0, 8     // V
+    vpsrlw     ymm0, ymm0, 8  // V
     vpackuswb  ymm1, ymm1, ymm1  // mutates.
     vpackuswb  ymm0, ymm0, ymm0  // mutates.
     vpermq     ymm1, ymm1, 0xd8
     vpermq     ymm0, ymm0, 0xd8
     vextractf128 [edx], ymm1, 0  // U
-    vextractf128 [edx + edi], ymm0, 0 // V
+    vextractf128 [edx + edi], ymm0, 0  // V
     lea        edx, [edx + 16]
     sub        ecx, 32
     jg         convertloop
@@ -3785,21 +3719,21 @@
   }
 }
 
-__declspec(naked)
-void UYVYToYRow_AVX2(const uint8* src_uyvy,
-                     uint8* dst_y, int width) {
+__declspec(naked) void UYVYToYRow_AVX2(const uint8_t* src_uyvy,
+                                       uint8_t* dst_y,
+                                       int width) {
   __asm {
-    mov        eax, [esp + 4]    // src_uyvy
-    mov        edx, [esp + 8]    // dst_y
-    mov        ecx, [esp + 12]   // width
+    mov        eax, [esp + 4]  // src_uyvy
+    mov        edx, [esp + 8]  // dst_y
+    mov        ecx, [esp + 12]  // width
 
   convertloop:
     vmovdqu    ymm0, [eax]
     vmovdqu    ymm1, [eax + 32]
     lea        eax,  [eax + 64]
-    vpsrlw     ymm0, ymm0, 8      // odd bytes are Y
+    vpsrlw     ymm0, ymm0, 8  // odd bytes are Y
     vpsrlw     ymm1, ymm1, 8
-    vpackuswb  ymm0, ymm0, ymm1   // mutates.
+    vpackuswb  ymm0, ymm0, ymm1  // mutates.
     vpermq     ymm0, ymm0, 0xd8
     vmovdqu    [edx], ymm0
     lea        edx, [edx + 32]
@@ -3810,18 +3744,20 @@
   }
 }
 
-__declspec(naked)
-void UYVYToUVRow_AVX2(const uint8* src_uyvy, int stride_uyvy,
-                      uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void UYVYToUVRow_AVX2(const uint8_t* src_uyvy,
+                                        int stride_uyvy,
+                                        uint8_t* dst_u,
+                                        uint8_t* dst_v,
+                                        int width) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]    // src_yuy2
-    mov        esi, [esp + 8 + 8]    // stride_yuy2
-    mov        edx, [esp + 8 + 12]   // dst_u
-    mov        edi, [esp + 8 + 16]   // dst_v
-    mov        ecx, [esp + 8 + 20]   // width
-    vpcmpeqb   ymm5, ymm5, ymm5      // generate mask 0x00ff00ff
+    mov        eax, [esp + 8 + 4]  // src_yuy2
+    mov        esi, [esp + 8 + 8]  // stride_yuy2
+    mov        edx, [esp + 8 + 12]  // dst_u
+    mov        edi, [esp + 8 + 16]  // dst_v
+    mov        ecx, [esp + 8 + 20]  // width
+    vpcmpeqb   ymm5, ymm5, ymm5  // generate mask 0x00ff00ff
     vpsrlw     ymm5, ymm5, 8
     sub        edi, edx
 
@@ -3831,18 +3767,18 @@
     vpavgb     ymm0, ymm0, [eax + esi]
     vpavgb     ymm1, ymm1, [eax + esi + 32]
     lea        eax,  [eax + 64]
-    vpand      ymm0, ymm0, ymm5   // UYVY -> UVUV
+    vpand      ymm0, ymm0, ymm5  // UYVY -> UVUV
     vpand      ymm1, ymm1, ymm5
-    vpackuswb  ymm0, ymm0, ymm1   // mutates.
+    vpackuswb  ymm0, ymm0, ymm1  // mutates.
     vpermq     ymm0, ymm0, 0xd8
     vpand      ymm1, ymm0, ymm5  // U
-    vpsrlw     ymm0, ymm0, 8     // V
+    vpsrlw     ymm0, ymm0, 8  // V
     vpackuswb  ymm1, ymm1, ymm1  // mutates.
     vpackuswb  ymm0, ymm0, ymm0  // mutates.
     vpermq     ymm1, ymm1, 0xd8
     vpermq     ymm0, ymm0, 0xd8
     vextractf128 [edx], ymm1, 0  // U
-    vextractf128 [edx + edi], ymm0, 0 // V
+    vextractf128 [edx + edi], ymm0, 0  // V
     lea        edx, [edx + 16]
     sub        ecx, 32
     jg         convertloop
@@ -3854,16 +3790,17 @@
   }
 }
 
-__declspec(naked)
-void UYVYToUV422Row_AVX2(const uint8* src_uyvy,
-                         uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void UYVYToUV422Row_AVX2(const uint8_t* src_uyvy,
+                                           uint8_t* dst_u,
+                                           uint8_t* dst_v,
+                                           int width) {
   __asm {
     push       edi
-    mov        eax, [esp + 4 + 4]    // src_yuy2
-    mov        edx, [esp + 4 + 8]    // dst_u
-    mov        edi, [esp + 4 + 12]   // dst_v
-    mov        ecx, [esp + 4 + 16]   // width
-    vpcmpeqb   ymm5, ymm5, ymm5      // generate mask 0x00ff00ff
+    mov        eax, [esp + 4 + 4]  // src_yuy2
+    mov        edx, [esp + 4 + 8]  // dst_u
+    mov        edi, [esp + 4 + 12]  // dst_v
+    mov        ecx, [esp + 4 + 16]  // width
+    vpcmpeqb   ymm5, ymm5, ymm5  // generate mask 0x00ff00ff
     vpsrlw     ymm5, ymm5, 8
     sub        edi, edx
 
@@ -3871,18 +3808,18 @@
     vmovdqu    ymm0, [eax]
     vmovdqu    ymm1, [eax + 32]
     lea        eax,  [eax + 64]
-    vpand      ymm0, ymm0, ymm5   // UYVY -> UVUV
+    vpand      ymm0, ymm0, ymm5  // UYVY -> UVUV
     vpand      ymm1, ymm1, ymm5
-    vpackuswb  ymm0, ymm0, ymm1   // mutates.
+    vpackuswb  ymm0, ymm0, ymm1  // mutates.
     vpermq     ymm0, ymm0, 0xd8
     vpand      ymm1, ymm0, ymm5  // U
-    vpsrlw     ymm0, ymm0, 8     // V
+    vpsrlw     ymm0, ymm0, 8  // V
     vpackuswb  ymm1, ymm1, ymm1  // mutates.
     vpackuswb  ymm0, ymm0, ymm0  // mutates.
     vpermq     ymm1, ymm1, 0xd8
     vpermq     ymm0, ymm0, 0xd8
     vextractf128 [edx], ymm1, 0  // U
-    vextractf128 [edx + edi], ymm0, 0 // V
+    vextractf128 [edx + edi], ymm0, 0  // V
     lea        edx, [edx + 16]
     sub        ecx, 32
     jg         convertloop
@@ -3895,21 +3832,21 @@
 #endif  // HAS_YUY2TOYROW_AVX2
 
 #ifdef HAS_YUY2TOYROW_SSE2
-__declspec(naked)
-void YUY2ToYRow_SSE2(const uint8* src_yuy2,
-                     uint8* dst_y, int width) {
+__declspec(naked) void YUY2ToYRow_SSE2(const uint8_t* src_yuy2,
+                                       uint8_t* dst_y,
+                                       int width) {
   __asm {
-    mov        eax, [esp + 4]    // src_yuy2
-    mov        edx, [esp + 8]    // dst_y
-    mov        ecx, [esp + 12]   // width
-    pcmpeqb    xmm5, xmm5        // generate mask 0x00ff00ff
+    mov        eax, [esp + 4]  // src_yuy2
+    mov        edx, [esp + 8]  // dst_y
+    mov        ecx, [esp + 12]  // width
+    pcmpeqb    xmm5, xmm5  // generate mask 0x00ff00ff
     psrlw      xmm5, 8
 
   convertloop:
     movdqu     xmm0, [eax]
     movdqu     xmm1, [eax + 16]
     lea        eax,  [eax + 32]
-    pand       xmm0, xmm5   // even bytes are Y
+    pand       xmm0, xmm5  // even bytes are Y
     pand       xmm1, xmm5
     packuswb   xmm0, xmm1
     movdqu     [edx], xmm0
@@ -3920,18 +3857,20 @@
   }
 }
 
-__declspec(naked)
-void YUY2ToUVRow_SSE2(const uint8* src_yuy2, int stride_yuy2,
-                      uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void YUY2ToUVRow_SSE2(const uint8_t* src_yuy2,
+                                        int stride_yuy2,
+                                        uint8_t* dst_u,
+                                        uint8_t* dst_v,
+                                        int width) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]    // src_yuy2
-    mov        esi, [esp + 8 + 8]    // stride_yuy2
-    mov        edx, [esp + 8 + 12]   // dst_u
-    mov        edi, [esp + 8 + 16]   // dst_v
-    mov        ecx, [esp + 8 + 20]   // width
-    pcmpeqb    xmm5, xmm5            // generate mask 0x00ff00ff
+    mov        eax, [esp + 8 + 4]  // src_yuy2
+    mov        esi, [esp + 8 + 8]  // stride_yuy2
+    mov        edx, [esp + 8 + 12]  // dst_u
+    mov        edi, [esp + 8 + 16]  // dst_v
+    mov        ecx, [esp + 8 + 20]  // width
+    pcmpeqb    xmm5, xmm5  // generate mask 0x00ff00ff
     psrlw      xmm5, 8
     sub        edi, edx
 
@@ -3943,13 +3882,13 @@
     lea        eax,  [eax + 32]
     pavgb      xmm0, xmm2
     pavgb      xmm1, xmm3
-    psrlw      xmm0, 8      // YUYV -> UVUV
+    psrlw      xmm0, 8  // YUYV -> UVUV
     psrlw      xmm1, 8
     packuswb   xmm0, xmm1
     movdqa     xmm1, xmm0
     pand       xmm0, xmm5  // U
     packuswb   xmm0, xmm0
-    psrlw      xmm1, 8     // V
+    psrlw      xmm1, 8  // V
     packuswb   xmm1, xmm1
     movq       qword ptr [edx], xmm0
     movq       qword ptr [edx + edi], xmm1
@@ -3963,16 +3902,17 @@
   }
 }
 
-__declspec(naked)
-void YUY2ToUV422Row_SSE2(const uint8* src_yuy2,
-                         uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void YUY2ToUV422Row_SSE2(const uint8_t* src_yuy2,
+                                           uint8_t* dst_u,
+                                           uint8_t* dst_v,
+                                           int width) {
   __asm {
     push       edi
-    mov        eax, [esp + 4 + 4]    // src_yuy2
-    mov        edx, [esp + 4 + 8]    // dst_u
-    mov        edi, [esp + 4 + 12]   // dst_v
-    mov        ecx, [esp + 4 + 16]   // width
-    pcmpeqb    xmm5, xmm5            // generate mask 0x00ff00ff
+    mov        eax, [esp + 4 + 4]  // src_yuy2
+    mov        edx, [esp + 4 + 8]  // dst_u
+    mov        edi, [esp + 4 + 12]  // dst_v
+    mov        ecx, [esp + 4 + 16]  // width
+    pcmpeqb    xmm5, xmm5  // generate mask 0x00ff00ff
     psrlw      xmm5, 8
     sub        edi, edx
 
@@ -3980,13 +3920,13 @@
     movdqu     xmm0, [eax]
     movdqu     xmm1, [eax + 16]
     lea        eax,  [eax + 32]
-    psrlw      xmm0, 8      // YUYV -> UVUV
+    psrlw      xmm0, 8  // YUYV -> UVUV
     psrlw      xmm1, 8
     packuswb   xmm0, xmm1
     movdqa     xmm1, xmm0
     pand       xmm0, xmm5  // U
     packuswb   xmm0, xmm0
-    psrlw      xmm1, 8     // V
+    psrlw      xmm1, 8  // V
     packuswb   xmm1, xmm1
     movq       qword ptr [edx], xmm0
     movq       qword ptr [edx + edi], xmm1
@@ -3999,19 +3939,19 @@
   }
 }
 
-__declspec(naked)
-void UYVYToYRow_SSE2(const uint8* src_uyvy,
-                     uint8* dst_y, int width) {
+__declspec(naked) void UYVYToYRow_SSE2(const uint8_t* src_uyvy,
+                                       uint8_t* dst_y,
+                                       int width) {
   __asm {
-    mov        eax, [esp + 4]    // src_uyvy
-    mov        edx, [esp + 8]    // dst_y
-    mov        ecx, [esp + 12]   // width
+    mov        eax, [esp + 4]  // src_uyvy
+    mov        edx, [esp + 8]  // dst_y
+    mov        ecx, [esp + 12]  // width
 
   convertloop:
     movdqu     xmm0, [eax]
     movdqu     xmm1, [eax + 16]
     lea        eax,  [eax + 32]
-    psrlw      xmm0, 8    // odd bytes are Y
+    psrlw      xmm0, 8  // odd bytes are Y
     psrlw      xmm1, 8
     packuswb   xmm0, xmm1
     movdqu     [edx], xmm0
@@ -4022,18 +3962,20 @@
   }
 }
 
-__declspec(naked)
-void UYVYToUVRow_SSE2(const uint8* src_uyvy, int stride_uyvy,
-                      uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void UYVYToUVRow_SSE2(const uint8_t* src_uyvy,
+                                        int stride_uyvy,
+                                        uint8_t* dst_u,
+                                        uint8_t* dst_v,
+                                        int width) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]    // src_yuy2
-    mov        esi, [esp + 8 + 8]    // stride_yuy2
-    mov        edx, [esp + 8 + 12]   // dst_u
-    mov        edi, [esp + 8 + 16]   // dst_v
-    mov        ecx, [esp + 8 + 20]   // width
-    pcmpeqb    xmm5, xmm5            // generate mask 0x00ff00ff
+    mov        eax, [esp + 8 + 4]  // src_yuy2
+    mov        esi, [esp + 8 + 8]  // stride_yuy2
+    mov        edx, [esp + 8 + 12]  // dst_u
+    mov        edi, [esp + 8 + 16]  // dst_v
+    mov        ecx, [esp + 8 + 20]  // width
+    pcmpeqb    xmm5, xmm5  // generate mask 0x00ff00ff
     psrlw      xmm5, 8
     sub        edi, edx
 
@@ -4045,13 +3987,13 @@
     lea        eax,  [eax + 32]
     pavgb      xmm0, xmm2
     pavgb      xmm1, xmm3
-    pand       xmm0, xmm5   // UYVY -> UVUV
+    pand       xmm0, xmm5  // UYVY -> UVUV
     pand       xmm1, xmm5
     packuswb   xmm0, xmm1
     movdqa     xmm1, xmm0
     pand       xmm0, xmm5  // U
     packuswb   xmm0, xmm0
-    psrlw      xmm1, 8     // V
+    psrlw      xmm1, 8  // V
     packuswb   xmm1, xmm1
     movq       qword ptr [edx], xmm0
     movq       qword ptr [edx + edi], xmm1
@@ -4065,16 +4007,17 @@
   }
 }
 
-__declspec(naked)
-void UYVYToUV422Row_SSE2(const uint8* src_uyvy,
-                         uint8* dst_u, uint8* dst_v, int width) {
+__declspec(naked) void UYVYToUV422Row_SSE2(const uint8_t* src_uyvy,
+                                           uint8_t* dst_u,
+                                           uint8_t* dst_v,
+                                           int width) {
   __asm {
     push       edi
-    mov        eax, [esp + 4 + 4]    // src_yuy2
-    mov        edx, [esp + 4 + 8]    // dst_u
-    mov        edi, [esp + 4 + 12]   // dst_v
-    mov        ecx, [esp + 4 + 16]   // width
-    pcmpeqb    xmm5, xmm5            // generate mask 0x00ff00ff
+    mov        eax, [esp + 4 + 4]  // src_yuy2
+    mov        edx, [esp + 4 + 8]  // dst_u
+    mov        edi, [esp + 4 + 12]  // dst_v
+    mov        ecx, [esp + 4 + 16]  // width
+    pcmpeqb    xmm5, xmm5  // generate mask 0x00ff00ff
     psrlw      xmm5, 8
     sub        edi, edx
 
@@ -4082,13 +4025,13 @@
     movdqu     xmm0, [eax]
     movdqu     xmm1, [eax + 16]
     lea        eax,  [eax + 32]
-    pand       xmm0, xmm5   // UYVY -> UVUV
+    pand       xmm0, xmm5  // UYVY -> UVUV
     pand       xmm1, xmm5
     packuswb   xmm0, xmm1
     movdqa     xmm1, xmm0
     pand       xmm0, xmm5  // U
     packuswb   xmm0, xmm0
-    psrlw      xmm1, 8     // V
+    psrlw      xmm1, 8  // V
     packuswb   xmm1, xmm1
     movq       qword ptr [edx], xmm0
     movq       qword ptr [edx + edi], xmm1
@@ -4108,13 +4051,15 @@
 // =((A2*C2)+(B2*(255-C2))+255)/256
 // signed version of math
 // =(((A2-128)*C2)+((B2-128)*(255-C2))+32768+127)/256
-__declspec(naked)
-void BlendPlaneRow_SSSE3(const uint8* src0, const uint8* src1,
-                         const uint8* alpha, uint8* dst, int width) {
+__declspec(naked) void BlendPlaneRow_SSSE3(const uint8_t* src0,
+                                           const uint8_t* src1,
+                                           const uint8_t* alpha,
+                                           uint8_t* dst,
+                                           int width) {
   __asm {
     push       esi
     push       edi
-    pcmpeqb    xmm5, xmm5       // generate mask 0xff00ff00
+    pcmpeqb    xmm5, xmm5  // generate mask 0xff00ff00
     psllw      xmm5, 8
     mov        eax, 0x80808080  // 128 for biasing image to signed.
     movd       xmm6, eax
@@ -4123,8 +4068,8 @@
     mov        eax, 0x807f807f  // 32768 + 127 for unbias and round.
     movd       xmm7, eax
     pshufd     xmm7, xmm7, 0x00
-    mov        eax, [esp + 8 + 4]   // src0
-    mov        edx, [esp + 8 + 8]   // src1
+    mov        eax, [esp + 8 + 4]  // src0
+    mov        edx, [esp + 8 + 8]  // src1
     mov        esi, [esp + 8 + 12]  // alpha
     mov        edi, [esp + 8 + 16]  // dst
     mov        ecx, [esp + 8 + 20]  // width
@@ -4132,17 +4077,17 @@
     sub        edx, esi
     sub        edi, esi
 
-    // 8 pixel loop.
+        // 8 pixel loop.
   convertloop8:
-    movq       xmm0, qword ptr [esi]        // alpha
+    movq       xmm0, qword ptr [esi]  // alpha
     punpcklbw  xmm0, xmm0
-    pxor       xmm0, xmm5         // a, 255-a
+    pxor       xmm0, xmm5  // a, 255-a
     movq       xmm1, qword ptr [eax + esi]  // src0
     movq       xmm2, qword ptr [edx + esi]  // src1
     punpcklbw  xmm1, xmm2
-    psubb      xmm1, xmm6         // bias src0/1 - 128
+    psubb      xmm1, xmm6  // bias src0/1 - 128
     pmaddubsw  xmm0, xmm1
-    paddw      xmm0, xmm7         // unbias result - 32768 and round.
+    paddw      xmm0, xmm7  // unbias result - 32768 and round.
     psrlw      xmm0, 8
     packuswb   xmm0, xmm0
     movq       qword ptr [edi + esi], xmm0
@@ -4163,13 +4108,15 @@
 // =((A2*C2)+(B2*(255-C2))+255)/256
 // signed version of math
 // =(((A2-128)*C2)+((B2-128)*(255-C2))+32768+127)/256
-__declspec(naked)
-void BlendPlaneRow_AVX2(const uint8* src0, const uint8* src1,
-                         const uint8* alpha, uint8* dst, int width) {
+__declspec(naked) void BlendPlaneRow_AVX2(const uint8_t* src0,
+                                          const uint8_t* src1,
+                                          const uint8_t* alpha,
+                                          uint8_t* dst,
+                                          int width) {
   __asm {
     push        esi
     push        edi
-    vpcmpeqb    ymm5, ymm5, ymm5       // generate mask 0xff00ff00
+    vpcmpeqb    ymm5, ymm5, ymm5  // generate mask 0xff00ff00
     vpsllw      ymm5, ymm5, 8
     mov         eax, 0x80808080  // 128 for biasing image to signed.
     vmovd       xmm6, eax
@@ -4177,8 +4124,8 @@
     mov         eax, 0x807f807f  // 32768 + 127 for unbias and round.
     vmovd       xmm7, eax
     vbroadcastss ymm7, xmm7
-    mov         eax, [esp + 8 + 4]   // src0
-    mov         edx, [esp + 8 + 8]   // src1
+    mov         eax, [esp + 8 + 4]  // src0
+    mov         edx, [esp + 8 + 8]  // src1
     mov         esi, [esp + 8 + 12]  // alpha
     mov         edi, [esp + 8 + 16]  // dst
     mov         ecx, [esp + 8 + 20]  // width
@@ -4186,23 +4133,23 @@
     sub         edx, esi
     sub         edi, esi
 
-    // 32 pixel loop.
+        // 32 pixel loop.
   convertloop32:
-    vmovdqu     ymm0, [esi]        // alpha
-    vpunpckhbw  ymm3, ymm0, ymm0   // 8..15, 24..31
-    vpunpcklbw  ymm0, ymm0, ymm0   // 0..7, 16..23
-    vpxor       ymm3, ymm3, ymm5   // a, 255-a
-    vpxor       ymm0, ymm0, ymm5   // a, 255-a
+    vmovdqu     ymm0, [esi]  // alpha
+    vpunpckhbw  ymm3, ymm0, ymm0  // 8..15, 24..31
+    vpunpcklbw  ymm0, ymm0, ymm0  // 0..7, 16..23
+    vpxor       ymm3, ymm3, ymm5  // a, 255-a
+    vpxor       ymm0, ymm0, ymm5  // a, 255-a
     vmovdqu     ymm1, [eax + esi]  // src0
     vmovdqu     ymm2, [edx + esi]  // src1
     vpunpckhbw  ymm4, ymm1, ymm2
     vpunpcklbw  ymm1, ymm1, ymm2
-    vpsubb      ymm4, ymm4, ymm6   // bias src0/1 - 128
-    vpsubb      ymm1, ymm1, ymm6   // bias src0/1 - 128
+    vpsubb      ymm4, ymm4, ymm6  // bias src0/1 - 128
+    vpsubb      ymm1, ymm1, ymm6  // bias src0/1 - 128
     vpmaddubsw  ymm3, ymm3, ymm4
     vpmaddubsw  ymm0, ymm0, ymm1
-    vpaddw      ymm3, ymm3, ymm7   // unbias result - 32768 and round.
-    vpaddw      ymm0, ymm0, ymm7   // unbias result - 32768 and round.
+    vpaddw      ymm3, ymm3, ymm7  // unbias result - 32768 and round.
+    vpaddw      ymm0, ymm0, ymm7  // unbias result - 32768 and round.
     vpsrlw      ymm3, ymm3, 8
     vpsrlw      ymm0, ymm0, 8
     vpackuswb   ymm0, ymm0, ymm3
@@ -4221,52 +4168,51 @@
 
 #ifdef HAS_ARGBBLENDROW_SSSE3
 // Shuffle table for isolating alpha.
-static const uvec8 kShuffleAlpha = {
-  3u, 0x80, 3u, 0x80, 7u, 0x80, 7u, 0x80,
-  11u, 0x80, 11u, 0x80, 15u, 0x80, 15u, 0x80
-};
+static const uvec8 kShuffleAlpha = {3u,  0x80, 3u,  0x80, 7u,  0x80, 7u,  0x80,
+                                    11u, 0x80, 11u, 0x80, 15u, 0x80, 15u, 0x80};
 
 // Blend 8 pixels at a time.
-__declspec(naked)
-void ARGBBlendRow_SSSE3(const uint8* src_argb0, const uint8* src_argb1,
-                        uint8* dst_argb, int width) {
+__declspec(naked) void ARGBBlendRow_SSSE3(const uint8_t* src_argb0,
+                                          const uint8_t* src_argb1,
+                                          uint8_t* dst_argb,
+                                          int width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]   // src_argb0
-    mov        esi, [esp + 4 + 8]   // src_argb1
+    mov        eax, [esp + 4 + 4]  // src_argb0
+    mov        esi, [esp + 4 + 8]  // src_argb1
     mov        edx, [esp + 4 + 12]  // dst_argb
     mov        ecx, [esp + 4 + 16]  // width
-    pcmpeqb    xmm7, xmm7       // generate constant 0x0001
+    pcmpeqb    xmm7, xmm7  // generate constant 0x0001
     psrlw      xmm7, 15
-    pcmpeqb    xmm6, xmm6       // generate mask 0x00ff00ff
+    pcmpeqb    xmm6, xmm6  // generate mask 0x00ff00ff
     psrlw      xmm6, 8
-    pcmpeqb    xmm5, xmm5       // generate mask 0xff00ff00
+    pcmpeqb    xmm5, xmm5  // generate mask 0xff00ff00
     psllw      xmm5, 8
-    pcmpeqb    xmm4, xmm4       // generate mask 0xff000000
+    pcmpeqb    xmm4, xmm4  // generate mask 0xff000000
     pslld      xmm4, 24
     sub        ecx, 4
-    jl         convertloop4b    // less than 4 pixels?
+    jl         convertloop4b  // less than 4 pixels?
 
-    // 4 pixel loop.
+        // 4 pixel loop.
   convertloop4:
-    movdqu     xmm3, [eax]      // src argb
+    movdqu     xmm3, [eax]  // src argb
     lea        eax, [eax + 16]
-    movdqa     xmm0, xmm3       // src argb
-    pxor       xmm3, xmm4       // ~alpha
-    movdqu     xmm2, [esi]      // _r_b
-    pshufb     xmm3, xmmword ptr kShuffleAlpha // alpha
-    pand       xmm2, xmm6       // _r_b
-    paddw      xmm3, xmm7       // 256 - alpha
-    pmullw     xmm2, xmm3       // _r_b * alpha
-    movdqu     xmm1, [esi]      // _a_g
+    movdqa     xmm0, xmm3  // src argb
+    pxor       xmm3, xmm4  // ~alpha
+    movdqu     xmm2, [esi]  // _r_b
+    pshufb     xmm3, xmmword ptr kShuffleAlpha  // alpha
+    pand       xmm2, xmm6  // _r_b
+    paddw      xmm3, xmm7  // 256 - alpha
+    pmullw     xmm2, xmm3  // _r_b * alpha
+    movdqu     xmm1, [esi]  // _a_g
     lea        esi, [esi + 16]
-    psrlw      xmm1, 8          // _a_g
-    por        xmm0, xmm4       // set alpha to 255
-    pmullw     xmm1, xmm3       // _a_g * alpha
-    psrlw      xmm2, 8          // _r_b convert to 8 bits again
-    paddusb    xmm0, xmm2       // + src argb
-    pand       xmm1, xmm5       // a_g_ convert to 8 bits again
-    paddusb    xmm0, xmm1       // + src argb
+    psrlw      xmm1, 8  // _a_g
+    por        xmm0, xmm4  // set alpha to 255
+    pmullw     xmm1, xmm3  // _a_g * alpha
+    psrlw      xmm2, 8  // _r_b convert to 8 bits again
+    paddusb    xmm0, xmm2  // + src argb
+    pand       xmm1, xmm5  // a_g_ convert to 8 bits again
+    paddusb    xmm0, xmm1  // + src argb
     movdqu     [edx], xmm0
     lea        edx, [edx + 16]
     sub        ecx, 4
@@ -4276,26 +4222,26 @@
     add        ecx, 4 - 1
     jl         convertloop1b
 
-    // 1 pixel loop.
+        // 1 pixel loop.
   convertloop1:
-    movd       xmm3, [eax]      // src argb
+    movd       xmm3, [eax]  // src argb
     lea        eax, [eax + 4]
-    movdqa     xmm0, xmm3       // src argb
-    pxor       xmm3, xmm4       // ~alpha
-    movd       xmm2, [esi]      // _r_b
-    pshufb     xmm3, xmmword ptr kShuffleAlpha // alpha
-    pand       xmm2, xmm6       // _r_b
-    paddw      xmm3, xmm7       // 256 - alpha
-    pmullw     xmm2, xmm3       // _r_b * alpha
-    movd       xmm1, [esi]      // _a_g
+    movdqa     xmm0, xmm3  // src argb
+    pxor       xmm3, xmm4  // ~alpha
+    movd       xmm2, [esi]  // _r_b
+    pshufb     xmm3, xmmword ptr kShuffleAlpha  // alpha
+    pand       xmm2, xmm6  // _r_b
+    paddw      xmm3, xmm7  // 256 - alpha
+    pmullw     xmm2, xmm3  // _r_b * alpha
+    movd       xmm1, [esi]  // _a_g
     lea        esi, [esi + 4]
-    psrlw      xmm1, 8          // _a_g
-    por        xmm0, xmm4       // set alpha to 255
-    pmullw     xmm1, xmm3       // _a_g * alpha
-    psrlw      xmm2, 8          // _r_b convert to 8 bits again
-    paddusb    xmm0, xmm2       // + src argb
-    pand       xmm1, xmm5       // a_g_ convert to 8 bits again
-    paddusb    xmm0, xmm1       // + src argb
+    psrlw      xmm1, 8  // _a_g
+    por        xmm0, xmm4  // set alpha to 255
+    pmullw     xmm1, xmm3  // _a_g * alpha
+    psrlw      xmm2, 8  // _r_b convert to 8 bits again
+    paddusb    xmm0, xmm2  // + src argb
+    pand       xmm1, xmm5  // a_g_ convert to 8 bits again
+    paddusb    xmm0, xmm1  // + src argb
     movd       [edx], xmm0
     lea        edx, [edx + 4]
     sub        ecx, 1
@@ -4311,41 +4257,42 @@
 #ifdef HAS_ARGBATTENUATEROW_SSSE3
 // Shuffle table duplicating alpha.
 static const uvec8 kShuffleAlpha0 = {
-  3u, 3u, 3u, 3u, 3u, 3u, 128u, 128u, 7u, 7u, 7u, 7u, 7u, 7u, 128u, 128u,
+    3u, 3u, 3u, 3u, 3u, 3u, 128u, 128u, 7u, 7u, 7u, 7u, 7u, 7u, 128u, 128u,
 };
 static const uvec8 kShuffleAlpha1 = {
-  11u, 11u, 11u, 11u, 11u, 11u, 128u, 128u,
-  15u, 15u, 15u, 15u, 15u, 15u, 128u, 128u,
+    11u, 11u, 11u, 11u, 11u, 11u, 128u, 128u,
+    15u, 15u, 15u, 15u, 15u, 15u, 128u, 128u,
 };
-__declspec(naked)
-void ARGBAttenuateRow_SSSE3(const uint8* src_argb, uint8* dst_argb, int width) {
+__declspec(naked) void ARGBAttenuateRow_SSSE3(const uint8_t* src_argb,
+                                              uint8_t* dst_argb,
+                                              int width) {
   __asm {
-    mov        eax, [esp + 4]   // src_argb0
-    mov        edx, [esp + 8]   // dst_argb
+    mov        eax, [esp + 4]  // src_argb0
+    mov        edx, [esp + 8]  // dst_argb
     mov        ecx, [esp + 12]  // width
-    pcmpeqb    xmm3, xmm3       // generate mask 0xff000000
+    pcmpeqb    xmm3, xmm3  // generate mask 0xff000000
     pslld      xmm3, 24
     movdqa     xmm4, xmmword ptr kShuffleAlpha0
     movdqa     xmm5, xmmword ptr kShuffleAlpha1
 
  convertloop:
-    movdqu     xmm0, [eax]      // read 4 pixels
-    pshufb     xmm0, xmm4       // isolate first 2 alphas
-    movdqu     xmm1, [eax]      // read 4 pixels
-    punpcklbw  xmm1, xmm1       // first 2 pixel rgbs
-    pmulhuw    xmm0, xmm1       // rgb * a
-    movdqu     xmm1, [eax]      // read 4 pixels
-    pshufb     xmm1, xmm5       // isolate next 2 alphas
-    movdqu     xmm2, [eax]      // read 4 pixels
-    punpckhbw  xmm2, xmm2       // next 2 pixel rgbs
-    pmulhuw    xmm1, xmm2       // rgb * a
-    movdqu     xmm2, [eax]      // mask original alpha
+    movdqu     xmm0, [eax]  // read 4 pixels
+    pshufb     xmm0, xmm4  // isolate first 2 alphas
+    movdqu     xmm1, [eax]  // read 4 pixels
+    punpcklbw  xmm1, xmm1  // first 2 pixel rgbs
+    pmulhuw    xmm0, xmm1  // rgb * a
+    movdqu     xmm1, [eax]  // read 4 pixels
+    pshufb     xmm1, xmm5  // isolate next 2 alphas
+    movdqu     xmm2, [eax]  // read 4 pixels
+    punpckhbw  xmm2, xmm2  // next 2 pixel rgbs
+    pmulhuw    xmm1, xmm2  // rgb * a
+    movdqu     xmm2, [eax]  // mask original alpha
     lea        eax, [eax + 16]
     pand       xmm2, xmm3
     psrlw      xmm0, 8
     psrlw      xmm1, 8
     packuswb   xmm0, xmm1
-    por        xmm0, xmm2       // copy original alpha
+    por        xmm0, xmm2  // copy original alpha
     movdqu     [edx], xmm0
     lea        edx, [edx + 16]
     sub        ecx, 4
@@ -4358,22 +4305,23 @@
 
 #ifdef HAS_ARGBATTENUATEROW_AVX2
 // Shuffle table duplicating alpha.
-static const uvec8 kShuffleAlpha_AVX2 = {
-  6u, 7u, 6u, 7u, 6u, 7u, 128u, 128u, 14u, 15u, 14u, 15u, 14u, 15u, 128u, 128u
-};
-__declspec(naked)
-void ARGBAttenuateRow_AVX2(const uint8* src_argb, uint8* dst_argb, int width) {
+static const uvec8 kShuffleAlpha_AVX2 = {6u,   7u,   6u,   7u,  6u,  7u,
+                                         128u, 128u, 14u,  15u, 14u, 15u,
+                                         14u,  15u,  128u, 128u};
+__declspec(naked) void ARGBAttenuateRow_AVX2(const uint8_t* src_argb,
+                                             uint8_t* dst_argb,
+                                             int width) {
   __asm {
-    mov        eax, [esp + 4]   // src_argb0
-    mov        edx, [esp + 8]   // dst_argb
+    mov        eax, [esp + 4]  // src_argb0
+    mov        edx, [esp + 8]  // dst_argb
     mov        ecx, [esp + 12]  // width
     sub        edx, eax
     vbroadcastf128 ymm4, xmmword ptr kShuffleAlpha_AVX2
-    vpcmpeqb   ymm5, ymm5, ymm5 // generate mask 0xff000000
+    vpcmpeqb   ymm5, ymm5, ymm5  // generate mask 0xff000000
     vpslld     ymm5, ymm5, 24
 
  convertloop:
-    vmovdqu    ymm6, [eax]       // read 8 pixels.
+    vmovdqu    ymm6, [eax]  // read 8 pixels.
     vpunpcklbw ymm0, ymm6, ymm6  // low 4 pixels. mutated.
     vpunpckhbw ymm1, ymm6, ymm6  // high 4 pixels. mutated.
     vpshufb    ymm2, ymm0, ymm4  // low 4 alphas
@@ -4398,40 +4346,40 @@
 
 #ifdef HAS_ARGBUNATTENUATEROW_SSE2
 // Unattenuate 4 pixels at a time.
-__declspec(naked)
-void ARGBUnattenuateRow_SSE2(const uint8* src_argb, uint8* dst_argb,
-                             int width) {
+__declspec(naked) void ARGBUnattenuateRow_SSE2(const uint8_t* src_argb,
+                                               uint8_t* dst_argb,
+                                               int width) {
   __asm {
     push       ebx
     push       esi
     push       edi
-    mov        eax, [esp + 12 + 4]   // src_argb
-    mov        edx, [esp + 12 + 8]   // dst_argb
+    mov        eax, [esp + 12 + 4]  // src_argb
+    mov        edx, [esp + 12 + 8]  // dst_argb
     mov        ecx, [esp + 12 + 12]  // width
     lea        ebx, fixed_invtbl8
 
  convertloop:
-    movdqu     xmm0, [eax]      // read 4 pixels
+    movdqu     xmm0, [eax]  // read 4 pixels
     movzx      esi, byte ptr [eax + 3]  // first alpha
     movzx      edi, byte ptr [eax + 7]  // second alpha
-    punpcklbw  xmm0, xmm0       // first 2
+    punpcklbw  xmm0, xmm0  // first 2
     movd       xmm2, dword ptr [ebx + esi * 4]
     movd       xmm3, dword ptr [ebx + edi * 4]
-    pshuflw    xmm2, xmm2, 040h // first 4 inv_alpha words.  1, a, a, a
-    pshuflw    xmm3, xmm3, 040h // next 4 inv_alpha words
+    pshuflw    xmm2, xmm2, 040h  // first 4 inv_alpha words.  1, a, a, a
+    pshuflw    xmm3, xmm3, 040h  // next 4 inv_alpha words
     movlhps    xmm2, xmm3
-    pmulhuw    xmm0, xmm2       // rgb * a
+    pmulhuw    xmm0, xmm2  // rgb * a
 
-    movdqu     xmm1, [eax]      // read 4 pixels
+    movdqu     xmm1, [eax]  // read 4 pixels
     movzx      esi, byte ptr [eax + 11]  // third alpha
     movzx      edi, byte ptr [eax + 15]  // forth alpha
-    punpckhbw  xmm1, xmm1       // next 2
+    punpckhbw  xmm1, xmm1  // next 2
     movd       xmm2, dword ptr [ebx + esi * 4]
     movd       xmm3, dword ptr [ebx + edi * 4]
-    pshuflw    xmm2, xmm2, 040h // first 4 inv_alpha words
-    pshuflw    xmm3, xmm3, 040h // next 4 inv_alpha words
+    pshuflw    xmm2, xmm2, 040h  // first 4 inv_alpha words
+    pshuflw    xmm3, xmm3, 040h  // next 4 inv_alpha words
     movlhps    xmm2, xmm3
-    pmulhuw    xmm1, xmm2       // rgb * a
+    pmulhuw    xmm1, xmm2  // rgb * a
     lea        eax, [eax + 16]
     packuswb   xmm0, xmm1
     movdqu     [edx], xmm0
@@ -4450,25 +4398,24 @@
 #ifdef HAS_ARGBUNATTENUATEROW_AVX2
 // Shuffle table duplicating alpha.
 static const uvec8 kUnattenShuffleAlpha_AVX2 = {
-  0u, 1u, 0u, 1u, 0u, 1u, 6u, 7u, 8u, 9u, 8u, 9u, 8u, 9u, 14u, 15u
-};
+    0u, 1u, 0u, 1u, 0u, 1u, 6u, 7u, 8u, 9u, 8u, 9u, 8u, 9u, 14u, 15u};
 // TODO(fbarchard): Enable USE_GATHER for future hardware if faster.
 // USE_GATHER is not on by default, due to being a slow instruction.
 #ifdef USE_GATHER
-__declspec(naked)
-void ARGBUnattenuateRow_AVX2(const uint8* src_argb, uint8* dst_argb,
-                             int width) {
+__declspec(naked) void ARGBUnattenuateRow_AVX2(const uint8_t* src_argb,
+                                               uint8_t* dst_argb,
+                                               int width) {
   __asm {
-    mov        eax, [esp + 4]   // src_argb0
-    mov        edx, [esp + 8]   // dst_argb
+    mov        eax, [esp + 4]  // src_argb0
+    mov        edx, [esp + 8]  // dst_argb
     mov        ecx, [esp + 12]  // width
     sub        edx, eax
     vbroadcastf128 ymm4, xmmword ptr kUnattenShuffleAlpha_AVX2
 
  convertloop:
-    vmovdqu    ymm6, [eax]       // read 8 pixels.
+    vmovdqu    ymm6, [eax]  // read 8 pixels.
     vpcmpeqb   ymm5, ymm5, ymm5  // generate mask 0xffffffff for gather.
-    vpsrld     ymm2, ymm6, 24    // alpha in low 8 bits.
+    vpsrld     ymm2, ymm6, 24  // alpha in low 8 bits.
     vpunpcklbw ymm0, ymm6, ymm6  // low 4 pixels. mutated.
     vpunpckhbw ymm1, ymm6, ymm6  // high 4 pixels. mutated.
     vpgatherdd ymm3, [ymm2 * 4 + fixed_invtbl8], ymm5  // ymm5 cleared.  1, a
@@ -4488,50 +4435,50 @@
     ret
   }
 }
-#else  // USE_GATHER
-__declspec(naked)
-void ARGBUnattenuateRow_AVX2(const uint8* src_argb, uint8* dst_argb,
-                             int width) {
+#else   // USE_GATHER
+__declspec(naked) void ARGBUnattenuateRow_AVX2(const uint8_t* src_argb,
+                                               uint8_t* dst_argb,
+                                               int width) {
   __asm {
 
     push       ebx
     push       esi
     push       edi
-    mov        eax, [esp + 12 + 4]   // src_argb
-    mov        edx, [esp + 12 + 8]   // dst_argb
+    mov        eax, [esp + 12 + 4]  // src_argb
+    mov        edx, [esp + 12 + 8]  // dst_argb
     mov        ecx, [esp + 12 + 12]  // width
     sub        edx, eax
     lea        ebx, fixed_invtbl8
     vbroadcastf128 ymm5, xmmword ptr kUnattenShuffleAlpha_AVX2
 
  convertloop:
-    // replace VPGATHER
-    movzx      esi, byte ptr [eax + 3]                 // alpha0
-    movzx      edi, byte ptr [eax + 7]                 // alpha1
+        // replace VPGATHER
+    movzx      esi, byte ptr [eax + 3]  // alpha0
+    movzx      edi, byte ptr [eax + 7]  // alpha1
     vmovd      xmm0, dword ptr [ebx + esi * 4]  // [1,a0]
     vmovd      xmm1, dword ptr [ebx + edi * 4]  // [1,a1]
-    movzx      esi, byte ptr [eax + 11]                // alpha2
-    movzx      edi, byte ptr [eax + 15]                // alpha3
-    vpunpckldq xmm6, xmm0, xmm1                        // [1,a1,1,a0]
+    movzx      esi, byte ptr [eax + 11]  // alpha2
+    movzx      edi, byte ptr [eax + 15]  // alpha3
+    vpunpckldq xmm6, xmm0, xmm1  // [1,a1,1,a0]
     vmovd      xmm2, dword ptr [ebx + esi * 4]  // [1,a2]
     vmovd      xmm3, dword ptr [ebx + edi * 4]  // [1,a3]
-    movzx      esi, byte ptr [eax + 19]                // alpha4
-    movzx      edi, byte ptr [eax + 23]                // alpha5
-    vpunpckldq xmm7, xmm2, xmm3                        // [1,a3,1,a2]
+    movzx      esi, byte ptr [eax + 19]  // alpha4
+    movzx      edi, byte ptr [eax + 23]  // alpha5
+    vpunpckldq xmm7, xmm2, xmm3  // [1,a3,1,a2]
     vmovd      xmm0, dword ptr [ebx + esi * 4]  // [1,a4]
     vmovd      xmm1, dword ptr [ebx + edi * 4]  // [1,a5]
-    movzx      esi, byte ptr [eax + 27]                // alpha6
-    movzx      edi, byte ptr [eax + 31]                // alpha7
-    vpunpckldq xmm0, xmm0, xmm1                        // [1,a5,1,a4]
+    movzx      esi, byte ptr [eax + 27]  // alpha6
+    movzx      edi, byte ptr [eax + 31]  // alpha7
+    vpunpckldq xmm0, xmm0, xmm1  // [1,a5,1,a4]
     vmovd      xmm2, dword ptr [ebx + esi * 4]  // [1,a6]
     vmovd      xmm3, dword ptr [ebx + edi * 4]  // [1,a7]
-    vpunpckldq xmm2, xmm2, xmm3                        // [1,a7,1,a6]
-    vpunpcklqdq xmm3, xmm6, xmm7                       // [1,a3,1,a2,1,a1,1,a0]
-    vpunpcklqdq xmm0, xmm0, xmm2                       // [1,a7,1,a6,1,a5,1,a4]
-    vinserti128 ymm3, ymm3, xmm0, 1 // [1,a7,1,a6,1,a5,1,a4,1,a3,1,a2,1,a1,1,a0]
+    vpunpckldq xmm2, xmm2, xmm3  // [1,a7,1,a6]
+    vpunpcklqdq xmm3, xmm6, xmm7  // [1,a3,1,a2,1,a1,1,a0]
+    vpunpcklqdq xmm0, xmm0, xmm2  // [1,a7,1,a6,1,a5,1,a4]
+    vinserti128 ymm3, ymm3, xmm0, 1                // [1,a7,1,a6,1,a5,1,a4,1,a3,1,a2,1,a1,1,a0]
     // end of VPGATHER
 
-    vmovdqu    ymm6, [eax]       // read 8 pixels.
+    vmovdqu    ymm6, [eax]  // read 8 pixels.
     vpunpcklbw ymm0, ymm6, ymm6  // low 4 pixels. mutated.
     vpunpckhbw ymm1, ymm6, ymm6  // high 4 pixels. mutated.
     vpunpcklwd ymm2, ymm3, ymm3  // low 4 inverted alphas. mutated. 1, 1, a, a
@@ -4540,7 +4487,7 @@
     vpshufb    ymm3, ymm3, ymm5  // replicate high 4 alphas
     vpmulhuw   ymm0, ymm0, ymm2  // rgb * ia
     vpmulhuw   ymm1, ymm1, ymm3  // rgb * ia
-    vpackuswb  ymm0, ymm0, ymm1  // unmutated.
+    vpackuswb  ymm0, ymm0, ymm1             // unmutated.
     vmovdqu    [eax + edx], ymm0
     lea        eax, [eax + 32]
     sub        ecx, 8
@@ -4558,12 +4505,13 @@
 
 #ifdef HAS_ARGBGRAYROW_SSSE3
 // Convert 8 ARGB pixels (64 bytes) to 8 Gray ARGB pixels.
-__declspec(naked)
-void ARGBGrayRow_SSSE3(const uint8* src_argb, uint8* dst_argb, int width) {
+__declspec(naked) void ARGBGrayRow_SSSE3(const uint8_t* src_argb,
+                                         uint8_t* dst_argb,
+                                         int width) {
   __asm {
-    mov        eax, [esp + 4]   /* src_argb */
-    mov        edx, [esp + 8]   /* dst_argb */
-    mov        ecx, [esp + 12]  /* width */
+    mov        eax, [esp + 4] /* src_argb */
+    mov        edx, [esp + 8] /* dst_argb */
+    mov        ecx, [esp + 12] /* width */
     movdqa     xmm4, xmmword ptr kARGBToYJ
     movdqa     xmm5, xmmword ptr kAddYJ64
 
@@ -4575,20 +4523,20 @@
     phaddw     xmm0, xmm1
     paddw      xmm0, xmm5  // Add .5 for rounding.
     psrlw      xmm0, 7
-    packuswb   xmm0, xmm0   // 8 G bytes
+    packuswb   xmm0, xmm0  // 8 G bytes
     movdqu     xmm2, [eax]  // A
     movdqu     xmm3, [eax + 16]
     lea        eax, [eax + 32]
     psrld      xmm2, 24
     psrld      xmm3, 24
     packuswb   xmm2, xmm3
-    packuswb   xmm2, xmm2   // 8 A bytes
-    movdqa     xmm3, xmm0   // Weave into GG, GA, then GGGA
-    punpcklbw  xmm0, xmm0   // 8 GG words
-    punpcklbw  xmm3, xmm2   // 8 GA words
+    packuswb   xmm2, xmm2  // 8 A bytes
+    movdqa     xmm3, xmm0  // Weave into GG, GA, then GGGA
+    punpcklbw  xmm0, xmm0  // 8 GG words
+    punpcklbw  xmm3, xmm2  // 8 GA words
     movdqa     xmm1, xmm0
-    punpcklwd  xmm0, xmm3   // GGGA first 4
-    punpckhwd  xmm1, xmm3   // GGGA next 4
+    punpcklwd  xmm0, xmm3  // GGGA first 4
+    punpckhwd  xmm1, xmm3  // GGGA next 4
     movdqu     [edx], xmm0
     movdqu     [edx + 16], xmm1
     lea        edx, [edx + 32]
@@ -4604,24 +4552,20 @@
 //    g = (r * 45 + g * 88 + b * 22) >> 7
 //    r = (r * 50 + g * 98 + b * 24) >> 7
 // Constant for ARGB color to sepia tone.
-static const vec8 kARGBToSepiaB = {
-  17, 68, 35, 0, 17, 68, 35, 0, 17, 68, 35, 0, 17, 68, 35, 0
-};
+static const vec8 kARGBToSepiaB = {17, 68, 35, 0, 17, 68, 35, 0,
+                                   17, 68, 35, 0, 17, 68, 35, 0};
 
-static const vec8 kARGBToSepiaG = {
-  22, 88, 45, 0, 22, 88, 45, 0, 22, 88, 45, 0, 22, 88, 45, 0
-};
+static const vec8 kARGBToSepiaG = {22, 88, 45, 0, 22, 88, 45, 0,
+                                   22, 88, 45, 0, 22, 88, 45, 0};
 
-static const vec8 kARGBToSepiaR = {
-  24, 98, 50, 0, 24, 98, 50, 0, 24, 98, 50, 0, 24, 98, 50, 0
-};
+static const vec8 kARGBToSepiaR = {24, 98, 50, 0, 24, 98, 50, 0,
+                                   24, 98, 50, 0, 24, 98, 50, 0};
 
 // Convert 8 ARGB pixels (32 bytes) to 8 Sepia ARGB pixels.
-__declspec(naked)
-void ARGBSepiaRow_SSSE3(uint8* dst_argb, int width) {
+__declspec(naked) void ARGBSepiaRow_SSSE3(uint8_t* dst_argb, int width) {
   __asm {
-    mov        eax, [esp + 4]   /* dst_argb */
-    mov        ecx, [esp + 8]   /* width */
+    mov        eax, [esp + 4] /* dst_argb */
+    mov        ecx, [esp + 8] /* width */
     movdqa     xmm2, xmmword ptr kARGBToSepiaB
     movdqa     xmm3, xmmword ptr kARGBToSepiaG
     movdqa     xmm4, xmmword ptr kARGBToSepiaR
@@ -4633,32 +4577,32 @@
     pmaddubsw  xmm6, xmm2
     phaddw     xmm0, xmm6
     psrlw      xmm0, 7
-    packuswb   xmm0, xmm0   // 8 B values
+    packuswb   xmm0, xmm0  // 8 B values
     movdqu     xmm5, [eax]  // G
     movdqu     xmm1, [eax + 16]
     pmaddubsw  xmm5, xmm3
     pmaddubsw  xmm1, xmm3
     phaddw     xmm5, xmm1
     psrlw      xmm5, 7
-    packuswb   xmm5, xmm5   // 8 G values
-    punpcklbw  xmm0, xmm5   // 8 BG values
+    packuswb   xmm5, xmm5  // 8 G values
+    punpcklbw  xmm0, xmm5  // 8 BG values
     movdqu     xmm5, [eax]  // R
     movdqu     xmm1, [eax + 16]
     pmaddubsw  xmm5, xmm4
     pmaddubsw  xmm1, xmm4
     phaddw     xmm5, xmm1
     psrlw      xmm5, 7
-    packuswb   xmm5, xmm5   // 8 R values
+    packuswb   xmm5, xmm5  // 8 R values
     movdqu     xmm6, [eax]  // A
     movdqu     xmm1, [eax + 16]
     psrld      xmm6, 24
     psrld      xmm1, 24
     packuswb   xmm6, xmm1
-    packuswb   xmm6, xmm6   // 8 A values
-    punpcklbw  xmm5, xmm6   // 8 RA values
-    movdqa     xmm1, xmm0   // Weave BG, RA together
-    punpcklwd  xmm0, xmm5   // BGRA first 4
-    punpckhwd  xmm1, xmm5   // BGRA next 4
+    packuswb   xmm6, xmm6  // 8 A values
+    punpcklbw  xmm5, xmm6  // 8 RA values
+    movdqa     xmm1, xmm0  // Weave BG, RA together
+    punpcklwd  xmm0, xmm5  // BGRA first 4
+    punpckhwd  xmm1, xmm5  // BGRA next 4
     movdqu     [eax], xmm0
     movdqu     [eax + 16], xmm1
     lea        eax, [eax + 32]
@@ -4674,19 +4618,20 @@
 // Same as Sepia except matrix is provided.
 // TODO(fbarchard): packuswbs only use half of the reg. To make RGBA, combine R
 // and B into a high and low, then G/A, unpackl/hbw and then unpckl/hwd.
-__declspec(naked)
-void ARGBColorMatrixRow_SSSE3(const uint8* src_argb, uint8* dst_argb,
-                              const int8* matrix_argb, int width) {
+__declspec(naked) void ARGBColorMatrixRow_SSSE3(const uint8_t* src_argb,
+                                                uint8_t* dst_argb,
+                                                const int8_t* matrix_argb,
+                                                int width) {
   __asm {
-    mov        eax, [esp + 4]   /* src_argb */
-    mov        edx, [esp + 8]   /* dst_argb */
-    mov        ecx, [esp + 12]  /* matrix_argb */
+    mov        eax, [esp + 4] /* src_argb */
+    mov        edx, [esp + 8] /* dst_argb */
+    mov        ecx, [esp + 12] /* matrix_argb */
     movdqu     xmm5, [ecx]
     pshufd     xmm2, xmm5, 0x00
     pshufd     xmm3, xmm5, 0x55
     pshufd     xmm4, xmm5, 0xaa
     pshufd     xmm5, xmm5, 0xff
-    mov        ecx, [esp + 16]  /* width */
+    mov        ecx, [esp + 16] /* width */
 
  convertloop:
     movdqu     xmm0, [eax]  // B
@@ -4697,31 +4642,31 @@
     movdqu     xmm1, [eax + 16]
     pmaddubsw  xmm6, xmm3
     pmaddubsw  xmm1, xmm3
-    phaddsw    xmm0, xmm7   // B
-    phaddsw    xmm6, xmm1   // G
-    psraw      xmm0, 6      // B
-    psraw      xmm6, 6      // G
-    packuswb   xmm0, xmm0   // 8 B values
-    packuswb   xmm6, xmm6   // 8 G values
-    punpcklbw  xmm0, xmm6   // 8 BG values
+    phaddsw    xmm0, xmm7  // B
+    phaddsw    xmm6, xmm1  // G
+    psraw      xmm0, 6  // B
+    psraw      xmm6, 6  // G
+    packuswb   xmm0, xmm0  // 8 B values
+    packuswb   xmm6, xmm6  // 8 G values
+    punpcklbw  xmm0, xmm6  // 8 BG values
     movdqu     xmm1, [eax]  // R
     movdqu     xmm7, [eax + 16]
     pmaddubsw  xmm1, xmm4
     pmaddubsw  xmm7, xmm4
-    phaddsw    xmm1, xmm7   // R
+    phaddsw    xmm1, xmm7  // R
     movdqu     xmm6, [eax]  // A
     movdqu     xmm7, [eax + 16]
     pmaddubsw  xmm6, xmm5
     pmaddubsw  xmm7, xmm5
-    phaddsw    xmm6, xmm7   // A
-    psraw      xmm1, 6      // R
-    psraw      xmm6, 6      // A
-    packuswb   xmm1, xmm1   // 8 R values
-    packuswb   xmm6, xmm6   // 8 A values
-    punpcklbw  xmm1, xmm6   // 8 RA values
-    movdqa     xmm6, xmm0   // Weave BG, RA together
-    punpcklwd  xmm0, xmm1   // BGRA first 4
-    punpckhwd  xmm6, xmm1   // BGRA next 4
+    phaddsw    xmm6, xmm7  // A
+    psraw      xmm1, 6  // R
+    psraw      xmm6, 6  // A
+    packuswb   xmm1, xmm1  // 8 R values
+    packuswb   xmm6, xmm6  // 8 A values
+    punpcklbw  xmm1, xmm6  // 8 RA values
+    movdqa     xmm6, xmm0  // Weave BG, RA together
+    punpcklwd  xmm0, xmm1  // BGRA first 4
+    punpckhwd  xmm6, xmm1  // BGRA next 4
     movdqu     [edx], xmm0
     movdqu     [edx + 16], xmm6
     lea        eax, [eax + 32]
@@ -4735,15 +4680,17 @@
 
 #ifdef HAS_ARGBQUANTIZEROW_SSE2
 // Quantize 4 ARGB pixels (16 bytes).
-__declspec(naked)
-void ARGBQuantizeRow_SSE2(uint8* dst_argb, int scale, int interval_size,
-                          int interval_offset, int width) {
+__declspec(naked) void ARGBQuantizeRow_SSE2(uint8_t* dst_argb,
+                                            int scale,
+                                            int interval_size,
+                                            int interval_offset,
+                                            int width) {
   __asm {
-    mov        eax, [esp + 4]    /* dst_argb */
-    movd       xmm2, [esp + 8]   /* scale */
-    movd       xmm3, [esp + 12]  /* interval_size */
-    movd       xmm4, [esp + 16]  /* interval_offset */
-    mov        ecx, [esp + 20]   /* width */
+    mov        eax, [esp + 4] /* dst_argb */
+    movd       xmm2, [esp + 8] /* scale */
+    movd       xmm3, [esp + 12] /* interval_size */
+    movd       xmm4, [esp + 16] /* interval_offset */
+    mov        ecx, [esp + 20] /* width */
     pshuflw    xmm2, xmm2, 040h
     pshufd     xmm2, xmm2, 044h
     pshuflw    xmm3, xmm3, 040h
@@ -4756,16 +4703,16 @@
 
  convertloop:
     movdqu     xmm0, [eax]  // read 4 pixels
-    punpcklbw  xmm0, xmm5   // first 2 pixels
-    pmulhuw    xmm0, xmm2   // pixel * scale >> 16
+    punpcklbw  xmm0, xmm5  // first 2 pixels
+    pmulhuw    xmm0, xmm2  // pixel * scale >> 16
     movdqu     xmm1, [eax]  // read 4 pixels
-    punpckhbw  xmm1, xmm5   // next 2 pixels
+    punpckhbw  xmm1, xmm5  // next 2 pixels
     pmulhuw    xmm1, xmm2
-    pmullw     xmm0, xmm3   // * interval_size
+    pmullw     xmm0, xmm3  // * interval_size
     movdqu     xmm7, [eax]  // read 4 pixels
     pmullw     xmm1, xmm3
-    pand       xmm7, xmm6   // mask alpha
-    paddw      xmm0, xmm4   // + interval_size / 2
+    pand       xmm7, xmm6  // mask alpha
+    paddw      xmm0, xmm4  // + interval_size / 2
     paddw      xmm1, xmm4
     packuswb   xmm0, xmm1
     por        xmm0, xmm7
@@ -4780,25 +4727,26 @@
 
 #ifdef HAS_ARGBSHADEROW_SSE2
 // Shade 4 pixels at a time by specified value.
-__declspec(naked)
-void ARGBShadeRow_SSE2(const uint8* src_argb, uint8* dst_argb, int width,
-                       uint32 value) {
+__declspec(naked) void ARGBShadeRow_SSE2(const uint8_t* src_argb,
+                                         uint8_t* dst_argb,
+                                         int width,
+                                         uint32_t value) {
   __asm {
-    mov        eax, [esp + 4]   // src_argb
-    mov        edx, [esp + 8]   // dst_argb
+    mov        eax, [esp + 4]  // src_argb
+    mov        edx, [esp + 8]  // dst_argb
     mov        ecx, [esp + 12]  // width
     movd       xmm2, [esp + 16]  // value
     punpcklbw  xmm2, xmm2
     punpcklqdq xmm2, xmm2
 
  convertloop:
-    movdqu     xmm0, [eax]      // read 4 pixels
+    movdqu     xmm0, [eax]  // read 4 pixels
     lea        eax, [eax + 16]
     movdqa     xmm1, xmm0
-    punpcklbw  xmm0, xmm0       // first 2
-    punpckhbw  xmm1, xmm1       // next 2
-    pmulhuw    xmm0, xmm2       // argb * value
-    pmulhuw    xmm1, xmm2       // argb * value
+    punpcklbw  xmm0, xmm0  // first 2
+    punpckhbw  xmm1, xmm1  // next 2
+    pmulhuw    xmm0, xmm2  // argb * value
+    pmulhuw    xmm1, xmm2  // argb * value
     psrlw      xmm0, 8
     psrlw      xmm1, 8
     packuswb   xmm0, xmm1
@@ -4814,28 +4762,29 @@
 
 #ifdef HAS_ARGBMULTIPLYROW_SSE2
 // Multiply 2 rows of ARGB pixels together, 4 pixels at a time.
-__declspec(naked)
-void ARGBMultiplyRow_SSE2(const uint8* src_argb0, const uint8* src_argb1,
-                          uint8* dst_argb, int width) {
+__declspec(naked) void ARGBMultiplyRow_SSE2(const uint8_t* src_argb0,
+                                            const uint8_t* src_argb1,
+                                            uint8_t* dst_argb,
+                                            int width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]   // src_argb0
-    mov        esi, [esp + 4 + 8]   // src_argb1
+    mov        eax, [esp + 4 + 4]  // src_argb0
+    mov        esi, [esp + 4 + 8]  // src_argb1
     mov        edx, [esp + 4 + 12]  // dst_argb
     mov        ecx, [esp + 4 + 16]  // width
     pxor       xmm5, xmm5  // constant 0
 
  convertloop:
-    movdqu     xmm0, [eax]        // read 4 pixels from src_argb0
-    movdqu     xmm2, [esi]        // read 4 pixels from src_argb1
+    movdqu     xmm0, [eax]  // read 4 pixels from src_argb0
+    movdqu     xmm2, [esi]  // read 4 pixels from src_argb1
     movdqu     xmm1, xmm0
     movdqu     xmm3, xmm2
-    punpcklbw  xmm0, xmm0         // first 2
-    punpckhbw  xmm1, xmm1         // next 2
-    punpcklbw  xmm2, xmm5         // first 2
-    punpckhbw  xmm3, xmm5         // next 2
-    pmulhuw    xmm0, xmm2         // src_argb0 * src_argb1 first 2
-    pmulhuw    xmm1, xmm3         // src_argb0 * src_argb1 next 2
+    punpcklbw  xmm0, xmm0  // first 2
+    punpckhbw  xmm1, xmm1  // next 2
+    punpcklbw  xmm2, xmm5  // first 2
+    punpckhbw  xmm3, xmm5  // next 2
+    pmulhuw    xmm0, xmm2  // src_argb0 * src_argb1 first 2
+    pmulhuw    xmm1, xmm3  // src_argb0 * src_argb1 next 2
     lea        eax, [eax + 16]
     lea        esi, [esi + 16]
     packuswb   xmm0, xmm1
@@ -4853,13 +4802,14 @@
 #ifdef HAS_ARGBADDROW_SSE2
 // Add 2 rows of ARGB pixels together, 4 pixels at a time.
 // TODO(fbarchard): Port this to posix, neon and other math functions.
-__declspec(naked)
-void ARGBAddRow_SSE2(const uint8* src_argb0, const uint8* src_argb1,
-                     uint8* dst_argb, int width) {
+__declspec(naked) void ARGBAddRow_SSE2(const uint8_t* src_argb0,
+                                       const uint8_t* src_argb1,
+                                       uint8_t* dst_argb,
+                                       int width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]   // src_argb0
-    mov        esi, [esp + 4 + 8]   // src_argb1
+    mov        eax, [esp + 4 + 4]  // src_argb0
+    mov        esi, [esp + 4 + 8]  // src_argb1
     mov        edx, [esp + 4 + 12]  // dst_argb
     mov        ecx, [esp + 4 + 16]  // width
 
@@ -4867,11 +4817,11 @@
     jl         convertloop49
 
  convertloop4:
-    movdqu     xmm0, [eax]        // read 4 pixels from src_argb0
+    movdqu     xmm0, [eax]  // read 4 pixels from src_argb0
     lea        eax, [eax + 16]
-    movdqu     xmm1, [esi]        // read 4 pixels from src_argb1
+    movdqu     xmm1, [esi]  // read 4 pixels from src_argb1
     lea        esi, [esi + 16]
-    paddusb    xmm0, xmm1         // src_argb0 + src_argb1
+    paddusb    xmm0, xmm1  // src_argb0 + src_argb1
     movdqu     [edx], xmm0
     lea        edx, [edx + 16]
     sub        ecx, 4
@@ -4882,11 +4832,11 @@
     jl         convertloop19
 
  convertloop1:
-    movd       xmm0, [eax]        // read 1 pixels from src_argb0
+    movd       xmm0, [eax]  // read 1 pixels from src_argb0
     lea        eax, [eax + 4]
-    movd       xmm1, [esi]        // read 1 pixels from src_argb1
+    movd       xmm1, [esi]  // read 1 pixels from src_argb1
     lea        esi, [esi + 4]
-    paddusb    xmm0, xmm1         // src_argb0 + src_argb1
+    paddusb    xmm0, xmm1  // src_argb0 + src_argb1
     movd       [edx], xmm0
     lea        edx, [edx + 4]
     sub        ecx, 1
@@ -4901,22 +4851,23 @@
 
 #ifdef HAS_ARGBSUBTRACTROW_SSE2
 // Subtract 2 rows of ARGB pixels together, 4 pixels at a time.
-__declspec(naked)
-void ARGBSubtractRow_SSE2(const uint8* src_argb0, const uint8* src_argb1,
-                          uint8* dst_argb, int width) {
+__declspec(naked) void ARGBSubtractRow_SSE2(const uint8_t* src_argb0,
+                                            const uint8_t* src_argb1,
+                                            uint8_t* dst_argb,
+                                            int width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]   // src_argb0
-    mov        esi, [esp + 4 + 8]   // src_argb1
+    mov        eax, [esp + 4 + 4]  // src_argb0
+    mov        esi, [esp + 4 + 8]  // src_argb1
     mov        edx, [esp + 4 + 12]  // dst_argb
     mov        ecx, [esp + 4 + 16]  // width
 
  convertloop:
-    movdqu     xmm0, [eax]        // read 4 pixels from src_argb0
+    movdqu     xmm0, [eax]  // read 4 pixels from src_argb0
     lea        eax, [eax + 16]
-    movdqu     xmm1, [esi]        // read 4 pixels from src_argb1
+    movdqu     xmm1, [esi]  // read 4 pixels from src_argb1
     lea        esi, [esi + 16]
-    psubusb    xmm0, xmm1         // src_argb0 - src_argb1
+    psubusb    xmm0, xmm1  // src_argb0 - src_argb1
     movdqu     [edx], xmm0
     lea        edx, [edx + 16]
     sub        ecx, 4
@@ -4930,28 +4881,29 @@
 
 #ifdef HAS_ARGBMULTIPLYROW_AVX2
 // Multiply 2 rows of ARGB pixels together, 8 pixels at a time.
-__declspec(naked)
-void ARGBMultiplyRow_AVX2(const uint8* src_argb0, const uint8* src_argb1,
-                          uint8* dst_argb, int width) {
+__declspec(naked) void ARGBMultiplyRow_AVX2(const uint8_t* src_argb0,
+                                            const uint8_t* src_argb1,
+                                            uint8_t* dst_argb,
+                                            int width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]   // src_argb0
-    mov        esi, [esp + 4 + 8]   // src_argb1
+    mov        eax, [esp + 4 + 4]  // src_argb0
+    mov        esi, [esp + 4 + 8]  // src_argb1
     mov        edx, [esp + 4 + 12]  // dst_argb
     mov        ecx, [esp + 4 + 16]  // width
-    vpxor      ymm5, ymm5, ymm5     // constant 0
+    vpxor      ymm5, ymm5, ymm5  // constant 0
 
  convertloop:
-    vmovdqu    ymm1, [eax]        // read 8 pixels from src_argb0
+    vmovdqu    ymm1, [eax]  // read 8 pixels from src_argb0
     lea        eax, [eax + 32]
-    vmovdqu    ymm3, [esi]        // read 8 pixels from src_argb1
+    vmovdqu    ymm3, [esi]  // read 8 pixels from src_argb1
     lea        esi, [esi + 32]
-    vpunpcklbw ymm0, ymm1, ymm1   // low 4
-    vpunpckhbw ymm1, ymm1, ymm1   // high 4
-    vpunpcklbw ymm2, ymm3, ymm5   // low 4
-    vpunpckhbw ymm3, ymm3, ymm5   // high 4
-    vpmulhuw   ymm0, ymm0, ymm2   // src_argb0 * src_argb1 low 4
-    vpmulhuw   ymm1, ymm1, ymm3   // src_argb0 * src_argb1 high 4
+    vpunpcklbw ymm0, ymm1, ymm1  // low 4
+    vpunpckhbw ymm1, ymm1, ymm1  // high 4
+    vpunpcklbw ymm2, ymm3, ymm5  // low 4
+    vpunpckhbw ymm3, ymm3, ymm5  // high 4
+    vpmulhuw   ymm0, ymm0, ymm2  // src_argb0 * src_argb1 low 4
+    vpmulhuw   ymm1, ymm1, ymm3  // src_argb0 * src_argb1 high 4
     vpackuswb  ymm0, ymm0, ymm1
     vmovdqu    [edx], ymm0
     lea        edx, [edx + 32]
@@ -4967,20 +4919,21 @@
 
 #ifdef HAS_ARGBADDROW_AVX2
 // Add 2 rows of ARGB pixels together, 8 pixels at a time.
-__declspec(naked)
-void ARGBAddRow_AVX2(const uint8* src_argb0, const uint8* src_argb1,
-                     uint8* dst_argb, int width) {
+__declspec(naked) void ARGBAddRow_AVX2(const uint8_t* src_argb0,
+                                       const uint8_t* src_argb1,
+                                       uint8_t* dst_argb,
+                                       int width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]   // src_argb0
-    mov        esi, [esp + 4 + 8]   // src_argb1
+    mov        eax, [esp + 4 + 4]  // src_argb0
+    mov        esi, [esp + 4 + 8]  // src_argb1
     mov        edx, [esp + 4 + 12]  // dst_argb
     mov        ecx, [esp + 4 + 16]  // width
 
  convertloop:
-    vmovdqu    ymm0, [eax]              // read 8 pixels from src_argb0
+    vmovdqu    ymm0, [eax]  // read 8 pixels from src_argb0
     lea        eax, [eax + 32]
-    vpaddusb   ymm0, ymm0, [esi]        // add 8 pixels from src_argb1
+    vpaddusb   ymm0, ymm0, [esi]  // add 8 pixels from src_argb1
     lea        esi, [esi + 32]
     vmovdqu    [edx], ymm0
     lea        edx, [edx + 32]
@@ -4996,20 +4949,21 @@
 
 #ifdef HAS_ARGBSUBTRACTROW_AVX2
 // Subtract 2 rows of ARGB pixels together, 8 pixels at a time.
-__declspec(naked)
-void ARGBSubtractRow_AVX2(const uint8* src_argb0, const uint8* src_argb1,
-                          uint8* dst_argb, int width) {
+__declspec(naked) void ARGBSubtractRow_AVX2(const uint8_t* src_argb0,
+                                            const uint8_t* src_argb1,
+                                            uint8_t* dst_argb,
+                                            int width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]   // src_argb0
-    mov        esi, [esp + 4 + 8]   // src_argb1
+    mov        eax, [esp + 4 + 4]  // src_argb0
+    mov        esi, [esp + 4 + 8]  // src_argb1
     mov        edx, [esp + 4 + 12]  // dst_argb
     mov        ecx, [esp + 4 + 16]  // width
 
  convertloop:
-    vmovdqu    ymm0, [eax]              // read 8 pixels from src_argb0
+    vmovdqu    ymm0, [eax]  // read 8 pixels from src_argb0
     lea        eax, [eax + 32]
-    vpsubusb   ymm0, ymm0, [esi]        // src_argb0 - src_argb1
+    vpsubusb   ymm0, ymm0, [esi]  // src_argb0 - src_argb1
     lea        esi, [esi + 32]
     vmovdqu    [edx], ymm0
     lea        edx, [edx + 32]
@@ -5028,14 +4982,16 @@
 // -1  0  1
 // -2  0  2
 // -1  0  1
-__declspec(naked)
-void SobelXRow_SSE2(const uint8* src_y0, const uint8* src_y1,
-                    const uint8* src_y2, uint8* dst_sobelx, int width) {
+__declspec(naked) void SobelXRow_SSE2(const uint8_t* src_y0,
+                                      const uint8_t* src_y1,
+                                      const uint8_t* src_y2,
+                                      uint8_t* dst_sobelx,
+                                      int width) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]   // src_y0
-    mov        esi, [esp + 8 + 8]   // src_y1
+    mov        eax, [esp + 8 + 4]  // src_y0
+    mov        esi, [esp + 8 + 8]  // src_y1
     mov        edi, [esp + 8 + 12]  // src_y2
     mov        edx, [esp + 8 + 16]  // dst_sobelx
     mov        ecx, [esp + 8 + 20]  // width
@@ -5045,17 +5001,17 @@
     pxor       xmm5, xmm5  // constant 0
 
  convertloop:
-    movq       xmm0, qword ptr [eax]            // read 8 pixels from src_y0[0]
-    movq       xmm1, qword ptr [eax + 2]        // read 8 pixels from src_y0[2]
+    movq       xmm0, qword ptr [eax]  // read 8 pixels from src_y0[0]
+    movq       xmm1, qword ptr [eax + 2]  // read 8 pixels from src_y0[2]
     punpcklbw  xmm0, xmm5
     punpcklbw  xmm1, xmm5
     psubw      xmm0, xmm1
-    movq       xmm1, qword ptr [eax + esi]      // read 8 pixels from src_y1[0]
+    movq       xmm1, qword ptr [eax + esi]  // read 8 pixels from src_y1[0]
     movq       xmm2, qword ptr [eax + esi + 2]  // read 8 pixels from src_y1[2]
     punpcklbw  xmm1, xmm5
     punpcklbw  xmm2, xmm5
     psubw      xmm1, xmm2
-    movq       xmm2, qword ptr [eax + edi]      // read 8 pixels from src_y2[0]
+    movq       xmm2, qword ptr [eax + edi]  // read 8 pixels from src_y2[0]
     movq       xmm3, qword ptr [eax + edi + 2]  // read 8 pixels from src_y2[2]
     punpcklbw  xmm2, xmm5
     punpcklbw  xmm3, xmm5
@@ -5063,7 +5019,7 @@
     paddw      xmm0, xmm2
     paddw      xmm0, xmm1
     paddw      xmm0, xmm1
-    pxor       xmm1, xmm1   // abs = max(xmm0, -xmm0).  SSSE3 could use pabsw
+    pxor       xmm1, xmm1  // abs = max(xmm0, -xmm0).  SSSE3 could use pabsw
     psubw      xmm1, xmm0
     pmaxsw     xmm0, xmm1
     packuswb   xmm0, xmm0
@@ -5084,13 +5040,14 @@
 // -1 -2 -1
 //  0  0  0
 //  1  2  1
-__declspec(naked)
-void SobelYRow_SSE2(const uint8* src_y0, const uint8* src_y1,
-                    uint8* dst_sobely, int width) {
+__declspec(naked) void SobelYRow_SSE2(const uint8_t* src_y0,
+                                      const uint8_t* src_y1,
+                                      uint8_t* dst_sobely,
+                                      int width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]   // src_y0
-    mov        esi, [esp + 4 + 8]   // src_y1
+    mov        eax, [esp + 4 + 4]  // src_y0
+    mov        esi, [esp + 4 + 8]  // src_y1
     mov        edx, [esp + 4 + 12]  // dst_sobely
     mov        ecx, [esp + 4 + 16]  // width
     sub        esi, eax
@@ -5098,17 +5055,17 @@
     pxor       xmm5, xmm5  // constant 0
 
  convertloop:
-    movq       xmm0, qword ptr [eax]            // read 8 pixels from src_y0[0]
-    movq       xmm1, qword ptr [eax + esi]      // read 8 pixels from src_y1[0]
+    movq       xmm0, qword ptr [eax]  // read 8 pixels from src_y0[0]
+    movq       xmm1, qword ptr [eax + esi]  // read 8 pixels from src_y1[0]
     punpcklbw  xmm0, xmm5
     punpcklbw  xmm1, xmm5
     psubw      xmm0, xmm1
-    movq       xmm1, qword ptr [eax + 1]        // read 8 pixels from src_y0[1]
+    movq       xmm1, qword ptr [eax + 1]  // read 8 pixels from src_y0[1]
     movq       xmm2, qword ptr [eax + esi + 1]  // read 8 pixels from src_y1[1]
     punpcklbw  xmm1, xmm5
     punpcklbw  xmm2, xmm5
     psubw      xmm1, xmm2
-    movq       xmm2, qword ptr [eax + 2]        // read 8 pixels from src_y0[2]
+    movq       xmm2, qword ptr [eax + 2]  // read 8 pixels from src_y0[2]
     movq       xmm3, qword ptr [eax + esi + 2]  // read 8 pixels from src_y1[2]
     punpcklbw  xmm2, xmm5
     punpcklbw  xmm3, xmm5
@@ -5116,7 +5073,7 @@
     paddw      xmm0, xmm2
     paddw      xmm0, xmm1
     paddw      xmm0, xmm1
-    pxor       xmm1, xmm1   // abs = max(xmm0, -xmm0).  SSSE3 could use pabsw
+    pxor       xmm1, xmm1  // abs = max(xmm0, -xmm0).  SSSE3 could use pabsw
     psubw      xmm1, xmm0
     pmaxsw     xmm0, xmm1
     packuswb   xmm0, xmm0
@@ -5137,36 +5094,37 @@
 // R = Sobel
 // G = Sobel
 // B = Sobel
-__declspec(naked)
-void SobelRow_SSE2(const uint8* src_sobelx, const uint8* src_sobely,
-                   uint8* dst_argb, int width) {
+__declspec(naked) void SobelRow_SSE2(const uint8_t* src_sobelx,
+                                     const uint8_t* src_sobely,
+                                     uint8_t* dst_argb,
+                                     int width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]   // src_sobelx
-    mov        esi, [esp + 4 + 8]   // src_sobely
+    mov        eax, [esp + 4 + 4]  // src_sobelx
+    mov        esi, [esp + 4 + 8]  // src_sobely
     mov        edx, [esp + 4 + 12]  // dst_argb
     mov        ecx, [esp + 4 + 16]  // width
     sub        esi, eax
-    pcmpeqb    xmm5, xmm5           // alpha 255
-    pslld      xmm5, 24             // 0xff000000
+    pcmpeqb    xmm5, xmm5  // alpha 255
+    pslld      xmm5, 24  // 0xff000000
 
  convertloop:
-    movdqu     xmm0, [eax]            // read 16 pixels src_sobelx
-    movdqu     xmm1, [eax + esi]      // read 16 pixels src_sobely
+    movdqu     xmm0, [eax]  // read 16 pixels src_sobelx
+    movdqu     xmm1, [eax + esi]  // read 16 pixels src_sobely
     lea        eax, [eax + 16]
-    paddusb    xmm0, xmm1             // sobel = sobelx + sobely
-    movdqa     xmm2, xmm0             // GG
-    punpcklbw  xmm2, xmm0             // First 8
-    punpckhbw  xmm0, xmm0             // Next 8
-    movdqa     xmm1, xmm2             // GGGG
-    punpcklwd  xmm1, xmm2             // First 4
-    punpckhwd  xmm2, xmm2             // Next 4
-    por        xmm1, xmm5             // GGGA
+    paddusb    xmm0, xmm1  // sobel = sobelx + sobely
+    movdqa     xmm2, xmm0  // GG
+    punpcklbw  xmm2, xmm0  // First 8
+    punpckhbw  xmm0, xmm0  // Next 8
+    movdqa     xmm1, xmm2  // GGGG
+    punpcklwd  xmm1, xmm2  // First 4
+    punpckhwd  xmm2, xmm2  // Next 4
+    por        xmm1, xmm5  // GGGA
     por        xmm2, xmm5
-    movdqa     xmm3, xmm0             // GGGG
-    punpcklwd  xmm3, xmm0             // Next 4
-    punpckhwd  xmm0, xmm0             // Last 4
-    por        xmm3, xmm5             // GGGA
+    movdqa     xmm3, xmm0  // GGGG
+    punpcklwd  xmm3, xmm0  // Next 4
+    punpckhwd  xmm0, xmm0  // Last 4
+    por        xmm3, xmm5  // GGGA
     por        xmm0, xmm5
     movdqu     [edx], xmm1
     movdqu     [edx + 16], xmm2
@@ -5184,22 +5142,23 @@
 
 #ifdef HAS_SOBELTOPLANEROW_SSE2
 // Adds Sobel X and Sobel Y and stores Sobel into a plane.
-__declspec(naked)
-void SobelToPlaneRow_SSE2(const uint8* src_sobelx, const uint8* src_sobely,
-                          uint8* dst_y, int width) {
+__declspec(naked) void SobelToPlaneRow_SSE2(const uint8_t* src_sobelx,
+                                            const uint8_t* src_sobely,
+                                            uint8_t* dst_y,
+                                            int width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]   // src_sobelx
-    mov        esi, [esp + 4 + 8]   // src_sobely
+    mov        eax, [esp + 4 + 4]  // src_sobelx
+    mov        esi, [esp + 4 + 8]  // src_sobely
     mov        edx, [esp + 4 + 12]  // dst_argb
     mov        ecx, [esp + 4 + 16]  // width
     sub        esi, eax
 
  convertloop:
-    movdqu     xmm0, [eax]            // read 16 pixels src_sobelx
-    movdqu     xmm1, [eax + esi]      // read 16 pixels src_sobely
+    movdqu     xmm0, [eax]  // read 16 pixels src_sobelx
+    movdqu     xmm1, [eax + esi]  // read 16 pixels src_sobely
     lea        eax, [eax + 16]
-    paddusb    xmm0, xmm1             // sobel = sobelx + sobely
+    paddusb    xmm0, xmm1  // sobel = sobelx + sobely
     movdqu     [edx], xmm0
     lea        edx, [edx + 16]
     sub        ecx, 16
@@ -5217,36 +5176,37 @@
 // R = Sobel X
 // G = Sobel
 // B = Sobel Y
-__declspec(naked)
-void SobelXYRow_SSE2(const uint8* src_sobelx, const uint8* src_sobely,
-                     uint8* dst_argb, int width) {
+__declspec(naked) void SobelXYRow_SSE2(const uint8_t* src_sobelx,
+                                       const uint8_t* src_sobely,
+                                       uint8_t* dst_argb,
+                                       int width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]   // src_sobelx
-    mov        esi, [esp + 4 + 8]   // src_sobely
+    mov        eax, [esp + 4 + 4]  // src_sobelx
+    mov        esi, [esp + 4 + 8]  // src_sobely
     mov        edx, [esp + 4 + 12]  // dst_argb
     mov        ecx, [esp + 4 + 16]  // width
     sub        esi, eax
-    pcmpeqb    xmm5, xmm5           // alpha 255
+    pcmpeqb    xmm5, xmm5  // alpha 255
 
  convertloop:
-    movdqu     xmm0, [eax]            // read 16 pixels src_sobelx
-    movdqu     xmm1, [eax + esi]      // read 16 pixels src_sobely
+    movdqu     xmm0, [eax]  // read 16 pixels src_sobelx
+    movdqu     xmm1, [eax + esi]  // read 16 pixels src_sobely
     lea        eax, [eax + 16]
     movdqa     xmm2, xmm0
-    paddusb    xmm2, xmm1             // sobel = sobelx + sobely
-    movdqa     xmm3, xmm0             // XA
+    paddusb    xmm2, xmm1  // sobel = sobelx + sobely
+    movdqa     xmm3, xmm0  // XA
     punpcklbw  xmm3, xmm5
     punpckhbw  xmm0, xmm5
-    movdqa     xmm4, xmm1             // YS
+    movdqa     xmm4, xmm1  // YS
     punpcklbw  xmm4, xmm2
     punpckhbw  xmm1, xmm2
-    movdqa     xmm6, xmm4             // YSXA
-    punpcklwd  xmm6, xmm3             // First 4
-    punpckhwd  xmm4, xmm3             // Next 4
-    movdqa     xmm7, xmm1             // YSXA
-    punpcklwd  xmm7, xmm0             // Next 4
-    punpckhwd  xmm1, xmm0             // Last 4
+    movdqa     xmm6, xmm4  // YSXA
+    punpcklwd  xmm6, xmm3  // First 4
+    punpckhwd  xmm4, xmm3  // Next 4
+    movdqa     xmm7, xmm1  // YSXA
+    punpcklwd  xmm7, xmm0  // Next 4
+    punpckhwd  xmm1, xmm0  // Last 4
     movdqu     [edx], xmm6
     movdqu     [edx + 16], xmm4
     movdqu     [edx + 32], xmm7
@@ -5275,8 +5235,11 @@
 // count is number of averaged pixels to produce.
 // Does 4 pixels at a time.
 // This function requires alignment on accumulation buffer pointers.
-void CumulativeSumToAverageRow_SSE2(const int32* topleft, const int32* botleft,
-                                    int width, int area, uint8* dst,
+void CumulativeSumToAverageRow_SSE2(const int32_t* topleft,
+                                    const int32_t* botleft,
+                                    int width,
+                                    int area,
+                                    uint8_t* dst,
                                     int count) {
   __asm {
     mov        eax, topleft  // eax topleft
@@ -5294,18 +5257,18 @@
     cmp        area, 128  // 128 pixels will not overflow 15 bits.
     ja         l4
 
-    pshufd     xmm5, xmm5, 0        // area
-    pcmpeqb    xmm6, xmm6           // constant of 65536.0 - 1 = 65535.0
+    pshufd     xmm5, xmm5, 0  // area
+    pcmpeqb    xmm6, xmm6  // constant of 65536.0 - 1 = 65535.0
     psrld      xmm6, 16
     cvtdq2ps   xmm6, xmm6
-    addps      xmm5, xmm6           // (65536.0 + area - 1)
-    mulps      xmm5, xmm4           // (65536.0 + area - 1) * 1 / area
-    cvtps2dq   xmm5, xmm5           // 0.16 fixed point
-    packssdw   xmm5, xmm5           // 16 bit shorts
+    addps      xmm5, xmm6  // (65536.0 + area - 1)
+    mulps      xmm5, xmm4  // (65536.0 + area - 1) * 1 / area
+    cvtps2dq   xmm5, xmm5  // 0.16 fixed point
+    packssdw   xmm5, xmm5  // 16 bit shorts
 
-    // 4 pixel loop small blocks.
+        // 4 pixel loop small blocks.
   s4:
-    // top left
+        // top left
     movdqu     xmm0, [eax]
     movdqu     xmm1, [eax + 16]
     movdqu     xmm2, [eax + 32]
@@ -5345,9 +5308,9 @@
 
     jmp        l4b
 
-    // 4 pixel loop
+            // 4 pixel loop
   l4:
-    // top left
+        // top left
     movdqu     xmm0, [eax]
     movdqu     xmm1, [eax + 16]
     movdqu     xmm2, [eax + 32]
@@ -5373,7 +5336,7 @@
     paddd      xmm3, [esi + edx * 4 + 48]
     lea        esi, [esi + 64]
 
-    cvtdq2ps   xmm0, xmm0   // Average = Sum * 1 / Area
+    cvtdq2ps   xmm0, xmm0  // Average = Sum * 1 / Area
     cvtdq2ps   xmm1, xmm1
     mulps      xmm0, xmm4
     mulps      xmm1, xmm4
@@ -5397,7 +5360,7 @@
     add        ecx, 4 - 1
     jl         l1b
 
-    // 1 pixel loop
+        // 1 pixel loop
   l1:
     movdqu     xmm0, [eax]
     psubd      xmm0, [eax + edx * 4]
@@ -5422,8 +5385,10 @@
 #ifdef HAS_COMPUTECUMULATIVESUMROW_SSE2
 // Creates a table of cumulative sums where each value is a sum of all values
 // above and to the left of the value.
-void ComputeCumulativeSumRow_SSE2(const uint8* row, int32* cumsum,
-                                  const int32* previous_cumsum, int width) {
+void ComputeCumulativeSumRow_SSE2(const uint8_t* row,
+                                  int32_t* cumsum,
+                                  const int32_t* previous_cumsum,
+                                  int width) {
   __asm {
     mov        eax, row
     mov        edx, cumsum
@@ -5437,7 +5402,7 @@
     test       edx, 15
     jne        l4b
 
-    // 4 pixel loop
+        // 4 pixel loop
   l4:
     movdqu     xmm2, [eax]  // 4 argb pixels 16 bytes.
     lea        eax, [eax + 16]
@@ -5483,7 +5448,7 @@
     add        ecx, 4 - 1
     jl         l1b
 
-    // 1 pixel loop
+        // 1 pixel loop
   l1:
     movd       xmm2, dword ptr [eax]  // 1 argb pixel 4 bytes.
     lea        eax, [eax + 4]
@@ -5505,10 +5470,11 @@
 
 #ifdef HAS_ARGBAFFINEROW_SSE2
 // Copy ARGB pixels from source image with slope to a row of destination.
-__declspec(naked)
-LIBYUV_API
-void ARGBAffineRow_SSE2(const uint8* src_argb, int src_argb_stride,
-                        uint8* dst_argb, const float* uv_dudv, int width) {
+__declspec(naked) LIBYUV_API void ARGBAffineRow_SSE2(const uint8_t* src_argb,
+                                                     int src_argb_stride,
+                                                     uint8_t* dst_argb,
+                                                     const float* uv_dudv,
+                                                     int width) {
   __asm {
     push       esi
     push       edi
@@ -5519,46 +5485,46 @@
     movq       xmm2, qword ptr [ecx]  // uv
     movq       xmm7, qword ptr [ecx + 8]  // dudv
     mov        ecx, [esp + 28]  // width
-    shl        esi, 16          // 4, stride
+    shl        esi, 16  // 4, stride
     add        esi, 4
     movd       xmm5, esi
     sub        ecx, 4
     jl         l4b
 
-    // setup for 4 pixel loop
+        // setup for 4 pixel loop
     pshufd     xmm7, xmm7, 0x44  // dup dudv
     pshufd     xmm5, xmm5, 0  // dup 4, stride
-    movdqa     xmm0, xmm2    // x0, y0, x1, y1
+    movdqa     xmm0, xmm2  // x0, y0, x1, y1
     addps      xmm0, xmm7
     movlhps    xmm2, xmm0
     movdqa     xmm4, xmm7
-    addps      xmm4, xmm4    // dudv *= 2
-    movdqa     xmm3, xmm2    // x2, y2, x3, y3
+    addps      xmm4, xmm4  // dudv *= 2
+    movdqa     xmm3, xmm2  // x2, y2, x3, y3
     addps      xmm3, xmm4
-    addps      xmm4, xmm4    // dudv *= 4
+    addps      xmm4, xmm4  // dudv *= 4
 
-    // 4 pixel loop
+        // 4 pixel loop
   l4:
-    cvttps2dq  xmm0, xmm2    // x, y float to int first 2
-    cvttps2dq  xmm1, xmm3    // x, y float to int next 2
-    packssdw   xmm0, xmm1    // x, y as 8 shorts
-    pmaddwd    xmm0, xmm5    // offsets = x * 4 + y * stride.
+    cvttps2dq  xmm0, xmm2  // x, y float to int first 2
+    cvttps2dq  xmm1, xmm3  // x, y float to int next 2
+    packssdw   xmm0, xmm1  // x, y as 8 shorts
+    pmaddwd    xmm0, xmm5  // offsets = x * 4 + y * stride.
     movd       esi, xmm0
     pshufd     xmm0, xmm0, 0x39  // shift right
     movd       edi, xmm0
     pshufd     xmm0, xmm0, 0x39  // shift right
     movd       xmm1, [eax + esi]  // read pixel 0
     movd       xmm6, [eax + edi]  // read pixel 1
-    punpckldq  xmm1, xmm6     // combine pixel 0 and 1
-    addps      xmm2, xmm4    // x, y += dx, dy first 2
+    punpckldq  xmm1, xmm6  // combine pixel 0 and 1
+    addps      xmm2, xmm4  // x, y += dx, dy first 2
     movq       qword ptr [edx], xmm1
     movd       esi, xmm0
     pshufd     xmm0, xmm0, 0x39  // shift right
     movd       edi, xmm0
     movd       xmm6, [eax + esi]  // read pixel 2
     movd       xmm0, [eax + edi]  // read pixel 3
-    punpckldq  xmm6, xmm0     // combine pixel 2 and 3
-    addps      xmm3, xmm4    // x, y += dx, dy next 2
+    punpckldq  xmm6, xmm0  // combine pixel 2 and 3
+    addps      xmm3, xmm4  // x, y += dx, dy next 2
     movq       qword ptr 8[edx], xmm6
     lea        edx, [edx + 16]
     sub        ecx, 4
@@ -5568,12 +5534,12 @@
     add        ecx, 4 - 1
     jl         l1b
 
-    // 1 pixel loop
+        // 1 pixel loop
   l1:
-    cvttps2dq  xmm0, xmm2    // x, y float to int
-    packssdw   xmm0, xmm0    // x, y as shorts
-    pmaddwd    xmm0, xmm5    // offset = x * 4 + y * stride
-    addps      xmm2, xmm7    // x, y += dx, dy
+    cvttps2dq  xmm0, xmm2  // x, y float to int
+    packssdw   xmm0, xmm0  // x, y as shorts
+    pmaddwd    xmm0, xmm5  // offset = x * 4 + y * stride
+    addps      xmm2, xmm7  // x, y += dx, dy
     movd       esi, xmm0
     movd       xmm0, [eax + esi]  // copy a pixel
     movd       [edx], xmm0
@@ -5590,15 +5556,16 @@
 
 #ifdef HAS_INTERPOLATEROW_AVX2
 // Bilinear filter 32x2 -> 32x1
-__declspec(naked)
-void InterpolateRow_AVX2(uint8* dst_ptr, const uint8* src_ptr,
-                         ptrdiff_t src_stride, int dst_width,
-                         int source_y_fraction) {
+__declspec(naked) void InterpolateRow_AVX2(uint8_t* dst_ptr,
+                                           const uint8_t* src_ptr,
+                                           ptrdiff_t src_stride,
+                                           int dst_width,
+                                           int source_y_fraction) {
   __asm {
     push       esi
     push       edi
-    mov        edi, [esp + 8 + 4]   // dst_ptr
-    mov        esi, [esp + 8 + 8]   // src_ptr
+    mov        edi, [esp + 8 + 4]  // dst_ptr
+    mov        esi, [esp + 8 + 8]  // src_ptr
     mov        edx, [esp + 8 + 12]  // src_stride
     mov        ecx, [esp + 8 + 16]  // dst_width
     mov        eax, [esp + 8 + 20]  // source_y_fraction (0..255)
@@ -5607,7 +5574,7 @@
     je         xloop100  // 0 / 256.  Blend 100 / 0.
     sub        edi, esi
     cmp        eax, 128
-    je         xloop50   // 128 /256 is 0.50.  Blend 50 / 50.
+    je         xloop50  // 128 /256 is 0.50.  Blend 50 / 50.
 
     vmovd      xmm0, eax  // high fraction 0..255
     neg        eax
@@ -5634,14 +5601,14 @@
     vpaddw     ymm0, ymm0, ymm4
     vpsrlw     ymm1, ymm1, 8
     vpsrlw     ymm0, ymm0, 8
-    vpackuswb  ymm0, ymm0, ymm1  // unmutates
+    vpackuswb  ymm0, ymm0, ymm1            // unmutates
     vmovdqu    [esi + edi], ymm0
     lea        esi, [esi + 32]
     sub        ecx, 32
     jg         xloop
     jmp        xloop99
 
-   // Blend 50 / 50.
+        // Blend 50 / 50.
  xloop50:
    vmovdqu    ymm0, [esi]
    vpavgb     ymm0, ymm0, [esi + edx]
@@ -5651,7 +5618,7 @@
    jg         xloop50
    jmp        xloop99
 
-   // Blend 100 / 0 - Copy row unchanged.
+        // Blend 100 / 0 - Copy row unchanged.
  xloop100:
    rep movsb
 
@@ -5666,25 +5633,26 @@
 
 // Bilinear filter 16x2 -> 16x1
 // TODO(fbarchard): Consider allowing 256 using memcpy.
-__declspec(naked)
-void InterpolateRow_SSSE3(uint8* dst_ptr, const uint8* src_ptr,
-                          ptrdiff_t src_stride, int dst_width,
-                          int source_y_fraction) {
+__declspec(naked) void InterpolateRow_SSSE3(uint8_t* dst_ptr,
+                                            const uint8_t* src_ptr,
+                                            ptrdiff_t src_stride,
+                                            int dst_width,
+                                            int source_y_fraction) {
   __asm {
     push       esi
     push       edi
 
-    mov        edi, [esp + 8 + 4]   // dst_ptr
-    mov        esi, [esp + 8 + 8]   // src_ptr
+    mov        edi, [esp + 8 + 4]  // dst_ptr
+    mov        esi, [esp + 8 + 8]  // src_ptr
     mov        edx, [esp + 8 + 12]  // src_stride
     mov        ecx, [esp + 8 + 16]  // dst_width
     mov        eax, [esp + 8 + 20]  // source_y_fraction (0..255)
     sub        edi, esi
-    // Dispatch to specialized filters if applicable.
+        // Dispatch to specialized filters if applicable.
     cmp        eax, 0
     je         xloop100  // 0 /256.  Blend 100 / 0.
     cmp        eax, 128
-    je         xloop50   // 128 / 256 is 0.50.  Blend 50 / 50.
+    je         xloop50  // 128 / 256 is 0.50.  Blend 50 / 50.
 
     movd       xmm0, eax  // high fraction 0..255
     neg        eax
@@ -5703,7 +5671,7 @@
     movdqu     xmm1, xmm0
     punpcklbw  xmm0, xmm2
     punpckhbw  xmm1, xmm2
-    psubb      xmm0, xmm4  // bias image by -128
+    psubb      xmm0, xmm4            // bias image by -128
     psubb      xmm1, xmm4
     movdqa     xmm2, xmm5
     movdqa     xmm3, xmm5
@@ -5720,7 +5688,7 @@
     jg         xloop
     jmp        xloop99
 
-    // Blend 50 / 50.
+        // Blend 50 / 50.
   xloop50:
     movdqu     xmm0, [esi]
     movdqu     xmm1, [esi + edx]
@@ -5731,7 +5699,7 @@
     jg         xloop50
     jmp        xloop99
 
-    // Blend 100 / 0 - Copy row unchanged.
+        // Blend 100 / 0 - Copy row unchanged.
   xloop100:
     movdqu     xmm0, [esi]
     movdqu     [esi + edi], xmm0
@@ -5747,15 +5715,16 @@
 }
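The `InterpolateRow` kernels above blend two adjacent source rows with an 8-bit fraction, special-casing `source_y_fraction == 0` (plain copy) and `== 128` (`pavgb`/`vpavgb` average). As a hedged scalar sketch of what the SIMD path computes per byte (assuming the conventional `+128` rounding term used by libyuv's reference C path):

```cpp
#include <cassert>
#include <cstdint>

// Scalar model of the InterpolateRow bilinear blend: for fraction f in 0..255,
// out = (src[x] * (256 - f) + src_next[x] * f + 128) >> 8.
// f == 0 degenerates to a row copy and f == 128 to a rounded average,
// which is why the SIMD versions branch to xloop100 / xloop50 for those cases.
static void InterpolateRowScalar(uint8_t* dst, const uint8_t* src,
                                 const uint8_t* src_next, int width,
                                 int source_y_fraction) {
  const int f1 = source_y_fraction;
  const int f0 = 256 - f1;
  for (int x = 0; x < width; ++x) {
    dst[x] = static_cast<uint8_t>((src[x] * f0 + src_next[x] * f1 + 128) >> 8);
  }
}
```

Note that with `f == 128` this matches `pavgb`'s `(a + b + 1) >> 1` rounding, so the fast path produces the same bytes as the general path.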
 
 // For BGRAToARGB, ABGRToARGB, RGBAToARGB, and ARGBToRGBA.
-__declspec(naked)
-void ARGBShuffleRow_SSSE3(const uint8* src_argb, uint8* dst_argb,
-                          const uint8* shuffler, int width) {
+__declspec(naked) void ARGBShuffleRow_SSSE3(const uint8_t* src_argb,
+                                            uint8_t* dst_argb,
+                                            const uint8_t* shuffler,
+                                            int width) {
   __asm {
-    mov        eax, [esp + 4]    // src_argb
-    mov        edx, [esp + 8]    // dst_argb
-    mov        ecx, [esp + 12]   // shuffler
+    mov        eax, [esp + 4]  // src_argb
+    mov        edx, [esp + 8]  // dst_argb
+    mov        ecx, [esp + 12]  // shuffler
     movdqu     xmm5, [ecx]
-    mov        ecx, [esp + 16]   // width
+    mov        ecx, [esp + 16]  // width
 
   wloop:
     movdqu     xmm0, [eax]
@@ -5773,15 +5742,16 @@
 }
 
 #ifdef HAS_ARGBSHUFFLEROW_AVX2
-__declspec(naked)
-void ARGBShuffleRow_AVX2(const uint8* src_argb, uint8* dst_argb,
-                         const uint8* shuffler, int width) {
+__declspec(naked) void ARGBShuffleRow_AVX2(const uint8_t* src_argb,
+                                           uint8_t* dst_argb,
+                                           const uint8_t* shuffler,
+                                           int width) {
   __asm {
-    mov        eax, [esp + 4]     // src_argb
-    mov        edx, [esp + 8]     // dst_argb
-    mov        ecx, [esp + 12]    // shuffler
-    vbroadcastf128 ymm5, [ecx]    // same shuffle in high as low.
-    mov        ecx, [esp + 16]    // width
+    mov        eax, [esp + 4]  // src_argb
+    mov        edx, [esp + 8]  // dst_argb
+    mov        ecx, [esp + 12]  // shuffler
+    vbroadcastf128 ymm5, [ecx]  // same shuffle in high as low.
+    mov        ecx, [esp + 16]  // width
 
   wloop:
     vmovdqu    ymm0, [eax]
@@ -5801,152 +5771,36 @@
 }
 #endif  // HAS_ARGBSHUFFLEROW_AVX2
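The `ARGBShuffleRow` variants above reorder the four bytes of each pixel according to a 4-entry `shuffler` table (`pshufb`/`vpshufb` apply the same per-byte indexing 16 or 32 bytes at a time). A minimal scalar sketch of that behavior, with `ARGBShuffleRowScalar` being a hypothetical illustration name, not a libyuv function:

```cpp
#include <cassert>
#include <cstdint>

// Scalar model of ARGBShuffleRow: output byte i of each 4-byte pixel is taken
// from input byte shuffler[i] of the same pixel. E.g. shuffler {3,2,1,0}
// reverses byte order (BGRAToARGB-style swaps are just different tables).
static void ARGBShuffleRowScalar(const uint8_t* src_argb, uint8_t* dst_argb,
                                 const uint8_t shuffler[4], int width) {
  for (int x = 0; x < width; ++x) {
    for (int i = 0; i < 4; ++i) {
      dst_argb[x * 4 + i] = src_argb[x * 4 + shuffler[i]];
    }
  }
}
```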
 
-__declspec(naked)
-void ARGBShuffleRow_SSE2(const uint8* src_argb, uint8* dst_argb,
-                         const uint8* shuffler, int width) {
-  __asm {
-    push       ebx
-    push       esi
-    mov        eax, [esp + 8 + 4]    // src_argb
-    mov        edx, [esp + 8 + 8]    // dst_argb
-    mov        esi, [esp + 8 + 12]   // shuffler
-    mov        ecx, [esp + 8 + 16]   // width
-    pxor       xmm5, xmm5
-
-    mov        ebx, [esi]   // shuffler
-    cmp        ebx, 0x03000102
-    je         shuf_3012
-    cmp        ebx, 0x00010203
-    je         shuf_0123
-    cmp        ebx, 0x00030201
-    je         shuf_0321
-    cmp        ebx, 0x02010003
-    je         shuf_2103
-
-  // TODO(fbarchard): Use one source pointer and 3 offsets.
-  shuf_any1:
-    movzx      ebx, byte ptr [esi]
-    movzx      ebx, byte ptr [eax + ebx]
-    mov        [edx], bl
-    movzx      ebx, byte ptr [esi + 1]
-    movzx      ebx, byte ptr [eax + ebx]
-    mov        [edx + 1], bl
-    movzx      ebx, byte ptr [esi + 2]
-    movzx      ebx, byte ptr [eax + ebx]
-    mov        [edx + 2], bl
-    movzx      ebx, byte ptr [esi + 3]
-    movzx      ebx, byte ptr [eax + ebx]
-    mov        [edx + 3], bl
-    lea        eax, [eax + 4]
-    lea        edx, [edx + 4]
-    sub        ecx, 1
-    jg         shuf_any1
-    jmp        shuf99
-
-  shuf_0123:
-    movdqu     xmm0, [eax]
-    lea        eax, [eax + 16]
-    movdqa     xmm1, xmm0
-    punpcklbw  xmm0, xmm5
-    punpckhbw  xmm1, xmm5
-    pshufhw    xmm0, xmm0, 01Bh   // 1B = 00011011 = 0x0123 = BGRAToARGB
-    pshuflw    xmm0, xmm0, 01Bh
-    pshufhw    xmm1, xmm1, 01Bh
-    pshuflw    xmm1, xmm1, 01Bh
-    packuswb   xmm0, xmm1
-    movdqu     [edx], xmm0
-    lea        edx, [edx + 16]
-    sub        ecx, 4
-    jg         shuf_0123
-    jmp        shuf99
-
-  shuf_0321:
-    movdqu     xmm0, [eax]
-    lea        eax, [eax + 16]
-    movdqa     xmm1, xmm0
-    punpcklbw  xmm0, xmm5
-    punpckhbw  xmm1, xmm5
-    pshufhw    xmm0, xmm0, 039h   // 39 = 00111001 = 0x0321 = RGBAToARGB
-    pshuflw    xmm0, xmm0, 039h
-    pshufhw    xmm1, xmm1, 039h
-    pshuflw    xmm1, xmm1, 039h
-    packuswb   xmm0, xmm1
-    movdqu     [edx], xmm0
-    lea        edx, [edx + 16]
-    sub        ecx, 4
-    jg         shuf_0321
-    jmp        shuf99
-
-  shuf_2103:
-    movdqu     xmm0, [eax]
-    lea        eax, [eax + 16]
-    movdqa     xmm1, xmm0
-    punpcklbw  xmm0, xmm5
-    punpckhbw  xmm1, xmm5
-    pshufhw    xmm0, xmm0, 093h   // 93 = 10010011 = 0x2103 = ARGBToRGBA
-    pshuflw    xmm0, xmm0, 093h
-    pshufhw    xmm1, xmm1, 093h
-    pshuflw    xmm1, xmm1, 093h
-    packuswb   xmm0, xmm1
-    movdqu     [edx], xmm0
-    lea        edx, [edx + 16]
-    sub        ecx, 4
-    jg         shuf_2103
-    jmp        shuf99
-
-  shuf_3012:
-    movdqu     xmm0, [eax]
-    lea        eax, [eax + 16]
-    movdqa     xmm1, xmm0
-    punpcklbw  xmm0, xmm5
-    punpckhbw  xmm1, xmm5
-    pshufhw    xmm0, xmm0, 0C6h   // C6 = 11000110 = 0x3012 = ABGRToARGB
-    pshuflw    xmm0, xmm0, 0C6h
-    pshufhw    xmm1, xmm1, 0C6h
-    pshuflw    xmm1, xmm1, 0C6h
-    packuswb   xmm0, xmm1
-    movdqu     [edx], xmm0
-    lea        edx, [edx + 16]
-    sub        ecx, 4
-    jg         shuf_3012
-
-  shuf99:
-    pop        esi
-    pop        ebx
-    ret
-  }
-}
-
 // YUY2 - Macro-pixel = 2 image pixels
 // Y0U0Y1V0....Y2U2Y3V2...Y4U4Y5V4....
 
 // UYVY - Macro-pixel = 2 image pixels
 // U0Y0V0Y1
 
-__declspec(naked)
-void I422ToYUY2Row_SSE2(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_frame, int width) {
+__declspec(naked) void I422ToYUY2Row_SSE2(const uint8_t* src_y,
+                                          const uint8_t* src_u,
+                                          const uint8_t* src_v,
+                                          uint8_t* dst_frame,
+                                          int width) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]    // src_y
-    mov        esi, [esp + 8 + 8]    // src_u
-    mov        edx, [esp + 8 + 12]   // src_v
-    mov        edi, [esp + 8 + 16]   // dst_frame
-    mov        ecx, [esp + 8 + 20]   // width
+    mov        eax, [esp + 8 + 4]  // src_y
+    mov        esi, [esp + 8 + 8]  // src_u
+    mov        edx, [esp + 8 + 12]  // src_v
+    mov        edi, [esp + 8 + 16]  // dst_frame
+    mov        ecx, [esp + 8 + 20]  // width
     sub        edx, esi
 
   convertloop:
-    movq       xmm2, qword ptr [esi] // U
-    movq       xmm3, qword ptr [esi + edx] // V
+    movq       xmm2, qword ptr [esi]  // U
+    movq       xmm3, qword ptr [esi + edx]  // V
     lea        esi, [esi + 8]
-    punpcklbw  xmm2, xmm3 // UV
-    movdqu     xmm0, [eax] // Y
+    punpcklbw  xmm2, xmm3  // UV
+    movdqu     xmm0, [eax]  // Y
     lea        eax, [eax + 16]
     movdqa     xmm1, xmm0
-    punpcklbw  xmm0, xmm2 // YUYV
+    punpcklbw  xmm0, xmm2  // YUYV
     punpckhbw  xmm1, xmm2
     movdqu     [edi], xmm0
     movdqu     [edi + 16], xmm1
@@ -5960,30 +5814,30 @@
   }
 }
 
-__declspec(naked)
-void I422ToUYVYRow_SSE2(const uint8* src_y,
-                        const uint8* src_u,
-                        const uint8* src_v,
-                        uint8* dst_frame, int width) {
+__declspec(naked) void I422ToUYVYRow_SSE2(const uint8_t* src_y,
+                                          const uint8_t* src_u,
+                                          const uint8_t* src_v,
+                                          uint8_t* dst_frame,
+                                          int width) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]    // src_y
-    mov        esi, [esp + 8 + 8]    // src_u
-    mov        edx, [esp + 8 + 12]   // src_v
-    mov        edi, [esp + 8 + 16]   // dst_frame
-    mov        ecx, [esp + 8 + 20]   // width
+    mov        eax, [esp + 8 + 4]  // src_y
+    mov        esi, [esp + 8 + 8]  // src_u
+    mov        edx, [esp + 8 + 12]  // src_v
+    mov        edi, [esp + 8 + 16]  // dst_frame
+    mov        ecx, [esp + 8 + 20]  // width
     sub        edx, esi
 
   convertloop:
-    movq       xmm2, qword ptr [esi] // U
-    movq       xmm3, qword ptr [esi + edx] // V
+    movq       xmm2, qword ptr [esi]  // U
+    movq       xmm3, qword ptr [esi + edx]  // V
     lea        esi, [esi + 8]
-    punpcklbw  xmm2, xmm3 // UV
-    movdqu     xmm0, [eax] // Y
+    punpcklbw  xmm2, xmm3  // UV
+    movdqu     xmm0, [eax]  // Y
     movdqa     xmm1, xmm2
     lea        eax, [eax + 16]
-    punpcklbw  xmm1, xmm0 // UYVY
+    punpcklbw  xmm1, xmm0  // UYVY
     punpckhbw  xmm2, xmm0
     movdqu     [edi], xmm1
     movdqu     [edi + 16], xmm2
@@ -5998,22 +5852,22 @@
 }
 
 #ifdef HAS_ARGBPOLYNOMIALROW_SSE2
-__declspec(naked)
-void ARGBPolynomialRow_SSE2(const uint8* src_argb,
-                            uint8* dst_argb, const float* poly,
-                            int width) {
+__declspec(naked) void ARGBPolynomialRow_SSE2(const uint8_t* src_argb,
+                                              uint8_t* dst_argb,
+                                              const float* poly,
+                                              int width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]   /* src_argb */
-    mov        edx, [esp + 4 + 8]   /* dst_argb */
-    mov        esi, [esp + 4 + 12]  /* poly */
-    mov        ecx, [esp + 4 + 16]  /* width */
+    mov        eax, [esp + 4 + 4] /* src_argb */
+    mov        edx, [esp + 4 + 8] /* dst_argb */
+    mov        esi, [esp + 4 + 12] /* poly */
+    mov        ecx, [esp + 4 + 16] /* width */
     pxor       xmm3, xmm3  // 0 constant for zero extending bytes to ints.
 
-    // 2 pixel loop.
+        // 2 pixel loop.
  convertloop:
-//    pmovzxbd  xmm0, dword ptr [eax]  // BGRA pixel
-//    pmovzxbd  xmm4, dword ptr [eax + 4]  // BGRA pixel
+        //    pmovzxbd  xmm0, dword ptr [eax]  // BGRA pixel
+        //    pmovzxbd  xmm4, dword ptr [eax + 4]  // BGRA pixel
     movq       xmm0, qword ptr [eax]  // BGRABGRA
     lea        eax, [eax + 8]
     punpcklbw  xmm0, xmm3
@@ -6057,25 +5911,25 @@
 #endif  // HAS_ARGBPOLYNOMIALROW_SSE2
 
 #ifdef HAS_ARGBPOLYNOMIALROW_AVX2
-__declspec(naked)
-void ARGBPolynomialRow_AVX2(const uint8* src_argb,
-                            uint8* dst_argb, const float* poly,
-                            int width) {
+__declspec(naked) void ARGBPolynomialRow_AVX2(const uint8_t* src_argb,
+                                              uint8_t* dst_argb,
+                                              const float* poly,
+                                              int width) {
   __asm {
-    mov        eax, [esp + 4]   /* src_argb */
-    mov        edx, [esp + 8]   /* dst_argb */
-    mov        ecx, [esp + 12]   /* poly */
-    vbroadcastf128 ymm4, [ecx]       // C0
+    mov        eax, [esp + 4] /* src_argb */
+    mov        edx, [esp + 8] /* dst_argb */
+    mov        ecx, [esp + 12] /* poly */
+    vbroadcastf128 ymm4, [ecx]  // C0
     vbroadcastf128 ymm5, [ecx + 16]  // C1
     vbroadcastf128 ymm6, [ecx + 32]  // C2
     vbroadcastf128 ymm7, [ecx + 48]  // C3
-    mov        ecx, [esp + 16]  /* width */
+    mov        ecx, [esp + 16] /* width */
 
     // 2 pixel loop.
  convertloop:
     vpmovzxbd   ymm0, qword ptr [eax]  // 2 BGRA pixels
     lea         eax, [eax + 8]
-    vcvtdq2ps   ymm0, ymm0        // X 8 floats
+    vcvtdq2ps   ymm0, ymm0  // X 8 floats
     vmulps      ymm2, ymm0, ymm0  // X * X
     vmulps      ymm3, ymm0, ymm7  // C3 * X
     vfmadd132ps ymm0, ymm4, ymm5  // result = C0 + C1 * X
@@ -6095,16 +5949,125 @@
 }
 #endif  // HAS_ARGBPOLYNOMIALROW_AVX2
 
+#ifdef HAS_HALFFLOATROW_SSE2
+static float kExpBias = 1.9259299444e-34f;
+__declspec(naked) void HalfFloatRow_SSE2(const uint16_t* src,
+                                         uint16_t* dst,
+                                         float scale,
+                                         int width) {
+  __asm {
+    mov        eax, [esp + 4] /* src */
+    mov        edx, [esp + 8] /* dst */
+    movd       xmm4, dword ptr [esp + 12] /* scale */
+    mov        ecx, [esp + 16] /* width */
+    mulss      xmm4, kExpBias
+    pshufd     xmm4, xmm4, 0
+    pxor       xmm5, xmm5
+    sub        edx, eax
+
+        // 8 pixel loop.
+ convertloop:
+    movdqu      xmm2, xmmword ptr [eax]  // 8 shorts
+    add         eax, 16
+    movdqa      xmm3, xmm2
+    punpcklwd   xmm2, xmm5
+    cvtdq2ps    xmm2, xmm2  // convert 8 ints to floats
+    punpckhwd   xmm3, xmm5
+    cvtdq2ps    xmm3, xmm3
+    mulps       xmm2, xmm4
+    mulps       xmm3, xmm4
+    psrld       xmm2, 13
+    psrld       xmm3, 13
+    packssdw    xmm2, xmm3
+    movdqu      [eax + edx - 16], xmm2
+    sub         ecx, 8
+    jg          convertloop
+    ret
+  }
+}
+#endif  // HAS_HALFFLOATROW_SSE2
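The `kExpBias` constant above (bit pattern `0x07800000`, i.e. 2^-112) is the classic exponent-rebias trick: multiplying by 2^-112 shifts the float exponent down by the difference between the float bias (127) and the half-float bias (15), after which a 13-bit right shift of the raw bits aligns the 23-bit mantissa to half's 10 bits and drops the exponent field into place. A hedged scalar sketch of one lane of `HalfFloatRow_SSE2` (truncating, no rounding, valid for inputs that land in half's normal range; the helper name is illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// One lane of the HalfFloatRow kernels: convert a uint16 sample, scaled by
// `scale`, to an IEEE half-float encoding via the exponent-bias trick.
static uint16_t HalfFromUint16(uint16_t v, float scale) {
  const float kExpBias = 1.9259299444e-34f;  // bits 0x07800000 == 2^-112
  float f = static_cast<float>(v) * scale * kExpBias;
  uint32_t bits;
  std::memcpy(&bits, &f, sizeof(bits));  // reinterpret without UB
  // >> 13 realigns mantissa (23 -> 10 bits) and exponent into half layout.
  return static_cast<uint16_t>(bits >> 13);
}
```

This mirrors the SIMD sequence `mulps` by the pre-multiplied `scale * kExpBias` followed by `psrld xmm, 13`, just one value at a time.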
+
+#ifdef HAS_HALFFLOATROW_AVX2
+__declspec(naked) void HalfFloatRow_AVX2(const uint16_t* src,
+                                         uint16_t* dst,
+                                         float scale,
+                                         int width) {
+  __asm {
+    mov        eax, [esp + 4] /* src */
+    mov        edx, [esp + 8] /* dst */
+    movd       xmm4, dword ptr [esp + 12] /* scale */
+    mov        ecx, [esp + 16] /* width */
+
+    vmulss     xmm4, xmm4, kExpBias
+    vbroadcastss ymm4, xmm4
+    vpxor      ymm5, ymm5, ymm5
+    sub        edx, eax
+
+        // 16 pixel loop.
+ convertloop:
+    vmovdqu     ymm2, [eax]  // 16 shorts
+    add         eax, 32
+    vpunpckhwd  ymm3, ymm2, ymm5  // convert 16 shorts to 16 ints
+    vpunpcklwd  ymm2, ymm2, ymm5
+    vcvtdq2ps   ymm3, ymm3  // convert 16 ints to floats
+    vcvtdq2ps   ymm2, ymm2
+    vmulps      ymm3, ymm3, ymm4  // scale to adjust exponent for 5 bit range.
+    vmulps      ymm2, ymm2, ymm4
+    vpsrld      ymm3, ymm3, 13  // float convert to 8 half floats truncate
+    vpsrld      ymm2, ymm2, 13
+    vpackssdw   ymm2, ymm2, ymm3
+    vmovdqu     [eax + edx - 32], ymm2
+    sub         ecx, 16
+    jg          convertloop
+    vzeroupper
+    ret
+  }
+}
+#endif  // HAS_HALFFLOATROW_AVX2
+
+#ifdef HAS_HALFFLOATROW_F16C
+__declspec(naked) void HalfFloatRow_F16C(const uint16_t* src,
+                                         uint16_t* dst,
+                                         float scale,
+                                         int width) {
+  __asm {
+    mov        eax, [esp + 4] /* src */
+    mov        edx, [esp + 8] /* dst */
+    vbroadcastss ymm4, [esp + 12] /* scale */
+    mov        ecx, [esp + 16] /* width */
+    sub        edx, eax
+
+        // 16 pixel loop.
+ convertloop:
+    vpmovzxwd   ymm2, xmmword ptr [eax]  // 8 shorts -> 8 ints
+    vpmovzxwd   ymm3, xmmword ptr [eax + 16]  // 8 more shorts
+    add         eax, 32
+    vcvtdq2ps   ymm2, ymm2  // convert 8 ints to floats
+    vcvtdq2ps   ymm3, ymm3
+    vmulps      ymm2, ymm2, ymm4  // scale to normalized range 0 to 1
+    vmulps      ymm3, ymm3, ymm4
+    vcvtps2ph   xmm2, ymm2, 3  // float convert to 8 half floats truncate
+    vcvtps2ph   xmm3, ymm3, 3
+    vmovdqu     [eax + edx - 32], xmm2
+    vmovdqu     [eax + edx - 32 + 16], xmm3
+    sub         ecx, 16
+    jg          convertloop
+    vzeroupper
+    ret
+  }
+}
+#endif  // HAS_HALFFLOATROW_F16C
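The SSE2/AVX2 loops above convert to half floats without F16C hardware by a bit trick: the input is scaled by kExpBias = 2^-112, which rebases the float exponent bias (127) to the half-float bias (15), and the raw bits are then shifted right by 13 (float's 23 mantissa bits minus half's 10). A scalar sketch of that trick (the helper name FloatToHalfViaBias is ours, not libyuv's):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Scalar model of the HalfFloatRow_SSE2/AVX2 bit trick above.
// Multiplying by 2^-112 rebiases the float exponent (bias 127) to the
// half-float exponent (bias 15); shifting the bit pattern right by 13
// drops float's extra mantissa bits (23 - 10).
static uint16_t FloatToHalfViaBias(float v) {
  const float kExpBias = 1.9259299444e-34f;  // 2^-112
  float biased = v * kExpBias;
  uint32_t bits;
  std::memcpy(&bits, &biased, sizeof(bits));  // reinterpret without UB
  return static_cast<uint16_t>(bits >> 13);   // truncate, like psrld/vpsrld
}
```

This truncates rather than rounds, matching the psrld/vpsrld path; the F16C variant instead uses vcvtps2ph, which handles rounding and out-of-range inputs in hardware.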
+
 #ifdef HAS_ARGBCOLORTABLEROW_X86
 // Transform ARGB pixels with color table.
-__declspec(naked)
-void ARGBColorTableRow_X86(uint8* dst_argb, const uint8* table_argb,
-                           int width) {
+__declspec(naked) void ARGBColorTableRow_X86(uint8_t* dst_argb,
+                                             const uint8_t* table_argb,
+                                             int width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]   /* dst_argb */
-    mov        esi, [esp + 4 + 8]   /* table_argb */
-    mov        ecx, [esp + 4 + 12]  /* width */
+    mov        eax, [esp + 4 + 4] /* dst_argb */
+    mov        esi, [esp + 4 + 8] /* table_argb */
+    mov        ecx, [esp + 4 + 12] /* width */
 
     // 1 pixel loop.
   convertloop:
@@ -6131,13 +6094,14 @@
 
 #ifdef HAS_RGBCOLORTABLEROW_X86
 // Transform RGB pixels with color table.
-__declspec(naked)
-void RGBColorTableRow_X86(uint8* dst_argb, const uint8* table_argb, int width) {
+__declspec(naked) void RGBColorTableRow_X86(uint8_t* dst_argb,
+                                            const uint8_t* table_argb,
+                                            int width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]   /* dst_argb */
-    mov        esi, [esp + 4 + 8]   /* table_argb */
-    mov        ecx, [esp + 4 + 12]  /* width */
+    mov        eax, [esp + 4 + 4] /* dst_argb */
+    mov        esi, [esp + 4 + 8] /* table_argb */
+    mov        ecx, [esp + 4 + 12] /* width */
 
     // 1 pixel loop.
   convertloop:
@@ -6162,27 +6126,28 @@
 
 #ifdef HAS_ARGBLUMACOLORTABLEROW_SSSE3
 // Transform RGB pixels with luma table.
-__declspec(naked)
-void ARGBLumaColorTableRow_SSSE3(const uint8* src_argb, uint8* dst_argb,
-                                 int width,
-                                 const uint8* luma, uint32 lumacoeff) {
+__declspec(naked) void ARGBLumaColorTableRow_SSSE3(const uint8_t* src_argb,
+                                                   uint8_t* dst_argb,
+                                                   int width,
+                                                   const uint8_t* luma,
+                                                   uint32_t lumacoeff) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]   /* src_argb */
-    mov        edi, [esp + 8 + 8]   /* dst_argb */
-    mov        ecx, [esp + 8 + 12]  /* width */
+    mov        eax, [esp + 8 + 4] /* src_argb */
+    mov        edi, [esp + 8 + 8] /* dst_argb */
+    mov        ecx, [esp + 8 + 12] /* width */
     movd       xmm2, dword ptr [esp + 8 + 16]  // luma table
     movd       xmm3, dword ptr [esp + 8 + 20]  // lumacoeff
     pshufd     xmm2, xmm2, 0
     pshufd     xmm3, xmm3, 0
-    pcmpeqb    xmm4, xmm4        // generate mask 0xff00ff00
+    pcmpeqb    xmm4, xmm4  // generate mask 0xff00ff00
     psllw      xmm4, 8
     pxor       xmm5, xmm5
 
-    // 4 pixel loop.
+        // 4 pixel loop.
   convertloop:
-    movdqu     xmm0, xmmword ptr [eax]      // generate luma ptr
+    movdqu     xmm0, xmmword ptr [eax]  // generate luma ptr
     pmaddubsw  xmm0, xmm3
     phaddw     xmm0, xmm0
     pand       xmm0, xmm4  // mask out low bits
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale.cc
index 36e3fe5..2cfa1c6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale.cc
@@ -33,17 +33,25 @@
 // This is an optimized version for scaling down a plane to 1/2 of
 // its original size.
 
-static void ScalePlaneDown2(int src_width, int src_height,
-                            int dst_width, int dst_height,
-                            int src_stride, int dst_stride,
-                            const uint8* src_ptr, uint8* dst_ptr,
+static void ScalePlaneDown2(int src_width,
+                            int src_height,
+                            int dst_width,
+                            int dst_height,
+                            int src_stride,
+                            int dst_stride,
+                            const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
                             enum FilterMode filtering) {
   int y;
-  void (*ScaleRowDown2)(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst_ptr, int dst_width) =
-      filtering == kFilterNone ? ScaleRowDown2_C :
-      (filtering == kFilterLinear ? ScaleRowDown2Linear_C : ScaleRowDown2Box_C);
+  void (*ScaleRowDown2)(const uint8_t* src_ptr, ptrdiff_t src_stride,
+                        uint8_t* dst_ptr, int dst_width) =
+      filtering == kFilterNone
+          ? ScaleRowDown2_C
+          : (filtering == kFilterLinear ? ScaleRowDown2Linear_C
+                                        : ScaleRowDown2Box_C);
   int row_stride = src_stride << 1;
+  (void)src_width;
+  (void)src_height;
   if (!filtering) {
     src_ptr += src_stride;  // Point to odd rows.
     src_stride = 0;
@@ -51,46 +59,63 @@
 
 #if defined(HAS_SCALEROWDOWN2_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
-    ScaleRowDown2 = filtering == kFilterNone ? ScaleRowDown2_Any_NEON :
-        (filtering == kFilterLinear ? ScaleRowDown2Linear_Any_NEON :
-        ScaleRowDown2Box_Any_NEON);
+    ScaleRowDown2 =
+        filtering == kFilterNone
+            ? ScaleRowDown2_Any_NEON
+            : (filtering == kFilterLinear ? ScaleRowDown2Linear_Any_NEON
+                                          : ScaleRowDown2Box_Any_NEON);
     if (IS_ALIGNED(dst_width, 16)) {
-      ScaleRowDown2 = filtering == kFilterNone ? ScaleRowDown2_NEON :
-          (filtering == kFilterLinear ? ScaleRowDown2Linear_NEON :
-          ScaleRowDown2Box_NEON);
+      ScaleRowDown2 = filtering == kFilterNone ? ScaleRowDown2_NEON
+                                               : (filtering == kFilterLinear
+                                                      ? ScaleRowDown2Linear_NEON
+                                                      : ScaleRowDown2Box_NEON);
     }
   }
 #endif
 #if defined(HAS_SCALEROWDOWN2_SSSE3)
   if (TestCpuFlag(kCpuHasSSSE3)) {
-    ScaleRowDown2 = filtering == kFilterNone ? ScaleRowDown2_Any_SSSE3 :
-        (filtering == kFilterLinear ? ScaleRowDown2Linear_Any_SSSE3 :
-        ScaleRowDown2Box_Any_SSSE3);
+    ScaleRowDown2 =
+        filtering == kFilterNone
+            ? ScaleRowDown2_Any_SSSE3
+            : (filtering == kFilterLinear ? ScaleRowDown2Linear_Any_SSSE3
+                                          : ScaleRowDown2Box_Any_SSSE3);
     if (IS_ALIGNED(dst_width, 16)) {
-      ScaleRowDown2 = filtering == kFilterNone ? ScaleRowDown2_SSSE3 :
-          (filtering == kFilterLinear ? ScaleRowDown2Linear_SSSE3 :
-          ScaleRowDown2Box_SSSE3);
+      ScaleRowDown2 =
+          filtering == kFilterNone
+              ? ScaleRowDown2_SSSE3
+              : (filtering == kFilterLinear ? ScaleRowDown2Linear_SSSE3
+                                            : ScaleRowDown2Box_SSSE3);
     }
   }
 #endif
 #if defined(HAS_SCALEROWDOWN2_AVX2)
   if (TestCpuFlag(kCpuHasAVX2)) {
-    ScaleRowDown2 = filtering == kFilterNone ? ScaleRowDown2_Any_AVX2 :
-        (filtering == kFilterLinear ? ScaleRowDown2Linear_Any_AVX2 :
-        ScaleRowDown2Box_Any_AVX2);
+    ScaleRowDown2 =
+        filtering == kFilterNone
+            ? ScaleRowDown2_Any_AVX2
+            : (filtering == kFilterLinear ? ScaleRowDown2Linear_Any_AVX2
+                                          : ScaleRowDown2Box_Any_AVX2);
     if (IS_ALIGNED(dst_width, 32)) {
-      ScaleRowDown2 = filtering == kFilterNone ? ScaleRowDown2_AVX2 :
-          (filtering == kFilterLinear ? ScaleRowDown2Linear_AVX2 :
-          ScaleRowDown2Box_AVX2);
+      ScaleRowDown2 = filtering == kFilterNone ? ScaleRowDown2_AVX2
+                                               : (filtering == kFilterLinear
+                                                      ? ScaleRowDown2Linear_AVX2
+                                                      : ScaleRowDown2Box_AVX2);
     }
   }
 #endif
-#if defined(HAS_SCALEROWDOWN2_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && IS_ALIGNED(src_ptr, 4) &&
-      IS_ALIGNED(src_stride, 4) && IS_ALIGNED(row_stride, 4) &&
-      IS_ALIGNED(dst_ptr, 4) && IS_ALIGNED(dst_stride, 4)) {
-    ScaleRowDown2 = filtering ?
-        ScaleRowDown2Box_DSPR2 : ScaleRowDown2_DSPR2;
+#if defined(HAS_SCALEROWDOWN2_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ScaleRowDown2 =
+        filtering == kFilterNone
+            ? ScaleRowDown2_Any_MSA
+            : (filtering == kFilterLinear ? ScaleRowDown2Linear_Any_MSA
+                                          : ScaleRowDown2Box_Any_MSA);
+    if (IS_ALIGNED(dst_width, 32)) {
+      ScaleRowDown2 = filtering == kFilterNone ? ScaleRowDown2_MSA
+                                               : (filtering == kFilterLinear
+                                                      ? ScaleRowDown2Linear_MSA
+                                                      : ScaleRowDown2Box_MSA);
+    }
   }
 #endif
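Each ScalePlane* function selects its row routine with the same ladder: start from the portable C version, replace it with an "_Any_" SIMD variant when the CPU flag is set (handles any width), then with the full-width variant when dst_width is suitably aligned. A minimal sketch of that selection pattern (the Row* functions and the width threshold are hypothetical stand-ins, not libyuv symbols):

```cpp
#include <cassert>

typedef int (*RowFunc)(int width);

// Hypothetical row variants standing in for ScaleRowDown2_C /
// _Any_SSSE3 / _SSSE3; the return value identifies which variant ran.
static int RowC(int width) { (void)width; return 0; }
static int RowAnySIMD(int width) { (void)width; return 1; }
static int RowSIMD(int width) { (void)width; return 2; }

// Mirror of the TestCpuFlag()/IS_ALIGNED() ladder in ScalePlaneDown2.
static RowFunc SelectRow(bool has_simd, int dst_width) {
  RowFunc f = RowC;        // portable fallback
  if (has_simd) {
    f = RowAnySIMD;        // SIMD with scalar cleanup for any width
    if (dst_width % 16 == 0) {
      f = RowSIMD;         // full-width fast path only
    }
  }
  return f;
}
```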
 
@@ -105,18 +130,25 @@
   }
 }
 
-static void ScalePlaneDown2_16(int src_width, int src_height,
-                               int dst_width, int dst_height,
-                               int src_stride, int dst_stride,
-                               const uint16* src_ptr, uint16* dst_ptr,
+static void ScalePlaneDown2_16(int src_width,
+                               int src_height,
+                               int dst_width,
+                               int dst_height,
+                               int src_stride,
+                               int dst_stride,
+                               const uint16_t* src_ptr,
+                               uint16_t* dst_ptr,
                                enum FilterMode filtering) {
   int y;
-  void (*ScaleRowDown2)(const uint16* src_ptr, ptrdiff_t src_stride,
-                        uint16* dst_ptr, int dst_width) =
-    filtering == kFilterNone ? ScaleRowDown2_16_C :
-        (filtering == kFilterLinear ? ScaleRowDown2Linear_16_C :
-        ScaleRowDown2Box_16_C);
+  void (*ScaleRowDown2)(const uint16_t* src_ptr, ptrdiff_t src_stride,
+                        uint16_t* dst_ptr, int dst_width) =
+      filtering == kFilterNone
+          ? ScaleRowDown2_16_C
+          : (filtering == kFilterLinear ? ScaleRowDown2Linear_16_C
+                                        : ScaleRowDown2Box_16_C);
   int row_stride = src_stride << 1;
+  (void)src_width;
+  (void)src_height;
   if (!filtering) {
     src_ptr += src_stride;  // Point to odd rows.
     src_stride = 0;
@@ -124,23 +156,17 @@
 
 #if defined(HAS_SCALEROWDOWN2_16_NEON)
   if (TestCpuFlag(kCpuHasNEON) && IS_ALIGNED(dst_width, 16)) {
-    ScaleRowDown2 = filtering ? ScaleRowDown2Box_16_NEON :
-        ScaleRowDown2_16_NEON;
+    ScaleRowDown2 =
+        filtering ? ScaleRowDown2Box_16_NEON : ScaleRowDown2_16_NEON;
   }
 #endif
 #if defined(HAS_SCALEROWDOWN2_16_SSE2)
   if (TestCpuFlag(kCpuHasSSE2) && IS_ALIGNED(dst_width, 16)) {
-    ScaleRowDown2 = filtering == kFilterNone ? ScaleRowDown2_16_SSE2 :
-        (filtering == kFilterLinear ? ScaleRowDown2Linear_16_SSE2 :
-        ScaleRowDown2Box_16_SSE2);
-  }
-#endif
-#if defined(HAS_SCALEROWDOWN2_16_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && IS_ALIGNED(src_ptr, 4) &&
-      IS_ALIGNED(src_stride, 4) && IS_ALIGNED(row_stride, 4) &&
-      IS_ALIGNED(dst_ptr, 4) && IS_ALIGNED(dst_stride, 4)) {
-    ScaleRowDown2 = filtering ?
-        ScaleRowDown2Box_16_DSPR2 : ScaleRowDown2_16_DSPR2;
+    ScaleRowDown2 =
+        filtering == kFilterNone
+            ? ScaleRowDown2_16_SSE2
+            : (filtering == kFilterLinear ? ScaleRowDown2Linear_16_SSE2
+                                          : ScaleRowDown2Box_16_SSE2);
   }
 #endif
 
@@ -159,24 +185,30 @@
 // This is an optimized version for scaling down a plane to 1/4 of
 // its original size.
 
-static void ScalePlaneDown4(int src_width, int src_height,
-                            int dst_width, int dst_height,
-                            int src_stride, int dst_stride,
-                            const uint8* src_ptr, uint8* dst_ptr,
+static void ScalePlaneDown4(int src_width,
+                            int src_height,
+                            int dst_width,
+                            int dst_height,
+                            int src_stride,
+                            int dst_stride,
+                            const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
                             enum FilterMode filtering) {
   int y;
-  void (*ScaleRowDown4)(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst_ptr, int dst_width) =
+  void (*ScaleRowDown4)(const uint8_t* src_ptr, ptrdiff_t src_stride,
+                        uint8_t* dst_ptr, int dst_width) =
       filtering ? ScaleRowDown4Box_C : ScaleRowDown4_C;
   int row_stride = src_stride << 2;
+  (void)src_width;
+  (void)src_height;
   if (!filtering) {
     src_ptr += src_stride * 2;  // Point to row 2.
     src_stride = 0;
   }
 #if defined(HAS_SCALEROWDOWN4_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
-    ScaleRowDown4 = filtering ?
-        ScaleRowDown4Box_Any_NEON : ScaleRowDown4_Any_NEON;
+    ScaleRowDown4 =
+        filtering ? ScaleRowDown4Box_Any_NEON : ScaleRowDown4_Any_NEON;
     if (IS_ALIGNED(dst_width, 8)) {
       ScaleRowDown4 = filtering ? ScaleRowDown4Box_NEON : ScaleRowDown4_NEON;
     }
@@ -184,8 +216,8 @@
 #endif
 #if defined(HAS_SCALEROWDOWN4_SSSE3)
   if (TestCpuFlag(kCpuHasSSSE3)) {
-    ScaleRowDown4 = filtering ?
-        ScaleRowDown4Box_Any_SSSE3 : ScaleRowDown4_Any_SSSE3;
+    ScaleRowDown4 =
+        filtering ? ScaleRowDown4Box_Any_SSSE3 : ScaleRowDown4_Any_SSSE3;
     if (IS_ALIGNED(dst_width, 8)) {
       ScaleRowDown4 = filtering ? ScaleRowDown4Box_SSSE3 : ScaleRowDown4_SSSE3;
     }
@@ -193,19 +225,20 @@
 #endif
 #if defined(HAS_SCALEROWDOWN4_AVX2)
   if (TestCpuFlag(kCpuHasAVX2)) {
-    ScaleRowDown4 = filtering ?
-        ScaleRowDown4Box_Any_AVX2 : ScaleRowDown4_Any_AVX2;
+    ScaleRowDown4 =
+        filtering ? ScaleRowDown4Box_Any_AVX2 : ScaleRowDown4_Any_AVX2;
     if (IS_ALIGNED(dst_width, 16)) {
       ScaleRowDown4 = filtering ? ScaleRowDown4Box_AVX2 : ScaleRowDown4_AVX2;
     }
   }
 #endif
-#if defined(HAS_SCALEROWDOWN4_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && IS_ALIGNED(row_stride, 4) &&
-      IS_ALIGNED(src_ptr, 4) && IS_ALIGNED(src_stride, 4) &&
-      IS_ALIGNED(dst_ptr, 4) && IS_ALIGNED(dst_stride, 4)) {
-    ScaleRowDown4 = filtering ?
-        ScaleRowDown4Box_DSPR2 : ScaleRowDown4_DSPR2;
+#if defined(HAS_SCALEROWDOWN4_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ScaleRowDown4 =
+        filtering ? ScaleRowDown4Box_Any_MSA : ScaleRowDown4_Any_MSA;
+    if (IS_ALIGNED(dst_width, 16)) {
+      ScaleRowDown4 = filtering ? ScaleRowDown4Box_MSA : ScaleRowDown4_MSA;
+    }
   }
 #endif
 
@@ -219,38 +252,36 @@
   }
 }
 
-static void ScalePlaneDown4_16(int src_width, int src_height,
-                               int dst_width, int dst_height,
-                               int src_stride, int dst_stride,
-                               const uint16* src_ptr, uint16* dst_ptr,
+static void ScalePlaneDown4_16(int src_width,
+                               int src_height,
+                               int dst_width,
+                               int dst_height,
+                               int src_stride,
+                               int dst_stride,
+                               const uint16_t* src_ptr,
+                               uint16_t* dst_ptr,
                                enum FilterMode filtering) {
   int y;
-  void (*ScaleRowDown4)(const uint16* src_ptr, ptrdiff_t src_stride,
-                        uint16* dst_ptr, int dst_width) =
+  void (*ScaleRowDown4)(const uint16_t* src_ptr, ptrdiff_t src_stride,
+                        uint16_t* dst_ptr, int dst_width) =
       filtering ? ScaleRowDown4Box_16_C : ScaleRowDown4_16_C;
   int row_stride = src_stride << 2;
+  (void)src_width;
+  (void)src_height;
   if (!filtering) {
     src_ptr += src_stride * 2;  // Point to row 2.
     src_stride = 0;
   }
 #if defined(HAS_SCALEROWDOWN4_16_NEON)
   if (TestCpuFlag(kCpuHasNEON) && IS_ALIGNED(dst_width, 8)) {
-    ScaleRowDown4 = filtering ? ScaleRowDown4Box_16_NEON :
-        ScaleRowDown4_16_NEON;
+    ScaleRowDown4 =
+        filtering ? ScaleRowDown4Box_16_NEON : ScaleRowDown4_16_NEON;
   }
 #endif
 #if defined(HAS_SCALEROWDOWN4_16_SSE2)
   if (TestCpuFlag(kCpuHasSSE2) && IS_ALIGNED(dst_width, 8)) {
-    ScaleRowDown4 = filtering ? ScaleRowDown4Box_16_SSE2 :
-        ScaleRowDown4_16_SSE2;
-  }
-#endif
-#if defined(HAS_SCALEROWDOWN4_16_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && IS_ALIGNED(row_stride, 4) &&
-      IS_ALIGNED(src_ptr, 4) && IS_ALIGNED(src_stride, 4) &&
-      IS_ALIGNED(dst_ptr, 4) && IS_ALIGNED(dst_stride, 4)) {
-    ScaleRowDown4 = filtering ?
-        ScaleRowDown4Box_16_DSPR2 : ScaleRowDown4_16_DSPR2;
+    ScaleRowDown4 =
+        filtering ? ScaleRowDown4Box_16_SSE2 : ScaleRowDown4_16_SSE2;
   }
 #endif
 
@@ -265,18 +296,23 @@
 }
 
 // Scale plane down, 3/4
-
-static void ScalePlaneDown34(int src_width, int src_height,
-                             int dst_width, int dst_height,
-                             int src_stride, int dst_stride,
-                             const uint8* src_ptr, uint8* dst_ptr,
+static void ScalePlaneDown34(int src_width,
+                             int src_height,
+                             int dst_width,
+                             int dst_height,
+                             int src_stride,
+                             int dst_stride,
+                             const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
                              enum FilterMode filtering) {
   int y;
-  void (*ScaleRowDown34_0)(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst_ptr, int dst_width);
-  void (*ScaleRowDown34_1)(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst_ptr, int dst_width);
+  void (*ScaleRowDown34_0)(const uint8_t* src_ptr, ptrdiff_t src_stride,
+                           uint8_t* dst_ptr, int dst_width);
+  void (*ScaleRowDown34_1)(const uint8_t* src_ptr, ptrdiff_t src_stride,
+                           uint8_t* dst_ptr, int dst_width);
   const int filter_stride = (filtering == kFilterLinear) ? 0 : src_stride;
+  (void)src_width;
+  (void)src_height;
   assert(dst_width % 3 == 0);
   if (!filtering) {
     ScaleRowDown34_0 = ScaleRowDown34_C;
@@ -305,6 +341,26 @@
     }
   }
 #endif
+#if defined(HAS_SCALEROWDOWN34_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    if (!filtering) {
+      ScaleRowDown34_0 = ScaleRowDown34_Any_MSA;
+      ScaleRowDown34_1 = ScaleRowDown34_Any_MSA;
+    } else {
+      ScaleRowDown34_0 = ScaleRowDown34_0_Box_Any_MSA;
+      ScaleRowDown34_1 = ScaleRowDown34_1_Box_Any_MSA;
+    }
+    if (dst_width % 48 == 0) {
+      if (!filtering) {
+        ScaleRowDown34_0 = ScaleRowDown34_MSA;
+        ScaleRowDown34_1 = ScaleRowDown34_MSA;
+      } else {
+        ScaleRowDown34_0 = ScaleRowDown34_0_Box_MSA;
+        ScaleRowDown34_1 = ScaleRowDown34_1_Box_MSA;
+      }
+    }
+  }
+#endif
 #if defined(HAS_SCALEROWDOWN34_SSSE3)
   if (TestCpuFlag(kCpuHasSSSE3)) {
     if (!filtering) {
@@ -325,19 +381,6 @@
     }
   }
 #endif
-#if defined(HAS_SCALEROWDOWN34_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && (dst_width % 24 == 0) &&
-      IS_ALIGNED(src_ptr, 4) && IS_ALIGNED(src_stride, 4) &&
-      IS_ALIGNED(dst_ptr, 4) && IS_ALIGNED(dst_stride, 4)) {
-    if (!filtering) {
-      ScaleRowDown34_0 = ScaleRowDown34_DSPR2;
-      ScaleRowDown34_1 = ScaleRowDown34_DSPR2;
-    } else {
-      ScaleRowDown34_0 = ScaleRowDown34_0_Box_DSPR2;
-      ScaleRowDown34_1 = ScaleRowDown34_1_Box_DSPR2;
-    }
-  }
-#endif
 
   for (y = 0; y < dst_height - 2; y += 3) {
     ScaleRowDown34_0(src_ptr, filter_stride, dst_ptr, dst_width);
@@ -346,8 +389,7 @@
     ScaleRowDown34_1(src_ptr, filter_stride, dst_ptr, dst_width);
     src_ptr += src_stride;
     dst_ptr += dst_stride;
-    ScaleRowDown34_0(src_ptr + src_stride, -filter_stride,
-                     dst_ptr, dst_width);
+    ScaleRowDown34_0(src_ptr + src_stride, -filter_stride, dst_ptr, dst_width);
     src_ptr += src_stride * 2;
     dst_ptr += dst_stride;
   }
@@ -363,17 +405,23 @@
   }
 }
 
-static void ScalePlaneDown34_16(int src_width, int src_height,
-                                int dst_width, int dst_height,
-                                int src_stride, int dst_stride,
-                                const uint16* src_ptr, uint16* dst_ptr,
+static void ScalePlaneDown34_16(int src_width,
+                                int src_height,
+                                int dst_width,
+                                int dst_height,
+                                int src_stride,
+                                int dst_stride,
+                                const uint16_t* src_ptr,
+                                uint16_t* dst_ptr,
                                 enum FilterMode filtering) {
   int y;
-  void (*ScaleRowDown34_0)(const uint16* src_ptr, ptrdiff_t src_stride,
-                           uint16* dst_ptr, int dst_width);
-  void (*ScaleRowDown34_1)(const uint16* src_ptr, ptrdiff_t src_stride,
-                           uint16* dst_ptr, int dst_width);
+  void (*ScaleRowDown34_0)(const uint16_t* src_ptr, ptrdiff_t src_stride,
+                           uint16_t* dst_ptr, int dst_width);
+  void (*ScaleRowDown34_1)(const uint16_t* src_ptr, ptrdiff_t src_stride,
+                           uint16_t* dst_ptr, int dst_width);
   const int filter_stride = (filtering == kFilterLinear) ? 0 : src_stride;
+  (void)src_width;
+  (void)src_height;
   assert(dst_width % 3 == 0);
   if (!filtering) {
     ScaleRowDown34_0 = ScaleRowDown34_16_C;
@@ -404,19 +452,6 @@
     }
   }
 #endif
-#if defined(HAS_SCALEROWDOWN34_16_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && (dst_width % 24 == 0) &&
-      IS_ALIGNED(src_ptr, 4) && IS_ALIGNED(src_stride, 4) &&
-      IS_ALIGNED(dst_ptr, 4) && IS_ALIGNED(dst_stride, 4)) {
-    if (!filtering) {
-      ScaleRowDown34_0 = ScaleRowDown34_16_DSPR2;
-      ScaleRowDown34_1 = ScaleRowDown34_16_DSPR2;
-    } else {
-      ScaleRowDown34_0 = ScaleRowDown34_0_Box_16_DSPR2;
-      ScaleRowDown34_1 = ScaleRowDown34_1_Box_16_DSPR2;
-    }
-  }
-#endif
 
   for (y = 0; y < dst_height - 2; y += 3) {
     ScaleRowDown34_0(src_ptr, filter_stride, dst_ptr, dst_width);
@@ -425,8 +460,7 @@
     ScaleRowDown34_1(src_ptr, filter_stride, dst_ptr, dst_width);
     src_ptr += src_stride;
     dst_ptr += dst_stride;
-    ScaleRowDown34_0(src_ptr + src_stride, -filter_stride,
-                     dst_ptr, dst_width);
+    ScaleRowDown34_0(src_ptr + src_stride, -filter_stride, dst_ptr, dst_width);
     src_ptr += src_stride * 2;
     dst_ptr += dst_stride;
   }
@@ -442,7 +476,6 @@
   }
 }
 
-
 // Scale plane, 3/8
 // This is an optimized version for scaling down a plane to 3/8
 // of its original size.
@@ -458,18 +491,24 @@
 // ggghhhii
 // Boxes are 3x3, 2x3, 3x2 and 2x2
 
-static void ScalePlaneDown38(int src_width, int src_height,
-                             int dst_width, int dst_height,
-                             int src_stride, int dst_stride,
-                             const uint8* src_ptr, uint8* dst_ptr,
+static void ScalePlaneDown38(int src_width,
+                             int src_height,
+                             int dst_width,
+                             int dst_height,
+                             int src_stride,
+                             int dst_stride,
+                             const uint8_t* src_ptr,
+                             uint8_t* dst_ptr,
                              enum FilterMode filtering) {
   int y;
-  void (*ScaleRowDown38_3)(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst_ptr, int dst_width);
-  void (*ScaleRowDown38_2)(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst_ptr, int dst_width);
+  void (*ScaleRowDown38_3)(const uint8_t* src_ptr, ptrdiff_t src_stride,
+                           uint8_t* dst_ptr, int dst_width);
+  void (*ScaleRowDown38_2)(const uint8_t* src_ptr, ptrdiff_t src_stride,
+                           uint8_t* dst_ptr, int dst_width);
   const int filter_stride = (filtering == kFilterLinear) ? 0 : src_stride;
   assert(dst_width % 3 == 0);
+  (void)src_width;
+  (void)src_height;
   if (!filtering) {
     ScaleRowDown38_3 = ScaleRowDown38_C;
     ScaleRowDown38_2 = ScaleRowDown38_C;
@@ -517,16 +556,23 @@
     }
   }
 #endif
-#if defined(HAS_SCALEROWDOWN38_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && (dst_width % 12 == 0) &&
-      IS_ALIGNED(src_ptr, 4) && IS_ALIGNED(src_stride, 4) &&
-      IS_ALIGNED(dst_ptr, 4) && IS_ALIGNED(dst_stride, 4)) {
+#if defined(HAS_SCALEROWDOWN38_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
     if (!filtering) {
-      ScaleRowDown38_3 = ScaleRowDown38_DSPR2;
-      ScaleRowDown38_2 = ScaleRowDown38_DSPR2;
+      ScaleRowDown38_3 = ScaleRowDown38_Any_MSA;
+      ScaleRowDown38_2 = ScaleRowDown38_Any_MSA;
     } else {
-      ScaleRowDown38_3 = ScaleRowDown38_3_Box_DSPR2;
-      ScaleRowDown38_2 = ScaleRowDown38_2_Box_DSPR2;
+      ScaleRowDown38_3 = ScaleRowDown38_3_Box_Any_MSA;
+      ScaleRowDown38_2 = ScaleRowDown38_2_Box_Any_MSA;
+    }
+    if (dst_width % 12 == 0) {
+      if (!filtering) {
+        ScaleRowDown38_3 = ScaleRowDown38_MSA;
+        ScaleRowDown38_2 = ScaleRowDown38_MSA;
+      } else {
+        ScaleRowDown38_3 = ScaleRowDown38_3_Box_MSA;
+        ScaleRowDown38_2 = ScaleRowDown38_2_Box_MSA;
+      }
     }
   }
 #endif
@@ -554,17 +600,23 @@
   }
 }
 
-static void ScalePlaneDown38_16(int src_width, int src_height,
-                                int dst_width, int dst_height,
-                                int src_stride, int dst_stride,
-                                const uint16* src_ptr, uint16* dst_ptr,
+static void ScalePlaneDown38_16(int src_width,
+                                int src_height,
+                                int dst_width,
+                                int dst_height,
+                                int src_stride,
+                                int dst_stride,
+                                const uint16_t* src_ptr,
+                                uint16_t* dst_ptr,
                                 enum FilterMode filtering) {
   int y;
-  void (*ScaleRowDown38_3)(const uint16* src_ptr, ptrdiff_t src_stride,
-                           uint16* dst_ptr, int dst_width);
-  void (*ScaleRowDown38_2)(const uint16* src_ptr, ptrdiff_t src_stride,
-                           uint16* dst_ptr, int dst_width);
+  void (*ScaleRowDown38_3)(const uint16_t* src_ptr, ptrdiff_t src_stride,
+                           uint16_t* dst_ptr, int dst_width);
+  void (*ScaleRowDown38_2)(const uint16_t* src_ptr, ptrdiff_t src_stride,
+                           uint16_t* dst_ptr, int dst_width);
   const int filter_stride = (filtering == kFilterLinear) ? 0 : src_stride;
+  (void)src_width;
+  (void)src_height;
   assert(dst_width % 3 == 0);
   if (!filtering) {
     ScaleRowDown38_3 = ScaleRowDown38_16_C;
@@ -595,19 +647,6 @@
     }
   }
 #endif
-#if defined(HAS_SCALEROWDOWN38_16_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && (dst_width % 12 == 0) &&
-      IS_ALIGNED(src_ptr, 4) && IS_ALIGNED(src_stride, 4) &&
-      IS_ALIGNED(dst_ptr, 4) && IS_ALIGNED(dst_stride, 4)) {
-    if (!filtering) {
-      ScaleRowDown38_3 = ScaleRowDown38_16_DSPR2;
-      ScaleRowDown38_2 = ScaleRowDown38_16_DSPR2;
-    } else {
-      ScaleRowDown38_3 = ScaleRowDown38_3_Box_16_DSPR2;
-      ScaleRowDown38_2 = ScaleRowDown38_2_Box_16_DSPR2;
-    }
-  }
-#endif
 
   for (y = 0; y < dst_height - 2; y += 3) {
     ScaleRowDown38_3(src_ptr, filter_stride, dst_ptr, dst_width);
@@ -634,8 +673,8 @@
 
 #define MIN1(x) ((x) < 1 ? 1 : (x))
 
-static __inline uint32 SumPixels(int iboxwidth, const uint16* src_ptr) {
-  uint32 sum = 0u;
+static __inline uint32_t SumPixels(int iboxwidth, const uint16_t* src_ptr) {
+  uint32_t sum = 0u;
   int x;
   assert(iboxwidth > 0);
   for (x = 0; x < iboxwidth; ++x) {
@@ -644,8 +683,8 @@
   return sum;
 }
 
-static __inline uint32 SumPixels_16(int iboxwidth, const uint32* src_ptr) {
-  uint32 sum = 0u;
+static __inline uint32_t SumPixels_16(int iboxwidth, const uint32_t* src_ptr) {
+  uint32_t sum = 0u;
   int x;
   assert(iboxwidth > 0);
   for (x = 0; x < iboxwidth; ++x) {
@@ -654,8 +693,12 @@
   return sum;
 }
 
-static void ScaleAddCols2_C(int dst_width, int boxheight, int x, int dx,
-                            const uint16* src_ptr, uint8* dst_ptr) {
+static void ScaleAddCols2_C(int dst_width,
+                            int boxheight,
+                            int x,
+                            int dx,
+                            const uint16_t* src_ptr,
+                            uint8_t* dst_ptr) {
   int i;
   int scaletbl[2];
   int minboxwidth = dx >> 16;
@@ -666,13 +709,18 @@
     int ix = x >> 16;
     x += dx;
     boxwidth = MIN1((x >> 16) - ix);
-    *dst_ptr++ = SumPixels(boxwidth, src_ptr + ix) *
-        scaletbl[boxwidth - minboxwidth] >> 16;
+    *dst_ptr++ =
+        SumPixels(boxwidth, src_ptr + ix) * scaletbl[boxwidth - minboxwidth] >>
+        16;
   }
 }
 
-static void ScaleAddCols2_16_C(int dst_width, int boxheight, int x, int dx,
-                               const uint32* src_ptr, uint16* dst_ptr) {
+static void ScaleAddCols2_16_C(int dst_width,
+                               int boxheight,
+                               int x,
+                               int dx,
+                               const uint32_t* src_ptr,
+                               uint16_t* dst_ptr) {
   int i;
   int scaletbl[2];
   int minboxwidth = dx >> 16;
@@ -684,22 +732,32 @@
     x += dx;
     boxwidth = MIN1((x >> 16) - ix);
     *dst_ptr++ = SumPixels_16(boxwidth, src_ptr + ix) *
-        scaletbl[boxwidth - minboxwidth]  >> 16;
+                     scaletbl[boxwidth - minboxwidth] >>
+                 16;
   }
 }
 
-static void ScaleAddCols0_C(int dst_width, int boxheight, int x, int,
-                            const uint16* src_ptr, uint8* dst_ptr) {
+static void ScaleAddCols0_C(int dst_width,
+                            int boxheight,
+                            int x,
+                            int dx,
+                            const uint16_t* src_ptr,
+                            uint8_t* dst_ptr) {
   int scaleval = 65536 / boxheight;
   int i;
+  (void)dx;
   src_ptr += (x >> 16);
   for (i = 0; i < dst_width; ++i) {
     *dst_ptr++ = src_ptr[i] * scaleval >> 16;
   }
 }
 
-static void ScaleAddCols1_C(int dst_width, int boxheight, int x, int dx,
-                            const uint16* src_ptr, uint8* dst_ptr) {
+static void ScaleAddCols1_C(int dst_width,
+                            int boxheight,
+                            int x,
+                            int dx,
+                            const uint16_t* src_ptr,
+                            uint8_t* dst_ptr) {
   int boxwidth = MIN1(dx >> 16);
   int scaleval = 65536 / (boxwidth * boxheight);
   int i;
@@ -710,8 +768,12 @@
   }
 }
 
-static void ScaleAddCols1_16_C(int dst_width, int boxheight, int x, int dx,
-                               const uint32* src_ptr, uint16* dst_ptr) {
+static void ScaleAddCols1_16_C(int dst_width,
+                               int boxheight,
+                               int x,
+                               int dx,
+                               const uint32_t* src_ptr,
+                               uint16_t* dst_ptr) {
   int boxwidth = MIN1(dx >> 16);
   int scaleval = 65536 / (boxwidth * boxheight);
   int i;
@@ -728,10 +790,14 @@
 // one pixel of destination using fixed point (16.16) to step
 // through source, sampling a box of pixel with simple
 // averaging.
-static void ScalePlaneBox(int src_width, int src_height,
-                          int dst_width, int dst_height,
-                          int src_stride, int dst_stride,
-                          const uint8* src_ptr, uint8* dst_ptr) {
+static void ScalePlaneBox(int src_width,
+                          int src_height,
+                          int dst_width,
+                          int dst_height,
+                          int src_stride,
+                          int dst_stride,
+                          const uint8_t* src_ptr,
+                          uint8_t* dst_ptr) {
   int j, k;
   // Initial source x/y coordinate and step values as 16.16 fixed point.
   int x = 0;
@@ -739,18 +805,18 @@
   int dx = 0;
   int dy = 0;
   const int max_y = (src_height << 16);
-  ScaleSlope(src_width, src_height, dst_width, dst_height, kFilterBox,
-             &x, &y, &dx, &dy);
+  ScaleSlope(src_width, src_height, dst_width, dst_height, kFilterBox, &x, &y,
+             &dx, &dy);
   src_width = Abs(src_width);
   {
-    // Allocate a row buffer of uint16.
+    // Allocate a row buffer of uint16_t.
     align_buffer_64(row16, src_width * 2);
     void (*ScaleAddCols)(int dst_width, int boxheight, int x, int dx,
-        const uint16* src_ptr, uint8* dst_ptr) =
-        (dx & 0xffff) ? ScaleAddCols2_C:
-        ((dx != 0x10000) ? ScaleAddCols1_C : ScaleAddCols0_C);
-    void (*ScaleAddRow)(const uint8* src_ptr, uint16* dst_ptr, int src_width) =
-        ScaleAddRow_C;
+                         const uint16_t* src_ptr, uint8_t* dst_ptr) =
+        (dx & 0xffff) ? ScaleAddCols2_C
+                      : ((dx != 0x10000) ? ScaleAddCols1_C : ScaleAddCols0_C);
+    void (*ScaleAddRow)(const uint8_t* src_ptr, uint16_t* dst_ptr,
+                        int src_width) = ScaleAddRow_C;
 #if defined(HAS_SCALEADDROW_SSE2)
     if (TestCpuFlag(kCpuHasSSE2)) {
       ScaleAddRow = ScaleAddRow_Any_SSE2;
@@ -775,11 +841,19 @@
       }
     }
 #endif
+#if defined(HAS_SCALEADDROW_MSA)
+    if (TestCpuFlag(kCpuHasMSA)) {
+      ScaleAddRow = ScaleAddRow_Any_MSA;
+      if (IS_ALIGNED(src_width, 16)) {
+        ScaleAddRow = ScaleAddRow_MSA;
+      }
+    }
+#endif
 
     for (j = 0; j < dst_height; ++j) {
       int boxheight;
       int iy = y >> 16;
-      const uint8* src = src_ptr + iy * src_stride;
+      const uint8_t* src = src_ptr + iy * src_stride;
       y += dy;
       if (y > max_y) {
         y = max_y;
@@ -787,20 +861,24 @@
       boxheight = MIN1((y >> 16) - iy);
       memset(row16, 0, src_width * 2);
       for (k = 0; k < boxheight; ++k) {
-        ScaleAddRow(src, (uint16 *)(row16), src_width);
+        ScaleAddRow(src, (uint16_t*)(row16), src_width);
         src += src_stride;
       }
-      ScaleAddCols(dst_width, boxheight, x, dx, (uint16*)(row16), dst_ptr);
+      ScaleAddCols(dst_width, boxheight, x, dx, (uint16_t*)(row16), dst_ptr);
       dst_ptr += dst_stride;
     }
     free_aligned_buffer_64(row16);
   }
 }
 
-static void ScalePlaneBox_16(int src_width, int src_height,
-                             int dst_width, int dst_height,
-                             int src_stride, int dst_stride,
-                             const uint16* src_ptr, uint16* dst_ptr) {
+static void ScalePlaneBox_16(int src_width,
+                             int src_height,
+                             int dst_width,
+                             int dst_height,
+                             int src_stride,
+                             int dst_stride,
+                             const uint16_t* src_ptr,
+                             uint16_t* dst_ptr) {
   int j, k;
   // Initial source x/y coordinate and step values as 16.16 fixed point.
   int x = 0;
@@ -808,17 +886,17 @@
   int dx = 0;
   int dy = 0;
   const int max_y = (src_height << 16);
-  ScaleSlope(src_width, src_height, dst_width, dst_height, kFilterBox,
-             &x, &y, &dx, &dy);
+  ScaleSlope(src_width, src_height, dst_width, dst_height, kFilterBox, &x, &y,
+             &dx, &dy);
   src_width = Abs(src_width);
   {
-    // Allocate a row buffer of uint32.
+    // Allocate a row buffer of uint32_t.
     align_buffer_64(row32, src_width * 4);
     void (*ScaleAddCols)(int dst_width, int boxheight, int x, int dx,
-        const uint32* src_ptr, uint16* dst_ptr) =
-        (dx & 0xffff) ? ScaleAddCols2_16_C: ScaleAddCols1_16_C;
-    void (*ScaleAddRow)(const uint16* src_ptr, uint32* dst_ptr, int src_width) =
-        ScaleAddRow_16_C;
+                         const uint32_t* src_ptr, uint16_t* dst_ptr) =
+        (dx & 0xffff) ? ScaleAddCols2_16_C : ScaleAddCols1_16_C;
+    void (*ScaleAddRow)(const uint16_t* src_ptr, uint32_t* dst_ptr,
+                        int src_width) = ScaleAddRow_16_C;
 
 #if defined(HAS_SCALEADDROW_16_SSE2)
     if (TestCpuFlag(kCpuHasSSE2) && IS_ALIGNED(src_width, 16)) {
@@ -829,7 +907,7 @@
     for (j = 0; j < dst_height; ++j) {
       int boxheight;
       int iy = y >> 16;
-      const uint16* src = src_ptr + iy * src_stride;
+      const uint16_t* src = src_ptr + iy * src_stride;
       y += dy;
       if (y > max_y) {
         y = max_y;
@@ -837,10 +915,10 @@
       boxheight = MIN1((y >> 16) - iy);
       memset(row32, 0, src_width * 4);
       for (k = 0; k < boxheight; ++k) {
-        ScaleAddRow(src, (uint32 *)(row32), src_width);
+        ScaleAddRow(src, (uint32_t*)(row32), src_width);
         src += src_stride;
       }
-      ScaleAddCols(dst_width, boxheight, x, dx, (uint32*)(row32), dst_ptr);
+      ScaleAddCols(dst_width, boxheight, x, dx, (uint32_t*)(row32), dst_ptr);
       dst_ptr += dst_stride;
     }
     free_aligned_buffer_64(row32);
@@ -848,10 +926,14 @@
 }
 
 // Scale plane down with bilinear interpolation.
-void ScalePlaneBilinearDown(int src_width, int src_height,
-                            int dst_width, int dst_height,
-                            int src_stride, int dst_stride,
-                            const uint8* src_ptr, uint8* dst_ptr,
+void ScalePlaneBilinearDown(int src_width,
+                            int src_height,
+                            int dst_width,
+                            int dst_height,
+                            int src_stride,
+                            int dst_stride,
+                            const uint8_t* src_ptr,
+                            uint8_t* dst_ptr,
                             enum FilterMode filtering) {
   // Initial source x/y coordinate and step values as 16.16 fixed point.
   int x = 0;
@@ -864,14 +946,14 @@
 
   const int max_y = (src_height - 1) << 16;
   int j;
-  void (*ScaleFilterCols)(uint8* dst_ptr, const uint8* src_ptr,
-      int dst_width, int x, int dx) =
+  void (*ScaleFilterCols)(uint8_t * dst_ptr, const uint8_t* src_ptr,
+                          int dst_width, int x, int dx) =
       (src_width >= 32768) ? ScaleFilterCols64_C : ScaleFilterCols_C;
-  void (*InterpolateRow)(uint8* dst_ptr, const uint8* src_ptr,
-      ptrdiff_t src_stride, int dst_width, int source_y_fraction) =
-      InterpolateRow_C;
-  ScaleSlope(src_width, src_height, dst_width, dst_height, filtering,
-             &x, &y, &dx, &dy);
+  void (*InterpolateRow)(uint8_t * dst_ptr, const uint8_t* src_ptr,
+                         ptrdiff_t src_stride, int dst_width,
+                         int source_y_fraction) = InterpolateRow_C;
+  ScaleSlope(src_width, src_height, dst_width, dst_height, filtering, &x, &y,
+             &dx, &dy);
   src_width = Abs(src_width);
 
 #if defined(HAS_INTERPOLATEROW_SSSE3)
@@ -898,16 +980,15 @@
     }
   }
 #endif
-#if defined(HAS_INTERPOLATEROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2)) {
-    InterpolateRow = InterpolateRow_Any_DSPR2;
-    if (IS_ALIGNED(src_width, 4)) {
-      InterpolateRow = InterpolateRow_DSPR2;
+#if defined(HAS_INTERPOLATEROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    InterpolateRow = InterpolateRow_Any_MSA;
+    if (IS_ALIGNED(src_width, 32)) {
+      InterpolateRow = InterpolateRow_MSA;
     }
   }
 #endif
 
-
 #if defined(HAS_SCALEFILTERCOLS_SSSE3)
   if (TestCpuFlag(kCpuHasSSSE3) && src_width < 32768) {
     ScaleFilterCols = ScaleFilterCols_SSSE3;
@@ -921,13 +1002,21 @@
     }
   }
 #endif
+#if defined(HAS_SCALEFILTERCOLS_MSA)
+  if (TestCpuFlag(kCpuHasMSA) && src_width < 32768) {
+    ScaleFilterCols = ScaleFilterCols_Any_MSA;
+    if (IS_ALIGNED(dst_width, 16)) {
+      ScaleFilterCols = ScaleFilterCols_MSA;
+    }
+  }
+#endif
   if (y > max_y) {
     y = max_y;
   }
 
   for (j = 0; j < dst_height; ++j) {
     int yi = y >> 16;
-    const uint8* src = src_ptr + yi * src_stride;
+    const uint8_t* src = src_ptr + yi * src_stride;
     if (filtering == kFilterLinear) {
       ScaleFilterCols(dst_ptr, src, dst_width, x, dx);
     } else {
@@ -944,10 +1033,14 @@
   free_aligned_buffer_64(row);
 }
 
-void ScalePlaneBilinearDown_16(int src_width, int src_height,
-                               int dst_width, int dst_height,
-                               int src_stride, int dst_stride,
-                               const uint16* src_ptr, uint16* dst_ptr,
+void ScalePlaneBilinearDown_16(int src_width,
+                               int src_height,
+                               int dst_width,
+                               int dst_height,
+                               int src_stride,
+                               int dst_stride,
+                               const uint16_t* src_ptr,
+                               uint16_t* dst_ptr,
                                enum FilterMode filtering) {
   // Initial source x/y coordinate and step values as 16.16 fixed point.
   int x = 0;
@@ -960,14 +1053,14 @@
 
   const int max_y = (src_height - 1) << 16;
   int j;
-  void (*ScaleFilterCols)(uint16* dst_ptr, const uint16* src_ptr,
-      int dst_width, int x, int dx) =
+  void (*ScaleFilterCols)(uint16_t * dst_ptr, const uint16_t* src_ptr,
+                          int dst_width, int x, int dx) =
       (src_width >= 32768) ? ScaleFilterCols64_16_C : ScaleFilterCols_16_C;
-  void (*InterpolateRow)(uint16* dst_ptr, const uint16* src_ptr,
-      ptrdiff_t src_stride, int dst_width, int source_y_fraction) =
-      InterpolateRow_16_C;
-  ScaleSlope(src_width, src_height, dst_width, dst_height, filtering,
-             &x, &y, &dx, &dy);
+  void (*InterpolateRow)(uint16_t * dst_ptr, const uint16_t* src_ptr,
+                         ptrdiff_t src_stride, int dst_width,
+                         int source_y_fraction) = InterpolateRow_16_C;
+  ScaleSlope(src_width, src_height, dst_width, dst_height, filtering, &x, &y,
+             &dx, &dy);
   src_width = Abs(src_width);
 
 #if defined(HAS_INTERPOLATEROW_16_SSE2)
@@ -1002,15 +1095,6 @@
     }
   }
 #endif
-#if defined(HAS_INTERPOLATEROW_16_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2)) {
-    InterpolateRow = InterpolateRow_Any_16_DSPR2;
-    if (IS_ALIGNED(src_width, 4)) {
-      InterpolateRow = InterpolateRow_16_DSPR2;
-    }
-  }
-#endif
-
 
 #if defined(HAS_SCALEFILTERCOLS_16_SSSE3)
   if (TestCpuFlag(kCpuHasSSSE3) && src_width < 32768) {
@@ -1023,13 +1107,13 @@
 
   for (j = 0; j < dst_height; ++j) {
     int yi = y >> 16;
-    const uint16* src = src_ptr + yi * src_stride;
+    const uint16_t* src = src_ptr + yi * src_stride;
     if (filtering == kFilterLinear) {
       ScaleFilterCols(dst_ptr, src, dst_width, x, dx);
     } else {
       int yf = (y >> 8) & 255;
-      InterpolateRow((uint16*)row, src, src_stride, src_width, yf);
-      ScaleFilterCols(dst_ptr, (uint16*)row, dst_width, x, dx);
+      InterpolateRow((uint16_t*)row, src, src_stride, src_width, yf);
+      ScaleFilterCols(dst_ptr, (uint16_t*)row, dst_width, x, dx);
     }
     dst_ptr += dst_stride;
     y += dy;
@@ -1041,10 +1125,14 @@
 }
 
 // Scale up down with bilinear interpolation.
-void ScalePlaneBilinearUp(int src_width, int src_height,
-                          int dst_width, int dst_height,
-                          int src_stride, int dst_stride,
-                          const uint8* src_ptr, uint8* dst_ptr,
+void ScalePlaneBilinearUp(int src_width,
+                          int src_height,
+                          int dst_width,
+                          int dst_height,
+                          int src_stride,
+                          int dst_stride,
+                          const uint8_t* src_ptr,
+                          uint8_t* dst_ptr,
                           enum FilterMode filtering) {
   int j;
   // Initial source x/y coordinate and step values as 16.16 fixed point.
@@ -1053,14 +1141,14 @@
   int dx = 0;
   int dy = 0;
   const int max_y = (src_height - 1) << 16;
-  void (*InterpolateRow)(uint8* dst_ptr, const uint8* src_ptr,
-      ptrdiff_t src_stride, int dst_width, int source_y_fraction) =
-      InterpolateRow_C;
-  void (*ScaleFilterCols)(uint8* dst_ptr, const uint8* src_ptr,
-      int dst_width, int x, int dx) =
+  void (*InterpolateRow)(uint8_t * dst_ptr, const uint8_t* src_ptr,
+                         ptrdiff_t src_stride, int dst_width,
+                         int source_y_fraction) = InterpolateRow_C;
+  void (*ScaleFilterCols)(uint8_t * dst_ptr, const uint8_t* src_ptr,
+                          int dst_width, int x, int dx) =
       filtering ? ScaleFilterCols_C : ScaleCols_C;
-  ScaleSlope(src_width, src_height, dst_width, dst_height, filtering,
-             &x, &y, &dx, &dy);
+  ScaleSlope(src_width, src_height, dst_width, dst_height, filtering, &x, &y,
+             &dx, &dy);
   src_width = Abs(src_width);
 
 #if defined(HAS_INTERPOLATEROW_SSSE3)
@@ -1087,14 +1175,6 @@
     }
   }
 #endif
-#if defined(HAS_INTERPOLATEROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2)) {
-    InterpolateRow = InterpolateRow_Any_DSPR2;
-    if (IS_ALIGNED(dst_width, 4)) {
-      InterpolateRow = InterpolateRow_DSPR2;
-    }
-  }
-#endif
 
   if (filtering && src_width >= 32768) {
     ScaleFilterCols = ScaleFilterCols64_C;
@@ -1112,6 +1192,14 @@
     }
   }
 #endif
+#if defined(HAS_SCALEFILTERCOLS_MSA)
+  if (filtering && TestCpuFlag(kCpuHasMSA) && src_width < 32768) {
+    ScaleFilterCols = ScaleFilterCols_Any_MSA;
+    if (IS_ALIGNED(dst_width, 16)) {
+      ScaleFilterCols = ScaleFilterCols_MSA;
+    }
+  }
+#endif
   if (!filtering && src_width * 2 == dst_width && x < 0x8000) {
     ScaleFilterCols = ScaleColsUp2_C;
 #if defined(HAS_SCALECOLS_SSE2)
@@ -1126,13 +1214,13 @@
   }
   {
     int yi = y >> 16;
-    const uint8* src = src_ptr + yi * src_stride;
+    const uint8_t* src = src_ptr + yi * src_stride;
 
     // Allocate 2 row buffers.
     const int kRowSize = (dst_width + 31) & ~31;
     align_buffer_64(row, kRowSize * 2);
 
-    uint8* rowptr = row;
+    uint8_t* rowptr = row;
     int rowstride = kRowSize;
     int lasty = yi;
 
@@ -1172,10 +1260,14 @@
   }
 }
 
-void ScalePlaneBilinearUp_16(int src_width, int src_height,
-                             int dst_width, int dst_height,
-                             int src_stride, int dst_stride,
-                             const uint16* src_ptr, uint16* dst_ptr,
+void ScalePlaneBilinearUp_16(int src_width,
+                             int src_height,
+                             int dst_width,
+                             int dst_height,
+                             int src_stride,
+                             int dst_stride,
+                             const uint16_t* src_ptr,
+                             uint16_t* dst_ptr,
                              enum FilterMode filtering) {
   int j;
   // Initial source x/y coordinate and step values as 16.16 fixed point.
@@ -1184,14 +1276,14 @@
   int dx = 0;
   int dy = 0;
   const int max_y = (src_height - 1) << 16;
-  void (*InterpolateRow)(uint16* dst_ptr, const uint16* src_ptr,
-      ptrdiff_t src_stride, int dst_width, int source_y_fraction) =
-      InterpolateRow_16_C;
-  void (*ScaleFilterCols)(uint16* dst_ptr, const uint16* src_ptr,
-      int dst_width, int x, int dx) =
+  void (*InterpolateRow)(uint16_t * dst_ptr, const uint16_t* src_ptr,
+                         ptrdiff_t src_stride, int dst_width,
+                         int source_y_fraction) = InterpolateRow_16_C;
+  void (*ScaleFilterCols)(uint16_t * dst_ptr, const uint16_t* src_ptr,
+                          int dst_width, int x, int dx) =
       filtering ? ScaleFilterCols_16_C : ScaleCols_16_C;
-  ScaleSlope(src_width, src_height, dst_width, dst_height, filtering,
-             &x, &y, &dx, &dy);
+  ScaleSlope(src_width, src_height, dst_width, dst_height, filtering, &x, &y,
+             &dx, &dy);
   src_width = Abs(src_width);
 
 #if defined(HAS_INTERPOLATEROW_16_SSE2)
@@ -1226,14 +1318,6 @@
     }
   }
 #endif
-#if defined(HAS_INTERPOLATEROW_16_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2)) {
-    InterpolateRow = InterpolateRow_Any_16_DSPR2;
-    if (IS_ALIGNED(dst_width, 4)) {
-      InterpolateRow = InterpolateRow_16_DSPR2;
-    }
-  }
-#endif
 
   if (filtering && src_width >= 32768) {
     ScaleFilterCols = ScaleFilterCols64_16_C;
@@ -1257,13 +1341,13 @@
   }
   {
     int yi = y >> 16;
-    const uint16* src = src_ptr + yi * src_stride;
+    const uint16_t* src = src_ptr + yi * src_stride;
 
     // Allocate 2 row buffers.
     const int kRowSize = (dst_width + 31) & ~31;
     align_buffer_64(row, kRowSize * 4);
 
-    uint16* rowptr = (uint16*)row;
+    uint16_t* rowptr = (uint16_t*)row;
     int rowstride = kRowSize;
     int lasty = yi;
 
@@ -1308,20 +1392,24 @@
 // of x and dx is the integer part of the source position and
 // the lower 16 bits are the fixed decimal part.
 
-static void ScalePlaneSimple(int src_width, int src_height,
-                             int dst_width, int dst_height,
-                             int src_stride, int dst_stride,
-                             const uint8* src_ptr, uint8* dst_ptr) {
+static void ScalePlaneSimple(int src_width,
+                             int src_height,
+                             int dst_width,
+                             int dst_height,
+                             int src_stride,
+                             int dst_stride,
+                             const uint8_t* src_ptr,
+                             uint8_t* dst_ptr) {
   int i;
-  void (*ScaleCols)(uint8* dst_ptr, const uint8* src_ptr,
-      int dst_width, int x, int dx) = ScaleCols_C;
+  void (*ScaleCols)(uint8_t * dst_ptr, const uint8_t* src_ptr, int dst_width,
+                    int x, int dx) = ScaleCols_C;
   // Initial source x/y coordinate and step values as 16.16 fixed point.
   int x = 0;
   int y = 0;
   int dx = 0;
   int dy = 0;
-  ScaleSlope(src_width, src_height, dst_width, dst_height, kFilterNone,
-             &x, &y, &dx, &dy);
+  ScaleSlope(src_width, src_height, dst_width, dst_height, kFilterNone, &x, &y,
+             &dx, &dy);
   src_width = Abs(src_width);
 
   if (src_width * 2 == dst_width && x < 0x8000) {
@@ -1340,20 +1428,24 @@
   }
 }
 
-static void ScalePlaneSimple_16(int src_width, int src_height,
-                                int dst_width, int dst_height,
-                                int src_stride, int dst_stride,
-                                const uint16* src_ptr, uint16* dst_ptr) {
+static void ScalePlaneSimple_16(int src_width,
+                                int src_height,
+                                int dst_width,
+                                int dst_height,
+                                int src_stride,
+                                int dst_stride,
+                                const uint16_t* src_ptr,
+                                uint16_t* dst_ptr) {
   int i;
-  void (*ScaleCols)(uint16* dst_ptr, const uint16* src_ptr,
-      int dst_width, int x, int dx) = ScaleCols_16_C;
+  void (*ScaleCols)(uint16_t * dst_ptr, const uint16_t* src_ptr, int dst_width,
+                    int x, int dx) = ScaleCols_16_C;
   // Initial source x/y coordinate and step values as 16.16 fixed point.
   int x = 0;
   int y = 0;
   int dx = 0;
   int dy = 0;
-  ScaleSlope(src_width, src_height, dst_width, dst_height, kFilterNone,
-             &x, &y, &dx, &dy);
+  ScaleSlope(src_width, src_height, dst_width, dst_height, kFilterNone, &x, &y,
+             &dx, &dy);
   src_width = Abs(src_width);
 
   if (src_width * 2 == dst_width && x < 0x8000) {
@@ -1366,8 +1458,7 @@
   }
 
   for (i = 0; i < dst_height; ++i) {
-    ScaleCols(dst_ptr, src_ptr + (y >> 16) * src_stride,
-              dst_width, x, dx);
+    ScaleCols(dst_ptr, src_ptr + (y >> 16) * src_stride, dst_width, x, dx);
     dst_ptr += dst_stride;
     y += dy;
   }
@@ -1377,14 +1468,18 @@
 // This function dispatches to a specialized scaler based on scale factor.
 
 LIBYUV_API
-void ScalePlane(const uint8* src, int src_stride,
-                int src_width, int src_height,
-                uint8* dst, int dst_stride,
-                int dst_width, int dst_height,
+void ScalePlane(const uint8_t* src,
+                int src_stride,
+                int src_width,
+                int src_height,
+                uint8_t* dst,
+                int dst_stride,
+                int dst_width,
+                int dst_height,
                 enum FilterMode filtering) {
   // Simplify filtering when possible.
-  filtering = ScaleFilterReduce(src_width, src_height,
-                                dst_width, dst_height, filtering);
+  filtering = ScaleFilterReduce(src_width, src_height, dst_width, dst_height,
+                                filtering);
 
   // Negative height means invert the image.
   if (src_height < 0) {
@@ -1403,46 +1498,42 @@
   if (dst_width == src_width && filtering != kFilterBox) {
     int dy = FixedDiv(src_height, dst_height);
     // Arbitrary scale vertically, but unscaled horizontally.
-    ScalePlaneVertical(src_height,
-                       dst_width, dst_height,
-                       src_stride, dst_stride, src, dst,
-                       0, 0, dy, 1, filtering);
+    ScalePlaneVertical(src_height, dst_width, dst_height, src_stride,
+                       dst_stride, src, dst, 0, 0, dy, 1, filtering);
     return;
   }
   if (dst_width <= Abs(src_width) && dst_height <= src_height) {
     // Scale down.
-    if (4 * dst_width == 3 * src_width &&
-        4 * dst_height == 3 * src_height) {
+    if (4 * dst_width == 3 * src_width && 4 * dst_height == 3 * src_height) {
       // optimized, 3/4
-      ScalePlaneDown34(src_width, src_height, dst_width, dst_height,
-                       src_stride, dst_stride, src, dst, filtering);
+      ScalePlaneDown34(src_width, src_height, dst_width, dst_height, src_stride,
+                       dst_stride, src, dst, filtering);
       return;
     }
     if (2 * dst_width == src_width && 2 * dst_height == src_height) {
       // optimized, 1/2
-      ScalePlaneDown2(src_width, src_height, dst_width, dst_height,
-                      src_stride, dst_stride, src, dst, filtering);
+      ScalePlaneDown2(src_width, src_height, dst_width, dst_height, src_stride,
+                      dst_stride, src, dst, filtering);
       return;
     }
     // 3/8 rounded up for odd sized chroma height.
-    if (8 * dst_width == 3 * src_width &&
-        dst_height == ((src_height * 3 + 7) / 8)) {
+    if (8 * dst_width == 3 * src_width && 8 * dst_height == 3 * src_height) {
       // optimized, 3/8
-      ScalePlaneDown38(src_width, src_height, dst_width, dst_height,
-                       src_stride, dst_stride, src, dst, filtering);
+      ScalePlaneDown38(src_width, src_height, dst_width, dst_height, src_stride,
+                       dst_stride, src, dst, filtering);
       return;
     }
     if (4 * dst_width == src_width && 4 * dst_height == src_height &&
         (filtering == kFilterBox || filtering == kFilterNone)) {
       // optimized, 1/4
-      ScalePlaneDown4(src_width, src_height, dst_width, dst_height,
-                      src_stride, dst_stride, src, dst, filtering);
+      ScalePlaneDown4(src_width, src_height, dst_width, dst_height, src_stride,
+                      dst_stride, src, dst, filtering);
       return;
     }
   }
   if (filtering == kFilterBox && dst_height * 2 < src_height) {
-    ScalePlaneBox(src_width, src_height, dst_width, dst_height,
-                  src_stride, dst_stride, src, dst);
+    ScalePlaneBox(src_width, src_height, dst_width, dst_height, src_stride,
+                  dst_stride, src, dst);
     return;
   }
   if (filtering && dst_height > src_height) {
@@ -1455,19 +1546,23 @@
                            src_stride, dst_stride, src, dst, filtering);
     return;
   }
-  ScalePlaneSimple(src_width, src_height, dst_width, dst_height,
-                   src_stride, dst_stride, src, dst);
+  ScalePlaneSimple(src_width, src_height, dst_width, dst_height, src_stride,
+                   dst_stride, src, dst);
 }
 
 LIBYUV_API
-void ScalePlane_16(const uint16* src, int src_stride,
-                  int src_width, int src_height,
-                  uint16* dst, int dst_stride,
-                  int dst_width, int dst_height,
-                  enum FilterMode filtering) {
+void ScalePlane_16(const uint16_t* src,
+                   int src_stride,
+                   int src_width,
+                   int src_height,
+                   uint16_t* dst,
+                   int dst_stride,
+                   int dst_width,
+                   int dst_height,
+                   enum FilterMode filtering) {
   // Simplify filtering when possible.
-  filtering = ScaleFilterReduce(src_width, src_height,
-                                dst_width, dst_height, filtering);
+  filtering = ScaleFilterReduce(src_width, src_height, dst_width, dst_height,
+                                filtering);
 
   // Negative height means invert the image.
   if (src_height < 0) {
@@ -1483,19 +1578,16 @@
     CopyPlane_16(src, src_stride, dst, dst_stride, dst_width, dst_height);
     return;
   }
-  if (dst_width == src_width) {
+  if (dst_width == src_width && filtering != kFilterBox) {
     int dy = FixedDiv(src_height, dst_height);
     // Arbitrary scale vertically, but unscaled vertically.
-    ScalePlaneVertical_16(src_height,
-                          dst_width, dst_height,
-                          src_stride, dst_stride, src, dst,
-                          0, 0, dy, 1, filtering);
+    ScalePlaneVertical_16(src_height, dst_width, dst_height, src_stride,
+                          dst_stride, src, dst, 0, 0, dy, 1, filtering);
     return;
   }
   if (dst_width <= Abs(src_width) && dst_height <= src_height) {
     // Scale down.
-    if (4 * dst_width == 3 * src_width &&
-        4 * dst_height == 3 * src_height) {
+    if (4 * dst_width == 3 * src_width && 4 * dst_height == 3 * src_height) {
       // optimized, 3/4
       ScalePlaneDown34_16(src_width, src_height, dst_width, dst_height,
                           src_stride, dst_stride, src, dst, filtering);
@@ -1508,15 +1600,14 @@
       return;
     }
     // 3/8 rounded up for odd sized chroma height.
-    if (8 * dst_width == 3 * src_width &&
-        dst_height == ((src_height * 3 + 7) / 8)) {
+    if (8 * dst_width == 3 * src_width && 8 * dst_height == 3 * src_height) {
       // optimized, 3/8
       ScalePlaneDown38_16(src_width, src_height, dst_width, dst_height,
                           src_stride, dst_stride, src, dst, filtering);
       return;
     }
     if (4 * dst_width == src_width && 4 * dst_height == src_height &&
-               filtering != kFilterBilinear) {
+        (filtering == kFilterBox || filtering == kFilterNone)) {
       // optimized, 1/4
       ScalePlaneDown4_16(src_width, src_height, dst_width, dst_height,
                          src_stride, dst_stride, src, dst, filtering);
@@ -1524,8 +1615,8 @@
     }
   }
   if (filtering == kFilterBox && dst_height * 2 < src_height) {
-    ScalePlaneBox_16(src_width, src_height, dst_width, dst_height,
-                     src_stride, dst_stride, src, dst);
+    ScalePlaneBox_16(src_width, src_height, dst_width, dst_height, src_stride,
+                     dst_stride, src, dst);
     return;
   }
   if (filtering && dst_height > src_height) {
@@ -1538,132 +1629,110 @@
                               src_stride, dst_stride, src, dst, filtering);
     return;
   }
-  ScalePlaneSimple_16(src_width, src_height, dst_width, dst_height,
-                      src_stride, dst_stride, src, dst);
+  ScalePlaneSimple_16(src_width, src_height, dst_width, dst_height, src_stride,
+                      dst_stride, src, dst);
 }
 
 // Scale an I420 image.
 // This function in turn calls a scaling function for each plane.
 
 LIBYUV_API
-int I420Scale(const uint8* src_y, int src_stride_y,
-              const uint8* src_u, int src_stride_u,
-              const uint8* src_v, int src_stride_v,
-              int src_width, int src_height,
-              uint8* dst_y, int dst_stride_y,
-              uint8* dst_u, int dst_stride_u,
-              uint8* dst_v, int dst_stride_v,
-              int dst_width, int dst_height,
+int I420Scale(const uint8_t* src_y,
+              int src_stride_y,
+              const uint8_t* src_u,
+              int src_stride_u,
+              const uint8_t* src_v,
+              int src_stride_v,
+              int src_width,
+              int src_height,
+              uint8_t* dst_y,
+              int dst_stride_y,
+              uint8_t* dst_u,
+              int dst_stride_u,
+              uint8_t* dst_v,
+              int dst_stride_v,
+              int dst_width,
+              int dst_height,
               enum FilterMode filtering) {
   int src_halfwidth = SUBSAMPLE(src_width, 1, 1);
   int src_halfheight = SUBSAMPLE(src_height, 1, 1);
   int dst_halfwidth = SUBSAMPLE(dst_width, 1, 1);
   int dst_halfheight = SUBSAMPLE(dst_height, 1, 1);
   if (!src_y || !src_u || !src_v || src_width == 0 || src_height == 0 ||
-      src_width > 32768 || src_height > 32768 ||
-      !dst_y || !dst_u || !dst_v || dst_width <= 0 || dst_height <= 0) {
+      src_width > 32768 || src_height > 32768 || !dst_y || !dst_u || !dst_v ||
+      dst_width <= 0 || dst_height <= 0) {
     return -1;
   }
 
-  ScalePlane(src_y, src_stride_y, src_width, src_height,
-             dst_y, dst_stride_y, dst_width, dst_height,
-             filtering);
-  ScalePlane(src_u, src_stride_u, src_halfwidth, src_halfheight,
-             dst_u, dst_stride_u, dst_halfwidth, dst_halfheight,
-             filtering);
-  ScalePlane(src_v, src_stride_v, src_halfwidth, src_halfheight,
-             dst_v, dst_stride_v, dst_halfwidth, dst_halfheight,
-             filtering);
+  ScalePlane(src_y, src_stride_y, src_width, src_height, dst_y, dst_stride_y,
+             dst_width, dst_height, filtering);
+  ScalePlane(src_u, src_stride_u, src_halfwidth, src_halfheight, dst_u,
+             dst_stride_u, dst_halfwidth, dst_halfheight, filtering);
+  ScalePlane(src_v, src_stride_v, src_halfwidth, src_halfheight, dst_v,
+             dst_stride_v, dst_halfwidth, dst_halfheight, filtering);
   return 0;
 }
 
 LIBYUV_API
-int I420Scale_16(const uint16* src_y, int src_stride_y,
-                 const uint16* src_u, int src_stride_u,
-                 const uint16* src_v, int src_stride_v,
-                 int src_width, int src_height,
-                 uint16* dst_y, int dst_stride_y,
-                 uint16* dst_u, int dst_stride_u,
-                 uint16* dst_v, int dst_stride_v,
-                 int dst_width, int dst_height,
+int I420Scale_16(const uint16_t* src_y,
+                 int src_stride_y,
+                 const uint16_t* src_u,
+                 int src_stride_u,
+                 const uint16_t* src_v,
+                 int src_stride_v,
+                 int src_width,
+                 int src_height,
+                 uint16_t* dst_y,
+                 int dst_stride_y,
+                 uint16_t* dst_u,
+                 int dst_stride_u,
+                 uint16_t* dst_v,
+                 int dst_stride_v,
+                 int dst_width,
+                 int dst_height,
                  enum FilterMode filtering) {
   int src_halfwidth = SUBSAMPLE(src_width, 1, 1);
   int src_halfheight = SUBSAMPLE(src_height, 1, 1);
   int dst_halfwidth = SUBSAMPLE(dst_width, 1, 1);
   int dst_halfheight = SUBSAMPLE(dst_height, 1, 1);
   if (!src_y || !src_u || !src_v || src_width == 0 || src_height == 0 ||
-      src_width > 32768 || src_height > 32768 ||
-      !dst_y || !dst_u || !dst_v || dst_width <= 0 || dst_height <= 0) {
+      src_width > 32768 || src_height > 32768 || !dst_y || !dst_u || !dst_v ||
+      dst_width <= 0 || dst_height <= 0) {
     return -1;
   }
 
-  ScalePlane_16(src_y, src_stride_y, src_width, src_height,
-                dst_y, dst_stride_y, dst_width, dst_height,
-                filtering);
-  ScalePlane_16(src_u, src_stride_u, src_halfwidth, src_halfheight,
-                dst_u, dst_stride_u, dst_halfwidth, dst_halfheight,
-                filtering);
-  ScalePlane_16(src_v, src_stride_v, src_halfwidth, src_halfheight,
-                dst_v, dst_stride_v, dst_halfwidth, dst_halfheight,
-                filtering);
+  ScalePlane_16(src_y, src_stride_y, src_width, src_height, dst_y, dst_stride_y,
+                dst_width, dst_height, filtering);
+  ScalePlane_16(src_u, src_stride_u, src_halfwidth, src_halfheight, dst_u,
+                dst_stride_u, dst_halfwidth, dst_halfheight, filtering);
+  ScalePlane_16(src_v, src_stride_v, src_halfwidth, src_halfheight, dst_v,
+                dst_stride_v, dst_halfwidth, dst_halfheight, filtering);
   return 0;
 }
 
 // Deprecated api
 LIBYUV_API
-int Scale(const uint8* src_y, const uint8* src_u, const uint8* src_v,
-          int src_stride_y, int src_stride_u, int src_stride_v,
-          int src_width, int src_height,
-          uint8* dst_y, uint8* dst_u, uint8* dst_v,
-          int dst_stride_y, int dst_stride_u, int dst_stride_v,
-          int dst_width, int dst_height,
+int Scale(const uint8_t* src_y,
+          const uint8_t* src_u,
+          const uint8_t* src_v,
+          int src_stride_y,
+          int src_stride_u,
+          int src_stride_v,
+          int src_width,
+          int src_height,
+          uint8_t* dst_y,
+          uint8_t* dst_u,
+          uint8_t* dst_v,
+          int dst_stride_y,
+          int dst_stride_u,
+          int dst_stride_v,
+          int dst_width,
+          int dst_height,
           LIBYUV_BOOL interpolate) {
-  return I420Scale(src_y, src_stride_y,
-                   src_u, src_stride_u,
-                   src_v, src_stride_v,
-                   src_width, src_height,
-                   dst_y, dst_stride_y,
-                   dst_u, dst_stride_u,
-                   dst_v, dst_stride_v,
-                   dst_width, dst_height,
-                   interpolate ? kFilterBox : kFilterNone);
-}
-
-// Deprecated api
-LIBYUV_API
-int ScaleOffset(const uint8* src, int src_width, int src_height,
-                uint8* dst, int dst_width, int dst_height, int dst_yoffset,
-                LIBYUV_BOOL interpolate) {
-  // Chroma requires offset to multiple of 2.
-  int dst_yoffset_even = dst_yoffset & ~1;
-  int src_halfwidth = SUBSAMPLE(src_width, 1, 1);
-  int src_halfheight = SUBSAMPLE(src_height, 1, 1);
-  int dst_halfwidth = SUBSAMPLE(dst_width, 1, 1);
-  int dst_halfheight = SUBSAMPLE(dst_height, 1, 1);
-  int aheight = dst_height - dst_yoffset_even * 2;  // actual output height
-  const uint8* src_y = src;
-  const uint8* src_u = src + src_width * src_height;
-  const uint8* src_v = src + src_width * src_height +
-                             src_halfwidth * src_halfheight;
-  uint8* dst_y = dst + dst_yoffset_even * dst_width;
-  uint8* dst_u = dst + dst_width * dst_height +
-                 (dst_yoffset_even >> 1) * dst_halfwidth;
-  uint8* dst_v = dst + dst_width * dst_height + dst_halfwidth * dst_halfheight +
-                 (dst_yoffset_even >> 1) * dst_halfwidth;
-  if (!src || src_width <= 0 || src_height <= 0 ||
-      !dst || dst_width <= 0 || dst_height <= 0 || dst_yoffset_even < 0 ||
-      dst_yoffset_even >= dst_height) {
-    return -1;
-  }
-  return I420Scale(src_y, src_width,
-                   src_u, src_halfwidth,
-                   src_v, src_halfwidth,
-                   src_width, src_height,
-                   dst_y, dst_width,
-                   dst_u, dst_halfwidth,
-                   dst_v, dst_halfwidth,
-                   dst_width, aheight,
-                   interpolate ? kFilterBox : kFilterNone);
+  return I420Scale(src_y, src_stride_y, src_u, src_stride_u, src_v,
+                   src_stride_v, src_width, src_height, dst_y, dst_stride_y,
+                   dst_u, dst_stride_u, dst_v, dst_stride_v, dst_width,
+                   dst_height, interpolate ? kFilterBox : kFilterNone);
 }
 
 #ifdef __cplusplus
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_any.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_any.cc
index ed76a9e..53ad136 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_any.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_any.cc
@@ -20,184 +20,429 @@
 
 // Definition for ScaleFilterCols, ScaleARGBCols and ScaleARGBFilterCols
 #define CANY(NAMEANY, TERP_SIMD, TERP_C, BPP, MASK)                            \
-    void NAMEANY(uint8* dst_ptr, const uint8* src_ptr,                         \
-                 int dst_width, int x, int dx) {                               \
-      int n = dst_width & ~MASK;                                               \
-      if (n > 0) {                                                             \
-        TERP_SIMD(dst_ptr, src_ptr, n, x, dx);                                 \
-      }                                                                        \
-      TERP_C(dst_ptr + n * BPP, src_ptr,                                       \
-             dst_width & MASK, x + n * dx, dx);                                \
-    }
+  void NAMEANY(uint8_t* dst_ptr, const uint8_t* src_ptr, int dst_width, int x, \
+               int dx) {                                                       \
+    int r = dst_width & MASK;                                                  \
+    int n = dst_width & ~MASK;                                                 \
+    if (n > 0) {                                                               \
+      TERP_SIMD(dst_ptr, src_ptr, n, x, dx);                                   \
+    }                                                                          \
+    TERP_C(dst_ptr + n * BPP, src_ptr, r, x + n * dx, dx);                     \
+  }
 
 #ifdef HAS_SCALEFILTERCOLS_NEON
 CANY(ScaleFilterCols_Any_NEON, ScaleFilterCols_NEON, ScaleFilterCols_C, 1, 7)
 #endif
+#ifdef HAS_SCALEFILTERCOLS_MSA
+CANY(ScaleFilterCols_Any_MSA, ScaleFilterCols_MSA, ScaleFilterCols_C, 1, 15)
+#endif
 #ifdef HAS_SCALEARGBCOLS_NEON
 CANY(ScaleARGBCols_Any_NEON, ScaleARGBCols_NEON, ScaleARGBCols_C, 4, 7)
 #endif
+#ifdef HAS_SCALEARGBCOLS_MSA
+CANY(ScaleARGBCols_Any_MSA, ScaleARGBCols_MSA, ScaleARGBCols_C, 4, 3)
+#endif
 #ifdef HAS_SCALEARGBFILTERCOLS_NEON
-CANY(ScaleARGBFilterCols_Any_NEON, ScaleARGBFilterCols_NEON,
-     ScaleARGBFilterCols_C, 4, 3)
+CANY(ScaleARGBFilterCols_Any_NEON,
+     ScaleARGBFilterCols_NEON,
+     ScaleARGBFilterCols_C,
+     4,
+     3)
+#endif
+#ifdef HAS_SCALEARGBFILTERCOLS_MSA
+CANY(ScaleARGBFilterCols_Any_MSA,
+     ScaleARGBFilterCols_MSA,
+     ScaleARGBFilterCols_C,
+     4,
+     7)
 #endif
 #undef CANY
 
 // Fixed scale down.
+// Mask may be non-power of 2, so use MOD
 #define SDANY(NAMEANY, SCALEROWDOWN_SIMD, SCALEROWDOWN_C, FACTOR, BPP, MASK)   \
-    void NAMEANY(const uint8* src_ptr, ptrdiff_t src_stride,                   \
-                 uint8* dst_ptr, int dst_width) {                              \
-      int r = (int)((unsigned int)dst_width % (MASK + 1));                     \
-      int n = dst_width - r;                                                   \
-      if (n > 0) {                                                             \
-        SCALEROWDOWN_SIMD(src_ptr, src_stride, dst_ptr, n);                    \
-      }                                                                        \
-      SCALEROWDOWN_C(src_ptr + (n * FACTOR) * BPP, src_stride,                 \
-                     dst_ptr + n * BPP, r);                                    \
-    }
+  void NAMEANY(const uint8_t* src_ptr, ptrdiff_t src_stride, uint8_t* dst_ptr, \
+               int dst_width) {                                                \
+    int r = (int)((unsigned int)dst_width % (MASK + 1)); /* NOLINT */          \
+    int n = dst_width - r;                                                     \
+    if (n > 0) {                                                               \
+      SCALEROWDOWN_SIMD(src_ptr, src_stride, dst_ptr, n);                      \
+    }                                                                          \
+    SCALEROWDOWN_C(src_ptr + (n * FACTOR) * BPP, src_stride,                   \
+                   dst_ptr + n * BPP, r);                                      \
+  }
 
 // Fixed scale down for odd source width.  Used by I420Blend subsampling.
 // Since dst_width is (width + 1) / 2, this function scales one less pixel
 // and copies the last pixel.
 #define SDODD(NAMEANY, SCALEROWDOWN_SIMD, SCALEROWDOWN_C, FACTOR, BPP, MASK)   \
-    void NAMEANY(const uint8* src_ptr, ptrdiff_t src_stride,                   \
-                 uint8* dst_ptr, int dst_width) {                              \
-      int r = (int)((unsigned int)(dst_width - 1) % (MASK + 1));               \
-      int n = dst_width - r;                                                   \
-      if (n > 0) {                                                             \
-        SCALEROWDOWN_SIMD(src_ptr, src_stride, dst_ptr, n);                    \
-      }                                                                        \
-      SCALEROWDOWN_C(src_ptr + (n * FACTOR) * BPP, src_stride,                 \
-                     dst_ptr + n * BPP, r);                                    \
-    }
+  void NAMEANY(const uint8_t* src_ptr, ptrdiff_t src_stride, uint8_t* dst_ptr, \
+               int dst_width) {                                                \
+    int r = (int)((unsigned int)(dst_width - 1) % (MASK + 1)); /* NOLINT */    \
+    int n = (dst_width - 1) - r;                                               \
+    if (n > 0) {                                                               \
+      SCALEROWDOWN_SIMD(src_ptr, src_stride, dst_ptr, n);                      \
+    }                                                                          \
+    SCALEROWDOWN_C(src_ptr + (n * FACTOR) * BPP, src_stride,                   \
+                   dst_ptr + n * BPP, r + 1);                                  \
+  }
 
 #ifdef HAS_SCALEROWDOWN2_SSSE3
 SDANY(ScaleRowDown2_Any_SSSE3, ScaleRowDown2_SSSE3, ScaleRowDown2_C, 2, 1, 15)
-SDANY(ScaleRowDown2Linear_Any_SSSE3, ScaleRowDown2Linear_SSSE3,
-      ScaleRowDown2Linear_C, 2, 1, 15)
-SDANY(ScaleRowDown2Box_Any_SSSE3, ScaleRowDown2Box_SSSE3, ScaleRowDown2Box_C,
-      2, 1, 15)
-SDODD(ScaleRowDown2Box_Odd_SSSE3, ScaleRowDown2Box_SSSE3,
-      ScaleRowDown2Box_Odd_C, 2, 1, 15)
+SDANY(ScaleRowDown2Linear_Any_SSSE3,
+      ScaleRowDown2Linear_SSSE3,
+      ScaleRowDown2Linear_C,
+      2,
+      1,
+      15)
+SDANY(ScaleRowDown2Box_Any_SSSE3,
+      ScaleRowDown2Box_SSSE3,
+      ScaleRowDown2Box_C,
+      2,
+      1,
+      15)
+SDODD(ScaleRowDown2Box_Odd_SSSE3,
+      ScaleRowDown2Box_SSSE3,
+      ScaleRowDown2Box_Odd_C,
+      2,
+      1,
+      15)
 #endif
 #ifdef HAS_SCALEROWDOWN2_AVX2
 SDANY(ScaleRowDown2_Any_AVX2, ScaleRowDown2_AVX2, ScaleRowDown2_C, 2, 1, 31)
-SDANY(ScaleRowDown2Linear_Any_AVX2, ScaleRowDown2Linear_AVX2,
-      ScaleRowDown2Linear_C, 2, 1, 31)
-SDANY(ScaleRowDown2Box_Any_AVX2, ScaleRowDown2Box_AVX2, ScaleRowDown2Box_C,
-      2, 1, 31)
-SDODD(ScaleRowDown2Box_Odd_AVX2, ScaleRowDown2Box_AVX2, ScaleRowDown2Box_Odd_C,
-      2, 1, 31)
+SDANY(ScaleRowDown2Linear_Any_AVX2,
+      ScaleRowDown2Linear_AVX2,
+      ScaleRowDown2Linear_C,
+      2,
+      1,
+      31)
+SDANY(ScaleRowDown2Box_Any_AVX2,
+      ScaleRowDown2Box_AVX2,
+      ScaleRowDown2Box_C,
+      2,
+      1,
+      31)
+SDODD(ScaleRowDown2Box_Odd_AVX2,
+      ScaleRowDown2Box_AVX2,
+      ScaleRowDown2Box_Odd_C,
+      2,
+      1,
+      31)
 #endif
 #ifdef HAS_SCALEROWDOWN2_NEON
 SDANY(ScaleRowDown2_Any_NEON, ScaleRowDown2_NEON, ScaleRowDown2_C, 2, 1, 15)
-SDANY(ScaleRowDown2Linear_Any_NEON, ScaleRowDown2Linear_NEON,
-      ScaleRowDown2Linear_C, 2, 1, 15)
-SDANY(ScaleRowDown2Box_Any_NEON, ScaleRowDown2Box_NEON,
-      ScaleRowDown2Box_C, 2, 1, 15)
-SDODD(ScaleRowDown2Box_Odd_NEON, ScaleRowDown2Box_NEON,
-      ScaleRowDown2Box_Odd_C, 2, 1, 15)
+SDANY(ScaleRowDown2Linear_Any_NEON,
+      ScaleRowDown2Linear_NEON,
+      ScaleRowDown2Linear_C,
+      2,
+      1,
+      15)
+SDANY(ScaleRowDown2Box_Any_NEON,
+      ScaleRowDown2Box_NEON,
+      ScaleRowDown2Box_C,
+      2,
+      1,
+      15)
+SDODD(ScaleRowDown2Box_Odd_NEON,
+      ScaleRowDown2Box_NEON,
+      ScaleRowDown2Box_Odd_C,
+      2,
+      1,
+      15)
+#endif
+#ifdef HAS_SCALEROWDOWN2_MSA
+SDANY(ScaleRowDown2_Any_MSA, ScaleRowDown2_MSA, ScaleRowDown2_C, 2, 1, 31)
+SDANY(ScaleRowDown2Linear_Any_MSA,
+      ScaleRowDown2Linear_MSA,
+      ScaleRowDown2Linear_C,
+      2,
+      1,
+      31)
+SDANY(ScaleRowDown2Box_Any_MSA,
+      ScaleRowDown2Box_MSA,
+      ScaleRowDown2Box_C,
+      2,
+      1,
+      31)
 #endif
 #ifdef HAS_SCALEROWDOWN4_SSSE3
 SDANY(ScaleRowDown4_Any_SSSE3, ScaleRowDown4_SSSE3, ScaleRowDown4_C, 4, 1, 7)
-SDANY(ScaleRowDown4Box_Any_SSSE3, ScaleRowDown4Box_SSSE3, ScaleRowDown4Box_C,
-      4, 1, 7)
+SDANY(ScaleRowDown4Box_Any_SSSE3,
+      ScaleRowDown4Box_SSSE3,
+      ScaleRowDown4Box_C,
+      4,
+      1,
+      7)
 #endif
 #ifdef HAS_SCALEROWDOWN4_AVX2
 SDANY(ScaleRowDown4_Any_AVX2, ScaleRowDown4_AVX2, ScaleRowDown4_C, 4, 1, 15)
-SDANY(ScaleRowDown4Box_Any_AVX2, ScaleRowDown4Box_AVX2, ScaleRowDown4Box_C,
-      4, 1, 15)
+SDANY(ScaleRowDown4Box_Any_AVX2,
+      ScaleRowDown4Box_AVX2,
+      ScaleRowDown4Box_C,
+      4,
+      1,
+      15)
 #endif
 #ifdef HAS_SCALEROWDOWN4_NEON
 SDANY(ScaleRowDown4_Any_NEON, ScaleRowDown4_NEON, ScaleRowDown4_C, 4, 1, 7)
-SDANY(ScaleRowDown4Box_Any_NEON, ScaleRowDown4Box_NEON, ScaleRowDown4Box_C,
-      4, 1, 7)
+SDANY(ScaleRowDown4Box_Any_NEON,
+      ScaleRowDown4Box_NEON,
+      ScaleRowDown4Box_C,
+      4,
+      1,
+      7)
+#endif
+#ifdef HAS_SCALEROWDOWN4_MSA
+SDANY(ScaleRowDown4_Any_MSA, ScaleRowDown4_MSA, ScaleRowDown4_C, 4, 1, 15)
+SDANY(ScaleRowDown4Box_Any_MSA,
+      ScaleRowDown4Box_MSA,
+      ScaleRowDown4Box_C,
+      4,
+      1,
+      15)
 #endif
 #ifdef HAS_SCALEROWDOWN34_SSSE3
-SDANY(ScaleRowDown34_Any_SSSE3, ScaleRowDown34_SSSE3,
-      ScaleRowDown34_C, 4 / 3, 1, 23)
-SDANY(ScaleRowDown34_0_Box_Any_SSSE3, ScaleRowDown34_0_Box_SSSE3,
-      ScaleRowDown34_0_Box_C, 4 / 3, 1, 23)
-SDANY(ScaleRowDown34_1_Box_Any_SSSE3, ScaleRowDown34_1_Box_SSSE3,
-      ScaleRowDown34_1_Box_C, 4 / 3, 1, 23)
+SDANY(ScaleRowDown34_Any_SSSE3,
+      ScaleRowDown34_SSSE3,
+      ScaleRowDown34_C,
+      4 / 3,
+      1,
+      23)
+SDANY(ScaleRowDown34_0_Box_Any_SSSE3,
+      ScaleRowDown34_0_Box_SSSE3,
+      ScaleRowDown34_0_Box_C,
+      4 / 3,
+      1,
+      23)
+SDANY(ScaleRowDown34_1_Box_Any_SSSE3,
+      ScaleRowDown34_1_Box_SSSE3,
+      ScaleRowDown34_1_Box_C,
+      4 / 3,
+      1,
+      23)
 #endif
 #ifdef HAS_SCALEROWDOWN34_NEON
-SDANY(ScaleRowDown34_Any_NEON, ScaleRowDown34_NEON,
-      ScaleRowDown34_C, 4 / 3, 1, 23)
-SDANY(ScaleRowDown34_0_Box_Any_NEON, ScaleRowDown34_0_Box_NEON,
-      ScaleRowDown34_0_Box_C, 4 / 3, 1, 23)
-SDANY(ScaleRowDown34_1_Box_Any_NEON, ScaleRowDown34_1_Box_NEON,
-      ScaleRowDown34_1_Box_C, 4 / 3, 1, 23)
+SDANY(ScaleRowDown34_Any_NEON,
+      ScaleRowDown34_NEON,
+      ScaleRowDown34_C,
+      4 / 3,
+      1,
+      23)
+SDANY(ScaleRowDown34_0_Box_Any_NEON,
+      ScaleRowDown34_0_Box_NEON,
+      ScaleRowDown34_0_Box_C,
+      4 / 3,
+      1,
+      23)
+SDANY(ScaleRowDown34_1_Box_Any_NEON,
+      ScaleRowDown34_1_Box_NEON,
+      ScaleRowDown34_1_Box_C,
+      4 / 3,
+      1,
+      23)
+#endif
+#ifdef HAS_SCALEROWDOWN34_MSA
+SDANY(ScaleRowDown34_Any_MSA,
+      ScaleRowDown34_MSA,
+      ScaleRowDown34_C,
+      4 / 3,
+      1,
+      47)
+SDANY(ScaleRowDown34_0_Box_Any_MSA,
+      ScaleRowDown34_0_Box_MSA,
+      ScaleRowDown34_0_Box_C,
+      4 / 3,
+      1,
+      47)
+SDANY(ScaleRowDown34_1_Box_Any_MSA,
+      ScaleRowDown34_1_Box_MSA,
+      ScaleRowDown34_1_Box_C,
+      4 / 3,
+      1,
+      47)
 #endif
 #ifdef HAS_SCALEROWDOWN38_SSSE3
-SDANY(ScaleRowDown38_Any_SSSE3, ScaleRowDown38_SSSE3,
-      ScaleRowDown38_C, 8 / 3, 1, 11)
-SDANY(ScaleRowDown38_3_Box_Any_SSSE3, ScaleRowDown38_3_Box_SSSE3,
-      ScaleRowDown38_3_Box_C, 8 / 3, 1, 5)
-SDANY(ScaleRowDown38_2_Box_Any_SSSE3, ScaleRowDown38_2_Box_SSSE3,
-      ScaleRowDown38_2_Box_C, 8 / 3, 1, 5)
+SDANY(ScaleRowDown38_Any_SSSE3,
+      ScaleRowDown38_SSSE3,
+      ScaleRowDown38_C,
+      8 / 3,
+      1,
+      11)
+SDANY(ScaleRowDown38_3_Box_Any_SSSE3,
+      ScaleRowDown38_3_Box_SSSE3,
+      ScaleRowDown38_3_Box_C,
+      8 / 3,
+      1,
+      5)
+SDANY(ScaleRowDown38_2_Box_Any_SSSE3,
+      ScaleRowDown38_2_Box_SSSE3,
+      ScaleRowDown38_2_Box_C,
+      8 / 3,
+      1,
+      5)
 #endif
 #ifdef HAS_SCALEROWDOWN38_NEON
-SDANY(ScaleRowDown38_Any_NEON, ScaleRowDown38_NEON,
-      ScaleRowDown38_C, 8 / 3, 1, 11)
-SDANY(ScaleRowDown38_3_Box_Any_NEON, ScaleRowDown38_3_Box_NEON,
-      ScaleRowDown38_3_Box_C, 8 / 3, 1, 11)
-SDANY(ScaleRowDown38_2_Box_Any_NEON, ScaleRowDown38_2_Box_NEON,
-      ScaleRowDown38_2_Box_C, 8 / 3, 1, 11)
+SDANY(ScaleRowDown38_Any_NEON,
+      ScaleRowDown38_NEON,
+      ScaleRowDown38_C,
+      8 / 3,
+      1,
+      11)
+SDANY(ScaleRowDown38_3_Box_Any_NEON,
+      ScaleRowDown38_3_Box_NEON,
+      ScaleRowDown38_3_Box_C,
+      8 / 3,
+      1,
+      11)
+SDANY(ScaleRowDown38_2_Box_Any_NEON,
+      ScaleRowDown38_2_Box_NEON,
+      ScaleRowDown38_2_Box_C,
+      8 / 3,
+      1,
+      11)
+#endif
+#ifdef HAS_SCALEROWDOWN38_MSA
+SDANY(ScaleRowDown38_Any_MSA,
+      ScaleRowDown38_MSA,
+      ScaleRowDown38_C,
+      8 / 3,
+      1,
+      11)
+SDANY(ScaleRowDown38_3_Box_Any_MSA,
+      ScaleRowDown38_3_Box_MSA,
+      ScaleRowDown38_3_Box_C,
+      8 / 3,
+      1,
+      11)
+SDANY(ScaleRowDown38_2_Box_Any_MSA,
+      ScaleRowDown38_2_Box_MSA,
+      ScaleRowDown38_2_Box_C,
+      8 / 3,
+      1,
+      11)
 #endif
 
 #ifdef HAS_SCALEARGBROWDOWN2_SSE2
-SDANY(ScaleARGBRowDown2_Any_SSE2, ScaleARGBRowDown2_SSE2,
-      ScaleARGBRowDown2_C, 2, 4, 3)
-SDANY(ScaleARGBRowDown2Linear_Any_SSE2, ScaleARGBRowDown2Linear_SSE2,
-      ScaleARGBRowDown2Linear_C, 2, 4, 3)
-SDANY(ScaleARGBRowDown2Box_Any_SSE2, ScaleARGBRowDown2Box_SSE2,
-      ScaleARGBRowDown2Box_C, 2, 4, 3)
+SDANY(ScaleARGBRowDown2_Any_SSE2,
+      ScaleARGBRowDown2_SSE2,
+      ScaleARGBRowDown2_C,
+      2,
+      4,
+      3)
+SDANY(ScaleARGBRowDown2Linear_Any_SSE2,
+      ScaleARGBRowDown2Linear_SSE2,
+      ScaleARGBRowDown2Linear_C,
+      2,
+      4,
+      3)
+SDANY(ScaleARGBRowDown2Box_Any_SSE2,
+      ScaleARGBRowDown2Box_SSE2,
+      ScaleARGBRowDown2Box_C,
+      2,
+      4,
+      3)
 #endif
 #ifdef HAS_SCALEARGBROWDOWN2_NEON
-SDANY(ScaleARGBRowDown2_Any_NEON, ScaleARGBRowDown2_NEON,
-      ScaleARGBRowDown2_C, 2, 4, 7)
-SDANY(ScaleARGBRowDown2Linear_Any_NEON, ScaleARGBRowDown2Linear_NEON,
-      ScaleARGBRowDown2Linear_C, 2, 4, 7)
-SDANY(ScaleARGBRowDown2Box_Any_NEON, ScaleARGBRowDown2Box_NEON,
-      ScaleARGBRowDown2Box_C, 2, 4, 7)
+SDANY(ScaleARGBRowDown2_Any_NEON,
+      ScaleARGBRowDown2_NEON,
+      ScaleARGBRowDown2_C,
+      2,
+      4,
+      7)
+SDANY(ScaleARGBRowDown2Linear_Any_NEON,
+      ScaleARGBRowDown2Linear_NEON,
+      ScaleARGBRowDown2Linear_C,
+      2,
+      4,
+      7)
+SDANY(ScaleARGBRowDown2Box_Any_NEON,
+      ScaleARGBRowDown2Box_NEON,
+      ScaleARGBRowDown2Box_C,
+      2,
+      4,
+      7)
+#endif
+#ifdef HAS_SCALEARGBROWDOWN2_MSA
+SDANY(ScaleARGBRowDown2_Any_MSA,
+      ScaleARGBRowDown2_MSA,
+      ScaleARGBRowDown2_C,
+      2,
+      4,
+      3)
+SDANY(ScaleARGBRowDown2Linear_Any_MSA,
+      ScaleARGBRowDown2Linear_MSA,
+      ScaleARGBRowDown2Linear_C,
+      2,
+      4,
+      3)
+SDANY(ScaleARGBRowDown2Box_Any_MSA,
+      ScaleARGBRowDown2Box_MSA,
+      ScaleARGBRowDown2Box_C,
+      2,
+      4,
+      3)
 #endif
 #undef SDANY
 
 // Scale down by even scale factor.
-#define SDAANY(NAMEANY, SCALEROWDOWN_SIMD, SCALEROWDOWN_C, BPP, MASK)          \
-    void NAMEANY(const uint8* src_ptr, ptrdiff_t src_stride, int src_stepx,    \
-                 uint8* dst_ptr, int dst_width) {                              \
-      int r = (int)((unsigned int)dst_width % (MASK + 1));                     \
-      int n = dst_width - r;                                                   \
-      if (n > 0) {                                                             \
-        SCALEROWDOWN_SIMD(src_ptr, src_stride, src_stepx, dst_ptr, n);         \
-      }                                                                        \
-      SCALEROWDOWN_C(src_ptr + (n * src_stepx) * BPP, src_stride,              \
-                     src_stepx, dst_ptr + n * BPP, r);                         \
-    }
+#define SDAANY(NAMEANY, SCALEROWDOWN_SIMD, SCALEROWDOWN_C, BPP, MASK)       \
+  void NAMEANY(const uint8_t* src_ptr, ptrdiff_t src_stride, int src_stepx, \
+               uint8_t* dst_ptr, int dst_width) {                           \
+    int r = dst_width & MASK;                                               \
+    int n = dst_width & ~MASK;                                              \
+    if (n > 0) {                                                            \
+      SCALEROWDOWN_SIMD(src_ptr, src_stride, src_stepx, dst_ptr, n);        \
+    }                                                                       \
+    SCALEROWDOWN_C(src_ptr + (n * src_stepx) * BPP, src_stride, src_stepx,  \
+                   dst_ptr + n * BPP, r);                                   \
+  }
 
 #ifdef HAS_SCALEARGBROWDOWNEVEN_SSE2
-SDAANY(ScaleARGBRowDownEven_Any_SSE2, ScaleARGBRowDownEven_SSE2,
-       ScaleARGBRowDownEven_C, 4, 3)
-SDAANY(ScaleARGBRowDownEvenBox_Any_SSE2, ScaleARGBRowDownEvenBox_SSE2,
-       ScaleARGBRowDownEvenBox_C, 4, 3)
+SDAANY(ScaleARGBRowDownEven_Any_SSE2,
+       ScaleARGBRowDownEven_SSE2,
+       ScaleARGBRowDownEven_C,
+       4,
+       3)
+SDAANY(ScaleARGBRowDownEvenBox_Any_SSE2,
+       ScaleARGBRowDownEvenBox_SSE2,
+       ScaleARGBRowDownEvenBox_C,
+       4,
+       3)
 #endif
 #ifdef HAS_SCALEARGBROWDOWNEVEN_NEON
-SDAANY(ScaleARGBRowDownEven_Any_NEON, ScaleARGBRowDownEven_NEON,
-       ScaleARGBRowDownEven_C, 4, 3)
-SDAANY(ScaleARGBRowDownEvenBox_Any_NEON, ScaleARGBRowDownEvenBox_NEON,
-       ScaleARGBRowDownEvenBox_C, 4, 3)
+SDAANY(ScaleARGBRowDownEven_Any_NEON,
+       ScaleARGBRowDownEven_NEON,
+       ScaleARGBRowDownEven_C,
+       4,
+       3)
+SDAANY(ScaleARGBRowDownEvenBox_Any_NEON,
+       ScaleARGBRowDownEvenBox_NEON,
+       ScaleARGBRowDownEvenBox_C,
+       4,
+       3)
+#endif
+#ifdef HAS_SCALEARGBROWDOWNEVEN_MSA
+SDAANY(ScaleARGBRowDownEven_Any_MSA,
+       ScaleARGBRowDownEven_MSA,
+       ScaleARGBRowDownEven_C,
+       4,
+       3)
+SDAANY(ScaleARGBRowDownEvenBox_Any_MSA,
+       ScaleARGBRowDownEvenBox_MSA,
+       ScaleARGBRowDownEvenBox_C,
+       4,
+       3)
 #endif
 
 // Add rows box filter scale down.
-#define SAANY(NAMEANY, SCALEADDROW_SIMD, SCALEADDROW_C, MASK)                  \
-  void NAMEANY(const uint8* src_ptr, uint16* dst_ptr, int src_width) {         \
-      int n = src_width & ~MASK;                                               \
-      if (n > 0) {                                                             \
-        SCALEADDROW_SIMD(src_ptr, dst_ptr, n);                                 \
-      }                                                                        \
-      SCALEADDROW_C(src_ptr + n, dst_ptr + n, src_width & MASK);               \
-    }
+#define SAANY(NAMEANY, SCALEADDROW_SIMD, SCALEADDROW_C, MASK)              \
+  void NAMEANY(const uint8_t* src_ptr, uint16_t* dst_ptr, int src_width) { \
+    int n = src_width & ~MASK;                                             \
+    if (n > 0) {                                                           \
+      SCALEADDROW_SIMD(src_ptr, dst_ptr, n);                               \
+    }                                                                      \
+    SCALEADDROW_C(src_ptr + n, dst_ptr + n, src_width & MASK);             \
+  }
 
 #ifdef HAS_SCALEADDROW_SSE2
 SAANY(ScaleAddRow_Any_SSE2, ScaleAddRow_SSE2, ScaleAddRow_C, 15)
@@ -208,14 +453,12 @@
 #ifdef HAS_SCALEADDROW_NEON
 SAANY(ScaleAddRow_Any_NEON, ScaleAddRow_NEON, ScaleAddRow_C, 15)
 #endif
+#ifdef HAS_SCALEADDROW_MSA
+SAANY(ScaleAddRow_Any_MSA, ScaleAddRow_MSA, ScaleAddRow_C, 15)
+#endif
 #undef SAANY
 
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
 #endif
-
-
-
-
-
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_argb.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_argb.cc
index 17f51ae..53a22e8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_argb.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_argb.cc
@@ -30,20 +30,31 @@
 // ScaleARGB ARGB, 1/2
 // This is an optimized version for scaling down a ARGB to 1/2 of
 // its original size.
-static void ScaleARGBDown2(int src_width, int src_height,
-                           int dst_width, int dst_height,
-                           int src_stride, int dst_stride,
-                           const uint8* src_argb, uint8* dst_argb,
-                           int x, int dx, int y, int dy,
+static void ScaleARGBDown2(int src_width,
+                           int src_height,
+                           int dst_width,
+                           int dst_height,
+                           int src_stride,
+                           int dst_stride,
+                           const uint8_t* src_argb,
+                           uint8_t* dst_argb,
+                           int x,
+                           int dx,
+                           int y,
+                           int dy,
                            enum FilterMode filtering) {
   int j;
   int row_stride = src_stride * (dy >> 16);
-  void (*ScaleARGBRowDown2)(const uint8* src_argb, ptrdiff_t src_stride,
-                            uint8* dst_argb, int dst_width) =
-    filtering == kFilterNone ? ScaleARGBRowDown2_C :
-        (filtering == kFilterLinear ? ScaleARGBRowDown2Linear_C :
-        ScaleARGBRowDown2Box_C);
-  assert(dx == 65536 * 2);  // Test scale factor of 2.
+  void (*ScaleARGBRowDown2)(const uint8_t* src_argb, ptrdiff_t src_stride,
+                            uint8_t* dst_argb, int dst_width) =
+      filtering == kFilterNone
+          ? ScaleARGBRowDown2_C
+          : (filtering == kFilterLinear ? ScaleARGBRowDown2Linear_C
+                                        : ScaleARGBRowDown2Box_C);
+  (void)src_width;
+  (void)src_height;
+  (void)dx;
+  assert(dx == 65536 * 2);      // Test scale factor of 2.
   assert((dy & 0x1ffff) == 0);  // Test vertical scale is multiple of 2.
   // Advance to odd row, even column.
   if (filtering == kFilterBilinear) {
@@ -54,25 +65,49 @@
 
 #if defined(HAS_SCALEARGBROWDOWN2_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
-    ScaleARGBRowDown2 = filtering == kFilterNone ? ScaleARGBRowDown2_Any_SSE2 :
-        (filtering == kFilterLinear ? ScaleARGBRowDown2Linear_Any_SSE2 :
-        ScaleARGBRowDown2Box_Any_SSE2);
+    ScaleARGBRowDown2 =
+        filtering == kFilterNone
+            ? ScaleARGBRowDown2_Any_SSE2
+            : (filtering == kFilterLinear ? ScaleARGBRowDown2Linear_Any_SSE2
+                                          : ScaleARGBRowDown2Box_Any_SSE2);
     if (IS_ALIGNED(dst_width, 4)) {
-      ScaleARGBRowDown2 = filtering == kFilterNone ? ScaleARGBRowDown2_SSE2 :
-          (filtering == kFilterLinear ? ScaleARGBRowDown2Linear_SSE2 :
-          ScaleARGBRowDown2Box_SSE2);
+      ScaleARGBRowDown2 =
+          filtering == kFilterNone
+              ? ScaleARGBRowDown2_SSE2
+              : (filtering == kFilterLinear ? ScaleARGBRowDown2Linear_SSE2
+                                            : ScaleARGBRowDown2Box_SSE2);
     }
   }
 #endif
 #if defined(HAS_SCALEARGBROWDOWN2_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
-    ScaleARGBRowDown2 = filtering == kFilterNone ? ScaleARGBRowDown2_Any_NEON :
-        (filtering == kFilterLinear ? ScaleARGBRowDown2Linear_Any_NEON :
-        ScaleARGBRowDown2Box_Any_NEON);
+    ScaleARGBRowDown2 =
+        filtering == kFilterNone
+            ? ScaleARGBRowDown2_Any_NEON
+            : (filtering == kFilterLinear ? ScaleARGBRowDown2Linear_Any_NEON
+                                          : ScaleARGBRowDown2Box_Any_NEON);
     if (IS_ALIGNED(dst_width, 8)) {
-      ScaleARGBRowDown2 = filtering == kFilterNone ? ScaleARGBRowDown2_NEON :
-          (filtering == kFilterLinear ? ScaleARGBRowDown2Linear_NEON :
-          ScaleARGBRowDown2Box_NEON);
+      ScaleARGBRowDown2 =
+          filtering == kFilterNone
+              ? ScaleARGBRowDown2_NEON
+              : (filtering == kFilterLinear ? ScaleARGBRowDown2Linear_NEON
+                                            : ScaleARGBRowDown2Box_NEON);
+    }
+  }
+#endif
+#if defined(HAS_SCALEARGBROWDOWN2_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ScaleARGBRowDown2 =
+        filtering == kFilterNone
+            ? ScaleARGBRowDown2_Any_MSA
+            : (filtering == kFilterLinear ? ScaleARGBRowDown2Linear_Any_MSA
+                                          : ScaleARGBRowDown2Box_Any_MSA);
+    if (IS_ALIGNED(dst_width, 4)) {
+      ScaleARGBRowDown2 =
+          filtering == kFilterNone
+              ? ScaleARGBRowDown2_MSA
+              : (filtering == kFilterLinear ? ScaleARGBRowDown2Linear_MSA
+                                            : ScaleARGBRowDown2Box_MSA);
     }
   }
 #endif
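For reference, the runtime-dispatch pattern these hunks reformat (pick a C fallback, upgrade to an "Any" SIMD wrapper when the CPU flag is set, and to the aligned kernel only when `dst_width` meets the alignment requirement) can be sketched in isolation. The flag, widths, and kernels below are illustrative stand-ins, not libyuv symbols:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical row kernels standing in for ScaleARGBRowDown2_C / _NEON.
static void RowDown2_C(const uint8_t* src, uint8_t* dst, int dst_width) {
  for (int i = 0; i < dst_width; ++i) {
    // Keep every other ARGB pixel (4 bytes per pixel).
    for (int b = 0; b < 4; ++b) dst[i * 4 + b] = src[i * 8 + b];
  }
}
static void RowDown2_Fast(const uint8_t* src, uint8_t* dst, int dst_width) {
  RowDown2_C(src, dst, dst_width);  // stand-in for a SIMD body
}

using RowFn = void (*)(const uint8_t*, uint8_t*, int);

// Mirror of the selection shape: default to C, and only take the fast
// kernel when the CPU supports it and dst_width is a multiple of 8.
static RowFn SelectRowDown2(bool has_simd, int dst_width) {
  RowFn fn = RowDown2_C;
  if (has_simd && (dst_width % 8) == 0) fn = RowDown2_Fast;
  return fn;
}
```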
@@ -90,21 +125,32 @@
 // ScaleARGB ARGB, 1/4
 // This is an optimized version for scaling down an ARGB image to 1/4 of
 // its original size.
-static void ScaleARGBDown4Box(int src_width, int src_height,
-                              int dst_width, int dst_height,
-                              int src_stride, int dst_stride,
-                              const uint8* src_argb, uint8* dst_argb,
-                              int x, int dx, int y, int dy) {
+static void ScaleARGBDown4Box(int src_width,
+                              int src_height,
+                              int dst_width,
+                              int dst_height,
+                              int src_stride,
+                              int dst_stride,
+                              const uint8_t* src_argb,
+                              uint8_t* dst_argb,
+                              int x,
+                              int dx,
+                              int y,
+                              int dy) {
   int j;
   // Allocate 2 rows of ARGB.
   const int kRowSize = (dst_width * 2 * 4 + 31) & ~31;
   align_buffer_64(row, kRowSize * 2);
   int row_stride = src_stride * (dy >> 16);
-  void (*ScaleARGBRowDown2)(const uint8* src_argb, ptrdiff_t src_stride,
-    uint8* dst_argb, int dst_width) = ScaleARGBRowDown2Box_C;
+  void (*ScaleARGBRowDown2)(const uint8_t* src_argb, ptrdiff_t src_stride,
+                            uint8_t* dst_argb, int dst_width) =
+      ScaleARGBRowDown2Box_C;
   // Advance to odd row, even column.
   src_argb += (y >> 16) * src_stride + (x >> 16) * 4;
-  assert(dx == 65536 * 4);  // Test scale factor of 4.
+  (void)src_width;
+  (void)src_height;
+  (void)dx;
+  assert(dx == 65536 * 4);      // Test scale factor of 4.
   assert((dy & 0x3ffff) == 0);  // Test vertical scale is multiple of 4.
 #if defined(HAS_SCALEARGBROWDOWN2_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
@@ -125,8 +171,8 @@
 
   for (j = 0; j < dst_height; ++j) {
     ScaleARGBRowDown2(src_argb, src_stride, row, dst_width * 2);
-    ScaleARGBRowDown2(src_argb + src_stride * 2, src_stride,
-                      row + kRowSize, dst_width * 2);
+    ScaleARGBRowDown2(src_argb + src_stride * 2, src_stride, row + kRowSize,
+                      dst_width * 2);
     ScaleARGBRowDown2(row, kRowSize, dst_argb, dst_width);
     src_argb += row_stride;
     dst_argb += dst_stride;
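The row buffer in `ScaleARGBDown4Box` is sized with `(dst_width * 2 * 4 + 31) & ~31`, i.e. the byte count rounded up to a multiple of 32 so SIMD kernels can touch full aligned vectors. The idiom on its own (valid for non-negative sizes, because 32 is a power of two):

```cpp
#include <cassert>

// Round n up to the next multiple of 32, as in the kRowSize computation.
constexpr int AlignUp32(int n) { return (n + 31) & ~31; }
```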
@@ -137,38 +183,57 @@
 // ScaleARGB ARGB Even
 // This is an optimized version for scaling down an ARGB image to an even
 // multiple of its original size.
-static void ScaleARGBDownEven(int src_width, int src_height,
-                              int dst_width, int dst_height,
-                              int src_stride, int dst_stride,
-                              const uint8* src_argb, uint8* dst_argb,
-                              int x, int dx, int y, int dy,
+static void ScaleARGBDownEven(int src_width,
+                              int src_height,
+                              int dst_width,
+                              int dst_height,
+                              int src_stride,
+                              int dst_stride,
+                              const uint8_t* src_argb,
+                              uint8_t* dst_argb,
+                              int x,
+                              int dx,
+                              int y,
+                              int dy,
                               enum FilterMode filtering) {
   int j;
   int col_step = dx >> 16;
   int row_stride = (dy >> 16) * src_stride;
-  void (*ScaleARGBRowDownEven)(const uint8* src_argb, ptrdiff_t src_stride,
-                               int src_step, uint8* dst_argb, int dst_width) =
+  void (*ScaleARGBRowDownEven)(const uint8_t* src_argb, ptrdiff_t src_stride,
+                               int src_step, uint8_t* dst_argb, int dst_width) =
       filtering ? ScaleARGBRowDownEvenBox_C : ScaleARGBRowDownEven_C;
+  (void)src_width;
+  (void)src_height;
   assert(IS_ALIGNED(src_width, 2));
   assert(IS_ALIGNED(src_height, 2));
   src_argb += (y >> 16) * src_stride + (x >> 16) * 4;
 #if defined(HAS_SCALEARGBROWDOWNEVEN_SSE2)
   if (TestCpuFlag(kCpuHasSSE2)) {
-    ScaleARGBRowDownEven = filtering ? ScaleARGBRowDownEvenBox_Any_SSE2 :
-        ScaleARGBRowDownEven_Any_SSE2;
+    ScaleARGBRowDownEven = filtering ? ScaleARGBRowDownEvenBox_Any_SSE2
+                                     : ScaleARGBRowDownEven_Any_SSE2;
     if (IS_ALIGNED(dst_width, 4)) {
-      ScaleARGBRowDownEven = filtering ? ScaleARGBRowDownEvenBox_SSE2 :
-          ScaleARGBRowDownEven_SSE2;
+      ScaleARGBRowDownEven =
+          filtering ? ScaleARGBRowDownEvenBox_SSE2 : ScaleARGBRowDownEven_SSE2;
     }
   }
 #endif
 #if defined(HAS_SCALEARGBROWDOWNEVEN_NEON)
   if (TestCpuFlag(kCpuHasNEON)) {
-    ScaleARGBRowDownEven = filtering ? ScaleARGBRowDownEvenBox_Any_NEON :
-        ScaleARGBRowDownEven_Any_NEON;
+    ScaleARGBRowDownEven = filtering ? ScaleARGBRowDownEvenBox_Any_NEON
+                                     : ScaleARGBRowDownEven_Any_NEON;
     if (IS_ALIGNED(dst_width, 4)) {
-      ScaleARGBRowDownEven = filtering ? ScaleARGBRowDownEvenBox_NEON :
-          ScaleARGBRowDownEven_NEON;
+      ScaleARGBRowDownEven =
+          filtering ? ScaleARGBRowDownEvenBox_NEON : ScaleARGBRowDownEven_NEON;
+    }
+  }
+#endif
+#if defined(HAS_SCALEARGBROWDOWNEVEN_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ScaleARGBRowDownEven = filtering ? ScaleARGBRowDownEvenBox_Any_MSA
+                                     : ScaleARGBRowDownEven_Any_MSA;
+    if (IS_ALIGNED(dst_width, 4)) {
+      ScaleARGBRowDownEven =
+          filtering ? ScaleARGBRowDownEvenBox_MSA : ScaleARGBRowDownEven_MSA;
     }
   }
 #endif
@@ -184,25 +249,32 @@
 }
 
 // Scale ARGB down with bilinear interpolation.
-static void ScaleARGBBilinearDown(int src_width, int src_height,
-                                  int dst_width, int dst_height,
-                                  int src_stride, int dst_stride,
-                                  const uint8* src_argb, uint8* dst_argb,
-                                  int x, int dx, int y, int dy,
+static void ScaleARGBBilinearDown(int src_width,
+                                  int src_height,
+                                  int dst_width,
+                                  int dst_height,
+                                  int src_stride,
+                                  int dst_stride,
+                                  const uint8_t* src_argb,
+                                  uint8_t* dst_argb,
+                                  int x,
+                                  int dx,
+                                  int y,
+                                  int dy,
                                   enum FilterMode filtering) {
   int j;
-  void (*InterpolateRow)(uint8* dst_argb, const uint8* src_argb,
-      ptrdiff_t src_stride, int dst_width, int source_y_fraction) =
-      InterpolateRow_C;
-  void (*ScaleARGBFilterCols)(uint8* dst_argb, const uint8* src_argb,
-      int dst_width, int x, int dx) =
+  void (*InterpolateRow)(uint8_t * dst_argb, const uint8_t* src_argb,
+                         ptrdiff_t src_stride, int dst_width,
+                         int source_y_fraction) = InterpolateRow_C;
+  void (*ScaleARGBFilterCols)(uint8_t * dst_argb, const uint8_t* src_argb,
+                              int dst_width, int x, int dx) =
       (src_width >= 32768) ? ScaleARGBFilterCols64_C : ScaleARGBFilterCols_C;
-  int64 xlast = x + (int64)(dst_width - 1) * dx;
-  int64 xl = (dx >= 0) ? x : xlast;
-  int64 xr = (dx >= 0) ? xlast : x;
+  int64_t xlast = x + (int64_t)(dst_width - 1) * dx;
+  int64_t xl = (dx >= 0) ? x : xlast;
+  int64_t xr = (dx >= 0) ? xlast : x;
   int clip_src_width;
-  xl = (xl >> 16) & ~3;  // Left edge aligned.
-  xr = (xr >> 16) + 1;  // Right most pixel used.  Bilinear uses 2 pixels.
+  xl = (xl >> 16) & ~3;    // Left edge aligned.
+  xr = (xr >> 16) + 1;     // Right most pixel used.  Bilinear uses 2 pixels.
   xr = (xr + 1 + 3) & ~3;  // 1 beyond 4 pixel aligned right most pixel.
   if (xr > src_width) {
     xr = src_width;
@@ -234,12 +306,11 @@
     }
   }
 #endif
-#if defined(HAS_INTERPOLATEROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) &&
-      IS_ALIGNED(src_argb, 4) && IS_ALIGNED(src_stride, 4)) {
-    InterpolateRow = InterpolateRow_Any_DSPR2;
-    if (IS_ALIGNED(clip_src_width, 4)) {
-      InterpolateRow = InterpolateRow_DSPR2;
+#if defined(HAS_INTERPOLATEROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    InterpolateRow = InterpolateRow_Any_MSA;
+    if (IS_ALIGNED(clip_src_width, 32)) {
+      InterpolateRow = InterpolateRow_MSA;
     }
   }
 #endif
@@ -256,6 +327,14 @@
     }
   }
 #endif
+#if defined(HAS_SCALEARGBFILTERCOLS_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ScaleARGBFilterCols = ScaleARGBFilterCols_Any_MSA;
+    if (IS_ALIGNED(dst_width, 8)) {
+      ScaleARGBFilterCols = ScaleARGBFilterCols_MSA;
+    }
+  }
+#endif
   // TODO(fbarchard): Consider not allocating row buffer for kFilterLinear.
   // Allocate a row of ARGB.
   {
@@ -267,7 +346,7 @@
     }
     for (j = 0; j < dst_height; ++j) {
       int yi = y >> 16;
-      const uint8* src = src_argb + yi * src_stride;
+      const uint8_t* src = src_argb + yi * src_stride;
       if (filtering == kFilterLinear) {
         ScaleARGBFilterCols(dst_argb, src, dst_width, x, dx);
       } else {
@@ -286,18 +365,25 @@
 }
 
 // Scale ARGB up with bilinear interpolation.
-static void ScaleARGBBilinearUp(int src_width, int src_height,
-                                int dst_width, int dst_height,
-                                int src_stride, int dst_stride,
-                                const uint8* src_argb, uint8* dst_argb,
-                                int x, int dx, int y, int dy,
+static void ScaleARGBBilinearUp(int src_width,
+                                int src_height,
+                                int dst_width,
+                                int dst_height,
+                                int src_stride,
+                                int dst_stride,
+                                const uint8_t* src_argb,
+                                uint8_t* dst_argb,
+                                int x,
+                                int dx,
+                                int y,
+                                int dy,
                                 enum FilterMode filtering) {
   int j;
-  void (*InterpolateRow)(uint8* dst_argb, const uint8* src_argb,
-      ptrdiff_t src_stride, int dst_width, int source_y_fraction) =
-      InterpolateRow_C;
-  void (*ScaleARGBFilterCols)(uint8* dst_argb, const uint8* src_argb,
-      int dst_width, int x, int dx) =
+  void (*InterpolateRow)(uint8_t * dst_argb, const uint8_t* src_argb,
+                         ptrdiff_t src_stride, int dst_width,
+                         int source_y_fraction) = InterpolateRow_C;
+  void (*ScaleARGBFilterCols)(uint8_t * dst_argb, const uint8_t* src_argb,
+                              int dst_width, int x, int dx) =
       filtering ? ScaleARGBFilterCols_C : ScaleARGBCols_C;
   const int max_y = (src_height - 1) << 16;
 #if defined(HAS_INTERPOLATEROW_SSSE3)
@@ -324,15 +410,17 @@
     }
   }
 #endif
-#if defined(HAS_INTERPOLATEROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) &&
-      IS_ALIGNED(dst_argb, 4) && IS_ALIGNED(dst_stride, 4)) {
-    InterpolateRow = InterpolateRow_DSPR2;
+#if defined(HAS_INTERPOLATEROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    InterpolateRow = InterpolateRow_Any_MSA;
+    if (IS_ALIGNED(dst_width, 8)) {
+      InterpolateRow = InterpolateRow_MSA;
+    }
   }
 #endif
   if (src_width >= 32768) {
-    ScaleARGBFilterCols = filtering ?
-        ScaleARGBFilterCols64_C : ScaleARGBCols64_C;
+    ScaleARGBFilterCols =
+        filtering ? ScaleARGBFilterCols64_C : ScaleARGBCols64_C;
   }
 #if defined(HAS_SCALEARGBFILTERCOLS_SSSE3)
   if (filtering && TestCpuFlag(kCpuHasSSSE3) && src_width < 32768) {
@@ -347,6 +435,14 @@
     }
   }
 #endif
+#if defined(HAS_SCALEARGBFILTERCOLS_MSA)
+  if (filtering && TestCpuFlag(kCpuHasMSA)) {
+    ScaleARGBFilterCols = ScaleARGBFilterCols_Any_MSA;
+    if (IS_ALIGNED(dst_width, 8)) {
+      ScaleARGBFilterCols = ScaleARGBFilterCols_MSA;
+    }
+  }
+#endif
 #if defined(HAS_SCALEARGBCOLS_SSE2)
   if (!filtering && TestCpuFlag(kCpuHasSSE2) && src_width < 32768) {
     ScaleARGBFilterCols = ScaleARGBCols_SSE2;
@@ -360,6 +456,14 @@
     }
   }
 #endif
+#if defined(HAS_SCALEARGBCOLS_MSA)
+  if (!filtering && TestCpuFlag(kCpuHasMSA)) {
+    ScaleARGBFilterCols = ScaleARGBCols_Any_MSA;
+    if (IS_ALIGNED(dst_width, 4)) {
+      ScaleARGBFilterCols = ScaleARGBCols_MSA;
+    }
+  }
+#endif
   if (!filtering && src_width * 2 == dst_width && x < 0x8000) {
     ScaleARGBFilterCols = ScaleARGBColsUp2_C;
 #if defined(HAS_SCALEARGBCOLSUP2_SSE2)
@@ -375,13 +479,13 @@
 
   {
     int yi = y >> 16;
-    const uint8* src = src_argb + yi * src_stride;
+    const uint8_t* src = src_argb + yi * src_stride;
 
     // Allocate 2 rows of ARGB.
     const int kRowSize = (dst_width * 4 + 31) & ~31;
     align_buffer_64(row, kRowSize * 2);
 
-    uint8* rowptr = row;
+    uint8_t* rowptr = row;
     int rowstride = kRowSize;
     int lasty = yi;
 
@@ -423,24 +527,27 @@
 
 #ifdef YUVSCALEUP
 // Scale YUV to ARGB up with bilinear interpolation.
-static void ScaleYUVToARGBBilinearUp(int src_width, int src_height,
-                                     int dst_width, int dst_height,
+static void ScaleYUVToARGBBilinearUp(int src_width,
+                                     int src_height,
+                                     int dst_width,
+                                     int dst_height,
                                      int src_stride_y,
                                      int src_stride_u,
                                      int src_stride_v,
                                      int dst_stride_argb,
-                                     const uint8* src_y,
-                                     const uint8* src_u,
-                                     const uint8* src_v,
-                                     uint8* dst_argb,
-                                     int x, int dx, int y, int dy,
+                                     const uint8_t* src_y,
+                                     const uint8_t* src_u,
+                                     const uint8_t* src_v,
+                                     uint8_t* dst_argb,
+                                     int x,
+                                     int dx,
+                                     int y,
+                                     int dy,
                                      enum FilterMode filtering) {
   int j;
-  void (*I422ToARGBRow)(const uint8* y_buf,
-                        const uint8* u_buf,
-                        const uint8* v_buf,
-                        uint8* rgb_buf,
-                        int width) = I422ToARGBRow_C;
+  void (*I422ToARGBRow)(const uint8_t* y_buf, const uint8_t* u_buf,
+                        const uint8_t* v_buf, uint8_t* rgb_buf, int width) =
+      I422ToARGBRow_C;
 #if defined(HAS_I422TOARGBROW_SSSE3)
   if (TestCpuFlag(kCpuHasSSSE3)) {
     I422ToARGBRow = I422ToARGBRow_Any_SSSE3;
@@ -465,19 +572,18 @@
     }
   }
 #endif
-#if defined(HAS_I422TOARGBROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) && IS_ALIGNED(src_width, 4) &&
-      IS_ALIGNED(src_y, 4) && IS_ALIGNED(src_stride_y, 4) &&
-      IS_ALIGNED(src_u, 2) && IS_ALIGNED(src_stride_u, 2) &&
-      IS_ALIGNED(src_v, 2) && IS_ALIGNED(src_stride_v, 2) &&
-      IS_ALIGNED(dst_argb, 4) && IS_ALIGNED(dst_stride_argb, 4)) {
-    I422ToARGBRow = I422ToARGBRow_DSPR2;
+#if defined(HAS_I422TOARGBROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    I422ToARGBRow = I422ToARGBRow_Any_MSA;
+    if (IS_ALIGNED(src_width, 8)) {
+      I422ToARGBRow = I422ToARGBRow_MSA;
+    }
   }
 #endif
 
-  void (*InterpolateRow)(uint8* dst_argb, const uint8* src_argb,
-      ptrdiff_t src_stride, int dst_width, int source_y_fraction) =
-      InterpolateRow_C;
+  void (*InterpolateRow)(uint8_t * dst_argb, const uint8_t* src_argb,
+                         ptrdiff_t src_stride, int dst_width,
+                         int source_y_fraction) = InterpolateRow_C;
 #if defined(HAS_INTERPOLATEROW_SSSE3)
   if (TestCpuFlag(kCpuHasSSSE3)) {
     InterpolateRow = InterpolateRow_Any_SSSE3;
@@ -502,19 +608,21 @@
     }
   }
 #endif
-#if defined(HAS_INTERPOLATEROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) &&
-      IS_ALIGNED(dst_argb, 4) && IS_ALIGNED(dst_stride_argb, 4)) {
-    InterpolateRow = InterpolateRow_DSPR2;
+#if defined(HAS_INTERPOLATEROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    InterpolateRow = InterpolateRow_Any_MSA;
+    if (IS_ALIGNED(dst_width, 8)) {
+      InterpolateRow = InterpolateRow_MSA;
+    }
   }
 #endif
 
-  void (*ScaleARGBFilterCols)(uint8* dst_argb, const uint8* src_argb,
-      int dst_width, int x, int dx) =
+  void (*ScaleARGBFilterCols)(uint8_t * dst_argb, const uint8_t* src_argb,
+                              int dst_width, int x, int dx) =
       filtering ? ScaleARGBFilterCols_C : ScaleARGBCols_C;
   if (src_width >= 32768) {
-    ScaleARGBFilterCols = filtering ?
-        ScaleARGBFilterCols64_C : ScaleARGBCols64_C;
+    ScaleARGBFilterCols =
+        filtering ? ScaleARGBFilterCols64_C : ScaleARGBCols64_C;
   }
 #if defined(HAS_SCALEARGBFILTERCOLS_SSSE3)
   if (filtering && TestCpuFlag(kCpuHasSSSE3) && src_width < 32768) {
@@ -529,6 +637,14 @@
     }
   }
 #endif
+#if defined(HAS_SCALEARGBFILTERCOLS_MSA)
+  if (filtering && TestCpuFlag(kCpuHasMSA)) {
+    ScaleARGBFilterCols = ScaleARGBFilterCols_Any_MSA;
+    if (IS_ALIGNED(dst_width, 8)) {
+      ScaleARGBFilterCols = ScaleARGBFilterCols_MSA;
+    }
+  }
+#endif
 #if defined(HAS_SCALEARGBCOLS_SSE2)
   if (!filtering && TestCpuFlag(kCpuHasSSE2) && src_width < 32768) {
     ScaleARGBFilterCols = ScaleARGBCols_SSE2;
@@ -542,6 +658,14 @@
     }
   }
 #endif
+#if defined(HAS_SCALEARGBCOLS_MSA)
+  if (!filtering && TestCpuFlag(kCpuHasMSA)) {
+    ScaleARGBFilterCols = ScaleARGBCols_Any_MSA;
+    if (IS_ALIGNED(dst_width, 4)) {
+      ScaleARGBFilterCols = ScaleARGBCols_MSA;
+    }
+  }
+#endif
   if (!filtering && src_width * 2 == dst_width && x < 0x8000) {
     ScaleARGBFilterCols = ScaleARGBColsUp2_C;
 #if defined(HAS_SCALEARGBCOLSUP2_SSE2)
@@ -558,9 +682,9 @@
   const int kYShift = 1;  // Shift Y by 1 to convert Y plane to UV coordinate.
   int yi = y >> 16;
   int uv_yi = yi >> kYShift;
-  const uint8* src_row_y = src_y + yi * src_stride_y;
-  const uint8* src_row_u = src_u + uv_yi * src_stride_u;
-  const uint8* src_row_v = src_v + uv_yi * src_stride_v;
+  const uint8_t* src_row_y = src_y + yi * src_stride_y;
+  const uint8_t* src_row_u = src_u + uv_yi * src_stride_u;
+  const uint8_t* src_row_v = src_v + uv_yi * src_stride_v;
 
   // Allocate 2 rows of ARGB.
   const int kRowSize = (dst_width * 4 + 31) & ~31;
@@ -569,7 +693,7 @@
   // Allocate 1 row of ARGB for source conversion.
   align_buffer_64(argb_row, src_width * 4);
 
-  uint8* rowptr = row;
+  uint8_t* rowptr = row;
   int rowstride = kRowSize;
   int lasty = yi;
 
@@ -635,15 +759,23 @@
 // of x and dx is the integer part of the source position and
 // the lower 16 bits are the fixed decimal part.
 
-static void ScaleARGBSimple(int src_width, int src_height,
-                            int dst_width, int dst_height,
-                            int src_stride, int dst_stride,
-                            const uint8* src_argb, uint8* dst_argb,
-                            int x, int dx, int y, int dy) {
+static void ScaleARGBSimple(int src_width,
+                            int src_height,
+                            int dst_width,
+                            int dst_height,
+                            int src_stride,
+                            int dst_stride,
+                            const uint8_t* src_argb,
+                            uint8_t* dst_argb,
+                            int x,
+                            int dx,
+                            int y,
+                            int dy) {
   int j;
-  void (*ScaleARGBCols)(uint8* dst_argb, const uint8* src_argb,
-      int dst_width, int x, int dx) =
+  void (*ScaleARGBCols)(uint8_t * dst_argb, const uint8_t* src_argb,
+                        int dst_width, int x, int dx) =
       (src_width >= 32768) ? ScaleARGBCols64_C : ScaleARGBCols_C;
+  (void)src_height;
 #if defined(HAS_SCALEARGBCOLS_SSE2)
   if (TestCpuFlag(kCpuHasSSE2) && src_width < 32768) {
     ScaleARGBCols = ScaleARGBCols_SSE2;
@@ -657,6 +789,14 @@
     }
   }
 #endif
+#if defined(HAS_SCALEARGBCOLS_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    ScaleARGBCols = ScaleARGBCols_Any_MSA;
+    if (IS_ALIGNED(dst_width, 4)) {
+      ScaleARGBCols = ScaleARGBCols_MSA;
+    }
+  }
+#endif
   if (src_width * 2 == dst_width && x < 0x8000) {
     ScaleARGBCols = ScaleARGBColsUp2_C;
 #if defined(HAS_SCALEARGBCOLSUP2_SSE2)
@@ -667,8 +807,8 @@
   }
 
   for (j = 0; j < dst_height; ++j) {
-    ScaleARGBCols(dst_argb, src_argb + (y >> 16) * src_stride,
-                  dst_width, x, dx);
+    ScaleARGBCols(dst_argb, src_argb + (y >> 16) * src_stride, dst_width, x,
+                  dx);
     dst_argb += dst_stride;
     y += dy;
   }
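The loop above steps through the source in 16.16 fixed point: the top 16 bits of `y` select the source row, the low 16 bits carry the fraction, and each output row adds `dy`. A self-contained sketch of that stepping (function names are illustrative):

```cpp
#include <cassert>
#include <cstdint>

// 16.16 fixed point: dy = 65536 advances one source row per output row;
// dy = 131072 (65536 * 2) skips every other row, i.e. a 1/2 downscale.
static int SourceRow(int y_fixed) { return y_fixed >> 16; }

static int LastSourceRow(int y0, int dy, int dst_height) {
  int y = y0;
  for (int j = 0; j < dst_height; ++j) y += dy;
  return SourceRow(y - dy);  // source row used by the final iteration
}
```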
@@ -677,11 +817,18 @@
 // ScaleARGB an ARGB image.
 // This function in turn calls a scaling function
 // suitable for handling the desired resolutions.
-static void ScaleARGB(const uint8* src, int src_stride,
-                      int src_width, int src_height,
-                      uint8* dst, int dst_stride,
-                      int dst_width, int dst_height,
-                      int clip_x, int clip_y, int clip_width, int clip_height,
+static void ScaleARGB(const uint8_t* src,
+                      int src_stride,
+                      int src_width,
+                      int src_height,
+                      uint8_t* dst,
+                      int dst_stride,
+                      int dst_width,
+                      int dst_height,
+                      int clip_x,
+                      int clip_y,
+                      int clip_width,
+                      int clip_height,
                       enum FilterMode filtering) {
   // Initial source x/y coordinate and step values as 16.16 fixed point.
   int x = 0;
@@ -690,8 +837,7 @@
   int dy = 0;
   // ARGB does not support box filter yet, but allow the user to pass it.
   // Simplify filtering when possible.
-  filtering = ScaleFilterReduce(src_width, src_height,
-                                dst_width, dst_height,
+  filtering = ScaleFilterReduce(src_width, src_height, dst_width, dst_height,
                                 filtering);
 
   // Negative src_height means invert the image.
@@ -700,17 +846,17 @@
     src = src + (src_height - 1) * src_stride;
     src_stride = -src_stride;
   }
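A negative `src_height` requests a vertically flipped output; the code handles it once up front by rebasing `src` to the last row and negating the stride, so every scaler below walks the image bottom-up without knowing. The trick in isolation:

```cpp
#include <cassert>
#include <cstdint>

// Flip handling: point at the last row and negate the stride. Row i is
// then read as base[i * out_stride], walking the image bottom-up.
static const uint8_t* FlipBase(const uint8_t* src, int stride, int height,
                               int* out_stride) {
  src = src + (height - 1) * stride;
  *out_stride = -stride;
  return src;
}
```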
-  ScaleSlope(src_width, src_height, dst_width, dst_height, filtering,
-             &x, &y, &dx, &dy);
+  ScaleSlope(src_width, src_height, dst_width, dst_height, filtering, &x, &y,
+             &dx, &dy);
   src_width = Abs(src_width);
   if (clip_x) {
-    int64 clipf = (int64)(clip_x) * dx;
+    int64_t clipf = (int64_t)(clip_x)*dx;
     x += (clipf & 0xffff);
     src += (clipf >> 16) * 4;
     dst += clip_x * 4;
   }
   if (clip_y) {
-    int64 clipf = (int64)(clip_y) * dy;
+    int64_t clipf = (int64_t)(clip_y)*dy;
     y += (clipf & 0xffff);
     src += (clipf >> 16) * src_stride;
     dst += clip_y * dst_stride;
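The clip handling above multiplies the clip offset by the fixed-point step in 64-bit (the product `clip_x * dx` can exceed 32 bits), then splits it into an integer pointer advance (`clipf >> 16`) and a fractional phase folded back into `x` (`clipf & 0xffff`). The same split, extracted:

```cpp
#include <cassert>
#include <cstdint>

// Whole-pixel and fractional parts of a 16.16 clip product, mirroring
// the clipf computation in ScaleARGB.
static int64_t ClipWhole(int clip, int step_fixed) {
  return ((int64_t)clip * step_fixed) >> 16;  // 64-bit avoids overflow
}
static int ClipFrac(int clip, int step_fixed) {
  return (int)(((int64_t)clip * step_fixed) & 0xffff);
}
```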
@@ -725,24 +871,20 @@
       if (!(dx & 0x10000) && !(dy & 0x10000)) {
         if (dx == 0x20000) {
           // Optimized 1/2 downsample.
-          ScaleARGBDown2(src_width, src_height,
-                         clip_width, clip_height,
-                         src_stride, dst_stride, src, dst,
-                         x, dx, y, dy, filtering);
+          ScaleARGBDown2(src_width, src_height, clip_width, clip_height,
+                         src_stride, dst_stride, src, dst, x, dx, y, dy,
+                         filtering);
           return;
         }
         if (dx == 0x40000 && filtering == kFilterBox) {
           // Optimized 1/4 box downsample.
-          ScaleARGBDown4Box(src_width, src_height,
-                            clip_width, clip_height,
-                            src_stride, dst_stride, src, dst,
-                            x, dx, y, dy);
+          ScaleARGBDown4Box(src_width, src_height, clip_width, clip_height,
+                            src_stride, dst_stride, src, dst, x, dx, y, dy);
           return;
         }
-        ScaleARGBDownEven(src_width, src_height,
-                          clip_width, clip_height,
-                          src_stride, dst_stride, src, dst,
-                          x, dx, y, dy, filtering);
+        ScaleARGBDownEven(src_width, src_height, clip_width, clip_height,
+                          src_stride, dst_stride, src, dst, x, dx, y, dy,
+                          filtering);
         return;
       }
       // Optimized odd scale down. ie 3, 5, 7, 9x.
@@ -759,96 +901,105 @@
   }
   if (dx == 0x10000 && (x & 0xffff) == 0) {
     // Arbitrary scale vertically, but unscaled horizontally.
-    ScalePlaneVertical(src_height,
-                       clip_width, clip_height,
-                       src_stride, dst_stride, src, dst,
-                       x, y, dy, 4, filtering);
+    ScalePlaneVertical(src_height, clip_width, clip_height, src_stride,
+                       dst_stride, src, dst, x, y, dy, 4, filtering);
     return;
   }
   if (filtering && dy < 65536) {
-    ScaleARGBBilinearUp(src_width, src_height,
-                        clip_width, clip_height,
-                        src_stride, dst_stride, src, dst,
-                        x, dx, y, dy, filtering);
+    ScaleARGBBilinearUp(src_width, src_height, clip_width, clip_height,
+                        src_stride, dst_stride, src, dst, x, dx, y, dy,
+                        filtering);
     return;
   }
   if (filtering) {
-    ScaleARGBBilinearDown(src_width, src_height,
-                          clip_width, clip_height,
-                          src_stride, dst_stride, src, dst,
-                          x, dx, y, dy, filtering);
+    ScaleARGBBilinearDown(src_width, src_height, clip_width, clip_height,
+                          src_stride, dst_stride, src, dst, x, dx, y, dy,
+                          filtering);
     return;
   }
-  ScaleARGBSimple(src_width, src_height, clip_width, clip_height,
-                  src_stride, dst_stride, src, dst,
-                  x, dx, y, dy);
+  ScaleARGBSimple(src_width, src_height, clip_width, clip_height, src_stride,
+                  dst_stride, src, dst, x, dx, y, dy);
 }
 
 LIBYUV_API
-int ARGBScaleClip(const uint8* src_argb, int src_stride_argb,
-                  int src_width, int src_height,
-                  uint8* dst_argb, int dst_stride_argb,
-                  int dst_width, int dst_height,
-                  int clip_x, int clip_y, int clip_width, int clip_height,
+int ARGBScaleClip(const uint8_t* src_argb,
+                  int src_stride_argb,
+                  int src_width,
+                  int src_height,
+                  uint8_t* dst_argb,
+                  int dst_stride_argb,
+                  int dst_width,
+                  int dst_height,
+                  int clip_x,
+                  int clip_y,
+                  int clip_width,
+                  int clip_height,
                   enum FilterMode filtering) {
-  if (!src_argb || src_width == 0 || src_height == 0 ||
-      !dst_argb || dst_width <= 0 || dst_height <= 0 ||
-      clip_x < 0 || clip_y < 0 ||
+  if (!src_argb || src_width == 0 || src_height == 0 || !dst_argb ||
+      dst_width <= 0 || dst_height <= 0 || clip_x < 0 || clip_y < 0 ||
       clip_width > 32768 || clip_height > 32768 ||
       (clip_x + clip_width) > dst_width ||
       (clip_y + clip_height) > dst_height) {
     return -1;
   }
-  ScaleARGB(src_argb, src_stride_argb, src_width, src_height,
-            dst_argb, dst_stride_argb, dst_width, dst_height,
-            clip_x, clip_y, clip_width, clip_height, filtering);
+  ScaleARGB(src_argb, src_stride_argb, src_width, src_height, dst_argb,
+            dst_stride_argb, dst_width, dst_height, clip_x, clip_y, clip_width,
+            clip_height, filtering);
   return 0;
 }
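`ARGBScaleClip` rejects any clip rectangle that is negative, larger than the 32768-pixel cap, or extends past the destination before dispatching. The same predicate, extracted into a standalone check:

```cpp
#include <cassert>

// True if the clip rect lies fully inside a dst_width x dst_height
// surface, mirroring the validation in ARGBScaleClip.
static bool ClipOk(int clip_x, int clip_y, int clip_w, int clip_h,
                   int dst_width, int dst_height) {
  return clip_x >= 0 && clip_y >= 0 &&
         clip_w <= 32768 && clip_h <= 32768 &&
         clip_x + clip_w <= dst_width &&
         clip_y + clip_h <= dst_height;
}
```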
 
 // Scale an ARGB image.
 LIBYUV_API
-int ARGBScale(const uint8* src_argb, int src_stride_argb,
-              int src_width, int src_height,
-              uint8* dst_argb, int dst_stride_argb,
-              int dst_width, int dst_height,
+int ARGBScale(const uint8_t* src_argb,
+              int src_stride_argb,
+              int src_width,
+              int src_height,
+              uint8_t* dst_argb,
+              int dst_stride_argb,
+              int dst_width,
+              int dst_height,
               enum FilterMode filtering) {
-  if (!src_argb || src_width == 0 || src_height == 0 ||
-      src_width > 32768 || src_height > 32768 ||
-      !dst_argb || dst_width <= 0 || dst_height <= 0) {
+  if (!src_argb || src_width == 0 || src_height == 0 || src_width > 32768 ||
+      src_height > 32768 || !dst_argb || dst_width <= 0 || dst_height <= 0) {
     return -1;
   }
-  ScaleARGB(src_argb, src_stride_argb, src_width, src_height,
-            dst_argb, dst_stride_argb, dst_width, dst_height,
-            0, 0, dst_width, dst_height, filtering);
+  ScaleARGB(src_argb, src_stride_argb, src_width, src_height, dst_argb,
+            dst_stride_argb, dst_width, dst_height, 0, 0, dst_width, dst_height,
+            filtering);
   return 0;
 }
 
 // Scale with YUV conversion to ARGB and clipping.
 LIBYUV_API
-int YUVToARGBScaleClip(const uint8* src_y, int src_stride_y,
-                       const uint8* src_u, int src_stride_u,
-                       const uint8* src_v, int src_stride_v,
-                       uint32 src_fourcc,
-                       int src_width, int src_height,
-                       uint8* dst_argb, int dst_stride_argb,
-                       uint32 dst_fourcc,
-                       int dst_width, int dst_height,
-                       int clip_x, int clip_y, int clip_width, int clip_height,
+int YUVToARGBScaleClip(const uint8_t* src_y,
+                       int src_stride_y,
+                       const uint8_t* src_u,
+                       int src_stride_u,
+                       const uint8_t* src_v,
+                       int src_stride_v,
+                       uint32_t src_fourcc,
+                       int src_width,
+                       int src_height,
+                       uint8_t* dst_argb,
+                       int dst_stride_argb,
+                       uint32_t dst_fourcc,
+                       int dst_width,
+                       int dst_height,
+                       int clip_x,
+                       int clip_y,
+                       int clip_width,
+                       int clip_height,
                        enum FilterMode filtering) {
-  uint8* argb_buffer = (uint8*)malloc(src_width * src_height * 4);
+  uint8_t* argb_buffer = (uint8_t*)malloc(src_width * src_height * 4);
   int r;
-  I420ToARGB(src_y, src_stride_y,
-             src_u, src_stride_u,
-             src_v, src_stride_v,
-             argb_buffer, src_width * 4,
-             src_width, src_height);
+  (void)src_fourcc;  // TODO(fbarchard): implement and/or assert.
+  (void)dst_fourcc;
+  I420ToARGB(src_y, src_stride_y, src_u, src_stride_u, src_v, src_stride_v,
+             argb_buffer, src_width * 4, src_width, src_height);
 
-  r = ARGBScaleClip(argb_buffer, src_width * 4,
-                    src_width, src_height,
-                    dst_argb, dst_stride_argb,
-                    dst_width, dst_height,
-                    clip_x, clip_y, clip_width, clip_height,
-                    filtering);
+  r = ARGBScaleClip(argb_buffer, src_width * 4, src_width, src_height, dst_argb,
+                    dst_stride_argb, dst_width, dst_height, clip_x, clip_y,
+                    clip_width, clip_height, filtering);
   free(argb_buffer);
   return r;
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_common.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_common.cc
index 3507aa4..b28d7da 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_common.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_common.cc
@@ -28,9 +28,12 @@
 }
 
 // CPU agnostic row functions
-void ScaleRowDown2_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                     uint8* dst, int dst_width) {
+void ScaleRowDown2_C(const uint8_t* src_ptr,
+                     ptrdiff_t src_stride,
+                     uint8_t* dst,
+                     int dst_width) {
   int x;
+  (void)src_stride;
   for (x = 0; x < dst_width - 1; x += 2) {
     dst[0] = src_ptr[1];
     dst[1] = src_ptr[3];
@@ -42,9 +45,12 @@
   }
 }
 
-void ScaleRowDown2_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                        uint16* dst, int dst_width) {
+void ScaleRowDown2_16_C(const uint16_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint16_t* dst,
+                        int dst_width) {
   int x;
+  (void)src_stride;
   for (x = 0; x < dst_width - 1; x += 2) {
     dst[0] = src_ptr[1];
     dst[1] = src_ptr[3];
@@ -56,10 +62,13 @@
   }
 }
 
-void ScaleRowDown2Linear_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst, int dst_width) {
-  const uint8* s = src_ptr;
+void ScaleRowDown2Linear_C(const uint8_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst,
+                           int dst_width) {
+  const uint8_t* s = src_ptr;
   int x;
+  (void)src_stride;
   for (x = 0; x < dst_width - 1; x += 2) {
     dst[0] = (s[0] + s[1] + 1) >> 1;
     dst[1] = (s[2] + s[3] + 1) >> 1;
@@ -71,10 +80,13 @@
   }
 }
 
-void ScaleRowDown2Linear_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                              uint16* dst, int dst_width) {
-  const uint16* s = src_ptr;
+void ScaleRowDown2Linear_16_C(const uint16_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint16_t* dst,
+                              int dst_width) {
+  const uint16_t* s = src_ptr;
   int x;
+  (void)src_stride;
   for (x = 0; x < dst_width - 1; x += 2) {
     dst[0] = (s[0] + s[1] + 1) >> 1;
     dst[1] = (s[2] + s[3] + 1) >> 1;
@@ -86,10 +98,12 @@
   }
 }
 
-void ScaleRowDown2Box_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst, int dst_width) {
-  const uint8* s = src_ptr;
-  const uint8* t = src_ptr + src_stride;
+void ScaleRowDown2Box_C(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        int dst_width) {
+  const uint8_t* s = src_ptr;
+  const uint8_t* t = src_ptr + src_stride;
   int x;
   for (x = 0; x < dst_width - 1; x += 2) {
     dst[0] = (s[0] + s[1] + t[0] + t[1] + 2) >> 2;
@@ -103,10 +117,12 @@
   }
 }
 
-void ScaleRowDown2Box_Odd_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst, int dst_width) {
-  const uint8* s = src_ptr;
-  const uint8* t = src_ptr + src_stride;
+void ScaleRowDown2Box_Odd_C(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst,
+                            int dst_width) {
+  const uint8_t* s = src_ptr;
+  const uint8_t* t = src_ptr + src_stride;
   int x;
   dst_width -= 1;
   for (x = 0; x < dst_width - 1; x += 2) {
@@ -125,10 +141,12 @@
   dst[0] = (s[0] + t[0] + 1) >> 1;
 }
 
-void ScaleRowDown2Box_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                           uint16* dst, int dst_width) {
-  const uint16* s = src_ptr;
-  const uint16* t = src_ptr + src_stride;
+void ScaleRowDown2Box_16_C(const uint16_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint16_t* dst,
+                           int dst_width) {
+  const uint16_t* s = src_ptr;
+  const uint16_t* t = src_ptr + src_stride;
   int x;
   for (x = 0; x < dst_width - 1; x += 2) {
     dst[0] = (s[0] + s[1] + t[0] + t[1] + 2) >> 2;
@@ -142,9 +160,12 @@
   }
 }
 
-void ScaleRowDown4_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                     uint8* dst, int dst_width) {
+void ScaleRowDown4_C(const uint8_t* src_ptr,
+                     ptrdiff_t src_stride,
+                     uint8_t* dst,
+                     int dst_width) {
   int x;
+  (void)src_stride;
   for (x = 0; x < dst_width - 1; x += 2) {
     dst[0] = src_ptr[2];
     dst[1] = src_ptr[6];
@@ -156,9 +177,12 @@
   }
 }
 
-void ScaleRowDown4_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                        uint16* dst, int dst_width) {
+void ScaleRowDown4_16_C(const uint16_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint16_t* dst,
+                        int dst_width) {
   int x;
+  (void)src_stride;
   for (x = 0; x < dst_width - 1; x += 2) {
     dst[0] = src_ptr[2];
     dst[1] = src_ptr[6];
@@ -170,81 +194,88 @@
   }
 }
 
-void ScaleRowDown4Box_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst, int dst_width) {
+void ScaleRowDown4Box_C(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        int dst_width) {
   intptr_t stride = src_stride;
   int x;
   for (x = 0; x < dst_width - 1; x += 2) {
     dst[0] = (src_ptr[0] + src_ptr[1] + src_ptr[2] + src_ptr[3] +
-             src_ptr[stride + 0] + src_ptr[stride + 1] +
-             src_ptr[stride + 2] + src_ptr[stride + 3] +
-             src_ptr[stride * 2 + 0] + src_ptr[stride * 2 + 1] +
-             src_ptr[stride * 2 + 2] + src_ptr[stride * 2 + 3] +
-             src_ptr[stride * 3 + 0] + src_ptr[stride * 3 + 1] +
-             src_ptr[stride * 3 + 2] + src_ptr[stride * 3 + 3] +
-             8) >> 4;
+              src_ptr[stride + 0] + src_ptr[stride + 1] + src_ptr[stride + 2] +
+              src_ptr[stride + 3] + src_ptr[stride * 2 + 0] +
+              src_ptr[stride * 2 + 1] + src_ptr[stride * 2 + 2] +
+              src_ptr[stride * 2 + 3] + src_ptr[stride * 3 + 0] +
+              src_ptr[stride * 3 + 1] + src_ptr[stride * 3 + 2] +
+              src_ptr[stride * 3 + 3] + 8) >>
+             4;
     dst[1] = (src_ptr[4] + src_ptr[5] + src_ptr[6] + src_ptr[7] +
-             src_ptr[stride + 4] + src_ptr[stride + 5] +
-             src_ptr[stride + 6] + src_ptr[stride + 7] +
-             src_ptr[stride * 2 + 4] + src_ptr[stride * 2 + 5] +
-             src_ptr[stride * 2 + 6] + src_ptr[stride * 2 + 7] +
-             src_ptr[stride * 3 + 4] + src_ptr[stride * 3 + 5] +
-             src_ptr[stride * 3 + 6] + src_ptr[stride * 3 + 7] +
-             8) >> 4;
+              src_ptr[stride + 4] + src_ptr[stride + 5] + src_ptr[stride + 6] +
+              src_ptr[stride + 7] + src_ptr[stride * 2 + 4] +
+              src_ptr[stride * 2 + 5] + src_ptr[stride * 2 + 6] +
+              src_ptr[stride * 2 + 7] + src_ptr[stride * 3 + 4] +
+              src_ptr[stride * 3 + 5] + src_ptr[stride * 3 + 6] +
+              src_ptr[stride * 3 + 7] + 8) >>
+             4;
     dst += 2;
     src_ptr += 8;
   }
   if (dst_width & 1) {
     dst[0] = (src_ptr[0] + src_ptr[1] + src_ptr[2] + src_ptr[3] +
-             src_ptr[stride + 0] + src_ptr[stride + 1] +
-             src_ptr[stride + 2] + src_ptr[stride + 3] +
-             src_ptr[stride * 2 + 0] + src_ptr[stride * 2 + 1] +
-             src_ptr[stride * 2 + 2] + src_ptr[stride * 2 + 3] +
-             src_ptr[stride * 3 + 0] + src_ptr[stride * 3 + 1] +
-             src_ptr[stride * 3 + 2] + src_ptr[stride * 3 + 3] +
-             8) >> 4;
+              src_ptr[stride + 0] + src_ptr[stride + 1] + src_ptr[stride + 2] +
+              src_ptr[stride + 3] + src_ptr[stride * 2 + 0] +
+              src_ptr[stride * 2 + 1] + src_ptr[stride * 2 + 2] +
+              src_ptr[stride * 2 + 3] + src_ptr[stride * 3 + 0] +
+              src_ptr[stride * 3 + 1] + src_ptr[stride * 3 + 2] +
+              src_ptr[stride * 3 + 3] + 8) >>
+             4;
   }
 }
 
-void ScaleRowDown4Box_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                           uint16* dst, int dst_width) {
+void ScaleRowDown4Box_16_C(const uint16_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint16_t* dst,
+                           int dst_width) {
   intptr_t stride = src_stride;
   int x;
   for (x = 0; x < dst_width - 1; x += 2) {
     dst[0] = (src_ptr[0] + src_ptr[1] + src_ptr[2] + src_ptr[3] +
-             src_ptr[stride + 0] + src_ptr[stride + 1] +
-             src_ptr[stride + 2] + src_ptr[stride + 3] +
-             src_ptr[stride * 2 + 0] + src_ptr[stride * 2 + 1] +
-             src_ptr[stride * 2 + 2] + src_ptr[stride * 2 + 3] +
-             src_ptr[stride * 3 + 0] + src_ptr[stride * 3 + 1] +
-             src_ptr[stride * 3 + 2] + src_ptr[stride * 3 + 3] +
-             8) >> 4;
+              src_ptr[stride + 0] + src_ptr[stride + 1] + src_ptr[stride + 2] +
+              src_ptr[stride + 3] + src_ptr[stride * 2 + 0] +
+              src_ptr[stride * 2 + 1] + src_ptr[stride * 2 + 2] +
+              src_ptr[stride * 2 + 3] + src_ptr[stride * 3 + 0] +
+              src_ptr[stride * 3 + 1] + src_ptr[stride * 3 + 2] +
+              src_ptr[stride * 3 + 3] + 8) >>
+             4;
     dst[1] = (src_ptr[4] + src_ptr[5] + src_ptr[6] + src_ptr[7] +
-             src_ptr[stride + 4] + src_ptr[stride + 5] +
-             src_ptr[stride + 6] + src_ptr[stride + 7] +
-             src_ptr[stride * 2 + 4] + src_ptr[stride * 2 + 5] +
-             src_ptr[stride * 2 + 6] + src_ptr[stride * 2 + 7] +
-             src_ptr[stride * 3 + 4] + src_ptr[stride * 3 + 5] +
-             src_ptr[stride * 3 + 6] + src_ptr[stride * 3 + 7] +
-             8) >> 4;
+              src_ptr[stride + 4] + src_ptr[stride + 5] + src_ptr[stride + 6] +
+              src_ptr[stride + 7] + src_ptr[stride * 2 + 4] +
+              src_ptr[stride * 2 + 5] + src_ptr[stride * 2 + 6] +
+              src_ptr[stride * 2 + 7] + src_ptr[stride * 3 + 4] +
+              src_ptr[stride * 3 + 5] + src_ptr[stride * 3 + 6] +
+              src_ptr[stride * 3 + 7] + 8) >>
+             4;
     dst += 2;
     src_ptr += 8;
   }
   if (dst_width & 1) {
     dst[0] = (src_ptr[0] + src_ptr[1] + src_ptr[2] + src_ptr[3] +
-             src_ptr[stride + 0] + src_ptr[stride + 1] +
-             src_ptr[stride + 2] + src_ptr[stride + 3] +
-             src_ptr[stride * 2 + 0] + src_ptr[stride * 2 + 1] +
-             src_ptr[stride * 2 + 2] + src_ptr[stride * 2 + 3] +
-             src_ptr[stride * 3 + 0] + src_ptr[stride * 3 + 1] +
-             src_ptr[stride * 3 + 2] + src_ptr[stride * 3 + 3] +
-             8) >> 4;
+              src_ptr[stride + 0] + src_ptr[stride + 1] + src_ptr[stride + 2] +
+              src_ptr[stride + 3] + src_ptr[stride * 2 + 0] +
+              src_ptr[stride * 2 + 1] + src_ptr[stride * 2 + 2] +
+              src_ptr[stride * 2 + 3] + src_ptr[stride * 3 + 0] +
+              src_ptr[stride * 3 + 1] + src_ptr[stride * 3 + 2] +
+              src_ptr[stride * 3 + 3] + 8) >>
+             4;
   }
 }
 
-void ScaleRowDown34_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                      uint8* dst, int dst_width) {
+void ScaleRowDown34_C(const uint8_t* src_ptr,
+                      ptrdiff_t src_stride,
+                      uint8_t* dst,
+                      int dst_width) {
   int x;
+  (void)src_stride;
   assert((dst_width % 3 == 0) && (dst_width > 0));
   for (x = 0; x < dst_width; x += 3) {
     dst[0] = src_ptr[0];
@@ -255,9 +286,12 @@
   }
 }
 
-void ScaleRowDown34_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                         uint16* dst, int dst_width) {
+void ScaleRowDown34_16_C(const uint16_t* src_ptr,
+                         ptrdiff_t src_stride,
+                         uint16_t* dst,
+                         int dst_width) {
   int x;
+  (void)src_stride;
   assert((dst_width % 3 == 0) && (dst_width > 0));
   for (x = 0; x < dst_width; x += 3) {
     dst[0] = src_ptr[0];
@@ -269,19 +303,21 @@
 }
 
 // Filter rows 0 and 1 together, 3 : 1
-void ScaleRowDown34_0_Box_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* d, int dst_width) {
-  const uint8* s = src_ptr;
-  const uint8* t = src_ptr + src_stride;
+void ScaleRowDown34_0_Box_C(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* d,
+                            int dst_width) {
+  const uint8_t* s = src_ptr;
+  const uint8_t* t = src_ptr + src_stride;
   int x;
   assert((dst_width % 3 == 0) && (dst_width > 0));
   for (x = 0; x < dst_width; x += 3) {
-    uint8 a0 = (s[0] * 3 + s[1] * 1 + 2) >> 2;
-    uint8 a1 = (s[1] * 1 + s[2] * 1 + 1) >> 1;
-    uint8 a2 = (s[2] * 1 + s[3] * 3 + 2) >> 2;
-    uint8 b0 = (t[0] * 3 + t[1] * 1 + 2) >> 2;
-    uint8 b1 = (t[1] * 1 + t[2] * 1 + 1) >> 1;
-    uint8 b2 = (t[2] * 1 + t[3] * 3 + 2) >> 2;
+    uint8_t a0 = (s[0] * 3 + s[1] * 1 + 2) >> 2;
+    uint8_t a1 = (s[1] * 1 + s[2] * 1 + 1) >> 1;
+    uint8_t a2 = (s[2] * 1 + s[3] * 3 + 2) >> 2;
+    uint8_t b0 = (t[0] * 3 + t[1] * 1 + 2) >> 2;
+    uint8_t b1 = (t[1] * 1 + t[2] * 1 + 1) >> 1;
+    uint8_t b2 = (t[2] * 1 + t[3] * 3 + 2) >> 2;
     d[0] = (a0 * 3 + b0 + 2) >> 2;
     d[1] = (a1 * 3 + b1 + 2) >> 2;
     d[2] = (a2 * 3 + b2 + 2) >> 2;
@@ -291,19 +327,21 @@
   }
 }
 
-void ScaleRowDown34_0_Box_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                               uint16* d, int dst_width) {
-  const uint16* s = src_ptr;
-  const uint16* t = src_ptr + src_stride;
+void ScaleRowDown34_0_Box_16_C(const uint16_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint16_t* d,
+                               int dst_width) {
+  const uint16_t* s = src_ptr;
+  const uint16_t* t = src_ptr + src_stride;
   int x;
   assert((dst_width % 3 == 0) && (dst_width > 0));
   for (x = 0; x < dst_width; x += 3) {
-    uint16 a0 = (s[0] * 3 + s[1] * 1 + 2) >> 2;
-    uint16 a1 = (s[1] * 1 + s[2] * 1 + 1) >> 1;
-    uint16 a2 = (s[2] * 1 + s[3] * 3 + 2) >> 2;
-    uint16 b0 = (t[0] * 3 + t[1] * 1 + 2) >> 2;
-    uint16 b1 = (t[1] * 1 + t[2] * 1 + 1) >> 1;
-    uint16 b2 = (t[2] * 1 + t[3] * 3 + 2) >> 2;
+    uint16_t a0 = (s[0] * 3 + s[1] * 1 + 2) >> 2;
+    uint16_t a1 = (s[1] * 1 + s[2] * 1 + 1) >> 1;
+    uint16_t a2 = (s[2] * 1 + s[3] * 3 + 2) >> 2;
+    uint16_t b0 = (t[0] * 3 + t[1] * 1 + 2) >> 2;
+    uint16_t b1 = (t[1] * 1 + t[2] * 1 + 1) >> 1;
+    uint16_t b2 = (t[2] * 1 + t[3] * 3 + 2) >> 2;
     d[0] = (a0 * 3 + b0 + 2) >> 2;
     d[1] = (a1 * 3 + b1 + 2) >> 2;
     d[2] = (a2 * 3 + b2 + 2) >> 2;
@@ -314,19 +352,21 @@
 }
 
 // Filter rows 1 and 2 together, 1 : 1
-void ScaleRowDown34_1_Box_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* d, int dst_width) {
-  const uint8* s = src_ptr;
-  const uint8* t = src_ptr + src_stride;
+void ScaleRowDown34_1_Box_C(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* d,
+                            int dst_width) {
+  const uint8_t* s = src_ptr;
+  const uint8_t* t = src_ptr + src_stride;
   int x;
   assert((dst_width % 3 == 0) && (dst_width > 0));
   for (x = 0; x < dst_width; x += 3) {
-    uint8 a0 = (s[0] * 3 + s[1] * 1 + 2) >> 2;
-    uint8 a1 = (s[1] * 1 + s[2] * 1 + 1) >> 1;
-    uint8 a2 = (s[2] * 1 + s[3] * 3 + 2) >> 2;
-    uint8 b0 = (t[0] * 3 + t[1] * 1 + 2) >> 2;
-    uint8 b1 = (t[1] * 1 + t[2] * 1 + 1) >> 1;
-    uint8 b2 = (t[2] * 1 + t[3] * 3 + 2) >> 2;
+    uint8_t a0 = (s[0] * 3 + s[1] * 1 + 2) >> 2;
+    uint8_t a1 = (s[1] * 1 + s[2] * 1 + 1) >> 1;
+    uint8_t a2 = (s[2] * 1 + s[3] * 3 + 2) >> 2;
+    uint8_t b0 = (t[0] * 3 + t[1] * 1 + 2) >> 2;
+    uint8_t b1 = (t[1] * 1 + t[2] * 1 + 1) >> 1;
+    uint8_t b2 = (t[2] * 1 + t[3] * 3 + 2) >> 2;
     d[0] = (a0 + b0 + 1) >> 1;
     d[1] = (a1 + b1 + 1) >> 1;
     d[2] = (a2 + b2 + 1) >> 1;
@@ -336,19 +376,21 @@
   }
 }
 
-void ScaleRowDown34_1_Box_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                               uint16* d, int dst_width) {
-  const uint16* s = src_ptr;
-  const uint16* t = src_ptr + src_stride;
+void ScaleRowDown34_1_Box_16_C(const uint16_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint16_t* d,
+                               int dst_width) {
+  const uint16_t* s = src_ptr;
+  const uint16_t* t = src_ptr + src_stride;
   int x;
   assert((dst_width % 3 == 0) && (dst_width > 0));
   for (x = 0; x < dst_width; x += 3) {
-    uint16 a0 = (s[0] * 3 + s[1] * 1 + 2) >> 2;
-    uint16 a1 = (s[1] * 1 + s[2] * 1 + 1) >> 1;
-    uint16 a2 = (s[2] * 1 + s[3] * 3 + 2) >> 2;
-    uint16 b0 = (t[0] * 3 + t[1] * 1 + 2) >> 2;
-    uint16 b1 = (t[1] * 1 + t[2] * 1 + 1) >> 1;
-    uint16 b2 = (t[2] * 1 + t[3] * 3 + 2) >> 2;
+    uint16_t a0 = (s[0] * 3 + s[1] * 1 + 2) >> 2;
+    uint16_t a1 = (s[1] * 1 + s[2] * 1 + 1) >> 1;
+    uint16_t a2 = (s[2] * 1 + s[3] * 3 + 2) >> 2;
+    uint16_t b0 = (t[0] * 3 + t[1] * 1 + 2) >> 2;
+    uint16_t b1 = (t[1] * 1 + t[2] * 1 + 1) >> 1;
+    uint16_t b2 = (t[2] * 1 + t[3] * 3 + 2) >> 2;
     d[0] = (a0 + b0 + 1) >> 1;
     d[1] = (a1 + b1 + 1) >> 1;
     d[2] = (a2 + b2 + 1) >> 1;
@@ -359,8 +401,11 @@
 }
 
 // Scales a single row of pixels using point sampling.
-void ScaleCols_C(uint8* dst_ptr, const uint8* src_ptr,
-                 int dst_width, int x, int dx) {
+void ScaleCols_C(uint8_t* dst_ptr,
+                 const uint8_t* src_ptr,
+                 int dst_width,
+                 int x,
+                 int dx) {
   int j;
   for (j = 0; j < dst_width - 1; j += 2) {
     dst_ptr[0] = src_ptr[x >> 16];
@@ -374,8 +419,11 @@
   }
 }
 
-void ScaleCols_16_C(uint16* dst_ptr, const uint16* src_ptr,
-                    int dst_width, int x, int dx) {
+void ScaleCols_16_C(uint16_t* dst_ptr,
+                    const uint16_t* src_ptr,
+                    int dst_width,
+                    int x,
+                    int dx) {
   int j;
   for (j = 0; j < dst_width - 1; j += 2) {
     dst_ptr[0] = src_ptr[x >> 16];
@@ -390,9 +438,14 @@
 }
 
 // Scales a single row of pixels up by 2x using point sampling.
-void ScaleColsUp2_C(uint8* dst_ptr, const uint8* src_ptr,
-                    int dst_width, int x, int dx) {
+void ScaleColsUp2_C(uint8_t* dst_ptr,
+                    const uint8_t* src_ptr,
+                    int dst_width,
+                    int x,
+                    int dx) {
   int j;
+  (void)x;
+  (void)dx;
   for (j = 0; j < dst_width - 1; j += 2) {
     dst_ptr[1] = dst_ptr[0] = src_ptr[0];
     src_ptr += 1;
@@ -403,9 +456,14 @@
   }
 }
 
-void ScaleColsUp2_16_C(uint16* dst_ptr, const uint16* src_ptr,
-                       int dst_width, int x, int dx) {
+void ScaleColsUp2_16_C(uint16_t* dst_ptr,
+                       const uint16_t* src_ptr,
+                       int dst_width,
+                       int x,
+                       int dx) {
   int j;
+  (void)x;
+  (void)dx;
   for (j = 0; j < dst_width - 1; j += 2) {
     dst_ptr[1] = dst_ptr[0] = src_ptr[0];
     src_ptr += 1;
@@ -418,16 +476,19 @@
 
 // (1-f)a + fb can be replaced with a + f(b-a)
 #if defined(__arm__) || defined(__aarch64__)
-#define BLENDER(a, b, f) (uint8)((int)(a) + \
-    ((((int)((f)) * ((int)(b) - (int)(a))) + 0x8000) >> 16))
+#define BLENDER(a, b, f) \
+  (uint8_t)((int)(a) + ((((int)((f)) * ((int)(b) - (int)(a))) + 0x8000) >> 16))
 #else
-// inteluses 7 bit math with rounding.
-#define BLENDER(a, b, f) (uint8)((int)(a) + \
-    (((int)((f) >> 9) * ((int)(b) - (int)(a)) + 0x40) >> 7))
+// Intel uses 7 bit math with rounding.
+#define BLENDER(a, b, f) \
+  (uint8_t)((int)(a) + (((int)((f) >> 9) * ((int)(b) - (int)(a)) + 0x40) >> 7))
 #endif
 
-void ScaleFilterCols_C(uint8* dst_ptr, const uint8* src_ptr,
-                       int dst_width, int x, int dx) {
+void ScaleFilterCols_C(uint8_t* dst_ptr,
+                       const uint8_t* src_ptr,
+                       int dst_width,
+                       int x,
+                       int dx) {
   int j;
   for (j = 0; j < dst_width - 1; j += 2) {
     int xi = x >> 16;
@@ -450,12 +511,15 @@
   }
 }
 
-void ScaleFilterCols64_C(uint8* dst_ptr, const uint8* src_ptr,
-                         int dst_width, int x32, int dx) {
-  int64 x = (int64)(x32);
+void ScaleFilterCols64_C(uint8_t* dst_ptr,
+                         const uint8_t* src_ptr,
+                         int dst_width,
+                         int x32,
+                         int dx) {
+  int64_t x = (int64_t)(x32);
   int j;
   for (j = 0; j < dst_width - 1; j += 2) {
-    int64 xi = x >> 16;
+    int64_t xi = x >> 16;
     int a = src_ptr[xi];
     int b = src_ptr[xi + 1];
     dst_ptr[0] = BLENDER(a, b, x & 0xffff);
@@ -468,7 +532,7 @@
     dst_ptr += 2;
   }
   if (dst_width & 1) {
-    int64 xi = x >> 16;
+    int64_t xi = x >> 16;
     int a = src_ptr[xi];
     int b = src_ptr[xi + 1];
     dst_ptr[0] = BLENDER(a, b, x & 0xffff);
@@ -476,12 +540,15 @@
 }
 #undef BLENDER
 
-// Same as 8 bit arm blender but return is cast to uint16
-#define BLENDER(a, b, f) (uint16)((int)(a) + \
-    ((((int)((f)) * ((int)(b) - (int)(a))) + 0x8000) >> 16))
+// Same as 8 bit arm blender but return is cast to uint16_t
+#define BLENDER(a, b, f) \
+  (uint16_t)((int)(a) + ((((int)((f)) * ((int)(b) - (int)(a))) + 0x8000) >> 16))
 
-void ScaleFilterCols_16_C(uint16* dst_ptr, const uint16* src_ptr,
-                       int dst_width, int x, int dx) {
+void ScaleFilterCols_16_C(uint16_t* dst_ptr,
+                          const uint16_t* src_ptr,
+                          int dst_width,
+                          int x,
+                          int dx) {
   int j;
   for (j = 0; j < dst_width - 1; j += 2) {
     int xi = x >> 16;
@@ -504,12 +571,15 @@
   }
 }
 
-void ScaleFilterCols64_16_C(uint16* dst_ptr, const uint16* src_ptr,
-                         int dst_width, int x32, int dx) {
-  int64 x = (int64)(x32);
+void ScaleFilterCols64_16_C(uint16_t* dst_ptr,
+                            const uint16_t* src_ptr,
+                            int dst_width,
+                            int x32,
+                            int dx) {
+  int64_t x = (int64_t)(x32);
   int j;
   for (j = 0; j < dst_width - 1; j += 2) {
-    int64 xi = x >> 16;
+    int64_t xi = x >> 16;
     int a = src_ptr[xi];
     int b = src_ptr[xi + 1];
     dst_ptr[0] = BLENDER(a, b, x & 0xffff);
@@ -522,7 +592,7 @@
     dst_ptr += 2;
   }
   if (dst_width & 1) {
-    int64 xi = x >> 16;
+    int64_t xi = x >> 16;
     int a = src_ptr[xi];
     int b = src_ptr[xi + 1];
     dst_ptr[0] = BLENDER(a, b, x & 0xffff);
@@ -530,9 +600,12 @@
 }
 #undef BLENDER
 
-void ScaleRowDown38_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                      uint8* dst, int dst_width) {
+void ScaleRowDown38_C(const uint8_t* src_ptr,
+                      ptrdiff_t src_stride,
+                      uint8_t* dst,
+                      int dst_width) {
   int x;
+  (void)src_stride;
   assert(dst_width % 3 == 0);
   for (x = 0; x < dst_width; x += 3) {
     dst[0] = src_ptr[0];
@@ -543,9 +616,12 @@
   }
 }
 
-void ScaleRowDown38_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                         uint16* dst, int dst_width) {
+void ScaleRowDown38_16_C(const uint16_t* src_ptr,
+                         ptrdiff_t src_stride,
+                         uint16_t* dst,
+                         int dst_width) {
   int x;
+  (void)src_stride;
   assert(dst_width % 3 == 0);
   for (x = 0; x < dst_width; x += 3) {
     dst[0] = src_ptr[0];
@@ -557,100 +633,118 @@
 }
 
 // 8x3 -> 3x1
-void ScaleRowDown38_3_Box_C(const uint8* src_ptr,
+void ScaleRowDown38_3_Box_C(const uint8_t* src_ptr,
                             ptrdiff_t src_stride,
-                            uint8* dst_ptr, int dst_width) {
+                            uint8_t* dst_ptr,
+                            int dst_width) {
   intptr_t stride = src_stride;
   int i;
   assert((dst_width % 3 == 0) && (dst_width > 0));
   for (i = 0; i < dst_width; i += 3) {
-    dst_ptr[0] = (src_ptr[0] + src_ptr[1] + src_ptr[2] +
-        src_ptr[stride + 0] + src_ptr[stride + 1] +
-        src_ptr[stride + 2] + src_ptr[stride * 2 + 0] +
-        src_ptr[stride * 2 + 1] + src_ptr[stride * 2 + 2]) *
-        (65536 / 9) >> 16;
-    dst_ptr[1] = (src_ptr[3] + src_ptr[4] + src_ptr[5] +
-        src_ptr[stride + 3] + src_ptr[stride + 4] +
-        src_ptr[stride + 5] + src_ptr[stride * 2 + 3] +
-        src_ptr[stride * 2 + 4] + src_ptr[stride * 2 + 5]) *
-        (65536 / 9) >> 16;
-    dst_ptr[2] = (src_ptr[6] + src_ptr[7] +
-        src_ptr[stride + 6] + src_ptr[stride + 7] +
-        src_ptr[stride * 2 + 6] + src_ptr[stride * 2 + 7]) *
-        (65536 / 6) >> 16;
+    dst_ptr[0] =
+        (src_ptr[0] + src_ptr[1] + src_ptr[2] + src_ptr[stride + 0] +
+         src_ptr[stride + 1] + src_ptr[stride + 2] + src_ptr[stride * 2 + 0] +
+         src_ptr[stride * 2 + 1] + src_ptr[stride * 2 + 2]) *
+            (65536 / 9) >>
+        16;
+    dst_ptr[1] =
+        (src_ptr[3] + src_ptr[4] + src_ptr[5] + src_ptr[stride + 3] +
+         src_ptr[stride + 4] + src_ptr[stride + 5] + src_ptr[stride * 2 + 3] +
+         src_ptr[stride * 2 + 4] + src_ptr[stride * 2 + 5]) *
+            (65536 / 9) >>
+        16;
+    dst_ptr[2] =
+        (src_ptr[6] + src_ptr[7] + src_ptr[stride + 6] + src_ptr[stride + 7] +
+         src_ptr[stride * 2 + 6] + src_ptr[stride * 2 + 7]) *
+            (65536 / 6) >>
+        16;
     src_ptr += 8;
     dst_ptr += 3;
   }
 }
 
-void ScaleRowDown38_3_Box_16_C(const uint16* src_ptr,
+void ScaleRowDown38_3_Box_16_C(const uint16_t* src_ptr,
                                ptrdiff_t src_stride,
-                               uint16* dst_ptr, int dst_width) {
+                               uint16_t* dst_ptr,
+                               int dst_width) {
   intptr_t stride = src_stride;
   int i;
   assert((dst_width % 3 == 0) && (dst_width > 0));
   for (i = 0; i < dst_width; i += 3) {
-    dst_ptr[0] = (src_ptr[0] + src_ptr[1] + src_ptr[2] +
-        src_ptr[stride + 0] + src_ptr[stride + 1] +
-        src_ptr[stride + 2] + src_ptr[stride * 2 + 0] +
-        src_ptr[stride * 2 + 1] + src_ptr[stride * 2 + 2]) *
-        (65536 / 9) >> 16;
-    dst_ptr[1] = (src_ptr[3] + src_ptr[4] + src_ptr[5] +
-        src_ptr[stride + 3] + src_ptr[stride + 4] +
-        src_ptr[stride + 5] + src_ptr[stride * 2 + 3] +
-        src_ptr[stride * 2 + 4] + src_ptr[stride * 2 + 5]) *
-        (65536 / 9) >> 16;
-    dst_ptr[2] = (src_ptr[6] + src_ptr[7] +
-        src_ptr[stride + 6] + src_ptr[stride + 7] +
-        src_ptr[stride * 2 + 6] + src_ptr[stride * 2 + 7]) *
-        (65536 / 6) >> 16;
+    dst_ptr[0] =
+        (src_ptr[0] + src_ptr[1] + src_ptr[2] + src_ptr[stride + 0] +
+         src_ptr[stride + 1] + src_ptr[stride + 2] + src_ptr[stride * 2 + 0] +
+         src_ptr[stride * 2 + 1] + src_ptr[stride * 2 + 2]) *
+            (65536 / 9) >>
+        16;
+    dst_ptr[1] =
+        (src_ptr[3] + src_ptr[4] + src_ptr[5] + src_ptr[stride + 3] +
+         src_ptr[stride + 4] + src_ptr[stride + 5] + src_ptr[stride * 2 + 3] +
+         src_ptr[stride * 2 + 4] + src_ptr[stride * 2 + 5]) *
+            (65536 / 9) >>
+        16;
+    dst_ptr[2] =
+        (src_ptr[6] + src_ptr[7] + src_ptr[stride + 6] + src_ptr[stride + 7] +
+         src_ptr[stride * 2 + 6] + src_ptr[stride * 2 + 7]) *
+            (65536 / 6) >>
+        16;
     src_ptr += 8;
     dst_ptr += 3;
   }
 }
 
 // 8x2 -> 3x1
-void ScaleRowDown38_2_Box_C(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst_ptr, int dst_width) {
+void ScaleRowDown38_2_Box_C(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_ptr,
+                            int dst_width) {
   intptr_t stride = src_stride;
   int i;
   assert((dst_width % 3 == 0) && (dst_width > 0));
   for (i = 0; i < dst_width; i += 3) {
-    dst_ptr[0] = (src_ptr[0] + src_ptr[1] + src_ptr[2] +
-        src_ptr[stride + 0] + src_ptr[stride + 1] +
-        src_ptr[stride + 2]) * (65536 / 6) >> 16;
-    dst_ptr[1] = (src_ptr[3] + src_ptr[4] + src_ptr[5] +
-        src_ptr[stride + 3] + src_ptr[stride + 4] +
-        src_ptr[stride + 5]) * (65536 / 6) >> 16;
-    dst_ptr[2] = (src_ptr[6] + src_ptr[7] +
-        src_ptr[stride + 6] + src_ptr[stride + 7]) *
-        (65536 / 4) >> 16;
+    dst_ptr[0] = (src_ptr[0] + src_ptr[1] + src_ptr[2] + src_ptr[stride + 0] +
+                  src_ptr[stride + 1] + src_ptr[stride + 2]) *
+                     (65536 / 6) >>
+                 16;
+    dst_ptr[1] = (src_ptr[3] + src_ptr[4] + src_ptr[5] + src_ptr[stride + 3] +
+                  src_ptr[stride + 4] + src_ptr[stride + 5]) *
+                     (65536 / 6) >>
+                 16;
+    dst_ptr[2] =
+        (src_ptr[6] + src_ptr[7] + src_ptr[stride + 6] + src_ptr[stride + 7]) *
+            (65536 / 4) >>
+        16;
     src_ptr += 8;
     dst_ptr += 3;
   }
 }
 
-void ScaleRowDown38_2_Box_16_C(const uint16* src_ptr, ptrdiff_t src_stride,
-                               uint16* dst_ptr, int dst_width) {
+void ScaleRowDown38_2_Box_16_C(const uint16_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint16_t* dst_ptr,
+                               int dst_width) {
   intptr_t stride = src_stride;
   int i;
   assert((dst_width % 3 == 0) && (dst_width > 0));
   for (i = 0; i < dst_width; i += 3) {
-    dst_ptr[0] = (src_ptr[0] + src_ptr[1] + src_ptr[2] +
-        src_ptr[stride + 0] + src_ptr[stride + 1] +
-        src_ptr[stride + 2]) * (65536 / 6) >> 16;
-    dst_ptr[1] = (src_ptr[3] + src_ptr[4] + src_ptr[5] +
-        src_ptr[stride + 3] + src_ptr[stride + 4] +
-        src_ptr[stride + 5]) * (65536 / 6) >> 16;
-    dst_ptr[2] = (src_ptr[6] + src_ptr[7] +
-        src_ptr[stride + 6] + src_ptr[stride + 7]) *
-        (65536 / 4) >> 16;
+    dst_ptr[0] = (src_ptr[0] + src_ptr[1] + src_ptr[2] + src_ptr[stride + 0] +
+                  src_ptr[stride + 1] + src_ptr[stride + 2]) *
+                     (65536 / 6) >>
+                 16;
+    dst_ptr[1] = (src_ptr[3] + src_ptr[4] + src_ptr[5] + src_ptr[stride + 3] +
+                  src_ptr[stride + 4] + src_ptr[stride + 5]) *
+                     (65536 / 6) >>
+                 16;
+    dst_ptr[2] =
+        (src_ptr[6] + src_ptr[7] + src_ptr[stride + 6] + src_ptr[stride + 7]) *
+            (65536 / 4) >>
+        16;
     src_ptr += 8;
     dst_ptr += 3;
   }
 }
 
-void ScaleAddRow_C(const uint8* src_ptr, uint16* dst_ptr, int src_width) {
+void ScaleAddRow_C(const uint8_t* src_ptr, uint16_t* dst_ptr, int src_width) {
   int x;
   assert(src_width > 0);
   for (x = 0; x < src_width - 1; x += 2) {
@@ -664,7 +758,9 @@
   }
 }
 
-void ScaleAddRow_16_C(const uint16* src_ptr, uint32* dst_ptr, int src_width) {
+void ScaleAddRow_16_C(const uint16_t* src_ptr,
+                      uint32_t* dst_ptr,
+                      int src_width) {
   int x;
   assert(src_width > 0);
   for (x = 0; x < src_width - 1; x += 2) {
@@ -678,13 +774,14 @@
   }
 }
 
-void ScaleARGBRowDown2_C(const uint8* src_argb,
+void ScaleARGBRowDown2_C(const uint8_t* src_argb,
                          ptrdiff_t src_stride,
-                         uint8* dst_argb, int dst_width) {
-  const uint32* src = (const uint32*)(src_argb);
-  uint32* dst = (uint32*)(dst_argb);
-
+                         uint8_t* dst_argb,
+                         int dst_width) {
+  const uint32_t* src = (const uint32_t*)(src_argb);
+  uint32_t* dst = (uint32_t*)(dst_argb);
   int x;
+  (void)src_stride;
   for (x = 0; x < dst_width - 1; x += 2) {
     dst[0] = src[1];
     dst[1] = src[3];
@@ -696,10 +793,12 @@
   }
 }
 
-void ScaleARGBRowDown2Linear_C(const uint8* src_argb,
+void ScaleARGBRowDown2Linear_C(const uint8_t* src_argb,
                                ptrdiff_t src_stride,
-                               uint8* dst_argb, int dst_width) {
+                               uint8_t* dst_argb,
+                               int dst_width) {
   int x;
+  (void)src_stride;
   for (x = 0; x < dst_width; ++x) {
     dst_argb[0] = (src_argb[0] + src_argb[4] + 1) >> 1;
     dst_argb[1] = (src_argb[1] + src_argb[5] + 1) >> 1;
@@ -710,29 +809,37 @@
   }
 }
 
-void ScaleARGBRowDown2Box_C(const uint8* src_argb, ptrdiff_t src_stride,
-                            uint8* dst_argb, int dst_width) {
+void ScaleARGBRowDown2Box_C(const uint8_t* src_argb,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_argb,
+                            int dst_width) {
   int x;
   for (x = 0; x < dst_width; ++x) {
-    dst_argb[0] = (src_argb[0] + src_argb[4] +
-                  src_argb[src_stride] + src_argb[src_stride + 4] + 2) >> 2;
-    dst_argb[1] = (src_argb[1] + src_argb[5] +
-                  src_argb[src_stride + 1] + src_argb[src_stride + 5] + 2) >> 2;
-    dst_argb[2] = (src_argb[2] + src_argb[6] +
-                  src_argb[src_stride + 2] + src_argb[src_stride + 6] + 2) >> 2;
-    dst_argb[3] = (src_argb[3] + src_argb[7] +
-                  src_argb[src_stride + 3] + src_argb[src_stride + 7] + 2) >> 2;
+    dst_argb[0] = (src_argb[0] + src_argb[4] + src_argb[src_stride] +
+                   src_argb[src_stride + 4] + 2) >>
+                  2;
+    dst_argb[1] = (src_argb[1] + src_argb[5] + src_argb[src_stride + 1] +
+                   src_argb[src_stride + 5] + 2) >>
+                  2;
+    dst_argb[2] = (src_argb[2] + src_argb[6] + src_argb[src_stride + 2] +
+                   src_argb[src_stride + 6] + 2) >>
+                  2;
+    dst_argb[3] = (src_argb[3] + src_argb[7] + src_argb[src_stride + 3] +
+                   src_argb[src_stride + 7] + 2) >>
+                  2;
     src_argb += 8;
     dst_argb += 4;
   }
 }
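
Each channel above is a 2x2 box average with round-to-nearest: sum four samples, add half the divisor, shift right 2. A tiny sketch of that per-channel step (the `box2x2` name is mine):

```c
#include <assert.h>
#include <stdint.h>

/* 2x2 box average with round-to-nearest, as used per ARGB channel:
   the +2 is half of the divisor 4, so 0.5 fractions round up. */
static uint8_t box2x2(uint8_t a, uint8_t b, uint8_t c, uint8_t d) {
  return (uint8_t)((a + b + c + d + 2) >> 2);
}
```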
 
-void ScaleARGBRowDownEven_C(const uint8* src_argb, ptrdiff_t src_stride,
+void ScaleARGBRowDownEven_C(const uint8_t* src_argb,
+                            ptrdiff_t src_stride,
                             int src_stepx,
-                            uint8* dst_argb, int dst_width) {
-  const uint32* src = (const uint32*)(src_argb);
-  uint32* dst = (uint32*)(dst_argb);
-
+                            uint8_t* dst_argb,
+                            int dst_width) {
+  const uint32_t* src = (const uint32_t*)(src_argb);
+  uint32_t* dst = (uint32_t*)(dst_argb);
+  (void)src_stride;
   int x;
   for (x = 0; x < dst_width - 1; x += 2) {
     dst[0] = src[0];
@@ -745,30 +852,38 @@
   }
 }
 
-void ScaleARGBRowDownEvenBox_C(const uint8* src_argb,
+void ScaleARGBRowDownEvenBox_C(const uint8_t* src_argb,
                                ptrdiff_t src_stride,
                                int src_stepx,
-                               uint8* dst_argb, int dst_width) {
+                               uint8_t* dst_argb,
+                               int dst_width) {
   int x;
   for (x = 0; x < dst_width; ++x) {
-    dst_argb[0] = (src_argb[0] + src_argb[4] +
-                  src_argb[src_stride] + src_argb[src_stride + 4] + 2) >> 2;
-    dst_argb[1] = (src_argb[1] + src_argb[5] +
-                  src_argb[src_stride + 1] + src_argb[src_stride + 5] + 2) >> 2;
-    dst_argb[2] = (src_argb[2] + src_argb[6] +
-                  src_argb[src_stride + 2] + src_argb[src_stride + 6] + 2) >> 2;
-    dst_argb[3] = (src_argb[3] + src_argb[7] +
-                  src_argb[src_stride + 3] + src_argb[src_stride + 7] + 2) >> 2;
+    dst_argb[0] = (src_argb[0] + src_argb[4] + src_argb[src_stride] +
+                   src_argb[src_stride + 4] + 2) >>
+                  2;
+    dst_argb[1] = (src_argb[1] + src_argb[5] + src_argb[src_stride + 1] +
+                   src_argb[src_stride + 5] + 2) >>
+                  2;
+    dst_argb[2] = (src_argb[2] + src_argb[6] + src_argb[src_stride + 2] +
+                   src_argb[src_stride + 6] + 2) >>
+                  2;
+    dst_argb[3] = (src_argb[3] + src_argb[7] + src_argb[src_stride + 3] +
+                   src_argb[src_stride + 7] + 2) >>
+                  2;
     src_argb += src_stepx * 4;
     dst_argb += 4;
   }
 }
 
 // Scales a single row of pixels using point sampling.
-void ScaleARGBCols_C(uint8* dst_argb, const uint8* src_argb,
-                     int dst_width, int x, int dx) {
-  const uint32* src = (const uint32*)(src_argb);
-  uint32* dst = (uint32*)(dst_argb);
+void ScaleARGBCols_C(uint8_t* dst_argb,
+                     const uint8_t* src_argb,
+                     int dst_width,
+                     int x,
+                     int dx) {
+  const uint32_t* src = (const uint32_t*)(src_argb);
+  uint32_t* dst = (uint32_t*)(dst_argb);
   int j;
   for (j = 0; j < dst_width - 1; j += 2) {
     dst[0] = src[x >> 16];
@@ -782,11 +897,14 @@
   }
 }
 
-void ScaleARGBCols64_C(uint8* dst_argb, const uint8* src_argb,
-                       int dst_width, int x32, int dx) {
-  int64 x = (int64)(x32);
-  const uint32* src = (const uint32*)(src_argb);
-  uint32* dst = (uint32*)(dst_argb);
+void ScaleARGBCols64_C(uint8_t* dst_argb,
+                       const uint8_t* src_argb,
+                       int dst_width,
+                       int x32,
+                       int dx) {
+  int64_t x = (int64_t)(x32);
+  const uint32_t* src = (const uint32_t*)(src_argb);
+  uint32_t* dst = (uint32_t*)(dst_argb);
   int j;
   for (j = 0; j < dst_width - 1; j += 2) {
     dst[0] = src[x >> 16];
@@ -801,11 +919,16 @@
 }
 
 // Scales a single row of pixels up by 2x using point sampling.
-void ScaleARGBColsUp2_C(uint8* dst_argb, const uint8* src_argb,
-                        int dst_width, int x, int dx) {
-  const uint32* src = (const uint32*)(src_argb);
-  uint32* dst = (uint32*)(dst_argb);
+void ScaleARGBColsUp2_C(uint8_t* dst_argb,
+                        const uint8_t* src_argb,
+                        int dst_width,
+                        int x,
+                        int dx) {
+  const uint32_t* src = (const uint32_t*)(src_argb);
+  uint32_t* dst = (uint32_t*)(dst_argb);
   int j;
+  (void)x;
+  (void)dx;
   for (j = 0; j < dst_width - 1; j += 2) {
     dst[1] = dst[0] = src[0];
     src += 1;
@@ -818,23 +941,26 @@
 
 // TODO(fbarchard): Replace 0x7f ^ f with 128-f.  bug=607.
 // Mimics SSSE3 blender
-#define BLENDER1(a, b, f) ((a) * (0x7f ^ f) + (b) * f) >> 7
-#define BLENDERC(a, b, f, s) (uint32)( \
-    BLENDER1(((a) >> s) & 255, ((b) >> s) & 255, f) << s)
-#define BLENDER(a, b, f) \
-    BLENDERC(a, b, f, 24) | BLENDERC(a, b, f, 16) | \
-    BLENDERC(a, b, f, 8) | BLENDERC(a, b, f, 0)
+#define BLENDER1(a, b, f) ((a) * (0x7f ^ f) + (b)*f) >> 7
+#define BLENDERC(a, b, f, s) \
+  (uint32_t)(BLENDER1(((a) >> s) & 255, ((b) >> s) & 255, f) << s)
+#define BLENDER(a, b, f)                                                 \
+  BLENDERC(a, b, f, 24) | BLENDERC(a, b, f, 16) | BLENDERC(a, b, f, 8) | \
+      BLENDERC(a, b, f, 0)
 
-void ScaleARGBFilterCols_C(uint8* dst_argb, const uint8* src_argb,
-                           int dst_width, int x, int dx) {
-  const uint32* src = (const uint32*)(src_argb);
-  uint32* dst = (uint32*)(dst_argb);
+void ScaleARGBFilterCols_C(uint8_t* dst_argb,
+                           const uint8_t* src_argb,
+                           int dst_width,
+                           int x,
+                           int dx) {
+  const uint32_t* src = (const uint32_t*)(src_argb);
+  uint32_t* dst = (uint32_t*)(dst_argb);
   int j;
   for (j = 0; j < dst_width - 1; j += 2) {
     int xi = x >> 16;
     int xf = (x >> 9) & 0x7f;
-    uint32 a = src[xi];
-    uint32 b = src[xi + 1];
+    uint32_t a = src[xi];
+    uint32_t b = src[xi + 1];
     dst[0] = BLENDER(a, b, xf);
     x += dx;
     xi = x >> 16;
@@ -848,23 +974,26 @@
   if (dst_width & 1) {
     int xi = x >> 16;
     int xf = (x >> 9) & 0x7f;
-    uint32 a = src[xi];
-    uint32 b = src[xi + 1];
+    uint32_t a = src[xi];
+    uint32_t b = src[xi + 1];
     dst[0] = BLENDER(a, b, xf);
   }
 }
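
The `BLENDER` macros above blend two pixels using the top 7 bits of the 16.16 x fraction, and `0x7f ^ f` equals `127 - f` for any 7-bit `f`. A scalar sketch of the per-channel step (the `blend1` name is mine) illustrating why the upstream TODO mentions `128 - f`:

```c
#include <assert.h>
#include <stdint.h>

/* Per-channel 7-bit blend mirroring BLENDER1: f is in [0,127], and
   0x7f ^ f == 127 - f, so the two weights sum to 127 rather than
   128.  That is why identical inputs can lose a fraction of a level
   (e.g. a == b == 200 blends to 198) -- the bias the source TODO
   (bug=607) proposes fixing with 128 - f. */
static uint8_t blend1(uint8_t a, uint8_t b, int f) {
  return (uint8_t)((a * (0x7f ^ f) + b * f) >> 7);
}
```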
 
-void ScaleARGBFilterCols64_C(uint8* dst_argb, const uint8* src_argb,
-                             int dst_width, int x32, int dx) {
-  int64 x = (int64)(x32);
-  const uint32* src = (const uint32*)(src_argb);
-  uint32* dst = (uint32*)(dst_argb);
+void ScaleARGBFilterCols64_C(uint8_t* dst_argb,
+                             const uint8_t* src_argb,
+                             int dst_width,
+                             int x32,
+                             int dx) {
+  int64_t x = (int64_t)(x32);
+  const uint32_t* src = (const uint32_t*)(src_argb);
+  uint32_t* dst = (uint32_t*)(dst_argb);
   int j;
   for (j = 0; j < dst_width - 1; j += 2) {
-    int64 xi = x >> 16;
+    int64_t xi = x >> 16;
     int xf = (x >> 9) & 0x7f;
-    uint32 a = src[xi];
-    uint32 b = src[xi + 1];
+    uint32_t a = src[xi];
+    uint32_t b = src[xi + 1];
     dst[0] = BLENDER(a, b, xf);
     x += dx;
     xi = x >> 16;
@@ -876,10 +1005,10 @@
     dst += 2;
   }
   if (dst_width & 1) {
-    int64 xi = x >> 16;
+    int64_t xi = x >> 16;
     int xf = (x >> 9) & 0x7f;
-    uint32 a = src[xi];
-    uint32 b = src[xi + 1];
+    uint32_t a = src[xi];
+    uint32_t b = src[xi + 1];
     dst[0] = BLENDER(a, b, xf);
   }
 }
@@ -889,16 +1018,22 @@
 
 // Scale plane vertically with bilinear interpolation.
 void ScalePlaneVertical(int src_height,
-                        int dst_width, int dst_height,
-                        int src_stride, int dst_stride,
-                        const uint8* src_argb, uint8* dst_argb,
-                        int x, int y, int dy,
-                        int bpp, enum FilterMode filtering) {
+                        int dst_width,
+                        int dst_height,
+                        int src_stride,
+                        int dst_stride,
+                        const uint8_t* src_argb,
+                        uint8_t* dst_argb,
+                        int x,
+                        int y,
+                        int dy,
+                        int bpp,
+                        enum FilterMode filtering) {
   // TODO(fbarchard): Allow higher bpp.
   int dst_width_bytes = dst_width * bpp;
-  void (*InterpolateRow)(uint8* dst_argb, const uint8* src_argb,
-      ptrdiff_t src_stride, int dst_width, int source_y_fraction) =
-      InterpolateRow_C;
+  void (*InterpolateRow)(uint8_t * dst_argb, const uint8_t* src_argb,
+                         ptrdiff_t src_stride, int dst_width,
+                         int source_y_fraction) = InterpolateRow_C;
   const int max_y = (src_height > 1) ? ((src_height - 1) << 16) - 1 : 0;
   int j;
   assert(bpp >= 1 && bpp <= 4);
@@ -930,13 +1065,11 @@
     }
   }
 #endif
-#if defined(HAS_INTERPOLATEROW_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) &&
-      IS_ALIGNED(src_argb, 4) && IS_ALIGNED(src_stride, 4) &&
-      IS_ALIGNED(dst_argb, 4) && IS_ALIGNED(dst_stride, 4)) {
-    InterpolateRow = InterpolateRow_Any_DSPR2;
-    if (IS_ALIGNED(dst_width_bytes, 4)) {
-      InterpolateRow = InterpolateRow_DSPR2;
+#if defined(HAS_INTERPOLATEROW_MSA)
+  if (TestCpuFlag(kCpuHasMSA)) {
+    InterpolateRow = InterpolateRow_Any_MSA;
+    if (IS_ALIGNED(dst_width_bytes, 32)) {
+      InterpolateRow = InterpolateRow_MSA;
     }
   }
 #endif
@@ -948,23 +1081,29 @@
     }
     yi = y >> 16;
     yf = filtering ? ((y >> 8) & 255) : 0;
-    InterpolateRow(dst_argb, src_argb + yi * src_stride,
-                   src_stride, dst_width_bytes, yf);
+    InterpolateRow(dst_argb, src_argb + yi * src_stride, src_stride,
+                   dst_width_bytes, yf);
     dst_argb += dst_stride;
     y += dy;
   }
 }
 void ScalePlaneVertical_16(int src_height,
-                           int dst_width, int dst_height,
-                           int src_stride, int dst_stride,
-                           const uint16* src_argb, uint16* dst_argb,
-                           int x, int y, int dy,
-                           int wpp, enum FilterMode filtering) {
+                           int dst_width,
+                           int dst_height,
+                           int src_stride,
+                           int dst_stride,
+                           const uint16_t* src_argb,
+                           uint16_t* dst_argb,
+                           int x,
+                           int y,
+                           int dy,
+                           int wpp,
+                           enum FilterMode filtering) {
   // TODO(fbarchard): Allow higher wpp.
   int dst_width_words = dst_width * wpp;
-  void (*InterpolateRow)(uint16* dst_argb, const uint16* src_argb,
-      ptrdiff_t src_stride, int dst_width, int source_y_fraction) =
-      InterpolateRow_16_C;
+  void (*InterpolateRow)(uint16_t * dst_argb, const uint16_t* src_argb,
+                         ptrdiff_t src_stride, int dst_width,
+                         int source_y_fraction) = InterpolateRow_16_C;
   const int max_y = (src_height > 1) ? ((src_height - 1) << 16) - 1 : 0;
   int j;
   assert(wpp >= 1 && wpp <= 2);
@@ -1004,16 +1143,6 @@
     }
   }
 #endif
-#if defined(HAS_INTERPOLATEROW_16_DSPR2)
-  if (TestCpuFlag(kCpuHasDSPR2) &&
-      IS_ALIGNED(src_argb, 4) && IS_ALIGNED(src_stride, 4) &&
-      IS_ALIGNED(dst_argb, 4) && IS_ALIGNED(dst_stride, 4)) {
-    InterpolateRow = InterpolateRow_Any_16_DSPR2;
-    if (IS_ALIGNED(dst_width_bytes, 4)) {
-      InterpolateRow = InterpolateRow_16_DSPR2;
-    }
-  }
-#endif
   for (j = 0; j < dst_height; ++j) {
     int yi;
     int yf;
@@ -1022,16 +1151,18 @@
     }
     yi = y >> 16;
     yf = filtering ? ((y >> 8) & 255) : 0;
-    InterpolateRow(dst_argb, src_argb + yi * src_stride,
-                   src_stride, dst_width_words, yf);
+    InterpolateRow(dst_argb, src_argb + yi * src_stride, src_stride,
+                   dst_width_words, yf);
     dst_argb += dst_stride;
     y += dy;
   }
 }
 
 // Simplify the filtering based on scale factors.
-enum FilterMode ScaleFilterReduce(int src_width, int src_height,
-                                  int dst_width, int dst_height,
+enum FilterMode ScaleFilterReduce(int src_width,
+                                  int src_height,
+                                  int dst_width,
+                                  int dst_height,
                                   enum FilterMode filtering) {
   if (src_width < 0) {
     src_width = -src_width;
@@ -1073,22 +1204,26 @@
 
 // Divide num by div and return as 16.16 fixed point result.
 int FixedDiv_C(int num, int div) {
-  return (int)(((int64)(num) << 16) / div);
+  return (int)(((int64_t)(num) << 16) / div);
 }
 
 // Divide num by div and return as 16.16 fixed point result.
 int FixedDiv1_C(int num, int div) {
-  return (int)((((int64)(num) << 16) - 0x00010001) /
-                          (div - 1));
+  return (int)((((int64_t)(num) << 16) - 0x00010001) / (div - 1));
 }
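
These helpers produce the 16.16 fixed-point step values used throughout the scaler: the numerator is shifted up 16 bits before the integer divide so 16 fractional bits survive. A self-contained sketch of both variants (function names lowercased by me):

```c
#include <assert.h>
#include <stdint.h>

/* 16.16 fixed-point quotient, as in FixedDiv_C: num / div with 16
   fractional bits.  The cast to int64_t keeps num << 16 from
   overflowing 32 bits. */
static int fixed_div(int num, int div) {
  return (int)(((int64_t)num << 16) / div);
}

/* FixedDiv1_C variant: subtracting 0x00010001 (1.0 plus one ulp in
   16.16) and dividing by div - 1 maps the first source sample to the
   first destination sample when upsampling. */
static int fixed_div1(int num, int div) {
  return (int)((((int64_t)num << 16) - 0x00010001) / (div - 1));
}
```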
 
 #define CENTERSTART(dx, s) (dx < 0) ? -((-dx >> 1) + s) : ((dx >> 1) + s)
 
 // Compute slope values for stepping.
-void ScaleSlope(int src_width, int src_height,
-                int dst_width, int dst_height,
+void ScaleSlope(int src_width,
+                int src_height,
+                int dst_width,
+                int dst_height,
                 enum FilterMode filtering,
-                int* x, int* y, int* dx, int* dy) {
+                int* x,
+                int* y,
+                int* dx,
+                int* dy) {
   assert(x != NULL);
   assert(y != NULL);
   assert(dx != NULL);
@@ -1120,7 +1255,7 @@
       *x = 0;
     }
     if (dst_height <= src_height) {
-      *dy = FixedDiv(src_height,  dst_height);
+      *dy = FixedDiv(src_height, dst_height);
       *y = CENTERSTART(*dy, -32768);  // Subtract 0.5 (32768) to center filter.
     } else if (dst_height > 1) {
       *dy = FixedDiv1(src_height, dst_height);
@@ -1153,6 +1288,35 @@
 }
 #undef CENTERSTART
 
+// Read 8x2 upsample with filtering and write 16x1.
+// actually reads an extra pixel, so 9x2.
+void ScaleRowUp2_16_C(const uint16_t* src_ptr,
+                      ptrdiff_t src_stride,
+                      uint16_t* dst,
+                      int dst_width) {
+  const uint16_t* src2 = src_ptr + src_stride;
+
+  int x;
+  for (x = 0; x < dst_width - 1; x += 2) {
+    uint16_t p0 = src_ptr[0];
+    uint16_t p1 = src_ptr[1];
+    uint16_t p2 = src2[0];
+    uint16_t p3 = src2[1];
+    dst[0] = (p0 * 9 + p1 * 3 + p2 * 3 + p3 + 8) >> 4;
+    dst[1] = (p0 * 3 + p1 * 9 + p2 + p3 * 3 + 8) >> 4;
+    ++src_ptr;
+    ++src2;
+    dst += 2;
+  }
+  if (dst_width & 1) {
+    uint16_t p0 = src_ptr[0];
+    uint16_t p1 = src_ptr[1];
+    uint16_t p2 = src2[0];
+    uint16_t p3 = src2[1];
+    dst[0] = (p0 * 9 + p1 * 3 + p2 * 3 + p3 + 8) >> 4;
+  }
+}
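
The new `ScaleRowUp2_16_C` above weights its four source pixels 9/3/3/1 (summing to 16, the bilinear kernel for quarter-pixel sample centers), adds 8 to round, and shifts right 4. A sketch of the two output taps (the `up2_even`/`up2_odd` names are mine):

```c
#include <assert.h>
#include <stdint.h>

/* One output pair of the 2x bilinear upsample: each destination pixel
   sits a quarter pixel from its nearest source pixel, giving weights
   (3/4 * 3/4), (3/4 * 1/4), (1/4 * 3/4), (1/4 * 1/4) scaled to
   9, 3, 3, 1 out of 16.  The +8 rounds before the >>4 divide. */
static uint16_t up2_even(uint16_t p0, uint16_t p1, uint16_t p2,
                         uint16_t p3) {
  return (uint16_t)((p0 * 9 + p1 * 3 + p2 * 3 + p3 + 8) >> 4);
}

static uint16_t up2_odd(uint16_t p0, uint16_t p1, uint16_t p2,
                        uint16_t p3) {
  return (uint16_t)((p0 * 3 + p1 * 9 + p2 + p3 * 3 + 8) >> 4);
}
```

A flat input stays flat, and an isolated bright pixel spreads with the expected 9:3:3:1 falloff.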
+
 #ifdef __cplusplus
 }  // extern "C"
 }  // namespace libyuv
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_gcc.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_gcc.cc
index e2f8854..312236d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_gcc.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_gcc.cc
@@ -21,1296 +21,1348 @@
     (defined(__x86_64__) || (defined(__i386__) && !defined(_MSC_VER)))
 
 // Offsets for source bytes 0 to 9
-static uvec8 kShuf0 =
-  { 0, 1, 3, 4, 5, 7, 8, 9, 128, 128, 128, 128, 128, 128, 128, 128 };
+static const uvec8 kShuf0 = {0,   1,   3,   4,   5,   7,   8,   9,
+                             128, 128, 128, 128, 128, 128, 128, 128};
 
 // Offsets for source bytes 11 to 20 with 8 subtracted = 3 to 12.
-static uvec8 kShuf1 =
-  { 3, 4, 5, 7, 8, 9, 11, 12, 128, 128, 128, 128, 128, 128, 128, 128 };
+static const uvec8 kShuf1 = {3,   4,   5,   7,   8,   9,   11,  12,
+                             128, 128, 128, 128, 128, 128, 128, 128};
 
 // Offsets for source bytes 21 to 31 with 16 subtracted = 5 to 31.
-static uvec8 kShuf2 =
-  { 5, 7, 8, 9, 11, 12, 13, 15, 128, 128, 128, 128, 128, 128, 128, 128 };
+static const uvec8 kShuf2 = {5,   7,   8,   9,   11,  12,  13,  15,
+                             128, 128, 128, 128, 128, 128, 128, 128};
 
 // Offsets for source bytes 0 to 10
-static uvec8 kShuf01 =
-  { 0, 1, 1, 2, 2, 3, 4, 5, 5, 6, 6, 7, 8, 9, 9, 10 };
+static const uvec8 kShuf01 = {0, 1, 1, 2, 2, 3, 4, 5, 5, 6, 6, 7, 8, 9, 9, 10};
 
 // Offsets for source bytes 10 to 21 with 8 subtracted = 3 to 13.
-static uvec8 kShuf11 =
-  { 2, 3, 4, 5, 5, 6, 6, 7, 8, 9, 9, 10, 10, 11, 12, 13 };
+static const uvec8 kShuf11 = {2, 3, 4, 5,  5,  6,  6,  7,
+                              8, 9, 9, 10, 10, 11, 12, 13};
 
 // Offsets for source bytes 21 to 31 with 16 subtracted = 5 to 31.
-static uvec8 kShuf21 =
-  { 5, 6, 6, 7, 8, 9, 9, 10, 10, 11, 12, 13, 13, 14, 14, 15 };
+static const uvec8 kShuf21 = {5,  6,  6,  7,  8,  9,  9,  10,
+                              10, 11, 12, 13, 13, 14, 14, 15};
 
 // Coefficients for source bytes 0 to 10
-static uvec8 kMadd01 =
-  { 3, 1, 2, 2, 1, 3, 3, 1, 2, 2, 1, 3, 3, 1, 2, 2 };
+static const uvec8 kMadd01 = {3, 1, 2, 2, 1, 3, 3, 1, 2, 2, 1, 3, 3, 1, 2, 2};
 
 // Coefficients for source bytes 10 to 21
-static uvec8 kMadd11 =
-  { 1, 3, 3, 1, 2, 2, 1, 3, 3, 1, 2, 2, 1, 3, 3, 1 };
+static const uvec8 kMadd11 = {1, 3, 3, 1, 2, 2, 1, 3, 3, 1, 2, 2, 1, 3, 3, 1};
 
 // Coefficients for source bytes 21 to 31
-static uvec8 kMadd21 =
-  { 2, 2, 1, 3, 3, 1, 2, 2, 1, 3, 3, 1, 2, 2, 1, 3 };
+static const uvec8 kMadd21 = {2, 2, 1, 3, 3, 1, 2, 2, 1, 3, 3, 1, 2, 2, 1, 3};
 
 // Coefficients for source bytes 21 to 31
-static vec16 kRound34 =
-  { 2, 2, 2, 2, 2, 2, 2, 2 };
+static const vec16 kRound34 = {2, 2, 2, 2, 2, 2, 2, 2};
 
-static uvec8 kShuf38a =
-  { 0, 3, 6, 8, 11, 14, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128 };
+static const uvec8 kShuf38a = {0,   3,   6,   8,   11,  14,  128, 128,
+                               128, 128, 128, 128, 128, 128, 128, 128};
 
-static uvec8 kShuf38b =
-  { 128, 128, 128, 128, 128, 128, 0, 3, 6, 8, 11, 14, 128, 128, 128, 128 };
+static const uvec8 kShuf38b = {128, 128, 128, 128, 128, 128, 0,   3,
+                               6,   8,   11,  14,  128, 128, 128, 128};
 
 // Arrange words 0,3,6 into 0,1,2
-static uvec8 kShufAc =
-  { 0, 1, 6, 7, 12, 13, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128 };
+static const uvec8 kShufAc = {0,   1,   6,   7,   12,  13,  128, 128,
+                              128, 128, 128, 128, 128, 128, 128, 128};
 
 // Arrange words 0,3,6 into 3,4,5
-static uvec8 kShufAc3 =
-  { 128, 128, 128, 128, 128, 128, 0, 1, 6, 7, 12, 13, 128, 128, 128, 128 };
+static const uvec8 kShufAc3 = {128, 128, 128, 128, 128, 128, 0,   1,
+                               6,   7,   12,  13,  128, 128, 128, 128};
 
 // Scaling values for boxes of 3x3 and 2x3
-static uvec16 kScaleAc33 =
-  { 65536 / 9, 65536 / 9, 65536 / 6, 65536 / 9, 65536 / 9, 65536 / 6, 0, 0 };
+static const uvec16 kScaleAc33 = {65536 / 9, 65536 / 9, 65536 / 6, 65536 / 9,
+                                  65536 / 9, 65536 / 6, 0,         0};
 
 // Arrange first value for pixels 0,1,2,3,4,5
-static uvec8 kShufAb0 =
-  { 0, 128, 3, 128, 6, 128, 8, 128, 11, 128, 14, 128, 128, 128, 128, 128 };
+static const uvec8 kShufAb0 = {0,  128, 3,  128, 6,   128, 8,   128,
+                               11, 128, 14, 128, 128, 128, 128, 128};
 
 // Arrange second value for pixels 0,1,2,3,4,5
-static uvec8 kShufAb1 =
-  { 1, 128, 4, 128, 7, 128, 9, 128, 12, 128, 15, 128, 128, 128, 128, 128 };
+static const uvec8 kShufAb1 = {1,  128, 4,  128, 7,   128, 9,   128,
+                               12, 128, 15, 128, 128, 128, 128, 128};
 
 // Arrange third value for pixels 0,1,2,3,4,5
-static uvec8 kShufAb2 =
-  { 2, 128, 5, 128, 128, 128, 10, 128, 13, 128, 128, 128, 128, 128, 128, 128 };
+static const uvec8 kShufAb2 = {2,  128, 5,   128, 128, 128, 10,  128,
+                               13, 128, 128, 128, 128, 128, 128, 128};
 
 // Scaling values for boxes of 3x2 and 2x2
-static uvec16 kScaleAb2 =
-  { 65536 / 3, 65536 / 3, 65536 / 2, 65536 / 3, 65536 / 3, 65536 / 2, 0, 0 };
+static const uvec16 kScaleAb2 = {65536 / 3, 65536 / 3, 65536 / 2, 65536 / 3,
+                                 65536 / 3, 65536 / 2, 0,         0};
 
 // GCC versions of row functions are verbatim conversions from Visual C.
 // Generated using gcc disassembly on Visual C object file:
 // objdump -D yuvscaler.obj >yuvscaler.txt
 
-void ScaleRowDown2_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                         uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "psrlw     $0x8,%%xmm0                     \n"
-    "psrlw     $0x8,%%xmm1                     \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_ptr),    // %0
-    "+r"(dst_ptr),    // %1
-    "+r"(dst_width)   // %2
-  :: "memory", "cc", "xmm0", "xmm1"
-  );
+void ScaleRowDown2_SSSE3(const uint8_t* src_ptr,
+                         ptrdiff_t src_stride,
+                         uint8_t* dst_ptr,
+                         int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      // 16 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "psrlw     $0x8,%%xmm0                     \n"
+      "psrlw     $0x8,%%xmm1                     \n"
+      "packuswb  %%xmm1,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(dst_width)  // %2
+        ::"memory",
+        "cc", "xmm0", "xmm1");
 }
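
The SSSE3 loop above implements point-sampled 2x downscaling: `psrlw $8` keeps the high byte of each 16-bit lane and `packuswb` re-packs those bytes, which together select every odd source byte. A portable sketch of the same behavior (helper names are mine; this mirrors the intent, not the upstream C path):

```c
#include <assert.h>
#include <stdint.h>

/* Scalar equivalent of the SIMD selection: shifting each 16-bit word
   right by 8 and packing the results picks src[1], src[3], ... so
   dst[i] = src[2*i + 1].  No saturation occurs since every value
   already fits in a byte. */
static void scale_row_down2(const uint8_t* src, uint8_t* dst,
                            int dst_width) {
  for (int i = 0; i < dst_width; ++i)
    dst[i] = src[2 * i + 1];
}

/* Small self-check used below: downscale 8 bytes to 4. */
static int down2_check(void) {
  const uint8_t src[8] = {10, 11, 12, 13, 14, 15, 16, 17};
  uint8_t dst[4];
  scale_row_down2(src, dst, 4);
  return dst[0] == 11 && dst[1] == 13 && dst[2] == 15 && dst[3] == 17;
}
```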
 
-void ScaleRowDown2Linear_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "pcmpeqb    %%xmm4,%%xmm4                  \n"
-    "psrlw      $0xf,%%xmm4                    \n"
-    "packuswb   %%xmm4,%%xmm4                  \n"
-    "pxor       %%xmm5,%%xmm5                  \n"
+void ScaleRowDown2Linear_SSSE3(const uint8_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst_ptr,
+                               int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "pcmpeqb    %%xmm4,%%xmm4                  \n"
+      "psrlw      $0xf,%%xmm4                    \n"
+      "packuswb   %%xmm4,%%xmm4                  \n"
+      "pxor       %%xmm5,%%xmm5                  \n"
 
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10, 0) ",%%xmm1  \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "pmaddubsw  %%xmm4,%%xmm0                  \n"
-    "pmaddubsw  %%xmm4,%%xmm1                  \n"
-    "pavgw      %%xmm5,%%xmm0                  \n"
-    "pavgw      %%xmm5,%%xmm1                  \n"
-    "packuswb   %%xmm1,%%xmm0                  \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_ptr),    // %0
-    "+r"(dst_ptr),    // %1
-    "+r"(dst_width)   // %2
-  :: "memory", "cc", "xmm0", "xmm1", "xmm4", "xmm5"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "pmaddubsw  %%xmm4,%%xmm0                  \n"
+      "pmaddubsw  %%xmm4,%%xmm1                  \n"
+      "pavgw      %%xmm5,%%xmm0                  \n"
+      "pavgw      %%xmm5,%%xmm1                  \n"
+      "packuswb   %%xmm1,%%xmm0                  \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(dst_width)  // %2
+        ::"memory",
+        "cc", "xmm0", "xmm1", "xmm4", "xmm5");
 }
 
-void ScaleRowDown2Box_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "pcmpeqb    %%xmm4,%%xmm4                  \n"
-    "psrlw      $0xf,%%xmm4                    \n"
-    "packuswb   %%xmm4,%%xmm4                  \n"
-    "pxor       %%xmm5,%%xmm5                  \n"
+void ScaleRowDown2Box_SSSE3(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_ptr,
+                            int dst_width) {
+  asm volatile(
+      "pcmpeqb    %%xmm4,%%xmm4                  \n"
+      "psrlw      $0xf,%%xmm4                    \n"
+      "packuswb   %%xmm4,%%xmm4                  \n"
+      "pxor       %%xmm5,%%xmm5                  \n"
 
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    MEMOPREG(movdqu,0x00,0,3,1,xmm2)           //  movdqu  (%0,%3,1),%%xmm2
-    MEMOPREG(movdqu,0x10,0,3,1,xmm3)           //  movdqu  0x10(%0,%3,1),%%xmm3
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "pmaddubsw  %%xmm4,%%xmm0                  \n"
-    "pmaddubsw  %%xmm4,%%xmm1                  \n"
-    "pmaddubsw  %%xmm4,%%xmm2                  \n"
-    "pmaddubsw  %%xmm4,%%xmm3                  \n"
-    "paddw      %%xmm2,%%xmm0                  \n"
-    "paddw      %%xmm3,%%xmm1                  \n"
-    "psrlw      $0x1,%%xmm0                    \n"
-    "psrlw      $0x1,%%xmm1                    \n"
-    "pavgw      %%xmm5,%%xmm0                  \n"
-    "pavgw      %%xmm5,%%xmm1                  \n"
-    "packuswb   %%xmm1,%%xmm0                  \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_ptr),    // %0
-    "+r"(dst_ptr),    // %1
-    "+r"(dst_width)   // %2
-  : "r"((intptr_t)(src_stride))   // %3
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm5"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x00(%0,%3,1),%%xmm2            \n"
+      "movdqu    0x10(%0,%3,1),%%xmm3            \n"
+      "lea       0x20(%0),%0                     \n"
+      "pmaddubsw  %%xmm4,%%xmm0                  \n"
+      "pmaddubsw  %%xmm4,%%xmm1                  \n"
+      "pmaddubsw  %%xmm4,%%xmm2                  \n"
+      "pmaddubsw  %%xmm4,%%xmm3                  \n"
+      "paddw      %%xmm2,%%xmm0                  \n"
+      "paddw      %%xmm3,%%xmm1                  \n"
+      "psrlw      $0x1,%%xmm0                    \n"
+      "psrlw      $0x1,%%xmm1                    \n"
+      "pavgw      %%xmm5,%%xmm0                  \n"
+      "pavgw      %%xmm5,%%xmm1                  \n"
+      "packuswb   %%xmm1,%%xmm0                  \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_ptr),               // %0
+        "+r"(dst_ptr),               // %1
+        "+r"(dst_width)              // %2
+      : "r"((intptr_t)(src_stride))  // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5");
 }
 
 #ifdef HAS_SCALEROWDOWN2_AVX2
-void ScaleRowDown2_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    " MEMACCESS(0) ",%%ymm0        \n"
-    "vmovdqu    " MEMACCESS2(0x20,0) ",%%ymm1  \n"
-    "lea        " MEMLEA(0x40,0) ",%0          \n"
-    "vpsrlw     $0x8,%%ymm0,%%ymm0             \n"
-    "vpsrlw     $0x8,%%ymm1,%%ymm1             \n"
-    "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
-    "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
-    "vmovdqu    %%ymm0," MEMACCESS(1) "        \n"
-    "lea        " MEMLEA(0x20,1) ",%1          \n"
-    "sub        $0x20,%2                       \n"
-    "jg         1b                             \n"
-    "vzeroupper                                \n"
-  : "+r"(src_ptr),    // %0
-    "+r"(dst_ptr),    // %1
-    "+r"(dst_width)   // %2
-  :: "memory", "cc", "xmm0", "xmm1"
-  );
+void ScaleRowDown2_AVX2(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst_ptr,
+                        int dst_width) {
+  (void)src_stride;
+  asm volatile(
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "vmovdqu    0x20(%0),%%ymm1                \n"
+      "lea        0x40(%0),%0                    \n"
+      "vpsrlw     $0x8,%%ymm0,%%ymm0             \n"
+      "vpsrlw     $0x8,%%ymm1,%%ymm1             \n"
+      "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
+      "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
+      "vmovdqu    %%ymm0,(%1)                    \n"
+      "lea        0x20(%1),%1                    \n"
+      "sub        $0x20,%2                       \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(dst_width)  // %2
+        ::"memory",
+        "cc", "xmm0", "xmm1");
 }
 
-void ScaleRowDown2Linear_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                              uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "vpcmpeqb   %%ymm4,%%ymm4,%%ymm4           \n"
-    "vpsrlw     $0xf,%%ymm4,%%ymm4             \n"
-    "vpackuswb  %%ymm4,%%ymm4,%%ymm4           \n"
-    "vpxor      %%ymm5,%%ymm5,%%ymm5           \n"
+void ScaleRowDown2Linear_AVX2(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst_ptr,
+                              int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "vpcmpeqb   %%ymm4,%%ymm4,%%ymm4           \n"
+      "vpsrlw     $0xf,%%ymm4,%%ymm4             \n"
+      "vpackuswb  %%ymm4,%%ymm4,%%ymm4           \n"
+      "vpxor      %%ymm5,%%ymm5,%%ymm5           \n"
 
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    " MEMACCESS(0) ",%%ymm0        \n"
-    "vmovdqu    " MEMACCESS2(0x20, 0) ",%%ymm1 \n"
-    "lea        " MEMLEA(0x40,0) ",%0          \n"
-    "vpmaddubsw %%ymm4,%%ymm0,%%ymm0           \n"
-    "vpmaddubsw %%ymm4,%%ymm1,%%ymm1           \n"
-    "vpavgw     %%ymm5,%%ymm0,%%ymm0           \n"
-    "vpavgw     %%ymm5,%%ymm1,%%ymm1           \n"
-    "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
-    "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
-    "vmovdqu    %%ymm0," MEMACCESS(1) "        \n"
-    "lea        " MEMLEA(0x20,1) ",%1          \n"
-    "sub        $0x20,%2                       \n"
-    "jg         1b                             \n"
-    "vzeroupper                                \n"
-  : "+r"(src_ptr),    // %0
-    "+r"(dst_ptr),    // %1
-    "+r"(dst_width)   // %2
-  :: "memory", "cc", "xmm0", "xmm1", "xmm4", "xmm5"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "vmovdqu    0x20(%0),%%ymm1                \n"
+      "lea        0x40(%0),%0                    \n"
+      "vpmaddubsw %%ymm4,%%ymm0,%%ymm0           \n"
+      "vpmaddubsw %%ymm4,%%ymm1,%%ymm1           \n"
+      "vpavgw     %%ymm5,%%ymm0,%%ymm0           \n"
+      "vpavgw     %%ymm5,%%ymm1,%%ymm1           \n"
+      "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
+      "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
+      "vmovdqu    %%ymm0,(%1)                    \n"
+      "lea        0x20(%1),%1                    \n"
+      "sub        $0x20,%2                       \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(dst_width)  // %2
+        ::"memory",
+        "cc", "xmm0", "xmm1", "xmm4", "xmm5");
 }
 
-void ScaleRowDown2Box_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "vpcmpeqb   %%ymm4,%%ymm4,%%ymm4           \n"
-    "vpsrlw     $0xf,%%ymm4,%%ymm4             \n"
-    "vpackuswb  %%ymm4,%%ymm4,%%ymm4           \n"
-    "vpxor      %%ymm5,%%ymm5,%%ymm5           \n"
+void ScaleRowDown2Box_AVX2(const uint8_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst_ptr,
+                           int dst_width) {
+  asm volatile(
+      "vpcmpeqb   %%ymm4,%%ymm4,%%ymm4           \n"
+      "vpsrlw     $0xf,%%ymm4,%%ymm4             \n"
+      "vpackuswb  %%ymm4,%%ymm4,%%ymm4           \n"
+      "vpxor      %%ymm5,%%ymm5,%%ymm5           \n"
 
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    " MEMACCESS(0) ",%%ymm0        \n"
-    "vmovdqu    " MEMACCESS2(0x20,0) ",%%ymm1  \n"
-    MEMOPREG(vmovdqu,0x00,0,3,1,ymm2)          //  vmovdqu  (%0,%3,1),%%ymm2
-    MEMOPREG(vmovdqu,0x20,0,3,1,ymm3)          //  vmovdqu  0x20(%0,%3,1),%%ymm3
-    "lea        " MEMLEA(0x40,0) ",%0          \n"
-    "vpmaddubsw %%ymm4,%%ymm0,%%ymm0           \n"
-    "vpmaddubsw %%ymm4,%%ymm1,%%ymm1           \n"
-    "vpmaddubsw %%ymm4,%%ymm2,%%ymm2           \n"
-    "vpmaddubsw %%ymm4,%%ymm3,%%ymm3           \n"
-    "vpaddw     %%ymm2,%%ymm0,%%ymm0           \n"
-    "vpaddw     %%ymm3,%%ymm1,%%ymm1           \n"
-    "vpsrlw     $0x1,%%ymm0,%%ymm0             \n"
-    "vpsrlw     $0x1,%%ymm1,%%ymm1             \n"
-    "vpavgw     %%ymm5,%%ymm0,%%ymm0           \n"
-    "vpavgw     %%ymm5,%%ymm1,%%ymm1           \n"
-    "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
-    "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
-    "vmovdqu    %%ymm0," MEMACCESS(1) "        \n"
-    "lea        " MEMLEA(0x20,1) ",%1          \n"
-    "sub        $0x20,%2                       \n"
-    "jg         1b                             \n"
-    "vzeroupper                                \n"
-  : "+r"(src_ptr),    // %0
-    "+r"(dst_ptr),    // %1
-    "+r"(dst_width)   // %2
-  : "r"((intptr_t)(src_stride))   // %3
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm5"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "vmovdqu    0x20(%0),%%ymm1                \n"
+      "vmovdqu    0x00(%0,%3,1),%%ymm2           \n"
+      "vmovdqu    0x20(%0,%3,1),%%ymm3           \n"
+      "lea        0x40(%0),%0                    \n"
+      "vpmaddubsw %%ymm4,%%ymm0,%%ymm0           \n"
+      "vpmaddubsw %%ymm4,%%ymm1,%%ymm1           \n"
+      "vpmaddubsw %%ymm4,%%ymm2,%%ymm2           \n"
+      "vpmaddubsw %%ymm4,%%ymm3,%%ymm3           \n"
+      "vpaddw     %%ymm2,%%ymm0,%%ymm0           \n"
+      "vpaddw     %%ymm3,%%ymm1,%%ymm1           \n"
+      "vpsrlw     $0x1,%%ymm0,%%ymm0             \n"
+      "vpsrlw     $0x1,%%ymm1,%%ymm1             \n"
+      "vpavgw     %%ymm5,%%ymm0,%%ymm0           \n"
+      "vpavgw     %%ymm5,%%ymm1,%%ymm1           \n"
+      "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
+      "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
+      "vmovdqu    %%ymm0,(%1)                    \n"
+      "lea        0x20(%1),%1                    \n"
+      "sub        $0x20,%2                       \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+      : "+r"(src_ptr),               // %0
+        "+r"(dst_ptr),               // %1
+        "+r"(dst_width)              // %2
+      : "r"((intptr_t)(src_stride))  // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5");
 }
 #endif  // HAS_SCALEROWDOWN2_AVX2
 
-void ScaleRowDown4_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "pcmpeqb   %%xmm5,%%xmm5                   \n"
-    "psrld     $0x18,%%xmm5                    \n"
-    "pslld     $0x10,%%xmm5                    \n"
+void ScaleRowDown4_SSSE3(const uint8_t* src_ptr,
+                         ptrdiff_t src_stride,
+                         uint8_t* dst_ptr,
+                         int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "pcmpeqb   %%xmm5,%%xmm5                   \n"
+      "psrld     $0x18,%%xmm5                    \n"
+      "pslld     $0x10,%%xmm5                    \n"
 
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "pand      %%xmm5,%%xmm0                   \n"
-    "pand      %%xmm5,%%xmm1                   \n"
-    "packuswb  %%xmm1,%%xmm0                   \n"
-    "psrlw     $0x8,%%xmm0                     \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
-    "movq      %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $0x8,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_ptr),    // %0
-    "+r"(dst_ptr),    // %1
-    "+r"(dst_width)   // %2
-  :: "memory", "cc", "xmm0", "xmm1", "xmm5"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "pand      %%xmm5,%%xmm0                   \n"
+      "pand      %%xmm5,%%xmm1                   \n"
+      "packuswb  %%xmm1,%%xmm0                   \n"
+      "psrlw     $0x8,%%xmm0                     \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
+      "movq      %%xmm0,(%1)                     \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $0x8,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(dst_width)  // %2
+        ::"memory",
+        "cc", "xmm0", "xmm1", "xmm5");
 }
 
-void ScaleRowDown4Box_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst_ptr, int dst_width) {
+void ScaleRowDown4Box_SSSE3(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst_ptr,
+                            int dst_width) {
   intptr_t stridex3;
-  asm volatile (
-    "pcmpeqb    %%xmm4,%%xmm4                  \n"
-    "psrlw      $0xf,%%xmm4                    \n"
-    "movdqa     %%xmm4,%%xmm5                  \n"
-    "packuswb   %%xmm4,%%xmm4                  \n"
-    "psllw      $0x3,%%xmm5                    \n"
-    "lea       " MEMLEA4(0x00,4,4,2) ",%3      \n"
+  asm volatile(
+      "pcmpeqb    %%xmm4,%%xmm4                  \n"
+      "psrlw      $0xf,%%xmm4                    \n"
+      "movdqa     %%xmm4,%%xmm5                  \n"
+      "packuswb   %%xmm4,%%xmm4                  \n"
+      "psllw      $0x3,%%xmm5                    \n"
+      "lea       0x00(%4,%4,2),%3                \n"
 
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    MEMOPREG(movdqu,0x00,0,4,1,xmm2)           //  movdqu  (%0,%4,1),%%xmm2
-    MEMOPREG(movdqu,0x10,0,4,1,xmm3)           //  movdqu  0x10(%0,%4,1),%%xmm3
-    "pmaddubsw  %%xmm4,%%xmm0                  \n"
-    "pmaddubsw  %%xmm4,%%xmm1                  \n"
-    "pmaddubsw  %%xmm4,%%xmm2                  \n"
-    "pmaddubsw  %%xmm4,%%xmm3                  \n"
-    "paddw      %%xmm2,%%xmm0                  \n"
-    "paddw      %%xmm3,%%xmm1                  \n"
-    MEMOPREG(movdqu,0x00,0,4,2,xmm2)           //  movdqu  (%0,%4,2),%%xmm2
-    MEMOPREG(movdqu,0x10,0,4,2,xmm3)           //  movdqu  0x10(%0,%4,2),%%xmm3
-    "pmaddubsw  %%xmm4,%%xmm2                  \n"
-    "pmaddubsw  %%xmm4,%%xmm3                  \n"
-    "paddw      %%xmm2,%%xmm0                  \n"
-    "paddw      %%xmm3,%%xmm1                  \n"
-    MEMOPREG(movdqu,0x00,0,3,1,xmm2)           //  movdqu  (%0,%3,1),%%xmm2
-    MEMOPREG(movdqu,0x10,0,3,1,xmm3)           //  movdqu  0x10(%0,%3,1),%%xmm3
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "pmaddubsw  %%xmm4,%%xmm2                  \n"
-    "pmaddubsw  %%xmm4,%%xmm3                  \n"
-    "paddw      %%xmm2,%%xmm0                  \n"
-    "paddw      %%xmm3,%%xmm1                  \n"
-    "phaddw     %%xmm1,%%xmm0                  \n"
-    "paddw      %%xmm5,%%xmm0                  \n"
-    "psrlw      $0x4,%%xmm0                    \n"
-    "packuswb   %%xmm0,%%xmm0                  \n"
-    "movq      %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x8,1) ",%1            \n"
-    "sub       $0x8,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_ptr),     // %0
-    "+r"(dst_ptr),     // %1
-    "+r"(dst_width),   // %2
-    "=&r"(stridex3)    // %3
-  : "r"((intptr_t)(src_stride))    // %4
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x00(%0,%4,1),%%xmm2            \n"
+      "movdqu    0x10(%0,%4,1),%%xmm3            \n"
+      "pmaddubsw  %%xmm4,%%xmm0                  \n"
+      "pmaddubsw  %%xmm4,%%xmm1                  \n"
+      "pmaddubsw  %%xmm4,%%xmm2                  \n"
+      "pmaddubsw  %%xmm4,%%xmm3                  \n"
+      "paddw      %%xmm2,%%xmm0                  \n"
+      "paddw      %%xmm3,%%xmm1                  \n"
+      "movdqu    0x00(%0,%4,2),%%xmm2            \n"
+      "movdqu    0x10(%0,%4,2),%%xmm3            \n"
+      "pmaddubsw  %%xmm4,%%xmm2                  \n"
+      "pmaddubsw  %%xmm4,%%xmm3                  \n"
+      "paddw      %%xmm2,%%xmm0                  \n"
+      "paddw      %%xmm3,%%xmm1                  \n"
+      "movdqu    0x00(%0,%3,1),%%xmm2            \n"
+      "movdqu    0x10(%0,%3,1),%%xmm3            \n"
+      "lea       0x20(%0),%0                     \n"
+      "pmaddubsw  %%xmm4,%%xmm2                  \n"
+      "pmaddubsw  %%xmm4,%%xmm3                  \n"
+      "paddw      %%xmm2,%%xmm0                  \n"
+      "paddw      %%xmm3,%%xmm1                  \n"
+      "phaddw     %%xmm1,%%xmm0                  \n"
+      "paddw      %%xmm5,%%xmm0                  \n"
+      "psrlw      $0x4,%%xmm0                    \n"
+      "packuswb   %%xmm0,%%xmm0                  \n"
+      "movq      %%xmm0,(%1)                     \n"
+      "lea       0x8(%1),%1                      \n"
+      "sub       $0x8,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_ptr),               // %0
+        "+r"(dst_ptr),               // %1
+        "+r"(dst_width),             // %2
+        "=&r"(stridex3)              // %3
+      : "r"((intptr_t)(src_stride))  // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 
-
 #ifdef HAS_SCALEROWDOWN4_AVX2
-void ScaleRowDown4_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "vpcmpeqb   %%ymm5,%%ymm5,%%ymm5           \n"
-    "vpsrld     $0x18,%%ymm5,%%ymm5            \n"
-    "vpslld     $0x10,%%ymm5,%%ymm5            \n"
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    " MEMACCESS(0) ",%%ymm0        \n"
-    "vmovdqu    " MEMACCESS2(0x20,0) ",%%ymm1  \n"
-    "lea        " MEMLEA(0x40,0) ",%0          \n"
-    "vpand      %%ymm5,%%ymm0,%%ymm0           \n"
-    "vpand      %%ymm5,%%ymm1,%%ymm1           \n"
-    "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
-    "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
-    "vpsrlw     $0x8,%%ymm0,%%ymm0             \n"
-    "vpackuswb  %%ymm0,%%ymm0,%%ymm0           \n"
-    "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
-    "vmovdqu    %%xmm0," MEMACCESS(1) "        \n"
-    "lea        " MEMLEA(0x10,1) ",%1          \n"
-    "sub        $0x10,%2                       \n"
-    "jg         1b                             \n"
-    "vzeroupper                                \n"
-  : "+r"(src_ptr),    // %0
-    "+r"(dst_ptr),    // %1
-    "+r"(dst_width)   // %2
-  :: "memory", "cc", "xmm0", "xmm1", "xmm5"
-  );
+void ScaleRowDown4_AVX2(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst_ptr,
+                        int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "vpcmpeqb   %%ymm5,%%ymm5,%%ymm5           \n"
+      "vpsrld     $0x18,%%ymm5,%%ymm5            \n"
+      "vpslld     $0x10,%%ymm5,%%ymm5            \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "vmovdqu    0x20(%0),%%ymm1                \n"
+      "lea        0x40(%0),%0                    \n"
+      "vpand      %%ymm5,%%ymm0,%%ymm0           \n"
+      "vpand      %%ymm5,%%ymm1,%%ymm1           \n"
+      "vpackuswb  %%ymm1,%%ymm0,%%ymm0           \n"
+      "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
+      "vpsrlw     $0x8,%%ymm0,%%ymm0             \n"
+      "vpackuswb  %%ymm0,%%ymm0,%%ymm0           \n"
+      "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
+      "vmovdqu    %%xmm0,(%1)                    \n"
+      "lea        0x10(%1),%1                    \n"
+      "sub        $0x10,%2                       \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(dst_width)  // %2
+        ::"memory",
+        "cc", "xmm0", "xmm1", "xmm5");
 }
 
-void ScaleRowDown4Box_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "vpcmpeqb   %%ymm4,%%ymm4,%%ymm4           \n"
-    "vpsrlw     $0xf,%%ymm4,%%ymm4             \n"
-    "vpsllw     $0x3,%%ymm4,%%ymm5             \n"
-    "vpackuswb  %%ymm4,%%ymm4,%%ymm4           \n"
+void ScaleRowDown4Box_AVX2(const uint8_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst_ptr,
+                           int dst_width) {
+  asm volatile(
+      "vpcmpeqb   %%ymm4,%%ymm4,%%ymm4           \n"
+      "vpsrlw     $0xf,%%ymm4,%%ymm4             \n"
+      "vpsllw     $0x3,%%ymm4,%%ymm5             \n"
+      "vpackuswb  %%ymm4,%%ymm4,%%ymm4           \n"
 
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    " MEMACCESS(0) ",%%ymm0        \n"
-    "vmovdqu    " MEMACCESS2(0x20,0) ",%%ymm1  \n"
-    MEMOPREG(vmovdqu,0x00,0,3,1,ymm2)          //  vmovdqu  (%0,%3,1),%%ymm2
-    MEMOPREG(vmovdqu,0x20,0,3,1,ymm3)          //  vmovdqu  0x20(%0,%3,1),%%ymm3
-    "vpmaddubsw %%ymm4,%%ymm0,%%ymm0           \n"
-    "vpmaddubsw %%ymm4,%%ymm1,%%ymm1           \n"
-    "vpmaddubsw %%ymm4,%%ymm2,%%ymm2           \n"
-    "vpmaddubsw %%ymm4,%%ymm3,%%ymm3           \n"
-    "vpaddw     %%ymm2,%%ymm0,%%ymm0           \n"
-    "vpaddw     %%ymm3,%%ymm1,%%ymm1           \n"
-    MEMOPREG(vmovdqu,0x00,0,3,2,ymm2)          //  vmovdqu  (%0,%3,2),%%ymm2
-    MEMOPREG(vmovdqu,0x20,0,3,2,ymm3)          //  vmovdqu  0x20(%0,%3,2),%%ymm3
-    "vpmaddubsw %%ymm4,%%ymm2,%%ymm2           \n"
-    "vpmaddubsw %%ymm4,%%ymm3,%%ymm3           \n"
-    "vpaddw     %%ymm2,%%ymm0,%%ymm0           \n"
-    "vpaddw     %%ymm3,%%ymm1,%%ymm1           \n"
-    MEMOPREG(vmovdqu,0x00,0,4,1,ymm2)          //  vmovdqu  (%0,%4,1),%%ymm2
-    MEMOPREG(vmovdqu,0x20,0,4,1,ymm3)          //  vmovdqu  0x20(%0,%4,1),%%ymm3
-    "lea        " MEMLEA(0x40,0) ",%0          \n"
-    "vpmaddubsw %%ymm4,%%ymm2,%%ymm2           \n"
-    "vpmaddubsw %%ymm4,%%ymm3,%%ymm3           \n"
-    "vpaddw     %%ymm2,%%ymm0,%%ymm0           \n"
-    "vpaddw     %%ymm3,%%ymm1,%%ymm1           \n"
-    "vphaddw    %%ymm1,%%ymm0,%%ymm0           \n"
-    "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
-    "vpaddw     %%ymm5,%%ymm0,%%ymm0           \n"
-    "vpsrlw     $0x4,%%ymm0,%%ymm0             \n"
-    "vpackuswb  %%ymm0,%%ymm0,%%ymm0           \n"
-    "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
-    "vmovdqu    %%xmm0," MEMACCESS(1) "        \n"
-    "lea        " MEMLEA(0x10,1) ",%1          \n"
-    "sub        $0x10,%2                       \n"
-    "jg         1b                             \n"
-    "vzeroupper                                \n"
-  : "+r"(src_ptr),    // %0
-    "+r"(dst_ptr),    // %1
-    "+r"(dst_width)   // %2
-  : "r"((intptr_t)(src_stride)),  // %3
-    "r"((intptr_t)(src_stride * 3))   // %4
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm0                    \n"
+      "vmovdqu    0x20(%0),%%ymm1                \n"
+      "vmovdqu    0x00(%0,%3,1),%%ymm2           \n"
+      "vmovdqu    0x20(%0,%3,1),%%ymm3           \n"
+      "vpmaddubsw %%ymm4,%%ymm0,%%ymm0           \n"
+      "vpmaddubsw %%ymm4,%%ymm1,%%ymm1           \n"
+      "vpmaddubsw %%ymm4,%%ymm2,%%ymm2           \n"
+      "vpmaddubsw %%ymm4,%%ymm3,%%ymm3           \n"
+      "vpaddw     %%ymm2,%%ymm0,%%ymm0           \n"
+      "vpaddw     %%ymm3,%%ymm1,%%ymm1           \n"
+      "vmovdqu    0x00(%0,%3,2),%%ymm2           \n"
+      "vmovdqu    0x20(%0,%3,2),%%ymm3           \n"
+      "vpmaddubsw %%ymm4,%%ymm2,%%ymm2           \n"
+      "vpmaddubsw %%ymm4,%%ymm3,%%ymm3           \n"
+      "vpaddw     %%ymm2,%%ymm0,%%ymm0           \n"
+      "vpaddw     %%ymm3,%%ymm1,%%ymm1           \n"
+      "vmovdqu    0x00(%0,%4,1),%%ymm2           \n"
+      "vmovdqu    0x20(%0,%4,1),%%ymm3           \n"
+      "lea        0x40(%0),%0                    \n"
+      "vpmaddubsw %%ymm4,%%ymm2,%%ymm2           \n"
+      "vpmaddubsw %%ymm4,%%ymm3,%%ymm3           \n"
+      "vpaddw     %%ymm2,%%ymm0,%%ymm0           \n"
+      "vpaddw     %%ymm3,%%ymm1,%%ymm1           \n"
+      "vphaddw    %%ymm1,%%ymm0,%%ymm0           \n"
+      "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
+      "vpaddw     %%ymm5,%%ymm0,%%ymm0           \n"
+      "vpsrlw     $0x4,%%ymm0,%%ymm0             \n"
+      "vpackuswb  %%ymm0,%%ymm0,%%ymm0           \n"
+      "vpermq     $0xd8,%%ymm0,%%ymm0            \n"
+      "vmovdqu    %%xmm0,(%1)                    \n"
+      "lea        0x10(%1),%1                    \n"
+      "sub        $0x10,%2                       \n"
+      "jg         1b                             \n"
+      "vzeroupper                                \n"
+      : "+r"(src_ptr),                   // %0
+        "+r"(dst_ptr),                   // %1
+        "+r"(dst_width)                  // %2
+      : "r"((intptr_t)(src_stride)),     // %3
+        "r"((intptr_t)(src_stride * 3))  // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 #endif  // HAS_SCALEROWDOWN4_AVX2
 
-void ScaleRowDown34_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                          uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "movdqa    %0,%%xmm3                       \n"
-    "movdqa    %1,%%xmm4                       \n"
-    "movdqa    %2,%%xmm5                       \n"
-  :
-  : "m"(kShuf0),  // %0
-    "m"(kShuf1),  // %1
-    "m"(kShuf2)   // %2
-  );
-  asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm2   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "movdqa    %%xmm2,%%xmm1                   \n"
-    "palignr   $0x8,%%xmm0,%%xmm1              \n"
-    "pshufb    %%xmm3,%%xmm0                   \n"
-    "pshufb    %%xmm4,%%xmm1                   \n"
-    "pshufb    %%xmm5,%%xmm2                   \n"
-    "movq      %%xmm0," MEMACCESS(1) "         \n"
-    "movq      %%xmm1," MEMACCESS2(0x8,1) "    \n"
-    "movq      %%xmm2," MEMACCESS2(0x10,1) "   \n"
-    "lea       " MEMLEA(0x18,1) ",%1           \n"
-    "sub       $0x18,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_ptr),   // %0
-    "+r"(dst_ptr),   // %1
-    "+r"(dst_width)  // %2
-  :: "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5"
-  );
+void ScaleRowDown34_SSSE3(const uint8_t* src_ptr,
+                          ptrdiff_t src_stride,
+                          uint8_t* dst_ptr,
+                          int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "movdqa    %0,%%xmm3                       \n"
+      "movdqa    %1,%%xmm4                       \n"
+      "movdqa    %2,%%xmm5                       \n"
+      :
+      : "m"(kShuf0),  // %0
+        "m"(kShuf1),  // %1
+        "m"(kShuf2)   // %2
+      );
+  asm volatile(
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm2                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "movdqa    %%xmm2,%%xmm1                   \n"
+      "palignr   $0x8,%%xmm0,%%xmm1              \n"
+      "pshufb    %%xmm3,%%xmm0                   \n"
+      "pshufb    %%xmm4,%%xmm1                   \n"
+      "pshufb    %%xmm5,%%xmm2                   \n"
+      "movq      %%xmm0,(%1)                     \n"
+      "movq      %%xmm1,0x8(%1)                  \n"
+      "movq      %%xmm2,0x10(%1)                 \n"
+      "lea       0x18(%1),%1                     \n"
+      "sub       $0x18,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(dst_width)  // %2
+        ::"memory",
+        "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5");
 }
 
-void ScaleRowDown34_1_Box_SSSE3(const uint8* src_ptr,
+void ScaleRowDown34_1_Box_SSSE3(const uint8_t* src_ptr,
                                 ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "movdqa    %0,%%xmm2                       \n"  // kShuf01
-    "movdqa    %1,%%xmm3                       \n"  // kShuf11
-    "movdqa    %2,%%xmm4                       \n"  // kShuf21
-  :
-  : "m"(kShuf01),  // %0
-    "m"(kShuf11),  // %1
-    "m"(kShuf21)   // %2
-  );
-  asm volatile (
-    "movdqa    %0,%%xmm5                       \n"  // kMadd01
-    "movdqa    %1,%%xmm0                       \n"  // kMadd11
-    "movdqa    %2,%%xmm1                       \n"  // kRound34
-  :
-  : "m"(kMadd01),  // %0
-    "m"(kMadd11),  // %1
-    "m"(kRound34)  // %2
-  );
-  asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm6         \n"
-    MEMOPREG(movdqu,0x00,0,3,1,xmm7)           //  movdqu  (%0,%3),%%xmm7
-    "pavgb     %%xmm7,%%xmm6                   \n"
-    "pshufb    %%xmm2,%%xmm6                   \n"
-    "pmaddubsw %%xmm5,%%xmm6                   \n"
-    "paddsw    %%xmm1,%%xmm6                   \n"
-    "psrlw     $0x2,%%xmm6                     \n"
-    "packuswb  %%xmm6,%%xmm6                   \n"
-    "movq      %%xmm6," MEMACCESS(1) "         \n"
-    "movdqu    " MEMACCESS2(0x8,0) ",%%xmm6    \n"
-    MEMOPREG(movdqu,0x8,0,3,1,xmm7)            //  movdqu  0x8(%0,%3),%%xmm7
-    "pavgb     %%xmm7,%%xmm6                   \n"
-    "pshufb    %%xmm3,%%xmm6                   \n"
-    "pmaddubsw %%xmm0,%%xmm6                   \n"
-    "paddsw    %%xmm1,%%xmm6                   \n"
-    "psrlw     $0x2,%%xmm6                     \n"
-    "packuswb  %%xmm6,%%xmm6                   \n"
-    "movq      %%xmm6," MEMACCESS2(0x8,1) "    \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm6   \n"
-    MEMOPREG(movdqu,0x10,0,3,1,xmm7)           //  movdqu  0x10(%0,%3),%%xmm7
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "pavgb     %%xmm7,%%xmm6                   \n"
-    "pshufb    %%xmm4,%%xmm6                   \n"
-    "pmaddubsw %4,%%xmm6                       \n"
-    "paddsw    %%xmm1,%%xmm6                   \n"
-    "psrlw     $0x2,%%xmm6                     \n"
-    "packuswb  %%xmm6,%%xmm6                   \n"
-    "movq      %%xmm6," MEMACCESS2(0x10,1) "   \n"
-    "lea       " MEMLEA(0x18,1) ",%1           \n"
-    "sub       $0x18,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_ptr),   // %0
-    "+r"(dst_ptr),   // %1
-    "+r"(dst_width)  // %2
-  : "r"((intptr_t)(src_stride)),  // %3
-    "m"(kMadd21)     // %4
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+                                uint8_t* dst_ptr,
+                                int dst_width) {
+  asm volatile(
+      "movdqa    %0,%%xmm2                       \n"  // kShuf01
+      "movdqa    %1,%%xmm3                       \n"  // kShuf11
+      "movdqa    %2,%%xmm4                       \n"  // kShuf21
+      :
+      : "m"(kShuf01),  // %0
+        "m"(kShuf11),  // %1
+        "m"(kShuf21)   // %2
+      );
+  asm volatile(
+      "movdqa    %0,%%xmm5                       \n"  // kMadd01
+      "movdqa    %1,%%xmm0                       \n"  // kMadd11
+      "movdqa    %2,%%xmm1                       \n"  // kRound34
+      :
+      : "m"(kMadd01),  // %0
+        "m"(kMadd11),  // %1
+        "m"(kRound34)  // %2
+      );
+  asm volatile(
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm6                     \n"
+      "movdqu    0x00(%0,%3,1),%%xmm7            \n"
+      "pavgb     %%xmm7,%%xmm6                   \n"
+      "pshufb    %%xmm2,%%xmm6                   \n"
+      "pmaddubsw %%xmm5,%%xmm6                   \n"
+      "paddsw    %%xmm1,%%xmm6                   \n"
+      "psrlw     $0x2,%%xmm6                     \n"
+      "packuswb  %%xmm6,%%xmm6                   \n"
+      "movq      %%xmm6,(%1)                     \n"
+      "movdqu    0x8(%0),%%xmm6                  \n"
+      "movdqu    0x8(%0,%3,1),%%xmm7             \n"
+      "pavgb     %%xmm7,%%xmm6                   \n"
+      "pshufb    %%xmm3,%%xmm6                   \n"
+      "pmaddubsw %%xmm0,%%xmm6                   \n"
+      "paddsw    %%xmm1,%%xmm6                   \n"
+      "psrlw     $0x2,%%xmm6                     \n"
+      "packuswb  %%xmm6,%%xmm6                   \n"
+      "movq      %%xmm6,0x8(%1)                  \n"
+      "movdqu    0x10(%0),%%xmm6                 \n"
+      "movdqu    0x10(%0,%3,1),%%xmm7            \n"
+      "lea       0x20(%0),%0                     \n"
+      "pavgb     %%xmm7,%%xmm6                   \n"
+      "pshufb    %%xmm4,%%xmm6                   \n"
+      "pmaddubsw %4,%%xmm6                       \n"
+      "paddsw    %%xmm1,%%xmm6                   \n"
+      "psrlw     $0x2,%%xmm6                     \n"
+      "packuswb  %%xmm6,%%xmm6                   \n"
+      "movq      %%xmm6,0x10(%1)                 \n"
+      "lea       0x18(%1),%1                     \n"
+      "sub       $0x18,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_ptr),                // %0
+        "+r"(dst_ptr),                // %1
+        "+r"(dst_width)               // %2
+      : "r"((intptr_t)(src_stride)),  // %3
+        "m"(kMadd21)                  // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 
-void ScaleRowDown34_0_Box_SSSE3(const uint8* src_ptr,
+void ScaleRowDown34_0_Box_SSSE3(const uint8_t* src_ptr,
                                 ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "movdqa    %0,%%xmm2                       \n"  // kShuf01
-    "movdqa    %1,%%xmm3                       \n"  // kShuf11
-    "movdqa    %2,%%xmm4                       \n"  // kShuf21
-  :
-  : "m"(kShuf01),  // %0
-    "m"(kShuf11),  // %1
-    "m"(kShuf21)   // %2
-  );
-  asm volatile (
-    "movdqa    %0,%%xmm5                       \n"  // kMadd01
-    "movdqa    %1,%%xmm0                       \n"  // kMadd11
-    "movdqa    %2,%%xmm1                       \n"  // kRound34
-  :
-  : "m"(kMadd01),  // %0
-    "m"(kMadd11),  // %1
-    "m"(kRound34)  // %2
-  );
+                                uint8_t* dst_ptr,
+                                int dst_width) {
+  asm volatile(
+      "movdqa    %0,%%xmm2                       \n"  // kShuf01
+      "movdqa    %1,%%xmm3                       \n"  // kShuf11
+      "movdqa    %2,%%xmm4                       \n"  // kShuf21
+      :
+      : "m"(kShuf01),  // %0
+        "m"(kShuf11),  // %1
+        "m"(kShuf21)   // %2
+      );
+  asm volatile(
+      "movdqa    %0,%%xmm5                       \n"  // kMadd01
+      "movdqa    %1,%%xmm0                       \n"  // kMadd11
+      "movdqa    %2,%%xmm1                       \n"  // kRound34
+      :
+      : "m"(kMadd01),  // %0
+        "m"(kMadd11),  // %1
+        "m"(kRound34)  // %2
+      );
 
-  asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm6         \n"
-    MEMOPREG(movdqu,0x00,0,3,1,xmm7)           //  movdqu  (%0,%3,1),%%xmm7
-    "pavgb     %%xmm6,%%xmm7                   \n"
-    "pavgb     %%xmm7,%%xmm6                   \n"
-    "pshufb    %%xmm2,%%xmm6                   \n"
-    "pmaddubsw %%xmm5,%%xmm6                   \n"
-    "paddsw    %%xmm1,%%xmm6                   \n"
-    "psrlw     $0x2,%%xmm6                     \n"
-    "packuswb  %%xmm6,%%xmm6                   \n"
-    "movq      %%xmm6," MEMACCESS(1) "         \n"
-    "movdqu    " MEMACCESS2(0x8,0) ",%%xmm6    \n"
-    MEMOPREG(movdqu,0x8,0,3,1,xmm7)            //  movdqu  0x8(%0,%3,1),%%xmm7
-    "pavgb     %%xmm6,%%xmm7                   \n"
-    "pavgb     %%xmm7,%%xmm6                   \n"
-    "pshufb    %%xmm3,%%xmm6                   \n"
-    "pmaddubsw %%xmm0,%%xmm6                   \n"
-    "paddsw    %%xmm1,%%xmm6                   \n"
-    "psrlw     $0x2,%%xmm6                     \n"
-    "packuswb  %%xmm6,%%xmm6                   \n"
-    "movq      %%xmm6," MEMACCESS2(0x8,1) "    \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm6   \n"
-    MEMOPREG(movdqu,0x10,0,3,1,xmm7)           //  movdqu  0x10(%0,%3,1),%%xmm7
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "pavgb     %%xmm6,%%xmm7                   \n"
-    "pavgb     %%xmm7,%%xmm6                   \n"
-    "pshufb    %%xmm4,%%xmm6                   \n"
-    "pmaddubsw %4,%%xmm6                       \n"
-    "paddsw    %%xmm1,%%xmm6                   \n"
-    "psrlw     $0x2,%%xmm6                     \n"
-    "packuswb  %%xmm6,%%xmm6                   \n"
-    "movq      %%xmm6," MEMACCESS2(0x10,1) "   \n"
-    "lea       " MEMLEA(0x18,1) ",%1           \n"
-    "sub       $0x18,%2                        \n"
-    "jg        1b                              \n"
-    : "+r"(src_ptr),   // %0
-      "+r"(dst_ptr),   // %1
-      "+r"(dst_width)  // %2
-    : "r"((intptr_t)(src_stride)),  // %3
-      "m"(kMadd21)     // %4
-    : "memory", "cc", NACL_R14
-      "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+  asm volatile(
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm6                     \n"
+      "movdqu    0x00(%0,%3,1),%%xmm7            \n"
+      "pavgb     %%xmm6,%%xmm7                   \n"
+      "pavgb     %%xmm7,%%xmm6                   \n"
+      "pshufb    %%xmm2,%%xmm6                   \n"
+      "pmaddubsw %%xmm5,%%xmm6                   \n"
+      "paddsw    %%xmm1,%%xmm6                   \n"
+      "psrlw     $0x2,%%xmm6                     \n"
+      "packuswb  %%xmm6,%%xmm6                   \n"
+      "movq      %%xmm6,(%1)                     \n"
+      "movdqu    0x8(%0),%%xmm6                  \n"
+      "movdqu    0x8(%0,%3,1),%%xmm7             \n"
+      "pavgb     %%xmm6,%%xmm7                   \n"
+      "pavgb     %%xmm7,%%xmm6                   \n"
+      "pshufb    %%xmm3,%%xmm6                   \n"
+      "pmaddubsw %%xmm0,%%xmm6                   \n"
+      "paddsw    %%xmm1,%%xmm6                   \n"
+      "psrlw     $0x2,%%xmm6                     \n"
+      "packuswb  %%xmm6,%%xmm6                   \n"
+      "movq      %%xmm6,0x8(%1)                  \n"
+      "movdqu    0x10(%0),%%xmm6                 \n"
+      "movdqu    0x10(%0,%3,1),%%xmm7            \n"
+      "lea       0x20(%0),%0                     \n"
+      "pavgb     %%xmm6,%%xmm7                   \n"
+      "pavgb     %%xmm7,%%xmm6                   \n"
+      "pshufb    %%xmm4,%%xmm6                   \n"
+      "pmaddubsw %4,%%xmm6                       \n"
+      "paddsw    %%xmm1,%%xmm6                   \n"
+      "psrlw     $0x2,%%xmm6                     \n"
+      "packuswb  %%xmm6,%%xmm6                   \n"
+      "movq      %%xmm6,0x10(%1)                 \n"
+      "lea       0x18(%1),%1                     \n"
+      "sub       $0x18,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_ptr),                // %0
+        "+r"(dst_ptr),                // %1
+        "+r"(dst_width)               // %2
+      : "r"((intptr_t)(src_stride)),  // %3
+        "m"(kMadd21)                  // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 
-void ScaleRowDown38_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                          uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "movdqa    %3,%%xmm4                       \n"
-    "movdqa    %4,%%xmm5                       \n"
+void ScaleRowDown38_SSSE3(const uint8_t* src_ptr,
+                          ptrdiff_t src_stride,
+                          uint8_t* dst_ptr,
+                          int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "movdqa    %3,%%xmm4                       \n"
+      "movdqa    %4,%%xmm5                       \n"
 
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "pshufb    %%xmm4,%%xmm0                   \n"
-    "pshufb    %%xmm5,%%xmm1                   \n"
-    "paddusb   %%xmm1,%%xmm0                   \n"
-    "movq      %%xmm0," MEMACCESS(1) "         \n"
-    "movhlps   %%xmm0,%%xmm1                   \n"
-    "movd      %%xmm1," MEMACCESS2(0x8,1) "    \n"
-    "lea       " MEMLEA(0xc,1) ",%1            \n"
-    "sub       $0xc,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_ptr),   // %0
-    "+r"(dst_ptr),   // %1
-    "+r"(dst_width)  // %2
-  : "m"(kShuf38a),   // %3
-    "m"(kShuf38b)    // %4
-  : "memory", "cc", "xmm0", "xmm1", "xmm4", "xmm5"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "pshufb    %%xmm4,%%xmm0                   \n"
+      "pshufb    %%xmm5,%%xmm1                   \n"
+      "paddusb   %%xmm1,%%xmm0                   \n"
+      "movq      %%xmm0,(%1)                     \n"
+      "movhlps   %%xmm0,%%xmm1                   \n"
+      "movd      %%xmm1,0x8(%1)                  \n"
+      "lea       0xc(%1),%1                      \n"
+      "sub       $0xc,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(dst_width)  // %2
+      : "m"(kShuf38a),   // %3
+        "m"(kShuf38b)    // %4
+      : "memory", "cc", "xmm0", "xmm1", "xmm4", "xmm5");
 }
 
-void ScaleRowDown38_2_Box_SSSE3(const uint8* src_ptr,
+void ScaleRowDown38_2_Box_SSSE3(const uint8_t* src_ptr,
                                 ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "movdqa    %0,%%xmm2                       \n"
-    "movdqa    %1,%%xmm3                       \n"
-    "movdqa    %2,%%xmm4                       \n"
-    "movdqa    %3,%%xmm5                       \n"
-  :
-  : "m"(kShufAb0),   // %0
-    "m"(kShufAb1),   // %1
-    "m"(kShufAb2),   // %2
-    "m"(kScaleAb2)   // %3
-  );
-  asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    MEMOPREG(movdqu,0x00,0,3,1,xmm1)           //  movdqu  (%0,%3,1),%%xmm1
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "pavgb     %%xmm1,%%xmm0                   \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "pshufb    %%xmm2,%%xmm1                   \n"
-    "movdqa    %%xmm0,%%xmm6                   \n"
-    "pshufb    %%xmm3,%%xmm6                   \n"
-    "paddusw   %%xmm6,%%xmm1                   \n"
-    "pshufb    %%xmm4,%%xmm0                   \n"
-    "paddusw   %%xmm0,%%xmm1                   \n"
-    "pmulhuw   %%xmm5,%%xmm1                   \n"
-    "packuswb  %%xmm1,%%xmm1                   \n"
-    "movd      %%xmm1," MEMACCESS(1) "         \n"
-    "psrlq     $0x10,%%xmm1                    \n"
-    "movd      %%xmm1," MEMACCESS2(0x2,1) "    \n"
-    "lea       " MEMLEA(0x6,1) ",%1            \n"
-    "sub       $0x6,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_ptr),     // %0
-    "+r"(dst_ptr),     // %1
-    "+r"(dst_width)    // %2
-  : "r"((intptr_t)(src_stride))  // %3
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6"
-  );
+                                uint8_t* dst_ptr,
+                                int dst_width) {
+  asm volatile(
+      "movdqa    %0,%%xmm2                       \n"
+      "movdqa    %1,%%xmm3                       \n"
+      "movdqa    %2,%%xmm4                       \n"
+      "movdqa    %3,%%xmm5                       \n"
+      :
+      : "m"(kShufAb0),  // %0
+        "m"(kShufAb1),  // %1
+        "m"(kShufAb2),  // %2
+        "m"(kScaleAb2)  // %3
+      );
+  asm volatile(
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x00(%0,%3,1),%%xmm1            \n"
+      "lea       0x10(%0),%0                     \n"
+      "pavgb     %%xmm1,%%xmm0                   \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "pshufb    %%xmm2,%%xmm1                   \n"
+      "movdqa    %%xmm0,%%xmm6                   \n"
+      "pshufb    %%xmm3,%%xmm6                   \n"
+      "paddusw   %%xmm6,%%xmm1                   \n"
+      "pshufb    %%xmm4,%%xmm0                   \n"
+      "paddusw   %%xmm0,%%xmm1                   \n"
+      "pmulhuw   %%xmm5,%%xmm1                   \n"
+      "packuswb  %%xmm1,%%xmm1                   \n"
+      "movd      %%xmm1,(%1)                     \n"
+      "psrlq     $0x10,%%xmm1                    \n"
+      "movd      %%xmm1,0x2(%1)                  \n"
+      "lea       0x6(%1),%1                      \n"
+      "sub       $0x6,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_ptr),               // %0
+        "+r"(dst_ptr),               // %1
+        "+r"(dst_width)              // %2
+      : "r"((intptr_t)(src_stride))  // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6");
 }
 
-void ScaleRowDown38_3_Box_SSSE3(const uint8* src_ptr,
+void ScaleRowDown38_3_Box_SSSE3(const uint8_t* src_ptr,
                                 ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "movdqa    %0,%%xmm2                       \n"
-    "movdqa    %1,%%xmm3                       \n"
-    "movdqa    %2,%%xmm4                       \n"
-    "pxor      %%xmm5,%%xmm5                   \n"
-  :
-  : "m"(kShufAc),    // %0
-    "m"(kShufAc3),   // %1
-    "m"(kScaleAc33)  // %2
-  );
-  asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    MEMOPREG(movdqu,0x00,0,3,1,xmm6)           //  movdqu  (%0,%3,1),%%xmm6
-    "movhlps   %%xmm0,%%xmm1                   \n"
-    "movhlps   %%xmm6,%%xmm7                   \n"
-    "punpcklbw %%xmm5,%%xmm0                   \n"
-    "punpcklbw %%xmm5,%%xmm1                   \n"
-    "punpcklbw %%xmm5,%%xmm6                   \n"
-    "punpcklbw %%xmm5,%%xmm7                   \n"
-    "paddusw   %%xmm6,%%xmm0                   \n"
-    "paddusw   %%xmm7,%%xmm1                   \n"
-    MEMOPREG(movdqu,0x00,0,3,2,xmm6)           //  movdqu  (%0,%3,2),%%xmm6
-    "lea       " MEMLEA(0x10,0) ",%0           \n"
-    "movhlps   %%xmm6,%%xmm7                   \n"
-    "punpcklbw %%xmm5,%%xmm6                   \n"
-    "punpcklbw %%xmm5,%%xmm7                   \n"
-    "paddusw   %%xmm6,%%xmm0                   \n"
-    "paddusw   %%xmm7,%%xmm1                   \n"
-    "movdqa    %%xmm0,%%xmm6                   \n"
-    "psrldq    $0x2,%%xmm0                     \n"
-    "paddusw   %%xmm0,%%xmm6                   \n"
-    "psrldq    $0x2,%%xmm0                     \n"
-    "paddusw   %%xmm0,%%xmm6                   \n"
-    "pshufb    %%xmm2,%%xmm6                   \n"
-    "movdqa    %%xmm1,%%xmm7                   \n"
-    "psrldq    $0x2,%%xmm1                     \n"
-    "paddusw   %%xmm1,%%xmm7                   \n"
-    "psrldq    $0x2,%%xmm1                     \n"
-    "paddusw   %%xmm1,%%xmm7                   \n"
-    "pshufb    %%xmm3,%%xmm7                   \n"
-    "paddusw   %%xmm7,%%xmm6                   \n"
-    "pmulhuw   %%xmm4,%%xmm6                   \n"
-    "packuswb  %%xmm6,%%xmm6                   \n"
-    "movd      %%xmm6," MEMACCESS(1) "         \n"
-    "psrlq     $0x10,%%xmm6                    \n"
-    "movd      %%xmm6," MEMACCESS2(0x2,1) "    \n"
-    "lea       " MEMLEA(0x6,1) ",%1            \n"
-    "sub       $0x6,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_ptr),    // %0
-    "+r"(dst_ptr),    // %1
-    "+r"(dst_width)   // %2
-  : "r"((intptr_t)(src_stride))   // %3
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+                                uint8_t* dst_ptr,
+                                int dst_width) {
+  asm volatile(
+      "movdqa    %0,%%xmm2                       \n"
+      "movdqa    %1,%%xmm3                       \n"
+      "movdqa    %2,%%xmm4                       \n"
+      "pxor      %%xmm5,%%xmm5                   \n"
+      :
+      : "m"(kShufAc),    // %0
+        "m"(kShufAc3),   // %1
+        "m"(kScaleAc33)  // %2
+      );
+  asm volatile(
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x00(%0,%3,1),%%xmm6            \n"
+      "movhlps   %%xmm0,%%xmm1                   \n"
+      "movhlps   %%xmm6,%%xmm7                   \n"
+      "punpcklbw %%xmm5,%%xmm0                   \n"
+      "punpcklbw %%xmm5,%%xmm1                   \n"
+      "punpcklbw %%xmm5,%%xmm6                   \n"
+      "punpcklbw %%xmm5,%%xmm7                   \n"
+      "paddusw   %%xmm6,%%xmm0                   \n"
+      "paddusw   %%xmm7,%%xmm1                   \n"
+      "movdqu    0x00(%0,%3,2),%%xmm6            \n"
+      "lea       0x10(%0),%0                     \n"
+      "movhlps   %%xmm6,%%xmm7                   \n"
+      "punpcklbw %%xmm5,%%xmm6                   \n"
+      "punpcklbw %%xmm5,%%xmm7                   \n"
+      "paddusw   %%xmm6,%%xmm0                   \n"
+      "paddusw   %%xmm7,%%xmm1                   \n"
+      "movdqa    %%xmm0,%%xmm6                   \n"
+      "psrldq    $0x2,%%xmm0                     \n"
+      "paddusw   %%xmm0,%%xmm6                   \n"
+      "psrldq    $0x2,%%xmm0                     \n"
+      "paddusw   %%xmm0,%%xmm6                   \n"
+      "pshufb    %%xmm2,%%xmm6                   \n"
+      "movdqa    %%xmm1,%%xmm7                   \n"
+      "psrldq    $0x2,%%xmm1                     \n"
+      "paddusw   %%xmm1,%%xmm7                   \n"
+      "psrldq    $0x2,%%xmm1                     \n"
+      "paddusw   %%xmm1,%%xmm7                   \n"
+      "pshufb    %%xmm3,%%xmm7                   \n"
+      "paddusw   %%xmm7,%%xmm6                   \n"
+      "pmulhuw   %%xmm4,%%xmm6                   \n"
+      "packuswb  %%xmm6,%%xmm6                   \n"
+      "movd      %%xmm6,(%1)                     \n"
+      "psrlq     $0x10,%%xmm6                    \n"
+      "movd      %%xmm6,0x2(%1)                  \n"
+      "lea       0x6(%1),%1                      \n"
+      "sub       $0x6,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_ptr),               // %0
+        "+r"(dst_ptr),               // %1
+        "+r"(dst_width)              // %2
+      : "r"((intptr_t)(src_stride))  // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 
 // Reads 16xN bytes and produces 16 shorts at a time.
-void ScaleAddRow_SSE2(const uint8* src_ptr, uint16* dst_ptr, int src_width) {
-  asm volatile (
-    "pxor      %%xmm5,%%xmm5                   \n"
+void ScaleAddRow_SSE2(const uint8_t* src_ptr,
+                      uint16_t* dst_ptr,
+                      int src_width) {
+  asm volatile(
 
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm3         \n"
-    "lea       " MEMLEA(0x10,0) ",%0           \n"  // src_ptr += 16
-    "movdqu    " MEMACCESS(1) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,1) ",%%xmm1   \n"
-    "movdqa    %%xmm3,%%xmm2                   \n"
-    "punpcklbw %%xmm5,%%xmm2                   \n"
-    "punpckhbw %%xmm5,%%xmm3                   \n"
-    "paddusw   %%xmm2,%%xmm0                   \n"
-    "paddusw   %%xmm3,%%xmm1                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "movdqu    %%xmm1," MEMACCESS2(0x10,1) "   \n"
-    "lea       " MEMLEA(0x20,1) ",%1           \n"
-    "sub       $0x10,%2                        \n"
-    "jg        1b                              \n"
-  : "+r"(src_ptr),     // %0
-    "+r"(dst_ptr),     // %1
-    "+r"(src_width)    // %2
-  :
-  : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5"
-  );
+      "pxor      %%xmm5,%%xmm5                   \n"
+
+      // 16 pixel loop.
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm3                     \n"
+      "lea       0x10(%0),%0                     \n"  // src_ptr += 16
+      "movdqu    (%1),%%xmm0                     \n"
+      "movdqu    0x10(%1),%%xmm1                 \n"
+      "movdqa    %%xmm3,%%xmm2                   \n"
+      "punpcklbw %%xmm5,%%xmm2                   \n"
+      "punpckhbw %%xmm5,%%xmm3                   \n"
+      "paddusw   %%xmm2,%%xmm0                   \n"
+      "paddusw   %%xmm3,%%xmm1                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "movdqu    %%xmm1,0x10(%1)                 \n"
+      "lea       0x20(%1),%1                     \n"
+      "sub       $0x10,%2                        \n"
+      "jg        1b                              \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(src_width)  // %2
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5");
 }
 
-
 #ifdef HAS_SCALEADDROW_AVX2
 // Reads 32 bytes and accumulates to 32 shorts at a time.
-void ScaleAddRow_AVX2(const uint8* src_ptr, uint16* dst_ptr, int src_width) {
-  asm volatile (
-    "vpxor      %%ymm5,%%ymm5,%%ymm5           \n"
+void ScaleAddRow_AVX2(const uint8_t* src_ptr,
+                      uint16_t* dst_ptr,
+                      int src_width) {
+  asm volatile(
 
-    LABELALIGN
-  "1:                                          \n"
-    "vmovdqu    " MEMACCESS(0) ",%%ymm3        \n"
-    "lea        " MEMLEA(0x20,0) ",%0          \n"  // src_ptr += 32
-    "vpermq     $0xd8,%%ymm3,%%ymm3            \n"
-    "vpunpcklbw %%ymm5,%%ymm3,%%ymm2           \n"
-    "vpunpckhbw %%ymm5,%%ymm3,%%ymm3           \n"
-    "vpaddusw   " MEMACCESS(1) ",%%ymm2,%%ymm0 \n"
-    "vpaddusw   " MEMACCESS2(0x20,1) ",%%ymm3,%%ymm1 \n"
-    "vmovdqu    %%ymm0," MEMACCESS(1) "        \n"
-    "vmovdqu    %%ymm1," MEMACCESS2(0x20,1) "  \n"
-    "lea       " MEMLEA(0x40,1) ",%1           \n"
-    "sub       $0x20,%2                        \n"
-    "jg        1b                              \n"
-    "vzeroupper                                \n"
-  : "+r"(src_ptr),     // %0
-    "+r"(dst_ptr),     // %1
-    "+r"(src_width)    // %2
-  :
-  : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5"
-  );
+      "vpxor      %%ymm5,%%ymm5,%%ymm5           \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "vmovdqu    (%0),%%ymm3                    \n"
+      "lea        0x20(%0),%0                    \n"  // src_ptr += 32
+      "vpermq     $0xd8,%%ymm3,%%ymm3            \n"
+      "vpunpcklbw %%ymm5,%%ymm3,%%ymm2           \n"
+      "vpunpckhbw %%ymm5,%%ymm3,%%ymm3           \n"
+      "vpaddusw   (%1),%%ymm2,%%ymm0             \n"
+      "vpaddusw   0x20(%1),%%ymm3,%%ymm1         \n"
+      "vmovdqu    %%ymm0,(%1)                    \n"
+      "vmovdqu    %%ymm1,0x20(%1)                \n"
+      "lea       0x40(%1),%1                     \n"
+      "sub       $0x20,%2                        \n"
+      "jg        1b                              \n"
+      "vzeroupper                                \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(src_width)  // %2
+      :
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm5");
 }
 #endif  // HAS_SCALEADDROW_AVX2
 
 // Constant for making pixels signed to avoid pmaddubsw
 // saturation.
-static uvec8 kFsub80 =
-  { 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80,
-    0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80 };
+static const uvec8 kFsub80 = {0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80,
+                              0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80};
 
 // Constant for making pixels unsigned and adding .5 for rounding.
-static uvec16 kFadd40 =
-  { 0x4040, 0x4040, 0x4040, 0x4040, 0x4040, 0x4040, 0x4040, 0x4040 };
+static const uvec16 kFadd40 = {0x4040, 0x4040, 0x4040, 0x4040,
+                               0x4040, 0x4040, 0x4040, 0x4040};
 
 // Bilinear column filtering. SSSE3 version.
-void ScaleFilterCols_SSSE3(uint8* dst_ptr, const uint8* src_ptr,
-                           int dst_width, int x, int dx) {
+void ScaleFilterCols_SSSE3(uint8_t* dst_ptr,
+                           const uint8_t* src_ptr,
+                           int dst_width,
+                           int x,
+                           int dx) {
   intptr_t x0, x1, temp_pixel;
-  asm volatile (
-    "movd      %6,%%xmm2                       \n"
-    "movd      %7,%%xmm3                       \n"
-    "movl      $0x04040000,%k2                 \n"
-    "movd      %k2,%%xmm5                      \n"
-    "pcmpeqb   %%xmm6,%%xmm6                   \n"
-    "psrlw     $0x9,%%xmm6                     \n"  // 0x007f007f
-    "pcmpeqb   %%xmm7,%%xmm7                   \n"
-    "psrlw     $15,%%xmm7                      \n"  // 0x00010001
+  asm volatile(
+      "movd      %6,%%xmm2                       \n"
+      "movd      %7,%%xmm3                       \n"
+      "movl      $0x04040000,%k2                 \n"
+      "movd      %k2,%%xmm5                      \n"
+      "pcmpeqb   %%xmm6,%%xmm6                   \n"
+      "psrlw     $0x9,%%xmm6                     \n"  // 0x007f007f
+      "pcmpeqb   %%xmm7,%%xmm7                   \n"
+      "psrlw     $15,%%xmm7                      \n"  // 0x00010001
 
-    "pextrw    $0x1,%%xmm2,%k3                 \n"
-    "subl      $0x2,%5                         \n"
-    "jl        29f                             \n"
-    "movdqa    %%xmm2,%%xmm0                   \n"
-    "paddd     %%xmm3,%%xmm0                   \n"
-    "punpckldq %%xmm0,%%xmm2                   \n"
-    "punpckldq %%xmm3,%%xmm3                   \n"
-    "paddd     %%xmm3,%%xmm3                   \n"
-    "pextrw    $0x3,%%xmm2,%k4                 \n"
+      "pextrw    $0x1,%%xmm2,%k3                 \n"
+      "subl      $0x2,%5                         \n"
+      "jl        29f                             \n"
+      "movdqa    %%xmm2,%%xmm0                   \n"
+      "paddd     %%xmm3,%%xmm0                   \n"
+      "punpckldq %%xmm0,%%xmm2                   \n"
+      "punpckldq %%xmm3,%%xmm3                   \n"
+      "paddd     %%xmm3,%%xmm3                   \n"
+      "pextrw    $0x3,%%xmm2,%k4                 \n"
 
-    LABELALIGN
-  "2:                                          \n"
-    "movdqa    %%xmm2,%%xmm1                   \n"
-    "paddd     %%xmm3,%%xmm2                   \n"
-    MEMOPARG(movzwl,0x00,1,3,1,k2)             //  movzwl  (%1,%3,1),%k2
-    "movd      %k2,%%xmm0                      \n"
-    "psrlw     $0x9,%%xmm1                     \n"
-    MEMOPARG(movzwl,0x00,1,4,1,k2)             //  movzwl  (%1,%4,1),%k2
-    "movd      %k2,%%xmm4                      \n"
-    "pshufb    %%xmm5,%%xmm1                   \n"
-    "punpcklwd %%xmm4,%%xmm0                   \n"
-    "psubb     %8,%%xmm0                       \n"  // make pixels signed.
-    "pxor      %%xmm6,%%xmm1                   \n"  // 128 -f = (f ^ 127 ) + 1
-    "paddusb   %%xmm7,%%xmm1                   \n"
-    "pmaddubsw %%xmm0,%%xmm1                   \n"
-    "pextrw    $0x1,%%xmm2,%k3                 \n"
-    "pextrw    $0x3,%%xmm2,%k4                 \n"
-    "paddw     %9,%%xmm1                       \n"  // make pixels unsigned.
-    "psrlw     $0x7,%%xmm1                     \n"
-    "packuswb  %%xmm1,%%xmm1                   \n"
-    "movd      %%xmm1,%k2                      \n"
-    "mov       %w2," MEMACCESS(0) "            \n"
-    "lea       " MEMLEA(0x2,0) ",%0            \n"
-    "subl      $0x2,%5                         \n"
-    "jge       2b                              \n"
+      LABELALIGN
+      "2:                                        \n"
+      "movdqa    %%xmm2,%%xmm1                   \n"
+      "paddd     %%xmm3,%%xmm2                   \n"
+      "movzwl    0x00(%1,%3,1),%k2               \n"
+      "movd      %k2,%%xmm0                      \n"
+      "psrlw     $0x9,%%xmm1                     \n"
+      "movzwl    0x00(%1,%4,1),%k2               \n"
+      "movd      %k2,%%xmm4                      \n"
+      "pshufb    %%xmm5,%%xmm1                   \n"
+      "punpcklwd %%xmm4,%%xmm0                   \n"
+      "psubb     %8,%%xmm0                       \n"  // make pixels signed.
+      "pxor      %%xmm6,%%xmm1                   \n"  // 128 - f = (f ^ 127 ) +
+                                                      // 1
+      "paddusb   %%xmm7,%%xmm1                   \n"
+      "pmaddubsw %%xmm0,%%xmm1                   \n"
+      "pextrw    $0x1,%%xmm2,%k3                 \n"
+      "pextrw    $0x3,%%xmm2,%k4                 \n"
+      "paddw     %9,%%xmm1                       \n"  // make pixels unsigned.
+      "psrlw     $0x7,%%xmm1                     \n"
+      "packuswb  %%xmm1,%%xmm1                   \n"
+      "movd      %%xmm1,%k2                      \n"
+      "mov       %w2,(%0)                        \n"
+      "lea       0x2(%0),%0                      \n"
+      "subl      $0x2,%5                         \n"
+      "jge       2b                              \n"
 
-    LABELALIGN
-  "29:                                         \n"
-    "addl      $0x1,%5                         \n"
-    "jl        99f                             \n"
-    MEMOPARG(movzwl,0x00,1,3,1,k2)             //  movzwl  (%1,%3,1),%k2
-    "movd      %k2,%%xmm0                      \n"
-    "psrlw     $0x9,%%xmm2                     \n"
-    "pshufb    %%xmm5,%%xmm2                   \n"
-    "psubb     %8,%%xmm0                       \n"  // make pixels signed.
-    "pxor      %%xmm6,%%xmm2                   \n"
-    "paddusb   %%xmm7,%%xmm2                   \n"
-    "pmaddubsw %%xmm0,%%xmm2                   \n"
-    "paddw     %9,%%xmm2                       \n"  // make pixels unsigned.
-    "psrlw     $0x7,%%xmm2                     \n"
-    "packuswb  %%xmm2,%%xmm2                   \n"
-    "movd      %%xmm2,%k2                      \n"
-    "mov       %b2," MEMACCESS(0) "            \n"
-  "99:                                         \n"
-  : "+r"(dst_ptr),      // %0
-    "+r"(src_ptr),      // %1
-    "=&a"(temp_pixel),  // %2
-    "=&r"(x0),          // %3
-    "=&r"(x1),          // %4
+      LABELALIGN
+      "29:                                       \n"
+      "addl      $0x1,%5                         \n"
+      "jl        99f                             \n"
+      "movzwl    0x00(%1,%3,1),%k2               \n"
+      "movd      %k2,%%xmm0                      \n"
+      "psrlw     $0x9,%%xmm2                     \n"
+      "pshufb    %%xmm5,%%xmm2                   \n"
+      "psubb     %8,%%xmm0                       \n"  // make pixels signed.
+      "pxor      %%xmm6,%%xmm2                   \n"
+      "paddusb   %%xmm7,%%xmm2                   \n"
+      "pmaddubsw %%xmm0,%%xmm2                   \n"
+      "paddw     %9,%%xmm2                       \n"  // make pixels unsigned.
+      "psrlw     $0x7,%%xmm2                     \n"
+      "packuswb  %%xmm2,%%xmm2                   \n"
+      "movd      %%xmm2,%k2                      \n"
+      "mov       %b2,(%0)                        \n"
+      "99:                                       \n"
+      : "+r"(dst_ptr),      // %0
+        "+r"(src_ptr),      // %1
+        "=&a"(temp_pixel),  // %2
+        "=&r"(x0),          // %3
+        "=&r"(x1),          // %4
 #if defined(__x86_64__)
-    "+rm"(dst_width)    // %5
+        "+rm"(dst_width)  // %5
 #else
-    "+m"(dst_width)    // %5
+        "+m"(dst_width)  // %5
 #endif
-  : "rm"(x),            // %6
-    "rm"(dx),           // %7
+      : "rm"(x),   // %6
+        "rm"(dx),  // %7
 #if defined(__x86_64__)
-    "x"(kFsub80),       // %8
-    "x"(kFadd40)        // %9
+        "x"(kFsub80),  // %8
+        "x"(kFadd40)   // %9
 #else
-    "m"(kFsub80),       // %8
-    "m"(kFadd40)        // %9
+        "m"(kFsub80),    // %8
+        "m"(kFadd40)     // %9
 #endif
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7"
-  );
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6",
+        "xmm7");
 }
 
 // Reads 4 pixels, duplicates them and writes 8 pixels.
 // Alignment requirement: src_argb 16 byte aligned, dst_argb 16 byte aligned.
-void ScaleColsUp2_SSE2(uint8* dst_ptr, const uint8* src_ptr,
-                       int dst_width, int x, int dx) {
-  asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(1) ",%%xmm0         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "punpcklbw %%xmm0,%%xmm0                   \n"
-    "punpckhbw %%xmm1,%%xmm1                   \n"
-    "movdqu    %%xmm0," MEMACCESS(0) "         \n"
-    "movdqu    %%xmm1," MEMACCESS2(0x10,0) "   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "sub       $0x20,%2                         \n"
-    "jg        1b                              \n"
+void ScaleColsUp2_SSE2(uint8_t* dst_ptr,
+                       const uint8_t* src_ptr,
+                       int dst_width,
+                       int x,
+                       int dx) {
+  (void)x;
+  (void)dx;
+  asm volatile(
 
-  : "+r"(dst_ptr),     // %0
-    "+r"(src_ptr),     // %1
-    "+r"(dst_width)    // %2
-  :: "memory", "cc", "xmm0", "xmm1"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%1),%%xmm0                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "punpcklbw %%xmm0,%%xmm0                   \n"
+      "punpckhbw %%xmm1,%%xmm1                   \n"
+      "movdqu    %%xmm0,(%0)                     \n"
+      "movdqu    %%xmm1,0x10(%0)                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "sub       $0x20,%2                        \n"
+      "jg        1b                              \n"
+
+      : "+r"(dst_ptr),   // %0
+        "+r"(src_ptr),   // %1
+        "+r"(dst_width)  // %2
+        ::"memory",
+        "cc", "xmm0", "xmm1");
 }
 
-void ScaleARGBRowDown2_SSE2(const uint8* src_argb,
+void ScaleARGBRowDown2_SSE2(const uint8_t* src_argb,
                             ptrdiff_t src_stride,
-                            uint8* dst_argb, int dst_width) {
-  asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "shufps    $0xdd,%%xmm1,%%xmm0             \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x4,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_argb),  // %1
-    "+r"(dst_width)  // %2
-  :: "memory", "cc", "xmm0", "xmm1"
-  );
+                            uint8_t* dst_argb,
+                            int dst_width) {
+  (void)src_stride;
+  asm volatile(
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "shufps    $0xdd,%%xmm1,%%xmm0             \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x4,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(dst_width)  // %2
+        ::"memory",
+        "cc", "xmm0", "xmm1");
 }
 
-void ScaleARGBRowDown2Linear_SSE2(const uint8* src_argb,
+void ScaleARGBRowDown2Linear_SSE2(const uint8_t* src_argb,
                                   ptrdiff_t src_stride,
-                                  uint8* dst_argb, int dst_width) {
-  asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "movdqa    %%xmm0,%%xmm2                   \n"
-    "shufps    $0x88,%%xmm1,%%xmm0             \n"
-    "shufps    $0xdd,%%xmm1,%%xmm2             \n"
-    "pavgb     %%xmm2,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x4,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),  // %0
-    "+r"(dst_argb),  // %1
-    "+r"(dst_width)  // %2
-  :: "memory", "cc", "xmm0", "xmm1"
-  );
+                                  uint8_t* dst_argb,
+                                  int dst_width) {
+  (void)src_stride;
+  asm volatile(
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "movdqa    %%xmm0,%%xmm2                   \n"
+      "shufps    $0x88,%%xmm1,%%xmm0             \n"
+      "shufps    $0xdd,%%xmm1,%%xmm2             \n"
+      "pavgb     %%xmm2,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x4,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(dst_width)  // %2
+        ::"memory",
+        "cc", "xmm0", "xmm1");
 }
 
-void ScaleARGBRowDown2Box_SSE2(const uint8* src_argb,
+void ScaleARGBRowDown2Box_SSE2(const uint8_t* src_argb,
                                ptrdiff_t src_stride,
-                               uint8* dst_argb, int dst_width) {
-  asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(0) ",%%xmm0         \n"
-    "movdqu    " MEMACCESS2(0x10,0) ",%%xmm1   \n"
-    MEMOPREG(movdqu,0x00,0,3,1,xmm2)           //  movdqu   (%0,%3,1),%%xmm2
-    MEMOPREG(movdqu,0x10,0,3,1,xmm3)           //  movdqu   0x10(%0,%3,1),%%xmm3
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "pavgb     %%xmm2,%%xmm0                   \n"
-    "pavgb     %%xmm3,%%xmm1                   \n"
-    "movdqa    %%xmm0,%%xmm2                   \n"
-    "shufps    $0x88,%%xmm1,%%xmm0             \n"
-    "shufps    $0xdd,%%xmm1,%%xmm2             \n"
-    "pavgb     %%xmm2,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(1) "         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "sub       $0x4,%2                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),   // %0
-    "+r"(dst_argb),   // %1
-    "+r"(dst_width)   // %2
-  : "r"((intptr_t)(src_stride))   // %3
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3"
-  );
+                               uint8_t* dst_argb,
+                               int dst_width) {
+  asm volatile(
+
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%0),%%xmm0                     \n"
+      "movdqu    0x10(%0),%%xmm1                 \n"
+      "movdqu    0x00(%0,%3,1),%%xmm2            \n"
+      "movdqu    0x10(%0,%3,1),%%xmm3            \n"
+      "lea       0x20(%0),%0                     \n"
+      "pavgb     %%xmm2,%%xmm0                   \n"
+      "pavgb     %%xmm3,%%xmm1                   \n"
+      "movdqa    %%xmm0,%%xmm2                   \n"
+      "shufps    $0x88,%%xmm1,%%xmm0             \n"
+      "shufps    $0xdd,%%xmm1,%%xmm2             \n"
+      "pavgb     %%xmm2,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%1)                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "sub       $0x4,%2                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),              // %0
+        "+r"(dst_argb),              // %1
+        "+r"(dst_width)              // %2
+      : "r"((intptr_t)(src_stride))  // %3
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3");
 }
 
 // Reads 4 pixels at a time.
 // Alignment requirement: dst_argb 16 byte aligned.
-void ScaleARGBRowDownEven_SSE2(const uint8* src_argb, ptrdiff_t src_stride,
-                               int src_stepx, uint8* dst_argb, int dst_width) {
+void ScaleARGBRowDownEven_SSE2(const uint8_t* src_argb,
+                               ptrdiff_t src_stride,
+                               int src_stepx,
+                               uint8_t* dst_argb,
+                               int dst_width) {
   intptr_t src_stepx_x4 = (intptr_t)(src_stepx);
   intptr_t src_stepx_x12;
-  asm volatile (
-    "lea       " MEMLEA3(0x00,1,4) ",%1        \n"
-    "lea       " MEMLEA4(0x00,1,1,2) ",%4      \n"
-    LABELALIGN
-  "1:                                          \n"
-    "movd      " MEMACCESS(0) ",%%xmm0         \n"
-    MEMOPREG(movd,0x00,0,1,1,xmm1)             //  movd      (%0,%1,1),%%xmm1
-    "punpckldq %%xmm1,%%xmm0                   \n"
-    MEMOPREG(movd,0x00,0,1,2,xmm2)             //  movd      (%0,%1,2),%%xmm2
-    MEMOPREG(movd,0x00,0,4,1,xmm3)             //  movd      (%0,%4,1),%%xmm3
-    "lea       " MEMLEA4(0x00,0,1,4) ",%0      \n"
-    "punpckldq %%xmm3,%%xmm2                   \n"
-    "punpcklqdq %%xmm2,%%xmm0                  \n"
-    "movdqu    %%xmm0," MEMACCESS(2) "         \n"
-    "lea       " MEMLEA(0x10,2) ",%2           \n"
-    "sub       $0x4,%3                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),       // %0
-    "+r"(src_stepx_x4),   // %1
-    "+r"(dst_argb),       // %2
-    "+r"(dst_width),      // %3
-    "=&r"(src_stepx_x12)  // %4
-  :: "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3"
-  );
+  (void)src_stride;
+  asm volatile(
+      "lea       0x00(,%1,4),%1                  \n"
+      "lea       0x00(%1,%1,2),%4                \n"
+
+      LABELALIGN
+      "1:                                        \n"
+      "movd      (%0),%%xmm0                     \n"
+      "movd      0x00(%0,%1,1),%%xmm1            \n"
+      "punpckldq %%xmm1,%%xmm0                   \n"
+      "movd      0x00(%0,%1,2),%%xmm2            \n"
+      "movd      0x00(%0,%4,1),%%xmm3            \n"
+      "lea       0x00(%0,%1,4),%0                \n"
+      "punpckldq %%xmm3,%%xmm2                   \n"
+      "punpcklqdq %%xmm2,%%xmm0                  \n"
+      "movdqu    %%xmm0,(%2)                     \n"
+      "lea       0x10(%2),%2                     \n"
+      "sub       $0x4,%3                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),       // %0
+        "+r"(src_stepx_x4),   // %1
+        "+r"(dst_argb),       // %2
+        "+r"(dst_width),      // %3
+        "=&r"(src_stepx_x12)  // %4
+        ::"memory",
+        "cc", "xmm0", "xmm1", "xmm2", "xmm3");
 }
 
 // Blends four 2x2 to 4x1.
 // Alignment requirement: dst_argb 16 byte aligned.
-void ScaleARGBRowDownEvenBox_SSE2(const uint8* src_argb,
-                                  ptrdiff_t src_stride, int src_stepx,
-                                  uint8* dst_argb, int dst_width) {
+void ScaleARGBRowDownEvenBox_SSE2(const uint8_t* src_argb,
+                                  ptrdiff_t src_stride,
+                                  int src_stepx,
+                                  uint8_t* dst_argb,
+                                  int dst_width) {
   intptr_t src_stepx_x4 = (intptr_t)(src_stepx);
   intptr_t src_stepx_x12;
   intptr_t row1 = (intptr_t)(src_stride);
-  asm volatile (
-    "lea       " MEMLEA3(0x00,1,4) ",%1        \n"
-    "lea       " MEMLEA4(0x00,1,1,2) ",%4      \n"
-    "lea       " MEMLEA4(0x00,0,5,1) ",%5      \n"
+  asm volatile(
+      "lea       0x00(,%1,4),%1                  \n"
+      "lea       0x00(%1,%1,2),%4                \n"
+      "lea       0x00(%0,%5,1),%5                \n"
 
-    LABELALIGN
-  "1:                                          \n"
-    "movq      " MEMACCESS(0) ",%%xmm0         \n"
-    MEMOPREG(movhps,0x00,0,1,1,xmm0)           //  movhps    (%0,%1,1),%%xmm0
-    MEMOPREG(movq,0x00,0,1,2,xmm1)             //  movq      (%0,%1,2),%%xmm1
-    MEMOPREG(movhps,0x00,0,4,1,xmm1)           //  movhps    (%0,%4,1),%%xmm1
-    "lea       " MEMLEA4(0x00,0,1,4) ",%0      \n"
-    "movq      " MEMACCESS(5) ",%%xmm2         \n"
-    MEMOPREG(movhps,0x00,5,1,1,xmm2)           //  movhps    (%5,%1,1),%%xmm2
-    MEMOPREG(movq,0x00,5,1,2,xmm3)             //  movq      (%5,%1,2),%%xmm3
-    MEMOPREG(movhps,0x00,5,4,1,xmm3)           //  movhps    (%5,%4,1),%%xmm3
-    "lea       " MEMLEA4(0x00,5,1,4) ",%5      \n"
-    "pavgb     %%xmm2,%%xmm0                   \n"
-    "pavgb     %%xmm3,%%xmm1                   \n"
-    "movdqa    %%xmm0,%%xmm2                   \n"
-    "shufps    $0x88,%%xmm1,%%xmm0             \n"
-    "shufps    $0xdd,%%xmm1,%%xmm2             \n"
-    "pavgb     %%xmm2,%%xmm0                   \n"
-    "movdqu    %%xmm0," MEMACCESS(2) "         \n"
-    "lea       " MEMLEA(0x10,2) ",%2           \n"
-    "sub       $0x4,%3                         \n"
-    "jg        1b                              \n"
-  : "+r"(src_argb),        // %0
-    "+r"(src_stepx_x4),    // %1
-    "+r"(dst_argb),        // %2
-    "+rm"(dst_width),      // %3
-    "=&r"(src_stepx_x12),  // %4
-    "+r"(row1)             // %5
-  :: "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "movq      (%0),%%xmm0                     \n"
+      "movhps    0x00(%0,%1,1),%%xmm0            \n"
+      "movq      0x00(%0,%1,2),%%xmm1            \n"
+      "movhps    0x00(%0,%4,1),%%xmm1            \n"
+      "lea       0x00(%0,%1,4),%0                \n"
+      "movq      (%5),%%xmm2                     \n"
+      "movhps    0x00(%5,%1,1),%%xmm2            \n"
+      "movq      0x00(%5,%1,2),%%xmm3            \n"
+      "movhps    0x00(%5,%4,1),%%xmm3            \n"
+      "lea       0x00(%5,%1,4),%5                \n"
+      "pavgb     %%xmm2,%%xmm0                   \n"
+      "pavgb     %%xmm3,%%xmm1                   \n"
+      "movdqa    %%xmm0,%%xmm2                   \n"
+      "shufps    $0x88,%%xmm1,%%xmm0             \n"
+      "shufps    $0xdd,%%xmm1,%%xmm2             \n"
+      "pavgb     %%xmm2,%%xmm0                   \n"
+      "movdqu    %%xmm0,(%2)                     \n"
+      "lea       0x10(%2),%2                     \n"
+      "sub       $0x4,%3                         \n"
+      "jg        1b                              \n"
+      : "+r"(src_argb),        // %0
+        "+r"(src_stepx_x4),    // %1
+        "+r"(dst_argb),        // %2
+        "+rm"(dst_width),      // %3
+        "=&r"(src_stepx_x12),  // %4
+        "+r"(row1)             // %5
+        ::"memory",
+        "cc", "xmm0", "xmm1", "xmm2", "xmm3");
 }
 
-void ScaleARGBCols_SSE2(uint8* dst_argb, const uint8* src_argb,
-                        int dst_width, int x, int dx) {
+void ScaleARGBCols_SSE2(uint8_t* dst_argb,
+                        const uint8_t* src_argb,
+                        int dst_width,
+                        int x,
+                        int dx) {
   intptr_t x0, x1;
-  asm volatile (
-    "movd      %5,%%xmm2                       \n"
-    "movd      %6,%%xmm3                       \n"
-    "pshufd    $0x0,%%xmm2,%%xmm2              \n"
-    "pshufd    $0x11,%%xmm3,%%xmm0             \n"
-    "paddd     %%xmm0,%%xmm2                   \n"
-    "paddd     %%xmm3,%%xmm3                   \n"
-    "pshufd    $0x5,%%xmm3,%%xmm0              \n"
-    "paddd     %%xmm0,%%xmm2                   \n"
-    "paddd     %%xmm3,%%xmm3                   \n"
-    "pshufd    $0x0,%%xmm3,%%xmm3              \n"
-    "pextrw    $0x1,%%xmm2,%k0                 \n"
-    "pextrw    $0x3,%%xmm2,%k1                 \n"
-    "cmp       $0x0,%4                         \n"
-    "jl        99f                             \n"
-    "sub       $0x4,%4                         \n"
-    "jl        49f                             \n"
+  asm volatile(
+      "movd      %5,%%xmm2                       \n"
+      "movd      %6,%%xmm3                       \n"
+      "pshufd    $0x0,%%xmm2,%%xmm2              \n"
+      "pshufd    $0x11,%%xmm3,%%xmm0             \n"
+      "paddd     %%xmm0,%%xmm2                   \n"
+      "paddd     %%xmm3,%%xmm3                   \n"
+      "pshufd    $0x5,%%xmm3,%%xmm0              \n"
+      "paddd     %%xmm0,%%xmm2                   \n"
+      "paddd     %%xmm3,%%xmm3                   \n"
+      "pshufd    $0x0,%%xmm3,%%xmm3              \n"
+      "pextrw    $0x1,%%xmm2,%k0                 \n"
+      "pextrw    $0x3,%%xmm2,%k1                 \n"
+      "cmp       $0x0,%4                         \n"
+      "jl        99f                             \n"
+      "sub       $0x4,%4                         \n"
+      "jl        49f                             \n"
 
-    LABELALIGN
-  "40:                                         \n"
-    MEMOPREG(movd,0x00,3,0,4,xmm0)             //  movd      (%3,%0,4),%%xmm0
-    MEMOPREG(movd,0x00,3,1,4,xmm1)             //  movd      (%3,%1,4),%%xmm1
-    "pextrw    $0x5,%%xmm2,%k0                 \n"
-    "pextrw    $0x7,%%xmm2,%k1                 \n"
-    "paddd     %%xmm3,%%xmm2                   \n"
-    "punpckldq %%xmm1,%%xmm0                   \n"
-    MEMOPREG(movd,0x00,3,0,4,xmm1)             //  movd      (%3,%0,4),%%xmm1
-    MEMOPREG(movd,0x00,3,1,4,xmm4)             //  movd      (%3,%1,4),%%xmm4
-    "pextrw    $0x1,%%xmm2,%k0                 \n"
-    "pextrw    $0x3,%%xmm2,%k1                 \n"
-    "punpckldq %%xmm4,%%xmm1                   \n"
-    "punpcklqdq %%xmm1,%%xmm0                  \n"
-    "movdqu    %%xmm0," MEMACCESS(2) "         \n"
-    "lea       " MEMLEA(0x10,2) ",%2           \n"
-    "sub       $0x4,%4                         \n"
-    "jge       40b                             \n"
+      LABELALIGN
+      "40:                                       \n"
+      "movd      0x00(%3,%0,4),%%xmm0            \n"
+      "movd      0x00(%3,%1,4),%%xmm1            \n"
+      "pextrw    $0x5,%%xmm2,%k0                 \n"
+      "pextrw    $0x7,%%xmm2,%k1                 \n"
+      "paddd     %%xmm3,%%xmm2                   \n"
+      "punpckldq %%xmm1,%%xmm0                   \n"
+      "movd      0x00(%3,%0,4),%%xmm1            \n"
+      "movd      0x00(%3,%1,4),%%xmm4            \n"
+      "pextrw    $0x1,%%xmm2,%k0                 \n"
+      "pextrw    $0x3,%%xmm2,%k1                 \n"
+      "punpckldq %%xmm4,%%xmm1                   \n"
+      "punpcklqdq %%xmm1,%%xmm0                  \n"
+      "movdqu    %%xmm0,(%2)                     \n"
+      "lea       0x10(%2),%2                     \n"
+      "sub       $0x4,%4                         \n"
+      "jge       40b                             \n"
 
-  "49:                                         \n"
-    "test      $0x2,%4                         \n"
-    "je        29f                             \n"
-    MEMOPREG(movd,0x00,3,0,4,xmm0)             //  movd      (%3,%0,4),%%xmm0
-    MEMOPREG(movd,0x00,3,1,4,xmm1)             //  movd      (%3,%1,4),%%xmm1
-    "pextrw    $0x5,%%xmm2,%k0                 \n"
-    "punpckldq %%xmm1,%%xmm0                   \n"
-    "movq      %%xmm0," MEMACCESS(2) "         \n"
-    "lea       " MEMLEA(0x8,2) ",%2            \n"
-  "29:                                         \n"
-    "test      $0x1,%4                         \n"
-    "je        99f                             \n"
-    MEMOPREG(movd,0x00,3,0,4,xmm0)             //  movd      (%3,%0,4),%%xmm0
-    "movd      %%xmm0," MEMACCESS(2) "         \n"
-  "99:                                         \n"
-  : "=&a"(x0),         // %0
-    "=&d"(x1),         // %1
-    "+r"(dst_argb),    // %2
-    "+r"(src_argb),    // %3
-    "+r"(dst_width)    // %4
-  : "rm"(x),           // %5
-    "rm"(dx)           // %6
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4"
-  );
+      "49:                                       \n"
+      "test      $0x2,%4                         \n"
+      "je        29f                             \n"
+      "movd      0x00(%3,%0,4),%%xmm0            \n"
+      "movd      0x00(%3,%1,4),%%xmm1            \n"
+      "pextrw    $0x5,%%xmm2,%k0                 \n"
+      "punpckldq %%xmm1,%%xmm0                   \n"
+      "movq      %%xmm0,(%2)                     \n"
+      "lea       0x8(%2),%2                      \n"
+      "29:                                       \n"
+      "test      $0x1,%4                         \n"
+      "je        99f                             \n"
+      "movd      0x00(%3,%0,4),%%xmm0            \n"
+      "movd      %%xmm0,(%2)                     \n"
+      "99:                                       \n"
+      : "=&a"(x0),       // %0
+        "=&d"(x1),       // %1
+        "+r"(dst_argb),  // %2
+        "+r"(src_argb),  // %3
+        "+r"(dst_width)  // %4
+      : "rm"(x),         // %5
+        "rm"(dx)         // %6
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4");
 }
 
 // Reads 4 pixels, duplicates them and writes 8 pixels.
 // Alignment requirement: src_argb 16 byte aligned, dst_argb 16 byte aligned.
-void ScaleARGBColsUp2_SSE2(uint8* dst_argb, const uint8* src_argb,
-                           int dst_width, int x, int dx) {
-  asm volatile (
-    LABELALIGN
-  "1:                                          \n"
-    "movdqu    " MEMACCESS(1) ",%%xmm0         \n"
-    "lea       " MEMLEA(0x10,1) ",%1           \n"
-    "movdqa    %%xmm0,%%xmm1                   \n"
-    "punpckldq %%xmm0,%%xmm0                   \n"
-    "punpckhdq %%xmm1,%%xmm1                   \n"
-    "movdqu    %%xmm0," MEMACCESS(0) "         \n"
-    "movdqu    %%xmm1," MEMACCESS2(0x10,0) "   \n"
-    "lea       " MEMLEA(0x20,0) ",%0           \n"
-    "sub       $0x8,%2                         \n"
-    "jg        1b                              \n"
+void ScaleARGBColsUp2_SSE2(uint8_t* dst_argb,
+                           const uint8_t* src_argb,
+                           int dst_width,
+                           int x,
+                           int dx) {
+  (void)x;
+  (void)dx;
+  asm volatile(
 
-  : "+r"(dst_argb),    // %0
-    "+r"(src_argb),    // %1
-    "+r"(dst_width)    // %2
-  :: "memory", "cc", NACL_R14
-    "xmm0", "xmm1"
-  );
+      LABELALIGN
+      "1:                                        \n"
+      "movdqu    (%1),%%xmm0                     \n"
+      "lea       0x10(%1),%1                     \n"
+      "movdqa    %%xmm0,%%xmm1                   \n"
+      "punpckldq %%xmm0,%%xmm0                   \n"
+      "punpckhdq %%xmm1,%%xmm1                   \n"
+      "movdqu    %%xmm0,(%0)                     \n"
+      "movdqu    %%xmm1,0x10(%0)                 \n"
+      "lea       0x20(%0),%0                     \n"
+      "sub       $0x8,%2                         \n"
+      "jg        1b                              \n"
+
+      : "+r"(dst_argb),  // %0
+        "+r"(src_argb),  // %1
+        "+r"(dst_width)  // %2
+        ::"memory",
+        "cc", "xmm0", "xmm1");
 }
 
 // Shuffle table for arranging 2 pixels into pairs for pmaddubsw
-static uvec8 kShuffleColARGB = {
-  0u, 4u, 1u, 5u, 2u, 6u, 3u, 7u,  // bbggrraa 1st pixel
-  8u, 12u, 9u, 13u, 10u, 14u, 11u, 15u  // bbggrraa 2nd pixel
+static const uvec8 kShuffleColARGB = {
+    0u, 4u,  1u, 5u,  2u,  6u,  3u,  7u,  // bbggrraa 1st pixel
+    8u, 12u, 9u, 13u, 10u, 14u, 11u, 15u  // bbggrraa 2nd pixel
 };
 
 // Shuffle table for duplicating 2 fractions into 8 bytes each
-static uvec8 kShuffleFractions = {
-  0u, 0u, 0u, 0u, 0u, 0u, 0u, 0u, 4u, 4u, 4u, 4u, 4u, 4u, 4u, 4u,
+static const uvec8 kShuffleFractions = {
+    0u, 0u, 0u, 0u, 0u, 0u, 0u, 0u, 4u, 4u, 4u, 4u, 4u, 4u, 4u, 4u,
 };
 
 // Bilinear row filtering combines 4x2 -> 4x1. SSSE3 version
-void ScaleARGBFilterCols_SSSE3(uint8* dst_argb, const uint8* src_argb,
-                               int dst_width, int x, int dx) {
+void ScaleARGBFilterCols_SSSE3(uint8_t* dst_argb,
+                               const uint8_t* src_argb,
+                               int dst_width,
+                               int x,
+                               int dx) {
   intptr_t x0, x1;
-  asm volatile (
-    "movdqa    %0,%%xmm4                       \n"
-    "movdqa    %1,%%xmm5                       \n"
-  :
-  : "m"(kShuffleColARGB),  // %0
-    "m"(kShuffleFractions)  // %1
-  );
+  asm volatile(
+      "movdqa    %0,%%xmm4                       \n"
+      "movdqa    %1,%%xmm5                       \n"
+      :
+      : "m"(kShuffleColARGB),   // %0
+        "m"(kShuffleFractions)  // %1
+      );
 
-  asm volatile (
-    "movd      %5,%%xmm2                       \n"
-    "movd      %6,%%xmm3                       \n"
-    "pcmpeqb   %%xmm6,%%xmm6                   \n"
-    "psrlw     $0x9,%%xmm6                     \n"
-    "pextrw    $0x1,%%xmm2,%k3                 \n"
-    "sub       $0x2,%2                         \n"
-    "jl        29f                             \n"
-    "movdqa    %%xmm2,%%xmm0                   \n"
-    "paddd     %%xmm3,%%xmm0                   \n"
-    "punpckldq %%xmm0,%%xmm2                   \n"
-    "punpckldq %%xmm3,%%xmm3                   \n"
-    "paddd     %%xmm3,%%xmm3                   \n"
-    "pextrw    $0x3,%%xmm2,%k4                 \n"
+  asm volatile(
+      "movd      %5,%%xmm2                       \n"
+      "movd      %6,%%xmm3                       \n"
+      "pcmpeqb   %%xmm6,%%xmm6                   \n"
+      "psrlw     $0x9,%%xmm6                     \n"
+      "pextrw    $0x1,%%xmm2,%k3                 \n"
+      "sub       $0x2,%2                         \n"
+      "jl        29f                             \n"
+      "movdqa    %%xmm2,%%xmm0                   \n"
+      "paddd     %%xmm3,%%xmm0                   \n"
+      "punpckldq %%xmm0,%%xmm2                   \n"
+      "punpckldq %%xmm3,%%xmm3                   \n"
+      "paddd     %%xmm3,%%xmm3                   \n"
+      "pextrw    $0x3,%%xmm2,%k4                 \n"
 
-    LABELALIGN
-  "2:                                          \n"
-    "movdqa    %%xmm2,%%xmm1                   \n"
-    "paddd     %%xmm3,%%xmm2                   \n"
-    MEMOPREG(movq,0x00,1,3,4,xmm0)             //  movq      (%1,%3,4),%%xmm0
-    "psrlw     $0x9,%%xmm1                     \n"
-    MEMOPREG(movhps,0x00,1,4,4,xmm0)           //  movhps    (%1,%4,4),%%xmm0
-    "pshufb    %%xmm5,%%xmm1                   \n"
-    "pshufb    %%xmm4,%%xmm0                   \n"
-    "pxor      %%xmm6,%%xmm1                   \n"
-    "pmaddubsw %%xmm1,%%xmm0                   \n"
-    "psrlw     $0x7,%%xmm0                     \n"
-    "pextrw    $0x1,%%xmm2,%k3                 \n"
-    "pextrw    $0x3,%%xmm2,%k4                 \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
-    "movq      %%xmm0," MEMACCESS(0) "         \n"
-    "lea       " MEMLEA(0x8,0) ",%0            \n"
-    "sub       $0x2,%2                         \n"
-    "jge       2b                              \n"
+      LABELALIGN
+      "2:                                        \n"
+      "movdqa    %%xmm2,%%xmm1                   \n"
+      "paddd     %%xmm3,%%xmm2                   \n"
+      "movq      0x00(%1,%3,4),%%xmm0            \n"
+      "psrlw     $0x9,%%xmm1                     \n"
+      "movhps    0x00(%1,%4,4),%%xmm0            \n"
+      "pshufb    %%xmm5,%%xmm1                   \n"
+      "pshufb    %%xmm4,%%xmm0                   \n"
+      "pxor      %%xmm6,%%xmm1                   \n"
+      "pmaddubsw %%xmm1,%%xmm0                   \n"
+      "psrlw     $0x7,%%xmm0                     \n"
+      "pextrw    $0x1,%%xmm2,%k3                 \n"
+      "pextrw    $0x3,%%xmm2,%k4                 \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
+      "movq      %%xmm0,(%0)                     \n"
+      "lea       0x8(%0),%0                      \n"
+      "sub       $0x2,%2                         \n"
+      "jge       2b                              \n"
 
-    LABELALIGN
-  "29:                                         \n"
-    "add       $0x1,%2                         \n"
-    "jl        99f                             \n"
-    "psrlw     $0x9,%%xmm2                     \n"
-    MEMOPREG(movq,0x00,1,3,4,xmm0)             //  movq      (%1,%3,4),%%xmm0
-    "pshufb    %%xmm5,%%xmm2                   \n"
-    "pshufb    %%xmm4,%%xmm0                   \n"
-    "pxor      %%xmm6,%%xmm2                   \n"
-    "pmaddubsw %%xmm2,%%xmm0                   \n"
-    "psrlw     $0x7,%%xmm0                     \n"
-    "packuswb  %%xmm0,%%xmm0                   \n"
-    "movd      %%xmm0," MEMACCESS(0) "         \n"
+      LABELALIGN
+      "29:                                       \n"
+      "add       $0x1,%2                         \n"
+      "jl        99f                             \n"
+      "psrlw     $0x9,%%xmm2                     \n"
+      "movq      0x00(%1,%3,4),%%xmm0            \n"
+      "pshufb    %%xmm5,%%xmm2                   \n"
+      "pshufb    %%xmm4,%%xmm0                   \n"
+      "pxor      %%xmm6,%%xmm2                   \n"
+      "pmaddubsw %%xmm2,%%xmm0                   \n"
+      "psrlw     $0x7,%%xmm0                     \n"
+      "packuswb  %%xmm0,%%xmm0                   \n"
+      "movd      %%xmm0,(%0)                     \n"
 
-    LABELALIGN
-  "99:                                         \n"
-  : "+r"(dst_argb),    // %0
-    "+r"(src_argb),    // %1
-    "+rm"(dst_width),  // %2
-    "=&r"(x0),         // %3
-    "=&r"(x1)          // %4
-  : "rm"(x),           // %5
-    "rm"(dx)           // %6
-  : "memory", "cc", NACL_R14
-    "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6"
-  );
+      LABELALIGN "99:                            \n"  // clang-format error.
+
+      : "+r"(dst_argb),    // %0
+        "+r"(src_argb),    // %1
+        "+rm"(dst_width),  // %2
+        "=&r"(x0),         // %3
+        "=&r"(x1)          // %4
+      : "rm"(x),           // %5
+        "rm"(dx)           // %6
+      : "memory", "cc", "xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6");
 }
 
 // Divide num by div and return as 16.16 fixed point result.
 int FixedDiv_X86(int num, int div) {
-  asm volatile (
-    "cdq                                       \n"
-    "shld      $0x10,%%eax,%%edx               \n"
-    "shl       $0x10,%%eax                     \n"
-    "idiv      %1                              \n"
-    "mov       %0, %%eax                       \n"
-    : "+a"(num)  // %0
-    : "c"(div)   // %1
-    : "memory", "cc", "edx"
-  );
+  asm volatile(
+      "cdq                                       \n"
+      "shld      $0x10,%%eax,%%edx               \n"
+      "shl       $0x10,%%eax                     \n"
+      "idiv      %1                              \n"
+      "mov       %0, %%eax                       \n"
+      : "+a"(num)  // %0
+      : "c"(div)   // %1
+      : "memory", "cc", "edx");
   return num;
 }
 
 // Divide num - 1 by div - 1 and return as 16.16 fixed point result.
 int FixedDiv1_X86(int num, int div) {
-  asm volatile (
-    "cdq                                       \n"
-    "shld      $0x10,%%eax,%%edx               \n"
-    "shl       $0x10,%%eax                     \n"
-    "sub       $0x10001,%%eax                  \n"
-    "sbb       $0x0,%%edx                      \n"
-    "sub       $0x1,%1                         \n"
-    "idiv      %1                              \n"
-    "mov       %0, %%eax                       \n"
-    : "+a"(num)  // %0
-    : "c"(div)   // %1
-    : "memory", "cc", "edx"
-  );
+  asm volatile(
+      "cdq                                       \n"
+      "shld      $0x10,%%eax,%%edx               \n"
+      "shl       $0x10,%%eax                     \n"
+      "sub       $0x10001,%%eax                  \n"
+      "sbb       $0x0,%%edx                      \n"
+      "sub       $0x1,%1                         \n"
+      "idiv      %1                              \n"
+      "mov       %0, %%eax                       \n"
+      : "+a"(num)  // %0
+      : "c"(div)   // %1
+      : "memory", "cc", "edx");
   return num;
 }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_mips.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_mips.cc
deleted file mode 100644
index ae95307..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_mips.cc
+++ /dev/null
@@ -1,644 +0,0 @@
-/*
- *  Copyright 2012 The LibYuv Project Authors. All rights reserved.
- *
- *  Use of this source code is governed by a BSD-style license
- *  that can be found in the LICENSE file in the root of the source
- *  tree. An additional intellectual property rights grant can be found
- *  in the file PATENTS. All contributing project authors may
- *  be found in the AUTHORS file in the root of the source tree.
- */
-
-#include "libyuv/basic_types.h"
-#include "libyuv/row.h"
-
-#ifdef __cplusplus
-namespace libyuv {
-extern "C" {
-#endif
-
-// This module is for GCC MIPS DSPR2
-#if !defined(LIBYUV_DISABLE_MIPS) && \
-    defined(__mips_dsp) && (__mips_dsp_rev >= 2) && \
-    (_MIPS_SIM == _MIPS_SIM_ABI32)
-
-void ScaleRowDown2_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                         uint8* dst, int dst_width) {
-  __asm__ __volatile__(
-    ".set push                                     \n"
-    ".set noreorder                                \n"
-
-    "srl            $t9, %[dst_width], 4           \n"  // iterations -> by 16
-    "beqz           $t9, 2f                        \n"
-    " nop                                          \n"
-
-  "1:                                              \n"
-    "lw             $t0, 0(%[src_ptr])             \n"  // |3|2|1|0|
-    "lw             $t1, 4(%[src_ptr])             \n"  // |7|6|5|4|
-    "lw             $t2, 8(%[src_ptr])             \n"  // |11|10|9|8|
-    "lw             $t3, 12(%[src_ptr])            \n"  // |15|14|13|12|
-    "lw             $t4, 16(%[src_ptr])            \n"  // |19|18|17|16|
-    "lw             $t5, 20(%[src_ptr])            \n"  // |23|22|21|20|
-    "lw             $t6, 24(%[src_ptr])            \n"  // |27|26|25|24|
-    "lw             $t7, 28(%[src_ptr])            \n"  // |31|30|29|28|
-    // TODO(fbarchard): Use odd pixels instead of even.
-    "precr.qb.ph    $t8, $t1, $t0                  \n"  // |6|4|2|0|
-    "precr.qb.ph    $t0, $t3, $t2                  \n"  // |14|12|10|8|
-    "precr.qb.ph    $t1, $t5, $t4                  \n"  // |22|20|18|16|
-    "precr.qb.ph    $t2, $t7, $t6                  \n"  // |30|28|26|24|
-    "addiu          %[src_ptr], %[src_ptr], 32     \n"
-    "addiu          $t9, $t9, -1                   \n"
-    "sw             $t8, 0(%[dst])                 \n"
-    "sw             $t0, 4(%[dst])                 \n"
-    "sw             $t1, 8(%[dst])                 \n"
-    "sw             $t2, 12(%[dst])                \n"
-    "bgtz           $t9, 1b                        \n"
-    " addiu         %[dst], %[dst], 16             \n"
-
-  "2:                                              \n"
-    "andi           $t9, %[dst_width], 0xf         \n"  // residue
-    "beqz           $t9, 3f                        \n"
-    " nop                                          \n"
-
-  "21:                                             \n"
-    "lbu            $t0, 0(%[src_ptr])             \n"
-    "addiu          %[src_ptr], %[src_ptr], 2      \n"
-    "addiu          $t9, $t9, -1                   \n"
-    "sb             $t0, 0(%[dst])                 \n"
-    "bgtz           $t9, 21b                       \n"
-    " addiu         %[dst], %[dst], 1              \n"
-
-  "3:                                              \n"
-    ".set pop                                      \n"
-  : [src_ptr] "+r" (src_ptr),
-    [dst] "+r" (dst)
-  : [dst_width] "r" (dst_width)
-  : "t0", "t1", "t2", "t3", "t4", "t5",
-    "t6", "t7", "t8", "t9"
-  );
-}
-
-void ScaleRowDown2Box_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst, int dst_width) {
-  const uint8* t = src_ptr + src_stride;
-
-  __asm__ __volatile__ (
-    ".set push                                    \n"
-    ".set noreorder                               \n"
-
-    "srl            $t9, %[dst_width], 3          \n"  // iterations -> step 8
-    "bltz           $t9, 2f                       \n"
-    " nop                                         \n"
-
-  "1:                                             \n"
-    "lw             $t0, 0(%[src_ptr])            \n"  // |3|2|1|0|
-    "lw             $t1, 4(%[src_ptr])            \n"  // |7|6|5|4|
-    "lw             $t2, 8(%[src_ptr])            \n"  // |11|10|9|8|
-    "lw             $t3, 12(%[src_ptr])           \n"  // |15|14|13|12|
-    "lw             $t4, 0(%[t])                  \n"  // |19|18|17|16|
-    "lw             $t5, 4(%[t])                  \n"  // |23|22|21|20|
-    "lw             $t6, 8(%[t])                  \n"  // |27|26|25|24|
-    "lw             $t7, 12(%[t])                 \n"  // |31|30|29|28|
-    "addiu          $t9, $t9, -1                  \n"
-    "srl            $t8, $t0, 16                  \n"  // |X|X|3|2|
-    "ins            $t0, $t4, 16, 16              \n"  // |17|16|1|0|
-    "ins            $t4, $t8, 0, 16               \n"  // |19|18|3|2|
-    "raddu.w.qb     $t0, $t0                      \n"  // |17+16+1+0|
-    "raddu.w.qb     $t4, $t4                      \n"  // |19+18+3+2|
-    "shra_r.w       $t0, $t0, 2                   \n"  // |t0+2|>>2
-    "shra_r.w       $t4, $t4, 2                   \n"  // |t4+2|>>2
-    "srl            $t8, $t1, 16                  \n"  // |X|X|7|6|
-    "ins            $t1, $t5, 16, 16              \n"  // |21|20|5|4|
-    "ins            $t5, $t8, 0, 16               \n"  // |22|23|7|6|
-    "raddu.w.qb     $t1, $t1                      \n"  // |21+20+5+4|
-    "raddu.w.qb     $t5, $t5                      \n"  // |23+22+7+6|
-    "shra_r.w       $t1, $t1, 2                   \n"  // |t1+2|>>2
-    "shra_r.w       $t5, $t5, 2                   \n"  // |t5+2|>>2
-    "srl            $t8, $t2, 16                  \n"  // |X|X|11|10|
-    "ins            $t2, $t6, 16, 16              \n"  // |25|24|9|8|
-    "ins            $t6, $t8, 0, 16               \n"  // |27|26|11|10|
-    "raddu.w.qb     $t2, $t2                      \n"  // |25+24+9+8|
-    "raddu.w.qb     $t6, $t6                      \n"  // |27+26+11+10|
-    "shra_r.w       $t2, $t2, 2                   \n"  // |t2+2|>>2
-    "shra_r.w       $t6, $t6, 2                   \n"  // |t5+2|>>2
-    "srl            $t8, $t3, 16                  \n"  // |X|X|15|14|
-    "ins            $t3, $t7, 16, 16              \n"  // |29|28|13|12|
-    "ins            $t7, $t8, 0, 16               \n"  // |31|30|15|14|
-    "raddu.w.qb     $t3, $t3                      \n"  // |29+28+13+12|
-    "raddu.w.qb     $t7, $t7                      \n"  // |31+30+15+14|
-    "shra_r.w       $t3, $t3, 2                   \n"  // |t3+2|>>2
-    "shra_r.w       $t7, $t7, 2                   \n"  // |t7+2|>>2
-    "addiu          %[src_ptr], %[src_ptr], 16    \n"
-    "addiu          %[t], %[t], 16                \n"
-    "sb             $t0, 0(%[dst])                \n"
-    "sb             $t4, 1(%[dst])                \n"
-    "sb             $t1, 2(%[dst])                \n"
-    "sb             $t5, 3(%[dst])                \n"
-    "sb             $t2, 4(%[dst])                \n"
-    "sb             $t6, 5(%[dst])                \n"
-    "sb             $t3, 6(%[dst])                \n"
-    "sb             $t7, 7(%[dst])                \n"
-    "bgtz           $t9, 1b                       \n"
-    " addiu         %[dst], %[dst], 8             \n"
-
-  "2:                                             \n"
-    "andi           $t9, %[dst_width], 0x7        \n"  // x = residue
-    "beqz           $t9, 3f                       \n"
-    " nop                                         \n"
-
-    "21:                                          \n"
-    "lwr            $t1, 0(%[src_ptr])            \n"
-    "lwl            $t1, 3(%[src_ptr])            \n"
-    "lwr            $t2, 0(%[t])                  \n"
-    "lwl            $t2, 3(%[t])                  \n"
-    "srl            $t8, $t1, 16                  \n"
-    "ins            $t1, $t2, 16, 16              \n"
-    "ins            $t2, $t8, 0, 16               \n"
-    "raddu.w.qb     $t1, $t1                      \n"
-    "raddu.w.qb     $t2, $t2                      \n"
-    "shra_r.w       $t1, $t1, 2                   \n"
-    "shra_r.w       $t2, $t2, 2                   \n"
-    "sb             $t1, 0(%[dst])                \n"
-    "sb             $t2, 1(%[dst])                \n"
-    "addiu          %[src_ptr], %[src_ptr], 4     \n"
-    "addiu          $t9, $t9, -2                  \n"
-    "addiu          %[t], %[t], 4                 \n"
-    "bgtz           $t9, 21b                      \n"
-    " addiu         %[dst], %[dst], 2             \n"
-
-  "3:                                             \n"
-    ".set pop                                     \n"
-
-  : [src_ptr] "+r" (src_ptr),
-    [dst] "+r" (dst), [t] "+r" (t)
-  : [dst_width] "r" (dst_width)
-  : "t0", "t1", "t2", "t3", "t4", "t5",
-    "t6", "t7", "t8", "t9"
-  );
-}
-
-void ScaleRowDown4_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                         uint8* dst, int dst_width) {
-  __asm__ __volatile__ (
-      ".set push                                    \n"
-      ".set noreorder                               \n"
-
-      "srl            $t9, %[dst_width], 3          \n"
-      "beqz           $t9, 2f                       \n"
-      " nop                                         \n"
-
-     "1:                                            \n"
-      "lw             $t1, 0(%[src_ptr])            \n"  // |3|2|1|0|
-      "lw             $t2, 4(%[src_ptr])            \n"  // |7|6|5|4|
-      "lw             $t3, 8(%[src_ptr])            \n"  // |11|10|9|8|
-      "lw             $t4, 12(%[src_ptr])           \n"  // |15|14|13|12|
-      "lw             $t5, 16(%[src_ptr])           \n"  // |19|18|17|16|
-      "lw             $t6, 20(%[src_ptr])           \n"  // |23|22|21|20|
-      "lw             $t7, 24(%[src_ptr])           \n"  // |27|26|25|24|
-      "lw             $t8, 28(%[src_ptr])           \n"  // |31|30|29|28|
-      "precr.qb.ph    $t1, $t2, $t1                 \n"  // |6|4|2|0|
-      "precr.qb.ph    $t2, $t4, $t3                 \n"  // |14|12|10|8|
-      "precr.qb.ph    $t5, $t6, $t5                 \n"  // |22|20|18|16|
-      "precr.qb.ph    $t6, $t8, $t7                 \n"  // |30|28|26|24|
-      "precr.qb.ph    $t1, $t2, $t1                 \n"  // |12|8|4|0|
-      "precr.qb.ph    $t5, $t6, $t5                 \n"  // |28|24|20|16|
-      "addiu          %[src_ptr], %[src_ptr], 32    \n"
-      "addiu          $t9, $t9, -1                  \n"
-      "sw             $t1, 0(%[dst])                \n"
-      "sw             $t5, 4(%[dst])                \n"
-      "bgtz           $t9, 1b                       \n"
-      " addiu         %[dst], %[dst], 8             \n"
-
-    "2:                                             \n"
-      "andi           $t9, %[dst_width], 7          \n"  // residue
-      "beqz           $t9, 3f                       \n"
-      " nop                                         \n"
-
-    "21:                                            \n"
-      "lbu            $t1, 0(%[src_ptr])            \n"
-      "addiu          %[src_ptr], %[src_ptr], 4     \n"
-      "addiu          $t9, $t9, -1                  \n"
-      "sb             $t1, 0(%[dst])                \n"
-      "bgtz           $t9, 21b                      \n"
-      " addiu         %[dst], %[dst], 1             \n"
-
-    "3:                                             \n"
-      ".set pop                                     \n"
-      : [src_ptr] "+r" (src_ptr),
-        [dst] "+r" (dst)
-      : [dst_width] "r" (dst_width)
-      : "t1", "t2", "t3", "t4", "t5",
-        "t6", "t7", "t8", "t9"
-  );
-}
-
-void ScaleRowDown4Box_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst, int dst_width) {
-  intptr_t stride = src_stride;
-  const uint8* s1 = src_ptr + stride;
-  const uint8* s2 = s1 + stride;
-  const uint8* s3 = s2 + stride;
-
-  __asm__ __volatile__ (
-      ".set push                                  \n"
-      ".set noreorder                             \n"
-
-      "srl           $t9, %[dst_width], 1         \n"
-      "andi          $t8, %[dst_width], 1         \n"
-
-     "1:                                          \n"
-      "lw            $t0, 0(%[src_ptr])           \n"  // |3|2|1|0|
-      "lw            $t1, 0(%[s1])                \n"  // |7|6|5|4|
-      "lw            $t2, 0(%[s2])                \n"  // |11|10|9|8|
-      "lw            $t3, 0(%[s3])                \n"  // |15|14|13|12|
-      "lw            $t4, 4(%[src_ptr])           \n"  // |19|18|17|16|
-      "lw            $t5, 4(%[s1])                \n"  // |23|22|21|20|
-      "lw            $t6, 4(%[s2])                \n"  // |27|26|25|24|
-      "lw            $t7, 4(%[s3])                \n"  // |31|30|29|28|
-      "raddu.w.qb    $t0, $t0                     \n"  // |3 + 2 + 1 + 0|
-      "raddu.w.qb    $t1, $t1                     \n"  // |7 + 6 + 5 + 4|
-      "raddu.w.qb    $t2, $t2                     \n"  // |11 + 10 + 9 + 8|
-      "raddu.w.qb    $t3, $t3                     \n"  // |15 + 14 + 13 + 12|
-      "raddu.w.qb    $t4, $t4                     \n"  // |19 + 18 + 17 + 16|
-      "raddu.w.qb    $t5, $t5                     \n"  // |23 + 22 + 21 + 20|
-      "raddu.w.qb    $t6, $t6                     \n"  // |27 + 26 + 25 + 24|
-      "raddu.w.qb    $t7, $t7                     \n"  // |31 + 30 + 29 + 28|
-      "add           $t0, $t0, $t1                \n"
-      "add           $t1, $t2, $t3                \n"
-      "add           $t0, $t0, $t1                \n"
-      "add           $t4, $t4, $t5                \n"
-      "add           $t6, $t6, $t7                \n"
-      "add           $t4, $t4, $t6                \n"
-      "shra_r.w      $t0, $t0, 4                  \n"
-      "shra_r.w      $t4, $t4, 4                  \n"
-      "sb            $t0, 0(%[dst])               \n"
-      "sb            $t4, 1(%[dst])               \n"
-      "addiu         %[src_ptr], %[src_ptr], 8    \n"
-      "addiu         %[s1], %[s1], 8              \n"
-      "addiu         %[s2], %[s2], 8              \n"
-      "addiu         %[s3], %[s3], 8              \n"
-      "addiu         $t9, $t9, -1                 \n"
-      "bgtz          $t9, 1b                      \n"
-      " addiu        %[dst], %[dst], 2            \n"
-      "beqz          $t8, 2f                      \n"
-      " nop                                       \n"
-
-      "lw            $t0, 0(%[src_ptr])           \n"  // |3|2|1|0|
-      "lw            $t1, 0(%[s1])                \n"  // |7|6|5|4|
-      "lw            $t2, 0(%[s2])                \n"  // |11|10|9|8|
-      "lw            $t3, 0(%[s3])                \n"  // |15|14|13|12|
-      "raddu.w.qb    $t0, $t0                     \n"  // |3 + 2 + 1 + 0|
-      "raddu.w.qb    $t1, $t1                     \n"  // |7 + 6 + 5 + 4|
-      "raddu.w.qb    $t2, $t2                     \n"  // |11 + 10 + 9 + 8|
-      "raddu.w.qb    $t3, $t3                     \n"  // |15 + 14 + 13 + 12|
-      "add           $t0, $t0, $t1                \n"
-      "add           $t1, $t2, $t3                \n"
-      "add           $t0, $t0, $t1                \n"
-      "shra_r.w      $t0, $t0, 4                  \n"
-      "sb            $t0, 0(%[dst])               \n"
-
-      "2:                                         \n"
-      ".set pop                                   \n"
-
-      : [src_ptr] "+r" (src_ptr),
-        [dst] "+r" (dst),
-        [s1] "+r" (s1),
-        [s2] "+r" (s2),
-        [s3] "+r" (s3)
-      : [dst_width] "r" (dst_width)
-      : "t0", "t1", "t2", "t3", "t4", "t5",
-        "t6","t7", "t8", "t9"
-  );
-}
-
-void ScaleRowDown34_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                          uint8* dst, int dst_width) {
-  __asm__ __volatile__ (
-      ".set push                                          \n"
-      ".set noreorder                                     \n"
-    "1:                                                   \n"
-      "lw              $t1, 0(%[src_ptr])                 \n"  // |3|2|1|0|
-      "lw              $t2, 4(%[src_ptr])                 \n"  // |7|6|5|4|
-      "lw              $t3, 8(%[src_ptr])                 \n"  // |11|10|9|8|
-      "lw              $t4, 12(%[src_ptr])                \n"  // |15|14|13|12|
-      "lw              $t5, 16(%[src_ptr])                \n"  // |19|18|17|16|
-      "lw              $t6, 20(%[src_ptr])                \n"  // |23|22|21|20|
-      "lw              $t7, 24(%[src_ptr])                \n"  // |27|26|25|24|
-      "lw              $t8, 28(%[src_ptr])                \n"  // |31|30|29|28|
-      "precrq.qb.ph    $t0, $t2, $t4                      \n"  // |7|5|15|13|
-      "precrq.qb.ph    $t9, $t6, $t8                      \n"  // |23|21|31|30|
-      "addiu           %[dst_width], %[dst_width], -24    \n"
-      "ins             $t1, $t1, 8, 16                    \n"  // |3|1|0|X|
-      "ins             $t4, $t0, 8, 16                    \n"  // |X|15|13|12|
-      "ins             $t5, $t5, 8, 16                    \n"  // |19|17|16|X|
-      "ins             $t8, $t9, 8, 16                    \n"  // |X|31|29|28|
-      "addiu           %[src_ptr], %[src_ptr], 32         \n"
-      "packrl.ph       $t0, $t3, $t0                      \n"  // |9|8|7|5|
-      "packrl.ph       $t9, $t7, $t9                      \n"  // |25|24|23|21|
-      "prepend         $t1, $t2, 8                        \n"  // |4|3|1|0|
-      "prepend         $t3, $t4, 24                       \n"  // |15|13|12|11|
-      "prepend         $t5, $t6, 8                        \n"  // |20|19|17|16|
-      "prepend         $t7, $t8, 24                       \n"  // |31|29|28|27|
-      "sw              $t1, 0(%[dst])                     \n"
-      "sw              $t0, 4(%[dst])                     \n"
-      "sw              $t3, 8(%[dst])                     \n"
-      "sw              $t5, 12(%[dst])                    \n"
-      "sw              $t9, 16(%[dst])                    \n"
-      "sw              $t7, 20(%[dst])                    \n"
-      "bnez            %[dst_width], 1b                   \n"
-      " addiu          %[dst], %[dst], 24                 \n"
-      ".set pop                                           \n"
-      : [src_ptr] "+r" (src_ptr),
-        [dst] "+r" (dst),
-        [dst_width] "+r" (dst_width)
-      :
-      : "t0", "t1", "t2", "t3", "t4", "t5",
-        "t6","t7", "t8", "t9"
-  );
-}
-
-void ScaleRowDown34_0_Box_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                                uint8* d, int dst_width) {
-  __asm__ __volatile__ (
-      ".set push                                         \n"
-      ".set noreorder                                    \n"
-      "repl.ph           $t3, 3                          \n"  // 0x00030003
-
-    "1:                                                  \n"
-      "lw                $t0, 0(%[src_ptr])              \n"  // |S3|S2|S1|S0|
-      "lwx               $t1, %[src_stride](%[src_ptr])  \n"  // |T3|T2|T1|T0|
-      "rotr              $t2, $t0, 8                     \n"  // |S0|S3|S2|S1|
-      "rotr              $t6, $t1, 8                     \n"  // |T0|T3|T2|T1|
-      "muleu_s.ph.qbl    $t4, $t2, $t3                   \n"  // |S0*3|S3*3|
-      "muleu_s.ph.qbl    $t5, $t6, $t3                   \n"  // |T0*3|T3*3|
-      "andi              $t0, $t2, 0xFFFF                \n"  // |0|0|S2|S1|
-      "andi              $t1, $t6, 0xFFFF                \n"  // |0|0|T2|T1|
-      "raddu.w.qb        $t0, $t0                        \n"
-      "raddu.w.qb        $t1, $t1                        \n"
-      "shra_r.w          $t0, $t0, 1                     \n"
-      "shra_r.w          $t1, $t1, 1                     \n"
-      "preceu.ph.qbr     $t2, $t2                        \n"  // |0|S2|0|S1|
-      "preceu.ph.qbr     $t6, $t6                        \n"  // |0|T2|0|T1|
-      "rotr              $t2, $t2, 16                    \n"  // |0|S1|0|S2|
-      "rotr              $t6, $t6, 16                    \n"  // |0|T1|0|T2|
-      "addu.ph           $t2, $t2, $t4                   \n"
-      "addu.ph           $t6, $t6, $t5                   \n"
-      "sll               $t5, $t0, 1                     \n"
-      "add               $t0, $t5, $t0                   \n"
-      "shra_r.ph         $t2, $t2, 2                     \n"
-      "shra_r.ph         $t6, $t6, 2                     \n"
-      "shll.ph           $t4, $t2, 1                     \n"
-      "addq.ph           $t4, $t4, $t2                   \n"
-      "addu              $t0, $t0, $t1                   \n"
-      "addiu             %[src_ptr], %[src_ptr], 4       \n"
-      "shra_r.w          $t0, $t0, 2                     \n"
-      "addu.ph           $t6, $t6, $t4                   \n"
-      "shra_r.ph         $t6, $t6, 2                     \n"
-      "srl               $t1, $t6, 16                    \n"
-      "addiu             %[dst_width], %[dst_width], -3  \n"
-      "sb                $t1, 0(%[d])                    \n"
-      "sb                $t0, 1(%[d])                    \n"
-      "sb                $t6, 2(%[d])                    \n"
-      "bgtz              %[dst_width], 1b                \n"
-      " addiu            %[d], %[d], 3                   \n"
-    "3:                                                  \n"
-      ".set pop                                          \n"
-      : [src_ptr] "+r" (src_ptr),
-        [src_stride] "+r" (src_stride),
-        [d] "+r" (d),
-        [dst_width] "+r" (dst_width)
-      :
-      : "t0", "t1", "t2", "t3",
-        "t4", "t5", "t6"
-  );
-}
-
-void ScaleRowDown34_1_Box_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                                uint8* d, int dst_width) {
-  __asm__ __volatile__ (
-      ".set push                                           \n"
-      ".set noreorder                                      \n"
-      "repl.ph           $t2, 3                            \n"  // 0x00030003
-
-    "1:                                                    \n"
-      "lw                $t0, 0(%[src_ptr])                \n"  // |S3|S2|S1|S0|
-      "lwx               $t1, %[src_stride](%[src_ptr])    \n"  // |T3|T2|T1|T0|
-      "rotr              $t4, $t0, 8                       \n"  // |S0|S3|S2|S1|
-      "rotr              $t6, $t1, 8                       \n"  // |T0|T3|T2|T1|
-      "muleu_s.ph.qbl    $t3, $t4, $t2                     \n"  // |S0*3|S3*3|
-      "muleu_s.ph.qbl    $t5, $t6, $t2                     \n"  // |T0*3|T3*3|
-      "andi              $t0, $t4, 0xFFFF                  \n"  // |0|0|S2|S1|
-      "andi              $t1, $t6, 0xFFFF                  \n"  // |0|0|T2|T1|
-      "raddu.w.qb        $t0, $t0                          \n"
-      "raddu.w.qb        $t1, $t1                          \n"
-      "shra_r.w          $t0, $t0, 1                       \n"
-      "shra_r.w          $t1, $t1, 1                       \n"
-      "preceu.ph.qbr     $t4, $t4                          \n"  // |0|S2|0|S1|
-      "preceu.ph.qbr     $t6, $t6                          \n"  // |0|T2|0|T1|
-      "rotr              $t4, $t4, 16                      \n"  // |0|S1|0|S2|
-      "rotr              $t6, $t6, 16                      \n"  // |0|T1|0|T2|
-      "addu.ph           $t4, $t4, $t3                     \n"
-      "addu.ph           $t6, $t6, $t5                     \n"
-      "shra_r.ph         $t6, $t6, 2                       \n"
-      "shra_r.ph         $t4, $t4, 2                       \n"
-      "addu.ph           $t6, $t6, $t4                     \n"
-      "addiu             %[src_ptr], %[src_ptr], 4         \n"
-      "shra_r.ph         $t6, $t6, 1                       \n"
-      "addu              $t0, $t0, $t1                     \n"
-      "addiu             %[dst_width], %[dst_width], -3    \n"
-      "shra_r.w          $t0, $t0, 1                       \n"
-      "srl               $t1, $t6, 16                      \n"
-      "sb                $t1, 0(%[d])                      \n"
-      "sb                $t0, 1(%[d])                      \n"
-      "sb                $t6, 2(%[d])                      \n"
-      "bgtz              %[dst_width], 1b                  \n"
-      " addiu            %[d], %[d], 3                     \n"
-    "3:                                                    \n"
-      ".set pop                                            \n"
-      : [src_ptr] "+r" (src_ptr),
-        [src_stride] "+r" (src_stride),
-        [d] "+r" (d),
-        [dst_width] "+r" (dst_width)
-      :
-      : "t0", "t1", "t2", "t3",
-        "t4", "t5", "t6"
-  );
-}
-
-void ScaleRowDown38_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                          uint8* dst, int dst_width) {
-  __asm__ __volatile__ (
-      ".set push                                     \n"
-      ".set noreorder                                \n"
-
-    "1:                                              \n"
-      "lw         $t0, 0(%[src_ptr])                 \n"  // |3|2|1|0|
-      "lw         $t1, 4(%[src_ptr])                 \n"  // |7|6|5|4|
-      "lw         $t2, 8(%[src_ptr])                 \n"  // |11|10|9|8|
-      "lw         $t3, 12(%[src_ptr])                \n"  // |15|14|13|12|
-      "lw         $t4, 16(%[src_ptr])                \n"  // |19|18|17|16|
-      "lw         $t5, 20(%[src_ptr])                \n"  // |23|22|21|20|
-      "lw         $t6, 24(%[src_ptr])                \n"  // |27|26|25|24|
-      "lw         $t7, 28(%[src_ptr])                \n"  // |31|30|29|28|
-      "wsbh       $t0, $t0                           \n"  // |2|3|0|1|
-      "wsbh       $t6, $t6                           \n"  // |26|27|24|25|
-      "srl        $t0, $t0, 8                        \n"  // |X|2|3|0|
-      "srl        $t3, $t3, 16                       \n"  // |X|X|15|14|
-      "srl        $t5, $t5, 16                       \n"  // |X|X|23|22|
-      "srl        $t7, $t7, 16                       \n"  // |X|X|31|30|
-      "ins        $t1, $t2, 24, 8                    \n"  // |8|6|5|4|
-      "ins        $t6, $t5, 0, 8                     \n"  // |26|27|24|22|
-      "ins        $t1, $t0, 0, 16                    \n"  // |8|6|3|0|
-      "ins        $t6, $t7, 24, 8                    \n"  // |30|27|24|22|
-      "prepend    $t2, $t3, 24                       \n"  // |X|15|14|11|
-      "ins        $t4, $t4, 16, 8                    \n"  // |19|16|17|X|
-      "ins        $t4, $t2, 0, 16                    \n"  // |19|16|14|11|
-      "addiu      %[src_ptr], %[src_ptr], 32         \n"
-      "addiu      %[dst_width], %[dst_width], -12    \n"
-      "addiu      $t8,%[dst_width], -12              \n"
-      "sw         $t1, 0(%[dst])                     \n"
-      "sw         $t4, 4(%[dst])                     \n"
-      "sw         $t6, 8(%[dst])                     \n"
-      "bgez       $t8, 1b                            \n"
-      " addiu     %[dst], %[dst], 12                 \n"
-      ".set pop                                      \n"
-      : [src_ptr] "+r" (src_ptr),
-        [dst] "+r" (dst),
-        [dst_width] "+r" (dst_width)
-      :
-      : "t0", "t1", "t2", "t3", "t4",
-        "t5", "t6", "t7", "t8"
-  );
-}
-
-void ScaleRowDown38_2_Box_DSPR2(const uint8* src_ptr, ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width) {
-  intptr_t stride = src_stride;
-  const uint8* t = src_ptr + stride;
-  const int c = 0x2AAA;
-
-  __asm__ __volatile__ (
-      ".set push                                         \n"
-      ".set noreorder                                    \n"
-
-    "1:                                                  \n"
-      "lw              $t0, 0(%[src_ptr])                \n"  // |S3|S2|S1|S0|
-      "lw              $t1, 4(%[src_ptr])                \n"  // |S7|S6|S5|S4|
-      "lw              $t2, 0(%[t])                      \n"  // |T3|T2|T1|T0|
-      "lw              $t3, 4(%[t])                      \n"  // |T7|T6|T5|T4|
-      "rotr            $t1, $t1, 16                      \n"  // |S5|S4|S7|S6|
-      "packrl.ph       $t4, $t1, $t3                     \n"  // |S7|S6|T7|T6|
-      "packrl.ph       $t5, $t3, $t1                     \n"  // |T5|T4|S5|S4|
-      "raddu.w.qb      $t4, $t4                          \n"  // S7+S6+T7+T6
-      "raddu.w.qb      $t5, $t5                          \n"  // T5+T4+S5+S4
-      "precrq.qb.ph    $t6, $t0, $t2                     \n"  // |S3|S1|T3|T1|
-      "precrq.qb.ph    $t6, $t6, $t6                     \n"  // |S3|T3|S3|T3|
-      "srl             $t4, $t4, 2                       \n"  // t4 / 4
-      "srl             $t6, $t6, 16                      \n"  // |0|0|S3|T3|
-      "raddu.w.qb      $t6, $t6                          \n"  // 0+0+S3+T3
-      "addu            $t6, $t5, $t6                     \n"
-      "mul             $t6, $t6, %[c]                    \n"  // t6 * 0x2AAA
-      "sll             $t0, $t0, 8                       \n"  // |S2|S1|S0|0|
-      "sll             $t2, $t2, 8                       \n"  // |T2|T1|T0|0|
-      "raddu.w.qb      $t0, $t0                          \n"  // S2+S1+S0+0
-      "raddu.w.qb      $t2, $t2                          \n"  // T2+T1+T0+0
-      "addu            $t0, $t0, $t2                     \n"
-      "mul             $t0, $t0, %[c]                    \n"  // t0 * 0x2AAA
-      "addiu           %[src_ptr], %[src_ptr], 8         \n"
-      "addiu           %[t], %[t], 8                     \n"
-      "addiu           %[dst_width], %[dst_width], -3    \n"
-      "addiu           %[dst_ptr], %[dst_ptr], 3         \n"
-      "srl             $t6, $t6, 16                      \n"
-      "srl             $t0, $t0, 16                      \n"
-      "sb              $t4, -1(%[dst_ptr])               \n"
-      "sb              $t6, -2(%[dst_ptr])               \n"
-      "bgtz            %[dst_width], 1b                  \n"
-      " sb             $t0, -3(%[dst_ptr])               \n"
-      ".set pop                                          \n"
-      : [src_ptr] "+r" (src_ptr),
-        [dst_ptr] "+r" (dst_ptr),
-        [t] "+r" (t),
-        [dst_width] "+r" (dst_width)
-      : [c] "r" (c)
-      : "t0", "t1", "t2", "t3", "t4", "t5", "t6"
-  );
-}
-
-void ScaleRowDown38_3_Box_DSPR2(const uint8* src_ptr,
-                                ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width) {
-  intptr_t stride = src_stride;
-  const uint8* s1 = src_ptr + stride;
-  stride += stride;
-  const uint8* s2 = src_ptr + stride;
-  const int c1 = 0x1C71;
-  const int c2 = 0x2AAA;
-
-  __asm__ __volatile__ (
-      ".set push                                         \n"
-      ".set noreorder                                    \n"
-
-    "1:                                                  \n"
-      "lw              $t0, 0(%[src_ptr])                \n"  // |S3|S2|S1|S0|
-      "lw              $t1, 4(%[src_ptr])                \n"  // |S7|S6|S5|S4|
-      "lw              $t2, 0(%[s1])                     \n"  // |T3|T2|T1|T0|
-      "lw              $t3, 4(%[s1])                     \n"  // |T7|T6|T5|T4|
-      "lw              $t4, 0(%[s2])                     \n"  // |R3|R2|R1|R0|
-      "lw              $t5, 4(%[s2])                     \n"  // |R7|R6|R5|R4|
-      "rotr            $t1, $t1, 16                      \n"  // |S5|S4|S7|S6|
-      "packrl.ph       $t6, $t1, $t3                     \n"  // |S7|S6|T7|T6|
-      "raddu.w.qb      $t6, $t6                          \n"  // S7+S6+T7+T6
-      "packrl.ph       $t7, $t3, $t1                     \n"  // |T5|T4|S5|S4|
-      "raddu.w.qb      $t7, $t7                          \n"  // T5+T4+S5+S4
-      "sll             $t8, $t5, 16                      \n"  // |R5|R4|0|0|
-      "raddu.w.qb      $t8, $t8                          \n"  // R5+R4
-      "addu            $t7, $t7, $t8                     \n"
-      "srl             $t8, $t5, 16                      \n"  // |0|0|R7|R6|
-      "raddu.w.qb      $t8, $t8                          \n"  // R7 + R6
-      "addu            $t6, $t6, $t8                     \n"
-      "mul             $t6, $t6, %[c2]                   \n"  // t6 * 0x2AAA
-      "precrq.qb.ph    $t8, $t0, $t2                     \n"  // |S3|S1|T3|T1|
-      "precrq.qb.ph    $t8, $t8, $t4                     \n"  // |S3|T3|R3|R1|
-      "srl             $t8, $t8, 8                       \n"  // |0|S3|T3|R3|
-      "raddu.w.qb      $t8, $t8                          \n"  // S3 + T3 + R3
-      "addu            $t7, $t7, $t8                     \n"
-      "mul             $t7, $t7, %[c1]                   \n"  // t7 * 0x1C71
-      "sll             $t0, $t0, 8                       \n"  // |S2|S1|S0|0|
-      "sll             $t2, $t2, 8                       \n"  // |T2|T1|T0|0|
-      "sll             $t4, $t4, 8                       \n"  // |R2|R1|R0|0|
-      "raddu.w.qb      $t0, $t0                          \n"
-      "raddu.w.qb      $t2, $t2                          \n"
-      "raddu.w.qb      $t4, $t4                          \n"
-      "addu            $t0, $t0, $t2                     \n"
-      "addu            $t0, $t0, $t4                     \n"
-      "mul             $t0, $t0, %[c1]                   \n"  // t0 * 0x1C71
-      "addiu           %[src_ptr], %[src_ptr], 8         \n"
-      "addiu           %[s1], %[s1], 8                   \n"
-      "addiu           %[s2], %[s2], 8                   \n"
-      "addiu           %[dst_width], %[dst_width], -3    \n"
-      "addiu           %[dst_ptr], %[dst_ptr], 3         \n"
-      "srl             $t6, $t6, 16                      \n"
-      "srl             $t7, $t7, 16                      \n"
-      "srl             $t0, $t0, 16                      \n"
-      "sb              $t6, -1(%[dst_ptr])               \n"
-      "sb              $t7, -2(%[dst_ptr])               \n"
-      "bgtz            %[dst_width], 1b                  \n"
-      " sb             $t0, -3(%[dst_ptr])               \n"
-      ".set pop                                          \n"
-      : [src_ptr] "+r" (src_ptr),
-        [dst_ptr] "+r" (dst_ptr),
-        [s1] "+r" (s1),
-        [s2] "+r" (s2),
-        [dst_width] "+r" (dst_width)
-      : [c1] "r" (c1), [c2] "r" (c2)
-      : "t0", "t1", "t2", "t3", "t4",
-        "t5", "t6", "t7", "t8"
-  );
-}
-
-#endif  // defined(__mips_dsp) && (__mips_dsp_rev >= 2)
-
-#ifdef __cplusplus
-}  // extern "C"
-}  // namespace libyuv
-#endif
-
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_msa.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_msa.cc
new file mode 100644
index 0000000..482a521
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_msa.cc
@@ -0,0 +1,949 @@
+/*
+ *  Copyright 2016 The LibYuv Project Authors. All rights reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS. All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <assert.h>
+
+#include "libyuv/scale_row.h"
+
+// This module is for GCC MSA
+#if !defined(LIBYUV_DISABLE_MSA) && defined(__mips_msa)
+#include "libyuv/macros_msa.h"
+
+#ifdef __cplusplus
+namespace libyuv {
+extern "C" {
+#endif
+
+#define LOAD_INDEXED_DATA(srcp, indx0, out0) \
+  {                                          \
+    out0[0] = srcp[indx0[0]];                \
+    out0[1] = srcp[indx0[1]];                \
+    out0[2] = srcp[indx0[2]];                \
+    out0[3] = srcp[indx0[3]];                \
+  }
+
+void ScaleARGBRowDown2_MSA(const uint8_t* src_argb,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst_argb,
+                           int dst_width) {
+  int x;
+  v16u8 src0, src1, dst0;
+  (void)src_stride;
+
+  for (x = 0; x < dst_width; x += 4) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)src_argb, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)src_argb, 16);
+    dst0 = (v16u8)__msa_pckod_w((v4i32)src1, (v4i32)src0);
+    ST_UB(dst0, dst_argb);
+    src_argb += 32;
+    dst_argb += 16;
+  }
+}
+
+void ScaleARGBRowDown2Linear_MSA(const uint8_t* src_argb,
+                                 ptrdiff_t src_stride,
+                                 uint8_t* dst_argb,
+                                 int dst_width) {
+  int x;
+  v16u8 src0, src1, vec0, vec1, dst0;
+  (void)src_stride;
+
+  for (x = 0; x < dst_width; x += 4) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)src_argb, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)src_argb, 16);
+    vec0 = (v16u8)__msa_pckev_w((v4i32)src1, (v4i32)src0);
+    vec1 = (v16u8)__msa_pckod_w((v4i32)src1, (v4i32)src0);
+    dst0 = (v16u8)__msa_aver_u_b((v16u8)vec0, (v16u8)vec1);
+    ST_UB(dst0, dst_argb);
+    src_argb += 32;
+    dst_argb += 16;
+  }
+}
+
+void ScaleARGBRowDown2Box_MSA(const uint8_t* src_argb,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst_argb,
+                              int dst_width) {
+  int x;
+  const uint8_t* s = src_argb;
+  const uint8_t* t = src_argb + src_stride;
+  v16u8 src0, src1, src2, src3, vec0, vec1, vec2, vec3, dst0;
+  v8u16 reg0, reg1, reg2, reg3;
+  v16i8 shuffler = {0, 4, 1, 5, 2, 6, 3, 7, 8, 12, 9, 13, 10, 14, 11, 15};
+
+  for (x = 0; x < dst_width; x += 4) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 16);
+    src2 = (v16u8)__msa_ld_b((v16i8*)t, 0);
+    src3 = (v16u8)__msa_ld_b((v16i8*)t, 16);
+    vec0 = (v16u8)__msa_vshf_b(shuffler, (v16i8)src0, (v16i8)src0);
+    vec1 = (v16u8)__msa_vshf_b(shuffler, (v16i8)src1, (v16i8)src1);
+    vec2 = (v16u8)__msa_vshf_b(shuffler, (v16i8)src2, (v16i8)src2);
+    vec3 = (v16u8)__msa_vshf_b(shuffler, (v16i8)src3, (v16i8)src3);
+    reg0 = __msa_hadd_u_h(vec0, vec0);
+    reg1 = __msa_hadd_u_h(vec1, vec1);
+    reg2 = __msa_hadd_u_h(vec2, vec2);
+    reg3 = __msa_hadd_u_h(vec3, vec3);
+    reg0 += reg2;
+    reg1 += reg3;
+    reg0 = (v8u16)__msa_srari_h((v8i16)reg0, 2);
+    reg1 = (v8u16)__msa_srari_h((v8i16)reg1, 2);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)reg1, (v16i8)reg0);
+    ST_UB(dst0, dst_argb);
+    s += 32;
+    t += 32;
+    dst_argb += 16;
+  }
+}
+
+void ScaleARGBRowDownEven_MSA(const uint8_t* src_argb,
+                              ptrdiff_t src_stride,
+                              int32_t src_stepx,
+                              uint8_t* dst_argb,
+                              int dst_width) {
+  int x;
+  int32_t stepx = src_stepx * 4;
+  int32_t data0, data1, data2, data3;
+  (void)src_stride;
+
+  for (x = 0; x < dst_width; x += 4) {
+    data0 = LW(src_argb);
+    data1 = LW(src_argb + stepx);
+    data2 = LW(src_argb + stepx * 2);
+    data3 = LW(src_argb + stepx * 3);
+    SW(data0, dst_argb);
+    SW(data1, dst_argb + 4);
+    SW(data2, dst_argb + 8);
+    SW(data3, dst_argb + 12);
+    src_argb += stepx * 4;
+    dst_argb += 16;
+  }
+}
+
+void ScaleARGBRowDownEvenBox_MSA(const uint8_t* src_argb,
+                                 ptrdiff_t src_stride,
+                                 int src_stepx,
+                                 uint8_t* dst_argb,
+                                 int dst_width) {
+  int x;
+  const uint8_t* nxt_argb = src_argb + src_stride;
+  int32_t stepx = src_stepx * 4;
+  int64_t data0, data1, data2, data3;
+  v16u8 src0 = {0}, src1 = {0}, src2 = {0}, src3 = {0};
+  v16u8 vec0, vec1, vec2, vec3;
+  v8u16 reg0, reg1, reg2, reg3, reg4, reg5, reg6, reg7;
+  v16u8 dst0;
+
+  for (x = 0; x < dst_width; x += 4) {
+    data0 = LD(src_argb);
+    data1 = LD(src_argb + stepx);
+    data2 = LD(src_argb + stepx * 2);
+    data3 = LD(src_argb + stepx * 3);
+    src0 = (v16u8)__msa_insert_d((v2i64)src0, 0, data0);
+    src0 = (v16u8)__msa_insert_d((v2i64)src0, 1, data1);
+    src1 = (v16u8)__msa_insert_d((v2i64)src1, 0, data2);
+    src1 = (v16u8)__msa_insert_d((v2i64)src1, 1, data3);
+    data0 = LD(nxt_argb);
+    data1 = LD(nxt_argb + stepx);
+    data2 = LD(nxt_argb + stepx * 2);
+    data3 = LD(nxt_argb + stepx * 3);
+    src2 = (v16u8)__msa_insert_d((v2i64)src2, 0, data0);
+    src2 = (v16u8)__msa_insert_d((v2i64)src2, 1, data1);
+    src3 = (v16u8)__msa_insert_d((v2i64)src3, 0, data2);
+    src3 = (v16u8)__msa_insert_d((v2i64)src3, 1, data3);
+    vec0 = (v16u8)__msa_ilvr_b((v16i8)src2, (v16i8)src0);
+    vec1 = (v16u8)__msa_ilvr_b((v16i8)src3, (v16i8)src1);
+    vec2 = (v16u8)__msa_ilvl_b((v16i8)src2, (v16i8)src0);
+    vec3 = (v16u8)__msa_ilvl_b((v16i8)src3, (v16i8)src1);
+    reg0 = __msa_hadd_u_h(vec0, vec0);
+    reg1 = __msa_hadd_u_h(vec1, vec1);
+    reg2 = __msa_hadd_u_h(vec2, vec2);
+    reg3 = __msa_hadd_u_h(vec3, vec3);
+    reg4 = (v8u16)__msa_pckev_d((v2i64)reg2, (v2i64)reg0);
+    reg5 = (v8u16)__msa_pckev_d((v2i64)reg3, (v2i64)reg1);
+    reg6 = (v8u16)__msa_pckod_d((v2i64)reg2, (v2i64)reg0);
+    reg7 = (v8u16)__msa_pckod_d((v2i64)reg3, (v2i64)reg1);
+    reg4 += reg6;
+    reg5 += reg7;
+    reg4 = (v8u16)__msa_srari_h((v8i16)reg4, 2);
+    reg5 = (v8u16)__msa_srari_h((v8i16)reg5, 2);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)reg5, (v16i8)reg4);
+    ST_UB(dst0, dst_argb);
+    src_argb += stepx * 4;
+    nxt_argb += stepx * 4;
+    dst_argb += 16;
+  }
+}
+
+void ScaleRowDown2_MSA(const uint8_t* src_ptr,
+                       ptrdiff_t src_stride,
+                       uint8_t* dst,
+                       int dst_width) {
+  int x;
+  v16u8 src0, src1, src2, src3, dst0, dst1;
+  (void)src_stride;
+
+  for (x = 0; x < dst_width; x += 32) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 16);
+    src2 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 32);
+    src3 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 48);
+    dst0 = (v16u8)__msa_pckod_b((v16i8)src1, (v16i8)src0);
+    dst1 = (v16u8)__msa_pckod_b((v16i8)src3, (v16i8)src2);
+    ST_UB2(dst0, dst1, dst, 16);
+    src_ptr += 64;
+    dst += 32;
+  }
+}
+
+void ScaleRowDown2Linear_MSA(const uint8_t* src_ptr,
+                             ptrdiff_t src_stride,
+                             uint8_t* dst,
+                             int dst_width) {
+  int x;
+  v16u8 src0, src1, src2, src3, vec0, vec1, vec2, vec3, dst0, dst1;
+  (void)src_stride;
+
+  for (x = 0; x < dst_width; x += 32) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 16);
+    src2 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 32);
+    src3 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 48);
+    vec0 = (v16u8)__msa_pckev_b((v16i8)src1, (v16i8)src0);
+    vec2 = (v16u8)__msa_pckev_b((v16i8)src3, (v16i8)src2);
+    vec1 = (v16u8)__msa_pckod_b((v16i8)src1, (v16i8)src0);
+    vec3 = (v16u8)__msa_pckod_b((v16i8)src3, (v16i8)src2);
+    dst0 = __msa_aver_u_b(vec1, vec0);
+    dst1 = __msa_aver_u_b(vec3, vec2);
+    ST_UB2(dst0, dst1, dst, 16);
+    src_ptr += 64;
+    dst += 32;
+  }
+}
+
+void ScaleRowDown2Box_MSA(const uint8_t* src_ptr,
+                          ptrdiff_t src_stride,
+                          uint8_t* dst,
+                          int dst_width) {
+  int x;
+  const uint8_t* s = src_ptr;
+  const uint8_t* t = src_ptr + src_stride;
+  v16u8 src0, src1, src2, src3, src4, src5, src6, src7, dst0, dst1;
+  v8u16 vec0, vec1, vec2, vec3;
+
+  for (x = 0; x < dst_width; x += 32) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 16);
+    src2 = (v16u8)__msa_ld_b((v16i8*)s, 32);
+    src3 = (v16u8)__msa_ld_b((v16i8*)s, 48);
+    src4 = (v16u8)__msa_ld_b((v16i8*)t, 0);
+    src5 = (v16u8)__msa_ld_b((v16i8*)t, 16);
+    src6 = (v16u8)__msa_ld_b((v16i8*)t, 32);
+    src7 = (v16u8)__msa_ld_b((v16i8*)t, 48);
+    vec0 = __msa_hadd_u_h(src0, src0);
+    vec1 = __msa_hadd_u_h(src1, src1);
+    vec2 = __msa_hadd_u_h(src2, src2);
+    vec3 = __msa_hadd_u_h(src3, src3);
+    vec0 += __msa_hadd_u_h(src4, src4);
+    vec1 += __msa_hadd_u_h(src5, src5);
+    vec2 += __msa_hadd_u_h(src6, src6);
+    vec3 += __msa_hadd_u_h(src7, src7);
+    vec0 = (v8u16)__msa_srari_h((v8i16)vec0, 2);
+    vec1 = (v8u16)__msa_srari_h((v8i16)vec1, 2);
+    vec2 = (v8u16)__msa_srari_h((v8i16)vec2, 2);
+    vec3 = (v8u16)__msa_srari_h((v8i16)vec3, 2);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    dst1 = (v16u8)__msa_pckev_b((v16i8)vec3, (v16i8)vec2);
+    ST_UB2(dst0, dst1, dst, 16);
+    s += 64;
+    t += 64;
+    dst += 32;
+  }
+}
+
+void ScaleRowDown4_MSA(const uint8_t* src_ptr,
+                       ptrdiff_t src_stride,
+                       uint8_t* dst,
+                       int dst_width) {
+  int x;
+  v16u8 src0, src1, src2, src3, vec0, vec1, dst0;
+  (void)src_stride;
+
+  for (x = 0; x < dst_width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 16);
+    src2 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 32);
+    src3 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 48);
+    vec0 = (v16u8)__msa_pckev_b((v16i8)src1, (v16i8)src0);
+    vec1 = (v16u8)__msa_pckev_b((v16i8)src3, (v16i8)src2);
+    dst0 = (v16u8)__msa_pckod_b((v16i8)vec1, (v16i8)vec0);
+    ST_UB(dst0, dst);
+    src_ptr += 64;
+    dst += 16;
+  }
+}
+
+void ScaleRowDown4Box_MSA(const uint8_t* src_ptr,
+                          ptrdiff_t src_stride,
+                          uint8_t* dst,
+                          int dst_width) {
+  int x;
+  const uint8_t* s = src_ptr;
+  const uint8_t* t0 = s + src_stride;
+  const uint8_t* t1 = s + src_stride * 2;
+  const uint8_t* t2 = s + src_stride * 3;
+  v16u8 src0, src1, src2, src3, src4, src5, src6, src7, dst0;
+  v8u16 vec0, vec1, vec2, vec3;
+  v4u32 reg0, reg1, reg2, reg3;
+
+  for (x = 0; x < dst_width; x += 16) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 16);
+    src2 = (v16u8)__msa_ld_b((v16i8*)s, 32);
+    src3 = (v16u8)__msa_ld_b((v16i8*)s, 48);
+    src4 = (v16u8)__msa_ld_b((v16i8*)t0, 0);
+    src5 = (v16u8)__msa_ld_b((v16i8*)t0, 16);
+    src6 = (v16u8)__msa_ld_b((v16i8*)t0, 32);
+    src7 = (v16u8)__msa_ld_b((v16i8*)t0, 48);
+    vec0 = __msa_hadd_u_h(src0, src0);
+    vec1 = __msa_hadd_u_h(src1, src1);
+    vec2 = __msa_hadd_u_h(src2, src2);
+    vec3 = __msa_hadd_u_h(src3, src3);
+    vec0 += __msa_hadd_u_h(src4, src4);
+    vec1 += __msa_hadd_u_h(src5, src5);
+    vec2 += __msa_hadd_u_h(src6, src6);
+    vec3 += __msa_hadd_u_h(src7, src7);
+    src0 = (v16u8)__msa_ld_b((v16i8*)t1, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)t1, 16);
+    src2 = (v16u8)__msa_ld_b((v16i8*)t1, 32);
+    src3 = (v16u8)__msa_ld_b((v16i8*)t1, 48);
+    src4 = (v16u8)__msa_ld_b((v16i8*)t2, 0);
+    src5 = (v16u8)__msa_ld_b((v16i8*)t2, 16);
+    src6 = (v16u8)__msa_ld_b((v16i8*)t2, 32);
+    src7 = (v16u8)__msa_ld_b((v16i8*)t2, 48);
+    vec0 += __msa_hadd_u_h(src0, src0);
+    vec1 += __msa_hadd_u_h(src1, src1);
+    vec2 += __msa_hadd_u_h(src2, src2);
+    vec3 += __msa_hadd_u_h(src3, src3);
+    vec0 += __msa_hadd_u_h(src4, src4);
+    vec1 += __msa_hadd_u_h(src5, src5);
+    vec2 += __msa_hadd_u_h(src6, src6);
+    vec3 += __msa_hadd_u_h(src7, src7);
+    reg0 = __msa_hadd_u_w(vec0, vec0);
+    reg1 = __msa_hadd_u_w(vec1, vec1);
+    reg2 = __msa_hadd_u_w(vec2, vec2);
+    reg3 = __msa_hadd_u_w(vec3, vec3);
+    reg0 = (v4u32)__msa_srari_w((v4i32)reg0, 4);
+    reg1 = (v4u32)__msa_srari_w((v4i32)reg1, 4);
+    reg2 = (v4u32)__msa_srari_w((v4i32)reg2, 4);
+    reg3 = (v4u32)__msa_srari_w((v4i32)reg3, 4);
+    vec0 = (v8u16)__msa_pckev_h((v8i16)reg1, (v8i16)reg0);
+    vec1 = (v8u16)__msa_pckev_h((v8i16)reg3, (v8i16)reg2);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)vec1, (v16i8)vec0);
+    ST_UB(dst0, dst);
+    s += 64;
+    t0 += 64;
+    t1 += 64;
+    t2 += 64;
+    dst += 16;
+  }
+}
+
+void ScaleRowDown38_MSA(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        int dst_width) {
+  int x, width;
+  uint64_t dst0;
+  uint32_t dst1;
+  v16u8 src0, src1, vec0;
+  v16i8 mask = {0, 3, 6, 8, 11, 14, 16, 19, 22, 24, 27, 30, 0, 0, 0, 0};
+  (void)src_stride;
+
+  assert(dst_width % 3 == 0);
+  width = dst_width / 3;
+
+  for (x = 0; x < width; x += 4) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 16);
+    vec0 = (v16u8)__msa_vshf_b(mask, (v16i8)src1, (v16i8)src0);
+    dst0 = __msa_copy_u_d((v2i64)vec0, 0);
+    dst1 = __msa_copy_u_w((v4i32)vec0, 2);
+    SD(dst0, dst);
+    SW(dst1, dst + 8);
+    src_ptr += 32;
+    dst += 12;
+  }
+}
+
+void ScaleRowDown38_2_Box_MSA(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst_ptr,
+                              int dst_width) {
+  int x, width;
+  const uint8_t* s = src_ptr;
+  const uint8_t* t = src_ptr + src_stride;
+  uint64_t dst0;
+  uint32_t dst1;
+  v16u8 src0, src1, src2, src3, out;
+  v8u16 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7;
+  v4u32 tmp0, tmp1, tmp2, tmp3, tmp4;
+  v8i16 zero = {0};
+  v8i16 mask = {0, 1, 2, 8, 3, 4, 5, 9};
+  v16i8 dst_mask = {0, 2, 16, 4, 6, 18, 8, 10, 20, 12, 14, 22, 0, 0, 0, 0};
+  v4u32 const_0x2AAA = (v4u32)__msa_fill_w(0x2AAA);
+  v4u32 const_0x4000 = (v4u32)__msa_fill_w(0x4000);
+
+  assert((dst_width % 3 == 0) && (dst_width > 0));
+  width = dst_width / 3;
+
+  for (x = 0; x < width; x += 4) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 16);
+    src2 = (v16u8)__msa_ld_b((v16i8*)t, 0);
+    src3 = (v16u8)__msa_ld_b((v16i8*)t, 16);
+    vec0 = (v8u16)__msa_ilvr_b((v16i8)src2, (v16i8)src0);
+    vec1 = (v8u16)__msa_ilvl_b((v16i8)src2, (v16i8)src0);
+    vec2 = (v8u16)__msa_ilvr_b((v16i8)src3, (v16i8)src1);
+    vec3 = (v8u16)__msa_ilvl_b((v16i8)src3, (v16i8)src1);
+    vec0 = __msa_hadd_u_h((v16u8)vec0, (v16u8)vec0);
+    vec1 = __msa_hadd_u_h((v16u8)vec1, (v16u8)vec1);
+    vec2 = __msa_hadd_u_h((v16u8)vec2, (v16u8)vec2);
+    vec3 = __msa_hadd_u_h((v16u8)vec3, (v16u8)vec3);
+    vec4 = (v8u16)__msa_vshf_h(mask, zero, (v8i16)vec0);
+    vec5 = (v8u16)__msa_vshf_h(mask, zero, (v8i16)vec1);
+    vec6 = (v8u16)__msa_vshf_h(mask, zero, (v8i16)vec2);
+    vec7 = (v8u16)__msa_vshf_h(mask, zero, (v8i16)vec3);
+    vec0 = (v8u16)__msa_pckod_w((v4i32)vec1, (v4i32)vec0);
+    vec1 = (v8u16)__msa_pckod_w((v4i32)vec3, (v4i32)vec2);
+    vec0 = (v8u16)__msa_pckod_w((v4i32)vec1, (v4i32)vec0);
+    tmp0 = __msa_hadd_u_w(vec4, vec4);
+    tmp1 = __msa_hadd_u_w(vec5, vec5);
+    tmp2 = __msa_hadd_u_w(vec6, vec6);
+    tmp3 = __msa_hadd_u_w(vec7, vec7);
+    tmp4 = __msa_hadd_u_w(vec0, vec0);
+    vec0 = (v8u16)__msa_pckev_h((v8i16)tmp1, (v8i16)tmp0);
+    vec1 = (v8u16)__msa_pckev_h((v8i16)tmp3, (v8i16)tmp2);
+    tmp0 = __msa_hadd_u_w(vec0, vec0);
+    tmp1 = __msa_hadd_u_w(vec1, vec1);
+    tmp0 *= const_0x2AAA;
+    tmp1 *= const_0x2AAA;
+    tmp4 *= const_0x4000;
+    tmp0 = (v4u32)__msa_srai_w((v4i32)tmp0, 16);
+    tmp1 = (v4u32)__msa_srai_w((v4i32)tmp1, 16);
+    tmp4 = (v4u32)__msa_srai_w((v4i32)tmp4, 16);
+    vec0 = (v8u16)__msa_pckev_h((v8i16)tmp1, (v8i16)tmp0);
+    vec1 = (v8u16)__msa_pckev_h((v8i16)tmp4, (v8i16)tmp4);
+    out = (v16u8)__msa_vshf_b(dst_mask, (v16i8)vec1, (v16i8)vec0);
+    dst0 = __msa_copy_u_d((v2i64)out, 0);
+    dst1 = __msa_copy_u_w((v4i32)out, 2);
+    SD(dst0, dst_ptr);
+    SW(dst1, dst_ptr + 8);
+    s += 32;
+    t += 32;
+    dst_ptr += 12;
+  }
+}
+
+void ScaleRowDown38_3_Box_MSA(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst_ptr,
+                              int dst_width) {
+  int x, width;
+  const uint8_t* s = src_ptr;
+  const uint8_t* t0 = s + src_stride;
+  const uint8_t* t1 = s + src_stride * 2;
+  uint64_t dst0;
+  uint32_t dst1;
+  v16u8 src0, src1, src2, src3, src4, src5, out;
+  v8u16 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7;
+  v4u32 tmp0, tmp1, tmp2, tmp3, tmp4;
+  v8u16 zero = {0};
+  v8i16 mask = {0, 1, 2, 8, 3, 4, 5, 9};
+  v16i8 dst_mask = {0, 2, 16, 4, 6, 18, 8, 10, 20, 12, 14, 22, 0, 0, 0, 0};
+  v4u32 const_0x1C71 = (v4u32)__msa_fill_w(0x1C71);
+  v4u32 const_0x2AAA = (v4u32)__msa_fill_w(0x2AAA);
+
+  assert((dst_width % 3 == 0) && (dst_width > 0));
+  width = dst_width / 3;
+
+  for (x = 0; x < width; x += 4) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 16);
+    src2 = (v16u8)__msa_ld_b((v16i8*)t0, 0);
+    src3 = (v16u8)__msa_ld_b((v16i8*)t0, 16);
+    src4 = (v16u8)__msa_ld_b((v16i8*)t1, 0);
+    src5 = (v16u8)__msa_ld_b((v16i8*)t1, 16);
+    vec0 = (v8u16)__msa_ilvr_b((v16i8)src2, (v16i8)src0);
+    vec1 = (v8u16)__msa_ilvl_b((v16i8)src2, (v16i8)src0);
+    vec2 = (v8u16)__msa_ilvr_b((v16i8)src3, (v16i8)src1);
+    vec3 = (v8u16)__msa_ilvl_b((v16i8)src3, (v16i8)src1);
+    vec4 = (v8u16)__msa_ilvr_b((v16i8)zero, (v16i8)src4);
+    vec5 = (v8u16)__msa_ilvl_b((v16i8)zero, (v16i8)src4);
+    vec6 = (v8u16)__msa_ilvr_b((v16i8)zero, (v16i8)src5);
+    vec7 = (v8u16)__msa_ilvl_b((v16i8)zero, (v16i8)src5);
+    vec0 = __msa_hadd_u_h((v16u8)vec0, (v16u8)vec0);
+    vec1 = __msa_hadd_u_h((v16u8)vec1, (v16u8)vec1);
+    vec2 = __msa_hadd_u_h((v16u8)vec2, (v16u8)vec2);
+    vec3 = __msa_hadd_u_h((v16u8)vec3, (v16u8)vec3);
+    vec0 += __msa_hadd_u_h((v16u8)vec4, (v16u8)vec4);
+    vec1 += __msa_hadd_u_h((v16u8)vec5, (v16u8)vec5);
+    vec2 += __msa_hadd_u_h((v16u8)vec6, (v16u8)vec6);
+    vec3 += __msa_hadd_u_h((v16u8)vec7, (v16u8)vec7);
+    vec4 = (v8u16)__msa_vshf_h(mask, (v8i16)zero, (v8i16)vec0);
+    vec5 = (v8u16)__msa_vshf_h(mask, (v8i16)zero, (v8i16)vec1);
+    vec6 = (v8u16)__msa_vshf_h(mask, (v8i16)zero, (v8i16)vec2);
+    vec7 = (v8u16)__msa_vshf_h(mask, (v8i16)zero, (v8i16)vec3);
+    vec0 = (v8u16)__msa_pckod_w((v4i32)vec1, (v4i32)vec0);
+    vec1 = (v8u16)__msa_pckod_w((v4i32)vec3, (v4i32)vec2);
+    vec0 = (v8u16)__msa_pckod_w((v4i32)vec1, (v4i32)vec0);
+    tmp0 = __msa_hadd_u_w(vec4, vec4);
+    tmp1 = __msa_hadd_u_w(vec5, vec5);
+    tmp2 = __msa_hadd_u_w(vec6, vec6);
+    tmp3 = __msa_hadd_u_w(vec7, vec7);
+    tmp4 = __msa_hadd_u_w(vec0, vec0);
+    vec0 = (v8u16)__msa_pckev_h((v8i16)tmp1, (v8i16)tmp0);
+    vec1 = (v8u16)__msa_pckev_h((v8i16)tmp3, (v8i16)tmp2);
+    tmp0 = __msa_hadd_u_w(vec0, vec0);
+    tmp1 = __msa_hadd_u_w(vec1, vec1);
+    tmp0 *= const_0x1C71;
+    tmp1 *= const_0x1C71;
+    tmp4 *= const_0x2AAA;
+    tmp0 = (v4u32)__msa_srai_w((v4i32)tmp0, 16);
+    tmp1 = (v4u32)__msa_srai_w((v4i32)tmp1, 16);
+    tmp4 = (v4u32)__msa_srai_w((v4i32)tmp4, 16);
+    vec0 = (v8u16)__msa_pckev_h((v8i16)tmp1, (v8i16)tmp0);
+    vec1 = (v8u16)__msa_pckev_h((v8i16)tmp4, (v8i16)tmp4);
+    out = (v16u8)__msa_vshf_b(dst_mask, (v16i8)vec1, (v16i8)vec0);
+    dst0 = __msa_copy_u_d((v2i64)out, 0);
+    dst1 = __msa_copy_u_w((v4i32)out, 2);
+    SD(dst0, dst_ptr);
+    SW(dst1, dst_ptr + 8);
+    s += 32;
+    t0 += 32;
+    t1 += 32;
+    dst_ptr += 12;
+  }
+}
+
+void ScaleAddRow_MSA(const uint8_t* src_ptr, uint16_t* dst_ptr, int src_width) {
+  int x;
+  v16u8 src0;
+  v8u16 dst0, dst1;
+  v16i8 zero = {0};
+
+  assert(src_width > 0);
+
+  for (x = 0; x < src_width; x += 16) {
+    src0 = LD_UB(src_ptr);
+    dst0 = (v8u16)__msa_ld_h((v8i16*)dst_ptr, 0);
+    dst1 = (v8u16)__msa_ld_h((v8i16*)dst_ptr, 16);
+    dst0 += (v8u16)__msa_ilvr_b(zero, (v16i8)src0);
+    dst1 += (v8u16)__msa_ilvl_b(zero, (v16i8)src0);
+    ST_UH2(dst0, dst1, dst_ptr, 8);
+    src_ptr += 16;
+    dst_ptr += 16;
+  }
+}
+
+void ScaleFilterCols_MSA(uint8_t* dst_ptr,
+                         const uint8_t* src_ptr,
+                         int dst_width,
+                         int x,
+                         int dx) {
+  int j;
+  v4i32 vec_x = __msa_fill_w(x);
+  v4i32 vec_dx = __msa_fill_w(dx);
+  v4i32 vec_const = {0, 1, 2, 3};
+  v4i32 vec0, vec1, vec2, vec3, vec4, vec5, vec6, vec7, vec8, vec9;
+  v4i32 tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7;
+  v8u16 reg0, reg1;
+  v16u8 dst0;
+  v4i32 const_0xFFFF = __msa_fill_w(0xFFFF);
+  v4i32 const_0x40 = __msa_fill_w(0x40);
+
+  vec0 = vec_dx * vec_const;
+  vec1 = vec_dx * 4;
+  vec_x += vec0;
+
+  for (j = 0; j < dst_width - 1; j += 16) {
+    vec2 = vec_x >> 16;
+    vec6 = vec_x & const_0xFFFF;
+    vec_x += vec1;
+    vec3 = vec_x >> 16;
+    vec7 = vec_x & const_0xFFFF;
+    vec_x += vec1;
+    vec4 = vec_x >> 16;
+    vec8 = vec_x & const_0xFFFF;
+    vec_x += vec1;
+    vec5 = vec_x >> 16;
+    vec9 = vec_x & const_0xFFFF;
+    vec_x += vec1;
+    vec6 >>= 9;
+    vec7 >>= 9;
+    vec8 >>= 9;
+    vec9 >>= 9;
+    LOAD_INDEXED_DATA(src_ptr, vec2, tmp0);
+    LOAD_INDEXED_DATA(src_ptr, vec3, tmp1);
+    LOAD_INDEXED_DATA(src_ptr, vec4, tmp2);
+    LOAD_INDEXED_DATA(src_ptr, vec5, tmp3);
+    vec2 += 1;
+    vec3 += 1;
+    vec4 += 1;
+    vec5 += 1;
+    LOAD_INDEXED_DATA(src_ptr, vec2, tmp4);
+    LOAD_INDEXED_DATA(src_ptr, vec3, tmp5);
+    LOAD_INDEXED_DATA(src_ptr, vec4, tmp6);
+    LOAD_INDEXED_DATA(src_ptr, vec5, tmp7);
+    tmp4 -= tmp0;
+    tmp5 -= tmp1;
+    tmp6 -= tmp2;
+    tmp7 -= tmp3;
+    tmp4 *= vec6;
+    tmp5 *= vec7;
+    tmp6 *= vec8;
+    tmp7 *= vec9;
+    tmp4 += const_0x40;
+    tmp5 += const_0x40;
+    tmp6 += const_0x40;
+    tmp7 += const_0x40;
+    tmp4 >>= 7;
+    tmp5 >>= 7;
+    tmp6 >>= 7;
+    tmp7 >>= 7;
+    tmp0 += tmp4;
+    tmp1 += tmp5;
+    tmp2 += tmp6;
+    tmp3 += tmp7;
+    reg0 = (v8u16)__msa_pckev_h((v8i16)tmp1, (v8i16)tmp0);
+    reg1 = (v8u16)__msa_pckev_h((v8i16)tmp3, (v8i16)tmp2);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)reg1, (v16i8)reg0);
+    __msa_st_b(dst0, dst_ptr, 0);
+    dst_ptr += 16;
+  }
+}
+
+void ScaleARGBCols_MSA(uint8_t* dst_argb,
+                       const uint8_t* src_argb,
+                       int dst_width,
+                       int x,
+                       int dx) {
+  const uint32_t* src = (const uint32_t*)(src_argb);
+  uint32_t* dst = (uint32_t*)(dst_argb);
+  int j;
+  v4i32 x_vec = __msa_fill_w(x);
+  v4i32 dx_vec = __msa_fill_w(dx);
+  v4i32 const_vec = {0, 1, 2, 3};
+  v4i32 vec0, vec1, vec2;
+  v4i32 dst0;
+
+  vec0 = dx_vec * const_vec;
+  vec1 = dx_vec * 4;
+  x_vec += vec0;
+
+  for (j = 0; j < dst_width; j += 4) {
+    vec2 = x_vec >> 16;
+    x_vec += vec1;
+    LOAD_INDEXED_DATA(src, vec2, dst0);
+    __msa_st_w(dst0, dst, 0);
+    dst += 4;
+  }
+}
+
+void ScaleARGBFilterCols_MSA(uint8_t* dst_argb,
+                             const uint8_t* src_argb,
+                             int dst_width,
+                             int x,
+                             int dx) {
+  const uint32_t* src = (const uint32_t*)(src_argb);
+  int j;
+  v4u32 src0, src1, src2, src3;
+  v4u32 vec0, vec1, vec2, vec3;
+  v16u8 reg0, reg1, reg2, reg3, reg4, reg5, reg6, reg7;
+  v16u8 mult0, mult1, mult2, mult3;
+  v8u16 tmp0, tmp1, tmp2, tmp3;
+  v16u8 dst0, dst1;
+  v4u32 vec_x = (v4u32)__msa_fill_w(x);
+  v4u32 vec_dx = (v4u32)__msa_fill_w(dx);
+  v4u32 vec_const = {0, 1, 2, 3};
+  v16u8 const_0x7f = (v16u8)__msa_fill_b(0x7f);
+
+  vec0 = vec_dx * vec_const;
+  vec1 = vec_dx * 4;
+  vec_x += vec0;
+
+  for (j = 0; j < dst_width - 1; j += 8) {
+    vec2 = vec_x >> 16;
+    reg0 = (v16u8)(vec_x >> 9);
+    vec_x += vec1;
+    vec3 = vec_x >> 16;
+    reg1 = (v16u8)(vec_x >> 9);
+    vec_x += vec1;
+    reg0 = reg0 & const_0x7f;
+    reg1 = reg1 & const_0x7f;
+    reg0 = (v16u8)__msa_shf_b((v16i8)reg0, 0);
+    reg1 = (v16u8)__msa_shf_b((v16i8)reg1, 0);
+    reg2 = reg0 ^ const_0x7f;
+    reg3 = reg1 ^ const_0x7f;
+    mult0 = (v16u8)__msa_ilvr_b((v16i8)reg0, (v16i8)reg2);
+    mult1 = (v16u8)__msa_ilvl_b((v16i8)reg0, (v16i8)reg2);
+    mult2 = (v16u8)__msa_ilvr_b((v16i8)reg1, (v16i8)reg3);
+    mult3 = (v16u8)__msa_ilvl_b((v16i8)reg1, (v16i8)reg3);
+    LOAD_INDEXED_DATA(src, vec2, src0);
+    LOAD_INDEXED_DATA(src, vec3, src1);
+    vec2 += 1;
+    vec3 += 1;
+    LOAD_INDEXED_DATA(src, vec2, src2);
+    LOAD_INDEXED_DATA(src, vec3, src3);
+    reg4 = (v16u8)__msa_ilvr_b((v16i8)src2, (v16i8)src0);
+    reg5 = (v16u8)__msa_ilvl_b((v16i8)src2, (v16i8)src0);
+    reg6 = (v16u8)__msa_ilvr_b((v16i8)src3, (v16i8)src1);
+    reg7 = (v16u8)__msa_ilvl_b((v16i8)src3, (v16i8)src1);
+    tmp0 = __msa_dotp_u_h(reg4, mult0);
+    tmp1 = __msa_dotp_u_h(reg5, mult1);
+    tmp2 = __msa_dotp_u_h(reg6, mult2);
+    tmp3 = __msa_dotp_u_h(reg7, mult3);
+    tmp0 >>= 7;
+    tmp1 >>= 7;
+    tmp2 >>= 7;
+    tmp3 >>= 7;
+    dst0 = (v16u8)__msa_pckev_b((v16i8)tmp1, (v16i8)tmp0);
+    dst1 = (v16u8)__msa_pckev_b((v16i8)tmp3, (v16i8)tmp2);
+    __msa_st_b(dst0, dst_argb, 0);
+    __msa_st_b(dst1, dst_argb, 16);
+    dst_argb += 32;
+  }
+}
+
+void ScaleRowDown34_MSA(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        int dst_width) {
+  int x;
+  (void)src_stride;
+  v16u8 src0, src1, src2, src3;
+  v16u8 vec0, vec1, vec2;
+  v16i8 mask0 = {0, 1, 3, 4, 5, 7, 8, 9, 11, 12, 13, 15, 16, 17, 19, 20};
+  v16i8 mask1 = {5, 7, 8, 9, 11, 12, 13, 15, 16, 17, 19, 20, 21, 23, 24, 25};
+  v16i8 mask2 = {11, 12, 13, 15, 16, 17, 19, 20,
+                 21, 23, 24, 25, 27, 28, 29, 31};
+
+  assert((dst_width % 3 == 0) && (dst_width > 0));
+
+  for (x = 0; x < dst_width; x += 48) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 16);
+    src2 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 32);
+    src3 = (v16u8)__msa_ld_b((v16i8*)src_ptr, 48);
+    vec0 = (v16u8)__msa_vshf_b(mask0, (v16i8)src1, (v16i8)src0);
+    vec1 = (v16u8)__msa_vshf_b(mask1, (v16i8)src2, (v16i8)src1);
+    vec2 = (v16u8)__msa_vshf_b(mask2, (v16i8)src3, (v16i8)src2);
+    __msa_st_b((v16i8)vec0, dst, 0);
+    __msa_st_b((v16i8)vec1, dst, 16);
+    __msa_st_b((v16i8)vec2, dst, 32);
+    src_ptr += 64;
+    dst += 48;
+  }
+}
+
+void ScaleRowDown34_0_Box_MSA(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* d,
+                              int dst_width) {
+  const uint8_t* s = src_ptr;
+  const uint8_t* t = src_ptr + src_stride;
+  int x;
+  v16u8 src0, src1, src2, src3, src4, src5, src6, src7, dst0, dst1, dst2;
+  v16u8 vec0, vec1, vec2, vec3, vec4, vec5;
+  v16u8 vec6, vec7, vec8, vec9, vec10, vec11;
+  v8i16 reg0, reg1, reg2, reg3, reg4, reg5;
+  v8i16 reg6, reg7, reg8, reg9, reg10, reg11;
+  v16u8 const0 = {3, 1, 1, 1, 1, 3, 3, 1, 1, 1, 1, 3, 3, 1, 1, 1};
+  v16u8 const1 = {1, 3, 3, 1, 1, 1, 1, 3, 3, 1, 1, 1, 1, 3, 3, 1};
+  v16u8 const2 = {1, 1, 1, 3, 3, 1, 1, 1, 1, 3, 3, 1, 1, 1, 1, 3};
+  v16i8 mask0 = {0, 1, 1, 2, 2, 3, 4, 5, 5, 6, 6, 7, 8, 9, 9, 10};
+  v16i8 mask1 = {10, 11, 12, 13, 13, 14, 14, 15,
+                 16, 17, 17, 18, 18, 19, 20, 21};
+  v16i8 mask2 = {5, 6, 6, 7, 8, 9, 9, 10, 10, 11, 12, 13, 13, 14, 14, 15};
+  v8i16 shft0 = {2, 1, 2, 2, 1, 2, 2, 1};
+  v8i16 shft1 = {2, 2, 1, 2, 2, 1, 2, 2};
+  v8i16 shft2 = {1, 2, 2, 1, 2, 2, 1, 2};
+
+  assert((dst_width % 3 == 0) && (dst_width > 0));
+
+  for (x = 0; x < dst_width; x += 48) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 16);
+    src2 = (v16u8)__msa_ld_b((v16i8*)s, 32);
+    src3 = (v16u8)__msa_ld_b((v16i8*)s, 48);
+    src4 = (v16u8)__msa_ld_b((v16i8*)t, 0);
+    src5 = (v16u8)__msa_ld_b((v16i8*)t, 16);
+    src6 = (v16u8)__msa_ld_b((v16i8*)t, 32);
+    src7 = (v16u8)__msa_ld_b((v16i8*)t, 48);
+    vec0 = (v16u8)__msa_vshf_b(mask0, (v16i8)src0, (v16i8)src0);
+    vec1 = (v16u8)__msa_vshf_b(mask1, (v16i8)src1, (v16i8)src0);
+    vec2 = (v16u8)__msa_vshf_b(mask2, (v16i8)src1, (v16i8)src1);
+    vec3 = (v16u8)__msa_vshf_b(mask0, (v16i8)src2, (v16i8)src2);
+    vec4 = (v16u8)__msa_vshf_b(mask1, (v16i8)src3, (v16i8)src2);
+    vec5 = (v16u8)__msa_vshf_b(mask2, (v16i8)src3, (v16i8)src3);
+    vec6 = (v16u8)__msa_vshf_b(mask0, (v16i8)src4, (v16i8)src4);
+    vec7 = (v16u8)__msa_vshf_b(mask1, (v16i8)src5, (v16i8)src4);
+    vec8 = (v16u8)__msa_vshf_b(mask2, (v16i8)src5, (v16i8)src5);
+    vec9 = (v16u8)__msa_vshf_b(mask0, (v16i8)src6, (v16i8)src6);
+    vec10 = (v16u8)__msa_vshf_b(mask1, (v16i8)src7, (v16i8)src6);
+    vec11 = (v16u8)__msa_vshf_b(mask2, (v16i8)src7, (v16i8)src7);
+    reg0 = (v8i16)__msa_dotp_u_h(vec0, const0);
+    reg1 = (v8i16)__msa_dotp_u_h(vec1, const1);
+    reg2 = (v8i16)__msa_dotp_u_h(vec2, const2);
+    reg3 = (v8i16)__msa_dotp_u_h(vec3, const0);
+    reg4 = (v8i16)__msa_dotp_u_h(vec4, const1);
+    reg5 = (v8i16)__msa_dotp_u_h(vec5, const2);
+    reg6 = (v8i16)__msa_dotp_u_h(vec6, const0);
+    reg7 = (v8i16)__msa_dotp_u_h(vec7, const1);
+    reg8 = (v8i16)__msa_dotp_u_h(vec8, const2);
+    reg9 = (v8i16)__msa_dotp_u_h(vec9, const0);
+    reg10 = (v8i16)__msa_dotp_u_h(vec10, const1);
+    reg11 = (v8i16)__msa_dotp_u_h(vec11, const2);
+    reg0 = __msa_srar_h(reg0, shft0);
+    reg1 = __msa_srar_h(reg1, shft1);
+    reg2 = __msa_srar_h(reg2, shft2);
+    reg3 = __msa_srar_h(reg3, shft0);
+    reg4 = __msa_srar_h(reg4, shft1);
+    reg5 = __msa_srar_h(reg5, shft2);
+    reg6 = __msa_srar_h(reg6, shft0);
+    reg7 = __msa_srar_h(reg7, shft1);
+    reg8 = __msa_srar_h(reg8, shft2);
+    reg9 = __msa_srar_h(reg9, shft0);
+    reg10 = __msa_srar_h(reg10, shft1);
+    reg11 = __msa_srar_h(reg11, shft2);
+    reg0 = reg0 * 3 + reg6;
+    reg1 = reg1 * 3 + reg7;
+    reg2 = reg2 * 3 + reg8;
+    reg3 = reg3 * 3 + reg9;
+    reg4 = reg4 * 3 + reg10;
+    reg5 = reg5 * 3 + reg11;
+    reg0 = __msa_srari_h(reg0, 2);
+    reg1 = __msa_srari_h(reg1, 2);
+    reg2 = __msa_srari_h(reg2, 2);
+    reg3 = __msa_srari_h(reg3, 2);
+    reg4 = __msa_srari_h(reg4, 2);
+    reg5 = __msa_srari_h(reg5, 2);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)reg1, (v16i8)reg0);
+    dst1 = (v16u8)__msa_pckev_b((v16i8)reg3, (v16i8)reg2);
+    dst2 = (v16u8)__msa_pckev_b((v16i8)reg5, (v16i8)reg4);
+    __msa_st_b((v16i8)dst0, d, 0);
+    __msa_st_b((v16i8)dst1, d, 16);
+    __msa_st_b((v16i8)dst2, d, 32);
+    s += 64;
+    t += 64;
+    d += 48;
+  }
+}
+
+void ScaleRowDown34_1_Box_MSA(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* d,
+                              int dst_width) {
+  const uint8_t* s = src_ptr;
+  const uint8_t* t = src_ptr + src_stride;
+  int x;
+  v16u8 src0, src1, src2, src3, src4, src5, src6, src7, dst0, dst1, dst2;
+  v16u8 vec0, vec1, vec2, vec3, vec4, vec5;
+  v16u8 vec6, vec7, vec8, vec9, vec10, vec11;
+  v8i16 reg0, reg1, reg2, reg3, reg4, reg5;
+  v8i16 reg6, reg7, reg8, reg9, reg10, reg11;
+  v16u8 const0 = {3, 1, 1, 1, 1, 3, 3, 1, 1, 1, 1, 3, 3, 1, 1, 1};
+  v16u8 const1 = {1, 3, 3, 1, 1, 1, 1, 3, 3, 1, 1, 1, 1, 3, 3, 1};
+  v16u8 const2 = {1, 1, 1, 3, 3, 1, 1, 1, 1, 3, 3, 1, 1, 1, 1, 3};
+  v16i8 mask0 = {0, 1, 1, 2, 2, 3, 4, 5, 5, 6, 6, 7, 8, 9, 9, 10};
+  v16i8 mask1 = {10, 11, 12, 13, 13, 14, 14, 15,
+                 16, 17, 17, 18, 18, 19, 20, 21};
+  v16i8 mask2 = {5, 6, 6, 7, 8, 9, 9, 10, 10, 11, 12, 13, 13, 14, 14, 15};
+  v8i16 shft0 = {2, 1, 2, 2, 1, 2, 2, 1};
+  v8i16 shft1 = {2, 2, 1, 2, 2, 1, 2, 2};
+  v8i16 shft2 = {1, 2, 2, 1, 2, 2, 1, 2};
+
+  assert((dst_width % 3 == 0) && (dst_width > 0));
+
+  for (x = 0; x < dst_width; x += 48) {
+    src0 = (v16u8)__msa_ld_b((v16i8*)s, 0);
+    src1 = (v16u8)__msa_ld_b((v16i8*)s, 16);
+    src2 = (v16u8)__msa_ld_b((v16i8*)s, 32);
+    src3 = (v16u8)__msa_ld_b((v16i8*)s, 48);
+    src4 = (v16u8)__msa_ld_b((v16i8*)t, 0);
+    src5 = (v16u8)__msa_ld_b((v16i8*)t, 16);
+    src6 = (v16u8)__msa_ld_b((v16i8*)t, 32);
+    src7 = (v16u8)__msa_ld_b((v16i8*)t, 48);
+    vec0 = (v16u8)__msa_vshf_b(mask0, (v16i8)src0, (v16i8)src0);
+    vec1 = (v16u8)__msa_vshf_b(mask1, (v16i8)src1, (v16i8)src0);
+    vec2 = (v16u8)__msa_vshf_b(mask2, (v16i8)src1, (v16i8)src1);
+    vec3 = (v16u8)__msa_vshf_b(mask0, (v16i8)src2, (v16i8)src2);
+    vec4 = (v16u8)__msa_vshf_b(mask1, (v16i8)src3, (v16i8)src2);
+    vec5 = (v16u8)__msa_vshf_b(mask2, (v16i8)src3, (v16i8)src3);
+    vec6 = (v16u8)__msa_vshf_b(mask0, (v16i8)src4, (v16i8)src4);
+    vec7 = (v16u8)__msa_vshf_b(mask1, (v16i8)src5, (v16i8)src4);
+    vec8 = (v16u8)__msa_vshf_b(mask2, (v16i8)src5, (v16i8)src5);
+    vec9 = (v16u8)__msa_vshf_b(mask0, (v16i8)src6, (v16i8)src6);
+    vec10 = (v16u8)__msa_vshf_b(mask1, (v16i8)src7, (v16i8)src6);
+    vec11 = (v16u8)__msa_vshf_b(mask2, (v16i8)src7, (v16i8)src7);
+    reg0 = (v8i16)__msa_dotp_u_h(vec0, const0);
+    reg1 = (v8i16)__msa_dotp_u_h(vec1, const1);
+    reg2 = (v8i16)__msa_dotp_u_h(vec2, const2);
+    reg3 = (v8i16)__msa_dotp_u_h(vec3, const0);
+    reg4 = (v8i16)__msa_dotp_u_h(vec4, const1);
+    reg5 = (v8i16)__msa_dotp_u_h(vec5, const2);
+    reg6 = (v8i16)__msa_dotp_u_h(vec6, const0);
+    reg7 = (v8i16)__msa_dotp_u_h(vec7, const1);
+    reg8 = (v8i16)__msa_dotp_u_h(vec8, const2);
+    reg9 = (v8i16)__msa_dotp_u_h(vec9, const0);
+    reg10 = (v8i16)__msa_dotp_u_h(vec10, const1);
+    reg11 = (v8i16)__msa_dotp_u_h(vec11, const2);
+    reg0 = __msa_srar_h(reg0, shft0);
+    reg1 = __msa_srar_h(reg1, shft1);
+    reg2 = __msa_srar_h(reg2, shft2);
+    reg3 = __msa_srar_h(reg3, shft0);
+    reg4 = __msa_srar_h(reg4, shft1);
+    reg5 = __msa_srar_h(reg5, shft2);
+    reg6 = __msa_srar_h(reg6, shft0);
+    reg7 = __msa_srar_h(reg7, shft1);
+    reg8 = __msa_srar_h(reg8, shft2);
+    reg9 = __msa_srar_h(reg9, shft0);
+    reg10 = __msa_srar_h(reg10, shft1);
+    reg11 = __msa_srar_h(reg11, shft2);
+    reg0 += reg6;
+    reg1 += reg7;
+    reg2 += reg8;
+    reg3 += reg9;
+    reg4 += reg10;
+    reg5 += reg11;
+    reg0 = __msa_srari_h(reg0, 1);
+    reg1 = __msa_srari_h(reg1, 1);
+    reg2 = __msa_srari_h(reg2, 1);
+    reg3 = __msa_srari_h(reg3, 1);
+    reg4 = __msa_srari_h(reg4, 1);
+    reg5 = __msa_srari_h(reg5, 1);
+    dst0 = (v16u8)__msa_pckev_b((v16i8)reg1, (v16i8)reg0);
+    dst1 = (v16u8)__msa_pckev_b((v16i8)reg3, (v16i8)reg2);
+    dst2 = (v16u8)__msa_pckev_b((v16i8)reg5, (v16i8)reg4);
+    __msa_st_b((v16i8)dst0, d, 0);
+    __msa_st_b((v16i8)dst1, d, 16);
+    __msa_st_b((v16i8)dst2, d, 32);
+    s += 64;
+    t += 64;
+    d += 48;
+  }
+}
+
+#ifdef __cplusplus
+}  // extern "C"
+}  // namespace libyuv
+#endif
+
+#endif  // !defined(LIBYUV_DISABLE_MSA) && defined(__mips_msa)
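The weighting performed by ScaleRowDown34_0_Box_MSA above is hard to read off the shuffle masks and dot-product constants, so here is a scalar sketch of the same arithmetic (the helper name is hypothetical, not a libyuv API): each row is filtered 4 -> 3 horizontally with rounded weights (3,1)/4, (1,1)/2, (1,3)/4, and the two filtered rows are then blended 3:1 with rounding.

```c
#include <assert.h>
#include <stdint.h>

// Scalar sketch (hypothetical name, not part of libyuv) of the arithmetic in
// ScaleRowDown34_0_Box_MSA: the horizontal 4 -> 3 filter corresponds to the
// const0/const1/const2 dot products with the per-lane rounding shifts in
// shft0/shft1/shft2, and the vertical 3:1 blend corresponds to the
// "reg * 3 + regN" step followed by __msa_srari_h(..., 2).
static void ScaleRowDown34_0_Box_Sketch(const uint8_t* s,  // row 0
                                        const uint8_t* t,  // row 1
                                        uint8_t* d,
                                        int dst_width) {
  int x;
  assert(dst_width % 3 == 0 && dst_width > 0);
  for (x = 0; x < dst_width; x += 3) {
    uint16_t r0[3], r1[3];
    // Horizontal 4 -> 3 filter, per row, with rounding.
    r0[0] = (s[0] * 3 + s[1] + 2) >> 2;
    r0[1] = (s[1] + s[2] + 1) >> 1;
    r0[2] = (s[2] + s[3] * 3 + 2) >> 2;
    r1[0] = (t[0] * 3 + t[1] + 2) >> 2;
    r1[1] = (t[1] + t[2] + 1) >> 1;
    r1[2] = (t[2] + t[3] * 3 + 2) >> 2;
    // Vertical 3:1 blend of the filtered rows, with rounding.
    d[0] = (uint8_t)((r0[0] * 3 + r1[0] + 2) >> 2);
    d[1] = (uint8_t)((r0[1] * 3 + r1[1] + 2) >> 2);
    d[2] = (uint8_t)((r0[2] * 3 + r1[2] + 2) >> 2);
    s += 4;
    t += 4;
    d += 3;
  }
}
```

The _1_Box variant is identical except that the vertical step averages the two rows 1:1 ((r0 + r1 + 1) >> 1), matching its "reg += regN" / srari-by-1 sequence.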
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_neon.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_neon.cc
index 44b0c80..459a299 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_neon.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_neon.cc
@@ -23,564 +23,541 @@
 // Provided by Fritz Koenig
 
 // Read 32x1 throw away even pixels, and write 16x1.
-void ScaleRowDown2_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst, int dst_width) {
-  asm volatile (
-  "1:                                          \n"
-    // load even pixels into q0, odd into q1
-    MEMACCESS(0)
-    "vld2.8     {q0, q1}, [%0]!                \n"
-    "subs       %2, %2, #16                    \n"  // 16 processed per loop
-    MEMACCESS(1)
-    "vst1.8     {q1}, [%1]!                    \n"  // store odd pixels
-    "bgt        1b                             \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst),              // %1
-    "+r"(dst_width)         // %2
-  :
-  : "q0", "q1"              // Clobber List
-  );
+void ScaleRowDown2_NEON(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "1:                                        \n"
+      // load even pixels into q0, odd into q1
+      "vld2.8     {q0, q1}, [%0]!                \n"
+      "subs       %2, %2, #16                    \n"  // 16 processed per loop
+      "vst1.8     {q1}, [%1]!                    \n"  // store odd pixels
+      "bgt        1b                             \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst),       // %1
+        "+r"(dst_width)  // %2
+      :
+      : "q0", "q1"  // Clobber List
+      );
 }
 
 // Read 32x1 average down and write 16x1.
-void ScaleRowDown2Linear_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst, int dst_width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0, q1}, [%0]!                \n"  // load pixels and post inc
-    "subs       %2, %2, #16                    \n"  // 16 processed per loop
-    "vpaddl.u8  q0, q0                         \n"  // add adjacent
-    "vpaddl.u8  q1, q1                         \n"
-    "vrshrn.u16 d0, q0, #1                     \n"  // downshift, round and pack
-    "vrshrn.u16 d1, q1, #1                     \n"
-    MEMACCESS(1)
-    "vst1.8     {q0}, [%1]!                    \n"
-    "bgt        1b                             \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst),              // %1
-    "+r"(dst_width)         // %2
-  :
-  : "q0", "q1"     // Clobber List
-  );
+void ScaleRowDown2Linear_NEON(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst,
+                              int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "1:                                        \n"
+      "vld2.8     {q0, q1}, [%0]!                \n"  // load 32 pixels
+      "subs       %2, %2, #16                    \n"  // 16 processed per loop
+      "vrhadd.u8  q0, q0, q1                     \n"  // rounding half add
+      "vst1.8     {q0}, [%1]!                    \n"
+      "bgt        1b                             \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst),       // %1
+        "+r"(dst_width)  // %2
+      :
+      : "q0", "q1"  // Clobber List
+      );
 }
 
 // Read 32x2 average down and write 16x1.
-void ScaleRowDown2Box_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst, int dst_width) {
-  asm volatile (
-    // change the stride to row 2 pointer
-    "add        %1, %0                         \n"
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0, q1}, [%0]!                \n"  // load row 1 and post inc
-    MEMACCESS(1)
-    "vld1.8     {q2, q3}, [%1]!                \n"  // load row 2 and post inc
-    "subs       %3, %3, #16                    \n"  // 16 processed per loop
-    "vpaddl.u8  q0, q0                         \n"  // row 1 add adjacent
-    "vpaddl.u8  q1, q1                         \n"
-    "vpadal.u8  q0, q2                         \n"  // row 2 add adjacent + row1
-    "vpadal.u8  q1, q3                         \n"
-    "vrshrn.u16 d0, q0, #2                     \n"  // downshift, round and pack
-    "vrshrn.u16 d1, q1, #2                     \n"
-    MEMACCESS(2)
-    "vst1.8     {q0}, [%2]!                    \n"
-    "bgt        1b                             \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(src_stride),       // %1
-    "+r"(dst),              // %2
-    "+r"(dst_width)         // %3
-  :
-  : "q0", "q1", "q2", "q3"     // Clobber List
-  );
+void ScaleRowDown2Box_NEON(const uint8_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst,
+                           int dst_width) {
+  asm volatile(
+      // change the stride to row 2 pointer
+      "add        %1, %0                         \n"
+      "1:                                        \n"
+      "vld1.8     {q0, q1}, [%0]!                \n"  // load row 1 and post inc
+      "vld1.8     {q2, q3}, [%1]!                \n"  // load row 2 and post inc
+      "subs       %3, %3, #16                    \n"  // 16 processed per loop
+      "vpaddl.u8  q0, q0                         \n"  // row 1 add adjacent
+      "vpaddl.u8  q1, q1                         \n"
+      "vpadal.u8  q0, q2                         \n"  // row 2 add adjacent +
+                                                      // row1
+      "vpadal.u8  q1, q3                         \n"
+      "vrshrn.u16 d0, q0, #2                     \n"  // downshift, round and
+                                                      // pack
+      "vrshrn.u16 d1, q1, #2                     \n"
+      "vst1.8     {q0}, [%2]!                    \n"
+      "bgt        1b                             \n"
+      : "+r"(src_ptr),     // %0
+        "+r"(src_stride),  // %1
+        "+r"(dst),         // %2
+        "+r"(dst_width)    // %3
+      :
+      : "q0", "q1", "q2", "q3"  // Clobber List
+      );
 }
 
-void ScaleRowDown4_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst_ptr, int dst_width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!        \n" // src line 0
-    "subs       %2, %2, #8                     \n" // 8 processed per loop
-    MEMACCESS(1)
-    "vst1.8     {d2}, [%1]!                    \n"
-    "bgt        1b                             \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst_ptr),          // %1
-    "+r"(dst_width)         // %2
-  :
-  : "q0", "q1", "memory", "cc"
-  );
+void ScaleRowDown4_NEON(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst_ptr,
+                        int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // src line 0
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop
+      "vst1.8     {d2}, [%1]!                    \n"
+      "bgt        1b                             \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(dst_width)  // %2
+      :
+      : "q0", "q1", "memory", "cc");
 }
 
-void ScaleRowDown4Box_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst_ptr, int dst_width) {
-  const uint8* src_ptr1 = src_ptr + src_stride;
-  const uint8* src_ptr2 = src_ptr + src_stride * 2;
-  const uint8* src_ptr3 = src_ptr + src_stride * 3;
-asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0]!                    \n"   // load up 16x4
-    MEMACCESS(3)
-    "vld1.8     {q1}, [%3]!                    \n"
-    MEMACCESS(4)
-    "vld1.8     {q2}, [%4]!                    \n"
-    MEMACCESS(5)
-    "vld1.8     {q3}, [%5]!                    \n"
-    "subs       %2, %2, #4                     \n"
-    "vpaddl.u8  q0, q0                         \n"
-    "vpadal.u8  q0, q1                         \n"
-    "vpadal.u8  q0, q2                         \n"
-    "vpadal.u8  q0, q3                         \n"
-    "vpaddl.u16 q0, q0                         \n"
-    "vrshrn.u32 d0, q0, #4                     \n"   // divide by 16 w/rounding
-    "vmovn.u16  d0, q0                         \n"
-    MEMACCESS(1)
-    "vst1.32    {d0[0]}, [%1]!                 \n"
-    "bgt        1b                             \n"
-  : "+r"(src_ptr),   // %0
-    "+r"(dst_ptr),   // %1
-    "+r"(dst_width), // %2
-    "+r"(src_ptr1),  // %3
-    "+r"(src_ptr2),  // %4
-    "+r"(src_ptr3)   // %5
-  :
-  : "q0", "q1", "q2", "q3", "memory", "cc"
-  );
+void ScaleRowDown4Box_NEON(const uint8_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst_ptr,
+                           int dst_width) {
+  const uint8_t* src_ptr1 = src_ptr + src_stride;
+  const uint8_t* src_ptr2 = src_ptr + src_stride * 2;
+  const uint8_t* src_ptr3 = src_ptr + src_stride * 3;
+  asm volatile(
+      "1:                                        \n"
+      "vld1.8     {q0}, [%0]!                    \n"  // load up 16x4
+      "vld1.8     {q1}, [%3]!                    \n"
+      "vld1.8     {q2}, [%4]!                    \n"
+      "vld1.8     {q3}, [%5]!                    \n"
+      "subs       %2, %2, #4                     \n"
+      "vpaddl.u8  q0, q0                         \n"
+      "vpadal.u8  q0, q1                         \n"
+      "vpadal.u8  q0, q2                         \n"
+      "vpadal.u8  q0, q3                         \n"
+      "vpaddl.u16 q0, q0                         \n"
+      "vrshrn.u32 d0, q0, #4                     \n"  // divide by 16 w/rounding
+      "vmovn.u16  d0, q0                         \n"
+      "vst1.32    {d0[0]}, [%1]!                 \n"
+      "bgt        1b                             \n"
+      : "+r"(src_ptr),    // %0
+        "+r"(dst_ptr),    // %1
+        "+r"(dst_width),  // %2
+        "+r"(src_ptr1),   // %3
+        "+r"(src_ptr2),   // %4
+        "+r"(src_ptr3)    // %5
+      :
+      : "q0", "q1", "q2", "q3", "memory", "cc");
 }
 
 // Down scale from 4 to 3 pixels. Use the neon multilane read/write
 // to load up every 4th pixel into 4 different registers.
 // Point samples 32 pixels to 24 pixels.
-void ScaleRowDown34_NEON(const uint8* src_ptr,
+void ScaleRowDown34_NEON(const uint8_t* src_ptr,
                          ptrdiff_t src_stride,
-                         uint8* dst_ptr, int dst_width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d1, d2, d3}, [%0]!      \n" // src line 0
-    "subs       %2, %2, #24                  \n"
-    "vmov       d2, d3                       \n" // order d0, d1, d2
-    MEMACCESS(1)
-    "vst3.8     {d0, d1, d2}, [%1]!          \n"
-    "bgt        1b                           \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst_ptr),          // %1
-    "+r"(dst_width)         // %2
-  :
-  : "d0", "d1", "d2", "d3", "memory", "cc"
-  );
+                         uint8_t* dst_ptr,
+                         int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "1:                                        \n"
+      "vld4.8     {d0, d1, d2, d3}, [%0]!        \n"  // src line 0
+      "subs       %2, %2, #24                    \n"
+      "vmov       d2, d3                         \n"  // order d0, d1, d2
+      "vst3.8     {d0, d1, d2}, [%1]!            \n"
+      "bgt        1b                             \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(dst_width)  // %2
+      :
+      : "d0", "d1", "d2", "d3", "memory", "cc");
 }
 
-void ScaleRowDown34_0_Box_NEON(const uint8* src_ptr,
+void ScaleRowDown34_0_Box_NEON(const uint8_t* src_ptr,
                                ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "vmov.u8    d24, #3                        \n"
-    "add        %3, %0                         \n"
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8       {d0, d1, d2, d3}, [%0]!      \n" // src line 0
-    MEMACCESS(3)
-    "vld4.8       {d4, d5, d6, d7}, [%3]!      \n" // src line 1
-    "subs         %2, %2, #24                  \n"
+                               uint8_t* dst_ptr,
+                               int dst_width) {
+  asm volatile(
+      "vmov.u8    d24, #3                        \n"
+      "add        %3, %0                         \n"
+      "1:                                        \n"
+      "vld4.8       {d0, d1, d2, d3}, [%0]!      \n"  // src line 0
+      "vld4.8       {d4, d5, d6, d7}, [%3]!      \n"  // src line 1
+      "subs         %2, %2, #24                  \n"
 
-    // filter src line 0 with src line 1
-    // expand chars to shorts to allow for room
-    // when adding lines together
-    "vmovl.u8     q8, d4                       \n"
-    "vmovl.u8     q9, d5                       \n"
-    "vmovl.u8     q10, d6                      \n"
-    "vmovl.u8     q11, d7                      \n"
+      // filter src line 0 with src line 1
+      // expand chars to shorts to allow for room
+      // when adding lines together
+      "vmovl.u8     q8, d4                       \n"
+      "vmovl.u8     q9, d5                       \n"
+      "vmovl.u8     q10, d6                      \n"
+      "vmovl.u8     q11, d7                      \n"
 
-    // 3 * line_0 + line_1
-    "vmlal.u8     q8, d0, d24                  \n"
-    "vmlal.u8     q9, d1, d24                  \n"
-    "vmlal.u8     q10, d2, d24                 \n"
-    "vmlal.u8     q11, d3, d24                 \n"
+      // 3 * line_0 + line_1
+      "vmlal.u8     q8, d0, d24                  \n"
+      "vmlal.u8     q9, d1, d24                  \n"
+      "vmlal.u8     q10, d2, d24                 \n"
+      "vmlal.u8     q11, d3, d24                 \n"
 
-    // (3 * line_0 + line_1) >> 2
-    "vqrshrn.u16  d0, q8, #2                   \n"
-    "vqrshrn.u16  d1, q9, #2                   \n"
-    "vqrshrn.u16  d2, q10, #2                  \n"
-    "vqrshrn.u16  d3, q11, #2                  \n"
+      // (3 * line_0 + line_1) >> 2
+      "vqrshrn.u16  d0, q8, #2                   \n"
+      "vqrshrn.u16  d1, q9, #2                   \n"
+      "vqrshrn.u16  d2, q10, #2                  \n"
+      "vqrshrn.u16  d3, q11, #2                  \n"
 
-    // a0 = (src[0] * 3 + s[1] * 1) >> 2
-    "vmovl.u8     q8, d1                       \n"
-    "vmlal.u8     q8, d0, d24                  \n"
-    "vqrshrn.u16  d0, q8, #2                   \n"
+      // a0 = (src[0] * 3 + s[1] * 1) >> 2
+      "vmovl.u8     q8, d1                       \n"
+      "vmlal.u8     q8, d0, d24                  \n"
+      "vqrshrn.u16  d0, q8, #2                   \n"
 
-    // a1 = (src[1] * 1 + s[2] * 1) >> 1
-    "vrhadd.u8    d1, d1, d2                   \n"
+      // a1 = (src[1] * 1 + s[2] * 1) >> 1
+      "vrhadd.u8    d1, d1, d2                   \n"
 
-    // a2 = (src[2] * 1 + s[3] * 3) >> 2
-    "vmovl.u8     q8, d2                       \n"
-    "vmlal.u8     q8, d3, d24                  \n"
-    "vqrshrn.u16  d2, q8, #2                   \n"
+      // a2 = (src[2] * 1 + s[3] * 3) >> 2
+      "vmovl.u8     q8, d2                       \n"
+      "vmlal.u8     q8, d3, d24                  \n"
+      "vqrshrn.u16  d2, q8, #2                   \n"
 
-    MEMACCESS(1)
-    "vst3.8       {d0, d1, d2}, [%1]!          \n"
+      "vst3.8       {d0, d1, d2}, [%1]!          \n"
 
-    "bgt          1b                           \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst_ptr),          // %1
-    "+r"(dst_width),        // %2
-    "+r"(src_stride)        // %3
-  :
-  : "q0", "q1", "q2", "q3", "q8", "q9", "q10", "q11", "d24", "memory", "cc"
-  );
+      "bgt          1b                           \n"
+      : "+r"(src_ptr),    // %0
+        "+r"(dst_ptr),    // %1
+        "+r"(dst_width),  // %2
+        "+r"(src_stride)  // %3
+      :
+      : "q0", "q1", "q2", "q3", "q8", "q9", "q10", "q11", "d24", "memory",
+        "cc");
 }
 
-void ScaleRowDown34_1_Box_NEON(const uint8* src_ptr,
+void ScaleRowDown34_1_Box_NEON(const uint8_t* src_ptr,
                                ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "vmov.u8    d24, #3                        \n"
-    "add        %3, %0                         \n"
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8       {d0, d1, d2, d3}, [%0]!      \n" // src line 0
-    MEMACCESS(3)
-    "vld4.8       {d4, d5, d6, d7}, [%3]!      \n" // src line 1
-    "subs         %2, %2, #24                  \n"
-    // average src line 0 with src line 1
-    "vrhadd.u8    q0, q0, q2                   \n"
-    "vrhadd.u8    q1, q1, q3                   \n"
+                               uint8_t* dst_ptr,
+                               int dst_width) {
+  asm volatile(
+      "vmov.u8    d24, #3                        \n"
+      "add        %3, %0                         \n"
+      "1:                                        \n"
+      "vld4.8       {d0, d1, d2, d3}, [%0]!      \n"  // src line 0
+      "vld4.8       {d4, d5, d6, d7}, [%3]!      \n"  // src line 1
+      "subs         %2, %2, #24                  \n"
+      // average src line 0 with src line 1
+      "vrhadd.u8    q0, q0, q2                   \n"
+      "vrhadd.u8    q1, q1, q3                   \n"
 
-    // a0 = (src[0] * 3 + s[1] * 1) >> 2
-    "vmovl.u8     q3, d1                       \n"
-    "vmlal.u8     q3, d0, d24                  \n"
-    "vqrshrn.u16  d0, q3, #2                   \n"
+      // a0 = (src[0] * 3 + s[1] * 1) >> 2
+      "vmovl.u8     q3, d1                       \n"
+      "vmlal.u8     q3, d0, d24                  \n"
+      "vqrshrn.u16  d0, q3, #2                   \n"
 
-    // a1 = (src[1] * 1 + s[2] * 1) >> 1
-    "vrhadd.u8    d1, d1, d2                   \n"
+      // a1 = (src[1] * 1 + s[2] * 1) >> 1
+      "vrhadd.u8    d1, d1, d2                   \n"
 
-    // a2 = (src[2] * 1 + s[3] * 3) >> 2
-    "vmovl.u8     q3, d2                       \n"
-    "vmlal.u8     q3, d3, d24                  \n"
-    "vqrshrn.u16  d2, q3, #2                   \n"
+      // a2 = (src[2] * 1 + s[3] * 3) >> 2
+      "vmovl.u8     q3, d2                       \n"
+      "vmlal.u8     q3, d3, d24                  \n"
+      "vqrshrn.u16  d2, q3, #2                   \n"
 
-    MEMACCESS(1)
-    "vst3.8       {d0, d1, d2}, [%1]!          \n"
-    "bgt          1b                           \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst_ptr),          // %1
-    "+r"(dst_width),        // %2
-    "+r"(src_stride)        // %3
-  :
-  : "r4", "q0", "q1", "q2", "q3", "d24", "memory", "cc"
-  );
+      "vst3.8       {d0, d1, d2}, [%1]!          \n"
+      "bgt          1b                           \n"
+      : "+r"(src_ptr),    // %0
+        "+r"(dst_ptr),    // %1
+        "+r"(dst_width),  // %2
+        "+r"(src_stride)  // %3
+      :
+      : "r4", "q0", "q1", "q2", "q3", "d24", "memory", "cc");
 }
 
 #define HAS_SCALEROWDOWN38_NEON
-static uvec8 kShuf38 =
-  { 0, 3, 6, 8, 11, 14, 16, 19, 22, 24, 27, 30, 0, 0, 0, 0 };
-static uvec8 kShuf38_2 =
-  { 0, 8, 16, 2, 10, 17, 4, 12, 18, 6, 14, 19, 0, 0, 0, 0 };
-static vec16 kMult38_Div6 =
-  { 65536 / 12, 65536 / 12, 65536 / 12, 65536 / 12,
-    65536 / 12, 65536 / 12, 65536 / 12, 65536 / 12 };
-static vec16 kMult38_Div9 =
-  { 65536 / 18, 65536 / 18, 65536 / 18, 65536 / 18,
-    65536 / 18, 65536 / 18, 65536 / 18, 65536 / 18 };
+static const uvec8 kShuf38 = {0,  3,  6,  8,  11, 14, 16, 19,
+                              22, 24, 27, 30, 0,  0,  0,  0};
+static const uvec8 kShuf38_2 = {0,  8, 16, 2,  10, 17, 4, 12,
+                                18, 6, 14, 19, 0,  0,  0, 0};
+static const vec16 kMult38_Div6 = {65536 / 12, 65536 / 12, 65536 / 12,
+                                   65536 / 12, 65536 / 12, 65536 / 12,
+                                   65536 / 12, 65536 / 12};
+static const vec16 kMult38_Div9 = {65536 / 18, 65536 / 18, 65536 / 18,
+                                   65536 / 18, 65536 / 18, 65536 / 18,
+                                   65536 / 18, 65536 / 18};
 
 // 32 -> 12
-void ScaleRowDown38_NEON(const uint8* src_ptr,
+void ScaleRowDown38_NEON(const uint8_t* src_ptr,
                          ptrdiff_t src_stride,
-                         uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    MEMACCESS(3)
-    "vld1.8     {q3}, [%3]                     \n"
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {d0, d1, d2, d3}, [%0]!        \n"
-    "subs       %2, %2, #12                    \n"
-    "vtbl.u8    d4, {d0, d1, d2, d3}, d6       \n"
-    "vtbl.u8    d5, {d0, d1, d2, d3}, d7       \n"
-    MEMACCESS(1)
-    "vst1.8     {d4}, [%1]!                    \n"
-    MEMACCESS(1)
-    "vst1.32    {d5[0]}, [%1]!                 \n"
-    "bgt        1b                             \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst_ptr),          // %1
-    "+r"(dst_width)         // %2
-  : "r"(&kShuf38)           // %3
-  : "d0", "d1", "d2", "d3", "d4", "d5", "memory", "cc"
-  );
+                         uint8_t* dst_ptr,
+                         int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "vld1.8     {q3}, [%3]                     \n"
+      "1:                                        \n"
+      "vld1.8     {d0, d1, d2, d3}, [%0]!        \n"
+      "subs       %2, %2, #12                    \n"
+      "vtbl.u8    d4, {d0, d1, d2, d3}, d6       \n"
+      "vtbl.u8    d5, {d0, d1, d2, d3}, d7       \n"
+      "vst1.8     {d4}, [%1]!                    \n"
+      "vst1.32    {d5[0]}, [%1]!                 \n"
+      "bgt        1b                             \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(dst_width)  // %2
+      : "r"(&kShuf38)    // %3
+      : "d0", "d1", "d2", "d3", "d4", "d5", "memory", "cc");
 }
 
 // 32x3 -> 12x1
-void OMITFP ScaleRowDown38_3_Box_NEON(const uint8* src_ptr,
+void OMITFP ScaleRowDown38_3_Box_NEON(const uint8_t* src_ptr,
                                       ptrdiff_t src_stride,
-                                      uint8* dst_ptr, int dst_width) {
-  const uint8* src_ptr1 = src_ptr + src_stride * 2;
+                                      uint8_t* dst_ptr,
+                                      int dst_width) {
+  const uint8_t* src_ptr1 = src_ptr + src_stride * 2;
 
-  asm volatile (
-    MEMACCESS(5)
-    "vld1.16    {q13}, [%5]                    \n"
-    MEMACCESS(6)
-    "vld1.8     {q14}, [%6]                    \n"
-    MEMACCESS(7)
-    "vld1.8     {q15}, [%7]                    \n"
-    "add        %3, %0                         \n"
-  "1:                                          \n"
+  asm volatile(
+      "vld1.16    {q13}, [%5]                    \n"
+      "vld1.8     {q14}, [%6]                    \n"
+      "vld1.8     {q15}, [%7]                    \n"
+      "add        %3, %0                         \n"
+      "1:                                        \n"
 
-    // d0 = 00 40 01 41 02 42 03 43
-    // d1 = 10 50 11 51 12 52 13 53
-    // d2 = 20 60 21 61 22 62 23 63
-    // d3 = 30 70 31 71 32 72 33 73
-    MEMACCESS(0)
-    "vld4.8       {d0, d1, d2, d3}, [%0]!      \n"
-    MEMACCESS(3)
-    "vld4.8       {d4, d5, d6, d7}, [%3]!      \n"
-    MEMACCESS(4)
-    "vld4.8       {d16, d17, d18, d19}, [%4]!  \n"
-    "subs         %2, %2, #12                  \n"
+      // d0 = 00 40 01 41 02 42 03 43
+      // d1 = 10 50 11 51 12 52 13 53
+      // d2 = 20 60 21 61 22 62 23 63
+      // d3 = 30 70 31 71 32 72 33 73
+      "vld4.8       {d0, d1, d2, d3}, [%0]!      \n"
+      "vld4.8       {d4, d5, d6, d7}, [%3]!      \n"
+      "vld4.8       {d16, d17, d18, d19}, [%4]!  \n"
+      "subs         %2, %2, #12                  \n"
 
-    // Shuffle the input data around to get align the data
-    //  so adjacent data can be added. 0,1 - 2,3 - 4,5 - 6,7
-    // d0 = 00 10 01 11 02 12 03 13
-    // d1 = 40 50 41 51 42 52 43 53
-    "vtrn.u8      d0, d1                       \n"
-    "vtrn.u8      d4, d5                       \n"
-    "vtrn.u8      d16, d17                     \n"
+      // Shuffle the input data around to align the data
+      //  so adjacent data can be added. 0,1 - 2,3 - 4,5 - 6,7
+      // d0 = 00 10 01 11 02 12 03 13
+      // d1 = 40 50 41 51 42 52 43 53
+      "vtrn.u8      d0, d1                       \n"
+      "vtrn.u8      d4, d5                       \n"
+      "vtrn.u8      d16, d17                     \n"
 
-    // d2 = 20 30 21 31 22 32 23 33
-    // d3 = 60 70 61 71 62 72 63 73
-    "vtrn.u8      d2, d3                       \n"
-    "vtrn.u8      d6, d7                       \n"
-    "vtrn.u8      d18, d19                     \n"
+      // d2 = 20 30 21 31 22 32 23 33
+      // d3 = 60 70 61 71 62 72 63 73
+      "vtrn.u8      d2, d3                       \n"
+      "vtrn.u8      d6, d7                       \n"
+      "vtrn.u8      d18, d19                     \n"
 
-    // d0 = 00+10 01+11 02+12 03+13
-    // d2 = 40+50 41+51 42+52 43+53
-    "vpaddl.u8    q0, q0                       \n"
-    "vpaddl.u8    q2, q2                       \n"
-    "vpaddl.u8    q8, q8                       \n"
+      // d0 = 00+10 01+11 02+12 03+13
+      // d2 = 40+50 41+51 42+52 43+53
+      "vpaddl.u8    q0, q0                       \n"
+      "vpaddl.u8    q2, q2                       \n"
+      "vpaddl.u8    q8, q8                       \n"
 
-    // d3 = 60+70 61+71 62+72 63+73
-    "vpaddl.u8    d3, d3                       \n"
-    "vpaddl.u8    d7, d7                       \n"
-    "vpaddl.u8    d19, d19                     \n"
+      // d3 = 60+70 61+71 62+72 63+73
+      "vpaddl.u8    d3, d3                       \n"
+      "vpaddl.u8    d7, d7                       \n"
+      "vpaddl.u8    d19, d19                     \n"
 
-    // combine source lines
-    "vadd.u16     q0, q2                       \n"
-    "vadd.u16     q0, q8                       \n"
-    "vadd.u16     d4, d3, d7                   \n"
-    "vadd.u16     d4, d19                      \n"
+      // combine source lines
+      "vadd.u16     q0, q2                       \n"
+      "vadd.u16     q0, q8                       \n"
+      "vadd.u16     d4, d3, d7                   \n"
+      "vadd.u16     d4, d19                      \n"
 
-    // dst_ptr[3] = (s[6 + st * 0] + s[7 + st * 0]
-    //             + s[6 + st * 1] + s[7 + st * 1]
-    //             + s[6 + st * 2] + s[7 + st * 2]) / 6
-    "vqrdmulh.s16 q2, q2, q13                  \n"
-    "vmovn.u16    d4, q2                       \n"
+      // dst_ptr[3] = (s[6 + st * 0] + s[7 + st * 0]
+      //             + s[6 + st * 1] + s[7 + st * 1]
+      //             + s[6 + st * 2] + s[7 + st * 2]) / 6
+      "vqrdmulh.s16 q2, q2, q13                  \n"
+      "vmovn.u16    d4, q2                       \n"
 
-    // Shuffle 2,3 reg around so that 2 can be added to the
-    //  0,1 reg and 3 can be added to the 4,5 reg. This
-    //  requires expanding from u8 to u16 as the 0,1 and 4,5
-    //  registers are already expanded. Then do transposes
-    //  to get aligned.
-    // q2 = xx 20 xx 30 xx 21 xx 31 xx 22 xx 32 xx 23 xx 33
-    "vmovl.u8     q1, d2                       \n"
-    "vmovl.u8     q3, d6                       \n"
-    "vmovl.u8     q9, d18                      \n"
+      // Shuffle 2,3 reg around so that 2 can be added to the
+      //  0,1 reg and 3 can be added to the 4,5 reg. This
+      //  requires expanding from u8 to u16 as the 0,1 and 4,5
+      //  registers are already expanded. Then do transposes
+      //  to get aligned.
+      // q2 = xx 20 xx 30 xx 21 xx 31 xx 22 xx 32 xx 23 xx 33
+      "vmovl.u8     q1, d2                       \n"
+      "vmovl.u8     q3, d6                       \n"
+      "vmovl.u8     q9, d18                      \n"
 
-    // combine source lines
-    "vadd.u16     q1, q3                       \n"
-    "vadd.u16     q1, q9                       \n"
+      // combine source lines
+      "vadd.u16     q1, q3                       \n"
+      "vadd.u16     q1, q9                       \n"
 
-    // d4 = xx 20 xx 30 xx 22 xx 32
-    // d5 = xx 21 xx 31 xx 23 xx 33
-    "vtrn.u32     d2, d3                       \n"
+      // d4 = xx 20 xx 30 xx 22 xx 32
+      // d5 = xx 21 xx 31 xx 23 xx 33
+      "vtrn.u32     d2, d3                       \n"
 
-    // d4 = xx 20 xx 21 xx 22 xx 23
-    // d5 = xx 30 xx 31 xx 32 xx 33
-    "vtrn.u16     d2, d3                       \n"
+      // d4 = xx 20 xx 21 xx 22 xx 23
+      // d5 = xx 30 xx 31 xx 32 xx 33
+      "vtrn.u16     d2, d3                       \n"
 
-    // 0+1+2, 3+4+5
-    "vadd.u16     q0, q1                       \n"
+      // 0+1+2, 3+4+5
+      "vadd.u16     q0, q1                       \n"
 
-    // Need to divide, but can't downshift as the the value
-    //  isn't a power of 2. So multiply by 65536 / n
-    //  and take the upper 16 bits.
-    "vqrdmulh.s16 q0, q0, q15                  \n"
+      // Need to divide, but can't downshift as the value
+      //  isn't a power of 2. So multiply by 65536 / n
+      //  and take the upper 16 bits.
+      "vqrdmulh.s16 q0, q0, q15                  \n"
 
-    // Align for table lookup, vtbl requires registers to
-    //  be adjacent
-    "vmov.u8      d2, d4                       \n"
+      // Align for table lookup, vtbl requires registers to
+      //  be adjacent
+      "vmov.u8      d2, d4                       \n"
 
-    "vtbl.u8      d3, {d0, d1, d2}, d28        \n"
-    "vtbl.u8      d4, {d0, d1, d2}, d29        \n"
+      "vtbl.u8      d3, {d0, d1, d2}, d28        \n"
+      "vtbl.u8      d4, {d0, d1, d2}, d29        \n"
 
-    MEMACCESS(1)
-    "vst1.8       {d3}, [%1]!                  \n"
-    MEMACCESS(1)
-    "vst1.32      {d4[0]}, [%1]!               \n"
-    "bgt          1b                           \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst_ptr),          // %1
-    "+r"(dst_width),        // %2
-    "+r"(src_stride),       // %3
-    "+r"(src_ptr1)          // %4
-  : "r"(&kMult38_Div6),     // %5
-    "r"(&kShuf38_2),        // %6
-    "r"(&kMult38_Div9)      // %7
-  : "q0", "q1", "q2", "q3", "q8", "q9", "q13", "q14", "q15", "memory", "cc"
-  );
+      "vst1.8       {d3}, [%1]!                  \n"
+      "vst1.32      {d4[0]}, [%1]!               \n"
+      "bgt          1b                           \n"
+      : "+r"(src_ptr),       // %0
+        "+r"(dst_ptr),       // %1
+        "+r"(dst_width),     // %2
+        "+r"(src_stride),    // %3
+        "+r"(src_ptr1)       // %4
+      : "r"(&kMult38_Div6),  // %5
+        "r"(&kShuf38_2),     // %6
+        "r"(&kMult38_Div9)   // %7
+      : "q0", "q1", "q2", "q3", "q8", "q9", "q13", "q14", "q15", "memory",
+        "cc");
 }
 
 // 32x2 -> 12x1
-void ScaleRowDown38_2_Box_NEON(const uint8* src_ptr,
+void ScaleRowDown38_2_Box_NEON(const uint8_t* src_ptr,
                                ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    MEMACCESS(4)
-    "vld1.16    {q13}, [%4]                    \n"
-    MEMACCESS(5)
-    "vld1.8     {q14}, [%5]                    \n"
-    "add        %3, %0                         \n"
-  "1:                                          \n"
+                               uint8_t* dst_ptr,
+                               int dst_width) {
+  asm volatile(
+      "vld1.16    {q13}, [%4]                    \n"
+      "vld1.8     {q14}, [%5]                    \n"
+      "add        %3, %0                         \n"
+      "1:                                        \n"
 
-    // d0 = 00 40 01 41 02 42 03 43
-    // d1 = 10 50 11 51 12 52 13 53
-    // d2 = 20 60 21 61 22 62 23 63
-    // d3 = 30 70 31 71 32 72 33 73
-    MEMACCESS(0)
-    "vld4.8       {d0, d1, d2, d3}, [%0]!      \n"
-    MEMACCESS(3)
-    "vld4.8       {d4, d5, d6, d7}, [%3]!      \n"
-    "subs         %2, %2, #12                  \n"
+      // d0 = 00 40 01 41 02 42 03 43
+      // d1 = 10 50 11 51 12 52 13 53
+      // d2 = 20 60 21 61 22 62 23 63
+      // d3 = 30 70 31 71 32 72 33 73
+      "vld4.8       {d0, d1, d2, d3}, [%0]!      \n"
+      "vld4.8       {d4, d5, d6, d7}, [%3]!      \n"
+      "subs         %2, %2, #12                  \n"
 
-    // Shuffle the input data around to get align the data
-    //  so adjacent data can be added. 0,1 - 2,3 - 4,5 - 6,7
-    // d0 = 00 10 01 11 02 12 03 13
-    // d1 = 40 50 41 51 42 52 43 53
-    "vtrn.u8      d0, d1                       \n"
-    "vtrn.u8      d4, d5                       \n"
+      // Shuffle the input data around to align the data
+      //  so adjacent data can be added. 0,1 - 2,3 - 4,5 - 6,7
+      // d0 = 00 10 01 11 02 12 03 13
+      // d1 = 40 50 41 51 42 52 43 53
+      "vtrn.u8      d0, d1                       \n"
+      "vtrn.u8      d4, d5                       \n"
 
-    // d2 = 20 30 21 31 22 32 23 33
-    // d3 = 60 70 61 71 62 72 63 73
-    "vtrn.u8      d2, d3                       \n"
-    "vtrn.u8      d6, d7                       \n"
+      // d2 = 20 30 21 31 22 32 23 33
+      // d3 = 60 70 61 71 62 72 63 73
+      "vtrn.u8      d2, d3                       \n"
+      "vtrn.u8      d6, d7                       \n"
 
-    // d0 = 00+10 01+11 02+12 03+13
-    // d2 = 40+50 41+51 42+52 43+53
-    "vpaddl.u8    q0, q0                       \n"
-    "vpaddl.u8    q2, q2                       \n"
+      // d0 = 00+10 01+11 02+12 03+13
+      // d2 = 40+50 41+51 42+52 43+53
+      "vpaddl.u8    q0, q0                       \n"
+      "vpaddl.u8    q2, q2                       \n"
 
-    // d3 = 60+70 61+71 62+72 63+73
-    "vpaddl.u8    d3, d3                       \n"
-    "vpaddl.u8    d7, d7                       \n"
+      // d3 = 60+70 61+71 62+72 63+73
+      "vpaddl.u8    d3, d3                       \n"
+      "vpaddl.u8    d7, d7                       \n"
 
-    // combine source lines
-    "vadd.u16     q0, q2                       \n"
-    "vadd.u16     d4, d3, d7                   \n"
+      // combine source lines
+      "vadd.u16     q0, q2                       \n"
+      "vadd.u16     d4, d3, d7                   \n"
 
-    // dst_ptr[3] = (s[6] + s[7] + s[6+st] + s[7+st]) / 4
-    "vqrshrn.u16  d4, q2, #2                   \n"
+      // dst_ptr[3] = (s[6] + s[7] + s[6+st] + s[7+st]) / 4
+      "vqrshrn.u16  d4, q2, #2                   \n"
 
-    // Shuffle 2,3 reg around so that 2 can be added to the
-    //  0,1 reg and 3 can be added to the 4,5 reg. This
-    //  requires expanding from u8 to u16 as the 0,1 and 4,5
-    //  registers are already expanded. Then do transposes
-    //  to get aligned.
-    // q2 = xx 20 xx 30 xx 21 xx 31 xx 22 xx 32 xx 23 xx 33
-    "vmovl.u8     q1, d2                       \n"
-    "vmovl.u8     q3, d6                       \n"
+      // Shuffle 2,3 reg around so that 2 can be added to the
+      //  0,1 reg and 3 can be added to the 4,5 reg. This
+      //  requires expanding from u8 to u16 as the 0,1 and 4,5
+      //  registers are already expanded. Then do transposes
+      //  to get aligned.
+      // q2 = xx 20 xx 30 xx 21 xx 31 xx 22 xx 32 xx 23 xx 33
+      "vmovl.u8     q1, d2                       \n"
+      "vmovl.u8     q3, d6                       \n"
 
-    // combine source lines
-    "vadd.u16     q1, q3                       \n"
+      // combine source lines
+      "vadd.u16     q1, q3                       \n"
 
-    // d4 = xx 20 xx 30 xx 22 xx 32
-    // d5 = xx 21 xx 31 xx 23 xx 33
-    "vtrn.u32     d2, d3                       \n"
+      // d4 = xx 20 xx 30 xx 22 xx 32
+      // d5 = xx 21 xx 31 xx 23 xx 33
+      "vtrn.u32     d2, d3                       \n"
 
-    // d4 = xx 20 xx 21 xx 22 xx 23
-    // d5 = xx 30 xx 31 xx 32 xx 33
-    "vtrn.u16     d2, d3                       \n"
+      // d4 = xx 20 xx 21 xx 22 xx 23
+      // d5 = xx 30 xx 31 xx 32 xx 33
+      "vtrn.u16     d2, d3                       \n"
 
-    // 0+1+2, 3+4+5
-    "vadd.u16     q0, q1                       \n"
+      // 0+1+2, 3+4+5
+      "vadd.u16     q0, q1                       \n"
 
-    // Need to divide, but can't downshift as the the value
-    //  isn't a power of 2. So multiply by 65536 / n
-    //  and take the upper 16 bits.
-    "vqrdmulh.s16 q0, q0, q13                  \n"
+      // Need to divide, but can't downshift as the value
+      //  isn't a power of 2. So multiply by 65536 / n
+      //  and take the upper 16 bits.
+      "vqrdmulh.s16 q0, q0, q13                  \n"
 
-    // Align for table lookup, vtbl requires registers to
-    //  be adjacent
-    "vmov.u8      d2, d4                       \n"
+      // Align for table lookup, vtbl requires registers to
+      //  be adjacent
+      "vmov.u8      d2, d4                       \n"
 
-    "vtbl.u8      d3, {d0, d1, d2}, d28        \n"
-    "vtbl.u8      d4, {d0, d1, d2}, d29        \n"
+      "vtbl.u8      d3, {d0, d1, d2}, d28        \n"
+      "vtbl.u8      d4, {d0, d1, d2}, d29        \n"
 
-    MEMACCESS(1)
-    "vst1.8       {d3}, [%1]!                  \n"
-    MEMACCESS(1)
-    "vst1.32      {d4[0]}, [%1]!               \n"
-    "bgt          1b                           \n"
-  : "+r"(src_ptr),       // %0
-    "+r"(dst_ptr),       // %1
-    "+r"(dst_width),     // %2
-    "+r"(src_stride)     // %3
-  : "r"(&kMult38_Div6),  // %4
-    "r"(&kShuf38_2)      // %5
-  : "q0", "q1", "q2", "q3", "q13", "q14", "memory", "cc"
-  );
+      "vst1.8       {d3}, [%1]!                  \n"
+      "vst1.32      {d4[0]}, [%1]!               \n"
+      "bgt          1b                           \n"
+      : "+r"(src_ptr),       // %0
+        "+r"(dst_ptr),       // %1
+        "+r"(dst_width),     // %2
+        "+r"(src_stride)     // %3
+      : "r"(&kMult38_Div6),  // %4
+        "r"(&kShuf38_2)      // %5
+      : "q0", "q1", "q2", "q3", "q13", "q14", "memory", "cc");
 }
 
-void ScaleAddRows_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                    uint16* dst_ptr, int src_width, int src_height) {
-  const uint8* src_tmp;
-  asm volatile (
-  "1:                                          \n"
-    "mov       %0, %1                          \n"
-    "mov       r12, %5                         \n"
-    "veor      q2, q2, q2                      \n"
-    "veor      q3, q3, q3                      \n"
-  "2:                                          \n"
-    // load 16 pixels into q0
-    MEMACCESS(0)
-    "vld1.8     {q0}, [%0], %3                 \n"
-    "vaddw.u8   q3, q3, d1                     \n"
-    "vaddw.u8   q2, q2, d0                     \n"
-    "subs       r12, r12, #1                   \n"
-    "bgt        2b                             \n"
-    MEMACCESS(2)
-    "vst1.16    {q2, q3}, [%2]!                \n"  // store pixels
-    "add        %1, %1, #16                    \n"
-    "subs       %4, %4, #16                    \n"  // 16 processed per loop
-    "bgt        1b                             \n"
-  : "=&r"(src_tmp),    // %0
-    "+r"(src_ptr),     // %1
-    "+r"(dst_ptr),     // %2
-    "+r"(src_stride),  // %3
-    "+r"(src_width),   // %4
-    "+r"(src_height)   // %5
-  :
-  : "memory", "cc", "r12", "q0", "q1", "q2", "q3"  // Clobber List
-  );
+void ScaleAddRows_NEON(const uint8_t* src_ptr,
+                       ptrdiff_t src_stride,
+                       uint16_t* dst_ptr,
+                       int src_width,
+                       int src_height) {
+  const uint8_t* src_tmp;
+  asm volatile(
+      "1:                                        \n"
+      "mov       %0, %1                          \n"
+      "mov       r12, %5                         \n"
+      "veor      q2, q2, q2                      \n"
+      "veor      q3, q3, q3                      \n"
+      "2:                                        \n"
+      // load 16 pixels into q0
+      "vld1.8     {q0}, [%0], %3                 \n"
+      "vaddw.u8   q3, q3, d1                     \n"
+      "vaddw.u8   q2, q2, d0                     \n"
+      "subs       r12, r12, #1                   \n"
+      "bgt        2b                             \n"
+      "vst1.16    {q2, q3}, [%2]!                \n"  // store pixels
+      "add        %1, %1, #16                    \n"
+      "subs       %4, %4, #16                    \n"  // 16 processed per loop
+      "bgt        1b                             \n"
+      : "=&r"(src_tmp),    // %0
+        "+r"(src_ptr),     // %1
+        "+r"(dst_ptr),     // %2
+        "+r"(src_stride),  // %3
+        "+r"(src_width),   // %4
+        "+r"(src_height)   // %5
+      :
+      : "memory", "cc", "r12", "q0", "q1", "q2", "q3"  // Clobber List
+      );
 }
 
 // TODO(Yang Zhang): Investigate less load instructions for
 // the x/dx stepping
-#define LOAD2_DATA8_LANE(n)                                    \
-    "lsr        %5, %3, #16                    \n"             \
-    "add        %6, %1, %5                     \n"             \
-    "add        %3, %3, %4                     \n"             \
-    MEMACCESS(6)                                               \
-    "vld2.8     {d6["#n"], d7["#n"]}, [%6]     \n"
+#define LOAD2_DATA8_LANE(n)                      \
+  "lsr        %5, %3, #16                    \n" \
+  "add        %6, %1, %5                     \n" \
+  "add        %3, %3, %4                     \n" \
+  "vld2.8     {d6[" #n "], d7[" #n "]}, [%6] \n"
 
-// The NEON version mimics this formula:
-// #define BLENDER(a, b, f) (uint8)((int)(a) +
-//    ((int)(f) * ((int)(b) - (int)(a)) >> 16))
+// The NEON version mimics this formula (from row_common.cc):
+// #define BLENDER(a, b, f) (uint8_t)((int)(a) +
+//    ((((int)((f)) * ((int)(b) - (int)(a))) + 0x8000) >> 16))
 
-void ScaleFilterCols_NEON(uint8* dst_ptr, const uint8* src_ptr,
-                          int dst_width, int x, int dx) {
+void ScaleFilterCols_NEON(uint8_t* dst_ptr,
+                          const uint8_t* src_ptr,
+                          int dst_width,
+                          int x,
+                          int dx) {
   int dx_offset[4] = {0, 1, 2, 3};
   int* tmp = dx_offset;
-  const uint8* src_tmp = src_ptr;
+  const uint8_t* src_tmp = src_ptr;
   asm volatile (
     "vdup.32    q0, %3                         \n"  // x
     "vdup.32    q1, %4                         \n"  // dx
@@ -617,7 +594,6 @@
     "vadd.s16   q8, q8, q9                     \n"
     "vmovn.s16  d6, q8                         \n"
 
-    MEMACCESS(0)
     "vst1.8     {d6}, [%0]!                    \n"  // store pixels
     "vadd.s32   q1, q1, q0                     \n"
     "vadd.s32   q2, q2, q0                     \n"
@@ -639,325 +615,299 @@
 #undef LOAD2_DATA8_LANE
 
 // 16x2 -> 16x1
-void ScaleFilterRows_NEON(uint8* dst_ptr,
-                          const uint8* src_ptr, ptrdiff_t src_stride,
-                          int dst_width, int source_y_fraction) {
-  asm volatile (
-    "cmp          %4, #0                       \n"
-    "beq          100f                         \n"
-    "add          %2, %1                       \n"
-    "cmp          %4, #64                      \n"
-    "beq          75f                          \n"
-    "cmp          %4, #128                     \n"
-    "beq          50f                          \n"
-    "cmp          %4, #192                     \n"
-    "beq          25f                          \n"
+void ScaleFilterRows_NEON(uint8_t* dst_ptr,
+                          const uint8_t* src_ptr,
+                          ptrdiff_t src_stride,
+                          int dst_width,
+                          int source_y_fraction) {
+  asm volatile(
+      "cmp          %4, #0                       \n"
+      "beq          100f                         \n"
+      "add          %2, %1                       \n"
+      "cmp          %4, #64                      \n"
+      "beq          75f                          \n"
+      "cmp          %4, #128                     \n"
+      "beq          50f                          \n"
+      "cmp          %4, #192                     \n"
+      "beq          25f                          \n"
 
-    "vdup.8       d5, %4                       \n"
-    "rsb          %4, #256                     \n"
-    "vdup.8       d4, %4                       \n"
-    // General purpose row blend.
-  "1:                                          \n"
-    MEMACCESS(1)
-    "vld1.8       {q0}, [%1]!                  \n"
-    MEMACCESS(2)
-    "vld1.8       {q1}, [%2]!                  \n"
-    "subs         %3, %3, #16                  \n"
-    "vmull.u8     q13, d0, d4                  \n"
-    "vmull.u8     q14, d1, d4                  \n"
-    "vmlal.u8     q13, d2, d5                  \n"
-    "vmlal.u8     q14, d3, d5                  \n"
-    "vrshrn.u16   d0, q13, #8                  \n"
-    "vrshrn.u16   d1, q14, #8                  \n"
-    MEMACCESS(0)
-    "vst1.8       {q0}, [%0]!                  \n"
-    "bgt          1b                           \n"
-    "b            99f                          \n"
+      "vdup.8       d5, %4                       \n"
+      "rsb          %4, #256                     \n"
+      "vdup.8       d4, %4                       \n"
+      // General purpose row blend.
+      "1:                                        \n"
+      "vld1.8       {q0}, [%1]!                  \n"
+      "vld1.8       {q1}, [%2]!                  \n"
+      "subs         %3, %3, #16                  \n"
+      "vmull.u8     q13, d0, d4                  \n"
+      "vmull.u8     q14, d1, d4                  \n"
+      "vmlal.u8     q13, d2, d5                  \n"
+      "vmlal.u8     q14, d3, d5                  \n"
+      "vrshrn.u16   d0, q13, #8                  \n"
+      "vrshrn.u16   d1, q14, #8                  \n"
+      "vst1.8       {q0}, [%0]!                  \n"
+      "bgt          1b                           \n"
+      "b            99f                          \n"
 
-    // Blend 25 / 75.
-  "25:                                         \n"
-    MEMACCESS(1)
-    "vld1.8       {q0}, [%1]!                  \n"
-    MEMACCESS(2)
-    "vld1.8       {q1}, [%2]!                  \n"
-    "subs         %3, %3, #16                  \n"
-    "vrhadd.u8    q0, q1                       \n"
-    "vrhadd.u8    q0, q1                       \n"
-    MEMACCESS(0)
-    "vst1.8       {q0}, [%0]!                  \n"
-    "bgt          25b                          \n"
-    "b            99f                          \n"
+      // Blend 25 / 75.
+      "25:                                       \n"
+      "vld1.8       {q0}, [%1]!                  \n"
+      "vld1.8       {q1}, [%2]!                  \n"
+      "subs         %3, %3, #16                  \n"
+      "vrhadd.u8    q0, q1                       \n"
+      "vrhadd.u8    q0, q1                       \n"
+      "vst1.8       {q0}, [%0]!                  \n"
+      "bgt          25b                          \n"
+      "b            99f                          \n"
 
-    // Blend 50 / 50.
-  "50:                                         \n"
-    MEMACCESS(1)
-    "vld1.8       {q0}, [%1]!                  \n"
-    MEMACCESS(2)
-    "vld1.8       {q1}, [%2]!                  \n"
-    "subs         %3, %3, #16                  \n"
-    "vrhadd.u8    q0, q1                       \n"
-    MEMACCESS(0)
-    "vst1.8       {q0}, [%0]!                  \n"
-    "bgt          50b                          \n"
-    "b            99f                          \n"
+      // Blend 50 / 50.
+      "50:                                       \n"
+      "vld1.8       {q0}, [%1]!                  \n"
+      "vld1.8       {q1}, [%2]!                  \n"
+      "subs         %3, %3, #16                  \n"
+      "vrhadd.u8    q0, q1                       \n"
+      "vst1.8       {q0}, [%0]!                  \n"
+      "bgt          50b                          \n"
+      "b            99f                          \n"
 
-    // Blend 75 / 25.
-  "75:                                         \n"
-    MEMACCESS(1)
-    "vld1.8       {q1}, [%1]!                  \n"
-    MEMACCESS(2)
-    "vld1.8       {q0}, [%2]!                  \n"
-    "subs         %3, %3, #16                  \n"
-    "vrhadd.u8    q0, q1                       \n"
-    "vrhadd.u8    q0, q1                       \n"
-    MEMACCESS(0)
-    "vst1.8       {q0}, [%0]!                  \n"
-    "bgt          75b                          \n"
-    "b            99f                          \n"
+      // Blend 75 / 25.
+      "75:                                       \n"
+      "vld1.8       {q1}, [%1]!                  \n"
+      "vld1.8       {q0}, [%2]!                  \n"
+      "subs         %3, %3, #16                  \n"
+      "vrhadd.u8    q0, q1                       \n"
+      "vrhadd.u8    q0, q1                       \n"
+      "vst1.8       {q0}, [%0]!                  \n"
+      "bgt          75b                          \n"
+      "b            99f                          \n"
 
-    // Blend 100 / 0 - Copy row unchanged.
-  "100:                                        \n"
-    MEMACCESS(1)
-    "vld1.8       {q0}, [%1]!                  \n"
-    "subs         %3, %3, #16                  \n"
-    MEMACCESS(0)
-    "vst1.8       {q0}, [%0]!                  \n"
-    "bgt          100b                         \n"
+      // Blend 100 / 0 - Copy row unchanged.
+      "100:                                      \n"
+      "vld1.8       {q0}, [%1]!                  \n"
+      "subs         %3, %3, #16                  \n"
+      "vst1.8       {q0}, [%0]!                  \n"
+      "bgt          100b                         \n"
 
-  "99:                                         \n"
-    MEMACCESS(0)
-    "vst1.8       {d1[7]}, [%0]                \n"
-  : "+r"(dst_ptr),          // %0
-    "+r"(src_ptr),          // %1
-    "+r"(src_stride),       // %2
-    "+r"(dst_width),        // %3
-    "+r"(source_y_fraction) // %4
-  :
-  : "q0", "q1", "d4", "d5", "q13", "q14", "memory", "cc"
-  );
+      "99:                                       \n"
+      "vst1.8       {d1[7]}, [%0]                \n"
+      : "+r"(dst_ptr),           // %0
+        "+r"(src_ptr),           // %1
+        "+r"(src_stride),        // %2
+        "+r"(dst_width),         // %3
+        "+r"(source_y_fraction)  // %4
+      :
+      : "q0", "q1", "d4", "d5", "q13", "q14", "memory", "cc");
 }
 
-void ScaleARGBRowDown2_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst, int dst_width) {
-  asm volatile (
-  "1:                                          \n"
-    // load even pixels into q0, odd into q1
-    MEMACCESS(0)
-    "vld2.32    {q0, q1}, [%0]!                \n"
-    MEMACCESS(0)
-    "vld2.32    {q2, q3}, [%0]!                \n"
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop
-    MEMACCESS(1)
-    "vst1.8     {q1}, [%1]!                    \n"  // store odd pixels
-    MEMACCESS(1)
-    "vst1.8     {q3}, [%1]!                    \n"
-    "bgt        1b                             \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst),              // %1
-    "+r"(dst_width)         // %2
-  :
-  : "memory", "cc", "q0", "q1", "q2", "q3"  // Clobber List
-  );
+void ScaleARGBRowDown2_NEON(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst,
+                            int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "1:                                        \n"
+      "vld4.32    {d0, d2, d4, d6}, [%0]!        \n"  // load 8 ARGB pixels.
+      "vld4.32    {d1, d3, d5, d7}, [%0]!        \n"  // load next 8 ARGB
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop
+      "vmov       q2, q1                         \n"  // load next 8 ARGB
+      "vst2.32    {q2, q3}, [%1]!                \n"  // store odd pixels
+      "bgt        1b                             \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst),       // %1
+        "+r"(dst_width)  // %2
+      :
+      : "memory", "cc", "q0", "q1", "q2", "q3"  // Clobber List
+      );
 }
 
-void ScaleARGBRowDown2Linear_NEON(const uint8* src_argb, ptrdiff_t src_stride,
-                                  uint8* dst_argb, int dst_width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d2, d4, d6}, [%0]!        \n"  // load 8 ARGB pixels.
-    MEMACCESS(0)
-    "vld4.8     {d1, d3, d5, d7}, [%0]!        \n"  // load next 8 ARGB pixels.
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop
-    "vpaddl.u8  q0, q0                         \n"  // B 16 bytes -> 8 shorts.
-    "vpaddl.u8  q1, q1                         \n"  // G 16 bytes -> 8 shorts.
-    "vpaddl.u8  q2, q2                         \n"  // R 16 bytes -> 8 shorts.
-    "vpaddl.u8  q3, q3                         \n"  // A 16 bytes -> 8 shorts.
-    "vrshrn.u16 d0, q0, #1                     \n"  // downshift, round and pack
-    "vrshrn.u16 d1, q1, #1                     \n"
-    "vrshrn.u16 d2, q2, #1                     \n"
-    "vrshrn.u16 d3, q3, #1                     \n"
-    MEMACCESS(1)
-    "vst4.8     {d0, d1, d2, d3}, [%1]!        \n"
-    "bgt       1b                              \n"
-  : "+r"(src_argb),         // %0
-    "+r"(dst_argb),         // %1
-    "+r"(dst_width)         // %2
-  :
-  : "memory", "cc", "q0", "q1", "q2", "q3"     // Clobber List
-  );
+//  46:  f964 018d   vld4.32  {d16,d18,d20,d22}, [r4]!
+//  4a:  3e04        subs  r6, #4
+//  4c:  f964 118d   vld4.32  {d17,d19,d21,d23}, [r4]!
+//  50:  ef64 21f4   vorr  q9, q10, q10
+//  54:  f942 038d   vst2.32  {d16-d19}, [r2]!
+//  58:  d1f5        bne.n  46 <ScaleARGBRowDown2_C+0x46>
+
+void ScaleARGBRowDown2Linear_NEON(const uint8_t* src_argb,
+                                  ptrdiff_t src_stride,
+                                  uint8_t* dst_argb,
+                                  int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "1:                                        \n"
+      "vld4.32    {d0, d2, d4, d6}, [%0]!        \n"  // load 8 ARGB pixels.
+      "vld4.32    {d1, d3, d5, d7}, [%0]!        \n"  // load next 8 ARGB
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop
+      "vrhadd.u8  q0, q0, q1                     \n"  // rounding half add
+      "vrhadd.u8  q1, q2, q3                     \n"  // rounding half add
+      "vst2.32    {q0, q1}, [%1]!                \n"
+      "bgt       1b                              \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(dst_width)  // %2
+      :
+      : "memory", "cc", "q0", "q1", "q2", "q3"  // Clobber List
+      );
 }
 
-void ScaleARGBRowDown2Box_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                               uint8* dst, int dst_width) {
-  asm volatile (
-    // change the stride to row 2 pointer
-    "add        %1, %1, %0                     \n"
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld4.8     {d0, d2, d4, d6}, [%0]!        \n"  // load 8 ARGB pixels.
-    MEMACCESS(0)
-    "vld4.8     {d1, d3, d5, d7}, [%0]!        \n"  // load next 8 ARGB pixels.
-    "subs       %3, %3, #8                     \n"  // 8 processed per loop.
-    "vpaddl.u8  q0, q0                         \n"  // B 16 bytes -> 8 shorts.
-    "vpaddl.u8  q1, q1                         \n"  // G 16 bytes -> 8 shorts.
-    "vpaddl.u8  q2, q2                         \n"  // R 16 bytes -> 8 shorts.
-    "vpaddl.u8  q3, q3                         \n"  // A 16 bytes -> 8 shorts.
-    MEMACCESS(1)
-    "vld4.8     {d16, d18, d20, d22}, [%1]!    \n"  // load 8 more ARGB pixels.
-    MEMACCESS(1)
-    "vld4.8     {d17, d19, d21, d23}, [%1]!    \n"  // load last 8 ARGB pixels.
-    "vpadal.u8  q0, q8                         \n"  // B 16 bytes -> 8 shorts.
-    "vpadal.u8  q1, q9                         \n"  // G 16 bytes -> 8 shorts.
-    "vpadal.u8  q2, q10                        \n"  // R 16 bytes -> 8 shorts.
-    "vpadal.u8  q3, q11                        \n"  // A 16 bytes -> 8 shorts.
-    "vrshrn.u16 d0, q0, #2                     \n"  // downshift, round and pack
-    "vrshrn.u16 d1, q1, #2                     \n"
-    "vrshrn.u16 d2, q2, #2                     \n"
-    "vrshrn.u16 d3, q3, #2                     \n"
-    MEMACCESS(2)
-    "vst4.8     {d0, d1, d2, d3}, [%2]!        \n"
-    "bgt        1b                             \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(src_stride),       // %1
-    "+r"(dst),              // %2
-    "+r"(dst_width)         // %3
-  :
-  : "memory", "cc", "q0", "q1", "q2", "q3", "q8", "q9", "q10", "q11"
-  );
+void ScaleARGBRowDown2Box_NEON(const uint8_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst,
+                               int dst_width) {
+  asm volatile(
+      // change the stride to row 2 pointer
+      "add        %1, %1, %0                     \n"
+      "1:                                        \n"
+      "vld4.8     {d0, d2, d4, d6}, [%0]!        \n"  // load 8 ARGB pixels.
+      "vld4.8     {d1, d3, d5, d7}, [%0]!        \n"  // load next 8 ARGB
+      "subs       %3, %3, #8                     \n"  // 8 processed per loop.
+      "vpaddl.u8  q0, q0                         \n"  // B 16 bytes -> 8 shorts.
+      "vpaddl.u8  q1, q1                         \n"  // G 16 bytes -> 8 shorts.
+      "vpaddl.u8  q2, q2                         \n"  // R 16 bytes -> 8 shorts.
+      "vpaddl.u8  q3, q3                         \n"  // A 16 bytes -> 8 shorts.
+      "vld4.8     {d16, d18, d20, d22}, [%1]!    \n"  // load 8 more ARGB
+      "vld4.8     {d17, d19, d21, d23}, [%1]!    \n"  // load last 8 ARGB
+      "vpadal.u8  q0, q8                         \n"  // B 16 bytes -> 8 shorts.
+      "vpadal.u8  q1, q9                         \n"  // G 16 bytes -> 8 shorts.
+      "vpadal.u8  q2, q10                        \n"  // R 16 bytes -> 8 shorts.
+      "vpadal.u8  q3, q11                        \n"  // A 16 bytes -> 8 shorts.
+      "vrshrn.u16 d0, q0, #2                     \n"  // round and pack to bytes
+      "vrshrn.u16 d1, q1, #2                     \n"
+      "vrshrn.u16 d2, q2, #2                     \n"
+      "vrshrn.u16 d3, q3, #2                     \n"
+      "vst4.8     {d0, d1, d2, d3}, [%2]!        \n"
+      "bgt        1b                             \n"
+      : "+r"(src_ptr),     // %0
+        "+r"(src_stride),  // %1
+        "+r"(dst),         // %2
+        "+r"(dst_width)    // %3
+      :
+      : "memory", "cc", "q0", "q1", "q2", "q3", "q8", "q9", "q10", "q11");
 }
 
 // Reads 4 pixels at a time.
 // Alignment requirement: src_argb 4 byte aligned.
-void ScaleARGBRowDownEven_NEON(const uint8* src_argb,  ptrdiff_t src_stride,
-                               int src_stepx, uint8* dst_argb, int dst_width) {
-  asm volatile (
-    "mov        r12, %3, lsl #2                \n"
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.32    {d0[0]}, [%0], r12             \n"
-    MEMACCESS(0)
-    "vld1.32    {d0[1]}, [%0], r12             \n"
-    MEMACCESS(0)
-    "vld1.32    {d1[0]}, [%0], r12             \n"
-    MEMACCESS(0)
-    "vld1.32    {d1[1]}, [%0], r12             \n"
-    "subs       %2, %2, #4                     \n"  // 4 pixels per loop.
-    MEMACCESS(1)
-    "vst1.8     {q0}, [%1]!                    \n"
-    "bgt        1b                             \n"
-  : "+r"(src_argb),    // %0
-    "+r"(dst_argb),    // %1
-    "+r"(dst_width)    // %2
-  : "r"(src_stepx)     // %3
-  : "memory", "cc", "r12", "q0"
-  );
+void ScaleARGBRowDownEven_NEON(const uint8_t* src_argb,
+                               ptrdiff_t src_stride,
+                               int src_stepx,
+                               uint8_t* dst_argb,
+                               int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "mov        r12, %3, lsl #2                \n"
+      "1:                                        \n"
+      "vld1.32    {d0[0]}, [%0], r12             \n"
+      "vld1.32    {d0[1]}, [%0], r12             \n"
+      "vld1.32    {d1[0]}, [%0], r12             \n"
+      "vld1.32    {d1[1]}, [%0], r12             \n"
+      "subs       %2, %2, #4                     \n"  // 4 pixels per loop.
+      "vst1.8     {q0}, [%1]!                    \n"
+      "bgt        1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(dst_width)  // %2
+      : "r"(src_stepx)   // %3
+      : "memory", "cc", "r12", "q0");
 }
 
 // Reads 4 pixels at a time.
 // Alignment requirement: src_argb 4 byte aligned.
-void ScaleARGBRowDownEvenBox_NEON(const uint8* src_argb, ptrdiff_t src_stride,
+void ScaleARGBRowDownEvenBox_NEON(const uint8_t* src_argb,
+                                  ptrdiff_t src_stride,
                                   int src_stepx,
-                                  uint8* dst_argb, int dst_width) {
-  asm volatile (
-    "mov        r12, %4, lsl #2                \n"
-    "add        %1, %1, %0                     \n"
-  "1:                                          \n"
-    MEMACCESS(0)
-    "vld1.8     {d0}, [%0], r12                \n"  // Read 4 2x2 blocks -> 2x1
-    MEMACCESS(1)
-    "vld1.8     {d1}, [%1], r12                \n"
-    MEMACCESS(0)
-    "vld1.8     {d2}, [%0], r12                \n"
-    MEMACCESS(1)
-    "vld1.8     {d3}, [%1], r12                \n"
-    MEMACCESS(0)
-    "vld1.8     {d4}, [%0], r12                \n"
-    MEMACCESS(1)
-    "vld1.8     {d5}, [%1], r12                \n"
-    MEMACCESS(0)
-    "vld1.8     {d6}, [%0], r12                \n"
-    MEMACCESS(1)
-    "vld1.8     {d7}, [%1], r12                \n"
-    "vaddl.u8   q0, d0, d1                     \n"
-    "vaddl.u8   q1, d2, d3                     \n"
-    "vaddl.u8   q2, d4, d5                     \n"
-    "vaddl.u8   q3, d6, d7                     \n"
-    "vswp.8     d1, d2                         \n"  // ab_cd -> ac_bd
-    "vswp.8     d5, d6                         \n"  // ef_gh -> eg_fh
-    "vadd.u16   q0, q0, q1                     \n"  // (a+b)_(c+d)
-    "vadd.u16   q2, q2, q3                     \n"  // (e+f)_(g+h)
-    "vrshrn.u16 d0, q0, #2                     \n"  // first 2 pixels.
-    "vrshrn.u16 d1, q2, #2                     \n"  // next 2 pixels.
-    "subs       %3, %3, #4                     \n"  // 4 pixels per loop.
-    MEMACCESS(2)
-    "vst1.8     {q0}, [%2]!                    \n"
-    "bgt        1b                             \n"
-  : "+r"(src_argb),    // %0
-    "+r"(src_stride),  // %1
-    "+r"(dst_argb),    // %2
-    "+r"(dst_width)    // %3
-  : "r"(src_stepx)     // %4
-  : "memory", "cc", "r12", "q0", "q1", "q2", "q3"
-  );
+                                  uint8_t* dst_argb,
+                                  int dst_width) {
+  asm volatile(
+      "mov        r12, %4, lsl #2                \n"
+      "add        %1, %1, %0                     \n"
+      "1:                                        \n"
+      "vld1.8     {d0}, [%0], r12                \n"  // 4 2x2 blocks -> 2x1
+      "vld1.8     {d1}, [%1], r12                \n"
+      "vld1.8     {d2}, [%0], r12                \n"
+      "vld1.8     {d3}, [%1], r12                \n"
+      "vld1.8     {d4}, [%0], r12                \n"
+      "vld1.8     {d5}, [%1], r12                \n"
+      "vld1.8     {d6}, [%0], r12                \n"
+      "vld1.8     {d7}, [%1], r12                \n"
+      "vaddl.u8   q0, d0, d1                     \n"
+      "vaddl.u8   q1, d2, d3                     \n"
+      "vaddl.u8   q2, d4, d5                     \n"
+      "vaddl.u8   q3, d6, d7                     \n"
+      "vswp.8     d1, d2                         \n"  // ab_cd -> ac_bd
+      "vswp.8     d5, d6                         \n"  // ef_gh -> eg_fh
+      "vadd.u16   q0, q0, q1                     \n"  // (a+b)_(c+d)
+      "vadd.u16   q2, q2, q3                     \n"  // (e+f)_(g+h)
+      "vrshrn.u16 d0, q0, #2                     \n"  // first 2 pixels.
+      "vrshrn.u16 d1, q2, #2                     \n"  // next 2 pixels.
+      "subs       %3, %3, #4                     \n"  // 4 pixels per loop.
+      "vst1.8     {q0}, [%2]!                    \n"
+      "bgt        1b                             \n"
+      : "+r"(src_argb),    // %0
+        "+r"(src_stride),  // %1
+        "+r"(dst_argb),    // %2
+        "+r"(dst_width)    // %3
+      : "r"(src_stepx)     // %4
+      : "memory", "cc", "r12", "q0", "q1", "q2", "q3");
 }
 
 // TODO(Yang Zhang): Investigate less load instructions for
 // the x/dx stepping
-#define LOAD1_DATA32_LANE(dn, n)                               \
-    "lsr        %5, %3, #16                    \n"             \
-    "add        %6, %1, %5, lsl #2             \n"             \
-    "add        %3, %3, %4                     \n"             \
-    MEMACCESS(6)                                               \
-    "vld1.32    {"#dn"["#n"]}, [%6]            \n"
+#define LOAD1_DATA32_LANE(dn, n)                 \
+  "lsr        %5, %3, #16                    \n" \
+  "add        %6, %1, %5, lsl #2             \n" \
+  "add        %3, %3, %4                     \n" \
+  "vld1.32    {" #dn "[" #n "]}, [%6]        \n"
 
-void ScaleARGBCols_NEON(uint8* dst_argb, const uint8* src_argb,
-                        int dst_width, int x, int dx) {
+void ScaleARGBCols_NEON(uint8_t* dst_argb,
+                        const uint8_t* src_argb,
+                        int dst_width,
+                        int x,
+                        int dx) {
   int tmp;
-  const uint8* src_tmp = src_argb;
-  asm volatile (
-  "1:                                          \n"
-    LOAD1_DATA32_LANE(d0, 0)
-    LOAD1_DATA32_LANE(d0, 1)
-    LOAD1_DATA32_LANE(d1, 0)
-    LOAD1_DATA32_LANE(d1, 1)
-    LOAD1_DATA32_LANE(d2, 0)
-    LOAD1_DATA32_LANE(d2, 1)
-    LOAD1_DATA32_LANE(d3, 0)
-    LOAD1_DATA32_LANE(d3, 1)
-
-    MEMACCESS(0)
-    "vst1.32     {q0, q1}, [%0]!               \n"  // store pixels
-    "subs       %2, %2, #8                     \n"  // 8 processed per loop
-    "bgt        1b                             \n"
-  : "+r"(dst_argb),   // %0
-    "+r"(src_argb),   // %1
-    "+r"(dst_width),  // %2
-    "+r"(x),          // %3
-    "+r"(dx),         // %4
-    "=&r"(tmp),       // %5
-    "+r"(src_tmp)     // %6
-  :
-  : "memory", "cc", "q0", "q1"
-  );
+  const uint8_t* src_tmp = src_argb;
+  asm volatile(
+      "1:                                        \n"
+      // clang-format off
+      LOAD1_DATA32_LANE(d0, 0)
+      LOAD1_DATA32_LANE(d0, 1)
+      LOAD1_DATA32_LANE(d1, 0)
+      LOAD1_DATA32_LANE(d1, 1)
+      LOAD1_DATA32_LANE(d2, 0)
+      LOAD1_DATA32_LANE(d2, 1)
+      LOAD1_DATA32_LANE(d3, 0)
+      LOAD1_DATA32_LANE(d3, 1)
+      // clang-format on
+      "vst1.32     {q0, q1}, [%0]!               \n"  // store pixels
+      "subs       %2, %2, #8                     \n"  // 8 processed per loop
+      "bgt        1b                             \n"
+      : "+r"(dst_argb),   // %0
+        "+r"(src_argb),   // %1
+        "+r"(dst_width),  // %2
+        "+r"(x),          // %3
+        "+r"(dx),         // %4
+        "=&r"(tmp),       // %5
+        "+r"(src_tmp)     // %6
+      :
+      : "memory", "cc", "q0", "q1");
 }
 
 #undef LOAD1_DATA32_LANE
 
 // TODO(Yang Zhang): Investigate less load instructions for
 // the x/dx stepping
-#define LOAD2_DATA32_LANE(dn1, dn2, n)                         \
-    "lsr        %5, %3, #16                           \n"      \
-    "add        %6, %1, %5, lsl #2                    \n"      \
-    "add        %3, %3, %4                            \n"      \
-    MEMACCESS(6)                                               \
-    "vld2.32    {"#dn1"["#n"], "#dn2"["#n"]}, [%6]    \n"
+#define LOAD2_DATA32_LANE(dn1, dn2, n)                       \
+  "lsr        %5, %3, #16                                \n" \
+  "add        %6, %1, %5, lsl #2                         \n" \
+  "add        %3, %3, %4                                 \n" \
+  "vld2.32    {" #dn1 "[" #n "], " #dn2 "[" #n "]}, [%6] \n"
 
-void ScaleARGBFilterCols_NEON(uint8* dst_argb, const uint8* src_argb,
-                              int dst_width, int x, int dx) {
+void ScaleARGBFilterCols_NEON(uint8_t* dst_argb,
+                              const uint8_t* src_argb,
+                              int dst_width,
+                              int x,
+                              int dx) {
   int dx_offset[4] = {0, 1, 2, 3};
   int* tmp = dx_offset;
-  const uint8* src_tmp = src_argb;
+  const uint8_t* src_tmp = src_argb;
   asm volatile (
     "vdup.32    q0, %3                         \n"  // x
     "vdup.32    q1, %4                         \n"  // dx
@@ -993,7 +943,6 @@
     "vshrn.i16   d0, q11, #7                   \n"
     "vshrn.i16   d1, q12, #7                   \n"
 
-    MEMACCESS(0)
     "vst1.32     {d0, d1}, [%0]!               \n"  // store pixels
     "vadd.s32    q8, q8, q9                    \n"
     "subs        %2, %2, #4                    \n"  // 4 processed per loop
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_neon64.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_neon64.cc
index ff277f2..494a9cf 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_neon64.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_neon64.cc
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#include "libyuv/scale.h"
 #include "libyuv/row.h"
+#include "libyuv/scale.h"
 #include "libyuv/scale_row.h"
 
 #ifdef __cplusplus
@@ -21,580 +21,556 @@
 #if !defined(LIBYUV_DISABLE_NEON) && defined(__aarch64__)
 
 // Read 32x1 throw away even pixels, and write 16x1.
-void ScaleRowDown2_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst, int dst_width) {
-  asm volatile (
-  "1:                                          \n"
-    // load even pixels into v0, odd into v1
-    MEMACCESS(0)
-    "ld2        {v0.16b,v1.16b}, [%0], #32     \n"
-    "subs       %w2, %w2, #16                  \n"  // 16 processed per loop
-    MEMACCESS(1)
-    "st1        {v1.16b}, [%1], #16            \n"  // store odd pixels
-    "b.gt       1b                             \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst),              // %1
-    "+r"(dst_width)         // %2
-  :
-  : "v0", "v1"              // Clobber List
-  );
+void ScaleRowDown2_NEON(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst,
+                        int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "1:                                        \n"
+      // load even pixels into v0, odd into v1
+      "ld2        {v0.16b,v1.16b}, [%0], #32     \n"
+      "subs       %w2, %w2, #16                  \n"  // 16 processed per loop
+      "st1        {v1.16b}, [%1], #16            \n"  // store odd pixels
+      "b.gt       1b                             \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst),       // %1
+        "+r"(dst_width)  // %2
+      :
+      : "v0", "v1"  // Clobber List
+      );
 }
 
 // Read 32x1 average down and write 16x1.
-void ScaleRowDown2Linear_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst, int dst_width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b,v1.16b}, [%0], #32     \n"  // load pixels and post inc
-    "subs       %w2, %w2, #16                  \n"  // 16 processed per loop
-    "uaddlp     v0.8h, v0.16b                  \n"  // add adjacent
-    "uaddlp     v1.8h, v1.16b                  \n"
-    "rshrn      v0.8b, v0.8h, #1               \n"  // downshift, round and pack
-    "rshrn2     v0.16b, v1.8h, #1              \n"
-    MEMACCESS(1)
-    "st1        {v0.16b}, [%1], #16            \n"
-    "b.gt       1b                             \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst),              // %1
-    "+r"(dst_width)         // %2
-  :
-  : "v0", "v1"     // Clobber List
-  );
+void ScaleRowDown2Linear_NEON(const uint8_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint8_t* dst,
+                              int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "1:                                        \n"
+      // load even pixels into v0, odd into v1
+      "ld2        {v0.16b,v1.16b}, [%0], #32     \n"
+      "subs       %w2, %w2, #16                  \n"  // 16 processed per loop
+      "urhadd     v0.16b, v0.16b, v1.16b         \n"  // rounding half add
+      "st1        {v0.16b}, [%1], #16            \n"
+      "b.gt       1b                             \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst),       // %1
+        "+r"(dst_width)  // %2
+      :
+      : "v0", "v1"  // Clobber List
+      );
 }
 
 // Read 32x2 average down and write 16x1.
-void ScaleRowDown2Box_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst, int dst_width) {
-  asm volatile (
-    // change the stride to row 2 pointer
-    "add        %1, %1, %0                     \n"
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.16b,v1.16b}, [%0], #32    \n"  // load row 1 and post inc
-    MEMACCESS(1)
-    "ld1        {v2.16b, v3.16b}, [%1], #32    \n"  // load row 2 and post inc
-    "subs       %w3, %w3, #16                  \n"  // 16 processed per loop
-    "uaddlp     v0.8h, v0.16b                  \n"  // row 1 add adjacent
-    "uaddlp     v1.8h, v1.16b                  \n"
-    "uadalp     v0.8h, v2.16b                  \n"  // row 2 add adjacent + row1
-    "uadalp     v1.8h, v3.16b                  \n"
-    "rshrn      v0.8b, v0.8h, #2               \n"  // downshift, round and pack
-    "rshrn2     v0.16b, v1.8h, #2              \n"
-    MEMACCESS(2)
-    "st1        {v0.16b}, [%2], #16            \n"
-    "b.gt       1b                             \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(src_stride),       // %1
-    "+r"(dst),              // %2
-    "+r"(dst_width)         // %3
-  :
-  : "v0", "v1", "v2", "v3"     // Clobber List
-  );
+void ScaleRowDown2Box_NEON(const uint8_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst,
+                           int dst_width) {
+  asm volatile(
+      // change the stride to row 2 pointer
+      "add        %1, %1, %0                     \n"
+      "1:                                        \n"
+      "ld1        {v0.16b, v1.16b}, [%0], #32    \n"  // load row 1 and post inc
+      "ld1        {v2.16b, v3.16b}, [%1], #32    \n"  // load row 2 and post inc
+      "subs       %w3, %w3, #16                  \n"  // 16 processed per loop
+      "uaddlp     v0.8h, v0.16b                  \n"  // row 1 add adjacent
+      "uaddlp     v1.8h, v1.16b                  \n"
+      "uadalp     v0.8h, v2.16b                  \n"  // += row 2 add adjacent
+      "uadalp     v1.8h, v3.16b                  \n"
+      "rshrn      v0.8b, v0.8h, #2               \n"  // round and pack
+      "rshrn2     v0.16b, v1.8h, #2              \n"
+      "st1        {v0.16b}, [%2], #16            \n"
+      "b.gt       1b                             \n"
+      : "+r"(src_ptr),     // %0
+        "+r"(src_stride),  // %1
+        "+r"(dst),         // %2
+        "+r"(dst_width)    // %3
+      :
+      : "v0", "v1", "v2", "v3"  // Clobber List
+      );
 }
 
-void ScaleRowDown4_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst_ptr, int dst_width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld4     {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32          \n"  // src line 0
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop
-    MEMACCESS(1)
-    "st1     {v2.8b}, [%1], #8                 \n"
-    "b.gt       1b                             \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst_ptr),          // %1
-    "+r"(dst_width)         // %2
-  :
-  : "v0", "v1", "v2", "v3", "memory", "cc"
-  );
+void ScaleRowDown4_NEON(const uint8_t* src_ptr,
+                        ptrdiff_t src_stride,
+                        uint8_t* dst_ptr,
+                        int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "1:                                        \n"
+      "ld4     {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32  \n"  // src line 0
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop
+      "st1     {v2.8b}, [%1], #8                 \n"
+      "b.gt       1b                             \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(dst_width)  // %2
+      :
+      : "v0", "v1", "v2", "v3", "memory", "cc");
 }
 
-void ScaleRowDown4Box_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst_ptr, int dst_width) {
-  const uint8* src_ptr1 = src_ptr + src_stride;
-  const uint8* src_ptr2 = src_ptr + src_stride * 2;
-  const uint8* src_ptr3 = src_ptr + src_stride * 3;
-asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1     {v0.16b}, [%0], #16               \n"   // load up 16x4
-    MEMACCESS(3)
-    "ld1     {v1.16b}, [%2], #16               \n"
-    MEMACCESS(4)
-    "ld1     {v2.16b}, [%3], #16               \n"
-    MEMACCESS(5)
-    "ld1     {v3.16b}, [%4], #16               \n"
-    "subs    %w5, %w5, #4                      \n"
-    "uaddlp  v0.8h, v0.16b                     \n"
-    "uadalp  v0.8h, v1.16b                     \n"
-    "uadalp  v0.8h, v2.16b                     \n"
-    "uadalp  v0.8h, v3.16b                     \n"
-    "addp    v0.8h, v0.8h, v0.8h               \n"
-    "rshrn   v0.8b, v0.8h, #4                  \n"   // divide by 16 w/rounding
-    MEMACCESS(1)
-    "st1    {v0.s}[0], [%1], #4                \n"
-    "b.gt       1b                             \n"
-  : "+r"(src_ptr),   // %0
-    "+r"(dst_ptr),   // %1
-    "+r"(src_ptr1),  // %2
-    "+r"(src_ptr2),  // %3
-    "+r"(src_ptr3),  // %4
-    "+r"(dst_width)  // %5
-  :
-  : "v0", "v1", "v2", "v3", "memory", "cc"
-  );
+void ScaleRowDown4Box_NEON(const uint8_t* src_ptr,
+                           ptrdiff_t src_stride,
+                           uint8_t* dst_ptr,
+                           int dst_width) {
+  const uint8_t* src_ptr1 = src_ptr + src_stride;
+  const uint8_t* src_ptr2 = src_ptr + src_stride * 2;
+  const uint8_t* src_ptr3 = src_ptr + src_stride * 3;
+  asm volatile(
+      "1:                                        \n"
+      "ld1     {v0.16b}, [%0], #16               \n"  // load up 16x4
+      "ld1     {v1.16b}, [%2], #16               \n"
+      "ld1     {v2.16b}, [%3], #16               \n"
+      "ld1     {v3.16b}, [%4], #16               \n"
+      "subs    %w5, %w5, #4                      \n"
+      "uaddlp  v0.8h, v0.16b                     \n"
+      "uadalp  v0.8h, v1.16b                     \n"
+      "uadalp  v0.8h, v2.16b                     \n"
+      "uadalp  v0.8h, v3.16b                     \n"
+      "addp    v0.8h, v0.8h, v0.8h               \n"
+      "rshrn   v0.8b, v0.8h, #4                  \n"  // divide by 16 w/rounding
+      "st1    {v0.s}[0], [%1], #4                \n"
+      "b.gt       1b                             \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(src_ptr1),  // %2
+        "+r"(src_ptr2),  // %3
+        "+r"(src_ptr3),  // %4
+        "+r"(dst_width)  // %5
+      :
+      : "v0", "v1", "v2", "v3", "memory", "cc");
 }
 
 // Down scale from 4 to 3 pixels. Use the neon multilane read/write
 // to load up the every 4th pixel into a 4 different registers.
 // Point samples 32 pixels to 24 pixels.
-void ScaleRowDown34_NEON(const uint8* src_ptr,
+void ScaleRowDown34_NEON(const uint8_t* src_ptr,
                          ptrdiff_t src_stride,
-                         uint8* dst_ptr, int dst_width) {
-  asm volatile (
-  "1:                                                  \n"
-    MEMACCESS(0)
-    "ld4       {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32                \n"  // src line 0
-    "subs      %w2, %w2, #24                           \n"
-    "orr       v2.16b, v3.16b, v3.16b                  \n"  // order v0, v1, v2
-    MEMACCESS(1)
-    "st3       {v0.8b,v1.8b,v2.8b}, [%1], #24                \n"
-    "b.gt      1b                                      \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst_ptr),          // %1
-    "+r"(dst_width)         // %2
-  :
-  : "v0", "v1", "v2", "v3", "memory", "cc"
-  );
+                         uint8_t* dst_ptr,
+                         int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "1:                                                \n"
+      "ld4       {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32    \n"  // src line 0
+      "subs      %w2, %w2, #24                           \n"
+      "orr       v2.16b, v3.16b, v3.16b                  \n"  // order v0,v1,v2
+      "st3       {v0.8b,v1.8b,v2.8b}, [%1], #24          \n"
+      "b.gt      1b                                      \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(dst_width)  // %2
+      :
+      : "v0", "v1", "v2", "v3", "memory", "cc");
 }
 
-void ScaleRowDown34_0_Box_NEON(const uint8* src_ptr,
+void ScaleRowDown34_0_Box_NEON(const uint8_t* src_ptr,
                                ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "movi      v20.8b, #3                              \n"
-    "add       %3, %3, %0                              \n"
-  "1:                                                  \n"
-    MEMACCESS(0)
-    "ld4       {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32                \n"  // src line 0
-    MEMACCESS(3)
-    "ld4       {v4.8b,v5.8b,v6.8b,v7.8b}, [%3], #32                \n"  // src line 1
-    "subs         %w2, %w2, #24                        \n"
+                               uint8_t* dst_ptr,
+                               int dst_width) {
+  asm volatile(
+      "movi      v20.8b, #3                              \n"
+      "add       %3, %3, %0                              \n"
+      "1:                                                \n"
+      "ld4       {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32    \n"  // src line 0
+      "ld4       {v4.8b,v5.8b,v6.8b,v7.8b}, [%3], #32    \n"  // src line 1
+      "subs         %w2, %w2, #24                        \n"
 
-    // filter src line 0 with src line 1
-    // expand chars to shorts to allow for room
-    // when adding lines together
-    "ushll     v16.8h, v4.8b, #0                       \n"
-    "ushll     v17.8h, v5.8b, #0                       \n"
-    "ushll     v18.8h, v6.8b, #0                       \n"
-    "ushll     v19.8h, v7.8b, #0                       \n"
+      // filter src line 0 with src line 1
+      // expand chars to shorts to allow for room
+      // when adding lines together
+      "ushll     v16.8h, v4.8b, #0                       \n"
+      "ushll     v17.8h, v5.8b, #0                       \n"
+      "ushll     v18.8h, v6.8b, #0                       \n"
+      "ushll     v19.8h, v7.8b, #0                       \n"
 
-    // 3 * line_0 + line_1
-    "umlal     v16.8h, v0.8b, v20.8b                   \n"
-    "umlal     v17.8h, v1.8b, v20.8b                   \n"
-    "umlal     v18.8h, v2.8b, v20.8b                   \n"
-    "umlal     v19.8h, v3.8b, v20.8b                   \n"
+      // 3 * line_0 + line_1
+      "umlal     v16.8h, v0.8b, v20.8b                   \n"
+      "umlal     v17.8h, v1.8b, v20.8b                   \n"
+      "umlal     v18.8h, v2.8b, v20.8b                   \n"
+      "umlal     v19.8h, v3.8b, v20.8b                   \n"
 
-    // (3 * line_0 + line_1) >> 2
-    "uqrshrn   v0.8b, v16.8h, #2                       \n"
-    "uqrshrn   v1.8b, v17.8h, #2                       \n"
-    "uqrshrn   v2.8b, v18.8h, #2                       \n"
-    "uqrshrn   v3.8b, v19.8h, #2                       \n"
+      // (3 * line_0 + line_1) >> 2
+      "uqrshrn   v0.8b, v16.8h, #2                       \n"
+      "uqrshrn   v1.8b, v17.8h, #2                       \n"
+      "uqrshrn   v2.8b, v18.8h, #2                       \n"
+      "uqrshrn   v3.8b, v19.8h, #2                       \n"
 
-    // a0 = (src[0] * 3 + s[1] * 1) >> 2
-    "ushll     v16.8h, v1.8b, #0                       \n"
-    "umlal     v16.8h, v0.8b, v20.8b                   \n"
-    "uqrshrn   v0.8b, v16.8h, #2                       \n"
+      // a0 = (src[0] * 3 + s[1] * 1) >> 2
+      "ushll     v16.8h, v1.8b, #0                       \n"
+      "umlal     v16.8h, v0.8b, v20.8b                   \n"
+      "uqrshrn   v0.8b, v16.8h, #2                       \n"
 
-    // a1 = (src[1] * 1 + s[2] * 1) >> 1
-    "urhadd    v1.8b, v1.8b, v2.8b                     \n"
+      // a1 = (src[1] * 1 + s[2] * 1) >> 1
+      "urhadd    v1.8b, v1.8b, v2.8b                     \n"
 
-    // a2 = (src[2] * 1 + s[3] * 3) >> 2
-    "ushll     v16.8h, v2.8b, #0                       \n"
-    "umlal     v16.8h, v3.8b, v20.8b                   \n"
-    "uqrshrn   v2.8b, v16.8h, #2                       \n"
+      // a2 = (src[2] * 1 + s[3] * 3) >> 2
+      "ushll     v16.8h, v2.8b, #0                       \n"
+      "umlal     v16.8h, v3.8b, v20.8b                   \n"
+      "uqrshrn   v2.8b, v16.8h, #2                       \n"
 
-    MEMACCESS(1)
-    "st3       {v0.8b,v1.8b,v2.8b}, [%1], #24                \n"
+      "st3       {v0.8b,v1.8b,v2.8b}, [%1], #24          \n"
 
-    "b.gt      1b                                      \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst_ptr),          // %1
-    "+r"(dst_width),        // %2
-    "+r"(src_stride)        // %3
-  :
-  : "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16", "v17", "v18", "v19",
-    "v20", "memory", "cc"
-  );
+      "b.gt      1b                                      \n"
+      : "+r"(src_ptr),    // %0
+        "+r"(dst_ptr),    // %1
+        "+r"(dst_width),  // %2
+        "+r"(src_stride)  // %3
+      :
+      : "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16", "v17", "v18",
+        "v19", "v20", "memory", "cc");
 }
 
-void ScaleRowDown34_1_Box_NEON(const uint8* src_ptr,
+void ScaleRowDown34_1_Box_NEON(const uint8_t* src_ptr,
                                ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    "movi      v20.8b, #3                              \n"
-    "add       %3, %3, %0                              \n"
-  "1:                                                  \n"
-    MEMACCESS(0)
-    "ld4       {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32                \n"  // src line 0
-    MEMACCESS(3)
-    "ld4       {v4.8b,v5.8b,v6.8b,v7.8b}, [%3], #32                \n"  // src line 1
-    "subs         %w2, %w2, #24                        \n"
-    // average src line 0 with src line 1
-    "urhadd    v0.8b, v0.8b, v4.8b                     \n"
-    "urhadd    v1.8b, v1.8b, v5.8b                     \n"
-    "urhadd    v2.8b, v2.8b, v6.8b                     \n"
-    "urhadd    v3.8b, v3.8b, v7.8b                     \n"
+                               uint8_t* dst_ptr,
+                               int dst_width) {
+  asm volatile(
+      "movi      v20.8b, #3                              \n"
+      "add       %3, %3, %0                              \n"
+      "1:                                                \n"
+      "ld4       {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32    \n"  // src line 0
+      "ld4       {v4.8b,v5.8b,v6.8b,v7.8b}, [%3], #32    \n"  // src line 1
+      "subs         %w2, %w2, #24                        \n"
+      // average src line 0 with src line 1
+      "urhadd    v0.8b, v0.8b, v4.8b                     \n"
+      "urhadd    v1.8b, v1.8b, v5.8b                     \n"
+      "urhadd    v2.8b, v2.8b, v6.8b                     \n"
+      "urhadd    v3.8b, v3.8b, v7.8b                     \n"
 
-    // a0 = (src[0] * 3 + s[1] * 1) >> 2
-    "ushll     v4.8h, v1.8b, #0                        \n"
-    "umlal     v4.8h, v0.8b, v20.8b                    \n"
-    "uqrshrn   v0.8b, v4.8h, #2                        \n"
+      // a0 = (src[0] * 3 + s[1] * 1) >> 2
+      "ushll     v4.8h, v1.8b, #0                        \n"
+      "umlal     v4.8h, v0.8b, v20.8b                    \n"
+      "uqrshrn   v0.8b, v4.8h, #2                        \n"
 
-    // a1 = (src[1] * 1 + s[2] * 1) >> 1
-    "urhadd    v1.8b, v1.8b, v2.8b                     \n"
+      // a1 = (src[1] * 1 + s[2] * 1) >> 1
+      "urhadd    v1.8b, v1.8b, v2.8b                     \n"
 
-    // a2 = (src[2] * 1 + s[3] * 3) >> 2
-    "ushll     v4.8h, v2.8b, #0                        \n"
-    "umlal     v4.8h, v3.8b, v20.8b                    \n"
-    "uqrshrn   v2.8b, v4.8h, #2                        \n"
+      // a2 = (src[2] * 1 + s[3] * 3) >> 2
+      "ushll     v4.8h, v2.8b, #0                        \n"
+      "umlal     v4.8h, v3.8b, v20.8b                    \n"
+      "uqrshrn   v2.8b, v4.8h, #2                        \n"
 
-    MEMACCESS(1)
-    "st3       {v0.8b,v1.8b,v2.8b}, [%1], #24                \n"
-    "b.gt      1b                                      \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst_ptr),          // %1
-    "+r"(dst_width),        // %2
-    "+r"(src_stride)        // %3
-  :
-  : "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v20", "memory", "cc"
-  );
+      "st3       {v0.8b,v1.8b,v2.8b}, [%1], #24          \n"
+      "b.gt      1b                                      \n"
+      : "+r"(src_ptr),    // %0
+        "+r"(dst_ptr),    // %1
+        "+r"(dst_width),  // %2
+        "+r"(src_stride)  // %3
+      :
+      : "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v20", "memory", "cc");
 }
 
-static uvec8 kShuf38 =
-  { 0, 3, 6, 8, 11, 14, 16, 19, 22, 24, 27, 30, 0, 0, 0, 0 };
-static uvec8 kShuf38_2 =
-  { 0, 16, 32, 2, 18, 33, 4, 20, 34, 6, 22, 35, 0, 0, 0, 0 };
-static vec16 kMult38_Div6 =
-  { 65536 / 12, 65536 / 12, 65536 / 12, 65536 / 12,
-    65536 / 12, 65536 / 12, 65536 / 12, 65536 / 12 };
-static vec16 kMult38_Div9 =
-  { 65536 / 18, 65536 / 18, 65536 / 18, 65536 / 18,
-    65536 / 18, 65536 / 18, 65536 / 18, 65536 / 18 };
+static const uvec8 kShuf38 = {0,  3,  6,  8,  11, 14, 16, 19,
+                              22, 24, 27, 30, 0,  0,  0,  0};
+static const uvec8 kShuf38_2 = {0,  16, 32, 2,  18, 33, 4, 20,
+                                34, 6,  22, 35, 0,  0,  0, 0};
+static const vec16 kMult38_Div6 = {65536 / 12, 65536 / 12, 65536 / 12,
+                                   65536 / 12, 65536 / 12, 65536 / 12,
+                                   65536 / 12, 65536 / 12};
+static const vec16 kMult38_Div9 = {65536 / 18, 65536 / 18, 65536 / 18,
+                                   65536 / 18, 65536 / 18, 65536 / 18,
+                                   65536 / 18, 65536 / 18};
 
 // 32 -> 12
-void ScaleRowDown38_NEON(const uint8* src_ptr,
+void ScaleRowDown38_NEON(const uint8_t* src_ptr,
                          ptrdiff_t src_stride,
-                         uint8* dst_ptr, int dst_width) {
-  asm volatile (
-    MEMACCESS(3)
-    "ld1       {v3.16b}, [%3]                          \n"
-  "1:                                                  \n"
-    MEMACCESS(0)
-    "ld1       {v0.16b,v1.16b}, [%0], #32             \n"
-    "subs      %w2, %w2, #12                           \n"
-    "tbl       v2.16b, {v0.16b,v1.16b}, v3.16b        \n"
-    MEMACCESS(1)
-    "st1       {v2.8b}, [%1], #8                       \n"
-    MEMACCESS(1)
-    "st1       {v2.s}[2], [%1], #4                     \n"
-    "b.gt      1b                                      \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst_ptr),          // %1
-    "+r"(dst_width)         // %2
-  : "r"(&kShuf38)           // %3
-  : "v0", "v1", "v2", "v3", "memory", "cc"
-  );
+                         uint8_t* dst_ptr,
+                         int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "ld1       {v3.16b}, [%3]                          \n"
+      "1:                                                \n"
+      "ld1       {v0.16b,v1.16b}, [%0], #32              \n"
+      "subs      %w2, %w2, #12                           \n"
+      "tbl       v2.16b, {v0.16b,v1.16b}, v3.16b         \n"
+      "st1       {v2.8b}, [%1], #8                       \n"
+      "st1       {v2.s}[2], [%1], #4                     \n"
+      "b.gt      1b                                      \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst_ptr),   // %1
+        "+r"(dst_width)  // %2
+      : "r"(&kShuf38)    // %3
+      : "v0", "v1", "v2", "v3", "memory", "cc");
 }
 
 // 32x3 -> 12x1
-void OMITFP ScaleRowDown38_3_Box_NEON(const uint8* src_ptr,
+void OMITFP ScaleRowDown38_3_Box_NEON(const uint8_t* src_ptr,
                                       ptrdiff_t src_stride,
-                                      uint8* dst_ptr, int dst_width) {
-  const uint8* src_ptr1 = src_ptr + src_stride * 2;
+                                      uint8_t* dst_ptr,
+                                      int dst_width) {
+  const uint8_t* src_ptr1 = src_ptr + src_stride * 2;
   ptrdiff_t tmp_src_stride = src_stride;
 
-  asm volatile (
-    MEMACCESS(5)
-    "ld1       {v29.8h}, [%5]                          \n"
-    MEMACCESS(6)
-    "ld1       {v30.16b}, [%6]                         \n"
-    MEMACCESS(7)
-    "ld1       {v31.8h}, [%7]                          \n"
-    "add       %2, %2, %0                              \n"
-  "1:                                                  \n"
+  asm volatile(
+      "ld1       {v29.8h}, [%5]                          \n"
+      "ld1       {v30.16b}, [%6]                         \n"
+      "ld1       {v31.8h}, [%7]                          \n"
+      "add       %2, %2, %0                              \n"
+      "1:                                                \n"
 
-    // 00 40 01 41 02 42 03 43
-    // 10 50 11 51 12 52 13 53
-    // 20 60 21 61 22 62 23 63
-    // 30 70 31 71 32 72 33 73
-    MEMACCESS(0)
-    "ld4       {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32                \n"
-    MEMACCESS(3)
-    "ld4       {v4.8b,v5.8b,v6.8b,v7.8b}, [%2], #32                \n"
-    MEMACCESS(4)
-    "ld4       {v16.8b,v17.8b,v18.8b,v19.8b}, [%3], #32              \n"
-    "subs      %w4, %w4, #12                           \n"
+      // 00 40 01 41 02 42 03 43
+      // 10 50 11 51 12 52 13 53
+      // 20 60 21 61 22 62 23 63
+      // 30 70 31 71 32 72 33 73
+      "ld4       {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32    \n"
+      "ld4       {v4.8b,v5.8b,v6.8b,v7.8b}, [%2], #32    \n"
+      "ld4       {v16.8b,v17.8b,v18.8b,v19.8b}, [%3], #32  \n"
+      "subs      %w4, %w4, #12                           \n"
 
-    // Shuffle the input data around to get align the data
-    //  so adjacent data can be added. 0,1 - 2,3 - 4,5 - 6,7
-    // 00 10 01 11 02 12 03 13
-    // 40 50 41 51 42 52 43 53
-    "trn1      v20.8b, v0.8b, v1.8b                    \n"
-    "trn2      v21.8b, v0.8b, v1.8b                    \n"
-    "trn1      v22.8b, v4.8b, v5.8b                    \n"
-    "trn2      v23.8b, v4.8b, v5.8b                    \n"
-    "trn1      v24.8b, v16.8b, v17.8b                  \n"
-    "trn2      v25.8b, v16.8b, v17.8b                  \n"
+      // Shuffle the input data around to get align the data
+      //  so adjacent data can be added. 0,1 - 2,3 - 4,5 - 6,7
+      // 00 10 01 11 02 12 03 13
+      // 40 50 41 51 42 52 43 53
+      "trn1      v20.8b, v0.8b, v1.8b                    \n"
+      "trn2      v21.8b, v0.8b, v1.8b                    \n"
+      "trn1      v22.8b, v4.8b, v5.8b                    \n"
+      "trn2      v23.8b, v4.8b, v5.8b                    \n"
+      "trn1      v24.8b, v16.8b, v17.8b                  \n"
+      "trn2      v25.8b, v16.8b, v17.8b                  \n"
 
-    // 20 30 21 31 22 32 23 33
-    // 60 70 61 71 62 72 63 73
-    "trn1      v0.8b, v2.8b, v3.8b                     \n"
-    "trn2      v1.8b, v2.8b, v3.8b                     \n"
-    "trn1      v4.8b, v6.8b, v7.8b                     \n"
-    "trn2      v5.8b, v6.8b, v7.8b                     \n"
-    "trn1      v16.8b, v18.8b, v19.8b                  \n"
-    "trn2      v17.8b, v18.8b, v19.8b                  \n"
+      // 20 30 21 31 22 32 23 33
+      // 60 70 61 71 62 72 63 73
+      "trn1      v0.8b, v2.8b, v3.8b                     \n"
+      "trn2      v1.8b, v2.8b, v3.8b                     \n"
+      "trn1      v4.8b, v6.8b, v7.8b                     \n"
+      "trn2      v5.8b, v6.8b, v7.8b                     \n"
+      "trn1      v16.8b, v18.8b, v19.8b                  \n"
+      "trn2      v17.8b, v18.8b, v19.8b                  \n"
 
-    // 00+10 01+11 02+12 03+13
-    // 40+50 41+51 42+52 43+53
-    "uaddlp    v20.4h, v20.8b                          \n"
-    "uaddlp    v21.4h, v21.8b                          \n"
-    "uaddlp    v22.4h, v22.8b                          \n"
-    "uaddlp    v23.4h, v23.8b                          \n"
-    "uaddlp    v24.4h, v24.8b                          \n"
-    "uaddlp    v25.4h, v25.8b                          \n"
+      // 00+10 01+11 02+12 03+13
+      // 40+50 41+51 42+52 43+53
+      "uaddlp    v20.4h, v20.8b                          \n"
+      "uaddlp    v21.4h, v21.8b                          \n"
+      "uaddlp    v22.4h, v22.8b                          \n"
+      "uaddlp    v23.4h, v23.8b                          \n"
+      "uaddlp    v24.4h, v24.8b                          \n"
+      "uaddlp    v25.4h, v25.8b                          \n"
 
-    // 60+70 61+71 62+72 63+73
-    "uaddlp    v1.4h, v1.8b                            \n"
-    "uaddlp    v5.4h, v5.8b                            \n"
-    "uaddlp    v17.4h, v17.8b                          \n"
+      // 60+70 61+71 62+72 63+73
+      "uaddlp    v1.4h, v1.8b                            \n"
+      "uaddlp    v5.4h, v5.8b                            \n"
+      "uaddlp    v17.4h, v17.8b                          \n"
 
-    // combine source lines
-    "add       v20.4h, v20.4h, v22.4h                  \n"
-    "add       v21.4h, v21.4h, v23.4h                  \n"
-    "add       v20.4h, v20.4h, v24.4h                  \n"
-    "add       v21.4h, v21.4h, v25.4h                  \n"
-    "add       v2.4h, v1.4h, v5.4h                     \n"
-    "add       v2.4h, v2.4h, v17.4h                    \n"
+      // combine source lines
+      "add       v20.4h, v20.4h, v22.4h                  \n"
+      "add       v21.4h, v21.4h, v23.4h                  \n"
+      "add       v20.4h, v20.4h, v24.4h                  \n"
+      "add       v21.4h, v21.4h, v25.4h                  \n"
+      "add       v2.4h, v1.4h, v5.4h                     \n"
+      "add       v2.4h, v2.4h, v17.4h                    \n"
 
-    // dst_ptr[3] = (s[6 + st * 0] + s[7 + st * 0]
-    //             + s[6 + st * 1] + s[7 + st * 1]
-    //             + s[6 + st * 2] + s[7 + st * 2]) / 6
-    "sqrdmulh  v2.8h, v2.8h, v29.8h                    \n"
-    "xtn       v2.8b,  v2.8h                           \n"
+      // dst_ptr[3] = (s[6 + st * 0] + s[7 + st * 0]
+      //             + s[6 + st * 1] + s[7 + st * 1]
+      //             + s[6 + st * 2] + s[7 + st * 2]) / 6
+      "sqrdmulh  v2.8h, v2.8h, v29.8h                    \n"
+      "xtn       v2.8b,  v2.8h                           \n"
 
-    // Shuffle 2,3 reg around so that 2 can be added to the
-    //  0,1 reg and 3 can be added to the 4,5 reg. This
-    //  requires expanding from u8 to u16 as the 0,1 and 4,5
-    //  registers are already expanded. Then do transposes
-    //  to get aligned.
-    // xx 20 xx 30 xx 21 xx 31 xx 22 xx 32 xx 23 xx 33
-    "ushll     v16.8h, v16.8b, #0                      \n"
-    "uaddl     v0.8h, v0.8b, v4.8b                     \n"
+      // Shuffle 2,3 reg around so that 2 can be added to the
+      //  0,1 reg and 3 can be added to the 4,5 reg. This
+      //  requires expanding from u8 to u16 as the 0,1 and 4,5
+      //  registers are already expanded. Then do transposes
+      //  to get aligned.
+      // xx 20 xx 30 xx 21 xx 31 xx 22 xx 32 xx 23 xx 33
+      "ushll     v16.8h, v16.8b, #0                      \n"
+      "uaddl     v0.8h, v0.8b, v4.8b                     \n"
 
-    // combine source lines
-    "add       v0.8h, v0.8h, v16.8h                    \n"
+      // combine source lines
+      "add       v0.8h, v0.8h, v16.8h                    \n"
 
-    // xx 20 xx 21 xx 22 xx 23
-    // xx 30 xx 31 xx 32 xx 33
-    "trn1      v1.8h, v0.8h, v0.8h                     \n"
-    "trn2      v4.8h, v0.8h, v0.8h                     \n"
-    "xtn       v0.4h, v1.4s                            \n"
-    "xtn       v4.4h, v4.4s                            \n"
+      // xx 20 xx 21 xx 22 xx 23
+      // xx 30 xx 31 xx 32 xx 33
+      "trn1      v1.8h, v0.8h, v0.8h                     \n"
+      "trn2      v4.8h, v0.8h, v0.8h                     \n"
+      "xtn       v0.4h, v1.4s                            \n"
+      "xtn       v4.4h, v4.4s                            \n"
 
-    // 0+1+2, 3+4+5
-    "add       v20.8h, v20.8h, v0.8h                   \n"
-    "add       v21.8h, v21.8h, v4.8h                   \n"
+      // 0+1+2, 3+4+5
+      "add       v20.8h, v20.8h, v0.8h                   \n"
+      "add       v21.8h, v21.8h, v4.8h                   \n"
 
-    // Need to divide, but can't downshift as the the value
-    //  isn't a power of 2. So multiply by 65536 / n
-    //  and take the upper 16 bits.
-    "sqrdmulh  v0.8h, v20.8h, v31.8h                   \n"
-    "sqrdmulh  v1.8h, v21.8h, v31.8h                   \n"
+      // Need to divide, but can't downshift as the the value
+      //  isn't a power of 2. So multiply by 65536 / n
+      //  and take the upper 16 bits.
+      "sqrdmulh  v0.8h, v20.8h, v31.8h                   \n"
+      "sqrdmulh  v1.8h, v21.8h, v31.8h                   \n"
 
-    // Align for table lookup, vtbl requires registers to
-    //  be adjacent
-    "tbl       v3.16b, {v0.16b, v1.16b, v2.16b}, v30.16b \n"
+      // Align for table lookup, vtbl requires registers to be adjacent
+      "tbl       v3.16b, {v0.16b, v1.16b, v2.16b}, v30.16b \n"
 
-    MEMACCESS(1)
-    "st1       {v3.8b}, [%1], #8                       \n"
-    MEMACCESS(1)
-    "st1       {v3.s}[2], [%1], #4                     \n"
-    "b.gt      1b                                      \n"
-  : "+r"(src_ptr),          // %0
-    "+r"(dst_ptr),          // %1
-    "+r"(tmp_src_stride),   // %2
-    "+r"(src_ptr1),         // %3
-    "+r"(dst_width)         // %4
-  : "r"(&kMult38_Div6),     // %5
-    "r"(&kShuf38_2),        // %6
-    "r"(&kMult38_Div9)      // %7
-  : "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16", "v17",
-    "v18", "v19", "v20", "v21", "v22", "v23", "v24", "v25", "v29",
-    "v30", "v31", "memory", "cc"
-  );
+      "st1       {v3.8b}, [%1], #8                       \n"
+      "st1       {v3.s}[2], [%1], #4                     \n"
+      "b.gt      1b                                      \n"
+      : "+r"(src_ptr),         // %0
+        "+r"(dst_ptr),         // %1
+        "+r"(tmp_src_stride),  // %2
+        "+r"(src_ptr1),        // %3
+        "+r"(dst_width)        // %4
+      : "r"(&kMult38_Div6),    // %5
+        "r"(&kShuf38_2),       // %6
+        "r"(&kMult38_Div9)     // %7
+      : "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16", "v17", "v18",
+        "v19", "v20", "v21", "v22", "v23", "v24", "v25", "v29", "v30", "v31",
+        "memory", "cc");
 }
 
 // 32x2 -> 12x1
-void ScaleRowDown38_2_Box_NEON(const uint8* src_ptr,
+void ScaleRowDown38_2_Box_NEON(const uint8_t* src_ptr,
                                ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width) {
+                               uint8_t* dst_ptr,
+                               int dst_width) {
   // TODO(fbarchard): use src_stride directly for clang 3.5+.
   ptrdiff_t tmp_src_stride = src_stride;
-  asm volatile (
-    MEMACCESS(4)
-    "ld1       {v30.8h}, [%4]                          \n"
-    MEMACCESS(5)
-    "ld1       {v31.16b}, [%5]                         \n"
-    "add       %2, %2, %0                              \n"
-  "1:                                                  \n"
+  asm volatile(
+      "ld1       {v30.8h}, [%4]                          \n"
+      "ld1       {v31.16b}, [%5]                         \n"
+      "add       %2, %2, %0                              \n"
+      "1:                                                \n"
 
-    // 00 40 01 41 02 42 03 43
-    // 10 50 11 51 12 52 13 53
-    // 20 60 21 61 22 62 23 63
-    // 30 70 31 71 32 72 33 73
-    MEMACCESS(0)
-    "ld4       {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32                \n"
-    MEMACCESS(3)
-    "ld4       {v4.8b,v5.8b,v6.8b,v7.8b}, [%2], #32                \n"
-    "subs      %w3, %w3, #12                           \n"
+      // 00 40 01 41 02 42 03 43
+      // 10 50 11 51 12 52 13 53
+      // 20 60 21 61 22 62 23 63
+      // 30 70 31 71 32 72 33 73
+      "ld4       {v0.8b,v1.8b,v2.8b,v3.8b}, [%0], #32    \n"
+      "ld4       {v4.8b,v5.8b,v6.8b,v7.8b}, [%2], #32    \n"
+      "subs      %w3, %w3, #12                           \n"
 
-    // Shuffle the input data around to get align the data
-    //  so adjacent data can be added. 0,1 - 2,3 - 4,5 - 6,7
-    // 00 10 01 11 02 12 03 13
-    // 40 50 41 51 42 52 43 53
-    "trn1      v16.8b, v0.8b, v1.8b                    \n"
-    "trn2      v17.8b, v0.8b, v1.8b                    \n"
-    "trn1      v18.8b, v4.8b, v5.8b                    \n"
-    "trn2      v19.8b, v4.8b, v5.8b                    \n"
+      // Shuffle the input data around to get align the data
+      //  so adjacent data can be added. 0,1 - 2,3 - 4,5 - 6,7
+      // 00 10 01 11 02 12 03 13
+      // 40 50 41 51 42 52 43 53
+      "trn1      v16.8b, v0.8b, v1.8b                    \n"
+      "trn2      v17.8b, v0.8b, v1.8b                    \n"
+      "trn1      v18.8b, v4.8b, v5.8b                    \n"
+      "trn2      v19.8b, v4.8b, v5.8b                    \n"
 
-    // 20 30 21 31 22 32 23 33
-    // 60 70 61 71 62 72 63 73
-    "trn1      v0.8b, v2.8b, v3.8b                     \n"
-    "trn2      v1.8b, v2.8b, v3.8b                     \n"
-    "trn1      v4.8b, v6.8b, v7.8b                     \n"
-    "trn2      v5.8b, v6.8b, v7.8b                     \n"
+      // 20 30 21 31 22 32 23 33
+      // 60 70 61 71 62 72 63 73
+      "trn1      v0.8b, v2.8b, v3.8b                     \n"
+      "trn2      v1.8b, v2.8b, v3.8b                     \n"
+      "trn1      v4.8b, v6.8b, v7.8b                     \n"
+      "trn2      v5.8b, v6.8b, v7.8b                     \n"
 
-    // 00+10 01+11 02+12 03+13
-    // 40+50 41+51 42+52 43+53
-    "uaddlp    v16.4h, v16.8b                          \n"
-    "uaddlp    v17.4h, v17.8b                          \n"
-    "uaddlp    v18.4h, v18.8b                          \n"
-    "uaddlp    v19.4h, v19.8b                          \n"
+      // 00+10 01+11 02+12 03+13
+      // 40+50 41+51 42+52 43+53
+      "uaddlp    v16.4h, v16.8b                          \n"
+      "uaddlp    v17.4h, v17.8b                          \n"
+      "uaddlp    v18.4h, v18.8b                          \n"
+      "uaddlp    v19.4h, v19.8b                          \n"
 
-    // 60+70 61+71 62+72 63+73
-    "uaddlp    v1.4h, v1.8b                            \n"
-    "uaddlp    v5.4h, v5.8b                            \n"
+      // 60+70 61+71 62+72 63+73
+      "uaddlp    v1.4h, v1.8b                            \n"
+      "uaddlp    v5.4h, v5.8b                            \n"
 
-    // combine source lines
-    "add       v16.4h, v16.4h, v18.4h                  \n"
-    "add       v17.4h, v17.4h, v19.4h                  \n"
-    "add       v2.4h, v1.4h, v5.4h                     \n"
+      // combine source lines
+      "add       v16.4h, v16.4h, v18.4h                  \n"
+      "add       v17.4h, v17.4h, v19.4h                  \n"
+      "add       v2.4h, v1.4h, v5.4h                     \n"
 
-    // dst_ptr[3] = (s[6] + s[7] + s[6+st] + s[7+st]) / 4
-    "uqrshrn   v2.8b, v2.8h, #2                        \n"
+      // dst_ptr[3] = (s[6] + s[7] + s[6+st] + s[7+st]) / 4
+      "uqrshrn   v2.8b, v2.8h, #2                        \n"
 
-    // Shuffle 2,3 reg around so that 2 can be added to the
-    //  0,1 reg and 3 can be added to the 4,5 reg. This
-    //  requires expanding from u8 to u16 as the 0,1 and 4,5
-    //  registers are already expanded. Then do transposes
-    //  to get aligned.
-    // xx 20 xx 30 xx 21 xx 31 xx 22 xx 32 xx 23 xx 33
+      // Shuffle 2,3 reg around so that 2 can be added to the
+      //  0,1 reg and 3 can be added to the 4,5 reg. This
+      //  requires expanding from u8 to u16 as the 0,1 and 4,5
+      //  registers are already expanded. Then do transposes
+      //  to get aligned.
+      // xx 20 xx 30 xx 21 xx 31 xx 22 xx 32 xx 23 xx 33
 
-    // combine source lines
-    "uaddl     v0.8h, v0.8b, v4.8b                     \n"
+      // combine source lines
+      "uaddl     v0.8h, v0.8b, v4.8b                     \n"
 
-    // xx 20 xx 21 xx 22 xx 23
-    // xx 30 xx 31 xx 32 xx 33
-    "trn1      v1.8h, v0.8h, v0.8h                     \n"
-    "trn2      v4.8h, v0.8h, v0.8h                     \n"
-    "xtn       v0.4h, v1.4s                            \n"
-    "xtn       v4.4h, v4.4s                            \n"
+      // xx 20 xx 21 xx 22 xx 23
+      // xx 30 xx 31 xx 32 xx 33
+      "trn1      v1.8h, v0.8h, v0.8h                     \n"
+      "trn2      v4.8h, v0.8h, v0.8h                     \n"
+      "xtn       v0.4h, v1.4s                            \n"
+      "xtn       v4.4h, v4.4s                            \n"
 
-    // 0+1+2, 3+4+5
-    "add       v16.8h, v16.8h, v0.8h                   \n"
-    "add       v17.8h, v17.8h, v4.8h                   \n"
+      // 0+1+2, 3+4+5
+      "add       v16.8h, v16.8h, v0.8h                   \n"
+      "add       v17.8h, v17.8h, v4.8h                   \n"
 
-    // Need to divide, but can't downshift as the the value
-    //  isn't a power of 2. So multiply by 65536 / n
-    //  and take the upper 16 bits.
-    "sqrdmulh  v0.8h, v16.8h, v30.8h                   \n"
-    "sqrdmulh  v1.8h, v17.8h, v30.8h                   \n"
+      // Need to divide, but can't downshift as the the value
+      //  isn't a power of 2. So multiply by 65536 / n
+      //  and take the upper 16 bits.
+      "sqrdmulh  v0.8h, v16.8h, v30.8h                   \n"
+      "sqrdmulh  v1.8h, v17.8h, v30.8h                   \n"
 
-    // Align for table lookup, vtbl requires registers to
-    //  be adjacent
+      // Align for table lookup, vtbl requires registers to
+      //  be adjacent
 
-    "tbl       v3.16b, {v0.16b, v1.16b, v2.16b}, v31.16b \n"
+      "tbl       v3.16b, {v0.16b, v1.16b, v2.16b}, v31.16b \n"
 
-    MEMACCESS(1)
-    "st1       {v3.8b}, [%1], #8                       \n"
-    MEMACCESS(1)
-    "st1       {v3.s}[2], [%1], #4                     \n"
-    "b.gt      1b                                      \n"
-  : "+r"(src_ptr),         // %0
-    "+r"(dst_ptr),         // %1
-    "+r"(tmp_src_stride),  // %2
-    "+r"(dst_width)        // %3
-  : "r"(&kMult38_Div6),    // %4
-    "r"(&kShuf38_2)        // %5
-  : "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16", "v17",
-    "v18", "v19", "v30", "v31", "memory", "cc"
-  );
+      "st1       {v3.8b}, [%1], #8                       \n"
+      "st1       {v3.s}[2], [%1], #4                     \n"
+      "b.gt      1b                                      \n"
+      : "+r"(src_ptr),         // %0
+        "+r"(dst_ptr),         // %1
+        "+r"(tmp_src_stride),  // %2
+        "+r"(dst_width)        // %3
+      : "r"(&kMult38_Div6),    // %4
+        "r"(&kShuf38_2)        // %5
+      : "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16", "v17", "v18",
+        "v19", "v30", "v31", "memory", "cc");
 }
 
-void ScaleAddRows_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                    uint16* dst_ptr, int src_width, int src_height) {
-  const uint8* src_tmp;
-  asm volatile (
-  "1:                                          \n"
-    "mov       %0, %1                          \n"
-    "mov       w12, %w5                        \n"
-    "eor       v2.16b, v2.16b, v2.16b          \n"
-    "eor       v3.16b, v3.16b, v3.16b          \n"
-  "2:                                          \n"
-    // load 16 pixels into q0
-    MEMACCESS(0)
-    "ld1       {v0.16b}, [%0], %3              \n"
-    "uaddw2    v3.8h, v3.8h, v0.16b            \n"
-    "uaddw     v2.8h, v2.8h, v0.8b             \n"
-    "subs      w12, w12, #1                    \n"
-    "b.gt      2b                              \n"
-    MEMACCESS(2)
-    "st1      {v2.8h, v3.8h}, [%2], #32        \n"  // store pixels
-    "add      %1, %1, #16                      \n"
-    "subs     %w4, %w4, #16                    \n"  // 16 processed per loop
-    "b.gt     1b                               \n"
-  : "=&r"(src_tmp),    // %0
-    "+r"(src_ptr),     // %1
-    "+r"(dst_ptr),     // %2
-    "+r"(src_stride),  // %3
-    "+r"(src_width),   // %4
-    "+r"(src_height)   // %5
-  :
-  : "memory", "cc", "w12", "v0", "v1", "v2", "v3"  // Clobber List
-  );
+void ScaleAddRows_NEON(const uint8_t* src_ptr,
+                       ptrdiff_t src_stride,
+                       uint16_t* dst_ptr,
+                       int src_width,
+                       int src_height) {
+  const uint8_t* src_tmp;
+  asm volatile(
+      "1:                                        \n"
+      "mov       %0, %1                          \n"
+      "mov       w12, %w5                        \n"
+      "eor       v2.16b, v2.16b, v2.16b          \n"
+      "eor       v3.16b, v3.16b, v3.16b          \n"
+      "2:                                        \n"
+      // load 16 pixels into q0
+      "ld1       {v0.16b}, [%0], %3              \n"
+      "uaddw2    v3.8h, v3.8h, v0.16b            \n"
+      "uaddw     v2.8h, v2.8h, v0.8b             \n"
+      "subs      w12, w12, #1                    \n"
+      "b.gt      2b                              \n"
+      "st1      {v2.8h, v3.8h}, [%2], #32        \n"  // store pixels
+      "add      %1, %1, #16                      \n"
+      "subs     %w4, %w4, #16                    \n"  // 16 processed per loop
+      "b.gt     1b                               \n"
+      : "=&r"(src_tmp),    // %0
+        "+r"(src_ptr),     // %1
+        "+r"(dst_ptr),     // %2
+        "+r"(src_stride),  // %3
+        "+r"(src_width),   // %4
+        "+r"(src_height)   // %5
+      :
+      : "memory", "cc", "w12", "v0", "v1", "v2", "v3"  // Clobber List
+      );
 }
 
 // TODO(Yang Zhang): Investigate less load instructions for
 // the x/dx stepping
-#define LOAD2_DATA8_LANE(n)                                    \
-    "lsr        %5, %3, #16                    \n"             \
-    "add        %6, %1, %5                    \n"              \
-    "add        %3, %3, %4                     \n"             \
-    MEMACCESS(6)                                               \
-    "ld2        {v4.b, v5.b}["#n"], [%6]      \n"
+#define LOAD2_DATA8_LANE(n)                      \
+  "lsr        %5, %3, #16                    \n" \
+  "add        %6, %1, %5                     \n" \
+  "add        %3, %3, %4                     \n" \
+  "ld2        {v4.b, v5.b}[" #n "], [%6]     \n"
 
-void ScaleFilterCols_NEON(uint8* dst_ptr, const uint8* src_ptr,
-                          int dst_width, int x, int dx) {
+// The NEON version mimics this formula (from row_common.cc):
+// #define BLENDER(a, b, f) (uint8_t)((int)(a) +
+//    ((((int)((f)) * ((int)(b) - (int)(a))) + 0x8000) >> 16))
+
+void ScaleFilterCols_NEON(uint8_t* dst_ptr,
+                          const uint8_t* src_ptr,
+                          int dst_width,
+                          int x,
+                          int dx) {
   int dx_offset[4] = {0, 1, 2, 3};
   int* tmp = dx_offset;
-  const uint8* src_tmp = src_ptr;
-  int64 dst_width64 = (int64) dst_width;  // Work around ios 64 bit warning.
-  int64 x64 = (int64) x;
-  int64 dx64 = (int64) dx;
+  const uint8_t* src_tmp = src_ptr;
+  int64_t x64 = (int64_t)x;    // NOLINT
+  int64_t dx64 = (int64_t)dx;  // NOLINT
   asm volatile (
     "dup        v0.4s, %w3                     \n"  // x
     "dup        v1.4s, %w4                     \n"  // dx
@@ -626,12 +602,11 @@
     "ushll2    v6.4s, v6.8h, #0                \n"
     "mul       v16.4s, v16.4s, v7.4s           \n"
     "mul       v17.4s, v17.4s, v6.4s           \n"
-    "rshrn      v6.4h, v16.4s, #16             \n"
-    "rshrn2     v6.8h, v17.4s, #16             \n"
+    "rshrn     v6.4h, v16.4s, #16              \n"
+    "rshrn2    v6.8h, v17.4s, #16              \n"
     "add       v4.8h, v4.8h, v6.8h             \n"
     "xtn       v4.8b, v4.8h                    \n"
 
-    MEMACCESS(0)
     "st1       {v4.8b}, [%0], #8               \n"  // store pixels
     "add       v1.4s, v1.4s, v0.4s             \n"
     "add       v2.4s, v2.4s, v0.4s             \n"
@@ -639,7 +614,7 @@
     "b.gt      1b                              \n"
   : "+r"(dst_ptr),          // %0
     "+r"(src_ptr),          // %1
-    "+r"(dst_width64),      // %2
+    "+r"(dst_width),        // %2
     "+r"(x64),              // %3
     "+r"(dx64),             // %4
     "+r"(tmp),              // %5
@@ -653,331 +628,300 @@
 #undef LOAD2_DATA8_LANE
 
 // 16x2 -> 16x1
-void ScaleFilterRows_NEON(uint8* dst_ptr,
-                          const uint8* src_ptr, ptrdiff_t src_stride,
-                          int dst_width, int source_y_fraction) {
-    int y_fraction = 256 - source_y_fraction;
-  asm volatile (
-    "cmp          %w4, #0                      \n"
-    "b.eq         100f                         \n"
-    "add          %2, %2, %1                   \n"
-    "cmp          %w4, #64                     \n"
-    "b.eq         75f                          \n"
-    "cmp          %w4, #128                    \n"
-    "b.eq         50f                          \n"
-    "cmp          %w4, #192                    \n"
-    "b.eq         25f                          \n"
+void ScaleFilterRows_NEON(uint8_t* dst_ptr,
+                          const uint8_t* src_ptr,
+                          ptrdiff_t src_stride,
+                          int dst_width,
+                          int source_y_fraction) {
+  int y_fraction = 256 - source_y_fraction;
+  asm volatile(
+      "cmp          %w4, #0                      \n"
+      "b.eq         100f                         \n"
+      "add          %2, %2, %1                   \n"
+      "cmp          %w4, #64                     \n"
+      "b.eq         75f                          \n"
+      "cmp          %w4, #128                    \n"
+      "b.eq         50f                          \n"
+      "cmp          %w4, #192                    \n"
+      "b.eq         25f                          \n"
 
-    "dup          v5.8b, %w4                   \n"
-    "dup          v4.8b, %w5                   \n"
-    // General purpose row blend.
-  "1:                                          \n"
-    MEMACCESS(1)
-    "ld1          {v0.16b}, [%1], #16          \n"
-    MEMACCESS(2)
-    "ld1          {v1.16b}, [%2], #16          \n"
-    "subs         %w3, %w3, #16                \n"
-    "umull        v6.8h, v0.8b, v4.8b          \n"
-    "umull2       v7.8h, v0.16b, v4.16b        \n"
-    "umlal        v6.8h, v1.8b, v5.8b          \n"
-    "umlal2       v7.8h, v1.16b, v5.16b        \n"
-    "rshrn        v0.8b, v6.8h, #8             \n"
-    "rshrn2       v0.16b, v7.8h, #8            \n"
-    MEMACCESS(0)
-    "st1          {v0.16b}, [%0], #16          \n"
-    "b.gt         1b                           \n"
-    "b            99f                          \n"
+      "dup          v5.8b, %w4                   \n"
+      "dup          v4.8b, %w5                   \n"
+      // General purpose row blend.
+      "1:                                        \n"
+      "ld1          {v0.16b}, [%1], #16          \n"
+      "ld1          {v1.16b}, [%2], #16          \n"
+      "subs         %w3, %w3, #16                \n"
+      "umull        v6.8h, v0.8b, v4.8b          \n"
+      "umull2       v7.8h, v0.16b, v4.16b        \n"
+      "umlal        v6.8h, v1.8b, v5.8b          \n"
+      "umlal2       v7.8h, v1.16b, v5.16b        \n"
+      "rshrn        v0.8b, v6.8h, #8             \n"
+      "rshrn2       v0.16b, v7.8h, #8            \n"
+      "st1          {v0.16b}, [%0], #16          \n"
+      "b.gt         1b                           \n"
+      "b            99f                          \n"
 
-    // Blend 25 / 75.
-  "25:                                         \n"
-    MEMACCESS(1)
-    "ld1          {v0.16b}, [%1], #16          \n"
-    MEMACCESS(2)
-    "ld1          {v1.16b}, [%2], #16          \n"
-    "subs         %w3, %w3, #16                \n"
-    "urhadd       v0.16b, v0.16b, v1.16b       \n"
-    "urhadd       v0.16b, v0.16b, v1.16b       \n"
-    MEMACCESS(0)
-    "st1          {v0.16b}, [%0], #16          \n"
-    "b.gt         25b                          \n"
-    "b            99f                          \n"
+      // Blend 25 / 75.
+      "25:                                       \n"
+      "ld1          {v0.16b}, [%1], #16          \n"
+      "ld1          {v1.16b}, [%2], #16          \n"
+      "subs         %w3, %w3, #16                \n"
+      "urhadd       v0.16b, v0.16b, v1.16b       \n"
+      "urhadd       v0.16b, v0.16b, v1.16b       \n"
+      "st1          {v0.16b}, [%0], #16          \n"
+      "b.gt         25b                          \n"
+      "b            99f                          \n"
 
-    // Blend 50 / 50.
-  "50:                                         \n"
-    MEMACCESS(1)
-    "ld1          {v0.16b}, [%1], #16          \n"
-    MEMACCESS(2)
-    "ld1          {v1.16b}, [%2], #16          \n"
-    "subs         %w3, %w3, #16                \n"
-    "urhadd       v0.16b, v0.16b, v1.16b       \n"
-    MEMACCESS(0)
-    "st1          {v0.16b}, [%0], #16          \n"
-    "b.gt         50b                          \n"
-    "b            99f                          \n"
+      // Blend 50 / 50.
+      "50:                                       \n"
+      "ld1          {v0.16b}, [%1], #16          \n"
+      "ld1          {v1.16b}, [%2], #16          \n"
+      "subs         %w3, %w3, #16                \n"
+      "urhadd       v0.16b, v0.16b, v1.16b       \n"
+      "st1          {v0.16b}, [%0], #16          \n"
+      "b.gt         50b                          \n"
+      "b            99f                          \n"
 
-    // Blend 75 / 25.
-  "75:                                         \n"
-    MEMACCESS(1)
-    "ld1          {v1.16b}, [%1], #16          \n"
-    MEMACCESS(2)
-    "ld1          {v0.16b}, [%2], #16          \n"
-    "subs         %w3, %w3, #16                \n"
-    "urhadd       v0.16b, v0.16b, v1.16b       \n"
-    "urhadd       v0.16b, v0.16b, v1.16b       \n"
-    MEMACCESS(0)
-    "st1          {v0.16b}, [%0], #16          \n"
-    "b.gt         75b                          \n"
-    "b            99f                          \n"
+      // Blend 75 / 25.
+      "75:                                       \n"
+      "ld1          {v1.16b}, [%1], #16          \n"
+      "ld1          {v0.16b}, [%2], #16          \n"
+      "subs         %w3, %w3, #16                \n"
+      "urhadd       v0.16b, v0.16b, v1.16b       \n"
+      "urhadd       v0.16b, v0.16b, v1.16b       \n"
+      "st1          {v0.16b}, [%0], #16          \n"
+      "b.gt         75b                          \n"
+      "b            99f                          \n"
 
-    // Blend 100 / 0 - Copy row unchanged.
-  "100:                                        \n"
-    MEMACCESS(1)
-    "ld1          {v0.16b}, [%1], #16          \n"
-    "subs         %w3, %w3, #16                \n"
-    MEMACCESS(0)
-    "st1          {v0.16b}, [%0], #16          \n"
-    "b.gt         100b                         \n"
+      // Blend 100 / 0 - Copy row unchanged.
+      "100:                                      \n"
+      "ld1          {v0.16b}, [%1], #16          \n"
+      "subs         %w3, %w3, #16                \n"
+      "st1          {v0.16b}, [%0], #16          \n"
+      "b.gt         100b                         \n"
 
-  "99:                                         \n"
-    MEMACCESS(0)
-    "st1          {v0.b}[15], [%0]             \n"
-  : "+r"(dst_ptr),          // %0
-    "+r"(src_ptr),          // %1
-    "+r"(src_stride),       // %2
-    "+r"(dst_width),        // %3
-    "+r"(source_y_fraction),// %4
-    "+r"(y_fraction)        // %5
-  :
-  : "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "memory", "cc"
-  );
+      "99:                                       \n"
+      "st1          {v0.b}[15], [%0]             \n"
+      : "+r"(dst_ptr),            // %0
+        "+r"(src_ptr),            // %1
+        "+r"(src_stride),         // %2
+        "+r"(dst_width),          // %3
+        "+r"(source_y_fraction),  // %4
+        "+r"(y_fraction)          // %5
+      :
+      : "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "memory", "cc");
 }
 
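For readers following the diff, the NEON `ScaleFilterRows_NEON` above computes a weighted vertical blend of two rows. A scalar C sketch of the general-path arithmetic (the assembly additionally special-cases fractions 0, 64, 128 and 192 with cheaper `urhadd`/copy paths) — function name here is illustrative, not libyuv's API:

```c
#include <stdint.h>

// Scalar reference for the general row-blend path: each output byte is a
// weighted average of the same column in two source rows, where
// source_y_fraction (0..256) is the weight of row1.
static void ScaleFilterRow_C(uint8_t* dst,
                             const uint8_t* row0,
                             const uint8_t* row1,
                             int width,
                             int source_y_fraction) {
  int y1 = source_y_fraction;        // weight of row1 (v5 in the assembly)
  int y0 = 256 - source_y_fraction;  // weight of row0 (v4 in the assembly)
  for (int i = 0; i < width; ++i) {
    // +128 rounds to nearest, matching the "rshrn ..., #8" instructions.
    dst[i] = (uint8_t)((row0[i] * y0 + row1[i] * y1 + 128) >> 8);
  }
}
```

With `source_y_fraction == 0` this degenerates to copying row0, which is why the assembly can branch to a plain copy loop at label `100`.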
-void ScaleARGBRowDown2_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst, int dst_width) {
-  asm volatile (
-  "1:                                          \n"
-    // load even pixels into q0, odd into q1
-    MEMACCESS (0)
-    "ld2        {v0.4s, v1.4s}, [%0], #32      \n"
-    MEMACCESS (0)
-    "ld2        {v2.4s, v3.4s}, [%0], #32      \n"
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop
-    MEMACCESS (1)
-    "st1        {v1.16b}, [%1], #16            \n"  // store odd pixels
-    MEMACCESS (1)
-    "st1        {v3.16b}, [%1], #16            \n"
-    "b.gt       1b                             \n"
-  : "+r" (src_ptr),          // %0
-    "+r" (dst),              // %1
-    "+r" (dst_width)         // %2
-  :
-  : "memory", "cc", "v0", "v1", "v2", "v3"  // Clobber List
-  );
+void ScaleARGBRowDown2_NEON(const uint8_t* src_ptr,
+                            ptrdiff_t src_stride,
+                            uint8_t* dst,
+                            int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "1:                                        \n"
+      // load 16 ARGB pixels with even pixels into q0/q2, odd into q1/q3
+      "ld4        {v0.4s,v1.4s,v2.4s,v3.4s}, [%0], #64 \n"
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop
+      "mov        v2.16b, v3.16b                 \n"
+      "st2        {v1.4s,v2.4s}, [%1], #32       \n"  // store 8 odd pixels
+      "b.gt       1b                             \n"
+      : "+r"(src_ptr),   // %0
+        "+r"(dst),       // %1
+        "+r"(dst_width)  // %2
+      :
+      : "memory", "cc", "v0", "v1", "v2", "v3"  // Clobber List
+      );
 }
 
-void ScaleARGBRowDown2Linear_NEON(const uint8* src_argb, ptrdiff_t src_stride,
-                                  uint8* dst_argb, int dst_width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS (0)
-    // load 8 ARGB pixels.
-    "ld4        {v0.16b,v1.16b,v2.16b,v3.16b}, [%0], #64   \n"
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop.
-    "uaddlp     v0.8h, v0.16b                  \n"  // B 16 bytes -> 8 shorts.
-    "uaddlp     v1.8h, v1.16b                  \n"  // G 16 bytes -> 8 shorts.
-    "uaddlp     v2.8h, v2.16b                  \n"  // R 16 bytes -> 8 shorts.
-    "uaddlp     v3.8h, v3.16b                  \n"  // A 16 bytes -> 8 shorts.
-    "rshrn      v0.8b, v0.8h, #1               \n"  // downshift, round and pack
-    "rshrn      v1.8b, v1.8h, #1               \n"
-    "rshrn      v2.8b, v2.8h, #1               \n"
-    "rshrn      v3.8b, v3.8h, #1               \n"
-    MEMACCESS (1)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%1], #32     \n"
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),         // %0
-    "+r"(dst_argb),         // %1
-    "+r"(dst_width)         // %2
-  :
-  : "memory", "cc", "v0", "v1", "v2", "v3"    // Clobber List
-  );
+void ScaleARGBRowDown2Linear_NEON(const uint8_t* src_argb,
+                                  ptrdiff_t src_stride,
+                                  uint8_t* dst_argb,
+                                  int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "1:                                        \n"
+      // load 16 ARGB pixels with even pixels into q0/q2, odd into q1/q3
+      "ld4        {v0.4s,v1.4s,v2.4s,v3.4s}, [%0], #64 \n"
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop
+
+      "urhadd     v0.16b, v0.16b, v1.16b         \n"  // rounding half add
+      "urhadd     v1.16b, v2.16b, v3.16b         \n"
+      "st2        {v0.4s,v1.4s}, [%1], #32       \n"  // store 8 pixels
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),  // %0
+        "+r"(dst_argb),  // %1
+        "+r"(dst_width)  // %2
+      :
+      : "memory", "cc", "v0", "v1", "v2", "v3"  // Clobber List
+      );
 }
 
-void ScaleARGBRowDown2Box_NEON(const uint8* src_ptr, ptrdiff_t src_stride,
-                               uint8* dst, int dst_width) {
-  asm volatile (
-    // change the stride to row 2 pointer
-    "add        %1, %1, %0                     \n"
-  "1:                                          \n"
-    MEMACCESS (0)
-    "ld4        {v0.16b,v1.16b,v2.16b,v3.16b}, [%0], #64   \n"  // load 8 ARGB pixels.
-    "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
-    "uaddlp     v0.8h, v0.16b                  \n"  // B 16 bytes -> 8 shorts.
-    "uaddlp     v1.8h, v1.16b                  \n"  // G 16 bytes -> 8 shorts.
-    "uaddlp     v2.8h, v2.16b                  \n"  // R 16 bytes -> 8 shorts.
-    "uaddlp     v3.8h, v3.16b                  \n"  // A 16 bytes -> 8 shorts.
-    MEMACCESS (1)
-    "ld4        {v16.16b,v17.16b,v18.16b,v19.16b}, [%1], #64 \n"  // load 8 more ARGB pixels.
-    "uadalp     v0.8h, v16.16b                 \n"  // B 16 bytes -> 8 shorts.
-    "uadalp     v1.8h, v17.16b                 \n"  // G 16 bytes -> 8 shorts.
-    "uadalp     v2.8h, v18.16b                 \n"  // R 16 bytes -> 8 shorts.
-    "uadalp     v3.8h, v19.16b                 \n"  // A 16 bytes -> 8 shorts.
-    "rshrn      v0.8b, v0.8h, #2               \n"  // downshift, round and pack
-    "rshrn      v1.8b, v1.8h, #2               \n"
-    "rshrn      v2.8b, v2.8h, #2               \n"
-    "rshrn      v3.8b, v3.8h, #2               \n"
-    MEMACCESS (2)
-    "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%2], #32     \n"
-    "b.gt       1b                             \n"
-  : "+r" (src_ptr),          // %0
-    "+r" (src_stride),       // %1
-    "+r" (dst),              // %2
-    "+r" (dst_width)         // %3
-  :
-  : "memory", "cc", "v0", "v1", "v2", "v3", "v16", "v17", "v18", "v19"
-  );
+void ScaleARGBRowDown2Box_NEON(const uint8_t* src_ptr,
+                               ptrdiff_t src_stride,
+                               uint8_t* dst,
+                               int dst_width) {
+  asm volatile(
+      // change the stride to row 2 pointer
+      "add        %1, %1, %0                     \n"
+      "1:                                        \n"
+      "ld4        {v0.16b,v1.16b,v2.16b,v3.16b}, [%0], #64 \n"  // load 8 ARGB
+      "subs       %w3, %w3, #8                   \n"  // 8 processed per loop.
+      "uaddlp     v0.8h, v0.16b                  \n"  // B 16 bytes -> 8 shorts.
+      "uaddlp     v1.8h, v1.16b                  \n"  // G 16 bytes -> 8 shorts.
+      "uaddlp     v2.8h, v2.16b                  \n"  // R 16 bytes -> 8 shorts.
+      "uaddlp     v3.8h, v3.16b                  \n"  // A 16 bytes -> 8 shorts.
+      "ld4        {v16.16b,v17.16b,v18.16b,v19.16b}, [%1], #64 \n"  // load 8
+      "uadalp     v0.8h, v16.16b                 \n"  // B 16 bytes -> 8 shorts.
+      "uadalp     v1.8h, v17.16b                 \n"  // G 16 bytes -> 8 shorts.
+      "uadalp     v2.8h, v18.16b                 \n"  // R 16 bytes -> 8 shorts.
+      "uadalp     v3.8h, v19.16b                 \n"  // A 16 bytes -> 8 shorts.
+      "rshrn      v0.8b, v0.8h, #2               \n"  // round and pack
+      "rshrn      v1.8b, v1.8h, #2               \n"
+      "rshrn      v2.8b, v2.8h, #2               \n"
+      "rshrn      v3.8b, v3.8h, #2               \n"
+      "st4        {v0.8b,v1.8b,v2.8b,v3.8b}, [%2], #32     \n"
+      "b.gt       1b                             \n"
+      : "+r"(src_ptr),     // %0
+        "+r"(src_stride),  // %1
+        "+r"(dst),         // %2
+        "+r"(dst_width)    // %3
+      :
+      : "memory", "cc", "v0", "v1", "v2", "v3", "v16", "v17", "v18", "v19");
 }
 
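The `ScaleARGBRowDown2Box_NEON` routine above halves an ARGB row with a 2x2 box filter. A scalar sketch of what the `uaddlp`/`uadalp`/`rshrn #2` sequence computes, per channel (function name is illustrative; the real stride is in bytes, as in the assembly):

```c
#include <stddef.h>
#include <stdint.h>

// Scalar reference: each destination ARGB pixel is the rounded average of a
// 2x2 block of source pixels, computed independently for B, G, R and A.
static void ScaleARGBRowDown2Box_C(const uint8_t* src,
                                   ptrdiff_t src_stride,  // bytes to row 2
                                   uint8_t* dst,
                                   int dst_width) {
  for (int x = 0; x < dst_width; ++x) {
    for (int c = 0; c < 4; ++c) {  // 4 channels per ARGB pixel
      int sum = src[c] + src[c + 4] +                             // row 0
                src[src_stride + c] + src[src_stride + c + 4];    // row 1
      dst[c] = (uint8_t)((sum + 2) >> 2);  // round and divide by 4
    }
    src += 8;  // advance two source pixels
    dst += 4;  // advance one destination pixel
  }
}
```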
 // Reads 4 pixels at a time.
 // Alignment requirement: src_argb 4 byte aligned.
-void ScaleARGBRowDownEven_NEON(const uint8* src_argb,  ptrdiff_t src_stride,
-                               int src_stepx, uint8* dst_argb, int dst_width) {
-  asm volatile (
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.s}[0], [%0], %3            \n"
-    MEMACCESS(0)
-    "ld1        {v0.s}[1], [%0], %3            \n"
-    MEMACCESS(0)
-    "ld1        {v0.s}[2], [%0], %3            \n"
-    MEMACCESS(0)
-    "ld1        {v0.s}[3], [%0], %3            \n"
-    "subs       %w2, %w2, #4                   \n"  // 4 pixels per loop.
-    MEMACCESS(1)
-    "st1        {v0.16b}, [%1], #16            \n"
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),    // %0
-    "+r"(dst_argb),    // %1
-    "+r"(dst_width)    // %2
-  : "r"((int64)(src_stepx * 4)) // %3
-  : "memory", "cc", "v0"
-  );
+void ScaleARGBRowDownEven_NEON(const uint8_t* src_argb,
+                               ptrdiff_t src_stride,
+                               int src_stepx,
+                               uint8_t* dst_argb,
+                               int dst_width) {
+  (void)src_stride;
+  asm volatile(
+      "1:                                        \n"
+      "ld1        {v0.s}[0], [%0], %3            \n"
+      "ld1        {v0.s}[1], [%0], %3            \n"
+      "ld1        {v0.s}[2], [%0], %3            \n"
+      "ld1        {v0.s}[3], [%0], %3            \n"
+      "subs       %w2, %w2, #4                   \n"  // 4 pixels per loop.
+      "st1        {v0.16b}, [%1], #16            \n"
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),                // %0
+        "+r"(dst_argb),                // %1
+        "+r"(dst_width)                // %2
+      : "r"((int64_t)(src_stepx * 4))  // %3
+      : "memory", "cc", "v0");
 }
 
 // Reads 4 pixels at a time.
 // Alignment requirement: src_argb 4 byte aligned.
 // TODO(Yang Zhang): Might be worth another optimization pass in future.
 // It could be upgraded to 8 pixels at a time to start with.
-void ScaleARGBRowDownEvenBox_NEON(const uint8* src_argb, ptrdiff_t src_stride,
+void ScaleARGBRowDownEvenBox_NEON(const uint8_t* src_argb,
+                                  ptrdiff_t src_stride,
                                   int src_stepx,
-                                  uint8* dst_argb, int dst_width) {
-  asm volatile (
-    "add        %1, %1, %0                     \n"
-  "1:                                          \n"
-    MEMACCESS(0)
-    "ld1        {v0.8b}, [%0], %4              \n"  // Read 4 2x2 blocks -> 2x1
-    MEMACCESS(1)
-    "ld1        {v1.8b}, [%1], %4              \n"
-    MEMACCESS(0)
-    "ld1        {v2.8b}, [%0], %4              \n"
-    MEMACCESS(1)
-    "ld1        {v3.8b}, [%1], %4              \n"
-    MEMACCESS(0)
-    "ld1        {v4.8b}, [%0], %4              \n"
-    MEMACCESS(1)
-    "ld1        {v5.8b}, [%1], %4              \n"
-    MEMACCESS(0)
-    "ld1        {v6.8b}, [%0], %4              \n"
-    MEMACCESS(1)
-    "ld1        {v7.8b}, [%1], %4              \n"
-    "uaddl      v0.8h, v0.8b, v1.8b            \n"
-    "uaddl      v2.8h, v2.8b, v3.8b            \n"
-    "uaddl      v4.8h, v4.8b, v5.8b            \n"
-    "uaddl      v6.8h, v6.8b, v7.8b            \n"
-    "mov        v16.d[1], v0.d[1]              \n"  // ab_cd -> ac_bd
-    "mov        v0.d[1], v2.d[0]               \n"
-    "mov        v2.d[0], v16.d[1]              \n"
-    "mov        v16.d[1], v4.d[1]              \n"  // ef_gh -> eg_fh
-    "mov        v4.d[1], v6.d[0]               \n"
-    "mov        v6.d[0], v16.d[1]              \n"
-    "add        v0.8h, v0.8h, v2.8h            \n"  // (a+b)_(c+d)
-    "add        v4.8h, v4.8h, v6.8h            \n"  // (e+f)_(g+h)
-    "rshrn      v0.8b, v0.8h, #2               \n"  // first 2 pixels.
-    "rshrn2     v0.16b, v4.8h, #2              \n"  // next 2 pixels.
-    "subs       %w3, %w3, #4                   \n"  // 4 pixels per loop.
-    MEMACCESS(2)
-    "st1     {v0.16b}, [%2], #16               \n"
-    "b.gt       1b                             \n"
-  : "+r"(src_argb),    // %0
-    "+r"(src_stride),  // %1
-    "+r"(dst_argb),    // %2
-    "+r"(dst_width)    // %3
-  : "r"((int64)(src_stepx * 4)) // %4
-  : "memory", "cc", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16"
-  );
+                                  uint8_t* dst_argb,
+                                  int dst_width) {
+  asm volatile(
+      "add        %1, %1, %0                     \n"
+      "1:                                        \n"
+      "ld1        {v0.8b}, [%0], %4              \n"  // Read 4 2x2 -> 2x1
+      "ld1        {v1.8b}, [%1], %4              \n"
+      "ld1        {v2.8b}, [%0], %4              \n"
+      "ld1        {v3.8b}, [%1], %4              \n"
+      "ld1        {v4.8b}, [%0], %4              \n"
+      "ld1        {v5.8b}, [%1], %4              \n"
+      "ld1        {v6.8b}, [%0], %4              \n"
+      "ld1        {v7.8b}, [%1], %4              \n"
+      "uaddl      v0.8h, v0.8b, v1.8b            \n"
+      "uaddl      v2.8h, v2.8b, v3.8b            \n"
+      "uaddl      v4.8h, v4.8b, v5.8b            \n"
+      "uaddl      v6.8h, v6.8b, v7.8b            \n"
+      "mov        v16.d[1], v0.d[1]              \n"  // ab_cd -> ac_bd
+      "mov        v0.d[1], v2.d[0]               \n"
+      "mov        v2.d[0], v16.d[1]              \n"
+      "mov        v16.d[1], v4.d[1]              \n"  // ef_gh -> eg_fh
+      "mov        v4.d[1], v6.d[0]               \n"
+      "mov        v6.d[0], v16.d[1]              \n"
+      "add        v0.8h, v0.8h, v2.8h            \n"  // (a+b)_(c+d)
+      "add        v4.8h, v4.8h, v6.8h            \n"  // (e+f)_(g+h)
+      "rshrn      v0.8b, v0.8h, #2               \n"  // first 2 pixels.
+      "rshrn2     v0.16b, v4.8h, #2              \n"  // next 2 pixels.
+      "subs       %w3, %w3, #4                   \n"  // 4 pixels per loop.
+      "st1     {v0.16b}, [%2], #16               \n"
+      "b.gt       1b                             \n"
+      : "+r"(src_argb),                // %0
+        "+r"(src_stride),              // %1
+        "+r"(dst_argb),                // %2
+        "+r"(dst_width)                // %3
+      : "r"((int64_t)(src_stepx * 4))  // %4
+      : "memory", "cc", "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16");
 }
 
 // TODO(Yang Zhang): Investigate less load instructions for
 // the x/dx stepping
-#define LOAD1_DATA32_LANE(vn, n)                               \
-    "lsr        %5, %3, #16                    \n"             \
-    "add        %6, %1, %5, lsl #2             \n"             \
-    "add        %3, %3, %4                     \n"             \
-    MEMACCESS(6)                                               \
-    "ld1        {"#vn".s}["#n"], [%6]          \n"
+#define LOAD1_DATA32_LANE(vn, n)                 \
+  "lsr        %5, %3, #16                    \n" \
+  "add        %6, %1, %5, lsl #2             \n" \
+  "add        %3, %3, %4                     \n" \
+  "ld1        {" #vn ".s}[" #n "], [%6]      \n"
 
-void ScaleARGBCols_NEON(uint8* dst_argb, const uint8* src_argb,
-                        int dst_width, int x, int dx) {
-  const uint8* src_tmp = src_argb;
-  int64 dst_width64 = (int64) dst_width;  // Work around ios 64 bit warning.
-  int64 x64 = (int64) x;
-  int64 dx64 = (int64) dx;
-  int64 tmp64;
-  asm volatile (
-  "1:                                          \n"
-    LOAD1_DATA32_LANE(v0, 0)
-    LOAD1_DATA32_LANE(v0, 1)
-    LOAD1_DATA32_LANE(v0, 2)
-    LOAD1_DATA32_LANE(v0, 3)
-    LOAD1_DATA32_LANE(v1, 0)
-    LOAD1_DATA32_LANE(v1, 1)
-    LOAD1_DATA32_LANE(v1, 2)
-    LOAD1_DATA32_LANE(v1, 3)
-
-    MEMACCESS(0)
-    "st1        {v0.4s, v1.4s}, [%0], #32      \n"  // store pixels
-    "subs       %w2, %w2, #8                   \n"  // 8 processed per loop
-    "b.gt        1b                            \n"
-  : "+r"(dst_argb),     // %0
-    "+r"(src_argb),     // %1
-    "+r"(dst_width64),  // %2
-    "+r"(x64),          // %3
-    "+r"(dx64),         // %4
-    "=&r"(tmp64),       // %5
-    "+r"(src_tmp)       // %6
-  :
-  : "memory", "cc", "v0", "v1"
-  );
+void ScaleARGBCols_NEON(uint8_t* dst_argb,
+                        const uint8_t* src_argb,
+                        int dst_width,
+                        int x,
+                        int dx) {
+  const uint8_t* src_tmp = src_argb;
+  int64_t x64 = (int64_t)x;    // NOLINT
+  int64_t dx64 = (int64_t)dx;  // NOLINT
+  int64_t tmp64;
+  asm volatile(
+      "1:                                        \n"
+      // clang-format off
+      LOAD1_DATA32_LANE(v0, 0)
+      LOAD1_DATA32_LANE(v0, 1)
+      LOAD1_DATA32_LANE(v0, 2)
+      LOAD1_DATA32_LANE(v0, 3)
+      LOAD1_DATA32_LANE(v1, 0)
+      LOAD1_DATA32_LANE(v1, 1)
+      LOAD1_DATA32_LANE(v1, 2)
+      LOAD1_DATA32_LANE(v1, 3)
+      // clang-format on
+      "st1        {v0.4s, v1.4s}, [%0], #32      \n"  // store pixels
+      "subs       %w2, %w2, #8                   \n"  // 8 processed per loop
+      "b.gt       1b                             \n"
+      : "+r"(dst_argb),   // %0
+        "+r"(src_argb),   // %1
+        "+r"(dst_width),  // %2
+        "+r"(x64),        // %3
+        "+r"(dx64),       // %4
+        "=&r"(tmp64),     // %5
+        "+r"(src_tmp)     // %6
+      :
+      : "memory", "cc", "v0", "v1");
 }
 
 #undef LOAD1_DATA32_LANE
 
 // TODO(Yang Zhang): Investigate less load instructions for
 // the x/dx stepping
-#define LOAD2_DATA32_LANE(vn1, vn2, n)                         \
-    "lsr        %5, %3, #16                           \n"      \
-    "add        %6, %1, %5, lsl #2                    \n"      \
-    "add        %3, %3, %4                            \n"      \
-    MEMACCESS(6)                                               \
-    "ld2        {"#vn1".s, "#vn2".s}["#n"], [%6]      \n"
+#define LOAD2_DATA32_LANE(vn1, vn2, n)                  \
+  "lsr        %5, %3, #16                           \n" \
+  "add        %6, %1, %5, lsl #2                    \n" \
+  "add        %3, %3, %4                            \n" \
+  "ld2        {" #vn1 ".s, " #vn2 ".s}[" #n "], [%6]  \n"
 
-void ScaleARGBFilterCols_NEON(uint8* dst_argb, const uint8* src_argb,
-                              int dst_width, int x, int dx) {
+void ScaleARGBFilterCols_NEON(uint8_t* dst_argb,
+                              const uint8_t* src_argb,
+                              int dst_width,
+                              int x,
+                              int dx) {
   int dx_offset[4] = {0, 1, 2, 3};
   int* tmp = dx_offset;
-  const uint8* src_tmp = src_argb;
-  int64 dst_width64 = (int64) dst_width;  // Work around ios 64 bit warning.
-  int64 x64 = (int64) x;
-  int64 dx64 = (int64) dx;
+  const uint8_t* src_tmp = src_argb;
+  int64_t x64 = (int64_t)x;    // NOLINT
+  int64_t dx64 = (int64_t)dx;  // NOLINT
   asm volatile (
     "dup        v0.4s, %w3                     \n"  // x
     "dup        v1.4s, %w4                     \n"  // dx
@@ -1014,14 +958,13 @@
     "shrn       v0.8b, v16.8h, #7              \n"
     "shrn2      v0.16b, v17.8h, #7             \n"
 
-    MEMACCESS(0)
     "st1     {v0.4s}, [%0], #16                \n"  // store pixels
     "add     v5.4s, v5.4s, v6.4s               \n"
     "subs    %w2, %w2, #4                      \n"  // 4 processed per loop
     "b.gt    1b                                \n"
   : "+r"(dst_argb),         // %0
     "+r"(src_argb),         // %1
-    "+r"(dst_width64),      // %2
+    "+r"(dst_width),        // %2
     "+r"(x64),              // %3
     "+r"(dx64),             // %4
     "+r"(tmp),              // %5
@@ -1034,6 +977,85 @@
 
 #undef LOAD2_DATA32_LANE
 
+// Read 16x2 average down and write 8x1.
+void ScaleRowDown2Box_16_NEON(const uint16_t* src_ptr,
+                              ptrdiff_t src_stride,
+                              uint16_t* dst,
+                              int dst_width) {
+  asm volatile(
+      // change the stride to row 2 pointer
+      "add        %1, %0, %1, lsl #1             \n"  // ptr + stride * 2
+      "1:                                        \n"
+      "ld1        {v0.8h, v1.8h}, [%0], #32      \n"  // load row 1 and post inc
+      "ld1        {v2.8h, v3.8h}, [%1], #32      \n"  // load row 2 and post inc
+      "subs       %w3, %w3, #8                   \n"  // 8 processed per loop
+      "uaddlp     v0.4s, v0.8h                   \n"  // row 1 add adjacent
+      "uaddlp     v1.4s, v1.8h                   \n"
+      "uadalp     v0.4s, v2.8h                   \n"  // +row 2 add adjacent
+      "uadalp     v1.4s, v3.8h                   \n"
+      "rshrn      v0.4h, v0.4s, #2               \n"  // round and pack
+      "rshrn2     v0.8h, v1.4s, #2               \n"
+      "st1        {v0.8h}, [%2], #16             \n"
+      "b.gt       1b                             \n"
+      : "+r"(src_ptr),     // %0
+        "+r"(src_stride),  // %1
+        "+r"(dst),         // %2
+        "+r"(dst_width)    // %3
+      :
+      : "v0", "v1", "v2", "v3"  // Clobber List
+      );
+}
+
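The new `ScaleRowDown2Box_16_NEON` is the 16-bit analogue of the 8-bit box filter: 16x2 input samples become 8x1 output samples. A scalar sketch of the same arithmetic, assuming the stride is counted in `uint16_t` elements as the `lsl #1` in the assembly suggests (the name mirrors libyuv's C-fallback convention but is a sketch, not the shipped fallback):

```c
#include <stddef.h>
#include <stdint.h>

// Scalar reference: each 16-bit output sample is the rounded average of a
// 2x2 block spanning two source rows, matching the pairwise uaddlp/uadalp
// adds followed by the "rshrn ..., #2" rounding shift.
static void ScaleRowDown2Box_16_C(const uint16_t* src_ptr,
                                  ptrdiff_t src_stride,  // in uint16 elements
                                  uint16_t* dst,
                                  int dst_width) {
  const uint16_t* s = src_ptr;               // row 1
  const uint16_t* t = src_ptr + src_stride;  // row 2
  for (int x = 0; x < dst_width; ++x) {
    dst[x] = (uint16_t)((s[0] + s[1] + t[0] + t[1] + 2) >> 2);
    s += 2;
    t += 2;
  }
}
```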
+// Read 8x2 upsample with filtering and write 16x1.
+// Actually reads an extra pixel, so 9x2.
+void ScaleRowUp2_16_NEON(const uint16_t* src_ptr,
+                         ptrdiff_t src_stride,
+                         uint16_t* dst,
+                         int dst_width) {
+  asm volatile(
+      "add        %1, %0, %1, lsl #1             \n"  // ptr + stride * 2
+      "movi       v0.8h, #9                      \n"  // constants
+      "movi       v1.4s, #3                      \n"
+
+      "1:                                        \n"
+      "ld1        {v3.8h}, [%0], %4              \n"  // TL read first 8
+      "ld1        {v4.8h}, [%0], %5              \n"  // TR read 8 offset by 1
+      "ld1        {v5.8h}, [%1], %4              \n"  // BL read 8 from next row
+      "ld1        {v6.8h}, [%1], %5              \n"  // BR offset by 1
+      "subs       %w3, %w3, #16                  \n"  // 16 dst pixels per loop
+      "umull      v16.4s, v3.4h, v0.4h           \n"
+      "umull2     v7.4s, v3.8h, v0.8h            \n"
+      "umull      v18.4s, v4.4h, v0.4h           \n"
+      "umull2     v17.4s, v4.8h, v0.8h           \n"
+      "uaddw      v16.4s, v16.4s, v6.4h          \n"
+      "uaddl2     v19.4s, v6.8h, v3.8h           \n"
+      "uaddl      v3.4s, v6.4h, v3.4h            \n"
+      "uaddw2     v6.4s, v7.4s, v6.8h            \n"
+      "uaddl2     v7.4s, v5.8h, v4.8h            \n"
+      "uaddl      v4.4s, v5.4h, v4.4h            \n"
+      "uaddw      v18.4s, v18.4s, v5.4h          \n"
+      "mla        v16.4s, v4.4s, v1.4s           \n"
+      "mla        v18.4s, v3.4s, v1.4s           \n"
+      "mla        v6.4s, v7.4s, v1.4s            \n"
+      "uaddw2     v4.4s, v17.4s, v5.8h           \n"
+      "uqrshrn    v16.4h,  v16.4s, #4            \n"
+      "mla        v4.4s, v19.4s, v1.4s           \n"
+      "uqrshrn2   v16.8h, v6.4s, #4              \n"
+      "uqrshrn    v17.4h, v18.4s, #4             \n"
+      "uqrshrn2   v17.8h, v4.4s, #4              \n"
+      "st2        {v16.8h-v17.8h}, [%2], #32     \n"
+      "b.gt       1b                             \n"
+      : "+r"(src_ptr),     // %0
+        "+r"(src_stride),  // %1
+        "+r"(dst),         // %2
+        "+r"(dst_width)    // %3
+      : "r"(2LL),          // %4
+        "r"(14LL)          // %5
+      : "v0", "v1", "v2", "v3", "v4", "v5", "v6", "v7", "v16", "v17", "v18",
+        "v19"  // Clobber List
+      );
+}
+
 #endif  // !defined(LIBYUV_DISABLE_NEON) && defined(__aarch64__)
 
 #ifdef __cplusplus
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_win.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_win.cc
index f170973..c5fc86f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_win.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/scale_win.cc
@@ -17,97 +17,93 @@
 #endif
 
 // This module is for 32 bit Visual C x86 and clangcl
-#if !defined(LIBYUV_DISABLE_X86) && defined(_M_IX86)
+#if !defined(LIBYUV_DISABLE_X86) && defined(_M_IX86) && defined(_MSC_VER)
 
 // Offsets for source bytes 0 to 9
-static uvec8 kShuf0 =
-  { 0, 1, 3, 4, 5, 7, 8, 9, 128, 128, 128, 128, 128, 128, 128, 128 };
+static const uvec8 kShuf0 = {0,   1,   3,   4,   5,   7,   8,   9,
+                             128, 128, 128, 128, 128, 128, 128, 128};
 
 // Offsets for source bytes 11 to 20 with 8 subtracted = 3 to 12.
-static uvec8 kShuf1 =
-  { 3, 4, 5, 7, 8, 9, 11, 12, 128, 128, 128, 128, 128, 128, 128, 128 };
+static const uvec8 kShuf1 = {3,   4,   5,   7,   8,   9,   11,  12,
+                             128, 128, 128, 128, 128, 128, 128, 128};
 
 // Offsets for source bytes 21 to 31 with 16 subtracted = 5 to 31.
-static uvec8 kShuf2 =
-  { 5, 7, 8, 9, 11, 12, 13, 15, 128, 128, 128, 128, 128, 128, 128, 128 };
+static const uvec8 kShuf2 = {5,   7,   8,   9,   11,  12,  13,  15,
+                             128, 128, 128, 128, 128, 128, 128, 128};
 
 // Offsets for source bytes 0 to 10
-static uvec8 kShuf01 =
-  { 0, 1, 1, 2, 2, 3, 4, 5, 5, 6, 6, 7, 8, 9, 9, 10 };
+static const uvec8 kShuf01 = {0, 1, 1, 2, 2, 3, 4, 5, 5, 6, 6, 7, 8, 9, 9, 10};
 
 // Offsets for source bytes 10 to 21 with 8 subtracted = 3 to 13.
-static uvec8 kShuf11 =
-  { 2, 3, 4, 5, 5, 6, 6, 7, 8, 9, 9, 10, 10, 11, 12, 13 };
+static const uvec8 kShuf11 = {2, 3, 4, 5,  5,  6,  6,  7,
+                              8, 9, 9, 10, 10, 11, 12, 13};
 
 // Offsets for source bytes 21 to 31 with 16 subtracted = 5 to 31.
-static uvec8 kShuf21 =
-  { 5, 6, 6, 7, 8, 9, 9, 10, 10, 11, 12, 13, 13, 14, 14, 15 };
+static const uvec8 kShuf21 = {5,  6,  6,  7,  8,  9,  9,  10,
+                              10, 11, 12, 13, 13, 14, 14, 15};
 
 // Coefficients for source bytes 0 to 10
-static uvec8 kMadd01 =
-  { 3, 1, 2, 2, 1, 3, 3, 1, 2, 2, 1, 3, 3, 1, 2, 2 };
+static const uvec8 kMadd01 = {3, 1, 2, 2, 1, 3, 3, 1, 2, 2, 1, 3, 3, 1, 2, 2};
 
 // Coefficients for source bytes 10 to 21
-static uvec8 kMadd11 =
-  { 1, 3, 3, 1, 2, 2, 1, 3, 3, 1, 2, 2, 1, 3, 3, 1 };
+static const uvec8 kMadd11 = {1, 3, 3, 1, 2, 2, 1, 3, 3, 1, 2, 2, 1, 3, 3, 1};
 
 // Coefficients for source bytes 21 to 31
-static uvec8 kMadd21 =
-  { 2, 2, 1, 3, 3, 1, 2, 2, 1, 3, 3, 1, 2, 2, 1, 3 };
+static const uvec8 kMadd21 = {2, 2, 1, 3, 3, 1, 2, 2, 1, 3, 3, 1, 2, 2, 1, 3};
 
 // Coefficients for source bytes 21 to 31
-static vec16 kRound34 =
-  { 2, 2, 2, 2, 2, 2, 2, 2 };
+static const vec16 kRound34 = {2, 2, 2, 2, 2, 2, 2, 2};
 
-static uvec8 kShuf38a =
-  { 0, 3, 6, 8, 11, 14, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128 };
+static const uvec8 kShuf38a = {0,   3,   6,   8,   11,  14,  128, 128,
+                               128, 128, 128, 128, 128, 128, 128, 128};
 
-static uvec8 kShuf38b =
-  { 128, 128, 128, 128, 128, 128, 0, 3, 6, 8, 11, 14, 128, 128, 128, 128 };
+static const uvec8 kShuf38b = {128, 128, 128, 128, 128, 128, 0,   3,
+                               6,   8,   11,  14,  128, 128, 128, 128};
 
 // Arrange words 0,3,6 into 0,1,2
-static uvec8 kShufAc =
-  { 0, 1, 6, 7, 12, 13, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128 };
+static const uvec8 kShufAc = {0,   1,   6,   7,   12,  13,  128, 128,
+                              128, 128, 128, 128, 128, 128, 128, 128};
 
 // Arrange words 0,3,6 into 3,4,5
-static uvec8 kShufAc3 =
-  { 128, 128, 128, 128, 128, 128, 0, 1, 6, 7, 12, 13, 128, 128, 128, 128 };
+static const uvec8 kShufAc3 = {128, 128, 128, 128, 128, 128, 0,   1,
+                               6,   7,   12,  13,  128, 128, 128, 128};
 
 // Scaling values for boxes of 3x3 and 2x3
-static uvec16 kScaleAc33 =
-  { 65536 / 9, 65536 / 9, 65536 / 6, 65536 / 9, 65536 / 9, 65536 / 6, 0, 0 };
+static const uvec16 kScaleAc33 = {65536 / 9, 65536 / 9, 65536 / 6, 65536 / 9,
+                                  65536 / 9, 65536 / 6, 0,         0};
 
 // Arrange first value for pixels 0,1,2,3,4,5
-static uvec8 kShufAb0 =
-  { 0, 128, 3, 128, 6, 128, 8, 128, 11, 128, 14, 128, 128, 128, 128, 128 };
+static const uvec8 kShufAb0 = {0,  128, 3,  128, 6,   128, 8,   128,
+                               11, 128, 14, 128, 128, 128, 128, 128};
 
 // Arrange second value for pixels 0,1,2,3,4,5
-static uvec8 kShufAb1 =
-  { 1, 128, 4, 128, 7, 128, 9, 128, 12, 128, 15, 128, 128, 128, 128, 128 };
+static const uvec8 kShufAb1 = {1,  128, 4,  128, 7,   128, 9,   128,
+                               12, 128, 15, 128, 128, 128, 128, 128};
 
 // Arrange third value for pixels 0,1,2,3,4,5
-static uvec8 kShufAb2 =
-  { 2, 128, 5, 128, 128, 128, 10, 128, 13, 128, 128, 128, 128, 128, 128, 128 };
+static const uvec8 kShufAb2 = {2,  128, 5,   128, 128, 128, 10,  128,
+                               13, 128, 128, 128, 128, 128, 128, 128};
 
 // Scaling values for boxes of 3x2 and 2x2
-static uvec16 kScaleAb2 =
-  { 65536 / 3, 65536 / 3, 65536 / 2, 65536 / 3, 65536 / 3, 65536 / 2, 0, 0 };
+static const uvec16 kScaleAb2 = {65536 / 3, 65536 / 3, 65536 / 2, 65536 / 3,
+                                 65536 / 3, 65536 / 2, 0,         0};
 
 // Reads 32 pixels, throws half away and writes 16 pixels.
-__declspec(naked)
-void ScaleRowDown2_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                         uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown2_SSSE3(const uint8_t* src_ptr,
+                                           ptrdiff_t src_stride,
+                                           uint8_t* dst_ptr,
+                                           int dst_width) {
   __asm {
-    mov        eax, [esp + 4]        // src_ptr
-                                     // src_stride ignored
-    mov        edx, [esp + 12]       // dst_ptr
-    mov        ecx, [esp + 16]       // dst_width
+    mov        eax, [esp + 4]  // src_ptr
+    // src_stride ignored
+    mov        edx, [esp + 12]  // dst_ptr
+    mov        ecx, [esp + 16]  // dst_width
 
   wloop:
     movdqu     xmm0, [eax]
     movdqu     xmm1, [eax + 16]
     lea        eax,  [eax + 32]
-    psrlw      xmm0, 8               // isolate odd pixels.
+    psrlw      xmm0, 8          // isolate odd pixels.
     psrlw      xmm1, 8
     packuswb   xmm0, xmm1
     movdqu     [edx], xmm0
@@ -120,27 +116,28 @@
 }
 
 // Blends 32x1 rectangle to 16x1.
-__declspec(naked)
-void ScaleRowDown2Linear_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                               uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown2Linear_SSSE3(const uint8_t* src_ptr,
+                                                 ptrdiff_t src_stride,
+                                                 uint8_t* dst_ptr,
+                                                 int dst_width) {
   __asm {
-    mov        eax, [esp + 4]        // src_ptr
-                                     // src_stride
-    mov        edx, [esp + 12]       // dst_ptr
-    mov        ecx, [esp + 16]       // dst_width
+    mov        eax, [esp + 4]  // src_ptr
+    // src_stride
+    mov        edx, [esp + 12]  // dst_ptr
+    mov        ecx, [esp + 16]  // dst_width
 
-    pcmpeqb    xmm4, xmm4            // constant 0x0101
+    pcmpeqb    xmm4, xmm4  // constant 0x0101
     psrlw      xmm4, 15
     packuswb   xmm4, xmm4
-    pxor       xmm5, xmm5            // constant 0
+    pxor       xmm5, xmm5  // constant 0
 
   wloop:
     movdqu     xmm0, [eax]
     movdqu     xmm1, [eax + 16]
     lea        eax,  [eax + 32]
-    pmaddubsw  xmm0, xmm4      // horizontal add
+    pmaddubsw  xmm0, xmm4  // horizontal add
     pmaddubsw  xmm1, xmm4
-    pavgw      xmm0, xmm5      // (x + 1) / 2
+    pavgw      xmm0, xmm5       // (x + 1) / 2
     pavgw      xmm1, xmm5
     packuswb   xmm0, xmm1
     movdqu     [edx], xmm0
@@ -153,20 +150,21 @@
 }
 
 // Blends 32x2 rectangle to 16x1.
-__declspec(naked)
-void ScaleRowDown2Box_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                            uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown2Box_SSSE3(const uint8_t* src_ptr,
+                                              ptrdiff_t src_stride,
+                                              uint8_t* dst_ptr,
+                                              int dst_width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]    // src_ptr
-    mov        esi, [esp + 4 + 8]    // src_stride
-    mov        edx, [esp + 4 + 12]   // dst_ptr
-    mov        ecx, [esp + 4 + 16]   // dst_width
+    mov        eax, [esp + 4 + 4]  // src_ptr
+    mov        esi, [esp + 4 + 8]  // src_stride
+    mov        edx, [esp + 4 + 12]  // dst_ptr
+    mov        ecx, [esp + 4 + 16]  // dst_width
 
-    pcmpeqb    xmm4, xmm4            // constant 0x0101
+    pcmpeqb    xmm4, xmm4  // constant 0x0101
     psrlw      xmm4, 15
     packuswb   xmm4, xmm4
-    pxor       xmm5, xmm5            // constant 0
+    pxor       xmm5, xmm5  // constant 0
 
   wloop:
     movdqu     xmm0, [eax]
@@ -174,15 +172,15 @@
     movdqu     xmm2, [eax + esi]
     movdqu     xmm3, [eax + esi + 16]
     lea        eax,  [eax + 32]
-    pmaddubsw  xmm0, xmm4      // horizontal add
+    pmaddubsw  xmm0, xmm4  // horizontal add
     pmaddubsw  xmm1, xmm4
     pmaddubsw  xmm2, xmm4
     pmaddubsw  xmm3, xmm4
-    paddw      xmm0, xmm2      // vertical add
+    paddw      xmm0, xmm2  // vertical add
     paddw      xmm1, xmm3
     psrlw      xmm0, 1
     psrlw      xmm1, 1
-    pavgw      xmm0, xmm5      // (x + 1) / 2
+    pavgw      xmm0, xmm5  // (x + 1) / 2
     pavgw      xmm1, xmm5
     packuswb   xmm0, xmm1
     movdqu     [edx], xmm0
@@ -197,23 +195,24 @@
 
 #ifdef HAS_SCALEROWDOWN2_AVX2
 // Reads 64 pixels, throws half away and writes 32 pixels.
-__declspec(naked)
-void ScaleRowDown2_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown2_AVX2(const uint8_t* src_ptr,
+                                          ptrdiff_t src_stride,
+                                          uint8_t* dst_ptr,
+                                          int dst_width) {
   __asm {
-    mov        eax, [esp + 4]        // src_ptr
-                                     // src_stride ignored
-    mov        edx, [esp + 12]       // dst_ptr
-    mov        ecx, [esp + 16]       // dst_width
+    mov        eax, [esp + 4]  // src_ptr
+    // src_stride ignored
+    mov        edx, [esp + 12]  // dst_ptr
+    mov        ecx, [esp + 16]  // dst_width
 
   wloop:
     vmovdqu     ymm0, [eax]
     vmovdqu     ymm1, [eax + 32]
     lea         eax,  [eax + 64]
-    vpsrlw      ymm0, ymm0, 8        // isolate odd pixels.
+    vpsrlw      ymm0, ymm0, 8  // isolate odd pixels.
     vpsrlw      ymm1, ymm1, 8
     vpackuswb   ymm0, ymm0, ymm1
-    vpermq      ymm0, ymm0, 0xd8     // unmutate vpackuswb
+    vpermq      ymm0, ymm0, 0xd8       // unmutate vpackuswb
     vmovdqu     [edx], ymm0
     lea         edx, [edx + 32]
     sub         ecx, 32
@@ -225,30 +224,31 @@
 }
 
 // Blends 64x1 rectangle to 32x1.
-__declspec(naked)
-void ScaleRowDown2Linear_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                              uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown2Linear_AVX2(const uint8_t* src_ptr,
+                                                ptrdiff_t src_stride,
+                                                uint8_t* dst_ptr,
+                                                int dst_width) {
   __asm {
-    mov         eax, [esp + 4]        // src_ptr
-                                      // src_stride
-    mov         edx, [esp + 12]       // dst_ptr
-    mov         ecx, [esp + 16]       // dst_width
+    mov         eax, [esp + 4]  // src_ptr
+    // src_stride
+    mov         edx, [esp + 12]  // dst_ptr
+    mov         ecx, [esp + 16]  // dst_width
 
-    vpcmpeqb    ymm4, ymm4, ymm4      // '1' constant, 8b
+    vpcmpeqb    ymm4, ymm4, ymm4  // '1' constant, 8b
     vpsrlw      ymm4, ymm4, 15
     vpackuswb   ymm4, ymm4, ymm4
-    vpxor       ymm5, ymm5, ymm5      // constant 0
+    vpxor       ymm5, ymm5, ymm5  // constant 0
 
   wloop:
     vmovdqu     ymm0, [eax]
     vmovdqu     ymm1, [eax + 32]
     lea         eax,  [eax + 64]
-    vpmaddubsw  ymm0, ymm0, ymm4      // horizontal add
+    vpmaddubsw  ymm0, ymm0, ymm4  // horizontal add
     vpmaddubsw  ymm1, ymm1, ymm4
-    vpavgw      ymm0, ymm0, ymm5      // (x + 1) / 2
+    vpavgw      ymm0, ymm0, ymm5  // (x + 1) / 2
     vpavgw      ymm1, ymm1, ymm5
     vpackuswb   ymm0, ymm0, ymm1
-    vpermq      ymm0, ymm0, 0xd8      // unmutate vpackuswb
+    vpermq      ymm0, ymm0, 0xd8       // unmutate vpackuswb
     vmovdqu     [edx], ymm0
     lea         edx, [edx + 32]
     sub         ecx, 32
@@ -262,20 +262,21 @@
 // For rounding, average = (sum + 2) / 4
 // becomes average((sum >> 1), 0)
 // Blends 64x2 rectangle to 32x1.
-__declspec(naked)
-void ScaleRowDown2Box_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown2Box_AVX2(const uint8_t* src_ptr,
+                                             ptrdiff_t src_stride,
+                                             uint8_t* dst_ptr,
+                                             int dst_width) {
   __asm {
     push        esi
-    mov         eax, [esp + 4 + 4]    // src_ptr
-    mov         esi, [esp + 4 + 8]    // src_stride
-    mov         edx, [esp + 4 + 12]   // dst_ptr
-    mov         ecx, [esp + 4 + 16]   // dst_width
+    mov         eax, [esp + 4 + 4]  // src_ptr
+    mov         esi, [esp + 4 + 8]  // src_stride
+    mov         edx, [esp + 4 + 12]  // dst_ptr
+    mov         ecx, [esp + 4 + 16]  // dst_width
 
-    vpcmpeqb    ymm4, ymm4, ymm4      // '1' constant, 8b
+    vpcmpeqb    ymm4, ymm4, ymm4  // '1' constant, 8b
     vpsrlw      ymm4, ymm4, 15
     vpackuswb   ymm4, ymm4, ymm4
-    vpxor       ymm5, ymm5, ymm5      // constant 0
+    vpxor       ymm5, ymm5, ymm5  // constant 0
 
   wloop:
     vmovdqu     ymm0, [eax]
@@ -283,18 +284,18 @@
     vmovdqu     ymm2, [eax + esi]
     vmovdqu     ymm3, [eax + esi + 32]
     lea         eax,  [eax + 64]
-    vpmaddubsw  ymm0, ymm0, ymm4      // horizontal add
+    vpmaddubsw  ymm0, ymm0, ymm4  // horizontal add
     vpmaddubsw  ymm1, ymm1, ymm4
     vpmaddubsw  ymm2, ymm2, ymm4
     vpmaddubsw  ymm3, ymm3, ymm4
-    vpaddw      ymm0, ymm0, ymm2      // vertical add
+    vpaddw      ymm0, ymm0, ymm2  // vertical add
     vpaddw      ymm1, ymm1, ymm3
-    vpsrlw      ymm0, ymm0, 1         // (x + 2) / 4 = (x / 2 + 1) / 2
+    vpsrlw      ymm0, ymm0, 1  // (x + 2) / 4 = (x / 2 + 1) / 2
     vpsrlw      ymm1, ymm1, 1
-    vpavgw      ymm0, ymm0, ymm5      // (x + 1) / 2
+    vpavgw      ymm0, ymm0, ymm5  // (x + 1) / 2
     vpavgw      ymm1, ymm1, ymm5
     vpackuswb   ymm0, ymm0, ymm1
-    vpermq      ymm0, ymm0, 0xd8      // unmutate vpackuswb
+    vpermq      ymm0, ymm0, 0xd8  // unmutate vpackuswb
     vmovdqu     [edx], ymm0
     lea         edx, [edx + 32]
     sub         ecx, 32
@@ -308,15 +309,16 @@
 #endif  // HAS_SCALEROWDOWN2_AVX2
 
 // Point samples 32 pixels to 8 pixels.
-__declspec(naked)
-void ScaleRowDown4_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown4_SSSE3(const uint8_t* src_ptr,
+                                           ptrdiff_t src_stride,
+                                           uint8_t* dst_ptr,
+                                           int dst_width) {
   __asm {
-    mov        eax, [esp + 4]        // src_ptr
-                                     // src_stride ignored
-    mov        edx, [esp + 12]       // dst_ptr
-    mov        ecx, [esp + 16]       // dst_width
-    pcmpeqb    xmm5, xmm5            // generate mask 0x00ff0000
+    mov        eax, [esp + 4]  // src_ptr
+    // src_stride ignored
+    mov        edx, [esp + 12]  // dst_ptr
+    mov        ecx, [esp + 16]  // dst_width
+    pcmpeqb    xmm5, xmm5       // generate mask 0x00ff0000
     psrld      xmm5, 24
     pslld      xmm5, 16
 
@@ -339,50 +341,51 @@
 }
 
 // Blends 32x4 rectangle to 8x1.
-__declspec(naked)
-void ScaleRowDown4Box_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown4Box_SSSE3(const uint8_t* src_ptr,
+                                              ptrdiff_t src_stride,
+                                              uint8_t* dst_ptr,
+                                              int dst_width) {
   __asm {
     push       esi
     push       edi
-    mov        eax, [esp + 8 + 4]    // src_ptr
-    mov        esi, [esp + 8 + 8]    // src_stride
-    mov        edx, [esp + 8 + 12]   // dst_ptr
-    mov        ecx, [esp + 8 + 16]   // dst_width
+    mov        eax, [esp + 8 + 4]  // src_ptr
+    mov        esi, [esp + 8 + 8]  // src_stride
+    mov        edx, [esp + 8 + 12]  // dst_ptr
+    mov        ecx, [esp + 8 + 16]  // dst_width
     lea        edi, [esi + esi * 2]  // src_stride * 3
-    pcmpeqb    xmm4, xmm4            // constant 0x0101
+    pcmpeqb    xmm4, xmm4  // constant 0x0101
     psrlw      xmm4, 15
     movdqa     xmm5, xmm4
     packuswb   xmm4, xmm4
-    psllw      xmm5, 3               // constant 0x0008
+    psllw      xmm5, 3  // constant 0x0008
 
   wloop:
-    movdqu     xmm0, [eax]           // average rows
+    movdqu     xmm0, [eax]  // average rows
     movdqu     xmm1, [eax + 16]
     movdqu     xmm2, [eax + esi]
     movdqu     xmm3, [eax + esi + 16]
-    pmaddubsw  xmm0, xmm4      // horizontal add
+    pmaddubsw  xmm0, xmm4  // horizontal add
     pmaddubsw  xmm1, xmm4
     pmaddubsw  xmm2, xmm4
     pmaddubsw  xmm3, xmm4
-    paddw      xmm0, xmm2      // vertical add rows 0, 1
+    paddw      xmm0, xmm2  // vertical add rows 0, 1
     paddw      xmm1, xmm3
     movdqu     xmm2, [eax + esi * 2]
     movdqu     xmm3, [eax + esi * 2 + 16]
     pmaddubsw  xmm2, xmm4
     pmaddubsw  xmm3, xmm4
-    paddw      xmm0, xmm2      // add row 2
+    paddw      xmm0, xmm2  // add row 2
     paddw      xmm1, xmm3
     movdqu     xmm2, [eax + edi]
     movdqu     xmm3, [eax + edi + 16]
     lea        eax, [eax + 32]
     pmaddubsw  xmm2, xmm4
     pmaddubsw  xmm3, xmm4
-    paddw      xmm0, xmm2      // add row 3
+    paddw      xmm0, xmm2  // add row 3
     paddw      xmm1, xmm3
     phaddw     xmm0, xmm1
-    paddw      xmm0, xmm5      // + 8 for round
-    psrlw      xmm0, 4         // /16 for average of 4 * 4
+    paddw      xmm0, xmm5  // + 8 for round
+    psrlw      xmm0, 4  // /16 for average of 4 * 4
     packuswb   xmm0, xmm0
     movq       qword ptr [edx], xmm0
     lea        edx, [edx + 8]
@@ -397,15 +400,16 @@
 
 #ifdef HAS_SCALEROWDOWN4_AVX2
 // Point samples 64 pixels to 16 pixels.
-__declspec(naked)
-void ScaleRowDown4_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                        uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown4_AVX2(const uint8_t* src_ptr,
+                                          ptrdiff_t src_stride,
+                                          uint8_t* dst_ptr,
+                                          int dst_width) {
   __asm {
-    mov         eax, [esp + 4]        // src_ptr
-                                      // src_stride ignored
-    mov         edx, [esp + 12]       // dst_ptr
-    mov         ecx, [esp + 16]       // dst_width
-    vpcmpeqb    ymm5, ymm5, ymm5      // generate mask 0x00ff0000
+    mov         eax, [esp + 4]  // src_ptr
+    // src_stride ignored
+    mov         edx, [esp + 12]  // dst_ptr
+    mov         ecx, [esp + 16]  // dst_width
+    vpcmpeqb    ymm5, ymm5, ymm5  // generate mask 0x00ff0000
     vpsrld      ymm5, ymm5, 24
     vpslld      ymm5, ymm5, 16
 
@@ -416,10 +420,10 @@
     vpand       ymm0, ymm0, ymm5
     vpand       ymm1, ymm1, ymm5
     vpackuswb   ymm0, ymm0, ymm1
-    vpermq      ymm0, ymm0, 0xd8      // unmutate vpackuswb
+    vpermq      ymm0, ymm0, 0xd8  // unmutate vpackuswb
     vpsrlw      ymm0, ymm0, 8
     vpackuswb   ymm0, ymm0, ymm0
-    vpermq      ymm0, ymm0, 0xd8      // unmutate vpackuswb
+    vpermq      ymm0, ymm0, 0xd8       // unmutate vpackuswb
     vmovdqu     [edx], xmm0
     lea         edx, [edx + 16]
     sub         ecx, 16
@@ -431,52 +435,53 @@
 }
 
 // Blends 64x4 rectangle to 16x1.
-__declspec(naked)
-void ScaleRowDown4Box_AVX2(const uint8* src_ptr, ptrdiff_t src_stride,
-                           uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown4Box_AVX2(const uint8_t* src_ptr,
+                                             ptrdiff_t src_stride,
+                                             uint8_t* dst_ptr,
+                                             int dst_width) {
   __asm {
     push        esi
     push        edi
-    mov         eax, [esp + 8 + 4]    // src_ptr
-    mov         esi, [esp + 8 + 8]    // src_stride
-    mov         edx, [esp + 8 + 12]   // dst_ptr
-    mov         ecx, [esp + 8 + 16]   // dst_width
+    mov         eax, [esp + 8 + 4]  // src_ptr
+    mov         esi, [esp + 8 + 8]  // src_stride
+    mov         edx, [esp + 8 + 12]  // dst_ptr
+    mov         ecx, [esp + 8 + 16]  // dst_width
     lea         edi, [esi + esi * 2]  // src_stride * 3
-    vpcmpeqb    ymm4, ymm4, ymm4            // constant 0x0101
+    vpcmpeqb    ymm4, ymm4, ymm4  // constant 0x0101
     vpsrlw      ymm4, ymm4, 15
-    vpsllw      ymm5, ymm4, 3               // constant 0x0008
+    vpsllw      ymm5, ymm4, 3  // constant 0x0008
     vpackuswb   ymm4, ymm4, ymm4
 
   wloop:
-    vmovdqu     ymm0, [eax]           // average rows
+    vmovdqu     ymm0, [eax]  // average rows
     vmovdqu     ymm1, [eax + 32]
     vmovdqu     ymm2, [eax + esi]
     vmovdqu     ymm3, [eax + esi + 32]
-    vpmaddubsw  ymm0, ymm0, ymm4      // horizontal add
+    vpmaddubsw  ymm0, ymm0, ymm4  // horizontal add
     vpmaddubsw  ymm1, ymm1, ymm4
     vpmaddubsw  ymm2, ymm2, ymm4
     vpmaddubsw  ymm3, ymm3, ymm4
-    vpaddw      ymm0, ymm0, ymm2      // vertical add rows 0, 1
+    vpaddw      ymm0, ymm0, ymm2  // vertical add rows 0, 1
     vpaddw      ymm1, ymm1, ymm3
     vmovdqu     ymm2, [eax + esi * 2]
     vmovdqu     ymm3, [eax + esi * 2 + 32]
     vpmaddubsw  ymm2, ymm2, ymm4
     vpmaddubsw  ymm3, ymm3, ymm4
-    vpaddw      ymm0, ymm0, ymm2      // add row 2
+    vpaddw      ymm0, ymm0, ymm2  // add row 2
     vpaddw      ymm1, ymm1, ymm3
     vmovdqu     ymm2, [eax + edi]
     vmovdqu     ymm3, [eax + edi + 32]
     lea         eax,  [eax + 64]
     vpmaddubsw  ymm2, ymm2, ymm4
     vpmaddubsw  ymm3, ymm3, ymm4
-    vpaddw      ymm0, ymm0, ymm2      // add row 3
+    vpaddw      ymm0, ymm0, ymm2  // add row 3
     vpaddw      ymm1, ymm1, ymm3
-    vphaddw     ymm0, ymm0, ymm1      // mutates
-    vpermq      ymm0, ymm0, 0xd8      // unmutate vphaddw
-    vpaddw      ymm0, ymm0, ymm5      // + 8 for round
-    vpsrlw      ymm0, ymm0, 4         // /32 for average of 4 * 4
+    vphaddw     ymm0, ymm0, ymm1  // mutates
+    vpermq      ymm0, ymm0, 0xd8  // unmutate vphaddw
+    vpaddw      ymm0, ymm0, ymm5  // + 8 for round
+    vpsrlw      ymm0, ymm0, 4  // /32 for average of 4 * 4
     vpackuswb   ymm0, ymm0, ymm0
-    vpermq      ymm0, ymm0, 0xd8      // unmutate vpackuswb
+    vpermq      ymm0, ymm0, 0xd8  // unmutate vpackuswb
     vmovdqu     [edx], xmm0
     lea         edx, [edx + 16]
     sub         ecx, 16
@@ -494,14 +499,15 @@
 // Produces three 8 byte values. For each 8 bytes, 16 bytes are read.
 // Then shuffled to do the scaling.
 
-__declspec(naked)
-void ScaleRowDown34_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                          uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown34_SSSE3(const uint8_t* src_ptr,
+                                            ptrdiff_t src_stride,
+                                            uint8_t* dst_ptr,
+                                            int dst_width) {
   __asm {
-    mov        eax, [esp + 4]        // src_ptr
-                                     // src_stride ignored
-    mov        edx, [esp + 12]       // dst_ptr
-    mov        ecx, [esp + 16]       // dst_width
+    mov        eax, [esp + 4]   // src_ptr
+    // src_stride ignored
+    mov        edx, [esp + 12]  // dst_ptr
+    mov        ecx, [esp + 16]  // dst_width
     movdqa     xmm3, xmmword ptr kShuf0
     movdqa     xmm4, xmmword ptr kShuf1
     movdqa     xmm5, xmmword ptr kShuf2
@@ -541,16 +547,16 @@
 // xmm7 kRound34
 
 // Note that movdqa+palign may be better than movdqu.
-__declspec(naked)
-void ScaleRowDown34_1_Box_SSSE3(const uint8* src_ptr,
-                                ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown34_1_Box_SSSE3(const uint8_t* src_ptr,
+                                                  ptrdiff_t src_stride,
+                                                  uint8_t* dst_ptr,
+                                                  int dst_width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]    // src_ptr
-    mov        esi, [esp + 4 + 8]    // src_stride
-    mov        edx, [esp + 4 + 12]   // dst_ptr
-    mov        ecx, [esp + 4 + 16]   // dst_width
+    mov        eax, [esp + 4 + 4]  // src_ptr
+    mov        esi, [esp + 4 + 8]  // src_stride
+    mov        edx, [esp + 4 + 12]  // dst_ptr
+    mov        ecx, [esp + 4 + 16]  // dst_width
     movdqa     xmm2, xmmword ptr kShuf01
     movdqa     xmm3, xmmword ptr kShuf11
     movdqa     xmm4, xmmword ptr kShuf21
@@ -559,7 +565,7 @@
     movdqa     xmm7, xmmword ptr kRound34
 
   wloop:
-    movdqu     xmm0, [eax]           // pixels 0..7
+    movdqu     xmm0, [eax]  // pixels 0..7
     movdqu     xmm1, [eax + esi]
     pavgb      xmm0, xmm1
     pshufb     xmm0, xmm2
@@ -568,7 +574,7 @@
     psrlw      xmm0, 2
     packuswb   xmm0, xmm0
     movq       qword ptr [edx], xmm0
-    movdqu     xmm0, [eax + 8]       // pixels 8..15
+    movdqu     xmm0, [eax + 8]  // pixels 8..15
     movdqu     xmm1, [eax + esi + 8]
     pavgb      xmm0, xmm1
     pshufb     xmm0, xmm3
@@ -577,7 +583,7 @@
     psrlw      xmm0, 2
     packuswb   xmm0, xmm0
     movq       qword ptr [edx + 8], xmm0
-    movdqu     xmm0, [eax + 16]      // pixels 16..23
+    movdqu     xmm0, [eax + 16]  // pixels 16..23
     movdqu     xmm1, [eax + esi + 16]
     lea        eax, [eax + 32]
     pavgb      xmm0, xmm1
@@ -598,16 +604,16 @@
 }
 
 // Note that movdqa+palign may be better than movdqu.
-__declspec(naked)
-void ScaleRowDown34_0_Box_SSSE3(const uint8* src_ptr,
-                                ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown34_0_Box_SSSE3(const uint8_t* src_ptr,
+                                                  ptrdiff_t src_stride,
+                                                  uint8_t* dst_ptr,
+                                                  int dst_width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]    // src_ptr
-    mov        esi, [esp + 4 + 8]    // src_stride
-    mov        edx, [esp + 4 + 12]   // dst_ptr
-    mov        ecx, [esp + 4 + 16]   // dst_width
+    mov        eax, [esp + 4 + 4]  // src_ptr
+    mov        esi, [esp + 4 + 8]  // src_stride
+    mov        edx, [esp + 4 + 12]  // dst_ptr
+    mov        ecx, [esp + 4 + 16]  // dst_width
     movdqa     xmm2, xmmword ptr kShuf01
     movdqa     xmm3, xmmword ptr kShuf11
     movdqa     xmm4, xmmword ptr kShuf21
@@ -616,7 +622,7 @@
     movdqa     xmm7, xmmword ptr kRound34
 
   wloop:
-    movdqu     xmm0, [eax]           // pixels 0..7
+    movdqu     xmm0, [eax]  // pixels 0..7
     movdqu     xmm1, [eax + esi]
     pavgb      xmm1, xmm0
     pavgb      xmm0, xmm1
@@ -626,7 +632,7 @@
     psrlw      xmm0, 2
     packuswb   xmm0, xmm0
     movq       qword ptr [edx], xmm0
-    movdqu     xmm0, [eax + 8]       // pixels 8..15
+    movdqu     xmm0, [eax + 8]  // pixels 8..15
     movdqu     xmm1, [eax + esi + 8]
     pavgb      xmm1, xmm0
     pavgb      xmm0, xmm1
@@ -636,7 +642,7 @@
     psrlw      xmm0, 2
     packuswb   xmm0, xmm0
     movq       qword ptr [edx + 8], xmm0
-    movdqu     xmm0, [eax + 16]      // pixels 16..23
+    movdqu     xmm0, [eax + 16]  // pixels 16..23
     movdqu     xmm1, [eax + esi + 16]
     lea        eax, [eax + 32]
     pavgb      xmm1, xmm0
@@ -660,26 +666,27 @@
 // 3/8 point sampler
 
 // Scale 32 pixels to 12
-__declspec(naked)
-void ScaleRowDown38_SSSE3(const uint8* src_ptr, ptrdiff_t src_stride,
-                          uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown38_SSSE3(const uint8_t* src_ptr,
+                                            ptrdiff_t src_stride,
+                                            uint8_t* dst_ptr,
+                                            int dst_width) {
   __asm {
-    mov        eax, [esp + 4]        // src_ptr
-                                     // src_stride ignored
-    mov        edx, [esp + 12]       // dst_ptr
-    mov        ecx, [esp + 16]       // dst_width
+    mov        eax, [esp + 4]  // src_ptr
+    // src_stride ignored
+    mov        edx, [esp + 12]  // dst_ptr
+    mov        ecx, [esp + 16]  // dst_width
     movdqa     xmm4, xmmword ptr kShuf38a
     movdqa     xmm5, xmmword ptr kShuf38b
 
   xloop:
-    movdqu     xmm0, [eax]           // 16 pixels -> 0,1,2,3,4,5
-    movdqu     xmm1, [eax + 16]      // 16 pixels -> 6,7,8,9,10,11
+    movdqu     xmm0, [eax]  // 16 pixels -> 0,1,2,3,4,5
+    movdqu     xmm1, [eax + 16]  // 16 pixels -> 6,7,8,9,10,11
     lea        eax, [eax + 32]
     pshufb     xmm0, xmm4
     pshufb     xmm1, xmm5
     paddusb    xmm0, xmm1
 
-    movq       qword ptr [edx], xmm0  // write 12 pixels
+    movq       qword ptr [edx], xmm0       // write 12 pixels
     movhlps    xmm1, xmm0
     movd       [edx + 8], xmm1
     lea        edx, [edx + 12]
@@ -691,23 +698,23 @@
 }
 
 // Scale 16x3 pixels to 6x1 with interpolation
-__declspec(naked)
-void ScaleRowDown38_3_Box_SSSE3(const uint8* src_ptr,
-                                ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown38_3_Box_SSSE3(const uint8_t* src_ptr,
+                                                  ptrdiff_t src_stride,
+                                                  uint8_t* dst_ptr,
+                                                  int dst_width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]    // src_ptr
-    mov        esi, [esp + 4 + 8]    // src_stride
-    mov        edx, [esp + 4 + 12]   // dst_ptr
-    mov        ecx, [esp + 4 + 16]   // dst_width
+    mov        eax, [esp + 4 + 4]  // src_ptr
+    mov        esi, [esp + 4 + 8]  // src_stride
+    mov        edx, [esp + 4 + 12]  // dst_ptr
+    mov        ecx, [esp + 4 + 16]  // dst_width
     movdqa     xmm2, xmmword ptr kShufAc
     movdqa     xmm3, xmmword ptr kShufAc3
     movdqa     xmm4, xmmword ptr kScaleAc33
     pxor       xmm5, xmm5
 
   xloop:
-    movdqu     xmm0, [eax]           // sum up 3 rows into xmm0/1
+    movdqu     xmm0, [eax]  // sum up 3 rows into xmm0/1
     movdqu     xmm6, [eax + esi]
     movhlps    xmm1, xmm0
     movhlps    xmm7, xmm6
@@ -725,14 +732,14 @@
     paddusw    xmm0, xmm6
     paddusw    xmm1, xmm7
 
-    movdqa     xmm6, xmm0            // 8 pixels -> 0,1,2 of xmm6
+    movdqa     xmm6, xmm0  // 8 pixels -> 0,1,2 of xmm6
     psrldq     xmm0, 2
     paddusw    xmm6, xmm0
     psrldq     xmm0, 2
     paddusw    xmm6, xmm0
     pshufb     xmm6, xmm2
 
-    movdqa     xmm7, xmm1            // 8 pixels -> 3,4,5 of xmm6
+    movdqa     xmm7, xmm1  // 8 pixels -> 3,4,5 of xmm6
     psrldq     xmm1, 2
     paddusw    xmm7, xmm1
     psrldq     xmm1, 2
@@ -740,10 +747,10 @@
     pshufb     xmm7, xmm3
     paddusw    xmm6, xmm7
 
-    pmulhuw    xmm6, xmm4            // divide by 9,9,6, 9,9,6
+    pmulhuw    xmm6, xmm4  // divide by 9,9,6, 9,9,6
     packuswb   xmm6, xmm6
 
-    movd       [edx], xmm6           // write 6 pixels
+    movd       [edx], xmm6  // write 6 pixels
     psrlq      xmm6, 16
     movd       [edx + 2], xmm6
     lea        edx, [edx + 6]
@@ -756,28 +763,28 @@
 }
 
 // Scale 16x2 pixels to 6x1 with interpolation
-__declspec(naked)
-void ScaleRowDown38_2_Box_SSSE3(const uint8* src_ptr,
-                                ptrdiff_t src_stride,
-                                uint8* dst_ptr, int dst_width) {
+__declspec(naked) void ScaleRowDown38_2_Box_SSSE3(const uint8_t* src_ptr,
+                                                  ptrdiff_t src_stride,
+                                                  uint8_t* dst_ptr,
+                                                  int dst_width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]    // src_ptr
-    mov        esi, [esp + 4 + 8]    // src_stride
-    mov        edx, [esp + 4 + 12]   // dst_ptr
-    mov        ecx, [esp + 4 + 16]   // dst_width
+    mov        eax, [esp + 4 + 4]  // src_ptr
+    mov        esi, [esp + 4 + 8]  // src_stride
+    mov        edx, [esp + 4 + 12]  // dst_ptr
+    mov        ecx, [esp + 4 + 16]  // dst_width
     movdqa     xmm2, xmmword ptr kShufAb0
     movdqa     xmm3, xmmword ptr kShufAb1
     movdqa     xmm4, xmmword ptr kShufAb2
     movdqa     xmm5, xmmword ptr kScaleAb2
 
   xloop:
-    movdqu     xmm0, [eax]           // average 2 rows into xmm0
+    movdqu     xmm0, [eax]  // average 2 rows into xmm0
     movdqu     xmm1, [eax + esi]
     lea        eax, [eax + 16]
     pavgb      xmm0, xmm1
 
-    movdqa     xmm1, xmm0            // 16 pixels -> 0,1,2,3,4,5 of xmm1
+    movdqa     xmm1, xmm0  // 16 pixels -> 0,1,2,3,4,5 of xmm1
     pshufb     xmm1, xmm2
     movdqa     xmm6, xmm0
     pshufb     xmm6, xmm3
@@ -785,10 +792,10 @@
     pshufb     xmm0, xmm4
     paddusw    xmm1, xmm0
 
-    pmulhuw    xmm1, xmm5            // divide by 3,3,2, 3,3,2
+    pmulhuw    xmm1, xmm5  // divide by 3,3,2, 3,3,2
     packuswb   xmm1, xmm1
 
-    movd       [edx], xmm1           // write 6 pixels
+    movd       [edx], xmm1  // write 6 pixels
     psrlq      xmm1, 16
     movd       [edx + 2], xmm1
     lea        edx, [edx + 6]
@@ -801,26 +808,27 @@
 }
 
 // Reads 16 bytes and accumulates to 16 shorts at a time.
-__declspec(naked)
-void ScaleAddRow_SSE2(const uint8* src_ptr, uint16* dst_ptr, int src_width) {
+__declspec(naked) void ScaleAddRow_SSE2(const uint8_t* src_ptr,
+                                        uint16_t* dst_ptr,
+                                        int src_width) {
   __asm {
-    mov        eax, [esp + 4]   // src_ptr
-    mov        edx, [esp + 8]   // dst_ptr
+    mov        eax, [esp + 4]  // src_ptr
+    mov        edx, [esp + 8]  // dst_ptr
     mov        ecx, [esp + 12]  // src_width
     pxor       xmm5, xmm5
 
-  // sum rows
+        // sum rows
   xloop:
-    movdqu     xmm3, [eax]       // read 16 bytes
+    movdqu     xmm3, [eax]  // read 16 bytes
     lea        eax, [eax + 16]
-    movdqu     xmm0, [edx]       // read 16 words from destination
+    movdqu     xmm0, [edx]  // read 16 words from destination
     movdqu     xmm1, [edx + 16]
     movdqa     xmm2, xmm3
     punpcklbw  xmm2, xmm5
     punpckhbw  xmm3, xmm5
-    paddusw    xmm0, xmm2        // sum 16 words
+    paddusw    xmm0, xmm2  // sum 16 words
     paddusw    xmm1, xmm3
-    movdqu     [edx], xmm0       // write 16 words to destination
+    movdqu     [edx], xmm0  // write 16 words to destination
     movdqu     [edx + 16], xmm1
     lea        edx, [edx + 32]
     sub        ecx, 16
@@ -831,24 +839,25 @@
 
 #ifdef HAS_SCALEADDROW_AVX2
 // Reads 32 bytes and accumulates to 32 shorts at a time.
-__declspec(naked)
-void ScaleAddRow_AVX2(const uint8* src_ptr, uint16* dst_ptr, int src_width) {
+__declspec(naked) void ScaleAddRow_AVX2(const uint8_t* src_ptr,
+                                        uint16_t* dst_ptr,
+                                        int src_width) {
   __asm {
-    mov         eax, [esp + 4]   // src_ptr
-    mov         edx, [esp + 8]   // dst_ptr
+    mov         eax, [esp + 4]  // src_ptr
+    mov         edx, [esp + 8]  // dst_ptr
     mov         ecx, [esp + 12]  // src_width
     vpxor       ymm5, ymm5, ymm5
 
-  // sum rows
+        // sum rows
   xloop:
-    vmovdqu     ymm3, [eax]       // read 32 bytes
+    vmovdqu     ymm3, [eax]  // read 32 bytes
     lea         eax, [eax + 32]
     vpermq      ymm3, ymm3, 0xd8  // unmutate for vpunpck
     vpunpcklbw  ymm2, ymm3, ymm5
     vpunpckhbw  ymm3, ymm3, ymm5
-    vpaddusw    ymm0, ymm2, [edx] // sum 16 words
+    vpaddusw    ymm0, ymm2, [edx]  // sum 16 words
     vpaddusw    ymm1, ymm3, [edx + 32]
-    vmovdqu     [edx], ymm0       // write 32 words to destination
+    vmovdqu     [edx], ymm0  // write 32 words to destination
     vmovdqu     [edx + 32], ymm1
     lea         edx, [edx + 64]
     sub         ecx, 32
@@ -862,86 +871,87 @@
 
 // Constant for making pixels signed to avoid pmaddubsw
 // saturation.
-static uvec8 kFsub80 =
-  { 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80,
-    0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80 };
+static const uvec8 kFsub80 = {0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80,
+                              0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80, 0x80};
 
 // Constant for making pixels unsigned and adding .5 for rounding.
-static uvec16 kFadd40 =
-  { 0x4040, 0x4040, 0x4040, 0x4040, 0x4040, 0x4040, 0x4040, 0x4040 };
+static const uvec16 kFadd40 = {0x4040, 0x4040, 0x4040, 0x4040,
+                               0x4040, 0x4040, 0x4040, 0x4040};
 
 // Bilinear column filtering. SSSE3 version.
-__declspec(naked)
-void ScaleFilterCols_SSSE3(uint8* dst_ptr, const uint8* src_ptr,
-                           int dst_width, int x, int dx) {
+__declspec(naked) void ScaleFilterCols_SSSE3(uint8_t* dst_ptr,
+                                             const uint8_t* src_ptr,
+                                             int dst_width,
+                                             int x,
+                                             int dx) {
   __asm {
     push       ebx
     push       esi
     push       edi
-    mov        edi, [esp + 12 + 4]    // dst_ptr
-    mov        esi, [esp + 12 + 8]    // src_ptr
-    mov        ecx, [esp + 12 + 12]   // dst_width
+    mov        edi, [esp + 12 + 4]  // dst_ptr
+    mov        esi, [esp + 12 + 8]  // src_ptr
+    mov        ecx, [esp + 12 + 12]  // dst_width
     movd       xmm2, [esp + 12 + 16]  // x
     movd       xmm3, [esp + 12 + 20]  // dx
-    mov        eax, 0x04040000      // shuffle to line up fractions with pixel.
+    mov        eax, 0x04040000  // shuffle to line up fractions with pixel.
     movd       xmm5, eax
-    pcmpeqb    xmm6, xmm6           // generate 0x007f for inverting fraction.
+    pcmpeqb    xmm6, xmm6  // generate 0x007f for inverting fraction.
     psrlw      xmm6, 9
-    pcmpeqb    xmm7, xmm7           // generate 0x0001
+    pcmpeqb    xmm7, xmm7  // generate 0x0001
     psrlw      xmm7, 15
-    pextrw     eax, xmm2, 1         // get x0 integer. preroll
+    pextrw     eax, xmm2, 1  // get x0 integer. preroll
     sub        ecx, 2
     jl         xloop29
 
-    movdqa     xmm0, xmm2           // x1 = x0 + dx
+    movdqa     xmm0, xmm2  // x1 = x0 + dx
     paddd      xmm0, xmm3
-    punpckldq  xmm2, xmm0           // x0 x1
-    punpckldq  xmm3, xmm3           // dx dx
-    paddd      xmm3, xmm3           // dx * 2, dx * 2
-    pextrw     edx, xmm2, 3         // get x1 integer. preroll
+    punpckldq  xmm2, xmm0  // x0 x1
+    punpckldq  xmm3, xmm3  // dx dx
+    paddd      xmm3, xmm3  // dx * 2, dx * 2
+    pextrw     edx, xmm2, 3  // get x1 integer. preroll
 
     // 2 Pixel loop.
   xloop2:
-    movdqa     xmm1, xmm2           // x0, x1 fractions.
-    paddd      xmm2, xmm3           // x += dx
+    movdqa     xmm1, xmm2  // x0, x1 fractions.
+    paddd      xmm2, xmm3  // x += dx
     movzx      ebx, word ptr [esi + eax]  // 2 source x0 pixels
     movd       xmm0, ebx
-    psrlw      xmm1, 9              // 7 bit fractions.
+    psrlw      xmm1, 9  // 7 bit fractions.
     movzx      ebx, word ptr [esi + edx]  // 2 source x1 pixels
     movd       xmm4, ebx
-    pshufb     xmm1, xmm5           // 0011
+    pshufb     xmm1, xmm5  // 0011
     punpcklwd  xmm0, xmm4
     psubb      xmm0, xmmword ptr kFsub80  // make pixels signed.
-    pxor       xmm1, xmm6           // 0..7f and 7f..0
-    paddusb    xmm1, xmm7           // +1 so 0..7f and 80..1
-    pmaddubsw  xmm1, xmm0           // 16 bit, 2 pixels.
-    pextrw     eax, xmm2, 1         // get x0 integer. next iteration.
-    pextrw     edx, xmm2, 3         // get x1 integer. next iteration.
+    pxor       xmm1, xmm6  // 0..7f and 7f..0
+    paddusb    xmm1, xmm7  // +1 so 0..7f and 80..1
+    pmaddubsw  xmm1, xmm0  // 16 bit, 2 pixels.
+    pextrw     eax, xmm2, 1  // get x0 integer. next iteration.
+    pextrw     edx, xmm2, 3  // get x1 integer. next iteration.
     paddw      xmm1, xmmword ptr kFadd40  // make pixels unsigned and round.
-    psrlw      xmm1, 7              // 8.7 fixed point to low 8 bits.
-    packuswb   xmm1, xmm1           // 8 bits, 2 pixels.
+    psrlw      xmm1, 7  // 8.7 fixed point to low 8 bits.
+    packuswb   xmm1, xmm1  // 8 bits, 2 pixels.
     movd       ebx, xmm1
     mov        [edi], bx
     lea        edi, [edi + 2]
-    sub        ecx, 2               // 2 pixels
+    sub        ecx, 2  // 2 pixels
     jge        xloop2
 
  xloop29:
     add        ecx, 2 - 1
     jl         xloop99
 
-    // 1 pixel remainder
+            // 1 pixel remainder
     movzx      ebx, word ptr [esi + eax]  // 2 source x0 pixels
     movd       xmm0, ebx
-    psrlw      xmm2, 9              // 7 bit fractions.
-    pshufb     xmm2, xmm5           // 0011
+    psrlw      xmm2, 9  // 7 bit fractions.
+    pshufb     xmm2, xmm5  // 0011
     psubb      xmm0, xmmword ptr kFsub80  // make pixels signed.
-    pxor       xmm2, xmm6           // 0..7f and 7f..0
-    paddusb    xmm2, xmm7           // +1 so 0..7f and 80..1
-    pmaddubsw  xmm2, xmm0           // 16 bit
+    pxor       xmm2, xmm6  // 0..7f and 7f..0
+    paddusb    xmm2, xmm7  // +1 so 0..7f and 80..1
+    pmaddubsw  xmm2, xmm0  // 16 bit
     paddw      xmm2, xmmword ptr kFadd40  // make pixels unsigned and round.
-    psrlw      xmm2, 7              // 8.7 fixed point to low 8 bits.
-    packuswb   xmm2, xmm2           // 8 bits
+    psrlw      xmm2, 7  // 8.7 fixed point to low 8 bits.
+    packuswb   xmm2, xmm2  // 8 bits
     movd       ebx, xmm2
     mov        [edi], bl
 
@@ -955,13 +965,15 @@
 }
 
 // Reads 16 pixels, duplicates them and writes 32 pixels.
-__declspec(naked)
-void ScaleColsUp2_SSE2(uint8* dst_ptr, const uint8* src_ptr,
-                       int dst_width, int x, int dx) {
+__declspec(naked) void ScaleColsUp2_SSE2(uint8_t* dst_ptr,
+                                         const uint8_t* src_ptr,
+                                         int dst_width,
+                                         int x,
+                                         int dx) {
   __asm {
-    mov        edx, [esp + 4]    // dst_ptr
-    mov        eax, [esp + 8]    // src_ptr
-    mov        ecx, [esp + 12]   // dst_width
+    mov        edx, [esp + 4]  // dst_ptr
+    mov        eax, [esp + 8]  // src_ptr
+    mov        ecx, [esp + 12]  // dst_width
 
   wloop:
     movdqu     xmm0, [eax]
@@ -980,15 +992,15 @@
 }
 
 // Reads 8 pixels, throws half away and writes 4 even pixels (0, 2, 4, 6)
-__declspec(naked)
-void ScaleARGBRowDown2_SSE2(const uint8* src_argb,
-                            ptrdiff_t src_stride,
-                            uint8* dst_argb, int dst_width) {
+__declspec(naked) void ScaleARGBRowDown2_SSE2(const uint8_t* src_argb,
+                                              ptrdiff_t src_stride,
+                                              uint8_t* dst_argb,
+                                              int dst_width) {
   __asm {
-    mov        eax, [esp + 4]        // src_argb
-                                     // src_stride ignored
-    mov        edx, [esp + 12]       // dst_argb
-    mov        ecx, [esp + 16]       // dst_width
+    mov        eax, [esp + 4]   // src_argb
+    // src_stride ignored
+    mov        edx, [esp + 12]  // dst_argb
+    mov        ecx, [esp + 16]  // dst_width
 
   wloop:
     movdqu     xmm0, [eax]
@@ -1005,23 +1017,23 @@
 }
 
 // Blends 8x1 rectangle to 4x1.
-__declspec(naked)
-void ScaleARGBRowDown2Linear_SSE2(const uint8* src_argb,
-                                  ptrdiff_t src_stride,
-                                  uint8* dst_argb, int dst_width) {
+__declspec(naked) void ScaleARGBRowDown2Linear_SSE2(const uint8_t* src_argb,
+                                                    ptrdiff_t src_stride,
+                                                    uint8_t* dst_argb,
+                                                    int dst_width) {
   __asm {
-    mov        eax, [esp + 4]        // src_argb
-                                     // src_stride ignored
-    mov        edx, [esp + 12]       // dst_argb
-    mov        ecx, [esp + 16]       // dst_width
+    mov        eax, [esp + 4]  // src_argb
+    // src_stride ignored
+    mov        edx, [esp + 12]  // dst_argb
+    mov        ecx, [esp + 16]  // dst_width
 
   wloop:
     movdqu     xmm0, [eax]
     movdqu     xmm1, [eax + 16]
     lea        eax,  [eax + 32]
     movdqa     xmm2, xmm0
-    shufps     xmm0, xmm1, 0x88      // even pixels
-    shufps     xmm2, xmm1, 0xdd      // odd pixels
+    shufps     xmm0, xmm1, 0x88  // even pixels
+    shufps     xmm2, xmm1, 0xdd  // odd pixels
     pavgb      xmm0, xmm2
     movdqu     [edx], xmm0
     lea        edx, [edx + 16]
@@ -1033,16 +1045,16 @@
 }
 
 // Blends 8x2 rectangle to 4x1.
-__declspec(naked)
-void ScaleARGBRowDown2Box_SSE2(const uint8* src_argb,
-                               ptrdiff_t src_stride,
-                               uint8* dst_argb, int dst_width) {
+__declspec(naked) void ScaleARGBRowDown2Box_SSE2(const uint8_t* src_argb,
+                                                 ptrdiff_t src_stride,
+                                                 uint8_t* dst_argb,
+                                                 int dst_width) {
   __asm {
     push       esi
-    mov        eax, [esp + 4 + 4]    // src_argb
-    mov        esi, [esp + 4 + 8]    // src_stride
-    mov        edx, [esp + 4 + 12]   // dst_argb
-    mov        ecx, [esp + 4 + 16]   // dst_width
+    mov        eax, [esp + 4 + 4]  // src_argb
+    mov        esi, [esp + 4 + 8]  // src_stride
+    mov        edx, [esp + 4 + 12]  // dst_argb
+    mov        ecx, [esp + 4 + 16]  // dst_width
 
   wloop:
     movdqu     xmm0, [eax]
@@ -1050,11 +1062,11 @@
     movdqu     xmm2, [eax + esi]
     movdqu     xmm3, [eax + esi + 16]
     lea        eax,  [eax + 32]
-    pavgb      xmm0, xmm2            // average rows
+    pavgb      xmm0, xmm2  // average rows
     pavgb      xmm1, xmm3
-    movdqa     xmm2, xmm0            // average columns (8 to 4 pixels)
-    shufps     xmm0, xmm1, 0x88      // even pixels
-    shufps     xmm2, xmm1, 0xdd      // odd pixels
+    movdqa     xmm2, xmm0  // average columns (8 to 4 pixels)
+    shufps     xmm0, xmm1, 0x88  // even pixels
+    shufps     xmm2, xmm1, 0xdd  // odd pixels
     pavgb      xmm0, xmm2
     movdqu     [edx], xmm0
     lea        edx, [edx + 16]
@@ -1067,18 +1079,19 @@
 }
 
 // Reads 4 pixels at a time.
-__declspec(naked)
-void ScaleARGBRowDownEven_SSE2(const uint8* src_argb, ptrdiff_t src_stride,
-                               int src_stepx,
-                               uint8* dst_argb, int dst_width) {
+__declspec(naked) void ScaleARGBRowDownEven_SSE2(const uint8_t* src_argb,
+                                                 ptrdiff_t src_stride,
+                                                 int src_stepx,
+                                                 uint8_t* dst_argb,
+                                                 int dst_width) {
   __asm {
     push       ebx
     push       edi
-    mov        eax, [esp + 8 + 4]    // src_argb
-                                     // src_stride ignored
-    mov        ebx, [esp + 8 + 12]   // src_stepx
-    mov        edx, [esp + 8 + 16]   // dst_argb
-    mov        ecx, [esp + 8 + 20]   // dst_width
+    mov        eax, [esp + 8 + 4]   // src_argb
+    // src_stride ignored
+    mov        ebx, [esp + 8 + 12]  // src_stepx
+    mov        edx, [esp + 8 + 16]  // dst_argb
+    mov        ecx, [esp + 8 + 20]  // dst_width
     lea        ebx, [ebx * 4]
     lea        edi, [ebx + ebx * 2]
 
@@ -1103,21 +1116,21 @@
 }
 
 // Blends four 2x2 to 4x1.
-__declspec(naked)
-void ScaleARGBRowDownEvenBox_SSE2(const uint8* src_argb,
-                                  ptrdiff_t src_stride,
-                                  int src_stepx,
-                                  uint8* dst_argb, int dst_width) {
+__declspec(naked) void ScaleARGBRowDownEvenBox_SSE2(const uint8_t* src_argb,
+                                                    ptrdiff_t src_stride,
+                                                    int src_stepx,
+                                                    uint8_t* dst_argb,
+                                                    int dst_width) {
   __asm {
     push       ebx
     push       esi
     push       edi
-    mov        eax, [esp + 12 + 4]    // src_argb
-    mov        esi, [esp + 12 + 8]    // src_stride
-    mov        ebx, [esp + 12 + 12]   // src_stepx
-    mov        edx, [esp + 12 + 16]   // dst_argb
-    mov        ecx, [esp + 12 + 20]   // dst_width
-    lea        esi, [eax + esi]       // row1 pointer
+    mov        eax, [esp + 12 + 4]  // src_argb
+    mov        esi, [esp + 12 + 8]  // src_stride
+    mov        ebx, [esp + 12 + 12]  // src_stepx
+    mov        edx, [esp + 12 + 16]  // dst_argb
+    mov        ecx, [esp + 12 + 20]  // dst_width
+    lea        esi, [eax + esi]  // row1 pointer
     lea        ebx, [ebx * 4]
     lea        edi, [ebx + ebx * 2]
 
@@ -1132,11 +1145,11 @@
     movq       xmm3, qword ptr [esi + ebx * 2]
     movhps     xmm3, qword ptr [esi + edi]
     lea        esi,  [esi + ebx * 4]
-    pavgb      xmm0, xmm2            // average rows
+    pavgb      xmm0, xmm2  // average rows
     pavgb      xmm1, xmm3
-    movdqa     xmm2, xmm0            // average columns (8 to 4 pixels)
-    shufps     xmm0, xmm1, 0x88      // even pixels
-    shufps     xmm2, xmm1, 0xdd      // odd pixels
+    movdqa     xmm2, xmm0  // average columns (8 to 4 pixels)
+    shufps     xmm0, xmm1, 0x88  // even pixels
+    shufps     xmm2, xmm1, 0xdd  // odd pixels
     pavgb      xmm0, xmm2
     movdqu     [edx], xmm0
     lea        edx, [edx + 16]
@@ -1151,64 +1164,66 @@
 }
 
 // Column scaling unfiltered. SSE2 version.
-__declspec(naked)
-void ScaleARGBCols_SSE2(uint8* dst_argb, const uint8* src_argb,
-                        int dst_width, int x, int dx) {
+__declspec(naked) void ScaleARGBCols_SSE2(uint8_t* dst_argb,
+                                          const uint8_t* src_argb,
+                                          int dst_width,
+                                          int x,
+                                          int dx) {
   __asm {
     push       edi
     push       esi
-    mov        edi, [esp + 8 + 4]    // dst_argb
-    mov        esi, [esp + 8 + 8]    // src_argb
-    mov        ecx, [esp + 8 + 12]   // dst_width
+    mov        edi, [esp + 8 + 4]  // dst_argb
+    mov        esi, [esp + 8 + 8]  // src_argb
+    mov        ecx, [esp + 8 + 12]  // dst_width
     movd       xmm2, [esp + 8 + 16]  // x
     movd       xmm3, [esp + 8 + 20]  // dx
 
-    pshufd     xmm2, xmm2, 0         // x0 x0 x0 x0
-    pshufd     xmm0, xmm3, 0x11      // dx  0 dx  0
+    pshufd     xmm2, xmm2, 0  // x0 x0 x0 x0
+    pshufd     xmm0, xmm3, 0x11  // dx  0 dx  0
     paddd      xmm2, xmm0
-    paddd      xmm3, xmm3            // 0, 0, 0,  dx * 2
-    pshufd     xmm0, xmm3, 0x05      // dx * 2, dx * 2, 0, 0
-    paddd      xmm2, xmm0            // x3 x2 x1 x0
-    paddd      xmm3, xmm3            // 0, 0, 0,  dx * 4
-    pshufd     xmm3, xmm3, 0         // dx * 4, dx * 4, dx * 4, dx * 4
+    paddd      xmm3, xmm3  // 0, 0, 0,  dx * 2
+    pshufd     xmm0, xmm3, 0x05  // dx * 2, dx * 2, 0, 0
+    paddd      xmm2, xmm0  // x3 x2 x1 x0
+    paddd      xmm3, xmm3  // 0, 0, 0,  dx * 4
+    pshufd     xmm3, xmm3, 0  // dx * 4, dx * 4, dx * 4, dx * 4
 
-    pextrw     eax, xmm2, 1          // get x0 integer.
-    pextrw     edx, xmm2, 3          // get x1 integer.
+    pextrw     eax, xmm2, 1  // get x0 integer.
+    pextrw     edx, xmm2, 3  // get x1 integer.
 
     cmp        ecx, 0
     jle        xloop99
     sub        ecx, 4
     jl         xloop49
 
-    // 4 Pixel loop.
+        // 4 Pixel loop.
  xloop4:
     movd       xmm0, [esi + eax * 4]  // 1 source x0 pixels
     movd       xmm1, [esi + edx * 4]  // 1 source x1 pixels
-    pextrw     eax, xmm2, 5           // get x2 integer.
-    pextrw     edx, xmm2, 7           // get x3 integer.
-    paddd      xmm2, xmm3             // x += dx
-    punpckldq  xmm0, xmm1             // x0 x1
+    pextrw     eax, xmm2, 5  // get x2 integer.
+    pextrw     edx, xmm2, 7  // get x3 integer.
+    paddd      xmm2, xmm3  // x += dx
+    punpckldq  xmm0, xmm1  // x0 x1
 
     movd       xmm1, [esi + eax * 4]  // 1 source x2 pixels
     movd       xmm4, [esi + edx * 4]  // 1 source x3 pixels
-    pextrw     eax, xmm2, 1           // get x0 integer. next iteration.
-    pextrw     edx, xmm2, 3           // get x1 integer. next iteration.
-    punpckldq  xmm1, xmm4             // x2 x3
-    punpcklqdq xmm0, xmm1             // x0 x1 x2 x3
+    pextrw     eax, xmm2, 1  // get x0 integer. next iteration.
+    pextrw     edx, xmm2, 3  // get x1 integer. next iteration.
+    punpckldq  xmm1, xmm4  // x2 x3
+    punpcklqdq xmm0, xmm1  // x0 x1 x2 x3
     movdqu     [edi], xmm0
     lea        edi, [edi + 16]
-    sub        ecx, 4                 // 4 pixels
+    sub        ecx, 4  // 4 pixels
     jge        xloop4
 
  xloop49:
     test       ecx, 2
     je         xloop29
 
-    // 2 Pixels.
+        // 2 Pixels.
     movd       xmm0, [esi + eax * 4]  // 1 source x0 pixels
     movd       xmm1, [esi + edx * 4]  // 1 source x1 pixels
-    pextrw     eax, xmm2, 5           // get x2 integer.
-    punpckldq  xmm0, xmm1             // x0 x1
+    pextrw     eax, xmm2, 5  // get x2 integer.
+    punpckldq  xmm0, xmm1  // x0 x1
 
     movq       qword ptr [edi], xmm0
     lea        edi, [edi + 8]
@@ -1217,7 +1232,7 @@
     test       ecx, 1
     je         xloop99
 
-    // 1 Pixels.
+        // 1 Pixels.
     movd       xmm0, [esi + eax * 4]  // 1 source x2 pixels
     movd       dword ptr [edi], xmm0
  xloop99:
@@ -1232,60 +1247,62 @@
 // TODO(fbarchard): Port to Neon
 
 // Shuffle table for arranging 2 pixels into pairs for pmaddubsw
-static uvec8 kShuffleColARGB = {
-  0u, 4u, 1u, 5u, 2u, 6u, 3u, 7u,  // bbggrraa 1st pixel
-  8u, 12u, 9u, 13u, 10u, 14u, 11u, 15u  // bbggrraa 2nd pixel
+static const uvec8 kShuffleColARGB = {
+    0u, 4u,  1u, 5u,  2u,  6u,  3u,  7u,  // bbggrraa 1st pixel
+    8u, 12u, 9u, 13u, 10u, 14u, 11u, 15u  // bbggrraa 2nd pixel
 };
 
 // Shuffle table for duplicating 2 fractions into 8 bytes each
-static uvec8 kShuffleFractions = {
-  0u, 0u, 0u, 0u, 0u, 0u, 0u, 0u, 4u, 4u, 4u, 4u, 4u, 4u, 4u, 4u,
+static const uvec8 kShuffleFractions = {
+    0u, 0u, 0u, 0u, 0u, 0u, 0u, 0u, 4u, 4u, 4u, 4u, 4u, 4u, 4u, 4u,
 };
 
-__declspec(naked)
-void ScaleARGBFilterCols_SSSE3(uint8* dst_argb, const uint8* src_argb,
-                               int dst_width, int x, int dx) {
+__declspec(naked) void ScaleARGBFilterCols_SSSE3(uint8_t* dst_argb,
+                                                 const uint8_t* src_argb,
+                                                 int dst_width,
+                                                 int x,
+                                                 int dx) {
   __asm {
     push       esi
     push       edi
-    mov        edi, [esp + 8 + 4]    // dst_argb
-    mov        esi, [esp + 8 + 8]    // src_argb
-    mov        ecx, [esp + 8 + 12]   // dst_width
+    mov        edi, [esp + 8 + 4]  // dst_argb
+    mov        esi, [esp + 8 + 8]  // src_argb
+    mov        ecx, [esp + 8 + 12]  // dst_width
     movd       xmm2, [esp + 8 + 16]  // x
     movd       xmm3, [esp + 8 + 20]  // dx
     movdqa     xmm4, xmmword ptr kShuffleColARGB
     movdqa     xmm5, xmmword ptr kShuffleFractions
-    pcmpeqb    xmm6, xmm6           // generate 0x007f for inverting fraction.
+    pcmpeqb    xmm6, xmm6  // generate 0x007f for inverting fraction.
     psrlw      xmm6, 9
-    pextrw     eax, xmm2, 1         // get x0 integer. preroll
+    pextrw     eax, xmm2, 1  // get x0 integer. preroll
     sub        ecx, 2
     jl         xloop29
 
-    movdqa     xmm0, xmm2           // x1 = x0 + dx
+    movdqa     xmm0, xmm2  // x1 = x0 + dx
     paddd      xmm0, xmm3
-    punpckldq  xmm2, xmm0           // x0 x1
-    punpckldq  xmm3, xmm3           // dx dx
-    paddd      xmm3, xmm3           // dx * 2, dx * 2
-    pextrw     edx, xmm2, 3         // get x1 integer. preroll
+    punpckldq  xmm2, xmm0  // x0 x1
+    punpckldq  xmm3, xmm3  // dx dx
+    paddd      xmm3, xmm3  // dx * 2, dx * 2
+    pextrw     edx, xmm2, 3  // get x1 integer. preroll
 
     // 2 Pixel loop.
   xloop2:
-    movdqa     xmm1, xmm2           // x0, x1 fractions.
-    paddd      xmm2, xmm3           // x += dx
+    movdqa     xmm1, xmm2  // x0, x1 fractions.
+    paddd      xmm2, xmm3  // x += dx
     movq       xmm0, qword ptr [esi + eax * 4]  // 2 source x0 pixels
-    psrlw      xmm1, 9              // 7 bit fractions.
+    psrlw      xmm1, 9  // 7 bit fractions.
     movhps     xmm0, qword ptr [esi + edx * 4]  // 2 source x1 pixels
-    pshufb     xmm1, xmm5           // 0000000011111111
-    pshufb     xmm0, xmm4           // arrange pixels into pairs
-    pxor       xmm1, xmm6           // 0..7f and 7f..0
-    pmaddubsw  xmm0, xmm1           // argb_argb 16 bit, 2 pixels.
-    pextrw     eax, xmm2, 1         // get x0 integer. next iteration.
-    pextrw     edx, xmm2, 3         // get x1 integer. next iteration.
-    psrlw      xmm0, 7              // argb 8.7 fixed point to low 8 bits.
-    packuswb   xmm0, xmm0           // argb_argb 8 bits, 2 pixels.
+    pshufb     xmm1, xmm5  // 0000000011111111
+    pshufb     xmm0, xmm4  // arrange pixels into pairs
+    pxor       xmm1, xmm6  // 0..7f and 7f..0
+    pmaddubsw  xmm0, xmm1  // argb_argb 16 bit, 2 pixels.
+    pextrw     eax, xmm2, 1  // get x0 integer. next iteration.
+    pextrw     edx, xmm2, 3  // get x1 integer. next iteration.
+    psrlw      xmm0, 7  // argb 8.7 fixed point to low 8 bits.
+    packuswb   xmm0, xmm0  // argb_argb 8 bits, 2 pixels.
     movq       qword ptr [edi], xmm0
     lea        edi, [edi + 8]
-    sub        ecx, 2               // 2 pixels
+    sub        ecx, 2  // 2 pixels
     jge        xloop2
 
  xloop29:
@@ -1293,15 +1310,15 @@
     add        ecx, 2 - 1
     jl         xloop99
 
-    // 1 pixel remainder
-    psrlw      xmm2, 9              // 7 bit fractions.
+            // 1 pixel remainder
+    psrlw      xmm2, 9  // 7 bit fractions.
     movq       xmm0, qword ptr [esi + eax * 4]  // 2 source x0 pixels
-    pshufb     xmm2, xmm5           // 00000000
-    pshufb     xmm0, xmm4           // arrange pixels into pairs
-    pxor       xmm2, xmm6           // 0..7f and 7f..0
-    pmaddubsw  xmm0, xmm2           // argb 16 bit, 1 pixel.
+    pshufb     xmm2, xmm5  // 00000000
+    pshufb     xmm0, xmm4  // arrange pixels into pairs
+    pxor       xmm2, xmm6  // 0..7f and 7f..0
+    pmaddubsw  xmm0, xmm2  // argb 16 bit, 1 pixel.
     psrlw      xmm0, 7
-    packuswb   xmm0, xmm0           // argb 8 bits, 1 pixel.
+    packuswb   xmm0, xmm0  // argb 8 bits, 1 pixel.
     movd       [edi], xmm0
 
  xloop99:
@@ -1313,13 +1330,15 @@
 }
 
 // Reads 4 pixels, duplicates them and writes 8 pixels.
-__declspec(naked)
-void ScaleARGBColsUp2_SSE2(uint8* dst_argb, const uint8* src_argb,
-                           int dst_width, int x, int dx) {
+__declspec(naked) void ScaleARGBColsUp2_SSE2(uint8_t* dst_argb,
+                                             const uint8_t* src_argb,
+                                             int dst_width,
+                                             int x,
+                                             int dx) {
   __asm {
-    mov        edx, [esp + 4]    // dst_argb
-    mov        eax, [esp + 8]    // src_argb
-    mov        ecx, [esp + 12]   // dst_width
+    mov        edx, [esp + 4]  // dst_argb
+    mov        eax, [esp + 8]  // src_argb
+    mov        ecx, [esp + 12]  // dst_width
 
   wloop:
     movdqu     xmm0, [eax]
@@ -1338,12 +1357,11 @@
 }
 
 // Divide num by div and return as 16.16 fixed point result.
-__declspec(naked)
-int FixedDiv_X86(int num, int div) {
+__declspec(naked) int FixedDiv_X86(int num, int div) {
   __asm {
-    mov        eax, [esp + 4]    // num
-    cdq                          // extend num to 64 bits
-    shld       edx, eax, 16      // 32.16
+    mov        eax, [esp + 4]  // num
+    cdq  // extend num to 64 bits
+    shld       edx, eax, 16  // 32.16
     shl        eax, 16
     idiv       dword ptr [esp + 8]
     ret
@@ -1351,13 +1369,12 @@
 }
 
 // Divide num by div and return as 16.16 fixed point result.
-__declspec(naked)
-int FixedDiv1_X86(int num, int div) {
+__declspec(naked) int FixedDiv1_X86(int num, int div) {
   __asm {
-    mov        eax, [esp + 4]    // num
-    mov        ecx, [esp + 8]    // denom
-    cdq                          // extend num to 64 bits
-    shld       edx, eax, 16      // 32.16
+    mov        eax, [esp + 4]  // num
+    mov        ecx, [esp + 8]  // denom
+    cdq  // extend num to 64 bits
+    shld       edx, eax, 16  // 32.16
     shl        eax, 16
     sub        eax, 0x00010001
     sbb        edx, 0
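The 16.16 fixed-point division that FixedDiv_X86 performs in inline assembly (cdq/shld widen num, shl forms the fraction, idiv divides) can be sketched as portable C. This is an illustrative equivalent for reading the hunk above, not the shipped implementation; the name FixedDiv_C is hypothetical here:

```c
#include <stdint.h>

/* Illustrative C equivalent of the FixedDiv_X86 assembly:
   widen num to 64 bits, shift left 16 to append a zero
   fractional part, then divide to get a 16.16 result. */
static int FixedDiv_C(int num, int div) {
  return (int)(((int64_t)num << 16) / div);
}
```

For example, FixedDiv_C(1, 2) yields 0x8000, i.e. 0.5 in 16.16 fixed point, matching what the shld/shl/idiv sequence computes.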
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/video_common.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/video_common.cc
index 00fb71e..92384c0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/video_common.cc
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/libyuv/source/video_common.cc
@@ -8,7 +8,6 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-
 #include "libyuv/video_common.h"
 
 #ifdef __cplusplus
@@ -16,40 +15,39 @@
 extern "C" {
 #endif
 
-#define ARRAY_SIZE(x) (int)(sizeof(x) / sizeof(x[0]))
-
 struct FourCCAliasEntry {
-  uint32 alias;
-  uint32 canonical;
+  uint32_t alias;
+  uint32_t canonical;
 };
 
-static const struct FourCCAliasEntry kFourCCAliases[] = {
-  {FOURCC_IYUV, FOURCC_I420},
-  {FOURCC_YU12, FOURCC_I420},
-  {FOURCC_YU16, FOURCC_I422},
-  {FOURCC_YU24, FOURCC_I444},
-  {FOURCC_YUYV, FOURCC_YUY2},
-  {FOURCC_YUVS, FOURCC_YUY2},  // kCMPixelFormat_422YpCbCr8_yuvs
-  {FOURCC_HDYC, FOURCC_UYVY},
-  {FOURCC_2VUY, FOURCC_UYVY},  // kCMPixelFormat_422YpCbCr8
-  {FOURCC_JPEG, FOURCC_MJPG},  // Note: JPEG has DHT while MJPG does not.
-  {FOURCC_DMB1, FOURCC_MJPG},
-  {FOURCC_BA81, FOURCC_BGGR},  // deprecated.
-  {FOURCC_RGB3, FOURCC_RAW },
-  {FOURCC_BGR3, FOURCC_24BG},
-  {FOURCC_CM32, FOURCC_BGRA},  // kCMPixelFormat_32ARGB
-  {FOURCC_CM24, FOURCC_RAW },  // kCMPixelFormat_24RGB
-  {FOURCC_L555, FOURCC_RGBO},  // kCMPixelFormat_16LE555
-  {FOURCC_L565, FOURCC_RGBP},  // kCMPixelFormat_16LE565
-  {FOURCC_5551, FOURCC_RGBO},  // kCMPixelFormat_16LE5551
+#define NUM_ALIASES 18
+static const struct FourCCAliasEntry kFourCCAliases[NUM_ALIASES] = {
+    {FOURCC_IYUV, FOURCC_I420},
+    {FOURCC_YU12, FOURCC_I420},
+    {FOURCC_YU16, FOURCC_I422},
+    {FOURCC_YU24, FOURCC_I444},
+    {FOURCC_YUYV, FOURCC_YUY2},
+    {FOURCC_YUVS, FOURCC_YUY2},  // kCMPixelFormat_422YpCbCr8_yuvs
+    {FOURCC_HDYC, FOURCC_UYVY},
+    {FOURCC_2VUY, FOURCC_UYVY},  // kCMPixelFormat_422YpCbCr8
+    {FOURCC_JPEG, FOURCC_MJPG},  // Note: JPEG has DHT while MJPG does not.
+    {FOURCC_DMB1, FOURCC_MJPG},
+    {FOURCC_BA81, FOURCC_BGGR},  // deprecated.
+    {FOURCC_RGB3, FOURCC_RAW},
+    {FOURCC_BGR3, FOURCC_24BG},
+    {FOURCC_CM32, FOURCC_BGRA},  // kCMPixelFormat_32ARGB
+    {FOURCC_CM24, FOURCC_RAW},   // kCMPixelFormat_24RGB
+    {FOURCC_L555, FOURCC_RGBO},  // kCMPixelFormat_16LE555
+    {FOURCC_L565, FOURCC_RGBP},  // kCMPixelFormat_16LE565
+    {FOURCC_5551, FOURCC_RGBO},  // kCMPixelFormat_16LE5551
 };
 // TODO(fbarchard): Consider mapping kCMPixelFormat_32BGRA to FOURCC_ARGB.
 //  {FOURCC_BGRA, FOURCC_ARGB},  // kCMPixelFormat_32BGRA
 
 LIBYUV_API
-uint32 CanonicalFourCC(uint32 fourcc) {
+uint32_t CanonicalFourCC(uint32_t fourcc) {
   int i;
-  for (i = 0; i < ARRAY_SIZE(kFourCCAliases); ++i) {
+  for (i = 0; i < NUM_ALIASES; ++i) {
     if (kFourCCAliases[i].alias == fourcc) {
       return kFourCCAliases[i].canonical;
     }
@@ -62,4 +60,3 @@
 }  // extern "C"
 }  // namespace libyuv
 #endif
-
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/x86inc/README.libvpx b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/x86inc/README.libvpx
index 8d3cd96..36735ff 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/x86inc/README.libvpx
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/x86inc/README.libvpx
@@ -18,3 +18,4 @@
 Use .text instead of .rodata on macho to avoid broken tables in PIC mode.
 Use .text with no alignment for aout
 Only use 'hidden' visibility with Chromium
+Prefix ARCH_* with VPX_.
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/x86inc/x86inc.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/x86inc/x86inc.asm
index b647dff..3d722fe 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/x86inc/x86inc.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/third_party/x86inc/x86inc.asm
@@ -45,7 +45,7 @@
 %endif
 
 %ifndef STACK_ALIGNMENT
-    %if ARCH_X86_64
+    %if VPX_ARCH_X86_64
         %define STACK_ALIGNMENT 16
     %else
         %define STACK_ALIGNMENT 4
@@ -54,7 +54,7 @@
 
 %define WIN64  0
 %define UNIX64 0
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
     %ifidn __OUTPUT_FORMAT__,win32
         %define WIN64  1
     %elifidn __OUTPUT_FORMAT__,win64
@@ -165,7 +165,7 @@
         %endif
     %endif
 
-    %if ARCH_X86_64 == 0
+    %if VPX_ARCH_X86_64 == 0
         %undef PIC
     %endif
 
@@ -260,7 +260,7 @@
     %if %0 == 2
         %define r%1m  %2d
         %define r%1mp %2
-    %elif ARCH_X86_64 ; memory
+    %elif VPX_ARCH_X86_64 ; memory
         %define r%1m [rstk + stack_offset + %3]
         %define r%1mp qword r %+ %1 %+ m
     %else
@@ -281,7 +281,7 @@
     %define e%1h %3
     %define r%1b %2
     %define e%1b %2
-    %if ARCH_X86_64 == 0
+    %if VPX_ARCH_X86_64 == 0
         %define r%1 e%1
     %endif
 %endmacro
@@ -318,7 +318,7 @@
 
 DECLARE_REG_TMP_SIZE 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14
 
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
     %define gprsize 8
 %else
     %define gprsize 4
@@ -485,7 +485,7 @@
             %if %1 > 0
                 %assign regs_used (regs_used + 1)
             %endif
-            %if ARCH_X86_64 && regs_used < 5 + UNIX64 * 3
+            %if VPX_ARCH_X86_64 && regs_used < 5 + UNIX64 * 3
                 ; Ensure that we don't clobber any registers containing arguments
                 %assign regs_used 5 + UNIX64 * 3
             %endif
@@ -607,7 +607,7 @@
     AUTO_REP_RET
 %endmacro
 
-%elif ARCH_X86_64 ; *nix x64 ;=============================================
+%elif VPX_ARCH_X86_64 ; *nix x64 ;=============================================
 
 DECLARE_REG 0,  rdi
 DECLARE_REG 1,  rsi
@@ -948,7 +948,7 @@
         %endif
     %endif
 
-    %if ARCH_X86_64 || cpuflag(sse2)
+    %if VPX_ARCH_X86_64 || cpuflag(sse2)
         %ifdef __NASM_VER__
             ALIGNMODE k8
         %else
@@ -1005,7 +1005,7 @@
     %define RESET_MM_PERMUTATION INIT_XMM %1
     %define mmsize 16
     %define num_mmregs 8
-    %if ARCH_X86_64
+    %if VPX_ARCH_X86_64
         %define num_mmregs 16
     %endif
     %define mova movdqa
@@ -1026,7 +1026,7 @@
     %define RESET_MM_PERMUTATION INIT_YMM %1
     %define mmsize 32
     %define num_mmregs 8
-    %if ARCH_X86_64
+    %if VPX_ARCH_X86_64
         %define num_mmregs 16
     %endif
     %define mova movdqa
@@ -1637,7 +1637,7 @@
 
 ; workaround: vpbroadcastq is broken in x86_32 due to a yasm bug (fixed in 1.3.0)
 %ifdef __YASM_VER__
-    %if __YASM_VERSION_ID__ < 0x01030000 && ARCH_X86_64 == 0
+    %if __YASM_VERSION_ID__ < 0x01030000 && VPX_ARCH_X86_64 == 0
         %macro vpbroadcastq 2
             %if sizeof%1 == 16
                 movddup %1, %2
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/Anandan.py b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/Anandan.py
new file mode 100644
index 0000000..b96cd9f
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/Anandan.py
@@ -0,0 +1,185 @@
+#!/usr/bin/env python
+# coding: utf-8
+import numpy as np
+import numpy.linalg as LA
+from scipy.ndimage.filters import gaussian_filter
+from scipy.sparse import csc_matrix
+from scipy.sparse.linalg import inv
+from MotionEST import MotionEST
+"""Anandan Model"""
+
+
+class Anandan(MotionEST):
+  """
+    constructor:
+        cur_f: current frame
+        ref_f: reference frame
+        blk_sz: block size
+        beta: smoothness constraint weight
+        k1,k2,k3: confidence coefficients
+        max_iter: maximum number of iterations
+    """
+
+  def __init__(self, cur_f, ref_f, blk_sz, beta, k1, k2, k3, max_iter=100):
+    super(Anandan, self).__init__(cur_f, ref_f, blk_sz)
+    self.levels = int(np.log2(blk_sz))
+    self.intensity_hierarchy()
+    self.c_maxs = []
+    self.c_mins = []
+    self.e_maxs = []
+    self.e_mins = []
+    for l in xrange(self.levels + 1):
+      c_max, c_min, e_max, e_min = self.get_curvature(self.cur_Is[l])
+      self.c_maxs.append(c_max)
+      self.c_mins.append(c_min)
+      self.e_maxs.append(e_max)
+      self.e_mins.append(e_min)
+    self.beta = beta
+    self.k1, self.k2, self.k3 = k1, k2, k3
+    self.max_iter = max_iter
+
+  """
+    build intensity hierarchy
+    """
+
+  def intensity_hierarchy(self):
+    level = 0
+    self.cur_Is = []
+    self.ref_Is = []
+    #build each level's intensity using gaussian filters
+    while level <= self.levels:
+      cur_I = gaussian_filter(self.cur_yuv[:, :, 0], sigma=(2**level) * 0.56)
+      ref_I = gaussian_filter(self.ref_yuv[:, :, 0], sigma=(2**level) * 0.56)
+      self.ref_Is.append(ref_I)
+      self.cur_Is.append(cur_I)
+      level += 1
+
+  """
+    get curvature of each block
+    """
+
+  def get_curvature(self, I):
+    c_max = np.zeros((self.num_row, self.num_col))
+    c_min = np.zeros((self.num_row, self.num_col))
+    e_max = np.zeros((self.num_row, self.num_col, 2))
+    e_min = np.zeros((self.num_row, self.num_col, 2))
+    for r in xrange(self.num_row):
+      for c in xrange(self.num_col):
+        h11, h12, h21, h22 = 0, 0, 0, 0
+        for i in xrange(r * self.blk_sz, r * self.blk_sz + self.blk_sz):
+          for j in xrange(c * self.blk_sz, c * self.blk_sz + self.blk_sz):
+            if 0 <= i < self.height - 1 and 0 <= j < self.width - 1:
+              Ix = I[i][j + 1] - I[i][j]
+              Iy = I[i + 1][j] - I[i][j]
+              h11 += Iy * Iy
+              h12 += Ix * Iy
+              h21 += Ix * Iy
+              h22 += Ix * Ix
+        U, S, _ = LA.svd(np.array([[h11, h12], [h21, h22]]))
+        c_max[r, c], c_min[r, c] = S[0], S[1]
+        e_max[r, c] = U[:, 0]
+        e_min[r, c] = U[:, 1]
+    return c_max, c_min, e_max, e_min
+
+  """
+    get ssd of motion vector:
+      cur_I: current intensity
+      ref_I: reference intensity
+      center: current position
+      mv: motion vector
+    """
+
+  def get_ssd(self, cur_I, ref_I, center, mv):
+    ssd = 0
+    for r in xrange(int(center[0]), int(center[0]) + self.blk_sz):
+      for c in xrange(int(center[1]), int(center[1]) + self.blk_sz):
+        if 0 <= r < self.height and 0 <= c < self.width:
+          tr, tc = r + int(mv[0]), c + int(mv[1])
+          if 0 <= tr < self.height and 0 <= tc < self.width:
+            ssd += (ref_I[tr, tc] - cur_I[r, c])**2
+          else:
+            ssd += cur_I[r, c]**2
+    return ssd
+
+  """
+    get region match of level l
+      l: current level
+      last_mvs: matching results of the last level
+      radius: movement radius
+    """
+
+  def region_match(self, l, last_mvs, radius):
+    mvs = np.zeros((self.num_row, self.num_col, 2))
+    min_ssds = np.zeros((self.num_row, self.num_col))
+    for r in xrange(self.num_row):
+      for c in xrange(self.num_col):
+        center = np.array([r * self.blk_sz, c * self.blk_sz])
+        #use overlap hierarchy policy
+        init_mvs = []
+        if last_mvs is None:
+          init_mvs = [np.array([0, 0])]
+        else:
+          for i, j in {(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)}:
+            if 0 <= i < last_mvs.shape[0] and 0 <= j < last_mvs.shape[1]:
+              init_mvs.append(last_mvs[i, j])
+        #use the last level's matching results as the start positions at the current level
+        min_ssd = None
+        min_mv = None
+        for init_mv in init_mvs:
+          for i in xrange(-2, 3):
+            for j in xrange(-2, 3):
+              mv = init_mv + np.array([i, j]) * radius
+              ssd = self.get_ssd(self.cur_Is[l], self.ref_Is[l], center, mv)
+              if min_ssd is None or ssd < min_ssd:
+                min_ssd = ssd
+                min_mv = mv
+        min_ssds[r, c] = min_ssd
+        mvs[r, c] = min_mv
+    return mvs, min_ssds
+
+  """
+    smooth motion field based on neighbor constraint
+      uvs: current estimation
+      mvs: matching results
+      min_ssds: minimum ssd of matching results
+      l: current level
+    """
+
+  def smooth(self, uvs, mvs, min_ssds, l):
+    sm_uvs = np.zeros((self.num_row, self.num_col, 2))
+    c_max = self.c_maxs[l]
+    c_min = self.c_mins[l]
+    e_max = self.e_maxs[l]
+    e_min = self.e_mins[l]
+    for r in xrange(self.num_row):
+      for c in xrange(self.num_col):
+        w_max = c_max[r, c] / (
+            self.k1 + self.k2 * min_ssds[r, c] + self.k3 * c_max[r, c])
+        w_min = c_min[r, c] / (
+            self.k1 + self.k2 * min_ssds[r, c] + self.k3 * c_min[r, c])
+        w = w_max * w_min / (w_max + w_min + 1e-6)
+        if w < 0:
+          w = 0
+        avg_uv = np.array([0.0, 0.0])
+        for i, j in {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}:
+          if 0 <= i < self.num_row and 0 <= j < self.num_col:
+            avg_uv += 0.25 * uvs[i, j]
+        sm_uvs[r, c] = (w * w * mvs[r, c] + self.beta * avg_uv) / (
+            self.beta + w * w)
+    return sm_uvs
+
+  """
+    motion field estimation
+    """
+
+  def motion_field_estimation(self):
+    last_mvs = None
+    for l in xrange(self.levels, -1, -1):
+      mvs, min_ssds = self.region_match(l, last_mvs, 2**l)
+      uvs = np.zeros(mvs.shape)
+      for _ in xrange(self.max_iter):
+        uvs = self.smooth(uvs, mvs, min_ssds, l)
+      last_mvs = uvs
+    for r in xrange(self.num_row):
+      for c in xrange(self.num_col):
+        self.mf[r, c] = uvs[r, c]
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/Exhaust.py b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/Exhaust.py
new file mode 100644
index 0000000..97cfc41
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/Exhaust.py
@@ -0,0 +1,251 @@
+#!/usr/bin/env python
+# coding: utf-8
+import numpy as np
+import numpy.linalg as LA
+from Util import MSE
+from MotionEST import MotionEST
+"""Exhaust Search:"""
+
+
+class Exhaust(MotionEST):
+  """
+    Constructor:
+        cur_f: current frame
+        ref_f: reference frame
+        blk_sz: block size
+        wnd_size: search window size
+        metric: metric to compare the blocks' distortion
+    """
+
+  def __init__(self, cur_f, ref_f, blk_size, wnd_size, metric=MSE):
+    self.name = 'exhaust'
+    self.wnd_sz = wnd_size
+    self.metric = metric
+    super(Exhaust, self).__init__(cur_f, ref_f, blk_size)
+
+  """
+    search method:
+        cur_r: start row
+        cur_c: start column
+    """
+
+  def search(self, cur_r, cur_c):
+    min_loss = self.block_dist(cur_r, cur_c, [0, 0], self.metric)
+    cur_x = cur_c * self.blk_sz
+    cur_y = cur_r * self.blk_sz
+    ref_x = cur_x
+    ref_y = cur_y
+    #search all valid positions and select the one with minimum distortion
+    for y in xrange(cur_y - self.wnd_sz, cur_y + self.wnd_sz):
+      for x in xrange(cur_x - self.wnd_sz, cur_x + self.wnd_sz):
+        if 0 <= x < self.width - self.blk_sz and 0 <= y < self.height - self.blk_sz:
+          loss = self.block_dist(cur_r, cur_c, [y - cur_y, x - cur_x],
+                                 self.metric)
+          if loss < min_loss:
+            min_loss = loss
+            ref_x = x
+            ref_y = y
+    return ref_x, ref_y
+
+  def motion_field_estimation(self):
+    for i in xrange(self.num_row):
+      for j in xrange(self.num_col):
+        ref_x, ref_y = self.search(i, j)
+        self.mf[i, j] = np.array(
+            [ref_y - i * self.blk_sz, ref_x - j * self.blk_sz])
+
+
+"""Exhaust with Neighbor Constraint"""
+
+
+class ExhaustNeighbor(MotionEST):
+  """
+    Constructor:
+        cur_f: current frame
+        ref_f: reference frame
+        blk_sz: block size
+        wnd_size: search window size
+        beta: neighbor loss weight
+        metric: metric to compare the blocks' distortion
+    """
+
+  def __init__(self, cur_f, ref_f, blk_size, wnd_size, beta, metric=MSE):
+    self.name = 'exhaust + neighbor'
+    self.wnd_sz = wnd_size
+    self.beta = beta
+    self.metric = metric
+    super(ExhaustNeighbor, self).__init__(cur_f, ref_f, blk_size)
+    self.assign = np.zeros((self.num_row, self.num_col), dtype=np.bool)
+
+  """
+    estimate neighbor loss:
+        cur_r: current row
+        cur_c: current column
+        mv: current motion vector
+    """
+
+  def neighborLoss(self, cur_r, cur_c, mv):
+    loss = 0
+    #accumulate the difference between the current block's motion vector and its neighbors'
+    for i, j in {(-1, 0), (1, 0), (0, 1), (0, -1)}:
+      nb_r = cur_r + i
+      nb_c = cur_c + j
+      if 0 <= nb_r < self.num_row and 0 <= nb_c < self.num_col and self.assign[
+          nb_r, nb_c]:
+        loss += LA.norm(mv - self.mf[nb_r, nb_c])
+    return loss
+
+  """
+    search method:
+        cur_r: start row
+        cur_c: start column
+    """
+
+  def search(self, cur_r, cur_c):
+    dist_loss = self.block_dist(cur_r, cur_c, [0, 0], self.metric)
+    nb_loss = self.neighborLoss(cur_r, cur_c, np.array([0, 0]))
+    min_loss = dist_loss + self.beta * nb_loss
+    cur_x = cur_c * self.blk_sz
+    cur_y = cur_r * self.blk_sz
+    ref_x = cur_x
+    ref_y = cur_y
+    #search all valid positions and select the one with minimum distortion
+    # as well as weighted neighbor loss
+    for y in xrange(cur_y - self.wnd_sz, cur_y + self.wnd_sz):
+      for x in xrange(cur_x - self.wnd_sz, cur_x + self.wnd_sz):
+        if 0 <= x < self.width - self.blk_sz and 0 <= y < self.height - self.blk_sz:
+          dist_loss = self.block_dist(cur_r, cur_c, [y - cur_y, x - cur_x],
+                                      self.metric)
+          nb_loss = self.neighborLoss(cur_r, cur_c, [y - cur_y, x - cur_x])
+          loss = dist_loss + self.beta * nb_loss
+          if loss < min_loss:
+            min_loss = loss
+            ref_x = x
+            ref_y = y
+    return ref_x, ref_y
+
+  def motion_field_estimation(self):
+    for i in xrange(self.num_row):
+      for j in xrange(self.num_col):
+        ref_x, ref_y = self.search(i, j)
+        self.mf[i, j] = np.array(
+            [ref_y - i * self.blk_sz, ref_x - j * self.blk_sz])
+        self.assign[i, j] = True
+
+
+"""Exhaust with Neighbor Constraint and Feature Score"""
+
+
+class ExhaustNeighborFeatureScore(MotionEST):
+  """
+    Constructor:
+        cur_f: current frame
+        ref_f: reference frame
+        blk_sz: block size
+        wnd_size: search window size
+        beta: neighbor loss weight
+        max_iter: maximum number of iterations
+        metric: metric to compare the blocks' distortion
+    """
+
+  def __init__(self,
+               cur_f,
+               ref_f,
+               blk_size,
+               wnd_size,
+               beta=1,
+               max_iter=100,
+               metric=MSE):
+    self.name = 'exhaust + neighbor+feature score'
+    self.wnd_sz = wnd_size
+    self.beta = beta
+    self.metric = metric
+    self.max_iter = max_iter
+    super(ExhaustNeighborFeatureScore, self).__init__(cur_f, ref_f, blk_size)
+    self.fs = self.getFeatureScore()
+
+  """
+    get feature score of each block
+    """
+
+  def getFeatureScore(self):
+    fs = np.zeros((self.num_row, self.num_col))
+    for r in xrange(self.num_row):
+      for c in xrange(self.num_col):
+        IxIx = 0
+        IyIy = 0
+        IxIy = 0
+        #get ssd surface
+        for x in xrange(self.blk_sz - 1):
+          for y in xrange(self.blk_sz - 1):
+            ox = c * self.blk_sz + x
+            oy = r * self.blk_sz + y
+            Ix = self.cur_yuv[oy, ox + 1, 0] - self.cur_yuv[oy, ox, 0]
+            Iy = self.cur_yuv[oy + 1, ox, 0] - self.cur_yuv[oy, ox, 0]
+            IxIx += Ix * Ix
+            IyIy += Iy * Iy
+            IxIy += Ix * Iy
+        #get maximum and minimum eigenvalues
+        lambda_max = 0.5 * ((IxIx + IyIy) + np.sqrt(4 * IxIy * IxIy +
+                                                    (IxIx - IyIy)**2))
+        lambda_min = 0.5 * ((IxIx + IyIy) - np.sqrt(4 * IxIy * IxIy +
+                                                    (IxIx - IyIy)**2))
+        fs[r, c] = lambda_max * lambda_min / (1e-6 + lambda_max + lambda_min)
+        if fs[r, c] < 0:
+          fs[r, c] = 0
+    return fs
+
+  """
+    do exhaust search
+    """
+
+  def search(self, cur_r, cur_c):
+    min_loss = self.block_dist(cur_r, cur_c, [0, 0], self.metric)
+    cur_x = cur_c * self.blk_sz
+    cur_y = cur_r * self.blk_sz
+    ref_x = cur_x
+    ref_y = cur_y
+    #search all valid positions and select the one with minimum distortion
+    for y in xrange(cur_y - self.wnd_sz, cur_y + self.wnd_sz):
+      for x in xrange(cur_x - self.wnd_sz, cur_x + self.wnd_sz):
+        if 0 <= x < self.width - self.blk_sz and 0 <= y < self.height - self.blk_sz:
+          loss = self.block_dist(cur_r, cur_c, [y - cur_y, x - cur_x],
+                                 self.metric)
+          if loss < min_loss:
+            min_loss = loss
+            ref_x = x
+            ref_y = y
+    return ref_x, ref_y
+
+  """
+    add smooth constraint
+    """
+
+  def smooth(self, uvs, mvs):
+    sm_uvs = np.zeros(uvs.shape)
+    for r in xrange(self.num_row):
+      for c in xrange(self.num_col):
+        avg_uv = np.array([0.0, 0.0])
+        for i, j in {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}:
+          if 0 <= i < self.num_row and 0 <= j < self.num_col:
+            avg_uv += uvs[i, j] / 6.0
+        for i, j in {(r - 1, c - 1), (r - 1, c + 1), (r + 1, c - 1),
+                     (r + 1, c + 1)}:
+          if 0 <= i < self.num_row and 0 <= j < self.num_col:
+            avg_uv += uvs[i, j] / 12.0
+        sm_uvs[r, c] = (self.fs[r, c] * mvs[r, c] + self.beta * avg_uv) / (
+            self.beta + self.fs[r, c])
+    return sm_uvs
+
+  def motion_field_estimation(self):
+    #get matching results
+    mvs = np.zeros(self.mf.shape)
+    for r in xrange(self.num_row):
+      for c in xrange(self.num_col):
+        ref_x, ref_y = self.search(r, c)
+        mvs[r, c] = np.array([ref_y - r * self.blk_sz, ref_x - c * self.blk_sz])
+    #add smoothness constraint
+    uvs = np.zeros(self.mf.shape)
+    for _ in xrange(self.max_iter):
+      uvs = self.smooth(uvs, mvs)
+    self.mf = uvs
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/GroundTruth.py b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/GroundTruth.py
new file mode 100644
index 0000000..20eb5f1
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/GroundTruth.py
@@ -0,0 +1,40 @@
+#!/usr/bin/env python
+# coding: utf-8
+import numpy as np
+import numpy.linalg as LA
+from MotionEST import MotionEST
+"""Ground Truth:
+
+  Load in ground truth motion field and mask
+"""
+
+
+class GroundTruth(MotionEST):
+  """constructor:
+
+    cur_f: current frame
+    ref_f: reference frame
+    blk_sz: block size
+    gt_path: ground truth motion field file path
+    """
+
+  def __init__(self, cur_f, ref_f, blk_sz, gt_path, mf=None, mask=None):
+    self.name = 'ground truth'
+    super(GroundTruth, self).__init__(cur_f, ref_f, blk_sz)
+    self.mask = np.zeros((self.num_row, self.num_col), dtype=np.bool)
+    if gt_path:
+      with open(gt_path) as gt_file:
+        lines = gt_file.readlines()
+        for i in xrange(len(lines)):
+          info = lines[i].split(';')
+          for j in xrange(len(info)):
+            x, y = info[j].split(',')
+            #-, - stands for nothing
+            if x == '-' or y == '-':
+              self.mask[i, -j - 1] = True
+              continue
+            #the order of original file is flipped on the x axis
+            self.mf[i, -j - 1] = np.array([float(y), -float(x)], dtype=np.int)
+    else:
+      self.mf = mf
+      self.mask = mask
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/HornSchunck.py b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/HornSchunck.py
new file mode 100644
index 0000000..38fcae1
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/HornSchunck.py
@@ -0,0 +1,204 @@
+#!/usr/bin/env python
+# coding: utf-8
+import numpy as np
+import numpy.linalg as LA
+from scipy.ndimage.filters import gaussian_filter
+from scipy.sparse import csc_matrix
+from scipy.sparse.linalg import inv
+from MotionEST import MotionEST
+"""Horn & Schunck Model"""
+
+
+class HornSchunck(MotionEST):
+  """
+    constructor:
+        cur_f: current frame
+        ref_f: reference frame
+        blk_sz: block size
+        alpha: smoothness constraint weight
+        sigma: gaussian blur parameter
+    """
+
+  def __init__(self, cur_f, ref_f, blk_sz, alpha, sigma, max_iter=100):
+    super(HornSchunck, self).__init__(cur_f, ref_f, blk_sz)
+    self.cur_I, self.ref_I = self.getIntensity()
+    #perform gaussian blur to smooth the intensity
+    self.cur_I = gaussian_filter(self.cur_I, sigma=sigma)
+    self.ref_I = gaussian_filter(self.ref_I, sigma=sigma)
+    self.alpha = alpha
+    self.max_iter = max_iter
+    self.Ix, self.Iy, self.It = self.intensityDiff()
+
+  """
+    Build Frame Intensity
+    """
+
+  def getIntensity(self):
+    cur_I = np.zeros((self.num_row, self.num_col))
+    ref_I = np.zeros((self.num_row, self.num_col))
+    #use average intensity as block's intensity
+    for i in xrange(self.num_row):
+      for j in xrange(self.num_col):
+        r = i * self.blk_sz
+        c = j * self.blk_sz
+        cur_I[i, j] = np.mean(self.cur_yuv[r:r + self.blk_sz, c:c + self.blk_sz,
+                                           0])
+        ref_I[i, j] = np.mean(self.ref_yuv[r:r + self.blk_sz, c:c + self.blk_sz,
+                                           0])
+    return cur_I, ref_I
+
+  """
+    Get First Order Derivative
+    """
+
+  def intensityDiff(self):
+    Ix = np.zeros((self.num_row, self.num_col))
+    Iy = np.zeros((self.num_row, self.num_col))
+    It = np.zeros((self.num_row, self.num_col))
+    sz = self.blk_sz
+    for i in xrange(self.num_row - 1):
+      for j in xrange(self.num_col - 1):
+        """
+                Ix:
+                (i  ,j) <--- (i  ,j+1)
+                (i+1,j) <--- (i+1,j+1)
+                """
+        count = 0
+        for r, c in {(i, j + 1), (i + 1, j + 1)}:
+          if 0 <= r < self.num_row and 0 < c < self.num_col:
+            Ix[i, j] += (
+                self.cur_I[r, c] - self.cur_I[r, c - 1] + self.ref_I[r, c] -
+                self.ref_I[r, c - 1])
+            count += 2
+        Ix[i, j] /= count
+        """
+                Iy:
+                (i  ,j)      (i  ,j+1)
+                   ^             ^
+                   |             |
+                (i+1,j)      (i+1,j+1)
+                """
+        count = 0
+        for r, c in {(i + 1, j), (i + 1, j + 1)}:
+          if 0 < r < self.num_row and 0 <= c < self.num_col:
+            Iy[i, j] += (
+                self.cur_I[r, c] - self.cur_I[r - 1, c] + self.ref_I[r, c] -
+                self.ref_I[r - 1, c])
+            count += 2
+        Iy[i, j] /= count
+        count = 0
+        #It:
+        for r in xrange(i, i + 2):
+          for c in xrange(j, j + 2):
+            if 0 <= r < self.num_row and 0 <= c < self.num_col:
+              It[i, j] += (self.ref_I[r, c] - self.cur_I[r, c])
+              count += 1
+        It[i, j] /= count
+    return Ix, Iy, It
+
+  """
+    Get weighted average of neighbor motion vectors
+    for evaluation of laplacian
+    """
+
+  def averageMV(self):
+    avg = np.zeros((self.num_row, self.num_col, 2))
+    """
+        1/12 ---  1/6 --- 1/12
+         |         |       |
+        1/6  --- -1/8 --- 1/6
+         |         |       |
+        1/12 ---  1/6 --- 1/12
+        """
+    for i in xrange(self.num_row):
+      for j in xrange(self.num_col):
+        for r, c in {(-1, 0), (1, 0), (0, -1), (0, 1)}:
+          if 0 <= i + r < self.num_row and 0 <= j + c < self.num_col:
+            avg[i, j] += self.mf[i + r, j + c] / 6.0
+        for r, c in {(-1, -1), (-1, 1), (1, -1), (1, 1)}:
+          if 0 <= i + r < self.num_row and 0 <= j + c < self.num_col:
+            avg[i, j] += self.mf[i + r, j + c] / 12.0
+    return avg
+
+  def motion_field_estimation(self):
+    count = 0
+    """
+        u_{n+1} = ~u_n - Ix(Ix.~u_n+Iy.~v+It)/(IxIx+IyIy+alpha^2)
+        v_{n+1} = ~v_n - Iy(Ix.~u_n+Iy.~v+It)/(IxIx+IyIy+alpha^2)
+        """
+    denom = self.alpha**2 + np.power(self.Ix, 2) + np.power(self.Iy, 2)
+    while count < self.max_iter:
+      avg = self.averageMV()
+      self.mf[:, :, 1] = avg[:, :, 1] - self.Ix * (
+          self.Ix * avg[:, :, 1] + self.Iy * avg[:, :, 0] + self.It) / denom
+      self.mf[:, :, 0] = avg[:, :, 0] - self.Iy * (
+          self.Ix * avg[:, :, 1] + self.Iy * avg[:, :, 0] + self.It) / denom
+      count += 1
+    self.mf *= self.blk_sz
+
+  def motion_field_estimation_mat(self):
+    row_idx = []
+    col_idx = []
+    data = []
+
+    N = 2 * self.num_row * self.num_col
+    b = np.zeros((N, 1))
+    for i in xrange(self.num_row):
+      for j in xrange(self.num_col):
+        """(IxIx+alpha^2)u+IxIy.v-alpha^2~u IxIy.u+(IyIy+alpha^2)v-alpha^2~v"""
+        u_idx = i * 2 * self.num_col + 2 * j
+        v_idx = u_idx + 1
+        b[u_idx, 0] = -self.Ix[i, j] * self.It[i, j]
+        b[v_idx, 0] = -self.Iy[i, j] * self.It[i, j]
+        #u: (IxIx+alpha^2)u
+        row_idx.append(u_idx)
+        col_idx.append(u_idx)
+        data.append(self.Ix[i, j] * self.Ix[i, j] + self.alpha**2)
+        #IxIy.v
+        row_idx.append(u_idx)
+        col_idx.append(v_idx)
+        data.append(self.Ix[i, j] * self.Iy[i, j])
+
+        #v: IxIy.u
+        row_idx.append(v_idx)
+        col_idx.append(u_idx)
+        data.append(self.Ix[i, j] * self.Iy[i, j])
+        #(IyIy+alpha^2)v
+        row_idx.append(v_idx)
+        col_idx.append(v_idx)
+        data.append(self.Iy[i, j] * self.Iy[i, j] + self.alpha**2)
+
+        #-alpha^2~u
+        #-alpha^2~v
+        for r, c in {(-1, 0), (1, 0), (0, -1), (0, 1)}:
+          if 0 <= i + r < self.num_row and 0 <= j + c < self.num_col:
+            u_nb = (i + r) * 2 * self.num_col + 2 * (j + c)
+            v_nb = u_nb + 1
+
+            row_idx.append(u_idx)
+            col_idx.append(u_nb)
+            data.append(-1 * self.alpha**2 / 6.0)
+
+            row_idx.append(v_idx)
+            col_idx.append(v_nb)
+            data.append(-1 * self.alpha**2 / 6.0)
+        for r, c in {(-1, -1), (-1, 1), (1, -1), (1, 1)}:
+          if 0 <= i + r < self.num_row and 0 <= j + c < self.num_col:
+            u_nb = (i + r) * 2 * self.num_col + 2 * (j + c)
+            v_nb = u_nb + 1
+
+            row_idx.append(u_idx)
+            col_idx.append(u_nb)
+            data.append(-1 * self.alpha**2 / 12.0)
+
+            row_idx.append(v_idx)
+            col_idx.append(v_nb)
+            data.append(-1 * self.alpha**2 / 12.0)
+    M = csc_matrix((data, (row_idx, col_idx)), shape=(N, N))
+    M_inv = inv(M)
+    uv = M_inv.dot(b)
+
+    for i in xrange(self.num_row):
+      for j in xrange(self.num_col):
+        self.mf[i, j, 0] = uv[i * 2 * self.num_col + 2 * j + 1, 0] * self.blk_sz
+        self.mf[i, j, 1] = uv[i * 2 * self.num_col + 2 * j, 0] * self.blk_sz
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/MotionEST.py b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/MotionEST.py
new file mode 100644
index 0000000..a9a4ace
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/MotionEST.py
@@ -0,0 +1,109 @@
+#!/usr/bin/env python
+# coding: utf-8
+import numpy as np
+import numpy.linalg as LA
+import matplotlib.pyplot as plt
+from Util import drawMF, MSE
+"""The Base Class of Estimators"""
+
+
+class MotionEST(object):
+  """
+    constructor:
+        cur_f: current frame
+        ref_f: reference frame
+        blk_sz: block size
+    """
+
+  def __init__(self, cur_f, ref_f, blk_sz):
+    self.cur_f = cur_f
+    self.ref_f = ref_f
+    self.blk_sz = blk_sz
+    #convert RGB to YUV
+    self.cur_yuv = np.array(self.cur_f.convert('YCbCr'), dtype=np.int)
+    self.ref_yuv = np.array(self.ref_f.convert('YCbCr'), dtype=np.int)
+    #frame size
+    self.width = self.cur_f.size[0]
+    self.height = self.cur_f.size[1]
+    #motion field size
+    self.num_row = self.height // self.blk_sz
+    self.num_col = self.width // self.blk_sz
+    #initialize motion field
+    self.mf = np.zeros((self.num_row, self.num_col, 2))
+
+  """estimation function; overridden by child classes"""
+
+  def motion_field_estimation(self):
+    pass
+
+  """
+    distortion of a block:
+      cur_r: current row
+      cur_c: current column
+      mv: motion vector
+      metric: distortion metric
+  """
+
+  def block_dist(self, cur_r, cur_c, mv, metric=MSE):
+    cur_x = cur_c * self.blk_sz
+    cur_y = cur_r * self.blk_sz
+    h = min(self.blk_sz, self.height - cur_y)
+    w = min(self.blk_sz, self.width - cur_x)
+    cur_blk = self.cur_yuv[cur_y:cur_y + h, cur_x:cur_x + w, :]
+    ref_x = int(cur_x + mv[1])
+    ref_y = int(cur_y + mv[0])
+    if 0 <= ref_x < self.width - w and 0 <= ref_y < self.height - h:
+      ref_blk = self.ref_yuv[ref_y:ref_y + h, ref_x:ref_x + w, :]
+    else:
+      ref_blk = np.zeros((h, w, 3))
+    return metric(cur_blk, ref_blk)
+
+  """
+    distortion of motion field
+  """
+
+  def distortion(self, mask=None, metric=MSE):
+    loss = 0
+    count = 0
+    for i in xrange(self.num_row):
+      for j in xrange(self.num_col):
+        if mask is not None and mask[i, j]:
+          continue
+        loss += self.block_dist(i, j, self.mf[i, j], metric)
+        count += 1
+    return loss / count
+
+  """Evaluation: compare the difference with the ground truth."""
+
+  def motion_field_evaluation(self, ground_truth):
+    loss = 0
+    count = 0
+    gt = ground_truth.mf
+    mask = ground_truth.mask
+    for i in xrange(self.num_row):
+      for j in xrange(self.num_col):
+        if mask is not None and mask[i][j]:
+          continue
+        loss += LA.norm(gt[i, j] - self.mf[i, j])
+        count += 1
+    return loss / count
+
+  """render the motion field"""
+
+  def show(self, ground_truth=None, size=10):
+    cur_mf = drawMF(self.cur_f, self.blk_sz, self.mf)
+    if ground_truth is None:
+      n_row = 1
+    else:
+      gt_mf = drawMF(self.cur_f, self.blk_sz, ground_truth)
+      n_row = 2
+    plt.figure(figsize=(n_row * size, size * self.height / self.width))
+    plt.subplot(1, n_row, 1)
+    plt.imshow(cur_mf)
+    plt.title('Estimated Motion Field')
+    if ground_truth is not None:
+      plt.subplot(1, n_row, 2)
+      plt.imshow(gt_mf)
+      plt.title('Ground Truth')
+    plt.tight_layout()
+    plt.show()
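For reference, the displaced-block distortion computed by `MotionEST.block_dist` above can be sketched standalone in NumPy (an illustrative sketch, not part of the patch; mean-squared error stands in for the configurable metric, and the function operates on a single-channel array for brevity):

```python
import numpy as np

def block_dist(cur, ref, r, c, mv, blk_sz):
    # Distortion of block (r, c) of `cur` against the block of `ref`
    # displaced by mv = (row_shift, col_shift), mirroring MotionEST.block_dist.
    h, w = cur.shape[:2]
    y, x = r * blk_sz, c * blk_sz
    bh, bw = min(blk_sz, h - y), min(blk_sz, w - x)
    cur_blk = cur[y:y + bh, x:x + bw].astype(np.int64)
    ry, rx = int(y + mv[0]), int(x + mv[1])
    if 0 <= rx < w - bw and 0 <= ry < h - bh:
        ref_blk = ref[ry:ry + bh, rx:rx + bw].astype(np.int64)
    else:
        # Out-of-frame reference block: compare against zeros, as the class does.
        ref_blk = np.zeros((bh, bw), dtype=np.int64)
    return float(np.mean((cur_blk - ref_blk) ** 2))
```

A zero motion vector on identical frames yields zero distortion; any displacement on a non-constant frame yields a positive value.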
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/SearchSmooth.py b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/SearchSmooth.py
new file mode 100644
index 0000000..e0b7fb6
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/SearchSmooth.py
@@ -0,0 +1,213 @@
+#!/usr/bin/env python
+# coding: utf-8
+import numpy as np
+import numpy.linalg as LA
+from Util import MSE
+from MotionEST import MotionEST
+"""Search & Smooth Model with Adaptive Weights"""
+
+
+class SearchSmoothAdapt(MotionEST):
+  """
+    Constructor:
+        cur_f: current frame
+        ref_f: reference frame
+        blk_sz: block size
+        search: search estimator providing the initial motion field
+        max_iter: maximum number of smoothing iterations
+    """
+
+  def __init__(self, cur_f, ref_f, blk_size, search, max_iter=100):
+    self.search = search
+    self.max_iter = max_iter
+    super(SearchSmoothAdapt, self).__init__(cur_f, ref_f, blk_size)
+
+  """
+    get local differential of the reference frame
+    """
+
+  def getRefLocalDiff(self, mvs):
+    m, n = self.num_row, self.num_col
+    localDiff = [[] for _ in xrange(m)]
+    blk_sz = self.blk_sz
+    for r in xrange(m):
+      for c in xrange(n):
+        I_row = 0
+        I_col = 0
+        #get ssd surface
+        count = 0
+        center = self.cur_yuv[r * blk_sz:(r + 1) * blk_sz,
+                              c * blk_sz:(c + 1) * blk_sz, 0]
+        ty = np.clip(r * blk_sz + int(mvs[r, c, 0]), 0, self.height - blk_sz)
+        tx = np.clip(c * blk_sz + int(mvs[r, c, 1]), 0, self.width - blk_sz)
+        target = self.ref_yuv[ty:ty + blk_sz, tx:tx + blk_sz, 0]
+        for y, x in {(ty - blk_sz, tx), (ty + blk_sz, tx)}:
+          if 0 <= y < self.height - blk_sz and 0 <= x < self.width - blk_sz:
+            nb = self.ref_yuv[y:y + blk_sz, x:x + blk_sz, 0]
+            I_row += np.sum(np.abs(nb - center)) - np.sum(
+                np.abs(target - center))
+            count += 1
+        I_row //= (count * blk_sz * blk_sz)
+        count = 0
+        for y, x in {(ty, tx - blk_sz), (ty, tx + blk_sz)}:
+          if 0 <= y < self.height - blk_sz and 0 <= x < self.width - blk_sz:
+            nb = self.ref_yuv[y:y + blk_sz, x:x + blk_sz, 0]
+            I_col += np.sum(np.abs(nb - center)) - np.sum(
+                np.abs(target - center))
+            count += 1
+        I_col //= (count * blk_sz * blk_sz)
+        localDiff[r].append(
+            np.array([[I_row * I_row, I_row * I_col],
+                      [I_col * I_row, I_col * I_col]]))
+    return localDiff
+
+  """
+    add smooth constraint
+    """
+
+  def smooth(self, uvs, mvs):
+    sm_uvs = np.zeros(uvs.shape)
+    blk_sz = self.blk_sz
+    for r in xrange(self.num_row):
+      for c in xrange(self.num_col):
+        nb_uv = np.array([0.0, 0.0])
+        for i, j in {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}:
+          if 0 <= i < self.num_row and 0 <= j < self.num_col:
+            nb_uv += uvs[i, j] / 6.0
+          else:
+            nb_uv += uvs[r, c] / 6.0
+        for i, j in {(r - 1, c - 1), (r - 1, c + 1), (r + 1, c - 1),
+                     (r + 1, c + 1)}:
+          if 0 <= i < self.num_row and 0 <= j < self.num_col:
+            nb_uv += uvs[i, j] / 12.0
+          else:
+            nb_uv += uvs[r, c] / 12.0
+        ssd_nb = self.block_dist(r, c, self.blk_sz * nb_uv)
+        mv = mvs[r, c]
+        ssd_mv = self.block_dist(r, c, mv)
+        alpha = (ssd_nb - ssd_mv) / (ssd_mv + 1e-6)
+        M = alpha * self.localDiff[r][c]
+        P = M + np.identity(2)
+        inv_P = LA.inv(P)
+        sm_uvs[r, c] = np.dot(inv_P, nb_uv) + np.dot(
+            np.matmul(inv_P, M), mv / blk_sz)
+    return sm_uvs
+
+  def block_matching(self):
+    self.search.motion_field_estimation()
+
+  def motion_field_estimation(self):
+    self.localDiff = self.getRefLocalDiff(self.search.mf)
+    #get matching results
+    mvs = self.search.mf
+    #add smoothness constraint
+    uvs = mvs / self.blk_sz
+    for _ in xrange(self.max_iter):
+      uvs = self.smooth(uvs, mvs)
+    self.mf = uvs * self.blk_sz
+
+
+"""Search & Smooth Model with Fixed Weights"""
+
+
+class SearchSmoothFix(MotionEST):
+  """
+    Constructor:
+        cur_f: current frame
+        ref_f: reference frame
+        blk_sz: block size
+        search: search estimator providing the initial motion field
+        beta: neighbor loss weight
+        max_iter: maximum number of smoothing iterations
+    """
+
+  def __init__(self, cur_f, ref_f, blk_size, search, beta, max_iter=100):
+    self.search = search
+    self.max_iter = max_iter
+    self.beta = beta
+    super(SearchSmoothFix, self).__init__(cur_f, ref_f, blk_size)
+
+  """
+    get local differential of the reference frame
+    """
+
+  def getRefLocalDiff(self, mvs):
+    m, n = self.num_row, self.num_col
+    localDiff = [[] for _ in xrange(m)]
+    blk_sz = self.blk_sz
+    for r in xrange(m):
+      for c in xrange(n):
+        I_row = 0
+        I_col = 0
+        #get ssd surface
+        count = 0
+        center = self.cur_yuv[r * blk_sz:(r + 1) * blk_sz,
+                              c * blk_sz:(c + 1) * blk_sz, 0]
+        ty = np.clip(r * blk_sz + int(mvs[r, c, 0]), 0, self.height - blk_sz)
+        tx = np.clip(c * blk_sz + int(mvs[r, c, 1]), 0, self.width - blk_sz)
+        target = self.ref_yuv[ty:ty + blk_sz, tx:tx + blk_sz, 0]
+        for y, x in {(ty - blk_sz, tx), (ty + blk_sz, tx)}:
+          if 0 <= y < self.height - blk_sz and 0 <= x < self.width - blk_sz:
+            nb = self.ref_yuv[y:y + blk_sz, x:x + blk_sz, 0]
+            I_row += np.sum(np.abs(nb - center)) - np.sum(
+                np.abs(target - center))
+            count += 1
+        I_row //= (count * blk_sz * blk_sz)
+        count = 0
+        for y, x in {(ty, tx - blk_sz), (ty, tx + blk_sz)}:
+          if 0 <= y < self.height - blk_sz and 0 <= x < self.width - blk_sz:
+            nb = self.ref_yuv[y:y + blk_sz, x:x + blk_sz, 0]
+            I_col += np.sum(np.abs(nb - center)) - np.sum(
+                np.abs(target - center))
+            count += 1
+        I_col //= (count * blk_sz * blk_sz)
+        localDiff[r].append(
+            np.array([[I_row * I_row, I_row * I_col],
+                      [I_col * I_row, I_col * I_col]]))
+    return localDiff
+
+  """
+    add smooth constraint
+    """
+
+  def smooth(self, uvs, mvs):
+    sm_uvs = np.zeros(uvs.shape)
+    blk_sz = self.blk_sz
+    for r in xrange(self.num_row):
+      for c in xrange(self.num_col):
+        nb_uv = np.array([0.0, 0.0])
+        for i, j in {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}:
+          if 0 <= i < self.num_row and 0 <= j < self.num_col:
+            nb_uv += uvs[i, j] / 6.0
+          else:
+            nb_uv += uvs[r, c] / 6.0
+        for i, j in {(r - 1, c - 1), (r - 1, c + 1), (r + 1, c - 1),
+                     (r + 1, c + 1)}:
+          if 0 <= i < self.num_row and 0 <= j < self.num_col:
+            nb_uv += uvs[i, j] / 12.0
+          else:
+            nb_uv += uvs[r, c] / 12.0
+        mv = mvs[r, c] / blk_sz
+        M = self.localDiff[r][c]
+        P = M + self.beta * np.identity(2)
+        inv_P = LA.inv(P)
+        sm_uvs[r, c] = np.dot(inv_P, self.beta * nb_uv) + np.dot(
+            np.matmul(inv_P, M), mv)
+    return sm_uvs
+
+  def block_matching(self):
+    self.search.motion_field_estimation()
+
+  def motion_field_estimation(self):
+    #get local structure
+    self.localDiff = self.getRefLocalDiff(self.search.mf)
+    #get matching results
+    mvs = self.search.mf
+    #add smoothness constraint
+    uvs = mvs / self.blk_sz
+    for _ in xrange(self.max_iter):
+      uvs = self.smooth(uvs, mvs)
+    self.mf = uvs * self.blk_sz
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/Util.py b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/Util.py
new file mode 100644
index 0000000..a5265f4
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/MotionEST/Util.py
@@ -0,0 +1,38 @@
+#!/usr/bin/env python
+# coding: utf-8
+import numpy as np
+import numpy.linalg as LA
+import matplotlib.pyplot as plt
+from scipy.ndimage import filters
+from PIL import Image, ImageDraw
+
+
+def MSE(blk1, blk2):
+  return np.mean(
+      LA.norm(
+          np.array(blk1, dtype=np.int) - np.array(blk2, dtype=np.int), axis=2))
+
+
+def drawMF(img, blk_sz, mf):
+  img_rgba = img.convert('RGBA')
+  mf_layer = Image.new(mode='RGBA', size=img_rgba.size, color=(0, 0, 0, 0))
+  draw = ImageDraw.Draw(mf_layer)
+  width = img_rgba.size[0]
+  height = img_rgba.size[1]
+  num_row = height // blk_sz
+  num_col = width // blk_sz
+  for i in xrange(num_row):
+    left = (0, i * blk_sz)
+    right = (width, i * blk_sz)
+    draw.line([left, right], fill=(0, 0, 255, 255))
+  for j in xrange(num_col):
+    up = (j * blk_sz, 0)
+    down = (j * blk_sz, height)
+    draw.line([up, down], fill=(0, 0, 255, 255))
+  for i in xrange(num_row):
+    for j in xrange(num_col):
+      center = (j * blk_sz + 0.5 * blk_sz, i * blk_sz + 0.5 * blk_sz)
+      """mf[i,j][0] is the row shift and mf[i,j][1] is the column shift. In PIL coordinates, head[0] is x (column shift) and head[1] is y (row shift)."""
+      head = (center[0] + mf[i, j][1], center[1] + mf[i, j][0])
+      draw.line([center, head], fill=(255, 0, 0, 255))
+  return Image.alpha_composite(img_rgba, mf_layer)
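Note that `Util.MSE` above is not a textbook mean-squared error: it averages a per-pixel Euclidean norm taken across the channel axis. A standalone sketch of the same computation (illustrative, not part of the patch):

```python
import numpy as np

def mse(blk1, blk2):
    # Per-pixel L2 norm across the channel axis, then averaged over pixels,
    # matching Util.MSE (an L2-per-pixel mean rather than classic MSE).
    a = np.asarray(blk1, dtype=np.int64)
    b = np.asarray(blk2, dtype=np.int64)
    return float(np.mean(np.linalg.norm(a - b, axis=2)))
```

For two blocks that differ by 1 in every channel of every pixel, this returns sqrt(3) (the per-pixel norm over three channels), not 1.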
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/genY4M/genY4M.py b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/genY4M/genY4M.py
new file mode 100644
index 0000000..6ef5c6c
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/genY4M/genY4M.py
@@ -0,0 +1,76 @@
+import argparse
+from os import listdir, path
+from PIL import Image
+import sys
+
+parser = argparse.ArgumentParser()
+parser.add_argument("--frame_path", default="../data/frame/", type=str)
+parser.add_argument("--frame_rate", default="25:1", type=str)
+parser.add_argument("--interlacing", default="Ip", type=str)
+parser.add_argument("--pix_ratio", default="0:0", type=str)
+parser.add_argument("--color_space", default="4:2:0", type=str)
+parser.add_argument("--output", default="output.y4m", type=str)
+
+
+def generate(args, frames):
+  if len(frames) == 0:
+    return
+  #sort the frames based on the frame index
+  frames = sorted(frames, key=lambda x: x[0])
+  #convert the frames to YUV form
+  frames = [f.convert("YCbCr") for _, f in frames]
+  #write the header
+  header = "YUV4MPEG2 W%d H%d F%s %s A%s" % (frames[0].width, frames[0].height,
+                                             args.frame_rate, args.interlacing,
+                                             args.pix_ratio)
+  cs = args.color_space.split(":")
+  header += " C%s%s%s\n" % (cs[0], cs[1], cs[2])
+  #estimate the sample step based on subsample value
+  subsamples = [int(c) for c in cs]
+  r_step = [1, int(subsamples[2] == 0) + 1, int(subsamples[2] == 0) + 1]
+  c_step = [1, 4 // subsamples[1], 4 // subsamples[1]]
+  #write in frames
+  with open(args.output, "wb") as y4m:
+    y4m.write(header)
+    for f in frames:
+      y4m.write("FRAME\n")
+      px = f.load()
+      for k in xrange(3):
+        for i in xrange(0, f.height, r_step[k]):
+          for j in xrange(0, f.width, c_step[k]):
+            yuv = px[j, i]
+            y4m.write(chr(yuv[k]))
+
+
+if __name__ == "__main__":
+  args = parser.parse_args()
+  frames = []
+  frames_mv = []
+  for filename in listdir(args.frame_path):
+    name, ext = filename.split(".")
+    if ext == "png":
+      name_parse = name.split("_")
+      idx = int(name_parse[-1])
+      img = Image.open(path.join(args.frame_path, filename))
+      if name_parse[-2] == "mv":
+        frames_mv.append((idx, img))
+      else:
+        frames.append((idx, img))
+  if len(frames) == 0:
+    print("No frames in directory: " + args.frame_path)
+    sys.exit()
+  print("----------------------Y4M Info----------------------")
+  print("width:  %d" % frames[0][1].width)
+  print("height: %d" % frames[0][1].height)
+  print("#frame: %d" % len(frames))
+  print("frame rate: %s" % args.frame_rate)
+  print("interlacing: %s" % args.interlacing)
+  print("pixel ratio: %s" % args.pix_ratio)
+  print("color space: %s" % args.color_space)
+  print("----------------------------------------------------")
+
+  print("Generating ...")
+  generate(args, frames)
+  if len(frames_mv) != 0:
+    args.output = args.output.replace(".y4m", "_mv.y4m")
+    generate(args, frames_mv)
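The stream header assembled inside `generate()` above can be factored into a small helper for clarity (a sketch under the script's defaults; the helper name is ours, not part of the patch). The colorspace tag concatenates the subsampling digits, so `4:2:0` becomes `C420`:

```python
def y4m_header(width, height, frame_rate="25:1", interlacing="Ip",
               pix_ratio="0:0", color_space="4:2:0"):
    # Build a YUV4MPEG2 stream header the same way generate() does.
    cs = color_space.split(":")
    return "YUV4MPEG2 W%d H%d F%s %s A%s C%s%s%s\n" % (
        width, height, frame_rate, interlacing, pix_ratio, cs[0], cs[1], cs[2])
```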
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/BVH.pde b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/BVH.pde
new file mode 100644
index 0000000..7249ee9
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/BVH.pde
@@ -0,0 +1,163 @@
+/*
+ * AABB bounding box
+ * Bounding Volume Hierarchy
+ */
+class BoundingBox {
+  float min_x, min_y, min_z, max_x, max_y, max_z;
+  PVector center;
+  BoundingBox() {
+    min_x = Float.POSITIVE_INFINITY;
+    min_y = Float.POSITIVE_INFINITY;
+    min_z = Float.POSITIVE_INFINITY;
+    max_x = Float.NEGATIVE_INFINITY;
+    max_y = Float.NEGATIVE_INFINITY;
+    max_z = Float.NEGATIVE_INFINITY;
+    center = new PVector();
+  }
+  // build a bounding box for a triangle
+  void create(Triangle t) {
+    min_x = min(t.p1.x, min(t.p2.x, t.p3.x));
+    max_x = max(t.p1.x, max(t.p2.x, t.p3.x));
+
+    min_y = min(t.p1.y, min(t.p2.y, t.p3.y));
+    max_y = max(t.p1.y, max(t.p2.y, t.p3.y));
+
+    min_z = min(t.p1.z, min(t.p2.z, t.p3.z));
+    max_z = max(t.p1.z, max(t.p2.z, t.p3.z));
+    center.x = (max_x + min_x) / 2;
+    center.y = (max_y + min_y) / 2;
+    center.z = (max_z + min_z) / 2;
+  }
+  // merge two bounding boxes
+  void add(BoundingBox bbx) {
+    min_x = min(min_x, bbx.min_x);
+    min_y = min(min_y, bbx.min_y);
+    min_z = min(min_z, bbx.min_z);
+
+    max_x = max(max_x, bbx.max_x);
+    max_y = max(max_y, bbx.max_y);
+    max_z = max(max_z, bbx.max_z);
+    center.x = (max_x + min_x) / 2;
+    center.y = (max_y + min_y) / 2;
+    center.z = (max_z + min_z) / 2;
+  }
+  // get bounding box center axis value
+  float getCenterAxisValue(int axis) {
+    if (axis == 1) {
+      return center.x;
+    } else if (axis == 2) {
+      return center.y;
+    }
+    // when axis == 3
+    return center.z;
+  }
+  // check if a ray intersects the bounding box
+  boolean intersect(Ray r) {
+    float tmin, tmax;
+    if (r.dir.x >= 0) {
+      tmin = (min_x - r.ori.x) * (1.0f / r.dir.x);
+      tmax = (max_x - r.ori.x) * (1.0f / r.dir.x);
+    } else {
+      tmin = (max_x - r.ori.x) * (1.0f / r.dir.x);
+      tmax = (min_x - r.ori.x) * (1.0f / r.dir.x);
+    }
+
+    float tymin, tymax;
+    if (r.dir.y >= 0) {
+      tymin = (min_y - r.ori.y) * (1.0f / r.dir.y);
+      tymax = (max_y - r.ori.y) * (1.0f / r.dir.y);
+    } else {
+      tymin = (max_y - r.ori.y) * (1.0f / r.dir.y);
+      tymax = (min_y - r.ori.y) * (1.0f / r.dir.y);
+    }
+
+    if (tmax < tymin || tymax < tmin) {
+      return false;
+    }
+
+    tmin = tmin < tymin ? tymin : tmin;
+    tmax = tmax > tymax ? tymax : tmax;
+
+    float tzmin, tzmax;
+    if (r.dir.z >= 0) {
+      tzmin = (min_z - r.ori.z) * (1.0f / r.dir.z);
+      tzmax = (max_z - r.ori.z) * (1.0f / r.dir.z);
+    } else {
+      tzmin = (max_z - r.ori.z) * (1.0f / r.dir.z);
+      tzmax = (min_z - r.ori.z) * (1.0f / r.dir.z);
+    }
+    if (tmax < tzmin || tmin > tzmax) {
+      return false;
+    }
+    return true;
+  }
+}
+// Bounding Volume Hierarchy
+class BVH {
+  // Binary Tree
+  BVH left, right;
+  BoundingBox overall_bbx;
+  ArrayList<Triangle> mesh;
+  BVH(ArrayList<Triangle> mesh) {
+    this.mesh = mesh;
+    overall_bbx = new BoundingBox();
+    left = null;
+    right = null;
+    int mesh_size = this.mesh.size();
+    if (mesh_size <= 1) {
+      return;
+    }
+    // randomly select an axis
+    int axis = int(random(100)) % 3 + 1;
+    // build bounding box and save the selected center component
+    float[] axis_values = new float[mesh_size];
+    for (int i = 0; i < mesh_size; i++) {
+      Triangle t = this.mesh.get(i);
+      overall_bbx.add(t.bbx);
+      axis_values[i] = t.bbx.getCenterAxisValue(axis);
+    }
+    // find the median value of selected center component as pivot
+    axis_values = sort(axis_values);
+    float pivot;
+    if (mesh_size % 2 == 1) {
+      pivot = axis_values[mesh_size / 2];
+    } else {
+      pivot =
+          0.5f * (axis_values[mesh_size / 2 - 1] + axis_values[mesh_size / 2]);
+    }
+    // Build left node and right node by partitioning the mesh based on triangle
+    // bounding box center component value
+    ArrayList<Triangle> left_mesh = new ArrayList<Triangle>();
+    ArrayList<Triangle> right_mesh = new ArrayList<Triangle>();
+    for (int i = 0; i < mesh_size; i++) {
+      Triangle t = this.mesh.get(i);
+      if (t.bbx.getCenterAxisValue(axis) < pivot) {
+        left_mesh.add(t);
+      } else if (t.bbx.getCenterAxisValue(axis) > pivot) {
+        right_mesh.add(t);
+      } else if (left_mesh.size() < right_mesh.size()) {
+        left_mesh.add(t);
+      } else {
+        right_mesh.add(t);
+      }
+    }
+    left = new BVH(left_mesh);
+    right = new BVH(right_mesh);
+  }
+  // check if a ray intersects the current volume
+  boolean intersect(Ray r, float[] param) {
+    if (mesh.size() == 0) {
+      return false;
+    }
+    if (mesh.size() == 1) {
+      Triangle t = mesh.get(0);
+      return t.intersect(r, param);
+    }
+    if (!overall_bbx.intersect(r)) {
+      return false;
+    }
+    boolean left_res = left.intersect(r, param);
+    boolean right_res = right.intersect(r, param);
+    return left_res || right_res;
+  }
+}
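The per-axis slab test in `BoundingBox.intersect` above generalizes naturally to a loop over axes. A compact Python sketch of the same idea (illustrative, not part of the patch; like the Processing code, it does not reject intersections behind the ray origin):

```python
def aabb_intersect(bmin, bmax, origin, direction):
    # Slab test: intersect the ray origin + t*direction with each axis-aligned
    # slab and keep the running [tmin, tmax] overlap; an empty overlap is a miss.
    tmin, tmax = float("-inf"), float("inf")
    for lo, hi, o, d in zip(bmin, bmax, origin, direction):
        inv = float("inf") if d == 0 else 1.0 / d  # axis-parallel ray
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
        if tmax < tmin:
            return False
    return True
```

A ray fired along +x from beside the unit cube hits it when its y/z coordinates are inside the box, and misses otherwise.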
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/Camera.pde b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/Camera.pde
new file mode 100644
index 0000000..b39dae3
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/Camera.pde
@@ -0,0 +1,138 @@
+class Camera {
+  // camera's field of view
+  float fov;
+  // camera's position, look at point and axis
+  PVector pos, center, axis;
+  PVector init_pos, init_center, init_axis;
+  float move_speed;
+  float rot_speed;
+  Camera(float fov, PVector pos, PVector center, PVector axis) {
+    this.fov = fov;
+    this.pos = pos;
+    this.center = center;
+    this.axis = axis;
+    this.axis.normalize();
+    move_speed = 0.001;
+    rot_speed = 0.01 * PI;
+    init_pos = pos.copy();
+    init_center = center.copy();
+    init_axis = axis.copy();
+  }
+
+  Camera copy() {
+    Camera cam = new Camera(fov, pos.copy(), center.copy(), axis.copy());
+    return cam;
+  }
+
+  PVector project(PVector pos) {
+    PVector proj = MatxVec3(getCameraMat(), PVector.sub(pos, this.pos));
+    proj.x = (float)height / 2.0 * proj.x / proj.z / tan(fov / 2.0f);
+    proj.y = (float)height / 2.0 * proj.y / proj.z / tan(fov / 2.0f);
+    proj.z = proj.z;
+    return proj;
+  }
+
+  float[] getCameraMat() {
+    float[] mat = new float[9];
+    PVector dir = PVector.sub(center, pos);
+    dir.normalize();
+    PVector left = dir.cross(axis);
+    left.normalize();
+    // the Processing camera system does not follow the right-hand rule
+    mat[0] = -left.x;
+    mat[1] = -left.y;
+    mat[2] = -left.z;
+    mat[3] = axis.x;
+    mat[4] = axis.y;
+    mat[5] = axis.z;
+    mat[6] = dir.x;
+    mat[7] = dir.y;
+    mat[8] = dir.z;
+
+    return mat;
+  }
+
+  void run() {
+    PVector dir, left;
+    if (mousePressed) {
+      float angleX = (float)mouseX / width * PI - PI / 2;
+      float angleY = (float)mouseY / height * PI - PI;
+      PVector diff = PVector.sub(center, pos);
+      float radius = diff.mag();
+      pos.x = radius * sin(angleY) * sin(angleX) + center.x;
+      pos.y = radius * cos(angleY) + center.y;
+      pos.z = radius * sin(angleY) * cos(angleX) + center.z;
+      dir = PVector.sub(center, pos);
+      dir.normalize();
+      PVector up = new PVector(0, 1, 0);
+      left = up.cross(dir);
+      left.normalize();
+      axis = dir.cross(left);
+      axis.normalize();
+    }
+
+    if (keyPressed) {
+      switch (key) {
+        case 'w':
+          dir = PVector.sub(center, pos);
+          dir.normalize();
+          pos = PVector.add(pos, PVector.mult(dir, move_speed));
+          center = PVector.add(center, PVector.mult(dir, move_speed));
+          break;
+        case 's':
+          dir = PVector.sub(center, pos);
+          dir.normalize();
+          pos = PVector.sub(pos, PVector.mult(dir, move_speed));
+          center = PVector.sub(center, PVector.mult(dir, move_speed));
+          break;
+        case 'a':
+          dir = PVector.sub(center, pos);
+          dir.normalize();
+          left = axis.cross(dir);
+          left.normalize();
+          pos = PVector.add(pos, PVector.mult(left, move_speed));
+          center = PVector.add(center, PVector.mult(left, move_speed));
+          break;
+        case 'd':
+          dir = PVector.sub(center, pos);
+          dir.normalize();
+          left = axis.cross(dir);
+          left.normalize();
+          pos = PVector.sub(pos, PVector.mult(left, move_speed));
+          center = PVector.sub(center, PVector.mult(left, move_speed));
+          break;
+        case 'r':
+          dir = PVector.sub(center, pos);
+          dir.normalize();
+          float[] mat = getRotationMat3x3(rot_speed, dir.x, dir.y, dir.z);
+          axis = MatxVec3(mat, axis);
+          axis.normalize();
+          break;
+        case 'b':
+          pos = init_pos.copy();
+          center = init_center.copy();
+          axis = init_axis.copy();
+          break;
+        case '+': move_speed *= 2.0f; break;
+        case '-': move_speed /= 2.0; break;
+        case CODED:
+          if (keyCode == UP) {
+            pos = PVector.add(pos, PVector.mult(axis, move_speed));
+            center = PVector.add(center, PVector.mult(axis, move_speed));
+          } else if (keyCode == DOWN) {
+            pos = PVector.sub(pos, PVector.mult(axis, move_speed));
+            center = PVector.sub(center, PVector.mult(axis, move_speed));
+          }
+      }
+    }
+  }
+  void open() {
+    perspective(fov, float(width) / height, 1e-6, 1e5);
+    camera(pos.x, pos.y, pos.z, center.x, center.y, center.z, axis.x, axis.y,
+           axis.z);
+  }
+  void close() {
+    ortho(-width, 0, -height, 0);
+    camera(0, 0, 0, 0, 0, 1, 0, 1, 0);
+  }
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/MotionField.pde b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/MotionField.pde
new file mode 100644
index 0000000..a5e04b6
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/MotionField.pde
@@ -0,0 +1,102 @@
+class MotionField {
+  int block_size;
+  ArrayList<PVector> motion_field;
+  MotionField(int block_size) {
+    this.block_size = block_size;
+    motion_field = new ArrayList<PVector>();
+  }
+
+  void update(Camera last_cam, Camera current_cam, PointCloud point_cloud,
+              BVH bvh) {
+    // clear motion field
+    motion_field = new ArrayList<PVector>();
+    int r_num = height / block_size, c_num = width / block_size;
+    for (int i = 0; i < r_num * c_num; i++)
+      motion_field.add(new PVector(0, 0, 0));
+    // estimate motion vector of each point in point cloud
+    for (int i = 0; i < point_cloud.size(); i++) {
+      PVector p = point_cloud.getPosition(i);
+      PVector p0 = current_cam.project(p);
+      PVector p1 = last_cam.project(p);
+      int row = int((p0.y + height / 2.0f) / block_size);
+      int col = int((p0.x + width / 2.0f) / block_size);
+      if (row >= 0 && row < r_num && col >= 0 && col < c_num) {
+        PVector accu = motion_field.get(row * c_num + col);
+        accu.x += p1.x - p0.x;
+        accu.y += p1.y - p0.y;
+        accu.z += 1;
+      }
+    }
+    // if a block contains no point, use ray tracing to see whether it is
+    // covered by a triangle
+    for (int i = 0; i < r_num; i++)
+      for (int j = 0; j < c_num; j++) {
+        PVector accu = motion_field.get(i * c_num + j);
+        if (accu.z > 0) {
+          continue;
+        }
+        // use the center of the block to generate view ray
+        float cx = j * block_size + block_size / 2.0f - width / 2.0f;
+        float cy = i * block_size + block_size / 2.0f - height / 2.0f;
+        float cz = 0.5f * height / tan(current_cam.fov / 2.0f);
+        PVector dir = new PVector(cx, cy, cz);
+        float[] camMat = current_cam.getCameraMat();
+        dir = MatxVec3(transpose3x3(camMat), dir);
+        dir.normalize();
+        Ray r = new Ray(current_cam.pos, dir);
+        // ray tracing
+        float[] param = new float[4];
+        param[0] = Float.POSITIVE_INFINITY;
+        if (bvh.intersect(r, param)) {
+          PVector p = new PVector(param[1], param[2], param[3]);
+          PVector p0 = current_cam.project(p);
+          PVector p1 = last_cam.project(p);
+          accu.x += p1.x - p0.x;
+          accu.y += p1.y - p0.y;
+          accu.z += 1;
+        }
+      }
+    // estimate the motion vector of each block
+    for (int i = 0; i < r_num * c_num; i++) {
+      PVector mv = motion_field.get(i);
+      if (mv.z > 0) {
+        motion_field.set(i, new PVector(mv.x / mv.z, mv.y / mv.z, 0));
+      } else  // there is nothing in the block, use -1 to mark it.
+      {
+        motion_field.set(i, new PVector(0.0, 0.0, -1));
+      }
+    }
+  }
+
+  void render() {
+    int r_num = height / block_size, c_num = width / block_size;
+    for (int i = 0; i < r_num; i++)
+      for (int j = 0; j < c_num; j++) {
+        PVector mv = motion_field.get(i * c_num + j);
+        float ox = j * block_size + 0.5f * block_size;
+        float oy = i * block_size + 0.5f * block_size;
+        stroke(255, 0, 0);
+        line(ox, oy, ox + mv.x, oy + mv.y);
+      }
+  }
+
+  void save(String path) {
+    int r_num = height / block_size;
+    int c_num = width / block_size;
+    String[] mvs = new String[r_num];
+    for (int i = 0; i < r_num; i++) {
+      mvs[i] = "";
+      for (int j = 0; j < c_num; j++) {
+        PVector mv = motion_field.get(i * c_num + j);
+        if (mv.z != -1) {
+          mvs[i] += str(mv.x) + "," + str(mv.y);
+        } else  // there is nothing
+        {
+          mvs[i] += "-,-";
+        }
+        if (j != c_num - 1) mvs[i] += ";";
+      }
+    }
+    saveStrings(path, mvs);
+  }
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/PointCloud.pde b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/PointCloud.pde
new file mode 100644
index 0000000..714a6f3
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/PointCloud.pde
@@ -0,0 +1,138 @@
+class PointCloud {
+  ArrayList<PVector> points;  // array to save points
+  IntList point_colors;       // array to save points color
+  PVector cloud_mass;
+  float[] depth;
+  boolean[] real;
+  PointCloud() {
+    // initialize
+    points = new ArrayList<PVector>();
+    point_colors = new IntList();
+    cloud_mass = new PVector(0, 0, 0);
+    depth = new float[width * height];
+    real = new boolean[width * height];
+  }
+
+  void generate(PImage rgb_img, PImage depth_img, Transform trans) {
+    if (depth_img.width != width || depth_img.height != height ||
+        rgb_img.width != width || rgb_img.height != height) {
+      println("rgb and depth image dimensions must match the window size");
+      exit();
+    }
+    // clear depth and real
+    for (int i = 0; i < width * height; i++) {
+      depth[i] = 0;
+      real[i] = false;
+    }
+    for (int v = 0; v < height; v++)
+      for (int u = 0; u < width; u++) {
+        // get depth value (low 16 bits of the pixel)
+        color depth_px = depth_img.get(u, v);
+        depth[v * width + u] = depth_px & 0x0000FFFF;
+        if (int(depth[v * width + u]) != 0) {
+          real[v * width + u] = true;
+        }
+        point_colors.append(rgb_img.get(u, v));
+      }
+    for (int v = 0; v < height; v++)
+      for (int u = 0; u < width; u++) {
+        if (int(depth[v * width + u]) == 0) {
+          interpolateDepth(v, u);
+        }
+        // add transformed pixel as well as pixel color to the list
+        PVector pos = trans.transform(u, v, int(depth[v * width + u]));
+        points.add(pos);
+        // accumulate z value
+        cloud_mass = PVector.add(cloud_mass, pos);
+      }
+  }
+  void fillInDepthAlongPath(float d, Node node) {
+    node = node.parent;
+    while (node != null) {
+      int i = node.row;
+      int j = node.col;
+      if (depth[i * width + j] == 0) {
+        depth[i * width + j] = d;
+      }
+      node = node.parent;
+    }
+  }
+  // interpolate
+  void interpolateDepth(int row, int col) {
+    if (row < 0 || row >= height || col < 0 || col >= width ||
+        int(depth[row * width + col]) != 0) {
+      return;
+    }
+    ArrayList<Node> queue = new ArrayList<Node>();
+    queue.add(new Node(row, col, null));
+    boolean[] visited = new boolean[width * height];
+    for (int i = 0; i < width * height; i++) visited[i] = false;
+    visited[row * width + col] = true;
+    // use BFS to find the nearest pixel with a real depth
+    while (queue.size() > 0) {
+      // pop
+      Node node = queue.get(0);
+      queue.remove(0);
+      int i = node.row;
+      int j = node.col;
+      // if current position have a real depth
+      if (depth[i * width + j] != 0 && real[i * width + j]) {
+        fillInDepthAlongPath(depth[i * width + j], node);
+        break;
+      } else {
+        // enqueue the unvisited 8-connected neighbors
+        for (int r = max(0, i - 1); r < min(height, i + 2); r++) {
+          for (int c = max(0, j - 1); c < min(width, j + 2); c++) {
+            if (!visited[r * width + c]) {
+              visited[r * width + c] = true;
+              queue.add(new Node(r, c, node));
+            }
+          }
+        }
+      }
+    }
+  }
+  // get point cloud size
+  int size() { return points.size(); }
+  // get ith position
+  PVector getPosition(int i) {
+    if (i >= points.size()) {
+      println("point position: index " + str(i) + " out of range");
+      exit();
+    }
+    return points.get(i);
+  }
+  // get ith color
+  color getColor(int i) {
+    if (i >= point_colors.size()) {
+      println("point color: index " + str(i) + " out of range");
+      exit();
+    }
+    return point_colors.get(i);
+  }
+  // get cloud center
+  PVector getCloudCenter() {
+    if (points.size() > 0) {
+      return PVector.div(cloud_mass, points.size());
+    }
+    return new PVector(0, 0, 0);
+  }
+  // merge two clouds
+  void merge(PointCloud point_cloud) {
+    for (int i = 0; i < point_cloud.size(); i++) {
+      points.add(point_cloud.getPosition(i));
+      point_colors.append(point_cloud.getColor(i));
+    }
+    cloud_mass = PVector.add(cloud_mass, point_cloud.cloud_mass);
+  }
+}
+
+class Node {
+  int row, col;
+  Node parent;
+  Node(int row, int col, Node parent) {
+    this.row = row;
+    this.col = col;
+    this.parent = parent;
+  }
+}
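The BFS in `interpolateDepth`/`fillInDepthAlongPath` above finds the nearest pixel with a measured ("real") depth and writes that depth back along the search path. A minimal Python translation of the same idea, using a parent map instead of the `Node` class (a sketch, not the tool's code; here any nonzero depth counts as real):

```python
from collections import deque

def interpolate_depth(depth, width, height, row, col):
    """BFS from (row, col) to the nearest pixel with nonzero depth, then
    copy that depth back along the parent chain, as in PointCloud."""
    if not (0 <= row < height and 0 <= col < width):
        return
    if depth[row * width + col] != 0:
        return
    parent = {(row, col): None}
    queue = deque([(row, col)])
    while queue:
        i, j = queue.popleft()
        if depth[i * width + j] != 0:
            # fill in the found depth along the BFS path
            # (the analogue of fillInDepthAlongPath)
            node = parent[(i, j)]
            while node is not None:
                r, c = node
                if depth[r * width + c] == 0:
                    depth[r * width + c] = depth[i * width + j]
                node = parent[node]
            return
        # enqueue the unvisited 8-connected neighbors
        for r in range(max(0, i - 1), min(height, i + 2)):
            for c in range(max(0, j - 1), min(width, j + 2)):
                if (r, c) not in parent:
                    parent[(r, c)] = (i, j)
                    queue.append((r, c))
```

The parent dictionary doubles as the visited set, so the back-fill walk is the direct analogue of the `node.parent` chain in the Processing code.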
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/Ray_Tracing.pde b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/Ray_Tracing.pde
new file mode 100644
index 0000000..ef4be69
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/Ray_Tracing.pde
@@ -0,0 +1,61 @@
+// Triangle
+class Triangle {
+  // position
+  PVector p1, p2, p3;
+  // color
+  color c1, c2, c3;
+  BoundingBox bbx;
+  Triangle(PVector p1, PVector p2, PVector p3, color c1, color c2, color c3) {
+    this.p1 = p1;
+    this.p2 = p2;
+    this.p3 = p3;
+    this.c1 = c1;
+    this.c2 = c2;
+    this.c3 = c3;
+    bbx = new BoundingBox();
+    bbx.create(this);
+  }
+  // check to see if a ray intersects with the triangle
+  boolean intersect(Ray r, float[] param) {
+    PVector p21 = PVector.sub(p2, p1);
+    PVector p31 = PVector.sub(p3, p1);
+    PVector po1 = PVector.sub(r.ori, p1);
+
+    PVector dxp31 = r.dir.cross(p31);
+    PVector po1xp21 = po1.cross(p21);
+    float denom = p21.dot(dxp31);
+    float t = p31.dot(po1xp21) / denom;
+    float alpha = po1.dot(dxp31) / denom;
+    float beta = r.dir.dot(po1xp21) / denom;
+
+    boolean res = t > 0 && alpha > 0 && alpha < 1 && beta > 0 && beta < 1 &&
+                  alpha + beta < 1;
+    // depth test
+    if (res && t < param[0]) {
+      param[0] = t;
+      param[1] = alpha * p1.x + beta * p2.x + (1 - alpha - beta) * p3.x;
+      param[2] = alpha * p1.y + beta * p2.y + (1 - alpha - beta) * p3.y;
+      param[3] = alpha * p1.z + beta * p2.z + (1 - alpha - beta) * p3.z;
+    }
+    return res;
+  }
+  void render() {
+    beginShape(TRIANGLES);
+    fill(c1);
+    vertex(p1.x, p1.y, p1.z);
+    fill(c2);
+    vertex(p2.x, p2.y, p2.z);
+    fill(c3);
+    vertex(p3.x, p3.y, p3.z);
+    endShape();
+  }
+}
+// Ray
+class Ray {
+  // origin and direction
+  PVector ori, dir;
+  Ray(PVector ori, PVector dir) {
+    this.ori = ori;
+    this.dir = dir;
+  }
+}
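`Triangle.intersect` above is a Möller–Trumbore-style test: it solves for the ray parameter `t` and two barycentric coordinates via cross and dot products and accepts a hit only when all three lie in range. The same math in a standalone Python sketch (tuples instead of `PVector`; `None` signals a miss):

```python
def ray_triangle_intersect(ori, dirn, p1, p2, p3):
    """Return the ray parameter t of the hit point, or None on a miss."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    p21, p31, po1 = sub(p2, p1), sub(p3, p1), sub(ori, p1)
    dxp31 = cross(dirn, p31)
    po1xp21 = cross(po1, p21)
    denom = dot(p21, dxp31)
    if abs(denom) < 1e-12:  # ray parallel to the triangle plane
        return None
    t = dot(p31, po1xp21) / denom
    alpha = dot(po1, dxp31) / denom
    beta = dot(dirn, po1xp21) / denom
    # in front of the ray origin and strictly inside the triangle
    if t > 0 and 0 < alpha < 1 and 0 < beta < 1 and alpha + beta < 1:
        return t
    return None
```

The explicit parallel-ray guard is the one addition over the Processing code, which divides by `denom` unconditionally.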
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/Scene.pde b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/Scene.pde
new file mode 100644
index 0000000..cf79ab7
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/Scene.pde
@@ -0,0 +1,59 @@
+class Scene {
+  PointCloud point_cloud;
+  ArrayList<Triangle> mesh;
+  BVH bvh;
+  MotionField motion_field;
+  Camera last_cam;
+  Camera current_cam;
+  int frame_count;
+
+  Scene(Camera camera, PointCloud point_cloud, MotionField motion_field) {
+    this.point_cloud = point_cloud;
+    this.motion_field = motion_field;
+    mesh = new ArrayList<Triangle>();
+    for (int v = 0; v < height - 1; v++)
+      for (int u = 0; u < width - 1; u++) {
+        PVector p1 = point_cloud.getPosition(v * width + u);
+        PVector p2 = point_cloud.getPosition(v * width + u + 1);
+        PVector p3 = point_cloud.getPosition((v + 1) * width + u + 1);
+        PVector p4 = point_cloud.getPosition((v + 1) * width + u);
+        color c1 = point_cloud.getColor(v * width + u);
+        color c2 = point_cloud.getColor(v * width + u + 1);
+        color c3 = point_cloud.getColor((v + 1) * width + u + 1);
+        color c4 = point_cloud.getColor((v + 1) * width + u);
+        mesh.add(new Triangle(p1, p2, p3, c1, c2, c3));
+        mesh.add(new Triangle(p3, p4, p1, c3, c4, c1));
+      }
+    bvh = new BVH(mesh);
+    last_cam = camera.copy();
+    current_cam = camera;
+    frame_count = 0;
+  }
+
+  void run() {
+    last_cam = current_cam.copy();
+    current_cam.run();
+    motion_field.update(last_cam, current_cam, point_cloud, bvh);
+    frame_count += 1;
+  }
+
+  void render(boolean show_motion_field) {
+    // build mesh
+    current_cam.open();
+    noStroke();
+    for (int i = 0; i < mesh.size(); i++) {
+      Triangle t = mesh.get(i);
+      t.render();
+    }
+    if (show_motion_field) {
+      current_cam.close();
+      motion_field.render();
+    }
+  }
+
+  void save(String path) { saveFrame(path + "_" + str(frame_count) + ".png"); }
+
+  void saveMotionField(String path) {
+    motion_field.save(path + "_" + str(frame_count) + ".txt");
+  }
+}
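`Scene`'s constructor turns the point-cloud grid into a triangle mesh by splitting every quad of four adjacent samples into the two triangles `(p1, p2, p3)` and `(p3, p4, p1)`. The index pattern can be sketched in Python:

```python
def grid_to_triangles(width, height):
    """Index triples for the two triangles per grid cell, mirroring
    Scene's mesh construction: (p1, p2, p3) and (p3, p4, p1)."""
    tris = []
    for v in range(height - 1):
        for u in range(width - 1):
            i1 = v * width + u            # top-left
            i2 = v * width + u + 1        # top-right
            i3 = (v + 1) * width + u + 1  # bottom-right
            i4 = (v + 1) * width + u      # bottom-left
            tris.append((i1, i2, i3))
            tris.append((i3, i4, i1))
    return tris
```

A `w x h` grid yields `2 * (w - 1) * (h - 1)` triangles, which is why the loops stop one row and one column short of the frame size.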
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/Transform.pde b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/Transform.pde
new file mode 100644
index 0000000..af2204e
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/Transform.pde
@@ -0,0 +1,82 @@
+class Transform {
+  float[] inv_rot;  // inverse of rotation matrix
+  PVector inv_mov;  // inverse of movement vector
+  float focal;       // the focal distance of the real camera
+  int w, h;          // the width and height of the frame
+  float normalizer;  // normalization factor of depth
+  Transform(float tx, float ty, float tz, float qx, float qy, float qz,
+            float qw, float fov, int w, int h, float normalizer) {
+    // currently we do not use the real camera's position and quaternion;
+    // we may use them in the future when combining all frames
+    float[] rot = quaternion2Mat3x3(qx, qy, qz, qw);
+    inv_rot = transpose3x3(rot);
+    inv_mov = new PVector(-tx, -ty, -tz);
+    this.focal = 0.5f * h / tan(fov / 2.0);
+    this.w = w;
+    this.h = h;
+    this.normalizer = normalizer;
+  }
+
+  PVector transform(int i, int j, float d) {
+    // transform from camera view to world view
+    float z = d / normalizer;
+    float x = (i - w / 2.0f) * z / focal;
+    float y = (j - h / 2.0f) * z / focal;
+    return new PVector(x, y, z);
+  }
+}
+
+// get rotation matrix by using rotation axis and angle
+float[] getRotationMat3x3(float angle, float ax, float ay, float az) {
+  float[] mat = new float[9];
+  float c = cos(angle);
+  float s = sin(angle);
+  mat[0] = c + ax * ax * (1 - c);
+  mat[1] = ax * ay * (1 - c) - az * s;
+  mat[2] = ax * az * (1 - c) + ay * s;
+  mat[3] = ay * ax * (1 - c) + az * s;
+  mat[4] = c + ay * ay * (1 - c);
+  mat[5] = ay * az * (1 - c) - ax * s;
+  mat[6] = az * ax * (1 - c) - ay * s;
+  mat[7] = az * ay * (1 - c) + ax * s;
+  mat[8] = c + az * az * (1 - c);
+  return mat;
+}
+
+// get rotation matrix by using quaternion
+float[] quaternion2Mat3x3(float qx, float qy, float qz, float qw) {
+  float[] mat = new float[9];
+  mat[0] = 1 - 2 * qy * qy - 2 * qz * qz;
+  mat[1] = 2 * qx * qy - 2 * qz * qw;
+  mat[2] = 2 * qx * qz + 2 * qy * qw;
+  mat[3] = 2 * qx * qy + 2 * qz * qw;
+  mat[4] = 1 - 2 * qx * qx - 2 * qz * qz;
+  mat[5] = 2 * qy * qz - 2 * qx * qw;
+  mat[6] = 2 * qx * qz - 2 * qy * qw;
+  mat[7] = 2 * qy * qz + 2 * qx * qw;
+  mat[8] = 1 - 2 * qx * qx - 2 * qy * qy;
+  return mat;
+}
+
+// transpose a 3x3 matrix
+float[] transpose3x3(float[] mat) {
+  float[] Tmat = new float[9];
+  for (int i = 0; i < 3; i++)
+    for (int j = 0; j < 3; j++) {
+      Tmat[i * 3 + j] = mat[j * 3 + i];
+    }
+  return Tmat;
+}
+
+// multiply a matrix with vector
+PVector MatxVec3(float[] mat, PVector v) {
+  float[] vec = v.array();
+  float[] res = new float[3];
+  for (int i = 0; i < 3; i++) {
+    res[i] = 0.0f;
+    for (int j = 0; j < 3; j++) {
+      res[i] += mat[i * 3 + j] * vec[j];
+    }
+  }
+  return new PVector(res[0], res[1], res[2]);
+}
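`quaternion2Mat3x3` and `transpose3x3` above implement the standard unit-quaternion-to-rotation-matrix formula; since a rotation matrix is orthonormal, its transpose is its inverse, which is why `Transform` only needs to store the transpose as `inv_rot`. A Python sketch of the same two helpers on row-major 9-element lists:

```python
def quaternion_to_mat3x3(qx, qy, qz, qw):
    """Row-major 3x3 rotation matrix from a unit quaternion,
    mirroring quaternion2Mat3x3 above."""
    return [
        1 - 2 * qy * qy - 2 * qz * qz, 2 * qx * qy - 2 * qz * qw,
        2 * qx * qz + 2 * qy * qw,
        2 * qx * qy + 2 * qz * qw, 1 - 2 * qx * qx - 2 * qz * qz,
        2 * qy * qz - 2 * qx * qw,
        2 * qx * qz - 2 * qy * qw, 2 * qy * qz + 2 * qx * qw,
        1 - 2 * qx * qx - 2 * qy * qy,
    ]

def transpose3x3(m):
    """Transpose; for a rotation matrix this is also its inverse."""
    return [m[j * 3 + i] for i in range(3) for j in range(3)]
```

For example, the quaternion `(0, 0, sqrt(1/2), sqrt(1/2))` is a 90-degree rotation about z, which maps the x axis to the y axis.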
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/Util.pde b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/Util.pde
new file mode 100644
index 0000000..19d124a
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/Util.pde
@@ -0,0 +1,28 @@
+// show grids
+void showGrids(int block_size) {
+  ortho(-width, 0, -height, 0);
+  camera(0, 0, 0, 0, 0, 1, 0, 1, 0);
+  stroke(0, 0, 255);
+  for (int i = 0; i < height; i += block_size) {
+    line(0, i, width, i);
+  }
+  for (int i = 0; i < width; i += block_size) {
+    line(i, 0, i, height);
+  }
+}
+
+// save the point cloud information
+void savePointCloud(PointCloud point_cloud, String file_name) {
+  String[] positions = new String[point_cloud.points.size()];
+  String[] colors = new String[point_cloud.points.size()];
+  for (int i = 0; i < point_cloud.points.size(); i++) {
+    PVector point = point_cloud.getPosition(i);
+    color point_color = point_cloud.getColor(i);
+    positions[i] = str(point.x) + ' ' + str(point.y) + ' ' + str(point.z);
+    colors[i] = str(((point_color >> 16) & 0xFF) / 255.0) + ' ' +
+                str(((point_color >> 8) & 0xFF) / 255.0) + ' ' +
+                str((point_color & 0xFF) / 255.0);
+  }
+  saveStrings(file_name + "_pos.txt", positions);
+  saveStrings(file_name + "_color.txt", colors);
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/sketch_3D_reconstruction.pde b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/sketch_3D_reconstruction.pde
new file mode 100644
index 0000000..22a4954
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/3D-Reconstruction/sketch_3D_reconstruction/sketch_3D_reconstruction.pde
@@ -0,0 +1,74 @@
+/*The dataset is from
+ *Computer Vision Group
+ *TUM Department of Informatics
+ *Technical University of Munich
+ *https://vision.in.tum.de/data/datasets/rgbd-dataset/download#freiburg1_xyz
+ */
+Scene scene;
+void setup() {
+  size(640, 480, P3D);
+  // default settings
+  int frame_no = 0;            // frame number
+  float fov = PI / 3;          // field of view
+  int block_size = 8;          // block size
+  float normalizer = 5000.0f;  // normalizer
+  // initialize
+  PointCloud point_cloud = new PointCloud();
+  // synchronized rgb, depth and ground truth
+  String head = "../data/";
+  String[] rgb_depth_gt = loadStrings(head + "rgb_depth_groundtruth.txt");
+  // read in rgb and depth image file paths as well as corresponding camera
+  // position and quaternion
+  String[] info = split(rgb_depth_gt[frame_no], ' ');
+  String rgb_path = head + info[1];
+  String depth_path = head + info[3];
+  float tx = float(info[7]), ty = float(info[8]),
+        tz = float(info[9]);  // real camera position
+  float qx = float(info[10]), qy = float(info[11]), qz = float(info[12]),
+        qw = float(info[13]);  // quaternion
+
+  // build transformer
+  Transform trans =
+      new Transform(tx, ty, tz, qx, qy, qz, qw, fov, width, height, normalizer);
+  PImage rgb = loadImage(rgb_path);
+  PImage depth = loadImage(depth_path);
+  // generate point cloud
+  point_cloud.generate(rgb, depth, trans);
+  // initialize camera
+  Camera camera = new Camera(fov, new PVector(0, 0, 0), new PVector(0, 0, 1),
+                             new PVector(0, 1, 0));
+  // initialize motion field
+  MotionField motion_field = new MotionField(block_size);
+  // initialize scene
+  scene = new Scene(camera, point_cloud, motion_field);
+}
+boolean inter = false;
+void draw() {
+  background(0);
+  // camera controls: drag the mouse to rotate the camera
+  // w: go forward
+  // s: go backward
+  // a: go left
+  // d: go right
+  // up arrow: go up
+  // down arrow: go down
+  //+ increase move speed
+  //- decrease move speed
+  // r: rotate the camera
+  // b: reset to initial position
+  scene.run();
+  if (keyPressed && key == 'o') {
+    inter = true;
+  }
+  scene.render(
+      false);  // true: turn on motion field; false: turn off motion field
+  // save frame with no motion field
+  scene.save("../data/frame/raw");
+  background(0);
+  scene.render(true);
+  showGrids(scene.motion_field.block_size);
+  // save frame with motion field
+  scene.save("../data/frame/raw_mv");
+  scene.saveMotionField("../data/frame/mv");
+}
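`setup()` above pulls the rgb path, depth path, camera position, and quaternion out of one line of the synchronized association file by fixed token index. That parsing can be sketched in Python; the field layout (timestamps interleaved with the paths, then `tx ty tz qx qy qz qw`) is an assumption inferred from the indices used in `setup()`, not a documented format:

```python
def parse_association_line(line):
    """Extract paths, position, and quaternion from one line of the
    synchronized rgb/depth/groundtruth file (assumed field layout)."""
    info = line.split(' ')
    return {
        'rgb_path': info[1],
        'depth_path': info[3],
        # real camera position tx, ty, tz (tokens 7-9)
        'position': tuple(float(x) for x in info[7:10]),
        # quaternion qx, qy, qz, qw (tokens 10-13)
        'quaternion': tuple(float(x) for x in info[10:14]),
    }
```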
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/cpplint.py b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/cpplint.py
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/gen_authors.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/gen_authors.sh
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/intersect-diffs.py b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/intersect-diffs.py
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/lint-hunks.py b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/lint-hunks.py
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/non_greedy_mv/non_greedy_mv.py b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/non_greedy_mv/non_greedy_mv.py
new file mode 100644
index 0000000..513faa4
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/non_greedy_mv/non_greedy_mv.py
@@ -0,0 +1,186 @@
+import sys
+import matplotlib.pyplot as plt
+from matplotlib.collections import LineCollection
+from matplotlib import colors as mcolors
+import numpy as np
+import math
+
+
+def draw_mv_ls(axis, mv_ls, mode=0):
+  colors = np.array([(1., 0., 0., 1.)])
+  segs = np.array([
+      np.array([[ptr[0], ptr[1]], [ptr[0] + ptr[2], ptr[1] + ptr[3]]])
+      for ptr in mv_ls
+  ])
+  line_segments = LineCollection(
+      segs, linewidths=(1.,), colors=colors, linestyle='solid')
+  axis.add_collection(line_segments)
+  if mode == 0:
+    axis.scatter(mv_ls[:, 0], mv_ls[:, 1], s=2, c='b')
+  else:
+    axis.scatter(
+        mv_ls[:, 0] + mv_ls[:, 2], mv_ls[:, 1] + mv_ls[:, 3], s=2, c='b')
+
+
+def draw_pred_block_ls(axis, mv_ls, bs, mode=0):
+  colors = np.array([(0., 0., 0., 1.)])
+  segs = []
+  for ptr in mv_ls:
+    if mode == 0:
+      x = ptr[0]
+      y = ptr[1]
+    else:
+      x = ptr[0] + ptr[2]
+      y = ptr[1] + ptr[3]
+    x_ls = [x, x + bs, x + bs, x, x]
+    y_ls = [y, y, y + bs, y + bs, y]
+
+    segs.append(np.column_stack([x_ls, y_ls]))
+  line_segments = LineCollection(
+      segs, linewidths=(.5,), colors=colors, linestyle='solid')
+  axis.add_collection(line_segments)
+
+
+def read_frame(fp, no_swap=0):
+  plane = [None, None, None]
+  for i in range(3):
+    line = fp.readline()
+    word_ls = line.split()
+    word_ls = [int(item) for item in word_ls]
+    rows = word_ls[0]
+    cols = word_ls[1]
+
+    line = fp.readline()
+    word_ls = line.split()
+    word_ls = [int(item) for item in word_ls]
+
+    plane[i] = np.array(word_ls).reshape(rows, cols)
+    if i > 0:
+      plane[i] = plane[i].repeat(2, axis=0).repeat(2, axis=1)
+  plane = np.array(plane)
+  if no_swap == 0:
+    plane = np.swapaxes(np.swapaxes(plane, 0, 1), 1, 2)
+  return plane
+
+
+def yuv_to_rgb(yuv):
+  #mat = np.array([
+  #    [1.164,   0   , 1.596  ],
+  #    [1.164, -0.391, -0.813],
+  #    [1.164, 2.018 , 0     ] ]
+  #               )
+  #c = np.array([[ -16 , -16 , -16  ],
+  #              [ 0   , -128, -128 ],
+  #              [ -128, -128,   0  ]])
+
+  mat = np.array([[1, 0, 1.4075], [1, -0.3445, -0.7169], [1, 1.7790, 0]])
+  c = np.array([[0, 0, 0], [0, -128, -128], [-128, -128, 0]])
+  mat_c = np.dot(mat, c)
+  v = np.array([mat_c[0, 0], mat_c[1, 1], mat_c[2, 2]])
+  mat = mat.transpose()
+  rgb = np.dot(yuv, mat) + v
+  rgb = rgb.astype(int)
+  rgb = rgb.clip(0, 255)
+  return rgb / 255.
+
+
+def read_feature_score(fp, mv_rows, mv_cols):
+  line = fp.readline()
+  word_ls = line.split()
+  feature_score = np.array([math.log(float(v) + 1, 2) for v in word_ls])
+  feature_score = feature_score.reshape(mv_rows, mv_cols)
+  return feature_score
+
+def read_mv_mode_arr(fp, mv_rows, mv_cols):
+  line = fp.readline()
+  word_ls = line.split()
+  mv_mode_arr = np.array([int(v) for v in word_ls])
+  mv_mode_arr = mv_mode_arr.reshape(mv_rows, mv_cols)
+  return mv_mode_arr
+
+
+def read_frame_dpl_stats(fp):
+  line = fp.readline()
+  word_ls = line.split()
+  frame_idx = int(word_ls[1])
+  mi_rows = int(word_ls[3])
+  mi_cols = int(word_ls[5])
+  bs = int(word_ls[7])
+  ref_frame_idx = int(word_ls[9])
+  rf_idx = int(word_ls[11])
+  gf_frame_offset = int(word_ls[13])
+  ref_gf_frame_offset = int(word_ls[15])
+  mi_size = bs / 8
+  mv_ls = []
+  mv_rows = int((math.ceil(mi_rows * 1. / mi_size)))
+  mv_cols = int((math.ceil(mi_cols * 1. / mi_size)))
+  for i in range(mv_rows * mv_cols):
+    line = fp.readline()
+    word_ls = line.split()
+    row = int(word_ls[0]) * 8.
+    col = int(word_ls[1]) * 8.
+    mv_row = int(word_ls[2]) / 8.
+    mv_col = int(word_ls[3]) / 8.
+    mv_ls.append([col, row, mv_col, mv_row])
+  mv_ls = np.array(mv_ls)
+  feature_score = read_feature_score(fp, mv_rows, mv_cols)
+  mv_mode_arr = read_mv_mode_arr(fp, mv_rows, mv_cols)
+  img = yuv_to_rgb(read_frame(fp))
+  ref = yuv_to_rgb(read_frame(fp))
+  return rf_idx, frame_idx, ref_frame_idx, gf_frame_offset, ref_gf_frame_offset, mv_ls, img, ref, bs, feature_score, mv_mode_arr
+
+
+def read_dpl_stats_file(filename, frame_num=0):
+  fp = open(filename)
+  line = fp.readline()
+  data_ls = []
+  while line:
+    if line[0] == '=':
+      data_ls.append(read_frame_dpl_stats(fp))
+    line = fp.readline()
+    if frame_num > 0 and len(data_ls) == frame_num:
+      break
+  return data_ls
+
+
+if __name__ == '__main__':
+  filename = sys.argv[1]
+  data_ls = read_dpl_stats_file(filename, frame_num=5)
+  for rf_idx, frame_idx, ref_frame_idx, gf_frame_offset, ref_gf_frame_offset, mv_ls, img, ref, bs, feature_score, mv_mode_arr in data_ls:
+    fig, axes = plt.subplots(2, 2)
+
+    axes[0][0].imshow(img)
+    draw_mv_ls(axes[0][0], mv_ls)
+    draw_pred_block_ls(axes[0][0], mv_ls, bs, mode=0)
+    #axes[0].grid(color='k', linestyle='-')
+    axes[0][0].set_ylim(img.shape[0], 0)
+    axes[0][0].set_xlim(0, img.shape[1])
+
+    if ref is not None:
+      axes[0][1].imshow(ref)
+      draw_mv_ls(axes[0][1], mv_ls, mode=1)
+      draw_pred_block_ls(axes[0][1], mv_ls, bs, mode=1)
+      #axes[1].grid(color='k', linestyle='-')
+      axes[0][1].set_ylim(ref.shape[0], 0)
+      axes[0][1].set_xlim(0, ref.shape[1])
+
+    axes[1][0].imshow(feature_score)
+    #feature_score_arr = feature_score.flatten()
+    #feature_score_max = feature_score_arr.max()
+    #feature_score_min = feature_score_arr.min()
+    #step = (feature_score_max - feature_score_min) / 20.
+    #feature_score_bins = np.arange(feature_score_min, feature_score_max, step)
+    #axes[1][1].hist(feature_score_arr, bins=feature_score_bins)
+    im = axes[1][1].imshow(mv_mode_arr)
+    #axes[1][1].figure.colorbar(im, ax=axes[1][1])
+
+    print rf_idx, frame_idx, ref_frame_idx, gf_frame_offset, ref_gf_frame_offset, len(mv_ls)
+
+    flatten_mv_mode = mv_mode_arr.flatten()
+    zero_mv_count = sum(flatten_mv_mode == 0)
+    new_mv_count = sum(flatten_mv_mode == 1)
+    ref_mv_count = sum(flatten_mv_mode == 2) + sum(flatten_mv_mode == 3)
+    print zero_mv_count, new_mv_count, ref_mv_count
+    plt.show()
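`yuv_to_rgb` above applies a full-range BT.601-style matrix after re-centering U and V around 128 (the commented-out block is the limited-range variant). The per-pixel arithmetic, written out in a small Python sketch that returns 0-255 ints rather than the script's 0-1 floats:

```python
def yuv_pixel_to_rgb(y, u, v):
    """Full-range YUV -> RGB for one pixel, using the same coefficients
    as the active matrix in yuv_to_rgb; U and V are centered on 128."""
    r = y + 1.4075 * (v - 128)
    g = y - 0.3445 * (u - 128) - 0.7169 * (v - 128)
    b = y + 1.7790 * (u - 128)
    clip = lambda x: min(255, max(0, int(x)))
    return clip(r), clip(g), clip(b)
```

The constant vector `v` in the script folds the `-128` offsets into the matrix product, so the vectorized NumPy version and this scalar form compute the same values.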
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/set_analyzer_env.sh b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/set_analyzer_env.sh
new file mode 100644
index 0000000..8ee0c4f
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/set_analyzer_env.sh
@@ -0,0 +1,131 @@
+##  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+##
+##  Use of this source code is governed by a BSD-style license
+##  that can be found in the LICENSE file in the root of the source
+##  tree. An additional intellectual property rights grant can be found
+##  in the file PATENTS.  All contributing project authors may
+##  be found in the AUTHORS file in the root of the source tree.
+##
+##  Sourcing this file sets environment variables to simplify setting up
+##  sanitizer builds and testing.
+
+sanitizer="${1}"
+
+case "${sanitizer}" in
+  address) ;;
+  cfi) ;;
+  integer) ;;
+  memory) ;;
+  thread) ;;
+  undefined) ;;
+  clear)
+    echo "Clearing environment:"
+    set -x
+    unset CC CXX LD AR
+    unset CFLAGS CXXFLAGS LDFLAGS
+    unset ASAN_OPTIONS MSAN_OPTIONS TSAN_OPTIONS UBSAN_OPTIONS
+    set +x
+    return
+    ;;
+  *)
+    echo "Usage: source set_analyzer_env.sh [<sanitizer>|clear]"
+    echo "  Supported sanitizers:"
+    echo "    address cfi integer memory thread undefined"
+    return 1
+    ;;
+esac
+
+if [ ! $(which clang) ]; then
+  # TODO(johannkoenig): Support gcc analyzers.
+  echo "ERROR: 'clang' must be in your PATH"
+  return 1
+fi
+
+# Warnings.
+if [ "${sanitizer}" = "undefined" -o "${sanitizer}" = "integer" ]; then
+  echo "WARNING: When building the ${sanitizer} sanitizer for 32 bit targets"
+  echo "you must run:"
+  echo "export LDFLAGS=\"\${LDFLAGS} --rtlib=compiler-rt -lgcc_s\""
+  echo "See http://llvm.org/bugs/show_bug.cgi?id=17693 for details."
+fi
+
+if [ "${sanitizer}" = "undefined" ]; then
+  major_version=$(clang --version | head -n 1 \
+    | grep -o -E "[[:digit:]]\.[[:digit:]]\.[[:digit:]]" | cut -f1 -d.)
+  if [ ${major_version} -eq 5 ]; then
+    echo "WARNING: clang v5 has a problem with vp9 x86_64 high bit depth"
+    echo "configurations. It can take ~40 minutes to compile"
+    echo "vpx_dsp/x86/fwd_txfm_sse2.c"
+    echo "clang v4 did not have this issue."
+  fi
+fi
+
+echo "It is recommended to configure with '--enable-debug' to improve stack"
+echo "traces. On mac builds, run 'dsymutil' on the output binaries (vpxenc,"
+echo "test_libvpx, etc) to link the stack traces to source code lines."
+
+# Build configuration.
+cflags="-fsanitize=${sanitizer}"
+ldflags="-fsanitize=${sanitizer}"
+
+# http://code.google.com/p/webm/issues/detail?id=570
+cflags="${cflags} -fno-strict-aliasing"
+# Useful backtraces.
+cflags="${cflags} -fno-omit-frame-pointer"
+# Exact backtraces.
+cflags="${cflags} -fno-optimize-sibling-calls"
+
+if [ "${sanitizer}" = "cfi" ]; then
+  # https://clang.llvm.org/docs/ControlFlowIntegrity.html
+  cflags="${cflags} -fno-sanitize-trap=cfi -flto -fvisibility=hidden"
+  ldflags="${ldflags} -fno-sanitize-trap=cfi -flto -fuse-ld=gold"
+  export AR="llvm-ar"
+fi
+
+set -x
+export CC="clang"
+export CXX="clang++"
+export LD="clang++"
+
+export CFLAGS="${cflags}"
+export CXXFLAGS="${cflags}"
+export LDFLAGS="${ldflags}"
+set +x
+
+# Execution configuration.
+sanitizer_options=""
+sanitizer_options="${sanitizer_options}:handle_segv=1"
+sanitizer_options="${sanitizer_options}:handle_abort=1"
+sanitizer_options="${sanitizer_options}:handle_sigfpe=1"
+sanitizer_options="${sanitizer_options}:fast_unwind_on_fatal=1"
+sanitizer_options="${sanitizer_options}:allocator_may_return_null=1"
+
+case "${sanitizer}" in
+  address)
+    sanitizer_options="${sanitizer_options}:detect_stack_use_after_return=1"
+    sanitizer_options="${sanitizer_options}:max_uar_stack_size_log=17"
+    set -x
+    export ASAN_OPTIONS="${sanitizer_options}"
+    set +x
+    ;;
+  cfi)
+    # No environment settings
+    ;;
+  memory)
+    set -x
+    export MSAN_OPTIONS="${sanitizer_options}"
+    set +x
+    ;;
+  thread)
+    # The thread sanitizer uses an entirely independent set of options.
+    set -x
+    export TSAN_OPTIONS="halt_on_error=1"
+    set +x
+    ;;
+  undefined|integer)
+    sanitizer_options="${sanitizer_options}:print_stacktrace=1"
+    set -x
+    export UBSAN_OPTIONS="${sanitizer_options}"
+    set +x
+    ;;
+esac
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/tiny_ssim.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/tiny_ssim.c
index 5e8ca02..ff4634a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/tiny_ssim.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/tiny_ssim.c
@@ -34,6 +34,10 @@
   unsigned int row, col;
   uint64_t total_sse = 0;
   int diff;
+  if (orig == NULL || recon == NULL) {
+    assert(0);
+    return 0;
+  }
 
   for (row = 0; row < rows; row++) {
     for (col = 0; col < cols; col++) {
@@ -46,13 +50,18 @@
   }
   return total_sse;
 }
-#endif
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
 static uint64_t calc_plane_error(uint8_t *orig, int orig_stride, uint8_t *recon,
                                  int recon_stride, unsigned int cols,
                                  unsigned int rows) {
   unsigned int row, col;
   uint64_t total_sse = 0;
   int diff;
+  if (orig == NULL || recon == NULL) {
+    assert(0);
+    return 0;
+  }
 
   for (row = 0; row < rows; row++) {
     for (col = 0; col < cols; col++) {
@@ -91,40 +100,43 @@
   int w;
   int h;
   int bit_depth;
+  int frame_size;
 } input_file_t;
 
 // Open a file and determine if its y4m or raw.  If y4m get the header.
 static int open_input_file(const char *file_name, input_file_t *input, int w,
                            int h, int bit_depth) {
   char y4m_buf[4];
-  size_t r1;
+  input->w = w;
+  input->h = h;
+  input->bit_depth = bit_depth;
   input->type = RAW_YUV;
   input->buf = NULL;
   input->file = strcmp(file_name, "-") ? fopen(file_name, "rb") : stdin;
   if (input->file == NULL) return -1;
-  r1 = fread(y4m_buf, 1, 4, input->file);
-  if (r1 == 4) {
-    if (memcmp(y4m_buf, "YUV4", 4) == 0) input->type = Y4M;
-    switch (input->type) {
-      case Y4M:
-        y4m_input_open(&input->y4m, input->file, y4m_buf, 4, 0);
-        input->w = input->y4m.pic_w;
-        input->h = input->y4m.pic_h;
-        input->bit_depth = input->y4m.bit_depth;
-        // Y4M alloc's its own buf. Init this to avoid problems if we never
-        // read frames.
-        memset(&input->img, 0, sizeof(input->img));
-        break;
-      case RAW_YUV:
-        fseek(input->file, 0, SEEK_SET);
-        input->w = w;
-        input->h = h;
-        if (bit_depth < 9)
-          input->buf = malloc(w * h * 3 / 2);
-        else
-          input->buf = malloc(w * h * 3);
-        break;
-    }
+  if (fread(y4m_buf, 1, 4, input->file) != 4) return -1;
+  if (memcmp(y4m_buf, "YUV4", 4) == 0) input->type = Y4M;
+  switch (input->type) {
+    case Y4M:
+      y4m_input_open(&input->y4m, input->file, y4m_buf, 4, 0);
+      input->w = input->y4m.pic_w;
+      input->h = input->y4m.pic_h;
+      input->bit_depth = input->y4m.bit_depth;
+      // Y4M alloc's its own buf. Init this to avoid problems if we never
+      // read frames.
+      memset(&input->img, 0, sizeof(input->img));
+      break;
+    case RAW_YUV:
+      fseek(input->file, 0, SEEK_SET);
+      input->w = w;
+      input->h = h;
+      // handle odd frame sizes
+      input->frame_size = w * h + ((w + 1) / 2) * ((h + 1) / 2) * 2;
+      if (bit_depth > 8) {
+        input->frame_size *= 2;
+      }
+      input->buf = malloc(input->frame_size);
+      break;
   }
   return 0;
 }
@@ -150,15 +162,15 @@
       break;
     case RAW_YUV:
       if (bd < 9) {
-        r1 = fread(in->buf, in->w * in->h * 3 / 2, 1, in->file);
+        r1 = fread(in->buf, in->frame_size, 1, in->file);
         *y = in->buf;
         *u = in->buf + in->w * in->h;
-        *v = in->buf + 5 * in->w * in->h / 4;
+        *v = *u + ((1 + in->w) / 2) * ((1 + in->h) / 2);
       } else {
-        r1 = fread(in->buf, in->w * in->h * 3, 1, in->file);
+        r1 = fread(in->buf, in->frame_size, 1, in->file);
         *y = in->buf;
-        *u = in->buf + in->w * in->h / 2;
-        *v = *u + in->w * in->h / 2;
+        *u = in->buf + (in->w * in->h) * 2;
+        *v = *u + 2 * ((1 + in->w) / 2) * ((1 + in->h) / 2);
       }
       break;
   }
@@ -166,24 +178,15 @@
   return r1;
 }
 
-void ssim_parms_16x16(const uint8_t *s, int sp, const uint8_t *r, int rp,
-                      uint32_t *sum_s, uint32_t *sum_r, uint32_t *sum_sq_s,
-                      uint32_t *sum_sq_r, uint32_t *sum_sxr) {
+static void ssim_parms_8x8(const uint8_t *s, int sp, const uint8_t *r, int rp,
+                           uint32_t *sum_s, uint32_t *sum_r, uint32_t *sum_sq_s,
+                           uint32_t *sum_sq_r, uint32_t *sum_sxr) {
   int i, j;
-  for (i = 0; i < 16; i++, s += sp, r += rp) {
-    for (j = 0; j < 16; j++) {
-      *sum_s += s[j];
-      *sum_r += r[j];
-      *sum_sq_s += s[j] * s[j];
-      *sum_sq_r += r[j] * r[j];
-      *sum_sxr += s[j] * r[j];
-    }
+  if (s == NULL || r == NULL || sum_s == NULL || sum_r == NULL ||
+      sum_sq_s == NULL || sum_sq_r == NULL || sum_sxr == NULL) {
+    assert(0);
+    return;
   }
-}
-void ssim_parms_8x8(const uint8_t *s, int sp, const uint8_t *r, int rp,
-                    uint32_t *sum_s, uint32_t *sum_r, uint32_t *sum_sq_s,
-                    uint32_t *sum_sq_r, uint32_t *sum_sxr) {
-  int i, j;
   for (i = 0; i < 8; i++, s += sp, r += rp) {
     for (j = 0; j < 8; j++) {
       *sum_s += s[j];
@@ -195,10 +198,17 @@
   }
 }
 
-void highbd_ssim_parms_8x8(const uint16_t *s, int sp, const uint16_t *r, int rp,
-                           uint32_t *sum_s, uint32_t *sum_r, uint32_t *sum_sq_s,
-                           uint32_t *sum_sq_r, uint32_t *sum_sxr) {
+#if CONFIG_VP9_HIGHBITDEPTH
+static void highbd_ssim_parms_8x8(const uint16_t *s, int sp, const uint16_t *r,
+                                  int rp, uint32_t *sum_s, uint32_t *sum_r,
+                                  uint32_t *sum_sq_s, uint32_t *sum_sq_r,
+                                  uint32_t *sum_sxr) {
   int i, j;
+  if (s == NULL || r == NULL || sum_s == NULL || sum_r == NULL ||
+      sum_sq_s == NULL || sum_sq_r == NULL || sum_sxr == NULL) {
+    assert(0);
+    return;
+  }
   for (i = 0; i < 8; i++, s += sp, r += rp) {
     for (j = 0; j < 8; j++) {
       *sum_s += s[j];
@@ -209,11 +219,12 @@
     }
   }
 }
+#endif  // CONFIG_VP9_HIGHBITDEPTH
 
 static double similarity(uint32_t sum_s, uint32_t sum_r, uint32_t sum_sq_s,
                          uint32_t sum_sq_r, uint32_t sum_sxr, int count,
                          uint32_t bd) {
-  int64_t ssim_n, ssim_d;
+  double ssim_n, ssim_d;
   int64_t c1 = 0, c2 = 0;
   if (bd == 8) {
     // scale the constants by number of pixels
@@ -229,14 +240,14 @@
     assert(0);
   }
 
-  ssim_n = (2 * sum_s * sum_r + c1) *
-           ((int64_t)2 * count * sum_sxr - (int64_t)2 * sum_s * sum_r + c2);
+  ssim_n = (2.0 * sum_s * sum_r + c1) *
+           (2.0 * count * sum_sxr - 2.0 * sum_s * sum_r + c2);
 
-  ssim_d = (sum_s * sum_s + sum_r * sum_r + c1) *
-           ((int64_t)count * sum_sq_s - (int64_t)sum_s * sum_s +
-            (int64_t)count * sum_sq_r - (int64_t)sum_r * sum_r + c2);
+  ssim_d = ((double)sum_s * sum_s + (double)sum_r * sum_r + c1) *
+           ((double)count * sum_sq_s - (double)sum_s * sum_s +
+            (double)count * sum_sq_r - (double)sum_r * sum_r + c2);
 
-  return ssim_n * 1.0 / ssim_d;
+  return ssim_n / ssim_d;
 }
 
 static double ssim_8x8(const uint8_t *s, int sp, const uint8_t *r, int rp) {
@@ -245,14 +256,15 @@
   return similarity(sum_s, sum_r, sum_sq_s, sum_sq_r, sum_sxr, 64, 8);
 }
 
+#if CONFIG_VP9_HIGHBITDEPTH
 static double highbd_ssim_8x8(const uint16_t *s, int sp, const uint16_t *r,
-                              int rp, uint32_t bd, uint32_t shift) {
+                              int rp, uint32_t bd) {
   uint32_t sum_s = 0, sum_r = 0, sum_sq_s = 0, sum_sq_r = 0, sum_sxr = 0;
   highbd_ssim_parms_8x8(s, sp, r, rp, &sum_s, &sum_r, &sum_sq_s, &sum_sq_r,
                         &sum_sxr);
-  return similarity(sum_s >> shift, sum_r >> shift, sum_sq_s >> (2 * shift),
-                    sum_sq_r >> (2 * shift), sum_sxr >> (2 * shift), 64, bd);
+  return similarity(sum_s, sum_r, sum_sq_s, sum_sq_r, sum_sxr, 64, bd);
 }
+#endif  // CONFIG_VP9_HIGHBITDEPTH
 
 // We are using a 8x8 moving window with starting location of each 8x8 window
 // on the 4x4 pixel grid. Such arrangement allows the windows to overlap
@@ -276,9 +288,10 @@
   return ssim_total;
 }
 
+#if CONFIG_VP9_HIGHBITDEPTH
 static double highbd_ssim2(const uint8_t *img1, const uint8_t *img2,
                            int stride_img1, int stride_img2, int width,
-                           int height, uint32_t bd, uint32_t shift) {
+                           int height, uint32_t bd) {
   int i, j;
   int samples = 0;
   double ssim_total = 0;
@@ -287,9 +300,9 @@
   for (i = 0; i <= height - 8;
        i += 4, img1 += stride_img1 * 4, img2 += stride_img2 * 4) {
     for (j = 0; j <= width - 8; j += 4) {
-      double v = highbd_ssim_8x8(CONVERT_TO_SHORTPTR(img1 + j), stride_img1,
-                                 CONVERT_TO_SHORTPTR(img2 + j), stride_img2, bd,
-                                 shift);
+      double v =
+          highbd_ssim_8x8(CONVERT_TO_SHORTPTR(img1 + j), stride_img1,
+                          CONVERT_TO_SHORTPTR(img2 + j), stride_img2, bd);
       ssim_total += v;
       samples++;
     }
@@ -297,277 +310,7 @@
   ssim_total /= samples;
   return ssim_total;
 }
-
-// traditional ssim as per: http://en.wikipedia.org/wiki/Structural_similarity
-//
-// Re working out the math ->
-//
-// ssim(x,y) =  (2*mean(x)*mean(y) + c1)*(2*cov(x,y)+c2) /
-//   ((mean(x)^2+mean(y)^2+c1)*(var(x)+var(y)+c2))
-//
-// mean(x) = sum(x) / n
-//
-// cov(x,y) = (n*sum(xi*yi)-sum(x)*sum(y))/(n*n)
-//
-// var(x) = (n*sum(xi*xi)-sum(xi)*sum(xi))/(n*n)
-//
-// ssim(x,y) =
-//   (2*sum(x)*sum(y)/(n*n) + c1)*(2*(n*sum(xi*yi)-sum(x)*sum(y))/(n*n)+c2) /
-//   (((sum(x)*sum(x)+sum(y)*sum(y))/(n*n) +c1) *
-//    ((n*sum(xi*xi) - sum(xi)*sum(xi))/(n*n)+
-//     (n*sum(yi*yi) - sum(yi)*sum(yi))/(n*n)+c2)))
-//
-// factoring out n*n
-//
-// ssim(x,y) =
-//   (2*sum(x)*sum(y) + n*n*c1)*(2*(n*sum(xi*yi)-sum(x)*sum(y))+n*n*c2) /
-//   (((sum(x)*sum(x)+sum(y)*sum(y)) + n*n*c1) *
-//    (n*sum(xi*xi)-sum(xi)*sum(xi)+n*sum(yi*yi)-sum(yi)*sum(yi)+n*n*c2))
-//
-// Replace c1 with n*n * c1 for the final step that leads to this code:
-// The final step scales by 12 bits so we don't lose precision in the constants.
-
-static double ssimv_similarity(const Ssimv *sv, int64_t n) {
-  // Scale the constants by number of pixels.
-  const int64_t c1 = (cc1 * n * n) >> 12;
-  const int64_t c2 = (cc2 * n * n) >> 12;
-
-  const double l = 1.0 * (2 * sv->sum_s * sv->sum_r + c1) /
-                   (sv->sum_s * sv->sum_s + sv->sum_r * sv->sum_r + c1);
-
-  // Since these variables are unsigned sums, convert to double so
-  // math is done in double arithmetic.
-  const double v = (2.0 * n * sv->sum_sxr - 2 * sv->sum_s * sv->sum_r + c2) /
-                   (n * sv->sum_sq_s - sv->sum_s * sv->sum_s +
-                    n * sv->sum_sq_r - sv->sum_r * sv->sum_r + c2);
-
-  return l * v;
-}
-
-// The first term of the ssim metric is a luminance factor.
-//
-// (2*mean(x)*mean(y) + c1)/ (mean(x)^2+mean(y)^2+c1)
-//
-// This luminance factor is super sensitive to the dark side of luminance
-// values and completely insensitive on the white side.  check out 2 sets
-// (1,3) and (250,252) the term gives ( 2*1*3/(1+9) = .60
-// 2*250*252/ (250^2+252^2) => .99999997
-//
-// As a result in this tweaked version of the calculation in which the
-// luminance is taken as percentage off from peak possible.
-//
-// 255 * 255 - (sum_s - sum_r) / count * (sum_s - sum_r) / count
-//
-static double ssimv_similarity2(const Ssimv *sv, int64_t n) {
-  // Scale the constants by number of pixels.
-  const int64_t c1 = (cc1 * n * n) >> 12;
-  const int64_t c2 = (cc2 * n * n) >> 12;
-
-  const double mean_diff = (1.0 * sv->sum_s - sv->sum_r) / n;
-  const double l = (255 * 255 - mean_diff * mean_diff + c1) / (255 * 255 + c1);
-
-  // Since these variables are unsigned, sums convert to double so
-  // math is done in double arithmetic.
-  const double v = (2.0 * n * sv->sum_sxr - 2 * sv->sum_s * sv->sum_r + c2) /
-                   (n * sv->sum_sq_s - sv->sum_s * sv->sum_s +
-                    n * sv->sum_sq_r - sv->sum_r * sv->sum_r + c2);
-
-  return l * v;
-}
-static void ssimv_parms(uint8_t *img1, int img1_pitch, uint8_t *img2,
-                        int img2_pitch, Ssimv *sv) {
-  ssim_parms_8x8(img1, img1_pitch, img2, img2_pitch, &sv->sum_s, &sv->sum_r,
-                 &sv->sum_sq_s, &sv->sum_sq_r, &sv->sum_sxr);
-}
-
-double get_ssim_metrics(uint8_t *img1, int img1_pitch, uint8_t *img2,
-                        int img2_pitch, int width, int height, Ssimv *sv2,
-                        Metrics *m, int do_inconsistency) {
-  double dssim_total = 0;
-  double ssim_total = 0;
-  double ssim2_total = 0;
-  double inconsistency_total = 0;
-  int i, j;
-  int c = 0;
-  double norm;
-  double old_ssim_total = 0;
-
-  // We can sample points as frequently as we like start with 1 per 4x4.
-  for (i = 0; i < height;
-       i += 4, img1 += img1_pitch * 4, img2 += img2_pitch * 4) {
-    for (j = 0; j < width; j += 4, ++c) {
-      Ssimv sv = { 0, 0, 0, 0, 0, 0 };
-      double ssim;
-      double ssim2;
-      double dssim;
-      uint32_t var_new;
-      uint32_t var_old;
-      uint32_t mean_new;
-      uint32_t mean_old;
-      double ssim_new;
-      double ssim_old;
-
-      // Not sure there's a great way to handle the edge pixels
-      // in ssim when using a window. Seems biased against edge pixels
-      // however you handle this. This uses only samples that are
-      // fully in the frame.
-      if (j + 8 <= width && i + 8 <= height) {
-        ssimv_parms(img1 + j, img1_pitch, img2 + j, img2_pitch, &sv);
-      }
-
-      ssim = ssimv_similarity(&sv, 64);
-      ssim2 = ssimv_similarity2(&sv, 64);
-
-      sv.ssim = ssim2;
-
-      // dssim is calculated to use as an actual error metric and
-      // is scaled up to the same range as sum square error.
-      // Since we are subsampling every 16th point maybe this should be
-      // *16 ?
-      dssim = 255 * 255 * (1 - ssim2) / 2;
-
-      // Here I introduce a new error metric: consistency-weighted
-      // SSIM-inconsistency.  This metric isolates frames where the
-      // SSIM 'suddenly' changes, e.g. if one frame in every 8 is much
-      // sharper or blurrier than the others. Higher values indicate a
-      // temporally inconsistent SSIM. There are two ideas at work:
-      //
-      // 1) 'SSIM-inconsistency': the total inconsistency value
-      // reflects how much SSIM values are changing between this
-      // source / reference frame pair and the previous pair.
-      //
-      // 2) 'consistency-weighted': weights de-emphasize areas in the
-      // frame where the scene content has changed. Changes in scene
-      // content are detected via changes in local variance and local
-      // mean.
-      //
-      // Thus the overall measure reflects how inconsistent the SSIM
-      // values are, over consistent regions of the frame.
-      //
-      // The metric has three terms:
-      //
-      // term 1 -> uses change in scene Variance to weight error score
-      //  2 * var(Fi)*var(Fi-1) / (var(Fi)^2+var(Fi-1)^2)
-      //  larger changes from one frame to the next mean we care
-      //  less about consistency.
-      //
-      // term 2 -> uses change in local scene luminance to weight error
-      //  2 * avg(Fi)*avg(Fi-1) / (avg(Fi)^2+avg(Fi-1)^2)
-      //  larger changes from one frame to the next mean we care
-      //  less about consistency.
-      //
-      // term3 -> measures inconsistency in ssim scores between frames
-      //   1 - ( 2 * ssim(Fi)*ssim(Fi-1)/(ssim(Fi)^2+sssim(Fi-1)^2).
-      //
-      // This term compares the ssim score for the same location in 2
-      // subsequent frames.
-      var_new = sv.sum_sq_s - sv.sum_s * sv.sum_s / 64;
-      var_old = sv2[c].sum_sq_s - sv2[c].sum_s * sv2[c].sum_s / 64;
-      mean_new = sv.sum_s;
-      mean_old = sv2[c].sum_s;
-      ssim_new = sv.ssim;
-      ssim_old = sv2[c].ssim;
-
-      if (do_inconsistency) {
-        // We do the metric once for every 4x4 block in the image. Since
-        // we are scaling the error to SSE for use in a psnr calculation
-        // 1.0 = 4x4x255x255 the worst error we can possibly have.
-        static const double kScaling = 4. * 4 * 255 * 255;
-
-        // The constants have to be non 0 to avoid potential divide by 0
-        // issues other than that they affect kind of a weighting between
-        // the terms.  No testing of what the right terms should be has been
-        // done.
-        static const double c1 = 1, c2 = 1, c3 = 1;
-
-        // This measures how much consistent variance is in two consecutive
-        // source frames. 1.0 means they have exactly the same variance.
-        const double variance_term =
-            (2.0 * var_old * var_new + c1) /
-            (1.0 * var_old * var_old + 1.0 * var_new * var_new + c1);
-
-        // This measures how consistent the local mean are between two
-        // consecutive frames. 1.0 means they have exactly the same mean.
-        const double mean_term =
-            (2.0 * mean_old * mean_new + c2) /
-            (1.0 * mean_old * mean_old + 1.0 * mean_new * mean_new + c2);
-
-        // This measures how consistent the ssims of two
-        // consecutive frames is. 1.0 means they are exactly the same.
-        double ssim_term =
-            pow((2.0 * ssim_old * ssim_new + c3) /
-                    (ssim_old * ssim_old + ssim_new * ssim_new + c3),
-                5);
-
-        double this_inconsistency;
-
-        // Floating point math sometimes makes this > 1 by a tiny bit.
-        // We want the metric to scale between 0 and 1.0 so we can convert
-        // it to an snr scaled value.
-        if (ssim_term > 1) ssim_term = 1;
-
-        // This converts the consistency metric to an inconsistency metric
-        // ( so we can scale it like psnr to something like sum square error.
-        // The reason for the variance and mean terms is the assumption that
-        // if there are big changes in the source we shouldn't penalize
-        // inconsistency in ssim scores a bit less as it will be less visible
-        // to the user.
-        this_inconsistency = (1 - ssim_term) * variance_term * mean_term;
-
-        this_inconsistency *= kScaling;
-        inconsistency_total += this_inconsistency;
-      }
-      sv2[c] = sv;
-      ssim_total += ssim;
-      ssim2_total += ssim2;
-      dssim_total += dssim;
-
-      old_ssim_total += ssim_old;
-    }
-    old_ssim_total += 0;
-  }
-
-  norm = 1. / (width / 4) / (height / 4);
-  ssim_total *= norm;
-  ssim2_total *= norm;
-  m->ssim2 = ssim2_total;
-  m->ssim = ssim_total;
-  if (old_ssim_total == 0) inconsistency_total = 0;
-
-  m->ssimc = inconsistency_total;
-
-  m->dssim = dssim_total;
-  return inconsistency_total;
-}
-
-double highbd_calc_ssim(const YV12_BUFFER_CONFIG *source,
-                        const YV12_BUFFER_CONFIG *dest, double *weight,
-                        uint32_t bd, uint32_t in_bd) {
-  double a, b, c;
-  double ssimv;
-  uint32_t shift = 0;
-
-  assert(bd >= in_bd);
-  shift = bd - in_bd;
-
-  a = highbd_ssim2(source->y_buffer, dest->y_buffer, source->y_stride,
-                   dest->y_stride, source->y_crop_width, source->y_crop_height,
-                   in_bd, shift);
-
-  b = highbd_ssim2(source->u_buffer, dest->u_buffer, source->uv_stride,
-                   dest->uv_stride, source->uv_crop_width,
-                   source->uv_crop_height, in_bd, shift);
-
-  c = highbd_ssim2(source->v_buffer, dest->v_buffer, source->uv_stride,
-                   dest->uv_stride, source->uv_crop_width,
-                   source->uv_crop_height, in_bd, shift);
-
-  ssimv = a * .8 + .1 * (b + c);
-
-  *weight = 1;
-
-  return ssimv;
-}
+#endif  // CONFIG_VP9_HIGHBITDEPTH
 
 int main(int argc, char *argv[]) {
   FILE *framestats = NULL;
@@ -583,13 +326,14 @@
   input_file_t in[2];
   double peak = 255.0;
 
+  memset(in, 0, sizeof(in));
+
   if (argc < 2) {
     fprintf(stderr,
             "Usage: %s file1.{yuv|y4m} file2.{yuv|y4m}"
             "[WxH tl_skip={0,1,3} frame_stats_file bits]\n",
             argv[0]);
-    return_value = 1;
-    goto clean_up;
+    return 1;
   }
 
   if (argc > 3) {
@@ -601,7 +345,7 @@
   }
 
   if (open_input_file(argv[1], &in[0], w, h, bit_depth) < 0) {
-    fprintf(stderr, "File %s can't be opened or parsed!\n", argv[2]);
+    fprintf(stderr, "File %s can't be opened or parsed!\n", argv[1]);
     goto clean_up;
   }
 
@@ -613,7 +357,7 @@
   }
   if (bit_depth == 10) peak = 1023.0;
 
-  if (bit_depth == 12) peak = 4095;
+  if (bit_depth == 12) peak = 4095.0;
 
   if (open_input_file(argv[2], &in[1], w, h, bit_depth) < 0) {
     fprintf(stderr, "File %s can't be opened or parsed!\n", argv[2]);
@@ -628,9 +372,19 @@
     goto clean_up;
   }
 
-  // Number of frames to skip from file1.yuv for every frame used. Normal values
-  // 0, 1 and 3 correspond to TL2, TL1 and TL0 respectively for a 3TL encoding
-  // in mode 10. 7 would be reasonable for comparing TL0 of a 4-layer encoding.
+  if (in[0].bit_depth != in[1].bit_depth) {
+    fprintf(stderr,
+            "Failing: Image bit depths don't match or are unspecified!\n");
+    return_value = 1;
+    goto clean_up;
+  }
+
+  bit_depth = in[0].bit_depth;
+
+  // Number of frames to skip from file1.yuv for every frame used. Normal
+  // values 0, 1 and 3 correspond to TL2, TL1 and TL0 respectively for a 3TL
+  // encoding in mode 10. 7 would be reasonable for comparing TL0 of a 4-layer
+  // encoding.
   if (argc > 4) {
     sscanf(argv[4], "%d", &tl_skip);
     if (argc > 5) {
@@ -644,12 +398,6 @@
     }
   }
 
-  if (w & 1 || h & 1) {
-    fprintf(stderr, "Invalid size %dx%d\n", w, h);
-    return_value = 1;
-    goto clean_up;
-  }
-
   while (1) {
     size_t r1, r2;
     unsigned char *y[2], *u[2], *v[2];
@@ -683,7 +431,7 @@
     psnr = calc_plane_error(buf0, w, buf1, w, w, h);                           \
   } else {                                                                     \
     ssim = highbd_ssim2(CONVERT_TO_BYTEPTR(buf0), CONVERT_TO_BYTEPTR(buf1), w, \
-                        w, w, h, bit_depth, bit_depth - 8);                    \
+                        w, w, h, bit_depth);                                   \
     psnr = calc_plane_error16(CAST_TO_SHORTPTR(buf0), w,                       \
                               CAST_TO_SHORTPTR(buf1), w, w, h);                \
   }
@@ -691,7 +439,7 @@
 #define psnr_and_ssim(ssim, psnr, buf0, buf1, w, h) \
   ssim = ssim2(buf0, buf1, w, w, w, h);             \
   psnr = calc_plane_error(buf0, w, buf1, w, w, h);
-#endif
+#endif  // CONFIG_VP9_HIGHBITDEPTH
 
     if (n_frames == allocated_frames) {
       allocated_frames = allocated_frames == 0 ? 1024 : allocated_frames * 2;
@@ -703,8 +451,10 @@
       psnrv = realloc(psnrv, allocated_frames * sizeof(*psnrv));
     }
     psnr_and_ssim(ssimy[n_frames], psnry[n_frames], y[0], y[1], w, h);
-    psnr_and_ssim(ssimu[n_frames], psnru[n_frames], u[0], u[1], w / 2, h / 2);
-    psnr_and_ssim(ssimv[n_frames], psnrv[n_frames], v[0], v[1], w / 2, h / 2);
+    psnr_and_ssim(ssimu[n_frames], psnru[n_frames], u[0], u[1], (w + 1) / 2,
+                  (h + 1) / 2);
+    psnr_and_ssim(ssimv[n_frames], psnrv[n_frames], v[0], v[1], (w + 1) / 2,
+                  (h + 1) / 2);
 
     n_frames++;
   }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/wrap-commit-msg.py b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools/wrap-commit-msg.py
old mode 100644
new mode 100755
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools_common.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools_common.c
index 6f14c25..59978b7 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools_common.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools_common.c
@@ -46,6 +46,14 @@
     va_end(ap);                        \
   } while (0)
 
+#if CONFIG_ENCODERS
+/* Swallow warnings about unused results of fread/fwrite */
+static size_t wrap_fread(void *ptr, size_t size, size_t nmemb, FILE *stream) {
+  return fread(ptr, size, nmemb, stream);
+}
+#define fread wrap_fread
+#endif
+
 FILE *set_binary_mode(FILE *stream) {
   (void)stream;
 #if defined(_WIN32) || defined(__OS2__)
@@ -200,8 +208,6 @@
 
 #endif  // CONFIG_DECODERS
 
-// TODO(dkovalev): move this function to vpx_image.{c, h}, so it will be part
-// of vpx_image_t support
 int vpx_img_plane_width(const vpx_image_t *img, int plane) {
   if (plane > 0 && img->x_chroma_shift > 0)
     return (img->d_w + 1) >> img->x_chroma_shift;
@@ -266,6 +272,88 @@
   }
 }
 
+#if CONFIG_ENCODERS
+int read_frame(struct VpxInputContext *input_ctx, vpx_image_t *img) {
+  FILE *f = input_ctx->file;
+  y4m_input *y4m = &input_ctx->y4m;
+  int shortread = 0;
+
+  if (input_ctx->file_type == FILE_TYPE_Y4M) {
+    if (y4m_input_fetch_frame(y4m, f, img) < 1) return 0;
+  } else {
+    shortread = read_yuv_frame(input_ctx, img);
+  }
+
+  return !shortread;
+}
+
+int file_is_y4m(const char detect[4]) {
+  if (memcmp(detect, "YUV4", 4) == 0) {
+    return 1;
+  }
+  return 0;
+}
+
+int fourcc_is_ivf(const char detect[4]) {
+  if (memcmp(detect, "DKIF", 4) == 0) {
+    return 1;
+  }
+  return 0;
+}
+
+void open_input_file(struct VpxInputContext *input) {
+  /* Parse certain options from the input file, if possible */
+  input->file = strcmp(input->filename, "-") ? fopen(input->filename, "rb")
+                                             : set_binary_mode(stdin);
+
+  if (!input->file) fatal("Failed to open input file");
+
+  if (!fseeko(input->file, 0, SEEK_END)) {
+    /* Input file is seekable. Figure out how long it is, so we can get
+     * progress info.
+     */
+    input->length = ftello(input->file);
+    rewind(input->file);
+  }
+
+  /* Default to 1:1 pixel aspect ratio. */
+  input->pixel_aspect_ratio.numerator = 1;
+  input->pixel_aspect_ratio.denominator = 1;
+
+  /* For RAW input sources, these bytes will applied on the first frame
+   *  in read_frame().
+   */
+  input->detect.buf_read = fread(input->detect.buf, 1, 4, input->file);
+  input->detect.position = 0;
+
+  if (input->detect.buf_read == 4 && file_is_y4m(input->detect.buf)) {
+    if (y4m_input_open(&input->y4m, input->file, input->detect.buf, 4,
+                       input->only_i420) >= 0) {
+      input->file_type = FILE_TYPE_Y4M;
+      input->width = input->y4m.pic_w;
+      input->height = input->y4m.pic_h;
+      input->pixel_aspect_ratio.numerator = input->y4m.par_n;
+      input->pixel_aspect_ratio.denominator = input->y4m.par_d;
+      input->framerate.numerator = input->y4m.fps_n;
+      input->framerate.denominator = input->y4m.fps_d;
+      input->fmt = input->y4m.vpx_fmt;
+      input->bit_depth = input->y4m.bit_depth;
+    } else {
+      fatal("Unsupported Y4M stream.");
+    }
+  } else if (input->detect.buf_read == 4 && fourcc_is_ivf(input->detect.buf)) {
+    fatal("IVF is not supported as input.");
+  } else {
+    input->file_type = FILE_TYPE_RAW;
+  }
+}
+
+void close_input_file(struct VpxInputContext *input) {
+  fclose(input->file);
+  if (input->file_type == FILE_TYPE_Y4M) y4m_input_close(&input->y4m);
+}
+#endif
+
 // TODO(debargha): Consolidate the functions below into a separate file.
 #if CONFIG_VP9_HIGHBITDEPTH
 static void highbd_img_upshift(vpx_image_t *dst, vpx_image_t *src,
@@ -459,3 +547,225 @@
   }
 }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
+
+int compare_img(const vpx_image_t *const img1, const vpx_image_t *const img2) {
+  uint32_t l_w = img1->d_w;
+  uint32_t c_w = (img1->d_w + img1->x_chroma_shift) >> img1->x_chroma_shift;
+  const uint32_t c_h =
+      (img1->d_h + img1->y_chroma_shift) >> img1->y_chroma_shift;
+  uint32_t i;
+  int match = 1;
+
+  match &= (img1->fmt == img2->fmt);
+  match &= (img1->d_w == img2->d_w);
+  match &= (img1->d_h == img2->d_h);
+#if CONFIG_VP9_HIGHBITDEPTH
+  if (img1->fmt & VPX_IMG_FMT_HIGHBITDEPTH) {
+    l_w *= 2;
+    c_w *= 2;
+  }
+#endif
+
+  for (i = 0; i < img1->d_h; ++i)
+    match &= (memcmp(img1->planes[VPX_PLANE_Y] + i * img1->stride[VPX_PLANE_Y],
+                     img2->planes[VPX_PLANE_Y] + i * img2->stride[VPX_PLANE_Y],
+                     l_w) == 0);
+
+  for (i = 0; i < c_h; ++i)
+    match &= (memcmp(img1->planes[VPX_PLANE_U] + i * img1->stride[VPX_PLANE_U],
+                     img2->planes[VPX_PLANE_U] + i * img2->stride[VPX_PLANE_U],
+                     c_w) == 0);
+
+  for (i = 0; i < c_h; ++i)
+    match &= (memcmp(img1->planes[VPX_PLANE_V] + i * img1->stride[VPX_PLANE_V],
+                     img2->planes[VPX_PLANE_V] + i * img2->stride[VPX_PLANE_V],
+                     c_w) == 0);
+
+  return match;
+}
+
+#define mmin(a, b) ((a) < (b) ? (a) : (b))
+
+#if CONFIG_VP9_HIGHBITDEPTH
+void find_mismatch_high(const vpx_image_t *const img1,
+                        const vpx_image_t *const img2, int yloc[4], int uloc[4],
+                        int vloc[4]) {
+  uint16_t *plane1, *plane2;
+  uint32_t stride1, stride2;
+  const uint32_t bsize = 64;
+  const uint32_t bsizey = bsize >> img1->y_chroma_shift;
+  const uint32_t bsizex = bsize >> img1->x_chroma_shift;
+  const uint32_t c_w =
+      (img1->d_w + img1->x_chroma_shift) >> img1->x_chroma_shift;
+  const uint32_t c_h =
+      (img1->d_h + img1->y_chroma_shift) >> img1->y_chroma_shift;
+  int match = 1;
+  uint32_t i, j;
+  yloc[0] = yloc[1] = yloc[2] = yloc[3] = -1;
+  plane1 = (uint16_t *)img1->planes[VPX_PLANE_Y];
+  plane2 = (uint16_t *)img2->planes[VPX_PLANE_Y];
+  stride1 = img1->stride[VPX_PLANE_Y] / 2;
+  stride2 = img2->stride[VPX_PLANE_Y] / 2;
+  for (i = 0, match = 1; match && i < img1->d_h; i += bsize) {
+    for (j = 0; match && j < img1->d_w; j += bsize) {
+      int k, l;
+      const int si = mmin(i + bsize, img1->d_h) - i;
+      const int sj = mmin(j + bsize, img1->d_w) - j;
+      for (k = 0; match && k < si; ++k) {
+        for (l = 0; match && l < sj; ++l) {
+          if (*(plane1 + (i + k) * stride1 + j + l) !=
+              *(plane2 + (i + k) * stride2 + j + l)) {
+            yloc[0] = i + k;
+            yloc[1] = j + l;
+            yloc[2] = *(plane1 + (i + k) * stride1 + j + l);
+            yloc[3] = *(plane2 + (i + k) * stride2 + j + l);
+            match = 0;
+            break;
+          }
+        }
+      }
+    }
+  }
+
+  uloc[0] = uloc[1] = uloc[2] = uloc[3] = -1;
+  plane1 = (uint16_t *)img1->planes[VPX_PLANE_U];
+  plane2 = (uint16_t *)img2->planes[VPX_PLANE_U];
+  stride1 = img1->stride[VPX_PLANE_U] / 2;
+  stride2 = img2->stride[VPX_PLANE_U] / 2;
+  for (i = 0, match = 1; match && i < c_h; i += bsizey) {
+    for (j = 0; match && j < c_w; j += bsizex) {
+      int k, l;
+      const int si = mmin(i + bsizey, c_h - i);
+      const int sj = mmin(j + bsizex, c_w - j);
+      for (k = 0; match && k < si; ++k) {
+        for (l = 0; match && l < sj; ++l) {
+          if (*(plane1 + (i + k) * stride1 + j + l) !=
+              *(plane2 + (i + k) * stride2 + j + l)) {
+            uloc[0] = i + k;
+            uloc[1] = j + l;
+            uloc[2] = *(plane1 + (i + k) * stride1 + j + l);
+            uloc[3] = *(plane2 + (i + k) * stride2 + j + l);
+            match = 0;
+            break;
+          }
+        }
+      }
+    }
+  }
+
+  vloc[0] = vloc[1] = vloc[2] = vloc[3] = -1;
+  plane1 = (uint16_t *)img1->planes[VPX_PLANE_V];
+  plane2 = (uint16_t *)img2->planes[VPX_PLANE_V];
+  stride1 = img1->stride[VPX_PLANE_V] / 2;
+  stride2 = img2->stride[VPX_PLANE_V] / 2;
+  for (i = 0, match = 1; match && i < c_h; i += bsizey) {
+    for (j = 0; match && j < c_w; j += bsizex) {
+      int k, l;
+      const int si = mmin(i + bsizey, c_h - i);
+      const int sj = mmin(j + bsizex, c_w - j);
+      for (k = 0; match && k < si; ++k) {
+        for (l = 0; match && l < sj; ++l) {
+          if (*(plane1 + (i + k) * stride1 + j + l) !=
+              *(plane2 + (i + k) * stride2 + j + l)) {
+            vloc[0] = i + k;
+            vloc[1] = j + l;
+            vloc[2] = *(plane1 + (i + k) * stride1 + j + l);
+            vloc[3] = *(plane2 + (i + k) * stride2 + j + l);
+            match = 0;
+            break;
+          }
+        }
+      }
+    }
+  }
+}
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
+void find_mismatch(const vpx_image_t *const img1, const vpx_image_t *const img2,
+                   int yloc[4], int uloc[4], int vloc[4]) {
+  const uint32_t bsize = 64;
+  const uint32_t bsizey = bsize >> img1->y_chroma_shift;
+  const uint32_t bsizex = bsize >> img1->x_chroma_shift;
+  const uint32_t c_w =
+      (img1->d_w + img1->x_chroma_shift) >> img1->x_chroma_shift;
+  const uint32_t c_h =
+      (img1->d_h + img1->y_chroma_shift) >> img1->y_chroma_shift;
+  int match = 1;
+  uint32_t i, j;
+  yloc[0] = yloc[1] = yloc[2] = yloc[3] = -1;
+  for (i = 0, match = 1; match && i < img1->d_h; i += bsize) {
+    for (j = 0; match && j < img1->d_w; j += bsize) {
+      int k, l;
+      const int si = mmin(i + bsize, img1->d_h) - i;
+      const int sj = mmin(j + bsize, img1->d_w) - j;
+      for (k = 0; match && k < si; ++k) {
+        for (l = 0; match && l < sj; ++l) {
+          if (*(img1->planes[VPX_PLANE_Y] +
+                (i + k) * img1->stride[VPX_PLANE_Y] + j + l) !=
+              *(img2->planes[VPX_PLANE_Y] +
+                (i + k) * img2->stride[VPX_PLANE_Y] + j + l)) {
+            yloc[0] = i + k;
+            yloc[1] = j + l;
+            yloc[2] = *(img1->planes[VPX_PLANE_Y] +
+                        (i + k) * img1->stride[VPX_PLANE_Y] + j + l);
+            yloc[3] = *(img2->planes[VPX_PLANE_Y] +
+                        (i + k) * img2->stride[VPX_PLANE_Y] + j + l);
+            match = 0;
+            break;
+          }
+        }
+      }
+    }
+  }
+
+  uloc[0] = uloc[1] = uloc[2] = uloc[3] = -1;
+  for (i = 0, match = 1; match && i < c_h; i += bsizey) {
+    for (j = 0; match && j < c_w; j += bsizex) {
+      int k, l;
+      const int si = mmin(i + bsizey, c_h - i);
+      const int sj = mmin(j + bsizex, c_w - j);
+      for (k = 0; match && k < si; ++k) {
+        for (l = 0; match && l < sj; ++l) {
+          if (*(img1->planes[VPX_PLANE_U] +
+                (i + k) * img1->stride[VPX_PLANE_U] + j + l) !=
+              *(img2->planes[VPX_PLANE_U] +
+                (i + k) * img2->stride[VPX_PLANE_U] + j + l)) {
+            uloc[0] = i + k;
+            uloc[1] = j + l;
+            uloc[2] = *(img1->planes[VPX_PLANE_U] +
+                        (i + k) * img1->stride[VPX_PLANE_U] + j + l);
+            uloc[3] = *(img2->planes[VPX_PLANE_U] +
+                        (i + k) * img2->stride[VPX_PLANE_U] + j + l);
+            match = 0;
+            break;
+          }
+        }
+      }
+    }
+  }
+  vloc[0] = vloc[1] = vloc[2] = vloc[3] = -1;
+  for (i = 0, match = 1; match && i < c_h; i += bsizey) {
+    for (j = 0; match && j < c_w; j += bsizex) {
+      int k, l;
+      const int si = mmin(i + bsizey, c_h - i);
+      const int sj = mmin(j + bsizex, c_w - j);
+      for (k = 0; match && k < si; ++k) {
+        for (l = 0; match && l < sj; ++l) {
+          if (*(img1->planes[VPX_PLANE_V] +
+                (i + k) * img1->stride[VPX_PLANE_V] + j + l) !=
+              *(img2->planes[VPX_PLANE_V] +
+                (i + k) * img2->stride[VPX_PLANE_V] + j + l)) {
+            vloc[0] = i + k;
+            vloc[1] = j + l;
+            vloc[2] = *(img1->planes[VPX_PLANE_V] +
+                        (i + k) * img1->stride[VPX_PLANE_V] + j + l);
+            vloc[3] = *(img2->planes[VPX_PLANE_V] +
+                        (i + k) * img2->stride[VPX_PLANE_V] + j + l);
+            match = 0;
+            break;
+          }
+        }
+      }
+    }
+  }
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools_common.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools_common.h
index e41de31..4526d9f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools_common.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/tools_common.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef TOOLS_COMMON_H_
-#define TOOLS_COMMON_H_
+#ifndef VPX_TOOLS_COMMON_H_
+#define VPX_TOOLS_COMMON_H_
 
 #include <stdio.h>
 
@@ -33,6 +33,7 @@
 #define ftello ftello64
 typedef off64_t FileOffset;
 #elif CONFIG_OS_SUPPORT
+#include <sys/types.h> /* NOLINT */
 typedef off_t FileOffset;
 /* Use 32-bit file operations in WebM file format when building ARM
  * executables (.axf) with RVCT. */
@@ -144,8 +145,6 @@
 const VpxInterface *get_vpx_decoder_by_name(const char *name);
 const VpxInterface *get_vpx_decoder_by_fourcc(uint32_t fourcc);
 
-// TODO(dkovalev): move this function to vpx_image.{c, h}, so it will be part
-// of vpx_image_t support
 int vpx_img_plane_width(const vpx_image_t *img, int plane);
 int vpx_img_plane_height(const vpx_image_t *img, int plane);
 void vpx_img_write(const vpx_image_t *img, FILE *file);
@@ -153,14 +152,31 @@
 
 double sse_to_psnr(double samples, double peak, double mse);
 
+#if CONFIG_ENCODERS
+int read_frame(struct VpxInputContext *input_ctx, vpx_image_t *img);
+int file_is_y4m(const char detect[4]);
+int fourcc_is_ivf(const char detect[4]);
+void open_input_file(struct VpxInputContext *input);
+void close_input_file(struct VpxInputContext *input);
+#endif
+
 #if CONFIG_VP9_HIGHBITDEPTH
 void vpx_img_upshift(vpx_image_t *dst, vpx_image_t *src, int input_shift);
 void vpx_img_downshift(vpx_image_t *dst, vpx_image_t *src, int down_shift);
 void vpx_img_truncate_16_to_8(vpx_image_t *dst, vpx_image_t *src);
 #endif
 
+int compare_img(const vpx_image_t *const img1, const vpx_image_t *const img2);
+#if CONFIG_VP9_HIGHBITDEPTH
+void find_mismatch_high(const vpx_image_t *const img1,
+                        const vpx_image_t *const img2, int yloc[4], int uloc[4],
+                        int vloc[4]);
+#endif
+void find_mismatch(const vpx_image_t *const img1, const vpx_image_t *const img2,
+                   int yloc[4], int uloc[4], int vloc[4]);
+
 #ifdef __cplusplus
 } /* extern "C" */
 #endif
 
-#endif  // TOOLS_COMMON_H_
+#endif  // VPX_TOOLS_COMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/usage_cx.dox b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/usage_cx.dox
index 92b0d34..b2220cf 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/usage_cx.dox
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/usage_cx.dox
@@ -8,6 +8,8 @@
     \ref usage_deadline.
 
 
+    \if samples
     \ref samples
+    \endif
 
 */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/usage_dx.dox b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/usage_dx.dox
index 883ce24..85063f7 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/usage_dx.dox
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/usage_dx.dox
@@ -11,7 +11,9 @@
     \ref usage_postproc based on the amount of free CPU time. For more
     information on the <code>deadline</code> parameter, see \ref usage_deadline.
 
+    \if samples
     \ref samples
+    \endif
 
 
     \section usage_cb Callback Based Decoding
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_common.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_common.h
index 44b27a8..77eb9fa 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_common.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_common.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VIDEO_COMMON_H_
-#define VIDEO_COMMON_H_
+#ifndef VPX_VIDEO_COMMON_H_
+#define VPX_VIDEO_COMMON_H_
 
 #include "./tools_common.h"
 
@@ -20,4 +20,4 @@
   struct VpxRational time_base;
 } VpxVideoInfo;
 
-#endif  // VIDEO_COMMON_H_
+#endif  // VPX_VIDEO_COMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_reader.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_reader.c
index a0ba252..16822ef 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_reader.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_reader.c
@@ -30,17 +30,37 @@
   char header[32];
   VpxVideoReader *reader = NULL;
   FILE *const file = fopen(filename, "rb");
-  if (!file) return NULL;  // Can't open file
+  if (!file) {
+    fprintf(stderr, "%s can't be opened.\n", filename);  // Can't open file
+    return NULL;
+  }
 
-  if (fread(header, 1, 32, file) != 32) return NULL;  // Can't read file header
+  if (fread(header, 1, 32, file) != 32) {
+    fprintf(stderr, "File header on %s can't be read.\n",
+            filename);  // Can't read file header
+    return NULL;
+  }
+  if (memcmp(kIVFSignature, header, 4) != 0) {
+    fprintf(stderr, "The IVF signature on %s is wrong.\n",
+            filename);  // Wrong IVF signature
 
-  if (memcmp(kIVFSignature, header, 4) != 0)
-    return NULL;  // Wrong IVF signature
+    return NULL;
+  }
+  if (mem_get_le16(header + 4) != 0) {
+    fprintf(stderr, "%s uses the wrong IVF version.\n",
+            filename);  // Wrong IVF version
 
-  if (mem_get_le16(header + 4) != 0) return NULL;  // Wrong IVF version
+    return NULL;
+  }
 
   reader = calloc(1, sizeof(*reader));
-  if (!reader) return NULL;  // Can't allocate VpxVideoReader
+  if (!reader) {
+    fprintf(
+        stderr,
+        "Can't allocate VpxVideoReader\n");  // Can't allocate VpxVideoReader
+
+    return NULL;
+  }
 
   reader->file = file;
   reader->info.codec_fourcc = mem_get_le32(header + 8);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_reader.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_reader.h
index 73c25b0..1f5c808 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_reader.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_reader.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VIDEO_READER_H_
-#define VIDEO_READER_H_
+#ifndef VPX_VIDEO_READER_H_
+#define VPX_VIDEO_READER_H_
 
 #include "./video_common.h"
 
@@ -48,4 +48,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VIDEO_READER_H_
+#endif  // VPX_VIDEO_READER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_writer.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_writer.c
index 56d428b..6e9a848 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_writer.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_writer.c
@@ -37,11 +37,15 @@
   if (container == kContainerIVF) {
     VpxVideoWriter *writer = NULL;
     FILE *const file = fopen(filename, "wb");
-    if (!file) return NULL;
-
+    if (!file) {
+      fprintf(stderr, "%s can't be written to.\n", filename);
+      return NULL;
+    }
     writer = malloc(sizeof(*writer));
-    if (!writer) return NULL;
-
+    if (!writer) {
+      fprintf(stderr, "Can't allocate VpxVideoWriter.\n");
+      return NULL;
+    }
     writer->frame_count = 0;
     writer->info = *info;
     writer->file = file;
@@ -50,7 +54,7 @@
 
     return writer;
   }
-
+  fprintf(stderr, "VpxVideoWriter supports only IVF.\n");
   return NULL;
 }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_writer.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_writer.h
index a769811..b4d242b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_writer.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/video_writer.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VIDEO_WRITER_H_
-#define VIDEO_WRITER_H_
+#ifndef VPX_VIDEO_WRITER_H_
+#define VPX_VIDEO_WRITER_H_
 
 #include "./video_common.h"
 
@@ -41,4 +41,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VIDEO_WRITER_H_
+#endif  // VPX_VIDEO_WRITER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/alloccommon.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/alloccommon.h
index 5d0840c..2d376bb 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/alloccommon.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/alloccommon.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_ALLOCCOMMON_H_
-#define VP8_COMMON_ALLOCCOMMON_H_
+#ifndef VPX_VP8_COMMON_ALLOCCOMMON_H_
+#define VPX_VP8_COMMON_ALLOCCOMMON_H_
 
 #include "onyxc_int.h"
 
@@ -21,10 +21,10 @@
 void vp8_remove_common(VP8_COMMON *oci);
 void vp8_de_alloc_frame_buffers(VP8_COMMON *oci);
 int vp8_alloc_frame_buffers(VP8_COMMON *oci, int width, int height);
-void vp8_setup_version(VP8_COMMON *oci);
+void vp8_setup_version(VP8_COMMON *cm);
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_ALLOCCOMMON_H_
+#endif  // VPX_VP8_COMMON_ALLOCCOMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.c
index e12f65a..48a1972 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.c
@@ -8,28 +8,12 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#include "vpx_config.h"
-#include "vp8_rtcd.h"
+#include "./vpx_config.h"
+#include "./vp8_rtcd.h"
+#include "vp8/common/arm/loopfilter_arm.h"
 #include "vp8/common/loopfilter.h"
 #include "vp8/common/onyxc_int.h"
 
-typedef void loopfilter_y_neon(unsigned char *src, int pitch,
-                               unsigned char blimit, unsigned char limit,
-                               unsigned char thresh);
-typedef void loopfilter_uv_neon(unsigned char *u, int pitch,
-                                unsigned char blimit, unsigned char limit,
-                                unsigned char thresh, unsigned char *v);
-
-extern loopfilter_y_neon vp8_loop_filter_horizontal_edge_y_neon;
-extern loopfilter_y_neon vp8_loop_filter_vertical_edge_y_neon;
-extern loopfilter_uv_neon vp8_loop_filter_horizontal_edge_uv_neon;
-extern loopfilter_uv_neon vp8_loop_filter_vertical_edge_uv_neon;
-
-extern loopfilter_y_neon vp8_mbloop_filter_horizontal_edge_y_neon;
-extern loopfilter_y_neon vp8_mbloop_filter_vertical_edge_y_neon;
-extern loopfilter_uv_neon vp8_mbloop_filter_horizontal_edge_uv_neon;
-extern loopfilter_uv_neon vp8_mbloop_filter_vertical_edge_uv_neon;
-
 /* NEON loopfilter functions */
 /* Horizontal MB filtering */
 void vp8_loop_filter_mbh_neon(unsigned char *y_ptr, unsigned char *u_ptr,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.h
new file mode 100644
index 0000000..6cf660d
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/loopfilter_arm.h
@@ -0,0 +1,31 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VP8_COMMON_ARM_LOOPFILTER_ARM_H_
+#define VPX_VP8_COMMON_ARM_LOOPFILTER_ARM_H_
+
+typedef void loopfilter_y_neon(unsigned char *src, int pitch,
+                               unsigned char blimit, unsigned char limit,
+                               unsigned char thresh);
+typedef void loopfilter_uv_neon(unsigned char *u, int pitch,
+                                unsigned char blimit, unsigned char limit,
+                                unsigned char thresh, unsigned char *v);
+
+loopfilter_y_neon vp8_loop_filter_horizontal_edge_y_neon;
+loopfilter_y_neon vp8_loop_filter_vertical_edge_y_neon;
+loopfilter_uv_neon vp8_loop_filter_horizontal_edge_uv_neon;
+loopfilter_uv_neon vp8_loop_filter_vertical_edge_uv_neon;
+
+loopfilter_y_neon vp8_mbloop_filter_horizontal_edge_y_neon;
+loopfilter_y_neon vp8_mbloop_filter_vertical_edge_y_neon;
+loopfilter_uv_neon vp8_mbloop_filter_horizontal_edge_uv_neon;
+loopfilter_uv_neon vp8_mbloop_filter_vertical_edge_uv_neon;
+
+#endif  // VPX_VP8_COMMON_ARM_LOOPFILTER_ARM_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/bilinearpredict_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/bilinearpredict_neon.c
index 8520ab5..590956d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/bilinearpredict_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/bilinearpredict_neon.c
@@ -10,7 +10,9 @@
 
 #include <arm_neon.h>
 #include <string.h>
+
 #include "./vpx_config.h"
+#include "./vp8_rtcd.h"
 #include "vpx_dsp/arm/mem_neon.h"
 
 static const uint8_t bifilter4_coeff[8][2] = { { 128, 0 }, { 112, 16 },
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/copymem_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/copymem_neon.c
index c1d293b..c89b47d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/copymem_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/copymem_neon.c
@@ -10,6 +10,8 @@
 
 #include <arm_neon.h>
 
+#include "./vp8_rtcd.h"
+
 void vp8_copy_mem8x4_neon(unsigned char *src, int src_stride,
                           unsigned char *dst, int dst_stride) {
   uint8x8_t vtmp;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/dequantizeb_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/dequantizeb_neon.c
index 6edff3c..791aaea 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/dequantizeb_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/dequantizeb_neon.c
@@ -10,6 +10,7 @@
 
 #include <arm_neon.h>
 
+#include "./vp8_rtcd.h"
 #include "vp8/common/blockd.h"
 
 void vp8_dequantize_b_neon(BLOCKD *d, short *DQC) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_blk_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_blk_neon.c
index d61dde8..5c26ce6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_blk_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_blk_neon.c
@@ -8,15 +8,226 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#include "vpx_config.h"
-#include "vp8_rtcd.h"
+#include <arm_neon.h>
 
-/* place these declarations here because we don't want to maintain them
- * outside of this scope
- */
-void idct_dequant_full_2x_neon(short *q, short *dq, unsigned char *dst,
-                               int stride);
-void idct_dequant_0_2x_neon(short *q, short dq, unsigned char *dst, int stride);
+#include "./vp8_rtcd.h"
+
+static void idct_dequant_0_2x_neon(int16_t *q, int16_t dq, unsigned char *dst,
+                                   int stride) {
+  unsigned char *dst0;
+  int i, a0, a1;
+  int16x8x2_t q2Add;
+  int32x2_t d2s32 = vdup_n_s32(0), d4s32 = vdup_n_s32(0);
+  uint8x8_t d2u8, d4u8;
+  uint16x8_t q1u16, q2u16;
+
+  a0 = ((q[0] * dq) + 4) >> 3;
+  a1 = ((q[16] * dq) + 4) >> 3;
+  q[0] = q[16] = 0;
+  q2Add.val[0] = vdupq_n_s16((int16_t)a0);
+  q2Add.val[1] = vdupq_n_s16((int16_t)a1);
+
+  for (i = 0; i < 2; i++, dst += 4) {
+    dst0 = dst;
+    d2s32 = vld1_lane_s32((const int32_t *)dst0, d2s32, 0);
+    dst0 += stride;
+    d2s32 = vld1_lane_s32((const int32_t *)dst0, d2s32, 1);
+    dst0 += stride;
+    d4s32 = vld1_lane_s32((const int32_t *)dst0, d4s32, 0);
+    dst0 += stride;
+    d4s32 = vld1_lane_s32((const int32_t *)dst0, d4s32, 1);
+
+    q1u16 = vaddw_u8(vreinterpretq_u16_s16(q2Add.val[i]),
+                     vreinterpret_u8_s32(d2s32));
+    q2u16 = vaddw_u8(vreinterpretq_u16_s16(q2Add.val[i]),
+                     vreinterpret_u8_s32(d4s32));
+
+    d2u8 = vqmovun_s16(vreinterpretq_s16_u16(q1u16));
+    d4u8 = vqmovun_s16(vreinterpretq_s16_u16(q2u16));
+
+    d2s32 = vreinterpret_s32_u8(d2u8);
+    d4s32 = vreinterpret_s32_u8(d4u8);
+
+    dst0 = dst;
+    vst1_lane_s32((int32_t *)dst0, d2s32, 0);
+    dst0 += stride;
+    vst1_lane_s32((int32_t *)dst0, d2s32, 1);
+    dst0 += stride;
+    vst1_lane_s32((int32_t *)dst0, d4s32, 0);
+    dst0 += stride;
+    vst1_lane_s32((int32_t *)dst0, d4s32, 1);
+  }
+}
+
+static const int16_t cospi8sqrt2minus1 = 20091;
+static const int16_t sinpi8sqrt2 = 17734;
+// because the lowest bit in 0x8a8c is 0, we can pre-shift this
+
+static void idct_dequant_full_2x_neon(int16_t *q, int16_t *dq,
+                                      unsigned char *dst, int stride) {
+  unsigned char *dst0, *dst1;
+  int32x2_t d28, d29, d30, d31;
+  int16x8_t q0, q1, q2, q3, q4, q5, q6, q7, q8, q9, q10, q11;
+  int16x8_t qEmpty = vdupq_n_s16(0);
+  int32x4x2_t q2tmp0, q2tmp1;
+  int16x8x2_t q2tmp2, q2tmp3;
+  int16x4_t dLow0, dLow1, dHigh0, dHigh1;
+
+  d28 = d29 = d30 = d31 = vdup_n_s32(0);
+
+  // load dq
+  q0 = vld1q_s16(dq);
+  dq += 8;
+  q1 = vld1q_s16(dq);
+
+  // load q
+  q2 = vld1q_s16(q);
+  vst1q_s16(q, qEmpty);
+  q += 8;
+  q3 = vld1q_s16(q);
+  vst1q_s16(q, qEmpty);
+  q += 8;
+  q4 = vld1q_s16(q);
+  vst1q_s16(q, qEmpty);
+  q += 8;
+  q5 = vld1q_s16(q);
+  vst1q_s16(q, qEmpty);
+
+  // load src from dst
+  dst0 = dst;
+  dst1 = dst + 4;
+  d28 = vld1_lane_s32((const int32_t *)dst0, d28, 0);
+  dst0 += stride;
+  d28 = vld1_lane_s32((const int32_t *)dst1, d28, 1);
+  dst1 += stride;
+  d29 = vld1_lane_s32((const int32_t *)dst0, d29, 0);
+  dst0 += stride;
+  d29 = vld1_lane_s32((const int32_t *)dst1, d29, 1);
+  dst1 += stride;
+
+  d30 = vld1_lane_s32((const int32_t *)dst0, d30, 0);
+  dst0 += stride;
+  d30 = vld1_lane_s32((const int32_t *)dst1, d30, 1);
+  dst1 += stride;
+  d31 = vld1_lane_s32((const int32_t *)dst0, d31, 0);
+  d31 = vld1_lane_s32((const int32_t *)dst1, d31, 1);
+
+  q2 = vmulq_s16(q2, q0);
+  q3 = vmulq_s16(q3, q1);
+  q4 = vmulq_s16(q4, q0);
+  q5 = vmulq_s16(q5, q1);
+
+  // vswp
+  dLow0 = vget_low_s16(q2);
+  dHigh0 = vget_high_s16(q2);
+  dLow1 = vget_low_s16(q4);
+  dHigh1 = vget_high_s16(q4);
+  q2 = vcombine_s16(dLow0, dLow1);
+  q4 = vcombine_s16(dHigh0, dHigh1);
+
+  dLow0 = vget_low_s16(q3);
+  dHigh0 = vget_high_s16(q3);
+  dLow1 = vget_low_s16(q5);
+  dHigh1 = vget_high_s16(q5);
+  q3 = vcombine_s16(dLow0, dLow1);
+  q5 = vcombine_s16(dHigh0, dHigh1);
+
+  q6 = vqdmulhq_n_s16(q4, sinpi8sqrt2);
+  q7 = vqdmulhq_n_s16(q5, sinpi8sqrt2);
+  q8 = vqdmulhq_n_s16(q4, cospi8sqrt2minus1);
+  q9 = vqdmulhq_n_s16(q5, cospi8sqrt2minus1);
+
+  q10 = vqaddq_s16(q2, q3);
+  q11 = vqsubq_s16(q2, q3);
+
+  q8 = vshrq_n_s16(q8, 1);
+  q9 = vshrq_n_s16(q9, 1);
+
+  q4 = vqaddq_s16(q4, q8);
+  q5 = vqaddq_s16(q5, q9);
+
+  q2 = vqsubq_s16(q6, q5);
+  q3 = vqaddq_s16(q7, q4);
+
+  q4 = vqaddq_s16(q10, q3);
+  q5 = vqaddq_s16(q11, q2);
+  q6 = vqsubq_s16(q11, q2);
+  q7 = vqsubq_s16(q10, q3);
+
+  q2tmp0 = vtrnq_s32(vreinterpretq_s32_s16(q4), vreinterpretq_s32_s16(q6));
+  q2tmp1 = vtrnq_s32(vreinterpretq_s32_s16(q5), vreinterpretq_s32_s16(q7));
+  q2tmp2 = vtrnq_s16(vreinterpretq_s16_s32(q2tmp0.val[0]),
+                     vreinterpretq_s16_s32(q2tmp1.val[0]));
+  q2tmp3 = vtrnq_s16(vreinterpretq_s16_s32(q2tmp0.val[1]),
+                     vreinterpretq_s16_s32(q2tmp1.val[1]));
+
+  // loop 2
+  q8 = vqdmulhq_n_s16(q2tmp2.val[1], sinpi8sqrt2);
+  q9 = vqdmulhq_n_s16(q2tmp3.val[1], sinpi8sqrt2);
+  q10 = vqdmulhq_n_s16(q2tmp2.val[1], cospi8sqrt2minus1);
+  q11 = vqdmulhq_n_s16(q2tmp3.val[1], cospi8sqrt2minus1);
+
+  q2 = vqaddq_s16(q2tmp2.val[0], q2tmp3.val[0]);
+  q3 = vqsubq_s16(q2tmp2.val[0], q2tmp3.val[0]);
+
+  q10 = vshrq_n_s16(q10, 1);
+  q11 = vshrq_n_s16(q11, 1);
+
+  q10 = vqaddq_s16(q2tmp2.val[1], q10);
+  q11 = vqaddq_s16(q2tmp3.val[1], q11);
+
+  q8 = vqsubq_s16(q8, q11);
+  q9 = vqaddq_s16(q9, q10);
+
+  q4 = vqaddq_s16(q2, q9);
+  q5 = vqaddq_s16(q3, q8);
+  q6 = vqsubq_s16(q3, q8);
+  q7 = vqsubq_s16(q2, q9);
+
+  q4 = vrshrq_n_s16(q4, 3);
+  q5 = vrshrq_n_s16(q5, 3);
+  q6 = vrshrq_n_s16(q6, 3);
+  q7 = vrshrq_n_s16(q7, 3);
+
+  q2tmp0 = vtrnq_s32(vreinterpretq_s32_s16(q4), vreinterpretq_s32_s16(q6));
+  q2tmp1 = vtrnq_s32(vreinterpretq_s32_s16(q5), vreinterpretq_s32_s16(q7));
+  q2tmp2 = vtrnq_s16(vreinterpretq_s16_s32(q2tmp0.val[0]),
+                     vreinterpretq_s16_s32(q2tmp1.val[0]));
+  q2tmp3 = vtrnq_s16(vreinterpretq_s16_s32(q2tmp0.val[1]),
+                     vreinterpretq_s16_s32(q2tmp1.val[1]));
+
+  q4 = vreinterpretq_s16_u16(
+      vaddw_u8(vreinterpretq_u16_s16(q2tmp2.val[0]), vreinterpret_u8_s32(d28)));
+  q5 = vreinterpretq_s16_u16(
+      vaddw_u8(vreinterpretq_u16_s16(q2tmp2.val[1]), vreinterpret_u8_s32(d29)));
+  q6 = vreinterpretq_s16_u16(
+      vaddw_u8(vreinterpretq_u16_s16(q2tmp3.val[0]), vreinterpret_u8_s32(d30)));
+  q7 = vreinterpretq_s16_u16(
+      vaddw_u8(vreinterpretq_u16_s16(q2tmp3.val[1]), vreinterpret_u8_s32(d31)));
+
+  d28 = vreinterpret_s32_u8(vqmovun_s16(q4));
+  d29 = vreinterpret_s32_u8(vqmovun_s16(q5));
+  d30 = vreinterpret_s32_u8(vqmovun_s16(q6));
+  d31 = vreinterpret_s32_u8(vqmovun_s16(q7));
+
+  dst0 = dst;
+  dst1 = dst + 4;
+  vst1_lane_s32((int32_t *)dst0, d28, 0);
+  dst0 += stride;
+  vst1_lane_s32((int32_t *)dst1, d28, 1);
+  dst1 += stride;
+  vst1_lane_s32((int32_t *)dst0, d29, 0);
+  dst0 += stride;
+  vst1_lane_s32((int32_t *)dst1, d29, 1);
+  dst1 += stride;
+
+  vst1_lane_s32((int32_t *)dst0, d30, 0);
+  dst0 += stride;
+  vst1_lane_s32((int32_t *)dst1, d30, 1);
+  dst1 += stride;
+  vst1_lane_s32((int32_t *)dst0, d31, 0);
+  vst1_lane_s32((int32_t *)dst1, d31, 1);
+}
 
 void vp8_dequant_idct_add_y_block_neon(short *q, short *dq, unsigned char *dst,
                                        int stride, char *eobs) {
@@ -43,42 +254,42 @@
 }
 
 void vp8_dequant_idct_add_uv_block_neon(short *q, short *dq,
-                                        unsigned char *dstu,
-                                        unsigned char *dstv, int stride,
+                                        unsigned char *dst_u,
+                                        unsigned char *dst_v, int stride,
                                         char *eobs) {
   if (((short *)(eobs))[0]) {
     if (((short *)eobs)[0] & 0xfefe)
-      idct_dequant_full_2x_neon(q, dq, dstu, stride);
+      idct_dequant_full_2x_neon(q, dq, dst_u, stride);
     else
-      idct_dequant_0_2x_neon(q, dq[0], dstu, stride);
+      idct_dequant_0_2x_neon(q, dq[0], dst_u, stride);
   }
 
   q += 32;
-  dstu += 4 * stride;
+  dst_u += 4 * stride;
 
   if (((short *)(eobs))[1]) {
     if (((short *)eobs)[1] & 0xfefe)
-      idct_dequant_full_2x_neon(q, dq, dstu, stride);
+      idct_dequant_full_2x_neon(q, dq, dst_u, stride);
     else
-      idct_dequant_0_2x_neon(q, dq[0], dstu, stride);
+      idct_dequant_0_2x_neon(q, dq[0], dst_u, stride);
   }
 
   q += 32;
 
   if (((short *)(eobs))[2]) {
     if (((short *)eobs)[2] & 0xfefe)
-      idct_dequant_full_2x_neon(q, dq, dstv, stride);
+      idct_dequant_full_2x_neon(q, dq, dst_v, stride);
     else
-      idct_dequant_0_2x_neon(q, dq[0], dstv, stride);
+      idct_dequant_0_2x_neon(q, dq[0], dst_v, stride);
   }
 
   q += 32;
-  dstv += 4 * stride;
+  dst_v += 4 * stride;
 
   if (((short *)(eobs))[3]) {
     if (((short *)eobs)[3] & 0xfefe)
-      idct_dequant_full_2x_neon(q, dq, dstv, stride);
+      idct_dequant_full_2x_neon(q, dq, dst_v, stride);
     else
-      idct_dequant_0_2x_neon(q, dq[0], dstv, stride);
+      idct_dequant_0_2x_neon(q, dq[0], dst_v, stride);
   }
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_dequant_0_2x_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_dequant_0_2x_neon.c
deleted file mode 100644
index c83102a..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_dequant_0_2x_neon.c
+++ /dev/null
@@ -1,59 +0,0 @@
-/*
- *  Copyright (c) 2014 The WebM project authors. All Rights Reserved.
- *
- *  Use of this source code is governed by a BSD-style license
- *  that can be found in the LICENSE file in the root of the source
- *  tree. An additional intellectual property rights grant can be found
- *  in the file PATENTS.  All contributing project authors may
- *  be found in the AUTHORS file in the root of the source tree.
- */
-
-#include <arm_neon.h>
-
-void idct_dequant_0_2x_neon(int16_t *q, int16_t dq, unsigned char *dst,
-                            int stride) {
-  unsigned char *dst0;
-  int i, a0, a1;
-  int16x8x2_t q2Add;
-  int32x2_t d2s32 = vdup_n_s32(0), d4s32 = vdup_n_s32(0);
-  uint8x8_t d2u8, d4u8;
-  uint16x8_t q1u16, q2u16;
-
-  a0 = ((q[0] * dq) + 4) >> 3;
-  a1 = ((q[16] * dq) + 4) >> 3;
-  q[0] = q[16] = 0;
-  q2Add.val[0] = vdupq_n_s16((int16_t)a0);
-  q2Add.val[1] = vdupq_n_s16((int16_t)a1);
-
-  for (i = 0; i < 2; i++, dst += 4) {
-    dst0 = dst;
-    d2s32 = vld1_lane_s32((const int32_t *)dst0, d2s32, 0);
-    dst0 += stride;
-    d2s32 = vld1_lane_s32((const int32_t *)dst0, d2s32, 1);
-    dst0 += stride;
-    d4s32 = vld1_lane_s32((const int32_t *)dst0, d4s32, 0);
-    dst0 += stride;
-    d4s32 = vld1_lane_s32((const int32_t *)dst0, d4s32, 1);
-
-    q1u16 = vaddw_u8(vreinterpretq_u16_s16(q2Add.val[i]),
-                     vreinterpret_u8_s32(d2s32));
-    q2u16 = vaddw_u8(vreinterpretq_u16_s16(q2Add.val[i]),
-                     vreinterpret_u8_s32(d4s32));
-
-    d2u8 = vqmovun_s16(vreinterpretq_s16_u16(q1u16));
-    d4u8 = vqmovun_s16(vreinterpretq_s16_u16(q2u16));
-
-    d2s32 = vreinterpret_s32_u8(d2u8);
-    d4s32 = vreinterpret_s32_u8(d4u8);
-
-    dst0 = dst;
-    vst1_lane_s32((int32_t *)dst0, d2s32, 0);
-    dst0 += stride;
-    vst1_lane_s32((int32_t *)dst0, d2s32, 1);
-    dst0 += stride;
-    vst1_lane_s32((int32_t *)dst0, d4s32, 0);
-    dst0 += stride;
-    vst1_lane_s32((int32_t *)dst0, d4s32, 1);
-  }
-  return;
-}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_dequant_full_2x_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_dequant_full_2x_neon.c
deleted file mode 100644
index f30671c..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/idct_dequant_full_2x_neon.c
+++ /dev/null
@@ -1,182 +0,0 @@
-/*
- *  Copyright (c) 2014 The WebM project authors. All Rights Reserved.
- *
- *  Use of this source code is governed by a BSD-style license
- *  that can be found in the LICENSE file in the root of the source
- *  tree. An additional intellectual property rights grant can be found
- *  in the file PATENTS.  All contributing project authors may
- *  be found in the AUTHORS file in the root of the source tree.
- */
-
-#include <arm_neon.h>
-
-static const int16_t cospi8sqrt2minus1 = 20091;
-static const int16_t sinpi8sqrt2 = 17734;
-// because the lowest bit in 0x8a8c is 0, we can pre-shift this
-
-void idct_dequant_full_2x_neon(int16_t *q, int16_t *dq, unsigned char *dst,
-                               int stride) {
-  unsigned char *dst0, *dst1;
-  int32x2_t d28, d29, d30, d31;
-  int16x8_t q0, q1, q2, q3, q4, q5, q6, q7, q8, q9, q10, q11;
-  int16x8_t qEmpty = vdupq_n_s16(0);
-  int32x4x2_t q2tmp0, q2tmp1;
-  int16x8x2_t q2tmp2, q2tmp3;
-  int16x4_t dLow0, dLow1, dHigh0, dHigh1;
-
-  d28 = d29 = d30 = d31 = vdup_n_s32(0);
-
-  // load dq
-  q0 = vld1q_s16(dq);
-  dq += 8;
-  q1 = vld1q_s16(dq);
-
-  // load q
-  q2 = vld1q_s16(q);
-  vst1q_s16(q, qEmpty);
-  q += 8;
-  q3 = vld1q_s16(q);
-  vst1q_s16(q, qEmpty);
-  q += 8;
-  q4 = vld1q_s16(q);
-  vst1q_s16(q, qEmpty);
-  q += 8;
-  q5 = vld1q_s16(q);
-  vst1q_s16(q, qEmpty);
-
-  // load src from dst
-  dst0 = dst;
-  dst1 = dst + 4;
-  d28 = vld1_lane_s32((const int32_t *)dst0, d28, 0);
-  dst0 += stride;
-  d28 = vld1_lane_s32((const int32_t *)dst1, d28, 1);
-  dst1 += stride;
-  d29 = vld1_lane_s32((const int32_t *)dst0, d29, 0);
-  dst0 += stride;
-  d29 = vld1_lane_s32((const int32_t *)dst1, d29, 1);
-  dst1 += stride;
-
-  d30 = vld1_lane_s32((const int32_t *)dst0, d30, 0);
-  dst0 += stride;
-  d30 = vld1_lane_s32((const int32_t *)dst1, d30, 1);
-  dst1 += stride;
-  d31 = vld1_lane_s32((const int32_t *)dst0, d31, 0);
-  d31 = vld1_lane_s32((const int32_t *)dst1, d31, 1);
-
-  q2 = vmulq_s16(q2, q0);
-  q3 = vmulq_s16(q3, q1);
-  q4 = vmulq_s16(q4, q0);
-  q5 = vmulq_s16(q5, q1);
-
-  // vswp
-  dLow0 = vget_low_s16(q2);
-  dHigh0 = vget_high_s16(q2);
-  dLow1 = vget_low_s16(q4);
-  dHigh1 = vget_high_s16(q4);
-  q2 = vcombine_s16(dLow0, dLow1);
-  q4 = vcombine_s16(dHigh0, dHigh1);
-
-  dLow0 = vget_low_s16(q3);
-  dHigh0 = vget_high_s16(q3);
-  dLow1 = vget_low_s16(q5);
-  dHigh1 = vget_high_s16(q5);
-  q3 = vcombine_s16(dLow0, dLow1);
-  q5 = vcombine_s16(dHigh0, dHigh1);
-
-  q6 = vqdmulhq_n_s16(q4, sinpi8sqrt2);
-  q7 = vqdmulhq_n_s16(q5, sinpi8sqrt2);
-  q8 = vqdmulhq_n_s16(q4, cospi8sqrt2minus1);
-  q9 = vqdmulhq_n_s16(q5, cospi8sqrt2minus1);
-
-  q10 = vqaddq_s16(q2, q3);
-  q11 = vqsubq_s16(q2, q3);
-
-  q8 = vshrq_n_s16(q8, 1);
-  q9 = vshrq_n_s16(q9, 1);
-
-  q4 = vqaddq_s16(q4, q8);
-  q5 = vqaddq_s16(q5, q9);
-
-  q2 = vqsubq_s16(q6, q5);
-  q3 = vqaddq_s16(q7, q4);
-
-  q4 = vqaddq_s16(q10, q3);
-  q5 = vqaddq_s16(q11, q2);
-  q6 = vqsubq_s16(q11, q2);
-  q7 = vqsubq_s16(q10, q3);
-
-  q2tmp0 = vtrnq_s32(vreinterpretq_s32_s16(q4), vreinterpretq_s32_s16(q6));
-  q2tmp1 = vtrnq_s32(vreinterpretq_s32_s16(q5), vreinterpretq_s32_s16(q7));
-  q2tmp2 = vtrnq_s16(vreinterpretq_s16_s32(q2tmp0.val[0]),
-                     vreinterpretq_s16_s32(q2tmp1.val[0]));
-  q2tmp3 = vtrnq_s16(vreinterpretq_s16_s32(q2tmp0.val[1]),
-                     vreinterpretq_s16_s32(q2tmp1.val[1]));
-
-  // loop 2
-  q8 = vqdmulhq_n_s16(q2tmp2.val[1], sinpi8sqrt2);
-  q9 = vqdmulhq_n_s16(q2tmp3.val[1], sinpi8sqrt2);
-  q10 = vqdmulhq_n_s16(q2tmp2.val[1], cospi8sqrt2minus1);
-  q11 = vqdmulhq_n_s16(q2tmp3.val[1], cospi8sqrt2minus1);
-
-  q2 = vqaddq_s16(q2tmp2.val[0], q2tmp3.val[0]);
-  q3 = vqsubq_s16(q2tmp2.val[0], q2tmp3.val[0]);
-
-  q10 = vshrq_n_s16(q10, 1);
-  q11 = vshrq_n_s16(q11, 1);
-
-  q10 = vqaddq_s16(q2tmp2.val[1], q10);
-  q11 = vqaddq_s16(q2tmp3.val[1], q11);
-
-  q8 = vqsubq_s16(q8, q11);
-  q9 = vqaddq_s16(q9, q10);
-
-  q4 = vqaddq_s16(q2, q9);
-  q5 = vqaddq_s16(q3, q8);
-  q6 = vqsubq_s16(q3, q8);
-  q7 = vqsubq_s16(q2, q9);
-
-  q4 = vrshrq_n_s16(q4, 3);
-  q5 = vrshrq_n_s16(q5, 3);
-  q6 = vrshrq_n_s16(q6, 3);
-  q7 = vrshrq_n_s16(q7, 3);
-
-  q2tmp0 = vtrnq_s32(vreinterpretq_s32_s16(q4), vreinterpretq_s32_s16(q6));
-  q2tmp1 = vtrnq_s32(vreinterpretq_s32_s16(q5), vreinterpretq_s32_s16(q7));
-  q2tmp2 = vtrnq_s16(vreinterpretq_s16_s32(q2tmp0.val[0]),
-                     vreinterpretq_s16_s32(q2tmp1.val[0]));
-  q2tmp3 = vtrnq_s16(vreinterpretq_s16_s32(q2tmp0.val[1]),
-                     vreinterpretq_s16_s32(q2tmp1.val[1]));
-
-  q4 = vreinterpretq_s16_u16(
-      vaddw_u8(vreinterpretq_u16_s16(q2tmp2.val[0]), vreinterpret_u8_s32(d28)));
-  q5 = vreinterpretq_s16_u16(
-      vaddw_u8(vreinterpretq_u16_s16(q2tmp2.val[1]), vreinterpret_u8_s32(d29)));
-  q6 = vreinterpretq_s16_u16(
-      vaddw_u8(vreinterpretq_u16_s16(q2tmp3.val[0]), vreinterpret_u8_s32(d30)));
-  q7 = vreinterpretq_s16_u16(
-      vaddw_u8(vreinterpretq_u16_s16(q2tmp3.val[1]), vreinterpret_u8_s32(d31)));
-
-  d28 = vreinterpret_s32_u8(vqmovun_s16(q4));
-  d29 = vreinterpret_s32_u8(vqmovun_s16(q5));
-  d30 = vreinterpret_s32_u8(vqmovun_s16(q6));
-  d31 = vreinterpret_s32_u8(vqmovun_s16(q7));
-
-  dst0 = dst;
-  dst1 = dst + 4;
-  vst1_lane_s32((int32_t *)dst0, d28, 0);
-  dst0 += stride;
-  vst1_lane_s32((int32_t *)dst1, d28, 1);
-  dst1 += stride;
-  vst1_lane_s32((int32_t *)dst0, d29, 0);
-  dst0 += stride;
-  vst1_lane_s32((int32_t *)dst1, d29, 1);
-  dst1 += stride;
-
-  vst1_lane_s32((int32_t *)dst0, d30, 0);
-  dst0 += stride;
-  vst1_lane_s32((int32_t *)dst1, d30, 1);
-  dst1 += stride;
-  vst1_lane_s32((int32_t *)dst0, d31, 0);
-  vst1_lane_s32((int32_t *)dst1, d31, 1);
-  return;
-}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/iwalsh_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/iwalsh_neon.c
index 6c4bcc1..91600bf 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/iwalsh_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/iwalsh_neon.c
@@ -10,6 +10,8 @@
 
 #include <arm_neon.h>
 
+#include "./vp8_rtcd.h"
+
 void vp8_short_inv_walsh4x4_neon(int16_t *input, int16_t *mb_dqcoeff) {
   int16x8_t q0s16, q1s16, q2s16, q3s16;
   int16x4_t d4s16, d5s16, d6s16, d7s16;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimplehorizontaledge_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimplehorizontaledge_neon.c
index a168219..df983b2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimplehorizontaledge_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimplehorizontaledge_neon.c
@@ -9,7 +9,9 @@
  */
 
 #include <arm_neon.h>
+
 #include "./vpx_config.h"
+#include "./vp8_rtcd.h"
 
 static INLINE void vp8_loop_filter_simple_horizontal_edge_neon(
     unsigned char *s, int p, const unsigned char *blimit) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimpleverticaledge_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimpleverticaledge_neon.c
index 80a222d..fbc83ae 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimpleverticaledge_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/loopfiltersimpleverticaledge_neon.c
@@ -9,7 +9,9 @@
  */
 
 #include <arm_neon.h>
+
 #include "./vpx_config.h"
+#include "./vp8_rtcd.h"
 #include "vpx_ports/arm.h"
 
 #ifdef VPX_INCOMPATIBLE_GCC
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/mbloopfilter_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/mbloopfilter_neon.c
index 65eec30..fafaf2d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/mbloopfilter_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/mbloopfilter_neon.c
@@ -9,7 +9,9 @@
  */
 
 #include <arm_neon.h>
+
 #include "./vpx_config.h"
+#include "vp8/common/arm/loopfilter_arm.h"
 
 static INLINE void vp8_mbloop_filter_neon(uint8x16_t qblimit,  // mblimit
                                           uint8x16_t qlimit,   // limit
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/sixtappredict_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/sixtappredict_neon.c
index aa2567d..48e86d3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/sixtappredict_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/sixtappredict_neon.c
@@ -11,6 +11,7 @@
 #include <arm_neon.h>
 #include <string.h>
 #include "./vpx_config.h"
+#include "./vp8_rtcd.h"
 #include "vpx_dsp/arm/mem_neon.h"
 #include "vpx_ports/mem.h"
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/vp8_loopfilter_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/vp8_loopfilter_neon.c
index d728673..ebc004a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/vp8_loopfilter_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/arm/neon/vp8_loopfilter_neon.c
@@ -9,7 +9,9 @@
  */
 
 #include <arm_neon.h>
+
 #include "./vpx_config.h"
+#include "vp8/common/arm/loopfilter_arm.h"
 #include "vpx_ports/arm.h"
 
 static INLINE void vp8_loop_filter_neon(uint8x16_t qblimit,  // flimit
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/blockd.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/blockd.c
index f47c5ba..22905c1 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/blockd.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/blockd.c
@@ -11,9 +11,9 @@
 #include "blockd.h"
 #include "vpx_mem/vpx_mem.h"
 
-const unsigned char vp8_block2left[25] = {
-  0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8
-};
-const unsigned char vp8_block2above[25] = {
-  0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 4, 5, 6, 7, 6, 7, 8
-};
+const unsigned char vp8_block2left[25] = { 0, 0, 0, 0, 1, 1, 1, 1, 2,
+                                           2, 2, 2, 3, 3, 3, 3, 4, 4,
+                                           5, 5, 6, 6, 7, 7, 8 };
+const unsigned char vp8_block2above[25] = { 0, 1, 2, 3, 0, 1, 2, 3, 0,
+                                            1, 2, 3, 0, 1, 2, 3, 4, 5,
+                                            4, 5, 6, 7, 6, 7, 8 };
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/blockd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/blockd.h
index ddcc307..02abe05 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/blockd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/blockd.h
@@ -8,11 +8,12 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_BLOCKD_H_
-#define VP8_COMMON_BLOCKD_H_
+#ifndef VPX_VP8_COMMON_BLOCKD_H_
+#define VPX_VP8_COMMON_BLOCKD_H_
 
 void vpx_log(const char *format, ...);
 
+#include "vpx/internal/vpx_codec_internal.h"
 #include "vpx_config.h"
 #include "vpx_scale/yv12config.h"
 #include "mv.h"
@@ -201,8 +202,9 @@
   union b_mode_info bmi;
 } BLOCKD;
 
-typedef void (*vp8_subpix_fn_t)(unsigned char *src, int src_pitch, int xofst,
-                                int yofst, unsigned char *dst, int dst_pitch);
+typedef void (*vp8_subpix_fn_t)(unsigned char *src_ptr, int src_pixels_per_line,
+                                int xoffset, int yoffset,
+                                unsigned char *dst_ptr, int dst_pitch);
 
 typedef struct macroblockd {
   DECLARE_ALIGNED(16, unsigned char, predictor[384]);
@@ -288,7 +290,9 @@
 
   int corrupted;
 
-#if ARCH_X86 || ARCH_X86_64
+  struct vpx_internal_error_info error_info;
+
+#if VPX_ARCH_X86 || VPX_ARCH_X86_64
   /* This is an intermediate buffer currently used in sub-pixel motion search
    * to keep a copy of the reference area. This buffer can be used for other
    * purpose.
@@ -304,4 +308,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_BLOCKD_H_
+#endif  // VPX_VP8_COMMON_BLOCKD_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/coefupdateprobs.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/coefupdateprobs.h
index 9b01bba..b342096 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/coefupdateprobs.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/coefupdateprobs.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_COEFUPDATEPROBS_H_
-#define VP8_COMMON_COEFUPDATEPROBS_H_
+#ifndef VPX_VP8_COMMON_COEFUPDATEPROBS_H_
+#define VPX_VP8_COMMON_COEFUPDATEPROBS_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -194,4 +194,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_COEFUPDATEPROBS_H_
+#endif  // VPX_VP8_COMMON_COEFUPDATEPROBS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/common.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/common.h
index bbfc4f3..2c30e8d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/common.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/common.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_COMMON_H_
-#define VP8_COMMON_COMMON_H_
+#ifndef VPX_VP8_COMMON_COMMON_H_
+#define VPX_VP8_COMMON_COMMON_H_
 
 #include <assert.h>
 
@@ -31,18 +31,18 @@
 
 /* Use this for variably-sized arrays. */
 
-#define vp8_copy_array(Dest, Src, N)       \
-  {                                        \
-    assert(sizeof(*Dest) == sizeof(*Src)); \
-    memcpy(Dest, Src, N * sizeof(*Src));   \
+#define vp8_copy_array(Dest, Src, N)           \
+  {                                            \
+    assert(sizeof(*(Dest)) == sizeof(*(Src))); \
+    memcpy(Dest, Src, (N) * sizeof(*(Src)));   \
   }
 
-#define vp8_zero(Dest) memset(&Dest, 0, sizeof(Dest));
+#define vp8_zero(Dest) memset(&(Dest), 0, sizeof(Dest));
 
-#define vp8_zero_array(Dest, N) memset(Dest, 0, N * sizeof(*Dest));
+#define vp8_zero_array(Dest, N) memset(Dest, 0, (N) * sizeof(*(Dest)));
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_COMMON_H_
+#endif  // VPX_VP8_COMMON_COMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/default_coef_probs.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/default_coef_probs.h
index f3f54f6..b25e4a4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/default_coef_probs.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/default_coef_probs.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_DEFAULT_COEF_PROBS_H_
-#define VP8_COMMON_DEFAULT_COEF_PROBS_H_
+#ifndef VPX_VP8_COMMON_DEFAULT_COEF_PROBS_H_
+#define VPX_VP8_COMMON_DEFAULT_COEF_PROBS_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -157,4 +157,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_DEFAULT_COEF_PROBS_H_
+#endif  // VPX_VP8_COMMON_DEFAULT_COEF_PROBS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropy.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropy.c
index f61fa9e..fc4a353 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropy.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropy.c
@@ -28,9 +28,9 @@
   0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
 };
 
-DECLARE_ALIGNED(16, const unsigned char, vp8_coef_bands[16]) = {
-  0, 1, 2, 3, 6, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6, 7
-};
+DECLARE_ALIGNED(16, const unsigned char,
+                vp8_coef_bands[16]) = { 0, 1, 2, 3, 6, 4, 5, 6,
+                                        6, 6, 6, 6, 6, 6, 6, 7 };
 
 DECLARE_ALIGNED(16, const unsigned char,
                 vp8_prev_token_class[MAX_ENTROPY_TOKENS]) = {
@@ -41,9 +41,9 @@
   0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15,
 };
 
-DECLARE_ALIGNED(16, const short, vp8_default_inv_zig_zag[16]) = {
-  1, 2, 6, 7, 3, 5, 8, 13, 4, 9, 12, 14, 10, 11, 15, 16
-};
+DECLARE_ALIGNED(16, const short,
+                vp8_default_inv_zig_zag[16]) = { 1, 2, 6,  7,  3,  5,  8,  13,
+                                                 4, 9, 12, 14, 10, 11, 15, 16 };
 
 /* vp8_default_zig_zag_mask generated with:
 
@@ -129,9 +129,9 @@
 static const vp8_tree_index cat3[6] = { 2, 2, 4, 4, 0, 0 };
 static const vp8_tree_index cat4[8] = { 2, 2, 4, 4, 6, 6, 0, 0 };
 static const vp8_tree_index cat5[10] = { 2, 2, 4, 4, 6, 6, 8, 8, 0, 0 };
-static const vp8_tree_index cat6[22] = {
-  2, 2, 4, 4, 6, 6, 8, 8, 10, 10, 12, 12, 14, 14, 16, 16, 18, 18, 20, 20, 0, 0
-};
+static const vp8_tree_index cat6[22] = { 2,  2,  4,  4,  6,  6,  8,  8,
+                                         10, 10, 12, 12, 14, 14, 16, 16,
+                                         18, 18, 20, 20, 0,  0 };
 
 const vp8_extra_bit_struct vp8_extra_bits[12] = {
   { 0, 0, 0, 0 },         { 0, 0, 0, 1 },          { 0, 0, 0, 2 },
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropy.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropy.h
index d088560..fbdb7bc 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropy.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropy.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_ENTROPY_H_
-#define VP8_COMMON_ENTROPY_H_
+#ifndef VPX_VP8_COMMON_ENTROPY_H_
+#define VPX_VP8_COMMON_ENTROPY_H_
 
 #include "treecoder.h"
 #include "blockd.h"
@@ -105,4 +105,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_ENTROPY_H_
+#endif  // VPX_VP8_COMMON_ENTROPY_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropymode.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropymode.c
index 239492a..f61e0c2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropymode.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropymode.c
@@ -75,9 +75,9 @@
   -DC_PRED, 2, 4, 6, -V_PRED, -H_PRED, -TM_PRED, -B_PRED
 };
 
-const vp8_tree_index vp8_kf_ymode_tree[8] = {
-  -B_PRED, 2, 4, 6, -DC_PRED, -V_PRED, -H_PRED, -TM_PRED
-};
+const vp8_tree_index vp8_kf_ymode_tree[8] = { -B_PRED, 2,        4,
+                                              6,       -DC_PRED, -V_PRED,
+                                              -H_PRED, -TM_PRED };
 
 const vp8_tree_index vp8_uv_mode_tree[6] = { -DC_PRED, 2,       -V_PRED,
                                              4,        -H_PRED, -TM_PRED };
@@ -99,6 +99,6 @@
   memcpy(x->fc.sub_mv_ref_prob, sub_mv_ref_prob, sizeof(sub_mv_ref_prob));
 }
 
-void vp8_default_bmode_probs(vp8_prob p[VP8_BINTRAMODES - 1]) {
-  memcpy(p, vp8_bmode_prob, sizeof(vp8_bmode_prob));
+void vp8_default_bmode_probs(vp8_prob dest[VP8_BINTRAMODES - 1]) {
+  memcpy(dest, vp8_bmode_prob, sizeof(vp8_bmode_prob));
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropymode.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropymode.h
index b3fad19..c772cec 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropymode.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropymode.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_ENTROPYMODE_H_
-#define VP8_COMMON_ENTROPYMODE_H_
+#ifndef VPX_VP8_COMMON_ENTROPYMODE_H_
+#define VPX_VP8_COMMON_ENTROPYMODE_H_
 
 #include "onyxc_int.h"
 #include "treecoder.h"
@@ -85,4 +85,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_ENTROPYMODE_H_
+#endif  // VPX_VP8_COMMON_ENTROPYMODE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropymv.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropymv.h
index 6373000..40039f5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropymv.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/entropymv.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_ENTROPYMV_H_
-#define VP8_COMMON_ENTROPYMV_H_
+#ifndef VPX_VP8_COMMON_ENTROPYMV_H_
+#define VPX_VP8_COMMON_ENTROPYMV_H_
 
 #include "treecoder.h"
 
@@ -46,4 +46,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_ENTROPYMV_H_
+#endif  // VPX_VP8_COMMON_ENTROPYMV_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/extend.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/extend.h
index 7da5ce31..586a38a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/extend.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/extend.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_EXTEND_H_
-#define VP8_COMMON_EXTEND_H_
+#ifndef VPX_VP8_COMMON_EXTEND_H_
+#define VPX_VP8_COMMON_EXTEND_H_
 
 #include "vpx_scale/yv12config.h"
 
@@ -29,4 +29,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_EXTEND_H_
+#endif  // VPX_VP8_COMMON_EXTEND_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/filter.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/filter.h
index f1d5ece..6acee22 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/filter.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/filter.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_FILTER_H_
-#define VP8_COMMON_FILTER_H_
+#ifndef VPX_VP8_COMMON_FILTER_H_
+#define VPX_VP8_COMMON_FILTER_H_
 
 #include "vpx_ports/mem.h"
 
@@ -28,4 +28,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_FILTER_H_
+#endif  // VPX_VP8_COMMON_FILTER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/findnearmv.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/findnearmv.c
index f40d2c6..6889fde 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/findnearmv.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/findnearmv.c
@@ -21,19 +21,20 @@
    Note that we only consider one 4x4 subblock from each candidate 16x16
    macroblock.   */
 void vp8_find_near_mvs(MACROBLOCKD *xd, const MODE_INFO *here, int_mv *nearest,
-                       int_mv *nearby, int_mv *best_mv, int cnt[4],
+                       int_mv *nearby, int_mv *best_mv, int near_mv_ref_cnts[4],
                        int refframe, int *ref_frame_sign_bias) {
   const MODE_INFO *above = here - xd->mode_info_stride;
   const MODE_INFO *left = here - 1;
   const MODE_INFO *aboveleft = above - 1;
   int_mv near_mvs[4];
   int_mv *mv = near_mvs;
-  int *cntx = cnt;
+  int *cntx = near_mv_ref_cnts;
   enum { CNT_INTRA, CNT_NEAREST, CNT_NEAR, CNT_SPLITMV };
 
   /* Zero accumulators */
   mv[0].as_int = mv[1].as_int = mv[2].as_int = 0;
-  cnt[0] = cnt[1] = cnt[2] = cnt[3] = 0;
+  near_mv_ref_cnts[0] = near_mv_ref_cnts[1] = near_mv_ref_cnts[2] =
+      near_mv_ref_cnts[3] = 0;
 
   /* Process above */
   if (above->mbmi.ref_frame != INTRA_FRAME) {
@@ -63,7 +64,7 @@
 
       *cntx += 2;
     } else {
-      cnt[CNT_INTRA] += 2;
+      near_mv_ref_cnts[CNT_INTRA] += 2;
     }
   }
 
@@ -83,33 +84,34 @@
 
       *cntx += 1;
     } else {
-      cnt[CNT_INTRA] += 1;
+      near_mv_ref_cnts[CNT_INTRA] += 1;
     }
   }
 
   /* If we have three distinct MV's ... */
-  if (cnt[CNT_SPLITMV]) {
+  if (near_mv_ref_cnts[CNT_SPLITMV]) {
     /* See if above-left MV can be merged with NEAREST */
-    if (mv->as_int == near_mvs[CNT_NEAREST].as_int) cnt[CNT_NEAREST] += 1;
+    if (mv->as_int == near_mvs[CNT_NEAREST].as_int)
+      near_mv_ref_cnts[CNT_NEAREST] += 1;
   }
 
-  cnt[CNT_SPLITMV] =
+  near_mv_ref_cnts[CNT_SPLITMV] =
       ((above->mbmi.mode == SPLITMV) + (left->mbmi.mode == SPLITMV)) * 2 +
       (aboveleft->mbmi.mode == SPLITMV);
 
   /* Swap near and nearest if necessary */
-  if (cnt[CNT_NEAR] > cnt[CNT_NEAREST]) {
+  if (near_mv_ref_cnts[CNT_NEAR] > near_mv_ref_cnts[CNT_NEAREST]) {
     int tmp;
-    tmp = cnt[CNT_NEAREST];
-    cnt[CNT_NEAREST] = cnt[CNT_NEAR];
-    cnt[CNT_NEAR] = tmp;
+    tmp = near_mv_ref_cnts[CNT_NEAREST];
+    near_mv_ref_cnts[CNT_NEAREST] = near_mv_ref_cnts[CNT_NEAR];
+    near_mv_ref_cnts[CNT_NEAR] = tmp;
     tmp = near_mvs[CNT_NEAREST].as_int;
     near_mvs[CNT_NEAREST].as_int = near_mvs[CNT_NEAR].as_int;
     near_mvs[CNT_NEAR].as_int = tmp;
   }
 
   /* Use near_mvs[0] to store the "best" MV */
-  if (cnt[CNT_NEAREST] >= cnt[CNT_INTRA]) {
+  if (near_mv_ref_cnts[CNT_NEAREST] >= near_mv_ref_cnts[CNT_INTRA]) {
     near_mvs[CNT_INTRA] = near_mvs[CNT_NEAREST];
   }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/findnearmv.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/findnearmv.h
index c1eaa26..d7db954 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/findnearmv.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/findnearmv.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_FINDNEARMV_H_
-#define VP8_COMMON_FINDNEARMV_H_
+#ifndef VPX_VP8_COMMON_FINDNEARMV_H_
+#define VPX_VP8_COMMON_FINDNEARMV_H_
 
 #include "./vpx_config.h"
 #include "mv.h"
@@ -70,7 +70,7 @@
 }
 
 void vp8_find_near_mvs(MACROBLOCKD *xd, const MODE_INFO *here, int_mv *nearest,
-                       int_mv *nearby, int_mv *best, int near_mv_ref_cts[4],
+                       int_mv *nearby, int_mv *best_mv, int near_mv_ref_cnts[4],
                        int refframe, int *ref_frame_sign_bias);
 
 int vp8_find_near_mvs_bias(MACROBLOCKD *xd, const MODE_INFO *here,
@@ -148,4 +148,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_FINDNEARMV_H_
+#endif  // VPX_VP8_COMMON_FINDNEARMV_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/generic/systemdependent.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/generic/systemdependent.c
index 0432ee9..75ce7ef 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/generic/systemdependent.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/generic/systemdependent.c
@@ -10,11 +10,11 @@
 
 #include "vpx_config.h"
 #include "vp8_rtcd.h"
-#if ARCH_ARM
+#if VPX_ARCH_ARM
 #include "vpx_ports/arm.h"
-#elif ARCH_X86 || ARCH_X86_64
+#elif VPX_ARCH_X86 || VPX_ARCH_X86_64
 #include "vpx_ports/x86.h"
-#elif ARCH_PPC
+#elif VPX_ARCH_PPC
 #include "vpx_ports/ppc.h"
 #endif
 #include "vp8/common/onyxc_int.h"
@@ -88,15 +88,16 @@
 void vp8_machine_specific_config(VP8_COMMON *ctx) {
 #if CONFIG_MULTITHREAD
   ctx->processor_core_count = get_cpu_count();
-#else
-  (void)ctx;
 #endif /* CONFIG_MULTITHREAD */
 
-#if ARCH_ARM
+#if VPX_ARCH_ARM
   ctx->cpu_caps = arm_cpu_caps();
-#elif ARCH_X86 || ARCH_X86_64
+#elif VPX_ARCH_X86 || VPX_ARCH_X86_64
   ctx->cpu_caps = x86_simd_caps();
-#elif ARCH_PPC
+#elif VPX_ARCH_PPC
   ctx->cpu_caps = ppc_simd_caps();
+#else
+  // generic-gnu targets.
+  ctx->cpu_caps = 0;
 #endif
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/header.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/header.h
index 1df01fc..e64e241 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/header.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/header.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_HEADER_H_
-#define VP8_COMMON_HEADER_H_
+#ifndef VPX_VP8_COMMON_HEADER_H_
+#define VPX_VP8_COMMON_HEADER_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -45,4 +45,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_HEADER_H_
+#endif  // VPX_VP8_COMMON_HEADER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/idct_blk.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/idct_blk.c
index ff9f3eb..ebe1774 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/idct_blk.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/idct_blk.c
@@ -12,12 +12,6 @@
 #include "vp8_rtcd.h"
 #include "vpx_mem/vpx_mem.h"
 
-void vp8_dequant_idct_add_c(short *input, short *dq, unsigned char *dest,
-                            int stride);
-void vp8_dc_only_idct_add_c(short input_dc, unsigned char *pred,
-                            int pred_stride, unsigned char *dst_ptr,
-                            int dst_stride);
-
 void vp8_dequant_idct_add_y_block_c(short *q, short *dq, unsigned char *dst,
                                     int stride, char *eobs) {
   int i, j;
@@ -39,40 +33,40 @@
   }
 }
 
-void vp8_dequant_idct_add_uv_block_c(short *q, short *dq, unsigned char *dstu,
-                                     unsigned char *dstv, int stride,
+void vp8_dequant_idct_add_uv_block_c(short *q, short *dq, unsigned char *dst_u,
+                                     unsigned char *dst_v, int stride,
                                      char *eobs) {
   int i, j;
 
   for (i = 0; i < 2; ++i) {
     for (j = 0; j < 2; ++j) {
       if (*eobs++ > 1) {
-        vp8_dequant_idct_add_c(q, dq, dstu, stride);
+        vp8_dequant_idct_add_c(q, dq, dst_u, stride);
       } else {
-        vp8_dc_only_idct_add_c(q[0] * dq[0], dstu, stride, dstu, stride);
+        vp8_dc_only_idct_add_c(q[0] * dq[0], dst_u, stride, dst_u, stride);
         memset(q, 0, 2 * sizeof(q[0]));
       }
 
       q += 16;
-      dstu += 4;
+      dst_u += 4;
     }
 
-    dstu += 4 * stride - 8;
+    dst_u += 4 * stride - 8;
   }
 
   for (i = 0; i < 2; ++i) {
     for (j = 0; j < 2; ++j) {
       if (*eobs++ > 1) {
-        vp8_dequant_idct_add_c(q, dq, dstv, stride);
+        vp8_dequant_idct_add_c(q, dq, dst_v, stride);
       } else {
-        vp8_dc_only_idct_add_c(q[0] * dq[0], dstv, stride, dstv, stride);
+        vp8_dc_only_idct_add_c(q[0] * dq[0], dst_v, stride, dst_v, stride);
         memset(q, 0, 2 * sizeof(q[0]));
       }
 
       q += 16;
-      dstv += 4;
+      dst_v += 4;
     }
 
-    dstv += 4 * stride - 8;
+    dst_v += 4 * stride - 8;
   }
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/invtrans.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/invtrans.h
index c7af32f..aed7bb0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/invtrans.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/invtrans.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_INVTRANS_H_
-#define VP8_COMMON_INVTRANS_H_
+#ifndef VPX_VP8_COMMON_INVTRANS_H_
+#define VPX_VP8_COMMON_INVTRANS_H_
 
 #include "./vpx_config.h"
 #include "vp8_rtcd.h"
@@ -54,4 +54,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_INVTRANS_H_
+#endif  // VPX_VP8_COMMON_INVTRANS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/loopfilter.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/loopfilter.h
index 7484563..909e8df 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/loopfilter.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/loopfilter.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_LOOPFILTER_H_
-#define VP8_COMMON_LOOPFILTER_H_
+#ifndef VPX_VP8_COMMON_LOOPFILTER_H_
+#define VPX_VP8_COMMON_LOOPFILTER_H_
 
 #include "vpx_ports/mem.h"
 #include "vpx_config.h"
@@ -26,7 +26,7 @@
 
 typedef enum { NORMAL_LOOPFILTER = 0, SIMPLE_LOOPFILTER = 1 } LOOPFILTERTYPE;
 
-#if ARCH_ARM
+#if VPX_ARCH_ARM
 #define SIMD_WIDTH 1
 #else
 #define SIMD_WIDTH 16
@@ -93,11 +93,9 @@
 
 void vp8_loop_filter_row_simple(struct VP8Common *cm,
                                 struct modeinfo *mode_info_context, int mb_row,
-                                int post_ystride, int post_uvstride,
-                                unsigned char *y_ptr, unsigned char *u_ptr,
-                                unsigned char *v_ptr);
+                                int post_ystride, unsigned char *y_ptr);
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_LOOPFILTER_H_
+#endif  // VPX_VP8_COMMON_LOOPFILTER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/loopfilter_filters.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/loopfilter_filters.c
index 188e290..61a55d3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/loopfilter_filters.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/loopfilter_filters.c
@@ -270,28 +270,32 @@
   *op0 = u ^ 0x80;
 }
 
-void vp8_loop_filter_simple_horizontal_edge_c(unsigned char *s, int p,
+void vp8_loop_filter_simple_horizontal_edge_c(unsigned char *y_ptr,
+                                              int y_stride,
                                               const unsigned char *blimit) {
   signed char mask = 0;
   int i = 0;
 
   do {
-    mask = vp8_simple_filter_mask(blimit[0], s[-2 * p], s[-1 * p], s[0 * p],
-                                  s[1 * p]);
-    vp8_simple_filter(mask, s - 2 * p, s - 1 * p, s, s + 1 * p);
-    ++s;
+    mask = vp8_simple_filter_mask(blimit[0], y_ptr[-2 * y_stride],
+                                  y_ptr[-1 * y_stride], y_ptr[0 * y_stride],
+                                  y_ptr[1 * y_stride]);
+    vp8_simple_filter(mask, y_ptr - 2 * y_stride, y_ptr - 1 * y_stride, y_ptr,
+                      y_ptr + 1 * y_stride);
+    ++y_ptr;
   } while (++i < 16);
 }
 
-void vp8_loop_filter_simple_vertical_edge_c(unsigned char *s, int p,
+void vp8_loop_filter_simple_vertical_edge_c(unsigned char *y_ptr, int y_stride,
                                             const unsigned char *blimit) {
   signed char mask = 0;
   int i = 0;
 
   do {
-    mask = vp8_simple_filter_mask(blimit[0], s[-2], s[-1], s[0], s[1]);
-    vp8_simple_filter(mask, s - 2, s - 1, s, s + 1);
-    s += p;
+    mask = vp8_simple_filter_mask(blimit[0], y_ptr[-2], y_ptr[-1], y_ptr[0],
+                                  y_ptr[1]);
+    vp8_simple_filter(mask, y_ptr - 2, y_ptr - 1, y_ptr, y_ptr + 1);
+    y_ptr += y_stride;
   } while (++i < 16);
 }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mfqe.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mfqe.c
index b6f8146..1fe7363 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mfqe.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mfqe.c
@@ -18,6 +18,7 @@
 
 #include "./vp8_rtcd.h"
 #include "./vpx_dsp_rtcd.h"
+#include "vp8/common/common.h"
 #include "vp8/common/postproc.h"
 #include "vpx_dsp/variance.h"
 #include "vpx_mem/vpx_mem.h"
@@ -211,6 +212,7 @@
       { 0, 1, 4, 5 }, { 2, 3, 6, 7 }, { 8, 9, 12, 13 }, { 10, 11, 14, 15 }
     };
     int i, j;
+    vp8_zero(*map);
     for (i = 0; i < 4; ++i) {
       map[i] = 1;
       for (j = 0; j < 4 && map[j]; ++j) {
@@ -233,7 +235,7 @@
 
   FRAME_TYPE frame_type = cm->frame_type;
   /* Point at base of Mb MODE_INFO list has motion vectors etc */
-  const MODE_INFO *mode_info_context = cm->show_frame_mi;
+  const MODE_INFO *mode_info_context = cm->mi;
   int mb_row;
   int mb_col;
   int totmap, map[4];
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/dspr2/idct_blk_dspr2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/dspr2/idct_blk_dspr2.c
index 899dc10..eae852d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/dspr2/idct_blk_dspr2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/dspr2/idct_blk_dspr2.c
@@ -35,41 +35,41 @@
 }
 
 void vp8_dequant_idct_add_uv_block_dspr2(short *q, short *dq,
-                                         unsigned char *dstu,
-                                         unsigned char *dstv, int stride,
+                                         unsigned char *dst_u,
+                                         unsigned char *dst_v, int stride,
                                          char *eobs) {
   int i, j;
 
   for (i = 0; i < 2; ++i) {
     for (j = 0; j < 2; ++j) {
       if (*eobs++ > 1)
-        vp8_dequant_idct_add_dspr2(q, dq, dstu, stride);
+        vp8_dequant_idct_add_dspr2(q, dq, dst_u, stride);
       else {
-        vp8_dc_only_idct_add_dspr2(q[0] * dq[0], dstu, stride, dstu, stride);
+        vp8_dc_only_idct_add_dspr2(q[0] * dq[0], dst_u, stride, dst_u, stride);
         ((int *)q)[0] = 0;
       }
 
       q += 16;
-      dstu += 4;
+      dst_u += 4;
     }
 
-    dstu += 4 * stride - 8;
+    dst_u += 4 * stride - 8;
   }
 
   for (i = 0; i < 2; ++i) {
     for (j = 0; j < 2; ++j) {
       if (*eobs++ > 1)
-        vp8_dequant_idct_add_dspr2(q, dq, dstv, stride);
+        vp8_dequant_idct_add_dspr2(q, dq, dst_v, stride);
       else {
-        vp8_dc_only_idct_add_dspr2(q[0] * dq[0], dstv, stride, dstv, stride);
+        vp8_dc_only_idct_add_dspr2(q[0] * dq[0], dst_v, stride, dst_v, stride);
         ((int *)q)[0] = 0;
       }
 
       q += 16;
-      dstv += 4;
+      dst_v += 4;
     }
 
-    dstv += 4 * stride - 8;
+    dst_v += 4 * stride - 8;
   }
 }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/mmi/idct_blk_mmi.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/mmi/idct_blk_mmi.c
index 3f690717..4fd6854 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/mmi/idct_blk_mmi.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/mmi/idct_blk_mmi.c
@@ -32,39 +32,39 @@
   }
 }
 
-void vp8_dequant_idct_add_uv_block_mmi(int16_t *q, int16_t *dq, uint8_t *dstu,
-                                       uint8_t *dstv, int stride, char *eobs) {
+void vp8_dequant_idct_add_uv_block_mmi(int16_t *q, int16_t *dq, uint8_t *dst_u,
+                                       uint8_t *dst_v, int stride, char *eobs) {
   int i, j;
 
   for (i = 0; i < 2; i++) {
     for (j = 0; j < 2; j++) {
       if (*eobs++ > 1) {
-        vp8_dequant_idct_add_mmi(q, dq, dstu, stride);
+        vp8_dequant_idct_add_mmi(q, dq, dst_u, stride);
       } else {
-        vp8_dc_only_idct_add_mmi(q[0] * dq[0], dstu, stride, dstu, stride);
+        vp8_dc_only_idct_add_mmi(q[0] * dq[0], dst_u, stride, dst_u, stride);
         memset(q, 0, 2 * sizeof(q[0]));
       }
 
       q += 16;
-      dstu += 4;
+      dst_u += 4;
     }
 
-    dstu += 4 * stride - 8;
+    dst_u += 4 * stride - 8;
   }
 
   for (i = 0; i < 2; i++) {
     for (j = 0; j < 2; j++) {
       if (*eobs++ > 1) {
-        vp8_dequant_idct_add_mmi(q, dq, dstv, stride);
+        vp8_dequant_idct_add_mmi(q, dq, dst_v, stride);
       } else {
-        vp8_dc_only_idct_add_mmi(q[0] * dq[0], dstv, stride, dstv, stride);
+        vp8_dc_only_idct_add_mmi(q[0] * dq[0], dst_v, stride, dst_v, stride);
         memset(q, 0, 2 * sizeof(q[0]));
       }
 
       q += 16;
-      dstv += 4;
+      dst_v += 4;
     }
 
-    dstv += 4 * stride - 8;
+    dst_v += 4 * stride - 8;
   }
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/msa/idct_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/msa/idct_msa.c
index 3d516d0..efad0c2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/msa/idct_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/msa/idct_msa.c
@@ -134,7 +134,7 @@
   ST4x4_UB(dst0, dst0, 0, 1, 2, 3, dest, dest_stride);
 }
 
-void vp8_short_inv_walsh4x4_msa(int16_t *input, int16_t *mb_dq_coeff) {
+void vp8_short_inv_walsh4x4_msa(int16_t *input, int16_t *mb_dqcoeff) {
   v8i16 input0, input1, tmp0, tmp1, tmp2, tmp3, out0, out1;
   const v8i16 mask0 = { 0, 1, 2, 3, 8, 9, 10, 11 };
   const v8i16 mask1 = { 4, 5, 6, 7, 12, 13, 14, 15 };
@@ -157,22 +157,22 @@
   ADD2(tmp0, 3, tmp1, 3, out0, out1);
   out0 >>= 3;
   out1 >>= 3;
-  mb_dq_coeff[0] = __msa_copy_s_h(out0, 0);
-  mb_dq_coeff[16] = __msa_copy_s_h(out0, 4);
-  mb_dq_coeff[32] = __msa_copy_s_h(out1, 0);
-  mb_dq_coeff[48] = __msa_copy_s_h(out1, 4);
-  mb_dq_coeff[64] = __msa_copy_s_h(out0, 1);
-  mb_dq_coeff[80] = __msa_copy_s_h(out0, 5);
-  mb_dq_coeff[96] = __msa_copy_s_h(out1, 1);
-  mb_dq_coeff[112] = __msa_copy_s_h(out1, 5);
-  mb_dq_coeff[128] = __msa_copy_s_h(out0, 2);
-  mb_dq_coeff[144] = __msa_copy_s_h(out0, 6);
-  mb_dq_coeff[160] = __msa_copy_s_h(out1, 2);
-  mb_dq_coeff[176] = __msa_copy_s_h(out1, 6);
-  mb_dq_coeff[192] = __msa_copy_s_h(out0, 3);
-  mb_dq_coeff[208] = __msa_copy_s_h(out0, 7);
-  mb_dq_coeff[224] = __msa_copy_s_h(out1, 3);
-  mb_dq_coeff[240] = __msa_copy_s_h(out1, 7);
+  mb_dqcoeff[0] = __msa_copy_s_h(out0, 0);
+  mb_dqcoeff[16] = __msa_copy_s_h(out0, 4);
+  mb_dqcoeff[32] = __msa_copy_s_h(out1, 0);
+  mb_dqcoeff[48] = __msa_copy_s_h(out1, 4);
+  mb_dqcoeff[64] = __msa_copy_s_h(out0, 1);
+  mb_dqcoeff[80] = __msa_copy_s_h(out0, 5);
+  mb_dqcoeff[96] = __msa_copy_s_h(out1, 1);
+  mb_dqcoeff[112] = __msa_copy_s_h(out1, 5);
+  mb_dqcoeff[128] = __msa_copy_s_h(out0, 2);
+  mb_dqcoeff[144] = __msa_copy_s_h(out0, 6);
+  mb_dqcoeff[160] = __msa_copy_s_h(out1, 2);
+  mb_dqcoeff[176] = __msa_copy_s_h(out1, 6);
+  mb_dqcoeff[192] = __msa_copy_s_h(out0, 3);
+  mb_dqcoeff[208] = __msa_copy_s_h(out0, 7);
+  mb_dqcoeff[224] = __msa_copy_s_h(out1, 3);
+  mb_dqcoeff[240] = __msa_copy_s_h(out1, 7);
 }
 
 static void dequant_idct4x4_addblk_msa(int16_t *input, int16_t *dequant_input,
@@ -359,27 +359,27 @@
   }
 }
 
-void vp8_dequant_idct_add_uv_block_msa(int16_t *q, int16_t *dq, uint8_t *dstu,
-                                       uint8_t *dstv, int32_t stride,
+void vp8_dequant_idct_add_uv_block_msa(int16_t *q, int16_t *dq, uint8_t *dst_u,
+                                       uint8_t *dst_v, int32_t stride,
                                        char *eobs) {
   int16_t *eobs_h = (int16_t *)eobs;
 
   if (eobs_h[0]) {
     if (eobs_h[0] & 0xfefe) {
-      dequant_idct4x4_addblk_2x_msa(q, dq, dstu, stride);
+      dequant_idct4x4_addblk_2x_msa(q, dq, dst_u, stride);
     } else {
-      dequant_idct_addconst_2x_msa(q, dq, dstu, stride);
+      dequant_idct_addconst_2x_msa(q, dq, dst_u, stride);
     }
   }
 
   q += 32;
-  dstu += (stride * 4);
+  dst_u += (stride * 4);
 
   if (eobs_h[1]) {
     if (eobs_h[1] & 0xfefe) {
-      dequant_idct4x4_addblk_2x_msa(q, dq, dstu, stride);
+      dequant_idct4x4_addblk_2x_msa(q, dq, dst_u, stride);
     } else {
-      dequant_idct_addconst_2x_msa(q, dq, dstu, stride);
+      dequant_idct_addconst_2x_msa(q, dq, dst_u, stride);
     }
   }
 
@@ -387,20 +387,20 @@
 
   if (eobs_h[2]) {
     if (eobs_h[2] & 0xfefe) {
-      dequant_idct4x4_addblk_2x_msa(q, dq, dstv, stride);
+      dequant_idct4x4_addblk_2x_msa(q, dq, dst_v, stride);
     } else {
-      dequant_idct_addconst_2x_msa(q, dq, dstv, stride);
+      dequant_idct_addconst_2x_msa(q, dq, dst_v, stride);
     }
   }
 
   q += 32;
-  dstv += (stride * 4);
+  dst_v += (stride * 4);
 
   if (eobs_h[3]) {
     if (eobs_h[3] & 0xfefe) {
-      dequant_idct4x4_addblk_2x_msa(q, dq, dstv, stride);
+      dequant_idct4x4_addblk_2x_msa(q, dq, dst_v, stride);
     } else {
-      dequant_idct_addconst_2x_msa(q, dq, dstv, stride);
+      dequant_idct_addconst_2x_msa(q, dq, dst_v, stride);
     }
   }
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/msa/vp8_macros_msa.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/msa/vp8_macros_msa.h
index 6bec3ad..14f8379 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/msa/vp8_macros_msa.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mips/msa/vp8_macros_msa.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_MIPS_MSA_VP8_MACROS_MSA_H_
-#define VP8_COMMON_MIPS_MSA_VP8_MACROS_MSA_H_
+#ifndef VPX_VP8_COMMON_MIPS_MSA_VP8_MACROS_MSA_H_
+#define VPX_VP8_COMMON_MIPS_MSA_VP8_MACROS_MSA_H_
 
 #include <msa.h>
 
@@ -1757,4 +1757,4 @@
                                                                 \
     tmp1_m;                                                     \
   })
-#endif /* VP8_COMMON_MIPS_MSA_VP8_MACROS_MSA_H_ */
+#endif  // VPX_VP8_COMMON_MIPS_MSA_VP8_MACROS_MSA_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/modecont.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/modecont.h
index b58c7dc..031f74f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/modecont.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/modecont.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_MODECONT_H_
-#define VP8_COMMON_MODECONT_H_
+#ifndef VPX_VP8_COMMON_MODECONT_H_
+#define VPX_VP8_COMMON_MODECONT_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -21,4 +21,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_MODECONT_H_
+#endif  // VPX_VP8_COMMON_MODECONT_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mv.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mv.h
index b6d2147..4cde12f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mv.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/mv.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_MV_H_
-#define VP8_COMMON_MV_H_
+#ifndef VPX_VP8_COMMON_MV_H_
+#define VPX_VP8_COMMON_MV_H_
 #include "vpx/vpx_integer.h"
 
 #ifdef __cplusplus
@@ -30,4 +30,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_MV_H_
+#endif  // VPX_VP8_COMMON_MV_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/onyx.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/onyx.h
index 72fba2e..05c72df 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/onyx.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/onyx.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_ONYX_H_
-#define VP8_COMMON_ONYX_H_
+#ifndef VPX_VP8_COMMON_ONYX_H_
+#define VPX_VP8_COMMON_ONYX_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -247,38 +247,38 @@
 void vp8_remove_compressor(struct VP8_COMP **comp);
 
 void vp8_init_config(struct VP8_COMP *onyx, VP8_CONFIG *oxcf);
-void vp8_change_config(struct VP8_COMP *onyx, VP8_CONFIG *oxcf);
+void vp8_change_config(struct VP8_COMP *cpi, VP8_CONFIG *oxcf);
 
-int vp8_receive_raw_frame(struct VP8_COMP *comp, unsigned int frame_flags,
+int vp8_receive_raw_frame(struct VP8_COMP *cpi, unsigned int frame_flags,
                           YV12_BUFFER_CONFIG *sd, int64_t time_stamp,
-                          int64_t end_time_stamp);
-int vp8_get_compressed_data(struct VP8_COMP *comp, unsigned int *frame_flags,
+                          int64_t end_time);
+int vp8_get_compressed_data(struct VP8_COMP *cpi, unsigned int *frame_flags,
                             size_t *size, unsigned char *dest,
                             unsigned char *dest_end, int64_t *time_stamp,
                             int64_t *time_end, int flush);
-int vp8_get_preview_raw_frame(struct VP8_COMP *comp, YV12_BUFFER_CONFIG *dest,
+int vp8_get_preview_raw_frame(struct VP8_COMP *cpi, YV12_BUFFER_CONFIG *dest,
                               vp8_ppflags_t *flags);
 
-int vp8_use_as_reference(struct VP8_COMP *comp, int ref_frame_flags);
-int vp8_update_reference(struct VP8_COMP *comp, int ref_frame_flags);
-int vp8_get_reference(struct VP8_COMP *comp,
+int vp8_use_as_reference(struct VP8_COMP *cpi, int ref_frame_flags);
+int vp8_update_reference(struct VP8_COMP *cpi, int ref_frame_flags);
+int vp8_get_reference(struct VP8_COMP *cpi,
                       enum vpx_ref_frame_type ref_frame_flag,
                       YV12_BUFFER_CONFIG *sd);
-int vp8_set_reference(struct VP8_COMP *comp,
+int vp8_set_reference(struct VP8_COMP *cpi,
                       enum vpx_ref_frame_type ref_frame_flag,
                       YV12_BUFFER_CONFIG *sd);
-int vp8_update_entropy(struct VP8_COMP *comp, int update);
-int vp8_set_roimap(struct VP8_COMP *comp, unsigned char *map, unsigned int rows,
+int vp8_update_entropy(struct VP8_COMP *cpi, int update);
+int vp8_set_roimap(struct VP8_COMP *cpi, unsigned char *map, unsigned int rows,
                    unsigned int cols, int delta_q[4], int delta_lf[4],
                    unsigned int threshold[4]);
-int vp8_set_active_map(struct VP8_COMP *comp, unsigned char *map,
+int vp8_set_active_map(struct VP8_COMP *cpi, unsigned char *map,
                        unsigned int rows, unsigned int cols);
-int vp8_set_internal_size(struct VP8_COMP *comp, VPX_SCALING horiz_mode,
+int vp8_set_internal_size(struct VP8_COMP *cpi, VPX_SCALING horiz_mode,
                           VPX_SCALING vert_mode);
-int vp8_get_quantizer(struct VP8_COMP *c);
+int vp8_get_quantizer(struct VP8_COMP *cpi);
 
 #ifdef __cplusplus
 }
 #endif
 
-#endif  // VP8_COMMON_ONYX_H_
+#endif  // VPX_VP8_COMMON_ONYX_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/onyxc_int.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/onyxc_int.h
index 9a12c7f..ef8d007 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/onyxc_int.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/onyxc_int.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_ONYXC_INT_H_
-#define VP8_COMMON_ONYXC_INT_H_
+#ifndef VPX_VP8_COMMON_ONYXC_INT_H_
+#define VPX_VP8_COMMON_ONYXC_INT_H_
 
 #include "vpx_config.h"
 #include "vp8_rtcd.h"
@@ -174,4 +174,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_ONYXC_INT_H_
+#endif  // VPX_VP8_COMMON_ONYXC_INT_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/onyxd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/onyxd.h
index d3c1b0e..e4e81aa 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/onyxd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/onyxd.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_ONYXD_H_
-#define VP8_COMMON_ONYXD_H_
+#ifndef VPX_VP8_COMMON_ONYXD_H_
+#define VPX_VP8_COMMON_ONYXD_H_
 
 /* Create/destroy static data structures. */
 #ifdef __cplusplus
@@ -41,23 +41,22 @@
 
 int vp8dx_get_setting(struct VP8D_COMP *comp, VP8D_SETTING oxst);
 
-int vp8dx_receive_compressed_data(struct VP8D_COMP *comp, size_t size,
-                                  const uint8_t *dest, int64_t time_stamp);
-int vp8dx_get_raw_frame(struct VP8D_COMP *comp, YV12_BUFFER_CONFIG *sd,
+int vp8dx_receive_compressed_data(struct VP8D_COMP *pbi, int64_t time_stamp);
+int vp8dx_get_raw_frame(struct VP8D_COMP *pbi, YV12_BUFFER_CONFIG *sd,
                         int64_t *time_stamp, int64_t *time_end_stamp,
                         vp8_ppflags_t *flags);
 int vp8dx_references_buffer(struct VP8Common *oci, int ref_frame);
 
-vpx_codec_err_t vp8dx_get_reference(struct VP8D_COMP *comp,
+vpx_codec_err_t vp8dx_get_reference(struct VP8D_COMP *pbi,
                                     enum vpx_ref_frame_type ref_frame_flag,
                                     YV12_BUFFER_CONFIG *sd);
-vpx_codec_err_t vp8dx_set_reference(struct VP8D_COMP *comp,
+vpx_codec_err_t vp8dx_set_reference(struct VP8D_COMP *pbi,
                                     enum vpx_ref_frame_type ref_frame_flag,
                                     YV12_BUFFER_CONFIG *sd);
-int vp8dx_get_quantizer(const struct VP8D_COMP *c);
+int vp8dx_get_quantizer(const struct VP8D_COMP *pbi);
 
 #ifdef __cplusplus
 }
 #endif
 
-#endif  // VP8_COMMON_ONYXD_H_
+#endif  // VPX_VP8_COMMON_ONYXD_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/postproc.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/postproc.c
index d67ee8a..c03b16b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/postproc.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/postproc.c
@@ -60,20 +60,17 @@
 }
 
 void vp8_deblock(VP8_COMMON *cm, YV12_BUFFER_CONFIG *source,
-                 YV12_BUFFER_CONFIG *post, int q, int low_var_thresh,
-                 int flag) {
+                 YV12_BUFFER_CONFIG *post, int q) {
   double level = 6.0e-05 * q * q * q - .0067 * q * q + .306 * q + .0065;
   int ppl = (int)(level + .5);
 
-  const MODE_INFO *mode_info_context = cm->show_frame_mi;
+  const MODE_INFO *mode_info_context = cm->mi;
   int mbr, mbc;
 
   /* The pixel thresholds are adjusted according to if or not the macroblock
    * is a skipped block.  */
   unsigned char *ylimits = cm->pp_limits_buffer;
   unsigned char *uvlimits = cm->pp_limits_buffer + 16 * cm->mb_cols;
-  (void)low_var_thresh;
-  (void)flag;
 
   if (ppl > 0) {
     for (mbr = 0; mbr < cm->mb_rows; ++mbr) {
@@ -116,8 +113,7 @@
   }
 }
 
-void vp8_de_noise(VP8_COMMON *cm, YV12_BUFFER_CONFIG *source,
-                  YV12_BUFFER_CONFIG *post, int q, int low_var_thresh, int flag,
+void vp8_de_noise(VP8_COMMON *cm, YV12_BUFFER_CONFIG *source, int q,
                   int uvfilter) {
   int mbr;
   double level = 6.0e-05 * q * q * q - .0067 * q * q + .306 * q + .0065;
@@ -125,9 +121,6 @@
   int mb_rows = cm->mb_rows;
   int mb_cols = cm->mb_cols;
   unsigned char *limits = cm->pp_limits_buffer;
-  (void)post;
-  (void)low_var_thresh;
-  (void)flag;
 
   memset(limits, (unsigned char)ppl, 16 * mb_cols);
 
@@ -151,124 +144,6 @@
 }
 #endif  // CONFIG_POSTPROC
 
-/* Blend the macro block with a solid colored square.  Leave the
- * edges unblended to give distinction to macro blocks in areas
- * filled with the same color block.
- */
-void vp8_blend_mb_inner_c(unsigned char *y, unsigned char *u, unsigned char *v,
-                          int y_1, int u_1, int v_1, int alpha, int stride) {
-  int i, j;
-  int y1_const = y_1 * ((1 << 16) - alpha);
-  int u1_const = u_1 * ((1 << 16) - alpha);
-  int v1_const = v_1 * ((1 << 16) - alpha);
-
-  y += 2 * stride + 2;
-  for (i = 0; i < 12; ++i) {
-    for (j = 0; j < 12; ++j) {
-      y[j] = (y[j] * alpha + y1_const) >> 16;
-    }
-    y += stride;
-  }
-
-  stride >>= 1;
-
-  u += stride + 1;
-  v += stride + 1;
-
-  for (i = 0; i < 6; ++i) {
-    for (j = 0; j < 6; ++j) {
-      u[j] = (u[j] * alpha + u1_const) >> 16;
-      v[j] = (v[j] * alpha + v1_const) >> 16;
-    }
-    u += stride;
-    v += stride;
-  }
-}
-
-/* Blend only the edge of the macro block.  Leave center
- * unblended to allow for other visualizations to be layered.
- */
-void vp8_blend_mb_outer_c(unsigned char *y, unsigned char *u, unsigned char *v,
-                          int y_1, int u_1, int v_1, int alpha, int stride) {
-  int i, j;
-  int y1_const = y_1 * ((1 << 16) - alpha);
-  int u1_const = u_1 * ((1 << 16) - alpha);
-  int v1_const = v_1 * ((1 << 16) - alpha);
-
-  for (i = 0; i < 2; ++i) {
-    for (j = 0; j < 16; ++j) {
-      y[j] = (y[j] * alpha + y1_const) >> 16;
-    }
-    y += stride;
-  }
-
-  for (i = 0; i < 12; ++i) {
-    y[0] = (y[0] * alpha + y1_const) >> 16;
-    y[1] = (y[1] * alpha + y1_const) >> 16;
-    y[14] = (y[14] * alpha + y1_const) >> 16;
-    y[15] = (y[15] * alpha + y1_const) >> 16;
-    y += stride;
-  }
-
-  for (i = 0; i < 2; ++i) {
-    for (j = 0; j < 16; ++j) {
-      y[j] = (y[j] * alpha + y1_const) >> 16;
-    }
-    y += stride;
-  }
-
-  stride >>= 1;
-
-  for (j = 0; j < 8; ++j) {
-    u[j] = (u[j] * alpha + u1_const) >> 16;
-    v[j] = (v[j] * alpha + v1_const) >> 16;
-  }
-  u += stride;
-  v += stride;
-
-  for (i = 0; i < 6; ++i) {
-    u[0] = (u[0] * alpha + u1_const) >> 16;
-    v[0] = (v[0] * alpha + v1_const) >> 16;
-
-    u[7] = (u[7] * alpha + u1_const) >> 16;
-    v[7] = (v[7] * alpha + v1_const) >> 16;
-
-    u += stride;
-    v += stride;
-  }
-
-  for (j = 0; j < 8; ++j) {
-    u[j] = (u[j] * alpha + u1_const) >> 16;
-    v[j] = (v[j] * alpha + v1_const) >> 16;
-  }
-}
-
-void vp8_blend_b_c(unsigned char *y, unsigned char *u, unsigned char *v,
-                   int y_1, int u_1, int v_1, int alpha, int stride) {
-  int i, j;
-  int y1_const = y_1 * ((1 << 16) - alpha);
-  int u1_const = u_1 * ((1 << 16) - alpha);
-  int v1_const = v_1 * ((1 << 16) - alpha);
-
-  for (i = 0; i < 4; ++i) {
-    for (j = 0; j < 4; ++j) {
-      y[j] = (y[j] * alpha + y1_const) >> 16;
-    }
-    y += stride;
-  }
-
-  stride >>= 1;
-
-  for (i = 0; i < 2; ++i) {
-    for (j = 0; j < 2; ++j) {
-      u[j] = (u[j] * alpha + u1_const) >> 16;
-      v[j] = (v[j] * alpha + v1_const) >> 16;
-    }
-    u += stride;
-    v += stride;
-  }
-}
-
 #if CONFIG_POSTPROC
 int vp8_post_proc_frame(VP8_COMMON *oci, YV12_BUFFER_CONFIG *dest,
                         vp8_ppflags_t *ppflags) {
@@ -325,7 +200,7 @@
   vpx_clear_system_state();
 
   if ((flags & VP8D_MFQE) && oci->postproc_state.last_frame_valid &&
-      oci->current_video_frame >= 2 &&
+      oci->current_video_frame > 10 &&
       oci->postproc_state.last_base_qindex < 60 &&
       oci->base_qindex - oci->postproc_state.last_base_qindex >= 20) {
     vp8_multiframe_quality_enhance(oci);
@@ -334,11 +209,10 @@
       vp8_yv12_copy_frame(&oci->post_proc_buffer, &oci->post_proc_buffer_int);
       if (flags & VP8D_DEMACROBLOCK) {
         vp8_deblock(oci, &oci->post_proc_buffer_int, &oci->post_proc_buffer,
-                    q + (deblock_level - 5) * 10, 1, 0);
+                    q + (deblock_level - 5) * 10);
         vp8_de_mblock(&oci->post_proc_buffer, q + (deblock_level - 5) * 10);
       } else if (flags & VP8D_DEBLOCK) {
-        vp8_deblock(oci, &oci->post_proc_buffer_int, &oci->post_proc_buffer, q,
-                    1, 0);
+        vp8_deblock(oci, &oci->post_proc_buffer_int, &oci->post_proc_buffer, q);
       }
     }
     /* Move partially towards the base q of the previous frame */
@@ -346,12 +220,12 @@
         (3 * oci->postproc_state.last_base_qindex + oci->base_qindex) >> 2;
   } else if (flags & VP8D_DEMACROBLOCK) {
     vp8_deblock(oci, oci->frame_to_show, &oci->post_proc_buffer,
-                q + (deblock_level - 5) * 10, 1, 0);
+                q + (deblock_level - 5) * 10);
     vp8_de_mblock(&oci->post_proc_buffer, q + (deblock_level - 5) * 10);
 
     oci->postproc_state.last_base_qindex = oci->base_qindex;
   } else if (flags & VP8D_DEBLOCK) {
-    vp8_deblock(oci, oci->frame_to_show, &oci->post_proc_buffer, q, 1, 0);
+    vp8_deblock(oci, oci->frame_to_show, &oci->post_proc_buffer, q);
     oci->postproc_state.last_base_qindex = oci->base_qindex;
   } else {
     vp8_yv12_copy_frame(oci->frame_to_show, &oci->post_proc_buffer);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/postproc.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/postproc.h
index 7be112b..492c52a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/postproc.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/postproc.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_POSTPROC_H_
-#define VP8_COMMON_POSTPROC_H_
+#ifndef VPX_VP8_COMMON_POSTPROC_H_
+#define VPX_VP8_COMMON_POSTPROC_H_
 
 #include "vpx_ports/mem.h"
 struct postproc_state {
@@ -27,14 +27,13 @@
 extern "C" {
 #endif
 int vp8_post_proc_frame(struct VP8Common *oci, YV12_BUFFER_CONFIG *dest,
-                        vp8_ppflags_t *flags);
+                        vp8_ppflags_t *ppflags);
 
-void vp8_de_noise(struct VP8Common *oci, YV12_BUFFER_CONFIG *source,
-                  YV12_BUFFER_CONFIG *post, int q, int low_var_thresh, int flag,
+void vp8_de_noise(struct VP8Common *cm, YV12_BUFFER_CONFIG *source, int q,
                   int uvfilter);
 
-void vp8_deblock(struct VP8Common *oci, YV12_BUFFER_CONFIG *source,
-                 YV12_BUFFER_CONFIG *post, int q, int low_var_thresh, int flag);
+void vp8_deblock(struct VP8Common *cm, YV12_BUFFER_CONFIG *source,
+                 YV12_BUFFER_CONFIG *post, int q);
 
 #define MFQE_PRECISION 4
 
@@ -43,4 +42,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_POSTPROC_H_
+#endif  // VPX_VP8_COMMON_POSTPROC_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/ppflags.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/ppflags.h
index 96e3af6..bdf0873 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/ppflags.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/ppflags.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_PPFLAGS_H_
-#define VP8_COMMON_PPFLAGS_H_
+#ifndef VPX_VP8_COMMON_PPFLAGS_H_
+#define VPX_VP8_COMMON_PPFLAGS_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -36,4 +36,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_PPFLAGS_H_
+#endif  // VPX_VP8_COMMON_PPFLAGS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/quant_common.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/quant_common.h
index ff4203d..049840a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/quant_common.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/quant_common.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_QUANT_COMMON_H_
-#define VP8_COMMON_QUANT_COMMON_H_
+#ifndef VPX_VP8_COMMON_QUANT_COMMON_H_
+#define VPX_VP8_COMMON_QUANT_COMMON_H_
 
 #include "string.h"
 #include "blockd.h"
@@ -30,4 +30,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_QUANT_COMMON_H_
+#endif  // VPX_VP8_COMMON_QUANT_COMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconinter.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconinter.c
index 48892c9..2cb0709 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconinter.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconinter.c
@@ -333,6 +333,13 @@
   _16x16mv.as_mv.row &= x->fullpixel_mask;
   _16x16mv.as_mv.col &= x->fullpixel_mask;
 
+  if (2 * _16x16mv.as_mv.col < (x->mb_to_left_edge - (19 << 3)) ||
+      2 * _16x16mv.as_mv.col > x->mb_to_right_edge + (18 << 3) ||
+      2 * _16x16mv.as_mv.row < (x->mb_to_top_edge - (19 << 3)) ||
+      2 * _16x16mv.as_mv.row > x->mb_to_bottom_edge + (18 << 3)) {
+    return;
+  }
+
   pre_stride >>= 1;
   offset = (_16x16mv.as_mv.row >> 3) * pre_stride + (_16x16mv.as_mv.col >> 3);
   uptr = x->pre.u_buffer + offset;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconinter.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconinter.h
index 4cdd4fe..974e7ce 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconinter.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconinter.h
@@ -8,30 +8,29 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_RECONINTER_H_
-#define VP8_COMMON_RECONINTER_H_
+#ifndef VPX_VP8_COMMON_RECONINTER_H_
+#define VPX_VP8_COMMON_RECONINTER_H_
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
-extern void vp8_build_inter_predictors_mb(MACROBLOCKD *x);
-extern void vp8_build_inter16x16_predictors_mb(
-    MACROBLOCKD *x, unsigned char *dst_y, unsigned char *dst_u,
-    unsigned char *dst_v, int dst_ystride, int dst_uvstride);
+void vp8_build_inter_predictors_mb(MACROBLOCKD *xd);
+void vp8_build_inter16x16_predictors_mb(MACROBLOCKD *x, unsigned char *dst_y,
+                                        unsigned char *dst_u,
+                                        unsigned char *dst_v, int dst_ystride,
+                                        int dst_uvstride);
 
-extern void vp8_build_inter16x16_predictors_mby(MACROBLOCKD *x,
-                                                unsigned char *dst_y,
-                                                int dst_ystride);
-extern void vp8_build_inter_predictors_b(BLOCKD *d, int pitch,
-                                         unsigned char *base_pre,
-                                         int pre_stride, vp8_subpix_fn_t sppf);
+void vp8_build_inter16x16_predictors_mby(MACROBLOCKD *x, unsigned char *dst_y,
+                                         int dst_ystride);
+void vp8_build_inter_predictors_b(BLOCKD *d, int pitch, unsigned char *base_pre,
+                                  int pre_stride, vp8_subpix_fn_t sppf);
 
-extern void vp8_build_inter16x16_predictors_mbuv(MACROBLOCKD *x);
-extern void vp8_build_inter4x4_predictors_mbuv(MACROBLOCKD *x);
+void vp8_build_inter16x16_predictors_mbuv(MACROBLOCKD *x);
+void vp8_build_inter4x4_predictors_mbuv(MACROBLOCKD *x);
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_RECONINTER_H_
+#endif  // VPX_VP8_COMMON_RECONINTER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconintra.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconintra.h
index fd7c725..029ac00 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconintra.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconintra.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_RECONINTRA_H_
-#define VP8_COMMON_RECONINTRA_H_
+#ifndef VPX_VP8_COMMON_RECONINTRA_H_
+#define VPX_VP8_COMMON_RECONINTRA_H_
 
 #include "vp8/common/blockd.h"
 
@@ -32,4 +32,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_RECONINTRA_H_
+#endif  // VPX_VP8_COMMON_RECONINTRA_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconintra4x4.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconintra4x4.c
index 64d33a2..be936df 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconintra4x4.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconintra4x4.c
@@ -16,7 +16,7 @@
 #include "blockd.h"
 #include "reconintra4x4.h"
 #include "vp8/common/common.h"
-#include "vpx_ports/mem.h"
+#include "vpx_ports/compiler_attributes.h"
 
 typedef void (*intra_pred_fn)(uint8_t *dst, ptrdiff_t stride,
                               const uint8_t *above, const uint8_t *left);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconintra4x4.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconintra4x4.h
index e17fc58..3618ec5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconintra4x4.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/reconintra4x4.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_RECONINTRA4X4_H_
-#define VP8_COMMON_RECONINTRA4X4_H_
+#ifndef VPX_VP8_COMMON_RECONINTRA4X4_H_
+#define VPX_VP8_COMMON_RECONINTRA4X4_H_
 #include "vp8/common/blockd.h"
 
 #ifdef __cplusplus
@@ -31,7 +31,7 @@
   *dst_ptr2 = *src_ptr;
 }
 
-void vp8_intra4x4_predict(unsigned char *Above, unsigned char *yleft,
+void vp8_intra4x4_predict(unsigned char *above, unsigned char *yleft,
                           int left_stride, B_PREDICTION_MODE b_mode,
                           unsigned char *dst, int dst_stride,
                           unsigned char top_left);
@@ -42,4 +42,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_RECONINTRA4X4_H_
+#endif  // VPX_VP8_COMMON_RECONINTRA4X4_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/rtcd_defs.pl b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/rtcd_defs.pl
index 3df745f..8452b5e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/rtcd_defs.pl
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/rtcd_defs.pl
@@ -31,10 +31,10 @@
 #
 # Dequant
 #
-add_proto qw/void vp8_dequantize_b/, "struct blockd*, short *dqc";
+add_proto qw/void vp8_dequantize_b/, "struct blockd*, short *DQC";
 specialize qw/vp8_dequantize_b mmx neon msa mmi/;
 
-add_proto qw/void vp8_dequant_idct_add/, "short *input, short *dq, unsigned char *output, int stride";
+add_proto qw/void vp8_dequant_idct_add/, "short *input, short *dq, unsigned char *dest, int stride";
 specialize qw/vp8_dequant_idct_add mmx neon dspr2 msa mmi/;
 
 add_proto qw/void vp8_dequant_idct_add_y_block/, "short *q, short *dq, unsigned char *dst, int stride, char *eobs";
@@ -46,20 +46,20 @@
 #
 # Loopfilter
 #
-add_proto qw/void vp8_loop_filter_mbv/, "unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi";
+add_proto qw/void vp8_loop_filter_mbv/, "unsigned char *y_ptr, unsigned char *u_ptr, unsigned char *v_ptr, int y_stride, int uv_stride, struct loop_filter_info *lfi";
 specialize qw/vp8_loop_filter_mbv sse2 neon dspr2 msa mmi/;
 
-add_proto qw/void vp8_loop_filter_bv/, "unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi";
+add_proto qw/void vp8_loop_filter_bv/, "unsigned char *y_ptr, unsigned char *u_ptr, unsigned char *v_ptr, int y_stride, int uv_stride, struct loop_filter_info *lfi";
 specialize qw/vp8_loop_filter_bv sse2 neon dspr2 msa mmi/;
 
-add_proto qw/void vp8_loop_filter_mbh/, "unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi";
+add_proto qw/void vp8_loop_filter_mbh/, "unsigned char *y_ptr, unsigned char *u_ptr, unsigned char *v_ptr, int y_stride, int uv_stride, struct loop_filter_info *lfi";
 specialize qw/vp8_loop_filter_mbh sse2 neon dspr2 msa mmi/;
 
-add_proto qw/void vp8_loop_filter_bh/, "unsigned char *y, unsigned char *u, unsigned char *v, int ystride, int uv_stride, struct loop_filter_info *lfi";
+add_proto qw/void vp8_loop_filter_bh/, "unsigned char *y_ptr, unsigned char *u_ptr, unsigned char *v_ptr, int y_stride, int uv_stride, struct loop_filter_info *lfi";
 specialize qw/vp8_loop_filter_bh sse2 neon dspr2 msa mmi/;
 
 
-add_proto qw/void vp8_loop_filter_simple_mbv/, "unsigned char *y, int ystride, const unsigned char *blimit";
+add_proto qw/void vp8_loop_filter_simple_mbv/, "unsigned char *y_ptr, int y_stride, const unsigned char *blimit";
 specialize qw/vp8_loop_filter_simple_mbv sse2 neon msa mmi/;
 $vp8_loop_filter_simple_mbv_c=vp8_loop_filter_simple_vertical_edge_c;
 $vp8_loop_filter_simple_mbv_sse2=vp8_loop_filter_simple_vertical_edge_sse2;
@@ -67,7 +67,7 @@
 $vp8_loop_filter_simple_mbv_msa=vp8_loop_filter_simple_vertical_edge_msa;
 $vp8_loop_filter_simple_mbv_mmi=vp8_loop_filter_simple_vertical_edge_mmi;
 
-add_proto qw/void vp8_loop_filter_simple_mbh/, "unsigned char *y, int ystride, const unsigned char *blimit";
+add_proto qw/void vp8_loop_filter_simple_mbh/, "unsigned char *y_ptr, int y_stride, const unsigned char *blimit";
 specialize qw/vp8_loop_filter_simple_mbh sse2 neon msa mmi/;
 $vp8_loop_filter_simple_mbh_c=vp8_loop_filter_simple_horizontal_edge_c;
 $vp8_loop_filter_simple_mbh_sse2=vp8_loop_filter_simple_horizontal_edge_sse2;
@@ -75,7 +75,7 @@
 $vp8_loop_filter_simple_mbh_msa=vp8_loop_filter_simple_horizontal_edge_msa;
 $vp8_loop_filter_simple_mbh_mmi=vp8_loop_filter_simple_horizontal_edge_mmi;
 
-add_proto qw/void vp8_loop_filter_simple_bv/, "unsigned char *y, int ystride, const unsigned char *blimit";
+add_proto qw/void vp8_loop_filter_simple_bv/, "unsigned char *y_ptr, int y_stride, const unsigned char *blimit";
 specialize qw/vp8_loop_filter_simple_bv sse2 neon msa mmi/;
 $vp8_loop_filter_simple_bv_c=vp8_loop_filter_bvs_c;
 $vp8_loop_filter_simple_bv_sse2=vp8_loop_filter_bvs_sse2;
@@ -83,7 +83,7 @@
 $vp8_loop_filter_simple_bv_msa=vp8_loop_filter_bvs_msa;
 $vp8_loop_filter_simple_bv_mmi=vp8_loop_filter_bvs_mmi;
 
-add_proto qw/void vp8_loop_filter_simple_bh/, "unsigned char *y, int ystride, const unsigned char *blimit";
+add_proto qw/void vp8_loop_filter_simple_bh/, "unsigned char *y_ptr, int y_stride, const unsigned char *blimit";
 specialize qw/vp8_loop_filter_simple_bh sse2 neon msa mmi/;
 $vp8_loop_filter_simple_bh_c=vp8_loop_filter_bhs_c;
 $vp8_loop_filter_simple_bh_sse2=vp8_loop_filter_bhs_sse2;
@@ -95,31 +95,31 @@
 # IDCT
 #
 #idct16
-add_proto qw/void vp8_short_idct4x4llm/, "short *input, unsigned char *pred, int pitch, unsigned char *dst, int dst_stride";
+add_proto qw/void vp8_short_idct4x4llm/, "short *input, unsigned char *pred_ptr, int pred_stride, unsigned char *dst_ptr, int dst_stride";
 specialize qw/vp8_short_idct4x4llm mmx neon dspr2 msa mmi/;
 
 #iwalsh1
-add_proto qw/void vp8_short_inv_walsh4x4_1/, "short *input, short *output";
+add_proto qw/void vp8_short_inv_walsh4x4_1/, "short *input, short *mb_dqcoeff";
 specialize qw/vp8_short_inv_walsh4x4_1 dspr2/;
 
 #iwalsh16
-add_proto qw/void vp8_short_inv_walsh4x4/, "short *input, short *output";
+add_proto qw/void vp8_short_inv_walsh4x4/, "short *input, short *mb_dqcoeff";
 specialize qw/vp8_short_inv_walsh4x4 sse2 neon dspr2 msa mmi/;
 
 #idct1_scalar_add
-add_proto qw/void vp8_dc_only_idct_add/, "short input, unsigned char *pred, int pred_stride, unsigned char *dst, int dst_stride";
+add_proto qw/void vp8_dc_only_idct_add/, "short input_dc, unsigned char *pred_ptr, int pred_stride, unsigned char *dst_ptr, int dst_stride";
 specialize qw/vp8_dc_only_idct_add mmx neon dspr2 msa mmi/;
 
 #
 # RECON
 #
-add_proto qw/void vp8_copy_mem16x16/, "unsigned char *src, int src_pitch, unsigned char *dst, int dst_pitch";
+add_proto qw/void vp8_copy_mem16x16/, "unsigned char *src, int src_stride, unsigned char *dst, int dst_stride";
 specialize qw/vp8_copy_mem16x16 sse2 neon dspr2 msa mmi/;
 
-add_proto qw/void vp8_copy_mem8x8/, "unsigned char *src, int src_pitch, unsigned char *dst, int dst_pitch";
+add_proto qw/void vp8_copy_mem8x8/, "unsigned char *src, int src_stride, unsigned char *dst, int dst_stride";
 specialize qw/vp8_copy_mem8x8 mmx neon dspr2 msa mmi/;
 
-add_proto qw/void vp8_copy_mem8x4/, "unsigned char *src, int src_pitch, unsigned char *dst, int dst_pitch";
+add_proto qw/void vp8_copy_mem8x4/, "unsigned char *src, int src_stride, unsigned char *dst, int dst_stride";
 specialize qw/vp8_copy_mem8x4 mmx neon dspr2 msa mmi/;
 
 #
@@ -127,11 +127,11 @@
 #
 if (vpx_config("CONFIG_POSTPROC") eq "yes") {
 
-    add_proto qw/void vp8_blend_mb_inner/, "unsigned char *y, unsigned char *u, unsigned char *v, int y1, int u1, int v1, int alpha, int stride";
+    add_proto qw/void vp8_blend_mb_inner/, "unsigned char *y, unsigned char *u, unsigned char *v, int y_1, int u_1, int v_1, int alpha, int stride";
 
-    add_proto qw/void vp8_blend_mb_outer/, "unsigned char *y, unsigned char *u, unsigned char *v, int y1, int u1, int v1, int alpha, int stride";
+    add_proto qw/void vp8_blend_mb_outer/, "unsigned char *y, unsigned char *u, unsigned char *v, int y_1, int u_1, int v_1, int alpha, int stride";
 
-    add_proto qw/void vp8_blend_b/, "unsigned char *y, unsigned char *u, unsigned char *v, int y1, int u1, int v1, int alpha, int stride";
+    add_proto qw/void vp8_blend_b/, "unsigned char *y, unsigned char *u, unsigned char *v, int y_1, int u_1, int v_1, int alpha, int stride";
 
     add_proto qw/void vp8_filter_by_weight16x16/, "unsigned char *src, int src_stride, unsigned char *dst, int dst_stride, int src_weight";
     specialize qw/vp8_filter_by_weight16x16 sse2 msa/;
@@ -145,29 +145,29 @@
 #
 # Subpixel
 #
-add_proto qw/void vp8_sixtap_predict16x16/, "unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch";
+add_proto qw/void vp8_sixtap_predict16x16/, "unsigned char *src_ptr, int src_pixels_per_line, int xoffset, int yoffset, unsigned char *dst_ptr, int dst_pitch";
 specialize qw/vp8_sixtap_predict16x16 sse2 ssse3 neon dspr2 msa mmi/;
 
-add_proto qw/void vp8_sixtap_predict8x8/, "unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch";
+add_proto qw/void vp8_sixtap_predict8x8/, "unsigned char *src_ptr, int src_pixels_per_line, int xoffset, int yoffset, unsigned char *dst_ptr, int dst_pitch";
 specialize qw/vp8_sixtap_predict8x8 sse2 ssse3 neon dspr2 msa mmi/;
 
-add_proto qw/void vp8_sixtap_predict8x4/, "unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch";
+add_proto qw/void vp8_sixtap_predict8x4/, "unsigned char *src_ptr, int src_pixels_per_line, int xoffset, int yoffset, unsigned char *dst_ptr, int dst_pitch";
 specialize qw/vp8_sixtap_predict8x4 sse2 ssse3 neon dspr2 msa mmi/;
 
-add_proto qw/void vp8_sixtap_predict4x4/, "unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch";
+add_proto qw/void vp8_sixtap_predict4x4/, "unsigned char *src_ptr, int src_pixels_per_line, int xoffset, int yoffset, unsigned char *dst_ptr, int dst_pitch";
 specialize qw/vp8_sixtap_predict4x4 mmx ssse3 neon dspr2 msa mmi/;
 
-add_proto qw/void vp8_bilinear_predict16x16/, "unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch";
+add_proto qw/void vp8_bilinear_predict16x16/, "unsigned char *src_ptr, int src_pixels_per_line, int xoffset, int yoffset, unsigned char *dst_ptr, int dst_pitch";
 specialize qw/vp8_bilinear_predict16x16 sse2 ssse3 neon msa/;
 
-add_proto qw/void vp8_bilinear_predict8x8/, "unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch";
+add_proto qw/void vp8_bilinear_predict8x8/, "unsigned char *src_ptr, int src_pixels_per_line, int xoffset, int yoffset, unsigned char *dst_ptr, int dst_pitch";
 specialize qw/vp8_bilinear_predict8x8 sse2 ssse3 neon msa/;
 
-add_proto qw/void vp8_bilinear_predict8x4/, "unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch";
-specialize qw/vp8_bilinear_predict8x4 mmx neon msa/;
+add_proto qw/void vp8_bilinear_predict8x4/, "unsigned char *src_ptr, int src_pixels_per_line, int xoffset, int yoffset, unsigned char *dst_ptr, int dst_pitch";
+specialize qw/vp8_bilinear_predict8x4 sse2 neon msa/;
 
-add_proto qw/void vp8_bilinear_predict4x4/, "unsigned char *src, int src_pitch, int xofst, int yofst, unsigned char *dst, int dst_pitch";
-specialize qw/vp8_bilinear_predict4x4 mmx neon msa/;
+add_proto qw/void vp8_bilinear_predict4x4/, "unsigned char *src_ptr, int src_pixels_per_line, int xoffset, int yoffset, unsigned char *dst_ptr, int dst_pitch";
+specialize qw/vp8_bilinear_predict4x4 sse2 neon msa/;
 
 #
 # Encoder functions below this point.
@@ -177,10 +177,8 @@
 #
 # Block copy
 #
-if ($opts{arch} =~ /x86/) {
-    add_proto qw/void vp8_copy32xn/, "const unsigned char *src_ptr, int source_stride, unsigned char *dst_ptr, int dst_stride, int n";
-    specialize qw/vp8_copy32xn sse2 sse3/;
-}
+add_proto qw/void vp8_copy32xn/, "const unsigned char *src_ptr, int src_stride, unsigned char *dst_ptr, int dst_stride, int height";
+specialize qw/vp8_copy32xn sse2 sse3/;
 
 #
 # Forward DCT
@@ -223,7 +221,7 @@
 $vp8_full_search_sad_sse3=vp8_full_search_sadx3;
 $vp8_full_search_sad_sse4_1=vp8_full_search_sadx8;
 
-add_proto qw/int vp8_refining_search_sad/, "struct macroblock *x, struct block *b, struct blockd *d, union int_mv *ref_mv, int sad_per_bit, int distance, struct variance_vtable *fn_ptr, int *mvcost[2], union int_mv *center_mv";
+add_proto qw/int vp8_refining_search_sad/, "struct macroblock *x, struct block *b, struct blockd *d, union int_mv *ref_mv, int error_per_bit, int search_range, struct variance_vtable *fn_ptr, int *mvcost[2], union int_mv *center_mv";
 specialize qw/vp8_refining_search_sad sse2 msa/;
 $vp8_refining_search_sad_sse2=vp8_refining_search_sadx4;
 $vp8_refining_search_sad_msa=vp8_refining_search_sadx4;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/setupintrarecon.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/setupintrarecon.h
index f3ffa16..903a536 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/setupintrarecon.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/setupintrarecon.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_SETUPINTRARECON_H_
-#define VP8_COMMON_SETUPINTRARECON_H_
+#ifndef VPX_VP8_COMMON_SETUPINTRARECON_H_
+#define VPX_VP8_COMMON_SETUPINTRARECON_H_
 
 #include "./vpx_config.h"
 #include "vpx_scale/yv12config.h"
@@ -37,4 +37,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_SETUPINTRARECON_H_
+#endif  // VPX_VP8_COMMON_SETUPINTRARECON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/swapyv12buffer.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/swapyv12buffer.h
index 0ee9a52..e37c471 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/swapyv12buffer.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/swapyv12buffer.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_SWAPYV12BUFFER_H_
-#define VP8_COMMON_SWAPYV12BUFFER_H_
+#ifndef VPX_VP8_COMMON_SWAPYV12BUFFER_H_
+#define VPX_VP8_COMMON_SWAPYV12BUFFER_H_
 
 #include "vpx_scale/yv12config.h"
 
@@ -24,4 +24,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_SWAPYV12BUFFER_H_
+#endif  // VPX_VP8_COMMON_SWAPYV12BUFFER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/systemdependent.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/systemdependent.h
index 3d44e37..83a5513 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/systemdependent.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/systemdependent.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_SYSTEMDEPENDENT_H_
-#define VP8_COMMON_SYSTEMDEPENDENT_H_
+#ifndef VPX_VP8_COMMON_SYSTEMDEPENDENT_H_
+#define VPX_VP8_COMMON_SYSTEMDEPENDENT_H_
 
 #include "vpx_config.h"
 
@@ -24,4 +24,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_SYSTEMDEPENDENT_H_
+#endif  // VPX_VP8_COMMON_SYSTEMDEPENDENT_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/threading.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/threading.h
index b082bf1..f921369 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/threading.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/threading.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_THREADING_H_
-#define VP8_COMMON_THREADING_H_
+#ifndef VPX_VP8_COMMON_THREADING_H_
+#define VPX_VP8_COMMON_THREADING_H_
 
 #include "./vpx_config.h"
 
@@ -185,7 +185,7 @@
 
 #endif
 
-#if ARCH_X86 || ARCH_X86_64
+#if VPX_ARCH_X86 || VPX_ARCH_X86_64
 #include "vpx_ports/x86.h"
 #else
 #define x86_pause_hint()
@@ -209,4 +209,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_THREADING_H_
+#endif  // VPX_VP8_COMMON_THREADING_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/treecoder.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/treecoder.c
index 9feb40a..f1e78f4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/treecoder.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/treecoder.c
@@ -12,6 +12,7 @@
 #include <stdio.h>
 
 #include "vp8/common/treecoder.h"
+#include "vpx/vpx_integer.h"
 
 static void tree2tok(struct vp8_token_struct *const p, vp8_tree t, int i, int v,
                      int L) {
@@ -79,7 +80,7 @@
                                       vp8_prob probs[/* n-1 */],
                                       unsigned int branch_ct[/* n-1 */][2],
                                       const unsigned int num_events[/* n */],
-                                      unsigned int Pfac, int rd) {
+                                      unsigned int Pfactor, int Round) {
   const int tree_len = n - 1;
   int t = 0;
 
@@ -89,10 +90,10 @@
     const unsigned int *const c = branch_ct[t];
     const unsigned int tot = c[0] + c[1];
 
-    assert(tot < (1 << 24)); /* no overflow below */
-
     if (tot) {
-      const unsigned int p = ((c[0] * Pfac) + (rd ? tot >> 1 : 0)) / tot;
+      const unsigned int p =
+          (unsigned int)(((uint64_t)c[0] * Pfactor) + (Round ? tot >> 1 : 0)) /
+          tot;
       probs[t] = p < 256 ? (p ? p : 1) : 255; /* agree w/old version for now */
     } else {
       probs[t] = vp8_prob_half;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/treecoder.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/treecoder.h
index d8503cf..d7d8d0e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/treecoder.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/treecoder.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_TREECODER_H_
-#define VP8_COMMON_TREECODER_H_
+#ifndef VPX_VP8_COMMON_TREECODER_H_
+#define VPX_VP8_COMMON_TREECODER_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -32,7 +32,7 @@
 typedef const bool_writer c_bool_writer;
 typedef const bool_reader c_bool_reader;
 
-#define vp8_complement(x) (255 - x)
+#define vp8_complement(x) (255 - (x))
 
 /* We build coding trees compactly in arrays.
    Each node of the tree is a pair of vp8_tree_indices.
@@ -79,4 +79,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_TREECODER_H_
+#endif  // VPX_VP8_COMMON_TREECODER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/vp8_entropymodedata.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/vp8_entropymodedata.h
index 3696c08..3fc942e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/vp8_entropymodedata.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/vp8_entropymodedata.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_VP8_ENTROPYMODEDATA_H_
-#define VP8_COMMON_VP8_ENTROPYMODEDATA_H_
+#ifndef VPX_VP8_COMMON_VP8_ENTROPYMODEDATA_H_
+#define VPX_VP8_COMMON_VP8_ENTROPYMODEDATA_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -169,4 +169,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_VP8_ENTROPYMODEDATA_H_
+#endif  // VPX_VP8_COMMON_VP8_ENTROPYMODEDATA_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/vp8_loopfilter.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/vp8_loopfilter.c
index 9fb1250..9c9e5f3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/vp8_loopfilter.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/vp8_loopfilter.c
@@ -219,13 +219,11 @@
 }
 
 void vp8_loop_filter_row_simple(VP8_COMMON *cm, MODE_INFO *mode_info_context,
-                                int mb_row, int post_ystride, int post_uvstride,
-                                unsigned char *y_ptr, unsigned char *u_ptr,
-                                unsigned char *v_ptr) {
+                                int mb_row, int post_ystride,
+                                unsigned char *y_ptr) {
   int mb_col;
   int filter_level;
   loop_filter_info_n *lfi_n = &cm->lf_info;
-  (void)post_uvstride;
 
   for (mb_col = 0; mb_col < cm->mb_cols; ++mb_col) {
     int skip_lf = (mode_info_context->mbmi.mode != B_PRED &&
@@ -258,8 +256,6 @@
     }
 
     y_ptr += 16;
-    u_ptr += 8;
-    v_ptr += 8;
 
     mode_info_context++; /* step to next MB */
   }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/vp8_skin_detection.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/vp8_skin_detection.h
index 4d27f5e..ef0e4ae 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/vp8_skin_detection.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/vp8_skin_detection.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_COMMON_SKIN_DETECTION_H_
-#define VP8_COMMON_SKIN_DETECTION_H_
+#ifndef VPX_VP8_COMMON_VP8_SKIN_DETECTION_H_
+#define VPX_VP8_COMMON_VP8_SKIN_DETECTION_H_
 
 #include "vp8/encoder/onyx_int.h"
 #include "vpx/vpx_integer.h"
@@ -44,4 +44,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_COMMON_SKIN_DETECTION_H_
+#endif  // VPX_VP8_COMMON_VP8_SKIN_DETECTION_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/bilinear_filter_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/bilinear_filter_sse2.c
new file mode 100644
index 0000000..9bf65d8
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/bilinear_filter_sse2.c
@@ -0,0 +1,336 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <assert.h>
+#include <xmmintrin.h>
+
+#include "./vp8_rtcd.h"
+#include "./vpx_config.h"
+#include "vp8/common/filter.h"
+#include "vpx_dsp/x86/mem_sse2.h"
+#include "vpx_ports/mem.h"
+
+static INLINE void horizontal_16x16(uint8_t *src, const int stride,
+                                    uint16_t *dst, const int xoffset) {
+  int h;
+  const __m128i zero = _mm_setzero_si128();
+
+  if (xoffset == 0) {
+    for (h = 0; h < 17; ++h) {
+      const __m128i a = _mm_loadu_si128((__m128i *)src);
+      const __m128i a_lo = _mm_unpacklo_epi8(a, zero);
+      const __m128i a_hi = _mm_unpackhi_epi8(a, zero);
+      _mm_store_si128((__m128i *)dst, a_lo);
+      _mm_store_si128((__m128i *)(dst + 8), a_hi);
+      src += stride;
+      dst += 16;
+    }
+    return;
+  }
+
+  {
+    const __m128i round_factor = _mm_set1_epi16(1 << (VP8_FILTER_SHIFT - 1));
+    const __m128i hfilter_0 = _mm_set1_epi16(vp8_bilinear_filters[xoffset][0]);
+    const __m128i hfilter_1 = _mm_set1_epi16(vp8_bilinear_filters[xoffset][1]);
+
+    for (h = 0; h < 17; ++h) {
+      const __m128i a = _mm_loadu_si128((__m128i *)src);
+      const __m128i a_lo = _mm_unpacklo_epi8(a, zero);
+      const __m128i a_hi = _mm_unpackhi_epi8(a, zero);
+      const __m128i a_lo_filtered = _mm_mullo_epi16(a_lo, hfilter_0);
+      const __m128i a_hi_filtered = _mm_mullo_epi16(a_hi, hfilter_0);
+
+      const __m128i b = _mm_loadu_si128((__m128i *)(src + 1));
+      const __m128i b_lo = _mm_unpacklo_epi8(b, zero);
+      const __m128i b_hi = _mm_unpackhi_epi8(b, zero);
+      const __m128i b_lo_filtered = _mm_mullo_epi16(b_lo, hfilter_1);
+      const __m128i b_hi_filtered = _mm_mullo_epi16(b_hi, hfilter_1);
+
+      const __m128i sum_lo = _mm_add_epi16(a_lo_filtered, b_lo_filtered);
+      const __m128i sum_hi = _mm_add_epi16(a_hi_filtered, b_hi_filtered);
+
+      const __m128i compensated_lo = _mm_add_epi16(sum_lo, round_factor);
+      const __m128i compensated_hi = _mm_add_epi16(sum_hi, round_factor);
+
+      const __m128i shifted_lo =
+          _mm_srai_epi16(compensated_lo, VP8_FILTER_SHIFT);
+      const __m128i shifted_hi =
+          _mm_srai_epi16(compensated_hi, VP8_FILTER_SHIFT);
+
+      _mm_store_si128((__m128i *)dst, shifted_lo);
+      _mm_store_si128((__m128i *)(dst + 8), shifted_hi);
+      src += stride;
+      dst += 16;
+    }
+  }
+}
+
+static INLINE void vertical_16x16(uint16_t *src, uint8_t *dst, const int stride,
+                                  const int yoffset) {
+  int h;
+
+  if (yoffset == 0) {
+    for (h = 0; h < 16; ++h) {
+      const __m128i row_lo = _mm_load_si128((__m128i *)src);
+      const __m128i row_hi = _mm_load_si128((__m128i *)(src + 8));
+      const __m128i packed = _mm_packus_epi16(row_lo, row_hi);
+      _mm_store_si128((__m128i *)dst, packed);
+      src += 16;
+      dst += stride;
+    }
+    return;
+  }
+
+  {
+    const __m128i round_factor = _mm_set1_epi16(1 << (VP8_FILTER_SHIFT - 1));
+    const __m128i vfilter_0 = _mm_set1_epi16(vp8_bilinear_filters[yoffset][0]);
+    const __m128i vfilter_1 = _mm_set1_epi16(vp8_bilinear_filters[yoffset][1]);
+
+    __m128i row_0_lo = _mm_load_si128((__m128i *)src);
+    __m128i row_0_hi = _mm_load_si128((__m128i *)(src + 8));
+    src += 16;
+    for (h = 0; h < 16; ++h) {
+      const __m128i row_0_lo_filtered = _mm_mullo_epi16(row_0_lo, vfilter_0);
+      const __m128i row_0_hi_filtered = _mm_mullo_epi16(row_0_hi, vfilter_0);
+
+      const __m128i row_1_lo = _mm_load_si128((__m128i *)src);
+      const __m128i row_1_hi = _mm_load_si128((__m128i *)(src + 8));
+      const __m128i row_1_lo_filtered = _mm_mullo_epi16(row_1_lo, vfilter_1);
+      const __m128i row_1_hi_filtered = _mm_mullo_epi16(row_1_hi, vfilter_1);
+
+      const __m128i sum_lo =
+          _mm_add_epi16(row_0_lo_filtered, row_1_lo_filtered);
+      const __m128i sum_hi =
+          _mm_add_epi16(row_0_hi_filtered, row_1_hi_filtered);
+
+      const __m128i compensated_lo = _mm_add_epi16(sum_lo, round_factor);
+      const __m128i compensated_hi = _mm_add_epi16(sum_hi, round_factor);
+
+      const __m128i shifted_lo =
+          _mm_srai_epi16(compensated_lo, VP8_FILTER_SHIFT);
+      const __m128i shifted_hi =
+          _mm_srai_epi16(compensated_hi, VP8_FILTER_SHIFT);
+
+      const __m128i packed = _mm_packus_epi16(shifted_lo, shifted_hi);
+      _mm_store_si128((__m128i *)dst, packed);
+      row_0_lo = row_1_lo;
+      row_0_hi = row_1_hi;
+      src += 16;
+      dst += stride;
+    }
+  }
+}
+
+void vp8_bilinear_predict16x16_sse2(uint8_t *src_ptr, int src_pixels_per_line,
+                                    int xoffset, int yoffset, uint8_t *dst_ptr,
+                                    int dst_pitch) {
+  DECLARE_ALIGNED(16, uint16_t, FData[16 * 17]);
+
+  assert((xoffset | yoffset) != 0);
+
+  horizontal_16x16(src_ptr, src_pixels_per_line, FData, xoffset);
+
+  vertical_16x16(FData, dst_ptr, dst_pitch, yoffset);
+}
+
+static INLINE void horizontal_8xN(uint8_t *src, const int stride, uint16_t *dst,
+                                  const int xoffset, const int height) {
+  int h;
+  const __m128i zero = _mm_setzero_si128();
+
+  if (xoffset == 0) {
+    for (h = 0; h < height; ++h) {
+      const __m128i a = _mm_loadl_epi64((__m128i *)src);
+      const __m128i a_u16 = _mm_unpacklo_epi8(a, zero);
+      _mm_store_si128((__m128i *)dst, a_u16);
+      src += stride;
+      dst += 8;
+    }
+    return;
+  }
+
+  {
+    const __m128i round_factor = _mm_set1_epi16(1 << (VP8_FILTER_SHIFT - 1));
+    const __m128i hfilter_0 = _mm_set1_epi16(vp8_bilinear_filters[xoffset][0]);
+    const __m128i hfilter_1 = _mm_set1_epi16(vp8_bilinear_filters[xoffset][1]);
+
+    // Filter horizontally. Rather than load the whole array and transpose, load
+    // 16 values (overreading) and shift to set up the second value. Do an
+    // "extra" 9th line so the vertical pass has the necessary context.
+    for (h = 0; h < height; ++h) {
+      const __m128i a = _mm_loadu_si128((__m128i *)src);
+      const __m128i b = _mm_srli_si128(a, 1);
+      const __m128i a_u16 = _mm_unpacklo_epi8(a, zero);
+      const __m128i b_u16 = _mm_unpacklo_epi8(b, zero);
+      const __m128i a_filtered = _mm_mullo_epi16(a_u16, hfilter_0);
+      const __m128i b_filtered = _mm_mullo_epi16(b_u16, hfilter_1);
+      const __m128i sum = _mm_add_epi16(a_filtered, b_filtered);
+      const __m128i compensated = _mm_add_epi16(sum, round_factor);
+      const __m128i shifted = _mm_srai_epi16(compensated, VP8_FILTER_SHIFT);
+      _mm_store_si128((__m128i *)dst, shifted);
+      src += stride;
+      dst += 8;
+    }
+  }
+}
+
+static INLINE void vertical_8xN(uint16_t *src, uint8_t *dst, const int stride,
+                                const int yoffset, const int height) {
+  int h;
+
+  if (yoffset == 0) {
+    for (h = 0; h < height; ++h) {
+      const __m128i row = _mm_load_si128((__m128i *)src);
+      const __m128i packed = _mm_packus_epi16(row, row);
+      _mm_storel_epi64((__m128i *)dst, packed);
+      src += 8;
+      dst += stride;
+    }
+    return;
+  }
+
+  {
+    const __m128i round_factor = _mm_set1_epi16(1 << (VP8_FILTER_SHIFT - 1));
+    const __m128i vfilter_0 = _mm_set1_epi16(vp8_bilinear_filters[yoffset][0]);
+    const __m128i vfilter_1 = _mm_set1_epi16(vp8_bilinear_filters[yoffset][1]);
+
+    __m128i row_0 = _mm_load_si128((__m128i *)src);
+    src += 8;
+    for (h = 0; h < height; ++h) {
+      const __m128i row_1 = _mm_load_si128((__m128i *)src);
+      const __m128i row_0_filtered = _mm_mullo_epi16(row_0, vfilter_0);
+      const __m128i row_1_filtered = _mm_mullo_epi16(row_1, vfilter_1);
+      const __m128i sum = _mm_add_epi16(row_0_filtered, row_1_filtered);
+      const __m128i compensated = _mm_add_epi16(sum, round_factor);
+      const __m128i shifted = _mm_srai_epi16(compensated, VP8_FILTER_SHIFT);
+      const __m128i packed = _mm_packus_epi16(shifted, shifted);
+      _mm_storel_epi64((__m128i *)dst, packed);
+      row_0 = row_1;
+      src += 8;
+      dst += stride;
+    }
+  }
+}
+
+void vp8_bilinear_predict8x8_sse2(uint8_t *src_ptr, int src_pixels_per_line,
+                                  int xoffset, int yoffset, uint8_t *dst_ptr,
+                                  int dst_pitch) {
+  DECLARE_ALIGNED(16, uint16_t, FData[8 * 9]);
+
+  assert((xoffset | yoffset) != 0);
+
+  horizontal_8xN(src_ptr, src_pixels_per_line, FData, xoffset, 9);
+
+  vertical_8xN(FData, dst_ptr, dst_pitch, yoffset, 8);
+}
+
+void vp8_bilinear_predict8x4_sse2(uint8_t *src_ptr, int src_pixels_per_line,
+                                  int xoffset, int yoffset, uint8_t *dst_ptr,
+                                  int dst_pitch) {
+  DECLARE_ALIGNED(16, uint16_t, FData[8 * 5]);
+
+  assert((xoffset | yoffset) != 0);
+
+  horizontal_8xN(src_ptr, src_pixels_per_line, FData, xoffset, 5);
+
+  vertical_8xN(FData, dst_ptr, dst_pitch, yoffset, 4);
+}
+
+static INLINE void horizontal_4x4(uint8_t *src, const int stride, uint16_t *dst,
+                                  const int xoffset) {
+  int h;
+  const __m128i zero = _mm_setzero_si128();
+
+  if (xoffset == 0) {
+    for (h = 0; h < 5; ++h) {
+      const __m128i a = load_unaligned_u32(src);
+      const __m128i a_u16 = _mm_unpacklo_epi8(a, zero);
+      _mm_storel_epi64((__m128i *)dst, a_u16);
+      src += stride;
+      dst += 4;
+    }
+    return;
+  }
+
+  {
+    const __m128i round_factor = _mm_set1_epi16(1 << (VP8_FILTER_SHIFT - 1));
+    const __m128i hfilter_0 = _mm_set1_epi16(vp8_bilinear_filters[xoffset][0]);
+    const __m128i hfilter_1 = _mm_set1_epi16(vp8_bilinear_filters[xoffset][1]);
+
+    for (h = 0; h < 5; ++h) {
+      const __m128i a = load_unaligned_u32(src);
+      const __m128i b = load_unaligned_u32(src + 1);
+      const __m128i a_u16 = _mm_unpacklo_epi8(a, zero);
+      const __m128i b_u16 = _mm_unpacklo_epi8(b, zero);
+      const __m128i a_filtered = _mm_mullo_epi16(a_u16, hfilter_0);
+      const __m128i b_filtered = _mm_mullo_epi16(b_u16, hfilter_1);
+      const __m128i sum = _mm_add_epi16(a_filtered, b_filtered);
+      const __m128i compensated = _mm_add_epi16(sum, round_factor);
+      const __m128i shifted = _mm_srai_epi16(compensated, VP8_FILTER_SHIFT);
+      _mm_storel_epi64((__m128i *)dst, shifted);
+      src += stride;
+      dst += 4;
+    }
+  }
+}
+
+static INLINE void vertical_4x4(uint16_t *src, uint8_t *dst, const int stride,
+                                const int yoffset) {
+  int h;
+
+  if (yoffset == 0) {
+    for (h = 0; h < 4; h += 2) {
+      const __m128i row = _mm_load_si128((__m128i *)src);
+      __m128i packed = _mm_packus_epi16(row, row);
+      store_unaligned_u32(dst, packed);
+      dst += stride;
+      packed = _mm_srli_si128(packed, 4);
+      store_unaligned_u32(dst, packed);
+      dst += stride;
+      src += 8;
+    }
+    return;
+  }
+
+  {
+    const __m128i round_factor = _mm_set1_epi16(1 << (VP8_FILTER_SHIFT - 1));
+    const __m128i vfilter_0 = _mm_set1_epi16(vp8_bilinear_filters[yoffset][0]);
+    const __m128i vfilter_1 = _mm_set1_epi16(vp8_bilinear_filters[yoffset][1]);
+
+    for (h = 0; h < 4; h += 2) {
+      const __m128i row_0 = _mm_load_si128((__m128i *)src);
+      const __m128i row_1 = _mm_loadu_si128((__m128i *)(src + 4));
+      const __m128i row_0_filtered = _mm_mullo_epi16(row_0, vfilter_0);
+      const __m128i row_1_filtered = _mm_mullo_epi16(row_1, vfilter_1);
+      const __m128i sum = _mm_add_epi16(row_0_filtered, row_1_filtered);
+      const __m128i compensated = _mm_add_epi16(sum, round_factor);
+      const __m128i shifted = _mm_srai_epi16(compensated, VP8_FILTER_SHIFT);
+      __m128i packed = _mm_packus_epi16(shifted, shifted);
+      storeu_uint32(dst, _mm_cvtsi128_si32(packed));
+      packed = _mm_srli_si128(packed, 4);
+      dst += stride;
+      storeu_uint32(dst, _mm_cvtsi128_si32(packed));
+      dst += stride;
+      src += 8;
+    }
+  }
+}
+
+void vp8_bilinear_predict4x4_sse2(uint8_t *src_ptr, int src_pixels_per_line,
+                                  int xoffset, int yoffset, uint8_t *dst_ptr,
+                                  int dst_pitch) {
+  DECLARE_ALIGNED(16, uint16_t, FData[4 * 5]);
+
+  assert((xoffset | yoffset) != 0);
+
+  horizontal_4x4(src_ptr, src_pixels_per_line, FData, xoffset);
+
+  vertical_4x4(FData, dst_ptr, dst_pitch, yoffset);
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/filter_x86.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/filter_x86.c
deleted file mode 100644
index 2405342..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/filter_x86.c
+++ /dev/null
@@ -1,29 +0,0 @@
-/*
- *  Copyright (c) 2011 The WebM project authors. All Rights Reserved.
- *
- *  Use of this source code is governed by a BSD-style license
- *  that can be found in the LICENSE file in the root of the source
- *  tree. An additional intellectual property rights grant can be found
- *  in the file PATENTS.  All contributing project authors may
- *  be found in the AUTHORS file in the root of the source tree.
- */
-
-#include "vp8/common/x86/filter_x86.h"
-
-DECLARE_ALIGNED(16, const short, vp8_bilinear_filters_x86_4[8][8]) = {
-  { 128, 128, 128, 128, 0, 0, 0, 0 }, { 112, 112, 112, 112, 16, 16, 16, 16 },
-  { 96, 96, 96, 96, 32, 32, 32, 32 }, { 80, 80, 80, 80, 48, 48, 48, 48 },
-  { 64, 64, 64, 64, 64, 64, 64, 64 }, { 48, 48, 48, 48, 80, 80, 80, 80 },
-  { 32, 32, 32, 32, 96, 96, 96, 96 }, { 16, 16, 16, 16, 112, 112, 112, 112 }
-};
-
-DECLARE_ALIGNED(16, const short, vp8_bilinear_filters_x86_8[8][16]) = {
-  { 128, 128, 128, 128, 128, 128, 128, 128, 0, 0, 0, 0, 0, 0, 0, 0 },
-  { 112, 112, 112, 112, 112, 112, 112, 112, 16, 16, 16, 16, 16, 16, 16, 16 },
-  { 96, 96, 96, 96, 96, 96, 96, 96, 32, 32, 32, 32, 32, 32, 32, 32 },
-  { 80, 80, 80, 80, 80, 80, 80, 80, 48, 48, 48, 48, 48, 48, 48, 48 },
-  { 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64 },
-  { 48, 48, 48, 48, 48, 48, 48, 48, 80, 80, 80, 80, 80, 80, 80, 80 },
-  { 32, 32, 32, 32, 32, 32, 32, 32, 96, 96, 96, 96, 96, 96, 96, 96 },
-  { 16, 16, 16, 16, 16, 16, 16, 16, 112, 112, 112, 112, 112, 112, 112, 112 }
-};
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/filter_x86.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/filter_x86.h
deleted file mode 100644
index d282841..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/filter_x86.h
+++ /dev/null
@@ -1,33 +0,0 @@
-/*
- *  Copyright (c) 2011 The WebM project authors. All Rights Reserved.
- *
- *  Use of this source code is governed by a BSD-style license
- *  that can be found in the LICENSE file in the root of the source
- *  tree. An additional intellectual property rights grant can be found
- *  in the file PATENTS.  All contributing project authors may
- *  be found in the AUTHORS file in the root of the source tree.
- */
-
-#ifndef VP8_COMMON_X86_FILTER_X86_H_
-#define VP8_COMMON_X86_FILTER_X86_H_
-
-#include "vpx_ports/mem.h"
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-/* x86 assembly specific copy of vp8/common/filter.c:vp8_bilinear_filters with
- * duplicated values */
-
-/* duplicated 4x */
-extern DECLARE_ALIGNED(16, const short, vp8_bilinear_filters_x86_4[8][8]);
-
-/* duplicated 8x */
-extern DECLARE_ALIGNED(16, const short, vp8_bilinear_filters_x86_8[8][16]);
-
-#ifdef __cplusplus
-}  // extern "C"
-#endif
-
-#endif  // VP8_COMMON_X86_FILTER_X86_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/idct_blk_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/idct_blk_sse2.c
index 8aefb27..897ed5b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/idct_blk_sse2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/idct_blk_sse2.c
@@ -42,43 +42,43 @@
 }
 
 void vp8_dequant_idct_add_uv_block_sse2(short *q, short *dq,
-                                        unsigned char *dstu,
-                                        unsigned char *dstv, int stride,
+                                        unsigned char *dst_u,
+                                        unsigned char *dst_v, int stride,
                                         char *eobs) {
   if (((short *)(eobs))[0]) {
     if (((short *)(eobs))[0] & 0xfefe) {
-      vp8_idct_dequant_full_2x_sse2(q, dq, dstu, stride);
+      vp8_idct_dequant_full_2x_sse2(q, dq, dst_u, stride);
     } else {
-      vp8_idct_dequant_0_2x_sse2(q, dq, dstu, stride);
+      vp8_idct_dequant_0_2x_sse2(q, dq, dst_u, stride);
     }
   }
   q += 32;
-  dstu += stride * 4;
+  dst_u += stride * 4;
 
   if (((short *)(eobs))[1]) {
     if (((short *)(eobs))[1] & 0xfefe) {
-      vp8_idct_dequant_full_2x_sse2(q, dq, dstu, stride);
+      vp8_idct_dequant_full_2x_sse2(q, dq, dst_u, stride);
     } else {
-      vp8_idct_dequant_0_2x_sse2(q, dq, dstu, stride);
+      vp8_idct_dequant_0_2x_sse2(q, dq, dst_u, stride);
     }
   }
   q += 32;
 
   if (((short *)(eobs))[2]) {
     if (((short *)(eobs))[2] & 0xfefe) {
-      vp8_idct_dequant_full_2x_sse2(q, dq, dstv, stride);
+      vp8_idct_dequant_full_2x_sse2(q, dq, dst_v, stride);
     } else {
-      vp8_idct_dequant_0_2x_sse2(q, dq, dstv, stride);
+      vp8_idct_dequant_0_2x_sse2(q, dq, dst_v, stride);
     }
   }
   q += 32;
-  dstv += stride * 4;
+  dst_v += stride * 4;
 
   if (((short *)(eobs))[3]) {
     if (((short *)(eobs))[3] & 0xfefe) {
-      vp8_idct_dequant_full_2x_sse2(q, dq, dstv, stride);
+      vp8_idct_dequant_full_2x_sse2(q, dq, dst_v, stride);
     } else {
-      vp8_idct_dequant_0_2x_sse2(q, dq, dstv, stride);
+      vp8_idct_dequant_0_2x_sse2(q, dq, dst_v, stride);
     }
   }
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/iwalsh_sse2.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/iwalsh_sse2.asm
index 82d7bf9..0043e93 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/iwalsh_sse2.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/iwalsh_sse2.asm
@@ -13,7 +13,7 @@
 
 SECTION .text
 
-;void vp8_short_inv_walsh4x4_sse2(short *input, short *output)
+;void vp8_short_inv_walsh4x4_sse2(short *input, short *mb_dqcoeff)
 global sym(vp8_short_inv_walsh4x4_sse2) PRIVATE
 sym(vp8_short_inv_walsh4x4_sse2):
     push        rbp
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/loopfilter_x86.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/loopfilter_x86.c
index a187d51..cfa13a2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/loopfilter_x86.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/loopfilter_x86.c
@@ -22,7 +22,7 @@
 #define prototype_simple_loopfilter(sym) \
   void sym(unsigned char *y, int ystride, const unsigned char *blimit)
 
-#if HAVE_SSE2 && ARCH_X86_64
+#if HAVE_SSE2 && VPX_ARCH_X86_64
 prototype_loopfilter(vp8_loop_filter_bv_y_sse2);
 prototype_loopfilter(vp8_loop_filter_bh_y_sse2);
 #else
@@ -68,7 +68,7 @@
 void vp8_loop_filter_bh_sse2(unsigned char *y_ptr, unsigned char *u_ptr,
                              unsigned char *v_ptr, int y_stride, int uv_stride,
                              loop_filter_info *lfi) {
-#if ARCH_X86_64
+#if VPX_ARCH_X86_64
   vp8_loop_filter_bh_y_sse2(y_ptr, y_stride, lfi->blim, lfi->lim, lfi->hev_thr,
                             2);
 #else
@@ -101,7 +101,7 @@
 void vp8_loop_filter_bv_sse2(unsigned char *y_ptr, unsigned char *u_ptr,
                              unsigned char *v_ptr, int y_stride, int uv_stride,
                              loop_filter_info *lfi) {
-#if ARCH_X86_64
+#if VPX_ARCH_X86_64
   vp8_loop_filter_bv_y_sse2(y_ptr, y_stride, lfi->blim, lfi->lim, lfi->hev_thr,
                             2);
 #else
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/subpixel_mmx.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/subpixel_mmx.asm
index 1f3a2ba..67bcd0c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/subpixel_mmx.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/subpixel_mmx.asm
@@ -10,8 +10,6 @@
 
 
 %include "vpx_ports/x86_abi_support.asm"
-extern sym(vp8_bilinear_filters_x86_8)
-
 
 %define BLOCK_HEIGHT_WIDTH 4
 %define vp8_filter_weight 128
@@ -205,280 +203,6 @@
     ret
 
 
-;void bilinear_predict8x4_mmx
-;(
-;    unsigned char  *src_ptr,
-;    int   src_pixels_per_line,
-;    int  xoffset,
-;    int  yoffset,
-;    unsigned char *dst_ptr,
-;    int dst_pitch
-;)
-global sym(vp8_bilinear_predict8x4_mmx) PRIVATE
-sym(vp8_bilinear_predict8x4_mmx):
-    push        rbp
-    mov         rbp, rsp
-    SHADOW_ARGS_TO_STACK 6
-    GET_GOT     rbx
-    push        rsi
-    push        rdi
-    ; end prolog
-
-    ;const short *HFilter = vp8_bilinear_filters_x86_8[xoffset];
-    ;const short *VFilter = vp8_bilinear_filters_x86_8[yoffset];
-
-        movsxd      rax,        dword ptr arg(2) ;xoffset
-        mov         rdi,        arg(4) ;dst_ptr           ;
-
-        lea         rcx,        [GLOBAL(sym(vp8_bilinear_filters_x86_8))]
-        shl         rax,        5
-
-        mov         rsi,        arg(0) ;src_ptr              ;
-        add         rax,        rcx
-
-        movsxd      rdx,        dword ptr arg(5) ;dst_pitch
-        movq        mm1,        [rax]               ;
-
-        movq        mm2,        [rax+16]            ;
-        movsxd      rax,        dword ptr arg(3) ;yoffset
-
-        pxor        mm0,        mm0                 ;
-        shl         rax,        5
-
-        add         rax,        rcx
-        lea         rcx,        [rdi+rdx*4]          ;
-
-        movsxd      rdx,        dword ptr arg(1) ;src_pixels_per_line    ;
-
-        ; get the first horizontal line done       ;
-        movq        mm3,        [rsi]               ; xx 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14
-        movq        mm4,        mm3                 ; make a copy of current line
-
-        punpcklbw   mm3,        mm0                 ; xx 00 01 02 03 04 05 06
-        punpckhbw   mm4,        mm0                 ;
-
-        pmullw      mm3,        mm1                 ;
-        pmullw      mm4,        mm1                 ;
-
-        movq        mm5,        [rsi+1]             ;
-        movq        mm6,        mm5                 ;
-
-        punpcklbw   mm5,        mm0                 ;
-        punpckhbw   mm6,        mm0                 ;
-
-        pmullw      mm5,        mm2                 ;
-        pmullw      mm6,        mm2                 ;
-
-        paddw       mm3,        mm5                 ;
-        paddw       mm4,        mm6                 ;
-
-        paddw       mm3,        [GLOBAL(rd)]                 ; xmm3 += round value
-        psraw       mm3,        VP8_FILTER_SHIFT        ; xmm3 /= 128
-
-        paddw       mm4,        [GLOBAL(rd)]                 ;
-        psraw       mm4,        VP8_FILTER_SHIFT        ;
-
-        movq        mm7,        mm3                 ;
-        packuswb    mm7,        mm4                 ;
-
-        add         rsi,        rdx                 ; next line
-.next_row_8x4:
-        movq        mm3,        [rsi]               ; xx 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14
-        movq        mm4,        mm3                 ; make a copy of current line
-
-        punpcklbw   mm3,        mm0                 ; xx 00 01 02 03 04 05 06
-        punpckhbw   mm4,        mm0                 ;
-
-        pmullw      mm3,        mm1                 ;
-        pmullw      mm4,        mm1                 ;
-
-        movq        mm5,        [rsi+1]             ;
-        movq        mm6,        mm5                 ;
-
-        punpcklbw   mm5,        mm0                 ;
-        punpckhbw   mm6,        mm0                 ;
-
-        pmullw      mm5,        mm2                 ;
-        pmullw      mm6,        mm2                 ;
-
-        paddw       mm3,        mm5                 ;
-        paddw       mm4,        mm6                 ;
-
-        movq        mm5,        mm7                 ;
-        movq        mm6,        mm7                 ;
-
-        punpcklbw   mm5,        mm0                 ;
-        punpckhbw   mm6,        mm0
-
-        pmullw      mm5,        [rax]               ;
-        pmullw      mm6,        [rax]               ;
-
-        paddw       mm3,        [GLOBAL(rd)]                 ; xmm3 += round value
-        psraw       mm3,        VP8_FILTER_SHIFT        ; xmm3 /= 128
-
-        paddw       mm4,        [GLOBAL(rd)]                 ;
-        psraw       mm4,        VP8_FILTER_SHIFT        ;
-
-        movq        mm7,        mm3                 ;
-        packuswb    mm7,        mm4                 ;
-
-
-        pmullw      mm3,        [rax+16]            ;
-        pmullw      mm4,        [rax+16]            ;
-
-        paddw       mm3,        mm5                 ;
-        paddw       mm4,        mm6                 ;
-
-
-        paddw       mm3,        [GLOBAL(rd)]                 ; xmm3 += round value
-        psraw       mm3,        VP8_FILTER_SHIFT        ; xmm3 /= 128
-
-        paddw       mm4,        [GLOBAL(rd)]                 ;
-        psraw       mm4,        VP8_FILTER_SHIFT        ;
-
-        packuswb    mm3,        mm4
-
-        movq        [rdi],      mm3                 ; store the results in the destination
-
-%if ABI_IS_32BIT
-        add         rsi,        rdx                 ; next line
-        add         rdi,        dword ptr arg(5) ;dst_pitch                   ;
-%else
-        movsxd      r8,         dword ptr arg(5) ;dst_pitch
-        add         rsi,        rdx                 ; next line
-        add         rdi,        r8
-%endif
-        cmp         rdi,        rcx                 ;
-        jne         .next_row_8x4
-
-    ; begin epilog
-    pop rdi
-    pop rsi
-    RESTORE_GOT
-    UNSHADOW_ARGS
-    pop         rbp
-    ret
-
-
-;void bilinear_predict4x4_mmx
-;(
-;    unsigned char  *src_ptr,
-;    int   src_pixels_per_line,
-;    int  xoffset,
-;    int  yoffset,
-;    unsigned char *dst_ptr,
-;    int dst_pitch
-;)
-global sym(vp8_bilinear_predict4x4_mmx) PRIVATE
-sym(vp8_bilinear_predict4x4_mmx):
-    push        rbp
-    mov         rbp, rsp
-    SHADOW_ARGS_TO_STACK 6
-    GET_GOT     rbx
-    push        rsi
-    push        rdi
-    ; end prolog
-
-    ;const short *HFilter = vp8_bilinear_filters_x86_8[xoffset];
-    ;const short *VFilter = vp8_bilinear_filters_x86_8[yoffset];
-
-        movsxd      rax,        dword ptr arg(2) ;xoffset
-        mov         rdi,        arg(4) ;dst_ptr           ;
-
-        lea         rcx,        [GLOBAL(sym(vp8_bilinear_filters_x86_8))]
-        shl         rax,        5
-
-        add         rax,        rcx ; HFilter
-        mov         rsi,        arg(0) ;src_ptr              ;
-
-        movsxd      rdx,        dword ptr arg(5) ;ldst_pitch
-        movq        mm1,        [rax]               ;
-
-        movq        mm2,        [rax+16]            ;
-        movsxd      rax,        dword ptr arg(3) ;yoffset
-
-        pxor        mm0,        mm0                 ;
-        shl         rax,        5
-
-        add         rax,        rcx
-        lea         rcx,        [rdi+rdx*4]          ;
-
-        movsxd      rdx,        dword ptr arg(1) ;src_pixels_per_line    ;
-
-        ; get the first horizontal line done       ;
-        movd        mm3,        [rsi]               ; xx 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14
-        punpcklbw   mm3,        mm0                 ; xx 00 01 02 03 04 05 06
-
-        pmullw      mm3,        mm1                 ;
-        movd        mm5,        [rsi+1]             ;
-
-        punpcklbw   mm5,        mm0                 ;
-        pmullw      mm5,        mm2                 ;
-
-        paddw       mm3,        mm5                 ;
-        paddw       mm3,        [GLOBAL(rd)]                 ; xmm3 += round value
-
-        psraw       mm3,        VP8_FILTER_SHIFT        ; xmm3 /= 128
-
-        movq        mm7,        mm3                 ;
-        packuswb    mm7,        mm0                 ;
-
-        add         rsi,        rdx                 ; next line
-.next_row_4x4:
-        movd        mm3,        [rsi]               ; xx 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14
-        punpcklbw   mm3,        mm0                 ; xx 00 01 02 03 04 05 06
-
-        pmullw      mm3,        mm1                 ;
-        movd        mm5,        [rsi+1]             ;
-
-        punpcklbw   mm5,        mm0                 ;
-        pmullw      mm5,        mm2                 ;
-
-        paddw       mm3,        mm5                 ;
-
-        movq        mm5,        mm7                 ;
-        punpcklbw   mm5,        mm0                 ;
-
-        pmullw      mm5,        [rax]               ;
-        paddw       mm3,        [GLOBAL(rd)]                 ; xmm3 += round value
-
-        psraw       mm3,        VP8_FILTER_SHIFT        ; xmm3 /= 128
-        movq        mm7,        mm3                 ;
-
-        packuswb    mm7,        mm0                 ;
-
-        pmullw      mm3,        [rax+16]            ;
-        paddw       mm3,        mm5                 ;
-
-
-        paddw       mm3,        [GLOBAL(rd)]                 ; xmm3 += round value
-        psraw       mm3,        VP8_FILTER_SHIFT        ; xmm3 /= 128
-
-        packuswb    mm3,        mm0
-        movd        [rdi],      mm3                 ; store the results in the destination
-
-%if ABI_IS_32BIT
-        add         rsi,        rdx                 ; next line
-        add         rdi,        dword ptr arg(5) ;dst_pitch                   ;
-%else
-        movsxd      r8,         dword ptr arg(5) ;dst_pitch                   ;
-        add         rsi,        rdx                 ; next line
-        add         rdi,        r8
-%endif
-
-        cmp         rdi,        rcx                 ;
-        jne         .next_row_4x4
-
-    ; begin epilog
-    pop rdi
-    pop rsi
-    RESTORE_GOT
-    UNSHADOW_ARGS
-    pop         rbp
-    ret
-
-
-
 SECTION_RODATA
 align 16
 rd:
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/subpixel_sse2.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/subpixel_sse2.asm
index 6e70f6d..51c015e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/subpixel_sse2.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/subpixel_sse2.asm
@@ -10,7 +10,6 @@
 
 
 %include "vpx_ports/x86_abi_support.asm"
-extern sym(vp8_bilinear_filters_x86_8)
 
 %define BLOCK_HEIGHT_WIDTH 4
 %define VP8_FILTER_WEIGHT 128
@@ -958,419 +957,6 @@
     ret
 
 
-;void vp8_bilinear_predict16x16_sse2
-;(
-;    unsigned char  *src_ptr,
-;    int   src_pixels_per_line,
-;    int  xoffset,
-;    int  yoffset,
-;    unsigned char *dst_ptr,
-;    int dst_pitch
-;)
-extern sym(vp8_bilinear_filters_x86_8)
-global sym(vp8_bilinear_predict16x16_sse2) PRIVATE
-sym(vp8_bilinear_predict16x16_sse2):
-    push        rbp
-    mov         rbp, rsp
-    SHADOW_ARGS_TO_STACK 6
-    SAVE_XMM 7
-    GET_GOT     rbx
-    push        rsi
-    push        rdi
-    ; end prolog
-
-    ;const short *HFilter = vp8_bilinear_filters_x86_8[xoffset]
-    ;const short *VFilter = vp8_bilinear_filters_x86_8[yoffset]
-
-        lea         rcx,        [GLOBAL(sym(vp8_bilinear_filters_x86_8))]
-        movsxd      rax,        dword ptr arg(2) ;xoffset
-
-        cmp         rax,        0      ;skip first_pass filter if xoffset=0
-        je          .b16x16_sp_only
-
-        shl         rax,        5
-        add         rax,        rcx    ;HFilter
-
-        mov         rdi,        arg(4) ;dst_ptr
-        mov         rsi,        arg(0) ;src_ptr
-        movsxd      rdx,        dword ptr arg(5) ;dst_pitch
-
-        movdqa      xmm1,       [rax]
-        movdqa      xmm2,       [rax+16]
-
-        movsxd      rax,        dword ptr arg(3) ;yoffset
-
-        cmp         rax,        0      ;skip second_pass filter if yoffset=0
-        je          .b16x16_fp_only
-
-        shl         rax,        5
-        add         rax,        rcx    ;VFilter
-
-        lea         rcx,        [rdi+rdx*8]
-        lea         rcx,        [rcx+rdx*8]
-        movsxd      rdx,        dword ptr arg(1) ;src_pixels_per_line
-
-        pxor        xmm0,       xmm0
-
-%if ABI_IS_32BIT=0
-        movsxd      r8,         dword ptr arg(5) ;dst_pitch
-%endif
-        ; get the first horizontal line done
-        movdqu      xmm3,       [rsi]               ; xx 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14
-        movdqa      xmm4,       xmm3                 ; make a copy of current line
-
-        punpcklbw   xmm3,       xmm0                 ; xx 00 01 02 03 04 05 06
-        punpckhbw   xmm4,       xmm0
-
-        pmullw      xmm3,       xmm1
-        pmullw      xmm4,       xmm1
-
-        movdqu      xmm5,       [rsi+1]
-        movdqa      xmm6,       xmm5
-
-        punpcklbw   xmm5,       xmm0
-        punpckhbw   xmm6,       xmm0
-
-        pmullw      xmm5,       xmm2
-        pmullw      xmm6,       xmm2
-
-        paddw       xmm3,       xmm5
-        paddw       xmm4,       xmm6
-
-        paddw       xmm3,       [GLOBAL(rd)]        ; xmm3 += round value
-        psraw       xmm3,       VP8_FILTER_SHIFT        ; xmm3 /= 128
-
-        paddw       xmm4,       [GLOBAL(rd)]
-        psraw       xmm4,       VP8_FILTER_SHIFT
-
-        movdqa      xmm7,       xmm3
-        packuswb    xmm7,       xmm4
-
-        add         rsi,        rdx                 ; next line
-.next_row:
-        movdqu      xmm3,       [rsi]               ; xx 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14
-        movdqa      xmm4,       xmm3                 ; make a copy of current line
-
-        punpcklbw   xmm3,       xmm0                 ; xx 00 01 02 03 04 05 06
-        punpckhbw   xmm4,       xmm0
-
-        pmullw      xmm3,       xmm1
-        pmullw      xmm4,       xmm1
-
-        movdqu      xmm5,       [rsi+1]
-        movdqa      xmm6,       xmm5
-
-        punpcklbw   xmm5,       xmm0
-        punpckhbw   xmm6,       xmm0
-
-        pmullw      xmm5,       xmm2
-        pmullw      xmm6,       xmm2
-
-        paddw       xmm3,       xmm5
-        paddw       xmm4,       xmm6
-
-        movdqa      xmm5,       xmm7
-        movdqa      xmm6,       xmm7
-
-        punpcklbw   xmm5,       xmm0
-        punpckhbw   xmm6,       xmm0
-
-        pmullw      xmm5,       [rax]
-        pmullw      xmm6,       [rax]
-
-        paddw       xmm3,       [GLOBAL(rd)]        ; xmm3 += round value
-        psraw       xmm3,       VP8_FILTER_SHIFT        ; xmm3 /= 128
-
-        paddw       xmm4,       [GLOBAL(rd)]
-        psraw       xmm4,       VP8_FILTER_SHIFT
-
-        movdqa      xmm7,       xmm3
-        packuswb    xmm7,       xmm4
-
-        pmullw      xmm3,       [rax+16]
-        pmullw      xmm4,       [rax+16]
-
-        paddw       xmm3,       xmm5
-        paddw       xmm4,       xmm6
-
-        paddw       xmm3,       [GLOBAL(rd)]        ; xmm3 += round value
-        psraw       xmm3,       VP8_FILTER_SHIFT        ; xmm3 /= 128
-
-        paddw       xmm4,       [GLOBAL(rd)]
-        psraw       xmm4,       VP8_FILTER_SHIFT
-
-        packuswb    xmm3,       xmm4
-        movdqa      [rdi],      xmm3                 ; store the results in the destination
-
-        add         rsi,        rdx                 ; next line
-%if ABI_IS_32BIT
-        add         rdi,        DWORD PTR arg(5) ;dst_pitch
-%else
-        add         rdi,        r8
-%endif
-
-        cmp         rdi,        rcx
-        jne         .next_row
-
-        jmp         .done
-
-.b16x16_sp_only:
-        movsxd      rax,        dword ptr arg(3) ;yoffset
-        shl         rax,        5
-        add         rax,        rcx    ;VFilter
-
-        mov         rdi,        arg(4) ;dst_ptr
-        mov         rsi,        arg(0) ;src_ptr
-        movsxd      rdx,        dword ptr arg(5) ;dst_pitch
-
-        movdqa      xmm1,       [rax]
-        movdqa      xmm2,       [rax+16]
-
-        lea         rcx,        [rdi+rdx*8]
-        lea         rcx,        [rcx+rdx*8]
-        movsxd      rax,        dword ptr arg(1) ;src_pixels_per_line
-
-        pxor        xmm0,       xmm0
-
-        ; get the first horizontal line done
-        movdqu      xmm7,       [rsi]               ; xx 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14
-
-        add         rsi,        rax                 ; next line
-.next_row_spo:
-        movdqu      xmm3,       [rsi]               ; xx 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14
-
-        movdqa      xmm5,       xmm7
-        movdqa      xmm6,       xmm7
-
-        movdqa      xmm4,       xmm3                 ; make a copy of current line
-        movdqa      xmm7,       xmm3
-
-        punpcklbw   xmm5,       xmm0
-        punpckhbw   xmm6,       xmm0
-        punpcklbw   xmm3,       xmm0                 ; xx 00 01 02 03 04 05 06
-        punpckhbw   xmm4,       xmm0
-
-        pmullw      xmm5,       xmm1
-        pmullw      xmm6,       xmm1
-        pmullw      xmm3,       xmm2
-        pmullw      xmm4,       xmm2
-
-        paddw       xmm3,       xmm5
-        paddw       xmm4,       xmm6
-
-        paddw       xmm3,       [GLOBAL(rd)]        ; xmm3 += round value
-        psraw       xmm3,       VP8_FILTER_SHIFT        ; xmm3 /= 128
-
-        paddw       xmm4,       [GLOBAL(rd)]
-        psraw       xmm4,       VP8_FILTER_SHIFT
-
-        packuswb    xmm3,       xmm4
-        movdqa      [rdi],      xmm3                 ; store the results in the destination
-
-        add         rsi,        rax                 ; next line
-        add         rdi,        rdx                 ;dst_pitch
-        cmp         rdi,        rcx
-        jne         .next_row_spo
-
-        jmp         .done
-
-.b16x16_fp_only:
-        lea         rcx,        [rdi+rdx*8]
-        lea         rcx,        [rcx+rdx*8]
-        movsxd      rax,        dword ptr arg(1) ;src_pixels_per_line
-        pxor        xmm0,       xmm0
-
-.next_row_fpo:
-        movdqu      xmm3,       [rsi]               ; xx 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14
-        movdqa      xmm4,       xmm3                 ; make a copy of current line
-
-        punpcklbw   xmm3,       xmm0                 ; xx 00 01 02 03 04 05 06
-        punpckhbw   xmm4,       xmm0
-
-        pmullw      xmm3,       xmm1
-        pmullw      xmm4,       xmm1
-
-        movdqu      xmm5,       [rsi+1]
-        movdqa      xmm6,       xmm5
-
-        punpcklbw   xmm5,       xmm0
-        punpckhbw   xmm6,       xmm0
-
-        pmullw      xmm5,       xmm2
-        pmullw      xmm6,       xmm2
-
-        paddw       xmm3,       xmm5
-        paddw       xmm4,       xmm6
-
-        paddw       xmm3,       [GLOBAL(rd)]        ; xmm3 += round value
-        psraw       xmm3,       VP8_FILTER_SHIFT        ; xmm3 /= 128
-
-        paddw       xmm4,       [GLOBAL(rd)]
-        psraw       xmm4,       VP8_FILTER_SHIFT
-
-        packuswb    xmm3,       xmm4
-        movdqa      [rdi],      xmm3                 ; store the results in the destination
-
-        add         rsi,        rax                 ; next line
-        add         rdi,        rdx                 ; dst_pitch
-        cmp         rdi,        rcx
-        jne         .next_row_fpo
-
-.done:
-    ; begin epilog
-    pop rdi
-    pop rsi
-    RESTORE_GOT
-    RESTORE_XMM
-    UNSHADOW_ARGS
-    pop         rbp
-    ret
-
-
-;void vp8_bilinear_predict8x8_sse2
-;(
-;    unsigned char  *src_ptr,
-;    int   src_pixels_per_line,
-;    int  xoffset,
-;    int  yoffset,
-;    unsigned char *dst_ptr,
-;    int dst_pitch
-;)
-global sym(vp8_bilinear_predict8x8_sse2) PRIVATE
-sym(vp8_bilinear_predict8x8_sse2):
-    push        rbp
-    mov         rbp, rsp
-    SHADOW_ARGS_TO_STACK 6
-    SAVE_XMM 7
-    GET_GOT     rbx
-    push        rsi
-    push        rdi
-    ; end prolog
-
-    ALIGN_STACK 16, rax
-    sub         rsp, 144                         ; reserve 144 bytes
-
-    ;const short *HFilter = vp8_bilinear_filters_x86_8[xoffset]
-    ;const short *VFilter = vp8_bilinear_filters_x86_8[yoffset]
-        lea         rcx,        [GLOBAL(sym(vp8_bilinear_filters_x86_8))]
-
-        mov         rsi,        arg(0) ;src_ptr
-        movsxd      rdx,        dword ptr arg(1) ;src_pixels_per_line
-
-    ;Read 9-line unaligned data in and put them on stack. This gives a big
-    ;performance boost.
-        movdqu      xmm0,       [rsi]
-        lea         rax,        [rdx + rdx*2]
-        movdqu      xmm1,       [rsi+rdx]
-        movdqu      xmm2,       [rsi+rdx*2]
-        add         rsi,        rax
-        movdqu      xmm3,       [rsi]
-        movdqu      xmm4,       [rsi+rdx]
-        movdqu      xmm5,       [rsi+rdx*2]
-        add         rsi,        rax
-        movdqu      xmm6,       [rsi]
-        movdqu      xmm7,       [rsi+rdx]
-
-        movdqa      XMMWORD PTR [rsp],            xmm0
-
-        movdqu      xmm0,       [rsi+rdx*2]
-
-        movdqa      XMMWORD PTR [rsp+16],         xmm1
-        movdqa      XMMWORD PTR [rsp+32],         xmm2
-        movdqa      XMMWORD PTR [rsp+48],         xmm3
-        movdqa      XMMWORD PTR [rsp+64],         xmm4
-        movdqa      XMMWORD PTR [rsp+80],         xmm5
-        movdqa      XMMWORD PTR [rsp+96],         xmm6
-        movdqa      XMMWORD PTR [rsp+112],        xmm7
-        movdqa      XMMWORD PTR [rsp+128],        xmm0
-
-        movsxd      rax,        dword ptr arg(2) ;xoffset
-        shl         rax,        5
-        add         rax,        rcx    ;HFilter
-
-        mov         rdi,        arg(4) ;dst_ptr
-        movsxd      rdx,        dword ptr arg(5) ;dst_pitch
-
-        movdqa      xmm1,       [rax]
-        movdqa      xmm2,       [rax+16]
-
-        movsxd      rax,        dword ptr arg(3) ;yoffset
-        shl         rax,        5
-        add         rax,        rcx    ;VFilter
-
-        lea         rcx,        [rdi+rdx*8]
-
-        movdqa      xmm5,       [rax]
-        movdqa      xmm6,       [rax+16]
-
-        pxor        xmm0,       xmm0
-
-        ; get the first horizontal line done
-        movdqa      xmm3,       XMMWORD PTR [rsp]
-        movdqa      xmm4,       xmm3                 ; make a copy of current line
-        psrldq      xmm4,       1
-
-        punpcklbw   xmm3,       xmm0                 ; 00 01 02 03 04 05 06 07
-        punpcklbw   xmm4,       xmm0                 ; 01 02 03 04 05 06 07 08
-
-        pmullw      xmm3,       xmm1
-        pmullw      xmm4,       xmm2
-
-        paddw       xmm3,       xmm4
-
-        paddw       xmm3,       [GLOBAL(rd)]        ; xmm3 += round value
-        psraw       xmm3,       VP8_FILTER_SHIFT        ; xmm3 /= 128
-
-        movdqa      xmm7,       xmm3
-        add         rsp,        16                 ; next line
-.next_row8x8:
-        movdqa      xmm3,       XMMWORD PTR [rsp]               ; 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15
-        movdqa      xmm4,       xmm3                 ; make a copy of current line
-        psrldq      xmm4,       1
-
-        punpcklbw   xmm3,       xmm0                 ; 00 01 02 03 04 05 06 07
-        punpcklbw   xmm4,       xmm0                 ; 01 02 03 04 05 06 07 08
-
-        pmullw      xmm3,       xmm1
-        pmullw      xmm4,       xmm2
-
-        paddw       xmm3,       xmm4
-        pmullw      xmm7,       xmm5
-
-        paddw       xmm3,       [GLOBAL(rd)]        ; xmm3 += round value
-        psraw       xmm3,       VP8_FILTER_SHIFT        ; xmm3 /= 128
-
-        movdqa      xmm4,       xmm3
-
-        pmullw      xmm3,       xmm6
-        paddw       xmm3,       xmm7
-
-        movdqa      xmm7,       xmm4
-
-        paddw       xmm3,       [GLOBAL(rd)]        ; xmm3 += round value
-        psraw       xmm3,       VP8_FILTER_SHIFT        ; xmm3 /= 128
-
-        packuswb    xmm3,       xmm0
-        movq        [rdi],      xmm3                 ; store the results in the destination
-
-        add         rsp,        16                 ; next line
-        add         rdi,        rdx
-
-        cmp         rdi,        rcx
-        jne         .next_row8x8
-
-    ;add rsp, 144
-    pop rsp
-    ; begin epilog
-    pop rdi
-    pop rsi
-    RESTORE_GOT
-    RESTORE_XMM
-    UNSHADOW_ARGS
-    pop         rbp
-    ret
-
-
 SECTION_RODATA
 align 16
 rd:
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/vp8_asm_stubs.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/vp8_asm_stubs.c
index de836f1..7fb83c2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/vp8_asm_stubs.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/vp8_asm_stubs.c
@@ -11,7 +11,6 @@
 #include "vpx_config.h"
 #include "vp8_rtcd.h"
 #include "vpx_ports/mem.h"
-#include "filter_x86.h"
 
 extern const short vp8_six_tap_x86[8][6 * 8];
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/dboolhuff.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/dboolhuff.c
index 9cf74bf..11099c4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/dboolhuff.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/dboolhuff.c
@@ -15,7 +15,13 @@
 int vp8dx_start_decode(BOOL_DECODER *br, const unsigned char *source,
                        unsigned int source_sz, vpx_decrypt_cb decrypt_cb,
                        void *decrypt_state) {
-  br->user_buffer_end = source + source_sz;
+  if (source_sz && !source) return 1;
+
+  // To simplify calling code this function can be called with |source| == null
+  // and |source_sz| == 0. This and vp8dx_bool_decoder_fill() are essentially
+  // no-ops in this case.
+  // Work around a ubsan warning with a ternary to avoid adding 0 to null.
+  br->user_buffer_end = source ? source + source_sz : source;
   br->user_buffer = source;
   br->value = 0;
   br->count = -8;
@@ -23,8 +29,6 @@
   br->decrypt_cb = decrypt_cb;
   br->decrypt_state = decrypt_state;
 
-  if (source_sz && !source) return 1;
-
   /* Populate the buffer */
   vp8dx_bool_decoder_fill(br);
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/dboolhuff.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/dboolhuff.h
index 9c529f2..f2a18f0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/dboolhuff.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/dboolhuff.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_DECODER_DBOOLHUFF_H_
-#define VP8_DECODER_DBOOLHUFF_H_
+#ifndef VPX_VP8_DECODER_DBOOLHUFF_H_
+#define VPX_VP8_DECODER_DBOOLHUFF_H_
 
 #include <stddef.h>
 #include <limits.h>
@@ -76,7 +76,7 @@
   }
 
   {
-    const int shift = vp8_norm[range];
+    const unsigned char shift = vp8_norm[(unsigned char)range];
     range <<= shift;
     value <<= shift;
     count -= shift;
@@ -127,4 +127,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_DECODER_DBOOLHUFF_H_
+#endif  // VPX_VP8_DECODER_DBOOLHUFF_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decodeframe.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decodeframe.c
index 8bfd3ce..67c254f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decodeframe.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decodeframe.c
@@ -211,7 +211,7 @@
           vp8_short_inv_walsh4x4(&b->dqcoeff[0], xd->qcoeff);
           memset(b->qcoeff, 0, 16 * sizeof(b->qcoeff[0]));
         } else {
-          b->dqcoeff[0] = b->qcoeff[0] * xd->dequant_y2[0];
+          b->dqcoeff[0] = (short)(b->qcoeff[0] * xd->dequant_y2[0]);
           vp8_short_inv_walsh4x4_1(&b->dqcoeff[0], xd->qcoeff);
           memset(b->qcoeff, 0, 2 * sizeof(b->qcoeff[0]));
         }
@@ -610,8 +610,7 @@
                                      lf_dst[2]);
         } else {
           vp8_loop_filter_row_simple(pc, lf_mic, mb_row - 1, recon_y_stride,
-                                     recon_uv_stride, lf_dst[0], lf_dst[1],
-                                     lf_dst[2]);
+                                     lf_dst[0]);
         }
         if (mb_row > 1) {
           yv12_extend_frame_left_right_c(yv12_fb_new, eb_dst[0], eb_dst[1],
@@ -647,8 +646,7 @@
                                  lf_dst[2]);
     } else {
       vp8_loop_filter_row_simple(pc, lf_mic, mb_row - 1, recon_y_stride,
-                                 recon_uv_stride, lf_dst[0], lf_dst[1],
-                                 lf_dst[2]);
+                                 lf_dst[0]);
     }
 
     yv12_extend_frame_left_right_c(yv12_fb_new, eb_dst[0], eb_dst[1],
@@ -686,6 +684,12 @@
   const unsigned char *partition_size_ptr = token_part_sizes + i * 3;
   unsigned int partition_size = 0;
   ptrdiff_t bytes_left = fragment_end - fragment_start;
+  if (bytes_left < 0) {
+    vpx_internal_error(
+        &pc->error, VPX_CODEC_CORRUPT_FRAME,
+        "Truncated packet or corrupt partition. No bytes left %d.",
+        (int)bytes_left);
+  }
   /* Calculate the length of this partition. The last partition
    * size is implicit. If the partition size can't be read, then
    * either use the remaining data in the buffer (for EC mode)
@@ -750,6 +754,9 @@
       ptrdiff_t ext_first_part_size = token_part_sizes -
                                       pbi->fragments.ptrs[0] +
                                       3 * (num_token_partitions - 1);
+      if (fragment_size < (unsigned int)ext_first_part_size)
+        vpx_internal_error(&pbi->common.error, VPX_CODEC_CORRUPT_FRAME,
+                           "Corrupted fragment size %d", fragment_size);
       fragment_size -= (unsigned int)ext_first_part_size;
       if (fragment_size > 0) {
         pbi->fragments.sizes[0] = (unsigned int)ext_first_part_size;
@@ -767,6 +774,9 @@
           first_fragment_end, fragment_end, fragment_idx - 1,
           num_token_partitions);
       pbi->fragments.sizes[fragment_idx] = (unsigned int)partition_size;
+      if (fragment_size < (unsigned int)partition_size)
+        vpx_internal_error(&pbi->common.error, VPX_CODEC_CORRUPT_FRAME,
+                           "Corrupted fragment size %d", fragment_size);
       fragment_size -= (unsigned int)partition_size;
       assert(fragment_idx <= num_token_partitions);
       if (fragment_size > 0) {
@@ -1208,7 +1218,11 @@
   if (vpx_atomic_load_acquire(&pbi->b_multithreaded_rd) &&
       pc->multi_token_partition != ONE_PARTITION) {
     unsigned int thread;
-    vp8mt_decode_mb_rows(pbi, xd);
+    if (vp8mt_decode_mb_rows(pbi, xd)) {
+      vp8_decoder_remove_threads(pbi);
+      pbi->restart_threads = 1;
+      vpx_internal_error(&pbi->common.error, VPX_CODEC_CORRUPT_FRAME, NULL);
+    }
     vp8_yv12_extend_frame_borders(yv12_fb_new);
     for (thread = 0; thread < pbi->decoding_thread_count; ++thread) {
       corrupt_tokens |= pbi->mb_row_di[thread].mbd.corrupted;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decodemv.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decodemv.c
index 8e9600c..9437385 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decodemv.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decodemv.c
@@ -485,10 +485,7 @@
   }
 }
 
-static void decode_mb_mode_mvs(VP8D_COMP *pbi, MODE_INFO *mi,
-                               MB_MODE_INFO *mbmi) {
-  (void)mbmi;
-
+static void decode_mb_mode_mvs(VP8D_COMP *pbi, MODE_INFO *mi) {
   /* Read the Macroblock segmentation map if it is being updated explicitly
    * this frame (reset to 0 above by default)
    * By default on a key frame reset all MBs to segment 0
@@ -537,7 +534,7 @@
       int mb_num = mb_row * pbi->common.mb_cols + mb_col;
 #endif
 
-      decode_mb_mode_mvs(pbi, mi, &mi->mbmi);
+      decode_mb_mode_mvs(pbi, mi);
 
 #if CONFIG_ERROR_CONCEALMENT
       /* look for corruption. set mvs_corrupt_from_mb to the current
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decodemv.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decodemv.h
index f33b073..504e943 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decodemv.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decodemv.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_DECODER_DECODEMV_H_
-#define VP8_DECODER_DECODEMV_H_
+#ifndef VPX_VP8_DECODER_DECODEMV_H_
+#define VPX_VP8_DECODER_DECODEMV_H_
 
 #include "onyxd_int.h"
 
@@ -23,4 +23,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_DECODER_DECODEMV_H_
+#endif  // VPX_VP8_DECODER_DECODEMV_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decoderthreading.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decoderthreading.h
index c563cf6..3d49bc8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decoderthreading.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/decoderthreading.h
@@ -8,15 +8,15 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_DECODER_DECODERTHREADING_H_
-#define VP8_DECODER_DECODERTHREADING_H_
+#ifndef VPX_VP8_DECODER_DECODERTHREADING_H_
+#define VPX_VP8_DECODER_DECODERTHREADING_H_
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
 #if CONFIG_MULTITHREAD
-void vp8mt_decode_mb_rows(VP8D_COMP *pbi, MACROBLOCKD *xd);
+int vp8mt_decode_mb_rows(VP8D_COMP *pbi, MACROBLOCKD *xd);
 void vp8_decoder_remove_threads(VP8D_COMP *pbi);
 void vp8_decoder_create_threads(VP8D_COMP *pbi);
 void vp8mt_alloc_temp_buffers(VP8D_COMP *pbi, int width, int prev_mb_rows);
@@ -27,4 +27,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_DECODER_DECODERTHREADING_H_
+#endif  // VPX_VP8_DECODER_DECODERTHREADING_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/detokenize.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/detokenize.c
index b350baf..1c77873 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/detokenize.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/detokenize.c
@@ -11,6 +11,7 @@
 #include "vp8/common/blockd.h"
 #include "onyxd_int.h"
 #include "vpx_mem/vpx_mem.h"
+#include "vpx_ports/compiler_attributes.h"
 #include "vpx_ports/mem.h"
 #include "detokenize.h"
 
@@ -52,7 +53,10 @@
 /* for const-casting */
 typedef const uint8_t (*ProbaArray)[NUM_CTX][NUM_PROBAS];
 
-static int GetSigned(BOOL_DECODER *br, int value_to_sign) {
+// With corrupt / fuzzed streams the calculation of br->value may overflow. See
+// b/148271109.
+static VPX_NO_UNSIGNED_OVERFLOW_CHECK int GetSigned(BOOL_DECODER *br,
+                                                    int value_to_sign) {
   int split = (br->range + 1) >> 1;
   VP8_BD_VALUE bigsplit = (VP8_BD_VALUE)split << (VP8_BD_VALUE_SIZE - 8);
   int v;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/detokenize.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/detokenize.h
index f0b1254..410a431 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/detokenize.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/detokenize.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_DECODER_DETOKENIZE_H_
-#define VP8_DECODER_DETOKENIZE_H_
+#ifndef VPX_VP8_DECODER_DETOKENIZE_H_
+#define VPX_VP8_DECODER_DETOKENIZE_H_
 
 #include "onyxd_int.h"
 
@@ -24,4 +24,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_DECODER_DETOKENIZE_H_
+#endif  // VPX_VP8_DECODER_DETOKENIZE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/ec_types.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/ec_types.h
index 8da561f..84feb26 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/ec_types.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/ec_types.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_DECODER_EC_TYPES_H_
-#define VP8_DECODER_EC_TYPES_H_
+#ifndef VPX_VP8_DECODER_EC_TYPES_H_
+#define VPX_VP8_DECODER_EC_TYPES_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -50,4 +50,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_DECODER_EC_TYPES_H_
+#endif  // VPX_VP8_DECODER_EC_TYPES_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/error_concealment.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/error_concealment.c
index e221414..85982e4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/error_concealment.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/error_concealment.c
@@ -147,8 +147,8 @@
   }
 }
 
-void vp8_calculate_overlaps(MB_OVERLAP *overlap_ul, int mb_rows, int mb_cols,
-                            union b_mode_info *bmi, int b_row, int b_col) {
+static void calculate_overlaps(MB_OVERLAP *overlap_ul, int mb_rows, int mb_cols,
+                               union b_mode_info *bmi, int b_row, int b_col) {
   MB_OVERLAP *mb_overlap;
   int row, col, rel_row, rel_col;
   int new_row, new_col;
@@ -280,9 +280,9 @@
   int sub_col;
   for (sub_row = 0; sub_row < 4; ++sub_row) {
     for (sub_col = 0; sub_col < 4; ++sub_col) {
-      vp8_calculate_overlaps(overlaps, mb_rows, mb_cols,
-                             &(prev_mi->bmi[sub_row * 4 + sub_col]),
-                             4 * mb_row + sub_row, 4 * mb_col + sub_col);
+      calculate_overlaps(overlaps, mb_rows, mb_cols,
+                         &(prev_mi->bmi[sub_row * 4 + sub_col]),
+                         4 * mb_row + sub_row, 4 * mb_col + sub_col);
     }
   }
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/error_concealment.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/error_concealment.h
index 89c78c1..608a79f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/error_concealment.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/error_concealment.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_DECODER_ERROR_CONCEALMENT_H_
-#define VP8_DECODER_ERROR_CONCEALMENT_H_
+#ifndef VPX_VP8_DECODER_ERROR_CONCEALMENT_H_
+#define VPX_VP8_DECODER_ERROR_CONCEALMENT_H_
 
 #include "onyxd_int.h"
 #include "ec_types.h"
@@ -38,4 +38,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_DECODER_ERROR_CONCEALMENT_H_
+#endif  // VPX_VP8_DECODER_ERROR_CONCEALMENT_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/onyxd_if.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/onyxd_if.c
index f516eb0..765d2ec 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/onyxd_if.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/onyxd_if.c
@@ -16,6 +16,7 @@
 #include "onyxd_int.h"
 #include "vpx_mem/vpx_mem.h"
 #include "vp8/common/alloccommon.h"
+#include "vp8/common/common.h"
 #include "vp8/common/loopfilter.h"
 #include "vp8/common/swapyv12buffer.h"
 #include "vp8/common/threading.h"
@@ -36,7 +37,7 @@
 #if CONFIG_ERROR_CONCEALMENT
 #include "error_concealment.h"
 #endif
-#if ARCH_ARM
+#if VPX_ARCH_ARM
 #include "vpx_ports/arm.h"
 #endif
 
@@ -301,12 +302,9 @@
   return 1;
 }
 
-int vp8dx_receive_compressed_data(VP8D_COMP *pbi, size_t size,
-                                  const uint8_t *source, int64_t time_stamp) {
+int vp8dx_receive_compressed_data(VP8D_COMP *pbi, int64_t time_stamp) {
   VP8_COMMON *cm = &pbi->common;
   int retcode = -1;
-  (void)size;
-  (void)source;
 
   pbi->common.error.error_code = VPX_CODEC_OK;
 
@@ -321,21 +319,6 @@
   pbi->dec_fb_ref[GOLDEN_FRAME] = &cm->yv12_fb[cm->gld_fb_idx];
   pbi->dec_fb_ref[ALTREF_FRAME] = &cm->yv12_fb[cm->alt_fb_idx];
 
-  if (setjmp(pbi->common.error.jmp)) {
-    /* We do not know if the missing frame(s) was supposed to update
-     * any of the reference buffers, but we act conservative and
-     * mark only the last buffer as corrupted.
-     */
-    cm->yv12_fb[cm->lst_fb_idx].corrupted = 1;
-
-    if (cm->fb_idx_ref_cnt[cm->new_fb_idx] > 0) {
-      cm->fb_idx_ref_cnt[cm->new_fb_idx]--;
-    }
-    goto decode_exit;
-  }
-
-  pbi->common.error.setjmp = 1;
-
   retcode = vp8_decode_frame(pbi);
 
   if (retcode < 0) {
@@ -344,6 +327,12 @@
     }
 
     pbi->common.error.error_code = VPX_CODEC_ERROR;
+    // Propagate the error info.
+    if (pbi->mb.error_info.error_code != 0) {
+      pbi->common.error.error_code = pbi->mb.error_info.error_code;
+      memcpy(pbi->common.error.detail, pbi->mb.error_info.detail,
+             sizeof(pbi->mb.error_info.detail));
+    }
     goto decode_exit;
   }
 
@@ -382,7 +371,6 @@
   pbi->last_time_stamp = time_stamp;
 
 decode_exit:
-  pbi->common.error.setjmp = 0;
   vpx_clear_system_state();
   return retcode;
 }
@@ -445,7 +433,7 @@
 #if CONFIG_MULTITHREAD
   if (setjmp(fb->pbi[0]->common.error.jmp)) {
     vp8_remove_decoder_instances(fb);
-    memset(fb->pbi, 0, sizeof(fb->pbi));
+    vp8_zero(fb->pbi);
     vpx_clear_system_state();
     return VPX_CODEC_ERROR;
   }
@@ -471,6 +459,6 @@
   return VPX_CODEC_OK;
 }
 
-int vp8dx_get_quantizer(const VP8D_COMP *cpi) {
-  return cpi->common.base_qindex;
+int vp8dx_get_quantizer(const VP8D_COMP *pbi) {
+  return pbi->common.base_qindex;
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/onyxd_int.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/onyxd_int.h
index 3689258..cf2c066 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/onyxd_int.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/onyxd_int.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_DECODER_ONYXD_INT_H_
-#define VP8_DECODER_ONYXD_INT_H_
+#ifndef VPX_VP8_DECODER_ONYXD_INT_H_
+#define VPX_VP8_DECODER_ONYXD_INT_H_
 
 #include "vpx_config.h"
 #include "vp8/common/onyxd.h"
@@ -118,11 +118,17 @@
 
   vpx_decrypt_cb decrypt_cb;
   void *decrypt_state;
+#if CONFIG_MULTITHREAD
+  // Restart threads on next frame if set to 1.
+  // This is set when error happens in multithreaded decoding and all threads
+  // are shut down.
+  int restart_threads;
+#endif
 } VP8D_COMP;
 
 void vp8cx_init_de_quantizer(VP8D_COMP *pbi);
 void vp8_mb_init_dequantizer(VP8D_COMP *pbi, MACROBLOCKD *xd);
-int vp8_decode_frame(VP8D_COMP *cpi);
+int vp8_decode_frame(VP8D_COMP *pbi);
 
 int vp8_create_decoder_instances(struct frame_buffers *fb, VP8D_CONFIG *oxcf);
 int vp8_remove_decoder_instances(struct frame_buffers *fb);
@@ -130,8 +136,8 @@
 #if CONFIG_DEBUG
 #define CHECK_MEM_ERROR(lval, expr)                                         \
   do {                                                                      \
-    lval = (expr);                                                          \
-    if (!lval)                                                              \
+    (lval) = (expr);                                                        \
+    if (!(lval))                                                            \
       vpx_internal_error(&pbi->common.error, VPX_CODEC_MEM_ERROR,           \
                          "Failed to allocate " #lval " at %s:%d", __FILE__, \
                          __LINE__);                                         \
@@ -139,8 +145,8 @@
 #else
 #define CHECK_MEM_ERROR(lval, expr)                               \
   do {                                                            \
-    lval = (expr);                                                \
-    if (!lval)                                                    \
+    (lval) = (expr);                                              \
+    if (!(lval))                                                  \
       vpx_internal_error(&pbi->common.error, VPX_CODEC_MEM_ERROR, \
                          "Failed to allocate " #lval);            \
   } while (0)
@@ -150,4 +156,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_DECODER_ONYXD_INT_H_
+#endif  // VPX_VP8_DECODER_ONYXD_INT_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/threading.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/threading.c
index aadc8dc..561922d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/threading.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/threading.c
@@ -15,8 +15,8 @@
 #endif
 #include "onyxd_int.h"
 #include "vpx_mem/vpx_mem.h"
+#include "vp8/common/common.h"
 #include "vp8/common/threading.h"
-
 #include "vp8/common/loopfilter.h"
 #include "vp8/common/extend.h"
 #include "vpx_ports/vpx_timer.h"
@@ -400,16 +400,32 @@
       xd->dst.u_buffer = dst_buffer[1] + recon_uvoffset;
       xd->dst.v_buffer = dst_buffer[2] + recon_uvoffset;
 
-      xd->pre.y_buffer =
-          ref_buffer[xd->mode_info_context->mbmi.ref_frame][0] + recon_yoffset;
-      xd->pre.u_buffer =
-          ref_buffer[xd->mode_info_context->mbmi.ref_frame][1] + recon_uvoffset;
-      xd->pre.v_buffer =
-          ref_buffer[xd->mode_info_context->mbmi.ref_frame][2] + recon_uvoffset;
-
       /* propagate errors from reference frames */
       xd->corrupted |= ref_fb_corrupted[xd->mode_info_context->mbmi.ref_frame];
 
+      if (xd->corrupted) {
+        // Move current decoding marcoblock to the end of row for all rows
+        // assigned to this thread, such that other threads won't be waiting.
+        for (; mb_row < pc->mb_rows;
+             mb_row += (pbi->decoding_thread_count + 1)) {
+          current_mb_col = &pbi->mt_current_mb_col[mb_row];
+          vpx_atomic_store_release(current_mb_col, pc->mb_cols + nsync);
+        }
+        vpx_internal_error(&xd->error_info, VPX_CODEC_CORRUPT_FRAME,
+                           "Corrupted reference frame");
+      }
+
+      if (xd->mode_info_context->mbmi.ref_frame >= LAST_FRAME) {
+        const MV_REFERENCE_FRAME ref = xd->mode_info_context->mbmi.ref_frame;
+        xd->pre.y_buffer = ref_buffer[ref][0] + recon_yoffset;
+        xd->pre.u_buffer = ref_buffer[ref][1] + recon_uvoffset;
+        xd->pre.v_buffer = ref_buffer[ref][2] + recon_uvoffset;
+      } else {
+        // ref_frame is INTRA_FRAME, pre buffer should not be used.
+        xd->pre.y_buffer = 0;
+        xd->pre.u_buffer = 0;
+        xd->pre.v_buffer = 0;
+      }
       mt_decode_macroblock(pbi, xd, 0);
 
       xd->left_available = 1;
@@ -557,8 +573,9 @@
     xd->mode_info_context += xd->mode_info_stride * pbi->decoding_thread_count;
   }
 
-  /* signal end of frame decoding if this thread processed the last mb_row */
-  if (last_mb_row == (pc->mb_rows - 1)) sem_post(&pbi->h_event_end_decoding);
+  /* signal end of decoding of current thread for current frame */
+  if (last_mb_row + (int)pbi->decoding_thread_count + 1 >= pc->mb_rows)
+    sem_post(&pbi->h_event_end_decoding);
 }
 
 static THREAD_FUNCTION thread_decoding_proc(void *p_data) {
@@ -576,7 +593,13 @@
       } else {
         MACROBLOCKD *xd = &mbrd->mbd;
         xd->left_context = &mb_row_left_context;
-
+        if (setjmp(xd->error_info.jmp)) {
+          xd->error_info.setjmp = 0;
+          // Signal the end of decoding for current thread.
+          sem_post(&pbi->h_event_end_decoding);
+          continue;
+        }
+        xd->error_info.setjmp = 1;
         mt_decode_mb_rows(pbi, xd, ithread + 1);
       }
     }
@@ -738,22 +761,28 @@
 
     /* Allocate memory for above_row buffers. */
     CALLOC_ARRAY(pbi->mt_yabove_row, pc->mb_rows);
-    for (i = 0; i < pc->mb_rows; ++i)
+    for (i = 0; i < pc->mb_rows; ++i) {
       CHECK_MEM_ERROR(pbi->mt_yabove_row[i],
                       vpx_memalign(16, sizeof(unsigned char) *
                                            (width + (VP8BORDERINPIXELS << 1))));
+      vp8_zero_array(pbi->mt_yabove_row[i], width + (VP8BORDERINPIXELS << 1));
+    }
 
     CALLOC_ARRAY(pbi->mt_uabove_row, pc->mb_rows);
-    for (i = 0; i < pc->mb_rows; ++i)
+    for (i = 0; i < pc->mb_rows; ++i) {
       CHECK_MEM_ERROR(pbi->mt_uabove_row[i],
                       vpx_memalign(16, sizeof(unsigned char) *
                                            (uv_width + VP8BORDERINPIXELS)));
+      vp8_zero_array(pbi->mt_uabove_row[i], uv_width + VP8BORDERINPIXELS);
+    }
 
     CALLOC_ARRAY(pbi->mt_vabove_row, pc->mb_rows);
-    for (i = 0; i < pc->mb_rows; ++i)
+    for (i = 0; i < pc->mb_rows; ++i) {
       CHECK_MEM_ERROR(pbi->mt_vabove_row[i],
                       vpx_memalign(16, sizeof(unsigned char) *
                                            (uv_width + VP8BORDERINPIXELS)));
+      vp8_zero_array(pbi->mt_vabove_row[i], uv_width + VP8BORDERINPIXELS);
+    }
 
     /* Allocate memory for left_col buffers. */
     CALLOC_ARRAY(pbi->mt_yleft_col, pc->mb_rows);
@@ -809,7 +838,7 @@
   }
 }
 
-void vp8mt_decode_mb_rows(VP8D_COMP *pbi, MACROBLOCKD *xd) {
+int vp8mt_decode_mb_rows(VP8D_COMP *pbi, MACROBLOCKD *xd) {
   VP8_COMMON *pc = &pbi->common;
   unsigned int i;
   int j;
@@ -855,7 +884,22 @@
     sem_post(&pbi->h_event_start_decoding[i]);
   }
 
+  if (setjmp(xd->error_info.jmp)) {
+    xd->error_info.setjmp = 0;
+    xd->corrupted = 1;
+    // Wait for other threads to finish. This prevents other threads decoding
+    // the current frame while the main thread starts decoding the next frame,
+    // which causes a data race.
+    for (i = 0; i < pbi->decoding_thread_count; ++i)
+      sem_wait(&pbi->h_event_end_decoding);
+    return -1;
+  }
+
+  xd->error_info.setjmp = 1;
   mt_decode_mb_rows(pbi, xd, 0);
 
-  sem_wait(&pbi->h_event_end_decoding); /* add back for each frame */
+  for (i = 0; i < pbi->decoding_thread_count + 1; ++i)
+    sem_wait(&pbi->h_event_end_decoding); /* add back for each frame */
+
+  return 0;
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/treereader.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/treereader.h
index dd0f098..4bf938a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/treereader.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/decoder/treereader.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_DECODER_TREEREADER_H_
-#define VP8_DECODER_TREEREADER_H_
+#ifndef VPX_VP8_DECODER_TREEREADER_H_
+#define VPX_VP8_DECODER_TREEREADER_H_
 
 #include "./vpx_config.h"
 #include "vp8/common/treecoder.h"
@@ -30,7 +30,7 @@
 static INLINE int vp8_treed_read(
     vp8_reader *const r, /* !!! must return a 0 or 1 !!! */
     vp8_tree t, const vp8_prob *const p) {
-  register vp8_tree_index i = 0;
+  vp8_tree_index i = 0;
 
   while ((i = t[i + vp8_read(r, p[i >> 1])]) > 0) {
   }
@@ -42,4 +42,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_DECODER_TREEREADER_H_
+#endif  // VPX_VP8_DECODER_TREEREADER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/fastquantizeb_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/fastquantizeb_neon.c
index c42005d..6fc6080 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/fastquantizeb_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/fastquantizeb_neon.c
@@ -9,6 +9,8 @@
  */
 
 #include <arm_neon.h>
+
+#include "./vp8_rtcd.h"
 #include "vp8/encoder/block.h"
 
 static const uint16_t inv_zig_zag[16] = { 1, 2, 6,  7,  3,  5,  8,  13,
@@ -26,9 +28,11 @@
                    zig_zag1 = vld1q_u16(inv_zig_zag + 8);
   int16x8_t x0, x1, sz0, sz1, y0, y1;
   uint16x8_t eob0, eob1;
+#ifndef __aarch64__
   uint16x4_t eob_d16;
   uint32x2_t eob_d32;
   uint32x4_t eob_q32;
+#endif  // __aarch64__
 
   /* sign of z: z >> 15 */
   sz0 = vshrq_n_s16(z0, 15);
@@ -66,11 +70,17 @@
 
   /* select the largest value */
   eob0 = vmaxq_u16(eob0, eob1);
+#ifdef __aarch64__
+  *d->eob = (int8_t)vmaxvq_u16(eob0);
+#else
   eob_d16 = vmax_u16(vget_low_u16(eob0), vget_high_u16(eob0));
   eob_q32 = vmovl_u16(eob_d16);
   eob_d32 = vmax_u32(vget_low_u32(eob_q32), vget_high_u32(eob_q32));
   eob_d32 = vpmax_u32(eob_d32, eob_d32);
 
+  vst1_lane_s8((int8_t *)d->eob, vreinterpret_s8_u32(eob_d32), 0);
+#endif  // __aarch64__
+
   /* qcoeff = x */
   vst1q_s16(d->qcoeff, x0);
   vst1q_s16(d->qcoeff + 8, x1);
@@ -78,6 +88,4 @@
   /* dqcoeff = x * dequant */
   vst1q_s16(d->dqcoeff, vmulq_s16(dequant0, x0));
   vst1q_s16(d->dqcoeff + 8, vmulq_s16(dequant1, x1));
-
-  vst1_lane_s8((int8_t *)d->eob, vreinterpret_s8_u32(eob_d32), 0);
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/shortfdct_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/shortfdct_neon.c
index 76853e6..99dff6b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/shortfdct_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/shortfdct_neon.c
@@ -10,6 +10,8 @@
 
 #include <arm_neon.h>
 
+#include "./vp8_rtcd.h"
+
 void vp8_short_fdct4x4_neon(int16_t *input, int16_t *output, int pitch) {
   int16x4_t d0s16, d1s16, d2s16, d3s16, d4s16, d5s16, d6s16, d7s16;
   int16x4_t d16s16, d17s16, d26s16, dEmptys16;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/vp8_shortwalsh4x4_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/vp8_shortwalsh4x4_neon.c
index 8d6ea4c..02056f2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/vp8_shortwalsh4x4_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/arm/neon/vp8_shortwalsh4x4_neon.c
@@ -9,6 +9,8 @@
  */
 
 #include <arm_neon.h>
+
+#include "./vp8_rtcd.h"
 #include "vpx_ports/arm.h"
 
 #ifdef VPX_INCOMPATIBLE_GCC
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/bitstream.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/bitstream.c
index 8cacb645..3daa4e2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/bitstream.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/bitstream.c
@@ -41,13 +41,6 @@
 unsigned __int64 Sectionbits[500];
 #endif
 
-#ifdef VP8_ENTROPY_STATS
-int intra_mode_stats[10][10][10];
-static unsigned int tree_update_hist[BLOCK_TYPES][COEF_BANDS]
-                                    [PREV_COEF_CONTEXTS][ENTROPY_NODES][2];
-extern unsigned int active_section;
-#endif
-
 #ifdef MODE_STATS
 int count_mb_seg[4] = { 0, 0, 0, 0 };
 #endif
@@ -178,7 +171,7 @@
 
         validate_buffer(w->buffer + w->pos, 1, w->buffer_end, w->error);
 
-        w->buffer[w->pos++] = (lowvalue >> (24 - offset));
+        w->buffer[w->pos++] = (lowvalue >> (24 - offset)) & 0xff;
         lowvalue <<= offset;
         shift = count;
         lowvalue &= 0xffffff;
@@ -428,10 +421,6 @@
 
   vp8_convert_rfct_to_prob(cpi);
 
-#ifdef VP8_ENTROPY_STATS
-  active_section = 1;
-#endif
-
   if (pc->mb_no_coeff_skip) {
     int total_mbs = pc->mb_rows * pc->mb_cols;
 
@@ -472,10 +461,6 @@
       xd->mb_to_top_edge = -((mb_row * 16) << 3);
       xd->mb_to_bottom_edge = ((pc->mb_rows - 1 - mb_row) * 16) << 3;
 
-#ifdef VP8_ENTROPY_STATS
-      active_section = 9;
-#endif
-
       if (cpi->mb.e_mbd.update_mb_segmentation_map) {
         write_mb_features(w, mi, &cpi->mb.e_mbd);
       }
@@ -486,9 +471,6 @@
 
       if (rf == INTRA_FRAME) {
         vp8_write(w, 0, cpi->prob_intra_coded);
-#ifdef VP8_ENTROPY_STATS
-        active_section = 6;
-#endif
         write_ymode(w, mode, pc->fc.ymode_prob);
 
         if (mode == B_PRED) {
@@ -522,28 +504,13 @@
           vp8_clamp_mv2(&best_mv, xd);
 
           vp8_mv_ref_probs(mv_ref_p, ct);
-
-#ifdef VP8_ENTROPY_STATS
-          accum_mv_refs(mode, ct);
-#endif
         }
 
-#ifdef VP8_ENTROPY_STATS
-        active_section = 3;
-#endif
-
         write_mv_ref(w, mode, mv_ref_p);
 
         switch (mode) /* new, split require MVs */
         {
-          case NEWMV:
-
-#ifdef VP8_ENTROPY_STATS
-            active_section = 5;
-#endif
-
-            write_mv(w, &mi->mv.as_mv, &best_mv, mvc);
-            break;
+          case NEWMV: write_mv(w, &mi->mv.as_mv, &best_mv, mvc); break;
 
           case SPLITMV: {
             int j = 0;
@@ -574,9 +541,6 @@
               write_sub_mv_ref(w, blockmode, vp8_sub_mv_ref_prob2[mv_contz]);
 
               if (blockmode == NEW4X4) {
-#ifdef VP8_ENTROPY_STATS
-                active_section = 11;
-#endif
                 write_mv(w, &blockmv.as_mv, &best_mv, (const MV_CONTEXT *)mvc);
               }
             } while (++j < cpi->mb.partition_info->count);
@@ -642,10 +606,6 @@
           const B_PREDICTION_MODE L = left_block_mode(m, i);
           const int bm = m->bmi[i].as_mode;
 
-#ifdef VP8_ENTROPY_STATS
-          ++intra_mode_stats[A][L][bm];
-#endif
-
           write_bmode(bc, bm, vp8_kf_bmode_prob[A][L]);
         } while (++i < 16);
       }
@@ -973,10 +933,6 @@
           vp8_write(w, u, upd);
 #endif
 
-#ifdef VP8_ENTROPY_STATS
-          ++tree_update_hist[i][j][k][t][u];
-#endif
-
           if (u) {
             /* send/use new probability */
 
@@ -990,16 +946,6 @@
 
         } while (++t < ENTROPY_NODES);
 
-/* Accum token counts for generation of default statistics */
-#ifdef VP8_ENTROPY_STATS
-        t = 0;
-
-        do {
-          context_counters[i][j][k][t] += cpi->coef_counts[i][j][k][t];
-        } while (++t < MAX_ENTROPY_TOKENS);
-
-#endif
-
       } while (++k < PREV_COEF_CONTEXTS);
     } while (++j < COEF_BANDS);
   } while (++i < BLOCK_TYPES);
@@ -1097,12 +1043,18 @@
     cx_data[1] = 0x01;
     cx_data[2] = 0x2a;
 
+    /* Pack scale and frame size into 16 bits. Store it 8 bits at a time.
+     * https://tools.ietf.org/html/rfc6386
+     * 9.1. Uncompressed Data Chunk
+     * 16 bits      :     (2 bits Horizontal Scale << 14) | Width (14 bits)
+     * 16 bits      :     (2 bits Vertical Scale << 14) | Height (14 bits)
+     */
     v = (pc->horiz_scale << 14) | pc->Width;
-    cx_data[3] = v;
+    cx_data[3] = v & 0xff;
     cx_data[4] = v >> 8;
 
     v = (pc->vert_scale << 14) | pc->Height;
-    cx_data[5] = v;
+    cx_data[5] = v & 0xff;
     cx_data[6] = v >> 8;
 
     extra_bytes_packed = 7;
@@ -1286,15 +1238,6 @@
 
   if (pc->frame_type != KEY_FRAME) vp8_write_bit(bc, pc->refresh_last_frame);
 
-#ifdef VP8_ENTROPY_STATS
-
-  if (pc->frame_type == INTER_FRAME)
-    active_section = 0;
-  else
-    active_section = 7;
-
-#endif
-
   vpx_clear_system_state();
 
 #if CONFIG_REALTIME_ONLY & CONFIG_ONTHEFLY_BITPACKING
@@ -1308,25 +1251,13 @@
   vp8_update_coef_probs(cpi);
 #endif
 
-#ifdef VP8_ENTROPY_STATS
-  active_section = 2;
-#endif
-
   /* Write out the mb_no_coeff_skip flag */
   vp8_write_bit(bc, pc->mb_no_coeff_skip);
 
   if (pc->frame_type == KEY_FRAME) {
     write_kfmodes(cpi);
-
-#ifdef VP8_ENTROPY_STATS
-    active_section = 8;
-#endif
   } else {
     pack_inter_mode_mvs(cpi);
-
-#ifdef VP8_ENTROPY_STATS
-    active_section = 1;
-#endif
   }
 
   vp8_stop_encode(bc);
@@ -1337,11 +1268,30 @@
 
   /* update frame tag */
   {
+    /* Pack partition size, show frame, version and frame type into 24 bits.
+     * Store it 8 bits at a time.
+     * https://tools.ietf.org/html/rfc6386
+     * 9.1. Uncompressed Data Chunk
+     *    The uncompressed data chunk comprises a common (for key frames and
+     *    interframes) 3-byte frame tag that contains four fields, as follows:
+     *
+     *    1.  A 1-bit frame type (0 for key frames, 1 for interframes).
+     *
+     *    2.  A 3-bit version number (0 - 3 are defined as four different
+     *        profiles with different decoding complexity; other values may be
+     *        defined for future variants of the VP8 data format).
+     *
+     *    3.  A 1-bit show_frame flag (0 when current frame is not for display,
+     *        1 when current frame is for display).
+     *
+     *    4.  A 19-bit field containing the size of the first data partition in
+     *        bytes
+     */
     int v = (oh.first_partition_length_in_bytes << 5) | (oh.show_frame << 4) |
             (oh.version << 1) | oh.type;
 
-    dest[0] = v;
-    dest[1] = v >> 8;
+    dest[0] = v & 0xff;
+    dest[1] = (v >> 8) & 0xff;
     dest[2] = v >> 16;
   }
 
@@ -1431,50 +1381,3 @@
   }
 #endif
 }
-
-#ifdef VP8_ENTROPY_STATS
-void print_tree_update_probs() {
-  int i, j, k, l;
-  FILE *f = fopen("context.c", "a");
-  int Sum;
-  fprintf(f, "\n/* Update probabilities for token entropy tree. */\n\n");
-  fprintf(f,
-          "const vp8_prob tree_update_probs[BLOCK_TYPES] [COEF_BANDS] "
-          "[PREV_COEF_CONTEXTS] [ENTROPY_NODES] = {\n");
-
-  for (i = 0; i < BLOCK_TYPES; ++i) {
-    fprintf(f, "  { \n");
-
-    for (j = 0; j < COEF_BANDS; ++j) {
-      fprintf(f, "    {\n");
-
-      for (k = 0; k < PREV_COEF_CONTEXTS; ++k) {
-        fprintf(f, "      {");
-
-        for (l = 0; l < ENTROPY_NODES; ++l) {
-          Sum =
-              tree_update_hist[i][j][k][l][0] + tree_update_hist[i][j][k][l][1];
-
-          if (Sum > 0) {
-            if (((tree_update_hist[i][j][k][l][0] * 255) / Sum) > 0)
-              fprintf(f, "%3ld, ",
-                      (tree_update_hist[i][j][k][l][0] * 255) / Sum);
-            else
-              fprintf(f, "%3ld, ", 1);
-          } else
-            fprintf(f, "%3ld, ", 128);
-        }
-
-        fprintf(f, "},\n");
-      }
-
-      fprintf(f, "    },\n");
-    }
-
-    fprintf(f, "  },\n");
-  }
-
-  fprintf(f, "};\n");
-  fclose(f);
-}
-#endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/bitstream.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/bitstream.h
index ed45bff..ee3f3e4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/bitstream.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/bitstream.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_BITSTREAM_H_
-#define VP8_ENCODER_BITSTREAM_H_
+#ifndef VPX_VP8_ENCODER_BITSTREAM_H_
+#define VPX_VP8_ENCODER_BITSTREAM_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -29,4 +29,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_BITSTREAM_H_
+#endif  // VPX_VP8_ENCODER_BITSTREAM_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/block.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/block.h
index 492af0e..1bc5ef7 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/block.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/block.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_BLOCK_H_
-#define VP8_ENCODER_BLOCK_H_
+#ifndef VPX_VP8_ENCODER_BLOCK_H_
+#define VPX_VP8_ENCODER_BLOCK_H_
 
 #include "vp8/common/onyx.h"
 #include "vp8/common/blockd.h"
@@ -165,4 +165,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_BLOCK_H_
+#endif  // VPX_VP8_ENCODER_BLOCK_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.c
index 04f8db9..819c2f2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.c
@@ -15,10 +15,6 @@
 
 #endif
 
-#ifdef VP8_ENTROPY_STATS
-unsigned int active_section = 0;
-#endif
-
 const unsigned int vp8_prob_cost[256] = {
   2047, 2047, 1791, 1641, 1535, 1452, 1385, 1328, 1279, 1235, 1196, 1161, 1129,
   1099, 1072, 1046, 1023, 1000, 979,  959,  940,  922,  905,  889,  873,  858,
@@ -42,26 +38,26 @@
   12,   10,   9,    7,    6,    4,    3,    1,    1
 };
 
-void vp8_start_encode(BOOL_CODER *br, unsigned char *source,
+void vp8_start_encode(BOOL_CODER *bc, unsigned char *source,
                       unsigned char *source_end) {
-  br->lowvalue = 0;
-  br->range = 255;
-  br->count = -24;
-  br->buffer = source;
-  br->buffer_end = source_end;
-  br->pos = 0;
+  bc->lowvalue = 0;
+  bc->range = 255;
+  bc->count = -24;
+  bc->buffer = source;
+  bc->buffer_end = source_end;
+  bc->pos = 0;
 }
 
-void vp8_stop_encode(BOOL_CODER *br) {
+void vp8_stop_encode(BOOL_CODER *bc) {
   int i;
 
-  for (i = 0; i < 32; ++i) vp8_encode_bool(br, 0, 128);
+  for (i = 0; i < 32; ++i) vp8_encode_bool(bc, 0, 128);
 }
 
-void vp8_encode_value(BOOL_CODER *br, int data, int bits) {
+void vp8_encode_value(BOOL_CODER *bc, int data, int bits) {
   int bit;
 
   for (bit = bits - 1; bit >= 0; bit--) {
-    vp8_encode_bool(br, (1 & (data >> bit)), 0x80);
+    vp8_encode_bool(bc, (1 & (data >> bit)), 0x80);
   }
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.h
index 3c5775f..8cc61bd 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/boolhuff.h
@@ -15,8 +15,8 @@
  *   Description  :     Bool Coder header file.
  *
  ****************************************************************************/
-#ifndef VP8_ENCODER_BOOLHUFF_H_
-#define VP8_ENCODER_BOOLHUFF_H_
+#ifndef VPX_VP8_ENCODER_BOOLHUFF_H_
+#define VPX_VP8_ENCODER_BOOLHUFF_H_
 
 #include "vpx_ports/mem.h"
 #include "vpx/internal/vpx_codec_internal.h"
@@ -35,11 +35,11 @@
   struct vpx_internal_error_info *error;
 } BOOL_CODER;
 
-extern void vp8_start_encode(BOOL_CODER *bc, unsigned char *buffer,
-                             unsigned char *buffer_end);
+void vp8_start_encode(BOOL_CODER *bc, unsigned char *source,
+                      unsigned char *source_end);
 
-extern void vp8_encode_value(BOOL_CODER *br, int data, int bits);
-extern void vp8_stop_encode(BOOL_CODER *bc);
+void vp8_encode_value(BOOL_CODER *bc, int data, int bits);
+void vp8_stop_encode(BOOL_CODER *bc);
 extern const unsigned int vp8_prob_cost[256];
 
 DECLARE_ALIGNED(16, extern const unsigned char, vp8_norm[256]);
@@ -56,31 +56,20 @@
 
   return 0;
 }
-static void vp8_encode_bool(BOOL_CODER *br, int bit, int probability) {
+static void vp8_encode_bool(BOOL_CODER *bc, int bit, int probability) {
   unsigned int split;
-  int count = br->count;
-  unsigned int range = br->range;
-  unsigned int lowvalue = br->lowvalue;
+  int count = bc->count;
+  unsigned int range = bc->range;
+  unsigned int lowvalue = bc->lowvalue;
   int shift;
 
-#ifdef VP8_ENTROPY_STATS
-#if defined(SECTIONBITS_OUTPUT)
-
-  if (bit)
-    Sectionbits[active_section] += vp8_prob_cost[255 - probability];
-  else
-    Sectionbits[active_section] += vp8_prob_cost[probability];
-
-#endif
-#endif
-
   split = 1 + (((range - 1) * probability) >> 8);
 
   range = split;
 
   if (bit) {
     lowvalue += split;
-    range = br->range - split;
+    range = bc->range - split;
   }
 
   shift = vp8_norm[range];
@@ -92,18 +81,18 @@
     int offset = shift - count;
 
     if ((lowvalue << (offset - 1)) & 0x80000000) {
-      int x = br->pos - 1;
+      int x = bc->pos - 1;
 
-      while (x >= 0 && br->buffer[x] == 0xff) {
-        br->buffer[x] = (unsigned char)0;
+      while (x >= 0 && bc->buffer[x] == 0xff) {
+        bc->buffer[x] = (unsigned char)0;
         x--;
       }
 
-      br->buffer[x] += 1;
+      bc->buffer[x] += 1;
     }
 
-    validate_buffer(br->buffer + br->pos, 1, br->buffer_end, br->error);
-    br->buffer[br->pos++] = (lowvalue >> (24 - offset));
+    validate_buffer(bc->buffer + bc->pos, 1, bc->buffer_end, bc->error);
+    bc->buffer[bc->pos++] = (lowvalue >> (24 - offset) & 0xff);
 
     lowvalue <<= offset;
     shift = count;
@@ -112,13 +101,13 @@
   }
 
   lowvalue <<= shift;
-  br->count = count;
-  br->lowvalue = lowvalue;
-  br->range = range;
+  bc->count = count;
+  bc->lowvalue = lowvalue;
+  bc->range = range;
 }
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_BOOLHUFF_H_
+#endif  // VPX_VP8_ENCODER_BOOLHUFF_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/copy_c.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/copy_c.c
similarity index 100%
rename from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/copy_c.c
rename to Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/copy_c.c
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/dct_value_cost.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/dct_value_cost.h
index 278dce7..0cd6cb4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/dct_value_cost.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/dct_value_cost.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_DCT_VALUE_COST_H_
-#define VP8_ENCODER_DCT_VALUE_COST_H_
+#ifndef VPX_VP8_ENCODER_DCT_VALUE_COST_H_
+#define VPX_VP8_ENCODER_DCT_VALUE_COST_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -341,4 +341,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_DCT_VALUE_COST_H_
+#endif  // VPX_VP8_ENCODER_DCT_VALUE_COST_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/dct_value_tokens.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/dct_value_tokens.h
index 0597dea..5cc4505 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/dct_value_tokens.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/dct_value_tokens.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_DCT_VALUE_TOKENS_H_
-#define VP8_ENCODER_DCT_VALUE_TOKENS_H_
+#ifndef VPX_VP8_ENCODER_DCT_VALUE_TOKENS_H_
+#define VPX_VP8_ENCODER_DCT_VALUE_TOKENS_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -845,4 +845,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_DCT_VALUE_TOKENS_H_
+#endif  // VPX_VP8_ENCODER_DCT_VALUE_TOKENS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/defaultcoefcounts.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/defaultcoefcounts.h
index 2976325..a3ab34c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/defaultcoefcounts.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/defaultcoefcounts.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_DEFAULTCOEFCOUNTS_H_
-#define VP8_ENCODER_DEFAULTCOEFCOUNTS_H_
+#ifndef VPX_VP8_ENCODER_DEFAULTCOEFCOUNTS_H_
+#define VPX_VP8_ENCODER_DEFAULTCOEFCOUNTS_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -232,4 +232,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_DEFAULTCOEFCOUNTS_H_
+#endif  // VPX_VP8_ENCODER_DEFAULTCOEFCOUNTS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/denoising.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/denoising.c
index eb963b9..e54d1e9 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/denoising.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/denoising.c
@@ -213,13 +213,12 @@
   return FILTER_BLOCK;
 }
 
-int vp8_denoiser_filter_uv_c(unsigned char *mc_running_avg_uv,
-                             int mc_avg_uv_stride,
-                             unsigned char *running_avg_uv, int avg_uv_stride,
+int vp8_denoiser_filter_uv_c(unsigned char *mc_running_avg, int mc_avg_stride,
+                             unsigned char *running_avg, int avg_stride,
                              unsigned char *sig, int sig_stride,
                              unsigned int motion_magnitude,
                              int increase_denoising) {
-  unsigned char *running_avg_uv_start = running_avg_uv;
+  unsigned char *running_avg_start = running_avg;
   unsigned char *sig_start = sig;
   int sum_diff_thresh;
   int r, c;
@@ -259,13 +258,13 @@
       int adjustment = 0;
       int absdiff = 0;
 
-      diff = mc_running_avg_uv[c] - sig[c];
+      diff = mc_running_avg[c] - sig[c];
       absdiff = abs(diff);
 
       // When |diff| <= |3 + shift_inc1|, use pixel value from
       // last denoised raw.
       if (absdiff <= 3 + shift_inc1) {
-        running_avg_uv[c] = mc_running_avg_uv[c];
+        running_avg[c] = mc_running_avg[c];
         sum_diff += diff;
       } else {
         if (absdiff >= 4 && absdiff <= 7) {
@@ -277,16 +276,16 @@
         }
         if (diff > 0) {
           if ((sig[c] + adjustment) > 255) {
-            running_avg_uv[c] = 255;
+            running_avg[c] = 255;
           } else {
-            running_avg_uv[c] = sig[c] + adjustment;
+            running_avg[c] = sig[c] + adjustment;
           }
           sum_diff += adjustment;
         } else {
           if ((sig[c] - adjustment) < 0) {
-            running_avg_uv[c] = 0;
+            running_avg[c] = 0;
           } else {
-            running_avg_uv[c] = sig[c] - adjustment;
+            running_avg[c] = sig[c] - adjustment;
           }
           sum_diff -= adjustment;
         }
@@ -294,8 +293,8 @@
     }
     /* Update pointers for next iteration. */
     sig += sig_stride;
-    mc_running_avg_uv += mc_avg_uv_stride;
-    running_avg_uv += avg_uv_stride;
+    mc_running_avg += mc_avg_stride;
+    running_avg += avg_stride;
   }
 
   sum_diff_thresh = SUM_DIFF_THRESHOLD_UV;
@@ -314,27 +313,27 @@
     // Only apply the adjustment for max delta up to 3.
     if (delta < 4) {
       sig -= sig_stride * 8;
-      mc_running_avg_uv -= mc_avg_uv_stride * 8;
-      running_avg_uv -= avg_uv_stride * 8;
+      mc_running_avg -= mc_avg_stride * 8;
+      running_avg -= avg_stride * 8;
       for (r = 0; r < 8; ++r) {
         for (c = 0; c < 8; ++c) {
-          int diff = mc_running_avg_uv[c] - sig[c];
+          int diff = mc_running_avg[c] - sig[c];
           int adjustment = abs(diff);
           if (adjustment > delta) adjustment = delta;
           if (diff > 0) {
             // Bring denoised signal down.
-            if (running_avg_uv[c] - adjustment < 0) {
-              running_avg_uv[c] = 0;
+            if (running_avg[c] - adjustment < 0) {
+              running_avg[c] = 0;
             } else {
-              running_avg_uv[c] = running_avg_uv[c] - adjustment;
+              running_avg[c] = running_avg[c] - adjustment;
             }
             sum_diff -= adjustment;
           } else if (diff < 0) {
             // Bring denoised signal up.
-            if (running_avg_uv[c] + adjustment > 255) {
-              running_avg_uv[c] = 255;
+            if (running_avg[c] + adjustment > 255) {
+              running_avg[c] = 255;
             } else {
-              running_avg_uv[c] = running_avg_uv[c] + adjustment;
+              running_avg[c] = running_avg[c] + adjustment;
             }
             sum_diff += adjustment;
           }
@@ -342,8 +341,8 @@
         // TODO(marpan): Check here if abs(sum_diff) has gone below the
         // threshold sum_diff_thresh, and if so, we can exit the row loop.
         sig += sig_stride;
-        mc_running_avg_uv += mc_avg_uv_stride;
-        running_avg_uv += avg_uv_stride;
+        mc_running_avg += mc_avg_stride;
+        running_avg += avg_stride;
       }
       if (abs(sum_diff) > sum_diff_thresh) return COPY_BLOCK;
     } else {
@@ -351,7 +350,7 @@
     }
   }
 
-  vp8_copy_mem8x8(running_avg_uv_start, avg_uv_stride, sig_start, sig_stride);
+  vp8_copy_mem8x8(running_avg_start, avg_stride, sig_start, sig_stride);
   return FILTER_BLOCK;
 }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/denoising.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/denoising.h
index 91d87b3..51ae3b0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/denoising.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/denoising.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_DENOISING_H_
-#define VP8_ENCODER_DENOISING_H_
+#ifndef VPX_VP8_ENCODER_DENOISING_H_
+#define VPX_VP8_ENCODER_DENOISING_H_
 
 #include "block.h"
 #include "vp8/common/loopfilter.h"
@@ -100,4 +100,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_DENOISING_H_
+#endif  // VPX_VP8_ENCODER_DENOISING_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodeframe.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodeframe.c
index 9bb0df7..2b3d956 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodeframe.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodeframe.c
@@ -64,9 +64,9 @@
  * Eventually this should be replaced by custom no-reference routines,
  *  which will be faster.
  */
-static const unsigned char VP8_VAR_OFFS[16] = {
-  128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128
-};
+static const unsigned char VP8_VAR_OFFS[16] = { 128, 128, 128, 128, 128, 128,
+                                                128, 128, 128, 128, 128, 128,
+                                                128, 128, 128, 128 };
 
 /* Original activity measure from Tim T's code. */
 static unsigned int tt_activity_measure(VP8_COMP *cpi, MACROBLOCK *x) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodeframe.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodeframe.h
index 5274aba..cc8cf4d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodeframe.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodeframe.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef VP8_ENCODER_ENCODEFRAME_H_
-#define VP8_ENCODER_ENCODEFRAME_H_
+#ifndef VPX_VP8_ENCODER_ENCODEFRAME_H_
+#define VPX_VP8_ENCODER_ENCODEFRAME_H_
 
 #include "vp8/encoder/tokenize.h"
 
@@ -37,4 +37,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_ENCODEFRAME_H_
+#endif  // VPX_VP8_ENCODER_ENCODEFRAME_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodeintra.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodeintra.h
index 3956cf5..021dc5e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodeintra.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodeintra.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_ENCODEINTRA_H_
-#define VP8_ENCODER_ENCODEINTRA_H_
+#ifndef VPX_VP8_ENCODER_ENCODEINTRA_H_
+#define VPX_VP8_ENCODER_ENCODEINTRA_H_
 #include "onyx_int.h"
 
 #ifdef __cplusplus
@@ -25,4 +25,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_ENCODEINTRA_H_
+#endif  // VPX_VP8_ENCODER_ENCODEINTRA_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodemb.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodemb.h
index b55ba3ac..db577dd 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodemb.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodemb.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_ENCODEMB_H_
-#define VP8_ENCODER_ENCODEMB_H_
+#ifndef VPX_VP8_ENCODER_ENCODEMB_H_
+#define VPX_VP8_ENCODER_ENCODEMB_H_
 
 #include "onyx_int.h"
 
@@ -37,4 +37,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_ENCODEMB_H_
+#endif  // VPX_VP8_ENCODER_ENCODEMB_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodemv.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodemv.c
index ea93ccd..04adf10 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodemv.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodemv.c
@@ -16,10 +16,6 @@
 
 #include <math.h>
 
-#ifdef VP8_ENTROPY_STATS
-extern unsigned int active_section;
-#endif
-
 static void encode_mvcomponent(vp8_writer *const w, const int v,
                                const struct mv_context *mvc) {
   const vp8_prob *p = mvc->prob;
@@ -309,9 +305,6 @@
   vp8_writer *const w = cpi->bc;
   MV_CONTEXT *mvc = cpi->common.fc.mvc;
   int flags[2] = { 0, 0 };
-#ifdef VP8_ENTROPY_STATS
-  active_section = 4;
-#endif
   write_component_probs(w, &mvc[0], &vp8_default_mv_context[0],
                         &vp8_mv_update_probs[0], cpi->mb.MVcount[0], 0,
                         &flags[0]);
@@ -323,8 +316,4 @@
     vp8_build_component_cost_table(
         cpi->mb.mvcost, (const MV_CONTEXT *)cpi->common.fc.mvc, flags);
   }
-
-#ifdef VP8_ENTROPY_STATS
-  active_section = 5;
-#endif
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodemv.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodemv.h
index 87db30f..347b9fe 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodemv.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/encodemv.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_ENCODEMV_H_
-#define VP8_ENCODER_ENCODEMV_H_
+#ifndef VPX_VP8_ENCODER_ENCODEMV_H_
+#define VPX_VP8_ENCODER_ENCODEMV_H_
 
 #include "onyx_int.h"
 
@@ -26,4 +26,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_ENCODEMV_H_
+#endif  // VPX_VP8_ENCODER_ENCODEMV_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/ethreading.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/ethreading.h
index 95bf73d..598fe60 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/ethreading.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/ethreading.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_ETHREADING_H_
-#define VP8_ENCODER_ETHREADING_H_
+#ifndef VPX_VP8_ENCODER_ETHREADING_H_
+#define VPX_VP8_ENCODER_ETHREADING_H_
 
 #include "vp8/encoder/onyx_int.h"
 
@@ -29,4 +29,4 @@
 }
 #endif
 
-#endif  // VP8_ENCODER_ETHREADING_H_
+#endif  // VPX_VP8_ENCODER_ETHREADING_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/firstpass.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/firstpass.c
index 4ea991e..981c0fd 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/firstpass.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/firstpass.c
@@ -113,11 +113,9 @@
   return 1;
 }
 
-static void output_stats(const VP8_COMP *cpi,
-                         struct vpx_codec_pkt_list *pktlist,
+static void output_stats(struct vpx_codec_pkt_list *pktlist,
                          FIRSTPASS_STATS *stats) {
   struct vpx_codec_cx_pkt pkt;
-  (void)cpi;
   pkt.kind = VPX_CODEC_STATS_PKT;
   pkt.data.twopass_stats.buf = stats;
   pkt.data.twopass_stats.sz = sizeof(FIRSTPASS_STATS);
@@ -371,11 +369,10 @@
 }
 
 void vp8_end_first_pass(VP8_COMP *cpi) {
-  output_stats(cpi, cpi->output_pkt_list, &cpi->twopass.total_stats);
+  output_stats(cpi->output_pkt_list, &cpi->twopass.total_stats);
 }
 
-static void zz_motion_search(VP8_COMP *cpi, MACROBLOCK *x,
-                             YV12_BUFFER_CONFIG *raw_buffer,
+static void zz_motion_search(MACROBLOCK *x, YV12_BUFFER_CONFIG *raw_buffer,
                              int *raw_motion_err,
                              YV12_BUFFER_CONFIG *recon_buffer,
                              int *best_motion_err, int recon_yoffset) {
@@ -389,7 +386,6 @@
   int raw_stride = raw_buffer->y_stride;
   unsigned char *ref_ptr;
   int ref_stride = x->e_mbd.pre.y_stride;
-  (void)cpi;
 
   /* Set up pointers for this macro block raw buffer */
   raw_ptr = (unsigned char *)(raw_buffer->y_buffer + recon_yoffset + d->offset);
@@ -603,9 +599,8 @@
         int raw_motion_error = INT_MAX;
 
         /* Simple 0,0 motion with no mv overhead */
-        zz_motion_search(cpi, x, cpi->last_frame_unscaled_source,
-                         &raw_motion_error, lst_yv12, &motion_error,
-                         recon_yoffset);
+        zz_motion_search(x, cpi->last_frame_unscaled_source, &raw_motion_error,
+                         lst_yv12, &motion_error, recon_yoffset);
         d->bmi.mv.as_mv.row = 0;
         d->bmi.mv.as_mv.col = 0;
 
@@ -798,7 +793,7 @@
 
     /* don't want to do output stats with a stack variable! */
     memcpy(&cpi->twopass.this_frame_stats, &fps, sizeof(FIRSTPASS_STATS));
-    output_stats(cpi, cpi->output_pkt_list, &cpi->twopass.this_frame_stats);
+    output_stats(cpi->output_pkt_list, &cpi->twopass.this_frame_stats);
     accumulate_stats(&cpi->twopass.total_stats, &fps);
   }
 
@@ -1337,12 +1332,10 @@
 /* This function gives an estimate of how badly we believe the prediction
  * quality is decaying from frame to frame.
  */
-static double get_prediction_decay_rate(VP8_COMP *cpi,
-                                        FIRSTPASS_STATS *next_frame) {
+static double get_prediction_decay_rate(FIRSTPASS_STATS *next_frame) {
   double prediction_decay_rate;
   double motion_decay;
   double motion_pct = next_frame->pcnt_motion;
-  (void)cpi;
 
   /* Initial basis is the % mbs inter coded */
   prediction_decay_rate = next_frame->pcnt_inter;
@@ -1399,7 +1392,7 @@
     for (j = 0; j < still_interval; ++j) {
       if (EOF == input_stats(cpi, &tmp_next_frame)) break;
 
-      decay_rate = get_prediction_decay_rate(cpi, &tmp_next_frame);
+      decay_rate = get_prediction_decay_rate(&tmp_next_frame);
       if (decay_rate < 0.999) break;
     }
     /* Reset file position */
@@ -1450,8 +1443,7 @@
 }
 
 /* Update the motion related elements to the GF arf boost calculation */
-static void accumulate_frame_motion_stats(VP8_COMP *cpi,
-                                          FIRSTPASS_STATS *this_frame,
+static void accumulate_frame_motion_stats(FIRSTPASS_STATS *this_frame,
                                           double *this_frame_mv_in_out,
                                           double *mv_in_out_accumulator,
                                           double *abs_mv_in_out_accumulator,
@@ -1459,7 +1451,6 @@
   double this_frame_mvr_ratio;
   double this_frame_mvc_ratio;
   double motion_pct;
-  (void)cpi;
 
   /* Accumulate motion stats. */
   motion_pct = this_frame->pcnt_motion;
@@ -1542,7 +1533,7 @@
 
     /* Update the motion related elements to the boost calculation */
     accumulate_frame_motion_stats(
-        cpi, &this_frame, &this_frame_mv_in_out, &mv_in_out_accumulator,
+        &this_frame, &this_frame_mv_in_out, &mv_in_out_accumulator,
         &abs_mv_in_out_accumulator, &mv_ratio_accumulator);
 
     /* Calculate the baseline boost number for this frame */
@@ -1557,7 +1548,7 @@
     /* Cumulative effect of prediction quality decay */
     if (!flash_detected) {
       decay_accumulator =
-          decay_accumulator * get_prediction_decay_rate(cpi, &this_frame);
+          decay_accumulator * get_prediction_decay_rate(&this_frame);
       decay_accumulator = decay_accumulator < 0.1 ? 0.1 : decay_accumulator;
     }
     boost_score += (decay_accumulator * r);
@@ -1586,7 +1577,7 @@
 
     /* Update the motion related elements to the boost calculation */
     accumulate_frame_motion_stats(
-        cpi, &this_frame, &this_frame_mv_in_out, &mv_in_out_accumulator,
+        &this_frame, &this_frame_mv_in_out, &mv_in_out_accumulator,
         &abs_mv_in_out_accumulator, &mv_ratio_accumulator);
 
     /* Calculate the baseline boost number for this frame */
@@ -1601,7 +1592,7 @@
     /* Cumulative effect of prediction quality decay */
     if (!flash_detected) {
       decay_accumulator =
-          decay_accumulator * get_prediction_decay_rate(cpi, &this_frame);
+          decay_accumulator * get_prediction_decay_rate(&this_frame);
       decay_accumulator = decay_accumulator < 0.1 ? 0.1 : decay_accumulator;
     }
 
@@ -1703,7 +1694,7 @@
 
     /* Update the motion related elements to the boost calculation */
     accumulate_frame_motion_stats(
-        cpi, &next_frame, &this_frame_mv_in_out, &mv_in_out_accumulator,
+        &next_frame, &this_frame_mv_in_out, &mv_in_out_accumulator,
         &abs_mv_in_out_accumulator, &mv_ratio_accumulator);
 
     /* Calculate a baseline boost number for this frame */
@@ -1711,7 +1702,7 @@
 
     /* Cumulative effect of prediction quality decay */
     if (!flash_detected) {
-      loop_decay_rate = get_prediction_decay_rate(cpi, &next_frame);
+      loop_decay_rate = get_prediction_decay_rate(&next_frame);
       decay_accumulator = decay_accumulator * loop_decay_rate;
       decay_accumulator = decay_accumulator < 0.1 ? 0.1 : decay_accumulator;
     }
@@ -2081,9 +2072,10 @@
      * score, otherwise it may be worse off than an "un-boosted" frame
      */
     else {
+      // Avoid division by 0 by clamping cpi->twopass.kf_group_error_left to 1
       int alt_gf_bits =
           (int)((double)cpi->twopass.kf_group_bits * mod_frame_err /
-                DOUBLE_DIVIDE_CHECK((double)cpi->twopass.kf_group_error_left));
+                (double)VPXMAX(cpi->twopass.kf_group_error_left, 1));
 
       if (alt_gf_bits > gf_bits) {
         gf_bits = alt_gf_bits;
@@ -2599,7 +2591,7 @@
       }
 
       /* How fast is prediction quality decaying */
-      loop_decay_rate = get_prediction_decay_rate(cpi, &next_frame);
+      loop_decay_rate = get_prediction_decay_rate(&next_frame);
 
       /* We want to know something about the recent past... rather than
       * as used elsewhere where we are concerned with decay in prediction
@@ -2781,7 +2773,7 @@
     if (r > RMAX) r = RMAX;
 
     /* How fast is prediction quality decaying */
-    loop_decay_rate = get_prediction_decay_rate(cpi, &next_frame);
+    loop_decay_rate = get_prediction_decay_rate(&next_frame);
 
     decay_accumulator = decay_accumulator * loop_decay_rate;
     decay_accumulator = decay_accumulator < 0.1 ? 0.1 : decay_accumulator;
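The firstpass.c hunk replacing `DOUBLE_DIVIDE_CHECK` with `VPXMAX(..., 1)` swaps an epsilon-based guard (which perturbs every normal value slightly) for an integer clamp on the denominator. A small sketch of the idea, with `alt_bits` as a hypothetical stand-in for the `alt_gf_bits` computation:

```c
/* VPXMAX as in vpx_dsp/vpx_dsp_common.h. */
#define VPXMAX(x, y) (((x) > (y)) ? (x) : (y))

/* Clamping the integer denominator to at least 1 removes the division
 * by zero without adding a floating-point epsilon to normal values. */
static int alt_bits(double group_bits, double frame_err, int error_left) {
  return (int)(group_bits * frame_err / (double)VPXMAX(error_left, 1));
}
```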
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/firstpass.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/firstpass.h
index ac8a7b1..f5490f1 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/firstpass.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/firstpass.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_FIRSTPASS_H_
-#define VP8_ENCODER_FIRSTPASS_H_
+#ifndef VPX_VP8_ENCODER_FIRSTPASS_H_
+#define VPX_VP8_ENCODER_FIRSTPASS_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -28,4 +28,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_FIRSTPASS_H_
+#endif  // VPX_VP8_ENCODER_FIRSTPASS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/lookahead.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/lookahead.h
index a67f226..bf04011 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/lookahead.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/lookahead.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef VP8_ENCODER_LOOKAHEAD_H_
-#define VP8_ENCODER_LOOKAHEAD_H_
+#ifndef VPX_VP8_ENCODER_LOOKAHEAD_H_
+#define VPX_VP8_ENCODER_LOOKAHEAD_H_
 #include "vpx_scale/yv12config.h"
 #include "vpx/vpx_integer.h"
 
@@ -74,7 +74,7 @@
 struct lookahead_entry *vp8_lookahead_pop(struct lookahead_ctx *ctx, int drain);
 
 #define PEEK_FORWARD 1
-#define PEEK_BACKWARD -1
+#define PEEK_BACKWARD (-1)
 /**\brief Get a future source buffer to encode
  *
  * \param[in] ctx       Pointer to the lookahead context
@@ -96,4 +96,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_LOOKAHEAD_H_
+#endif  // VPX_VP8_ENCODER_LOOKAHEAD_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/mcomp.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/mcomp.c
index 970120f..9e7f5c7 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/mcomp.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/mcomp.c
@@ -21,11 +21,6 @@
 #include "vp8/common/common.h"
 #include "vpx_dsp/vpx_dsp_common.h"
 
-#ifdef VP8_ENTROPY_STATS
-static int mv_ref_ct[31][4][2];
-static int mv_mode_cts[4][2];
-#endif
-
 int vp8_mv_bit_cost(int_mv *mv, int_mv *ref, int *mvcost[2], int Weight) {
   /* MV costing is based on the distribution of vectors in the previous
    * frame and as such will tend to over state the cost of vectors. In
@@ -34,19 +29,22 @@
    * NEAREST for subsequent blocks. The "Weight" parameter allows, to a
    * limited extent, for some account to be taken of these factors.
    */
-  return ((mvcost[0][(mv->as_mv.row - ref->as_mv.row) >> 1] +
-           mvcost[1][(mv->as_mv.col - ref->as_mv.col) >> 1]) *
-          Weight) >>
-         7;
+  const int mv_idx_row =
+      clamp((mv->as_mv.row - ref->as_mv.row) >> 1, 0, MVvals);
+  const int mv_idx_col =
+      clamp((mv->as_mv.col - ref->as_mv.col) >> 1, 0, MVvals);
+  return ((mvcost[0][mv_idx_row] + mvcost[1][mv_idx_col]) * Weight) >> 7;
 }
 
 static int mv_err_cost(int_mv *mv, int_mv *ref, int *mvcost[2],
                        int error_per_bit) {
   /* Ignore mv costing if mvcost is NULL */
   if (mvcost) {
-    return ((mvcost[0][(mv->as_mv.row - ref->as_mv.row) >> 1] +
-             mvcost[1][(mv->as_mv.col - ref->as_mv.col) >> 1]) *
-                error_per_bit +
+    const int mv_idx_row =
+        clamp((mv->as_mv.row - ref->as_mv.row) >> 1, 0, MVvals);
+    const int mv_idx_col =
+        clamp((mv->as_mv.col - ref->as_mv.col) >> 1, 0, MVvals);
+    return ((mvcost[0][mv_idx_row] + mvcost[1][mv_idx_col]) * error_per_bit +
             128) >>
            8;
   }
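The rewritten `vp8_mv_bit_cost` and `mv_err_cost` above clamp the table index derived from the motion-vector difference before looking up `mvcost[]`, so a malformed vector pins to the table boundary instead of reading out of bounds. A sketch of the pattern, with `kMvTableMax` and the demo table standing in for the real `MVvals` bound and cost tables:

```c
/* clamp() in the spirit of vpx_dsp/vpx_dsp_common.h. */
static int clamp(int value, int low, int high) {
  return value < low ? low : (value > high ? high : value);
}

enum { kMvTableMax = 7 }; /* illustrative bound, not the real MVvals */

static const int kDemoCost[kMvTableMax + 1] = { 10, 11, 12, 13,
                                                14, 15, 16, 17 };

/* Out-of-range differences are pinned inside the table rather than
 * indexing past either end (undefined behaviour in C). */
static int mv_component_cost(const int *table, int mv, int ref) {
  return table[clamp((mv - ref) >> 1, 0, kMvTableMax)];
}
```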
@@ -253,7 +251,7 @@
   int pre_stride = x->e_mbd.pre.y_stride;
   unsigned char *base_pre = x->e_mbd.pre.y_buffer;
 
-#if ARCH_X86 || ARCH_X86_64
+#if VPX_ARCH_X86 || VPX_ARCH_X86_64
   MACROBLOCKD *xd = &x->e_mbd;
   unsigned char *y_0 = base_pre + d->offset + (bestmv->as_mv.row) * pre_stride +
                        bestmv->as_mv.col;
@@ -382,7 +380,7 @@
   int pre_stride = x->e_mbd.pre.y_stride;
   unsigned char *base_pre = x->e_mbd.pre.y_buffer;
 
-#if ARCH_X86 || ARCH_X86_64
+#if VPX_ARCH_X86 || VPX_ARCH_X86_64
   MACROBLOCKD *xd = &x->e_mbd;
   unsigned char *y_0 = base_pre + d->offset + (bestmv->as_mv.row) * pre_stride +
                        bestmv->as_mv.col;
@@ -678,7 +676,7 @@
   int pre_stride = x->e_mbd.pre.y_stride;
   unsigned char *base_pre = x->e_mbd.pre.y_buffer;
 
-#if ARCH_X86 || ARCH_X86_64
+#if VPX_ARCH_X86 || VPX_ARCH_X86_64
   MACROBLOCKD *xd = &x->e_mbd;
   unsigned char *y_0 = base_pre + d->offset + (bestmv->as_mv.row) * pre_stride +
                        bestmv->as_mv.col;
@@ -839,7 +837,7 @@
 int vp8_hex_search(MACROBLOCK *x, BLOCK *b, BLOCKD *d, int_mv *ref_mv,
                    int_mv *best_mv, int search_param, int sad_per_bit,
                    const vp8_variance_fn_ptr_t *vfp, int *mvsadcost[2],
-                   int *mvcost[2], int_mv *center_mv) {
+                   int_mv *center_mv) {
   MV hex[6] = {
     { -1, -2 }, { 1, -2 }, { 2, 0 }, { 1, 2 }, { -1, 2 }, { -2, 0 }
   };
@@ -868,8 +866,6 @@
   fcenter_mv.as_mv.row = center_mv->as_mv.row >> 3;
   fcenter_mv.as_mv.col = center_mv->as_mv.col >> 3;
 
-  (void)mvcost;
-
   /* adjust ref_mv to make sure it is within MV range */
   vp8_clamp_mv(ref_mv, x->mv_col_min, x->mv_col_max, x->mv_row_min,
                x->mv_row_max);
@@ -1131,6 +1127,7 @@
          mv_err_cost(&this_mv, center_mv, mvcost, x->errorperbit);
 }
 
+#if HAVE_SSE2 || HAVE_MSA
 int vp8_diamond_search_sadx4(MACROBLOCK *x, BLOCK *b, BLOCKD *d, int_mv *ref_mv,
                              int_mv *best_mv, int search_param, int sad_per_bit,
                              int *num00, vp8_variance_fn_ptr_t *fn_ptr,
@@ -1279,6 +1276,7 @@
   return fn_ptr->vf(what, what_stride, best_address, in_what_stride, &thissad) +
          mv_err_cost(&this_mv, center_mv, mvcost, x->errorperbit);
 }
+#endif  // HAVE_SSE2 || HAVE_MSA
 
 int vp8_full_search_sad_c(MACROBLOCK *x, BLOCK *b, BLOCKD *d, int_mv *ref_mv,
                           int sad_per_bit, int distance,
@@ -1366,6 +1364,7 @@
          mv_err_cost(&this_mv, center_mv, mvcost, x->errorperbit);
 }
 
+#if HAVE_SSSE3
 int vp8_full_search_sadx3(MACROBLOCK *x, BLOCK *b, BLOCKD *d, int_mv *ref_mv,
                           int sad_per_bit, int distance,
                           vp8_variance_fn_ptr_t *fn_ptr, int *mvcost[2],
@@ -1484,7 +1483,9 @@
   return fn_ptr->vf(what, what_stride, bestaddress, in_what_stride, &thissad) +
          mv_err_cost(&this_mv, center_mv, mvcost, x->errorperbit);
 }
+#endif  // HAVE_SSSE3
 
+#if HAVE_SSE4_1
 int vp8_full_search_sadx8(MACROBLOCK *x, BLOCK *b, BLOCKD *d, int_mv *ref_mv,
                           int sad_per_bit, int distance,
                           vp8_variance_fn_ptr_t *fn_ptr, int *mvcost[2],
@@ -1630,6 +1631,7 @@
   return fn_ptr->vf(what, what_stride, bestaddress, in_what_stride, &thissad) +
          mv_err_cost(&this_mv, center_mv, mvcost, x->errorperbit);
 }
+#endif  // HAVE_SSE4_1
 
 int vp8_refining_search_sad_c(MACROBLOCK *x, BLOCK *b, BLOCKD *d,
                               int_mv *ref_mv, int error_per_bit,
@@ -1709,6 +1711,7 @@
          mv_err_cost(&this_mv, center_mv, mvcost, x->errorperbit);
 }
 
+#if HAVE_SSE2 || HAVE_MSA
 int vp8_refining_search_sadx4(MACROBLOCK *x, BLOCK *b, BLOCKD *d,
                               int_mv *ref_mv, int error_per_bit,
                               int search_range, vp8_variance_fn_ptr_t *fn_ptr,
@@ -1818,96 +1821,4 @@
   return fn_ptr->vf(what, what_stride, best_address, in_what_stride, &thissad) +
          mv_err_cost(&this_mv, center_mv, mvcost, x->errorperbit);
 }
-
-#ifdef VP8_ENTROPY_STATS
-void print_mode_context(void) {
-  FILE *f = fopen("modecont.c", "w");
-  int i, j;
-
-  fprintf(f, "#include \"entropy.h\"\n");
-  fprintf(f, "const int vp8_mode_contexts[6][4] =\n");
-  fprintf(f, "{\n");
-
-  for (j = 0; j < 6; ++j) {
-    fprintf(f, "  { /* %d */\n", j);
-    fprintf(f, "    ");
-
-    for (i = 0; i < 4; ++i) {
-      int overal_prob;
-      int this_prob;
-      int count;
-
-      /* Overall probs */
-      count = mv_mode_cts[i][0] + mv_mode_cts[i][1];
-
-      if (count)
-        overal_prob = 256 * mv_mode_cts[i][0] / count;
-      else
-        overal_prob = 128;
-
-      if (overal_prob == 0) overal_prob = 1;
-
-      /* context probs */
-      count = mv_ref_ct[j][i][0] + mv_ref_ct[j][i][1];
-
-      if (count)
-        this_prob = 256 * mv_ref_ct[j][i][0] / count;
-      else
-        this_prob = 128;
-
-      if (this_prob == 0) this_prob = 1;
-
-      fprintf(f, "%5d, ", this_prob);
-    }
-
-    fprintf(f, "  },\n");
-  }
-
-  fprintf(f, "};\n");
-  fclose(f);
-}
-
-/* MV ref count VP8_ENTROPY_STATS stats code */
-#ifdef VP8_ENTROPY_STATS
-void init_mv_ref_counts() {
-  memset(mv_ref_ct, 0, sizeof(mv_ref_ct));
-  memset(mv_mode_cts, 0, sizeof(mv_mode_cts));
-}
-
-void accum_mv_refs(MB_PREDICTION_MODE m, const int ct[4]) {
-  if (m == ZEROMV) {
-    ++mv_ref_ct[ct[0]][0][0];
-    ++mv_mode_cts[0][0];
-  } else {
-    ++mv_ref_ct[ct[0]][0][1];
-    ++mv_mode_cts[0][1];
-
-    if (m == NEARESTMV) {
-      ++mv_ref_ct[ct[1]][1][0];
-      ++mv_mode_cts[1][0];
-    } else {
-      ++mv_ref_ct[ct[1]][1][1];
-      ++mv_mode_cts[1][1];
-
-      if (m == NEARMV) {
-        ++mv_ref_ct[ct[2]][2][0];
-        ++mv_mode_cts[2][0];
-      } else {
-        ++mv_ref_ct[ct[2]][2][1];
-        ++mv_mode_cts[2][1];
-
-        if (m == NEWMV) {
-          ++mv_ref_ct[ct[3]][3][0];
-          ++mv_mode_cts[3][0];
-        } else {
-          ++mv_ref_ct[ct[3]][3][1];
-          ++mv_mode_cts[3][1];
-        }
-      }
-    }
-  }
-}
-
-#endif /* END MV ref count VP8_ENTROPY_STATS stats code */
-
-#endif
+#endif  // HAVE_SSE2 || HAVE_MSA
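The `HAVE_SSE2 || HAVE_MSA`, `HAVE_SSSE3`, and `HAVE_SSE4_1` guards added around the `sadx4`/`sadx3`/`sadx8` searches compile each specialized function only when the build configuration provides the SIMD helpers it depends on, so targets without those extensions link cleanly. A toy sketch of the gating (the fallback `#define` assumes a plain C build for this example):

```c
#ifndef HAVE_SSE2
#define HAVE_SSE2 0 /* assume a portable build for this sketch */
#endif

#if HAVE_SSE2
/* Would call the x4 SAD helpers that only exist in SSE2 builds. */
static int using_simd_search(void) { return 1; }
#else
/* Portable C path, always available. */
static int using_simd_search(void) { return 0; }
#endif
```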
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/mcomp.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/mcomp.h
index b622879..57c18f5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/mcomp.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/mcomp.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_MCOMP_H_
-#define VP8_ENCODER_MCOMP_H_
+#ifndef VPX_VP8_ENCODER_MCOMP_H_
+#define VPX_VP8_ENCODER_MCOMP_H_
 
 #include "block.h"
 #include "vpx_dsp/variance.h"
@@ -18,11 +18,6 @@
 extern "C" {
 #endif
 
-#ifdef VP8_ENTROPY_STATS
-extern void init_mv_ref_counts();
-extern void accum_mv_refs(MB_PREDICTION_MODE, const int near_mv_ref_cts[4]);
-#endif
-
 /* The maximum number of steps in a step search given the largest allowed
  * initial step
  */
@@ -34,15 +29,14 @@
 /* Maximum size of the first step in full pel units */
 #define MAX_FIRST_STEP (1 << (MAX_MVSEARCH_STEPS - 1))
 
-extern void print_mode_context(void);
-extern int vp8_mv_bit_cost(int_mv *mv, int_mv *ref, int *mvcost[2], int Weight);
-extern void vp8_init_dsmotion_compensation(MACROBLOCK *x, int stride);
-extern void vp8_init3smotion_compensation(MACROBLOCK *x, int stride);
+int vp8_mv_bit_cost(int_mv *mv, int_mv *ref, int *mvcost[2], int Weight);
+void vp8_init_dsmotion_compensation(MACROBLOCK *x, int stride);
+void vp8_init3smotion_compensation(MACROBLOCK *x, int stride);
 
-extern int vp8_hex_search(MACROBLOCK *x, BLOCK *b, BLOCKD *d, int_mv *ref_mv,
-                          int_mv *best_mv, int search_param, int error_per_bit,
-                          const vp8_variance_fn_ptr_t *vf, int *mvsadcost[2],
-                          int *mvcost[2], int_mv *center_mv);
+int vp8_hex_search(MACROBLOCK *x, BLOCK *b, BLOCKD *d, int_mv *ref_mv,
+                   int_mv *best_mv, int search_param, int sad_per_bit,
+                   const vp8_variance_fn_ptr_t *vfp, int *mvsadcost[2],
+                   int_mv *center_mv);
 
 typedef int(fractional_mv_step_fp)(MACROBLOCK *x, BLOCK *b, BLOCKD *d,
                                    int_mv *bestmv, int_mv *ref_mv,
@@ -51,10 +45,10 @@
                                    int *mvcost[2], int *distortion,
                                    unsigned int *sse);
 
-extern fractional_mv_step_fp vp8_find_best_sub_pixel_step_iteratively;
-extern fractional_mv_step_fp vp8_find_best_sub_pixel_step;
-extern fractional_mv_step_fp vp8_find_best_half_pixel_step;
-extern fractional_mv_step_fp vp8_skip_fractional_mv_step;
+fractional_mv_step_fp vp8_find_best_sub_pixel_step_iteratively;
+fractional_mv_step_fp vp8_find_best_sub_pixel_step;
+fractional_mv_step_fp vp8_find_best_half_pixel_step;
+fractional_mv_step_fp vp8_skip_fractional_mv_step;
 
 typedef int (*vp8_full_search_fn_t)(MACROBLOCK *x, BLOCK *b, BLOCKD *d,
                                     int_mv *ref_mv, int sad_per_bit,
@@ -78,4 +72,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_MCOMP_H_
+#endif  // VPX_VP8_ENCODER_MCOMP_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/modecosts.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/modecosts.h
index dfb8989..09ee2b5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/modecosts.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/modecosts.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_MODECOSTS_H_
-#define VP8_ENCODER_MODECOSTS_H_
+#ifndef VPX_VP8_ENCODER_MODECOSTS_H_
+#define VPX_VP8_ENCODER_MODECOSTS_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -17,10 +17,10 @@
 
 struct VP8_COMP;
 
-void vp8_init_mode_costs(struct VP8_COMP *x);
+void vp8_init_mode_costs(struct VP8_COMP *c);
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_MODECOSTS_H_
+#endif  // VPX_VP8_ENCODER_MODECOSTS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/mr_dissim.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/mr_dissim.h
index da36628..58f5a97 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/mr_dissim.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/mr_dissim.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_MR_DISSIM_H_
-#define VP8_ENCODER_MR_DISSIM_H_
+#ifndef VPX_VP8_ENCODER_MR_DISSIM_H_
+#define VPX_VP8_ENCODER_MR_DISSIM_H_
 #include "vpx_config.h"
 
 #ifdef __cplusplus
@@ -24,4 +24,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_MR_DISSIM_H_
+#endif  // VPX_VP8_ENCODER_MR_DISSIM_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/onyx_if.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/onyx_if.c
index 2da9401..29c8cc6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/onyx_if.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/onyx_if.c
@@ -38,7 +38,7 @@
 #include "vpx_ports/system_state.h"
 #include "vpx_ports/vpx_timer.h"
 #include "vpx_util/vpx_write_yuv_frame.h"
-#if ARCH_ARM
+#if VPX_ARCH_ARM
 #include "vpx_ports/arm.h"
 #endif
 #if CONFIG_MULTI_RES_ENCODING
@@ -62,12 +62,7 @@
 extern int vp8_update_coef_context(VP8_COMP *cpi);
 #endif
 
-extern void vp8_deblock_frame(YV12_BUFFER_CONFIG *source,
-                              YV12_BUFFER_CONFIG *post, int filt_lvl,
-                              int low_var_thresh, int flag);
-extern void print_parms(VP8_CONFIG *ocf, char *filenam);
 extern unsigned int vp8_get_processor_freq();
-extern void print_tree_update_probs();
 
 int vp8_calc_ss_err(YV12_BUFFER_CONFIG *source, YV12_BUFFER_CONFIG *dest);
 
@@ -101,10 +96,6 @@
 extern int skip_false_count;
 #endif
 
-#ifdef VP8_ENTROPY_STATS
-extern int intra_mode_stats[10][10][10];
-#endif
-
 #ifdef SPEEDSTATS
 unsigned int frames_at_speed[16] = { 0, 0, 0, 0, 0, 0, 0, 0,
                                      0, 0, 0, 0, 0, 0, 0, 0 };
@@ -224,6 +215,8 @@
   lc->frames_since_last_drop_overshoot = cpi->frames_since_last_drop_overshoot;
   lc->force_maxqp = cpi->force_maxqp;
   lc->last_frame_percent_intra = cpi->last_frame_percent_intra;
+  lc->last_q[0] = cpi->last_q[0];
+  lc->last_q[1] = cpi->last_q[1];
 
   memcpy(lc->count_mb_ref_frame_usage, cpi->mb.count_mb_ref_frame_usage,
          sizeof(cpi->mb.count_mb_ref_frame_usage));
@@ -261,6 +254,8 @@
   cpi->frames_since_last_drop_overshoot = lc->frames_since_last_drop_overshoot;
   cpi->force_maxqp = lc->force_maxqp;
   cpi->last_frame_percent_intra = lc->last_frame_percent_intra;
+  cpi->last_q[0] = lc->last_q[0];
+  cpi->last_q[1] = lc->last_q[1];
 
   memcpy(cpi->mb.count_mb_ref_frame_usage, lc->count_mb_ref_frame_usage,
          sizeof(cpi->mb.count_mb_ref_frame_usage));
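The hunk above extends the temporal-layer context snapshot so that `last_q[]` is saved and restored alongside the other per-layer rate-control state; switching layers then resumes with that layer's own quantizer history. A minimal sketch of the save/restore pair (struct names and fields are illustrative, not the real `VP8_COMP`/`LAYER_CONTEXT`):

```c
#include <string.h>

typedef struct { int last_q[2]; int force_maxqp; } LayerContext;
typedef struct { int last_q[2]; int force_maxqp; } Encoder;

/* Snapshot the encoder's per-layer state into the layer context. */
static void save_layer(LayerContext *lc, const Encoder *cpi) {
  memcpy(lc->last_q, cpi->last_q, sizeof(lc->last_q));
  lc->force_maxqp = cpi->force_maxqp;
}

/* Restore that state when the encoder switches back to this layer. */
static void restore_layer(Encoder *cpi, const LayerContext *lc) {
  memcpy(cpi->last_q, lc->last_q, sizeof(cpi->last_q));
  cpi->force_maxqp = lc->force_maxqp;
}
```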
@@ -689,8 +684,8 @@
 /* Convenience macros for mapping speed and mode into a continuous
  * range
  */
-#define GOOD(x) (x + 1)
-#define RT(x) (x + 7)
+#define GOOD(x) ((x) + 1)
+#define RT(x) ((x) + 7)
 
 static int speed_map(int speed, const int *map) {
   int res;
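The hunk above parenthesizes the `GOOD`/`RT` macro bodies (and, later, `PEEK_BACKWARD` becomes `(-1)`). This matters because the mode-check tables pass shift expressions like `1 << 1` as arguments, and without parentheses the expansion rebinds to neighbouring operators:

```c
#define GOOD_OLD(x) (x + 1)  /* pre-patch shape: argument unparenthesized */
#define GOOD(x) ((x) + 1)    /* patched shape */

/* GOOD_OLD(1 << 1) expands to (1 << 1 + 1); since + binds tighter than
 * <<, that is 1 << 2 == 4 rather than the intended (1 << 1) + 1 == 3. */
static int good_old_demo(void) { return GOOD_OLD(1 << 1); }
static int good_demo(void) { return GOOD(1 << 1); }
```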
@@ -743,9 +738,9 @@
   0, RT(10), 1 << 1, RT(11), 1 << 2, RT(12), 1 << 3, INT_MAX
 };
 
-static const int mode_check_freq_map_vhbpred[] = {
-  0, GOOD(5), 2, RT(0), 0, RT(3), 2, RT(5), 4, INT_MAX
-};
+static const int mode_check_freq_map_vhbpred[] = { 0, GOOD(5), 2, RT(0),
+                                                   0, RT(3),   2, RT(5),
+                                                   4, INT_MAX };
 
 static const int mode_check_freq_map_near2[] = {
   0,      GOOD(5), 2,      RT(0),  0,      RT(3),  2,
@@ -761,13 +756,13 @@
                                                 1 << 3, RT(11),  1 << 4, RT(12),
                                                 1 << 5, INT_MAX };
 
-static const int mode_check_freq_map_split1[] = {
-  0, GOOD(2), 2, GOOD(3), 7, RT(1), 2, RT(2), 7, INT_MAX
-};
+static const int mode_check_freq_map_split1[] = { 0, GOOD(2), 2, GOOD(3),
+                                                  7, RT(1),   2, RT(2),
+                                                  7, INT_MAX };
 
-static const int mode_check_freq_map_split2[] = {
-  0, GOOD(1), 2, GOOD(2), 4, GOOD(3), 15, RT(1), 4, RT(2), 15, INT_MAX
-};
+static const int mode_check_freq_map_split2[] = { 0, GOOD(1), 2,  GOOD(2),
+                                                  4, GOOD(3), 15, RT(1),
+                                                  4, RT(2),   15, INT_MAX };
 
 void vp8_set_speed_features(VP8_COMP *cpi) {
   SPEED_FEATURES *sf = &cpi->sf;
@@ -1534,6 +1529,8 @@
     }
   }
 
+  cpi->ext_refresh_frame_flags_pending = 0;
+
   cpi->baseline_gf_interval =
       cpi->oxcf.alt_freq ? cpi->oxcf.alt_freq : DEFAULT_GF_INTERVAL;
 
@@ -1893,10 +1890,6 @@
   CHECK_MEM_ERROR(cpi->consec_zero_last_mvbias,
                   vpx_calloc((cpi->common.mb_rows * cpi->common.mb_cols), 1));
 
-#ifdef VP8_ENTROPY_STATS
-  init_context_counters();
-#endif
-
   /*Initialize the feed-forward activity masking.*/
   cpi->activity_avg = 90 << 12;
 
@@ -2005,10 +1998,6 @@
     cpi->mb.rd_thresh_mult[i] = 128;
   }
 
-#ifdef VP8_ENTROPY_STATS
-  init_mv_ref_counts();
-#endif
-
 #if CONFIG_MULTITHREAD
   if (vp8cx_create_encoder_threads(cpi)) {
     vp8_remove_compressor(&cpi);
@@ -2051,7 +2040,7 @@
   cpi->fn_ptr[BLOCK_4X4].sdx8f = vpx_sad4x4x8;
   cpi->fn_ptr[BLOCK_4X4].sdx4df = vpx_sad4x4x4d;
 
-#if ARCH_X86 || ARCH_X86_64
+#if VPX_ARCH_X86 || VPX_ARCH_X86_64
   cpi->fn_ptr[BLOCK_16X16].copymem = vp8_copy32xn;
   cpi->fn_ptr[BLOCK_16X8].copymem = vp8_copy32xn;
   cpi->fn_ptr[BLOCK_8X16].copymem = vp8_copy32xn;
@@ -2106,8 +2095,8 @@
   return cpi;
 }
 
-void vp8_remove_compressor(VP8_COMP **ptr) {
-  VP8_COMP *cpi = *ptr;
+void vp8_remove_compressor(VP8_COMP **comp) {
+  VP8_COMP *cpi = *comp;
 
   if (!cpi) return;
 
@@ -2120,12 +2109,6 @@
 
 #endif
 
-#ifdef VP8_ENTROPY_STATS
-    print_context_counters();
-    print_tree_update_probs();
-    print_mode_context();
-#endif
-
 #if CONFIG_INTERNAL_STATS
 
     if (cpi->pass != 1) {
@@ -2252,40 +2235,6 @@
     }
 #endif
 
-#ifdef VP8_ENTROPY_STATS
-    {
-      int i, j, k;
-      FILE *fmode = fopen("modecontext.c", "w");
-
-      fprintf(fmode, "\n#include \"entropymode.h\"\n\n");
-      fprintf(fmode, "const unsigned int vp8_kf_default_bmode_counts ");
-      fprintf(fmode,
-              "[VP8_BINTRAMODES] [VP8_BINTRAMODES] [VP8_BINTRAMODES] =\n{\n");
-
-      for (i = 0; i < 10; ++i) {
-        fprintf(fmode, "    { /* Above Mode :  %d */\n", i);
-
-        for (j = 0; j < 10; ++j) {
-          fprintf(fmode, "        {");
-
-          for (k = 0; k < 10; ++k) {
-            if (!intra_mode_stats[i][j][k])
-              fprintf(fmode, " %5d, ", 1);
-            else
-              fprintf(fmode, " %5d, ", intra_mode_stats[i][j][k]);
-          }
-
-          fprintf(fmode, "}, /* left_mode %d */\n", j);
-        }
-
-        fprintf(fmode, "    },\n");
-      }
-
-      fprintf(fmode, "};\n");
-      fclose(fmode);
-    }
-#endif
-
 #if defined(SECTIONBITS_OUTPUT)
 
     if (0) {
@@ -2326,7 +2275,7 @@
 
   vp8_remove_common(&cpi->common);
   vpx_free(cpi);
-  *ptr = 0;
+  *comp = 0;
 
 #ifdef OUTPUT_YUV_SRC
   fclose(yuv_file);
@@ -2464,6 +2413,7 @@
 
   if (ref_frame_flags & VP8_ALTR_FRAME) cpi->common.refresh_alt_ref_frame = 1;
 
+  cpi->ext_refresh_frame_flags_pending = 1;
   return 0;
 }
 
@@ -2819,13 +2769,8 @@
   return code_key_frame;
 }
 
-static void Pass1Encode(VP8_COMP *cpi, size_t *size, unsigned char *dest,
-                        unsigned int *frame_flags) {
-  (void)size;
-  (void)dest;
-  (void)frame_flags;
+static void Pass1Encode(VP8_COMP *cpi) {
   vp8_set_quantizer(cpi, 26);
-
   vp8_first_pass(cpi);
 }
 #endif
@@ -3562,6 +3507,7 @@
 
       cm->current_video_frame++;
       cpi->frames_since_key++;
+      cpi->ext_refresh_frame_flags_pending = 0;
       // We advance the temporal pattern for dropped frames.
       cpi->temporal_pattern_counter++;
 
@@ -3603,6 +3549,7 @@
 #endif
     cm->current_video_frame++;
     cpi->frames_since_key++;
+    cpi->ext_refresh_frame_flags_pending = 0;
     // We advance the temporal pattern for dropped frames.
     cpi->temporal_pattern_counter++;
     return;
@@ -3836,8 +3783,7 @@
   // (temporal denoising) mode.
   if (cpi->oxcf.noise_sensitivity >= 3) {
     if (cpi->denoiser.denoise_pars.spatial_blur != 0) {
-      vp8_de_noise(cm, cpi->Source, cpi->Source,
-                   cpi->denoiser.denoise_pars.spatial_blur, 1, 0, 0);
+      vp8_de_noise(cm, cpi->Source, cpi->denoiser.denoise_pars.spatial_blur, 1);
     }
   }
 #endif
@@ -3858,9 +3804,9 @@
     }
 
     if (cm->frame_type == KEY_FRAME) {
-      vp8_de_noise(cm, cpi->Source, cpi->Source, l, 1, 0, 1);
+      vp8_de_noise(cm, cpi->Source, l, 1);
     } else {
-      vp8_de_noise(cm, cpi->Source, cpi->Source, l, 1, 0, 1);
+      vp8_de_noise(cm, cpi->Source, l, 1);
 
       src = cpi->Source->y_buffer;
 
@@ -4003,7 +3949,13 @@
     vp8_encode_frame(cpi);
 
     if (cpi->pass == 0 && cpi->oxcf.end_usage == USAGE_STREAM_FROM_SERVER) {
-      if (vp8_drop_encodedframe_overshoot(cpi, Q)) return;
+      if (vp8_drop_encodedframe_overshoot(cpi, Q)) {
+        vpx_clear_system_state();
+        return;
+      }
+      if (cm->frame_type != KEY_FRAME)
+        cpi->last_pred_err_mb =
+            (int)(cpi->mb.prediction_error / cpi->common.MBs);
     }
 
     cpi->projected_frame_size -= vp8_estimate_entropy_savings(cpi);
@@ -4286,6 +4238,7 @@
     cpi->common.current_video_frame++;
     cpi->frames_since_key++;
     cpi->drop_frame_count++;
+    cpi->ext_refresh_frame_flags_pending = 0;
     // We advance the temporal pattern for dropped frames.
     cpi->temporal_pattern_counter++;
     return;
@@ -4394,8 +4347,10 @@
   /* For inter frames the current default behavior is that when
    * cm->refresh_golden_frame is set we copy the old GF over to the ARF buffer
    * This is purely an encoder decision at present.
+   * Avoid this behavior when refresh flags are set by the user.
    */
-  if (!cpi->oxcf.error_resilient_mode && cm->refresh_golden_frame) {
+  if (!cpi->oxcf.error_resilient_mode && cm->refresh_golden_frame &&
+      !cpi->ext_refresh_frame_flags_pending) {
     cm->copy_buffer_to_arf = 2;
   } else {
     cm->copy_buffer_to_arf = 0;
@@ -4702,6 +4657,8 @@
 
 #endif
 
+  cpi->ext_refresh_frame_flags_pending = 0;
+
   if (cm->refresh_golden_frame == 1) {
     cm->frame_flags = cm->frame_flags | FRAMEFLAGS_GOLDEN;
   } else {
@@ -4867,14 +4824,6 @@
 
   cm = &cpi->common;
 
-  if (setjmp(cpi->common.error.jmp)) {
-    cpi->common.error.setjmp = 0;
-    vpx_clear_system_state();
-    return VPX_CODEC_CORRUPT_FRAME;
-  }
-
-  cpi->common.error.setjmp = 1;
-
   vpx_usec_timer_start(&cmptimer);
 
   cpi->source = NULL;
@@ -5112,7 +5061,7 @@
   }
   switch (cpi->pass) {
 #if !CONFIG_REALTIME_ONLY
-    case 1: Pass1Encode(cpi, size, dest, frame_flags); break;
+    case 1: Pass1Encode(cpi); break;
     case 2: Pass2Encode(cpi, size, dest, dest_end, frame_flags); break;
 #endif  // !CONFIG_REALTIME_ONLY
     default:
@@ -5231,7 +5180,7 @@
           double weight = 0;
 
           vp8_deblock(cm, cm->frame_to_show, &cm->post_proc_buffer,
-                      cm->filter_level * 10 / 6, 1, 0);
+                      cm->filter_level * 10 / 6);
           vpx_clear_system_state();
 
           ye = calc_plane_error(orig->y_buffer, orig->y_stride, pp->y_buffer,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/onyx_int.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/onyx_int.h
index c489b46..50a750d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/onyx_int.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/onyx_int.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_ONYX_INT_H_
-#define VP8_ENCODER_ONYX_INT_H_
+#ifndef VPX_VP8_ENCODER_ONYX_INT_H_
+#define VPX_VP8_ENCODER_ONYX_INT_H_
 
 #include <stdio.h>
 #include "vpx_config.h"
@@ -57,6 +57,9 @@
 
 #define VP8_TEMPORAL_ALT_REF !CONFIG_REALTIME_ONLY
 
+/* vp8 uses 10,000,000 ticks/second as time stamp */
+#define TICKS_PER_SEC 10000000
+
 typedef struct {
   int kf_indicated;
   unsigned int frames_since_key;
@@ -257,6 +260,7 @@
 
   int count_mb_ref_frame_usage[MAX_REF_FRAMES];
 
+  int last_q[2];
 } LAYER_CONTEXT;
 
 typedef struct VP8_COMP {
@@ -510,6 +514,7 @@
 
   int force_maxqp;
   int frames_since_last_drop_overshoot;
+  int last_pred_err_mb;
 
   // GF update for 1 pass cbr.
   int gf_update_onepass_cbr;
@@ -695,6 +700,8 @@
 
   // Use the static threshold from ROI settings.
   int use_roi_static_threshold;
+
+  int ext_refresh_frame_flags_pending;
 } VP8_COMP;
 
 void vp8_initialize_enc(void);
@@ -714,8 +721,8 @@
 #if CONFIG_DEBUG
 #define CHECK_MEM_ERROR(lval, expr)                                         \
   do {                                                                      \
-    lval = (expr);                                                          \
-    if (!lval)                                                              \
+    (lval) = (expr);                                                        \
+    if (!(lval))                                                            \
       vpx_internal_error(&cpi->common.error, VPX_CODEC_MEM_ERROR,           \
                          "Failed to allocate " #lval " at %s:%d", __FILE__, \
                          __LINE__);                                         \
@@ -723,8 +730,8 @@
 #else
 #define CHECK_MEM_ERROR(lval, expr)                               \
   do {                                                            \
-    lval = (expr);                                                \
-    if (!lval)                                                    \
+    (lval) = (expr);                                              \
+    if (!(lval))                                                  \
       vpx_internal_error(&cpi->common.error, VPX_CODEC_MEM_ERROR, \
                          "Failed to allocate " #lval);            \
   } while (0)
@@ -733,4 +740,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_ONYX_INT_H_
+#endif  // VPX_VP8_ENCODER_ONYX_INT_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/pickinter.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/pickinter.c
index 193762e..04f68c3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/pickinter.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/pickinter.c
@@ -173,9 +173,8 @@
 
 static int pick_intra4x4block(MACROBLOCK *x, int ib,
                               B_PREDICTION_MODE *best_mode,
-                              const int *mode_costs,
-
-                              int *bestrate, int *bestdistortion) {
+                              const int *mode_costs, int *bestrate,
+                              int *bestdistortion) {
   BLOCKD *b = &x->e_mbd.block[ib];
   BLOCK *be = &x->block[ib];
   int dst_stride = x->e_mbd.dst.y_stride;
@@ -564,7 +563,7 @@
   MACROBLOCKD *xd = &x->e_mbd;
   MB_MODE_INFO best_mbmode;
 
-  int_mv best_ref_mv_sb[2];
+  int_mv best_ref_mv_sb[2] = { { 0 }, { 0 } };
   int_mv mode_mv_sb[2][MB_MODE_COUNT];
   int_mv best_ref_mv;
   int_mv *mode_mv;
@@ -602,7 +601,7 @@
   /* search range got from mv_pred(). It uses step_param levels. (0-7) */
   int sr = 0;
 
-  unsigned char *plane[4][3];
+  unsigned char *plane[4][3] = { { 0, 0 } };
   int ref_frame_map[4];
   int sign_bias = 0;
   int dot_artifact_candidate = 0;
@@ -631,13 +630,16 @@
       }
     }
 #endif
+    assert(plane[LAST_FRAME][0] != NULL);
     dot_artifact_candidate = check_dot_artifact_candidate(
         cpi, x, target_y, stride, plane[LAST_FRAME][0], mb_row, mb_col, 0);
     // If not found in Y channel, check UV channel.
     if (!dot_artifact_candidate) {
+      assert(plane[LAST_FRAME][1] != NULL);
       dot_artifact_candidate = check_dot_artifact_candidate(
           cpi, x, target_u, stride_uv, plane[LAST_FRAME][1], mb_row, mb_col, 1);
       if (!dot_artifact_candidate) {
+        assert(plane[LAST_FRAME][2] != NULL);
         dot_artifact_candidate = check_dot_artifact_candidate(
             cpi, x, target_v, stride_uv, plane[LAST_FRAME][2], mb_row, mb_col,
             2);
@@ -1016,7 +1018,7 @@
 #endif
             bestsme = vp8_hex_search(x, b, d, &mvp_full, &d->bmi.mv, step_param,
                                      sadpb, &cpi->fn_ptr[BLOCK_16X16],
-                                     x->mvsadcost, x->mvcost, &best_ref_mv);
+                                     x->mvsadcost, &best_ref_mv);
             mode_mv[NEWMV].as_int = d->bmi.mv.as_int;
           } else {
             bestsme = cpi->diamond_search_sad(
@@ -1068,10 +1070,12 @@
         rate2 +=
             vp8_mv_bit_cost(&mode_mv[NEWMV], &best_ref_mv, cpi->mb.mvcost, 128);
       }
+        // fall through
 
       case NEARESTMV:
       case NEARMV:
         if (mode_mv[this_mode].as_int == 0) continue;
+        // fall through
 
       case ZEROMV:
 
@@ -1301,9 +1305,9 @@
   update_mvcount(x, &best_ref_mv);
 }
 
-void vp8_pick_intra_mode(MACROBLOCK *x, int *rate_) {
+void vp8_pick_intra_mode(MACROBLOCK *x, int *rate) {
   int error4x4, error16x16 = INT_MAX;
-  int rate, best_rate = 0, distortion, best_sse;
+  int rate_, best_rate = 0, distortion, best_sse;
   MB_PREDICTION_MODE mode, best_mode = DC_PRED;
   int this_rd;
   unsigned int sse;
@@ -1321,23 +1325,23 @@
                                      xd->predictor, 16);
     distortion = vpx_variance16x16(*(b->base_src), b->src_stride, xd->predictor,
                                    16, &sse);
-    rate = x->mbmode_cost[xd->frame_type][mode];
-    this_rd = RDCOST(x->rdmult, x->rddiv, rate, distortion);
+    rate_ = x->mbmode_cost[xd->frame_type][mode];
+    this_rd = RDCOST(x->rdmult, x->rddiv, rate_, distortion);
 
     if (error16x16 > this_rd) {
       error16x16 = this_rd;
       best_mode = mode;
       best_sse = sse;
-      best_rate = rate;
+      best_rate = rate_;
     }
   }
   xd->mode_info_context->mbmi.mode = best_mode;
 
-  error4x4 = pick_intra4x4mby_modes(x, &rate, &best_sse);
+  error4x4 = pick_intra4x4mby_modes(x, &rate_, &best_sse);
   if (error4x4 < error16x16) {
     xd->mode_info_context->mbmi.mode = B_PRED;
-    best_rate = rate;
+    best_rate = rate_;
   }
 
-  *rate_ = best_rate;
+  *rate = best_rate;
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/pickinter.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/pickinter.h
index bf1d0c9..392fb41 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/pickinter.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/pickinter.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_PICKINTER_H_
-#define VP8_ENCODER_PICKINTER_H_
+#ifndef VPX_VP8_ENCODER_PICKINTER_H_
+#define VPX_VP8_ENCODER_PICKINTER_H_
 #include "vpx_config.h"
 #include "vp8/common/onyxc_int.h"
 
@@ -30,4 +30,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_PICKINTER_H_
+#endif  // VPX_VP8_ENCODER_PICKINTER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/picklpf.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/picklpf.c
index b1b712d..387ac97 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/picklpf.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/picklpf.c
@@ -18,7 +18,7 @@
 #include "vpx_scale/vpx_scale.h"
 #include "vp8/common/alloccommon.h"
 #include "vp8/common/loopfilter.h"
-#if ARCH_ARM
+#if VPX_ARCH_ARM
 #include "vpx_ports/arm.h"
 #endif
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/picklpf.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/picklpf.h
index e6ad0db..03597e5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/picklpf.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/picklpf.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_PICKLPF_H_
-#define VP8_ENCODER_PICKLPF_H_
+#ifndef VPX_VP8_ENCODER_PICKLPF_H_
+#define VPX_VP8_ENCODER_PICKLPF_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -27,4 +27,4 @@
 }
 #endif
 
-#endif  // VP8_ENCODER_PICKLPF_H_
+#endif  // VPX_VP8_ENCODER_PICKLPF_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/quantize.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/quantize.h
index 267150f..78746c0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/quantize.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/quantize.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_QUANTIZE_H_
-#define VP8_ENCODER_QUANTIZE_H_
+#ifndef VPX_VP8_ENCODER_QUANTIZE_H_
+#define VPX_VP8_ENCODER_QUANTIZE_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -31,4 +31,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_QUANTIZE_H_
+#endif  // VPX_VP8_ENCODER_QUANTIZE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/ratectrl.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/ratectrl.c
index fc833bc..dbd76ed 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/ratectrl.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/ratectrl.c
@@ -1125,6 +1125,14 @@
   }
 }
 
+static int limit_q_cbr_inter(int last_q, int current_q) {
+  int limit_down = 12;
+  if (last_q - current_q > limit_down)
+    return (last_q - limit_down);
+  else
+    return current_q;
+}
+
 int vp8_regulate_q(VP8_COMP *cpi, int target_bits_per_frame) {
   int Q = cpi->active_worst_quality;
 
@@ -1264,6 +1272,12 @@
     }
   }
 
+  // Limit decrease in Q for 1 pass CBR screen content mode.
+  if (cpi->common.frame_type != KEY_FRAME && cpi->pass == 0 &&
+      cpi->oxcf.end_usage == USAGE_STREAM_FROM_SERVER &&
+      cpi->oxcf.screen_content_mode)
+    Q = limit_q_cbr_inter(cpi->last_q[1], Q);
+
   return Q;
 }
 
@@ -1464,7 +1478,7 @@
       (cpi->oxcf.screen_content_mode == 2 ||
        (cpi->drop_frames_allowed &&
         (force_drop_overshoot ||
-         (cpi->rate_correction_factor < (4.0f * MIN_BPB_FACTOR) &&
+         (cpi->rate_correction_factor < (8.0f * MIN_BPB_FACTOR) &&
           cpi->frames_since_last_drop_overshoot > (int)cpi->framerate))))) {
     // Note: the "projected_frame_size" from encode_frame() only gives estimate
     // of mode/motion vector rate (in non-rd mode): so below we only require
@@ -1484,7 +1498,8 @@
     if (cpi->drop_frames_allowed && pred_err_mb > (thresh_pred_err_mb << 4))
       thresh_rate = thresh_rate >> 3;
     if ((Q < thresh_qp && cpi->projected_frame_size > thresh_rate &&
-         pred_err_mb > thresh_pred_err_mb) ||
+         pred_err_mb > thresh_pred_err_mb &&
+         pred_err_mb > 2 * cpi->last_pred_err_mb) ||
         force_drop_overshoot) {
       unsigned int i;
       double new_correction_factor;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/ratectrl.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/ratectrl.h
index 249de4e..844c72c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/ratectrl.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/ratectrl.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_RATECTRL_H_
-#define VP8_ENCODER_RATECTRL_H_
+#ifndef VPX_VP8_ENCODER_RATECTRL_H_
+#define VPX_VP8_ENCODER_RATECTRL_H_
 
 #include "onyx_int.h"
 
@@ -37,4 +37,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_RATECTRL_H_
+#endif  // VPX_VP8_ENCODER_RATECTRL_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/rdopt.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/rdopt.c
index fa5dde1..79a858e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/rdopt.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/rdopt.c
@@ -989,7 +989,7 @@
   br += rate;
 
   for (i = 0; i < label_count; ++i) {
-    int_mv mode_mv[B_MODE_COUNT];
+    int_mv mode_mv[B_MODE_COUNT] = { { 0 }, { 0 } };
     int best_label_rd = INT_MAX;
     B_PREDICTION_MODE mode_selected = ZERO4X4;
     int bestlabelyrate = 0;
@@ -1767,7 +1767,7 @@
   /* search range got from mv_pred(). It uses step_param levels. (0-7) */
   int sr = 0;
 
-  unsigned char *plane[4][3];
+  unsigned char *plane[4][3] = { { 0, 0 } };
   int ref_frame_map[4];
   int sign_bias = 0;
 
@@ -1779,6 +1779,10 @@
                best_rd_sse = UINT_MAX;
 #endif
 
+  // _uv variables are not set consistently before calling update_best_mode.
+  rd.rate_uv = 0;
+  rd.distortion_uv = 0;
+
   mode_mv = mode_mv_sb[sign_bias];
   best_ref_mv.as_int = 0;
   best_mode.rd = INT_MAX;
@@ -1846,6 +1850,9 @@
 
     /* everything but intra */
     if (x->e_mbd.mode_info_context->mbmi.ref_frame) {
+      assert(plane[this_ref_frame][0] != NULL &&
+             plane[this_ref_frame][1] != NULL &&
+             plane[this_ref_frame][2] != NULL);
       x->e_mbd.pre.y_buffer = plane[this_ref_frame][0];
       x->e_mbd.pre.u_buffer = plane[this_ref_frame][1];
       x->e_mbd.pre.v_buffer = plane[this_ref_frame][2];
@@ -1940,6 +1947,7 @@
         rd.distortion2 += distortion;
 
         if (tmp_rd < best_mode.yrd) {
+          assert(uv_intra_done);
           rd.rate2 += uv_intra_rate;
           rd.rate_uv = uv_intra_rate_tokenonly;
           rd.distortion2 += uv_intra_distortion;
@@ -2000,6 +2008,7 @@
         rd.distortion2 += distortion;
         rd.rate2 += x->mbmode_cost[x->e_mbd.frame_type]
                                   [x->e_mbd.mode_info_context->mbmi.mode];
+        assert(uv_intra_done);
         rd.rate2 += uv_intra_rate;
         rd.rate_uv = uv_intra_rate_tokenonly;
         rd.distortion2 += uv_intra_distortion;
@@ -2131,6 +2140,7 @@
         rd.rate2 +=
             vp8_mv_bit_cost(&mode_mv[NEWMV], &best_ref_mv, x->mvcost, 96);
       }
+        // fall through
 
       case NEARESTMV:
       case NEARMV:
@@ -2147,6 +2157,7 @@
             (mode_mv[this_mode].as_int == 0)) {
           continue;
         }
+        // fall through
 
       case ZEROMV:
 
@@ -2352,11 +2363,11 @@
   rd_update_mvcount(x, &best_ref_mv);
 }
 
-void vp8_rd_pick_intra_mode(MACROBLOCK *x, int *rate_) {
+void vp8_rd_pick_intra_mode(MACROBLOCK *x, int *rate) {
   int error4x4, error16x16;
   int rate4x4, rate16x16 = 0, rateuv;
   int dist4x4, dist16x16, distuv;
-  int rate;
+  int rate_;
   int rate4x4_tokenonly = 0;
   int rate16x16_tokenonly = 0;
   int rateuv_tokenonly = 0;
@@ -2364,7 +2375,7 @@
   x->e_mbd.mode_info_context->mbmi.ref_frame = INTRA_FRAME;
 
   rd_pick_intra_mbuv_mode(x, &rateuv, &rateuv_tokenonly, &distuv);
-  rate = rateuv;
+  rate_ = rateuv;
 
   error16x16 = rd_pick_intra16x16mby_mode(x, &rate16x16, &rate16x16_tokenonly,
                                           &dist16x16);
@@ -2374,10 +2385,10 @@
 
   if (error4x4 < error16x16) {
     x->e_mbd.mode_info_context->mbmi.mode = B_PRED;
-    rate += rate4x4;
+    rate_ += rate4x4;
   } else {
-    rate += rate16x16;
+    rate_ += rate16x16;
   }
 
-  *rate_ = rate;
+  *rate = rate_;
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/rdopt.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/rdopt.h
index 960bd8f..cc3db81 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/rdopt.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/rdopt.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_RDOPT_H_
-#define VP8_ENCODER_RDOPT_H_
+#ifndef VPX_VP8_ENCODER_RDOPT_H_
+#define VPX_VP8_ENCODER_RDOPT_H_
 
 #include "./vpx_config.h"
 
@@ -63,12 +63,12 @@
   }
 }
 
-extern void vp8_initialize_rd_consts(VP8_COMP *cpi, MACROBLOCK *x, int Qvalue);
-extern void vp8_rd_pick_inter_mode(VP8_COMP *cpi, MACROBLOCK *x,
-                                   int recon_yoffset, int recon_uvoffset,
-                                   int *returnrate, int *returndistortion,
-                                   int *returnintra, int mb_row, int mb_col);
-extern void vp8_rd_pick_intra_mode(MACROBLOCK *x, int *rate);
+void vp8_initialize_rd_consts(VP8_COMP *cpi, MACROBLOCK *x, int Qvalue);
+void vp8_rd_pick_inter_mode(VP8_COMP *cpi, MACROBLOCK *x, int recon_yoffset,
+                            int recon_uvoffset, int *returnrate,
+                            int *returndistortion, int *returnintra, int mb_row,
+                            int mb_col);
+void vp8_rd_pick_intra_mode(MACROBLOCK *x, int *rate);
 
 static INLINE void get_plane_pointers(const YV12_BUFFER_CONFIG *fb,
                                       unsigned char *plane[3],
@@ -110,9 +110,9 @@
   for (; i < 4; ++i) ref_frame_map[i] = -1;
 }
 
-extern void vp8_mv_pred(VP8_COMP *cpi, MACROBLOCKD *xd, const MODE_INFO *here,
-                        int_mv *mvp, int refframe, int *ref_frame_sign_bias,
-                        int *sr, int near_sadidx[]);
+void vp8_mv_pred(VP8_COMP *cpi, MACROBLOCKD *xd, const MODE_INFO *here,
+                 int_mv *mvp, int refframe, int *ref_frame_sign_bias, int *sr,
+                 int near_sadidx[]);
 void vp8_cal_sad(VP8_COMP *cpi, MACROBLOCKD *xd, MACROBLOCK *x,
                  int recon_yoffset, int near_sadidx[]);
 int VP8_UVSSE(MACROBLOCK *x);
@@ -123,4 +123,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_RDOPT_H_
+#endif  // VPX_VP8_ENCODER_RDOPT_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/segmentation.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/segmentation.h
index 1395a34..4ddbdbb 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/segmentation.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/segmentation.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_SEGMENTATION_H_
-#define VP8_ENCODER_SEGMENTATION_H_
+#ifndef VPX_VP8_ENCODER_SEGMENTATION_H_
+#define VPX_VP8_ENCODER_SEGMENTATION_H_
 
 #include "string.h"
 #include "vp8/common/blockd.h"
@@ -26,4 +26,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_SEGMENTATION_H_
+#endif  // VPX_VP8_ENCODER_SEGMENTATION_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/temporal_filter.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/temporal_filter.c
index 0a7d25f..1c1a55f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/temporal_filter.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/temporal_filter.c
@@ -158,7 +158,8 @@
   /* Ignore mv costing by sending NULL cost arrays */
   bestsme =
       vp8_hex_search(x, b, d, &best_ref_mv1_full, &d->bmi.mv, step_param, sadpb,
-                     &cpi->fn_ptr[BLOCK_16X16], NULL, NULL, &best_ref_mv1);
+                     &cpi->fn_ptr[BLOCK_16X16], NULL, &best_ref_mv1);
+  (void)bestsme;  // Ignore unused return value.
 
 #if ALT_REF_SUBPEL_ENABLED
   /* Try sub-pixel MC? */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/temporal_filter.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/temporal_filter.h
index 865d909..fd39f5c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/temporal_filter.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/temporal_filter.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_TEMPORAL_FILTER_H_
-#define VP8_ENCODER_TEMPORAL_FILTER_H_
+#ifndef VPX_VP8_ENCODER_TEMPORAL_FILTER_H_
+#define VPX_VP8_ENCODER_TEMPORAL_FILTER_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -23,4 +23,4 @@
 }
 #endif
 
-#endif  // VP8_ENCODER_TEMPORAL_FILTER_H_
+#endif  // VPX_VP8_ENCODER_TEMPORAL_FILTER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/tokenize.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/tokenize.c
index ca5f0e3..c3d7026 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/tokenize.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/tokenize.c
@@ -19,10 +19,6 @@
 /* Global event counters used for accumulating statistics across several
    compressions, then generating context.c = initial stats. */
 
-#ifdef VP8_ENTROPY_STATS
-_int64 context_counters[BLOCK_TYPES][COEF_BANDS][PREV_COEF_CONTEXTS]
-                       [MAX_ENTROPY_TOKENS];
-#endif
 void vp8_stuff_mb(VP8_COMP *cpi, MACROBLOCK *x, TOKENEXTRA **t);
 void vp8_fix_contexts(MACROBLOCKD *x);
 
@@ -383,72 +379,6 @@
   tokenize1st_order_b(x, t, plane_type, cpi);
 }
 
-#ifdef VP8_ENTROPY_STATS
-
-void init_context_counters(void) {
-  memset(context_counters, 0, sizeof(context_counters));
-}
-
-void print_context_counters() {
-  int type, band, pt, t;
-
-  FILE *const f = fopen("context.c", "w");
-
-  fprintf(f, "#include \"entropy.h\"\n");
-
-  fprintf(f, "\n/* *** GENERATED FILE: DO NOT EDIT *** */\n\n");
-
-  fprintf(f,
-          "int Contexts[BLOCK_TYPES] [COEF_BANDS] [PREV_COEF_CONTEXTS] "
-          "[MAX_ENTROPY_TOKENS];\n\n");
-
-  fprintf(f,
-          "const int default_contexts[BLOCK_TYPES] [COEF_BANDS] "
-          "[PREV_COEF_CONTEXTS] [MAX_ENTROPY_TOKENS] = {");
-
-#define Comma(X) (X ? "," : "")
-
-  type = 0;
-
-  do {
-    fprintf(f, "%s\n  { /* block Type %d */", Comma(type), type);
-
-    band = 0;
-
-    do {
-      fprintf(f, "%s\n    { /* Coeff Band %d */", Comma(band), band);
-
-      pt = 0;
-
-      do {
-        fprintf(f, "%s\n      {", Comma(pt));
-
-        t = 0;
-
-        do {
-          const _int64 x = context_counters[type][band][pt][t];
-          const int y = (int)x;
-
-          assert(x == (_int64)y); /* no overflow handling yet */
-          fprintf(f, "%s %d", Comma(t), y);
-
-        } while (++t < MAX_ENTROPY_TOKENS);
-
-        fprintf(f, "}");
-      } while (++pt < PREV_COEF_CONTEXTS);
-
-      fprintf(f, "\n    }");
-
-    } while (++band < COEF_BANDS);
-
-    fprintf(f, "\n  }");
-  } while (++type < BLOCK_TYPES);
-
-  fprintf(f, "\n};\n");
-  fclose(f);
-}
-#endif
-
 static void stuff2nd_order_b(TOKENEXTRA **tp, ENTROPY_CONTEXT *a,
                              ENTROPY_CONTEXT *l, VP8_COMP *cpi, MACROBLOCK *x) {
   int pt;              /* near block/prev token context index */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/tokenize.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/tokenize.h
index e5dbdfc..47b5be1 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/tokenize.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/tokenize.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_TOKENIZE_H_
-#define VP8_ENCODER_TOKENIZE_H_
+#ifndef VPX_VP8_ENCODER_TOKENIZE_H_
+#define VPX_VP8_ENCODER_TOKENIZE_H_
 
 #include "vp8/common/entropy.h"
 #include "block.h"
@@ -34,14 +34,6 @@
 
 int rd_cost_mby(MACROBLOCKD *);
 
-#ifdef VP8_ENTROPY_STATS
-void init_context_counters();
-void print_context_counters();
-
-extern _int64 context_counters[BLOCK_TYPES][COEF_BANDS][PREV_COEF_CONTEXTS]
-                              [MAX_ENTROPY_TOKENS];
-#endif
-
 extern const short *const vp8_dct_value_cost_ptr;
 /* TODO: The Token field should be broken out into a separate char array to
  *  improve cache locality, since it's needed for costing when the rest of the
@@ -53,4 +45,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_TOKENIZE_H_
+#endif  // VPX_VP8_ENCODER_TOKENIZE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/treewriter.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/treewriter.h
index e766990..c02683a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/treewriter.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/treewriter.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP8_ENCODER_TREEWRITER_H_
-#define VP8_ENCODER_TREEWRITER_H_
+#ifndef VPX_VP8_ENCODER_TREEWRITER_H_
+#define VPX_VP8_ENCODER_TREEWRITER_H_
 
 /* Trees map alphabets into huffman-like codes suitable for an arithmetic
    bit coder.  Timothy S Murphy  11 October 2004 */
@@ -91,12 +91,12 @@
 
 /* Fill array of costs for all possible token values. */
 
-void vp8_cost_tokens(int *Costs, const vp8_prob *, vp8_tree);
+void vp8_cost_tokens(int *c, const vp8_prob *, vp8_tree);
 
-void vp8_cost_tokens2(int *Costs, const vp8_prob *, vp8_tree, int);
+void vp8_cost_tokens2(int *c, const vp8_prob *, vp8_tree, int);
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP8_ENCODER_TREEWRITER_H_
+#endif  // VPX_VP8_ENCODER_TREEWRITER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/vp8_quantize.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/vp8_quantize.c
index ff6e04e..5b89555 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/vp8_quantize.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/vp8_quantize.c
@@ -174,8 +174,6 @@
   } else {
     *quant = (1 << 16) / d;
     *shift = 0;
-    /* use multiplication and constant shift by 16 */
-    *shift = 1 << (16 - *shift);
   }
 }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/encodeopt.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/block_error_sse2.asm
similarity index 100%
rename from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/encodeopt.asm
rename to Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/block_error_sse2.asm
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/copy_sse2.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/copy_sse2.asm
similarity index 100%
rename from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/copy_sse2.asm
rename to Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/copy_sse2.asm
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/copy_sse3.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/copy_sse3.asm
similarity index 100%
rename from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/common/x86/copy_sse3.asm
rename to Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/copy_sse3.asm
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/quantize_sse4.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/quantize_sse4.c
index 6f2c163..389c167 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/quantize_sse4.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/quantize_sse4.c
@@ -11,28 +11,29 @@
 #include <smmintrin.h> /* SSE4.1 */
 
 #include "./vp8_rtcd.h"
-#include "vp8/encoder/block.h"
 #include "vp8/common/entropy.h" /* vp8_default_inv_zig_zag */
+#include "vp8/encoder/block.h"
 
-#define SELECT_EOB(i, z, x, y, q)         \
-  do {                                    \
-    short boost = *zbin_boost_ptr;        \
-    short x_z = _mm_extract_epi16(x, z);  \
-    short y_z = _mm_extract_epi16(y, z);  \
-    int cmp = (x_z < boost) | (y_z == 0); \
-    zbin_boost_ptr++;                     \
-    if (cmp) break;                       \
-    q = _mm_insert_epi16(q, y_z, z);      \
-    eob = i;                              \
-    zbin_boost_ptr = b->zrun_zbin_boost;  \
+#define SELECT_EOB(i, z, x, y, q)                         \
+  do {                                                    \
+    short boost = *zbin_boost_ptr;                        \
+    /* Technically _mm_extract_epi16() returns an int: */ \
+    /* https://bugs.llvm.org/show_bug.cgi?id=41657 */     \
+    short x_z = (short)_mm_extract_epi16(x, z);           \
+    short y_z = (short)_mm_extract_epi16(y, z);           \
+    int cmp = (x_z < boost) | (y_z == 0);                 \
+    zbin_boost_ptr++;                                     \
+    if (cmp) break;                                       \
+    q = _mm_insert_epi16(q, y_z, z);                      \
+    eob = i;                                              \
+    zbin_boost_ptr = b->zrun_zbin_boost;                  \
   } while (0)
 
 void vp8_regular_quantize_b_sse4_1(BLOCK *b, BLOCKD *d) {
   char eob = 0;
   short *zbin_boost_ptr = b->zrun_zbin_boost;
 
-  __m128i sz0, x0, sz1, x1, y0, y1, x_minus_zbin0, x_minus_zbin1, dqcoeff0,
-      dqcoeff1;
+  __m128i x0, x1, y0, y1, x_minus_zbin0, x_minus_zbin1, dqcoeff0, dqcoeff1;
   __m128i quant_shift0 = _mm_load_si128((__m128i *)(b->quant_shift));
   __m128i quant_shift1 = _mm_load_si128((__m128i *)(b->quant_shift + 8));
   __m128i z0 = _mm_load_si128((__m128i *)(b->coeff));
@@ -53,15 +54,9 @@
   zbin_extra = _mm_shufflelo_epi16(zbin_extra, 0);
   zbin_extra = _mm_unpacklo_epi16(zbin_extra, zbin_extra);
 
-  /* Sign of z: z >> 15 */
-  sz0 = _mm_srai_epi16(z0, 15);
-  sz1 = _mm_srai_epi16(z1, 15);
-
-  /* x = abs(z): (z ^ sz) - sz */
-  x0 = _mm_xor_si128(z0, sz0);
-  x1 = _mm_xor_si128(z1, sz1);
-  x0 = _mm_sub_epi16(x0, sz0);
-  x1 = _mm_sub_epi16(x1, sz1);
+  /* x = abs(z) */
+  x0 = _mm_abs_epi16(z0);
+  x1 = _mm_abs_epi16(z1);
 
   /* zbin[] + zbin_extra */
   zbin0 = _mm_add_epi16(zbin0, zbin_extra);
@@ -89,11 +84,9 @@
   y0 = _mm_mulhi_epi16(y0, quant_shift0);
   y1 = _mm_mulhi_epi16(y1, quant_shift1);
 
-  /* Return the sign: (y ^ sz) - sz */
-  y0 = _mm_xor_si128(y0, sz0);
-  y1 = _mm_xor_si128(y1, sz1);
-  y0 = _mm_sub_epi16(y0, sz0);
-  y1 = _mm_sub_epi16(y1, sz1);
+  /* Restore the sign. */
+  y0 = _mm_sign_epi16(y0, z0);
+  y1 = _mm_sign_epi16(y1, z1);
 
   /* The loop gets unrolled anyway. Avoid the vp8_default_zig_zag1d lookup. */
   SELECT_EOB(1, 0, x_minus_zbin0, y0, qcoeff0);
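The hunk above drops the xor/subtract sign trick in favor of the SSSE3 intrinsics `_mm_abs_epi16` and `_mm_sign_epi16`. A scalar sketch of the equivalence per 16-bit lane (helper names are illustrative, not libvpx's):

```c
#include <assert.h>
#include <stdint.h>

static int16_t abs_xor_sub(int16_t z) {
  const int16_t sz = (int16_t)(z >> 15); /* 0 for z >= 0, -1 for z < 0 */
  return (int16_t)((z ^ sz) - sz);       /* per-lane _mm_abs_epi16 */
}

static int16_t restore_sign(int16_t y, int16_t z) {
  const int16_t sz = (int16_t)(z >> 15);
  return (int16_t)((y ^ sz) - sz);       /* _mm_sign_epi16(y, z) for y >= 0 */
}
```

Note that `_mm_sign_epi16` additionally zeroes lanes where `z == 0`; since `y` here is derived from `abs(z)`, those lanes are already zero, so the two formulations coincide in this kernel.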
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/vp8_quantize_ssse3.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/vp8_quantize_ssse3.c
index d547450..147c30c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/vp8_quantize_ssse3.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/encoder/x86/vp8_quantize_ssse3.c
@@ -52,9 +52,9 @@
 
   __m128i sz0, sz1, x, x0, x1, y0, y1, zeros, abs0, abs1;
 
-  DECLARE_ALIGNED(16, const uint8_t, pshufb_zig_zag_mask[16]) = {
-    0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15
-  };
+  DECLARE_ALIGNED(16, const uint8_t,
+                  pshufb_zig_zag_mask[16]) = { 0, 1,  4,  8,  5, 2,  3,  6,
+                                               9, 12, 13, 10, 7, 11, 14, 15 };
   __m128i zig_zag = _mm_load_si128((const __m128i *)pshufb_zig_zag_mask);
 
   /* sign of z: z >> 15 */
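The realigned mask above is, assuming it encodes VP8's 4x4 zig-zag scan order, consumed by `_mm_shuffle_epi8`: output lane `i` takes the input lane named by `mask[i]`. A scalar sketch of that reorder (helper name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

static const uint8_t zig_zag[16] = { 0, 1,  4,  8,  5, 2,  3,  6,
                                     9, 12, 13, 10, 7, 11, 14, 15 };

/* Output position i takes the coefficient at raster position zig_zag[i],
 * the per-lane behavior of a pshufb with the mask above. */
static void zig_zag_reorder(const int16_t *in, int16_t *out) {
  int i;
  for (i = 0; i < 16; ++i) out[i] = in[zig_zag[i]];
}
```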
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8_common.mk b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8_common.mk
index 246fe6a..286a93a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8_common.mk
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8_common.mk
@@ -15,7 +15,6 @@
 VP8_COMMON_SRCS-yes += common/alloccommon.c
 VP8_COMMON_SRCS-yes += common/blockd.c
 VP8_COMMON_SRCS-yes += common/coefupdateprobs.h
-VP8_COMMON_SRCS-yes += common/copy_c.c
 # VP8_COMMON_SRCS-yes += common/debugmodes.c
 VP8_COMMON_SRCS-yes += common/default_coef_probs.h
 VP8_COMMON_SRCS-yes += common/dequantize.c
@@ -70,10 +69,8 @@
 
 VP8_COMMON_SRCS-yes += common/treecoder.c
 
-VP8_COMMON_SRCS-$(ARCH_X86)$(ARCH_X86_64) += common/x86/filter_x86.c
-VP8_COMMON_SRCS-$(ARCH_X86)$(ARCH_X86_64) += common/x86/filter_x86.h
-VP8_COMMON_SRCS-$(ARCH_X86)$(ARCH_X86_64) += common/x86/vp8_asm_stubs.c
-VP8_COMMON_SRCS-$(ARCH_X86)$(ARCH_X86_64) += common/x86/loopfilter_x86.c
+VP8_COMMON_SRCS-$(VPX_ARCH_X86)$(VPX_ARCH_X86_64) += common/x86/vp8_asm_stubs.c
+VP8_COMMON_SRCS-$(VPX_ARCH_X86)$(VPX_ARCH_X86_64) += common/x86/loopfilter_x86.c
 VP8_COMMON_SRCS-$(CONFIG_POSTPROC) += common/mfqe.c
 VP8_COMMON_SRCS-$(CONFIG_POSTPROC) += common/postproc.h
 VP8_COMMON_SRCS-$(CONFIG_POSTPROC) += common/postproc.c
@@ -82,21 +79,20 @@
 VP8_COMMON_SRCS-$(HAVE_MMX) += common/x86/idctllm_mmx.asm
 VP8_COMMON_SRCS-$(HAVE_MMX) += common/x86/recon_mmx.asm
 VP8_COMMON_SRCS-$(HAVE_MMX) += common/x86/subpixel_mmx.asm
-VP8_COMMON_SRCS-$(HAVE_SSE2) += common/x86/copy_sse2.asm
 VP8_COMMON_SRCS-$(HAVE_SSE2) += common/x86/idct_blk_sse2.c
 VP8_COMMON_SRCS-$(HAVE_SSE2) += common/x86/idctllm_sse2.asm
 VP8_COMMON_SRCS-$(HAVE_SSE2) += common/x86/recon_sse2.asm
+VP8_COMMON_SRCS-$(HAVE_SSE2) += common/x86/bilinear_filter_sse2.c
 VP8_COMMON_SRCS-$(HAVE_SSE2) += common/x86/subpixel_sse2.asm
 VP8_COMMON_SRCS-$(HAVE_SSE2) += common/x86/loopfilter_sse2.asm
 VP8_COMMON_SRCS-$(HAVE_SSE2) += common/x86/iwalsh_sse2.asm
-VP8_COMMON_SRCS-$(HAVE_SSE3) += common/x86/copy_sse3.asm
 VP8_COMMON_SRCS-$(HAVE_SSSE3) += common/x86/subpixel_ssse3.asm
 
 ifeq ($(CONFIG_POSTPROC),yes)
 VP8_COMMON_SRCS-$(HAVE_SSE2) += common/x86/mfqe_sse2.asm
 endif
 
-ifeq ($(ARCH_X86_64),yes)
+ifeq ($(VPX_ARCH_X86_64),yes)
 VP8_COMMON_SRCS-$(HAVE_SSE2) += common/x86/loopfilter_block_sse2_x86_64.asm
 endif
 
@@ -130,14 +126,13 @@
 
 # common (neon intrinsics)
 VP8_COMMON_SRCS-$(HAVE_NEON)  += common/arm/loopfilter_arm.c
+VP8_COMMON_SRCS-$(HAVE_NEON)  += common/arm/loopfilter_arm.h
 VP8_COMMON_SRCS-$(HAVE_NEON)  += common/arm/neon/bilinearpredict_neon.c
 VP8_COMMON_SRCS-$(HAVE_NEON)  += common/arm/neon/copymem_neon.c
 VP8_COMMON_SRCS-$(HAVE_NEON)  += common/arm/neon/dc_only_idct_add_neon.c
 VP8_COMMON_SRCS-$(HAVE_NEON)  += common/arm/neon/dequant_idct_neon.c
 VP8_COMMON_SRCS-$(HAVE_NEON)  += common/arm/neon/dequantizeb_neon.c
 VP8_COMMON_SRCS-$(HAVE_NEON)  += common/arm/neon/idct_blk_neon.c
-VP8_COMMON_SRCS-$(HAVE_NEON)  += common/arm/neon/idct_dequant_0_2x_neon.c
-VP8_COMMON_SRCS-$(HAVE_NEON)  += common/arm/neon/idct_dequant_full_2x_neon.c
 VP8_COMMON_SRCS-$(HAVE_NEON)  += common/arm/neon/iwalsh_neon.c
 VP8_COMMON_SRCS-$(HAVE_NEON)  += common/arm/neon/vp8_loopfilter_neon.c
 VP8_COMMON_SRCS-$(HAVE_NEON)  += common/arm/neon/loopfiltersimplehorizontaledge_neon.c
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8_cx_iface.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8_cx_iface.c
index b67baab..8f7617a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8_cx_iface.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8_cx_iface.c
@@ -16,7 +16,10 @@
 #include "vpx/internal/vpx_codec_internal.h"
 #include "vpx_version.h"
 #include "vpx_mem/vpx_mem.h"
+#include "vpx_ports/static_assert.h"
+#include "vpx_ports/system_state.h"
 #include "vpx_ports/vpx_once.h"
+#include "vpx_util/vpx_timestamp.h"
 #include "vp8/encoder/onyx_int.h"
 #include "vpx/vp8cx.h"
 #include "vp8/encoder/firstpass.h"
@@ -49,7 +52,7 @@
 #if !(CONFIG_REALTIME_ONLY)
   0, /* cpu_used      */
 #else
-  4, /* cpu_used      */
+  4,                      /* cpu_used      */
 #endif
   0, /* enable_auto_alt_ref */
   0, /* noise_sensitivity */
@@ -74,6 +77,9 @@
   vpx_codec_priv_t base;
   vpx_codec_enc_cfg_t cfg;
   struct vp8_extracfg vp8_cfg;
+  vpx_rational64_t timestamp_ratio;
+  vpx_codec_pts_t pts_offset;
+  unsigned char pts_offset_initialized;
   VP8_CONFIG oxcf;
   struct VP8_COMP *cpi;
   unsigned char *cx_data;
@@ -105,10 +111,10 @@
     return VPX_CODEC_INVALID_PARAM; \
   } while (0)
 
-#define RANGE_CHECK(p, memb, lo, hi)                                 \
-  do {                                                               \
-    if (!(((p)->memb == lo || (p)->memb > (lo)) && (p)->memb <= hi)) \
-      ERROR(#memb " out of range [" #lo ".." #hi "]");               \
+#define RANGE_CHECK(p, memb, lo, hi)                                     \
+  do {                                                                   \
+    if (!(((p)->memb == (lo) || (p)->memb > (lo)) && (p)->memb <= (hi))) \
+      ERROR(#memb " out of range [" #lo ".." #hi "]");                   \
   } while (0)
 
 #define RANGE_CHECK_HI(p, memb, hi)                                     \
@@ -258,9 +264,7 @@
                                     const vpx_image_t *img) {
   switch (img->fmt) {
     case VPX_IMG_FMT_YV12:
-    case VPX_IMG_FMT_I420:
-    case VPX_IMG_FMT_VPXI420:
-    case VPX_IMG_FMT_VPXYV12: break;
+    case VPX_IMG_FMT_I420: break;
     default:
       ERROR("Invalid image format. Only YV12 and I420 images are supported");
   }
@@ -484,6 +488,9 @@
 static vpx_codec_err_t set_cpu_used(vpx_codec_alg_priv_t *ctx, va_list args) {
   struct vp8_extracfg extra_cfg = ctx->vp8_cfg;
   extra_cfg.cpu_used = CAST(VP8E_SET_CPUUSED, args);
+  // Use fastest speed setting (speed 16 or -16) if it's set beyond the range.
+  extra_cfg.cpu_used = VPXMIN(16, extra_cfg.cpu_used);
+  extra_cfg.cpu_used = VPXMAX(-16, extra_cfg.cpu_used);
   return update_extracfg(ctx, &extra_cfg);
 }
 
@@ -656,6 +663,12 @@
     res = validate_config(priv, &priv->cfg, &priv->vp8_cfg, 0);
 
     if (!res) {
+      priv->pts_offset_initialized = 0;
+      priv->timestamp_ratio.den = priv->cfg.g_timebase.den;
+      priv->timestamp_ratio.num = (int64_t)priv->cfg.g_timebase.num;
+      priv->timestamp_ratio.num *= TICKS_PER_SEC;
+      reduce_ratio(&priv->timestamp_ratio);
+
       set_vp8e_config(&priv->oxcf, priv->cfg, priv->vp8_cfg, mr_cfg);
       priv->cpi = vp8_create_compressor(&priv->oxcf);
       if (!priv->cpi) res = VPX_CODEC_MEM_ERROR;
@@ -720,12 +733,14 @@
   new_qc = MODE_BESTQUALITY;
 
   if (deadline) {
+    /* Convert duration parameter from stream timebase to microseconds */
     uint64_t duration_us;
 
-    /* Convert duration parameter from stream timebase to microseconds */
-    duration_us = (uint64_t)duration * 1000000 *
-                  (uint64_t)ctx->cfg.g_timebase.num /
-                  (uint64_t)ctx->cfg.g_timebase.den;
+    VPX_STATIC_ASSERT(TICKS_PER_SEC > 1000000 &&
+                      (TICKS_PER_SEC % 1000000) == 0);
+
+    duration_us = duration * (uint64_t)ctx->timestamp_ratio.num /
+                  (ctx->timestamp_ratio.den * (TICKS_PER_SEC / 1000000));
 
     /* If the deadline is more that the duration this frame is to be shown,
      * use good quality mode. Otherwise use realtime mode.
@@ -799,9 +814,12 @@
 static vpx_codec_err_t vp8e_encode(vpx_codec_alg_priv_t *ctx,
                                    const vpx_image_t *img, vpx_codec_pts_t pts,
                                    unsigned long duration,
-                                   vpx_enc_frame_flags_t flags,
+                                   vpx_enc_frame_flags_t enc_flags,
                                    unsigned long deadline) {
-  vpx_codec_err_t res = VPX_CODEC_OK;
+  volatile vpx_codec_err_t res = VPX_CODEC_OK;
+  // Make a copy as volatile to avoid -Wclobbered with longjmp.
+  volatile vpx_enc_frame_flags_t flags = enc_flags;
+  volatile vpx_codec_pts_t pts_val = pts;
 
   if (!ctx->cfg.rc_target_bitrate) {
 #if CONFIG_MULTI_RES_ENCODING
@@ -822,6 +840,12 @@
 
   if (!res) res = validate_config(ctx, &ctx->cfg, &ctx->vp8_cfg, 1);
 
+  if (!ctx->pts_offset_initialized) {
+    ctx->pts_offset = pts_val;
+    ctx->pts_offset_initialized = 1;
+  }
+  pts_val -= ctx->pts_offset;
+
   pick_quickcompress_mode(ctx, duration, deadline);
   vpx_codec_pkt_list_init(&ctx->pkt_list);
 
@@ -843,6 +867,12 @@
     }
   }
 
+  if (setjmp(ctx->cpi->common.error.jmp)) {
+    ctx->cpi->common.error.setjmp = 0;
+    vpx_clear_system_state();
+    return VPX_CODEC_CORRUPT_FRAME;
+  }
+
   /* Initialize the encoder instance on the first frame*/
   if (!res && ctx->cpi) {
     unsigned int lib_flags;
@@ -865,11 +895,10 @@
     /* Convert API flags to internal codec lib flags */
     lib_flags = (flags & VPX_EFLAG_FORCE_KF) ? FRAMEFLAGS_KEY : 0;
 
-    /* vp8 use 10,000,000 ticks/second as time stamp */
     dst_time_stamp =
-        pts * 10000000 * ctx->cfg.g_timebase.num / ctx->cfg.g_timebase.den;
-    dst_end_time_stamp = (pts + duration) * 10000000 * ctx->cfg.g_timebase.num /
-                         ctx->cfg.g_timebase.den;
+        pts_val * ctx->timestamp_ratio.num / ctx->timestamp_ratio.den;
+    dst_end_time_stamp = (pts_val + (int64_t)duration) *
+                         ctx->timestamp_ratio.num / ctx->timestamp_ratio.den;
 
     if (img != NULL) {
       res = image2yuvconfig(img, &sd);
@@ -889,6 +918,8 @@
     cx_data_end = ctx->cx_data + cx_data_sz;
     lib_flags = 0;
 
+    ctx->cpi->common.error.setjmp = 1;
+
     while (cx_data_sz >= ctx->cx_data_sz / 2) {
       comp_data_state = vp8_get_compressed_data(
           ctx->cpi, &lib_flags, &size, cx_data, cx_data_end, &dst_time_stamp,
@@ -906,18 +937,21 @@
         VP8_COMP *cpi = (VP8_COMP *)ctx->cpi;
 
         /* Add the frame packet to the list of returned packets. */
-        round = (vpx_codec_pts_t)10000000 * ctx->cfg.g_timebase.num / 2 - 1;
+        round = (vpx_codec_pts_t)ctx->timestamp_ratio.num / 2;
+        if (round > 0) --round;
         delta = (dst_end_time_stamp - dst_time_stamp);
         pkt.kind = VPX_CODEC_CX_FRAME_PKT;
         pkt.data.frame.pts =
-            (dst_time_stamp * ctx->cfg.g_timebase.den + round) /
-            ctx->cfg.g_timebase.num / 10000000;
+            (dst_time_stamp * ctx->timestamp_ratio.den + round) /
+                ctx->timestamp_ratio.num +
+            ctx->pts_offset;
         pkt.data.frame.duration =
-            (unsigned long)((delta * ctx->cfg.g_timebase.den + round) /
-                            ctx->cfg.g_timebase.num / 10000000);
+            (unsigned long)((delta * ctx->timestamp_ratio.den + round) /
+                            ctx->timestamp_ratio.num);
         pkt.data.frame.flags = lib_flags << 16;
         pkt.data.frame.width[0] = cpi->common.Width;
         pkt.data.frame.height[0] = cpi->common.Height;
+        pkt.data.frame.spatial_layer_encoded[0] = 1;
 
         if (lib_flags & FRAMEFLAGS_KEY) {
           pkt.data.frame.flags |= VPX_FRAME_IS_KEY;
@@ -932,9 +966,9 @@
            * Invisible frames have no duration.
            */
           pkt.data.frame.pts =
-              ((cpi->last_time_stamp_seen * ctx->cfg.g_timebase.den + round) /
-               ctx->cfg.g_timebase.num / 10000000) +
-              1;
+              ((cpi->last_time_stamp_seen * ctx->timestamp_ratio.den + round) /
+               ctx->timestamp_ratio.num) +
+              ctx->pts_offset + 1;
           pkt.data.frame.duration = 0;
         }
 
@@ -1192,7 +1226,7 @@
 static vpx_codec_enc_cfg_map_t vp8e_usage_cfg_map[] = {
   { 0,
     {
-        0, /* g_usage */
+        0, /* g_usage (unused) */
         0, /* g_threads */
         0, /* g_profile */
 
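The `timestamp_ratio` changes in the encoder interface above replace the hardcoded `10000000 * num / den` arithmetic with a reduced rational kept in the context. A self-contained sketch of that conversion, under the assumption that `TICKS_PER_SEC` is libvpx's internal 10,000,000 ticks/second (type and helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define TICKS_PER_SEC 10000000

typedef struct { int64_t num, den; } rational64;

static int64_t gcd64(int64_t a, int64_t b) {
  while (b) { const int64_t t = a % b; a = b; b = t; }
  return a;
}

/* e.g. timebase 1/30 -> 10000000/30 -> reduced to 1000000/3 */
static rational64 make_ratio(int num, int den) {
  rational64 r = { (int64_t)num * TICKS_PER_SEC, den };
  const int64_t g = gcd64(r.num, r.den);
  r.num /= g;
  r.den /= g;
  return r;
}

static int64_t pts_to_ticks(int64_t pts, rational64 r) {
  return pts * r.num / r.den;
}

static int64_t ticks_to_pts(int64_t ticks, rational64 r) {
  int64_t round = r.num / 2;
  if (round > 0) --round;               /* same rounding as the hunk above */
  return (ticks * r.den + round) / r.num;
}
```

Reducing the ratio keeps the intermediate products small, and the near-half rounding on the way back makes `ticks_to_pts(pts_to_ticks(p))` return `p` even when the forward conversion truncated.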
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8_dx_iface.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8_dx_iface.c
index a2008b9..43156a0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8_dx_iface.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8_dx_iface.c
@@ -38,13 +38,19 @@
 
 /* Structures for handling memory allocations */
 typedef enum { VP8_SEG_ALG_PRIV = 256, VP8_SEG_MAX } mem_seg_id_t;
-#define NELEMENTS(x) ((int)(sizeof(x) / sizeof(x[0])))
+#define NELEMENTS(x) ((int)(sizeof(x) / sizeof((x)[0])))
 
 struct vpx_codec_alg_priv {
   vpx_codec_priv_t base;
   vpx_codec_dec_cfg_t cfg;
   vp8_stream_info_t si;
   int decoder_init;
+#if CONFIG_MULTITHREAD
+  // Restart threads on next frame if set to 1.
+  // This is set when error happens in multithreaded decoding and all threads
+  // are shut down.
+  int restart_threads;
+#endif
   int postproc_cfg_set;
   vp8_postproc_cfg_t postproc_cfg;
   vpx_decrypt_cb decrypt_cb;
@@ -268,7 +274,7 @@
                                   const uint8_t *data, unsigned int data_sz,
                                   void *user_priv, long deadline) {
   volatile vpx_codec_err_t res;
-  unsigned int resolution_change = 0;
+  volatile unsigned int resolution_change = 0;
   unsigned int w, h;
 
   if (!ctx->fragments.enabled && (data == NULL && data_sz == 0)) {
@@ -298,6 +304,27 @@
 
   if ((ctx->si.h != h) || (ctx->si.w != w)) resolution_change = 1;
 
+#if CONFIG_MULTITHREAD
+  if (!res && ctx->restart_threads) {
+    struct frame_buffers *fb = &ctx->yv12_frame_buffers;
+    VP8D_COMP *pbi = ctx->yv12_frame_buffers.pbi[0];
+    VP8_COMMON *const pc = &pbi->common;
+    if (setjmp(pbi->common.error.jmp)) {
+      vp8_remove_decoder_instances(fb);
+      vp8_zero(fb->pbi);
+      vpx_clear_system_state();
+      return VPX_CODEC_ERROR;
+    }
+    pbi->common.error.setjmp = 1;
+    pbi->max_threads = ctx->cfg.threads;
+    vp8_decoder_create_threads(pbi);
+    if (vpx_atomic_load_acquire(&pbi->b_multithreaded_rd)) {
+      vp8mt_alloc_temp_buffers(pbi, pc->Width, pc->mb_rows);
+    }
+    ctx->restart_threads = 0;
+    pbi->common.error.setjmp = 0;
+  }
+#endif
   /* Initialize the decoder instance on the first frame*/
   if (!res && !ctx->decoder_init) {
     VP8D_CONFIG oxcf;
@@ -335,8 +362,8 @@
 
   if (!res) {
     VP8D_COMP *pbi = ctx->yv12_frame_buffers.pbi[0];
+    VP8_COMMON *const pc = &pbi->common;
     if (resolution_change) {
-      VP8_COMMON *const pc = &pbi->common;
       MACROBLOCKD *const xd = &pbi->mb;
 #if CONFIG_MULTITHREAD
       int i;
@@ -428,11 +455,38 @@
       pbi->common.fb_idx_ref_cnt[0] = 0;
     }
 
+    if (setjmp(pbi->common.error.jmp)) {
+      vpx_clear_system_state();
+      /* We do not know if the missing frame(s) was supposed to update
+       * any of the reference buffers, but we act conservatively and
+       * mark only the last buffer as corrupted.
+       */
+      pc->yv12_fb[pc->lst_fb_idx].corrupted = 1;
+
+      if (pc->fb_idx_ref_cnt[pc->new_fb_idx] > 0) {
+        pc->fb_idx_ref_cnt[pc->new_fb_idx]--;
+      }
+      pc->error.setjmp = 0;
+#if CONFIG_MULTITHREAD
+      if (pbi->restart_threads) {
+        ctx->si.w = 0;
+        ctx->si.h = 0;
+        ctx->restart_threads = 1;
+      }
+#endif
+      res = update_error_state(ctx, &pbi->common.error);
+      return res;
+    }
+
+    pbi->common.error.setjmp = 1;
+
     /* update the pbi fragment data */
     pbi->fragments = ctx->fragments;
-
+#if CONFIG_MULTITHREAD
+    pbi->restart_threads = 0;
+#endif
     ctx->user_priv = user_priv;
-    if (vp8dx_receive_compressed_data(pbi, data_sz, data, deadline)) {
+    if (vp8dx_receive_compressed_data(pbi, deadline)) {
       res = update_error_state(ctx, &pbi->common.error);
     }
 
@@ -538,8 +592,10 @@
 static vpx_codec_err_t vp8_get_quantizer(vpx_codec_alg_priv_t *ctx,
                                          va_list args) {
   int *const arg = va_arg(args, int *);
+  VP8D_COMP *pbi = ctx->yv12_frame_buffers.pbi[0];
   if (arg == NULL) return VPX_CODEC_INVALID_PARAM;
-  *arg = vp8dx_get_quantizer(ctx->yv12_frame_buffers.pbi[0]);
+  if (pbi == NULL) return VPX_CODEC_CORRUPT_FRAME;
+  *arg = vp8dx_get_quantizer(pbi);
   return VPX_CODEC_OK;
 }
 
@@ -569,6 +625,7 @@
 
   if (update_info) {
     VP8D_COMP *pbi = (VP8D_COMP *)ctx->yv12_frame_buffers.pbi[0];
+    if (pbi == NULL) return VPX_CODEC_CORRUPT_FRAME;
 
     *update_info = pbi->common.refresh_alt_ref_frame * (int)VP8_ALTR_FRAME +
                    pbi->common.refresh_golden_frame * (int)VP8_GOLD_FRAME +
@@ -586,13 +643,16 @@
 
   if (ref_info) {
     VP8D_COMP *pbi = (VP8D_COMP *)ctx->yv12_frame_buffers.pbi[0];
-    VP8_COMMON *oci = &pbi->common;
-    *ref_info =
-        (vp8dx_references_buffer(oci, ALTREF_FRAME) ? VP8_ALTR_FRAME : 0) |
-        (vp8dx_references_buffer(oci, GOLDEN_FRAME) ? VP8_GOLD_FRAME : 0) |
-        (vp8dx_references_buffer(oci, LAST_FRAME) ? VP8_LAST_FRAME : 0);
-
-    return VPX_CODEC_OK;
+    if (pbi) {
+      VP8_COMMON *oci = &pbi->common;
+      *ref_info =
+          (vp8dx_references_buffer(oci, ALTREF_FRAME) ? VP8_ALTR_FRAME : 0) |
+          (vp8dx_references_buffer(oci, GOLDEN_FRAME) ? VP8_GOLD_FRAME : 0) |
+          (vp8dx_references_buffer(oci, LAST_FRAME) ? VP8_LAST_FRAME : 0);
+      return VPX_CODEC_OK;
+    } else {
+      return VPX_CODEC_CORRUPT_FRAME;
+    }
   } else {
     return VPX_CODEC_INVALID_PARAM;
   }
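The decoder hunks above add `setjmp`/`longjmp` based recovery: deep parsing code jumps back to the entry point on a corrupt frame, which then marks state dirty and returns an error instead of crashing. A minimal sketch of the pattern (all names illustrative, not libvpx's):

```c
#include <assert.h>
#include <setjmp.h>

struct decoder {
  jmp_buf jmp;
  int setjmp_armed;   /* mirrors common.error.setjmp in the hunks */
  int corrupt_frames;
};

static void deep_parse(struct decoder *d, int corrupt) {
  if (corrupt && d->setjmp_armed) longjmp(d->jmp, 1); /* abort this frame */
}

static int decode_frame(struct decoder *d, int corrupt) {
  if (setjmp(d->jmp)) {       /* re-entered via longjmp: clean up */
    d->setjmp_armed = 0;
    d->corrupt_frames++;
    return -1;                /* analogous to VPX_CODEC_CORRUPT_FRAME */
  }
  d->setjmp_armed = 1;
  deep_parse(d, corrupt);
  d->setjmp_armed = 0;
  return 0;
}
```

The guard flag matters: a `longjmp` is only legal while the frame that called `setjmp` is still live, which is why the real code arms `error.setjmp` just before decoding and clears it on every exit path.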
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8cx.mk b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8cx.mk
index 0dac016..3a8f8ea 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8cx.mk
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp8/vp8cx.mk
@@ -23,6 +23,7 @@
 VP8_CX_SRCS-yes += encoder/defaultcoefcounts.h
 VP8_CX_SRCS-yes += encoder/bitstream.c
 VP8_CX_SRCS-yes += encoder/boolhuff.c
+VP8_CX_SRCS-yes += encoder/copy_c.c
 VP8_CX_SRCS-yes += encoder/dct.c
 VP8_CX_SRCS-yes += encoder/encodeframe.c
 VP8_CX_SRCS-yes += encoder/encodeframe.h
@@ -82,6 +83,8 @@
 VP8_CX_SRCS_REMOVE-yes += encoder/temporal_filter.h
 endif
 
+VP8_CX_SRCS-$(HAVE_SSE2) += encoder/x86/copy_sse2.asm
+VP8_CX_SRCS-$(HAVE_SSE2) += encoder/x86/copy_sse3.asm
 VP8_CX_SRCS-$(HAVE_SSE2) += encoder/x86/dct_sse2.asm
 VP8_CX_SRCS-$(HAVE_SSE2) += encoder/x86/fwalsh_sse2.asm
 VP8_CX_SRCS-$(HAVE_SSE2) += encoder/x86/vp8_quantize_sse2.c
@@ -92,9 +95,9 @@
 VP8_CX_SRCS-$(HAVE_SSE2) += encoder/x86/denoising_sse2.c
 endif
 
+VP8_CX_SRCS-$(HAVE_SSE2) += encoder/x86/block_error_sse2.asm
 VP8_CX_SRCS-$(HAVE_SSE2) += encoder/x86/temporal_filter_apply_sse2.asm
 VP8_CX_SRCS-$(HAVE_SSE2) += encoder/x86/vp8_enc_stubs_sse2.c
-VP8_CX_SRCS-$(ARCH_X86)$(ARCH_X86_64) += encoder/x86/encodeopt.asm
 
 ifeq ($(CONFIG_REALTIME_ONLY),yes)
 VP8_CX_SRCS_REMOVE-$(HAVE_SSE2) += encoder/x86/temporal_filter_apply_sse2.asm
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht16x16_add_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht16x16_add_neon.c
new file mode 100644
index 0000000..219ff63
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht16x16_add_neon.c
@@ -0,0 +1,446 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <arm_neon.h>
+
+#include "./vpx_dsp_rtcd.h"
+#include "vp9/common/vp9_enums.h"
+#include "vp9/common/arm/neon/vp9_iht_neon.h"
+#include "vpx_dsp/arm/highbd_idct_neon.h"
+#include "vpx_dsp/arm/idct_neon.h"
+#include "vpx_dsp/arm/transpose_neon.h"
+#include "vpx_dsp/inv_txfm.h"
+
+// Use macros to make sure argument lane is passed in as a constant integer.
+
+#define vmull_lane_s32_dual(in, c, lane, out)                          \
+  do {                                                                 \
+    out[0].val[0] = vmull_lane_s32(vget_low_s32(in.val[0]), c, lane);  \
+    out[0].val[1] = vmull_lane_s32(vget_low_s32(in.val[1]), c, lane);  \
+    out[1].val[0] = vmull_lane_s32(vget_high_s32(in.val[0]), c, lane); \
+    out[1].val[1] = vmull_lane_s32(vget_high_s32(in.val[1]), c, lane); \
+  } while (0)
+
+#define vmlal_lane_s32_dual(in, c, lane, out)                             \
+  do {                                                                    \
+    out[0].val[0] =                                                       \
+        vmlal_lane_s32(out[0].val[0], vget_low_s32(in.val[0]), c, lane);  \
+    out[0].val[1] =                                                       \
+        vmlal_lane_s32(out[0].val[1], vget_low_s32(in.val[1]), c, lane);  \
+    out[1].val[0] =                                                       \
+        vmlal_lane_s32(out[1].val[0], vget_high_s32(in.val[0]), c, lane); \
+    out[1].val[1] =                                                       \
+        vmlal_lane_s32(out[1].val[1], vget_high_s32(in.val[1]), c, lane); \
+  } while (0)
+
+#define vmlsl_lane_s32_dual(in, c, lane, out)                             \
+  do {                                                                    \
+    out[0].val[0] =                                                       \
+        vmlsl_lane_s32(out[0].val[0], vget_low_s32(in.val[0]), c, lane);  \
+    out[0].val[1] =                                                       \
+        vmlsl_lane_s32(out[0].val[1], vget_low_s32(in.val[1]), c, lane);  \
+    out[1].val[0] =                                                       \
+        vmlsl_lane_s32(out[1].val[0], vget_high_s32(in.val[0]), c, lane); \
+    out[1].val[1] =                                                       \
+        vmlsl_lane_s32(out[1].val[1], vget_high_s32(in.val[1]), c, lane); \
+  } while (0)
+
+static INLINE int32x4x2_t
+highbd_dct_const_round_shift_low_8(const int64x2x2_t *const in) {
+  int32x4x2_t out;
+  out.val[0] = vcombine_s32(vrshrn_n_s64(in[0].val[0], DCT_CONST_BITS),
+                            vrshrn_n_s64(in[1].val[0], DCT_CONST_BITS));
+  out.val[1] = vcombine_s32(vrshrn_n_s64(in[0].val[1], DCT_CONST_BITS),
+                            vrshrn_n_s64(in[1].val[1], DCT_CONST_BITS));
+  return out;
+}
+
+#define highbd_iadst_half_butterfly(in, c, lane, out) \
+  do {                                                \
+    int64x2x2_t t[2];                                 \
+    vmull_lane_s32_dual(in, c, lane, t);              \
+    out = highbd_dct_const_round_shift_low_8(t);      \
+  } while (0)
+
+#define highbd_iadst_butterfly(in0, in1, c, lane0, lane1, s0, s1) \
+  do {                                                            \
+    vmull_lane_s32_dual(in0, c, lane0, s0);                       \
+    vmull_lane_s32_dual(in0, c, lane1, s1);                       \
+    vmlal_lane_s32_dual(in1, c, lane1, s0);                       \
+    vmlsl_lane_s32_dual(in1, c, lane0, s1);                       \
+  } while (0)
+
+static INLINE int32x4x2_t vaddq_s32_dual(const int32x4x2_t in0,
+                                         const int32x4x2_t in1) {
+  int32x4x2_t out;
+  out.val[0] = vaddq_s32(in0.val[0], in1.val[0]);
+  out.val[1] = vaddq_s32(in0.val[1], in1.val[1]);
+  return out;
+}
+
+static INLINE int64x2x2_t vaddq_s64_dual(const int64x2x2_t in0,
+                                         const int64x2x2_t in1) {
+  int64x2x2_t out;
+  out.val[0] = vaddq_s64(in0.val[0], in1.val[0]);
+  out.val[1] = vaddq_s64(in0.val[1], in1.val[1]);
+  return out;
+}
+
+static INLINE int32x4x2_t vsubq_s32_dual(const int32x4x2_t in0,
+                                         const int32x4x2_t in1) {
+  int32x4x2_t out;
+  out.val[0] = vsubq_s32(in0.val[0], in1.val[0]);
+  out.val[1] = vsubq_s32(in0.val[1], in1.val[1]);
+  return out;
+}
+
+static INLINE int64x2x2_t vsubq_s64_dual(const int64x2x2_t in0,
+                                         const int64x2x2_t in1) {
+  int64x2x2_t out;
+  out.val[0] = vsubq_s64(in0.val[0], in1.val[0]);
+  out.val[1] = vsubq_s64(in0.val[1], in1.val[1]);
+  return out;
+}
+
+static INLINE int32x4x2_t vcombine_s32_dual(const int32x2x2_t in0,
+                                            const int32x2x2_t in1) {
+  int32x4x2_t out;
+  out.val[0] = vcombine_s32(in0.val[0], in1.val[0]);
+  out.val[1] = vcombine_s32(in0.val[1], in1.val[1]);
+  return out;
+}
+
+static INLINE int32x4x2_t highbd_add_dct_const_round_shift_low_8(
+    const int64x2x2_t *const in0, const int64x2x2_t *const in1) {
+  const int64x2x2_t sum_lo = vaddq_s64_dual(in0[0], in1[0]);
+  const int64x2x2_t sum_hi = vaddq_s64_dual(in0[1], in1[1]);
+  int32x2x2_t out_lo, out_hi;
+
+  out_lo.val[0] = vrshrn_n_s64(sum_lo.val[0], DCT_CONST_BITS);
+  out_lo.val[1] = vrshrn_n_s64(sum_lo.val[1], DCT_CONST_BITS);
+  out_hi.val[0] = vrshrn_n_s64(sum_hi.val[0], DCT_CONST_BITS);
+  out_hi.val[1] = vrshrn_n_s64(sum_hi.val[1], DCT_CONST_BITS);
+  return vcombine_s32_dual(out_lo, out_hi);
+}
+
+static INLINE int32x4x2_t highbd_sub_dct_const_round_shift_low_8(
+    const int64x2x2_t *const in0, const int64x2x2_t *const in1) {
+  const int64x2x2_t sub_lo = vsubq_s64_dual(in0[0], in1[0]);
+  const int64x2x2_t sub_hi = vsubq_s64_dual(in0[1], in1[1]);
+  int32x2x2_t out_lo, out_hi;
+
+  out_lo.val[0] = vrshrn_n_s64(sub_lo.val[0], DCT_CONST_BITS);
+  out_lo.val[1] = vrshrn_n_s64(sub_lo.val[1], DCT_CONST_BITS);
+  out_hi.val[0] = vrshrn_n_s64(sub_hi.val[0], DCT_CONST_BITS);
+  out_hi.val[1] = vrshrn_n_s64(sub_hi.val[1], DCT_CONST_BITS);
+  return vcombine_s32_dual(out_lo, out_hi);
+}
+
+static INLINE int32x4x2_t vnegq_s32_dual(const int32x4x2_t in) {
+  int32x4x2_t out;
+  out.val[0] = vnegq_s32(in.val[0]);
+  out.val[1] = vnegq_s32(in.val[1]);
+  return out;
+}
+
+static void highbd_iadst16_neon(const int32_t *input, int32_t *output,
+                                uint16_t *dest, const int stride,
+                                const int bd) {
+  const int32x4_t c_1_31_5_27 =
+      create_s32x4_neon(cospi_1_64, cospi_31_64, cospi_5_64, cospi_27_64);
+  const int32x4_t c_9_23_13_19 =
+      create_s32x4_neon(cospi_9_64, cospi_23_64, cospi_13_64, cospi_19_64);
+  const int32x4_t c_17_15_21_11 =
+      create_s32x4_neon(cospi_17_64, cospi_15_64, cospi_21_64, cospi_11_64);
+  const int32x4_t c_25_7_29_3 =
+      create_s32x4_neon(cospi_25_64, cospi_7_64, cospi_29_64, cospi_3_64);
+  const int32x4_t c_4_28_20_12 =
+      create_s32x4_neon(cospi_4_64, cospi_28_64, cospi_20_64, cospi_12_64);
+  const int32x4_t c_16_n16_8_24 =
+      create_s32x4_neon(cospi_16_64, -cospi_16_64, cospi_8_64, cospi_24_64);
+  int32x4x2_t in[16], out[16];
+  int32x4x2_t x[16], t[12];
+  int64x2x2_t s0[2], s1[2], s2[2], s3[2], s4[2], s5[2], s6[2], s7[2];
+  int64x2x2_t s8[2], s9[2], s10[2], s11[2], s12[2], s13[2], s14[2], s15[2];
+
+  // Load input (16x8)
+  in[0].val[0] = vld1q_s32(input);
+  in[0].val[1] = vld1q_s32(input + 4);
+  input += 8;
+  in[8].val[0] = vld1q_s32(input);
+  in[8].val[1] = vld1q_s32(input + 4);
+  input += 8;
+  in[1].val[0] = vld1q_s32(input);
+  in[1].val[1] = vld1q_s32(input + 4);
+  input += 8;
+  in[9].val[0] = vld1q_s32(input);
+  in[9].val[1] = vld1q_s32(input + 4);
+  input += 8;
+  in[2].val[0] = vld1q_s32(input);
+  in[2].val[1] = vld1q_s32(input + 4);
+  input += 8;
+  in[10].val[0] = vld1q_s32(input);
+  in[10].val[1] = vld1q_s32(input + 4);
+  input += 8;
+  in[3].val[0] = vld1q_s32(input);
+  in[3].val[1] = vld1q_s32(input + 4);
+  input += 8;
+  in[11].val[0] = vld1q_s32(input);
+  in[11].val[1] = vld1q_s32(input + 4);
+  input += 8;
+  in[4].val[0] = vld1q_s32(input);
+  in[4].val[1] = vld1q_s32(input + 4);
+  input += 8;
+  in[12].val[0] = vld1q_s32(input);
+  in[12].val[1] = vld1q_s32(input + 4);
+  input += 8;
+  in[5].val[0] = vld1q_s32(input);
+  in[5].val[1] = vld1q_s32(input + 4);
+  input += 8;
+  in[13].val[0] = vld1q_s32(input);
+  in[13].val[1] = vld1q_s32(input + 4);
+  input += 8;
+  in[6].val[0] = vld1q_s32(input);
+  in[6].val[1] = vld1q_s32(input + 4);
+  input += 8;
+  in[14].val[0] = vld1q_s32(input);
+  in[14].val[1] = vld1q_s32(input + 4);
+  input += 8;
+  in[7].val[0] = vld1q_s32(input);
+  in[7].val[1] = vld1q_s32(input + 4);
+  input += 8;
+  in[15].val[0] = vld1q_s32(input);
+  in[15].val[1] = vld1q_s32(input + 4);
+
+  // Transpose
+  transpose_s32_8x8(&in[0], &in[1], &in[2], &in[3], &in[4], &in[5], &in[6],
+                    &in[7]);
+  transpose_s32_8x8(&in[8], &in[9], &in[10], &in[11], &in[12], &in[13], &in[14],
+                    &in[15]);
+
+  x[0] = in[15];
+  x[1] = in[0];
+  x[2] = in[13];
+  x[3] = in[2];
+  x[4] = in[11];
+  x[5] = in[4];
+  x[6] = in[9];
+  x[7] = in[6];
+  x[8] = in[7];
+  x[9] = in[8];
+  x[10] = in[5];
+  x[11] = in[10];
+  x[12] = in[3];
+  x[13] = in[12];
+  x[14] = in[1];
+  x[15] = in[14];
+
+  // stage 1
+  highbd_iadst_butterfly(x[0], x[1], vget_low_s32(c_1_31_5_27), 0, 1, s0, s1);
+  highbd_iadst_butterfly(x[2], x[3], vget_high_s32(c_1_31_5_27), 0, 1, s2, s3);
+  highbd_iadst_butterfly(x[4], x[5], vget_low_s32(c_9_23_13_19), 0, 1, s4, s5);
+  highbd_iadst_butterfly(x[6], x[7], vget_high_s32(c_9_23_13_19), 0, 1, s6, s7);
+  highbd_iadst_butterfly(x[8], x[9], vget_low_s32(c_17_15_21_11), 0, 1, s8, s9);
+  highbd_iadst_butterfly(x[10], x[11], vget_high_s32(c_17_15_21_11), 0, 1, s10,
+                         s11);
+  highbd_iadst_butterfly(x[12], x[13], vget_low_s32(c_25_7_29_3), 0, 1, s12,
+                         s13);
+  highbd_iadst_butterfly(x[14], x[15], vget_high_s32(c_25_7_29_3), 0, 1, s14,
+                         s15);
+
+  x[0] = highbd_add_dct_const_round_shift_low_8(s0, s8);
+  x[1] = highbd_add_dct_const_round_shift_low_8(s1, s9);
+  x[2] = highbd_add_dct_const_round_shift_low_8(s2, s10);
+  x[3] = highbd_add_dct_const_round_shift_low_8(s3, s11);
+  x[4] = highbd_add_dct_const_round_shift_low_8(s4, s12);
+  x[5] = highbd_add_dct_const_round_shift_low_8(s5, s13);
+  x[6] = highbd_add_dct_const_round_shift_low_8(s6, s14);
+  x[7] = highbd_add_dct_const_round_shift_low_8(s7, s15);
+  x[8] = highbd_sub_dct_const_round_shift_low_8(s0, s8);
+  x[9] = highbd_sub_dct_const_round_shift_low_8(s1, s9);
+  x[10] = highbd_sub_dct_const_round_shift_low_8(s2, s10);
+  x[11] = highbd_sub_dct_const_round_shift_low_8(s3, s11);
+  x[12] = highbd_sub_dct_const_round_shift_low_8(s4, s12);
+  x[13] = highbd_sub_dct_const_round_shift_low_8(s5, s13);
+  x[14] = highbd_sub_dct_const_round_shift_low_8(s6, s14);
+  x[15] = highbd_sub_dct_const_round_shift_low_8(s7, s15);
+
+  // stage 2
+  t[0] = x[0];
+  t[1] = x[1];
+  t[2] = x[2];
+  t[3] = x[3];
+  t[4] = x[4];
+  t[5] = x[5];
+  t[6] = x[6];
+  t[7] = x[7];
+  highbd_iadst_butterfly(x[8], x[9], vget_low_s32(c_4_28_20_12), 0, 1, s8, s9);
+  highbd_iadst_butterfly(x[10], x[11], vget_high_s32(c_4_28_20_12), 0, 1, s10,
+                         s11);
+  highbd_iadst_butterfly(x[13], x[12], vget_low_s32(c_4_28_20_12), 1, 0, s13,
+                         s12);
+  highbd_iadst_butterfly(x[15], x[14], vget_high_s32(c_4_28_20_12), 1, 0, s15,
+                         s14);
+
+  x[0] = vaddq_s32_dual(t[0], t[4]);
+  x[1] = vaddq_s32_dual(t[1], t[5]);
+  x[2] = vaddq_s32_dual(t[2], t[6]);
+  x[3] = vaddq_s32_dual(t[3], t[7]);
+  x[4] = vsubq_s32_dual(t[0], t[4]);
+  x[5] = vsubq_s32_dual(t[1], t[5]);
+  x[6] = vsubq_s32_dual(t[2], t[6]);
+  x[7] = vsubq_s32_dual(t[3], t[7]);
+  x[8] = highbd_add_dct_const_round_shift_low_8(s8, s12);
+  x[9] = highbd_add_dct_const_round_shift_low_8(s9, s13);
+  x[10] = highbd_add_dct_const_round_shift_low_8(s10, s14);
+  x[11] = highbd_add_dct_const_round_shift_low_8(s11, s15);
+  x[12] = highbd_sub_dct_const_round_shift_low_8(s8, s12);
+  x[13] = highbd_sub_dct_const_round_shift_low_8(s9, s13);
+  x[14] = highbd_sub_dct_const_round_shift_low_8(s10, s14);
+  x[15] = highbd_sub_dct_const_round_shift_low_8(s11, s15);
+
+  // stage 3
+  t[0] = x[0];
+  t[1] = x[1];
+  t[2] = x[2];
+  t[3] = x[3];
+  highbd_iadst_butterfly(x[4], x[5], vget_high_s32(c_16_n16_8_24), 0, 1, s4,
+                         s5);
+  highbd_iadst_butterfly(x[7], x[6], vget_high_s32(c_16_n16_8_24), 1, 0, s7,
+                         s6);
+  t[8] = x[8];
+  t[9] = x[9];
+  t[10] = x[10];
+  t[11] = x[11];
+  highbd_iadst_butterfly(x[12], x[13], vget_high_s32(c_16_n16_8_24), 0, 1, s12,
+                         s13);
+  highbd_iadst_butterfly(x[15], x[14], vget_high_s32(c_16_n16_8_24), 1, 0, s15,
+                         s14);
+
+  x[0] = vaddq_s32_dual(t[0], t[2]);
+  x[1] = vaddq_s32_dual(t[1], t[3]);
+  x[2] = vsubq_s32_dual(t[0], t[2]);
+  x[3] = vsubq_s32_dual(t[1], t[3]);
+  x[4] = highbd_add_dct_const_round_shift_low_8(s4, s6);
+  x[5] = highbd_add_dct_const_round_shift_low_8(s5, s7);
+  x[6] = highbd_sub_dct_const_round_shift_low_8(s4, s6);
+  x[7] = highbd_sub_dct_const_round_shift_low_8(s5, s7);
+  x[8] = vaddq_s32_dual(t[8], t[10]);
+  x[9] = vaddq_s32_dual(t[9], t[11]);
+  x[10] = vsubq_s32_dual(t[8], t[10]);
+  x[11] = vsubq_s32_dual(t[9], t[11]);
+  x[12] = highbd_add_dct_const_round_shift_low_8(s12, s14);
+  x[13] = highbd_add_dct_const_round_shift_low_8(s13, s15);
+  x[14] = highbd_sub_dct_const_round_shift_low_8(s12, s14);
+  x[15] = highbd_sub_dct_const_round_shift_low_8(s13, s15);
+
+  // stage 4
+  {
+    const int32x4x2_t sum = vaddq_s32_dual(x[2], x[3]);
+    const int32x4x2_t sub = vsubq_s32_dual(x[2], x[3]);
+    highbd_iadst_half_butterfly(sum, vget_low_s32(c_16_n16_8_24), 1, x[2]);
+    highbd_iadst_half_butterfly(sub, vget_low_s32(c_16_n16_8_24), 0, x[3]);
+  }
+  {
+    const int32x4x2_t sum = vaddq_s32_dual(x[7], x[6]);
+    const int32x4x2_t sub = vsubq_s32_dual(x[7], x[6]);
+    highbd_iadst_half_butterfly(sum, vget_low_s32(c_16_n16_8_24), 0, x[6]);
+    highbd_iadst_half_butterfly(sub, vget_low_s32(c_16_n16_8_24), 0, x[7]);
+  }
+  {
+    const int32x4x2_t sum = vaddq_s32_dual(x[11], x[10]);
+    const int32x4x2_t sub = vsubq_s32_dual(x[11], x[10]);
+    highbd_iadst_half_butterfly(sum, vget_low_s32(c_16_n16_8_24), 0, x[10]);
+    highbd_iadst_half_butterfly(sub, vget_low_s32(c_16_n16_8_24), 0, x[11]);
+  }
+  {
+    const int32x4x2_t sum = vaddq_s32_dual(x[14], x[15]);
+    const int32x4x2_t sub = vsubq_s32_dual(x[14], x[15]);
+    highbd_iadst_half_butterfly(sum, vget_low_s32(c_16_n16_8_24), 1, x[14]);
+    highbd_iadst_half_butterfly(sub, vget_low_s32(c_16_n16_8_24), 0, x[15]);
+  }
+
+  out[0] = x[0];
+  out[1] = vnegq_s32_dual(x[8]);
+  out[2] = x[12];
+  out[3] = vnegq_s32_dual(x[4]);
+  out[4] = x[6];
+  out[5] = x[14];
+  out[6] = x[10];
+  out[7] = x[2];
+  out[8] = x[3];
+  out[9] = x[11];
+  out[10] = x[15];
+  out[11] = x[7];
+  out[12] = x[5];
+  out[13] = vnegq_s32_dual(x[13]);
+  out[14] = x[9];
+  out[15] = vnegq_s32_dual(x[1]);
+
+  if (output) {
+    highbd_idct16x16_store_pass1(out, output);
+  } else {
+    highbd_idct16x16_add_store(out, dest, stride, bd);
+  }
+}
+
+typedef void (*highbd_iht_1d)(const int32_t *input, int32_t *output,
+                              uint16_t *dest, const int stride, const int bd);
+
+typedef struct {
+  highbd_iht_1d cols, rows;  // vertical and horizontal
+} highbd_iht_2d;
+
+void vp9_highbd_iht16x16_256_add_neon(const tran_low_t *input, uint16_t *dest,
+                                      int stride, int tx_type, int bd) {
+  if (bd == 8) {
+    static const iht_2d IHT_16[] = {
+      { vpx_idct16x16_256_add_half1d,
+        vpx_idct16x16_256_add_half1d },  // DCT_DCT  = 0
+      { vpx_iadst16x16_256_add_half1d,
+        vpx_idct16x16_256_add_half1d },  // ADST_DCT = 1
+      { vpx_idct16x16_256_add_half1d,
+        vpx_iadst16x16_256_add_half1d },  // DCT_ADST = 2
+      { vpx_iadst16x16_256_add_half1d,
+        vpx_iadst16x16_256_add_half1d }  // ADST_ADST = 3
+    };
+    const iht_2d ht = IHT_16[tx_type];
+    int16_t row_output[16 * 16];
+
+    // pass 1
+    ht.rows(input, row_output, dest, stride, 1);               // upper 8 rows
+    ht.rows(input + 8 * 16, row_output + 8, dest, stride, 1);  // lower 8 rows
+
+    // pass 2
+    ht.cols(row_output, NULL, dest, stride, 1);               // left 8 columns
+    ht.cols(row_output + 16 * 8, NULL, dest + 8, stride, 1);  // right 8 columns
+  } else {
+    static const highbd_iht_2d IHT_16[] = {
+      { vpx_highbd_idct16x16_256_add_half1d,
+        vpx_highbd_idct16x16_256_add_half1d },  // DCT_DCT  = 0
+      { highbd_iadst16_neon,
+        vpx_highbd_idct16x16_256_add_half1d },  // ADST_DCT = 1
+      { vpx_highbd_idct16x16_256_add_half1d,
+        highbd_iadst16_neon },                      // DCT_ADST = 2
+      { highbd_iadst16_neon, highbd_iadst16_neon }  // ADST_ADST = 3
+    };
+    const highbd_iht_2d ht = IHT_16[tx_type];
+    int32_t row_output[16 * 16];
+
+    // pass 1
+    ht.rows(input, row_output, dest, stride, bd);               // upper 8 rows
+    ht.rows(input + 8 * 16, row_output + 8, dest, stride, bd);  // lower 8 rows
+
+    // pass 2
+    ht.cols(row_output, NULL, dest, stride, bd);  // left 8 columns
+    ht.cols(row_output + 8 * 16, NULL, dest + 8, stride,
+            bd);  // right 8 columns
+  }
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht4x4_add_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht4x4_add_neon.c
index 4628423..52c4f19 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht4x4_add_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht4x4_add_neon.c
@@ -23,34 +23,55 @@
 static INLINE void highbd_iadst4(int32x4_t *const io) {
   const int32_t sinpis[4] = { sinpi_1_9, sinpi_2_9, sinpi_3_9, sinpi_4_9 };
   const int32x4_t sinpi = vld1q_s32(sinpis);
-  int32x4_t s[8];
+  int64x2x2_t s[7], t[4];
+  int32x4_t s7;
 
-  s[0] = vmulq_lane_s32(io[0], vget_low_s32(sinpi), 0);
-  s[1] = vmulq_lane_s32(io[0], vget_low_s32(sinpi), 1);
-  s[2] = vmulq_lane_s32(io[1], vget_high_s32(sinpi), 0);
-  s[3] = vmulq_lane_s32(io[2], vget_high_s32(sinpi), 1);
-  s[4] = vmulq_lane_s32(io[2], vget_low_s32(sinpi), 0);
-  s[5] = vmulq_lane_s32(io[3], vget_low_s32(sinpi), 1);
-  s[6] = vmulq_lane_s32(io[3], vget_high_s32(sinpi), 1);
-  s[7] = vsubq_s32(io[0], io[2]);
-  s[7] = vaddq_s32(s[7], io[3]);
+  s[0].val[0] = vmull_lane_s32(vget_low_s32(io[0]), vget_low_s32(sinpi), 0);
+  s[0].val[1] = vmull_lane_s32(vget_high_s32(io[0]), vget_low_s32(sinpi), 0);
+  s[1].val[0] = vmull_lane_s32(vget_low_s32(io[0]), vget_low_s32(sinpi), 1);
+  s[1].val[1] = vmull_lane_s32(vget_high_s32(io[0]), vget_low_s32(sinpi), 1);
+  s[2].val[0] = vmull_lane_s32(vget_low_s32(io[1]), vget_high_s32(sinpi), 0);
+  s[2].val[1] = vmull_lane_s32(vget_high_s32(io[1]), vget_high_s32(sinpi), 0);
+  s[3].val[0] = vmull_lane_s32(vget_low_s32(io[2]), vget_high_s32(sinpi), 1);
+  s[3].val[1] = vmull_lane_s32(vget_high_s32(io[2]), vget_high_s32(sinpi), 1);
+  s[4].val[0] = vmull_lane_s32(vget_low_s32(io[2]), vget_low_s32(sinpi), 0);
+  s[4].val[1] = vmull_lane_s32(vget_high_s32(io[2]), vget_low_s32(sinpi), 0);
+  s[5].val[0] = vmull_lane_s32(vget_low_s32(io[3]), vget_low_s32(sinpi), 1);
+  s[5].val[1] = vmull_lane_s32(vget_high_s32(io[3]), vget_low_s32(sinpi), 1);
+  s[6].val[0] = vmull_lane_s32(vget_low_s32(io[3]), vget_high_s32(sinpi), 1);
+  s[6].val[1] = vmull_lane_s32(vget_high_s32(io[3]), vget_high_s32(sinpi), 1);
+  s7 = vsubq_s32(io[0], io[2]);
+  s7 = vaddq_s32(s7, io[3]);
 
-  s[0] = vaddq_s32(s[0], s[3]);
-  s[0] = vaddq_s32(s[0], s[5]);
-  s[1] = vsubq_s32(s[1], s[4]);
-  s[1] = vsubq_s32(s[1], s[6]);
+  s[0].val[0] = vaddq_s64(s[0].val[0], s[3].val[0]);
+  s[0].val[1] = vaddq_s64(s[0].val[1], s[3].val[1]);
+  s[0].val[0] = vaddq_s64(s[0].val[0], s[5].val[0]);
+  s[0].val[1] = vaddq_s64(s[0].val[1], s[5].val[1]);
+  s[1].val[0] = vsubq_s64(s[1].val[0], s[4].val[0]);
+  s[1].val[1] = vsubq_s64(s[1].val[1], s[4].val[1]);
+  s[1].val[0] = vsubq_s64(s[1].val[0], s[6].val[0]);
+  s[1].val[1] = vsubq_s64(s[1].val[1], s[6].val[1]);
   s[3] = s[2];
-  s[2] = vmulq_lane_s32(s[7], vget_high_s32(sinpi), 0);
+  s[2].val[0] = vmull_lane_s32(vget_low_s32(s7), vget_high_s32(sinpi), 0);
+  s[2].val[1] = vmull_lane_s32(vget_high_s32(s7), vget_high_s32(sinpi), 0);
 
-  io[0] = vaddq_s32(s[0], s[3]);
-  io[1] = vaddq_s32(s[1], s[3]);
-  io[2] = s[2];
-  io[3] = vaddq_s32(s[0], s[1]);
-  io[3] = vsubq_s32(io[3], s[3]);
-  io[0] = vrshrq_n_s32(io[0], DCT_CONST_BITS);
-  io[1] = vrshrq_n_s32(io[1], DCT_CONST_BITS);
-  io[2] = vrshrq_n_s32(io[2], DCT_CONST_BITS);
-  io[3] = vrshrq_n_s32(io[3], DCT_CONST_BITS);
+  t[0].val[0] = vaddq_s64(s[0].val[0], s[3].val[0]);
+  t[0].val[1] = vaddq_s64(s[0].val[1], s[3].val[1]);
+  t[1].val[0] = vaddq_s64(s[1].val[0], s[3].val[0]);
+  t[1].val[1] = vaddq_s64(s[1].val[1], s[3].val[1]);
+  t[2] = s[2];
+  t[3].val[0] = vaddq_s64(s[0].val[0], s[1].val[0]);
+  t[3].val[1] = vaddq_s64(s[0].val[1], s[1].val[1]);
+  t[3].val[0] = vsubq_s64(t[3].val[0], s[3].val[0]);
+  t[3].val[1] = vsubq_s64(t[3].val[1], s[3].val[1]);
+  io[0] = vcombine_s32(vrshrn_n_s64(t[0].val[0], DCT_CONST_BITS),
+                       vrshrn_n_s64(t[0].val[1], DCT_CONST_BITS));
+  io[1] = vcombine_s32(vrshrn_n_s64(t[1].val[0], DCT_CONST_BITS),
+                       vrshrn_n_s64(t[1].val[1], DCT_CONST_BITS));
+  io[2] = vcombine_s32(vrshrn_n_s64(t[2].val[0], DCT_CONST_BITS),
+                       vrshrn_n_s64(t[2].val[1], DCT_CONST_BITS));
+  io[3] = vcombine_s32(vrshrn_n_s64(t[3].val[0], DCT_CONST_BITS),
+                       vrshrn_n_s64(t[3].val[1], DCT_CONST_BITS));
 }
 
 void vp9_highbd_iht4x4_16_add_neon(const tran_low_t *input, uint16_t *dest,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht8x8_add_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht8x8_add_neon.c
index 498f035..2232c68 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht8x8_add_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_highbd_iht8x8_add_neon.c
@@ -18,19 +18,8 @@
 #include "vpx_dsp/arm/transpose_neon.h"
 #include "vpx_dsp/inv_txfm.h"
 
-static INLINE void iadst_half_butterfly_bd10_neon(int32x4_t *const x,
-                                                  const int32x2_t c) {
-  const int32x4_t sum = vaddq_s32(x[0], x[1]);
-  const int32x4_t sub = vsubq_s32(x[0], x[1]);
-
-  x[0] = vmulq_lane_s32(sum, c, 0);
-  x[1] = vmulq_lane_s32(sub, c, 0);
-  x[0] = vrshrq_n_s32(x[0], DCT_CONST_BITS);
-  x[1] = vrshrq_n_s32(x[1], DCT_CONST_BITS);
-}
-
-static INLINE void iadst_half_butterfly_bd12_neon(int32x4_t *const x,
-                                                  const int32x2_t c) {
+static INLINE void highbd_iadst_half_butterfly_neon(int32x4_t *const x,
+                                                    const int32x2_t c) {
   const int32x4_t sum = vaddq_s32(x[0], x[1]);
   const int32x4_t sub = vsubq_s32(x[0], x[1]);
   const int64x2_t t0_lo = vmull_lane_s32(vget_low_s32(sum), c, 0);
@@ -46,35 +35,11 @@
   x[1] = vcombine_s32(out1_lo, out1_hi);
 }
 
-static INLINE void iadst_butterfly_lane_0_1_bd10_neon(const int32x4_t in0,
-                                                      const int32x4_t in1,
-                                                      const int32x2_t c,
-                                                      int32x4_t *const s0,
-                                                      int32x4_t *const s1) {
-  const int32x4_t t0 = vmulq_lane_s32(in0, c, 0);
-  const int32x4_t t1 = vmulq_lane_s32(in0, c, 1);
-
-  *s0 = vmlaq_lane_s32(t0, in1, c, 1);
-  *s1 = vmlsq_lane_s32(t1, in1, c, 0);
-}
-
-static INLINE void iadst_butterfly_lane_1_0_bd10_neon(const int32x4_t in0,
-                                                      const int32x4_t in1,
-                                                      const int32x2_t c,
-                                                      int32x4_t *const s0,
-                                                      int32x4_t *const s1) {
-  const int32x4_t t0 = vmulq_lane_s32(in0, c, 1);
-  const int32x4_t t1 = vmulq_lane_s32(in0, c, 0);
-
-  *s0 = vmlaq_lane_s32(t0, in1, c, 0);
-  *s1 = vmlsq_lane_s32(t1, in1, c, 1);
-}
-
-static INLINE void iadst_butterfly_lane_0_1_bd12_neon(const int32x4_t in0,
-                                                      const int32x4_t in1,
-                                                      const int32x2_t c,
-                                                      int64x2_t *const s0,
-                                                      int64x2_t *const s1) {
+static INLINE void highbd_iadst_butterfly_lane_0_1_neon(const int32x4_t in0,
+                                                        const int32x4_t in1,
+                                                        const int32x2_t c,
+                                                        int64x2_t *const s0,
+                                                        int64x2_t *const s1) {
   const int64x2_t t0_lo = vmull_lane_s32(vget_low_s32(in0), c, 0);
   const int64x2_t t1_lo = vmull_lane_s32(vget_low_s32(in0), c, 1);
   const int64x2_t t0_hi = vmull_lane_s32(vget_high_s32(in0), c, 0);
@@ -86,11 +51,11 @@
   s1[1] = vmlsl_lane_s32(t1_hi, vget_high_s32(in1), c, 0);
 }
 
-static INLINE void iadst_butterfly_lane_1_0_bd12_neon(const int32x4_t in0,
-                                                      const int32x4_t in1,
-                                                      const int32x2_t c,
-                                                      int64x2_t *const s0,
-                                                      int64x2_t *const s1) {
+static INLINE void highbd_iadst_butterfly_lane_1_0_neon(const int32x4_t in0,
+                                                        const int32x4_t in1,
+                                                        const int32x2_t c,
+                                                        int64x2_t *const s0,
+                                                        int64x2_t *const s1) {
   const int64x2_t t0_lo = vmull_lane_s32(vget_low_s32(in0), c, 1);
   const int64x2_t t1_lo = vmull_lane_s32(vget_low_s32(in0), c, 0);
   const int64x2_t t0_hi = vmull_lane_s32(vget_high_s32(in0), c, 1);
@@ -102,19 +67,7 @@
   s1[1] = vmlsl_lane_s32(t1_hi, vget_high_s32(in1), c, 1);
 }
 
-static INLINE int32x4_t
-add_dct_const_round_shift_low_8_bd10(const int32x4_t in0, const int32x4_t in1) {
-  const int32x4_t sum = vaddq_s32(in0, in1);
-  return vrshrq_n_s32(sum, DCT_CONST_BITS);
-}
-
-static INLINE int32x4_t
-sub_dct_const_round_shift_low_8_bd10(const int32x4_t in0, const int32x4_t in1) {
-  const int32x4_t sub = vsubq_s32(in0, in1);
-  return vrshrq_n_s32(sub, DCT_CONST_BITS);
-}
-
-static INLINE int32x4_t add_dct_const_round_shift_low_8_bd12(
+static INLINE int32x4_t highbd_add_dct_const_round_shift_low_8(
     const int64x2_t *const in0, const int64x2_t *const in1) {
   const int64x2_t sum_lo = vaddq_s64(in0[0], in1[0]);
   const int64x2_t sum_hi = vaddq_s64(in0[1], in1[1]);
@@ -123,7 +76,7 @@
   return vcombine_s32(out_lo, out_hi);
 }
 
-static INLINE int32x4_t sub_dct_const_round_shift_low_8_bd12(
+static INLINE int32x4_t highbd_sub_dct_const_round_shift_low_8(
     const int64x2_t *const in0, const int64x2_t *const in1) {
   const int64x2_t sub_lo = vsubq_s64(in0[0], in1[0]);
   const int64x2_t sub_hi = vsubq_s64(in0[1], in1[1]);
@@ -132,84 +85,10 @@
   return vcombine_s32(out_lo, out_hi);
 }
 
-static INLINE void iadst8_bd10(int32x4_t *const io0, int32x4_t *const io1,
-                               int32x4_t *const io2, int32x4_t *const io3,
-                               int32x4_t *const io4, int32x4_t *const io5,
-                               int32x4_t *const io6, int32x4_t *const io7) {
-  const int32x4_t c0 =
-      create_s32x4_neon(cospi_2_64, cospi_30_64, cospi_10_64, cospi_22_64);
-  const int32x4_t c1 =
-      create_s32x4_neon(cospi_18_64, cospi_14_64, cospi_26_64, cospi_6_64);
-  const int32x4_t c2 =
-      create_s32x4_neon(cospi_16_64, 0, cospi_8_64, cospi_24_64);
-  int32x4_t x[8], t[4];
-  int32x4_t s[8];
-
-  x[0] = *io7;
-  x[1] = *io0;
-  x[2] = *io5;
-  x[3] = *io2;
-  x[4] = *io3;
-  x[5] = *io4;
-  x[6] = *io1;
-  x[7] = *io6;
-
-  // stage 1
-  iadst_butterfly_lane_0_1_bd10_neon(x[0], x[1], vget_low_s32(c0), &s[0],
-                                     &s[1]);
-  iadst_butterfly_lane_0_1_bd10_neon(x[2], x[3], vget_high_s32(c0), &s[2],
-                                     &s[3]);
-  iadst_butterfly_lane_0_1_bd10_neon(x[4], x[5], vget_low_s32(c1), &s[4],
-                                     &s[5]);
-  iadst_butterfly_lane_0_1_bd10_neon(x[6], x[7], vget_high_s32(c1), &s[6],
-                                     &s[7]);
-
-  x[0] = add_dct_const_round_shift_low_8_bd10(s[0], s[4]);
-  x[1] = add_dct_const_round_shift_low_8_bd10(s[1], s[5]);
-  x[2] = add_dct_const_round_shift_low_8_bd10(s[2], s[6]);
-  x[3] = add_dct_const_round_shift_low_8_bd10(s[3], s[7]);
-  x[4] = sub_dct_const_round_shift_low_8_bd10(s[0], s[4]);
-  x[5] = sub_dct_const_round_shift_low_8_bd10(s[1], s[5]);
-  x[6] = sub_dct_const_round_shift_low_8_bd10(s[2], s[6]);
-  x[7] = sub_dct_const_round_shift_low_8_bd10(s[3], s[7]);
-
-  // stage 2
-  t[0] = x[0];
-  t[1] = x[1];
-  t[2] = x[2];
-  t[3] = x[3];
-  iadst_butterfly_lane_0_1_bd10_neon(x[4], x[5], vget_high_s32(c2), &s[4],
-                                     &s[5]);
-  iadst_butterfly_lane_1_0_bd10_neon(x[7], x[6], vget_high_s32(c2), &s[7],
-                                     &s[6]);
-
-  x[0] = vaddq_s32(t[0], t[2]);
-  x[1] = vaddq_s32(t[1], t[3]);
-  x[2] = vsubq_s32(t[0], t[2]);
-  x[3] = vsubq_s32(t[1], t[3]);
-  x[4] = add_dct_const_round_shift_low_8_bd10(s[4], s[6]);
-  x[5] = add_dct_const_round_shift_low_8_bd10(s[5], s[7]);
-  x[6] = sub_dct_const_round_shift_low_8_bd10(s[4], s[6]);
-  x[7] = sub_dct_const_round_shift_low_8_bd10(s[5], s[7]);
-
-  // stage 3
-  iadst_half_butterfly_bd10_neon(x + 2, vget_low_s32(c2));
-  iadst_half_butterfly_bd10_neon(x + 6, vget_low_s32(c2));
-
-  *io0 = x[0];
-  *io1 = vnegq_s32(x[4]);
-  *io2 = x[6];
-  *io3 = vnegq_s32(x[2]);
-  *io4 = x[3];
-  *io5 = vnegq_s32(x[7]);
-  *io6 = x[5];
-  *io7 = vnegq_s32(x[1]);
-}
-
-static INLINE void iadst8_bd12(int32x4_t *const io0, int32x4_t *const io1,
-                               int32x4_t *const io2, int32x4_t *const io3,
-                               int32x4_t *const io4, int32x4_t *const io5,
-                               int32x4_t *const io6, int32x4_t *const io7) {
+static INLINE void highbd_iadst8(int32x4_t *const io0, int32x4_t *const io1,
+                                 int32x4_t *const io2, int32x4_t *const io3,
+                                 int32x4_t *const io4, int32x4_t *const io5,
+                                 int32x4_t *const io6, int32x4_t *const io7) {
   const int32x4_t c0 =
       create_s32x4_neon(cospi_2_64, cospi_30_64, cospi_10_64, cospi_22_64);
   const int32x4_t c1 =
@@ -229,40 +108,46 @@
   x[7] = *io6;
 
   // stage 1
-  iadst_butterfly_lane_0_1_bd12_neon(x[0], x[1], vget_low_s32(c0), s[0], s[1]);
-  iadst_butterfly_lane_0_1_bd12_neon(x[2], x[3], vget_high_s32(c0), s[2], s[3]);
-  iadst_butterfly_lane_0_1_bd12_neon(x[4], x[5], vget_low_s32(c1), s[4], s[5]);
-  iadst_butterfly_lane_0_1_bd12_neon(x[6], x[7], vget_high_s32(c1), s[6], s[7]);
+  highbd_iadst_butterfly_lane_0_1_neon(x[0], x[1], vget_low_s32(c0), s[0],
+                                       s[1]);
+  highbd_iadst_butterfly_lane_0_1_neon(x[2], x[3], vget_high_s32(c0), s[2],
+                                       s[3]);
+  highbd_iadst_butterfly_lane_0_1_neon(x[4], x[5], vget_low_s32(c1), s[4],
+                                       s[5]);
+  highbd_iadst_butterfly_lane_0_1_neon(x[6], x[7], vget_high_s32(c1), s[6],
+                                       s[7]);
 
-  x[0] = add_dct_const_round_shift_low_8_bd12(s[0], s[4]);
-  x[1] = add_dct_const_round_shift_low_8_bd12(s[1], s[5]);
-  x[2] = add_dct_const_round_shift_low_8_bd12(s[2], s[6]);
-  x[3] = add_dct_const_round_shift_low_8_bd12(s[3], s[7]);
-  x[4] = sub_dct_const_round_shift_low_8_bd12(s[0], s[4]);
-  x[5] = sub_dct_const_round_shift_low_8_bd12(s[1], s[5]);
-  x[6] = sub_dct_const_round_shift_low_8_bd12(s[2], s[6]);
-  x[7] = sub_dct_const_round_shift_low_8_bd12(s[3], s[7]);
+  x[0] = highbd_add_dct_const_round_shift_low_8(s[0], s[4]);
+  x[1] = highbd_add_dct_const_round_shift_low_8(s[1], s[5]);
+  x[2] = highbd_add_dct_const_round_shift_low_8(s[2], s[6]);
+  x[3] = highbd_add_dct_const_round_shift_low_8(s[3], s[7]);
+  x[4] = highbd_sub_dct_const_round_shift_low_8(s[0], s[4]);
+  x[5] = highbd_sub_dct_const_round_shift_low_8(s[1], s[5]);
+  x[6] = highbd_sub_dct_const_round_shift_low_8(s[2], s[6]);
+  x[7] = highbd_sub_dct_const_round_shift_low_8(s[3], s[7]);
 
   // stage 2
   t[0] = x[0];
   t[1] = x[1];
   t[2] = x[2];
   t[3] = x[3];
-  iadst_butterfly_lane_0_1_bd12_neon(x[4], x[5], vget_high_s32(c2), s[4], s[5]);
-  iadst_butterfly_lane_1_0_bd12_neon(x[7], x[6], vget_high_s32(c2), s[7], s[6]);
+  highbd_iadst_butterfly_lane_0_1_neon(x[4], x[5], vget_high_s32(c2), s[4],
+                                       s[5]);
+  highbd_iadst_butterfly_lane_1_0_neon(x[7], x[6], vget_high_s32(c2), s[7],
+                                       s[6]);
 
   x[0] = vaddq_s32(t[0], t[2]);
   x[1] = vaddq_s32(t[1], t[3]);
   x[2] = vsubq_s32(t[0], t[2]);
   x[3] = vsubq_s32(t[1], t[3]);
-  x[4] = add_dct_const_round_shift_low_8_bd12(s[4], s[6]);
-  x[5] = add_dct_const_round_shift_low_8_bd12(s[5], s[7]);
-  x[6] = sub_dct_const_round_shift_low_8_bd12(s[4], s[6]);
-  x[7] = sub_dct_const_round_shift_low_8_bd12(s[5], s[7]);
+  x[4] = highbd_add_dct_const_round_shift_low_8(s[4], s[6]);
+  x[5] = highbd_add_dct_const_round_shift_low_8(s[5], s[7]);
+  x[6] = highbd_sub_dct_const_round_shift_low_8(s[4], s[6]);
+  x[7] = highbd_sub_dct_const_round_shift_low_8(s[5], s[7]);
 
   // stage 3
-  iadst_half_butterfly_bd12_neon(x + 2, vget_low_s32(c2));
-  iadst_half_butterfly_bd12_neon(x + 6, vget_low_s32(c2));
+  highbd_iadst_half_butterfly_neon(x + 2, vget_low_s32(c2));
+  highbd_iadst_half_butterfly_neon(x + 6, vget_low_s32(c2));
 
   *io0 = x[0];
   *io1 = vnegq_s32(x[4]);
@@ -394,31 +279,17 @@
         const int32x4_t cospis1 =
             vld1q_s32(kCospi32 + 4);  // cospi 4, 12, 20, 28
 
-        if (bd == 10) {
-          idct8x8_64_half1d_bd10(cospis0, cospis1, &a[0], &a[1], &a[2], &a[3],
-                                 &a[4], &a[5], &a[6], &a[7]);
-          idct8x8_64_half1d_bd10(cospis0, cospis1, &a[8], &a[9], &a[10], &a[11],
-                                 &a[12], &a[13], &a[14], &a[15]);
-          transpose_s32_8x4(&a[0], &a[8], &a[1], &a[9], &a[2], &a[10], &a[3],
-                            &a[11]);
-          iadst8_bd10(&a[0], &a[8], &a[1], &a[9], &a[2], &a[10], &a[3], &a[11]);
-          transpose_s32_8x4(&a[4], &a[12], &a[5], &a[13], &a[6], &a[14], &a[7],
-                            &a[15]);
-          iadst8_bd10(&a[4], &a[12], &a[5], &a[13], &a[6], &a[14], &a[7],
+        idct8x8_64_half1d_bd12(cospis0, cospis1, &a[0], &a[1], &a[2], &a[3],
+                               &a[4], &a[5], &a[6], &a[7]);
+        idct8x8_64_half1d_bd12(cospis0, cospis1, &a[8], &a[9], &a[10], &a[11],
+                               &a[12], &a[13], &a[14], &a[15]);
+        transpose_s32_8x4(&a[0], &a[8], &a[1], &a[9], &a[2], &a[10], &a[3],
+                          &a[11]);
+        highbd_iadst8(&a[0], &a[8], &a[1], &a[9], &a[2], &a[10], &a[3], &a[11]);
+        transpose_s32_8x4(&a[4], &a[12], &a[5], &a[13], &a[6], &a[14], &a[7],
+                          &a[15]);
+        highbd_iadst8(&a[4], &a[12], &a[5], &a[13], &a[6], &a[14], &a[7],
                       &a[15]);
-        } else {
-          idct8x8_64_half1d_bd12(cospis0, cospis1, &a[0], &a[1], &a[2], &a[3],
-                                 &a[4], &a[5], &a[6], &a[7]);
-          idct8x8_64_half1d_bd12(cospis0, cospis1, &a[8], &a[9], &a[10], &a[11],
-                                 &a[12], &a[13], &a[14], &a[15]);
-          transpose_s32_8x4(&a[0], &a[8], &a[1], &a[9], &a[2], &a[10], &a[3],
-                            &a[11]);
-          iadst8_bd12(&a[0], &a[8], &a[1], &a[9], &a[2], &a[10], &a[3], &a[11]);
-          transpose_s32_8x4(&a[4], &a[12], &a[5], &a[13], &a[6], &a[14], &a[7],
-                            &a[15]);
-          iadst8_bd12(&a[4], &a[12], &a[5], &a[13], &a[6], &a[14], &a[7],
-                      &a[15]);
-        }
         break;
       }
 
@@ -427,67 +298,36 @@
         const int32x4_t cospis1 =
             vld1q_s32(kCospi32 + 4);  // cospi 4, 12, 20, 28
 
-        if (bd == 10) {
-          transpose_s32_8x4(&a[0], &a[1], &a[2], &a[3], &a[4], &a[5], &a[6],
-                            &a[7]);
-          iadst8_bd10(&a[0], &a[1], &a[2], &a[3], &a[4], &a[5], &a[6], &a[7]);
-          transpose_s32_8x4(&a[8], &a[9], &a[10], &a[11], &a[12], &a[13],
-                            &a[14], &a[15]);
-          iadst8_bd10(&a[8], &a[9], &a[10], &a[11], &a[12], &a[13], &a[14],
+        transpose_s32_8x4(&a[0], &a[1], &a[2], &a[3], &a[4], &a[5], &a[6],
+                          &a[7]);
+        highbd_iadst8(&a[0], &a[1], &a[2], &a[3], &a[4], &a[5], &a[6], &a[7]);
+        transpose_s32_8x4(&a[8], &a[9], &a[10], &a[11], &a[12], &a[13], &a[14],
+                          &a[15]);
+        highbd_iadst8(&a[8], &a[9], &a[10], &a[11], &a[12], &a[13], &a[14],
                       &a[15]);
-          idct8x8_64_half1d_bd10(cospis0, cospis1, &a[0], &a[8], &a[1], &a[9],
-                                 &a[2], &a[10], &a[3], &a[11]);
-          idct8x8_64_half1d_bd10(cospis0, cospis1, &a[4], &a[12], &a[5], &a[13],
-                                 &a[6], &a[14], &a[7], &a[15]);
-        } else {
-          transpose_s32_8x4(&a[0], &a[1], &a[2], &a[3], &a[4], &a[5], &a[6],
-                            &a[7]);
-          iadst8_bd12(&a[0], &a[1], &a[2], &a[3], &a[4], &a[5], &a[6], &a[7]);
-          transpose_s32_8x4(&a[8], &a[9], &a[10], &a[11], &a[12], &a[13],
-                            &a[14], &a[15]);
-          iadst8_bd12(&a[8], &a[9], &a[10], &a[11], &a[12], &a[13], &a[14],
-                      &a[15]);
-          idct8x8_64_half1d_bd12(cospis0, cospis1, &a[0], &a[8], &a[1], &a[9],
-                                 &a[2], &a[10], &a[3], &a[11]);
-          idct8x8_64_half1d_bd12(cospis0, cospis1, &a[4], &a[12], &a[5], &a[13],
-                                 &a[6], &a[14], &a[7], &a[15]);
-        }
+        idct8x8_64_half1d_bd12(cospis0, cospis1, &a[0], &a[8], &a[1], &a[9],
+                               &a[2], &a[10], &a[3], &a[11]);
+        idct8x8_64_half1d_bd12(cospis0, cospis1, &a[4], &a[12], &a[5], &a[13],
+                               &a[6], &a[14], &a[7], &a[15]);
         break;
       }
 
       default: {
         assert(tx_type == ADST_ADST);
-        if (bd == 10) {
-          transpose_s32_8x4(&a[0], &a[1], &a[2], &a[3], &a[4], &a[5], &a[6],
-                            &a[7]);
-          iadst8_bd10(&a[0], &a[1], &a[2], &a[3], &a[4], &a[5], &a[6], &a[7]);
-          transpose_s32_8x4(&a[8], &a[9], &a[10], &a[11], &a[12], &a[13],
-                            &a[14], &a[15]);
-          iadst8_bd10(&a[8], &a[9], &a[10], &a[11], &a[12], &a[13], &a[14],
+        transpose_s32_8x4(&a[0], &a[1], &a[2], &a[3], &a[4], &a[5], &a[6],
+                          &a[7]);
+        highbd_iadst8(&a[0], &a[1], &a[2], &a[3], &a[4], &a[5], &a[6], &a[7]);
+        transpose_s32_8x4(&a[8], &a[9], &a[10], &a[11], &a[12], &a[13], &a[14],
+                          &a[15]);
+        highbd_iadst8(&a[8], &a[9], &a[10], &a[11], &a[12], &a[13], &a[14],
                       &a[15]);
-          transpose_s32_8x4(&a[0], &a[8], &a[1], &a[9], &a[2], &a[10], &a[3],
-                            &a[11]);
-          iadst8_bd10(&a[0], &a[8], &a[1], &a[9], &a[2], &a[10], &a[3], &a[11]);
-          transpose_s32_8x4(&a[4], &a[12], &a[5], &a[13], &a[6], &a[14], &a[7],
-                            &a[15]);
-          iadst8_bd10(&a[4], &a[12], &a[5], &a[13], &a[6], &a[14], &a[7],
+        transpose_s32_8x4(&a[0], &a[8], &a[1], &a[9], &a[2], &a[10], &a[3],
+                          &a[11]);
+        highbd_iadst8(&a[0], &a[8], &a[1], &a[9], &a[2], &a[10], &a[3], &a[11]);
+        transpose_s32_8x4(&a[4], &a[12], &a[5], &a[13], &a[6], &a[14], &a[7],
+                          &a[15]);
+        highbd_iadst8(&a[4], &a[12], &a[5], &a[13], &a[6], &a[14], &a[7],
                       &a[15]);
-        } else {
-          transpose_s32_8x4(&a[0], &a[1], &a[2], &a[3], &a[4], &a[5], &a[6],
-                            &a[7]);
-          iadst8_bd12(&a[0], &a[1], &a[2], &a[3], &a[4], &a[5], &a[6], &a[7]);
-          transpose_s32_8x4(&a[8], &a[9], &a[10], &a[11], &a[12], &a[13],
-                            &a[14], &a[15]);
-          iadst8_bd12(&a[8], &a[9], &a[10], &a[11], &a[12], &a[13], &a[14],
-                      &a[15]);
-          transpose_s32_8x4(&a[0], &a[8], &a[1], &a[9], &a[2], &a[10], &a[3],
-                            &a[11]);
-          iadst8_bd12(&a[0], &a[8], &a[1], &a[9], &a[2], &a[10], &a[3], &a[11]);
-          transpose_s32_8x4(&a[4], &a[12], &a[5], &a[13], &a[6], &a[14], &a[7],
-                            &a[15]);
-          iadst8_bd12(&a[4], &a[12], &a[5], &a[13], &a[6], &a[14], &a[7],
-                      &a[15]);
-        }
         break;
       }
     }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht16x16_add_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht16x16_add_neon.c
index fc9824a..db72ff1 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht16x16_add_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht16x16_add_neon.c
@@ -19,9 +19,9 @@
 #include "vpx_dsp/arm/mem_neon.h"
 #include "vpx_dsp/arm/transpose_neon.h"
 
-static void iadst16x16_256_add_half1d(const void *const input, int16_t *output,
-                                      void *const dest, const int stride,
-                                      const int highbd_flag) {
+void vpx_iadst16x16_256_add_half1d(const void *const input, int16_t *output,
+                                   void *const dest, const int stride,
+                                   const int highbd_flag) {
   int16x8_t in[16], out[16];
   const int16x4_t c_1_31_5_27 =
       create_s16x4_neon(cospi_1_64, cospi_31_64, cospi_5_64, cospi_27_64);
@@ -221,30 +221,10 @@
   x[15] = sub_dct_const_round_shift_low_8(s13, s15);
 
   // stage 4
-  {
-    const int16x8_t sum = vaddq_s16(x[2], x[3]);
-    const int16x8_t sub = vsubq_s16(x[2], x[3]);
-    x[2] = iadst_half_butterfly_neg_neon(sum, c_16_n16_8_24);
-    x[3] = iadst_half_butterfly_pos_neon(sub, c_16_n16_8_24);
-  }
-  {
-    const int16x8_t sum = vaddq_s16(x[7], x[6]);
-    const int16x8_t sub = vsubq_s16(x[7], x[6]);
-    x[6] = iadst_half_butterfly_pos_neon(sum, c_16_n16_8_24);
-    x[7] = iadst_half_butterfly_pos_neon(sub, c_16_n16_8_24);
-  }
-  {
-    const int16x8_t sum = vaddq_s16(x[11], x[10]);
-    const int16x8_t sub = vsubq_s16(x[11], x[10]);
-    x[10] = iadst_half_butterfly_pos_neon(sum, c_16_n16_8_24);
-    x[11] = iadst_half_butterfly_pos_neon(sub, c_16_n16_8_24);
-  }
-  {
-    const int16x8_t sum = vaddq_s16(x[14], x[15]);
-    const int16x8_t sub = vsubq_s16(x[14], x[15]);
-    x[14] = iadst_half_butterfly_neg_neon(sum, c_16_n16_8_24);
-    x[15] = iadst_half_butterfly_pos_neon(sub, c_16_n16_8_24);
-  }
+  iadst_half_butterfly_neg_neon(&x[3], &x[2], c_16_n16_8_24);
+  iadst_half_butterfly_pos_neon(&x[7], &x[6], c_16_n16_8_24);
+  iadst_half_butterfly_pos_neon(&x[11], &x[10], c_16_n16_8_24);
+  iadst_half_butterfly_neg_neon(&x[15], &x[14], c_16_n16_8_24);
 
   out[0] = x[0];
   out[1] = vnegq_s16(x[8]);
@@ -274,24 +254,17 @@
   }
 }
 
-typedef void (*iht_1d)(const void *const input, int16_t *output,
-                       void *const dest, const int stride,
-                       const int highbd_flag);
-
-typedef struct {
-  iht_1d cols, rows;  // vertical and horizontal
-} iht_2d;
-
 void vp9_iht16x16_256_add_neon(const tran_low_t *input, uint8_t *dest,
                                int stride, int tx_type) {
   static const iht_2d IHT_16[] = {
     { vpx_idct16x16_256_add_half1d,
       vpx_idct16x16_256_add_half1d },  // DCT_DCT  = 0
-    { iadst16x16_256_add_half1d,
+    { vpx_iadst16x16_256_add_half1d,
       vpx_idct16x16_256_add_half1d },  // ADST_DCT = 1
     { vpx_idct16x16_256_add_half1d,
-      iadst16x16_256_add_half1d },                            // DCT_ADST = 2
-    { iadst16x16_256_add_half1d, iadst16x16_256_add_half1d }  // ADST_ADST = 3
+      vpx_iadst16x16_256_add_half1d },  // DCT_ADST = 2
+    { vpx_iadst16x16_256_add_half1d,
+      vpx_iadst16x16_256_add_half1d }  // ADST_ADST = 3
   };
   const iht_2d ht = IHT_16[tx_type];
   int16_t row_output[16 * 16];
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht_neon.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht_neon.h
index e918ebc..c64822e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht_neon.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/arm/neon/vp9_iht_neon.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_ARM_NEON_VP9_IHT_NEON_H_
-#define VP9_COMMON_ARM_NEON_VP9_IHT_NEON_H_
+#ifndef VPX_VP9_COMMON_ARM_NEON_VP9_IHT_NEON_H_
+#define VPX_VP9_COMMON_ARM_NEON_VP9_IHT_NEON_H_
 
 #include <arm_neon.h>
 
@@ -59,34 +59,55 @@
 
 static INLINE void iadst_half_butterfly_neon(int16x8_t *const x,
                                              const int16x4_t c) {
-  const int16x8_t sum = vaddq_s16(x[0], x[1]);
-  const int16x8_t sub = vsubq_s16(x[0], x[1]);
+  // Don't add/sub before multiply, which will overflow in iadst8.
+  const int32x4_t x0_lo = vmull_lane_s16(vget_low_s16(x[0]), c, 0);
+  const int32x4_t x0_hi = vmull_lane_s16(vget_high_s16(x[0]), c, 0);
+  const int32x4_t x1_lo = vmull_lane_s16(vget_low_s16(x[1]), c, 0);
+  const int32x4_t x1_hi = vmull_lane_s16(vget_high_s16(x[1]), c, 0);
   int32x4_t t0[2], t1[2];
 
-  t0[0] = vmull_lane_s16(vget_low_s16(sum), c, 0);
-  t0[1] = vmull_lane_s16(vget_high_s16(sum), c, 0);
-  t1[0] = vmull_lane_s16(vget_low_s16(sub), c, 0);
-  t1[1] = vmull_lane_s16(vget_high_s16(sub), c, 0);
+  t0[0] = vaddq_s32(x0_lo, x1_lo);
+  t0[1] = vaddq_s32(x0_hi, x1_hi);
+  t1[0] = vsubq_s32(x0_lo, x1_lo);
+  t1[1] = vsubq_s32(x0_hi, x1_hi);
   x[0] = dct_const_round_shift_low_8(t0);
   x[1] = dct_const_round_shift_low_8(t1);
 }
 
-static INLINE int16x8_t iadst_half_butterfly_neg_neon(const int16x8_t in,
-                                                      const int16x4_t c) {
-  int32x4_t t[2];
+static INLINE void iadst_half_butterfly_neg_neon(int16x8_t *const x0,
+                                                 int16x8_t *const x1,
+                                                 const int16x4_t c) {
+  // Don't add/sub before multiply, which will overflow in iadst8.
+  const int32x4_t x0_lo = vmull_lane_s16(vget_low_s16(*x0), c, 1);
+  const int32x4_t x0_hi = vmull_lane_s16(vget_high_s16(*x0), c, 1);
+  const int32x4_t x1_lo = vmull_lane_s16(vget_low_s16(*x1), c, 1);
+  const int32x4_t x1_hi = vmull_lane_s16(vget_high_s16(*x1), c, 1);
+  int32x4_t t0[2], t1[2];
 
-  t[0] = vmull_lane_s16(vget_low_s16(in), c, 1);
-  t[1] = vmull_lane_s16(vget_high_s16(in), c, 1);
-  return dct_const_round_shift_low_8(t);
+  t0[0] = vaddq_s32(x0_lo, x1_lo);
+  t0[1] = vaddq_s32(x0_hi, x1_hi);
+  t1[0] = vsubq_s32(x0_lo, x1_lo);
+  t1[1] = vsubq_s32(x0_hi, x1_hi);
+  *x1 = dct_const_round_shift_low_8(t0);
+  *x0 = dct_const_round_shift_low_8(t1);
 }
 
-static INLINE int16x8_t iadst_half_butterfly_pos_neon(const int16x8_t in,
-                                                      const int16x4_t c) {
-  int32x4_t t[2];
+static INLINE void iadst_half_butterfly_pos_neon(int16x8_t *const x0,
+                                                 int16x8_t *const x1,
+                                                 const int16x4_t c) {
+  // Don't add/sub before multiply, which will overflow in iadst8.
+  const int32x4_t x0_lo = vmull_lane_s16(vget_low_s16(*x0), c, 0);
+  const int32x4_t x0_hi = vmull_lane_s16(vget_high_s16(*x0), c, 0);
+  const int32x4_t x1_lo = vmull_lane_s16(vget_low_s16(*x1), c, 0);
+  const int32x4_t x1_hi = vmull_lane_s16(vget_high_s16(*x1), c, 0);
+  int32x4_t t0[2], t1[2];
 
-  t[0] = vmull_lane_s16(vget_low_s16(in), c, 0);
-  t[1] = vmull_lane_s16(vget_high_s16(in), c, 0);
-  return dct_const_round_shift_low_8(t);
+  t0[0] = vaddq_s32(x0_lo, x1_lo);
+  t0[1] = vaddq_s32(x0_hi, x1_hi);
+  t1[0] = vsubq_s32(x0_lo, x1_lo);
+  t1[1] = vsubq_s32(x0_hi, x1_hi);
+  *x1 = dct_const_round_shift_low_8(t0);
+  *x0 = dct_const_round_shift_low_8(t1);
 }
 
 static INLINE void iadst_butterfly_lane_0_1_neon(const int16x8_t in0,
@@ -236,4 +257,16 @@
   io[7] = vnegq_s16(x[1]);
 }
 
-#endif  // VP9_COMMON_ARM_NEON_VP9_IHT_NEON_H_
+void vpx_iadst16x16_256_add_half1d(const void *const input, int16_t *output,
+                                   void *const dest, const int stride,
+                                   const int highbd_flag);
+
+typedef void (*iht_1d)(const void *const input, int16_t *output,
+                       void *const dest, const int stride,
+                       const int highbd_flag);
+
+typedef struct {
+  iht_1d cols, rows;  // vertical and horizontal
+} iht_2d;
+
+#endif  // VPX_VP9_COMMON_ARM_NEON_VP9_IHT_NEON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/mips/msa/vp9_idct16x16_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/mips/msa/vp9_idct16x16_msa.c
index 3e35301..c031322 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/mips/msa/vp9_idct16x16_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/mips/msa/vp9_idct16x16_msa.c
@@ -10,6 +10,7 @@
 
 #include <assert.h>
 
+#include "./vp9_rtcd.h"
 #include "vp9/common/vp9_enums.h"
 #include "vpx_dsp/mips/inv_txfm_msa.h"
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/mips/msa/vp9_idct4x4_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/mips/msa/vp9_idct4x4_msa.c
index 786fbdb..aaccd5c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/mips/msa/vp9_idct4x4_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/mips/msa/vp9_idct4x4_msa.c
@@ -10,6 +10,7 @@
 
 #include <assert.h>
 
+#include "./vp9_rtcd.h"
 #include "vp9/common/vp9_enums.h"
 #include "vpx_dsp/mips/inv_txfm_msa.h"
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/mips/msa/vp9_idct8x8_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/mips/msa/vp9_idct8x8_msa.c
index e416677..76d15ff 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/mips/msa/vp9_idct8x8_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/mips/msa/vp9_idct8x8_msa.c
@@ -10,6 +10,7 @@
 
 #include <assert.h>
 
+#include "./vp9_rtcd.h"
 #include "vp9/common/vp9_enums.h"
 #include "vpx_dsp/mips/inv_txfm_msa.h"
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/ppc/vp9_idct_vsx.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/ppc/vp9_idct_vsx.c
new file mode 100644
index 0000000..e861596
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/ppc/vp9_idct_vsx.c
@@ -0,0 +1,116 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <assert.h>
+
+#include "./vp9_rtcd.h"
+#include "vpx_dsp/vpx_dsp_common.h"
+#include "vpx_dsp/ppc/inv_txfm_vsx.h"
+#include "vpx_dsp/ppc/bitdepth_conversion_vsx.h"
+
+#include "vp9/common/vp9_enums.h"
+
+void vp9_iht4x4_16_add_vsx(const tran_low_t *input, uint8_t *dest, int stride,
+                           int tx_type) {
+  int16x8_t in[2], out[2];
+
+  in[0] = load_tran_low(0, input);
+  in[1] = load_tran_low(8 * sizeof(*input), input);
+
+  switch (tx_type) {
+    case DCT_DCT:
+      vpx_idct4_vsx(in, out);
+      vpx_idct4_vsx(out, in);
+      break;
+    case ADST_DCT:
+      vpx_idct4_vsx(in, out);
+      vp9_iadst4_vsx(out, in);
+      break;
+    case DCT_ADST:
+      vp9_iadst4_vsx(in, out);
+      vpx_idct4_vsx(out, in);
+      break;
+    default:
+      assert(tx_type == ADST_ADST);
+      vp9_iadst4_vsx(in, out);
+      vp9_iadst4_vsx(out, in);
+      break;
+  }
+
+  vpx_round_store4x4_vsx(in, out, dest, stride);
+}
+
+void vp9_iht8x8_64_add_vsx(const tran_low_t *input, uint8_t *dest, int stride,
+                           int tx_type) {
+  int16x8_t in[8], out[8];
+
+  // load input data
+  in[0] = load_tran_low(0, input);
+  in[1] = load_tran_low(8 * sizeof(*input), input);
+  in[2] = load_tran_low(2 * 8 * sizeof(*input), input);
+  in[3] = load_tran_low(3 * 8 * sizeof(*input), input);
+  in[4] = load_tran_low(4 * 8 * sizeof(*input), input);
+  in[5] = load_tran_low(5 * 8 * sizeof(*input), input);
+  in[6] = load_tran_low(6 * 8 * sizeof(*input), input);
+  in[7] = load_tran_low(7 * 8 * sizeof(*input), input);
+
+  switch (tx_type) {
+    case DCT_DCT:
+      vpx_idct8_vsx(in, out);
+      vpx_idct8_vsx(out, in);
+      break;
+    case ADST_DCT:
+      vpx_idct8_vsx(in, out);
+      vp9_iadst8_vsx(out, in);
+      break;
+    case DCT_ADST:
+      vp9_iadst8_vsx(in, out);
+      vpx_idct8_vsx(out, in);
+      break;
+    default:
+      assert(tx_type == ADST_ADST);
+      vp9_iadst8_vsx(in, out);
+      vp9_iadst8_vsx(out, in);
+      break;
+  }
+
+  vpx_round_store8x8_vsx(in, dest, stride);
+}
+
+void vp9_iht16x16_256_add_vsx(const tran_low_t *input, uint8_t *dest,
+                              int stride, int tx_type) {
+  int16x8_t in0[16], in1[16];
+
+  LOAD_INPUT16(load_tran_low, input, 0, 8 * sizeof(*input), in0);
+  LOAD_INPUT16(load_tran_low, input, 8 * 8 * 2 * sizeof(*input),
+               8 * sizeof(*input), in1);
+
+  switch (tx_type) {
+    case DCT_DCT:
+      vpx_idct16_vsx(in0, in1);
+      vpx_idct16_vsx(in0, in1);
+      break;
+    case ADST_DCT:
+      vpx_idct16_vsx(in0, in1);
+      vpx_iadst16_vsx(in0, in1);
+      break;
+    case DCT_ADST:
+      vpx_iadst16_vsx(in0, in1);
+      vpx_idct16_vsx(in0, in1);
+      break;
+    default:
+      assert(tx_type == ADST_ADST);
+      vpx_iadst16_vsx(in0, in1);
+      vpx_iadst16_vsx(in0, in1);
+      break;
+  }
+
+  vpx_round_store16x16_vsx(in0, in1, dest, stride);
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_alloccommon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_alloccommon.c
index 7345e25..5702dca 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_alloccommon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_alloccommon.c
@@ -17,17 +17,26 @@
 #include "vp9/common/vp9_entropymv.h"
 #include "vp9/common/vp9_onyxc_int.h"
 
-void vp9_set_mb_mi(VP9_COMMON *cm, int width, int height) {
+void vp9_set_mi_size(int *mi_rows, int *mi_cols, int *mi_stride, int width,
+                     int height) {
   const int aligned_width = ALIGN_POWER_OF_TWO(width, MI_SIZE_LOG2);
   const int aligned_height = ALIGN_POWER_OF_TWO(height, MI_SIZE_LOG2);
+  *mi_cols = aligned_width >> MI_SIZE_LOG2;
+  *mi_rows = aligned_height >> MI_SIZE_LOG2;
+  *mi_stride = calc_mi_size(*mi_cols);
+}
 
-  cm->mi_cols = aligned_width >> MI_SIZE_LOG2;
-  cm->mi_rows = aligned_height >> MI_SIZE_LOG2;
-  cm->mi_stride = calc_mi_size(cm->mi_cols);
+void vp9_set_mb_size(int *mb_rows, int *mb_cols, int *mb_num, int mi_rows,
+                     int mi_cols) {
+  *mb_cols = (mi_cols + 1) >> 1;
+  *mb_rows = (mi_rows + 1) >> 1;
+  *mb_num = (*mb_rows) * (*mb_cols);
+}
 
-  cm->mb_cols = (cm->mi_cols + 1) >> 1;
-  cm->mb_rows = (cm->mi_rows + 1) >> 1;
-  cm->MBs = cm->mb_rows * cm->mb_cols;
+void vp9_set_mb_mi(VP9_COMMON *cm, int width, int height) {
+  vp9_set_mi_size(&cm->mi_rows, &cm->mi_cols, &cm->mi_stride, width, height);
+  vp9_set_mb_size(&cm->mb_rows, &cm->mb_cols, &cm->MBs, cm->mi_rows,
+                  cm->mi_cols);
 }
 
 static int alloc_seg_map(VP9_COMMON *cm, int seg_map_size) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_alloccommon.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_alloccommon.h
index a3a1638..90cbb09 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_alloccommon.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_alloccommon.h
@@ -8,10 +8,10 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_ALLOCCOMMON_H_
-#define VP9_COMMON_VP9_ALLOCCOMMON_H_
+#ifndef VPX_VP9_COMMON_VP9_ALLOCCOMMON_H_
+#define VPX_VP9_COMMON_VP9_ALLOCCOMMON_H_
 
-#define INVALID_IDX -1  // Invalid buffer index.
+#define INVALID_IDX (-1)  // Invalid buffer index.
 
 #ifdef __cplusplus
 extern "C" {
@@ -33,6 +33,11 @@
 int vp9_alloc_state_buffers(struct VP9Common *cm, int width, int height);
 void vp9_free_state_buffers(struct VP9Common *cm);
 
+void vp9_set_mi_size(int *mi_rows, int *mi_cols, int *mi_stride, int width,
+                     int height);
+void vp9_set_mb_size(int *mb_rows, int *mb_cols, int *mb_num, int mi_rows,
+                     int mi_cols);
+
 void vp9_set_mb_mi(struct VP9Common *cm, int width, int height);
 
 void vp9_swap_current_and_last_seg_map(struct VP9Common *cm);
@@ -41,4 +46,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_ALLOCCOMMON_H_
+#endif  // VPX_VP9_COMMON_VP9_ALLOCCOMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_blockd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_blockd.h
index 780b292..6ef8127 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_blockd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_blockd.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_BLOCKD_H_
-#define VP9_COMMON_VP9_BLOCKD_H_
+#ifndef VPX_VP9_COMMON_VP9_BLOCKD_H_
+#define VPX_VP9_COMMON_VP9_BLOCKD_H_
 
 #include "./vpx_config.h"
 
@@ -54,12 +54,14 @@
 // decoder implementation modules critically rely on the defined entry values
 // specified herein. They should be refactored concurrently.
 
-#define NONE -1
+#define NONE (-1)
 #define INTRA_FRAME 0
 #define LAST_FRAME 1
 #define GOLDEN_FRAME 2
 #define ALTREF_FRAME 3
 #define MAX_REF_FRAMES 4
+#define MAX_INTER_REF_FRAMES 3
+
 typedef int8_t MV_REFERENCE_FRAME;
 
 // This structure now relates to 8x8 block regions.
@@ -130,6 +132,8 @@
 
   // encoder
   const int16_t *dequant;
+
+  int *eob;
 };
 
 #define BLOCK_OFFSET(x, i) ((x) + (i)*16)
@@ -173,7 +177,7 @@
   FRAME_CONTEXT *fc;
 
   /* pointers to reference frames */
-  RefBuffer *block_refs[2];
+  const RefBuffer *block_refs[2];
 
   /* pointer to current frame */
   const YV12_BUFFER_CONFIG *cur_buf;
@@ -193,6 +197,8 @@
   int corrupted;
 
   struct vpx_internal_error_info *error_info;
+
+  PARTITION_TYPE *partition;
 } MACROBLOCKD;
 
 static INLINE PLANE_TYPE get_plane_type(int plane) {
@@ -281,8 +287,30 @@
                       BLOCK_SIZE plane_bsize, TX_SIZE tx_size, int has_eob,
                       int aoff, int loff);
 
+#if CONFIG_MISMATCH_DEBUG
+#define TX_UNIT_SIZE_LOG2 2
+static INLINE void mi_to_pixel_loc(int *pixel_c, int *pixel_r, int mi_col,
+                                   int mi_row, int tx_blk_col, int tx_blk_row,
+                                   int subsampling_x, int subsampling_y) {
+  *pixel_c = ((mi_col << MI_SIZE_LOG2) >> subsampling_x) +
+             (tx_blk_col << TX_UNIT_SIZE_LOG2);
+  *pixel_r = ((mi_row << MI_SIZE_LOG2) >> subsampling_y) +
+             (tx_blk_row << TX_UNIT_SIZE_LOG2);
+}
+
+static INLINE int get_block_width(BLOCK_SIZE bsize) {
+  const int num_4x4_w = num_4x4_blocks_wide_lookup[bsize];
+  return 4 * num_4x4_w;
+}
+
+static INLINE int get_block_height(BLOCK_SIZE bsize) {
+  const int num_4x4_h = num_4x4_blocks_high_lookup[bsize];
+  return 4 * num_4x4_h;
+}
+#endif
+
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_BLOCKD_H_
+#endif  // VPX_VP9_COMMON_VP9_BLOCKD_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_common.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_common.h
index 666c3be..e3c5535 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_common.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_common.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_COMMON_H_
-#define VP9_COMMON_VP9_COMMON_H_
+#ifndef VPX_VP9_COMMON_VP9_COMMON_H_
+#define VPX_VP9_COMMON_VP9_COMMON_H_
 
 /* Interface header for common constant data structures and lookup tables */
 
@@ -33,14 +33,14 @@
   }
 
 // Use this for variably-sized arrays.
-#define vp9_copy_array(dest, src, n)       \
-  {                                        \
-    assert(sizeof(*dest) == sizeof(*src)); \
-    memcpy(dest, src, n * sizeof(*src));   \
+#define vp9_copy_array(dest, src, n)           \
+  {                                            \
+    assert(sizeof(*(dest)) == sizeof(*(src))); \
+    memcpy(dest, src, (n) * sizeof(*(src)));   \
   }
 
 #define vp9_zero(dest) memset(&(dest), 0, sizeof(dest))
-#define vp9_zero_array(dest, n) memset(dest, 0, n * sizeof(*dest))
+#define vp9_zero_array(dest, n) memset(dest, 0, (n) * sizeof(*(dest)))
 
 static INLINE int get_unsigned_bits(unsigned int num_values) {
   return num_values > 0 ? get_msb(num_values) + 1 : 0;
@@ -49,8 +49,8 @@
 #if CONFIG_DEBUG
 #define CHECK_MEM_ERROR(cm, lval, expr)                                     \
   do {                                                                      \
-    lval = (expr);                                                          \
-    if (!lval)                                                              \
+    (lval) = (expr);                                                        \
+    if (!(lval))                                                            \
       vpx_internal_error(&(cm)->error, VPX_CODEC_MEM_ERROR,                 \
                          "Failed to allocate " #lval " at %s:%d", __FILE__, \
                          __LINE__);                                         \
@@ -58,8 +58,8 @@
 #else
 #define CHECK_MEM_ERROR(cm, lval, expr)                     \
   do {                                                      \
-    lval = (expr);                                          \
-    if (!lval)                                              \
+    (lval) = (expr);                                        \
+    if (!(lval))                                            \
       vpx_internal_error(&(cm)->error, VPX_CODEC_MEM_ERROR, \
                          "Failed to allocate " #lval);      \
   } while (0)
@@ -75,4 +75,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_COMMON_H_
+#endif  // VPX_VP9_COMMON_VP9_COMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_common_data.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_common_data.c
index 4a10833..809d7317 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_common_data.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_common_data.c
@@ -28,7 +28,7 @@
 const uint8_t num_8x8_blocks_high_lookup[BLOCK_SIZES] = { 1, 1, 1, 1, 2, 1, 2,
                                                           4, 2, 4, 8, 4, 8 };
 
-// VPXMIN(3, VPXMIN(b_width_log2(bsize), b_height_log2(bsize)))
+// VPXMIN(3, VPXMIN(b_width_log2_lookup(bsize), b_height_log2_lookup(bsize)))
 const uint8_t size_group_lookup[BLOCK_SIZES] = { 0, 0, 0, 1, 1, 1, 2,
                                                  2, 2, 3, 3, 3, 3 };
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_common_data.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_common_data.h
index 5c6a7e8..a533c5f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_common_data.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_common_data.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_COMMON_DATA_H_
-#define VP9_COMMON_VP9_COMMON_DATA_H_
+#ifndef VPX_VP9_COMMON_VP9_COMMON_DATA_H_
+#define VPX_VP9_COMMON_VP9_COMMON_DATA_H_
 
 #include "vp9/common/vp9_enums.h"
 #include "vpx/vpx_integer.h"
@@ -42,4 +42,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_COMMON_DATA_H_
+#endif  // VPX_VP9_COMMON_VP9_COMMON_DATA_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropy.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropy.h
index 0ab5025..d026651 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropy.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropy.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_ENTROPY_H_
-#define VP9_COMMON_VP9_ENTROPY_H_
+#ifndef VPX_VP9_COMMON_VP9_ENTROPY_H_
+#define VPX_VP9_COMMON_VP9_ENTROPY_H_
 
 #include "vpx/vpx_integer.h"
 #include "vpx_dsp/prob.h"
@@ -194,4 +194,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_ENTROPY_H_
+#endif  // VPX_VP9_COMMON_VP9_ENTROPY_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropymode.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropymode.c
index 48cad33..bda824d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropymode.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropymode.c
@@ -179,32 +179,32 @@
   { 101, 21, 107, 181, 192, 103, 19, 67, 125 }  // y = tm
 };
 
-const vpx_prob vp9_kf_partition_probs[PARTITION_CONTEXTS][PARTITION_TYPES - 1] =
-    {
-      // 8x8 -> 4x4
-      { 158, 97, 94 },  // a/l both not split
-      { 93, 24, 99 },   // a split, l not split
-      { 85, 119, 44 },  // l split, a not split
-      { 62, 59, 67 },   // a/l both split
+const vpx_prob vp9_kf_partition_probs[PARTITION_CONTEXTS]
+                                     [PARTITION_TYPES - 1] = {
+                                       // 8x8 -> 4x4
+                                       { 158, 97, 94 },  // a/l both not split
+                                       { 93, 24, 99 },   // a split, l not split
+                                       { 85, 119, 44 },  // l split, a not split
+                                       { 62, 59, 67 },   // a/l both split
 
-      // 16x16 -> 8x8
-      { 149, 53, 53 },  // a/l both not split
-      { 94, 20, 48 },   // a split, l not split
-      { 83, 53, 24 },   // l split, a not split
-      { 52, 18, 18 },   // a/l both split
+                                       // 16x16 -> 8x8
+                                       { 149, 53, 53 },  // a/l both not split
+                                       { 94, 20, 48 },   // a split, l not split
+                                       { 83, 53, 24 },   // l split, a not split
+                                       { 52, 18, 18 },   // a/l both split
 
-      // 32x32 -> 16x16
-      { 150, 40, 39 },  // a/l both not split
-      { 78, 12, 26 },   // a split, l not split
-      { 67, 33, 11 },   // l split, a not split
-      { 24, 7, 5 },     // a/l both split
+                                       // 32x32 -> 16x16
+                                       { 150, 40, 39 },  // a/l both not split
+                                       { 78, 12, 26 },   // a split, l not split
+                                       { 67, 33, 11 },   // l split, a not split
+                                       { 24, 7, 5 },     // a/l both split
 
-      // 64x64 -> 32x32
-      { 174, 35, 49 },  // a/l both not split
-      { 68, 11, 27 },   // a split, l not split
-      { 57, 15, 9 },    // l split, a not split
-      { 12, 3, 3 },     // a/l both split
-    };
+                                       // 64x64 -> 32x32
+                                       { 174, 35, 49 },  // a/l both not split
+                                       { 68, 11, 27 },   // a split, l not split
+                                       { 57, 15, 9 },    // l split, a not split
+                                       { 12, 3, 3 },     // a/l both split
+                                     };
 
 static const vpx_prob
     default_partition_probs[PARTITION_CONTEXTS][PARTITION_TYPES - 1] = {
@@ -263,13 +263,13 @@
   -PARTITION_NONE, 2, -PARTITION_HORZ, 4, -PARTITION_VERT, -PARTITION_SPLIT
 };
 
-static const vpx_prob default_intra_inter_p[INTRA_INTER_CONTEXTS] = {
-  9, 102, 187, 225
-};
+static const vpx_prob default_intra_inter_p[INTRA_INTER_CONTEXTS] = { 9, 102,
+                                                                      187,
+                                                                      225 };
 
-static const vpx_prob default_comp_inter_p[COMP_INTER_CONTEXTS] = {
-  239, 183, 119, 96, 41
-};
+static const vpx_prob default_comp_inter_p[COMP_INTER_CONTEXTS] = { 239, 183,
+                                                                    119, 96,
+                                                                    41 };
 
 static const vpx_prob default_comp_ref_p[REF_CONTEXTS] = { 50, 126, 123, 221,
                                                            226 };
@@ -334,8 +334,8 @@
   vp9_copy(fc->inter_mode_probs, default_inter_mode_probs);
 }
 
-const vpx_tree_index vp9_switchable_interp_tree[TREE_SIZE(SWITCHABLE_FILTERS)] =
-    { -EIGHTTAP, 2, -EIGHTTAP_SMOOTH, -EIGHTTAP_SHARP };
+const vpx_tree_index vp9_switchable_interp_tree[TREE_SIZE(
+    SWITCHABLE_FILTERS)] = { -EIGHTTAP, 2, -EIGHTTAP_SMOOTH, -EIGHTTAP_SHARP };
 
 void vp9_adapt_mode_probs(VP9_COMMON *cm) {
   int i, j;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropymode.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropymode.h
index 0ee663f..a756c8d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropymode.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropymode.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_ENTROPYMODE_H_
-#define VP9_COMMON_VP9_ENTROPYMODE_H_
+#ifndef VPX_VP9_COMMON_VP9_ENTROPYMODE_H_
+#define VPX_VP9_COMMON_VP9_ENTROPYMODE_H_
 
 #include "vp9/common/vp9_entropy.h"
 #include "vp9/common/vp9_entropymv.h"
@@ -104,4 +104,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_ENTROPYMODE_H_
+#endif  // VPX_VP9_COMMON_VP9_ENTROPYMODE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropymv.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropymv.h
index e2fe37a..ee9d379 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropymv.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_entropymv.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_ENTROPYMV_H_
-#define VP9_COMMON_VP9_ENTROPYMV_H_
+#ifndef VPX_VP9_COMMON_VP9_ENTROPYMV_H_
+#define VPX_VP9_COMMON_VP9_ENTROPYMV_H_
 
 #include "./vpx_config.h"
 
@@ -25,7 +25,7 @@
 
 void vp9_init_mv_probs(struct VP9Common *cm);
 
-void vp9_adapt_mv_probs(struct VP9Common *cm, int usehp);
+void vp9_adapt_mv_probs(struct VP9Common *cm, int allow_hp);
 
 static INLINE int use_mv_hp(const MV *ref) {
   const int kMvRefThresh = 64;  // threshold for use of high-precision 1/8 mv
@@ -127,10 +127,10 @@
   nmv_component_counts comps[2];
 } nmv_context_counts;
 
-void vp9_inc_mv(const MV *mv, nmv_context_counts *mvctx);
+void vp9_inc_mv(const MV *mv, nmv_context_counts *counts);
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_ENTROPYMV_H_
+#endif  // VPX_VP9_COMMON_VP9_ENTROPYMV_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_enums.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_enums.h
index 056b298..b33a3a2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_enums.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_enums.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_ENUMS_H_
-#define VP9_COMMON_VP9_ENUMS_H_
+#ifndef VPX_VP9_COMMON_VP9_ENUMS_H_
+#define VPX_VP9_COMMON_VP9_ENUMS_H_
 
 #include "./vpx_config.h"
 #include "vpx/vpx_integer.h"
@@ -41,6 +41,8 @@
   MAX_PROFILES
 } BITSTREAM_PROFILE;
 
+typedef enum PARSE_RECON_FLAG { PARSE = 1, RECON = 2 } PARSE_RECON_FLAG;
+
 #define BLOCK_4X4 0
 #define BLOCK_4X8 1
 #define BLOCK_8X4 2
@@ -140,4 +142,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_ENUMS_H_
+#endif  // VPX_VP9_COMMON_VP9_ENUMS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_filter.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_filter.c
index 6c43af8..adbda6c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_filter.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_filter.c
@@ -63,6 +63,20 @@
   { 0, -3, 2, 41, 63, 29, -2, -2 },   { 0, -3, 1, 38, 64, 32, -1, -3 }
 };
 
-const InterpKernel *vp9_filter_kernels[4] = {
-  sub_pel_filters_8, sub_pel_filters_8lp, sub_pel_filters_8s, bilinear_filters
+// 4-tap filter
+DECLARE_ALIGNED(256, static const InterpKernel,
+                sub_pel_filters_4[SUBPEL_SHIFTS]) = {
+  { 0, 0, 0, 128, 0, 0, 0, 0 },     { 0, 0, -4, 126, 8, -2, 0, 0 },
+  { 0, 0, -6, 120, 18, -4, 0, 0 },  { 0, 0, -8, 114, 28, -6, 0, 0 },
+  { 0, 0, -10, 108, 36, -6, 0, 0 }, { 0, 0, -12, 102, 46, -8, 0, 0 },
+  { 0, 0, -12, 94, 56, -10, 0, 0 }, { 0, 0, -12, 84, 66, -10, 0, 0 },
+  { 0, 0, -12, 76, 76, -12, 0, 0 }, { 0, 0, -10, 66, 84, -12, 0, 0 },
+  { 0, 0, -10, 56, 94, -12, 0, 0 }, { 0, 0, -8, 46, 102, -12, 0, 0 },
+  { 0, 0, -6, 36, 108, -10, 0, 0 }, { 0, 0, -6, 28, 114, -8, 0, 0 },
+  { 0, 0, -4, 18, 120, -6, 0, 0 },  { 0, 0, -2, 8, 126, -4, 0, 0 }
+};
+
+const InterpKernel *vp9_filter_kernels[5] = {
+  sub_pel_filters_8, sub_pel_filters_8lp, sub_pel_filters_8s, bilinear_filters,
+  sub_pel_filters_4
 };
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_filter.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_filter.h
index 9d2b8e1..0382c88 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_filter.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_filter.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_FILTER_H_
-#define VP9_COMMON_VP9_FILTER_H_
+#ifndef VPX_VP9_COMMON_VP9_FILTER_H_
+#define VPX_VP9_COMMON_VP9_FILTER_H_
 
 #include "./vpx_config.h"
 #include "vpx/vpx_integer.h"
@@ -25,6 +25,7 @@
 #define EIGHTTAP_SHARP 2
 #define SWITCHABLE_FILTERS 3 /* Number of switchable filters */
 #define BILINEAR 3
+#define FOURTAP 4
 // The codec can operate in four possible inter prediction filter mode:
 // 8-tap, 8-tap-smooth, 8-tap-sharp, and switching between the three.
 #define SWITCHABLE_FILTER_CONTEXTS (SWITCHABLE_FILTERS + 1)
@@ -32,10 +33,10 @@
 
 typedef uint8_t INTERP_FILTER;
 
-extern const InterpKernel *vp9_filter_kernels[4];
+extern const InterpKernel *vp9_filter_kernels[5];
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_FILTER_H_
+#endif  // VPX_VP9_COMMON_VP9_FILTER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_frame_buffers.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_frame_buffers.h
index e2cfe61..11be838 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_frame_buffers.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_frame_buffers.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_FRAME_BUFFERS_H_
-#define VP9_COMMON_VP9_FRAME_BUFFERS_H_
+#ifndef VPX_VP9_COMMON_VP9_FRAME_BUFFERS_H_
+#define VPX_VP9_COMMON_VP9_FRAME_BUFFERS_H_
 
 #include "vpx/vpx_frame_buffer.h"
 #include "vpx/vpx_integer.h"
@@ -50,4 +50,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_FRAME_BUFFERS_H_
+#endif  // VPX_VP9_COMMON_VP9_FRAME_BUFFERS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_idct.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_idct.h
index 3e83b84..94eeaf5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_idct.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_idct.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_IDCT_H_
-#define VP9_COMMON_VP9_IDCT_H_
+#ifndef VPX_VP9_COMMON_VP9_IDCT_H_
+#define VPX_VP9_COMMON_VP9_IDCT_H_
 
 #include <assert.h>
 
@@ -78,4 +78,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_IDCT_H_
+#endif  // VPX_VP9_COMMON_VP9_IDCT_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_loopfilter.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_loopfilter.c
index da9180b..95d6029 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_loopfilter.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_loopfilter.c
@@ -880,12 +880,12 @@
 // This function sets up the bit masks for the entire 64x64 region represented
 // by mi_row, mi_col.
 void vp9_setup_mask(VP9_COMMON *const cm, const int mi_row, const int mi_col,
-                    MODE_INFO **mi, const int mode_info_stride,
+                    MODE_INFO **mi8x8, const int mode_info_stride,
                     LOOP_FILTER_MASK *lfm) {
   int idx_32, idx_16, idx_8;
   const loop_filter_info_n *const lfi_n = &cm->lf_info;
-  MODE_INFO **mip = mi;
-  MODE_INFO **mip2 = mi;
+  MODE_INFO **mip = mi8x8;
+  MODE_INFO **mip2 = mi8x8;
 
   // These are offsets to the next mi in the 64x64 block. It is what gets
   // added to the mi ptr as we go through each loop. It helps us to avoid
@@ -1087,13 +1087,19 @@
   const int row_step_stride = cm->mi_stride * row_step;
   struct buf_2d *const dst = &plane->dst;
   uint8_t *const dst0 = dst->buf;
-  unsigned int mask_16x16[MI_BLOCK_SIZE] = { 0 };
-  unsigned int mask_8x8[MI_BLOCK_SIZE] = { 0 };
-  unsigned int mask_4x4[MI_BLOCK_SIZE] = { 0 };
-  unsigned int mask_4x4_int[MI_BLOCK_SIZE] = { 0 };
+  unsigned int mask_16x16[MI_BLOCK_SIZE];
+  unsigned int mask_8x8[MI_BLOCK_SIZE];
+  unsigned int mask_4x4[MI_BLOCK_SIZE];
+  unsigned int mask_4x4_int[MI_BLOCK_SIZE];
   uint8_t lfl[MI_BLOCK_SIZE * MI_BLOCK_SIZE];
   int r, c;
 
+  vp9_zero(mask_16x16);
+  vp9_zero(mask_8x8);
+  vp9_zero(mask_4x4);
+  vp9_zero(mask_4x4_int);
+  vp9_zero(lfl);
+
   for (r = 0; r < MI_BLOCK_SIZE && mi_row + r < cm->mi_rows; r += row_step) {
     unsigned int mask_16x16_c = 0;
     unsigned int mask_8x8_c = 0;
@@ -1330,6 +1336,8 @@
   uint16_t mask_4x4 = lfm->left_uv[TX_4X4];
   uint16_t mask_4x4_int = lfm->int_4x4_uv;
 
+  vp9_zero(lfl_uv);
+
   assert(plane->subsampling_x == 1 && plane->subsampling_y == 1);
 
   // Vertical pass: do 2 rows at one time
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_loopfilter.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_loopfilter.h
index 481a6cd..39648a7 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_loopfilter.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_loopfilter.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_LOOPFILTER_H_
-#define VP9_COMMON_VP9_LOOPFILTER_H_
+#ifndef VPX_VP9_COMMON_VP9_LOOPFILTER_H_
+#define VPX_VP9_COMMON_VP9_LOOPFILTER_H_
 
 #include "vpx_ports/mem.h"
 #include "./vpx_config.h"
@@ -97,7 +97,7 @@
 // This function sets up the bit masks for the entire 64x64 region represented
 // by mi_row, mi_col.
 void vp9_setup_mask(struct VP9Common *const cm, const int mi_row,
-                    const int mi_col, MODE_INFO **mi_8x8,
+                    const int mi_col, MODE_INFO **mi8x8,
                     const int mode_info_stride, LOOP_FILTER_MASK *lfm);
 
 void vp9_filter_block_plane_ss00(struct VP9Common *const cm,
@@ -120,7 +120,7 @@
 void vp9_loop_filter_frame_init(struct VP9Common *cm, int default_filt_lvl);
 
 void vp9_loop_filter_frame(YV12_BUFFER_CONFIG *frame, struct VP9Common *cm,
-                           struct macroblockd *mbd, int filter_level,
+                           struct macroblockd *xd, int frame_filter_level,
                            int y_only, int partial_frame);
 
 // Get the superblock lfm for a given mi_row, mi_col.
@@ -157,4 +157,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_LOOPFILTER_H_
+#endif  // VPX_VP9_COMMON_VP9_LOOPFILTER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_mfqe.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_mfqe.h
index dfff8c2..f53e1c2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_mfqe.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_mfqe.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_MFQE_H_
-#define VP9_COMMON_VP9_MFQE_H_
+#ifndef VPX_VP9_COMMON_VP9_MFQE_H_
+#define VPX_VP9_COMMON_VP9_MFQE_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -28,4 +28,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_MFQE_H_
+#endif  // VPX_VP9_COMMON_VP9_MFQE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_mv.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_mv.h
index 4c8eac7..76f93cf 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_mv.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_mv.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_MV_H_
-#define VP9_COMMON_VP9_MV_H_
+#ifndef VPX_VP9_COMMON_VP9_MV_H_
+#define VPX_VP9_COMMON_VP9_MV_H_
 
 #include "vpx/vpx_integer.h"
 
@@ -19,6 +19,8 @@
 extern "C" {
 #endif
 
+#define INVALID_MV 0x80008000
+
 typedef struct mv {
   int16_t row;
   int16_t col;
@@ -52,4 +54,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_MV_H_
+#endif  // VPX_VP9_COMMON_VP9_MV_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_mvref_common.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_mvref_common.h
index 2b2c1ba..5db6772 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_mvref_common.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_mvref_common.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef VP9_COMMON_VP9_MVREF_COMMON_H_
-#define VP9_COMMON_VP9_MVREF_COMMON_H_
+#ifndef VPX_VP9_COMMON_VP9_MVREF_COMMON_H_
+#define VPX_VP9_COMMON_VP9_MVREF_COMMON_H_
 
 #include "vp9/common/vp9_onyxc_int.h"
 #include "vp9/common/vp9_blockd.h"
@@ -263,10 +263,10 @@
                                  mv_ref_list, Done)                           \
   do {                                                                        \
     if (is_inter_block(mbmi)) {                                               \
-      if ((mbmi)->ref_frame[0] != ref_frame)                                  \
+      if ((mbmi)->ref_frame[0] != (ref_frame))                                \
         ADD_MV_REF_LIST(scale_mv((mbmi), 0, ref_frame, ref_sign_bias),        \
                         refmv_count, mv_ref_list, Done);                      \
-      if (has_second_ref(mbmi) && (mbmi)->ref_frame[1] != ref_frame &&        \
+      if (has_second_ref(mbmi) && (mbmi)->ref_frame[1] != (ref_frame) &&      \
           (mbmi)->mv[1].as_int != (mbmi)->mv[0].as_int)                       \
         ADD_MV_REF_LIST(scale_mv((mbmi), 1, ref_frame, ref_sign_bias),        \
                         refmv_count, mv_ref_list, Done);                      \
@@ -320,4 +320,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_MVREF_COMMON_H_
+#endif  // VPX_VP9_COMMON_VP9_MVREF_COMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_onyxc_int.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_onyxc_int.h
index 1d96d92..f3942a8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_onyxc_int.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_onyxc_int.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_ONYXC_INT_H_
-#define VP9_COMMON_VP9_ONYXC_INT_H_
+#ifndef VPX_VP9_COMMON_VP9_ONYXC_INT_H_
+#define VPX_VP9_COMMON_VP9_ONYXC_INT_H_
 
 #include "./vpx_config.h"
 #include "vpx/internal/vpx_codec_internal.h"
@@ -37,10 +37,9 @@
 #define REF_FRAMES_LOG2 3
 #define REF_FRAMES (1 << REF_FRAMES_LOG2)
 
-// 1 scratch frame for the new frame, 3 for scaled references on the encoder.
-// TODO(jkoleszar): These 3 extra references could probably come from the
-// normal reference pool.
-#define FRAME_BUFFERS (REF_FRAMES + 4)
+// 1 scratch frame for the new frame, REFS_PER_FRAME for scaled references on
+// the encoder.
+#define FRAME_BUFFERS (REF_FRAMES + 1 + REFS_PER_FRAME)
 
 #define FRAME_CONTEXTS_LOG2 2
 #define FRAME_CONTEXTS (1 << FRAME_CONTEXTS_LOG2)
@@ -70,6 +69,7 @@
   int mi_rows;
   int mi_cols;
   uint8_t released;
+  int frame_index;
   vpx_codec_frame_buffer_t raw_frame_buffer;
   YV12_BUFFER_CONFIG buf;
 } RefCntBuffer;
@@ -128,6 +128,8 @@
 
   int new_fb_idx;
 
+  int cur_show_frame_fb_idx;
+
 #if CONFIG_VP9_POSTPROC
   YV12_BUFFER_CONFIG post_proc_buffer;
   YV12_BUFFER_CONFIG post_proc_buffer_int;
@@ -242,22 +244,50 @@
   int byte_alignment;
   int skip_loop_filter;
 
-  // Private data associated with the frame buffer callbacks.
-  void *cb_priv;
-  vpx_get_frame_buffer_cb_fn_t get_fb_cb;
-  vpx_release_frame_buffer_cb_fn_t release_fb_cb;
-
-  // Handles memory for the codec.
-  InternalFrameBufferList int_frame_buffers;
-
   // External BufferPool passed from outside.
   BufferPool *buffer_pool;
 
   PARTITION_CONTEXT *above_seg_context;
   ENTROPY_CONTEXT *above_context;
   int above_context_alloc_cols;
+
+  int lf_row;
 } VP9_COMMON;
 
+typedef struct {
+  int frame_width;
+  int frame_height;
+  int render_frame_width;
+  int render_frame_height;
+  int mi_rows;
+  int mi_cols;
+  int mb_rows;
+  int mb_cols;
+  int num_mbs;
+  vpx_bit_depth_t bit_depth;
+} FRAME_INFO;
+
+static INLINE void init_frame_info(FRAME_INFO *frame_info,
+                                   const VP9_COMMON *cm) {
+  frame_info->frame_width = cm->width;
+  frame_info->frame_height = cm->height;
+  frame_info->render_frame_width = cm->render_width;
+  frame_info->render_frame_height = cm->render_height;
+  frame_info->mi_cols = cm->mi_cols;
+  frame_info->mi_rows = cm->mi_rows;
+  frame_info->mb_cols = cm->mb_cols;
+  frame_info->mb_rows = cm->mb_rows;
+  frame_info->num_mbs = cm->MBs;
+  frame_info->bit_depth = cm->bit_depth;
+  // TODO(angiebird): Figure out how to get subsampling_x/y here
+}
+
+static INLINE YV12_BUFFER_CONFIG *get_buf_frame(VP9_COMMON *cm, int index) {
+  if (index < 0 || index >= FRAME_BUFFERS) return NULL;
+  if (cm->error.error_code != VPX_CODEC_OK) return NULL;
+  return &cm->buffer_pool->frame_bufs[index].buf;
+}
+
 static INLINE YV12_BUFFER_CONFIG *get_ref_frame(VP9_COMMON *cm, int index) {
   if (index < 0 || index >= REF_FRAMES) return NULL;
   if (cm->ref_frame_map[index] < 0) return NULL;
@@ -405,4 +435,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_ONYXC_INT_H_
+#endif  // VPX_VP9_COMMON_VP9_ONYXC_INT_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_postproc.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_postproc.c
index dfc315e..d2c8535 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_postproc.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_postproc.c
@@ -183,7 +183,8 @@
 }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
-static void deblock_and_de_macro_block(YV12_BUFFER_CONFIG *source,
+static void deblock_and_de_macro_block(VP9_COMMON *cm,
+                                       YV12_BUFFER_CONFIG *source,
                                        YV12_BUFFER_CONFIG *post, int q,
                                        int low_var_thresh, int flag,
                                        uint8_t *limits) {
@@ -216,7 +217,7 @@
         source->uv_height, source->uv_width, ppl);
   } else {
 #endif  // CONFIG_VP9_HIGHBITDEPTH
-    vp9_deblock(source, post, q, limits);
+    vp9_deblock(cm, source, post, q, limits);
     vpx_mbpost_proc_across_ip(post->y_buffer, post->y_stride, post->y_height,
                               post->y_width, q2mbl(q));
     vpx_mbpost_proc_down(post->y_buffer, post->y_stride, post->y_height,
@@ -226,8 +227,8 @@
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 }
 
-void vp9_deblock(const YV12_BUFFER_CONFIG *src, YV12_BUFFER_CONFIG *dst, int q,
-                 uint8_t *limits) {
+void vp9_deblock(struct VP9Common *cm, const YV12_BUFFER_CONFIG *src,
+                 YV12_BUFFER_CONFIG *dst, int q, uint8_t *limits) {
   const int ppl =
       (int)(6.0e-05 * q * q * q - 0.0067 * q * q + 0.306 * q + 0.0065 + 0.5);
 #if CONFIG_VP9_HIGHBITDEPTH
@@ -252,9 +253,8 @@
   } else {
 #endif  // CONFIG_VP9_HIGHBITDEPTH
     int mbr;
-    const int mb_rows = src->y_height / 16;
-    const int mb_cols = src->y_width / 16;
-
+    const int mb_rows = cm->mb_rows;
+    const int mb_cols = cm->mb_cols;
     memset(limits, (unsigned char)ppl, 16 * mb_cols);
 
     for (mbr = 0; mbr < mb_rows; mbr++) {
@@ -276,9 +276,9 @@
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 }
 
-void vp9_denoise(const YV12_BUFFER_CONFIG *src, YV12_BUFFER_CONFIG *dst, int q,
-                 uint8_t *limits) {
-  vp9_deblock(src, dst, q, limits);
+void vp9_denoise(struct VP9Common *cm, const YV12_BUFFER_CONFIG *src,
+                 YV12_BUFFER_CONFIG *dst, int q, uint8_t *limits) {
+  vp9_deblock(cm, src, dst, q, limits);
 }
 
 static void swap_mi_and_prev_mi(VP9_COMMON *cm) {
@@ -293,7 +293,7 @@
 }
 
 int vp9_post_proc_frame(struct VP9Common *cm, YV12_BUFFER_CONFIG *dest,
-                        vp9_ppflags_t *ppflags) {
+                        vp9_ppflags_t *ppflags, int unscaled_width) {
   const int q = VPXMIN(105, cm->lf.filter_level * 2);
   const int flags = ppflags->post_proc_flag;
   YV12_BUFFER_CONFIG *const ppbuf = &cm->post_proc_buffer;
@@ -359,7 +359,7 @@
   if (flags & (VP9D_DEMACROBLOCK | VP9D_DEBLOCK)) {
     if (!cm->postproc_state.limits) {
       cm->postproc_state.limits =
-          vpx_calloc(cm->width, sizeof(*cm->postproc_state.limits));
+          vpx_calloc(unscaled_width, sizeof(*cm->postproc_state.limits));
     }
   }
 
@@ -383,21 +383,21 @@
       vpx_yv12_copy_frame(ppbuf, &cm->post_proc_buffer_int);
     }
     if ((flags & VP9D_DEMACROBLOCK) && cm->post_proc_buffer_int.buffer_alloc) {
-      deblock_and_de_macro_block(&cm->post_proc_buffer_int, ppbuf,
+      deblock_and_de_macro_block(cm, &cm->post_proc_buffer_int, ppbuf,
                                  q + (ppflags->deblocking_level - 5) * 10, 1, 0,
                                  cm->postproc_state.limits);
     } else if (flags & VP9D_DEBLOCK) {
-      vp9_deblock(&cm->post_proc_buffer_int, ppbuf, q,
+      vp9_deblock(cm, &cm->post_proc_buffer_int, ppbuf, q,
                   cm->postproc_state.limits);
     } else {
       vpx_yv12_copy_frame(&cm->post_proc_buffer_int, ppbuf);
     }
   } else if (flags & VP9D_DEMACROBLOCK) {
-    deblock_and_de_macro_block(cm->frame_to_show, ppbuf,
+    deblock_and_de_macro_block(cm, cm->frame_to_show, ppbuf,
                                q + (ppflags->deblocking_level - 5) * 10, 1, 0,
                                cm->postproc_state.limits);
   } else if (flags & VP9D_DEBLOCK) {
-    vp9_deblock(cm->frame_to_show, ppbuf, q, cm->postproc_state.limits);
+    vp9_deblock(cm, cm->frame_to_show, ppbuf, q, cm->postproc_state.limits);
   } else {
     vpx_yv12_copy_frame(cm->frame_to_show, ppbuf);
   }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_postproc.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_postproc.h
index 6059094..bbe3aed 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_postproc.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_postproc.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_POSTPROC_H_
-#define VP9_COMMON_VP9_POSTPROC_H_
+#ifndef VPX_VP9_COMMON_VP9_POSTPROC_H_
+#define VPX_VP9_COMMON_VP9_POSTPROC_H_
 
 #include "vpx_ports/mem.h"
 #include "vpx_scale/yv12config.h"
@@ -38,16 +38,16 @@
 #define MFQE_PRECISION 4
 
 int vp9_post_proc_frame(struct VP9Common *cm, YV12_BUFFER_CONFIG *dest,
-                        vp9_ppflags_t *flags);
+                        vp9_ppflags_t *ppflags, int unscaled_width);
 
-void vp9_denoise(const YV12_BUFFER_CONFIG *src, YV12_BUFFER_CONFIG *dst, int q,
-                 uint8_t *limits);
+void vp9_denoise(struct VP9Common *cm, const YV12_BUFFER_CONFIG *src,
+                 YV12_BUFFER_CONFIG *dst, int q, uint8_t *limits);
 
-void vp9_deblock(const YV12_BUFFER_CONFIG *src, YV12_BUFFER_CONFIG *dst, int q,
-                 uint8_t *limits);
+void vp9_deblock(struct VP9Common *cm, const YV12_BUFFER_CONFIG *src,
+                 YV12_BUFFER_CONFIG *dst, int q, uint8_t *limits);
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_POSTPROC_H_
+#endif  // VPX_VP9_COMMON_VP9_POSTPROC_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_ppflags.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_ppflags.h
index b8b647b..a0e3017 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_ppflags.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_ppflags.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_PPFLAGS_H_
-#define VP9_COMMON_VP9_PPFLAGS_H_
+#ifndef VPX_VP9_COMMON_VP9_PPFLAGS_H_
+#define VPX_VP9_COMMON_VP9_PPFLAGS_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -33,4 +33,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_PPFLAGS_H_
+#endif  // VPX_VP9_COMMON_VP9_PPFLAGS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_pred_common.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_pred_common.c
index 15c7e63..375cb4d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_pred_common.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_pred_common.c
@@ -13,6 +13,32 @@
 #include "vp9/common/vp9_pred_common.h"
 #include "vp9/common/vp9_seg_common.h"
 
+int vp9_compound_reference_allowed(const VP9_COMMON *cm) {
+  int i;
+  for (i = 1; i < REFS_PER_FRAME; ++i)
+    if (cm->ref_frame_sign_bias[i + 1] != cm->ref_frame_sign_bias[1]) return 1;
+
+  return 0;
+}
+
+void vp9_setup_compound_reference_mode(VP9_COMMON *cm) {
+  if (cm->ref_frame_sign_bias[LAST_FRAME] ==
+      cm->ref_frame_sign_bias[GOLDEN_FRAME]) {
+    cm->comp_fixed_ref = ALTREF_FRAME;
+    cm->comp_var_ref[0] = LAST_FRAME;
+    cm->comp_var_ref[1] = GOLDEN_FRAME;
+  } else if (cm->ref_frame_sign_bias[LAST_FRAME] ==
+             cm->ref_frame_sign_bias[ALTREF_FRAME]) {
+    cm->comp_fixed_ref = GOLDEN_FRAME;
+    cm->comp_var_ref[0] = LAST_FRAME;
+    cm->comp_var_ref[1] = ALTREF_FRAME;
+  } else {
+    cm->comp_fixed_ref = LAST_FRAME;
+    cm->comp_var_ref[0] = GOLDEN_FRAME;
+    cm->comp_var_ref[1] = ALTREF_FRAME;
+  }
+}
+
 int vp9_get_reference_mode_context(const VP9_COMMON *cm,
                                    const MACROBLOCKD *xd) {
   int ctx;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_pred_common.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_pred_common.h
index 8400bd7..ee59669 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_pred_common.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_pred_common.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_PRED_COMMON_H_
-#define VP9_COMMON_VP9_PRED_COMMON_H_
+#ifndef VPX_VP9_COMMON_VP9_PRED_COMMON_H_
+#define VPX_VP9_COMMON_VP9_PRED_COMMON_H_
 
 #include "vp9/common/vp9_blockd.h"
 #include "vp9/common/vp9_onyxc_int.h"
@@ -145,6 +145,10 @@
   return cm->fc->single_ref_prob[vp9_get_pred_context_single_ref_p2(xd)][1];
 }
 
+int vp9_compound_reference_allowed(const VP9_COMMON *cm);
+
+void vp9_setup_compound_reference_mode(VP9_COMMON *cm);
+
 // Returns a context number for the given MB prediction signal
 // The mode info data structure has a one element border above and to the
 // left of the entries corresponding to real blocks.
@@ -176,12 +180,6 @@
   }
 }
 
-static INLINE const vpx_prob *get_tx_probs2(TX_SIZE max_tx_size,
-                                            const MACROBLOCKD *xd,
-                                            const struct tx_probs *tx_probs) {
-  return get_tx_probs(max_tx_size, get_tx_size_context(xd), tx_probs);
-}
-
 static INLINE unsigned int *get_tx_counts(TX_SIZE max_tx_size, int ctx,
                                           struct tx_counts *tx_counts) {
   switch (max_tx_size) {
@@ -196,4 +194,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_PRED_COMMON_H_
+#endif  // VPX_VP9_COMMON_VP9_PRED_COMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_quant_common.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_quant_common.h
index 4bae4a8..ec8b9f4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_quant_common.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_quant_common.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_QUANT_COMMON_H_
-#define VP9_COMMON_VP9_QUANT_COMMON_H_
+#ifndef VPX_VP9_COMMON_VP9_QUANT_COMMON_H_
+#define VPX_VP9_COMMON_VP9_QUANT_COMMON_H_
 
 #include "vpx/vpx_codec.h"
 #include "vp9/common/vp9_seg_common.h"
@@ -33,4 +33,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_QUANT_COMMON_H_
+#endif  // VPX_VP9_COMMON_VP9_QUANT_COMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_reconinter.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_reconinter.c
index a108a65..ff59ff5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_reconinter.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_reconinter.c
@@ -63,14 +63,14 @@
 }
 
 static MV mi_mv_pred_q4(const MODE_INFO *mi, int idx) {
-  MV res = {
-    round_mv_comp_q4(
-        mi->bmi[0].as_mv[idx].as_mv.row + mi->bmi[1].as_mv[idx].as_mv.row +
-        mi->bmi[2].as_mv[idx].as_mv.row + mi->bmi[3].as_mv[idx].as_mv.row),
-    round_mv_comp_q4(
-        mi->bmi[0].as_mv[idx].as_mv.col + mi->bmi[1].as_mv[idx].as_mv.col +
-        mi->bmi[2].as_mv[idx].as_mv.col + mi->bmi[3].as_mv[idx].as_mv.col)
-  };
+  MV res = { round_mv_comp_q4(mi->bmi[0].as_mv[idx].as_mv.row +
+                              mi->bmi[1].as_mv[idx].as_mv.row +
+                              mi->bmi[2].as_mv[idx].as_mv.row +
+                              mi->bmi[3].as_mv[idx].as_mv.row),
+             round_mv_comp_q4(mi->bmi[0].as_mv[idx].as_mv.col +
+                              mi->bmi[1].as_mv[idx].as_mv.col +
+                              mi->bmi[2].as_mv[idx].as_mv.col +
+                              mi->bmi[3].as_mv[idx].as_mv.col) };
   return res;
 }
 
@@ -96,8 +96,8 @@
   const int spel_right = spel_left - SUBPEL_SHIFTS;
   const int spel_top = (VP9_INTERP_EXTEND + bh) << SUBPEL_BITS;
   const int spel_bottom = spel_top - SUBPEL_SHIFTS;
-  MV clamped_mv = { src_mv->row * (1 << (1 - ss_y)),
-                    src_mv->col * (1 << (1 - ss_x)) };
+  MV clamped_mv = { (short)(src_mv->row * (1 << (1 - ss_y))),
+                    (short)(src_mv->col * (1 << (1 - ss_x))) };
   assert(ss_x <= 1);
   assert(ss_y <= 1);
 
@@ -136,7 +136,7 @@
     const struct scale_factors *const sf = &xd->block_refs[ref]->sf;
     struct buf_2d *const pre_buf = &pd->pre[ref];
     struct buf_2d *const dst_buf = &pd->dst;
-    uint8_t *const dst = dst_buf->buf + dst_buf->stride * y + x;
+    uint8_t *const dst = dst_buf->buf + (int64_t)dst_buf->stride * y + x;
     const MV mv = mi->sb_type < BLOCK_8X8
                       ? average_split_mvs(pd, mi, ref, block)
                       : mi->mv[ref].as_mv;
@@ -178,7 +178,7 @@
       xs = sf->x_step_q4;
       ys = sf->y_step_q4;
     } else {
-      pre = pre_buf->buf + (y * pre_buf->stride + x);
+      pre = pre_buf->buf + ((int64_t)y * pre_buf->stride + x);
       scaled_mv.row = mv_q4.row;
       scaled_mv.col = mv_q4.col;
       xs = ys = 16;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_reconinter.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_reconinter.h
index bb9291a..12b5458 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_reconinter.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_reconinter.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_RECONINTER_H_
-#define VP9_COMMON_VP9_RECONINTER_H_
+#ifndef VPX_VP9_COMMON_VP9_RECONINTER_H_
+#define VPX_VP9_COMMON_VP9_RECONINTER_H_
 
 #include "vp9/common/vp9_filter.h"
 #include "vp9/common/vp9_onyxc_int.h"
@@ -61,24 +61,25 @@
                                    BLOCK_SIZE bsize);
 
 void vp9_build_inter_predictor(const uint8_t *src, int src_stride, uint8_t *dst,
-                               int dst_stride, const MV *mv_q3,
+                               int dst_stride, const MV *src_mv,
                                const struct scale_factors *sf, int w, int h,
-                               int do_avg, const InterpKernel *kernel,
+                               int ref, const InterpKernel *kernel,
                                enum mv_precision precision, int x, int y);
 
 #if CONFIG_VP9_HIGHBITDEPTH
 void vp9_highbd_build_inter_predictor(
     const uint16_t *src, int src_stride, uint16_t *dst, int dst_stride,
-    const MV *mv_q3, const struct scale_factors *sf, int w, int h, int do_avg,
+    const MV *src_mv, const struct scale_factors *sf, int w, int h, int ref,
     const InterpKernel *kernel, enum mv_precision precision, int x, int y,
     int bd);
 #endif
 
-static INLINE int scaled_buffer_offset(int x_offset, int y_offset, int stride,
-                                       const struct scale_factors *sf) {
+static INLINE int64_t scaled_buffer_offset(int x_offset, int y_offset,
+                                           int stride,
+                                           const struct scale_factors *sf) {
   const int x = sf ? sf->scale_value_x(x_offset, sf) : x_offset;
   const int y = sf ? sf->scale_value_y(y_offset, sf) : y_offset;
-  return y * stride + x;
+  return (int64_t)y * stride + x;
 }
 
 static INLINE void setup_pred_plane(struct buf_2d *dst, uint8_t *src,
@@ -103,4 +104,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_RECONINTER_H_
+#endif  // VPX_VP9_COMMON_VP9_RECONINTER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_reconintra.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_reconintra.h
index 78e41c8..426a35e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_reconintra.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_reconintra.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_RECONINTRA_H_
-#define VP9_COMMON_VP9_RECONINTRA_H_
+#ifndef VPX_VP9_COMMON_VP9_RECONINTRA_H_
+#define VPX_VP9_COMMON_VP9_RECONINTRA_H_
 
 #include "vpx/vpx_integer.h"
 #include "vp9/common/vp9_blockd.h"
@@ -28,4 +28,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_RECONINTRA_H_
+#endif  // VPX_VP9_COMMON_VP9_RECONINTRA_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_rtcd_defs.pl b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_rtcd_defs.pl
index 274edfd..6980b9b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_rtcd_defs.pl
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_rtcd_defs.pl
@@ -62,14 +62,14 @@
 
 add_proto qw/void vp9_iht8x8_64_add/, "const tran_low_t *input, uint8_t *dest, int stride, int tx_type";
 
-add_proto qw/void vp9_iht16x16_256_add/, "const tran_low_t *input, uint8_t *output, int pitch, int tx_type";
+add_proto qw/void vp9_iht16x16_256_add/, "const tran_low_t *input, uint8_t *dest, int stride, int tx_type";
 
 if (vpx_config("CONFIG_EMULATE_HARDWARE") ne "yes") {
   # Note that there are more specializations appended when
   # CONFIG_VP9_HIGHBITDEPTH is off.
-  specialize qw/vp9_iht4x4_16_add neon sse2/;
-  specialize qw/vp9_iht8x8_64_add sse2/;
-  specialize qw/vp9_iht16x16_256_add sse2/;
+  specialize qw/vp9_iht4x4_16_add neon sse2 vsx/;
+  specialize qw/vp9_iht8x8_64_add neon sse2 vsx/;
+  specialize qw/vp9_iht16x16_256_add neon sse2 vsx/;
   if (vpx_config("CONFIG_VP9_HIGHBITDEPTH") ne "yes") {
     # Note that these specializations are appended to the above ones.
     specialize qw/vp9_iht4x4_16_add dspr2 msa/;
@@ -100,12 +100,12 @@
 
   add_proto qw/void vp9_highbd_iht8x8_64_add/, "const tran_low_t *input, uint16_t *dest, int stride, int tx_type, int bd";
 
-  add_proto qw/void vp9_highbd_iht16x16_256_add/, "const tran_low_t *input, uint16_t *output, int pitch, int tx_type, int bd";
+  add_proto qw/void vp9_highbd_iht16x16_256_add/, "const tran_low_t *input, uint16_t *dest, int stride, int tx_type, int bd";
 
   if (vpx_config("CONFIG_EMULATE_HARDWARE") ne "yes") {
-    specialize qw/vp9_highbd_iht4x4_16_add sse4_1/;
-    specialize qw/vp9_highbd_iht8x8_64_add sse4_1/;
-    specialize qw/vp9_highbd_iht16x16_256_add sse4_1/;
+    specialize qw/vp9_highbd_iht4x4_16_add neon sse4_1/;
+    specialize qw/vp9_highbd_iht8x8_64_add neon sse4_1/;
+    specialize qw/vp9_highbd_iht16x16_256_add neon sse4_1/;
   }
 }
 
@@ -129,28 +129,22 @@
 add_proto qw/int64_t vp9_block_error_fp/, "const tran_low_t *coeff, const tran_low_t *dqcoeff, int block_size";
 
 add_proto qw/void vp9_quantize_fp/, "const tran_low_t *coeff_ptr, intptr_t n_coeffs, int skip_block, const int16_t *round_ptr, const int16_t *quant_ptr, tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr, uint16_t *eob_ptr, const int16_t *scan, const int16_t *iscan";
-specialize qw/vp9_quantize_fp neon sse2 avx2/, "$ssse3_x86_64";
+specialize qw/vp9_quantize_fp neon sse2 avx2 vsx/, "$ssse3_x86_64";
 
 add_proto qw/void vp9_quantize_fp_32x32/, "const tran_low_t *coeff_ptr, intptr_t n_coeffs, int skip_block, const int16_t *round_ptr, const int16_t *quant_ptr, tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr, uint16_t *eob_ptr, const int16_t *scan, const int16_t *iscan";
-specialize qw/vp9_quantize_fp_32x32 neon/, "$ssse3_x86_64";
-
-add_proto qw/void vp9_fdct8x8_quant/, "const int16_t *input, int stride, tran_low_t *coeff_ptr, intptr_t n_coeffs, int skip_block, const int16_t *round_ptr, const int16_t *quant_ptr, tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr, uint16_t *eob_ptr, const int16_t *scan, const int16_t *iscan";
+specialize qw/vp9_quantize_fp_32x32 neon vsx/, "$ssse3_x86_64";
 
 if (vpx_config("CONFIG_VP9_HIGHBITDEPTH") eq "yes") {
   specialize qw/vp9_block_error avx2 sse2/;
 
   specialize qw/vp9_block_error_fp avx2 sse2/;
 
-  specialize qw/vp9_fdct8x8_quant neon ssse3/;
-
   add_proto qw/int64_t vp9_highbd_block_error/, "const tran_low_t *coeff, const tran_low_t *dqcoeff, intptr_t block_size, int64_t *ssz, int bd";
   specialize qw/vp9_highbd_block_error sse2/;
 } else {
   specialize qw/vp9_block_error avx2 msa sse2/;
 
   specialize qw/vp9_block_error_fp neon avx2 sse2/;
-
-  specialize qw/vp9_fdct8x8_quant sse2 ssse3 neon/;
 }
 
 # fdct functions
@@ -183,11 +177,20 @@
 add_proto qw/int vp9_diamond_search_sad/, "const struct macroblock *x, const struct search_site_config *cfg,  struct mv *ref_mv, struct mv *best_mv, int search_param, int sad_per_bit, int *num00, const struct vp9_variance_vtable *fn_ptr, const struct mv *center_mv";
 specialize qw/vp9_diamond_search_sad avx/;
 
+#
+# Apply temporal filter
+#
 if (vpx_config("CONFIG_REALTIME_ONLY") ne "yes") {
-add_proto qw/void vp9_temporal_filter_apply/, "const uint8_t *frame1, unsigned int stride, const uint8_t *frame2, unsigned int block_width, unsigned int block_height, int strength, int filter_weight, uint32_t *accumulator, uint16_t *count";
-specialize qw/vp9_temporal_filter_apply sse4_1/;
+add_proto qw/void vp9_apply_temporal_filter/, "const uint8_t *y_src, int y_src_stride, const uint8_t *y_pre, int y_pre_stride, const uint8_t *u_src, const uint8_t *v_src, int uv_src_stride, const uint8_t *u_pre, const uint8_t *v_pre, int uv_pre_stride, unsigned int block_width, unsigned int block_height, int ss_x, int ss_y, int strength, const int *const blk_fw, int use_32x32, uint32_t *y_accumulator, uint16_t *y_count, uint32_t *u_accumulator, uint16_t *u_count, uint32_t *v_accumulator, uint16_t *v_count";
+specialize qw/vp9_apply_temporal_filter sse4_1/;
+
+  if (vpx_config("CONFIG_VP9_HIGHBITDEPTH") eq "yes") {
+    add_proto qw/void vp9_highbd_apply_temporal_filter/, "const uint16_t *y_src, int y_src_stride, const uint16_t *y_pre, int y_pre_stride, const uint16_t *u_src, const uint16_t *v_src, int uv_src_stride, const uint16_t *u_pre, const uint16_t *v_pre, int uv_pre_stride, unsigned int block_width, unsigned int block_height, int ss_x, int ss_y, int strength, const int *const blk_fw, int use_32x32, uint32_t *y_accum, uint16_t *y_count, uint32_t *u_accum, uint16_t *u_count, uint32_t *v_accum, uint16_t *v_count";
+    specialize qw/vp9_highbd_apply_temporal_filter sse4_1/;
+  }
 }
 
+
 if (vpx_config("CONFIG_VP9_HIGHBITDEPTH") eq "yes") {
 
   # ENCODEMB INVOKE
@@ -205,7 +208,7 @@
 
   add_proto qw/void vp9_highbd_fwht4x4/, "const int16_t *input, tran_low_t *output, int stride";
 
-  add_proto qw/void vp9_highbd_temporal_filter_apply/, "const uint8_t *frame1, unsigned int stride, const uint8_t *frame2, unsigned int block_width, unsigned int block_height, int strength, int filter_weight, uint32_t *accumulator, uint16_t *count";
+  add_proto qw/void vp9_highbd_temporal_filter_apply/, "const uint8_t *frame1, unsigned int stride, const uint8_t *frame2, unsigned int block_width, unsigned int block_height, int strength, int *blk_fw, int use_32x32, uint32_t *accumulator, uint16_t *count";
 
 }
 # End vp9_high encoder functions
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_scale.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_scale.h
index ada8dba..2f3b609 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_scale.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_scale.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_SCALE_H_
-#define VP9_COMMON_VP9_SCALE_H_
+#ifndef VPX_VP9_COMMON_VP9_SCALE_H_
+#define VPX_VP9_COMMON_VP9_SCALE_H_
 
 #include "vp9/common/vp9_mv.h"
 #include "vpx_dsp/vpx_convolve.h"
@@ -20,7 +20,7 @@
 
 #define REF_SCALE_SHIFT 14
 #define REF_NO_SCALE (1 << REF_SCALE_SHIFT)
-#define REF_INVALID_SCALE -1
+#define REF_INVALID_SCALE (-1)
 
 struct scale_factors {
   int x_scale_fp;  // horizontal fixed point scale factor
@@ -42,7 +42,7 @@
 #if CONFIG_VP9_HIGHBITDEPTH
 void vp9_setup_scale_factors_for_frame(struct scale_factors *sf, int other_w,
                                        int other_h, int this_w, int this_h,
-                                       int use_high);
+                                       int use_highbd);
 #else
 void vp9_setup_scale_factors_for_frame(struct scale_factors *sf, int other_w,
                                        int other_h, int this_w, int this_h);
@@ -68,4 +68,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_SCALE_H_
+#endif  // VPX_VP9_COMMON_VP9_SCALE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_scan.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_scan.h
index b3520e7..72a9a5e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_scan.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_scan.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_SCAN_H_
-#define VP9_COMMON_VP9_SCAN_H_
+#ifndef VPX_VP9_COMMON_VP9_SCAN_H_
+#define VPX_VP9_COMMON_VP9_SCAN_H_
 
 #include "vpx/vpx_integer.h"
 #include "vpx_ports/mem.h"
@@ -55,4 +55,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_SCAN_H_
+#endif  // VPX_VP9_COMMON_VP9_SCAN_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_seg_common.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_seg_common.h
index b9bf75d..b63e4f4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_seg_common.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_seg_common.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_SEG_COMMON_H_
-#define VP9_COMMON_VP9_SEG_COMMON_H_
+#ifndef VPX_VP9_COMMON_VP9_SEG_COMMON_H_
+#define VPX_VP9_COMMON_VP9_SEG_COMMON_H_
 
 #include "vpx_dsp/prob.h"
 
@@ -78,4 +78,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_SEG_COMMON_H_
+#endif  // VPX_VP9_COMMON_VP9_SEG_COMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_thread_common.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_thread_common.c
index 8d44e91..b3d5016 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_thread_common.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_thread_common.c
@@ -8,6 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include <assert.h>
+#include <limits.h>
 #include "./vpx_config.h"
 #include "vpx_dsp/vpx_dsp_common.h"
 #include "vpx_mem/vpx_mem.h"
@@ -38,11 +40,11 @@
   const int nsync = lf_sync->sync_range;
 
   if (r && !(c & (nsync - 1))) {
-    pthread_mutex_t *const mutex = &lf_sync->mutex_[r - 1];
+    pthread_mutex_t *const mutex = &lf_sync->mutex[r - 1];
     mutex_lock(mutex);
 
     while (c > lf_sync->cur_sb_col[r - 1] - nsync) {
-      pthread_cond_wait(&lf_sync->cond_[r - 1], mutex);
+      pthread_cond_wait(&lf_sync->cond[r - 1], mutex);
     }
     pthread_mutex_unlock(mutex);
   }
@@ -69,12 +71,12 @@
   }
 
   if (sig) {
-    mutex_lock(&lf_sync->mutex_[r]);
+    mutex_lock(&lf_sync->mutex[r]);
 
     lf_sync->cur_sb_col[r] = cur;
 
-    pthread_cond_signal(&lf_sync->cond_[r]);
-    pthread_mutex_unlock(&lf_sync->mutex_[r]);
+    pthread_cond_signal(&lf_sync->cond[r]);
+    pthread_mutex_unlock(&lf_sync->mutex[r]);
   }
 #else
   (void)lf_sync;
@@ -91,6 +93,7 @@
     int y_only, VP9LfSync *const lf_sync) {
   const int num_planes = y_only ? 1 : MAX_MB_PLANE;
   const int sb_cols = mi_cols_aligned_to_sb(cm->mi_cols) >> MI_BLOCK_SIZE_LOG2;
+  const int num_active_workers = lf_sync->num_active_workers;
   int mi_row, mi_col;
   enum lf_path path;
   if (y_only)
@@ -102,8 +105,10 @@
   else
     path = LF_PATH_SLOW;
 
+  assert(num_active_workers > 0);
+
   for (mi_row = start; mi_row < stop;
-       mi_row += lf_sync->num_workers * MI_BLOCK_SIZE) {
+       mi_row += num_active_workers * MI_BLOCK_SIZE) {
     MODE_INFO **const mi = cm->mi_grid_visible + mi_row * cm->mi_stride;
     LOOP_FILTER_MASK *lfm = get_lfm(&cm->lf, mi_row, 0);
 
@@ -157,10 +162,12 @@
   const VPxWorkerInterface *const winterface = vpx_get_worker_interface();
   // Number of superblock rows and cols
   const int sb_rows = mi_cols_aligned_to_sb(cm->mi_rows) >> MI_BLOCK_SIZE_LOG2;
-  // Decoder may allocate more threads than number of tiles based on user's
-  // input.
-  const int tile_cols = 1 << cm->log2_tile_cols;
-  const int num_workers = VPXMIN(nworkers, tile_cols);
+  const int num_tile_cols = 1 << cm->log2_tile_cols;
+  // Limit the number of workers to prevent changes in frame dimensions from
+  // causing incorrect sync calculations when sb_rows < threads/tile_cols.
+  // Further restrict them by the number of tile columns should the user
+  // request more as this implementation doesn't scale well beyond that.
+  const int num_workers = VPXMIN(nworkers, VPXMIN(num_tile_cols, sb_rows));
   int i;
 
   if (!lf_sync->sync_range || sb_rows != lf_sync->rows ||
@@ -168,6 +175,7 @@
     vp9_loop_filter_dealloc(lf_sync);
     vp9_loop_filter_alloc(lf_sync, cm, sb_rows, cm->width, num_workers);
   }
+  lf_sync->num_active_workers = num_workers;
 
   // Initialize cur_sb_col to -1 for all SB rows.
   memset(lf_sync->cur_sb_col, -1, sizeof(*lf_sync->cur_sb_col) * sb_rows);
@@ -231,6 +239,28 @@
                       workers, num_workers, lf_sync);
 }
 
+void vp9_lpf_mt_init(VP9LfSync *lf_sync, VP9_COMMON *cm, int frame_filter_level,
+                     int num_workers) {
+  const int sb_rows = mi_cols_aligned_to_sb(cm->mi_rows) >> MI_BLOCK_SIZE_LOG2;
+
+  if (!frame_filter_level) return;
+
+  if (!lf_sync->sync_range || sb_rows != lf_sync->rows ||
+      num_workers > lf_sync->num_workers) {
+    vp9_loop_filter_dealloc(lf_sync);
+    vp9_loop_filter_alloc(lf_sync, cm, sb_rows, cm->width, num_workers);
+  }
+
+  // Initialize cur_sb_col to -1 for all SB rows.
+  memset(lf_sync->cur_sb_col, -1, sizeof(*lf_sync->cur_sb_col) * sb_rows);
+
+  lf_sync->corrupted = 0;
+
+  memset(lf_sync->num_tiles_done, 0,
+         sizeof(*lf_sync->num_tiles_done) * sb_rows);
+  cm->lf_row = 0;
+}
+
 // Set up nsync by width.
 static INLINE int get_sync_range(int width) {
   // nsync numbers are picked by testing. For example, for 4k
@@ -253,19 +283,41 @@
   {
     int i;
 
-    CHECK_MEM_ERROR(cm, lf_sync->mutex_,
-                    vpx_malloc(sizeof(*lf_sync->mutex_) * rows));
-    if (lf_sync->mutex_) {
+    CHECK_MEM_ERROR(cm, lf_sync->mutex,
+                    vpx_malloc(sizeof(*lf_sync->mutex) * rows));
+    if (lf_sync->mutex) {
       for (i = 0; i < rows; ++i) {
-        pthread_mutex_init(&lf_sync->mutex_[i], NULL);
+        pthread_mutex_init(&lf_sync->mutex[i], NULL);
       }
     }
 
-    CHECK_MEM_ERROR(cm, lf_sync->cond_,
-                    vpx_malloc(sizeof(*lf_sync->cond_) * rows));
-    if (lf_sync->cond_) {
+    CHECK_MEM_ERROR(cm, lf_sync->cond,
+                    vpx_malloc(sizeof(*lf_sync->cond) * rows));
+    if (lf_sync->cond) {
       for (i = 0; i < rows; ++i) {
-        pthread_cond_init(&lf_sync->cond_[i], NULL);
+        pthread_cond_init(&lf_sync->cond[i], NULL);
+      }
+    }
+
+    CHECK_MEM_ERROR(cm, lf_sync->lf_mutex,
+                    vpx_malloc(sizeof(*lf_sync->lf_mutex)));
+    pthread_mutex_init(lf_sync->lf_mutex, NULL);
+
+    CHECK_MEM_ERROR(cm, lf_sync->recon_done_mutex,
+                    vpx_malloc(sizeof(*lf_sync->recon_done_mutex) * rows));
+    if (lf_sync->recon_done_mutex) {
+      int i;
+      for (i = 0; i < rows; ++i) {
+        pthread_mutex_init(&lf_sync->recon_done_mutex[i], NULL);
+      }
+    }
+
+    CHECK_MEM_ERROR(cm, lf_sync->recon_done_cond,
+                    vpx_malloc(sizeof(*lf_sync->recon_done_cond) * rows));
+    if (lf_sync->recon_done_cond) {
+      int i;
+      for (i = 0; i < rows; ++i) {
+        pthread_cond_init(&lf_sync->recon_done_cond[i], NULL);
       }
     }
   }
@@ -274,39 +326,170 @@
   CHECK_MEM_ERROR(cm, lf_sync->lfdata,
                   vpx_malloc(num_workers * sizeof(*lf_sync->lfdata)));
   lf_sync->num_workers = num_workers;
+  lf_sync->num_active_workers = lf_sync->num_workers;
 
   CHECK_MEM_ERROR(cm, lf_sync->cur_sb_col,
                   vpx_malloc(sizeof(*lf_sync->cur_sb_col) * rows));
 
+  CHECK_MEM_ERROR(cm, lf_sync->num_tiles_done,
+                  vpx_malloc(sizeof(*lf_sync->num_tiles_done) *
+                                 mi_cols_aligned_to_sb(cm->mi_rows) >>
+                             MI_BLOCK_SIZE_LOG2));
+
   // Set up nsync.
   lf_sync->sync_range = get_sync_range(width);
 }
 
 // Deallocate lf synchronization related mutex and data
 void vp9_loop_filter_dealloc(VP9LfSync *lf_sync) {
-  if (lf_sync != NULL) {
-#if CONFIG_MULTITHREAD
-    int i;
+  assert(lf_sync != NULL);
 
-    if (lf_sync->mutex_ != NULL) {
-      for (i = 0; i < lf_sync->rows; ++i) {
-        pthread_mutex_destroy(&lf_sync->mutex_[i]);
-      }
-      vpx_free(lf_sync->mutex_);
+#if CONFIG_MULTITHREAD
+  if (lf_sync->mutex != NULL) {
+    int i;
+    for (i = 0; i < lf_sync->rows; ++i) {
+      pthread_mutex_destroy(&lf_sync->mutex[i]);
     }
-    if (lf_sync->cond_ != NULL) {
-      for (i = 0; i < lf_sync->rows; ++i) {
-        pthread_cond_destroy(&lf_sync->cond_[i]);
-      }
-      vpx_free(lf_sync->cond_);
-    }
-#endif  // CONFIG_MULTITHREAD
-    vpx_free(lf_sync->lfdata);
-    vpx_free(lf_sync->cur_sb_col);
-    // clear the structure as the source of this call may be a resize in which
-    // case this call will be followed by an _alloc() which may fail.
-    vp9_zero(*lf_sync);
+    vpx_free(lf_sync->mutex);
   }
+  if (lf_sync->cond != NULL) {
+    int i;
+    for (i = 0; i < lf_sync->rows; ++i) {
+      pthread_cond_destroy(&lf_sync->cond[i]);
+    }
+    vpx_free(lf_sync->cond);
+  }
+  if (lf_sync->recon_done_mutex != NULL) {
+    int i;
+    for (i = 0; i < lf_sync->rows; ++i) {
+      pthread_mutex_destroy(&lf_sync->recon_done_mutex[i]);
+    }
+    vpx_free(lf_sync->recon_done_mutex);
+  }
+
+  if (lf_sync->lf_mutex != NULL) {
+    pthread_mutex_destroy(lf_sync->lf_mutex);
+    vpx_free(lf_sync->lf_mutex);
+  }
+  if (lf_sync->recon_done_cond != NULL) {
+    int i;
+    for (i = 0; i < lf_sync->rows; ++i) {
+      pthread_cond_destroy(&lf_sync->recon_done_cond[i]);
+    }
+    vpx_free(lf_sync->recon_done_cond);
+  }
+#endif  // CONFIG_MULTITHREAD
+
+  vpx_free(lf_sync->lfdata);
+  vpx_free(lf_sync->cur_sb_col);
+  vpx_free(lf_sync->num_tiles_done);
+  // clear the structure as the source of this call may be a resize in which
+  // case this call will be followed by an _alloc() which may fail.
+  vp9_zero(*lf_sync);
+}
+
+static int get_next_row(VP9_COMMON *cm, VP9LfSync *lf_sync) {
+  int return_val = -1;
+  int cur_row;
+  const int max_rows = cm->mi_rows;
+
+#if CONFIG_MULTITHREAD
+  const int tile_cols = 1 << cm->log2_tile_cols;
+
+  pthread_mutex_lock(lf_sync->lf_mutex);
+  if (cm->lf_row < max_rows) {
+    cur_row = cm->lf_row >> MI_BLOCK_SIZE_LOG2;
+    return_val = cm->lf_row;
+    cm->lf_row += MI_BLOCK_SIZE;
+    if (cm->lf_row < max_rows) {
+      /* If this is not the last row, make sure the next row is also decoded.
+       * This is because the intra predict has to happen before loop filter */
+      cur_row += 1;
+    }
+  }
+  pthread_mutex_unlock(lf_sync->lf_mutex);
+
+  if (return_val == -1) return return_val;
+
+  pthread_mutex_lock(&lf_sync->recon_done_mutex[cur_row]);
+  if (lf_sync->num_tiles_done[cur_row] < tile_cols) {
+    pthread_cond_wait(&lf_sync->recon_done_cond[cur_row],
+                      &lf_sync->recon_done_mutex[cur_row]);
+  }
+  pthread_mutex_unlock(&lf_sync->recon_done_mutex[cur_row]);
+  pthread_mutex_lock(lf_sync->lf_mutex);
+  if (lf_sync->corrupted) {
+    int row = return_val >> MI_BLOCK_SIZE_LOG2;
+    pthread_mutex_lock(&lf_sync->mutex[row]);
+    lf_sync->cur_sb_col[row] = INT_MAX;
+    pthread_cond_signal(&lf_sync->cond[row]);
+    pthread_mutex_unlock(&lf_sync->mutex[row]);
+    return_val = -1;
+  }
+  pthread_mutex_unlock(lf_sync->lf_mutex);
+#else
+  (void)lf_sync;
+  if (cm->lf_row < max_rows) {
+    cur_row = cm->lf_row >> MI_BLOCK_SIZE_LOG2;
+    return_val = cm->lf_row;
+    cm->lf_row += MI_BLOCK_SIZE;
+    if (cm->lf_row < max_rows) {
+      /* If this is not the last row, make sure the next row is also decoded.
+       * This is because the intra predict has to happen before loop filter */
+      cur_row += 1;
+    }
+  }
+#endif  // CONFIG_MULTITHREAD
+
+  return return_val;
+}
+
+void vp9_loopfilter_rows(LFWorkerData *lf_data, VP9LfSync *lf_sync) {
+  int mi_row;
+  VP9_COMMON *cm = lf_data->cm;
+
+  while ((mi_row = get_next_row(cm, lf_sync)) != -1 && mi_row < cm->mi_rows) {
+    lf_data->start = mi_row;
+    lf_data->stop = mi_row + MI_BLOCK_SIZE;
+
+    thread_loop_filter_rows(lf_data->frame_buffer, lf_data->cm, lf_data->planes,
+                            lf_data->start, lf_data->stop, lf_data->y_only,
+                            lf_sync);
+  }
+}
+
+void vp9_set_row(VP9LfSync *lf_sync, int num_tiles, int row, int is_last_row,
+                 int corrupted) {
+#if CONFIG_MULTITHREAD
+  pthread_mutex_lock(lf_sync->lf_mutex);
+  lf_sync->corrupted |= corrupted;
+  pthread_mutex_unlock(lf_sync->lf_mutex);
+  pthread_mutex_lock(&lf_sync->recon_done_mutex[row]);
+  lf_sync->num_tiles_done[row] += 1;
+  if (num_tiles == lf_sync->num_tiles_done[row]) {
+    if (is_last_row) {
+      /* The last 2 rows wait on the last row to be done.
+       * So, we have to broadcast the signal in this case.
+       */
+      pthread_cond_broadcast(&lf_sync->recon_done_cond[row]);
+    } else {
+      pthread_cond_signal(&lf_sync->recon_done_cond[row]);
+    }
+  }
+  pthread_mutex_unlock(&lf_sync->recon_done_mutex[row]);
+#else
+  (void)lf_sync;
+  (void)num_tiles;
+  (void)row;
+  (void)is_last_row;
+  (void)corrupted;
+#endif  // CONFIG_MULTITHREAD
+}
+
+void vp9_loopfilter_job(LFWorkerData *lf_data, VP9LfSync *lf_sync) {
+  thread_loop_filter_rows(lf_data->frame_buffer, lf_data->cm, lf_data->planes,
+                          lf_data->start, lf_data->stop, lf_data->y_only,
+                          lf_sync);
 }
 
 // Accumulate frame counts.
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_thread_common.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_thread_common.h
index 0f7c3ff..5df0117 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_thread_common.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_thread_common.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_THREAD_COMMON_H_
-#define VP9_COMMON_VP9_THREAD_COMMON_H_
+#ifndef VPX_VP9_COMMON_VP9_THREAD_COMMON_H_
+#define VPX_VP9_COMMON_VP9_THREAD_COMMON_H_
 #include "./vpx_config.h"
 #include "vp9/common/vp9_loopfilter.h"
 #include "vpx_util/vpx_thread.h"
@@ -24,8 +24,8 @@
 // Loopfilter row synchronization
 typedef struct VP9LfSyncData {
 #if CONFIG_MULTITHREAD
-  pthread_mutex_t *mutex_;
-  pthread_cond_t *cond_;
+  pthread_mutex_t *mutex;
+  pthread_cond_t *cond;
 #endif
   // Allocate memory to store the loop-filtered superblock index in each row.
   int *cur_sb_col;
@@ -36,7 +36,16 @@
 
   // Row-based parallel loopfilter data
   LFWorkerData *lfdata;
-  int num_workers;
+  int num_workers;         // number of allocated workers.
+  int num_active_workers;  // number of scheduled workers.
+
+#if CONFIG_MULTITHREAD
+  pthread_mutex_t *lf_mutex;
+  pthread_mutex_t *recon_done_mutex;
+  pthread_cond_t *recon_done_cond;
+#endif
+  int *num_tiles_done;
+  int corrupted;
 } VP9LfSync;
 
 // Allocate memory for loopfilter row synchronization.
@@ -53,6 +62,17 @@
                               int partial_frame, VPxWorker *workers,
                               int num_workers, VP9LfSync *lf_sync);
 
+// Multi-threaded loopfilter initialisations
+void vp9_lpf_mt_init(VP9LfSync *lf_sync, struct VP9Common *cm,
+                     int frame_filter_level, int num_workers);
+
+void vp9_loopfilter_rows(LFWorkerData *lf_data, VP9LfSync *lf_sync);
+
+void vp9_set_row(VP9LfSync *lf_sync, int num_tiles, int row, int is_last_row,
+                 int corrupted);
+
+void vp9_loopfilter_job(LFWorkerData *lf_data, VP9LfSync *lf_sync);
+
 void vp9_accumulate_frame_counts(struct FRAME_COUNTS *accum,
                                  const struct FRAME_COUNTS *counts, int is_dec);
 
@@ -60,4 +80,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_THREAD_COMMON_H_
+#endif  // VPX_VP9_COMMON_VP9_THREAD_COMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_tile_common.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_tile_common.h
index 1b11c26..4ccf0a3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_tile_common.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/common/vp9_tile_common.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_VP9_TILE_COMMON_H_
-#define VP9_COMMON_VP9_TILE_COMMON_H_
+#ifndef VPX_VP9_COMMON_VP9_TILE_COMMON_H_
+#define VPX_VP9_COMMON_VP9_TILE_COMMON_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -37,4 +37,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_VP9_TILE_COMMON_H_
+#endif  // VPX_VP9_COMMON_VP9_TILE_COMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodeframe.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodeframe.c
index d0e896c..2a27e6f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodeframe.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodeframe.c
@@ -23,6 +23,9 @@
 #include "vpx_ports/mem_ops.h"
 #include "vpx_scale/vpx_scale.h"
 #include "vpx_util/vpx_thread.h"
+#if CONFIG_BITSTREAM_DEBUG || CONFIG_MISMATCH_DEBUG
+#include "vpx_util/vpx_debug_util.h"
+#endif  // CONFIG_BITSTREAM_DEBUG || CONFIG_MISMATCH_DEBUG
 
 #include "vp9/common/vp9_alloccommon.h"
 #include "vp9/common/vp9_common.h"
@@ -42,34 +45,15 @@
 #include "vp9/decoder/vp9_decodemv.h"
 #include "vp9/decoder/vp9_decoder.h"
 #include "vp9/decoder/vp9_dsubexp.h"
+#include "vp9/decoder/vp9_job_queue.h"
 
 #define MAX_VP9_HEADER_SIZE 80
 
-static int is_compound_reference_allowed(const VP9_COMMON *cm) {
-  int i;
-  for (i = 1; i < REFS_PER_FRAME; ++i)
-    if (cm->ref_frame_sign_bias[i + 1] != cm->ref_frame_sign_bias[1]) return 1;
+typedef int (*predict_recon_func)(TileWorkerData *twd, MODE_INFO *const mi,
+                                  int plane, int row, int col, TX_SIZE tx_size);
 
-  return 0;
-}
-
-static void setup_compound_reference_mode(VP9_COMMON *cm) {
-  if (cm->ref_frame_sign_bias[LAST_FRAME] ==
-      cm->ref_frame_sign_bias[GOLDEN_FRAME]) {
-    cm->comp_fixed_ref = ALTREF_FRAME;
-    cm->comp_var_ref[0] = LAST_FRAME;
-    cm->comp_var_ref[1] = GOLDEN_FRAME;
-  } else if (cm->ref_frame_sign_bias[LAST_FRAME] ==
-             cm->ref_frame_sign_bias[ALTREF_FRAME]) {
-    cm->comp_fixed_ref = GOLDEN_FRAME;
-    cm->comp_var_ref[0] = LAST_FRAME;
-    cm->comp_var_ref[1] = ALTREF_FRAME;
-  } else {
-    cm->comp_fixed_ref = LAST_FRAME;
-    cm->comp_var_ref[0] = GOLDEN_FRAME;
-    cm->comp_var_ref[1] = ALTREF_FRAME;
-  }
-}
+typedef void (*intra_recon_func)(TileWorkerData *twd, MODE_INFO *const mi,
+                                 int plane, int row, int col, TX_SIZE tx_size);
 
 static int read_is_valid(const uint8_t *start, size_t len, const uint8_t *end) {
   return len != 0 && len <= (size_t)(end - start);
@@ -118,7 +102,7 @@
 
 static REFERENCE_MODE read_frame_reference_mode(const VP9_COMMON *cm,
                                                 vpx_reader *r) {
-  if (is_compound_reference_allowed(cm)) {
+  if (vp9_compound_reference_allowed(cm)) {
     return vpx_read_bit(r)
                ? (vpx_read_bit(r) ? REFERENCE_MODE_SELECT : COMPOUND_REFERENCE)
                : SINGLE_REFERENCE;
@@ -351,20 +335,121 @@
   }
 }
 
+static void parse_intra_block_row_mt(TileWorkerData *twd, MODE_INFO *const mi,
+                                     int plane, int row, int col,
+                                     TX_SIZE tx_size) {
+  MACROBLOCKD *const xd = &twd->xd;
+  PREDICTION_MODE mode = (plane == 0) ? mi->mode : mi->uv_mode;
+
+  if (mi->sb_type < BLOCK_8X8)
+    if (plane == 0) mode = xd->mi[0]->bmi[(row << 1) + col].as_mode;
+
+  if (!mi->skip) {
+    struct macroblockd_plane *const pd = &xd->plane[plane];
+    const TX_TYPE tx_type =
+        (plane || xd->lossless) ? DCT_DCT : intra_mode_to_tx_type_lookup[mode];
+    const scan_order *sc = (plane || xd->lossless)
+                               ? &vp9_default_scan_orders[tx_size]
+                               : &vp9_scan_orders[tx_size][tx_type];
+    *pd->eob = vp9_decode_block_tokens(twd, plane, sc, col, row, tx_size,
+                                       mi->segment_id);
+    /* Keep the alignment to 16 */
+    pd->dqcoeff += (16 << (tx_size << 1));
+    pd->eob++;
+  }
+}
+
+static void predict_and_reconstruct_intra_block_row_mt(TileWorkerData *twd,
+                                                       MODE_INFO *const mi,
+                                                       int plane, int row,
+                                                       int col,
+                                                       TX_SIZE tx_size) {
+  MACROBLOCKD *const xd = &twd->xd;
+  struct macroblockd_plane *const pd = &xd->plane[plane];
+  PREDICTION_MODE mode = (plane == 0) ? mi->mode : mi->uv_mode;
+  uint8_t *dst = &pd->dst.buf[4 * row * pd->dst.stride + 4 * col];
+
+  if (mi->sb_type < BLOCK_8X8)
+    if (plane == 0) mode = xd->mi[0]->bmi[(row << 1) + col].as_mode;
+
+  vp9_predict_intra_block(xd, pd->n4_wl, tx_size, mode, dst, pd->dst.stride,
+                          dst, pd->dst.stride, col, row, plane);
+
+  if (!mi->skip) {
+    const TX_TYPE tx_type =
+        (plane || xd->lossless) ? DCT_DCT : intra_mode_to_tx_type_lookup[mode];
+    if (*pd->eob > 0) {
+      inverse_transform_block_intra(xd, plane, tx_type, tx_size, dst,
+                                    pd->dst.stride, *pd->eob);
+    }
+    /* Keep the alignment to 16 */
+    pd->dqcoeff += (16 << (tx_size << 1));
+    pd->eob++;
+  }
+}
+
 static int reconstruct_inter_block(TileWorkerData *twd, MODE_INFO *const mi,
-                                   int plane, int row, int col,
-                                   TX_SIZE tx_size) {
+                                   int plane, int row, int col, TX_SIZE tx_size,
+                                   int mi_row, int mi_col) {
+  MACROBLOCKD *const xd = &twd->xd;
+  struct macroblockd_plane *const pd = &xd->plane[plane];
+  const scan_order *sc = &vp9_default_scan_orders[tx_size];
+  const int eob = vp9_decode_block_tokens(twd, plane, sc, col, row, tx_size,
+                                          mi->segment_id);
+  uint8_t *dst = &pd->dst.buf[4 * row * pd->dst.stride + 4 * col];
+
+  if (eob > 0) {
+    inverse_transform_block_inter(xd, plane, tx_size, dst, pd->dst.stride, eob);
+  }
+#if CONFIG_MISMATCH_DEBUG
+  {
+    int pixel_c, pixel_r;
+    int blk_w = 1 << (tx_size + TX_UNIT_SIZE_LOG2);
+    int blk_h = 1 << (tx_size + TX_UNIT_SIZE_LOG2);
+    mi_to_pixel_loc(&pixel_c, &pixel_r, mi_col, mi_row, col, row,
+                    pd->subsampling_x, pd->subsampling_y);
+    mismatch_check_block_tx(dst, pd->dst.stride, plane, pixel_c, pixel_r, blk_w,
+                            blk_h, xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH);
+  }
+#else
+  (void)mi_row;
+  (void)mi_col;
+#endif
+  return eob;
+}
+
+static int parse_inter_block_row_mt(TileWorkerData *twd, MODE_INFO *const mi,
+                                    int plane, int row, int col,
+                                    TX_SIZE tx_size) {
   MACROBLOCKD *const xd = &twd->xd;
   struct macroblockd_plane *const pd = &xd->plane[plane];
   const scan_order *sc = &vp9_default_scan_orders[tx_size];
   const int eob = vp9_decode_block_tokens(twd, plane, sc, col, row, tx_size,
                                           mi->segment_id);
 
+  *pd->eob = eob;
+  pd->dqcoeff += (16 << (tx_size << 1));
+  pd->eob++;
+
+  return eob;
+}
+
+static int reconstruct_inter_block_row_mt(TileWorkerData *twd,
+                                          MODE_INFO *const mi, int plane,
+                                          int row, int col, TX_SIZE tx_size) {
+  MACROBLOCKD *const xd = &twd->xd;
+  struct macroblockd_plane *const pd = &xd->plane[plane];
+  const int eob = *pd->eob;
+
+  (void)mi;
   if (eob > 0) {
     inverse_transform_block_inter(
         xd, plane, tx_size, &pd->dst.buf[4 * row * pd->dst.stride + 4 * col],
         pd->dst.stride, eob);
   }
+  pd->dqcoeff += (16 << (tx_size << 1));
+  pd->eob++;
+
   return eob;
 }
 
@@ -444,16 +529,15 @@
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
 #if CONFIG_VP9_HIGHBITDEPTH
-static void extend_and_predict(const uint8_t *buf_ptr1, int pre_buf_stride,
-                               int x0, int y0, int b_w, int b_h,
-                               int frame_width, int frame_height,
+static void extend_and_predict(TileWorkerData *twd, const uint8_t *buf_ptr1,
+                               int pre_buf_stride, int x0, int y0, int b_w,
+                               int b_h, int frame_width, int frame_height,
                                int border_offset, uint8_t *const dst,
                                int dst_buf_stride, int subpel_x, int subpel_y,
                                const InterpKernel *kernel,
                                const struct scale_factors *sf, MACROBLOCKD *xd,
                                int w, int h, int ref, int xs, int ys) {
-  DECLARE_ALIGNED(16, uint16_t, mc_buf_high[80 * 2 * 80 * 2]);
-
+  uint16_t *mc_buf_high = twd->extend_and_predict_buf;
   if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
     high_build_mc_border(buf_ptr1, pre_buf_stride, mc_buf_high, b_w, x0, y0,
                          b_w, b_h, frame_width, frame_height);
@@ -469,15 +553,15 @@
   }
 }
 #else
-static void extend_and_predict(const uint8_t *buf_ptr1, int pre_buf_stride,
-                               int x0, int y0, int b_w, int b_h,
-                               int frame_width, int frame_height,
+static void extend_and_predict(TileWorkerData *twd, const uint8_t *buf_ptr1,
+                               int pre_buf_stride, int x0, int y0, int b_w,
+                               int b_h, int frame_width, int frame_height,
                                int border_offset, uint8_t *const dst,
                                int dst_buf_stride, int subpel_x, int subpel_y,
                                const InterpKernel *kernel,
                                const struct scale_factors *sf, int w, int h,
                                int ref, int xs, int ys) {
-  DECLARE_ALIGNED(16, uint8_t, mc_buf[80 * 2 * 80 * 2]);
+  uint8_t *mc_buf = (uint8_t *)twd->extend_and_predict_buf;
   const uint8_t *buf_ptr;
 
   build_mc_border(buf_ptr1, pre_buf_stride, mc_buf, b_w, x0, y0, b_w, b_h,
@@ -490,8 +574,8 @@
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
 static void dec_build_inter_predictors(
-    MACROBLOCKD *xd, int plane, int bw, int bh, int x, int y, int w, int h,
-    int mi_x, int mi_y, const InterpKernel *kernel,
+    TileWorkerData *twd, MACROBLOCKD *xd, int plane, int bw, int bh, int x,
+    int y, int w, int h, int mi_x, int mi_y, const InterpKernel *kernel,
     const struct scale_factors *sf, struct buf_2d *pre_buf,
     struct buf_2d *dst_buf, const MV *mv, RefCntBuffer *ref_frame_buf,
     int is_scaled, int ref) {
@@ -602,9 +686,9 @@
       const int b_h = y1 - y0 + 1;
       const int border_offset = y_pad * 3 * b_w + x_pad * 3;
 
-      extend_and_predict(buf_ptr1, buf_stride, x0, y0, b_w, b_h, frame_width,
-                         frame_height, border_offset, dst, dst_buf->stride,
-                         subpel_x, subpel_y, kernel, sf,
+      extend_and_predict(twd, buf_ptr1, buf_stride, x0, y0, b_w, b_h,
+                         frame_width, frame_height, border_offset, dst,
+                         dst_buf->stride, subpel_x, subpel_y, kernel, sf,
 #if CONFIG_VP9_HIGHBITDEPTH
                          xd,
 #endif
@@ -627,7 +711,8 @@
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 }
 
-static void dec_build_inter_predictors_sb(VP9Decoder *const pbi,
+static void dec_build_inter_predictors_sb(TileWorkerData *twd,
+                                          VP9Decoder *const pbi,
                                           MACROBLOCKD *xd, int mi_row,
                                           int mi_col) {
   int plane;
@@ -670,10 +755,10 @@
         for (y = 0; y < num_4x4_h; ++y) {
           for (x = 0; x < num_4x4_w; ++x) {
             const MV mv = average_split_mvs(pd, mi, ref, i++);
-            dec_build_inter_predictors(xd, plane, n4w_x4, n4h_x4, 4 * x, 4 * y,
-                                       4, 4, mi_x, mi_y, kernel, sf, pre_buf,
-                                       dst_buf, &mv, ref_frame_buf, is_scaled,
-                                       ref);
+            dec_build_inter_predictors(twd, xd, plane, n4w_x4, n4h_x4, 4 * x,
+                                       4 * y, 4, 4, mi_x, mi_y, kernel, sf,
+                                       pre_buf, dst_buf, &mv, ref_frame_buf,
+                                       is_scaled, ref);
           }
         }
       }
@@ -687,7 +772,7 @@
         const int n4w_x4 = 4 * num_4x4_w;
         const int n4h_x4 = 4 * num_4x4_h;
         struct buf_2d *const pre_buf = &pd->pre[ref];
-        dec_build_inter_predictors(xd, plane, n4w_x4, n4h_x4, 0, 0, n4w_x4,
+        dec_build_inter_predictors(twd, xd, plane, n4w_x4, n4h_x4, 0, 0, n4w_x4,
                                    n4h_x4, mi_x, mi_y, kernel, sf, pre_buf,
                                    dst_buf, &mv, ref_frame_buf, is_scaled, ref);
       }
@@ -715,6 +800,25 @@
   }
 }
 
+static MODE_INFO *set_offsets_recon(VP9_COMMON *const cm, MACROBLOCKD *const xd,
+                                    int mi_row, int mi_col, int bw, int bh,
+                                    int bwl, int bhl) {
+  const int offset = mi_row * cm->mi_stride + mi_col;
+  const TileInfo *const tile = &xd->tile;
+  xd->mi = cm->mi_grid_visible + offset;
+
+  set_plane_n4(xd, bw, bh, bwl, bhl);
+
+  set_skip_context(xd, mi_row, mi_col);
+
+  // Distance of the MB to the various image edges. These are specified in
+  // 1/8th pel units as they are always compared to values in 1/8th pel units.
+  set_mi_row_col(xd, tile, mi_row, bh, mi_col, bw, cm->mi_rows, cm->mi_cols);
+
+  vp9_setup_dst_planes(xd->plane, get_frame_new_buffer(cm), mi_row, mi_col);
+  return xd->mi[0];
+}
+
 static MODE_INFO *set_offsets(VP9_COMMON *const cm, MACROBLOCKD *const xd,
                               BLOCK_SIZE bsize, int mi_row, int mi_col, int bw,
                               int bh, int x_mis, int y_mis, int bwl, int bhl) {
@@ -744,6 +848,66 @@
   return xd->mi[0];
 }
 
+static INLINE int predict_recon_inter(MACROBLOCKD *xd, MODE_INFO *mi,
+                                      TileWorkerData *twd,
+                                      predict_recon_func func) {
+  int eobtotal = 0;
+  int plane;
+  for (plane = 0; plane < MAX_MB_PLANE; ++plane) {
+    const struct macroblockd_plane *const pd = &xd->plane[plane];
+    const TX_SIZE tx_size = plane ? get_uv_tx_size(mi, pd) : mi->tx_size;
+    const int num_4x4_w = pd->n4_w;
+    const int num_4x4_h = pd->n4_h;
+    const int step = (1 << tx_size);
+    int row, col;
+    const int max_blocks_wide =
+        num_4x4_w + (xd->mb_to_right_edge >= 0
+                         ? 0
+                         : xd->mb_to_right_edge >> (5 + pd->subsampling_x));
+    const int max_blocks_high =
+        num_4x4_h + (xd->mb_to_bottom_edge >= 0
+                         ? 0
+                         : xd->mb_to_bottom_edge >> (5 + pd->subsampling_y));
+
+    xd->max_blocks_wide = xd->mb_to_right_edge >= 0 ? 0 : max_blocks_wide;
+    xd->max_blocks_high = xd->mb_to_bottom_edge >= 0 ? 0 : max_blocks_high;
+
+    for (row = 0; row < max_blocks_high; row += step)
+      for (col = 0; col < max_blocks_wide; col += step)
+        eobtotal += func(twd, mi, plane, row, col, tx_size);
+  }
+  return eobtotal;
+}
+
+static INLINE void predict_recon_intra(MACROBLOCKD *xd, MODE_INFO *mi,
+                                       TileWorkerData *twd,
+                                       intra_recon_func func) {
+  int plane;
+  for (plane = 0; plane < MAX_MB_PLANE; ++plane) {
+    const struct macroblockd_plane *const pd = &xd->plane[plane];
+    const TX_SIZE tx_size = plane ? get_uv_tx_size(mi, pd) : mi->tx_size;
+    const int num_4x4_w = pd->n4_w;
+    const int num_4x4_h = pd->n4_h;
+    const int step = (1 << tx_size);
+    int row, col;
+    const int max_blocks_wide =
+        num_4x4_w + (xd->mb_to_right_edge >= 0
+                         ? 0
+                         : xd->mb_to_right_edge >> (5 + pd->subsampling_x));
+    const int max_blocks_high =
+        num_4x4_h + (xd->mb_to_bottom_edge >= 0
+                         ? 0
+                         : xd->mb_to_bottom_edge >> (5 + pd->subsampling_y));
+
+    xd->max_blocks_wide = xd->mb_to_right_edge >= 0 ? 0 : max_blocks_wide;
+    xd->max_blocks_high = xd->mb_to_bottom_edge >= 0 ? 0 : max_blocks_high;
+
+    for (row = 0; row < max_blocks_high; row += step)
+      for (col = 0; col < max_blocks_wide; col += step)
+        func(twd, mi, plane, row, col, tx_size);
+  }
+}
+
 static void decode_block(TileWorkerData *twd, VP9Decoder *const pbi, int mi_row,
                          int mi_col, BLOCK_SIZE bsize, int bwl, int bhl) {
   VP9_COMMON *const cm = &pbi->common;
@@ -800,7 +964,25 @@
     }
   } else {
     // Prediction
-    dec_build_inter_predictors_sb(pbi, xd, mi_row, mi_col);
+    dec_build_inter_predictors_sb(twd, pbi, xd, mi_row, mi_col);
+#if CONFIG_MISMATCH_DEBUG
+    {
+      int plane;
+      for (plane = 0; plane < MAX_MB_PLANE; ++plane) {
+        const struct macroblockd_plane *pd = &xd->plane[plane];
+        int pixel_c, pixel_r;
+        const BLOCK_SIZE plane_bsize =
+            get_plane_block_size(VPXMAX(bsize, BLOCK_8X8), &xd->plane[plane]);
+        const int bw = get_block_width(plane_bsize);
+        const int bh = get_block_height(plane_bsize);
+        mi_to_pixel_loc(&pixel_c, &pixel_r, mi_col, mi_row, 0, 0,
+                        pd->subsampling_x, pd->subsampling_y);
+        mismatch_check_block_pre(pd->dst.buf, pd->dst.stride, plane, pixel_c,
+                                 pixel_r, bw, bh,
+                                 xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH);
+      }
+    }
+#endif
 
     // Reconstruction
     if (!mi->skip) {
@@ -829,8 +1011,8 @@
 
         for (row = 0; row < max_blocks_high; row += step)
           for (col = 0; col < max_blocks_wide; col += step)
-            eobtotal +=
-                reconstruct_inter_block(twd, mi, plane, row, col, tx_size);
+            eobtotal += reconstruct_inter_block(twd, mi, plane, row, col,
+                                                tx_size, mi_row, mi_col);
       }
 
       if (!less8x8 && eobtotal == 0) mi->skip = 1;  // skip loopfilter
@@ -844,6 +1026,98 @@
   }
 }
 
+static void recon_block(TileWorkerData *twd, VP9Decoder *const pbi, int mi_row,
+                        int mi_col, BLOCK_SIZE bsize, int bwl, int bhl) {
+  VP9_COMMON *const cm = &pbi->common;
+  const int bw = 1 << (bwl - 1);
+  const int bh = 1 << (bhl - 1);
+  MACROBLOCKD *const xd = &twd->xd;
+
+  MODE_INFO *mi = set_offsets_recon(cm, xd, mi_row, mi_col, bw, bh, bwl, bhl);
+
+  if (bsize >= BLOCK_8X8 && (cm->subsampling_x || cm->subsampling_y)) {
+    const BLOCK_SIZE uv_subsize =
+        ss_size_lookup[bsize][cm->subsampling_x][cm->subsampling_y];
+    if (uv_subsize == BLOCK_INVALID)
+      vpx_internal_error(xd->error_info, VPX_CODEC_CORRUPT_FRAME,
+                         "Invalid block size.");
+  }
+
+  if (!is_inter_block(mi)) {
+    predict_recon_intra(xd, mi, twd,
+                        predict_and_reconstruct_intra_block_row_mt);
+  } else {
+    // Prediction
+    dec_build_inter_predictors_sb(twd, pbi, xd, mi_row, mi_col);
+
+    // Reconstruction
+    if (!mi->skip) {
+      predict_recon_inter(xd, mi, twd, reconstruct_inter_block_row_mt);
+    }
+  }
+
+  vp9_build_mask(cm, mi, mi_row, mi_col, bw, bh);
+}
+
+static void parse_block(TileWorkerData *twd, VP9Decoder *const pbi, int mi_row,
+                        int mi_col, BLOCK_SIZE bsize, int bwl, int bhl) {
+  VP9_COMMON *const cm = &pbi->common;
+  const int bw = 1 << (bwl - 1);
+  const int bh = 1 << (bhl - 1);
+  const int x_mis = VPXMIN(bw, cm->mi_cols - mi_col);
+  const int y_mis = VPXMIN(bh, cm->mi_rows - mi_row);
+  vpx_reader *r = &twd->bit_reader;
+  MACROBLOCKD *const xd = &twd->xd;
+
+  MODE_INFO *mi = set_offsets(cm, xd, bsize, mi_row, mi_col, bw, bh, x_mis,
+                              y_mis, bwl, bhl);
+
+  if (bsize >= BLOCK_8X8 && (cm->subsampling_x || cm->subsampling_y)) {
+    const BLOCK_SIZE uv_subsize =
+        ss_size_lookup[bsize][cm->subsampling_x][cm->subsampling_y];
+    if (uv_subsize == BLOCK_INVALID)
+      vpx_internal_error(xd->error_info, VPX_CODEC_CORRUPT_FRAME,
+                         "Invalid block size.");
+  }
+
+  vp9_read_mode_info(twd, pbi, mi_row, mi_col, x_mis, y_mis);
+
+  if (mi->skip) {
+    dec_reset_skip_context(xd);
+  }
+
+  if (!is_inter_block(mi)) {
+    predict_recon_intra(xd, mi, twd, parse_intra_block_row_mt);
+  } else {
+    if (!mi->skip) {
+      tran_low_t *dqcoeff[MAX_MB_PLANE];
+      int *eob[MAX_MB_PLANE];
+      int plane;
+      int eobtotal;
+      // Based on eobtotal and bsize, mi->skip may be set to 1 here. In that
+      // case dqcoeff and eob need to be backed up and restored, as
+      // recon_block will not increment these pointers for skip cases.
+      for (plane = 0; plane < MAX_MB_PLANE; ++plane) {
+        const struct macroblockd_plane *const pd = &xd->plane[plane];
+        dqcoeff[plane] = pd->dqcoeff;
+        eob[plane] = pd->eob;
+      }
+      eobtotal = predict_recon_inter(xd, mi, twd, parse_inter_block_row_mt);
+
+      if (bsize >= BLOCK_8X8 && eobtotal == 0) {
+        mi->skip = 1;  // skip loopfilter
+        for (plane = 0; plane < MAX_MB_PLANE; ++plane) {
+          struct macroblockd_plane *pd = &xd->plane[plane];
+          pd->dqcoeff = dqcoeff[plane];
+          pd->eob = eob[plane];
+        }
+      }
+    }
+  }
+
+  xd->corrupted |= vpx_reader_has_error(r);
+}
+
 static INLINE int dec_partition_plane_context(TileWorkerData *twd, int mi_row,
                                               int mi_col, int bsl) {
   const PARTITION_CONTEXT *above_ctx = twd->xd.above_seg_context + mi_col;
@@ -950,14 +1224,82 @@
     dec_update_partition_context(twd, mi_row, mi_col, subsize, num_8x8_wh);
 }
 
+static void process_partition(TileWorkerData *twd, VP9Decoder *const pbi,
+                              int mi_row, int mi_col, BLOCK_SIZE bsize,
+                              int n4x4_l2, int parse_recon_flag,
+                              process_block_fn_t process_block) {
+  VP9_COMMON *const cm = &pbi->common;
+  const int n8x8_l2 = n4x4_l2 - 1;
+  const int num_8x8_wh = 1 << n8x8_l2;
+  const int hbs = num_8x8_wh >> 1;
+  PARTITION_TYPE partition;
+  BLOCK_SIZE subsize;
+  const int has_rows = (mi_row + hbs) < cm->mi_rows;
+  const int has_cols = (mi_col + hbs) < cm->mi_cols;
+  MACROBLOCKD *const xd = &twd->xd;
+
+  if (mi_row >= cm->mi_rows || mi_col >= cm->mi_cols) return;
+
+  if (parse_recon_flag & PARSE) {
+    *xd->partition =
+        read_partition(twd, mi_row, mi_col, has_rows, has_cols, n8x8_l2);
+  }
+
+  partition = *xd->partition;
+  xd->partition++;
+
+  subsize = get_subsize(bsize, partition);
+  if (!hbs) {
+    // calculate bmode block dimensions (log 2)
+    xd->bmode_blocks_wl = 1 >> !!(partition & PARTITION_VERT);
+    xd->bmode_blocks_hl = 1 >> !!(partition & PARTITION_HORZ);
+    process_block(twd, pbi, mi_row, mi_col, subsize, 1, 1);
+  } else {
+    switch (partition) {
+      case PARTITION_NONE:
+        process_block(twd, pbi, mi_row, mi_col, subsize, n4x4_l2, n4x4_l2);
+        break;
+      case PARTITION_HORZ:
+        process_block(twd, pbi, mi_row, mi_col, subsize, n4x4_l2, n8x8_l2);
+        if (has_rows)
+          process_block(twd, pbi, mi_row + hbs, mi_col, subsize, n4x4_l2,
+                        n8x8_l2);
+        break;
+      case PARTITION_VERT:
+        process_block(twd, pbi, mi_row, mi_col, subsize, n8x8_l2, n4x4_l2);
+        if (has_cols)
+          process_block(twd, pbi, mi_row, mi_col + hbs, subsize, n8x8_l2,
+                        n4x4_l2);
+        break;
+      case PARTITION_SPLIT:
+        process_partition(twd, pbi, mi_row, mi_col, subsize, n8x8_l2,
+                          parse_recon_flag, process_block);
+        process_partition(twd, pbi, mi_row, mi_col + hbs, subsize, n8x8_l2,
+                          parse_recon_flag, process_block);
+        process_partition(twd, pbi, mi_row + hbs, mi_col, subsize, n8x8_l2,
+                          parse_recon_flag, process_block);
+        process_partition(twd, pbi, mi_row + hbs, mi_col + hbs, subsize,
+                          n8x8_l2, parse_recon_flag, process_block);
+        break;
+      default: assert(0 && "Invalid partition type");
+    }
+  }
+
+  if (parse_recon_flag & PARSE) {
+    // update partition context
+    if ((bsize == BLOCK_8X8 || partition != PARTITION_SPLIT) &&
+        bsize >= BLOCK_8X8)
+      dec_update_partition_context(twd, mi_row, mi_col, subsize, num_8x8_wh);
+  }
+}
+
 static void setup_token_decoder(const uint8_t *data, const uint8_t *data_end,
                                 size_t read_size,
                                 struct vpx_internal_error_info *error_info,
                                 vpx_reader *r, vpx_decrypt_cb decrypt_cb,
                                 void *decrypt_state) {
-  // Validate the calculated partition length. If the buffer
-  // described by the partition can't be fully read, then restrict
-  // it to the portion that can be (for EC mode) or throw an error.
+  // Validate the calculated partition length. If the buffer described by the
+  // partition can't be fully read then throw an error.
   if (!read_is_valid(data, read_size, data_end))
     vpx_internal_error(error_info, VPX_CODEC_CORRUPT_FRAME,
                        "Truncated packet or corrupt tile length");
@@ -1148,9 +1490,15 @@
     // Allocations in vp9_alloc_context_buffers() depend on individual
     // dimensions as well as the overall size.
     if (new_mi_cols > cm->mi_cols || new_mi_rows > cm->mi_rows) {
-      if (vp9_alloc_context_buffers(cm, width, height))
+      if (vp9_alloc_context_buffers(cm, width, height)) {
+        // The cm->mi_* values have been cleared and any existing context
+        // buffers have been freed. Clear cm->width and cm->height to be
+        // consistent and to force a realloc next time.
+        cm->width = 0;
+        cm->height = 0;
         vpx_internal_error(&cm->error, VPX_CODEC_MEM_ERROR,
                            "Failed to allocate context buffers");
+      }
     } else {
       vp9_set_mb_mi(cm, width, height);
     }
@@ -1348,6 +1696,321 @@
   }
 }
 
+static void map_write(RowMTWorkerData *const row_mt_worker_data, int map_idx,
+                      int sync_idx) {
+#if CONFIG_MULTITHREAD
+  pthread_mutex_lock(&row_mt_worker_data->recon_sync_mutex[sync_idx]);
+  row_mt_worker_data->recon_map[map_idx] = 1;
+  pthread_cond_signal(&row_mt_worker_data->recon_sync_cond[sync_idx]);
+  pthread_mutex_unlock(&row_mt_worker_data->recon_sync_mutex[sync_idx]);
+#else
+  (void)row_mt_worker_data;
+  (void)map_idx;
+  (void)sync_idx;
+#endif  // CONFIG_MULTITHREAD
+}
+
+static void map_read(RowMTWorkerData *const row_mt_worker_data, int map_idx,
+                     int sync_idx) {
+#if CONFIG_MULTITHREAD
+  volatile int8_t *map = row_mt_worker_data->recon_map + map_idx;
+  pthread_mutex_t *const mutex =
+      &row_mt_worker_data->recon_sync_mutex[sync_idx];
+  pthread_mutex_lock(mutex);
+  while (!(*map)) {
+    pthread_cond_wait(&row_mt_worker_data->recon_sync_cond[sync_idx], mutex);
+  }
+  pthread_mutex_unlock(mutex);
+#else
+  (void)row_mt_worker_data;
+  (void)map_idx;
+  (void)sync_idx;
+#endif  // CONFIG_MULTITHREAD
+}
+
+static int lpf_map_write_check(VP9LfSync *lf_sync, int row, int num_tile_cols) {
+  int return_val = 0;
+#if CONFIG_MULTITHREAD
+  int corrupted;
+  pthread_mutex_lock(lf_sync->lf_mutex);
+  corrupted = lf_sync->corrupted;
+  pthread_mutex_unlock(lf_sync->lf_mutex);
+  if (!corrupted) {
+    pthread_mutex_lock(&lf_sync->recon_done_mutex[row]);
+    lf_sync->num_tiles_done[row] += 1;
+    if (num_tile_cols == lf_sync->num_tiles_done[row]) return_val = 1;
+    pthread_mutex_unlock(&lf_sync->recon_done_mutex[row]);
+  }
+#else
+  (void)lf_sync;
+  (void)row;
+  (void)num_tile_cols;
+#endif
+  return return_val;
+}
+
+static void vp9_tile_done(VP9Decoder *pbi) {
+#if CONFIG_MULTITHREAD
+  int terminate;
+  RowMTWorkerData *const row_mt_worker_data = pbi->row_mt_worker_data;
+  const int all_parse_done = 1 << pbi->common.log2_tile_cols;
+  pthread_mutex_lock(&row_mt_worker_data->recon_done_mutex);
+  row_mt_worker_data->num_tiles_done++;
+  terminate = all_parse_done == row_mt_worker_data->num_tiles_done;
+  pthread_mutex_unlock(&row_mt_worker_data->recon_done_mutex);
+  if (terminate) {
+    vp9_jobq_terminate(&row_mt_worker_data->jobq);
+  }
+#else
+  (void)pbi;
+#endif
+}
+
+static void vp9_jobq_alloc(VP9Decoder *pbi) {
+  VP9_COMMON *const cm = &pbi->common;
+  RowMTWorkerData *const row_mt_worker_data = pbi->row_mt_worker_data;
+  const int aligned_rows = mi_cols_aligned_to_sb(cm->mi_rows);
+  const int sb_rows = aligned_rows >> MI_BLOCK_SIZE_LOG2;
+  const int tile_cols = 1 << cm->log2_tile_cols;
+  const size_t jobq_size = (tile_cols * sb_rows * 2 + sb_rows) * sizeof(Job);
+
+  if (jobq_size > row_mt_worker_data->jobq_size) {
+    vpx_free(row_mt_worker_data->jobq_buf);
+    CHECK_MEM_ERROR(cm, row_mt_worker_data->jobq_buf, vpx_calloc(1, jobq_size));
+    vp9_jobq_init(&row_mt_worker_data->jobq, row_mt_worker_data->jobq_buf,
+                  jobq_size);
+    row_mt_worker_data->jobq_size = jobq_size;
+  }
+}
+
+static void recon_tile_row(TileWorkerData *tile_data, VP9Decoder *pbi,
+                           int mi_row, int is_last_row, VP9LfSync *lf_sync,
+                           int cur_tile_col) {
+  VP9_COMMON *const cm = &pbi->common;
+  RowMTWorkerData *const row_mt_worker_data = pbi->row_mt_worker_data;
+  const int tile_cols = 1 << cm->log2_tile_cols;
+  const int aligned_cols = mi_cols_aligned_to_sb(cm->mi_cols);
+  const int sb_cols = aligned_cols >> MI_BLOCK_SIZE_LOG2;
+  const int cur_sb_row = mi_row >> MI_BLOCK_SIZE_LOG2;
+  int mi_col_start = tile_data->xd.tile.mi_col_start;
+  int mi_col_end = tile_data->xd.tile.mi_col_end;
+  int mi_col;
+
+  vp9_zero(tile_data->xd.left_context);
+  vp9_zero(tile_data->xd.left_seg_context);
+  for (mi_col = mi_col_start; mi_col < mi_col_end; mi_col += MI_BLOCK_SIZE) {
+    const int c = mi_col >> MI_BLOCK_SIZE_LOG2;
+    int plane;
+    const int sb_num = (cur_sb_row * (aligned_cols >> MI_BLOCK_SIZE_LOG2) + c);
+
+    // Top Dependency
+    if (cur_sb_row) {
+      map_read(row_mt_worker_data, ((cur_sb_row - 1) * sb_cols) + c,
+               ((cur_sb_row - 1) * tile_cols) + cur_tile_col);
+    }
+
+    for (plane = 0; plane < MAX_MB_PLANE; ++plane) {
+      tile_data->xd.plane[plane].eob =
+          row_mt_worker_data->eob[plane] + (sb_num << EOBS_PER_SB_LOG2);
+      tile_data->xd.plane[plane].dqcoeff =
+          row_mt_worker_data->dqcoeff[plane] + (sb_num << DQCOEFFS_PER_SB_LOG2);
+    }
+    tile_data->xd.partition =
+        row_mt_worker_data->partition + (sb_num * PARTITIONS_PER_SB);
+    process_partition(tile_data, pbi, mi_row, mi_col, BLOCK_64X64, 4, RECON,
+                      recon_block);
+    if (cm->lf.filter_level && !cm->skip_loop_filter) {
+      // Queue LPF_JOB
+      int is_lpf_job_ready = 0;
+
+      if (mi_col + MI_BLOCK_SIZE >= mi_col_end) {
+        // Checks if this row has been decoded in all tiles
+        is_lpf_job_ready = lpf_map_write_check(lf_sync, cur_sb_row, tile_cols);
+
+        if (is_lpf_job_ready) {
+          Job lpf_job;
+          lpf_job.job_type = LPF_JOB;
+          if (cur_sb_row > 0) {
+            lpf_job.row_num = mi_row - MI_BLOCK_SIZE;
+            vp9_jobq_queue(&row_mt_worker_data->jobq, &lpf_job,
+                           sizeof(lpf_job));
+          }
+          if (is_last_row) {
+            lpf_job.row_num = mi_row;
+            vp9_jobq_queue(&row_mt_worker_data->jobq, &lpf_job,
+                           sizeof(lpf_job));
+          }
+        }
+      }
+    }
+    map_write(row_mt_worker_data, (cur_sb_row * sb_cols) + c,
+              (cur_sb_row * tile_cols) + cur_tile_col);
+  }
+}
+
+static void parse_tile_row(TileWorkerData *tile_data, VP9Decoder *pbi,
+                           int mi_row, int cur_tile_col, uint8_t **data_end) {
+  int mi_col;
+  VP9_COMMON *const cm = &pbi->common;
+  RowMTWorkerData *const row_mt_worker_data = pbi->row_mt_worker_data;
+  TileInfo *tile = &tile_data->xd.tile;
+  TileBuffer *const buf = &pbi->tile_buffers[cur_tile_col];
+  const int aligned_cols = mi_cols_aligned_to_sb(cm->mi_cols);
+
+  vp9_zero(tile_data->dqcoeff);
+  vp9_tile_init(tile, cm, 0, cur_tile_col);
+
+  /* Update reader only at the beginning of each row in a tile */
+  if (mi_row == 0) {
+    setup_token_decoder(buf->data, *data_end, buf->size, &tile_data->error_info,
+                        &tile_data->bit_reader, pbi->decrypt_cb,
+                        pbi->decrypt_state);
+  }
+  vp9_init_macroblockd(cm, &tile_data->xd, tile_data->dqcoeff);
+  tile_data->xd.error_info = &tile_data->error_info;
+
+  vp9_zero(tile_data->xd.left_context);
+  vp9_zero(tile_data->xd.left_seg_context);
+  for (mi_col = tile->mi_col_start; mi_col < tile->mi_col_end;
+       mi_col += MI_BLOCK_SIZE) {
+    const int r = mi_row >> MI_BLOCK_SIZE_LOG2;
+    const int c = mi_col >> MI_BLOCK_SIZE_LOG2;
+    int plane;
+    const int sb_num = (r * (aligned_cols >> MI_BLOCK_SIZE_LOG2) + c);
+    for (plane = 0; plane < MAX_MB_PLANE; ++plane) {
+      tile_data->xd.plane[plane].eob =
+          row_mt_worker_data->eob[plane] + (sb_num << EOBS_PER_SB_LOG2);
+      tile_data->xd.plane[plane].dqcoeff =
+          row_mt_worker_data->dqcoeff[plane] + (sb_num << DQCOEFFS_PER_SB_LOG2);
+    }
+    tile_data->xd.partition =
+        row_mt_worker_data->partition + sb_num * PARTITIONS_PER_SB;
+    process_partition(tile_data, pbi, mi_row, mi_col, BLOCK_64X64, 4, PARSE,
+                      parse_block);
+  }
+}
+
+static int row_decode_worker_hook(void *arg1, void *arg2) {
+  ThreadData *const thread_data = (ThreadData *)arg1;
+  uint8_t **data_end = (uint8_t **)arg2;
+  VP9Decoder *const pbi = thread_data->pbi;
+  VP9_COMMON *const cm = &pbi->common;
+  RowMTWorkerData *const row_mt_worker_data = pbi->row_mt_worker_data;
+  const int aligned_cols = mi_cols_aligned_to_sb(cm->mi_cols);
+  const int aligned_rows = mi_cols_aligned_to_sb(cm->mi_rows);
+  const int sb_rows = aligned_rows >> MI_BLOCK_SIZE_LOG2;
+  const int tile_cols = 1 << cm->log2_tile_cols;
+  Job job;
+  LFWorkerData *lf_data = thread_data->lf_data;
+  VP9LfSync *lf_sync = thread_data->lf_sync;
+  volatile int corrupted = 0;
+  TileWorkerData *volatile tile_data_recon = NULL;
+
+  while (!vp9_jobq_dequeue(&row_mt_worker_data->jobq, &job, sizeof(job), 1)) {
+    int mi_col;
+    const int mi_row = job.row_num;
+
+    if (job.job_type == LPF_JOB) {
+      lf_data->start = mi_row;
+      lf_data->stop = lf_data->start + MI_BLOCK_SIZE;
+
+      if (cm->lf.filter_level && !cm->skip_loop_filter &&
+          mi_row < cm->mi_rows) {
+        vp9_loopfilter_job(lf_data, lf_sync);
+      }
+    } else if (job.job_type == RECON_JOB) {
+      const int cur_sb_row = mi_row >> MI_BLOCK_SIZE_LOG2;
+      const int is_last_row = sb_rows - 1 == cur_sb_row;
+      int mi_col_start, mi_col_end;
+      if (!tile_data_recon)
+        CHECK_MEM_ERROR(cm, tile_data_recon,
+                        vpx_memalign(32, sizeof(TileWorkerData)));
+
+      tile_data_recon->xd = pbi->mb;
+      vp9_tile_init(&tile_data_recon->xd.tile, cm, 0, job.tile_col);
+      vp9_init_macroblockd(cm, &tile_data_recon->xd, tile_data_recon->dqcoeff);
+      mi_col_start = tile_data_recon->xd.tile.mi_col_start;
+      mi_col_end = tile_data_recon->xd.tile.mi_col_end;
+
+      if (setjmp(tile_data_recon->error_info.jmp)) {
+        const int sb_cols = aligned_cols >> MI_BLOCK_SIZE_LOG2;
+        tile_data_recon->error_info.setjmp = 0;
+        corrupted = 1;
+        for (mi_col = mi_col_start; mi_col < mi_col_end;
+             mi_col += MI_BLOCK_SIZE) {
+          const int c = mi_col >> MI_BLOCK_SIZE_LOG2;
+          map_write(row_mt_worker_data, (cur_sb_row * sb_cols) + c,
+                    (cur_sb_row * tile_cols) + job.tile_col);
+        }
+        if (is_last_row) {
+          vp9_tile_done(pbi);
+        }
+        continue;
+      }
+
+      tile_data_recon->error_info.setjmp = 1;
+      tile_data_recon->xd.error_info = &tile_data_recon->error_info;
+
+      recon_tile_row(tile_data_recon, pbi, mi_row, is_last_row, lf_sync,
+                     job.tile_col);
+
+      if (corrupted)
+        vpx_internal_error(&tile_data_recon->error_info,
+                           VPX_CODEC_CORRUPT_FRAME,
+                           "Failed to decode tile data");
+
+      if (is_last_row) {
+        vp9_tile_done(pbi);
+      }
+    } else if (job.job_type == PARSE_JOB) {
+      TileWorkerData *const tile_data = &pbi->tile_worker_data[job.tile_col];
+
+      if (setjmp(tile_data->error_info.jmp)) {
+        tile_data->error_info.setjmp = 0;
+        corrupted = 1;
+        vp9_tile_done(pbi);
+        continue;
+      }
+
+      tile_data->xd = pbi->mb;
+      tile_data->xd.counts =
+          cm->frame_parallel_decoding_mode ? 0 : &tile_data->counts;
+
+      tile_data->error_info.setjmp = 1;
+
+      parse_tile_row(tile_data, pbi, mi_row, job.tile_col, data_end);
+
+      corrupted |= tile_data->xd.corrupted;
+      if (corrupted)
+        vpx_internal_error(&tile_data->error_info, VPX_CODEC_CORRUPT_FRAME,
+                           "Failed to decode tile data");
+
+      /* Queue in the recon_job for this row */
+      {
+        Job recon_job;
+        recon_job.row_num = mi_row;
+        recon_job.tile_col = job.tile_col;
+        recon_job.job_type = RECON_JOB;
+        vp9_jobq_queue(&row_mt_worker_data->jobq, &recon_job,
+                       sizeof(recon_job));
+      }
+
+      /* Queue next parse job */
+      if (mi_row + MI_BLOCK_SIZE < cm->mi_rows) {
+        Job parse_job;
+        parse_job.row_num = mi_row + MI_BLOCK_SIZE;
+        parse_job.tile_col = job.tile_col;
+        parse_job.job_type = PARSE_JOB;
+        vp9_jobq_queue(&row_mt_worker_data->jobq, &parse_job,
+                       sizeof(parse_job));
+      }
+    }
+  }
+
+  vpx_free(tile_data_recon);
+  return !corrupted;
+}
+
 static const uint8_t *decode_tiles(VP9Decoder *pbi, const uint8_t *data,
                                    const uint8_t *data_end) {
   VP9_COMMON *const cm = &pbi->common;
@@ -1426,7 +2089,29 @@
         vp9_zero(tile_data->xd.left_seg_context);
         for (mi_col = tile.mi_col_start; mi_col < tile.mi_col_end;
              mi_col += MI_BLOCK_SIZE) {
-          decode_partition(tile_data, pbi, mi_row, mi_col, BLOCK_64X64, 4);
+          if (pbi->row_mt == 1) {
+            int plane;
+            RowMTWorkerData *const row_mt_worker_data = pbi->row_mt_worker_data;
+            for (plane = 0; plane < MAX_MB_PLANE; ++plane) {
+              tile_data->xd.plane[plane].eob = row_mt_worker_data->eob[plane];
+              tile_data->xd.plane[plane].dqcoeff =
+                  row_mt_worker_data->dqcoeff[plane];
+            }
+            tile_data->xd.partition = row_mt_worker_data->partition;
+            process_partition(tile_data, pbi, mi_row, mi_col, BLOCK_64X64, 4,
+                              PARSE, parse_block);
+
+            for (plane = 0; plane < MAX_MB_PLANE; ++plane) {
+              tile_data->xd.plane[plane].eob = row_mt_worker_data->eob[plane];
+              tile_data->xd.plane[plane].dqcoeff =
+                  row_mt_worker_data->dqcoeff[plane];
+            }
+            tile_data->xd.partition = row_mt_worker_data->partition;
+            process_partition(tile_data, pbi, mi_row, mi_col, BLOCK_64X64, 4,
+                              RECON, recon_block);
+          } else {
+            decode_partition(tile_data, pbi, mi_row, mi_col, BLOCK_64X64, 4);
+          }
         }
         pbi->mb.corrupted |= tile_data->xd.corrupted;
         if (pbi->mb.corrupted)
@@ -1471,6 +2156,25 @@
   return vpx_reader_find_end(&tile_data->bit_reader);
 }
 
+static void set_rows_after_error(VP9LfSync *lf_sync, int start_row, int mi_rows,
+                                 int num_tiles_left, int total_num_tiles) {
+  do {
+    int mi_row;
+    const int aligned_rows = mi_cols_aligned_to_sb(mi_rows);
+    const int sb_rows = (aligned_rows >> MI_BLOCK_SIZE_LOG2);
+    const int corrupted = 1;
+    for (mi_row = start_row; mi_row < mi_rows; mi_row += MI_BLOCK_SIZE) {
+      const int is_last_row = (sb_rows - 1 == mi_row >> MI_BLOCK_SIZE_LOG2);
+      vp9_set_row(lf_sync, total_num_tiles, mi_row >> MI_BLOCK_SIZE_LOG2,
+                  is_last_row, corrupted);
+    }
+    /* If there are multiple tiles, tiles after the first should start marking
+     * row progress from row 0.
+     */
+    start_row = 0;
+  } while (num_tiles_left--);
+}
+
 // On entry 'tile_data->data_end' points to the end of the input frame, on exit
 // it is updated to reflect the bitreader position of the final tile column if
 // present in the tile buffer group or NULL otherwise.
@@ -1481,6 +2185,12 @@
   TileInfo *volatile tile = &tile_data->xd.tile;
   const int final_col = (1 << pbi->common.log2_tile_cols) - 1;
   const uint8_t *volatile bit_reader_end = NULL;
+  VP9_COMMON *cm = &pbi->common;
+
+  LFWorkerData *lf_data = tile_data->lf_data;
+  VP9LfSync *lf_sync = tile_data->lf_sync;
+
+  volatile int mi_row = 0;
   volatile int n = tile_data->buf_start;
   tile_data->error_info.setjmp = 1;
 
@@ -1488,14 +2198,26 @@
     tile_data->error_info.setjmp = 0;
     tile_data->xd.corrupted = 1;
     tile_data->data_end = NULL;
+    if (pbi->lpf_mt_opt && cm->lf.filter_level && !cm->skip_loop_filter) {
+      const int num_tiles_left = tile_data->buf_end - n;
+      const int mi_row_start = mi_row;
+      set_rows_after_error(lf_sync, mi_row_start, cm->mi_rows, num_tiles_left,
+                           1 << cm->log2_tile_cols);
+    }
     return 0;
   }
 
   tile_data->xd.corrupted = 0;
 
   do {
-    int mi_row, mi_col;
+    int mi_col;
     const TileBuffer *const buf = pbi->tile_buffers + n;
+
+    /* Initializing mi_row to 0 is safe since we do not deal with streams that
+     * have more than one row of tiles. (So tile->mi_row_start will be 0.)
+     */
+    assert(cm->log2_tile_rows == 0);
+    mi_row = 0;
     vp9_zero(tile_data->dqcoeff);
     vp9_tile_init(tile, &pbi->common, 0, buf->col);
     setup_token_decoder(buf->data, tile_data->data_end, buf->size,
@@ -1513,6 +2235,14 @@
            mi_col += MI_BLOCK_SIZE) {
         decode_partition(tile_data, pbi, mi_row, mi_col, BLOCK_64X64, 4);
       }
+      if (pbi->lpf_mt_opt && cm->lf.filter_level && !cm->skip_loop_filter) {
+        const int aligned_rows = mi_cols_aligned_to_sb(cm->mi_rows);
+        const int sb_rows = (aligned_rows >> MI_BLOCK_SIZE_LOG2);
+        const int is_last_row = (sb_rows - 1 == mi_row >> MI_BLOCK_SIZE_LOG2);
+        vp9_set_row(lf_sync, 1 << cm->log2_tile_cols,
+                    mi_row >> MI_BLOCK_SIZE_LOG2, is_last_row,
+                    tile_data->xd.corrupted);
+      }
     }
 
     if (buf->col == final_col) {
@@ -1520,31 +2250,38 @@
     }
   } while (!tile_data->xd.corrupted && ++n <= tile_data->buf_end);
 
+  if (pbi->lpf_mt_opt && n < tile_data->buf_end && cm->lf.filter_level &&
+      !cm->skip_loop_filter) {
+    /* n was not incremented in the tile loop, so increment it before computing
+     * the number of tiles left.
+     */
+    ++n;
+    set_rows_after_error(lf_sync, 0, cm->mi_rows, tile_data->buf_end - n,
+                         1 << cm->log2_tile_cols);
+  }
+
+  if (pbi->lpf_mt_opt && !tile_data->xd.corrupted && cm->lf.filter_level &&
+      !cm->skip_loop_filter) {
+    vp9_loopfilter_rows(lf_data, lf_sync);
+  }
+
   tile_data->data_end = bit_reader_end;
   return !tile_data->xd.corrupted;
 }
 
 // sorts in descending order
 static int compare_tile_buffers(const void *a, const void *b) {
-  const TileBuffer *const buf1 = (const TileBuffer *)a;
-  const TileBuffer *const buf2 = (const TileBuffer *)b;
-  return (int)(buf2->size - buf1->size);
+  const TileBuffer *const buf_a = (const TileBuffer *)a;
+  const TileBuffer *const buf_b = (const TileBuffer *)b;
+  return (buf_a->size < buf_b->size) - (buf_a->size > buf_b->size);
 }
 
-static const uint8_t *decode_tiles_mt(VP9Decoder *pbi, const uint8_t *data,
-                                      const uint8_t *data_end) {
-  VP9_COMMON *const cm = &pbi->common;
-  const VPxWorkerInterface *const winterface = vpx_get_worker_interface();
-  const uint8_t *bit_reader_end = NULL;
-  const int aligned_mi_cols = mi_cols_aligned_to_sb(cm->mi_cols);
-  const int tile_cols = 1 << cm->log2_tile_cols;
-  const int tile_rows = 1 << cm->log2_tile_rows;
-  const int num_workers = VPXMIN(pbi->max_threads, tile_cols);
+static INLINE void init_mt(VP9Decoder *pbi) {
   int n;
-
-  assert(tile_cols <= (1 << 6));
-  assert(tile_rows == 1);
-  (void)tile_rows;
+  VP9_COMMON *const cm = &pbi->common;
+  VP9LfSync *lf_row_sync = &pbi->lf_row_sync;
+  const int aligned_mi_cols = mi_cols_aligned_to_sb(cm->mi_cols);
+  const VPxWorkerInterface *const winterface = vpx_get_worker_interface();
 
   if (pbi->num_tile_workers == 0) {
     const int num_threads = pbi->max_threads;
@@ -1562,12 +2299,173 @@
     }
   }
 
+  // Initialize LPF
+  if ((pbi->lpf_mt_opt || pbi->row_mt) && cm->lf.filter_level &&
+      !cm->skip_loop_filter) {
+    vp9_lpf_mt_init(lf_row_sync, cm, cm->lf.filter_level,
+                    pbi->num_tile_workers);
+  }
+
+  // Note: this memset assumes above_context[0], [1] and [2]
+  // are allocated as part of the same buffer.
+  memset(cm->above_context, 0,
+         sizeof(*cm->above_context) * MAX_MB_PLANE * 2 * aligned_mi_cols);
+
+  memset(cm->above_seg_context, 0,
+         sizeof(*cm->above_seg_context) * aligned_mi_cols);
+
+  vp9_reset_lfm(cm);
+}
+
+static const uint8_t *decode_tiles_row_wise_mt(VP9Decoder *pbi,
+                                               const uint8_t *data,
+                                               const uint8_t *data_end) {
+  VP9_COMMON *const cm = &pbi->common;
+  RowMTWorkerData *const row_mt_worker_data = pbi->row_mt_worker_data;
+  const VPxWorkerInterface *const winterface = vpx_get_worker_interface();
+  const int tile_cols = 1 << cm->log2_tile_cols;
+  const int tile_rows = 1 << cm->log2_tile_rows;
+  const int num_workers = pbi->max_threads;
+  int i, n;
+  int col;
+  int corrupted = 0;
+  const int sb_rows = mi_cols_aligned_to_sb(cm->mi_rows) >> MI_BLOCK_SIZE_LOG2;
+  const int sb_cols = mi_cols_aligned_to_sb(cm->mi_cols) >> MI_BLOCK_SIZE_LOG2;
+  VP9LfSync *lf_row_sync = &pbi->lf_row_sync;
+  YV12_BUFFER_CONFIG *const new_fb = get_frame_new_buffer(cm);
+
+  assert(tile_cols <= (1 << 6));
+  assert(tile_rows == 1);
+  (void)tile_rows;
+
+  memset(row_mt_worker_data->recon_map, 0,
+         sb_rows * sb_cols * sizeof(*row_mt_worker_data->recon_map));
+
+  init_mt(pbi);
+
+  // Reset tile decoding hook
+  for (n = 0; n < num_workers; ++n) {
+    VPxWorker *const worker = &pbi->tile_workers[n];
+    ThreadData *const thread_data = &pbi->row_mt_worker_data->thread_data[n];
+    winterface->sync(worker);
+
+    if (cm->lf.filter_level && !cm->skip_loop_filter) {
+      thread_data->lf_sync = lf_row_sync;
+      thread_data->lf_data = &thread_data->lf_sync->lfdata[n];
+      vp9_loop_filter_data_reset(thread_data->lf_data, new_fb, cm,
+                                 pbi->mb.plane);
+    }
+
+    thread_data->pbi = pbi;
+
+    worker->hook = row_decode_worker_hook;
+    worker->data1 = thread_data;
+    worker->data2 = (void *)&row_mt_worker_data->data_end;
+  }
+
+  for (col = 0; col < tile_cols; ++col) {
+    TileWorkerData *const tile_data = &pbi->tile_worker_data[col];
+    tile_data->xd = pbi->mb;
+    tile_data->xd.counts =
+        cm->frame_parallel_decoding_mode ? NULL : &tile_data->counts;
+  }
+
+  /* Reset the jobq to start of the jobq buffer */
+  vp9_jobq_reset(&row_mt_worker_data->jobq);
+  row_mt_worker_data->num_tiles_done = 0;
+  row_mt_worker_data->data_end = NULL;
+
+  // Load tile data into tile_buffers
+  get_tile_buffers(pbi, data, data_end, tile_cols, tile_rows,
+                   &pbi->tile_buffers);
+
+  // Initialize thread frame counts.
+  if (!cm->frame_parallel_decoding_mode) {
+    for (col = 0; col < tile_cols; ++col) {
+      TileWorkerData *const tile_data = &pbi->tile_worker_data[col];
+      vp9_zero(tile_data->counts);
+    }
+  }
+
+  // Queue parse jobs for the 0th row of every tile
+  for (col = 0; col < tile_cols; ++col) {
+    Job parse_job;
+    parse_job.row_num = 0;
+    parse_job.tile_col = col;
+    parse_job.job_type = PARSE_JOB;
+    vp9_jobq_queue(&row_mt_worker_data->jobq, &parse_job, sizeof(parse_job));
+  }
+
+  for (i = 0; i < num_workers; ++i) {
+    VPxWorker *const worker = &pbi->tile_workers[i];
+    worker->had_error = 0;
+    if (i == num_workers - 1) {
+      winterface->execute(worker);
+    } else {
+      winterface->launch(worker);
+    }
+  }
+
+  for (; n > 0; --n) {
+    VPxWorker *const worker = &pbi->tile_workers[n - 1];
+    // TODO(jzern): The tile may have specific error data associated with
+    // its vpx_internal_error_info which could be propagated to the main info
+    // in cm. Additionally once the threads have been synced and an error is
+    // detected, there's no point in continuing to decode tiles.
+    corrupted |= !winterface->sync(worker);
+  }
+
+  pbi->mb.corrupted = corrupted;
+
+  {
+    /* Set data end */
+    TileWorkerData *const tile_data = &pbi->tile_worker_data[tile_cols - 1];
+    row_mt_worker_data->data_end = vpx_reader_find_end(&tile_data->bit_reader);
+  }
+
+  // Accumulate thread frame counts.
+  if (!cm->frame_parallel_decoding_mode) {
+    for (i = 0; i < tile_cols; ++i) {
+      TileWorkerData *const tile_data = &pbi->tile_worker_data[i];
+      vp9_accumulate_frame_counts(&cm->counts, &tile_data->counts, 1);
+    }
+  }
+
+  return row_mt_worker_data->data_end;
+}
+
+static const uint8_t *decode_tiles_mt(VP9Decoder *pbi, const uint8_t *data,
+                                      const uint8_t *data_end) {
+  VP9_COMMON *const cm = &pbi->common;
+  const VPxWorkerInterface *const winterface = vpx_get_worker_interface();
+  const uint8_t *bit_reader_end = NULL;
+  VP9LfSync *lf_row_sync = &pbi->lf_row_sync;
+  YV12_BUFFER_CONFIG *const new_fb = get_frame_new_buffer(cm);
+  const int tile_cols = 1 << cm->log2_tile_cols;
+  const int tile_rows = 1 << cm->log2_tile_rows;
+  const int num_workers = VPXMIN(pbi->max_threads, tile_cols);
+  int n;
+
+  assert(tile_cols <= (1 << 6));
+  assert(tile_rows == 1);
+  (void)tile_rows;
+
+  init_mt(pbi);
+
   // Reset tile decoding hook
   for (n = 0; n < num_workers; ++n) {
     VPxWorker *const worker = &pbi->tile_workers[n];
     TileWorkerData *const tile_data =
         &pbi->tile_worker_data[n + pbi->total_tiles];
     winterface->sync(worker);
+
+    if (pbi->lpf_mt_opt && cm->lf.filter_level && !cm->skip_loop_filter) {
+      tile_data->lf_sync = lf_row_sync;
+      tile_data->lf_data = &tile_data->lf_sync->lfdata[n];
+      vp9_loop_filter_data_reset(tile_data->lf_data, new_fb, cm, pbi->mb.plane);
+      tile_data->lf_data->y_only = 0;
+    }
+
     tile_data->xd = pbi->mb;
     tile_data->xd.counts =
         cm->frame_parallel_decoding_mode ? NULL : &tile_data->counts;
@@ -1576,15 +2474,6 @@
     worker->data2 = pbi;
   }
 
-  // Note: this memset assumes above_context[0], [1] and [2]
-  // are allocated as part of the same buffer.
-  memset(cm->above_context, 0,
-         sizeof(*cm->above_context) * MAX_MB_PLANE * 2 * aligned_mi_cols);
-  memset(cm->above_seg_context, 0,
-         sizeof(*cm->above_seg_context) * aligned_mi_cols);
-
-  vp9_reset_lfm(cm);
-
   // Load tile data into tile_buffers
   get_tile_buffers(pbi, data, data_end, tile_cols, tile_rows,
                    &pbi->tile_buffers);
@@ -1724,6 +2613,22 @@
   }
 }
 
+static INLINE void flush_all_fb_on_key(VP9_COMMON *cm) {
+  if (cm->frame_type == KEY_FRAME && cm->current_video_frame > 0) {
+    RefCntBuffer *const frame_bufs = cm->buffer_pool->frame_bufs;
+    BufferPool *const pool = cm->buffer_pool;
+    int i;
+    for (i = 0; i < FRAME_BUFFERS; ++i) {
+      if (i == cm->new_fb_idx) continue;
+      frame_bufs[i].ref_count = 0;
+      if (!frame_bufs[i].released) {
+        pool->release_fb_cb(pool->cb_priv, &frame_bufs[i].raw_frame_buffer);
+        frame_bufs[i].released = 1;
+      }
+    }
+  }
+}
+
 static size_t read_uncompressed_header(VP9Decoder *pbi,
                                        struct vpx_read_bit_buffer *rb) {
   VP9_COMMON *const cm = &pbi->common;
@@ -1788,6 +2693,7 @@
     setup_frame_size(cm, rb);
     if (pbi->need_resync) {
       memset(&cm->ref_frame_map, -1, sizeof(cm->ref_frame_map));
+      flush_all_fb_on_key(cm);
       pbi->need_resync = 0;
     }
   } else {
@@ -1911,6 +2817,35 @@
   setup_segmentation_dequant(cm);
 
   setup_tile_info(cm, rb);
+  if (pbi->row_mt == 1) {
+    int num_sbs = 1;
+    const int aligned_rows = mi_cols_aligned_to_sb(cm->mi_rows);
+    const int sb_rows = aligned_rows >> MI_BLOCK_SIZE_LOG2;
+    const int num_jobs = sb_rows << cm->log2_tile_cols;
+
+    if (pbi->row_mt_worker_data == NULL) {
+      CHECK_MEM_ERROR(cm, pbi->row_mt_worker_data,
+                      vpx_calloc(1, sizeof(*pbi->row_mt_worker_data)));
+#if CONFIG_MULTITHREAD
+      pthread_mutex_init(&pbi->row_mt_worker_data->recon_done_mutex, NULL);
+#endif
+    }
+
+    if (pbi->max_threads > 1) {
+      const int aligned_cols = mi_cols_aligned_to_sb(cm->mi_cols);
+      const int sb_cols = aligned_cols >> MI_BLOCK_SIZE_LOG2;
+
+      num_sbs = sb_cols * sb_rows;
+    }
+
+    if (num_sbs > pbi->row_mt_worker_data->num_sbs ||
+        num_jobs > pbi->row_mt_worker_data->num_jobs) {
+      vp9_dec_free_row_mt_mem(pbi->row_mt_worker_data);
+      vp9_dec_alloc_row_mt_mem(pbi->row_mt_worker_data, cm, num_sbs,
+                               pbi->max_threads, num_jobs);
+    }
+    vp9_jobq_alloc(pbi);
+  }
   sz = vpx_rb_read_literal(rb, 16);
 
   if (sz == 0)
@@ -1953,7 +2888,7 @@
 
     cm->reference_mode = read_frame_reference_mode(cm, &r);
     if (cm->reference_mode != SINGLE_REFERENCE)
-      setup_compound_reference_mode(cm);
+      vp9_setup_compound_reference_mode(cm);
     read_frame_reference_mode_probs(cm, &r);
 
     for (j = 0; j < BLOCK_SIZE_GROUPS; j++)
@@ -2021,6 +2956,12 @@
   const int tile_rows = 1 << cm->log2_tile_rows;
   const int tile_cols = 1 << cm->log2_tile_cols;
   YV12_BUFFER_CONFIG *const new_fb = get_frame_new_buffer(cm);
+#if CONFIG_BITSTREAM_DEBUG || CONFIG_MISMATCH_DEBUG
+  bitstream_queue_set_frame_read(cm->current_video_frame * 2 + cm->show_frame);
+#endif
+#if CONFIG_MISMATCH_DEBUG
+  mismatch_move_frame_idx_r();
+#endif
   xd->cur_buf = new_fb;
 
   if (!first_partition_size) {
@@ -2069,20 +3010,28 @@
     pbi->total_tiles = tile_rows * tile_cols;
   }
 
-  if (pbi->max_threads > 1 && tile_rows == 1 && tile_cols > 1) {
-    // Multi-threaded tile decoder
-    *p_data_end = decode_tiles_mt(pbi, data + first_partition_size, data_end);
-    if (!xd->corrupted) {
-      if (!cm->skip_loop_filter) {
-        // If multiple threads are used to decode tiles, then we use those
-        // threads to do parallel loopfiltering.
-        vp9_loop_filter_frame_mt(new_fb, cm, pbi->mb.plane, cm->lf.filter_level,
-                                 0, 0, pbi->tile_workers, pbi->num_tile_workers,
-                                 &pbi->lf_row_sync);
-      }
+  if (pbi->max_threads > 1 && tile_rows == 1 &&
+      (tile_cols > 1 || pbi->row_mt == 1)) {
+    if (pbi->row_mt == 1) {
+      *p_data_end =
+          decode_tiles_row_wise_mt(pbi, data + first_partition_size, data_end);
     } else {
-      vpx_internal_error(&cm->error, VPX_CODEC_CORRUPT_FRAME,
-                         "Decode failed. Frame data is corrupted.");
+      // Multi-threaded tile decoder
+      *p_data_end = decode_tiles_mt(pbi, data + first_partition_size, data_end);
+      if (!pbi->lpf_mt_opt) {
+        if (!xd->corrupted) {
+          if (!cm->skip_loop_filter) {
+            // If multiple threads are used to decode tiles, then we use those
+            // threads to do parallel loopfiltering.
+            vp9_loop_filter_frame_mt(
+                new_fb, cm, pbi->mb.plane, cm->lf.filter_level, 0, 0,
+                pbi->tile_workers, pbi->num_tile_workers, &pbi->lf_row_sync);
+          }
+        } else {
+          vpx_internal_error(&cm->error, VPX_CODEC_CORRUPT_FRAME,
+                             "Decode failed. Frame data is corrupted.");
+        }
+      }
     }
   } else {
     *p_data_end = decode_tiles(pbi, data + first_partition_size, data_end);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodeframe.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodeframe.h
index 44717f5..ba95e72 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodeframe.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodeframe.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_DECODER_VP9_DECODEFRAME_H_
-#define VP9_DECODER_VP9_DECODEFRAME_H_
+#ifndef VPX_VP9_DECODER_VP9_DECODEFRAME_H_
+#define VPX_VP9_DECODER_VP9_DECODEFRAME_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -32,4 +32,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_DECODER_VP9_DECODEFRAME_H_
+#endif  // VPX_VP9_DECODER_VP9_DECODEFRAME_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodemv.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodemv.c
index 0a78141..8a8d2ad 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodemv.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodemv.c
@@ -444,17 +444,6 @@
   }
 }
 
-static void dec_find_best_ref_mvs(int allow_hp, int_mv *mvlist, int_mv *best_mv,
-                                  int refmv_count) {
-  int i;
-
-  // Make sure all the candidates are properly clamped etc
-  for (i = 0; i < refmv_count; ++i) {
-    lower_mv_precision(&mvlist[i].as_mv, allow_hp);
-    *best_mv = mvlist[i];
-  }
-}
-
 // This macro is used to add a motion vector mv_ref list if it isn't
 // already in the list.  If it's the second motion vector or early_break
 // it will also skip all additional processing and jump to Done!
@@ -494,7 +483,7 @@
                             PREDICTION_MODE mode, MV_REFERENCE_FRAME ref_frame,
                             const POSITION *const mv_ref_search,
                             int_mv *mv_ref_list, int mi_row, int mi_col,
-                            int block, int is_sub8x8) {
+                            int block) {
   const int *ref_sign_bias = cm->ref_frame_sign_bias;
   int i, refmv_count = 0;
   int different_ref_found = 0;
@@ -511,7 +500,7 @@
   memset(mv_ref_list, 0, sizeof(*mv_ref_list) * MAX_MV_REF_CANDIDATES);
 
   i = 0;
-  if (is_sub8x8) {
+  if (block >= 0) {
     // If the size < 8x8 we get the mv from the bmi substructure for the
     // nearest two blocks.
     for (i = 0; i < 2; ++i) {
@@ -628,19 +617,22 @@
 
   assert(MAX_MV_REF_CANDIDATES == 2);
 
-  refmv_count =
-      dec_find_mv_refs(cm, xd, b_mode, mi->ref_frame[ref], mv_ref_search,
-                       mv_list, mi_row, mi_col, block, 1);
-
   switch (block) {
-    case 0: best_sub8x8->as_int = mv_list[refmv_count - 1].as_int; break;
+    case 0:
+      refmv_count =
+          dec_find_mv_refs(cm, xd, b_mode, mi->ref_frame[ref], mv_ref_search,
+                           mv_list, mi_row, mi_col, block);
+      best_sub8x8->as_int = mv_list[refmv_count - 1].as_int;
+      break;
     case 1:
     case 2:
       if (b_mode == NEARESTMV) {
         best_sub8x8->as_int = bmi[0].as_mv[ref].as_int;
       } else {
+        dec_find_mv_refs(cm, xd, b_mode, mi->ref_frame[ref], mv_ref_search,
+                         mv_list, mi_row, mi_col, block);
         best_sub8x8->as_int = 0;
-        for (n = 0; n < refmv_count; ++n)
+        for (n = 0; n < 2; ++n)
           if (bmi[0].as_mv[ref].as_int != mv_list[n].as_int) {
             best_sub8x8->as_int = mv_list[n].as_int;
             break;
@@ -651,15 +643,20 @@
       if (b_mode == NEARESTMV) {
         best_sub8x8->as_int = bmi[2].as_mv[ref].as_int;
       } else {
-        int_mv candidates[2 + MAX_MV_REF_CANDIDATES];
-        candidates[0] = bmi[1].as_mv[ref];
-        candidates[1] = bmi[0].as_mv[ref];
-        candidates[2] = mv_list[0];
-        candidates[3] = mv_list[1];
         best_sub8x8->as_int = 0;
-        for (n = 0; n < 2 + MAX_MV_REF_CANDIDATES; ++n)
-          if (bmi[2].as_mv[ref].as_int != candidates[n].as_int) {
-            best_sub8x8->as_int = candidates[n].as_int;
+        if (bmi[2].as_mv[ref].as_int != bmi[1].as_mv[ref].as_int) {
+          best_sub8x8->as_int = bmi[1].as_mv[ref].as_int;
+          break;
+        }
+        if (bmi[2].as_mv[ref].as_int != bmi[0].as_mv[ref].as_int) {
+          best_sub8x8->as_int = bmi[0].as_mv[ref].as_int;
+          break;
+        }
+        dec_find_mv_refs(cm, xd, b_mode, mi->ref_frame[ref], mv_ref_search,
+                         mv_list, mi_row, mi_col, block);
+        for (n = 0; n < 2; ++n)
+          if (bmi[2].as_mv[ref].as_int != mv_list[n].as_int) {
+            best_sub8x8->as_int = mv_list[n].as_int;
             break;
           }
       }
@@ -696,7 +693,7 @@
   VP9_COMMON *const cm = &pbi->common;
   const BLOCK_SIZE bsize = mi->sb_type;
   const int allow_hp = cm->allow_high_precision_mv;
-  int_mv best_ref_mvs[2];
+  int_mv best_ref_mvs[2] = { { 0 }, { 0 } };
   int ref, is_compound;
   uint8_t inter_mode_ctx;
   const POSITION *const mv_ref_search = mv_ref_blocks[bsize];
@@ -715,26 +712,6 @@
   } else {
     if (bsize >= BLOCK_8X8)
       mi->mode = read_inter_mode(cm, xd, r, inter_mode_ctx);
-    else
-      // Sub 8x8 blocks use the nearestmv as a ref_mv if the b_mode is NEWMV.
-      // Setting mode to NEARESTMV forces the search to stop after the nearestmv
-      // has been found. After b_modes have been read, mode will be overwritten
-      // by the last b_mode.
-      mi->mode = NEARESTMV;
-
-    if (mi->mode != ZEROMV) {
-      for (ref = 0; ref < 1 + is_compound; ++ref) {
-        int_mv tmp_mvs[MAX_MV_REF_CANDIDATES];
-        const MV_REFERENCE_FRAME frame = mi->ref_frame[ref];
-        int refmv_count;
-
-        refmv_count = dec_find_mv_refs(cm, xd, mi->mode, frame, mv_ref_search,
-                                       tmp_mvs, mi_row, mi_col, -1, 0);
-
-        dec_find_best_ref_mvs(allow_hp, tmp_mvs, &best_ref_mvs[ref],
-                              refmv_count);
-      }
-    }
   }
 
   mi->interp_filter = (cm->interp_filter == SWITCHABLE)
@@ -746,6 +723,7 @@
     const int num_4x4_h = 1 << xd->bmode_blocks_hl;
     int idx, idy;
     PREDICTION_MODE b_mode;
+    int got_mv_refs_for_new = 0;
     int_mv best_sub8x8[2];
     const uint32_t invalid_mv = 0x80008000;
     // Initialize the 2nd element as even though it won't be used meaningfully
@@ -760,6 +738,18 @@
           for (ref = 0; ref < 1 + is_compound; ++ref)
             append_sub8x8_mvs_for_idx(cm, xd, mv_ref_search, b_mode, j, ref,
                                       mi_row, mi_col, &best_sub8x8[ref]);
+        } else if (b_mode == NEWMV && !got_mv_refs_for_new) {
+          for (ref = 0; ref < 1 + is_compound; ++ref) {
+            int_mv tmp_mvs[MAX_MV_REF_CANDIDATES];
+            const MV_REFERENCE_FRAME frame = mi->ref_frame[ref];
+
+            dec_find_mv_refs(cm, xd, NEWMV, frame, mv_ref_search, tmp_mvs,
+                             mi_row, mi_col, -1);
+
+            lower_mv_precision(&tmp_mvs[0].as_mv, allow_hp);
+            best_ref_mvs[ref] = tmp_mvs[0];
+            got_mv_refs_for_new = 1;
+          }
         }
 
         if (!assign_mv(cm, xd, b_mode, mi->bmi[j].as_mv, best_ref_mvs,
@@ -777,6 +767,17 @@
 
     copy_mv_pair(mi->mv, mi->bmi[3].as_mv);
   } else {
+    if (mi->mode != ZEROMV) {
+      for (ref = 0; ref < 1 + is_compound; ++ref) {
+        int_mv tmp_mvs[MAX_MV_REF_CANDIDATES];
+        const MV_REFERENCE_FRAME frame = mi->ref_frame[ref];
+        int refmv_count =
+            dec_find_mv_refs(cm, xd, mi->mode, frame, mv_ref_search, tmp_mvs,
+                             mi_row, mi_col, -1);
+        lower_mv_precision(&tmp_mvs[refmv_count - 1].as_mv, allow_hp);
+        best_ref_mvs[ref] = tmp_mvs[refmv_count - 1];
+      }
+    }
     xd->corrupted |= !assign_mv(cm, xd, mi->mode, mi->mv, best_ref_mvs,
                                 best_ref_mvs, is_compound, allow_hp, r);
   }
@@ -819,13 +820,21 @@
   if (frame_is_intra_only(cm)) {
     read_intra_frame_mode_info(cm, xd, mi_row, mi_col, r, x_mis, y_mis);
   } else {
+    // Cache mi->ref_frame and mi->mv so that the compiler can prove that they
+    // are constant for the duration of the loop and avoid reloading them.
+    MV_REFERENCE_FRAME mi_ref_frame[2];
+    int_mv mi_mv[2];
+
     read_inter_frame_mode_info(pbi, xd, mi_row, mi_col, r, x_mis, y_mis);
 
+    copy_ref_frame_pair(mi_ref_frame, mi->ref_frame);
+    copy_mv_pair(mi_mv, mi->mv);
+
     for (h = 0; h < y_mis; ++h) {
       for (w = 0; w < x_mis; ++w) {
         MV_REF *const mv = frame_mvs + w;
-        copy_ref_frame_pair(mv->ref_frame, mi->ref_frame);
-        copy_mv_pair(mv->mv, mi->mv);
+        copy_ref_frame_pair(mv->ref_frame, mi_ref_frame);
+        copy_mv_pair(mv->mv, mi_mv);
       }
       frame_mvs += cm->mi_cols;
     }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodemv.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodemv.h
index b460cb8..11b45ac 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodemv.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decodemv.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_DECODER_VP9_DECODEMV_H_
-#define VP9_DECODER_VP9_DECODEMV_H_
+#ifndef VPX_VP9_DECODER_VP9_DECODEMV_H_
+#define VPX_VP9_DECODER_VP9_DECODEMV_H_
 
 #include "vpx_dsp/bitreader.h"
 
@@ -26,4 +26,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_DECODER_VP9_DECODEMV_H_
+#endif  // VPX_VP9_DECODER_VP9_DECODEMV_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decoder.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decoder.c
index a913fa5..0aed3d7 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decoder.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decoder.c
@@ -55,6 +55,94 @@
          cm->mi_stride * (cm->mi_rows + 1) * sizeof(*cm->mi_grid_base));
 }
 
+void vp9_dec_alloc_row_mt_mem(RowMTWorkerData *row_mt_worker_data,
+                              VP9_COMMON *cm, int num_sbs, int max_threads,
+                              int num_jobs) {
+  int plane;
+  const size_t dqcoeff_size = (num_sbs << DQCOEFFS_PER_SB_LOG2) *
+                              sizeof(*row_mt_worker_data->dqcoeff[0]);
+  row_mt_worker_data->num_jobs = num_jobs;
+#if CONFIG_MULTITHREAD
+  {
+    int i;
+    CHECK_MEM_ERROR(
+        cm, row_mt_worker_data->recon_sync_mutex,
+        vpx_malloc(sizeof(*row_mt_worker_data->recon_sync_mutex) * num_jobs));
+    if (row_mt_worker_data->recon_sync_mutex) {
+      for (i = 0; i < num_jobs; ++i) {
+        pthread_mutex_init(&row_mt_worker_data->recon_sync_mutex[i], NULL);
+      }
+    }
+
+    CHECK_MEM_ERROR(
+        cm, row_mt_worker_data->recon_sync_cond,
+        vpx_malloc(sizeof(*row_mt_worker_data->recon_sync_cond) * num_jobs));
+    if (row_mt_worker_data->recon_sync_cond) {
+      for (i = 0; i < num_jobs; ++i) {
+        pthread_cond_init(&row_mt_worker_data->recon_sync_cond[i], NULL);
+      }
+    }
+  }
+#endif
+  row_mt_worker_data->num_sbs = num_sbs;
+  for (plane = 0; plane < 3; ++plane) {
+    CHECK_MEM_ERROR(cm, row_mt_worker_data->dqcoeff[plane],
+                    vpx_memalign(16, dqcoeff_size));
+    memset(row_mt_worker_data->dqcoeff[plane], 0, dqcoeff_size);
+    CHECK_MEM_ERROR(cm, row_mt_worker_data->eob[plane],
+                    vpx_calloc(num_sbs << EOBS_PER_SB_LOG2,
+                               sizeof(*row_mt_worker_data->eob[plane])));
+  }
+  CHECK_MEM_ERROR(cm, row_mt_worker_data->partition,
+                  vpx_calloc(num_sbs * PARTITIONS_PER_SB,
+                             sizeof(*row_mt_worker_data->partition)));
+  CHECK_MEM_ERROR(cm, row_mt_worker_data->recon_map,
+                  vpx_calloc(num_sbs, sizeof(*row_mt_worker_data->recon_map)));
+
+  // allocate memory for thread_data
+  if (row_mt_worker_data->thread_data == NULL) {
+    const size_t thread_size =
+        max_threads * sizeof(*row_mt_worker_data->thread_data);
+    CHECK_MEM_ERROR(cm, row_mt_worker_data->thread_data,
+                    vpx_memalign(32, thread_size));
+  }
+}
+
+void vp9_dec_free_row_mt_mem(RowMTWorkerData *row_mt_worker_data) {
+  if (row_mt_worker_data != NULL) {
+    int plane;
+#if CONFIG_MULTITHREAD
+    int i;
+    if (row_mt_worker_data->recon_sync_mutex != NULL) {
+      for (i = 0; i < row_mt_worker_data->num_jobs; ++i) {
+        pthread_mutex_destroy(&row_mt_worker_data->recon_sync_mutex[i]);
+      }
+      vpx_free(row_mt_worker_data->recon_sync_mutex);
+      row_mt_worker_data->recon_sync_mutex = NULL;
+    }
+    if (row_mt_worker_data->recon_sync_cond != NULL) {
+      for (i = 0; i < row_mt_worker_data->num_jobs; ++i) {
+        pthread_cond_destroy(&row_mt_worker_data->recon_sync_cond[i]);
+      }
+      vpx_free(row_mt_worker_data->recon_sync_cond);
+      row_mt_worker_data->recon_sync_cond = NULL;
+    }
+#endif
+    for (plane = 0; plane < 3; ++plane) {
+      vpx_free(row_mt_worker_data->eob[plane]);
+      row_mt_worker_data->eob[plane] = NULL;
+      vpx_free(row_mt_worker_data->dqcoeff[plane]);
+      row_mt_worker_data->dqcoeff[plane] = NULL;
+    }
+    vpx_free(row_mt_worker_data->partition);
+    row_mt_worker_data->partition = NULL;
+    vpx_free(row_mt_worker_data->recon_map);
+    row_mt_worker_data->recon_map = NULL;
+    vpx_free(row_mt_worker_data->thread_data);
+    row_mt_worker_data->thread_data = NULL;
+  }
+}
+
 static int vp9_dec_alloc_mi(VP9_COMMON *cm, int mi_size) {
   cm->mip = vpx_calloc(mi_size, sizeof(*cm->mip));
   if (!cm->mip) return 1;
@@ -69,6 +157,7 @@
   cm->mip = NULL;
   vpx_free(cm->mi_grid_base);
   cm->mi_grid_base = NULL;
+  cm->mi_alloc_size = 0;
 }
 
 VP9Decoder *vp9_decoder_create(BufferPool *const pool) {
@@ -139,6 +228,18 @@
     vp9_loop_filter_dealloc(&pbi->lf_row_sync);
   }
 
+  if (pbi->row_mt == 1) {
+    vp9_dec_free_row_mt_mem(pbi->row_mt_worker_data);
+    if (pbi->row_mt_worker_data != NULL) {
+      vp9_jobq_deinit(&pbi->row_mt_worker_data->jobq);
+      vpx_free(pbi->row_mt_worker_data->jobq_buf);
+#if CONFIG_MULTITHREAD
+      pthread_mutex_destroy(&pbi->row_mt_worker_data->recon_done_mutex);
+#endif
+    }
+    vpx_free(pbi->row_mt_worker_data);
+  }
+
   vp9_remove_common(&pbi->common);
   vpx_free(pbi);
 }
@@ -260,6 +361,44 @@
     cm->frame_refs[ref_index].idx = -1;
 }
 
+static void release_fb_on_decoder_exit(VP9Decoder *pbi) {
+  const VPxWorkerInterface *const winterface = vpx_get_worker_interface();
+  VP9_COMMON *volatile const cm = &pbi->common;
+  BufferPool *volatile const pool = cm->buffer_pool;
+  RefCntBuffer *volatile const frame_bufs = cm->buffer_pool->frame_bufs;
+  int i;
+
+  // Synchronize all threads immediately as a subsequent decode call may
+  // cause a resize invalidating some allocations.
+  winterface->sync(&pbi->lf_worker);
+  for (i = 0; i < pbi->num_tile_workers; ++i) {
+    winterface->sync(&pbi->tile_workers[i]);
+  }
+
+  // Release all the reference buffers if worker thread is holding them.
+  if (pbi->hold_ref_buf == 1) {
+    int ref_index = 0, mask;
+    for (mask = pbi->refresh_frame_flags; mask; mask >>= 1) {
+      const int old_idx = cm->ref_frame_map[ref_index];
+      // Current thread releases the holding of reference frame.
+      decrease_ref_count(old_idx, frame_bufs, pool);
+
+      // Release the reference frame in reference map.
+      if (mask & 1) {
+        decrease_ref_count(old_idx, frame_bufs, pool);
+      }
+      ++ref_index;
+    }
+
+    // Current thread releases the holding of reference frame.
+    for (; ref_index < REF_FRAMES && !cm->show_existing_frame; ++ref_index) {
+      const int old_idx = cm->ref_frame_map[ref_index];
+      decrease_ref_count(old_idx, frame_bufs, pool);
+    }
+    pbi->hold_ref_buf = 0;
+  }
+}
+
 int vp9_receive_compressed_data(VP9Decoder *pbi, size_t size,
                                 const uint8_t **psource) {
   VP9_COMMON *volatile const cm = &pbi->common;
@@ -297,6 +436,9 @@
   // Find a free frame buffer. Return error if can not find any.
   cm->new_fb_idx = get_free_fb(cm);
   if (cm->new_fb_idx == INVALID_IDX) {
+    pbi->ready_for_new_data = 1;
+    release_fb_on_decoder_exit(pbi);
+    vpx_clear_system_state();
     vpx_internal_error(&cm->error, VPX_CODEC_MEM_ERROR,
                        "Unable to find free frame buffer");
     return cm->error.error_code;
@@ -309,44 +451,11 @@
   pbi->cur_buf = &frame_bufs[cm->new_fb_idx];
 
   if (setjmp(cm->error.jmp)) {
-    const VPxWorkerInterface *const winterface = vpx_get_worker_interface();
-    int i;
-
     cm->error.setjmp = 0;
     pbi->ready_for_new_data = 1;
-
-    // Synchronize all threads immediately as a subsequent decode call may
-    // cause a resize invalidating some allocations.
-    winterface->sync(&pbi->lf_worker);
-    for (i = 0; i < pbi->num_tile_workers; ++i) {
-      winterface->sync(&pbi->tile_workers[i]);
-    }
-
-    // Release all the reference buffers if worker thread is holding them.
-    if (pbi->hold_ref_buf == 1) {
-      int ref_index = 0, mask;
-      for (mask = pbi->refresh_frame_flags; mask; mask >>= 1) {
-        const int old_idx = cm->ref_frame_map[ref_index];
-        // Current thread releases the holding of reference frame.
-        decrease_ref_count(old_idx, frame_bufs, pool);
-
-        // Release the reference frame in reference map.
-        if (mask & 1) {
-          decrease_ref_count(old_idx, frame_bufs, pool);
-        }
-        ++ref_index;
-      }
-
-      // Current thread releases the holding of reference frame.
-      for (; ref_index < REF_FRAMES && !cm->show_existing_frame; ++ref_index) {
-        const int old_idx = cm->ref_frame_map[ref_index];
-        decrease_ref_count(old_idx, frame_bufs, pool);
-      }
-      pbi->hold_ref_buf = 0;
-    }
+    release_fb_on_decoder_exit(pbi);
     // Release current frame.
     decrease_ref_count(cm->new_fb_idx, frame_bufs, pool);
-
     vpx_clear_system_state();
     return -1;
   }
@@ -364,6 +473,8 @@
     if (cm->seg.enabled) vp9_swap_current_and_last_seg_map(cm);
   }
 
+  if (cm->show_frame) cm->cur_show_frame_fb_idx = cm->new_fb_idx;
+
   // Update progress in frame parallel decode.
   cm->last_width = cm->width;
   cm->last_height = cm->height;
@@ -394,7 +505,7 @@
 
 #if CONFIG_VP9_POSTPROC
   if (!cm->show_existing_frame) {
-    ret = vp9_post_proc_frame(cm, sd, flags);
+    ret = vp9_post_proc_frame(cm, sd, flags, cm->width);
   } else {
     *sd = *cm->frame_to_show;
     ret = 0;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decoder.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decoder.h
index 4b26c31..b0ef83c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decoder.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_decoder.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_DECODER_VP9_DECODER_H_
-#define VP9_DECODER_VP9_DECODER_H_
+#ifndef VPX_VP9_DECODER_VP9_DECODER_H_
+#define VPX_VP9_DECODER_VP9_DECODER_H_
 
 #include "./vpx_config.h"
 
@@ -21,11 +21,24 @@
 #include "vp9/common/vp9_thread_common.h"
 #include "vp9/common/vp9_onyxc_int.h"
 #include "vp9/common/vp9_ppflags.h"
+#include "./vp9_job_queue.h"
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
+#define EOBS_PER_SB_LOG2 8
+#define DQCOEFFS_PER_SB_LOG2 12
+#define PARTITIONS_PER_SB 85
+
+typedef enum JobType { PARSE_JOB, RECON_JOB, LPF_JOB } JobType;
+
+typedef struct ThreadData {
+  struct VP9Decoder *pbi;
+  LFWorkerData *lf_data;
+  VP9LfSync *lf_sync;
+} ThreadData;
+
 typedef struct TileBuffer {
   const uint8_t *data;
   size_t size;
@@ -37,12 +50,47 @@
   int buf_start, buf_end;  // pbi->tile_buffers to decode, inclusive
   vpx_reader bit_reader;
   FRAME_COUNTS counts;
+  LFWorkerData *lf_data;
+  VP9LfSync *lf_sync;
   DECLARE_ALIGNED(16, MACROBLOCKD, xd);
   /* dqcoeff are shared by all the planes. So planes must be decoded serially */
   DECLARE_ALIGNED(16, tran_low_t, dqcoeff[32 * 32]);
+  DECLARE_ALIGNED(16, uint16_t, extend_and_predict_buf[80 * 2 * 80 * 2]);
   struct vpx_internal_error_info error_info;
 } TileWorkerData;
 
+typedef void (*process_block_fn_t)(TileWorkerData *twd,
+                                   struct VP9Decoder *const pbi, int mi_row,
+                                   int mi_col, BLOCK_SIZE bsize, int bwl,
+                                   int bhl);
+
+typedef struct RowMTWorkerData {
+  int num_sbs;
+  int *eob[MAX_MB_PLANE];
+  PARTITION_TYPE *partition;
+  tran_low_t *dqcoeff[MAX_MB_PLANE];
+  int8_t *recon_map;
+  const uint8_t *data_end;
+  uint8_t *jobq_buf;
+  JobQueueRowMt jobq;
+  size_t jobq_size;
+  int num_tiles_done;
+  int num_jobs;
+#if CONFIG_MULTITHREAD
+  pthread_mutex_t recon_done_mutex;
+  pthread_mutex_t *recon_sync_mutex;
+  pthread_cond_t *recon_sync_cond;
+#endif
+  ThreadData *thread_data;
+} RowMTWorkerData;
+
+/* Structure to queue and dequeue row decode jobs */
+typedef struct Job {
+  int row_num;
+  int tile_col;
+  JobType job_type;
+} Job;
+
 typedef struct VP9Decoder {
   DECLARE_ALIGNED(16, MACROBLOCKD, mb);
 
@@ -72,10 +120,14 @@
   int inv_tile_order;
   int need_resync;   // wait for key/intra-only frame.
   int hold_ref_buf;  // hold the reference buffer.
+
+  int row_mt;
+  int lpf_mt_opt;
+  RowMTWorkerData *row_mt_worker_data;
 } VP9Decoder;
 
 int vp9_receive_compressed_data(struct VP9Decoder *pbi, size_t size,
-                                const uint8_t **dest);
+                                const uint8_t **psource);
 
 int vp9_get_raw_frame(struct VP9Decoder *pbi, YV12_BUFFER_CONFIG *sd,
                       vp9_ppflags_t *flags);
@@ -109,6 +161,11 @@
 
 void vp9_decoder_remove(struct VP9Decoder *pbi);
 
+void vp9_dec_alloc_row_mt_mem(RowMTWorkerData *row_mt_worker_data,
+                              VP9_COMMON *cm, int num_sbs, int max_threads,
+                              int num_jobs);
+void vp9_dec_free_row_mt_mem(RowMTWorkerData *row_mt_worker_data);
+
 static INLINE void decrease_ref_count(int idx, RefCntBuffer *const frame_bufs,
                                       BufferPool *const pool) {
   if (idx >= 0 && frame_bufs[idx].ref_count > 0) {
@@ -129,4 +186,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_DECODER_VP9_DECODER_H_
+#endif  // VPX_VP9_DECODER_VP9_DECODER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.c
index 4bd016d..c2e6b3d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.c
@@ -33,6 +33,20 @@
                             int *count, unsigned int *range) {
   const unsigned int split = (*range * prob + (256 - prob)) >> CHAR_BIT;
   const BD_VALUE bigsplit = (BD_VALUE)split << (BD_VALUE_SIZE - CHAR_BIT);
+#if CONFIG_BITSTREAM_DEBUG
+  const int queue_r = bitstream_queue_get_read();
+  const int frame_idx = bitstream_queue_get_frame_read();
+  int ref_result, ref_prob;
+  bitstream_queue_pop(&ref_result, &ref_prob);
+  if (prob != ref_prob) {
+    fprintf(stderr,
+            "\n *** [bit] prob error, frame_idx_r %d prob %d ref_prob %d "
+            "queue_r %d\n",
+            frame_idx, prob, ref_prob, queue_r);
+
+    assert(0);
+  }
+#endif
 
   if (*count < 0) {
     r->value = *value;
@@ -51,6 +65,20 @@
       *value <<= shift;
       *count -= shift;
     }
+#if CONFIG_BITSTREAM_DEBUG
+    {
+      const int bit = 1;
+      if (bit != ref_result) {
+        fprintf(
+            stderr,
+            "\n *** [bit] result error, frame_idx_r %d bit %d ref_result %d "
+            "queue_r %d\n",
+            frame_idx, bit, ref_result, queue_r);
+
+        assert(0);
+      }
+    }
+#endif
     return 1;
   }
   *range = split;
@@ -60,6 +88,19 @@
     *value <<= shift;
     *count -= shift;
   }
+#if CONFIG_BITSTREAM_DEBUG
+  {
+    const int bit = 0;
+    if (bit != ref_result) {
+      fprintf(stderr,
+              "\n *** [bit] result error, frame_idx_r %d bit %d ref_result %d "
+              "queue_r %d\n",
+              frame_idx, bit, ref_result, queue_r);
+
+      assert(0);
+    }
+  }
+#endif
   return 0;
 }
 
@@ -202,9 +243,9 @@
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 #else
     if (read_bool(r, 128, &value, &count, &range)) {
-      dqcoeff[scan[c]] = -v;
+      dqcoeff[scan[c]] = (tran_low_t)-v;
     } else {
-      dqcoeff[scan[c]] = v;
+      dqcoeff[scan[c]] = (tran_low_t)v;
     }
 #endif  // CONFIG_COEFFICIENT_RANGE_CHECKING
     ++c;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.h
index 7b0d876..a32052ff 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_detokenize.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_DECODER_VP9_DETOKENIZE_H_
-#define VP9_DECODER_VP9_DETOKENIZE_H_
+#ifndef VPX_VP9_DECODER_VP9_DETOKENIZE_H_
+#define VPX_VP9_DECODER_VP9_DETOKENIZE_H_
 
 #include "vpx_dsp/bitreader.h"
 #include "vp9/decoder/vp9_decoder.h"
@@ -27,4 +27,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_DECODER_VP9_DETOKENIZE_H_
+#endif  // VPX_VP9_DECODER_VP9_DETOKENIZE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.h
index 5a8ec83..b0c7750 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_dsubexp.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_DECODER_VP9_DSUBEXP_H_
-#define VP9_DECODER_VP9_DSUBEXP_H_
+#ifndef VPX_VP9_DECODER_VP9_DSUBEXP_H_
+#define VPX_VP9_DECODER_VP9_DSUBEXP_H_
 
 #include "vpx_dsp/bitreader.h"
 
@@ -23,4 +23,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_DECODER_VP9_DSUBEXP_H_
+#endif  // VPX_VP9_DECODER_VP9_DSUBEXP_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.c
new file mode 100644
index 0000000..9a31f5a
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.c
@@ -0,0 +1,124 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <assert.h>
+#include <string.h>
+
+#include "vpx/vpx_integer.h"
+
+#include "vp9/decoder/vp9_job_queue.h"
+
+void vp9_jobq_init(JobQueueRowMt *jobq, uint8_t *buf, size_t buf_size) {
+#if CONFIG_MULTITHREAD
+  pthread_mutex_init(&jobq->mutex, NULL);
+  pthread_cond_init(&jobq->cond, NULL);
+#endif
+  jobq->buf_base = buf;
+  jobq->buf_wr = buf;
+  jobq->buf_rd = buf;
+  jobq->buf_end = buf + buf_size;
+  jobq->terminate = 0;
+}
+
+void vp9_jobq_reset(JobQueueRowMt *jobq) {
+#if CONFIG_MULTITHREAD
+  pthread_mutex_lock(&jobq->mutex);
+#endif
+  jobq->buf_wr = jobq->buf_base;
+  jobq->buf_rd = jobq->buf_base;
+  jobq->terminate = 0;
+#if CONFIG_MULTITHREAD
+  pthread_mutex_unlock(&jobq->mutex);
+#endif
+}
+
+void vp9_jobq_deinit(JobQueueRowMt *jobq) {
+  vp9_jobq_reset(jobq);
+#if CONFIG_MULTITHREAD
+  pthread_mutex_destroy(&jobq->mutex);
+  pthread_cond_destroy(&jobq->cond);
+#endif
+}
+
+void vp9_jobq_terminate(JobQueueRowMt *jobq) {
+#if CONFIG_MULTITHREAD
+  pthread_mutex_lock(&jobq->mutex);
+#endif
+  jobq->terminate = 1;
+#if CONFIG_MULTITHREAD
+  pthread_cond_broadcast(&jobq->cond);
+  pthread_mutex_unlock(&jobq->mutex);
+#endif
+}
+
+int vp9_jobq_queue(JobQueueRowMt *jobq, void *job, size_t job_size) {
+  int ret = 0;
+#if CONFIG_MULTITHREAD
+  pthread_mutex_lock(&jobq->mutex);
+#endif
+  if (jobq->buf_end >= jobq->buf_wr + job_size) {
+    memcpy(jobq->buf_wr, job, job_size);
+    jobq->buf_wr = jobq->buf_wr + job_size;
+#if CONFIG_MULTITHREAD
+    pthread_cond_signal(&jobq->cond);
+#endif
+    ret = 0;
+  } else {
+    /* Wrap around case is not supported */
+    assert(0);
+    ret = 1;
+  }
+#if CONFIG_MULTITHREAD
+  pthread_mutex_unlock(&jobq->mutex);
+#endif
+  return ret;
+}
+
+int vp9_jobq_dequeue(JobQueueRowMt *jobq, void *job, size_t job_size,
+                     int blocking) {
+  int ret = 0;
+#if CONFIG_MULTITHREAD
+  pthread_mutex_lock(&jobq->mutex);
+#endif
+  if (jobq->buf_end >= jobq->buf_rd + job_size) {
+    while (1) {
+      if (jobq->buf_wr >= jobq->buf_rd + job_size) {
+        memcpy(job, jobq->buf_rd, job_size);
+        jobq->buf_rd = jobq->buf_rd + job_size;
+        ret = 0;
+        break;
+      } else {
+        /* If all the entries have been dequeued, then break and return */
+        if (jobq->terminate == 1) {
+          ret = 1;
+          break;
+        }
+        if (blocking == 1) {
+#if CONFIG_MULTITHREAD
+          pthread_cond_wait(&jobq->cond, &jobq->mutex);
+#endif
+        } else {
+          /* If there is no job available and this is a
+           * non-blocking call, return failure. */
+          ret = 1;
+          break;
+        }
+      }
+    }
+  } else {
+    /* Wrap around case is not supported */
+    ret = 1;
+  }
+#if CONFIG_MULTITHREAD
+  pthread_mutex_unlock(&jobq->mutex);
+#endif
+
+  return ret;
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.h
new file mode 100644
index 0000000..bc23bf9
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/decoder/vp9_job_queue.h
@@ -0,0 +1,45 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VP9_DECODER_VP9_JOB_QUEUE_H_
+#define VPX_VP9_DECODER_VP9_JOB_QUEUE_H_
+
+#include "vpx_util/vpx_thread.h"
+
+typedef struct {
+  // Pointer to buffer base which contains the jobs
+  uint8_t *buf_base;
+
+  // Pointer to current address where new job can be added
+  uint8_t *volatile buf_wr;
+
+  // Pointer to current address from where next job can be obtained
+  uint8_t *volatile buf_rd;
+
+  // Pointer to end of job buffer
+  uint8_t *buf_end;
+
+  int terminate;
+
+#if CONFIG_MULTITHREAD
+  pthread_mutex_t mutex;
+  pthread_cond_t cond;
+#endif
+} JobQueueRowMt;
+
+void vp9_jobq_init(JobQueueRowMt *jobq, uint8_t *buf, size_t buf_size);
+void vp9_jobq_reset(JobQueueRowMt *jobq);
+void vp9_jobq_deinit(JobQueueRowMt *jobq);
+void vp9_jobq_terminate(JobQueueRowMt *jobq);
+int vp9_jobq_queue(JobQueueRowMt *jobq, void *job, size_t job_size);
+int vp9_jobq_dequeue(JobQueueRowMt *jobq, void *job, size_t job_size,
+                     int blocking);
+
+#endif  // VPX_VP9_DECODER_VP9_JOB_QUEUE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_dct_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_dct_neon.c
deleted file mode 100644
index 513718e..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_dct_neon.c
+++ /dev/null
@@ -1,35 +0,0 @@
-/*
- *  Copyright (c) 2014 The WebM project authors. All Rights Reserved.
- *
- *  Use of this source code is governed by a BSD-style license
- *  that can be found in the LICENSE file in the root of the source
- *  tree. An additional intellectual property rights grant can be found
- *  in the file PATENTS.  All contributing project authors may
- *  be found in the AUTHORS file in the root of the source tree.
- */
-
-#include <arm_neon.h>
-
-#include "./vp9_rtcd.h"
-#include "./vpx_config.h"
-#include "./vpx_dsp_rtcd.h"
-
-#include "vp9/common/vp9_blockd.h"
-#include "vpx_dsp/txfm_common.h"
-#include "vpx_dsp/vpx_dsp_common.h"
-
-void vp9_fdct8x8_quant_neon(const int16_t *input, int stride,
-                            tran_low_t *coeff_ptr, intptr_t n_coeffs,
-                            int skip_block, const int16_t *round_ptr,
-                            const int16_t *quant_ptr, tran_low_t *qcoeff_ptr,
-                            tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr,
-                            uint16_t *eob_ptr, const int16_t *scan_ptr,
-                            const int16_t *iscan_ptr) {
-  tran_low_t temp_buffer[64];
-  (void)coeff_ptr;
-
-  vpx_fdct8x8_neon(input, temp_buffer, stride);
-  vp9_quantize_fp_neon(temp_buffer, n_coeffs, skip_block, round_ptr, quant_ptr,
-                       qcoeff_ptr, dqcoeff_ptr, dequant_ptr, eob_ptr, scan_ptr,
-                       iscan_ptr);
-}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_quantize_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_quantize_neon.c
index 97a09bd..d75a481 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_quantize_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/arm/neon/vp9_quantize_neon.c
@@ -26,6 +26,22 @@
 #include "vpx_dsp/arm/mem_neon.h"
 #include "vpx_dsp/vpx_dsp_common.h"
 
+static INLINE void calculate_dqcoeff_and_store(const int16x8_t qcoeff,
+                                               const int16x8_t dequant,
+                                               tran_low_t *dqcoeff) {
+  const int32x4_t dqcoeff_0 =
+      vmull_s16(vget_low_s16(qcoeff), vget_low_s16(dequant));
+  const int32x4_t dqcoeff_1 =
+      vmull_s16(vget_high_s16(qcoeff), vget_high_s16(dequant));
+
+#if CONFIG_VP9_HIGHBITDEPTH
+  vst1q_s32(dqcoeff, dqcoeff_0);
+  vst1q_s32(dqcoeff + 4, dqcoeff_1);
+#else
+  vst1q_s16(dqcoeff, vcombine_s16(vmovn_s32(dqcoeff_0), vmovn_s32(dqcoeff_1)));
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+}
+
 void vp9_quantize_fp_neon(const tran_low_t *coeff_ptr, intptr_t count,
                           int skip_block, const int16_t *round_ptr,
                           const int16_t *quant_ptr, tran_low_t *qcoeff_ptr,
@@ -55,7 +71,8 @@
     const int16x8_t v_iscan = vld1q_s16(&iscan[0]);
     const int16x8_t v_coeff = load_tran_low_to_s16q(coeff_ptr);
     const int16x8_t v_coeff_sign = vshrq_n_s16(v_coeff, 15);
-    const int16x8_t v_tmp = vabaq_s16(v_round, v_coeff, v_zero);
+    const int16x8_t v_abs = vabsq_s16(v_coeff);
+    const int16x8_t v_tmp = vqaddq_s16(v_abs, v_round);
     const int32x4_t v_tmp_lo =
         vmull_s16(vget_low_s16(v_tmp), vget_low_s16(v_quant));
     const int32x4_t v_tmp_hi =
@@ -67,10 +84,9 @@
     const int16x8_t v_nz_iscan = vbslq_s16(v_nz_mask, v_zero, v_iscan_plus1);
     const int16x8_t v_qcoeff_a = veorq_s16(v_tmp2, v_coeff_sign);
     const int16x8_t v_qcoeff = vsubq_s16(v_qcoeff_a, v_coeff_sign);
-    const int16x8_t v_dqcoeff = vmulq_s16(v_qcoeff, v_dequant);
+    calculate_dqcoeff_and_store(v_qcoeff, v_dequant, dqcoeff_ptr);
     v_eobmax_76543210 = vmaxq_s16(v_eobmax_76543210, v_nz_iscan);
     store_s16q_to_tran_low(qcoeff_ptr, v_qcoeff);
-    store_s16q_to_tran_low(dqcoeff_ptr, v_dqcoeff);
     v_round = vmovq_n_s16(round_ptr[1]);
     v_quant = vmovq_n_s16(quant_ptr[1]);
     v_dequant = vmovq_n_s16(dequant_ptr[1]);
@@ -80,7 +96,8 @@
     const int16x8_t v_iscan = vld1q_s16(&iscan[i]);
     const int16x8_t v_coeff = load_tran_low_to_s16q(coeff_ptr + i);
     const int16x8_t v_coeff_sign = vshrq_n_s16(v_coeff, 15);
-    const int16x8_t v_tmp = vabaq_s16(v_round, v_coeff, v_zero);
+    const int16x8_t v_abs = vabsq_s16(v_coeff);
+    const int16x8_t v_tmp = vqaddq_s16(v_abs, v_round);
     const int32x4_t v_tmp_lo =
         vmull_s16(vget_low_s16(v_tmp), vget_low_s16(v_quant));
     const int32x4_t v_tmp_hi =
@@ -92,11 +109,13 @@
     const int16x8_t v_nz_iscan = vbslq_s16(v_nz_mask, v_zero, v_iscan_plus1);
     const int16x8_t v_qcoeff_a = veorq_s16(v_tmp2, v_coeff_sign);
     const int16x8_t v_qcoeff = vsubq_s16(v_qcoeff_a, v_coeff_sign);
-    const int16x8_t v_dqcoeff = vmulq_s16(v_qcoeff, v_dequant);
+    calculate_dqcoeff_and_store(v_qcoeff, v_dequant, dqcoeff_ptr + i);
     v_eobmax_76543210 = vmaxq_s16(v_eobmax_76543210, v_nz_iscan);
     store_s16q_to_tran_low(qcoeff_ptr + i, v_qcoeff);
-    store_s16q_to_tran_low(dqcoeff_ptr + i, v_dqcoeff);
   }
+#ifdef __aarch64__
+  *eob_ptr = vmaxvq_s16(v_eobmax_76543210);
+#else
   {
     const int16x4_t v_eobmax_3210 = vmax_s16(vget_low_s16(v_eobmax_76543210),
                                              vget_high_s16(v_eobmax_76543210));
@@ -111,6 +130,7 @@
 
     *eob_ptr = (uint16_t)vget_lane_s16(v_eobmax_final, 0);
   }
+#endif  // __aarch64__
 }
 
 static INLINE int32x4_t extract_sign_bit(int32x4_t a) {
@@ -122,7 +142,7 @@
                                 const int16_t *quant_ptr,
                                 tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr,
                                 const int16_t *dequant_ptr, uint16_t *eob_ptr,
-                                const int16_t *scan, const int16_t *iscan_ptr) {
+                                const int16_t *scan, const int16_t *iscan) {
   const int16x8_t one = vdupq_n_s16(1);
   const int16x8_t neg_one = vdupq_n_s16(-1);
 
@@ -134,17 +154,16 @@
   const int16x8_t dequant_thresh = vshrq_n_s16(vld1q_s16(dequant_ptr), 2);
 
   // Process dc and the first seven ac coeffs.
-  const uint16x8_t iscan =
-      vreinterpretq_u16_s16(vaddq_s16(vld1q_s16(iscan_ptr), one));
+  const uint16x8_t v_iscan =
+      vreinterpretq_u16_s16(vaddq_s16(vld1q_s16(iscan), one));
   const int16x8_t coeff = load_tran_low_to_s16q(coeff_ptr);
   const int16x8_t coeff_sign = vshrq_n_s16(coeff, 15);
   const int16x8_t coeff_abs = vabsq_s16(coeff);
   const int16x8_t dequant_mask =
       vreinterpretq_s16_u16(vcgeq_s16(coeff_abs, dequant_thresh));
 
-  int16x8_t qcoeff = vaddq_s16(coeff_abs, round);
+  int16x8_t qcoeff = vqaddq_s16(coeff_abs, round);
   int32x4_t dqcoeff_0, dqcoeff_1;
-  int16x8_t dqcoeff;
   uint16x8_t eob_max;
   (void)scan;
   (void)count;
@@ -166,15 +185,19 @@
   // Add 1 if negative to round towards zero because the C uses division.
   dqcoeff_0 = vaddq_s32(dqcoeff_0, extract_sign_bit(dqcoeff_0));
   dqcoeff_1 = vaddq_s32(dqcoeff_1, extract_sign_bit(dqcoeff_1));
+#if CONFIG_VP9_HIGHBITDEPTH
+  vst1q_s32(dqcoeff_ptr, vshrq_n_s32(dqcoeff_0, 1));
+  vst1q_s32(dqcoeff_ptr + 4, vshrq_n_s32(dqcoeff_1, 1));
+#else
+  store_s16q_to_tran_low(dqcoeff_ptr, vcombine_s16(vshrn_n_s32(dqcoeff_0, 1),
+                                                   vshrn_n_s32(dqcoeff_1, 1)));
+#endif
 
-  dqcoeff = vcombine_s16(vshrn_n_s32(dqcoeff_0, 1), vshrn_n_s32(dqcoeff_1, 1));
-
-  eob_max = vandq_u16(vtstq_s16(qcoeff, neg_one), iscan);
+  eob_max = vandq_u16(vtstq_s16(qcoeff, neg_one), v_iscan);
 
   store_s16q_to_tran_low(qcoeff_ptr, qcoeff);
-  store_s16q_to_tran_low(dqcoeff_ptr, dqcoeff);
 
-  iscan_ptr += 8;
+  iscan += 8;
   coeff_ptr += 8;
   qcoeff_ptr += 8;
   dqcoeff_ptr += 8;
@@ -188,17 +211,16 @@
 
     // Process the rest of the ac coeffs.
     for (i = 8; i < 32 * 32; i += 8) {
-      const uint16x8_t iscan =
-          vreinterpretq_u16_s16(vaddq_s16(vld1q_s16(iscan_ptr), one));
+      const uint16x8_t v_iscan =
+          vreinterpretq_u16_s16(vaddq_s16(vld1q_s16(iscan), one));
       const int16x8_t coeff = load_tran_low_to_s16q(coeff_ptr);
       const int16x8_t coeff_sign = vshrq_n_s16(coeff, 15);
       const int16x8_t coeff_abs = vabsq_s16(coeff);
       const int16x8_t dequant_mask =
           vreinterpretq_s16_u16(vcgeq_s16(coeff_abs, dequant_thresh));
 
-      int16x8_t qcoeff = vaddq_s16(coeff_abs, round);
+      int16x8_t qcoeff = vqaddq_s16(coeff_abs, round);
       int32x4_t dqcoeff_0, dqcoeff_1;
-      int16x8_t dqcoeff;
 
       qcoeff = vqdmulhq_s16(qcoeff, quant);
       qcoeff = veorq_s16(qcoeff, coeff_sign);
@@ -211,21 +233,29 @@
       dqcoeff_0 = vaddq_s32(dqcoeff_0, extract_sign_bit(dqcoeff_0));
       dqcoeff_1 = vaddq_s32(dqcoeff_1, extract_sign_bit(dqcoeff_1));
 
-      dqcoeff =
-          vcombine_s16(vshrn_n_s32(dqcoeff_0, 1), vshrn_n_s32(dqcoeff_1, 1));
+#if CONFIG_VP9_HIGHBITDEPTH
+      vst1q_s32(dqcoeff_ptr, vshrq_n_s32(dqcoeff_0, 1));
+      vst1q_s32(dqcoeff_ptr + 4, vshrq_n_s32(dqcoeff_1, 1));
+#else
+      store_s16q_to_tran_low(
+          dqcoeff_ptr,
+          vcombine_s16(vshrn_n_s32(dqcoeff_0, 1), vshrn_n_s32(dqcoeff_1, 1)));
+#endif
 
       eob_max =
-          vmaxq_u16(eob_max, vandq_u16(vtstq_s16(qcoeff, neg_one), iscan));
+          vmaxq_u16(eob_max, vandq_u16(vtstq_s16(qcoeff, neg_one), v_iscan));
 
       store_s16q_to_tran_low(qcoeff_ptr, qcoeff);
-      store_s16q_to_tran_low(dqcoeff_ptr, dqcoeff);
 
-      iscan_ptr += 8;
+      iscan += 8;
       coeff_ptr += 8;
       qcoeff_ptr += 8;
       dqcoeff_ptr += 8;
     }
 
+#ifdef __aarch64__
+    *eob_ptr = vmaxvq_u16(eob_max);
+#else
     {
       const uint16x4_t eob_max_0 =
           vmax_u16(vget_low_u16(eob_max), vget_high_u16(eob_max));
@@ -233,5 +263,6 @@
       const uint16x4_t eob_max_2 = vpmax_u16(eob_max_1, eob_max_1);
       vst1_lane_u16(eob_ptr, eob_max_2, 0);
     }
+#endif  // __aarch64__
   }
 }
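The NEON kernel above tracks, per lane, `iscan + 1` wherever the quantized coefficient is nonzero, then takes a horizontal max to obtain the end-of-block count (`vmaxvq_u16` on AArch64, a pairwise-max ladder on 32-bit ARM). A scalar sketch of that reduction, with illustrative names that are not libvpx API:

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of the end-of-block computation the NEON code vectorizes:
 * each lane holds iscan+1 where the quantized coefficient is nonzero
 * (0 otherwise), and eob is the maximum across all lanes.
 * eob_from_lanes is an illustrative name, not a libvpx function. */
static uint16_t eob_from_lanes(const int16_t *qcoeff, const int16_t *iscan,
                               int n) {
  uint16_t eob = 0;
  for (int i = 0; i < n; i++) {
    const uint16_t lane = (qcoeff[i] != 0) ? (uint16_t)(iscan[i] + 1) : 0;
    if (lane > eob) eob = lane; /* horizontal max, like vmaxvq_u16 */
  }
  return eob;
}
```

Encoding the lane as `iscan + 1` (rather than `iscan`) lets an all-zero block report eob 0 without a separate check.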
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_error_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_error_msa.c
index 188d04d..61786d8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_error_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_error_msa.c
@@ -8,6 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include "./vpx_config.h"
 #include "./vp9_rtcd.h"
 #include "vpx_dsp/mips/macros_msa.h"
 
@@ -79,6 +80,7 @@
     return err;                                                              \
   }
 
+#if !CONFIG_VP9_HIGHBITDEPTH
 BLOCK_ERROR_BLOCKSIZE_MSA(16);
 BLOCK_ERROR_BLOCKSIZE_MSA(64);
 BLOCK_ERROR_BLOCKSIZE_MSA(256);
@@ -103,3 +105,4 @@
 
   return err;
 }
+#endif  // !CONFIG_VP9_HIGHBITDEPTH
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct16x16_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct16x16_msa.c
index 0831e59..efbbe83 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct16x16_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct16x16_msa.c
@@ -10,6 +10,7 @@
 
 #include <assert.h>
 
+#include "./vp9_rtcd.h"
 #include "vp9/common/vp9_enums.h"
 #include "vp9/encoder/mips/msa/vp9_fdct_msa.h"
 #include "vpx_dsp/mips/fwd_txfm_msa.h"
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct4x4_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct4x4_msa.c
index fa36f09..9c5cc12 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct4x4_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct4x4_msa.c
@@ -10,6 +10,7 @@
 
 #include <assert.h>
 
+#include "./vp9_rtcd.h"
 #include "vp9/common/vp9_enums.h"
 #include "vp9/encoder/mips/msa/vp9_fdct_msa.h"
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct8x8_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct8x8_msa.c
index 604db85..26d81aa 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct8x8_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct8x8_msa.c
@@ -10,6 +10,7 @@
 
 #include <assert.h>
 
+#include "./vp9_rtcd.h"
 #include "vp9/common/vp9_enums.h"
 #include "vp9/encoder/mips/msa/vp9_fdct_msa.h"
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct_msa.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct_msa.h
index 794bec7..fa1af2f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct_msa.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/mips/msa/vp9_fdct_msa.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_MIPS_MSA_VP9_FDCT_MSA_H_
-#define VP9_ENCODER_MIPS_MSA_VP9_FDCT_MSA_H_
+#ifndef VPX_VP9_ENCODER_MIPS_MSA_VP9_FDCT_MSA_H_
+#define VPX_VP9_ENCODER_MIPS_MSA_VP9_FDCT_MSA_H_
 
 #include "vpx_dsp/mips/fwd_txfm_msa.h"
 #include "vpx_dsp/mips/txfm_macros_msa.h"
@@ -113,4 +113,4 @@
     PCKEV_H4_SH(in0_r_m, in0_r_m, in1_r_m, in1_r_m, s2_m, s2_m, s3_m, s3_m, \
                 out0, out1, out2, out3);                                    \
   }
-#endif /* VP9_ENCODER_MIPS_MSA_VP9_FDCT_MSA_H_ */
+#endif  // VPX_VP9_ENCODER_MIPS_MSA_VP9_FDCT_MSA_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/ppc/vp9_quantize_vsx.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/ppc/vp9_quantize_vsx.c
new file mode 100644
index 0000000..4f88b8f
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/ppc/vp9_quantize_vsx.c
@@ -0,0 +1,292 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include "./vpx_config.h"
+
+#include "./vp9_rtcd.h"
+#include "vpx_dsp/ppc/types_vsx.h"
+
+// Multiply the packed 16-bit integers in a and b, producing intermediate 32-bit
+// integers, and return the high 16 bits of the intermediate integers.
+// (a * b) >> 16
+// Note: Because this is done in 2 operations, a and b cannot both be INT16_MIN
+static INLINE int16x8_t vec_mulhi(int16x8_t a, int16x8_t b) {
+  // madds does ((A * B) >> 15) + C, we need >> 16, so we perform an extra right
+  // shift.
+  return vec_sra(vec_madds(a, b, vec_zeros_s16), vec_ones_u16);
+}
+
+// Negate 16-bit integers in a when the corresponding signed 16-bit
+// integer in b is negative.
+static INLINE int16x8_t vec_sign(int16x8_t a, int16x8_t b) {
+  const int16x8_t mask = vec_sra(b, vec_shift_sign_s16);
+  return vec_xor(vec_add(a, mask), mask);
+}
+
+// Compare packed 16-bit integers across a, and return the maximum value in
+// every element. Returns a vector containing the biggest value across vector a.
+static INLINE int16x8_t vec_max_across(int16x8_t a) {
+  a = vec_max(a, vec_perm(a, a, vec_perm64));
+  a = vec_max(a, vec_perm(a, a, vec_perm32));
+  return vec_max(a, vec_perm(a, a, vec_perm16));
+}
+
+void vp9_quantize_fp_vsx(const tran_low_t *coeff_ptr, intptr_t n_coeffs,
+                         int skip_block, const int16_t *round_ptr,
+                         const int16_t *quant_ptr, tran_low_t *qcoeff_ptr,
+                         tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr,
+                         uint16_t *eob_ptr, const int16_t *scan,
+                         const int16_t *iscan) {
+  int16x8_t qcoeff0, qcoeff1, dqcoeff0, dqcoeff1, eob;
+  bool16x8_t zero_coeff0, zero_coeff1;
+
+  int16x8_t round = vec_vsx_ld(0, round_ptr);
+  int16x8_t quant = vec_vsx_ld(0, quant_ptr);
+  int16x8_t dequant = vec_vsx_ld(0, dequant_ptr);
+  int16x8_t coeff0 = vec_vsx_ld(0, coeff_ptr);
+  int16x8_t coeff1 = vec_vsx_ld(16, coeff_ptr);
+  int16x8_t scan0 = vec_vsx_ld(0, iscan);
+  int16x8_t scan1 = vec_vsx_ld(16, iscan);
+
+  (void)scan;
+  (void)skip_block;
+  assert(!skip_block);
+
+  // First set of 8 coeff starts with DC + 7 AC
+  qcoeff0 = vec_mulhi(vec_vaddshs(vec_abs(coeff0), round), quant);
+  zero_coeff0 = vec_cmpeq(qcoeff0, vec_zeros_s16);
+  qcoeff0 = vec_sign(qcoeff0, coeff0);
+  vec_vsx_st(qcoeff0, 0, qcoeff_ptr);
+
+  dqcoeff0 = vec_mladd(qcoeff0, dequant, vec_zeros_s16);
+  vec_vsx_st(dqcoeff0, 0, dqcoeff_ptr);
+
+  // Remove DC value from round and quant
+  round = vec_splat(round, 1);
+  quant = vec_splat(quant, 1);
+
+  // Remove DC value from dequant
+  dequant = vec_splat(dequant, 1);
+
+  // Second set of 8 coeff starts with (all AC)
+  qcoeff1 = vec_mulhi(vec_vaddshs(vec_abs(coeff1), round), quant);
+  zero_coeff1 = vec_cmpeq(qcoeff1, vec_zeros_s16);
+  qcoeff1 = vec_sign(qcoeff1, coeff1);
+  vec_vsx_st(qcoeff1, 16, qcoeff_ptr);
+
+  dqcoeff1 = vec_mladd(qcoeff1, dequant, vec_zeros_s16);
+  vec_vsx_st(dqcoeff1, 16, dqcoeff_ptr);
+
+  eob = vec_max(vec_or(scan0, zero_coeff0), vec_or(scan1, zero_coeff1));
+
+  // We quantize 16 coeff up front (enough for a 4x4) and process 24 coeff per
+  // loop iteration.
+  // for 8x8: 16 + 2 x 24 = 64
+  // for 16x16: 16 + 10 x 24 = 256
+  if (n_coeffs > 16) {
+    int16x8_t coeff2, qcoeff2, dqcoeff2, eob2, scan2;
+    bool16x8_t zero_coeff2;
+
+    int index = 16;
+    int off0 = 32;
+    int off1 = 48;
+    int off2 = 64;
+
+    do {
+      coeff0 = vec_vsx_ld(off0, coeff_ptr);
+      coeff1 = vec_vsx_ld(off1, coeff_ptr);
+      coeff2 = vec_vsx_ld(off2, coeff_ptr);
+      scan0 = vec_vsx_ld(off0, iscan);
+      scan1 = vec_vsx_ld(off1, iscan);
+      scan2 = vec_vsx_ld(off2, iscan);
+
+      qcoeff0 = vec_mulhi(vec_vaddshs(vec_abs(coeff0), round), quant);
+      zero_coeff0 = vec_cmpeq(qcoeff0, vec_zeros_s16);
+      qcoeff0 = vec_sign(qcoeff0, coeff0);
+      vec_vsx_st(qcoeff0, off0, qcoeff_ptr);
+      dqcoeff0 = vec_mladd(qcoeff0, dequant, vec_zeros_s16);
+      vec_vsx_st(dqcoeff0, off0, dqcoeff_ptr);
+
+      qcoeff1 = vec_mulhi(vec_vaddshs(vec_abs(coeff1), round), quant);
+      zero_coeff1 = vec_cmpeq(qcoeff1, vec_zeros_s16);
+      qcoeff1 = vec_sign(qcoeff1, coeff1);
+      vec_vsx_st(qcoeff1, off1, qcoeff_ptr);
+      dqcoeff1 = vec_mladd(qcoeff1, dequant, vec_zeros_s16);
+      vec_vsx_st(dqcoeff1, off1, dqcoeff_ptr);
+
+      qcoeff2 = vec_mulhi(vec_vaddshs(vec_abs(coeff2), round), quant);
+      zero_coeff2 = vec_cmpeq(qcoeff2, vec_zeros_s16);
+      qcoeff2 = vec_sign(qcoeff2, coeff2);
+      vec_vsx_st(qcoeff2, off2, qcoeff_ptr);
+      dqcoeff2 = vec_mladd(qcoeff2, dequant, vec_zeros_s16);
+      vec_vsx_st(dqcoeff2, off2, dqcoeff_ptr);
+
+      eob = vec_max(eob, vec_or(scan0, zero_coeff0));
+      eob2 = vec_max(vec_or(scan1, zero_coeff1), vec_or(scan2, zero_coeff2));
+      eob = vec_max(eob, eob2);
+
+      index += 24;
+      off0 += 48;
+      off1 += 48;
+      off2 += 48;
+    } while (index < n_coeffs);
+  }
+
+  eob = vec_max_across(eob);
+  *eob_ptr = eob[0] + 1;
+}
+
+// Sets each 32-bit lane to 1 when the corresponding value in a is
+// negative.
+static INLINE int32x4_t vec_is_neg(int32x4_t a) {
+  return vec_sr(a, vec_shift_sign_s32);
+}
+
+// DeQuantization function used for 32x32 blocks. Quantized coeff of 32x32
+// blocks are twice as big as for other block sizes. As such, using
+// vec_mladd results in overflow.
+static INLINE int16x8_t dequantize_coeff_32(int16x8_t qcoeff,
+                                            int16x8_t dequant) {
+  int32x4_t dqcoeffe = vec_mule(qcoeff, dequant);
+  int32x4_t dqcoeffo = vec_mulo(qcoeff, dequant);
+  // Add 1 if negative to round towards zero because the C uses division.
+  dqcoeffe = vec_add(dqcoeffe, vec_is_neg(dqcoeffe));
+  dqcoeffo = vec_add(dqcoeffo, vec_is_neg(dqcoeffo));
+  dqcoeffe = vec_sra(dqcoeffe, vec_ones_u32);
+  dqcoeffo = vec_sra(dqcoeffo, vec_ones_u32);
+  return (int16x8_t)vec_perm(dqcoeffe, dqcoeffo, vec_perm_odd_even_pack);
+}
+
+void vp9_quantize_fp_32x32_vsx(const tran_low_t *coeff_ptr, intptr_t n_coeffs,
+                               int skip_block, const int16_t *round_ptr,
+                               const int16_t *quant_ptr, tran_low_t *qcoeff_ptr,
+                               tran_low_t *dqcoeff_ptr,
+                               const int16_t *dequant_ptr, uint16_t *eob_ptr,
+                               const int16_t *scan, const int16_t *iscan) {
+  // In stage 1, we quantize 16 coeffs (DC + 15 AC)
+  // In stage 2, we loop 42 times and quantize 24 coeffs per iteration
+  // (32 * 32 - 16) / 24 = 42
+  int num_itr = 42;
+  // Offsets are in bytes, 16 coeffs = 32 bytes
+  int off0 = 32;
+  int off1 = 48;
+  int off2 = 64;
+
+  int16x8_t qcoeff0, qcoeff1, dqcoeff0, dqcoeff1, eob;
+  bool16x8_t mask0, mask1, zero_coeff0, zero_coeff1;
+
+  int16x8_t round = vec_vsx_ld(0, round_ptr);
+  int16x8_t quant = vec_vsx_ld(0, quant_ptr);
+  int16x8_t dequant = vec_vsx_ld(0, dequant_ptr);
+  int16x8_t coeff0 = vec_vsx_ld(0, coeff_ptr);
+  int16x8_t coeff1 = vec_vsx_ld(16, coeff_ptr);
+  int16x8_t scan0 = vec_vsx_ld(0, iscan);
+  int16x8_t scan1 = vec_vsx_ld(16, iscan);
+  int16x8_t thres = vec_sra(dequant, vec_splats((uint16_t)2));
+  int16x8_t abs_coeff0 = vec_abs(coeff0);
+  int16x8_t abs_coeff1 = vec_abs(coeff1);
+
+  (void)scan;
+  (void)skip_block;
+  (void)n_coeffs;
+  assert(!skip_block);
+
+  mask0 = vec_cmpge(abs_coeff0, thres);
+  round = vec_sra(vec_add(round, vec_ones_s16), vec_ones_u16);
+  // First set of 8 coeff starts with DC + 7 AC
+  qcoeff0 = vec_madds(vec_vaddshs(abs_coeff0, round), quant, vec_zeros_s16);
+  qcoeff0 = vec_and(qcoeff0, mask0);
+  zero_coeff0 = vec_cmpeq(qcoeff0, vec_zeros_s16);
+  qcoeff0 = vec_sign(qcoeff0, coeff0);
+  vec_vsx_st(qcoeff0, 0, qcoeff_ptr);
+
+  dqcoeff0 = dequantize_coeff_32(qcoeff0, dequant);
+  vec_vsx_st(dqcoeff0, 0, dqcoeff_ptr);
+
+  // Remove DC value from thres, round, quant and dequant
+  thres = vec_splat(thres, 1);
+  round = vec_splat(round, 1);
+  quant = vec_splat(quant, 1);
+  dequant = vec_splat(dequant, 1);
+
+  mask1 = vec_cmpge(abs_coeff1, thres);
+
+  // Second set of 8 coeff starts with (all AC)
+  qcoeff1 =
+      vec_madds(vec_vaddshs(vec_abs(coeff1), round), quant, vec_zeros_s16);
+  qcoeff1 = vec_and(qcoeff1, mask1);
+  zero_coeff1 = vec_cmpeq(qcoeff1, vec_zeros_s16);
+  qcoeff1 = vec_sign(qcoeff1, coeff1);
+  vec_vsx_st(qcoeff1, 16, qcoeff_ptr);
+
+  dqcoeff1 = dequantize_coeff_32(qcoeff1, dequant);
+  vec_vsx_st(dqcoeff1, 16, dqcoeff_ptr);
+
+  eob = vec_max(vec_or(scan0, zero_coeff0), vec_or(scan1, zero_coeff1));
+
+  do {
+    int16x8_t coeff2, abs_coeff2, qcoeff2, dqcoeff2, eob2, scan2;
+    bool16x8_t zero_coeff2, mask2;
+    coeff0 = vec_vsx_ld(off0, coeff_ptr);
+    coeff1 = vec_vsx_ld(off1, coeff_ptr);
+    coeff2 = vec_vsx_ld(off2, coeff_ptr);
+    scan0 = vec_vsx_ld(off0, iscan);
+    scan1 = vec_vsx_ld(off1, iscan);
+    scan2 = vec_vsx_ld(off2, iscan);
+
+    abs_coeff0 = vec_abs(coeff0);
+    abs_coeff1 = vec_abs(coeff1);
+    abs_coeff2 = vec_abs(coeff2);
+
+    qcoeff0 = vec_madds(vec_vaddshs(abs_coeff0, round), quant, vec_zeros_s16);
+    qcoeff1 = vec_madds(vec_vaddshs(abs_coeff1, round), quant, vec_zeros_s16);
+    qcoeff2 = vec_madds(vec_vaddshs(abs_coeff2, round), quant, vec_zeros_s16);
+
+    mask0 = vec_cmpge(abs_coeff0, thres);
+    mask1 = vec_cmpge(abs_coeff1, thres);
+    mask2 = vec_cmpge(abs_coeff2, thres);
+
+    qcoeff0 = vec_and(qcoeff0, mask0);
+    qcoeff1 = vec_and(qcoeff1, mask1);
+    qcoeff2 = vec_and(qcoeff2, mask2);
+
+    zero_coeff0 = vec_cmpeq(qcoeff0, vec_zeros_s16);
+    zero_coeff1 = vec_cmpeq(qcoeff1, vec_zeros_s16);
+    zero_coeff2 = vec_cmpeq(qcoeff2, vec_zeros_s16);
+
+    qcoeff0 = vec_sign(qcoeff0, coeff0);
+    qcoeff1 = vec_sign(qcoeff1, coeff1);
+    qcoeff2 = vec_sign(qcoeff2, coeff2);
+
+    vec_vsx_st(qcoeff0, off0, qcoeff_ptr);
+    vec_vsx_st(qcoeff1, off1, qcoeff_ptr);
+    vec_vsx_st(qcoeff2, off2, qcoeff_ptr);
+
+    dqcoeff0 = dequantize_coeff_32(qcoeff0, dequant);
+    dqcoeff1 = dequantize_coeff_32(qcoeff1, dequant);
+    dqcoeff2 = dequantize_coeff_32(qcoeff2, dequant);
+
+    vec_vsx_st(dqcoeff0, off0, dqcoeff_ptr);
+    vec_vsx_st(dqcoeff1, off1, dqcoeff_ptr);
+    vec_vsx_st(dqcoeff2, off2, dqcoeff_ptr);
+
+    eob = vec_max(eob, vec_or(scan0, zero_coeff0));
+    eob2 = vec_max(vec_or(scan1, zero_coeff1), vec_or(scan2, zero_coeff2));
+    eob = vec_max(eob, eob2);
+
+    off0 += 48;
+    off1 += 48;
+    off2 += 48;
+    num_itr--;
+  } while (num_itr != 0);
+
+  eob = vec_max_across(eob);
+  *eob_ptr = eob[0] + 1;
+}
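`dequantize_coeff_32` above halves the `qcoeff * dequant` product while rounding toward zero, because the C reference uses integer division and an arithmetic shift alone would round toward negative infinity. A scalar sketch of that rounding step (hypothetical helper name, not the libvpx API):

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of dequantize_coeff_32: for 32x32 blocks the dequantized
 * value is (qcoeff * dequant) / 2. Adding 1 to negative products before the
 * arithmetic shift makes the shift round toward zero, matching C division. */
static int16_t dequant_32x32(int16_t qcoeff, int16_t dequant) {
  int32_t prod = (int32_t)qcoeff * dequant;
  if (prod < 0) prod += 1; /* bias negatives so >> 1 rounds toward zero */
  return (int16_t)(prod >> 1);
}
```

Without the bias, `-15 >> 1` is -8, while the C reference's `-15 / 2` is -7; the `+1` reconciles the two.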
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.h
index e508cb4..22a657e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_alt_ref_aq.h
@@ -15,8 +15,8 @@
  *  for altref frames.  Go to alt_ref_aq_private.h for implementation details.
  */
 
-#ifndef VP9_ENCODER_VP9_ALT_REF_AQ_H_
-#define VP9_ENCODER_VP9_ALT_REF_AQ_H_
+#ifndef VPX_VP9_ENCODER_VP9_ALT_REF_AQ_H_
+#define VPX_VP9_ENCODER_VP9_ALT_REF_AQ_H_
 
 #include "vpx/vpx_integer.h"
 
@@ -124,4 +124,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_ALT_REF_AQ_H_
+#endif  // VPX_VP9_ENCODER_VP9_ALT_REF_AQ_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.h
index b1b5656..749d3c19 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_360.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_AQ_360_H_
-#define VP9_ENCODER_VP9_AQ_360_H_
+#ifndef VPX_VP9_ENCODER_VP9_AQ_360_H_
+#define VPX_VP9_ENCODER_VP9_AQ_360_H_
 
 #include "vp9/encoder/vp9_encoder.h"
 
@@ -24,4 +24,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_AQ_VARIANCE_H_
+#endif  // VPX_VP9_ENCODER_VP9_AQ_360_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.h
index a00d34e..d3cb34c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_complexity.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_AQ_COMPLEXITY_H_
-#define VP9_ENCODER_VP9_AQ_COMPLEXITY_H_
+#ifndef VPX_VP9_ENCODER_VP9_AQ_COMPLEXITY_H_
+#define VPX_VP9_ENCODER_VP9_AQ_COMPLEXITY_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -33,4 +33,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_AQ_COMPLEXITY_H_
+#endif  // VPX_VP9_ENCODER_VP9_AQ_COMPLEXITY_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.c
index ed43de7..858a416 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.c
@@ -21,6 +21,14 @@
 #include "vp9/encoder/vp9_ratectrl.h"
 #include "vp9/encoder/vp9_segmentation.h"
 
+static const uint8_t VP9_VAR_OFFS[64] = {
+  128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128,
+  128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128,
+  128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128,
+  128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128,
+  128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128
+};
+
 CYCLIC_REFRESH *vp9_cyclic_refresh_alloc(int mi_rows, int mi_cols) {
   size_t last_coded_q_map_size;
   CYCLIC_REFRESH *const cr = vpx_calloc(1, sizeof(*cr));
@@ -39,13 +47,16 @@
   }
   assert(MAXQ <= 255);
   memset(cr->last_coded_q_map, MAXQ, last_coded_q_map_size);
+  cr->counter_encode_maxq_scene_change = 0;
   return cr;
 }
 
 void vp9_cyclic_refresh_free(CYCLIC_REFRESH *cr) {
-  vpx_free(cr->map);
-  vpx_free(cr->last_coded_q_map);
-  vpx_free(cr);
+  if (cr != NULL) {
+    vpx_free(cr->map);
+    vpx_free(cr->last_coded_q_map);
+    vpx_free(cr);
+  }
 }
 
 // Check if this coding block, of size bsize, should be considered for refresh
@@ -176,7 +187,8 @@
 
   // If this block is labeled for refresh, check if we should reset the
   // segment_id.
-  if (cyclic_refresh_segment_id_boosted(mi->segment_id)) {
+  if (cpi->sf.use_nonrd_pick_mode &&
+      cyclic_refresh_segment_id_boosted(mi->segment_id)) {
     mi->segment_id = refresh_this_block;
     // Reset segment_id if it will be skipped.
     if (skip) mi->segment_id = CR_SEGMENT_ID_BASE;
@@ -318,6 +330,28 @@
     rc->baseline_gf_interval = 10;
 }
 
+static int is_superblock_flat_static(VP9_COMP *const cpi, int sb_row_index,
+                                     int sb_col_index) {
+  unsigned int source_variance;
+  const uint8_t *src_y = cpi->Source->y_buffer;
+  const int ystride = cpi->Source->y_stride;
+  unsigned int sse;
+  const BLOCK_SIZE bsize = BLOCK_64X64;
+  src_y += (sb_row_index << 6) * ystride + (sb_col_index << 6);
+  source_variance =
+      cpi->fn_ptr[bsize].vf(src_y, ystride, VP9_VAR_OFFS, 0, &sse);
+  if (source_variance == 0) {
+    uint64_t block_sad;
+    const uint8_t *last_src_y = cpi->Last_Source->y_buffer;
+    const int last_ystride = cpi->Last_Source->y_stride;
+    last_src_y += (sb_row_index << 6) * last_ystride + (sb_col_index << 6);
+    block_sad =
+        cpi->fn_ptr[bsize].sdf(src_y, ystride, last_src_y, last_ystride);
+    if (block_sad == 0) return 1;
+  }
+  return 0;
+}
+
 // Update the segmentation map, and related quantities: cyclic refresh map,
 // refresh sb_index, and target number of blocks to be refreshed.
 // The map is set to either 0/CR_SEGMENT_ID_BASE (no refresh) or to
@@ -368,8 +402,17 @@
     int sb_col_index = i - sb_row_index * sb_cols;
     int mi_row = sb_row_index * MI_BLOCK_SIZE;
     int mi_col = sb_col_index * MI_BLOCK_SIZE;
+    int flat_static_blocks = 0;
+    int compute_content = 1;
     assert(mi_row >= 0 && mi_row < cm->mi_rows);
     assert(mi_col >= 0 && mi_col < cm->mi_cols);
+#if CONFIG_VP9_HIGHBITDEPTH
+    if (cpi->common.use_highbitdepth) compute_content = 0;
+#endif
+    if (cpi->Last_Source == NULL ||
+        cpi->Last_Source->y_width != cpi->Source->y_width ||
+        cpi->Last_Source->y_height != cpi->Source->y_height)
+      compute_content = 0;
     bl_index = mi_row * cm->mi_cols + mi_col;
     // Loop through all 8x8 blocks in superblock and update map.
     xmis =
@@ -400,11 +443,21 @@
     // Enforce constant segment over superblock.
     // If segment is at least half of superblock, set to 1.
     if (sum_map >= xmis * ymis / 2) {
-      for (y = 0; y < ymis; y++)
-        for (x = 0; x < xmis; x++) {
-          seg_map[bl_index + y * cm->mi_cols + x] = CR_SEGMENT_ID_BOOST1;
-        }
-      cr->target_num_seg_blocks += xmis * ymis;
+      // This superblock is a candidate for refresh:
+      // compute spatial variance and exclude blocks that are spatially flat
+      // and stationary. Note: this is currently only done for screen content
+      // mode.
+      if (compute_content && cr->skip_flat_static_blocks)
+        flat_static_blocks =
+            is_superblock_flat_static(cpi, sb_row_index, sb_col_index);
+      if (!flat_static_blocks) {
+        // Label this superblock as segment 1.
+        for (y = 0; y < ymis; y++)
+          for (x = 0; x < xmis; x++) {
+            seg_map[bl_index + y * cm->mi_cols + x] = CR_SEGMENT_ID_BOOST1;
+          }
+        cr->target_num_seg_blocks += xmis * ymis;
+      }
     }
     i++;
     if (i == sbs_in_frame) {
@@ -413,7 +466,8 @@
   } while (cr->target_num_seg_blocks < block_count && i != cr->sb_index);
   cr->sb_index = i;
   cr->reduce_refresh = 0;
-  if (count_sel<(3 * count_tot)>> 2) cr->reduce_refresh = 1;
+  if (cpi->oxcf.content != VP9E_CONTENT_SCREEN)
+    if (count_sel<(3 * count_tot)>> 2) cr->reduce_refresh = 1;
 }
 
 // Set cyclic refresh parameters.
@@ -425,11 +479,20 @@
   int target_refresh = 0;
   double weight_segment_target = 0;
   double weight_segment = 0;
-  int thresh_low_motion = (cm->width < 720) ? 55 : 20;
+  int thresh_low_motion = 20;
+  int qp_thresh = VPXMIN((cpi->oxcf.content == VP9E_CONTENT_SCREEN) ? 35 : 20,
+                         rc->best_quality << 1);
+  int qp_max_thresh = 117 * MAXQ >> 7;
   cr->apply_cyclic_refresh = 1;
-  if (cm->frame_type == KEY_FRAME || cpi->svc.temporal_layer_id > 0 ||
+  if (frame_is_intra_only(cm) || cpi->svc.temporal_layer_id > 0 ||
+      is_lossless_requested(&cpi->oxcf) ||
+      rc->avg_frame_qindex[INTER_FRAME] < qp_thresh ||
+      (cpi->use_svc &&
+       cpi->svc.layer_context[cpi->svc.temporal_layer_id].is_key_frame) ||
       (!cpi->use_svc && rc->avg_frame_low_motion < thresh_low_motion &&
-       rc->frames_since_key > 40)) {
+       rc->frames_since_key > 40) ||
+      (!cpi->use_svc && rc->avg_frame_qindex[INTER_FRAME] > qp_max_thresh &&
+       rc->frames_since_key > 20)) {
     cr->apply_cyclic_refresh = 0;
     return;
   }
@@ -454,10 +517,26 @@
       cr->rate_boost_fac = 13;
     }
   }
+  // For screen-content: keep rate_ratio_qdelta to 2.0 (segment#1 boost) and
+  // percent_refresh (refresh rate) to 10. But reduce rate boost for segment#2
+  // (rate_boost_fac = 10 disables segment#2).
+  if (cpi->oxcf.content == VP9E_CONTENT_SCREEN) {
+    // Only enable feature of skipping flat_static blocks for top layer
+    // under screen content mode.
+    if (cpi->svc.spatial_layer_id == cpi->svc.number_spatial_layers - 1)
+      cr->skip_flat_static_blocks = 1;
+    cr->percent_refresh = (cr->skip_flat_static_blocks) ? 5 : 10;
+    // Increase the amount of refresh on scene change that is encoded at max Q,
+    // increase for a few cycles of the refresh period (~100 / percent_refresh).
+    if (cr->counter_encode_maxq_scene_change < 30)
+      cr->percent_refresh = (cr->skip_flat_static_blocks) ? 10 : 15;
+    cr->rate_ratio_qdelta = 2.0;
+    cr->rate_boost_fac = 10;
+  }
   // Adjust some parameters for low resolutions.
-  if (cm->width <= 352 && cm->height <= 288) {
+  if (cm->width * cm->height <= 352 * 288) {
     if (rc->avg_frame_bandwidth < 3000) {
-      cr->motion_thresh = 16;
+      cr->motion_thresh = 64;
       cr->rate_boost_fac = 13;
     } else {
       cr->max_qdelta_perc = 70;
@@ -488,6 +567,13 @@
                    num8x8bl;
   if (weight_segment_target < 7 * weight_segment / 8)
     weight_segment = weight_segment_target;
+  // For screen-content: don't include target for the weight segment,
+  // since for all flat areas the segment is reset, so its more accurate
+  // to just use the previous actual number of seg blocks for the weight.
+  if (cpi->oxcf.content == VP9E_CONTENT_SCREEN)
+    weight_segment =
+        (double)(cr->actual_num_seg1_blocks + cr->actual_num_seg2_blocks) /
+        num8x8bl;
   cr->weight_segment = weight_segment;
 }
 
@@ -497,23 +583,31 @@
   const RATE_CONTROL *const rc = &cpi->rc;
   CYCLIC_REFRESH *const cr = cpi->cyclic_refresh;
   struct segmentation *const seg = &cm->seg;
+  int scene_change_detected =
+      cpi->rc.high_source_sad ||
+      (cpi->use_svc && cpi->svc.high_source_sad_superframe);
   if (cm->current_video_frame == 0) cr->low_content_avg = 0.0;
-  if (!cr->apply_cyclic_refresh || (cpi->force_update_segmentation)) {
+  // Reset if resolution change has occurred.
+  if (cpi->resize_pending != 0) vp9_cyclic_refresh_reset_resize(cpi);
+  if (!cr->apply_cyclic_refresh || (cpi->force_update_segmentation) ||
+      scene_change_detected) {
     // Set segmentation map to 0 and disable.
     unsigned char *const seg_map = cpi->segmentation_map;
     memset(seg_map, 0, cm->mi_rows * cm->mi_cols);
     vp9_disable_segmentation(&cm->seg);
-    if (cm->frame_type == KEY_FRAME) {
+    if (cm->frame_type == KEY_FRAME || scene_change_detected) {
       memset(cr->last_coded_q_map, MAXQ,
              cm->mi_rows * cm->mi_cols * sizeof(*cr->last_coded_q_map));
       cr->sb_index = 0;
       cr->reduce_refresh = 0;
+      cr->counter_encode_maxq_scene_change = 0;
     }
     return;
   } else {
     int qindex_delta = 0;
     int qindex2;
     const double q = vp9_convert_qindex_to_q(cm->base_qindex, cm->bit_depth);
+    cr->counter_encode_maxq_scene_change++;
     vpx_clear_system_state();
     // Set rate threshold to some multiple (set to 2 for now) of the target
     // rate (target is given by sb64_target_rate and scaled by 256).
@@ -563,9 +657,6 @@
     cr->qindex_delta[2] = qindex_delta;
     vp9_set_segdata(seg, CR_SEGMENT_ID_BOOST2, SEG_LVL_ALT_Q, qindex_delta);
 
-    // Reset if resoluton change has occurred.
-    if (cpi->resize_pending != 0) vp9_cyclic_refresh_reset_resize(cpi);
-
     // Update the segmentation and refresh map.
     cyclic_refresh_update_map(cpi);
   }
@@ -579,8 +670,19 @@
   const VP9_COMMON *const cm = &cpi->common;
   CYCLIC_REFRESH *const cr = cpi->cyclic_refresh;
   memset(cr->map, 0, cm->mi_rows * cm->mi_cols);
-  memset(cr->last_coded_q_map, MAXQ, cm->mi_rows * cm->mi_cols);
+  memset(cr->last_coded_q_map, MAXQ,
+         cm->mi_rows * cm->mi_cols * sizeof(*cr->last_coded_q_map));
   cr->sb_index = 0;
   cpi->refresh_golden_frame = 1;
   cpi->refresh_alt_ref_frame = 1;
+  cr->counter_encode_maxq_scene_change = 0;
+}
+
+void vp9_cyclic_refresh_limit_q(const VP9_COMP *cpi, int *q) {
+  CYCLIC_REFRESH *const cr = cpi->cyclic_refresh;
+  // For now apply hard limit to frame-level decrease in q, if the cyclic
+  // refresh is active (percent_refresh > 0).
+  if (cr->percent_refresh > 0 && cpi->rc.q_1_frame - *q > 8) {
+    *q = cpi->rc.q_1_frame - 8;
+  }
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.h
index 77fa67c9..b6d7fde 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_cyclicrefresh.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_AQ_CYCLICREFRESH_H_
-#define VP9_ENCODER_VP9_AQ_CYCLICREFRESH_H_
+#ifndef VPX_VP9_ENCODER_VP9_AQ_CYCLICREFRESH_H_
+#define VPX_VP9_ENCODER_VP9_AQ_CYCLICREFRESH_H_
 
 #include "vpx/vpx_integer.h"
 #include "vp9/common/vp9_blockd.h"
@@ -68,6 +68,8 @@
   int reduce_refresh;
   double weight_segment;
   int apply_cyclic_refresh;
+  int counter_encode_maxq_scene_change;
+  int skip_flat_static_blocks;
 };
 
 struct VP9_COMP;
@@ -102,10 +104,6 @@
                                              int mi_row, int mi_col,
                                              BLOCK_SIZE bsize);
 
-// Update the segmentation map, and related quantities: cyclic refresh map,
-// refresh sb_index, and target number of blocks to be refreshed.
-void vp9_cyclic_refresh_update_map(struct VP9_COMP *const cpi);
-
 // From the just encoded frame: update the actual number of blocks that were
 // applied the segment delta q, and the amount of low motion in the frame.
 // Also check conditions for forcing golden update, or preventing golden
@@ -139,8 +137,10 @@
     return CR_SEGMENT_ID_BASE;
 }
 
+void vp9_cyclic_refresh_limit_q(const struct VP9_COMP *cpi, int *q);
+
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_AQ_CYCLICREFRESH_H_
+#endif  // VPX_VP9_ENCODER_VP9_AQ_CYCLICREFRESH_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.c
index 477f62b..1f9ce23 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.c
@@ -19,6 +19,7 @@
 
 #include "vp9/encoder/vp9_ratectrl.h"
 #include "vp9/encoder/vp9_rd.h"
+#include "vp9/encoder/vp9_encodeframe.h"
 #include "vp9/encoder/vp9_segmentation.h"
 
 #define ENERGY_MIN (-4)
@@ -108,7 +109,7 @@
 #if CONFIG_VP9_HIGHBITDEPTH
 static void aq_highbd_variance64(const uint8_t *a8, int a_stride,
                                  const uint8_t *b8, int b_stride, int w, int h,
-                                 uint64_t *sse, uint64_t *sum) {
+                                 uint64_t *sse, int64_t *sum) {
   int i, j;
 
   uint16_t *a = CONVERT_TO_SHORTPTR(a8);
@@ -127,15 +128,6 @@
   }
 }
 
-static void aq_highbd_8_variance(const uint8_t *a8, int a_stride,
-                                 const uint8_t *b8, int b_stride, int w, int h,
-                                 unsigned int *sse, int *sum) {
-  uint64_t sse_long = 0;
-  uint64_t sum_long = 0;
-  aq_highbd_variance64(a8, a_stride, b8, b_stride, w, h, &sse_long, &sum_long);
-  *sse = (unsigned int)sse_long;
-  *sum = (int)sum_long;
-}
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
 static unsigned int block_variance(VP9_COMP *cpi, MACROBLOCK *x,
@@ -153,11 +145,13 @@
     int avg;
 #if CONFIG_VP9_HIGHBITDEPTH
     if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
-      aq_highbd_8_variance(x->plane[0].src.buf, x->plane[0].src.stride,
+      uint64_t sse64 = 0;
+      int64_t sum64 = 0;
+      aq_highbd_variance64(x->plane[0].src.buf, x->plane[0].src.stride,
                            CONVERT_TO_BYTEPTR(vp9_highbd_64_zeros), 0, bw, bh,
-                           &sse, &avg);
-      sse >>= 2 * (xd->bd - 8);
-      avg >>= (xd->bd - 8);
+                           &sse64, &sum64);
+      sse = (unsigned int)(sse64 >> (2 * (xd->bd - 8)));
+      avg = (int)(sum64 >> (xd->bd - 8));
     } else {
       aq_variance(x->plane[0].src.buf, x->plane[0].src.stride, vp9_64_zeros, 0,
                   bw, bh, &sse, &avg);
@@ -192,6 +186,40 @@
   return log(var + 1.0);
 }
 
+// Get the range of sub block energy values;
+void vp9_get_sub_block_energy(VP9_COMP *cpi, MACROBLOCK *mb, int mi_row,
+                              int mi_col, BLOCK_SIZE bsize, int *min_e,
+                              int *max_e) {
+  VP9_COMMON *const cm = &cpi->common;
+  const int bw = num_8x8_blocks_wide_lookup[bsize];
+  const int bh = num_8x8_blocks_high_lookup[bsize];
+  const int xmis = VPXMIN(cm->mi_cols - mi_col, bw);
+  const int ymis = VPXMIN(cm->mi_rows - mi_row, bh);
+  int x, y;
+
+  if (xmis < bw || ymis < bh) {
+    vp9_setup_src_planes(mb, cpi->Source, mi_row, mi_col);
+    *min_e = vp9_block_energy(cpi, mb, bsize);
+    *max_e = *min_e;
+  } else {
+    int energy;
+    *min_e = ENERGY_MAX;
+    *max_e = ENERGY_MIN;
+
+    for (y = 0; y < ymis; ++y) {
+      for (x = 0; x < xmis; ++x) {
+        vp9_setup_src_planes(mb, cpi->Source, mi_row + y, mi_col + x);
+        energy = vp9_block_energy(cpi, mb, BLOCK_8X8);
+        *min_e = VPXMIN(*min_e, energy);
+        *max_e = VPXMAX(*max_e, energy);
+      }
+    }
+  }
+
+  // Re-instate source pointers back to what they should have been on entry.
+  vp9_setup_src_planes(mb, cpi->Source, mi_row, mi_col);
+}
+
 #define DEFAULT_E_MIDPOINT 10.0
 int vp9_block_energy(VP9_COMP *cpi, MACROBLOCK *x, BLOCK_SIZE bs) {
   double energy;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.h
index 211a69f..a4f8728 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_aq_variance.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_AQ_VARIANCE_H_
-#define VP9_ENCODER_VP9_AQ_VARIANCE_H_
+#ifndef VPX_VP9_ENCODER_VP9_AQ_VARIANCE_H_
+#define VPX_VP9_ENCODER_VP9_AQ_VARIANCE_H_
 
 #include "vp9/encoder/vp9_encoder.h"
 
@@ -20,11 +20,15 @@
 unsigned int vp9_vaq_segment_id(int energy);
 void vp9_vaq_frame_setup(VP9_COMP *cpi);
 
+void vp9_get_sub_block_energy(VP9_COMP *cpi, MACROBLOCK *mb, int mi_row,
+                              int mi_col, BLOCK_SIZE bsize, int *min_e,
+                              int *max_e);
 int vp9_block_energy(VP9_COMP *cpi, MACROBLOCK *x, BLOCK_SIZE bs);
+
 double vp9_log_block_var(VP9_COMP *cpi, MACROBLOCK *x, BLOCK_SIZE bs);
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_AQ_VARIANCE_H_
+#endif  // VPX_VP9_ENCODER_VP9_AQ_VARIANCE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.c
index d346cd5..3eff4ce 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.c
@@ -18,6 +18,9 @@
 #include "vpx_mem/vpx_mem.h"
 #include "vpx_ports/mem_ops.h"
 #include "vpx_ports/system_state.h"
+#if CONFIG_BITSTREAM_DEBUG
+#include "vpx_util/vpx_debug_util.h"
+#endif  // CONFIG_BITSTREAM_DEBUG
 
 #include "vp9/common/vp9_entropy.h"
 #include "vp9/common/vp9_entropymode.h"
@@ -39,8 +42,10 @@
   { 0, 1 },  { 6, 3 },   { 28, 5 },  { 30, 5 }, { 58, 6 },
   { 59, 6 }, { 126, 7 }, { 127, 7 }, { 62, 6 }, { 2, 2 }
 };
-static const struct vp9_token switchable_interp_encodings[SWITCHABLE_FILTERS] =
-    { { 0, 1 }, { 2, 2 }, { 3, 2 } };
+static const struct vp9_token
+    switchable_interp_encodings[SWITCHABLE_FILTERS] = { { 0, 1 },
+                                                        { 2, 2 },
+                                                        { 3, 2 } };
 static const struct vp9_token partition_encodings[PARTITION_TYPES] = {
   { 0, 1 }, { 2, 2 }, { 6, 3 }, { 7, 3 }
 };
@@ -86,7 +91,7 @@
   BLOCK_SIZE bsize = xd->mi[0]->sb_type;
   const TX_SIZE max_tx_size = max_txsize_lookup[bsize];
   const vpx_prob *const tx_probs =
-      get_tx_probs2(max_tx_size, xd, &cm->fc->tx_probs);
+      get_tx_probs(max_tx_size, get_tx_size_context(xd), &cm->fc->tx_probs);
   vpx_write(w, tx_size != TX_4X4, tx_probs[0]);
   if (tx_size != TX_4X4 && max_tx_size >= TX_16X16) {
     vpx_write(w, tx_size != TX_8X8, tx_probs[1]);
@@ -217,7 +222,8 @@
     }
 
     if (is_compound) {
-      vpx_write(w, mi->ref_frame[0] == GOLDEN_FRAME,
+      const int idx = cm->ref_frame_sign_bias[cm->comp_fixed_ref];
+      vpx_write(w, mi->ref_frame[!idx] == cm->comp_var_ref[1],
                 vp9_get_pred_prob_comp_ref_p(cm, xd));
     } else {
       const int bit0 = mi->ref_frame[0] != LAST_FRAME;
@@ -459,7 +465,8 @@
           write_modes_b(cpi, xd, tile, w, tok, tok_end, mi_row, mi_col + bs,
                         max_mv_magnitude, interp_filter_selected);
         break;
-      case PARTITION_SPLIT:
+      default:
+        assert(partition == PARTITION_SPLIT);
         write_modes_sb(cpi, xd, tile, w, tok, tok_end, mi_row, mi_col, subsize,
                        max_mv_magnitude, interp_filter_selected);
         write_modes_sb(cpi, xd, tile, w, tok, tok_end, mi_row, mi_col + bs,
@@ -469,7 +476,6 @@
         write_modes_sb(cpi, xd, tile, w, tok, tok_end, mi_row + bs, mi_col + bs,
                        subsize, max_mv_magnitude, interp_filter_selected);
         break;
-      default: assert(0);
     }
   }
 
@@ -618,9 +624,10 @@
       return;
     }
 
-    case ONE_LOOP_REDUCED: {
+    default: {
       int updates = 0;
       int noupdates_before_first = 0;
+      assert(cpi->sf.use_fast_coef_updates == ONE_LOOP_REDUCED);
       for (i = 0; i < PLANE_TYPES; ++i) {
         for (j = 0; j < REF_TYPES; ++j) {
           for (k = 0; k < COEF_BANDS; ++k) {
@@ -670,7 +677,6 @@
       }
       return;
     }
-    default: assert(0);
   }
 }
 
@@ -909,10 +915,24 @@
            (cpi->refresh_golden_frame << cpi->alt_fb_idx);
   } else {
     int arf_idx = cpi->alt_fb_idx;
-    if ((cpi->oxcf.pass == 2) && cpi->multi_arf_allowed) {
-      const GF_GROUP *const gf_group = &cpi->twopass.gf_group;
-      arf_idx = gf_group->arf_update_idx[gf_group->index];
+    GF_GROUP *const gf_group = &cpi->twopass.gf_group;
+
+    if (cpi->multi_layer_arf) {
+      for (arf_idx = 0; arf_idx < REF_FRAMES; ++arf_idx) {
+        if (arf_idx != cpi->alt_fb_idx && arf_idx != cpi->lst_fb_idx &&
+            arf_idx != cpi->gld_fb_idx) {
+          int idx;
+          for (idx = 0; idx < gf_group->stack_size; ++idx)
+            if (arf_idx == gf_group->arf_index_stack[idx]) break;
+          if (idx == gf_group->stack_size) break;
+        }
+      }
     }
+    cpi->twopass.gf_group.top_arf_idx = arf_idx;
+
+    if (cpi->use_svc && cpi->svc.use_set_ref_frame_config &&
+        cpi->svc.temporal_layering_mode == VP9E_TEMPORAL_LAYERING_MODE_BYPASS)
+      return cpi->svc.update_buffer_slot[cpi->svc.spatial_layer_id];
     return (cpi->refresh_last_frame << cpi->lst_fb_idx) |
            (cpi->refresh_golden_frame << cpi->gld_fb_idx) |
            (cpi->refresh_alt_ref_frame << arf_idx);
@@ -1117,11 +1137,7 @@
         ((cpi->svc.number_temporal_layers > 1 &&
           cpi->oxcf.rc_mode == VPX_CBR) ||
          (cpi->svc.number_spatial_layers > 1 &&
-          cpi->svc.layer_context[cpi->svc.spatial_layer_id].is_key_frame) ||
-         (is_two_pass_svc(cpi) &&
-          cpi->svc.encode_empty_frame_state == ENCODING &&
-          cpi->svc.layer_context[0].frames_from_key_frame <
-              cpi->svc.number_temporal_layers + 1))) {
+          cpi->svc.layer_context[cpi->svc.spatial_layer_id].is_key_frame))) {
       found = 0;
     } else if (cfg != NULL) {
       found =
@@ -1153,8 +1169,10 @@
     case PROFILE_0: vpx_wb_write_literal(wb, 0, 2); break;
     case PROFILE_1: vpx_wb_write_literal(wb, 2, 2); break;
     case PROFILE_2: vpx_wb_write_literal(wb, 1, 2); break;
-    case PROFILE_3: vpx_wb_write_literal(wb, 6, 3); break;
-    default: assert(0);
+    default:
+      assert(profile == PROFILE_3);
+      vpx_wb_write_literal(wb, 6, 3);
+      break;
   }
 }
 
@@ -1191,7 +1209,13 @@
 
   write_profile(cm->profile, wb);
 
-  vpx_wb_write_bit(wb, 0);  // show_existing_frame
+  // If to use show existing frame.
+  vpx_wb_write_bit(wb, cm->show_existing_frame);
+  if (cm->show_existing_frame) {
+    vpx_wb_write_literal(wb, cpi->alt_fb_idx, 3);
+    return;
+  }
+
   vpx_wb_write_bit(wb, cm->frame_type);
   vpx_wb_write_bit(wb, cm->show_frame);
   vpx_wb_write_bit(wb, cm->error_resilient_mode);
@@ -1201,14 +1225,6 @@
     write_bitdepth_colorspace_sampling(cm, wb);
     write_frame_size(cm, wb);
   } else {
-    // In spatial svc if it's not error_resilient_mode then we need to code all
-    // visible frames as invisible. But we need to keep the show_frame flag so
-    // that the publisher could know whether it is supposed to be visible.
-    // So we will code the show_frame flag as it is. Then code the intra_only
-    // bit here. This will make the bitstream incompatible. In the player we
-    // will change to show_frame flag to 0, then add an one byte frame with
-    // show_existing_frame flag which tells the decoder which frame we want to
-    // show.
     if (!cm->show_frame) vpx_wb_write_bit(wb, cm->intra_only);
 
     if (!cm->error_resilient_mode)
@@ -1340,7 +1356,20 @@
   struct vpx_write_bit_buffer wb = { data, 0 };
   struct vpx_write_bit_buffer saved_wb;
 
+#if CONFIG_BITSTREAM_DEBUG
+  bitstream_queue_reset_write();
+#endif
+
   write_uncompressed_header(cpi, &wb);
+
+  // Skip the rest coding process if use show existing frame.
+  if (cpi->common.show_existing_frame) {
+    uncompressed_hdr_size = vpx_wb_bytes_written(&wb);
+    data += uncompressed_hdr_size;
+    *size = data - dest;
+    return;
+  }
+
   saved_wb = wb;
   vpx_wb_write_literal(&wb, 0, 16);  // don't know in advance first part. size
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.h
index 339c3fe..208651d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_bitstream.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_BITSTREAM_H_
-#define VP9_ENCODER_VP9_BITSTREAM_H_
+#ifndef VPX_VP9_ENCODER_VP9_BITSTREAM_H_
+#define VPX_VP9_ENCODER_VP9_BITSTREAM_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -38,16 +38,12 @@
 void vp9_pack_bitstream(VP9_COMP *cpi, uint8_t *dest, size_t *size);
 
 static INLINE int vp9_preserve_existing_gf(VP9_COMP *cpi) {
-  return !cpi->multi_arf_allowed && cpi->refresh_golden_frame &&
-         cpi->rc.is_src_frame_alt_ref &&
-         (!cpi->use_svc ||  // Add spatial svc base layer case here
-          (is_two_pass_svc(cpi) && cpi->svc.spatial_layer_id == 0 &&
-           cpi->svc.layer_context[0].gold_ref_idx >= 0 &&
-           cpi->oxcf.ss_enable_auto_arf[0]));
+  return cpi->refresh_golden_frame && cpi->rc.is_src_frame_alt_ref &&
+         !cpi->use_svc;
 }
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_BITSTREAM_H_
+#endif  // VPX_VP9_ENCODER_VP9_BITSTREAM_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_block.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_block.h
index 724205d..37a4605 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_block.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_block.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_BLOCK_H_
-#define VP9_ENCODER_VP9_BLOCK_H_
+#ifndef VPX_VP9_ENCODER_VP9_BLOCK_H_
+#define VPX_VP9_ENCODER_VP9_BLOCK_H_
 
 #include "vpx_util/vpx_thread.h"
 
@@ -34,8 +34,8 @@
   struct buf_2d src;
 
   // Quantizer setings
+  DECLARE_ALIGNED(16, int16_t, round_fp[8]);
   int16_t *quant_fp;
-  int16_t *round_fp;
   int16_t *quant;
   int16_t *quant_shift;
   int16_t *zbin;
@@ -92,6 +92,8 @@
   int sadperbit4;
   int rddiv;
   int rdmult;
+  int cb_rdmult;
+  int segment_id;
   int mb_energy;
 
   // These are set to their default values at the beginning, and then adjusted
@@ -115,6 +117,12 @@
   int *nmvsadcost_hp[2];
   int **mvsadcost;
 
+  // sharpness is used to disable skip mode and change rd_mult
+  int sharpness;
+
+  // aq mode is used to adjust rd based on segment.
+  int adjust_rdmult_by_segment;
+
   // These define limits to motion vector components to prevent them
   // from extending outside the UMV borders
   MvLimits mv_limits;
@@ -180,6 +188,8 @@
 
   int sb_pickmode_part;
 
+  int zero_temp_sad_source;
+
   // For each superblock: saves the content value (e.g., low/high sad/sumdiff)
   // based on source sad, prior to encoding the frame.
   uint8_t content_state_sb;
@@ -199,10 +209,13 @@
   void (*highbd_inv_txfm_add)(const tran_low_t *input, uint16_t *dest,
                               int stride, int eob, int bd);
 #endif
+  DECLARE_ALIGNED(16, uint8_t, est_pred[64 * 64]);
+
+  struct scale_factors *me_sf;
 };
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_BLOCK_H_
+#endif  // VPX_VP9_ENCODER_VP9_BLOCK_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_blockiness.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_blockiness.c
index 9ab57b5..da68a3c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_blockiness.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_blockiness.c
@@ -11,6 +11,7 @@
 
 #include "vpx/vpx_integer.h"
 #include "vpx_ports/system_state.h"
+#include "vp9/encoder/vp9_blockiness.h"
 
 static int horizontal_filter(const uint8_t *s) {
   return (s[1] - s[-2]) * 2 + (s[-1] - s[0]) * 6;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_blockiness.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_blockiness.h
new file mode 100644
index 0000000..e840cb2
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_blockiness.h
@@ -0,0 +1,26 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VP9_ENCODER_VP9_BLOCKINESS_H_
+#define VPX_VP9_ENCODER_VP9_BLOCKINESS_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+double vp9_get_blockiness(const uint8_t *img1, int img1_pitch,
+                          const uint8_t *img2, int img2_pitch, int width,
+                          int height);
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif  // VPX_VP9_ENCODER_VP9_BLOCKINESS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_context_tree.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_context_tree.c
index 52a81af..b74b902 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_context_tree.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_context_tree.c
@@ -139,17 +139,22 @@
 }
 
 void vp9_free_pc_tree(ThreadData *td) {
-  const int tree_nodes = 64 + 16 + 4 + 1;
   int i;
 
-  // Set up all 4x4 mode contexts
-  for (i = 0; i < 64; ++i) free_mode_context(&td->leaf_tree[i]);
+  if (td == NULL) return;
 
-  // Sets up all the leaf nodes in the tree.
-  for (i = 0; i < tree_nodes; ++i) free_tree_contexts(&td->pc_tree[i]);
+  if (td->leaf_tree != NULL) {
+    // Set up all 4x4 mode contexts
+    for (i = 0; i < 64; ++i) free_mode_context(&td->leaf_tree[i]);
+    vpx_free(td->leaf_tree);
+    td->leaf_tree = NULL;
+  }
 
-  vpx_free(td->pc_tree);
-  td->pc_tree = NULL;
-  vpx_free(td->leaf_tree);
-  td->leaf_tree = NULL;
+  if (td->pc_tree != NULL) {
+    const int tree_nodes = 64 + 16 + 4 + 1;
+    // Sets up all the leaf nodes in the tree.
+    for (i = 0; i < tree_nodes; ++i) free_tree_contexts(&td->pc_tree[i]);
+    vpx_free(td->pc_tree);
+    td->pc_tree = NULL;
+  }
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_context_tree.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_context_tree.h
index 73423c0..4e301cc 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_context_tree.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_context_tree.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_CONTEXT_TREE_H_
-#define VP9_ENCODER_VP9_CONTEXT_TREE_H_
+#ifndef VPX_VP9_ENCODER_VP9_CONTEXT_TREE_H_
+#define VPX_VP9_ENCODER_VP9_CONTEXT_TREE_H_
 
 #include "vp9/common/vp9_blockd.h"
 #include "vp9/encoder/vp9_block.h"
@@ -56,6 +56,7 @@
   // scope of refactoring.
   int rate;
   int64_t dist;
+  int64_t rdcost;
 
 #if CONFIG_VP9_TEMPORAL_DENOISING
   unsigned int newmv_sse;
@@ -75,6 +76,8 @@
 
   // Used for the machine learning-based early termination
   int32_t sum_y_eobs;
+  // Skip certain ref frames during RD search of rectangular partitions.
+  uint8_t skip_ref_frame_mask;
 } PICK_MODE_CONTEXT;
 
 typedef struct PC_TREE {
@@ -88,6 +91,9 @@
     struct PC_TREE *split[4];
     PICK_MODE_CONTEXT *leaf_split[4];
   };
+  // Obtained from a simple motion search. Used by the ML based partition search
+  // speed feature.
+  MV mv;
 } PC_TREE;
 
 void vp9_setup_pc_tree(struct VP9Common *cm, struct ThreadData *td);
@@ -97,4 +103,4 @@
 }  // extern "C"
 #endif
 
-#endif /* VP9_ENCODER_VP9_CONTEXT_TREE_H_ */
+#endif  // VPX_VP9_ENCODER_VP9_CONTEXT_TREE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_cost.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_cost.h
index 70a1a2e..638d72a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_cost.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_cost.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_COST_H_
-#define VP9_ENCODER_VP9_COST_H_
+#ifndef VPX_VP9_ENCODER_VP9_COST_H_
+#define VPX_VP9_ENCODER_VP9_COST_H_
 
 #include "vpx_dsp/prob.h"
 #include "vpx/vpx_integer.h"
@@ -55,4 +55,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_COST_H_
+#endif  // VPX_VP9_ENCODER_VP9_COST_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_dct.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_dct.c
index 5c66562..2f42c6a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_dct.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_dct.c
@@ -554,109 +554,6 @@
   }
 }
 
-void vp9_fdct8x8_quant_c(const int16_t *input, int stride,
-                         tran_low_t *coeff_ptr, intptr_t n_coeffs,
-                         int skip_block, const int16_t *round_ptr,
-                         const int16_t *quant_ptr, tran_low_t *qcoeff_ptr,
-                         tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr,
-                         uint16_t *eob_ptr, const int16_t *scan,
-                         const int16_t *iscan) {
-  int eob = -1;
-
-  int i, j;
-  tran_low_t intermediate[64];
-
-  (void)iscan;
-
-  // Transform columns
-  {
-    tran_low_t *output = intermediate;
-    tran_high_t s0, s1, s2, s3, s4, s5, s6, s7;  // canbe16
-    tran_high_t t0, t1, t2, t3;                  // needs32
-    tran_high_t x0, x1, x2, x3;                  // canbe16
-
-    int i;
-    for (i = 0; i < 8; i++) {
-      // stage 1
-      s0 = (input[0 * stride] + input[7 * stride]) * 4;
-      s1 = (input[1 * stride] + input[6 * stride]) * 4;
-      s2 = (input[2 * stride] + input[5 * stride]) * 4;
-      s3 = (input[3 * stride] + input[4 * stride]) * 4;
-      s4 = (input[3 * stride] - input[4 * stride]) * 4;
-      s5 = (input[2 * stride] - input[5 * stride]) * 4;
-      s6 = (input[1 * stride] - input[6 * stride]) * 4;
-      s7 = (input[0 * stride] - input[7 * stride]) * 4;
-
-      // fdct4(step, step);
-      x0 = s0 + s3;
-      x1 = s1 + s2;
-      x2 = s1 - s2;
-      x3 = s0 - s3;
-      t0 = (x0 + x1) * cospi_16_64;
-      t1 = (x0 - x1) * cospi_16_64;
-      t2 = x2 * cospi_24_64 + x3 * cospi_8_64;
-      t3 = -x2 * cospi_8_64 + x3 * cospi_24_64;
-      output[0 * 8] = (tran_low_t)fdct_round_shift(t0);
-      output[2 * 8] = (tran_low_t)fdct_round_shift(t2);
-      output[4 * 8] = (tran_low_t)fdct_round_shift(t1);
-      output[6 * 8] = (tran_low_t)fdct_round_shift(t3);
-
-      // Stage 2
-      t0 = (s6 - s5) * cospi_16_64;
-      t1 = (s6 + s5) * cospi_16_64;
-      t2 = fdct_round_shift(t0);
-      t3 = fdct_round_shift(t1);
-
-      // Stage 3
-      x0 = s4 + t2;
-      x1 = s4 - t2;
-      x2 = s7 - t3;
-      x3 = s7 + t3;
-
-      // Stage 4
-      t0 = x0 * cospi_28_64 + x3 * cospi_4_64;
-      t1 = x1 * cospi_12_64 + x2 * cospi_20_64;
-      t2 = x2 * cospi_12_64 + x1 * -cospi_20_64;
-      t3 = x3 * cospi_28_64 + x0 * -cospi_4_64;
-      output[1 * 8] = (tran_low_t)fdct_round_shift(t0);
-      output[3 * 8] = (tran_low_t)fdct_round_shift(t2);
-      output[5 * 8] = (tran_low_t)fdct_round_shift(t1);
-      output[7 * 8] = (tran_low_t)fdct_round_shift(t3);
-      input++;
-      output++;
-    }
-  }
-
-  // Rows
-  for (i = 0; i < 8; ++i) {
-    fdct8(&intermediate[i * 8], &coeff_ptr[i * 8]);
-    for (j = 0; j < 8; ++j) coeff_ptr[j + i * 8] /= 2;
-  }
-
-  memset(qcoeff_ptr, 0, n_coeffs * sizeof(*qcoeff_ptr));
-  memset(dqcoeff_ptr, 0, n_coeffs * sizeof(*dqcoeff_ptr));
-
-  if (!skip_block) {
-    // Quantization pass: All coefficients with index >= zero_flag are
-    // skippable. Note: zero_flag can be zero.
-    for (i = 0; i < n_coeffs; i++) {
-      const int rc = scan[i];
-      const int coeff = coeff_ptr[rc];
-      const int coeff_sign = (coeff >> 31);
-      const int abs_coeff = (coeff ^ coeff_sign) - coeff_sign;
-
-      int tmp = clamp(abs_coeff + round_ptr[rc != 0], INT16_MIN, INT16_MAX);
-      tmp = (tmp * quant_ptr[rc != 0]) >> 16;
-
-      qcoeff_ptr[rc] = (tmp ^ coeff_sign) - coeff_sign;
-      dqcoeff_ptr[rc] = qcoeff_ptr[rc] * dequant_ptr[rc != 0];
-
-      if (tmp) eob = i;
-    }
-  }
-  *eob_ptr = eob + 1;
-}
-
 void vp9_fht8x8_c(const int16_t *input, tran_low_t *output, int stride,
                   int tx_type) {
   if (tx_type == DCT_DCT) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_denoiser.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_denoiser.c
index b08ccaa..2885223 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_denoiser.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_denoiser.c
@@ -189,7 +189,7 @@
     int increase_denoising, int mi_row, int mi_col, PICK_MODE_CONTEXT *ctx,
     int motion_magnitude, int is_skin, int *zeromv_filter, int consec_zeromv,
     int num_spatial_layers, int width, int lst_fb_idx, int gld_fb_idx,
-    int use_svc, int spatial_layer) {
+    int use_svc, int spatial_layer, int use_gf_temporal_ref) {
   const int sse_diff = (ctx->newmv_sse == UINT_MAX)
                            ? 0
                            : ((int)ctx->zeromv_sse - (int)ctx->newmv_sse);
@@ -201,7 +201,7 @@
   int i;
   struct buf_2d saved_dst[MAX_MB_PLANE];
   struct buf_2d saved_pre[MAX_MB_PLANE];
-  RefBuffer *saved_block_refs[2];
+  const RefBuffer *saved_block_refs[2];
   MV_REFERENCE_FRAME saved_frame;
 
   frame = ctx->best_reference_frame;
@@ -219,8 +219,7 @@
 
   // If the best reference frame uses inter-prediction and there is enough of a
   // difference in sum-squared-error, use it.
-  if (frame != INTRA_FRAME && frame != ALTREF_FRAME &&
-      (frame != GOLDEN_FRAME || num_spatial_layers == 1) &&
+  if (frame != INTRA_FRAME && frame != ALTREF_FRAME && frame != GOLDEN_FRAME &&
       sse_diff > sse_diff_thresh(bs, increase_denoising, motion_magnitude)) {
     mi->ref_frame[0] = ctx->best_reference_frame;
     mi->mode = ctx->best_sse_inter_mode;
@@ -230,7 +229,9 @@
     frame = ctx->best_zeromv_reference_frame;
     ctx->newmv_sse = ctx->zeromv_sse;
     // Bias to last reference.
-    if (num_spatial_layers > 1 || frame == ALTREF_FRAME ||
+    if ((num_spatial_layers > 1 && !use_gf_temporal_ref) ||
+        frame == ALTREF_FRAME ||
+        (frame == GOLDEN_FRAME && use_gf_temporal_ref) ||
         (frame != LAST_FRAME &&
          ((ctx->zeromv_lastref_sse<(5 * ctx->zeromv_sse)>> 2) ||
           denoiser->denoising_level >= kDenHigh))) {
@@ -261,6 +262,14 @@
     denoise_layer_idx = num_spatial_layers - spatial_layer - 1;
   }
 
+  // Force copy (no denoise, copy source in denoised buffer) if
+  // running_avg_y[frame] is NULL.
+  if (denoiser->running_avg_y[frame].buffer_alloc == NULL) {
+    // Restore everything to its original state
+    *mi = saved_mi;
+    return COPY_BLOCK;
+  }
+
   if (ctx->newmv_sse > sse_thresh(bs, increase_denoising)) {
     // Restore everything to its original state
     *mi = saved_mi;
@@ -326,7 +335,8 @@
 
 void vp9_denoiser_denoise(VP9_COMP *cpi, MACROBLOCK *mb, int mi_row, int mi_col,
                           BLOCK_SIZE bs, PICK_MODE_CONTEXT *ctx,
-                          VP9_DENOISER_DECISION *denoiser_decision) {
+                          VP9_DENOISER_DECISION *denoiser_decision,
+                          int use_gf_temporal_ref) {
   int mv_col, mv_row;
   int motion_magnitude = 0;
   int zeromv_filter = 0;
@@ -349,6 +359,7 @@
   int is_skin = 0;
   int increase_denoising = 0;
   int consec_zeromv = 0;
+  int last_is_reference = cpi->ref_frame_flags & VP9_LAST_FLAG;
   mv_col = ctx->best_sse_mv.as_mv.col;
   mv_row = ctx->best_sse_mv.as_mv.row;
   motion_magnitude = mv_row * mv_row + mv_col * mv_col;
@@ -379,7 +390,7 @@
           // zero/small motion in skin detection is high, i.e, > 4).
           if (consec_zeromv < 4) {
             i = ymis;
-            j = xmis;
+            break;
           }
         }
       }
@@ -392,12 +403,18 @@
   }
   if (!is_skin && denoiser->denoising_level == kDenHigh) increase_denoising = 1;
 
-  if (denoiser->denoising_level >= kDenLow && !ctx->sb_skip_denoising)
+  // Copy block if LAST_FRAME is not a reference.
+  // Last doesn't always exist when SVC layers are dynamically changed, e.g. top
+  // spatial layer doesn't have last reference when it's brought up for the
+  // first time on the fly.
+  if (last_is_reference && denoiser->denoising_level >= kDenLow &&
+      !ctx->sb_skip_denoising)
     decision = perform_motion_compensation(
         &cpi->common, denoiser, mb, bs, increase_denoising, mi_row, mi_col, ctx,
         motion_magnitude, is_skin, &zeromv_filter, consec_zeromv,
         cpi->svc.number_spatial_layers, cpi->Source->y_width, cpi->lst_fb_idx,
-        cpi->gld_fb_idx, cpi->use_svc, cpi->svc.spatial_layer_id);
+        cpi->gld_fb_idx, cpi->use_svc, cpi->svc.spatial_layer_id,
+        use_gf_temporal_ref);
 
   if (decision == FILTER_BLOCK) {
     decision = vp9_denoiser_filter(src.buf, src.stride, mc_avg_start,
@@ -445,16 +462,16 @@
 }
 
 void vp9_denoiser_update_frame_info(
-    VP9_DENOISER *denoiser, YV12_BUFFER_CONFIG src, FRAME_TYPE frame_type,
-    int refresh_alt_ref_frame, int refresh_golden_frame, int refresh_last_frame,
-    int alt_fb_idx, int gld_fb_idx, int lst_fb_idx, int resized,
-    int svc_base_is_key, int second_spatial_layer) {
+    VP9_DENOISER *denoiser, YV12_BUFFER_CONFIG src, struct SVC *svc,
+    FRAME_TYPE frame_type, int refresh_alt_ref_frame, int refresh_golden_frame,
+    int refresh_last_frame, int alt_fb_idx, int gld_fb_idx, int lst_fb_idx,
+    int resized, int svc_refresh_denoiser_buffers, int second_spatial_layer) {
   const int shift = second_spatial_layer ? denoiser->num_ref_frames : 0;
   // Copy source into denoised reference buffers on KEY_FRAME or
   // if the just encoded frame was resized. For SVC, copy source if the base
   // spatial layer was key frame.
   if (frame_type == KEY_FRAME || resized != 0 || denoiser->reset ||
-      svc_base_is_key) {
+      svc_refresh_denoiser_buffers) {
     int i;
     // Start at 1 so as not to overwrite the INTRA_FRAME
     for (i = 1; i < denoiser->num_ref_frames; ++i) {
@@ -465,32 +482,43 @@
     return;
   }
 
-  // If more than one refresh occurs, must copy frame buffer.
-  if ((refresh_alt_ref_frame + refresh_golden_frame + refresh_last_frame) > 1) {
-    if (refresh_alt_ref_frame) {
-      copy_frame(&denoiser->running_avg_y[alt_fb_idx + 1 + shift],
-                 &denoiser->running_avg_y[INTRA_FRAME + shift]);
-    }
-    if (refresh_golden_frame) {
-      copy_frame(&denoiser->running_avg_y[gld_fb_idx + 1 + shift],
-                 &denoiser->running_avg_y[INTRA_FRAME + shift]);
-    }
-    if (refresh_last_frame) {
-      copy_frame(&denoiser->running_avg_y[lst_fb_idx + 1 + shift],
-                 &denoiser->running_avg_y[INTRA_FRAME + shift]);
+  if (svc->temporal_layering_mode == VP9E_TEMPORAL_LAYERING_MODE_BYPASS &&
+      svc->use_set_ref_frame_config) {
+    int i;
+    for (i = 0; i < REF_FRAMES; i++) {
+      if (svc->update_buffer_slot[svc->spatial_layer_id] & (1 << i))
+        copy_frame(&denoiser->running_avg_y[i + 1 + shift],
+                   &denoiser->running_avg_y[INTRA_FRAME + shift]);
     }
   } else {
-    if (refresh_alt_ref_frame) {
-      swap_frame_buffer(&denoiser->running_avg_y[alt_fb_idx + 1 + shift],
-                        &denoiser->running_avg_y[INTRA_FRAME + shift]);
-    }
-    if (refresh_golden_frame) {
-      swap_frame_buffer(&denoiser->running_avg_y[gld_fb_idx + 1 + shift],
-                        &denoiser->running_avg_y[INTRA_FRAME + shift]);
-    }
-    if (refresh_last_frame) {
-      swap_frame_buffer(&denoiser->running_avg_y[lst_fb_idx + 1 + shift],
-                        &denoiser->running_avg_y[INTRA_FRAME + shift]);
+    // If more than one refresh occurs, must copy frame buffer.
+    if ((refresh_alt_ref_frame + refresh_golden_frame + refresh_last_frame) >
+        1) {
+      if (refresh_alt_ref_frame) {
+        copy_frame(&denoiser->running_avg_y[alt_fb_idx + 1 + shift],
+                   &denoiser->running_avg_y[INTRA_FRAME + shift]);
+      }
+      if (refresh_golden_frame) {
+        copy_frame(&denoiser->running_avg_y[gld_fb_idx + 1 + shift],
+                   &denoiser->running_avg_y[INTRA_FRAME + shift]);
+      }
+      if (refresh_last_frame) {
+        copy_frame(&denoiser->running_avg_y[lst_fb_idx + 1 + shift],
+                   &denoiser->running_avg_y[INTRA_FRAME + shift]);
+      }
+    } else {
+      if (refresh_alt_ref_frame) {
+        swap_frame_buffer(&denoiser->running_avg_y[alt_fb_idx + 1 + shift],
+                          &denoiser->running_avg_y[INTRA_FRAME + shift]);
+      }
+      if (refresh_golden_frame) {
+        swap_frame_buffer(&denoiser->running_avg_y[gld_fb_idx + 1 + shift],
+                          &denoiser->running_avg_y[INTRA_FRAME + shift]);
+      }
+      if (refresh_last_frame) {
+        swap_frame_buffer(&denoiser->running_avg_y[lst_fb_idx + 1 + shift],
+                          &denoiser->running_avg_y[INTRA_FRAME + shift]);
+      }
     }
   }
 }
@@ -539,26 +567,38 @@
 }
 
 int vp9_denoiser_realloc_svc(VP9_COMMON *cm, VP9_DENOISER *denoiser,
-                             int svc_buf_shift, int refresh_alt,
-                             int refresh_gld, int refresh_lst, int alt_fb_idx,
-                             int gld_fb_idx, int lst_fb_idx) {
+                             struct SVC *svc, int svc_buf_shift,
+                             int refresh_alt, int refresh_gld, int refresh_lst,
+                             int alt_fb_idx, int gld_fb_idx, int lst_fb_idx) {
   int fail = 0;
-  if (refresh_alt) {
-    // Increase the frame buffer index by 1 to map it to the buffer index in the
-    // denoiser.
-    fail = vp9_denoiser_realloc_svc_helper(cm, denoiser,
-                                           alt_fb_idx + 1 + svc_buf_shift);
-    if (fail) return 1;
-  }
-  if (refresh_gld) {
-    fail = vp9_denoiser_realloc_svc_helper(cm, denoiser,
-                                           gld_fb_idx + 1 + svc_buf_shift);
-    if (fail) return 1;
-  }
-  if (refresh_lst) {
-    fail = vp9_denoiser_realloc_svc_helper(cm, denoiser,
-                                           lst_fb_idx + 1 + svc_buf_shift);
-    if (fail) return 1;
+  if (svc->temporal_layering_mode == VP9E_TEMPORAL_LAYERING_MODE_BYPASS &&
+      svc->use_set_ref_frame_config) {
+    int i;
+    for (i = 0; i < REF_FRAMES; i++) {
+      if (cm->frame_type == KEY_FRAME ||
+          svc->update_buffer_slot[svc->spatial_layer_id] & (1 << i)) {
+        fail = vp9_denoiser_realloc_svc_helper(cm, denoiser,
+                                               i + 1 + svc_buf_shift);
+      }
+    }
+  } else {
+    if (refresh_alt) {
+      // Increase the frame buffer index by 1 to map it to the buffer index in
+      // the denoiser.
+      fail = vp9_denoiser_realloc_svc_helper(cm, denoiser,
+                                             alt_fb_idx + 1 + svc_buf_shift);
+      if (fail) return 1;
+    }
+    if (refresh_gld) {
+      fail = vp9_denoiser_realloc_svc_helper(cm, denoiser,
+                                             gld_fb_idx + 1 + svc_buf_shift);
+      if (fail) return 1;
+    }
+    if (refresh_lst) {
+      fail = vp9_denoiser_realloc_svc_helper(cm, denoiser,
+                                             lst_fb_idx + 1 + svc_buf_shift);
+      if (fail) return 1;
+    }
   }
   return 0;
 }
@@ -648,9 +688,10 @@
   make_grayscale(&denoiser->running_avg_y[i]);
 #endif
   denoiser->frame_buffer_initialized = 1;
-  denoiser->denoising_level = kDenLow;
-  denoiser->prev_denoising_level = kDenLow;
+  denoiser->denoising_level = kDenMedium;
+  denoiser->prev_denoising_level = kDenMedium;
   denoiser->reset = 0;
+  denoiser->current_denoiser_frame = 0;
   return 0;
 }
 
@@ -675,13 +716,29 @@
   vpx_free_frame_buffer(&denoiser->last_source);
 }
 
-void vp9_denoiser_set_noise_level(VP9_DENOISER *denoiser, int noise_level) {
+static void force_refresh_longterm_ref(VP9_COMP *const cpi) {
+  SVC *const svc = &cpi->svc;
+  // If long term reference is used, force refresh of that slot, so
+  // denoiser buffer for long term reference stays in sync.
+  if (svc->use_gf_temporal_ref_current_layer) {
+    int index = svc->spatial_layer_id;
+    if (svc->number_spatial_layers == 3) index = svc->spatial_layer_id - 1;
+    assert(index >= 0);
+    cpi->alt_fb_idx = svc->buffer_gf_temporal_ref[index].idx;
+    cpi->refresh_alt_ref_frame = 1;
+  }
+}
+
+void vp9_denoiser_set_noise_level(VP9_COMP *const cpi, int noise_level) {
+  VP9_DENOISER *const denoiser = &cpi->denoiser;
   denoiser->denoising_level = noise_level;
   if (denoiser->denoising_level > kDenLowLow &&
-      denoiser->prev_denoising_level == kDenLowLow)
+      denoiser->prev_denoising_level == kDenLowLow) {
     denoiser->reset = 1;
-  else
+    force_refresh_longterm_ref(cpi);
+  } else {
     denoiser->reset = 0;
+  }
   denoiser->prev_denoising_level = denoiser->denoising_level;
 }
 
@@ -713,6 +770,56 @@
     return threshold;
 }
 
+void vp9_denoiser_reset_on_first_frame(VP9_COMP *const cpi) {
+  if (vp9_denoise_svc_non_key(cpi) &&
+      cpi->denoiser.current_denoiser_frame == 0) {
+    cpi->denoiser.reset = 1;
+    force_refresh_longterm_ref(cpi);
+  }
+}
+
+void vp9_denoiser_update_ref_frame(VP9_COMP *const cpi) {
+  VP9_COMMON *const cm = &cpi->common;
+  SVC *const svc = &cpi->svc;
+
+  if (cpi->oxcf.noise_sensitivity > 0 && denoise_svc(cpi) &&
+      cpi->denoiser.denoising_level > kDenLowLow) {
+    int svc_refresh_denoiser_buffers = 0;
+    int denoise_svc_second_layer = 0;
+    FRAME_TYPE frame_type = cm->intra_only ? KEY_FRAME : cm->frame_type;
+    cpi->denoiser.current_denoiser_frame++;
+    if (cpi->use_svc) {
+      const int svc_buf_shift =
+          svc->number_spatial_layers - svc->spatial_layer_id == 2
+              ? cpi->denoiser.num_ref_frames
+              : 0;
+      int layer =
+          LAYER_IDS_TO_IDX(svc->spatial_layer_id, svc->temporal_layer_id,
+                           svc->number_temporal_layers);
+      LAYER_CONTEXT *const lc = &svc->layer_context[layer];
+      svc_refresh_denoiser_buffers =
+          lc->is_key_frame || svc->spatial_layer_sync[svc->spatial_layer_id];
+      denoise_svc_second_layer =
+          svc->number_spatial_layers - svc->spatial_layer_id == 2 ? 1 : 0;
+      // Check if we need to allocate extra buffers in the denoiser
+      // for refreshed frames.
+      if (vp9_denoiser_realloc_svc(cm, &cpi->denoiser, svc, svc_buf_shift,
+                                   cpi->refresh_alt_ref_frame,
+                                   cpi->refresh_golden_frame,
+                                   cpi->refresh_last_frame, cpi->alt_fb_idx,
+                                   cpi->gld_fb_idx, cpi->lst_fb_idx))
+        vpx_internal_error(&cm->error, VPX_CODEC_MEM_ERROR,
+                           "Failed to re-allocate denoiser for SVC");
+    }
+    vp9_denoiser_update_frame_info(
+        &cpi->denoiser, *cpi->Source, svc, frame_type,
+        cpi->refresh_alt_ref_frame, cpi->refresh_golden_frame,
+        cpi->refresh_last_frame, cpi->alt_fb_idx, cpi->gld_fb_idx,
+        cpi->lst_fb_idx, cpi->resize_pending, svc_refresh_denoiser_buffers,
+        denoise_svc_second_layer);
+  }
+}
+
 #ifdef OUTPUT_YUV_DENOISED
 static void make_grayscale(YV12_BUFFER_CONFIG *yuv) {
   int r, c;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_denoiser.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_denoiser.h
index f4da24c..1973e98 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_denoiser.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_denoiser.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_DENOISER_H_
-#define VP9_ENCODER_DENOISER_H_
+#ifndef VPX_VP9_ENCODER_VP9_DENOISER_H_
+#define VPX_VP9_ENCODER_VP9_DENOISER_H_
 
 #include "vp9/encoder/vp9_block.h"
 #include "vp9/encoder/vp9_skin_detection.h"
@@ -50,6 +50,7 @@
   int reset;
   int num_ref_frames;
   int num_layers;
+  unsigned int current_denoiser_frame;
   VP9_DENOISER_LEVEL denoising_level;
   VP9_DENOISER_LEVEL prev_denoising_level;
 } VP9_DENOISER;
@@ -70,14 +71,15 @@
 struct SVC;
 
 void vp9_denoiser_update_frame_info(
-    VP9_DENOISER *denoiser, YV12_BUFFER_CONFIG src, FRAME_TYPE frame_type,
-    int refresh_alt_ref_frame, int refresh_golden_frame, int refresh_last_frame,
-    int alt_fb_idx, int gld_fb_idx, int lst_fb_idx, int resized,
-    int svc_base_is_key, int second_spatial_layer);
+    VP9_DENOISER *denoiser, YV12_BUFFER_CONFIG src, struct SVC *svc,
+    FRAME_TYPE frame_type, int refresh_alt_ref_frame, int refresh_golden_frame,
+    int refresh_last_frame, int alt_fb_idx, int gld_fb_idx, int lst_fb_idx,
+    int resized, int svc_refresh_denoiser_buffers, int second_spatial_layer);
 
 void vp9_denoiser_denoise(struct VP9_COMP *cpi, MACROBLOCK *mb, int mi_row,
                           int mi_col, BLOCK_SIZE bs, PICK_MODE_CONTEXT *ctx,
-                          VP9_DENOISER_DECISION *denoiser_decision);
+                          VP9_DENOISER_DECISION *denoiser_decision,
+                          int use_gf_temporal_ref);
 
 void vp9_denoiser_reset_frame_stats(PICK_MODE_CONTEXT *ctx);
 
@@ -86,9 +88,9 @@
                                      PICK_MODE_CONTEXT *ctx);
 
 int vp9_denoiser_realloc_svc(VP9_COMMON *cm, VP9_DENOISER *denoiser,
-                             int svc_buf_shift, int refresh_alt,
-                             int refresh_gld, int refresh_lst, int alt_fb_idx,
-                             int gld_fb_idx, int lst_fb_idx);
+                             struct SVC *svc, int svc_buf_shift,
+                             int refresh_alt, int refresh_gld, int refresh_lst,
+                             int alt_fb_idx, int gld_fb_idx, int lst_fb_idx);
 
 int vp9_denoiser_alloc(VP9_COMMON *cm, struct SVC *svc, VP9_DENOISER *denoiser,
                        int use_svc, int noise_sen, int width, int height,
@@ -110,7 +112,9 @@
 
 void vp9_denoiser_free(VP9_DENOISER *denoiser);
 
-void vp9_denoiser_set_noise_level(VP9_DENOISER *denoiser, int noise_level);
+void vp9_denoiser_set_noise_level(struct VP9_COMP *const cpi, int noise_level);
+
+void vp9_denoiser_reset_on_first_frame(struct VP9_COMP *const cpi);
 
 int64_t vp9_scale_part_thresh(int64_t threshold, VP9_DENOISER_LEVEL noise_level,
                               int content_state, int temporal_layer_id);
@@ -119,8 +123,10 @@
                                 VP9_DENOISER_LEVEL noise_level, int abs_sumdiff,
                                 int temporal_layer_id);
 
+void vp9_denoiser_update_ref_frame(struct VP9_COMP *const cpi);
+
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_DENOISER_H_
+#endif  // VPX_VP9_ENCODER_VP9_DENOISER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodeframe.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodeframe.c
index 6517fb3..13f9a1f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodeframe.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodeframe.c
@@ -8,6 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include <float.h>
 #include <limits.h>
 #include <math.h>
 #include <stdio.h>
@@ -21,6 +22,10 @@
 #include "vpx_ports/vpx_timer.h"
 #include "vpx_ports/system_state.h"
 
+#if CONFIG_MISMATCH_DEBUG
+#include "vpx_util/vpx_debug_util.h"
+#endif  // CONFIG_MISMATCH_DEBUG
+
 #include "vp9/common/vp9_common.h"
 #include "vp9/common/vp9_entropy.h"
 #include "vp9/common/vp9_entropymode.h"
@@ -32,16 +37,21 @@
 #include "vp9/common/vp9_reconinter.h"
 #include "vp9/common/vp9_seg_common.h"
 #include "vp9/common/vp9_tile_common.h"
-
+#if !CONFIG_REALTIME_ONLY
 #include "vp9/encoder/vp9_aq_360.h"
 #include "vp9/encoder/vp9_aq_complexity.h"
+#endif
 #include "vp9/encoder/vp9_aq_cyclicrefresh.h"
+#if !CONFIG_REALTIME_ONLY
 #include "vp9/encoder/vp9_aq_variance.h"
+#endif
 #include "vp9/encoder/vp9_encodeframe.h"
 #include "vp9/encoder/vp9_encodemb.h"
 #include "vp9/encoder/vp9_encodemv.h"
 #include "vp9/encoder/vp9_ethread.h"
 #include "vp9/encoder/vp9_extend.h"
+#include "vp9/encoder/vp9_multi_thread.h"
+#include "vp9/encoder/vp9_partition_models.h"
 #include "vp9/encoder/vp9_pickmode.h"
 #include "vp9/encoder/vp9_rd.h"
 #include "vp9/encoder/vp9_rdopt.h"
@@ -52,33 +62,6 @@
                               int output_enabled, int mi_row, int mi_col,
                               BLOCK_SIZE bsize, PICK_MODE_CONTEXT *ctx);
 
-// Machine learning-based early termination parameters.
-static const double train_mean[24] = {
-  303501.697372, 3042630.372158, 24.694696, 1.392182,
-  689.413511,    162.027012,     1.478213,  0.0,
-  135382.260230, 912738.513263,  28.845217, 1.515230,
-  544.158492,    131.807995,     1.436863,  0.0,
-  43682.377587,  208131.711766,  28.084737, 1.356677,
-  138.254122,    119.522553,     1.252322,  0.0
-};
-
-static const double train_stdm[24] = {
-  673689.212982, 5996652.516628, 0.024449, 1.989792,
-  985.880847,    0.014638,       2.001898, 0.0,
-  208798.775332, 1812548.443284, 0.018693, 1.838009,
-  396.986910,    0.015657,       1.332541, 0.0,
-  55888.847031,  448587.962714,  0.017900, 1.904776,
-  98.652832,     0.016598,       1.320992, 0.0
-};
-
-// Error tolerance: 0.01%-0.0.05%-0.1%
-static const double classifiers[24] = {
-  0.111736, 0.289977, 0.042219, 0.204765, 0.120410, -0.143863,
-  0.282376, 0.847811, 0.637161, 0.131570, 0.018636, 0.202134,
-  0.112797, 0.028162, 0.182450, 1.124367, 0.386133, 0.083700,
-  0.050028, 0.150873, 0.061119, 0.109318, 0.127255, 0.625211
-};
-
 // This is used as a reference when computing the source variance for the
 //  purpose of activity masking.
 // Eventually this should be replaced by custom no-reference routines,
@@ -176,6 +159,7 @@
 }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
+#if !CONFIG_REALTIME_ONLY
 static unsigned int get_sby_perpixel_diff_variance(VP9_COMP *cpi,
                                                    const struct buf_2d *ref,
                                                    int mi_row, int mi_col,
@@ -204,6 +188,72 @@
   else
     return BLOCK_8X8;
 }
+#endif  // !CONFIG_REALTIME_ONLY
+
+static void set_segment_index(VP9_COMP *cpi, MACROBLOCK *const x, int mi_row,
+                              int mi_col, BLOCK_SIZE bsize, int segment_index) {
+  VP9_COMMON *const cm = &cpi->common;
+  const struct segmentation *const seg = &cm->seg;
+  MACROBLOCKD *const xd = &x->e_mbd;
+  MODE_INFO *mi = xd->mi[0];
+
+  const AQ_MODE aq_mode = cpi->oxcf.aq_mode;
+  const uint8_t *const map =
+      seg->update_map ? cpi->segmentation_map : cm->last_frame_seg_map;
+
+  // Initialize the segmentation index as 0.
+  mi->segment_id = 0;
+
+  // Skip the rest if AQ mode is disabled.
+  if (!seg->enabled) return;
+
+  switch (aq_mode) {
+    case CYCLIC_REFRESH_AQ:
+      mi->segment_id = get_segment_id(cm, map, bsize, mi_row, mi_col);
+      break;
+#if !CONFIG_REALTIME_ONLY
+    case VARIANCE_AQ:
+      if (cm->frame_type == KEY_FRAME || cpi->refresh_alt_ref_frame ||
+          cpi->force_update_segmentation ||
+          (cpi->refresh_golden_frame && !cpi->rc.is_src_frame_alt_ref)) {
+        int min_energy;
+        int max_energy;
+        // Get sub block energy range
+        if (bsize >= BLOCK_32X32) {
+          vp9_get_sub_block_energy(cpi, x, mi_row, mi_col, bsize, &min_energy,
+                                   &max_energy);
+        } else {
+          min_energy = bsize <= BLOCK_16X16 ? x->mb_energy
+                                            : vp9_block_energy(cpi, x, bsize);
+        }
+        mi->segment_id = vp9_vaq_segment_id(min_energy);
+      } else {
+        mi->segment_id = get_segment_id(cm, map, bsize, mi_row, mi_col);
+      }
+      break;
+    case EQUATOR360_AQ:
+      if (cm->frame_type == KEY_FRAME || cpi->force_update_segmentation)
+        mi->segment_id = vp9_360aq_segment_id(mi_row, cm->mi_rows);
+      else
+        mi->segment_id = get_segment_id(cm, map, bsize, mi_row, mi_col);
+      break;
+#endif
+    case LOOKAHEAD_AQ:
+      mi->segment_id = get_segment_id(cm, map, bsize, mi_row, mi_col);
+      break;
+    case PSNR_AQ: mi->segment_id = segment_index; break;
+    case PERCEPTUAL_AQ: mi->segment_id = x->segment_id; break;
+    default:
+      // NO_AQ or PSNR_AQ
+      break;
+  }
+
+  // Set segment index from ROI map if it's enabled.
+  if (cpi->roi.enabled)
+    mi->segment_id = get_segment_id(cm, map, bsize, mi_row, mi_col);
+
+  vp9_init_plane_quantizers(cpi, x);
+}
 
 // Lighter version of set_offsets that only sets the mode info
 // pointers.
@@ -217,23 +267,57 @@
   x->mbmi_ext = x->mbmi_ext_base + (mi_row * cm->mi_cols + mi_col);
 }
 
+static void set_ssim_rdmult(VP9_COMP *const cpi, MACROBLOCK *const x,
+                            const BLOCK_SIZE bsize, const int mi_row,
+                            const int mi_col, int *const rdmult) {
+  const VP9_COMMON *const cm = &cpi->common;
+
+  const int bsize_base = BLOCK_16X16;
+  const int num_8x8_w = num_8x8_blocks_wide_lookup[bsize_base];
+  const int num_8x8_h = num_8x8_blocks_high_lookup[bsize_base];
+  const int num_cols = (cm->mi_cols + num_8x8_w - 1) / num_8x8_w;
+  const int num_rows = (cm->mi_rows + num_8x8_h - 1) / num_8x8_h;
+  const int num_bcols =
+      (num_8x8_blocks_wide_lookup[bsize] + num_8x8_w - 1) / num_8x8_w;
+  const int num_brows =
+      (num_8x8_blocks_high_lookup[bsize] + num_8x8_h - 1) / num_8x8_h;
+  int row, col;
+  double num_of_mi = 0.0;
+  double geom_mean_of_scale = 0.0;
+
+  assert(cpi->oxcf.tuning == VP8_TUNE_SSIM);
+
+  for (row = mi_row / num_8x8_w;
+       row < num_rows && row < mi_row / num_8x8_w + num_brows; ++row) {
+    for (col = mi_col / num_8x8_h;
+         col < num_cols && col < mi_col / num_8x8_h + num_bcols; ++col) {
+      const int index = row * num_cols + col;
+      geom_mean_of_scale += log(cpi->mi_ssim_rdmult_scaling_factors[index]);
+      num_of_mi += 1.0;
+    }
+  }
+  geom_mean_of_scale = exp(geom_mean_of_scale / num_of_mi);
+
+  *rdmult = (int)((double)(*rdmult) * geom_mean_of_scale);
+  *rdmult = VPXMAX(*rdmult, 0);
+  set_error_per_bit(x, *rdmult);
+  vpx_clear_system_state();
+}
+
 static void set_offsets(VP9_COMP *cpi, const TileInfo *const tile,
                         MACROBLOCK *const x, int mi_row, int mi_col,
                         BLOCK_SIZE bsize) {
   VP9_COMMON *const cm = &cpi->common;
+  const VP9EncoderConfig *const oxcf = &cpi->oxcf;
   MACROBLOCKD *const xd = &x->e_mbd;
-  MODE_INFO *mi;
   const int mi_width = num_8x8_blocks_wide_lookup[bsize];
   const int mi_height = num_8x8_blocks_high_lookup[bsize];
-  const struct segmentation *const seg = &cm->seg;
   MvLimits *const mv_limits = &x->mv_limits;
 
   set_skip_context(xd, mi_row, mi_col);
 
   set_mode_info_offsets(cm, x, xd, mi_row, mi_col);
 
-  mi = xd->mi[0];
-
   // Set up destination pointers.
   vp9_setup_dst_planes(xd->plane, get_frame_new_buffer(cm), mi_row, mi_col);
 
@@ -255,21 +339,8 @@
   // R/D setup.
   x->rddiv = cpi->rd.RDDIV;
   x->rdmult = cpi->rd.RDMULT;
-
-  // Setup segment ID.
-  if (seg->enabled) {
-    if (cpi->oxcf.aq_mode != VARIANCE_AQ && cpi->oxcf.aq_mode != LOOKAHEAD_AQ &&
-        cpi->oxcf.aq_mode != EQUATOR360_AQ) {
-      const uint8_t *const map =
-          seg->update_map ? cpi->segmentation_map : cm->last_frame_seg_map;
-      mi->segment_id = get_segment_id(cm, map, bsize, mi_row, mi_col);
-    }
-    vp9_init_plane_quantizers(cpi, x);
-
-    x->encode_breakout = cpi->segment_encode_breakout[mi->segment_id];
-  } else {
-    mi->segment_id = 0;
-    x->encode_breakout = cpi->encode_breakout;
+  if (oxcf->tuning == VP8_TUNE_SSIM) {
+    set_ssim_rdmult(cpi, x, bsize, mi_row, mi_col, &x->rdmult);
   }
 
   // required by vp9_append_sub8x8_mvs_for_idx() and vp9_find_best_ref_mvs()
@@ -385,16 +456,13 @@
         node->split[i] = &vt->split[i].part_variances.none;
       break;
     }
-    case BLOCK_4X4: {
+    default: {
       v4x4 *vt = (v4x4 *)data;
+      assert(bsize == BLOCK_4X4);
       node->part_variances = &vt->part_variances;
       for (i = 0; i < 4; i++) node->split[i] = &vt->split[i];
       break;
     }
-    default: {
-      assert(0);
-      break;
-    }
   }
 }
 
@@ -408,7 +476,8 @@
 static void get_variance(var *v) {
   v->variance =
       (int)(256 * (v->sum_square_error -
-                   ((v->sum_error * v->sum_error) >> v->log2_count)) >>
+                   (uint32_t)(((int64_t)v->sum_error * v->sum_error) >>
+                              v->log2_count)) >>
             v->log2_count);
 }
 
@@ -450,7 +519,7 @@
   // No check for vert/horiz split as too few samples for variance.
   if (bsize == bsize_min) {
     // Variance already computed to set the force_split.
-    if (cm->frame_type == KEY_FRAME) get_variance(&vt.part_variances->none);
+    if (frame_is_intra_only(cm)) get_variance(&vt.part_variances->none);
     if (mi_col + block_width / 2 < cm->mi_cols &&
         mi_row + block_height / 2 < cm->mi_rows &&
         vt.part_variances->none.variance < threshold) {
@@ -460,9 +529,9 @@
     return 0;
   } else if (bsize > bsize_min) {
     // Variance already computed to set the force_split.
-    if (cm->frame_type == KEY_FRAME) get_variance(&vt.part_variances->none);
+    if (frame_is_intra_only(cm)) get_variance(&vt.part_variances->none);
     // For key frame: take split for bsize above 32X32 or very high variance.
-    if (cm->frame_type == KEY_FRAME &&
+    if (frame_is_intra_only(cm) &&
         (bsize > BLOCK_32X32 ||
          vt.part_variances->none.variance > (threshold << 4))) {
       return 0;
@@ -534,8 +603,9 @@
 static void set_vbp_thresholds(VP9_COMP *cpi, int64_t thresholds[], int q,
                                int content_state) {
   VP9_COMMON *const cm = &cpi->common;
-  const int is_key_frame = (cm->frame_type == KEY_FRAME);
-  const int threshold_multiplier = is_key_frame ? 20 : 1;
+  const int is_key_frame = frame_is_intra_only(cm);
+  const int threshold_multiplier =
+      is_key_frame ? 20 : cpi->sf.variance_part_thresh_mult;
   int64_t threshold_base =
       (int64_t)(threshold_multiplier * cpi->y_dequant[q][1]);
 
@@ -579,6 +649,10 @@
       thresholds[0] = threshold_base >> 3;
       thresholds[1] = threshold_base >> 1;
       thresholds[2] = threshold_base << 3;
+      if (cpi->rc.avg_frame_qindex[INTER_FRAME] > 220)
+        thresholds[2] = thresholds[2] << 2;
+      else if (cpi->rc.avg_frame_qindex[INTER_FRAME] > 200)
+        thresholds[2] = thresholds[2] << 1;
     } else if (cm->width < 1280 && cm->height < 720) {
       thresholds[1] = (5 * threshold_base) >> 2;
     } else if (cm->width < 1920 && cm->height < 1080) {
@@ -586,6 +660,7 @@
     } else {
       thresholds[1] = (5 * threshold_base) >> 1;
     }
+    if (cpi->sf.disable_16x16part_nonkey) thresholds[2] = INT64_MAX;
   }
 }
 
@@ -593,7 +668,7 @@
                                            int content_state) {
   VP9_COMMON *const cm = &cpi->common;
   SPEED_FEATURES *const sf = &cpi->sf;
-  const int is_key_frame = (cm->frame_type == KEY_FRAME);
+  const int is_key_frame = frame_is_intra_only(cm);
   if (sf->partition_search_type != VAR_BASED_PARTITION &&
       sf->partition_search_type != REFERENCE_PARTITION) {
     return;
@@ -620,6 +695,11 @@
         cpi->vbp_threshold_copy = (cpi->y_dequant[q][1] << 3) > 8000
                                       ? (cpi->y_dequant[q][1] << 3)
                                       : 8000;
+      if (cpi->rc.high_source_sad ||
+          (cpi->use_svc && cpi->svc.high_source_sad_superframe)) {
+        cpi->vbp_threshold_sad = 0;
+        cpi->vbp_threshold_copy = 0;
+      }
     }
     cpi->vbp_threshold_minmax = 15 + (q >> 3);
   }
@@ -885,13 +965,13 @@
         set_block_size(cpi, x, xd, mi_row, mi_col, subsize);
         set_block_size(cpi, x, xd, mi_row, mi_col + bs, subsize);
         break;
-      case PARTITION_SPLIT:
+      default:
+        assert(partition == PARTITION_SPLIT);
         copy_partitioning_helper(cpi, x, xd, subsize, mi_row, mi_col);
         copy_partitioning_helper(cpi, x, xd, subsize, mi_row + bs, mi_col);
         copy_partitioning_helper(cpi, x, xd, subsize, mi_row, mi_col + bs);
         copy_partitioning_helper(cpi, x, xd, subsize, mi_row + bs, mi_col + bs);
         break;
-      default: assert(0);
     }
   }
 }
@@ -940,18 +1020,20 @@
   const int has_rows = (mi_row_high + bs_high) < cm->mi_rows;
   const int has_cols = (mi_col_high + bs_high) < cm->mi_cols;
 
-  const int row_boundary_block_scale_factor[BLOCK_SIZES] = {
-    13, 13, 13, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0
-  };
-  const int col_boundary_block_scale_factor[BLOCK_SIZES] = {
-    13, 13, 13, 2, 2, 0, 2, 2, 0, 2, 2, 0, 0
-  };
+  const int row_boundary_block_scale_factor[BLOCK_SIZES] = { 13, 13, 13, 1, 0,
+                                                             1,  1,  0,  1, 1,
+                                                             0,  1,  0 };
+  const int col_boundary_block_scale_factor[BLOCK_SIZES] = { 13, 13, 13, 2, 2,
+                                                             0,  2,  2,  0, 2,
+                                                             2,  0,  0 };
   int start_pos;
   BLOCK_SIZE bsize_low;
   PARTITION_TYPE partition_high;
 
   if (mi_row_high >= cm->mi_rows || mi_col_high >= cm->mi_cols) return 0;
-  if (mi_row >= (cm->mi_rows >> 1) || mi_col >= (cm->mi_cols >> 1)) return 0;
+  if (mi_row >= svc->mi_rows[svc->spatial_layer_id - 1] ||
+      mi_col >= svc->mi_cols[svc->spatial_layer_id - 1])
+    return 0;
 
   // Find corresponding (mi_col/mi_row) block down-scaled by 2x2.
   start_pos = mi_row * (svc->mi_stride[svc->spatial_layer_id - 1]) + mi_col;
@@ -1004,7 +1086,8 @@
           set_block_size(cpi, x, xd, mi_row_high, mi_col_high + bs_high,
                          subsize_high);
         break;
-      case PARTITION_SPLIT:
+      default:
+        assert(partition_high == PARTITION_SPLIT);
         if (scale_partitioning_svc(cpi, x, xd, subsize_high, mi_row, mi_col,
                                    mi_row_high, mi_col_high))
           return 1;
@@ -1020,7 +1103,6 @@
                                    mi_col_high + bs_high))
           return 1;
         break;
-      default: assert(0);
     }
   }
 
@@ -1067,13 +1149,13 @@
         prev_part[start_pos] = subsize;
         if (mi_col + bs < cm->mi_cols) prev_part[start_pos + bs] = subsize;
         break;
-      case PARTITION_SPLIT:
+      default:
+        assert(partition == PARTITION_SPLIT);
         update_partition_svc(cpi, subsize, mi_row, mi_col);
         update_partition_svc(cpi, subsize, mi_row + bs, mi_col);
         update_partition_svc(cpi, subsize, mi_row, mi_col + bs);
         update_partition_svc(cpi, subsize, mi_row + bs, mi_col + bs);
         break;
-      default: assert(0);
     }
   }
 }
@@ -1108,13 +1190,13 @@
         prev_part[start_pos] = subsize;
         if (mi_col + bs < cm->mi_cols) prev_part[start_pos + bs] = subsize;
         break;
-      case PARTITION_SPLIT:
+      default:
+        assert(partition == PARTITION_SPLIT);
         update_prev_partition_helper(cpi, subsize, mi_row, mi_col);
         update_prev_partition_helper(cpi, subsize, mi_row + bs, mi_col);
         update_prev_partition_helper(cpi, subsize, mi_row, mi_col + bs);
         update_prev_partition_helper(cpi, subsize, mi_row + bs, mi_col + bs);
         break;
-      default: assert(0);
     }
   }
 }
@@ -1130,20 +1212,25 @@
 }
 
 static void chroma_check(VP9_COMP *cpi, MACROBLOCK *x, int bsize,
-                         unsigned int y_sad, int is_key_frame) {
+                         unsigned int y_sad, int is_key_frame,
+                         int scene_change_detected) {
   int i;
   MACROBLOCKD *xd = &x->e_mbd;
+  int shift = 2;
 
   if (is_key_frame) return;
 
-  // For speed >= 8, avoid the chroma check if y_sad is above threshold.
-  if (cpi->oxcf.speed >= 8) {
+  // For speed > 8, avoid the chroma check if y_sad is above threshold.
+  if (cpi->oxcf.speed > 8) {
     if (y_sad > cpi->vbp_thresholds[1] &&
         (!cpi->noise_estimate.enabled ||
          vp9_noise_estimate_extract_level(&cpi->noise_estimate) < kMedium))
       return;
   }
 
+  if (cpi->oxcf.content == VP9E_CONTENT_SCREEN && scene_change_detected)
+    shift = 5;
+
   for (i = 1; i <= 2; ++i) {
     unsigned int uv_sad = UINT_MAX;
     struct macroblock_plane *p = &x->plane[i];
@@ -1156,7 +1243,7 @@
 
     // TODO(marpan): Investigate if we should lower this threshold if
     // superblock is detected as skin.
-    x->color_sensitivity[i - 1] = uv_sad > (y_sad >> 2);
+    x->color_sensitivity[i - 1] = uv_sad > (y_sad >> shift);
   }
 }
 
@@ -1206,6 +1293,7 @@
       cpi->content_state_sb_fd[sb_offset] = 0;
     }
   }
+  if (tmp_sad == 0) x->zero_temp_sad_source = 1;
   return tmp_sad;
 }
 
@@ -1241,21 +1329,42 @@
   int pixels_wide = 64, pixels_high = 64;
   int64_t thresholds[4] = { cpi->vbp_thresholds[0], cpi->vbp_thresholds[1],
                             cpi->vbp_thresholds[2], cpi->vbp_thresholds[3] };
+  int scene_change_detected =
+      cpi->rc.high_source_sad ||
+      (cpi->use_svc && cpi->svc.high_source_sad_superframe);
+  int force_64_split = scene_change_detected ||
+                       (cpi->oxcf.content == VP9E_CONTENT_SCREEN &&
+                        cpi->compute_source_sad_onepass &&
+                        cpi->sf.use_source_sad && !x->zero_temp_sad_source);
 
   // For the variance computation under SVC mode, we treat the frame as key if
   // the reference (base layer frame) is key frame (i.e., is_key_frame == 1).
-  const int is_key_frame =
-      (cm->frame_type == KEY_FRAME ||
+  int is_key_frame =
+      (frame_is_intra_only(cm) ||
        (is_one_pass_cbr_svc(cpi) &&
         cpi->svc.layer_context[cpi->svc.temporal_layer_id].is_key_frame));
   // Always use 4x4 partition for key frame.
-  const int use_4x4_partition = cm->frame_type == KEY_FRAME;
+  const int use_4x4_partition = frame_is_intra_only(cm);
   const int low_res = (cm->width <= 352 && cm->height <= 288);
   int variance4x4downsample[16];
   int segment_id;
   int sb_offset = (cm->mi_stride >> 3) * (mi_row >> 3) + (mi_col >> 3);
 
+  // For SVC: check if LAST frame is NULL or if the resolution of LAST is
+  // different than the current frame resolution, and if so, treat this frame
+  // as a key frame, for the purpose of the superblock partitioning.
+  // LAST == NULL can happen in some cases where enhancement spatial layers are
+  // enabled dyanmically in the stream and the only reference is the spatial
+  // reference (GOLDEN).
+  if (cpi->use_svc) {
+    const YV12_BUFFER_CONFIG *const ref = get_ref_frame_buffer(cpi, LAST_FRAME);
+    if (ref == NULL || ref->y_crop_height != cm->height ||
+        ref->y_crop_width != cm->width)
+      is_key_frame = 1;
+  }
+
   set_offsets(cpi, tile, x, mi_row, mi_col, BLOCK_64X64);
+  set_segment_index(cpi, x, mi_row, mi_col, BLOCK_64X64, 0);
   segment_id = xd->mi[0]->segment_id;
 
   if (cpi->oxcf.speed >= 8 || (cpi->use_svc && cpi->svc.non_reference_frame))
@@ -1289,6 +1398,7 @@
     }
     // If source_sad is low copy the partition without computing the y_sad.
     if (x->skip_low_source_sad && cpi->sf.copy_partition_flag &&
+        !force_64_split &&
         copy_partitioning(cpi, x, xd, mi_row, mi_col, segment_id, sb_offset)) {
       x->sb_use_mv_part = 1;
       if (cpi->sf.svc_use_lowres_part &&
@@ -1305,6 +1415,11 @@
   } else {
     set_vbp_thresholds(cpi, thresholds, cm->base_qindex, content_state);
   }
+  // Decrease 32x32 split threshold for screen on base layer, for scene
+  // change/high motion frames.
+  if (cpi->oxcf.content == VP9E_CONTENT_SCREEN &&
+      cpi->svc.spatial_layer_id == 0 && force_64_split)
+    thresholds[1] = 3 * thresholds[1] >> 2;
 
   // For non keyframes, disable 4x4 average for low resolution when speed = 8
   threshold_4x4avg = (cpi->oxcf.speed < 8) ? thresholds[1] << 1 : INT64_MAX;
@@ -1317,7 +1432,7 @@
 
   // Index for force_split: 0 for 64x64, 1-4 for 32x32 blocks,
   // 5-20 for the 16x16 blocks.
-  force_split[0] = 0;
+  force_split[0] = force_64_split;
 
   if (!is_key_frame) {
     // In the case of spatial/temporal scalable coding, the assumption here is
@@ -1333,7 +1448,8 @@
 
     assert(yv12 != NULL);
 
-    if (!(is_one_pass_cbr_svc(cpi) && cpi->svc.spatial_layer_id)) {
+    if (!(is_one_pass_cbr_svc(cpi) && cpi->svc.spatial_layer_id) ||
+        cpi->svc.use_gf_temporal_ref_current_layer) {
       // For now, GOLDEN will not be used for non-zero spatial layers, since
       // it may not be a temporal reference.
       yv12_g = get_ref_frame_buffer(cpi, GOLDEN_FRAME);
@@ -1374,10 +1490,28 @@
           x->plane[0].src.buf, x->plane[0].src.stride, xd->plane[0].pre[0].buf,
           xd->plane[0].pre[0].stride);
     } else {
-      y_sad = vp9_int_pro_motion_estimation(cpi, x, bsize, mi_row, mi_col);
+      const MV dummy_mv = { 0, 0 };
+      y_sad = vp9_int_pro_motion_estimation(cpi, x, bsize, mi_row, mi_col,
+                                            &dummy_mv);
       x->sb_use_mv_part = 1;
       x->sb_mvcol_part = mi->mv[0].as_mv.col;
       x->sb_mvrow_part = mi->mv[0].as_mv.row;
+      if (cpi->oxcf.content == VP9E_CONTENT_SCREEN &&
+          cpi->svc.spatial_layer_id == cpi->svc.first_spatial_layer_to_encode &&
+          cpi->svc.high_num_blocks_with_motion && !x->zero_temp_sad_source &&
+          cm->width > 640 && cm->height > 480) {
+        // Disable split below 16x16 block size when scroll motion (horz or
+        // vert) is detected.
+        // TODO(marpan/jianj): Improve this condition: issue is that search
+        // range is hard-coded/limited in vp9_int_pro_motion_estimation() so
+        // scroll motion may not be detected here.
+        if (((abs(x->sb_mvrow_part) >= 48 && abs(x->sb_mvcol_part) <= 8) ||
+             (abs(x->sb_mvcol_part) >= 48 && abs(x->sb_mvrow_part) <= 8)) &&
+            y_sad < 100000) {
+          compute_minmax_variance = 0;
+          thresholds[2] = INT64_MAX;
+        }
+      }
     }
 
     y_sad_last = y_sad;
@@ -1415,7 +1549,7 @@
           mi_row + block_height / 2 < cm->mi_rows) {
         set_block_size(cpi, x, xd, mi_row, mi_col, BLOCK_64X64);
         x->variance_low[0] = 1;
-        chroma_check(cpi, x, bsize, y_sad, is_key_frame);
+        chroma_check(cpi, x, bsize, y_sad, is_key_frame, scene_change_detected);
         if (cpi->sf.svc_use_lowres_part &&
             cpi->svc.spatial_layer_id == cpi->svc.number_spatial_layers - 2)
           update_partition_svc(cpi, BLOCK_64X64, mi_row, mi_col);
@@ -1432,7 +1566,7 @@
     // TODO(jianj) : tune the threshold.
     if (cpi->sf.copy_partition_flag && y_sad_last < cpi->vbp_threshold_copy &&
         copy_partitioning(cpi, x, xd, mi_row, mi_col, segment_id, sb_offset)) {
-      chroma_check(cpi, x, bsize, y_sad, is_key_frame);
+      chroma_check(cpi, x, bsize, y_sad, is_key_frame, scene_change_detected);
       if (cpi->sf.svc_use_lowres_part &&
           cpi->svc.spatial_layer_id == cpi->svc.number_spatial_layers - 2)
         update_partition_svc(cpi, BLOCK_64X64, mi_row, mi_col);
@@ -1648,11 +1782,11 @@
     }
   }
 
-  if (cm->frame_type != KEY_FRAME && cpi->sf.copy_partition_flag) {
+  if (!frame_is_intra_only(cm) && cpi->sf.copy_partition_flag) {
     update_prev_partition(cpi, x, segment_id, mi_row, mi_col, sb_offset);
   }
 
-  if (cm->frame_type != KEY_FRAME && cpi->sf.svc_use_lowres_part &&
+  if (!frame_is_intra_only(cm) && cpi->sf.svc_use_lowres_part &&
       cpi->svc.spatial_layer_id == cpi->svc.number_spatial_layers - 2)
     update_partition_svc(cpi, BLOCK_64X64, mi_row, mi_col);
 
@@ -1661,11 +1795,12 @@
                           mi_col, mi_row);
   }
 
-  chroma_check(cpi, x, bsize, y_sad, is_key_frame);
+  chroma_check(cpi, x, bsize, y_sad, is_key_frame, scene_change_detected);
   if (vt2) vpx_free(vt2);
   return 0;
 }
 
+#if !CONFIG_REALTIME_ONLY
 static void update_state(VP9_COMP *cpi, ThreadData *td, PICK_MODE_CONTEXT *ctx,
                          int mi_row, int mi_col, BLOCK_SIZE bsize,
                          int output_enabled) {
@@ -1794,6 +1929,7 @@
     }
   }
 }
+#endif  // !CONFIG_REALTIME_ONLY
 
 void vp9_setup_src_planes(MACROBLOCK *x, const YV12_BUFFER_CONFIG *src,
                           int mi_row, int mi_col) {
@@ -1836,20 +1972,41 @@
   vp9_rd_cost_init(rd_cost);
 }
 
-static int set_segment_rdmult(VP9_COMP *const cpi, MACROBLOCK *const x,
-                              int8_t segment_id) {
-  int segment_qindex;
+#if !CONFIG_REALTIME_ONLY
+static void set_segment_rdmult(VP9_COMP *const cpi, MACROBLOCK *const x,
+                               int mi_row, int mi_col, BLOCK_SIZE bsize,
+                               AQ_MODE aq_mode) {
   VP9_COMMON *const cm = &cpi->common;
+  const VP9EncoderConfig *const oxcf = &cpi->oxcf;
+  const uint8_t *const map =
+      cm->seg.update_map ? cpi->segmentation_map : cm->last_frame_seg_map;
+
   vp9_init_plane_quantizers(cpi, x);
   vpx_clear_system_state();
-  segment_qindex = vp9_get_qindex(&cm->seg, segment_id, cm->base_qindex);
-  return vp9_compute_rd_mult(cpi, segment_qindex + cm->y_dc_delta_q);
+
+  if (aq_mode == NO_AQ || aq_mode == PSNR_AQ) {
+    if (cpi->sf.enable_tpl_model) x->rdmult = x->cb_rdmult;
+  } else if (aq_mode == PERCEPTUAL_AQ) {
+    x->rdmult = x->cb_rdmult;
+  } else if (aq_mode == CYCLIC_REFRESH_AQ) {
+    // If segment is boosted, use rdmult for that segment.
+    if (cyclic_refresh_segment_id_boosted(
+            get_segment_id(cm, map, bsize, mi_row, mi_col)))
+      x->rdmult = vp9_cyclic_refresh_get_rdmult(cpi->cyclic_refresh);
+  } else {
+    x->rdmult = vp9_compute_rd_mult(cpi, cm->base_qindex + cm->y_dc_delta_q);
+  }
+
+  if (oxcf->tuning == VP8_TUNE_SSIM) {
+    set_ssim_rdmult(cpi, x, bsize, mi_row, mi_col, &x->rdmult);
+  }
 }
 
 static void rd_pick_sb_modes(VP9_COMP *cpi, TileDataEnc *tile_data,
                              MACROBLOCK *const x, int mi_row, int mi_col,
                              RD_COST *rd_cost, BLOCK_SIZE bsize,
-                             PICK_MODE_CONTEXT *ctx, int64_t best_rd) {
+                             PICK_MODE_CONTEXT *ctx, int rate_in_best_rd,
+                             int64_t dist_in_best_rd) {
   VP9_COMMON *const cm = &cpi->common;
   TileInfo *const tile_info = &tile_data->tile_info;
   MACROBLOCKD *const xd = &x->e_mbd;
@@ -1858,6 +2015,7 @@
   struct macroblockd_plane *const pd = xd->plane;
   const AQ_MODE aq_mode = cpi->oxcf.aq_mode;
   int i, orig_rdmult;
+  int64_t best_rd = INT64_MAX;
 
   vpx_clear_system_state();
 
@@ -1914,43 +2072,11 @@
     x->block_qcoeff_opt = cpi->sf.allow_quant_coeff_opt;
   }
 
-  if (aq_mode == VARIANCE_AQ) {
-    const int energy =
-        bsize <= BLOCK_16X16 ? x->mb_energy : vp9_block_energy(cpi, x, bsize);
-
-    if (cm->frame_type == KEY_FRAME || cpi->refresh_alt_ref_frame ||
-        cpi->force_update_segmentation ||
-        (cpi->refresh_golden_frame && !cpi->rc.is_src_frame_alt_ref)) {
-      mi->segment_id = vp9_vaq_segment_id(energy);
-    } else {
-      const uint8_t *const map =
-          cm->seg.update_map ? cpi->segmentation_map : cm->last_frame_seg_map;
-      mi->segment_id = get_segment_id(cm, map, bsize, mi_row, mi_col);
-    }
-    x->rdmult = set_segment_rdmult(cpi, x, mi->segment_id);
-  } else if (aq_mode == LOOKAHEAD_AQ) {
-    const uint8_t *const map = cpi->segmentation_map;
-
-    // I do not change rdmult here consciously.
-    mi->segment_id = get_segment_id(cm, map, bsize, mi_row, mi_col);
-  } else if (aq_mode == EQUATOR360_AQ) {
-    if (cm->frame_type == KEY_FRAME || cpi->force_update_segmentation) {
-      mi->segment_id = vp9_360aq_segment_id(mi_row, cm->mi_rows);
-    } else {
-      const uint8_t *const map =
-          cm->seg.update_map ? cpi->segmentation_map : cm->last_frame_seg_map;
-      mi->segment_id = get_segment_id(cm, map, bsize, mi_row, mi_col);
-    }
-    x->rdmult = set_segment_rdmult(cpi, x, mi->segment_id);
-  } else if (aq_mode == COMPLEXITY_AQ) {
-    x->rdmult = set_segment_rdmult(cpi, x, mi->segment_id);
-  } else if (aq_mode == CYCLIC_REFRESH_AQ) {
-    const uint8_t *const map =
-        cm->seg.update_map ? cpi->segmentation_map : cm->last_frame_seg_map;
-    // If segment is boosted, use rdmult for that segment.
-    if (cyclic_refresh_segment_id_boosted(
-            get_segment_id(cm, map, bsize, mi_row, mi_col)))
-      x->rdmult = vp9_cyclic_refresh_get_rdmult(cpi->cyclic_refresh);
+  set_segment_index(cpi, x, mi_row, mi_col, bsize, 0);
+  set_segment_rdmult(cpi, x, mi_row, mi_col, bsize, aq_mode);
+  if (rate_in_best_rd < INT_MAX && dist_in_best_rd < INT64_MAX) {
+    best_rd = vp9_calculate_rd_cost(x->rdmult, x->rddiv, rate_in_best_rd,
+                                    dist_in_best_rd);
   }
 
   // Find best coding mode & reconstruct the MB so it is available
@@ -1979,15 +2105,19 @@
     vp9_caq_select_segment(cpi, x, bsize, mi_row, mi_col, rd_cost->rate);
   }
 
-  x->rdmult = orig_rdmult;
-
   // TODO(jingning) The rate-distortion optimization flow needs to be
   // refactored to provide proper exit/return handle.
-  if (rd_cost->rate == INT_MAX) rd_cost->rdcost = INT64_MAX;
+  if (rd_cost->rate == INT_MAX || rd_cost->dist == INT64_MAX)
+    rd_cost->rdcost = INT64_MAX;
+  else
+    rd_cost->rdcost = RDCOST(x->rdmult, x->rddiv, rd_cost->rate, rd_cost->dist);
+
+  x->rdmult = orig_rdmult;
 
   ctx->rate = rd_cost->rate;
   ctx->dist = rd_cost->dist;
 }
+#endif  // !CONFIG_REALTIME_ONLY
 
 static void update_stats(VP9_COMMON *cm, ThreadData *td) {
   const MACROBLOCK *x = &td->mb;
@@ -2013,8 +2143,10 @@
                             [has_second_ref(mi)]++;
 
         if (has_second_ref(mi)) {
-          counts->comp_ref[vp9_get_pred_context_comp_ref_p(cm, xd)]
-                          [ref0 == GOLDEN_FRAME]++;
+          const int idx = cm->ref_frame_sign_bias[cm->comp_fixed_ref];
+          const int ctx = vp9_get_pred_context_comp_ref_p(cm, xd);
+          const int bit = mi->ref_frame[!idx] == cm->comp_var_ref[1];
+          counts->comp_ref[ctx][bit]++;
         } else {
           counts->single_ref[vp9_get_pred_context_single_ref_p1(xd)][0]
                             [ref0 != LAST_FRAME]++;
@@ -2046,6 +2178,7 @@
   }
 }
 
+#if !CONFIG_REALTIME_ONLY
 static void restore_context(MACROBLOCK *const x, int mi_row, int mi_col,
                             ENTROPY_CONTEXT a[16 * MAX_MB_PLANE],
                             ENTROPY_CONTEXT l[16 * MAX_MB_PLANE],
@@ -2110,6 +2243,16 @@
                      PICK_MODE_CONTEXT *ctx) {
   MACROBLOCK *const x = &td->mb;
   set_offsets(cpi, tile, x, mi_row, mi_col, bsize);
+
+  if (cpi->sf.enable_tpl_model &&
+      (cpi->oxcf.aq_mode == NO_AQ || cpi->oxcf.aq_mode == PERCEPTUAL_AQ)) {
+    const VP9EncoderConfig *const oxcf = &cpi->oxcf;
+    x->rdmult = x->cb_rdmult;
+    if (oxcf->tuning == VP8_TUNE_SSIM) {
+      set_ssim_rdmult(cpi, x, bsize, mi_row, mi_col, &x->rdmult);
+    }
+  }
+
   update_state(cpi, td, ctx, mi_row, mi_col, bsize, output_enabled);
   encode_superblock(cpi, td, tp, output_enabled, mi_row, mi_col, bsize, ctx);
 
@@ -2168,7 +2311,8 @@
                  subsize, &pc_tree->horizontal[1]);
       }
       break;
-    case PARTITION_SPLIT:
+    default:
+      assert(partition == PARTITION_SPLIT);
       if (bsize == BLOCK_8X8) {
         encode_b(cpi, tile, td, tp, mi_row, mi_col, output_enabled, subsize,
                  pc_tree->leaf_split[0]);
@@ -2183,12 +2327,12 @@
                   subsize, pc_tree->split[3]);
       }
       break;
-    default: assert(0 && "Invalid partition type."); break;
   }
 
   if (partition != PARTITION_SPLIT || bsize == BLOCK_8X8)
     update_partition_context(xd, mi_row, mi_col, subsize, bsize);
 }
+#endif  // !CONFIG_REALTIME_ONLY
 
 // Check to see if the given partition size is allowed for a specified number
 // of 8x8 block rows and columns remaining in the image.
@@ -2393,17 +2537,15 @@
   *(xd->mi[0]) = ctx->mic;
   *(x->mbmi_ext) = ctx->mbmi_ext;
 
-  if (seg->enabled && cpi->oxcf.aq_mode != NO_AQ) {
-    // For in frame complexity AQ or variance AQ, copy segment_id from
-    // segmentation_map.
-    if (cpi->oxcf.aq_mode != CYCLIC_REFRESH_AQ) {
+  if (seg->enabled && (cpi->oxcf.aq_mode != NO_AQ || cpi->roi.enabled)) {
+    // Setting segmentation map for cyclic_refresh.
+    if (cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ) {
+      vp9_cyclic_refresh_update_segment(cpi, mi, mi_row, mi_col, bsize,
+                                        ctx->rate, ctx->dist, x->skip, p);
+    } else {
       const uint8_t *const map =
           seg->update_map ? cpi->segmentation_map : cm->last_frame_seg_map;
       mi->segment_id = get_segment_id(cm, map, bsize, mi_row, mi_col);
-    } else {
-      // Setting segmentation map for cyclic_refresh.
-      vp9_cyclic_refresh_update_segment(cpi, mi, mi_row, mi_col, bsize,
-                                        ctx->rate, ctx->dist, x->skip, p);
     }
     vp9_init_plane_quantizers(cpi, x);
   }
@@ -2441,7 +2583,7 @@
   }
 
   x->skip = ctx->skip;
-  x->skip_txfm[0] = mi->segment_id ? 0 : ctx->skip_txfm[0];
+  x->skip_txfm[0] = (mi->segment_id || xd->lossless) ? 0 : ctx->skip_txfm[0];
 }
 
 static void encode_b_rt(VP9_COMP *cpi, ThreadData *td,
@@ -2509,7 +2651,8 @@
                     subsize, &pc_tree->horizontal[1]);
       }
       break;
-    case PARTITION_SPLIT:
+    default:
+      assert(partition == PARTITION_SPLIT);
       subsize = get_subsize(bsize, PARTITION_SPLIT);
       encode_sb_rt(cpi, td, tile, tp, mi_row, mi_col, output_enabled, subsize,
                    pc_tree->split[0]);
@@ -2520,13 +2663,13 @@
       encode_sb_rt(cpi, td, tile, tp, mi_row + hbs, mi_col + hbs,
                    output_enabled, subsize, pc_tree->split[3]);
       break;
-    default: assert(0 && "Invalid partition type."); break;
   }
 
   if (partition != PARTITION_SPLIT || bsize == BLOCK_8X8)
     update_partition_context(xd, mi_row, mi_col, subsize, bsize);
 }
 
+#if !CONFIG_REALTIME_ONLY
 static void rd_use_partition(VP9_COMP *cpi, ThreadData *td,
                              TileDataEnc *tile_data, MODE_INFO **mi_8x8,
                              TOKENEXTRA **tp, int mi_row, int mi_col,
@@ -2595,7 +2738,7 @@
         mi_col + (mi_step >> 1) < cm->mi_cols) {
       pc_tree->partitioning = PARTITION_NONE;
       rd_pick_sb_modes(cpi, tile_data, x, mi_row, mi_col, &none_rdc, bsize, ctx,
-                       INT64_MAX);
+                       INT_MAX, INT64_MAX);
 
       pl = partition_plane_context(xd, mi_row, mi_col, bsize);
 
@@ -2614,11 +2757,12 @@
   switch (partition) {
     case PARTITION_NONE:
       rd_pick_sb_modes(cpi, tile_data, x, mi_row, mi_col, &last_part_rdc, bsize,
-                       ctx, INT64_MAX);
+                       ctx, INT_MAX, INT64_MAX);
       break;
     case PARTITION_HORZ:
+      pc_tree->horizontal[0].skip_ref_frame_mask = 0;
       rd_pick_sb_modes(cpi, tile_data, x, mi_row, mi_col, &last_part_rdc,
-                       subsize, &pc_tree->horizontal[0], INT64_MAX);
+                       subsize, &pc_tree->horizontal[0], INT_MAX, INT64_MAX);
       if (last_part_rdc.rate != INT_MAX && bsize >= BLOCK_8X8 &&
           mi_row + (mi_step >> 1) < cm->mi_rows) {
         RD_COST tmp_rdc;
@@ -2626,8 +2770,10 @@
         vp9_rd_cost_init(&tmp_rdc);
         update_state(cpi, td, ctx, mi_row, mi_col, subsize, 0);
         encode_superblock(cpi, td, tp, 0, mi_row, mi_col, subsize, ctx);
+        pc_tree->horizontal[1].skip_ref_frame_mask = 0;
         rd_pick_sb_modes(cpi, tile_data, x, mi_row + (mi_step >> 1), mi_col,
-                         &tmp_rdc, subsize, &pc_tree->horizontal[1], INT64_MAX);
+                         &tmp_rdc, subsize, &pc_tree->horizontal[1], INT_MAX,
+                         INT64_MAX);
         if (tmp_rdc.rate == INT_MAX || tmp_rdc.dist == INT64_MAX) {
           vp9_rd_cost_reset(&last_part_rdc);
           break;
@@ -2638,8 +2784,9 @@
       }
       break;
     case PARTITION_VERT:
+      pc_tree->vertical[0].skip_ref_frame_mask = 0;
       rd_pick_sb_modes(cpi, tile_data, x, mi_row, mi_col, &last_part_rdc,
-                       subsize, &pc_tree->vertical[0], INT64_MAX);
+                       subsize, &pc_tree->vertical[0], INT_MAX, INT64_MAX);
       if (last_part_rdc.rate != INT_MAX && bsize >= BLOCK_8X8 &&
           mi_col + (mi_step >> 1) < cm->mi_cols) {
         RD_COST tmp_rdc;
@@ -2647,9 +2794,10 @@
         vp9_rd_cost_init(&tmp_rdc);
         update_state(cpi, td, ctx, mi_row, mi_col, subsize, 0);
         encode_superblock(cpi, td, tp, 0, mi_row, mi_col, subsize, ctx);
-        rd_pick_sb_modes(cpi, tile_data, x, mi_row, mi_col + (mi_step >> 1),
-                         &tmp_rdc, subsize,
-                         &pc_tree->vertical[bsize > BLOCK_8X8], INT64_MAX);
+        pc_tree->vertical[bsize > BLOCK_8X8].skip_ref_frame_mask = 0;
+        rd_pick_sb_modes(
+            cpi, tile_data, x, mi_row, mi_col + (mi_step >> 1), &tmp_rdc,
+            subsize, &pc_tree->vertical[bsize > BLOCK_8X8], INT_MAX, INT64_MAX);
         if (tmp_rdc.rate == INT_MAX || tmp_rdc.dist == INT64_MAX) {
           vp9_rd_cost_reset(&last_part_rdc);
           break;
@@ -2659,10 +2807,11 @@
         last_part_rdc.rdcost += tmp_rdc.rdcost;
       }
       break;
-    case PARTITION_SPLIT:
+    default:
+      assert(partition == PARTITION_SPLIT);
       if (bsize == BLOCK_8X8) {
         rd_pick_sb_modes(cpi, tile_data, x, mi_row, mi_col, &last_part_rdc,
-                         subsize, pc_tree->leaf_split[0], INT64_MAX);
+                         subsize, pc_tree->leaf_split[0], INT_MAX, INT64_MAX);
         break;
       }
       last_part_rdc.rate = 0;
@@ -2689,7 +2838,6 @@
         last_part_rdc.dist += tmp_rdc.dist;
       }
       break;
-    default: assert(0); break;
   }
 
   pl = partition_plane_context(xd, mi_row, mi_col, bsize);
@@ -2727,7 +2875,7 @@
       pc_tree->split[i]->partitioning = PARTITION_NONE;
       rd_pick_sb_modes(cpi, tile_data, x, mi_row + y_idx, mi_col + x_idx,
                        &tmp_rdc, split_subsize, &pc_tree->split[i]->none,
-                       INT64_MAX);
+                       INT_MAX, INT64_MAX);
 
       restore_context(x, mi_row, mi_col, a, l, sa, sl, bsize);
 
@@ -2961,6 +3109,7 @@
   *min_bs = min_size;
   *max_bs = max_size;
 }
+#endif  // !CONFIG_REALTIME_ONLY
 
 static INLINE void store_pred_mv(MACROBLOCK *x, PICK_MODE_CONTEXT *ctx) {
   memcpy(ctx->pred_mv, x->pred_mv, sizeof(x->pred_mv));
@@ -2975,15 +3124,15 @@
                                                         1, 2, 2, 2, 4, 4 };
 const int num_16x16_blocks_high_lookup[BLOCK_SIZES] = { 1, 1, 1, 1, 1, 1, 1,
                                                         2, 1, 2, 4, 2, 4 };
-const int qindex_skip_threshold_lookup[BLOCK_SIZES] = {
-  0, 10, 10, 30, 40, 40, 60, 80, 80, 90, 100, 100, 120
-};
-const int qindex_split_threshold_lookup[BLOCK_SIZES] = {
-  0, 3, 3, 7, 15, 15, 30, 40, 40, 60, 80, 80, 120
-};
-const int complexity_16x16_blocks_threshold[BLOCK_SIZES] = {
-  1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 4, 4, 6
-};
+const int qindex_skip_threshold_lookup[BLOCK_SIZES] = { 0,   10,  10, 30, 40,
+                                                        40,  60,  80, 80, 90,
+                                                        100, 100, 120 };
+const int qindex_split_threshold_lookup[BLOCK_SIZES] = { 0,  3,  3,  7,  15,
+                                                         15, 30, 40, 40, 60,
+                                                         80, 80, 120 };
+const int complexity_16x16_blocks_threshold[BLOCK_SIZES] = { 1, 1, 1, 1, 1,
+                                                             1, 1, 1, 1, 1,
+                                                             4, 4, 6 };
 
 typedef enum {
   MV_ZERO = 0,
@@ -3018,14 +3167,60 @@
 }
 #endif
 
-// Calculate the score used in machine-learning based partition search early
-// termination.
-static double compute_score(VP9_COMMON *const cm, MACROBLOCKD *const xd,
-                            PICK_MODE_CONTEXT *ctx, int mi_row, int mi_col,
-                            BLOCK_SIZE bsize) {
-  const double *clf;
-  const double *mean;
-  const double *sd;
+// Calculate prediction based on the given input features and neural net config.
+// Assume there are no more than NN_MAX_NODES_PER_LAYER nodes in each hidden
+// layer.
+static void nn_predict(const float *features, const NN_CONFIG *nn_config,
+                       float *output) {
+  int num_input_nodes = nn_config->num_inputs;
+  int buf_index = 0;
+  float buf[2][NN_MAX_NODES_PER_LAYER];
+  const float *input_nodes = features;
+
+  // Propagate hidden layers.
+  const int num_layers = nn_config->num_hidden_layers;
+  int layer, node, i;
+  assert(num_layers <= NN_MAX_HIDDEN_LAYERS);
+  for (layer = 0; layer < num_layers; ++layer) {
+    const float *weights = nn_config->weights[layer];
+    const float *bias = nn_config->bias[layer];
+    float *output_nodes = buf[buf_index];
+    const int num_output_nodes = nn_config->num_hidden_nodes[layer];
+    assert(num_output_nodes < NN_MAX_NODES_PER_LAYER);
+    for (node = 0; node < num_output_nodes; ++node) {
+      float val = 0.0f;
+      for (i = 0; i < num_input_nodes; ++i) val += weights[i] * input_nodes[i];
+      val += bias[node];
+      // ReLU as activation function.
+      val = VPXMAX(val, 0.0f);
+      output_nodes[node] = val;
+      weights += num_input_nodes;
+    }
+    num_input_nodes = num_output_nodes;
+    input_nodes = output_nodes;
+    buf_index = 1 - buf_index;
+  }
+
+  // Final output layer.
+  {
+    const float *weights = nn_config->weights[num_layers];
+    for (node = 0; node < nn_config->num_outputs; ++node) {
+      const float *bias = nn_config->bias[num_layers];
+      float val = 0.0f;
+      for (i = 0; i < num_input_nodes; ++i) val += weights[i] * input_nodes[i];
+      output[node] = val + bias[node];
+      weights += num_input_nodes;
+    }
+  }
+}
+
+#if !CONFIG_REALTIME_ONLY
+#define FEATURES 7
+// Machine-learning based partition search early termination.
+// Return 1 to skip split and rect partitions.
+static int ml_pruning_partition(VP9_COMMON *const cm, MACROBLOCKD *const xd,
+                                PICK_MODE_CONTEXT *ctx, int mi_row, int mi_col,
+                                BLOCK_SIZE bsize) {
   const int mag_mv =
       abs(ctx->mic.mv[0].as_mv.col) + abs(ctx->mic.mv[0].as_mv.row);
   const int left_in_image = !!xd->left_mi;
@@ -3035,11 +3230,32 @@
   int above_par = 0;  // above_partitioning
   int left_par = 0;   // left_partitioning
   int last_par = 0;   // last_partitioning
-  BLOCK_SIZE context_size;
-  double score;
   int offset = 0;
+  int i;
+  BLOCK_SIZE context_size;
+  const NN_CONFIG *nn_config = NULL;
+  const float *mean, *sd, *linear_weights;
+  float nn_score, linear_score;
+  float features[FEATURES];
 
   assert(b_width_log2_lookup[bsize] == b_height_log2_lookup[bsize]);
+  vpx_clear_system_state();
+
+  switch (bsize) {
+    case BLOCK_64X64:
+      offset = 0;
+      nn_config = &vp9_partition_nnconfig_64x64;
+      break;
+    case BLOCK_32X32:
+      offset = 8;
+      nn_config = &vp9_partition_nnconfig_32x32;
+      break;
+    case BLOCK_16X16:
+      offset = 16;
+      nn_config = &vp9_partition_nnconfig_16x16;
+      break;
+    default: assert(0 && "Unexpected block size."); return 0;
+  }
 
   if (above_in_image) {
     context_size = xd->above_mi->sb_type;
@@ -3065,36 +3281,760 @@
       last_par = 1;
   }
 
-  if (bsize == BLOCK_64X64)
-    offset = 0;
-  else if (bsize == BLOCK_32X32)
-    offset = 8;
-  else if (bsize == BLOCK_16X16)
-    offset = 16;
+  mean = &vp9_partition_feature_mean[offset];
+  sd = &vp9_partition_feature_std[offset];
+  features[0] = ((float)ctx->rate - mean[0]) / sd[0];
+  features[1] = ((float)ctx->dist - mean[1]) / sd[1];
+  features[2] = ((float)mag_mv / 2 - mean[2]) * sd[2];
+  features[3] = ((float)(left_par + above_par) / 2 - mean[3]) * sd[3];
+  features[4] = ((float)ctx->sum_y_eobs - mean[4]) / sd[4];
+  features[5] = ((float)cm->base_qindex - mean[5]) * sd[5];
+  features[6] = ((float)last_par - mean[6]) * sd[6];
 
-  // early termination score calculation
-  clf = &classifiers[offset];
-  mean = &train_mean[offset];
-  sd = &train_stdm[offset];
-  score = clf[0] * (((double)ctx->rate - mean[0]) / sd[0]) +
-          clf[1] * (((double)ctx->dist - mean[1]) / sd[1]) +
-          clf[2] * (((double)mag_mv / 2 - mean[2]) * sd[2]) +
-          clf[3] * (((double)(left_par + above_par) / 2 - mean[3]) * sd[3]) +
-          clf[4] * (((double)ctx->sum_y_eobs - mean[4]) / sd[4]) +
-          clf[5] * (((double)cm->base_qindex - mean[5]) * sd[5]) +
-          clf[6] * (((double)last_par - mean[6]) * sd[6]) + clf[7];
-  return score;
+  // Predict using linear model.
+  linear_weights = &vp9_partition_linear_weights[offset];
+  linear_score = linear_weights[FEATURES];
+  for (i = 0; i < FEATURES; ++i)
+    linear_score += linear_weights[i] * features[i];
+  if (linear_score > 0.1f) return 0;
+
+  // Predict using neural net model.
+  nn_predict(features, nn_config, &nn_score);
+
+  if (linear_score < -0.0f && nn_score < 0.1f) return 1;
+  if (nn_score < -0.0f && linear_score < 0.1f) return 1;
+  return 0;
+}
+#undef FEATURES
+
+#define FEATURES 4
+// ML-based partition search breakout.
+static int ml_predict_breakout(VP9_COMP *const cpi, BLOCK_SIZE bsize,
+                               const MACROBLOCK *const x,
+                               const RD_COST *const rd_cost) {
+  DECLARE_ALIGNED(16, static const uint8_t, vp9_64_zeros[64]) = { 0 };
+  const VP9_COMMON *const cm = &cpi->common;
+  float features[FEATURES];
+  const float *linear_weights = NULL;  // Linear model weights.
+  float linear_score = 0.0f;
+  const int qindex = cm->base_qindex;
+  const int q_ctx = qindex >= 200 ? 0 : (qindex >= 150 ? 1 : 2);
+  const int is_720p_or_larger = VPXMIN(cm->width, cm->height) >= 720;
+  const int resolution_ctx = is_720p_or_larger ? 1 : 0;
+
+  switch (bsize) {
+    case BLOCK_64X64:
+      linear_weights = vp9_partition_breakout_weights_64[resolution_ctx][q_ctx];
+      break;
+    case BLOCK_32X32:
+      linear_weights = vp9_partition_breakout_weights_32[resolution_ctx][q_ctx];
+      break;
+    case BLOCK_16X16:
+      linear_weights = vp9_partition_breakout_weights_16[resolution_ctx][q_ctx];
+      break;
+    case BLOCK_8X8:
+      linear_weights = vp9_partition_breakout_weights_8[resolution_ctx][q_ctx];
+      break;
+    default: assert(0 && "Unexpected block size."); return 0;
+  }
+  if (!linear_weights) return 0;
+
+  {  // Generate feature values.
+#if CONFIG_VP9_HIGHBITDEPTH
+    const int ac_q =
+        vp9_ac_quant(cm->base_qindex, 0, cm->bit_depth) >> (x->e_mbd.bd - 8);
+#else
+    const int ac_q = vp9_ac_quant(qindex, 0, cm->bit_depth);
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+    const int num_pels_log2 = num_pels_log2_lookup[bsize];
+    int feature_index = 0;
+    unsigned int var, sse;
+    float rate_f, dist_f;
+
+#if CONFIG_VP9_HIGHBITDEPTH
+    if (x->e_mbd.cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
+      var =
+          vp9_high_get_sby_variance(cpi, &x->plane[0].src, bsize, x->e_mbd.bd);
+    } else {
+      var = cpi->fn_ptr[bsize].vf(x->plane[0].src.buf, x->plane[0].src.stride,
+                                  vp9_64_zeros, 0, &sse);
+    }
+#else
+    var = cpi->fn_ptr[bsize].vf(x->plane[0].src.buf, x->plane[0].src.stride,
+                                vp9_64_zeros, 0, &sse);
+#endif
+    var = var >> num_pels_log2;
+
+    vpx_clear_system_state();
+
+    rate_f = (float)VPXMIN(rd_cost->rate, INT_MAX);
+    dist_f = (float)(VPXMIN(rd_cost->dist, INT_MAX) >> num_pels_log2);
+    rate_f =
+        ((float)x->rdmult / 128.0f / 512.0f / (float)(1 << num_pels_log2)) *
+        rate_f;
+
+    features[feature_index++] = rate_f;
+    features[feature_index++] = dist_f;
+    features[feature_index++] = (float)var;
+    features[feature_index++] = (float)ac_q;
+    assert(feature_index == FEATURES);
+  }
+
+  {  // Calculate the output score.
+    int i;
+    linear_score = linear_weights[FEATURES];
+    for (i = 0; i < FEATURES; ++i)
+      linear_score += linear_weights[i] * features[i];
+  }
+
+  return linear_score >= cpi->sf.rd_ml_partition.search_breakout_thresh[q_ctx];
+}
+#undef FEATURES
+
+#define FEATURES 8
+#define LABELS 4
+static void ml_prune_rect_partition(VP9_COMP *const cpi, MACROBLOCK *const x,
+                                    BLOCK_SIZE bsize,
+                                    const PC_TREE *const pc_tree,
+                                    int *allow_horz, int *allow_vert,
+                                    int64_t ref_rd) {
+  const NN_CONFIG *nn_config = NULL;
+  float score[LABELS] = {
+    0.0f,
+  };
+  int thresh = -1;
+  int i;
+  (void)x;
+
+  if (ref_rd <= 0 || ref_rd > 1000000000) return;
+
+  switch (bsize) {
+    case BLOCK_8X8: break;
+    case BLOCK_16X16:
+      nn_config = &vp9_rect_part_nnconfig_16;
+      thresh = cpi->sf.rd_ml_partition.prune_rect_thresh[1];
+      break;
+    case BLOCK_32X32:
+      nn_config = &vp9_rect_part_nnconfig_32;
+      thresh = cpi->sf.rd_ml_partition.prune_rect_thresh[2];
+      break;
+    case BLOCK_64X64:
+      nn_config = &vp9_rect_part_nnconfig_64;
+      thresh = cpi->sf.rd_ml_partition.prune_rect_thresh[3];
+      break;
+    default: assert(0 && "Unexpected block size."); return;
+  }
+  if (!nn_config || thresh < 0) return;
+
+  // Feature extraction and model score calculation.
+  {
+    const VP9_COMMON *const cm = &cpi->common;
+#if CONFIG_VP9_HIGHBITDEPTH
+    const int dc_q =
+        vp9_dc_quant(cm->base_qindex, 0, cm->bit_depth) >> (x->e_mbd.bd - 8);
+#else
+    const int dc_q = vp9_dc_quant(cm->base_qindex, 0, cm->bit_depth);
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+    const int bs = 4 * num_4x4_blocks_wide_lookup[bsize];
+    int feature_index = 0;
+    float features[FEATURES];
+
+    features[feature_index++] = logf((float)dc_q + 1.0f);
+    features[feature_index++] =
+        (float)(pc_tree->partitioning == PARTITION_NONE);
+    features[feature_index++] = logf((float)ref_rd / bs / bs + 1.0f);
+
+    {
+      const float norm_factor = 1.0f / ((float)ref_rd + 1.0f);
+      const int64_t none_rdcost = pc_tree->none.rdcost;
+      float rd_ratio = 2.0f;
+      if (none_rdcost > 0 && none_rdcost < 1000000000)
+        rd_ratio = (float)none_rdcost * norm_factor;
+      features[feature_index++] = VPXMIN(rd_ratio, 2.0f);
+
+      for (i = 0; i < 4; ++i) {
+        const int64_t this_rd = pc_tree->split[i]->none.rdcost;
+        const int rd_valid = this_rd > 0 && this_rd < 1000000000;
+        // Ratio between sub-block RD and whole block RD.
+        features[feature_index++] =
+            rd_valid ? (float)this_rd * norm_factor : 1.0f;
+      }
+    }
+
+    assert(feature_index == FEATURES);
+    nn_predict(features, nn_config, score);
+  }
+
+  // Make decisions based on the model score.
+  {
+    int max_score = -1000;
+    int horz = 0, vert = 0;
+    int int_score[LABELS];
+    for (i = 0; i < LABELS; ++i) {
+      int_score[i] = (int)(100 * score[i]);
+      max_score = VPXMAX(int_score[i], max_score);
+    }
+    thresh = max_score - thresh;
+    for (i = 0; i < LABELS; ++i) {
+      if (int_score[i] >= thresh) {
+        if ((i >> 0) & 1) horz = 1;
+        if ((i >> 1) & 1) vert = 1;
+      }
+    }
+    *allow_horz = *allow_horz && horz;
+    *allow_vert = *allow_vert && vert;
+  }
+}
+#undef FEATURES
+#undef LABELS
+
+// Perform a fast, coarse motion search for the given block. This is a
+// pre-processing step for the ML-based partition search speedup.
+static void simple_motion_search(const VP9_COMP *const cpi, MACROBLOCK *const x,
+                                 BLOCK_SIZE bsize, int mi_row, int mi_col,
+                                 MV ref_mv, MV_REFERENCE_FRAME ref,
+                                 uint8_t *const pred_buf) {
+  const VP9_COMMON *const cm = &cpi->common;
+  MACROBLOCKD *const xd = &x->e_mbd;
+  MODE_INFO *const mi = xd->mi[0];
+  const YV12_BUFFER_CONFIG *const yv12 = get_ref_frame_buffer(cpi, ref);
+  const int step_param = 1;
+  const MvLimits tmp_mv_limits = x->mv_limits;
+  const SEARCH_METHODS search_method = NSTEP;
+  const int sadpb = x->sadperbit16;
+  MV ref_mv_full = { ref_mv.row >> 3, ref_mv.col >> 3 };
+  MV best_mv = { 0, 0 };
+  int cost_list[5];
+
+  assert(yv12 != NULL);
+  if (!yv12) return;
+  vp9_setup_pre_planes(xd, 0, yv12, mi_row, mi_col,
+                       &cm->frame_refs[ref - 1].sf);
+  mi->ref_frame[0] = ref;
+  mi->ref_frame[1] = NONE;
+  mi->sb_type = bsize;
+  vp9_set_mv_search_range(&x->mv_limits, &ref_mv);
+  vp9_full_pixel_search(cpi, x, bsize, &ref_mv_full, step_param, search_method,
+                        sadpb, cond_cost_list(cpi, cost_list), &ref_mv,
+                        &best_mv, 0, 0);
+  best_mv.row *= 8;
+  best_mv.col *= 8;
+  x->mv_limits = tmp_mv_limits;
+  mi->mv[0].as_mv = best_mv;
+
+  set_ref_ptrs(cm, xd, mi->ref_frame[0], mi->ref_frame[1]);
+  xd->plane[0].dst.buf = pred_buf;
+  xd->plane[0].dst.stride = 64;
+  vp9_build_inter_predictors_sby(xd, mi_row, mi_col, bsize);
 }
 
+// Use a neural net model to prune partition-none and partition-split search.
+// Features used: QP; spatial block size contexts; variance of prediction
+// residue after simple_motion_search.
+#define FEATURES 12
+static void ml_predict_var_rd_paritioning(const VP9_COMP *const cpi,
+                                          MACROBLOCK *const x,
+                                          PC_TREE *const pc_tree,
+                                          BLOCK_SIZE bsize, int mi_row,
+                                          int mi_col, int *none, int *split) {
+  const VP9_COMMON *const cm = &cpi->common;
+  const NN_CONFIG *nn_config = NULL;
+#if CONFIG_VP9_HIGHBITDEPTH
+  MACROBLOCKD *xd = &x->e_mbd;
+  DECLARE_ALIGNED(16, uint8_t, pred_buffer[64 * 64 * 2]);
+  uint8_t *const pred_buf = (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH)
+                                ? (CONVERT_TO_BYTEPTR(pred_buffer))
+                                : pred_buffer;
+#else
+  DECLARE_ALIGNED(16, uint8_t, pred_buffer[64 * 64]);
+  uint8_t *const pred_buf = pred_buffer;
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+  const int speed = cpi->oxcf.speed;
+  float thresh = 0.0f;
+
+  switch (bsize) {
+    case BLOCK_64X64:
+      nn_config = &vp9_part_split_nnconfig_64;
+      thresh = speed > 0 ? 2.8f : 3.0f;
+      break;
+    case BLOCK_32X32:
+      nn_config = &vp9_part_split_nnconfig_32;
+      thresh = speed > 0 ? 3.5f : 3.0f;
+      break;
+    case BLOCK_16X16:
+      nn_config = &vp9_part_split_nnconfig_16;
+      thresh = speed > 0 ? 3.8f : 4.0f;
+      break;
+    case BLOCK_8X8:
+      nn_config = &vp9_part_split_nnconfig_8;
+      if (cm->width >= 720 && cm->height >= 720)
+        thresh = speed > 0 ? 2.5f : 2.0f;
+      else
+        thresh = speed > 0 ? 3.8f : 2.0f;
+      break;
+    default: assert(0 && "Unexpected block size."); return;
+  }
+
+  if (!nn_config) return;
+
+  // Do a simple single motion search to find a prediction for the current
+  // block. The variance of the residue will be used as an input feature.
+  {
+    MV ref_mv;
+    const MV_REFERENCE_FRAME ref =
+        cpi->rc.is_src_frame_alt_ref ? ALTREF_FRAME : LAST_FRAME;
+    // If bsize is 64x64, use the zero MV as reference; otherwise, use the MV
+    // result of the previous (larger) block as reference.
+    if (bsize == BLOCK_64X64)
+      ref_mv.row = ref_mv.col = 0;
+    else
+      ref_mv = pc_tree->mv;
+    vp9_setup_src_planes(x, cpi->Source, mi_row, mi_col);
+    simple_motion_search(cpi, x, bsize, mi_row, mi_col, ref_mv, ref, pred_buf);
+    pc_tree->mv = x->e_mbd.mi[0]->mv[0].as_mv;
+  }
+
+  vpx_clear_system_state();
+
+  {
+    float features[FEATURES] = { 0.0f };
+#if CONFIG_VP9_HIGHBITDEPTH
+    const int dc_q =
+        vp9_dc_quant(cm->base_qindex, 0, cm->bit_depth) >> (xd->bd - 8);
+#else
+    const int dc_q = vp9_dc_quant(cm->base_qindex, 0, cm->bit_depth);
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+    int feature_idx = 0;
+    float score;
+
+    // Generate model input features.
+    features[feature_idx++] = logf((float)dc_q + 1.0f);
+
+    // Get the variance of the residue as input features.
+    {
+      const int bs = 4 * num_4x4_blocks_wide_lookup[bsize];
+      const BLOCK_SIZE subsize = get_subsize(bsize, PARTITION_SPLIT);
+      const uint8_t *pred = pred_buf;
+      const uint8_t *src = x->plane[0].src.buf;
+      const int src_stride = x->plane[0].src.stride;
+      const int pred_stride = 64;
+      unsigned int sse;
+      // Variance of whole block.
+      const unsigned int var =
+          cpi->fn_ptr[bsize].vf(src, src_stride, pred, pred_stride, &sse);
+      const float factor = (var == 0) ? 1.0f : (1.0f / (float)var);
+      const MACROBLOCKD *const xd = &x->e_mbd;
+      const int has_above = !!xd->above_mi;
+      const int has_left = !!xd->left_mi;
+      const BLOCK_SIZE above_bsize = has_above ? xd->above_mi->sb_type : bsize;
+      const BLOCK_SIZE left_bsize = has_left ? xd->left_mi->sb_type : bsize;
+      int i;
+
+      features[feature_idx++] = (float)has_above;
+      features[feature_idx++] = (float)b_width_log2_lookup[above_bsize];
+      features[feature_idx++] = (float)b_height_log2_lookup[above_bsize];
+      features[feature_idx++] = (float)has_left;
+      features[feature_idx++] = (float)b_width_log2_lookup[left_bsize];
+      features[feature_idx++] = (float)b_height_log2_lookup[left_bsize];
+      features[feature_idx++] = logf((float)var + 1.0f);
+      for (i = 0; i < 4; ++i) {
+        const int x_idx = (i & 1) * bs / 2;
+        const int y_idx = (i >> 1) * bs / 2;
+        const int src_offset = y_idx * src_stride + x_idx;
+        const int pred_offset = y_idx * pred_stride + x_idx;
+        // Variance of quarter block.
+        const unsigned int sub_var =
+            cpi->fn_ptr[subsize].vf(src + src_offset, src_stride,
+                                    pred + pred_offset, pred_stride, &sse);
+        const float var_ratio = (var == 0) ? 1.0f : factor * (float)sub_var;
+        features[feature_idx++] = var_ratio;
+      }
+    }
+    assert(feature_idx == FEATURES);
+
+    // Feed the features into the model to get the confidence score.
+    nn_predict(features, nn_config, &score);
+
+    // A higher score means the model is more confident that the split
+    // partition is better than the non-split partition. So if the score is
+    // high enough, we skip the non-split partition search; if the score is
+    // low enough, we skip the split partition search.
+    if (score > thresh) *none = 0;
+    if (score < -thresh) *split = 0;
+  }
+}
+#undef FEATURES
+#endif  // !CONFIG_REALTIME_ONLY
+
+static double log_wiener_var(int64_t wiener_variance) {
+  return log(1.0 + wiener_variance) / log(2.0);
+}
+
+static void build_kmeans_segmentation(VP9_COMP *cpi) {
+  VP9_COMMON *cm = &cpi->common;
+  BLOCK_SIZE bsize = BLOCK_64X64;
+  KMEANS_DATA *kmeans_data;
+
+  vp9_disable_segmentation(&cm->seg);
+  if (cm->show_frame) {
+    int mi_row, mi_col;
+    cpi->kmeans_data_size = 0;
+    cpi->kmeans_ctr_num = 8;
+
+    for (mi_row = 0; mi_row < cm->mi_rows; mi_row += MI_BLOCK_SIZE) {
+      for (mi_col = 0; mi_col < cm->mi_cols; mi_col += MI_BLOCK_SIZE) {
+        int mb_row_start = mi_row >> 1;
+        int mb_col_start = mi_col >> 1;
+        int mb_row_end = VPXMIN(
+            (mi_row + num_8x8_blocks_high_lookup[bsize]) >> 1, cm->mb_rows);
+        int mb_col_end = VPXMIN(
+            (mi_col + num_8x8_blocks_wide_lookup[bsize]) >> 1, cm->mb_cols);
+        int row, col;
+        int64_t wiener_variance = 0;
+
+        for (row = mb_row_start; row < mb_row_end; ++row)
+          for (col = mb_col_start; col < mb_col_end; ++col)
+            wiener_variance += cpi->mb_wiener_variance[row * cm->mb_cols + col];
+
+        wiener_variance /=
+            (mb_row_end - mb_row_start) * (mb_col_end - mb_col_start);
+
+#if CONFIG_MULTITHREAD
+        pthread_mutex_lock(&cpi->kmeans_mutex);
+#endif  // CONFIG_MULTITHREAD
+
+        kmeans_data = &cpi->kmeans_data_arr[cpi->kmeans_data_size++];
+        kmeans_data->value = log_wiener_var(wiener_variance);
+        kmeans_data->pos = mi_row * cpi->kmeans_data_stride + mi_col;
+#if CONFIG_MULTITHREAD
+        pthread_mutex_unlock(&cpi->kmeans_mutex);
+#endif  // CONFIG_MULTITHREAD
+      }
+    }
+
+    vp9_kmeans(cpi->kmeans_ctr_ls, cpi->kmeans_boundary_ls,
+               cpi->kmeans_count_ls, cpi->kmeans_ctr_num, cpi->kmeans_data_arr,
+               cpi->kmeans_data_size);
+
+    vp9_perceptual_aq_mode_setup(cpi, &cm->seg);
+  }
+}
+
+#if !CONFIG_REALTIME_ONLY
+static int wiener_var_segment(VP9_COMP *cpi, BLOCK_SIZE bsize, int mi_row,
+                              int mi_col) {
+  VP9_COMMON *cm = &cpi->common;
+  int mb_row_start = mi_row >> 1;
+  int mb_col_start = mi_col >> 1;
+  int mb_row_end =
+      VPXMIN((mi_row + num_8x8_blocks_high_lookup[bsize]) >> 1, cm->mb_rows);
+  int mb_col_end =
+      VPXMIN((mi_col + num_8x8_blocks_wide_lookup[bsize]) >> 1, cm->mb_cols);
+  int row, col, idx;
+  int64_t wiener_variance = 0;
+  int segment_id;
+  int8_t seg_hist[MAX_SEGMENTS] = { 0 };
+  int8_t max_count = 0, max_index = -1;
+
+  vpx_clear_system_state();
+
+  assert(cpi->norm_wiener_variance > 0);
+
+  for (row = mb_row_start; row < mb_row_end; ++row) {
+    for (col = mb_col_start; col < mb_col_end; ++col) {
+      wiener_variance = cpi->mb_wiener_variance[row * cm->mb_cols + col];
+      segment_id =
+          vp9_get_group_idx(log_wiener_var(wiener_variance),
+                            cpi->kmeans_boundary_ls, cpi->kmeans_ctr_num);
+      ++seg_hist[segment_id];
+    }
+  }
+
+  for (idx = 0; idx < cpi->kmeans_ctr_num; ++idx) {
+    if (seg_hist[idx] > max_count) {
+      max_count = seg_hist[idx];
+      max_index = idx;
+    }
+  }
+
+  assert(max_index >= 0);
+  segment_id = max_index;
+
+  return segment_id;
+}
+
+static int get_rdmult_delta(VP9_COMP *cpi, BLOCK_SIZE bsize, int mi_row,
+                            int mi_col, int orig_rdmult) {
+  const int gf_group_index = cpi->twopass.gf_group.index;
+  TplDepFrame *tpl_frame = &cpi->tpl_stats[gf_group_index];
+  TplDepStats *tpl_stats = tpl_frame->tpl_stats_ptr;
+  int tpl_stride = tpl_frame->stride;
+  int64_t intra_cost = 0;
+  int64_t mc_dep_cost = 0;
+  int mi_wide = num_8x8_blocks_wide_lookup[bsize];
+  int mi_high = num_8x8_blocks_high_lookup[bsize];
+  int row, col;
+
+  int dr = 0;
+  int count = 0;
+  double r0, rk, beta;
+
+  if (tpl_frame->is_valid == 0) return orig_rdmult;
+
+  if (cpi->twopass.gf_group.layer_depth[gf_group_index] > 1) return orig_rdmult;
+
+  if (gf_group_index >= MAX_ARF_GOP_SIZE) return orig_rdmult;
+
+  for (row = mi_row; row < mi_row + mi_high; ++row) {
+    for (col = mi_col; col < mi_col + mi_wide; ++col) {
+      TplDepStats *this_stats = &tpl_stats[row * tpl_stride + col];
+
+      if (row >= cpi->common.mi_rows || col >= cpi->common.mi_cols) continue;
+
+      intra_cost += this_stats->intra_cost;
+      mc_dep_cost += this_stats->mc_dep_cost;
+
+      ++count;
+    }
+  }
+
+  vpx_clear_system_state();
+
+  r0 = cpi->rd.r0;
+  rk = (double)intra_cost / mc_dep_cost;
+  beta = r0 / rk;
+  dr = vp9_get_adaptive_rdmult(cpi, beta);
+
+  dr = VPXMIN(dr, orig_rdmult * 3 / 2);
+  dr = VPXMAX(dr, orig_rdmult * 1 / 2);
+
+  dr = VPXMAX(1, dr);
+
+  return dr;
+}
+#endif  // !CONFIG_REALTIME_ONLY
+
+#if CONFIG_RATE_CTRL
+static void assign_partition_info(
+    const int row_start_4x4, const int col_start_4x4, const int block_width_4x4,
+    const int block_height_4x4, const int num_unit_rows,
+    const int num_unit_cols, PARTITION_INFO *partition_info) {
+  int i, j;
+  for (i = 0; i < block_height_4x4; ++i) {
+    for (j = 0; j < block_width_4x4; ++j) {
+      const int row_4x4 = row_start_4x4 + i;
+      const int col_4x4 = col_start_4x4 + j;
+      const int unit_index = row_4x4 * num_unit_cols + col_4x4;
+      if (row_4x4 >= num_unit_rows || col_4x4 >= num_unit_cols) continue;
+      partition_info[unit_index].row = row_4x4 << 2;
+      partition_info[unit_index].column = col_4x4 << 2;
+      partition_info[unit_index].row_start = row_start_4x4 << 2;
+      partition_info[unit_index].column_start = col_start_4x4 << 2;
+      partition_info[unit_index].width = block_width_4x4 << 2;
+      partition_info[unit_index].height = block_height_4x4 << 2;
+    }
+  }
+}
+
+static void assign_motion_vector_info(const int block_width_4x4,
+                                      const int block_height_4x4,
+                                      const int row_start_4x4,
+                                      const int col_start_4x4,
+                                      const int num_unit_rows,
+                                      const int num_unit_cols, MV *source_mv[2],
+                                      MV_REFERENCE_FRAME source_ref_frame[2],
+                                      MOTION_VECTOR_INFO *motion_vector_info) {
+  int i, j;
+  for (i = 0; i < block_height_4x4; ++i) {
+    for (j = 0; j < block_width_4x4; ++j) {
+      const int row_4x4 = row_start_4x4 + i;
+      const int col_4x4 = col_start_4x4 + j;
+      const int unit_index = row_4x4 * num_unit_cols + col_4x4;
+      if (row_4x4 >= num_unit_rows || col_4x4 >= num_unit_cols) continue;
+      if (source_ref_frame[1] == NONE) {
+        assert(source_mv[1]->row == 0 && source_mv[1]->col == 0);
+      }
+      motion_vector_info[unit_index].ref_frame[0] = source_ref_frame[0];
+      motion_vector_info[unit_index].ref_frame[1] = source_ref_frame[1];
+      motion_vector_info[unit_index].mv[0].as_mv.row = source_mv[0]->row;
+      motion_vector_info[unit_index].mv[0].as_mv.col = source_mv[0]->col;
+      motion_vector_info[unit_index].mv[1].as_mv.row = source_mv[1]->row;
+      motion_vector_info[unit_index].mv[1].as_mv.col = source_mv[1]->col;
+    }
+  }
+}
+
+static void store_superblock_info(
+    const PC_TREE *const pc_tree, MODE_INFO **mi_grid_visible,
+    const int mi_stride, const int square_size_4x4, const int num_unit_rows,
+    const int num_unit_cols, const int row_start_4x4, const int col_start_4x4,
+    PARTITION_INFO *partition_info, MOTION_VECTOR_INFO *motion_vector_info) {
+  const int subblock_square_size_4x4 = square_size_4x4 >> 1;
+  if (row_start_4x4 >= num_unit_rows || col_start_4x4 >= num_unit_cols) return;
+  assert(pc_tree->partitioning != PARTITION_INVALID);
+  // End node, no split.
+  if (pc_tree->partitioning == PARTITION_NONE ||
+      pc_tree->partitioning == PARTITION_HORZ ||
+      pc_tree->partitioning == PARTITION_VERT || square_size_4x4 == 1) {
+    const int mi_row = row_start_4x4 >> 1;
+    const int mi_col = col_start_4x4 >> 1;
+    const int mi_idx = mi_stride * mi_row + mi_col;
+    MODE_INFO **mi = mi_grid_visible + mi_idx;
+    MV *source_mv[2];
+    MV_REFERENCE_FRAME source_ref_frame[2];
+
+    // partition info
+    const int block_width_4x4 = (pc_tree->partitioning == PARTITION_VERT)
+                                    ? square_size_4x4 >> 1
+                                    : square_size_4x4;
+    const int block_height_4x4 = (pc_tree->partitioning == PARTITION_HORZ)
+                                     ? square_size_4x4 >> 1
+                                     : square_size_4x4;
+    assign_partition_info(row_start_4x4, col_start_4x4, block_width_4x4,
+                          block_height_4x4, num_unit_rows, num_unit_cols,
+                          partition_info);
+    if (pc_tree->partitioning == PARTITION_VERT) {
+      assign_partition_info(row_start_4x4, col_start_4x4 + block_width_4x4,
+                            block_width_4x4, block_height_4x4, num_unit_rows,
+                            num_unit_cols, partition_info);
+    } else if (pc_tree->partitioning == PARTITION_HORZ) {
+      assign_partition_info(row_start_4x4 + block_height_4x4, col_start_4x4,
+                            block_width_4x4, block_height_4x4, num_unit_rows,
+                            num_unit_cols, partition_info);
+    }
+
+    // motion vector info
+    if (pc_tree->partitioning == PARTITION_HORZ) {
+      int is_valid_second_rectangle = 0;
+      assert(square_size_4x4 > 1);
+      // First rectangle.
+      source_ref_frame[0] = mi[0]->ref_frame[0];
+      source_ref_frame[1] = mi[0]->ref_frame[1];
+      source_mv[0] = &mi[0]->mv[0].as_mv;
+      source_mv[1] = &mi[0]->mv[1].as_mv;
+      assign_motion_vector_info(block_width_4x4, block_height_4x4,
+                                row_start_4x4, col_start_4x4, num_unit_rows,
+                                num_unit_cols, source_mv, source_ref_frame,
+                                motion_vector_info);
+      // Second rectangle.
+      if (square_size_4x4 == 2) {
+        is_valid_second_rectangle = 1;
+        source_ref_frame[0] = mi[0]->ref_frame[0];
+        source_ref_frame[1] = mi[0]->ref_frame[1];
+        source_mv[0] = &mi[0]->bmi[2].as_mv[0].as_mv;
+        source_mv[1] = &mi[0]->bmi[2].as_mv[1].as_mv;
+      } else {
+        const int mi_row_2 = mi_row + (block_height_4x4 >> 1);
+        const int mi_col_2 = mi_col;
+        if (mi_row_2 * 2 < num_unit_rows && mi_col_2 * 2 < num_unit_cols) {
+          const int mi_idx_2 = mi_stride * mi_row_2 + mi_col_2;
+          is_valid_second_rectangle = 1;
+          mi = mi_grid_visible + mi_idx_2;
+          source_ref_frame[0] = mi[0]->ref_frame[0];
+          source_ref_frame[1] = mi[0]->ref_frame[1];
+          source_mv[0] = &mi[0]->mv[0].as_mv;
+          source_mv[1] = &mi[0]->mv[1].as_mv;
+        }
+      }
+      if (is_valid_second_rectangle) {
+        assign_motion_vector_info(
+            block_width_4x4, block_height_4x4, row_start_4x4 + block_height_4x4,
+            col_start_4x4, num_unit_rows, num_unit_cols, source_mv,
+            source_ref_frame, motion_vector_info);
+      }
+    } else if (pc_tree->partitioning == PARTITION_VERT) {
+      int is_valid_second_rectangle = 0;
+      assert(square_size_4x4 > 1);
+      // First rectangle.
+      source_ref_frame[0] = mi[0]->ref_frame[0];
+      source_ref_frame[1] = mi[0]->ref_frame[1];
+      source_mv[0] = &mi[0]->mv[0].as_mv;
+      source_mv[1] = &mi[0]->mv[1].as_mv;
+      assign_motion_vector_info(block_width_4x4, block_height_4x4,
+                                row_start_4x4, col_start_4x4, num_unit_rows,
+                                num_unit_cols, source_mv, source_ref_frame,
+                                motion_vector_info);
+      // Second rectangle.
+      if (square_size_4x4 == 2) {
+        is_valid_second_rectangle = 1;
+        source_ref_frame[0] = mi[0]->ref_frame[0];
+        source_ref_frame[1] = mi[0]->ref_frame[1];
+        source_mv[0] = &mi[0]->bmi[1].as_mv[0].as_mv;
+        source_mv[1] = &mi[0]->bmi[1].as_mv[1].as_mv;
+      } else {
+        const int mi_row_2 = mi_row;
+        const int mi_col_2 = mi_col + (block_width_4x4 >> 1);
+        if (mi_row_2 * 2 < num_unit_rows && mi_col_2 * 2 < num_unit_cols) {
+          const int mi_idx_2 = mi_stride * mi_row_2 + mi_col_2;
+          is_valid_second_rectangle = 1;
+          mi = mi_grid_visible + mi_idx_2;
+          source_ref_frame[0] = mi[0]->ref_frame[0];
+          source_ref_frame[1] = mi[0]->ref_frame[1];
+          source_mv[0] = &mi[0]->mv[0].as_mv;
+          source_mv[1] = &mi[0]->mv[1].as_mv;
+        }
+      }
+      if (is_valid_second_rectangle) {
+        assign_motion_vector_info(
+            block_width_4x4, block_height_4x4, row_start_4x4,
+            col_start_4x4 + block_width_4x4, num_unit_rows, num_unit_cols,
+            source_mv, source_ref_frame, motion_vector_info);
+      }
+    } else {
+      assert(pc_tree->partitioning == PARTITION_NONE || square_size_4x4 == 1);
+      source_ref_frame[0] = mi[0]->ref_frame[0];
+      source_ref_frame[1] = mi[0]->ref_frame[1];
+      if (square_size_4x4 == 1) {
+        const int sub8x8_row = row_start_4x4 % 2;
+        const int sub8x8_col = col_start_4x4 % 2;
+        const int sub8x8_idx = sub8x8_row * 2 + sub8x8_col;
+        source_mv[0] = &mi[0]->bmi[sub8x8_idx].as_mv[0].as_mv;
+        source_mv[1] = &mi[0]->bmi[sub8x8_idx].as_mv[1].as_mv;
+      } else {
+        source_mv[0] = &mi[0]->mv[0].as_mv;
+        source_mv[1] = &mi[0]->mv[1].as_mv;
+      }
+      assign_motion_vector_info(block_width_4x4, block_height_4x4,
+                                row_start_4x4, col_start_4x4, num_unit_rows,
+                                num_unit_cols, source_mv, source_ref_frame,
+                                motion_vector_info);
+    }
+
+    return;
+  }
+  // Recursively traverse the partition tree when the partition is split.
+  assert(pc_tree->partitioning == PARTITION_SPLIT);
+  store_superblock_info(pc_tree->split[0], mi_grid_visible, mi_stride,
+                        subblock_square_size_4x4, num_unit_rows, num_unit_cols,
+                        row_start_4x4, col_start_4x4, partition_info,
+                        motion_vector_info);
+  store_superblock_info(pc_tree->split[1], mi_grid_visible, mi_stride,
+                        subblock_square_size_4x4, num_unit_rows, num_unit_cols,
+                        row_start_4x4, col_start_4x4 + subblock_square_size_4x4,
+                        partition_info, motion_vector_info);
+  store_superblock_info(pc_tree->split[2], mi_grid_visible, mi_stride,
+                        subblock_square_size_4x4, num_unit_rows, num_unit_cols,
+                        row_start_4x4 + subblock_square_size_4x4, col_start_4x4,
+                        partition_info, motion_vector_info);
+  store_superblock_info(pc_tree->split[3], mi_grid_visible, mi_stride,
+                        subblock_square_size_4x4, num_unit_rows, num_unit_cols,
+                        row_start_4x4 + subblock_square_size_4x4,
+                        col_start_4x4 + subblock_square_size_4x4,
+                        partition_info, motion_vector_info);
+}
+#endif  // CONFIG_RATE_CTRL
+
+#if !CONFIG_REALTIME_ONLY
 // TODO(jingning,jimbankoski,rbultje): properly skip partition types that are
 // unlikely to be selected depending on previous rate-distortion optimization
 // results, for encoding speed-up.
-static void rd_pick_partition(VP9_COMP *cpi, ThreadData *td,
-                              TileDataEnc *tile_data, TOKENEXTRA **tp,
-                              int mi_row, int mi_col, BLOCK_SIZE bsize,
-                              RD_COST *rd_cost, int64_t best_rd,
-                              PC_TREE *pc_tree) {
+static int rd_pick_partition(VP9_COMP *cpi, ThreadData *td,
+                             TileDataEnc *tile_data, TOKENEXTRA **tp,
+                             int mi_row, int mi_col, BLOCK_SIZE bsize,
+                             RD_COST *rd_cost, RD_COST best_rdc,
+                             PC_TREE *pc_tree) {
   VP9_COMMON *const cm = &cpi->common;
+  const VP9EncoderConfig *const oxcf = &cpi->oxcf;
   TileInfo *const tile_info = &tile_data->tile_info;
   MACROBLOCK *const x = &td->mb;
   MACROBLOCKD *const xd = &x->e_mbd;
@@ -3102,11 +4042,11 @@
   ENTROPY_CONTEXT l[16 * MAX_MB_PLANE], a[16 * MAX_MB_PLANE];
   PARTITION_CONTEXT sl[8], sa[8];
   TOKENEXTRA *tp_orig = *tp;
-  PICK_MODE_CONTEXT *ctx = &pc_tree->none;
+  PICK_MODE_CONTEXT *const ctx = &pc_tree->none;
   int i;
   const int pl = partition_plane_context(xd, mi_row, mi_col, bsize);
   BLOCK_SIZE subsize;
-  RD_COST this_rdc, sum_rdc, best_rdc;
+  RD_COST this_rdc, sum_rdc;
   int do_split = bsize >= BLOCK_8X8;
   int do_rect = 1;
   INTERP_FILTER pred_interp_filter;
@@ -3133,24 +4073,35 @@
 
   int64_t dist_breakout_thr = cpi->sf.partition_search_breakout_thr.dist;
   int rate_breakout_thr = cpi->sf.partition_search_breakout_thr.rate;
+  int must_split = 0;
+  int should_encode_sb = 0;
+
+  // Ref frames picked in the i-th quarter subblock during the square
+  // partition RD search. They may be used to prune ref frame selection for
+  // rect partitions.
+  uint8_t ref_frames_used[4] = { 0, 0, 0, 0 };
+
+  int partition_mul = x->cb_rdmult;
 
   (void)*tp_orig;
 
   assert(num_8x8_blocks_wide_lookup[bsize] ==
          num_8x8_blocks_high_lookup[bsize]);
 
-  // Adjust dist breakout threshold according to the partition size.
   dist_breakout_thr >>=
       8 - (b_width_log2_lookup[bsize] + b_height_log2_lookup[bsize]);
+
   rate_breakout_thr *= num_pels_log2_lookup[bsize];
 
   vp9_rd_cost_init(&this_rdc);
   vp9_rd_cost_init(&sum_rdc);
-  vp9_rd_cost_reset(&best_rdc);
-  best_rdc.rdcost = best_rd;
 
   set_offsets(cpi, tile_info, x, mi_row, mi_col, bsize);
 
+  if (oxcf->tuning == VP8_TUNE_SSIM) {
+    set_ssim_rdmult(cpi, x, bsize, mi_row, mi_col, &partition_mul);
+  }
+  vp9_rd_cost_update(partition_mul, x->rddiv, &best_rdc);
+
   if (bsize == BLOCK_16X16 && cpi->oxcf.aq_mode != NO_AQ &&
       cpi->oxcf.aq_mode != LOOKAHEAD_AQ)
     x->mb_energy = vp9_block_energy(cpi, x, bsize);
@@ -3165,10 +4116,18 @@
       set_partition_range(cm, xd, mi_row, mi_col, bsize, &min_size, &max_size);
   }
 
+  // Get the sub-block energy range.
+  if (bsize >= BLOCK_16X16) {
+    int min_energy, max_energy;
+    vp9_get_sub_block_energy(cpi, x, mi_row, mi_col, bsize, &min_energy,
+                             &max_energy);
+    must_split = (min_energy < -3) && (max_energy - min_energy > 2);
+  }
+
   // Determine partition types in search according to the speed features.
   // The threshold set here has to be of square block size.
   if (cpi->sf.auto_min_max_partition_size) {
-    partition_none_allowed &= (bsize <= max_size && bsize >= min_size);
+    partition_none_allowed &= (bsize <= max_size);
     partition_horz_allowed &=
         ((bsize <= max_size && bsize > min_size) || force_horz_split);
     partition_vert_allowed &=
@@ -3177,7 +4136,8 @@
   }
 
   if (cpi->sf.use_square_partition_only &&
-      bsize > cpi->sf.use_square_only_threshold) {
+      (bsize > cpi->sf.use_square_only_thresh_high ||
+       bsize < cpi->sf.use_square_only_thresh_low)) {
     if (cpi->use_svc) {
       if (!vp9_active_h_edge(cpi, mi_row, mi_step) || x->e_mbd.lossless)
         partition_horz_allowed &= force_horz_split;
@@ -3250,48 +4210,84 @@
   }
 #endif
 
+  pc_tree->partitioning = PARTITION_NONE;
+
+  if (cpi->sf.rd_ml_partition.var_pruning && !frame_is_intra_only(cm)) {
+    const int do_rd_ml_partition_var_pruning =
+        partition_none_allowed && do_split &&
+        mi_row + num_8x8_blocks_high_lookup[bsize] <= cm->mi_rows &&
+        mi_col + num_8x8_blocks_wide_lookup[bsize] <= cm->mi_cols;
+    if (do_rd_ml_partition_var_pruning) {
+      ml_predict_var_rd_paritioning(cpi, x, pc_tree, bsize, mi_row, mi_col,
+                                    &partition_none_allowed, &do_split);
+    } else {
+      vp9_zero(pc_tree->mv);
+    }
+    if (bsize > BLOCK_8X8) {  // Store MV result as reference for subblocks.
+      for (i = 0; i < 4; ++i) pc_tree->split[i]->mv = pc_tree->mv;
+    }
+  }
+
   // PARTITION_NONE
   if (partition_none_allowed) {
     rd_pick_sb_modes(cpi, tile_data, x, mi_row, mi_col, &this_rdc, bsize, ctx,
-                     best_rdc.rdcost);
+                     best_rdc.rate, best_rdc.dist);
+    ctx->rdcost = this_rdc.rdcost;
     if (this_rdc.rate != INT_MAX) {
+      if (cpi->sf.prune_ref_frame_for_rect_partitions) {
+        const int ref1 = ctx->mic.ref_frame[0];
+        const int ref2 = ctx->mic.ref_frame[1];
+        for (i = 0; i < 4; ++i) {
+          ref_frames_used[i] |= (1 << ref1);
+          if (ref2 > 0) ref_frames_used[i] |= (1 << ref2);
+        }
+      }
       if (bsize >= BLOCK_8X8) {
         this_rdc.rate += cpi->partition_cost[pl][PARTITION_NONE];
-        this_rdc.rdcost =
-            RDCOST(x->rdmult, x->rddiv, this_rdc.rate, this_rdc.dist);
+        vp9_rd_cost_update(partition_mul, x->rddiv, &this_rdc);
       }
 
       if (this_rdc.rdcost < best_rdc.rdcost) {
         MODE_INFO *mi = xd->mi[0];
 
         best_rdc = this_rdc;
+        should_encode_sb = 1;
         if (bsize >= BLOCK_8X8) pc_tree->partitioning = PARTITION_NONE;
 
-        if (!cpi->sf.ml_partition_search_early_termination) {
-          // If all y, u, v transform blocks in this partition are skippable,
-          // and the dist & rate are within the thresholds, the partition search
-          // is terminated for current branch of the partition search tree.
-          if (!x->e_mbd.lossless && ctx->skippable &&
-              ((best_rdc.dist < (dist_breakout_thr >> 2)) ||
-               (best_rdc.dist < dist_breakout_thr &&
-                best_rdc.rate < rate_breakout_thr))) {
-            do_split = 0;
-            do_rect = 0;
-          }
-        } else {
+        if (cpi->sf.rd_ml_partition.search_early_termination) {
           // Currently, the machine-learning based partition search early
           // termination is only used while bsize is 16x16, 32x32 or 64x64,
           // VPXMIN(cm->width, cm->height) >= 480, and speed = 0.
           if (!x->e_mbd.lossless &&
               !segfeature_active(&cm->seg, mi->segment_id, SEG_LVL_SKIP) &&
               ctx->mic.mode >= INTRA_MODES && bsize >= BLOCK_16X16) {
-            if (compute_score(cm, xd, ctx, mi_row, mi_col, bsize) < 0.0) {
+            if (ml_pruning_partition(cm, xd, ctx, mi_row, mi_col, bsize)) {
               do_split = 0;
               do_rect = 0;
             }
           }
         }
 
+        if ((do_split || do_rect) && !x->e_mbd.lossless && ctx->skippable) {
+          const int use_ml_based_breakout =
+              cpi->sf.rd_ml_partition.search_breakout && cm->base_qindex >= 100;
+          if (use_ml_based_breakout) {
+            if (ml_predict_breakout(cpi, bsize, x, &this_rdc)) {
+              do_split = 0;
+              do_rect = 0;
+            }
+          } else {
+            if (!cpi->sf.rd_ml_partition.search_early_termination) {
+              if ((best_rdc.dist < (dist_breakout_thr >> 2)) ||
+                  (best_rdc.dist < dist_breakout_thr &&
+                   best_rdc.rate < rate_breakout_thr)) {
+                do_split = 0;
+                do_rect = 0;
+              }
+            }
+          }
+        }
+
 #if CONFIG_FP_MB_STATS
         // Check if every 16x16 first pass block statistics has zero
         // motion and the corresponding first pass residue is small enough.
@@ -3341,10 +4337,13 @@
       }
     }
     restore_context(x, mi_row, mi_col, a, l, sa, sl, bsize);
+  } else {
+    vp9_zero(ctx->pred_mv);
+    ctx->mic.interp_filter = EIGHTTAP;
   }
 
   // store estimated motion vector
-  if (cpi->sf.adaptive_motion_search) store_pred_mv(x, ctx);
+  store_pred_mv(x, ctx);
 
   // If the interp_filter is marked as SWITCHABLE_FILTERS, it was for an
   // intra block and used for context purposes.
@@ -3357,52 +4356,94 @@
   // PARTITION_SPLIT
   // TODO(jingning): use the motion vectors given by the above search as
   // the starting point of motion search in the following partition type check.
-  if (do_split) {
+  pc_tree->split[0]->none.rdcost = 0;
+  pc_tree->split[1]->none.rdcost = 0;
+  pc_tree->split[2]->none.rdcost = 0;
+  pc_tree->split[3]->none.rdcost = 0;
+  if (do_split || must_split) {
     subsize = get_subsize(bsize, PARTITION_SPLIT);
+    load_pred_mv(x, ctx);
     if (bsize == BLOCK_8X8) {
       i = 4;
       if (cpi->sf.adaptive_pred_interp_filter && partition_none_allowed)
         pc_tree->leaf_split[0]->pred_interp_filter = pred_interp_filter;
       rd_pick_sb_modes(cpi, tile_data, x, mi_row, mi_col, &sum_rdc, subsize,
-                       pc_tree->leaf_split[0], best_rdc.rdcost);
-
-      if (sum_rdc.rate == INT_MAX) sum_rdc.rdcost = INT64_MAX;
+                       pc_tree->leaf_split[0], best_rdc.rate, best_rdc.dist);
+      if (sum_rdc.rate == INT_MAX) {
+        sum_rdc.rdcost = INT64_MAX;
+      } else {
+        if (cpi->sf.prune_ref_frame_for_rect_partitions) {
+          const int ref1 = pc_tree->leaf_split[0]->mic.ref_frame[0];
+          const int ref2 = pc_tree->leaf_split[0]->mic.ref_frame[1];
+          for (i = 0; i < 4; ++i) {
+            ref_frames_used[i] |= (1 << ref1);
+            if (ref2 > 0) ref_frames_used[i] |= (1 << ref2);
+          }
+        }
+      }
     } else {
-      for (i = 0; i < 4 && sum_rdc.rdcost < best_rdc.rdcost; ++i) {
+      for (i = 0; (i < 4) && ((sum_rdc.rdcost < best_rdc.rdcost) || must_split);
+           ++i) {
         const int x_idx = (i & 1) * mi_step;
         const int y_idx = (i >> 1) * mi_step;
+        int found_best_rd = 0;
+        RD_COST best_rdc_split;
+        vp9_rd_cost_reset(&best_rdc_split);
+
+        if (best_rdc.rate < INT_MAX && best_rdc.dist < INT64_MAX) {
+          // A must split test here increases the number of sub
+          // partitions but hurts metrics results quite a bit,
+          // so this extra test is commented out pending
+          // further tests on whether it adds much in terms of
+          // visual quality.
+          // (must_split) ? best_rdc.rate
+          //              : best_rdc.rate - sum_rdc.rate,
+          // (must_split) ? best_rdc.dist
+          //              : best_rdc.dist - sum_rdc.dist,
+          best_rdc_split.rate = best_rdc.rate - sum_rdc.rate;
+          best_rdc_split.dist = best_rdc.dist - sum_rdc.dist;
+        }
 
         if (mi_row + y_idx >= cm->mi_rows || mi_col + x_idx >= cm->mi_cols)
           continue;
 
-        if (cpi->sf.adaptive_motion_search) load_pred_mv(x, ctx);
-
         pc_tree->split[i]->index = i;
-        rd_pick_partition(cpi, td, tile_data, tp, mi_row + y_idx,
-                          mi_col + x_idx, subsize, &this_rdc,
-                          best_rdc.rdcost - sum_rdc.rdcost, pc_tree->split[i]);
+        if (cpi->sf.prune_ref_frame_for_rect_partitions)
+          pc_tree->split[i]->none.rate = INT_MAX;
+        found_best_rd = rd_pick_partition(
+            cpi, td, tile_data, tp, mi_row + y_idx, mi_col + x_idx, subsize,
+            &this_rdc, best_rdc_split, pc_tree->split[i]);
 
-        if (this_rdc.rate == INT_MAX) {
+        if (found_best_rd == 0) {
           sum_rdc.rdcost = INT64_MAX;
           break;
         } else {
+          if (cpi->sf.prune_ref_frame_for_rect_partitions &&
+              pc_tree->split[i]->none.rate != INT_MAX) {
+            const int ref1 = pc_tree->split[i]->none.mic.ref_frame[0];
+            const int ref2 = pc_tree->split[i]->none.mic.ref_frame[1];
+            ref_frames_used[i] |= (1 << ref1);
+            if (ref2 > 0) ref_frames_used[i] |= (1 << ref2);
+          }
           sum_rdc.rate += this_rdc.rate;
           sum_rdc.dist += this_rdc.dist;
-          sum_rdc.rdcost += this_rdc.rdcost;
+          vp9_rd_cost_update(partition_mul, x->rddiv, &sum_rdc);
         }
       }
     }
 
-    if (sum_rdc.rdcost < best_rdc.rdcost && i == 4) {
+    if (((sum_rdc.rdcost < best_rdc.rdcost) || must_split) && i == 4) {
       sum_rdc.rate += cpi->partition_cost[pl][PARTITION_SPLIT];
-      sum_rdc.rdcost = RDCOST(x->rdmult, x->rddiv, sum_rdc.rate, sum_rdc.dist);
+      vp9_rd_cost_update(partition_mul, x->rddiv, &sum_rdc);
 
-      if (sum_rdc.rdcost < best_rdc.rdcost) {
+      if ((sum_rdc.rdcost < best_rdc.rdcost) ||
+          (must_split && (sum_rdc.dist < best_rdc.dist))) {
         best_rdc = sum_rdc;
+        should_encode_sb = 1;
         pc_tree->partitioning = PARTITION_SPLIT;
 
         // Rate and distortion based partition search termination clause.
-        if (!cpi->sf.ml_partition_search_early_termination &&
+        if (!cpi->sf.rd_ml_partition.search_early_termination &&
             !x->e_mbd.lossless &&
             ((best_rdc.dist < (dist_breakout_thr >> 2)) ||
              (best_rdc.dist < dist_breakout_thr &&
@@ -3413,58 +4454,94 @@
     } else {
       // skip rectangular partition test when larger block size
       // gives better rd cost
-      if ((cpi->sf.less_rectangular_check) &&
-          ((bsize > cpi->sf.use_square_only_threshold) ||
-           (best_rdc.dist < dist_breakout_thr)))
+      if (cpi->sf.less_rectangular_check &&
+          (bsize > cpi->sf.use_square_only_thresh_high ||
+           best_rdc.dist < dist_breakout_thr))
         do_rect &= !partition_none_allowed;
     }
     restore_context(x, mi_row, mi_col, a, l, sa, sl, bsize);
   }
 
+  pc_tree->horizontal[0].skip_ref_frame_mask = 0;
+  pc_tree->horizontal[1].skip_ref_frame_mask = 0;
+  pc_tree->vertical[0].skip_ref_frame_mask = 0;
+  pc_tree->vertical[1].skip_ref_frame_mask = 0;
+  if (cpi->sf.prune_ref_frame_for_rect_partitions) {
+    uint8_t used_frames;
+    used_frames = ref_frames_used[0] | ref_frames_used[1];
+    if (used_frames) {
+      pc_tree->horizontal[0].skip_ref_frame_mask = ~used_frames & 0xff;
+    }
+    used_frames = ref_frames_used[2] | ref_frames_used[3];
+    if (used_frames) {
+      pc_tree->horizontal[1].skip_ref_frame_mask = ~used_frames & 0xff;
+    }
+    used_frames = ref_frames_used[0] | ref_frames_used[2];
+    if (used_frames) {
+      pc_tree->vertical[0].skip_ref_frame_mask = ~used_frames & 0xff;
+    }
+    used_frames = ref_frames_used[1] | ref_frames_used[3];
+    if (used_frames) {
+      pc_tree->vertical[1].skip_ref_frame_mask = ~used_frames & 0xff;
+    }
+  }
+
+  {
+    const int do_ml_rect_partition_pruning =
+        !frame_is_intra_only(cm) && !force_horz_split && !force_vert_split &&
+        (partition_horz_allowed || partition_vert_allowed) && bsize > BLOCK_8X8;
+    if (do_ml_rect_partition_pruning) {
+      ml_prune_rect_partition(cpi, x, bsize, pc_tree, &partition_horz_allowed,
+                              &partition_vert_allowed, best_rdc.rdcost);
+    }
+  }
+
   // PARTITION_HORZ
   if (partition_horz_allowed &&
       (do_rect || vp9_active_h_edge(cpi, mi_row, mi_step))) {
+    const int part_mode_rate = cpi->partition_cost[pl][PARTITION_HORZ];
     subsize = get_subsize(bsize, PARTITION_HORZ);
-    if (cpi->sf.adaptive_motion_search) load_pred_mv(x, ctx);
+    load_pred_mv(x, ctx);
     if (cpi->sf.adaptive_pred_interp_filter && bsize == BLOCK_8X8 &&
         partition_none_allowed)
       pc_tree->horizontal[0].pred_interp_filter = pred_interp_filter;
     rd_pick_sb_modes(cpi, tile_data, x, mi_row, mi_col, &sum_rdc, subsize,
-                     &pc_tree->horizontal[0], best_rdc.rdcost);
+                     &pc_tree->horizontal[0], best_rdc.rate - part_mode_rate,
+                     best_rdc.dist);
+    if (sum_rdc.rdcost < INT64_MAX) {
+      sum_rdc.rate += part_mode_rate;
+      vp9_rd_cost_update(partition_mul, x->rddiv, &sum_rdc);
+    }
 
     if (sum_rdc.rdcost < best_rdc.rdcost && mi_row + mi_step < cm->mi_rows &&
         bsize > BLOCK_8X8) {
       PICK_MODE_CONTEXT *ctx = &pc_tree->horizontal[0];
       update_state(cpi, td, ctx, mi_row, mi_col, subsize, 0);
       encode_superblock(cpi, td, tp, 0, mi_row, mi_col, subsize, ctx);
-
-      if (cpi->sf.adaptive_motion_search) load_pred_mv(x, ctx);
       if (cpi->sf.adaptive_pred_interp_filter && bsize == BLOCK_8X8 &&
           partition_none_allowed)
         pc_tree->horizontal[1].pred_interp_filter = pred_interp_filter;
       rd_pick_sb_modes(cpi, tile_data, x, mi_row + mi_step, mi_col, &this_rdc,
                        subsize, &pc_tree->horizontal[1],
-                       best_rdc.rdcost - sum_rdc.rdcost);
+                       best_rdc.rate - sum_rdc.rate,
+                       best_rdc.dist - sum_rdc.dist);
       if (this_rdc.rate == INT_MAX) {
         sum_rdc.rdcost = INT64_MAX;
       } else {
         sum_rdc.rate += this_rdc.rate;
         sum_rdc.dist += this_rdc.dist;
-        sum_rdc.rdcost += this_rdc.rdcost;
+        vp9_rd_cost_update(partition_mul, x->rddiv, &sum_rdc);
       }
     }
 
     if (sum_rdc.rdcost < best_rdc.rdcost) {
-      sum_rdc.rate += cpi->partition_cost[pl][PARTITION_HORZ];
-      sum_rdc.rdcost = RDCOST(x->rdmult, x->rddiv, sum_rdc.rate, sum_rdc.dist);
-      if (sum_rdc.rdcost < best_rdc.rdcost) {
-        best_rdc = sum_rdc;
-        pc_tree->partitioning = PARTITION_HORZ;
+      best_rdc = sum_rdc;
+      should_encode_sb = 1;
+      pc_tree->partitioning = PARTITION_HORZ;
 
-        if ((cpi->sf.less_rectangular_check) &&
-            (bsize > cpi->sf.use_square_only_threshold))
-          do_rect = 0;
-      }
+      if (cpi->sf.less_rectangular_check &&
+          bsize > cpi->sf.use_square_only_thresh_high)
+        do_rect = 0;
     }
     restore_context(x, mi_row, mi_col, a, l, sa, sl, bsize);
   }
@@ -3472,59 +4549,67 @@
   // PARTITION_VERT
   if (partition_vert_allowed &&
       (do_rect || vp9_active_v_edge(cpi, mi_col, mi_step))) {
+    const int part_mode_rate = cpi->partition_cost[pl][PARTITION_VERT];
     subsize = get_subsize(bsize, PARTITION_VERT);
-
-    if (cpi->sf.adaptive_motion_search) load_pred_mv(x, ctx);
+    load_pred_mv(x, ctx);
     if (cpi->sf.adaptive_pred_interp_filter && bsize == BLOCK_8X8 &&
         partition_none_allowed)
       pc_tree->vertical[0].pred_interp_filter = pred_interp_filter;
     rd_pick_sb_modes(cpi, tile_data, x, mi_row, mi_col, &sum_rdc, subsize,
-                     &pc_tree->vertical[0], best_rdc.rdcost);
+                     &pc_tree->vertical[0], best_rdc.rate - part_mode_rate,
+                     best_rdc.dist);
+    if (sum_rdc.rdcost < INT64_MAX) {
+      sum_rdc.rate += part_mode_rate;
+      vp9_rd_cost_update(partition_mul, x->rddiv, &sum_rdc);
+    }
+
     if (sum_rdc.rdcost < best_rdc.rdcost && mi_col + mi_step < cm->mi_cols &&
         bsize > BLOCK_8X8) {
       update_state(cpi, td, &pc_tree->vertical[0], mi_row, mi_col, subsize, 0);
       encode_superblock(cpi, td, tp, 0, mi_row, mi_col, subsize,
                         &pc_tree->vertical[0]);
-
-      if (cpi->sf.adaptive_motion_search) load_pred_mv(x, ctx);
       if (cpi->sf.adaptive_pred_interp_filter && bsize == BLOCK_8X8 &&
           partition_none_allowed)
         pc_tree->vertical[1].pred_interp_filter = pred_interp_filter;
       rd_pick_sb_modes(cpi, tile_data, x, mi_row, mi_col + mi_step, &this_rdc,
                        subsize, &pc_tree->vertical[1],
-                       best_rdc.rdcost - sum_rdc.rdcost);
+                       best_rdc.rate - sum_rdc.rate,
+                       best_rdc.dist - sum_rdc.dist);
       if (this_rdc.rate == INT_MAX) {
         sum_rdc.rdcost = INT64_MAX;
       } else {
         sum_rdc.rate += this_rdc.rate;
         sum_rdc.dist += this_rdc.dist;
-        sum_rdc.rdcost += this_rdc.rdcost;
+        vp9_rd_cost_update(partition_mul, x->rddiv, &sum_rdc);
       }
     }
 
     if (sum_rdc.rdcost < best_rdc.rdcost) {
-      sum_rdc.rate += cpi->partition_cost[pl][PARTITION_VERT];
-      sum_rdc.rdcost = RDCOST(x->rdmult, x->rddiv, sum_rdc.rate, sum_rdc.dist);
-      if (sum_rdc.rdcost < best_rdc.rdcost) {
-        best_rdc = sum_rdc;
-        pc_tree->partitioning = PARTITION_VERT;
-      }
+      best_rdc = sum_rdc;
+      should_encode_sb = 1;
+      pc_tree->partitioning = PARTITION_VERT;
     }
     restore_context(x, mi_row, mi_col, a, l, sa, sl, bsize);
   }
 
-  // TODO(jbb): This code added so that we avoid static analysis
-  // warning related to the fact that best_rd isn't used after this
-  // point.  This code should be refactored so that the duplicate
-  // checks occur in some sub function and thus are used...
-  (void)best_rd;
   *rd_cost = best_rdc;
 
-  if (best_rdc.rate < INT_MAX && best_rdc.dist < INT64_MAX &&
-      pc_tree->index != 3) {
+  if (should_encode_sb && pc_tree->index != 3) {
     int output_enabled = (bsize == BLOCK_64X64);
     encode_sb(cpi, td, tile_info, tp, mi_row, mi_col, output_enabled, bsize,
               pc_tree);
+#if CONFIG_RATE_CTRL
+    // Store partition, motion vector of the superblock.
+    if (output_enabled) {
+      const int num_unit_rows = get_num_unit_4x4(cpi->frame_info.frame_height);
+      const int num_unit_cols = get_num_unit_4x4(cpi->frame_info.frame_width);
+      store_superblock_info(pc_tree, cm->mi_grid_visible, cm->mi_stride,
+                            num_4x4_blocks_wide_lookup[BLOCK_64X64],
+                            num_unit_rows, num_unit_cols, mi_row << 1,
+                            mi_col << 1, cpi->partition_info,
+                            cpi->motion_vector_info);
+    }
+#endif  // CONFIG_RATE_CTRL
   }
 
   if (bsize == BLOCK_64X64) {
@@ -3534,6 +4619,8 @@
   } else {
     assert(tp_orig == *tp);
   }
+
+  return should_encode_sb;
 }
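The hunks above replace manual `RDCOST(...)` recomputation with `vp9_rd_cost_update()` calls, and compare partition candidates (`PARTITION_NONE`, `PARTITION_SPLIT`, `PARTITION_HORZ`, `PARTITION_VERT`) on a single combined cost. The core idea is that a bit rate and a distortion are folded into one comparable number. Below is a minimal, hypothetical sketch of that combination; it is not libvpx's exact fixed-point `RDCOST` macro, and the scaling constants are illustrative only:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical rate-distortion cost: scale the rate by a Lagrange-style
 * multiplier and shift the distortion onto the same scale, so two
 * candidate partitionings can be compared with a single `<`.
 * (The real encoder uses the RDCOST macro with fixed-point rounding.) */
static int64_t rd_cost_sketch(int rdmult, int rddiv, int rate, int64_t dist) {
  return (int64_t)rate * rdmult + (dist << rddiv);
}

/* Pick the cheaper of two (rate, dist) candidates, mirroring how
 * best_rdc is updated when sum_rdc.rdcost < best_rdc.rdcost. */
static int cheaper_candidate(int rdmult, int rddiv, int rate_a, int64_t dist_a,
                             int rate_b, int64_t dist_b) {
  const int64_t cost_a = rd_cost_sketch(rdmult, rddiv, rate_a, dist_a);
  const int64_t cost_b = rd_cost_sketch(rdmult, rddiv, rate_b, dist_b);
  return cost_a <= cost_b ? 0 : 1;
}
```

A higher `rdmult` penalizes rate more, which is why the patch threads a `partition_mul` through every `vp9_rd_cost_update()` call instead of recomputing the cost ad hoc at each site.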
 
 static void encode_rd_sb_row(VP9_COMP *cpi, ThreadData *td,
@@ -3565,10 +4652,12 @@
     RD_COST dummy_rdc;
     int i;
     int seg_skip = 0;
+    int orig_rdmult = cpi->rd.RDMULT;
 
     const int idx_str = cm->mi_stride * mi_row + mi_col;
     MODE_INFO **mi = cm->mi_grid_visible + idx_str;
 
+    vp9_rd_cost_reset(&dummy_rdc);
     (*(cpi->row_mt_sync_read_ptr))(&tile_data->row_mt_sync, sb_row,
                                    sb_col_in_tile);
 
@@ -3583,7 +4672,10 @@
       }
     }
 
-    vp9_zero(x->pred_mv);
+    for (i = 0; i < MAX_REF_FRAMES; ++i) {
+      x->pred_mv[i].row = INT16_MAX;
+      x->pred_mv[i].col = INT16_MAX;
+    }
     td->pc_root->index = 0;
 
     if (seg->enabled) {
@@ -3594,6 +4686,9 @@
     }
 
     x->source_variance = UINT_MAX;
+
+    x->cb_rdmult = orig_rdmult;
+
     if (sf->partition_search_type == FIXED_PARTITION || seg_skip) {
       const BLOCK_SIZE bsize =
           seg_skip ? BLOCK_64X64 : sf->always_this_block_size;
@@ -3614,19 +4709,33 @@
       rd_use_partition(cpi, td, tile_data, mi, tp, mi_row, mi_col, BLOCK_64X64,
                        &dummy_rate, &dummy_dist, 1, td->pc_root);
     } else {
+      if (cpi->twopass.gf_group.index > 0 && cpi->sf.enable_tpl_model) {
+        int dr =
+            get_rdmult_delta(cpi, BLOCK_64X64, mi_row, mi_col, orig_rdmult);
+        x->cb_rdmult = dr;
+      }
+
+      if (cpi->oxcf.aq_mode == PERCEPTUAL_AQ && cm->show_frame) {
+        x->segment_id = wiener_var_segment(cpi, BLOCK_64X64, mi_row, mi_col);
+        x->cb_rdmult = vp9_compute_rd_mult(
+            cpi, vp9_get_qindex(&cm->seg, x->segment_id, cm->base_qindex));
+      }
+
       // If required set upper and lower partition size limits
       if (sf->auto_min_max_partition_size) {
         set_offsets(cpi, tile_info, x, mi_row, mi_col, BLOCK_64X64);
         rd_auto_partition_range(cpi, tile_info, xd, mi_row, mi_col,
                                 &x->min_partition_size, &x->max_partition_size);
       }
+      td->pc_root->none.rdcost = 0;
       rd_pick_partition(cpi, td, tile_data, tp, mi_row, mi_col, BLOCK_64X64,
-                        &dummy_rdc, INT64_MAX, td->pc_root);
+                        &dummy_rdc, dummy_rdc, td->pc_root);
     }
     (*(cpi->row_mt_sync_write_ptr))(&tile_data->row_mt_sync, sb_row,
                                     sb_col_in_tile, num_sb_cols);
   }
 }
+#endif  // !CONFIG_REALTIME_ONLY
 
 static void init_encode_frame_mb_context(VP9_COMP *cpi) {
   MACROBLOCK *const x = &cpi->td.mb;
@@ -3704,6 +4813,36 @@
     vp9_pick_intra_mode(cpi, x, rd_cost, bsize, ctx);
 }
 
+static void hybrid_search_svc_baseiskey(VP9_COMP *cpi, MACROBLOCK *const x,
+                                        RD_COST *rd_cost, BLOCK_SIZE bsize,
+                                        PICK_MODE_CONTEXT *ctx,
+                                        TileDataEnc *tile_data, int mi_row,
+                                        int mi_col) {
+  if (!cpi->sf.nonrd_keyframe && bsize <= BLOCK_8X8) {
+    vp9_rd_pick_intra_mode_sb(cpi, x, rd_cost, bsize, ctx, INT64_MAX);
+  } else {
+    if (cpi->svc.disable_inter_layer_pred == INTER_LAYER_PRED_OFF)
+      vp9_pick_intra_mode(cpi, x, rd_cost, bsize, ctx);
+    else if (bsize >= BLOCK_8X8)
+      vp9_pick_inter_mode(cpi, x, tile_data, mi_row, mi_col, rd_cost, bsize,
+                          ctx);
+    else
+      vp9_pick_inter_mode_sub8x8(cpi, x, mi_row, mi_col, rd_cost, bsize, ctx);
+  }
+}
+
+static void hybrid_search_scene_change(VP9_COMP *cpi, MACROBLOCK *const x,
+                                       RD_COST *rd_cost, BLOCK_SIZE bsize,
+                                       PICK_MODE_CONTEXT *ctx,
+                                       TileDataEnc *tile_data, int mi_row,
+                                       int mi_col) {
+  if (!cpi->sf.nonrd_keyframe && bsize <= BLOCK_8X8) {
+    vp9_rd_pick_intra_mode_sb(cpi, x, rd_cost, bsize, ctx, INT64_MAX);
+  } else {
+    vp9_pick_inter_mode(cpi, x, tile_data, mi_row, mi_col, rd_cost, bsize, ctx);
+  }
+}
+
 static void nonrd_pick_sb_modes(VP9_COMP *cpi, TileDataEnc *tile_data,
                                 MACROBLOCK *const x, int mi_row, int mi_col,
                                 RD_COST *rd_cost, BLOCK_SIZE bsize,
@@ -3719,6 +4858,9 @@
   int plane;
 
   set_offsets(cpi, tile_info, x, mi_row, mi_col, bsize);
+
+  set_segment_index(cpi, x, mi_row, mi_col, bsize, 0);
+
   mi = xd->mi[0];
   mi->sb_type = bsize;
 
@@ -3734,14 +4876,23 @@
     if (cyclic_refresh_segment_id_boosted(mi->segment_id))
       x->rdmult = vp9_cyclic_refresh_get_rdmult(cpi->cyclic_refresh);
 
-  if (cm->frame_type == KEY_FRAME)
+  if (frame_is_intra_only(cm))
     hybrid_intra_mode_search(cpi, x, rd_cost, bsize, ctx);
+  else if (cpi->svc.layer_context[cpi->svc.temporal_layer_id].is_key_frame)
+    hybrid_search_svc_baseiskey(cpi, x, rd_cost, bsize, ctx, tile_data, mi_row,
+                                mi_col);
   else if (segfeature_active(&cm->seg, mi->segment_id, SEG_LVL_SKIP))
     set_mode_info_seg_skip(x, cm->tx_mode, rd_cost, bsize);
-  else if (bsize >= BLOCK_8X8)
-    vp9_pick_inter_mode(cpi, x, tile_data, mi_row, mi_col, rd_cost, bsize, ctx);
-  else
+  else if (bsize >= BLOCK_8X8) {
+    if (cpi->rc.hybrid_intra_scene_change)
+      hybrid_search_scene_change(cpi, x, rd_cost, bsize, ctx, tile_data, mi_row,
+                                 mi_col);
+    else
+      vp9_pick_inter_mode(cpi, x, tile_data, mi_row, mi_col, rd_cost, bsize,
+                          ctx);
+  } else {
     vp9_pick_inter_mode_sub8x8(cpi, x, mi_row, mi_col, rd_cost, bsize, ctx);
+  }
 
   duplicate_mode_info_in_sb(cm, xd, mi_row, mi_col, bsize);
 
@@ -3831,6 +4982,76 @@
   }
 }
 
+#define FEATURES 6
+#define LABELS 2
+static int ml_predict_var_paritioning(VP9_COMP *cpi, MACROBLOCK *x,
+                                      BLOCK_SIZE bsize, int mi_row,
+                                      int mi_col) {
+  VP9_COMMON *const cm = &cpi->common;
+  const NN_CONFIG *nn_config = NULL;
+
+  switch (bsize) {
+    case BLOCK_64X64: nn_config = &vp9_var_part_nnconfig_64; break;
+    case BLOCK_32X32: nn_config = &vp9_var_part_nnconfig_32; break;
+    case BLOCK_16X16: nn_config = &vp9_var_part_nnconfig_16; break;
+    case BLOCK_8X8: break;
+    default: assert(0 && "Unexpected block size."); return -1;
+  }
+
+  if (!nn_config) return -1;
+
+  vpx_clear_system_state();
+
+  {
+    const float thresh = cpi->oxcf.speed <= 5 ? 1.25f : 0.0f;
+    float features[FEATURES] = { 0.0f };
+    const int dc_q = vp9_dc_quant(cm->base_qindex, 0, cm->bit_depth);
+    int feature_idx = 0;
+    float score[LABELS];
+
+    features[feature_idx++] = logf((float)(dc_q * dc_q) / 256.0f + 1.0f);
+    vp9_setup_src_planes(x, cpi->Source, mi_row, mi_col);
+    {
+      const int bs = 4 * num_4x4_blocks_wide_lookup[bsize];
+      const BLOCK_SIZE subsize = get_subsize(bsize, PARTITION_SPLIT);
+      const int sb_offset_row = 8 * (mi_row & 7);
+      const int sb_offset_col = 8 * (mi_col & 7);
+      const uint8_t *pred = x->est_pred + sb_offset_row * 64 + sb_offset_col;
+      const uint8_t *src = x->plane[0].src.buf;
+      const int src_stride = x->plane[0].src.stride;
+      const int pred_stride = 64;
+      unsigned int sse;
+      int i;
+      // Variance of whole block.
+      const unsigned int var =
+          cpi->fn_ptr[bsize].vf(src, src_stride, pred, pred_stride, &sse);
+      const float factor = (var == 0) ? 1.0f : (1.0f / (float)var);
+
+      features[feature_idx++] = logf((float)var + 1.0f);
+      for (i = 0; i < 4; ++i) {
+        const int x_idx = (i & 1) * bs / 2;
+        const int y_idx = (i >> 1) * bs / 2;
+        const int src_offset = y_idx * src_stride + x_idx;
+        const int pred_offset = y_idx * pred_stride + x_idx;
+        // Variance of quarter block.
+        const unsigned int sub_var =
+            cpi->fn_ptr[subsize].vf(src + src_offset, src_stride,
+                                    pred + pred_offset, pred_stride, &sse);
+        const float var_ratio = (var == 0) ? 1.0f : factor * (float)sub_var;
+        features[feature_idx++] = var_ratio;
+      }
+    }
+
+    assert(feature_idx == FEATURES);
+    nn_predict(features, nn_config, score);
+    if (score[0] > thresh) return PARTITION_SPLIT;
+    if (score[0] < -thresh) return PARTITION_NONE;
+    return -1;
+  }
+}
+#undef FEATURES
+#undef LABELS
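The added `ml_predict_var_paritioning()` above feeds six variance-derived features to a small neural net (`nn_predict`) and makes a three-way decision: a score above `thresh` forces `PARTITION_SPLIT`, a score below `-thresh` forces `PARTITION_NONE`, and anything in between returns -1 so the normal search proceeds. A self-contained sketch of that thresholding, with a hypothetical single-neuron stand-in for `nn_predict`:

```c
#include <assert.h>

enum { PRED_NONE = 0, PRED_SPLIT = 3, PRED_UNDECIDED = -1 };

/* Hypothetical stand-in for nn_predict(): a dot product of the six
 * features with fixed weights. The decision rule matches the patch:
 * score > thresh -> split, score < -thresh -> none, else undecided. */
static int predict_partition(const float features[6], const float weights[6],
                             float thresh) {
  float score = 0.0f;
  int i;
  for (i = 0; i < 6; ++i) score += features[i] * weights[i];
  if (score > thresh) return PRED_SPLIT;
  if (score < -thresh) return PRED_NONE;
  return PRED_UNDECIDED;
}
```

Note the asymmetric confidence band in the patch (`thresh` is 1.25 for speed <= 5 and 0 otherwise): at higher speeds every prediction is acted on, while at lower speeds only confident scores short-circuit the search.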
+
 static void nonrd_pick_partition(VP9_COMP *cpi, ThreadData *td,
                                  TileDataEnc *tile_data, TOKENEXTRA **tp,
                                  int mi_row, int mi_col, BLOCK_SIZE bsize,
@@ -3860,6 +5081,9 @@
       !force_vert_split && yss <= xss && bsize >= BLOCK_8X8;
   int partition_vert_allowed =
       !force_horz_split && xss <= yss && bsize >= BLOCK_8X8;
+  const int use_ml_based_partitioning =
+      sf->partition_search_type == ML_BASED_PARTITION;
+
   (void)*tp_orig;
 
   // Avoid checking for rectangular partitions for speed >= 6.
@@ -3890,6 +5114,18 @@
     partition_vert_allowed &= force_vert_split;
   }
 
+  if (use_ml_based_partitioning) {
+    if (partition_none_allowed || do_split) do_rect = 0;
+    if (partition_none_allowed && do_split) {
+      const int ml_predicted_partition =
+          ml_predict_var_paritioning(cpi, x, bsize, mi_row, mi_col);
+      if (ml_predicted_partition == PARTITION_NONE) do_split = 0;
+      if (ml_predicted_partition == PARTITION_SPLIT) partition_none_allowed = 0;
+    }
+  }
+
+  if (!partition_none_allowed && !do_split) do_rect = 1;
+
   ctx->pred_pixel_ready =
       !(partition_vert_allowed || partition_horz_allowed || do_split);
 
@@ -3903,26 +5139,25 @@
     ctx->skip = x->skip;
 
     if (this_rdc.rate != INT_MAX) {
-      int pl = partition_plane_context(xd, mi_row, mi_col, bsize);
+      const int pl = partition_plane_context(xd, mi_row, mi_col, bsize);
       this_rdc.rate += cpi->partition_cost[pl][PARTITION_NONE];
       this_rdc.rdcost =
           RDCOST(x->rdmult, x->rddiv, this_rdc.rate, this_rdc.dist);
       if (this_rdc.rdcost < best_rdc.rdcost) {
-        int64_t dist_breakout_thr = sf->partition_search_breakout_thr.dist;
-        int64_t rate_breakout_thr = sf->partition_search_breakout_thr.rate;
-
-        dist_breakout_thr >>=
-            8 - (b_width_log2_lookup[bsize] + b_height_log2_lookup[bsize]);
-
-        rate_breakout_thr *= num_pels_log2_lookup[bsize];
-
         best_rdc = this_rdc;
         if (bsize >= BLOCK_8X8) pc_tree->partitioning = PARTITION_NONE;
 
-        if (!x->e_mbd.lossless && this_rdc.rate < rate_breakout_thr &&
-            this_rdc.dist < dist_breakout_thr) {
-          do_split = 0;
-          do_rect = 0;
+        if (!use_ml_based_partitioning) {
+          int64_t dist_breakout_thr = sf->partition_search_breakout_thr.dist;
+          int64_t rate_breakout_thr = sf->partition_search_breakout_thr.rate;
+          dist_breakout_thr >>=
+              8 - (b_width_log2_lookup[bsize] + b_height_log2_lookup[bsize]);
+          rate_breakout_thr *= num_pels_log2_lookup[bsize];
+          if (!x->e_mbd.lossless && this_rdc.rate < rate_breakout_thr &&
+              this_rdc.dist < dist_breakout_thr) {
+            do_split = 0;
+            do_rect = 0;
+          }
         }
       }
     }
@@ -3970,7 +5205,7 @@
   // PARTITION_HORZ
   if (partition_horz_allowed && do_rect) {
     subsize = get_subsize(bsize, PARTITION_HORZ);
-    if (sf->adaptive_motion_search) load_pred_mv(x, ctx);
+    load_pred_mv(x, ctx);
     pc_tree->horizontal[0].pred_pixel_ready = 1;
     nonrd_pick_sb_modes(cpi, tile_data, x, mi_row, mi_col, &sum_rdc, subsize,
                         &pc_tree->horizontal[0]);
@@ -4014,7 +5249,7 @@
   // PARTITION_VERT
   if (partition_vert_allowed && do_rect) {
     subsize = get_subsize(bsize, PARTITION_VERT);
-    if (sf->adaptive_motion_search) load_pred_mv(x, ctx);
+    load_pred_mv(x, ctx);
     pc_tree->vertical[0].pred_pixel_ready = 1;
     nonrd_pick_sb_modes(cpi, tile_data, x, mi_row, mi_col, &sum_rdc, subsize,
                         &pc_tree->vertical[0]);
@@ -4174,7 +5409,8 @@
           }
         }
         break;
-      case PARTITION_SPLIT:
+      default:
+        assert(partition == PARTITION_SPLIT);
         subsize = get_subsize(bsize, PARTITION_SPLIT);
         nonrd_select_partition(cpi, td, tile_data, mi, tp, mi_row, mi_col,
                                subsize, output_enabled, rd_cost,
@@ -4204,7 +5440,6 @@
           rd_cost->dist += this_rdc.dist;
         }
         break;
-      default: assert(0 && "Invalid partition type."); break;
     }
   }
 
@@ -4293,7 +5528,8 @@
                     output_enabled, subsize, &pc_tree->horizontal[1]);
       }
       break;
-    case PARTITION_SPLIT:
+    default:
+      assert(partition == PARTITION_SPLIT);
       subsize = get_subsize(bsize, PARTITION_SPLIT);
       if (bsize == BLOCK_8X8) {
         nonrd_pick_sb_modes(cpi, tile_data, x, mi_row, mi_col, dummy_cost,
@@ -4314,13 +5550,110 @@
                             dummy_cost, pc_tree->split[3]);
       }
       break;
-    default: assert(0 && "Invalid partition type."); break;
   }
 
   if (partition != PARTITION_SPLIT || bsize == BLOCK_8X8)
     update_partition_context(xd, mi_row, mi_col, subsize, bsize);
 }
 
+// Get a prediction(stored in x->est_pred) for the whole 64x64 superblock.
+static void get_estimated_pred(VP9_COMP *cpi, const TileInfo *const tile,
+                               MACROBLOCK *x, int mi_row, int mi_col) {
+  VP9_COMMON *const cm = &cpi->common;
+  const int is_key_frame = frame_is_intra_only(cm);
+  MACROBLOCKD *xd = &x->e_mbd;
+
+  set_offsets(cpi, tile, x, mi_row, mi_col, BLOCK_64X64);
+
+  if (!is_key_frame) {
+    MODE_INFO *mi = xd->mi[0];
+    YV12_BUFFER_CONFIG *yv12 = get_ref_frame_buffer(cpi, LAST_FRAME);
+    const YV12_BUFFER_CONFIG *yv12_g = NULL;
+    const BLOCK_SIZE bsize = BLOCK_32X32 + (mi_col + 4 < cm->mi_cols) * 2 +
+                             (mi_row + 4 < cm->mi_rows);
+    unsigned int y_sad_g, y_sad_thr;
+    unsigned int y_sad = UINT_MAX;
+
+    assert(yv12 != NULL);
+
+    if (!(is_one_pass_cbr_svc(cpi) && cpi->svc.spatial_layer_id) ||
+        cpi->svc.use_gf_temporal_ref_current_layer) {
+      // For now, GOLDEN will not be used for non-zero spatial layers, since
+      // it may not be a temporal reference.
+      yv12_g = get_ref_frame_buffer(cpi, GOLDEN_FRAME);
+    }
+
+    // Only compute y_sad_g (sad for golden reference) for speed < 8.
+    if (cpi->oxcf.speed < 8 && yv12_g && yv12_g != yv12 &&
+        (cpi->ref_frame_flags & VP9_GOLD_FLAG)) {
+      vp9_setup_pre_planes(xd, 0, yv12_g, mi_row, mi_col,
+                           &cm->frame_refs[GOLDEN_FRAME - 1].sf);
+      y_sad_g = cpi->fn_ptr[bsize].sdf(
+          x->plane[0].src.buf, x->plane[0].src.stride, xd->plane[0].pre[0].buf,
+          xd->plane[0].pre[0].stride);
+    } else {
+      y_sad_g = UINT_MAX;
+    }
+
+    if (cpi->oxcf.lag_in_frames > 0 && cpi->oxcf.rc_mode == VPX_VBR &&
+        cpi->rc.is_src_frame_alt_ref) {
+      yv12 = get_ref_frame_buffer(cpi, ALTREF_FRAME);
+      vp9_setup_pre_planes(xd, 0, yv12, mi_row, mi_col,
+                           &cm->frame_refs[ALTREF_FRAME - 1].sf);
+      mi->ref_frame[0] = ALTREF_FRAME;
+      y_sad_g = UINT_MAX;
+    } else {
+      vp9_setup_pre_planes(xd, 0, yv12, mi_row, mi_col,
+                           &cm->frame_refs[LAST_FRAME - 1].sf);
+      mi->ref_frame[0] = LAST_FRAME;
+    }
+    mi->ref_frame[1] = NONE;
+    mi->sb_type = BLOCK_64X64;
+    mi->mv[0].as_int = 0;
+    mi->interp_filter = BILINEAR;
+
+    {
+      const MV dummy_mv = { 0, 0 };
+      y_sad = vp9_int_pro_motion_estimation(cpi, x, bsize, mi_row, mi_col,
+                                            &dummy_mv);
+      x->sb_use_mv_part = 1;
+      x->sb_mvcol_part = mi->mv[0].as_mv.col;
+      x->sb_mvrow_part = mi->mv[0].as_mv.row;
+    }
+
+    // Pick ref frame for partitioning, bias last frame when y_sad_g and y_sad
+    // are close if short_circuit_low_temp_var is on.
+    y_sad_thr = cpi->sf.short_circuit_low_temp_var ? (y_sad * 7) >> 3 : y_sad;
+    if (y_sad_g < y_sad_thr) {
+      vp9_setup_pre_planes(xd, 0, yv12_g, mi_row, mi_col,
+                           &cm->frame_refs[GOLDEN_FRAME - 1].sf);
+      mi->ref_frame[0] = GOLDEN_FRAME;
+      mi->mv[0].as_int = 0;
+    } else {
+      x->pred_mv[LAST_FRAME] = mi->mv[0].as_mv;
+    }
+
+    set_ref_ptrs(cm, xd, mi->ref_frame[0], mi->ref_frame[1]);
+    xd->plane[0].dst.buf = x->est_pred;
+    xd->plane[0].dst.stride = 64;
+    vp9_build_inter_predictors_sb(xd, mi_row, mi_col, BLOCK_64X64);
+  } else {
+#if CONFIG_VP9_HIGHBITDEPTH
+    switch (xd->bd) {
+      case 8: memset(x->est_pred, 128, 64 * 64 * sizeof(x->est_pred[0])); break;
+      case 10:
+        memset(x->est_pred, 128 * 4, 64 * 64 * sizeof(x->est_pred[0]));
+        break;
+      case 12:
+        memset(x->est_pred, 128 * 16, 64 * 64 * sizeof(x->est_pred[0]));
+        break;
+    }
+#else
+    memset(x->est_pred, 128, 64 * 64 * sizeof(x->est_pred[0]));
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+  }
+}
+
 static void encode_nonrd_sb_row(VP9_COMP *cpi, ThreadData *td,
                                 TileDataEnc *tile_data, int mi_row,
                                 TOKENEXTRA **tp) {
@@ -4351,6 +5684,7 @@
     PARTITION_SEARCH_TYPE partition_search_type = sf->partition_search_type;
     BLOCK_SIZE bsize = BLOCK_64X64;
     int seg_skip = 0;
+    int i;
 
     (*(cpi->row_mt_sync_read_ptr))(&tile_data->row_mt_sync, sb_row,
                                    sb_col_in_tile);
@@ -4360,7 +5694,10 @@
     }
 
     x->source_variance = UINT_MAX;
-    vp9_zero(x->pred_mv);
+    for (i = 0; i < MAX_REF_FRAMES; ++i) {
+      x->pred_mv[i].row = INT16_MAX;
+      x->pred_mv[i].col = INT16_MAX;
+    }
     vp9_rd_cost_init(&dummy_rdc);
     x->color_sensitivity[0] = 0;
     x->color_sensitivity[1] = 0;
@@ -4368,6 +5705,7 @@
     x->skip_low_source_sad = 0;
     x->lowvar_highsumdiff = 0;
     x->content_state_sb = 0;
+    x->zero_temp_sad_source = 0;
     x->sb_use_mv_part = 0;
     x->sb_mvcol_part = 0;
     x->sb_mvrow_part = 0;
@@ -4407,6 +5745,15 @@
         nonrd_use_partition(cpi, td, tile_data, mi, tp, mi_row, mi_col,
                             BLOCK_64X64, 1, &dummy_rdc, td->pc_root);
         break;
+      case ML_BASED_PARTITION:
+        get_estimated_pred(cpi, tile_info, x, mi_row, mi_col);
+        x->max_partition_size = BLOCK_64X64;
+        x->min_partition_size = BLOCK_8X8;
+        x->sb_pickmode_part = 1;
+        nonrd_pick_partition(cpi, td, tile_data, tp, mi_row, mi_col,
+                             BLOCK_64X64, &dummy_rdc, 1, INT64_MAX,
+                             td->pc_root);
+        break;
       case SOURCE_VAR_BASED_PARTITION:
         set_source_var_based_partition(cpi, tile_info, x, mi, mi_row, mi_col);
         nonrd_use_partition(cpi, td, tile_data, mi, tp, mi_row, mi_col,
@@ -4418,14 +5765,15 @@
         nonrd_use_partition(cpi, td, tile_data, mi, tp, mi_row, mi_col,
                             BLOCK_64X64, 1, &dummy_rdc, td->pc_root);
         break;
-      case REFERENCE_PARTITION:
+      default:
+        assert(partition_search_type == REFERENCE_PARTITION);
         x->sb_pickmode_part = 1;
         set_offsets(cpi, tile_info, x, mi_row, mi_col, BLOCK_64X64);
         // Use nonrd_pick_partition on scene-cut for VBR mode.
         // nonrd_pick_partition does not support 4x4 partition, so avoid it
         // on key frame for now.
         if ((cpi->oxcf.rc_mode == VPX_VBR && cpi->rc.high_source_sad &&
-             cpi->oxcf.speed < 6 && cm->frame_type != KEY_FRAME &&
+             cpi->oxcf.speed < 6 && !frame_is_intra_only(cm) &&
              (cpi->refresh_golden_frame || cpi->refresh_alt_ref_frame))) {
           // Use lower max_partition_size for low resolutions.
           if (cm->width <= 352 && cm->height <= 288)
@@ -4441,7 +5789,7 @@
           // TODO(marpan): Seems like nonrd_select_partition does not support
           // 4x4 partition. Since 4x4 is used on key frame, use this switch
           // for now.
-          if (cm->frame_type == KEY_FRAME)
+          if (frame_is_intra_only(cm))
             nonrd_use_partition(cpi, td, tile_data, mi, tp, mi_row, mi_col,
                                 BLOCK_64X64, 1, &dummy_rdc, td->pc_root);
           else
@@ -4450,7 +5798,6 @@
         }
 
         break;
-      default: assert(0); break;
     }
 
     // Update ref_frame usage for inter frame if this group is ARF group.
@@ -4517,16 +5864,12 @@
                                       &var16->sse, &var16->sum);
             var16->var = variance_highbd(var16);
             break;
-          case VPX_BITS_12:
+          default:
+            assert(cm->bit_depth == VPX_BITS_12);
             vpx_highbd_12_get16x16var(src, src_stride, last_src, last_stride,
                                       &var16->sse, &var16->sum);
             var16->var = variance_highbd(var16);
             break;
-          default:
-            assert(0 &&
-                   "cm->bit_depth should be VPX_BITS_8, VPX_BITS_10"
-                   " or VPX_BITS_12");
-            return -1;
         }
       } else {
         vpx_get16x16var(src, src_stride, last_src, last_stride, &var16->sse,
@@ -4634,6 +5977,9 @@
         for (i = 0; i < BLOCK_SIZES; ++i) {
           for (j = 0; j < MAX_MODES; ++j) {
             tile_data->thresh_freq_fact[i][j] = RD_THRESH_INIT_FACT;
+#if CONFIG_CONSISTENT_RECODE || CONFIG_RATE_CTRL
+            tile_data->thresh_freq_fact_prev[i][j] = RD_THRESH_INIT_FACT;
+#endif  // CONFIG_CONSISTENT_RECODE || CONFIG_RATE_CTRL
             tile_data->mode_map[i][j] = j;
           }
         }
@@ -4647,6 +5993,9 @@
     for (tile_col = 0; tile_col < tile_cols; ++tile_col) {
       TileDataEnc *this_tile = &cpi->tile_data[tile_row * tile_cols + tile_col];
       TileInfo *tile_info = &this_tile->tile_info;
+      if (cpi->sf.adaptive_rd_thresh_row_mt &&
+          this_tile->row_base_thresh_freq_fact == NULL)
+        vp9_row_mt_alloc_rd_thresh(cpi, this_tile);
       vp9_tile_init(tile_info, cm, tile_row, tile_col);
 
       cpi->tile_tok[tile_row][tile_col] = pre_tok + tile_tok;
@@ -4677,8 +6026,10 @@
 
   if (cpi->sf.use_nonrd_pick_mode)
     encode_nonrd_sb_row(cpi, td, this_tile, mi_row, &tok);
+#if !CONFIG_REALTIME_ONLY
   else
     encode_rd_sb_row(cpi, td, this_tile, mi_row, &tok);
+#endif
 
   cpi->tplist[tile_row][tile_col][tile_sb_row].stop = tok;
   cpi->tplist[tile_row][tile_col][tile_sb_row].count =
@@ -4731,16 +6082,117 @@
 }
 #endif
 
+static int compare_kmeans_data(const void *a, const void *b) {
+  if (((const KMEANS_DATA *)a)->value > ((const KMEANS_DATA *)b)->value) {
+    return 1;
+  } else if (((const KMEANS_DATA *)a)->value <
+             ((const KMEANS_DATA *)b)->value) {
+    return -1;
+  } else {
+    return 0;
+  }
+}
+
+static void compute_boundary_ls(const double *ctr_ls, int k,
+                                double *boundary_ls) {
+  // boundary_ls[j] is the upper bound of data centered at ctr_ls[j]
+  int j;
+  for (j = 0; j < k - 1; ++j) {
+    boundary_ls[j] = (ctr_ls[j] + ctr_ls[j + 1]) / 2.;
+  }
+  boundary_ls[k - 1] = DBL_MAX;
+}
+
+int vp9_get_group_idx(double value, double *boundary_ls, int k) {
+  int group_idx = 0;
+  while (value >= boundary_ls[group_idx]) {
+    ++group_idx;
+    if (group_idx == k - 1) {
+      break;
+    }
+  }
+  return group_idx;
+}
+
+void vp9_kmeans(double *ctr_ls, double *boundary_ls, int *count_ls, int k,
+                KMEANS_DATA *arr, int size) {
+  int i, j;
+  int itr;
+  int group_idx;
+  double sum[MAX_KMEANS_GROUPS];
+  int count[MAX_KMEANS_GROUPS];
+
+  vpx_clear_system_state();
+
+  assert(k >= 2 && k <= MAX_KMEANS_GROUPS);
+
+  qsort(arr, size, sizeof(*arr), compare_kmeans_data);
+
+  // initialize the center points
+  for (j = 0; j < k; ++j) {
+    ctr_ls[j] = arr[(size * (2 * j + 1)) / (2 * k)].value;
+  }
+
+  for (itr = 0; itr < 10; ++itr) {
+    compute_boundary_ls(ctr_ls, k, boundary_ls);
+    for (i = 0; i < MAX_KMEANS_GROUPS; ++i) {
+      sum[i] = 0;
+      count[i] = 0;
+    }
+
+    // Both the data and centers are sorted in ascending order.
+    // As each data point is processed in order, its corresponding group index
+    // can only increase. So we only need to reset the group index to zero here.
+    group_idx = 0;
+    for (i = 0; i < size; ++i) {
+      while (arr[i].value >= boundary_ls[group_idx]) {
+        // place samples into clusters
+        ++group_idx;
+        if (group_idx == k - 1) {
+          break;
+        }
+      }
+      sum[group_idx] += arr[i].value;
+      ++count[group_idx];
+    }
+
+    for (group_idx = 0; group_idx < k; ++group_idx) {
+      if (count[group_idx] > 0)
+        ctr_ls[group_idx] = sum[group_idx] / count[group_idx];
+
+      sum[group_idx] = 0;
+      count[group_idx] = 0;
+    }
+  }
+
+  // compute group_idx, boundary_ls and count_ls
+  for (j = 0; j < k; ++j) {
+    count_ls[j] = 0;
+  }
+  compute_boundary_ls(ctr_ls, k, boundary_ls);
+  group_idx = 0;
+  for (i = 0; i < size; ++i) {
+    while (arr[i].value >= boundary_ls[group_idx]) {
+      ++group_idx;
+      if (group_idx == k - 1) {
+        break;
+      }
+    }
+    arr[i].group_idx = group_idx;
+    ++count_ls[group_idx];
+  }
+}
+
 static void encode_frame_internal(VP9_COMP *cpi) {
   SPEED_FEATURES *const sf = &cpi->sf;
   ThreadData *const td = &cpi->td;
   MACROBLOCK *const x = &td->mb;
   VP9_COMMON *const cm = &cpi->common;
   MACROBLOCKD *const xd = &x->e_mbd;
+  const int gf_group_index = cpi->twopass.gf_group.index;
 
   xd->mi = cm->mi_grid_visible;
   xd->mi[0] = cm->mi;
-
   vp9_zero(*td->counts);
   vp9_zero(cpi->td.rd_counts);
 
@@ -4758,8 +6210,12 @@
   x->fwd_txfm4x4 = xd->lossless ? vp9_fwht4x4 : vpx_fdct4x4;
 #endif  // CONFIG_VP9_HIGHBITDEPTH
   x->inv_txfm_add = xd->lossless ? vp9_iwht4x4_add : vp9_idct4x4_add;
-
+#if CONFIG_CONSISTENT_RECODE
+  x->optimize = sf->optimize_coefficients == 1 && cpi->oxcf.pass != 1;
+#endif
   if (xd->lossless) x->optimize = 0;
+  x->sharpness = cpi->oxcf.sharpness;
+  x->adjust_rdmult_by_segment = (cpi->oxcf.aq_mode == VARIANCE_AQ);
 
   cm->tx_mode = select_tx_mode(cpi, xd);
 
@@ -4801,8 +6257,33 @@
 
     if (sf->partition_search_type == SOURCE_VAR_BASED_PARTITION)
       source_var_based_partition_search_method(cpi);
+  } else if (gf_group_index && gf_group_index < MAX_ARF_GOP_SIZE &&
+             cpi->sf.enable_tpl_model) {
+    TplDepFrame *tpl_frame = &cpi->tpl_stats[cpi->twopass.gf_group.index];
+    TplDepStats *tpl_stats = tpl_frame->tpl_stats_ptr;
+
+    int tpl_stride = tpl_frame->stride;
+    int64_t intra_cost_base = 0;
+    int64_t mc_dep_cost_base = 0;
+    int row, col;
+
+    for (row = 0; row < cm->mi_rows && tpl_frame->is_valid; ++row) {
+      for (col = 0; col < cm->mi_cols; ++col) {
+        TplDepStats *this_stats = &tpl_stats[row * tpl_stride + col];
+        intra_cost_base += this_stats->intra_cost;
+        mc_dep_cost_base += this_stats->mc_dep_cost;
+      }
+    }
+
+    vpx_clear_system_state();
+
+    if (tpl_frame->is_valid)
+      cpi->rd.r0 = (double)intra_cost_base / mc_dep_cost_base;
   }
 
+  // Frame segmentation
+  if (cpi->oxcf.aq_mode == PERCEPTUAL_AQ) build_kmeans_segmentation(cpi);
+
   {
     struct vpx_usec_timer emr_timer;
     vpx_usec_timer_start(&emr_timer);
@@ -4883,9 +6364,52 @@
   return sum_delta / (cm->mi_rows * cm->mi_cols);
 }
 
+#if CONFIG_CONSISTENT_RECODE || CONFIG_RATE_CTRL
+static void restore_encode_params(VP9_COMP *cpi) {
+  VP9_COMMON *const cm = &cpi->common;
+  const int tile_cols = 1 << cm->log2_tile_cols;
+  const int tile_rows = 1 << cm->log2_tile_rows;
+  int tile_col, tile_row;
+  int i, j;
+  RD_OPT *rd_opt = &cpi->rd;
+  for (i = 0; i < MAX_REF_FRAMES; i++) {
+    for (j = 0; j < REFERENCE_MODES; j++)
+      rd_opt->prediction_type_threshes[i][j] =
+          rd_opt->prediction_type_threshes_prev[i][j];
+
+    for (j = 0; j < SWITCHABLE_FILTER_CONTEXTS; j++)
+      rd_opt->filter_threshes[i][j] = rd_opt->filter_threshes_prev[i][j];
+  }
+
+  if (cpi->tile_data != NULL) {
+    for (tile_row = 0; tile_row < tile_rows; ++tile_row)
+      for (tile_col = 0; tile_col < tile_cols; ++tile_col) {
+        TileDataEnc *tile_data =
+            &cpi->tile_data[tile_row * tile_cols + tile_col];
+        for (i = 0; i < BLOCK_SIZES; ++i) {
+          for (j = 0; j < MAX_MODES; ++j) {
+            tile_data->thresh_freq_fact[i][j] =
+                tile_data->thresh_freq_fact_prev[i][j];
+          }
+        }
+      }
+  }
+
+  cm->interp_filter = cpi->sf.default_interp_filter;
+}
+#endif  // CONFIG_CONSISTENT_RECODE || CONFIG_RATE_CTRL
+
 void vp9_encode_frame(VP9_COMP *cpi) {
   VP9_COMMON *const cm = &cpi->common;
 
+#if CONFIG_CONSISTENT_RECODE || CONFIG_RATE_CTRL
+  restore_encode_params(cpi);
+#endif
+
+#if CONFIG_MISMATCH_DEBUG
+  mismatch_reset_frame(MAX_MB_PLANE);
+#endif
+
   // In the longer term the encoder should be generalized to match the
   // decoder such that we allow compound where one of the 3 buffers has a
   // different sign bias and that buffer is then the fixed ref. However, this
@@ -4893,16 +6417,11 @@
   // side behavior is where the ALT ref buffer has opposite sign bias to
   // the other two.
   if (!frame_is_intra_only(cm)) {
-    if ((cm->ref_frame_sign_bias[ALTREF_FRAME] ==
-         cm->ref_frame_sign_bias[GOLDEN_FRAME]) ||
-        (cm->ref_frame_sign_bias[ALTREF_FRAME] ==
-         cm->ref_frame_sign_bias[LAST_FRAME])) {
-      cpi->allow_comp_inter_inter = 0;
-    } else {
+    if (vp9_compound_reference_allowed(cm)) {
       cpi->allow_comp_inter_inter = 1;
-      cm->comp_fixed_ref = ALTREF_FRAME;
-      cm->comp_var_ref[0] = LAST_FRAME;
-      cm->comp_var_ref[1] = GOLDEN_FRAME;
+      vp9_setup_compound_reference_mode(cm);
+    } else {
+      cpi->allow_comp_inter_inter = 0;
     }
   }
 
@@ -5066,7 +6585,8 @@
   for (y = 0; y < ymis; y++)
     for (x = 0; x < xmis; x++) {
       int map_offset = block_index + y * cm->mi_cols + x;
-      if (is_inter_block(mi) && mi->segment_id <= CR_SEGMENT_ID_BOOST2) {
+      if (mi->ref_frame[0] == LAST_FRAME && is_inter_block(mi) &&
+          mi->segment_id <= CR_SEGMENT_ID_BOOST2) {
         if (abs(mv.row) < 8 && abs(mv.col) < 8) {
           if (cpi->consec_zero_mv[map_offset] < 255)
             cpi->consec_zero_mv[map_offset]++;
@@ -5133,7 +6653,27 @@
     vp9_build_inter_predictors_sbuv(xd, mi_row, mi_col,
                                     VPXMAX(bsize, BLOCK_8X8));
 
-    vp9_encode_sb(x, VPXMAX(bsize, BLOCK_8X8));
+#if CONFIG_MISMATCH_DEBUG
+    if (output_enabled) {
+      int plane;
+      for (plane = 0; plane < MAX_MB_PLANE; ++plane) {
+        const struct macroblockd_plane *pd = &xd->plane[plane];
+        int pixel_c, pixel_r;
+        const BLOCK_SIZE plane_bsize =
+            get_plane_block_size(VPXMAX(bsize, BLOCK_8X8), &xd->plane[plane]);
+        const int bw = get_block_width(plane_bsize);
+        const int bh = get_block_height(plane_bsize);
+        mi_to_pixel_loc(&pixel_c, &pixel_r, mi_col, mi_row, 0, 0,
+                        pd->subsampling_x, pd->subsampling_y);
+
+        mismatch_record_block_pre(pd->dst.buf, pd->dst.stride, plane, pixel_c,
+                                  pixel_r, bw, bh,
+                                  xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH);
+      }
+    }
+#endif
+
+    vp9_encode_sb(x, VPXMAX(bsize, BLOCK_8X8), mi_row, mi_col, output_enabled);
     vp9_tokenize_sb(cpi, td, t, !output_enabled, seg_skip,
                     VPXMAX(bsize, BLOCK_8X8));
   }
@@ -5161,7 +6701,11 @@
     ++td->counts->tx.tx_totals[get_uv_tx_size(mi, &xd->plane[1])];
     if (cm->seg.enabled && cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ)
       vp9_cyclic_refresh_update_sb_postencode(cpi, mi, mi_row, mi_col, bsize);
-    if (cpi->oxcf.pass == 0 && cpi->svc.temporal_layer_id == 0)
+    if (cpi->oxcf.pass == 0 && cpi->svc.temporal_layer_id == 0 &&
+        (!cpi->use_svc ||
+         (cpi->use_svc &&
+          !cpi->svc.layer_context[cpi->svc.temporal_layer_id].is_key_frame &&
+          cpi->svc.spatial_layer_id == cpi->svc.number_spatial_layers - 1)))
       update_zeromv_cnt(cpi, mi, mi_row, mi_col, bsize);
   }
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodeframe.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodeframe.h
index cf5ae3d..fd0a9c5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodeframe.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodeframe.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_ENCODEFRAME_H_
-#define VP9_ENCODER_VP9_ENCODEFRAME_H_
+#ifndef VPX_VP9_ENCODER_VP9_ENCODEFRAME_H_
+#define VPX_VP9_ENCODER_VP9_ENCODEFRAME_H_
 
 #include "vpx/vpx_integer.h"
 
@@ -45,8 +45,13 @@
 void vp9_set_variance_partition_thresholds(struct VP9_COMP *cpi, int q,
                                            int content_state);
 
+struct KMEANS_DATA;
+void vp9_kmeans(double *ctr_ls, double *boundary_ls, int *count_ls, int k,
+                struct KMEANS_DATA *arr, int size);
+int vp9_get_group_idx(double value, double *boundary_ls, int k);
+
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_ENCODEFRAME_H_
+#endif  // VPX_VP9_ENCODER_VP9_ENCODEFRAME_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemb.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemb.c
index 970077d..7630a81 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemb.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemb.c
@@ -16,6 +16,10 @@
 #include "vpx_mem/vpx_mem.h"
 #include "vpx_ports/mem.h"
 
+#if CONFIG_MISMATCH_DEBUG
+#include "vpx_util/vpx_debug_util.h"
+#endif
+
 #include "vp9/common/vp9_idct.h"
 #include "vp9/common/vp9_reconinter.h"
 #include "vp9/common/vp9_reconintra.h"
@@ -56,7 +60,7 @@
 
 // 'num' can be negative, but 'shift' must be non-negative.
 #define RIGHT_SHIFT_POSSIBLY_NEGATIVE(num, shift) \
-  ((num) >= 0) ? (num) >> (shift) : -((-(num)) >> (shift))
+  (((num) >= 0) ? (num) >> (shift) : -((-(num)) >> (shift)))
 
 int vp9_optimize_b(MACROBLOCK *mb, int plane, int block, TX_SIZE tx_size,
                    int ctx) {
@@ -77,13 +81,19 @@
   const scan_order *const so = get_scan(xd, tx_size, plane_type, block);
   const int16_t *const scan = so->scan;
   const int16_t *const nb = so->neighbors;
+  const MODE_INFO *mbmi = xd->mi[0];
+  const int sharpness = mb->sharpness;
+  const int64_t rdadj = (int64_t)mb->rdmult * plane_rd_mult[ref][plane_type];
   const int64_t rdmult =
-      ((int64_t)mb->rdmult * plane_rd_mult[ref][plane_type]) >> 1;
+      (sharpness == 0 ? rdadj >> 1
+                      : (rdadj * (8 - sharpness + mbmi->segment_id)) >> 4);
+
   const int64_t rddiv = mb->rddiv;
   int64_t rd_cost0, rd_cost1;
   int64_t rate0, rate1;
   int16_t t0, t1;
   int i, final_eob;
+  int count_high_values_after_eob = 0;
 #if CONFIG_VP9_HIGHBITDEPTH
   const uint16_t *cat6_high_cost = vp9_get_high_cost_table(xd->bd);
 #else
@@ -263,6 +273,7 @@
           assert(distortion0 <= distortion_for_zero);
           token_cache[rc] = vp9_pt_energy_class[t0];
         }
+        if (sharpness > 0 && abs(qcoeff[rc]) > 1) count_high_values_after_eob++;
         assert(accu_error >= 0);
         x_prev = qcoeff[rc];  // Update based on selected quantized value.
 
@@ -273,6 +284,7 @@
         if (best_eob_cost_cur < best_block_rd_cost) {
           best_block_rd_cost = best_eob_cost_cur;
           final_eob = i + 1;
+          count_high_values_after_eob = 0;
           if (use_x1) {
             before_best_eob_qc = x1;
             before_best_eob_dqc = dqc1;
@@ -284,19 +296,31 @@
       }
     }
   }
-  assert(final_eob <= eob);
-  if (final_eob > 0) {
-    int rc;
-    assert(before_best_eob_qc != 0);
-    i = final_eob - 1;
-    rc = scan[i];
-    qcoeff[rc] = before_best_eob_qc;
-    dqcoeff[rc] = before_best_eob_dqc;
-  }
-  for (i = final_eob; i < eob; i++) {
-    int rc = scan[i];
-    qcoeff[rc] = 0;
-    dqcoeff[rc] = 0;
+  if (count_high_values_after_eob > 0) {
+    final_eob = eob - 1;
+    for (; final_eob >= 0; final_eob--) {
+      const int rc = scan[final_eob];
+      const int x = qcoeff[rc];
+      if (x) {
+        break;
+      }
+    }
+    final_eob++;
+  } else {
+    assert(final_eob <= eob);
+    if (final_eob > 0) {
+      int rc;
+      assert(before_best_eob_qc != 0);
+      i = final_eob - 1;
+      rc = scan[i];
+      qcoeff[rc] = before_best_eob_qc;
+      dqcoeff[rc] = before_best_eob_dqc;
+    }
+    for (i = final_eob; i < eob; i++) {
+      int rc = scan[i];
+      qcoeff[rc] = 0;
+      dqcoeff[rc] = 0;
+    }
   }
   mb->plane[plane].eobs[block] = final_eob;
   return final_eob;
@@ -358,13 +382,13 @@
                                p->quant_fp, qcoeff, dqcoeff, pd->dequant, eob,
                                scan_order->scan, scan_order->iscan);
         break;
-      case TX_4X4:
+      default:
+        assert(tx_size == TX_4X4);
         x->fwd_txfm4x4(src_diff, coeff, diff_stride);
         vp9_highbd_quantize_fp(coeff, 16, x->skip_block, p->round_fp,
                                p->quant_fp, qcoeff, dqcoeff, pd->dequant, eob,
                                scan_order->scan, scan_order->iscan);
         break;
-      default: assert(0);
     }
     return;
   }
@@ -384,17 +408,19 @@
                       scan_order->iscan);
       break;
     case TX_8X8:
-      vp9_fdct8x8_quant(src_diff, diff_stride, coeff, 64, x->skip_block,
-                        p->round_fp, p->quant_fp, qcoeff, dqcoeff, pd->dequant,
-                        eob, scan_order->scan, scan_order->iscan);
+      vpx_fdct8x8(src_diff, coeff, diff_stride);
+      vp9_quantize_fp(coeff, 64, x->skip_block, p->round_fp, p->quant_fp,
+                      qcoeff, dqcoeff, pd->dequant, eob, scan_order->scan,
+                      scan_order->iscan);
+
       break;
-    case TX_4X4:
+    default:
+      assert(tx_size == TX_4X4);
       x->fwd_txfm4x4(src_diff, coeff, diff_stride);
       vp9_quantize_fp(coeff, 16, x->skip_block, p->round_fp, p->quant_fp,
                       qcoeff, dqcoeff, pd->dequant, eob, scan_order->scan,
                       scan_order->iscan);
       break;
-    default: assert(0); break;
   }
 }
 
@@ -434,13 +460,13 @@
                                p->quant_fp[0], qcoeff, dqcoeff, pd->dequant[0],
                                eob);
         break;
-      case TX_4X4:
+      default:
+        assert(tx_size == TX_4X4);
         x->fwd_txfm4x4(src_diff, coeff, diff_stride);
         vpx_highbd_quantize_dc(coeff, 16, x->skip_block, p->round,
                                p->quant_fp[0], qcoeff, dqcoeff, pd->dequant[0],
                                eob);
         break;
-      default: assert(0);
     }
     return;
   }
@@ -462,12 +488,12 @@
       vpx_quantize_dc(coeff, 64, x->skip_block, p->round, p->quant_fp[0],
                       qcoeff, dqcoeff, pd->dequant[0], eob);
       break;
-    case TX_4X4:
+    default:
+      assert(tx_size == TX_4X4);
       x->fwd_txfm4x4(src_diff, coeff, diff_stride);
       vpx_quantize_dc(coeff, 16, x->skip_block, p->round, p->quant_fp[0],
                       qcoeff, dqcoeff, pd->dequant[0], eob);
       break;
-    default: assert(0); break;
   }
 }
 
@@ -511,14 +537,14 @@
                               pd->dequant, eob, scan_order->scan,
                               scan_order->iscan);
         break;
-      case TX_4X4:
+      default:
+        assert(tx_size == TX_4X4);
         x->fwd_txfm4x4(src_diff, coeff, diff_stride);
         vpx_highbd_quantize_b(coeff, 16, x->skip_block, p->zbin, p->round,
                               p->quant, p->quant_shift, qcoeff, dqcoeff,
                               pd->dequant, eob, scan_order->scan,
                               scan_order->iscan);
         break;
-      default: assert(0);
     }
     return;
   }
@@ -544,19 +570,24 @@
                      p->quant_shift, qcoeff, dqcoeff, pd->dequant, eob,
                      scan_order->scan, scan_order->iscan);
       break;
-    case TX_4X4:
+    default:
+      assert(tx_size == TX_4X4);
       x->fwd_txfm4x4(src_diff, coeff, diff_stride);
       vpx_quantize_b(coeff, 16, x->skip_block, p->zbin, p->round, p->quant,
                      p->quant_shift, qcoeff, dqcoeff, pd->dequant, eob,
                      scan_order->scan, scan_order->iscan);
       break;
-    default: assert(0); break;
   }
 }
 
 static void encode_block(int plane, int block, int row, int col,
                          BLOCK_SIZE plane_bsize, TX_SIZE tx_size, void *arg) {
   struct encode_b_args *const args = arg;
+#if CONFIG_MISMATCH_DEBUG
+  int mi_row = args->mi_row;
+  int mi_col = args->mi_col;
+  int output_enabled = args->output_enabled;
+#endif
   MACROBLOCK *const x = args->x;
   MACROBLOCKD *const xd = &x->e_mbd;
   struct macroblock_plane *const p = &x->plane[plane];
@@ -573,7 +604,11 @@
   if (x->zcoeff_blk[tx_size][block] && plane == 0) {
     p->eobs[block] = 0;
     *a = *l = 0;
+#if CONFIG_MISMATCH_DEBUG
+    goto encode_block_end;
+#else
     return;
+#endif
   }
 
   if (!x->skip_recode) {
@@ -583,7 +618,11 @@
         // skip forward transform
         p->eobs[block] = 0;
         *a = *l = 0;
+#if CONFIG_MISMATCH_DEBUG
+        goto encode_block_end;
+#else
         return;
+#endif
       } else {
         vp9_xform_quant_fp(x, plane, block, row, col, plane_bsize, tx_size);
       }
@@ -600,7 +639,11 @@
           // skip forward transform
           p->eobs[block] = 0;
           *a = *l = 0;
+#if CONFIG_MISMATCH_DEBUG
+          goto encode_block_end;
+#else
           return;
+#endif
         }
       } else {
         vp9_xform_quant(x, plane, block, row, col, plane_bsize, tx_size);
@@ -617,7 +660,13 @@
 
   if (p->eobs[block]) *(args->skip) = 0;
 
-  if (x->skip_encode || p->eobs[block] == 0) return;
+  if (x->skip_encode || p->eobs[block] == 0) {
+#if CONFIG_MISMATCH_DEBUG
+    goto encode_block_end;
+#else
+    return;
+#endif
+  }
 #if CONFIG_VP9_HIGHBITDEPTH
   if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
     uint16_t *const dst16 = CONVERT_TO_SHORTPTR(dst);
@@ -634,16 +683,20 @@
         vp9_highbd_idct8x8_add(dqcoeff, dst16, pd->dst.stride, p->eobs[block],
                                xd->bd);
         break;
-      case TX_4X4:
+      default:
+        assert(tx_size == TX_4X4);
         // this is like vp9_short_idct4x4 but has a special case around eob<=1
         // which is significant (not just an optimization) for the lossless
         // case.
         x->highbd_inv_txfm_add(dqcoeff, dst16, pd->dst.stride, p->eobs[block],
                                xd->bd);
         break;
-      default: assert(0 && "Invalid transform size");
     }
+#if CONFIG_MISMATCH_DEBUG
+    goto encode_block_end;
+#else
     return;
+#endif
   }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
@@ -657,14 +710,27 @@
     case TX_8X8:
       vp9_idct8x8_add(dqcoeff, dst, pd->dst.stride, p->eobs[block]);
       break;
-    case TX_4X4:
+    default:
+      assert(tx_size == TX_4X4);
       // this is like vp9_short_idct4x4 but has a special case around eob<=1
       // which is significant (not just an optimization) for the lossless
       // case.
       x->inv_txfm_add(dqcoeff, dst, pd->dst.stride, p->eobs[block]);
       break;
-    default: assert(0 && "Invalid transform size"); break;
   }
+#if CONFIG_MISMATCH_DEBUG
+encode_block_end:
+  if (output_enabled) {
+    int pixel_c, pixel_r;
+    int blk_w = 1 << (tx_size + TX_UNIT_SIZE_LOG2);
+    int blk_h = 1 << (tx_size + TX_UNIT_SIZE_LOG2);
+    mi_to_pixel_loc(&pixel_c, &pixel_r, mi_col, mi_row, col, row,
+                    pd->subsampling_x, pd->subsampling_y);
+    mismatch_record_block_tx(dst, pd->dst.stride, plane, pixel_c, pixel_r,
+                             blk_w, blk_h,
+                             xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH);
+  }
+#endif
 }
 
 static void encode_block_pass1(int plane, int block, int row, int col,
@@ -698,12 +764,21 @@
                                          encode_block_pass1, x);
 }
 
-void vp9_encode_sb(MACROBLOCK *x, BLOCK_SIZE bsize) {
+void vp9_encode_sb(MACROBLOCK *x, BLOCK_SIZE bsize, int mi_row, int mi_col,
+                   int output_enabled) {
   MACROBLOCKD *const xd = &x->e_mbd;
   struct optimize_ctx ctx;
   MODE_INFO *mi = xd->mi[0];
-  struct encode_b_args arg = { x, 1, NULL, NULL, &mi->skip };
   int plane;
+#if CONFIG_MISMATCH_DEBUG
+  struct encode_b_args arg = { x,         1,      NULL,   NULL,
+                               &mi->skip, mi_row, mi_col, output_enabled };
+#else
+  struct encode_b_args arg = { x, 1, NULL, NULL, &mi->skip };
+  (void)mi_row;
+  (void)mi_col;
+  (void)output_enabled;
+#endif
 
   mi->skip = 1;
 
@@ -848,7 +923,8 @@
                                 xd->bd);
         }
         break;
-      case TX_4X4:
+      default:
+        assert(tx_size == TX_4X4);
         if (!x->skip_recode) {
           vpx_highbd_subtract_block(4, 4, src_diff, diff_stride, src,
                                     src_stride, dst, dst_stride, xd->bd);
@@ -876,7 +952,6 @@
           }
         }
         break;
-      default: assert(0); return;
     }
     if (*eob) *(args->skip) = 0;
     return;
@@ -930,7 +1005,8 @@
       if (!x->skip_encode && *eob)
         vp9_iht8x8_add(tx_type, dqcoeff, dst, dst_stride, *eob);
       break;
-    case TX_4X4:
+    default:
+      assert(tx_size == TX_4X4);
       if (!x->skip_recode) {
         vpx_subtract_block(4, 4, src_diff, diff_stride, src, src_stride, dst,
                            dst_stride);
@@ -955,7 +1031,6 @@
           vp9_iht4x4_16_add(dqcoeff, dst, dst_stride, tx_type);
       }
       break;
-    default: assert(0); break;
   }
   if (*eob) *(args->skip) = 0;
 }
@@ -964,8 +1039,16 @@
                                   int enable_optimize_b) {
   const MACROBLOCKD *const xd = &x->e_mbd;
   struct optimize_ctx ctx;
+#if CONFIG_MISMATCH_DEBUG
+  // TODO(angiebird): make mismatch_debug support intra mode
+  struct encode_b_args arg = {
+    x, enable_optimize_b, ctx.ta[plane], ctx.tl[plane], &xd->mi[0]->skip, 0, 0,
+    0
+  };
+#else
   struct encode_b_args arg = { x, enable_optimize_b, ctx.ta[plane],
                                ctx.tl[plane], &xd->mi[0]->skip };
+#endif
 
   if (enable_optimize_b && x->optimize &&
       (!x->skip_recode || !x->skip_optimize)) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemb.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemb.h
index cf943be..1975ee7 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemb.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemb.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_ENCODEMB_H_
-#define VP9_ENCODER_VP9_ENCODEMB_H_
+#ifndef VPX_VP9_ENCODER_VP9_ENCODEMB_H_
+#define VPX_VP9_ENCODER_VP9_ENCODEMB_H_
 
 #include "./vpx_config.h"
 #include "vp9/encoder/vp9_block.h"
@@ -24,10 +24,16 @@
   ENTROPY_CONTEXT *ta;
   ENTROPY_CONTEXT *tl;
   int8_t *skip;
+#if CONFIG_MISMATCH_DEBUG
+  int mi_row;
+  int mi_col;
+  int output_enabled;
+#endif
 };
 int vp9_optimize_b(MACROBLOCK *mb, int plane, int block, TX_SIZE tx_size,
                    int ctx);
-void vp9_encode_sb(MACROBLOCK *x, BLOCK_SIZE bsize);
+void vp9_encode_sb(MACROBLOCK *x, BLOCK_SIZE bsize, int mi_row, int mi_col,
+                   int output_enabled);
 void vp9_encode_sby_pass1(MACROBLOCK *x, BLOCK_SIZE bsize);
 void vp9_xform_quant_fp(MACROBLOCK *x, int plane, int block, int row, int col,
                         BLOCK_SIZE plane_bsize, TX_SIZE tx_size);
@@ -48,4 +54,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_ENCODEMB_H_
+#endif  // VPX_VP9_ENCODER_VP9_ENCODEMB_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemv.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemv.h
index 9fc7ab8..2f1be4b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemv.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encodemv.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_ENCODEMV_H_
-#define VP9_ENCODER_VP9_ENCODEMV_H_
+#ifndef VPX_VP9_ENCODER_VP9_ENCODEMV_H_
+#define VPX_VP9_ENCODER_VP9_ENCODEMV_H_
 
 #include "vp9/encoder/vp9_encoder.h"
 
@@ -27,7 +27,7 @@
                    unsigned int *const max_mv_magnitude);
 
 void vp9_build_nmv_cost_table(int *mvjoint, int *mvcost[2],
-                              const nmv_context *mvctx, int usehp);
+                              const nmv_context *ctx, int usehp);
 
 void vp9_update_mv_count(ThreadData *td);
 
@@ -35,4 +35,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_ENCODEMV_H_
+#endif  // VPX_VP9_ENCODER_VP9_ENCODEMV_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encoder.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encoder.c
index fd3889d..bd64cfe 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encoder.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encoder.c
@@ -8,9 +8,10 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include <limits.h>
 #include <math.h>
 #include <stdio.h>
-#include <limits.h>
+#include <stdlib.h>
 
 #include "./vp9_rtcd.h"
 #include "./vpx_config.h"
@@ -25,31 +26,49 @@
 #include "vpx_ports/mem.h"
 #include "vpx_ports/system_state.h"
 #include "vpx_ports/vpx_timer.h"
+#if CONFIG_BITSTREAM_DEBUG || CONFIG_MISMATCH_DEBUG
+#include "vpx_util/vpx_debug_util.h"
+#endif  // CONFIG_BITSTREAM_DEBUG || CONFIG_MISMATCH_DEBUG
 
 #include "vp9/common/vp9_alloccommon.h"
 #include "vp9/common/vp9_filter.h"
 #include "vp9/common/vp9_idct.h"
+#if CONFIG_NON_GREEDY_MV
+#include "vp9/common/vp9_mvref_common.h"
+#endif
 #if CONFIG_VP9_POSTPROC
 #include "vp9/common/vp9_postproc.h"
 #endif
 #include "vp9/common/vp9_reconinter.h"
 #include "vp9/common/vp9_reconintra.h"
 #include "vp9/common/vp9_tile_common.h"
+#include "vp9/common/vp9_scan.h"
 
+#if !CONFIG_REALTIME_ONLY
 #include "vp9/encoder/vp9_alt_ref_aq.h"
 #include "vp9/encoder/vp9_aq_360.h"
 #include "vp9/encoder/vp9_aq_complexity.h"
+#endif
 #include "vp9/encoder/vp9_aq_cyclicrefresh.h"
+#if !CONFIG_REALTIME_ONLY
 #include "vp9/encoder/vp9_aq_variance.h"
+#endif
 #include "vp9/encoder/vp9_bitstream.h"
+#if CONFIG_INTERNAL_STATS
+#include "vp9/encoder/vp9_blockiness.h"
+#endif
 #include "vp9/encoder/vp9_context_tree.h"
 #include "vp9/encoder/vp9_encodeframe.h"
+#include "vp9/encoder/vp9_encodemb.h"
 #include "vp9/encoder/vp9_encodemv.h"
 #include "vp9/encoder/vp9_encoder.h"
-#include "vp9/encoder/vp9_extend.h"
 #include "vp9/encoder/vp9_ethread.h"
+#include "vp9/encoder/vp9_extend.h"
 #include "vp9/encoder/vp9_firstpass.h"
 #include "vp9/encoder/vp9_mbgraph.h"
+#if CONFIG_NON_GREEDY_MV
+#include "vp9/encoder/vp9_mcomp.h"
+#endif
 #include "vp9/encoder/vp9_multi_thread.h"
 #include "vp9/encoder/vp9_noise_estimate.h"
 #include "vp9/encoder/vp9_picklpf.h"
@@ -61,6 +80,7 @@
 #include "vp9/encoder/vp9_speed_features.h"
 #include "vp9/encoder/vp9_svc_layercontext.h"
 #include "vp9/encoder/vp9_temporal_filter.h"
+#include "vp9/vp9_cx_iface.h"
 
 #define AM_SEGMENT_ID_INACTIVE 7
 #define AM_SEGMENT_ID_ACTIVE 0
@@ -84,6 +104,9 @@
 #ifdef OUTPUT_YUV_REC
 FILE *yuv_rec_file;
 #endif
+#ifdef OUTPUT_YUV_SVC_SRC
+FILE *yuv_svc_src[3] = { NULL, NULL, NULL };
+#endif
 
 #if 0
 FILE *framepsnr;
@@ -102,6 +125,14 @@
 }
 #endif
 
+#if CONFIG_VP9_HIGHBITDEPTH
+void highbd_wht_fwd_txfm(int16_t *src_diff, int bw, tran_low_t *coeff,
+                         TX_SIZE tx_size);
+#endif
+void wht_fwd_txfm(int16_t *src_diff, int bw, tran_low_t *coeff,
+                  TX_SIZE tx_size);
+
+#if !CONFIG_REALTIME_ONLY
 // compute adaptive threshold for skip recoding
 static int compute_context_model_thresh(const VP9_COMP *const cpi) {
   const VP9_COMMON *const cm = &cpi->common;
@@ -426,10 +457,11 @@
 
   return -diff;
 }
+#endif  // !CONFIG_REALTIME_ONLY
 
 // Test for whether to calculate metrics for the frame.
-static int is_psnr_calc_enabled(VP9_COMP *cpi) {
-  VP9_COMMON *const cm = &cpi->common;
+static int is_psnr_calc_enabled(const VP9_COMP *cpi) {
+  const VP9_COMMON *const cm = &cpi->common;
   const VP9EncoderConfig *const oxcf = &cpi->oxcf;
 
   return cpi->b_calculate_psnr && (oxcf->pass != 1) && cm->show_frame;
@@ -483,15 +515,11 @@
       *hr = 3;
       *hs = 5;
       break;
-    case ONETWO:
+    default:
+      assert(mode == ONETWO);
       *hr = 1;
       *hs = 2;
       break;
-    default:
-      *hr = 1;
-      *hs = 1;
-      assert(0);
-      break;
   }
 }
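The `*hr`/`*hs` pair written by the function above expresses downscaling as a rational factor (3/5, 1/2, and so on). A scaled dimension is then obtained by multiplying by `hr` and dividing by `hs` with rounding, along these lines (`scale_dim` is an illustrative helper, not the libvpx routine):

```c
/* Apply the rational scale factor hr/hs to a frame dimension,
 * rounding to the nearest integer (hs / 2 is the rounding bias). */
static int scale_dim(int dim, int hr, int hs) {
  return (dim * hr + hs / 2) / hs;
}
```

For example, ONETWO (hr = 1, hs = 2) maps 1280 to 640, and a 3/5 factor maps 720 to 432.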
 
@@ -700,7 +728,7 @@
   }
   CHECK_MEM_ERROR(cm, roi->roi_map, vpx_malloc(rows * cols));
 
-  // Copy to ROI sturcture in the compressor.
+  // Copy to ROI structure in the compressor.
   memcpy(roi->roi_map, map, rows * cols);
   memcpy(&roi->delta_q, delta_q, MAX_SEGMENTS * sizeof(delta_q[0]));
   memcpy(&roi->delta_lf, delta_lf, MAX_SEGMENTS * sizeof(delta_lf[0]));
@@ -790,8 +818,37 @@
     if (!cpi->use_svc) cm->frame_context_idx = cpi->refresh_alt_ref_frame;
   }
 
+  // TODO(jingning): Overwrite the frame_context_idx index in multi-layer ARF
+  // case. Need some further investigation on if we could apply this to single
+  // layer ARF case as well.
+  if (cpi->multi_layer_arf && !cpi->use_svc) {
+    GF_GROUP *const gf_group = &cpi->twopass.gf_group;
+    const int gf_group_index = gf_group->index;
+    const int boost_frame =
+        !cpi->rc.is_src_frame_alt_ref &&
+        (cpi->refresh_golden_frame || cpi->refresh_alt_ref_frame);
+
+    // frame_context_idx           Frame Type
+    //        0              Intra only frame, base layer ARF
+    //        1              ARFs with layer depth = 2,3
+    //        2              ARFs with layer depth > 3
+    //        3              Non-boosted frames
+    if (frame_is_intra_only(cm)) {
+      cm->frame_context_idx = 0;
+    } else if (boost_frame) {
+      if (gf_group->rf_level[gf_group_index] == GF_ARF_STD)
+        cm->frame_context_idx = 0;
+      else if (gf_group->layer_depth[gf_group_index] <= 3)
+        cm->frame_context_idx = 1;
+      else
+        cm->frame_context_idx = 2;
+    } else {
+      cm->frame_context_idx = 3;
+    }
+  }
+
   if (cm->frame_type == KEY_FRAME) {
-    if (!is_two_pass_svc(cpi)) cpi->refresh_golden_frame = 1;
+    cpi->refresh_golden_frame = 1;
     cpi->refresh_alt_ref_frame = 1;
     vp9_zero(cpi->interp_filter_selected);
   } else {
@@ -843,12 +900,17 @@
   cm->mi_grid_base = NULL;
   vpx_free(cm->prev_mi_grid_base);
   cm->prev_mi_grid_base = NULL;
+  cm->mi_alloc_size = 0;
 }
 
 static void vp9_swap_mi_and_prev_mi(VP9_COMMON *cm) {
   // Current mip will be the prev_mip for the next frame.
   MODE_INFO **temp_base = cm->prev_mi_grid_base;
   MODE_INFO *temp = cm->prev_mip;
+
+  // Skip update prev_mi frame in show_existing_frame mode.
+  if (cm->show_existing_frame) return;
+
   cm->prev_mip = cm->mip;
   cm->mip = temp;
 
@@ -953,6 +1015,17 @@
   vpx_free(cpi->consec_zero_mv);
   cpi->consec_zero_mv = NULL;
 
+  vpx_free(cpi->mb_wiener_variance);
+  cpi->mb_wiener_variance = NULL;
+
+  vpx_free(cpi->mi_ssim_rdmult_scaling_factors);
+  cpi->mi_ssim_rdmult_scaling_factors = NULL;
+
+#if CONFIG_RATE_CTRL
+  free_partition_info(cpi);
+  free_motion_vector_info(cpi);
+#endif
+
   vp9_free_ref_frame_buffers(cm->buffer_pool);
 #if CONFIG_VP9_POSTPROC
   vp9_free_postproc_buffers(cm);
@@ -1347,15 +1420,9 @@
   int min_log2_tile_cols, max_log2_tile_cols;
   vp9_get_tile_n_bits(cm->mi_cols, &min_log2_tile_cols, &max_log2_tile_cols);
 
-  if (is_two_pass_svc(cpi) && (cpi->svc.encode_empty_frame_state == ENCODING ||
-                               cpi->svc.number_spatial_layers > 1)) {
-    cm->log2_tile_cols = 0;
-    cm->log2_tile_rows = 0;
-  } else {
-    cm->log2_tile_cols =
-        clamp(cpi->oxcf.tile_columns, min_log2_tile_cols, max_log2_tile_cols);
-    cm->log2_tile_rows = cpi->oxcf.tile_rows;
-  }
+  cm->log2_tile_cols =
+      clamp(cpi->oxcf.tile_columns, min_log2_tile_cols, max_log2_tile_cols);
+  cm->log2_tile_rows = cpi->oxcf.tile_rows;
 
   if (cpi->oxcf.target_level == LEVEL_AUTO) {
     const int level_tile_cols =
@@ -1378,31 +1445,23 @@
          cm->mi_rows * cm->mi_cols * sizeof(*cpi->mbmi_ext_base));
 
   set_tile_limits(cpi);
-
-  if (is_two_pass_svc(cpi)) {
-    if (vpx_realloc_frame_buffer(&cpi->alt_ref_buffer, cm->width, cm->height,
-                                 cm->subsampling_x, cm->subsampling_y,
-#if CONFIG_VP9_HIGHBITDEPTH
-                                 cm->use_highbitdepth,
-#endif
-                                 VP9_ENC_BORDER_IN_PIXELS, cm->byte_alignment,
-                                 NULL, NULL, NULL))
-      vpx_internal_error(&cm->error, VPX_CODEC_MEM_ERROR,
-                         "Failed to reallocate alt_ref_buffer");
-  }
 }
 
 static void init_buffer_indices(VP9_COMP *cpi) {
-  cpi->lst_fb_idx = 0;
-  cpi->gld_fb_idx = 1;
-  cpi->alt_fb_idx = 2;
+  int ref_frame;
+
+  for (ref_frame = 0; ref_frame < REF_FRAMES; ++ref_frame)
+    cpi->ref_fb_idx[ref_frame] = ref_frame;
+
+  cpi->lst_fb_idx = cpi->ref_fb_idx[LAST_FRAME - 1];
+  cpi->gld_fb_idx = cpi->ref_fb_idx[GOLDEN_FRAME - 1];
+  cpi->alt_fb_idx = cpi->ref_fb_idx[ALTREF_FRAME - 1];
 }
 
 static void init_level_constraint(LevelConstraint *lc) {
   lc->level_index = -1;
   lc->max_cpb_size = INT_MAX;
   lc->max_frame_size = INT_MAX;
-  lc->rc_config_updated = 0;
   lc->fail_flag = 0;
 }
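The new `init_buffer_indices` above generalizes the three hard-coded slots (0/1/2) into a `ref_fb_idx[]` table indexed by reference frame, from which the named LAST/GOLDEN/ALTREF aliases are derived. A reduced sketch of that identity mapping, with constants mirroring (but not identical to) the libvpx definitions:

```c
enum { LAST_FRAME = 1, GOLDEN_FRAME = 2, ALTREF_FRAME = 3, REF_FRAMES = 8 };

typedef struct {
  int ref_fb_idx[REF_FRAMES];
  int lst_fb_idx, gld_fb_idx, alt_fb_idx;
} EncoderIndices;

/* Each reference slot initially points at the buffer with the same
 * index; the named aliases are read out of the table, so later code
 * can reshuffle the table without updating three separate fields. */
static void init_indices(EncoderIndices *e) {
  int i;
  for (i = 0; i < REF_FRAMES; ++i) e->ref_fb_idx[i] = i;
  e->lst_fb_idx = e->ref_fb_idx[LAST_FRAME - 1];
  e->gld_fb_idx = e->ref_fb_idx[GOLDEN_FRAME - 1];
  e->alt_fb_idx = e->ref_fb_idx[ALTREF_FRAME - 1];
}
```

The derived values match the old constants (0, 1, 2), so existing callers see no behavioral change.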
 
@@ -1414,7 +1473,7 @@
   }
 }
 
-static void init_config(struct VP9_COMP *cpi, VP9EncoderConfig *oxcf) {
+static void init_config(struct VP9_COMP *cpi, const VP9EncoderConfig *oxcf) {
   VP9_COMMON *const cm = &cpi->common;
 
   cpi->oxcf = *oxcf;
@@ -1479,13 +1538,15 @@
 }
 
 #if CONFIG_VP9_HIGHBITDEPTH
+// TODO(angiebird): make sdx8f available for highbitdepth if needed
 #define HIGHBD_BFP(BT, SDF, SDAF, VF, SVF, SVAF, SDX4DF) \
   cpi->fn_ptr[BT].sdf = SDF;                             \
   cpi->fn_ptr[BT].sdaf = SDAF;                           \
   cpi->fn_ptr[BT].vf = VF;                               \
   cpi->fn_ptr[BT].svf = SVF;                             \
   cpi->fn_ptr[BT].svaf = SVAF;                           \
-  cpi->fn_ptr[BT].sdx4df = SDX4DF;
+  cpi->fn_ptr[BT].sdx4df = SDX4DF;                       \
+  cpi->fn_ptr[BT].sdx8f = NULL;
 
 #define MAKE_BFP_SAD_WRAPPER(fnname)                                           \
   static unsigned int fnname##_bits8(const uint8_t *src_ptr,                   \
@@ -1744,7 +1805,8 @@
                    vpx_highbd_sad4x4x4d_bits10)
         break;
 
-      case VPX_BITS_12:
+      default:
+        assert(cm->bit_depth == VPX_BITS_12);
         HIGHBD_BFP(BLOCK_32X16, vpx_highbd_sad32x16_bits12,
                    vpx_highbd_sad32x16_avg_bits12, vpx_highbd_12_variance32x16,
                    vpx_highbd_12_sub_pixel_variance32x16,
@@ -1823,11 +1885,6 @@
                    vpx_highbd_12_sub_pixel_avg_variance4x4,
                    vpx_highbd_sad4x4x4d_bits12)
         break;
-
-      default:
-        assert(0 &&
-               "cm->bit_depth should be VPX_BITS_8, "
-               "VPX_BITS_10 or VPX_BITS_12");
     }
   }
 }
@@ -1891,6 +1948,7 @@
   int last_w = cpi->oxcf.width;
   int last_h = cpi->oxcf.height;
 
+  vp9_init_quantizer(cpi);
   if (cm->profile != oxcf->profile) cm->profile = oxcf->profile;
   cm->bit_depth = oxcf->bit_depth;
   cm->color_space = oxcf->color_space;
@@ -2003,7 +2061,7 @@
   // configuration change has a large change in avg_frame_bandwidth.
   // For SVC check for resetting based on spatial layer average bandwidth.
   // Also reset buffer level to optimal level.
-  if (cm->current_video_frame > 0) {
+  if (cm->current_video_frame > (unsigned int)cpi->svc.number_spatial_layers) {
     if (cpi->use_svc) {
       vp9_svc_check_reset_layer_rc_flag(cpi);
     } else {
@@ -2106,7 +2164,112 @@
   } while (++i <= MV_MAX);
 }
 
-VP9_COMP *vp9_create_compressor(VP9EncoderConfig *oxcf,
+static void init_ref_frame_bufs(VP9_COMMON *cm) {
+  int i;
+  BufferPool *const pool = cm->buffer_pool;
+  cm->new_fb_idx = INVALID_IDX;
+  for (i = 0; i < REF_FRAMES; ++i) {
+    cm->ref_frame_map[i] = INVALID_IDX;
+  }
+  for (i = 0; i < FRAME_BUFFERS; ++i) {
+    pool->frame_bufs[i].ref_count = 0;
+  }
+}
+
+static void update_initial_width(VP9_COMP *cpi, int use_highbitdepth,
+                                 int subsampling_x, int subsampling_y) {
+  VP9_COMMON *const cm = &cpi->common;
+#if !CONFIG_VP9_HIGHBITDEPTH
+  (void)use_highbitdepth;
+  assert(use_highbitdepth == 0);
+#endif
+
+  if (!cpi->initial_width ||
+#if CONFIG_VP9_HIGHBITDEPTH
+      cm->use_highbitdepth != use_highbitdepth ||
+#endif
+      cm->subsampling_x != subsampling_x ||
+      cm->subsampling_y != subsampling_y) {
+    cm->subsampling_x = subsampling_x;
+    cm->subsampling_y = subsampling_y;
+#if CONFIG_VP9_HIGHBITDEPTH
+    cm->use_highbitdepth = use_highbitdepth;
+#endif
+    alloc_util_frame_buffers(cpi);
+    cpi->initial_width = cm->width;
+    cpi->initial_height = cm->height;
+    cpi->initial_mbs = cm->MBs;
+  }
+}
+
+// TODO(angiebird): Check whether we can move this function to vpx_image.c
+static INLINE void vpx_img_chroma_subsampling(vpx_img_fmt_t fmt,
+                                              unsigned int *subsampling_x,
+                                              unsigned int *subsampling_y) {
+  switch (fmt) {
+    case VPX_IMG_FMT_I420:
+    case VPX_IMG_FMT_YV12:
+    case VPX_IMG_FMT_I422:
+    case VPX_IMG_FMT_I42016:
+    case VPX_IMG_FMT_I42216: *subsampling_x = 1; break;
+    default: *subsampling_x = 0; break;
+  }
+
+  switch (fmt) {
+    case VPX_IMG_FMT_I420:
+    case VPX_IMG_FMT_I440:
+    case VPX_IMG_FMT_YV12:
+    case VPX_IMG_FMT_I42016:
+    case VPX_IMG_FMT_I44016: *subsampling_y = 1; break;
+    default: *subsampling_y = 0; break;
+  }
+}
+
+// TODO(angiebird): Check whether we can move this function to vpx_image.c
+static INLINE int vpx_img_use_highbitdepth(vpx_img_fmt_t fmt) {
+  return fmt & VPX_IMG_FMT_HIGHBITDEPTH;
+}
+
+#if CONFIG_VP9_TEMPORAL_DENOISING
+static void setup_denoiser_buffer(VP9_COMP *cpi) {
+  VP9_COMMON *const cm = &cpi->common;
+  if (cpi->oxcf.noise_sensitivity > 0 &&
+      !cpi->denoiser.frame_buffer_initialized) {
+    if (vp9_denoiser_alloc(cm, &cpi->svc, &cpi->denoiser, cpi->use_svc,
+                           cpi->oxcf.noise_sensitivity, cm->width, cm->height,
+                           cm->subsampling_x, cm->subsampling_y,
+#if CONFIG_VP9_HIGHBITDEPTH
+                           cm->use_highbitdepth,
+#endif
+                           VP9_ENC_BORDER_IN_PIXELS))
+      vpx_internal_error(&cm->error, VPX_CODEC_MEM_ERROR,
+                         "Failed to allocate denoiser");
+  }
+}
+#endif
+
+void vp9_update_compressor_with_img_fmt(VP9_COMP *cpi, vpx_img_fmt_t img_fmt) {
+  const VP9EncoderConfig *oxcf = &cpi->oxcf;
+  unsigned int subsampling_x, subsampling_y;
+  const int use_highbitdepth = vpx_img_use_highbitdepth(img_fmt);
+  vpx_img_chroma_subsampling(img_fmt, &subsampling_x, &subsampling_y);
+
+  update_initial_width(cpi, use_highbitdepth, subsampling_x, subsampling_y);
+#if CONFIG_VP9_TEMPORAL_DENOISING
+  setup_denoiser_buffer(cpi);
+#endif
+
+  assert(cpi->lookahead == NULL);
+  cpi->lookahead = vp9_lookahead_init(oxcf->width, oxcf->height, subsampling_x,
+                                      subsampling_y,
+#if CONFIG_VP9_HIGHBITDEPTH
+                                      use_highbitdepth,
+#endif
+                                      oxcf->lag_in_frames);
+  alloc_raw_frame_buffers(cpi);
+}
+
+VP9_COMP *vp9_create_compressor(const VP9EncoderConfig *oxcf,
                                 BufferPool *const pool) {
   unsigned int i;
   VP9_COMP *volatile const cpi = vpx_memalign(32, sizeof(VP9_COMP));
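The `vpx_img_chroma_subsampling` helper added in the hunk above derives per-axis subsampling flags from the pixel format: 4:2:0 formats subsample chroma on both axes, 4:2:2 only horizontally, 4:4:4 on neither. A reduced sketch of the same classification over a hypothetical three-format enum (not the real `vpx_img_fmt_t` constants):

```c
typedef enum { FMT_I420, FMT_I422, FMT_I444 } ImgFmt;

/* 4:2:0 halves chroma resolution in x and y, 4:2:2 only in x,
 * 4:4:4 keeps full chroma resolution on both axes. */
static void chroma_subsampling(ImgFmt fmt, int *sub_x, int *sub_y) {
  *sub_x = (fmt == FMT_I420 || fmt == FMT_I422);
  *sub_y = (fmt == FMT_I420);
}
```

`vp9_update_compressor_with_img_fmt` uses exactly these derived flags (plus the high-bit-depth bit) to size the lookahead and raw frame buffers before the first frame is encoded.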
@@ -2139,10 +2302,13 @@
   cpi->resize_buffer_underflow = 0;
   cpi->use_skin_detection = 0;
   cpi->common.buffer_pool = pool;
+  init_ref_frame_bufs(cm);
 
   cpi->force_update_segmentation = 0;
 
   init_config(cpi, oxcf);
+  cpi->frame_info = vp9_get_frame_info(oxcf);
+
   vp9_rc_init(&cpi->oxcf, oxcf->pass, &cpi->rc);
 
   cm->current_video_frame = 0;
@@ -2155,7 +2321,9 @@
       cm, cpi->skin_map,
       vpx_calloc(cm->mi_rows * cm->mi_cols, sizeof(cpi->skin_map[0])));
 
+#if !CONFIG_REALTIME_ONLY
   CHECK_MEM_ERROR(cm, cpi->alt_ref_aq, vp9_alt_ref_aq_create());
+#endif
 
   CHECK_MEM_ERROR(
       cm, cpi->consec_zero_mv,
@@ -2197,8 +2365,6 @@
 #endif
 
   cpi->refresh_alt_ref_frame = 0;
-  cpi->multi_arf_last_grp_enabled = 0;
-
   cpi->b_calculate_psnr = CONFIG_INTERNAL_STATS;
 
   init_level_info(&cpi->level_info);
@@ -2239,9 +2405,11 @@
 
   if (cpi->b_calculate_consistency) {
     CHECK_MEM_ERROR(cm, cpi->ssim_vars,
-                    vpx_malloc(sizeof(*cpi->ssim_vars) * 4 *
-                               cpi->common.mi_rows * cpi->common.mi_cols));
+                    vpx_calloc(cpi->common.mi_rows * cpi->common.mi_cols,
+                               sizeof(*cpi->ssim_vars) * 4));
     cpi->worst_consistency = 100.0;
+  } else {
+    cpi->ssim_vars = NULL;
   }
 
 #endif
@@ -2276,6 +2444,11 @@
 #ifdef OUTPUT_YUV_REC
   yuv_rec_file = fopen("rec.yuv", "wb");
 #endif
+#ifdef OUTPUT_YUV_SVC_SRC
+  yuv_svc_src[0] = fopen("svc_src_0.yuv", "wb");
+  yuv_svc_src[1] = fopen("svc_src_1.yuv", "wb");
+  yuv_svc_src[2] = fopen("svc_src_2.yuv", "wb");
+#endif
 
 #if 0
   framepsnr = fopen("framepsnr.stt", "a");
@@ -2303,6 +2476,7 @@
         const int layer_id = (int)last_packet_for_layer->spatial_layer_id;
         const int packets_in_layer = (int)last_packet_for_layer->count + 1;
         if (layer_id >= 0 && layer_id < oxcf->ss_number_layers) {
+          int num_frames;
           LAYER_CONTEXT *const lc = &cpi->svc.layer_context[layer_id];
 
           vpx_free(lc->rc_twopass_stats_in.buf);
@@ -2314,6 +2488,11 @@
           lc->twopass.stats_in = lc->twopass.stats_in_start;
           lc->twopass.stats_in_end =
               lc->twopass.stats_in_start + packets_in_layer - 1;
+          // Note the last packet is cumulative first pass stats.
+          // So the number of frames is packet number minus one
+          num_frames = packets_in_layer - 1;
+          fps_init_first_pass_info(&lc->twopass.first_pass_info,
+                                   lc->rc_twopass_stats_in.buf, num_frames);
           stats_copy[layer_id] = lc->rc_twopass_stats_in.buf;
         }
       }
@@ -2329,6 +2508,7 @@
 
       vp9_init_second_pass_spatial_svc(cpi);
     } else {
+      int num_frames;
 #if CONFIG_FP_MB_STATS
       if (cpi->use_fp_mb_stats) {
         const size_t psz = cpi->common.MBs * sizeof(uint8_t);
@@ -2345,75 +2525,106 @@
       cpi->twopass.stats_in_start = oxcf->two_pass_stats_in.buf;
       cpi->twopass.stats_in = cpi->twopass.stats_in_start;
       cpi->twopass.stats_in_end = &cpi->twopass.stats_in[packets - 1];
+      // Note the last packet is cumulative first pass stats.
+      // So the number of frames is packet number minus one
+      num_frames = packets - 1;
+      fps_init_first_pass_info(&cpi->twopass.first_pass_info,
+                               oxcf->two_pass_stats_in.buf, num_frames);
 
       vp9_init_second_pass(cpi);
     }
   }
 #endif  // !CONFIG_REALTIME_ONLY
 
-  vp9_set_speed_features_framesize_independent(cpi);
-  vp9_set_speed_features_framesize_dependent(cpi);
+  cpi->mb_wiener_var_cols = 0;
+  cpi->mb_wiener_var_rows = 0;
+  cpi->mb_wiener_variance = NULL;
+
+  vp9_set_speed_features_framesize_independent(cpi, oxcf->speed);
+  vp9_set_speed_features_framesize_dependent(cpi, oxcf->speed);
+
+  {
+    const int bsize = BLOCK_16X16;
+    const int w = num_8x8_blocks_wide_lookup[bsize];
+    const int h = num_8x8_blocks_high_lookup[bsize];
+    const int num_cols = (cm->mi_cols + w - 1) / w;
+    const int num_rows = (cm->mi_rows + h - 1) / h;
+    CHECK_MEM_ERROR(cm, cpi->mi_ssim_rdmult_scaling_factors,
+                    vpx_calloc(num_rows * num_cols,
+                               sizeof(*cpi->mi_ssim_rdmult_scaling_factors)));
+  }
+
+  cpi->kmeans_data_arr_alloc = 0;
+#if CONFIG_NON_GREEDY_MV
+  cpi->tpl_ready = 0;
+#endif  // CONFIG_NON_GREEDY_MV
+  for (i = 0; i < MAX_ARF_GOP_SIZE; ++i) cpi->tpl_stats[i].tpl_stats_ptr = NULL;
 
   // Allocate memory to store variances for a frame.
   CHECK_MEM_ERROR(cm, cpi->source_diff_var, vpx_calloc(cm->MBs, sizeof(diff)));
   cpi->source_var_thresh = 0;
   cpi->frames_till_next_var_check = 0;
+#define BFP(BT, SDF, SDAF, VF, SVF, SVAF, SDX4DF, SDX8F) \
+  cpi->fn_ptr[BT].sdf = SDF;                             \
+  cpi->fn_ptr[BT].sdaf = SDAF;                           \
+  cpi->fn_ptr[BT].vf = VF;                               \
+  cpi->fn_ptr[BT].svf = SVF;                             \
+  cpi->fn_ptr[BT].svaf = SVAF;                           \
+  cpi->fn_ptr[BT].sdx4df = SDX4DF;                       \
+  cpi->fn_ptr[BT].sdx8f = SDX8F;
 
-#define BFP(BT, SDF, SDAF, VF, SVF, SVAF, SDX4DF) \
-  cpi->fn_ptr[BT].sdf = SDF;                      \
-  cpi->fn_ptr[BT].sdaf = SDAF;                    \
-  cpi->fn_ptr[BT].vf = VF;                        \
-  cpi->fn_ptr[BT].svf = SVF;                      \
-  cpi->fn_ptr[BT].svaf = SVAF;                    \
-  cpi->fn_ptr[BT].sdx4df = SDX4DF;
-
+  // TODO(angiebird): make sdx8f available for every block size
   BFP(BLOCK_32X16, vpx_sad32x16, vpx_sad32x16_avg, vpx_variance32x16,
       vpx_sub_pixel_variance32x16, vpx_sub_pixel_avg_variance32x16,
-      vpx_sad32x16x4d)
+      vpx_sad32x16x4d, NULL)
 
   BFP(BLOCK_16X32, vpx_sad16x32, vpx_sad16x32_avg, vpx_variance16x32,
       vpx_sub_pixel_variance16x32, vpx_sub_pixel_avg_variance16x32,
-      vpx_sad16x32x4d)
+      vpx_sad16x32x4d, NULL)
 
   BFP(BLOCK_64X32, vpx_sad64x32, vpx_sad64x32_avg, vpx_variance64x32,
       vpx_sub_pixel_variance64x32, vpx_sub_pixel_avg_variance64x32,
-      vpx_sad64x32x4d)
+      vpx_sad64x32x4d, NULL)
 
   BFP(BLOCK_32X64, vpx_sad32x64, vpx_sad32x64_avg, vpx_variance32x64,
       vpx_sub_pixel_variance32x64, vpx_sub_pixel_avg_variance32x64,
-      vpx_sad32x64x4d)
+      vpx_sad32x64x4d, NULL)
 
   BFP(BLOCK_32X32, vpx_sad32x32, vpx_sad32x32_avg, vpx_variance32x32,
       vpx_sub_pixel_variance32x32, vpx_sub_pixel_avg_variance32x32,
-      vpx_sad32x32x4d)
+      vpx_sad32x32x4d, vpx_sad32x32x8)
 
   BFP(BLOCK_64X64, vpx_sad64x64, vpx_sad64x64_avg, vpx_variance64x64,
       vpx_sub_pixel_variance64x64, vpx_sub_pixel_avg_variance64x64,
-      vpx_sad64x64x4d)
+      vpx_sad64x64x4d, NULL)
 
   BFP(BLOCK_16X16, vpx_sad16x16, vpx_sad16x16_avg, vpx_variance16x16,
       vpx_sub_pixel_variance16x16, vpx_sub_pixel_avg_variance16x16,
-      vpx_sad16x16x4d)
+      vpx_sad16x16x4d, vpx_sad16x16x8)
 
   BFP(BLOCK_16X8, vpx_sad16x8, vpx_sad16x8_avg, vpx_variance16x8,
       vpx_sub_pixel_variance16x8, vpx_sub_pixel_avg_variance16x8,
-      vpx_sad16x8x4d)
+      vpx_sad16x8x4d, vpx_sad16x8x8)
 
   BFP(BLOCK_8X16, vpx_sad8x16, vpx_sad8x16_avg, vpx_variance8x16,
       vpx_sub_pixel_variance8x16, vpx_sub_pixel_avg_variance8x16,
-      vpx_sad8x16x4d)
+      vpx_sad8x16x4d, vpx_sad8x16x8)
 
   BFP(BLOCK_8X8, vpx_sad8x8, vpx_sad8x8_avg, vpx_variance8x8,
-      vpx_sub_pixel_variance8x8, vpx_sub_pixel_avg_variance8x8, vpx_sad8x8x4d)
+      vpx_sub_pixel_variance8x8, vpx_sub_pixel_avg_variance8x8, vpx_sad8x8x4d,
+      vpx_sad8x8x8)
 
   BFP(BLOCK_8X4, vpx_sad8x4, vpx_sad8x4_avg, vpx_variance8x4,
-      vpx_sub_pixel_variance8x4, vpx_sub_pixel_avg_variance8x4, vpx_sad8x4x4d)
+      vpx_sub_pixel_variance8x4, vpx_sub_pixel_avg_variance8x4, vpx_sad8x4x4d,
+      NULL)
 
   BFP(BLOCK_4X8, vpx_sad4x8, vpx_sad4x8_avg, vpx_variance4x8,
-      vpx_sub_pixel_variance4x8, vpx_sub_pixel_avg_variance4x8, vpx_sad4x8x4d)
+      vpx_sub_pixel_variance4x8, vpx_sub_pixel_avg_variance4x8, vpx_sad4x8x4d,
+      NULL)
 
   BFP(BLOCK_4X4, vpx_sad4x4, vpx_sad4x4_avg, vpx_variance4x4,
-      vpx_sub_pixel_variance4x4, vpx_sub_pixel_avg_variance4x4, vpx_sad4x4x4d)
+      vpx_sub_pixel_variance4x4, vpx_sub_pixel_avg_variance4x4, vpx_sad4x4x4d,
+      vpx_sad4x4x8)
 
 #if CONFIG_VP9_HIGHBITDEPTH
   highbd_set_var_fns(cpi);
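The reworked `BFP` macro above threads one extra argument per block size into the `fn_ptr` table, passing `NULL` for sizes that have no `sdx8f` kernel yet (the TODO notes it is not yet available everywhere). A reduced sketch of the same fill-a-function-pointer-table idiom, down to two fields and hypothetical types:

```c
#include <stddef.h>

typedef unsigned int (*sad_fn_t)(const unsigned char *a,
                                 const unsigned char *b);

typedef struct {
  sad_fn_t sdf;   /* plain SAD kernel */
  sad_fn_t sdx8f; /* 8-candidate SAD kernel; NULL when unimplemented */
} VarianceFns;

/* One macro invocation per block size keeps every field of the entry
 * initialized, so a missing kernel is an explicit NULL rather than
 * uninitialized memory that callers might jump through. */
#define BFP(table, BT, SDF, SDX8F) \
  do {                             \
    (table)[BT].sdf = SDF;         \
    (table)[BT].sdx8f = SDX8F;     \
  } while (0)

static unsigned int sad_16x16(const unsigned char *a,
                              const unsigned char *b) {
  unsigned int sum = 0;
  int i;
  for (i = 0; i < 16 * 16; ++i)
    sum += (a[i] > b[i]) ? (unsigned int)(a[i] - b[i])
                         : (unsigned int)(b[i] - a[i]);
  return sum;
}
```

Callers check `sdx8f` for `NULL` before dispatching, which is why the diff extends the macro instead of leaving the new field unset.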
@@ -2428,8 +2639,25 @@
 
   vp9_loop_filter_init(cm);
 
+  // Set up the unit scaling factor used during motion search.
+#if CONFIG_VP9_HIGHBITDEPTH
+  vp9_setup_scale_factors_for_frame(&cpi->me_sf, cm->width, cm->height,
+                                    cm->width, cm->height,
+                                    cm->use_highbitdepth);
+#else
+  vp9_setup_scale_factors_for_frame(&cpi->me_sf, cm->width, cm->height,
+                                    cm->width, cm->height);
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+  cpi->td.mb.me_sf = &cpi->me_sf;
+
   cm->error.setjmp = 0;
 
+#if CONFIG_RATE_CTRL
+  encode_command_init(&cpi->encode_command);
+  partition_info_init(cpi);
+  motion_vector_info_init(cpi);
+#endif
+
   return cpi;
 }
 
@@ -2440,6 +2668,8 @@
   snprintf((H) + strlen(H), sizeof(H) - strlen(H), (T), (V))
 #endif  // CONFIG_INTERNAL_STATS
 
+static void free_tpl_buffer(VP9_COMP *cpi);
+
 void vp9_remove_compressor(VP9_COMP *cpi) {
   VP9_COMMON *cm;
   unsigned int i;
@@ -2447,6 +2677,10 @@
 
   if (!cpi) return;
 
+#if CONFIG_INTERNAL_STATS
+  vpx_free(cpi->ssim_vars);
+#endif
+
   cm = &cpi->common;
   if (cm->current_video_frame > 0) {
 #if CONFIG_INTERNAL_STATS
@@ -2511,14 +2745,20 @@
           SNPRINT2(results, "\t%7.3f", cpi->worst_consistency);
         }
 
-        fprintf(f, "%s\t    Time\tRcErr\tAbsErr\n", headings);
-        fprintf(f, "%s\t%8.0f\t%7.2f\t%7.2f\n", results, total_encode_time,
-                rate_err, fabs(rate_err));
+        SNPRINT(headings, "\t    Time\tRcErr\tAbsErr");
+        SNPRINT2(results, "\t%8.0f", total_encode_time);
+        SNPRINT2(results, "\t%7.2f", rate_err);
+        SNPRINT2(results, "\t%7.2f", fabs(rate_err));
+
+        fprintf(f, "%s\tAPsnr611\n", headings);
+        fprintf(
+            f, "%s\t%7.3f\n", results,
+            (6 * cpi->psnr.stat[Y] + cpi->psnr.stat[U] + cpi->psnr.stat[V]) /
+                (cpi->count * 8));
       }
 
       fclose(f);
     }
-
 #endif
 
 #if 0
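The new APsnr611 column printed above is a 6:1:1 luma-weighted average of the accumulated per-plane PSNR sums: (6·Y + U + V) divided by 8 per frame, then by the frame count. As a standalone sketch of that arithmetic (illustrative helper, not the libvpx accumulator):

```c
/* 6:1:1 weighted PSNR average over `count` frames, matching the
 * (6 * Y + U + V) / (count * 8) expression in the stats output. */
static double apsnr611(double sum_y, double sum_u, double sum_v, int count) {
  return (6.0 * sum_y + sum_u + sum_v) / (count * 8.0);
}
```

With equal per-plane values the weighted average equals the plain value; otherwise luma dominates 6:1 over each chroma plane.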
@@ -2537,6 +2777,15 @@
   vp9_denoiser_free(&(cpi->denoiser));
 #endif
 
+  if (cpi->kmeans_data_arr_alloc) {
+#if CONFIG_MULTITHREAD
+    pthread_mutex_destroy(&cpi->kmeans_mutex);
+#endif
+    vpx_free(cpi->kmeans_data_arr);
+  }
+
+  free_tpl_buffer(cpi);
+
   for (t = 0; t < cpi->num_workers; ++t) {
     VPxWorker *const worker = &cpi->workers[t];
     EncWorkerData *const thread_data = &cpi->tile_thr_data[t];
@@ -2560,7 +2809,9 @@
     vp9_bitstream_encode_tiles_buffer_dealloc(cpi);
   }
 
+#if !CONFIG_REALTIME_ONLY
   vp9_alt_ref_aq_destroy(cpi->alt_ref_aq);
+#endif
 
   dealloc_compressor_data(cpi);
 
@@ -2594,6 +2845,11 @@
 #ifdef OUTPUT_YUV_REC
   fclose(yuv_rec_file);
 #endif
+#ifdef OUTPUT_YUV_SVC_SRC
+  fclose(yuv_svc_src[0]);
+  fclose(yuv_svc_src[1]);
+  fclose(yuv_svc_src[2]);
+#endif
 
 #if 0
 
@@ -2609,30 +2865,19 @@
 #endif
 }
 
-static void generate_psnr_packet(VP9_COMP *cpi) {
-  struct vpx_codec_cx_pkt pkt;
-  int i;
-  PSNR_STATS psnr;
+int vp9_get_psnr(const VP9_COMP *cpi, PSNR_STATS *psnr) {
+  if (is_psnr_calc_enabled(cpi)) {
 #if CONFIG_VP9_HIGHBITDEPTH
-  vpx_calc_highbd_psnr(cpi->raw_source_frame, cpi->common.frame_to_show, &psnr,
-                       cpi->td.mb.e_mbd.bd, cpi->oxcf.input_bit_depth);
+    vpx_calc_highbd_psnr(cpi->raw_source_frame, cpi->common.frame_to_show, psnr,
+                         cpi->td.mb.e_mbd.bd, cpi->oxcf.input_bit_depth);
 #else
-  vpx_calc_psnr(cpi->raw_source_frame, cpi->common.frame_to_show, &psnr);
+    vpx_calc_psnr(cpi->raw_source_frame, cpi->common.frame_to_show, psnr);
 #endif
-
-  for (i = 0; i < 4; ++i) {
-    pkt.data.psnr.samples[i] = psnr.samples[i];
-    pkt.data.psnr.sse[i] = psnr.sse[i];
-    pkt.data.psnr.psnr[i] = psnr.psnr[i];
+    return 1;
+  } else {
+    vp9_zero(*psnr);
+    return 0;
   }
-  pkt.kind = VPX_CODEC_PSNR_PKT;
-  if (cpi->use_svc)
-    cpi->svc
-        .layer_context[cpi->svc.spatial_layer_id *
-                       cpi->svc.number_temporal_layers]
-        .psnr_pkt = pkt.data.psnr;
-  else
-    vpx_codec_pkt_list_add(cpi->output_pkt_list, &pkt);
 }
 
 int vp9_use_as_reference(VP9_COMP *cpi, int ref_frame_flags) {
@@ -2842,6 +3087,7 @@
 }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
+#if !CONFIG_REALTIME_ONLY
 static int scale_down(VP9_COMP *cpi, int q) {
   RATE_CONTROL *const rc = &cpi->rc;
   GF_GROUP *const gf_group = &cpi->twopass.gf_group;
@@ -2889,11 +3135,14 @@
 
 // test in two pass for the first
 static int two_pass_first_group_inter(VP9_COMP *cpi) {
-  TWO_PASS *const twopass = &cpi->twopass;
-  GF_GROUP *const gf_group = &twopass->gf_group;
-  if ((cpi->oxcf.pass == 2) &&
-      (gf_group->index == gf_group->first_inter_index)) {
-    return 1;
+  if (cpi->oxcf.pass == 2) {
+    TWO_PASS *const twopass = &cpi->twopass;
+    GF_GROUP *const gf_group = &twopass->gf_group;
+    const int gfg_index = gf_group->index;
+
+    if (gfg_index == 0) return gf_group->update_type[gfg_index] == LF_UPDATE;
+    return gf_group->update_type[gfg_index - 1] != LF_UPDATE &&
+           gf_group->update_type[gfg_index] == LF_UPDATE;
   } else {
     return 0;
   }
@@ -2942,10 +3191,24 @@
   }
   return force_recode;
 }
+#endif  // !CONFIG_REALTIME_ONLY
 
-void vp9_update_reference_frames(VP9_COMP *cpi) {
+static void update_ref_frames(VP9_COMP *cpi) {
   VP9_COMMON *const cm = &cpi->common;
   BufferPool *const pool = cm->buffer_pool;
+  GF_GROUP *const gf_group = &cpi->twopass.gf_group;
+
+  if (cpi->rc.show_arf_as_gld) {
+    int tmp = cpi->alt_fb_idx;
+    cpi->alt_fb_idx = cpi->gld_fb_idx;
+    cpi->gld_fb_idx = tmp;
+  } else if (cm->show_existing_frame) {
+    // Pop ARF.
+    cpi->lst_fb_idx = cpi->alt_fb_idx;
+    cpi->alt_fb_idx =
+        stack_pop(gf_group->arf_index_stack, gf_group->stack_size);
+    --gf_group->stack_size;
+  }
 
   // At this point the new frame has been encoded.
   // If any buffer copy / swapping is signaled it should be done here.
@@ -2971,23 +3234,23 @@
     tmp = cpi->alt_fb_idx;
     cpi->alt_fb_idx = cpi->gld_fb_idx;
     cpi->gld_fb_idx = tmp;
-
-    if (is_two_pass_svc(cpi)) {
-      cpi->svc.layer_context[0].gold_ref_idx = cpi->gld_fb_idx;
-      cpi->svc.layer_context[0].alt_ref_idx = cpi->alt_fb_idx;
-    }
   } else { /* For non key/golden frames */
     if (cpi->refresh_alt_ref_frame) {
-      int arf_idx = cpi->alt_fb_idx;
-      if ((cpi->oxcf.pass == 2) && cpi->multi_arf_allowed) {
-        const GF_GROUP *const gf_group = &cpi->twopass.gf_group;
-        arf_idx = gf_group->arf_update_idx[gf_group->index];
-      }
+      int arf_idx = gf_group->top_arf_idx;
+
+      // Push new ARF into stack.
+      stack_push(gf_group->arf_index_stack, cpi->alt_fb_idx,
+                 gf_group->stack_size);
+      ++gf_group->stack_size;
+
+      assert(arf_idx < REF_FRAMES);
 
       ref_cnt_fb(pool->frame_bufs, &cm->ref_frame_map[arf_idx], cm->new_fb_idx);
       memcpy(cpi->interp_filter_selected[ALTREF_FRAME],
              cpi->interp_filter_selected[0],
              sizeof(cpi->interp_filter_selected[0]));
+
+      cpi->alt_fb_idx = arf_idx;
     }
 
     if (cpi->refresh_golden_frame) {
@@ -3012,69 +3275,39 @@
              cpi->interp_filter_selected[0],
              sizeof(cpi->interp_filter_selected[0]));
   }
+
+  if (gf_group->update_type[gf_group->index] == MID_OVERLAY_UPDATE) {
+    cpi->alt_fb_idx =
+        stack_pop(gf_group->arf_index_stack, gf_group->stack_size);
+    --gf_group->stack_size;
+  }
+}
+
+void vp9_update_reference_frames(VP9_COMP *cpi) {
+  update_ref_frames(cpi);
+
 #if CONFIG_VP9_TEMPORAL_DENOISING
-  if (cpi->oxcf.noise_sensitivity > 0 && denoise_svc(cpi) &&
-      cpi->denoiser.denoising_level > kDenLowLow) {
-    int svc_base_is_key = 0;
-    int denoise_svc_second_layer = 0;
-    if (cpi->use_svc) {
-      int realloc_fail = 0;
-      const int svc_buf_shift =
-          cpi->svc.number_spatial_layers - cpi->svc.spatial_layer_id == 2
-              ? cpi->denoiser.num_ref_frames
-              : 0;
-      int layer = LAYER_IDS_TO_IDX(cpi->svc.spatial_layer_id,
-                                   cpi->svc.temporal_layer_id,
-                                   cpi->svc.number_temporal_layers);
-      LAYER_CONTEXT *lc = &cpi->svc.layer_context[layer];
-      svc_base_is_key = lc->is_key_frame;
-      denoise_svc_second_layer =
-          cpi->svc.number_spatial_layers - cpi->svc.spatial_layer_id == 2 ? 1
-                                                                          : 0;
-      // Check if we need to allocate extra buffers in the denoiser
-      // for
-      // refreshed frames.
-      realloc_fail = vp9_denoiser_realloc_svc(
-          cm, &cpi->denoiser, svc_buf_shift, cpi->refresh_alt_ref_frame,
-          cpi->refresh_golden_frame, cpi->refresh_last_frame, cpi->alt_fb_idx,
-          cpi->gld_fb_idx, cpi->lst_fb_idx);
-      if (realloc_fail)
-        vpx_internal_error(&cm->error, VPX_CODEC_MEM_ERROR,
-                           "Failed to re-allocate denoiser for SVC");
-    }
-    vp9_denoiser_update_frame_info(
-        &cpi->denoiser, *cpi->Source, cpi->common.frame_type,
-        cpi->refresh_alt_ref_frame, cpi->refresh_golden_frame,
-        cpi->refresh_last_frame, cpi->alt_fb_idx, cpi->gld_fb_idx,
-        cpi->lst_fb_idx, cpi->resize_pending, svc_base_is_key,
-        denoise_svc_second_layer);
-  }
+  vp9_denoiser_update_ref_frame(cpi);
 #endif
-  if (is_one_pass_cbr_svc(cpi)) {
-    // Keep track of frame index for each reference frame.
-    SVC *const svc = &cpi->svc;
-    if (cm->frame_type == KEY_FRAME) {
-      svc->ref_frame_index[cpi->lst_fb_idx] = svc->current_superframe;
-      svc->ref_frame_index[cpi->gld_fb_idx] = svc->current_superframe;
-      svc->ref_frame_index[cpi->alt_fb_idx] = svc->current_superframe;
-    } else {
-      if (cpi->refresh_last_frame)
-        svc->ref_frame_index[cpi->lst_fb_idx] = svc->current_superframe;
-      if (cpi->refresh_golden_frame)
-        svc->ref_frame_index[cpi->gld_fb_idx] = svc->current_superframe;
-      if (cpi->refresh_alt_ref_frame)
-        svc->ref_frame_index[cpi->alt_fb_idx] = svc->current_superframe;
-    }
-  }
+
+  if (is_one_pass_cbr_svc(cpi)) vp9_svc_update_ref_frame(cpi);
 }
 
 static void loopfilter_frame(VP9_COMP *cpi, VP9_COMMON *cm) {
   MACROBLOCKD *xd = &cpi->td.mb.e_mbd;
   struct loopfilter *lf = &cm->lf;
-
-  const int is_reference_frame =
+  int is_reference_frame =
       (cm->frame_type == KEY_FRAME || cpi->refresh_last_frame ||
        cpi->refresh_golden_frame || cpi->refresh_alt_ref_frame);
+  if (cpi->use_svc &&
+      cpi->svc.temporal_layering_mode == VP9E_TEMPORAL_LAYERING_MODE_BYPASS)
+    is_reference_frame = !cpi->svc.non_reference_frame;
+
+  // Skip loop filter in show_existing_frame mode.
+  if (cm->show_existing_frame) {
+    lf->filter_level = 0;
+    return;
+  }
 
   if (xd->lossless) {
     lf->filter_level = 0;
@@ -3201,8 +3434,8 @@
         if (cpi->oxcf.pass == 0 && !cpi->use_svc) {
           // Check for release of scaled reference.
           buf_idx = cpi->scaled_ref_idx[ref_frame - 1];
-          buf = (buf_idx != INVALID_IDX) ? &pool->frame_bufs[buf_idx] : NULL;
-          if (buf != NULL) {
+          if (buf_idx != INVALID_IDX) {
+            buf = &pool->frame_bufs[buf_idx];
             --buf->ref_count;
             cpi->scaled_ref_idx[ref_frame - 1] = INVALID_IDX;
           }
@@ -3233,22 +3466,21 @@
     refresh[2] = (cpi->refresh_alt_ref_frame) ? 1 : 0;
     for (i = LAST_FRAME; i <= ALTREF_FRAME; ++i) {
       const int idx = cpi->scaled_ref_idx[i - 1];
-      RefCntBuffer *const buf =
-          idx != INVALID_IDX ? &cm->buffer_pool->frame_bufs[idx] : NULL;
-      const YV12_BUFFER_CONFIG *const ref = get_ref_frame_buffer(cpi, i);
-      if (buf != NULL &&
-          (refresh[i - 1] || (buf->buf.y_crop_width == ref->y_crop_width &&
-                              buf->buf.y_crop_height == ref->y_crop_height))) {
-        --buf->ref_count;
-        cpi->scaled_ref_idx[i - 1] = INVALID_IDX;
+      if (idx != INVALID_IDX) {
+        RefCntBuffer *const buf = &cm->buffer_pool->frame_bufs[idx];
+        const YV12_BUFFER_CONFIG *const ref = get_ref_frame_buffer(cpi, i);
+        if (refresh[i - 1] || (buf->buf.y_crop_width == ref->y_crop_width &&
+                               buf->buf.y_crop_height == ref->y_crop_height)) {
+          --buf->ref_count;
+          cpi->scaled_ref_idx[i - 1] = INVALID_IDX;
+        }
       }
     }
   } else {
-    for (i = 0; i < MAX_REF_FRAMES; ++i) {
+    for (i = 0; i < REFS_PER_FRAME; ++i) {
       const int idx = cpi->scaled_ref_idx[i];
-      RefCntBuffer *const buf =
-          idx != INVALID_IDX ? &cm->buffer_pool->frame_bufs[idx] : NULL;
-      if (buf != NULL) {
+      if (idx != INVALID_IDX) {
+        RefCntBuffer *const buf = &cm->buffer_pool->frame_bufs[idx];
         --buf->ref_count;
         cpi->scaled_ref_idx[i] = INVALID_IDX;
       }
@@ -3307,11 +3539,9 @@
       case VPX_BITS_10:
         dc_quant_devisor = 16.0;
         break;
-      case VPX_BITS_12:
-        dc_quant_devisor = 64.0;
-        break;
       default:
-        assert(0 && "bit_depth must be VPX_BITS_8, VPX_BITS_10 or VPX_BITS_12");
+        assert(cm->bit_depth == VPX_BITS_12);
+        dc_quant_devisor = 64.0;
         break;
     }
 #else
@@ -3427,7 +3657,7 @@
 }
 
 static void set_size_independent_vars(VP9_COMP *cpi) {
-  vp9_set_speed_features_framesize_independent(cpi);
+  vp9_set_speed_features_framesize_independent(cpi, cpi->oxcf.speed);
   vp9_set_rd_speed_thresholds(cpi);
   vp9_set_rd_speed_thresholds_sub8x8(cpi);
   cpi->common.interp_filter = cpi->sf.default_interp_filter;
@@ -3438,11 +3668,16 @@
   VP9_COMMON *const cm = &cpi->common;
 
   // Setup variables that depend on the dimensions of the frame.
-  vp9_set_speed_features_framesize_dependent(cpi);
+  vp9_set_speed_features_framesize_dependent(cpi, cpi->oxcf.speed);
 
   // Decide q and q bounds.
   *q = vp9_rc_pick_q_and_bounds(cpi, bottom_index, top_index);
 
+  if (cpi->oxcf.rc_mode == VPX_CBR && cpi->rc.force_max_q) {
+    *q = cpi->rc.worst_quality;
+    cpi->rc.force_max_q = 0;
+  }
+
   if (!frame_is_intra_only(cm)) {
     vp9_set_high_precision_mv(cpi, (*q) < HIGH_PRECISION_MV_QTHRESH);
   }
@@ -3472,29 +3707,12 @@
           vpx_calloc(cpi->un_scaled_source->y_width,
                      sizeof(*cpi->common.postproc_state.limits));
     }
-    vp9_denoise(cpi->Source, cpi->Source, l, cpi->common.postproc_state.limits);
+    vp9_denoise(&cpi->common, cpi->Source, cpi->Source, l,
+                cpi->common.postproc_state.limits);
   }
 #endif  // CONFIG_VP9_POSTPROC
 }
 
-#if CONFIG_VP9_TEMPORAL_DENOISING
-static void setup_denoiser_buffer(VP9_COMP *cpi) {
-  VP9_COMMON *const cm = &cpi->common;
-  if (cpi->oxcf.noise_sensitivity > 0 &&
-      !cpi->denoiser.frame_buffer_initialized) {
-    if (vp9_denoiser_alloc(cm, &cpi->svc, &cpi->denoiser, cpi->use_svc,
-                           cpi->oxcf.noise_sensitivity, cm->width, cm->height,
-                           cm->subsampling_x, cm->subsampling_y,
-#if CONFIG_VP9_HIGHBITDEPTH
-                           cm->use_highbitdepth,
-#endif
-                           VP9_ENC_BORDER_IN_PIXELS))
-      vpx_internal_error(&cm->error, VPX_CODEC_MEM_ERROR,
-                         "Failed to allocate denoiser");
-  }
-}
-#endif
-
 static void init_motion_estimation(VP9_COMP *cpi) {
   int y_stride = cpi->scaled_source.y_stride;
 
@@ -3550,9 +3768,7 @@
 #endif
   }
 
-  if ((oxcf->pass == 2) &&
-      (!cpi->use_svc || (is_two_pass_svc(cpi) &&
-                         cpi->svc.encode_empty_frame_state != ENCODING))) {
+  if ((oxcf->pass == 2) && !cpi->use_svc) {
     vp9_set_target_rate(cpi);
   }
 
@@ -3599,19 +3815,76 @@
   set_ref_ptrs(cm, xd, LAST_FRAME, LAST_FRAME);
 }
 
-static void encode_without_recode_loop(VP9_COMP *cpi, size_t *size,
-                                       uint8_t *dest) {
+#if CONFIG_CONSISTENT_RECODE || CONFIG_RATE_CTRL
+static void save_encode_params(VP9_COMP *cpi) {
   VP9_COMMON *const cm = &cpi->common;
-  int q = 0, bottom_index = 0, top_index = 0;  // Dummy variables.
+  const int tile_cols = 1 << cm->log2_tile_cols;
+  const int tile_rows = 1 << cm->log2_tile_rows;
+  int tile_col, tile_row;
+  int i, j;
+  RD_OPT *rd_opt = &cpi->rd;
+  for (i = 0; i < MAX_REF_FRAMES; i++) {
+    for (j = 0; j < REFERENCE_MODES; j++)
+      rd_opt->prediction_type_threshes_prev[i][j] =
+          rd_opt->prediction_type_threshes[i][j];
+
+    for (j = 0; j < SWITCHABLE_FILTER_CONTEXTS; j++)
+      rd_opt->filter_threshes_prev[i][j] = rd_opt->filter_threshes[i][j];
+  }
+
+  if (cpi->tile_data != NULL) {
+    for (tile_row = 0; tile_row < tile_rows; ++tile_row)
+      for (tile_col = 0; tile_col < tile_cols; ++tile_col) {
+        TileDataEnc *tile_data =
+            &cpi->tile_data[tile_row * tile_cols + tile_col];
+        for (i = 0; i < BLOCK_SIZES; ++i) {
+          for (j = 0; j < MAX_MODES; ++j) {
+            tile_data->thresh_freq_fact_prev[i][j] =
+                tile_data->thresh_freq_fact[i][j];
+          }
+        }
+      }
+  }
+}
+#endif  // CONFIG_CONSISTENT_RECODE || CONFIG_RATE_CTRL
+
+static INLINE void set_raw_source_frame(VP9_COMP *cpi) {
+#ifdef ENABLE_KF_DENOISE
+  if (is_spatial_denoise_enabled(cpi)) {
+    cpi->raw_source_frame = vp9_scale_if_required(
+        cm, &cpi->raw_unscaled_source, &cpi->raw_scaled_source,
+        (oxcf->pass == 0), EIGHTTAP, 0);
+  } else {
+    cpi->raw_source_frame = cpi->Source;
+  }
+#else
+  cpi->raw_source_frame = cpi->Source;
+#endif
+}
+
+static int encode_without_recode_loop(VP9_COMP *cpi, size_t *size,
+                                      uint8_t *dest) {
+  VP9_COMMON *const cm = &cpi->common;
+  SVC *const svc = &cpi->svc;
+  int q = 0, bottom_index = 0, top_index = 0;
+  int no_drop_scene_change = 0;
   const INTERP_FILTER filter_scaler =
       (is_one_pass_cbr_svc(cpi))
-          ? cpi->svc.downsample_filter_type[cpi->svc.spatial_layer_id]
+          ? svc->downsample_filter_type[svc->spatial_layer_id]
           : EIGHTTAP;
   const int phase_scaler =
       (is_one_pass_cbr_svc(cpi))
-          ? cpi->svc.downsample_filter_phase[cpi->svc.spatial_layer_id]
+          ? svc->downsample_filter_phase[svc->spatial_layer_id]
           : 0;
 
+  if (cm->show_existing_frame) {
+    cpi->rc.this_frame_target = 0;
+    if (is_psnr_calc_enabled(cpi)) set_raw_source_frame(cpi);
+    return 1;
+  }
+
+  svc->time_stamp_prev[svc->spatial_layer_id] = svc->time_stamp_superframe;
+
   // Flag to check if its valid to compute the source sad (used for
   // scene detection and for superblock content state in CBR mode).
   // The flag may get reset below based on SVC or resizing state.
@@ -3624,30 +3897,36 @@
   if (is_one_pass_cbr_svc(cpi) &&
       cpi->un_scaled_source->y_width == cm->width << 2 &&
       cpi->un_scaled_source->y_height == cm->height << 2 &&
-      cpi->svc.scaled_temp.y_width == cm->width << 1 &&
-      cpi->svc.scaled_temp.y_height == cm->height << 1) {
+      svc->scaled_temp.y_width == cm->width << 1 &&
+      svc->scaled_temp.y_height == cm->height << 1) {
     // For svc, if it is a 1/4x1/4 downscaling, do a two-stage scaling to take
     // advantage of the 1:2 optimized scaler. In the process, the 1/2x1/2
     // result will be saved in scaled_temp and might be used later.
-    const INTERP_FILTER filter_scaler2 = cpi->svc.downsample_filter_type[1];
-    const int phase_scaler2 = cpi->svc.downsample_filter_phase[1];
+    const INTERP_FILTER filter_scaler2 = svc->downsample_filter_type[1];
+    const int phase_scaler2 = svc->downsample_filter_phase[1];
     cpi->Source = vp9_svc_twostage_scale(
-        cm, cpi->un_scaled_source, &cpi->scaled_source, &cpi->svc.scaled_temp,
+        cm, cpi->un_scaled_source, &cpi->scaled_source, &svc->scaled_temp,
         filter_scaler, phase_scaler, filter_scaler2, phase_scaler2);
-    cpi->svc.scaled_one_half = 1;
+    svc->scaled_one_half = 1;
   } else if (is_one_pass_cbr_svc(cpi) &&
              cpi->un_scaled_source->y_width == cm->width << 1 &&
              cpi->un_scaled_source->y_height == cm->height << 1 &&
-             cpi->svc.scaled_one_half) {
+             svc->scaled_one_half) {
     // If the spatial layer is 1/2x1/2 and the scaling is already done in the
     // two-stage scaling, use the result directly.
-    cpi->Source = &cpi->svc.scaled_temp;
-    cpi->svc.scaled_one_half = 0;
+    cpi->Source = &svc->scaled_temp;
+    svc->scaled_one_half = 0;
   } else {
     cpi->Source = vp9_scale_if_required(
         cm, cpi->un_scaled_source, &cpi->scaled_source, (cpi->oxcf.pass == 0),
         filter_scaler, phase_scaler);
   }
+#ifdef OUTPUT_YUV_SVC_SRC
+  // Write out at most 3 spatial layers.
+  if (is_one_pass_cbr_svc(cpi) && svc->spatial_layer_id < 3) {
+    vpx_write_yuv_frame(yuv_svc_src[svc->spatial_layer_id], cpi->Source);
+  }
+#endif
   // Unfiltered raw source used in metrics calculation if the source
   // has been filtered.
   if (is_psnr_calc_enabled(cpi)) {
@@ -3665,9 +3944,9 @@
   }
 
   if ((cpi->use_svc &&
-       (cpi->svc.spatial_layer_id < cpi->svc.number_spatial_layers - 1 ||
-        cpi->svc.temporal_layer_id < cpi->svc.number_temporal_layers - 1 ||
-        cpi->svc.current_superframe < 1)) ||
+       (svc->spatial_layer_id < svc->number_spatial_layers - 1 ||
+        svc->temporal_layer_id < svc->number_temporal_layers - 1 ||
+        svc->current_superframe < 1)) ||
       cpi->resize_pending || cpi->resize_state || cpi->external_resize ||
       cpi->resize_state != ORIG) {
     cpi->compute_source_sad_onepass = 0;
@@ -3697,60 +3976,137 @@
       cpi->Last_Source->y_height != cpi->Source->y_height)
     cpi->compute_source_sad_onepass = 0;
 
-  if (cm->frame_type == KEY_FRAME || cpi->resize_pending != 0) {
+  if (frame_is_intra_only(cm) || cpi->resize_pending != 0) {
     memset(cpi->consec_zero_mv, 0,
            cm->mi_rows * cm->mi_cols * sizeof(*cpi->consec_zero_mv));
   }
 
-  vp9_update_noise_estimate(cpi);
+#if CONFIG_VP9_TEMPORAL_DENOISING
+  if (cpi->oxcf.noise_sensitivity > 0 && cpi->use_svc)
+    vp9_denoiser_reset_on_first_frame(cpi);
+#endif
 
   // Scene detection is always used for VBR mode or screen-content case.
   // For other cases (e.g., CBR mode) use it for 5 <= speed < 8 for now
   // (need to check encoding time cost for doing this for speed 8).
   cpi->rc.high_source_sad = 0;
-  if (cpi->compute_source_sad_onepass && cm->show_frame &&
+  cpi->rc.hybrid_intra_scene_change = 0;
+  cpi->rc.re_encode_maxq_scene_change = 0;
+  if (cm->show_frame && cpi->oxcf.mode == REALTIME &&
       (cpi->oxcf.rc_mode == VPX_VBR ||
        cpi->oxcf.content == VP9E_CONTENT_SCREEN ||
-       (cpi->oxcf.speed >= 5 && cpi->oxcf.speed < 8 && !cpi->use_svc)))
+       (cpi->oxcf.speed >= 5 && cpi->oxcf.speed < 8)))
     vp9_scene_detection_onepass(cpi);
 
+  if (svc->spatial_layer_id == svc->first_spatial_layer_to_encode) {
+    svc->high_source_sad_superframe = cpi->rc.high_source_sad;
+    svc->high_num_blocks_with_motion = cpi->rc.high_num_blocks_with_motion;
+    // On scene change reset temporal layer pattern to TL0.
+    // Note that if the base/lower spatial layers are skipped: instead of
+    // inserting base layer here, we force max-q for the next superframe
+    // with lower spatial layers: this is done in vp9_encodedframe_overshoot()
+    // when max-q is decided for the current layer.
+    // Only do this reset for bypass/flexible mode.
+    if (svc->high_source_sad_superframe && svc->temporal_layer_id > 0 &&
+        svc->temporal_layering_mode == VP9E_TEMPORAL_LAYERING_MODE_BYPASS) {
+      // rc->high_source_sad will get reset so copy it to restore it.
+      int tmp_high_source_sad = cpi->rc.high_source_sad;
+      vp9_svc_reset_temporal_layers(cpi, cm->frame_type == KEY_FRAME);
+      cpi->rc.high_source_sad = tmp_high_source_sad;
+    }
+  }
+
+  vp9_update_noise_estimate(cpi);
+
+  // For 1 pass CBR, check if we are dropping this frame.
+  // Never drop on key frame, if base layer is key for svc,
+  // on scene change, or if superframe has layer sync.
+  if ((cpi->rc.high_source_sad || svc->high_source_sad_superframe) &&
+      !(cpi->rc.use_post_encode_drop && svc->last_layer_dropped[0]))
+    no_drop_scene_change = 1;
+  if (cpi->oxcf.pass == 0 && cpi->oxcf.rc_mode == VPX_CBR &&
+      !frame_is_intra_only(cm) && !no_drop_scene_change &&
+      !svc->superframe_has_layer_sync &&
+      (!cpi->use_svc ||
+       !svc->layer_context[svc->temporal_layer_id].is_key_frame)) {
+    if (vp9_rc_drop_frame(cpi)) return 0;
+  }
+
   // For 1 pass CBR SVC, only ZEROMV is allowed for spatial reference frame
   // when svc->force_zero_mode_spatial_ref = 1. Under those conditions we can
   // avoid this frame-level upsampling (for non intra_only frames).
   if (frame_is_intra_only(cm) == 0 &&
-      !(is_one_pass_cbr_svc(cpi) && cpi->svc.force_zero_mode_spatial_ref)) {
+      !(is_one_pass_cbr_svc(cpi) && svc->force_zero_mode_spatial_ref)) {
     vp9_scale_references(cpi);
   }
 
   set_size_independent_vars(cpi);
   set_size_dependent_vars(cpi, &q, &bottom_index, &top_index);
 
+  // search method and step parameter might be changed in speed settings.
+  init_motion_estimation(cpi);
+
   if (cpi->sf.copy_partition_flag) alloc_copy_partition_data(cpi);
 
   if (cpi->sf.svc_use_lowres_part &&
-      cpi->svc.spatial_layer_id == cpi->svc.number_spatial_layers - 2) {
-    if (cpi->svc.prev_partition_svc == NULL) {
+      svc->spatial_layer_id == svc->number_spatial_layers - 2) {
+    if (svc->prev_partition_svc == NULL) {
       CHECK_MEM_ERROR(
-          cm, cpi->svc.prev_partition_svc,
+          cm, svc->prev_partition_svc,
           (BLOCK_SIZE *)vpx_calloc(cm->mi_stride * cm->mi_rows,
-                                   sizeof(*cpi->svc.prev_partition_svc)));
+                                   sizeof(*svc->prev_partition_svc)));
     }
   }
 
-  if (cpi->oxcf.speed >= 5 && cpi->oxcf.pass == 0 &&
+  // TODO(jianj): Look into issue of skin detection with high bitdepth.
+  if (cm->bit_depth == 8 && cpi->oxcf.speed >= 5 && cpi->oxcf.pass == 0 &&
       cpi->oxcf.rc_mode == VPX_CBR &&
       cpi->oxcf.content != VP9E_CONTENT_SCREEN &&
       cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ) {
     cpi->use_skin_detection = 1;
   }
 
-  vp9_set_quantizer(cm, q);
+  // Enable post encode frame dropping for CBR on non key frame, when
+  // ext_use_post_encode_drop is specified by user.
+  cpi->rc.use_post_encode_drop = cpi->rc.ext_use_post_encode_drop &&
+                                 cpi->oxcf.rc_mode == VPX_CBR &&
+                                 cm->frame_type != KEY_FRAME;
+
+  vp9_set_quantizer(cpi, q);
   vp9_set_variance_partition_thresholds(cpi, q, 0);
 
   setup_frame(cpi);
 
   suppress_active_map(cpi);
 
+  if (cpi->use_svc) {
+    // On non-zero spatial layer, check for disabling inter-layer
+    // prediction.
+    if (svc->spatial_layer_id > 0) vp9_svc_constrain_inter_layer_pred(cpi);
+    vp9_svc_assert_constraints_pattern(cpi);
+  }
+
+  if (cpi->rc.last_post_encode_dropped_scene_change) {
+    cpi->rc.high_source_sad = 1;
+    svc->high_source_sad_superframe = 1;
+    // For now disable use_source_sad since Last_Source will not be the previous
+    // encoded but the dropped one.
+    cpi->sf.use_source_sad = 0;
+    cpi->rc.last_post_encode_dropped_scene_change = 0;
+  }
+  // Check if this high_source_sad (scene/slide change) frame should be
+  // encoded at high/max QP, and if so, set the q and adjust some rate
+  // control parameters.
+  if (cpi->sf.overshoot_detection_cbr_rt == FAST_DETECTION_MAXQ &&
+      (cpi->rc.high_source_sad ||
+       (cpi->use_svc && svc->high_source_sad_superframe))) {
+    if (vp9_encodedframe_overshoot(cpi, -1, &q)) {
+      vp9_set_quantizer(cpi, q);
+      vp9_set_variance_partition_thresholds(cpi, q, 0);
+    }
+  }
+
+#if !CONFIG_REALTIME_ONLY
   // Variance adaptive and in frame q adjustment experiments are mutually
   // exclusive.
   if (cpi->oxcf.aq_mode == VARIANCE_AQ) {
@@ -3759,26 +4115,32 @@
     vp9_360aq_frame_setup(cpi);
   } else if (cpi->oxcf.aq_mode == COMPLEXITY_AQ) {
     vp9_setup_in_frame_q_adj(cpi);
-  } else if (cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ) {
-    vp9_cyclic_refresh_setup(cpi);
   } else if (cpi->oxcf.aq_mode == LOOKAHEAD_AQ) {
     // it may be pretty bad for rate-control,
     // and I should handle it somehow
     vp9_alt_ref_aq_setup_map(cpi->alt_ref_aq, cpi);
-  } else if (cpi->roi.enabled && cm->frame_type != KEY_FRAME) {
-    apply_roi_map(cpi);
+  } else {
+#endif
+    if (cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ) {
+      vp9_cyclic_refresh_setup(cpi);
+    } else if (cpi->roi.enabled && !frame_is_intra_only(cm)) {
+      apply_roi_map(cpi);
+    }
+#if !CONFIG_REALTIME_ONLY
   }
+#endif
 
   apply_active_map(cpi);
 
   vp9_encode_frame(cpi);
 
-  // Check if we should drop this frame because of high overshoot.
-  // Only for frames where high temporal-source SAD is detected.
-  if (cpi->oxcf.pass == 0 && cpi->oxcf.rc_mode == VPX_CBR &&
-      cpi->resize_state == ORIG && cm->frame_type != KEY_FRAME &&
-      cpi->oxcf.content == VP9E_CONTENT_SCREEN &&
-      cpi->rc.high_source_sad == 1) {
+  // Check if we should re-encode this frame at high Q because of high
+  // overshoot based on the encoded frame size. Only for frames where
+  // high temporal-source SAD is detected.
+  // For SVC: all spatial layers are checked for re-encoding.
+  if (cpi->sf.overshoot_detection_cbr_rt == RE_ENCODE_MAXQ &&
+      (cpi->rc.high_source_sad ||
+       (cpi->use_svc && svc->high_source_sad_superframe))) {
     int frame_size = 0;
     // Get an estimate of the encoded frame size.
     save_coding_context(cpi);
@@ -3789,13 +4151,17 @@
     // adjust some rate control parameters, and return to re-encode the frame.
     if (vp9_encodedframe_overshoot(cpi, frame_size, &q)) {
       vpx_clear_system_state();
-      vp9_set_quantizer(cm, q);
+      vp9_set_quantizer(cpi, q);
       vp9_set_variance_partition_thresholds(cpi, q, 0);
       suppress_active_map(cpi);
       // Turn-off cyclic refresh for re-encoded frame.
       if (cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ) {
+        CYCLIC_REFRESH *const cr = cpi->cyclic_refresh;
         unsigned char *const seg_map = cpi->segmentation_map;
         memset(seg_map, 0, cm->mi_rows * cm->mi_cols);
+        memset(cr->last_coded_q_map, MAXQ,
+               cm->mi_rows * cm->mi_cols * sizeof(*cr->last_coded_q_map));
+        cr->sb_index = 0;
         vp9_disable_segmentation(&cm->seg);
       }
       apply_active_map(cpi);
@@ -3805,15 +4171,17 @@
 
   // Update some stats from cyclic refresh, and check for golden frame update.
   if (cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ && cm->seg.enabled &&
-      cm->frame_type != KEY_FRAME)
+      !frame_is_intra_only(cm))
     vp9_cyclic_refresh_postencode(cpi);
 
   // Update the skip mb flag probabilities based on the distribution
   // seen in the last encoder iteration.
   // update_base_skip_probs(cpi);
   vpx_clear_system_state();
+  return 1;
 }
 
+#if !CONFIG_REALTIME_ONLY
 #define MAX_QSTEP_ADJ 4
 static int get_qstep_adj(int rate_excess, int rate_limit) {
   int qstep =
@@ -3840,11 +4208,17 @@
   int qrange_adj = 1;
 #endif
 
+  if (cm->show_existing_frame) {
+    rc->this_frame_target = 0;
+    if (is_psnr_calc_enabled(cpi)) set_raw_source_frame(cpi);
+    return;
+  }
+
   set_size_independent_vars(cpi);
 
-  enable_acl = cpi->sf.allow_acl
-                   ? (cm->frame_type == KEY_FRAME) || (cm->show_frame == 0)
-                   : 0;
+  enable_acl = cpi->sf.allow_acl ? (cm->frame_type == KEY_FRAME) ||
+                                       (cpi->twopass.gf_group.index == 1)
+                                 : 0;
 
   do {
     vpx_clear_system_state();
@@ -3919,7 +4293,15 @@
       vp9_scale_references(cpi);
     }
 
-    vp9_set_quantizer(cm, q);
+#if CONFIG_RATE_CTRL
+    // TODO(angiebird): This is a hack for making sure the encoder use the
+    // external_quantize_index exactly. Avoid this kind of hack later.
+    if (cpi->encode_command.use_external_quantize_index) {
+      q = cpi->encode_command.external_quantize_index;
+    }
+#endif
+
+    vp9_set_quantizer(cpi, q);
 
     if (loop_count == 0) setup_frame(cpi);
 
@@ -3933,6 +4315,8 @@
       vp9_setup_in_frame_q_adj(cpi);
     } else if (oxcf->aq_mode == LOOKAHEAD_AQ) {
       vp9_alt_ref_aq_setup_map(cpi->alt_ref_aq, cpi);
+    } else if (oxcf->aq_mode == PSNR_AQ) {
+      vp9_psnr_aq_mode_setup(&cm->seg);
     }
 
     vp9_encode_frame(cpi);
@@ -3955,6 +4339,16 @@
       if (frame_over_shoot_limit == 0) frame_over_shoot_limit = 1;
     }
 
+#if CONFIG_RATE_CTRL
+    // This part needs to be after save_coding_context() because
+    // restore_coding_context will be called in the end of this function.
+    // TODO(angiebird): This is a hack for making sure the encoder use the
+    // external_quantize_index exactly. Avoid this kind of hack later.
+    if (cpi->encode_command.use_external_quantize_index) {
+      break;
+    }
+#endif
+
     if (oxcf->rc_mode == VPX_Q) {
       loop = 0;
     } else {
@@ -4037,8 +4431,9 @@
           // Special case if the projected size is > the max allowed.
           if ((q == q_high) &&
               ((rc->projected_frame_size >= rc->max_frame_bandwidth) ||
-               (rc->projected_frame_size >=
-                big_rate_miss_high_threshold(cpi)))) {
+               (!rc->is_src_frame_alt_ref &&
+                (rc->projected_frame_size >=
+                 big_rate_miss_high_threshold(cpi))))) {
             int max_rate = VPXMAX(1, VPXMIN(rc->max_frame_bandwidth,
                                             big_rate_miss_high_threshold(cpi)));
             double q_val_high;
@@ -4091,7 +4486,7 @@
             // Special case reset for qlow for constrained quality.
             // This should only trigger where there is very substantial
             // undershoot on a frame and the auto cq level is above
-            // the user passsed in value.
+            // the user passed in value.
             if (oxcf->rc_mode == VPX_CQ && q < q_low) {
               q_low = q;
             }
@@ -4130,7 +4525,7 @@
     }
 
     if (cpi->sf.recode_loop >= ALLOW_RECODE_KFARFGF)
-      if (loop || !enable_acl) restore_coding_context(cpi);
+      if (loop) restore_coding_context(cpi);
   } while (loop);
 
 #ifdef AGGRESSIVE_VBR
@@ -4143,7 +4538,7 @@
 #endif
     // Have we been forced to adapt Q outside the expected range by an extreme
     // rate miss. If so adjust the active maxQ for the subsequent frames.
-    if (q > cpi->twopass.active_worst_quality) {
+    if (!rc->is_src_frame_alt_ref && (q > cpi->twopass.active_worst_quality)) {
       cpi->twopass.active_worst_quality = q;
     } else if (oxcf->vbr_corpus_complexity && q == q_low &&
                rc->projected_frame_size < rc->this_frame_target) {
@@ -4156,23 +4551,16 @@
     // Skip recoding, if model diff is below threshold
     const int thresh = compute_context_model_thresh(cpi);
     const int diff = compute_context_model_diff(cm);
-    if (diff < thresh) {
-      vpx_clear_system_state();
-      restore_coding_context(cpi);
-      return;
+    if (diff >= thresh) {
+      vp9_encode_frame(cpi);
     }
-
-    vp9_encode_frame(cpi);
+  }
+  if (cpi->sf.recode_loop >= ALLOW_RECODE_KFARFGF) {
     vpx_clear_system_state();
     restore_coding_context(cpi);
-    vp9_pack_bitstream(cpi, dest, size);
-
-    vp9_encode_frame(cpi);
-    vpx_clear_system_state();
-
-    restore_coding_context(cpi);
   }
 }
+#endif  // !CONFIG_REALTIME_ONLY
 
 static int get_ref_frame_flags(const VP9_COMP *cpi) {
   const int *const map = cpi->common.ref_frame_map;
@@ -4268,20 +4656,21 @@
   }
 }
 
-static void set_arf_sign_bias(VP9_COMP *cpi) {
+static void set_ref_sign_bias(VP9_COMP *cpi) {
   VP9_COMMON *const cm = &cpi->common;
-  int arf_sign_bias;
+  RefCntBuffer *const ref_buffer = get_ref_cnt_buffer(cm, cm->new_fb_idx);
+  const int cur_frame_index = ref_buffer->frame_index;
+  MV_REFERENCE_FRAME ref_frame;
 
-  if ((cpi->oxcf.pass == 2) && cpi->multi_arf_allowed) {
-    const GF_GROUP *const gf_group = &cpi->twopass.gf_group;
-    arf_sign_bias = cpi->rc.source_alt_ref_active &&
-                    (!cpi->refresh_alt_ref_frame ||
-                     (gf_group->rf_level[gf_group->index] == GF_ARF_LOW));
-  } else {
-    arf_sign_bias =
-        (cpi->rc.source_alt_ref_active && !cpi->refresh_alt_ref_frame);
+  for (ref_frame = LAST_FRAME; ref_frame < MAX_REF_FRAMES; ++ref_frame) {
+    const int buf_idx = get_ref_frame_buf_idx(cpi, ref_frame);
+    const RefCntBuffer *const ref_cnt_buf =
+        get_ref_cnt_buffer(&cpi->common, buf_idx);
+    if (ref_cnt_buf) {
+      cm->ref_frame_sign_bias[ref_frame] =
+          cur_frame_index < ref_cnt_buf->frame_index;
+    }
   }
-  cm->ref_frame_sign_bias[ALTREF_FRAME] = arf_sign_bias;
 }
 
 static int setup_interp_filter_search_mask(VP9_COMP *cpi) {
@@ -4310,7 +4699,7 @@
 }
 
 #ifdef ENABLE_KF_DENOISE
-// Baseline Kernal weights for denoise
+// Baseline kernel weights for denoise
 static uint8_t dn_kernal_3[9] = { 1, 2, 1, 2, 4, 2, 1, 2, 1 };
 static uint8_t dn_kernal_5[25] = { 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 4,
                                    2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1 };
@@ -4345,7 +4734,7 @@
     tmp_ptr += stride;
   }
 
-  // Select the kernal size.
+  // Select the kernel size.
   if (max_diff > (strength + (strength >> 1))) {
     kernal_size = 3;
     half_k_size = 1;
@@ -4353,7 +4742,7 @@
   }
   kernal_ptr = (kernal_size == 3) ? dn_kernal_3 : dn_kernal_5;
 
-  // Apply the kernal
+  // Apply the kernel
   tmp_ptr = src_ptr - (stride * half_k_size) - half_k_size;
   for (i = 0; i < kernal_size; ++i) {
     for (j = 0; j < kernal_size; ++j) {
@@ -4390,7 +4779,7 @@
     tmp_ptr += stride;
   }
 
-  // Select the kernal size.
+  // Select the kernel size.
   if (max_diff > (strength + (strength >> 1))) {
     kernal_size = 3;
     half_k_size = 1;
@@ -4398,7 +4787,7 @@
   }
   kernal_ptr = (kernal_size == 3) ? dn_kernal_3 : dn_kernal_5;
 
-  // Apply the kernal
+  // Apply the kernel
   tmp_ptr = src_ptr - (stride * half_k_size) - half_k_size;
   for (i = 0; i < kernal_size; ++i) {
     for (j = 0; j < kernal_size; ++j) {
@@ -4414,7 +4803,7 @@
 }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
-// Apply thresholded spatial noise supression to a given buffer.
+// Apply thresholded spatial noise suppression to a given buffer.
 static void spatial_denoise_buffer(VP9_COMP *cpi, uint8_t *buffer,
                                    const int stride, const int width,
                                    const int height, const int strength) {
@@ -4439,7 +4828,7 @@
   }
 }
 
-// Apply thresholded spatial noise supression to source.
+// Apply thresholded spatial noise suppression to source.
 static void spatial_denoise_frame(VP9_COMP *cpi) {
   YV12_BUFFER_CONFIG *src = cpi->Source;
   const VP9EncoderConfig *const oxcf = &cpi->oxcf;
@@ -4465,6 +4854,7 @@
 }
 #endif  // ENABLE_KF_DENOISE
 
+#if !CONFIG_REALTIME_ONLY
 static void vp9_try_disable_lookahead_aq(VP9_COMP *cpi, size_t *size,
                                          uint8_t *dest) {
   if (cpi->common.seg.enabled)
@@ -4488,6 +4878,207 @@
         vp9_enable_segmentation(&cpi->common.seg);
     }
 }
+#endif
+
+static void set_frame_index(VP9_COMP *cpi, VP9_COMMON *cm) {
+  RefCntBuffer *const ref_buffer = get_ref_cnt_buffer(cm, cm->new_fb_idx);
+
+  if (ref_buffer) {
+    const GF_GROUP *const gf_group = &cpi->twopass.gf_group;
+    ref_buffer->frame_index =
+        cm->current_video_frame + gf_group->arf_src_offset[gf_group->index];
+  }
+}
+
+static void set_mb_ssim_rdmult_scaling(VP9_COMP *cpi) {
+  VP9_COMMON *cm = &cpi->common;
+  ThreadData *td = &cpi->td;
+  MACROBLOCK *x = &td->mb;
+  MACROBLOCKD *xd = &x->e_mbd;
+  uint8_t *y_buffer = cpi->Source->y_buffer;
+  const int y_stride = cpi->Source->y_stride;
+  const int block_size = BLOCK_16X16;
+
+  const int num_8x8_w = num_8x8_blocks_wide_lookup[block_size];
+  const int num_8x8_h = num_8x8_blocks_high_lookup[block_size];
+  const int num_cols = (cm->mi_cols + num_8x8_w - 1) / num_8x8_w;
+  const int num_rows = (cm->mi_rows + num_8x8_h - 1) / num_8x8_h;
+  double log_sum = 0.0;
+  int row, col;
+
+  // Loop through each 16x16 block.
+  for (row = 0; row < num_rows; ++row) {
+    for (col = 0; col < num_cols; ++col) {
+      int mi_row, mi_col;
+      double var = 0.0, num_of_var = 0.0;
+      const int index = row * num_cols + col;
+
+      for (mi_row = row * num_8x8_h;
+           mi_row < cm->mi_rows && mi_row < (row + 1) * num_8x8_h; ++mi_row) {
+        for (mi_col = col * num_8x8_w;
+             mi_col < cm->mi_cols && mi_col < (col + 1) * num_8x8_w; ++mi_col) {
+          struct buf_2d buf;
+          const int row_offset_y = mi_row << 3;
+          const int col_offset_y = mi_col << 3;
+
+          buf.buf = y_buffer + row_offset_y * y_stride + col_offset_y;
+          buf.stride = y_stride;
+
+          // To keep SSIM_VAR_SCALE on the same scale for both 8-bit and
+          // high-bit-depth videos, the variance needs to be divided by 2.0
+          // or 64.0 respectively.
+          // TODO(sdeng): need to tune for 12bit videos.
+#if CONFIG_VP9_HIGHBITDEPTH
+          if (cpi->Source->flags & YV12_FLAG_HIGHBITDEPTH)
+            var += vp9_high_get_sby_variance(cpi, &buf, BLOCK_8X8, xd->bd);
+          else
+#endif
+            var += vp9_get_sby_variance(cpi, &buf, BLOCK_8X8);
+
+          num_of_var += 1.0;
+        }
+      }
+      var = var / num_of_var / 64.0;
+
+      // Curve fitting with an exponential model on all 16x16 blocks from the
+      // Midres dataset.
+      var = 67.035434 * (1 - exp(-0.0021489 * var)) + 17.492222;
+      cpi->mi_ssim_rdmult_scaling_factors[index] = var;
+      log_sum += log(var);
+    }
+  }
+  log_sum = exp(log_sum / (double)(num_rows * num_cols));
+
+  for (row = 0; row < num_rows; ++row) {
+    for (col = 0; col < num_cols; ++col) {
+      const int index = row * num_cols + col;
+      cpi->mi_ssim_rdmult_scaling_factors[index] /= log_sum;
+    }
+  }
+
+  (void)xd;
+}
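+The normalization step at the end of `set_mb_ssim_rdmult_scaling` divides every per-block factor by the geometric mean of all factors, computed via the log-sum. A standalone sketch of that step (the function name here is illustrative, not libvpx API):
+
+```c
+#include <math.h>
+#include <stddef.h>
+
+/* Divide each per-block scaling factor by the geometric mean of all
+ * factors, so the average rdmult adjustment over the frame stays neutral.
+ * Mirrors the log_sum / exp normalization in set_mb_ssim_rdmult_scaling. */
+static void normalize_by_geometric_mean(double *factors, size_t count) {
+  double log_sum = 0.0;
+  double geo_mean;
+  size_t i;
+  for (i = 0; i < count; ++i) log_sum += log(factors[i]);
+  geo_mean = exp(log_sum / (double)count); /* exp of the mean log */
+  for (i = 0; i < count; ++i) factors[i] /= geo_mean;
+}
+```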
+
+// Process the Wiener variance on a 16x16 block basis.
+static int qsort_comp(const void *elem1, const void *elem2) {
+  int a = *((const int *)elem1);
+  int b = *((const int *)elem2);
+  if (a > b) return 1;
+  if (a < b) return -1;
+  return 0;
+}
+
+static void init_mb_wiener_var_buffer(VP9_COMP *cpi) {
+  VP9_COMMON *cm = &cpi->common;
+
+  if (cpi->mb_wiener_variance && cpi->mb_wiener_var_rows >= cm->mb_rows &&
+      cpi->mb_wiener_var_cols >= cm->mb_cols)
+    return;
+
+  vpx_free(cpi->mb_wiener_variance);
+  cpi->mb_wiener_variance = NULL;
+
+  CHECK_MEM_ERROR(
+      cm, cpi->mb_wiener_variance,
+      vpx_calloc(cm->mb_rows * cm->mb_cols, sizeof(*cpi->mb_wiener_variance)));
+  cpi->mb_wiener_var_rows = cm->mb_rows;
+  cpi->mb_wiener_var_cols = cm->mb_cols;
+}
+
+static void set_mb_wiener_variance(VP9_COMP *cpi) {
+  VP9_COMMON *cm = &cpi->common;
+  uint8_t *buffer = cpi->Source->y_buffer;
+  int buf_stride = cpi->Source->y_stride;
+
+#if CONFIG_VP9_HIGHBITDEPTH
+  ThreadData *td = &cpi->td;
+  MACROBLOCK *x = &td->mb;
+  MACROBLOCKD *xd = &x->e_mbd;
+  DECLARE_ALIGNED(16, uint16_t, zero_pred16[32 * 32]);
+  DECLARE_ALIGNED(16, uint8_t, zero_pred8[32 * 32]);
+  uint8_t *zero_pred;
+#else
+  DECLARE_ALIGNED(16, uint8_t, zero_pred[32 * 32]);
+#endif
+
+  DECLARE_ALIGNED(16, int16_t, src_diff[32 * 32]);
+  DECLARE_ALIGNED(16, tran_low_t, coeff[32 * 32]);
+
+  int mb_row, mb_col, count = 0;
+  // Hard coded operating block size
+  const int block_size = 16;
+  const int coeff_count = block_size * block_size;
+  const TX_SIZE tx_size = TX_16X16;
+
+#if CONFIG_VP9_HIGHBITDEPTH
+  xd->cur_buf = cpi->Source;
+  if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
+    zero_pred = CONVERT_TO_BYTEPTR(zero_pred16);
+    memset(zero_pred16, 0, sizeof(*zero_pred16) * coeff_count);
+  } else {
+    zero_pred = zero_pred8;
+    memset(zero_pred8, 0, sizeof(*zero_pred8) * coeff_count);
+  }
+#else
+  memset(zero_pred, 0, sizeof(*zero_pred) * coeff_count);
+#endif
+
+  cpi->norm_wiener_variance = 0;
+
+  for (mb_row = 0; mb_row < cm->mb_rows; ++mb_row) {
+    for (mb_col = 0; mb_col < cm->mb_cols; ++mb_col) {
+      int idx;
+      int16_t median_val = 0;
+      uint8_t *mb_buffer =
+          buffer + mb_row * block_size * buf_stride + mb_col * block_size;
+      int64_t wiener_variance = 0;
+
+#if CONFIG_VP9_HIGHBITDEPTH
+      if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
+        vpx_highbd_subtract_block(block_size, block_size, src_diff, block_size,
+                                  mb_buffer, buf_stride, zero_pred, block_size,
+                                  xd->bd);
+        highbd_wht_fwd_txfm(src_diff, block_size, coeff, tx_size);
+      } else {
+        vpx_subtract_block(block_size, block_size, src_diff, block_size,
+                           mb_buffer, buf_stride, zero_pred, block_size);
+        wht_fwd_txfm(src_diff, block_size, coeff, tx_size);
+      }
+#else
+      vpx_subtract_block(block_size, block_size, src_diff, block_size,
+                         mb_buffer, buf_stride, zero_pred, block_size);
+      wht_fwd_txfm(src_diff, block_size, coeff, tx_size);
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
+      coeff[0] = 0;
+      for (idx = 1; idx < coeff_count; ++idx) coeff[idx] = abs(coeff[idx]);
+
+      qsort(coeff, coeff_count - 1, sizeof(*coeff), qsort_comp);
+
+      // Noise level estimation
+      median_val = coeff[coeff_count / 2];
+
+      // Wiener filter
+      for (idx = 1; idx < coeff_count; ++idx) {
+        int64_t sqr_coeff = (int64_t)coeff[idx] * coeff[idx];
+        int64_t tmp_coeff = (int64_t)coeff[idx];
+        if (median_val) {
+          tmp_coeff = (sqr_coeff * coeff[idx]) /
+                      (sqr_coeff + (int64_t)median_val * median_val);
+        }
+        wiener_variance += tmp_coeff * tmp_coeff;
+      }
+      cpi->mb_wiener_variance[mb_row * cm->mb_cols + mb_col] =
+          wiener_variance / coeff_count;
+      cpi->norm_wiener_variance +=
+          cpi->mb_wiener_variance[mb_row * cm->mb_cols + mb_col];
+      ++count;
+    }
+  }
+
+  if (count) cpi->norm_wiener_variance /= count;
+  cpi->norm_wiener_variance = VPXMAX(1, cpi->norm_wiener_variance);
+}
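+The per-coefficient step inside the "Wiener filter" loop of `set_mb_wiener_variance` can be isolated as follows: each coefficient c is scaled by c^2 / (c^2 + median^2), so coefficients near the estimated noise floor are shrunk toward zero. A sketch (`wiener_shrink` is a hypothetical helper, not a libvpx function):
+
+```c
+#include <stdint.h>
+
+/* Wiener-style shrinkage: scale a coefficient by c^2 / (c^2 + m^2),
+ * where m is the noise estimate (the median absolute coefficient in
+ * set_mb_wiener_variance). Matches the tmp_coeff computation above. */
+static int64_t wiener_shrink(int64_t coeff, int64_t median) {
+  const int64_t sqr = coeff * coeff;
+  if (median == 0) return coeff; /* no noise estimate: pass through */
+  return (sqr * coeff) / (sqr + median * median);
+}
+```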
 
 static void encode_frame_to_data_rate(VP9_COMP *cpi, size_t *size,
                                       uint8_t *dest,
@@ -4498,11 +5089,23 @@
   TX_SIZE t;
 
   // SVC: skip encoding of enhancement layer if the layer target bandwidth = 0.
+  // No need to set svc.skip_enhancement_layer if the whole superframe will
+  // be dropped.
   if (cpi->use_svc && cpi->svc.spatial_layer_id > 0 &&
-      !cpi->svc.rc_drop_superframe && cpi->oxcf.target_bandwidth == 0) {
+      cpi->oxcf.target_bandwidth == 0 &&
+      !(cpi->svc.framedrop_mode != LAYER_DROP &&
+        (cpi->svc.framedrop_mode != CONSTRAINED_FROM_ABOVE_DROP ||
+         cpi->svc
+             .force_drop_constrained_from_above[cpi->svc.number_spatial_layers -
+                                                1]) &&
+        cpi->svc.drop_spatial_layer[0])) {
     cpi->svc.skip_enhancement_layer = 1;
     vp9_rc_postencode_update_drop_frame(cpi);
     cpi->ext_refresh_frame_flags_pending = 0;
+    cpi->last_frame_dropped = 1;
+    cpi->svc.last_layer_dropped[cpi->svc.spatial_layer_id] = 1;
+    cpi->svc.drop_spatial_layer[cpi->svc.spatial_layer_id] = 1;
+    vp9_inc_frame_in_layer(cpi);
     return;
   }
 
@@ -4514,8 +5117,13 @@
   if (is_spatial_denoise_enabled(cpi)) spatial_denoise_frame(cpi);
 #endif
 
-  // Set the arf sign bias for this frame.
-  set_arf_sign_bias(cpi);
+  if (cm->show_existing_frame == 0) {
+    // Update frame index
+    set_frame_index(cpi, cm);
+
+    // Set the arf sign bias for this frame.
+    set_ref_sign_bias(cpi);
+  }
 
   // Set default state for segment based loop filter update flags.
   cm->lf.mode_ref_delta_update = 0;
@@ -4550,65 +5158,12 @@
       cm->reset_frame_context = 2;
     }
   }
-  if (is_two_pass_svc(cpi) && cm->error_resilient_mode == 0) {
-    // Use context 0 for intra only empty frame, but the last frame context
-    // for other empty frames.
-    if (cpi->svc.encode_empty_frame_state == ENCODING) {
-      if (cpi->svc.encode_intra_empty_frame != 0)
-        cm->frame_context_idx = 0;
-      else
-        cm->frame_context_idx = FRAME_CONTEXTS - 1;
-    } else {
-      cm->frame_context_idx =
-          cpi->svc.spatial_layer_id * cpi->svc.number_temporal_layers +
-          cpi->svc.temporal_layer_id;
-    }
 
-    cm->frame_parallel_decoding_mode = oxcf->frame_parallel_decoding_mode;
+  if (oxcf->tuning == VP8_TUNE_SSIM) set_mb_ssim_rdmult_scaling(cpi);
 
-    // The probs will be updated based on the frame type of its previous
-    // frame if frame_parallel_decoding_mode is 0. The type may vary for
-    // the frame after a key frame in base layer since we may drop enhancement
-    // layers. So set frame_parallel_decoding_mode to 1 in this case.
-    if (cm->frame_parallel_decoding_mode == 0) {
-      if (cpi->svc.number_temporal_layers == 1) {
-        if (cpi->svc.spatial_layer_id == 0 &&
-            cpi->svc.layer_context[0].last_frame_type == KEY_FRAME)
-          cm->frame_parallel_decoding_mode = 1;
-      } else if (cpi->svc.spatial_layer_id == 0) {
-        // Find the 2nd frame in temporal base layer and 1st frame in temporal
-        // enhancement layers from the key frame.
-        int i;
-        for (i = 0; i < cpi->svc.number_temporal_layers; ++i) {
-          if (cpi->svc.layer_context[0].frames_from_key_frame == 1 << i) {
-            cm->frame_parallel_decoding_mode = 1;
-            break;
-          }
-        }
-      }
-    }
-  }
-
-  // For 1 pass CBR, check if we are dropping this frame.
-  // For spatial layers, for now only check for frame-dropping on first spatial
-  // layer, and if decision is to drop, we drop whole super-frame.
-  if (oxcf->pass == 0 && oxcf->rc_mode == VPX_CBR &&
-      cm->frame_type != KEY_FRAME) {
-    if (vp9_rc_drop_frame(cpi) ||
-        (is_one_pass_cbr_svc(cpi) && cpi->svc.rc_drop_superframe == 1)) {
-      vp9_rc_postencode_update_drop_frame(cpi);
-      cpi->ext_refresh_frame_flags_pending = 0;
-      cpi->svc.rc_drop_superframe = 1;
-      cpi->last_frame_dropped = 1;
-      // TODO(marpan): Advancing the svc counters on dropped frames can break
-      // the referencing scheme for the fixed svc patterns defined in
-      // vp9_one_pass_cbr_svc_start_layer(). Look into fixing this issue, but
-      // for now, don't advance the svc frame counters on dropped frame.
-      // if (cpi->use_svc)
-      //   vp9_inc_frame_in_layer(cpi);
-
-      return;
-    }
+  if (oxcf->aq_mode == PERCEPTUAL_AQ) {
+    init_mb_wiener_var_buffer(cpi);
+    set_mb_wiener_variance(cpi);
   }
 
   vpx_clear_system_state();
@@ -4617,18 +5172,33 @@
   memset(cpi->mode_chosen_counts, 0,
          MAX_MODES * sizeof(*cpi->mode_chosen_counts));
 #endif
+#if CONFIG_CONSISTENT_RECODE || CONFIG_RATE_CTRL
+  // Backup to ensure consistency between recodes
+  save_encode_params(cpi);
+#endif  // CONFIG_CONSISTENT_RECODE || CONFIG_RATE_CTRL
 
   if (cpi->sf.recode_loop == DISALLOW_RECODE) {
-    encode_without_recode_loop(cpi, size, dest);
+    if (!encode_without_recode_loop(cpi, size, dest)) return;
   } else {
+#if !CONFIG_REALTIME_ONLY
     encode_with_recode_loop(cpi, size, dest);
+#endif
   }
 
-  cpi->last_frame_dropped = 0;
+  // TODO(jingning): When using show existing frame mode, we assume that the
+  // current ARF will be directly used as the final reconstructed frame. This is
+  // an encoder control scheme. One could in principle explore other
+  // possibilities to arrange the reference frame buffer and their coding order.
+  if (cm->show_existing_frame) {
+    ref_cnt_fb(cm->buffer_pool->frame_bufs, &cm->new_fb_idx,
+               cm->ref_frame_map[cpi->alt_fb_idx]);
+  }
 
+#if !CONFIG_REALTIME_ONLY
  // Disable segmentation if it decreases the rate/distortion ratio
   if (cpi->oxcf.aq_mode == LOOKAHEAD_AQ)
     vp9_try_disable_lookahead_aq(cpi, size, dest);
+#endif
 
 #if CONFIG_VP9_TEMPORAL_DENOISING
 #ifdef OUTPUT_YUV_DENOISED
@@ -4672,9 +5242,33 @@
   // Pick the loop filter level for the frame.
   loopfilter_frame(cpi, cm);
 
+  if (cpi->rc.use_post_encode_drop) save_coding_context(cpi);
+
   // build the bitstream
   vp9_pack_bitstream(cpi, dest, size);
 
+  if (cpi->rc.use_post_encode_drop && cm->base_qindex < cpi->rc.worst_quality &&
+      cpi->svc.spatial_layer_id == 0 && post_encode_drop_cbr(cpi, size)) {
+    restore_coding_context(cpi);
+    return;
+  }
+
+  cpi->last_frame_dropped = 0;
+  cpi->svc.last_layer_dropped[cpi->svc.spatial_layer_id] = 0;
+  if (cpi->svc.spatial_layer_id == cpi->svc.number_spatial_layers - 1)
+    cpi->svc.num_encoded_top_layer++;
+
+  // Keep track of the frame buffer index updated/refreshed for the
+  // current encoded TL0 superframe.
+  if (cpi->svc.temporal_layer_id == 0) {
+    if (cpi->refresh_last_frame)
+      cpi->svc.fb_idx_upd_tl0[cpi->svc.spatial_layer_id] = cpi->lst_fb_idx;
+    else if (cpi->refresh_golden_frame)
+      cpi->svc.fb_idx_upd_tl0[cpi->svc.spatial_layer_id] = cpi->gld_fb_idx;
+    else if (cpi->refresh_alt_ref_frame)
+      cpi->svc.fb_idx_upd_tl0[cpi->svc.spatial_layer_id] = cpi->alt_fb_idx;
+  }
+
   if (cm->seg.update_map) update_reference_segmentation_map(cpi);
 
   if (frame_is_intra_only(cm) == 0) {
@@ -4682,17 +5276,18 @@
   }
   vp9_update_reference_frames(cpi);
 
-  for (t = TX_4X4; t <= TX_32X32; t++)
-    full_to_model_counts(cpi->td.counts->coef[t],
-                         cpi->td.rd_counts.coef_counts[t]);
+  if (!cm->show_existing_frame) {
+    for (t = TX_4X4; t <= TX_32X32; ++t) {
+      full_to_model_counts(cpi->td.counts->coef[t],
+                           cpi->td.rd_counts.coef_counts[t]);
+    }
 
-  if (!cm->error_resilient_mode && !cm->frame_parallel_decoding_mode)
-    vp9_adapt_coef_probs(cm);
-
-  if (!frame_is_intra_only(cm)) {
     if (!cm->error_resilient_mode && !cm->frame_parallel_decoding_mode) {
-      vp9_adapt_mode_probs(cm);
-      vp9_adapt_mv_probs(cm, cm->allow_high_precision_mv);
+      if (!frame_is_intra_only(cm)) {
+        vp9_adapt_mode_probs(cm);
+        vp9_adapt_mv_probs(cm, cm->allow_high_precision_mv);
+      }
+      vp9_adapt_coef_probs(cm);
     }
   }
 
@@ -4712,8 +5307,9 @@
 
   cm->last_frame_type = cm->frame_type;
 
-  if (!(is_two_pass_svc(cpi) && cpi->svc.encode_empty_frame_state == ENCODING))
-    vp9_rc_postencode_update(cpi, *size);
+  vp9_rc_postencode_update(cpi, *size);
+
+  *size = VPXMAX(1, *size);
 
 #if 0
   output_frame_level_debug_stats(cpi);
@@ -4737,7 +5333,10 @@
   cm->last_height = cm->height;
 
   // reset to normal state now that we are done.
-  if (!cm->show_existing_frame) cm->last_show_frame = cm->show_frame;
+  if (!cm->show_existing_frame) {
+    cm->last_show_frame = cm->show_frame;
+    cm->prev_frame = cm->cur_frame;
+  }
 
   if (cm->show_frame) {
     vp9_swap_mi_and_prev_mi(cm);
@@ -4746,19 +5345,26 @@
     ++cm->current_video_frame;
     if (cpi->use_svc) vp9_inc_frame_in_layer(cpi);
   }
-  cm->prev_frame = cm->cur_frame;
 
-  if (cpi->use_svc)
+  if (cpi->use_svc) {
     cpi->svc
         .layer_context[cpi->svc.spatial_layer_id *
                            cpi->svc.number_temporal_layers +
                        cpi->svc.temporal_layer_id]
         .last_frame_type = cm->frame_type;
+    // Reset layer_sync back to 0 for next frame.
+    cpi->svc.spatial_layer_sync[cpi->svc.spatial_layer_id] = 0;
+  }
 
   cpi->force_update_segmentation = 0;
 
+#if !CONFIG_REALTIME_ONLY
   if (cpi->oxcf.aq_mode == LOOKAHEAD_AQ)
     vp9_alt_ref_aq_unset_all(cpi->alt_ref_aq, cpi);
+#endif
+
+  cpi->svc.previous_frame_is_intra_only = cm->intra_only;
+  cpi->svc.set_intra_only_frame = 0;
 }
 
 static void SvcEncode(VP9_COMP *cpi, size_t *size, uint8_t *dest,
@@ -4781,54 +5387,13 @@
 static void Pass2Encode(VP9_COMP *cpi, size_t *size, uint8_t *dest,
                         unsigned int *frame_flags) {
   cpi->allow_encode_breakout = ENCODE_BREAKOUT_ENABLED;
+#if CONFIG_MISMATCH_DEBUG
+  mismatch_move_frame_idx_w();
+#endif
   encode_frame_to_data_rate(cpi, size, dest, frame_flags);
-
-  if (!(is_two_pass_svc(cpi) && cpi->svc.encode_empty_frame_state == ENCODING))
-    vp9_twopass_postencode_update(cpi);
 }
 #endif  // !CONFIG_REALTIME_ONLY
 
-static void init_ref_frame_bufs(VP9_COMMON *cm) {
-  int i;
-  BufferPool *const pool = cm->buffer_pool;
-  cm->new_fb_idx = INVALID_IDX;
-  for (i = 0; i < REF_FRAMES; ++i) {
-    cm->ref_frame_map[i] = INVALID_IDX;
-    pool->frame_bufs[i].ref_count = 0;
-  }
-}
-
-static void check_initial_width(VP9_COMP *cpi,
-#if CONFIG_VP9_HIGHBITDEPTH
-                                int use_highbitdepth,
-#endif
-                                int subsampling_x, int subsampling_y) {
-  VP9_COMMON *const cm = &cpi->common;
-
-  if (!cpi->initial_width ||
-#if CONFIG_VP9_HIGHBITDEPTH
-      cm->use_highbitdepth != use_highbitdepth ||
-#endif
-      cm->subsampling_x != subsampling_x ||
-      cm->subsampling_y != subsampling_y) {
-    cm->subsampling_x = subsampling_x;
-    cm->subsampling_y = subsampling_y;
-#if CONFIG_VP9_HIGHBITDEPTH
-    cm->use_highbitdepth = use_highbitdepth;
-#endif
-
-    alloc_raw_frame_buffers(cpi);
-    init_ref_frame_bufs(cm);
-    alloc_util_frame_buffers(cpi);
-
-    init_motion_estimation(cpi);  // TODO(agrange) This can be removed.
-
-    cpi->initial_width = cm->width;
-    cpi->initial_height = cm->height;
-    cpi->initial_mbs = cm->MBs;
-  }
-}
-
 int vp9_receive_raw_frame(VP9_COMP *cpi, vpx_enc_frame_flags_t frame_flags,
                           YV12_BUFFER_CONFIG *sd, int64_t time_stamp,
                           int64_t end_time) {
@@ -4839,24 +5404,21 @@
   const int subsampling_y = sd->subsampling_y;
 #if CONFIG_VP9_HIGHBITDEPTH
   const int use_highbitdepth = (sd->flags & YV12_FLAG_HIGHBITDEPTH) != 0;
+#else
+  const int use_highbitdepth = 0;
 #endif
 
-#if CONFIG_VP9_HIGHBITDEPTH
-  check_initial_width(cpi, use_highbitdepth, subsampling_x, subsampling_y);
-#else
-  check_initial_width(cpi, subsampling_x, subsampling_y);
-#endif  // CONFIG_VP9_HIGHBITDEPTH
-
+  update_initial_width(cpi, use_highbitdepth, subsampling_x, subsampling_y);
 #if CONFIG_VP9_TEMPORAL_DENOISING
   setup_denoiser_buffer(cpi);
 #endif
+
+  alloc_raw_frame_buffers(cpi);
+
   vpx_usec_timer_start(&timer);
 
   if (vp9_lookahead_push(cpi->lookahead, sd, time_stamp, end_time,
-#if CONFIG_VP9_HIGHBITDEPTH
-                         use_highbitdepth,
-#endif  // CONFIG_VP9_HIGHBITDEPTH
-                         frame_flags))
+                         use_highbitdepth, frame_flags))
     res = -1;
   vpx_usec_timer_mark(&timer);
   cpi->time_receive_data += vpx_usec_timer_elapsed(&timer);
@@ -4967,10 +5529,6 @@
 }
 
 #if CONFIG_INTERNAL_STATS
-extern double vp9_get_blockiness(const uint8_t *img1, int img1_pitch,
-                                 const uint8_t *img2, int img2_pitch, int width,
-                                 int height);
-
 static void adjust_image_stat(double y, double u, double v, double all,
                               ImageStat *s) {
   s->stat[Y] += y;
@@ -5210,9 +5768,1541 @@
   }
 }
 
+typedef struct GF_PICTURE {
+  YV12_BUFFER_CONFIG *frame;
+  int ref_frame[3];
+  FRAME_UPDATE_TYPE update_type;
+} GF_PICTURE;
+
+static void init_gop_frames(VP9_COMP *cpi, GF_PICTURE *gf_picture,
+                            const GF_GROUP *gf_group, int *tpl_group_frames) {
+  VP9_COMMON *cm = &cpi->common;
+  int frame_idx = 0;
+  int i;
+  int gld_index = -1;
+  int alt_index = -1;
+  int lst_index = -1;
+  int arf_index_stack[MAX_ARF_LAYERS];
+  int arf_stack_size = 0;
+  int extend_frame_count = 0;
+  int pframe_qindex = cpi->tpl_stats[2].base_qindex;
+  int frame_gop_offset = 0;
+
+  RefCntBuffer *frame_bufs = cm->buffer_pool->frame_bufs;
+  int8_t recon_frame_index[REFS_PER_FRAME + MAX_ARF_LAYERS];
+
+  memset(recon_frame_index, -1, sizeof(recon_frame_index));
+  stack_init(arf_index_stack, MAX_ARF_LAYERS);
+
+  // TODO(jingning): To be used later for gf frame type parsing.
+  (void)gf_group;
+
+  for (i = 0; i < FRAME_BUFFERS; ++i) {
+    if (frame_bufs[i].ref_count == 0) {
+      alloc_frame_mvs(cm, i);
+      if (vpx_realloc_frame_buffer(&frame_bufs[i].buf, cm->width, cm->height,
+                                   cm->subsampling_x, cm->subsampling_y,
+#if CONFIG_VP9_HIGHBITDEPTH
+                                   cm->use_highbitdepth,
+#endif
+                                   VP9_ENC_BORDER_IN_PIXELS, cm->byte_alignment,
+                                   NULL, NULL, NULL))
+        vpx_internal_error(&cm->error, VPX_CODEC_MEM_ERROR,
+                           "Failed to allocate frame buffer");
+
+      recon_frame_index[frame_idx] = i;
+      ++frame_idx;
+
+      if (frame_idx >= REFS_PER_FRAME + cpi->oxcf.enable_auto_arf) break;
+    }
+  }
+
+  for (i = 0; i < REFS_PER_FRAME + 1; ++i) {
+    assert(recon_frame_index[i] >= 0);
+    cpi->tpl_recon_frames[i] = &frame_bufs[recon_frame_index[i]].buf;
+  }
+
+  *tpl_group_frames = 0;
+
+  // Initialize Golden reference frame.
+  gf_picture[0].frame = get_ref_frame_buffer(cpi, GOLDEN_FRAME);
+  for (i = 0; i < 3; ++i) gf_picture[0].ref_frame[i] = -1;
+  gf_picture[0].update_type = gf_group->update_type[0];
+  gld_index = 0;
+  ++*tpl_group_frames;
+
+  // Initialize base layer ARF frame
+  gf_picture[1].frame = cpi->Source;
+  gf_picture[1].ref_frame[0] = gld_index;
+  gf_picture[1].ref_frame[1] = lst_index;
+  gf_picture[1].ref_frame[2] = alt_index;
+  gf_picture[1].update_type = gf_group->update_type[1];
+  alt_index = 1;
+  ++*tpl_group_frames;
+
+  // Initialize P frames
+  for (frame_idx = 2; frame_idx < MAX_ARF_GOP_SIZE; ++frame_idx) {
+    struct lookahead_entry *buf;
+    frame_gop_offset = gf_group->frame_gop_index[frame_idx];
+    buf = vp9_lookahead_peek(cpi->lookahead, frame_gop_offset - 1);
+
+    if (buf == NULL) break;
+
+    gf_picture[frame_idx].frame = &buf->img;
+    gf_picture[frame_idx].ref_frame[0] = gld_index;
+    gf_picture[frame_idx].ref_frame[1] = lst_index;
+    gf_picture[frame_idx].ref_frame[2] = alt_index;
+    gf_picture[frame_idx].update_type = gf_group->update_type[frame_idx];
+
+    switch (gf_group->update_type[frame_idx]) {
+      case ARF_UPDATE:
+        stack_push(arf_index_stack, alt_index, arf_stack_size);
+        ++arf_stack_size;
+        alt_index = frame_idx;
+        break;
+      case LF_UPDATE: lst_index = frame_idx; break;
+      case OVERLAY_UPDATE:
+        gld_index = frame_idx;
+        alt_index = stack_pop(arf_index_stack, arf_stack_size);
+        --arf_stack_size;
+        break;
+      case USE_BUF_FRAME:
+        lst_index = alt_index;
+        alt_index = stack_pop(arf_index_stack, arf_stack_size);
+        --arf_stack_size;
+        break;
+      default: break;
+    }
+
+    ++*tpl_group_frames;
+
+    // The length of the group of pictures is baseline_gf_interval, plus the
+    // beginning golden frame from the last GOP, plus the last overlay frame
+    // in the same GOP.
+    if (frame_idx == gf_group->gf_group_size) break;
+  }
+
+  alt_index = -1;
+  ++frame_idx;
+  ++frame_gop_offset;
+
+  // Extend two frames outside the current gf group.
+  for (; frame_idx < MAX_LAG_BUFFERS && extend_frame_count < 2; ++frame_idx) {
+    struct lookahead_entry *buf =
+        vp9_lookahead_peek(cpi->lookahead, frame_gop_offset - 1);
+
+    if (buf == NULL) break;
+
+    cpi->tpl_stats[frame_idx].base_qindex = pframe_qindex;
+
+    gf_picture[frame_idx].frame = &buf->img;
+    gf_picture[frame_idx].ref_frame[0] = gld_index;
+    gf_picture[frame_idx].ref_frame[1] = lst_index;
+    gf_picture[frame_idx].ref_frame[2] = alt_index;
+    gf_picture[frame_idx].update_type = LF_UPDATE;
+    lst_index = frame_idx;
+    ++*tpl_group_frames;
+    ++extend_frame_count;
+    ++frame_gop_offset;
+  }
+}
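+The `ARF_UPDATE` / `OVERLAY_UPDATE` cases in `init_gop_frames` use a small LIFO to track nested ARF indices, with the caller maintaining the size (`arf_stack_size` is incremented after each push and decremented after each pop). A minimal sketch of that discipline; these helper names are illustrative, and libvpx's own `stack_init`/`stack_push`/`stack_pop` may differ in detail:
+
+```c
+/* Minimal caller-sized LIFO for tracking nested ARF indices.
+ * -1 marks an empty slot ("no ARF"), matching the -1 sentinels used
+ * for ref_frame indices in init_gop_frames. */
+static void arf_stack_init(int *stack, int size) {
+  int i;
+  for (i = 0; i < size; ++i) stack[i] = -1;
+}
+
+static void arf_stack_push(int *stack, int item, int size) {
+  stack[size] = item; /* place on top; caller then increments size */
+}
+
+static int arf_stack_pop(int *stack, int size) {
+  const int item = stack[size - 1]; /* top of stack */
+  stack[size - 1] = -1;             /* caller then decrements size */
+  return item;
+}
+```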
+
+static void init_tpl_stats(VP9_COMP *cpi) {
+  int frame_idx;
+  for (frame_idx = 0; frame_idx < MAX_ARF_GOP_SIZE; ++frame_idx) {
+    TplDepFrame *tpl_frame = &cpi->tpl_stats[frame_idx];
+    memset(tpl_frame->tpl_stats_ptr, 0,
+           tpl_frame->height * tpl_frame->width *
+               sizeof(*tpl_frame->tpl_stats_ptr));
+    tpl_frame->is_valid = 0;
+  }
+}
+
+#if CONFIG_NON_GREEDY_MV
+static uint32_t full_pixel_motion_search(VP9_COMP *cpi, ThreadData *td,
+                                         MotionField *motion_field,
+                                         int frame_idx, uint8_t *cur_frame_buf,
+                                         uint8_t *ref_frame_buf, int stride,
+                                         BLOCK_SIZE bsize, int mi_row,
+                                         int mi_col, MV *mv) {
+  MACROBLOCK *const x = &td->mb;
+  MACROBLOCKD *const xd = &x->e_mbd;
+  MV_SPEED_FEATURES *const mv_sf = &cpi->sf.mv;
+  int step_param;
+  uint32_t bestsme = UINT_MAX;
+  const MvLimits tmp_mv_limits = x->mv_limits;
+  // lambda is used to adjust the importance of motion vector consistency.
+  // TODO(angiebird): Figure out lambda's proper value.
+  const int lambda = cpi->tpl_stats[frame_idx].lambda;
+  int_mv nb_full_mvs[NB_MVS_NUM];
+  int nb_full_mv_num;
+
+  MV best_ref_mv1 = { 0, 0 };
+  MV best_ref_mv1_full; /* full-pixel value of best_ref_mv1 */
+
+  best_ref_mv1_full.col = best_ref_mv1.col >> 3;
+  best_ref_mv1_full.row = best_ref_mv1.row >> 3;
+
+  // Setup frame pointers
+  x->plane[0].src.buf = cur_frame_buf;
+  x->plane[0].src.stride = stride;
+  xd->plane[0].pre[0].buf = ref_frame_buf;
+  xd->plane[0].pre[0].stride = stride;
+
+  step_param = mv_sf->reduce_first_step_size;
+  step_param = VPXMIN(step_param, MAX_MVSEARCH_STEPS - 2);
+
+  vp9_set_mv_search_range(&x->mv_limits, &best_ref_mv1);
+
+  nb_full_mv_num =
+      vp9_prepare_nb_full_mvs(motion_field, mi_row, mi_col, nb_full_mvs);
+  vp9_full_pixel_diamond_new(cpi, x, bsize, &best_ref_mv1_full, step_param,
+                             lambda, 1, nb_full_mvs, nb_full_mv_num, mv);
+
+  /* restore UMV window */
+  x->mv_limits = tmp_mv_limits;
+
+  return bestsme;
+}
+
+static uint32_t sub_pixel_motion_search(VP9_COMP *cpi, ThreadData *td,
+                                        uint8_t *cur_frame_buf,
+                                        uint8_t *ref_frame_buf, int stride,
+                                        BLOCK_SIZE bsize, MV *mv) {
+  MACROBLOCK *const x = &td->mb;
+  MACROBLOCKD *const xd = &x->e_mbd;
+  MV_SPEED_FEATURES *const mv_sf = &cpi->sf.mv;
+  uint32_t bestsme = UINT_MAX;
+  uint32_t distortion;
+  uint32_t sse;
+  int cost_list[5];
+
+  MV best_ref_mv1 = { 0, 0 };
+
+  // Setup frame pointers
+  x->plane[0].src.buf = cur_frame_buf;
+  x->plane[0].src.stride = stride;
+  xd->plane[0].pre[0].buf = ref_frame_buf;
+  xd->plane[0].pre[0].stride = stride;
+
+  // TODO(yunqing): may use a higher-tap interpolation filter than 2 taps.
+  // Ignore mv costing by sending a NULL pointer instead of a cost array.
+  bestsme = cpi->find_fractional_mv_step(
+      x, mv, &best_ref_mv1, cpi->common.allow_high_precision_mv, x->errorperbit,
+      &cpi->fn_ptr[bsize], 0, mv_sf->subpel_search_level,
+      cond_cost_list(cpi, cost_list), NULL, NULL, &distortion, &sse, NULL, 0, 0,
+      USE_2_TAPS);
+
+  return bestsme;
+}
+
+#else  // CONFIG_NON_GREEDY_MV
+static uint32_t motion_compensated_prediction(VP9_COMP *cpi, ThreadData *td,
+                                              uint8_t *cur_frame_buf,
+                                              uint8_t *ref_frame_buf,
+                                              int stride, BLOCK_SIZE bsize,
+                                              MV *mv) {
+  MACROBLOCK *const x = &td->mb;
+  MACROBLOCKD *const xd = &x->e_mbd;
+  MV_SPEED_FEATURES *const mv_sf = &cpi->sf.mv;
+  const SEARCH_METHODS search_method = NSTEP;
+  int step_param;
+  int sadpb = x->sadperbit16;
+  uint32_t bestsme = UINT_MAX;
+  uint32_t distortion;
+  uint32_t sse;
+  int cost_list[5];
+  const MvLimits tmp_mv_limits = x->mv_limits;
+
+  MV best_ref_mv1 = { 0, 0 };
+  MV best_ref_mv1_full; /* full-pixel value of best_ref_mv1 */
+
+  best_ref_mv1_full.col = best_ref_mv1.col >> 3;
+  best_ref_mv1_full.row = best_ref_mv1.row >> 3;
+
+  // Setup frame pointers
+  x->plane[0].src.buf = cur_frame_buf;
+  x->plane[0].src.stride = stride;
+  xd->plane[0].pre[0].buf = ref_frame_buf;
+  xd->plane[0].pre[0].stride = stride;
+
+  step_param = mv_sf->reduce_first_step_size;
+  step_param = VPXMIN(step_param, MAX_MVSEARCH_STEPS - 2);
+
+  vp9_set_mv_search_range(&x->mv_limits, &best_ref_mv1);
+
+  vp9_full_pixel_search(cpi, x, bsize, &best_ref_mv1_full, step_param,
+                        search_method, sadpb, cond_cost_list(cpi, cost_list),
+                        &best_ref_mv1, mv, 0, 0);
+
+  /* restore UMV window */
+  x->mv_limits = tmp_mv_limits;
+
+  // TODO(yunqing): may use a higher-tap interpolation filter than 2 taps.
+  // Ignore mv costing by sending a NULL pointer instead of a cost array.
+  bestsme = cpi->find_fractional_mv_step(
+      x, mv, &best_ref_mv1, cpi->common.allow_high_precision_mv, x->errorperbit,
+      &cpi->fn_ptr[bsize], 0, mv_sf->subpel_search_level,
+      cond_cost_list(cpi, cost_list), NULL, NULL, &distortion, &sse, NULL, 0, 0,
+      USE_2_TAPS);
+
+  return bestsme;
+}
+#endif
+
+static int get_overlap_area(int grid_pos_row, int grid_pos_col, int ref_pos_row,
+                            int ref_pos_col, int block, BLOCK_SIZE bsize) {
+  int width = 0, height = 0;
+  int bw = 4 << b_width_log2_lookup[bsize];
+  int bh = 4 << b_height_log2_lookup[bsize];
+
+  switch (block) {
+    case 0:
+      width = grid_pos_col + bw - ref_pos_col;
+      height = grid_pos_row + bh - ref_pos_row;
+      break;
+    case 1:
+      width = ref_pos_col + bw - grid_pos_col;
+      height = grid_pos_row + bh - ref_pos_row;
+      break;
+    case 2:
+      width = grid_pos_col + bw - ref_pos_col;
+      height = ref_pos_row + bh - grid_pos_row;
+      break;
+    case 3:
+      width = ref_pos_col + bw - grid_pos_col;
+      height = ref_pos_row + bh - grid_pos_row;
+      break;
+    default: assert(0);
+  }
+
+  return width * height;
+}
+
+static int round_floor(int ref_pos, int bsize_pix) {
+  int round;
+  if (ref_pos < 0)
+    round = -(1 + (-ref_pos - 1) / bsize_pix);
+  else
+    round = ref_pos / bsize_pix;
+
+  return round;
+}
+
+static void tpl_model_store(TplDepStats *tpl_stats, int mi_row, int mi_col,
+                            BLOCK_SIZE bsize, int stride) {
+  const int mi_height = num_8x8_blocks_high_lookup[bsize];
+  const int mi_width = num_8x8_blocks_wide_lookup[bsize];
+  const TplDepStats *src_stats = &tpl_stats[mi_row * stride + mi_col];
+  int idx, idy;
+
+  for (idy = 0; idy < mi_height; ++idy) {
+    for (idx = 0; idx < mi_width; ++idx) {
+      TplDepStats *tpl_ptr = &tpl_stats[(mi_row + idy) * stride + mi_col + idx];
+      const int64_t mc_flow = tpl_ptr->mc_flow;
+      const int64_t mc_ref_cost = tpl_ptr->mc_ref_cost;
+      *tpl_ptr = *src_stats;
+      tpl_ptr->mc_flow = mc_flow;
+      tpl_ptr->mc_ref_cost = mc_ref_cost;
+      tpl_ptr->mc_dep_cost = tpl_ptr->intra_cost + tpl_ptr->mc_flow;
+    }
+  }
+}
+
+static void tpl_model_update_b(TplDepFrame *tpl_frame, TplDepStats *tpl_stats,
+                               int mi_row, int mi_col, const BLOCK_SIZE bsize) {
+  TplDepFrame *ref_tpl_frame = &tpl_frame[tpl_stats->ref_frame_index];
+  TplDepStats *ref_stats = ref_tpl_frame->tpl_stats_ptr;
+  MV mv = tpl_stats->mv.as_mv;
+  int mv_row = mv.row >> 3;
+  int mv_col = mv.col >> 3;
+
+  int ref_pos_row = mi_row * MI_SIZE + mv_row;
+  int ref_pos_col = mi_col * MI_SIZE + mv_col;
+
+  const int bw = 4 << b_width_log2_lookup[bsize];
+  const int bh = 4 << b_height_log2_lookup[bsize];
+  const int mi_height = num_8x8_blocks_high_lookup[bsize];
+  const int mi_width = num_8x8_blocks_wide_lookup[bsize];
+  const int pix_num = bw * bh;
+
+  // top-left on grid block location in pixel
+  int grid_pos_row_base = round_floor(ref_pos_row, bh) * bh;
+  int grid_pos_col_base = round_floor(ref_pos_col, bw) * bw;
+  int block;
+
+  for (block = 0; block < 4; ++block) {
+    int grid_pos_row = grid_pos_row_base + bh * (block >> 1);
+    int grid_pos_col = grid_pos_col_base + bw * (block & 0x01);
+
+    if (grid_pos_row >= 0 && grid_pos_row < ref_tpl_frame->mi_rows * MI_SIZE &&
+        grid_pos_col >= 0 && grid_pos_col < ref_tpl_frame->mi_cols * MI_SIZE) {
+      int overlap_area = get_overlap_area(
+          grid_pos_row, grid_pos_col, ref_pos_row, ref_pos_col, block, bsize);
+      int ref_mi_row = round_floor(grid_pos_row, bh) * mi_height;
+      int ref_mi_col = round_floor(grid_pos_col, bw) * mi_width;
+
+      int64_t mc_flow = tpl_stats->mc_dep_cost -
+                        (tpl_stats->mc_dep_cost * tpl_stats->inter_cost) /
+                            tpl_stats->intra_cost;
+
+      int idx, idy;
+
+      for (idy = 0; idy < mi_height; ++idy) {
+        for (idx = 0; idx < mi_width; ++idx) {
+          TplDepStats *des_stats =
+              &ref_stats[(ref_mi_row + idy) * ref_tpl_frame->stride +
+                         (ref_mi_col + idx)];
+
+          des_stats->mc_flow += (mc_flow * overlap_area) / pix_num;
+          des_stats->mc_ref_cost +=
+              ((tpl_stats->intra_cost - tpl_stats->inter_cost) * overlap_area) /
+              pix_num;
+          assert(overlap_area >= 0);
+        }
+      }
+    }
+  }
+}
+
+static void tpl_model_update(TplDepFrame *tpl_frame, TplDepStats *tpl_stats,
+                             int mi_row, int mi_col, const BLOCK_SIZE bsize) {
+  int idx, idy;
+  const int mi_height = num_8x8_blocks_high_lookup[bsize];
+  const int mi_width = num_8x8_blocks_wide_lookup[bsize];
+
+  for (idy = 0; idy < mi_height; ++idy) {
+    for (idx = 0; idx < mi_width; ++idx) {
+      TplDepStats *tpl_ptr =
+          &tpl_stats[(mi_row + idy) * tpl_frame->stride + (mi_col + idx)];
+      tpl_model_update_b(tpl_frame, tpl_ptr, mi_row + idy, mi_col + idx,
+                         BLOCK_8X8);
+    }
+  }
+}
+
+static void get_quantize_error(MACROBLOCK *x, int plane, tran_low_t *coeff,
+                               tran_low_t *qcoeff, tran_low_t *dqcoeff,
+                               TX_SIZE tx_size, int64_t *recon_error,
+                               int64_t *sse) {
+  MACROBLOCKD *const xd = &x->e_mbd;
+  const struct macroblock_plane *const p = &x->plane[plane];
+  const struct macroblockd_plane *const pd = &xd->plane[plane];
+  const scan_order *const scan_order = &vp9_default_scan_orders[tx_size];
+  uint16_t eob;
+  int pix_num = 1 << num_pels_log2_lookup[txsize_to_bsize[tx_size]];
+  const int shift = tx_size == TX_32X32 ? 0 : 2;
+
+#if CONFIG_VP9_HIGHBITDEPTH
+  if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
+    vp9_highbd_quantize_fp_32x32(coeff, pix_num, x->skip_block, p->round_fp,
+                                 p->quant_fp, qcoeff, dqcoeff, pd->dequant,
+                                 &eob, scan_order->scan, scan_order->iscan);
+  } else {
+    vp9_quantize_fp_32x32(coeff, pix_num, x->skip_block, p->round_fp,
+                          p->quant_fp, qcoeff, dqcoeff, pd->dequant, &eob,
+                          scan_order->scan, scan_order->iscan);
+  }
+#else
+  vp9_quantize_fp_32x32(coeff, pix_num, x->skip_block, p->round_fp, p->quant_fp,
+                        qcoeff, dqcoeff, pd->dequant, &eob, scan_order->scan,
+                        scan_order->iscan);
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
+  *recon_error = vp9_block_error(coeff, dqcoeff, pix_num, sse) >> shift;
+  *recon_error = VPXMAX(*recon_error, 1);
+
+  *sse = (*sse) >> shift;
+  *sse = VPXMAX(*sse, 1);
+}
+
+#if CONFIG_VP9_HIGHBITDEPTH
+void highbd_wht_fwd_txfm(int16_t *src_diff, int bw, tran_low_t *coeff,
+                         TX_SIZE tx_size) {
+  // TODO(sdeng): Implement SIMD based high bit-depth Hadamard transforms.
+  switch (tx_size) {
+    case TX_8X8: vpx_highbd_hadamard_8x8(src_diff, bw, coeff); break;
+    case TX_16X16: vpx_highbd_hadamard_16x16(src_diff, bw, coeff); break;
+    case TX_32X32: vpx_highbd_hadamard_32x32(src_diff, bw, coeff); break;
+    default: assert(0);
+  }
+}
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
+void wht_fwd_txfm(int16_t *src_diff, int bw, tran_low_t *coeff,
+                  TX_SIZE tx_size) {
+  switch (tx_size) {
+    case TX_8X8: vpx_hadamard_8x8(src_diff, bw, coeff); break;
+    case TX_16X16: vpx_hadamard_16x16(src_diff, bw, coeff); break;
+    case TX_32X32: vpx_hadamard_32x32(src_diff, bw, coeff); break;
+    default: assert(0);
+  }
+}
+
+static void set_mv_limits(const VP9_COMMON *cm, MACROBLOCK *x, int mi_row,
+                          int mi_col) {
+  x->mv_limits.row_min = -((mi_row * MI_SIZE) + (17 - 2 * VP9_INTERP_EXTEND));
+  x->mv_limits.row_max =
+      (cm->mi_rows - 1 - mi_row) * MI_SIZE + (17 - 2 * VP9_INTERP_EXTEND);
+  x->mv_limits.col_min = -((mi_col * MI_SIZE) + (17 - 2 * VP9_INTERP_EXTEND));
+  x->mv_limits.col_max =
+      ((cm->mi_cols - 1 - mi_col) * MI_SIZE) + (17 - 2 * VP9_INTERP_EXTEND);
+}
+
+static void mode_estimation(VP9_COMP *cpi, MACROBLOCK *x, MACROBLOCKD *xd,
+                            struct scale_factors *sf, GF_PICTURE *gf_picture,
+                            int frame_idx, TplDepFrame *tpl_frame,
+                            int16_t *src_diff, tran_low_t *coeff,
+                            tran_low_t *qcoeff, tran_low_t *dqcoeff, int mi_row,
+                            int mi_col, BLOCK_SIZE bsize, TX_SIZE tx_size,
+                            YV12_BUFFER_CONFIG *ref_frame[], uint8_t *predictor,
+                            int64_t *recon_error, int64_t *sse) {
+  VP9_COMMON *cm = &cpi->common;
+  ThreadData *td = &cpi->td;
+
+  const int bw = 4 << b_width_log2_lookup[bsize];
+  const int bh = 4 << b_height_log2_lookup[bsize];
+  const int pix_num = bw * bh;
+  int best_rf_idx = -1;
+  int_mv best_mv;
+  int64_t best_inter_cost = INT64_MAX;
+  int64_t inter_cost;
+  int rf_idx;
+  const InterpKernel *const kernel = vp9_filter_kernels[EIGHTTAP];
+
+  int64_t best_intra_cost = INT64_MAX;
+  int64_t intra_cost;
+  PREDICTION_MODE mode;
+  int mb_y_offset = mi_row * MI_SIZE * xd->cur_buf->y_stride + mi_col * MI_SIZE;
+  MODE_INFO mi_above, mi_left;
+  const int mi_height = num_8x8_blocks_high_lookup[bsize];
+  const int mi_width = num_8x8_blocks_wide_lookup[bsize];
+  TplDepStats *tpl_stats =
+      &tpl_frame->tpl_stats_ptr[mi_row * tpl_frame->stride + mi_col];
+
+  xd->mb_to_top_edge = -((mi_row * MI_SIZE) * 8);
+  xd->mb_to_bottom_edge = ((cm->mi_rows - 1 - mi_row) * MI_SIZE) * 8;
+  xd->mb_to_left_edge = -((mi_col * MI_SIZE) * 8);
+  xd->mb_to_right_edge = ((cm->mi_cols - 1 - mi_col) * MI_SIZE) * 8;
+  xd->above_mi = (mi_row > 0) ? &mi_above : NULL;
+  xd->left_mi = (mi_col > 0) ? &mi_left : NULL;
+
+  // Intra prediction search
+  for (mode = DC_PRED; mode <= TM_PRED; ++mode) {
+    uint8_t *src, *dst;
+    int src_stride, dst_stride;
+
+    src = xd->cur_buf->y_buffer + mb_y_offset;
+    src_stride = xd->cur_buf->y_stride;
+
+    dst = &predictor[0];
+    dst_stride = bw;
+
+    xd->mi[0]->sb_type = bsize;
+    xd->mi[0]->ref_frame[0] = INTRA_FRAME;
+
+    vp9_predict_intra_block(xd, b_width_log2_lookup[bsize], tx_size, mode, src,
+                            src_stride, dst, dst_stride, 0, 0, 0);
+
+#if CONFIG_VP9_HIGHBITDEPTH
+    if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
+      vpx_highbd_subtract_block(bh, bw, src_diff, bw, src, src_stride, dst,
+                                dst_stride, xd->bd);
+      highbd_wht_fwd_txfm(src_diff, bw, coeff, tx_size);
+      intra_cost = vpx_highbd_satd(coeff, pix_num);
+    } else {
+      vpx_subtract_block(bh, bw, src_diff, bw, src, src_stride, dst,
+                         dst_stride);
+      wht_fwd_txfm(src_diff, bw, coeff, tx_size);
+      intra_cost = vpx_satd(coeff, pix_num);
+    }
+#else
+    vpx_subtract_block(bh, bw, src_diff, bw, src, src_stride, dst, dst_stride);
+    wht_fwd_txfm(src_diff, bw, coeff, tx_size);
+    intra_cost = vpx_satd(coeff, pix_num);
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
+    if (intra_cost < best_intra_cost) best_intra_cost = intra_cost;
+  }
+
+  // Motion compensated prediction
+  best_mv.as_int = 0;
+
+  set_mv_limits(cm, x, mi_row, mi_col);
+
+  for (rf_idx = 0; rf_idx < MAX_INTER_REF_FRAMES; ++rf_idx) {
+    int_mv mv;
+#if CONFIG_NON_GREEDY_MV
+    MotionField *motion_field;
+#endif
+    if (ref_frame[rf_idx] == NULL) continue;
+
+#if CONFIG_NON_GREEDY_MV
+    (void)td;
+    motion_field = vp9_motion_field_info_get_motion_field(
+        &cpi->motion_field_info, frame_idx, rf_idx, bsize);
+    mv = vp9_motion_field_mi_get_mv(motion_field, mi_row, mi_col);
+#else
+    motion_compensated_prediction(cpi, td, xd->cur_buf->y_buffer + mb_y_offset,
+                                  ref_frame[rf_idx]->y_buffer + mb_y_offset,
+                                  xd->cur_buf->y_stride, bsize, &mv.as_mv);
+#endif
+
+#if CONFIG_VP9_HIGHBITDEPTH
+    if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
+      vp9_highbd_build_inter_predictor(
+          CONVERT_TO_SHORTPTR(ref_frame[rf_idx]->y_buffer + mb_y_offset),
+          ref_frame[rf_idx]->y_stride, CONVERT_TO_SHORTPTR(&predictor[0]), bw,
+          &mv.as_mv, sf, bw, bh, 0, kernel, MV_PRECISION_Q3, mi_col * MI_SIZE,
+          mi_row * MI_SIZE, xd->bd);
+      vpx_highbd_subtract_block(
+          bh, bw, src_diff, bw, xd->cur_buf->y_buffer + mb_y_offset,
+          xd->cur_buf->y_stride, &predictor[0], bw, xd->bd);
+      highbd_wht_fwd_txfm(src_diff, bw, coeff, tx_size);
+      inter_cost = vpx_highbd_satd(coeff, pix_num);
+    } else {
+      vp9_build_inter_predictor(
+          ref_frame[rf_idx]->y_buffer + mb_y_offset,
+          ref_frame[rf_idx]->y_stride, &predictor[0], bw, &mv.as_mv, sf, bw, bh,
+          0, kernel, MV_PRECISION_Q3, mi_col * MI_SIZE, mi_row * MI_SIZE);
+      vpx_subtract_block(bh, bw, src_diff, bw,
+                         xd->cur_buf->y_buffer + mb_y_offset,
+                         xd->cur_buf->y_stride, &predictor[0], bw);
+      wht_fwd_txfm(src_diff, bw, coeff, tx_size);
+      inter_cost = vpx_satd(coeff, pix_num);
+    }
+#else
+    vp9_build_inter_predictor(ref_frame[rf_idx]->y_buffer + mb_y_offset,
+                              ref_frame[rf_idx]->y_stride, &predictor[0], bw,
+                              &mv.as_mv, sf, bw, bh, 0, kernel, MV_PRECISION_Q3,
+                              mi_col * MI_SIZE, mi_row * MI_SIZE);
+    vpx_subtract_block(bh, bw, src_diff, bw,
+                       xd->cur_buf->y_buffer + mb_y_offset,
+                       xd->cur_buf->y_stride, &predictor[0], bw);
+    wht_fwd_txfm(src_diff, bw, coeff, tx_size);
+    inter_cost = vpx_satd(coeff, pix_num);
+#endif
+
+    if (inter_cost < best_inter_cost) {
+      best_rf_idx = rf_idx;
+      best_inter_cost = inter_cost;
+      best_mv.as_int = mv.as_int;
+      get_quantize_error(x, 0, coeff, qcoeff, dqcoeff, tx_size, recon_error,
+                         sse);
+    }
+  }
+  best_intra_cost = VPXMAX(best_intra_cost, 1);
+  best_inter_cost = VPXMIN(best_intra_cost, best_inter_cost);
+  tpl_stats->inter_cost = VPXMAX(
+      1, (best_inter_cost << TPL_DEP_COST_SCALE_LOG2) / (mi_height * mi_width));
+  tpl_stats->intra_cost = VPXMAX(
+      1, (best_intra_cost << TPL_DEP_COST_SCALE_LOG2) / (mi_height * mi_width));
+  tpl_stats->ref_frame_index = gf_picture[frame_idx].ref_frame[best_rf_idx];
+  tpl_stats->mv.as_int = best_mv.as_int;
+}
+
+#if CONFIG_NON_GREEDY_MV
+static int get_block_src_pred_buf(MACROBLOCKD *xd, GF_PICTURE *gf_picture,
+                                  int frame_idx, int rf_idx, int mi_row,
+                                  int mi_col, struct buf_2d *src,
+                                  struct buf_2d *pre) {
+  const int mb_y_offset =
+      mi_row * MI_SIZE * xd->cur_buf->y_stride + mi_col * MI_SIZE;
+  YV12_BUFFER_CONFIG *ref_frame = NULL;
+  int ref_frame_idx = gf_picture[frame_idx].ref_frame[rf_idx];
+  if (ref_frame_idx != -1) {
+    ref_frame = gf_picture[ref_frame_idx].frame;
+    src->buf = xd->cur_buf->y_buffer + mb_y_offset;
+    src->stride = xd->cur_buf->y_stride;
+    pre->buf = ref_frame->y_buffer + mb_y_offset;
+    pre->stride = ref_frame->y_stride;
+    assert(src->stride == pre->stride);
+    return 1;
+  } else {
+    printf("invalid ref_frame_idx");
+    assert(ref_frame_idx != -1);
+    return 0;
+  }
+}
+
+#define kMvPreCheckLines 5
+#define kMvPreCheckSize 15
+
+#define MV_REF_POS_NUM 3
+POSITION mv_ref_pos[MV_REF_POS_NUM] = {
+  { -1, 0 },
+  { 0, -1 },
+  { -1, -1 },
+};
+
+static int_mv *get_select_mv(VP9_COMP *cpi, TplDepFrame *tpl_frame, int mi_row,
+                             int mi_col) {
+  return &cpi->select_mv_arr[mi_row * tpl_frame->stride + mi_col];
+}
+
+static int_mv find_ref_mv(int mv_mode, VP9_COMP *cpi, TplDepFrame *tpl_frame,
+                          BLOCK_SIZE bsize, int mi_row, int mi_col) {
+  int i;
+  const int mi_height = num_8x8_blocks_high_lookup[bsize];
+  const int mi_width = num_8x8_blocks_wide_lookup[bsize];
+  int_mv nearest_mv, near_mv, invalid_mv;
+  nearest_mv.as_int = INVALID_MV;
+  near_mv.as_int = INVALID_MV;
+  invalid_mv.as_int = INVALID_MV;
+  for (i = 0; i < MV_REF_POS_NUM; ++i) {
+    int nb_row = mi_row + mv_ref_pos[i].row * mi_height;
+    int nb_col = mi_col + mv_ref_pos[i].col * mi_width;
+    assert(mv_ref_pos[i].row <= 0);
+    assert(mv_ref_pos[i].col <= 0);
+    if (nb_row >= 0 && nb_col >= 0) {
+      if (nearest_mv.as_int == INVALID_MV) {
+        nearest_mv = *get_select_mv(cpi, tpl_frame, nb_row, nb_col);
+      } else {
+        int_mv mv = *get_select_mv(cpi, tpl_frame, nb_row, nb_col);
+        if (mv.as_int == nearest_mv.as_int) {
+          continue;
+        } else {
+          near_mv = mv;
+          break;
+        }
+      }
+    }
+  }
+  if (nearest_mv.as_int == INVALID_MV) {
+    nearest_mv.as_mv.row = 0;
+    nearest_mv.as_mv.col = 0;
+  }
+  if (near_mv.as_int == INVALID_MV) {
+    near_mv.as_mv.row = 0;
+    near_mv.as_mv.col = 0;
+  }
+  if (mv_mode == NEAREST_MV_MODE) {
+    return nearest_mv;
+  }
+  if (mv_mode == NEAR_MV_MODE) {
+    return near_mv;
+  }
+  assert(0);
+  return invalid_mv;
+}
+
+static int_mv get_mv_from_mv_mode(int mv_mode, VP9_COMP *cpi,
+                                  MotionField *motion_field,
+                                  TplDepFrame *tpl_frame, BLOCK_SIZE bsize,
+                                  int mi_row, int mi_col) {
+  int_mv mv;
+  switch (mv_mode) {
+    case ZERO_MV_MODE:
+      mv.as_mv.row = 0;
+      mv.as_mv.col = 0;
+      break;
+    case NEW_MV_MODE:
+      mv = vp9_motion_field_mi_get_mv(motion_field, mi_row, mi_col);
+      break;
+    case NEAREST_MV_MODE:
+      mv = find_ref_mv(mv_mode, cpi, tpl_frame, bsize, mi_row, mi_col);
+      break;
+    case NEAR_MV_MODE:
+      mv = find_ref_mv(mv_mode, cpi, tpl_frame, bsize, mi_row, mi_col);
+      break;
+    default:
+      mv.as_int = INVALID_MV;
+      assert(0);
+      break;
+  }
+  return mv;
+}
+
+static double get_mv_dist(int mv_mode, VP9_COMP *cpi, MACROBLOCKD *xd,
+                          GF_PICTURE *gf_picture, MotionField *motion_field,
+                          int frame_idx, TplDepFrame *tpl_frame, int rf_idx,
+                          BLOCK_SIZE bsize, int mi_row, int mi_col,
+                          int_mv *mv) {
+  uint32_t sse;
+  struct buf_2d src;
+  struct buf_2d pre;
+  MV full_mv;
+  *mv = get_mv_from_mv_mode(mv_mode, cpi, motion_field, tpl_frame, bsize,
+                            mi_row, mi_col);
+  full_mv = get_full_mv(&mv->as_mv);
+  if (get_block_src_pred_buf(xd, gf_picture, frame_idx, rf_idx, mi_row, mi_col,
+                             &src, &pre)) {
+    // TODO(angiebird): Consider subpixel when computing the sse.
+    cpi->fn_ptr[bsize].vf(src.buf, src.stride, get_buf_from_mv(&pre, &full_mv),
+                          pre.stride, &sse);
+    return (double)(sse << VP9_DIST_SCALE_LOG2);
+  } else {
+    assert(0);
+    return 0;
+  }
+}
+
+static int get_mv_mode_cost(int mv_mode) {
+  // TODO(angiebird): The probabilities are roughly inferred from
+  // default_inter_mode_probs. Check if there is a better way to set the
+  // probabilities.
+  const int zero_mv_prob = 16;
+  const int new_mv_prob = 24 * 1;
+  const int ref_mv_prob = 256 - zero_mv_prob - new_mv_prob;
+  assert(zero_mv_prob + new_mv_prob + ref_mv_prob == 256);
+  switch (mv_mode) {
+    case ZERO_MV_MODE: return vp9_prob_cost[zero_mv_prob]; break;
+    case NEW_MV_MODE: return vp9_prob_cost[new_mv_prob]; break;
+    case NEAREST_MV_MODE: return vp9_prob_cost[ref_mv_prob]; break;
+    case NEAR_MV_MODE: return vp9_prob_cost[ref_mv_prob]; break;
+    default: assert(0); return -1;
+  }
+}
+
+static INLINE double get_mv_diff_cost(MV *new_mv, MV *ref_mv) {
+  double mv_diff_cost = log2(1 + abs(new_mv->row - ref_mv->row)) +
+                        log2(1 + abs(new_mv->col - ref_mv->col));
+  mv_diff_cost *= (1 << VP9_PROB_COST_SHIFT);
+  return mv_diff_cost;
+}
+static double get_mv_cost(int mv_mode, VP9_COMP *cpi, MotionField *motion_field,
+                          TplDepFrame *tpl_frame, BLOCK_SIZE bsize, int mi_row,
+                          int mi_col) {
+  double mv_cost = get_mv_mode_cost(mv_mode);
+  if (mv_mode == NEW_MV_MODE) {
+    MV new_mv = get_mv_from_mv_mode(mv_mode, cpi, motion_field, tpl_frame,
+                                    bsize, mi_row, mi_col)
+                    .as_mv;
+    MV nearest_mv = get_mv_from_mv_mode(NEAREST_MV_MODE, cpi, motion_field,
+                                        tpl_frame, bsize, mi_row, mi_col)
+                        .as_mv;
+    MV near_mv = get_mv_from_mv_mode(NEAR_MV_MODE, cpi, motion_field, tpl_frame,
+                                     bsize, mi_row, mi_col)
+                     .as_mv;
+    double nearest_cost = get_mv_diff_cost(&new_mv, &nearest_mv);
+    double near_cost = get_mv_diff_cost(&new_mv, &near_mv);
+    mv_cost += nearest_cost < near_cost ? nearest_cost : near_cost;
+  }
+  return mv_cost;
+}
+
+static double eval_mv_mode(int mv_mode, VP9_COMP *cpi, MACROBLOCK *x,
+                           GF_PICTURE *gf_picture, MotionField *motion_field,
+                           int frame_idx, TplDepFrame *tpl_frame, int rf_idx,
+                           BLOCK_SIZE bsize, int mi_row, int mi_col,
+                           int_mv *mv) {
+  MACROBLOCKD *xd = &x->e_mbd;
+  double mv_dist =
+      get_mv_dist(mv_mode, cpi, xd, gf_picture, motion_field, frame_idx,
+                  tpl_frame, rf_idx, bsize, mi_row, mi_col, mv);
+  double mv_cost =
+      get_mv_cost(mv_mode, cpi, motion_field, tpl_frame, bsize, mi_row, mi_col);
+  double mult = 180;
+
+  return mv_cost + mult * log2f(1 + mv_dist);
+}
+
+static int find_best_ref_mv_mode(VP9_COMP *cpi, MACROBLOCK *x,
+                                 GF_PICTURE *gf_picture,
+                                 MotionField *motion_field, int frame_idx,
+                                 TplDepFrame *tpl_frame, int rf_idx,
+                                 BLOCK_SIZE bsize, int mi_row, int mi_col,
+                                 double *rd, int_mv *mv) {
+  int best_mv_mode = ZERO_MV_MODE;
+  int update = 0;
+  int mv_mode;
+  *rd = 0;
+  for (mv_mode = 0; mv_mode < MAX_MV_MODE; ++mv_mode) {
+    double this_rd;
+    int_mv this_mv;
+    if (mv_mode == NEW_MV_MODE) {
+      continue;
+    }
+    this_rd = eval_mv_mode(mv_mode, cpi, x, gf_picture, motion_field, frame_idx,
+                           tpl_frame, rf_idx, bsize, mi_row, mi_col, &this_mv);
+    if (update == 0) {
+      *rd = this_rd;
+      *mv = this_mv;
+      best_mv_mode = mv_mode;
+      update = 1;
+    } else {
+      if (this_rd < *rd) {
+        *rd = this_rd;
+        *mv = this_mv;
+        best_mv_mode = mv_mode;
+      }
+    }
+  }
+  return best_mv_mode;
+}
+
+static void predict_mv_mode(VP9_COMP *cpi, MACROBLOCK *x,
+                            GF_PICTURE *gf_picture, MotionField *motion_field,
+                            int frame_idx, TplDepFrame *tpl_frame, int rf_idx,
+                            BLOCK_SIZE bsize, int mi_row, int mi_col) {
+  const int mi_height = num_8x8_blocks_high_lookup[bsize];
+  const int mi_width = num_8x8_blocks_wide_lookup[bsize];
+  int tmp_mv_mode_arr[kMvPreCheckSize];
+  int *mv_mode_arr = tpl_frame->mv_mode_arr[rf_idx];
+  double *rd_diff_arr = tpl_frame->rd_diff_arr[rf_idx];
+  int_mv *select_mv_arr = cpi->select_mv_arr;
+  int_mv tmp_select_mv_arr[kMvPreCheckSize];
+  int stride = tpl_frame->stride;
+  double new_mv_rd = 0;
+  double no_new_mv_rd = 0;
+  double this_new_mv_rd = 0;
+  double this_no_new_mv_rd = 0;
+  int idx;
+  int tmp_idx;
+  assert(kMvPreCheckSize == (kMvPreCheckLines * (kMvPreCheckLines + 1)) >> 1);
+
+  // no new mv
+  // diagonal scan order
+  tmp_idx = 0;
+  for (idx = 0; idx < kMvPreCheckLines; ++idx) {
+    int r;
+    for (r = 0; r <= idx; ++r) {
+      int c = idx - r;
+      int nb_row = mi_row + r * mi_height;
+      int nb_col = mi_col + c * mi_width;
+      if (nb_row < tpl_frame->mi_rows && nb_col < tpl_frame->mi_cols) {
+        double this_rd;
+        int_mv *mv = &select_mv_arr[nb_row * stride + nb_col];
+        mv_mode_arr[nb_row * stride + nb_col] = find_best_ref_mv_mode(
+            cpi, x, gf_picture, motion_field, frame_idx, tpl_frame, rf_idx,
+            bsize, nb_row, nb_col, &this_rd, mv);
+        if (r == 0 && c == 0) {
+          this_no_new_mv_rd = this_rd;
+        }
+        no_new_mv_rd += this_rd;
+        tmp_mv_mode_arr[tmp_idx] = mv_mode_arr[nb_row * stride + nb_col];
+        tmp_select_mv_arr[tmp_idx] = select_mv_arr[nb_row * stride + nb_col];
+        ++tmp_idx;
+      }
+    }
+  }
+
+  // new mv
+  mv_mode_arr[mi_row * stride + mi_col] = NEW_MV_MODE;
+  this_new_mv_rd = eval_mv_mode(
+      NEW_MV_MODE, cpi, x, gf_picture, motion_field, frame_idx, tpl_frame,
+      rf_idx, bsize, mi_row, mi_col, &select_mv_arr[mi_row * stride + mi_col]);
+  new_mv_rd = this_new_mv_rd;
+  // We start from idx = 1 because idx = 0 is evaluated as NEW_MV_MODE
+  // beforehand.
+  for (idx = 1; idx < kMvPreCheckLines; ++idx) {
+    int r;
+    for (r = 0; r <= idx; ++r) {
+      int c = idx - r;
+      int nb_row = mi_row + r * mi_height;
+      int nb_col = mi_col + c * mi_width;
+      if (nb_row < tpl_frame->mi_rows && nb_col < tpl_frame->mi_cols) {
+        double this_rd;
+        int_mv *mv = &select_mv_arr[nb_row * stride + nb_col];
+        mv_mode_arr[nb_row * stride + nb_col] = find_best_ref_mv_mode(
+            cpi, x, gf_picture, motion_field, frame_idx, tpl_frame, rf_idx,
+            bsize, nb_row, nb_col, &this_rd, mv);
+        new_mv_rd += this_rd;
+      }
+    }
+  }
+
+  // update best_mv_mode
+  tmp_idx = 0;
+  if (no_new_mv_rd < new_mv_rd) {
+    for (idx = 0; idx < kMvPreCheckLines; ++idx) {
+      int r;
+      for (r = 0; r <= idx; ++r) {
+        int c = idx - r;
+        int nb_row = mi_row + r * mi_height;
+        int nb_col = mi_col + c * mi_width;
+        if (nb_row < tpl_frame->mi_rows && nb_col < tpl_frame->mi_cols) {
+          mv_mode_arr[nb_row * stride + nb_col] = tmp_mv_mode_arr[tmp_idx];
+          select_mv_arr[nb_row * stride + nb_col] = tmp_select_mv_arr[tmp_idx];
+          ++tmp_idx;
+        }
+      }
+    }
+    rd_diff_arr[mi_row * stride + mi_col] = 0;
+  } else {
+    rd_diff_arr[mi_row * stride + mi_col] =
+        (no_new_mv_rd - this_no_new_mv_rd) - (new_mv_rd - this_new_mv_rd);
+  }
+}
+
+static void predict_mv_mode_arr(VP9_COMP *cpi, MACROBLOCK *x,
+                                GF_PICTURE *gf_picture,
+                                MotionField *motion_field, int frame_idx,
+                                TplDepFrame *tpl_frame, int rf_idx,
+                                BLOCK_SIZE bsize) {
+  const int mi_height = num_8x8_blocks_high_lookup[bsize];
+  const int mi_width = num_8x8_blocks_wide_lookup[bsize];
+  const int unit_rows = tpl_frame->mi_rows / mi_height;
+  const int unit_cols = tpl_frame->mi_cols / mi_width;
+  const int max_diagonal_lines = unit_rows + unit_cols - 1;
+  int idx;
+  for (idx = 0; idx < max_diagonal_lines; ++idx) {
+    int r;
+    for (r = VPXMAX(idx - unit_cols + 1, 0); r <= VPXMIN(idx, unit_rows - 1);
+         ++r) {
+      int c = idx - r;
+      int mi_row = r * mi_height;
+      int mi_col = c * mi_width;
+      assert(c >= 0 && c < unit_cols);
+      assert(mi_row >= 0 && mi_row < tpl_frame->mi_rows);
+      assert(mi_col >= 0 && mi_col < tpl_frame->mi_cols);
+      predict_mv_mode(cpi, x, gf_picture, motion_field, frame_idx, tpl_frame,
+                      rf_idx, bsize, mi_row, mi_col);
+    }
+  }
+}
+
+static void do_motion_search(VP9_COMP *cpi, ThreadData *td,
+                             MotionField *motion_field, int frame_idx,
+                             YV12_BUFFER_CONFIG *ref_frame, BLOCK_SIZE bsize,
+                             int mi_row, int mi_col) {
+  VP9_COMMON *cm = &cpi->common;
+  MACROBLOCK *x = &td->mb;
+  MACROBLOCKD *xd = &x->e_mbd;
+  const int mb_y_offset =
+      mi_row * MI_SIZE * xd->cur_buf->y_stride + mi_col * MI_SIZE;
+  assert(ref_frame != NULL);
+  set_mv_limits(cm, x, mi_row, mi_col);
+  {
+    int_mv mv = vp9_motion_field_mi_get_mv(motion_field, mi_row, mi_col);
+    uint8_t *cur_frame_buf = xd->cur_buf->y_buffer + mb_y_offset;
+    uint8_t *ref_frame_buf = ref_frame->y_buffer + mb_y_offset;
+    const int stride = xd->cur_buf->y_stride;
+    full_pixel_motion_search(cpi, td, motion_field, frame_idx, cur_frame_buf,
+                             ref_frame_buf, stride, bsize, mi_row, mi_col,
+                             &mv.as_mv);
+    sub_pixel_motion_search(cpi, td, cur_frame_buf, ref_frame_buf, stride,
+                            bsize, &mv.as_mv);
+    vp9_motion_field_mi_set_mv(motion_field, mi_row, mi_col, mv);
+  }
+}
+
+static void build_motion_field(
+    VP9_COMP *cpi, int frame_idx,
+    YV12_BUFFER_CONFIG *ref_frame[MAX_INTER_REF_FRAMES], BLOCK_SIZE bsize) {
+  VP9_COMMON *cm = &cpi->common;
+  ThreadData *td = &cpi->td;
+  TplDepFrame *tpl_frame = &cpi->tpl_stats[frame_idx];
+  const int mi_height = num_8x8_blocks_high_lookup[bsize];
+  const int mi_width = num_8x8_blocks_wide_lookup[bsize];
+  const int pw = num_4x4_blocks_wide_lookup[bsize] << 2;
+  const int ph = num_4x4_blocks_high_lookup[bsize] << 2;
+  int mi_row, mi_col;
+  int rf_idx;
+
+  tpl_frame->lambda = (pw * ph) >> 2;
+  assert(pw * ph == tpl_frame->lambda << 2);
+
+  for (rf_idx = 0; rf_idx < MAX_INTER_REF_FRAMES; ++rf_idx) {
+    MotionField *motion_field = vp9_motion_field_info_get_motion_field(
+        &cpi->motion_field_info, frame_idx, rf_idx, bsize);
+    if (ref_frame[rf_idx] == NULL) {
+      continue;
+    }
+    vp9_motion_field_reset_mvs(motion_field);
+    for (mi_row = 0; mi_row < cm->mi_rows; mi_row += mi_height) {
+      for (mi_col = 0; mi_col < cm->mi_cols; mi_col += mi_width) {
+        do_motion_search(cpi, td, motion_field, frame_idx, ref_frame[rf_idx],
+                         bsize, mi_row, mi_col);
+      }
+    }
+  }
+}
+#endif  // CONFIG_NON_GREEDY_MV
+
+static void mc_flow_dispenser(VP9_COMP *cpi, GF_PICTURE *gf_picture,
+                              int frame_idx, BLOCK_SIZE bsize) {
+  TplDepFrame *tpl_frame = &cpi->tpl_stats[frame_idx];
+  YV12_BUFFER_CONFIG *this_frame = gf_picture[frame_idx].frame;
+  YV12_BUFFER_CONFIG *ref_frame[MAX_INTER_REF_FRAMES] = { NULL, NULL, NULL };
+
+  VP9_COMMON *cm = &cpi->common;
+  struct scale_factors sf;
+  int rdmult, idx;
+  ThreadData *td = &cpi->td;
+  MACROBLOCK *x = &td->mb;
+  MACROBLOCKD *xd = &x->e_mbd;
+  int mi_row, mi_col;
+
+#if CONFIG_VP9_HIGHBITDEPTH
+  DECLARE_ALIGNED(16, uint16_t, predictor16[32 * 32 * 3]);
+  DECLARE_ALIGNED(16, uint8_t, predictor8[32 * 32 * 3]);
+  uint8_t *predictor;
+#else
+  DECLARE_ALIGNED(16, uint8_t, predictor[32 * 32 * 3]);
+#endif
+  DECLARE_ALIGNED(16, int16_t, src_diff[32 * 32]);
+  DECLARE_ALIGNED(16, tran_low_t, coeff[32 * 32]);
+  DECLARE_ALIGNED(16, tran_low_t, qcoeff[32 * 32]);
+  DECLARE_ALIGNED(16, tran_low_t, dqcoeff[32 * 32]);
+
+  const TX_SIZE tx_size = max_txsize_lookup[bsize];
+  const int mi_height = num_8x8_blocks_high_lookup[bsize];
+  const int mi_width = num_8x8_blocks_wide_lookup[bsize];
+  int64_t recon_error, sse;
+#if CONFIG_NON_GREEDY_MV
+  int square_block_idx;
+  int rf_idx;
+#endif
+
+  // Setup scaling factor
+#if CONFIG_VP9_HIGHBITDEPTH
+  vp9_setup_scale_factors_for_frame(
+      &sf, this_frame->y_crop_width, this_frame->y_crop_height,
+      this_frame->y_crop_width, this_frame->y_crop_height,
+      cpi->common.use_highbitdepth);
+
+  if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH)
+    predictor = CONVERT_TO_BYTEPTR(predictor16);
+  else
+    predictor = predictor8;
+#else
+  vp9_setup_scale_factors_for_frame(
+      &sf, this_frame->y_crop_width, this_frame->y_crop_height,
+      this_frame->y_crop_width, this_frame->y_crop_height);
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
+  // Prepare reference frame pointers. If any reference frame slot is
+  // unavailable, the pointer will be set to NULL.
+  for (idx = 0; idx < MAX_INTER_REF_FRAMES; ++idx) {
+    int rf_idx = gf_picture[frame_idx].ref_frame[idx];
+    if (rf_idx != -1) ref_frame[idx] = gf_picture[rf_idx].frame;
+  }
+
+  xd->mi = cm->mi_grid_visible;
+  xd->mi[0] = cm->mi;
+  xd->cur_buf = this_frame;
+
+  // Get rd multiplier set up.
+  rdmult = vp9_compute_rd_mult_based_on_qindex(cpi, tpl_frame->base_qindex);
+  set_error_per_bit(&cpi->td.mb, rdmult);
+  vp9_initialize_me_consts(cpi, &cpi->td.mb, tpl_frame->base_qindex);
+
+  tpl_frame->is_valid = 1;
+
+  cm->base_qindex = tpl_frame->base_qindex;
+  vp9_frame_init_quantizer(cpi);
+
+#if CONFIG_NON_GREEDY_MV
+  for (square_block_idx = 0; square_block_idx < SQUARE_BLOCK_SIZES;
+       ++square_block_idx) {
+    BLOCK_SIZE square_bsize = square_block_idx_to_bsize(square_block_idx);
+    build_motion_field(cpi, frame_idx, ref_frame, square_bsize);
+  }
+  for (rf_idx = 0; rf_idx < MAX_INTER_REF_FRAMES; ++rf_idx) {
+    int ref_frame_idx = gf_picture[frame_idx].ref_frame[rf_idx];
+    if (ref_frame_idx != -1) {
+      MotionField *motion_field = vp9_motion_field_info_get_motion_field(
+          &cpi->motion_field_info, frame_idx, rf_idx, bsize);
+      predict_mv_mode_arr(cpi, x, gf_picture, motion_field, frame_idx,
+                          tpl_frame, rf_idx, bsize);
+    }
+  }
+#endif
+
+  for (mi_row = 0; mi_row < cm->mi_rows; mi_row += mi_height) {
+    for (mi_col = 0; mi_col < cm->mi_cols; mi_col += mi_width) {
+      mode_estimation(cpi, x, xd, &sf, gf_picture, frame_idx, tpl_frame,
+                      src_diff, coeff, qcoeff, dqcoeff, mi_row, mi_col, bsize,
+                      tx_size, ref_frame, predictor, &recon_error, &sse);
+      // Motion flow dependency dispenser.
+      tpl_model_store(tpl_frame->tpl_stats_ptr, mi_row, mi_col, bsize,
+                      tpl_frame->stride);
+
+      tpl_model_update(cpi->tpl_stats, tpl_frame->tpl_stats_ptr, mi_row, mi_col,
+                       bsize);
+    }
+  }
+}
+
+#if CONFIG_NON_GREEDY_MV
+#define DUMP_TPL_STATS 0
+#if DUMP_TPL_STATS
+static void dump_buf(uint8_t *buf, int stride, int row, int col, int h, int w) {
+  int i, j;
+  printf("%d %d\n", h, w);
+  for (i = 0; i < h; ++i) {
+    for (j = 0; j < w; ++j) {
+      printf("%d ", buf[(row + i) * stride + col + j]);
+    }
+  }
+  printf("\n");
+}
+
+static void dump_frame_buf(const YV12_BUFFER_CONFIG *frame_buf) {
+  dump_buf(frame_buf->y_buffer, frame_buf->y_stride, 0, 0, frame_buf->y_height,
+           frame_buf->y_width);
+  dump_buf(frame_buf->u_buffer, frame_buf->uv_stride, 0, 0,
+           frame_buf->uv_height, frame_buf->uv_width);
+  dump_buf(frame_buf->v_buffer, frame_buf->uv_stride, 0, 0,
+           frame_buf->uv_height, frame_buf->uv_width);
+}
+
+static void dump_tpl_stats(const VP9_COMP *cpi, int tpl_group_frames,
+                           const GF_GROUP *gf_group,
+                           const GF_PICTURE *gf_picture, BLOCK_SIZE bsize) {
+  int frame_idx;
+  const VP9_COMMON *cm = &cpi->common;
+  int rf_idx;
+  for (frame_idx = 1; frame_idx < tpl_group_frames; ++frame_idx) {
+    for (rf_idx = 0; rf_idx < MAX_INTER_REF_FRAMES; ++rf_idx) {
+      const TplDepFrame *tpl_frame = &cpi->tpl_stats[frame_idx];
+      int mi_row, mi_col;
+      int ref_frame_idx;
+      const int mi_height = num_8x8_blocks_high_lookup[bsize];
+      const int mi_width = num_8x8_blocks_wide_lookup[bsize];
+      ref_frame_idx = gf_picture[frame_idx].ref_frame[rf_idx];
+      if (ref_frame_idx != -1) {
+        YV12_BUFFER_CONFIG *ref_frame_buf = gf_picture[ref_frame_idx].frame;
+        const int gf_frame_offset = gf_group->frame_gop_index[frame_idx];
+        const int ref_gf_frame_offset =
+            gf_group->frame_gop_index[ref_frame_idx];
+        printf("=\n");
+        printf(
+            "frame_idx %d mi_rows %d mi_cols %d bsize %d ref_frame_idx %d "
+            "rf_idx %d gf_frame_offset %d ref_gf_frame_offset %d\n",
+            frame_idx, cm->mi_rows, cm->mi_cols, mi_width * MI_SIZE,
+            ref_frame_idx, rf_idx, gf_frame_offset, ref_gf_frame_offset);
+        for (mi_row = 0; mi_row < cm->mi_rows; ++mi_row) {
+          for (mi_col = 0; mi_col < cm->mi_cols; ++mi_col) {
+            if ((mi_row % mi_height) == 0 && (mi_col % mi_width) == 0) {
+              int_mv mv = vp9_motion_field_info_get_mv(&cpi->motion_field_info,
+                                                       frame_idx, rf_idx, bsize,
+                                                       mi_row, mi_col);
+              printf("%d %d %d %d\n", mi_row, mi_col, mv.as_mv.row,
+                     mv.as_mv.col);
+            }
+          }
+        }
+        for (mi_row = 0; mi_row < cm->mi_rows; ++mi_row) {
+          for (mi_col = 0; mi_col < cm->mi_cols; ++mi_col) {
+            if ((mi_row % mi_height) == 0 && (mi_col % mi_width) == 0) {
+              const TplDepStats *tpl_ptr =
+                  &tpl_frame
+                       ->tpl_stats_ptr[mi_row * tpl_frame->stride + mi_col];
+              printf("%f ", tpl_ptr->feature_score);
+            }
+          }
+        }
+        printf("\n");
+
+        for (mi_row = 0; mi_row < cm->mi_rows; mi_row += mi_height) {
+          for (mi_col = 0; mi_col < cm->mi_cols; mi_col += mi_width) {
+            const int mv_mode =
+                tpl_frame
+                    ->mv_mode_arr[rf_idx][mi_row * tpl_frame->stride + mi_col];
+            printf("%d ", mv_mode);
+          }
+        }
+        printf("\n");
+
+        dump_frame_buf(gf_picture[frame_idx].frame);
+        dump_frame_buf(ref_frame_buf);
+      }
+    }
+  }
+}
+#endif  // DUMP_TPL_STATS
+#endif  // CONFIG_NON_GREEDY_MV
+
+static void init_tpl_buffer(VP9_COMP *cpi) {
+  VP9_COMMON *cm = &cpi->common;
+  int frame;
+
+  const int mi_cols = mi_cols_aligned_to_sb(cm->mi_cols);
+  const int mi_rows = mi_cols_aligned_to_sb(cm->mi_rows);
+#if CONFIG_NON_GREEDY_MV
+  int rf_idx;
+
+  vpx_free(cpi->select_mv_arr);
+  CHECK_MEM_ERROR(
+      cm, cpi->select_mv_arr,
+      vpx_calloc(mi_rows * mi_cols * 4, sizeof(*cpi->select_mv_arr)));
+#endif
+
+  // TODO(jingning): Reduce the actual memory use for tpl model build up.
+  for (frame = 0; frame < MAX_ARF_GOP_SIZE; ++frame) {
+    if (cpi->tpl_stats[frame].width >= mi_cols &&
+        cpi->tpl_stats[frame].height >= mi_rows &&
+        cpi->tpl_stats[frame].tpl_stats_ptr)
+      continue;
+
+#if CONFIG_NON_GREEDY_MV
+    for (rf_idx = 0; rf_idx < MAX_INTER_REF_FRAMES; ++rf_idx) {
+      vpx_free(cpi->tpl_stats[frame].mv_mode_arr[rf_idx]);
+      CHECK_MEM_ERROR(
+          cm, cpi->tpl_stats[frame].mv_mode_arr[rf_idx],
+          vpx_calloc(mi_rows * mi_cols * 4,
+                     sizeof(*cpi->tpl_stats[frame].mv_mode_arr[rf_idx])));
+      vpx_free(cpi->tpl_stats[frame].rd_diff_arr[rf_idx]);
+      CHECK_MEM_ERROR(
+          cm, cpi->tpl_stats[frame].rd_diff_arr[rf_idx],
+          vpx_calloc(mi_rows * mi_cols * 4,
+                     sizeof(*cpi->tpl_stats[frame].rd_diff_arr[rf_idx])));
+    }
+#endif
+    vpx_free(cpi->tpl_stats[frame].tpl_stats_ptr);
+    CHECK_MEM_ERROR(cm, cpi->tpl_stats[frame].tpl_stats_ptr,
+                    vpx_calloc(mi_rows * mi_cols,
+                               sizeof(*cpi->tpl_stats[frame].tpl_stats_ptr)));
+    cpi->tpl_stats[frame].is_valid = 0;
+    cpi->tpl_stats[frame].width = mi_cols;
+    cpi->tpl_stats[frame].height = mi_rows;
+    cpi->tpl_stats[frame].stride = mi_cols;
+    cpi->tpl_stats[frame].mi_rows = cm->mi_rows;
+    cpi->tpl_stats[frame].mi_cols = cm->mi_cols;
+  }
+
+  for (frame = 0; frame < REF_FRAMES; ++frame) {
+    cpi->enc_frame_buf[frame].mem_valid = 0;
+    cpi->enc_frame_buf[frame].released = 1;
+  }
+}
+
+static void free_tpl_buffer(VP9_COMP *cpi) {
+  int frame;
+#if CONFIG_NON_GREEDY_MV
+  vp9_free_motion_field_info(&cpi->motion_field_info);
+  vpx_free(cpi->select_mv_arr);
+#endif
+  for (frame = 0; frame < MAX_ARF_GOP_SIZE; ++frame) {
+#if CONFIG_NON_GREEDY_MV
+    int rf_idx;
+    for (rf_idx = 0; rf_idx < MAX_INTER_REF_FRAMES; ++rf_idx) {
+      vpx_free(cpi->tpl_stats[frame].mv_mode_arr[rf_idx]);
+      vpx_free(cpi->tpl_stats[frame].rd_diff_arr[rf_idx]);
+    }
+#endif
+    vpx_free(cpi->tpl_stats[frame].tpl_stats_ptr);
+    cpi->tpl_stats[frame].is_valid = 0;
+  }
+}
+
+static void setup_tpl_stats(VP9_COMP *cpi) {
+  GF_PICTURE gf_picture[MAX_ARF_GOP_SIZE];
+  const GF_GROUP *gf_group = &cpi->twopass.gf_group;
+  int tpl_group_frames = 0;
+  int frame_idx;
+  cpi->tpl_bsize = BLOCK_32X32;
+
+  init_gop_frames(cpi, gf_picture, gf_group, &tpl_group_frames);
+
+  init_tpl_stats(cpi);
+
+  // Backward propagation from tpl_group_frames to 1.
+  for (frame_idx = tpl_group_frames - 1; frame_idx > 0; --frame_idx) {
+    if (gf_picture[frame_idx].update_type == USE_BUF_FRAME) continue;
+    mc_flow_dispenser(cpi, gf_picture, frame_idx, cpi->tpl_bsize);
+  }
+#if CONFIG_NON_GREEDY_MV
+  cpi->tpl_ready = 1;
+#if DUMP_TPL_STATS
+  dump_tpl_stats(cpi, tpl_group_frames, gf_group, gf_picture, cpi->tpl_bsize);
+#endif  // DUMP_TPL_STATS
+#endif  // CONFIG_NON_GREEDY_MV
+}
+
+#if !CONFIG_REALTIME_ONLY
+#if CONFIG_RATE_CTRL
+static void copy_frame_counts(const FRAME_COUNTS *input_counts,
+                              FRAME_COUNTS *output_counts) {
+  int i, j, k, l, m, n;
+  for (i = 0; i < BLOCK_SIZE_GROUPS; ++i) {
+    for (j = 0; j < INTRA_MODES; ++j) {
+      output_counts->y_mode[i][j] = input_counts->y_mode[i][j];
+    }
+  }
+  for (i = 0; i < INTRA_MODES; ++i) {
+    for (j = 0; j < INTRA_MODES; ++j) {
+      output_counts->uv_mode[i][j] = input_counts->uv_mode[i][j];
+    }
+  }
+  for (i = 0; i < PARTITION_CONTEXTS; ++i) {
+    for (j = 0; j < PARTITION_TYPES; ++j) {
+      output_counts->partition[i][j] = input_counts->partition[i][j];
+    }
+  }
+  for (i = 0; i < TX_SIZES; ++i) {
+    for (j = 0; j < PLANE_TYPES; ++j) {
+      for (k = 0; k < REF_TYPES; ++k) {
+        for (l = 0; l < COEF_BANDS; ++l) {
+          for (m = 0; m < COEFF_CONTEXTS; ++m) {
+            output_counts->eob_branch[i][j][k][l][m] =
+                input_counts->eob_branch[i][j][k][l][m];
+            for (n = 0; n < UNCONSTRAINED_NODES + 1; ++n) {
+              output_counts->coef[i][j][k][l][m][n] =
+                  input_counts->coef[i][j][k][l][m][n];
+            }
+          }
+        }
+      }
+    }
+  }
+  for (i = 0; i < SWITCHABLE_FILTER_CONTEXTS; ++i) {
+    for (j = 0; j < SWITCHABLE_FILTERS; ++j) {
+      output_counts->switchable_interp[i][j] =
+          input_counts->switchable_interp[i][j];
+    }
+  }
+  for (i = 0; i < INTER_MODE_CONTEXTS; ++i) {
+    for (j = 0; j < INTER_MODES; ++j) {
+      output_counts->inter_mode[i][j] = input_counts->inter_mode[i][j];
+    }
+  }
+  for (i = 0; i < INTRA_INTER_CONTEXTS; ++i) {
+    for (j = 0; j < 2; ++j) {
+      output_counts->intra_inter[i][j] = input_counts->intra_inter[i][j];
+    }
+  }
+  for (i = 0; i < COMP_INTER_CONTEXTS; ++i) {
+    for (j = 0; j < 2; ++j) {
+      output_counts->comp_inter[i][j] = input_counts->comp_inter[i][j];
+    }
+  }
+  for (i = 0; i < REF_CONTEXTS; ++i) {
+    for (j = 0; j < 2; ++j) {
+      for (k = 0; k < 2; ++k) {
+        output_counts->single_ref[i][j][k] = input_counts->single_ref[i][j][k];
+      }
+    }
+  }
+  for (i = 0; i < REF_CONTEXTS; ++i) {
+    for (j = 0; j < 2; ++j) {
+      output_counts->comp_ref[i][j] = input_counts->comp_ref[i][j];
+    }
+  }
+  for (i = 0; i < SKIP_CONTEXTS; ++i) {
+    for (j = 0; j < 2; ++j) {
+      output_counts->skip[i][j] = input_counts->skip[i][j];
+    }
+  }
+  for (i = 0; i < TX_SIZE_CONTEXTS; i++) {
+    for (j = 0; j < TX_SIZES; j++) {
+      output_counts->tx.p32x32[i][j] = input_counts->tx.p32x32[i][j];
+    }
+    for (j = 0; j < TX_SIZES - 1; j++) {
+      output_counts->tx.p16x16[i][j] = input_counts->tx.p16x16[i][j];
+    }
+    for (j = 0; j < TX_SIZES - 2; j++) {
+      output_counts->tx.p8x8[i][j] = input_counts->tx.p8x8[i][j];
+    }
+  }
+  for (i = 0; i < TX_SIZES; i++) {
+    output_counts->tx.tx_totals[i] = input_counts->tx.tx_totals[i];
+  }
+  for (i = 0; i < MV_JOINTS; i++) {
+    output_counts->mv.joints[i] = input_counts->mv.joints[i];
+  }
+  for (k = 0; k < 2; k++) {
+    nmv_component_counts *const comps = &output_counts->mv.comps[k];
+    const nmv_component_counts *const comps_t = &input_counts->mv.comps[k];
+    for (i = 0; i < 2; i++) {
+      comps->sign[i] = comps_t->sign[i];
+      comps->class0_hp[i] = comps_t->class0_hp[i];
+      comps->hp[i] = comps_t->hp[i];
+    }
+    for (i = 0; i < MV_CLASSES; i++) {
+      comps->classes[i] = comps_t->classes[i];
+    }
+    for (i = 0; i < CLASS0_SIZE; i++) {
+      comps->class0[i] = comps_t->class0[i];
+      for (j = 0; j < MV_FP_SIZE; j++) {
+        comps->class0_fp[i][j] = comps_t->class0_fp[i][j];
+      }
+    }
+    for (i = 0; i < MV_OFFSET_BITS; i++) {
+      for (j = 0; j < 2; j++) {
+        comps->bits[i][j] = comps_t->bits[i][j];
+      }
+    }
+    for (i = 0; i < MV_FP_SIZE; i++) {
+      comps->fp[i] = comps_t->fp[i];
+    }
+  }
+}
+
+static void yv12_buffer_to_image_buffer(const YV12_BUFFER_CONFIG *yv12_buffer,
+                                        IMAGE_BUFFER *image_buffer) {
+  const uint8_t *src_buf_ls[3] = { yv12_buffer->y_buffer, yv12_buffer->u_buffer,
+                                   yv12_buffer->v_buffer };
+  const int src_stride_ls[3] = { yv12_buffer->y_stride, yv12_buffer->uv_stride,
+                                 yv12_buffer->uv_stride };
+  const int w_ls[3] = { yv12_buffer->y_crop_width, yv12_buffer->uv_crop_width,
+                        yv12_buffer->uv_crop_width };
+  const int h_ls[3] = { yv12_buffer->y_crop_height, yv12_buffer->uv_crop_height,
+                        yv12_buffer->uv_crop_height };
+  int plane;
+  for (plane = 0; plane < 3; ++plane) {
+    const int src_stride = src_stride_ls[plane];
+    const int w = w_ls[plane];
+    const int h = h_ls[plane];
+    const uint8_t *src_buf = src_buf_ls[plane];
+    uint8_t *dst_buf = image_buffer->plane_buffer[plane];
+    int r;
+    assert(image_buffer->plane_width[plane] == w);
+    assert(image_buffer->plane_height[plane] == h);
+    for (r = 0; r < h; ++r) {
+      memcpy(dst_buf, src_buf, sizeof(*src_buf) * w);
+      src_buf += src_stride;
+      dst_buf += w;
+    }
+  }
+}
+#endif  // CONFIG_RATE_CTRL
+
+static void update_encode_frame_result(
+    int show_idx, FRAME_UPDATE_TYPE update_type,
+    const YV12_BUFFER_CONFIG *source_frame,
+    const YV12_BUFFER_CONFIG *coded_frame, int quantize_index,
+    uint32_t bit_depth, uint32_t input_bit_depth, const FRAME_COUNTS *counts,
+#if CONFIG_RATE_CTRL
+    const PARTITION_INFO *partition_info,
+    const MOTION_VECTOR_INFO *motion_vector_info,
+#endif  // CONFIG_RATE_CTRL
+    ENCODE_FRAME_RESULT *encode_frame_result) {
+#if CONFIG_RATE_CTRL
+  PSNR_STATS psnr;
+#if CONFIG_VP9_HIGHBITDEPTH
+  vpx_calc_highbd_psnr(source_frame, coded_frame, &psnr, bit_depth,
+                       input_bit_depth);
+#else   // CONFIG_VP9_HIGHBITDEPTH
+  (void)bit_depth;
+  (void)input_bit_depth;
+  vpx_calc_psnr(source_frame, coded_frame, &psnr);
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+  encode_frame_result->psnr = psnr.psnr[0];
+  encode_frame_result->sse = psnr.sse[0];
+  copy_frame_counts(counts, &encode_frame_result->frame_counts);
+  encode_frame_result->partition_info = partition_info;
+  encode_frame_result->motion_vector_info = motion_vector_info;
+  if (encode_frame_result->coded_frame.allocated) {
+    yv12_buffer_to_image_buffer(coded_frame, &encode_frame_result->coded_frame);
+  }
+#else   // CONFIG_RATE_CTRL
+  (void)bit_depth;
+  (void)input_bit_depth;
+  (void)source_frame;
+  (void)coded_frame;
+  (void)counts;
+#endif  // CONFIG_RATE_CTRL
+  encode_frame_result->show_idx = show_idx;
+  encode_frame_result->update_type = update_type;
+  encode_frame_result->quantize_index = quantize_index;
+}
+#endif  // !CONFIG_REALTIME_ONLY
+
+void vp9_init_encode_frame_result(ENCODE_FRAME_RESULT *encode_frame_result) {
+  encode_frame_result->show_idx = -1;  // Actual encoding doesn't happen.
+#if CONFIG_RATE_CTRL
+  vp9_zero(encode_frame_result->coded_frame);
+  encode_frame_result->coded_frame.allocated = 0;
+#endif  // CONFIG_RATE_CTRL
+}
+
 int vp9_get_compressed_data(VP9_COMP *cpi, unsigned int *frame_flags,
                             size_t *size, uint8_t *dest, int64_t *time_stamp,
-                            int64_t *time_end, int flush) {
+                            int64_t *time_end, int flush,
+                            ENCODE_FRAME_RESULT *encode_frame_result) {
   const VP9EncoderConfig *const oxcf = &cpi->oxcf;
   VP9_COMMON *const cm = &cpi->common;
   BufferPool *const pool = cm->buffer_pool;
@@ -5222,17 +7312,10 @@
   struct lookahead_entry *last_source = NULL;
   struct lookahead_entry *source = NULL;
   int arf_src_index;
+  const int gf_group_index = cpi->twopass.gf_group.index;
   int i;
 
-  if (is_two_pass_svc(cpi)) {
-#if CONFIG_SPATIAL_SVC
-    vp9_svc_start_frame(cpi);
-    // Use a small empty frame instead of a real frame
-    if (cpi->svc.encode_empty_frame_state == ENCODING)
-      source = &cpi->svc.empty_frame;
-#endif
-    if (oxcf->pass == 2) vp9_restore_layer_context(cpi);
-  } else if (is_one_pass_cbr_svc(cpi)) {
+  if (is_one_pass_cbr_svc(cpi)) {
     vp9_one_pass_cbr_svc_start_layer(cpi);
   }
 
@@ -5243,10 +7326,12 @@
   // Is multi-arf enabled.
   // Note that at the moment multi_arf is only configured for 2 pass VBR and
   // will not work properly with svc.
-  if ((oxcf->pass == 2) && !cpi->use_svc && (cpi->oxcf.enable_auto_arf > 1))
-    cpi->multi_arf_allowed = 1;
+  // Enable Jingning's new "multi_layer_arf" code if "enable_auto_arf"
+  // is greater than or equal to 2.
+  if ((oxcf->pass == 2) && !cpi->use_svc && (cpi->oxcf.enable_auto_arf >= 2))
+    cpi->multi_layer_arf = 1;
   else
-    cpi->multi_arf_allowed = 0;
+    cpi->multi_layer_arf = 0;
 
   // Normal defaults
   cm->reset_frame_context = 0;
@@ -5260,9 +7345,6 @@
   // Should we encode an arf frame.
   arf_src_index = get_arf_src_index(cpi);
 
-  // Skip alt frame if we encode the empty frame
-  if (is_two_pass_svc(cpi) && source != NULL) arf_src_index = 0;
-
   if (arf_src_index) {
     for (i = 0; i <= arf_src_index; ++i) {
       struct lookahead_entry *e = vp9_lookahead_peek(cpi->lookahead, i);
@@ -5277,25 +7359,17 @@
     }
   }
 
+  // Clear arf index stack before group of pictures processing starts.
+  if (gf_group_index == 1) {
+    stack_init(cpi->twopass.gf_group.arf_index_stack, MAX_LAG_BUFFERS * 2);
+    cpi->twopass.gf_group.stack_size = 0;
+  }
+
   if (arf_src_index) {
     assert(arf_src_index <= rc->frames_to_key);
-
     if ((source = vp9_lookahead_peek(cpi->lookahead, arf_src_index)) != NULL) {
       cpi->alt_ref_source = source;
 
-#if CONFIG_SPATIAL_SVC
-      if (is_two_pass_svc(cpi) && cpi->svc.spatial_layer_id > 0) {
-        int i;
-        // Reference a hidden frame from a lower layer
-        for (i = cpi->svc.spatial_layer_id - 1; i >= 0; --i) {
-          if (oxcf->ss_enable_auto_arf[i]) {
-            cpi->gld_fb_idx = cpi->svc.layer_context[i].alt_ref_idx;
-            break;
-          }
-        }
-      }
-      cpi->svc.layer_context[cpi->svc.spatial_layer_id].has_alt_frame = 1;
-#endif
 #if !CONFIG_REALTIME_ONLY
       if ((oxcf->mode != REALTIME) && (oxcf->arnr_max_frames > 0) &&
           (oxcf->arnr_strength > 0)) {
@@ -5337,7 +7411,7 @@
     }
 
     // Read in the source frame.
-    if (cpi->use_svc)
+    if (cpi->use_svc || cpi->svc.set_intra_only_frame)
       source = vp9_svc_lookahead_pop(cpi, cpi->lookahead, flush);
     else
       source = vp9_lookahead_pop(cpi->lookahead, flush);
@@ -5345,9 +7419,10 @@
     if (source != NULL) {
       cm->show_frame = 1;
       cm->intra_only = 0;
-      // if the flags indicate intra frame, but if the current picture is for
-      // non-zero spatial layer, it should not be an intra picture.
-      if ((source->flags & VPX_EFLAG_FORCE_KF) &&
+      // If the flags indicate intra frame, but if the current picture is for
+      // spatial layer above first_spatial_layer_to_encode, it should not be an
+      // intra picture.
+      if ((source->flags & VPX_EFLAG_FORCE_KF) && cpi->use_svc &&
           cpi->svc.spatial_layer_id > cpi->svc.first_spatial_layer_to_encode) {
         source->flags &= ~(unsigned int)(VPX_EFLAG_FORCE_KF);
       }
@@ -5372,15 +7447,8 @@
     *time_stamp = source->ts_start;
     *time_end = source->ts_end;
     *frame_flags = (source->flags & VPX_EFLAG_FORCE_KF) ? FRAMEFLAGS_KEY : 0;
-
   } else {
     *size = 0;
-#if !CONFIG_REALTIME_ONLY
-    if (flush && oxcf->pass == 1 && !cpi->twopass.first_pass_done) {
-      vp9_end_first_pass(cpi); /* get last stats packet */
-      cpi->twopass.first_pass_done = 1;
-    }
-#endif  // !CONFIG_REALTIME_ONLY
     return -1;
   }
 
@@ -5394,7 +7462,11 @@
 
   // adjust frame rates based on timestamps given
   if (cm->show_frame) {
-    adjust_frame_rate(cpi, source);
+    if (cpi->use_svc && cpi->svc.use_set_ref_frame_config &&
+        cpi->svc.duration[cpi->svc.spatial_layer_id] > 0)
+      vp9_svc_adjust_frame_rate(cpi);
+    else
+      adjust_frame_rate(cpi, source);
   }
 
   if (is_one_pass_cbr_svc(cpi)) {
@@ -5413,24 +7485,13 @@
 
   cm->cur_frame = &pool->frame_bufs[cm->new_fb_idx];
 
-  if (!cpi->use_svc && cpi->multi_arf_allowed) {
-    if (cm->frame_type == KEY_FRAME) {
-      init_buffer_indices(cpi);
-    } else if (oxcf->pass == 2) {
-      const GF_GROUP *const gf_group = &cpi->twopass.gf_group;
-      cpi->alt_fb_idx = gf_group->arf_ref_idx[gf_group->index];
-    }
-  }
-
   // Start with a 0 size frame.
   *size = 0;
 
   cpi->frame_flags = *frame_flags;
 
 #if !CONFIG_REALTIME_ONLY
-  if ((oxcf->pass == 2) &&
-      (!cpi->use_svc || (is_two_pass_svc(cpi) &&
-                         cpi->svc.encode_empty_frame_state != ENCODING))) {
+  if ((oxcf->pass == 2) && !cpi->use_svc) {
     vp9_rc_get_second_pass_params(cpi);
   } else if (oxcf->pass == 1) {
     set_frame_size(cpi);
@@ -5442,11 +7503,55 @@
     level_rc_framerate(cpi, arf_src_index);
 
   if (cpi->oxcf.pass != 0 || cpi->use_svc || frame_is_intra_only(cm) == 1) {
-    for (i = 0; i < MAX_REF_FRAMES; ++i) cpi->scaled_ref_idx[i] = INVALID_IDX;
+    for (i = 0; i < REFS_PER_FRAME; ++i) cpi->scaled_ref_idx[i] = INVALID_IDX;
   }
 
+  if (cpi->kmeans_data_arr_alloc == 0) {
+    const int mi_cols = mi_cols_aligned_to_sb(cm->mi_cols);
+    const int mi_rows = mi_cols_aligned_to_sb(cm->mi_rows);
+#if CONFIG_MULTITHREAD
+    pthread_mutex_init(&cpi->kmeans_mutex, NULL);
+#endif
+    CHECK_MEM_ERROR(
+        cm, cpi->kmeans_data_arr,
+        vpx_calloc(mi_rows * mi_cols, sizeof(*cpi->kmeans_data_arr)));
+    cpi->kmeans_data_stride = mi_cols;
+    cpi->kmeans_data_arr_alloc = 1;
+  }
+
+#if CONFIG_NON_GREEDY_MV
+  {
+    const int mi_cols = mi_cols_aligned_to_sb(cm->mi_cols);
+    const int mi_rows = mi_cols_aligned_to_sb(cm->mi_rows);
+    Status status = vp9_alloc_motion_field_info(
+        &cpi->motion_field_info, MAX_ARF_GOP_SIZE, mi_rows, mi_cols);
+    if (status == STATUS_FAILED) {
+      vpx_internal_error(&(cm)->error, VPX_CODEC_MEM_ERROR,
+                         "vp9_alloc_motion_field_info failed");
+    }
+  }
+#endif  // CONFIG_NON_GREEDY_MV
+
+  if (gf_group_index == 1 &&
+      cpi->twopass.gf_group.update_type[gf_group_index] == ARF_UPDATE &&
+      cpi->sf.enable_tpl_model) {
+    init_tpl_buffer(cpi);
+    vp9_estimate_qp_gop(cpi);
+    setup_tpl_stats(cpi);
+  }
+
+#if CONFIG_BITSTREAM_DEBUG
+  assert(cpi->oxcf.max_threads == 0 &&
+         "bitstream debug tool does not support multithreading");
+  bitstream_queue_record_write();
+#endif
+#if CONFIG_BITSTREAM_DEBUG || CONFIG_MISMATCH_DEBUG
+  bitstream_queue_set_frame_write(cm->current_video_frame * 2 + cm->show_frame);
+#endif
+
   cpi->td.mb.fp_src_pred = 0;
 #if CONFIG_REALTIME_ONLY
+  (void)encode_frame_result;
   if (cpi->use_svc) {
     SvcEncode(cpi, size, dest, frame_flags);
   } else {
@@ -5454,7 +7559,7 @@
     Pass0Encode(cpi, size, dest, frame_flags);
   }
 #else  // !CONFIG_REALTIME_ONLY
-  if (oxcf->pass == 1 && (!cpi->use_svc || is_two_pass_svc(cpi))) {
+  if (oxcf->pass == 1 && !cpi->use_svc) {
     const int lossless = is_lossless_requested(oxcf);
 #if CONFIG_VP9_HIGHBITDEPTH
     if (cpi->oxcf.use_highbitdepth)
@@ -5469,8 +7574,31 @@
 #endif  // CONFIG_VP9_HIGHBITDEPTH
     cpi->td.mb.inv_txfm_add = lossless ? vp9_iwht4x4_add : vp9_idct4x4_add;
     vp9_first_pass(cpi, source);
-  } else if (oxcf->pass == 2 && (!cpi->use_svc || is_two_pass_svc(cpi))) {
+  } else if (oxcf->pass == 2 && !cpi->use_svc) {
     Pass2Encode(cpi, size, dest, frame_flags);
+    // update_encode_frame_result() depends on twopass.gf_group.index,
+    // cm->new_fb_idx and cpi->Source being updated properly for the current
+    // frame and not yet updated for the next frame.
+    // The update locations are as follows.
+    // 1) twopass.gf_group.index is initialized at define_gf_group by vp9_zero()
+    // for the first frame in the gf_group and is updated for the next frame at
+    // vp9_twopass_postencode_update().
+    // 2) cpi->Source is updated at the beginning of this function, i.e.
+    // vp9_get_compressed_data()
+    // 3) cm->new_fb_idx is updated at the beginning of this function by
+    // get_free_fb(cm)
+    // TODO(angiebird): Improve the codebase to make the update of frame
+    // dependent variables more robust.
+    update_encode_frame_result(
+        source->show_idx,
+        cpi->twopass.gf_group.update_type[cpi->twopass.gf_group.index],
+        cpi->Source, get_frame_new_buffer(cm), vp9_get_quantizer(cpi),
+        cm->bit_depth, cpi->oxcf.input_bit_depth, cpi->td.counts,
+#if CONFIG_RATE_CTRL
+        cpi->partition_info, cpi->motion_vector_info,
+#endif  // CONFIG_RATE_CTRL
+        encode_frame_result);
+    vp9_twopass_postencode_update(cpi);
   } else if (cpi->use_svc) {
     SvcEncode(cpi, size, dest, frame_flags);
   } else {
@@ -5479,6 +7607,8 @@
   }
 #endif  // CONFIG_REALTIME_ONLY
 
+  if (cm->show_frame) cm->cur_show_frame_fb_idx = cm->new_fb_idx;
+
   if (cm->refresh_frame_context)
     cm->frame_contexts[cm->frame_context_idx] = *cm->fc;
 
@@ -5501,9 +7631,6 @@
   vpx_usec_timer_mark(&cmptimer);
   cpi->time_compress_data += vpx_usec_timer_elapsed(&cmptimer);
 
-  // Should we calculate metrics for the frame.
-  if (is_psnr_calc_enabled(cpi)) generate_psnr_packet(cpi);
-
   if (cpi->keep_level_stats && oxcf->pass != 1)
     update_level_info(cpi, size, arf_src_index);
 
@@ -5561,7 +7688,8 @@
             ppflags.post_proc_flag = VP9D_DEBLOCK;
             ppflags.deblocking_level = 0;  // not used in vp9_post_proc_frame()
             ppflags.noise_level = 0;       // not used in vp9_post_proc_frame()
-            vp9_post_proc_frame(cm, pp, &ppflags);
+            vp9_post_proc_frame(cm, pp, &ppflags,
+                                cpi->un_scaled_source->y_width);
           }
 #endif
           vpx_clear_system_state();
@@ -5607,11 +7735,11 @@
           cpi->summedp_quality += frame_ssim2 * weight;
           cpi->summedp_weights += weight;
 #if 0
-          {
+          if (cm->show_frame) {
             FILE *f = fopen("q_used.stt", "a");
             fprintf(f, "%5d : Y%f7.3:U%f7.3:V%f7.3:F%f7.3:S%7.3f\n",
-                    cpi->common.current_video_frame, y2, u2, v2,
-                    frame_psnr2, frame_ssim2);
+                    cpi->common.current_video_frame, psnr2.psnr[1],
+                    psnr2.psnr[2], psnr2.psnr[3], psnr2.psnr[0], frame_ssim2);
             fclose(f);
           }
 #endif
@@ -5670,21 +7798,7 @@
 
 #endif
 
-  if (is_two_pass_svc(cpi)) {
-    if (cpi->svc.encode_empty_frame_state == ENCODING) {
-      cpi->svc.encode_empty_frame_state = ENCODED;
-      cpi->svc.encode_intra_empty_frame = 0;
-    }
-
-    if (cm->show_frame) {
-      ++cpi->svc.spatial_layer_to_encode;
-      if (cpi->svc.spatial_layer_to_encode >= cpi->svc.number_spatial_layers)
-        cpi->svc.spatial_layer_to_encode = 0;
-
-      // May need the empty frame after an visible frame.
-      cpi->svc.encode_empty_frame_state = NEED_TO_ENCODE;
-    }
-  } else if (is_one_pass_cbr_svc(cpi)) {
+  if (is_one_pass_cbr_svc(cpi)) {
     if (cm->show_frame) {
       ++cpi->svc.spatial_layer_to_encode;
       if (cpi->svc.spatial_layer_to_encode >= cpi->svc.number_spatial_layers)
@@ -5708,7 +7822,7 @@
   } else {
     int ret;
 #if CONFIG_VP9_POSTPROC
-    ret = vp9_post_proc_frame(cm, dest, flags);
+    ret = vp9_post_proc_frame(cm, dest, flags, cpi->un_scaled_source->y_width);
 #else
     if (cm->frame_to_show) {
       *dest = *cm->frame_to_show;
@@ -5753,15 +7867,15 @@
                          unsigned int height) {
   VP9_COMMON *cm = &cpi->common;
 #if CONFIG_VP9_HIGHBITDEPTH
-  check_initial_width(cpi, cm->use_highbitdepth, 1, 1);
+  update_initial_width(cpi, cm->use_highbitdepth, 1, 1);
 #else
-  check_initial_width(cpi, 1, 1);
+  update_initial_width(cpi, 0, 1, 1);
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
 #if CONFIG_VP9_TEMPORAL_DENOISING
   setup_denoiser_buffer(cpi);
 #endif
-
+  alloc_raw_frame_buffers(cpi);
   if (width) {
     cm->width = width;
     if (cm->width > cpi->initial_width) {
@@ -5790,7 +7904,7 @@
   return;
 }
 
-int vp9_get_quantizer(VP9_COMP *cpi) { return cpi->common.base_qindex; }
+int vp9_get_quantizer(const VP9_COMP *cpi) { return cpi->common.base_qindex; }
 
 void vp9_apply_encoding_flags(VP9_COMP *cpi, vpx_enc_frame_flags_t flags) {
   if (flags &
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encoder.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encoder.h
index 2989af3..8b4c6a5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encoder.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_encoder.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_ENCODER_H_
-#define VP9_ENCODER_VP9_ENCODER_H_
+#ifndef VPX_VP9_ENCODER_VP9_ENCODER_H_
+#define VPX_VP9_ENCODER_VP9_ENCODER_H_
 
 #include <stdio.h>
 
@@ -20,8 +20,10 @@
 #include "vpx_dsp/ssim.h"
 #endif
 #include "vpx_dsp/variance.h"
+#include "vpx_dsp/psnr.h"
 #include "vpx_ports/system_state.h"
 #include "vpx_util/vpx_thread.h"
+#include "vpx_util/vpx_timestamp.h"
 
 #include "vp9/common/vp9_alloccommon.h"
 #include "vp9/common/vp9_ppflags.h"
@@ -29,7 +31,9 @@
 #include "vp9/common/vp9_thread_common.h"
 #include "vp9/common/vp9_onyxc_int.h"
 
+#if !CONFIG_REALTIME_ONLY
 #include "vp9/encoder/vp9_alt_ref_aq.h"
+#endif
 #include "vp9/encoder/vp9_aq_cyclicrefresh.h"
 #include "vp9/encoder/vp9_context_tree.h"
 #include "vp9/encoder/vp9_encodemb.h"
@@ -119,9 +123,11 @@
   COMPLEXITY_AQ = 2,
   CYCLIC_REFRESH_AQ = 3,
   EQUATOR360_AQ = 4,
+  PERCEPTUAL_AQ = 5,
+  PSNR_AQ = 6,
   // AQ based on lookahead temporal
   // variance (only valid for altref frames)
-  LOOKAHEAD_AQ = 5,
+  LOOKAHEAD_AQ = 7,
   AQ_MODE_COUNT  // This should always be the last member of the enum
 } AQ_MODE;
 
@@ -148,7 +154,10 @@
   int height;                    // height of data passed to the compressor
   unsigned int input_bit_depth;  // Input bit depth.
   double init_framerate;         // set to passed in framerate
-  int64_t target_bandwidth;      // bandwidth to be used in bits per second
+  vpx_rational_t g_timebase;  // equivalent to g_timebase in vpx_codec_enc_cfg_t
+  vpx_rational64_t g_timebase_in_ts;  // g_timebase * TICKS_PER_SEC
+
+  int64_t target_bandwidth;  // bandwidth to be used in bits per second
 
   int noise_sensitivity;  // pre processing blur: recommendation 0
   int sharpness;          // sharpening output: recommendation 0:
@@ -248,12 +257,13 @@
   int tile_columns;
   int tile_rows;
 
+  int enable_tpl_model;
+
   int max_threads;
 
   unsigned int target_level;
 
   vpx_fixed_buf_t two_pass_stats_in;
-  struct vpx_codec_pkt_list *output_pkt_list;
 
 #if CONFIG_FP_MB_STATS
   vpx_fixed_buf_t firstpass_mb_stats_in;
@@ -272,17 +282,59 @@
 
   int row_mt;
   unsigned int motion_vector_unit_test;
+  int delta_q_uv;
 } VP9EncoderConfig;
 
 static INLINE int is_lossless_requested(const VP9EncoderConfig *cfg) {
   return cfg->best_allowed_q == 0 && cfg->worst_allowed_q == 0;
 }
 
+typedef struct TplDepStats {
+  int64_t intra_cost;
+  int64_t inter_cost;
+  int64_t mc_flow;
+  int64_t mc_dep_cost;
+  int64_t mc_ref_cost;
+
+  int ref_frame_index;
+  int_mv mv;
+} TplDepStats;
+
+#if CONFIG_NON_GREEDY_MV
+
+#define ZERO_MV_MODE 0
+#define NEW_MV_MODE 1
+#define NEAREST_MV_MODE 2
+#define NEAR_MV_MODE 3
+#define MAX_MV_MODE 4
+#endif
+
+typedef struct TplDepFrame {
+  uint8_t is_valid;
+  TplDepStats *tpl_stats_ptr;
+  int stride;
+  int width;
+  int height;
+  int mi_rows;
+  int mi_cols;
+  int base_qindex;
+#if CONFIG_NON_GREEDY_MV
+  int lambda;
+  int *mv_mode_arr[3];
+  double *rd_diff_arr[3];
+#endif
+} TplDepFrame;
+
+#define TPL_DEP_COST_SCALE_LOG2 4
+
 // TODO(jingning) All spatially adaptive variables should go to TileDataEnc.
 typedef struct TileDataEnc {
   TileInfo tile_info;
   int thresh_freq_fact[BLOCK_SIZES][MAX_MODES];
-  int mode_map[BLOCK_SIZES][MAX_MODES];
+#if CONFIG_CONSISTENT_RECODE || CONFIG_RATE_CTRL
+  int thresh_freq_fact_prev[BLOCK_SIZES][MAX_MODES];
+#endif  // CONFIG_CONSISTENT_RECODE || CONFIG_RATE_CTRL
+  int8_t mode_map[BLOCK_SIZES][MAX_MODES];
   FIRSTPASS_DATA fp_data;
   VP9RowMTSync row_mt_sync;
 
@@ -436,7 +488,6 @@
 
 typedef struct {
   int8_t level_index;
-  uint8_t rc_config_updated;
   uint8_t fail_flag;
   int max_frame_size;   // in bits
   double max_cpb_size;  // in bits
@@ -450,7 +501,76 @@
   struct scale_factors sf;
 } ARNRFilterData;
 
+typedef struct EncFrameBuf {
+  int mem_valid;
+  int released;
+  YV12_BUFFER_CONFIG frame;
+} EncFrameBuf;
+
+// Maximum operating frame buffer size needed for a GOP using ARF reference.
+#define MAX_ARF_GOP_SIZE (2 * MAX_LAG_BUFFERS)
+#define MAX_KMEANS_GROUPS 8
+
+typedef struct KMEANS_DATA {
+  double value;
+  int pos;
+  int group_idx;
+} KMEANS_DATA;
+
+#if CONFIG_RATE_CTRL
+typedef struct PARTITION_INFO {
+  int row;           // row pixel offset of current 4x4 block
+  int column;        // column pixel offset of current 4x4 block
+  int row_start;     // row pixel offset of the start of the prediction block
+  int column_start;  // column pixel offset of the start of the prediction block
+  int width;         // prediction block width
+  int height;        // prediction block height
+} PARTITION_INFO;
+
+typedef struct MOTION_VECTOR_INFO {
+  MV_REFERENCE_FRAME ref_frame[2];
+  int_mv mv[2];
+} MOTION_VECTOR_INFO;
+
+typedef struct ENCODE_COMMAND {
+  int use_external_quantize_index;
+  int external_quantize_index;
+  // A list of binary flags set from the external controller.
+  // Each binary flag indicates whether the frame is an arf or not.
+  const int *external_arf_indexes;
+} ENCODE_COMMAND;
+
+static INLINE void encode_command_init(ENCODE_COMMAND *encode_command) {
+  vp9_zero(*encode_command);
+  encode_command->use_external_quantize_index = 0;
+  encode_command->external_quantize_index = -1;
+  encode_command->external_arf_indexes = NULL;
+}
+
+static INLINE void encode_command_set_external_arf_indexes(
+    ENCODE_COMMAND *encode_command, const int *external_arf_indexes) {
+  encode_command->external_arf_indexes = external_arf_indexes;
+}
+
+static INLINE void encode_command_set_external_quantize_index(
+    ENCODE_COMMAND *encode_command, int quantize_index) {
+  encode_command->use_external_quantize_index = 1;
+  encode_command->external_quantize_index = quantize_index;
+}
+
+static INLINE void encode_command_reset_external_quantize_index(
+    ENCODE_COMMAND *encode_command) {
+  encode_command->use_external_quantize_index = 0;
+  encode_command->external_quantize_index = -1;
+}
+
+// Returns the number of 4x4 units covering the given size; if the size is
+// not a multiple of 4, round up. For example, a size of 7 returns 2.
+static INLINE int get_num_unit_4x4(int size) { return (size + 3) >> 2; }
+#endif  // CONFIG_RATE_CTRL
+
 typedef struct VP9_COMP {
+  FRAME_INFO frame_info;
   QUANTS quants;
   ThreadData td;
   MB_MODE_INFO_EXT *mbmi_ext_base;
@@ -473,17 +593,40 @@
 #endif
   YV12_BUFFER_CONFIG *raw_source_frame;
 
+  BLOCK_SIZE tpl_bsize;
+  TplDepFrame tpl_stats[MAX_ARF_GOP_SIZE];
+  YV12_BUFFER_CONFIG *tpl_recon_frames[REF_FRAMES];
+  EncFrameBuf enc_frame_buf[REF_FRAMES];
+#if CONFIG_MULTITHREAD
+  pthread_mutex_t kmeans_mutex;
+#endif
+  int kmeans_data_arr_alloc;
+  KMEANS_DATA *kmeans_data_arr;
+  int kmeans_data_size;
+  int kmeans_data_stride;
+  double kmeans_ctr_ls[MAX_KMEANS_GROUPS];
+  double kmeans_boundary_ls[MAX_KMEANS_GROUPS];
+  int kmeans_count_ls[MAX_KMEANS_GROUPS];
+  int kmeans_ctr_num;
+#if CONFIG_NON_GREEDY_MV
+  MotionFieldInfo motion_field_info;
+  int tpl_ready;
+  int_mv *select_mv_arr;
+#endif
+
   TileDataEnc *tile_data;
   int allocated_tiles;  // Keep track of memory allocated for tiles.
 
   // For a still frame, this flag is set to 1 to skip partition search.
   int partition_search_skippable_frame;
 
-  int scaled_ref_idx[MAX_REF_FRAMES];
+  int scaled_ref_idx[REFS_PER_FRAME];
   int lst_fb_idx;
   int gld_fb_idx;
   int alt_fb_idx;
 
+  int ref_fb_idx[REF_FRAMES];
+
   int refresh_last_frame;
   int refresh_golden_frame;
   int refresh_alt_ref_frame;
@@ -496,10 +639,15 @@
   int ext_refresh_frame_context_pending;
   int ext_refresh_frame_context;
 
+  int64_t norm_wiener_variance;
+  int64_t *mb_wiener_variance;
+  int mb_wiener_var_rows;
+  int mb_wiener_var_cols;
+  double *mi_ssim_rdmult_scaling_factors;
+
   YV12_BUFFER_CONFIG last_frame_uf;
 
   TOKENEXTRA *tile_tok[4][1 << 6];
-  uint32_t tok_count[4][1 << 6];
   TOKENLIST *tplist[4][1 << 6];
 
   // Ambient reconstruction err target for force key frames
@@ -521,7 +669,7 @@
   RATE_CONTROL rc;
   double framerate;
 
-  int interp_filter_selected[MAX_REF_FRAMES][SWITCHABLE];
+  int interp_filter_selected[REF_FRAMES][SWITCHABLE];
 
   struct vpx_codec_pkt_list *output_pkt_list;
 
@@ -555,6 +703,7 @@
   ActiveMap active_map;
 
   fractional_mv_step_fp *find_fractional_mv_step;
+  struct scale_factors me_sf;
   vp9_diamond_search_fn_t diamond_search_sad;
   vp9_variance_fn_ptr_t fn_ptr[BLOCK_SIZES];
   uint64_t time_receive_data;
@@ -645,10 +794,8 @@
   int y_mode_costs[INTRA_MODES][INTRA_MODES][INTRA_MODES];
   int switchable_interp_costs[SWITCHABLE_FILTER_CONTEXTS][SWITCHABLE_FILTERS];
   int partition_cost[PARTITION_CONTEXTS][PARTITION_TYPES];
-
-  int multi_arf_allowed;
-  int multi_arf_enabled;
-  int multi_arf_last_grp_enabled;
+  // Indices are:  max_tx_size-1,  tx_size_ctx,    tx_size
+  int tx_size_cost[TX_SIZES - 1][TX_SIZE_CONTEXTS][TX_SIZES];
 
 #if CONFIG_VP9_TEMPORAL_DENOISING
   VP9_DENOISER denoiser;
@@ -724,12 +871,87 @@
   uint8_t *count_arf_frame_usage;
   uint8_t *count_lastgolden_frame_usage;
 
+  int multi_layer_arf;
   vpx_roi_map_t roi;
+#if CONFIG_RATE_CTRL
+  ENCODE_COMMAND encode_command;
+  PARTITION_INFO *partition_info;
+  MOTION_VECTOR_INFO *motion_vector_info;
+#endif
 } VP9_COMP;
 
+#if CONFIG_RATE_CTRL
+// Allocates memory for the partition information.
+// The unit size is each 4x4 block.
+// Only called once in vp9_create_compressor().
+static INLINE void partition_info_init(struct VP9_COMP *cpi) {
+  VP9_COMMON *const cm = &cpi->common;
+  const int unit_width = get_num_unit_4x4(cpi->frame_info.frame_width);
+  const int unit_height = get_num_unit_4x4(cpi->frame_info.frame_height);
+  CHECK_MEM_ERROR(cm, cpi->partition_info,
+                  (PARTITION_INFO *)vpx_calloc(unit_width * unit_height,
+                                               sizeof(PARTITION_INFO)));
+  memset(cpi->partition_info, 0,
+         unit_width * unit_height * sizeof(PARTITION_INFO));
+}
+
+// Frees memory of the partition information.
+// Only called once in dealloc_compressor_data().
+static INLINE void free_partition_info(struct VP9_COMP *cpi) {
+  vpx_free(cpi->partition_info);
+  cpi->partition_info = NULL;
+}
+
+// Allocates memory for the motion vector information.
+// The unit size is each 4x4 block.
+// Only called once in vp9_create_compressor().
+static INLINE void motion_vector_info_init(struct VP9_COMP *cpi) {
+  VP9_COMMON *const cm = &cpi->common;
+  const int unit_width = get_num_unit_4x4(cpi->frame_info.frame_width);
+  const int unit_height = get_num_unit_4x4(cpi->frame_info.frame_height);
+  CHECK_MEM_ERROR(cm, cpi->motion_vector_info,
+                  (MOTION_VECTOR_INFO *)vpx_calloc(unit_width * unit_height,
+                                                   sizeof(MOTION_VECTOR_INFO)));
+  memset(cpi->motion_vector_info, 0,
+         unit_width * unit_height * sizeof(MOTION_VECTOR_INFO));
+}
+
+// Frees memory of the motion vector information.
+// Only called once in dealloc_compressor_data().
+static INLINE void free_motion_vector_info(struct VP9_COMP *cpi) {
+  vpx_free(cpi->motion_vector_info);
+  cpi->motion_vector_info = NULL;
+}
+
+// This is the C-version counterpart of ImageBuffer.
+typedef struct IMAGE_BUFFER {
+  int allocated;
+  int plane_width[3];
+  int plane_height[3];
+  uint8_t *plane_buffer[3];
+} IMAGE_BUFFER;
+#endif  // CONFIG_RATE_CTRL
+
+typedef struct ENCODE_FRAME_RESULT {
+  int show_idx;
+  FRAME_UPDATE_TYPE update_type;
+#if CONFIG_RATE_CTRL
+  double psnr;
+  uint64_t sse;
+  FRAME_COUNTS frame_counts;
+  const PARTITION_INFO *partition_info;
+  const MOTION_VECTOR_INFO *motion_vector_info;
+  IMAGE_BUFFER coded_frame;
+#endif  // CONFIG_RATE_CTRL
+  int quantize_index;
+} ENCODE_FRAME_RESULT;
+
+void vp9_init_encode_frame_result(ENCODE_FRAME_RESULT *encode_frame_result);
+
 void vp9_initialize_enc(void);
 
-struct VP9_COMP *vp9_create_compressor(VP9EncoderConfig *oxcf,
+void vp9_update_compressor_with_img_fmt(VP9_COMP *cpi, vpx_img_fmt_t img_fmt);
+struct VP9_COMP *vp9_create_compressor(const VP9EncoderConfig *oxcf,
                                        BufferPool *const pool);
 void vp9_remove_compressor(VP9_COMP *cpi);
 
@@ -739,11 +961,12 @@
 // frame is made and not just a copy of the pointer..
 int vp9_receive_raw_frame(VP9_COMP *cpi, vpx_enc_frame_flags_t frame_flags,
                           YV12_BUFFER_CONFIG *sd, int64_t time_stamp,
-                          int64_t end_time_stamp);
+                          int64_t end_time);
 
 int vp9_get_compressed_data(VP9_COMP *cpi, unsigned int *frame_flags,
                             size_t *size, uint8_t *dest, int64_t *time_stamp,
-                            int64_t *time_end, int flush);
+                            int64_t *time_end, int flush,
+                            ENCODE_FRAME_RESULT *encode_frame_result);
 
 int vp9_get_preview_raw_frame(VP9_COMP *cpi, YV12_BUFFER_CONFIG *dest,
                               vp9_ppflags_t *flags);
@@ -760,9 +983,11 @@
 
 int vp9_update_entropy(VP9_COMP *cpi, int update);
 
-int vp9_set_active_map(VP9_COMP *cpi, unsigned char *map, int rows, int cols);
+int vp9_set_active_map(VP9_COMP *cpi, unsigned char *new_map_16x16, int rows,
+                       int cols);
 
-int vp9_get_active_map(VP9_COMP *cpi, unsigned char *map, int rows, int cols);
+int vp9_get_active_map(VP9_COMP *cpi, unsigned char *new_map_16x16, int rows,
+                       int cols);
 
 int vp9_set_internal_size(VP9_COMP *cpi, VPX_SCALING horiz_mode,
                           VPX_SCALING vert_mode);
@@ -772,7 +997,28 @@
 
 void vp9_set_svc(VP9_COMP *cpi, int use_svc);
 
-int vp9_get_quantizer(struct VP9_COMP *cpi);
+static INLINE int stack_pop(int *stack, int stack_size) {
+  int idx;
+  const int r = stack[0];
+  for (idx = 1; idx < stack_size; ++idx) stack[idx - 1] = stack[idx];
+
+  return r;
+}
+
+static INLINE int stack_top(const int *stack) { return stack[0]; }
+
+static INLINE void stack_push(int *stack, int new_item, int stack_size) {
+  int idx;
+  for (idx = stack_size; idx > 0; --idx) stack[idx] = stack[idx - 1];
+  stack[0] = new_item;
+}
+
+static INLINE void stack_init(int *stack, int length) {
+  int idx;
+  for (idx = 0; idx < length; ++idx) stack[idx] = -1;
+}
+
+int vp9_get_quantizer(const VP9_COMP *cpi);
 
 static INLINE int frame_is_kf_gf_arf(const VP9_COMP *cpi) {
   return frame_is_intra_only(&cpi->common) || cpi->refresh_alt_ref_frame ||
@@ -797,9 +1043,13 @@
   return (map_idx != INVALID_IDX) ? cm->ref_frame_map[map_idx] : INVALID_IDX;
 }
 
+static INLINE RefCntBuffer *get_ref_cnt_buffer(VP9_COMMON *cm, int fb_idx) {
+  return fb_idx != INVALID_IDX ? &cm->buffer_pool->frame_bufs[fb_idx] : NULL;
+}
+
 static INLINE YV12_BUFFER_CONFIG *get_ref_frame_buffer(
-    VP9_COMP *cpi, MV_REFERENCE_FRAME ref_frame) {
-  VP9_COMMON *const cm = &cpi->common;
+    const VP9_COMP *const cpi, MV_REFERENCE_FRAME ref_frame) {
+  const VP9_COMMON *const cm = &cpi->common;
   const int buf_idx = get_ref_frame_buf_idx(cpi, ref_frame);
   return buf_idx != INVALID_IDX ? &cm->buffer_pool->frame_bufs[buf_idx].buf
                                 : NULL;
@@ -860,10 +1110,6 @@
 
 void vp9_apply_encoding_flags(VP9_COMP *cpi, vpx_enc_frame_flags_t flags);
 
-static INLINE int is_two_pass_svc(const struct VP9_COMP *const cpi) {
-  return cpi->use_svc && cpi->oxcf.pass != 0;
-}
-
 static INLINE int is_one_pass_cbr_svc(const struct VP9_COMP *const cpi) {
   return (cpi->use_svc && cpi->oxcf.pass == 0);
 }
@@ -879,12 +1125,10 @@
 static INLINE int is_altref_enabled(const VP9_COMP *const cpi) {
   return !(cpi->oxcf.mode == REALTIME && cpi->oxcf.rc_mode == VPX_CBR) &&
          cpi->oxcf.lag_in_frames >= MIN_LOOKAHEAD_FOR_ARFS &&
-         (cpi->oxcf.enable_auto_arf &&
-          (!is_two_pass_svc(cpi) ||
-           cpi->oxcf.ss_enable_auto_arf[cpi->svc.spatial_layer_id]));
+         cpi->oxcf.enable_auto_arf;
 }
 
-static INLINE void set_ref_ptrs(VP9_COMMON *cm, MACROBLOCKD *xd,
+static INLINE void set_ref_ptrs(const VP9_COMMON *const cm, MACROBLOCKD *xd,
                                 MV_REFERENCE_FRAME ref0,
                                 MV_REFERENCE_FRAME ref1) {
   xd->block_refs[0] =
@@ -947,10 +1191,12 @@
 
 void vp9_set_row_mt(VP9_COMP *cpi);
 
+int vp9_get_psnr(const VP9_COMP *cpi, PSNR_STATS *psnr);
+
 #define LAYER_IDS_TO_IDX(sl, tl, num_tl) ((sl) * (num_tl) + (tl))
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_ENCODER_H_
+#endif  // VPX_VP9_ENCODER_VP9_ENCODER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ethread.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ethread.c
index e8fc110..e7f8a537 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ethread.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ethread.c
@@ -270,19 +270,19 @@
   {
     int i;
 
-    CHECK_MEM_ERROR(cm, row_mt_sync->mutex_,
-                    vpx_malloc(sizeof(*row_mt_sync->mutex_) * rows));
-    if (row_mt_sync->mutex_) {
+    CHECK_MEM_ERROR(cm, row_mt_sync->mutex,
+                    vpx_malloc(sizeof(*row_mt_sync->mutex) * rows));
+    if (row_mt_sync->mutex) {
       for (i = 0; i < rows; ++i) {
-        pthread_mutex_init(&row_mt_sync->mutex_[i], NULL);
+        pthread_mutex_init(&row_mt_sync->mutex[i], NULL);
       }
     }
 
-    CHECK_MEM_ERROR(cm, row_mt_sync->cond_,
-                    vpx_malloc(sizeof(*row_mt_sync->cond_) * rows));
-    if (row_mt_sync->cond_) {
+    CHECK_MEM_ERROR(cm, row_mt_sync->cond,
+                    vpx_malloc(sizeof(*row_mt_sync->cond) * rows));
+    if (row_mt_sync->cond) {
       for (i = 0; i < rows; ++i) {
-        pthread_cond_init(&row_mt_sync->cond_[i], NULL);
+        pthread_cond_init(&row_mt_sync->cond[i], NULL);
       }
     }
   }
@@ -301,17 +301,17 @@
 #if CONFIG_MULTITHREAD
     int i;
 
-    if (row_mt_sync->mutex_ != NULL) {
+    if (row_mt_sync->mutex != NULL) {
       for (i = 0; i < row_mt_sync->rows; ++i) {
-        pthread_mutex_destroy(&row_mt_sync->mutex_[i]);
+        pthread_mutex_destroy(&row_mt_sync->mutex[i]);
       }
-      vpx_free(row_mt_sync->mutex_);
+      vpx_free(row_mt_sync->mutex);
     }
-    if (row_mt_sync->cond_ != NULL) {
+    if (row_mt_sync->cond != NULL) {
       for (i = 0; i < row_mt_sync->rows; ++i) {
-        pthread_cond_destroy(&row_mt_sync->cond_[i]);
+        pthread_cond_destroy(&row_mt_sync->cond[i]);
       }
-      vpx_free(row_mt_sync->cond_);
+      vpx_free(row_mt_sync->cond);
     }
 #endif  // CONFIG_MULTITHREAD
     vpx_free(row_mt_sync->cur_col);
@@ -327,11 +327,11 @@
   const int nsync = row_mt_sync->sync_range;
 
   if (r && !(c & (nsync - 1))) {
-    pthread_mutex_t *const mutex = &row_mt_sync->mutex_[r - 1];
+    pthread_mutex_t *const mutex = &row_mt_sync->mutex[r - 1];
     pthread_mutex_lock(mutex);
 
     while (c > row_mt_sync->cur_col[r - 1] - nsync + 1) {
-      pthread_cond_wait(&row_mt_sync->cond_[r - 1], mutex);
+      pthread_cond_wait(&row_mt_sync->cond[r - 1], mutex);
     }
     pthread_mutex_unlock(mutex);
   }
@@ -365,12 +365,12 @@
   }
 
   if (sig) {
-    pthread_mutex_lock(&row_mt_sync->mutex_[r]);
+    pthread_mutex_lock(&row_mt_sync->mutex[r]);
 
     row_mt_sync->cur_col[r] = cur;
 
-    pthread_cond_signal(&row_mt_sync->cond_[r]);
-    pthread_mutex_unlock(&row_mt_sync->mutex_[r]);
+    pthread_cond_signal(&row_mt_sync->cond[r]);
+    pthread_mutex_unlock(&row_mt_sync->mutex[r]);
   }
 #else
   (void)row_mt_sync;
@@ -510,8 +510,8 @@
       tile_col = proc_job->tile_col_id;
       tile_row = proc_job->tile_row_id;
       this_tile = &cpi->tile_data[tile_row * tile_cols + tile_col];
-      mb_col_start = (this_tile->tile_info.mi_col_start) >> 1;
-      mb_col_end = (this_tile->tile_info.mi_col_end + 1) >> 1;
+      mb_col_start = (this_tile->tile_info.mi_col_start) >> TF_SHIFT;
+      mb_col_end = (this_tile->tile_info.mi_col_end + TF_ROUND) >> TF_SHIFT;
       mb_row = proc_job->vert_unit_row_num;
 
       vp9_temporal_filter_iterate_row_c(cpi, thread_data->td, mb_row,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ethread.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ethread.h
index a396e62..cda0293 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ethread.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ethread.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_ETHREAD_H_
-#define VP9_ENCODER_VP9_ETHREAD_H_
+#ifndef VPX_VP9_ENCODER_VP9_ETHREAD_H_
+#define VPX_VP9_ENCODER_VP9_ETHREAD_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -33,8 +33,8 @@
 // Encoder row synchronization
 typedef struct VP9RowMTSyncData {
 #if CONFIG_MULTITHREAD
-  pthread_mutex_t *mutex_;
-  pthread_cond_t *cond_;
+  pthread_mutex_t *mutex;
+  pthread_cond_t *cond;
 #endif
   // Allocate memory to store the sb/mb block index in each row.
   int *cur_col;
@@ -69,4 +69,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_ETHREAD_H_
+#endif  // VPX_VP9_ENCODER_VP9_ETHREAD_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_extend.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_extend.h
index c0dd757..4ba7fc9 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_extend.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_extend.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_EXTEND_H_
-#define VP9_ENCODER_VP9_EXTEND_H_
+#ifndef VPX_VP9_ENCODER_VP9_EXTEND_H_
+#define VPX_VP9_ENCODER_VP9_EXTEND_H_
 
 #include "vpx_scale/yv12config.h"
 #include "vpx/vpx_integer.h"
@@ -28,4 +28,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_EXTEND_H_
+#endif  // VPX_VP9_ENCODER_VP9_EXTEND_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_firstpass.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_firstpass.c
index f4fda09..73ac4d6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_firstpass.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_firstpass.c
@@ -44,15 +44,11 @@
 #define COMPLEXITY_STATS_OUTPUT 0
 
 #define FIRST_PASS_Q 10.0
-#define INTRA_MODE_PENALTY 1024
-#define MIN_ARF_GF_BOOST 240
+#define NORMAL_BOOST 100
+#define MIN_ARF_GF_BOOST 250
 #define MIN_DECAY_FACTOR 0.01
 #define NEW_MV_MODE_PENALTY 32
 #define DARK_THRESH 64
-#define DEFAULT_GRP_WEIGHT 1.0
-#define RC_FACTOR_MIN 0.75
-#define RC_FACTOR_MAX 1.75
-#define SECTION_NOISE_DEF 250.0
 #define LOW_I_THRESH 24000
 
 #define NCOUNT_INTRA_THRESH 8192
@@ -88,14 +84,8 @@
   return 1;
 }
 
-static void output_stats(FIRSTPASS_STATS *stats,
-                         struct vpx_codec_pkt_list *pktlist) {
-  struct vpx_codec_cx_pkt pkt;
-  pkt.kind = VPX_CODEC_STATS_PKT;
-  pkt.data.twopass_stats.buf = stats;
-  pkt.data.twopass_stats.sz = sizeof(FIRSTPASS_STATS);
-  vpx_codec_pkt_list_add(pktlist, &pkt);
-
+static void output_stats(FIRSTPASS_STATS *stats) {
+  (void)stats;
 // TEMP debug code
 #if OUTPUT_FPF
   {
@@ -105,7 +95,7 @@
     fprintf(fpfile,
             "%12.0lf %12.4lf %12.2lf %12.2lf %12.2lf %12.0lf %12.4lf %12.4lf"
             "%12.4lf %12.4lf %12.4lf %12.4lf %12.4lf %12.4lf %12.4lf %12.4lf"
-            "%12.4lf %12.4lf %12.4lf %12.4lf %12.4lf %12.0lf %12.0lf %12.0lf"
+            "%12.4lf %12.4lf %12.4lf %12.4lf %12.4lf %12.0lf %12.4lf %12.0lf"
             "%12.4lf"
             "\n",
             stats->frame, stats->weight, stats->intra_error, stats->coded_error,
@@ -224,14 +214,14 @@
 // bars and partially discounts other 0 energy areas.
 #define MIN_ACTIVE_AREA 0.5
 #define MAX_ACTIVE_AREA 1.0
-static double calculate_active_area(const VP9_COMP *cpi,
+static double calculate_active_area(const FRAME_INFO *frame_info,
                                     const FIRSTPASS_STATS *this_frame) {
   double active_pct;
 
   active_pct =
       1.0 -
       ((this_frame->intra_skip_pct / 2) +
-       ((this_frame->inactive_zone_rows * 2) / (double)cpi->common.mb_rows));
+       ((this_frame->inactive_zone_rows * 2) / (double)frame_info->mb_rows));
   return fclamp(active_pct, MIN_ACTIVE_AREA, MAX_ACTIVE_AREA);
 }
 
@@ -264,17 +254,16 @@
   // remaining active MBs. The correction here assumes that coding
   // 0.5N blocks of complexity 2X is a little easier than coding N
   // blocks of complexity X.
-  modified_score *=
-      pow(calculate_active_area(cpi, this_frame), ACT_AREA_CORRECTION);
+  modified_score *= pow(calculate_active_area(&cpi->frame_info, this_frame),
+                        ACT_AREA_CORRECTION);
 
   return modified_score;
 }
 
-static double calculate_norm_frame_score(const VP9_COMP *cpi,
-                                         const TWO_PASS *twopass,
-                                         const VP9EncoderConfig *oxcf,
-                                         const FIRSTPASS_STATS *this_frame,
-                                         const double av_err) {
+static double calc_norm_frame_score(const VP9EncoderConfig *oxcf,
+                                    const FRAME_INFO *frame_info,
+                                    const FIRSTPASS_STATS *this_frame,
+                                    double mean_mod_score, double av_err) {
   double modified_score =
       av_err * pow(this_frame->coded_error * this_frame->weight /
                        DOUBLE_DIVIDE_CHECK(av_err),
@@ -289,14 +278,22 @@
   // 0.5N blocks of complexity 2X is a little easier than coding N
   // blocks of complexity X.
   modified_score *=
-      pow(calculate_active_area(cpi, this_frame), ACT_AREA_CORRECTION);
+      pow(calculate_active_area(frame_info, this_frame), ACT_AREA_CORRECTION);
 
   // Normalize to a midpoint score.
-  modified_score /= DOUBLE_DIVIDE_CHECK(twopass->mean_mod_score);
-
+  modified_score /= DOUBLE_DIVIDE_CHECK(mean_mod_score);
   return fclamp(modified_score, min_score, max_score);
 }
 
+static double calculate_norm_frame_score(const VP9_COMP *cpi,
+                                         const TWO_PASS *twopass,
+                                         const VP9EncoderConfig *oxcf,
+                                         const FIRSTPASS_STATS *this_frame,
+                                         const double av_err) {
+  return calc_norm_frame_score(oxcf, &cpi->frame_info, this_frame,
+                               twopass->mean_mod_score, av_err);
+}
+
 // This function returns the maximum target rate per frame.
 static int frame_max_bits(const RATE_CONTROL *rc,
                           const VP9EncoderConfig *oxcf) {
@@ -316,16 +313,8 @@
 }
 
 void vp9_end_first_pass(VP9_COMP *cpi) {
-  if (is_two_pass_svc(cpi)) {
-    int i;
-    for (i = 0; i < cpi->svc.number_spatial_layers; ++i) {
-      output_stats(&cpi->svc.layer_context[i].twopass.total_stats,
-                   cpi->output_pkt_list);
-    }
-  } else {
-    output_stats(&cpi->twopass.total_stats, cpi->output_pkt_list);
-  }
-
+  output_stats(&cpi->twopass.total_stats);
+  cpi->twopass.first_pass_done = 1;
   vpx_free(cpi->twopass.fp_mb_float_stats);
   cpi->twopass.fp_mb_float_stats = NULL;
 }
@@ -503,11 +492,10 @@
     switch (cm->bit_depth) {
       case VPX_BITS_8: ret_val = thresh; break;
       case VPX_BITS_10: ret_val = thresh << 4; break;
-      case VPX_BITS_12: ret_val = thresh << 8; break;
       default:
-        assert(0 &&
-               "cm->bit_depth should be VPX_BITS_8, "
-               "VPX_BITS_10 or VPX_BITS_12");
+        assert(cm->bit_depth == VPX_BITS_12);
+        ret_val = thresh << 8;
+        break;
     }
   }
 #else
@@ -529,11 +517,10 @@
     switch (cm->bit_depth) {
       case VPX_BITS_8: ret_val = UL_INTRA_THRESH; break;
       case VPX_BITS_10: ret_val = UL_INTRA_THRESH << 2; break;
-      case VPX_BITS_12: ret_val = UL_INTRA_THRESH << 4; break;
       default:
-        assert(0 &&
-               "cm->bit_depth should be VPX_BITS_8, "
-               "VPX_BITS_10 or VPX_BITS_12");
+        assert(cm->bit_depth == VPX_BITS_12);
+        ret_val = UL_INTRA_THRESH << 4;
+        break;
     }
   }
 #else
@@ -550,11 +537,10 @@
     switch (cm->bit_depth) {
       case VPX_BITS_8: ret_val = SMOOTH_INTRA_THRESH; break;
       case VPX_BITS_10: ret_val = SMOOTH_INTRA_THRESH << 4; break;
-      case VPX_BITS_12: ret_val = SMOOTH_INTRA_THRESH << 8; break;
       default:
-        assert(0 &&
-               "cm->bit_depth should be VPX_BITS_8, "
-               "VPX_BITS_10 or VPX_BITS_12");
+        assert(cm->bit_depth == VPX_BITS_12);
+        ret_val = SMOOTH_INTRA_THRESH << 8;
+        break;
     }
   }
 #else
@@ -564,7 +550,7 @@
 }
 
 #define FP_DN_THRESH 8
-#define FP_MAX_DN_THRESH 16
+#define FP_MAX_DN_THRESH 24
 #define KERNEL_SIZE 3
 
 // Baseline Kernal weights for first pass noise metric
@@ -824,6 +810,8 @@
                    fp_acc_data->image_data_start_row);
 }
 
+#define NZ_MOTION_PENALTY 128
+#define INTRA_MODE_PENALTY 1024
 void vp9_first_pass_encode_tile_mb_row(VP9_COMP *cpi, ThreadData *td,
                                        FIRSTPASS_DATA *fp_acc_data,
                                        TileDataEnc *tile_data, MV *best_ref_mv,
@@ -833,6 +821,8 @@
   VP9_COMMON *const cm = &cpi->common;
   MACROBLOCKD *const xd = &x->e_mbd;
   TileInfo tile = tile_data->tile_info;
+  const int mb_col_start = ROUND_POWER_OF_TWO(tile.mi_col_start, 1);
+  const int mb_col_end = ROUND_POWER_OF_TWO(tile.mi_col_end, 1);
   struct macroblock_plane *const p = x->plane;
   struct macroblockd_plane *const pd = xd->plane;
   const PICK_MODE_CONTEXT *ctx = &td->pc_root->none;
@@ -849,40 +839,19 @@
   YV12_BUFFER_CONFIG *const new_yv12 = get_frame_new_buffer(cm);
   const YV12_BUFFER_CONFIG *first_ref_buf = lst_yv12;
 
-  LAYER_CONTEXT *const lc =
-      is_two_pass_svc(cpi) ? &cpi->svc.layer_context[cpi->svc.spatial_layer_id]
-                           : NULL;
   MODE_INFO mi_above, mi_left;
 
   double mb_intra_factor;
   double mb_brightness_factor;
   double mb_neutral_count;
+  int scaled_low_intra_thresh = scale_sse_threshold(cm, LOW_I_THRESH);
 
   // First pass code requires valid last and new frame buffers.
   assert(new_yv12 != NULL);
-  assert((lc != NULL) || frame_is_intra_only(cm) || (lst_yv12 != NULL));
+  assert(frame_is_intra_only(cm) || (lst_yv12 != NULL));
 
-  if (lc != NULL) {
-    // Use either last frame or alt frame for motion search.
-    if (cpi->ref_frame_flags & VP9_LAST_FLAG) {
-      first_ref_buf = vp9_get_scaled_ref_frame(cpi, LAST_FRAME);
-      if (first_ref_buf == NULL)
-        first_ref_buf = get_ref_frame_buffer(cpi, LAST_FRAME);
-    }
-
-    if (cpi->ref_frame_flags & VP9_GOLD_FLAG) {
-      gld_yv12 = vp9_get_scaled_ref_frame(cpi, GOLDEN_FRAME);
-      if (gld_yv12 == NULL) {
-        gld_yv12 = get_ref_frame_buffer(cpi, GOLDEN_FRAME);
-      }
-    } else {
-      gld_yv12 = NULL;
-    }
-  }
-
-  xd->mi = cm->mi_grid_visible + xd->mi_stride * (mb_row << 1) +
-           (tile.mi_col_start >> 1);
-  xd->mi[0] = cm->mi + xd->mi_stride * (mb_row << 1) + (tile.mi_col_start >> 1);
+  xd->mi = cm->mi_grid_visible + xd->mi_stride * (mb_row << 1) + mb_col_start;
+  xd->mi[0] = cm->mi + xd->mi_stride * (mb_row << 1) + mb_col_start;
 
   for (i = 0; i < MAX_MB_PLANE; ++i) {
     p[i].coeff = ctx->coeff_pbuf[i][1];
@@ -896,10 +865,9 @@
   uv_mb_height = 16 >> (new_yv12->y_height > new_yv12->uv_height);
 
   // Reset above block coeffs.
-  recon_yoffset =
-      (mb_row * recon_y_stride * 16) + (tile.mi_col_start >> 1) * 16;
-  recon_uvoffset = (mb_row * recon_uv_stride * uv_mb_height) +
-                   (tile.mi_col_start >> 1) * uv_mb_height;
+  recon_yoffset = (mb_row * recon_y_stride * 16) + mb_col_start * 16;
+  recon_uvoffset =
+      (mb_row * recon_uv_stride * uv_mb_height) + mb_col_start * uv_mb_height;
 
   // Set up limit values for motion vectors to prevent them extending
   // outside the UMV borders.
@@ -907,8 +875,7 @@
   x->mv_limits.row_max =
       ((cm->mb_rows - 1 - mb_row) * 16) + BORDER_MV_PIXELS_B16;
 
-  for (mb_col = tile.mi_col_start >> 1, c = 0; mb_col < (tile.mi_col_end >> 1);
-       ++mb_col, c++) {
+  for (mb_col = mb_col_start, c = 0; mb_col < mb_col_end; ++mb_col, c++) {
     int this_error;
     int this_intra_error;
     const int use_dc_pred = (mb_col || mb_row) && (!mb_col || !mb_row);
@@ -954,7 +921,7 @@
     x->skip_encode = 0;
     x->fp_src_pred = 0;
     // Do intra prediction based on source pixels for tile boundaries
-    if ((mb_col == (tile.mi_col_start >> 1)) && mb_col != 0) {
+    if (mb_col == mb_col_start && mb_col != 0) {
       xd->left_mi = &mi_left;
       x->fp_src_pred = 1;
     }
@@ -1001,12 +968,10 @@
       switch (cm->bit_depth) {
         case VPX_BITS_8: break;
         case VPX_BITS_10: this_error >>= 4; break;
-        case VPX_BITS_12: this_error >>= 8; break;
         default:
-          assert(0 &&
-                 "cm->bit_depth should be VPX_BITS_8, "
-                 "VPX_BITS_10 or VPX_BITS_12");
-          return;
+          assert(cm->bit_depth == VPX_BITS_12);
+          this_error >>= 8;
+          break;
       }
     }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
@@ -1072,30 +1037,34 @@
         ((cm->mb_cols - 1 - mb_col) * 16) + BORDER_MV_PIXELS_B16;
 
     // Other than for the first frame do a motion search.
-    if ((lc == NULL && cm->current_video_frame > 0) ||
-        (lc != NULL && lc->current_video_frame_in_layer > 0)) {
-      int tmp_err, motion_error, raw_motion_error;
+    if (cm->current_video_frame > 0) {
+      int tmp_err, motion_error, this_motion_error, raw_motion_error;
       // Assume 0,0 motion with no mv overhead.
       MV mv = { 0, 0 }, tmp_mv = { 0, 0 };
       struct buf_2d unscaled_last_source_buf_2d;
+      vp9_variance_fn_ptr_t v_fn_ptr = cpi->fn_ptr[bsize];
 
       xd->plane[0].pre[0].buf = first_ref_buf->y_buffer + recon_yoffset;
 #if CONFIG_VP9_HIGHBITDEPTH
       if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
         motion_error = highbd_get_prediction_error(
             bsize, &x->plane[0].src, &xd->plane[0].pre[0], xd->bd);
+        this_motion_error = highbd_get_prediction_error(
+            bsize, &x->plane[0].src, &xd->plane[0].pre[0], 8);
       } else {
         motion_error =
             get_prediction_error(bsize, &x->plane[0].src, &xd->plane[0].pre[0]);
+        this_motion_error = motion_error;
       }
 #else
       motion_error =
           get_prediction_error(bsize, &x->plane[0].src, &xd->plane[0].pre[0]);
+      this_motion_error = motion_error;
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
       // Compute the motion error of the 0,0 motion using the last source
       // frame as the reference. Skip the further motion search on
-      // reconstructed frame if this error is small.
+      // reconstructed frame if this error is very small.
       unscaled_last_source_buf_2d.buf =
           cpi->unscaled_last_source->y_buffer + recon_yoffset;
       unscaled_last_source_buf_2d.stride = cpi->unscaled_last_source->y_stride;
@@ -1112,12 +1081,20 @@
                                               &unscaled_last_source_buf_2d);
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
-      // TODO(pengchong): Replace the hard-coded threshold
-      if (raw_motion_error > 25 || lc != NULL) {
+      if (raw_motion_error > NZ_MOTION_PENALTY) {
         // Test last reference frame using the previous best mv as the
         // starting point (best reference) for the search.
         first_pass_motion_search(cpi, x, best_ref_mv, &mv, &motion_error);
 
+        v_fn_ptr.vf = get_block_variance_fn(bsize);
+#if CONFIG_VP9_HIGHBITDEPTH
+        if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
+          v_fn_ptr.vf = highbd_get_block_variance_fn(bsize, 8);
+        }
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+        this_motion_error =
+            vp9_get_mvpred_var(x, &mv, best_ref_mv, &v_fn_ptr, 0);
+
         // If the current best reference mv is not centered on 0,0 then do a
         // 0,0 based search as well.
         if (!is_zero_mv(best_ref_mv)) {
@@ -1127,13 +1104,13 @@
           if (tmp_err < motion_error) {
             motion_error = tmp_err;
             mv = tmp_mv;
+            this_motion_error =
+                vp9_get_mvpred_var(x, &tmp_mv, &zero_mv, &v_fn_ptr, 0);
           }
         }
 
         // Search in an older reference frame.
-        if (((lc == NULL && cm->current_video_frame > 1) ||
-             (lc != NULL && lc->current_video_frame_in_layer > 1)) &&
-            gld_yv12 != NULL) {
+        if ((cm->current_video_frame > 1) && gld_yv12 != NULL) {
           // Assume 0,0 motion with no mv overhead.
           int gf_motion_error;
 
@@ -1279,7 +1256,6 @@
             }
           }
 #endif
-
           // Does the row vector point inwards or outwards?
           if (mb_row < cm->mb_rows / 2) {
             if (mv.row > 0)
@@ -1305,17 +1281,16 @@
             else if (mv.col < 0)
               --(fp_acc_data->sum_in_vectors);
           }
-          fp_acc_data->frame_noise_energy += (int64_t)SECTION_NOISE_DEF;
-        } else if (this_intra_error < scale_sse_threshold(cm, LOW_I_THRESH)) {
+        }
+        if (this_intra_error < scaled_low_intra_thresh) {
           fp_acc_data->frame_noise_energy += fp_estimate_block_noise(x, bsize);
-        } else {  // 0,0 mv but high error
+        } else {
           fp_acc_data->frame_noise_energy += (int64_t)SECTION_NOISE_DEF;
         }
       } else {  // Intra < inter error
-        int scaled_low_intra_thresh = scale_sse_threshold(cm, LOW_I_THRESH);
         if (this_intra_error < scaled_low_intra_thresh) {
           fp_acc_data->frame_noise_energy += fp_estimate_block_noise(x, bsize);
-          if (motion_error < scaled_low_intra_thresh) {
+          if (this_motion_error < scaled_low_intra_thresh) {
             fp_acc_data->intra_count_low += 1.0;
           } else {
             fp_acc_data->intra_count_high += 1.0;
@@ -1334,7 +1309,7 @@
     recon_uvoffset += uv_mb_height;
 
     // Accumulate row level stats to the corresponding tile stats
-    if (cpi->row_mt && mb_col == (tile.mi_col_end >> 1) - 1)
+    if (cpi->row_mt && mb_col == mb_col_end - 1)
       accumulate_fp_mb_row_stat(tile_data, fp_acc_data);
 
     (*(cpi->row_mt_sync_write_ptr))(&tile_data->row_mt_sync, mb_row, c,
@@ -1371,9 +1346,6 @@
   YV12_BUFFER_CONFIG *const new_yv12 = get_frame_new_buffer(cm);
   const YV12_BUFFER_CONFIG *first_ref_buf = lst_yv12;
 
-  LAYER_CONTEXT *const lc =
-      is_two_pass_svc(cpi) ? &cpi->svc.layer_context[cpi->svc.spatial_layer_id]
-                           : NULL;
   BufferPool *const pool = cm->buffer_pool;
 
   FIRSTPASS_DATA fp_temp_data;
@@ -1385,7 +1357,7 @@
 
   // First pass code requires valid last and new frame buffers.
   assert(new_yv12 != NULL);
-  assert((lc != NULL) || frame_is_intra_only(cm) || (lst_yv12 != NULL));
+  assert(frame_is_intra_only(cm) || (lst_yv12 != NULL));
 
 #if CONFIG_FP_MB_STATS
   if (cpi->use_fp_mb_stats) {
@@ -1394,51 +1366,7 @@
 #endif
 
   set_first_pass_params(cpi);
-  vp9_set_quantizer(cm, find_fp_qindex(cm->bit_depth));
-
-  if (lc != NULL) {
-    twopass = &lc->twopass;
-
-    cpi->lst_fb_idx = cpi->svc.spatial_layer_id;
-    cpi->ref_frame_flags = VP9_LAST_FLAG;
-
-    if (cpi->svc.number_spatial_layers + cpi->svc.spatial_layer_id <
-        REF_FRAMES) {
-      cpi->gld_fb_idx =
-          cpi->svc.number_spatial_layers + cpi->svc.spatial_layer_id;
-      cpi->ref_frame_flags |= VP9_GOLD_FLAG;
-      cpi->refresh_golden_frame = (lc->current_video_frame_in_layer == 0);
-    } else {
-      cpi->refresh_golden_frame = 0;
-    }
-
-    if (lc->current_video_frame_in_layer == 0) cpi->ref_frame_flags = 0;
-
-    vp9_scale_references(cpi);
-
-    // Use either last frame or alt frame for motion search.
-    if (cpi->ref_frame_flags & VP9_LAST_FLAG) {
-      first_ref_buf = vp9_get_scaled_ref_frame(cpi, LAST_FRAME);
-      if (first_ref_buf == NULL)
-        first_ref_buf = get_ref_frame_buffer(cpi, LAST_FRAME);
-    }
-
-    if (cpi->ref_frame_flags & VP9_GOLD_FLAG) {
-      gld_yv12 = vp9_get_scaled_ref_frame(cpi, GOLDEN_FRAME);
-      if (gld_yv12 == NULL) {
-        gld_yv12 = get_ref_frame_buffer(cpi, GOLDEN_FRAME);
-      }
-    } else {
-      gld_yv12 = NULL;
-    }
-
-    set_ref_ptrs(cm, xd,
-                 (cpi->ref_frame_flags & VP9_LAST_FLAG) ? LAST_FRAME : NONE,
-                 (cpi->ref_frame_flags & VP9_GOLD_FLAG) ? GOLDEN_FRAME : NONE);
-
-    cpi->Source = vp9_scale_if_required(cm, cpi->un_scaled_source,
-                                        &cpi->scaled_source, 0, EIGHTTAP, 0);
-  }
+  vp9_set_quantizer(cpi, find_fp_qindex(cm->bit_depth));
 
   vp9_setup_block_planes(&x->e_mbd, cm->subsampling_x, cm->subsampling_y);
 
@@ -1495,7 +1423,7 @@
 
     // Don't want to do output stats with a stack variable!
     twopass->this_frame_stats = fps;
-    output_stats(&twopass->this_frame_stats, cpi->output_pkt_list);
+    output_stats(&twopass->this_frame_stats);
     accumulate_stats(&twopass->total_stats, &fps);
 
 #if CONFIG_FP_MB_STATS
@@ -1523,18 +1451,13 @@
 
   vpx_extend_frame_borders(new_yv12);
 
-  if (lc != NULL) {
-    vp9_update_reference_frames(cpi);
-  } else {
-    // The frame we just compressed now becomes the last frame.
-    ref_cnt_fb(pool->frame_bufs, &cm->ref_frame_map[cpi->lst_fb_idx],
-               cm->new_fb_idx);
-  }
+  // The frame we just compressed now becomes the last frame.
+  ref_cnt_fb(pool->frame_bufs, &cm->ref_frame_map[cpi->lst_fb_idx],
+             cm->new_fb_idx);
 
   // Special case for the first frame. Copy into the GF buffer as a second
   // reference.
-  if (cm->current_video_frame == 0 && cpi->gld_fb_idx != INVALID_IDX &&
-      lc == NULL) {
+  if (cm->current_video_frame == 0 && cpi->gld_fb_idx != INVALID_IDX) {
     ref_cnt_fb(pool->frame_bufs, &cm->ref_frame_map[cpi->gld_fb_idx],
                cm->ref_frame_map[cpi->lst_fb_idx]);
   }
@@ -1559,9 +1482,9 @@
   if (cpi->use_svc) vp9_inc_frame_in_layer(cpi);
 }
 
-static const double q_pow_term[(QINDEX_RANGE >> 5) + 1] = {
-  0.65, 0.70, 0.75, 0.85, 0.90, 0.90, 0.90, 1.00, 1.25
-};
+static const double q_pow_term[(QINDEX_RANGE >> 5) + 1] = { 0.65, 0.70, 0.75,
+                                                            0.85, 0.90, 0.90,
+                                                            0.90, 1.00, 1.25 };
 
 static double calc_correction_factor(double err_per_mb, double err_divisor,
                                      int q) {
@@ -1582,7 +1505,26 @@
   return fclamp(pow(error_term, power_term), 0.05, 5.0);
 }
 
-#define ERR_DIVISOR 115.0
+static double wq_err_divisor(VP9_COMP *cpi) {
+  const VP9_COMMON *const cm = &cpi->common;
+  unsigned int screen_area = (cm->width * cm->height);
+
+  // Use a different error per mb factor for calculating boost for
+  //  different formats.
+  if (screen_area <= 640 * 360) {
+    return 115.0;
+  } else if (screen_area < 1280 * 720) {
+    return 125.0;
+  } else if (screen_area <= 1920 * 1080) {
+    return 130.0;
+  } else if (screen_area < 3840 * 2160) {
+    return 150.0;
+  }
+
+  // Fall through to here only for 4K and above.
+  return 200.0;
+}
+
 #define NOISE_FACTOR_MIN 0.9
 #define NOISE_FACTOR_MAX 1.1
 static int get_twopass_worst_quality(VP9_COMP *cpi, const double section_err,
@@ -1642,7 +1584,7 @@
     // content at the given rate.
     for (q = rc->best_quality; q < rc->worst_quality; ++q) {
       const double factor =
-          calc_correction_factor(av_err_per_mb, ERR_DIVISOR, q);
+          calc_correction_factor(av_err_per_mb, wq_err_divisor(cpi), q);
       const int bits_per_mb = vp9_rc_bits_per_mb(
           INTER_FRAME, q,
           factor * speed_term * cpi->twopass.bpm_factor * noise_factor,
@@ -1689,14 +1631,9 @@
 }
 
 void vp9_init_second_pass(VP9_COMP *cpi) {
-  SVC *const svc = &cpi->svc;
   VP9EncoderConfig *const oxcf = &cpi->oxcf;
-  const int is_two_pass_svc =
-      (svc->number_spatial_layers > 1) || (svc->number_temporal_layers > 1);
   RATE_CONTROL *const rc = &cpi->rc;
-  TWO_PASS *const twopass =
-      is_two_pass_svc ? &svc->layer_context[svc->spatial_layer_id].twopass
-                      : &cpi->twopass;
+  TWO_PASS *const twopass = &cpi->twopass;
   double frame_rate;
   FIRSTPASS_STATS *stats;
 
@@ -1773,18 +1710,9 @@
   // encoded in the second pass is a guess. However, the sum duration is not.
   // It is calculated based on the actual durations of all frames from the
   // first pass.
-
-  if (is_two_pass_svc) {
-    vp9_update_spatial_layer_framerate(cpi, frame_rate);
-    twopass->bits_left =
-        (int64_t)(stats->duration *
-                  svc->layer_context[svc->spatial_layer_id].target_bandwidth /
-                  10000000.0);
-  } else {
-    vp9_new_framerate(cpi, frame_rate);
-    twopass->bits_left =
-        (int64_t)(stats->duration * oxcf->target_bandwidth / 10000000.0);
-  }
+  vp9_new_framerate(cpi, frame_rate);
+  twopass->bits_left =
+      (int64_t)(stats->duration * oxcf->target_bandwidth / 10000000.0);
 
   // This variable monitors how far behind the second ref update is lagging.
   twopass->sr_update_lag = 1;
@@ -1821,15 +1749,16 @@
 #define LOW_CODED_ERR_PER_MB 10.0
 #define NCOUNT_FRAME_II_THRESH 6.0
 
-static double get_sr_decay_rate(const VP9_COMP *cpi,
+static double get_sr_decay_rate(const FRAME_INFO *frame_info,
                                 const FIRSTPASS_STATS *frame) {
   double sr_diff = (frame->sr_coded_error - frame->coded_error);
   double sr_decay = 1.0;
   double modified_pct_inter;
   double modified_pcnt_intra;
   const double motion_amplitude_part =
-      frame->pcnt_motion * ((frame->mvc_abs + frame->mvr_abs) /
-                            (cpi->initial_height + cpi->initial_width));
+      frame->pcnt_motion *
+      ((frame->mvc_abs + frame->mvr_abs) /
+       (frame_info->frame_height + frame_info->frame_width));
 
   modified_pct_inter = frame->pcnt_inter;
   if ((frame->coded_error > LOW_CODED_ERR_PER_MB) &&
@@ -1850,72 +1779,73 @@
 
 // This function gives an estimate of how badly we believe the prediction
 // quality is decaying from frame to frame.
-static double get_zero_motion_factor(const VP9_COMP *cpi,
-                                     const FIRSTPASS_STATS *frame) {
-  const double zero_motion_pct = frame->pcnt_inter - frame->pcnt_motion;
-  double sr_decay = get_sr_decay_rate(cpi, frame);
+static double get_zero_motion_factor(const FRAME_INFO *frame_info,
+                                     const FIRSTPASS_STATS *frame_stats) {
+  const double zero_motion_pct =
+      frame_stats->pcnt_inter - frame_stats->pcnt_motion;
+  double sr_decay = get_sr_decay_rate(frame_info, frame_stats);
   return VPXMIN(sr_decay, zero_motion_pct);
 }
 
 #define ZM_POWER_FACTOR 0.75
 
-static double get_prediction_decay_rate(const VP9_COMP *cpi,
-                                        const FIRSTPASS_STATS *next_frame) {
-  const double sr_decay_rate = get_sr_decay_rate(cpi, next_frame);
+static double get_prediction_decay_rate(const FRAME_INFO *frame_info,
+                                        const FIRSTPASS_STATS *frame_stats) {
+  const double sr_decay_rate = get_sr_decay_rate(frame_info, frame_stats);
   const double zero_motion_factor =
-      (0.95 * pow((next_frame->pcnt_inter - next_frame->pcnt_motion),
+      (0.95 * pow((frame_stats->pcnt_inter - frame_stats->pcnt_motion),
                   ZM_POWER_FACTOR));
 
   return VPXMAX(zero_motion_factor,
                 (sr_decay_rate + ((1.0 - sr_decay_rate) * zero_motion_factor)));
 }
 
+static int get_show_idx(const TWO_PASS *twopass) {
+  return (int)(twopass->stats_in - twopass->stats_in_start);
+}
 // Function to test for a condition where a complex transition is followed
 // by a static section. For example in slide shows where there is a fade
 // between slides. This is to help with more optimal kf and gf positioning.
-static int detect_transition_to_still(VP9_COMP *cpi, int frame_interval,
-                                      int still_interval,
-                                      double loop_decay_rate,
-                                      double last_decay_rate) {
-  TWO_PASS *const twopass = &cpi->twopass;
-  RATE_CONTROL *const rc = &cpi->rc;
-
-  // Break clause to detect very still sections after motion
-  // For example a static image after a fade or other transition
-  // instead of a clean scene cut.
-  if (frame_interval > rc->min_gf_interval && loop_decay_rate >= 0.999 &&
-      last_decay_rate < 0.9) {
-    int j;
-
-    // Look ahead a few frames to see if static condition persists...
-    for (j = 0; j < still_interval; ++j) {
-      const FIRSTPASS_STATS *stats = &twopass->stats_in[j];
-      if (stats >= twopass->stats_in_end) break;
-
-      if (stats->pcnt_inter - stats->pcnt_motion < 0.999) break;
-    }
-
-    // Only if it does do we signal a transition to still.
-    return j == still_interval;
+static int check_transition_to_still(const FIRST_PASS_INFO *first_pass_info,
+                                     int show_idx, int still_interval) {
+  int j;
+  int num_frames = fps_get_num_frames(first_pass_info);
+  if (show_idx + still_interval > num_frames) {
+    return 0;
   }
 
-  return 0;
+  // Look ahead a few frames to see if static condition persists...
+  for (j = 0; j < still_interval; ++j) {
+    const FIRSTPASS_STATS *stats =
+        fps_get_frame_stats(first_pass_info, show_idx + j);
+    if (stats->pcnt_inter - stats->pcnt_motion < 0.999) break;
+  }
+
+  // Only if it does do we signal a transition to still.
+  return j == still_interval;
 }
 
 // This function detects a flash through the high relative pcnt_second_ref
 // score in the frame following a flash frame. The offset passed in should
 // reflect this.
-static int detect_flash(const TWO_PASS *twopass, int offset) {
-  const FIRSTPASS_STATS *const next_frame = read_frame_stats(twopass, offset);
-
+static int detect_flash_from_frame_stats(const FIRSTPASS_STATS *frame_stats) {
   // What we are looking for here is a situation where there is a
   // brief break in prediction (such as a flash) but subsequent frames
   // are reasonably well predicted by an earlier (pre flash) frame.
   // The recovery after a flash is indicated by a high pcnt_second_ref
-  // compared to pcnt_inter.
-  return next_frame != NULL &&
-         next_frame->pcnt_second_ref > next_frame->pcnt_inter &&
-         next_frame->pcnt_second_ref >= 0.5;
+  // useage or a second ref coded error notabley lower than the last
+  // frame coded error.
+  if (frame_stats == NULL) {
+    return 0;
+  }
+  return (frame_stats->sr_coded_error < frame_stats->coded_error) ||
+         ((frame_stats->pcnt_second_ref > frame_stats->pcnt_inter) &&
+          (frame_stats->pcnt_second_ref >= 0.5));
+}
+
+static int detect_flash(const TWO_PASS *twopass, int offset) {
+  const FIRSTPASS_STATS *const next_frame = read_frame_stats(twopass, offset);
+  return detect_flash_from_frame_stats(next_frame);
 }
 
 // Update the motion related elements to the GF arf boost calculation.
@@ -1948,13 +1878,15 @@
 
 #define BASELINE_ERR_PER_MB 12500.0
 #define GF_MAX_BOOST 96.0
-static double calc_frame_boost(VP9_COMP *cpi, const FIRSTPASS_STATS *this_frame,
+static double calc_frame_boost(const FRAME_INFO *frame_info,
+                               const FIRSTPASS_STATS *this_frame,
+                               int avg_frame_qindex,
                                double this_frame_mv_in_out) {
   double frame_boost;
-  const double lq = vp9_convert_qindex_to_q(
-      cpi->rc.avg_frame_qindex[INTER_FRAME], cpi->common.bit_depth);
+  const double lq =
+      vp9_convert_qindex_to_q(avg_frame_qindex, frame_info->bit_depth);
   const double boost_q_correction = VPXMIN((0.5 + (lq * 0.015)), 1.5);
-  const double active_area = calculate_active_area(cpi, this_frame);
+  const double active_area = calculate_active_area(frame_info, this_frame);
 
   // Underlying boost factor is based on inter error ratio.
   frame_boost = (BASELINE_ERR_PER_MB * active_area) /
@@ -1970,7 +1902,20 @@
   return VPXMIN(frame_boost, GF_MAX_BOOST * boost_q_correction);
 }
 
-#define KF_BASELINE_ERR_PER_MB 12500.0
+static double kf_err_per_mb(VP9_COMP *cpi) {
+  const VP9_COMMON *const cm = &cpi->common;
+  unsigned int screen_area = (cm->width * cm->height);
+
+  // Use a different error per mb factor for calculating boost for
+  //  different formats.
+  if (screen_area < 1280 * 720) {
+    return 2000.0;
+  } else if (screen_area < 1920 * 1080) {
+    return 500.0;
+  }
+  return 250.0;
+}
+
 static double calc_kf_frame_boost(VP9_COMP *cpi,
                                   const FIRSTPASS_STATS *this_frame,
                                   double *sr_accumulator,
@@ -1980,10 +1925,11 @@
   const double lq = vp9_convert_qindex_to_q(
       cpi->rc.avg_frame_qindex[INTER_FRAME], cpi->common.bit_depth);
   const double boost_q_correction = VPXMIN((0.50 + (lq * 0.015)), 2.00);
-  const double active_area = calculate_active_area(cpi, this_frame);
+  const double active_area =
+      calculate_active_area(&cpi->frame_info, this_frame);
 
   // Underlying boost factor is based on inter error ratio.
-  frame_boost = (KF_BASELINE_ERR_PER_MB * active_area) /
+  frame_boost = (kf_err_per_mb(cpi) * active_area) /
                 DOUBLE_DIVIDE_CHECK(this_frame->coded_error + *sr_accumulator);
 
   // Update the accumulator for second ref error difference.
@@ -1996,14 +1942,19 @@
   if (this_frame_mv_in_out > 0.0)
     frame_boost += frame_boost * (this_frame_mv_in_out * 2.0);
 
-  // Q correction and scalling
-  frame_boost = frame_boost * boost_q_correction;
+  // Q correction and scaling
+  // The 40.0 value here is an experimentally derived baseline minimum.
+  // This value is in line with the minimum per frame boost in the alt_ref
+  // boost calculation.
+  frame_boost = ((frame_boost + 40.0) * boost_q_correction);
 
   return VPXMIN(frame_boost, max_boost * boost_q_correction);
 }
 
-static int calc_arf_boost(VP9_COMP *cpi, int f_frames, int b_frames) {
-  TWO_PASS *const twopass = &cpi->twopass;
+static int compute_arf_boost(const FRAME_INFO *frame_info,
+                             const FIRST_PASS_INFO *first_pass_info,
+                             int arf_show_idx, int f_frames, int b_frames,
+                             int avg_frame_qindex) {
   int i;
   double boost_score = 0.0;
   double mv_ratio_accumulator = 0.0;
@@ -2016,7 +1967,10 @@
 
   // Search forward from the proposed arf/next gf position.
   for (i = 0; i < f_frames; ++i) {
-    const FIRSTPASS_STATS *this_frame = read_frame_stats(twopass, i);
+    const FIRSTPASS_STATS *this_frame =
+        fps_get_frame_stats(first_pass_info, arf_show_idx + i);
+    const FIRSTPASS_STATS *next_frame =
+        fps_get_frame_stats(first_pass_info, arf_show_idx + i + 1);
     if (this_frame == NULL) break;
 
     // Update the motion related elements to the boost calculation.
@@ -2026,17 +1980,19 @@
 
     // We want to discount the flash frame itself and the recovery
     // frame that follows as both will have poor scores.
-    flash_detected = detect_flash(twopass, i) || detect_flash(twopass, i + 1);
+    flash_detected = detect_flash_from_frame_stats(this_frame) ||
+                     detect_flash_from_frame_stats(next_frame);
 
     // Accumulate the effect of prediction quality decay.
     if (!flash_detected) {
-      decay_accumulator *= get_prediction_decay_rate(cpi, this_frame);
+      decay_accumulator *= get_prediction_decay_rate(frame_info, this_frame);
       decay_accumulator = decay_accumulator < MIN_DECAY_FACTOR
                               ? MIN_DECAY_FACTOR
                               : decay_accumulator;
     }
-    boost_score += decay_accumulator *
-                   calc_frame_boost(cpi, this_frame, this_frame_mv_in_out);
+    boost_score += decay_accumulator * calc_frame_boost(frame_info, this_frame,
+                                                        avg_frame_qindex,
+                                                        this_frame_mv_in_out);
   }
 
   arf_boost = (int)boost_score;
@@ -2051,7 +2007,10 @@
 
   // Search backward towards last gf position.
   for (i = -1; i >= -b_frames; --i) {
-    const FIRSTPASS_STATS *this_frame = read_frame_stats(twopass, i);
+    const FIRSTPASS_STATS *this_frame =
+        fps_get_frame_stats(first_pass_info, arf_show_idx + i);
+    const FIRSTPASS_STATS *next_frame =
+        fps_get_frame_stats(first_pass_info, arf_show_idx + i + 1);
     if (this_frame == NULL) break;
 
     // Update the motion related elements to the boost calculation.
@@ -2061,17 +2020,19 @@
 
     // We want to discount the the flash frame itself and the recovery
     // frame that follows as both will have poor scores.
-    flash_detected = detect_flash(twopass, i) || detect_flash(twopass, i + 1);
+    flash_detected = detect_flash_from_frame_stats(this_frame) ||
+                     detect_flash_from_frame_stats(next_frame);
 
     // Cumulative effect of prediction quality decay.
     if (!flash_detected) {
-      decay_accumulator *= get_prediction_decay_rate(cpi, this_frame);
+      decay_accumulator *= get_prediction_decay_rate(frame_info, this_frame);
       decay_accumulator = decay_accumulator < MIN_DECAY_FACTOR
                               ? MIN_DECAY_FACTOR
                               : decay_accumulator;
     }
-    boost_score += decay_accumulator *
-                   calc_frame_boost(cpi, this_frame, this_frame_mv_in_out);
+    boost_score += decay_accumulator * calc_frame_boost(frame_info, this_frame,
+                                                        avg_frame_qindex,
+                                                        this_frame_mv_in_out);
   }
   arf_boost += (int)boost_score;
 
@@ -2082,6 +2043,15 @@
   return arf_boost;
 }
 
+static int calc_arf_boost(VP9_COMP *cpi, int f_frames, int b_frames) {
+  const FRAME_INFO *frame_info = &cpi->frame_info;
+  TWO_PASS *const twopass = &cpi->twopass;
+  const int avg_inter_frame_qindex = cpi->rc.avg_frame_qindex[INTER_FRAME];
+  int arf_show_idx = get_show_idx(twopass);
+  return compute_arf_boost(frame_info, &twopass->first_pass_info, arf_show_idx,
+                           f_frames, b_frames, avg_inter_frame_qindex);
+}
+
 // Calculate a section intra ratio used in setting max loop filter.
 static int calculate_section_intra_ratio(const FIRSTPASS_STATS *begin,
                                          const FIRSTPASS_STATS *end,
@@ -2104,15 +2074,31 @@
 // Calculate the total bits to allocate in this GF/ARF group.
 static int64_t calculate_total_gf_group_bits(VP9_COMP *cpi,
                                              double gf_group_err) {
+  VP9_COMMON *const cm = &cpi->common;
   const RATE_CONTROL *const rc = &cpi->rc;
   const TWO_PASS *const twopass = &cpi->twopass;
   const int max_bits = frame_max_bits(rc, &cpi->oxcf);
   int64_t total_group_bits;
+  const int is_key_frame = frame_is_intra_only(cm);
+  const int arf_active_or_kf = is_key_frame || rc->source_alt_ref_active;
+  int gop_frames =
+      rc->baseline_gf_interval + rc->source_alt_ref_pending - arf_active_or_kf;
 
   // Calculate the bits to be allocated to the group as a whole.
   if ((twopass->kf_group_bits > 0) && (twopass->kf_group_error_left > 0.0)) {
+    int key_frame_interval = rc->frames_since_key + rc->frames_to_key;
+    int distance_from_next_key_frame =
+        rc->frames_to_key -
+        (rc->baseline_gf_interval + rc->source_alt_ref_pending);
+    int max_gf_bits_bias = rc->avg_frame_bandwidth;
+    double gf_interval_bias_bits_normalize_factor =
+        (double)rc->baseline_gf_interval / 16;
     total_group_bits = (int64_t)(twopass->kf_group_bits *
                                  (gf_group_err / twopass->kf_group_error_left));
+    // TODO(ravi): Experiment with different values of max_gf_bits_bias
+    total_group_bits +=
+        (int64_t)((double)distance_from_next_key_frame / key_frame_interval *
+                  max_gf_bits_bias * gf_interval_bias_bits_normalize_factor);
   } else {
     total_group_bits = 0;
   }
@@ -2125,8 +2111,8 @@
                                : total_group_bits;
 
   // Clip based on user supplied data rate variability limit.
-  if (total_group_bits > (int64_t)max_bits * rc->baseline_gf_interval)
-    total_group_bits = (int64_t)max_bits * rc->baseline_gf_interval;
+  if (total_group_bits > (int64_t)max_bits * gop_frames)
+    total_group_bits = (int64_t)max_bits * gop_frames;
 
   return total_group_bits;
 }
@@ -2139,7 +2125,7 @@
   // return 0 for invalid inputs (could arise e.g. through rounding errors)
   if (!boost || (total_group_bits <= 0) || (frame_count < 0)) return 0;
 
-  allocation_chunks = (frame_count * 100) + boost;
+  allocation_chunks = (frame_count * NORMAL_BOOST) + boost;
 
   // Prevent overflow.
   if (boost > 1023) {
@@ -2153,18 +2139,6 @@
                 0);
 }
 
-// Current limit on maximum number of active arfs in a GF/ARF group.
-#define MAX_ACTIVE_ARFS 2
-#define ARF_SLOT1 2
-#define ARF_SLOT2 3
-// This function indirects the choice of buffers for arfs.
-// At the moment the values are fixed but this may change as part of
-// the integration process with other codec features that swap buffers around.
-static void get_arf_buffer_indices(unsigned char *arf_buffer_indices) {
-  arf_buffer_indices[0] = ARF_SLOT1;
-  arf_buffer_indices[1] = ARF_SLOT2;
-}
-
 // Used in corpus vbr: Calculates the total normalized group complexity score
 // for a given number of frames starting at the current position in the stats
 // file.
@@ -2184,11 +2158,129 @@
     ++s;
     ++i;
   }
-  assert(i == frame_count);
 
   return score_total;
 }
 
+static void find_arf_order(VP9_COMP *cpi, GF_GROUP *gf_group,
+                           int *index_counter, int depth, int start, int end) {
+  TWO_PASS *twopass = &cpi->twopass;
+  const FIRSTPASS_STATS *const start_pos = twopass->stats_in;
+  FIRSTPASS_STATS fpf_frame;
+  const int mid = (start + end + 1) >> 1;
+  const int min_frame_interval = 2;
+  int idx;
+
+  // Process regular P frames
+  if ((end - start < min_frame_interval) ||
+      (depth > gf_group->allowed_max_layer_depth)) {
+    for (idx = start; idx <= end; ++idx) {
+      gf_group->update_type[*index_counter] = LF_UPDATE;
+      gf_group->arf_src_offset[*index_counter] = 0;
+      gf_group->frame_gop_index[*index_counter] = idx;
+      gf_group->rf_level[*index_counter] = INTER_NORMAL;
+      gf_group->layer_depth[*index_counter] = depth;
+      gf_group->gfu_boost[*index_counter] = NORMAL_BOOST;
+      ++(*index_counter);
+    }
+    gf_group->max_layer_depth = VPXMAX(gf_group->max_layer_depth, depth);
+    return;
+  }
+
+  assert(abs(mid - start) >= 1 && abs(mid - end) >= 1);
+
+  // Process ARF frame
+  gf_group->layer_depth[*index_counter] = depth;
+  gf_group->update_type[*index_counter] = ARF_UPDATE;
+  gf_group->arf_src_offset[*index_counter] = mid - start;
+  gf_group->frame_gop_index[*index_counter] = mid;
+  gf_group->rf_level[*index_counter] = GF_ARF_LOW;
+
+  for (idx = 0; idx <= mid; ++idx)
+    if (EOF == input_stats(twopass, &fpf_frame)) break;
+
+  gf_group->gfu_boost[*index_counter] =
+      VPXMAX(MIN_ARF_GF_BOOST,
+             calc_arf_boost(cpi, end - mid + 1, mid - start) >> depth);
+
+  reset_fpf_position(twopass, start_pos);
+
+  ++(*index_counter);
+
+  find_arf_order(cpi, gf_group, index_counter, depth + 1, start, mid - 1);
+
+  gf_group->update_type[*index_counter] = USE_BUF_FRAME;
+  gf_group->arf_src_offset[*index_counter] = 0;
+  gf_group->frame_gop_index[*index_counter] = mid;
+  gf_group->rf_level[*index_counter] = INTER_NORMAL;
+  gf_group->layer_depth[*index_counter] = depth;
+  ++(*index_counter);
+
+  find_arf_order(cpi, gf_group, index_counter, depth + 1, mid + 1, end);
+}
+
+static INLINE void set_gf_overlay_frame_type(GF_GROUP *gf_group,
+                                             int frame_index,
+                                             int source_alt_ref_active) {
+  if (source_alt_ref_active) {
+    gf_group->update_type[frame_index] = OVERLAY_UPDATE;
+    gf_group->rf_level[frame_index] = INTER_NORMAL;
+    gf_group->layer_depth[frame_index] = MAX_ARF_LAYERS - 1;
+    gf_group->gfu_boost[frame_index] = NORMAL_BOOST;
+  } else {
+    gf_group->update_type[frame_index] = GF_UPDATE;
+    gf_group->rf_level[frame_index] = GF_ARF_STD;
+    gf_group->layer_depth[frame_index] = 0;
+  }
+}
+
+static void define_gf_group_structure(VP9_COMP *cpi) {
+  RATE_CONTROL *const rc = &cpi->rc;
+  TWO_PASS *const twopass = &cpi->twopass;
+  GF_GROUP *const gf_group = &twopass->gf_group;
+  int frame_index = 0;
+  int key_frame = cpi->common.frame_type == KEY_FRAME;
+  int layer_depth = 1;
+  int gop_frames =
+      rc->baseline_gf_interval - (key_frame || rc->source_alt_ref_pending);
+
+  gf_group->frame_start = cpi->common.current_video_frame;
+  gf_group->frame_end = gf_group->frame_start + rc->baseline_gf_interval;
+  gf_group->max_layer_depth = 0;
+  gf_group->allowed_max_layer_depth = 0;
+
+  // For key frames the frame target rate is already set and it
+  // is also the golden frame.
+  // === [frame_index == 0] ===
+  if (!key_frame)
+    set_gf_overlay_frame_type(gf_group, frame_index, rc->source_alt_ref_active);
+
+  ++frame_index;
+
+  // === [frame_index == 1] ===
+  if (rc->source_alt_ref_pending) {
+    gf_group->update_type[frame_index] = ARF_UPDATE;
+    gf_group->rf_level[frame_index] = GF_ARF_STD;
+    gf_group->layer_depth[frame_index] = layer_depth;
+    gf_group->arf_src_offset[frame_index] =
+        (unsigned char)(rc->baseline_gf_interval - 1);
+    gf_group->frame_gop_index[frame_index] = rc->baseline_gf_interval;
+    gf_group->max_layer_depth = 1;
+    ++frame_index;
+    ++layer_depth;
+    gf_group->allowed_max_layer_depth = cpi->oxcf.enable_auto_arf;
+  }
+
+  find_arf_order(cpi, gf_group, &frame_index, layer_depth, 1, gop_frames);
+
+  set_gf_overlay_frame_type(gf_group, frame_index, rc->source_alt_ref_pending);
+  gf_group->arf_src_offset[frame_index] = 0;
+  gf_group->frame_gop_index[frame_index] = rc->baseline_gf_interval;
+
+  // Set the frame ops number.
+  gf_group->gf_group_size = frame_index;
+}
+
 static void allocate_gf_group_bits(VP9_COMP *cpi, int64_t gf_group_bits,
                                    int gf_arf_bits) {
   VP9EncoderConfig *const oxcf = &cpi->oxcf;
@@ -2197,17 +2289,12 @@
   GF_GROUP *const gf_group = &twopass->gf_group;
   FIRSTPASS_STATS frame_stats;
   int i;
-  int frame_index = 1;
+  int frame_index = 0;
   int target_frame_size;
   int key_frame;
   const int max_bits = frame_max_bits(&cpi->rc, oxcf);
   int64_t total_group_bits = gf_group_bits;
-  int mid_boost_bits = 0;
   int mid_frame_idx;
-  unsigned char arf_buffer_indices[MAX_ACTIVE_ARFS];
-  int alt_frame_index = frame_index;
-  int has_temporal_layers =
-      is_two_pass_svc(cpi) && cpi->svc.number_temporal_layers > 1;
   int normal_frames;
   int normal_frame_bits;
   int last_frame_reduction = 0;
@@ -2215,79 +2302,97 @@
   double tot_norm_frame_score = 1.0;
   double this_frame_score = 1.0;
 
-  // Only encode alt reference frame in temporal base layer.
-  if (has_temporal_layers) alt_frame_index = cpi->svc.number_temporal_layers;
+  // The GF group structure has already been defined; read back its size.
+  int gop_frames = gf_group->gf_group_size;
 
-  key_frame =
-      cpi->common.frame_type == KEY_FRAME || vp9_is_upper_layer_key_frame(cpi);
-
-  get_arf_buffer_indices(arf_buffer_indices);
+  key_frame = cpi->common.frame_type == KEY_FRAME;
 
   // For key frames the frame target rate is already set and it
   // is also the golden frame.
+  // === [frame_index == 0] ===
   if (!key_frame) {
-    if (rc->source_alt_ref_active) {
-      gf_group->update_type[0] = OVERLAY_UPDATE;
-      gf_group->rf_level[0] = INTER_NORMAL;
-      gf_group->bit_allocation[0] = 0;
-    } else {
-      gf_group->update_type[0] = GF_UPDATE;
-      gf_group->rf_level[0] = GF_ARF_STD;
-      gf_group->bit_allocation[0] = gf_arf_bits;
-    }
-    gf_group->arf_update_idx[0] = arf_buffer_indices[0];
-    gf_group->arf_ref_idx[0] = arf_buffer_indices[0];
+    gf_group->bit_allocation[frame_index] =
+        rc->source_alt_ref_active ? 0 : gf_arf_bits;
   }
 
   // Deduct the boost bits for arf (or gf if it is not a key frame)
   // from the group total.
   if (rc->source_alt_ref_pending || !key_frame) total_group_bits -= gf_arf_bits;
 
+  ++frame_index;
+
+  // === [frame_index == 1] ===
   // Store the bits to spend on the ARF if there is one.
   if (rc->source_alt_ref_pending) {
-    gf_group->update_type[alt_frame_index] = ARF_UPDATE;
-    gf_group->rf_level[alt_frame_index] = GF_ARF_STD;
-    gf_group->bit_allocation[alt_frame_index] = gf_arf_bits;
+    gf_group->bit_allocation[frame_index] = gf_arf_bits;
 
-    if (has_temporal_layers)
-      gf_group->arf_src_offset[alt_frame_index] =
-          (unsigned char)(rc->baseline_gf_interval -
-                          cpi->svc.number_temporal_layers);
-    else
-      gf_group->arf_src_offset[alt_frame_index] =
-          (unsigned char)(rc->baseline_gf_interval - 1);
-
-    gf_group->arf_update_idx[alt_frame_index] = arf_buffer_indices[0];
-    gf_group->arf_ref_idx[alt_frame_index] =
-        arf_buffer_indices[cpi->multi_arf_last_grp_enabled &&
-                           rc->source_alt_ref_active];
-    if (!has_temporal_layers) ++frame_index;
-
-    if (cpi->multi_arf_enabled) {
-      // Set aside a slot for a level 1 arf.
-      gf_group->update_type[frame_index] = ARF_UPDATE;
-      gf_group->rf_level[frame_index] = GF_ARF_LOW;
-      gf_group->arf_src_offset[frame_index] =
-          (unsigned char)((rc->baseline_gf_interval >> 1) - 1);
-      gf_group->arf_update_idx[frame_index] = arf_buffer_indices[1];
-      gf_group->arf_ref_idx[frame_index] = arf_buffer_indices[0];
-      ++frame_index;
-    }
+    ++frame_index;
   }
 
-  // Note index of the first normal inter frame int eh group (not gf kf arf)
-  gf_group->first_inter_index = frame_index;
-
   // Define middle frame
   mid_frame_idx = frame_index + (rc->baseline_gf_interval >> 1) - 1;
 
-  normal_frames =
-      rc->baseline_gf_interval - (key_frame || rc->source_alt_ref_pending);
+  normal_frames = (rc->baseline_gf_interval - 1);
   if (normal_frames > 1)
     normal_frame_bits = (int)(total_group_bits / normal_frames);
   else
     normal_frame_bits = (int)total_group_bits;
 
+  gf_group->gfu_boost[1] = rc->gfu_boost;
+
+  if (cpi->multi_layer_arf) {
+    int idx;
+    int arf_depth_bits[MAX_ARF_LAYERS] = { 0 };
+    int arf_depth_count[MAX_ARF_LAYERS] = { 0 };
+    int arf_depth_boost[MAX_ARF_LAYERS] = { 0 };
+    int total_arfs = 1;  // Account for the base layer ARF.
+
+    for (idx = 0; idx < gop_frames; ++idx) {
+      if (gf_group->update_type[idx] == ARF_UPDATE) {
+        arf_depth_boost[gf_group->layer_depth[idx]] += gf_group->gfu_boost[idx];
+        ++arf_depth_count[gf_group->layer_depth[idx]];
+      }
+    }
+
+    for (idx = 2; idx < MAX_ARF_LAYERS; ++idx) {
+      if (arf_depth_boost[idx] == 0) break;
+      arf_depth_bits[idx] = calculate_boost_bits(
+          rc->baseline_gf_interval - total_arfs - arf_depth_count[idx],
+          arf_depth_boost[idx], total_group_bits);
+
+      total_group_bits -= arf_depth_bits[idx];
+      total_arfs += arf_depth_count[idx];
+    }
+
+    // Exclude the higher-layer ARFs from the normal frame count.
+    normal_frames -= (total_arfs - 1);
+    if (normal_frames > 1)
+      normal_frame_bits = (int)(total_group_bits / normal_frames);
+    else
+      normal_frame_bits = (int)total_group_bits;
+
+    target_frame_size = normal_frame_bits;
+    target_frame_size =
+        clamp(target_frame_size, 0, VPXMIN(max_bits, (int)total_group_bits));
+
+    // The base layer ARF already has its bit allocation; assign the rest here.
+    for (idx = frame_index; idx < gop_frames; ++idx) {
+      switch (gf_group->update_type[idx]) {
+        case ARF_UPDATE:
+          gf_group->bit_allocation[idx] =
+              (int)(((int64_t)arf_depth_bits[gf_group->layer_depth[idx]] *
+                     gf_group->gfu_boost[idx]) /
+                    arf_depth_boost[gf_group->layer_depth[idx]]);
+          break;
+        case USE_BUF_FRAME: gf_group->bit_allocation[idx] = 0; break;
+        default: gf_group->bit_allocation[idx] = target_frame_size; break;
+      }
+    }
+    gf_group->bit_allocation[idx] = 0;
+
+    return;
+  }
+
   if (oxcf->vbr_corpus_complexity) {
     av_score = get_distribution_av_err(cpi, twopass);
     tot_norm_frame_score = calculate_group_score(cpi, av_score, normal_frames);
@@ -2295,13 +2400,7 @@
 
   // Allocate bits to the other frames in the group.
   for (i = 0; i < normal_frames; ++i) {
-    int arf_idx = 0;
     if (EOF == input_stats(twopass, &frame_stats)) break;
-
-    if (has_temporal_layers && frame_index == alt_frame_index) {
-      ++frame_index;
-    }
-
     if (oxcf->vbr_corpus_complexity) {
       this_frame_score = calculate_norm_frame_score(cpi, twopass, oxcf,
                                                     &frame_stats, av_score);
@@ -2315,21 +2414,9 @@
       target_frame_size -= last_frame_reduction;
     }
 
-    if (rc->source_alt_ref_pending && cpi->multi_arf_enabled) {
-      mid_boost_bits += (target_frame_size >> 4);
-      target_frame_size -= (target_frame_size >> 4);
-
-      if (frame_index <= mid_frame_idx) arf_idx = 1;
-    }
-    gf_group->arf_update_idx[frame_index] = arf_buffer_indices[arf_idx];
-    gf_group->arf_ref_idx[frame_index] = arf_buffer_indices[arf_idx];
-
     target_frame_size =
         clamp(target_frame_size, 0, VPXMIN(max_bits, (int)total_group_bits));
 
-    gf_group->update_type[frame_index] = LF_UPDATE;
-    gf_group->rf_level[frame_index] = INTER_NORMAL;
-
     gf_group->bit_allocation[frame_index] = target_frame_size;
     ++frame_index;
   }
@@ -2341,27 +2428,6 @@
   // We need to configure the frame at the end of the sequence + 1 that will be
   // the start frame for the next group. Otherwise prior to the call to
   // vp9_rc_get_second_pass_params() the data will be undefined.
-  gf_group->arf_update_idx[frame_index] = arf_buffer_indices[0];
-  gf_group->arf_ref_idx[frame_index] = arf_buffer_indices[0];
-
-  if (rc->source_alt_ref_pending) {
-    gf_group->update_type[frame_index] = OVERLAY_UPDATE;
-    gf_group->rf_level[frame_index] = INTER_NORMAL;
-
-    // Final setup for second arf and its overlay.
-    if (cpi->multi_arf_enabled) {
-      gf_group->bit_allocation[2] =
-          gf_group->bit_allocation[mid_frame_idx] + mid_boost_bits;
-      gf_group->update_type[mid_frame_idx] = OVERLAY_UPDATE;
-      gf_group->bit_allocation[mid_frame_idx] = 0;
-    }
-  } else {
-    gf_group->update_type[frame_index] = GF_UPDATE;
-    gf_group->rf_level[frame_index] = GF_ARF_STD;
-  }
-
-  // Note whether multi-arf was enabled this group for next time.
-  cpi->multi_arf_last_grp_enabled = cpi->multi_arf_enabled;
 }
 
 // Adjusts the ARNF filter for a GF group.
@@ -2373,23 +2439,263 @@
 
   twopass->arnr_strength_adjustment = 0;
 
-  if ((section_zeromv < 0.10) || (section_noise <= (SECTION_NOISE_DEF * 0.75)))
+  if (section_noise < 150) {
     twopass->arnr_strength_adjustment -= 1;
+    if (section_noise < 75) twopass->arnr_strength_adjustment -= 1;
+  } else if (section_noise > 250)
+    twopass->arnr_strength_adjustment += 1;
+
   if (section_zeromv > 0.50) twopass->arnr_strength_adjustment += 1;
 }
 
 // Analyse and define a gf/arf group.
-#define ARF_DECAY_BREAKOUT 0.10
 #define ARF_ABS_ZOOM_THRESH 4.0
 
-static void define_gf_group(VP9_COMP *cpi, FIRSTPASS_STATS *this_frame) {
+#define MAX_GF_BOOST 5400
+
+typedef struct RANGE {
+  int min;
+  int max;
+} RANGE;
+
+/* get_gop_coding_frame_num() depends on several fields in RATE_CONTROL *rc as
+ * follows.
+ * Static fields:
+ * (The following fields will remain unchanged after initialization of encoder.)
+ *   rc->static_scene_max_gf_interval
+ *   rc->min_gf_interval
+ *
+ * Dynamic fields:
+ * (The following fields will be updated before or after coding each frame.)
+ *   rc->frames_to_key
+ *   rc->frames_since_key
+ *   rc->source_alt_ref_active
+ *
+ * Special case: if CONFIG_RATE_CTRL is true, the external arf indexes will
+ * determine the arf position.
+ *
+ * TODO(angiebird): Separate the dynamic fields and static fields into two
+ * structs.
+ */
+static int get_gop_coding_frame_num(
+#if CONFIG_RATE_CTRL
+    const int *external_arf_indexes,
+#endif
+    int *use_alt_ref, const FRAME_INFO *frame_info,
+    const FIRST_PASS_INFO *first_pass_info, const RATE_CONTROL *rc,
+    int gf_start_show_idx, const RANGE *active_gf_interval,
+    double gop_intra_factor, int lag_in_frames) {
+  double loop_decay_rate = 1.00;
+  double mv_ratio_accumulator = 0.0;
+  double this_frame_mv_in_out = 0.0;
+  double mv_in_out_accumulator = 0.0;
+  double abs_mv_in_out_accumulator = 0.0;
+  double sr_accumulator = 0.0;
+  // Motion breakout threshold for loop below depends on image size.
+  double mv_ratio_accumulator_thresh =
+      (frame_info->frame_height + frame_info->frame_width) / 4.0;
+  double zero_motion_accumulator = 1.0;
+  int gop_coding_frames;
+#if CONFIG_RATE_CTRL
+  (void)mv_ratio_accumulator_thresh;
+  (void)active_gf_interval;
+  (void)gop_intra_factor;
+
+  if (external_arf_indexes != NULL && rc->frames_to_key > 1) {
+    // gop_coding_frames = 1 is necessary to filter out the overlay frame,
+    // since the arf is in this group of pictures and its overlay is in the
+    // next.
+    gop_coding_frames = 1;
+    *use_alt_ref = 1;
+    while (gop_coding_frames < rc->frames_to_key) {
+      const int frame_index = gf_start_show_idx + gop_coding_frames;
+      ++gop_coding_frames;
+      if (external_arf_indexes[frame_index] == 1) break;
+    }
+    return gop_coding_frames;
+  }
+#endif  // CONFIG_RATE_CTRL
+
+  *use_alt_ref = 1;
+  gop_coding_frames = 0;
+  while (gop_coding_frames < rc->static_scene_max_gf_interval &&
+         gop_coding_frames < rc->frames_to_key) {
+    const FIRSTPASS_STATS *next_next_frame;
+    const FIRSTPASS_STATS *next_frame;
+    int flash_detected;
+    ++gop_coding_frames;
+
+    next_frame = fps_get_frame_stats(first_pass_info,
+                                     gf_start_show_idx + gop_coding_frames);
+    if (next_frame == NULL) {
+      break;
+    }
+
+    // Test for the case where there is a brief flash but prediction
+    // quality relative to an earlier frame is then restored.
+    next_next_frame = fps_get_frame_stats(
+        first_pass_info, gf_start_show_idx + gop_coding_frames + 1);
+    flash_detected = detect_flash_from_frame_stats(next_next_frame);
+
+    // Update the motion related elements to the boost calculation.
+    accumulate_frame_motion_stats(
+        next_frame, &this_frame_mv_in_out, &mv_in_out_accumulator,
+        &abs_mv_in_out_accumulator, &mv_ratio_accumulator);
+
+    // Monitor for static sections.
+    if ((rc->frames_since_key + gop_coding_frames - 1) > 1) {
+      zero_motion_accumulator =
+          VPXMIN(zero_motion_accumulator,
+                 get_zero_motion_factor(frame_info, next_frame));
+    }
+
+    // Accumulate the effect of prediction quality decay.
+    if (!flash_detected) {
+      double last_loop_decay_rate = loop_decay_rate;
+      loop_decay_rate = get_prediction_decay_rate(frame_info, next_frame);
+
+      // Break clause to detect very still sections after motion. For example,
+      // a static image after a fade or other transition.
+      if (gop_coding_frames > rc->min_gf_interval && loop_decay_rate >= 0.999 &&
+          last_loop_decay_rate < 0.9) {
+        int still_interval = 5;
+        if (check_transition_to_still(first_pass_info,
+                                      gf_start_show_idx + gop_coding_frames,
+                                      still_interval)) {
+          *use_alt_ref = 0;
+          break;
+        }
+      }
+
+      // Update the accumulator for second ref error difference.
+      // This is intended to give an indication of how much the coded error is
+      // increasing over time.
+      if (gop_coding_frames == 1) {
+        sr_accumulator += next_frame->coded_error;
+      } else {
+        sr_accumulator +=
+            (next_frame->sr_coded_error - next_frame->coded_error);
+      }
+    }
+
+    // Break out conditions.
+    // Break at maximum of active_gf_interval->max unless almost totally
+    // static.
+    //
+    // Note that the addition of a test of rc->source_alt_ref_active is
+    // deliberate. The effect of this is that after a normal altref group even
+    // if the material is static there will be one normal length GF group
+    // before allowing longer GF groups. The reason for this is that in cases
+    // such as slide shows where slides are separated by a complex transition
+    // such as a fade, the arf group spanning the transition may not be coded
+    // at a very high quality and hence this frame (with its overlay) is a
+    // poor golden frame to use for an extended group.
+    if ((gop_coding_frames >= active_gf_interval->max) &&
+        ((zero_motion_accumulator < 0.995) || (rc->source_alt_ref_active))) {
+      break;
+    }
+    if (
+        // Don't break out with a very short interval.
+        (gop_coding_frames >= active_gf_interval->min) &&
+        // If possible, don't break very close to a key frame.
+        ((rc->frames_to_key - gop_coding_frames) >= rc->min_gf_interval) &&
+        (gop_coding_frames & 0x01) && (!flash_detected) &&
+        ((mv_ratio_accumulator > mv_ratio_accumulator_thresh) ||
+         (abs_mv_in_out_accumulator > ARF_ABS_ZOOM_THRESH) ||
+         (sr_accumulator > gop_intra_factor * next_frame->intra_error))) {
+      break;
+    }
+  }
+  *use_alt_ref &= zero_motion_accumulator < 0.995;
+  *use_alt_ref &= gop_coding_frames < lag_in_frames;
+  *use_alt_ref &= gop_coding_frames >= rc->min_gf_interval;
+  return gop_coding_frames;
+}
+
+static RANGE get_active_gf_inverval_range(
+    const FRAME_INFO *frame_info, const RATE_CONTROL *rc, int arf_active_or_kf,
+    int gf_start_show_idx, int active_worst_quality, int last_boosted_qindex) {
+  RANGE active_gf_interval;
+#if CONFIG_RATE_CTRL
+  (void)frame_info;
+  (void)gf_start_show_idx;
+  (void)active_worst_quality;
+  (void)last_boosted_qindex;
+  active_gf_interval.min = rc->min_gf_interval + arf_active_or_kf + 2;
+
+  active_gf_interval.max = 16 + arf_active_or_kf;
+
+  if ((active_gf_interval.max <= rc->frames_to_key) &&
+      (active_gf_interval.max >= (rc->frames_to_key - rc->min_gf_interval))) {
+    active_gf_interval.min = rc->frames_to_key / 2;
+    active_gf_interval.max = rc->frames_to_key / 2;
+  }
+#else
+  int int_max_q = (int)(vp9_convert_qindex_to_q(active_worst_quality,
+                                                frame_info->bit_depth));
+  int q_term = (gf_start_show_idx == 0)
+                   ? int_max_q / 32
+                   : (int)(vp9_convert_qindex_to_q(last_boosted_qindex,
+                                                   frame_info->bit_depth) /
+                           6);
+  active_gf_interval.min =
+      rc->min_gf_interval + arf_active_or_kf + VPXMIN(2, int_max_q / 200);
+  active_gf_interval.min =
+      VPXMIN(active_gf_interval.min, rc->max_gf_interval + arf_active_or_kf);
+
+  // The value chosen depends on the active Q range. At low Q we have
+  // bits to spare and are better with a smaller interval and smaller boost.
+  // At high Q when there are few bits to spare we are better with a longer
+  // interval to spread the cost of the GF.
+  active_gf_interval.max = 11 + arf_active_or_kf + VPXMIN(5, q_term);
+
+  // Force max GF interval to be odd.
+  active_gf_interval.max = active_gf_interval.max | 0x01;
+
+  // We have: active_gf_interval.min <=
+  // rc->max_gf_interval + arf_active_or_kf.
+  if (active_gf_interval.max < active_gf_interval.min) {
+    active_gf_interval.max = active_gf_interval.min;
+  } else {
+    active_gf_interval.max =
+        VPXMIN(active_gf_interval.max, rc->max_gf_interval + arf_active_or_kf);
+  }
+
+  // Would the active max drop us out just short of the next kf?
+  if ((active_gf_interval.max <= rc->frames_to_key) &&
+      (active_gf_interval.max >= (rc->frames_to_key - rc->min_gf_interval))) {
+    active_gf_interval.max = rc->frames_to_key / 2;
+  }
+  active_gf_interval.max =
+      VPXMAX(active_gf_interval.max, active_gf_interval.min);
+#endif
+  return active_gf_interval;
+}
+
+static int get_arf_layers(int multi_layer_arf, int max_layers,
+                          int coding_frame_num) {
+  assert(max_layers <= MAX_ARF_LAYERS);
+  if (multi_layer_arf) {
+    int layers = 0;
+    int i;
+    for (i = coding_frame_num; i > 0; i >>= 1) {
+      ++layers;
+    }
+    layers = VPXMIN(max_layers, layers);
+    return layers;
+  } else {
+    return 1;
+  }
+}
+
+static void define_gf_group(VP9_COMP *cpi, int gf_start_show_idx) {
   VP9_COMMON *const cm = &cpi->common;
   RATE_CONTROL *const rc = &cpi->rc;
   VP9EncoderConfig *const oxcf = &cpi->oxcf;
   TWO_PASS *const twopass = &cpi->twopass;
-  FIRSTPASS_STATS next_frame;
+  const FRAME_INFO *frame_info = &cpi->frame_info;
+  const FIRST_PASS_INFO *first_pass_info = &twopass->first_pass_info;
   const FIRSTPASS_STATS *const start_pos = twopass->stats_in;
-  int i;
+  int gop_coding_frames;
 
   double gf_group_err = 0.0;
   double gf_group_raw_error = 0.0;
@@ -2398,30 +2704,21 @@
   double gf_group_inactive_zone_rows = 0.0;
   double gf_group_inter = 0.0;
   double gf_group_motion = 0.0;
-  double gf_first_frame_err = 0.0;
-  double mod_frame_err = 0.0;
 
-  double mv_ratio_accumulator = 0.0;
-  double zero_motion_accumulator = 1.0;
-  double loop_decay_rate = 1.00;
-  double last_loop_decay_rate = 1.00;
+  int allow_alt_ref = is_altref_enabled(cpi);
+  int use_alt_ref;
 
-  double this_frame_mv_in_out = 0.0;
-  double mv_in_out_accumulator = 0.0;
-  double abs_mv_in_out_accumulator = 0.0;
-  double mv_ratio_accumulator_thresh;
-  double abs_mv_in_out_thresh;
-  double sr_accumulator = 0.0;
-  const double av_err = get_distribution_av_err(cpi, twopass);
-  unsigned int allow_alt_ref = is_altref_enabled(cpi);
-
-  int flash_detected;
-  int active_max_gf_interval;
-  int active_min_gf_interval;
   int64_t gf_group_bits;
   int gf_arf_bits;
   const int is_key_frame = frame_is_intra_only(cm);
+  // If this is a key frame or the overlay from a previous arf then
+  // the error score / cost of this frame has already been accounted for.
   const int arf_active_or_kf = is_key_frame || rc->source_alt_ref_active;
+  int is_alt_ref_flash = 0;
+
+  double gop_intra_factor;
+  int gop_frames;
+  RANGE active_gf_interval;
 
   // Reset the GF group data structures unless this is a key
   // frame in which case it will already have been done.
@@ -2430,224 +2727,158 @@
   }
 
   vpx_clear_system_state();
-  vp9_zero(next_frame);
 
-  // Load stats for the current frame.
-  mod_frame_err =
-      calculate_norm_frame_score(cpi, twopass, oxcf, this_frame, av_err);
+  active_gf_interval = get_active_gf_inverval_range(
+      frame_info, rc, arf_active_or_kf, gf_start_show_idx,
+      twopass->active_worst_quality, rc->last_boosted_qindex);
 
-  // Note the error of the frame at the start of the group. This will be
-  // the GF frame error if we code a normal gf.
-  gf_first_frame_err = mod_frame_err;
-
-  // If this is a key frame or the overlay from a previous arf then
-  // the error score / cost of this frame has already been accounted for.
-  if (arf_active_or_kf) {
-    gf_group_err -= gf_first_frame_err;
-    gf_group_raw_error -= this_frame->coded_error;
-    gf_group_noise -= this_frame->frame_noise_energy;
-    gf_group_skip_pct -= this_frame->intra_skip_pct;
-    gf_group_inactive_zone_rows -= this_frame->inactive_zone_rows;
-    gf_group_inter -= this_frame->pcnt_inter;
-    gf_group_motion -= this_frame->pcnt_motion;
+  if (cpi->multi_layer_arf) {
+    int arf_layers = get_arf_layers(cpi->multi_layer_arf, oxcf->enable_auto_arf,
+                                    active_gf_interval.max);
+    gop_intra_factor = 1.0 + 0.25 * arf_layers;
+  } else {
+    gop_intra_factor = 1.0;
   }
 
-  // Motion breakout threshold for loop below depends on image size.
-  mv_ratio_accumulator_thresh =
-      (cpi->initial_height + cpi->initial_width) / 4.0;
-  abs_mv_in_out_thresh = ARF_ABS_ZOOM_THRESH;
-
-  // Set a maximum and minimum interval for the GF group.
-  // If the image appears almost completely static we can extend beyond this.
   {
-    int int_max_q = (int)(vp9_convert_qindex_to_q(twopass->active_worst_quality,
-                                                  cpi->common.bit_depth));
-    int int_lbq = (int)(vp9_convert_qindex_to_q(rc->last_boosted_qindex,
-                                                cpi->common.bit_depth));
-    active_min_gf_interval =
-        rc->min_gf_interval + arf_active_or_kf + VPXMIN(2, int_max_q / 200);
-    active_min_gf_interval =
-        VPXMIN(active_min_gf_interval, rc->max_gf_interval + arf_active_or_kf);
-
-    if (cpi->multi_arf_allowed) {
-      active_max_gf_interval = rc->max_gf_interval;
-    } else {
-      // The value chosen depends on the active Q range. At low Q we have
-      // bits to spare and are better with a smaller interval and smaller boost.
-      // At high Q when there are few bits to spare we are better with a longer
-      // interval to spread the cost of the GF.
-      active_max_gf_interval = 12 + arf_active_or_kf + VPXMIN(4, (int_lbq / 6));
-
-      // We have: active_min_gf_interval <=
-      // rc->max_gf_interval + arf_active_or_kf.
-      if (active_max_gf_interval < active_min_gf_interval) {
-        active_max_gf_interval = active_min_gf_interval;
-      } else {
-        active_max_gf_interval = VPXMIN(active_max_gf_interval,
-                                        rc->max_gf_interval + arf_active_or_kf);
-      }
-
-      // Would the active max drop us out just before the near the next kf?
-      if ((active_max_gf_interval <= rc->frames_to_key) &&
-          (active_max_gf_interval >= (rc->frames_to_key - rc->min_gf_interval)))
-        active_max_gf_interval = rc->frames_to_key / 2;
-    }
-  }
-
-  i = 0;
-  while (i < rc->static_scene_max_gf_interval && i < rc->frames_to_key) {
-    ++i;
-
-    // Accumulate error score of frames in this gf group.
-    mod_frame_err =
-        calculate_norm_frame_score(cpi, twopass, oxcf, this_frame, av_err);
-    gf_group_err += mod_frame_err;
-    gf_group_raw_error += this_frame->coded_error;
-    gf_group_noise += this_frame->frame_noise_energy;
-    gf_group_skip_pct += this_frame->intra_skip_pct;
-    gf_group_inactive_zone_rows += this_frame->inactive_zone_rows;
-    gf_group_inter += this_frame->pcnt_inter;
-    gf_group_motion += this_frame->pcnt_motion;
-
-    if (EOF == input_stats(twopass, &next_frame)) break;
-
-    // Test for the case where there is a brief flash but the prediction
-    // quality back to an earlier frame is then restored.
-    flash_detected = detect_flash(twopass, 0);
-
-    // Update the motion related elements to the boost calculation.
-    accumulate_frame_motion_stats(
-        &next_frame, &this_frame_mv_in_out, &mv_in_out_accumulator,
-        &abs_mv_in_out_accumulator, &mv_ratio_accumulator);
-
-    // Accumulate the effect of prediction quality decay.
-    if (!flash_detected) {
-      last_loop_decay_rate = loop_decay_rate;
-      loop_decay_rate = get_prediction_decay_rate(cpi, &next_frame);
-
-      // Monitor for static sections.
-      zero_motion_accumulator = VPXMIN(
-          zero_motion_accumulator, get_zero_motion_factor(cpi, &next_frame));
-
-      // Break clause to detect very still sections after motion. For example,
-      // a static image after a fade or other transition.
-      if (detect_transition_to_still(cpi, i, 5, loop_decay_rate,
-                                     last_loop_decay_rate)) {
-        allow_alt_ref = 0;
-        break;
-      }
-
-      // Update the accumulator for second ref error difference.
-      // This is intended to give an indication of how much the coded error is
-      // increasing over time.
-      if (i == 1) {
-        sr_accumulator += next_frame.coded_error;
-      } else {
-        sr_accumulator += (next_frame.sr_coded_error - next_frame.coded_error);
-      }
-    }
-
-    // Break out conditions.
-    // Break at maximum of active_max_gf_interval unless almost totally static.
-    if (((twopass->kf_zeromotion_pct < STATIC_KF_GROUP_THRESH) &&
-         (i >= active_max_gf_interval) && (zero_motion_accumulator < 0.995)) ||
-        (
-            // Don't break out with a very short interval.
-            (i >= active_min_gf_interval) &&
-            // If possible dont break very close to a kf
-            ((rc->frames_to_key - i) >= rc->min_gf_interval) &&
-            (!flash_detected) &&
-            ((mv_ratio_accumulator > mv_ratio_accumulator_thresh) ||
-             (abs_mv_in_out_accumulator > abs_mv_in_out_thresh) ||
-             (sr_accumulator > next_frame.intra_error)))) {
-      break;
-    }
-
-    *this_frame = next_frame;
+    gop_coding_frames = get_gop_coding_frame_num(
+#if CONFIG_RATE_CTRL
+        cpi->encode_command.external_arf_indexes,
+#endif
+        &use_alt_ref, frame_info, first_pass_info, rc, gf_start_show_idx,
+        &active_gf_interval, gop_intra_factor, cpi->oxcf.lag_in_frames);
+    use_alt_ref &= allow_alt_ref;
   }
 
   // Was the group length constrained by the requirement for a new KF?
-  rc->constrained_gf_group = (i >= rc->frames_to_key) ? 1 : 0;
+  rc->constrained_gf_group = (gop_coding_frames >= rc->frames_to_key) ? 1 : 0;
 
   // Should we use the alternate reference frame.
-  if ((twopass->kf_zeromotion_pct < STATIC_KF_GROUP_THRESH) && allow_alt_ref &&
-      (i < cpi->oxcf.lag_in_frames) && (i >= rc->min_gf_interval)) {
-    const int forward_frames = (rc->frames_to_key - i >= i - 1)
-                                   ? i - 1
-                                   : VPXMAX(0, rc->frames_to_key - i);
+  if (use_alt_ref) {
+    const int f_frames =
+        (rc->frames_to_key - gop_coding_frames >= gop_coding_frames - 1)
+            ? gop_coding_frames - 1
+            : VPXMAX(0, rc->frames_to_key - gop_coding_frames);
+    const int b_frames = gop_coding_frames - 1;
+    const int avg_inter_frame_qindex = rc->avg_frame_qindex[INTER_FRAME];
+    // TODO(angiebird): figure out why arf's location is assigned this way
+    const int arf_show_idx = VPXMIN(gf_start_show_idx + gop_coding_frames + 1,
+                                    fps_get_num_frames(first_pass_info));
 
     // Calculate the boost for alt ref.
-    rc->gfu_boost = calc_arf_boost(cpi, forward_frames, (i - 1));
+    rc->gfu_boost =
+        compute_arf_boost(frame_info, first_pass_info, arf_show_idx, f_frames,
+                          b_frames, avg_inter_frame_qindex);
     rc->source_alt_ref_pending = 1;
-
-    // Test to see if multi arf is appropriate.
-    cpi->multi_arf_enabled =
-        (cpi->multi_arf_allowed && (rc->baseline_gf_interval >= 6) &&
-         (zero_motion_accumulator < 0.995))
-            ? 1
-            : 0;
   } else {
-    rc->gfu_boost = calc_arf_boost(cpi, 0, (i - 1));
+    const int f_frames = gop_coding_frames - 1;
+    const int b_frames = 0;
+    const int avg_inter_frame_qindex = rc->avg_frame_qindex[INTER_FRAME];
+    // TODO(angiebird): figure out why arf's location is assigned this way
+    const int gld_show_idx =
+        VPXMIN(gf_start_show_idx + 1, fps_get_num_frames(first_pass_info));
+    const int arf_boost =
+        compute_arf_boost(frame_info, first_pass_info, gld_show_idx, f_frames,
+                          b_frames, avg_inter_frame_qindex);
+    rc->gfu_boost = VPXMIN(MAX_GF_BOOST, arf_boost);
     rc->source_alt_ref_pending = 0;
   }
 
+#define LAST_ALR_ACTIVE_BEST_QUALITY_ADJUSTMENT_FACTOR 0.2
+  rc->arf_active_best_quality_adjustment_factor = 1.0;
+  rc->arf_increase_active_best_quality = 0;
+
+  if (!is_lossless_requested(&cpi->oxcf)) {
+    if (rc->frames_since_key >= rc->frames_to_key) {
+      // Increase the active best quality in the second half of key frame
+      // interval.
+      rc->arf_active_best_quality_adjustment_factor =
+          LAST_ALR_ACTIVE_BEST_QUALITY_ADJUSTMENT_FACTOR +
+          (1.0 - LAST_ALR_ACTIVE_BEST_QUALITY_ADJUSTMENT_FACTOR) *
+              (rc->frames_to_key - gop_coding_frames) /
+              (VPXMAX(1, ((rc->frames_to_key + rc->frames_since_key) / 2 -
+                          gop_coding_frames)));
+      rc->arf_increase_active_best_quality = 1;
+    } else if ((rc->frames_to_key - gop_coding_frames) > 0) {
+      // Reduce the active best quality in the first half of key frame interval.
+      rc->arf_active_best_quality_adjustment_factor =
+          LAST_ALR_ACTIVE_BEST_QUALITY_ADJUSTMENT_FACTOR +
+          (1.0 - LAST_ALR_ACTIVE_BEST_QUALITY_ADJUSTMENT_FACTOR) *
+              (rc->frames_since_key + gop_coding_frames) /
+              (VPXMAX(1, (rc->frames_to_key + rc->frames_since_key) / 2 +
+                             gop_coding_frames));
+      rc->arf_increase_active_best_quality = -1;
+    }
+  }
+
 #ifdef AGGRESSIVE_VBR
   // Limit maximum boost based on interval length.
-  rc->gfu_boost = VPXMIN((int)rc->gfu_boost, i * 140);
+  rc->gfu_boost = VPXMIN((int)rc->gfu_boost, gop_coding_frames * 140);
 #else
-  rc->gfu_boost = VPXMIN((int)rc->gfu_boost, i * 200);
+  rc->gfu_boost = VPXMIN((int)rc->gfu_boost, gop_coding_frames * 200);
 #endif
 
-  // Set the interval until the next gf.
-  rc->baseline_gf_interval =
-      (twopass->kf_zeromotion_pct < STATIC_KF_GROUP_THRESH)
-          ? (i - (is_key_frame || rc->source_alt_ref_pending))
-          : i;
+  // Cap the ARF boost when perceptual quality AQ mode is enabled. This is
+  // designed to improve the perceptual quality of high value content and to
+  // maintain consistent quality across consecutive frames. It will hurt
+  // objective quality.
+  if (oxcf->aq_mode == PERCEPTUAL_AQ)
+    rc->gfu_boost = VPXMIN(rc->gfu_boost, MIN_ARF_GF_BOOST);
 
-  // Only encode alt reference frame in temporal base layer. So
-  // baseline_gf_interval should be multiple of a temporal layer group
-  // (typically the frame distance between two base layer frames)
-  if (is_two_pass_svc(cpi) && cpi->svc.number_temporal_layers > 1) {
-    int count = (1 << (cpi->svc.number_temporal_layers - 1)) - 1;
-    int new_gf_interval = (rc->baseline_gf_interval + count) & (~count);
+  rc->baseline_gf_interval = gop_coding_frames - rc->source_alt_ref_pending;
+
+  if (rc->source_alt_ref_pending)
+    is_alt_ref_flash = detect_flash(twopass, rc->baseline_gf_interval);
+
+  {
+    const double av_err = get_distribution_av_err(cpi, twopass);
+    const double mean_mod_score = twopass->mean_mod_score;
+    // If the first frame is a key frame or the overlay from a previous arf then
+    // the error score / cost of this frame has already been accounted for.
+    int start_idx = arf_active_or_kf ? 1 : 0;
     int j;
-    for (j = 0; j < new_gf_interval - rc->baseline_gf_interval; ++j) {
-      if (EOF == input_stats(twopass, this_frame)) break;
-      gf_group_err +=
-          calculate_norm_frame_score(cpi, twopass, oxcf, this_frame, av_err);
-      gf_group_raw_error += this_frame->coded_error;
-      gf_group_noise += this_frame->frame_noise_energy;
-      gf_group_skip_pct += this_frame->intra_skip_pct;
-      gf_group_inactive_zone_rows += this_frame->inactive_zone_rows;
-      gf_group_inter += this_frame->pcnt_inter;
-      gf_group_motion += this_frame->pcnt_motion;
+    for (j = start_idx; j < gop_coding_frames; ++j) {
+      int show_idx = gf_start_show_idx + j;
+      const FIRSTPASS_STATS *frame_stats =
+          fps_get_frame_stats(first_pass_info, show_idx);
+      // Accumulate error score of frames in this gf group.
+      gf_group_err += calc_norm_frame_score(oxcf, frame_info, frame_stats,
+                                            mean_mod_score, av_err);
+      gf_group_raw_error += frame_stats->coded_error;
+      gf_group_noise += frame_stats->frame_noise_energy;
+      gf_group_skip_pct += frame_stats->intra_skip_pct;
+      gf_group_inactive_zone_rows += frame_stats->inactive_zone_rows;
+      gf_group_inter += frame_stats->pcnt_inter;
+      gf_group_motion += frame_stats->pcnt_motion;
     }
-    rc->baseline_gf_interval = new_gf_interval;
   }
 
-  rc->frames_till_gf_update_due = rc->baseline_gf_interval;
-
-  // Reset the file position.
-  reset_fpf_position(twopass, start_pos);
-
   // Calculate the bits to be allocated to the gf/arf group as a whole
   gf_group_bits = calculate_total_gf_group_bits(cpi, gf_group_err);
 
+  gop_frames =
+      rc->baseline_gf_interval + rc->source_alt_ref_pending - arf_active_or_kf;
+
+  // Store the average noise level measured for the group.
+  // TODO(any): Experiment with removal of else condition (gop_frames = 0) so
+  // that consumption of group noise energy is based on previous gf group
+  if (gop_frames > 0)
+    twopass->gf_group.group_noise_energy = (int)(gf_group_noise / gop_frames);
+  else
+    twopass->gf_group.group_noise_energy = 0;
+
   // Calculate an estimate of the maxq needed for the group.
   // We are more aggressive about correcting for sections
   // where there could be significant overshoot than for easier
   // sections where we do not wish to risk creating an overshoot
   // of the allocated bit budget.
   if ((cpi->oxcf.rc_mode != VPX_Q) && (rc->baseline_gf_interval > 1)) {
-    const int vbr_group_bits_per_frame =
-        (int)(gf_group_bits / rc->baseline_gf_interval);
-    const double group_av_err = gf_group_raw_error / rc->baseline_gf_interval;
-    const double group_av_noise = gf_group_noise / rc->baseline_gf_interval;
-    const double group_av_skip_pct =
-        gf_group_skip_pct / rc->baseline_gf_interval;
-    const double group_av_inactive_zone =
-        ((gf_group_inactive_zone_rows * 2) /
-         (rc->baseline_gf_interval * (double)cm->mb_rows));
+    const int vbr_group_bits_per_frame = (int)(gf_group_bits / gop_frames);
+    const double group_av_err = gf_group_raw_error / gop_frames;
+    const double group_av_noise = gf_group_noise / gop_frames;
+    const double group_av_skip_pct = gf_group_skip_pct / gop_frames;
+    const double group_av_inactive_zone = ((gf_group_inactive_zone_rows * 2) /
+                                           (gop_frames * (double)cm->mb_rows));
     int tmp_q = get_twopass_worst_quality(
         cpi, group_av_err, (group_av_skip_pct + group_av_inactive_zone),
         group_av_noise, vbr_group_bits_per_frame);
@@ -2663,20 +2894,23 @@
 
   // Context Adjustment of ARNR filter strength
   if (rc->baseline_gf_interval > 1) {
-    adjust_group_arnr_filter(cpi, (gf_group_noise / rc->baseline_gf_interval),
-                             (gf_group_inter / rc->baseline_gf_interval),
-                             (gf_group_motion / rc->baseline_gf_interval));
+    adjust_group_arnr_filter(cpi, (gf_group_noise / gop_frames),
+                             (gf_group_inter / gop_frames),
+                             (gf_group_motion / gop_frames));
   } else {
     twopass->arnr_strength_adjustment = 0;
   }
 
   // Calculate the extra bits to be used for boosted frame(s)
-  gf_arf_bits = calculate_boost_bits(rc->baseline_gf_interval, rc->gfu_boost,
-                                     gf_group_bits);
+  gf_arf_bits = calculate_boost_bits((rc->baseline_gf_interval - 1),
+                                     rc->gfu_boost, gf_group_bits);
 
   // Adjust KF group bits and error remaining.
   twopass->kf_group_error_left -= gf_group_err;
 
+  // Decide GOP structure.
+  define_gf_group_structure(cpi);
+
   // Allocate bits to each of the frames in the GF group.
   allocate_gf_group_bits(cpi, gf_group_bits, gf_arf_bits);
 
@@ -2684,10 +2918,8 @@
   reset_fpf_position(twopass, start_pos);
 
   // Calculate a section intra ratio used in setting max loop filter.
-  if (cpi->common.frame_type != KEY_FRAME) {
-    twopass->section_intra_rating = calculate_section_intra_ratio(
-        start_pos, twopass->stats_in_end, rc->baseline_gf_interval);
-  }
+  twopass->section_intra_rating = calculate_section_intra_ratio(
+      start_pos, twopass->stats_in_end, rc->baseline_gf_interval);
 
   if (oxcf->resize_mode == RESIZE_DYNAMIC) {
     // Default to starting GF groups at normal frame size.
@@ -2698,6 +2930,12 @@
   twopass->rolling_arf_group_target_bits = 0;
   twopass->rolling_arf_group_actual_bits = 0;
 #endif
+  rc->preserve_arf_as_gld = rc->preserve_next_arf_as_gld;
+  rc->preserve_next_arf_as_gld = 0;
+  // If the alt ref frame is a flash, do not set preserve_arf_as_gld
+  if (!is_lossless_requested(&cpi->oxcf) && !cpi->use_svc &&
+      cpi->oxcf.aq_mode == NO_AQ && cpi->multi_layer_arf && !is_alt_ref_flash)
+    rc->preserve_next_arf_as_gld = 1;
 }
 
 // Intra / Inter threshold very low
@@ -2720,17 +2958,54 @@
          (this_frame->coded_error > (next_frame->coded_error * ERROR_SPIKE));
 }
 
-// Threshold for use of the lagging second reference frame. High second ref
-// usage may point to a transient event like a flash or occlusion rather than
-// a real scene cut.
-#define SECOND_REF_USEAGE_THRESH 0.1
+// This test looks for anomalous changes in the nature of the intra signal
+// related to the previous and next frame as an indicator for coding a key
+// frame. This test serves to detect some additional scene cuts,
+// especially in lowish motion and low contrast sections, that are missed
+// by the other tests.
+static int intra_step_transition(const FIRSTPASS_STATS *this_frame,
+                                 const FIRSTPASS_STATS *last_frame,
+                                 const FIRSTPASS_STATS *next_frame) {
+  double last_ii_ratio;
+  double this_ii_ratio;
+  double next_ii_ratio;
+  double last_pcnt_intra = 1.0 - last_frame->pcnt_inter;
+  double this_pcnt_intra = 1.0 - this_frame->pcnt_inter;
+  double next_pcnt_intra = 1.0 - next_frame->pcnt_inter;
+  double mod_this_intra = this_pcnt_intra + this_frame->pcnt_neutral;
+
+  // Calculate the ii ratio for this frame, the last frame and the next frame.
+  last_ii_ratio =
+      last_frame->intra_error / DOUBLE_DIVIDE_CHECK(last_frame->coded_error);
+  this_ii_ratio =
+      this_frame->intra_error / DOUBLE_DIVIDE_CHECK(this_frame->coded_error);
+  next_ii_ratio =
+      next_frame->intra_error / DOUBLE_DIVIDE_CHECK(next_frame->coded_error);
+
+  // Return true if the intra/inter ratio for the current frame is
+  // low but better in the next and previous frame and the relative usage of
+  // intra in the current frame is markedly higher than the last and next frame.
+  if ((this_ii_ratio < 2.0) && (last_ii_ratio > 2.25) &&
+      (next_ii_ratio > 2.25) && (this_pcnt_intra > (3 * last_pcnt_intra)) &&
+      (this_pcnt_intra > (3 * next_pcnt_intra)) &&
+      ((this_pcnt_intra > 0.075) || (mod_this_intra > 0.85))) {
+    return 1;
+    // Very low inter intra ratio (i.e. not much gain from inter coding), most
+    // blocks neutral on coding method and better inter prediction either side
+  } else if ((this_ii_ratio < 1.25) && (mod_this_intra > 0.85) &&
+             (this_ii_ratio < last_ii_ratio * 0.9) &&
+             (this_ii_ratio < next_ii_ratio * 0.9)) {
+    return 1;
+  } else {
+    return 0;
+  }
+}
+
 // Minimum % intra coding observed in first pass (1.0 = 100%)
 #define MIN_INTRA_LEVEL 0.25
-// Minimum ratio between the % of intra coding and inter coding in the first
-// pass after discounting neutral blocks (discounting neutral blocks in this
-// way helps catch scene cuts in clips with very flat areas or letter box
-// format clips with image padding.
-#define INTRA_VS_INTER_THRESH 2.0
+// Threshold for use of the lagging second reference frame. Scene cuts do not
+// usually have a high second ref usage.
+#define SECOND_REF_USEAGE_THRESH 0.2
 // Hard threshold where the first pass chooses intra for almost all blocks.
 // In such a case even if the frame is not a scene cut coding a key frame
 // may be a good option.
@@ -2738,84 +3013,75 @@
 // Maximum threshold for the relative ratio of intra error score vs best
 // inter error score.
 #define KF_II_ERR_THRESHOLD 2.5
-// In real scene cuts there is almost always a sharp change in the intra
-// or inter error score.
-#define ERR_CHANGE_THRESHOLD 0.4
-// For real scene cuts we expect an improvment in the intra inter error
-// ratio in the next frame.
-#define II_IMPROVEMENT_THRESHOLD 3.5
 #define KF_II_MAX 128.0
 #define II_FACTOR 12.5
 // Test for very low intra complexity which could cause false key frames
 #define V_LOW_INTRA 0.5
 
-static int test_candidate_kf(TWO_PASS *twopass,
-                             const FIRSTPASS_STATS *last_frame,
-                             const FIRSTPASS_STATS *this_frame,
-                             const FIRSTPASS_STATS *next_frame) {
+static int test_candidate_kf(const FIRST_PASS_INFO *first_pass_info,
+                             int show_idx) {
+  const FIRSTPASS_STATS *last_frame =
+      fps_get_frame_stats(first_pass_info, show_idx - 1);
+  const FIRSTPASS_STATS *this_frame =
+      fps_get_frame_stats(first_pass_info, show_idx);
+  const FIRSTPASS_STATS *next_frame =
+      fps_get_frame_stats(first_pass_info, show_idx + 1);
   int is_viable_kf = 0;
   double pcnt_intra = 1.0 - this_frame->pcnt_inter;
-  double modified_pcnt_inter =
-      this_frame->pcnt_inter - this_frame->pcnt_neutral;
 
   // Does the frame satisfy the primary criteria of a key frame?
   // See above for an explanation of the test criteria.
   // If so, then examine how well it predicts subsequent frames.
-  if ((this_frame->pcnt_second_ref < SECOND_REF_USEAGE_THRESH) &&
-      (next_frame->pcnt_second_ref < SECOND_REF_USEAGE_THRESH) &&
+  detect_flash_from_frame_stats(next_frame);
+  if (!detect_flash_from_frame_stats(this_frame) &&
+      !detect_flash_from_frame_stats(next_frame) &&
+      (this_frame->pcnt_second_ref < SECOND_REF_USEAGE_THRESH) &&
       ((this_frame->pcnt_inter < VERY_LOW_INTER_THRESH) ||
        (slide_transition(this_frame, last_frame, next_frame)) ||
-       ((pcnt_intra > MIN_INTRA_LEVEL) &&
-        (pcnt_intra > (INTRA_VS_INTER_THRESH * modified_pcnt_inter)) &&
+       (intra_step_transition(this_frame, last_frame, next_frame)) ||
+       (((this_frame->coded_error > (next_frame->coded_error * 1.2)) &&
+         (this_frame->coded_error > (last_frame->coded_error * 1.2))) &&
+        (pcnt_intra > MIN_INTRA_LEVEL) &&
+        ((pcnt_intra + this_frame->pcnt_neutral) > 0.5) &&
         ((this_frame->intra_error /
           DOUBLE_DIVIDE_CHECK(this_frame->coded_error)) <
-         KF_II_ERR_THRESHOLD) &&
-        ((fabs(last_frame->coded_error - this_frame->coded_error) /
-              DOUBLE_DIVIDE_CHECK(this_frame->coded_error) >
-          ERR_CHANGE_THRESHOLD) ||
-         (fabs(last_frame->intra_error - this_frame->intra_error) /
-              DOUBLE_DIVIDE_CHECK(this_frame->intra_error) >
-          ERR_CHANGE_THRESHOLD) ||
-         ((next_frame->intra_error /
-           DOUBLE_DIVIDE_CHECK(next_frame->coded_error)) >
-          II_IMPROVEMENT_THRESHOLD))))) {
+         KF_II_ERR_THRESHOLD)))) {
     int i;
-    const FIRSTPASS_STATS *start_pos = twopass->stats_in;
-    FIRSTPASS_STATS local_next_frame = *next_frame;
     double boost_score = 0.0;
     double old_boost_score = 0.0;
     double decay_accumulator = 1.0;
 
     // Examine how well the key frame predicts subsequent frames.
     for (i = 0; i < 16; ++i) {
-      double next_iiratio = (II_FACTOR * local_next_frame.intra_error /
-                             DOUBLE_DIVIDE_CHECK(local_next_frame.coded_error));
+      const FIRSTPASS_STATS *frame_stats =
+          fps_get_frame_stats(first_pass_info, show_idx + 1 + i);
+      double next_iiratio = (II_FACTOR * frame_stats->intra_error /
+                             DOUBLE_DIVIDE_CHECK(frame_stats->coded_error));
 
       if (next_iiratio > KF_II_MAX) next_iiratio = KF_II_MAX;
 
       // Cumulative effect of decay in prediction quality.
-      if (local_next_frame.pcnt_inter > 0.85)
-        decay_accumulator *= local_next_frame.pcnt_inter;
+      if (frame_stats->pcnt_inter > 0.85)
+        decay_accumulator *= frame_stats->pcnt_inter;
       else
-        decay_accumulator *= (0.85 + local_next_frame.pcnt_inter) / 2.0;
+        decay_accumulator *= (0.85 + frame_stats->pcnt_inter) / 2.0;
 
       // Keep a running total.
       boost_score += (decay_accumulator * next_iiratio);
 
       // Test various breakout clauses.
-      if ((local_next_frame.pcnt_inter < 0.05) || (next_iiratio < 1.5) ||
-          (((local_next_frame.pcnt_inter - local_next_frame.pcnt_neutral) <
-            0.20) &&
+      if ((frame_stats->pcnt_inter < 0.05) || (next_iiratio < 1.5) ||
+          (((frame_stats->pcnt_inter - frame_stats->pcnt_neutral) < 0.20) &&
            (next_iiratio < 3.0)) ||
           ((boost_score - old_boost_score) < 3.0) ||
-          (local_next_frame.intra_error < V_LOW_INTRA)) {
+          (frame_stats->intra_error < V_LOW_INTRA)) {
         break;
       }
 
       old_boost_score = boost_score;
 
       // Get the next frame details
-      if (EOF == input_stats(twopass, &local_next_frame)) break;
+      if (show_idx + 1 + i == fps_get_num_frames(first_pass_info) - 1) break;
     }
 
     // If there is tolerable prediction for at least the next 3 frames then
@@ -2823,9 +3089,6 @@
     if (boost_score > 30.0 && (i > 3)) {
       is_viable_kf = 1;
     } else {
-      // Reset the file position
-      reset_fpf_position(twopass, start_pos);
-
       is_viable_kf = 0;
     }
   }
@@ -2835,7 +3098,9 @@
 
 #define FRAMES_TO_CHECK_DECAY 8
 #define MIN_KF_TOT_BOOST 300
-#define KF_BOOST_SCAN_MAX_FRAMES 32
+#define DEFAULT_SCAN_FRAMES_FOR_KF_BOOST 32
+#define MAX_SCAN_FRAMES_FOR_KF_BOOST 48
+#define MIN_SCAN_FRAMES_FOR_KF_BOOST 32
 #define KF_ABS_ZOOM_THRESH 6.0
 
 #ifdef AGGRESSIVE_VBR
@@ -2846,29 +3111,99 @@
 #define MAX_KF_TOT_BOOST 5400
 #endif
 
-static void find_next_key_frame(VP9_COMP *cpi, FIRSTPASS_STATS *this_frame) {
-  int i, j;
+int vp9_get_frames_to_next_key(const VP9EncoderConfig *oxcf,
+                               const FRAME_INFO *frame_info,
+                               const FIRST_PASS_INFO *first_pass_info,
+                               int kf_show_idx, int min_gf_interval) {
+  double recent_loop_decay[FRAMES_TO_CHECK_DECAY];
+  int j;
+  int frames_to_key;
+  int max_frames_to_key = first_pass_info->num_frames - kf_show_idx;
+  max_frames_to_key = VPXMIN(max_frames_to_key, oxcf->key_freq);
+
+  // Initialize the decay rates for the recent frames to check
+  for (j = 0; j < FRAMES_TO_CHECK_DECAY; ++j) recent_loop_decay[j] = 1.0;
+  // Find the next keyframe.
+  if (!oxcf->auto_key) {
+    frames_to_key = max_frames_to_key;
+  } else {
+    frames_to_key = 1;
+    while (frames_to_key < max_frames_to_key) {
+      // Provided that we are not at the end of the file...
+      if (kf_show_idx + frames_to_key + 1 < first_pass_info->num_frames) {
+        double loop_decay_rate;
+        double decay_accumulator;
+        const FIRSTPASS_STATS *next_frame = fps_get_frame_stats(
+            first_pass_info, kf_show_idx + frames_to_key + 1);
+
+        // Check for a scene cut.
+        if (test_candidate_kf(first_pass_info, kf_show_idx + frames_to_key))
+          break;
+
+        // How fast is the prediction quality decaying?
+        loop_decay_rate = get_prediction_decay_rate(frame_info, next_frame);
+
+        // We want to know something about the recent past... rather than
+        // as used elsewhere where we are concerned with decay in prediction
+        // quality since the last GF or KF.
+        recent_loop_decay[(frames_to_key - 1) % FRAMES_TO_CHECK_DECAY] =
+            loop_decay_rate;
+        decay_accumulator = 1.0;
+        for (j = 0; j < FRAMES_TO_CHECK_DECAY; ++j)
+          decay_accumulator *= recent_loop_decay[j];
+
+        // Special check for transition or high motion followed by a
+        // static scene.
+        if ((frames_to_key - 1) > min_gf_interval && loop_decay_rate >= 0.999 &&
+            decay_accumulator < 0.9) {
+          int still_interval = oxcf->key_freq - (frames_to_key - 1);
+          // TODO(angiebird): Figure out why we use "+1" here
+          int show_idx = kf_show_idx + frames_to_key;
+          if (check_transition_to_still(first_pass_info, show_idx,
+                                        still_interval)) {
+            break;
+          }
+        }
+      }
+      ++frames_to_key;
+    }
+  }
+  return frames_to_key;
+}
+
+static void find_next_key_frame(VP9_COMP *cpi, int kf_show_idx) {
+  int i;
   RATE_CONTROL *const rc = &cpi->rc;
   TWO_PASS *const twopass = &cpi->twopass;
   GF_GROUP *const gf_group = &twopass->gf_group;
   const VP9EncoderConfig *const oxcf = &cpi->oxcf;
-  const FIRSTPASS_STATS first_frame = *this_frame;
+  const FIRST_PASS_INFO *first_pass_info = &twopass->first_pass_info;
+  const FRAME_INFO *frame_info = &cpi->frame_info;
   const FIRSTPASS_STATS *const start_position = twopass->stats_in;
+  const FIRSTPASS_STATS *keyframe_stats =
+      fps_get_frame_stats(first_pass_info, kf_show_idx);
   FIRSTPASS_STATS next_frame;
-  FIRSTPASS_STATS last_frame;
   int kf_bits = 0;
-  double decay_accumulator = 1.0;
+  int64_t max_kf_bits;
   double zero_motion_accumulator = 1.0;
+  double zero_motion_sum = 0.0;
+  double zero_motion_avg;
+  double motion_compensable_sum = 0.0;
+  double motion_compensable_avg;
+  int num_frames = 0;
+  int kf_boost_scan_frames = DEFAULT_SCAN_FRAMES_FOR_KF_BOOST;
   double boost_score = 0.0;
   double kf_mod_err = 0.0;
+  double kf_raw_err = 0.0;
   double kf_group_err = 0.0;
-  double recent_loop_decay[FRAMES_TO_CHECK_DECAY];
   double sr_accumulator = 0.0;
   double abs_mv_in_out_accumulator = 0.0;
   const double av_err = get_distribution_av_err(cpi, twopass);
+  const double mean_mod_score = twopass->mean_mod_score;
   vp9_zero(next_frame);
 
   cpi->common.frame_type = KEY_FRAME;
+  rc->frames_since_key = 0;
 
   // Reset the GF group data structures.
   vp9_zero(*gf_group);
@@ -2879,7 +3214,6 @@
   // Clear the alt ref active flag and last group multi arf flags as they
   // can never be set for a key frame.
   rc->source_alt_ref_active = 0;
-  cpi->multi_arf_last_grp_enabled = 0;
 
   // KF is always a GF so clear frames till next gf counter.
   rc->frames_till_gf_update_due = 0;
@@ -2889,107 +3223,29 @@
   twopass->kf_group_bits = 0;          // Total bits available to kf group
   twopass->kf_group_error_left = 0.0;  // Group modified error score.
 
-  kf_mod_err =
-      calculate_norm_frame_score(cpi, twopass, oxcf, this_frame, av_err);
+  kf_raw_err = keyframe_stats->intra_error;
+  kf_mod_err = calc_norm_frame_score(oxcf, frame_info, keyframe_stats,
+                                     mean_mod_score, av_err);
 
-  // Initialize the decay rates for the recent frames to check
-  for (j = 0; j < FRAMES_TO_CHECK_DECAY; ++j) recent_loop_decay[j] = 1.0;
-
-  // Find the next keyframe.
-  i = 0;
-  while (twopass->stats_in < twopass->stats_in_end &&
-         rc->frames_to_key < cpi->oxcf.key_freq) {
-    // Accumulate kf group error.
-    kf_group_err +=
-        calculate_norm_frame_score(cpi, twopass, oxcf, this_frame, av_err);
-
-    // Load the next frame's stats.
-    last_frame = *this_frame;
-    input_stats(twopass, this_frame);
-
-    // Provided that we are not at the end of the file...
-    if (cpi->oxcf.auto_key && twopass->stats_in < twopass->stats_in_end) {
-      double loop_decay_rate;
-
-      // Check for a scene cut.
-      if (test_candidate_kf(twopass, &last_frame, this_frame,
-                            twopass->stats_in))
-        break;
-
-      // How fast is the prediction quality decaying?
-      loop_decay_rate = get_prediction_decay_rate(cpi, twopass->stats_in);
-
-      // We want to know something about the recent past... rather than
-      // as used elsewhere where we are concerned with decay in prediction
-      // quality since the last GF or KF.
-      recent_loop_decay[i % FRAMES_TO_CHECK_DECAY] = loop_decay_rate;
-      decay_accumulator = 1.0;
-      for (j = 0; j < FRAMES_TO_CHECK_DECAY; ++j)
-        decay_accumulator *= recent_loop_decay[j];
-
-      // Special check for transition or high motion followed by a
-      // static scene.
-      if (detect_transition_to_still(cpi, i, cpi->oxcf.key_freq - i,
-                                     loop_decay_rate, decay_accumulator))
-        break;
-
-      // Step on to the next frame.
-      ++rc->frames_to_key;
-
-      // If we don't have a real key frame within the next two
-      // key_freq intervals then break out of the loop.
-      if (rc->frames_to_key >= 2 * cpi->oxcf.key_freq) break;
-    } else {
-      ++rc->frames_to_key;
-    }
-    ++i;
-  }
+  rc->frames_to_key = vp9_get_frames_to_next_key(
+      oxcf, frame_info, first_pass_info, kf_show_idx, rc->min_gf_interval);
 
   // If there is a max kf interval set by the user we must obey it.
   // We already breakout of the loop above at 2x max.
   // This code centers the extra kf if the actual natural interval
   // is between 1x and 2x.
-  if (cpi->oxcf.auto_key && rc->frames_to_key > cpi->oxcf.key_freq) {
-    FIRSTPASS_STATS tmp_frame = first_frame;
-
-    rc->frames_to_key /= 2;
-
-    // Reset to the start of the group.
-    reset_fpf_position(twopass, start_position);
-
-    kf_group_err = 0.0;
-
-    // Rescan to get the correct error data for the forced kf group.
-    for (i = 0; i < rc->frames_to_key; ++i) {
-      kf_group_err +=
-          calculate_norm_frame_score(cpi, twopass, oxcf, &tmp_frame, av_err);
-      input_stats(twopass, &tmp_frame);
-    }
-    rc->next_key_frame_forced = 1;
-  } else if (twopass->stats_in == twopass->stats_in_end ||
-             rc->frames_to_key >= cpi->oxcf.key_freq) {
+  if (rc->frames_to_key >= cpi->oxcf.key_freq) {
     rc->next_key_frame_forced = 1;
   } else {
     rc->next_key_frame_forced = 0;
   }
 
-  if (is_two_pass_svc(cpi) && cpi->svc.number_temporal_layers > 1) {
-    int count = (1 << (cpi->svc.number_temporal_layers - 1)) - 1;
-    int new_frame_to_key = (rc->frames_to_key + count) & (~count);
-    int j;
-    for (j = 0; j < new_frame_to_key - rc->frames_to_key; ++j) {
-      if (EOF == input_stats(twopass, this_frame)) break;
-      kf_group_err +=
-          calculate_norm_frame_score(cpi, twopass, oxcf, this_frame, av_err);
-    }
-    rc->frames_to_key = new_frame_to_key;
-  }
-
-  // Special case for the last key frame of the file.
-  if (twopass->stats_in >= twopass->stats_in_end) {
+  for (i = 0; i < rc->frames_to_key; ++i) {
+    const FIRSTPASS_STATS *frame_stats =
+        fps_get_frame_stats(first_pass_info, kf_show_idx + i);
     // Accumulate kf group error.
-    kf_group_err +=
-        calculate_norm_frame_score(cpi, twopass, oxcf, this_frame, av_err);
+    kf_group_err += calc_norm_frame_score(oxcf, frame_info, frame_stats,
+                                          mean_mod_score, av_err);
   }
 
   // Calculate the number of bits that should be assigned to the kf group.
@@ -3014,25 +3270,47 @@
   }
   twopass->kf_group_bits = VPXMAX(0, twopass->kf_group_bits);
 
-  // Reset the first pass file position.
-  reset_fpf_position(twopass, start_position);
-
   // Scan through the kf group collating various stats used to determine
   // how many bits to spend on it.
   boost_score = 0.0;
 
+  for (i = 0; i < VPXMIN(MAX_SCAN_FRAMES_FOR_KF_BOOST, (rc->frames_to_key - 1));
+       ++i) {
+    if (EOF == input_stats(twopass, &next_frame)) break;
+
+    zero_motion_sum += next_frame.pcnt_inter - next_frame.pcnt_motion;
+    motion_compensable_sum +=
+        1 - (double)next_frame.coded_error / next_frame.intra_error;
+    num_frames++;
+  }
+
+  if (num_frames >= MIN_SCAN_FRAMES_FOR_KF_BOOST) {
+    zero_motion_avg = zero_motion_sum / num_frames;
+    motion_compensable_avg = motion_compensable_sum / num_frames;
+    kf_boost_scan_frames = (int)(VPXMAX(64 * zero_motion_avg - 16,
+                                        160 * motion_compensable_avg - 112));
+    kf_boost_scan_frames =
+        VPXMAX(VPXMIN(kf_boost_scan_frames, MAX_SCAN_FRAMES_FOR_KF_BOOST),
+               MIN_SCAN_FRAMES_FOR_KF_BOOST);
+  }
+  reset_fpf_position(twopass, start_position);
+
   for (i = 0; i < (rc->frames_to_key - 1); ++i) {
     if (EOF == input_stats(twopass, &next_frame)) break;
 
-    if (i <= KF_BOOST_SCAN_MAX_FRAMES) {
+    // The zero motion test here ensures that if we mark a kf group as static
+    // it is static throughout, not just the first KF_BOOST_SCAN_MAX_FRAMES.
+    // It also allows for a larger boost on long static groups.
+    if ((i <= kf_boost_scan_frames) || (zero_motion_accumulator >= 0.99)) {
       double frame_boost;
       double zm_factor;
 
       // Monitor for static sections.
       // First frame in kf group the second ref indicator is invalid.
       if (i > 0) {
-        zero_motion_accumulator = VPXMIN(
-            zero_motion_accumulator, get_zero_motion_factor(cpi, &next_frame));
+        zero_motion_accumulator =
+            VPXMIN(zero_motion_accumulator,
+                   get_zero_motion_factor(&cpi->frame_info, &next_frame));
       } else {
         zero_motion_accumulator =
             next_frame.pcnt_inter - next_frame.pcnt_motion;
@@ -3056,7 +3334,8 @@
           fabs(next_frame.mv_in_out_count * next_frame.pcnt_motion);
 
       if ((frame_boost < 25.00) ||
-          (abs_mv_in_out_accumulator > KF_ABS_ZOOM_THRESH))
+          (abs_mv_in_out_accumulator > KF_ABS_ZOOM_THRESH) ||
+          (sr_accumulator > (kf_raw_err * 1.50)))
         break;
     } else {
       break;
@@ -3069,13 +3348,13 @@
   twopass->kf_zeromotion_pct = (int)(zero_motion_accumulator * 100.0);
 
   // Calculate a section intra ratio used in setting max loop filter.
-  twopass->section_intra_rating = calculate_section_intra_ratio(
+  twopass->key_frame_section_intra_rating = calculate_section_intra_ratio(
       start_position, twopass->stats_in_end, rc->frames_to_key);
 
  // Special case for static / slide show content but don't apply
   // if the kf group is very short.
   if ((zero_motion_accumulator > 0.99) && (rc->frames_to_key > 8)) {
-    rc->kf_boost = VPXMAX((rc->frames_to_key * 100), MAX_KF_TOT_BOOST);
+    rc->kf_boost = MAX_KF_TOT_BOOST;
   } else {
     // Apply various clamps for min and max boost
     rc->kf_boost = VPXMAX((int)boost_score, (rc->frames_to_key * 3));
@@ -3086,6 +3365,13 @@
   // Work out how many bits to allocate for the key frame itself.
   kf_bits = calculate_boost_bits((rc->frames_to_key - 1), rc->kf_boost,
                                  twopass->kf_group_bits);
+  // Based on the spatial complexity, increase the bits allocated to key frame.
+  kf_bits +=
+      (int)((twopass->kf_group_bits - kf_bits) * (kf_mod_err / kf_group_err));
+  max_kf_bits =
+      twopass->kf_group_bits - (rc->frames_to_key - 1) * FRAME_OVERHEAD_BITS;
+  max_kf_bits = lclamp(max_kf_bits, 0, INT_MAX);
+  kf_bits = VPXMIN(kf_bits, (int)max_kf_bits);
 
   twopass->kf_group_bits -= kf_bits;
 
@@ -3093,6 +3379,7 @@
   gf_group->bit_allocation[0] = kf_bits;
   gf_group->update_type[0] = KF_UPDATE;
   gf_group->rf_level[0] = KF_STD;
+  gf_group->layer_depth[0] = 0;
 
   // Note the total error score of the kf group minus the key frame itself.
   twopass->kf_group_error_left = (kf_group_err - kf_mod_err);
@@ -3108,60 +3395,12 @@
   }
 }
 
-// Define the reference buffers that will be updated post encode.
-static void configure_buffer_updates(VP9_COMP *cpi) {
-  TWO_PASS *const twopass = &cpi->twopass;
-
-  cpi->rc.is_src_frame_alt_ref = 0;
-  switch (twopass->gf_group.update_type[twopass->gf_group.index]) {
-    case KF_UPDATE:
-      cpi->refresh_last_frame = 1;
-      cpi->refresh_golden_frame = 1;
-      cpi->refresh_alt_ref_frame = 1;
-      break;
-    case LF_UPDATE:
-      cpi->refresh_last_frame = 1;
-      cpi->refresh_golden_frame = 0;
-      cpi->refresh_alt_ref_frame = 0;
-      break;
-    case GF_UPDATE:
-      cpi->refresh_last_frame = 1;
-      cpi->refresh_golden_frame = 1;
-      cpi->refresh_alt_ref_frame = 0;
-      break;
-    case OVERLAY_UPDATE:
-      cpi->refresh_last_frame = 0;
-      cpi->refresh_golden_frame = 1;
-      cpi->refresh_alt_ref_frame = 0;
-      cpi->rc.is_src_frame_alt_ref = 1;
-      break;
-    case ARF_UPDATE:
-      cpi->refresh_last_frame = 0;
-      cpi->refresh_golden_frame = 0;
-      cpi->refresh_alt_ref_frame = 1;
-      break;
-    default: assert(0); break;
-  }
-  if (is_two_pass_svc(cpi)) {
-    if (cpi->svc.temporal_layer_id > 0) {
-      cpi->refresh_last_frame = 0;
-      cpi->refresh_golden_frame = 0;
-    }
-    if (cpi->svc.layer_context[cpi->svc.spatial_layer_id].gold_ref_idx < 0)
-      cpi->refresh_golden_frame = 0;
-    if (cpi->alt_ref_source == NULL) cpi->refresh_alt_ref_frame = 0;
-  }
-}
-
 static int is_skippable_frame(const VP9_COMP *cpi) {
   // If the current frame does not have non-zero motion vector detected in the
   // first  pass, and so do its previous and forward frames, then this frame
   // can be skipped for partition check, and the partition size is assigned
   // according to the variance
-  const SVC *const svc = &cpi->svc;
-  const TWO_PASS *const twopass =
-      is_two_pass_svc(cpi) ? &svc->layer_context[svc->spatial_layer_id].twopass
-                           : &cpi->twopass;
+  const TWO_PASS *const twopass = &cpi->twopass;
 
   return (!frame_is_intra_only(&cpi->common) &&
           twopass->stats_in - 2 > twopass->stats_in_start &&
@@ -3181,11 +3420,7 @@
   TWO_PASS *const twopass = &cpi->twopass;
   GF_GROUP *const gf_group = &twopass->gf_group;
   FIRSTPASS_STATS this_frame;
-
-  int target_rate;
-  LAYER_CONTEXT *const lc =
-      is_two_pass_svc(cpi) ? &cpi->svc.layer_context[cpi->svc.spatial_layer_id]
-                           : 0;
+  const int show_idx = cm->current_video_frame;
 
   if (!twopass->stats_in) return;
 
@@ -3193,30 +3428,32 @@
   // advance the input pointer as we already have what we need.
   if (gf_group->update_type[gf_group->index] == ARF_UPDATE) {
     int target_rate;
-    configure_buffer_updates(cpi);
+
+    vp9_zero(this_frame);
+    this_frame =
+        cpi->twopass.stats_in_start[cm->current_video_frame +
+                                    gf_group->arf_src_offset[gf_group->index]];
+
+    vp9_configure_buffer_updates(cpi, gf_group->index);
+
     target_rate = gf_group->bit_allocation[gf_group->index];
     target_rate = vp9_rc_clamp_pframe_target_size(cpi, target_rate);
     rc->base_frame_target = target_rate;
 
     cm->frame_type = INTER_FRAME;
 
-    if (lc != NULL) {
-      if (cpi->svc.spatial_layer_id == 0) {
-        lc->is_key_frame = 0;
-      } else {
-        lc->is_key_frame = cpi->svc.layer_context[0].is_key_frame;
-
-        if (lc->is_key_frame) cpi->ref_frame_flags &= (~VP9_LAST_FLAG);
-      }
-    }
-
     // Do the firstpass stats indicate that this frame is skippable for the
     // partition search?
     if (cpi->sf.allow_partition_search_skip && cpi->oxcf.pass == 2 &&
-        (!cpi->use_svc || is_two_pass_svc(cpi))) {
+        !cpi->use_svc) {
       cpi->partition_search_skippable_frame = is_skippable_frame(cpi);
     }
 
+    // The multiplication by 256 reverses a scaling factor of (>> 8)
+    // applied when combining MB error values for the frame.
+    twopass->mb_av_energy = log((this_frame.intra_error * 256.0) + 1.0);
+    twopass->mb_smooth_pct = this_frame.intra_smooth_pct;
+
     return;
   }
 
@@ -3224,12 +3461,9 @@
 
   if (cpi->oxcf.rc_mode == VPX_Q) {
     twopass->active_worst_quality = cpi->oxcf.cq_level;
-  } else if (cm->current_video_frame == 0 ||
-             (lc != NULL && lc->current_video_frame_in_layer == 0)) {
+  } else if (cm->current_video_frame == 0) {
     const int frames_left =
-        (int)(twopass->total_stats.count -
-              ((lc != NULL) ? lc->current_video_frame_in_layer
-                            : cm->current_video_frame));
+        (int)(twopass->total_stats.count - cm->current_video_frame);
     // Special case code for first frame.
     const int section_target_bandwidth =
         (int)(twopass->bits_left / frames_left);
@@ -3269,68 +3503,42 @@
 
   // Keyframe and section processing.
   if (rc->frames_to_key == 0 || (cpi->frame_flags & FRAMEFLAGS_KEY)) {
-    FIRSTPASS_STATS this_frame_copy;
-    this_frame_copy = this_frame;
     // Define next KF group and assign bits to it.
-    find_next_key_frame(cpi, &this_frame);
-    this_frame = this_frame_copy;
+    find_next_key_frame(cpi, show_idx);
   } else {
     cm->frame_type = INTER_FRAME;
   }
 
-  if (lc != NULL) {
-    if (cpi->svc.spatial_layer_id == 0) {
-      lc->is_key_frame = (cm->frame_type == KEY_FRAME);
-      if (lc->is_key_frame) {
-        cpi->ref_frame_flags &=
-            (~VP9_LAST_FLAG & ~VP9_GOLD_FLAG & ~VP9_ALT_FLAG);
-        lc->frames_from_key_frame = 0;
-        // Encode an intra only empty frame since we have a key frame.
-        cpi->svc.encode_intra_empty_frame = 1;
-      }
-    } else {
-      cm->frame_type = INTER_FRAME;
-      lc->is_key_frame = cpi->svc.layer_context[0].is_key_frame;
-
-      if (lc->is_key_frame) {
-        cpi->ref_frame_flags &= (~VP9_LAST_FLAG);
-        lc->frames_from_key_frame = 0;
-      }
-    }
-  }
-
   // Define a new GF/ARF group. (Should always enter here for key frames).
   if (rc->frames_till_gf_update_due == 0) {
-    define_gf_group(cpi, &this_frame);
+    define_gf_group(cpi, show_idx);
 
     rc->frames_till_gf_update_due = rc->baseline_gf_interval;
-    if (lc != NULL) cpi->refresh_golden_frame = 1;
 
 #if ARF_STATS_OUTPUT
     {
       FILE *fpfile;
       fpfile = fopen("arf.stt", "a");
       ++arf_count;
-      fprintf(fpfile, "%10d %10ld %10d %10d %10ld\n", cm->current_video_frame,
-              rc->frames_till_gf_update_due, rc->kf_boost, arf_count,
-              rc->gfu_boost);
+      fprintf(fpfile, "%10d %10ld %10d %10d %10ld %10ld\n",
+              cm->current_video_frame, rc->frames_till_gf_update_due,
+              rc->kf_boost, arf_count, rc->gfu_boost, cm->frame_type);
 
       fclose(fpfile);
     }
 #endif
   }
 
-  configure_buffer_updates(cpi);
+  vp9_configure_buffer_updates(cpi, gf_group->index);
 
   // Do the firstpass stats indicate that this frame is skippable for the
   // partition search?
   if (cpi->sf.allow_partition_search_skip && cpi->oxcf.pass == 2 &&
-      (!cpi->use_svc || is_two_pass_svc(cpi))) {
+      !cpi->use_svc) {
     cpi->partition_search_skippable_frame = is_skippable_frame(cpi);
   }
 
-  target_rate = gf_group->bit_allocation[gf_group->index];
-  rc->base_frame_target = target_rate;
+  rc->base_frame_target = gf_group->bit_allocation[gf_group->index];
 
   // The multiplication by 256 reverses a scaling factor of (>> 8)
   // applied when combining MB error values for the frame.
@@ -3371,8 +3579,7 @@
     rc->rate_error_estimate = 0;
   }
 
-  if (cpi->common.frame_type != KEY_FRAME &&
-      !vp9_is_upper_layer_key_frame(cpi)) {
+  if (cpi->common.frame_type != KEY_FRAME) {
     twopass->kf_group_bits -= bits_used;
     twopass->last_kfgroup_zeromotion_pct = twopass->kf_zeromotion_pct;
   }
@@ -3392,7 +3599,8 @@
 
     // Extend min or Max Q range to account for imbalance from the base
     // value when using AQ.
-    if (cpi->oxcf.aq_mode != NO_AQ) {
+    if (cpi->oxcf.aq_mode != NO_AQ && cpi->oxcf.aq_mode != PSNR_AQ &&
+        cpi->oxcf.aq_mode != PERCEPTUAL_AQ) {
       if (cm->seg.aq_av_offset < 0) {
        // The balance of the AQ map tends towards lowering the average Q.
         aq_extend_min = 0;
@@ -3460,3 +3668,119 @@
     }
   }
 }
+
+#if CONFIG_RATE_CTRL
+void vp9_get_next_group_of_picture(const VP9_COMP *cpi, int *first_is_key_frame,
+                                   int *use_alt_ref, int *coding_frame_count,
+                                   int *first_show_idx,
+                                   int *last_gop_use_alt_ref) {
+  // We make a copy of rc here because we want to get information from the
+  // encoder without changing its state.
+  // TODO(angiebird): Avoid copying rc here.
+  RATE_CONTROL rc = cpi->rc;
+  const int multi_layer_arf = 0;
+  const int allow_alt_ref = 1;
+  // We assume that current_video_frame is updated to the show index of the
+  // frame we are about to encode. Note that current_video_frame is updated at
+  // the end of encode_frame_to_data_rate().
+  // TODO(angiebird): Avoid this kind of fragile style.
+  *first_show_idx = cpi->common.current_video_frame;
+  *last_gop_use_alt_ref = rc.source_alt_ref_active;
+
+  *first_is_key_frame = 0;
+  if (rc.frames_to_key == 0) {
+    rc.frames_to_key = vp9_get_frames_to_next_key(
+        &cpi->oxcf, &cpi->frame_info, &cpi->twopass.first_pass_info,
+        *first_show_idx, rc.min_gf_interval);
+    rc.frames_since_key = 0;
+    *first_is_key_frame = 1;
+  }
+
+  *coding_frame_count = vp9_get_gop_coding_frame_count(
+      cpi->encode_command.external_arf_indexes, &cpi->oxcf, &cpi->frame_info,
+      &cpi->twopass.first_pass_info, &rc, *first_show_idx, multi_layer_arf,
+      allow_alt_ref, *first_is_key_frame, *last_gop_use_alt_ref, use_alt_ref);
+}
+
+int vp9_get_gop_coding_frame_count(const int *external_arf_indexes,
+                                   const VP9EncoderConfig *oxcf,
+                                   const FRAME_INFO *frame_info,
+                                   const FIRST_PASS_INFO *first_pass_info,
+                                   const RATE_CONTROL *rc, int show_idx,
+                                   int multi_layer_arf, int allow_alt_ref,
+                                   int first_is_key_frame,
+                                   int last_gop_use_alt_ref, int *use_alt_ref) {
+  int frame_count;
+  double gop_intra_factor;
+  const int arf_active_or_kf = last_gop_use_alt_ref || first_is_key_frame;
+  RANGE active_gf_interval = get_active_gf_inverval_range(
+      frame_info, rc, arf_active_or_kf, show_idx, /*active_worst_quality=*/0,
+      /*last_boosted_qindex=*/0);
+
+  const int arf_layers = get_arf_layers(multi_layer_arf, oxcf->enable_auto_arf,
+                                        active_gf_interval.max);
+  if (multi_layer_arf) {
+    gop_intra_factor = 1.0 + 0.25 * arf_layers;
+  } else {
+    gop_intra_factor = 1.0;
+  }
+
+  frame_count = get_gop_coding_frame_num(
+#if CONFIG_RATE_CTRL
+      external_arf_indexes,
+#endif
+      use_alt_ref, frame_info, first_pass_info, rc, show_idx,
+      &active_gf_interval, gop_intra_factor, oxcf->lag_in_frames);
+  *use_alt_ref &= allow_alt_ref;
+  return frame_count;
+}
+
+// Under CONFIG_RATE_CTRL, once the first_pass_info is ready, the number of
+// coding frames (including show frame and alt ref) can be determined.
+int vp9_get_coding_frame_num(const int *external_arf_indexes,
+                             const VP9EncoderConfig *oxcf,
+                             const FRAME_INFO *frame_info,
+                             const FIRST_PASS_INFO *first_pass_info,
+                             int multi_layer_arf, int allow_alt_ref) {
+  int coding_frame_num = 0;
+  RATE_CONTROL rc;
+  int gop_coding_frame_count;
+  int gop_show_frames;
+  int show_idx = 0;
+  int last_gop_use_alt_ref = 0;
+  rc.static_scene_max_gf_interval = 250;
+  vp9_rc_init(oxcf, 1, &rc);
+
+  while (show_idx < first_pass_info->num_frames) {
+    int use_alt_ref;
+    int first_is_key_frame = 0;
+    if (rc.frames_to_key == 0) {
+      rc.frames_to_key = vp9_get_frames_to_next_key(
+          oxcf, frame_info, first_pass_info, show_idx, rc.min_gf_interval);
+      rc.frames_since_key = 0;
+      first_is_key_frame = 1;
+    }
+
+    gop_coding_frame_count = vp9_get_gop_coding_frame_count(
+        external_arf_indexes, oxcf, frame_info, first_pass_info, &rc, show_idx,
+        multi_layer_arf, allow_alt_ref, first_is_key_frame,
+        last_gop_use_alt_ref, &use_alt_ref);
+
+    rc.source_alt_ref_active = use_alt_ref;
+    last_gop_use_alt_ref = use_alt_ref;
+    gop_show_frames = gop_coding_frame_count - use_alt_ref;
+    rc.frames_to_key -= gop_show_frames;
+    rc.frames_since_key += gop_show_frames;
+    show_idx += gop_show_frames;
+    coding_frame_num += gop_show_frames + use_alt_ref;
+  }
+  return coding_frame_num;
+}
+#endif  // CONFIG_RATE_CTRL
+
+FIRSTPASS_STATS vp9_get_frame_stats(const TWO_PASS *twopass) {
+  return twopass->this_frame_stats;
+}
+FIRSTPASS_STATS vp9_get_total_stats(const TWO_PASS *twopass) {
+  return twopass->total_stats;
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_firstpass.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_firstpass.h
index aa497e3..dcaf2ee 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_firstpass.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_firstpass.h
@@ -8,9 +8,12 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_FIRSTPASS_H_
-#define VP9_ENCODER_VP9_FIRSTPASS_H_
+#ifndef VPX_VP9_ENCODER_VP9_FIRSTPASS_H_
+#define VPX_VP9_ENCODER_VP9_FIRSTPASS_H_
 
+#include <assert.h>
+
+#include "vp9/common/vp9_onyxc_int.h"
 #include "vp9/encoder/vp9_lookahead.h"
 #include "vp9/encoder/vp9_ratectrl.h"
 
@@ -39,7 +42,10 @@
 } FIRSTPASS_MB_STATS;
 #endif
 
-#define INVALID_ROW -1
+#define INVALID_ROW (-1)
+
+#define MAX_ARF_LAYERS 6
+#define SECTION_NOISE_DEF 250.0
 
 typedef struct {
   double frame_mb_intra_factor;
@@ -107,7 +113,9 @@
   GF_UPDATE = 2,
   ARF_UPDATE = 3,
   OVERLAY_UPDATE = 4,
-  FRAME_UPDATE_TYPES = 5
+  MID_OVERLAY_UPDATE = 5,
+  USE_BUF_FRAME = 6,  // Use show existing frame, no ref buffer update
+  FRAME_UPDATE_TYPES = 7
 } FRAME_UPDATE_TYPE;
 
 #define FC_ANIMATION_THRESH 0.15
@@ -119,22 +127,59 @@
 
 typedef struct {
   unsigned char index;
-  unsigned char first_inter_index;
-  RATE_FACTOR_LEVEL rf_level[MAX_STATIC_GF_GROUP_LENGTH + 1];
-  FRAME_UPDATE_TYPE update_type[MAX_STATIC_GF_GROUP_LENGTH + 1];
-  unsigned char arf_src_offset[MAX_STATIC_GF_GROUP_LENGTH + 1];
-  unsigned char arf_update_idx[MAX_STATIC_GF_GROUP_LENGTH + 1];
-  unsigned char arf_ref_idx[MAX_STATIC_GF_GROUP_LENGTH + 1];
-  int bit_allocation[MAX_STATIC_GF_GROUP_LENGTH + 1];
+  RATE_FACTOR_LEVEL rf_level[MAX_STATIC_GF_GROUP_LENGTH + 2];
+  FRAME_UPDATE_TYPE update_type[MAX_STATIC_GF_GROUP_LENGTH + 2];
+  unsigned char arf_src_offset[MAX_STATIC_GF_GROUP_LENGTH + 2];
+  unsigned char layer_depth[MAX_STATIC_GF_GROUP_LENGTH + 2];
+  unsigned char frame_gop_index[MAX_STATIC_GF_GROUP_LENGTH + 2];
+  int bit_allocation[MAX_STATIC_GF_GROUP_LENGTH + 2];
+  int gfu_boost[MAX_STATIC_GF_GROUP_LENGTH + 2];
+
+  int frame_start;
+  int frame_end;
+  // TODO(jingning): The array size of arf_stack could be reduced.
+  int arf_index_stack[MAX_LAG_BUFFERS * 2];
+  int top_arf_idx;
+  int stack_size;
+  int gf_group_size;
+  int max_layer_depth;
+  int allowed_max_layer_depth;
+  int group_noise_energy;
 } GF_GROUP;
 
 typedef struct {
+  const FIRSTPASS_STATS *stats;
+  int num_frames;
+} FIRST_PASS_INFO;
+
+static INLINE void fps_init_first_pass_info(FIRST_PASS_INFO *first_pass_info,
+                                            const FIRSTPASS_STATS *stats,
+                                            int num_frames) {
+  first_pass_info->stats = stats;
+  first_pass_info->num_frames = num_frames;
+}
+
+static INLINE int fps_get_num_frames(const FIRST_PASS_INFO *first_pass_info) {
+  return first_pass_info->num_frames;
+}
+
+static INLINE const FIRSTPASS_STATS *fps_get_frame_stats(
+    const FIRST_PASS_INFO *first_pass_info, int show_idx) {
+  if (show_idx < 0 || show_idx >= first_pass_info->num_frames) {
+    return NULL;
+  }
+  return &first_pass_info->stats[show_idx];
+}
+
+typedef struct {
   unsigned int section_intra_rating;
+  unsigned int key_frame_section_intra_rating;
   FIRSTPASS_STATS total_stats;
   FIRSTPASS_STATS this_frame_stats;
   const FIRSTPASS_STATS *stats_in;
   const FIRSTPASS_STATS *stats_in_start;
   const FIRSTPASS_STATS *stats_in_end;
+  FIRST_PASS_INFO first_pass_info;
   FIRSTPASS_STATS total_left_stats;
   int first_pass_done;
   int64_t bits_left;
@@ -173,6 +218,7 @@
   int extend_maxq;
   int extend_minq_fast;
   int arnr_strength_adjustment;
+  int last_qindex_of_arf_layer[MAX_ARF_LAYERS];
 
   GF_GROUP gf_group;
 } TWO_PASS;
@@ -182,7 +228,6 @@
 struct TileDataEnc;
 
 void vp9_init_first_pass(struct VP9_COMP *cpi);
-void vp9_rc_get_first_pass_params(struct VP9_COMP *cpi);
 void vp9_first_pass(struct VP9_COMP *cpi, const struct lookahead_entry *source);
 void vp9_end_first_pass(struct VP9_COMP *cpi);
 
@@ -194,7 +239,6 @@
 
 void vp9_init_second_pass(struct VP9_COMP *cpi);
 void vp9_rc_get_second_pass_params(struct VP9_COMP *cpi);
-void vp9_twopass_postencode_update(struct VP9_COMP *cpi);
 
 // Post encode update of the rate control parameters for 2-pass
 void vp9_twopass_postencode_update(struct VP9_COMP *cpi);
@@ -202,8 +246,60 @@
 void calculate_coded_size(struct VP9_COMP *cpi, int *scaled_frame_width,
                           int *scaled_frame_height);
 
+struct VP9EncoderConfig;
+int vp9_get_frames_to_next_key(const struct VP9EncoderConfig *oxcf,
+                               const FRAME_INFO *frame_info,
+                               const FIRST_PASS_INFO *first_pass_info,
+                               int kf_show_idx, int min_gf_interval);
+#if CONFIG_RATE_CTRL
+/* Call this function to get info about the next group of pictures.
+ * This function should be called after vp9_create_compressor() when encoding
+ * starts or after vp9_get_compressed_data() when the encoding process of
+ * the last group of pictures is just finished.
+ */
+void vp9_get_next_group_of_picture(const struct VP9_COMP *cpi,
+                                   int *first_is_key_frame, int *use_alt_ref,
+                                   int *coding_frame_count, int *first_show_idx,
+                                   int *last_gop_use_alt_ref);
+
+/*!\brief Call this function before coding a new group of pictures to get
+ * information about it.
+ * \param[in] external_arf_indexes External arf indexes passed in
+ * \param[in] oxcf                 Encoder config
+ * \param[in] frame_info           Frame info
+ * \param[in] first_pass_info      First pass stats
+ * \param[in] rc                   Rate control state
+ * \param[in] show_idx             Show index of the first frame in the group
+ * \param[in] multi_layer_arf      Is multi-layer alternate reference used
+ * \param[in] allow_alt_ref        Is alternate reference allowed
+ * \param[in] first_is_key_frame   Is the first frame in the group a key frame
+ * \param[in] last_gop_use_alt_ref Does the last group use alternate reference
+ *
+ * \param[out] use_alt_ref         Does this group use alternate reference
+ *
+ * \return Returns coding frame count
+ */
+int vp9_get_gop_coding_frame_count(const int *external_arf_indexes,
+                                   const struct VP9EncoderConfig *oxcf,
+                                   const FRAME_INFO *frame_info,
+                                   const FIRST_PASS_INFO *first_pass_info,
+                                   const RATE_CONTROL *rc, int show_idx,
+                                   int multi_layer_arf, int allow_alt_ref,
+                                   int first_is_key_frame,
+                                   int last_gop_use_alt_ref, int *use_alt_ref);
+
+int vp9_get_coding_frame_num(const int *external_arf_indexes,
+                             const struct VP9EncoderConfig *oxcf,
+                             const FRAME_INFO *frame_info,
+                             const FIRST_PASS_INFO *first_pass_info,
+                             int multi_layer_arf, int allow_alt_ref);
+#endif  // CONFIG_RATE_CTRL
+
+FIRSTPASS_STATS vp9_get_frame_stats(const TWO_PASS *twopass);
+FIRSTPASS_STATS vp9_get_total_stats(const TWO_PASS *twopass);
+
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_FIRSTPASS_H_
+#endif  // VPX_VP9_ENCODER_VP9_FIRSTPASS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_job_queue.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_job_queue.h
index 89c08f2..ad09c11 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_job_queue.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_job_queue.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_JOB_QUEUE_H_
-#define VP9_ENCODER_VP9_JOB_QUEUE_H_
+#ifndef VPX_VP9_ENCODER_VP9_JOB_QUEUE_H_
+#define VPX_VP9_ENCODER_VP9_JOB_QUEUE_H_
 
 typedef enum {
   FIRST_PASS_JOB,
@@ -43,4 +43,4 @@
   int num_jobs_acquired;
 } JobQueueHandle;
 
-#endif  // VP9_ENCODER_VP9_JOB_QUEUE_H_
+#endif  // VPX_VP9_ENCODER_VP9_JOB_QUEUE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_lookahead.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_lookahead.c
index 392cd5d..97838c3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_lookahead.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_lookahead.c
@@ -64,6 +64,7 @@
     unsigned int i;
     ctx->max_sz = depth;
     ctx->buf = calloc(depth, sizeof(*ctx->buf));
+    ctx->next_show_idx = 0;
     if (!ctx->buf) goto bail;
     for (i = 0; i < depth; i++)
       if (vpx_alloc_frame_buffer(
@@ -81,12 +82,16 @@
 }
 
 #define USE_PARTIAL_COPY 0
+int vp9_lookahead_full(const struct lookahead_ctx *ctx) {
+  return ctx->sz + 1 + MAX_PRE_FRAMES > ctx->max_sz;
+}
+
+int vp9_lookahead_next_show_idx(const struct lookahead_ctx *ctx) {
+  return ctx->next_show_idx;
+}
 
 int vp9_lookahead_push(struct lookahead_ctx *ctx, YV12_BUFFER_CONFIG *src,
-                       int64_t ts_start, int64_t ts_end,
-#if CONFIG_VP9_HIGHBITDEPTH
-                       int use_highbitdepth,
-#endif
+                       int64_t ts_start, int64_t ts_end, int use_highbitdepth,
                        vpx_enc_frame_flags_t flags) {
   struct lookahead_entry *buf;
 #if USE_PARTIAL_COPY
@@ -101,8 +106,12 @@
   int subsampling_x = src->subsampling_x;
   int subsampling_y = src->subsampling_y;
   int larger_dimensions, new_dimensions;
+#if !CONFIG_VP9_HIGHBITDEPTH
+  (void)use_highbitdepth;
+  assert(use_highbitdepth == 0);
+#endif
 
-  if (ctx->sz + 1 + MAX_PRE_FRAMES > ctx->max_sz) return 1;
+  if (vp9_lookahead_full(ctx)) return 1;
   ctx->sz++;
   buf = pop(ctx, &ctx->write_idx);
 
@@ -184,6 +193,8 @@
   buf->ts_start = ts_start;
   buf->ts_end = ts_end;
   buf->flags = flags;
+  buf->show_idx = ctx->next_show_idx;
+  ++ctx->next_show_idx;
   return 0;
 }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_lookahead.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_lookahead.h
index 88be0ff..dbbe3af 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_lookahead.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_lookahead.h
@@ -8,17 +8,13 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_LOOKAHEAD_H_
-#define VP9_ENCODER_VP9_LOOKAHEAD_H_
+#ifndef VPX_VP9_ENCODER_VP9_LOOKAHEAD_H_
+#define VPX_VP9_ENCODER_VP9_LOOKAHEAD_H_
 
 #include "vpx_scale/yv12config.h"
 #include "vpx/vpx_encoder.h"
 #include "vpx/vpx_integer.h"
 
-#if CONFIG_SPATIAL_SVC
-#include "vpx/vp8cx.h"
-#endif
-
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -29,6 +25,7 @@
   YV12_BUFFER_CONFIG img;
   int64_t ts_start;
   int64_t ts_end;
+  int show_idx; /* The show_idx of this frame */
   vpx_enc_frame_flags_t flags;
 };
 
@@ -36,10 +33,12 @@
 #define MAX_PRE_FRAMES 1
 
 struct lookahead_ctx {
-  int max_sz;                  /* Absolute size of the queue */
-  int sz;                      /* Number of buffers currently in the queue */
-  int read_idx;                /* Read index */
-  int write_idx;               /* Write index */
+  int max_sz;        /* Absolute size of the queue */
+  int sz;            /* Number of buffers currently in the queue */
+  int read_idx;      /* Read index */
+  int write_idx;     /* Write index */
+  int next_show_idx; /* The show_idx that will be assigned to the next frame
+                        being pushed in the queue */
   struct lookahead_entry *buf; /* Buffer list */
 };
 
@@ -61,6 +60,23 @@
  */
 void vp9_lookahead_destroy(struct lookahead_ctx *ctx);
 
+/**\brief Check if lookahead is full
+ *
+ * \param[in] ctx         Pointer to the lookahead context
+ *
+ * Return 1 if lookahead is full, otherwise return 0.
+ */
+int vp9_lookahead_full(const struct lookahead_ctx *ctx);
+
+/**\brief Return the next_show_idx
+ *
+ * \param[in] ctx         Pointer to the lookahead context
+ *
+ * Return the show_idx that will be assigned to the next
+ * frame pushed by vp9_lookahead_push()
+ */
+int vp9_lookahead_next_show_idx(const struct lookahead_ctx *ctx);
+
 /**\brief Enqueue a source buffer
  *
  * This function will copy the source image into a new framebuffer with
@@ -77,10 +93,7 @@
  * \param[in] active_map  Map that specifies which macroblock is active
  */
 int vp9_lookahead_push(struct lookahead_ctx *ctx, YV12_BUFFER_CONFIG *src,
-                       int64_t ts_start, int64_t ts_end,
-#if CONFIG_VP9_HIGHBITDEPTH
-                       int use_highbitdepth,
-#endif
+                       int64_t ts_start, int64_t ts_end, int use_highbitdepth,
                        vpx_enc_frame_flags_t flags);
 
 /**\brief Get the next source buffer to encode
@@ -115,4 +128,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_LOOKAHEAD_H_
+#endif  // VPX_VP9_ENCODER_VP9_LOOKAHEAD_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mbgraph.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mbgraph.c
index 46d626d..831c79c17 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mbgraph.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mbgraph.c
@@ -57,11 +57,12 @@
   {
     uint32_t distortion;
     uint32_t sse;
+    // TODO(yunqing): may use higher tap interp filter than 2 taps if needed.
     cpi->find_fractional_mv_step(
         x, dst_mv, ref_mv, cpi->common.allow_high_precision_mv, x->errorperbit,
-        &v_fn_ptr, 0, mv_sf->subpel_iters_per_step,
+        &v_fn_ptr, 0, mv_sf->subpel_search_level,
         cond_cost_list(cpi, cost_list), NULL, NULL, &distortion, &sse, NULL, 0,
-        0);
+        0, USE_2_TAPS);
   }
 
   xd->mi[0]->mode = NEWMV;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mbgraph.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mbgraph.h
index c3af972..7b62986 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mbgraph.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mbgraph.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_MBGRAPH_H_
-#define VP9_ENCODER_VP9_MBGRAPH_H_
+#ifndef VPX_VP9_ENCODER_VP9_MBGRAPH_H_
+#define VPX_VP9_ENCODER_VP9_MBGRAPH_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -37,4 +37,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_MBGRAPH_H_
+#endif  // VPX_VP9_ENCODER_VP9_MBGRAPH_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mcomp.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mcomp.c
index 37406c2..ac29f36 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mcomp.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mcomp.c
@@ -29,11 +29,6 @@
 
 // #define NEW_DIAMOND_SEARCH
 
-static INLINE const uint8_t *get_buf_from_mv(const struct buf_2d *buf,
-                                             const MV *mv) {
-  return &buf->buf[mv->row * buf->stride + mv->col];
-}
-
 void vp9_set_mv_search_range(MvLimits *mv_limits, const MV *mv) {
   int col_min = (mv->col >> 3) - MAX_FULL_PEL_VAL + (mv->col & 7 ? 1 : 0);
   int row_min = (mv->row >> 3) - MAX_FULL_PEL_VAL + (mv->row & 7 ? 1 : 0);
@@ -263,27 +258,6 @@
     }                                                             \
   }
 
-// TODO(yunqingwang): SECOND_LEVEL_CHECKS_BEST was a rewrote of
-// SECOND_LEVEL_CHECKS, and SECOND_LEVEL_CHECKS should be rewritten
-// later in the same way.
-#define SECOND_LEVEL_CHECKS_BEST                \
-  {                                             \
-    unsigned int second;                        \
-    int br0 = br;                               \
-    int bc0 = bc;                               \
-    assert(tr == br || tc == bc);               \
-    if (tr == br && tc != bc) {                 \
-      kc = bc - tc;                             \
-    } else if (tr != br && tc == bc) {          \
-      kr = br - tr;                             \
-    }                                           \
-    CHECK_BETTER(second, br0 + kr, bc0);        \
-    CHECK_BETTER(second, br0, bc0 + kc);        \
-    if (br0 != br || bc0 != bc) {               \
-      CHECK_BETTER(second, br0 + kr, bc0 + kc); \
-    }                                           \
-  }
-
 #define SETUP_SUBPEL_SEARCH                                                 \
   const uint8_t *const z = x->plane[0].src.buf;                             \
   const int src_stride = x->plane[0].src.stride;                            \
@@ -329,8 +303,8 @@
   if (second_pred != NULL) {
     if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
       DECLARE_ALIGNED(16, uint16_t, comp_pred16[64 * 64]);
-      vpx_highbd_comp_avg_pred(comp_pred16, second_pred, w, h, y + offset,
-                               y_stride);
+      vpx_highbd_comp_avg_pred(comp_pred16, CONVERT_TO_SHORTPTR(second_pred), w,
+                               h, CONVERT_TO_SHORTPTR(y + offset), y_stride);
       besterr =
           vfp->vf(CONVERT_TO_BYTEPTR(comp_pred16), w, src, src_stride, sse1);
     } else {
@@ -388,14 +362,12 @@
   *ir = (int)divide_and_round(x1 * b, y1);
 }
 
-uint32_t vp9_skip_sub_pixel_tree(const MACROBLOCK *x, MV *bestmv,
-                                 const MV *ref_mv, int allow_hp,
-                                 int error_per_bit,
-                                 const vp9_variance_fn_ptr_t *vfp,
-                                 int forced_stop, int iters_per_step,
-                                 int *cost_list, int *mvjcost, int *mvcost[2],
-                                 uint32_t *distortion, uint32_t *sse1,
-                                 const uint8_t *second_pred, int w, int h) {
+uint32_t vp9_skip_sub_pixel_tree(
+    const MACROBLOCK *x, MV *bestmv, const MV *ref_mv, int allow_hp,
+    int error_per_bit, const vp9_variance_fn_ptr_t *vfp, int forced_stop,
+    int iters_per_step, int *cost_list, int *mvjcost, int *mvcost[2],
+    uint32_t *distortion, uint32_t *sse1, const uint8_t *second_pred, int w,
+    int h, int use_accurate_subpel_search) {
   SETUP_SUBPEL_SEARCH;
   besterr = setup_center_error(xd, bestmv, ref_mv, error_per_bit, vfp, z,
                                src_stride, y, y_stride, second_pred, w, h,
@@ -418,6 +390,7 @@
   (void)sse;
   (void)thismse;
   (void)cost_list;
+  (void)use_accurate_subpel_search;
 
   return besterr;
 }
@@ -427,7 +400,7 @@
     int error_per_bit, const vp9_variance_fn_ptr_t *vfp, int forced_stop,
     int iters_per_step, int *cost_list, int *mvjcost, int *mvcost[2],
     uint32_t *distortion, uint32_t *sse1, const uint8_t *second_pred, int w,
-    int h) {
+    int h, int use_accurate_subpel_search) {
   SETUP_SUBPEL_SEARCH;
   besterr = setup_center_error(xd, bestmv, ref_mv, error_per_bit, vfp, z,
                                src_stride, y, y_stride, second_pred, w, h,
@@ -439,6 +412,7 @@
   (void)allow_hp;
   (void)forced_stop;
   (void)hstep;
+  (void)use_accurate_subpel_search;
 
   if (cost_list && cost_list[0] != INT_MAX && cost_list[1] != INT_MAX &&
       cost_list[2] != INT_MAX && cost_list[3] != INT_MAX &&
@@ -492,8 +466,10 @@
     int error_per_bit, const vp9_variance_fn_ptr_t *vfp, int forced_stop,
     int iters_per_step, int *cost_list, int *mvjcost, int *mvcost[2],
     uint32_t *distortion, uint32_t *sse1, const uint8_t *second_pred, int w,
-    int h) {
+    int h, int use_accurate_subpel_search) {
   SETUP_SUBPEL_SEARCH;
+  (void)use_accurate_subpel_search;
+
   besterr = setup_center_error(xd, bestmv, ref_mv, error_per_bit, vfp, z,
                                src_stride, y, y_stride, second_pred, w, h,
                                offset, mvjcost, mvcost, sse1, distortion);
@@ -552,8 +528,10 @@
     int error_per_bit, const vp9_variance_fn_ptr_t *vfp, int forced_stop,
     int iters_per_step, int *cost_list, int *mvjcost, int *mvcost[2],
     uint32_t *distortion, uint32_t *sse1, const uint8_t *second_pred, int w,
-    int h) {
+    int h, int use_accurate_subpel_search) {
   SETUP_SUBPEL_SEARCH;
+  (void)use_accurate_subpel_search;
+
   besterr = setup_center_error(xd, bestmv, ref_mv, error_per_bit, vfp, z,
                                src_stride, y, y_stride, second_pred, w, h,
                                offset, mvjcost, mvcost, sse1, distortion);
@@ -638,12 +616,119 @@
 };
 /* clang-format on */
 
+static int accurate_sub_pel_search(
+    const MACROBLOCKD *xd, const MV *this_mv, const struct scale_factors *sf,
+    const InterpKernel *kernel, const vp9_variance_fn_ptr_t *vfp,
+    const uint8_t *const src_address, const int src_stride,
+    const uint8_t *const pre_address, int y_stride, const uint8_t *second_pred,
+    int w, int h, uint32_t *sse) {
+#if CONFIG_VP9_HIGHBITDEPTH
+  uint64_t besterr;
+  assert(sf->x_step_q4 == 16 && sf->y_step_q4 == 16);
+  assert(w != 0 && h != 0);
+  if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
+    DECLARE_ALIGNED(16, uint16_t, pred16[64 * 64]);
+    vp9_highbd_build_inter_predictor(CONVERT_TO_SHORTPTR(pre_address), y_stride,
+                                     pred16, w, this_mv, sf, w, h, 0, kernel,
+                                     MV_PRECISION_Q3, 0, 0, xd->bd);
+    if (second_pred != NULL) {
+      DECLARE_ALIGNED(16, uint16_t, comp_pred16[64 * 64]);
+      vpx_highbd_comp_avg_pred(comp_pred16, CONVERT_TO_SHORTPTR(second_pred), w,
+                               h, pred16, w);
+      besterr = vfp->vf(CONVERT_TO_BYTEPTR(comp_pred16), w, src_address,
+                        src_stride, sse);
+    } else {
+      besterr =
+          vfp->vf(CONVERT_TO_BYTEPTR(pred16), w, src_address, src_stride, sse);
+    }
+  } else {
+    DECLARE_ALIGNED(16, uint8_t, pred[64 * 64]);
+    vp9_build_inter_predictor(pre_address, y_stride, pred, w, this_mv, sf, w, h,
+                              0, kernel, MV_PRECISION_Q3, 0, 0);
+    if (second_pred != NULL) {
+      DECLARE_ALIGNED(16, uint8_t, comp_pred[64 * 64]);
+      vpx_comp_avg_pred(comp_pred, second_pred, w, h, pred, w);
+      besterr = vfp->vf(comp_pred, w, src_address, src_stride, sse);
+    } else {
+      besterr = vfp->vf(pred, w, src_address, src_stride, sse);
+    }
+  }
+  if (besterr >= UINT_MAX) return UINT_MAX;
+  return (int)besterr;
+#else
+  int besterr;
+  DECLARE_ALIGNED(16, uint8_t, pred[64 * 64]);
+  assert(sf->x_step_q4 == 16 && sf->y_step_q4 == 16);
+  assert(w != 0 && h != 0);
+  (void)xd;
+
+  vp9_build_inter_predictor(pre_address, y_stride, pred, w, this_mv, sf, w, h,
+                            0, kernel, MV_PRECISION_Q3, 0, 0);
+  if (second_pred != NULL) {
+    DECLARE_ALIGNED(16, uint8_t, comp_pred[64 * 64]);
+    vpx_comp_avg_pred(comp_pred, second_pred, w, h, pred, w);
+    besterr = vfp->vf(comp_pred, w, src_address, src_stride, sse);
+  } else {
+    besterr = vfp->vf(pred, w, src_address, src_stride, sse);
+  }
+  return besterr;
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+}
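The added `accurate_sub_pel_search()` builds the interpolated predictor explicitly and measures its error, rather than relying on the approximate sub-pixel variance helpers. A standalone sketch of that idea (not libvpx code; the real function uses the selected 4/8-tap kernel via `vp9_build_inter_predictor()`, while this sketch uses plain bilinear interpolation at 1/8-pel precision):

```c
#include <assert.h>
#include <stdint.h>

/* Standalone sketch of the idea behind accurate_sub_pel_search(): build the
 * sub-pel predictor explicitly and measure the error against the source
 * block, instead of using the fast approximate helpers (vfp->svf/svaf).
 * Bilinear interpolation stands in for the real 4/8-tap kernels. */

/* Bilinear sample of an 8-bit plane at (row, col) offset by an eighth-pel
 * motion vector (mv_row, mv_col). */
static int bilinear_sample(const uint8_t *ref, int stride, int row, int col,
                           int mv_row, int mv_col) {
  const int r = row + (mv_row >> 3), c = col + (mv_col >> 3);
  const int fr = mv_row & 7, fc = mv_col & 7; /* eighth-pel fractions */
  const int tl = ref[r * stride + c], tr = ref[r * stride + c + 1];
  const int bl = ref[(r + 1) * stride + c], br = ref[(r + 1) * stride + c + 1];
  const int top = tl * (8 - fc) + tr * fc;
  const int bot = bl * (8 - fc) + br * fc;
  return (top * (8 - fr) + bot * fr + 32) >> 6; /* divide by 64, rounded */
}

/* Sum of squared error between a w x h source block and the predictor built
 * at the given sub-pel motion vector. */
static uint32_t subpel_sse(const uint8_t *src, int src_stride,
                           const uint8_t *ref, int ref_stride, int w, int h,
                           int mv_row, int mv_col) {
  uint32_t sse = 0;
  int r, c;
  for (r = 0; r < h; ++r)
    for (c = 0; c < w; ++c) {
      const int pred = bilinear_sample(ref, ref_stride, r, c, mv_row, mv_col);
      const int diff = src[r * src_stride + c] - pred;
      sse += (uint32_t)(diff * diff);
    }
  return sse;
}
```

The trade-off is the same one the diff makes: an explicit predictor is slower to build but matches the error the final reconstruction will actually see.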
+
+// TODO(yunqing): this part can be further refactored.
+#if CONFIG_VP9_HIGHBITDEPTH
+/* checks if (r, c) has better score than previous best */
+#define CHECK_BETTER1(v, r, c)                                                 \
+  if (c >= minc && c <= maxc && r >= minr && r <= maxr) {                      \
+    int64_t tmpmse;                                                            \
+    const MV mv = { r, c };                                                    \
+    const MV ref_mv = { rr, rc };                                              \
+    thismse =                                                                  \
+        accurate_sub_pel_search(xd, &mv, x->me_sf, kernel, vfp, z, src_stride, \
+                                y, y_stride, second_pred, w, h, &sse);         \
+    tmpmse = thismse;                                                          \
+    tmpmse += mv_err_cost(&mv, &ref_mv, mvjcost, mvcost, error_per_bit);       \
+    if (tmpmse >= INT_MAX) {                                                   \
+      v = INT_MAX;                                                             \
+    } else if ((v = (uint32_t)tmpmse) < besterr) {                             \
+      besterr = v;                                                             \
+      br = r;                                                                  \
+      bc = c;                                                                  \
+      *distortion = thismse;                                                   \
+      *sse1 = sse;                                                             \
+    }                                                                          \
+  } else {                                                                     \
+    v = INT_MAX;                                                               \
+  }
+#else
+/* checks if (r, c) has better score than previous best */
+#define CHECK_BETTER1(v, r, c)                                                 \
+  if (c >= minc && c <= maxc && r >= minr && r <= maxr) {                      \
+    const MV mv = { r, c };                                                    \
+    const MV ref_mv = { rr, rc };                                              \
+    thismse =                                                                  \
+        accurate_sub_pel_search(xd, &mv, x->me_sf, kernel, vfp, z, src_stride, \
+                                y, y_stride, second_pred, w, h, &sse);         \
+    if ((v = mv_err_cost(&mv, &ref_mv, mvjcost, mvcost, error_per_bit) +       \
+             thismse) < besterr) {                                             \
+      besterr = v;                                                             \
+      br = r;                                                                  \
+      bc = c;                                                                  \
+      *distortion = thismse;                                                   \
+      *sse1 = sse;                                                             \
+    }                                                                          \
+  } else {                                                                     \
+    v = INT_MAX;                                                               \
+  }
+
+#endif
+
 uint32_t vp9_find_best_sub_pixel_tree(
     const MACROBLOCK *x, MV *bestmv, const MV *ref_mv, int allow_hp,
     int error_per_bit, const vp9_variance_fn_ptr_t *vfp, int forced_stop,
     int iters_per_step, int *cost_list, int *mvjcost, int *mvcost[2],
     uint32_t *distortion, uint32_t *sse1, const uint8_t *second_pred, int w,
-    int h) {
+    int h, int use_accurate_subpel_search) {
   const uint8_t *const z = x->plane[0].src.buf;
   const uint8_t *const src_address = z;
   const int src_stride = x->plane[0].src.stride;
@@ -671,6 +756,17 @@
   int kr, kc;
   MvLimits subpel_mv_limits;
 
+  // TODO(yunqing): need to add 4-tap filter optimization to speed up the
+  // encoder.
+  const InterpKernel *kernel =
+      (use_accurate_subpel_search > 0)
+          ? ((use_accurate_subpel_search == USE_4_TAPS)
+                 ? vp9_filter_kernels[FOURTAP]
+                 : ((use_accurate_subpel_search == USE_8_TAPS)
+                        ? vp9_filter_kernels[EIGHTTAP]
+                        : vp9_filter_kernels[EIGHTTAP_SHARP]))
+          : vp9_filter_kernels[BILINEAR];
+
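The nested ternary above picks an interpolation kernel from the search mode. A hypothetical sketch of the same mapping as a switch (the `USE_*` enum values are assumed to match libvpx's `SUBPEL_SEARCH_TYPE`, with 0 meaning the accurate search is off):

```c
#include <assert.h>

/* Assumed to mirror libvpx's SUBPEL_SEARCH_TYPE ordering; sketch only. */
enum { USE_2_TAPS = 0, USE_4_TAPS, USE_8_TAPS, USE_8_TAPS_SHARP };

/* Number of interpolation-filter taps selected per search mode. Mode 0
 * disables the accurate search and falls back to the cheap bilinear path. */
static int kernel_taps(int use_accurate_subpel_search) {
  switch (use_accurate_subpel_search) {
    case USE_4_TAPS: return 4;       /* vp9_filter_kernels[FOURTAP] */
    case USE_8_TAPS:                 /* vp9_filter_kernels[EIGHTTAP] */
    case USE_8_TAPS_SHARP: return 8; /* vp9_filter_kernels[EIGHTTAP_SHARP] */
    default: return 2;               /* vp9_filter_kernels[BILINEAR] */
  }
}
```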
   vp9_set_subpel_mv_search_range(&subpel_mv_limits, &x->mv_limits, ref_mv);
   minc = subpel_mv_limits.col_min;
   maxc = subpel_mv_limits.col_max;
@@ -695,16 +791,25 @@
       tr = br + search_step[idx].row;
       tc = bc + search_step[idx].col;
       if (tc >= minc && tc <= maxc && tr >= minr && tr <= maxr) {
-        const uint8_t *const pre_address = y + (tr >> 3) * y_stride + (tc >> 3);
         MV this_mv;
         this_mv.row = tr;
         this_mv.col = tc;
-        if (second_pred == NULL)
-          thismse = vfp->svf(pre_address, y_stride, sp(tc), sp(tr), src_address,
-                             src_stride, &sse);
-        else
-          thismse = vfp->svaf(pre_address, y_stride, sp(tc), sp(tr),
-                              src_address, src_stride, &sse, second_pred);
+
+        if (use_accurate_subpel_search) {
+          thismse = accurate_sub_pel_search(xd, &this_mv, x->me_sf, kernel, vfp,
+                                            src_address, src_stride, y,
+                                            y_stride, second_pred, w, h, &sse);
+        } else {
+          const uint8_t *const pre_address =
+              y + (tr >> 3) * y_stride + (tc >> 3);
+          if (second_pred == NULL)
+            thismse = vfp->svf(pre_address, y_stride, sp(tc), sp(tr),
+                               src_address, src_stride, &sse);
+          else
+            thismse = vfp->svaf(pre_address, y_stride, sp(tc), sp(tr),
+                                src_address, src_stride, &sse, second_pred);
+        }
+
         cost_array[idx] = thismse + mv_err_cost(&this_mv, ref_mv, mvjcost,
                                                 mvcost, error_per_bit);
 
@@ -726,14 +831,21 @@
     tc = bc + kc;
     tr = br + kr;
     if (tc >= minc && tc <= maxc && tr >= minr && tr <= maxr) {
-      const uint8_t *const pre_address = y + (tr >> 3) * y_stride + (tc >> 3);
       MV this_mv = { tr, tc };
-      if (second_pred == NULL)
-        thismse = vfp->svf(pre_address, y_stride, sp(tc), sp(tr), src_address,
-                           src_stride, &sse);
-      else
-        thismse = vfp->svaf(pre_address, y_stride, sp(tc), sp(tr), src_address,
-                            src_stride, &sse, second_pred);
+      if (use_accurate_subpel_search) {
+        thismse = accurate_sub_pel_search(xd, &this_mv, x->me_sf, kernel, vfp,
+                                          src_address, src_stride, y, y_stride,
+                                          second_pred, w, h, &sse);
+      } else {
+        const uint8_t *const pre_address = y + (tr >> 3) * y_stride + (tc >> 3);
+        if (second_pred == NULL)
+          thismse = vfp->svf(pre_address, y_stride, sp(tc), sp(tr), src_address,
+                             src_stride, &sse);
+        else
+          thismse = vfp->svaf(pre_address, y_stride, sp(tc), sp(tr),
+                              src_address, src_stride, &sse, second_pred);
+      }
+
       cost_array[4] = thismse + mv_err_cost(&this_mv, ref_mv, mvjcost, mvcost,
                                             error_per_bit);
 
@@ -755,10 +867,48 @@
       bc = tc;
     }
 
-    if (iters_per_step > 1 && best_idx != -1) SECOND_LEVEL_CHECKS_BEST;
+    if (iters_per_step > 0 && best_idx != -1) {
+      unsigned int second;
+      const int br0 = br;
+      const int bc0 = bc;
+      assert(tr == br || tc == bc);
 
-    tr = br;
-    tc = bc;
+      if (tr == br && tc != bc) {
+        kc = bc - tc;
+        if (iters_per_step == 1) {
+          if (use_accurate_subpel_search) {
+            CHECK_BETTER1(second, br0, bc0 + kc);
+          } else {
+            CHECK_BETTER(second, br0, bc0 + kc);
+          }
+        }
+      } else if (tr != br && tc == bc) {
+        kr = br - tr;
+        if (iters_per_step == 1) {
+          if (use_accurate_subpel_search) {
+            CHECK_BETTER1(second, br0 + kr, bc0);
+          } else {
+            CHECK_BETTER(second, br0 + kr, bc0);
+          }
+        }
+      }
+
+      if (iters_per_step > 1) {
+        if (use_accurate_subpel_search) {
+          CHECK_BETTER1(second, br0 + kr, bc0);
+          CHECK_BETTER1(second, br0, bc0 + kc);
+          if (br0 != br || bc0 != bc) {
+            CHECK_BETTER1(second, br0 + kr, bc0 + kc);
+          }
+        } else {
+          CHECK_BETTER(second, br0 + kr, bc0);
+          CHECK_BETTER(second, br0, bc0 + kc);
+          if (br0 != br || bc0 != bc) {
+            CHECK_BETTER(second, br0 + kr, bc0 + kc);
+          }
+        }
+      }
+    }
 
     search_step += 4;
     hstep >>= 1;
@@ -780,6 +930,7 @@
 }
 
 #undef CHECK_BETTER
+#undef CHECK_BETTER1
 
 static INLINE int check_bounds(const MvLimits *mv_limits, int row, int col,
                                int range) {
@@ -1490,7 +1641,7 @@
 
 // Exhaustive motion search around a given centre position with a given
 // step size.
-static int exhuastive_mesh_search(const MACROBLOCK *x, MV *ref_mv, MV *best_mv,
+static int exhaustive_mesh_search(const MACROBLOCK *x, MV *ref_mv, MV *best_mv,
                                   int range, int step, int sad_per_bit,
                                   const vp9_variance_fn_ptr_t *fn_ptr,
                                   const MV *center_mv) {
@@ -1576,6 +1727,361 @@
   return best_sad;
 }
 
+#define MIN_RANGE 7
+#define MAX_RANGE 256
+#define MIN_INTERVAL 1
+#if CONFIG_NON_GREEDY_MV
+static int64_t exhaustive_mesh_search_multi_step(
+    MV *best_mv, const MV *center_mv, int range, int step,
+    const struct buf_2d *src, const struct buf_2d *pre, int lambda,
+    const int_mv *nb_full_mvs, int full_mv_num, const MvLimits *mv_limits,
+    const vp9_variance_fn_ptr_t *fn_ptr) {
+  int64_t best_sad;
+  int r, c;
+  int start_col, end_col, start_row, end_row;
+  *best_mv = *center_mv;
+  best_sad =
+      ((int64_t)fn_ptr->sdf(src->buf, src->stride,
+                            get_buf_from_mv(pre, center_mv), pre->stride)
+       << LOG2_PRECISION) +
+      lambda * vp9_nb_mvs_inconsistency(best_mv, nb_full_mvs, full_mv_num);
+  start_row = VPXMAX(center_mv->row - range, mv_limits->row_min);
+  start_col = VPXMAX(center_mv->col - range, mv_limits->col_min);
+  end_row = VPXMIN(center_mv->row + range, mv_limits->row_max);
+  end_col = VPXMIN(center_mv->col + range, mv_limits->col_max);
+  for (r = start_row; r <= end_row; r += step) {
+    for (c = start_col; c <= end_col; c += step) {
+      const MV mv = { r, c };
+      int64_t sad = (int64_t)fn_ptr->sdf(src->buf, src->stride,
+                                         get_buf_from_mv(pre, &mv), pre->stride)
+                    << LOG2_PRECISION;
+      if (sad < best_sad) {
+        sad += lambda * vp9_nb_mvs_inconsistency(&mv, nb_full_mvs, full_mv_num);
+        if (sad < best_sad) {
+          best_sad = sad;
+          *best_mv = mv;
+        }
+      }
+    }
+  }
+  return best_sad;
+}
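The multi-step mesh search above scans a step-spaced grid around a centre, using a two-stage compare: the cheap SAD first, and the lambda-weighted consistency cost only when the candidate could still win. A self-contained sketch of that structure, with `demo_dist`/`demo_cost` as hypothetical stand-ins for `fn_ptr->sdf()` and `vp9_nb_mvs_inconsistency()`:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Sketch of the grid scan in exhaustive_mesh_search_multi_step(). */
typedef struct { int row, col; } mv_t;
typedef int64_t (*mv_metric)(mv_t mv);

static int64_t mesh_search(mv_t center, int range, int step, int lambda,
                           mv_metric dist, mv_metric cost, mv_t *best_mv) {
  int64_t best = dist(center) + (int64_t)lambda * cost(center);
  int r, c;
  *best_mv = center;
  for (r = center.row - range; r <= center.row + range; r += step)
    for (c = center.col - range; c <= center.col + range; c += step) {
      const mv_t mv = { r, c };
      /* Cheap distortion first; add the consistency cost only if the
       * candidate can still win, mirroring the two-stage compare above. */
      int64_t sad = dist(mv);
      if (sad < best) {
        sad += (int64_t)lambda * cost(mv);
        if (sad < best) {
          best = sad;
          *best_mv = mv;
        }
      }
    }
  return best;
}

/* Toy distortion with a single optimum at (3, -2); zero consistency cost. */
static int64_t demo_dist(mv_t mv) { return abs(mv.row - 3) + abs(mv.col + 2); }
static int64_t demo_cost(mv_t mv) { (void)mv; return 0; }
```

The early-out matters because the SAD is far cheaper than the neighbour-inconsistency term, so most grid points never pay for the second stage.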
+
+static int64_t exhaustive_mesh_search_single_step(
+    MV *best_mv, const MV *center_mv, int range, const struct buf_2d *src,
+    const struct buf_2d *pre, int lambda, const int_mv *nb_full_mvs,
+    int full_mv_num, const MvLimits *mv_limits,
+    const vp9_variance_fn_ptr_t *fn_ptr) {
+  int64_t best_sad;
+  int r, c, i;
+  int start_col, end_col, start_row, end_row;
+
+  *best_mv = *center_mv;
+  best_sad =
+      ((int64_t)fn_ptr->sdf(src->buf, src->stride,
+                            get_buf_from_mv(pre, center_mv), pre->stride)
+       << LOG2_PRECISION) +
+      lambda * vp9_nb_mvs_inconsistency(best_mv, nb_full_mvs, full_mv_num);
+  start_row = VPXMAX(center_mv->row - range, mv_limits->row_min);
+  start_col = VPXMAX(center_mv->col - range, mv_limits->col_min);
+  end_row = VPXMIN(center_mv->row + range, mv_limits->row_max);
+  end_col = VPXMIN(center_mv->col + range, mv_limits->col_max);
+  for (r = start_row; r <= end_row; r += 1) {
+    c = start_col;
+    // sdx8f may not be available for some block sizes
+    if (fn_ptr->sdx8f) {
+      while (c + 7 <= end_col) {
+        unsigned int sads[8];
+        const MV mv = { r, c };
+        const uint8_t *buf = get_buf_from_mv(pre, &mv);
+        fn_ptr->sdx8f(src->buf, src->stride, buf, pre->stride, sads);
+
+        for (i = 0; i < 8; ++i) {
+          int64_t sad = (int64_t)sads[i] << LOG2_PRECISION;
+          if (sad < best_sad) {
+            const MV mv = { r, c + i };
+            sad += lambda *
+                   vp9_nb_mvs_inconsistency(&mv, nb_full_mvs, full_mv_num);
+            if (sad < best_sad) {
+              best_sad = sad;
+              *best_mv = mv;
+            }
+          }
+        }
+        c += 8;
+      }
+    }
+    while (c + 3 <= end_col) {
+      unsigned int sads[4];
+      const uint8_t *addrs[4];
+      for (i = 0; i < 4; ++i) {
+        const MV mv = { r, c + i };
+        addrs[i] = get_buf_from_mv(pre, &mv);
+      }
+      fn_ptr->sdx4df(src->buf, src->stride, addrs, pre->stride, sads);
+
+      for (i = 0; i < 4; ++i) {
+        int64_t sad = (int64_t)sads[i] << LOG2_PRECISION;
+        if (sad < best_sad) {
+          const MV mv = { r, c + i };
+          sad +=
+              lambda * vp9_nb_mvs_inconsistency(&mv, nb_full_mvs, full_mv_num);
+          if (sad < best_sad) {
+            best_sad = sad;
+            *best_mv = mv;
+          }
+        }
+      }
+      c += 4;
+    }
+    while (c <= end_col) {
+      const MV mv = { r, c };
+      int64_t sad = (int64_t)fn_ptr->sdf(src->buf, src->stride,
+                                         get_buf_from_mv(pre, &mv), pre->stride)
+                    << LOG2_PRECISION;
+      if (sad < best_sad) {
+        sad += lambda * vp9_nb_mvs_inconsistency(&mv, nb_full_mvs, full_mv_num);
+        if (sad < best_sad) {
+          best_sad = sad;
+          *best_mv = mv;
+        }
+      }
+      c += 1;
+    }
+  }
+  return best_sad;
+}
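The single-step scan batches SAD computations in groups of 8 (`sdx8f`, when available), then 4 (`sdx4df`), then one at a time (`sdf`). A standalone sketch of that remainder-loop pattern over a plain array, with a summing "kernel" standing in for the SAD functions:

```c
#include <assert.h>

/* Sketch of the 8-then-4-then-1 batching pattern used in the single-step
 * mesh scan: consume as many full groups as possible with the widest batch
 * "kernel", then fall back to narrower ones for the remainder. */
static int sum_batched(const int *v, int n, int *groups8, int *groups4,
                       int *singles) {
  int i = 0, k, total = 0;
  *groups8 = *groups4 = *singles = 0;
  while (i + 8 <= n) { /* widest batch: 8 at a time (sdx8f analogue) */
    for (k = 0; k < 8; ++k) total += v[i + k];
    i += 8;
    ++*groups8;
  }
  while (i + 4 <= n) { /* medium batch: 4 at a time (sdx4df analogue) */
    for (k = 0; k < 4; ++k) total += v[i + k];
    i += 4;
    ++*groups4;
  }
  while (i < n) { /* scalar tail (sdf analogue) */
    total += v[i++];
    ++*singles;
  }
  return total;
}
```

In the real code the wide variants evaluate several candidate columns per call with SIMD, which is why the loop structure is worth the extra bookkeeping.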
+
+static int64_t exhaustive_mesh_search_new(const MACROBLOCK *x, MV *best_mv,
+                                          int range, int step,
+                                          const vp9_variance_fn_ptr_t *fn_ptr,
+                                          const MV *center_mv, int lambda,
+                                          const int_mv *nb_full_mvs,
+                                          int full_mv_num) {
+  const MACROBLOCKD *const xd = &x->e_mbd;
+  const struct buf_2d *src = &x->plane[0].src;
+  const struct buf_2d *pre = &xd->plane[0].pre[0];
+  assert(step >= 1);
+  assert(is_mv_in(&x->mv_limits, center_mv));
+  if (step == 1) {
+    return exhaustive_mesh_search_single_step(
+        best_mv, center_mv, range, src, pre, lambda, nb_full_mvs, full_mv_num,
+        &x->mv_limits, fn_ptr);
+  }
+  return exhaustive_mesh_search_multi_step(best_mv, center_mv, range, step, src,
+                                           pre, lambda, nb_full_mvs,
+                                           full_mv_num, &x->mv_limits, fn_ptr);
+}
+
+static int64_t full_pixel_exhaustive_new(const VP9_COMP *cpi, MACROBLOCK *x,
+                                         MV *centre_mv_full,
+                                         const vp9_variance_fn_ptr_t *fn_ptr,
+                                         MV *dst_mv, int lambda,
+                                         const int_mv *nb_full_mvs,
+                                         int full_mv_num) {
+  const SPEED_FEATURES *const sf = &cpi->sf;
+  MV temp_mv = { centre_mv_full->row, centre_mv_full->col };
+  int64_t bestsme;
+  int i;
+  int interval = sf->mesh_patterns[0].interval;
+  int range = sf->mesh_patterns[0].range;
+  int baseline_interval_divisor;
+
+  // Trap illegal values for interval and range for this function.
+  if ((range < MIN_RANGE) || (range > MAX_RANGE) || (interval < MIN_INTERVAL) ||
+      (interval > range)) {
+    printf("ERROR: invalid range\n");
+    assert(0);
+  }
+
+  baseline_interval_divisor = range / interval;
+
+  // Check size of proposed first range against magnitude of the centre
+  // value used as a starting point.
+  range = VPXMAX(range, (5 * VPXMAX(abs(temp_mv.row), abs(temp_mv.col))) / 4);
+  range = VPXMIN(range, MAX_RANGE);
+  interval = VPXMAX(interval, range / baseline_interval_divisor);
+
+  // initial search
+  bestsme =
+      exhaustive_mesh_search_new(x, &temp_mv, range, interval, fn_ptr, &temp_mv,
+                                 lambda, nb_full_mvs, full_mv_num);
+
+  if ((interval > MIN_INTERVAL) && (range > MIN_RANGE)) {
+    // Progressive searches with range and step size decreasing each time
+    // till we reach a step size of 1. Then break out.
+    for (i = 1; i < MAX_MESH_STEP; ++i) {
+      // First pass with coarser step and longer range
+      bestsme = exhaustive_mesh_search_new(
+          x, &temp_mv, sf->mesh_patterns[i].range,
+          sf->mesh_patterns[i].interval, fn_ptr, &temp_mv, lambda, nb_full_mvs,
+          full_mv_num);
+
+      if (sf->mesh_patterns[i].interval == 1) break;
+    }
+  }
+
+  *dst_mv = temp_mv;
+
+  return bestsme;
+}
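Before the initial mesh pass, `full_pixel_exhaustive_new()` widens the range to cover the starting MV's magnitude, caps it, and grows the interval so the grid point count stays roughly constant. A small sketch of just that adjustment (`max_range` stands in for `MAX_RANGE`, 256 in the diff):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of the range/interval adjustment in full_pixel_exhaustive_new():
 * widen the first search range to cover the starting MV's magnitude, cap it
 * at max_range, and grow the interval so range / interval stays near the
 * baseline divisor computed before the adjustment. */
static void adjust_mesh(int *range, int *interval, int mv_row, int mv_col,
                        int max_range) {
  const int divisor = *range / *interval; /* baseline_interval_divisor */
  const int mag = abs(mv_row) > abs(mv_col) ? abs(mv_row) : abs(mv_col);
  if (*range < (5 * mag) / 4) *range = (5 * mag) / 4; /* VPXMAX */
  if (*range > max_range) *range = max_range;         /* VPXMIN */
  if (*interval < *range / divisor) *interval = *range / divisor;
}
```

Keeping `range / interval` near the configured divisor bounds the work of the first pass even when the starting MV is large; the later passes then shrink both until the interval reaches 1.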
+
+static int64_t diamond_search_sad_new(const MACROBLOCK *x,
+                                      const search_site_config *cfg,
+                                      const MV *init_full_mv, MV *best_full_mv,
+                                      int search_param, int lambda, int *num00,
+                                      const vp9_variance_fn_ptr_t *fn_ptr,
+                                      const int_mv *nb_full_mvs,
+                                      int full_mv_num) {
+  int i, j, step;
+
+  const MACROBLOCKD *const xd = &x->e_mbd;
+  uint8_t *what = x->plane[0].src.buf;
+  const int what_stride = x->plane[0].src.stride;
+  const uint8_t *in_what;
+  const int in_what_stride = xd->plane[0].pre[0].stride;
+  const uint8_t *best_address;
+
+  int64_t bestsad;
+  int best_site = -1;
+  int last_site = -1;
+
+  // search_param determines the length of the initial step and hence the number
+  // of iterations.
+  // 0 = initial step (MAX_FIRST_STEP) pel
+  // 1 = (MAX_FIRST_STEP/2) pel,
+  // 2 = (MAX_FIRST_STEP/4) pel...
+  //  const search_site *ss = &cfg->ss[search_param * cfg->searches_per_step];
+  const MV *ss_mv = &cfg->ss_mv[search_param * cfg->searches_per_step];
+  const intptr_t *ss_os = &cfg->ss_os[search_param * cfg->searches_per_step];
+  const int tot_steps = cfg->total_steps - search_param;
+  vpx_clear_system_state();
+
+  *best_full_mv = *init_full_mv;
+  clamp_mv(best_full_mv, x->mv_limits.col_min, x->mv_limits.col_max,
+           x->mv_limits.row_min, x->mv_limits.row_max);
+  *num00 = 0;
+
+  // Work out the start point for the search
+  in_what = xd->plane[0].pre[0].buf + best_full_mv->row * in_what_stride +
+            best_full_mv->col;
+  best_address = in_what;
+
+  // Check the starting position
+  {
+    const int64_t mv_dist =
+        (int64_t)fn_ptr->sdf(what, what_stride, in_what, in_what_stride)
+        << LOG2_PRECISION;
+    const int64_t mv_cost =
+        vp9_nb_mvs_inconsistency(best_full_mv, nb_full_mvs, full_mv_num);
+    bestsad = mv_dist + lambda * mv_cost;
+  }
+
+  i = 0;
+
+  for (step = 0; step < tot_steps; step++) {
+    int all_in = 1, t;
+
+    // All_in is true if every one of the points we are checking is within
+    // the bounds of the image.
+    all_in &= ((best_full_mv->row + ss_mv[i].row) > x->mv_limits.row_min);
+    all_in &= ((best_full_mv->row + ss_mv[i + 1].row) < x->mv_limits.row_max);
+    all_in &= ((best_full_mv->col + ss_mv[i + 2].col) > x->mv_limits.col_min);
+    all_in &= ((best_full_mv->col + ss_mv[i + 3].col) < x->mv_limits.col_max);
+
+    // If all the pixels are within the bounds, we don't check whether the
+    // search point is valid in this loop; otherwise we check each point
+    // for validity.
+    if (all_in) {
+      unsigned int sad_array[4];
+
+      for (j = 0; j < cfg->searches_per_step; j += 4) {
+        unsigned char const *block_offset[4];
+
+        for (t = 0; t < 4; t++) block_offset[t] = ss_os[i + t] + best_address;
+
+        fn_ptr->sdx4df(what, what_stride, block_offset, in_what_stride,
+                       sad_array);
+
+        for (t = 0; t < 4; t++, i++) {
+          const int64_t mv_dist = (int64_t)sad_array[t] << LOG2_PRECISION;
+          if (mv_dist < bestsad) {
+            const MV this_mv = { best_full_mv->row + ss_mv[i].row,
+                                 best_full_mv->col + ss_mv[i].col };
+            const int64_t mv_cost =
+                vp9_nb_mvs_inconsistency(&this_mv, nb_full_mvs, full_mv_num);
+            const int64_t thissad = mv_dist + lambda * mv_cost;
+            if (thissad < bestsad) {
+              bestsad = thissad;
+              best_site = i;
+            }
+          }
+        }
+      }
+    } else {
+      for (j = 0; j < cfg->searches_per_step; j++) {
+        // Trap illegal vectors
+        const MV this_mv = { best_full_mv->row + ss_mv[i].row,
+                             best_full_mv->col + ss_mv[i].col };
+
+        if (is_mv_in(&x->mv_limits, &this_mv)) {
+          const uint8_t *const check_here = ss_os[i] + best_address;
+          const int64_t mv_dist =
+              (int64_t)fn_ptr->sdf(what, what_stride, check_here,
+                                   in_what_stride)
+              << LOG2_PRECISION;
+          if (mv_dist < bestsad) {
+            const int64_t mv_cost =
+                vp9_nb_mvs_inconsistency(&this_mv, nb_full_mvs, full_mv_num);
+            const int64_t thissad = mv_dist + lambda * mv_cost;
+            if (thissad < bestsad) {
+              bestsad = thissad;
+              best_site = i;
+            }
+          }
+        }
+        i++;
+      }
+    }
+    if (best_site != last_site) {
+      best_full_mv->row += ss_mv[best_site].row;
+      best_full_mv->col += ss_mv[best_site].col;
+      best_address += ss_os[best_site];
+      last_site = best_site;
+    } else if (best_address == in_what) {
+      (*num00)++;
+    }
+  }
+  return bestsad;
+}
+
+int vp9_prepare_nb_full_mvs(const MotionField *motion_field, int mi_row,
+                            int mi_col, int_mv *nb_full_mvs) {
+  const int mi_width = num_8x8_blocks_wide_lookup[motion_field->bsize];
+  const int mi_height = num_8x8_blocks_high_lookup[motion_field->bsize];
+  const int dirs[NB_MVS_NUM][2] = { { -1, 0 }, { 0, -1 }, { 1, 0 }, { 0, 1 } };
+  int nb_full_mv_num = 0;
+  int i;
+  assert(mi_row % mi_height == 0);
+  assert(mi_col % mi_width == 0);
+  for (i = 0; i < NB_MVS_NUM; ++i) {
+    int r = dirs[i][0];
+    int c = dirs[i][1];
+    int brow = mi_row / mi_height + r;
+    int bcol = mi_col / mi_width + c;
+    if (brow >= 0 && brow < motion_field->block_rows && bcol >= 0 &&
+        bcol < motion_field->block_cols) {
+      if (vp9_motion_field_is_mv_set(motion_field, brow, bcol)) {
+        int_mv mv = vp9_motion_field_get_mv(motion_field, brow, bcol);
+        nb_full_mvs[nb_full_mv_num].as_mv = get_full_mv(&mv.as_mv);
+        ++nb_full_mv_num;
+      }
+    }
+  }
+  return nb_full_mv_num;
+}
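`vp9_prepare_nb_full_mvs()` walks the four 4-connected neighbour blocks and collects the MVs of those that are inside the grid and already assigned. A sketch of that walk with flat `set`/`grid` arrays as hypothetical stand-ins for the `MotionField` accessors:

```c
#include <assert.h>

/* Sketch of the neighbour walk in vp9_prepare_nb_full_mvs(): collect MVs of
 * the 4-connected neighbouring blocks that fall inside the block grid and
 * already have an MV assigned. */
typedef struct { int row, col; } nb_mv_t;

static int gather_neighbors(const int *set, const nb_mv_t *grid, int rows,
                            int cols, int brow, int bcol, nb_mv_t *out) {
  static const int dirs[4][2] = { { -1, 0 }, { 0, -1 }, { 1, 0 }, { 0, 1 } };
  int i, n = 0;
  for (i = 0; i < 4; ++i) {
    const int r = brow + dirs[i][0];
    const int c = bcol + dirs[i][1];
    if (r >= 0 && r < rows && c >= 0 && c < cols && set[r * cols + c])
      out[n++] = grid[r * cols + c];
  }
  return n; /* number of usable neighbour MVs, at most NB_MVS_NUM (4) */
}
```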
+#endif  // CONFIG_NON_GREEDY_MV
+
 int vp9_diamond_search_sad_c(const MACROBLOCK *x, const search_site_config *cfg,
                              MV *ref_mv, MV *best_mv, int search_param,
                              int sad_per_bit, int *num00,
@@ -1793,7 +2299,7 @@
 
 unsigned int vp9_int_pro_motion_estimation(const VP9_COMP *cpi, MACROBLOCK *x,
                                            BLOCK_SIZE bsize, int mi_row,
-                                           int mi_col) {
+                                           int mi_col, const MV *ref_mv) {
   MACROBLOCKD *xd = &x->e_mbd;
   MODE_INFO *mi = xd->mi[0];
   struct buf_2d backup_yv12[MAX_MB_PLANE] = { { 0, 0 } };
@@ -1815,6 +2321,7 @@
   const int norm_factor = 3 + (bw >> 5);
   const YV12_BUFFER_CONFIG *scaled_ref_frame =
       vp9_get_scaled_ref_frame(cpi, mi->ref_frame[0]);
+  MvLimits subpel_mv_limits;
 
   if (scaled_ref_frame) {
     int i;
@@ -1917,6 +2424,10 @@
   tmp_mv->row *= 8;
   tmp_mv->col *= 8;
 
+  vp9_set_subpel_mv_search_range(&subpel_mv_limits, &x->mv_limits, ref_mv);
+  clamp_mv(tmp_mv, subpel_mv_limits.col_min, subpel_mv_limits.col_max,
+           subpel_mv_limits.row_min, subpel_mv_limits.row_max);
+
   if (scaled_ref_frame) {
     int i;
     for (i = 0; i < MAX_MB_PLANE; i++) xd->plane[i].pre[0] = backup_yv12[i];
@@ -1925,11 +2436,92 @@
   return best_sad;
 }
 
+static int get_exhaustive_threshold(int exhaustive_searches_thresh,
+                                    BLOCK_SIZE bsize) {
+  return exhaustive_searches_thresh >>
+         (8 - (b_width_log2_lookup[bsize] + b_height_log2_lookup[bsize]));
+}
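The new `get_exhaustive_threshold()` helper scales the speed-feature threshold by block area. A sketch showing the scaling, assuming (as in libvpx) that the log2 lookups count 4-pel units, so a 64x64 block has `b_width_log2 + b_height_log2 == 8`:

```c
#include <assert.h>

/* Sketch of get_exhaustive_threshold(): scale the trigger threshold down
 * for blocks smaller than 64x64; each halving of block area halves the
 * threshold, and a 64x64 block keeps the full value. */
static int exhaustive_threshold(int thresh, int b_width_log2,
                                int b_height_log2) {
  return thresh >> (8 - (b_width_log2 + b_height_log2));
}
```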
+
+#if CONFIG_NON_GREEDY_MV
 // Runs a sequence of diamond searches in smaller steps for RD.
 /* do_refine: If last step (1-away) of n-step search doesn't pick the center
               point as the best match, we will do a final 1-away diamond
               refining search  */
-static int full_pixel_diamond(const VP9_COMP *cpi, MACROBLOCK *x, MV *mvp_full,
+int vp9_full_pixel_diamond_new(const VP9_COMP *cpi, MACROBLOCK *x,
+                               BLOCK_SIZE bsize, MV *mvp_full, int step_param,
+                               int lambda, int do_refine,
+                               const int_mv *nb_full_mvs, int full_mv_num,
+                               MV *best_mv) {
+  const vp9_variance_fn_ptr_t *fn_ptr = &cpi->fn_ptr[bsize];
+  const SPEED_FEATURES *const sf = &cpi->sf;
+  int n, num00 = 0;
+  int thissme;
+  int bestsme;
+  const int further_steps = MAX_MVSEARCH_STEPS - 1 - step_param;
+  const MV center_mv = { 0, 0 };
+  vpx_clear_system_state();
+  diamond_search_sad_new(x, &cpi->ss_cfg, mvp_full, best_mv, step_param, lambda,
+                         &n, fn_ptr, nb_full_mvs, full_mv_num);
+
+  bestsme = vp9_get_mvpred_var(x, best_mv, &center_mv, fn_ptr, 0);
+
+  // If there won't be more n-step search, check to see if refining search is
+  // needed.
+  if (n > further_steps) do_refine = 0;
+
+  while (n < further_steps) {
+    ++n;
+    if (num00) {
+      num00--;
+    } else {
+      MV temp_mv;
+      diamond_search_sad_new(x, &cpi->ss_cfg, mvp_full, &temp_mv,
+                             step_param + n, lambda, &num00, fn_ptr,
+                             nb_full_mvs, full_mv_num);
+      thissme = vp9_get_mvpred_var(x, &temp_mv, &center_mv, fn_ptr, 0);
+      // check to see if refining search is needed.
+      if (num00 > further_steps - n) do_refine = 0;
+
+      if (thissme < bestsme) {
+        bestsme = thissme;
+        *best_mv = temp_mv;
+      }
+    }
+  }
+
+  // final 1-away diamond refining search
+  if (do_refine) {
+    const int search_range = 8;
+    MV temp_mv = *best_mv;
+    vp9_refining_search_sad_new(x, &temp_mv, lambda, search_range, fn_ptr,
+                                nb_full_mvs, full_mv_num);
+    thissme = vp9_get_mvpred_var(x, &temp_mv, &center_mv, fn_ptr, 0);
+    if (thissme < bestsme) {
+      bestsme = thissme;
+      *best_mv = temp_mv;
+    }
+  }
+
+  if (sf->exhaustive_searches_thresh < INT_MAX &&
+      !cpi->rc.is_src_frame_alt_ref) {
+    const int64_t exhaustive_thr =
+        get_exhaustive_threshold(sf->exhaustive_searches_thresh, bsize);
+    if (bestsme > exhaustive_thr) {
+      full_pixel_exhaustive_new(cpi, x, best_mv, fn_ptr, best_mv, lambda,
+                                nb_full_mvs, full_mv_num);
+      bestsme = vp9_get_mvpred_var(x, best_mv, &center_mv, fn_ptr, 0);
+    }
+  }
+  return bestsme;
+}
+#endif  // CONFIG_NON_GREEDY_MV
+
+// Runs a sequence of diamond searches in smaller steps for RD.
+/* do_refine: If last step (1-away) of n-step search doesn't pick the center
+              point as the best match, we will do a final 1-away diamond
+              refining search  */
+static int full_pixel_diamond(const VP9_COMP *const cpi,
+                              const MACROBLOCK *const x, MV *mvp_full,
                               int step_param, int sadpb, int further_steps,
                               int do_refine, int *cost_list,
                               const vp9_variance_fn_ptr_t *fn_ptr,
@@ -1989,13 +2581,11 @@
   return bestsme;
 }
 
-#define MIN_RANGE 7
-#define MAX_RANGE 256
-#define MIN_INTERVAL 1
 // Runs a limited-range exhaustive mesh search using a pattern set
 // according to the encode speed profile.
-static int full_pixel_exhaustive(VP9_COMP *cpi, MACROBLOCK *x,
-                                 MV *centre_mv_full, int sadpb, int *cost_list,
+static int full_pixel_exhaustive(const VP9_COMP *const cpi,
+                                 const MACROBLOCK *const x, MV *centre_mv_full,
+                                 int sadpb, int *cost_list,
                                  const vp9_variance_fn_ptr_t *fn_ptr,
                                  const MV *ref_mv, MV *dst_mv) {
   const SPEED_FEATURES *const sf = &cpi->sf;
@@ -2021,7 +2611,7 @@
   interval = VPXMAX(interval, range / baseline_interval_divisor);
 
   // initial search
-  bestsme = exhuastive_mesh_search(x, &f_ref_mv, &temp_mv, range, interval,
+  bestsme = exhaustive_mesh_search(x, &f_ref_mv, &temp_mv, range, interval,
                                    sadpb, fn_ptr, &temp_mv);
 
   if ((interval > MIN_INTERVAL) && (range > MIN_RANGE)) {
@@ -2029,7 +2619,7 @@
     // till we reach a step size of 1. Then break out.
     for (i = 1; i < MAX_MESH_STEP; ++i) {
       // First pass with coarser step and longer range
-      bestsme = exhuastive_mesh_search(
+      bestsme = exhaustive_mesh_search(
           x, &f_ref_mv, &temp_mv, sf->mesh_patterns[i].range,
           sf->mesh_patterns[i].interval, sadpb, fn_ptr, &temp_mv);
 
@@ -2048,6 +2638,91 @@
   return bestsme;
 }
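The coarse-to-fine pattern in full_pixel_exhaustive() above -- run the mesh at a coarse interval, then re-center on the best point and halve the interval on each mesh step until it reaches 1 -- can be sketched in one dimension as follows. This is a simplified illustration, not the libvpx implementation: `mesh_search_1d` is a hypothetical name and the absolute-difference cost stands in for the SAD-based cost function.

```c
/* One-dimensional sketch of a coarse-to-fine mesh search.
 * The |p - target| cost stands in for the SAD cost; mesh_search_1d
 * is an illustrative name, not a libvpx function. */
int mesh_search_1d(int target, int center, int range, int interval) {
  int best = center;
  int best_cost = (center > target) ? center - target : target - center;
  while (interval >= 1) {
    /* Fix the window before scanning so updates to `best` inside the
     * loop do not move the bounds mid-pass. */
    const int lo = best - range, hi = best + range;
    int p;
    for (p = lo; p <= hi; p += interval) {
      const int cost = (p > target) ? p - target : target - p;
      if (cost < best_cost) {
        best_cost = cost;
        best = p;
      }
    }
    if (interval == 1) break;  /* finest step reached */
    range = interval;          /* shrink the window around the new best */
    interval >>= 1;            /* halve the step, as in the mesh loop */
  }
  return best;
}
```

Each pass narrows the window to the previous interval, so the total number of cost evaluations stays small even for a large initial range.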
 
+#if CONFIG_NON_GREEDY_MV
+int64_t vp9_refining_search_sad_new(const MACROBLOCK *x, MV *best_full_mv,
+                                    int lambda, int search_range,
+                                    const vp9_variance_fn_ptr_t *fn_ptr,
+                                    const int_mv *nb_full_mvs,
+                                    int full_mv_num) {
+  const MACROBLOCKD *const xd = &x->e_mbd;
+  const MV neighbors[4] = { { -1, 0 }, { 0, -1 }, { 0, 1 }, { 1, 0 } };
+  const struct buf_2d *const what = &x->plane[0].src;
+  const struct buf_2d *const in_what = &xd->plane[0].pre[0];
+  const uint8_t *best_address = get_buf_from_mv(in_what, best_full_mv);
+  int64_t best_sad;
+  int i, j;
+  vpx_clear_system_state();
+  {
+    const int64_t mv_dist = (int64_t)fn_ptr->sdf(what->buf, what->stride,
+                                                 best_address, in_what->stride)
+                            << LOG2_PRECISION;
+    const int64_t mv_cost =
+        vp9_nb_mvs_inconsistency(best_full_mv, nb_full_mvs, full_mv_num);
+    best_sad = mv_dist + lambda * mv_cost;
+  }
+
+  for (i = 0; i < search_range; i++) {
+    int best_site = -1;
+    const int all_in = ((best_full_mv->row - 1) > x->mv_limits.row_min) &
+                       ((best_full_mv->row + 1) < x->mv_limits.row_max) &
+                       ((best_full_mv->col - 1) > x->mv_limits.col_min) &
+                       ((best_full_mv->col + 1) < x->mv_limits.col_max);
+
+    if (all_in) {
+      unsigned int sads[4];
+      const uint8_t *const positions[4] = { best_address - in_what->stride,
+                                            best_address - 1, best_address + 1,
+                                            best_address + in_what->stride };
+
+      fn_ptr->sdx4df(what->buf, what->stride, positions, in_what->stride, sads);
+
+      for (j = 0; j < 4; ++j) {
+        const MV mv = { best_full_mv->row + neighbors[j].row,
+                        best_full_mv->col + neighbors[j].col };
+        const int64_t mv_dist = (int64_t)sads[j] << LOG2_PRECISION;
+        const int64_t mv_cost =
+            vp9_nb_mvs_inconsistency(&mv, nb_full_mvs, full_mv_num);
+        const int64_t thissad = mv_dist + lambda * mv_cost;
+        if (thissad < best_sad) {
+          best_sad = thissad;
+          best_site = j;
+        }
+      }
+    } else {
+      for (j = 0; j < 4; ++j) {
+        const MV mv = { best_full_mv->row + neighbors[j].row,
+                        best_full_mv->col + neighbors[j].col };
+
+        if (is_mv_in(&x->mv_limits, &mv)) {
+          const int64_t mv_dist =
+              (int64_t)fn_ptr->sdf(what->buf, what->stride,
+                                   get_buf_from_mv(in_what, &mv),
+                                   in_what->stride)
+              << LOG2_PRECISION;
+          const int64_t mv_cost =
+              vp9_nb_mvs_inconsistency(&mv, nb_full_mvs, full_mv_num);
+          const int64_t thissad = mv_dist + lambda * mv_cost;
+          if (thissad < best_sad) {
+            best_sad = thissad;
+            best_site = j;
+          }
+        }
+      }
+    }
+
+    if (best_site == -1) {
+      break;
+    } else {
+      best_full_mv->row += neighbors[best_site].row;
+      best_full_mv->col += neighbors[best_site].col;
+      best_address = get_buf_from_mv(in_what, best_full_mv);
+    }
+  }
+
+  return best_sad;
+}
+#endif  // CONFIG_NON_GREEDY_MV
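The neighbor-refinement loop added in vp9_refining_search_sad_new() probes the four cross neighbors of the current best MV with cost = distortion + lambda * motion-field inconsistency, moves to the cheapest neighbor, and stops once the center wins. A toy version of that loop is sketched below; `refine_from_origin` and its quadratic cost are illustrative stand-ins (the quadratic term replaces the SAD, the |row| + |col| term replaces vp9_nb_mvs_inconsistency), not libvpx API.

```c
/* Toy 2-D neighbor refinement starting from (0, 0).
 * Returns row * 1000 + col for easy checking (assumes a small,
 * non-negative result). */
long refine_from_origin(int trow, int tcol, int lambda, int search_range) {
  static const int nb[4][2] = { { -1, 0 }, { 0, -1 }, { 0, 1 }, { 1, 0 } };
  int row = 0, col = 0, i;
  long best = (long)trow * trow + (long)tcol * tcol; /* cost at origin */
  for (i = 0; i < search_range; i++) {
    int best_site = -1, j;
    for (j = 0; j < 4; j++) {
      const int r = row + nb[j][0], c = col + nb[j][1];
      const long dr = r - trow, dc = c - tcol;
      const long pen = (r < 0 ? -r : r) + (c < 0 ? -c : c);
      const long cost = dr * dr + dc * dc + (long)lambda * pen;
      if (cost < best) {
        best = cost;
        best_site = j;
      }
    }
    if (best_site == -1) break; /* center beats all four neighbors */
    row += nb[best_site][0];
    col += nb[best_site][1];
  }
  return (long)row * 1000 + col;
}
```

With lambda = 0 the walk descends to the distortion minimum; a large lambda keeps the MV pinned to the neighbor-consistent prediction, which is the trade-off the lambda parameter controls in the real function.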
+
 int vp9_refining_search_sad(const MACROBLOCK *x, MV *ref_mv, int error_per_bit,
                             int search_range,
                             const vp9_variance_fn_ptr_t *fn_ptr,
@@ -2173,14 +2848,16 @@
   return best_sad;
 }
 
-int vp9_full_pixel_search(VP9_COMP *cpi, MACROBLOCK *x, BLOCK_SIZE bsize,
-                          MV *mvp_full, int step_param, int search_method,
-                          int error_per_bit, int *cost_list, const MV *ref_mv,
-                          MV *tmp_mv, int var_max, int rd) {
+int vp9_full_pixel_search(const VP9_COMP *const cpi, const MACROBLOCK *const x,
+                          BLOCK_SIZE bsize, MV *mvp_full, int step_param,
+                          int search_method, int error_per_bit, int *cost_list,
+                          const MV *ref_mv, MV *tmp_mv, int var_max, int rd) {
   const SPEED_FEATURES *const sf = &cpi->sf;
   const SEARCH_METHODS method = (SEARCH_METHODS)search_method;
-  vp9_variance_fn_ptr_t *fn_ptr = &cpi->fn_ptr[bsize];
+  const vp9_variance_fn_ptr_t *fn_ptr = &cpi->fn_ptr[bsize];
   int var = 0;
+  int run_exhaustive_search = 0;
+
   if (cost_list) {
     cost_list[0] = INT_MAX;
     cost_list[1] = INT_MAX;
@@ -2211,35 +2888,39 @@
                           fn_ptr, 1, ref_mv, tmp_mv);
       break;
     case NSTEP:
+    case MESH:
       var = full_pixel_diamond(cpi, x, mvp_full, step_param, error_per_bit,
                                MAX_MVSEARCH_STEPS - 1 - step_param, 1,
                                cost_list, fn_ptr, ref_mv, tmp_mv);
-
-      // Should we allow a follow on exhaustive search?
-      if ((sf->exhaustive_searches_thresh < INT_MAX) &&
-          !cpi->rc.is_src_frame_alt_ref) {
-        int64_t exhuastive_thr = sf->exhaustive_searches_thresh;
-        exhuastive_thr >>=
-            8 - (b_width_log2_lookup[bsize] + b_height_log2_lookup[bsize]);
-
-        // Threshold variance for an exhaustive full search.
-        if (var > exhuastive_thr) {
-          int var_ex;
-          MV tmp_mv_ex;
-          var_ex = full_pixel_exhaustive(cpi, x, tmp_mv, error_per_bit,
-                                         cost_list, fn_ptr, ref_mv, &tmp_mv_ex);
-
-          if (var_ex < var) {
-            var = var_ex;
-            *tmp_mv = tmp_mv_ex;
-          }
-        }
-      }
       break;
-    default: assert(0 && "Invalid search method.");
+    default: assert(0 && "Unknown search method");
   }
 
-  if (method != NSTEP && rd && var < var_max)
+  if (method == NSTEP) {
+    if (sf->exhaustive_searches_thresh < INT_MAX &&
+        !cpi->rc.is_src_frame_alt_ref) {
+      const int64_t exhaustive_thr =
+          get_exhaustive_threshold(sf->exhaustive_searches_thresh, bsize);
+      if (var > exhaustive_thr) {
+        run_exhaustive_search = 1;
+      }
+    }
+  } else if (method == MESH) {
+    run_exhaustive_search = 1;
+  }
+
+  if (run_exhaustive_search) {
+    int var_ex;
+    MV tmp_mv_ex;
+    var_ex = full_pixel_exhaustive(cpi, x, tmp_mv, error_per_bit, cost_list,
+                                   fn_ptr, ref_mv, &tmp_mv_ex);
+    if (var_ex < var) {
+      var = var_ex;
+      *tmp_mv = tmp_mv_ex;
+    }
+  }
+
+  if (method != NSTEP && method != MESH && rd && var < var_max)
     var = vp9_get_mvpred_var(x, tmp_mv, ref_mv, fn_ptr, 1);
 
   return var;
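The rewritten dispatch above makes the follow-up exhaustive search explicit: MESH runs it unconditionally, while NSTEP runs it only when the diamond-search variance exceeds a block-size-scaled threshold. Assuming get_exhaustive_threshold() keeps the scaling the removed inline code used (thresh >>= 8 - (b_width_log2 + b_height_log2)), the gate can be sketched as follows; the INT_MAX guard on sf->exhaustive_searches_thresh is omitted, and both function names are illustrative.

```c
/* Block-size scaling of the exhaustive-search threshold: an assumption
 * that get_exhaustive_threshold() mirrors the removed inline shift.
 * bwl/bhl are block width/height log2 values: 0 + 0 for 4x4, 4 + 4
 * for 64x64, so smaller blocks get proportionally smaller thresholds. */
long long scaled_thresh(long long base, int bwl, int bhl) {
  return base >> (8 - (bwl + bhl));
}

/* Decide whether the follow-up exhaustive search should run. */
int should_run_exhaustive(int is_mesh_method, int is_src_frame_alt_ref,
                          long long var, long long thresh) {
  if (is_mesh_method) return 1; /* MESH: unconditional */
  /* NSTEP: only when the diamond result is still poor */
  return !is_src_frame_alt_ref && var > thresh;
}
```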
@@ -2280,7 +2961,8 @@
   (void)tc;            \
   (void)sse;           \
   (void)thismse;       \
-  (void)cost_list;
+  (void)cost_list;     \
+  (void)use_accurate_subpel_search;
 
 // Return the maximum MV.
 uint32_t vp9_return_max_sub_pixel_mv(
@@ -2288,7 +2970,7 @@
     int error_per_bit, const vp9_variance_fn_ptr_t *vfp, int forced_stop,
     int iters_per_step, int *cost_list, int *mvjcost, int *mvcost[2],
     uint32_t *distortion, uint32_t *sse1, const uint8_t *second_pred, int w,
-    int h) {
+    int h, int use_accurate_subpel_search) {
   COMMON_MV_TEST;
 
   (void)minr;
@@ -2310,7 +2992,7 @@
     int error_per_bit, const vp9_variance_fn_ptr_t *vfp, int forced_stop,
     int iters_per_step, int *cost_list, int *mvjcost, int *mvcost[2],
     uint32_t *distortion, uint32_t *sse1, const uint8_t *second_pred, int w,
-    int h) {
+    int h, int use_accurate_subpel_search) {
   COMMON_MV_TEST;
 
   (void)maxr;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mcomp.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mcomp.h
index b8db2c3..0c4d8f2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mcomp.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_mcomp.h
@@ -8,10 +8,13 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_MCOMP_H_
-#define VP9_ENCODER_VP9_MCOMP_H_
+#ifndef VPX_VP9_ENCODER_VP9_MCOMP_H_
+#define VPX_VP9_ENCODER_VP9_MCOMP_H_
 
 #include "vp9/encoder/vp9_block.h"
+#if CONFIG_NON_GREEDY_MV
+#include "vp9/encoder/vp9_non_greedy_mv.h"
+#endif  // CONFIG_NON_GREEDY_MV
 #include "vpx_dsp/variance.h"
 
 #ifdef __cplusplus
@@ -38,6 +41,11 @@
   int total_steps;
 } search_site_config;
 
+static INLINE const uint8_t *get_buf_from_mv(const struct buf_2d *buf,
+                                             const MV *mv) {
+  return &buf->buf[mv->row * buf->stride + mv->col];
+}
+
 void vp9_init_dsmotion_compensation(search_site_config *cfg, int stride);
 void vp9_init3smotion_compensation(search_site_config *cfg, int stride);
 
@@ -59,14 +67,15 @@
 int vp9_init_search_range(int size);
 
 int vp9_refining_search_sad(const struct macroblock *x, struct mv *ref_mv,
-                            int sad_per_bit, int distance,
+                            int error_per_bit, int search_range,
                             const struct vp9_variance_vtable *fn_ptr,
                             const struct mv *center_mv);
 
 // Perform integral projection based motion estimation.
 unsigned int vp9_int_pro_motion_estimation(const struct VP9_COMP *cpi,
                                            MACROBLOCK *x, BLOCK_SIZE bsize,
-                                           int mi_row, int mi_col);
+                                           int mi_row, int mi_col,
+                                           const MV *ref_mv);
 
 typedef uint32_t(fractional_mv_step_fp)(
     const MACROBLOCK *x, MV *bestmv, const MV *ref_mv, int allow_hp,
@@ -74,7 +83,7 @@
     int forced_stop,  // 0 - full, 1 - qtr only, 2 - half only
     int iters_per_step, int *cost_list, int *mvjcost, int *mvcost[2],
     uint32_t *distortion, uint32_t *sse1, const uint8_t *second_pred, int w,
-    int h);
+    int h, int use_accurate_subpel_search);
 
 extern fractional_mv_step_fp vp9_find_best_sub_pixel_tree;
 extern fractional_mv_step_fp vp9_find_best_sub_pixel_tree_pruned;
@@ -106,7 +115,11 @@
 
 struct VP9_COMP;
 
-int vp9_full_pixel_search(struct VP9_COMP *cpi, MACROBLOCK *x, BLOCK_SIZE bsize,
+// "mvp_full" is the MV search starting point;
+// "ref_mv" is the context reference MV;
+// "tmp_mv" is the searched best MV.
+int vp9_full_pixel_search(const struct VP9_COMP *const cpi,
+                          const MACROBLOCK *const x, BLOCK_SIZE bsize,
                           MV *mvp_full, int step_param, int search_method,
                           int error_per_bit, int *cost_list, const MV *ref_mv,
                           MV *tmp_mv, int var_max, int rd);
@@ -115,8 +128,55 @@
                                     const MvLimits *umv_window_limits,
                                     const MV *ref_mv);
 
+#if CONFIG_NON_GREEDY_MV
+struct TplDepStats;
+int64_t vp9_refining_search_sad_new(const MACROBLOCK *x, MV *best_full_mv,
+                                    int lambda, int search_range,
+                                    const vp9_variance_fn_ptr_t *fn_ptr,
+                                    const int_mv *nb_full_mvs, int full_mv_num);
+
+int vp9_full_pixel_diamond_new(const struct VP9_COMP *cpi, MACROBLOCK *x,
+                               BLOCK_SIZE bsize, MV *mvp_full, int step_param,
+                               int lambda, int do_refine,
+                               const int_mv *nb_full_mvs, int full_mv_num,
+                               MV *best_mv);
+
+static INLINE MV get_full_mv(const MV *mv) {
+  MV out_mv;
+  out_mv.row = mv->row >> 3;
+  out_mv.col = mv->col >> 3;
+  return out_mv;
+}
+struct TplDepFrame;
+int vp9_prepare_nb_full_mvs(const struct MotionField *motion_field, int mi_row,
+                            int mi_col, int_mv *nb_full_mvs);
+
+static INLINE BLOCK_SIZE get_square_block_size(BLOCK_SIZE bsize) {
+  BLOCK_SIZE square_bsize;
+  switch (bsize) {
+    case BLOCK_4X4:
+    case BLOCK_4X8:
+    case BLOCK_8X4: square_bsize = BLOCK_4X4; break;
+    case BLOCK_8X8:
+    case BLOCK_8X16:
+    case BLOCK_16X8: square_bsize = BLOCK_8X8; break;
+    case BLOCK_16X16:
+    case BLOCK_16X32:
+    case BLOCK_32X16: square_bsize = BLOCK_16X16; break;
+    case BLOCK_32X32:
+    case BLOCK_32X64:
+    case BLOCK_64X32:
+    case BLOCK_64X64: square_bsize = BLOCK_32X32; break;
+    default:
+      square_bsize = BLOCK_INVALID;
+      assert(0 && "ERROR: invalid block size");
+      break;
+  }
+  return square_bsize;
+}
+#endif  // CONFIG_NON_GREEDY_MV
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_MCOMP_H_
+#endif  // VPX_VP9_ENCODER_VP9_MCOMP_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.c
index da06fb1..c66c035 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.c
@@ -13,6 +13,7 @@
 #include "vp9/encoder/vp9_encoder.h"
 #include "vp9/encoder/vp9_ethread.h"
 #include "vp9/encoder/vp9_multi_thread.h"
+#include "vp9/encoder/vp9_temporal_filter.h"
 
 void *vp9_enc_grp_get_next_job(MultiThreadHandle *multi_thread_ctxt,
                                int tile_id) {
@@ -50,6 +51,20 @@
   return job_info;
 }
 
+void vp9_row_mt_alloc_rd_thresh(VP9_COMP *const cpi,
+                                TileDataEnc *const this_tile) {
+  VP9_COMMON *const cm = &cpi->common;
+  const int sb_rows =
+      (mi_cols_aligned_to_sb(cm->mi_rows) >> MI_BLOCK_SIZE_LOG2) + 1;
+  int i;
+
+  this_tile->row_base_thresh_freq_fact =
+      (int *)vpx_calloc(sb_rows * BLOCK_SIZES * MAX_MODES,
+                        sizeof(*(this_tile->row_base_thresh_freq_fact)));
+  for (i = 0; i < sb_rows * BLOCK_SIZES * MAX_MODES; i++)
+    this_tile->row_base_thresh_freq_fact[i] = RD_THRESH_INIT_FACT;
+}
+
 void vp9_row_mt_mem_alloc(VP9_COMP *cpi) {
   struct VP9Common *cm = &cpi->common;
   MultiThreadHandle *multi_thread_ctxt = &cpi->multi_thread_ctxt;
@@ -59,6 +74,8 @@
   const int sb_rows = mi_cols_aligned_to_sb(cm->mi_rows) >> MI_BLOCK_SIZE_LOG2;
   int jobs_per_tile_col, total_jobs;
 
+  // Allocate memory that is large enough for all row_mt stages. First pass
+  // uses 16x16 block size.
   jobs_per_tile_col = VPXMAX(cm->mb_rows, sb_rows);
   // Calculate the total number of jobs
   total_jobs = jobs_per_tile_col * tile_cols;
@@ -83,14 +100,11 @@
     TileDataEnc *this_tile = &cpi->tile_data[tile_col];
     vp9_row_mt_sync_mem_alloc(&this_tile->row_mt_sync, cm, jobs_per_tile_col);
     if (cpi->sf.adaptive_rd_thresh_row_mt) {
-      const int sb_rows =
-          (mi_cols_aligned_to_sb(cm->mi_rows) >> MI_BLOCK_SIZE_LOG2) + 1;
-      int i;
-      this_tile->row_base_thresh_freq_fact =
-          (int *)vpx_calloc(sb_rows * BLOCK_SIZES * MAX_MODES,
-                            sizeof(*(this_tile->row_base_thresh_freq_fact)));
-      for (i = 0; i < sb_rows * BLOCK_SIZES * MAX_MODES; i++)
-        this_tile->row_base_thresh_freq_fact[i] = RD_THRESH_INIT_FACT;
+      if (this_tile->row_base_thresh_freq_fact != NULL) {
+        vpx_free(this_tile->row_base_thresh_freq_fact);
+        this_tile->row_base_thresh_freq_fact = NULL;
+      }
+      vp9_row_mt_alloc_rd_thresh(cpi, this_tile);
     }
   }
 
@@ -146,11 +160,9 @@
       TileDataEnc *this_tile =
           &cpi->tile_data[tile_row * multi_thread_ctxt->allocated_tile_cols +
                           tile_col];
-      if (cpi->sf.adaptive_rd_thresh_row_mt) {
-        if (this_tile->row_base_thresh_freq_fact != NULL) {
-          vpx_free(this_tile->row_base_thresh_freq_fact);
-          this_tile->row_base_thresh_freq_fact = NULL;
-        }
+      if (this_tile->row_base_thresh_freq_fact != NULL) {
+        vpx_free(this_tile->row_base_thresh_freq_fact);
+        this_tile->row_base_thresh_freq_fact = NULL;
       }
     }
   }
@@ -219,11 +231,19 @@
   MultiThreadHandle *multi_thread_ctxt = &cpi->multi_thread_ctxt;
   JobQueue *job_queue = multi_thread_ctxt->job_queue;
   const int tile_cols = 1 << cm->log2_tile_cols;
-  int job_row_num, jobs_per_tile, jobs_per_tile_col, total_jobs;
+  int job_row_num, jobs_per_tile, jobs_per_tile_col = 0, total_jobs;
   const int sb_rows = mi_cols_aligned_to_sb(cm->mi_rows) >> MI_BLOCK_SIZE_LOG2;
   int tile_col, i;
 
-  jobs_per_tile_col = (job_type != ENCODE_JOB) ? cm->mb_rows : sb_rows;
+  switch (job_type) {
+    case ENCODE_JOB: jobs_per_tile_col = sb_rows; break;
+    case FIRST_PASS_JOB: jobs_per_tile_col = cm->mb_rows; break;
+    case ARNR_JOB:
+      jobs_per_tile_col = ((cm->mi_rows + TF_ROUND) >> TF_SHIFT);
+      break;
+    default: assert(0);
+  }
+
   total_jobs = jobs_per_tile_col * tile_cols;
 
   multi_thread_ctxt->jobs_per_tile_col = jobs_per_tile_col;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.h
index bfc0c0a..a2276f4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_multi_thread.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_MULTI_THREAD_H
-#define VP9_ENCODER_VP9_MULTI_THREAD_H
+#ifndef VPX_VP9_ENCODER_VP9_MULTI_THREAD_H_
+#define VPX_VP9_ENCODER_VP9_MULTI_THREAD_H_
 
 #include "vp9/encoder/vp9_encoder.h"
 #include "vp9/encoder/vp9_job_queue.h"
@@ -29,10 +29,13 @@
 
 void vp9_row_mt_mem_alloc(VP9_COMP *cpi);
 
+void vp9_row_mt_alloc_rd_thresh(VP9_COMP *const cpi,
+                                TileDataEnc *const this_tile);
+
 void vp9_row_mt_mem_dealloc(VP9_COMP *cpi);
 
 int vp9_get_tiles_proc_status(MultiThreadHandle *multi_thread_ctxt,
                               int *tile_completion_status, int *cur_tile_id,
                               int tile_cols);
 
-#endif  // VP9_ENCODER_VP9_MULTI_THREAD_H
+#endif  // VPX_VP9_ENCODER_VP9_MULTI_THREAD_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.c
index 249e037..9696529 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.c
@@ -32,7 +32,7 @@
 
 void vp9_noise_estimate_init(NOISE_ESTIMATE *const ne, int width, int height) {
   ne->enabled = 0;
-  ne->level = kLowLow;
+  ne->level = (width * height < 1280 * 720) ? kLowLow : kLow;
   ne->value = 0;
   ne->count = 0;
   ne->thresh = 90;
@@ -46,6 +46,7 @@
     ne->thresh = 115;
   }
   ne->num_frames_estimate = 15;
+  ne->adapt_thresh = (3 * ne->thresh) >> 1;
 }
 
 static int enable_noise_estimation(VP9_COMP *const cpi) {
@@ -97,7 +98,7 @@
   } else {
     if (ne->value > ne->thresh)
       noise_level = kMedium;
-    else if (ne->value > ((9 * ne->thresh) >> 4))
+    else if (ne->value > (ne->thresh >> 1))
       noise_level = kLow;
     else
       noise_level = kLowLow;
@@ -112,10 +113,6 @@
   // Estimate of noise level every frame_period frames.
   int frame_period = 8;
   int thresh_consec_zeromv = 6;
-  unsigned int thresh_sum_diff = 100;
-  unsigned int thresh_sum_spatial = (200 * 200) << 8;
-  unsigned int thresh_spatial_var = (32 * 32) << 8;
-  int min_blocks_estimate = cm->mi_rows * cm->mi_cols >> 7;
   int frame_counter = cm->current_video_frame;
   // Estimate is between current source and last source.
   YV12_BUFFER_CONFIG *last_source = cpi->Last_Source;
@@ -124,11 +121,8 @@
     last_source = &cpi->denoiser.last_source;
     // Tune these thresholds for different resolutions when denoising is
     // enabled.
-    if (cm->width > 640 && cm->width < 1920) {
-      thresh_consec_zeromv = 4;
-      thresh_sum_diff = 200;
-      thresh_sum_spatial = (120 * 120) << 8;
-      thresh_spatial_var = (48 * 48) << 8;
+    if (cm->width > 640 && cm->width <= 1920) {
+      thresh_consec_zeromv = 2;
     }
   }
 #endif
@@ -148,8 +142,10 @@
       ne->last_h = cm->height;
     }
     return;
-  } else if (cm->current_video_frame > 60 &&
-             cpi->rc.avg_frame_low_motion < (low_res ? 70 : 50)) {
+  } else if (frame_counter > 60 && cpi->svc.num_encoded_top_layer > 1 &&
+             cpi->rc.frames_since_key > cpi->svc.number_spatial_layers &&
+             cpi->svc.spatial_layer_id == cpi->svc.number_spatial_layers - 1 &&
+             cpi->rc.avg_frame_low_motion < (low_res ? 60 : 40)) {
     // Force noise estimation to 0 and denoiser off if content has high motion.
     ne->level = kLowLow;
     ne->count = 0;
@@ -157,17 +153,19 @@
 #if CONFIG_VP9_TEMPORAL_DENOISING
     if (cpi->oxcf.noise_sensitivity > 0 && noise_est_svc(cpi) &&
         cpi->svc.current_superframe > 1) {
-      vp9_denoiser_set_noise_level(&cpi->denoiser, ne->level);
+      vp9_denoiser_set_noise_level(cpi, ne->level);
       copy_frame(&cpi->denoiser.last_source, cpi->Source);
     }
 #endif
     return;
   } else {
-    int num_samples = 0;
-    uint64_t avg_est = 0;
+    unsigned int bin_size = 100;
+    unsigned int hist[MAX_VAR_HIST_BINS] = { 0 };
+    unsigned int hist_avg[MAX_VAR_HIST_BINS];
+    unsigned int max_bin = 0;
+    unsigned int max_bin_count = 0;
+    unsigned int bin_cnt;
     int bsize = BLOCK_16X16;
-    static const unsigned char const_source[16] = { 0, 0, 0, 0, 0, 0, 0, 0,
-                                                    0, 0, 0, 0, 0, 0, 0, 0 };
     // Loop over sub-sample of 16x16 blocks of frame, and for blocks that have
     // been encoded as zero/small mv at least x consecutive frames, compute
     // the variance to update estimate of noise in the source.
@@ -207,8 +205,11 @@
           // Only consider blocks that are likely steady background. i.e, have
           // been encoded as zero/low motion x (= thresh_consec_zeromv) frames
           // in a row. consec_zero_mv[] defined for 8x8 blocks, so consider all
-          // 4 sub-blocks for 16x16 block. Also, avoid skin blocks.
-          if (frame_low_motion && consec_zeromv > thresh_consec_zeromv) {
+          // 4 sub-blocks for 16x16 block. And exclude this frame if
+          // high_source_sad is true (i.e., scene/content change).
+          if (frame_low_motion && consec_zeromv > thresh_consec_zeromv &&
+              !cpi->rc.high_source_sad &&
+              !cpi->svc.high_source_sad_superframe) {
             int is_skin = 0;
             if (cpi->use_skin_detection) {
               is_skin =
@@ -217,25 +218,15 @@
             }
             if (!is_skin) {
               unsigned int sse;
-              // Compute variance.
+              // Compute variance between co-located blocks from current and
+              // last input frames.
               unsigned int variance = cpi->fn_ptr[bsize].vf(
                   src_y, src_ystride, last_src_y, last_src_ystride, &sse);
-              // Only consider this block as valid for noise measurement if the
-              // average term (sse - variance = N * avg^{2}, N = 16X16) of the
-              // temporal residual is small (avoid effects from lighting
-              // change).
-              if ((sse - variance) < thresh_sum_diff) {
-                unsigned int sse2;
-                const unsigned int spatial_variance = cpi->fn_ptr[bsize].vf(
-                    src_y, src_ystride, const_source, 0, &sse2);
-                // Avoid blocks with high brightness and high spatial variance.
-                if ((sse2 - spatial_variance) < thresh_sum_spatial &&
-                    spatial_variance < thresh_spatial_var) {
-                  avg_est += low_res ? variance >> 4
-                                     : variance / ((spatial_variance >> 9) + 1);
-                  num_samples++;
-                }
-              }
+              unsigned int hist_index = variance / bin_size;
+              if (hist_index < MAX_VAR_HIST_BINS)
+                hist[hist_index]++;
+              else if (hist_index < 3 * (MAX_VAR_HIST_BINS >> 1))
+                hist[MAX_VAR_HIST_BINS - 1]++;  // Account for the tail
             }
           }
         }
@@ -251,26 +242,58 @@
     }
     ne->last_w = cm->width;
     ne->last_h = cm->height;
-    // Update noise estimate if we have at a minimum number of block samples,
-    // and avg_est > 0 (avg_est == 0 can happen if the application inputs
-    // duplicate frames).
-    if (num_samples > min_blocks_estimate && avg_est > 0) {
-      // Normalize.
-      avg_est = avg_est / num_samples;
-      // Update noise estimate.
-      ne->value = (int)((3 * ne->value + avg_est) >> 2);
-      ne->count++;
-      if (ne->count == ne->num_frames_estimate) {
-        // Reset counter and check noise level condition.
-        ne->num_frames_estimate = 30;
-        ne->count = 0;
-        ne->level = vp9_noise_estimate_extract_level(ne);
-#if CONFIG_VP9_TEMPORAL_DENOISING
-        if (cpi->oxcf.noise_sensitivity > 0 && noise_est_svc(cpi))
-          vp9_denoiser_set_noise_level(&cpi->denoiser, ne->level);
-#endif
+    // Adjust the histogram to account for the effect that it flattens
+    // and shifts toward zero as the scene darkens.
+    if (hist[0] > 10 && (hist[MAX_VAR_HIST_BINS - 1] > hist[0] >> 2)) {
+      hist[0] = 0;
+      hist[1] >>= 2;
+      hist[2] >>= 2;
+      hist[3] >>= 2;
+      hist[4] >>= 1;
+      hist[5] >>= 1;
+      hist[6] = 3 * hist[6] >> 1;
+      hist[MAX_VAR_HIST_BINS - 1] >>= 1;
+    }
+
+    // Average hist[] and find the largest bin.
+    for (bin_cnt = 0; bin_cnt < MAX_VAR_HIST_BINS; bin_cnt++) {
+      if (bin_cnt == 0)
+        hist_avg[bin_cnt] = (hist[0] + hist[1] + hist[2]) / 3;
+      else if (bin_cnt == MAX_VAR_HIST_BINS - 1)
+        hist_avg[bin_cnt] = hist[MAX_VAR_HIST_BINS - 1] >> 2;
+      else if (bin_cnt == MAX_VAR_HIST_BINS - 2)
+        hist_avg[bin_cnt] = (hist[bin_cnt - 1] + 2 * hist[bin_cnt] +
+                             (hist[bin_cnt + 1] >> 1) + 2) >>
+                            2;
+      else
+        hist_avg[bin_cnt] =
+            (hist[bin_cnt - 1] + 2 * hist[bin_cnt] + hist[bin_cnt + 1] + 2) >>
+            2;
+
+      if (hist_avg[bin_cnt] > max_bin_count) {
+        max_bin_count = hist_avg[bin_cnt];
+        max_bin = bin_cnt;
       }
     }
+
+    // Scale by 40 to work with the existing thresholds.
+    ne->value = (int)((3 * ne->value + max_bin * 40) >> 2);
+    // Quickly increase VNR strength when the noise level increases suddenly.
+    if (ne->level < kMedium && ne->value > ne->adapt_thresh) {
+      ne->count = ne->num_frames_estimate;
+    } else {
+      ne->count++;
+    }
+    if (ne->count == ne->num_frames_estimate) {
+      // Reset counter and check noise level condition.
+      ne->num_frames_estimate = 30;
+      ne->count = 0;
+      ne->level = vp9_noise_estimate_extract_level(ne);
+#if CONFIG_VP9_TEMPORAL_DENOISING
+      if (cpi->oxcf.noise_sensitivity > 0 && noise_est_svc(cpi))
+        vp9_denoiser_set_noise_level(cpi, ne->level);
+#endif
+    }
   }
 #if CONFIG_VP9_TEMPORAL_DENOISING
   if (cpi->oxcf.noise_sensitivity > 0 && noise_est_svc(cpi))
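The new estimator replaces the running average of per-block variances with a histogram mode: variances fall into bins of width 100, the histogram is smoothed with the 3-tap averaging in the diff, and the winning bin (scaled by 40) is blended 3:1 into the running noise value. A self-contained sketch of that update, mirroring the bin-edge cases; `noise_value_update` and `demo_spike` are illustrative names, with BINS standing in for MAX_VAR_HIST_BINS:

```c
#define BINS 20 /* matches MAX_VAR_HIST_BINS */

/* Histogram-mode noise update: smooth, take the mode, blend 3:1. */
int noise_value_update(const unsigned int hist[BINS], int prev_value) {
  unsigned int avg, max_count = 0;
  int b, max_bin = 0;
  for (b = 0; b < BINS; b++) {
    if (b == 0)
      avg = (hist[0] + hist[1] + hist[2]) / 3;
    else if (b == BINS - 1)
      avg = hist[BINS - 1] >> 2;            /* de-weight the tail bin */
    else if (b == BINS - 2)
      avg = (hist[b - 1] + 2 * hist[b] + (hist[b + 1] >> 1) + 2) >> 2;
    else
      avg = (hist[b - 1] + 2 * hist[b] + hist[b + 1] + 2) >> 2;
    if (avg > max_count) {
      max_count = avg;
      max_bin = b;
    }
  }
  /* Scale the winning bin by 40 and blend 3:1 with the previous value. */
  return (3 * prev_value + max_bin * 40) >> 2;
}

/* Demo: a spike of 100 blocks whose variance lands in bin 5
 * (i.e. variances in 500..599 with bin_size 100). */
int demo_spike(void) {
  unsigned int hist[BINS] = { 0 };
  hist[5] = 100;
  return noise_value_update(hist, 0);
}
```

Taking the histogram mode rather than a mean makes the estimate robust to the outlier blocks (lighting changes, residual motion) that the removed spatial-variance screens used to filter out.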
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.h
index 335cdbe..7fc94ff 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_noise_estimate.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_NOISE_ESTIMATE_H_
-#define VP9_ENCODER_NOISE_ESTIMATE_H_
+#ifndef VPX_VP9_ENCODER_VP9_NOISE_ESTIMATE_H_
+#define VPX_VP9_ENCODER_VP9_NOISE_ESTIMATE_H_
 
 #include "vp9/encoder/vp9_block.h"
 #include "vp9/encoder/vp9_skin_detection.h"
@@ -23,6 +23,8 @@
 extern "C" {
 #endif
 
+#define MAX_VAR_HIST_BINS 20
+
 typedef enum noise_level { kLowLow, kLow, kMedium, kHigh } NOISE_LEVEL;
 
 typedef struct noise_estimate {
@@ -30,6 +32,7 @@
   NOISE_LEVEL level;
   int value;
   int thresh;
+  int adapt_thresh;
   int count;
   int last_w;
   int last_h;
@@ -48,4 +51,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_NOISE_ESTIMATE_H_
+#endif  // VPX_VP9_ENCODER_VP9_NOISE_ESTIMATE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_non_greedy_mv.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_non_greedy_mv.c
new file mode 100644
index 0000000..4679d6c
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_non_greedy_mv.c
@@ -0,0 +1,533 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include "vp9/common/vp9_mv.h"
+#include "vp9/encoder/vp9_non_greedy_mv.h"
+// TODO(angiebird): move non_greedy_mv related functions to this file
+
+#define LOG2_TABLE_SIZE 1024
+static const int log2_table[LOG2_TABLE_SIZE] = {
+  0,  // This is a dummy value
+  0,        1048576,  1661954,  2097152,  2434718,  2710530,  2943725,
+  3145728,  3323907,  3483294,  3627477,  3759106,  3880192,  3992301,
+  4096672,  4194304,  4286015,  4372483,  4454275,  4531870,  4605679,
+  4676053,  4743299,  4807682,  4869436,  4928768,  4985861,  5040877,
+  5093962,  5145248,  5194851,  5242880,  5289431,  5334591,  5378443,
+  5421059,  5462508,  5502851,  5542146,  5580446,  5617800,  5654255,
+  5689851,  5724629,  5758625,  5791875,  5824409,  5856258,  5887450,
+  5918012,  5947969,  5977344,  6006160,  6034437,  6062195,  6089453,
+  6116228,  6142538,  6168398,  6193824,  6218829,  6243427,  6267632,
+  6291456,  6314910,  6338007,  6360756,  6383167,  6405252,  6427019,
+  6448477,  6469635,  6490501,  6511084,  6531390,  6551427,  6571202,
+  6590722,  6609993,  6629022,  6647815,  6666376,  6684713,  6702831,
+  6720734,  6738427,  6755916,  6773205,  6790299,  6807201,  6823917,
+  6840451,  6856805,  6872985,  6888993,  6904834,  6920510,  6936026,
+  6951384,  6966588,  6981641,  6996545,  7011304,  7025920,  7040397,
+  7054736,  7068940,  7083013,  7096956,  7110771,  7124461,  7138029,
+  7151476,  7164804,  7178017,  7191114,  7204100,  7216974,  7229740,
+  7242400,  7254954,  7267405,  7279754,  7292003,  7304154,  7316208,
+  7328167,  7340032,  7351805,  7363486,  7375079,  7386583,  7398000,
+  7409332,  7420579,  7431743,  7442826,  7453828,  7464751,  7475595,
+  7486362,  7497053,  7507669,  7518211,  7528680,  7539077,  7549404,
+  7559660,  7569847,  7579966,  7590017,  7600003,  7609923,  7619778,
+  7629569,  7639298,  7648964,  7658569,  7668114,  7677598,  7687023,
+  7696391,  7705700,  7714952,  7724149,  7733289,  7742375,  7751407,
+  7760385,  7769310,  7778182,  7787003,  7795773,  7804492,  7813161,
+  7821781,  7830352,  7838875,  7847350,  7855777,  7864158,  7872493,
+  7880782,  7889027,  7897226,  7905381,  7913492,  7921561,  7929586,
+  7937569,  7945510,  7953410,  7961268,  7969086,  7976864,  7984602,
+  7992301,  7999960,  8007581,  8015164,  8022709,  8030217,  8037687,
+  8045121,  8052519,  8059880,  8067206,  8074496,  8081752,  8088973,
+  8096159,  8103312,  8110431,  8117516,  8124569,  8131589,  8138576,
+  8145532,  8152455,  8159347,  8166208,  8173037,  8179836,  8186605,
+  8193343,  8200052,  8206731,  8213380,  8220001,  8226593,  8233156,
+  8239690,  8246197,  8252676,  8259127,  8265550,  8271947,  8278316,
+  8284659,  8290976,  8297266,  8303530,  8309768,  8315981,  8322168,
+  8328330,  8334467,  8340579,  8346667,  8352730,  8358769,  8364784,
+  8370775,  8376743,  8382687,  8388608,  8394506,  8400381,  8406233,
+  8412062,  8417870,  8423655,  8429418,  8435159,  8440878,  8446576,
+  8452252,  8457908,  8463542,  8469155,  8474748,  8480319,  8485871,
+  8491402,  8496913,  8502404,  8507875,  8513327,  8518759,  8524171,
+  8529564,  8534938,  8540293,  8545629,  8550947,  8556245,  8561525,
+  8566787,  8572031,  8577256,  8582464,  8587653,  8592825,  8597980,
+  8603116,  8608236,  8613338,  8618423,  8623491,  8628542,  8633576,
+  8638593,  8643594,  8648579,  8653547,  8658499,  8663434,  8668354,
+  8673258,  8678145,  8683017,  8687874,  8692715,  8697540,  8702350,
+  8707145,  8711925,  8716690,  8721439,  8726174,  8730894,  8735599,
+  8740290,  8744967,  8749628,  8754276,  8758909,  8763528,  8768134,
+  8772725,  8777302,  8781865,  8786415,  8790951,  8795474,  8799983,
+  8804478,  8808961,  8813430,  8817886,  8822328,  8826758,  8831175,
+  8835579,  8839970,  8844349,  8848715,  8853068,  8857409,  8861737,
+  8866053,  8870357,  8874649,  8878928,  8883195,  8887451,  8891694,
+  8895926,  8900145,  8904353,  8908550,  8912734,  8916908,  8921069,
+  8925220,  8929358,  8933486,  8937603,  8941708,  8945802,  8949885,
+  8953957,  8958018,  8962068,  8966108,  8970137,  8974155,  8978162,
+  8982159,  8986145,  8990121,  8994086,  8998041,  9001986,  9005920,
+  9009844,  9013758,  9017662,  9021556,  9025440,  9029314,  9033178,
+  9037032,  9040877,  9044711,  9048536,  9052352,  9056157,  9059953,
+  9063740,  9067517,  9071285,  9075044,  9078793,  9082533,  9086263,
+  9089985,  9093697,  9097400,  9101095,  9104780,  9108456,  9112123,
+  9115782,  9119431,  9123072,  9126704,  9130328,  9133943,  9137549,
+  9141146,  9144735,  9148316,  9151888,  9155452,  9159007,  9162554,
+  9166092,  9169623,  9173145,  9176659,  9180165,  9183663,  9187152,
+  9190634,  9194108,  9197573,  9201031,  9204481,  9207923,  9211357,
+  9214784,  9218202,  9221613,  9225017,  9228412,  9231800,  9235181,
+  9238554,  9241919,  9245277,  9248628,  9251971,  9255307,  9258635,
+  9261956,  9265270,  9268577,  9271876,  9275169,  9278454,  9281732,
+  9285002,  9288266,  9291523,  9294773,  9298016,  9301252,  9304481,
+  9307703,  9310918,  9314126,  9317328,  9320523,  9323711,  9326892,
+  9330067,  9333235,  9336397,  9339552,  9342700,  9345842,  9348977,
+  9352106,  9355228,  9358344,  9361454,  9364557,  9367654,  9370744,
+  9373828,  9376906,  9379978,  9383043,  9386102,  9389155,  9392202,
+  9395243,  9398278,  9401306,  9404329,  9407345,  9410356,  9413360,
+  9416359,  9419351,  9422338,  9425319,  9428294,  9431263,  9434226,
+  9437184,  9440136,  9443082,  9446022,  9448957,  9451886,  9454809,
+  9457726,  9460638,  9463545,  9466446,  9469341,  9472231,  9475115,
+  9477994,  9480867,  9483735,  9486597,  9489454,  9492306,  9495152,
+  9497993,  9500828,  9503659,  9506484,  9509303,  9512118,  9514927,
+  9517731,  9520530,  9523324,  9526112,  9528895,  9531674,  9534447,
+  9537215,  9539978,  9542736,  9545489,  9548237,  9550980,  9553718,
+  9556451,  9559179,  9561903,  9564621,  9567335,  9570043,  9572747,
+  9575446,  9578140,  9580830,  9583514,  9586194,  9588869,  9591540,
+  9594205,  9596866,  9599523,  9602174,  9604821,  9607464,  9610101,
+  9612735,  9615363,  9617987,  9620607,  9623222,  9625832,  9628438,
+  9631040,  9633637,  9636229,  9638818,  9641401,  9643981,  9646556,
+  9649126,  9651692,  9654254,  9656812,  9659365,  9661914,  9664459,
+  9666999,  9669535,  9672067,  9674594,  9677118,  9679637,  9682152,
+  9684663,  9687169,  9689672,  9692170,  9694665,  9697155,  9699641,
+  9702123,  9704601,  9707075,  9709545,  9712010,  9714472,  9716930,
+  9719384,  9721834,  9724279,  9726721,  9729159,  9731593,  9734024,
+  9736450,  9738872,  9741291,  9743705,  9746116,  9748523,  9750926,
+  9753326,  9755721,  9758113,  9760501,  9762885,  9765266,  9767642,
+  9770015,  9772385,  9774750,  9777112,  9779470,  9781825,  9784175,
+  9786523,  9788866,  9791206,  9793543,  9795875,  9798204,  9800530,
+  9802852,  9805170,  9807485,  9809797,  9812104,  9814409,  9816710,
+  9819007,  9821301,  9823591,  9825878,  9828161,  9830441,  9832718,
+  9834991,  9837261,  9839527,  9841790,  9844050,  9846306,  9848559,
+  9850808,  9853054,  9855297,  9857537,  9859773,  9862006,  9864235,
+  9866462,  9868685,  9870904,  9873121,  9875334,  9877544,  9879751,
+  9881955,  9884155,  9886352,  9888546,  9890737,  9892925,  9895109,
+  9897291,  9899469,  9901644,  9903816,  9905985,  9908150,  9910313,
+  9912473,  9914629,  9916783,  9918933,  9921080,  9923225,  9925366,
+  9927504,  9929639,  9931771,  9933900,  9936027,  9938150,  9940270,
+  9942387,  9944502,  9946613,  9948721,  9950827,  9952929,  9955029,
+  9957126,  9959219,  9961310,  9963398,  9965484,  9967566,  9969645,
+  9971722,  9973796,  9975866,  9977934,  9980000,  9982062,  9984122,
+  9986179,  9988233,  9990284,  9992332,  9994378,  9996421,  9998461,
+  10000498, 10002533, 10004565, 10006594, 10008621, 10010644, 10012665,
+  10014684, 10016700, 10018713, 10020723, 10022731, 10024736, 10026738,
+  10028738, 10030735, 10032729, 10034721, 10036710, 10038697, 10040681,
+  10042662, 10044641, 10046617, 10048591, 10050562, 10052530, 10054496,
+  10056459, 10058420, 10060379, 10062334, 10064287, 10066238, 10068186,
+  10070132, 10072075, 10074016, 10075954, 10077890, 10079823, 10081754,
+  10083682, 10085608, 10087532, 10089453, 10091371, 10093287, 10095201,
+  10097112, 10099021, 10100928, 10102832, 10104733, 10106633, 10108529,
+  10110424, 10112316, 10114206, 10116093, 10117978, 10119861, 10121742,
+  10123620, 10125495, 10127369, 10129240, 10131109, 10132975, 10134839,
+  10136701, 10138561, 10140418, 10142273, 10144126, 10145976, 10147825,
+  10149671, 10151514, 10153356, 10155195, 10157032, 10158867, 10160699,
+  10162530, 10164358, 10166184, 10168007, 10169829, 10171648, 10173465,
+  10175280, 10177093, 10178904, 10180712, 10182519, 10184323, 10186125,
+  10187925, 10189722, 10191518, 10193311, 10195103, 10196892, 10198679,
+  10200464, 10202247, 10204028, 10205806, 10207583, 10209357, 10211130,
+  10212900, 10214668, 10216435, 10218199, 10219961, 10221721, 10223479,
+  10225235, 10226989, 10228741, 10230491, 10232239, 10233985, 10235728,
+  10237470, 10239210, 10240948, 10242684, 10244417, 10246149, 10247879,
+  10249607, 10251333, 10253057, 10254779, 10256499, 10258217, 10259933,
+  10261647, 10263360, 10265070, 10266778, 10268485, 10270189, 10271892,
+  10273593, 10275292, 10276988, 10278683, 10280376, 10282068, 10283757,
+  10285444, 10287130, 10288814, 10290495, 10292175, 10293853, 10295530,
+  10297204, 10298876, 10300547, 10302216, 10303883, 10305548, 10307211,
+  10308873, 10310532, 10312190, 10313846, 10315501, 10317153, 10318804,
+  10320452, 10322099, 10323745, 10325388, 10327030, 10328670, 10330308,
+  10331944, 10333578, 10335211, 10336842, 10338472, 10340099, 10341725,
+  10343349, 10344971, 10346592, 10348210, 10349828, 10351443, 10353057,
+  10354668, 10356279, 10357887, 10359494, 10361099, 10362702, 10364304,
+  10365904, 10367502, 10369099, 10370694, 10372287, 10373879, 10375468,
+  10377057, 10378643, 10380228, 10381811, 10383393, 10384973, 10386551,
+  10388128, 10389703, 10391276, 10392848, 10394418, 10395986, 10397553,
+  10399118, 10400682, 10402244, 10403804, 10405363, 10406920, 10408476,
+  10410030, 10411582, 10413133, 10414682, 10416230, 10417776, 10419320,
+  10420863, 10422404, 10423944, 10425482, 10427019, 10428554, 10430087,
+  10431619, 10433149, 10434678, 10436206, 10437731, 10439256, 10440778,
+  10442299, 10443819, 10445337, 10446854, 10448369, 10449882, 10451394,
+  10452905, 10454414, 10455921, 10457427, 10458932, 10460435, 10461936,
+  10463436, 10464935, 10466432, 10467927, 10469422, 10470914, 10472405,
+  10473895, 10475383, 10476870, 10478355, 10479839, 10481322, 10482802,
+  10484282,
+};
+
+static int mi_size_to_block_size(int mi_bsize, int mi_num) {
+  return (mi_num % mi_bsize) ? mi_num / mi_bsize + 1 : mi_num / mi_bsize;
+}
+
+Status vp9_alloc_motion_field_info(MotionFieldInfo *motion_field_info,
+                                   int frame_num, int mi_rows, int mi_cols) {
+  int frame_idx, rf_idx, square_block_idx;
+  if (motion_field_info->allocated) {
+    // TODO(angiebird): Avoid re-allocating the buffer if possible.
+    vp9_free_motion_field_info(motion_field_info);
+  }
+  motion_field_info->frame_num = frame_num;
+  motion_field_info->motion_field_array =
+      vpx_calloc(frame_num, sizeof(*motion_field_info->motion_field_array));
+  for (frame_idx = 0; frame_idx < frame_num; ++frame_idx) {
+    for (rf_idx = 0; rf_idx < MAX_INTER_REF_FRAMES; ++rf_idx) {
+      for (square_block_idx = 0; square_block_idx < SQUARE_BLOCK_SIZES;
+           ++square_block_idx) {
+        BLOCK_SIZE bsize = square_block_idx_to_bsize(square_block_idx);
+        const int mi_height = num_8x8_blocks_high_lookup[bsize];
+        const int mi_width = num_8x8_blocks_wide_lookup[bsize];
+        const int block_rows = mi_size_to_block_size(mi_height, mi_rows);
+        const int block_cols = mi_size_to_block_size(mi_width, mi_cols);
+        MotionField *motion_field =
+            &motion_field_info
+                 ->motion_field_array[frame_idx][rf_idx][square_block_idx];
+        Status status =
+            vp9_alloc_motion_field(motion_field, bsize, block_rows, block_cols);
+        if (status == STATUS_FAILED) {
+          return STATUS_FAILED;
+        }
+      }
+    }
+  }
+  motion_field_info->allocated = 1;
+  return STATUS_OK;
+}
+
+Status vp9_alloc_motion_field(MotionField *motion_field, BLOCK_SIZE bsize,
+                              int block_rows, int block_cols) {
+  Status status = STATUS_OK;
+  motion_field->ready = 0;
+  motion_field->bsize = bsize;
+  motion_field->block_rows = block_rows;
+  motion_field->block_cols = block_cols;
+  motion_field->block_num = block_rows * block_cols;
+  motion_field->mf =
+      vpx_calloc(motion_field->block_num, sizeof(*motion_field->mf));
+  if (motion_field->mf == NULL) {
+    status = STATUS_FAILED;
+  }
+  motion_field->set_mv =
+      vpx_calloc(motion_field->block_num, sizeof(*motion_field->set_mv));
+  if (motion_field->set_mv == NULL) {
+    vpx_free(motion_field->mf);
+    motion_field->mf = NULL;
+    status = STATUS_FAILED;
+  }
+  motion_field->local_structure = vpx_calloc(
+      motion_field->block_num, sizeof(*motion_field->local_structure));
+  if (motion_field->local_structure == NULL) {
+    vpx_free(motion_field->mf);
+    motion_field->mf = NULL;
+    vpx_free(motion_field->set_mv);
+    motion_field->set_mv = NULL;
+    status = STATUS_FAILED;
+  }
+  return status;
+}
+
+void vp9_free_motion_field(MotionField *motion_field) {
+  vpx_free(motion_field->mf);
+  vpx_free(motion_field->set_mv);
+  vpx_free(motion_field->local_structure);
+  vp9_zero(*motion_field);
+}
+
+void vp9_free_motion_field_info(MotionFieldInfo *motion_field_info) {
+  if (motion_field_info->allocated) {
+    int frame_idx, rf_idx, square_block_idx;
+    for (frame_idx = 0; frame_idx < motion_field_info->frame_num; ++frame_idx) {
+      for (rf_idx = 0; rf_idx < MAX_INTER_REF_FRAMES; ++rf_idx) {
+        for (square_block_idx = 0; square_block_idx < SQUARE_BLOCK_SIZES;
+             ++square_block_idx) {
+          MotionField *motion_field =
+              &motion_field_info
+                   ->motion_field_array[frame_idx][rf_idx][square_block_idx];
+          vp9_free_motion_field(motion_field);
+        }
+      }
+    }
+    vpx_free(motion_field_info->motion_field_array);
+    motion_field_info->motion_field_array = NULL;
+    motion_field_info->frame_num = 0;
+    motion_field_info->allocated = 0;
+  }
+}
+
+MotionField *vp9_motion_field_info_get_motion_field(
+    MotionFieldInfo *motion_field_info, int frame_idx, int rf_idx,
+    BLOCK_SIZE bsize) {
+  int square_block_idx = get_square_block_idx(bsize);
+  assert(frame_idx < motion_field_info->frame_num);
+  assert(motion_field_info->allocated == 1);
+  return &motion_field_info
+              ->motion_field_array[frame_idx][rf_idx][square_block_idx];
+}
+
+int vp9_motion_field_is_mv_set(const MotionField *motion_field, int brow,
+                               int bcol) {
+  assert(brow >= 0 && brow < motion_field->block_rows);
+  assert(bcol >= 0 && bcol < motion_field->block_cols);
+  return motion_field->set_mv[brow * motion_field->block_cols + bcol];
+}
+
+int_mv vp9_motion_field_get_mv(const MotionField *motion_field, int brow,
+                               int bcol) {
+  assert(brow >= 0 && brow < motion_field->block_rows);
+  assert(bcol >= 0 && bcol < motion_field->block_cols);
+  return motion_field->mf[brow * motion_field->block_cols + bcol];
+}
+
+int_mv vp9_motion_field_mi_get_mv(const MotionField *motion_field, int mi_row,
+                                  int mi_col) {
+  const int mi_height = num_8x8_blocks_high_lookup[motion_field->bsize];
+  const int mi_width = num_8x8_blocks_wide_lookup[motion_field->bsize];
+  const int brow = mi_row / mi_height;
+  const int bcol = mi_col / mi_width;
+  assert(mi_row % mi_height == 0);
+  assert(mi_col % mi_width == 0);
+  return vp9_motion_field_get_mv(motion_field, brow, bcol);
+}
+
+void vp9_motion_field_mi_set_mv(MotionField *motion_field, int mi_row,
+                                int mi_col, int_mv mv) {
+  const int mi_height = num_8x8_blocks_high_lookup[motion_field->bsize];
+  const int mi_width = num_8x8_blocks_wide_lookup[motion_field->bsize];
+  const int brow = mi_row / mi_height;
+  const int bcol = mi_col / mi_width;
+  assert(mi_row % mi_height == 0);
+  assert(mi_col % mi_width == 0);
+  assert(brow >= 0 && brow < motion_field->block_rows);
+  assert(bcol >= 0 && bcol < motion_field->block_cols);
+  motion_field->mf[brow * motion_field->block_cols + bcol] = mv;
+  motion_field->set_mv[brow * motion_field->block_cols + bcol] = 1;
+}
+
+void vp9_motion_field_reset_mvs(MotionField *motion_field) {
+  memset(motion_field->set_mv, 0,
+         motion_field->block_num * sizeof(*motion_field->set_mv));
+}
+
+static int64_t log2_approximation(int64_t v) {
+  assert(v > 0);
+  if (v < LOG2_TABLE_SIZE) {
+    return log2_table[v];
+  } else {
+    // Use linear approximation when v >= 2^10.
+    // slope = 1 / (log(2) * 1024) * (1 << LOG2_PRECISION)
+    const int slope = 1477;
+    assert(LOG2_TABLE_SIZE == 1 << 10);
+
+    return slope * (v - LOG2_TABLE_SIZE) + (10 << LOG2_PRECISION);
+  }
+}
+
+int64_t vp9_nb_mvs_inconsistency(const MV *mv, const int_mv *nb_full_mvs,
+                                 int mv_num) {
+  // This function computes the minimum log2 mv difference,
+  //   min log2(1 + row_diff * row_diff + col_diff * col_diff),
+  // against the available neighbor mvs.
+  // Since log2 is monotonically increasing, we can compute
+  //   min (row_diff * row_diff + col_diff * col_diff)
+  // first and apply log2 at the end.
+  int i;
+  int64_t min_abs_diff = INT64_MAX;
+  int cnt = 0;
+  assert(mv_num <= NB_MVS_NUM);
+  for (i = 0; i < mv_num; ++i) {
+    MV nb_mv = nb_full_mvs[i].as_mv;
+    const int64_t row_diff = abs(mv->row - nb_mv.row);
+    const int64_t col_diff = abs(mv->col - nb_mv.col);
+    const int64_t abs_diff = row_diff * row_diff + col_diff * col_diff;
+    assert(nb_full_mvs[i].as_int != INVALID_MV);
+    min_abs_diff = VPXMIN(abs_diff, min_abs_diff);
+    ++cnt;
+  }
+  if (cnt) {
+    return log2_approximation(1 + min_abs_diff);
+  }
+  return 0;
+}
+
+static FloatMV get_smooth_motion_vector(const FloatMV scaled_search_mv,
+                                        const FloatMV *tmp_mf,
+                                        const int (*M)[MF_LOCAL_STRUCTURE_SIZE],
+                                        int rows, int cols, int row, int col,
+                                        float alpha) {
+  const FloatMV tmp_mv = tmp_mf[row * cols + col];
+  int idx_row, idx_col;
+  FloatMV avg_nb_mv = { 0.0f, 0.0f };
+  FloatMV mv = { 0.0f, 0.0f };
+  float filter[3][3] = { { 1.0f / 12.0f, 1.0f / 6.0f, 1.0f / 12.0f },
+                         { 1.0f / 6.0f, 0.0f, 1.0f / 6.0f },
+                         { 1.0f / 12.0f, 1.0f / 6.0f, 1.0f / 12.0f } };
+  for (idx_row = 0; idx_row < 3; ++idx_row) {
+    int nb_row = row + idx_row - 1;
+    for (idx_col = 0; idx_col < 3; ++idx_col) {
+      int nb_col = col + idx_col - 1;
+      if (nb_row < 0 || nb_col < 0 || nb_row >= rows || nb_col >= cols) {
+        avg_nb_mv.row += (tmp_mv.row) * filter[idx_row][idx_col];
+        avg_nb_mv.col += (tmp_mv.col) * filter[idx_row][idx_col];
+      } else {
+        const FloatMV nb_mv = tmp_mf[nb_row * cols + nb_col];
+        avg_nb_mv.row += (nb_mv.row) * filter[idx_row][idx_col];
+        avg_nb_mv.col += (nb_mv.col) * filter[idx_row][idx_col];
+      }
+    }
+  }
+  {
+    // M is the local structure matrix of the reference frame.
+    float M00 = M[row * cols + col][0];
+    float M01 = M[row * cols + col][1];
+    float M10 = M[row * cols + col][2];
+    float M11 = M[row * cols + col][3];
+
+    float det = (M00 + alpha) * (M11 + alpha) - M01 * M10;
+
+    float inv_M00 = (M11 + alpha) / det;
+    float inv_M01 = -M01 / det;
+    float inv_M10 = -M10 / det;
+    float inv_M11 = (M00 + alpha) / det;
+
+    float inv_MM00 = inv_M00 * M00 + inv_M01 * M10;
+    float inv_MM01 = inv_M00 * M01 + inv_M01 * M11;
+    float inv_MM10 = inv_M10 * M00 + inv_M11 * M10;
+    float inv_MM11 = inv_M10 * M01 + inv_M11 * M11;
+
+    mv.row = inv_M00 * avg_nb_mv.row * alpha + inv_M01 * avg_nb_mv.col * alpha +
+             inv_MM00 * scaled_search_mv.row + inv_MM01 * scaled_search_mv.col;
+    mv.col = inv_M10 * avg_nb_mv.row * alpha + inv_M11 * avg_nb_mv.col * alpha +
+             inv_MM10 * scaled_search_mv.row + inv_MM11 * scaled_search_mv.col;
+  }
+  return mv;
+}
+
+void vp9_get_smooth_motion_field(const MV *search_mf,
+                                 const int (*M)[MF_LOCAL_STRUCTURE_SIZE],
+                                 int rows, int cols, BLOCK_SIZE bsize,
+                                 float alpha, int num_iters, MV *smooth_mf) {
+  // M is the local structure of the reference frame.
+  // Build two ping-pong buffers that are swapped each iteration.
+  FloatMV *input = (FloatMV *)malloc(rows * cols * sizeof(FloatMV));
+  FloatMV *output = (FloatMV *)malloc(rows * cols * sizeof(FloatMV));
+  int idx;
+  int row, col;
+  int bw = 4 << b_width_log2_lookup[bsize];
+  int bh = 4 << b_height_log2_lookup[bsize];
+  // copy search results to input buffer
+  for (idx = 0; idx < rows * cols; ++idx) {
+    input[idx].row = (float)search_mf[idx].row / bh;
+    input[idx].col = (float)search_mf[idx].col / bw;
+  }
+  for (idx = 0; idx < num_iters; ++idx) {
+    FloatMV *tmp;
+    for (row = 0; row < rows; ++row) {
+      for (col = 0; col < cols; ++col) {
+        // Note: scaled_search_mv and smooth_mf are both scaled by the block
+        // size.
+        const MV search_mv = search_mf[row * cols + col];
+        FloatMV scaled_search_mv = { (float)search_mv.row / bh,
+                                     (float)search_mv.col / bw };
+        output[row * cols + col] = get_smooth_motion_vector(
+            scaled_search_mv, input, M, rows, cols, row, col, alpha);
+      }
+    }
+    // swap buffers
+    tmp = input;
+    input = output;
+    output = tmp;
+  }
+  // copy smoothed results to output
+  for (idx = 0; idx < rows * cols; ++idx) {
+    smooth_mf[idx].row = (int)(input[idx].row * bh);
+    smooth_mf[idx].col = (int)(input[idx].col * bw);
+  }
+  free(input);
+  free(output);
+}
+
+void vp9_get_local_structure(const YV12_BUFFER_CONFIG *cur_frame,
+                             const YV12_BUFFER_CONFIG *ref_frame,
+                             const MV *search_mf,
+                             const vp9_variance_fn_ptr_t *fn_ptr, int rows,
+                             int cols, BLOCK_SIZE bsize,
+                             int (*M)[MF_LOCAL_STRUCTURE_SIZE]) {
+  const int bw = 4 << b_width_log2_lookup[bsize];
+  const int bh = 4 << b_height_log2_lookup[bsize];
+  const int cur_stride = cur_frame->y_stride;
+  const int ref_stride = ref_frame->y_stride;
+  const int width = ref_frame->y_width;
+  const int height = ref_frame->y_height;
+  int row, col;
+  for (row = 0; row < rows; ++row) {
+    for (col = 0; col < cols; ++col) {
+      int cur_offset = row * bh * cur_stride + col * bw;
+      uint8_t *center = cur_frame->y_buffer + cur_offset;
+      int ref_h = row * bh + search_mf[row * cols + col].row;
+      int ref_w = col * bw + search_mf[row * cols + col].col;
+      int ref_offset;
+      uint8_t *target;
+      uint8_t *nb;
+      int search_dist;
+      int nb_dist;
+      int I_row = 0, I_col = 0;
+      // TODO(Dan): handle the case where the reference frame block is beyond
+      // the boundary.
+      ref_h = ref_h < 0 ? 0 : (ref_h >= height - bh ? height - bh - 1 : ref_h);
+      ref_w = ref_w < 0 ? 0 : (ref_w >= width - bw ? width - bw - 1 : ref_w);
+      // compute search results distortion
+      // TODO(Dan): we may need to use a vp9 function to find the reference
+      // block; to compare against the results of the python prototype, the
+      // reference block is computed this way for now.
+      ref_offset = ref_h * ref_stride + ref_w;
+      target = ref_frame->y_buffer + ref_offset;
+      search_dist = fn_ptr->sdf(center, cur_stride, target, ref_stride);
+      // compute target's neighbors' distortions
+      // TODO(Dan): if using padding, the boundary condition may vary
+      // up
+      if (ref_h - bh >= 0) {
+        nb = target - ref_stride * bh;
+        nb_dist = fn_ptr->sdf(center, cur_stride, nb, ref_stride);
+        I_row += nb_dist - search_dist;
+      }
+      // down
+      if (ref_h + bh < height - bh) {
+        nb = target + ref_stride * bh;
+        nb_dist = fn_ptr->sdf(center, cur_stride, nb, ref_stride);
+        I_row += nb_dist - search_dist;
+      }
+      if (ref_h - bh >= 0 && ref_h + bh < height - bh) {
+        I_row /= 2;
+      }
+      I_row /= (bw * bh);
+      // left
+      if (ref_w - bw >= 0) {
+        nb = target - bw;
+        nb_dist = fn_ptr->sdf(center, cur_stride, nb, ref_stride);
+        I_col += nb_dist - search_dist;
+      }
+      // right
+      if (ref_w + bw < width - bw) {
+        nb = target + bw;
+        nb_dist = fn_ptr->sdf(center, cur_stride, nb, ref_stride);
+        I_col += nb_dist - search_dist;
+      }
+      if (ref_w - bw >= 0 && ref_w + bw < width - bw) {
+        I_col /= 2;
+      }
+      I_col /= (bw * bh);
+      M[row * cols + col][0] = I_row * I_row;
+      M[row * cols + col][1] = I_row * I_col;
+      M[row * cols + col][2] = I_col * I_row;
+      M[row * cols + col][3] = I_col * I_col;
+    }
+  }
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_non_greedy_mv.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_non_greedy_mv.h
new file mode 100644
index 0000000..c2bd697
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_non_greedy_mv.h
@@ -0,0 +1,129 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VP9_ENCODER_VP9_NON_GREEDY_MV_H_
+#define VPX_VP9_ENCODER_VP9_NON_GREEDY_MV_H_
+
+#include "vp9/common/vp9_enums.h"
+#include "vp9/common/vp9_blockd.h"
+#include "vpx_scale/yv12config.h"
+#include "vpx_dsp/variance.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+#define NB_MVS_NUM 4
+#define LOG2_PRECISION 20
+#define MF_LOCAL_STRUCTURE_SIZE 4
+#define SQUARE_BLOCK_SIZES 4
+
+typedef enum Status { STATUS_OK = 0, STATUS_FAILED = 1 } Status;
+
+typedef struct MotionField {
+  int ready;
+  BLOCK_SIZE bsize;
+  int block_rows;
+  int block_cols;
+  int block_num;  // block_num == block_rows * block_cols
+  int (*local_structure)[MF_LOCAL_STRUCTURE_SIZE];
+  int_mv *mf;
+  int *set_mv;
+  int mv_log_scale;
+} MotionField;
+
+typedef struct MotionFieldInfo {
+  int frame_num;
+  int allocated;
+  MotionField (*motion_field_array)[MAX_INTER_REF_FRAMES][SQUARE_BLOCK_SIZES];
+} MotionFieldInfo;
+
+typedef struct {
+  float row, col;
+} FloatMV;
+
+static INLINE int get_square_block_idx(BLOCK_SIZE bsize) {
+  if (bsize == BLOCK_4X4) {
+    return 0;
+  }
+  if (bsize == BLOCK_8X8) {
+    return 1;
+  }
+  if (bsize == BLOCK_16X16) {
+    return 2;
+  }
+  if (bsize == BLOCK_32X32) {
+    return 3;
+  }
+  assert(0 && "ERROR: non-square block size");
+  return -1;
+}
+
+static INLINE BLOCK_SIZE square_block_idx_to_bsize(int square_block_idx) {
+  if (square_block_idx == 0) {
+    return BLOCK_4X4;
+  }
+  if (square_block_idx == 1) {
+    return BLOCK_8X8;
+  }
+  if (square_block_idx == 2) {
+    return BLOCK_16X16;
+  }
+  if (square_block_idx == 3) {
+    return BLOCK_32X32;
+  }
+  assert(0 && "ERROR: invalid square_block_idx");
+  return BLOCK_INVALID;
+}
+
+Status vp9_alloc_motion_field_info(MotionFieldInfo *motion_field_info,
+                                   int frame_num, int mi_rows, int mi_cols);
+
+Status vp9_alloc_motion_field(MotionField *motion_field, BLOCK_SIZE bsize,
+                              int block_rows, int block_cols);
+
+void vp9_free_motion_field(MotionField *motion_field);
+
+void vp9_free_motion_field_info(MotionFieldInfo *motion_field_info);
+
+int64_t vp9_nb_mvs_inconsistency(const MV *mv, const int_mv *nb_full_mvs,
+                                 int mv_num);
+
+void vp9_get_smooth_motion_field(const MV *search_mf,
+                                 const int (*M)[MF_LOCAL_STRUCTURE_SIZE],
+                                 int rows, int cols, BLOCK_SIZE bsize,
+                                 float alpha, int num_iters, MV *smooth_mf);
+
+void vp9_get_local_structure(const YV12_BUFFER_CONFIG *cur_frame,
+                             const YV12_BUFFER_CONFIG *ref_frame,
+                             const MV *search_mf,
+                             const vp9_variance_fn_ptr_t *fn_ptr, int rows,
+                             int cols, BLOCK_SIZE bsize,
+                             int (*M)[MF_LOCAL_STRUCTURE_SIZE]);
+
+MotionField *vp9_motion_field_info_get_motion_field(
+    MotionFieldInfo *motion_field_info, int frame_idx, int rf_idx,
+    BLOCK_SIZE bsize);
+
+void vp9_motion_field_mi_set_mv(MotionField *motion_field, int mi_row,
+                                int mi_col, int_mv mv);
+
+void vp9_motion_field_reset_mvs(MotionField *motion_field);
+
+int_mv vp9_motion_field_get_mv(const MotionField *motion_field, int brow,
+                               int bcol);
+int_mv vp9_motion_field_mi_get_mv(const MotionField *motion_field, int mi_row,
+                                  int mi_col);
+int vp9_motion_field_is_mv_set(const MotionField *motion_field, int brow,
+                               int bcol);
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+#endif  // VPX_VP9_ENCODER_VP9_NON_GREEDY_MV_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_partition_models.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_partition_models.h
new file mode 100644
index 0000000..09c0e30
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_partition_models.h
@@ -0,0 +1,975 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VP9_ENCODER_VP9_PARTITION_MODELS_H_
+#define VPX_VP9_ENCODER_VP9_PARTITION_MODELS_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define NN_MAX_HIDDEN_LAYERS 10
+#define NN_MAX_NODES_PER_LAYER 128
+
+// Neural net model config. It defines the layout of a neural net model, such as
+// the number of inputs/outputs, number of layers, the number of nodes in each
+// layer, as well as the weights and bias of each node.
+typedef struct {
+  int num_inputs;         // Number of input nodes, i.e. features.
+  int num_outputs;        // Number of output nodes.
+  int num_hidden_layers;  // Number of hidden layers, maximum 10.
+  // Number of nodes for each hidden layer.
+  int num_hidden_nodes[NN_MAX_HIDDEN_LAYERS];
+  // Weight parameters, indexed by layer.
+  const float *weights[NN_MAX_HIDDEN_LAYERS + 1];
+  // Bias parameters, indexed by layer.
+  const float *bias[NN_MAX_HIDDEN_LAYERS + 1];
+} NN_CONFIG;
+
+// Partition search breakout model.
+#define FEATURES 4
+#define Q_CTX 3
+#define RESOLUTION_CTX 2
+static const float
+    vp9_partition_breakout_weights_64[RESOLUTION_CTX][Q_CTX][FEATURES + 1] = {
+      {
+          {
+              -0.016673f,
+              -0.001025f,
+              -0.000032f,
+              0.000833f,
+              1.94261885f - 2.1f,
+          },
+          {
+              -0.160867f,
+              -0.002101f,
+              0.000011f,
+              0.002448f,
+              1.65738142f - 2.5f,
+          },
+          {
+              -0.628934f,
+              -0.011459f,
+              -0.000009f,
+              0.013833f,
+              1.47982645f - 1.6f,
+          },
+      },
+      {
+          {
+              -0.064309f,
+              -0.006121f,
+              0.000232f,
+              0.005778f,
+              0.7989465f - 5.0f,
+          },
+          {
+              -0.314957f,
+              -0.009346f,
+              -0.000225f,
+              0.010072f,
+              2.80695581f - 5.5f,
+          },
+          {
+              -0.635535f,
+              -0.015135f,
+              0.000091f,
+              0.015247f,
+              2.90381241f - 5.0f,
+          },
+      },
+    };
+
+static const float
+    vp9_partition_breakout_weights_32[RESOLUTION_CTX][Q_CTX][FEATURES + 1] = {
+      {
+          {
+              -0.010554f,
+              -0.003081f,
+              -0.000134f,
+              0.004491f,
+              1.68445992f - 3.5f,
+          },
+          {
+              -0.051489f,
+              -0.007609f,
+              0.000016f,
+              0.009792f,
+              1.28089404f - 2.5f,
+          },
+          {
+              -0.163097f,
+              -0.013081f,
+              0.000022f,
+              0.019006f,
+              1.36129403f - 3.2f,
+          },
+      },
+      {
+          {
+              -0.024629f,
+              -0.006492f,
+              -0.000254f,
+              0.004895f,
+              1.27919173f - 4.5f,
+          },
+          {
+              -0.083936f,
+              -0.009827f,
+              -0.000200f,
+              0.010399f,
+              2.73731065f - 4.5f,
+          },
+          {
+              -0.279052f,
+              -0.013334f,
+              0.000289f,
+              0.023203f,
+              2.43595719f - 3.5f,
+          },
+      },
+    };
+
+static const float
+    vp9_partition_breakout_weights_16[RESOLUTION_CTX][Q_CTX][FEATURES + 1] = {
+      {
+          {
+              -0.013154f,
+              -0.002404f,
+              -0.000977f,
+              0.008450f,
+              2.57404566f - 5.5f,
+          },
+          {
+              -0.019146f,
+              -0.004018f,
+              0.000064f,
+              0.008187f,
+              2.15043926f - 2.5f,
+          },
+          {
+              -0.075755f,
+              -0.010858f,
+              0.000030f,
+              0.024505f,
+              2.06848121f - 2.5f,
+          },
+      },
+      {
+          {
+              -0.007636f,
+              -0.002751f,
+              -0.000682f,
+              0.005968f,
+              0.19225763f - 4.5f,
+          },
+          {
+              -0.047306f,
+              -0.009113f,
+              -0.000518f,
+              0.016007f,
+              2.61068869f - 4.0f,
+          },
+          {
+              -0.069336f,
+              -0.010448f,
+              -0.001120f,
+              0.023083f,
+              1.47591054f - 5.5f,
+          },
+      },
+    };
+
+static const float
+    vp9_partition_breakout_weights_8[RESOLUTION_CTX][Q_CTX][FEATURES + 1] = {
+      {
+          {
+              -0.011807f,
+              -0.009873f,
+              -0.000931f,
+              0.034768f,
+              1.32254851f - 2.0f,
+          },
+          {
+              -0.003861f,
+              -0.002701f,
+              0.000100f,
+              0.013876f,
+              1.96755111f - 1.5f,
+          },
+          {
+              -0.013522f,
+              -0.008677f,
+              -0.000562f,
+              0.034468f,
+              1.53440356f - 1.5f,
+          },
+      },
+      {
+          {
+              -0.003221f,
+              -0.002125f,
+              0.000993f,
+              0.012768f,
+              0.03541421f - 2.0f,
+          },
+          {
+              -0.006069f,
+              -0.007335f,
+              0.000229f,
+              0.026104f,
+              0.17135315f - 1.5f,
+          },
+          {
+              -0.039894f,
+              -0.011419f,
+              0.000070f,
+              0.061817f,
+              0.6739977f - 1.5f,
+          },
+      },
+    };
+#undef FEATURES
+#undef Q_CTX
+#undef RESOLUTION_CTX
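
For context (an editorial sketch, not part of the patch): each row of the breakout tables above holds `FEATURES` linear weights followed by a bias stored at index `FEATURES` (the biases are written as `trained_value - offset`). The encoder presumably forms a dot product with the block's features and stops further partition search when the resulting score crosses zero. The helper below is a hypothetical illustration of that scheme; the function name and signature are invented here and do not appear in libvpx.

```c
/* Hypothetical sketch of applying one breakout weight row.
 * row[0..3] are linear weights, row[4] (index FEATURES) is the bias.
 * Returns nonzero when the linear score suggests breaking out of
 * further partition search. Illustrative only; not the libvpx code. */
static int breakout_score_positive(const float *row, const float *features) {
  float score = row[4]; /* bias term stored at index FEATURES */
  for (int i = 0; i < 4; ++i) score += row[i] * features[i];
  return score >= 0.0f;
}
```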
+
+// Rectangular partition search pruning model.
+#define FEATURES 8
+#define LABELS 4
+#define NODES 16
+static const float vp9_rect_part_nn_weights_16_layer0[FEATURES * NODES] = {
+  -0.432522f, 0.133070f,  -0.169187f, 0.768340f,  0.891228f,  0.554458f,
+  0.356000f,  0.403621f,  0.809165f,  0.778214f,  -0.520357f, 0.301451f,
+  -0.386972f, -0.314402f, 0.021878f,  1.148746f,  -0.462258f, -0.175524f,
+  -0.344589f, -0.475159f, -0.232322f, 0.471147f,  -0.489948f, 0.467740f,
+  -0.391550f, 0.208601f,  0.054138f,  0.076859f,  -0.309497f, -0.095927f,
+  0.225917f,  0.011582f,  -0.520730f, -0.585497f, 0.174036f,  0.072521f,
+  0.120771f,  -0.517234f, -0.581908f, -0.034003f, -0.694722f, -0.364368f,
+  0.290584f,  0.038373f,  0.685654f,  0.394019f,  0.759667f,  1.257502f,
+  -0.610516f, -0.185434f, 0.211997f,  -0.172458f, 0.044605f,  0.145316f,
+  -0.182525f, -0.147376f, 0.578742f,  0.312412f,  -0.446135f, -0.389112f,
+  0.454033f,  0.260490f,  0.664285f,  0.395856f,  -0.231827f, 0.215228f,
+  0.014856f,  -0.395462f, 0.479646f,  -0.391445f, -0.357788f, 0.166238f,
+  -0.056818f, -0.027783f, 0.060880f,  -1.604710f, 0.531268f,  0.282184f,
+  0.714944f,  0.093523f,  -0.218312f, -0.095546f, -0.285621f, -0.190871f,
+  -0.448340f, -0.016611f, 0.413913f,  -0.286720f, -0.158828f, -0.092635f,
+  -0.279551f, 0.166509f,  -0.088162f, 0.446543f,  -0.276830f, -0.065642f,
+  -0.176346f, -0.984754f, 0.338738f,  0.403809f,  0.738065f,  1.154439f,
+  0.750764f,  0.770959f,  -0.269403f, 0.295651f,  -0.331858f, 0.367144f,
+  0.279279f,  0.157419f,  -0.348227f, -0.168608f, -0.956000f, -0.647136f,
+  0.250516f,  0.858084f,  0.809802f,  0.492408f,  0.804841f,  0.282802f,
+  0.079395f,  -0.291771f, -0.024382f, -1.615880f, -0.445166f, -0.407335f,
+  -0.483044f, 0.141126f,
+};
+
+static const float vp9_rect_part_nn_bias_16_layer0[NODES] = {
+  0.275384f,  -0.053745f, 0.000000f,  0.000000f, -0.178103f, 0.513965f,
+  -0.161352f, 0.228551f,  0.000000f,  1.013712f, 0.000000f,  0.000000f,
+  -1.144009f, -0.000006f, -0.241727f, 2.048764f,
+};
+
+static const float vp9_rect_part_nn_weights_16_layer1[NODES * LABELS] = {
+  -1.435278f, 2.204691f,  -0.410718f, 0.202708f,  0.109208f,  1.059142f,
+  -0.306360f, 0.845906f,  0.489654f,  -1.121915f, -0.169133f, -0.003385f,
+  0.660590f,  -0.018711f, 1.227158f,  -2.967504f, 1.407345f,  -1.293243f,
+  -0.386921f, 0.300492f,  0.338824f,  -0.083250f, -0.069454f, -1.001827f,
+  -0.327891f, 0.899353f,  0.367397f,  -0.118601f, -0.171936f, -0.420646f,
+  -0.803319f, 2.029634f,  0.940268f,  -0.664484f, 0.339916f,  0.315944f,
+  0.157374f,  -0.402482f, -0.491695f, 0.595827f,  0.015031f,  0.255887f,
+  -0.466327f, -0.212598f, 0.136485f,  0.033363f,  -0.796921f, 1.414304f,
+  -0.282185f, -2.673571f, -0.280994f, 0.382658f,  -0.350902f, 0.227926f,
+  0.062602f,  -1.000199f, 0.433731f,  1.176439f,  -0.163216f, -0.229015f,
+  -0.640098f, -0.438852f, -0.947700f, 2.203434f,
+};
+
+static const float vp9_rect_part_nn_bias_16_layer1[LABELS] = {
+  -0.875510f,
+  0.982408f,
+  0.560854f,
+  -0.415209f,
+};
+
+static const NN_CONFIG vp9_rect_part_nnconfig_16 = {
+  FEATURES,  // num_inputs
+  LABELS,    // num_outputs
+  1,         // num_hidden_layers
+  {
+      NODES,
+  },  // num_hidden_nodes
+  {
+      vp9_rect_part_nn_weights_16_layer0,
+      vp9_rect_part_nn_weights_16_layer1,
+  },
+  {
+      vp9_rect_part_nn_bias_16_layer0,
+      vp9_rect_part_nn_bias_16_layer1,
+  },
+};
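
For context (an editorial sketch, not part of the patch): each `NN_CONFIG` above describes a small fully connected network with one hidden layer, wiring the layer-0 and layer-1 weight/bias arrays to the input/node/label counts. Networks like these are typically evaluated with a ReLU hidden layer and a linear output; the forward pass below illustrates that pattern under the assumption of a node-major weight layout (`w0[node * num_inputs + input]`), which matches how the arrays above are sized (`FEATURES * NODES` and `NODES * LABELS`). It is an illustration of the evaluation pattern, not the libvpx `nn_predict` implementation.

```c
#define SKETCH_MAX_NODES 64

/* Sketch of a one-hidden-layer forward pass for an NN_CONFIG-style net.
 * Hidden layer: dense + ReLU. Output layer: dense, linear (the caller
 * applies whatever threshold it wants to the raw scores). */
static void nn_forward_1hidden(const float *in, int num_in,
                               const float *w0, const float *b0, int nodes,
                               const float *w1, const float *b1, int num_out,
                               float *out) {
  float hidden[SKETCH_MAX_NODES];
  for (int n = 0; n < nodes; ++n) {
    float v = b0[n];
    for (int i = 0; i < num_in; ++i) v += w0[n * num_in + i] * in[i];
    hidden[n] = v > 0.0f ? v : 0.0f; /* ReLU */
  }
  for (int o = 0; o < num_out; ++o) {
    float v = b1[o];
    for (int n = 0; n < nodes; ++n) v += w1[o * nodes + n] * hidden[n];
    out[o] = v; /* raw linear score */
  }
}
```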
+
+static const float vp9_rect_part_nn_weights_32_layer0[FEATURES * NODES] = {
+  -0.147312f, -0.753248f, 0.540206f,  0.661415f,  0.484117f,  -0.341609f,
+  0.016183f,  0.064177f,  0.781580f,  0.902232f,  -0.505342f, 0.325183f,
+  -0.231072f, -0.120107f, -0.076216f, 0.120038f,  0.403695f,  -0.463301f,
+  -0.192158f, 0.407442f,  0.106633f,  1.072371f,  -0.446779f, 0.467353f,
+  0.318812f,  -0.505996f, -0.008768f, -0.239598f, 0.085480f,  0.284640f,
+  -0.365045f, -0.048083f, -0.112090f, -0.067089f, 0.304138f,  -0.228809f,
+  0.383651f,  -0.196882f, 0.477039f,  -0.217978f, -0.506931f, -0.125675f,
+  0.050456f,  1.086598f,  0.732128f,  0.326941f,  0.103952f,  0.121769f,
+  -0.154487f, -0.255514f, 0.030591f,  -0.382797f, -0.019981f, -0.326570f,
+  0.149691f,  -0.435633f, -0.070795f, 0.167691f,  0.251413f,  -0.153405f,
+  0.160347f,  0.455107f,  -0.968580f, -0.575879f, 0.623115f,  -0.069793f,
+  -0.379768f, -0.965807f, -0.062057f, 0.071312f,  0.457098f,  0.350372f,
+  -0.460659f, -0.985393f, 0.359963f,  -0.093677f, 0.404272f,  -0.326896f,
+  -0.277752f, 0.609322f,  -0.114193f, -0.230701f, 0.089208f,  0.645381f,
+  0.494485f,  0.467876f,  -0.166187f, 0.251044f,  -0.394661f, 0.192895f,
+  -0.344777f, -0.041893f, -0.111163f, 0.066347f,  0.378158f,  -0.455465f,
+  0.339839f,  -0.418207f, -0.356515f, -0.227536f, -0.211091f, -0.122945f,
+  0.361772f,  -0.338095f, 0.004564f,  -0.398510f, 0.060876f,  -2.132504f,
+  -0.086776f, -0.029166f, 0.039241f,  0.222534f,  -0.188565f, -0.288792f,
+  -0.160789f, -0.123905f, 0.397916f,  -0.063779f, 0.167210f,  -0.445004f,
+  0.056889f,  0.207280f,  0.000101f,  0.384507f,  -1.721239f, -2.036402f,
+  -2.084403f, -2.060483f,
+};
+
+static const float vp9_rect_part_nn_bias_32_layer0[NODES] = {
+  -0.859251f, -0.109938f, 0.091838f,  0.187817f,  -0.728265f, 0.253080f,
+  0.000000f,  -0.357195f, -0.031290f, -1.373237f, -0.761086f, 0.000000f,
+  -0.024504f, 1.765711f,  0.000000f,  1.505390f,
+};
+
+static const float vp9_rect_part_nn_weights_32_layer1[NODES * LABELS] = {
+  0.680940f,  1.367178f,  0.403075f,  0.029957f,  0.500917f,  1.407776f,
+  -0.354002f, 0.011667f,  1.663767f,  0.959155f,  0.428323f,  -0.205345f,
+  -0.081850f, -3.920103f, -0.243802f, -4.253933f, -0.034020f, -1.361057f,
+  0.128236f,  -0.138422f, -0.025790f, -0.563518f, -0.148715f, -0.344381f,
+  -1.677389f, -0.868332f, -0.063792f, 0.052052f,  0.359591f,  2.739808f,
+  -0.414304f, 3.036597f,  -0.075368f, -1.019680f, 0.642501f,  0.209779f,
+  -0.374539f, -0.718294f, -0.116616f, -0.043212f, -1.787809f, -0.773262f,
+  0.068734f,  0.508309f,  0.099334f,  1.802239f,  -0.333538f, 2.708645f,
+  -0.447682f, -2.355555f, -0.506674f, -0.061028f, -0.310305f, -0.375475f,
+  0.194572f,  0.431788f,  -0.789624f, -0.031962f, 0.358353f,  0.382937f,
+  0.232002f,  2.321813f,  -0.037523f, 2.104652f,
+};
+
+static const float vp9_rect_part_nn_bias_32_layer1[LABELS] = {
+  -0.693383f,
+  0.773661f,
+  0.426878f,
+  -0.070619f,
+};
+
+static const NN_CONFIG vp9_rect_part_nnconfig_32 = {
+  FEATURES,  // num_inputs
+  LABELS,    // num_outputs
+  1,         // num_hidden_layers
+  {
+      NODES,
+  },  // num_hidden_nodes
+  {
+      vp9_rect_part_nn_weights_32_layer0,
+      vp9_rect_part_nn_weights_32_layer1,
+  },
+  {
+      vp9_rect_part_nn_bias_32_layer0,
+      vp9_rect_part_nn_bias_32_layer1,
+  },
+};
+#undef NODES
+
+#define NODES 24
+static const float vp9_rect_part_nn_weights_64_layer0[FEATURES * NODES] = {
+  0.024671f,  -0.220610f, -0.284362f, -0.069556f, -0.315700f, 0.187861f,
+  0.139782f,  0.063110f,  0.796561f,  0.172868f,  -0.662194f, -1.393074f,
+  0.085003f,  0.393381f,  0.358477f,  -0.187268f, -0.370745f, 0.218287f,
+  0.027271f,  -0.254089f, -0.048236f, -0.459137f, 0.253171f,  0.122598f,
+  -0.550107f, -0.568456f, 0.159866f,  -0.246534f, 0.096384f,  -0.255460f,
+  0.077864f,  -0.334837f, 0.026921f,  -0.697252f, 0.345262f,  1.343578f,
+  0.815984f,  1.118211f,  1.574016f,  0.578476f,  -0.285967f, -0.508672f,
+  0.118137f,  0.037695f,  1.540510f,  1.256648f,  1.163819f,  1.172027f,
+  0.661551f,  -0.111980f, -0.434204f, -0.894217f, 0.570524f,  0.050292f,
+  -0.113680f, 0.000784f,  -0.211554f, -0.369394f, 0.158306f,  -0.512505f,
+  -0.238696f, 0.091498f,  -0.448490f, -0.491268f, -0.353112f, -0.303315f,
+  -0.428438f, 0.127998f,  -0.406790f, -0.401786f, -0.279888f, -0.384223f,
+  0.026100f,  0.041621f,  -0.315818f, -0.087888f, 0.353497f,  0.163123f,
+  -0.380128f, -0.090334f, -0.216647f, -0.117849f, -0.173502f, 0.301871f,
+  0.070854f,  0.114627f,  -0.050545f, -0.160381f, 0.595294f,  0.492696f,
+  -0.453858f, -1.154139f, 0.126000f,  0.034550f,  0.456665f,  -0.236618f,
+  -0.112640f, 0.050759f,  -0.449162f, 0.110059f,  0.147116f,  0.249358f,
+  -0.049894f, 0.063351f,  -0.004467f, 0.057242f,  -0.482015f, -0.174335f,
+  -0.085617f, -0.333808f, -0.358440f, -0.069006f, 0.099260f,  -1.243430f,
+  -0.052963f, 0.112088f,  -2.661115f, -2.445893f, -2.688174f, -2.624232f,
+  0.030494f,  0.161311f,  0.012136f,  0.207564f,  -2.776856f, -2.791940f,
+  -2.623962f, -2.918820f, 1.231619f,  -0.376692f, -0.698078f, 0.110336f,
+  -0.285378f, 0.258367f,  -0.180159f, -0.376608f, -0.034348f, -0.130206f,
+  0.160020f,  0.852977f,  0.580573f,  1.450782f,  1.357596f,  0.787382f,
+  -0.544004f, -0.014795f, 0.032121f,  -0.557696f, 0.159994f,  -0.540908f,
+  0.180380f,  -0.398045f, 0.705095f,  0.515103f,  -0.511521f, -1.271374f,
+  -0.231019f, 0.423647f,  0.064907f,  -0.255338f, -0.877748f, -0.667205f,
+  0.267847f,  0.135229f,  0.617844f,  1.349849f,  1.012623f,  0.730506f,
+  -0.078571f, 0.058401f,  0.053221f,  -2.426146f, -0.098808f, -0.138508f,
+  -0.153299f, 0.149116f,  -0.444243f, 0.301807f,  0.065066f,  0.092929f,
+  -0.372784f, -0.095540f, 0.192269f,  0.237894f,  0.080228f,  -0.214074f,
+  -0.011426f, -2.352367f, -0.085394f, -0.190361f, -0.001177f, 0.089197f,
+};
+
+static const float vp9_rect_part_nn_bias_64_layer0[NODES] = {
+  0.000000f,  -0.057652f, -0.175413f, -0.175389f, -1.084097f, -1.423801f,
+  -0.076307f, -0.193803f, 0.000000f,  -0.066474f, -0.050318f, -0.019832f,
+  -0.038814f, -0.144184f, 2.652451f,  2.415006f,  0.197464f,  -0.729842f,
+  -0.173774f, 0.239171f,  0.486425f,  2.463304f,  -0.175279f, 2.352637f,
+};
+
+static const float vp9_rect_part_nn_weights_64_layer1[NODES * LABELS] = {
+  -0.063237f, 1.925696f,  -0.182145f, -0.226687f, 0.602941f,  -0.941140f,
+  0.814598f,  -0.117063f, 0.282988f,  0.066369f,  0.096951f,  1.049735f,
+  -0.188188f, -0.281227f, -4.836746f, -5.047797f, 0.892358f,  0.417145f,
+  -0.279849f, 1.335945f,  0.660338f,  -2.757938f, -0.115714f, -1.862183f,
+  -0.045980f, -1.597624f, -0.586822f, -0.615589f, -0.330537f, 1.068496f,
+  -0.167290f, 0.141290f,  -0.112100f, 0.232761f,  0.252307f,  -0.399653f,
+  0.353118f,  0.241583f,  2.635241f,  4.026119f,  -1.137327f, -0.052446f,
+  -0.139814f, -1.104256f, -0.759391f, 2.508457f,  -0.526297f, 2.095348f,
+  -0.444473f, -1.090452f, 0.584122f,  0.468729f,  -0.368865f, 1.041425f,
+  -1.079504f, 0.348837f,  0.390091f,  0.416191f,  0.212906f,  -0.660255f,
+  0.053630f,  0.209476f,  3.595525f,  2.257293f,  -0.514030f, 0.074203f,
+  -0.375862f, -1.998307f, -0.930310f, 1.866686f,  -0.247137f, 1.087789f,
+  0.100186f,  0.298150f,  0.165265f,  0.050478f,  0.249167f,  0.371789f,
+  -0.294497f, 0.202954f,  0.037310f,  0.193159f,  0.161551f,  0.301597f,
+  0.299286f,  0.185946f,  0.822976f,  2.066130f,  -1.724588f, 0.055977f,
+  -0.330747f, -0.067747f, -0.475801f, 1.555958f,  -0.025808f, -0.081516f,
+};
+
+static const float vp9_rect_part_nn_bias_64_layer1[LABELS] = {
+  -0.090723f,
+  0.894968f,
+  0.844754f,
+  -3.496194f,
+};
+
+static const NN_CONFIG vp9_rect_part_nnconfig_64 = {
+  FEATURES,  // num_inputs
+  LABELS,    // num_outputs
+  1,         // num_hidden_layers
+  {
+      NODES,
+  },  // num_hidden_nodes
+  {
+      vp9_rect_part_nn_weights_64_layer0,
+      vp9_rect_part_nn_weights_64_layer1,
+  },
+  {
+      vp9_rect_part_nn_bias_64_layer0,
+      vp9_rect_part_nn_bias_64_layer1,
+  },
+};
+#undef FEATURES
+#undef LABELS
+#undef NODES
+
+#define FEATURES 7
+// Partition pruning model (neural nets).
+static const float vp9_partition_nn_weights_64x64_layer0[FEATURES * 8] = {
+  -3.571348f, 0.014835f,  -3.255393f, -0.098090f, -0.013120f, 0.000221f,
+  0.056273f,  0.190179f,  -0.268130f, -1.828242f, -0.010655f, 0.937244f,
+  -0.435120f, 0.512125f,  1.610679f,  0.190816f,  -0.799075f, -0.377348f,
+  -0.144232f, 0.614383f,  -0.980388f, 1.754150f,  -0.185603f, -0.061854f,
+  -0.807172f, 1.240177f,  1.419531f,  -0.438544f, -5.980774f, 0.139045f,
+  -0.032359f, -0.068887f, -1.237918f, 0.115706f,  0.003164f,  2.924212f,
+  1.246838f,  -0.035833f, 0.810011f,  -0.805894f, 0.010966f,  0.076463f,
+  -4.226380f, -2.437764f, -0.010619f, -0.020935f, -0.451494f, 0.300079f,
+  -0.168961f, -3.326450f, -2.731094f, 0.002518f,  0.018840f,  -1.656815f,
+  0.068039f,  0.010586f,
+};
+
+static const float vp9_partition_nn_bias_64x64_layer0[8] = {
+  -3.469882f, 0.683989f, 0.194010f,  0.313782f,
+  -3.153335f, 2.245849f, -1.946190f, -3.740020f,
+};
+
+static const float vp9_partition_nn_weights_64x64_layer1[8] = {
+  -8.058566f, 0.108306f, -0.280620f, -0.818823f,
+  -6.445117f, 0.865364f, -1.127127f, -8.808660f,
+};
+
+static const float vp9_partition_nn_bias_64x64_layer1[1] = {
+  6.46909416f,
+};
+
+static const NN_CONFIG vp9_partition_nnconfig_64x64 = {
+  FEATURES,  // num_inputs
+  1,         // num_outputs
+  1,         // num_hidden_layers
+  {
+      8,
+  },  // num_hidden_nodes
+  {
+      vp9_partition_nn_weights_64x64_layer0,
+      vp9_partition_nn_weights_64x64_layer1,
+  },
+  {
+      vp9_partition_nn_bias_64x64_layer0,
+      vp9_partition_nn_bias_64x64_layer1,
+  },
+};
+
+static const float vp9_partition_nn_weights_32x32_layer0[FEATURES * 8] = {
+  -0.295437f, -4.002648f, -0.205399f, -0.060919f, 0.708037f,  0.027221f,
+  -0.039137f, -0.907724f, -3.151662f, 0.007106f,  0.018726f,  -0.534928f,
+  0.022744f,  0.000159f,  -1.717189f, -3.229031f, -0.027311f, 0.269863f,
+  -0.400747f, -0.394366f, -0.108878f, 0.603027f,  0.455369f,  -0.197170f,
+  1.241746f,  -1.347820f, -0.575636f, -0.462879f, -2.296426f, 0.196696f,
+  -0.138347f, -0.030754f, -0.200774f, 0.453795f,  0.055625f,  -3.163116f,
+  -0.091003f, -0.027028f, -0.042984f, -0.605185f, 0.143240f,  -0.036439f,
+  -0.801228f, 0.313409f,  -0.159942f, 0.031267f,  0.886454f,  -1.531644f,
+  -0.089655f, 0.037683f,  -0.163441f, -0.130454f, -0.058344f, 0.060011f,
+  0.275387f,  1.552226f,
+};
+
+static const float vp9_partition_nn_bias_32x32_layer0[8] = {
+  -0.838372f, -2.609089f, -0.055763f, 1.329485f,
+  -1.297638f, -2.636622f, -0.826909f, 1.012644f,
+};
+
+static const float vp9_partition_nn_weights_32x32_layer1[8] = {
+  -1.792632f, -7.322353f, -0.683386f, 0.676564f,
+  -1.488118f, -7.527719f, 1.240163f,  0.614309f,
+};
+
+static const float vp9_partition_nn_bias_32x32_layer1[1] = {
+  4.97422546f,
+};
+
+static const NN_CONFIG vp9_partition_nnconfig_32x32 = {
+  FEATURES,  // num_inputs
+  1,         // num_outputs
+  1,         // num_hidden_layers
+  {
+      8,
+  },  // num_hidden_nodes
+  {
+      vp9_partition_nn_weights_32x32_layer0,
+      vp9_partition_nn_weights_32x32_layer1,
+  },
+  {
+      vp9_partition_nn_bias_32x32_layer0,
+      vp9_partition_nn_bias_32x32_layer1,
+  },
+};
+
+static const float vp9_partition_nn_weights_16x16_layer0[FEATURES * 8] = {
+  -1.717673f, -4.718130f, -0.125725f, -0.183427f, -0.511764f, 0.035328f,
+  0.130891f,  -3.096753f, 0.174968f,  -0.188769f, -0.640796f, 1.305661f,
+  1.700638f,  -0.073806f, -4.006781f, -1.630999f, -0.064863f, -0.086410f,
+  -0.148617f, 0.172733f,  -0.018619f, 2.152595f,  0.778405f,  -0.156455f,
+  0.612995f,  -0.467878f, 0.152022f,  -0.236183f, 0.339635f,  -0.087119f,
+  -3.196610f, -1.080401f, -0.637704f, -0.059974f, 1.706298f,  -0.793705f,
+  -6.399260f, 0.010624f,  -0.064199f, -0.650621f, 0.338087f,  -0.001531f,
+  1.023655f,  -3.700272f, -0.055281f, -0.386884f, 0.375504f,  -0.898678f,
+  0.281156f,  -0.314611f, 0.863354f,  -0.040582f, -0.145019f, 0.029329f,
+  -2.197880f, -0.108733f,
+};
+
+static const float vp9_partition_nn_bias_16x16_layer0[8] = {
+  0.411516f,  -2.143737f, -3.693192f, 2.123142f,
+  -1.356910f, -3.561016f, -0.765045f, -2.417082f,
+};
+
+static const float vp9_partition_nn_weights_16x16_layer1[8] = {
+  -0.619755f, -2.202391f, -4.337171f, 0.611319f,
+  0.377677f,  -4.998723f, -1.052235f, 1.949922f,
+};
+
+static const float vp9_partition_nn_bias_16x16_layer1[1] = {
+  3.20981717f,
+};
+
+static const NN_CONFIG vp9_partition_nnconfig_16x16 = {
+  FEATURES,  // num_inputs
+  1,         // num_outputs
+  1,         // num_hidden_layers
+  {
+      8,
+  },  // num_hidden_nodes
+  {
+      vp9_partition_nn_weights_16x16_layer0,
+      vp9_partition_nn_weights_16x16_layer1,
+  },
+  {
+      vp9_partition_nn_bias_16x16_layer0,
+      vp9_partition_nn_bias_16x16_layer1,
+  },
+};
+#undef FEATURES
+
+#define FEATURES 6
+static const float vp9_var_part_nn_weights_64_layer0[FEATURES * 8] = {
+  -0.249572f, 0.205532f,  -2.175608f, 1.094836f,  -2.986370f, 0.193160f,
+  -0.143823f, 0.378511f,  -1.997788f, -2.166866f, -1.930158f, -1.202127f,
+  -0.611875f, -0.506422f, -0.432487f, 0.071205f,  0.578172f,  -0.154285f,
+  -0.051830f, 0.331681f,  -1.457177f, -2.443546f, -2.000302f, -1.389283f,
+  0.372084f,  -0.464917f, 2.265235f,  2.385787f,  2.312722f,  2.127868f,
+  -0.403963f, -0.177860f, -0.436751f, -0.560539f, 0.254903f,  0.193976f,
+  -0.305611f, 0.256632f,  0.309388f,  -0.437439f, 1.702640f,  -5.007069f,
+  -0.323450f, 0.294227f,  1.267193f,  1.056601f,  0.387181f,  -0.191215f,
+};
+
+static const float vp9_var_part_nn_bias_64_layer0[8] = {
+  -0.044396f, -0.938166f, 0.000000f,  -0.916375f,
+  1.242299f,  0.000000f,  -0.405734f, 0.014206f,
+};
+
+static const float vp9_var_part_nn_weights_64_layer1[8] = {
+  1.635945f,  0.979557f,  0.455315f, 1.197199f,
+  -2.251024f, -0.464953f, 1.378676f, -0.111927f,
+};
+
+static const float vp9_var_part_nn_bias_64_layer1[1] = {
+  -0.37972447f,
+};
+
+static const NN_CONFIG vp9_var_part_nnconfig_64 = {
+  FEATURES,  // num_inputs
+  1,         // num_outputs
+  1,         // num_hidden_layers
+  {
+      8,
+  },  // num_hidden_nodes
+  {
+      vp9_var_part_nn_weights_64_layer0,
+      vp9_var_part_nn_weights_64_layer1,
+  },
+  {
+      vp9_var_part_nn_bias_64_layer0,
+      vp9_var_part_nn_bias_64_layer1,
+  },
+};
+
+static const float vp9_var_part_nn_weights_32_layer0[FEATURES * 8] = {
+  0.067243f,  -0.083598f, -2.191159f, 2.726434f,  -3.324013f, 3.477977f,
+  0.323736f,  -0.510199f, 2.960693f,  2.937661f,  2.888476f,  2.938315f,
+  -0.307602f, -0.503353f, -0.080725f, -0.473909f, -0.417162f, 0.457089f,
+  0.665153f,  -0.273210f, 0.028279f,  0.972220f,  -0.445596f, 1.756611f,
+  -0.177892f, -0.091758f, 0.436661f,  -0.521506f, 0.133786f,  0.266743f,
+  0.637367f,  -0.160084f, -1.396269f, 1.020841f,  -1.112971f, 0.919496f,
+  -0.235883f, 0.651954f,  0.109061f,  -0.429463f, 0.740839f,  -0.962060f,
+  0.299519f,  -0.386298f, 1.550231f,  2.464915f,  1.311969f,  2.561612f,
+};
+
+static const float vp9_var_part_nn_bias_32_layer0[8] = {
+  0.368242f, 0.736617f, 0.000000f,  0.757287f,
+  0.000000f, 0.613248f, -0.776390f, 0.928497f,
+};
+
+static const float vp9_var_part_nn_weights_32_layer1[8] = {
+  0.939884f, -2.420850f, -0.410489f, -0.186690f,
+  0.063287f, -0.522011f, 0.484527f,  -0.639625f,
+};
+
+static const float vp9_var_part_nn_bias_32_layer1[1] = {
+  -0.6455006f,
+};
+
+static const NN_CONFIG vp9_var_part_nnconfig_32 = {
+  FEATURES,  // num_inputs
+  1,         // num_outputs
+  1,         // num_hidden_layers
+  {
+      8,
+  },  // num_hidden_nodes
+  {
+      vp9_var_part_nn_weights_32_layer0,
+      vp9_var_part_nn_weights_32_layer1,
+  },
+  {
+      vp9_var_part_nn_bias_32_layer0,
+      vp9_var_part_nn_bias_32_layer1,
+  },
+};
+
+static const float vp9_var_part_nn_weights_16_layer0[FEATURES * 8] = {
+  0.742567f,  -0.580624f, -0.244528f, 0.331661f,  -0.113949f, -0.559295f,
+  -0.386061f, 0.438653f,  1.467463f,  0.211589f,  0.513972f,  1.067855f,
+  -0.876679f, 0.088560f,  -0.687483f, -0.380304f, -0.016412f, 0.146380f,
+  0.015318f,  0.000351f,  -2.764887f, 3.269717f,  2.752428f,  -2.236754f,
+  0.561539f,  -0.852050f, -0.084667f, 0.202057f,  0.197049f,  0.364922f,
+  -0.463801f, 0.431790f,  1.872096f,  -0.091887f, -0.055034f, 2.443492f,
+  -0.156958f, -0.189571f, -0.542424f, -0.589804f, -0.354422f, 0.401605f,
+  0.642021f,  -0.875117f, 2.040794f,  1.921070f,  1.792413f,  1.839727f,
+};
+
+static const float vp9_var_part_nn_bias_16_layer0[8] = {
+  2.901234f, -1.940932f, -0.198970f, -0.406524f,
+  0.059422f, -1.879207f, -0.232340f, 2.979821f,
+};
+
+static const float vp9_var_part_nn_weights_16_layer1[8] = {
+  -0.528731f, 0.375234f, -0.088422f, 0.668629f,
+  0.870449f,  0.578735f, 0.546103f,  -1.957207f,
+};
+
+static const float vp9_var_part_nn_bias_16_layer1[1] = {
+  -1.95769405f,
+};
+
+static const NN_CONFIG vp9_var_part_nnconfig_16 = {
+  FEATURES,  // num_inputs
+  1,         // num_outputs
+  1,         // num_hidden_layers
+  {
+      8,
+  },  // num_hidden_nodes
+  {
+      vp9_var_part_nn_weights_16_layer0,
+      vp9_var_part_nn_weights_16_layer1,
+  },
+  {
+      vp9_var_part_nn_bias_16_layer0,
+      vp9_var_part_nn_bias_16_layer1,
+  },
+};
+#undef FEATURES
+
+#define FEATURES 12
+#define LABELS 1
+#define NODES 8
+static const float vp9_part_split_nn_weights_64_layer0[FEATURES * NODES] = {
+  -0.609728f, -0.409099f, -0.472449f, 0.183769f,  -0.457740f, 0.081089f,
+  0.171003f,  0.578696f,  -0.019043f, -0.856142f, 0.557369f,  -1.779424f,
+  -0.274044f, -0.320632f, -0.392531f, -0.359462f, -0.404106f, -0.288357f,
+  0.200620f,  0.038013f,  -0.430093f, 0.235083f,  -0.487442f, 0.424814f,
+  -0.232758f, -0.442943f, 0.229397f,  -0.540301f, -0.648421f, -0.649747f,
+  -0.171638f, 0.603824f,  0.468497f,  -0.421580f, 0.178840f,  -0.533838f,
+  -0.029471f, -0.076296f, 0.197426f,  -0.187908f, -0.003950f, -0.065740f,
+  0.085165f,  -0.039674f, -5.640702f, 1.909538f,  -1.434604f, 3.294606f,
+  -0.788812f, 0.196864f,  0.057012f,  -0.019757f, 0.336233f,  0.075378f,
+  0.081503f,  0.491864f,  -1.899470f, -1.764173f, -1.888137f, -1.762343f,
+  0.845542f,  0.202285f,  0.381948f,  -0.150996f, 0.556893f,  -0.305354f,
+  0.561482f,  -0.021974f, -0.703117f, 0.268638f,  -0.665736f, 1.191005f,
+  -0.081568f, -0.115653f, 0.272029f,  -0.140074f, 0.072683f,  0.092651f,
+  -0.472287f, -0.055790f, -0.434425f, 0.352055f,  0.048246f,  0.372865f,
+  0.111499f,  -0.338304f, 0.739133f,  0.156519f,  -0.594644f, 0.137295f,
+  0.613350f,  -0.165102f, -1.003731f, 0.043070f,  -0.887896f, -0.174202f,
+};
+
+static const float vp9_part_split_nn_bias_64_layer0[NODES] = {
+  1.182714f,  0.000000f,  0.902019f,  0.953115f,
+  -1.372486f, -1.288740f, -0.155144f, -3.041362f,
+};
+
+static const float vp9_part_split_nn_weights_64_layer1[NODES * LABELS] = {
+  0.841214f,  0.456016f,  0.869270f, 1.692999f,
+  -1.700494f, -0.911761f, 0.030111f, -1.447548f,
+};
+
+static const float vp9_part_split_nn_bias_64_layer1[LABELS] = {
+  1.17782545f,
+};
+
+static const NN_CONFIG vp9_part_split_nnconfig_64 = {
+  FEATURES,  // num_inputs
+  LABELS,    // num_outputs
+  1,         // num_hidden_layers
+  {
+      NODES,
+  },  // num_hidden_nodes
+  {
+      vp9_part_split_nn_weights_64_layer0,
+      vp9_part_split_nn_weights_64_layer1,
+  },
+  {
+      vp9_part_split_nn_bias_64_layer0,
+      vp9_part_split_nn_bias_64_layer1,
+  },
+};
+
+static const float vp9_part_split_nn_weights_32_layer0[FEATURES * NODES] = {
+  -0.105488f, -0.218662f, 0.010980f,  -0.226979f, 0.028076f,  0.743430f,
+  0.789266f,  0.031907f,  -1.464200f, 0.222336f,  -1.068493f, -0.052712f,
+  -0.176181f, -0.102654f, -0.973932f, -0.182637f, -0.198000f, 0.335977f,
+  0.271346f,  0.133005f,  1.674203f,  0.689567f,  0.657133f,  0.283524f,
+  0.115529f,  0.738327f,  0.317184f,  -0.179736f, 0.403691f,  0.679350f,
+  0.048925f,  0.271338f,  -1.538921f, -0.900737f, -1.377845f, 0.084245f,
+  0.803122f,  -0.107806f, 0.103045f,  -0.023335f, -0.098116f, -0.127809f,
+  0.037665f,  -0.523225f, 1.622185f,  1.903999f,  1.358889f,  1.680785f,
+  0.027743f,  0.117906f,  -0.158810f, 0.057775f,  0.168257f,  0.062414f,
+  0.086228f,  -0.087381f, -3.066082f, 3.021855f,  -4.092155f, 2.550104f,
+  -0.230022f, -0.207445f, -0.000347f, 0.034042f,  0.097057f,  0.220088f,
+  -0.228841f, -0.029405f, -1.507174f, -1.455184f, 2.624904f,  2.643355f,
+  0.319912f,  0.585531f,  -1.018225f, -0.699606f, 1.026490f,  0.169952f,
+  -0.093579f, -0.142352f, -0.107256f, 0.059598f,  0.043190f,  0.507543f,
+  -0.138617f, 0.030197f,  0.059574f,  -0.634051f, -0.586724f, -0.148020f,
+  -0.334380f, 0.459547f,  1.620600f,  0.496850f,  0.639480f,  -0.465715f,
+};
+
+static const float vp9_part_split_nn_bias_32_layer0[NODES] = {
+  -1.125885f, 0.753197f, -0.825808f, 0.004839f,
+  0.583920f,  0.718062f, 0.976741f,  0.796188f,
+};
+
+static const float vp9_part_split_nn_weights_32_layer1[NODES * LABELS] = {
+  -0.458745f, 0.724624f, -0.479720f, -2.199872f,
+  1.162661f,  1.194153f, -0.716896f, 0.824080f,
+};
+
+static const float vp9_part_split_nn_bias_32_layer1[LABELS] = {
+  0.71644074f,
+};
+
+static const NN_CONFIG vp9_part_split_nnconfig_32 = {
+  FEATURES,  // num_inputs
+  LABELS,    // num_outputs
+  1,         // num_hidden_layers
+  {
+      NODES,
+  },  // num_hidden_nodes
+  {
+      vp9_part_split_nn_weights_32_layer0,
+      vp9_part_split_nn_weights_32_layer1,
+  },
+  {
+      vp9_part_split_nn_bias_32_layer0,
+      vp9_part_split_nn_bias_32_layer1,
+  },
+};
+
+static const float vp9_part_split_nn_weights_16_layer0[FEATURES * NODES] = {
+  -0.003629f, -0.046852f, 0.220428f,  -0.033042f, 0.049365f,  0.112818f,
+  -0.306149f, -0.005872f, 1.066947f,  -2.290226f, 2.159505f,  -0.618714f,
+  -0.213294f, 0.451372f,  -0.199459f, 0.223730f,  -0.321709f, 0.063364f,
+  0.148704f,  -0.293371f, 0.077225f,  -0.421947f, -0.515543f, -0.240975f,
+  -0.418516f, 1.036523f,  -0.009165f, 0.032484f,  1.086549f,  0.220322f,
+  -0.247585f, -0.221232f, -0.225050f, 0.993051f,  0.285907f,  1.308846f,
+  0.707456f,  0.335152f,  0.234556f,  0.264590f,  -0.078033f, 0.542226f,
+  0.057777f,  0.163471f,  0.039245f,  -0.725960f, 0.963780f,  -0.972001f,
+  0.252237f,  -0.192745f, -0.836571f, -0.460539f, -0.528713f, -0.160198f,
+  -0.621108f, 0.486405f,  -0.221923f, 1.519426f,  -0.857871f, 0.411595f,
+  0.947188f,  0.203339f,  0.174526f,  0.016382f,  0.256879f,  0.049818f,
+  0.057836f,  -0.659096f, 0.459894f,  0.174695f,  0.379359f,  0.062530f,
+  -0.210201f, -0.355788f, -0.208432f, -0.401723f, -0.115373f, 0.191336f,
+  -0.109342f, 0.002455f,  -0.078746f, -0.391871f, 0.149892f,  -0.239615f,
+  -0.520709f, 0.118568f,  -0.437975f, 0.118116f,  -0.565426f, -0.206446f,
+  0.113407f,  0.558894f,  0.534627f,  1.154350f,  -0.116833f, 1.723311f,
+};
+
+static const float vp9_part_split_nn_bias_16_layer0[NODES] = {
+  0.013109f,  -0.034341f, 0.679845f,  -0.035781f,
+  -0.104183f, 0.098055f,  -0.041130f, 0.160107f,
+};
+
+static const float vp9_part_split_nn_weights_16_layer1[NODES * LABELS] = {
+  1.499564f, -0.403259f, 1.366532f, -0.469868f,
+  0.482227f, -2.076697f, 0.527691f, 0.540495f,
+};
+
+static const float vp9_part_split_nn_bias_16_layer1[LABELS] = {
+  0.01134653f,
+};
+
+static const NN_CONFIG vp9_part_split_nnconfig_16 = {
+  FEATURES,  // num_inputs
+  LABELS,    // num_outputs
+  1,         // num_hidden_layers
+  {
+      NODES,
+  },  // num_hidden_nodes
+  {
+      vp9_part_split_nn_weights_16_layer0,
+      vp9_part_split_nn_weights_16_layer1,
+  },
+  {
+      vp9_part_split_nn_bias_16_layer0,
+      vp9_part_split_nn_bias_16_layer1,
+  },
+};
+
+static const float vp9_part_split_nn_weights_8_layer0[FEATURES * NODES] = {
+  -0.668875f, -0.159078f, -0.062663f, -0.483785f, -0.146814f, -0.608975f,
+  -0.589145f, 0.203704f,  -0.051007f, -0.113769f, -0.477511f, -0.122603f,
+  -1.329890f, 1.403386f,  0.199636f,  -0.161139f, 2.182090f,  -0.014307f,
+  0.015755f,  -0.208468f, 0.884353f,  0.815920f,  0.632464f,  0.838225f,
+  1.369483f,  -0.029068f, 0.570213f,  -0.573546f, 0.029617f,  0.562054f,
+  -0.653093f, -0.211910f, -0.661013f, -0.384418f, -0.574038f, -0.510069f,
+  0.173047f,  -0.274231f, -1.044008f, -0.422040f, -0.810296f, 0.144069f,
+  -0.406704f, 0.411230f,  -0.144023f, 0.745651f,  -0.595091f, 0.111787f,
+  0.840651f,  0.030123f,  -0.242155f, 0.101486f,  -0.017889f, -0.254467f,
+  -0.285407f, -0.076675f, -0.549542f, -0.013544f, -0.686566f, -0.755150f,
+  1.623949f,  -0.286369f, 0.170976f,  0.016442f,  -0.598353f, -0.038540f,
+  0.202597f,  -0.933582f, 0.599510f,  0.362273f,  0.577722f,  0.477603f,
+  0.767097f,  0.431532f,  0.457034f,  0.223279f,  0.381349f,  0.033777f,
+  0.423923f,  -0.664762f, 0.385662f,  0.075744f,  0.182681f,  0.024118f,
+  0.319408f,  -0.528864f, 0.976537f,  -0.305971f, -0.189380f, -0.241689f,
+  -1.318092f, 0.088647f,  -0.109030f, -0.945654f, 1.082797f,  0.184564f,
+};
+
+static const float vp9_part_split_nn_bias_8_layer0[NODES] = {
+  -0.237472f, 2.051396f,  0.297062f, -0.730194f,
+  0.060472f,  -0.565959f, 0.560869f, -0.395448f,
+};
+
+static const float vp9_part_split_nn_weights_8_layer1[NODES * LABELS] = {
+  0.568121f,  1.575915f,  -0.544309f, 0.751595f,
+  -0.117911f, -1.340730f, -0.739671f, 0.661216f,
+};
+
+static const float vp9_part_split_nn_bias_8_layer1[LABELS] = {
+  -0.63375306f,
+};
+
+static const NN_CONFIG vp9_part_split_nnconfig_8 = {
+  FEATURES,  // num_inputs
+  LABELS,    // num_outputs
+  1,         // num_hidden_layers
+  {
+      NODES,
+  },  // num_hidden_nodes
+  {
+      vp9_part_split_nn_weights_8_layer0,
+      vp9_part_split_nn_weights_8_layer1,
+  },
+  {
+      vp9_part_split_nn_bias_8_layer0,
+      vp9_part_split_nn_bias_8_layer1,
+  },
+};
+#undef NODES
+#undef FEATURES
+#undef LABELS
+
+// Partition pruning model (linear).
+static const float vp9_partition_feature_mean[24] = {
+  303501.697372f, 3042630.372158f, 24.694696f, 1.392182f,
+  689.413511f,    162.027012f,     1.478213f,  0.0,
+  135382.260230f, 912738.513263f,  28.845217f, 1.515230f,
+  544.158492f,    131.807995f,     1.436863f,  0.0f,
+  43682.377587f,  208131.711766f,  28.084737f, 1.356677f,
+  138.254122f,    119.522553f,     1.252322f,  0.0f,
+};
+
+static const float vp9_partition_feature_std[24] = {
+  673689.212982f, 5996652.516628f, 0.024449f, 1.989792f,
+  985.880847f,    0.014638f,       2.001898f, 0.0f,
+  208798.775332f, 1812548.443284f, 0.018693f, 1.838009f,
+  396.986910f,    0.015657f,       1.332541f, 0.0f,
+  55888.847031f,  448587.962714f,  0.017900f, 1.904776f,
+  98.652832f,     0.016598f,       1.320992f, 0.0f,
+};
+
+// Error tolerance: 0.01%-0.05%-0.1%
+static const float vp9_partition_linear_weights[24] = {
+  0.111736f, 0.289977f, 0.042219f, 0.204765f, 0.120410f, -0.143863f,
+  0.282376f, 0.847811f, 0.637161f, 0.131570f, 0.018636f, 0.202134f,
+  0.112797f, 0.028162f, 0.182450f, 1.124367f, 0.386133f, 0.083700f,
+  0.050028f, 0.150873f, 0.061119f, 0.109318f, 0.127255f, 0.625211f,
+};
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif  // VPX_VP9_ENCODER_VP9_PARTITION_MODELS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.c
index 1c2c55b..3a620df 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.c
@@ -24,10 +24,20 @@
 #include "vp9/encoder/vp9_picklpf.h"
 #include "vp9/encoder/vp9_quantize.h"
 
+static unsigned int get_section_intra_rating(const VP9_COMP *cpi) {
+  unsigned int section_intra_rating;
+
+  section_intra_rating = (cpi->common.frame_type == KEY_FRAME)
+                             ? cpi->twopass.key_frame_section_intra_rating
+                             : cpi->twopass.section_intra_rating;
+
+  return section_intra_rating;
+}
+
 static int get_max_filter_level(const VP9_COMP *cpi) {
   if (cpi->oxcf.pass == 2) {
-    return cpi->twopass.section_intra_rating > 8 ? MAX_LOOP_FILTER * 3 / 4
-                                                 : MAX_LOOP_FILTER;
+    unsigned int section_intra_rating = get_section_intra_rating(cpi);
+    return section_intra_rating > 8 ? MAX_LOOP_FILTER * 3 / 4 : MAX_LOOP_FILTER;
   } else {
     return MAX_LOOP_FILTER;
   }
@@ -81,6 +91,7 @@
   int filter_step = filt_mid < 16 ? 4 : filt_mid / 4;
   // Sum squared error at each filter level
   int64_t ss_err[MAX_LOOP_FILTER + 1];
+  unsigned int section_intra_rating = get_section_intra_rating(cpi);
 
   // Set each entry to -1
   memset(ss_err, 0xFF, sizeof(ss_err));
@@ -99,8 +110,8 @@
     // Bias against raising loop filter in favor of lowering it.
     int64_t bias = (best_err >> (15 - (filt_mid / 8))) * filter_step;
 
-    if ((cpi->oxcf.pass == 2) && (cpi->twopass.section_intra_rating < 20))
-      bias = (bias * cpi->twopass.section_intra_rating) / 20;
+    if ((cpi->oxcf.pass == 2) && (section_intra_rating < 20))
+      bias = (bias * section_intra_rating) / 20;
 
     // yx, bias less for large block size
     if (cm->tx_mode != ONLY_4X4) bias >>= 1;
@@ -150,7 +161,7 @@
   VP9_COMMON *const cm = &cpi->common;
   struct loopfilter *const lf = &cm->lf;
 
-  lf->sharpness_level = cm->frame_type == KEY_FRAME ? 0 : cpi->oxcf.sharpness;
+  lf->sharpness_level = 0;
 
   if (method == LPF_PICK_MINIMAL_LPF && lf->filter_level) {
     lf->filter_level = 0;
@@ -169,20 +180,17 @@
       case VPX_BITS_10:
         filt_guess = ROUND_POWER_OF_TWO(q * 20723 + 4060632, 20);
         break;
-      case VPX_BITS_12:
+      default:
+        assert(cm->bit_depth == VPX_BITS_12);
         filt_guess = ROUND_POWER_OF_TWO(q * 20723 + 16242526, 22);
         break;
-      default:
-        assert(0 &&
-               "bit_depth should be VPX_BITS_8, VPX_BITS_10 "
-               "or VPX_BITS_12");
-        return;
     }
 #else
     int filt_guess = ROUND_POWER_OF_TWO(q * 20723 + 1015158, 18);
 #endif  // CONFIG_VP9_HIGHBITDEPTH
     if (cpi->oxcf.pass == 0 && cpi->oxcf.rc_mode == VPX_CBR &&
         cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ && cm->seg.enabled &&
+        (cm->base_qindex < 200 || cm->width * cm->height > 320 * 240) &&
         cpi->oxcf.content != VP9E_CONTENT_SCREEN && cm->frame_type != KEY_FRAME)
       filt_guess = 5 * filt_guess >> 3;
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.h
index cecca05..8881b44 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_picklpf.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_PICKLPF_H_
-#define VP9_ENCODER_VP9_PICKLPF_H_
+#ifndef VPX_VP9_ENCODER_VP9_PICKLPF_H_
+#define VPX_VP9_ENCODER_VP9_PICKLPF_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -26,4 +26,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_PICKLPF_H_
+#endif  // VPX_VP9_ENCODER_VP9_PICKLPF_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.c
index 212c260..23c943c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.c
@@ -19,7 +19,7 @@
 #include "vpx/vpx_codec.h"
 #include "vpx_dsp/vpx_dsp_common.h"
 #include "vpx_mem/vpx_mem.h"
-#include "vpx_ports/mem.h"
+#include "vpx_ports/compiler_attributes.h"
 
 #include "vp9/common/vp9_blockd.h"
 #include "vp9/common/vp9_common.h"
@@ -41,6 +41,17 @@
   int in_use;
 } PRED_BUFFER;
 
+typedef struct {
+  PRED_BUFFER *best_pred;
+  PREDICTION_MODE best_mode;
+  TX_SIZE best_tx_size;
+  TX_SIZE best_intra_tx_size;
+  MV_REFERENCE_FRAME best_ref_frame;
+  MV_REFERENCE_FRAME best_second_ref_frame;
+  uint8_t best_mode_skip_txfm;
+  INTERP_FILTER best_pred_filter;
+} BEST_PICKMODE;
+
 static const int pos_shift_16x16[4][4] = {
   { 9, 10, 13, 14 }, { 11, 12, 15, 16 }, { 17, 18, 21, 22 }, { 19, 20, 23, 24 }
 };
@@ -222,13 +233,22 @@
   }
 
   if (rv && search_subpel) {
-    int subpel_force_stop = cpi->sf.mv.subpel_force_stop;
-    if (use_base_mv && cpi->sf.base_mv_aggressive) subpel_force_stop = 2;
+    SUBPEL_FORCE_STOP subpel_force_stop = cpi->sf.mv.subpel_force_stop;
+    if (use_base_mv && cpi->sf.base_mv_aggressive) subpel_force_stop = HALF_PEL;
+    if (cpi->sf.mv.enable_adaptive_subpel_force_stop) {
+      const int mv_thresh = cpi->sf.mv.adapt_subpel_force_stop.mv_thresh;
+      if (abs(tmp_mv->as_mv.row) >= mv_thresh ||
+          abs(tmp_mv->as_mv.col) >= mv_thresh)
+        subpel_force_stop = cpi->sf.mv.adapt_subpel_force_stop.force_stop_above;
+      else
+        subpel_force_stop = cpi->sf.mv.adapt_subpel_force_stop.force_stop_below;
+    }
     cpi->find_fractional_mv_step(
         x, &tmp_mv->as_mv, &ref_mv, cpi->common.allow_high_precision_mv,
         x->errorperbit, &cpi->fn_ptr[bsize], subpel_force_stop,
-        cpi->sf.mv.subpel_iters_per_step, cond_cost_list(cpi, cost_list),
-        x->nmvjointcost, x->mvcost, &dis, &x->pred_sse[ref], NULL, 0, 0);
+        cpi->sf.mv.subpel_search_level, cond_cost_list(cpi, cost_list),
+        x->nmvjointcost, x->mvcost, &dis, &x->pred_sse[ref], NULL, 0, 0,
+        cpi->sf.use_accurate_subpel_search);
     *rate_mv = vp9_mv_bit_cost(&tmp_mv->as_mv, &ref_mv, x->nmvjointcost,
                                x->mvcost, MV_COST_WEIGHT);
   }
@@ -326,6 +346,82 @@
   return 1;
 }
 
+static TX_SIZE calculate_tx_size(VP9_COMP *const cpi, BLOCK_SIZE bsize,
+                                 MACROBLOCKD *const xd, unsigned int var,
+                                 unsigned int sse, int64_t ac_thr,
+                                 unsigned int source_variance, int is_intra) {
+  // TODO(marpan): Tune selection for intra-modes, screen content, etc.
+  TX_SIZE tx_size;
+  unsigned int var_thresh = is_intra ? (unsigned int)ac_thr : 1;
+  int limit_tx = 1;
+  if (cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ &&
+      (source_variance == 0 || var < var_thresh))
+    limit_tx = 0;
+  if (cpi->common.tx_mode == TX_MODE_SELECT) {
+    if (sse > (var << 2))
+      tx_size = VPXMIN(max_txsize_lookup[bsize],
+                       tx_mode_to_biggest_tx_size[cpi->common.tx_mode]);
+    else
+      tx_size = TX_8X8;
+    if (cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ && limit_tx &&
+        cyclic_refresh_segment_id_boosted(xd->mi[0]->segment_id))
+      tx_size = TX_8X8;
+    else if (tx_size > TX_16X16 && limit_tx)
+      tx_size = TX_16X16;
+    // For screen-content force 4X4 tx_size over 8X8, for large variance.
+    if (cpi->oxcf.content == VP9E_CONTENT_SCREEN && tx_size == TX_8X8 &&
+        bsize <= BLOCK_16X16 && ((var >> 5) > (unsigned int)ac_thr))
+      tx_size = TX_4X4;
+  } else {
+    tx_size = VPXMIN(max_txsize_lookup[bsize],
+                     tx_mode_to_biggest_tx_size[cpi->common.tx_mode]);
+  }
+  return tx_size;
+}
+
+static void compute_intra_yprediction(PREDICTION_MODE mode, BLOCK_SIZE bsize,
+                                      MACROBLOCK *x, MACROBLOCKD *xd) {
+  struct macroblockd_plane *const pd = &xd->plane[0];
+  struct macroblock_plane *const p = &x->plane[0];
+  uint8_t *const src_buf_base = p->src.buf;
+  uint8_t *const dst_buf_base = pd->dst.buf;
+  const int src_stride = p->src.stride;
+  const int dst_stride = pd->dst.stride;
+  // block and transform sizes, in number of 4x4 blocks log 2 ("*_b")
+  // 4x4=0, 8x8=2, 16x16=4, 32x32=6, 64x64=8
+  const TX_SIZE tx_size = max_txsize_lookup[bsize];
+  const int num_4x4_w = num_4x4_blocks_wide_lookup[bsize];
+  const int num_4x4_h = num_4x4_blocks_high_lookup[bsize];
+  int row, col;
+  // If mb_to_right_edge is < 0 we are in a situation in which
+  // the current block size extends into the UMV and we won't
+  // visit the sub blocks that are wholly within the UMV.
+  const int max_blocks_wide =
+      num_4x4_w + (xd->mb_to_right_edge >= 0
+                       ? 0
+                       : xd->mb_to_right_edge >> (5 + pd->subsampling_x));
+  const int max_blocks_high =
+      num_4x4_h + (xd->mb_to_bottom_edge >= 0
+                       ? 0
+                       : xd->mb_to_bottom_edge >> (5 + pd->subsampling_y));
+
+  // Keep track of the row and column of the blocks we use so that we know
+  // if we are in the unrestricted motion border.
+  for (row = 0; row < max_blocks_high; row += (1 << tx_size)) {
+    // Skip visiting the sub blocks that are wholly within the UMV.
+    for (col = 0; col < max_blocks_wide; col += (1 << tx_size)) {
+      p->src.buf = &src_buf_base[4 * (row * (int64_t)src_stride + col)];
+      pd->dst.buf = &dst_buf_base[4 * (row * (int64_t)dst_stride + col)];
+      vp9_predict_intra_block(xd, b_width_log2_lookup[bsize], tx_size, mode,
+                              x->skip_encode ? p->src.buf : pd->dst.buf,
+                              x->skip_encode ? src_stride : dst_stride,
+                              pd->dst.buf, dst_stride, col, row, 0);
+    }
+  }
+  p->src.buf = src_buf_base;
+  pd->dst.buf = dst_buf_base;
+}
+
 static void model_rd_for_sb_y_large(VP9_COMP *cpi, BLOCK_SIZE bsize,
                                     MACROBLOCK *x, MACROBLOCKD *xd,
                                     int *out_rate_sum, int64_t *out_dist_sum,
@@ -342,7 +438,7 @@
   struct macroblockd_plane *const pd = &xd->plane[0];
   const uint32_t dc_quant = pd->dequant[0];
   const uint32_t ac_quant = pd->dequant[1];
-  const int64_t dc_thr = dc_quant * dc_quant >> 6;
+  int64_t dc_thr = dc_quant * dc_quant >> 6;
   int64_t ac_thr = ac_quant * ac_quant >> 6;
   unsigned int var;
   int sum;
@@ -386,26 +482,17 @@
                           cpi->common.height, abs(sum) >> (bw + bh));
 #endif
 
-  if (cpi->common.tx_mode == TX_MODE_SELECT) {
-    if (sse > (var << 2))
-      tx_size = VPXMIN(max_txsize_lookup[bsize],
-                       tx_mode_to_biggest_tx_size[cpi->common.tx_mode]);
-    else
-      tx_size = TX_8X8;
-
-    if (cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ &&
-        cyclic_refresh_segment_id_boosted(xd->mi[0]->segment_id))
-      tx_size = TX_8X8;
-    else if (tx_size > TX_16X16)
-      tx_size = TX_16X16;
-  } else {
-    tx_size = VPXMIN(max_txsize_lookup[bsize],
-                     tx_mode_to_biggest_tx_size[cpi->common.tx_mode]);
-  }
-
-  assert(tx_size >= TX_8X8);
+  tx_size = calculate_tx_size(cpi, bsize, xd, var, sse, ac_thr,
+                              x->source_variance, 0);
+  // The code below for setting the skip flag assumes a transform size of at
+  // least 8x8, so force this lower limit on the transform.
+  if (tx_size < TX_8X8) tx_size = TX_8X8;
   xd->mi[0]->tx_size = tx_size;
 
+  if (cpi->oxcf.content == VP9E_CONTENT_SCREEN && x->zero_temp_sad_source &&
+      x->source_variance == 0)
+    dc_thr = dc_thr << 1;
+
   // Evaluate if the partition block is a skippable block in Y plane.
   {
     unsigned int sse16x16[16] = { 0 };
@@ -473,33 +560,29 @@
 
     // Transform skipping test in UV planes.
     for (i = 1; i <= 2; i++) {
-      if (cpi->oxcf.speed < 8 || x->color_sensitivity[i - 1]) {
-        struct macroblock_plane *const p = &x->plane[i];
-        struct macroblockd_plane *const pd = &xd->plane[i];
-        const TX_SIZE uv_tx_size = get_uv_tx_size(xd->mi[0], pd);
-        const BLOCK_SIZE unit_size = txsize_to_bsize[uv_tx_size];
-        const BLOCK_SIZE uv_bsize = get_plane_block_size(bsize, pd);
-        const int uv_bw = b_width_log2_lookup[uv_bsize];
-        const int uv_bh = b_height_log2_lookup[uv_bsize];
-        const int sf = (uv_bw - b_width_log2_lookup[unit_size]) +
-                       (uv_bh - b_height_log2_lookup[unit_size]);
-        const uint32_t uv_dc_thr = pd->dequant[0] * pd->dequant[0] >> (6 - sf);
-        const uint32_t uv_ac_thr = pd->dequant[1] * pd->dequant[1] >> (6 - sf);
-        int j = i - 1;
+      struct macroblock_plane *const p = &x->plane[i];
+      struct macroblockd_plane *const pd = &xd->plane[i];
+      const TX_SIZE uv_tx_size = get_uv_tx_size(xd->mi[0], pd);
+      const BLOCK_SIZE unit_size = txsize_to_bsize[uv_tx_size];
+      const BLOCK_SIZE uv_bsize = get_plane_block_size(bsize, pd);
+      const int uv_bw = b_width_log2_lookup[uv_bsize];
+      const int uv_bh = b_height_log2_lookup[uv_bsize];
+      const int sf = (uv_bw - b_width_log2_lookup[unit_size]) +
+                     (uv_bh - b_height_log2_lookup[unit_size]);
+      const uint32_t uv_dc_thr = pd->dequant[0] * pd->dequant[0] >> (6 - sf);
+      const uint32_t uv_ac_thr = pd->dequant[1] * pd->dequant[1] >> (6 - sf);
+      int j = i - 1;
 
-        vp9_build_inter_predictors_sbp(xd, mi_row, mi_col, bsize, i);
-        flag_preduv_computed[i - 1] = 1;
-        var_uv[j] = cpi->fn_ptr[uv_bsize].vf(
-            p->src.buf, p->src.stride, pd->dst.buf, pd->dst.stride, &sse_uv[j]);
+      vp9_build_inter_predictors_sbp(xd, mi_row, mi_col, bsize, i);
+      flag_preduv_computed[i - 1] = 1;
+      var_uv[j] = cpi->fn_ptr[uv_bsize].vf(
+          p->src.buf, p->src.stride, pd->dst.buf, pd->dst.stride, &sse_uv[j]);
 
-        if ((var_uv[j] < uv_ac_thr || var_uv[j] == 0) &&
-            (sse_uv[j] - var_uv[j] < uv_dc_thr || sse_uv[j] == var_uv[j]))
-          skip_uv[j] = 1;
-        else
-          break;
-      } else {
-        skip_uv[i - 1] = 1;
-      }
+      if ((var_uv[j] < uv_ac_thr || var_uv[j] == 0) &&
+          (sse_uv[j] - var_uv[j] < uv_dc_thr || sse_uv[j] == var_uv[j]))
+        skip_uv[j] = 1;
+      else
+        break;
     }
 
     // If the transform in YUV planes are skippable, the mode search checks
@@ -543,7 +626,7 @@
 static void model_rd_for_sb_y(VP9_COMP *cpi, BLOCK_SIZE bsize, MACROBLOCK *x,
                               MACROBLOCKD *xd, int *out_rate_sum,
                               int64_t *out_dist_sum, unsigned int *var_y,
-                              unsigned int *sse_y) {
+                              unsigned int *sse_y, int is_intra) {
   // Note our transform coeffs are 8 times an orthogonal transform.
   // Hence quantizer step is also 8 times. To get effective quantizer
   // we need to divide by 8 before sending to modeling function.
@@ -563,24 +646,8 @@
   *var_y = var;
   *sse_y = sse;
 
-  if (cpi->common.tx_mode == TX_MODE_SELECT) {
-    if (sse > (var << 2))
-      xd->mi[0]->tx_size =
-          VPXMIN(max_txsize_lookup[bsize],
-                 tx_mode_to_biggest_tx_size[cpi->common.tx_mode]);
-    else
-      xd->mi[0]->tx_size = TX_8X8;
-
-    if (cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ &&
-        cyclic_refresh_segment_id_boosted(xd->mi[0]->segment_id))
-      xd->mi[0]->tx_size = TX_8X8;
-    else if (xd->mi[0]->tx_size > TX_16X16)
-      xd->mi[0]->tx_size = TX_16X16;
-  } else {
-    xd->mi[0]->tx_size =
-        VPXMIN(max_txsize_lookup[bsize],
-               tx_mode_to_biggest_tx_size[cpi->common.tx_mode]);
-  }
+  xd->mi[0]->tx_size = calculate_tx_size(cpi, bsize, xd, var, sse, ac_thr,
+                                         x->source_variance, is_intra);
 
   // Evaluate if the partition block is a skippable block in Y plane.
   {
@@ -641,7 +708,7 @@
 
 static void block_yrd(VP9_COMP *cpi, MACROBLOCK *x, RD_COST *this_rdc,
                       int *skippable, int64_t *sse, BLOCK_SIZE bsize,
-                      TX_SIZE tx_size, int rd_computed) {
+                      TX_SIZE tx_size, int rd_computed, int is_intra) {
   MACROBLOCKD *xd = &x->e_mbd;
   const struct macroblockd_plane *pd = &xd->plane[0];
   struct macroblock_plane *const p = &x->plane[0];
@@ -658,25 +725,6 @@
   const int bw = 4 * num_4x4_w;
   const int bh = 4 * num_4x4_h;
 
-#if CONFIG_VP9_HIGHBITDEPTH
-  // TODO(jingning): Implement the high bit-depth Hadamard transforms and
-  // remove this check condition.
-  // TODO(marpan): Use this path (model_rd) for 8bit under certain conditions
-  // for now, as the vp9_quantize_fp below for highbitdepth build is slow.
-  if (xd->bd != 8 ||
-      (cpi->oxcf.speed > 5 && cpi->common.frame_type != KEY_FRAME &&
-       bsize < BLOCK_32X32)) {
-    unsigned int var_y, sse_y;
-    (void)tx_size;
-    if (!rd_computed)
-      model_rd_for_sb_y(cpi, bsize, x, xd, &this_rdc->rate, &this_rdc->dist,
-                        &var_y, &sse_y);
-    *sse = INT_MAX;
-    *skippable = 0;
-    return;
-  }
-#endif
-
   if (cpi->sf.use_simple_block_yrd && cpi->common.frame_type != KEY_FRAME &&
       (bsize < BLOCK_32X32 ||
        (cpi->use_svc &&
@@ -685,7 +733,7 @@
     (void)tx_size;
     if (!rd_computed)
       model_rd_for_sb_y(cpi, bsize, x, xd, &this_rdc->rate, &this_rdc->dist,
-                        &var_y, &sse_y);
+                        &var_y, &sse_y, is_intra);
     *sse = INT_MAX;
     *skippable = 0;
     return;
@@ -695,9 +743,19 @@
 
   // The max tx_size passed in is TX_16X16.
   assert(tx_size != TX_32X32);
-
+#if CONFIG_VP9_HIGHBITDEPTH
+  if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
+    vpx_highbd_subtract_block(bh, bw, p->src_diff, bw, p->src.buf,
+                              p->src.stride, pd->dst.buf, pd->dst.stride,
+                              x->e_mbd.bd);
+  } else {
+    vpx_subtract_block(bh, bw, p->src_diff, bw, p->src.buf, p->src.stride,
+                       pd->dst.buf, pd->dst.stride);
+  }
+#else
   vpx_subtract_block(bh, bw, p->src_diff, bw, p->src.buf, p->src.stride,
                      pd->dst.buf, pd->dst.stride);
+#endif
   *skippable = 1;
   // Keep track of the row and column of the blocks we use so that we know
   // if we are in the unrestricted motion border.
@@ -726,13 +784,13 @@
                             qcoeff, dqcoeff, pd->dequant, eob, scan_order->scan,
                             scan_order->iscan);
             break;
-          case TX_4X4:
+          default:
+            assert(tx_size == TX_4X4);
             x->fwd_txfm4x4(src_diff, coeff, diff_stride);
             vp9_quantize_fp(coeff, 16, x->skip_block, p->round_fp, p->quant_fp,
                             qcoeff, dqcoeff, pd->dequant, eob, scan_order->scan,
                             scan_order->iscan);
             break;
-          default: assert(0); break;
         }
         *skippable &= (*eob == 0);
         eob_cost += 1;
@@ -876,6 +934,7 @@
   // Skipping threshold for dc.
   unsigned int thresh_dc;
   int motion_low = 1;
+
   if (cpi->use_svc && ref_frame == GOLDEN_FRAME) return;
   if (mi->mv[0].as_mv.row > 64 || mi->mv[0].as_mv.row < -64 ||
       mi->mv[0].as_mv.col > 64 || mi->mv[0].as_mv.col < -64)
@@ -981,8 +1040,8 @@
   VP9_COMP *const cpi = args->cpi;
   MACROBLOCK *const x = args->x;
   MACROBLOCKD *const xd = &x->e_mbd;
-  struct macroblock_plane *const p = &x->plane[0];
-  struct macroblockd_plane *const pd = &xd->plane[0];
+  struct macroblock_plane *const p = &x->plane[plane];
+  struct macroblockd_plane *const pd = &xd->plane[plane];
   const BLOCK_SIZE bsize_tx = txsize_to_bsize[tx_size];
   uint8_t *const src_buf_base = p->src.buf;
   uint8_t *const dst_buf_base = pd->dst.buf;
@@ -992,8 +1051,8 @@
 
   (void)block;
 
-  p->src.buf = &src_buf_base[4 * (row * src_stride + col)];
-  pd->dst.buf = &dst_buf_base[4 * (row * dst_stride + col)];
+  p->src.buf = &src_buf_base[4 * (row * (int64_t)src_stride + col)];
+  pd->dst.buf = &dst_buf_base[4 * (row * (int64_t)dst_stride + col)];
   // Use source buffer as an approximation for the fully reconstructed buffer.
   vp9_predict_intra_block(xd, b_width_log2_lookup[plane_bsize], tx_size,
                           args->mode, x->skip_encode ? p->src.buf : pd->dst.buf,
@@ -1002,13 +1061,12 @@
 
   if (plane == 0) {
     int64_t this_sse = INT64_MAX;
-    // TODO(jingning): This needs further refactoring.
     block_yrd(cpi, x, &this_rdc, &args->skippable, &this_sse, bsize_tx,
-              VPXMIN(tx_size, TX_16X16), 0);
+              VPXMIN(tx_size, TX_16X16), 0, 1);
   } else {
     unsigned int var = 0;
     unsigned int sse = 0;
-    model_rd_for_sb_uv(cpi, plane_bsize, x, xd, &this_rdc, &var, &sse, plane,
+    model_rd_for_sb_uv(cpi, bsize_tx, x, xd, &this_rdc, &var, &sse, plane,
                        plane);
   }
 
@@ -1292,18 +1350,16 @@
     VP9_PICKMODE_CTX_DEN *ctx_den, int64_t zero_last_cost_orig,
     int ref_frame_cost[MAX_REF_FRAMES],
     int_mv frame_mv[MB_MODE_COUNT][MAX_REF_FRAMES], int reuse_inter_pred,
-    TX_SIZE best_tx_size, PREDICTION_MODE best_mode,
-    MV_REFERENCE_FRAME best_ref_frame, INTERP_FILTER best_pred_filter,
-    uint8_t best_mode_skip_txfm) {
+    BEST_PICKMODE *bp) {
   ctx_den->zero_last_cost_orig = zero_last_cost_orig;
   ctx_den->ref_frame_cost = ref_frame_cost;
   ctx_den->frame_mv = frame_mv;
   ctx_den->reuse_inter_pred = reuse_inter_pred;
-  ctx_den->best_tx_size = best_tx_size;
-  ctx_den->best_mode = best_mode;
-  ctx_den->best_ref_frame = best_ref_frame;
-  ctx_den->best_pred_filter = best_pred_filter;
-  ctx_den->best_mode_skip_txfm = best_mode_skip_txfm;
+  ctx_den->best_tx_size = bp->best_tx_size;
+  ctx_den->best_mode = bp->best_mode;
+  ctx_den->best_ref_frame = bp->best_ref_frame;
+  ctx_den->best_pred_filter = bp->best_pred_filter;
+  ctx_den->best_mode_skip_txfm = bp->best_mode_skip_txfm;
 }
 
 static void recheck_zeromv_after_denoising(
@@ -1322,6 +1378,7 @@
         cpi->svc.number_spatial_layers == 1 &&
         decision == FILTER_ZEROMV_BLOCK))) {
     // Check if we should pick ZEROMV on denoised signal.
+    VP9_COMMON *const cm = &cpi->common;
     int rate = 0;
     int64_t dist = 0;
     uint32_t var_y = UINT_MAX;
@@ -1330,11 +1387,13 @@
     mi->mode = ZEROMV;
     mi->ref_frame[0] = LAST_FRAME;
     mi->ref_frame[1] = NONE;
+    set_ref_ptrs(cm, xd, mi->ref_frame[0], NONE);
     mi->mv[0].as_int = 0;
     mi->interp_filter = EIGHTTAP;
+    if (cpi->sf.default_interp_filter == BILINEAR) mi->interp_filter = BILINEAR;
     xd->plane[0].pre[0] = yv12_mb[LAST_FRAME][0];
     vp9_build_inter_predictors_sby(xd, mi_row, mi_col, bsize);
-    model_rd_for_sb_y(cpi, bsize, x, xd, &rate, &dist, &var_y, &sse_y);
+    model_rd_for_sb_y(cpi, bsize, x, xd, &rate, &dist, &var_y, &sse_y, 0);
     this_rdc.rate = rate + ctx_den->ref_frame_cost[LAST_FRAME] +
                     cpi->inter_mode_cost[x->mbmi_ext->mode_context[LAST_FRAME]]
                                         [INTER_OFFSET(ZEROMV)];
@@ -1346,6 +1405,7 @@
       this_rdc = *best_rdc;
       mi->mode = ctx_den->best_mode;
       mi->ref_frame[0] = ctx_den->best_ref_frame;
+      set_ref_ptrs(cm, xd, mi->ref_frame[0], NONE);
       mi->interp_filter = ctx_den->best_pred_filter;
       if (ctx_den->best_ref_frame == INTRA_FRAME) {
         mi->mv[0].as_int = INVALID_MV;
@@ -1416,27 +1476,223 @@
   return force_skip_low_temp_var;
 }
 
+static void search_filter_ref(VP9_COMP *cpi, MACROBLOCK *x, RD_COST *this_rdc,
+                              int mi_row, int mi_col, PRED_BUFFER *tmp,
+                              BLOCK_SIZE bsize, int reuse_inter_pred,
+                              PRED_BUFFER **this_mode_pred, unsigned int *var_y,
+                              unsigned int *sse_y, int force_smooth_filter,
+                              int *this_early_term, int *flag_preduv_computed,
+                              int use_model_yrd_large) {
+  MACROBLOCKD *const xd = &x->e_mbd;
+  MODE_INFO *const mi = xd->mi[0];
+  struct macroblockd_plane *const pd = &xd->plane[0];
+  const int bw = num_4x4_blocks_wide_lookup[bsize] << 2;
+
+  int pf_rate[3] = { 0 };
+  int64_t pf_dist[3] = { 0 };
+  int curr_rate[3] = { 0 };
+  unsigned int pf_var[3] = { 0 };
+  unsigned int pf_sse[3] = { 0 };
+  TX_SIZE pf_tx_size[3] = { 0 };
+  int64_t best_cost = INT64_MAX;
+  INTERP_FILTER best_filter = SWITCHABLE, filter;
+  PRED_BUFFER *current_pred = *this_mode_pred;
+  uint8_t skip_txfm = SKIP_TXFM_NONE;
+  int best_early_term = 0;
+  int best_flag_preduv_computed[2] = { 0 };
+  INTERP_FILTER filter_start = force_smooth_filter ? EIGHTTAP_SMOOTH : EIGHTTAP;
+  INTERP_FILTER filter_end = EIGHTTAP_SMOOTH;
+  for (filter = filter_start; filter <= filter_end; ++filter) {
+    int64_t cost;
+    mi->interp_filter = filter;
+    vp9_build_inter_predictors_sby(xd, mi_row, mi_col, bsize);
+    // For large partition blocks, extra testing is done.
+    if (use_model_yrd_large)
+      model_rd_for_sb_y_large(cpi, bsize, x, xd, &pf_rate[filter],
+                              &pf_dist[filter], &pf_var[filter],
+                              &pf_sse[filter], mi_row, mi_col, this_early_term,
+                              flag_preduv_computed);
+    else
+      model_rd_for_sb_y(cpi, bsize, x, xd, &pf_rate[filter], &pf_dist[filter],
+                        &pf_var[filter], &pf_sse[filter], 0);
+    curr_rate[filter] = pf_rate[filter];
+    pf_rate[filter] += vp9_get_switchable_rate(cpi, xd);
+    cost = RDCOST(x->rdmult, x->rddiv, pf_rate[filter], pf_dist[filter]);
+    pf_tx_size[filter] = mi->tx_size;
+    if (cost < best_cost) {
+      best_filter = filter;
+      best_cost = cost;
+      skip_txfm = x->skip_txfm[0];
+      best_early_term = *this_early_term;
+      best_flag_preduv_computed[0] = flag_preduv_computed[0];
+      best_flag_preduv_computed[1] = flag_preduv_computed[1];
+
+      if (reuse_inter_pred) {
+        if (*this_mode_pred != current_pred) {
+          free_pred_buffer(*this_mode_pred);
+          *this_mode_pred = current_pred;
+        }
+        if (filter != filter_end) {
+          current_pred = &tmp[get_pred_buffer(tmp, 3)];
+          pd->dst.buf = current_pred->data;
+          pd->dst.stride = bw;
+        }
+      }
+    }
+  }
+
+  if (reuse_inter_pred && *this_mode_pred != current_pred)
+    free_pred_buffer(current_pred);
+
+  mi->interp_filter = best_filter;
+  mi->tx_size = pf_tx_size[best_filter];
+  this_rdc->rate = curr_rate[best_filter];
+  this_rdc->dist = pf_dist[best_filter];
+  *var_y = pf_var[best_filter];
+  *sse_y = pf_sse[best_filter];
+  x->skip_txfm[0] = skip_txfm;
+  *this_early_term = best_early_term;
+  flag_preduv_computed[0] = best_flag_preduv_computed[0];
+  flag_preduv_computed[1] = best_flag_preduv_computed[1];
+  if (reuse_inter_pred) {
+    pd->dst.buf = (*this_mode_pred)->data;
+    pd->dst.stride = (*this_mode_pred)->stride;
+  } else if (best_filter < filter_end) {
+    mi->interp_filter = best_filter;
+    vp9_build_inter_predictors_sby(xd, mi_row, mi_col, bsize);
+  }
+}
+
+static int search_new_mv(VP9_COMP *cpi, MACROBLOCK *x,
+                         int_mv frame_mv[][MAX_REF_FRAMES],
+                         MV_REFERENCE_FRAME ref_frame, int gf_temporal_ref,
+                         BLOCK_SIZE bsize, int mi_row, int mi_col,
+                         int best_pred_sad, int *rate_mv,
+                         unsigned int best_sse_sofar, RD_COST *best_rdc) {
+  SVC *const svc = &cpi->svc;
+  MACROBLOCKD *const xd = &x->e_mbd;
+  MODE_INFO *const mi = xd->mi[0];
+  SPEED_FEATURES *const sf = &cpi->sf;
+
+  if (ref_frame > LAST_FRAME && gf_temporal_ref &&
+      cpi->oxcf.rc_mode == VPX_CBR) {
+    int tmp_sad;
+    uint32_t dis;
+    int cost_list[5] = { INT_MAX, INT_MAX, INT_MAX, INT_MAX, INT_MAX };
+
+    if (bsize < BLOCK_16X16) return -1;
+
+    tmp_sad = vp9_int_pro_motion_estimation(
+        cpi, x, bsize, mi_row, mi_col,
+        &x->mbmi_ext->ref_mvs[ref_frame][0].as_mv);
+
+    if (tmp_sad > x->pred_mv_sad[LAST_FRAME]) return -1;
+    if (tmp_sad + (num_pels_log2_lookup[bsize] << 4) > best_pred_sad) return -1;
+
+    frame_mv[NEWMV][ref_frame].as_int = mi->mv[0].as_int;
+    *rate_mv = vp9_mv_bit_cost(&frame_mv[NEWMV][ref_frame].as_mv,
+                               &x->mbmi_ext->ref_mvs[ref_frame][0].as_mv,
+                               x->nmvjointcost, x->mvcost, MV_COST_WEIGHT);
+    frame_mv[NEWMV][ref_frame].as_mv.row >>= 3;
+    frame_mv[NEWMV][ref_frame].as_mv.col >>= 3;
+
+    cpi->find_fractional_mv_step(
+        x, &frame_mv[NEWMV][ref_frame].as_mv,
+        &x->mbmi_ext->ref_mvs[ref_frame][0].as_mv,
+        cpi->common.allow_high_precision_mv, x->errorperbit,
+        &cpi->fn_ptr[bsize], cpi->sf.mv.subpel_force_stop,
+        cpi->sf.mv.subpel_search_level, cond_cost_list(cpi, cost_list),
+        x->nmvjointcost, x->mvcost, &dis, &x->pred_sse[ref_frame], NULL, 0, 0,
+        cpi->sf.use_accurate_subpel_search);
+  } else if (svc->use_base_mv && svc->spatial_layer_id) {
+    if (frame_mv[NEWMV][ref_frame].as_int != INVALID_MV) {
+      const int pre_stride = xd->plane[0].pre[0].stride;
+      unsigned int base_mv_sse = UINT_MAX;
+      int scale = (cpi->rc.avg_frame_low_motion > 60) ? 2 : 4;
+      const uint8_t *const pre_buf =
+          xd->plane[0].pre[0].buf +
+          (frame_mv[NEWMV][ref_frame].as_mv.row >> 3) * pre_stride +
+          (frame_mv[NEWMV][ref_frame].as_mv.col >> 3);
+      cpi->fn_ptr[bsize].vf(x->plane[0].src.buf, x->plane[0].src.stride,
+                            pre_buf, pre_stride, &base_mv_sse);
+
+      // Exit NEWMV search if base_mv is (0,0) && bsize < BLOCK_16X16,
+      // for SVC encoding.
+      if (cpi->use_svc && svc->use_base_mv && bsize < BLOCK_16X16 &&
+          frame_mv[NEWMV][ref_frame].as_mv.row == 0 &&
+          frame_mv[NEWMV][ref_frame].as_mv.col == 0)
+        return -1;
+
+      // Exit NEWMV search if base_mv_sse is large.
+      if (sf->base_mv_aggressive && base_mv_sse > (best_sse_sofar << scale))
+        return -1;
+      if (base_mv_sse < (best_sse_sofar << 1)) {
+        // Base layer mv is good.
+        // Exit NEWMV search if the base_mv is (0, 0) and sse is low, since
+        // (0, 0) mode is already tested.
+        unsigned int base_mv_sse_normalized =
+            base_mv_sse >>
+            (b_width_log2_lookup[bsize] + b_height_log2_lookup[bsize]);
+        if (sf->base_mv_aggressive && base_mv_sse <= best_sse_sofar &&
+            base_mv_sse_normalized < 400 &&
+            frame_mv[NEWMV][ref_frame].as_mv.row == 0 &&
+            frame_mv[NEWMV][ref_frame].as_mv.col == 0)
+          return -1;
+        if (!combined_motion_search(cpi, x, bsize, mi_row, mi_col,
+                                    &frame_mv[NEWMV][ref_frame], rate_mv,
+                                    best_rdc->rdcost, 1)) {
+          return -1;
+        }
+      } else if (!combined_motion_search(cpi, x, bsize, mi_row, mi_col,
+                                         &frame_mv[NEWMV][ref_frame], rate_mv,
+                                         best_rdc->rdcost, 0)) {
+        return -1;
+      }
+    } else if (!combined_motion_search(cpi, x, bsize, mi_row, mi_col,
+                                       &frame_mv[NEWMV][ref_frame], rate_mv,
+                                       best_rdc->rdcost, 0)) {
+      return -1;
+    }
+  } else if (!combined_motion_search(cpi, x, bsize, mi_row, mi_col,
+                                     &frame_mv[NEWMV][ref_frame], rate_mv,
+                                     best_rdc->rdcost, 0)) {
+    return -1;
+  }
+
+  return 0;
+}
+
+static INLINE void init_best_pickmode(BEST_PICKMODE *bp) {
+  bp->best_mode = ZEROMV;
+  bp->best_ref_frame = LAST_FRAME;
+  bp->best_tx_size = TX_SIZES;
+  bp->best_intra_tx_size = TX_SIZES;
+  bp->best_pred_filter = EIGHTTAP;
+  bp->best_mode_skip_txfm = SKIP_TXFM_NONE;
+  bp->best_second_ref_frame = NONE;
+  bp->best_pred = NULL;
+}
+
 void vp9_pick_inter_mode(VP9_COMP *cpi, MACROBLOCK *x, TileDataEnc *tile_data,
                          int mi_row, int mi_col, RD_COST *rd_cost,
                          BLOCK_SIZE bsize, PICK_MODE_CONTEXT *ctx) {
   VP9_COMMON *const cm = &cpi->common;
   SPEED_FEATURES *const sf = &cpi->sf;
-  const SVC *const svc = &cpi->svc;
+  SVC *const svc = &cpi->svc;
   MACROBLOCKD *const xd = &x->e_mbd;
   MODE_INFO *const mi = xd->mi[0];
   struct macroblockd_plane *const pd = &xd->plane[0];
-  PREDICTION_MODE best_mode = ZEROMV;
-  MV_REFERENCE_FRAME ref_frame, best_ref_frame = LAST_FRAME;
+
+  BEST_PICKMODE best_pickmode;
+
+  MV_REFERENCE_FRAME ref_frame;
   MV_REFERENCE_FRAME usable_ref_frame, second_ref_frame;
-  TX_SIZE best_tx_size = TX_SIZES;
-  INTERP_FILTER best_pred_filter = EIGHTTAP;
   int_mv frame_mv[MB_MODE_COUNT][MAX_REF_FRAMES];
   uint8_t mode_checked[MB_MODE_COUNT][MAX_REF_FRAMES];
   struct buf_2d yv12_mb[4][MAX_MB_PLANE];
   static const int flag_list[4] = { 0, VP9_LAST_FLAG, VP9_GOLD_FLAG,
                                     VP9_ALT_FLAG };
   RD_COST this_rdc, best_rdc;
-  uint8_t skip_txfm = SKIP_TXFM_NONE, best_mode_skip_txfm = SKIP_TXFM_NONE;
   // var_y and sse_y are saved to be used in skipping checking
   unsigned int var_y = UINT_MAX;
   unsigned int sse_y = UINT_MAX;
@@ -1451,15 +1707,11 @@
       (cpi->sf.adaptive_rd_thresh_row_mt)
           ? &(tile_data->row_base_thresh_freq_fact[thresh_freq_fact_idx])
           : tile_data->thresh_freq_fact[bsize];
-
+#if CONFIG_VP9_TEMPORAL_DENOISING
+  const int denoise_recheck_zeromv = 1;
+#endif
   INTERP_FILTER filter_ref;
-  const int bsl = mi_width_log2_lookup[bsize];
-  const int pred_filter_search =
-      cm->interp_filter == SWITCHABLE
-          ? (((mi_row + mi_col) >> bsl) +
-             get_chessboard_index(cm->current_video_frame)) &
-                0x1
-          : 0;
+  int pred_filter_search = cm->interp_filter == SWITCHABLE;
   int const_motion[MAX_REF_FRAMES] = { 0 };
   const int bh = num_4x4_blocks_high_lookup[bsize] << 2;
   const int bw = num_4x4_blocks_wide_lookup[bsize] << 2;
@@ -1467,12 +1719,11 @@
   // process.
   // tmp[3] points to dst buffer, and the other 3 point to allocated buffers.
   PRED_BUFFER tmp[4];
-  DECLARE_ALIGNED(16, uint8_t, pred_buf[3 * 64 * 64]);
+  DECLARE_ALIGNED(16, uint8_t, pred_buf[3 * 64 * 64] VPX_UNINITIALIZED);
 #if CONFIG_VP9_HIGHBITDEPTH
-  DECLARE_ALIGNED(16, uint16_t, pred_buf_16[3 * 64 * 64]);
+  DECLARE_ALIGNED(16, uint16_t, pred_buf_16[3 * 64 * 64] VPX_UNINITIALIZED);
 #endif
   struct buf_2d orig_dst = pd->dst;
-  PRED_BUFFER *best_pred = NULL;
   PRED_BUFFER *this_mode_pred = NULL;
   const int pixels_in_block = bh * bw;
   int reuse_inter_pred = cpi->sf.reuse_inter_pred_sby && ctx->pred_pixel_ready;
@@ -1488,33 +1739,84 @@
   int skip_ref_find_pred[4] = { 0 };
   unsigned int sse_zeromv_normalized = UINT_MAX;
   unsigned int best_sse_sofar = UINT_MAX;
+  int gf_temporal_ref = 0;
+  int force_test_gf_zeromv = 0;
 #if CONFIG_VP9_TEMPORAL_DENOISING
   VP9_PICKMODE_CTX_DEN ctx_den;
   int64_t zero_last_cost_orig = INT64_MAX;
   int denoise_svc_pickmode = 1;
 #endif
   INTERP_FILTER filter_gf_svc = EIGHTTAP;
-  MV_REFERENCE_FRAME best_second_ref_frame = NONE;
+  MV_REFERENCE_FRAME inter_layer_ref = GOLDEN_FRAME;
   const struct segmentation *const seg = &cm->seg;
   int comp_modes = 0;
   int num_inter_modes = (cpi->use_svc) ? RT_INTER_MODES_SVC : RT_INTER_MODES;
   int flag_svc_subpel = 0;
   int svc_mv_col = 0;
   int svc_mv_row = 0;
+  int no_scaling = 0;
+  int large_block = 0;
+  int use_model_yrd_large = 0;
   unsigned int thresh_svc_skip_golden = 500;
+  unsigned int thresh_skip_golden = 500;
+  int force_smooth_filter = cpi->sf.force_smooth_interpol;
+  int scene_change_detected =
+      cpi->rc.high_source_sad ||
+      (cpi->use_svc && cpi->svc.high_source_sad_superframe);
+
+  init_best_pickmode(&best_pickmode);
+
+  x->encode_breakout = seg->enabled
+                           ? cpi->segment_encode_breakout[mi->segment_id]
+                           : cpi->encode_breakout;
+
+  x->source_variance = UINT_MAX;
+  if (cpi->sf.default_interp_filter == BILINEAR) {
+    best_pickmode.best_pred_filter = BILINEAR;
+    filter_gf_svc = BILINEAR;
+  }
+  if (cpi->use_svc && svc->spatial_layer_id > 0) {
+    int layer =
+        LAYER_IDS_TO_IDX(svc->spatial_layer_id - 1, svc->temporal_layer_id,
+                         svc->number_temporal_layers);
+    LAYER_CONTEXT *const lc = &svc->layer_context[layer];
+    if (lc->scaling_factor_num == lc->scaling_factor_den) no_scaling = 1;
+  }
+  if (svc->spatial_layer_id > 0 &&
+      (svc->high_source_sad_superframe || no_scaling))
+    thresh_svc_skip_golden = 0;
   // Lower the skip threshold if lower spatial layer is better quality relative
   // to current layer.
-  if (cpi->svc.spatial_layer_id > 0 && cm->base_qindex > 150 &&
-      cm->base_qindex > cpi->svc.lower_layer_qindex + 15)
+  else if (svc->spatial_layer_id > 0 && cm->base_qindex > 150 &&
+           cm->base_qindex > svc->lower_layer_qindex + 15)
     thresh_svc_skip_golden = 100;
   // Increase skip threshold if lower spatial layer is lower quality relative
   // to current layer.
-  else if (cpi->svc.spatial_layer_id > 0 && cm->base_qindex < 140 &&
-           cm->base_qindex < cpi->svc.lower_layer_qindex - 20)
+  else if (svc->spatial_layer_id > 0 && cm->base_qindex < 140 &&
+           cm->base_qindex < svc->lower_layer_qindex - 20)
     thresh_svc_skip_golden = 1000;
 
-  init_ref_frame_cost(cm, xd, ref_frame_cost);
+  if (!cpi->use_svc ||
+      (svc->use_gf_temporal_ref_current_layer &&
+       !svc->layer_context[svc->temporal_layer_id].is_key_frame)) {
+    struct scale_factors *const sf_last = &cm->frame_refs[LAST_FRAME - 1].sf;
+    struct scale_factors *const sf_golden =
+        &cm->frame_refs[GOLDEN_FRAME - 1].sf;
+    gf_temporal_ref = 1;
+    // For temporal long-term prediction, check that the golden reference
+    // is the same scale as the last reference; otherwise disable it.
+    if ((sf_last->x_scale_fp != sf_golden->x_scale_fp) ||
+        (sf_last->y_scale_fp != sf_golden->y_scale_fp)) {
+      gf_temporal_ref = 0;
+    } else {
+      if (cpi->rc.avg_frame_low_motion > 70)
+        thresh_svc_skip_golden = 500;
+      else
+        thresh_svc_skip_golden = 0;
+    }
+  }
 
+  init_ref_frame_cost(cm, xd, ref_frame_cost);
   memset(&mode_checked[0][0], 0, MB_MODE_COUNT * MAX_REF_FRAMES);
 
   if (reuse_inter_pred) {
@@ -1539,16 +1841,25 @@
   x->skip_encode = cpi->sf.skip_encode_frame && x->q_index < QIDX_SKIP_THRESH;
   x->skip = 0;
 
+  if (cpi->sf.cb_pred_filter_search) {
+    const int bsl = mi_width_log2_lookup[bsize];
+    pred_filter_search = cm->interp_filter == SWITCHABLE
+                             ? (((mi_row + mi_col) >> bsl) +
+                                get_chessboard_index(cm->current_video_frame)) &
+                                   0x1
+                             : 0;
+  }
   // Instead of using vp9_get_pred_context_switchable_interp(xd) to assign
   // filter_ref, we use a less strict condition on assigning filter_ref.
   // This is to reduce the probability of entering the flow of not assigning
   // filter_ref and then skip filter search.
-  if (xd->above_mi && is_inter_block(xd->above_mi))
-    filter_ref = xd->above_mi->interp_filter;
-  else if (xd->left_mi && is_inter_block(xd->left_mi))
-    filter_ref = xd->left_mi->interp_filter;
-  else
-    filter_ref = cm->interp_filter;
+  filter_ref = cm->interp_filter;
+  if (cpi->sf.default_interp_filter != BILINEAR) {
+    if (xd->above_mi && is_inter_block(xd->above_mi))
+      filter_ref = xd->above_mi->interp_filter;
+    else if (xd->left_mi && is_inter_block(xd->left_mi))
+      filter_ref = xd->left_mi->interp_filter;
+  }
 
   // initialize mode decisions
   vp9_rd_cost_reset(&best_rdc);
@@ -1569,23 +1880,24 @@
 #endif  // CONFIG_VP9_HIGHBITDEPTH
       x->source_variance =
           vp9_get_sby_perpixel_variance(cpi, &x->plane[0].src, bsize);
+
+    if (cpi->oxcf.content == VP9E_CONTENT_SCREEN &&
+        cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ && mi->segment_id > 0 &&
+        x->zero_temp_sad_source && x->source_variance == 0) {
+      mi->segment_id = 0;
+      vp9_init_plane_quantizers(cpi, x);
+    }
   }
 
 #if CONFIG_VP9_TEMPORAL_DENOISING
   if (cpi->oxcf.noise_sensitivity > 0) {
-    if (cpi->use_svc) {
-      int layer = LAYER_IDS_TO_IDX(cpi->svc.spatial_layer_id,
-                                   cpi->svc.temporal_layer_id,
-                                   cpi->svc.number_temporal_layers);
-      LAYER_CONTEXT *lc = &cpi->svc.layer_context[layer];
-      denoise_svc_pickmode = denoise_svc(cpi) && !lc->is_key_frame;
-    }
+    if (cpi->use_svc) denoise_svc_pickmode = vp9_denoise_svc_non_key(cpi);
     if (cpi->denoiser.denoising_level > kDenLowLow && denoise_svc_pickmode)
       vp9_denoiser_reset_frame_stats(ctx);
   }
 #endif
 
-  if (cpi->rc.frames_since_golden == 0 && !cpi->use_svc &&
+  if (cpi->rc.frames_since_golden == 0 && gf_temporal_ref &&
       !cpi->rc.alt_ref_gf_group && !cpi->rc.last_frame_is_src_altref) {
     usable_ref_frame = LAST_FRAME;
   } else {
@@ -1612,14 +1924,20 @@
   // For svc mode, on spatial_layer_id > 0: if the reference has different scale
   // constrain the inter mode to only test zero motion.
   if (cpi->use_svc && svc->force_zero_mode_spatial_ref &&
-      cpi->svc.spatial_layer_id > 0) {
+      svc->spatial_layer_id > 0 && !gf_temporal_ref) {
     if (cpi->ref_frame_flags & flag_list[LAST_FRAME]) {
       struct scale_factors *const sf = &cm->frame_refs[LAST_FRAME - 1].sf;
-      if (vp9_is_scaled(sf)) svc_force_zero_mode[LAST_FRAME - 1] = 1;
+      if (vp9_is_scaled(sf)) {
+        svc_force_zero_mode[LAST_FRAME - 1] = 1;
+        inter_layer_ref = LAST_FRAME;
+      }
     }
     if (cpi->ref_frame_flags & flag_list[GOLDEN_FRAME]) {
       struct scale_factors *const sf = &cm->frame_refs[GOLDEN_FRAME - 1].sf;
-      if (vp9_is_scaled(sf)) svc_force_zero_mode[GOLDEN_FRAME - 1] = 1;
+      if (vp9_is_scaled(sf)) {
+        svc_force_zero_mode[GOLDEN_FRAME - 1] = 1;
+        inter_layer_ref = GOLDEN_FRAME;
+      }
     }
   }
 
@@ -1635,6 +1953,10 @@
     }
   }
 
+  if (sf->disable_golden_ref && (x->content_state_sb != kVeryHighSad ||
+                                 cpi->rc.avg_frame_low_motion < 60))
+    usable_ref_frame = LAST_FRAME;
+
   if (!((cpi->ref_frame_flags & flag_list[GOLDEN_FRAME]) &&
         !svc_force_zero_mode[GOLDEN_FRAME - 1] && !force_skip_low_temp_var))
     use_golden_nonzeromv = 0;
@@ -1660,6 +1982,10 @@
   }
 
   for (ref_frame = LAST_FRAME; ref_frame <= usable_ref_frame; ++ref_frame) {
+    // Skip find_predictor if the reference frame is not in the
+    // ref_frame_flags (i.e., not used as a reference for this frame).
+    skip_ref_find_pred[ref_frame] =
+        !(cpi->ref_frame_flags & flag_list[ref_frame]);
     if (!skip_ref_find_pred[ref_frame]) {
       find_predictors(cpi, x, ref_frame, frame_mv, const_motion,
                       &ref_frame_skip_mask, flag_list, tile_data, mi_row,
@@ -1673,16 +1999,37 @@
 
   // Set the flag_svc_subpel to 1 for SVC if the lower spatial layer used
   // an averaging filter for downsampling (phase = 8). If so, we will test
-  // a nonzero motion mode on the spatial (goldeen) reference.
+  // a nonzero motion mode on the spatial reference.
   // The nonzero motion is half pixel shifted to left and top (-4, -4).
-  if (cpi->use_svc && cpi->svc.spatial_layer_id > 0 &&
-      svc_force_zero_mode[GOLDEN_FRAME - 1] &&
-      cpi->svc.downsample_filter_phase[cpi->svc.spatial_layer_id - 1] == 8) {
+  if (cpi->use_svc && svc->spatial_layer_id > 0 &&
+      svc_force_zero_mode[inter_layer_ref - 1] &&
+      svc->downsample_filter_phase[svc->spatial_layer_id - 1] == 8 &&
+      !gf_temporal_ref) {
     svc_mv_col = -4;
     svc_mv_row = -4;
     flag_svc_subpel = 1;
   }
 
+  // For SVC with quality layers, when QP of lower layer is lower
+  // than current layer: force check of GF-ZEROMV before early exit
+  // due to skip flag.
+  if (svc->spatial_layer_id > 0 && no_scaling &&
+      (cpi->ref_frame_flags & flag_list[GOLDEN_FRAME]) &&
+      cm->base_qindex > svc->lower_layer_qindex + 10)
+    force_test_gf_zeromv = 1;
+
+  // For low motion content use x->sb_is_skin in addition to VeryHighSad
+  // for setting large_block.
+  large_block = (x->content_state_sb == kVeryHighSad ||
+                 (x->sb_is_skin && cpi->rc.avg_frame_low_motion > 70) ||
+                 cpi->oxcf.speed < 7)
+                    ? bsize > BLOCK_32X32
+                    : bsize >= BLOCK_32X32;
+  use_model_yrd_large =
+      cpi->oxcf.rc_mode == VPX_CBR && large_block &&
+      !cyclic_refresh_segment_id_boosted(xd->mi[0]->segment_id) &&
+      cm->base_qindex;
+
   for (idx = 0; idx < num_inter_modes + comp_modes; ++idx) {
     int rate_mv = 0;
     int mode_rd_thresh;
@@ -1696,7 +2043,7 @@
     int inter_mv_mode = 0;
     int skip_this_mv = 0;
     int comp_pred = 0;
-    int force_gf_mv = 0;
+    int force_mv_inter_layer = 0;
     PREDICTION_MODE this_mode;
     second_ref_frame = NONE;
 
@@ -1720,14 +2067,19 @@
     if (ref_frame > usable_ref_frame) continue;
     if (skip_ref_find_pred[ref_frame]) continue;
 
+    if (svc->previous_frame_is_intra_only) {
+      if (ref_frame != LAST_FRAME || frame_mv[this_mode][ref_frame].as_int != 0)
+        continue;
+    }
+
     // If the segment reference frame feature is enabled then do nothing if the
     // current ref frame is not allowed.
     if (segfeature_active(seg, mi->segment_id, SEG_LVL_REF_FRAME) &&
         get_segdata(seg, mi->segment_id, SEG_LVL_REF_FRAME) != (int)ref_frame)
       continue;
 
-    if (flag_svc_subpel && ref_frame == GOLDEN_FRAME) {
-      force_gf_mv = 1;
+    if (flag_svc_subpel && ref_frame == inter_layer_ref) {
+      force_mv_inter_layer = 1;
       // Only test mode if NEARESTMV/NEARMV is (svc_mv_col, svc_mv_row),
       // otherwise set NEWMV to (svc_mv_col, svc_mv_row).
       if (this_mode == NEWMV) {
@@ -1748,15 +2100,33 @@
       if (segfeature_active(seg, mi->segment_id, SEG_LVL_REF_FRAME)) continue;
     }
 
-    // For SVC, skip the golden (spatial) reference search if sse of zeromv_last
-    // is below threshold.
-    if (cpi->use_svc && ref_frame == GOLDEN_FRAME &&
-        sse_zeromv_normalized < thresh_svc_skip_golden)
+    // For CBR mode: skip the golden reference search if sse of zeromv_last is
+    // below threshold.
+    if (ref_frame == GOLDEN_FRAME && cpi->oxcf.rc_mode == VPX_CBR &&
+        ((cpi->use_svc && sse_zeromv_normalized < thresh_svc_skip_golden) ||
+         (!cpi->use_svc && sse_zeromv_normalized < thresh_skip_golden)))
       continue;
 
-    if (sf->short_circuit_flat_blocks && x->source_variance == 0 &&
-        this_mode != NEARESTMV) {
-      continue;
+    if (!(cpi->ref_frame_flags & flag_list[ref_frame])) continue;
+
+    // For screen content: if zero_temp_sad_source is computed, skip the
+    // non-zero motion check for stationary blocks. If the superblock is
+    // non-stationary then, for flat blocks, skip the zero-last check (keep
+    // golden as it may be an inter-layer reference). Otherwise (if
+    // zero_temp_sad_source is not computed) skip the non-zero motion check
+    // for flat blocks.
+    // TODO(marpan): Compute zero_temp_sad_source per coding block.
+    if (cpi->oxcf.content == VP9E_CONTENT_SCREEN) {
+      if (cpi->compute_source_sad_onepass && cpi->sf.use_source_sad) {
+        if ((frame_mv[this_mode][ref_frame].as_int != 0 &&
+             x->zero_temp_sad_source) ||
+            (frame_mv[this_mode][ref_frame].as_int == 0 &&
+             x->source_variance == 0 && ref_frame == LAST_FRAME &&
+             !x->zero_temp_sad_source))
+          continue;
+      } else if (frame_mv[this_mode][ref_frame].as_int != 0 &&
+                 x->source_variance == 0) {
+        continue;
+      }
     }
 
     if (!(cpi->sf.inter_mode_mask[bsize] & (1 << this_mode))) continue;
@@ -1785,14 +2155,13 @@
         continue;
     }
 
-    if (!(cpi->ref_frame_flags & flag_list[ref_frame])) continue;
-
     if (const_motion[ref_frame] && this_mode == NEARMV) continue;
 
     // Skip non-zeromv mode search for golden frame if force_skip_low_temp_var
     // is set. If nearestmv for golden frame is 0, zeromv mode will be skipped
     // later.
-    if (!force_gf_mv && force_skip_low_temp_var && ref_frame == GOLDEN_FRAME &&
+    if (!force_mv_inter_layer && force_skip_low_temp_var &&
+        ref_frame == GOLDEN_FRAME &&
         frame_mv[this_mode][ref_frame].as_int != 0) {
       continue;
     }
@@ -1806,7 +2175,7 @@
     }
 
     if (cpi->use_svc) {
-      if (!force_gf_mv && svc_force_zero_mode[ref_frame - 1] &&
+      if (!force_mv_inter_layer && svc_force_zero_mode[ref_frame - 1] &&
           frame_mv[this_mode][ref_frame].as_int != 0)
         continue;
     }
@@ -1851,8 +2220,9 @@
     set_ref_ptrs(cm, xd, ref_frame, second_ref_frame);
 
     mode_index = mode_idx[ref_frame][INTER_OFFSET(this_mode)];
-    mode_rd_thresh = best_mode_skip_txfm ? rd_threshes[mode_index] << 1
-                                         : rd_threshes[mode_index];
+    mode_rd_thresh = best_pickmode.best_mode_skip_txfm
+                         ? rd_threshes[mode_index] << 1
+                         : rd_threshes[mode_index];
 
     // Increase mode_rd_thresh value for GOLDEN_FRAME for improved encoding
     // speed with little/no subjective quality loss.
@@ -1866,92 +2236,13 @@
         (!cpi->sf.adaptive_rd_thresh_row_mt &&
          rd_less_than_thresh(best_rdc.rdcost, mode_rd_thresh,
                              &rd_thresh_freq_fact[mode_index])))
-      continue;
+      if (frame_mv[this_mode][ref_frame].as_int != 0) continue;
 
-    if (this_mode == NEWMV && !force_gf_mv) {
-      if (ref_frame > LAST_FRAME && !cpi->use_svc &&
-          cpi->oxcf.rc_mode == VPX_CBR) {
-        int tmp_sad;
-        uint32_t dis;
-        int cost_list[5] = { INT_MAX, INT_MAX, INT_MAX, INT_MAX, INT_MAX };
-
-        if (bsize < BLOCK_16X16) continue;
-
-        tmp_sad = vp9_int_pro_motion_estimation(cpi, x, bsize, mi_row, mi_col);
-
-        if (tmp_sad > x->pred_mv_sad[LAST_FRAME]) continue;
-        if (tmp_sad + (num_pels_log2_lookup[bsize] << 4) > best_pred_sad)
-          continue;
-
-        frame_mv[NEWMV][ref_frame].as_int = mi->mv[0].as_int;
-        rate_mv = vp9_mv_bit_cost(&frame_mv[NEWMV][ref_frame].as_mv,
-                                  &x->mbmi_ext->ref_mvs[ref_frame][0].as_mv,
-                                  x->nmvjointcost, x->mvcost, MV_COST_WEIGHT);
-        frame_mv[NEWMV][ref_frame].as_mv.row >>= 3;
-        frame_mv[NEWMV][ref_frame].as_mv.col >>= 3;
-
-        cpi->find_fractional_mv_step(
-            x, &frame_mv[NEWMV][ref_frame].as_mv,
-            &x->mbmi_ext->ref_mvs[ref_frame][0].as_mv,
-            cpi->common.allow_high_precision_mv, x->errorperbit,
-            &cpi->fn_ptr[bsize], cpi->sf.mv.subpel_force_stop,
-            cpi->sf.mv.subpel_iters_per_step, cond_cost_list(cpi, cost_list),
-            x->nmvjointcost, x->mvcost, &dis, &x->pred_sse[ref_frame], NULL, 0,
-            0);
-      } else if (svc->use_base_mv && svc->spatial_layer_id) {
-        if (frame_mv[NEWMV][ref_frame].as_int != INVALID_MV) {
-          const int pre_stride = xd->plane[0].pre[0].stride;
-          unsigned int base_mv_sse = UINT_MAX;
-          int scale = (cpi->rc.avg_frame_low_motion > 60) ? 2 : 4;
-          const uint8_t *const pre_buf =
-              xd->plane[0].pre[0].buf +
-              (frame_mv[NEWMV][ref_frame].as_mv.row >> 3) * pre_stride +
-              (frame_mv[NEWMV][ref_frame].as_mv.col >> 3);
-          cpi->fn_ptr[bsize].vf(x->plane[0].src.buf, x->plane[0].src.stride,
-                                pre_buf, pre_stride, &base_mv_sse);
-
-          // Exit NEWMV search if base_mv is (0,0) && bsize < BLOCK_16x16,
-          // for SVC encoding.
-          if (cpi->use_svc && cpi->svc.use_base_mv && bsize < BLOCK_16X16 &&
-              frame_mv[NEWMV][ref_frame].as_mv.row == 0 &&
-              frame_mv[NEWMV][ref_frame].as_mv.col == 0)
-            continue;
-
-          // Exit NEWMV search if base_mv_sse is large.
-          if (sf->base_mv_aggressive && base_mv_sse > (best_sse_sofar << scale))
-            continue;
-          if (base_mv_sse < (best_sse_sofar << 1)) {
-            // Base layer mv is good.
-            // Exit NEWMV search if the base_mv is (0, 0) and sse is low, since
-            // (0, 0) mode is already tested.
-            unsigned int base_mv_sse_normalized =
-                base_mv_sse >>
-                (b_width_log2_lookup[bsize] + b_height_log2_lookup[bsize]);
-            if (sf->base_mv_aggressive && base_mv_sse <= best_sse_sofar &&
-                base_mv_sse_normalized < 400 &&
-                frame_mv[NEWMV][ref_frame].as_mv.row == 0 &&
-                frame_mv[NEWMV][ref_frame].as_mv.col == 0)
-              continue;
-            if (!combined_motion_search(cpi, x, bsize, mi_row, mi_col,
-                                        &frame_mv[NEWMV][ref_frame], &rate_mv,
-                                        best_rdc.rdcost, 1)) {
-              continue;
-            }
-          } else if (!combined_motion_search(cpi, x, bsize, mi_row, mi_col,
-                                             &frame_mv[NEWMV][ref_frame],
-                                             &rate_mv, best_rdc.rdcost, 0)) {
-            continue;
-          }
-        } else if (!combined_motion_search(cpi, x, bsize, mi_row, mi_col,
-                                           &frame_mv[NEWMV][ref_frame],
-                                           &rate_mv, best_rdc.rdcost, 0)) {
-          continue;
-        }
-      } else if (!combined_motion_search(cpi, x, bsize, mi_row, mi_col,
-                                         &frame_mv[NEWMV][ref_frame], &rate_mv,
-                                         best_rdc.rdcost, 0)) {
+    if (this_mode == NEWMV && !force_mv_inter_layer) {
+      if (search_new_mv(cpi, x, frame_mv, ref_frame, gf_temporal_ref, bsize,
+                        mi_row, mi_col, best_pred_sad, &rate_mv, best_sse_sofar,
+                        &best_rdc))
         continue;
-      }
     }
 
     // TODO(jianj): Skipping the testing of (duplicate) non-zero motion vector
@@ -2009,70 +2300,15 @@
     if ((this_mode == NEWMV || filter_ref == SWITCHABLE) &&
         pred_filter_search &&
         (ref_frame == LAST_FRAME ||
-         (ref_frame == GOLDEN_FRAME && !force_gf_mv &&
+         (ref_frame == GOLDEN_FRAME && !force_mv_inter_layer &&
           (cpi->use_svc || cpi->oxcf.rc_mode == VPX_VBR))) &&
         (((mi->mv[0].as_mv.row | mi->mv[0].as_mv.col) & 0x07) != 0)) {
-      int pf_rate[3];
-      int64_t pf_dist[3];
-      int curr_rate[3];
-      unsigned int pf_var[3];
-      unsigned int pf_sse[3];
-      TX_SIZE pf_tx_size[3];
-      int64_t best_cost = INT64_MAX;
-      INTERP_FILTER best_filter = SWITCHABLE, filter;
-      PRED_BUFFER *current_pred = this_mode_pred;
       rd_computed = 1;
-
-      for (filter = EIGHTTAP; filter <= EIGHTTAP_SMOOTH; ++filter) {
-        int64_t cost;
-        mi->interp_filter = filter;
-        vp9_build_inter_predictors_sby(xd, mi_row, mi_col, bsize);
-        model_rd_for_sb_y(cpi, bsize, x, xd, &pf_rate[filter], &pf_dist[filter],
-                          &pf_var[filter], &pf_sse[filter]);
-        curr_rate[filter] = pf_rate[filter];
-        pf_rate[filter] += vp9_get_switchable_rate(cpi, xd);
-        cost = RDCOST(x->rdmult, x->rddiv, pf_rate[filter], pf_dist[filter]);
-        pf_tx_size[filter] = mi->tx_size;
-        if (cost < best_cost) {
-          best_filter = filter;
-          best_cost = cost;
-          skip_txfm = x->skip_txfm[0];
-
-          if (reuse_inter_pred) {
-            if (this_mode_pred != current_pred) {
-              free_pred_buffer(this_mode_pred);
-              this_mode_pred = current_pred;
-            }
-            current_pred = &tmp[get_pred_buffer(tmp, 3)];
-            pd->dst.buf = current_pred->data;
-            pd->dst.stride = bw;
-          }
-        }
-      }
-
-      if (reuse_inter_pred && this_mode_pred != current_pred)
-        free_pred_buffer(current_pred);
-
-      mi->interp_filter = best_filter;
-      mi->tx_size = pf_tx_size[best_filter];
-      this_rdc.rate = curr_rate[best_filter];
-      this_rdc.dist = pf_dist[best_filter];
-      var_y = pf_var[best_filter];
-      sse_y = pf_sse[best_filter];
-      x->skip_txfm[0] = skip_txfm;
-      if (reuse_inter_pred) {
-        pd->dst.buf = this_mode_pred->data;
-        pd->dst.stride = this_mode_pred->stride;
-      }
+      search_filter_ref(cpi, x, &this_rdc, mi_row, mi_col, tmp, bsize,
+                        reuse_inter_pred, &this_mode_pred, &var_y, &sse_y,
+                        force_smooth_filter, &this_early_term,
+                        flag_preduv_computed, use_model_yrd_large);
     } else {
-      // For low motion content use x->sb_is_skin in addition to VeryHighSad
-      // for setting large_block.
-      const int large_block =
-          (x->content_state_sb == kVeryHighSad ||
-           (x->sb_is_skin && cpi->rc.avg_frame_low_motion > 70) ||
-           cpi->oxcf.speed < 7)
-              ? bsize > BLOCK_32X32
-              : bsize >= BLOCK_32X32;
       mi->interp_filter = (filter_ref == SWITCHABLE) ? EIGHTTAP : filter_ref;
 
       if (cpi->use_svc && ref_frame == GOLDEN_FRAME &&
@@ -2082,19 +2318,18 @@
       vp9_build_inter_predictors_sby(xd, mi_row, mi_col, bsize);
 
       // For large partition blocks, extra testing is done.
-      if (cpi->oxcf.rc_mode == VPX_CBR && large_block &&
-          !cyclic_refresh_segment_id_boosted(xd->mi[0]->segment_id) &&
-          cm->base_qindex) {
+      if (use_model_yrd_large) {
+        rd_computed = 1;
         model_rd_for_sb_y_large(cpi, bsize, x, xd, &this_rdc.rate,
                                 &this_rdc.dist, &var_y, &sse_y, mi_row, mi_col,
                                 &this_early_term, flag_preduv_computed);
       } else {
         rd_computed = 1;
         model_rd_for_sb_y(cpi, bsize, x, xd, &this_rdc.rate, &this_rdc.dist,
-                          &var_y, &sse_y);
+                          &var_y, &sse_y, 0);
       }
       // Save normalized sse (between current and last frame) for (0, 0) motion.
-      if (cpi->use_svc && ref_frame == LAST_FRAME &&
+      if (ref_frame == LAST_FRAME &&
           frame_mv[this_mode][ref_frame].as_int == 0) {
         sse_zeromv_normalized =
             sse_y >> (b_width_log2_lookup[bsize] + b_height_log2_lookup[bsize]);
@@ -2105,8 +2340,7 @@
     if (!this_early_term) {
       this_sse = (int64_t)sse_y;
       block_yrd(cpi, x, &this_rdc, &is_skippable, &this_sse, bsize,
-                VPXMIN(mi->tx_size, TX_16X16), rd_computed);
-
+                VPXMIN(mi->tx_size, TX_16X16), rd_computed, 0);
       x->skip_txfm[0] = is_skippable;
       if (is_skippable) {
         this_rdc.rate = vp9_cost_bit(vp9_get_skip_prob(cm, xd), 1);
@@ -2126,9 +2360,10 @@
           this_rdc.rate += vp9_get_switchable_rate(cpi, xd);
       }
     } else {
-      this_rdc.rate += cm->interp_filter == SWITCHABLE
-                           ? vp9_get_switchable_rate(cpi, xd)
-                           : 0;
+      if (cm->interp_filter == SWITCHABLE) {
+        if ((mi->mv[0].as_mv.row | mi->mv[0].as_mv.col) & 0x07)
+          this_rdc.rate += vp9_get_switchable_rate(cpi, xd);
+      }
       this_rdc.rate += vp9_cost_bit(vp9_get_skip_prob(cm, xd), 1);
     }
 
@@ -2169,7 +2404,8 @@
 
     // Skipping checking: test to see if this block can be reconstructed by
     // prediction only.
-    if (cpi->allow_encode_breakout) {
+    if (cpi->allow_encode_breakout && !xd->lossless && !scene_change_detected &&
+        !svc->high_num_blocks_with_motion) {
       encode_breakout_test(cpi, x, bsize, mi_row, mi_col, ref_frame, this_mode,
                            var_y, sse_y, yv12_mb, &this_rdc.rate,
                            &this_rdc.dist, flag_preduv_computed);
@@ -2180,6 +2416,15 @@
       }
     }
 
+    // On spatially flat blocks for screen content: bias against zero-last
+    // if the sse_y is non-zero. Only on scene change or high motion frames.
+    if (cpi->oxcf.content == VP9E_CONTENT_SCREEN &&
+        (scene_change_detected || svc->high_num_blocks_with_motion) &&
+        ref_frame == LAST_FRAME && frame_mv[this_mode][ref_frame].as_int == 0 &&
+        svc->spatial_layer_id == 0 && x->source_variance == 0 && sse_y > 0) {
+      this_rdc.rdcost = this_rdc.rdcost << 2;
+    }
+
 #if CONFIG_VP9_TEMPORAL_DENOISING
     if (cpi->oxcf.noise_sensitivity > 0 && denoise_svc_pickmode &&
         cpi->denoiser.denoising_level > kDenLowLow) {
@@ -2196,56 +2441,61 @@
 
     if (this_rdc.rdcost < best_rdc.rdcost || x->skip) {
       best_rdc = this_rdc;
-      best_mode = this_mode;
-      best_pred_filter = mi->interp_filter;
-      best_tx_size = mi->tx_size;
-      best_ref_frame = ref_frame;
-      best_mode_skip_txfm = x->skip_txfm[0];
       best_early_term = this_early_term;
-      best_second_ref_frame = second_ref_frame;
+      best_pickmode.best_mode = this_mode;
+      best_pickmode.best_pred_filter = mi->interp_filter;
+      best_pickmode.best_tx_size = mi->tx_size;
+      best_pickmode.best_ref_frame = ref_frame;
+      best_pickmode.best_mode_skip_txfm = x->skip_txfm[0];
+      best_pickmode.best_second_ref_frame = second_ref_frame;
 
       if (reuse_inter_pred) {
-        free_pred_buffer(best_pred);
-        best_pred = this_mode_pred;
+        free_pred_buffer(best_pickmode.best_pred);
+        best_pickmode.best_pred = this_mode_pred;
       }
     } else {
       if (reuse_inter_pred) free_pred_buffer(this_mode_pred);
     }
 
-    if (x->skip) break;
+    if (x->skip &&
+        (!force_test_gf_zeromv || mode_checked[ZEROMV][GOLDEN_FRAME]))
+      break;
 
     // If early termination flag is 1 and at least 2 modes are checked,
     // the mode search is terminated.
-    if (best_early_term && idx > 0) {
+    if (best_early_term && idx > 0 && !scene_change_detected &&
+        (!force_test_gf_zeromv || mode_checked[ZEROMV][GOLDEN_FRAME])) {
       x->skip = 1;
       break;
     }
   }
 
-  mi->mode = best_mode;
-  mi->interp_filter = best_pred_filter;
-  mi->tx_size = best_tx_size;
-  mi->ref_frame[0] = best_ref_frame;
-  mi->mv[0].as_int = frame_mv[best_mode][best_ref_frame].as_int;
+  mi->mode = best_pickmode.best_mode;
+  mi->interp_filter = best_pickmode.best_pred_filter;
+  mi->tx_size = best_pickmode.best_tx_size;
+  mi->ref_frame[0] = best_pickmode.best_ref_frame;
+  mi->mv[0].as_int =
+      frame_mv[best_pickmode.best_mode][best_pickmode.best_ref_frame].as_int;
   xd->mi[0]->bmi[0].as_mv[0].as_int = mi->mv[0].as_int;
-  x->skip_txfm[0] = best_mode_skip_txfm;
-  mi->ref_frame[1] = best_second_ref_frame;
+  x->skip_txfm[0] = best_pickmode.best_mode_skip_txfm;
+  mi->ref_frame[1] = best_pickmode.best_second_ref_frame;
 
   // For spatial enhancement layer: perform intra prediction only if base
   // layer is chosen as the reference. Always perform intra prediction if
   // LAST is the only reference, or is_key_frame is set, or on base
   // temporal layer.
-  if (cpi->svc.spatial_layer_id) {
+  if (svc->spatial_layer_id && !gf_temporal_ref) {
     perform_intra_pred =
-        cpi->svc.temporal_layer_id == 0 ||
-        cpi->svc.layer_context[cpi->svc.temporal_layer_id].is_key_frame ||
+        svc->temporal_layer_id == 0 ||
+        svc->layer_context[svc->temporal_layer_id].is_key_frame ||
         !(cpi->ref_frame_flags & flag_list[GOLDEN_FRAME]) ||
-        (!cpi->svc.layer_context[cpi->svc.temporal_layer_id].is_key_frame &&
-         svc_force_zero_mode[best_ref_frame - 1]);
+        (!svc->layer_context[svc->temporal_layer_id].is_key_frame &&
+         svc_force_zero_mode[best_pickmode.best_ref_frame - 1]);
     inter_mode_thresh = (inter_mode_thresh << 1) + inter_mode_thresh;
   }
-  if (cpi->oxcf.lag_in_frames > 0 && cpi->oxcf.rc_mode == VPX_VBR &&
-      cpi->rc.is_src_frame_alt_ref)
+  if ((cpi->oxcf.lag_in_frames > 0 && cpi->oxcf.rc_mode == VPX_VBR &&
+       cpi->rc.is_src_frame_alt_ref) ||
+      svc->previous_frame_is_intra_only)
     perform_intra_pred = 0;
 
   // If the segment reference frame feature is enabled and set then
@@ -2257,19 +2507,20 @@
   // Perform intra prediction search, if the best SAD is above a certain
   // threshold.
   if (best_rdc.rdcost == INT64_MAX ||
+      (cpi->oxcf.content == VP9E_CONTENT_SCREEN && x->source_variance == 0) ||
+      (scene_change_detected && perform_intra_pred) ||
       ((!force_skip_low_temp_var || bsize < BLOCK_32X32 ||
         x->content_state_sb == kVeryHighSad) &&
        perform_intra_pred && !x->skip && best_rdc.rdcost > inter_mode_thresh &&
        bsize <= cpi->sf.max_intra_bsize && !x->skip_low_source_sad &&
        !x->lowvar_highsumdiff)) {
     struct estimate_block_intra_args args = { cpi, x, DC_PRED, 1, 0 };
+    int64_t this_sse = INT64_MAX;
     int i;
-    TX_SIZE best_intra_tx_size = TX_SIZES;
+    PRED_BUFFER *const best_pred = best_pickmode.best_pred;
     TX_SIZE intra_tx_size =
         VPXMIN(max_txsize_lookup[bsize],
                tx_mode_to_biggest_tx_size[cpi->common.tx_mode]);
-    if (cpi->oxcf.content != VP9E_CONTENT_SCREEN && intra_tx_size > TX_16X16)
-      intra_tx_size = TX_16X16;
 
     if (reuse_inter_pred && best_pred != NULL) {
       if (best_pred->data == orig_dst.buf) {
@@ -2289,7 +2540,7 @@
                           this_mode_pred->data, this_mode_pred->stride, NULL, 0,
                           0, 0, 0, bw, bh);
 #endif  // CONFIG_VP9_HIGHBITDEPTH
-        best_pred = this_mode_pred;
+        best_pickmode.best_pred = this_mode_pred;
       }
     }
     pd->dst = orig_dst;
@@ -2298,21 +2549,34 @@
       const PREDICTION_MODE this_mode = intra_mode_list[i];
       THR_MODES mode_index = mode_idx[INTRA_FRAME][mode_offset(this_mode)];
       int mode_rd_thresh = rd_threshes[mode_index];
+      // For spatially flat blocks, under short_circuit_flat_blocks flag:
+      // only check DC mode for stationary blocks, otherwise also check
+      // H and V mode.
       if (sf->short_circuit_flat_blocks && x->source_variance == 0 &&
-          this_mode != DC_PRED) {
+          ((x->zero_temp_sad_source && this_mode != DC_PRED) || i > 2)) {
         continue;
       }
 
       if (!((1 << this_mode) & cpi->sf.intra_y_mode_bsize_mask[bsize]))
         continue;
 
+      if (cpi->sf.rt_intra_dc_only_low_content && this_mode != DC_PRED &&
+          x->content_state_sb != kVeryHighSad)
+        continue;
+
       if ((cpi->sf.adaptive_rd_thresh_row_mt &&
            rd_less_than_thresh_row_mt(best_rdc.rdcost, mode_rd_thresh,
                                       &rd_thresh_freq_fact[mode_index])) ||
           (!cpi->sf.adaptive_rd_thresh_row_mt &&
            rd_less_than_thresh(best_rdc.rdcost, mode_rd_thresh,
-                               &rd_thresh_freq_fact[mode_index])))
-        continue;
+                               &rd_thresh_freq_fact[mode_index]))) {
+        // Avoid this early exit for screen content on the base layer, on
+        // scene changes or high-motion frames.
+        if (cpi->oxcf.content != VP9E_CONTENT_SCREEN ||
+            svc->spatial_layer_id > 0 ||
+            (!scene_change_detected && !svc->high_num_blocks_with_motion))
+          continue;
+      }
 
       mi->mode = this_mode;
       mi->ref_frame[0] = INTRA_FRAME;
@@ -2321,8 +2585,13 @@
       args.skippable = 1;
       args.rdc = &this_rdc;
       mi->tx_size = intra_tx_size;
-      vp9_foreach_transformed_block_in_plane(xd, bsize, 0, estimate_block_intra,
-                                             &args);
+
+      compute_intra_yprediction(this_mode, bsize, x, xd);
+      model_rd_for_sb_y(cpi, bsize, x, xd, &this_rdc.rate, &this_rdc.dist,
+                        &var_y, &sse_y, 1);
+      block_yrd(cpi, x, &this_rdc, &args.skippable, &this_sse, bsize,
+                VPXMIN(mi->tx_size, TX_16X16), 1, 1);
+
       // Check skip cost here since skippable is not set for uv, this
       // mirrors the behavior used by inter
       if (args.skippable) {
@@ -2349,36 +2618,37 @@
 
       if (this_rdc.rdcost < best_rdc.rdcost) {
         best_rdc = this_rdc;
-        best_mode = this_mode;
-        best_intra_tx_size = mi->tx_size;
-        best_ref_frame = INTRA_FRAME;
-        best_second_ref_frame = NONE;
+        best_pickmode.best_mode = this_mode;
+        best_pickmode.best_intra_tx_size = mi->tx_size;
+        best_pickmode.best_ref_frame = INTRA_FRAME;
+        best_pickmode.best_second_ref_frame = NONE;
         mi->uv_mode = this_mode;
         mi->mv[0].as_int = INVALID_MV;
         mi->mv[1].as_int = INVALID_MV;
-        best_mode_skip_txfm = x->skip_txfm[0];
+        best_pickmode.best_mode_skip_txfm = x->skip_txfm[0];
       }
     }
 
     // Reset mb_mode_info to the best inter mode.
-    if (best_ref_frame != INTRA_FRAME) {
-      mi->tx_size = best_tx_size;
+    if (best_pickmode.best_ref_frame != INTRA_FRAME) {
+      mi->tx_size = best_pickmode.best_tx_size;
     } else {
-      mi->tx_size = best_intra_tx_size;
+      mi->tx_size = best_pickmode.best_intra_tx_size;
     }
   }
 
   pd->dst = orig_dst;
-  mi->mode = best_mode;
-  mi->ref_frame[0] = best_ref_frame;
-  mi->ref_frame[1] = best_second_ref_frame;
-  x->skip_txfm[0] = best_mode_skip_txfm;
+  mi->mode = best_pickmode.best_mode;
+  mi->ref_frame[0] = best_pickmode.best_ref_frame;
+  mi->ref_frame[1] = best_pickmode.best_second_ref_frame;
+  x->skip_txfm[0] = best_pickmode.best_mode_skip_txfm;
 
   if (!is_inter_block(mi)) {
     mi->interp_filter = SWITCHABLE_FILTERS;
   }
 
-  if (reuse_inter_pred && best_pred != NULL) {
+  if (reuse_inter_pred && best_pickmode.best_pred != NULL) {
+    PRED_BUFFER *const best_pred = best_pickmode.best_pred;
     if (best_pred->data != orig_dst.buf && is_inter_mode(mi->mode)) {
 #if CONFIG_VP9_HIGHBITDEPTH
       if (cm->use_highbitdepth)
@@ -2407,25 +2677,27 @@
     // Remove this condition when the issue is resolved.
     if (x->sb_pickmode_part) ctx->sb_skip_denoising = 1;
     vp9_pickmode_ctx_den_update(&ctx_den, zero_last_cost_orig, ref_frame_cost,
-                                frame_mv, reuse_inter_pred, best_tx_size,
-                                best_mode, best_ref_frame, best_pred_filter,
-                                best_mode_skip_txfm);
-    vp9_denoiser_denoise(cpi, x, mi_row, mi_col, bsize, ctx, &decision);
-    recheck_zeromv_after_denoising(cpi, mi, x, xd, decision, &ctx_den, yv12_mb,
-                                   &best_rdc, bsize, mi_row, mi_col);
-    best_ref_frame = ctx_den.best_ref_frame;
+                                frame_mv, reuse_inter_pred, &best_pickmode);
+    vp9_denoiser_denoise(cpi, x, mi_row, mi_col, bsize, ctx, &decision,
+                         gf_temporal_ref);
+    if (denoise_recheck_zeromv)
+      recheck_zeromv_after_denoising(cpi, mi, x, xd, decision, &ctx_den,
+                                     yv12_mb, &best_rdc, bsize, mi_row, mi_col);
+    best_pickmode.best_ref_frame = ctx_den.best_ref_frame;
   }
 #endif
 
-  if (best_ref_frame == ALTREF_FRAME || best_second_ref_frame == ALTREF_FRAME)
+  if (best_pickmode.best_ref_frame == ALTREF_FRAME ||
+      best_pickmode.best_second_ref_frame == ALTREF_FRAME)
     x->arf_frame_usage++;
-  else if (best_ref_frame != INTRA_FRAME)
+  else if (best_pickmode.best_ref_frame != INTRA_FRAME)
     x->lastgolden_frame_usage++;
 
   if (cpi->sf.adaptive_rd_thresh) {
-    THR_MODES best_mode_idx = mode_idx[best_ref_frame][mode_offset(mi->mode)];
+    THR_MODES best_mode_idx =
+        mode_idx[best_pickmode.best_ref_frame][mode_offset(mi->mode)];
 
-    if (best_ref_frame == INTRA_FRAME) {
+    if (best_pickmode.best_ref_frame == INTRA_FRAME) {
       // Only consider the modes that are included in the intra_mode_list.
       int intra_modes = sizeof(intra_mode_list) / sizeof(PREDICTION_MODE);
       int i;
@@ -2445,7 +2717,7 @@
     } else {
       for (ref_frame = LAST_FRAME; ref_frame <= GOLDEN_FRAME; ++ref_frame) {
         PREDICTION_MODE this_mode;
-        if (best_ref_frame != ref_frame) continue;
+        if (best_pickmode.best_ref_frame != ref_frame) continue;
         for (this_mode = NEARESTMV; this_mode <= NEWMV; ++this_mode) {
           if (cpi->sf.adaptive_rd_thresh_row_mt)
             update_thresh_freq_fact_row_mt(cpi, tile_data, x->source_variance,
@@ -2625,9 +2897,10 @@
                 x, &tmp_mv, &mbmi_ext->ref_mvs[ref_frame][0].as_mv,
                 cpi->common.allow_high_precision_mv, x->errorperbit,
                 &cpi->fn_ptr[bsize], cpi->sf.mv.subpel_force_stop,
-                cpi->sf.mv.subpel_iters_per_step,
-                cond_cost_list(cpi, cost_list), x->nmvjointcost, x->mvcost,
-                &dummy_dist, &x->pred_sse[ref_frame], NULL, 0, 0);
+                cpi->sf.mv.subpel_search_level, cond_cost_list(cpi, cost_list),
+                x->nmvjointcost, x->mvcost, &dummy_dist,
+                &x->pred_sse[ref_frame], NULL, 0, 0,
+                cpi->sf.use_accurate_subpel_search);
 
             xd->mi[0]->bmi[i].as_mv[0].as_mv = tmp_mv;
           } else {
@@ -2660,7 +2933,7 @@
 #endif
 
           model_rd_for_sb_y(cpi, bsize, x, xd, &this_rdc.rate, &this_rdc.dist,
-                            &var_y, &sse_y);
+                            &var_y, &sse_y, 0);
 
           this_rdc.rate += b_rate;
           this_rdc.rdcost =
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.h
index 9aa00c4..15207e6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_pickmode.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_PICKMODE_H_
-#define VP9_ENCODER_VP9_PICKMODE_H_
+#ifndef VPX_VP9_ENCODER_VP9_PICKMODE_H_
+#define VPX_VP9_ENCODER_VP9_PICKMODE_H_
 
 #include "vp9/encoder/vp9_encoder.h"
 
@@ -32,4 +32,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_PICKMODE_H_
+#endif  // VPX_VP9_ENCODER_VP9_PICKMODE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_quantize.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_quantize.c
index 09f61ea..c996b75 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_quantize.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_quantize.c
@@ -204,10 +204,9 @@
   switch (bit_depth) {
     case VPX_BITS_8: return q == 0 ? 64 : (quant < 148 ? 84 : 80);
     case VPX_BITS_10: return q == 0 ? 64 : (quant < 592 ? 84 : 80);
-    case VPX_BITS_12: return q == 0 ? 64 : (quant < 2368 ? 84 : 80);
     default:
-      assert(0 && "bit_depth should be VPX_BITS_8, VPX_BITS_10 or VPX_BITS_12");
-      return -1;
+      assert(bit_depth == VPX_BITS_12);
+      return q == 0 ? 64 : (quant < 2368 ? 84 : 80);
   }
 #else
   (void)bit_depth;
@@ -221,13 +220,20 @@
   int i, q, quant;
 
   for (q = 0; q < QINDEX_RANGE; q++) {
-    const int qzbin_factor = get_qzbin_factor(q, cm->bit_depth);
-    const int qrounding_factor = q == 0 ? 64 : 48;
+    int qzbin_factor = get_qzbin_factor(q, cm->bit_depth);
+    int qrounding_factor = q == 0 ? 64 : 48;
+    const int sharpness_adjustment = 16 * (7 - cpi->oxcf.sharpness) / 7;
+
+    if (cpi->oxcf.sharpness > 0 && q > 0) {
+      qzbin_factor = 64 + sharpness_adjustment;
+      qrounding_factor = 64 - sharpness_adjustment;
+    }
 
     for (i = 0; i < 2; ++i) {
       int qrounding_factor_fp = i == 0 ? 48 : 42;
       if (q == 0) qrounding_factor_fp = 64;
-
+      if (cpi->oxcf.sharpness > 0)
+        qrounding_factor_fp = 64 - sharpness_adjustment;
       // y
       quant = i == 0 ? vp9_dc_quant(q, cm->y_dc_delta_q, cm->bit_depth)
                      : vp9_ac_quant(q, 0, cm->bit_depth);
@@ -282,12 +288,12 @@
   // Y
   x->plane[0].quant = quants->y_quant[qindex];
   x->plane[0].quant_fp = quants->y_quant_fp[qindex];
-  x->plane[0].round_fp = quants->y_round_fp[qindex];
+  memcpy(x->plane[0].round_fp, quants->y_round_fp[qindex],
+         8 * sizeof(*(x->plane[0].round_fp)));
   x->plane[0].quant_shift = quants->y_quant_shift[qindex];
   x->plane[0].zbin = quants->y_zbin[qindex];
   x->plane[0].round = quants->y_round[qindex];
   xd->plane[0].dequant = cpi->y_dequant[qindex];
-
   x->plane[0].quant_thred[0] = x->plane[0].zbin[0] * x->plane[0].zbin[0];
   x->plane[0].quant_thred[1] = x->plane[0].zbin[1] * x->plane[0].zbin[1];
 
@@ -295,12 +301,12 @@
   for (i = 1; i < 3; i++) {
     x->plane[i].quant = quants->uv_quant[qindex];
     x->plane[i].quant_fp = quants->uv_quant_fp[qindex];
-    x->plane[i].round_fp = quants->uv_round_fp[qindex];
+    memcpy(x->plane[i].round_fp, quants->uv_round_fp[qindex],
+           8 * sizeof(*(x->plane[i].round_fp)));
     x->plane[i].quant_shift = quants->uv_quant_shift[qindex];
     x->plane[i].zbin = quants->uv_zbin[qindex];
     x->plane[i].round = quants->uv_round[qindex];
     xd->plane[i].dequant = cpi->uv_dequant[qindex];
-
     x->plane[i].quant_thred[0] = x->plane[i].zbin[0] * x->plane[i].zbin[0];
     x->plane[i].quant_thred[1] = x->plane[i].zbin[1] * x->plane[i].zbin[1];
   }
@@ -317,13 +323,18 @@
   vp9_init_plane_quantizers(cpi, &cpi->td.mb);
 }
 
-void vp9_set_quantizer(VP9_COMMON *cm, int q) {
+void vp9_set_quantizer(VP9_COMP *cpi, int q) {
+  VP9_COMMON *cm = &cpi->common;
   // quantizer has to be reinitialized with vp9_init_quantizer() if any
   // delta_q changes.
   cm->base_qindex = q;
   cm->y_dc_delta_q = 0;
   cm->uv_dc_delta_q = 0;
   cm->uv_ac_delta_q = 0;
+  if (cpi->oxcf.delta_q_uv != 0) {
+    cm->uv_dc_delta_q = cm->uv_ac_delta_q = cpi->oxcf.delta_q_uv;
+    vp9_init_quantizer(cpi);
+  }
 }
 
 // Table that converts 0-63 Q-range values passed in outside to the Qindex
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_quantize.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_quantize.h
index 6132036..2e6d7da 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_quantize.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_quantize.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_QUANTIZE_H_
-#define VP9_ENCODER_VP9_QUANTIZE_H_
+#ifndef VPX_VP9_ENCODER_VP9_QUANTIZE_H_
+#define VPX_VP9_ENCODER_VP9_QUANTIZE_H_
 
 #include "./vpx_config.h"
 #include "vp9/encoder/vp9_block.h"
@@ -49,7 +49,7 @@
 
 void vp9_init_quantizer(struct VP9_COMP *cpi);
 
-void vp9_set_quantizer(struct VP9Common *cm, int q);
+void vp9_set_quantizer(struct VP9_COMP *cm, int q);
 
 int vp9_quantizer_to_qindex(int quantizer);
 
@@ -59,4 +59,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_QUANTIZE_H_
+#endif  // VPX_VP9_ENCODER_VP9_QUANTIZE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ratectrl.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ratectrl.c
index 2e4c9d2..edbed73 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ratectrl.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ratectrl.c
@@ -48,18 +48,16 @@
 #define MAX_BPB_FACTOR 50
 
 #if CONFIG_VP9_HIGHBITDEPTH
-#define ASSIGN_MINQ_TABLE(bit_depth, name)                   \
-  do {                                                       \
-    switch (bit_depth) {                                     \
-      case VPX_BITS_8: name = name##_8; break;               \
-      case VPX_BITS_10: name = name##_10; break;             \
-      case VPX_BITS_12: name = name##_12; break;             \
-      default:                                               \
-        assert(0 &&                                          \
-               "bit_depth should be VPX_BITS_8, VPX_BITS_10" \
-               " or VPX_BITS_12");                           \
-        name = NULL;                                         \
-    }                                                        \
+#define ASSIGN_MINQ_TABLE(bit_depth, name)       \
+  do {                                           \
+    switch (bit_depth) {                         \
+      case VPX_BITS_8: name = name##_8; break;   \
+      case VPX_BITS_10: name = name##_10; break; \
+      default:                                   \
+        assert(bit_depth == VPX_BITS_12);        \
+        name = name##_12;                        \
+        break;                                   \
+    }                                            \
   } while (0)
 #else
 #define ASSIGN_MINQ_TABLE(bit_depth, name) \
@@ -100,8 +98,8 @@
 #else
 static int gf_high = 2000;
 static int gf_low = 400;
-static int kf_high = 5000;
-static int kf_low = 400;
+static int kf_high = 4800;
+static int kf_low = 300;
 #endif
 
 // Functions to compute the active minq lookup table entries based on a
@@ -131,7 +129,7 @@
   for (i = 0; i < QINDEX_RANGE; i++) {
     const double maxq = vp9_convert_qindex_to_q(i, bit_depth);
     kf_low_m[i] = get_minq_index(maxq, 0.000001, -0.0004, 0.150, bit_depth);
-    kf_high_m[i] = get_minq_index(maxq, 0.0000021, -0.00125, 0.55, bit_depth);
+    kf_high_m[i] = get_minq_index(maxq, 0.0000021, -0.00125, 0.45, bit_depth);
 #ifdef AGGRESSIVE_VBR
     arfgf_low[i] = get_minq_index(maxq, 0.0000015, -0.0009, 0.275, bit_depth);
     inter[i] = get_minq_index(maxq, 0.00000271, -0.00113, 0.80, bit_depth);
@@ -167,10 +165,9 @@
   switch (bit_depth) {
     case VPX_BITS_8: return vp9_ac_quant(qindex, 0, bit_depth) / 4.0;
     case VPX_BITS_10: return vp9_ac_quant(qindex, 0, bit_depth) / 16.0;
-    case VPX_BITS_12: return vp9_ac_quant(qindex, 0, bit_depth) / 64.0;
     default:
-      assert(0 && "bit_depth should be VPX_BITS_8, VPX_BITS_10 or VPX_BITS_12");
-      return -1.0;
+      assert(bit_depth == VPX_BITS_12);
+      return vp9_ac_quant(qindex, 0, bit_depth) / 64.0;
   }
 #else
   return vp9_ac_quant(qindex, 0, bit_depth) / 4.0;
@@ -214,17 +211,15 @@
   const RATE_CONTROL *rc = &cpi->rc;
   const VP9EncoderConfig *oxcf = &cpi->oxcf;
 
-  if (cpi->oxcf.pass != 2) {
-    const int min_frame_target =
-        VPXMAX(rc->min_frame_bandwidth, rc->avg_frame_bandwidth >> 5);
-    if (target < min_frame_target) target = min_frame_target;
-    if (cpi->refresh_golden_frame && rc->is_src_frame_alt_ref) {
-      // If there is an active ARF at this location use the minimum
-      // bits on this frame even if it is a constructed arf.
-      // The active maximum quantizer insures that an appropriate
-      // number of bits will be spent if needed for constructed ARFs.
-      target = min_frame_target;
-    }
+  const int min_frame_target =
+      VPXMAX(rc->min_frame_bandwidth, rc->avg_frame_bandwidth >> 5);
+  if (target < min_frame_target) target = min_frame_target;
+  if (cpi->refresh_golden_frame && rc->is_src_frame_alt_ref) {
+    // If there is an active ARF at this location use the minimum
+    // bits on this frame even if it is a constructed arf.
+    // The active maximum quantizer ensures that an appropriate
+    // number of bits will be spent if needed for constructed ARFs.
+    target = min_frame_target;
   }
 
   // Clip the frame target to the maximum allowed value.
@@ -250,20 +245,68 @@
   return target;
 }
 
+// TODO(marpan/jianj): bits_off_target and buffer_level are used in the same
+// way for CBR mode, for the buffering updates below. Look into removing one
+// of these (i.e., bits_off_target).
+// Update the buffer level before encoding with the per-frame-bandwidth.
+static void update_buffer_level_preencode(VP9_COMP *cpi) {
+  RATE_CONTROL *const rc = &cpi->rc;
+  rc->bits_off_target += rc->avg_frame_bandwidth;
+  // Clip the buffer level to the maximum specified buffer size.
+  rc->bits_off_target = VPXMIN(rc->bits_off_target, rc->maximum_buffer_size);
+  rc->buffer_level = rc->bits_off_target;
+}
+
+// Update the buffer level before encoding with the per-frame-bandwidth
+// for SVC. The current and all upper temporal layers are updated, as needed
+// for the layered rate control, which involves cumulative buffer levels for
+// the temporal layers. Allow using the timestamp (pts) delta for the
+// framerate when set_ref_frame_config is used.
+static void update_buffer_level_svc_preencode(VP9_COMP *cpi) {
+  SVC *const svc = &cpi->svc;
+  int i;
+  // Set this to 1 to use timestamp delta for "framerate" under
+  // ref_frame_config usage.
+  int use_timestamp = 1;
+  const int64_t ts_delta =
+      svc->time_stamp_superframe - svc->time_stamp_prev[svc->spatial_layer_id];
+  for (i = svc->temporal_layer_id; i < svc->number_temporal_layers; ++i) {
+    const int layer =
+        LAYER_IDS_TO_IDX(svc->spatial_layer_id, i, svc->number_temporal_layers);
+    LAYER_CONTEXT *const lc = &svc->layer_context[layer];
+    RATE_CONTROL *const lrc = &lc->rc;
+    if (use_timestamp && cpi->svc.use_set_ref_frame_config &&
+        svc->number_temporal_layers == 1 && ts_delta > 0 &&
+        svc->current_superframe > 0) {
+      // TODO(marpan): This may need to be modified for temporal layers.
+      const double framerate_pts = 10000000.0 / ts_delta;
+      lrc->bits_off_target += (int)(lc->target_bandwidth / framerate_pts);
+    } else {
+      lrc->bits_off_target += (int)(lc->target_bandwidth / lc->framerate);
+    }
+    // Clip buffer level to maximum buffer size for the layer.
+    lrc->bits_off_target =
+        VPXMIN(lrc->bits_off_target, lrc->maximum_buffer_size);
+    lrc->buffer_level = lrc->bits_off_target;
+    if (i == svc->temporal_layer_id) {
+      cpi->rc.bits_off_target = lrc->bits_off_target;
+      cpi->rc.buffer_level = lrc->buffer_level;
+    }
+  }
+}
+
 // Update the buffer level for higher temporal layers, given the encoded current
 // temporal layer.
-static void update_layer_buffer_level(SVC *svc, int encoded_frame_size) {
+static void update_layer_buffer_level_postencode(SVC *svc,
+                                                 int encoded_frame_size) {
   int i = 0;
-  int current_temporal_layer = svc->temporal_layer_id;
+  const int current_temporal_layer = svc->temporal_layer_id;
   for (i = current_temporal_layer + 1; i < svc->number_temporal_layers; ++i) {
     const int layer =
         LAYER_IDS_TO_IDX(svc->spatial_layer_id, i, svc->number_temporal_layers);
     LAYER_CONTEXT *lc = &svc->layer_context[layer];
     RATE_CONTROL *lrc = &lc->rc;
-    int bits_off_for_this_layer =
-        (int)(lc->target_bandwidth / lc->framerate - encoded_frame_size);
-    lrc->bits_off_target += bits_off_for_this_layer;
-
+    lrc->bits_off_target -= encoded_frame_size;
     // Clip buffer level to maximum buffer size for the layer.
     lrc->bits_off_target =
         VPXMIN(lrc->bits_off_target, lrc->maximum_buffer_size);
@@ -271,21 +314,13 @@
   }
 }
 
-// Update the buffer level: leaky bucket model.
-static void update_buffer_level(VP9_COMP *cpi, int encoded_frame_size) {
-  const VP9_COMMON *const cm = &cpi->common;
+// Update the buffer level after encoding with encoded frame size.
+static void update_buffer_level_postencode(VP9_COMP *cpi,
+                                           int encoded_frame_size) {
   RATE_CONTROL *const rc = &cpi->rc;
-
-  // Non-viewable frames are a special case and are treated as pure overhead.
-  if (!cm->show_frame) {
-    rc->bits_off_target -= encoded_frame_size;
-  } else {
-    rc->bits_off_target += rc->avg_frame_bandwidth - encoded_frame_size;
-  }
-
+  rc->bits_off_target -= encoded_frame_size;
   // Clip the buffer level to the maximum specified buffer size.
   rc->bits_off_target = VPXMIN(rc->bits_off_target, rc->maximum_buffer_size);
-
   // For screen-content mode, and if frame-dropper is off, don't let buffer
   // level go below threshold, given here as -rc->maximum_buffer_size.
   if (cpi->oxcf.content == VP9E_CONTENT_SCREEN &&
@@ -295,7 +330,7 @@
   rc->buffer_level = rc->bits_off_target;
 
   if (is_one_pass_cbr_svc(cpi)) {
-    update_layer_buffer_level(&cpi->svc, encoded_frame_size);
+    update_layer_buffer_level_postencode(&cpi->svc, encoded_frame_size);
   }
 }
 
@@ -358,6 +393,9 @@
   rc->high_source_sad = 0;
   rc->reset_high_source_sad = 0;
   rc->high_source_sad_lagindex = -1;
+  rc->high_num_blocks_with_motion = 0;
+  rc->hybrid_intra_scene_change = 0;
+  rc->re_encode_maxq_scene_change = 0;
   rc->alt_ref_gf_group = 0;
   rc->last_frame_is_src_altref = 0;
   rc->fac_active_worst_inter = 150;
@@ -380,6 +418,7 @@
 
   for (i = 0; i < RATE_FACTOR_LEVELS; ++i) {
     rc->rate_correction_factors[i] = 1.0;
+    rc->damped_adjustment[i] = 0;
   }
 
   rc->min_gf_interval = oxcf->min_gf_interval;
@@ -391,27 +430,115 @@
     rc->max_gf_interval = vp9_rc_get_default_max_gf_interval(
         oxcf->init_framerate, rc->min_gf_interval);
   rc->baseline_gf_interval = (rc->min_gf_interval + rc->max_gf_interval) / 2;
+
+  rc->force_max_q = 0;
+  rc->last_post_encode_dropped_scene_change = 0;
+  rc->use_post_encode_drop = 0;
+  rc->ext_use_post_encode_drop = 0;
+  rc->arf_active_best_quality_adjustment_factor = 1.0;
+  rc->arf_increase_active_best_quality = 0;
+  rc->preserve_arf_as_gld = 0;
+  rc->preserve_next_arf_as_gld = 0;
+  rc->show_arf_as_gld = 0;
 }
 
-int vp9_rc_drop_frame(VP9_COMP *cpi) {
+static int check_buffer_above_thresh(VP9_COMP *cpi, int drop_mark) {
+  SVC *svc = &cpi->svc;
+  if (!cpi->use_svc || cpi->svc.framedrop_mode != FULL_SUPERFRAME_DROP) {
+    RATE_CONTROL *const rc = &cpi->rc;
+    return (rc->buffer_level > drop_mark);
+  } else {
+    int i;
+    // For SVC in FULL_SUPERFRAME_DROP mode: the condition on the
+    // buffer (if it's above threshold, so no drop) is checked on the current
+    // and upper spatial layers. If any spatial layer is not above threshold
+    // then we return 0.
+    for (i = svc->spatial_layer_id; i < svc->number_spatial_layers; ++i) {
+      const int layer = LAYER_IDS_TO_IDX(i, svc->temporal_layer_id,
+                                         svc->number_temporal_layers);
+      LAYER_CONTEXT *lc = &svc->layer_context[layer];
+      RATE_CONTROL *lrc = &lc->rc;
+      // Exclude check for layer whose bitrate is 0.
+      if (lc->target_bandwidth > 0) {
+        const int drop_mark_layer = (int)(cpi->svc.framedrop_thresh[i] *
+                                          lrc->optimal_buffer_level / 100);
+        if (!(lrc->buffer_level > drop_mark_layer)) return 0;
+      }
+    }
+    return 1;
+  }
+}
+
+static int check_buffer_below_thresh(VP9_COMP *cpi, int drop_mark) {
+  SVC *svc = &cpi->svc;
+  if (!cpi->use_svc || cpi->svc.framedrop_mode == LAYER_DROP) {
+    RATE_CONTROL *const rc = &cpi->rc;
+    return (rc->buffer_level <= drop_mark);
+  } else {
+    int i;
+    // For SVC in the constrained framedrop modes (svc->framedrop_mode =
+    // CONSTRAINED_LAYER_DROP or FULL_SUPERFRAME_DROP): the condition on the
+    // buffer (if it's below threshold, so drop the frame) is checked on the
+    // current and upper spatial layers. In FULL_SUPERFRAME_DROP mode, if any
+    // spatial layer is <= threshold, then we return 1 (drop).
+    for (i = svc->spatial_layer_id; i < svc->number_spatial_layers; ++i) {
+      const int layer = LAYER_IDS_TO_IDX(i, svc->temporal_layer_id,
+                                         svc->number_temporal_layers);
+      LAYER_CONTEXT *lc = &svc->layer_context[layer];
+      RATE_CONTROL *lrc = &lc->rc;
+      // Exclude check for layer whose bitrate is 0.
+      if (lc->target_bandwidth > 0) {
+        const int drop_mark_layer = (int)(cpi->svc.framedrop_thresh[i] *
+                                          lrc->optimal_buffer_level / 100);
+        if (cpi->svc.framedrop_mode == FULL_SUPERFRAME_DROP) {
+          if (lrc->buffer_level <= drop_mark_layer) return 1;
+        } else {
+          if (!(lrc->buffer_level <= drop_mark_layer)) return 0;
+        }
+      }
+    }
+    if (cpi->svc.framedrop_mode == FULL_SUPERFRAME_DROP)
+      return 0;
+    else
+      return 1;
+  }
+}
+
+int vp9_test_drop(VP9_COMP *cpi) {
   const VP9EncoderConfig *oxcf = &cpi->oxcf;
   RATE_CONTROL *const rc = &cpi->rc;
-  if (!oxcf->drop_frames_water_mark ||
-      (is_one_pass_cbr_svc(cpi) &&
-       cpi->svc.spatial_layer_id > cpi->svc.first_spatial_layer_to_encode)) {
+  SVC *svc = &cpi->svc;
+  int drop_frames_water_mark = oxcf->drop_frames_water_mark;
+  if (cpi->use_svc) {
+    // If we have dropped max_consec_drop frames, then we don't
+    // drop this spatial layer, and we reset the counter to 0.
+    if (svc->drop_count[svc->spatial_layer_id] == svc->max_consec_drop) {
+      svc->drop_count[svc->spatial_layer_id] = 0;
+      return 0;
+    } else {
+      drop_frames_water_mark = svc->framedrop_thresh[svc->spatial_layer_id];
+    }
+  }
+  if (!drop_frames_water_mark ||
+      (svc->spatial_layer_id > 0 &&
+       svc->framedrop_mode == FULL_SUPERFRAME_DROP)) {
     return 0;
   } else {
-    if (rc->buffer_level < 0) {
+    if ((rc->buffer_level < 0 && svc->framedrop_mode != FULL_SUPERFRAME_DROP) ||
+        (check_buffer_below_thresh(cpi, -1) &&
+         svc->framedrop_mode == FULL_SUPERFRAME_DROP)) {
       // Always drop if buffer is below 0.
       return 1;
     } else {
       // If buffer is below drop_mark, for now just drop every other frame
       // (starting with the next frame) until it increases back over drop_mark.
       int drop_mark =
-          (int)(oxcf->drop_frames_water_mark * rc->optimal_buffer_level / 100);
-      if ((rc->buffer_level > drop_mark) && (rc->decimation_factor > 0)) {
+          (int)(drop_frames_water_mark * rc->optimal_buffer_level / 100);
+      if (check_buffer_above_thresh(cpi, drop_mark) &&
+          (rc->decimation_factor > 0)) {
         --rc->decimation_factor;
-      } else if (rc->buffer_level <= drop_mark && rc->decimation_factor == 0) {
+      } else if (check_buffer_below_thresh(cpi, drop_mark) &&
+                 rc->decimation_factor == 0) {
         rc->decimation_factor = 1;
       }
       if (rc->decimation_factor > 0) {
@@ -430,11 +557,134 @@
   }
 }
 
+int post_encode_drop_cbr(VP9_COMP *cpi, size_t *size) {
+  size_t frame_size = *size << 3;
+  int64_t new_buffer_level =
+      cpi->rc.buffer_level + cpi->rc.avg_frame_bandwidth - (int64_t)frame_size;
+
+  // For now we drop the frame if the new buffer level (given the encoded
+  // frame size) goes below 0.
+  if (new_buffer_level < 0) {
+    *size = 0;
+    vp9_rc_postencode_update_drop_frame(cpi);
+    // Update flag to use for next frame.
+    if (cpi->rc.high_source_sad ||
+        (cpi->use_svc && cpi->svc.high_source_sad_superframe))
+      cpi->rc.last_post_encode_dropped_scene_change = 1;
+    // Force max_q on the next frame.
+    cpi->rc.force_max_q = 1;
+    cpi->rc.avg_frame_qindex[INTER_FRAME] = cpi->rc.worst_quality;
+    cpi->last_frame_dropped = 1;
+    cpi->ext_refresh_frame_flags_pending = 0;
+    if (cpi->use_svc) {
+      SVC *svc = &cpi->svc;
+      int sl = 0;
+      int tl = 0;
+      svc->last_layer_dropped[svc->spatial_layer_id] = 1;
+      svc->drop_spatial_layer[svc->spatial_layer_id] = 1;
+      svc->drop_count[svc->spatial_layer_id]++;
+      svc->skip_enhancement_layer = 1;
+      // Post-encode drop is only checked on the base spatial layer;
+      // for now, if max-q is set on the base layer we force it on all layers.
+      for (sl = 0; sl < svc->number_spatial_layers; ++sl) {
+        for (tl = 0; tl < svc->number_temporal_layers; ++tl) {
+          const int layer =
+              LAYER_IDS_TO_IDX(sl, tl, svc->number_temporal_layers);
+          LAYER_CONTEXT *lc = &svc->layer_context[layer];
+          RATE_CONTROL *lrc = &lc->rc;
+          lrc->force_max_q = 1;
+          lrc->avg_frame_qindex[INTER_FRAME] = cpi->rc.worst_quality;
+        }
+      }
+    }
+    return 1;
+  }
+
+  cpi->rc.force_max_q = 0;
+  cpi->rc.last_post_encode_dropped_scene_change = 0;
+  return 0;
+}
+
+int vp9_rc_drop_frame(VP9_COMP *cpi) {
+  SVC *svc = &cpi->svc;
+  int svc_prev_layer_dropped = 0;
+  // In the constrained or full_superframe framedrop modes for svc
+  // (framedrop_mode is neither LAYER_DROP nor CONSTRAINED_FROM_ABOVE_DROP),
+  // if the previous spatial layer was dropped, drop the current spatial layer.
+  if (cpi->use_svc && svc->spatial_layer_id > 0 &&
+      svc->drop_spatial_layer[svc->spatial_layer_id - 1])
+    svc_prev_layer_dropped = 1;
+  if ((svc_prev_layer_dropped && svc->framedrop_mode != LAYER_DROP &&
+       svc->framedrop_mode != CONSTRAINED_FROM_ABOVE_DROP) ||
+      svc->force_drop_constrained_from_above[svc->spatial_layer_id] ||
+      vp9_test_drop(cpi)) {
+    vp9_rc_postencode_update_drop_frame(cpi);
+    cpi->ext_refresh_frame_flags_pending = 0;
+    cpi->last_frame_dropped = 1;
+    if (cpi->use_svc) {
+      svc->last_layer_dropped[svc->spatial_layer_id] = 1;
+      svc->drop_spatial_layer[svc->spatial_layer_id] = 1;
+      svc->drop_count[svc->spatial_layer_id]++;
+      svc->skip_enhancement_layer = 1;
+      if (svc->framedrop_mode == LAYER_DROP ||
+          (svc->framedrop_mode == CONSTRAINED_FROM_ABOVE_DROP &&
+           svc->force_drop_constrained_from_above[svc->number_spatial_layers -
+                                                  1] == 0) ||
+          svc->drop_spatial_layer[0] == 0) {
+        // For the case of the constrained drop mode where the full superframe
+        // is dropped, we don't increment the svc frame counters.
+        // In particular, the temporal layer counter (which is incremented in
+        // vp9_inc_frame_in_layer()) won't be incremented, so on a dropped
+        // frame we try the same temporal_layer_id on the next incoming frame.
+        // This is to avoid an issue with temporal alignment with full
+        // superframe dropping.
+        vp9_inc_frame_in_layer(cpi);
+      }
+      if (svc->spatial_layer_id == svc->number_spatial_layers - 1) {
+        int i;
+        int all_layers_drop = 1;
+        for (i = 0; i < svc->spatial_layer_id; i++) {
+          if (svc->drop_spatial_layer[i] == 0) {
+            all_layers_drop = 0;
+            break;
+          }
+        }
+        if (all_layers_drop == 1) svc->skip_enhancement_layer = 0;
+      }
+    }
+    return 1;
+  }
+  return 0;
+}
+
+static int adjust_q_cbr(const VP9_COMP *cpi, int q) {
+  // This makes sure q is between oscillating Qs to prevent resonance.
+  if (!cpi->rc.reset_high_source_sad &&
+      (!cpi->oxcf.gf_cbr_boost_pct ||
+       !(cpi->refresh_alt_ref_frame || cpi->refresh_golden_frame)) &&
+      (cpi->rc.rc_1_frame * cpi->rc.rc_2_frame == -1) &&
+      cpi->rc.q_1_frame != cpi->rc.q_2_frame) {
+    int qclamp = clamp(q, VPXMIN(cpi->rc.q_1_frame, cpi->rc.q_2_frame),
+                       VPXMAX(cpi->rc.q_1_frame, cpi->rc.q_2_frame));
+    // If the previous frame had overshoot and the current q needs to increase
+    // above the clamped value, reduce the clamp for faster reaction to
+    // overshoot.
+    if (cpi->rc.rc_1_frame == -1 && q > qclamp)
+      q = (q + qclamp) >> 1;
+    else
+      q = qclamp;
+  }
+  if (cpi->oxcf.content == VP9E_CONTENT_SCREEN)
+    vp9_cyclic_refresh_limit_q(cpi, &q);
+  return VPXMAX(VPXMIN(q, cpi->rc.worst_quality), cpi->rc.best_quality);
+}
+
 static double get_rate_correction_factor(const VP9_COMP *cpi) {
   const RATE_CONTROL *const rc = &cpi->rc;
+  const VP9_COMMON *const cm = &cpi->common;
   double rcf;
 
-  if (cpi->common.frame_type == KEY_FRAME) {
+  if (frame_is_intra_only(cm)) {
     rcf = rc->rate_correction_factors[KF_STD];
   } else if (cpi->oxcf.pass == 2) {
     RATE_FACTOR_LEVEL rf_lvl =
@@ -454,13 +704,14 @@
 
 static void set_rate_correction_factor(VP9_COMP *cpi, double factor) {
   RATE_CONTROL *const rc = &cpi->rc;
+  const VP9_COMMON *const cm = &cpi->common;
 
   // Normalize RCF to account for the size-dependent scaling factor.
   factor /= rcf_mult[cpi->rc.frame_size_selector];
 
   factor = fclamp(factor, MIN_BPB_FACTOR, MAX_BPB_FACTOR);
 
-  if (cpi->common.frame_type == KEY_FRAME) {
+  if (frame_is_intra_only(cm)) {
     rc->rate_correction_factors[KF_STD] = factor;
   } else if (cpi->oxcf.pass == 2) {
     RATE_FACTOR_LEVEL rf_lvl =
@@ -481,6 +732,8 @@
   int correction_factor = 100;
   double rate_correction_factor = get_rate_correction_factor(cpi);
   double adjustment_limit;
+  RATE_FACTOR_LEVEL rf_lvl =
+      cpi->twopass.gf_group.rf_level[cpi->twopass.gf_group.index];
 
   int projected_size_based_on_q = 0;
 
@@ -497,8 +750,9 @@
     projected_size_based_on_q =
         vp9_cyclic_refresh_estimate_bits_at_q(cpi, rate_correction_factor);
   } else {
+    FRAME_TYPE frame_type = cm->intra_only ? KEY_FRAME : cm->frame_type;
     projected_size_based_on_q =
-        vp9_estimate_bits_at_q(cpi->common.frame_type, cm->base_qindex, cm->MBs,
+        vp9_estimate_bits_at_q(frame_type, cm->base_qindex, cm->MBs,
                                rate_correction_factor, cm->bit_depth);
   }
   // Work out a size correction factor.
@@ -506,10 +760,16 @@
     correction_factor = (int)((100 * (int64_t)cpi->rc.projected_frame_size) /
                               projected_size_based_on_q);
 
-  // More heavily damped adjustment used if we have been oscillating either side
-  // of target.
-  adjustment_limit =
-      0.25 + 0.5 * VPXMIN(1, fabs(log10(0.01 * correction_factor)));
+  // Do not use the damped adjustment for the first frame of each frame type.
+  if (!cpi->rc.damped_adjustment[rf_lvl]) {
+    adjustment_limit = 1.0;
+    cpi->rc.damped_adjustment[rf_lvl] = 1;
+  } else {
+    // More heavily damped adjustment used if we have been oscillating either
+    // side of target.
+    adjustment_limit =
+        0.25 + 0.5 * VPXMIN(1, fabs(log10(0.01 * correction_factor)));
+  }
 
   cpi->rc.q_2_frame = cpi->rc.q_1_frame;
   cpi->rc.q_1_frame = cm->base_qindex;
@@ -566,14 +826,14 @@
   i = active_best_quality;
 
   do {
-    if (cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ && cm->seg.enabled &&
-        cr->apply_cyclic_refresh &&
+    if (cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ && cr->apply_cyclic_refresh &&
         (!cpi->oxcf.gf_cbr_boost_pct || !cpi->refresh_golden_frame)) {
       bits_per_mb_at_this_q =
           (int)vp9_cyclic_refresh_rc_bits_per_mb(cpi, i, correction_factor);
     } else {
+      FRAME_TYPE frame_type = cm->intra_only ? KEY_FRAME : cm->frame_type;
       bits_per_mb_at_this_q = (int)vp9_rc_bits_per_mb(
-          cm->frame_type, i, correction_factor, cm->bit_depth);
+          frame_type, i, correction_factor, cm->bit_depth);
     }
 
     if (bits_per_mb_at_this_q <= target_bits_per_mb) {
@@ -588,16 +848,9 @@
     }
   } while (++i <= active_worst_quality);
 
-  // In CBR mode, this makes sure q is between oscillating Qs to prevent
-  // resonance.
-  if (cpi->oxcf.rc_mode == VPX_CBR && !cpi->rc.reset_high_source_sad &&
-      (!cpi->oxcf.gf_cbr_boost_pct ||
-       !(cpi->refresh_alt_ref_frame || cpi->refresh_golden_frame)) &&
-      (cpi->rc.rc_1_frame * cpi->rc.rc_2_frame == -1) &&
-      cpi->rc.q_1_frame != cpi->rc.q_2_frame) {
-    q = clamp(q, VPXMIN(cpi->rc.q_1_frame, cpi->rc.q_2_frame),
-              VPXMAX(cpi->rc.q_1_frame, cpi->rc.q_2_frame));
-  }
+  // Adjustment to q for CBR mode.
+  if (cpi->oxcf.rc_mode == VPX_CBR) return adjust_q_cbr(cpi, q);
+
   return q;
 }
 
@@ -626,13 +879,19 @@
                             kf_low_motion_minq, kf_high_motion_minq);
 }
 
-static int get_gf_active_quality(const RATE_CONTROL *const rc, int q,
+static int get_gf_active_quality(const VP9_COMP *const cpi, int q,
                                  vpx_bit_depth_t bit_depth) {
+  const GF_GROUP *const gf_group = &cpi->twopass.gf_group;
+  const RATE_CONTROL *const rc = &cpi->rc;
+
   int *arfgf_low_motion_minq;
   int *arfgf_high_motion_minq;
+  const int gfu_boost = cpi->multi_layer_arf
+                            ? gf_group->gfu_boost[gf_group->index]
+                            : rc->gfu_boost;
   ASSIGN_MINQ_TABLE(bit_depth, arfgf_low_motion_minq);
   ASSIGN_MINQ_TABLE(bit_depth, arfgf_high_motion_minq);
-  return get_active_quality(q, rc->gfu_boost, gf_low, gf_high,
+  return get_active_quality(q, gfu_boost, gf_low, gf_high,
                             arfgf_low_motion_minq, arfgf_high_motion_minq);
 }
 
@@ -677,7 +936,7 @@
   int active_worst_quality;
   int ambient_qp;
   unsigned int num_frames_weight_key = 5 * cpi->svc.number_temporal_layers;
-  if (cm->frame_type == KEY_FRAME || rc->reset_high_source_sad)
+  if (frame_is_intra_only(cm) || rc->reset_high_source_sad || rc->force_max_q)
     return rc->worst_quality;
   // For ambient_qp we use minimum of avg_frame_qindex[KEY_FRAME/INTER_FRAME]
   // for the first few frames following key frame. These are both initialized
@@ -688,6 +947,7 @@
                    ? VPXMIN(rc->avg_frame_qindex[INTER_FRAME],
                             rc->avg_frame_qindex[KEY_FRAME])
                    : rc->avg_frame_qindex[INTER_FRAME];
+  active_worst_quality = VPXMIN(rc->worst_quality, (ambient_qp * 5) >> 2);
   // For SVC if the current base spatial layer was key frame, use the QP from
   // that base layer for ambient_qp.
   if (cpi->use_svc && cpi->svc.spatial_layer_id > 0) {
@@ -697,13 +957,15 @@
     if (lc->is_key_frame) {
       const RATE_CONTROL *lrc = &lc->rc;
       ambient_qp = VPXMIN(ambient_qp, lrc->last_q[KEY_FRAME]);
+      active_worst_quality = VPXMIN(rc->worst_quality, (ambient_qp * 9) >> 3);
     }
   }
-  active_worst_quality = VPXMIN(rc->worst_quality, ambient_qp * 5 >> 2);
   if (rc->buffer_level > rc->optimal_buffer_level) {
     // Adjust down.
-    // Maximum limit for down adjustment, ~30%.
+    // Maximum limit for down adjustment ~30%; make it lower for screen content.
     int max_adjustment_down = active_worst_quality / 3;
+    if (cpi->oxcf.content == VP9E_CONTENT_SCREEN)
+      max_adjustment_down = active_worst_quality >> 3;
     if (max_adjustment_down) {
       buff_lvl_step = ((rc->maximum_buffer_size - rc->optimal_buffer_level) /
                        max_adjustment_down);
@@ -772,6 +1034,7 @@
           vp9_compute_qdelta(rc, q_val, q_val * q_adj_factor, cm->bit_depth);
     }
   } else if (!rc->is_src_frame_alt_ref && !cpi->use_svc &&
+             cpi->oxcf.gf_cbr_boost_pct &&
              (cpi->refresh_golden_frame || cpi->refresh_alt_ref_frame)) {
     // Use the lower of active_worst_quality and recent
     // average Q as basis for GF/ARF best Q limit unless last frame was
@@ -782,7 +1045,7 @@
     } else {
       q = active_worst_quality;
     }
-    active_best_quality = get_gf_active_quality(rc, q, cm->bit_depth);
+    active_best_quality = get_gf_active_quality(cpi, q, cm->bit_depth);
   } else {
     // Use the lower of active_worst_quality and recent/average Q.
     if (cm->current_video_frame > 1) {
@@ -807,21 +1070,8 @@
   *top_index = active_worst_quality;
   *bottom_index = active_best_quality;
 
-#if LIMIT_QRANGE_FOR_ALTREF_AND_KEY
-  // Limit Q range for the adaptive loop.
-  if (cm->frame_type == KEY_FRAME && !rc->this_key_frame_forced &&
-      !(cm->current_video_frame == 0)) {
-    int qdelta = 0;
-    vpx_clear_system_state();
-    qdelta = vp9_compute_qdelta_by_rate(
-        &cpi->rc, cm->frame_type, active_worst_quality, 2.0, cm->bit_depth);
-    *top_index = active_worst_quality + qdelta;
-    *top_index = (*top_index > *bottom_index) ? *top_index : *bottom_index;
-  }
-#endif
-
   // Special case code to try and match quality with forced key frames
-  if (cm->frame_type == KEY_FRAME && rc->this_key_frame_forced) {
+  if (frame_is_intra_only(cm) && rc->this_key_frame_forced) {
     q = rc->last_boosted_qindex;
   } else {
     q = vp9_rc_regulate_q(cpi, rc->this_frame_target, active_best_quality,
@@ -834,6 +1084,7 @@
         q = *top_index;
     }
   }
+
   assert(*top_index <= rc->worst_quality && *top_index >= rc->best_quality);
   assert(*bottom_index <= rc->worst_quality &&
          *bottom_index >= rc->best_quality);
@@ -942,7 +1193,7 @@
     if (oxcf->rc_mode == VPX_CQ) {
       if (q < cq_level) q = cq_level;
 
-      active_best_quality = get_gf_active_quality(rc, q, cm->bit_depth);
+      active_best_quality = get_gf_active_quality(cpi, q, cm->bit_depth);
 
       // Constrained quality use slightly lower active best.
       active_best_quality = active_best_quality * 15 / 16;
@@ -957,7 +1208,7 @@
         delta_qindex = vp9_compute_qdelta(rc, q, q * 0.50, cm->bit_depth);
       active_best_quality = VPXMAX(qindex + delta_qindex, rc->best_quality);
     } else {
-      active_best_quality = get_gf_active_quality(rc, q, cm->bit_depth);
+      active_best_quality = get_gf_active_quality(cpi, q, cm->bit_depth);
     }
   } else {
     if (oxcf->rc_mode == VPX_Q) {
@@ -1048,19 +1299,122 @@
     1.75,  // GF_ARF_STD
     2.00,  // KF_STD
   };
-  static const FRAME_TYPE frame_type[RATE_FACTOR_LEVELS] = {
-    INTER_FRAME, INTER_FRAME, INTER_FRAME, INTER_FRAME, KEY_FRAME
-  };
   const VP9_COMMON *const cm = &cpi->common;
-  int qdelta =
-      vp9_compute_qdelta_by_rate(&cpi->rc, frame_type[rf_level], q,
-                                 rate_factor_deltas[rf_level], cm->bit_depth);
+
+  int qdelta = vp9_compute_qdelta_by_rate(
+      &cpi->rc, cm->frame_type, q, rate_factor_deltas[rf_level], cm->bit_depth);
   return qdelta;
 }
 
 #define STATIC_MOTION_THRESH 95
+
+static void pick_kf_q_bound_two_pass(const VP9_COMP *cpi, int *bottom_index,
+                                     int *top_index) {
+  const VP9_COMMON *const cm = &cpi->common;
+  const RATE_CONTROL *const rc = &cpi->rc;
+  int active_best_quality;
+  int active_worst_quality = cpi->twopass.active_worst_quality;
+
+  if (rc->this_key_frame_forced) {
+    // Handle the special case for key frames forced when we have reached
+    // the maximum key frame interval. Here force the Q to a range
+    // based on the ambient Q to reduce the risk of popping.
+    double last_boosted_q;
+    int delta_qindex;
+    int qindex;
+
+    if (cpi->twopass.last_kfgroup_zeromotion_pct >= STATIC_MOTION_THRESH) {
+      qindex = VPXMIN(rc->last_kf_qindex, rc->last_boosted_qindex);
+      active_best_quality = qindex;
+      last_boosted_q = vp9_convert_qindex_to_q(qindex, cm->bit_depth);
+      delta_qindex = vp9_compute_qdelta(rc, last_boosted_q,
+                                        last_boosted_q * 1.25, cm->bit_depth);
+      active_worst_quality =
+          VPXMIN(qindex + delta_qindex, active_worst_quality);
+    } else {
+      qindex = rc->last_boosted_qindex;
+      last_boosted_q = vp9_convert_qindex_to_q(qindex, cm->bit_depth);
+      delta_qindex = vp9_compute_qdelta(rc, last_boosted_q,
+                                        last_boosted_q * 0.75, cm->bit_depth);
+      active_best_quality = VPXMAX(qindex + delta_qindex, rc->best_quality);
+    }
+  } else {
+    // Not forced keyframe.
+    double q_adj_factor = 1.0;
+    double q_val;
+    // Baseline value derived from cpi->active_worst_quality and kf boost.
+    active_best_quality =
+        get_kf_active_quality(rc, active_worst_quality, cm->bit_depth);
+    if (cpi->twopass.kf_zeromotion_pct >= STATIC_KF_GROUP_THRESH) {
+      active_best_quality /= 4;
+    }
+
+    // Don't allow the active min to be lossless (q0) unless the max q
+    // already indicates lossless.
+    active_best_quality =
+        VPXMIN(active_worst_quality, VPXMAX(1, active_best_quality));
+
+    // Allow somewhat lower kf minq with small image formats.
+    if ((cm->width * cm->height) <= (352 * 288)) {
+      q_adj_factor -= 0.25;
+    }
+
+    // Make a further adjustment based on the kf zero motion measure.
+    q_adj_factor += 0.05 - (0.001 * (double)cpi->twopass.kf_zeromotion_pct);
+
+    // Convert the adjustment factor to a qindex delta
+    // on active_best_quality.
+    q_val = vp9_convert_qindex_to_q(active_best_quality, cm->bit_depth);
+    active_best_quality +=
+        vp9_compute_qdelta(rc, q_val, q_val * q_adj_factor, cm->bit_depth);
+  }
+  *top_index = active_worst_quality;
+  *bottom_index = active_best_quality;
+}
+
+static int rc_constant_q(const VP9_COMP *cpi, int *bottom_index, int *top_index,
+                         int gf_group_index) {
+  const VP9_COMMON *const cm = &cpi->common;
+  const RATE_CONTROL *const rc = &cpi->rc;
+  const VP9EncoderConfig *const oxcf = &cpi->oxcf;
+  const GF_GROUP *gf_group = &cpi->twopass.gf_group;
+  const int is_intra_frame = frame_is_intra_only(cm);
+
+  const int cq_level = get_active_cq_level_two_pass(&cpi->twopass, rc, oxcf);
+
+  int q = cq_level;
+  int active_best_quality = cq_level;
+  int active_worst_quality = cq_level;
+
+  // Key frame qp decision
+  if (is_intra_frame && rc->frames_to_key > 1)
+    pick_kf_q_bound_two_pass(cpi, &active_best_quality, &active_worst_quality);
+
+  // ARF / GF qp decision
+  if (!is_intra_frame && !rc->is_src_frame_alt_ref &&
+      cpi->refresh_alt_ref_frame) {
+    active_best_quality = get_gf_active_quality(cpi, q, cm->bit_depth);
+
+    // Modify best quality for second level arfs. For mode VPX_Q this
+    // becomes the baseline frame q.
+    if (gf_group->rf_level[gf_group_index] == GF_ARF_LOW) {
+      const int layer_depth = gf_group->layer_depth[gf_group_index];
+      // Linearly fit the frame q depending on the layer depth index from
+      // the base layer ARF.
+      active_best_quality = ((layer_depth - 1) * cq_level +
+                             active_best_quality + layer_depth / 2) /
+                            layer_depth;
+    }
+  }
+
+  q = active_best_quality;
+  *top_index = active_worst_quality;
+  *bottom_index = active_best_quality;
+  return q;
+}
+
 static int rc_pick_q_and_bounds_two_pass(const VP9_COMP *cpi, int *bottom_index,
-                                         int *top_index) {
+                                         int *top_index, int gf_group_index) {
   const VP9_COMMON *const cm = &cpi->common;
   const RATE_CONTROL *const rc = &cpi->rc;
   const VP9EncoderConfig *const oxcf = &cpi->oxcf;
@@ -1070,59 +1424,20 @@
   int active_worst_quality = cpi->twopass.active_worst_quality;
   int q;
   int *inter_minq;
+  int arf_active_best_quality_hl;
+  int *arfgf_high_motion_minq, *arfgf_low_motion_minq;
+  const int boost_frame =
+      !rc->is_src_frame_alt_ref &&
+      (cpi->refresh_golden_frame || cpi->refresh_alt_ref_frame);
+
   ASSIGN_MINQ_TABLE(cm->bit_depth, inter_minq);
 
-  if (frame_is_intra_only(cm) || vp9_is_upper_layer_key_frame(cpi)) {
-    // Handle the special case for key frames forced when we have reached
-    // the maximum key frame interval. Here force the Q to a range
-    // based on the ambient Q to reduce the risk of popping.
-    if (rc->this_key_frame_forced) {
-      double last_boosted_q;
-      int delta_qindex;
-      int qindex;
+  if (oxcf->rc_mode == VPX_Q)
+    return rc_constant_q(cpi, bottom_index, top_index, gf_group_index);
 
-      if (cpi->twopass.last_kfgroup_zeromotion_pct >= STATIC_MOTION_THRESH) {
-        qindex = VPXMIN(rc->last_kf_qindex, rc->last_boosted_qindex);
-        active_best_quality = qindex;
-        last_boosted_q = vp9_convert_qindex_to_q(qindex, cm->bit_depth);
-        delta_qindex = vp9_compute_qdelta(rc, last_boosted_q,
-                                          last_boosted_q * 1.25, cm->bit_depth);
-        active_worst_quality =
-            VPXMIN(qindex + delta_qindex, active_worst_quality);
-      } else {
-        qindex = rc->last_boosted_qindex;
-        last_boosted_q = vp9_convert_qindex_to_q(qindex, cm->bit_depth);
-        delta_qindex = vp9_compute_qdelta(rc, last_boosted_q,
-                                          last_boosted_q * 0.75, cm->bit_depth);
-        active_best_quality = VPXMAX(qindex + delta_qindex, rc->best_quality);
-      }
-    } else {
-      // Not forced keyframe.
-      double q_adj_factor = 1.0;
-      double q_val;
-      // Baseline value derived from cpi->active_worst_quality and kf boost.
-      active_best_quality =
-          get_kf_active_quality(rc, active_worst_quality, cm->bit_depth);
-      if (cpi->twopass.kf_zeromotion_pct >= STATIC_KF_GROUP_THRESH) {
-        active_best_quality /= 4;
-      }
-
-      // Allow somewhat lower kf minq with small image formats.
-      if ((cm->width * cm->height) <= (352 * 288)) {
-        q_adj_factor -= 0.25;
-      }
-
-      // Make a further adjustment based on the kf zero motion measure.
-      q_adj_factor += 0.05 - (0.001 * (double)cpi->twopass.kf_zeromotion_pct);
-
-      // Convert the adjustment factor to a qindex delta
-      // on active_best_quality.
-      q_val = vp9_convert_qindex_to_q(active_best_quality, cm->bit_depth);
-      active_best_quality +=
-          vp9_compute_qdelta(rc, q_val, q_val * q_adj_factor, cm->bit_depth);
-    }
-  } else if (!rc->is_src_frame_alt_ref &&
-             (cpi->refresh_golden_frame || cpi->refresh_alt_ref_frame)) {
+  if (frame_is_intra_only(cm)) {
+    pick_kf_q_bound_two_pass(cpi, &active_best_quality, &active_worst_quality);
+  } else if (boost_frame) {
     // Use the lower of active_worst_quality and recent
     // average Q as basis for GF/ARF best Q limit unless last frame was
     // a key frame.
@@ -1135,63 +1450,78 @@
     // For constrained quality dont allow Q less than the cq level
     if (oxcf->rc_mode == VPX_CQ) {
       if (q < cq_level) q = cq_level;
+    }
+    active_best_quality = get_gf_active_quality(cpi, q, cm->bit_depth);
+    arf_active_best_quality_hl = active_best_quality;
 
-      active_best_quality = get_gf_active_quality(rc, q, cm->bit_depth);
+    if (rc->arf_increase_active_best_quality == 1) {
+      ASSIGN_MINQ_TABLE(cm->bit_depth, arfgf_high_motion_minq);
+      arf_active_best_quality_hl = arfgf_high_motion_minq[q];
+    } else if (rc->arf_increase_active_best_quality == -1) {
+      ASSIGN_MINQ_TABLE(cm->bit_depth, arfgf_low_motion_minq);
+      arf_active_best_quality_hl = arfgf_low_motion_minq[q];
+    }
+    active_best_quality =
+        (int)((double)active_best_quality *
+                  rc->arf_active_best_quality_adjustment_factor +
+              (double)arf_active_best_quality_hl *
+                  (1.0 - rc->arf_active_best_quality_adjustment_factor));
 
-      // Constrained quality use slightly lower active best.
-      active_best_quality = active_best_quality * 15 / 16;
-
-    } else if (oxcf->rc_mode == VPX_Q) {
-      if (!cpi->refresh_alt_ref_frame) {
-        active_best_quality = cq_level;
-      } else {
-        active_best_quality = get_gf_active_quality(rc, q, cm->bit_depth);
-
-        // Modify best quality for second level arfs. For mode VPX_Q this
-        // becomes the baseline frame q.
-        if (gf_group->rf_level[gf_group->index] == GF_ARF_LOW)
-          active_best_quality = (active_best_quality + cq_level + 1) / 2;
-      }
-    } else {
-      active_best_quality = get_gf_active_quality(rc, q, cm->bit_depth);
+    // Modify best quality for second level arfs. For mode VPX_Q this
+    // becomes the baseline frame q.
+    if (gf_group->rf_level[gf_group_index] == GF_ARF_LOW) {
+      const int layer_depth = gf_group->layer_depth[gf_group_index];
+      // Linearly fit the frame q depending on the layer depth index from
+      // the base layer ARF.
+      active_best_quality =
+          ((layer_depth - 1) * q + active_best_quality + layer_depth / 2) /
+          layer_depth;
     }
   } else {
-    if (oxcf->rc_mode == VPX_Q) {
-      active_best_quality = cq_level;
-    } else {
-      active_best_quality = inter_minq[active_worst_quality];
+    active_best_quality = inter_minq[active_worst_quality];
 
-      // For the constrained quality mode we don't want
-      // q to fall below the cq level.
-      if ((oxcf->rc_mode == VPX_CQ) && (active_best_quality < cq_level)) {
-        active_best_quality = cq_level;
-      }
+    // For the constrained quality mode we don't want
+    // q to fall below the cq level.
+    if ((oxcf->rc_mode == VPX_CQ) && (active_best_quality < cq_level)) {
+      active_best_quality = cq_level;
     }
   }
 
   // Extension to max or min Q if undershoot or overshoot is outside
   // the permitted range.
-  if (cpi->oxcf.rc_mode != VPX_Q) {
-    if (frame_is_intra_only(cm) ||
-        (!rc->is_src_frame_alt_ref &&
-         (cpi->refresh_golden_frame || cpi->refresh_alt_ref_frame))) {
-      active_best_quality -=
-          (cpi->twopass.extend_minq + cpi->twopass.extend_minq_fast);
-      active_worst_quality += (cpi->twopass.extend_maxq / 2);
-    } else {
-      active_best_quality -=
-          (cpi->twopass.extend_minq + cpi->twopass.extend_minq_fast) / 2;
-      active_worst_quality += cpi->twopass.extend_maxq;
+  if (frame_is_intra_only(cm) || boost_frame) {
+    const int layer_depth = gf_group->layer_depth[gf_group_index];
+    active_best_quality -=
+        (cpi->twopass.extend_minq + cpi->twopass.extend_minq_fast);
+    active_worst_quality += (cpi->twopass.extend_maxq / 2);
+
+    if (gf_group->rf_level[gf_group_index] == GF_ARF_LOW) {
+      assert(layer_depth > 1);
+      active_best_quality =
+          VPXMAX(active_best_quality,
+                 cpi->twopass.last_qindex_of_arf_layer[layer_depth - 1]);
     }
+  } else {
+    const int max_layer_depth = gf_group->max_layer_depth;
+    assert(max_layer_depth > 0);
+
+    active_best_quality -=
+        (cpi->twopass.extend_minq + cpi->twopass.extend_minq_fast) / 2;
+    active_worst_quality += cpi->twopass.extend_maxq;
+
+    // For normal frames do not allow an active minq lower than the q used for
+    // the last boosted frame.
+    active_best_quality =
+        VPXMAX(active_best_quality,
+               cpi->twopass.last_qindex_of_arf_layer[max_layer_depth - 1]);
   }
 
 #if LIMIT_QRANGE_FOR_ALTREF_AND_KEY
   vpx_clear_system_state();
   // Static forced key frames Q restrictions dealt with elsewhere.
-  if (!((frame_is_intra_only(cm) || vp9_is_upper_layer_key_frame(cpi))) ||
-      !rc->this_key_frame_forced ||
-      (cpi->twopass.last_kfgroup_zeromotion_pct < STATIC_MOTION_THRESH)) {
-    int qdelta = vp9_frame_type_qdelta(cpi, gf_group->rf_level[gf_group->index],
+  if (!frame_is_intra_only(cm) || !rc->this_key_frame_forced ||
+      cpi->twopass.last_kfgroup_zeromotion_pct < STATIC_MOTION_THRESH) {
+    int qdelta = vp9_frame_type_qdelta(cpi, gf_group->rf_level[gf_group_index],
                                        active_worst_quality);
     active_worst_quality =
         VPXMAX(active_worst_quality + qdelta, active_best_quality);
@@ -1211,17 +1541,15 @@
   active_worst_quality =
       clamp(active_worst_quality, active_best_quality, rc->worst_quality);
 
-  if (oxcf->rc_mode == VPX_Q) {
-    q = active_best_quality;
-    // Special case code to try and match quality with forced key frames.
-  } else if ((frame_is_intra_only(cm) || vp9_is_upper_layer_key_frame(cpi)) &&
-             rc->this_key_frame_forced) {
+  if (frame_is_intra_only(cm) && rc->this_key_frame_forced) {
     // If static since last kf use better of last boosted and last kf q.
     if (cpi->twopass.last_kfgroup_zeromotion_pct >= STATIC_MOTION_THRESH) {
       q = VPXMIN(rc->last_kf_qindex, rc->last_boosted_qindex);
     } else {
       q = rc->last_boosted_qindex;
     }
+  } else if (frame_is_intra_only(cm) && !rc->this_key_frame_forced) {
+    q = active_best_quality;
   } else {
     q = vp9_rc_regulate_q(cpi, rc->this_frame_target, active_best_quality,
                           active_worst_quality);
@@ -1248,13 +1576,15 @@
 int vp9_rc_pick_q_and_bounds(const VP9_COMP *cpi, int *bottom_index,
                              int *top_index) {
   int q;
+  const int gf_group_index = cpi->twopass.gf_group.index;
   if (cpi->oxcf.pass == 0) {
     if (cpi->oxcf.rc_mode == VPX_CBR)
       q = rc_pick_q_and_bounds_one_pass_cbr(cpi, bottom_index, top_index);
     else
       q = rc_pick_q_and_bounds_one_pass_vbr(cpi, bottom_index, top_index);
   } else {
-    q = rc_pick_q_and_bounds_two_pass(cpi, bottom_index, top_index);
+    q = rc_pick_q_and_bounds_two_pass(cpi, bottom_index, top_index,
+                                      gf_group_index);
   }
   if (cpi->sf.use_nonrd_pick_mode) {
     if (cpi->sf.force_frame_boost == 1) q -= cpi->sf.max_delta_qindex;
@@ -1267,6 +1597,89 @@
   return q;
 }
 
+void vp9_configure_buffer_updates(VP9_COMP *cpi, int gf_group_index) {
+  VP9_COMMON *cm = &cpi->common;
+  TWO_PASS *const twopass = &cpi->twopass;
+
+  cpi->rc.is_src_frame_alt_ref = 0;
+  cm->show_existing_frame = 0;
+  cpi->rc.show_arf_as_gld = 0;
+  switch (twopass->gf_group.update_type[gf_group_index]) {
+    case KF_UPDATE:
+      cpi->refresh_last_frame = 1;
+      cpi->refresh_golden_frame = 1;
+      cpi->refresh_alt_ref_frame = 1;
+      break;
+    case LF_UPDATE:
+      cpi->refresh_last_frame = 1;
+      cpi->refresh_golden_frame = 0;
+      cpi->refresh_alt_ref_frame = 0;
+      break;
+    case GF_UPDATE:
+      cpi->refresh_last_frame = 1;
+      cpi->refresh_golden_frame = 1;
+      cpi->refresh_alt_ref_frame = 0;
+      break;
+    case OVERLAY_UPDATE:
+      cpi->refresh_last_frame = 0;
+      cpi->refresh_golden_frame = 1;
+      cpi->refresh_alt_ref_frame = 0;
+      cpi->rc.is_src_frame_alt_ref = 1;
+      if (cpi->rc.preserve_arf_as_gld) {
+        cpi->rc.show_arf_as_gld = 1;
+        cpi->refresh_golden_frame = 0;
+        cm->show_existing_frame = 1;
+        cm->refresh_frame_context = 0;
+      }
+      break;
+    case MID_OVERLAY_UPDATE:
+      cpi->refresh_last_frame = 1;
+      cpi->refresh_golden_frame = 0;
+      cpi->refresh_alt_ref_frame = 0;
+      cpi->rc.is_src_frame_alt_ref = 1;
+      break;
+    case USE_BUF_FRAME:
+      cpi->refresh_last_frame = 0;
+      cpi->refresh_golden_frame = 0;
+      cpi->refresh_alt_ref_frame = 0;
+      cpi->rc.is_src_frame_alt_ref = 1;
+      cm->show_existing_frame = 1;
+      cm->refresh_frame_context = 0;
+      break;
+    default:
+      assert(twopass->gf_group.update_type[gf_group_index] == ARF_UPDATE);
+      cpi->refresh_last_frame = 0;
+      cpi->refresh_golden_frame = 0;
+      cpi->refresh_alt_ref_frame = 1;
+      break;
+  }
+}
+
+void vp9_estimate_qp_gop(VP9_COMP *cpi) {
+  int gop_length = cpi->twopass.gf_group.gf_group_size;
+  int bottom_index, top_index;
+  int idx;
+  const int gf_index = cpi->twopass.gf_group.index;
+  const int is_src_frame_alt_ref = cpi->rc.is_src_frame_alt_ref;
+  const int refresh_frame_context = cpi->common.refresh_frame_context;
+
+  for (idx = 1; idx <= gop_length; ++idx) {
+    TplDepFrame *tpl_frame = &cpi->tpl_stats[idx];
+    int target_rate = cpi->twopass.gf_group.bit_allocation[idx];
+    cpi->twopass.gf_group.index = idx;
+    vp9_rc_set_frame_target(cpi, target_rate);
+    vp9_configure_buffer_updates(cpi, idx);
+    tpl_frame->base_qindex =
+        rc_pick_q_and_bounds_two_pass(cpi, &bottom_index, &top_index, idx);
+    tpl_frame->base_qindex = VPXMAX(tpl_frame->base_qindex, 1);
+  }
+  // Restore the actual gf_group index and frame update flags.
+  cpi->twopass.gf_group.index = gf_index;
+  cpi->rc.is_src_frame_alt_ref = is_src_frame_alt_ref;
+  cpi->common.refresh_frame_context = refresh_frame_context;
+  vp9_configure_buffer_updates(cpi, gf_index);
+}
+
 void vp9_rc_compute_frame_size_bounds(const VP9_COMP *cpi, int frame_target,
                                       int *frame_under_shoot_limit,
                                       int *frame_over_shoot_limit) {
@@ -1339,6 +1752,15 @@
     if (rc->frames_till_gf_update_due > 0) rc->frames_till_gf_update_due--;
 
     rc->frames_since_golden++;
+
+    if (rc->show_arf_as_gld) {
+      rc->frames_since_golden = 0;
+      // If we are not using alt ref in the upcoming group, clear the arf
+      // active flag. In the multi-arf group case, if the index is not 0 then
+      // we are overlaying a mid-group arf, so we should not reset the flag.
+      if (!rc->source_alt_ref_pending && (cpi->twopass.gf_group.index == 0))
+        rc->source_alt_ref_active = 0;
+    }
   }
 }
 
@@ -1373,7 +1795,8 @@
   int cnt_zeromv = 0;
   for (mi_row = 0; mi_row < rows; mi_row++) {
     for (mi_col = 0; mi_col < cols; mi_col++) {
-      if (abs(mi[0]->mv[0].as_mv.row) < 16 && abs(mi[0]->mv[0].as_mv.col) < 16)
+      if (mi[0]->ref_frame[0] == LAST_FRAME &&
+          abs(mi[0]->mv[0].as_mv.row) < 16 && abs(mi[0]->mv[0].as_mv.col) < 16)
         cnt_zeromv++;
       mi++;
     }
@@ -1387,7 +1810,11 @@
   const VP9_COMMON *const cm = &cpi->common;
   const VP9EncoderConfig *const oxcf = &cpi->oxcf;
   RATE_CONTROL *const rc = &cpi->rc;
+  SVC *const svc = &cpi->svc;
   const int qindex = cm->base_qindex;
+  const GF_GROUP *gf_group = &cpi->twopass.gf_group;
+  const int gf_group_index = cpi->twopass.gf_group.index;
+  const int layer_depth = gf_group->layer_depth[gf_group_index];
 
   // Update rate control heuristics
   rc->projected_frame_size = (int)(bytes_used << 3);
@@ -1396,7 +1823,7 @@
   vp9_rc_update_rate_correction_factors(cpi);
 
   // Keep a record of last Q and ambient average Q.
-  if (cm->frame_type == KEY_FRAME) {
+  if (frame_is_intra_only(cm)) {
     rc->last_q[KEY_FRAME] = qindex;
     rc->avg_frame_qindex[KEY_FRAME] =
         ROUND_POWER_OF_TWO(3 * rc->avg_frame_qindex[KEY_FRAME] + qindex, 2);
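The avg_frame_qindex update above (and the rolling bit monitors later in this file) smooth with ROUND_POWER_OF_TWO(3 * avg + x, 2), a fixed-point exponential moving average that keeps 3/4 of the history and blends in 1/4 of the new sample. A minimal standalone sketch of that arithmetic (the helper names here are illustrative, not libvpx's):

```c
#include <assert.h>

/* Fixed-point rounding shift: (value + 2^(n-1)) >> n, rounding to nearest. */
static int round_power_of_two(int value, int n) {
  return (value + (1 << (n - 1))) >> n;
}

/* One step of the 3/4-weight rolling average used by the rate controller. */
static int rolling_average(int avg, int sample) {
  return round_power_of_two(3 * avg + sample, 2);
}
```

Applied repeatedly, the average converges toward the incoming samples while damping single-frame spikes.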
@@ -1429,6 +1856,8 @@
     }
   }
 
+  if (cpi->use_svc) vp9_svc_adjust_avg_frame_qindex(cpi);
+
   // Keep record of last boosted (KF/KF/ARF) Q value.
   // If the current frame is coded at a lower Q then we also update it.
   // If all mbs in this group are skipped only update if the Q value is
@@ -1440,13 +1869,22 @@
         (cpi->refresh_golden_frame && !rc->is_src_frame_alt_ref)))) {
     rc->last_boosted_qindex = qindex;
   }
-  if (cm->frame_type == KEY_FRAME) rc->last_kf_qindex = qindex;
 
-  update_buffer_level(cpi, rc->projected_frame_size);
+  if ((qindex < cpi->twopass.last_qindex_of_arf_layer[layer_depth]) ||
+      (cm->frame_type == KEY_FRAME) ||
+      (!rc->constrained_gf_group &&
+       (cpi->refresh_alt_ref_frame ||
+        (cpi->refresh_golden_frame && !rc->is_src_frame_alt_ref)))) {
+    cpi->twopass.last_qindex_of_arf_layer[layer_depth] = qindex;
+  }
+
+  if (frame_is_intra_only(cm)) rc->last_kf_qindex = qindex;
+
+  update_buffer_level_postencode(cpi, rc->projected_frame_size);
 
   // Rolling monitors of whether we are over or underspending used to help
   // regulate min and Max Q in two pass.
-  if (cm->frame_type != KEY_FRAME) {
+  if (!frame_is_intra_only(cm)) {
     rc->rolling_target_bits = ROUND_POWER_OF_TWO(
         rc->rolling_target_bits * 3 + rc->this_frame_target, 2);
     rc->rolling_actual_bits = ROUND_POWER_OF_TWO(
@@ -1463,9 +1901,9 @@
 
   rc->total_target_vs_actual = rc->total_actual_bits - rc->total_target_bits;
 
-  if (!cpi->use_svc || is_two_pass_svc(cpi)) {
+  if (!cpi->use_svc) {
     if (is_altref_enabled(cpi) && cpi->refresh_alt_ref_frame &&
-        (cm->frame_type != KEY_FRAME))
+        (!frame_is_intra_only(cm)))
       // Update the alternate reference frame stats as appropriate.
       update_alt_ref_frame_stats(cpi);
     else
@@ -1473,7 +1911,28 @@
       update_golden_frame_stats(cpi);
   }
 
-  if (cm->frame_type == KEY_FRAME) rc->frames_since_key = 0;
+  // If second (long term) temporal reference is used for SVC,
+  // update the golden frame counter, only for base temporal layer.
+  if (cpi->use_svc && svc->use_gf_temporal_ref_current_layer &&
+      svc->temporal_layer_id == 0) {
+    int i = 0;
+    if (cpi->refresh_golden_frame)
+      rc->frames_since_golden = 0;
+    else
+      rc->frames_since_golden++;
+    // Decrement the countdown till the next gf.
+    if (rc->frames_till_gf_update_due > 0) rc->frames_till_gf_update_due--;
+    // Update the frames_since_golden for all upper temporal layers.
+    for (i = 1; i < svc->number_temporal_layers; ++i) {
+      const int layer = LAYER_IDS_TO_IDX(svc->spatial_layer_id, i,
+                                         svc->number_temporal_layers);
+      LAYER_CONTEXT *const lc = &svc->layer_context[layer];
+      RATE_CONTROL *const lrc = &lc->rc;
+      lrc->frames_since_golden = rc->frames_since_golden;
+    }
+  }
+
+  if (frame_is_intra_only(cm)) rc->frames_since_key = 0;
   if (cm->show_frame) {
     rc->frames_since_key++;
     rc->frames_to_key--;
@@ -1487,29 +1946,53 @@
   }
 
   if (oxcf->pass == 0) {
-    if (cm->frame_type != KEY_FRAME) {
+    if (!frame_is_intra_only(cm) &&
+        (!cpi->use_svc ||
+         (cpi->use_svc &&
+          !svc->layer_context[svc->temporal_layer_id].is_key_frame &&
+          svc->spatial_layer_id == svc->number_spatial_layers - 1))) {
       compute_frame_low_motion(cpi);
       if (cpi->sf.use_altref_onepass) update_altref_usage(cpi);
     }
+    // For SVC: propagate avg_frame_low_motion (only computed on the top
+    // spatial layer) to all lower spatial layers.
+    if (cpi->use_svc &&
+        svc->spatial_layer_id == svc->number_spatial_layers - 1) {
+      int i;
+      for (i = 0; i < svc->number_spatial_layers - 1; ++i) {
+        const int layer = LAYER_IDS_TO_IDX(i, svc->temporal_layer_id,
+                                           svc->number_temporal_layers);
+        LAYER_CONTEXT *const lc = &svc->layer_context[layer];
+        RATE_CONTROL *const lrc = &lc->rc;
+        lrc->avg_frame_low_motion = rc->avg_frame_low_motion;
+      }
+    }
     cpi->rc.last_frame_is_src_altref = cpi->rc.is_src_frame_alt_ref;
   }
-  if (cm->frame_type != KEY_FRAME) rc->reset_high_source_sad = 0;
+  if (!frame_is_intra_only(cm)) rc->reset_high_source_sad = 0;
 
   rc->last_avg_frame_bandwidth = rc->avg_frame_bandwidth;
-  if (cpi->use_svc &&
-      cpi->svc.spatial_layer_id < cpi->svc.number_spatial_layers - 1)
-    cpi->svc.lower_layer_qindex = cm->base_qindex;
+  if (cpi->use_svc && svc->spatial_layer_id < svc->number_spatial_layers - 1)
+    svc->lower_layer_qindex = cm->base_qindex;
 }
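The SVC loops above index layer_context through LAYER_IDS_TO_IDX, which flattens a (spatial, temporal) layer pair into a single array slot, row-major over temporal layers. A sketch of that mapping (reproduced here for illustration; the actual macro lives in vp9_svc_layercontext.h):

```c
#include <assert.h>

/* Flatten (spatial layer, temporal layer) ids into a layer_context index:
   each spatial layer owns a contiguous run of num_tl temporal slots. */
#define LAYER_IDS_TO_IDX(sl, tl, num_tl) ((sl) * (num_tl) + (tl))
```

For example, with 3 temporal layers, spatial layer 2 / temporal layer 1 maps to slot 7.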
 
 void vp9_rc_postencode_update_drop_frame(VP9_COMP *cpi) {
-  // Update buffer level with zero size, update frame counters, and return.
-  update_buffer_level(cpi, 0);
   cpi->common.current_video_frame++;
   cpi->rc.frames_since_key++;
   cpi->rc.frames_to_key--;
   cpi->rc.rc_2_frame = 0;
   cpi->rc.rc_1_frame = 0;
   cpi->rc.last_avg_frame_bandwidth = cpi->rc.avg_frame_bandwidth;
+  // For SVC on a dropped frame when framedrop_mode != LAYER_DROP:
+  // in this mode the whole superframe may be dropped if only a single layer
+  // has buffer underflow (below threshold). Since this can then lead to
+  // increasing buffer levels/overflow for certain layers even though the
+  // whole superframe is dropped, we cap the buffer level if it is already
+  // stable.
+  if (cpi->use_svc && cpi->svc.framedrop_mode != LAYER_DROP &&
+      cpi->rc.buffer_level > cpi->rc.optimal_buffer_level) {
+    cpi->rc.buffer_level = cpi->rc.optimal_buffer_level;
+    cpi->rc.bits_off_target = cpi->rc.optimal_buffer_level;
+  }
 }
 
 static int calc_pframe_target_size_one_pass_vbr(const VP9_COMP *const cpi) {
@@ -1555,10 +2038,9 @@
   VP9_COMMON *const cm = &cpi->common;
   RATE_CONTROL *const rc = &cpi->rc;
   int target;
-  // TODO(yaowu): replace the "auto_key && 0" below with proper decision logic.
   if (!cpi->refresh_alt_ref_frame &&
       (cm->current_video_frame == 0 || (cpi->frame_flags & FRAMEFLAGS_KEY) ||
-       rc->frames_to_key == 0 || (cpi->oxcf.auto_key && 0))) {
+       rc->frames_to_key == 0)) {
     cm->frame_type = KEY_FRAME;
     rc->this_key_frame_forced =
         cm->current_video_frame != 0 && rc->frames_to_key == 0;
@@ -1694,30 +2176,80 @@
   return vp9_rc_clamp_iframe_target_size(cpi, target);
 }
 
+static void set_intra_only_frame(VP9_COMP *cpi) {
+  VP9_COMMON *const cm = &cpi->common;
+  SVC *const svc = &cpi->svc;
+  // Don't allow intra_only frame for bypass/flexible SVC mode, or if number
+  // of spatial layers is 1 or if number of spatial or temporal layers > 3.
+  // Also, if intra-only is inserted on the very first frame, don't allow it
+  // if the number of temporal layers is > 1. This is because on an intra-only
+  // frame only 3 reference buffers can be updated, but for temporal layers
+  // > 1 we generally need to use buffer slots 4 and 5.
+  if ((cm->current_video_frame == 0 && svc->number_temporal_layers > 1) ||
+      svc->temporal_layering_mode == VP9E_TEMPORAL_LAYERING_MODE_BYPASS ||
+      svc->number_spatial_layers > 3 || svc->number_temporal_layers > 3 ||
+      svc->number_spatial_layers == 1)
+    return;
+  cm->show_frame = 0;
+  cm->intra_only = 1;
+  cm->frame_type = INTER_FRAME;
+  cpi->ext_refresh_frame_flags_pending = 1;
+  cpi->ext_refresh_last_frame = 1;
+  cpi->ext_refresh_golden_frame = 1;
+  cpi->ext_refresh_alt_ref_frame = 1;
+  if (cm->current_video_frame == 0) {
+    cpi->lst_fb_idx = 0;
+    cpi->gld_fb_idx = 1;
+    cpi->alt_fb_idx = 2;
+  } else {
+    int i;
+    int count = 0;
+    cpi->lst_fb_idx = -1;
+    cpi->gld_fb_idx = -1;
+    cpi->alt_fb_idx = -1;
+    // For intra-only frame we need to refresh all slots that were
+    // being used for the base layer (fb_idx_base[i] == 1).
+    // Start with assigning last first, then golden and then alt.
+    for (i = 0; i < REF_FRAMES; ++i) {
+      if (svc->fb_idx_base[i] == 1) count++;
+      if (count == 1 && cpi->lst_fb_idx == -1) cpi->lst_fb_idx = i;
+      if (count == 2 && cpi->gld_fb_idx == -1) cpi->gld_fb_idx = i;
+      if (count == 3 && cpi->alt_fb_idx == -1) cpi->alt_fb_idx = i;
+    }
+    // If golden or alt is not being used for base layer, then set them
+    // to the lst_fb_idx.
+    if (cpi->gld_fb_idx == -1) cpi->gld_fb_idx = cpi->lst_fb_idx;
+    if (cpi->alt_fb_idx == -1) cpi->alt_fb_idx = cpi->lst_fb_idx;
+  }
+}
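The slot-assignment loop in set_intra_only_frame above can be read as: the first three buffer slots flagged as base-layer (fb_idx_base[i] == 1) become LAST, GOLDEN and ALTREF in that order, with GOLDEN/ALTREF falling back to the LAST slot when fewer than three exist. A self-contained sketch of just that logic (function name and signature are illustrative):

```c
#include <assert.h>

/* Assign LAST/GOLDEN/ALTREF to the first three base-layer buffer slots.
   Slots that cannot be filled fall back to the LAST slot. */
static void assign_base_slots(const int *fb_idx_base, int n,
                              int *lst, int *gld, int *alt) {
  int i, count = 0;
  *lst = *gld = *alt = -1;
  for (i = 0; i < n; ++i) {
    if (fb_idx_base[i] == 1) count++;
    if (count == 1 && *lst == -1) *lst = i;
    if (count == 2 && *gld == -1) *gld = i;
    if (count == 3 && *alt == -1) *alt = i;
  }
  if (*gld == -1) *gld = *lst;
  if (*alt == -1) *alt = *lst;
}
```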
+
 void vp9_rc_get_svc_params(VP9_COMP *cpi) {
   VP9_COMMON *const cm = &cpi->common;
   RATE_CONTROL *const rc = &cpi->rc;
+  SVC *const svc = &cpi->svc;
   int target = rc->avg_frame_bandwidth;
-  int layer =
-      LAYER_IDS_TO_IDX(cpi->svc.spatial_layer_id, cpi->svc.temporal_layer_id,
-                       cpi->svc.number_temporal_layers);
+  int layer = LAYER_IDS_TO_IDX(svc->spatial_layer_id, svc->temporal_layer_id,
+                               svc->number_temporal_layers);
+  if (svc->first_spatial_layer_to_encode)
+    svc->layer_context[svc->temporal_layer_id].is_key_frame = 0;
   // Periodic key frames is based on the super-frame counter
   // (svc.current_superframe), also only base spatial layer is key frame.
-  if ((cm->current_video_frame == 0) || (cpi->frame_flags & FRAMEFLAGS_KEY) ||
+  // Key frame is set for any of the following: very first frame, frame flags
+  // indicate key, superframe counter hits the key frequency, or the
+  // (non-intra) sync flag is set for spatial layer 0.
+  if ((cm->current_video_frame == 0 && !svc->previous_frame_is_intra_only) ||
+      (cpi->frame_flags & FRAMEFLAGS_KEY) ||
       (cpi->oxcf.auto_key &&
-       (cpi->svc.current_superframe % cpi->oxcf.key_freq == 0) &&
-       cpi->svc.spatial_layer_id == 0)) {
+       (svc->current_superframe % cpi->oxcf.key_freq == 0) &&
+       !svc->previous_frame_is_intra_only && svc->spatial_layer_id == 0) ||
+      (svc->spatial_layer_sync[0] == 1 && svc->spatial_layer_id == 0)) {
     cm->frame_type = KEY_FRAME;
     rc->source_alt_ref_active = 0;
-    if (is_two_pass_svc(cpi)) {
-      cpi->svc.layer_context[layer].is_key_frame = 1;
-      cpi->ref_frame_flags &= (~VP9_LAST_FLAG & ~VP9_GOLD_FLAG & ~VP9_ALT_FLAG);
-    } else if (is_one_pass_cbr_svc(cpi)) {
-      if (cm->current_video_frame > 0) vp9_svc_reset_key_frame(cpi);
-      layer = LAYER_IDS_TO_IDX(cpi->svc.spatial_layer_id,
-                               cpi->svc.temporal_layer_id,
-                               cpi->svc.number_temporal_layers);
-      cpi->svc.layer_context[layer].is_key_frame = 1;
+    if (is_one_pass_cbr_svc(cpi)) {
+      if (cm->current_video_frame > 0) vp9_svc_reset_temporal_layers(cpi, 1);
+      layer = LAYER_IDS_TO_IDX(svc->spatial_layer_id, svc->temporal_layer_id,
+                               svc->number_temporal_layers);
+      svc->layer_context[layer].is_key_frame = 1;
       cpi->ref_frame_flags &= (~VP9_LAST_FLAG & ~VP9_GOLD_FLAG & ~VP9_ALT_FLAG);
       // Assumption here is that LAST_FRAME is being updated for a keyframe.
       // Thus no change in update flags.
@@ -1725,48 +2257,156 @@
     }
   } else {
     cm->frame_type = INTER_FRAME;
-    if (is_two_pass_svc(cpi)) {
-      LAYER_CONTEXT *lc = &cpi->svc.layer_context[layer];
-      if (cpi->svc.spatial_layer_id == 0) {
-        lc->is_key_frame = 0;
-      } else {
-        lc->is_key_frame =
-            cpi->svc.layer_context[cpi->svc.temporal_layer_id].is_key_frame;
-        if (lc->is_key_frame) cpi->ref_frame_flags &= (~VP9_LAST_FLAG);
-      }
-      cpi->ref_frame_flags &= (~VP9_ALT_FLAG);
-    } else if (is_one_pass_cbr_svc(cpi)) {
-      LAYER_CONTEXT *lc = &cpi->svc.layer_context[layer];
-      if (cpi->svc.spatial_layer_id == cpi->svc.first_spatial_layer_to_encode) {
-        lc->is_key_frame = 0;
-      } else {
-        lc->is_key_frame =
-            cpi->svc.layer_context[cpi->svc.temporal_layer_id].is_key_frame;
-      }
+    if (is_one_pass_cbr_svc(cpi)) {
+      LAYER_CONTEXT *lc = &svc->layer_context[layer];
+      // Add condition current_video_frame > 0 for the case where the first
+      // frame is intra-only, followed by an overlay/copy frame. In this case
+      // we don't want to reset is_key_frame to 0 on the overlay/copy frame.
+      lc->is_key_frame =
+          (svc->spatial_layer_id == 0 && cm->current_video_frame > 0)
+              ? 0
+              : svc->layer_context[svc->temporal_layer_id].is_key_frame;
       target = calc_pframe_target_size_one_pass_cbr(cpi);
     }
   }
 
+  if (svc->simulcast_mode) {
+    if (svc->spatial_layer_id > 0 &&
+        svc->layer_context[layer].is_key_frame == 1) {
+      cm->frame_type = KEY_FRAME;
+      cpi->ref_frame_flags &= (~VP9_LAST_FLAG & ~VP9_GOLD_FLAG & ~VP9_ALT_FLAG);
+      target = calc_iframe_target_size_one_pass_cbr(cpi);
+    }
+    // Set the buffer idx and refresh flags for key frames in simulcast mode.
+    // Note the buffer slot for long-term reference is set below (line 2255),
+    // and alt_ref is used for that on key frame. So use last and golden for
+    // the other two normal slots.
+    if (cm->frame_type == KEY_FRAME) {
+      if (svc->number_spatial_layers == 2) {
+        if (svc->spatial_layer_id == 0) {
+          cpi->lst_fb_idx = 0;
+          cpi->gld_fb_idx = 2;
+          cpi->alt_fb_idx = 6;
+        } else if (svc->spatial_layer_id == 1) {
+          cpi->lst_fb_idx = 1;
+          cpi->gld_fb_idx = 3;
+          cpi->alt_fb_idx = 6;
+        }
+      } else if (svc->number_spatial_layers == 3) {
+        if (svc->spatial_layer_id == 0) {
+          cpi->lst_fb_idx = 0;
+          cpi->gld_fb_idx = 3;
+          cpi->alt_fb_idx = 6;
+        } else if (svc->spatial_layer_id == 1) {
+          cpi->lst_fb_idx = 1;
+          cpi->gld_fb_idx = 4;
+          cpi->alt_fb_idx = 6;
+        } else if (svc->spatial_layer_id == 2) {
+          cpi->lst_fb_idx = 2;
+          cpi->gld_fb_idx = 5;
+          cpi->alt_fb_idx = 7;
+        }
+      }
+      cpi->ext_refresh_last_frame = 1;
+      cpi->ext_refresh_golden_frame = 1;
+      cpi->ext_refresh_alt_ref_frame = 1;
+    }
+  }
+
+  // Check if superframe contains a sync layer request.
+  vp9_svc_check_spatial_layer_sync(cpi);
+
+  // If the long-term temporal feature is enabled, set the period of the update.
+  // The update/refresh of this reference frame is always on base temporal
+  // layer frame.
+  if (svc->use_gf_temporal_ref_current_layer) {
+    // Only use gf long-term prediction on non-key superframes.
+    if (!svc->layer_context[svc->temporal_layer_id].is_key_frame) {
+      // Use golden for this reference, which will be used for prediction.
+      int index = svc->spatial_layer_id;
+      if (svc->number_spatial_layers == 3) index = svc->spatial_layer_id - 1;
+      assert(index >= 0);
+      cpi->gld_fb_idx = svc->buffer_gf_temporal_ref[index].idx;
+      // Enable prediction off LAST (last reference) and golden (which will
+      // generally be further behind/long-term reference).
+      cpi->ref_frame_flags = VP9_LAST_FLAG | VP9_GOLD_FLAG;
+    }
+    // Check for update/refresh of reference: only refresh on base temporal
+    // layer.
+    if (svc->temporal_layer_id == 0) {
+      if (svc->layer_context[svc->temporal_layer_id].is_key_frame) {
+        // On key frame we update the buffer index used for long term reference.
+        // Use the alt_ref since it is not used or updated on key frames.
+        int index = svc->spatial_layer_id;
+        if (svc->number_spatial_layers == 3) index = svc->spatial_layer_id - 1;
+        assert(index >= 0);
+        cpi->alt_fb_idx = svc->buffer_gf_temporal_ref[index].idx;
+        cpi->ext_refresh_alt_ref_frame = 1;
+      } else if (rc->frames_till_gf_update_due == 0) {
+        // Set the period of the next update. Make it a multiple of 10, as
+        // the cyclic refresh is typically ~10%, and we'd like the update to
+        // happen after a few cycles of the refresh (so it is a better
+        // quality frame). Note the cyclic refresh for SVC only operates on
+        // base temporal layer frames. Choose 20 as the period for now
+        // (2 cycles).
+        rc->baseline_gf_interval = 20;
+        rc->frames_till_gf_update_due = rc->baseline_gf_interval;
+        cpi->ext_refresh_golden_frame = 1;
+        rc->gfu_boost = DEFAULT_GF_BOOST;
+      }
+    }
+  } else if (!svc->use_gf_temporal_ref) {
+    rc->frames_till_gf_update_due = INT_MAX;
+    rc->baseline_gf_interval = INT_MAX;
+  }
+  if (svc->set_intra_only_frame) {
+    set_intra_only_frame(cpi);
+    target = calc_iframe_target_size_one_pass_cbr(cpi);
+  }
   // Any update/change of global cyclic refresh parameters (amount/delta-qp)
   // should be done here, before the frame qp is selected.
   if (cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ)
     vp9_cyclic_refresh_update_parameters(cpi);
 
   vp9_rc_set_frame_target(cpi, target);
-  rc->frames_till_gf_update_due = INT_MAX;
-  rc->baseline_gf_interval = INT_MAX;
+  if (cm->show_frame) update_buffer_level_svc_preencode(cpi);
+
+  if (cpi->oxcf.resize_mode == RESIZE_DYNAMIC && svc->single_layer_svc == 1 &&
+      svc->spatial_layer_id == svc->first_spatial_layer_to_encode) {
+    LAYER_CONTEXT *lc = NULL;
+    cpi->resize_pending = vp9_resize_one_pass_cbr(cpi);
+    if (cpi->resize_pending) {
+      int tl, width, height;
+      // Apply the same scale to all temporal layers.
+      for (tl = 0; tl < svc->number_temporal_layers; tl++) {
+        lc = &svc->layer_context[svc->spatial_layer_id *
+                                     svc->number_temporal_layers +
+                                 tl];
+        lc->scaling_factor_num_resize =
+            cpi->resize_scale_num * lc->scaling_factor_num;
+        lc->scaling_factor_den_resize =
+            cpi->resize_scale_den * lc->scaling_factor_den;
+      }
+      // Set the size for this current temporal layer.
+      lc = &svc->layer_context[svc->spatial_layer_id *
+                                   svc->number_temporal_layers +
+                               svc->temporal_layer_id];
+      get_layer_resolution(cpi->oxcf.width, cpi->oxcf.height,
+                           lc->scaling_factor_num_resize,
+                           lc->scaling_factor_den_resize, &width, &height);
+      vp9_set_size_literal(cpi, width, height);
+    }
+  } else {
+    cpi->resize_pending = 0;
+  }
 }
 
 void vp9_rc_get_one_pass_cbr_params(VP9_COMP *cpi) {
   VP9_COMMON *const cm = &cpi->common;
   RATE_CONTROL *const rc = &cpi->rc;
   int target;
-  // TODO(yaowu): replace the "auto_key && 0" below with proper decision logic.
-  if ((cm->current_video_frame == 0 || (cpi->frame_flags & FRAMEFLAGS_KEY) ||
-       rc->frames_to_key == 0 || (cpi->oxcf.auto_key && 0))) {
+  if ((cm->current_video_frame == 0) || (cpi->frame_flags & FRAMEFLAGS_KEY) ||
+      (cpi->oxcf.auto_key && rc->frames_to_key == 0)) {
     cm->frame_type = KEY_FRAME;
-    rc->this_key_frame_forced =
-        cm->current_video_frame != 0 && rc->frames_to_key == 0;
     rc->frames_to_key = cpi->oxcf.key_freq;
     rc->kf_boost = DEFAULT_KF_BOOST;
     rc->source_alt_ref_active = 0;
@@ -1792,12 +2432,15 @@
   if (cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ)
     vp9_cyclic_refresh_update_parameters(cpi);
 
-  if (cm->frame_type == KEY_FRAME)
+  if (frame_is_intra_only(cm))
     target = calc_iframe_target_size_one_pass_cbr(cpi);
   else
     target = calc_pframe_target_size_one_pass_cbr(cpi);
 
   vp9_rc_set_frame_target(cpi, target);
+
+  if (cm->show_frame) update_buffer_level_preencode(cpi);
+
   if (cpi->oxcf.resize_mode == RESIZE_DYNAMIC)
     cpi->resize_pending = vp9_resize_one_pass_cbr(cpi);
   else
@@ -1862,12 +2505,23 @@
     // Set Maximum gf/arf interval
     rc->max_gf_interval = oxcf->max_gf_interval;
     rc->min_gf_interval = oxcf->min_gf_interval;
+#if CONFIG_RATE_CTRL
+    if (rc->min_gf_interval == 0) {
+      rc->min_gf_interval = vp9_rc_get_default_min_gf_interval(
+          oxcf->width, oxcf->height, oxcf->init_framerate);
+    }
+    if (rc->max_gf_interval == 0) {
+      rc->max_gf_interval = vp9_rc_get_default_max_gf_interval(
+          oxcf->init_framerate, rc->min_gf_interval);
+    }
+#else
     if (rc->min_gf_interval == 0)
       rc->min_gf_interval = vp9_rc_get_default_min_gf_interval(
           oxcf->width, oxcf->height, cpi->framerate);
     if (rc->max_gf_interval == 0)
       rc->max_gf_interval = vp9_rc_get_default_max_gf_interval(
           cpi->framerate, rc->min_gf_interval);
+#endif
 
     // Extended max interval for genuinely static scenes like slide shows.
     rc->static_scene_max_gf_interval = MAX_STATIC_GF_GROUP_LENGTH;
@@ -2000,7 +2654,7 @@
   int avg_qp_thr1 = 70;
   int avg_qp_thr2 = 50;
   int min_width = 180;
-  int min_height = 180;
+  int min_height = 90;
   int down_size_on = 1;
   cpi->resize_scale_num = 1;
   cpi->resize_scale_den = 1;
@@ -2010,6 +2664,7 @@
     cpi->resize_count = 0;
     return 0;
   }
+
   // Check current frame resolution to avoid generating frames smaller than
   // the minimum resolution.
   if (ONEHALFONLY_RESIZE) {
@@ -2020,8 +2675,7 @@
         (cm->width * 3 / 4 < min_width || cm->height * 3 / 4 < min_height))
       return 0;
     else if (cpi->resize_state == THREE_QUARTER &&
-             ((cpi->oxcf.width >> 1) < min_width ||
-              (cpi->oxcf.height >> 1) < min_height))
+             (cm->width * 3 / 4 < min_width || cm->height * 3 / 4 < min_height))
       down_size_on = 0;
   }
 
@@ -2276,30 +2930,56 @@
 void vp9_scene_detection_onepass(VP9_COMP *cpi) {
   VP9_COMMON *const cm = &cpi->common;
   RATE_CONTROL *const rc = &cpi->rc;
+  YV12_BUFFER_CONFIG const *unscaled_src = cpi->un_scaled_source;
+  YV12_BUFFER_CONFIG const *unscaled_last_src = cpi->unscaled_last_source;
+  uint8_t *src_y;
+  int src_ystride;
+  int src_width;
+  int src_height;
+  uint8_t *last_src_y;
+  int last_src_ystride;
+  int last_src_width;
+  int last_src_height;
+  if (cpi->un_scaled_source == NULL || cpi->unscaled_last_source == NULL ||
+      (cpi->use_svc && cpi->svc.current_superframe == 0))
+    return;
+  src_y = unscaled_src->y_buffer;
+  src_ystride = unscaled_src->y_stride;
+  src_width = unscaled_src->y_width;
+  src_height = unscaled_src->y_height;
+  last_src_y = unscaled_last_src->y_buffer;
+  last_src_ystride = unscaled_last_src->y_stride;
+  last_src_width = unscaled_last_src->y_width;
+  last_src_height = unscaled_last_src->y_height;
 #if CONFIG_VP9_HIGHBITDEPTH
   if (cm->use_highbitdepth) return;
 #endif
   rc->high_source_sad = 0;
-  if (cpi->Last_Source != NULL &&
-      cpi->Last_Source->y_width == cpi->Source->y_width &&
-      cpi->Last_Source->y_height == cpi->Source->y_height) {
+  rc->high_num_blocks_with_motion = 0;
+  // For SVC: scene detection is only checked on the first spatial layer of
+  // the superframe, using the original/unscaled resolutions.
+  if (cpi->svc.spatial_layer_id == cpi->svc.first_spatial_layer_to_encode &&
+      src_width == last_src_width && src_height == last_src_height) {
     YV12_BUFFER_CONFIG *frames[MAX_LAG_BUFFERS] = { NULL };
-    uint8_t *src_y = cpi->Source->y_buffer;
-    int src_ystride = cpi->Source->y_stride;
-    uint8_t *last_src_y = cpi->Last_Source->y_buffer;
-    int last_src_ystride = cpi->Last_Source->y_stride;
+    int num_mi_cols = cm->mi_cols;
+    int num_mi_rows = cm->mi_rows;
     int start_frame = 0;
     int frames_to_buffer = 1;
     int frame = 0;
     int scene_cut_force_key_frame = 0;
+    int num_zero_temp_sad = 0;
     uint64_t avg_sad_current = 0;
-    uint32_t min_thresh = 4000;
+    uint32_t min_thresh = 10000;
     float thresh = 8.0f;
     uint32_t thresh_key = 140000;
     if (cpi->oxcf.speed <= 5) thresh_key = 240000;
-    if (cpi->oxcf.rc_mode == VPX_VBR) {
-      min_thresh = 65000;
-      thresh = 2.1f;
+    if (cpi->oxcf.content != VP9E_CONTENT_SCREEN) min_thresh = 65000;
+    if (cpi->oxcf.rc_mode == VPX_VBR) thresh = 2.1f;
+    if (cpi->use_svc && cpi->svc.number_spatial_layers > 1) {
+      const int aligned_width = ALIGN_POWER_OF_TWO(src_width, MI_SIZE_LOG2);
+      const int aligned_height = ALIGN_POWER_OF_TWO(src_height, MI_SIZE_LOG2);
+      num_mi_cols = aligned_width >> MI_SIZE_LOG2;
+      num_mi_rows = aligned_height >> MI_SIZE_LOG2;
     }
     if (cpi->oxcf.lag_in_frames > 0) {
       frames_to_buffer = (cm->current_video_frame == 1)
@@ -2347,14 +3027,15 @@
         uint64_t avg_sad = 0;
         uint64_t tmp_sad = 0;
         int num_samples = 0;
-        int sb_cols = (cm->mi_cols + MI_BLOCK_SIZE - 1) / MI_BLOCK_SIZE;
-        int sb_rows = (cm->mi_rows + MI_BLOCK_SIZE - 1) / MI_BLOCK_SIZE;
+        int sb_cols = (num_mi_cols + MI_BLOCK_SIZE - 1) / MI_BLOCK_SIZE;
+        int sb_rows = (num_mi_rows + MI_BLOCK_SIZE - 1) / MI_BLOCK_SIZE;
         if (cpi->oxcf.lag_in_frames > 0) {
           src_y = frames[frame]->y_buffer;
           src_ystride = frames[frame]->y_stride;
           last_src_y = frames[frame + 1]->y_buffer;
           last_src_ystride = frames[frame + 1]->y_stride;
         }
+        num_zero_temp_sad = 0;
         for (sbi_row = 0; sbi_row < sb_rows; ++sbi_row) {
           for (sbi_col = 0; sbi_col < sb_cols; ++sbi_col) {
             // Checker-board pattern, ignore boundary.
@@ -2366,6 +3047,7 @@
                                                last_src_ystride);
               avg_sad += tmp_sad;
               num_samples++;
+              if (tmp_sad == 0) num_zero_temp_sad++;
             }
             src_y += 64;
             last_src_y += 64;
@@ -2382,7 +3064,8 @@
           if (avg_sad >
                   VPXMAX(min_thresh,
                          (unsigned int)(rc->avg_source_sad[0] * thresh)) &&
-              rc->frames_since_key > 1)
+              rc->frames_since_key > 1 + cpi->svc.number_spatial_layers &&
+              num_zero_temp_sad < 3 * (num_samples >> 2))
             rc->high_source_sad = 1;
           else
             rc->high_source_sad = 0;
@@ -2393,6 +3076,8 @@
         } else {
           rc->avg_source_sad[lagframe_idx] = avg_sad;
         }
+        if (num_zero_temp_sad < (3 * num_samples >> 2))
+          rc->high_num_blocks_with_motion = 1;
       }
     }
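The scene-detection loop above samples SAD over 64x64 superblocks in a checker-board pattern, skipping the frame boundary. The exact selection predicate is elided from this hunk, so the rule below is an illustrative assumption rather than libvpx's code; it only sketches the sampling-grid idea:

```c
#include <assert.h>

/* Hypothetical checker-board sampling grid for 64x64 superblocks:
   boundary superblocks are ignored, and roughly every other interior
   superblock is measured. (Assumed predicate, not libvpx's actual one.) */
static int is_sampled_superblock(int sbi_row, int sbi_col, int sb_rows,
                                 int sb_cols) {
  if (sbi_row == 0 || sbi_col == 0 || sbi_row == sb_rows - 1 ||
      sbi_col == sb_cols - 1)
    return 0;                             /* ignore boundary superblocks */
  return ((sbi_row + sbi_col) & 1) == 0;  /* checker-board pattern */
}
```

Sampling about half the interior superblocks keeps the per-frame SAD cost low while still covering the whole frame.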
     // For CBR non-screen content mode, check if we should reset the rate
@@ -2412,6 +3097,19 @@
       if (cm->frame_type != KEY_FRAME && rc->reset_high_source_sad)
         rc->this_frame_target = rc->avg_frame_bandwidth;
     }
+    // For SVC the new (updated) avg_source_sad[0] for the current superframe
+    // updates the setting for all layers.
+    if (cpi->use_svc) {
+      int sl, tl;
+      SVC *const svc = &cpi->svc;
+      for (sl = 0; sl < svc->number_spatial_layers; ++sl)
+        for (tl = 0; tl < svc->number_temporal_layers; ++tl) {
+          int layer = LAYER_IDS_TO_IDX(sl, tl, svc->number_temporal_layers);
+          LAYER_CONTEXT *const lc = &svc->layer_context[layer];
+          RATE_CONTROL *const lrc = &lc->rc;
+          lrc->avg_source_sad[0] = rc->avg_source_sad[0];
+        }
+    }
     // For VBR, under scene change/high content change, force golden refresh.
     if (cpi->oxcf.rc_mode == VPX_VBR && cm->frame_type != KEY_FRAME &&
         rc->high_source_sad && rc->frames_to_key > 3 &&
@@ -2442,12 +3140,26 @@
 
 // Test if encoded frame will significantly overshoot the target bitrate, and
 // if so, set the QP, reset/adjust some rate control parameters, and return 1.
+// frame_size = -1 means frame has not been encoded.
 int vp9_encodedframe_overshoot(VP9_COMP *cpi, int frame_size, int *q) {
   VP9_COMMON *const cm = &cpi->common;
   RATE_CONTROL *const rc = &cpi->rc;
-  int thresh_qp = 3 * (rc->worst_quality >> 2);
-  int thresh_rate = rc->avg_frame_bandwidth * 10;
-  if (cm->base_qindex < thresh_qp && frame_size > thresh_rate) {
+  SPEED_FEATURES *const sf = &cpi->sf;
+  int thresh_qp = 7 * (rc->worst_quality >> 3);
+  int thresh_rate = rc->avg_frame_bandwidth << 3;
+  // Lower thresh_qp for video (more overshoot at lower Q) to be
+  // more conservative for video.
+  if (cpi->oxcf.content != VP9E_CONTENT_SCREEN)
+    thresh_qp = 3 * (rc->worst_quality >> 2);
+  // If this decision is not based on an encoded frame size but just on
+  // scene/slide change detection (i.e., re_encode_overshoot_cbr_rt ==
+  // FAST_DETECTION_MAXQ), for now skip the (frame_size > thresh_rate)
+  // condition in this case.
+  // TODO(marpan): Use a better size/rate condition for this case and
+  // adjust thresholds.
+  if ((sf->overshoot_detection_cbr_rt == FAST_DETECTION_MAXQ ||
+       frame_size > thresh_rate) &&
+      cm->base_qindex < thresh_qp) {
     double rate_correction_factor =
         cpi->rc.rate_correction_factors[INTER_NORMAL];
     const int target_size = cpi->rc.avg_frame_bandwidth;
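The updated thresholds above can be sketched in isolation: a max-Q re-encode is triggered when the encoded size exceeds 8x the average frame bandwidth while the frame was coded below ~7/8 of `worst_quality` (lowered to 3/4 for non-screen content, which overshoots more at low Q). This illustrative helper omits the `FAST_DETECTION_MAXQ` path, which skips the size check entirely:

```c
/* Sketch of the overshoot trigger above; not the encoder's API. */
static int overshoot_detected(int frame_size, int base_qindex,
                              int worst_quality, int avg_frame_bandwidth,
                              int is_screen_content) {
  /* Screen content tolerates a higher Q threshold than video. */
  const int thresh_qp = is_screen_content ? 7 * (worst_quality >> 3)
                                          : 3 * (worst_quality >> 2);
  const int thresh_rate = avg_frame_bandwidth << 3;
  return frame_size > thresh_rate && base_qindex < thresh_qp;
}
```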
@@ -2457,6 +3169,29 @@
     int enumerator;
     // Force a re-encode, and for now use max-QP.
     *q = cpi->rc.worst_quality;
+    cpi->cyclic_refresh->counter_encode_maxq_scene_change = 0;
+    cpi->rc.re_encode_maxq_scene_change = 1;
+    // If the frame_size is much larger than the threshold (big content change)
+    // and the encoded frame used a lot of Intra modes, then force hybrid_intra
+    // encoding for the re-encode on this scene change. hybrid_intra will
+    // use rd-based intra mode selection for small blocks.
+    if (sf->overshoot_detection_cbr_rt == RE_ENCODE_MAXQ &&
+        frame_size > (thresh_rate << 1) && cpi->svc.spatial_layer_id == 0) {
+      MODE_INFO **mi = cm->mi_grid_visible;
+      int sum_intra_usage = 0;
+      int mi_row, mi_col;
+      int tot = 0;
+      for (mi_row = 0; mi_row < cm->mi_rows; mi_row++) {
+        for (mi_col = 0; mi_col < cm->mi_cols; mi_col++) {
+          if (mi[0]->ref_frame[0] == INTRA_FRAME) sum_intra_usage++;
+          tot++;
+          mi++;
+        }
+        mi += 8;
+      }
+      sum_intra_usage = 100 * sum_intra_usage / (cm->mi_rows * cm->mi_cols);
+      if (sum_intra_usage > 60) cpi->rc.hybrid_intra_scene_change = 1;
+    }
     // Adjust avg_frame_qindex, buffer_level, and rate correction factors, as
     // these parameters will affect QP selection for subsequent frames. If they
     // have settled down to a very different (low QP) state, then not adjusting
@@ -2484,21 +3219,27 @@
       cpi->rc.rate_correction_factors[INTER_NORMAL] = rate_correction_factor;
     }
    // For temporal layers, reset the rate control parameters across all
-    // temporal layers.
+    // temporal layers. If the first_spatial_layer_to_encode > 0, then this
+    // superframe has skipped lower base layers. So in this case we should also
+    // reset and force max-q for spatial layers < first_spatial_layer_to_encode.
     if (cpi->use_svc) {
-      int i = 0;
+      int tl = 0;
+      int sl = 0;
       SVC *svc = &cpi->svc;
-      for (i = 0; i < svc->number_temporal_layers; ++i) {
-        const int layer = LAYER_IDS_TO_IDX(svc->spatial_layer_id, i,
-                                           svc->number_temporal_layers);
-        LAYER_CONTEXT *lc = &svc->layer_context[layer];
-        RATE_CONTROL *lrc = &lc->rc;
-        lrc->avg_frame_qindex[INTER_FRAME] = *q;
-        lrc->buffer_level = rc->optimal_buffer_level;
-        lrc->bits_off_target = rc->optimal_buffer_level;
-        lrc->rc_1_frame = 0;
-        lrc->rc_2_frame = 0;
-        lrc->rate_correction_factors[INTER_NORMAL] = rate_correction_factor;
+      for (sl = 0; sl < svc->first_spatial_layer_to_encode; ++sl) {
+        for (tl = 0; tl < svc->number_temporal_layers; ++tl) {
+          const int layer =
+              LAYER_IDS_TO_IDX(sl, tl, svc->number_temporal_layers);
+          LAYER_CONTEXT *lc = &svc->layer_context[layer];
+          RATE_CONTROL *lrc = &lc->rc;
+          lrc->avg_frame_qindex[INTER_FRAME] = *q;
+          lrc->buffer_level = lrc->optimal_buffer_level;
+          lrc->bits_off_target = lrc->optimal_buffer_level;
+          lrc->rc_1_frame = 0;
+          lrc->rc_2_frame = 0;
+          lrc->rate_correction_factors[INTER_NORMAL] = rate_correction_factor;
+          lrc->force_max_q = 1;
+        }
       }
     }
     return 1;
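The SVC reset loops above rely on `LAYER_IDS_TO_IDX` to flatten a (spatial, temporal) layer pair into one index of the `layer_context` array; in libvpx this is the usual row-major mapping `sl * num_temporal_layers + tl`. A toy sketch of the skipped-layer reset, with a stand-in struct rather than the real `LAYER_CONTEXT`:

```c
#define LAYER_IDS_TO_IDX(sl, tl, num_tl) ((sl) * (num_tl) + (tl))

/* Stand-in for the per-layer rate-control state. */
typedef struct {
  int force_max_q;
} ToyLayerRC;

/* Force max-Q on every temporal layer of each spatial layer that this
 * superframe skipped (sl < first_spatial_layer_to_encode). */
static void reset_skipped_layers(ToyLayerRC *layers, int first_sl_to_encode,
                                 int num_temporal_layers) {
  int sl, tl;
  for (sl = 0; sl < first_sl_to_encode; ++sl)
    for (tl = 0; tl < num_temporal_layers; ++tl)
      layers[LAYER_IDS_TO_IDX(sl, tl, num_temporal_layers)].force_max_q = 1;
}
```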
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ratectrl.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ratectrl.h
index 3a40e01..7dbe17d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ratectrl.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_ratectrl.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_RATECTRL_H_
-#define VP9_ENCODER_VP9_RATECTRL_H_
+#ifndef VPX_VP9_ENCODER_VP9_RATECTRL_H_
+#define VPX_VP9_ENCODER_VP9_RATECTRL_H_
 
 #include "vpx/vpx_codec.h"
 #include "vpx/vpx_integer.h"
@@ -175,15 +175,34 @@
   uint64_t avg_source_sad[MAX_LAG_BUFFERS];
   uint64_t prev_avg_source_sad_lag;
   int high_source_sad_lagindex;
+  int high_num_blocks_with_motion;
   int alt_ref_gf_group;
   int last_frame_is_src_altref;
   int high_source_sad;
   int count_last_scene_change;
+  int hybrid_intra_scene_change;
+  int re_encode_maxq_scene_change;
   int avg_frame_low_motion;
   int af_ratio_onepass_vbr;
   int force_qpmin;
   int reset_high_source_sad;
   double perc_arf_usage;
+  int force_max_q;
+  // Last frame was dropped post encode on scene change.
+  int last_post_encode_dropped_scene_change;
+  // Enable post encode frame dropping for screen content. Only enabled when
+  // ext_use_post_encode_drop is enabled by user.
+  int use_post_encode_drop;
+  // External flag to enable post encode frame dropping, controlled by user.
+  int ext_use_post_encode_drop;
+
+  int damped_adjustment[RATE_FACTOR_LEVELS];
+  double arf_active_best_quality_adjustment_factor;
+  int arf_increase_active_best_quality;
+
+  int preserve_arf_as_gld;
+  int preserve_next_arf_as_gld;
+  int show_arf_as_gld;
 } RATE_CONTROL;
 
 struct VP9_COMP;
@@ -192,7 +211,7 @@
 void vp9_rc_init(const struct VP9EncoderConfig *oxcf, int pass,
                  RATE_CONTROL *rc);
 
-int vp9_estimate_bits_at_q(FRAME_TYPE frame_kind, int q, int mbs,
+int vp9_estimate_bits_at_q(FRAME_TYPE frame_type, int q, int mbs,
                            double correction_factor, vpx_bit_depth_t bit_depth);
 
 double vp9_convert_qindex_to_q(int qindex, vpx_bit_depth_t bit_depth);
@@ -203,9 +222,9 @@
 
 int vp9_rc_get_default_min_gf_interval(int width, int height, double framerate);
 // Note vp9_rc_get_default_max_gf_interval() requires the min_gf_interval to
-// be passed in to ensure that the max_gf_interval returned is at least as bis
+// be passed in to ensure that the max_gf_interval returned is at least as big
 // as that.
-int vp9_rc_get_default_max_gf_interval(double framerate, int min_frame_rate);
+int vp9_rc_get_default_max_gf_interval(double framerate, int min_gf_interval);
 
 // Generally at the high level, the following flow is expected
 // to be enforced for rate control:
@@ -245,13 +264,18 @@
 // Changes only the rate correction factors in the rate control structure.
 void vp9_rc_update_rate_correction_factors(struct VP9_COMP *cpi);
 
+// Post encode drop for CBR mode.
+int post_encode_drop_cbr(struct VP9_COMP *cpi, size_t *size);
+
+int vp9_test_drop(struct VP9_COMP *cpi);
+
 // Decide if we should drop this frame: For 1-pass CBR.
 // Changes only the decimation count in the rate control structure
 int vp9_rc_drop_frame(struct VP9_COMP *cpi);
 
 // Computes frame size bounds.
 void vp9_rc_compute_frame_size_bounds(const struct VP9_COMP *cpi,
-                                      int this_frame_target,
+                                      int frame_target,
                                       int *frame_under_shoot_limit,
                                       int *frame_over_shoot_limit);
 
@@ -302,8 +326,12 @@
 
 int vp9_encodedframe_overshoot(struct VP9_COMP *cpi, int frame_size, int *q);
 
+void vp9_configure_buffer_updates(struct VP9_COMP *cpi, int gf_group_index);
+
+void vp9_estimate_qp_gop(struct VP9_COMP *cpi);
+
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_RATECTRL_H_
+#endif  // VPX_VP9_ENCODER_VP9_RATECTRL_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rd.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rd.c
index 6b2306c..34c7442 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rd.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rd.c
@@ -57,6 +57,30 @@
   rd_cost->rdcost = 0;
 }
 
+int64_t vp9_calculate_rd_cost(int mult, int div, int rate, int64_t dist) {
+  assert(mult >= 0);
+  assert(div > 0);
+  if (rate >= 0 && dist >= 0) {
+    return RDCOST(mult, div, rate, dist);
+  }
+  if (rate >= 0 && dist < 0) {
+    return RDCOST_NEG_D(mult, div, rate, -dist);
+  }
+  if (rate < 0 && dist >= 0) {
+    return RDCOST_NEG_R(mult, div, -rate, dist);
+  }
+  return -RDCOST(mult, div, -rate, -dist);
+}
+
+void vp9_rd_cost_update(int mult, int div, RD_COST *rd_cost) {
+  if (rd_cost->rate < INT_MAX && rd_cost->dist < INT64_MAX) {
+    rd_cost->rdcost =
+        vp9_calculate_rd_cost(mult, div, rd_cost->rate, rd_cost->dist);
+  } else {
+    vp9_rd_cost_reset(rd_cost);
+  }
+}
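The reason `vp9_calculate_rd_cost()` above dispatches on the signs of rate and distortion is that `RDCOST()` left-shifts the distortion, which is undefined behavior for negative values; the negative operand is negated before the macro and the sign of the result restored afterwards. A self-contained sketch of the same arithmetic (the constants mirror libvpx, where `VP9_PROB_COST_SHIFT` is 9; treat them as assumptions of this sketch):

```c
#include <stdint.h>

#define SKETCH_PROB_COST_SHIFT 9
#define SKETCH_ROUND_POW2(value, n) (((value) + (1ll << ((n)-1))) >> (n))

/* Equivalent of vp9_calculate_rd_cost(): shift only non-negative values,
 * then put the sign back on each term. */
static int64_t sketch_rd_cost(int mult, int div_shift, int rate, int64_t dist) {
  const int64_t rate_term = SKETCH_ROUND_POW2(
      (int64_t)(rate < 0 ? -rate : rate) * mult, SKETCH_PROB_COST_SHIFT);
  const int64_t dist_term = (dist < 0 ? -dist : dist) << div_shift;
  if (rate < 0 && dist < 0) return -(rate_term + dist_term);
  if (rate < 0) return dist_term - rate_term;
  if (dist < 0) return rate_term - dist_term;
  return rate_term + dist_term;
}
```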
+
 // The baseline rd thresholds for breaking out of the rd loop for
 // certain modes are assumed to be based on 8x8 blocks.
 // This table is used to correct for block size.
@@ -69,10 +93,12 @@
   const FRAME_CONTEXT *const fc = cpi->common.fc;
   int i, j;
 
-  for (i = 0; i < INTRA_MODES; ++i)
-    for (j = 0; j < INTRA_MODES; ++j)
+  for (i = 0; i < INTRA_MODES; ++i) {
+    for (j = 0; j < INTRA_MODES; ++j) {
       vp9_cost_tokens(cpi->y_mode_costs[i][j], vp9_kf_y_mode_prob[i][j],
                       vp9_intra_mode_tree);
+    }
+  }
 
   vp9_cost_tokens(cpi->mbmode_cost, fc->y_mode_prob[1], vp9_intra_mode_tree);
   for (i = 0; i < INTRA_MODES; ++i) {
@@ -82,9 +108,28 @@
                     fc->uv_mode_prob[i], vp9_intra_mode_tree);
   }
 
-  for (i = 0; i < SWITCHABLE_FILTER_CONTEXTS; ++i)
+  for (i = 0; i < SWITCHABLE_FILTER_CONTEXTS; ++i) {
     vp9_cost_tokens(cpi->switchable_interp_costs[i],
                     fc->switchable_interp_prob[i], vp9_switchable_interp_tree);
+  }
+
+  for (i = TX_8X8; i < TX_SIZES; ++i) {
+    for (j = 0; j < TX_SIZE_CONTEXTS; ++j) {
+      const vpx_prob *tx_probs = get_tx_probs(i, j, &fc->tx_probs);
+      int k;
+      for (k = 0; k <= i; ++k) {
+        int cost = 0;
+        int m;
+        for (m = 0; m <= k - (k == i); ++m) {
+          if (m == k)
+            cost += vp9_cost_zero(tx_probs[m]);
+          else
+            cost += vp9_cost_one(tx_probs[m]);
+        }
+        cpi->tx_size_cost[i - 1][j][k] = cost;
+      }
+    }
+  }
 }
 
 static void fill_token_costs(vp9_coeff_cost *c,
@@ -143,40 +188,74 @@
 
 static const int rd_boost_factor[16] = { 64, 32, 32, 32, 24, 16, 12, 12,
                                          8,  8,  4,  4,  2,  2,  1,  0 };
-static const int rd_frame_type_factor[FRAME_UPDATE_TYPES] = { 128, 144, 128,
-                                                              128, 144 };
 
-int64_t vp9_compute_rd_mult_based_on_qindex(const VP9_COMP *cpi, int qindex) {
-  const int64_t q = vp9_dc_quant(qindex, 0, cpi->common.bit_depth);
-#if CONFIG_VP9_HIGHBITDEPTH
-  int64_t rdmult = 0;
-  switch (cpi->common.bit_depth) {
-    case VPX_BITS_8: rdmult = 88 * q * q / 24; break;
-    case VPX_BITS_10: rdmult = ROUND_POWER_OF_TWO(88 * q * q / 24, 4); break;
-    case VPX_BITS_12: rdmult = ROUND_POWER_OF_TWO(88 * q * q / 24, 8); break;
-    default:
-      assert(0 && "bit_depth should be VPX_BITS_8, VPX_BITS_10 or VPX_BITS_12");
-      return -1;
+// Note that the element below for frame type "USE_BUF_FRAME", which indicates
+// that the show frame flag is set, should not be used as no real frame
+// is encoded so we should not reach here. However, a dummy value
+// is inserted here to make sure the data structure has the right number
+// of values assigned.
+static const int rd_frame_type_factor[FRAME_UPDATE_TYPES] = { 128, 144, 128,
+                                                              128, 144, 144 };
+
+int vp9_compute_rd_mult_based_on_qindex(const VP9_COMP *cpi, int qindex) {
+  // largest dc_quant is 21387, therefore rdmult should always fit in int32_t
+  const int q = vp9_dc_quant(qindex, 0, cpi->common.bit_depth);
+  uint32_t rdmult = q * q;
+
+  if (cpi->common.frame_type != KEY_FRAME) {
+    if (qindex < 128)
+      rdmult = rdmult * 4;
+    else if (qindex < 190)
+      rdmult = rdmult * 4 + rdmult / 2;
+    else
+      rdmult = rdmult * 3;
+  } else {
+    if (qindex < 64)
+      rdmult = rdmult * 4;
+    else if (qindex <= 128)
+      rdmult = rdmult * 3 + rdmult / 2;
+    else if (qindex < 190)
+      rdmult = rdmult * 4 + rdmult / 2;
+    else
+      rdmult = rdmult * 7 + rdmult / 2;
   }
-#else
-  int64_t rdmult = 88 * q * q / 24;
+#if CONFIG_VP9_HIGHBITDEPTH
+  switch (cpi->common.bit_depth) {
+    case VPX_BITS_10: rdmult = ROUND_POWER_OF_TWO(rdmult, 4); break;
+    case VPX_BITS_12: rdmult = ROUND_POWER_OF_TWO(rdmult, 8); break;
+    default: break;
+  }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
-  return rdmult;
+  return rdmult > 0 ? rdmult : 1;
 }
 
-int vp9_compute_rd_mult(const VP9_COMP *cpi, int qindex) {
-  int64_t rdmult = vp9_compute_rd_mult_based_on_qindex(cpi, qindex);
-
+static int modulate_rdmult(const VP9_COMP *cpi, int rdmult) {
+  int64_t rdmult_64 = rdmult;
   if (cpi->oxcf.pass == 2 && (cpi->common.frame_type != KEY_FRAME)) {
     const GF_GROUP *const gf_group = &cpi->twopass.gf_group;
     const FRAME_UPDATE_TYPE frame_type = gf_group->update_type[gf_group->index];
-    const int boost_index = VPXMIN(15, (cpi->rc.gfu_boost / 100));
+    const int gfu_boost = cpi->multi_layer_arf
+                              ? gf_group->gfu_boost[gf_group->index]
+                              : cpi->rc.gfu_boost;
+    const int boost_index = VPXMIN(15, (gfu_boost / 100));
 
-    rdmult = (rdmult * rd_frame_type_factor[frame_type]) >> 7;
-    rdmult += ((rdmult * rd_boost_factor[boost_index]) >> 7);
+    rdmult_64 = (rdmult_64 * rd_frame_type_factor[frame_type]) >> 7;
+    rdmult_64 += ((rdmult_64 * rd_boost_factor[boost_index]) >> 7);
   }
-  if (rdmult < 1) rdmult = 1;
-  return (int)rdmult;
+  return (int)rdmult_64;
+}
+
+int vp9_compute_rd_mult(const VP9_COMP *cpi, int qindex) {
+  int rdmult = vp9_compute_rd_mult_based_on_qindex(cpi, qindex);
+  return modulate_rdmult(cpi, rdmult);
+}
+
+int vp9_get_adaptive_rdmult(const VP9_COMP *cpi, double beta) {
+  int rdmult =
+      vp9_compute_rd_mult_based_on_qindex(cpi, cpi->common.base_qindex);
+  rdmult = (int)((double)rdmult / beta);
+  rdmult = rdmult > 0 ? rdmult : 1;
+  return modulate_rdmult(cpi, rdmult);
 }
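The new `vp9_compute_rd_mult_based_on_qindex()` above replaces the old `88 * q * q / 24` formula with a piecewise multiplier on `dc_quant(qindex)^2` that depends on frame type and qindex band. A sketch of just that piecewise scaling, with the dc-quant lookup left to the caller (so `q` is passed in directly):

```c
#include <stdint.h>

/* Sketch of the piecewise rdmult scaling above; q is the dc quantizer
 * value, qindex the raw quantizer index. Multipliers x4, x4.5, x3, ...
 * are expressed with integer arithmetic as in the original. */
static uint32_t rdmult_from_quant(int q, int qindex, int is_key_frame) {
  uint32_t rdmult = (uint32_t)(q * q);
  if (!is_key_frame) {
    if (qindex < 128)
      rdmult = rdmult * 4;
    else if (qindex < 190)
      rdmult = rdmult * 4 + rdmult / 2;
    else
      rdmult = rdmult * 3;
  } else {
    if (qindex < 64)
      rdmult = rdmult * 4;
    else if (qindex <= 128)
      rdmult = rdmult * 3 + rdmult / 2;
    else if (qindex < 190)
      rdmult = rdmult * 4 + rdmult / 2;
    else
      rdmult = rdmult * 7 + rdmult / 2;
  }
  return rdmult > 0 ? rdmult : 1;  /* never return 0 */
}
```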
 
 static int compute_rd_thresh_factor(int qindex, vpx_bit_depth_t bit_depth) {
@@ -185,10 +264,10 @@
   switch (bit_depth) {
     case VPX_BITS_8: q = vp9_dc_quant(qindex, 0, VPX_BITS_8) / 4.0; break;
     case VPX_BITS_10: q = vp9_dc_quant(qindex, 0, VPX_BITS_10) / 16.0; break;
-    case VPX_BITS_12: q = vp9_dc_quant(qindex, 0, VPX_BITS_12) / 64.0; break;
     default:
-      assert(0 && "bit_depth should be VPX_BITS_8, VPX_BITS_10 or VPX_BITS_12");
-      return -1;
+      assert(bit_depth == VPX_BITS_12);
+      q = vp9_dc_quant(qindex, 0, VPX_BITS_12) / 64.0;
+      break;
   }
 #else
   (void)bit_depth;
@@ -209,12 +288,11 @@
       x->sadperbit16 = sad_per_bit16lut_10[qindex];
       x->sadperbit4 = sad_per_bit4lut_10[qindex];
       break;
-    case VPX_BITS_12:
+    default:
+      assert(cpi->common.bit_depth == VPX_BITS_12);
       x->sadperbit16 = sad_per_bit16lut_12[qindex];
       x->sadperbit4 = sad_per_bit4lut_12[qindex];
       break;
-    default:
-      assert(0 && "bit_depth should be VPX_BITS_8, VPX_BITS_10 or VPX_BITS_12");
   }
 #else
   (void)cpi;
@@ -255,6 +333,15 @@
   }
 }
 
+void vp9_build_inter_mode_cost(VP9_COMP *cpi) {
+  const VP9_COMMON *const cm = &cpi->common;
+  int i;
+  for (i = 0; i < INTER_MODE_CONTEXTS; ++i) {
+    vp9_cost_tokens((int *)cpi->inter_mode_cost[i], cm->fc->inter_mode_probs[i],
+                    vp9_inter_mode_tree);
+  }
+}
+
 void vp9_initialize_rd_consts(VP9_COMP *cpi) {
   VP9_COMMON *const cm = &cpi->common;
   MACROBLOCK *const x = &cpi->td.mb;
@@ -303,10 +390,7 @@
             x->nmvjointcost,
             cm->allow_high_precision_mv ? x->nmvcost_hp : x->nmvcost,
             &cm->fc->nmvc, cm->allow_high_precision_mv);
-
-        for (i = 0; i < INTER_MODE_CONTEXTS; ++i)
-          vp9_cost_tokens((int *)cpi->inter_mode_cost[i],
-                          cm->fc->inter_mode_probs[i], vp9_inter_mode_tree);
+        vp9_build_inter_mode_cost(cpi);
       }
     }
   }
@@ -471,13 +555,13 @@
       for (i = 0; i < num_4x4_h; i += 4)
         t_left[i] = !!*(const uint32_t *)&left[i];
       break;
-    case TX_32X32:
+    default:
+      assert(tx_size == TX_32X32);
       for (i = 0; i < num_4x4_w; i += 8)
         t_above[i] = !!*(const uint64_t *)&above[i];
       for (i = 0; i < num_4x4_h; i += 8)
         t_left[i] = !!*(const uint64_t *)&left[i];
       break;
-    default: assert(0 && "Invalid transform size."); break;
   }
 }
 
@@ -493,8 +577,7 @@
   uint8_t *src_y_ptr = x->plane[0].src.buf;
   uint8_t *ref_y_ptr;
   const int num_mv_refs =
-      MAX_MV_REF_CANDIDATES +
-      (cpi->sf.adaptive_motion_search && block_size < x->max_partition_size);
+      MAX_MV_REF_CANDIDATES + (block_size < x->max_partition_size);
 
   MV pred_mv[3];
   pred_mv[0] = x->mbmi_ext->ref_mvs[ref_frame][0].as_mv;
@@ -504,11 +587,12 @@
 
   near_same_nearest = x->mbmi_ext->ref_mvs[ref_frame][0].as_int ==
                       x->mbmi_ext->ref_mvs[ref_frame][1].as_int;
+
   // Get the sad for each candidate reference mv.
   for (i = 0; i < num_mv_refs; ++i) {
     const MV *this_mv = &pred_mv[i];
     int fp_row, fp_col;
-
+    if (this_mv->row == INT16_MAX || this_mv->col == INT16_MAX) continue;
     if (i == 1 && near_same_nearest) continue;
     fp_row = (this_mv->row + 3 + (this_mv->row >= 0)) >> 3;
     fp_col = (this_mv->col + 3 + (this_mv->col >= 0)) >> 3;
@@ -573,6 +657,7 @@
   const VP9_COMMON *const cm = &cpi->common;
   const int scaled_idx = cpi->scaled_ref_idx[ref_frame - 1];
   const int ref_idx = get_ref_frame_buf_idx(cpi, ref_frame);
+  assert(ref_frame >= LAST_FRAME && ref_frame <= ALTREF_FRAME);
   return (scaled_idx != ref_idx && scaled_idx != INVALID_IDX)
              ? &cm->buffer_pool->frame_bufs[scaled_idx].buf
              : NULL;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rd.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rd.h
index 59022c1..4c04c95 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rd.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rd.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_RD_H_
-#define VP9_ENCODER_VP9_RD_H_
+#ifndef VPX_VP9_ENCODER_VP9_RD_H_
+#define VPX_VP9_ENCODER_VP9_RD_H_
 
 #include <limits.h>
 
@@ -27,14 +27,17 @@
 #define RD_EPB_SHIFT 6
 
 #define RDCOST(RM, DM, R, D) \
-  (ROUND_POWER_OF_TWO(((int64_t)R) * (RM), VP9_PROB_COST_SHIFT) + (D << DM))
+  ROUND_POWER_OF_TWO(((int64_t)(R)) * (RM), VP9_PROB_COST_SHIFT) + ((D) << (DM))
+#define RDCOST_NEG_R(RM, DM, R, D) \
+  ((D) << (DM)) - ROUND_POWER_OF_TWO(((int64_t)(R)) * (RM), VP9_PROB_COST_SHIFT)
+#define RDCOST_NEG_D(RM, DM, R, D) \
+  ROUND_POWER_OF_TWO(((int64_t)(R)) * (RM), VP9_PROB_COST_SHIFT) - ((D) << (DM))
+
 #define QIDX_SKIP_THRESH 115
 
 #define MV_COST_WEIGHT 108
 #define MV_COST_WEIGHT_SUB 120
 
-#define INVALID_MV 0x80008000
-
 #define MAX_MODES 30
 #define MAX_REFS 6
 
@@ -42,6 +45,9 @@
 #define RD_THRESH_MAX_FACT 64
 #define RD_THRESH_INC 1
 
+#define VP9_DIST_SCALE_LOG2 4
+#define VP9_DIST_SCALE (1 << VP9_DIST_SCALE_LOG2)
+
 // This enumerator type needs to be kept aligned with the mode order in
 // const MODE_DEFINITION vp9_mode_order[MAX_MODES] used in the rd code.
 typedef enum {
@@ -98,8 +104,8 @@
 typedef struct RD_OPT {
   // Thresh_mult is used to set a threshold for the rd score. A higher value
   // means that we will accept the best mode so far more often. This number
-  // is used in combination with the current block size, and thresh_freq_fact
-  // to pick a threshold.
+  // is used in combination with the current block size, and thresh_freq_fact to
+  // pick a threshold.
   int thresh_mult[MAX_MODES];
   int thresh_mult_sub8x8[MAX_REFS];
 
@@ -108,9 +114,14 @@
   int64_t prediction_type_threshes[MAX_REF_FRAMES][REFERENCE_MODES];
 
   int64_t filter_threshes[MAX_REF_FRAMES][SWITCHABLE_FILTER_CONTEXTS];
+#if CONFIG_CONSISTENT_RECODE || CONFIG_RATE_CTRL
+  int64_t prediction_type_threshes_prev[MAX_REF_FRAMES][REFERENCE_MODES];
 
+  int64_t filter_threshes_prev[MAX_REF_FRAMES][SWITCHABLE_FILTER_CONTEXTS];
+#endif  // CONFIG_CONSISTENT_RECODE || CONFIG_RATE_CTRL
   int RDMULT;
   int RDDIV;
+  double r0;
 } RD_OPT;
 
 typedef struct RD_COST {
@@ -123,22 +134,27 @@
 void vp9_rd_cost_reset(RD_COST *rd_cost);
 // Initialize the rate distortion cost values to zero.
 void vp9_rd_cost_init(RD_COST *rd_cost);
+// It supports negative rate and dist, which is different from RDCOST().
+int64_t vp9_calculate_rd_cost(int mult, int div, int rate, int64_t dist);
+// Update the cost value based on its rate and distortion.
+void vp9_rd_cost_update(int mult, int div, RD_COST *rd_cost);
 
 struct TileInfo;
 struct TileDataEnc;
 struct VP9_COMP;
 struct macroblock;
 
-int64_t vp9_compute_rd_mult_based_on_qindex(const struct VP9_COMP *cpi,
-                                            int qindex);
+int vp9_compute_rd_mult_based_on_qindex(const struct VP9_COMP *cpi, int qindex);
 
 int vp9_compute_rd_mult(const struct VP9_COMP *cpi, int qindex);
 
+int vp9_get_adaptive_rdmult(const struct VP9_COMP *cpi, double beta);
+
 void vp9_initialize_rd_consts(struct VP9_COMP *cpi);
 
 void vp9_initialize_me_consts(struct VP9_COMP *cpi, MACROBLOCK *x, int qindex);
 
-void vp9_model_rd_from_var_lapndz(unsigned int var, unsigned int n,
+void vp9_model_rd_from_var_lapndz(unsigned int var, unsigned int n_log2,
                                   unsigned int qstep, int *rate, int64_t *dist);
 
 void vp9_model_rd_from_var_lapndz_vec(unsigned int var[MAX_MB_PLANE],
@@ -169,8 +185,8 @@
 
 void vp9_set_rd_speed_thresholds_sub8x8(struct VP9_COMP *cpi);
 
-void vp9_update_rd_thresh_fact(int (*fact)[MAX_MODES], int rd_thresh, int bsize,
-                               int best_mode_index);
+void vp9_update_rd_thresh_fact(int (*factor_buf)[MAX_MODES], int rd_thresh,
+                               int bsize, int best_mode_index);
 
 static INLINE int rd_less_than_thresh(int64_t best_rd, int thresh,
                                       const int *const thresh_fact) {
@@ -208,8 +224,10 @@
                                                 BLOCK_SIZE bs, int bd);
 #endif
 
+void vp9_build_inter_mode_cost(struct VP9_COMP *cpi);
+
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_RD_H_
+#endif  // VPX_VP9_ENCODER_VP9_RD_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rdopt.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rdopt.c
index 90f0672..39b99d5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rdopt.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rdopt.c
@@ -31,6 +31,9 @@
 #include "vp9/common/vp9_scan.h"
 #include "vp9/common/vp9_seg_common.h"
 
+#if !CONFIG_REALTIME_ONLY
+#include "vp9/encoder/vp9_aq_variance.h"
+#endif
 #include "vp9/encoder/vp9_cost.h"
 #include "vp9/encoder/vp9_encodemb.h"
 #include "vp9/encoder/vp9_encodemv.h"
@@ -40,7 +43,6 @@
 #include "vp9/encoder/vp9_ratectrl.h"
 #include "vp9/encoder/vp9_rd.h"
 #include "vp9/encoder/vp9_rdopt.h"
-#include "vp9/encoder/vp9_aq_variance.h"
 
 #define LAST_FRAME_MODE_MASK \
   ((1 << GOLDEN_FRAME) | (1 << ALTREF_FRAME) | (1 << INTRA_FRAME))
@@ -77,9 +79,12 @@
   int use_fast_coef_costing;
   const scan_order *so;
   uint8_t skippable;
+  struct buf_2d *this_recon;
 };
 
 #define LAST_NEW_MV_INDEX 6
+
+#if !CONFIG_REALTIME_ONLY
 static const MODE_DEFINITION vp9_mode_order[MAX_MODES] = {
   { NEARESTMV, { LAST_FRAME, NONE } },
   { NEARESTMV, { ALTREF_FRAME, NONE } },
@@ -127,6 +132,7 @@
   { { ALTREF_FRAME, NONE } },         { { LAST_FRAME, ALTREF_FRAME } },
   { { GOLDEN_FRAME, ALTREF_FRAME } }, { { INTRA_FRAME, NONE } },
 };
+#endif  // !CONFIG_REALTIME_ONLY
 
 static void swap_block_ptr(MACROBLOCK *x, PICK_MODE_CONTEXT *ctx, int m, int n,
                            int min_plane, int max_plane) {
@@ -153,6 +159,7 @@
   }
 }
 
+#if !CONFIG_REALTIME_ONLY
 static void model_rd_for_sb(VP9_COMP *cpi, BLOCK_SIZE bsize, MACROBLOCK *x,
                             MACROBLOCKD *xd, int *out_rate_sum,
                             int64_t *out_dist_sum, int *skip_txfm_sb,
@@ -273,10 +280,11 @@
   }
 
   *skip_txfm_sb = skip_flag;
-  *skip_sse_sb = total_sse << 4;
+  *skip_sse_sb = total_sse << VP9_DIST_SCALE_LOG2;
   *out_rate_sum = (int)rate_sum;
-  *out_dist_sum = dist_sum << 4;
+  *out_dist_sum = dist_sum << VP9_DIST_SCALE_LOG2;
 }
+#endif  // !CONFIG_REALTIME_ONLY
 
 #if CONFIG_VP9_HIGHBITDEPTH
 int64_t vp9_highbd_block_error_c(const tran_low_t *coeff,
@@ -459,6 +467,66 @@
   return plane_4x4_dim + (mb_to_edge_dim >> (5 + subsampling_dim)) - blk_dim;
 }
 
+// Copy all visible 4x4s in the transform block.
+static void copy_block_visible(const MACROBLOCKD *xd,
+                               const struct macroblockd_plane *const pd,
+                               const uint8_t *src, const int src_stride,
+                               uint8_t *dst, const int dst_stride, int blk_row,
+                               int blk_col, const BLOCK_SIZE plane_bsize,
+                               const BLOCK_SIZE tx_bsize) {
+  const int plane_4x4_w = num_4x4_blocks_wide_lookup[plane_bsize];
+  const int plane_4x4_h = num_4x4_blocks_high_lookup[plane_bsize];
+  const int tx_4x4_w = num_4x4_blocks_wide_lookup[tx_bsize];
+  const int tx_4x4_h = num_4x4_blocks_high_lookup[tx_bsize];
+  int b4x4s_to_right_edge = num_4x4_to_edge(plane_4x4_w, xd->mb_to_right_edge,
+                                            pd->subsampling_x, blk_col);
+  int b4x4s_to_bottom_edge = num_4x4_to_edge(plane_4x4_h, xd->mb_to_bottom_edge,
+                                             pd->subsampling_y, blk_row);
+  const int is_highbd = xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH;
+  if (tx_bsize == BLOCK_4X4 ||
+      (b4x4s_to_right_edge >= tx_4x4_w && b4x4s_to_bottom_edge >= tx_4x4_h)) {
+    const int w = tx_4x4_w << 2;
+    const int h = tx_4x4_h << 2;
+#if CONFIG_VP9_HIGHBITDEPTH
+    if (is_highbd) {
+      vpx_highbd_convolve_copy(CONVERT_TO_SHORTPTR(src), src_stride,
+                               CONVERT_TO_SHORTPTR(dst), dst_stride, NULL, 0, 0,
+                               0, 0, w, h, xd->bd);
+    } else {
+#endif
+      vpx_convolve_copy(src, src_stride, dst, dst_stride, NULL, 0, 0, 0, 0, w,
+                        h);
+#if CONFIG_VP9_HIGHBITDEPTH
+    }
+#endif
+  } else {
+    int r, c;
+    int max_r = VPXMIN(b4x4s_to_bottom_edge, tx_4x4_h);
+    int max_c = VPXMIN(b4x4s_to_right_edge, tx_4x4_w);
+    // if we are in the unrestricted motion border.
+    for (r = 0; r < max_r; ++r) {
+      // Skip visiting the sub blocks that are wholly within the UMV.
+      for (c = 0; c < max_c; ++c) {
+        const uint8_t *src_ptr = src + r * src_stride * 4 + c * 4;
+        uint8_t *dst_ptr = dst + r * dst_stride * 4 + c * 4;
+#if CONFIG_VP9_HIGHBITDEPTH
+        if (is_highbd) {
+          vpx_highbd_convolve_copy(CONVERT_TO_SHORTPTR(src_ptr), src_stride,
+                                   CONVERT_TO_SHORTPTR(dst_ptr), dst_stride,
+                                   NULL, 0, 0, 0, 0, 4, 4, xd->bd);
+        } else {
+#endif
+          vpx_convolve_copy(src_ptr, src_stride, dst_ptr, dst_stride, NULL, 0,
+                            0, 0, 0, 4, 4);
+#if CONFIG_VP9_HIGHBITDEPTH
+        }
+#endif
+      }
+    }
+  }
+  (void)is_highbd;
+}
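The clamping that `copy_block_visible()` performs can be reduced to one expression: the number of visible 4x4 sub-blocks in a transform block is the transform dimension clamped by the distance to the frame edge, where a negative `mb_to_*_edge` (in 1/8-pel units, hence the `>> 5` per 4x4 block) means the block extends past the visible frame. This sketch mirrors `num_4x4_to_edge()` from the surrounding file:

```c
/* Number of 4x4 sub-blocks of a transform block that are inside the
 * visible frame along one dimension. */
static int visible_4x4s(int plane_4x4_dim, int mb_to_edge_dim,
                        int subsampling_dim, int blk_dim, int tx_4x4_dim) {
  const int to_edge =
      plane_4x4_dim + (mb_to_edge_dim >> (5 + subsampling_dim)) - blk_dim;
  return to_edge < tx_4x4_dim ? to_edge : tx_4x4_dim;
}
```

When the whole transform block is visible this returns `tx_4x4_dim` and the fast whole-block copy path is taken; otherwise only the clamped sub-region is copied 4x4 at a time.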
+
 // Compute the pixel domain sum square error on all visible 4x4s in the
 // transform block.
 static unsigned pixel_sse(const VP9_COMP *const cpi, const MACROBLOCKD *xd,
@@ -539,12 +607,13 @@
 static void dist_block(const VP9_COMP *cpi, MACROBLOCK *x, int plane,
                        BLOCK_SIZE plane_bsize, int block, int blk_row,
                        int blk_col, TX_SIZE tx_size, int64_t *out_dist,
-                       int64_t *out_sse) {
+                       int64_t *out_sse, struct buf_2d *out_recon) {
   MACROBLOCKD *const xd = &x->e_mbd;
   const struct macroblock_plane *const p = &x->plane[plane];
   const struct macroblockd_plane *const pd = &xd->plane[plane];
+  const int eob = p->eobs[block];
 
-  if (x->block_tx_domain) {
+  if (!out_recon && x->block_tx_domain && eob) {
     const int ss_txfrm_size = tx_size << 1;
     int64_t this_sse;
     const int shift = tx_size == TX_32X32 ? 0 : 2;
@@ -583,15 +652,23 @@
     const int dst_idx = 4 * (blk_row * dst_stride + blk_col);
     const uint8_t *src = &p->src.buf[src_idx];
     const uint8_t *dst = &pd->dst.buf[dst_idx];
+    uint8_t *out_recon_ptr = 0;
+
     const tran_low_t *dqcoeff = BLOCK_OFFSET(pd->dqcoeff, block);
-    const uint16_t *eob = &p->eobs[block];
     unsigned int tmp;
 
     tmp = pixel_sse(cpi, xd, pd, src, src_stride, dst, dst_stride, blk_row,
                     blk_col, plane_bsize, tx_bsize);
     *out_sse = (int64_t)tmp * 16;
+    if (out_recon) {
+      const int out_recon_idx = 4 * (blk_row * out_recon->stride + blk_col);
+      out_recon_ptr = &out_recon->buf[out_recon_idx];
+      copy_block_visible(xd, pd, dst, dst_stride, out_recon_ptr,
+                         out_recon->stride, blk_row, blk_col, plane_bsize,
+                         tx_bsize);
+    }
 
-    if (*eob) {
+    if (eob) {
 #if CONFIG_VP9_HIGHBITDEPTH
       DECLARE_ALIGNED(16, uint16_t, recon16[1024]);
       uint8_t *recon = (uint8_t *)recon16;
@@ -604,22 +681,22 @@
         vpx_highbd_convolve_copy(CONVERT_TO_SHORTPTR(dst), dst_stride, recon16,
                                  32, NULL, 0, 0, 0, 0, bs, bs, xd->bd);
         if (xd->lossless) {
-          vp9_highbd_iwht4x4_add(dqcoeff, recon16, 32, *eob, xd->bd);
+          vp9_highbd_iwht4x4_add(dqcoeff, recon16, 32, eob, xd->bd);
         } else {
           switch (tx_size) {
             case TX_4X4:
-              vp9_highbd_idct4x4_add(dqcoeff, recon16, 32, *eob, xd->bd);
+              vp9_highbd_idct4x4_add(dqcoeff, recon16, 32, eob, xd->bd);
               break;
             case TX_8X8:
-              vp9_highbd_idct8x8_add(dqcoeff, recon16, 32, *eob, xd->bd);
+              vp9_highbd_idct8x8_add(dqcoeff, recon16, 32, eob, xd->bd);
               break;
             case TX_16X16:
-              vp9_highbd_idct16x16_add(dqcoeff, recon16, 32, *eob, xd->bd);
+              vp9_highbd_idct16x16_add(dqcoeff, recon16, 32, eob, xd->bd);
               break;
-            case TX_32X32:
-              vp9_highbd_idct32x32_add(dqcoeff, recon16, 32, *eob, xd->bd);
+            default:
+              assert(tx_size == TX_32X32);
+              vp9_highbd_idct32x32_add(dqcoeff, recon16, 32, eob, xd->bd);
               break;
-            default: assert(0 && "Invalid transform size");
           }
         }
         recon = CONVERT_TO_BYTEPTR(recon16);
@@ -627,16 +704,16 @@
 #endif  // CONFIG_VP9_HIGHBITDEPTH
         vpx_convolve_copy(dst, dst_stride, recon, 32, NULL, 0, 0, 0, 0, bs, bs);
         switch (tx_size) {
-          case TX_32X32: vp9_idct32x32_add(dqcoeff, recon, 32, *eob); break;
-          case TX_16X16: vp9_idct16x16_add(dqcoeff, recon, 32, *eob); break;
-          case TX_8X8: vp9_idct8x8_add(dqcoeff, recon, 32, *eob); break;
-          case TX_4X4:
+          case TX_32X32: vp9_idct32x32_add(dqcoeff, recon, 32, eob); break;
+          case TX_16X16: vp9_idct16x16_add(dqcoeff, recon, 32, eob); break;
+          case TX_8X8: vp9_idct8x8_add(dqcoeff, recon, 32, eob); break;
+          default:
+            assert(tx_size == TX_4X4);
             // this is like vp9_short_idct4x4 but has a special case around
             // eob<=1, which is significant (not just an optimization) for
             // the lossless case.
-            x->inv_txfm_add(dqcoeff, recon, 32, *eob);
+            x->inv_txfm_add(dqcoeff, recon, 32, eob);
             break;
-          default: assert(0 && "Invalid transform size"); break;
         }
 #if CONFIG_VP9_HIGHBITDEPTH
       }
@@ -644,6 +721,10 @@
 
       tmp = pixel_sse(cpi, xd, pd, src, src_stride, recon, 32, blk_row, blk_col,
                       plane_bsize, tx_bsize);
+      if (out_recon) {
+        copy_block_visible(xd, pd, recon, 32, out_recon_ptr, out_recon->stride,
+                           blk_row, blk_col, plane_bsize, tx_bsize);
+      }
     }
 
     *out_dist = (int64_t)tmp * 16;
@@ -668,26 +749,38 @@
   int64_t sse;
   const int coeff_ctx =
       combine_entropy_contexts(args->t_left[blk_row], args->t_above[blk_col]);
+  struct buf_2d *recon = args->this_recon;
+  const BLOCK_SIZE tx_bsize = txsize_to_bsize[tx_size];
+  const struct macroblockd_plane *const pd = &xd->plane[plane];
+  const int dst_stride = pd->dst.stride;
+  const uint8_t *dst = &pd->dst.buf[4 * (blk_row * dst_stride + blk_col)];
 
   if (args->exit_early) return;
 
   if (!is_inter_block(mi)) {
+#if CONFIG_MISMATCH_DEBUG
+    struct encode_b_args intra_arg = {
+      x, x->block_qcoeff_opt, args->t_above, args->t_left, &mi->skip, 0, 0, 0
+    };
+#else
     struct encode_b_args intra_arg = { x, x->block_qcoeff_opt, args->t_above,
                                        args->t_left, &mi->skip };
+#endif
     vp9_encode_block_intra(plane, block, blk_row, blk_col, plane_bsize, tx_size,
                            &intra_arg);
+    if (recon) {
+      uint8_t *rec_ptr = &recon->buf[4 * (blk_row * recon->stride + blk_col)];
+      copy_block_visible(xd, pd, dst, dst_stride, rec_ptr, recon->stride,
+                         blk_row, blk_col, plane_bsize, tx_bsize);
+    }
     if (x->block_tx_domain) {
       dist_block(args->cpi, x, plane, plane_bsize, block, blk_row, blk_col,
-                 tx_size, &dist, &sse);
+                 tx_size, &dist, &sse, /*recon =*/0);
     } else {
-      const BLOCK_SIZE tx_bsize = txsize_to_bsize[tx_size];
       const struct macroblock_plane *const p = &x->plane[plane];
-      const struct macroblockd_plane *const pd = &xd->plane[plane];
       const int src_stride = p->src.stride;
-      const int dst_stride = pd->dst.stride;
       const int diff_stride = 4 * num_4x4_blocks_wide_lookup[plane_bsize];
       const uint8_t *src = &p->src.buf[4 * (blk_row * src_stride + blk_col)];
-      const uint8_t *dst = &pd->dst.buf[4 * (blk_row * dst_stride + blk_col)];
       const int16_t *diff = &p->src_diff[4 * (blk_row * diff_stride + blk_col)];
       unsigned int tmp;
       sse = sum_squares_visible(xd, pd, diff, diff_stride, blk_row, blk_col,
@@ -701,17 +794,20 @@
                       blk_row, blk_col, plane_bsize, tx_bsize);
       dist = (int64_t)tmp * 16;
     }
-  } else if (max_txsize_lookup[plane_bsize] == tx_size) {
-    if (x->skip_txfm[(plane << 2) + (block >> (tx_size << 1))] ==
-        SKIP_TXFM_NONE) {
+  } else {
+    int skip_txfm_flag = SKIP_TXFM_NONE;
+    if (max_txsize_lookup[plane_bsize] == tx_size)
+      skip_txfm_flag = x->skip_txfm[(plane << 2) + (block >> (tx_size << 1))];
+
+    if (skip_txfm_flag == SKIP_TXFM_NONE ||
+        (recon && skip_txfm_flag == SKIP_TXFM_AC_ONLY)) {
       // full forward transform and quantization
       vp9_xform_quant(x, plane, block, blk_row, blk_col, plane_bsize, tx_size);
       if (x->block_qcoeff_opt)
         vp9_optimize_b(x, plane, block, tx_size, coeff_ctx);
       dist_block(args->cpi, x, plane, plane_bsize, block, blk_row, blk_col,
-                 tx_size, &dist, &sse);
-    } else if (x->skip_txfm[(plane << 2) + (block >> (tx_size << 1))] ==
-               SKIP_TXFM_AC_ONLY) {
+                 tx_size, &dist, &sse, recon);
+    } else if (skip_txfm_flag == SKIP_TXFM_AC_ONLY) {
       // compute DC coefficient
       tran_low_t *const coeff = BLOCK_OFFSET(x->plane[plane].coeff, block);
       tran_low_t *const dqcoeff = BLOCK_OFFSET(xd->plane[plane].dqcoeff, block);
@@ -737,14 +833,12 @@
       x->plane[plane].eobs[block] = 0;
       sse = x->bsse[(plane << 2) + (block >> (tx_size << 1))] << 4;
       dist = sse;
+      if (recon) {
+        uint8_t *rec_ptr = &recon->buf[4 * (blk_row * recon->stride + blk_col)];
+        copy_block_visible(xd, pd, dst, dst_stride, rec_ptr, recon->stride,
+                           blk_row, blk_col, plane_bsize, tx_bsize);
+      }
     }
-  } else {
-    // full forward transform and quantization
-    vp9_xform_quant(x, plane, block, blk_row, blk_col, plane_bsize, tx_size);
-    if (x->block_qcoeff_opt)
-      vp9_optimize_b(x, plane, block, tx_size, coeff_ctx);
-    dist_block(args->cpi, x, plane, plane_bsize, block, blk_row, blk_col,
-               tx_size, &dist, &sse);
   }
 
   rd = RDCOST(x->rdmult, x->rddiv, 0, dist);
@@ -763,7 +857,8 @@
   rd = VPXMIN(rd1, rd2);
   if (plane == 0) {
     x->zcoeff_blk[tx_size][block] =
-        !x->plane[plane].eobs[block] || (rd1 > rd2 && !xd->lossless);
+        !x->plane[plane].eobs[block] ||
+        (x->sharpness == 0 && rd1 > rd2 && !xd->lossless);
     x->sum_y_eobs[tx_size] += x->plane[plane].eobs[block];
   }
 
@@ -783,7 +878,8 @@
 static void txfm_rd_in_plane(const VP9_COMP *cpi, MACROBLOCK *x, int *rate,
                              int64_t *distortion, int *skippable, int64_t *sse,
                              int64_t ref_best_rd, int plane, BLOCK_SIZE bsize,
-                             TX_SIZE tx_size, int use_fast_coef_casting) {
+                             TX_SIZE tx_size, int use_fast_coef_costing,
+                             struct buf_2d *recon) {
   MACROBLOCKD *const xd = &x->e_mbd;
   const struct macroblockd_plane *const pd = &xd->plane[plane];
   struct rdcost_block_args args;
@@ -791,8 +887,9 @@
   args.cpi = cpi;
   args.x = x;
   args.best_rd = ref_best_rd;
-  args.use_fast_coef_costing = use_fast_coef_casting;
+  args.use_fast_coef_costing = use_fast_coef_costing;
   args.skippable = 1;
+  args.this_recon = recon;
 
   if (plane == 0) xd->mi[0]->tx_size = tx_size;
 
@@ -817,7 +914,8 @@
 
 static void choose_largest_tx_size(VP9_COMP *cpi, MACROBLOCK *x, int *rate,
                                    int64_t *distortion, int *skip, int64_t *sse,
-                                   int64_t ref_best_rd, BLOCK_SIZE bs) {
+                                   int64_t ref_best_rd, BLOCK_SIZE bs,
+                                   struct buf_2d *recon) {
   const TX_SIZE max_tx_size = max_txsize_lookup[bs];
   VP9_COMMON *const cm = &cpi->common;
   const TX_SIZE largest_tx_size = tx_mode_to_biggest_tx_size[cm->tx_mode];
@@ -827,13 +925,13 @@
   mi->tx_size = VPXMIN(max_tx_size, largest_tx_size);
 
   txfm_rd_in_plane(cpi, x, rate, distortion, skip, sse, ref_best_rd, 0, bs,
-                   mi->tx_size, cpi->sf.use_fast_coef_costing);
+                   mi->tx_size, cpi->sf.use_fast_coef_costing, recon);
 }
 
 static void choose_tx_size_from_rd(VP9_COMP *cpi, MACROBLOCK *x, int *rate,
                                    int64_t *distortion, int *skip,
                                    int64_t *psse, int64_t ref_best_rd,
-                                   BLOCK_SIZE bs) {
+                                   BLOCK_SIZE bs, struct buf_2d *recon) {
   const TX_SIZE max_tx_size = max_txsize_lookup[bs];
   VP9_COMMON *const cm = &cpi->common;
   MACROBLOCKD *const xd = &x->e_mbd;
@@ -845,20 +943,34 @@
                               { INT64_MAX, INT64_MAX },
                               { INT64_MAX, INT64_MAX },
                               { INT64_MAX, INT64_MAX } };
-  int n, m;
+  int n;
   int s0, s1;
-  int64_t best_rd = INT64_MAX;
+  int64_t best_rd = ref_best_rd;
   TX_SIZE best_tx = max_tx_size;
   int start_tx, end_tx;
+  const int tx_size_ctx = get_tx_size_context(xd);
+#if CONFIG_VP9_HIGHBITDEPTH
+  DECLARE_ALIGNED(16, uint16_t, recon_buf16[TX_SIZES][64 * 64]);
+  uint8_t *recon_buf[TX_SIZES];
+  for (n = 0; n < TX_SIZES; ++n) {
+    if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
+      recon_buf[n] = CONVERT_TO_BYTEPTR(recon_buf16[n]);
+    } else {
+      recon_buf[n] = (uint8_t *)recon_buf16[n];
+    }
+  }
+#else
+  DECLARE_ALIGNED(16, uint8_t, recon_buf[TX_SIZES][64 * 64]);
+#endif  // CONFIG_VP9_HIGHBITDEPTH
 
-  const vpx_prob *tx_probs = get_tx_probs2(max_tx_size, xd, &cm->fc->tx_probs);
   assert(skip_prob > 0);
   s0 = vp9_cost_bit(skip_prob, 0);
   s1 = vp9_cost_bit(skip_prob, 1);
 
   if (cm->tx_mode == TX_MODE_SELECT) {
     start_tx = max_tx_size;
-    end_tx = 0;
+    end_tx = VPXMAX(start_tx - cpi->sf.tx_size_search_depth, 0);
+    if (bs > BLOCK_32X32) end_tx = VPXMIN(end_tx + 1, start_tx);
   } else {
     TX_SIZE chosen_tx_size =
         VPXMIN(max_tx_size, tx_mode_to_biggest_tx_size[cm->tx_mode]);
@@ -867,15 +979,17 @@
   }
 
   for (n = start_tx; n >= end_tx; n--) {
-    int r_tx_size = 0;
-    for (m = 0; m <= n - (n == (int)max_tx_size); m++) {
-      if (m == n)
-        r_tx_size += vp9_cost_zero(tx_probs[m]);
-      else
-        r_tx_size += vp9_cost_one(tx_probs[m]);
+    const int r_tx_size = cpi->tx_size_cost[max_tx_size - 1][tx_size_ctx][n];
+    if (recon) {
+      struct buf_2d this_recon;
+      this_recon.buf = recon_buf[n];
+      this_recon.stride = recon->stride;
+      txfm_rd_in_plane(cpi, x, &r[n][0], &d[n], &s[n], &sse[n], best_rd, 0, bs,
+                       n, cpi->sf.use_fast_coef_costing, &this_recon);
+    } else {
+      txfm_rd_in_plane(cpi, x, &r[n][0], &d[n], &s[n], &sse[n], best_rd, 0, bs,
+                       n, cpi->sf.use_fast_coef_costing, 0);
     }
-    txfm_rd_in_plane(cpi, x, &r[n][0], &d[n], &s[n], &sse[n], ref_best_rd, 0,
-                     bs, n, cpi->sf.use_fast_coef_costing);
     r[n][1] = r[n][0];
     if (r[n][0] < INT_MAX) {
       r[n][1] += r_tx_size;
@@ -917,11 +1031,25 @@
   *rate = r[mi->tx_size][cm->tx_mode == TX_MODE_SELECT];
   *skip = s[mi->tx_size];
   *psse = sse[mi->tx_size];
+  if (recon) {
+#if CONFIG_VP9_HIGHBITDEPTH
+    if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
+      memcpy(CONVERT_TO_SHORTPTR(recon->buf),
+             CONVERT_TO_SHORTPTR(recon_buf[mi->tx_size]),
+             64 * 64 * sizeof(uint16_t));
+    } else {
+#endif
+      memcpy(recon->buf, recon_buf[mi->tx_size], 64 * 64);
+#if CONFIG_VP9_HIGHBITDEPTH
+    }
+#endif
+  }
 }
 
 static void super_block_yrd(VP9_COMP *cpi, MACROBLOCK *x, int *rate,
                             int64_t *distortion, int *skip, int64_t *psse,
-                            BLOCK_SIZE bs, int64_t ref_best_rd) {
+                            BLOCK_SIZE bs, int64_t ref_best_rd,
+                            struct buf_2d *recon) {
   MACROBLOCKD *xd = &x->e_mbd;
   int64_t sse;
   int64_t *ret_sse = psse ? psse : &sse;
@@ -930,10 +1058,10 @@
 
   if (cpi->sf.tx_size_search_method == USE_LARGESTALL || xd->lossless) {
     choose_largest_tx_size(cpi, x, rate, distortion, skip, ret_sse, ref_best_rd,
-                           bs);
+                           bs, recon);
   } else {
     choose_tx_size_from_rd(cpi, x, rate, distortion, skip, ret_sse, ref_best_rd,
-                           bs);
+                           bs, recon);
   }
 }
 
@@ -1275,7 +1403,7 @@
     mic->mode = mode;
 
     super_block_yrd(cpi, x, &this_rate_tokenonly, &this_distortion, &s, NULL,
-                    bsize, best_rd);
+                    bsize, best_rd, /*recon = */ 0);
 
     if (this_rate_tokenonly == INT_MAX) continue;
 
@@ -1327,7 +1455,8 @@
 
   for (plane = 1; plane < MAX_MB_PLANE; ++plane) {
     txfm_rd_in_plane(cpi, x, &pnrate, &pndist, &pnskip, &pnsse, ref_best_rd,
-                     plane, bsize, uv_tx_size, cpi->sf.use_fast_coef_costing);
+                     plane, bsize, uv_tx_size, cpi->sf.use_fast_coef_costing,
+                     /*recon = */ 0);
     if (pnrate == INT_MAX) {
       is_cost_valid = 0;
       break;
@@ -1395,6 +1524,7 @@
   return best_rd;
 }
 
+#if !CONFIG_REALTIME_ONLY
 static int64_t rd_sbuv_dcpred(const VP9_COMP *cpi, MACROBLOCK *x, int *rate,
                               int *rate_tokenonly, int64_t *distortion,
                               int *skippable, BLOCK_SIZE bsize) {
@@ -1468,11 +1598,11 @@
       if (is_compound)
         this_mv[1].as_int = frame_mv[mode][mi->ref_frame[1]].as_int;
       break;
-    case ZEROMV:
+    default:
+      assert(mode == ZEROMV);
       this_mv[0].as_int = 0;
       if (is_compound) this_mv[1].as_int = 0;
       break;
-    default: break;
   }
 
   mi->bmi[i].as_mv[0].as_int = this_mv[0].as_int;
@@ -1606,6 +1736,7 @@
 
   return RDCOST(x->rdmult, x->rddiv, *labelyrate, *distortion);
 }
+#endif  // !CONFIG_REALTIME_ONLY
 
 typedef struct {
   int eobs;
@@ -1633,6 +1764,7 @@
   int mvthresh;
 } BEST_SEG_INFO;
 
+#if !CONFIG_REALTIME_ONLY
 static INLINE int mv_check_bounds(const MvLimits *mv_limits, const MV *mv) {
   return (mv->row >> 3) < mv_limits->row_min ||
          (mv->row >> 3) > mv_limits->row_max ||
@@ -1831,8 +1963,8 @@
       bestsme = cpi->find_fractional_mv_step(
           x, &tmp_mv, &ref_mv[id].as_mv, cpi->common.allow_high_precision_mv,
           x->errorperbit, &cpi->fn_ptr[bsize], 0,
-          cpi->sf.mv.subpel_iters_per_step, NULL, x->nmvjointcost, x->mvcost,
-          &dis, &sse, second_pred, pw, ph);
+          cpi->sf.mv.subpel_search_level, NULL, x->nmvjointcost, x->mvcost,
+          &dis, &sse, second_pred, pw, ph, cpi->sf.use_accurate_subpel_search);
     }
 
     // Restore the pointer to the first (possibly scaled) prediction buffer.
@@ -1886,6 +2018,8 @@
   const BLOCK_SIZE bsize = mi->sb_type;
   const int num_4x4_blocks_wide = num_4x4_blocks_wide_lookup[bsize];
   const int num_4x4_blocks_high = num_4x4_blocks_high_lookup[bsize];
+  const int pw = num_4x4_blocks_wide << 2;
+  const int ph = num_4x4_blocks_high << 2;
   ENTROPY_CONTEXT t_above[2], t_left[2];
   int subpelmv = 1, have_ref = 0;
   SPEED_FEATURES *const sf = &cpi->sf;
@@ -1994,8 +2128,11 @@
           mvp_full.col = bsi->mvp.as_mv.col >> 3;
 
           if (sf->adaptive_motion_search) {
-            mvp_full.row = x->pred_mv[mi->ref_frame[0]].row >> 3;
-            mvp_full.col = x->pred_mv[mi->ref_frame[0]].col >> 3;
+            if (x->pred_mv[mi->ref_frame[0]].row != INT16_MAX &&
+                x->pred_mv[mi->ref_frame[0]].col != INT16_MAX) {
+              mvp_full.row = x->pred_mv[mi->ref_frame[0]].row >> 3;
+              mvp_full.col = x->pred_mv[mi->ref_frame[0]].col >> 3;
+            }
             step_param = VPXMAX(step_param, 8);
           }
 
@@ -2017,16 +2154,16 @@
             cpi->find_fractional_mv_step(
                 x, new_mv, &bsi->ref_mv[0]->as_mv, cm->allow_high_precision_mv,
                 x->errorperbit, &cpi->fn_ptr[bsize], sf->mv.subpel_force_stop,
-                sf->mv.subpel_iters_per_step, cond_cost_list(cpi, cost_list),
+                sf->mv.subpel_search_level, cond_cost_list(cpi, cost_list),
                 x->nmvjointcost, x->mvcost, &distortion,
-                &x->pred_sse[mi->ref_frame[0]], NULL, 0, 0);
+                &x->pred_sse[mi->ref_frame[0]], NULL, pw, ph,
+                cpi->sf.use_accurate_subpel_search);
 
             // save motion search result for use in compound prediction
             seg_mvs[i][mi->ref_frame[0]].as_mv = *new_mv;
           }
 
-          if (sf->adaptive_motion_search)
-            x->pred_mv[mi->ref_frame[0]] = *new_mv;
+          x->pred_mv[mi->ref_frame[0]] = *new_mv;
 
           // restore src pointers
           mi_buf_restore(x, orig_src, orig_pre);
@@ -2321,6 +2458,22 @@
                 block_size);
 }
 
+#if CONFIG_NON_GREEDY_MV
+static int ref_frame_to_gf_rf_idx(int ref_frame) {
+  if (ref_frame == GOLDEN_FRAME) {
+    return 0;
+  }
+  if (ref_frame == LAST_FRAME) {
+    return 1;
+  }
+  if (ref_frame == ALTREF_FRAME) {
+    return 2;
+  }
+  assert(0);
+  return -1;
+}
+#endif
+
 static void single_motion_search(VP9_COMP *cpi, MACROBLOCK *x, BLOCK_SIZE bsize,
                                  int mi_row, int mi_col, int_mv *tmp_mv,
                                  int *rate_mv) {
@@ -2328,19 +2481,35 @@
   const VP9_COMMON *cm = &cpi->common;
   MODE_INFO *mi = xd->mi[0];
   struct buf_2d backup_yv12[MAX_MB_PLANE] = { { 0, 0 } };
-  int bestsme = INT_MAX;
   int step_param;
-  int sadpb = x->sadperbit16;
   MV mvp_full;
   int ref = mi->ref_frame[0];
   MV ref_mv = x->mbmi_ext->ref_mvs[ref][0].as_mv;
   const MvLimits tmp_mv_limits = x->mv_limits;
   int cost_list[5];
-
+  const int best_predmv_idx = x->mv_best_ref_index[ref];
   const YV12_BUFFER_CONFIG *scaled_ref_frame =
       vp9_get_scaled_ref_frame(cpi, ref);
-
+  const int pw = num_4x4_blocks_wide_lookup[bsize] << 2;
+  const int ph = num_4x4_blocks_high_lookup[bsize] << 2;
   MV pred_mv[3];
+
+  int bestsme = INT_MAX;
+#if CONFIG_NON_GREEDY_MV
+  int gf_group_idx = cpi->twopass.gf_group.index;
+  int gf_rf_idx = ref_frame_to_gf_rf_idx(ref);
+  BLOCK_SIZE square_bsize = get_square_block_size(bsize);
+  int_mv nb_full_mvs[NB_MVS_NUM] = { 0 };
+  MotionField *motion_field = vp9_motion_field_info_get_motion_field(
+      &cpi->motion_field_info, gf_group_idx, gf_rf_idx, square_bsize);
+  const int nb_full_mv_num =
+      vp9_prepare_nb_full_mvs(motion_field, mi_row, mi_col, nb_full_mvs);
+  const int lambda = (pw * ph) / 4;
+  assert(pw * ph == lambda << 2);
+#else   // CONFIG_NON_GREEDY_MV
+  int sadpb = x->sadperbit16;
+#endif  // CONFIG_NON_GREEDY_MV
+
   pred_mv[0] = x->mbmi_ext->ref_mvs[ref][0].as_mv;
   pred_mv[1] = x->mbmi_ext->ref_mvs[ref][1].as_mv;
   pred_mv[2] = x->pred_mv[ref];
@@ -2369,7 +2538,7 @@
   }
 
   if (cpi->sf.adaptive_motion_search && bsize < BLOCK_64X64) {
-    int boffset =
+    const int boffset =
         2 * (b_width_log2_lookup[BLOCK_64X64] -
              VPXMIN(b_height_log2_lookup[bsize], b_width_log2_lookup[bsize]));
     step_param = VPXMAX(step_param, boffset);
@@ -2387,8 +2556,8 @@
       int i;
       for (i = LAST_FRAME; i <= ALTREF_FRAME && cm->show_frame; ++i) {
         if ((x->pred_mv_sad[ref] >> 3) > x->pred_mv_sad[i]) {
-          x->pred_mv[ref].row = 0;
-          x->pred_mv[ref].col = 0;
+          x->pred_mv[ref].row = INT16_MAX;
+          x->pred_mv[ref].col = INT16_MAX;
           tmp_mv->as_int = INVALID_MV;
 
           if (scaled_ref_frame) {
@@ -2406,14 +2575,65 @@
   // after full-pixel motion search.
   vp9_set_mv_search_range(&x->mv_limits, &ref_mv);
 
-  mvp_full = pred_mv[x->mv_best_ref_index[ref]];
-
+  mvp_full = pred_mv[best_predmv_idx];
   mvp_full.col >>= 3;
   mvp_full.row >>= 3;
 
+#if CONFIG_NON_GREEDY_MV
+  bestsme = vp9_full_pixel_diamond_new(cpi, x, bsize, &mvp_full, step_param,
+                                       lambda, 1, nb_full_mvs, nb_full_mv_num,
+                                       &tmp_mv->as_mv);
+#else   // CONFIG_NON_GREEDY_MV
   bestsme = vp9_full_pixel_search(
       cpi, x, bsize, &mvp_full, step_param, cpi->sf.mv.search_method, sadpb,
       cond_cost_list(cpi, cost_list), &ref_mv, &tmp_mv->as_mv, INT_MAX, 1);
+#endif  // CONFIG_NON_GREEDY_MV
+
+  if (cpi->sf.enhanced_full_pixel_motion_search) {
+    int i;
+    for (i = 0; i < 3; ++i) {
+      int this_me;
+      MV this_mv;
+      int diff_row;
+      int diff_col;
+      int step;
+
+      if (pred_mv[i].row == INT16_MAX || pred_mv[i].col == INT16_MAX) continue;
+      if (i == best_predmv_idx) continue;
+
+      diff_row = ((int)pred_mv[i].row -
+                  pred_mv[i > 0 ? (i - 1) : best_predmv_idx].row) >>
+                 3;
+      diff_col = ((int)pred_mv[i].col -
+                  pred_mv[i > 0 ? (i - 1) : best_predmv_idx].col) >>
+                 3;
+      if (diff_row == 0 && diff_col == 0) continue;
+      if (diff_row < 0) diff_row = -diff_row;
+      if (diff_col < 0) diff_col = -diff_col;
+      step = get_msb((diff_row + diff_col + 1) >> 1);
+      if (step <= 0) continue;
+
+      mvp_full = pred_mv[i];
+      mvp_full.col >>= 3;
+      mvp_full.row >>= 3;
+#if CONFIG_NON_GREEDY_MV
+      this_me = vp9_full_pixel_diamond_new(
+          cpi, x, bsize, &mvp_full,
+          VPXMAX(step_param, MAX_MVSEARCH_STEPS - step), lambda, 1, nb_full_mvs,
+          nb_full_mv_num, &this_mv);
+#else   // CONFIG_NON_GREEDY_MV
+      this_me = vp9_full_pixel_search(
+          cpi, x, bsize, &mvp_full,
+          VPXMAX(step_param, MAX_MVSEARCH_STEPS - step),
+          cpi->sf.mv.search_method, sadpb, cond_cost_list(cpi, cost_list),
+          &ref_mv, &this_mv, INT_MAX, 1);
+#endif  // CONFIG_NON_GREEDY_MV
+      if (this_me < bestsme) {
+        tmp_mv->as_mv = this_mv;
+        bestsme = this_me;
+      }
+    }
+  }
 
   x->mv_limits = tmp_mv_limits;
 
@@ -2422,13 +2642,14 @@
     cpi->find_fractional_mv_step(
         x, &tmp_mv->as_mv, &ref_mv, cm->allow_high_precision_mv, x->errorperbit,
         &cpi->fn_ptr[bsize], cpi->sf.mv.subpel_force_stop,
-        cpi->sf.mv.subpel_iters_per_step, cond_cost_list(cpi, cost_list),
-        x->nmvjointcost, x->mvcost, &dis, &x->pred_sse[ref], NULL, 0, 0);
+        cpi->sf.mv.subpel_search_level, cond_cost_list(cpi, cost_list),
+        x->nmvjointcost, x->mvcost, &dis, &x->pred_sse[ref], NULL, pw, ph,
+        cpi->sf.use_accurate_subpel_search);
   }
   *rate_mv = vp9_mv_bit_cost(&tmp_mv->as_mv, &ref_mv, x->nmvjointcost,
                              x->mvcost, MV_COST_WEIGHT);
 
-  if (cpi->sf.adaptive_motion_search) x->pred_mv[ref] = tmp_mv->as_mv;
+  x->pred_mv[ref] = tmp_mv->as_mv;
 
   if (scaled_ref_frame) {
     int i;
@@ -2453,23 +2674,59 @@
 // However, once established that vector may be usable through the nearest and
 // near mv modes to reduce distortion in subsequent blocks and also improve
 // visual quality.
-static int discount_newmv_test(const VP9_COMP *cpi, int this_mode,
-                               int_mv this_mv,
-                               int_mv (*mode_mv)[MAX_REF_FRAMES],
-                               int ref_frame) {
+static int discount_newmv_test(VP9_COMP *cpi, int this_mode, int_mv this_mv,
+                               int_mv (*mode_mv)[MAX_REF_FRAMES], int ref_frame,
+                               int mi_row, int mi_col, BLOCK_SIZE bsize) {
+#if CONFIG_NON_GREEDY_MV
+  (void)mode_mv;
+  (void)this_mv;
+  if (this_mode == NEWMV && bsize >= BLOCK_8X8 && cpi->tpl_ready) {
+    const int gf_group_idx = cpi->twopass.gf_group.index;
+    const int gf_rf_idx = ref_frame_to_gf_rf_idx(ref_frame);
+    const TplDepFrame tpl_frame = cpi->tpl_stats[gf_group_idx];
+    const MotionField *motion_field = vp9_motion_field_info_get_motion_field(
+        &cpi->motion_field_info, gf_group_idx, gf_rf_idx, cpi->tpl_bsize);
+    const int tpl_block_mi_h = num_8x8_blocks_high_lookup[cpi->tpl_bsize];
+    const int tpl_block_mi_w = num_8x8_blocks_wide_lookup[cpi->tpl_bsize];
+    const int tpl_mi_row = mi_row - (mi_row % tpl_block_mi_h);
+    const int tpl_mi_col = mi_col - (mi_col % tpl_block_mi_w);
+    const int mv_mode =
+        tpl_frame
+            .mv_mode_arr[gf_rf_idx][tpl_mi_row * tpl_frame.stride + tpl_mi_col];
+    if (mv_mode == NEW_MV_MODE) {
+      int_mv tpl_new_mv =
+          vp9_motion_field_mi_get_mv(motion_field, tpl_mi_row, tpl_mi_col);
+      int row_diff = abs(tpl_new_mv.as_mv.row - this_mv.as_mv.row);
+      int col_diff = abs(tpl_new_mv.as_mv.col - this_mv.as_mv.col);
+      if (VPXMAX(row_diff, col_diff) <= 8) {
+        return 1;
+      } else {
+        return 0;
+      }
+    } else {
+      return 0;
+    }
+  } else {
+    return 0;
+  }
+#else
+  (void)mi_row;
+  (void)mi_col;
+  (void)bsize;
   return (!cpi->rc.is_src_frame_alt_ref && (this_mode == NEWMV) &&
           (this_mv.as_int != 0) &&
           ((mode_mv[NEARESTMV][ref_frame].as_int == 0) ||
            (mode_mv[NEARESTMV][ref_frame].as_int == INVALID_MV)) &&
           ((mode_mv[NEARMV][ref_frame].as_int == 0) ||
            (mode_mv[NEARMV][ref_frame].as_int == INVALID_MV)));
+#endif
 }
 
 static int64_t handle_inter_mode(
     VP9_COMP *cpi, MACROBLOCK *x, BLOCK_SIZE bsize, int *rate2,
     int64_t *distortion, int *skippable, int *rate_y, int *rate_uv,
-    int *disable_skip, int_mv (*mode_mv)[MAX_REF_FRAMES], int mi_row,
-    int mi_col, int_mv single_newmv[MAX_REF_FRAMES],
+    struct buf_2d *recon, int *disable_skip, int_mv (*mode_mv)[MAX_REF_FRAMES],
+    int mi_row, int mi_col, int_mv single_newmv[MAX_REF_FRAMES],
     INTERP_FILTER (*single_filter)[MAX_REF_FRAMES],
     int (*single_skippable)[MAX_REF_FRAMES], int64_t *psse,
     const int64_t ref_best_rd, int64_t *mask_filter, int64_t filter_cache[]) {
@@ -2575,7 +2832,8 @@
       // under certain circumstances where we want to help initiate a weak
       // motion field, where the distortion gain for a single block may not
       // be enough to overcome the cost of a new mv.
-      if (discount_newmv_test(cpi, this_mode, tmp_mv, mode_mv, refs[0])) {
+      if (discount_newmv_test(cpi, this_mode, tmp_mv, mode_mv, refs[0], mi_row,
+                              mi_col, bsize)) {
         *rate2 += VPXMAX((rate_mv / NEW_MV_DISCOUNT_FACTOR), 1);
       } else {
         *rate2 += rate_mv;
@@ -2608,8 +2866,8 @@
   //
   // Under some circumstances we discount the cost of new mv mode to encourage
   // initiation of a motion field.
-  if (discount_newmv_test(cpi, this_mode, frame_mv[refs[0]], mode_mv,
-                          refs[0])) {
+  if (discount_newmv_test(cpi, this_mode, frame_mv[refs[0]], mode_mv, refs[0],
+                          mi_row, mi_col, bsize)) {
     *rate2 +=
         VPXMIN(cost_mv_ref(cpi, this_mode, mbmi_ext->mode_context[refs[0]]),
                cost_mv_ref(cpi, NEARESTMV, mbmi_ext->mode_context[refs[0]]));
@@ -2773,7 +3031,7 @@
   memcpy(x->skip_txfm, skip_txfm, sizeof(skip_txfm));
   memcpy(x->bsse, bsse, sizeof(bsse));
 
-  if (!skip_txfm_sb) {
+  if (!skip_txfm_sb || xd->lossless) {
     int skippable_y, skippable_uv;
     int64_t sseuv = INT64_MAX;
     int64_t rdcosty = INT64_MAX;
@@ -2781,7 +3039,7 @@
     // Y cost and distortion
     vp9_subtract_plane(x, bsize, 0);
     super_block_yrd(cpi, x, rate_y, &distortion_y, &skippable_y, psse, bsize,
-                    ref_best_rd);
+                    ref_best_rd, recon);
 
     if (*rate_y == INT_MAX) {
       *rate2 = INT_MAX;
@@ -2823,6 +3081,7 @@
   restore_dst_buf(xd, orig_dst, orig_dst_stride);
   return 0;  // The rate-distortion cost will be re-calculated by caller.
 }
+#endif  // !CONFIG_REALTIME_ONLY
 
 void vp9_rd_pick_intra_mode_sb(VP9_COMP *cpi, MACROBLOCK *x, RD_COST *rd_cost,
                                BLOCK_SIZE bsize, PICK_MODE_CONTEXT *ctx,
@@ -2876,85 +3135,97 @@
   rd_cost->rdcost = RDCOST(x->rdmult, x->rddiv, rd_cost->rate, rd_cost->dist);
 }
 
+#if !CONFIG_REALTIME_ONLY
 // This function is designed to apply a bias or adjustment to an rd value based
 // on the relative variance of the source and reconstruction.
-#define VERY_LOW_VAR_THRESH 2
-#define LOW_VAR_THRESH 5
-#define VAR_MULT 100
-static unsigned int max_var_adjust[VP9E_CONTENT_INVALID] = { 16, 16, 100 };
+#define LOW_VAR_THRESH 250
+#define VAR_MULT 250
+static unsigned int max_var_adjust[VP9E_CONTENT_INVALID] = { 16, 16, 250 };
 
 static void rd_variance_adjustment(VP9_COMP *cpi, MACROBLOCK *x,
                                    BLOCK_SIZE bsize, int64_t *this_rd,
+                                   struct buf_2d *recon,
                                    MV_REFERENCE_FRAME ref_frame,
-                                   unsigned int source_variance) {
+                                   MV_REFERENCE_FRAME second_ref_frame,
+                                   PREDICTION_MODE this_mode) {
   MACROBLOCKD *const xd = &x->e_mbd;
   unsigned int rec_variance;
   unsigned int src_variance;
   unsigned int src_rec_min;
-  unsigned int absvar_diff = 0;
+  unsigned int var_diff = 0;
   unsigned int var_factor = 0;
   unsigned int adj_max;
+  unsigned int low_var_thresh = LOW_VAR_THRESH;
+  const int bw = num_8x8_blocks_wide_lookup[bsize];
+  const int bh = num_8x8_blocks_high_lookup[bsize];
   vp9e_tune_content content_type = cpi->oxcf.content;
 
   if (*this_rd == INT64_MAX) return;
 
 #if CONFIG_VP9_HIGHBITDEPTH
   if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
-    if (source_variance > 0) {
-      rec_variance = vp9_high_get_sby_perpixel_variance(cpi, &xd->plane[0].dst,
-                                                        bsize, xd->bd);
-      src_variance = source_variance;
-    } else {
-      rec_variance =
-          vp9_high_get_sby_variance(cpi, &xd->plane[0].dst, bsize, xd->bd);
-      src_variance =
-          vp9_high_get_sby_variance(cpi, &x->plane[0].src, bsize, xd->bd);
-    }
+    rec_variance = vp9_high_get_sby_variance(cpi, recon, bsize, xd->bd);
+    src_variance =
+        vp9_high_get_sby_variance(cpi, &x->plane[0].src, bsize, xd->bd);
   } else {
-    if (source_variance > 0) {
-      rec_variance =
-          vp9_get_sby_perpixel_variance(cpi, &xd->plane[0].dst, bsize);
-      src_variance = source_variance;
-    } else {
-      rec_variance = vp9_get_sby_variance(cpi, &xd->plane[0].dst, bsize);
-      src_variance = vp9_get_sby_variance(cpi, &x->plane[0].src, bsize);
-    }
-  }
-#else
-  if (source_variance > 0) {
-    rec_variance = vp9_get_sby_perpixel_variance(cpi, &xd->plane[0].dst, bsize);
-    src_variance = source_variance;
-  } else {
-    rec_variance = vp9_get_sby_variance(cpi, &xd->plane[0].dst, bsize);
+    rec_variance = vp9_get_sby_variance(cpi, recon, bsize);
     src_variance = vp9_get_sby_variance(cpi, &x->plane[0].src, bsize);
   }
+#else
+  rec_variance = vp9_get_sby_variance(cpi, recon, bsize);
+  src_variance = vp9_get_sby_variance(cpi, &x->plane[0].src, bsize);
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
+  // Scale based on area in 8x8 blocks
+  rec_variance /= (bw * bh);
+  src_variance /= (bw * bh);
+
+  if (content_type == VP9E_CONTENT_FILM) {
+    if (cpi->oxcf.pass == 2) {
+      // Adjust low variance threshold based on estimated group noise energy.
+      double noise_factor =
+          (double)cpi->twopass.gf_group.group_noise_energy / SECTION_NOISE_DEF;
+      low_var_thresh = (unsigned int)(low_var_thresh * noise_factor);
+
+      if (ref_frame == INTRA_FRAME) {
+        low_var_thresh *= 2;
+        if (this_mode == DC_PRED) low_var_thresh *= 5;
+      } else if (second_ref_frame > INTRA_FRAME) {
+        low_var_thresh *= 2;
+      }
+    }
+  } else {
+    low_var_thresh = LOW_VAR_THRESH / 2;
+  }
+
   // Lower of source (raw per pixel value) and recon variance. Note that
   // if the source per pixel is 0 then the recon value here will not be per
   // pixel (see above) so will likely be much larger.
-  src_rec_min = VPXMIN(source_variance, rec_variance);
+  src_rec_min = VPXMIN(src_variance, rec_variance);
 
-  if (src_rec_min > LOW_VAR_THRESH) return;
+  if (src_rec_min > low_var_thresh) return;
 
-  absvar_diff = (src_variance > rec_variance) ? (src_variance - rec_variance)
-                                              : (rec_variance - src_variance);
+  // We care more when the reconstruction has lower variance so give this case
+  // a stronger weighting.
+  var_diff = (src_variance > rec_variance) ? (src_variance - rec_variance) * 2
+                                           : (rec_variance - src_variance) / 2;
 
   adj_max = max_var_adjust[content_type];
 
   var_factor =
-      (unsigned int)((int64_t)VAR_MULT * absvar_diff) / VPXMAX(1, src_variance);
+      (unsigned int)((int64_t)VAR_MULT * var_diff) / VPXMAX(1, src_variance);
   var_factor = VPXMIN(adj_max, var_factor);
 
+  if ((content_type == VP9E_CONTENT_FILM) &&
+      ((ref_frame == INTRA_FRAME) || (second_ref_frame > INTRA_FRAME))) {
+    var_factor *= 2;
+  }
+
   *this_rd += (*this_rd * var_factor) / 100;
 
-  if (content_type == VP9E_CONTENT_FILM) {
-    if (src_rec_min <= VERY_LOW_VAR_THRESH) {
-      if (ref_frame == INTRA_FRAME) *this_rd *= 2;
-      if (bsize > 6) *this_rd *= 2;
-    }
-  }
+  (void)xd;
 }
+#endif  // !CONFIG_REALTIME_ONLY
 
 // Do we have an internal image edge (e.g. formatting bars).
 int vp9_internal_image_edge(VP9_COMP *cpi) {
@@ -3025,6 +3296,7 @@
          vp9_active_v_edge(cpi, mi_col, MI_BLOCK_SIZE);
 }
 
+#if !CONFIG_REALTIME_ONLY
 void vp9_rd_pick_inter_mode_sb(VP9_COMP *cpi, TileDataEnc *tile_data,
                                MACROBLOCK *x, int mi_row, int mi_col,
                                RD_COST *rd_cost, BLOCK_SIZE bsize,
@@ -3068,20 +3340,36 @@
   const int intra_cost_penalty =
       vp9_get_intra_cost_penalty(cpi, bsize, cm->base_qindex, cm->y_dc_delta_q);
   int best_skip2 = 0;
-  uint8_t ref_frame_skip_mask[2] = { 0 };
+  uint8_t ref_frame_skip_mask[2] = { 0, 1 };
   uint16_t mode_skip_mask[MAX_REF_FRAMES] = { 0 };
   int mode_skip_start = sf->mode_skip_start + 1;
   const int *const rd_threshes = rd_opt->threshes[segment_id][bsize];
   const int *const rd_thresh_freq_fact = tile_data->thresh_freq_fact[bsize];
   int64_t mode_threshold[MAX_MODES];
-  int *tile_mode_map = tile_data->mode_map[bsize];
-  int mode_map[MAX_MODES];  // Maintain mode_map information locally to avoid
-                            // lock mechanism involved with reads from
-                            // tile_mode_map
+  int8_t *tile_mode_map = tile_data->mode_map[bsize];
+  int8_t mode_map[MAX_MODES];  // Maintain mode_map information locally to avoid
+                               // lock mechanism involved with reads from
+                               // tile_mode_map
   const int mode_search_skip_flags = sf->mode_search_skip_flags;
+  const int is_rect_partition =
+      num_4x4_blocks_wide_lookup[bsize] != num_4x4_blocks_high_lookup[bsize];
   int64_t mask_filter = 0;
   int64_t filter_cache[SWITCHABLE_FILTER_CONTEXTS];
 
+  struct buf_2d *recon;
+  struct buf_2d recon_buf;
+#if CONFIG_VP9_HIGHBITDEPTH
+  DECLARE_ALIGNED(16, uint16_t, recon16[64 * 64]);
+  recon_buf.buf = xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH
+                      ? CONVERT_TO_BYTEPTR(recon16)
+                      : (uint8_t *)recon16;
+#else
+  DECLARE_ALIGNED(16, uint8_t, recon8[64 * 64]);
+  recon_buf.buf = recon8;
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+  recon_buf.stride = 64;
+  recon = cpi->oxcf.content == VP9E_CONTENT_FILM ? &recon_buf : 0;
+
   vp9_zero(best_mbmode);
 
   x->skip_encode = sf->skip_encode_frame && x->q_index < QIDX_SKIP_THRESH;
@@ -3107,7 +3395,8 @@
 
   for (ref_frame = LAST_FRAME; ref_frame <= ALTREF_FRAME; ++ref_frame) {
     x->pred_mv_sad[ref_frame] = INT_MAX;
-    if (cpi->ref_frame_flags & flag_list[ref_frame]) {
+    if ((cpi->ref_frame_flags & flag_list[ref_frame]) &&
+        !(is_rect_partition && (ctx->skip_ref_frame_mask & (1 << ref_frame)))) {
       assert(get_ref_frame_buffer(cpi, ref_frame) != NULL);
       setup_buffer_inter(cpi, x, ref_frame, bsize, mi_row, mi_col,
                          frame_mv[NEARESTMV], frame_mv[NEARMV], yv12_mb);
@@ -3163,7 +3452,7 @@
   if (cpi->rc.is_src_frame_alt_ref) {
     if (sf->alt_ref_search_fp) {
       mode_skip_mask[ALTREF_FRAME] = 0;
-      ref_frame_skip_mask[0] = ~(1 << ALTREF_FRAME);
+      ref_frame_skip_mask[0] = ~(1 << ALTREF_FRAME) & 0xff;
       ref_frame_skip_mask[1] = SECOND_REF_FRAME_MASK;
     }
   }
@@ -3230,18 +3519,21 @@
 
     vp9_zero(x->sum_y_eobs);
 
+    if (is_rect_partition) {
+      if (ctx->skip_ref_frame_mask & (1 << ref_frame)) continue;
+      if (second_ref_frame > 0 &&
+          (ctx->skip_ref_frame_mask & (1 << second_ref_frame)))
+        continue;
+    }
+
     // Look at the reference frame of the best mode so far and set the
     // skip mask to look at a subset of the remaining modes.
     if (midx == mode_skip_start && best_mode_index >= 0) {
       switch (best_mbmode.ref_frame[0]) {
         case INTRA_FRAME: break;
-        case LAST_FRAME:
-          ref_frame_skip_mask[0] |= LAST_FRAME_MODE_MASK;
-          ref_frame_skip_mask[1] |= SECOND_REF_FRAME_MASK;
-          break;
+        case LAST_FRAME: ref_frame_skip_mask[0] |= LAST_FRAME_MODE_MASK; break;
         case GOLDEN_FRAME:
           ref_frame_skip_mask[0] |= GOLDEN_FRAME_MODE_MASK;
-          ref_frame_skip_mask[1] |= SECOND_REF_FRAME_MASK;
           break;
         case ALTREF_FRAME: ref_frame_skip_mask[0] |= ALT_REF_MODE_MASK; break;
         case NONE:
@@ -3315,6 +3607,10 @@
     if (comp_pred) {
       if (!cpi->allow_comp_inter_inter) continue;
 
+      if (cm->ref_frame_sign_bias[ref_frame] ==
+          cm->ref_frame_sign_bias[second_ref_frame])
+        continue;
+
       // Skip compound inter modes if ARF is not available.
       if (!(cpi->ref_frame_flags & flag_list[second_ref_frame])) continue;
 
@@ -3341,7 +3637,8 @@
         // Disable intra modes other than DC_PRED for blocks with low variance
         // Threshold for intra skipping based on source variance
         // TODO(debargha): Specialize the threshold for super block sizes
-        const unsigned int skip_intra_var_thresh = 64;
+        const unsigned int skip_intra_var_thresh =
+            (cpi->oxcf.content == VP9E_CONTENT_FILM) ? 0 : 64;
         if ((mode_search_skip_flags & FLAG_SKIP_INTRA_LOWVAR) &&
             x->source_variance < skip_intra_var_thresh)
           continue;
@@ -3387,7 +3684,7 @@
       struct macroblockd_plane *const pd = &xd->plane[1];
       memset(x->skip_txfm, 0, sizeof(x->skip_txfm));
       super_block_yrd(cpi, x, &rate_y, &distortion_y, &skippable, NULL, bsize,
-                      best_rd);
+                      best_rd, recon);
       if (rate_y == INT_MAX) continue;
 
       uv_tx = uv_txsize_lookup[bsize][mi->tx_size][pd->subsampling_x]
@@ -3410,7 +3707,7 @@
     } else {
       this_rd = handle_inter_mode(
           cpi, x, bsize, &rate2, &distortion2, &skippable, &rate_y, &rate_uv,
-          &disable_skip, frame_mv, mi_row, mi_col, single_newmv,
+          recon, &disable_skip, frame_mv, mi_row, mi_col, single_newmv,
           single_inter_filter, single_skippable, &total_sse, best_rd,
           &mask_filter, filter_cache);
       if (this_rd == INT64_MAX) continue;
@@ -3439,7 +3736,8 @@
 
         // Cost the skip mb case
         rate2 += skip_cost1;
-      } else if (ref_frame != INTRA_FRAME && !xd->lossless) {
+      } else if (ref_frame != INTRA_FRAME && !xd->lossless &&
+                 !cpi->oxcf.sharpness) {
         if (RDCOST(x->rdmult, x->rddiv, rate_y + rate_uv + skip_cost0,
                    distortion2) <
             RDCOST(x->rdmult, x->rddiv, skip_cost1, total_sse)) {
@@ -3463,10 +3761,39 @@
       this_rd = RDCOST(x->rdmult, x->rddiv, rate2, distortion2);
     }
 
-    // Apply an adjustment to the rd value based on the similarity of the
-    // source variance and reconstructed variance.
-    rd_variance_adjustment(cpi, x, bsize, &this_rd, ref_frame,
-                           x->source_variance);
+    if (recon) {
+      // In film mode bias against DC pred and other intra if there is a
+      // significant difference between the variance of the sub blocks in the
+      // the source. Also apply some bias against compound modes which also
+      // tend to blur fine texture such as film grain over time.
+      //
+      // The sub block test here acts in the case where one or more sub
+      // blocks have relatively high variance but others have relatively low
+      // variance. Here the high variance sub blocks may push the
+      // total variance for the current block size over the thresholds
+      // used in rd_variance_adjustment() below.
+      if (cpi->oxcf.content == VP9E_CONTENT_FILM) {
+        if (bsize >= BLOCK_16X16) {
+          int min_energy, max_energy;
+          vp9_get_sub_block_energy(cpi, x, mi_row, mi_col, bsize, &min_energy,
+                                   &max_energy);
+          if (max_energy > min_energy) {
+            if (ref_frame == INTRA_FRAME) {
+              if (this_mode == DC_PRED)
+                this_rd += (this_rd * (max_energy - min_energy));
+              else
+                this_rd += (this_rd * (max_energy - min_energy)) / 4;
+            } else if (second_ref_frame > INTRA_FRAME) {
+              this_rd += this_rd / 4;
+            }
+          }
+        }
+      }
+      // Apply an adjustment to the rd value based on the similarity of the
+      // source variance and reconstructed variance.
+      rd_variance_adjustment(cpi, x, bsize, &this_rd, recon, ref_frame,
+                             second_ref_frame, this_mode);
+    }
 
     if (ref_frame == INTRA_FRAME) {
       // Keep record of best intra rd
@@ -3618,9 +3945,13 @@
   }
 
   if (best_mode_index < 0 || best_rd >= best_rd_so_far) {
-    // If adaptive interp filter is enabled, then the current leaf node of 8x8
-    // data is needed for sub8x8. Hence preserve the context.
+// If adaptive interp filter is enabled, then the current leaf node of 8x8
+// data is needed for sub8x8. Hence preserve the context.
+#if CONFIG_CONSISTENT_RECODE
+    if (bsize == BLOCK_8X8) ctx->mic = *xd->mi[0];
+#else
     if (cpi->row_mt && bsize == BLOCK_8X8) ctx->mic = *xd->mi[0];
+#endif
     rd_cost->rate = INT_MAX;
     rd_cost->rdcost = INT64_MAX;
     return;
@@ -3896,7 +4227,8 @@
 #if CONFIG_BETTER_HW_COMPATIBILITY
     // forbid 8X4 and 4X8 partitions if any reference frame is scaled.
     if (bsize == BLOCK_8X4 || bsize == BLOCK_4X8) {
-      int ref_scaled = vp9_is_scaled(&cm->frame_refs[ref_frame - 1].sf);
+      int ref_scaled = ref_frame > INTRA_FRAME &&
+                       vp9_is_scaled(&cm->frame_refs[ref_frame - 1].sf);
       if (second_ref_frame > INTRA_FRAME)
         ref_scaled += vp9_is_scaled(&cm->frame_refs[second_ref_frame - 1].sf);
       if (ref_scaled) continue;
@@ -3942,6 +4274,11 @@
     comp_pred = second_ref_frame > INTRA_FRAME;
     if (comp_pred) {
       if (!cpi->allow_comp_inter_inter) continue;
+
+      if (cm->ref_frame_sign_bias[ref_frame] ==
+          cm->ref_frame_sign_bias[second_ref_frame])
+        continue;
+
       if (!(cpi->ref_frame_flags & flag_list[second_ref_frame])) continue;
       // Do not allow compound prediction if the segment level reference frame
       // feature is in use as in this case there can only be one reference.
@@ -4396,6 +4733,13 @@
     mi->mv[0].as_int = xd->mi[0]->bmi[3].as_mv[0].as_int;
     mi->mv[1].as_int = xd->mi[0]->bmi[3].as_mv[1].as_int;
   }
+  // If the second reference does not exist, set the corresponding mv to zero.
+  if (mi->ref_frame[1] == NONE) {
+    mi->mv[1].as_int = 0;
+    for (i = 0; i < 4; ++i) {
+      mi->bmi[i].as_mv[1].as_int = 0;
+    }
+  }
 
   for (i = 0; i < REFERENCE_MODES; ++i) {
     if (best_pred_rd[i] == INT64_MAX)
@@ -4420,3 +4764,4 @@
   store_coding_context(x, ctx, best_ref_index, best_pred_diff, best_filter_diff,
                        0);
 }
+#endif  // !CONFIG_REALTIME_ONLY
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rdopt.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rdopt.h
index 795c91a..e1147ff 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rdopt.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_rdopt.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_RDOPT_H_
-#define VP9_ENCODER_VP9_RDOPT_H_
+#ifndef VPX_VP9_ENCODER_VP9_RDOPT_H_
+#define VPX_VP9_ENCODER_VP9_RDOPT_H_
 
 #include "vp9/common/vp9_blockd.h"
 
@@ -29,6 +29,7 @@
                                struct RD_COST *rd_cost, BLOCK_SIZE bsize,
                                PICK_MODE_CONTEXT *ctx, int64_t best_rd);
 
+#if !CONFIG_REALTIME_ONLY
 void vp9_rd_pick_inter_mode_sb(struct VP9_COMP *cpi,
                                struct TileDataEnc *tile_data,
                                struct macroblock *x, int mi_row, int mi_col,
@@ -39,21 +40,24 @@
     struct VP9_COMP *cpi, struct TileDataEnc *tile_data, struct macroblock *x,
     struct RD_COST *rd_cost, BLOCK_SIZE bsize, PICK_MODE_CONTEXT *ctx,
     int64_t best_rd_so_far);
+#endif
 
 int vp9_internal_image_edge(struct VP9_COMP *cpi);
 int vp9_active_h_edge(struct VP9_COMP *cpi, int mi_row, int mi_step);
 int vp9_active_v_edge(struct VP9_COMP *cpi, int mi_col, int mi_step);
 int vp9_active_edge_sb(struct VP9_COMP *cpi, int mi_row, int mi_col);
 
+#if !CONFIG_REALTIME_ONLY
 void vp9_rd_pick_inter_mode_sub8x8(struct VP9_COMP *cpi,
                                    struct TileDataEnc *tile_data,
                                    struct macroblock *x, int mi_row, int mi_col,
                                    struct RD_COST *rd_cost, BLOCK_SIZE bsize,
                                    PICK_MODE_CONTEXT *ctx,
                                    int64_t best_rd_so_far);
+#endif
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_RDOPT_H_
+#endif  // VPX_VP9_ENCODER_VP9_RDOPT_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_resize.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_resize.c
index f6c4aad..7486dee 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_resize.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_resize.c
@@ -424,11 +424,11 @@
                       int in_stride, uint8_t *output, int height2, int width2,
                       int out_stride) {
   int i;
-  uint8_t *intbuf = (uint8_t *)malloc(sizeof(uint8_t) * width2 * height);
+  uint8_t *intbuf = (uint8_t *)calloc(width2 * height, sizeof(*intbuf));
   uint8_t *tmpbuf =
-      (uint8_t *)malloc(sizeof(uint8_t) * (width < height ? height : width));
-  uint8_t *arrbuf = (uint8_t *)malloc(sizeof(uint8_t) * height);
-  uint8_t *arrbuf2 = (uint8_t *)malloc(sizeof(uint8_t) * height2);
+      (uint8_t *)calloc(width < height ? height : width, sizeof(*tmpbuf));
+  uint8_t *arrbuf = (uint8_t *)calloc(height, sizeof(*arrbuf));
+  uint8_t *arrbuf2 = (uint8_t *)calloc(height2, sizeof(*arrbuf2));
   if (intbuf == NULL || tmpbuf == NULL || arrbuf == NULL || arrbuf2 == NULL)
     goto Error;
   assert(width > 0);
@@ -506,10 +506,12 @@
       sub_pel = (y >> (INTERP_PRECISION_BITS - SUBPEL_BITS)) & SUBPEL_MASK;
       filter = interp_filters[sub_pel];
       sum = 0;
-      for (k = 0; k < INTERP_TAPS; ++k)
+      for (k = 0; k < INTERP_TAPS; ++k) {
+        assert(int_pel - INTERP_TAPS / 2 + 1 + k < inlength);
         sum += filter[k] * input[(int_pel - INTERP_TAPS / 2 + 1 + k < 0
                                       ? 0
                                       : int_pel - INTERP_TAPS / 2 + 1 + k)];
+      }
       *optr++ = clip_pixel_highbd(ROUND_POWER_OF_TWO(sum, FILTER_BITS), bd);
     }
     // Middle part.
@@ -720,6 +722,10 @@
   uint16_t *arrbuf2 = (uint16_t *)malloc(sizeof(uint16_t) * height2);
   if (intbuf == NULL || tmpbuf == NULL || arrbuf == NULL || arrbuf2 == NULL)
     goto Error;
+  assert(width > 0);
+  assert(height > 0);
+  assert(width2 > 0);
+  assert(height2 > 0);
   for (i = 0; i < height; ++i) {
     highbd_resize_multistep(CONVERT_TO_SHORTPTR(input + in_stride * i), width,
                             intbuf + width2 * i, width2, tmpbuf, bd);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_resize.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_resize.h
index d3282ee..5d4ce97 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_resize.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_resize.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_RESIZE_H_
-#define VP9_ENCODER_VP9_RESIZE_H_
+#ifndef VPX_VP9_ENCODER_VP9_RESIZE_H_
+#define VPX_VP9_ENCODER_VP9_RESIZE_H_
 
 #include <stdio.h>
 #include "vpx/vpx_integer.h"
@@ -65,4 +65,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_RESIZE_H_
+#endif  // VPX_VP9_ENCODER_VP9_RESIZE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_segmentation.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_segmentation.c
index 4a5a68e..a163297 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_segmentation.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_segmentation.c
@@ -9,6 +9,7 @@
  */
 
 #include <limits.h>
+#include <math.h>
 
 #include "vpx_mem/vpx_mem.h"
 
@@ -46,6 +47,59 @@
   seg->feature_data[segment_id][feature_id] = 0;
 }
 
+void vp9_psnr_aq_mode_setup(struct segmentation *seg) {
+  int i;
+
+  vp9_enable_segmentation(seg);
+  vp9_clearall_segfeatures(seg);
+  seg->abs_delta = SEGMENT_DELTADATA;
+
+  for (i = 0; i < MAX_SEGMENTS; ++i) {
+    vp9_set_segdata(seg, i, SEG_LVL_ALT_Q, 2 * (i - (MAX_SEGMENTS / 2)));
+    vp9_enable_segfeature(seg, i, SEG_LVL_ALT_Q);
+  }
+}
+
+void vp9_perceptual_aq_mode_setup(struct VP9_COMP *cpi,
+                                  struct segmentation *seg) {
+  const VP9_COMMON *cm = &cpi->common;
+  const int seg_counts = cpi->kmeans_ctr_num;
+  const int base_qindex = cm->base_qindex;
+  const double base_qstep = vp9_convert_qindex_to_q(base_qindex, cm->bit_depth);
+  const double mid_ctr = cpi->kmeans_ctr_ls[seg_counts / 2];
+  const double var_diff_scale = 4.0;
+  int i;
+
+  assert(seg_counts <= MAX_SEGMENTS);
+
+  vp9_enable_segmentation(seg);
+  vp9_clearall_segfeatures(seg);
+  seg->abs_delta = SEGMENT_DELTADATA;
+
+  for (i = 0; i < seg_counts / 2; ++i) {
+    double wiener_var_diff = mid_ctr - cpi->kmeans_ctr_ls[i];
+    double target_qstep = base_qstep / (1.0 + wiener_var_diff / var_diff_scale);
+    int target_qindex = vp9_convert_q_to_qindex(target_qstep, cm->bit_depth);
+    assert(wiener_var_diff >= 0.0);
+
+    vp9_set_segdata(seg, i, SEG_LVL_ALT_Q, target_qindex - base_qindex);
+    vp9_enable_segfeature(seg, i, SEG_LVL_ALT_Q);
+  }
+
+  vp9_set_segdata(seg, i, SEG_LVL_ALT_Q, 0);
+  vp9_enable_segfeature(seg, i, SEG_LVL_ALT_Q);
+
+  for (; i < seg_counts; ++i) {
+    double wiener_var_diff = cpi->kmeans_ctr_ls[i] - mid_ctr;
+    double target_qstep = base_qstep * (1.0 + wiener_var_diff / var_diff_scale);
+    int target_qindex = vp9_convert_q_to_qindex(target_qstep, cm->bit_depth);
+    assert(wiener_var_diff >= 0.0);
+
+    vp9_set_segdata(seg, i, SEG_LVL_ALT_Q, target_qindex - base_qindex);
+    vp9_enable_segfeature(seg, i, SEG_LVL_ALT_Q);
+  }
+}
+
 // Based on set of segment counts calculate a probability tree
 static void calc_segtree_probs(int *segcounts, vpx_prob *segment_tree_probs) {
   // Work out probabilities of each segment
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_segmentation.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_segmentation.h
index 5628055..9404c38 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_segmentation.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_segmentation.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_SEGMENTATION_H_
-#define VP9_ENCODER_VP9_SEGMENTATION_H_
+#ifndef VPX_VP9_ENCODER_VP9_SEGMENTATION_H_
+#define VPX_VP9_ENCODER_VP9_SEGMENTATION_H_
 
 #include "vp9/common/vp9_blockd.h"
 #include "vp9/encoder/vp9_encoder.h"
@@ -26,6 +26,11 @@
 void vp9_clear_segdata(struct segmentation *seg, int segment_id,
                        SEG_LVL_FEATURES feature_id);
 
+void vp9_psnr_aq_mode_setup(struct segmentation *seg);
+
+void vp9_perceptual_aq_mode_setup(struct VP9_COMP *cpi,
+                                  struct segmentation *seg);
+
 // The values given for each segment can be either deltas (from the default
 // value chosen for the frame) or absolute values.
 //
@@ -47,4 +52,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_SEGMENTATION_H_
+#endif  // VPX_VP9_ENCODER_VP9_SEGMENTATION_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_skin_detection.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_skin_detection.h
index 8880bff..46a722a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_skin_detection.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_skin_detection.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_SKIN_MAP_H_
-#define VP9_ENCODER_VP9_SKIN_MAP_H_
+#ifndef VPX_VP9_ENCODER_VP9_SKIN_DETECTION_H_
+#define VPX_VP9_ENCODER_VP9_SKIN_DETECTION_H_
 
 #include "vp9/common/vp9_blockd.h"
 #include "vpx_dsp/skin_detection.h"
@@ -37,4 +37,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_SKIN_MAP_H_
+#endif  // VPX_VP9_ENCODER_VP9_SKIN_DETECTION_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_speed_features.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_speed_features.c
index c5eae62..7a26c41 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_speed_features.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_speed_features.c
@@ -20,6 +20,7 @@
   { 64, 4 }, { 28, 2 }, { 15, 1 }, { 7, 1 }
 };
 
+#if !CONFIG_REALTIME_ONLY
 // Define 3 mesh density levels to control the number of searches.
 #define MESH_DENSITY_LEVELS 3
 static MESH_PATTERN
@@ -32,7 +33,7 @@
 // Intra only frames, golden frames (except alt ref overlays) and
 // alt ref frames tend to be coded at a higher than ambient quality
 static int frame_is_boosted(const VP9_COMP *cpi) {
-  return frame_is_kf_gf_arf(cpi) || vp9_is_upper_layer_key_frame(cpi);
+  return frame_is_kf_gf_arf(cpi);
 }
 
 // Sets a partition size down to which the auto partition code will always
@@ -61,46 +62,92 @@
                                                        SPEED_FEATURES *sf,
                                                        int speed) {
   VP9_COMMON *const cm = &cpi->common;
+  const int min_frame_size = VPXMIN(cm->width, cm->height);
+  const int is_480p_or_larger = min_frame_size >= 480;
+  const int is_720p_or_larger = min_frame_size >= 720;
+  const int is_1080p_or_larger = min_frame_size >= 1080;
+  const int is_2160p_or_larger = min_frame_size >= 2160;
 
   // speed 0 features
   sf->partition_search_breakout_thr.dist = (1 << 20);
   sf->partition_search_breakout_thr.rate = 80;
+  sf->use_square_only_thresh_high = BLOCK_SIZES;
+  sf->use_square_only_thresh_low = BLOCK_4X4;
 
-  // Currently, the machine-learning based partition search early termination
-  // is only used while VPXMIN(cm->width, cm->height) >= 480 and speed = 0.
-  if (VPXMIN(cm->width, cm->height) >= 480) {
-    sf->ml_partition_search_early_termination = 1;
+  if (is_480p_or_larger) {
+    // Currently, the machine-learning based partition search early termination
+    // is only used while VPXMIN(cm->width, cm->height) >= 480 and speed = 0.
+    sf->rd_ml_partition.search_early_termination = 1;
+  } else {
+    sf->use_square_only_thresh_high = BLOCK_32X32;
   }
 
-  if (speed >= 1) {
-    sf->ml_partition_search_early_termination = 0;
-
-    if (VPXMIN(cm->width, cm->height) >= 720) {
-      sf->disable_split_mask =
-          cm->show_frame ? DISABLE_ALL_SPLIT : DISABLE_ALL_INTER_SPLIT;
-      sf->partition_search_breakout_thr.dist = (1 << 23);
+  if (!is_1080p_or_larger) {
+    sf->rd_ml_partition.search_breakout = 1;
+    if (is_720p_or_larger) {
+      sf->rd_ml_partition.search_breakout_thresh[0] = 0.0f;
+      sf->rd_ml_partition.search_breakout_thresh[1] = 0.0f;
+      sf->rd_ml_partition.search_breakout_thresh[2] = 0.0f;
     } else {
-      sf->disable_split_mask = DISABLE_COMPOUND_SPLIT;
-      sf->partition_search_breakout_thr.dist = (1 << 21);
+      sf->rd_ml_partition.search_breakout_thresh[0] = 2.5f;
+      sf->rd_ml_partition.search_breakout_thresh[1] = 1.5f;
+      sf->rd_ml_partition.search_breakout_thresh[2] = 1.5f;
     }
   }
 
+  if (speed >= 1) {
+    sf->rd_ml_partition.search_early_termination = 0;
+    sf->rd_ml_partition.search_breakout = 1;
+    if (is_480p_or_larger)
+      sf->use_square_only_thresh_high = BLOCK_64X64;
+    else
+      sf->use_square_only_thresh_high = BLOCK_32X32;
+    sf->use_square_only_thresh_low = BLOCK_16X16;
+    if (is_720p_or_larger) {
+      sf->disable_split_mask =
+          cm->show_frame ? DISABLE_ALL_SPLIT : DISABLE_ALL_INTER_SPLIT;
+      sf->partition_search_breakout_thr.dist = (1 << 22);
+      sf->rd_ml_partition.search_breakout_thresh[0] = -5.0f;
+      sf->rd_ml_partition.search_breakout_thresh[1] = -5.0f;
+      sf->rd_ml_partition.search_breakout_thresh[2] = -9.0f;
+    } else {
+      sf->disable_split_mask = DISABLE_COMPOUND_SPLIT;
+      sf->partition_search_breakout_thr.dist = (1 << 21);
+      sf->rd_ml_partition.search_breakout_thresh[0] = -1.0f;
+      sf->rd_ml_partition.search_breakout_thresh[1] = -1.0f;
+      sf->rd_ml_partition.search_breakout_thresh[2] = -1.0f;
+    }
+#if CONFIG_VP9_HIGHBITDEPTH
+    if (cpi->Source->flags & YV12_FLAG_HIGHBITDEPTH) {
+      sf->rd_ml_partition.search_breakout_thresh[0] -= 1.0f;
+      sf->rd_ml_partition.search_breakout_thresh[1] -= 1.0f;
+      sf->rd_ml_partition.search_breakout_thresh[2] -= 1.0f;
+    }
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+  }
+
   if (speed >= 2) {
-    if (VPXMIN(cm->width, cm->height) >= 720) {
+    sf->use_square_only_thresh_high = BLOCK_4X4;
+    sf->use_square_only_thresh_low = BLOCK_SIZES;
+    if (is_720p_or_larger) {
       sf->disable_split_mask =
           cm->show_frame ? DISABLE_ALL_SPLIT : DISABLE_ALL_INTER_SPLIT;
       sf->adaptive_pred_interp_filter = 0;
       sf->partition_search_breakout_thr.dist = (1 << 24);
       sf->partition_search_breakout_thr.rate = 120;
+      sf->rd_ml_partition.search_breakout = 0;
     } else {
       sf->disable_split_mask = LAST_AND_INTRA_SPLIT_ONLY;
       sf->partition_search_breakout_thr.dist = (1 << 22);
       sf->partition_search_breakout_thr.rate = 100;
+      sf->rd_ml_partition.search_breakout_thresh[0] = 0.0f;
+      sf->rd_ml_partition.search_breakout_thresh[1] = -1.0f;
+      sf->rd_ml_partition.search_breakout_thresh[2] = -4.0f;
     }
     sf->rd_auto_partition_min_limit = set_partition_min_limit(cm);
 
     // Use a set of speed features for 4k videos.
-    if (VPXMIN(cm->width, cm->height) >= 2160) {
+    if (is_2160p_or_larger) {
       sf->use_square_partition_only = 1;
       sf->intra_y_mode_mask[TX_32X32] = INTRA_DC;
       sf->intra_uv_mode_mask[TX_32X32] = INTRA_DC;
@@ -112,7 +159,8 @@
   }
 
   if (speed >= 3) {
-    if (VPXMIN(cm->width, cm->height) >= 720) {
+    sf->rd_ml_partition.search_breakout = 0;
+    if (is_720p_or_larger) {
       sf->disable_split_mask = DISABLE_ALL_SPLIT;
       sf->schedule_mode_search = cm->base_qindex < 220 ? 1 : 0;
       sf->partition_search_breakout_thr.dist = (1 << 25);
@@ -137,7 +185,7 @@
 
   if (speed >= 4) {
     sf->partition_search_breakout_thr.rate = 300;
-    if (VPXMIN(cm->width, cm->height) >= 720) {
+    if (is_720p_or_larger) {
       sf->partition_search_breakout_thr.dist = (1 << 26);
     } else {
       sf->partition_search_breakout_thr.dist = (1 << 24);
@@ -166,28 +214,41 @@
   sf->adaptive_rd_thresh_row_mt = 0;
   sf->allow_skip_recode = 1;
   sf->less_rectangular_check = 1;
-  sf->use_square_partition_only = !frame_is_boosted(cpi);
-  sf->use_square_only_threshold = BLOCK_16X16;
+  sf->use_square_partition_only = !boosted;
+  sf->prune_ref_frame_for_rect_partitions = 1;
+  sf->rd_ml_partition.var_pruning = 1;
+
+  sf->rd_ml_partition.prune_rect_thresh[0] = -1;
+  sf->rd_ml_partition.prune_rect_thresh[1] = 350;
+  sf->rd_ml_partition.prune_rect_thresh[2] = 325;
+  sf->rd_ml_partition.prune_rect_thresh[3] = 250;
 
   if (cpi->twopass.fr_content_type == FC_GRAPHICS_ANIMATION) {
     sf->exhaustive_searches_thresh = (1 << 22);
-    for (i = 0; i < MAX_MESH_STEP; ++i) {
-      int mesh_density_level = 0;
-      sf->mesh_patterns[i].range =
-          good_quality_mesh_patterns[mesh_density_level][i].range;
-      sf->mesh_patterns[i].interval =
-          good_quality_mesh_patterns[mesh_density_level][i].interval;
-    }
   } else {
     sf->exhaustive_searches_thresh = INT_MAX;
   }
 
+  for (i = 0; i < MAX_MESH_STEP; ++i) {
+    const int mesh_density_level = 0;
+    sf->mesh_patterns[i].range =
+        good_quality_mesh_patterns[mesh_density_level][i].range;
+    sf->mesh_patterns[i].interval =
+        good_quality_mesh_patterns[mesh_density_level][i].interval;
+  }
+
   if (speed >= 1) {
+    sf->temporal_filter_search_method = NSTEP;
+    sf->rd_ml_partition.var_pruning = !boosted;
+    sf->rd_ml_partition.prune_rect_thresh[1] = 225;
+    sf->rd_ml_partition.prune_rect_thresh[2] = 225;
+    sf->rd_ml_partition.prune_rect_thresh[3] = 225;
+
     if (oxcf->pass == 2) {
       TWO_PASS *const twopass = &cpi->twopass;
       if ((twopass->fr_content_type == FC_GRAPHICS_ANIMATION) ||
           vp9_internal_image_edge(cpi)) {
-        sf->use_square_partition_only = !frame_is_boosted(cpi);
+        sf->use_square_partition_only = !boosted;
       } else {
         sf->use_square_partition_only = !frame_is_intra_only(cm);
       }
@@ -199,23 +260,22 @@
     sf->tx_domain_thresh = tx_dom_thresholds[(speed < 6) ? speed : 5];
     sf->allow_quant_coeff_opt = sf->optimize_coefficients;
     sf->quant_opt_thresh = qopt_thresholds[(speed < 6) ? speed : 5];
-
-    sf->use_square_only_threshold = BLOCK_4X4;
     sf->less_rectangular_check = 1;
-
     sf->use_rd_breakout = 1;
     sf->adaptive_motion_search = 1;
     sf->mv.auto_mv_step_size = 1;
     sf->adaptive_rd_thresh = 2;
-    sf->mv.subpel_iters_per_step = 1;
-    sf->mode_skip_start = 10;
+    sf->mv.subpel_search_level = 1;
+    if (cpi->oxcf.content != VP9E_CONTENT_FILM) sf->mode_skip_start = 10;
     sf->adaptive_pred_interp_filter = 1;
     sf->allow_acl = 0;
 
     sf->intra_y_mode_mask[TX_32X32] = INTRA_DC_H_V;
     sf->intra_uv_mode_mask[TX_32X32] = INTRA_DC_H_V;
-    sf->intra_y_mode_mask[TX_16X16] = INTRA_DC_H_V;
-    sf->intra_uv_mode_mask[TX_16X16] = INTRA_DC_H_V;
+    if (cpi->oxcf.content != VP9E_CONTENT_FILM) {
+      sf->intra_y_mode_mask[TX_16X16] = INTRA_DC_H_V;
+      sf->intra_uv_mode_mask[TX_16X16] = INTRA_DC_H_V;
+    }
 
     sf->recode_tolerance_low = 15;
     sf->recode_tolerance_high = 30;
@@ -223,9 +283,11 @@
     sf->exhaustive_searches_thresh =
         (cpi->twopass.fr_content_type == FC_GRAPHICS_ANIMATION) ? (1 << 23)
                                                                 : INT_MAX;
+    sf->use_accurate_subpel_search = USE_4_TAPS;
   }
 
   if (speed >= 2) {
+    sf->rd_ml_partition.var_pruning = 0;
     if (oxcf->vbr_corpus_complexity)
       sf->recode_loop = ALLOW_RECODE_FIRST;
     else
@@ -247,6 +309,12 @@
     sf->auto_min_max_partition_size = RELAXED_NEIGHBORING_MIN_MAX;
     sf->recode_tolerance_low = 15;
     sf->recode_tolerance_high = 45;
+    sf->enhanced_full_pixel_motion_search = 0;
+    sf->prune_ref_frame_for_rect_partitions = 0;
+    sf->rd_ml_partition.prune_rect_thresh[1] = -1;
+    sf->rd_ml_partition.prune_rect_thresh[2] = -1;
+    sf->rd_ml_partition.prune_rect_thresh[3] = -1;
+    sf->mv.subpel_search_level = 0;
 
     if (cpi->twopass.fr_content_type == FC_GRAPHICS_ANIMATION) {
       for (i = 0; i < MAX_MESH_STEP; ++i) {
@@ -257,6 +325,8 @@
             good_quality_mesh_patterns[mesh_density_level][i].interval;
       }
     }
+
+    sf->use_accurate_subpel_search = USE_2_TAPS;
   }
 
   if (speed >= 3) {
@@ -316,6 +386,7 @@
     sf->simple_model_rd_from_var = 1;
   }
 }
+#endif  // !CONFIG_REALTIME_ONLY
 
 static void set_rt_speed_feature_framesize_dependent(VP9_COMP *cpi,
                                                      SPEED_FEATURES *sf,
@@ -358,6 +429,7 @@
 static void set_rt_speed_feature_framesize_independent(
     VP9_COMP *cpi, SPEED_FEATURES *sf, int speed, vp9e_tune_content content) {
   VP9_COMMON *const cm = &cpi->common;
+  SVC *const svc = &cpi->svc;
   const int is_keyframe = cm->frame_type == KEY_FRAME;
   const int frames_since_key = is_keyframe ? 0 : cpi->rc.frames_since_key;
   sf->static_segmentation = 0;
@@ -374,6 +446,18 @@
   sf->use_compound_nonrd_pickmode = 0;
   sf->nonrd_keyframe = 0;
   sf->svc_use_lowres_part = 0;
+  sf->overshoot_detection_cbr_rt = NO_DETECTION;
+  sf->disable_16x16part_nonkey = 0;
+  sf->disable_golden_ref = 0;
+  sf->enable_tpl_model = 0;
+  sf->enhanced_full_pixel_motion_search = 0;
+  sf->use_accurate_subpel_search = USE_2_TAPS;
+  sf->nonrd_use_ml_partition = 0;
+  sf->variance_part_thresh_mult = 1;
+  sf->cb_pred_filter_search = 0;
+  sf->force_smooth_interpol = 0;
+  sf->rt_intra_dc_only_low_content = 0;
+  sf->mv.enable_adaptive_subpel_force_stop = 0;
 
   if (speed >= 1) {
     sf->allow_txfm_domain_distortion = 1;
@@ -407,7 +491,7 @@
     // Reference masking only enabled for 1 spatial layer, and if none of the
     // references have been scaled. The latter condition needs to be checked
     // for external or internal dynamic resize.
-    sf->reference_masking = (cpi->svc.number_spatial_layers == 1);
+    sf->reference_masking = (svc->number_spatial_layers == 1);
     if (sf->reference_masking == 1 &&
         (cpi->external_resize == 1 ||
          cpi->oxcf.resize_mode == RESIZE_DYNAMIC)) {
@@ -440,7 +524,7 @@
     sf->disable_filter_search_var_thresh = 100;
     sf->use_uv_intra_rd_estimate = 1;
     sf->skip_encode_sb = 1;
-    sf->mv.subpel_iters_per_step = 1;
+    sf->mv.subpel_search_level = 0;
     sf->adaptive_rd_thresh = 4;
     sf->mode_skip_start = 6;
     sf->allow_skip_recode = 0;
@@ -453,14 +537,7 @@
     int i;
     if (cpi->oxcf.rc_mode == VPX_VBR && cpi->oxcf.lag_in_frames > 0)
       sf->use_altref_onepass = 1;
-    sf->last_partitioning_redo_frequency = 4;
-    sf->adaptive_rd_thresh = 5;
-    sf->use_fast_coef_costing = 0;
-    sf->auto_min_max_partition_size = STRICT_NEIGHBORING_MIN_MAX;
-    sf->adjust_partitioning_from_last_frame =
-        cm->last_frame_type != cm->frame_type ||
-        (0 == (frames_since_key + 1) % sf->last_partitioning_redo_frequency);
-    sf->mv.subpel_force_stop = 1;
+    sf->mv.subpel_force_stop = QUARTER_PEL;
     for (i = 0; i < TX_SIZES; i++) {
       sf->intra_y_mode_mask[i] = INTRA_DC_H_V;
       sf->intra_uv_mode_mask[i] = INTRA_DC;
@@ -468,13 +545,19 @@
     sf->intra_y_mode_mask[TX_32X32] = INTRA_DC;
     sf->frame_parameter_update = 0;
     sf->mv.search_method = FAST_HEX;
-
-    sf->inter_mode_mask[BLOCK_32X32] = INTER_NEAREST_NEAR_NEW;
-    sf->inter_mode_mask[BLOCK_32X64] = INTER_NEAREST;
-    sf->inter_mode_mask[BLOCK_64X32] = INTER_NEAREST;
-    sf->inter_mode_mask[BLOCK_64X64] = INTER_NEAREST;
+    sf->allow_skip_recode = 0;
     sf->max_intra_bsize = BLOCK_32X32;
-    sf->allow_skip_recode = 1;
+    sf->use_fast_coef_costing = 0;
+    sf->use_quant_fp = !is_keyframe;
+    sf->inter_mode_mask[BLOCK_32X32] = INTER_NEAREST_NEW_ZERO;
+    sf->inter_mode_mask[BLOCK_32X64] = INTER_NEAREST_NEW_ZERO;
+    sf->inter_mode_mask[BLOCK_64X32] = INTER_NEAREST_NEW_ZERO;
+    sf->inter_mode_mask[BLOCK_64X64] = INTER_NEAREST_NEW_ZERO;
+    sf->adaptive_rd_thresh = 2;
+    sf->use_fast_coef_updates = is_keyframe ? TWO_LOOP : ONE_LOOP_REDUCED;
+    sf->mode_search_skip_flags = FLAG_SKIP_INTRA_DIRMISMATCH;
+    sf->tx_size_search_method = is_keyframe ? USE_LARGESTALL : USE_TX_8X8;
+    sf->partition_search_type = VAR_BASED_PARTITION;
   }
 
   if (speed >= 5) {
@@ -513,7 +596,10 @@
       int i;
       if (content == VP9E_CONTENT_SCREEN) {
         for (i = 0; i < BLOCK_SIZES; ++i)
-          sf->intra_y_mode_bsize_mask[i] = INTRA_DC_TM_H_V;
+          if (i >= BLOCK_32X32)
+            sf->intra_y_mode_bsize_mask[i] = INTRA_DC_H_V;
+          else
+            sf->intra_y_mode_bsize_mask[i] = INTRA_DC_TM_H_V;
       } else {
         for (i = 0; i < BLOCK_SIZES; ++i)
           if (i > BLOCK_16X16)
@@ -531,6 +617,23 @@
       sf->limit_newmv_early_exit = 1;
       if (!cpi->use_svc) sf->bias_golden = 1;
     }
+    // Keep nonrd_keyframe = 1 for non-base spatial layers to prevent an
+    // increase in encoding time.
+    if (cpi->use_svc && svc->spatial_layer_id > 0) sf->nonrd_keyframe = 1;
+    if (cm->frame_type != KEY_FRAME && cpi->resize_state == ORIG &&
+        cpi->oxcf.rc_mode == VPX_CBR) {
+      if (cm->width * cm->height <= 352 * 288 && !cpi->use_svc &&
+          cpi->oxcf.content != VP9E_CONTENT_SCREEN)
+        sf->overshoot_detection_cbr_rt = RE_ENCODE_MAXQ;
+      else
+        sf->overshoot_detection_cbr_rt = FAST_DETECTION_MAXQ;
+    }
+    if (cpi->oxcf.rc_mode == VPX_VBR && cpi->oxcf.lag_in_frames > 0 &&
+        cm->width <= 1280 && cm->height <= 720) {
+      sf->use_altref_onepass = 1;
+      sf->use_compound_nonrd_pickmode = 1;
+    }
+    if (cm->width * cm->height > 1280 * 720) sf->cb_pred_filter_search = 1;
   }
 
   if (speed >= 6) {
@@ -539,8 +642,6 @@
       sf->use_compound_nonrd_pickmode = 1;
     }
     sf->partition_search_type = VAR_BASED_PARTITION;
-    // Turn on this to use non-RD key frame coding mode.
-    sf->use_nonrd_pick_mode = 1;
     sf->mv.search_method = NSTEP;
     sf->mv.reduce_first_step_size = 1;
     sf->skip_encode_sb = 0;
@@ -553,7 +654,7 @@
           (cm->width * cm->height <= 640 * 360) ? 40000 : 60000;
       if (cpi->content_state_sb_fd == NULL &&
           (!cpi->use_svc ||
-           cpi->svc.spatial_layer_id == cpi->svc.number_spatial_layers - 1)) {
+           svc->spatial_layer_id == svc->number_spatial_layers - 1)) {
         cpi->content_state_sb_fd = (uint8_t *)vpx_calloc(
             (cm->mi_stride >> 3) * ((cm->mi_rows >> 3) + 1), sizeof(uint8_t));
       }
@@ -562,11 +663,14 @@
       // Enable short circuit for low temporal variance.
       sf->short_circuit_low_temp_var = 1;
     }
-    if (cpi->svc.temporal_layer_id > 0) {
+    if (svc->temporal_layer_id > 0) {
       sf->adaptive_rd_thresh = 4;
       sf->limit_newmv_early_exit = 0;
       sf->base_mv_aggressive = 1;
     }
+    if (cm->frame_type != KEY_FRAME && cpi->resize_state == ORIG &&
+        cpi->oxcf.rc_mode == VPX_CBR)
+      sf->overshoot_detection_cbr_rt = FAST_DETECTION_MAXQ;
   }
 
   if (speed >= 7) {
@@ -576,16 +680,15 @@
     sf->mv.fullpel_search_step_param = 10;
     // For SVC: use better mv search on base temporal layer, and only
     // on base spatial layer if highest resolution is above 640x360.
-    if (cpi->svc.number_temporal_layers > 2 &&
-        cpi->svc.temporal_layer_id == 0 &&
-        (cpi->svc.spatial_layer_id == 0 ||
+    if (svc->number_temporal_layers > 2 && svc->temporal_layer_id == 0 &&
+        (svc->spatial_layer_id == 0 ||
          cpi->oxcf.width * cpi->oxcf.height <= 640 * 360)) {
       sf->mv.search_method = NSTEP;
       sf->mv.fullpel_search_step_param = 6;
     }
-    if (cpi->svc.temporal_layer_id > 0 || cpi->svc.spatial_layer_id > 1) {
+    if (svc->temporal_layer_id > 0 || svc->spatial_layer_id > 1) {
       sf->use_simple_block_yrd = 1;
-      if (cpi->svc.non_reference_frame)
+      if (svc->non_reference_frame)
         sf->mv.subpel_search_method = SUBPEL_TREE_PRUNED_EVENMORE;
     }
     if (cpi->use_svc && cpi->row_mt && cpi->oxcf.max_threads > 1)
@@ -596,22 +699,30 @@
     if (!cpi->last_frame_dropped && cpi->resize_state == ORIG &&
         !cpi->external_resize &&
         (!cpi->use_svc ||
-         cpi->svc.spatial_layer_id == cpi->svc.number_spatial_layers - 1)) {
+         (svc->spatial_layer_id == svc->number_spatial_layers - 1 &&
+          !svc->last_layer_dropped[svc->number_spatial_layers - 1]))) {
       sf->copy_partition_flag = 1;
       cpi->max_copied_frame = 2;
      // Frames in the top temporal enhancement layer (for number of temporal
      // layers > 1) are non-reference frames, so use a large/max value for
      // max_copied_frame.
-      if (cpi->svc.number_temporal_layers > 1 &&
-          cpi->svc.temporal_layer_id == cpi->svc.number_temporal_layers - 1)
+      if (svc->number_temporal_layers > 1 &&
+          svc->temporal_layer_id == svc->number_temporal_layers - 1)
         cpi->max_copied_frame = 255;
     }
     // For SVC: enable use of lower resolution partition for higher resolution,
     // only for 3 spatial layers and when config/top resolution is above VGA.
     // Enable only for non-base temporal layer frames.
-    if (cpi->use_svc && cpi->svc.use_partition_reuse &&
-        cpi->svc.number_spatial_layers == 3 && cpi->svc.temporal_layer_id > 0 &&
+    if (cpi->use_svc && svc->use_partition_reuse &&
+        svc->number_spatial_layers == 3 && svc->temporal_layer_id > 0 &&
         cpi->oxcf.width * cpi->oxcf.height > 640 * 480)
       sf->svc_use_lowres_part = 1;
+    // For SVC when golden is used as the second temporal reference: to avoid
+    // an increase in encode time, only use this feature on the base temporal
+    // layer (i.e. remove the golden flag from frame_flags for
+    // temporal_layer_id > 0).
+    if (cpi->use_svc && svc->use_gf_temporal_ref_current_layer &&
+        svc->temporal_layer_id > 0)
+      cpi->ref_frame_flags &= (~VP9_GOLD_FLAG);
+    if (cm->width * cm->height > 640 * 480) sf->cb_pred_filter_search = 1;
   }
 
   if (speed >= 8) {
@@ -621,15 +732,16 @@
     if (!cpi->use_svc) cpi->max_copied_frame = 4;
     if (cpi->row_mt && cpi->oxcf.max_threads > 1)
       sf->adaptive_rd_thresh_row_mt = 1;
-
-    if (content == VP9E_CONTENT_SCREEN) sf->mv.subpel_force_stop = 3;
-    if (content == VP9E_CONTENT_SCREEN) sf->lpf_pick = LPF_PICK_MINIMAL_LPF;
-    // Only keep INTRA_DC mode for speed 8.
-    if (!is_keyframe) {
-      int i = 0;
-      for (i = 0; i < BLOCK_SIZES; ++i)
-        sf->intra_y_mode_bsize_mask[i] = INTRA_DC;
+    // Enable ML-based partitioning for low resolutions.
+    if (!frame_is_intra_only(cm) && cm->width * cm->height <= 352 * 288) {
+      sf->nonrd_use_ml_partition = 1;
     }
+#if CONFIG_VP9_HIGHBITDEPTH
+    if (cpi->Source->flags & YV12_FLAG_HIGHBITDEPTH)
+      sf->nonrd_use_ml_partition = 0;
+#endif
+    if (content == VP9E_CONTENT_SCREEN) sf->mv.subpel_force_stop = HALF_PEL;
+    sf->rt_intra_dc_only_low_content = 1;
     if (!cpi->use_svc && cpi->oxcf.rc_mode == VPX_CBR &&
         content != VP9E_CONTENT_SCREEN) {
       // More aggressive short circuit for speed 8.
@@ -651,7 +763,33 @@
     }
     sf->limit_newmv_early_exit = 0;
     sf->use_simple_block_yrd = 1;
+    if (cm->width * cm->height > 352 * 288) sf->cb_pred_filter_search = 1;
   }
+
+  if (speed >= 9) {
+    // Only keep INTRA_DC mode for speed 9.
+    if (!is_keyframe) {
+      int i = 0;
+      for (i = 0; i < BLOCK_SIZES; ++i)
+        sf->intra_y_mode_bsize_mask[i] = INTRA_DC;
+    }
+    sf->cb_pred_filter_search = 1;
+    sf->mv.enable_adaptive_subpel_force_stop = 1;
+    sf->mv.adapt_subpel_force_stop.mv_thresh = 1;
+    sf->mv.adapt_subpel_force_stop.force_stop_below = QUARTER_PEL;
+    sf->mv.adapt_subpel_force_stop.force_stop_above = HALF_PEL;
+    // Disable partition blocks below 16x16, except for low resolutions.
+    if (cm->frame_type != KEY_FRAME && cm->width >= 320 && cm->height >= 240)
+      sf->disable_16x16part_nonkey = 1;
+    // Allow for disabling GOLDEN reference, for CBR mode.
+    if (cpi->oxcf.rc_mode == VPX_CBR) sf->disable_golden_ref = 1;
+    if (cpi->rc.avg_frame_low_motion < 70) sf->default_interp_filter = BILINEAR;
+    if (cm->width * cm->height >= 640 * 360) sf->variance_part_thresh_mult = 2;
+  }
+
+  if (sf->nonrd_use_ml_partition)
+    sf->partition_search_type = ML_BASED_PARTITION;
+
   if (sf->use_altref_onepass) {
     if (cpi->rc.is_src_frame_alt_ref && cm->frame_type != KEY_FRAME) {
       sf->partition_search_type = FIXED_PARTITION;
@@ -666,9 +804,26 @@
           (uint8_t *)vpx_calloc((cm->mi_stride >> 3) * ((cm->mi_rows >> 3) + 1),
                                 sizeof(*cpi->count_lastgolden_frame_usage));
   }
+  if (svc->previous_frame_is_intra_only) {
+    sf->partition_search_type = FIXED_PARTITION;
+    sf->always_this_block_size = BLOCK_64X64;
+  }
+  // Special case for screen content: increase motion search on base spatial
+  // layer when high motion is detected or previous SL0 frame was dropped.
+  if (cpi->oxcf.content == VP9E_CONTENT_SCREEN && cpi->oxcf.speed >= 5 &&
+      (svc->high_num_blocks_with_motion || svc->last_layer_dropped[0])) {
+    sf->mv.search_method = NSTEP;
+    // TODO(marpan/jianj): Tune this setting for screen sharing. For now use a
+    // small step_param for all spatial layers.
+    sf->mv.fullpel_search_step_param = 2;
+  }
+  // TODO(marpan): There is a regression for aq-mode=3 at speed <= 4; force it
+  // off for now.
+  if (speed <= 3 && cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ)
+    cpi->oxcf.aq_mode = 0;
 }
 
-void vp9_set_speed_features_framesize_dependent(VP9_COMP *cpi) {
+void vp9_set_speed_features_framesize_dependent(VP9_COMP *cpi, int speed) {
   SPEED_FEATURES *const sf = &cpi->sf;
   const VP9EncoderConfig *const oxcf = &cpi->oxcf;
   RD_OPT *const rd = &cpi->rd;
@@ -678,13 +833,15 @@
   // Some speed-up features even for best quality as minimal impact on quality.
   sf->partition_search_breakout_thr.dist = (1 << 19);
   sf->partition_search_breakout_thr.rate = 80;
-  sf->ml_partition_search_early_termination = 0;
+  sf->rd_ml_partition.search_early_termination = 0;
+  sf->rd_ml_partition.search_breakout = 0;
 
-  if (oxcf->mode == REALTIME) {
-    set_rt_speed_feature_framesize_dependent(cpi, sf, oxcf->speed);
-  } else if (oxcf->mode == GOOD) {
-    set_good_speed_feature_framesize_dependent(cpi, sf, oxcf->speed);
-  }
+  if (oxcf->mode == REALTIME)
+    set_rt_speed_feature_framesize_dependent(cpi, sf, speed);
+#if !CONFIG_REALTIME_ONLY
+  else if (oxcf->mode == GOOD)
+    set_good_speed_feature_framesize_dependent(cpi, sf, speed);
+#endif
 
   if (sf->disable_split_mask == DISABLE_ALL_SPLIT) {
     sf->adaptive_pred_interp_filter = 0;
@@ -710,17 +867,13 @@
   if (!sf->adaptive_rd_thresh_row_mt && cpi->row_mt_bit_exact &&
       oxcf->max_threads > 1)
     sf->adaptive_rd_thresh = 0;
-
-  // This is only used in motion vector unit test.
-  if (cpi->oxcf.motion_vector_unit_test == 1)
-    cpi->find_fractional_mv_step = vp9_return_max_sub_pixel_mv;
-  else if (cpi->oxcf.motion_vector_unit_test == 2)
-    cpi->find_fractional_mv_step = vp9_return_min_sub_pixel_mv;
 }
 
-void vp9_set_speed_features_framesize_independent(VP9_COMP *cpi) {
+void vp9_set_speed_features_framesize_independent(VP9_COMP *cpi, int speed) {
   SPEED_FEATURES *const sf = &cpi->sf;
+#if !CONFIG_REALTIME_ONLY
   VP9_COMMON *const cm = &cpi->common;
+#endif
   MACROBLOCK *const x = &cpi->td.mb;
   const VP9EncoderConfig *const oxcf = &cpi->oxcf;
   int i;
@@ -730,8 +883,8 @@
   sf->mv.search_method = NSTEP;
   sf->recode_loop = ALLOW_RECODE_FIRST;
   sf->mv.subpel_search_method = SUBPEL_TREE;
-  sf->mv.subpel_iters_per_step = 2;
-  sf->mv.subpel_force_stop = 0;
+  sf->mv.subpel_search_level = 2;
+  sf->mv.subpel_force_stop = EIGHTH_PEL;
   sf->optimize_coefficients = !is_lossless_requested(&cpi->oxcf);
   sf->mv.reduce_first_step_size = 0;
   sf->coeff_prob_appx_step = 1;
@@ -741,6 +894,7 @@
   sf->tx_size_search_method = USE_FULL_RD;
   sf->use_lp32x32fdct = 0;
   sf->adaptive_motion_search = 0;
+  sf->enhanced_full_pixel_motion_search = 1;
   sf->adaptive_pred_interp_filter = 0;
   sf->adaptive_mode_search = 0;
   sf->cb_pred_filter_search = 0;
@@ -752,7 +906,8 @@
   sf->partition_search_type = SEARCH_PARTITION;
   sf->less_rectangular_check = 0;
   sf->use_square_partition_only = 0;
-  sf->use_square_only_threshold = BLOCK_SIZES;
+  sf->use_square_only_thresh_high = BLOCK_SIZES;
+  sf->use_square_only_thresh_low = BLOCK_4X4;
   sf->auto_min_max_partition_size = NOT_IN_USE;
   sf->rd_auto_partition_min_limit = BLOCK_4X4;
   sf->default_max_partition_size = BLOCK_64X64;
@@ -771,6 +926,9 @@
   sf->allow_quant_coeff_opt = sf->optimize_coefficients;
   sf->quant_opt_thresh = 99.0;
   sf->allow_acl = 1;
+  sf->enable_tpl_model = oxcf->enable_tpl_model;
+  sf->prune_ref_frame_for_rect_partitions = 0;
+  sf->temporal_filter_search_method = MESH;
 
   for (i = 0; i < TX_SIZES; i++) {
     sf->intra_y_mode_mask[i] = INTRA_ALL;
@@ -804,10 +962,17 @@
   sf->limit_newmv_early_exit = 0;
   sf->bias_golden = 0;
   sf->base_mv_aggressive = 0;
+  sf->rd_ml_partition.prune_rect_thresh[0] = -1;
+  sf->rd_ml_partition.prune_rect_thresh[1] = -1;
+  sf->rd_ml_partition.prune_rect_thresh[2] = -1;
+  sf->rd_ml_partition.prune_rect_thresh[3] = -1;
+  sf->rd_ml_partition.var_pruning = 0;
+  sf->use_accurate_subpel_search = USE_8_TAPS;
 
   // Some speed-up features even for best quality as minimal impact on quality.
   sf->adaptive_rd_thresh = 1;
   sf->tx_size_search_breakout = 1;
+  sf->tx_size_search_depth = 2;
 
   sf->exhaustive_searches_thresh =
       (cpi->twopass.fr_content_type == FC_GRAPHICS_ANIMATION) ? (1 << 20)
@@ -820,10 +985,11 @@
   }
 
   if (oxcf->mode == REALTIME)
-    set_rt_speed_feature_framesize_independent(cpi, sf, oxcf->speed,
-                                               oxcf->content);
+    set_rt_speed_feature_framesize_independent(cpi, sf, speed, oxcf->content);
+#if !CONFIG_REALTIME_ONLY
   else if (oxcf->mode == GOOD)
-    set_good_speed_feature_framesize_independent(cpi, cm, sf, oxcf->speed);
+    set_good_speed_feature_framesize_independent(cpi, cm, sf, speed);
+#endif
 
   cpi->diamond_search_sad = vp9_diamond_search_sad;
 
@@ -837,7 +1003,7 @@
     sf->optimize_coefficients = 0;
   }
 
-  if (sf->mv.subpel_force_stop == 3) {
+  if (sf->mv.subpel_force_stop == FULL_PEL) {
     // Whole pel only
     cpi->find_fractional_mv_step = vp9_skip_sub_pixel_tree;
   } else if (sf->mv.subpel_search_method == SUBPEL_TREE) {
@@ -850,6 +1016,12 @@
     cpi->find_fractional_mv_step = vp9_find_best_sub_pixel_tree_pruned_evenmore;
   }
 
+  // This is only used in motion vector unit test.
+  if (cpi->oxcf.motion_vector_unit_test == 1)
+    cpi->find_fractional_mv_step = vp9_return_max_sub_pixel_mv;
+  else if (cpi->oxcf.motion_vector_unit_test == 2)
+    cpi->find_fractional_mv_step = vp9_return_min_sub_pixel_mv;
+
   x->optimize = sf->optimize_coefficients == 1 && oxcf->pass != 1;
 
   x->min_partition_size = sf->default_min_partition_size;
@@ -867,10 +1039,4 @@
   if (!sf->adaptive_rd_thresh_row_mt && cpi->row_mt_bit_exact &&
       oxcf->max_threads > 1)
     sf->adaptive_rd_thresh = 0;
-
-  // This is only used in motion vector unit test.
-  if (cpi->oxcf.motion_vector_unit_test == 1)
-    cpi->find_fractional_mv_step = vp9_return_max_sub_pixel_mv;
-  else if (cpi->oxcf.motion_vector_unit_test == 2)
-    cpi->find_fractional_mv_step = vp9_return_min_sub_pixel_mv;
 }
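The speed-feature changes above follow libvpx's cascading pattern: best-quality defaults are assigned first, then each successive `if (speed >= N)` block overrides individual features, so a higher speed inherits every lower-speed setting it does not explicitly change (and may re-tighten one, as `subpel_search_level` is set back to 0 at speed 2 here). A minimal sketch of that pattern, using an illustrative toy struct rather than the real `SPEED_FEATURES`:

```c
#include <assert.h>

/* Toy illustration of the cascading speed-feature pattern in
 * vp9_speed_features.c: defaults first, then each "if (speed >= N)"
 * block overrides features, so higher speeds inherit every
 * lower-speed setting they do not explicitly change.
 * Struct and field names are illustrative, not libvpx API. */
typedef struct {
  int subpel_search_level; /* 0..2, higher = more subpel refinement */
  int adaptive_rd_thresh;
  int use_square_partition_only;
} ToySpeedFeatures;

static void set_toy_speed_features(ToySpeedFeatures *sf, int speed) {
  /* Best-quality defaults. */
  sf->subpel_search_level = 2;
  sf->adaptive_rd_thresh = 1;
  sf->use_square_partition_only = 0;

  if (speed >= 1) {
    sf->subpel_search_level = 1;
    sf->adaptive_rd_thresh = 2;
  }
  if (speed >= 2) {
    /* Speed 2 re-tightens subpel search below the speed-1 level. */
    sf->subpel_search_level = 0;
    sf->use_square_partition_only = 1;
  }
}
```

The same monotonic layering is why the patch can set a default (e.g. `sf->mv.subpel_search_level = 2`) once in the framesize-independent setup and only list deviations per speed level.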
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_speed_features.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_speed_features.h
index 50d52bc..ca284de 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_speed_features.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_speed_features.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_SPEED_FEATURES_H_
-#define VP9_ENCODER_VP9_SPEED_FEATURES_H_
+#ifndef VPX_VP9_ENCODER_VP9_SPEED_FEATURES_H_
+#define VPX_VP9_ENCODER_VP9_SPEED_FEATURES_H_
 
 #include "vp9/common/vp9_enums.h"
 
@@ -57,7 +57,8 @@
   BIGDIA = 3,
   SQUARE = 4,
   FAST_HEX = 5,
-  FAST_DIAMOND = 6
+  FAST_DIAMOND = 6,
+  MESH = 7
 } SEARCH_METHODS;
 
 typedef enum {
@@ -135,20 +136,23 @@
 } INTERP_FILTER_MASK;
 
 typedef enum {
-  // Search partitions using RD/NONRD criterion
+  // Search partitions using RD/NONRD criterion.
   SEARCH_PARTITION,
 
-  // Always use a fixed size partition
+  // Always use a fixed size partition.
   FIXED_PARTITION,
 
   REFERENCE_PARTITION,
 
   // Use an arbitrary partitioning scheme based on source variance within
-  // a 64X64 SB
+  // a 64X64 SB.
   VAR_BASED_PARTITION,
 
-  // Use non-fixed partitions based on source variance
-  SOURCE_VAR_BASED_PARTITION
+  // Use non-fixed partitions based on source variance.
+  SOURCE_VAR_BASED_PARTITION,
+
+  // Make partition decisions with machine learning models.
+  ML_BASED_PARTITION
 } PARTITION_SEARCH_TYPE;
 
 typedef enum {
@@ -161,6 +165,19 @@
   ONE_LOOP_REDUCED = 1
 } FAST_COEFF_UPDATE;
 
+typedef enum { EIGHTH_PEL, QUARTER_PEL, HALF_PEL, FULL_PEL } SUBPEL_FORCE_STOP;
+
+typedef struct ADAPT_SUBPEL_FORCE_STOP {
+  // Threshold for the full-pixel motion vector.
+  int mv_thresh;
+
+  // subpel_force_stop if full pixel MV is below the threshold.
+  SUBPEL_FORCE_STOP force_stop_below;
+
+  // subpel_force_stop if full pixel MV is equal to or above the threshold.
+  SUBPEL_FORCE_STOP force_stop_above;
+} ADAPT_SUBPEL_FORCE_STOP;
+
 typedef struct MV_SPEED_FEATURES {
   // Motion search method (Diamond, NSTEP, Hex, Big Diamond, Square, etc).
   SEARCH_METHODS search_method;
@@ -179,15 +196,17 @@
   // the same process. Along the way it skips many diagonals.
   SUBPEL_SEARCH_METHODS subpel_search_method;
 
-  // Maximum number of steps in logarithmic subpel search before giving up.
-  int subpel_iters_per_step;
+  // Subpel MV search level. Can take values 0 - 2. Higher values mean more
+  // extensive subpel search.
+  int subpel_search_level;
 
-  // Control when to stop subpel search:
-  // 0: Full subpel search.
-  // 1: Stop at quarter pixel.
-  // 2: Stop at half pixel.
-  // 3: Stop at full pixel.
-  int subpel_force_stop;
+  // When to stop subpel motion search.
+  SUBPEL_FORCE_STOP subpel_force_stop;
+
+  // If enabled, a different subpel_force_stop is used depending on the MV.
+  int enable_adaptive_subpel_force_stop;
+
+  ADAPT_SUBPEL_FORCE_STOP adapt_subpel_force_stop;
 
   // This variable sets the step_param used in full pel motion search.
   int fullpel_search_step_param;
@@ -205,6 +224,28 @@
   int interval;
 } MESH_PATTERN;
 
+typedef enum {
+  // No rate-control reaction to a detected slide/scene change.
+  NO_DETECTION = 0,
+
+  // Set to larger Q (max_q set by user) based only on the
+  // detected slide/scene change and current/past Q.
+  FAST_DETECTION_MAXQ = 1,
+
+  // Based on the (first-pass) encoded frame: if a large frame size is
+  // detected, set a higher Q for the second re-encode. This involves 2-pass
+  // encoding on slide changes, so it is slower than FAST_DETECTION_MAXQ but
+  // more accurate for detecting overshoot.
+  RE_ENCODE_MAXQ = 2
+} OVERSHOOT_DETECTION_CBR_RT;
+
+typedef enum {
+  USE_2_TAPS = 0,
+  USE_4_TAPS,
+  USE_8_TAPS,
+  USE_8_TAPS_SHARP,
+} SUBPEL_SEARCH_TYPE;
+
 typedef struct SPEED_FEATURES {
   MV_SPEED_FEATURES mv;
 
@@ -258,6 +299,9 @@
   // alternate reference frames.
   int allow_acl;
 
+  // Encoding mode optimization based on the temporal dependency model.
+  int enable_tpl_model;
+
   // Use transform domain distortion. Use pixel domain distortion in speed 0
   // and certain situations in higher speed to improve the RD model precision.
   int allow_txfm_domain_distortion;
@@ -272,6 +316,9 @@
   // for intra and model coefs for the rest.
   TX_SIZE_SEARCH_METHOD tx_size_search_method;
 
+  // How many levels of tx size to search, starting from the largest.
+  int tx_size_search_depth;
+
   // Low precision 32x32 fdct keeps everything in 16 bits and thus is less
   // precise but significantly faster than the non lp version.
   int use_lp32x32fdct;
@@ -293,9 +340,14 @@
   // rd than partition type split.
   int less_rectangular_check;
 
-  // Disable testing non square partitions. (eg 16x32)
+  // Disable testing non-square partitions (e.g. 16x32) for block sizes larger
+  // than use_square_only_thresh_high or smaller than use_square_only_thresh_low.
   int use_square_partition_only;
-  BLOCK_SIZE use_square_only_threshold;
+  BLOCK_SIZE use_square_only_thresh_high;
+  BLOCK_SIZE use_square_only_thresh_low;
+
+  // Prune reference frames for rectangular partitions.
+  int prune_ref_frame_for_rect_partitions;
 
   // Sets min and max partition sizes for this 64x64 region based on the
   // same 64x64 in last encoded frame, and the left and above neighbor.
@@ -327,6 +379,9 @@
   // point for this motion search and limits the search range around it.
   int adaptive_motion_search;
 
+  // Do extra full pixel motion search to obtain better motion vector.
+  int enhanced_full_pixel_motion_search;
+
  // Threshold for allowing exhaustive motion search.
   int exhaustive_searches_thresh;
 
@@ -448,8 +503,27 @@
   // Partition search early breakout thresholds.
   PARTITION_SEARCH_BREAKOUT_THR partition_search_breakout_thr;
 
-  // Machine-learning based partition search early termination
-  int ml_partition_search_early_termination;
+  struct {
+    // Use ML-based partition search early breakout.
+    int search_breakout;
+    // Higher values mean a more aggressive partition search breakout, which
+    // results in better encoding speed but worse compression performance.
+    float search_breakout_thresh[3];
+
+    // Machine-learning-based partition search early termination.
+    int search_early_termination;
+
+    // Machine-learning based partition search pruning using prediction residue
+    // variance.
+    int var_pruning;
+
+    // Threshold values used for ML based rectangular partition search pruning.
+    // If < 0, the feature is turned off.
+    // Higher values mean more aggressive skipping of rectangular partition
+    // search, which results in better encoding speed but worse coding
+    // performance.
+    int prune_rect_thresh[4];
+  } rd_ml_partition;
 
   // Allow skipping partition search for still image frame
   int allow_partition_search_skip;
@@ -508,15 +582,47 @@
 
   // For SVC: enables use of partition from lower spatial resolution.
   int svc_use_lowres_part;
+
+  // Flag to indicate the process for handling overshoot on a slide/scene
+  // change, for real-time CBR mode.
+  OVERSHOOT_DETECTION_CBR_RT overshoot_detection_cbr_rt;
+
+  // Disable partitioning of 16x16 blocks.
+  int disable_16x16part_nonkey;
+
+  // Allow for disabling golden reference.
+  int disable_golden_ref;
+
+  // Allow sub-pixel search to use interpolation filters with different taps in
+  // order to achieve a more accurate motion search result.
+  SUBPEL_SEARCH_TYPE use_accurate_subpel_search;
+
+  // Search method used by temporal filtering in full_pixel_motion_search.
+  SEARCH_METHODS temporal_filter_search_method;
+
+  // Use machine learning based partition search.
+  int nonrd_use_ml_partition;
+
+  // Multiplier for the base threshold for variance partitioning.
+  int variance_part_thresh_mult;
+
+  // Force subpel motion filter to always use SMOOTH_FILTER.
+  int force_smooth_interpol;
+
+  // For real-time mode: force DC-only intra search when the content
+  // does not have high source SAD.
+  int rt_intra_dc_only_low_content;
 } SPEED_FEATURES;
 
 struct VP9_COMP;
 
-void vp9_set_speed_features_framesize_independent(struct VP9_COMP *cpi);
-void vp9_set_speed_features_framesize_dependent(struct VP9_COMP *cpi);
+void vp9_set_speed_features_framesize_independent(struct VP9_COMP *cpi,
+                                                  int speed);
+void vp9_set_speed_features_framesize_dependent(struct VP9_COMP *cpi,
+                                                int speed);
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_SPEED_FEATURES_H_
+#endif  // VPX_VP9_ENCODER_VP9_SPEED_FEATURES_H_
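The new `ADAPT_SUBPEL_FORCE_STOP` struct lets the encoder pick a per-MV subpel stopping precision by comparing the full-pixel MV against `mv_thresh` (speed 9 above uses `mv_thresh = 1`, `QUARTER_PEL` below and `HALF_PEL` at or above). A minimal sketch of how such a selection could look; the enum and struct mirror the header, but the helper function and the use of the larger MV component as the magnitude proxy are illustrative assumptions, not libvpx code:

```c
#include <stdlib.h>

/* Types mirror vp9_speed_features.h; pick_force_stop() is a sketch. */
typedef enum { EIGHTH_PEL, QUARTER_PEL, HALF_PEL, FULL_PEL } SUBPEL_FORCE_STOP;

typedef struct {
  int mv_thresh;                      /* threshold on full-pixel MV */
  SUBPEL_FORCE_STOP force_stop_below; /* used when MV < mv_thresh */
  SUBPEL_FORCE_STOP force_stop_above; /* used when MV >= mv_thresh */
} ADAPT_SUBPEL_FORCE_STOP;

static SUBPEL_FORCE_STOP pick_force_stop(const ADAPT_SUBPEL_FORCE_STOP *cfg,
                                         int mv_row, int mv_col) {
  /* Assumption: use the larger full-pixel component as the MV magnitude. */
  const int mag = abs(mv_row) > abs(mv_col) ? abs(mv_row) : abs(mv_col);
  return mag < cfg->mv_thresh ? cfg->force_stop_below : cfg->force_stop_above;
}
```

With the speed-9 settings, a zero MV keeps quarter-pel refinement while any non-zero full-pixel MV stops at half-pel, trading a little precision for speed on moving blocks.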
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_subexp.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_subexp.c
index e8212ce..19bbd53 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_subexp.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_subexp.c
@@ -71,6 +71,7 @@
   else
     i = recenter_nonneg(MAX_PROB - 1 - v, MAX_PROB - 1 - m) - 1;
 
+  assert(i >= 0 && (size_t)i < sizeof(map_table));
   i = map_table[i];
   return i;
 }
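The assert added in vp9_subexp.c bounds-checks the table index with `sizeof(map_table)`, which only equals the element count because the table's elements are one byte wide. A small self-contained sketch of the same guard, using a hypothetical table (not the real `map_table`):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical byte-sized lookup table, standing in for map_table. */
static const unsigned char kMap[4] = { 3, 1, 0, 2 };

static unsigned char checked_lookup(int i) {
  /* sizeof(kMap) equals the element count only because each element is
   * one byte; for wider element types use sizeof(kMap) / sizeof(kMap[0]). */
  assert(i >= 0 && (size_t)i < sizeof(kMap));
  return kMap[i];
}
```

The cast to `size_t` avoids a signed/unsigned comparison warning once `i >= 0` has been established, matching the form of the assert in the patch.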
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_subexp.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_subexp.h
index 26c89e2..f0d544b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_subexp.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_subexp.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_SUBEXP_H_
-#define VP9_ENCODER_VP9_SUBEXP_H_
+#ifndef VPX_VP9_ENCODER_VP9_SUBEXP_H_
+#define VPX_VP9_ENCODER_VP9_SUBEXP_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -37,4 +37,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_SUBEXP_H_
+#endif  // VPX_VP9_ENCODER_VP9_SUBEXP_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_svc_layercontext.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_svc_layercontext.c
index aaacbcb..1466d3a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_svc_layercontext.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_svc_layercontext.c
@@ -19,6 +19,14 @@
 #define SMALL_FRAME_WIDTH 32
 #define SMALL_FRAME_HEIGHT 16
 
+static void swap_ptr(void *a, void *b) {
+  void **a_p = (void **)a;
+  void **b_p = (void **)b;
+  void *c = *a_p;
+  *a_p = *b_p;
+  *b_p = c;
+}
+
 void vp9_init_layer_context(VP9_COMP *const cpi) {
   SVC *const svc = &cpi->svc;
   const VP9EncoderConfig *const oxcf = &cpi->oxcf;
@@ -29,26 +37,52 @@
 
   svc->spatial_layer_id = 0;
   svc->temporal_layer_id = 0;
-  svc->first_spatial_layer_to_encode = 0;
-  svc->rc_drop_superframe = 0;
   svc->force_zero_mode_spatial_ref = 0;
   svc->use_base_mv = 0;
   svc->use_partition_reuse = 0;
+  svc->use_gf_temporal_ref = 1;
+  svc->use_gf_temporal_ref_current_layer = 0;
   svc->scaled_temp_is_alloc = 0;
   svc->scaled_one_half = 0;
   svc->current_superframe = 0;
   svc->non_reference_frame = 0;
   svc->skip_enhancement_layer = 0;
+  svc->disable_inter_layer_pred = INTER_LAYER_PRED_ON;
+  svc->framedrop_mode = CONSTRAINED_LAYER_DROP;
+  svc->set_intra_only_frame = 0;
+  svc->previous_frame_is_intra_only = 0;
+  svc->superframe_has_layer_sync = 0;
+  svc->use_set_ref_frame_config = 0;
+  svc->num_encoded_top_layer = 0;
+  svc->simulcast_mode = 0;
+  svc->single_layer_svc = 0;
 
-  for (i = 0; i < REF_FRAMES; ++i) svc->ref_frame_index[i] = -1;
+  for (i = 0; i < REF_FRAMES; ++i) {
+    svc->fb_idx_spatial_layer_id[i] = 0xff;
+    svc->fb_idx_temporal_layer_id[i] = 0xff;
+    svc->fb_idx_base[i] = 0;
+  }
   for (sl = 0; sl < oxcf->ss_number_layers; ++sl) {
+    svc->last_layer_dropped[sl] = 0;
+    svc->drop_spatial_layer[sl] = 0;
     svc->ext_frame_flags[sl] = 0;
-    svc->ext_lst_fb_idx[sl] = 0;
-    svc->ext_gld_fb_idx[sl] = 1;
-    svc->ext_alt_fb_idx[sl] = 2;
+    svc->lst_fb_idx[sl] = 0;
+    svc->gld_fb_idx[sl] = 1;
+    svc->alt_fb_idx[sl] = 2;
     svc->downsample_filter_type[sl] = BILINEAR;
     svc->downsample_filter_phase[sl] = 8;  // Set to 8 for averaging filter.
+    svc->framedrop_thresh[sl] = oxcf->drop_frames_water_mark;
+    svc->fb_idx_upd_tl0[sl] = -1;
+    svc->drop_count[sl] = 0;
+    svc->spatial_layer_sync[sl] = 0;
+    svc->force_drop_constrained_from_above[sl] = 0;
   }
+  svc->max_consec_drop = INT_MAX;
+
+  svc->buffer_gf_temporal_ref[1].idx = 7;
+  svc->buffer_gf_temporal_ref[0].idx = 6;
+  svc->buffer_gf_temporal_ref[1].is_used = 0;
+  svc->buffer_gf_temporal_ref[0].is_used = 0;
 
   if (cpi->oxcf.error_resilient_mode == 0 && cpi->oxcf.pass == 2) {
     if (vpx_realloc_frame_buffer(&cpi->svc.empty_frame.img, SMALL_FRAME_WIDTH,
@@ -86,6 +120,8 @@
       lrc->ni_frames = 0;
       lrc->decimation_count = 0;
       lrc->decimation_factor = 0;
+      lrc->worst_quality = oxcf->worst_allowed_q;
+      lrc->best_quality = oxcf->best_allowed_q;
 
       for (i = 0; i < RATE_FACTOR_LEVELS; ++i) {
         lrc->rate_correction_factors[i] = 1.0;
@@ -124,6 +160,9 @@
         size_t consec_zero_mv_size;
         VP9_COMMON *const cm = &cpi->common;
         lc->sb_index = 0;
+        lc->actual_num_seg1_blocks = 0;
+        lc->actual_num_seg2_blocks = 0;
+        lc->counter_encode_maxq_scene_change = 0;
         CHECK_MEM_ERROR(cm, lc->map,
                         vpx_malloc(mi_rows * mi_cols * sizeof(*lc->map)));
         memset(lc->map, 0, mi_rows * mi_cols);
@@ -155,6 +194,7 @@
   const RATE_CONTROL *const rc = &cpi->rc;
   int sl, tl, layer = 0, spatial_layer_target;
   float bitrate_alloc = 1.0;
+  int num_spatial_layers_nonzero_rate = 0;
 
   cpi->svc.temporal_layering_mode = oxcf->temporal_layering_mode;
 
@@ -235,6 +275,17 @@
       lrc->best_quality = rc->best_quality;
     }
   }
+  for (sl = 0; sl < oxcf->ss_number_layers; ++sl) {
+    // Check bitrate of spatial layer.
+    layer = LAYER_IDS_TO_IDX(sl, oxcf->ts_number_layers - 1,
+                             oxcf->ts_number_layers);
+    if (oxcf->layer_target_bitrate[layer] > 0)
+      num_spatial_layers_nonzero_rate += 1;
+  }
+  if (num_spatial_layers_nonzero_rate == 1)
+    svc->single_layer_svc = 1;
+  else
+    svc->single_layer_svc = 0;
 }
 
 static LAYER_CONTEXT *get_layer_context(VP9_COMP *const cpi) {
@@ -294,6 +345,7 @@
   LAYER_CONTEXT *const lc = get_layer_context(cpi);
   const int old_frame_since_key = cpi->rc.frames_since_key;
   const int old_frame_to_key = cpi->rc.frames_to_key;
+  const int old_ext_use_post_encode_drop = cpi->rc.ext_use_post_encode_drop;
 
   cpi->rc = lc->rc;
   cpi->twopass = lc->twopass;
@@ -307,26 +359,23 @@
   // Reset the frames_since_key and frames_to_key counters to their values
   // before the layer restore. Keep these defined for the stream (not layer).
   if (cpi->svc.number_temporal_layers > 1 ||
-      (cpi->svc.number_spatial_layers > 1 && !is_two_pass_svc(cpi))) {
+      cpi->svc.number_spatial_layers > 1) {
     cpi->rc.frames_since_key = old_frame_since_key;
     cpi->rc.frames_to_key = old_frame_to_key;
   }
-
+  cpi->rc.ext_use_post_encode_drop = old_ext_use_post_encode_drop;
   // For spatial-svc, allow cyclic-refresh to be applied on the spatial layers,
   // for the base temporal layer.
   if (cpi->oxcf.aq_mode == CYCLIC_REFRESH_AQ &&
       cpi->svc.number_spatial_layers > 1 && cpi->svc.temporal_layer_id == 0) {
     CYCLIC_REFRESH *const cr = cpi->cyclic_refresh;
-    signed char *temp = cr->map;
-    uint8_t *temp2 = cr->last_coded_q_map;
-    uint8_t *temp3 = cpi->consec_zero_mv;
-    cr->map = lc->map;
-    lc->map = temp;
-    cr->last_coded_q_map = lc->last_coded_q_map;
-    lc->last_coded_q_map = temp2;
-    cpi->consec_zero_mv = lc->consec_zero_mv;
-    lc->consec_zero_mv = temp3;
+    swap_ptr(&cr->map, &lc->map);
+    swap_ptr(&cr->last_coded_q_map, &lc->last_coded_q_map);
+    swap_ptr(&cpi->consec_zero_mv, &lc->consec_zero_mv);
     cr->sb_index = lc->sb_index;
+    cr->actual_num_seg1_blocks = lc->actual_num_seg1_blocks;
+    cr->actual_num_seg2_blocks = lc->actual_num_seg2_blocks;
+    cr->counter_encode_maxq_scene_change = lc->counter_encode_maxq_scene_change;
   }
 }
 
@@ -354,6 +403,9 @@
     lc->consec_zero_mv = cpi->consec_zero_mv;
     cpi->consec_zero_mv = temp3;
     lc->sb_index = cr->sb_index;
+    lc->actual_num_seg1_blocks = cr->actual_num_seg1_blocks;
+    lc->actual_num_seg2_blocks = cr->actual_num_seg2_blocks;
+    lc->counter_encode_maxq_scene_change = cr->counter_encode_maxq_scene_change;
   }
 }
 
@@ -385,15 +437,6 @@
     ++cpi->svc.current_superframe;
 }
 
-int vp9_is_upper_layer_key_frame(const VP9_COMP *const cpi) {
-  return is_two_pass_svc(cpi) && cpi->svc.spatial_layer_id > 0 &&
-         cpi->svc
-             .layer_context[cpi->svc.spatial_layer_id *
-                                cpi->svc.number_temporal_layers +
-                            cpi->svc.temporal_layer_id]
-             .is_key_frame;
-}
-
 void get_layer_resolution(const int width_org, const int height_org,
                           const int num, const int den, int *width_out,
                           int *height_out) {
@@ -412,6 +455,51 @@
   *height_out = h;
 }
 
+static void reset_fb_idx_unused(VP9_COMP *const cpi) {
+  // If a reference frame is not referenced or refreshed, then set the
+  // fb_idx for that reference to the first one used/referenced.
+  // This is to avoid setting fb_idx for a reference to a slot that is not
+  // used/needed (i.e., since that reference is not referenced or refreshed).
+  static const int flag_list[4] = { 0, VP9_LAST_FLAG, VP9_GOLD_FLAG,
+                                    VP9_ALT_FLAG };
+  MV_REFERENCE_FRAME ref_frame;
+  MV_REFERENCE_FRAME first_ref = 0;
+  int first_fb_idx = 0;
+  int fb_idx[3] = { cpi->lst_fb_idx, cpi->gld_fb_idx, cpi->alt_fb_idx };
+  for (ref_frame = LAST_FRAME; ref_frame <= ALTREF_FRAME; ref_frame++) {
+    if (cpi->ref_frame_flags & flag_list[ref_frame]) {
+      first_ref = ref_frame;
+      first_fb_idx = fb_idx[ref_frame - 1];
+      break;
+    }
+  }
+  if (first_ref > 0) {
+    if (first_ref != LAST_FRAME &&
+        !(cpi->ref_frame_flags & flag_list[LAST_FRAME]) &&
+        !cpi->ext_refresh_last_frame)
+      cpi->lst_fb_idx = first_fb_idx;
+    else if (first_ref != GOLDEN_FRAME &&
+             !(cpi->ref_frame_flags & flag_list[GOLDEN_FRAME]) &&
+             !cpi->ext_refresh_golden_frame)
+      cpi->gld_fb_idx = first_fb_idx;
+    else if (first_ref != ALTREF_FRAME &&
+             !(cpi->ref_frame_flags & flag_list[ALTREF_FRAME]) &&
+             !cpi->ext_refresh_alt_ref_frame)
+      cpi->alt_fb_idx = first_fb_idx;
+  }
+}
+
+// Never refresh any reference frame buffers on top temporal layers in
+// simulcast mode, which has interlayer prediction disabled.
+static void non_reference_frame_simulcast(VP9_COMP *const cpi) {
+  if (cpi->svc.temporal_layer_id == cpi->svc.number_temporal_layers - 1 &&
+      cpi->svc.temporal_layer_id > 0) {
+    cpi->ext_refresh_last_frame = 0;
+    cpi->ext_refresh_golden_frame = 0;
+    cpi->ext_refresh_alt_ref_frame = 0;
+  }
+}
+
 // The function sets proper ref_frame_flags, buffer indices, and buffer update
 // variables for temporal layering mode 3 - that does 0-2-1-2 temporal layering
 // scheme.
@@ -515,6 +603,10 @@
     cpi->gld_fb_idx = cpi->svc.number_spatial_layers + spatial_id - 1;
     cpi->alt_fb_idx = cpi->svc.number_spatial_layers + spatial_id;
   }
+
+  if (cpi->svc.simulcast_mode) non_reference_frame_simulcast(cpi);
+
+  reset_fb_idx_unused(cpi);
 }
 
 // The function sets proper ref_frame_flags, buffer indices, and buffer update
@@ -574,6 +666,10 @@
     cpi->gld_fb_idx = cpi->svc.number_spatial_layers + spatial_id - 1;
     cpi->alt_fb_idx = cpi->svc.number_spatial_layers + spatial_id;
   }
+
+  if (cpi->svc.simulcast_mode) non_reference_frame_simulcast(cpi);
+
+  reset_fb_idx_unused(cpi);
 }
 
 // The function sets proper ref_frame_flags, buffer indices, and buffer update
@@ -606,103 +702,280 @@
   } else {
     cpi->gld_fb_idx = 0;
   }
+
+  if (cpi->svc.simulcast_mode) non_reference_frame_simulcast(cpi);
+
+  reset_fb_idx_unused(cpi);
+}
+
+static void set_flags_and_fb_idx_bypass_via_set_ref_frame_config(
+    VP9_COMP *const cpi) {
+  SVC *const svc = &cpi->svc;
+  int sl = svc->spatial_layer_id = svc->spatial_layer_to_encode;
+  cpi->svc.temporal_layer_id = cpi->svc.temporal_layer_id_per_spatial[sl];
+  cpi->ext_refresh_frame_flags_pending = 1;
+  cpi->lst_fb_idx = svc->lst_fb_idx[sl];
+  cpi->gld_fb_idx = svc->gld_fb_idx[sl];
+  cpi->alt_fb_idx = svc->alt_fb_idx[sl];
+  cpi->ext_refresh_last_frame = 0;
+  cpi->ext_refresh_golden_frame = 0;
+  cpi->ext_refresh_alt_ref_frame = 0;
+  cpi->ref_frame_flags = 0;
+  if (svc->reference_last[sl]) cpi->ref_frame_flags |= VP9_LAST_FLAG;
+  if (svc->reference_golden[sl]) cpi->ref_frame_flags |= VP9_GOLD_FLAG;
+  if (svc->reference_altref[sl]) cpi->ref_frame_flags |= VP9_ALT_FLAG;
+}
+
+void vp9_copy_flags_ref_update_idx(VP9_COMP *const cpi) {
+  SVC *const svc = &cpi->svc;
+  static const int flag_list[4] = { 0, VP9_LAST_FLAG, VP9_GOLD_FLAG,
+                                    VP9_ALT_FLAG };
+  int sl = svc->spatial_layer_id;
+  svc->lst_fb_idx[sl] = cpi->lst_fb_idx;
+  svc->gld_fb_idx[sl] = cpi->gld_fb_idx;
+  svc->alt_fb_idx[sl] = cpi->alt_fb_idx;
+  // For the fixed SVC mode: pass the refresh_lst/gld/alt_frame flags to
+  // update_buffer_slot; this is needed for the GET_SVC_REF_FRAME_CONFIG API.
+  if (svc->temporal_layering_mode != VP9E_TEMPORAL_LAYERING_MODE_BYPASS) {
+    int ref;
+    for (ref = 0; ref < REF_FRAMES; ++ref) {
+      svc->update_buffer_slot[sl] &= ~(1 << ref);
+      if ((ref == svc->lst_fb_idx[sl] && cpi->refresh_last_frame) ||
+          (ref == svc->gld_fb_idx[sl] && cpi->refresh_golden_frame) ||
+          (ref == svc->alt_fb_idx[sl] && cpi->refresh_alt_ref_frame))
+        svc->update_buffer_slot[sl] |= (1 << ref);
+    }
+  }
+
+  // TODO(jianj): Remove these 3, deprecated.
+  svc->update_last[sl] = (uint8_t)cpi->refresh_last_frame;
+  svc->update_golden[sl] = (uint8_t)cpi->refresh_golden_frame;
+  svc->update_altref[sl] = (uint8_t)cpi->refresh_alt_ref_frame;
+
+  svc->reference_last[sl] =
+      (uint8_t)(cpi->ref_frame_flags & flag_list[LAST_FRAME]);
+  svc->reference_golden[sl] =
+      (uint8_t)(cpi->ref_frame_flags & flag_list[GOLDEN_FRAME]);
+  svc->reference_altref[sl] =
+      (uint8_t)(cpi->ref_frame_flags & flag_list[ALTREF_FRAME]);
 }
 
 int vp9_one_pass_cbr_svc_start_layer(VP9_COMP *const cpi) {
   int width = 0, height = 0;
+  SVC *const svc = &cpi->svc;
   LAYER_CONTEXT *lc = NULL;
-  cpi->svc.skip_enhancement_layer = 0;
-  if (cpi->svc.number_spatial_layers > 1) {
-    cpi->svc.use_base_mv = 1;
-    cpi->svc.use_partition_reuse = 1;
-  }
-  cpi->svc.force_zero_mode_spatial_ref = 1;
-  cpi->svc.mi_stride[cpi->svc.spatial_layer_id] = cpi->common.mi_stride;
+  int scaling_factor_num = 1;
+  int scaling_factor_den = 1;
+  svc->skip_enhancement_layer = 0;
 
-  if (cpi->svc.temporal_layering_mode == VP9E_TEMPORAL_LAYERING_MODE_0212) {
-    set_flags_and_fb_idx_for_temporal_mode3(cpi);
-  } else if (cpi->svc.temporal_layering_mode ==
-             VP9E_TEMPORAL_LAYERING_MODE_NOLAYERING) {
-    set_flags_and_fb_idx_for_temporal_mode_noLayering(cpi);
-  } else if (cpi->svc.temporal_layering_mode ==
-             VP9E_TEMPORAL_LAYERING_MODE_0101) {
-    set_flags_and_fb_idx_for_temporal_mode2(cpi);
-  } else if (cpi->svc.temporal_layering_mode ==
-             VP9E_TEMPORAL_LAYERING_MODE_BYPASS) {
-    // In the BYPASS/flexible mode, the encoder is relying on the application
-    // to specify, for each spatial layer, the flags and buffer indices for the
-    // layering.
-    // Note that the check (cpi->ext_refresh_frame_flags_pending == 0) is
-    // needed to support the case where the frame flags may be passed in via
-    // vpx_codec_encode(), which can be used for the temporal-only svc case.
-    // TODO(marpan): Consider adding an enc_config parameter to better handle
-    // this case.
-    if (cpi->ext_refresh_frame_flags_pending == 0) {
-      int sl;
-      cpi->svc.spatial_layer_id = cpi->svc.spatial_layer_to_encode;
-      sl = cpi->svc.spatial_layer_id;
-      vp9_apply_encoding_flags(cpi, cpi->svc.ext_frame_flags[sl]);
-      cpi->lst_fb_idx = cpi->svc.ext_lst_fb_idx[sl];
-      cpi->gld_fb_idx = cpi->svc.ext_gld_fb_idx[sl];
-      cpi->alt_fb_idx = cpi->svc.ext_alt_fb_idx[sl];
+  if (svc->disable_inter_layer_pred == INTER_LAYER_PRED_OFF &&
+      svc->number_spatial_layers > 1 && svc->number_spatial_layers <= 3 &&
+      svc->number_temporal_layers <= 3 &&
+      !(svc->temporal_layering_mode == VP9E_TEMPORAL_LAYERING_MODE_BYPASS &&
+        svc->use_set_ref_frame_config))
+    svc->simulcast_mode = 1;
+  else
+    svc->simulcast_mode = 0;
+
+  if (svc->number_spatial_layers > 1) {
+    svc->use_base_mv = 1;
+    svc->use_partition_reuse = 1;
+  }
+  svc->force_zero_mode_spatial_ref = 1;
+  svc->mi_stride[svc->spatial_layer_id] = cpi->common.mi_stride;
+  svc->mi_rows[svc->spatial_layer_id] = cpi->common.mi_rows;
+  svc->mi_cols[svc->spatial_layer_id] = cpi->common.mi_cols;
+
+  // For constrained_from_above drop mode: before encoding superframe (i.e.,
+  // at SL0 frame) check all spatial layers (starting from top) for possible
+  // drop, and if so, set a flag to force drop of that layer and all its lower
+  // layers.
+  if (svc->spatial_layer_to_encode == svc->first_spatial_layer_to_encode) {
+    int sl;
+    for (sl = 0; sl < svc->number_spatial_layers; sl++)
+      svc->force_drop_constrained_from_above[sl] = 0;
+    if (svc->framedrop_mode == CONSTRAINED_FROM_ABOVE_DROP) {
+      for (sl = svc->number_spatial_layers - 1;
+           sl >= svc->first_spatial_layer_to_encode; sl--) {
+        int layer = sl * svc->number_temporal_layers + svc->temporal_layer_id;
+        LAYER_CONTEXT *const lc = &svc->layer_context[layer];
+        cpi->rc = lc->rc;
+        cpi->oxcf.target_bandwidth = lc->target_bandwidth;
+        if (vp9_test_drop(cpi)) {
+          int sl2;
+          // Set flag to force drop in encoding for this mode.
+          for (sl2 = sl; sl2 >= svc->first_spatial_layer_to_encode; sl2--)
+            svc->force_drop_constrained_from_above[sl2] = 1;
+          break;
+        }
+      }
     }
   }
 
-  if (cpi->svc.spatial_layer_id == cpi->svc.first_spatial_layer_to_encode)
-    cpi->svc.rc_drop_superframe = 0;
+  if (svc->temporal_layering_mode == VP9E_TEMPORAL_LAYERING_MODE_0212) {
+    set_flags_and_fb_idx_for_temporal_mode3(cpi);
+  } else if (svc->temporal_layering_mode ==
+             VP9E_TEMPORAL_LAYERING_MODE_NOLAYERING) {
+    set_flags_and_fb_idx_for_temporal_mode_noLayering(cpi);
+  } else if (svc->temporal_layering_mode == VP9E_TEMPORAL_LAYERING_MODE_0101) {
+    set_flags_and_fb_idx_for_temporal_mode2(cpi);
+  } else if (svc->temporal_layering_mode ==
+                 VP9E_TEMPORAL_LAYERING_MODE_BYPASS &&
+             svc->use_set_ref_frame_config) {
+    set_flags_and_fb_idx_bypass_via_set_ref_frame_config(cpi);
+  }
 
-  lc = &cpi->svc.layer_context[cpi->svc.spatial_layer_id *
-                                   cpi->svc.number_temporal_layers +
-                               cpi->svc.temporal_layer_id];
+  if (cpi->lst_fb_idx == svc->buffer_gf_temporal_ref[0].idx ||
+      cpi->gld_fb_idx == svc->buffer_gf_temporal_ref[0].idx ||
+      cpi->alt_fb_idx == svc->buffer_gf_temporal_ref[0].idx)
+    svc->buffer_gf_temporal_ref[0].is_used = 1;
+  if (cpi->lst_fb_idx == svc->buffer_gf_temporal_ref[1].idx ||
+      cpi->gld_fb_idx == svc->buffer_gf_temporal_ref[1].idx ||
+      cpi->alt_fb_idx == svc->buffer_gf_temporal_ref[1].idx)
+    svc->buffer_gf_temporal_ref[1].is_used = 1;
+
+  // For the fixed (non-flexible/bypass) SVC mode:
+  // If long term temporal reference is enabled at the sequence level
+  // (use_gf_temporal_ref == 1), and inter_layer is disabled (on inter-frames),
+  // we can use golden as a second temporal reference
+  // (since the spatial/inter-layer reference is disabled).
+  // We check that the fb_idx for this reference (buffer_gf_temporal_ref.idx) is
+  // unused (slot 7 and 6 should be available for 3-3 layer system).
+  // For now, this second temporal reference will only be used for the
+  // highest and next to highest spatial layer (i.e., top and middle layer for
+  // 3 spatial layers).
+  svc->use_gf_temporal_ref_current_layer = 0;
+  if (svc->use_gf_temporal_ref && !svc->buffer_gf_temporal_ref[0].is_used &&
+      !svc->buffer_gf_temporal_ref[1].is_used &&
+      svc->temporal_layering_mode != VP9E_TEMPORAL_LAYERING_MODE_BYPASS &&
+      svc->disable_inter_layer_pred != INTER_LAYER_PRED_ON &&
+      svc->number_spatial_layers <= 3 && svc->number_temporal_layers <= 3 &&
+      svc->spatial_layer_id >= svc->number_spatial_layers - 2) {
+    // Enable the second (long-term) temporal reference at the frame-level.
+    svc->use_gf_temporal_ref_current_layer = 1;
+  }
+
+  // Check if current superframe has any layer sync, only check once on
+  // base layer.
+  if (svc->spatial_layer_id == 0) {
+    int sl = 0;
+    // Default is no sync.
+    svc->superframe_has_layer_sync = 0;
+    for (sl = 0; sl < svc->number_spatial_layers; ++sl) {
+      if (cpi->svc.spatial_layer_sync[sl]) svc->superframe_has_layer_sync = 1;
+    }
+  }
+
+  // Reset the drop flags for all spatial layers, on the base layer.
+  if (svc->spatial_layer_id == 0) {
+    vp9_zero(svc->drop_spatial_layer);
+    // TODO(jianj/marpan): Investigate why setting svc->lst/gld/alt_fb_idx
+    // causes an issue with frame dropping and temporal layers, when the frame
+    // flags are passed via the encode call (bypass mode). Issue is that we're
+    // resetting ext_refresh_frame_flags_pending to 0 on frame drops.
+    if (svc->temporal_layering_mode != VP9E_TEMPORAL_LAYERING_MODE_BYPASS) {
+      memset(&svc->lst_fb_idx, -1, sizeof(svc->lst_fb_idx));
+      memset(&svc->gld_fb_idx, -1, sizeof(svc->gld_fb_idx));
+      memset(&svc->alt_fb_idx, -1, sizeof(svc->alt_fb_idx));
+      // These are set by API before the superframe is encoded and they are
+      // passed to encoder layer by layer. Don't reset them on layer 0 in bypass
+      // mode.
+      vp9_zero(svc->update_buffer_slot);
+      vp9_zero(svc->reference_last);
+      vp9_zero(svc->reference_golden);
+      vp9_zero(svc->reference_altref);
+      // TODO(jianj): Remove these 3, deprecated.
+      vp9_zero(svc->update_last);
+      vp9_zero(svc->update_golden);
+      vp9_zero(svc->update_altref);
+    }
+  }
+
+  lc = &svc->layer_context[svc->spatial_layer_id * svc->number_temporal_layers +
+                           svc->temporal_layer_id];
 
   // Setting the worst/best_quality via the encoder control: SET_SVC_PARAMETERS,
   // only for non-BYPASS mode for now.
-  if (cpi->svc.temporal_layering_mode != VP9E_TEMPORAL_LAYERING_MODE_BYPASS) {
+  if (svc->temporal_layering_mode != VP9E_TEMPORAL_LAYERING_MODE_BYPASS ||
+      svc->use_set_ref_frame_config) {
     RATE_CONTROL *const lrc = &lc->rc;
     lrc->worst_quality = vp9_quantizer_to_qindex(lc->max_q);
     lrc->best_quality = vp9_quantizer_to_qindex(lc->min_q);
   }
 
-  get_layer_resolution(cpi->oxcf.width, cpi->oxcf.height,
-                       lc->scaling_factor_num, lc->scaling_factor_den, &width,
-                       &height);
+  if (cpi->oxcf.resize_mode == RESIZE_DYNAMIC && svc->single_layer_svc == 1 &&
+      svc->spatial_layer_id == svc->first_spatial_layer_to_encode &&
+      cpi->resize_state != ORIG) {
+    scaling_factor_num = lc->scaling_factor_num_resize;
+    scaling_factor_den = lc->scaling_factor_den_resize;
+  } else {
+    scaling_factor_num = lc->scaling_factor_num;
+    scaling_factor_den = lc->scaling_factor_den;
+  }
+
+  get_layer_resolution(cpi->oxcf.width, cpi->oxcf.height, scaling_factor_num,
+                       scaling_factor_den, &width, &height);
 
   // Use Eightap_smooth for low resolutions.
   if (width * height <= 320 * 240)
-    cpi->svc.downsample_filter_type[cpi->svc.spatial_layer_id] =
-        EIGHTTAP_SMOOTH;
+    svc->downsample_filter_type[svc->spatial_layer_id] = EIGHTTAP_SMOOTH;
   // For scale factors > 0.75, set the phase to 0 (aligns decimated pixel
   // to source pixel).
-  lc = &cpi->svc.layer_context[cpi->svc.spatial_layer_id *
-                                   cpi->svc.number_temporal_layers +
-                               cpi->svc.temporal_layer_id];
-  if (lc->scaling_factor_num > (3 * lc->scaling_factor_den) >> 2)
-    cpi->svc.downsample_filter_phase[cpi->svc.spatial_layer_id] = 0;
+  if (scaling_factor_num > (3 * scaling_factor_den) >> 2)
+    svc->downsample_filter_phase[svc->spatial_layer_id] = 0;
 
   // The usage of use_base_mv or partition_reuse assumes down-scale of 2x2.
   // For now, turn off use of base motion vectors and partition reuse if the
   // spatial scale factors for any layers are not 2,
   // keep the case of 3 spatial layers with scale factor of 4x4 for base layer.
   // TODO(marpan): Fix this to allow for use_base_mv for scale factors != 2.
-  if (cpi->svc.number_spatial_layers > 1) {
+  if (svc->number_spatial_layers > 1) {
     int sl;
-    for (sl = 0; sl < cpi->svc.number_spatial_layers - 1; ++sl) {
-      lc = &cpi->svc.layer_context[sl * cpi->svc.number_temporal_layers +
-                                   cpi->svc.temporal_layer_id];
+    for (sl = 0; sl < svc->number_spatial_layers - 1; ++sl) {
+      lc = &svc->layer_context[sl * svc->number_temporal_layers +
+                               svc->temporal_layer_id];
       if ((lc->scaling_factor_num != lc->scaling_factor_den >> 1) &&
           !(lc->scaling_factor_num == lc->scaling_factor_den >> 2 && sl == 0 &&
-            cpi->svc.number_spatial_layers == 3)) {
-        cpi->svc.use_base_mv = 0;
-        cpi->svc.use_partition_reuse = 0;
+            svc->number_spatial_layers == 3)) {
+        svc->use_base_mv = 0;
+        svc->use_partition_reuse = 0;
         break;
       }
     }
+    // For non-zero spatial layers: if the previous spatial layer was dropped
+    // disable the base_mv and partition_reuse features.
+    if (svc->spatial_layer_id > 0 &&
+        svc->drop_spatial_layer[svc->spatial_layer_id - 1]) {
+      svc->use_base_mv = 0;
+      svc->use_partition_reuse = 0;
+    }
   }
 
-  cpi->svc.non_reference_frame = 0;
+  svc->non_reference_frame = 0;
   if (cpi->common.frame_type != KEY_FRAME && !cpi->ext_refresh_last_frame &&
-      !cpi->ext_refresh_golden_frame && !cpi->ext_refresh_alt_ref_frame) {
-    cpi->svc.non_reference_frame = 1;
+      !cpi->ext_refresh_golden_frame && !cpi->ext_refresh_alt_ref_frame)
+    svc->non_reference_frame = 1;
+  // For the flexible (bypass) mode, where update_buffer_slot is used, check if
+  // all buffer slots are not refreshed.
+  if (svc->temporal_layering_mode == VP9E_TEMPORAL_LAYERING_MODE_BYPASS) {
+    if (svc->update_buffer_slot[svc->spatial_layer_id] != 0)
+      svc->non_reference_frame = 0;
+  }
+
+  if (svc->spatial_layer_id == 0) {
+    svc->high_source_sad_superframe = 0;
+    svc->high_num_blocks_with_motion = 0;
+  }
+
+  if (svc->temporal_layering_mode != VP9E_TEMPORAL_LAYERING_MODE_BYPASS &&
+      svc->last_layer_dropped[svc->spatial_layer_id] &&
+      svc->fb_idx_upd_tl0[svc->spatial_layer_id] != -1 &&
+      !svc->layer_context[svc->temporal_layer_id].is_key_frame) {
+    // For fixed/non-flexible mode, if the previous frame (same spatial layer
+    // from previous superframe) was dropped, make sure the lst_fb_idx
+    // for this frame corresponds to the buffer index updated on (last) encoded
+    // TL0 frame (with same spatial layer).
+    cpi->lst_fb_idx = svc->fb_idx_upd_tl0[svc->spatial_layer_id];
   }
 
   if (vp9_set_size_literal(cpi, width, height) != 0)
@@ -711,120 +984,6 @@
   return 0;
 }
 
-#if CONFIG_SPATIAL_SVC
-#define SMALL_FRAME_FB_IDX 7
-
-int vp9_svc_start_frame(VP9_COMP *const cpi) {
-  int width = 0, height = 0;
-  LAYER_CONTEXT *lc;
-  struct lookahead_entry *buf;
-  int count = 1 << (cpi->svc.number_temporal_layers - 1);
-
-  cpi->svc.spatial_layer_id = cpi->svc.spatial_layer_to_encode;
-  lc = &cpi->svc.layer_context[cpi->svc.spatial_layer_id];
-
-  cpi->svc.temporal_layer_id = 0;
-  while ((lc->current_video_frame_in_layer % count) != 0) {
-    ++cpi->svc.temporal_layer_id;
-    count >>= 1;
-  }
-
-  cpi->ref_frame_flags = VP9_ALT_FLAG | VP9_GOLD_FLAG | VP9_LAST_FLAG;
-
-  cpi->lst_fb_idx = cpi->svc.spatial_layer_id;
-
-  if (cpi->svc.spatial_layer_id == 0)
-    cpi->gld_fb_idx =
-        (lc->gold_ref_idx >= 0) ? lc->gold_ref_idx : cpi->lst_fb_idx;
-  else
-    cpi->gld_fb_idx = cpi->svc.spatial_layer_id - 1;
-
-  if (lc->current_video_frame_in_layer == 0) {
-    if (cpi->svc.spatial_layer_id >= 2) {
-      cpi->alt_fb_idx = cpi->svc.spatial_layer_id - 2;
-    } else {
-      cpi->alt_fb_idx = cpi->lst_fb_idx;
-      cpi->ref_frame_flags &= (~VP9_LAST_FLAG & ~VP9_ALT_FLAG);
-    }
-  } else {
-    if (cpi->oxcf.ss_enable_auto_arf[cpi->svc.spatial_layer_id]) {
-      cpi->alt_fb_idx = lc->alt_ref_idx;
-      if (!lc->has_alt_frame) cpi->ref_frame_flags &= (~VP9_ALT_FLAG);
-    } else {
-      // Find a proper alt_fb_idx for layers that don't have alt ref frame
-      if (cpi->svc.spatial_layer_id == 0) {
-        cpi->alt_fb_idx = cpi->lst_fb_idx;
-      } else {
-        LAYER_CONTEXT *lc_lower =
-            &cpi->svc.layer_context[cpi->svc.spatial_layer_id - 1];
-
-        if (cpi->oxcf.ss_enable_auto_arf[cpi->svc.spatial_layer_id - 1] &&
-            lc_lower->alt_ref_source != NULL)
-          cpi->alt_fb_idx = lc_lower->alt_ref_idx;
-        else if (cpi->svc.spatial_layer_id >= 2)
-          cpi->alt_fb_idx = cpi->svc.spatial_layer_id - 2;
-        else
-          cpi->alt_fb_idx = cpi->lst_fb_idx;
-      }
-    }
-  }
-
-  get_layer_resolution(cpi->oxcf.width, cpi->oxcf.height,
-                       lc->scaling_factor_num, lc->scaling_factor_den, &width,
-                       &height);
-
-  // Workaround for multiple frame contexts. In some frames we can't use prev_mi
-  // since its previous frame could be changed during decoding time. The idea is
-  // we put a empty invisible frame in front of them, then we will not use
-  // prev_mi when encoding these frames.
-
-  buf = vp9_lookahead_peek(cpi->lookahead, 0);
-  if (cpi->oxcf.error_resilient_mode == 0 && cpi->oxcf.pass == 2 &&
-      cpi->svc.encode_empty_frame_state == NEED_TO_ENCODE &&
-      lc->rc.frames_to_key != 0 &&
-      !(buf != NULL && (buf->flags & VPX_EFLAG_FORCE_KF))) {
-    if ((cpi->svc.number_temporal_layers > 1 &&
-         cpi->svc.temporal_layer_id < cpi->svc.number_temporal_layers - 1) ||
-        (cpi->svc.number_spatial_layers > 1 &&
-         cpi->svc.spatial_layer_id == 0)) {
-      struct lookahead_entry *buf = vp9_lookahead_peek(cpi->lookahead, 0);
-
-      if (buf != NULL) {
-        cpi->svc.empty_frame.ts_start = buf->ts_start;
-        cpi->svc.empty_frame.ts_end = buf->ts_end;
-        cpi->svc.encode_empty_frame_state = ENCODING;
-        cpi->common.show_frame = 0;
-        cpi->ref_frame_flags = 0;
-        cpi->common.frame_type = INTER_FRAME;
-        cpi->lst_fb_idx = cpi->gld_fb_idx = cpi->alt_fb_idx =
-            SMALL_FRAME_FB_IDX;
-
-        if (cpi->svc.encode_intra_empty_frame != 0) cpi->common.intra_only = 1;
-
-        width = SMALL_FRAME_WIDTH;
-        height = SMALL_FRAME_HEIGHT;
-      }
-    }
-  }
-
-  cpi->oxcf.worst_allowed_q = vp9_quantizer_to_qindex(lc->max_q);
-  cpi->oxcf.best_allowed_q = vp9_quantizer_to_qindex(lc->min_q);
-
-  vp9_change_config(cpi, &cpi->oxcf);
-
-  if (vp9_set_size_literal(cpi, width, height) != 0)
-    return VPX_CODEC_INVALID_PARAM;
-
-  vp9_set_high_precision_mv(cpi, 1);
-
-  cpi->alt_ref_source = get_layer_context(cpi)->alt_ref_source;
-
-  return 0;
-}
-
-#undef SMALL_FRAME_FB_IDX
-#endif  // CONFIG_SPATIAL_SVC
-
 struct lookahead_entry *vp9_svc_lookahead_pop(VP9_COMP *const cpi,
                                               struct lookahead_ctx *ctx,
                                               int drain) {
@@ -857,7 +1016,7 @@
 }
 
 // Reset on key frame: reset counters, references and buffer updates.
-void vp9_svc_reset_key_frame(VP9_COMP *const cpi) {
+void vp9_svc_reset_temporal_layers(VP9_COMP *const cpi, int is_key) {
   int sl, tl;
   SVC *const svc = &cpi->svc;
   LAYER_CONTEXT *lc = NULL;
@@ -865,7 +1024,7 @@
     for (tl = 0; tl < svc->number_temporal_layers; ++tl) {
       lc = &cpi->svc.layer_context[sl * svc->number_temporal_layers + tl];
       lc->current_video_frame_in_layer = 0;
-      lc->frames_from_key_frame = 0;
+      if (is_key) lc->frames_from_key_frame = 0;
     }
   }
   if (svc->temporal_layering_mode == VP9E_TEMPORAL_LAYERING_MODE_0212) {
@@ -904,3 +1063,276 @@
     }
   }
 }
+
+void vp9_svc_constrain_inter_layer_pred(VP9_COMP *const cpi) {
+  VP9_COMMON *const cm = &cpi->common;
+  SVC *const svc = &cpi->svc;
+  const int sl = svc->spatial_layer_id;
+  // Check for disabling inter-layer (spatial) prediction, if
+  // svc.disable_inter_layer_pred is set. If the previous spatial layer was
+  // dropped then disable the prediction from this (scaled) reference.
+  // For INTER_LAYER_PRED_OFF_NONKEY: inter-layer prediction is disabled
+  // on key frames or if any spatial layer is a sync layer.
+  if ((svc->disable_inter_layer_pred == INTER_LAYER_PRED_OFF_NONKEY &&
+       !svc->layer_context[svc->temporal_layer_id].is_key_frame &&
+       !svc->superframe_has_layer_sync) ||
+      svc->disable_inter_layer_pred == INTER_LAYER_PRED_OFF ||
+      svc->drop_spatial_layer[sl - 1]) {
+    MV_REFERENCE_FRAME ref_frame;
+    static const int flag_list[4] = { 0, VP9_LAST_FLAG, VP9_GOLD_FLAG,
+                                      VP9_ALT_FLAG };
+    for (ref_frame = LAST_FRAME; ref_frame <= ALTREF_FRAME; ++ref_frame) {
+      const YV12_BUFFER_CONFIG *yv12 = get_ref_frame_buffer(cpi, ref_frame);
+      if (yv12 != NULL && (cpi->ref_frame_flags & flag_list[ref_frame])) {
+        const struct scale_factors *const scale_fac =
+            &cm->frame_refs[ref_frame - 1].sf;
+        if (vp9_is_scaled(scale_fac)) {
+          cpi->ref_frame_flags &= (~flag_list[ref_frame]);
+          // Point golden/altref frame buffer index to last.
+          if (!svc->simulcast_mode) {
+            if (ref_frame == GOLDEN_FRAME)
+              cpi->gld_fb_idx = cpi->lst_fb_idx;
+            else if (ref_frame == ALTREF_FRAME)
+              cpi->alt_fb_idx = cpi->lst_fb_idx;
+          }
+        }
+      }
+    }
+  }
+  // For fixed/non-flexible SVC: check for disabling inter-layer prediction.
+  // If the reference for inter-layer prediction (the reference that is scaled)
+  // is not the previous spatial layer from the same superframe, then we disable
+  // inter-layer prediction. Only need to check when inter_layer prediction is
+  // not set to OFF mode.
+  if (svc->temporal_layering_mode != VP9E_TEMPORAL_LAYERING_MODE_BYPASS &&
+      svc->disable_inter_layer_pred != INTER_LAYER_PRED_OFF) {
+    // We only use LAST and GOLDEN for prediction in real-time mode, so we
+    // check both here.
+    MV_REFERENCE_FRAME ref_frame;
+    for (ref_frame = LAST_FRAME; ref_frame <= GOLDEN_FRAME; ref_frame++) {
+      struct scale_factors *scale_fac = &cm->frame_refs[ref_frame - 1].sf;
+      if (vp9_is_scaled(scale_fac)) {
+        // If this reference was updated on the previous spatial layer of the
+        // current superframe, then we keep this reference (don't disable).
+        // Otherwise we disable the inter-layer prediction.
+        // This condition is verified by checking if the current frame buffer
+        // index is equal to any of the slots for the previous spatial layer,
+        // and if so, check if that slot was updated/refreshed. If that is the
+        // case, then this reference is valid for inter-layer prediction under
+        // the mode INTER_LAYER_PRED_ON_CONSTRAINED.
+        int fb_idx =
+            ref_frame == LAST_FRAME ? cpi->lst_fb_idx : cpi->gld_fb_idx;
+        int ref_flag = ref_frame == LAST_FRAME ? VP9_LAST_FLAG : VP9_GOLD_FLAG;
+        int disable = 1;
+        if (fb_idx < 0) continue;
+        if ((fb_idx == svc->lst_fb_idx[sl - 1] &&
+             (svc->update_buffer_slot[sl - 1] & (1 << fb_idx))) ||
+            (fb_idx == svc->gld_fb_idx[sl - 1] &&
+             (svc->update_buffer_slot[sl - 1] & (1 << fb_idx))) ||
+            (fb_idx == svc->alt_fb_idx[sl - 1] &&
+             (svc->update_buffer_slot[sl - 1] & (1 << fb_idx))))
+          disable = 0;
+        if (disable) cpi->ref_frame_flags &= (~ref_flag);
+      }
+    }
+  }
+}
+
+void vp9_svc_assert_constraints_pattern(VP9_COMP *const cpi) {
+  SVC *const svc = &cpi->svc;
+  // For fixed/non-flexible mode, the following constraints are expected
+  // when inter-layer prediction is on (default).
+  if (svc->temporal_layering_mode != VP9E_TEMPORAL_LAYERING_MODE_BYPASS &&
+      svc->disable_inter_layer_pred == INTER_LAYER_PRED_ON &&
+      svc->framedrop_mode != LAYER_DROP) {
+    if (!svc->layer_context[svc->temporal_layer_id].is_key_frame) {
+      // On non-key frames: LAST is always temporal reference, GOLDEN is
+      // spatial reference.
+      if (svc->temporal_layer_id == 0)
+        // Base temporal only predicts from base temporal.
+        assert(svc->fb_idx_temporal_layer_id[cpi->lst_fb_idx] == 0);
+      else
+        // Non-base temporal only predicts from lower temporal layer.
+        assert(svc->fb_idx_temporal_layer_id[cpi->lst_fb_idx] <
+               svc->temporal_layer_id);
+      if (svc->spatial_layer_id > 0 && cpi->ref_frame_flags & VP9_GOLD_FLAG &&
+          svc->spatial_layer_id > svc->first_spatial_layer_to_encode) {
+        // Non-base spatial only predicts from lower spatial layer with same
+        // temporal_id.
+        assert(svc->fb_idx_spatial_layer_id[cpi->gld_fb_idx] ==
+               svc->spatial_layer_id - 1);
+        assert(svc->fb_idx_temporal_layer_id[cpi->gld_fb_idx] ==
+               svc->temporal_layer_id);
+      }
+    } else if (svc->spatial_layer_id > 0 &&
+               svc->spatial_layer_id > svc->first_spatial_layer_to_encode) {
+      // Only 1 reference for frames whose base is key; the reference may be
+      // LAST or GOLDEN, so we check both.
+      if (cpi->ref_frame_flags & VP9_LAST_FLAG) {
+        assert(svc->fb_idx_spatial_layer_id[cpi->lst_fb_idx] ==
+               svc->spatial_layer_id - 1);
+        assert(svc->fb_idx_temporal_layer_id[cpi->lst_fb_idx] ==
+               svc->temporal_layer_id);
+      } else if (cpi->ref_frame_flags & VP9_GOLD_FLAG) {
+        assert(svc->fb_idx_spatial_layer_id[cpi->gld_fb_idx] ==
+               svc->spatial_layer_id - 1);
+        assert(svc->fb_idx_temporal_layer_id[cpi->gld_fb_idx] ==
+               svc->temporal_layer_id);
+      }
+    }
+  } else if (svc->use_gf_temporal_ref_current_layer &&
+             !svc->layer_context[svc->temporal_layer_id].is_key_frame) {
+    // For the usage of golden as a second long-term reference: the
+    // temporal_layer_id of that reference must be the base temporal layer 0,
+    // and the spatial_layer_id of that reference must be the same as the
+    // current spatial_layer_id. If not, disable the feature.
+    // TODO(marpan): Investigate when this can happen, and maybe put this check
+    // and reset in a different place.
+    if (svc->fb_idx_spatial_layer_id[cpi->gld_fb_idx] !=
+            svc->spatial_layer_id ||
+        svc->fb_idx_temporal_layer_id[cpi->gld_fb_idx] != 0)
+      svc->use_gf_temporal_ref_current_layer = 0;
+  }
+}
+
+#if CONFIG_VP9_TEMPORAL_DENOISING
+int vp9_denoise_svc_non_key(VP9_COMP *const cpi) {
+  int layer =
+      LAYER_IDS_TO_IDX(cpi->svc.spatial_layer_id, cpi->svc.temporal_layer_id,
+                       cpi->svc.number_temporal_layers);
+  LAYER_CONTEXT *lc = &cpi->svc.layer_context[layer];
+  return denoise_svc(cpi) && !lc->is_key_frame;
+}
+#endif
+
+void vp9_svc_check_spatial_layer_sync(VP9_COMP *const cpi) {
+  SVC *const svc = &cpi->svc;
+  // Only for superframes whose base is not key, as those are
+  // already sync frames.
+  if (!svc->layer_context[svc->temporal_layer_id].is_key_frame) {
+    if (svc->spatial_layer_id == 0) {
+      // On base spatial layer: if the current superframe has a layer sync then
+      // reset the pattern counters and reset to base temporal layer.
+      if (svc->superframe_has_layer_sync)
+        vp9_svc_reset_temporal_layers(cpi, cpi->common.frame_type == KEY_FRAME);
+    }
+    // If the layer sync is set for this current spatial layer then
+    // disable the temporal reference.
+    if (svc->spatial_layer_id > 0 &&
+        svc->spatial_layer_sync[svc->spatial_layer_id]) {
+      cpi->ref_frame_flags &= (~VP9_LAST_FLAG);
+      if (svc->use_gf_temporal_ref_current_layer) {
+        int index = svc->spatial_layer_id;
+        // If golden is used as second reference: need to remove it from
+        // prediction, reset refresh period to 0, and update the reference.
+        svc->use_gf_temporal_ref_current_layer = 0;
+        cpi->rc.baseline_gf_interval = 0;
+        cpi->rc.frames_till_gf_update_due = 0;
+        // On layer sync frame we must update the buffer index used for long
+        // term reference. Use the alt_ref since it is not used or updated on
+        // sync frames.
+        if (svc->number_spatial_layers == 3) index = svc->spatial_layer_id - 1;
+        assert(index >= 0);
+        cpi->alt_fb_idx = svc->buffer_gf_temporal_ref[index].idx;
+        cpi->ext_refresh_alt_ref_frame = 1;
+      }
+    }
+  }
+}
+
+void vp9_svc_update_ref_frame_buffer_idx(VP9_COMP *const cpi) {
+  SVC *const svc = &cpi->svc;
+  // Update the usage of frame buffer index for base spatial layers.
+  if (svc->spatial_layer_id == 0) {
+    if ((cpi->ref_frame_flags & VP9_LAST_FLAG) || cpi->refresh_last_frame)
+      svc->fb_idx_base[cpi->lst_fb_idx] = 1;
+    if ((cpi->ref_frame_flags & VP9_GOLD_FLAG) || cpi->refresh_golden_frame)
+      svc->fb_idx_base[cpi->gld_fb_idx] = 1;
+    if ((cpi->ref_frame_flags & VP9_ALT_FLAG) || cpi->refresh_alt_ref_frame)
+      svc->fb_idx_base[cpi->alt_fb_idx] = 1;
+  }
+}
+
+static void vp9_svc_update_ref_frame_bypass_mode(VP9_COMP *const cpi) {
+  // For non-flexible/bypass SVC mode: check for refreshing other buffer
+  // slots.
+  SVC *const svc = &cpi->svc;
+  VP9_COMMON *const cm = &cpi->common;
+  BufferPool *const pool = cm->buffer_pool;
+  int i;
+  for (i = 0; i < REF_FRAMES; i++) {
+    if (cm->frame_type == KEY_FRAME ||
+        svc->update_buffer_slot[svc->spatial_layer_id] & (1 << i)) {
+      ref_cnt_fb(pool->frame_bufs, &cm->ref_frame_map[i], cm->new_fb_idx);
+      svc->fb_idx_spatial_layer_id[i] = svc->spatial_layer_id;
+      svc->fb_idx_temporal_layer_id[i] = svc->temporal_layer_id;
+    }
+  }
+}
+
+void vp9_svc_update_ref_frame(VP9_COMP *const cpi) {
+  VP9_COMMON *const cm = &cpi->common;
+  SVC *const svc = &cpi->svc;
+  BufferPool *const pool = cm->buffer_pool;
+
+  if (svc->temporal_layering_mode == VP9E_TEMPORAL_LAYERING_MODE_BYPASS &&
+      svc->use_set_ref_frame_config) {
+    vp9_svc_update_ref_frame_bypass_mode(cpi);
+  } else if (cm->frame_type == KEY_FRAME && !svc->simulcast_mode) {
+    // Keep track of frame index for each reference frame.
+    int i;
+    // On key frame update all reference frame slots.
+    for (i = 0; i < REF_FRAMES; i++) {
+      svc->fb_idx_spatial_layer_id[i] = svc->spatial_layer_id;
+      svc->fb_idx_temporal_layer_id[i] = svc->temporal_layer_id;
+      // LAST/GOLDEN/ALTREF is already updated above.
+      if (i != cpi->lst_fb_idx && i != cpi->gld_fb_idx && i != cpi->alt_fb_idx)
+        ref_cnt_fb(pool->frame_bufs, &cm->ref_frame_map[i], cm->new_fb_idx);
+    }
+  } else {
+    if (cpi->refresh_last_frame) {
+      svc->fb_idx_spatial_layer_id[cpi->lst_fb_idx] = svc->spatial_layer_id;
+      svc->fb_idx_temporal_layer_id[cpi->lst_fb_idx] = svc->temporal_layer_id;
+    }
+    if (cpi->refresh_golden_frame) {
+      svc->fb_idx_spatial_layer_id[cpi->gld_fb_idx] = svc->spatial_layer_id;
+      svc->fb_idx_temporal_layer_id[cpi->gld_fb_idx] = svc->temporal_layer_id;
+    }
+    if (cpi->refresh_alt_ref_frame) {
+      svc->fb_idx_spatial_layer_id[cpi->alt_fb_idx] = svc->spatial_layer_id;
+      svc->fb_idx_temporal_layer_id[cpi->alt_fb_idx] = svc->temporal_layer_id;
+    }
+  }
+  // Copy flags from encoder to SVC struct.
+  vp9_copy_flags_ref_update_idx(cpi);
+  vp9_svc_update_ref_frame_buffer_idx(cpi);
+}
+
+void vp9_svc_adjust_frame_rate(VP9_COMP *const cpi) {
+  int64_t this_duration =
+      cpi->svc.timebase_fac * cpi->svc.duration[cpi->svc.spatial_layer_id];
+  vp9_new_framerate(cpi, 10000000.0 / this_duration);
+}
+
+void vp9_svc_adjust_avg_frame_qindex(VP9_COMP *const cpi) {
+  VP9_COMMON *const cm = &cpi->common;
+  SVC *const svc = &cpi->svc;
+  RATE_CONTROL *const rc = &cpi->rc;
+  // On key frames in CBR mode: reset the avg_frame_qindex for the base layer
+  // (to level closer to worst_quality) if the overshoot is significant.
+  // Reset it for all temporal layers on base spatial layer.
+  if (cm->frame_type == KEY_FRAME && cpi->oxcf.rc_mode == VPX_CBR &&
+      !svc->simulcast_mode &&
+      rc->projected_frame_size > 3 * rc->avg_frame_bandwidth) {
+    int tl;
+    rc->avg_frame_qindex[INTER_FRAME] =
+        VPXMAX(rc->avg_frame_qindex[INTER_FRAME],
+               (cm->base_qindex + rc->worst_quality) >> 1);
+    for (tl = 0; tl < svc->number_temporal_layers; ++tl) {
+      const int layer = LAYER_IDS_TO_IDX(0, tl, svc->number_temporal_layers);
+      LAYER_CONTEXT *lc = &svc->layer_context[layer];
+      RATE_CONTROL *lrc = &lc->rc;
+      lrc->avg_frame_qindex[INTER_FRAME] = rc->avg_frame_qindex[INTER_FRAME];
+    }
+  }
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_svc_layercontext.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_svc_layercontext.h
index 1a581e0..7e46500 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_svc_layercontext.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_svc_layercontext.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_SVC_LAYERCONTEXT_H_
-#define VP9_ENCODER_VP9_SVC_LAYERCONTEXT_H_
+#ifndef VPX_VP9_ENCODER_VP9_SVC_LAYERCONTEXT_H_
+#define VPX_VP9_ENCODER_VP9_SVC_LAYERCONTEXT_H_
 
 #include "vpx/vpx_encoder.h"
 
@@ -19,6 +19,24 @@
 extern "C" {
 #endif
 
+typedef enum {
+  // Inter-layer prediction is on on all frames.
+  INTER_LAYER_PRED_ON,
+  // Inter-layer prediction is off on all frames.
+  INTER_LAYER_PRED_OFF,
+  // Inter-layer prediction is off on non-key frames and non-sync frames.
+  INTER_LAYER_PRED_OFF_NONKEY,
+  // Inter-layer prediction is on on all frames, but constrained such
+  // that any layer S (> 0) can only predict from previous spatial
+  // layer S-1, from the same superframe.
+  INTER_LAYER_PRED_ON_CONSTRAINED
+} INTER_LAYER_PRED;
+
+typedef struct BUFFER_LONGTERM_REF {
+  int idx;
+  int is_used;
+} BUFFER_LONGTERM_REF;
+
 typedef struct {
   RATE_CONTROL rc;
   int target_bandwidth;
@@ -29,6 +47,9 @@
   int min_q;
   int scaling_factor_num;
   int scaling_factor_den;
+  // Scaling factors used for internal resize scaling for single layer SVC.
+  int scaling_factor_num_resize;
+  int scaling_factor_den_resize;
   TWO_PASS twopass;
   vpx_fixed_buf_t rc_twopass_stats_in;
   unsigned int current_video_frame_in_layer;
@@ -40,12 +61,15 @@
   int gold_ref_idx;
   int has_alt_frame;
   size_t layer_size;
-  struct vpx_psnr_pkt psnr_pkt;
   // Cyclic refresh parameters (aq-mode=3), that need to be updated per-frame.
+  // TODO(jianj/marpan): Is it better to use the full cyclic refresh struct?
   int sb_index;
   signed char *map;
   uint8_t *last_coded_q_map;
   uint8_t *consec_zero_mv;
+  int actual_num_seg1_blocks;
+  int actual_num_seg2_blocks;
+  int counter_encode_maxq_scene_change;
   uint8_t speed;
 } LAYER_CONTEXT;
 
@@ -56,8 +80,6 @@
   int number_temporal_layers;
 
   int spatial_layer_to_encode;
-  int first_spatial_layer_to_encode;
-  int rc_drop_superframe;
 
   // Workaround for multiple frame contexts
   enum { ENCODED = 0, ENCODING, NEED_TO_ENCODE } encode_empty_frame_state;
@@ -81,11 +103,16 @@
   // Frame flags and buffer indexes for each spatial layer, set by the
   // application (external settings).
   int ext_frame_flags[VPX_MAX_LAYERS];
-  int ext_lst_fb_idx[VPX_MAX_LAYERS];
-  int ext_gld_fb_idx[VPX_MAX_LAYERS];
-  int ext_alt_fb_idx[VPX_MAX_LAYERS];
-  int ref_frame_index[REF_FRAMES];
+  int lst_fb_idx[VPX_MAX_LAYERS];
+  int gld_fb_idx[VPX_MAX_LAYERS];
+  int alt_fb_idx[VPX_MAX_LAYERS];
   int force_zero_mode_spatial_ref;
+  // Sequence level flag to enable second (long term) temporal reference.
+  int use_gf_temporal_ref;
+  // Frame level flag to enable second (long term) temporal reference.
+  int use_gf_temporal_ref_current_layer;
+  // Allow second reference for at most 2 top highest resolution layers.
+  BUFFER_LONGTERM_REF buffer_gf_temporal_ref[2];
   int current_superframe;
   int non_reference_frame;
   int use_base_mv;
@@ -100,12 +127,77 @@
 
   BLOCK_SIZE *prev_partition_svc;
   int mi_stride[VPX_MAX_LAYERS];
+  int mi_rows[VPX_MAX_LAYERS];
+  int mi_cols[VPX_MAX_LAYERS];
 
   int first_layer_denoise;
 
   int skip_enhancement_layer;
 
   int lower_layer_qindex;
+
+  int last_layer_dropped[VPX_MAX_LAYERS];
+  int drop_spatial_layer[VPX_MAX_LAYERS];
+  int framedrop_thresh[VPX_MAX_LAYERS];
+  int drop_count[VPX_MAX_LAYERS];
+  int force_drop_constrained_from_above[VPX_MAX_LAYERS];
+  int max_consec_drop;
+  SVC_LAYER_DROP_MODE framedrop_mode;
+
+  INTER_LAYER_PRED disable_inter_layer_pred;
+
+  // Flags to indicate a scene change and a high number of motion blocks in
+  // the current superframe; scene detection is currently checked for each
+  // superframe, prior to encoding, on the full resolution source.
+  int high_source_sad_superframe;
+  int high_num_blocks_with_motion;
+
+  // Flags used to get SVC pattern info.
+  int update_buffer_slot[VPX_SS_MAX_LAYERS];
+  uint8_t reference_last[VPX_SS_MAX_LAYERS];
+  uint8_t reference_golden[VPX_SS_MAX_LAYERS];
+  uint8_t reference_altref[VPX_SS_MAX_LAYERS];
+  // TODO(jianj): Remove these last 3, deprecated.
+  uint8_t update_last[VPX_SS_MAX_LAYERS];
+  uint8_t update_golden[VPX_SS_MAX_LAYERS];
+  uint8_t update_altref[VPX_SS_MAX_LAYERS];
+
+  // Keep track of the frame buffer index updated/refreshed on the base
+  // temporal superframe.
+  int fb_idx_upd_tl0[VPX_SS_MAX_LAYERS];
+
+  // Keep track of the spatial and temporal layer id of the frame that last
+  // updated the frame buffer index.
+  uint8_t fb_idx_spatial_layer_id[REF_FRAMES];
+  uint8_t fb_idx_temporal_layer_id[REF_FRAMES];
+
+  int spatial_layer_sync[VPX_SS_MAX_LAYERS];
+  uint8_t set_intra_only_frame;
+  uint8_t previous_frame_is_intra_only;
+  uint8_t superframe_has_layer_sync;
+
+  uint8_t fb_idx_base[REF_FRAMES];
+
+  int use_set_ref_frame_config;
+
+  int temporal_layer_id_per_spatial[VPX_SS_MAX_LAYERS];
+
+  int first_spatial_layer_to_encode;
+
+  // Parameters for allowing framerate per spatial layer, and buffer
+  // update based on timestamps.
+  int64_t duration[VPX_SS_MAX_LAYERS];
+  int64_t timebase_fac;
+  int64_t time_stamp_superframe;
+  int64_t time_stamp_prev[VPX_SS_MAX_LAYERS];
+
+  int num_encoded_top_layer;
+
+  // Every spatial layer on a superframe whose base is key is key too.
+  int simulcast_mode;
+
+  // Flag to indicate SVC is dynamically switched to a single layer.
+  int single_layer_svc;
 } SVC;
 
 struct VP9_COMP;
@@ -153,16 +245,37 @@
 // Start a frame and initialize svc parameters
 int vp9_svc_start_frame(struct VP9_COMP *const cpi);
 
+#if CONFIG_VP9_TEMPORAL_DENOISING
+int vp9_denoise_svc_non_key(struct VP9_COMP *const cpi);
+#endif
+
+void vp9_copy_flags_ref_update_idx(struct VP9_COMP *const cpi);
+
 int vp9_one_pass_cbr_svc_start_layer(struct VP9_COMP *const cpi);
 
 void vp9_free_svc_cyclic_refresh(struct VP9_COMP *const cpi);
 
-void vp9_svc_reset_key_frame(struct VP9_COMP *const cpi);
+void vp9_svc_reset_temporal_layers(struct VP9_COMP *const cpi, int is_key);
 
 void vp9_svc_check_reset_layer_rc_flag(struct VP9_COMP *const cpi);
 
+void vp9_svc_constrain_inter_layer_pred(struct VP9_COMP *const cpi);
+
+void vp9_svc_assert_constraints_pattern(struct VP9_COMP *const cpi);
+
+void vp9_svc_check_spatial_layer_sync(struct VP9_COMP *const cpi);
+
+void vp9_svc_update_ref_frame_buffer_idx(struct VP9_COMP *const cpi);
+
+void vp9_svc_update_ref_frame_key_simulcast(struct VP9_COMP *const cpi);
+
+void vp9_svc_update_ref_frame(struct VP9_COMP *const cpi);
+
+void vp9_svc_adjust_frame_rate(struct VP9_COMP *const cpi);
+
+void vp9_svc_adjust_avg_frame_qindex(struct VP9_COMP *const cpi);
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_SVC_LAYERCONTEXT_
+#endif  // VPX_VP9_ENCODER_VP9_SVC_LAYERCONTEXT_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_temporal_filter.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_temporal_filter.c
index 2758c42..701bb89 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_temporal_filter.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_temporal_filter.c
@@ -34,57 +34,155 @@
 #include "vpx_scale/vpx_scale.h"
 
 static int fixed_divide[512];
+static unsigned int index_mult[14] = { 0,     0,     0,     0,     49152,
+                                       39322, 32768, 28087, 24576, 21846,
+                                       19661, 17874, 0,     15124 };
+#if CONFIG_VP9_HIGHBITDEPTH
+static int64_t highbd_index_mult[14] = { 0U,          0U,          0U,
+                                         0U,          3221225472U, 2576980378U,
+                                         2147483648U, 1840700270U, 1610612736U,
+                                         1431655766U, 1288490189U, 1171354718U,
+                                         0U,          991146300U };
+#endif  // CONFIG_VP9_HIGHBITDEPTH
 
 static void temporal_filter_predictors_mb_c(
     MACROBLOCKD *xd, uint8_t *y_mb_ptr, uint8_t *u_mb_ptr, uint8_t *v_mb_ptr,
     int stride, int uv_block_width, int uv_block_height, int mv_row, int mv_col,
-    uint8_t *pred, struct scale_factors *scale, int x, int y) {
+    uint8_t *pred, struct scale_factors *scale, int x, int y, MV *blk_mvs,
+    int use_32x32) {
   const int which_mv = 0;
-  const MV mv = { mv_row, mv_col };
   const InterpKernel *const kernel = vp9_filter_kernels[EIGHTTAP_SHARP];
+  int i, j, k = 0, ys = (BH >> 1), xs = (BW >> 1);
 
   enum mv_precision mv_precision_uv;
   int uv_stride;
-  if (uv_block_width == 8) {
+  if (uv_block_width == (BW >> 1)) {
     uv_stride = (stride + 1) >> 1;
     mv_precision_uv = MV_PRECISION_Q4;
   } else {
     uv_stride = stride;
     mv_precision_uv = MV_PRECISION_Q3;
   }
+#if !CONFIG_VP9_HIGHBITDEPTH
+  (void)xd;
+#endif
 
+  if (use_32x32) {
+    const MV mv = { mv_row, mv_col };
 #if CONFIG_VP9_HIGHBITDEPTH
-  if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
-    vp9_highbd_build_inter_predictor(CONVERT_TO_SHORTPTR(y_mb_ptr), stride,
-                                     CONVERT_TO_SHORTPTR(&pred[0]), 16, &mv,
-                                     scale, 16, 16, which_mv, kernel,
-                                     MV_PRECISION_Q3, x, y, xd->bd);
+    if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
+      vp9_highbd_build_inter_predictor(CONVERT_TO_SHORTPTR(y_mb_ptr), stride,
+                                       CONVERT_TO_SHORTPTR(&pred[0]), BW, &mv,
+                                       scale, BW, BH, which_mv, kernel,
+                                       MV_PRECISION_Q3, x, y, xd->bd);
 
-    vp9_highbd_build_inter_predictor(CONVERT_TO_SHORTPTR(u_mb_ptr), uv_stride,
-                                     CONVERT_TO_SHORTPTR(&pred[256]),
-                                     uv_block_width, &mv, scale, uv_block_width,
-                                     uv_block_height, which_mv, kernel,
-                                     mv_precision_uv, x, y, xd->bd);
+      vp9_highbd_build_inter_predictor(
+          CONVERT_TO_SHORTPTR(u_mb_ptr), uv_stride,
+          CONVERT_TO_SHORTPTR(&pred[BLK_PELS]), uv_block_width, &mv, scale,
+          uv_block_width, uv_block_height, which_mv, kernel, mv_precision_uv, x,
+          y, xd->bd);
 
-    vp9_highbd_build_inter_predictor(CONVERT_TO_SHORTPTR(v_mb_ptr), uv_stride,
-                                     CONVERT_TO_SHORTPTR(&pred[512]),
-                                     uv_block_width, &mv, scale, uv_block_width,
-                                     uv_block_height, which_mv, kernel,
-                                     mv_precision_uv, x, y, xd->bd);
+      vp9_highbd_build_inter_predictor(
+          CONVERT_TO_SHORTPTR(v_mb_ptr), uv_stride,
+          CONVERT_TO_SHORTPTR(&pred[(BLK_PELS << 1)]), uv_block_width, &mv,
+          scale, uv_block_width, uv_block_height, which_mv, kernel,
+          mv_precision_uv, x, y, xd->bd);
+      return;
+    }
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+    vp9_build_inter_predictor(y_mb_ptr, stride, &pred[0], BW, &mv, scale, BW,
+                              BH, which_mv, kernel, MV_PRECISION_Q3, x, y);
+
+    vp9_build_inter_predictor(u_mb_ptr, uv_stride, &pred[BLK_PELS],
+                              uv_block_width, &mv, scale, uv_block_width,
+                              uv_block_height, which_mv, kernel,
+                              mv_precision_uv, x, y);
+
+    vp9_build_inter_predictor(v_mb_ptr, uv_stride, &pred[(BLK_PELS << 1)],
+                              uv_block_width, &mv, scale, uv_block_width,
+                              uv_block_height, which_mv, kernel,
+                              mv_precision_uv, x, y);
     return;
   }
+
+  // While use_32x32 = 0, construct the 32x32 predictor using 4 16x16
+  // predictors.
+  // Y predictor
+  for (i = 0; i < BH; i += ys) {
+    for (j = 0; j < BW; j += xs) {
+      const MV mv = blk_mvs[k];
+      const int y_offset = i * stride + j;
+      const int p_offset = i * BW + j;
+
+#if CONFIG_VP9_HIGHBITDEPTH
+      if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
+        vp9_highbd_build_inter_predictor(
+            CONVERT_TO_SHORTPTR(y_mb_ptr + y_offset), stride,
+            CONVERT_TO_SHORTPTR(&pred[p_offset]), BW, &mv, scale, xs, ys,
+            which_mv, kernel, MV_PRECISION_Q3, x, y, xd->bd);
+      } else {
+        vp9_build_inter_predictor(y_mb_ptr + y_offset, stride, &pred[p_offset],
+                                  BW, &mv, scale, xs, ys, which_mv, kernel,
+                                  MV_PRECISION_Q3, x, y);
+      }
+#else
+      vp9_build_inter_predictor(y_mb_ptr + y_offset, stride, &pred[p_offset],
+                                BW, &mv, scale, xs, ys, which_mv, kernel,
+                                MV_PRECISION_Q3, x, y);
 #endif  // CONFIG_VP9_HIGHBITDEPTH
-  (void)xd;
-  vp9_build_inter_predictor(y_mb_ptr, stride, &pred[0], 16, &mv, scale, 16, 16,
-                            which_mv, kernel, MV_PRECISION_Q3, x, y);
+      k++;
+    }
+  }
 
-  vp9_build_inter_predictor(u_mb_ptr, uv_stride, &pred[256], uv_block_width,
-                            &mv, scale, uv_block_width, uv_block_height,
-                            which_mv, kernel, mv_precision_uv, x, y);
+  // U and V predictors
+  ys = (uv_block_height >> 1);
+  xs = (uv_block_width >> 1);
+  k = 0;
 
-  vp9_build_inter_predictor(v_mb_ptr, uv_stride, &pred[512], uv_block_width,
-                            &mv, scale, uv_block_width, uv_block_height,
-                            which_mv, kernel, mv_precision_uv, x, y);
+  for (i = 0; i < uv_block_height; i += ys) {
+    for (j = 0; j < uv_block_width; j += xs) {
+      const MV mv = blk_mvs[k];
+      const int uv_offset = i * uv_stride + j;
+      const int p_offset = i * uv_block_width + j;
+
+#if CONFIG_VP9_HIGHBITDEPTH
+      if (xd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
+        vp9_highbd_build_inter_predictor(
+            CONVERT_TO_SHORTPTR(u_mb_ptr + uv_offset), uv_stride,
+            CONVERT_TO_SHORTPTR(&pred[BLK_PELS + p_offset]), uv_block_width,
+            &mv, scale, xs, ys, which_mv, kernel, mv_precision_uv, x, y,
+            xd->bd);
+
+        vp9_highbd_build_inter_predictor(
+            CONVERT_TO_SHORTPTR(v_mb_ptr + uv_offset), uv_stride,
+            CONVERT_TO_SHORTPTR(&pred[(BLK_PELS << 1) + p_offset]),
+            uv_block_width, &mv, scale, xs, ys, which_mv, kernel,
+            mv_precision_uv, x, y, xd->bd);
+      } else {
+        vp9_build_inter_predictor(u_mb_ptr + uv_offset, uv_stride,
+                                  &pred[BLK_PELS + p_offset], uv_block_width,
+                                  &mv, scale, xs, ys, which_mv, kernel,
+                                  mv_precision_uv, x, y);
+
+        vp9_build_inter_predictor(v_mb_ptr + uv_offset, uv_stride,
+                                  &pred[(BLK_PELS << 1) + p_offset],
+                                  uv_block_width, &mv, scale, xs, ys, which_mv,
+                                  kernel, mv_precision_uv, x, y);
+      }
+#else
+      vp9_build_inter_predictor(u_mb_ptr + uv_offset, uv_stride,
+                                &pred[BLK_PELS + p_offset], uv_block_width, &mv,
+                                scale, xs, ys, which_mv, kernel,
+                                mv_precision_uv, x, y);
+
+      vp9_build_inter_predictor(v_mb_ptr + uv_offset, uv_stride,
+                                &pred[(BLK_PELS << 1) + p_offset],
+                                uv_block_width, &mv, scale, xs, ys, which_mv,
+                                kernel, mv_precision_uv, x, y);
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+      k++;
+    }
+  }
 }
 
 void vp9_temporal_filter_init(void) {
@@ -94,143 +192,372 @@
   for (i = 1; i < 512; ++i) fixed_divide[i] = 0x80000 / i;
 }
 
-void vp9_temporal_filter_apply_c(const uint8_t *frame1, unsigned int stride,
-                                 const uint8_t *frame2,
-                                 unsigned int block_width,
-                                 unsigned int block_height, int strength,
-                                 int filter_weight, uint32_t *accumulator,
-                                 uint16_t *count) {
-  unsigned int i, j, k;
+static INLINE int mod_index(int sum_dist, int index, int rounding, int strength,
+                            int filter_weight) {
+  int mod;
+
+  assert(index >= 0 && index <= 13);
+  assert(index_mult[index] != 0);
+
+  mod =
+      ((unsigned int)clamp(sum_dist, 0, UINT16_MAX) * index_mult[index]) >> 16;
+  mod += rounding;
+  mod >>= strength;
+
+  mod = VPXMIN(16, mod);
+
+  mod = 16 - mod;
+  mod *= filter_weight;
+
+  return mod;
+}
+
+#if CONFIG_VP9_HIGHBITDEPTH
+static INLINE int highbd_mod_index(int sum_dist, int index, int rounding,
+                                   int strength, int filter_weight) {
+  int mod;
+
+  assert(index >= 0 && index <= 13);
+  assert(highbd_index_mult[index] != 0);
+
+  mod = (int)((clamp(sum_dist, 0, INT32_MAX) * highbd_index_mult[index]) >> 32);
+  mod += rounding;
+  mod >>= strength;
+
+  mod = VPXMIN(16, mod);
+
+  mod = 16 - mod;
+  mod *= filter_weight;
+
+  return mod;
+}
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
+static INLINE int get_filter_weight(unsigned int i, unsigned int j,
+                                    unsigned int block_height,
+                                    unsigned int block_width,
+                                    const int *const blk_fw, int use_32x32) {
+  // blk_fw[0] ~ blk_fw[3] are the same.
+  if (use_32x32) {
+    return blk_fw[0];
+  }
+
+  if (i < block_height / 2) {
+    if (j < block_width / 2) {
+      return blk_fw[0];
+    }
+
+    return blk_fw[1];
+  }
+
+  if (j < block_width / 2) {
+    return blk_fw[2];
+  }
+
+  return blk_fw[3];
+}
+
+void vp9_apply_temporal_filter_c(
+    const uint8_t *y_frame1, int y_stride, const uint8_t *y_pred,
+    int y_buf_stride, const uint8_t *u_frame1, const uint8_t *v_frame1,
+    int uv_stride, const uint8_t *u_pred, const uint8_t *v_pred,
+    int uv_buf_stride, unsigned int block_width, unsigned int block_height,
+    int ss_x, int ss_y, int strength, const int *const blk_fw, int use_32x32,
+    uint32_t *y_accumulator, uint16_t *y_count, uint32_t *u_accumulator,
+    uint16_t *u_count, uint32_t *v_accumulator, uint16_t *v_count) {
+  unsigned int i, j, k, m;
   int modifier;
-  int byte = 0;
-  const int rounding = strength > 0 ? 1 << (strength - 1) : 0;
+  const int rounding = (1 << strength) >> 1;
+  const unsigned int uv_block_width = block_width >> ss_x;
+  const unsigned int uv_block_height = block_height >> ss_y;
+  DECLARE_ALIGNED(16, uint16_t, y_diff_sse[BLK_PELS]);
+  DECLARE_ALIGNED(16, uint16_t, u_diff_sse[BLK_PELS]);
+  DECLARE_ALIGNED(16, uint16_t, v_diff_sse[BLK_PELS]);
+
+  int idx = 0, idy;
 
   assert(strength >= 0);
   assert(strength <= 6);
 
-  assert(filter_weight >= 0);
-  assert(filter_weight <= 2);
+  memset(y_diff_sse, 0, BLK_PELS * sizeof(uint16_t));
+  memset(u_diff_sse, 0, BLK_PELS * sizeof(uint16_t));
+  memset(v_diff_sse, 0, BLK_PELS * sizeof(uint16_t));
 
-  for (i = 0, k = 0; i < block_height; i++) {
-    for (j = 0; j < block_width; j++, k++) {
-      int pixel_value = *frame2;
+  // Calculate diff^2 for each pixel of the 16x16 block.
+  // TODO(yunqing): the following code needs to be optimized.
+  for (i = 0; i < block_height; i++) {
+    for (j = 0; j < block_width; j++) {
+      const int16_t diff =
+          y_frame1[i * (int)y_stride + j] - y_pred[i * (int)block_width + j];
+      y_diff_sse[idx++] = diff * diff;
+    }
+  }
+  idx = 0;
+  for (i = 0; i < uv_block_height; i++) {
+    for (j = 0; j < uv_block_width; j++) {
+      const int16_t diffu =
+          u_frame1[i * uv_stride + j] - u_pred[i * uv_buf_stride + j];
+      const int16_t diffv =
+          v_frame1[i * uv_stride + j] - v_pred[i * uv_buf_stride + j];
+      u_diff_sse[idx] = diffu * diffu;
+      v_diff_sse[idx] = diffv * diffv;
+      idx++;
+    }
+  }
+
+  for (i = 0, k = 0, m = 0; i < block_height; i++) {
+    for (j = 0; j < block_width; j++) {
+      const int pixel_value = y_pred[i * y_buf_stride + j];
+      const int filter_weight =
+          get_filter_weight(i, j, block_height, block_width, blk_fw, use_32x32);
 
       // non-local mean approach
-      int diff_sse[9] = { 0 };
-      int idx, idy, index = 0;
+      int y_index = 0;
+
+      const int uv_r = i >> ss_y;
+      const int uv_c = j >> ss_x;
+      modifier = 0;
 
       for (idy = -1; idy <= 1; ++idy) {
         for (idx = -1; idx <= 1; ++idx) {
-          int row = (int)i + idy;
-          int col = (int)j + idx;
+          const int row = (int)i + idy;
+          const int col = (int)j + idx;
 
           if (row >= 0 && row < (int)block_height && col >= 0 &&
               col < (int)block_width) {
-            int diff = frame1[byte + idy * (int)stride + idx] -
-                       frame2[idy * (int)block_width + idx];
-            diff_sse[index] = diff * diff;
-            ++index;
+            modifier += y_diff_sse[row * (int)block_width + col];
+            ++y_index;
           }
         }
       }
 
-      assert(index > 0);
+      assert(y_index > 0);
 
-      modifier = 0;
-      for (idx = 0; idx < 9; ++idx) modifier += diff_sse[idx];
+      modifier += u_diff_sse[uv_r * uv_block_width + uv_c];
+      modifier += v_diff_sse[uv_r * uv_block_width + uv_c];
 
-      modifier *= 3;
-      modifier /= index;
+      y_index += 2;
 
-      ++frame2;
+      modifier =
+          mod_index(modifier, y_index, rounding, strength, filter_weight);
 
-      modifier += rounding;
-      modifier >>= strength;
+      y_count[k] += modifier;
+      y_accumulator[k] += modifier * pixel_value;
 
-      if (modifier > 16) modifier = 16;
+      ++k;
 
-      modifier = 16 - modifier;
-      modifier *= filter_weight;
+      // Process chroma component
+      if (!(i & ss_y) && !(j & ss_x)) {
+        const int u_pixel_value = u_pred[uv_r * uv_buf_stride + uv_c];
+        const int v_pixel_value = v_pred[uv_r * uv_buf_stride + uv_c];
 
-      count[k] += modifier;
-      accumulator[k] += modifier * pixel_value;
+        // non-local mean approach
+        int cr_index = 0;
+        int u_mod = 0, v_mod = 0;
+        int y_diff = 0;
 
-      byte++;
+        for (idy = -1; idy <= 1; ++idy) {
+          for (idx = -1; idx <= 1; ++idx) {
+            const int row = uv_r + idy;
+            const int col = uv_c + idx;
+
+            if (row >= 0 && row < (int)uv_block_height && col >= 0 &&
+                col < (int)uv_block_width) {
+              u_mod += u_diff_sse[row * uv_block_width + col];
+              v_mod += v_diff_sse[row * uv_block_width + col];
+              ++cr_index;
+            }
+          }
+        }
+
+        assert(cr_index > 0);
+
+        for (idy = 0; idy < 1 + ss_y; ++idy) {
+          for (idx = 0; idx < 1 + ss_x; ++idx) {
+            const int row = (uv_r << ss_y) + idy;
+            const int col = (uv_c << ss_x) + idx;
+            y_diff += y_diff_sse[row * (int)block_width + col];
+            ++cr_index;
+          }
+        }
+
+        u_mod += y_diff;
+        v_mod += y_diff;
+
+        u_mod = mod_index(u_mod, cr_index, rounding, strength, filter_weight);
+        v_mod = mod_index(v_mod, cr_index, rounding, strength, filter_weight);
+
+        u_count[m] += u_mod;
+        u_accumulator[m] += u_mod * u_pixel_value;
+        v_count[m] += v_mod;
+        v_accumulator[m] += v_mod * v_pixel_value;
+
+        ++m;
+      }  // Complete YUV pixel
     }
-
-    byte += stride - block_width;
   }
 }
 
 #if CONFIG_VP9_HIGHBITDEPTH
-void vp9_highbd_temporal_filter_apply_c(
-    const uint8_t *frame1_8, unsigned int stride, const uint8_t *frame2_8,
-    unsigned int block_width, unsigned int block_height, int strength,
-    int filter_weight, uint32_t *accumulator, uint16_t *count) {
-  const uint16_t *frame1 = CONVERT_TO_SHORTPTR(frame1_8);
-  const uint16_t *frame2 = CONVERT_TO_SHORTPTR(frame2_8);
-  unsigned int i, j, k;
-  int modifier;
-  int byte = 0;
-  const int rounding = strength > 0 ? 1 << (strength - 1) : 0;
+void vp9_highbd_apply_temporal_filter_c(
+    const uint16_t *y_src, int y_src_stride, const uint16_t *y_pre,
+    int y_pre_stride, const uint16_t *u_src, const uint16_t *v_src,
+    int uv_src_stride, const uint16_t *u_pre, const uint16_t *v_pre,
+    int uv_pre_stride, unsigned int block_width, unsigned int block_height,
+    int ss_x, int ss_y, int strength, const int *const blk_fw, int use_32x32,
+    uint32_t *y_accum, uint16_t *y_count, uint32_t *u_accum, uint16_t *u_count,
+    uint32_t *v_accum, uint16_t *v_count) {
+  const int uv_block_width = block_width >> ss_x;
+  const int uv_block_height = block_height >> ss_y;
+  const int y_diff_stride = BW;
+  const int uv_diff_stride = BW;
 
-  for (i = 0, k = 0; i < block_height; i++) {
-    for (j = 0; j < block_width; j++, k++) {
-      int pixel_value = *frame2;
-      int diff_sse[9] = { 0 };
-      int idx, idy, index = 0;
+  DECLARE_ALIGNED(16, uint32_t, y_diff_sse[BLK_PELS]);
+  DECLARE_ALIGNED(16, uint32_t, u_diff_sse[BLK_PELS]);
+  DECLARE_ALIGNED(16, uint32_t, v_diff_sse[BLK_PELS]);
 
-      for (idy = -1; idy <= 1; ++idy) {
-        for (idx = -1; idx <= 1; ++idx) {
-          int row = (int)i + idy;
-          int col = (int)j + idx;
+  const int rounding = (1 << strength) >> 1;
 
-          if (row >= 0 && row < (int)block_height && col >= 0 &&
-              col < (int)block_width) {
-            int diff = frame1[byte + idy * (int)stride + idx] -
-                       frame2[idy * (int)block_width + idx];
-            diff_sse[index] = diff * diff;
-            ++index;
+  // Loop variables
+  int row, col;
+  int uv_row, uv_col;
+  int row_step, col_step;
+
+  memset(y_diff_sse, 0, BLK_PELS * sizeof(uint32_t));
+  memset(u_diff_sse, 0, BLK_PELS * sizeof(uint32_t));
+  memset(v_diff_sse, 0, BLK_PELS * sizeof(uint32_t));
+
+  // Get the square diffs
+  for (row = 0; row < (int)block_height; row++) {
+    for (col = 0; col < (int)block_width; col++) {
+      const int diff =
+          y_src[row * y_src_stride + col] - y_pre[row * y_pre_stride + col];
+      y_diff_sse[row * y_diff_stride + col] = diff * diff;
+    }
+  }
+
+  for (row = 0; row < uv_block_height; row++) {
+    for (col = 0; col < uv_block_width; col++) {
+      const int u_diff =
+          u_src[row * uv_src_stride + col] - u_pre[row * uv_pre_stride + col];
+      const int v_diff =
+          v_src[row * uv_src_stride + col] - v_pre[row * uv_pre_stride + col];
+      u_diff_sse[row * uv_diff_stride + col] = u_diff * u_diff;
+      v_diff_sse[row * uv_diff_stride + col] = v_diff * v_diff;
+    }
+  }
+
+  // Apply the filter to luma
+  for (row = 0; row < (int)block_height; row++) {
+    for (col = 0; col < (int)block_width; col++) {
+      const int uv_row = row >> ss_y;
+      const int uv_col = col >> ss_x;
+      const int filter_weight = get_filter_weight(
+          row, col, block_height, block_width, blk_fw, use_32x32);
+
+      // First we get the modifier for the current y pixel
+      const int y_pixel = y_pre[row * y_pre_stride + col];
+      int y_num_used = 0;
+      int y_mod = 0;
+
+      // Sum the neighboring 3x3 y pixels
+      for (row_step = -1; row_step <= 1; row_step++) {
+        for (col_step = -1; col_step <= 1; col_step++) {
+          const int sub_row = row + row_step;
+          const int sub_col = col + col_step;
+
+          if (sub_row >= 0 && sub_row < (int)block_height && sub_col >= 0 &&
+              sub_col < (int)block_width) {
+            y_mod += y_diff_sse[sub_row * y_diff_stride + sub_col];
+            y_num_used++;
           }
         }
       }
-      assert(index > 0);
 
-      modifier = 0;
-      for (idx = 0; idx < 9; ++idx) modifier += diff_sse[idx];
+      // Sum the corresponding uv pixels to the current y modifier
+      // Note we are rounding down instead of rounding to the nearest pixel.
+      y_mod += u_diff_sse[uv_row * uv_diff_stride + uv_col];
+      y_mod += v_diff_sse[uv_row * uv_diff_stride + uv_col];
 
-      modifier *= 3;
-      modifier /= index;
+      y_num_used += 2;
 
-      ++frame2;
-      modifier += rounding;
-      modifier >>= strength;
+      // Set the modifier
+      y_mod = highbd_mod_index(y_mod, y_num_used, rounding, strength,
+                               filter_weight);
 
-      if (modifier > 16) modifier = 16;
-
-      modifier = 16 - modifier;
-      modifier *= filter_weight;
-
-      count[k] += modifier;
-      accumulator[k] += modifier * pixel_value;
-
-      byte++;
+      // Accumulate the result
+      y_count[row * block_width + col] += y_mod;
+      y_accum[row * block_width + col] += y_mod * y_pixel;
     }
+  }
 
-    byte += stride - block_width;
+  // Apply the filter to chroma
+  for (uv_row = 0; uv_row < uv_block_height; uv_row++) {
+    for (uv_col = 0; uv_col < uv_block_width; uv_col++) {
+      const int y_row = uv_row << ss_y;
+      const int y_col = uv_col << ss_x;
+      const int filter_weight = get_filter_weight(
+          uv_row, uv_col, uv_block_height, uv_block_width, blk_fw, use_32x32);
+
+      const int u_pixel = u_pre[uv_row * uv_pre_stride + uv_col];
+      const int v_pixel = v_pre[uv_row * uv_pre_stride + uv_col];
+
+      int uv_num_used = 0;
+      int u_mod = 0, v_mod = 0;
+
+      // Sum the neighboring 3x3 chroma pixels to the chroma modifier
+      for (row_step = -1; row_step <= 1; row_step++) {
+        for (col_step = -1; col_step <= 1; col_step++) {
+          const int sub_row = uv_row + row_step;
+          const int sub_col = uv_col + col_step;
+
+          if (sub_row >= 0 && sub_row < uv_block_height && sub_col >= 0 &&
+              sub_col < uv_block_width) {
+            u_mod += u_diff_sse[sub_row * uv_diff_stride + sub_col];
+            v_mod += v_diff_sse[sub_row * uv_diff_stride + sub_col];
+            uv_num_used++;
+          }
+        }
+      }
+
+      // Sum all the luma pixels associated with the current chroma pixel
+      for (row_step = 0; row_step < 1 + ss_y; row_step++) {
+        for (col_step = 0; col_step < 1 + ss_x; col_step++) {
+          const int sub_row = y_row + row_step;
+          const int sub_col = y_col + col_step;
+          const int y_diff = y_diff_sse[sub_row * y_diff_stride + sub_col];
+
+          u_mod += y_diff;
+          v_mod += y_diff;
+          uv_num_used++;
+        }
+      }
+
+      // Set the modifier
+      u_mod = highbd_mod_index(u_mod, uv_num_used, rounding, strength,
+                               filter_weight);
+      v_mod = highbd_mod_index(v_mod, uv_num_used, rounding, strength,
+                               filter_weight);
+
+      // Accumulate the result
+      u_count[uv_row * uv_block_width + uv_col] += u_mod;
+      u_accum[uv_row * uv_block_width + uv_col] += u_mod * u_pixel;
+      v_count[uv_row * uv_block_width + uv_col] += v_mod;
+      v_accum[uv_row * uv_block_width + uv_col] += v_mod * v_pixel;
+    }
   }
 }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
-static uint32_t temporal_filter_find_matching_mb_c(VP9_COMP *cpi,
-                                                   ThreadData *td,
-                                                   uint8_t *arf_frame_buf,
-                                                   uint8_t *frame_ptr_buf,
-                                                   int stride, MV *ref_mv) {
+static uint32_t temporal_filter_find_matching_mb_c(
+    VP9_COMP *cpi, ThreadData *td, uint8_t *arf_frame_buf,
+    uint8_t *frame_ptr_buf, int stride, MV *ref_mv, MV *blk_mvs,
+    int *blk_bestsme) {
   MACROBLOCK *const x = &td->mb;
   MACROBLOCKD *const xd = &x->e_mbd;
   MV_SPEED_FEATURES *const mv_sf = &cpi->sf.mv;
-  const SEARCH_METHODS search_method = HEX;
+  const SEARCH_METHODS search_method = MESH;
+  const SEARCH_METHODS search_method_16 = cpi->sf.temporal_filter_search_method;
   int step_param;
   int sadpb = x->sadperbit16;
   uint32_t bestsme = UINT_MAX;
@@ -245,6 +572,7 @@
   // Save input state
   struct buf_2d src = x->plane[0].src;
   struct buf_2d pre = xd->plane[0].pre[0];
+  int i, j, k = 0;
 
   best_ref_mv1_full.col = best_ref_mv1.col >> 3;
   best_ref_mv1_full.row = best_ref_mv1.row >> 3;
@@ -260,19 +588,52 @@
 
   vp9_set_mv_search_range(&x->mv_limits, &best_ref_mv1);
 
-  vp9_full_pixel_search(cpi, x, BLOCK_16X16, &best_ref_mv1_full, step_param,
+  vp9_full_pixel_search(cpi, x, TF_BLOCK, &best_ref_mv1_full, step_param,
                         search_method, sadpb, cond_cost_list(cpi, cost_list),
                         &best_ref_mv1, ref_mv, 0, 0);
 
   /* restore UMV window */
   x->mv_limits = tmp_mv_limits;
 
-  // Ignore mv costing by sending NULL pointer instead of cost array
+  // find_fractional_mv_step parameters: best_ref_mv1 is for mv rate cost
+  // calculation. The start full mv and the search result are stored in
+  // ref_mv.
   bestsme = cpi->find_fractional_mv_step(
       x, ref_mv, &best_ref_mv1, cpi->common.allow_high_precision_mv,
-      x->errorperbit, &cpi->fn_ptr[BLOCK_16X16], 0,
-      mv_sf->subpel_iters_per_step, cond_cost_list(cpi, cost_list), NULL, NULL,
-      &distortion, &sse, NULL, 0, 0);
+      x->errorperbit, &cpi->fn_ptr[TF_BLOCK], 0, mv_sf->subpel_search_level,
+      cond_cost_list(cpi, cost_list), NULL, NULL, &distortion, &sse, NULL, BW,
+      BH, USE_8_TAPS_SHARP);
+
+  // Do motion search on the 4 16x16 sub-blocks.
+  best_ref_mv1.row = ref_mv->row;
+  best_ref_mv1.col = ref_mv->col;
+  best_ref_mv1_full.col = best_ref_mv1.col >> 3;
+  best_ref_mv1_full.row = best_ref_mv1.row >> 3;
+
+  for (i = 0; i < BH; i += SUB_BH) {
+    for (j = 0; j < BW; j += SUB_BW) {
+      // Setup frame pointers
+      x->plane[0].src.buf = arf_frame_buf + i * stride + j;
+      x->plane[0].src.stride = stride;
+      xd->plane[0].pre[0].buf = frame_ptr_buf + i * stride + j;
+      xd->plane[0].pre[0].stride = stride;
+
+      vp9_set_mv_search_range(&x->mv_limits, &best_ref_mv1);
+      vp9_full_pixel_search(cpi, x, TF_SUB_BLOCK, &best_ref_mv1_full,
+                            step_param, search_method_16, sadpb,
+                            cond_cost_list(cpi, cost_list), &best_ref_mv1,
+                            &blk_mvs[k], 0, 0);
+      /* restore UMV window */
+      x->mv_limits = tmp_mv_limits;
+
+      blk_bestsme[k] = cpi->find_fractional_mv_step(
+          x, &blk_mvs[k], &best_ref_mv1, cpi->common.allow_high_precision_mv,
+          x->errorperbit, &cpi->fn_ptr[TF_SUB_BLOCK], 0,
+          mv_sf->subpel_search_level, cond_cost_list(cpi, cost_list), NULL,
+          NULL, &distortion, &sse, NULL, SUB_BW, SUB_BH, USE_8_TAPS_SHARP);
+      k++;
+    }
+  }
 
   // Restore input state
   x->plane[0].src = src;
@@ -293,25 +654,24 @@
   int byte;
   int frame;
   int mb_col;
-  unsigned int filter_weight;
-  int mb_cols = (frames[alt_ref_index]->y_crop_width + 15) >> 4;
-  int mb_rows = (frames[alt_ref_index]->y_crop_height + 15) >> 4;
-  DECLARE_ALIGNED(16, uint32_t, accumulator[16 * 16 * 3]);
-  DECLARE_ALIGNED(16, uint16_t, count[16 * 16 * 3]);
+  int mb_cols = (frames[alt_ref_index]->y_crop_width + BW - 1) >> BW_LOG2;
+  int mb_rows = (frames[alt_ref_index]->y_crop_height + BH - 1) >> BH_LOG2;
+  DECLARE_ALIGNED(16, uint32_t, accumulator[BLK_PELS * 3]);
+  DECLARE_ALIGNED(16, uint16_t, count[BLK_PELS * 3]);
   MACROBLOCKD *mbd = &td->mb.e_mbd;
   YV12_BUFFER_CONFIG *f = frames[alt_ref_index];
   uint8_t *dst1, *dst2;
 #if CONFIG_VP9_HIGHBITDEPTH
-  DECLARE_ALIGNED(16, uint16_t, predictor16[16 * 16 * 3]);
-  DECLARE_ALIGNED(16, uint8_t, predictor8[16 * 16 * 3]);
+  DECLARE_ALIGNED(16, uint16_t, predictor16[BLK_PELS * 3]);
+  DECLARE_ALIGNED(16, uint8_t, predictor8[BLK_PELS * 3]);
   uint8_t *predictor;
 #else
-  DECLARE_ALIGNED(16, uint8_t, predictor[16 * 16 * 3]);
+  DECLARE_ALIGNED(16, uint8_t, predictor[BLK_PELS * 3]);
 #endif
-  const int mb_uv_height = 16 >> mbd->plane[1].subsampling_y;
-  const int mb_uv_width = 16 >> mbd->plane[1].subsampling_x;
+  const int mb_uv_height = BH >> mbd->plane[1].subsampling_y;
+  const int mb_uv_width = BW >> mbd->plane[1].subsampling_x;
   // Addition of the tile col level offsets
-  int mb_y_offset = mb_row * 16 * (f->y_stride) + 16 * mb_col_start;
+  int mb_y_offset = mb_row * BH * (f->y_stride) + BW * mb_col_start;
   int mb_uv_offset =
       mb_row * mb_uv_height * f->uv_stride + mb_uv_width * mb_col_start;
 
@@ -334,21 +694,21 @@
   //  8 - VP9_INTERP_EXTEND.
   // To keep the mv in play for both Y and UV planes the max that it
   //  can be on a border is therefore 16 - (2*VP9_INTERP_EXTEND+1).
-  td->mb.mv_limits.row_min = -((mb_row * 16) + (17 - 2 * VP9_INTERP_EXTEND));
+  td->mb.mv_limits.row_min = -((mb_row * BH) + (17 - 2 * VP9_INTERP_EXTEND));
   td->mb.mv_limits.row_max =
-      ((mb_rows - 1 - mb_row) * 16) + (17 - 2 * VP9_INTERP_EXTEND);
+      ((mb_rows - 1 - mb_row) * BH) + (17 - 2 * VP9_INTERP_EXTEND);
 
   for (mb_col = mb_col_start; mb_col < mb_col_end; mb_col++) {
     int i, j, k;
     int stride;
     MV ref_mv;
 
-    vp9_zero_array(accumulator, 16 * 16 * 3);
-    vp9_zero_array(count, 16 * 16 * 3);
+    vp9_zero_array(accumulator, BLK_PELS * 3);
+    vp9_zero_array(count, BLK_PELS * 3);
 
-    td->mb.mv_limits.col_min = -((mb_col * 16) + (17 - 2 * VP9_INTERP_EXTEND));
+    td->mb.mv_limits.col_min = -((mb_col * BW) + (17 - 2 * VP9_INTERP_EXTEND));
     td->mb.mv_limits.col_max =
-        ((mb_cols - 1 - mb_col) * 16) + (17 - 2 * VP9_INTERP_EXTEND);
+        ((mb_cols - 1 - mb_col) * BW) + (17 - 2 * VP9_INTERP_EXTEND);
 
     if (cpi->oxcf.content == VP9E_CONTENT_FILM) {
       unsigned int src_variance;
@@ -360,92 +720,130 @@
 #if CONFIG_VP9_HIGHBITDEPTH
       if (mbd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
         src_variance =
-            vp9_high_get_sby_perpixel_variance(cpi, &src, BLOCK_16X16, mbd->bd);
+            vp9_high_get_sby_perpixel_variance(cpi, &src, TF_BLOCK, mbd->bd);
       } else {
-        src_variance = vp9_get_sby_perpixel_variance(cpi, &src, BLOCK_16X16);
+        src_variance = vp9_get_sby_perpixel_variance(cpi, &src, TF_BLOCK);
       }
 #else
-      src_variance = vp9_get_sby_perpixel_variance(cpi, &src, BLOCK_16X16);
+      src_variance = vp9_get_sby_perpixel_variance(cpi, &src, TF_BLOCK);
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
-      if (src_variance <= 2) strength = VPXMAX(0, (int)strength - 2);
+      if (src_variance <= 2) {
+        strength = VPXMAX(0, arnr_filter_data->strength - 2);
+      }
     }
 
     for (frame = 0; frame < frame_count; frame++) {
-      const uint32_t thresh_low = 10000;
-      const uint32_t thresh_high = 20000;
+      // MVs for 4 16x16 sub blocks.
+      MV blk_mvs[4];
+      // Filter weights for 4 16x16 sub blocks.
+      int blk_fw[4] = { 0, 0, 0, 0 };
+      int use_32x32 = 0;
 
       if (frames[frame] == NULL) continue;
 
       ref_mv.row = 0;
       ref_mv.col = 0;
+      blk_mvs[0] = kZeroMv;
+      blk_mvs[1] = kZeroMv;
+      blk_mvs[2] = kZeroMv;
+      blk_mvs[3] = kZeroMv;
 
       if (frame == alt_ref_index) {
-        filter_weight = 2;
+        blk_fw[0] = blk_fw[1] = blk_fw[2] = blk_fw[3] = 2;
+        use_32x32 = 1;
       } else {
+        const int thresh_low = 10000;
+        const int thresh_high = 20000;
+        int blk_bestsme[4] = { INT_MAX, INT_MAX, INT_MAX, INT_MAX };
+
         // Find best match in this frame by MC
-        uint32_t err = temporal_filter_find_matching_mb_c(
+        int err = temporal_filter_find_matching_mb_c(
             cpi, td, frames[alt_ref_index]->y_buffer + mb_y_offset,
             frames[frame]->y_buffer + mb_y_offset, frames[frame]->y_stride,
-            &ref_mv);
+            &ref_mv, blk_mvs, blk_bestsme);
 
-        // Assign higher weight to matching MB if its error
-        // score is lower. If not applying MC default behavior
-        // is to weight all MBs equal.
-        filter_weight = err < thresh_low ? 2 : err < thresh_high ? 1 : 0;
+        int err16 =
+            blk_bestsme[0] + blk_bestsme[1] + blk_bestsme[2] + blk_bestsme[3];
+        int max_err = INT_MIN, min_err = INT_MAX;
+        for (k = 0; k < 4; k++) {
+          if (min_err > blk_bestsme[k]) min_err = blk_bestsme[k];
+          if (max_err < blk_bestsme[k]) max_err = blk_bestsme[k];
+        }
+
+        if (((err * 15 < (err16 << 4)) && max_err - min_err < 10000) ||
+            ((err * 14 < (err16 << 4)) && max_err - min_err < 5000)) {
+          use_32x32 = 1;
+          // Assign higher weight to matching MB if its error
+          // score is lower. If not applying MC default behavior
+          // is to weight all MBs equal.
+          blk_fw[0] = err < (thresh_low << THR_SHIFT)
+                          ? 2
+                          : err < (thresh_high << THR_SHIFT) ? 1 : 0;
+          blk_fw[1] = blk_fw[2] = blk_fw[3] = blk_fw[0];
+        } else {
+          use_32x32 = 0;
+          for (k = 0; k < 4; k++)
+            blk_fw[k] = blk_bestsme[k] < thresh_low
+                            ? 2
+                            : blk_bestsme[k] < thresh_high ? 1 : 0;
+        }
+
+        for (k = 0; k < 4; k++) {
+          switch (abs(frame - alt_ref_index)) {
+            case 1: blk_fw[k] = VPXMIN(blk_fw[k], 2); break;
+            case 2:
+            case 3: blk_fw[k] = VPXMIN(blk_fw[k], 1); break;
+            default: break;
+          }
+        }
       }
 
-      if (filter_weight != 0) {
+      if (blk_fw[0] | blk_fw[1] | blk_fw[2] | blk_fw[3]) {
         // Construct the predictors
         temporal_filter_predictors_mb_c(
             mbd, frames[frame]->y_buffer + mb_y_offset,
             frames[frame]->u_buffer + mb_uv_offset,
             frames[frame]->v_buffer + mb_uv_offset, frames[frame]->y_stride,
             mb_uv_width, mb_uv_height, ref_mv.row, ref_mv.col, predictor, scale,
-            mb_col * 16, mb_row * 16);
+            mb_col * BW, mb_row * BH, blk_mvs, use_32x32);
 
 #if CONFIG_VP9_HIGHBITDEPTH
         if (mbd->cur_buf->flags & YV12_FLAG_HIGHBITDEPTH) {
           int adj_strength = strength + 2 * (mbd->bd - 8);
           // Apply the filter (YUV)
-          vp9_highbd_temporal_filter_apply(
-              f->y_buffer + mb_y_offset, f->y_stride, predictor, 16, 16,
-              adj_strength, filter_weight, accumulator, count);
-          vp9_highbd_temporal_filter_apply(
-              f->u_buffer + mb_uv_offset, f->uv_stride, predictor + 256,
-              mb_uv_width, mb_uv_height, adj_strength, filter_weight,
-              accumulator + 256, count + 256);
-          vp9_highbd_temporal_filter_apply(
-              f->v_buffer + mb_uv_offset, f->uv_stride, predictor + 512,
-              mb_uv_width, mb_uv_height, adj_strength, filter_weight,
-              accumulator + 512, count + 512);
+          vp9_highbd_apply_temporal_filter(
+              CONVERT_TO_SHORTPTR(f->y_buffer + mb_y_offset), f->y_stride,
+              CONVERT_TO_SHORTPTR(predictor), BW,
+              CONVERT_TO_SHORTPTR(f->u_buffer + mb_uv_offset),
+              CONVERT_TO_SHORTPTR(f->v_buffer + mb_uv_offset), f->uv_stride,
+              CONVERT_TO_SHORTPTR(predictor + BLK_PELS),
+              CONVERT_TO_SHORTPTR(predictor + (BLK_PELS << 1)), mb_uv_width, BW,
+              BH, mbd->plane[1].subsampling_x, mbd->plane[1].subsampling_y,
+              adj_strength, blk_fw, use_32x32, accumulator, count,
+              accumulator + BLK_PELS, count + BLK_PELS,
+              accumulator + (BLK_PELS << 1), count + (BLK_PELS << 1));
         } else {
           // Apply the filter (YUV)
-          vp9_temporal_filter_apply(f->y_buffer + mb_y_offset, f->y_stride,
-                                    predictor, 16, 16, strength, filter_weight,
-                                    accumulator, count);
-          vp9_temporal_filter_apply(f->u_buffer + mb_uv_offset, f->uv_stride,
-                                    predictor + 256, mb_uv_width, mb_uv_height,
-                                    strength, filter_weight, accumulator + 256,
-                                    count + 256);
-          vp9_temporal_filter_apply(f->v_buffer + mb_uv_offset, f->uv_stride,
-                                    predictor + 512, mb_uv_width, mb_uv_height,
-                                    strength, filter_weight, accumulator + 512,
-                                    count + 512);
+          vp9_apply_temporal_filter(
+              f->y_buffer + mb_y_offset, f->y_stride, predictor, BW,
+              f->u_buffer + mb_uv_offset, f->v_buffer + mb_uv_offset,
+              f->uv_stride, predictor + BLK_PELS, predictor + (BLK_PELS << 1),
+              mb_uv_width, BW, BH, mbd->plane[1].subsampling_x,
+              mbd->plane[1].subsampling_y, strength, blk_fw, use_32x32,
+              accumulator, count, accumulator + BLK_PELS, count + BLK_PELS,
+              accumulator + (BLK_PELS << 1), count + (BLK_PELS << 1));
         }
 #else
         // Apply the filter (YUV)
-        vp9_temporal_filter_apply(f->y_buffer + mb_y_offset, f->y_stride,
-                                  predictor, 16, 16, strength, filter_weight,
-                                  accumulator, count);
-        vp9_temporal_filter_apply(f->u_buffer + mb_uv_offset, f->uv_stride,
-                                  predictor + 256, mb_uv_width, mb_uv_height,
-                                  strength, filter_weight, accumulator + 256,
-                                  count + 256);
-        vp9_temporal_filter_apply(f->v_buffer + mb_uv_offset, f->uv_stride,
-                                  predictor + 512, mb_uv_width, mb_uv_height,
-                                  strength, filter_weight, accumulator + 512,
-                                  count + 512);
+        vp9_apply_temporal_filter(
+            f->y_buffer + mb_y_offset, f->y_stride, predictor, BW,
+            f->u_buffer + mb_uv_offset, f->v_buffer + mb_uv_offset,
+            f->uv_stride, predictor + BLK_PELS, predictor + (BLK_PELS << 1),
+            mb_uv_width, BW, BH, mbd->plane[1].subsampling_x,
+            mbd->plane[1].subsampling_y, strength, blk_fw, use_32x32,
+            accumulator, count, accumulator + BLK_PELS, count + BLK_PELS,
+            accumulator + (BLK_PELS << 1), count + (BLK_PELS << 1));
 #endif  // CONFIG_VP9_HIGHBITDEPTH
       }
     }
@@ -459,8 +857,8 @@
       dst1_16 = CONVERT_TO_SHORTPTR(dst1);
       stride = cpi->alt_ref_buffer.y_stride;
       byte = mb_y_offset;
-      for (i = 0, k = 0; i < 16; i++) {
-        for (j = 0; j < 16; j++, k++) {
+      for (i = 0, k = 0; i < BH; i++) {
+        for (j = 0; j < BW; j++, k++) {
           unsigned int pval = accumulator[k] + (count[k] >> 1);
           pval *= fixed_divide[count[k]];
           pval >>= 19;
@@ -471,7 +869,7 @@
           byte++;
         }
 
-        byte += stride - 16;
+        byte += stride - BW;
       }
 
       dst1 = cpi->alt_ref_buffer.u_buffer;
@@ -480,9 +878,9 @@
       dst2_16 = CONVERT_TO_SHORTPTR(dst2);
       stride = cpi->alt_ref_buffer.uv_stride;
       byte = mb_uv_offset;
-      for (i = 0, k = 256; i < mb_uv_height; i++) {
+      for (i = 0, k = BLK_PELS; i < mb_uv_height; i++) {
         for (j = 0; j < mb_uv_width; j++, k++) {
-          int m = k + 256;
+          int m = k + BLK_PELS;
 
           // U
           unsigned int pval = accumulator[k] + (count[k] >> 1);
@@ -507,8 +905,8 @@
       dst1 = cpi->alt_ref_buffer.y_buffer;
       stride = cpi->alt_ref_buffer.y_stride;
       byte = mb_y_offset;
-      for (i = 0, k = 0; i < 16; i++) {
-        for (j = 0; j < 16; j++, k++) {
+      for (i = 0, k = 0; i < BH; i++) {
+        for (j = 0; j < BW; j++, k++) {
           unsigned int pval = accumulator[k] + (count[k] >> 1);
           pval *= fixed_divide[count[k]];
           pval >>= 19;
@@ -518,16 +916,16 @@
           // move to next pixel
           byte++;
         }
-        byte += stride - 16;
+        byte += stride - BW;
       }
 
       dst1 = cpi->alt_ref_buffer.u_buffer;
       dst2 = cpi->alt_ref_buffer.v_buffer;
       stride = cpi->alt_ref_buffer.uv_stride;
       byte = mb_uv_offset;
-      for (i = 0, k = 256; i < mb_uv_height; i++) {
+      for (i = 0, k = BLK_PELS; i < mb_uv_height; i++) {
         for (j = 0; j < mb_uv_width; j++, k++) {
-          int m = k + 256;
+          int m = k + BLK_PELS;
 
           // U
           unsigned int pval = accumulator[k] + (count[k] >> 1);
@@ -552,8 +950,8 @@
     dst1 = cpi->alt_ref_buffer.y_buffer;
     stride = cpi->alt_ref_buffer.y_stride;
     byte = mb_y_offset;
-    for (i = 0, k = 0; i < 16; i++) {
-      for (j = 0; j < 16; j++, k++) {
+    for (i = 0, k = 0; i < BH; i++) {
+      for (j = 0; j < BW; j++, k++) {
         unsigned int pval = accumulator[k] + (count[k] >> 1);
         pval *= fixed_divide[count[k]];
         pval >>= 19;
@@ -563,16 +961,16 @@
         // move to next pixel
         byte++;
       }
-      byte += stride - 16;
+      byte += stride - BW;
     }
 
     dst1 = cpi->alt_ref_buffer.u_buffer;
     dst2 = cpi->alt_ref_buffer.v_buffer;
     stride = cpi->alt_ref_buffer.uv_stride;
     byte = mb_uv_offset;
-    for (i = 0, k = 256; i < mb_uv_height; i++) {
+    for (i = 0, k = BLK_PELS; i < mb_uv_height; i++) {
       for (j = 0; j < mb_uv_width; j++, k++) {
-        int m = k + 256;
+        int m = k + BLK_PELS;
 
         // U
         unsigned int pval = accumulator[k] + (count[k] >> 1);
@@ -592,7 +990,7 @@
       byte += stride - mb_uv_width;
     }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
-    mb_y_offset += 16;
+    mb_y_offset += BW;
     mb_uv_offset += mb_uv_width;
   }
 }
@@ -603,10 +1001,10 @@
   const int tile_cols = 1 << cm->log2_tile_cols;
   TileInfo *tile_info =
       &cpi->tile_data[tile_row * tile_cols + tile_col].tile_info;
-  const int mb_row_start = (tile_info->mi_row_start) >> 1;
-  const int mb_row_end = (tile_info->mi_row_end + 1) >> 1;
-  const int mb_col_start = (tile_info->mi_col_start) >> 1;
-  const int mb_col_end = (tile_info->mi_col_end + 1) >> 1;
+  const int mb_row_start = (tile_info->mi_row_start) >> TF_SHIFT;
+  const int mb_row_end = (tile_info->mi_row_end + TF_ROUND) >> TF_SHIFT;
+  const int mb_col_start = (tile_info->mi_col_start) >> TF_SHIFT;
+  const int mb_col_end = (tile_info->mi_col_end + TF_ROUND) >> TF_SHIFT;
   int mb_row;
 
   for (mb_row = mb_row_start; mb_row < mb_row_end; mb_row++) {
@@ -620,13 +1018,6 @@
   const int tile_cols = 1 << cm->log2_tile_cols;
   const int tile_rows = 1 << cm->log2_tile_rows;
   int tile_row, tile_col;
-  MACROBLOCKD *mbd = &cpi->td.mb.e_mbd;
-  // Save input state
-  uint8_t *input_buffer[MAX_MB_PLANE];
-  int i;
-
-  for (i = 0; i < MAX_MB_PLANE; i++) input_buffer[i] = mbd->plane[i].pre[0].buf;
-
   vp9_init_tile_data(cpi);
 
   for (tile_row = 0; tile_row < tile_rows; ++tile_row) {
@@ -634,15 +1025,13 @@
       temporal_filter_iterate_tile_c(cpi, tile_row, tile_col);
     }
   }
-
-  // Restore input state
-  for (i = 0; i < MAX_MB_PLANE; i++) mbd->plane[i].pre[0].buf = input_buffer[i];
 }
 
 // Apply buffer limits and context specific adjustments to arnr filter.
 static void adjust_arnr_filter(VP9_COMP *cpi, int distance, int group_boost,
                                int *arnr_frames, int *arnr_strength) {
   const VP9EncoderConfig *const oxcf = &cpi->oxcf;
+  const GF_GROUP *const gf_group = &cpi->twopass.gf_group;
   const int frames_after_arf =
       vp9_lookahead_depth(cpi->lookahead) - distance - 1;
   int frames_fwd = (cpi->oxcf.arnr_max_frames - 1) >> 1;
@@ -696,12 +1085,17 @@
   }
 
   // Adjustments for second level arf in multi arf case.
-  if (cpi->oxcf.pass == 2 && cpi->multi_arf_allowed) {
-    const GF_GROUP *const gf_group = &cpi->twopass.gf_group;
-    if (gf_group->rf_level[gf_group->index] != GF_ARF_STD) {
-      strength >>= 1;
-    }
-  }
+  // Leave a commented-out placeholder for possible filtering adjustment with
+  // the new multi-layer arf code.
+  // if (cpi->oxcf.pass == 2 && cpi->multi_arf_allowed)
+  //   if (gf_group->rf_level[gf_group->index] != GF_ARF_STD) strength >>= 1;
+
+  // TODO(jingning): Skip temporal filtering for intermediate frames that will
+  // be used as show_existing_frame. Need to further explore the possibility to
+  // apply certain filter.
+  if (gf_group->arf_src_offset[gf_group->index] <
+      cpi->rc.baseline_gf_interval - 1)
+    frames = 1;
 
   *arnr_frames = frames;
   *arnr_strength = strength;
@@ -800,8 +1194,7 @@
   }
 
   // Initialize errorperbit and sabperbit.
-  rdmult = (int)vp9_compute_rd_mult_based_on_qindex(cpi, ARNR_FILT_QINDEX);
-  if (rdmult < 1) rdmult = 1;
+  rdmult = vp9_compute_rd_mult_based_on_qindex(cpi, ARNR_FILT_QINDEX);
   set_error_per_bit(&cpi->td.mb, rdmult);
   vp9_initialize_me_consts(cpi, &cpi->td.mb, ARNR_FILT_QINDEX);
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_temporal_filter.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_temporal_filter.h
index 775e49c..553a468 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_temporal_filter.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_temporal_filter.h
@@ -8,14 +8,29 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_TEMPORAL_FILTER_H_
-#define VP9_ENCODER_VP9_TEMPORAL_FILTER_H_
+#ifndef VPX_VP9_ENCODER_VP9_TEMPORAL_FILTER_H_
+#define VPX_VP9_ENCODER_VP9_TEMPORAL_FILTER_H_
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
 #define ARNR_FILT_QINDEX 128
+static const MV kZeroMv = { 0, 0 };
+
+// Block size used in temporal filtering
+#define TF_BLOCK BLOCK_32X32
+#define BH 32
+#define BH_LOG2 5
+#define BW 32
+#define BW_LOG2 5
+#define BLK_PELS ((BH) * (BW))  // Pixels in the block
+#define TF_SHIFT 2
+#define TF_ROUND 3
+#define THR_SHIFT 2
+#define TF_SUB_BLOCK BLOCK_16X16
+#define SUB_BH 16
+#define SUB_BW 16
 
 void vp9_temporal_filter_init(void);
 void vp9_temporal_filter(VP9_COMP *cpi, int distance);
@@ -28,4 +43,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_TEMPORAL_FILTER_H_
+#endif  // VPX_VP9_ENCODER_VP9_TEMPORAL_FILTER_H_
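The TF_SHIFT / TF_ROUND macros introduced above replace the hard-coded `>> 1` arithmetic in the vp9_temporal_filter.c hunk earlier in this patch: the start of a tile's filter-block range is a floor division of the mi range by `1 << TF_SHIFT`, and the end is a ceiling division. A standalone sketch of that conversion (macro values copied from the header; the helper names are mine, not part of the patch):

```c
#include <assert.h>

/* Values copied from vp9_temporal_filter.h: a 32x32 filter block spans
 * 4 mi units (8 pixels each), so mi ranges shift right by TF_SHIFT = 2. */
#define TF_SHIFT 2
#define TF_ROUND 3 /* (1 << TF_SHIFT) - 1: rounds the end bound up */

/* Floor division for the start of the range. */
static int tf_block_start(int mi_start) { return mi_start >> TF_SHIFT; }

/* Ceiling division for the end, so a partial trailing block is included. */
static int tf_block_end(int mi_end) { return (mi_end + TF_ROUND) >> TF_SHIFT; }
```

For example, a tile covering mi rows [0, 17) covers filter-block rows [0, 5): four full 4-mi blocks plus a partial fifth, while an exact multiple such as 16 does not round up.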
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_tokenize.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_tokenize.h
index b2f63ff..6407ff9 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_tokenize.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_tokenize.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_TOKENIZE_H_
-#define VP9_ENCODER_VP9_TOKENIZE_H_
+#ifndef VPX_VP9_ENCODER_VP9_TOKENIZE_H_
+#define VPX_VP9_ENCODER_VP9_TOKENIZE_H_
 
 #include "vp9/common/vp9_entropy.h"
 
@@ -127,4 +127,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_TOKENIZE_H_
+#endif  // VPX_VP9_ENCODER_VP9_TOKENIZE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.h
index a8b9c2c..86c5fa2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/vp9_treewriter.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_ENCODER_VP9_TREEWRITER_H_
-#define VP9_ENCODER_VP9_TREEWRITER_H_
+#ifndef VPX_VP9_ENCODER_VP9_TREEWRITER_H_
+#define VPX_VP9_ENCODER_VP9_TREEWRITER_H_
 
 #include "vpx_dsp/bitwriter.h"
 
@@ -48,4 +48,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_ENCODER_VP9_TREEWRITER_H_
+#endif  // VPX_VP9_ENCODER_VP9_TREEWRITER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/highbd_temporal_filter_sse4.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/highbd_temporal_filter_sse4.c
new file mode 100644
index 0000000..4fa2451
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/highbd_temporal_filter_sse4.c
@@ -0,0 +1,943 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <assert.h>
+#include <smmintrin.h>
+
+#include "./vp9_rtcd.h"
+#include "./vpx_config.h"
+#include "vpx/vpx_integer.h"
+#include "vp9/encoder/vp9_encoder.h"
+#include "vp9/encoder/vp9_temporal_filter.h"
+#include "vp9/encoder/x86/temporal_filter_constants.h"
+
+// Compute (a-b)**2 for 8 16-bit pixels, widening the result to 32 bits
+static INLINE void highbd_store_dist_8(const uint16_t *a, const uint16_t *b,
+                                       uint32_t *dst) {
+  const __m128i zero = _mm_setzero_si128();
+  const __m128i a_reg = _mm_loadu_si128((const __m128i *)a);
+  const __m128i b_reg = _mm_loadu_si128((const __m128i *)b);
+
+  const __m128i a_first = _mm_cvtepu16_epi32(a_reg);
+  const __m128i a_second = _mm_unpackhi_epi16(a_reg, zero);
+  const __m128i b_first = _mm_cvtepu16_epi32(b_reg);
+  const __m128i b_second = _mm_unpackhi_epi16(b_reg, zero);
+
+  __m128i dist_first, dist_second;
+
+  dist_first = _mm_sub_epi32(a_first, b_first);
+  dist_second = _mm_sub_epi32(a_second, b_second);
+  dist_first = _mm_mullo_epi32(dist_first, dist_first);
+  dist_second = _mm_mullo_epi32(dist_second, dist_second);
+
+  _mm_storeu_si128((__m128i *)dst, dist_first);
+  _mm_storeu_si128((__m128i *)(dst + 4), dist_second);
+}
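A scalar reference for the intrinsic sequence above, showing per lane what `highbd_store_dist_8` computes; this sketch (the `_c` helper name is mine) is not part of the patch:

```c
#include <stdint.h>

/* Scalar equivalent of highbd_store_dist_8: widen each 16-bit sample to
 * 32 bits and store the squared difference. With 12-bit input (max 4095)
 * the square is at most ~2**24, so the 32-bit lanes cannot overflow. */
static void highbd_store_dist_8_c(const uint16_t *a, const uint16_t *b,
                                  uint32_t *dst) {
  int i;
  for (i = 0; i < 8; ++i) {
    const int32_t diff = (int32_t)a[i] - (int32_t)b[i];
    dst[i] = (uint32_t)(diff * diff);
  }
}
```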
+
+// Sum up three neighboring distortions for the pixels
+static INLINE void highbd_get_sum_4(const uint32_t *dist, __m128i *sum) {
+  __m128i dist_reg, dist_left, dist_right;
+
+  dist_reg = _mm_loadu_si128((const __m128i *)dist);
+  dist_left = _mm_loadu_si128((const __m128i *)(dist - 1));
+  dist_right = _mm_loadu_si128((const __m128i *)(dist + 1));
+
+  *sum = _mm_add_epi32(dist_reg, dist_left);
+  *sum = _mm_add_epi32(*sum, dist_right);
+}
+
+static INLINE void highbd_get_sum_8(const uint32_t *dist, __m128i *sum_first,
+                                    __m128i *sum_second) {
+  highbd_get_sum_4(dist, sum_first);
+  highbd_get_sum_4(dist + 4, sum_second);
+}
+
+// Average the value based on the number of values summed (9 for pixels away
+// from the border, 4 for pixels in corners, and 6 for other edge values, plus
+// however many values from the other plane (y or uv) are included).
+//
+// Add in the rounding factor and shift, clamp to 16, invert and shift. Multiply
+// by weight.
+static INLINE void highbd_average_4(__m128i *output, const __m128i *sum,
+                                    const __m128i *mul_constants,
+                                    const int strength, const int rounding,
+                                    const int weight) {
+  // _mm_srl_epi16 uses the lower 64 bit value for the shift.
+  const __m128i strength_u128 = _mm_set_epi32(0, 0, 0, strength);
+  const __m128i rounding_u32 = _mm_set1_epi32(rounding);
+  const __m128i weight_u32 = _mm_set1_epi32(weight);
+  const __m128i sixteen = _mm_set1_epi32(16);
+  const __m128i zero = _mm_setzero_si128();
+
+  // modifier * 3 / index;
+  const __m128i sum_lo = _mm_unpacklo_epi32(*sum, zero);
+  const __m128i sum_hi = _mm_unpackhi_epi32(*sum, zero);
+  const __m128i const_lo = _mm_unpacklo_epi32(*mul_constants, zero);
+  const __m128i const_hi = _mm_unpackhi_epi32(*mul_constants, zero);
+
+  const __m128i mul_lo = _mm_mul_epu32(sum_lo, const_lo);
+  const __m128i mul_lo_div = _mm_srli_epi64(mul_lo, 32);
+  const __m128i mul_hi = _mm_mul_epu32(sum_hi, const_hi);
+  const __m128i mul_hi_div = _mm_srli_epi64(mul_hi, 32);
+
+  // Now we have
+  //   mul_lo: 00 a1 00 a0
+  //   mul_hi: 00 a3 00 a2
+  // Unpack as 64 bit words to get even and odd elements
+  //   unpack_lo: 00 a2 00 a0
+  //   unpack_hi: 00 a3 00 a1
+  // Then we can shift and OR the results to get everything in 32-bits
+  const __m128i mul_even = _mm_unpacklo_epi64(mul_lo_div, mul_hi_div);
+  const __m128i mul_odd = _mm_unpackhi_epi64(mul_lo_div, mul_hi_div);
+  const __m128i mul_odd_shift = _mm_slli_si128(mul_odd, 4);
+  const __m128i mul = _mm_or_si128(mul_even, mul_odd_shift);
+
+  // Round
+  *output = _mm_add_epi32(mul, rounding_u32);
+  *output = _mm_srl_epi32(*output, strength_u128);
+
+  // Multiply with the weight
+  *output = _mm_min_epu32(*output, sixteen);
+  *output = _mm_sub_epi32(sixteen, *output);
+  *output = _mm_mullo_epi32(*output, weight_u32);
+}
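The even/odd `_mm_mul_epu32` dance above is a fixed-point division: each lane computes `(sum * mul_const) >> 32`, where `mul_const` is a precomputed 32-bit reciprocal standing in for "modifier * 3 / index". A scalar model of one lane (the `_1_c` helper name is mine; this is a reference sketch, not part of the patch):

```c
#include <stdint.h>

/* Scalar model of one lane of highbd_average_4: fixed-point divide by
 * the neighbor count, round and shift by strength, clamp to 16, invert
 * (large distortion -> small contribution), then scale by the weight. */
static uint32_t highbd_average_1_c(uint32_t sum, uint32_t mul_const,
                                   int strength, int rounding, int weight) {
  uint32_t mod = (uint32_t)(((uint64_t)sum * mul_const) >> 32);
  mod = (mod + (uint32_t)rounding) >> strength;
  if (mod > 16) mod = 16;               /* saturate: weight bottoms out at 0 */
  return (16 - mod) * (uint32_t)weight; /* invert and apply filter weight */
}
```

With `sum == 0` the pixel gets the full weight times 16; a very large distortion saturates the clamp and contributes nothing.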
+
+static INLINE void highbd_average_8(__m128i *output_0, __m128i *output_1,
+                                    const __m128i *sum_0_u32,
+                                    const __m128i *sum_1_u32,
+                                    const __m128i *mul_constants_0,
+                                    const __m128i *mul_constants_1,
+                                    const int strength, const int rounding,
+                                    const int weight) {
+  highbd_average_4(output_0, sum_0_u32, mul_constants_0, strength, rounding,
+                   weight);
+  highbd_average_4(output_1, sum_1_u32, mul_constants_1, strength, rounding,
+                   weight);
+}
+
+// Add 'sum_u32' to 'count'. Multiply by 'pred' and add to 'accumulator.'
+static INLINE void highbd_accumulate_and_store_8(const __m128i sum_first_u32,
+                                                 const __m128i sum_second_u32,
+                                                 const uint16_t *pred,
+                                                 uint16_t *count,
+                                                 uint32_t *accumulator) {
+  // Cast down to 16-bit ints
+  const __m128i sum_u16 = _mm_packus_epi32(sum_first_u32, sum_second_u32);
+  const __m128i zero = _mm_setzero_si128();
+
+  __m128i pred_u16 = _mm_loadu_si128((const __m128i *)pred);
+  __m128i count_u16 = _mm_loadu_si128((const __m128i *)count);
+
+  __m128i pred_0_u32, pred_1_u32;
+  __m128i accum_0_u32, accum_1_u32;
+
+  count_u16 = _mm_adds_epu16(count_u16, sum_u16);
+  _mm_storeu_si128((__m128i *)count, count_u16);
+
+  pred_u16 = _mm_mullo_epi16(sum_u16, pred_u16);
+
+  pred_0_u32 = _mm_cvtepu16_epi32(pred_u16);
+  pred_1_u32 = _mm_unpackhi_epi16(pred_u16, zero);
+
+  accum_0_u32 = _mm_loadu_si128((const __m128i *)accumulator);
+  accum_1_u32 = _mm_loadu_si128((const __m128i *)(accumulator + 4));
+
+  accum_0_u32 = _mm_add_epi32(pred_0_u32, accum_0_u32);
+  accum_1_u32 = _mm_add_epi32(pred_1_u32, accum_1_u32);
+
+  _mm_storeu_si128((__m128i *)accumulator, accum_0_u32);
+  _mm_storeu_si128((__m128i *)(accumulator + 4), accum_1_u32);
+}
+
+static INLINE void highbd_read_dist_4(const uint32_t *dist, __m128i *dist_reg) {
+  *dist_reg = _mm_loadu_si128((const __m128i *)dist);
+}
+
+static INLINE void highbd_read_dist_8(const uint32_t *dist, __m128i *reg_first,
+                                      __m128i *reg_second) {
+  highbd_read_dist_4(dist, reg_first);
+  highbd_read_dist_4(dist + 4, reg_second);
+}
+
+static INLINE void highbd_read_chroma_dist_row_8(
+    int ss_x, const uint32_t *u_dist, const uint32_t *v_dist, __m128i *u_first,
+    __m128i *u_second, __m128i *v_first, __m128i *v_second) {
+  if (!ss_x) {
+    // If there is no chroma subsampling in the horizontal direction, then we
+    // need to load 8 entries from chroma.
+    highbd_read_dist_8(u_dist, u_first, u_second);
+    highbd_read_dist_8(v_dist, v_first, v_second);
+  } else {  // ss_x == 1
+    // Otherwise, we only need to load 4 entries and duplicate each of them
+    __m128i u_reg, v_reg;
+
+    highbd_read_dist_4(u_dist, &u_reg);
+
+    *u_first = _mm_unpacklo_epi32(u_reg, u_reg);
+    *u_second = _mm_unpackhi_epi32(u_reg, u_reg);
+
+    highbd_read_dist_4(v_dist, &v_reg);
+
+    *v_first = _mm_unpacklo_epi32(v_reg, v_reg);
+    *v_second = _mm_unpackhi_epi32(v_reg, v_reg);
+  }
+}
+
+static void vp9_highbd_apply_temporal_filter_luma_8(
+    const uint16_t *y_src, int y_src_stride, const uint16_t *y_pre,
+    int y_pre_stride, const uint16_t *u_src, const uint16_t *v_src,
+    int uv_src_stride, const uint16_t *u_pre, const uint16_t *v_pre,
+    int uv_pre_stride, unsigned int block_width, unsigned int block_height,
+    int ss_x, int ss_y, int strength, int use_whole_blk, uint32_t *y_accum,
+    uint16_t *y_count, const uint32_t *y_dist, const uint32_t *u_dist,
+    const uint32_t *v_dist, const uint32_t *const *neighbors_first,
+    const uint32_t *const *neighbors_second, int top_weight,
+    int bottom_weight) {
+  const int rounding = (1 << strength) >> 1;
+  int weight = top_weight;
+
+  __m128i mul_first, mul_second;
+
+  __m128i sum_row_1_first, sum_row_1_second;
+  __m128i sum_row_2_first, sum_row_2_second;
+  __m128i sum_row_3_first, sum_row_3_second;
+
+  __m128i u_first, u_second;
+  __m128i v_first, v_second;
+
+  __m128i sum_row_first;
+  __m128i sum_row_second;
+
+  // Loop variables
+  unsigned int h;
+
+  assert(strength >= 4 && strength <= 14 &&
+         "invalid adjusted temporal filter strength");
+  assert(block_width == 8);
+
+  (void)block_width;
+
+  // First row
+  mul_first = _mm_load_si128((const __m128i *)neighbors_first[0]);
+  mul_second = _mm_load_si128((const __m128i *)neighbors_second[0]);
+
+  // Add luma values
+  highbd_get_sum_8(y_dist, &sum_row_2_first, &sum_row_2_second);
+  highbd_get_sum_8(y_dist + DIST_STRIDE, &sum_row_3_first, &sum_row_3_second);
+
+  // We don't need to saturate here because the maximum value is UINT12_MAX ** 2
+  // * 9 ~= 2**24 * 9 < 2 ** 28 < INT32_MAX
+  sum_row_first = _mm_add_epi32(sum_row_2_first, sum_row_3_first);
+  sum_row_second = _mm_add_epi32(sum_row_2_second, sum_row_3_second);
+
+  // Add chroma values
+  highbd_read_chroma_dist_row_8(ss_x, u_dist, v_dist, &u_first, &u_second,
+                                &v_first, &v_second);
+
+  // Max value here is 2 ** 24 * (9 + 2), so no saturation is needed
+  sum_row_first = _mm_add_epi32(sum_row_first, u_first);
+  sum_row_second = _mm_add_epi32(sum_row_second, u_second);
+
+  sum_row_first = _mm_add_epi32(sum_row_first, v_first);
+  sum_row_second = _mm_add_epi32(sum_row_second, v_second);
+
+  // Get modifier and store result
+  highbd_average_8(&sum_row_first, &sum_row_second, &sum_row_first,
+                   &sum_row_second, &mul_first, &mul_second, strength, rounding,
+                   weight);
+
+  highbd_accumulate_and_store_8(sum_row_first, sum_row_second, y_pre, y_count,
+                                y_accum);
+
+  y_src += y_src_stride;
+  y_pre += y_pre_stride;
+  y_count += y_pre_stride;
+  y_accum += y_pre_stride;
+  y_dist += DIST_STRIDE;
+
+  u_src += uv_src_stride;
+  u_pre += uv_pre_stride;
+  u_dist += DIST_STRIDE;
+  v_src += uv_src_stride;
+  v_pre += uv_pre_stride;
+  v_dist += DIST_STRIDE;
+
+  // Then all the rows except the last one
+  mul_first = _mm_load_si128((const __m128i *)neighbors_first[1]);
+  mul_second = _mm_load_si128((const __m128i *)neighbors_second[1]);
+
+  for (h = 1; h < block_height - 1; ++h) {
+    // Move the weight to bottom half
+    if (!use_whole_blk && h == block_height / 2) {
+      weight = bottom_weight;
+    }
+    // Shift the rows up
+    sum_row_1_first = sum_row_2_first;
+    sum_row_1_second = sum_row_2_second;
+    sum_row_2_first = sum_row_3_first;
+    sum_row_2_second = sum_row_3_second;
+
+    // Add luma values to the modifier
+    sum_row_first = _mm_add_epi32(sum_row_1_first, sum_row_2_first);
+    sum_row_second = _mm_add_epi32(sum_row_1_second, sum_row_2_second);
+
+    highbd_get_sum_8(y_dist + DIST_STRIDE, &sum_row_3_first, &sum_row_3_second);
+
+    sum_row_first = _mm_add_epi32(sum_row_first, sum_row_3_first);
+    sum_row_second = _mm_add_epi32(sum_row_second, sum_row_3_second);
+
+    // Add chroma values to the modifier
+    if (ss_y == 0 || h % 2 == 0) {
+      // Only calculate the new chroma distortion if we are at a pixel that
+      // corresponds to a new chroma row
+      highbd_read_chroma_dist_row_8(ss_x, u_dist, v_dist, &u_first, &u_second,
+                                    &v_first, &v_second);
+
+      u_src += uv_src_stride;
+      u_pre += uv_pre_stride;
+      u_dist += DIST_STRIDE;
+      v_src += uv_src_stride;
+      v_pre += uv_pre_stride;
+      v_dist += DIST_STRIDE;
+    }
+
+    sum_row_first = _mm_add_epi32(sum_row_first, u_first);
+    sum_row_second = _mm_add_epi32(sum_row_second, u_second);
+    sum_row_first = _mm_add_epi32(sum_row_first, v_first);
+    sum_row_second = _mm_add_epi32(sum_row_second, v_second);
+
+    // Get modifier and store result
+    highbd_average_8(&sum_row_first, &sum_row_second, &sum_row_first,
+                     &sum_row_second, &mul_first, &mul_second, strength,
+                     rounding, weight);
+    highbd_accumulate_and_store_8(sum_row_first, sum_row_second, y_pre, y_count,
+                                  y_accum);
+
+    y_src += y_src_stride;
+    y_pre += y_pre_stride;
+    y_count += y_pre_stride;
+    y_accum += y_pre_stride;
+    y_dist += DIST_STRIDE;
+  }
+
+  // The last row
+  mul_first = _mm_load_si128((const __m128i *)neighbors_first[0]);
+  mul_second = _mm_load_si128((const __m128i *)neighbors_second[0]);
+
+  // Shift the rows up
+  sum_row_1_first = sum_row_2_first;
+  sum_row_1_second = sum_row_2_second;
+  sum_row_2_first = sum_row_3_first;
+  sum_row_2_second = sum_row_3_second;
+
+  // Add luma values to the modifier
+  sum_row_first = _mm_add_epi32(sum_row_1_first, sum_row_2_first);
+  sum_row_second = _mm_add_epi32(sum_row_1_second, sum_row_2_second);
+
+  // Add chroma values to the modifier
+  if (ss_y == 0) {
+    // Only calculate the new chroma distortion if we are at a pixel that
+    // corresponds to a new chroma row
+    highbd_read_chroma_dist_row_8(ss_x, u_dist, v_dist, &u_first, &u_second,
+                                  &v_first, &v_second);
+  }
+
+  sum_row_first = _mm_add_epi32(sum_row_first, u_first);
+  sum_row_second = _mm_add_epi32(sum_row_second, u_second);
+  sum_row_first = _mm_add_epi32(sum_row_first, v_first);
+  sum_row_second = _mm_add_epi32(sum_row_second, v_second);
+
+  // Get modifier and store result
+  highbd_average_8(&sum_row_first, &sum_row_second, &sum_row_first,
+                   &sum_row_second, &mul_first, &mul_second, strength, rounding,
+                   weight);
+  highbd_accumulate_and_store_8(sum_row_first, sum_row_second, y_pre, y_count,
+                                y_accum);
+}
+
+// Perform temporal filter for the luma component.
+static void vp9_highbd_apply_temporal_filter_luma(
+    const uint16_t *y_src, int y_src_stride, const uint16_t *y_pre,
+    int y_pre_stride, const uint16_t *u_src, const uint16_t *v_src,
+    int uv_src_stride, const uint16_t *u_pre, const uint16_t *v_pre,
+    int uv_pre_stride, unsigned int block_width, unsigned int block_height,
+    int ss_x, int ss_y, int strength, const int *blk_fw, int use_whole_blk,
+    uint32_t *y_accum, uint16_t *y_count, const uint32_t *y_dist,
+    const uint32_t *u_dist, const uint32_t *v_dist) {
+  unsigned int blk_col = 0, uv_blk_col = 0;
+  const unsigned int blk_col_step = 8, uv_blk_col_step = 8 >> ss_x;
+  const unsigned int mid_width = block_width >> 1,
+                     last_width = block_width - blk_col_step;
+  int top_weight = blk_fw[0],
+      bottom_weight = use_whole_blk ? blk_fw[0] : blk_fw[2];
+  const uint32_t *const *neighbors_first;
+  const uint32_t *const *neighbors_second;
+
+  // Left
+  neighbors_first = HIGHBD_LUMA_LEFT_COLUMN_NEIGHBORS;
+  neighbors_second = HIGHBD_LUMA_MIDDLE_COLUMN_NEIGHBORS;
+  vp9_highbd_apply_temporal_filter_luma_8(
+      y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+      u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride, u_pre + uv_blk_col,
+      v_pre + uv_blk_col, uv_pre_stride, blk_col_step, block_height, ss_x, ss_y,
+      strength, use_whole_blk, y_accum + blk_col, y_count + blk_col,
+      y_dist + blk_col, u_dist + uv_blk_col, v_dist + uv_blk_col,
+      neighbors_first, neighbors_second, top_weight, bottom_weight);
+
+  blk_col += blk_col_step;
+  uv_blk_col += uv_blk_col_step;
+
+  // Middle First
+  neighbors_first = HIGHBD_LUMA_MIDDLE_COLUMN_NEIGHBORS;
+  for (; blk_col < mid_width;
+       blk_col += blk_col_step, uv_blk_col += uv_blk_col_step) {
+    vp9_highbd_apply_temporal_filter_luma_8(
+        y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+        u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride,
+        u_pre + uv_blk_col, v_pre + uv_blk_col, uv_pre_stride, blk_col_step,
+        block_height, ss_x, ss_y, strength, use_whole_blk, y_accum + blk_col,
+        y_count + blk_col, y_dist + blk_col, u_dist + uv_blk_col,
+        v_dist + uv_blk_col, neighbors_first, neighbors_second, top_weight,
+        bottom_weight);
+  }
+
+  if (!use_whole_blk) {
+    top_weight = blk_fw[1];
+    bottom_weight = blk_fw[3];
+  }
+
+  // Middle Second
+  for (; blk_col < last_width;
+       blk_col += blk_col_step, uv_blk_col += uv_blk_col_step) {
+    vp9_highbd_apply_temporal_filter_luma_8(
+        y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+        u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride,
+        u_pre + uv_blk_col, v_pre + uv_blk_col, uv_pre_stride, blk_col_step,
+        block_height, ss_x, ss_y, strength, use_whole_blk, y_accum + blk_col,
+        y_count + blk_col, y_dist + blk_col, u_dist + uv_blk_col,
+        v_dist + uv_blk_col, neighbors_first, neighbors_second, top_weight,
+        bottom_weight);
+  }
+
+  // Right
+  neighbors_second = HIGHBD_LUMA_RIGHT_COLUMN_NEIGHBORS;
+  vp9_highbd_apply_temporal_filter_luma_8(
+      y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+      u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride, u_pre + uv_blk_col,
+      v_pre + uv_blk_col, uv_pre_stride, blk_col_step, block_height, ss_x, ss_y,
+      strength, use_whole_blk, y_accum + blk_col, y_count + blk_col,
+      y_dist + blk_col, u_dist + uv_blk_col, v_dist + uv_blk_col,
+      neighbors_first, neighbors_second, top_weight, bottom_weight);
+}
+
+// Add a row of luma distortion that corresponds to 8 chroma mods. If we are
+// subsampling in the x direction, then we have 16 luma values, else we have 8.
+static INLINE void highbd_add_luma_dist_to_8_chroma_mod(
+    const uint32_t *y_dist, int ss_x, int ss_y, __m128i *u_mod_fst,
+    __m128i *u_mod_snd, __m128i *v_mod_fst, __m128i *v_mod_snd) {
+  __m128i y_reg_fst, y_reg_snd;
+  if (!ss_x) {
+    highbd_read_dist_8(y_dist, &y_reg_fst, &y_reg_snd);
+    if (ss_y == 1) {
+      __m128i y_tmp_fst, y_tmp_snd;
+      highbd_read_dist_8(y_dist + DIST_STRIDE, &y_tmp_fst, &y_tmp_snd);
+      y_reg_fst = _mm_add_epi32(y_reg_fst, y_tmp_fst);
+      y_reg_snd = _mm_add_epi32(y_reg_snd, y_tmp_snd);
+    }
+  } else {
+    // Temporary
+    __m128i y_fst, y_snd;
+
+    // First 8
+    highbd_read_dist_8(y_dist, &y_fst, &y_snd);
+    if (ss_y == 1) {
+      __m128i y_tmp_fst, y_tmp_snd;
+      highbd_read_dist_8(y_dist + DIST_STRIDE, &y_tmp_fst, &y_tmp_snd);
+
+      y_fst = _mm_add_epi32(y_fst, y_tmp_fst);
+      y_snd = _mm_add_epi32(y_snd, y_tmp_snd);
+    }
+
+    y_reg_fst = _mm_hadd_epi32(y_fst, y_snd);
+
+    // Second 8
+    highbd_read_dist_8(y_dist + 8, &y_fst, &y_snd);
+    if (ss_y == 1) {
+      __m128i y_tmp_fst, y_tmp_snd;
+      highbd_read_dist_8(y_dist + 8 + DIST_STRIDE, &y_tmp_fst, &y_tmp_snd);
+
+      y_fst = _mm_add_epi32(y_fst, y_tmp_fst);
+      y_snd = _mm_add_epi32(y_snd, y_tmp_snd);
+    }
+
+    y_reg_snd = _mm_hadd_epi32(y_fst, y_snd);
+  }
+
+  *u_mod_fst = _mm_add_epi32(*u_mod_fst, y_reg_fst);
+  *u_mod_snd = _mm_add_epi32(*u_mod_snd, y_reg_snd);
+  *v_mod_fst = _mm_add_epi32(*v_mod_fst, y_reg_fst);
+  *v_mod_snd = _mm_add_epi32(*v_mod_snd, y_reg_snd);
+}
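The horizontal-add and row-add paths above implement one mapping: each chroma lane absorbs the luma distortion of the `(1 + ss_x) * (1 + ss_y)` co-located luma pixels. A scalar sketch of that mapping for a single chroma column (the helper name is mine, not part of the patch):

```c
#include <stdint.h>

/* Sum the luma distortions covered by chroma column x: (1 + ss_x) adjacent
 * columns and (1 + ss_y) adjacent rows, matching the _mm_hadd_epi32 pair
 * sums (ss_x) and the y_dist + DIST_STRIDE row adds (ss_y) above. */
static uint32_t luma_dist_for_chroma(const uint32_t *y_dist, int stride,
                                     int x, int ss_x, int ss_y) {
  uint32_t sum = 0;
  int r, c;
  for (r = 0; r <= ss_y; ++r)
    for (c = 0; c <= ss_x; ++c)
      sum += y_dist[r * stride + (x << ss_x) + c];
  return sum;
}
```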
+
+// Apply the temporal filter to the chroma components. This performs temporal
+// filtering on a chroma block of 8 x uv_height. If blk_fw is not NULL, use
+// blk_fw as an array of 4 weights, one for each of the 4 subblocks,
+// else use top_weight for the top half and bottom_weight for the bottom half.
+static void vp9_highbd_apply_temporal_filter_chroma_8(
+    const uint16_t *y_src, int y_src_stride, const uint16_t *y_pre,
+    int y_pre_stride, const uint16_t *u_src, const uint16_t *v_src,
+    int uv_src_stride, const uint16_t *u_pre, const uint16_t *v_pre,
+    int uv_pre_stride, unsigned int uv_block_width,
+    unsigned int uv_block_height, int ss_x, int ss_y, int strength,
+    uint32_t *u_accum, uint16_t *u_count, uint32_t *v_accum, uint16_t *v_count,
+    const uint32_t *y_dist, const uint32_t *u_dist, const uint32_t *v_dist,
+    const uint32_t *const *neighbors_fst, const uint32_t *const *neighbors_snd,
+    int top_weight, int bottom_weight, const int *blk_fw) {
+  const int rounding = (1 << strength) >> 1;
+  int weight = top_weight;
+
+  __m128i mul_fst, mul_snd;
+
+  __m128i u_sum_row_1_fst, u_sum_row_2_fst, u_sum_row_3_fst;
+  __m128i v_sum_row_1_fst, v_sum_row_2_fst, v_sum_row_3_fst;
+  __m128i u_sum_row_1_snd, u_sum_row_2_snd, u_sum_row_3_snd;
+  __m128i v_sum_row_1_snd, v_sum_row_2_snd, v_sum_row_3_snd;
+
+  __m128i u_sum_row_fst, v_sum_row_fst;
+  __m128i u_sum_row_snd, v_sum_row_snd;
+
+  // Loop variable
+  unsigned int h;
+
+  (void)uv_block_width;
+
+  // First row
+  mul_fst = _mm_load_si128((const __m128i *)neighbors_fst[0]);
+  mul_snd = _mm_load_si128((const __m128i *)neighbors_snd[0]);
+
+  // Add chroma values
+  highbd_get_sum_8(u_dist, &u_sum_row_2_fst, &u_sum_row_2_snd);
+  highbd_get_sum_8(u_dist + DIST_STRIDE, &u_sum_row_3_fst, &u_sum_row_3_snd);
+
+  u_sum_row_fst = _mm_add_epi32(u_sum_row_2_fst, u_sum_row_3_fst);
+  u_sum_row_snd = _mm_add_epi32(u_sum_row_2_snd, u_sum_row_3_snd);
+
+  highbd_get_sum_8(v_dist, &v_sum_row_2_fst, &v_sum_row_2_snd);
+  highbd_get_sum_8(v_dist + DIST_STRIDE, &v_sum_row_3_fst, &v_sum_row_3_snd);
+
+  v_sum_row_fst = _mm_add_epi32(v_sum_row_2_fst, v_sum_row_3_fst);
+  v_sum_row_snd = _mm_add_epi32(v_sum_row_2_snd, v_sum_row_3_snd);
+
+  // Add luma values
+  highbd_add_luma_dist_to_8_chroma_mod(y_dist, ss_x, ss_y, &u_sum_row_fst,
+                                       &u_sum_row_snd, &v_sum_row_fst,
+                                       &v_sum_row_snd);
+
+  // Get modifier and store result
+  if (blk_fw) {
+    highbd_average_4(&u_sum_row_fst, &u_sum_row_fst, &mul_fst, strength,
+                     rounding, blk_fw[0]);
+    highbd_average_4(&u_sum_row_snd, &u_sum_row_snd, &mul_snd, strength,
+                     rounding, blk_fw[1]);
+
+    highbd_average_4(&v_sum_row_fst, &v_sum_row_fst, &mul_fst, strength,
+                     rounding, blk_fw[0]);
+    highbd_average_4(&v_sum_row_snd, &v_sum_row_snd, &mul_snd, strength,
+                     rounding, blk_fw[1]);
+
+  } else {
+    highbd_average_8(&u_sum_row_fst, &u_sum_row_snd, &u_sum_row_fst,
+                     &u_sum_row_snd, &mul_fst, &mul_snd, strength, rounding,
+                     weight);
+    highbd_average_8(&v_sum_row_fst, &v_sum_row_snd, &v_sum_row_fst,
+                     &v_sum_row_snd, &mul_fst, &mul_snd, strength, rounding,
+                     weight);
+  }
+  highbd_accumulate_and_store_8(u_sum_row_fst, u_sum_row_snd, u_pre, u_count,
+                                u_accum);
+  highbd_accumulate_and_store_8(v_sum_row_fst, v_sum_row_snd, v_pre, v_count,
+                                v_accum);
+
+  u_src += uv_src_stride;
+  u_pre += uv_pre_stride;
+  u_dist += DIST_STRIDE;
+  v_src += uv_src_stride;
+  v_pre += uv_pre_stride;
+  v_dist += DIST_STRIDE;
+  u_count += uv_pre_stride;
+  u_accum += uv_pre_stride;
+  v_count += uv_pre_stride;
+  v_accum += uv_pre_stride;
+
+  y_src += y_src_stride * (1 + ss_y);
+  y_pre += y_pre_stride * (1 + ss_y);
+  y_dist += DIST_STRIDE * (1 + ss_y);
+
+  // Then all the rows except the last one
+  mul_fst = _mm_load_si128((const __m128i *)neighbors_fst[1]);
+  mul_snd = _mm_load_si128((const __m128i *)neighbors_snd[1]);
+
+  for (h = 1; h < uv_block_height - 1; ++h) {
+    // Move the weight pointer to the bottom half of the blocks
+    if (h == uv_block_height / 2) {
+      if (blk_fw) {
+        blk_fw += 2;
+      } else {
+        weight = bottom_weight;
+      }
+    }
+
+    // Shift the rows up
+    u_sum_row_1_fst = u_sum_row_2_fst;
+    u_sum_row_2_fst = u_sum_row_3_fst;
+    u_sum_row_1_snd = u_sum_row_2_snd;
+    u_sum_row_2_snd = u_sum_row_3_snd;
+
+    v_sum_row_1_fst = v_sum_row_2_fst;
+    v_sum_row_2_fst = v_sum_row_3_fst;
+    v_sum_row_1_snd = v_sum_row_2_snd;
+    v_sum_row_2_snd = v_sum_row_3_snd;
+
+    // Add chroma values
+    u_sum_row_fst = _mm_add_epi32(u_sum_row_1_fst, u_sum_row_2_fst);
+    u_sum_row_snd = _mm_add_epi32(u_sum_row_1_snd, u_sum_row_2_snd);
+    highbd_get_sum_8(u_dist + DIST_STRIDE, &u_sum_row_3_fst, &u_sum_row_3_snd);
+    u_sum_row_fst = _mm_add_epi32(u_sum_row_fst, u_sum_row_3_fst);
+    u_sum_row_snd = _mm_add_epi32(u_sum_row_snd, u_sum_row_3_snd);
+
+    v_sum_row_fst = _mm_add_epi32(v_sum_row_1_fst, v_sum_row_2_fst);
+    v_sum_row_snd = _mm_add_epi32(v_sum_row_1_snd, v_sum_row_2_snd);
+    highbd_get_sum_8(v_dist + DIST_STRIDE, &v_sum_row_3_fst, &v_sum_row_3_snd);
+    v_sum_row_fst = _mm_add_epi32(v_sum_row_fst, v_sum_row_3_fst);
+    v_sum_row_snd = _mm_add_epi32(v_sum_row_snd, v_sum_row_3_snd);
+
+    // Add luma values
+    highbd_add_luma_dist_to_8_chroma_mod(y_dist, ss_x, ss_y, &u_sum_row_fst,
+                                         &u_sum_row_snd, &v_sum_row_fst,
+                                         &v_sum_row_snd);
+
+    // Get modifier and store result
+    if (blk_fw) {
+      highbd_average_4(&u_sum_row_fst, &u_sum_row_fst, &mul_fst, strength,
+                       rounding, blk_fw[0]);
+      highbd_average_4(&u_sum_row_snd, &u_sum_row_snd, &mul_snd, strength,
+                       rounding, blk_fw[1]);
+
+      highbd_average_4(&v_sum_row_fst, &v_sum_row_fst, &mul_fst, strength,
+                       rounding, blk_fw[0]);
+      highbd_average_4(&v_sum_row_snd, &v_sum_row_snd, &mul_snd, strength,
+                       rounding, blk_fw[1]);
+
+    } else {
+      highbd_average_8(&u_sum_row_fst, &u_sum_row_snd, &u_sum_row_fst,
+                       &u_sum_row_snd, &mul_fst, &mul_snd, strength, rounding,
+                       weight);
+      highbd_average_8(&v_sum_row_fst, &v_sum_row_snd, &v_sum_row_fst,
+                       &v_sum_row_snd, &mul_fst, &mul_snd, strength, rounding,
+                       weight);
+    }
+
+    highbd_accumulate_and_store_8(u_sum_row_fst, u_sum_row_snd, u_pre, u_count,
+                                  u_accum);
+    highbd_accumulate_and_store_8(v_sum_row_fst, v_sum_row_snd, v_pre, v_count,
+                                  v_accum);
+
+    u_src += uv_src_stride;
+    u_pre += uv_pre_stride;
+    u_dist += DIST_STRIDE;
+    v_src += uv_src_stride;
+    v_pre += uv_pre_stride;
+    v_dist += DIST_STRIDE;
+    u_count += uv_pre_stride;
+    u_accum += uv_pre_stride;
+    v_count += uv_pre_stride;
+    v_accum += uv_pre_stride;
+
+    y_src += y_src_stride * (1 + ss_y);
+    y_pre += y_pre_stride * (1 + ss_y);
+    y_dist += DIST_STRIDE * (1 + ss_y);
+  }
+
+  // The last row
+  mul_fst = _mm_load_si128((const __m128i *)neighbors_fst[0]);
+  mul_snd = _mm_load_si128((const __m128i *)neighbors_snd[0]);
+
+  // Shift the rows up
+  u_sum_row_1_fst = u_sum_row_2_fst;
+  u_sum_row_2_fst = u_sum_row_3_fst;
+  u_sum_row_1_snd = u_sum_row_2_snd;
+  u_sum_row_2_snd = u_sum_row_3_snd;
+
+  v_sum_row_1_fst = v_sum_row_2_fst;
+  v_sum_row_2_fst = v_sum_row_3_fst;
+  v_sum_row_1_snd = v_sum_row_2_snd;
+  v_sum_row_2_snd = v_sum_row_3_snd;
+
+  // Add chroma values
+  u_sum_row_fst = _mm_add_epi32(u_sum_row_1_fst, u_sum_row_2_fst);
+  v_sum_row_fst = _mm_add_epi32(v_sum_row_1_fst, v_sum_row_2_fst);
+  u_sum_row_snd = _mm_add_epi32(u_sum_row_1_snd, u_sum_row_2_snd);
+  v_sum_row_snd = _mm_add_epi32(v_sum_row_1_snd, v_sum_row_2_snd);
+
+  // Add luma values
+  highbd_add_luma_dist_to_8_chroma_mod(y_dist, ss_x, ss_y, &u_sum_row_fst,
+                                       &u_sum_row_snd, &v_sum_row_fst,
+                                       &v_sum_row_snd);
+
+  // Get modifier and store result
+  if (blk_fw) {
+    highbd_average_4(&u_sum_row_fst, &u_sum_row_fst, &mul_fst, strength,
+                     rounding, blk_fw[0]);
+    highbd_average_4(&u_sum_row_snd, &u_sum_row_snd, &mul_snd, strength,
+                     rounding, blk_fw[1]);
+
+    highbd_average_4(&v_sum_row_fst, &v_sum_row_fst, &mul_fst, strength,
+                     rounding, blk_fw[0]);
+    highbd_average_4(&v_sum_row_snd, &v_sum_row_snd, &mul_snd, strength,
+                     rounding, blk_fw[1]);
+
+  } else {
+    highbd_average_8(&u_sum_row_fst, &u_sum_row_snd, &u_sum_row_fst,
+                     &u_sum_row_snd, &mul_fst, &mul_snd, strength, rounding,
+                     weight);
+    highbd_average_8(&v_sum_row_fst, &v_sum_row_snd, &v_sum_row_fst,
+                     &v_sum_row_snd, &mul_fst, &mul_snd, strength, rounding,
+                     weight);
+  }
+
+  highbd_accumulate_and_store_8(u_sum_row_fst, u_sum_row_snd, u_pre, u_count,
+                                u_accum);
+  highbd_accumulate_and_store_8(v_sum_row_fst, v_sum_row_snd, v_pre, v_count,
+                                v_accum);
+}
+
+// Perform temporal filter for the chroma components.
+static void vp9_highbd_apply_temporal_filter_chroma(
+    const uint16_t *y_src, int y_src_stride, const uint16_t *y_pre,
+    int y_pre_stride, const uint16_t *u_src, const uint16_t *v_src,
+    int uv_src_stride, const uint16_t *u_pre, const uint16_t *v_pre,
+    int uv_pre_stride, unsigned int block_width, unsigned int block_height,
+    int ss_x, int ss_y, int strength, const int *blk_fw, int use_whole_blk,
+    uint32_t *u_accum, uint16_t *u_count, uint32_t *v_accum, uint16_t *v_count,
+    const uint32_t *y_dist, const uint32_t *u_dist, const uint32_t *v_dist) {
+  const unsigned int uv_width = block_width >> ss_x,
+                     uv_height = block_height >> ss_y;
+
+  unsigned int blk_col = 0, uv_blk_col = 0;
+  const unsigned int uv_blk_col_step = 8, blk_col_step = 8 << ss_x;
+  const unsigned int uv_mid_width = uv_width >> 1,
+                     uv_last_width = uv_width - uv_blk_col_step;
+  int top_weight = blk_fw[0],
+      bottom_weight = use_whole_blk ? blk_fw[0] : blk_fw[2];
+  const uint32_t *const *neighbors_fst;
+  const uint32_t *const *neighbors_snd;
+
+  if (uv_width == 8) {
+    // Special Case: We are subsampling in x direction on a 16x16 block. Since
+    // we are operating on a row of 8 chroma pixels, we can't use the usual
+    // left-middle-right pattern.
+    assert(ss_x);
+
+    if (ss_y) {
+      neighbors_fst = HIGHBD_CHROMA_DOUBLE_SS_LEFT_COLUMN_NEIGHBORS;
+      neighbors_snd = HIGHBD_CHROMA_DOUBLE_SS_RIGHT_COLUMN_NEIGHBORS;
+    } else {
+      neighbors_fst = HIGHBD_CHROMA_SINGLE_SS_LEFT_COLUMN_NEIGHBORS;
+      neighbors_snd = HIGHBD_CHROMA_SINGLE_SS_RIGHT_COLUMN_NEIGHBORS;
+    }
+
+    if (use_whole_blk) {
+      vp9_highbd_apply_temporal_filter_chroma_8(
+          y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+          u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride,
+          u_pre + uv_blk_col, v_pre + uv_blk_col, uv_pre_stride, uv_width,
+          uv_height, ss_x, ss_y, strength, u_accum + uv_blk_col,
+          u_count + uv_blk_col, v_accum + uv_blk_col, v_count + uv_blk_col,
+          y_dist + blk_col, u_dist + uv_blk_col, v_dist + uv_blk_col,
+          neighbors_fst, neighbors_snd, top_weight, bottom_weight, NULL);
+    } else {
+      vp9_highbd_apply_temporal_filter_chroma_8(
+          y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+          u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride,
+          u_pre + uv_blk_col, v_pre + uv_blk_col, uv_pre_stride, uv_width,
+          uv_height, ss_x, ss_y, strength, u_accum + uv_blk_col,
+          u_count + uv_blk_col, v_accum + uv_blk_col, v_count + uv_blk_col,
+          y_dist + blk_col, u_dist + uv_blk_col, v_dist + uv_blk_col,
+          neighbors_fst, neighbors_snd, 0, 0, blk_fw);
+    }
+
+    return;
+  }
+
+  // Left
+  if (ss_x && ss_y) {
+    neighbors_fst = HIGHBD_CHROMA_DOUBLE_SS_LEFT_COLUMN_NEIGHBORS;
+    neighbors_snd = HIGHBD_CHROMA_DOUBLE_SS_MIDDLE_COLUMN_NEIGHBORS;
+  } else if (ss_x || ss_y) {
+    neighbors_fst = HIGHBD_CHROMA_SINGLE_SS_LEFT_COLUMN_NEIGHBORS;
+    neighbors_snd = HIGHBD_CHROMA_SINGLE_SS_MIDDLE_COLUMN_NEIGHBORS;
+  } else {
+    neighbors_fst = HIGHBD_CHROMA_NO_SS_LEFT_COLUMN_NEIGHBORS;
+    neighbors_snd = HIGHBD_CHROMA_NO_SS_MIDDLE_COLUMN_NEIGHBORS;
+  }
+
+  vp9_highbd_apply_temporal_filter_chroma_8(
+      y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+      u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride, u_pre + uv_blk_col,
+      v_pre + uv_blk_col, uv_pre_stride, uv_width, uv_height, ss_x, ss_y,
+      strength, u_accum + uv_blk_col, u_count + uv_blk_col,
+      v_accum + uv_blk_col, v_count + uv_blk_col, y_dist + blk_col,
+      u_dist + uv_blk_col, v_dist + uv_blk_col, neighbors_fst, neighbors_snd,
+      top_weight, bottom_weight, NULL);
+
+  blk_col += blk_col_step;
+  uv_blk_col += uv_blk_col_step;
+
+  // Middle First
+  if (ss_x && ss_y) {
+    neighbors_fst = HIGHBD_CHROMA_DOUBLE_SS_MIDDLE_COLUMN_NEIGHBORS;
+  } else if (ss_x || ss_y) {
+    neighbors_fst = HIGHBD_CHROMA_SINGLE_SS_MIDDLE_COLUMN_NEIGHBORS;
+  } else {
+    neighbors_fst = HIGHBD_CHROMA_NO_SS_MIDDLE_COLUMN_NEIGHBORS;
+  }
+
+  for (; uv_blk_col < uv_mid_width;
+       blk_col += blk_col_step, uv_blk_col += uv_blk_col_step) {
+    vp9_highbd_apply_temporal_filter_chroma_8(
+        y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+        u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride,
+        u_pre + uv_blk_col, v_pre + uv_blk_col, uv_pre_stride, uv_width,
+        uv_height, ss_x, ss_y, strength, u_accum + uv_blk_col,
+        u_count + uv_blk_col, v_accum + uv_blk_col, v_count + uv_blk_col,
+        y_dist + blk_col, u_dist + uv_blk_col, v_dist + uv_blk_col,
+        neighbors_fst, neighbors_snd, top_weight, bottom_weight, NULL);
+  }
+
+  if (!use_whole_blk) {
+    top_weight = blk_fw[1];
+    bottom_weight = blk_fw[3];
+  }
+
+  // Middle Second
+  for (; uv_blk_col < uv_last_width;
+       blk_col += blk_col_step, uv_blk_col += uv_blk_col_step) {
+    vp9_highbd_apply_temporal_filter_chroma_8(
+        y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+        u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride,
+        u_pre + uv_blk_col, v_pre + uv_blk_col, uv_pre_stride, uv_width,
+        uv_height, ss_x, ss_y, strength, u_accum + uv_blk_col,
+        u_count + uv_blk_col, v_accum + uv_blk_col, v_count + uv_blk_col,
+        y_dist + blk_col, u_dist + uv_blk_col, v_dist + uv_blk_col,
+        neighbors_fst, neighbors_snd, top_weight, bottom_weight, NULL);
+  }
+
+  // Right
+  if (ss_x && ss_y) {
+    neighbors_snd = HIGHBD_CHROMA_DOUBLE_SS_RIGHT_COLUMN_NEIGHBORS;
+  } else if (ss_x || ss_y) {
+    neighbors_snd = HIGHBD_CHROMA_SINGLE_SS_RIGHT_COLUMN_NEIGHBORS;
+  } else {
+    neighbors_snd = HIGHBD_CHROMA_NO_SS_RIGHT_COLUMN_NEIGHBORS;
+  }
+
+  vp9_highbd_apply_temporal_filter_chroma_8(
+      y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+      u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride, u_pre + uv_blk_col,
+      v_pre + uv_blk_col, uv_pre_stride, uv_width, uv_height, ss_x, ss_y,
+      strength, u_accum + uv_blk_col, u_count + uv_blk_col,
+      v_accum + uv_blk_col, v_count + uv_blk_col, y_dist + blk_col,
+      u_dist + uv_blk_col, v_dist + uv_blk_col, neighbors_fst, neighbors_snd,
+      top_weight, bottom_weight, NULL);
+}
+
+void vp9_highbd_apply_temporal_filter_sse4_1(
+    const uint16_t *y_src, int y_src_stride, const uint16_t *y_pre,
+    int y_pre_stride, const uint16_t *u_src, const uint16_t *v_src,
+    int uv_src_stride, const uint16_t *u_pre, const uint16_t *v_pre,
+    int uv_pre_stride, unsigned int block_width, unsigned int block_height,
+    int ss_x, int ss_y, int strength, const int *const blk_fw,
+    int use_whole_blk, uint32_t *y_accum, uint16_t *y_count, uint32_t *u_accum,
+    uint16_t *u_count, uint32_t *v_accum, uint16_t *v_count) {
+  const unsigned int chroma_height = block_height >> ss_y,
+                     chroma_width = block_width >> ss_x;
+
+  DECLARE_ALIGNED(16, uint32_t, y_dist[BH * DIST_STRIDE]) = { 0 };
+  DECLARE_ALIGNED(16, uint32_t, u_dist[BH * DIST_STRIDE]) = { 0 };
+  DECLARE_ALIGNED(16, uint32_t, v_dist[BH * DIST_STRIDE]) = { 0 };
+
+  uint32_t *y_dist_ptr = y_dist + 1, *u_dist_ptr = u_dist + 1,
+           *v_dist_ptr = v_dist + 1;
+  const uint16_t *y_src_ptr = y_src, *u_src_ptr = u_src, *v_src_ptr = v_src;
+  const uint16_t *y_pre_ptr = y_pre, *u_pre_ptr = u_pre, *v_pre_ptr = v_pre;
+
+  // Loop variables
+  unsigned int row, blk_col;
+
+  assert(block_width <= BW && "block width too large");
+  assert(block_height <= BH && "block height too large");
+  assert(block_width % 16 == 0 && "block width must be multiple of 16");
+  assert(block_height % 2 == 0 && "block height must be even");
+  assert((ss_x == 0 || ss_x == 1) && (ss_y == 0 || ss_y == 1) &&
+         "invalid chroma subsampling");
+  assert(strength >= 4 && strength <= 14 &&
+         "invalid adjusted temporal filter strength");
+  assert(blk_fw[0] >= 0 && "filter weight must be nonnegative");
+  assert(
+      (use_whole_blk || (blk_fw[1] >= 0 && blk_fw[2] >= 0 && blk_fw[3] >= 0)) &&
+      "subblock filter weight must be nonnegative");
+  assert(blk_fw[0] <= 2 && "filter weight must not exceed 2");
+  assert(
+      (use_whole_blk || (blk_fw[1] <= 2 && blk_fw[2] <= 2 && blk_fw[3] <= 2)) &&
+      "subblock filter weight must not exceed 2");
+
+  // Precompute the difference squared
+  for (row = 0; row < block_height; row++) {
+    for (blk_col = 0; blk_col < block_width; blk_col += 8) {
+      highbd_store_dist_8(y_src_ptr + blk_col, y_pre_ptr + blk_col,
+                          y_dist_ptr + blk_col);
+    }
+    y_src_ptr += y_src_stride;
+    y_pre_ptr += y_pre_stride;
+    y_dist_ptr += DIST_STRIDE;
+  }
+
+  for (row = 0; row < chroma_height; row++) {
+    for (blk_col = 0; blk_col < chroma_width; blk_col += 8) {
+      highbd_store_dist_8(u_src_ptr + blk_col, u_pre_ptr + blk_col,
+                          u_dist_ptr + blk_col);
+      highbd_store_dist_8(v_src_ptr + blk_col, v_pre_ptr + blk_col,
+                          v_dist_ptr + blk_col);
+    }
+
+    u_src_ptr += uv_src_stride;
+    u_pre_ptr += uv_pre_stride;
+    u_dist_ptr += DIST_STRIDE;
+    v_src_ptr += uv_src_stride;
+    v_pre_ptr += uv_pre_stride;
+    v_dist_ptr += DIST_STRIDE;
+  }
+
+  y_dist_ptr = y_dist + 1;
+  u_dist_ptr = u_dist + 1;
+  v_dist_ptr = v_dist + 1;
+
+  vp9_highbd_apply_temporal_filter_luma(
+      y_src, y_src_stride, y_pre, y_pre_stride, u_src, v_src, uv_src_stride,
+      u_pre, v_pre, uv_pre_stride, block_width, block_height, ss_x, ss_y,
+      strength, blk_fw, use_whole_blk, y_accum, y_count, y_dist_ptr, u_dist_ptr,
+      v_dist_ptr);
+
+  vp9_highbd_apply_temporal_filter_chroma(
+      y_src, y_src_stride, y_pre, y_pre_stride, u_src, v_src, uv_src_stride,
+      u_pre, v_pre, uv_pre_stride, block_width, block_height, ss_x, ss_y,
+      strength, blk_fw, use_whole_blk, u_accum, u_count, v_accum, v_count,
+      y_dist_ptr, u_dist_ptr, v_dist_ptr);
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/temporal_filter_constants.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/temporal_filter_constants.h
new file mode 100644
index 0000000..7dcedda
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/temporal_filter_constants.h
@@ -0,0 +1,410 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VP9_ENCODER_X86_TEMPORAL_FILTER_CONSTANTS_H_
+#define VPX_VP9_ENCODER_X86_TEMPORAL_FILTER_CONSTANTS_H_
+#include "./vpx_config.h"
+
+// Division using multiplication and shifting. The C implementation does:
+// modifier *= 3;
+// modifier /= index;
+// where 'modifier' is a set of summed values and 'index' is the number of
+// summed values.
+//
+// This equation works out to (m * 3) / i which reduces to:
+// m * 3/4
+// m * 1/2
+// m * 1/3
+//
+// By pairing the multiply with a down shift by 16 (_mm_mulhi_epu16):
+// m * C / 65536
+// we can choose a constant C that replicates the division.
+//
+// m * 49152 / 65536 = m * 3/4
+// m * 32768 / 65536 = m * 1/2
+// m * 21846 / 65536 = m * 0.3333
+//
+// These are loaded using an instruction expecting int16_t values but are used
+// with _mm_mulhi_epu16(), which treats them as unsigned.
+#define NEIGHBOR_CONSTANT_4 (int16_t)49152
+#define NEIGHBOR_CONSTANT_5 (int16_t)39322
+#define NEIGHBOR_CONSTANT_6 (int16_t)32768
+#define NEIGHBOR_CONSTANT_7 (int16_t)28087
+#define NEIGHBOR_CONSTANT_8 (int16_t)24576
+#define NEIGHBOR_CONSTANT_9 (int16_t)21846
+#define NEIGHBOR_CONSTANT_10 (int16_t)19661
+#define NEIGHBOR_CONSTANT_11 (int16_t)17874
+#define NEIGHBOR_CONSTANT_13 (int16_t)15124
+
+DECLARE_ALIGNED(16, static const int16_t, LEFT_CORNER_NEIGHBORS_PLUS_1[8]) = {
+  NEIGHBOR_CONSTANT_5, NEIGHBOR_CONSTANT_7, NEIGHBOR_CONSTANT_7,
+  NEIGHBOR_CONSTANT_7, NEIGHBOR_CONSTANT_7, NEIGHBOR_CONSTANT_7,
+  NEIGHBOR_CONSTANT_7, NEIGHBOR_CONSTANT_7
+};
+
+DECLARE_ALIGNED(16, static const int16_t, RIGHT_CORNER_NEIGHBORS_PLUS_1[8]) = {
+  NEIGHBOR_CONSTANT_7, NEIGHBOR_CONSTANT_7, NEIGHBOR_CONSTANT_7,
+  NEIGHBOR_CONSTANT_7, NEIGHBOR_CONSTANT_7, NEIGHBOR_CONSTANT_7,
+  NEIGHBOR_CONSTANT_7, NEIGHBOR_CONSTANT_5
+};
+
+DECLARE_ALIGNED(16, static const int16_t, LEFT_EDGE_NEIGHBORS_PLUS_1[8]) = {
+  NEIGHBOR_CONSTANT_7,  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10,
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10,
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10
+};
+
+DECLARE_ALIGNED(16, static const int16_t, RIGHT_EDGE_NEIGHBORS_PLUS_1[8]) = {
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10,
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10,
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_7
+};
+
+DECLARE_ALIGNED(16, static const int16_t, MIDDLE_EDGE_NEIGHBORS_PLUS_1[8]) = {
+  NEIGHBOR_CONSTANT_7, NEIGHBOR_CONSTANT_7, NEIGHBOR_CONSTANT_7,
+  NEIGHBOR_CONSTANT_7, NEIGHBOR_CONSTANT_7, NEIGHBOR_CONSTANT_7,
+  NEIGHBOR_CONSTANT_7, NEIGHBOR_CONSTANT_7
+};
+
+DECLARE_ALIGNED(16, static const int16_t, MIDDLE_CENTER_NEIGHBORS_PLUS_1[8]) = {
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10,
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10,
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10
+};
+
+DECLARE_ALIGNED(16, static const int16_t, LEFT_CORNER_NEIGHBORS_PLUS_2[8]) = {
+  NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8,
+  NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8,
+  NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8
+};
+
+DECLARE_ALIGNED(16, static const int16_t, RIGHT_CORNER_NEIGHBORS_PLUS_2[8]) = {
+  NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8,
+  NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8,
+  NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_6
+};
+
+DECLARE_ALIGNED(16, static const int16_t, LEFT_EDGE_NEIGHBORS_PLUS_2[8]) = {
+  NEIGHBOR_CONSTANT_8,  NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11,
+  NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11,
+  NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11
+};
+
+DECLARE_ALIGNED(16, static const int16_t, RIGHT_EDGE_NEIGHBORS_PLUS_2[8]) = {
+  NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11,
+  NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11,
+  NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_8
+};
+
+DECLARE_ALIGNED(16, static const int16_t, MIDDLE_EDGE_NEIGHBORS_PLUS_2[8]) = {
+  NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8,
+  NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8,
+  NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8
+};
+
+DECLARE_ALIGNED(16, static const int16_t, MIDDLE_CENTER_NEIGHBORS_PLUS_2[8]) = {
+  NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11,
+  NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11,
+  NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11
+};
+
+DECLARE_ALIGNED(16, static const int16_t, TWO_CORNER_NEIGHBORS_PLUS_2[8]) = {
+  NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8,
+  NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_8,
+  NEIGHBOR_CONSTANT_8, NEIGHBOR_CONSTANT_6
+};
+
+DECLARE_ALIGNED(16, static const int16_t, TWO_EDGE_NEIGHBORS_PLUS_2[8]) = {
+  NEIGHBOR_CONSTANT_8,  NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11,
+  NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_11,
+  NEIGHBOR_CONSTANT_11, NEIGHBOR_CONSTANT_8
+};
+
+DECLARE_ALIGNED(16, static const int16_t, LEFT_CORNER_NEIGHBORS_PLUS_4[8]) = {
+  NEIGHBOR_CONSTANT_8,  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10,
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10,
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10
+};
+
+DECLARE_ALIGNED(16, static const int16_t, RIGHT_CORNER_NEIGHBORS_PLUS_4[8]) = {
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10,
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10,
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_8
+};
+
+DECLARE_ALIGNED(16, static const int16_t, LEFT_EDGE_NEIGHBORS_PLUS_4[8]) = {
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13,
+  NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13,
+  NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13
+};
+
+DECLARE_ALIGNED(16, static const int16_t, RIGHT_EDGE_NEIGHBORS_PLUS_4[8]) = {
+  NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13,
+  NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13,
+  NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_10
+};
+
+DECLARE_ALIGNED(16, static const int16_t, MIDDLE_EDGE_NEIGHBORS_PLUS_4[8]) = {
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10,
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10,
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10
+};
+
+DECLARE_ALIGNED(16, static const int16_t, MIDDLE_CENTER_NEIGHBORS_PLUS_4[8]) = {
+  NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13,
+  NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13,
+  NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13
+};
+
+DECLARE_ALIGNED(16, static const int16_t, TWO_CORNER_NEIGHBORS_PLUS_4[8]) = {
+  NEIGHBOR_CONSTANT_8,  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10,
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_10,
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_8
+};
+
+DECLARE_ALIGNED(16, static const int16_t, TWO_EDGE_NEIGHBORS_PLUS_4[8]) = {
+  NEIGHBOR_CONSTANT_10, NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13,
+  NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_13,
+  NEIGHBOR_CONSTANT_13, NEIGHBOR_CONSTANT_10
+};
+
+static const int16_t *const LUMA_LEFT_COLUMN_NEIGHBORS[2] = {
+  LEFT_CORNER_NEIGHBORS_PLUS_2, LEFT_EDGE_NEIGHBORS_PLUS_2
+};
+
+static const int16_t *const LUMA_MIDDLE_COLUMN_NEIGHBORS[2] = {
+  MIDDLE_EDGE_NEIGHBORS_PLUS_2, MIDDLE_CENTER_NEIGHBORS_PLUS_2
+};
+
+static const int16_t *const LUMA_RIGHT_COLUMN_NEIGHBORS[2] = {
+  RIGHT_CORNER_NEIGHBORS_PLUS_2, RIGHT_EDGE_NEIGHBORS_PLUS_2
+};
+
+static const int16_t *const CHROMA_NO_SS_LEFT_COLUMN_NEIGHBORS[2] = {
+  LEFT_CORNER_NEIGHBORS_PLUS_1, LEFT_EDGE_NEIGHBORS_PLUS_1
+};
+
+static const int16_t *const CHROMA_NO_SS_MIDDLE_COLUMN_NEIGHBORS[2] = {
+  MIDDLE_EDGE_NEIGHBORS_PLUS_1, MIDDLE_CENTER_NEIGHBORS_PLUS_1
+};
+
+static const int16_t *const CHROMA_NO_SS_RIGHT_COLUMN_NEIGHBORS[2] = {
+  RIGHT_CORNER_NEIGHBORS_PLUS_1, RIGHT_EDGE_NEIGHBORS_PLUS_1
+};
+
+static const int16_t *const CHROMA_SINGLE_SS_LEFT_COLUMN_NEIGHBORS[2] = {
+  LEFT_CORNER_NEIGHBORS_PLUS_2, LEFT_EDGE_NEIGHBORS_PLUS_2
+};
+
+static const int16_t *const CHROMA_SINGLE_SS_MIDDLE_COLUMN_NEIGHBORS[2] = {
+  MIDDLE_EDGE_NEIGHBORS_PLUS_2, MIDDLE_CENTER_NEIGHBORS_PLUS_2
+};
+
+static const int16_t *const CHROMA_SINGLE_SS_RIGHT_COLUMN_NEIGHBORS[2] = {
+  RIGHT_CORNER_NEIGHBORS_PLUS_2, RIGHT_EDGE_NEIGHBORS_PLUS_2
+};
+
+static const int16_t *const CHROMA_SINGLE_SS_SINGLE_COLUMN_NEIGHBORS[2] = {
+  TWO_CORNER_NEIGHBORS_PLUS_2, TWO_EDGE_NEIGHBORS_PLUS_2
+};
+
+static const int16_t *const CHROMA_DOUBLE_SS_LEFT_COLUMN_NEIGHBORS[2] = {
+  LEFT_CORNER_NEIGHBORS_PLUS_4, LEFT_EDGE_NEIGHBORS_PLUS_4
+};
+
+static const int16_t *const CHROMA_DOUBLE_SS_MIDDLE_COLUMN_NEIGHBORS[2] = {
+  MIDDLE_EDGE_NEIGHBORS_PLUS_4, MIDDLE_CENTER_NEIGHBORS_PLUS_4
+};
+
+static const int16_t *const CHROMA_DOUBLE_SS_RIGHT_COLUMN_NEIGHBORS[2] = {
+  RIGHT_CORNER_NEIGHBORS_PLUS_4, RIGHT_EDGE_NEIGHBORS_PLUS_4
+};
+
+static const int16_t *const CHROMA_DOUBLE_SS_SINGLE_COLUMN_NEIGHBORS[2] = {
+  TWO_CORNER_NEIGHBORS_PLUS_4, TWO_EDGE_NEIGHBORS_PLUS_4
+};
+
+#if CONFIG_VP9_HIGHBITDEPTH
+#define HIGHBD_NEIGHBOR_CONSTANT_4 (uint32_t)3221225472U
+#define HIGHBD_NEIGHBOR_CONSTANT_5 (uint32_t)2576980378U
+#define HIGHBD_NEIGHBOR_CONSTANT_6 (uint32_t)2147483648U
+#define HIGHBD_NEIGHBOR_CONSTANT_7 (uint32_t)1840700270U
+#define HIGHBD_NEIGHBOR_CONSTANT_8 (uint32_t)1610612736U
+#define HIGHBD_NEIGHBOR_CONSTANT_9 (uint32_t)1431655766U
+#define HIGHBD_NEIGHBOR_CONSTANT_10 (uint32_t)1288490189U
+#define HIGHBD_NEIGHBOR_CONSTANT_11 (uint32_t)1171354718U
+#define HIGHBD_NEIGHBOR_CONSTANT_13 (uint32_t)991146300U
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_LEFT_CORNER_NEIGHBORS_PLUS_1[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_5, HIGHBD_NEIGHBOR_CONSTANT_7,
+  HIGHBD_NEIGHBOR_CONSTANT_7, HIGHBD_NEIGHBOR_CONSTANT_7
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_RIGHT_CORNER_NEIGHBORS_PLUS_1[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_7, HIGHBD_NEIGHBOR_CONSTANT_7,
+  HIGHBD_NEIGHBOR_CONSTANT_7, HIGHBD_NEIGHBOR_CONSTANT_5
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_LEFT_EDGE_NEIGHBORS_PLUS_1[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_7, HIGHBD_NEIGHBOR_CONSTANT_10,
+  HIGHBD_NEIGHBOR_CONSTANT_10, HIGHBD_NEIGHBOR_CONSTANT_10
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_RIGHT_EDGE_NEIGHBORS_PLUS_1[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_10, HIGHBD_NEIGHBOR_CONSTANT_10,
+  HIGHBD_NEIGHBOR_CONSTANT_10, HIGHBD_NEIGHBOR_CONSTANT_7
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_MIDDLE_EDGE_NEIGHBORS_PLUS_1[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_7, HIGHBD_NEIGHBOR_CONSTANT_7,
+  HIGHBD_NEIGHBOR_CONSTANT_7, HIGHBD_NEIGHBOR_CONSTANT_7
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_MIDDLE_CENTER_NEIGHBORS_PLUS_1[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_10, HIGHBD_NEIGHBOR_CONSTANT_10,
+  HIGHBD_NEIGHBOR_CONSTANT_10, HIGHBD_NEIGHBOR_CONSTANT_10
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_LEFT_CORNER_NEIGHBORS_PLUS_2[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_6, HIGHBD_NEIGHBOR_CONSTANT_8,
+  HIGHBD_NEIGHBOR_CONSTANT_8, HIGHBD_NEIGHBOR_CONSTANT_8
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_RIGHT_CORNER_NEIGHBORS_PLUS_2[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_8, HIGHBD_NEIGHBOR_CONSTANT_8,
+  HIGHBD_NEIGHBOR_CONSTANT_8, HIGHBD_NEIGHBOR_CONSTANT_6
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_LEFT_EDGE_NEIGHBORS_PLUS_2[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_8, HIGHBD_NEIGHBOR_CONSTANT_11,
+  HIGHBD_NEIGHBOR_CONSTANT_11, HIGHBD_NEIGHBOR_CONSTANT_11
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_RIGHT_EDGE_NEIGHBORS_PLUS_2[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_11, HIGHBD_NEIGHBOR_CONSTANT_11,
+  HIGHBD_NEIGHBOR_CONSTANT_11, HIGHBD_NEIGHBOR_CONSTANT_8
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_MIDDLE_EDGE_NEIGHBORS_PLUS_2[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_8, HIGHBD_NEIGHBOR_CONSTANT_8,
+  HIGHBD_NEIGHBOR_CONSTANT_8, HIGHBD_NEIGHBOR_CONSTANT_8
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_MIDDLE_CENTER_NEIGHBORS_PLUS_2[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_11, HIGHBD_NEIGHBOR_CONSTANT_11,
+  HIGHBD_NEIGHBOR_CONSTANT_11, HIGHBD_NEIGHBOR_CONSTANT_11
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_LEFT_CORNER_NEIGHBORS_PLUS_4[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_8, HIGHBD_NEIGHBOR_CONSTANT_10,
+  HIGHBD_NEIGHBOR_CONSTANT_10, HIGHBD_NEIGHBOR_CONSTANT_10
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_RIGHT_CORNER_NEIGHBORS_PLUS_4[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_10, HIGHBD_NEIGHBOR_CONSTANT_10,
+  HIGHBD_NEIGHBOR_CONSTANT_10, HIGHBD_NEIGHBOR_CONSTANT_8
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_LEFT_EDGE_NEIGHBORS_PLUS_4[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_10, HIGHBD_NEIGHBOR_CONSTANT_13,
+  HIGHBD_NEIGHBOR_CONSTANT_13, HIGHBD_NEIGHBOR_CONSTANT_13
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_RIGHT_EDGE_NEIGHBORS_PLUS_4[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_13, HIGHBD_NEIGHBOR_CONSTANT_13,
+  HIGHBD_NEIGHBOR_CONSTANT_13, HIGHBD_NEIGHBOR_CONSTANT_10
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_MIDDLE_EDGE_NEIGHBORS_PLUS_4[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_10, HIGHBD_NEIGHBOR_CONSTANT_10,
+  HIGHBD_NEIGHBOR_CONSTANT_10, HIGHBD_NEIGHBOR_CONSTANT_10
+};
+
+DECLARE_ALIGNED(16, static const uint32_t,
+                HIGHBD_MIDDLE_CENTER_NEIGHBORS_PLUS_4[4]) = {
+  HIGHBD_NEIGHBOR_CONSTANT_13, HIGHBD_NEIGHBOR_CONSTANT_13,
+  HIGHBD_NEIGHBOR_CONSTANT_13, HIGHBD_NEIGHBOR_CONSTANT_13
+};
+
+static const uint32_t *const HIGHBD_LUMA_LEFT_COLUMN_NEIGHBORS[2] = {
+  HIGHBD_LEFT_CORNER_NEIGHBORS_PLUS_2, HIGHBD_LEFT_EDGE_NEIGHBORS_PLUS_2
+};
+
+static const uint32_t *const HIGHBD_LUMA_MIDDLE_COLUMN_NEIGHBORS[2] = {
+  HIGHBD_MIDDLE_EDGE_NEIGHBORS_PLUS_2, HIGHBD_MIDDLE_CENTER_NEIGHBORS_PLUS_2
+};
+
+static const uint32_t *const HIGHBD_LUMA_RIGHT_COLUMN_NEIGHBORS[2] = {
+  HIGHBD_RIGHT_CORNER_NEIGHBORS_PLUS_2, HIGHBD_RIGHT_EDGE_NEIGHBORS_PLUS_2
+};
+
+static const uint32_t *const HIGHBD_CHROMA_NO_SS_LEFT_COLUMN_NEIGHBORS[2] = {
+  HIGHBD_LEFT_CORNER_NEIGHBORS_PLUS_1, HIGHBD_LEFT_EDGE_NEIGHBORS_PLUS_1
+};
+
+static const uint32_t *const HIGHBD_CHROMA_NO_SS_MIDDLE_COLUMN_NEIGHBORS[2] = {
+  HIGHBD_MIDDLE_EDGE_NEIGHBORS_PLUS_1, HIGHBD_MIDDLE_CENTER_NEIGHBORS_PLUS_1
+};
+
+static const uint32_t *const HIGHBD_CHROMA_NO_SS_RIGHT_COLUMN_NEIGHBORS[2] = {
+  HIGHBD_RIGHT_CORNER_NEIGHBORS_PLUS_1, HIGHBD_RIGHT_EDGE_NEIGHBORS_PLUS_1
+};
+
+static const uint32_t
+    *const HIGHBD_CHROMA_SINGLE_SS_LEFT_COLUMN_NEIGHBORS[2] = {
+      HIGHBD_LEFT_CORNER_NEIGHBORS_PLUS_2, HIGHBD_LEFT_EDGE_NEIGHBORS_PLUS_2
+    };
+
+static const uint32_t
+    *const HIGHBD_CHROMA_SINGLE_SS_MIDDLE_COLUMN_NEIGHBORS[2] = {
+      HIGHBD_MIDDLE_EDGE_NEIGHBORS_PLUS_2, HIGHBD_MIDDLE_CENTER_NEIGHBORS_PLUS_2
+    };
+
+static const uint32_t
+    *const HIGHBD_CHROMA_SINGLE_SS_RIGHT_COLUMN_NEIGHBORS[2] = {
+      HIGHBD_RIGHT_CORNER_NEIGHBORS_PLUS_2, HIGHBD_RIGHT_EDGE_NEIGHBORS_PLUS_2
+    };
+
+static const uint32_t
+    *const HIGHBD_CHROMA_DOUBLE_SS_LEFT_COLUMN_NEIGHBORS[2] = {
+      HIGHBD_LEFT_CORNER_NEIGHBORS_PLUS_4, HIGHBD_LEFT_EDGE_NEIGHBORS_PLUS_4
+    };
+
+static const uint32_t
+    *const HIGHBD_CHROMA_DOUBLE_SS_MIDDLE_COLUMN_NEIGHBORS[2] = {
+      HIGHBD_MIDDLE_EDGE_NEIGHBORS_PLUS_4, HIGHBD_MIDDLE_CENTER_NEIGHBORS_PLUS_4
+    };
+
+static const uint32_t
+    *const HIGHBD_CHROMA_DOUBLE_SS_RIGHT_COLUMN_NEIGHBORS[2] = {
+      HIGHBD_RIGHT_CORNER_NEIGHBORS_PLUS_4, HIGHBD_RIGHT_EDGE_NEIGHBORS_PLUS_4
+    };
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
+#define DIST_STRIDE ((BW) + 2)
+
+#endif  // VPX_VP9_ENCODER_X86_TEMPORAL_FILTER_CONSTANTS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/temporal_filter_sse4.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/temporal_filter_sse4.c
index 460dab6..437f49f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/temporal_filter_sse4.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/temporal_filter_sse4.c
@@ -14,96 +14,58 @@
 #include "./vp9_rtcd.h"
 #include "./vpx_config.h"
 #include "vpx/vpx_integer.h"
+#include "vp9/encoder/vp9_encoder.h"
+#include "vp9/encoder/vp9_temporal_filter.h"
+#include "vp9/encoder/x86/temporal_filter_constants.h"
 
-// Division using multiplication and shifting. The C implementation does:
-// modifier *= 3;
-// modifier /= index;
-// where 'modifier' is a set of summed values and 'index' is the number of
-// summed values. 'index' may be 4, 6, or 9, representing a block of 9 values
-// which may be bound by the edges of the block being filtered.
-//
-// This equation works out to (m * 3) / i which reduces to:
-// m * 3/4
-// m * 1/2
-// m * 1/3
-//
-// By pairing the multiply with a down shift by 16 (_mm_mulhi_epu16):
-// m * C / 65536
-// we can create a C to replicate the division.
-//
-// m * 49152 / 65536 = m * 3/4
-// m * 32758 / 65536 = m * 1/2
-// m * 21846 / 65536 = m * 0.3333
-//
-// These are loaded using an instruction expecting int16_t values but are used
-// with _mm_mulhi_epu16(), which treats them as unsigned.
-#define NEIGHBOR_CONSTANT_4 (int16_t)49152
-#define NEIGHBOR_CONSTANT_6 (int16_t)32768
-#define NEIGHBOR_CONSTANT_9 (int16_t)21846
+// Read in 8 pixels from a and b as 8-bit unsigned integers, compute the
+// difference squared, and store as unsigned 16-bit integer to dst.
+static INLINE void store_dist_8(const uint8_t *a, const uint8_t *b,
+                                uint16_t *dst) {
+  const __m128i a_reg = _mm_loadl_epi64((const __m128i *)a);
+  const __m128i b_reg = _mm_loadl_epi64((const __m128i *)b);
 
-// Load values from 'a' and 'b'. Compute the difference squared and sum
-// neighboring values such that:
-// sum[1] = (a[0]-b[0])^2 + (a[1]-b[1])^2 + (a[2]-b[2])^2
-// Values to the left and right of the row are set to 0.
-// The values are returned in sum_0 and sum_1 as *unsigned* 16 bit values.
-static void sum_8(const uint8_t *a, const uint8_t *b, __m128i *sum) {
-  const __m128i a_u8 = _mm_loadl_epi64((const __m128i *)a);
-  const __m128i b_u8 = _mm_loadl_epi64((const __m128i *)b);
+  const __m128i a_first = _mm_cvtepu8_epi16(a_reg);
+  const __m128i b_first = _mm_cvtepu8_epi16(b_reg);
 
-  const __m128i a_u16 = _mm_cvtepu8_epi16(a_u8);
-  const __m128i b_u16 = _mm_cvtepu8_epi16(b_u8);
+  __m128i dist_first;
 
-  const __m128i diff_s16 = _mm_sub_epi16(a_u16, b_u16);
-  const __m128i diff_sq_u16 = _mm_mullo_epi16(diff_s16, diff_s16);
+  dist_first = _mm_sub_epi16(a_first, b_first);
+  dist_first = _mm_mullo_epi16(dist_first, dist_first);
 
-  // Shift all the values one place to the left/right so we can efficiently sum
-  // diff_sq_u16[i - 1] + diff_sq_u16[i] + diff_sq_u16[i + 1].
-  const __m128i shift_left = _mm_slli_si128(diff_sq_u16, 2);
-  const __m128i shift_right = _mm_srli_si128(diff_sq_u16, 2);
-
-  // It becomes necessary to treat the values as unsigned at this point. The
-  // 255^2 fits in uint16_t but not int16_t. Use saturating adds from this point
-  // forward since the filter is only applied to smooth small pixel changes.
-  // Once the value has saturated to uint16_t it is well outside the useful
-  // range.
-  __m128i sum_u16 = _mm_adds_epu16(diff_sq_u16, shift_left);
-  sum_u16 = _mm_adds_epu16(sum_u16, shift_right);
-
-  *sum = sum_u16;
+  _mm_storeu_si128((__m128i *)dst, dist_first);
 }
 
-static void sum_16(const uint8_t *a, const uint8_t *b, __m128i *sum_0,
-                   __m128i *sum_1) {
+static INLINE void store_dist_16(const uint8_t *a, const uint8_t *b,
+                                 uint16_t *dst) {
   const __m128i zero = _mm_setzero_si128();
-  const __m128i a_u8 = _mm_loadu_si128((const __m128i *)a);
-  const __m128i b_u8 = _mm_loadu_si128((const __m128i *)b);
+  const __m128i a_reg = _mm_loadu_si128((const __m128i *)a);
+  const __m128i b_reg = _mm_loadu_si128((const __m128i *)b);
 
-  const __m128i a_0_u16 = _mm_cvtepu8_epi16(a_u8);
-  const __m128i a_1_u16 = _mm_unpackhi_epi8(a_u8, zero);
-  const __m128i b_0_u16 = _mm_cvtepu8_epi16(b_u8);
-  const __m128i b_1_u16 = _mm_unpackhi_epi8(b_u8, zero);
+  const __m128i a_first = _mm_cvtepu8_epi16(a_reg);
+  const __m128i a_second = _mm_unpackhi_epi8(a_reg, zero);
+  const __m128i b_first = _mm_cvtepu8_epi16(b_reg);
+  const __m128i b_second = _mm_unpackhi_epi8(b_reg, zero);
 
-  const __m128i diff_0_s16 = _mm_sub_epi16(a_0_u16, b_0_u16);
-  const __m128i diff_1_s16 = _mm_sub_epi16(a_1_u16, b_1_u16);
-  const __m128i diff_sq_0_u16 = _mm_mullo_epi16(diff_0_s16, diff_0_s16);
-  const __m128i diff_sq_1_u16 = _mm_mullo_epi16(diff_1_s16, diff_1_s16);
+  __m128i dist_first, dist_second;
 
-  __m128i shift_left = _mm_slli_si128(diff_sq_0_u16, 2);
-  // Use _mm_alignr_epi8() to "shift in" diff_sq_u16[8].
-  __m128i shift_right = _mm_alignr_epi8(diff_sq_1_u16, diff_sq_0_u16, 2);
+  dist_first = _mm_sub_epi16(a_first, b_first);
+  dist_second = _mm_sub_epi16(a_second, b_second);
+  dist_first = _mm_mullo_epi16(dist_first, dist_first);
+  dist_second = _mm_mullo_epi16(dist_second, dist_second);
 
-  __m128i sum_u16 = _mm_adds_epu16(diff_sq_0_u16, shift_left);
-  sum_u16 = _mm_adds_epu16(sum_u16, shift_right);
+  _mm_storeu_si128((__m128i *)dst, dist_first);
+  _mm_storeu_si128((__m128i *)(dst + 8), dist_second);
+}
 
-  *sum_0 = sum_u16;
+static INLINE void read_dist_8(const uint16_t *dist, __m128i *dist_reg) {
+  *dist_reg = _mm_loadu_si128((const __m128i *)dist);
+}
 
-  shift_left = _mm_alignr_epi8(diff_sq_1_u16, diff_sq_0_u16, 14);
-  shift_right = _mm_srli_si128(diff_sq_1_u16, 2);
-
-  sum_u16 = _mm_adds_epu16(diff_sq_1_u16, shift_left);
-  sum_u16 = _mm_adds_epu16(sum_u16, shift_right);
-
-  *sum_1 = sum_u16;
+static INLINE void read_dist_16(const uint16_t *dist, __m128i *reg_first,
+                                __m128i *reg_second) {
+  read_dist_8(dist, reg_first);
+  read_dist_8(dist + 8, reg_second);
 }
 
 // Average the value based on the number of values summed (9 for pixels away
@@ -111,17 +73,17 @@
 //
 // Add in the rounding factor and shift, clamp to 16, invert and shift. Multiply
 // by weight.
-static __m128i average_8(__m128i sum, const __m128i mul_constants,
-                         const int strength, const int rounding,
-                         const int weight) {
+static INLINE __m128i average_8(__m128i sum, const __m128i *mul_constants,
+                                const int strength, const int rounding,
+                                const __m128i *weight) {
   // _mm_srl_epi16 uses the lower 64 bit value for the shift.
   const __m128i strength_u128 = _mm_set_epi32(0, 0, 0, strength);
   const __m128i rounding_u16 = _mm_set1_epi16(rounding);
-  const __m128i weight_u16 = _mm_set1_epi16(weight);
+  const __m128i weight_u16 = *weight;
   const __m128i sixteen = _mm_set1_epi16(16);
 
   // modifier * 3 / index;
-  sum = _mm_mulhi_epu16(sum, mul_constants);
+  sum = _mm_mulhi_epu16(sum, *mul_constants);
 
   sum = _mm_adds_epu16(sum, rounding_u16);
   sum = _mm_srl_epi16(sum, strength_u128);
@@ -136,34 +98,6 @@
   return _mm_mullo_epi16(sum, weight_u16);
 }
 
-static void average_16(__m128i *sum_0_u16, __m128i *sum_1_u16,
-                       const __m128i mul_constants_0,
-                       const __m128i mul_constants_1, const int strength,
-                       const int rounding, const int weight) {
-  const __m128i strength_u128 = _mm_set_epi32(0, 0, 0, strength);
-  const __m128i rounding_u16 = _mm_set1_epi16(rounding);
-  const __m128i weight_u16 = _mm_set1_epi16(weight);
-  const __m128i sixteen = _mm_set1_epi16(16);
-  __m128i input_0, input_1;
-
-  input_0 = _mm_mulhi_epu16(*sum_0_u16, mul_constants_0);
-  input_0 = _mm_adds_epu16(input_0, rounding_u16);
-
-  input_1 = _mm_mulhi_epu16(*sum_1_u16, mul_constants_1);
-  input_1 = _mm_adds_epu16(input_1, rounding_u16);
-
-  input_0 = _mm_srl_epi16(input_0, strength_u128);
-  input_1 = _mm_srl_epi16(input_1, strength_u128);
-
-  input_0 = _mm_min_epu16(input_0, sixteen);
-  input_1 = _mm_min_epu16(input_1, sixteen);
-  input_0 = _mm_sub_epi16(sixteen, input_0);
-  input_1 = _mm_sub_epi16(sixteen, input_1);
-
-  *sum_0_u16 = _mm_mullo_epi16(input_0, weight_u16);
-  *sum_1_u16 = _mm_mullo_epi16(input_1, weight_u16);
-}
-
 // Add 'sum_u16' to 'count'. Multiply by 'pred' and add to 'accumulator.'
 static void accumulate_and_store_8(const __m128i sum_u16, const uint8_t *pred,
                                    uint16_t *count, uint32_t *accumulator) {
@@ -192,10 +126,10 @@
   _mm_storeu_si128((__m128i *)(accumulator + 4), accum_1_u32);
 }
 
-static void accumulate_and_store_16(const __m128i sum_0_u16,
-                                    const __m128i sum_1_u16,
-                                    const uint8_t *pred, uint16_t *count,
-                                    uint32_t *accumulator) {
+static INLINE void accumulate_and_store_16(const __m128i sum_0_u16,
+                                           const __m128i sum_1_u16,
+                                           const uint8_t *pred, uint16_t *count,
+                                           uint32_t *accumulator) {
   const __m128i pred_u8 = _mm_loadu_si128((const __m128i *)pred);
   const __m128i zero = _mm_setzero_si128();
   __m128i count_0_u16 = _mm_loadu_si128((const __m128i *)count),
@@ -235,142 +169,768 @@
   _mm_storeu_si128((__m128i *)(accumulator + 12), accum_3_u32);
 }
 
-void vp9_temporal_filter_apply_sse4_1(const uint8_t *a, unsigned int stride,
-                                      const uint8_t *b, unsigned int width,
-                                      unsigned int height, int strength,
-                                      int weight, uint32_t *accumulator,
-                                      uint16_t *count) {
+// Read in 8 pixels from y_dist. For each index i, compute y_dist[i-1] +
+// y_dist[i] + y_dist[i+1] and store in sum as 16-bit unsigned int.
+static INLINE void get_sum_8(const uint16_t *y_dist, __m128i *sum) {
+  __m128i dist_reg, dist_left, dist_right;
+
+  dist_reg = _mm_loadu_si128((const __m128i *)y_dist);
+  dist_left = _mm_loadu_si128((const __m128i *)(y_dist - 1));
+  dist_right = _mm_loadu_si128((const __m128i *)(y_dist + 1));
+
+  *sum = _mm_adds_epu16(dist_reg, dist_left);
+  *sum = _mm_adds_epu16(*sum, dist_right);
+}
+
+// Read in 16 pixels from y_dist. For each index i, compute y_dist[i-1] +
+// y_dist[i] + y_dist[i+1]. Store the result for first 8 pixels in sum_first and
+// the rest in sum_second.
+static INLINE void get_sum_16(const uint16_t *y_dist, __m128i *sum_first,
+                              __m128i *sum_second) {
+  get_sum_8(y_dist, sum_first);
+  get_sum_8(y_dist + 8, sum_second);
+}
+
+// Read in a row of chroma values that corresponds to a row of 16 luma values.
+static INLINE void read_chroma_dist_row_16(int ss_x, const uint16_t *u_dist,
+                                           const uint16_t *v_dist,
+                                           __m128i *u_first, __m128i *u_second,
+                                           __m128i *v_first,
+                                           __m128i *v_second) {
+  if (!ss_x) {
+    // If there is no chroma subsampling in the horizontal direction, then we
+    // need to load 16 entries from chroma.
+    read_dist_16(u_dist, u_first, u_second);
+    read_dist_16(v_dist, v_first, v_second);
+  } else {  // ss_x == 1
+    // Otherwise, we only need to load 8 entries
+    __m128i u_reg, v_reg;
+
+    read_dist_8(u_dist, &u_reg);
+
+    *u_first = _mm_unpacklo_epi16(u_reg, u_reg);
+    *u_second = _mm_unpackhi_epi16(u_reg, u_reg);
+
+    read_dist_8(v_dist, &v_reg);
+
+    *v_first = _mm_unpacklo_epi16(v_reg, v_reg);
+    *v_second = _mm_unpackhi_epi16(v_reg, v_reg);
+  }
+}
+
+// Horizontal add unsigned 16-bit ints in src and store them as signed 32-bit
+// int in dst.
+static INLINE void hadd_epu16(__m128i *src, __m128i *dst) {
+  const __m128i zero = _mm_setzero_si128();
+  const __m128i shift_right = _mm_srli_si128(*src, 2);
+
+  const __m128i odd = _mm_blend_epi16(shift_right, zero, 170);
+  const __m128i even = _mm_blend_epi16(*src, zero, 170);
+
+  *dst = _mm_add_epi32(even, odd);
+}
+
+// Add a row of luma distortion to 8 corresponding chroma mods.
+static INLINE void add_luma_dist_to_8_chroma_mod(const uint16_t *y_dist,
+                                                 int ss_x, int ss_y,
+                                                 __m128i *u_mod,
+                                                 __m128i *v_mod) {
+  __m128i y_reg;
+  if (!ss_x) {
+    read_dist_8(y_dist, &y_reg);
+    if (ss_y == 1) {
+      __m128i y_tmp;
+      read_dist_8(y_dist + DIST_STRIDE, &y_tmp);
+
+      y_reg = _mm_adds_epu16(y_reg, y_tmp);
+    }
+  } else {
+    __m128i y_first, y_second;
+    read_dist_16(y_dist, &y_first, &y_second);
+    if (ss_y == 1) {
+      __m128i y_tmp_0, y_tmp_1;
+      read_dist_16(y_dist + DIST_STRIDE, &y_tmp_0, &y_tmp_1);
+
+      y_first = _mm_adds_epu16(y_first, y_tmp_0);
+      y_second = _mm_adds_epu16(y_second, y_tmp_1);
+    }
+
+    hadd_epu16(&y_first, &y_first);
+    hadd_epu16(&y_second, &y_second);
+
+    y_reg = _mm_packus_epi32(y_first, y_second);
+  }
+
+  *u_mod = _mm_adds_epu16(*u_mod, y_reg);
+  *v_mod = _mm_adds_epu16(*v_mod, y_reg);
+}
+
+// Apply temporal filter to the luma components. This performs temporal
+// filtering on a luma block of 16 X block_height. Use blk_fw as an array of
+// size 4 for the weights for each of the 4 subblocks if blk_fw is not NULL,
+// else use top_weight for top half, and bottom weight for bottom half.
+static void vp9_apply_temporal_filter_luma_16(
+    const uint8_t *y_src, int y_src_stride, const uint8_t *y_pre,
+    int y_pre_stride, const uint8_t *u_src, const uint8_t *v_src,
+    int uv_src_stride, const uint8_t *u_pre, const uint8_t *v_pre,
+    int uv_pre_stride, unsigned int block_width, unsigned int block_height,
+    int ss_x, int ss_y, int strength, int use_whole_blk, uint32_t *y_accum,
+    uint16_t *y_count, const uint16_t *y_dist, const uint16_t *u_dist,
+    const uint16_t *v_dist, const int16_t *const *neighbors_first,
+    const int16_t *const *neighbors_second, int top_weight, int bottom_weight,
+    const int *blk_fw) {
+  const int rounding = (1 << strength) >> 1;
+  __m128i weight_first, weight_second;
+
+  __m128i mul_first, mul_second;
+
+  __m128i sum_row_1_first, sum_row_1_second;
+  __m128i sum_row_2_first, sum_row_2_second;
+  __m128i sum_row_3_first, sum_row_3_second;
+
+  __m128i u_first, u_second;
+  __m128i v_first, v_second;
+
+  __m128i sum_row_first;
+  __m128i sum_row_second;
+
+  // Loop variables
   unsigned int h;
-  const int rounding = strength > 0 ? 1 << (strength - 1) : 0;
 
   assert(strength >= 0);
   assert(strength <= 6);
 
-  assert(weight >= 0);
-  assert(weight <= 2);
+  assert(block_width == 16);
 
-  assert(width == 8 || width == 16);
+  (void)block_width;
 
-  if (width == 8) {
-    __m128i sum_row_a, sum_row_b, sum_row_c;
-    __m128i mul_constants = _mm_setr_epi16(
-        NEIGHBOR_CONSTANT_4, NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6,
-        NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6,
-        NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_4);
-
-    sum_8(a, b, &sum_row_a);
-    sum_8(a + stride, b + width, &sum_row_b);
-    sum_row_c = _mm_adds_epu16(sum_row_a, sum_row_b);
-    sum_row_c = average_8(sum_row_c, mul_constants, strength, rounding, weight);
-    accumulate_and_store_8(sum_row_c, b, count, accumulator);
-
-    a += stride + stride;
-    b += width;
-    count += width;
-    accumulator += width;
-
-    mul_constants = _mm_setr_epi16(NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_9,
-                                   NEIGHBOR_CONSTANT_9, NEIGHBOR_CONSTANT_9,
-                                   NEIGHBOR_CONSTANT_9, NEIGHBOR_CONSTANT_9,
-                                   NEIGHBOR_CONSTANT_9, NEIGHBOR_CONSTANT_6);
-
-    for (h = 0; h < height - 2; ++h) {
-      sum_8(a, b + width, &sum_row_c);
-      sum_row_a = _mm_adds_epu16(sum_row_a, sum_row_b);
-      sum_row_a = _mm_adds_epu16(sum_row_a, sum_row_c);
-      sum_row_a =
-          average_8(sum_row_a, mul_constants, strength, rounding, weight);
-      accumulate_and_store_8(sum_row_a, b, count, accumulator);
-
-      a += stride;
-      b += width;
-      count += width;
-      accumulator += width;
-
-      sum_row_a = sum_row_b;
-      sum_row_b = sum_row_c;
-    }
-
-    mul_constants = _mm_setr_epi16(NEIGHBOR_CONSTANT_4, NEIGHBOR_CONSTANT_6,
-                                   NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6,
-                                   NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6,
-                                   NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_4);
-    sum_row_a = _mm_adds_epu16(sum_row_a, sum_row_b);
-    sum_row_a = average_8(sum_row_a, mul_constants, strength, rounding, weight);
-    accumulate_and_store_8(sum_row_a, b, count, accumulator);
-
-  } else {  // width == 16
-    __m128i sum_row_a_0, sum_row_a_1;
-    __m128i sum_row_b_0, sum_row_b_1;
-    __m128i sum_row_c_0, sum_row_c_1;
-    __m128i mul_constants_0 = _mm_setr_epi16(
-                NEIGHBOR_CONSTANT_4, NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6,
-                NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6,
-                NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6),
-            mul_constants_1 = _mm_setr_epi16(
-                NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6,
-                NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6,
-                NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_4);
-
-    sum_16(a, b, &sum_row_a_0, &sum_row_a_1);
-    sum_16(a + stride, b + width, &sum_row_b_0, &sum_row_b_1);
-
-    sum_row_c_0 = _mm_adds_epu16(sum_row_a_0, sum_row_b_0);
-    sum_row_c_1 = _mm_adds_epu16(sum_row_a_1, sum_row_b_1);
-
-    average_16(&sum_row_c_0, &sum_row_c_1, mul_constants_0, mul_constants_1,
-               strength, rounding, weight);
-    accumulate_and_store_16(sum_row_c_0, sum_row_c_1, b, count, accumulator);
-
-    a += stride + stride;
-    b += width;
-    count += width;
-    accumulator += width;
-
-    mul_constants_0 = _mm_setr_epi16(NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_9,
-                                     NEIGHBOR_CONSTANT_9, NEIGHBOR_CONSTANT_9,
-                                     NEIGHBOR_CONSTANT_9, NEIGHBOR_CONSTANT_9,
-                                     NEIGHBOR_CONSTANT_9, NEIGHBOR_CONSTANT_9);
-    mul_constants_1 = _mm_setr_epi16(NEIGHBOR_CONSTANT_9, NEIGHBOR_CONSTANT_9,
-                                     NEIGHBOR_CONSTANT_9, NEIGHBOR_CONSTANT_9,
-                                     NEIGHBOR_CONSTANT_9, NEIGHBOR_CONSTANT_9,
-                                     NEIGHBOR_CONSTANT_9, NEIGHBOR_CONSTANT_6);
-    for (h = 0; h < height - 2; ++h) {
-      sum_16(a, b + width, &sum_row_c_0, &sum_row_c_1);
-
-      sum_row_a_0 = _mm_adds_epu16(sum_row_a_0, sum_row_b_0);
-      sum_row_a_0 = _mm_adds_epu16(sum_row_a_0, sum_row_c_0);
-      sum_row_a_1 = _mm_adds_epu16(sum_row_a_1, sum_row_b_1);
-      sum_row_a_1 = _mm_adds_epu16(sum_row_a_1, sum_row_c_1);
-
-      average_16(&sum_row_a_0, &sum_row_a_1, mul_constants_0, mul_constants_1,
-                 strength, rounding, weight);
-      accumulate_and_store_16(sum_row_a_0, sum_row_a_1, b, count, accumulator);
-
-      a += stride;
-      b += width;
-      count += width;
-      accumulator += width;
-
-      sum_row_a_0 = sum_row_b_0;
-      sum_row_a_1 = sum_row_b_1;
-      sum_row_b_0 = sum_row_c_0;
-      sum_row_b_1 = sum_row_c_1;
-    }
-
-    mul_constants_0 = _mm_setr_epi16(NEIGHBOR_CONSTANT_4, NEIGHBOR_CONSTANT_6,
-                                     NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6,
-                                     NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6,
-                                     NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6);
-    mul_constants_1 = _mm_setr_epi16(NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6,
-                                     NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6,
-                                     NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_6,
-                                     NEIGHBOR_CONSTANT_6, NEIGHBOR_CONSTANT_4);
-    sum_row_c_0 = _mm_adds_epu16(sum_row_a_0, sum_row_b_0);
-    sum_row_c_1 = _mm_adds_epu16(sum_row_a_1, sum_row_b_1);
-
-    average_16(&sum_row_c_0, &sum_row_c_1, mul_constants_0, mul_constants_1,
-               strength, rounding, weight);
-    accumulate_and_store_16(sum_row_c_0, sum_row_c_1, b, count, accumulator);
+  // Initialize the weights
+  if (blk_fw) {
+    weight_first = _mm_set1_epi16(blk_fw[0]);
+    weight_second = _mm_set1_epi16(blk_fw[1]);
+  } else {
+    weight_first = _mm_set1_epi16(top_weight);
+    weight_second = weight_first;
   }
+
+  // First row
+  mul_first = _mm_load_si128((const __m128i *)neighbors_first[0]);
+  mul_second = _mm_load_si128((const __m128i *)neighbors_second[0]);
+
+  // Add luma values
+  get_sum_16(y_dist, &sum_row_2_first, &sum_row_2_second);
+  get_sum_16(y_dist + DIST_STRIDE, &sum_row_3_first, &sum_row_3_second);
+
+  sum_row_first = _mm_adds_epu16(sum_row_2_first, sum_row_3_first);
+  sum_row_second = _mm_adds_epu16(sum_row_2_second, sum_row_3_second);
+
+  // Add chroma values
+  read_chroma_dist_row_16(ss_x, u_dist, v_dist, &u_first, &u_second, &v_first,
+                          &v_second);
+
+  sum_row_first = _mm_adds_epu16(sum_row_first, u_first);
+  sum_row_second = _mm_adds_epu16(sum_row_second, u_second);
+
+  sum_row_first = _mm_adds_epu16(sum_row_first, v_first);
+  sum_row_second = _mm_adds_epu16(sum_row_second, v_second);
+
+  // Get modifier and store result
+  sum_row_first =
+      average_8(sum_row_first, &mul_first, strength, rounding, &weight_first);
+  sum_row_second = average_8(sum_row_second, &mul_second, strength, rounding,
+                             &weight_second);
+  accumulate_and_store_16(sum_row_first, sum_row_second, y_pre, y_count,
+                          y_accum);
+
+  y_src += y_src_stride;
+  y_pre += y_pre_stride;
+  y_count += y_pre_stride;
+  y_accum += y_pre_stride;
+  y_dist += DIST_STRIDE;
+
+  u_src += uv_src_stride;
+  u_pre += uv_pre_stride;
+  u_dist += DIST_STRIDE;
+  v_src += uv_src_stride;
+  v_pre += uv_pre_stride;
+  v_dist += DIST_STRIDE;
+
+  // Then all the rows except the last one
+  mul_first = _mm_load_si128((const __m128i *)neighbors_first[1]);
+  mul_second = _mm_load_si128((const __m128i *)neighbors_second[1]);
+
+  for (h = 1; h < block_height - 1; ++h) {
+    // Move the weight to the bottom half
+    if (!use_whole_blk && h == block_height / 2) {
+      if (blk_fw) {
+        weight_first = _mm_set1_epi16(blk_fw[2]);
+        weight_second = _mm_set1_epi16(blk_fw[3]);
+      } else {
+        weight_first = _mm_set1_epi16(bottom_weight);
+        weight_second = weight_first;
+      }
+    }
+    // Shift the rows up
+    sum_row_1_first = sum_row_2_first;
+    sum_row_1_second = sum_row_2_second;
+    sum_row_2_first = sum_row_3_first;
+    sum_row_2_second = sum_row_3_second;
+
+    // Add luma values to the modifier
+    sum_row_first = _mm_adds_epu16(sum_row_1_first, sum_row_2_first);
+    sum_row_second = _mm_adds_epu16(sum_row_1_second, sum_row_2_second);
+
+    get_sum_16(y_dist + DIST_STRIDE, &sum_row_3_first, &sum_row_3_second);
+
+    sum_row_first = _mm_adds_epu16(sum_row_first, sum_row_3_first);
+    sum_row_second = _mm_adds_epu16(sum_row_second, sum_row_3_second);
+
+    // Add chroma values to the modifier
+    if (ss_y == 0 || h % 2 == 0) {
+      // Only calculate the new chroma distortion if we are at a pixel that
+      // corresponds to a new chroma row
+      read_chroma_dist_row_16(ss_x, u_dist, v_dist, &u_first, &u_second,
+                              &v_first, &v_second);
+
+      u_src += uv_src_stride;
+      u_pre += uv_pre_stride;
+      u_dist += DIST_STRIDE;
+      v_src += uv_src_stride;
+      v_pre += uv_pre_stride;
+      v_dist += DIST_STRIDE;
+    }
+
+    sum_row_first = _mm_adds_epu16(sum_row_first, u_first);
+    sum_row_second = _mm_adds_epu16(sum_row_second, u_second);
+    sum_row_first = _mm_adds_epu16(sum_row_first, v_first);
+    sum_row_second = _mm_adds_epu16(sum_row_second, v_second);
+
+    // Get modifier and store result
+    sum_row_first =
+        average_8(sum_row_first, &mul_first, strength, rounding, &weight_first);
+    sum_row_second = average_8(sum_row_second, &mul_second, strength, rounding,
+                               &weight_second);
+    accumulate_and_store_16(sum_row_first, sum_row_second, y_pre, y_count,
+                            y_accum);
+
+    y_src += y_src_stride;
+    y_pre += y_pre_stride;
+    y_count += y_pre_stride;
+    y_accum += y_pre_stride;
+    y_dist += DIST_STRIDE;
+  }
+
+  // The last row
+  mul_first = _mm_load_si128((const __m128i *)neighbors_first[0]);
+  mul_second = _mm_load_si128((const __m128i *)neighbors_second[0]);
+
+  // Shift the rows up
+  sum_row_1_first = sum_row_2_first;
+  sum_row_1_second = sum_row_2_second;
+  sum_row_2_first = sum_row_3_first;
+  sum_row_2_second = sum_row_3_second;
+
+  // Add luma values to the modifier
+  sum_row_first = _mm_adds_epu16(sum_row_1_first, sum_row_2_first);
+  sum_row_second = _mm_adds_epu16(sum_row_1_second, sum_row_2_second);
+
+  // Add chroma values to the modifier
+  if (ss_y == 0) {
+    // Only calculate the new chroma distortion if we are at a pixel that
+    // corresponds to a new chroma row
+    read_chroma_dist_row_16(ss_x, u_dist, v_dist, &u_first, &u_second, &v_first,
+                            &v_second);
+  }
+
+  sum_row_first = _mm_adds_epu16(sum_row_first, u_first);
+  sum_row_second = _mm_adds_epu16(sum_row_second, u_second);
+  sum_row_first = _mm_adds_epu16(sum_row_first, v_first);
+  sum_row_second = _mm_adds_epu16(sum_row_second, v_second);
+
+  // Get modifier and store result
+  sum_row_first =
+      average_8(sum_row_first, &mul_first, strength, rounding, &weight_first);
+  sum_row_second = average_8(sum_row_second, &mul_second, strength, rounding,
+                             &weight_second);
+  accumulate_and_store_16(sum_row_first, sum_row_second, y_pre, y_count,
+                          y_accum);
+}
+
+// Perform temporal filter for the luma component.
+static void vp9_apply_temporal_filter_luma(
+    const uint8_t *y_src, int y_src_stride, const uint8_t *y_pre,
+    int y_pre_stride, const uint8_t *u_src, const uint8_t *v_src,
+    int uv_src_stride, const uint8_t *u_pre, const uint8_t *v_pre,
+    int uv_pre_stride, unsigned int block_width, unsigned int block_height,
+    int ss_x, int ss_y, int strength, const int *blk_fw, int use_whole_blk,
+    uint32_t *y_accum, uint16_t *y_count, const uint16_t *y_dist,
+    const uint16_t *u_dist, const uint16_t *v_dist) {
+  unsigned int blk_col = 0, uv_blk_col = 0;
+  const unsigned int blk_col_step = 16, uv_blk_col_step = 16 >> ss_x;
+  const unsigned int mid_width = block_width >> 1,
+                     last_width = block_width - blk_col_step;
+  int top_weight = blk_fw[0],
+      bottom_weight = use_whole_blk ? blk_fw[0] : blk_fw[2];
+  const int16_t *const *neighbors_first;
+  const int16_t *const *neighbors_second;
+
+  if (block_width == 16) {
+    // Special Case: The block width is 16 and we are operating on a row of 16
+    // chroma pixels. In this case, we can't use the usual left-middle-right
+    // pattern. We also don't support splitting now.
+    neighbors_first = LUMA_LEFT_COLUMN_NEIGHBORS;
+    neighbors_second = LUMA_RIGHT_COLUMN_NEIGHBORS;
+    if (use_whole_blk) {
+      vp9_apply_temporal_filter_luma_16(
+          y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+          u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride,
+          u_pre + uv_blk_col, v_pre + uv_blk_col, uv_pre_stride, 16,
+          block_height, ss_x, ss_y, strength, use_whole_blk, y_accum + blk_col,
+          y_count + blk_col, y_dist + blk_col, u_dist + uv_blk_col,
+          v_dist + uv_blk_col, neighbors_first, neighbors_second, top_weight,
+          bottom_weight, NULL);
+    } else {
+      vp9_apply_temporal_filter_luma_16(
+          y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+          u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride,
+          u_pre + uv_blk_col, v_pre + uv_blk_col, uv_pre_stride, 16,
+          block_height, ss_x, ss_y, strength, use_whole_blk, y_accum + blk_col,
+          y_count + blk_col, y_dist + blk_col, u_dist + uv_blk_col,
+          v_dist + uv_blk_col, neighbors_first, neighbors_second, 0, 0, blk_fw);
+    }
+
+    return;
+  }
+
+  // Left
+  neighbors_first = LUMA_LEFT_COLUMN_NEIGHBORS;
+  neighbors_second = LUMA_MIDDLE_COLUMN_NEIGHBORS;
+  vp9_apply_temporal_filter_luma_16(
+      y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+      u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride, u_pre + uv_blk_col,
+      v_pre + uv_blk_col, uv_pre_stride, 16, block_height, ss_x, ss_y, strength,
+      use_whole_blk, y_accum + blk_col, y_count + blk_col, y_dist + blk_col,
+      u_dist + uv_blk_col, v_dist + uv_blk_col, neighbors_first,
+      neighbors_second, top_weight, bottom_weight, NULL);
+
+  blk_col += blk_col_step;
+  uv_blk_col += uv_blk_col_step;
+
+  // Middle First
+  neighbors_first = LUMA_MIDDLE_COLUMN_NEIGHBORS;
+  for (; blk_col < mid_width;
+       blk_col += blk_col_step, uv_blk_col += uv_blk_col_step) {
+    vp9_apply_temporal_filter_luma_16(
+        y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+        u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride,
+        u_pre + uv_blk_col, v_pre + uv_blk_col, uv_pre_stride, 16, block_height,
+        ss_x, ss_y, strength, use_whole_blk, y_accum + blk_col,
+        y_count + blk_col, y_dist + blk_col, u_dist + uv_blk_col,
+        v_dist + uv_blk_col, neighbors_first, neighbors_second, top_weight,
+        bottom_weight, NULL);
+  }
+
+  if (!use_whole_blk) {
+    top_weight = blk_fw[1];
+    bottom_weight = blk_fw[3];
+  }
+
+  // Middle Second
+  for (; blk_col < last_width;
+       blk_col += blk_col_step, uv_blk_col += uv_blk_col_step) {
+    vp9_apply_temporal_filter_luma_16(
+        y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+        u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride,
+        u_pre + uv_blk_col, v_pre + uv_blk_col, uv_pre_stride, 16, block_height,
+        ss_x, ss_y, strength, use_whole_blk, y_accum + blk_col,
+        y_count + blk_col, y_dist + blk_col, u_dist + uv_blk_col,
+        v_dist + uv_blk_col, neighbors_first, neighbors_second, top_weight,
+        bottom_weight, NULL);
+  }
+
+  // Right
+  neighbors_second = LUMA_RIGHT_COLUMN_NEIGHBORS;
+  vp9_apply_temporal_filter_luma_16(
+      y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+      u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride, u_pre + uv_blk_col,
+      v_pre + uv_blk_col, uv_pre_stride, 16, block_height, ss_x, ss_y, strength,
+      use_whole_blk, y_accum + blk_col, y_count + blk_col, y_dist + blk_col,
+      u_dist + uv_blk_col, v_dist + uv_blk_col, neighbors_first,
+      neighbors_second, top_weight, bottom_weight, NULL);
+}
+
+// Apply temporal filter to the chroma components. This performs temporal
+// filtering on a chroma block of 8 X uv_height. If blk_fw is not NULL, use
+// blk_fw as an array of size 4 for the weights for each of the 4 subblocks,
+// else use top_weight for top half, and bottom weight for bottom half.
+static void vp9_apply_temporal_filter_chroma_8(
+    const uint8_t *y_src, int y_src_stride, const uint8_t *y_pre,
+    int y_pre_stride, const uint8_t *u_src, const uint8_t *v_src,
+    int uv_src_stride, const uint8_t *u_pre, const uint8_t *v_pre,
+    int uv_pre_stride, unsigned int uv_block_width,
+    unsigned int uv_block_height, int ss_x, int ss_y, int strength,
+    uint32_t *u_accum, uint16_t *u_count, uint32_t *v_accum, uint16_t *v_count,
+    const uint16_t *y_dist, const uint16_t *u_dist, const uint16_t *v_dist,
+    const int16_t *const *neighbors, int top_weight, int bottom_weight,
+    const int *blk_fw) {
+  const int rounding = (1 << strength) >> 1;
+
+  __m128i weight;
+
+  __m128i mul;
+
+  __m128i u_sum_row_1, u_sum_row_2, u_sum_row_3;
+  __m128i v_sum_row_1, v_sum_row_2, v_sum_row_3;
+
+  __m128i u_sum_row, v_sum_row;
+
+  // Loop variable
+  unsigned int h;
+
+  (void)uv_block_width;
+
+  // Initialize weight
+  if (blk_fw) {
+    weight = _mm_setr_epi16(blk_fw[0], blk_fw[0], blk_fw[0], blk_fw[0],
+                            blk_fw[1], blk_fw[1], blk_fw[1], blk_fw[1]);
+  } else {
+    weight = _mm_set1_epi16(top_weight);
+  }
+
+  // First row
+  mul = _mm_load_si128((const __m128i *)neighbors[0]);
+
+  // Add chroma values
+  get_sum_8(u_dist, &u_sum_row_2);
+  get_sum_8(u_dist + DIST_STRIDE, &u_sum_row_3);
+
+  u_sum_row = _mm_adds_epu16(u_sum_row_2, u_sum_row_3);
+
+  get_sum_8(v_dist, &v_sum_row_2);
+  get_sum_8(v_dist + DIST_STRIDE, &v_sum_row_3);
+
+  v_sum_row = _mm_adds_epu16(v_sum_row_2, v_sum_row_3);
+
+  // Add luma values
+  add_luma_dist_to_8_chroma_mod(y_dist, ss_x, ss_y, &u_sum_row, &v_sum_row);
+
+  // Get modifier and store result
+  u_sum_row = average_8(u_sum_row, &mul, strength, rounding, &weight);
+  v_sum_row = average_8(v_sum_row, &mul, strength, rounding, &weight);
+
+  accumulate_and_store_8(u_sum_row, u_pre, u_count, u_accum);
+  accumulate_and_store_8(v_sum_row, v_pre, v_count, v_accum);
+
+  u_src += uv_src_stride;
+  u_pre += uv_pre_stride;
+  u_dist += DIST_STRIDE;
+  v_src += uv_src_stride;
+  v_pre += uv_pre_stride;
+  v_dist += DIST_STRIDE;
+  u_count += uv_pre_stride;
+  u_accum += uv_pre_stride;
+  v_count += uv_pre_stride;
+  v_accum += uv_pre_stride;
+
+  y_src += y_src_stride * (1 + ss_y);
+  y_pre += y_pre_stride * (1 + ss_y);
+  y_dist += DIST_STRIDE * (1 + ss_y);
+
+  // Then all the rows except the last one
+  mul = _mm_load_si128((const __m128i *)neighbors[1]);
+
+  for (h = 1; h < uv_block_height - 1; ++h) {
+    // Move the weight pointer to the bottom half of the blocks
+    if (h == uv_block_height / 2) {
+      if (blk_fw) {
+        weight = _mm_setr_epi16(blk_fw[2], blk_fw[2], blk_fw[2], blk_fw[2],
+                                blk_fw[3], blk_fw[3], blk_fw[3], blk_fw[3]);
+      } else {
+        weight = _mm_set1_epi16(bottom_weight);
+      }
+    }
+
+    // Shift the rows up
+    u_sum_row_1 = u_sum_row_2;
+    u_sum_row_2 = u_sum_row_3;
+
+    v_sum_row_1 = v_sum_row_2;
+    v_sum_row_2 = v_sum_row_3;
+
+    // Add chroma values
+    u_sum_row = _mm_adds_epu16(u_sum_row_1, u_sum_row_2);
+    get_sum_8(u_dist + DIST_STRIDE, &u_sum_row_3);
+    u_sum_row = _mm_adds_epu16(u_sum_row, u_sum_row_3);
+
+    v_sum_row = _mm_adds_epu16(v_sum_row_1, v_sum_row_2);
+    get_sum_8(v_dist + DIST_STRIDE, &v_sum_row_3);
+    v_sum_row = _mm_adds_epu16(v_sum_row, v_sum_row_3);
+
+    // Add luma values
+    add_luma_dist_to_8_chroma_mod(y_dist, ss_x, ss_y, &u_sum_row, &v_sum_row);
+
+    // Get modifier and store result
+    u_sum_row = average_8(u_sum_row, &mul, strength, rounding, &weight);
+    v_sum_row = average_8(v_sum_row, &mul, strength, rounding, &weight);
+
+    accumulate_and_store_8(u_sum_row, u_pre, u_count, u_accum);
+    accumulate_and_store_8(v_sum_row, v_pre, v_count, v_accum);
+
+    u_src += uv_src_stride;
+    u_pre += uv_pre_stride;
+    u_dist += DIST_STRIDE;
+    v_src += uv_src_stride;
+    v_pre += uv_pre_stride;
+    v_dist += DIST_STRIDE;
+    u_count += uv_pre_stride;
+    u_accum += uv_pre_stride;
+    v_count += uv_pre_stride;
+    v_accum += uv_pre_stride;
+
+    y_src += y_src_stride * (1 + ss_y);
+    y_pre += y_pre_stride * (1 + ss_y);
+    y_dist += DIST_STRIDE * (1 + ss_y);
+  }
+
+  // The last row
+  mul = _mm_load_si128((const __m128i *)neighbors[0]);
+
+  // Shift the rows up
+  u_sum_row_1 = u_sum_row_2;
+  u_sum_row_2 = u_sum_row_3;
+
+  v_sum_row_1 = v_sum_row_2;
+  v_sum_row_2 = v_sum_row_3;
+
+  // Add chroma values
+  u_sum_row = _mm_adds_epu16(u_sum_row_1, u_sum_row_2);
+  v_sum_row = _mm_adds_epu16(v_sum_row_1, v_sum_row_2);
+
+  // Add luma values
+  add_luma_dist_to_8_chroma_mod(y_dist, ss_x, ss_y, &u_sum_row, &v_sum_row);
+
+  // Get modifier and store result
+  u_sum_row = average_8(u_sum_row, &mul, strength, rounding, &weight);
+  v_sum_row = average_8(v_sum_row, &mul, strength, rounding, &weight);
+
+  accumulate_and_store_8(u_sum_row, u_pre, u_count, u_accum);
+  accumulate_and_store_8(v_sum_row, v_pre, v_count, v_accum);
+}
+
+// Perform temporal filter for the chroma components.
+static void vp9_apply_temporal_filter_chroma(
+    const uint8_t *y_src, int y_src_stride, const uint8_t *y_pre,
+    int y_pre_stride, const uint8_t *u_src, const uint8_t *v_src,
+    int uv_src_stride, const uint8_t *u_pre, const uint8_t *v_pre,
+    int uv_pre_stride, unsigned int block_width, unsigned int block_height,
+    int ss_x, int ss_y, int strength, const int *blk_fw, int use_whole_blk,
+    uint32_t *u_accum, uint16_t *u_count, uint32_t *v_accum, uint16_t *v_count,
+    const uint16_t *y_dist, const uint16_t *u_dist, const uint16_t *v_dist) {
+  const unsigned int uv_width = block_width >> ss_x,
+                     uv_height = block_height >> ss_y;
+
+  unsigned int blk_col = 0, uv_blk_col = 0;
+  const unsigned int uv_blk_col_step = 8, blk_col_step = 8 << ss_x;
+  const unsigned int uv_mid_width = uv_width >> 1,
+                     uv_last_width = uv_width - uv_blk_col_step;
+  int top_weight = blk_fw[0],
+      bottom_weight = use_whole_blk ? blk_fw[0] : blk_fw[2];
+  const int16_t *const *neighbors;
+
+  if (uv_width == 8) {
+    // Special Case: We are subsampling in x direction on a 16x16 block. Since
+    // we are operating on a row of 8 chroma pixels, we can't use the usual
+    // left-middle-right pattern.
+    assert(ss_x);
+
+    if (ss_y) {
+      neighbors = CHROMA_DOUBLE_SS_SINGLE_COLUMN_NEIGHBORS;
+    } else {
+      neighbors = CHROMA_SINGLE_SS_SINGLE_COLUMN_NEIGHBORS;
+    }
+
+    if (use_whole_blk) {
+      vp9_apply_temporal_filter_chroma_8(
+          y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+          u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride,
+          u_pre + uv_blk_col, v_pre + uv_blk_col, uv_pre_stride, uv_width,
+          uv_height, ss_x, ss_y, strength, u_accum + uv_blk_col,
+          u_count + uv_blk_col, v_accum + uv_blk_col, v_count + uv_blk_col,
+          y_dist + blk_col, u_dist + uv_blk_col, v_dist + uv_blk_col, neighbors,
+          top_weight, bottom_weight, NULL);
+    } else {
+      vp9_apply_temporal_filter_chroma_8(
+          y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+          u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride,
+          u_pre + uv_blk_col, v_pre + uv_blk_col, uv_pre_stride, uv_width,
+          uv_height, ss_x, ss_y, strength, u_accum + uv_blk_col,
+          u_count + uv_blk_col, v_accum + uv_blk_col, v_count + uv_blk_col,
+          y_dist + blk_col, u_dist + uv_blk_col, v_dist + uv_blk_col, neighbors,
+          0, 0, blk_fw);
+    }
+
+    return;
+  }
+
+  // Left
+  if (ss_x && ss_y) {
+    neighbors = CHROMA_DOUBLE_SS_LEFT_COLUMN_NEIGHBORS;
+  } else if (ss_x || ss_y) {
+    neighbors = CHROMA_SINGLE_SS_LEFT_COLUMN_NEIGHBORS;
+  } else {
+    neighbors = CHROMA_NO_SS_LEFT_COLUMN_NEIGHBORS;
+  }
+
+  vp9_apply_temporal_filter_chroma_8(
+      y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+      u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride, u_pre + uv_blk_col,
+      v_pre + uv_blk_col, uv_pre_stride, uv_width, uv_height, ss_x, ss_y,
+      strength, u_accum + uv_blk_col, u_count + uv_blk_col,
+      v_accum + uv_blk_col, v_count + uv_blk_col, y_dist + blk_col,
+      u_dist + uv_blk_col, v_dist + uv_blk_col, neighbors, top_weight,
+      bottom_weight, NULL);
+
+  blk_col += blk_col_step;
+  uv_blk_col += uv_blk_col_step;
+
+  // Middle First
+  if (ss_x && ss_y) {
+    neighbors = CHROMA_DOUBLE_SS_MIDDLE_COLUMN_NEIGHBORS;
+  } else if (ss_x || ss_y) {
+    neighbors = CHROMA_SINGLE_SS_MIDDLE_COLUMN_NEIGHBORS;
+  } else {
+    neighbors = CHROMA_NO_SS_MIDDLE_COLUMN_NEIGHBORS;
+  }
+
+  for (; uv_blk_col < uv_mid_width;
+       blk_col += blk_col_step, uv_blk_col += uv_blk_col_step) {
+    vp9_apply_temporal_filter_chroma_8(
+        y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+        u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride,
+        u_pre + uv_blk_col, v_pre + uv_blk_col, uv_pre_stride, uv_width,
+        uv_height, ss_x, ss_y, strength, u_accum + uv_blk_col,
+        u_count + uv_blk_col, v_accum + uv_blk_col, v_count + uv_blk_col,
+        y_dist + blk_col, u_dist + uv_blk_col, v_dist + uv_blk_col, neighbors,
+        top_weight, bottom_weight, NULL);
+  }
+
+  if (!use_whole_blk) {
+    top_weight = blk_fw[1];
+    bottom_weight = blk_fw[3];
+  }
+
+  // Middle Second
+  for (; uv_blk_col < uv_last_width;
+       blk_col += blk_col_step, uv_blk_col += uv_blk_col_step) {
+    vp9_apply_temporal_filter_chroma_8(
+        y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+        u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride,
+        u_pre + uv_blk_col, v_pre + uv_blk_col, uv_pre_stride, uv_width,
+        uv_height, ss_x, ss_y, strength, u_accum + uv_blk_col,
+        u_count + uv_blk_col, v_accum + uv_blk_col, v_count + uv_blk_col,
+        y_dist + blk_col, u_dist + uv_blk_col, v_dist + uv_blk_col, neighbors,
+        top_weight, bottom_weight, NULL);
+  }
+
+  // Right
+  if (ss_x && ss_y) {
+    neighbors = CHROMA_DOUBLE_SS_RIGHT_COLUMN_NEIGHBORS;
+  } else if (ss_x || ss_y) {
+    neighbors = CHROMA_SINGLE_SS_RIGHT_COLUMN_NEIGHBORS;
+  } else {
+    neighbors = CHROMA_NO_SS_RIGHT_COLUMN_NEIGHBORS;
+  }
+
+  vp9_apply_temporal_filter_chroma_8(
+      y_src + blk_col, y_src_stride, y_pre + blk_col, y_pre_stride,
+      u_src + uv_blk_col, v_src + uv_blk_col, uv_src_stride, u_pre + uv_blk_col,
+      v_pre + uv_blk_col, uv_pre_stride, uv_width, uv_height, ss_x, ss_y,
+      strength, u_accum + uv_blk_col, u_count + uv_blk_col,
+      v_accum + uv_blk_col, v_count + uv_blk_col, y_dist + blk_col,
+      u_dist + uv_blk_col, v_dist + uv_blk_col, neighbors, top_weight,
+      bottom_weight, NULL);
+}
+
+void vp9_apply_temporal_filter_sse4_1(
+    const uint8_t *y_src, int y_src_stride, const uint8_t *y_pre,
+    int y_pre_stride, const uint8_t *u_src, const uint8_t *v_src,
+    int uv_src_stride, const uint8_t *u_pre, const uint8_t *v_pre,
+    int uv_pre_stride, unsigned int block_width, unsigned int block_height,
+    int ss_x, int ss_y, int strength, const int *const blk_fw,
+    int use_whole_blk, uint32_t *y_accum, uint16_t *y_count, uint32_t *u_accum,
+    uint16_t *u_count, uint32_t *v_accum, uint16_t *v_count) {
+  const unsigned int chroma_height = block_height >> ss_y,
+                     chroma_width = block_width >> ss_x;
+
+  DECLARE_ALIGNED(16, uint16_t, y_dist[BH * DIST_STRIDE]) = { 0 };
+  DECLARE_ALIGNED(16, uint16_t, u_dist[BH * DIST_STRIDE]) = { 0 };
+  DECLARE_ALIGNED(16, uint16_t, v_dist[BH * DIST_STRIDE]) = { 0 };
+  const int *blk_fw_ptr = blk_fw;
+
+  uint16_t *y_dist_ptr = y_dist + 1, *u_dist_ptr = u_dist + 1,
+           *v_dist_ptr = v_dist + 1;
+  const uint8_t *y_src_ptr = y_src, *u_src_ptr = u_src, *v_src_ptr = v_src;
+  const uint8_t *y_pre_ptr = y_pre, *u_pre_ptr = u_pre, *v_pre_ptr = v_pre;
+
+  // Loop variables
+  unsigned int row, blk_col;
+
+  assert(block_width <= BW && "block width too large");
+  assert(block_height <= BH && "block height too large");
+  assert(block_width % 16 == 0 && "block width must be multiple of 16");
+  assert(block_height % 2 == 0 && "block height must be even");
+  assert((ss_x == 0 || ss_x == 1) && (ss_y == 0 || ss_y == 1) &&
+         "invalid chroma subsampling");
+  assert(strength >= 0 && strength <= 6 && "invalid temporal filter strength");
+  assert(blk_fw[0] >= 0 && "filter weight must be non-negative");
+  assert(
+      (use_whole_blk || (blk_fw[1] >= 0 && blk_fw[2] >= 0 && blk_fw[3] >= 0)) &&
+      "subblock filter weight must be non-negative");
+  assert(blk_fw[0] <= 2 && "subblock filter weight must be at most 2");
+  assert(
+      (use_whole_blk || (blk_fw[1] <= 2 && blk_fw[2] <= 2 && blk_fw[3] <= 2)) &&
+      "subblock filter weight must be at most 2");
+
+  // Precompute the squared differences
+  for (row = 0; row < block_height; row++) {
+    for (blk_col = 0; blk_col < block_width; blk_col += 16) {
+      store_dist_16(y_src_ptr + blk_col, y_pre_ptr + blk_col,
+                    y_dist_ptr + blk_col);
+    }
+    y_src_ptr += y_src_stride;
+    y_pre_ptr += y_pre_stride;
+    y_dist_ptr += DIST_STRIDE;
+  }
+
+  for (row = 0; row < chroma_height; row++) {
+    for (blk_col = 0; blk_col < chroma_width; blk_col += 8) {
+      store_dist_8(u_src_ptr + blk_col, u_pre_ptr + blk_col,
+                   u_dist_ptr + blk_col);
+      store_dist_8(v_src_ptr + blk_col, v_pre_ptr + blk_col,
+                   v_dist_ptr + blk_col);
+    }
+
+    u_src_ptr += uv_src_stride;
+    u_pre_ptr += uv_pre_stride;
+    u_dist_ptr += DIST_STRIDE;
+    v_src_ptr += uv_src_stride;
+    v_pre_ptr += uv_pre_stride;
+    v_dist_ptr += DIST_STRIDE;
+  }
+
+  y_dist_ptr = y_dist + 1;
+  u_dist_ptr = u_dist + 1;
+  v_dist_ptr = v_dist + 1;
+
+  vp9_apply_temporal_filter_luma(
+      y_src, y_src_stride, y_pre, y_pre_stride, u_src, v_src, uv_src_stride,
+      u_pre, v_pre, uv_pre_stride, block_width, block_height, ss_x, ss_y,
+      strength, blk_fw_ptr, use_whole_blk, y_accum, y_count, y_dist_ptr,
+      u_dist_ptr, v_dist_ptr);
+
+  vp9_apply_temporal_filter_chroma(
+      y_src, y_src_stride, y_pre, y_pre_stride, u_src, v_src, uv_src_stride,
+      u_pre, v_pre, uv_pre_stride, block_width, block_height, ss_x, ss_y,
+      strength, blk_fw_ptr, use_whole_blk, u_accum, u_count, v_accum, v_count,
+      y_dist_ptr, u_dist_ptr, v_dist_ptr);
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_dct_intrin_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_dct_intrin_sse2.c
index 293cdcd..2188903 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_dct_intrin_sse2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_dct_intrin_sse2.c
@@ -14,6 +14,7 @@
 #include "./vp9_rtcd.h"
 #include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/txfm_common.h"
+#include "vpx_dsp/x86/bitdepth_conversion_sse2.h"
 #include "vpx_dsp/x86/fwd_txfm_sse2.h"
 #include "vpx_dsp/x86/transpose_sse2.h"
 #include "vpx_dsp/x86/txfm_common_sse2.h"
@@ -180,445 +181,6 @@
   }
 }
 
-void vp9_fdct8x8_quant_sse2(const int16_t *input, int stride,
-                            int16_t *coeff_ptr, intptr_t n_coeffs,
-                            int skip_block, const int16_t *round_ptr,
-                            const int16_t *quant_ptr, int16_t *qcoeff_ptr,
-                            int16_t *dqcoeff_ptr, const int16_t *dequant_ptr,
-                            uint16_t *eob_ptr, const int16_t *scan_ptr,
-                            const int16_t *iscan_ptr) {
-  __m128i zero;
-  int pass;
-
-  // Constants
-  //    When we use them, in one case, they are all the same. In all others
-  //    it's a pair of them that we need to repeat four times. This is done
-  //    by constructing the 32 bit constant corresponding to that pair.
-  const __m128i k__cospi_p16_p16 = _mm_set1_epi16(cospi_16_64);
-  const __m128i k__cospi_p16_m16 = pair_set_epi16(cospi_16_64, -cospi_16_64);
-  const __m128i k__cospi_p24_p08 = pair_set_epi16(cospi_24_64, cospi_8_64);
-  const __m128i k__cospi_m08_p24 = pair_set_epi16(-cospi_8_64, cospi_24_64);
-  const __m128i k__cospi_p28_p04 = pair_set_epi16(cospi_28_64, cospi_4_64);
-  const __m128i k__cospi_m04_p28 = pair_set_epi16(-cospi_4_64, cospi_28_64);
-  const __m128i k__cospi_p12_p20 = pair_set_epi16(cospi_12_64, cospi_20_64);
-  const __m128i k__cospi_m20_p12 = pair_set_epi16(-cospi_20_64, cospi_12_64);
-  const __m128i k__DCT_CONST_ROUNDING = _mm_set1_epi32(DCT_CONST_ROUNDING);
-  // Load input
-  __m128i in0 = _mm_load_si128((const __m128i *)(input + 0 * stride));
-  __m128i in1 = _mm_load_si128((const __m128i *)(input + 1 * stride));
-  __m128i in2 = _mm_load_si128((const __m128i *)(input + 2 * stride));
-  __m128i in3 = _mm_load_si128((const __m128i *)(input + 3 * stride));
-  __m128i in4 = _mm_load_si128((const __m128i *)(input + 4 * stride));
-  __m128i in5 = _mm_load_si128((const __m128i *)(input + 5 * stride));
-  __m128i in6 = _mm_load_si128((const __m128i *)(input + 6 * stride));
-  __m128i in7 = _mm_load_si128((const __m128i *)(input + 7 * stride));
-  __m128i *in[8];
-  int index = 0;
-
-  (void)scan_ptr;
-  (void)coeff_ptr;
-
-  // Pre-condition input (shift by two)
-  in0 = _mm_slli_epi16(in0, 2);
-  in1 = _mm_slli_epi16(in1, 2);
-  in2 = _mm_slli_epi16(in2, 2);
-  in3 = _mm_slli_epi16(in3, 2);
-  in4 = _mm_slli_epi16(in4, 2);
-  in5 = _mm_slli_epi16(in5, 2);
-  in6 = _mm_slli_epi16(in6, 2);
-  in7 = _mm_slli_epi16(in7, 2);
-
-  in[0] = &in0;
-  in[1] = &in1;
-  in[2] = &in2;
-  in[3] = &in3;
-  in[4] = &in4;
-  in[5] = &in5;
-  in[6] = &in6;
-  in[7] = &in7;
-
-  // We do two passes, first the columns, then the rows. The results of the
-  // first pass are transposed so that the same column code can be reused. The
-  // results of the second pass are also transposed so that the rows (processed
-  // as columns) are put back in row positions.
-  for (pass = 0; pass < 2; pass++) {
-    // To store results of each pass before the transpose.
-    __m128i res0, res1, res2, res3, res4, res5, res6, res7;
-    // Add/subtract
-    const __m128i q0 = _mm_add_epi16(in0, in7);
-    const __m128i q1 = _mm_add_epi16(in1, in6);
-    const __m128i q2 = _mm_add_epi16(in2, in5);
-    const __m128i q3 = _mm_add_epi16(in3, in4);
-    const __m128i q4 = _mm_sub_epi16(in3, in4);
-    const __m128i q5 = _mm_sub_epi16(in2, in5);
-    const __m128i q6 = _mm_sub_epi16(in1, in6);
-    const __m128i q7 = _mm_sub_epi16(in0, in7);
-    // Work on first four results
-    {
-      // Add/subtract
-      const __m128i r0 = _mm_add_epi16(q0, q3);
-      const __m128i r1 = _mm_add_epi16(q1, q2);
-      const __m128i r2 = _mm_sub_epi16(q1, q2);
-      const __m128i r3 = _mm_sub_epi16(q0, q3);
-      // Interleave to do the multiply by constants which gets us into 32bits
-      const __m128i t0 = _mm_unpacklo_epi16(r0, r1);
-      const __m128i t1 = _mm_unpackhi_epi16(r0, r1);
-      const __m128i t2 = _mm_unpacklo_epi16(r2, r3);
-      const __m128i t3 = _mm_unpackhi_epi16(r2, r3);
-      const __m128i u0 = _mm_madd_epi16(t0, k__cospi_p16_p16);
-      const __m128i u1 = _mm_madd_epi16(t1, k__cospi_p16_p16);
-      const __m128i u2 = _mm_madd_epi16(t0, k__cospi_p16_m16);
-      const __m128i u3 = _mm_madd_epi16(t1, k__cospi_p16_m16);
-      const __m128i u4 = _mm_madd_epi16(t2, k__cospi_p24_p08);
-      const __m128i u5 = _mm_madd_epi16(t3, k__cospi_p24_p08);
-      const __m128i u6 = _mm_madd_epi16(t2, k__cospi_m08_p24);
-      const __m128i u7 = _mm_madd_epi16(t3, k__cospi_m08_p24);
-      // dct_const_round_shift
-      const __m128i v0 = _mm_add_epi32(u0, k__DCT_CONST_ROUNDING);
-      const __m128i v1 = _mm_add_epi32(u1, k__DCT_CONST_ROUNDING);
-      const __m128i v2 = _mm_add_epi32(u2, k__DCT_CONST_ROUNDING);
-      const __m128i v3 = _mm_add_epi32(u3, k__DCT_CONST_ROUNDING);
-      const __m128i v4 = _mm_add_epi32(u4, k__DCT_CONST_ROUNDING);
-      const __m128i v5 = _mm_add_epi32(u5, k__DCT_CONST_ROUNDING);
-      const __m128i v6 = _mm_add_epi32(u6, k__DCT_CONST_ROUNDING);
-      const __m128i v7 = _mm_add_epi32(u7, k__DCT_CONST_ROUNDING);
-      const __m128i w0 = _mm_srai_epi32(v0, DCT_CONST_BITS);
-      const __m128i w1 = _mm_srai_epi32(v1, DCT_CONST_BITS);
-      const __m128i w2 = _mm_srai_epi32(v2, DCT_CONST_BITS);
-      const __m128i w3 = _mm_srai_epi32(v3, DCT_CONST_BITS);
-      const __m128i w4 = _mm_srai_epi32(v4, DCT_CONST_BITS);
-      const __m128i w5 = _mm_srai_epi32(v5, DCT_CONST_BITS);
-      const __m128i w6 = _mm_srai_epi32(v6, DCT_CONST_BITS);
-      const __m128i w7 = _mm_srai_epi32(v7, DCT_CONST_BITS);
-      // Combine
-      res0 = _mm_packs_epi32(w0, w1);
-      res4 = _mm_packs_epi32(w2, w3);
-      res2 = _mm_packs_epi32(w4, w5);
-      res6 = _mm_packs_epi32(w6, w7);
-    }
-    // Work on next four results
-    {
-      // Interleave to do the multiply by constants which gets us into 32bits
-      const __m128i d0 = _mm_unpacklo_epi16(q6, q5);
-      const __m128i d1 = _mm_unpackhi_epi16(q6, q5);
-      const __m128i e0 = _mm_madd_epi16(d0, k__cospi_p16_m16);
-      const __m128i e1 = _mm_madd_epi16(d1, k__cospi_p16_m16);
-      const __m128i e2 = _mm_madd_epi16(d0, k__cospi_p16_p16);
-      const __m128i e3 = _mm_madd_epi16(d1, k__cospi_p16_p16);
-      // dct_const_round_shift
-      const __m128i f0 = _mm_add_epi32(e0, k__DCT_CONST_ROUNDING);
-      const __m128i f1 = _mm_add_epi32(e1, k__DCT_CONST_ROUNDING);
-      const __m128i f2 = _mm_add_epi32(e2, k__DCT_CONST_ROUNDING);
-      const __m128i f3 = _mm_add_epi32(e3, k__DCT_CONST_ROUNDING);
-      const __m128i s0 = _mm_srai_epi32(f0, DCT_CONST_BITS);
-      const __m128i s1 = _mm_srai_epi32(f1, DCT_CONST_BITS);
-      const __m128i s2 = _mm_srai_epi32(f2, DCT_CONST_BITS);
-      const __m128i s3 = _mm_srai_epi32(f3, DCT_CONST_BITS);
-      // Combine
-      const __m128i r0 = _mm_packs_epi32(s0, s1);
-      const __m128i r1 = _mm_packs_epi32(s2, s3);
-      // Add/subtract
-      const __m128i x0 = _mm_add_epi16(q4, r0);
-      const __m128i x1 = _mm_sub_epi16(q4, r0);
-      const __m128i x2 = _mm_sub_epi16(q7, r1);
-      const __m128i x3 = _mm_add_epi16(q7, r1);
-      // Interleave to do the multiply by constants which gets us into 32bits
-      const __m128i t0 = _mm_unpacklo_epi16(x0, x3);
-      const __m128i t1 = _mm_unpackhi_epi16(x0, x3);
-      const __m128i t2 = _mm_unpacklo_epi16(x1, x2);
-      const __m128i t3 = _mm_unpackhi_epi16(x1, x2);
-      const __m128i u0 = _mm_madd_epi16(t0, k__cospi_p28_p04);
-      const __m128i u1 = _mm_madd_epi16(t1, k__cospi_p28_p04);
-      const __m128i u2 = _mm_madd_epi16(t0, k__cospi_m04_p28);
-      const __m128i u3 = _mm_madd_epi16(t1, k__cospi_m04_p28);
-      const __m128i u4 = _mm_madd_epi16(t2, k__cospi_p12_p20);
-      const __m128i u5 = _mm_madd_epi16(t3, k__cospi_p12_p20);
-      const __m128i u6 = _mm_madd_epi16(t2, k__cospi_m20_p12);
-      const __m128i u7 = _mm_madd_epi16(t3, k__cospi_m20_p12);
-      // dct_const_round_shift
-      const __m128i v0 = _mm_add_epi32(u0, k__DCT_CONST_ROUNDING);
-      const __m128i v1 = _mm_add_epi32(u1, k__DCT_CONST_ROUNDING);
-      const __m128i v2 = _mm_add_epi32(u2, k__DCT_CONST_ROUNDING);
-      const __m128i v3 = _mm_add_epi32(u3, k__DCT_CONST_ROUNDING);
-      const __m128i v4 = _mm_add_epi32(u4, k__DCT_CONST_ROUNDING);
-      const __m128i v5 = _mm_add_epi32(u5, k__DCT_CONST_ROUNDING);
-      const __m128i v6 = _mm_add_epi32(u6, k__DCT_CONST_ROUNDING);
-      const __m128i v7 = _mm_add_epi32(u7, k__DCT_CONST_ROUNDING);
-      const __m128i w0 = _mm_srai_epi32(v0, DCT_CONST_BITS);
-      const __m128i w1 = _mm_srai_epi32(v1, DCT_CONST_BITS);
-      const __m128i w2 = _mm_srai_epi32(v2, DCT_CONST_BITS);
-      const __m128i w3 = _mm_srai_epi32(v3, DCT_CONST_BITS);
-      const __m128i w4 = _mm_srai_epi32(v4, DCT_CONST_BITS);
-      const __m128i w5 = _mm_srai_epi32(v5, DCT_CONST_BITS);
-      const __m128i w6 = _mm_srai_epi32(v6, DCT_CONST_BITS);
-      const __m128i w7 = _mm_srai_epi32(v7, DCT_CONST_BITS);
-      // Combine
-      res1 = _mm_packs_epi32(w0, w1);
-      res7 = _mm_packs_epi32(w2, w3);
-      res5 = _mm_packs_epi32(w4, w5);
-      res3 = _mm_packs_epi32(w6, w7);
-    }
-    // Transpose the 8x8.
-    {
-      // 00 01 02 03 04 05 06 07
-      // 10 11 12 13 14 15 16 17
-      // 20 21 22 23 24 25 26 27
-      // 30 31 32 33 34 35 36 37
-      // 40 41 42 43 44 45 46 47
-      // 50 51 52 53 54 55 56 57
-      // 60 61 62 63 64 65 66 67
-      // 70 71 72 73 74 75 76 77
-      const __m128i tr0_0 = _mm_unpacklo_epi16(res0, res1);
-      const __m128i tr0_1 = _mm_unpacklo_epi16(res2, res3);
-      const __m128i tr0_2 = _mm_unpackhi_epi16(res0, res1);
-      const __m128i tr0_3 = _mm_unpackhi_epi16(res2, res3);
-      const __m128i tr0_4 = _mm_unpacklo_epi16(res4, res5);
-      const __m128i tr0_5 = _mm_unpacklo_epi16(res6, res7);
-      const __m128i tr0_6 = _mm_unpackhi_epi16(res4, res5);
-      const __m128i tr0_7 = _mm_unpackhi_epi16(res6, res7);
-      // 00 10 01 11 02 12 03 13
-      // 20 30 21 31 22 32 23 33
-      // 04 14 05 15 06 16 07 17
-      // 24 34 25 35 26 36 27 37
-      // 40 50 41 51 42 52 43 53
-      // 60 70 61 71 62 72 63 73
-      // 54 54 55 55 56 56 57 57
-      // 64 74 65 75 66 76 67 77
-      const __m128i tr1_0 = _mm_unpacklo_epi32(tr0_0, tr0_1);
-      const __m128i tr1_1 = _mm_unpacklo_epi32(tr0_2, tr0_3);
-      const __m128i tr1_2 = _mm_unpackhi_epi32(tr0_0, tr0_1);
-      const __m128i tr1_3 = _mm_unpackhi_epi32(tr0_2, tr0_3);
-      const __m128i tr1_4 = _mm_unpacklo_epi32(tr0_4, tr0_5);
-      const __m128i tr1_5 = _mm_unpacklo_epi32(tr0_6, tr0_7);
-      const __m128i tr1_6 = _mm_unpackhi_epi32(tr0_4, tr0_5);
-      const __m128i tr1_7 = _mm_unpackhi_epi32(tr0_6, tr0_7);
-      // 00 10 20 30 01 11 21 31
-      // 40 50 60 70 41 51 61 71
-      // 02 12 22 32 03 13 23 33
-      // 42 52 62 72 43 53 63 73
-      // 04 14 24 34 05 15 21 36
-      // 44 54 64 74 45 55 61 76
-      // 06 16 26 36 07 17 27 37
-      // 46 56 66 76 47 57 67 77
-      in0 = _mm_unpacklo_epi64(tr1_0, tr1_4);
-      in1 = _mm_unpackhi_epi64(tr1_0, tr1_4);
-      in2 = _mm_unpacklo_epi64(tr1_2, tr1_6);
-      in3 = _mm_unpackhi_epi64(tr1_2, tr1_6);
-      in4 = _mm_unpacklo_epi64(tr1_1, tr1_5);
-      in5 = _mm_unpackhi_epi64(tr1_1, tr1_5);
-      in6 = _mm_unpacklo_epi64(tr1_3, tr1_7);
-      in7 = _mm_unpackhi_epi64(tr1_3, tr1_7);
-      // 00 10 20 30 40 50 60 70
-      // 01 11 21 31 41 51 61 71
-      // 02 12 22 32 42 52 62 72
-      // 03 13 23 33 43 53 63 73
-      // 04 14 24 34 44 54 64 74
-      // 05 15 25 35 45 55 65 75
-      // 06 16 26 36 46 56 66 76
-      // 07 17 27 37 47 57 67 77
-    }
-  }
-  // Post-condition output and store it
-  {
-    // Post-condition (division by two)
-    //    division of two 16 bits signed numbers using shifts
-    //    n / 2 = (n - (n >> 15)) >> 1
-    const __m128i sign_in0 = _mm_srai_epi16(in0, 15);
-    const __m128i sign_in1 = _mm_srai_epi16(in1, 15);
-    const __m128i sign_in2 = _mm_srai_epi16(in2, 15);
-    const __m128i sign_in3 = _mm_srai_epi16(in3, 15);
-    const __m128i sign_in4 = _mm_srai_epi16(in4, 15);
-    const __m128i sign_in5 = _mm_srai_epi16(in5, 15);
-    const __m128i sign_in6 = _mm_srai_epi16(in6, 15);
-    const __m128i sign_in7 = _mm_srai_epi16(in7, 15);
-    in0 = _mm_sub_epi16(in0, sign_in0);
-    in1 = _mm_sub_epi16(in1, sign_in1);
-    in2 = _mm_sub_epi16(in2, sign_in2);
-    in3 = _mm_sub_epi16(in3, sign_in3);
-    in4 = _mm_sub_epi16(in4, sign_in4);
-    in5 = _mm_sub_epi16(in5, sign_in5);
-    in6 = _mm_sub_epi16(in6, sign_in6);
-    in7 = _mm_sub_epi16(in7, sign_in7);
-    in0 = _mm_srai_epi16(in0, 1);
-    in1 = _mm_srai_epi16(in1, 1);
-    in2 = _mm_srai_epi16(in2, 1);
-    in3 = _mm_srai_epi16(in3, 1);
-    in4 = _mm_srai_epi16(in4, 1);
-    in5 = _mm_srai_epi16(in5, 1);
-    in6 = _mm_srai_epi16(in6, 1);
-    in7 = _mm_srai_epi16(in7, 1);
-  }
-
-  iscan_ptr += n_coeffs;
-  qcoeff_ptr += n_coeffs;
-  dqcoeff_ptr += n_coeffs;
-  n_coeffs = -n_coeffs;
-  zero = _mm_setzero_si128();
-
-  if (!skip_block) {
-    __m128i eob;
-    __m128i round, quant, dequant;
-    {
-      __m128i coeff0, coeff1;
-
-      // Setup global values
-      {
-        round = _mm_load_si128((const __m128i *)round_ptr);
-        quant = _mm_load_si128((const __m128i *)quant_ptr);
-        dequant = _mm_load_si128((const __m128i *)dequant_ptr);
-      }
-
-      {
-        __m128i coeff0_sign, coeff1_sign;
-        __m128i qcoeff0, qcoeff1;
-        __m128i qtmp0, qtmp1;
-        // Do DC and first 15 AC
-        coeff0 = *in[0];
-        coeff1 = *in[1];
-
-        // Poor man's sign extract
-        coeff0_sign = _mm_srai_epi16(coeff0, 15);
-        coeff1_sign = _mm_srai_epi16(coeff1, 15);
-        qcoeff0 = _mm_xor_si128(coeff0, coeff0_sign);
-        qcoeff1 = _mm_xor_si128(coeff1, coeff1_sign);
-        qcoeff0 = _mm_sub_epi16(qcoeff0, coeff0_sign);
-        qcoeff1 = _mm_sub_epi16(qcoeff1, coeff1_sign);
-
-        qcoeff0 = _mm_adds_epi16(qcoeff0, round);
-        round = _mm_unpackhi_epi64(round, round);
-        qcoeff1 = _mm_adds_epi16(qcoeff1, round);
-        qtmp0 = _mm_mulhi_epi16(qcoeff0, quant);
-        quant = _mm_unpackhi_epi64(quant, quant);
-        qtmp1 = _mm_mulhi_epi16(qcoeff1, quant);
-
-        // Reinsert signs
-        qcoeff0 = _mm_xor_si128(qtmp0, coeff0_sign);
-        qcoeff1 = _mm_xor_si128(qtmp1, coeff1_sign);
-        qcoeff0 = _mm_sub_epi16(qcoeff0, coeff0_sign);
-        qcoeff1 = _mm_sub_epi16(qcoeff1, coeff1_sign);
-
-        _mm_store_si128((__m128i *)(qcoeff_ptr + n_coeffs), qcoeff0);
-        _mm_store_si128((__m128i *)(qcoeff_ptr + n_coeffs) + 1, qcoeff1);
-
-        coeff0 = _mm_mullo_epi16(qcoeff0, dequant);
-        dequant = _mm_unpackhi_epi64(dequant, dequant);
-        coeff1 = _mm_mullo_epi16(qcoeff1, dequant);
-
-        _mm_store_si128((__m128i *)(dqcoeff_ptr + n_coeffs), coeff0);
-        _mm_store_si128((__m128i *)(dqcoeff_ptr + n_coeffs) + 1, coeff1);
-      }
-
-      {
-        // Scan for eob
-        __m128i zero_coeff0, zero_coeff1;
-        __m128i nzero_coeff0, nzero_coeff1;
-        __m128i iscan0, iscan1;
-        __m128i eob1;
-        zero_coeff0 = _mm_cmpeq_epi16(coeff0, zero);
-        zero_coeff1 = _mm_cmpeq_epi16(coeff1, zero);
-        nzero_coeff0 = _mm_cmpeq_epi16(zero_coeff0, zero);
-        nzero_coeff1 = _mm_cmpeq_epi16(zero_coeff1, zero);
-        iscan0 = _mm_load_si128((const __m128i *)(iscan_ptr + n_coeffs));
-        iscan1 = _mm_load_si128((const __m128i *)(iscan_ptr + n_coeffs) + 1);
-        // Add one to convert from indices to counts
-        iscan0 = _mm_sub_epi16(iscan0, nzero_coeff0);
-        iscan1 = _mm_sub_epi16(iscan1, nzero_coeff1);
-        eob = _mm_and_si128(iscan0, nzero_coeff0);
-        eob1 = _mm_and_si128(iscan1, nzero_coeff1);
-        eob = _mm_max_epi16(eob, eob1);
-      }
-      n_coeffs += 8 * 2;
-    }
-
-    // AC only loop
-    index = 2;
-    while (n_coeffs < 0) {
-      __m128i coeff0, coeff1;
-      {
-        __m128i coeff0_sign, coeff1_sign;
-        __m128i qcoeff0, qcoeff1;
-        __m128i qtmp0, qtmp1;
-
-        assert(index < (int)(sizeof(in) / sizeof(in[0])) - 1);
-        coeff0 = *in[index];
-        coeff1 = *in[index + 1];
-
-        // Poor man's sign extract
-        coeff0_sign = _mm_srai_epi16(coeff0, 15);
-        coeff1_sign = _mm_srai_epi16(coeff1, 15);
-        qcoeff0 = _mm_xor_si128(coeff0, coeff0_sign);
-        qcoeff1 = _mm_xor_si128(coeff1, coeff1_sign);
-        qcoeff0 = _mm_sub_epi16(qcoeff0, coeff0_sign);
-        qcoeff1 = _mm_sub_epi16(qcoeff1, coeff1_sign);
-
-        qcoeff0 = _mm_adds_epi16(qcoeff0, round);
-        qcoeff1 = _mm_adds_epi16(qcoeff1, round);
-        qtmp0 = _mm_mulhi_epi16(qcoeff0, quant);
-        qtmp1 = _mm_mulhi_epi16(qcoeff1, quant);
-
-        // Reinsert signs
-        qcoeff0 = _mm_xor_si128(qtmp0, coeff0_sign);
-        qcoeff1 = _mm_xor_si128(qtmp1, coeff1_sign);
-        qcoeff0 = _mm_sub_epi16(qcoeff0, coeff0_sign);
-        qcoeff1 = _mm_sub_epi16(qcoeff1, coeff1_sign);
-
-        _mm_store_si128((__m128i *)(qcoeff_ptr + n_coeffs), qcoeff0);
-        _mm_store_si128((__m128i *)(qcoeff_ptr + n_coeffs) + 1, qcoeff1);
-
-        coeff0 = _mm_mullo_epi16(qcoeff0, dequant);
-        coeff1 = _mm_mullo_epi16(qcoeff1, dequant);
-
-        _mm_store_si128((__m128i *)(dqcoeff_ptr + n_coeffs), coeff0);
-        _mm_store_si128((__m128i *)(dqcoeff_ptr + n_coeffs) + 1, coeff1);
-      }
-
-      {
-        // Scan for eob
-        __m128i zero_coeff0, zero_coeff1;
-        __m128i nzero_coeff0, nzero_coeff1;
-        __m128i iscan0, iscan1;
-        __m128i eob0, eob1;
-        zero_coeff0 = _mm_cmpeq_epi16(coeff0, zero);
-        zero_coeff1 = _mm_cmpeq_epi16(coeff1, zero);
-        nzero_coeff0 = _mm_cmpeq_epi16(zero_coeff0, zero);
-        nzero_coeff1 = _mm_cmpeq_epi16(zero_coeff1, zero);
-        iscan0 = _mm_load_si128((const __m128i *)(iscan_ptr + n_coeffs));
-        iscan1 = _mm_load_si128((const __m128i *)(iscan_ptr + n_coeffs) + 1);
-        // Add one to convert from indices to counts
-        iscan0 = _mm_sub_epi16(iscan0, nzero_coeff0);
-        iscan1 = _mm_sub_epi16(iscan1, nzero_coeff1);
-        eob0 = _mm_and_si128(iscan0, nzero_coeff0);
-        eob1 = _mm_and_si128(iscan1, nzero_coeff1);
-        eob0 = _mm_max_epi16(eob0, eob1);
-        eob = _mm_max_epi16(eob, eob0);
-      }
-      n_coeffs += 8 * 2;
-      index += 2;
-    }
-
-    // Accumulate EOB
-    {
-      __m128i eob_shuffled;
-      eob_shuffled = _mm_shuffle_epi32(eob, 0xe);
-      eob = _mm_max_epi16(eob, eob_shuffled);
-      eob_shuffled = _mm_shufflelo_epi16(eob, 0xe);
-      eob = _mm_max_epi16(eob, eob_shuffled);
-      eob_shuffled = _mm_shufflelo_epi16(eob, 0x1);
-      eob = _mm_max_epi16(eob, eob_shuffled);
-      *eob_ptr = _mm_extract_epi16(eob, 1);
-    }
-  } else {
-    do {
-      _mm_store_si128((__m128i *)(dqcoeff_ptr + n_coeffs), zero);
-      _mm_store_si128((__m128i *)(dqcoeff_ptr + n_coeffs) + 1, zero);
-      _mm_store_si128((__m128i *)(qcoeff_ptr + n_coeffs), zero);
-      _mm_store_si128((__m128i *)(qcoeff_ptr + n_coeffs) + 1, zero);
-      n_coeffs += 8 * 2;
-    } while (n_coeffs < 0);
-    *eob_ptr = 0;
-  }
-}
-
 // load 8x8 array
 static INLINE void load_buffer_8x8(const int16_t *input, __m128i *in,
                                    int stride) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_dct_ssse3.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_dct_ssse3.c
deleted file mode 100644
index bf874a0..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_dct_ssse3.c
+++ /dev/null
@@ -1,465 +0,0 @@
-/*
- *  Copyright (c) 2014 The WebM project authors. All Rights Reserved.
- *
- *  Use of this source code is governed by a BSD-style license
- *  that can be found in the LICENSE file in the root of the source
- *  tree. An additional intellectual property rights grant can be found
- *  in the file PATENTS.  All contributing project authors may
- *  be found in the AUTHORS file in the root of the source tree.
- */
-
-#include <assert.h>
-#include <tmmintrin.h>  // SSSE3
-
-#include "./vp9_rtcd.h"
-#include "./vpx_config.h"
-#include "vpx_dsp/vpx_dsp_common.h"
-#include "vpx_dsp/x86/bitdepth_conversion_sse2.h"
-#include "vpx_dsp/x86/inv_txfm_sse2.h"
-#include "vpx_dsp/x86/txfm_common_sse2.h"
-
-void vp9_fdct8x8_quant_ssse3(
-    const int16_t *input, int stride, tran_low_t *coeff_ptr, intptr_t n_coeffs,
-    int skip_block, const int16_t *round_ptr, const int16_t *quant_ptr,
-    tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr,
-    uint16_t *eob_ptr, const int16_t *scan_ptr, const int16_t *iscan_ptr) {
-  __m128i zero;
-  int pass;
-
-  // Constants
-  //    When we use them, in one case, they are all the same. In all others
-  //    it's a pair of them that we need to repeat four times. This is done
-  //    by constructing the 32 bit constant corresponding to that pair.
-  const __m128i k__dual_p16_p16 = dual_set_epi16(23170, 23170);
-  const __m128i k__cospi_p16_p16 = _mm_set1_epi16(cospi_16_64);
-  const __m128i k__cospi_p16_m16 = pair_set_epi16(cospi_16_64, -cospi_16_64);
-  const __m128i k__cospi_p24_p08 = pair_set_epi16(cospi_24_64, cospi_8_64);
-  const __m128i k__cospi_m08_p24 = pair_set_epi16(-cospi_8_64, cospi_24_64);
-  const __m128i k__cospi_p28_p04 = pair_set_epi16(cospi_28_64, cospi_4_64);
-  const __m128i k__cospi_m04_p28 = pair_set_epi16(-cospi_4_64, cospi_28_64);
-  const __m128i k__cospi_p12_p20 = pair_set_epi16(cospi_12_64, cospi_20_64);
-  const __m128i k__cospi_m20_p12 = pair_set_epi16(-cospi_20_64, cospi_12_64);
-  const __m128i k__DCT_CONST_ROUNDING = _mm_set1_epi32(DCT_CONST_ROUNDING);
-  // Load input
-  __m128i in0 = _mm_load_si128((const __m128i *)(input + 0 * stride));
-  __m128i in1 = _mm_load_si128((const __m128i *)(input + 1 * stride));
-  __m128i in2 = _mm_load_si128((const __m128i *)(input + 2 * stride));
-  __m128i in3 = _mm_load_si128((const __m128i *)(input + 3 * stride));
-  __m128i in4 = _mm_load_si128((const __m128i *)(input + 4 * stride));
-  __m128i in5 = _mm_load_si128((const __m128i *)(input + 5 * stride));
-  __m128i in6 = _mm_load_si128((const __m128i *)(input + 6 * stride));
-  __m128i in7 = _mm_load_si128((const __m128i *)(input + 7 * stride));
-  __m128i *in[8];
-  int index = 0;
-
-  (void)scan_ptr;
-  (void)coeff_ptr;
-
-  // Pre-condition input (shift by two)
-  in0 = _mm_slli_epi16(in0, 2);
-  in1 = _mm_slli_epi16(in1, 2);
-  in2 = _mm_slli_epi16(in2, 2);
-  in3 = _mm_slli_epi16(in3, 2);
-  in4 = _mm_slli_epi16(in4, 2);
-  in5 = _mm_slli_epi16(in5, 2);
-  in6 = _mm_slli_epi16(in6, 2);
-  in7 = _mm_slli_epi16(in7, 2);
-
-  in[0] = &in0;
-  in[1] = &in1;
-  in[2] = &in2;
-  in[3] = &in3;
-  in[4] = &in4;
-  in[5] = &in5;
-  in[6] = &in6;
-  in[7] = &in7;
-
-  // We do two passes, first the columns, then the rows. The results of the
-  // first pass are transposed so that the same column code can be reused. The
-  // results of the second pass are also transposed so that the rows (processed
-  // as columns) are put back in row positions.
-  for (pass = 0; pass < 2; pass++) {
-    // To store results of each pass before the transpose.
-    __m128i res0, res1, res2, res3, res4, res5, res6, res7;
-    // Add/subtract
-    const __m128i q0 = _mm_add_epi16(in0, in7);
-    const __m128i q1 = _mm_add_epi16(in1, in6);
-    const __m128i q2 = _mm_add_epi16(in2, in5);
-    const __m128i q3 = _mm_add_epi16(in3, in4);
-    const __m128i q4 = _mm_sub_epi16(in3, in4);
-    const __m128i q5 = _mm_sub_epi16(in2, in5);
-    const __m128i q6 = _mm_sub_epi16(in1, in6);
-    const __m128i q7 = _mm_sub_epi16(in0, in7);
-    // Work on first four results
-    {
-      // Add/subtract
-      const __m128i r0 = _mm_add_epi16(q0, q3);
-      const __m128i r1 = _mm_add_epi16(q1, q2);
-      const __m128i r2 = _mm_sub_epi16(q1, q2);
-      const __m128i r3 = _mm_sub_epi16(q0, q3);
-      // Interleave to do the multiply by constants which gets us into 32bits
-      const __m128i t0 = _mm_unpacklo_epi16(r0, r1);
-      const __m128i t1 = _mm_unpackhi_epi16(r0, r1);
-      const __m128i t2 = _mm_unpacklo_epi16(r2, r3);
-      const __m128i t3 = _mm_unpackhi_epi16(r2, r3);
-
-      const __m128i u0 = _mm_madd_epi16(t0, k__cospi_p16_p16);
-      const __m128i u1 = _mm_madd_epi16(t1, k__cospi_p16_p16);
-      const __m128i u2 = _mm_madd_epi16(t0, k__cospi_p16_m16);
-      const __m128i u3 = _mm_madd_epi16(t1, k__cospi_p16_m16);
-
-      const __m128i u4 = _mm_madd_epi16(t2, k__cospi_p24_p08);
-      const __m128i u5 = _mm_madd_epi16(t3, k__cospi_p24_p08);
-      const __m128i u6 = _mm_madd_epi16(t2, k__cospi_m08_p24);
-      const __m128i u7 = _mm_madd_epi16(t3, k__cospi_m08_p24);
-      // dct_const_round_shift
-
-      const __m128i v0 = _mm_add_epi32(u0, k__DCT_CONST_ROUNDING);
-      const __m128i v1 = _mm_add_epi32(u1, k__DCT_CONST_ROUNDING);
-      const __m128i v2 = _mm_add_epi32(u2, k__DCT_CONST_ROUNDING);
-      const __m128i v3 = _mm_add_epi32(u3, k__DCT_CONST_ROUNDING);
-
-      const __m128i v4 = _mm_add_epi32(u4, k__DCT_CONST_ROUNDING);
-      const __m128i v5 = _mm_add_epi32(u5, k__DCT_CONST_ROUNDING);
-      const __m128i v6 = _mm_add_epi32(u6, k__DCT_CONST_ROUNDING);
-      const __m128i v7 = _mm_add_epi32(u7, k__DCT_CONST_ROUNDING);
-
-      const __m128i w0 = _mm_srai_epi32(v0, DCT_CONST_BITS);
-      const __m128i w1 = _mm_srai_epi32(v1, DCT_CONST_BITS);
-      const __m128i w2 = _mm_srai_epi32(v2, DCT_CONST_BITS);
-      const __m128i w3 = _mm_srai_epi32(v3, DCT_CONST_BITS);
-
-      const __m128i w4 = _mm_srai_epi32(v4, DCT_CONST_BITS);
-      const __m128i w5 = _mm_srai_epi32(v5, DCT_CONST_BITS);
-      const __m128i w6 = _mm_srai_epi32(v6, DCT_CONST_BITS);
-      const __m128i w7 = _mm_srai_epi32(v7, DCT_CONST_BITS);
-      // Combine
-
-      res0 = _mm_packs_epi32(w0, w1);
-      res4 = _mm_packs_epi32(w2, w3);
-      res2 = _mm_packs_epi32(w4, w5);
-      res6 = _mm_packs_epi32(w6, w7);
-    }
-    // Work on next four results
-    {
-      // Interleave to do the multiply by constants which gets us into 32bits
-      const __m128i d0 = _mm_sub_epi16(q6, q5);
-      const __m128i d1 = _mm_add_epi16(q6, q5);
-      const __m128i r0 = _mm_mulhrs_epi16(d0, k__dual_p16_p16);
-      const __m128i r1 = _mm_mulhrs_epi16(d1, k__dual_p16_p16);
-
-      // Add/subtract
-      const __m128i x0 = _mm_add_epi16(q4, r0);
-      const __m128i x1 = _mm_sub_epi16(q4, r0);
-      const __m128i x2 = _mm_sub_epi16(q7, r1);
-      const __m128i x3 = _mm_add_epi16(q7, r1);
-      // Interleave to do the multiply by constants which gets us into 32bits
-      const __m128i t0 = _mm_unpacklo_epi16(x0, x3);
-      const __m128i t1 = _mm_unpackhi_epi16(x0, x3);
-      const __m128i t2 = _mm_unpacklo_epi16(x1, x2);
-      const __m128i t3 = _mm_unpackhi_epi16(x1, x2);
-      const __m128i u0 = _mm_madd_epi16(t0, k__cospi_p28_p04);
-      const __m128i u1 = _mm_madd_epi16(t1, k__cospi_p28_p04);
-      const __m128i u2 = _mm_madd_epi16(t0, k__cospi_m04_p28);
-      const __m128i u3 = _mm_madd_epi16(t1, k__cospi_m04_p28);
-      const __m128i u4 = _mm_madd_epi16(t2, k__cospi_p12_p20);
-      const __m128i u5 = _mm_madd_epi16(t3, k__cospi_p12_p20);
-      const __m128i u6 = _mm_madd_epi16(t2, k__cospi_m20_p12);
-      const __m128i u7 = _mm_madd_epi16(t3, k__cospi_m20_p12);
-      // dct_const_round_shift
-      const __m128i v0 = _mm_add_epi32(u0, k__DCT_CONST_ROUNDING);
-      const __m128i v1 = _mm_add_epi32(u1, k__DCT_CONST_ROUNDING);
-      const __m128i v2 = _mm_add_epi32(u2, k__DCT_CONST_ROUNDING);
-      const __m128i v3 = _mm_add_epi32(u3, k__DCT_CONST_ROUNDING);
-      const __m128i v4 = _mm_add_epi32(u4, k__DCT_CONST_ROUNDING);
-      const __m128i v5 = _mm_add_epi32(u5, k__DCT_CONST_ROUNDING);
-      const __m128i v6 = _mm_add_epi32(u6, k__DCT_CONST_ROUNDING);
-      const __m128i v7 = _mm_add_epi32(u7, k__DCT_CONST_ROUNDING);
-      const __m128i w0 = _mm_srai_epi32(v0, DCT_CONST_BITS);
-      const __m128i w1 = _mm_srai_epi32(v1, DCT_CONST_BITS);
-      const __m128i w2 = _mm_srai_epi32(v2, DCT_CONST_BITS);
-      const __m128i w3 = _mm_srai_epi32(v3, DCT_CONST_BITS);
-      const __m128i w4 = _mm_srai_epi32(v4, DCT_CONST_BITS);
-      const __m128i w5 = _mm_srai_epi32(v5, DCT_CONST_BITS);
-      const __m128i w6 = _mm_srai_epi32(v6, DCT_CONST_BITS);
-      const __m128i w7 = _mm_srai_epi32(v7, DCT_CONST_BITS);
-      // Combine
-      res1 = _mm_packs_epi32(w0, w1);
-      res7 = _mm_packs_epi32(w2, w3);
-      res5 = _mm_packs_epi32(w4, w5);
-      res3 = _mm_packs_epi32(w6, w7);
-    }
-    // Transpose the 8x8.
-    {
-      // 00 01 02 03 04 05 06 07
-      // 10 11 12 13 14 15 16 17
-      // 20 21 22 23 24 25 26 27
-      // 30 31 32 33 34 35 36 37
-      // 40 41 42 43 44 45 46 47
-      // 50 51 52 53 54 55 56 57
-      // 60 61 62 63 64 65 66 67
-      // 70 71 72 73 74 75 76 77
-      const __m128i tr0_0 = _mm_unpacklo_epi16(res0, res1);
-      const __m128i tr0_1 = _mm_unpacklo_epi16(res2, res3);
-      const __m128i tr0_2 = _mm_unpackhi_epi16(res0, res1);
-      const __m128i tr0_3 = _mm_unpackhi_epi16(res2, res3);
-      const __m128i tr0_4 = _mm_unpacklo_epi16(res4, res5);
-      const __m128i tr0_5 = _mm_unpacklo_epi16(res6, res7);
-      const __m128i tr0_6 = _mm_unpackhi_epi16(res4, res5);
-      const __m128i tr0_7 = _mm_unpackhi_epi16(res6, res7);
-      // 00 10 01 11 02 12 03 13
-      // 20 30 21 31 22 32 23 33
-      // 04 14 05 15 06 16 07 17
-      // 24 34 25 35 26 36 27 37
-      // 40 50 41 51 42 52 43 53
-      // 60 70 61 71 62 72 63 73
-      // 54 54 55 55 56 56 57 57
-      // 64 74 65 75 66 76 67 77
-      const __m128i tr1_0 = _mm_unpacklo_epi32(tr0_0, tr0_1);
-      const __m128i tr1_1 = _mm_unpacklo_epi32(tr0_2, tr0_3);
-      const __m128i tr1_2 = _mm_unpackhi_epi32(tr0_0, tr0_1);
-      const __m128i tr1_3 = _mm_unpackhi_epi32(tr0_2, tr0_3);
-      const __m128i tr1_4 = _mm_unpacklo_epi32(tr0_4, tr0_5);
-      const __m128i tr1_5 = _mm_unpacklo_epi32(tr0_6, tr0_7);
-      const __m128i tr1_6 = _mm_unpackhi_epi32(tr0_4, tr0_5);
-      const __m128i tr1_7 = _mm_unpackhi_epi32(tr0_6, tr0_7);
-      // 00 10 20 30 01 11 21 31
-      // 40 50 60 70 41 51 61 71
-      // 02 12 22 32 03 13 23 33
-      // 42 52 62 72 43 53 63 73
-      // 04 14 24 34 05 15 21 36
-      // 44 54 64 74 45 55 61 76
-      // 06 16 26 36 07 17 27 37
-      // 46 56 66 76 47 57 67 77
-      in0 = _mm_unpacklo_epi64(tr1_0, tr1_4);
-      in1 = _mm_unpackhi_epi64(tr1_0, tr1_4);
-      in2 = _mm_unpacklo_epi64(tr1_2, tr1_6);
-      in3 = _mm_unpackhi_epi64(tr1_2, tr1_6);
-      in4 = _mm_unpacklo_epi64(tr1_1, tr1_5);
-      in5 = _mm_unpackhi_epi64(tr1_1, tr1_5);
-      in6 = _mm_unpacklo_epi64(tr1_3, tr1_7);
-      in7 = _mm_unpackhi_epi64(tr1_3, tr1_7);
-      // 00 10 20 30 40 50 60 70
-      // 01 11 21 31 41 51 61 71
-      // 02 12 22 32 42 52 62 72
-      // 03 13 23 33 43 53 63 73
-      // 04 14 24 34 44 54 64 74
-      // 05 15 25 35 45 55 65 75
-      // 06 16 26 36 46 56 66 76
-      // 07 17 27 37 47 57 67 77
-    }
-  }
-  // Post-condition output and store it
-  {
-    // Post-condition (division by two)
-    //    division of two 16 bits signed numbers using shifts
-    //    n / 2 = (n - (n >> 15)) >> 1
-    const __m128i sign_in0 = _mm_srai_epi16(in0, 15);
-    const __m128i sign_in1 = _mm_srai_epi16(in1, 15);
-    const __m128i sign_in2 = _mm_srai_epi16(in2, 15);
-    const __m128i sign_in3 = _mm_srai_epi16(in3, 15);
-    const __m128i sign_in4 = _mm_srai_epi16(in4, 15);
-    const __m128i sign_in5 = _mm_srai_epi16(in5, 15);
-    const __m128i sign_in6 = _mm_srai_epi16(in6, 15);
-    const __m128i sign_in7 = _mm_srai_epi16(in7, 15);
-    in0 = _mm_sub_epi16(in0, sign_in0);
-    in1 = _mm_sub_epi16(in1, sign_in1);
-    in2 = _mm_sub_epi16(in2, sign_in2);
-    in3 = _mm_sub_epi16(in3, sign_in3);
-    in4 = _mm_sub_epi16(in4, sign_in4);
-    in5 = _mm_sub_epi16(in5, sign_in5);
-    in6 = _mm_sub_epi16(in6, sign_in6);
-    in7 = _mm_sub_epi16(in7, sign_in7);
-    in0 = _mm_srai_epi16(in0, 1);
-    in1 = _mm_srai_epi16(in1, 1);
-    in2 = _mm_srai_epi16(in2, 1);
-    in3 = _mm_srai_epi16(in3, 1);
-    in4 = _mm_srai_epi16(in4, 1);
-    in5 = _mm_srai_epi16(in5, 1);
-    in6 = _mm_srai_epi16(in6, 1);
-    in7 = _mm_srai_epi16(in7, 1);
-  }
-
-  iscan_ptr += n_coeffs;
-  qcoeff_ptr += n_coeffs;
-  dqcoeff_ptr += n_coeffs;
-  n_coeffs = -n_coeffs;
-  zero = _mm_setzero_si128();
-
-  if (!skip_block) {
-    __m128i eob;
-    __m128i round, quant, dequant, thr;
-    int16_t nzflag;
-    {
-      __m128i coeff0, coeff1;
-
-      // Setup global values
-      {
-        round = _mm_load_si128((const __m128i *)round_ptr);
-        quant = _mm_load_si128((const __m128i *)quant_ptr);
-        dequant = _mm_load_si128((const __m128i *)dequant_ptr);
-      }
-
-      {
-        __m128i coeff0_sign, coeff1_sign;
-        __m128i qcoeff0, qcoeff1;
-        __m128i qtmp0, qtmp1;
-        // Do DC and first 15 AC
-        coeff0 = *in[0];
-        coeff1 = *in[1];
-
-        // Poor man's sign extract
-        coeff0_sign = _mm_srai_epi16(coeff0, 15);
-        coeff1_sign = _mm_srai_epi16(coeff1, 15);
-        qcoeff0 = _mm_xor_si128(coeff0, coeff0_sign);
-        qcoeff1 = _mm_xor_si128(coeff1, coeff1_sign);
-        qcoeff0 = _mm_sub_epi16(qcoeff0, coeff0_sign);
-        qcoeff1 = _mm_sub_epi16(qcoeff1, coeff1_sign);
-
-        qcoeff0 = _mm_adds_epi16(qcoeff0, round);
-        round = _mm_unpackhi_epi64(round, round);
-        qcoeff1 = _mm_adds_epi16(qcoeff1, round);
-        qtmp0 = _mm_mulhi_epi16(qcoeff0, quant);
-        quant = _mm_unpackhi_epi64(quant, quant);
-        qtmp1 = _mm_mulhi_epi16(qcoeff1, quant);
-
-        // Reinsert signs
-        qcoeff0 = _mm_xor_si128(qtmp0, coeff0_sign);
-        qcoeff1 = _mm_xor_si128(qtmp1, coeff1_sign);
-        qcoeff0 = _mm_sub_epi16(qcoeff0, coeff0_sign);
-        qcoeff1 = _mm_sub_epi16(qcoeff1, coeff1_sign);
-
-        store_tran_low(qcoeff0, qcoeff_ptr + n_coeffs);
-        store_tran_low(qcoeff1, qcoeff_ptr + n_coeffs + 8);
-
-        coeff0 = _mm_mullo_epi16(qcoeff0, dequant);
-        dequant = _mm_unpackhi_epi64(dequant, dequant);
-        coeff1 = _mm_mullo_epi16(qcoeff1, dequant);
-
-        store_tran_low(coeff0, dqcoeff_ptr + n_coeffs);
-        store_tran_low(coeff1, dqcoeff_ptr + n_coeffs + 8);
-      }
-
-      {
-        // Scan for eob
-        __m128i zero_coeff0, zero_coeff1;
-        __m128i nzero_coeff0, nzero_coeff1;
-        __m128i iscan0, iscan1;
-        __m128i eob1;
-        zero_coeff0 = _mm_cmpeq_epi16(coeff0, zero);
-        zero_coeff1 = _mm_cmpeq_epi16(coeff1, zero);
-        nzero_coeff0 = _mm_cmpeq_epi16(zero_coeff0, zero);
-        nzero_coeff1 = _mm_cmpeq_epi16(zero_coeff1, zero);
-        iscan0 = _mm_load_si128((const __m128i *)(iscan_ptr + n_coeffs));
-        iscan1 = _mm_load_si128((const __m128i *)(iscan_ptr + n_coeffs) + 1);
-        // Add one to convert from indices to counts
-        iscan0 = _mm_sub_epi16(iscan0, nzero_coeff0);
-        iscan1 = _mm_sub_epi16(iscan1, nzero_coeff1);
-        eob = _mm_and_si128(iscan0, nzero_coeff0);
-        eob1 = _mm_and_si128(iscan1, nzero_coeff1);
-        eob = _mm_max_epi16(eob, eob1);
-      }
-      n_coeffs += 8 * 2;
-    }
-
-    // AC only loop
-    index = 2;
-    thr = _mm_srai_epi16(dequant, 1);
-    while (n_coeffs < 0) {
-      __m128i coeff0, coeff1;
-      {
-        __m128i coeff0_sign, coeff1_sign;
-        __m128i qcoeff0, qcoeff1;
-        __m128i qtmp0, qtmp1;
-
-        assert(index < (int)(sizeof(in) / sizeof(in[0])) - 1);
-        coeff0 = *in[index];
-        coeff1 = *in[index + 1];
-
-        // Poor man's sign extract
-        coeff0_sign = _mm_srai_epi16(coeff0, 15);
-        coeff1_sign = _mm_srai_epi16(coeff1, 15);
-        qcoeff0 = _mm_xor_si128(coeff0, coeff0_sign);
-        qcoeff1 = _mm_xor_si128(coeff1, coeff1_sign);
-        qcoeff0 = _mm_sub_epi16(qcoeff0, coeff0_sign);
-        qcoeff1 = _mm_sub_epi16(qcoeff1, coeff1_sign);
-
-        nzflag = _mm_movemask_epi8(_mm_cmpgt_epi16(qcoeff0, thr)) |
-                 _mm_movemask_epi8(_mm_cmpgt_epi16(qcoeff1, thr));
-
-        if (nzflag) {
-          qcoeff0 = _mm_adds_epi16(qcoeff0, round);
-          qcoeff1 = _mm_adds_epi16(qcoeff1, round);
-          qtmp0 = _mm_mulhi_epi16(qcoeff0, quant);
-          qtmp1 = _mm_mulhi_epi16(qcoeff1, quant);
-
-          // Reinsert signs
-          qcoeff0 = _mm_xor_si128(qtmp0, coeff0_sign);
-          qcoeff1 = _mm_xor_si128(qtmp1, coeff1_sign);
-          qcoeff0 = _mm_sub_epi16(qcoeff0, coeff0_sign);
-          qcoeff1 = _mm_sub_epi16(qcoeff1, coeff1_sign);
-
-          store_tran_low(qcoeff0, qcoeff_ptr + n_coeffs);
-          store_tran_low(qcoeff1, qcoeff_ptr + n_coeffs + 8);
-
-          coeff0 = _mm_mullo_epi16(qcoeff0, dequant);
-          coeff1 = _mm_mullo_epi16(qcoeff1, dequant);
-
-          store_tran_low(coeff0, dqcoeff_ptr + n_coeffs);
-          store_tran_low(coeff1, dqcoeff_ptr + n_coeffs + 8);
-        } else {
-          // Maybe a more efficient way to store 0?
-          store_zero_tran_low(qcoeff_ptr + n_coeffs);
-          store_zero_tran_low(qcoeff_ptr + n_coeffs + 8);
-
-          store_zero_tran_low(dqcoeff_ptr + n_coeffs);
-          store_zero_tran_low(dqcoeff_ptr + n_coeffs + 8);
-        }
-      }
-
-      if (nzflag) {
-        // Scan for eob
-        __m128i zero_coeff0, zero_coeff1;
-        __m128i nzero_coeff0, nzero_coeff1;
-        __m128i iscan0, iscan1;
-        __m128i eob0, eob1;
-        zero_coeff0 = _mm_cmpeq_epi16(coeff0, zero);
-        zero_coeff1 = _mm_cmpeq_epi16(coeff1, zero);
-        nzero_coeff0 = _mm_cmpeq_epi16(zero_coeff0, zero);
-        nzero_coeff1 = _mm_cmpeq_epi16(zero_coeff1, zero);
-        iscan0 = _mm_load_si128((const __m128i *)(iscan_ptr + n_coeffs));
-        iscan1 = _mm_load_si128((const __m128i *)(iscan_ptr + n_coeffs) + 1);
-        // Add one to convert from indices to counts
-        iscan0 = _mm_sub_epi16(iscan0, nzero_coeff0);
-        iscan1 = _mm_sub_epi16(iscan1, nzero_coeff1);
-        eob0 = _mm_and_si128(iscan0, nzero_coeff0);
-        eob1 = _mm_and_si128(iscan1, nzero_coeff1);
-        eob0 = _mm_max_epi16(eob0, eob1);
-        eob = _mm_max_epi16(eob, eob0);
-      }
-      n_coeffs += 8 * 2;
-      index += 2;
-    }
-
-    // Accumulate EOB
-    {
-      __m128i eob_shuffled;
-      eob_shuffled = _mm_shuffle_epi32(eob, 0xe);
-      eob = _mm_max_epi16(eob, eob_shuffled);
-      eob_shuffled = _mm_shufflelo_epi16(eob, 0xe);
-      eob = _mm_max_epi16(eob, eob_shuffled);
-      eob_shuffled = _mm_shufflelo_epi16(eob, 0x1);
-      eob = _mm_max_epi16(eob, eob_shuffled);
-      *eob_ptr = _mm_extract_epi16(eob, 1);
-    }
-  } else {
-    do {
-      store_zero_tran_low(dqcoeff_ptr + n_coeffs);
-      store_zero_tran_low(dqcoeff_ptr + n_coeffs + 8);
-      store_zero_tran_low(qcoeff_ptr + n_coeffs);
-      store_zero_tran_low(qcoeff_ptr + n_coeffs + 8);
-      n_coeffs += 8 * 2;
-    } while (n_coeffs < 0);
-    *eob_ptr = 0;
-  }
-}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_diamond_search_sad_avx.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_diamond_search_sad_avx.c
index 2f3c66c..4be6a5e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_diamond_search_sad_avx.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_diamond_search_sad_avx.c
@@ -114,7 +114,7 @@
   // Work out the start point for the search
   const uint8_t *best_address = in_what;
   const uint8_t *new_best_address = best_address;
-#if ARCH_X86_64
+#if VPX_ARCH_X86_64
   __m128i v_ba_q = _mm_set1_epi64x((intptr_t)best_address);
 #else
   __m128i v_ba_d = _mm_set1_epi32((intptr_t)best_address);
@@ -138,7 +138,7 @@
   for (i = 0, step = 0; step < tot_steps; step++) {
     for (j = 0; j < cfg->searches_per_step; j += 4, i += 4) {
       __m128i v_sad_d, v_cost_d, v_outside_d, v_inside_d, v_diff_mv_w;
-#if ARCH_X86_64
+#if VPX_ARCH_X86_64
       __m128i v_blocka[2];
 #else
       __m128i v_blocka[1];
@@ -160,7 +160,7 @@
       }
 
       // The inverse mask indicates which of the MVs are outside
-      v_outside_d = _mm_xor_si128(v_inside_d, _mm_set1_epi8(0xff));
+      v_outside_d = _mm_xor_si128(v_inside_d, _mm_set1_epi8((int8_t)0xff));
       // Shift right to keep the sign bit clear, we will use this later
       // to set the cost to the maximum value.
       v_outside_d = _mm_srli_epi32(v_outside_d, 1);
@@ -175,7 +175,7 @@
 
       // Compute the SIMD pointer offsets.
       {
-#if ARCH_X86_64  //  sizeof(intptr_t) == 8
+#if VPX_ARCH_X86_64  //  sizeof(intptr_t) == 8
         // Load the offsets
         __m128i v_bo10_q = _mm_loadu_si128((const __m128i *)&ss_os[i + 0]);
         __m128i v_bo32_q = _mm_loadu_si128((const __m128i *)&ss_os[i + 2]);
@@ -186,7 +186,7 @@
         // Compute the candidate addresses
         v_blocka[0] = _mm_add_epi64(v_ba_q, v_bo10_q);
         v_blocka[1] = _mm_add_epi64(v_ba_q, v_bo32_q);
-#else  // ARCH_X86 //  sizeof(intptr_t) == 4
+#else  // VPX_ARCH_X86 //  sizeof(intptr_t) == 4
         __m128i v_bo_d = _mm_loadu_si128((const __m128i *)&ss_os[i]);
         v_bo_d = _mm_and_si128(v_bo_d, v_inside_d);
         v_blocka[0] = _mm_add_epi32(v_ba_d, v_bo_d);
@@ -294,7 +294,7 @@
     best_address = new_best_address;
 
     v_bmv_w = _mm_set1_epi32(bmv.as_int);
-#if ARCH_X86_64
+#if VPX_ARCH_X86_64
     v_ba_q = _mm_set1_epi64x((intptr_t)best_address);
 #else
     v_ba_d = _mm_set1_epi32((intptr_t)best_address);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_error_sse2.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_error_sse2.asm
index 11d473b..7beec13 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_error_sse2.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_error_sse2.asm
@@ -58,7 +58,7 @@
   movhlps   m7, m6
   paddq     m4, m5
   paddq     m6, m7
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
   movq    rax, m4
   movq [sszq], m6
 %else
@@ -105,7 +105,7 @@
   ; accumulate horizontally and store in return value
   movhlps   m5, m4
   paddq     m4, m5
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
   movq    rax, m4
 %else
   pshufd   m5, m4, 0x1
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_highbd_block_error_intrin_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_highbd_block_error_intrin_sse2.c
index 91f627c..d7aafe7 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_highbd_block_error_intrin_sse2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_highbd_block_error_intrin_sse2.c
@@ -11,27 +11,28 @@
 #include <emmintrin.h>
 #include <stdio.h>
 
+#include "./vp9_rtcd.h"
 #include "vp9/common/vp9_common.h"
 
-int64_t vp9_highbd_block_error_sse2(tran_low_t *coeff, tran_low_t *dqcoeff,
-                                    intptr_t block_size, int64_t *ssz,
-                                    int bps) {
+int64_t vp9_highbd_block_error_sse2(const tran_low_t *coeff,
+                                    const tran_low_t *dqcoeff,
+                                    intptr_t block_size, int64_t *ssz, int bd) {
   int i, j, test;
   uint32_t temp[4];
   __m128i max, min, cmp0, cmp1, cmp2, cmp3;
   int64_t error = 0, sqcoeff = 0;
-  const int shift = 2 * (bps - 8);
+  const int shift = 2 * (bd - 8);
   const int rounding = shift > 0 ? 1 << (shift - 1) : 0;
 
   for (i = 0; i < block_size; i += 8) {
     // Load the data into xmm registers
-    __m128i mm_coeff = _mm_load_si128((__m128i *)(coeff + i));
-    __m128i mm_coeff2 = _mm_load_si128((__m128i *)(coeff + i + 4));
-    __m128i mm_dqcoeff = _mm_load_si128((__m128i *)(dqcoeff + i));
-    __m128i mm_dqcoeff2 = _mm_load_si128((__m128i *)(dqcoeff + i + 4));
+    __m128i mm_coeff = _mm_load_si128((const __m128i *)(coeff + i));
+    __m128i mm_coeff2 = _mm_load_si128((const __m128i *)(coeff + i + 4));
+    __m128i mm_dqcoeff = _mm_load_si128((const __m128i *)(dqcoeff + i));
+    __m128i mm_dqcoeff2 = _mm_load_si128((const __m128i *)(dqcoeff + i + 4));
     // Check if any values require more than 15 bit
     max = _mm_set1_epi32(0x3fff);
-    min = _mm_set1_epi32(0xffffc000);
+    min = _mm_set1_epi32((int32_t)0xffffc000);
     cmp0 = _mm_xor_si128(_mm_cmpgt_epi32(mm_coeff, max),
                          _mm_cmplt_epi32(mm_coeff, min));
     cmp1 = _mm_xor_si128(_mm_cmpgt_epi32(mm_coeff2, max),
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_quantize_avx2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_quantize_avx2.c
index 4bebc34..8dfdbd5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_quantize_avx2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_quantize_avx2.c
@@ -15,7 +15,7 @@
 #include "vpx/vpx_integer.h"
 #include "vpx_dsp/vpx_dsp_common.h"
 #include "vpx_dsp/x86/bitdepth_conversion_avx2.h"
-#include "vpx_dsp/x86/quantize_x86.h"
+#include "vpx_dsp/x86/quantize_sse2.h"
 
 // Zero fill 8 positions in the output buffer.
 static INLINE void store_zero_tran_low(tran_low_t *a) {
@@ -50,18 +50,18 @@
                           int skip_block, const int16_t *round_ptr,
                           const int16_t *quant_ptr, tran_low_t *qcoeff_ptr,
                           tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr,
-                          uint16_t *eob_ptr, const int16_t *scan_ptr,
-                          const int16_t *iscan_ptr) {
+                          uint16_t *eob_ptr, const int16_t *scan,
+                          const int16_t *iscan) {
   __m128i eob;
   __m256i round256, quant256, dequant256;
   __m256i eob256, thr256;
 
-  (void)scan_ptr;
+  (void)scan;
   (void)skip_block;
   assert(!skip_block);
 
   coeff_ptr += n_coeffs;
-  iscan_ptr += n_coeffs;
+  iscan += n_coeffs;
   qcoeff_ptr += n_coeffs;
   dqcoeff_ptr += n_coeffs;
   n_coeffs = -n_coeffs;
@@ -97,7 +97,7 @@
       store_tran_low(coeff256, dqcoeff_ptr + n_coeffs);
     }
 
-    eob256 = scan_eob_256((const __m256i *)(iscan_ptr + n_coeffs), &coeff256);
+    eob256 = scan_eob_256((const __m256i *)(iscan + n_coeffs), &coeff256);
     n_coeffs += 8 * 2;
   }
 
@@ -124,8 +124,7 @@
       coeff256 = _mm256_mullo_epi16(qcoeff256, dequant256);
       store_tran_low(coeff256, dqcoeff_ptr + n_coeffs);
       eob256 = _mm256_max_epi16(
-          eob256,
-          scan_eob_256((const __m256i *)(iscan_ptr + n_coeffs), &coeff256));
+          eob256, scan_eob_256((const __m256i *)(iscan + n_coeffs), &coeff256));
     } else {
       store_zero_tran_low(qcoeff_ptr + n_coeffs);
       store_zero_tran_low(dqcoeff_ptr + n_coeffs);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_quantize_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_quantize_sse2.c
index ca0ad44..e3d803b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_quantize_sse2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/encoder/x86/vp9_quantize_sse2.c
@@ -21,20 +21,20 @@
                           int skip_block, const int16_t *round_ptr,
                           const int16_t *quant_ptr, tran_low_t *qcoeff_ptr,
                           tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr,
-                          uint16_t *eob_ptr, const int16_t *scan_ptr,
-                          const int16_t *iscan_ptr) {
+                          uint16_t *eob_ptr, const int16_t *scan,
+                          const int16_t *iscan) {
   __m128i zero;
   __m128i thr;
-  int16_t nzflag;
+  int nzflag;
   __m128i eob;
   __m128i round, quant, dequant;
 
-  (void)scan_ptr;
+  (void)scan;
   (void)skip_block;
   assert(!skip_block);
 
   coeff_ptr += n_coeffs;
-  iscan_ptr += n_coeffs;
+  iscan += n_coeffs;
   qcoeff_ptr += n_coeffs;
   dqcoeff_ptr += n_coeffs;
   n_coeffs = -n_coeffs;
@@ -100,8 +100,8 @@
       zero_coeff1 = _mm_cmpeq_epi16(coeff1, zero);
       nzero_coeff0 = _mm_cmpeq_epi16(zero_coeff0, zero);
       nzero_coeff1 = _mm_cmpeq_epi16(zero_coeff1, zero);
-      iscan0 = _mm_load_si128((const __m128i *)(iscan_ptr + n_coeffs));
-      iscan1 = _mm_load_si128((const __m128i *)(iscan_ptr + n_coeffs) + 1);
+      iscan0 = _mm_load_si128((const __m128i *)(iscan + n_coeffs));
+      iscan1 = _mm_load_si128((const __m128i *)(iscan + n_coeffs) + 1);
       // Add one to convert from indices to counts
       iscan0 = _mm_sub_epi16(iscan0, nzero_coeff0);
       iscan1 = _mm_sub_epi16(iscan1, nzero_coeff1);
@@ -175,8 +175,8 @@
       zero_coeff1 = _mm_cmpeq_epi16(coeff1, zero);
       nzero_coeff0 = _mm_cmpeq_epi16(zero_coeff0, zero);
       nzero_coeff1 = _mm_cmpeq_epi16(zero_coeff1, zero);
-      iscan0 = _mm_load_si128((const __m128i *)(iscan_ptr + n_coeffs));
-      iscan1 = _mm_load_si128((const __m128i *)(iscan_ptr + n_coeffs) + 1);
+      iscan0 = _mm_load_si128((const __m128i *)(iscan + n_coeffs));
+      iscan1 = _mm_load_si128((const __m128i *)(iscan + n_coeffs) + 1);
       // Add one to convert from indices to counts
       iscan0 = _mm_sub_epi16(iscan0, nzero_coeff0);
       iscan1 = _mm_sub_epi16(iscan1, nzero_coeff1);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/simple_encode.cc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/simple_encode.cc
new file mode 100644
index 0000000..25d0251
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/simple_encode.cc
@@ -0,0 +1,987 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <memory>
+#include <vector>
+#include "./ivfenc.h"
+#include "vp9/common/vp9_entropymode.h"
+#include "vp9/common/vp9_enums.h"
+#include "vp9/common/vp9_onyxc_int.h"
+#include "vp9/vp9_iface_common.h"
+#include "vp9/encoder/vp9_encoder.h"
+#include "vp9/encoder/vp9_firstpass.h"
+#include "vp9/simple_encode.h"
+#include "vp9/vp9_cx_iface.h"
+
+namespace vp9 {
+
+static int get_plane_height(vpx_img_fmt_t img_fmt, int frame_height,
+                            int plane) {
+  assert(plane < 3);
+  if (plane == 0) {
+    return frame_height;
+  }
+  switch (img_fmt) {
+    case VPX_IMG_FMT_I420:
+    case VPX_IMG_FMT_I440:
+    case VPX_IMG_FMT_YV12:
+    case VPX_IMG_FMT_I42016:
+    case VPX_IMG_FMT_I44016: return (frame_height + 1) >> 1;
+    default: return frame_height;
+  }
+}
+
+static int get_plane_width(vpx_img_fmt_t img_fmt, int frame_width, int plane) {
+  assert(plane < 3);
+  if (plane == 0) {
+    return frame_width;
+  }
+  switch (img_fmt) {
+    case VPX_IMG_FMT_I420:
+    case VPX_IMG_FMT_YV12:
+    case VPX_IMG_FMT_I422:
+    case VPX_IMG_FMT_I42016:
+    case VPX_IMG_FMT_I42216: return (frame_width + 1) >> 1;
+    default: return frame_width;
+  }
+}
+
+// TODO(angiebird): Merge this function with vpx_img_plane_width()
+static int img_plane_width(const vpx_image_t *img, int plane) {
+  if (plane > 0 && img->x_chroma_shift > 0)
+    return (img->d_w + 1) >> img->x_chroma_shift;
+  else
+    return img->d_w;
+}
+
+// TODO(angiebird): Merge this function with vpx_img_plane_height()
+static int img_plane_height(const vpx_image_t *img, int plane) {
+  if (plane > 0 && img->y_chroma_shift > 0)
+    return (img->d_h + 1) >> img->y_chroma_shift;
+  else
+    return img->d_h;
+}
+
+// TODO(angiebird): Merge this function with vpx_img_read()
+static int img_read(vpx_image_t *img, FILE *file) {
+  int plane;
+
+  for (plane = 0; plane < 3; ++plane) {
+    unsigned char *buf = img->planes[plane];
+    const int stride = img->stride[plane];
+    const int w = img_plane_width(img, plane) *
+                  ((img->fmt & VPX_IMG_FMT_HIGHBITDEPTH) ? 2 : 1);
+    const int h = img_plane_height(img, plane);
+    int y;
+
+    for (y = 0; y < h; ++y) {
+      if (fread(buf, 1, w, file) != (size_t)w) return 0;
+      buf += stride;
+    }
+  }
+
+  return 1;
+}
+
+class SimpleEncode::EncodeImpl {
+ public:
+  VP9_COMP *cpi;
+  vpx_img_fmt_t img_fmt;
+  vpx_image_t tmp_img;
+  std::vector<FIRSTPASS_STATS> first_pass_stats;
+};
+
+static VP9_COMP *init_encoder(const VP9EncoderConfig *oxcf,
+                              vpx_img_fmt_t img_fmt) {
+  VP9_COMP *cpi;
+  BufferPool *buffer_pool = (BufferPool *)vpx_calloc(1, sizeof(*buffer_pool));
+  vp9_initialize_enc();
+  cpi = vp9_create_compressor(oxcf, buffer_pool);
+  vp9_update_compressor_with_img_fmt(cpi, img_fmt);
+  return cpi;
+}
+
+static void free_encoder(VP9_COMP *cpi) {
+  BufferPool *buffer_pool = cpi->common.buffer_pool;
+  vp9_remove_compressor(cpi);
+  // buffer_pool needs to be freed after cpi because buffer_pool contains
+  // allocated buffers that will be freed in vp9_remove_compressor()
+  vpx_free(buffer_pool);
+}
+
+static INLINE vpx_rational_t make_vpx_rational(int num, int den) {
+  vpx_rational_t v;
+  v.num = num;
+  v.den = den;
+  return v;
+}
+
+static INLINE vpx_rational_t invert_vpx_rational(vpx_rational_t v) {
+  vpx_rational_t inverse_v;
+  inverse_v.num = v.den;
+  inverse_v.den = v.num;
+  return inverse_v;
+}
+
+static INLINE FrameType
+get_frame_type_from_update_type(FRAME_UPDATE_TYPE update_type) {
+  switch (update_type) {
+    case KF_UPDATE: return kFrameTypeKey;
+    case ARF_UPDATE: return kFrameTypeAltRef;
+    case GF_UPDATE: return kFrameTypeGolden;
+    case OVERLAY_UPDATE: return kFrameTypeOverlay;
+    case LF_UPDATE: return kFrameTypeInter;
+    default:
+      fprintf(stderr, "Unsupported update_type %d\n", update_type);
+      abort();
+      return kFrameTypeInter;
+  }
+}
+
+static void update_partition_info(const PARTITION_INFO *input_partition_info,
+                                  const int num_rows_4x4,
+                                  const int num_cols_4x4,
+                                  PartitionInfo *output_partition_info) {
+  const int num_units_4x4 = num_rows_4x4 * num_cols_4x4;
+  for (int i = 0; i < num_units_4x4; ++i) {
+    output_partition_info[i].row = input_partition_info[i].row;
+    output_partition_info[i].column = input_partition_info[i].column;
+    output_partition_info[i].row_start = input_partition_info[i].row_start;
+    output_partition_info[i].column_start =
+        input_partition_info[i].column_start;
+    output_partition_info[i].width = input_partition_info[i].width;
+    output_partition_info[i].height = input_partition_info[i].height;
+  }
+}
+
+// Translates MV_REFERENCE_FRAME to RefFrameType.
+static RefFrameType mv_ref_frame_to_ref_frame_type(
+    MV_REFERENCE_FRAME mv_ref_frame) {
+  switch (mv_ref_frame) {
+    case LAST_FRAME: return kRefFrameTypeLast;
+    case GOLDEN_FRAME: return kRefFrameTypePast;
+    case ALTREF_FRAME: return kRefFrameTypeFuture;
+    default: return kRefFrameTypeNone;
+  }
+}
+
+static void update_motion_vector_info(
+    const MOTION_VECTOR_INFO *input_motion_vector_info, const int num_rows_4x4,
+    const int num_cols_4x4, MotionVectorInfo *output_motion_vector_info) {
+  const int num_units_4x4 = num_rows_4x4 * num_cols_4x4;
+  for (int i = 0; i < num_units_4x4; ++i) {
+    const MV_REFERENCE_FRAME *in_ref_frame =
+        input_motion_vector_info[i].ref_frame;
+    output_motion_vector_info[i].mv_count =
+        (in_ref_frame[0] == INTRA_FRAME) ? 0
+                                         : ((in_ref_frame[1] == NONE) ? 1 : 2);
+    if (in_ref_frame[0] == NONE) {
+      fprintf(stderr, "in_ref_frame[0] shouldn't be NONE\n");
+      abort();
+    }
+    output_motion_vector_info[i].ref_frame[0] =
+        mv_ref_frame_to_ref_frame_type(in_ref_frame[0]);
+    output_motion_vector_info[i].ref_frame[1] =
+        mv_ref_frame_to_ref_frame_type(in_ref_frame[1]);
+    output_motion_vector_info[i].mv_row[0] =
+        (double)input_motion_vector_info[i].mv[0].as_mv.row /
+        kMotionVectorPrecision;
+    output_motion_vector_info[i].mv_column[0] =
+        (double)input_motion_vector_info[i].mv[0].as_mv.col /
+        kMotionVectorPrecision;
+    output_motion_vector_info[i].mv_row[1] =
+        (double)input_motion_vector_info[i].mv[1].as_mv.row /
+        kMotionVectorPrecision;
+    output_motion_vector_info[i].mv_column[1] =
+        (double)input_motion_vector_info[i].mv[1].as_mv.col /
+        kMotionVectorPrecision;
+  }
+}
+
+static void update_frame_counts(const FRAME_COUNTS *input_counts,
+                                FrameCounts *output_counts) {
+  // Init array sizes.
+  output_counts->y_mode.resize(BLOCK_SIZE_GROUPS);
+  for (int i = 0; i < BLOCK_SIZE_GROUPS; ++i) {
+    output_counts->y_mode[i].resize(INTRA_MODES);
+  }
+
+  output_counts->uv_mode.resize(INTRA_MODES);
+  for (int i = 0; i < INTRA_MODES; ++i) {
+    output_counts->uv_mode[i].resize(INTRA_MODES);
+  }
+
+  output_counts->partition.resize(PARTITION_CONTEXTS);
+  for (int i = 0; i < PARTITION_CONTEXTS; ++i) {
+    output_counts->partition[i].resize(PARTITION_TYPES);
+  }
+
+  output_counts->coef.resize(TX_SIZES);
+  output_counts->eob_branch.resize(TX_SIZES);
+  for (int i = 0; i < TX_SIZES; ++i) {
+    output_counts->coef[i].resize(PLANE_TYPES);
+    output_counts->eob_branch[i].resize(PLANE_TYPES);
+    for (int j = 0; j < PLANE_TYPES; ++j) {
+      output_counts->coef[i][j].resize(REF_TYPES);
+      output_counts->eob_branch[i][j].resize(REF_TYPES);
+      for (int k = 0; k < REF_TYPES; ++k) {
+        output_counts->coef[i][j][k].resize(COEF_BANDS);
+        output_counts->eob_branch[i][j][k].resize(COEF_BANDS);
+        for (int l = 0; l < COEF_BANDS; ++l) {
+          output_counts->coef[i][j][k][l].resize(COEFF_CONTEXTS);
+          output_counts->eob_branch[i][j][k][l].resize(COEFF_CONTEXTS);
+          for (int m = 0; m < COEFF_CONTEXTS; ++m) {
+            output_counts->coef[i][j][k][l][m].resize(UNCONSTRAINED_NODES + 1);
+          }
+        }
+      }
+    }
+  }
+
+  output_counts->switchable_interp.resize(SWITCHABLE_FILTER_CONTEXTS);
+  for (int i = 0; i < SWITCHABLE_FILTER_CONTEXTS; ++i) {
+    output_counts->switchable_interp[i].resize(SWITCHABLE_FILTERS);
+  }
+
+  output_counts->inter_mode.resize(INTER_MODE_CONTEXTS);
+  for (int i = 0; i < INTER_MODE_CONTEXTS; ++i) {
+    output_counts->inter_mode[i].resize(INTER_MODES);
+  }
+
+  output_counts->intra_inter.resize(INTRA_INTER_CONTEXTS);
+  for (int i = 0; i < INTRA_INTER_CONTEXTS; ++i) {
+    output_counts->intra_inter[i].resize(2);
+  }
+
+  output_counts->comp_inter.resize(COMP_INTER_CONTEXTS);
+  for (int i = 0; i < COMP_INTER_CONTEXTS; ++i) {
+    output_counts->comp_inter[i].resize(2);
+  }
+
+  output_counts->single_ref.resize(REF_CONTEXTS);
+  for (int i = 0; i < REF_CONTEXTS; ++i) {
+    output_counts->single_ref[i].resize(2);
+    for (int j = 0; j < 2; ++j) {
+      output_counts->single_ref[i][j].resize(2);
+    }
+  }
+
+  output_counts->comp_ref.resize(REF_CONTEXTS);
+  for (int i = 0; i < REF_CONTEXTS; ++i) {
+    output_counts->comp_ref[i].resize(2);
+  }
+
+  output_counts->skip.resize(SKIP_CONTEXTS);
+  for (int i = 0; i < SKIP_CONTEXTS; ++i) {
+    output_counts->skip[i].resize(2);
+  }
+
+  output_counts->tx.p32x32.resize(TX_SIZE_CONTEXTS);
+  output_counts->tx.p16x16.resize(TX_SIZE_CONTEXTS);
+  output_counts->tx.p8x8.resize(TX_SIZE_CONTEXTS);
+  for (int i = 0; i < TX_SIZE_CONTEXTS; i++) {
+    output_counts->tx.p32x32[i].resize(TX_SIZES);
+    output_counts->tx.p16x16[i].resize(TX_SIZES - 1);
+    output_counts->tx.p8x8[i].resize(TX_SIZES - 2);
+  }
+  output_counts->tx.tx_totals.resize(TX_SIZES);
+
+  output_counts->mv.joints.resize(MV_JOINTS);
+  output_counts->mv.comps.resize(2);
+  for (int i = 0; i < 2; ++i) {
+    output_counts->mv.comps[i].sign.resize(2);
+    output_counts->mv.comps[i].classes.resize(MV_CLASSES);
+    output_counts->mv.comps[i].class0.resize(CLASS0_SIZE);
+    output_counts->mv.comps[i].bits.resize(MV_OFFSET_BITS);
+    for (int j = 0; j < MV_OFFSET_BITS; ++j) {
+      output_counts->mv.comps[i].bits[j].resize(2);
+    }
+    output_counts->mv.comps[i].class0_fp.resize(CLASS0_SIZE);
+    for (int j = 0; j < CLASS0_SIZE; ++j) {
+      output_counts->mv.comps[i].class0_fp[j].resize(MV_FP_SIZE);
+    }
+    output_counts->mv.comps[i].fp.resize(MV_FP_SIZE);
+    output_counts->mv.comps[i].class0_hp.resize(2);
+    output_counts->mv.comps[i].hp.resize(2);
+  }
+
+  // Populate counts.
+  for (int i = 0; i < BLOCK_SIZE_GROUPS; ++i) {
+    for (int j = 0; j < INTRA_MODES; ++j) {
+      output_counts->y_mode[i][j] = input_counts->y_mode[i][j];
+    }
+  }
+  for (int i = 0; i < INTRA_MODES; ++i) {
+    for (int j = 0; j < INTRA_MODES; ++j) {
+      output_counts->uv_mode[i][j] = input_counts->uv_mode[i][j];
+    }
+  }
+  for (int i = 0; i < PARTITION_CONTEXTS; ++i) {
+    for (int j = 0; j < PARTITION_TYPES; ++j) {
+      output_counts->partition[i][j] = input_counts->partition[i][j];
+    }
+  }
+  for (int i = 0; i < TX_SIZES; ++i) {
+    for (int j = 0; j < PLANE_TYPES; ++j) {
+      for (int k = 0; k < REF_TYPES; ++k) {
+        for (int l = 0; l < COEF_BANDS; ++l) {
+          for (int m = 0; m < COEFF_CONTEXTS; ++m) {
+            output_counts->eob_branch[i][j][k][l][m] =
+                input_counts->eob_branch[i][j][k][l][m];
+            for (int n = 0; n < UNCONSTRAINED_NODES + 1; n++) {
+              output_counts->coef[i][j][k][l][m][n] =
+                  input_counts->coef[i][j][k][l][m][n];
+            }
+          }
+        }
+      }
+    }
+  }
+  for (int i = 0; i < SWITCHABLE_FILTER_CONTEXTS; ++i) {
+    for (int j = 0; j < SWITCHABLE_FILTERS; ++j) {
+      output_counts->switchable_interp[i][j] =
+          input_counts->switchable_interp[i][j];
+    }
+  }
+  for (int i = 0; i < INTER_MODE_CONTEXTS; ++i) {
+    for (int j = 0; j < INTER_MODES; ++j) {
+      output_counts->inter_mode[i][j] = input_counts->inter_mode[i][j];
+    }
+  }
+  for (int i = 0; i < INTRA_INTER_CONTEXTS; ++i) {
+    for (int j = 0; j < 2; ++j) {
+      output_counts->intra_inter[i][j] = input_counts->intra_inter[i][j];
+    }
+  }
+  for (int i = 0; i < COMP_INTER_CONTEXTS; ++i) {
+    for (int j = 0; j < 2; ++j) {
+      output_counts->comp_inter[i][j] = input_counts->comp_inter[i][j];
+    }
+  }
+  for (int i = 0; i < REF_CONTEXTS; ++i) {
+    for (int j = 0; j < 2; ++j) {
+      for (int k = 0; k < 2; ++k) {
+        output_counts->single_ref[i][j][k] = input_counts->single_ref[i][j][k];
+      }
+    }
+  }
+  for (int i = 0; i < REF_CONTEXTS; ++i) {
+    for (int j = 0; j < 2; ++j) {
+      output_counts->comp_ref[i][j] = input_counts->comp_ref[i][j];
+    }
+  }
+  for (int i = 0; i < SKIP_CONTEXTS; ++i) {
+    for (int j = 0; j < 2; ++j) {
+      output_counts->skip[i][j] = input_counts->skip[i][j];
+    }
+  }
+  for (int i = 0; i < TX_SIZE_CONTEXTS; i++) {
+    for (int j = 0; j < TX_SIZES; j++) {
+      output_counts->tx.p32x32[i][j] = input_counts->tx.p32x32[i][j];
+    }
+    for (int j = 0; j < TX_SIZES - 1; j++) {
+      output_counts->tx.p16x16[i][j] = input_counts->tx.p16x16[i][j];
+    }
+    for (int j = 0; j < TX_SIZES - 2; j++) {
+      output_counts->tx.p8x8[i][j] = input_counts->tx.p8x8[i][j];
+    }
+  }
+  for (int i = 0; i < TX_SIZES; i++) {
+    output_counts->tx.tx_totals[i] = input_counts->tx.tx_totals[i];
+  }
+  for (int i = 0; i < MV_JOINTS; i++) {
+    output_counts->mv.joints[i] = input_counts->mv.joints[i];
+  }
+  for (int k = 0; k < 2; k++) {
+    const nmv_component_counts *const comps_t = &input_counts->mv.comps[k];
+    for (int i = 0; i < 2; i++) {
+      output_counts->mv.comps[k].sign[i] = comps_t->sign[i];
+      output_counts->mv.comps[k].class0_hp[i] = comps_t->class0_hp[i];
+      output_counts->mv.comps[k].hp[i] = comps_t->hp[i];
+    }
+    for (int i = 0; i < MV_CLASSES; i++) {
+      output_counts->mv.comps[k].classes[i] = comps_t->classes[i];
+    }
+    for (int i = 0; i < CLASS0_SIZE; i++) {
+      output_counts->mv.comps[k].class0[i] = comps_t->class0[i];
+      for (int j = 0; j < MV_FP_SIZE; j++) {
+        output_counts->mv.comps[k].class0_fp[i][j] = comps_t->class0_fp[i][j];
+      }
+    }
+    for (int i = 0; i < MV_OFFSET_BITS; i++) {
+      for (int j = 0; j < 2; j++) {
+        output_counts->mv.comps[k].bits[i][j] = comps_t->bits[i][j];
+      }
+    }
+    for (int i = 0; i < MV_FP_SIZE; i++) {
+      output_counts->mv.comps[k].fp[i] = comps_t->fp[i];
+    }
+  }
+}
+
+void output_image_buffer(const ImageBuffer &image_buffer, std::FILE *out_file) {
+  for (int plane = 0; plane < 3; ++plane) {
+    const int w = image_buffer.plane_width[plane];
+    const int h = image_buffer.plane_height[plane];
+    const uint8_t *buf = image_buffer.plane_buffer[plane].get();
+    fprintf(out_file, "%d %d\n", h, w);
+    for (int i = 0; i < w * h; ++i) {
+      fprintf(out_file, "%d ", (int)buf[i]);
+    }
+    fprintf(out_file, "\n");
+  }
+}
+
+static bool init_image_buffer(ImageBuffer *image_buffer, int frame_width,
+                              int frame_height, vpx_img_fmt_t img_fmt) {
+  for (int plane = 0; plane < 3; ++plane) {
+    const int w = get_plane_width(img_fmt, frame_width, plane);
+    const int h = get_plane_height(img_fmt, frame_height, plane);
+    image_buffer->plane_width[plane] = w;
+    image_buffer->plane_height[plane] = h;
+    image_buffer->plane_buffer[plane].reset(new (std::nothrow) uint8_t[w * h]);
+    if (image_buffer->plane_buffer[plane].get() == nullptr) {
+      return false;
+    }
+  }
+  return true;
+}
+
+static void ImageBuffer_to_IMAGE_BUFFER(const ImageBuffer &image_buffer,
+                                        IMAGE_BUFFER *image_buffer_c) {
+  image_buffer_c->allocated = 1;
+  for (int plane = 0; plane < 3; ++plane) {
+    image_buffer_c->plane_width[plane] = image_buffer.plane_width[plane];
+    image_buffer_c->plane_height[plane] = image_buffer.plane_height[plane];
+    image_buffer_c->plane_buffer[plane] =
+        image_buffer.plane_buffer[plane].get();
+  }
+}
+
+static size_t get_max_coding_data_byte_size(int frame_width, int frame_height) {
+  return frame_width * frame_height * 3;
+}
+
+static bool init_encode_frame_result(EncodeFrameResult *encode_frame_result,
+                                     int frame_width, int frame_height,
+                                     vpx_img_fmt_t img_fmt) {
+  const size_t max_coding_data_byte_size =
+      get_max_coding_data_byte_size(frame_width, frame_height);
+
+  encode_frame_result->coding_data.reset(
+      new (std::nothrow) uint8_t[max_coding_data_byte_size]);
+
+  encode_frame_result->num_rows_4x4 = get_num_unit_4x4(frame_width);
+  encode_frame_result->num_cols_4x4 = get_num_unit_4x4(frame_height);
+  encode_frame_result->partition_info.resize(encode_frame_result->num_rows_4x4 *
+                                             encode_frame_result->num_cols_4x4);
+  encode_frame_result->motion_vector_info.resize(
+      encode_frame_result->num_rows_4x4 * encode_frame_result->num_cols_4x4);
+
+  if (encode_frame_result->coding_data.get() == nullptr) {
+    return false;
+  }
+  return init_image_buffer(&encode_frame_result->coded_frame, frame_width,
+                           frame_height, img_fmt);
+}
+
+static void update_encode_frame_result(
+    EncodeFrameResult *encode_frame_result,
+    const ENCODE_FRAME_RESULT *encode_frame_info) {
+  encode_frame_result->coding_data_bit_size =
+      encode_frame_result->coding_data_byte_size * 8;
+  encode_frame_result->show_idx = encode_frame_info->show_idx;
+  encode_frame_result->frame_type =
+      get_frame_type_from_update_type(encode_frame_info->update_type);
+  encode_frame_result->psnr = encode_frame_info->psnr;
+  encode_frame_result->sse = encode_frame_info->sse;
+  encode_frame_result->quantize_index = encode_frame_info->quantize_index;
+  update_partition_info(encode_frame_info->partition_info,
+                        encode_frame_result->num_rows_4x4,
+                        encode_frame_result->num_cols_4x4,
+                        &encode_frame_result->partition_info[0]);
+  update_motion_vector_info(encode_frame_info->motion_vector_info,
+                            encode_frame_result->num_rows_4x4,
+                            encode_frame_result->num_cols_4x4,
+                            &encode_frame_result->motion_vector_info[0]);
+  update_frame_counts(&encode_frame_info->frame_counts,
+                      &encode_frame_result->frame_counts);
+}
+
+static void IncreaseGroupOfPictureIndex(GroupOfPicture *group_of_picture) {
+  ++group_of_picture->next_encode_frame_index;
+}
+
+static int IsGroupOfPictureFinished(const GroupOfPicture &group_of_picture) {
+  return static_cast<size_t>(group_of_picture.next_encode_frame_index) ==
+         group_of_picture.encode_frame_list.size();
+}
+
+static void InitRefFrameInfo(RefFrameInfo *ref_frame_info) {
+  for (int i = 0; i < kRefFrameTypeMax; ++i) {
+    ref_frame_info->coding_indexes[i] = 0;
+    ref_frame_info->valid_list[i] = 0;
+  }
+}
+
+// After finishing coding a frame, this function updates the coded frame's
+// entry in ref_frame_info based on the frame_type and the coding_index.
+static void PostUpdateRefFrameInfo(FrameType frame_type, int frame_coding_index,
+                                   RefFrameInfo *ref_frame_info) {
+  // This part is written based on the logic in vp9_configure_buffer_updates()
+  // and update_ref_frames().
+  int *ref_frame_coding_indexes = ref_frame_info->coding_indexes;
+  switch (frame_type) {
+    case kFrameTypeKey:
+      ref_frame_coding_indexes[kRefFrameTypeLast] = frame_coding_index;
+      ref_frame_coding_indexes[kRefFrameTypePast] = frame_coding_index;
+      ref_frame_coding_indexes[kRefFrameTypeFuture] = frame_coding_index;
+      break;
+    case kFrameTypeInter:
+      ref_frame_coding_indexes[kRefFrameTypeLast] = frame_coding_index;
+      break;
+    case kFrameTypeAltRef:
+      ref_frame_coding_indexes[kRefFrameTypeFuture] = frame_coding_index;
+      break;
+    case kFrameTypeOverlay:
+      // Reserve the past coding_index in the future slot. This logic is from
+      // update_ref_frames() with condition vp9_preserve_existing_gf() == 1
+      // TODO(angiebird): Investigate why we need this.
+      ref_frame_coding_indexes[kRefFrameTypeFuture] =
+          ref_frame_coding_indexes[kRefFrameTypePast];
+      ref_frame_coding_indexes[kRefFrameTypePast] = frame_coding_index;
+      break;
+    case kFrameTypeGolden:
+      ref_frame_coding_indexes[kRefFrameTypePast] = frame_coding_index;
+      ref_frame_coding_indexes[kRefFrameTypeLast] = frame_coding_index;
+      break;
+  }
+
+  //  This part is written based on the logic in get_ref_frame_flags(), but we
+  //  rename the flags alt and golden to future and past, respectively. Mark
+  //  non-duplicated reference frames as valid. The priorities are
+  //  kRefFrameTypeLast > kRefFrameTypePast > kRefFrameTypeFuture.
+  const int last_index = ref_frame_coding_indexes[kRefFrameTypeLast];
+  const int past_index = ref_frame_coding_indexes[kRefFrameTypePast];
+  const int future_index = ref_frame_coding_indexes[kRefFrameTypeFuture];
+
+  int *ref_frame_valid_list = ref_frame_info->valid_list;
+  for (int ref_frame_idx = 0; ref_frame_idx < kRefFrameTypeMax;
+       ++ref_frame_idx) {
+    ref_frame_valid_list[ref_frame_idx] = 1;
+  }
+
+  if (past_index == last_index) {
+    ref_frame_valid_list[kRefFrameTypeLast] = 0;
+  }
+
+  if (future_index == last_index) {
+    ref_frame_valid_list[kRefFrameTypeFuture] = 0;
+  }
+
+  if (future_index == past_index) {
+    ref_frame_valid_list[kRefFrameTypeFuture] = 0;
+  }
+}
+
+static void SetGroupOfPicture(int first_is_key_frame, int use_alt_ref,
+                              int coding_frame_count, int first_show_idx,
+                              int last_gop_use_alt_ref, int start_coding_index,
+                              const RefFrameInfo &start_ref_frame_info,
+                              GroupOfPicture *group_of_picture) {
+  // Clean up the state of previous group of picture.
+  group_of_picture->encode_frame_list.clear();
+  group_of_picture->next_encode_frame_index = 0;
+  group_of_picture->show_frame_count = coding_frame_count - use_alt_ref;
+  group_of_picture->start_show_index = first_show_idx;
+  group_of_picture->start_coding_index = start_coding_index;
+
+  // We need to make a copy of the start reference frame info because we
+  // use it to simulate the ref frame update.
+  RefFrameInfo ref_frame_info = start_ref_frame_info;
+
+  {
+    // First frame in the group of pictures. It's either a key frame or a
+    // shown inter frame.
+    EncodeFrameInfo encode_frame_info;
+    // Set frame_type
+    if (first_is_key_frame) {
+      encode_frame_info.frame_type = kFrameTypeKey;
+    } else {
+      if (last_gop_use_alt_ref) {
+        encode_frame_info.frame_type = kFrameTypeOverlay;
+      } else {
+        encode_frame_info.frame_type = kFrameTypeGolden;
+      }
+    }
+
+    encode_frame_info.show_idx = first_show_idx;
+    encode_frame_info.coding_index = start_coding_index;
+
+    encode_frame_info.ref_frame_info = ref_frame_info;
+    PostUpdateRefFrameInfo(encode_frame_info.frame_type,
+                           encode_frame_info.coding_index, &ref_frame_info);
+
+    group_of_picture->encode_frame_list.push_back(encode_frame_info);
+  }
+
+  const int show_frame_count = coding_frame_count - use_alt_ref;
+  if (use_alt_ref) {
+    // If there is an alternate reference frame, it is always coded second.
+    // Its show index (or timestamp) is the last of this group.
+    EncodeFrameInfo encode_frame_info;
+    encode_frame_info.frame_type = kFrameTypeAltRef;
+    encode_frame_info.show_idx = first_show_idx + show_frame_count;
+    encode_frame_info.coding_index = start_coding_index + 1;
+
+    encode_frame_info.ref_frame_info = ref_frame_info;
+    PostUpdateRefFrameInfo(encode_frame_info.frame_type,
+                           encode_frame_info.coding_index, &ref_frame_info);
+
+    group_of_picture->encode_frame_list.push_back(encode_frame_info);
+  }
+
+  // Encode the remaining shown inter frames.
+  for (int i = 1; i < show_frame_count; ++i) {
+    EncodeFrameInfo encode_frame_info;
+    encode_frame_info.frame_type = kFrameTypeInter;
+    encode_frame_info.show_idx = first_show_idx + i;
+    encode_frame_info.coding_index = start_coding_index + use_alt_ref + i;
+
+    encode_frame_info.ref_frame_info = ref_frame_info;
+    PostUpdateRefFrameInfo(encode_frame_info.frame_type,
+                           encode_frame_info.coding_index, &ref_frame_info);
+
+    group_of_picture->encode_frame_list.push_back(encode_frame_info);
+  }
+}
+
+// Gets group of picture information from VP9's decision and updates
+// |group_of_picture| accordingly.
+// This is called at the start of encoding each group of pictures.
+static void UpdateGroupOfPicture(const VP9_COMP *cpi, int start_coding_index,
+                                 const RefFrameInfo &start_ref_frame_info,
+                                 GroupOfPicture *group_of_picture) {
+  int first_is_key_frame;
+  int use_alt_ref;
+  int coding_frame_count;
+  int first_show_idx;
+  int last_gop_use_alt_ref;
+  vp9_get_next_group_of_picture(cpi, &first_is_key_frame, &use_alt_ref,
+                                &coding_frame_count, &first_show_idx,
+                                &last_gop_use_alt_ref);
+  SetGroupOfPicture(first_is_key_frame, use_alt_ref, coding_frame_count,
+                    first_show_idx, last_gop_use_alt_ref, start_coding_index,
+                    start_ref_frame_info, group_of_picture);
+}
+
+SimpleEncode::SimpleEncode(int frame_width, int frame_height,
+                           int frame_rate_num, int frame_rate_den,
+                           int target_bitrate, int num_frames,
+                           const char *infile_path, const char *outfile_path) {
+  impl_ptr_ = std::unique_ptr<EncodeImpl>(new EncodeImpl());
+  frame_width_ = frame_width;
+  frame_height_ = frame_height;
+  frame_rate_num_ = frame_rate_num;
+  frame_rate_den_ = frame_rate_den;
+  target_bitrate_ = target_bitrate;
+  num_frames_ = num_frames;
+  frame_coding_index_ = 0;
+  // TODO(angiebird): Should we keep a file pointer here or keep the file_path?
+  assert(infile_path != nullptr);
+  in_file_ = fopen(infile_path, "r");
+  if (outfile_path != nullptr) {
+    out_file_ = fopen(outfile_path, "w");
+  } else {
+    out_file_ = nullptr;
+  }
+  impl_ptr_->cpi = nullptr;
+  impl_ptr_->img_fmt = VPX_IMG_FMT_I420;
+
+  InitRefFrameInfo(&ref_frame_info_);
+}
+
+void SimpleEncode::ComputeFirstPassStats() {
+  vpx_rational_t frame_rate =
+      make_vpx_rational(frame_rate_num_, frame_rate_den_);
+  const VP9EncoderConfig oxcf =
+      vp9_get_encoder_config(frame_width_, frame_height_, frame_rate,
+                             target_bitrate_, VPX_RC_FIRST_PASS);
+  VP9_COMP *cpi = init_encoder(&oxcf, impl_ptr_->img_fmt);
+  struct lookahead_ctx *lookahead = cpi->lookahead;
+  int i;
+  int use_highbitdepth = 0;
+#if CONFIG_VP9_HIGHBITDEPTH
+  use_highbitdepth = cpi->common.use_highbitdepth;
+#endif
+  vpx_image_t img;
+  vpx_img_alloc(&img, impl_ptr_->img_fmt, frame_width_, frame_height_, 1);
+  rewind(in_file_);
+  impl_ptr_->first_pass_stats.clear();
+  for (i = 0; i < num_frames_; ++i) {
+    assert(!vp9_lookahead_full(lookahead));
+    if (img_read(&img, in_file_)) {
+      int next_show_idx = vp9_lookahead_next_show_idx(lookahead);
+      int64_t ts_start =
+          timebase_units_to_ticks(&oxcf.g_timebase_in_ts, next_show_idx);
+      int64_t ts_end =
+          timebase_units_to_ticks(&oxcf.g_timebase_in_ts, next_show_idx + 1);
+      YV12_BUFFER_CONFIG sd;
+      image2yuvconfig(&img, &sd);
+      vp9_lookahead_push(lookahead, &sd, ts_start, ts_end, use_highbitdepth, 0);
+      {
+        int64_t time_stamp;
+        int64_t time_end;
+        int flush = 1;  // Makes vp9_get_compressed_data process a frame
+        size_t size;
+        unsigned int frame_flags = 0;
+        ENCODE_FRAME_RESULT encode_frame_info;
+        vp9_init_encode_frame_result(&encode_frame_info);
+        // TODO(angiebird): Call vp9_first_pass directly
+        vp9_get_compressed_data(cpi, &frame_flags, &size, nullptr, &time_stamp,
+                                &time_end, flush, &encode_frame_info);
+        // vp9_get_compressed_data only generates first pass stats; it does
+        // not compress data.
+        assert(size == 0);
+      }
+      impl_ptr_->first_pass_stats.push_back(vp9_get_frame_stats(&cpi->twopass));
+    }
+  }
+  vp9_end_first_pass(cpi);
+  // TODO(angiebird): Store the total_stats apart from first_pass_stats
+  impl_ptr_->first_pass_stats.push_back(vp9_get_total_stats(&cpi->twopass));
+  free_encoder(cpi);
+  rewind(in_file_);
+  vpx_img_free(&img);
+}
+
+std::vector<std::vector<double>> SimpleEncode::ObserveFirstPassStats() {
+  std::vector<std::vector<double>> output_stats;
+  // TODO(angiebird): This function makes several assumptions about
+  // FIRSTPASS_STATS. 1) All elements in FIRSTPASS_STATS are double except the
+  // last one. 2) The last entry of first_pass_stats is the total_stats.
+  // Change the code structure so that we don't have to make these assumptions.
+
+  // Note that the last entry of first_pass_stats is the total_stats; we don't
+  // need it.
+  for (size_t i = 0; i < impl_ptr_->first_pass_stats.size() - 1; ++i) {
+    double *buf_start =
+        reinterpret_cast<double *>(&impl_ptr_->first_pass_stats[i]);
+    // We use - 1 here because the last member in FIRSTPASS_STATS is not a double.
+    double *buf_end =
+        buf_start + sizeof(impl_ptr_->first_pass_stats[i]) / sizeof(*buf_end) -
+        1;
+    std::vector<double> this_stats(buf_start, buf_end);
+    output_stats.push_back(this_stats);
+  }
+  return output_stats;
+}
+
+void SimpleEncode::SetExternalGroupOfPicture(
+    std::vector<int> external_arf_indexes) {
+  external_arf_indexes_ = external_arf_indexes;
+}
+
+void SimpleEncode::StartEncode() {
+  assert(impl_ptr_->first_pass_stats.size() > 0);
+  vpx_rational_t frame_rate =
+      make_vpx_rational(frame_rate_num_, frame_rate_den_);
+  VP9EncoderConfig oxcf =
+      vp9_get_encoder_config(frame_width_, frame_height_, frame_rate,
+                             target_bitrate_, VPX_RC_LAST_PASS);
+  vpx_fixed_buf_t stats;
+  stats.buf = impl_ptr_->first_pass_stats.data();
+  stats.sz = sizeof(impl_ptr_->first_pass_stats[0]) *
+             impl_ptr_->first_pass_stats.size();
+
+  vp9_set_first_pass_stats(&oxcf, &stats);
+  assert(impl_ptr_->cpi == nullptr);
+  impl_ptr_->cpi = init_encoder(&oxcf, impl_ptr_->img_fmt);
+  vpx_img_alloc(&impl_ptr_->tmp_img, impl_ptr_->img_fmt, frame_width_,
+                frame_height_, 1);
+  frame_coding_index_ = 0;
+  encode_command_set_external_arf_indexes(&impl_ptr_->cpi->encode_command,
+                                          external_arf_indexes_.data());
+  UpdateGroupOfPicture(impl_ptr_->cpi, frame_coding_index_, ref_frame_info_,
+                       &group_of_picture_);
+  rewind(in_file_);
+
+  if (out_file_ != nullptr) {
+    const char *fourcc = "VP90";
+    vpx_rational_t time_base = invert_vpx_rational(frame_rate);
+    ivf_write_file_header_with_video_info(out_file_, *(const uint32_t *)fourcc,
+                                          num_frames_, frame_width_,
+                                          frame_height_, time_base);
+  }
+}
+
+void SimpleEncode::EndEncode() {
+  free_encoder(impl_ptr_->cpi);
+  impl_ptr_->cpi = nullptr;
+  vpx_img_free(&impl_ptr_->tmp_img);
+  rewind(in_file_);
+}
+
+int SimpleEncode::GetKeyFrameGroupSize(int key_frame_index) const {
+  const VP9_COMP *cpi = impl_ptr_->cpi;
+  return vp9_get_frames_to_next_key(&cpi->oxcf, &cpi->frame_info,
+                                    &cpi->twopass.first_pass_info,
+                                    key_frame_index, cpi->rc.min_gf_interval);
+}
+
+GroupOfPicture SimpleEncode::ObserveGroupOfPicture() const {
+  return group_of_picture_;
+}
+
+EncodeFrameInfo SimpleEncode::GetNextEncodeFrameInfo() const {
+  return group_of_picture_
+      .encode_frame_list[group_of_picture_.next_encode_frame_index];
+}
+
+void SimpleEncode::PostUpdateState(
+    const EncodeFrameResult &encode_frame_result) {
+  // This function needs to be called before the increment of
+  // frame_coding_index_.
+  PostUpdateRefFrameInfo(encode_frame_result.frame_type, frame_coding_index_,
+                         &ref_frame_info_);
+  ++frame_coding_index_;
+  IncreaseGroupOfPictureIndex(&group_of_picture_);
+  if (IsGroupOfPictureFinished(group_of_picture_)) {
+    UpdateGroupOfPicture(impl_ptr_->cpi, frame_coding_index_, ref_frame_info_,
+                         &group_of_picture_);
+  }
+}
+
+void SimpleEncode::EncodeFrame(EncodeFrameResult *encode_frame_result) {
+  VP9_COMP *cpi = impl_ptr_->cpi;
+  struct lookahead_ctx *lookahead = cpi->lookahead;
+  int use_highbitdepth = 0;
+#if CONFIG_VP9_HIGHBITDEPTH
+  use_highbitdepth = cpi->common.use_highbitdepth;
+#endif
+  // The lookahead's size is set to oxcf->lag_in_frames.
+  // We want to fill the lookahead to its max capacity if possible so that
+  // the encoder can construct alt ref frames in time.
+  // In other words, we expect vp9_get_compressed_data to encode a frame
+  // every time this function is called.
+  while (!vp9_lookahead_full(lookahead)) {
+    // TODO(angiebird): Check whether we can move this file read logic into
+    // the lookahead.
+    if (img_read(&impl_ptr_->tmp_img, in_file_)) {
+      int next_show_idx = vp9_lookahead_next_show_idx(lookahead);
+      int64_t ts_start =
+          timebase_units_to_ticks(&cpi->oxcf.g_timebase_in_ts, next_show_idx);
+      int64_t ts_end = timebase_units_to_ticks(&cpi->oxcf.g_timebase_in_ts,
+                                               next_show_idx + 1);
+      YV12_BUFFER_CONFIG sd;
+      image2yuvconfig(&impl_ptr_->tmp_img, &sd);
+      vp9_lookahead_push(lookahead, &sd, ts_start, ts_end, use_highbitdepth, 0);
+    } else {
+      break;
+    }
+  }
+
+  if (init_encode_frame_result(encode_frame_result, frame_width_, frame_height_,
+                               impl_ptr_->img_fmt)) {
+    int64_t time_stamp;
+    int64_t time_end;
+    int flush = 1;  // Make vp9_get_compressed_data encode a frame
+    unsigned int frame_flags = 0;
+    ENCODE_FRAME_RESULT encode_frame_info;
+    vp9_init_encode_frame_result(&encode_frame_info);
+    ImageBuffer_to_IMAGE_BUFFER(encode_frame_result->coded_frame,
+                                &encode_frame_info.coded_frame);
+    vp9_get_compressed_data(cpi, &frame_flags,
+                            &encode_frame_result->coding_data_byte_size,
+                            encode_frame_result->coding_data.get(), &time_stamp,
+                            &time_end, flush, &encode_frame_info);
+    if (out_file_ != nullptr) {
+      ivf_write_frame_header(out_file_, time_stamp,
+                             encode_frame_result->coding_data_byte_size);
+      fwrite(encode_frame_result->coding_data.get(), 1,
+             encode_frame_result->coding_data_byte_size, out_file_);
+    }
+
+    // vp9_get_compressed_data is expected to encode a frame every time, so the
+    // data size should be greater than zero.
+    if (encode_frame_result->coding_data_byte_size == 0) {
+      fprintf(stderr, "Coding data size is 0\n");
+      abort();
+    }
+    const size_t max_coding_data_byte_size =
+        get_max_coding_data_byte_size(frame_width_, frame_height_);
+    if (encode_frame_result->coding_data_byte_size >
+        max_coding_data_byte_size) {
+      fprintf(stderr, "Coding data size exceeds the maximum.\n");
+      abort();
+    }
+
+    update_encode_frame_result(encode_frame_result, &encode_frame_info);
+    PostUpdateState(*encode_frame_result);
+  } else {
+    // TODO(angiebird): Clean up encode_frame_result.
+    fprintf(stderr, "init_encode_frame_result() failed.\n");
+    this->EndEncode();
+  }
+}
+
+void SimpleEncode::EncodeFrameWithQuantizeIndex(
+    EncodeFrameResult *encode_frame_result, int quantize_index) {
+  encode_command_set_external_quantize_index(&impl_ptr_->cpi->encode_command,
+                                             quantize_index);
+  EncodeFrame(encode_frame_result);
+  encode_command_reset_external_quantize_index(&impl_ptr_->cpi->encode_command);
+}
+
+int SimpleEncode::GetCodingFrameNum() const {
+  assert(impl_ptr_->first_pass_stats.size() > 1);
+  // These are the default settings for now.
+  const int multi_layer_arf = 0;
+  const int allow_alt_ref = 1;
+  vpx_rational_t frame_rate =
+      make_vpx_rational(frame_rate_num_, frame_rate_den_);
+  const VP9EncoderConfig oxcf =
+      vp9_get_encoder_config(frame_width_, frame_height_, frame_rate,
+                             target_bitrate_, VPX_RC_LAST_PASS);
+  FRAME_INFO frame_info = vp9_get_frame_info(&oxcf);
+  FIRST_PASS_INFO first_pass_info;
+  fps_init_first_pass_info(&first_pass_info, impl_ptr_->first_pass_stats.data(),
+                           num_frames_);
+  return vp9_get_coding_frame_num(external_arf_indexes_.data(), &oxcf,
+                                  &frame_info, &first_pass_info,
+                                  multi_layer_arf, allow_alt_ref);
+}
+
+uint64_t SimpleEncode::GetFramePixelCount() const {
+  assert(frame_width_ % 2 == 0);
+  assert(frame_height_ % 2 == 0);
+  switch (impl_ptr_->img_fmt) {
+    case VPX_IMG_FMT_I420: return frame_width_ * frame_height_ * 3 / 2;
+    case VPX_IMG_FMT_I422: return frame_width_ * frame_height_ * 2;
+    case VPX_IMG_FMT_I444: return frame_width_ * frame_height_ * 3;
+    case VPX_IMG_FMT_I440: return frame_width_ * frame_height_ * 2;
+    case VPX_IMG_FMT_I42016: return frame_width_ * frame_height_ * 3 / 2;
+    case VPX_IMG_FMT_I42216: return frame_width_ * frame_height_ * 2;
+    case VPX_IMG_FMT_I44416: return frame_width_ * frame_height_ * 3;
+    case VPX_IMG_FMT_I44016: return frame_width_ * frame_height_ * 2;
+    default: return 0;
+  }
+}
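The per-format multipliers in `GetFramePixelCount` follow from chroma subsampling: 4:2:0 stores two quarter-size chroma planes, 4:2:2 two half-size planes, and 4:4:4 two full-size planes. A standalone sketch of the rule (names are hypothetical, not the libvpx API):

```cpp
#include <cstdint>

// Hypothetical sketch: luma contributes width * height pixels, and the two
// chroma planes are scaled by the subsampling of the image format.
enum class Subsampling { k420, k422, k444 };

static uint64_t frame_pixel_count(uint64_t width, uint64_t height,
                                  Subsampling ss) {
  const uint64_t luma = width * height;
  switch (ss) {
    case Subsampling::k420: return luma * 3 / 2;  // chroma: 1/4 size each
    case Subsampling::k422: return luma * 2;      // chroma: 1/2 size each
    case Subsampling::k444: return luma * 3;      // chroma: full size each
  }
  return 0;
}
```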
+
+SimpleEncode::~SimpleEncode() {
+  if (in_file_ != nullptr) {
+    fclose(in_file_);
+  }
+  if (out_file_ != nullptr) {
+    fclose(out_file_);
+  }
+}
+
+}  // namespace vp9
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/simple_encode.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/simple_encode.h
new file mode 100644
index 0000000..821bb92
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/simple_encode.h
@@ -0,0 +1,371 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VP9_SIMPLE_ENCODE_H_
+#define VPX_VP9_SIMPLE_ENCODE_H_
+
+#include <cstddef>
+#include <cstdint>
+#include <cstdio>
+#include <memory>
+#include <vector>
+
+namespace vp9 {
+
+// TODO(angiebird): Add description for each frame type.
+enum FrameType {
+  kFrameTypeKey = 0,
+  kFrameTypeInter,
+  kFrameTypeAltRef,
+  kFrameTypeOverlay,
+  kFrameTypeGolden,
+};
+
+// TODO(angiebird): Add description for each reference frame type.
+// These enum values have to be contiguous and start from zero, except for
+// kRefFrameTypeNone.
+enum RefFrameType {
+  kRefFrameTypeLast = 0,
+  kRefFrameTypePast = 1,
+  kRefFrameTypeFuture = 2,
+  kRefFrameTypeMax = 3,
+  kRefFrameTypeNone = -1,
+};
+
+// The frame is split into 4x4 blocks.
+// This structure contains the information of each 4x4 block.
+struct PartitionInfo {
+  int row;           // row pixel offset of current 4x4 block
+  int column;        // column pixel offset of current 4x4 block
+  int row_start;     // row pixel offset of the start of the prediction block
+  int column_start;  // column pixel offset of the start of the prediction block
+  int width;         // prediction block width
+  int height;        // prediction block height
+};
+
+constexpr int kMotionVectorPrecision = 8;
+
+// The frame is split into 4x4 blocks.
+// This structure contains the information of each 4x4 block.
+struct MotionVectorInfo {
+  // Number of valid motion vectors, always 0 if this block is in the key frame.
+  // For inter frames, it could be 1 or 2.
+  int mv_count;
+  // The reference frames for the motion vectors. If the second motion vector
+  // does not exist (mv_count = 1), its reference frame is kRefFrameTypeNone.
+  // Otherwise, the reference frame is one of kRefFrameTypeLast,
+  // kRefFrameTypePast or kRefFrameTypeFuture.
+  RefFrameType ref_frame[2];
+  // The row offset of motion vectors in the unit of pixel.
+  // If the second motion vector does not exist, the value is 0.
+  double mv_row[2];
+  // The column offset of motion vectors in the unit of pixel.
+  // If the second motion vector does not exist, the value is 0.
+  double mv_column[2];
+};
+
+struct RefFrameInfo {
+  int coding_indexes[kRefFrameTypeMax];
+
+  // Indicates whether each reference frame is available.
+  // When a reference frame type is not valid, either the to-be-coded frame
+  // is a key frame or the reference frame already appears as another
+  // reference frame type. vp9 always keeps three types of reference frame
+  // available, but the duplicated reference frames will not be chosen by
+  // the encoder. The priority of choosing reference frames is
+  // kRefFrameTypeLast > kRefFrameTypePast > kRefFrameTypeFuture.
+  // For example, if kRefFrameTypeLast and kRefFrameTypePast both point to the
+  // same frame, kRefFrameTypePast will be set to invalid.
+  int valid_list[kRefFrameTypeMax];
+};
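The priority rule documented above can be sketched as a standalone deduplication pass over the coding indexes: a slot is invalidated when a higher-priority slot already points at the same frame. This is a hypothetical illustration of the documented semantics, not the libvpx implementation:

```cpp
// Slots in priority order: Last, Past, Future.
constexpr int kSlots = 3;

// Hypothetical sketch: mark a slot invalid when a higher-priority slot
// already references the same coding index.
static void dedup_valid_list(const int coding_indexes[kSlots],
                             int valid_list[kSlots]) {
  for (int i = 0; i < kSlots; ++i) {
    valid_list[i] = 1;
    for (int j = 0; j < i; ++j) {
      if (coding_indexes[j] == coding_indexes[i]) {
        valid_list[i] = 0;  // duplicate of a higher-priority reference
        break;
      }
    }
  }
}
```

For example, when Last and Past both point at coding index 5, Past is marked invalid, matching the comment's example.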
+
+struct EncodeFrameInfo {
+  int show_idx;
+
+  // Each show or no show frame is assigned a coding index based on its
+  // coding order (starting from zero) in the coding process of the entire
+  // video. The coding index for each frame is unique.
+  int coding_index;
+  RefFrameInfo ref_frame_info;
+  FrameType frame_type;
+};
+
+// This structure is a copy of vp9 |nmv_component_counts|.
+struct NewMotionvectorComponentCounts {
+  std::vector<unsigned int> sign;
+  std::vector<unsigned int> classes;
+  std::vector<unsigned int> class0;
+  std::vector<std::vector<unsigned int>> bits;
+  std::vector<std::vector<unsigned int>> class0_fp;
+  std::vector<unsigned int> fp;
+  std::vector<unsigned int> class0_hp;
+  std::vector<unsigned int> hp;
+};
+
+// This structure is a copy of vp9 |nmv_context_counts|.
+struct NewMotionVectorContextCounts {
+  std::vector<unsigned int> joints;
+  std::vector<NewMotionvectorComponentCounts> comps;
+};
+
+using UintArray2D = std::vector<std::vector<unsigned int>>;
+using UintArray3D = std::vector<std::vector<std::vector<unsigned int>>>;
+using UintArray5D = std::vector<
+    std::vector<std::vector<std::vector<std::vector<unsigned int>>>>>;
+using UintArray6D = std::vector<std::vector<
+    std::vector<std::vector<std::vector<std::vector<unsigned int>>>>>>;
+
+// This structure is a copy of vp9 |tx_counts|.
+struct TransformSizeCounts {
+  // Transform size found in blocks of partition size 32x32.
+  // First dimension: transform size contexts (2).
+  // Second dimension: transform size type (3: 32x32, 16x16, 8x8)
+  UintArray2D p32x32;
+  // Transform size found in blocks of partition size 16x16.
+  // First dimension: transform size contexts (2).
+  // Second dimension: transform size type (2: 16x16, 8x8)
+  UintArray2D p16x16;
+  // Transform size found in blocks of partition size 8x8.
+  // First dimension: transform size contexts (2).
+  // Second dimension: transform size type (1: 8x8)
+  UintArray2D p8x8;
+  // Overall transform size count.
+  std::vector<unsigned int> tx_totals;
+};
+
+// This structure is a copy of vp9 |FRAME_COUNTS|.
+struct FrameCounts {
+  // Intra prediction mode for luma plane. First dimension: block size (4).
+  // Second dimension: intra prediction mode (10).
+  UintArray2D y_mode;
+  // Intra prediction mode for chroma plane. First and second dimension:
+  // intra prediction mode (10).
+  UintArray2D uv_mode;
+  // Partition type. First dimension: partition contexts (16).
+  // Second dimension: partition type (4).
+  UintArray2D partition;
+  // Transform coefficient.
+  UintArray6D coef;
+  // End of block (the position of the last non-zero transform coefficient)
+  UintArray5D eob_branch;
+  // Interpolation filter type. First dimension: switchable filter contexts (4).
+  // Second dimension: filter types (3).
+  UintArray2D switchable_interp;
+  // Inter prediction mode (the motion vector type).
+  // First dimension: inter mode contexts (7).
+  // Second dimension: mode type (4).
+  UintArray2D inter_mode;
+  // Block is intra or inter predicted. First dimension: contexts (4).
+  // Second dimension: type (0 for intra, 1 for inter).
+  UintArray2D intra_inter;
+  // Block is compound predicted (predicted from average of two blocks).
+  // First dimension: contexts (5).
+  // Second dimension: type (0 for single, 1 for compound prediction).
+  UintArray2D comp_inter;
+  // Type of the reference frame. Only one reference frame.
+  // First dimension: context (5). Second dimension: context (2).
+  // Third dimension: count (2).
+  UintArray3D single_ref;
+  // Type of the two reference frames.
+  // First dimension: context (5). Second dimension: count (2).
+  UintArray2D comp_ref;
+  // Block skips transform and quantization, uses prediction as reconstruction.
+  // First dimension: contexts (3). Second dimension: type (0 not skip, 1 skip).
+  UintArray2D skip;
+  // Transform size.
+  TransformSizeCounts tx;
+  // New motion vector.
+  NewMotionVectorContextCounts mv;
+};
+
+struct ImageBuffer {
+  // The image data is stored in raster order,
+  // i.e. image[plane][r][c] =
+  // plane_buffer[plane][r * plane_width[plane] + c].
+  std::unique_ptr<unsigned char[]> plane_buffer[3];
+  int plane_width[3];
+  int plane_height[3];
+};
+
+void output_image_buffer(const ImageBuffer &image_buffer, std::FILE *out_file);
+
+struct EncodeFrameResult {
+  int show_idx;
+  FrameType frame_type;
+  size_t coding_data_bit_size;
+  size_t coding_data_byte_size;
+  // EncodeFrame() allocates a buffer, writes the coding data into it, and
+  // gives ownership of the buffer to coding_data.
+  std::unique_ptr<unsigned char[]> coding_data;
+  double psnr;
+  uint64_t sse;
+  int quantize_index;
+  FrameCounts frame_counts;
+  int num_rows_4x4;  // number of 4x4 block rows in the frame.
+  int num_cols_4x4;  // number of 4x4 block columns in the frame.
+  // A vector of the partition information of the frame.
+  // The number of elements is |num_rows_4x4| * |num_cols_4x4|.
+  // The frame is divided into 4x4 blocks of |num_rows_4x4| rows and
+  // |num_cols_4x4| columns.
+  // Each 4x4 block contains the current pixel position (|row|, |column|),
+  // the start pixel position of the partition (|row_start|, |column_start|),
+  // and the |width|, |height| of the partition.
+  // The current pixel position can be the same as the start pixel position
+  // if the 4x4 block is the top-left block in the partition. Otherwise, they
+  // are different.
+  // Within the same partition, all 4x4 blocks have the same |row_start|,
+  // |column_start|, |width| and |height|.
+  // For example, if the frame is partitioned into a 32x32 block starting
+  // at (0, 0), then there are 64 4x4 blocks within this partition.
+  // They all have the same |row_start|, |column_start|, |width| and |height|,
+  // which can be used to figure out the start of the current partition and
+  // the start of the next partition block.
+  // Horizontal next: |column_start| + |width|,
+  // Vertical next: |row_start| + |height|.
+  std::vector<PartitionInfo> partition_info;
+  // A vector of the motion vector information of the frame.
+  // The number of elements is |num_rows_4x4| * |num_cols_4x4|.
+  // The frame is divided into 4x4 blocks of |num_rows_4x4| rows and
+  // |num_cols_4x4| columns.
+  // Each 4x4 block contains no motion vectors if the frame is intra
+  // predicted (for example, a key frame). If the frame is inter predicted,
+  // each 4x4 block contains either 1 or 2 motion vectors.
+  // Similar to partition info, all 4x4 blocks inside the same partition block
+  // share the same motion vector information.
+  std::vector<MotionVectorInfo> motion_vector_info;
+  ImageBuffer coded_frame;
+};
+
+struct GroupOfPicture {
+  // This list will be updated internally in StartEncode() and
+  // EncodeFrame()/EncodeFrameWithQuantizeIndex().
+  // In EncodeFrame()/EncodeFrameWithQuantizeIndex(), the update will only be
+  // triggered when the coded frame is the last one in the previous group of
+  // pictures.
+  std::vector<EncodeFrameInfo> encode_frame_list;
+  // Indicates the index of the next coding frame in encode_frame_list.
+  // In other words, EncodeFrameInfo of the next coding frame can be
+  // obtained with encode_frame_list[next_encode_frame_index].
+  // Internally, next_encode_frame_index will be set to zero after the last
+  // frame of the group of pictures is coded. Otherwise, next_encode_frame_index
+  // will be increased after each EncodeFrame()/EncodeFrameWithQuantizeIndex()
+  // call.
+  int next_encode_frame_index;
+  // Number of show frames in this group of pictures.
+  int show_frame_count;
+  // The show index/timestamp of the earliest show frame in the group of
+  // pictures.
+  int start_show_index;
+  // The coding index of the first coding frame in the group of pictures.
+  int start_coding_index;
+};
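The `next_encode_frame_index` bookkeeping described above can be sketched in isolation: the index advances once per encoded frame and wraps to zero when the last frame of the group has been coded. This is a hypothetical illustration of the documented behavior, not the libvpx code:

```cpp
// Hypothetical sketch of the next_encode_frame_index life cycle.
struct GopIndex {
  int frame_count;              // frames in this group of pictures
  int next_encode_frame_index;  // index of the next frame to code
};

static void advance(GopIndex *gop) {
  ++gop->next_encode_frame_index;
  if (gop->next_encode_frame_index == gop->frame_count) {
    gop->next_encode_frame_index = 0;  // group finished; a new group starts
  }
}
```

In the real encoder the wrap-around coincides with `UpdateGroupOfPicture()` installing the next group's `encode_frame_list`.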
+
+class SimpleEncode {
+ public:
+  // When outfile_path is set, the encoder will output the bitstream in ivf
+  // format.
+  SimpleEncode(int frame_width, int frame_height, int frame_rate_num,
+               int frame_rate_den, int target_bitrate, int num_frames,
+               const char *infile_path, const char *outfile_path = nullptr);
+  ~SimpleEncode();
+  SimpleEncode(const SimpleEncode &) = delete;
+  SimpleEncode &operator=(const SimpleEncode &) = delete;
+
+  // Makes encoder compute the first pass stats and store it internally for
+  // future encode.
+  void ComputeFirstPassStats();
+
+  // Outputs the first pass stats represented by a 2-D vector.
+  // One can use the frame index at first dimension to retrieve the stats for
+  // each video frame. The stats of each video frame are a vector of 25 double
+  // values. For details, please check FIRSTPASS_STATS in vp9_firstpass.h.
+  std::vector<std::vector<double>> ObserveFirstPassStats();
+
+  // Sets arf indexes for the video from external input.
+  // The arf index determines whether a frame is an arf or not.
+  // Therefore it also determines the group of pictures size.
+  // If set, VP9 will use the external arf indexes to make decisions.
+  // This function should be called only once after ComputeFirstPassStats(),
+  // before StartEncode().
+  void SetExternalGroupOfPicture(std::vector<int> external_arf_indexes);
+
+  // Initializes the encoder for actual encoding.
+  // This function should be called after ComputeFirstPassStats().
+  void StartEncode();
+
+  // Frees the encoder.
+  // This function should be called after StartEncode() or EncodeFrame().
+  void EndEncode();
+
+  // Given a key_frame_index, computes this key frame group's size.
+  // The key frame group size includes one key frame plus the number of
+  // following inter frames. Note that the key frame group size only counts
+  // show frames. No show frames, such as alternate references, are not
+  // counted.
+  int GetKeyFrameGroupSize(int key_frame_index) const;
+
+  // Provides the group of pictures that the next coding frame is in.
+  // Only call this function between StartEncode() and EndEncode().
+  GroupOfPicture ObserveGroupOfPicture() const;
+
+  // Gets encode_frame_info for the next coding frame.
+  // Only call this function between StartEncode() and EndEncode().
+  EncodeFrameInfo GetNextEncodeFrameInfo() const;
+
+  // Encodes a frame.
+  // This function should be called after StartEncode() and before EndEncode().
+  void EncodeFrame(EncodeFrameResult *encode_frame_result);
+
+  // Encodes a frame with a specific quantize index.
+  // This function should be called after StartEncode() and before EndEncode().
+  void EncodeFrameWithQuantizeIndex(EncodeFrameResult *encode_frame_result,
+                                    int quantize_index);
+
+  // Gets the number of coding frames for the video. The coding frames include
+  // both show frames and no show frames.
+  // This function should be called after ComputeFirstPassStats().
+  int GetCodingFrameNum() const;
+
+  // Gets the total number of pixels of YUV planes per frame.
+  uint64_t GetFramePixelCount() const;
+
+ private:
+  class EncodeImpl;
+
+  int frame_width_;   // frame width in pixels.
+  int frame_height_;  // frame height in pixels.
+  int frame_rate_num_;
+  int frame_rate_den_;
+  int target_bitrate_;
+  int num_frames_;
+
+  std::FILE *in_file_;
+  std::FILE *out_file_;
+  std::unique_ptr<EncodeImpl> impl_ptr_;
+
+  std::vector<int> external_arf_indexes_;
+  GroupOfPicture group_of_picture_;
+
+  // Each show or no show frame is assigned a coding index based on its
+  // coding order (starting from zero) in the coding process of the entire
+  // video. frame_coding_index_ is the coding index of the to-be-coded frame.
+  int frame_coding_index_;
+
+  // TODO(angiebird): Do we need to reset ref_frame_info_ when the next key
+  // frame appears?
+  // Reference frames info of the to-be-coded frame.
+  RefFrameInfo ref_frame_info_;
+
+  void PostUpdateState(const EncodeFrameResult &encode_frame_result);
+};
+
+}  // namespace vp9
+
+#endif  // VPX_VP9_SIMPLE_ENCODE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_common.mk b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_common.mk
index 75a0fc1..5ef2f89 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_common.mk
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_common.mk
@@ -10,6 +10,7 @@
 
 VP9_COMMON_SRCS-yes += vp9_common.mk
 VP9_COMMON_SRCS-yes += vp9_iface_common.h
+VP9_COMMON_SRCS-yes += vp9_iface_common.c
 VP9_COMMON_SRCS-yes += common/vp9_ppflags.h
 VP9_COMMON_SRCS-yes += common/vp9_alloccommon.c
 VP9_COMMON_SRCS-yes += common/vp9_blockd.c
@@ -64,10 +65,14 @@
 VP9_COMMON_SRCS-$(CONFIG_VP9_POSTPROC) += common/vp9_mfqe.h
 VP9_COMMON_SRCS-$(CONFIG_VP9_POSTPROC) += common/vp9_mfqe.c
 
+ifneq ($(CONFIG_VP9_HIGHBITDEPTH),yes)
 VP9_COMMON_SRCS-$(HAVE_MSA)   += common/mips/msa/vp9_idct4x4_msa.c
 VP9_COMMON_SRCS-$(HAVE_MSA)   += common/mips/msa/vp9_idct8x8_msa.c
 VP9_COMMON_SRCS-$(HAVE_MSA)   += common/mips/msa/vp9_idct16x16_msa.c
+endif  # !CONFIG_VP9_HIGHBITDEPTH
+
 VP9_COMMON_SRCS-$(HAVE_SSE2)  += common/x86/vp9_idct_intrin_sse2.c
+VP9_COMMON_SRCS-$(HAVE_VSX)   += common/ppc/vp9_idct_vsx.c
 VP9_COMMON_SRCS-$(HAVE_NEON)  += common/arm/neon/vp9_iht4x4_add_neon.c
 VP9_COMMON_SRCS-$(HAVE_NEON)  += common/arm/neon/vp9_iht8x8_add_neon.c
 VP9_COMMON_SRCS-$(HAVE_NEON)  += common/arm/neon/vp9_iht16x16_add_neon.c
@@ -85,6 +90,7 @@
 else
 VP9_COMMON_SRCS-$(HAVE_NEON)   += common/arm/neon/vp9_highbd_iht4x4_add_neon.c
 VP9_COMMON_SRCS-$(HAVE_NEON)   += common/arm/neon/vp9_highbd_iht8x8_add_neon.c
+VP9_COMMON_SRCS-$(HAVE_NEON)   += common/arm/neon/vp9_highbd_iht16x16_add_neon.c
 VP9_COMMON_SRCS-$(HAVE_SSE4_1) += common/x86/vp9_highbd_iht4x4_add_sse4.c
 VP9_COMMON_SRCS-$(HAVE_SSE4_1) += common/x86/vp9_highbd_iht8x8_add_sse4.c
 VP9_COMMON_SRCS-$(HAVE_SSE4_1) += common/x86/vp9_highbd_iht16x16_add_sse4.c
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.c
index 40f7ab53..2ca2114 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.c
@@ -13,16 +13,23 @@
 
 #include "./vpx_config.h"
 #include "vpx/vpx_encoder.h"
+#include "vpx_dsp/psnr.h"
 #include "vpx_ports/vpx_once.h"
+#include "vpx_ports/static_assert.h"
 #include "vpx_ports/system_state.h"
+#include "vpx_util/vpx_timestamp.h"
 #include "vpx/internal/vpx_codec_internal.h"
 #include "./vpx_version.h"
 #include "vp9/encoder/vp9_encoder.h"
 #include "vpx/vp8cx.h"
+#include "vp9/common/vp9_alloccommon.h"
+#include "vp9/vp9_cx_iface.h"
 #include "vp9/encoder/vp9_firstpass.h"
+#include "vp9/encoder/vp9_lookahead.h"
+#include "vp9/vp9_cx_iface.h"
 #include "vp9/vp9_iface_common.h"
 
-struct vp9_extracfg {
+typedef struct vp9_extracfg {
   int cpu_used;  // available cpu percentage in 1/16
   unsigned int enable_auto_alt_ref;
   unsigned int noise_sensitivity;
@@ -30,6 +37,7 @@
   unsigned int static_thresh;
   unsigned int tile_columns;
   unsigned int tile_rows;
+  unsigned int enable_tpl_model;
   unsigned int arnr_max_frames;
   unsigned int arnr_strength;
   unsigned int min_gf_interval;
@@ -53,7 +61,8 @@
   int render_height;
   unsigned int row_mt;
   unsigned int motion_vector_unit_test;
-};
+  int delta_q_uv;
+} vp9_extracfg;
 
 static struct vp9_extracfg default_extra_cfg = {
   0,                     // cpu_used
@@ -63,6 +72,7 @@
   0,                     // static_thresh
   6,                     // tile_columns
   0,                     // tile_rows
+  1,                     // enable_tpl_model
   7,                     // arnr_max_frames
   5,                     // arnr_strength
   0,                     // min_gf_interval; 0 -> default decision
@@ -86,12 +96,16 @@
   0,                     // render height
   0,                     // row_mt
   0,                     // motion_vector_unit_test
+  0,                     // delta_q_uv
 };
 
 struct vpx_codec_alg_priv {
   vpx_codec_priv_t base;
   vpx_codec_enc_cfg_t cfg;
   struct vp9_extracfg extra_cfg;
+  vpx_rational64_t timestamp_ratio;
+  vpx_codec_pts_t pts_offset;
+  unsigned char pts_offset_initialized;
   VP9EncoderConfig oxcf;
   VP9_COMP *cpi;
   unsigned char *cx_data;
@@ -128,10 +142,10 @@
     return VPX_CODEC_INVALID_PARAM; \
   } while (0)
 
-#define RANGE_CHECK(p, memb, lo, hi)                                 \
-  do {                                                               \
-    if (!(((p)->memb == lo || (p)->memb > (lo)) && (p)->memb <= hi)) \
-      ERROR(#memb " out of range [" #lo ".." #hi "]");               \
+#define RANGE_CHECK(p, memb, lo, hi)                                     \
+  do {                                                                   \
+    if (!(((p)->memb == (lo) || (p)->memb > (lo)) && (p)->memb <= (hi))) \
+      ERROR(#memb " out of range [" #lo ".." #hi "]");                   \
   } while (0)
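The added parentheses around `lo` and `hi` in `RANGE_CHECK` matter when the bounds are expressions: without them, operator precedence can silently change the comparison. A minimal illustration with hypothetical macros (not the libvpx ones):

```cpp
// With an expression argument such as `m | 1`, the unwrapped upper bound
// parses as `(v <= m) | 1`, which is always nonzero, so the bad macro
// accepts out-of-range values.
#define IN_RANGE_BAD(v, lo, hi) ((v) >= lo && (v) <= hi)
#define IN_RANGE_GOOD(v, lo, hi) ((v) >= (lo) && (v) <= (hi))
```

Relational operators bind tighter than `|`, which binds tighter than `&&`, so only the fully parenthesized form evaluates the intended bound.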
 
 #define RANGE_CHECK_HI(p, memb, hi)                                     \
@@ -237,22 +251,6 @@
         ERROR("ts_rate_decimator factors are not powers of 2");
   }
 
-#if CONFIG_SPATIAL_SVC
-
-  if ((cfg->ss_number_layers > 1 || cfg->ts_number_layers > 1) &&
-      cfg->g_pass == VPX_RC_LAST_PASS) {
-    unsigned int i, alt_ref_sum = 0;
-    for (i = 0; i < cfg->ss_number_layers; ++i) {
-      if (cfg->ss_enable_auto_alt_ref[i]) ++alt_ref_sum;
-    }
-    if (alt_ref_sum > REF_FRAMES - cfg->ss_number_layers)
-      ERROR("Not enough ref buffers for svc alt ref frames");
-    if (cfg->ss_number_layers * cfg->ts_number_layers > 3 &&
-        cfg->g_error_resilient == 0)
-      ERROR("Multiple frame context are not supported for more than 3 layers");
-  }
-#endif
-
   // VP9 does not support a lower bound on the keyframe interval in
   // automatic keyframe placement mode.
   if (cfg->kf_mode != VPX_KF_DISABLED && cfg->kf_min_dist != cfg->kf_max_dist &&
@@ -263,8 +261,8 @@
 
   RANGE_CHECK(extra_cfg, row_mt, 0, 1);
   RANGE_CHECK(extra_cfg, motion_vector_unit_test, 0, 2);
-  RANGE_CHECK(extra_cfg, enable_auto_alt_ref, 0, 2);
-  RANGE_CHECK(extra_cfg, cpu_used, -8, 8);
+  RANGE_CHECK(extra_cfg, enable_auto_alt_ref, 0, MAX_ARF_LAYERS);
+  RANGE_CHECK(extra_cfg, cpu_used, -9, 9);
   RANGE_CHECK_HI(extra_cfg, noise_sensitivity, 6);
   RANGE_CHECK(extra_cfg, tile_columns, 0, 6);
   RANGE_CHECK(extra_cfg, tile_rows, 0, 2);
@@ -277,10 +275,6 @@
   RANGE_CHECK(extra_cfg, content, VP9E_CONTENT_DEFAULT,
               VP9E_CONTENT_INVALID - 1);
 
-  // TODO(yaowu): remove this when ssim tuning is implemented for vp9
-  if (extra_cfg->tuning == VP8_TUNE_SSIM)
-    ERROR("Option --tune=ssim is not currently supported in VP9.");
-
 #if !CONFIG_REALTIME_ONLY
   if (cfg->g_pass == VPX_RC_LAST_PASS) {
     const size_t packet_sz = sizeof(FIRSTPASS_STATS);
@@ -464,6 +458,15 @@
   }
 }
 
+static vpx_rational64_t get_g_timebase_in_ts(vpx_rational_t g_timebase) {
+  vpx_rational64_t g_timebase_in_ts;
+  g_timebase_in_ts.den = g_timebase.den;
+  g_timebase_in_ts.num = g_timebase.num;
+  g_timebase_in_ts.num *= TICKS_PER_SEC;
+  reduce_ratio(&g_timebase_in_ts);
+  return g_timebase_in_ts;
+}
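`get_g_timebase_in_ts` scales the configured timebase by the tick resolution and then reduces the fraction. A standalone sketch of the arithmetic, assuming the 10,000,000 ticks-per-second resolution used by `vpx_util/vpx_timestamp.h` (helper names here are hypothetical, not the libvpx API):

```cpp
#include <cstdint>
#include <numeric>  // std::gcd (C++17)

constexpr int64_t kTicksPerSec = 10000000;  // assumed TICKS_PER_SEC value

struct Rational64 {
  int64_t num, den;
};

// Scale the timebase numerator by the tick rate, then reduce the fraction
// (the role reduce_ratio() plays in the real code).
static Rational64 timebase_in_ticks(int64_t num, int64_t den) {
  Rational64 r{num * kTicksPerSec, den};
  const int64_t g = std::gcd(r.num, r.den);
  return {r.num / g, r.den / g};
}

// Timestamp of frame |index| in tick units: index * num / den.
static int64_t units_to_ticks(const Rational64 &tb, int64_t index) {
  return index * tb.num / tb.den;
}
```

For a 30 fps timebase of 1/30, the tick timebase reduces to 1,000,000/3, so frame 30 lands exactly at one second (10,000,000 ticks).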
+
 static vpx_codec_err_t set_encoder_config(
     VP9EncoderConfig *oxcf, const vpx_codec_enc_cfg_t *cfg,
     const struct vp9_extracfg *extra_cfg) {
@@ -475,9 +478,13 @@
   oxcf->height = cfg->g_h;
   oxcf->bit_depth = cfg->g_bit_depth;
   oxcf->input_bit_depth = cfg->g_input_bit_depth;
+  // TODO(angiebird): Figure out if we can just use g_timebase to indicate the
+  // inverse of framerate
   // guess a frame rate if out of whack, use 30
   oxcf->init_framerate = (double)cfg->g_timebase.den / cfg->g_timebase.num;
   if (oxcf->init_framerate > 180) oxcf->init_framerate = 30;
+  oxcf->g_timebase = cfg->g_timebase;
+  oxcf->g_timebase_in_ts = get_g_timebase_in_ts(oxcf->g_timebase);
 
   oxcf->mode = GOOD;
 
@@ -537,10 +544,16 @@
   oxcf->speed = abs(extra_cfg->cpu_used);
   oxcf->encode_breakout = extra_cfg->static_thresh;
   oxcf->enable_auto_arf = extra_cfg->enable_auto_alt_ref;
-  oxcf->noise_sensitivity = extra_cfg->noise_sensitivity;
+  if (oxcf->bit_depth == VPX_BITS_8) {
+    oxcf->noise_sensitivity = extra_cfg->noise_sensitivity;
+  } else {
+    // Disable denoiser for high bitdepth since vp9_denoiser_filter only works
+    // for 8 bits.
+    oxcf->noise_sensitivity = 0;
+  }
   oxcf->sharpness = extra_cfg->sharpness;
 
-  oxcf->two_pass_stats_in = cfg->rc_twopass_stats_in;
+  vp9_set_first_pass_stats(oxcf, &cfg->rc_twopass_stats_in);
 
 #if CONFIG_FP_MB_STATS
   oxcf->firstpass_mb_stats_in = cfg->rc_firstpass_mb_stats_in;
@@ -560,6 +573,8 @@
 
   oxcf->tile_columns = extra_cfg->tile_columns;
 
+  oxcf->enable_tpl_model = extra_cfg->enable_tpl_model;
+
   // TODO(yunqing): The dependencies between row tiles cause error in multi-
   // threaded encoding. For now, tile_rows is forced to be 0 in this case.
   // The further fix can be done by adding synchronizations after a tile row
@@ -588,10 +603,9 @@
   oxcf->row_mt = extra_cfg->row_mt;
   oxcf->motion_vector_unit_test = extra_cfg->motion_vector_unit_test;
 
+  oxcf->delta_q_uv = extra_cfg->delta_q_uv;
+
   for (sl = 0; sl < oxcf->ss_number_layers; ++sl) {
-#if CONFIG_SPATIAL_SVC
-    oxcf->ss_enable_auto_arf[sl] = cfg->ss_enable_auto_alt_ref[sl];
-#endif
     for (tl = 0; tl < oxcf->ts_number_layers; ++tl) {
       oxcf->layer_target_bitrate[sl * oxcf->ts_number_layers + tl] =
           1000 * cfg->layer_target_bitrate[sl * oxcf->ts_number_layers + tl];
@@ -599,9 +613,6 @@
   }
   if (oxcf->ss_number_layers == 1 && oxcf->pass != 0) {
     oxcf->ss_target_bitrate[0] = (int)oxcf->target_bandwidth;
-#if CONFIG_SPATIAL_SVC
-    oxcf->ss_enable_auto_arf[0] = extra_cfg->enable_auto_alt_ref;
-#endif
   }
   if (oxcf->ts_number_layers > 1) {
     for (tl = 0; tl < VPX_TS_MAX_LAYERS; ++tl) {
@@ -613,40 +624,7 @@
   }
 
   if (get_level_index(oxcf->target_level) >= 0) config_target_level(oxcf);
-  /*
-  printf("Current VP9 Settings: \n");
-  printf("target_bandwidth: %d\n", oxcf->target_bandwidth);
-  printf("target_level: %d\n", oxcf->target_level);
-  printf("noise_sensitivity: %d\n", oxcf->noise_sensitivity);
-  printf("sharpness: %d\n",    oxcf->sharpness);
-  printf("cpu_used: %d\n",  oxcf->cpu_used);
-  printf("Mode: %d\n",     oxcf->mode);
-  printf("auto_key: %d\n",  oxcf->auto_key);
-  printf("key_freq: %d\n", oxcf->key_freq);
-  printf("end_usage: %d\n", oxcf->end_usage);
-  printf("under_shoot_pct: %d\n", oxcf->under_shoot_pct);
-  printf("over_shoot_pct: %d\n", oxcf->over_shoot_pct);
-  printf("starting_buffer_level: %d\n", oxcf->starting_buffer_level);
-  printf("optimal_buffer_level: %d\n",  oxcf->optimal_buffer_level);
-  printf("maximum_buffer_size: %d\n", oxcf->maximum_buffer_size);
-  printf("fixed_q: %d\n",  oxcf->fixed_q);
-  printf("worst_allowed_q: %d\n", oxcf->worst_allowed_q);
-  printf("best_allowed_q: %d\n", oxcf->best_allowed_q);
-  printf("allow_spatial_resampling: %d\n", oxcf->allow_spatial_resampling);
-  printf("scaled_frame_width: %d\n", oxcf->scaled_frame_width);
-  printf("scaled_frame_height: %d\n", oxcf->scaled_frame_height);
-  printf("two_pass_vbrbias: %d\n",  oxcf->two_pass_vbrbias);
-  printf("two_pass_vbrmin_section: %d\n", oxcf->two_pass_vbrmin_section);
-  printf("two_pass_vbrmax_section: %d\n", oxcf->two_pass_vbrmax_section);
-  printf("vbr_corpus_complexity: %d\n",  oxcf->vbr_corpus_complexity);
-  printf("lag_in_frames: %d\n", oxcf->lag_in_frames);
-  printf("enable_auto_arf: %d\n", oxcf->enable_auto_arf);
-  printf("Version: %d\n", oxcf->Version);
-  printf("encode_breakout: %d\n", oxcf->encode_breakout);
-  printf("error resilient: %d\n", oxcf->error_resilient_mode);
-  printf("frame parallel detokenization: %d\n",
-         oxcf->frame_parallel_decoding_mode);
-  */
+  // vp9_dump_encoder_config(oxcf);
   return VPX_CODEC_OK;
 }
 
@@ -716,7 +694,10 @@
 static vpx_codec_err_t ctrl_set_cpuused(vpx_codec_alg_priv_t *ctx,
                                         va_list args) {
   struct vp9_extracfg extra_cfg = ctx->extra_cfg;
+  // Use fastest speed setting (speed 9 or -9) if it's set beyond the range.
   extra_cfg.cpu_used = CAST(VP8E_SET_CPUUSED, args);
+  extra_cfg.cpu_used = VPXMIN(9, extra_cfg.cpu_used);
+  extra_cfg.cpu_used = VPXMAX(-9, extra_cfg.cpu_used);
   return update_extra_cfg(ctx, &extra_cfg);
 }
 
@@ -762,6 +743,13 @@
   return update_extra_cfg(ctx, &extra_cfg);
 }
 
+static vpx_codec_err_t ctrl_set_tpl_model(vpx_codec_alg_priv_t *ctx,
+                                          va_list args) {
+  struct vp9_extracfg extra_cfg = ctx->extra_cfg;
+  extra_cfg.enable_tpl_model = CAST(VP9E_SET_TPL, args);
+  return update_extra_cfg(ctx, &extra_cfg);
+}
+
 static vpx_codec_err_t ctrl_set_arnr_max_frames(vpx_codec_alg_priv_t *ctx,
                                                 va_list args) {
   struct vp9_extracfg extra_cfg = ctx->extra_cfg;
@@ -809,7 +797,7 @@
     vpx_codec_alg_priv_t *ctx, va_list args) {
   struct vp9_extracfg extra_cfg = ctx->extra_cfg;
   extra_cfg.rc_max_inter_bitrate_pct =
-      CAST(VP8E_SET_MAX_INTER_BITRATE_PCT, args);
+      CAST(VP9E_SET_MAX_INTER_BITRATE_PCT, args);
   return update_extra_cfg(ctx, &extra_cfg);
 }
 
@@ -926,16 +914,18 @@
     res = validate_config(priv, &priv->cfg, &priv->extra_cfg);
 
     if (res == VPX_CODEC_OK) {
+      priv->pts_offset_initialized = 0;
+      // TODO(angiebird): Replace priv->timestamp_ratio by
+      // oxcf->g_timebase_in_ts
+      priv->timestamp_ratio = get_g_timebase_in_ts(priv->cfg.g_timebase);
+
       set_encoder_config(&priv->oxcf, &priv->cfg, &priv->extra_cfg);
 #if CONFIG_VP9_HIGHBITDEPTH
       priv->oxcf.use_highbitdepth =
           (ctx->init_flags & VPX_CODEC_USE_HIGHBITDEPTH) ? 1 : 0;
 #endif
       priv->cpi = vp9_create_compressor(&priv->oxcf, priv->buffer_pool);
-      if (priv->cpi == NULL)
-        res = VPX_CODEC_MEM_ERROR;
-      else
-        priv->cpi->output_pkt_list = &priv->pkt_list.head;
+      if (priv->cpi == NULL) res = VPX_CODEC_MEM_ERROR;
     }
   }
 
@@ -962,12 +952,14 @@
   switch (ctx->cfg.g_pass) {
     case VPX_RC_ONE_PASS:
       if (deadline > 0) {
-        const vpx_codec_enc_cfg_t *const cfg = &ctx->cfg;
-
         // Convert duration parameter from stream timebase to microseconds.
-        const uint64_t duration_us = (uint64_t)duration * 1000000 *
-                                     (uint64_t)cfg->g_timebase.num /
-                                     (uint64_t)cfg->g_timebase.den;
+        uint64_t duration_us;
+
+        VPX_STATIC_ASSERT(TICKS_PER_SEC > 1000000 &&
+                          (TICKS_PER_SEC % 1000000) == 0);
+
+        duration_us = duration * (uint64_t)ctx->timestamp_ratio.num /
+                      (ctx->timestamp_ratio.den * (TICKS_PER_SEC / 1000000));
 
         // If the deadline is more that the duration this frame is to be shown,
         // use good quality mode. Otherwise use realtime mode.
@@ -1051,17 +1043,6 @@
   return index_sz;
 }
 
-static int64_t timebase_units_to_ticks(const vpx_rational_t *timebase,
-                                       int64_t n) {
-  return n * TICKS_PER_SEC * timebase->num / timebase->den;
-}
-
-static int64_t ticks_to_timebase_units(const vpx_rational_t *timebase,
-                                       int64_t n) {
-  const int64_t round = (int64_t)TICKS_PER_SEC * timebase->num / 2 - 1;
-  return (n * timebase->den + round) / timebase->num / TICKS_PER_SEC;
-}
-
 static vpx_codec_frame_flags_t get_frame_pkt_flags(const VP9_COMP *cpi,
                                                    unsigned int lib_flags) {
   vpx_codec_frame_flags_t flags = lib_flags << 16;
@@ -1079,50 +1060,52 @@
   return flags;
 }
 
+static INLINE vpx_codec_cx_pkt_t get_psnr_pkt(const PSNR_STATS *psnr) {
+  vpx_codec_cx_pkt_t pkt;
+  pkt.kind = VPX_CODEC_PSNR_PKT;
+  pkt.data.psnr = *psnr;
+  return pkt;
+}
+
+#if !CONFIG_REALTIME_ONLY
+static INLINE vpx_codec_cx_pkt_t
+get_first_pass_stats_pkt(FIRSTPASS_STATS *stats) {
+  // WARNING: This function assumes that stats will
+  // exist and remain unchanged until the packet is processed
+  // TODO(angiebird): Refactor the code to avoid using the assumption.
+  vpx_codec_cx_pkt_t pkt;
+  pkt.kind = VPX_CODEC_STATS_PKT;
+  pkt.data.twopass_stats.buf = stats;
+  pkt.data.twopass_stats.sz = sizeof(*stats);
+  return pkt;
+}
+#endif
+
 const size_t kMinCompressedSize = 8192;
 static vpx_codec_err_t encoder_encode(vpx_codec_alg_priv_t *ctx,
                                       const vpx_image_t *img,
-                                      vpx_codec_pts_t pts,
+                                      vpx_codec_pts_t pts_val,
                                       unsigned long duration,
                                       vpx_enc_frame_flags_t enc_flags,
                                       unsigned long deadline) {
   volatile vpx_codec_err_t res = VPX_CODEC_OK;
   volatile vpx_enc_frame_flags_t flags = enc_flags;
+  volatile vpx_codec_pts_t pts = pts_val;
   VP9_COMP *const cpi = ctx->cpi;
-  const vpx_rational_t *const timebase = &ctx->cfg.g_timebase;
+  const vpx_rational64_t *const timestamp_ratio = &ctx->timestamp_ratio;
   size_t data_sz;
+  vpx_codec_cx_pkt_t pkt;
+  memset(&pkt, 0, sizeof(pkt));
 
   if (cpi == NULL) return VPX_CODEC_INVALID_PARAM;
 
-  if (cpi->oxcf.pass == 2 && cpi->level_constraint.level_index >= 0 &&
-      !cpi->level_constraint.rc_config_updated) {
-    SVC *const svc = &cpi->svc;
-    const int is_two_pass_svc =
-        (svc->number_spatial_layers > 1) || (svc->number_temporal_layers > 1);
-    const VP9EncoderConfig *const oxcf = &cpi->oxcf;
-    TWO_PASS *const twopass = &cpi->twopass;
-    FIRSTPASS_STATS *stats = &twopass->total_stats;
-    if (is_two_pass_svc) {
-      const double frame_rate = 10000000.0 * stats->count / stats->duration;
-      vp9_update_spatial_layer_framerate(cpi, frame_rate);
-      twopass->bits_left =
-          (int64_t)(stats->duration *
-                    svc->layer_context[svc->spatial_layer_id].target_bandwidth /
-                    10000000.0);
-    } else {
-      twopass->bits_left =
-          (int64_t)(stats->duration * oxcf->target_bandwidth / 10000000.0);
-    }
-    cpi->level_constraint.rc_config_updated = 1;
-  }
-
   if (img != NULL) {
     res = validate_img(ctx, img);
     if (res == VPX_CODEC_OK) {
       // There's no codec control for multiple alt-refs so check the encoder
       // instance for its status to determine the compressed data size.
       data_sz = ctx->cfg.g_w * ctx->cfg.g_h * get_image_bps(img) / 8 *
-                (cpi->multi_arf_allowed ? 8 : 2);
+                (cpi->multi_layer_arf ? 8 : 2);
       if (data_sz < kMinCompressedSize) data_sz = kMinCompressedSize;
       if (ctx->cx_data == NULL || ctx->cx_data_sz < data_sz) {
         ctx->cx_data_sz = data_sz;
@@ -1135,6 +1118,12 @@
     }
   }
 
+  if (!ctx->pts_offset_initialized) {
+    ctx->pts_offset = pts;
+    ctx->pts_offset_initialized = 1;
+  }
+  pts -= ctx->pts_offset;
+
   pick_quickcompress_mode(ctx, duration, deadline);
   vpx_codec_pkt_list_init(&ctx->pkt_list);
 
@@ -1167,12 +1156,15 @@
   if (res == VPX_CODEC_OK) {
     unsigned int lib_flags = 0;
     YV12_BUFFER_CONFIG sd;
-    int64_t dst_time_stamp = timebase_units_to_ticks(timebase, pts);
+    int64_t dst_time_stamp = timebase_units_to_ticks(timestamp_ratio, pts);
     int64_t dst_end_time_stamp =
-        timebase_units_to_ticks(timebase, pts + duration);
+        timebase_units_to_ticks(timestamp_ratio, pts + duration);
     size_t size, cx_data_sz;
     unsigned char *cx_data;
 
+    cpi->svc.timebase_fac = timebase_units_to_ticks(timestamp_ratio, 1);
+    cpi->svc.time_stamp_superframe = dst_time_stamp;
+
     // Set up internal flags
     if (ctx->base.init_flags & VPX_CODEC_USE_PSNR) cpi->b_calculate_psnr = 1;
 
@@ -1208,114 +1200,137 @@
       }
     }
 
-    while (cx_data_sz >= ctx->cx_data_sz / 2 &&
-           -1 != vp9_get_compressed_data(cpi, &lib_flags, &size, cx_data,
-                                         &dst_time_stamp, &dst_end_time_stamp,
-                                         !img)) {
-      if (size || (cpi->use_svc && cpi->svc.skip_enhancement_layer)) {
-        vpx_codec_cx_pkt_t pkt;
+    if (cpi->oxcf.pass == 1 && !cpi->use_svc) {
+#if !CONFIG_REALTIME_ONLY
+      // compute first pass stats
+      if (img) {
+        int ret;
+        vpx_codec_cx_pkt_t fps_pkt;
+        ENCODE_FRAME_RESULT encode_frame_result;
+        vp9_init_encode_frame_result(&encode_frame_result);
+        // TODO(angiebird): Call vp9_first_pass directly
+        ret = vp9_get_compressed_data(cpi, &lib_flags, &size, cx_data,
+                                      &dst_time_stamp, &dst_end_time_stamp,
+                                      !img, &encode_frame_result);
+        assert(size == 0);  // There is no compressed data in the first pass
+        (void)ret;
+        assert(ret == 0);
+        fps_pkt = get_first_pass_stats_pkt(&cpi->twopass.this_frame_stats);
+        vpx_codec_pkt_list_add(&ctx->pkt_list.head, &fps_pkt);
+      } else {
+        if (!cpi->twopass.first_pass_done) {
+          vpx_codec_cx_pkt_t fps_pkt;
+          vp9_end_first_pass(cpi);
+          fps_pkt = get_first_pass_stats_pkt(&cpi->twopass.total_stats);
+          vpx_codec_pkt_list_add(&ctx->pkt_list.head, &fps_pkt);
+        }
+      }
+#else   // !CONFIG_REALTIME_ONLY
+      assert(0);
+#endif  // !CONFIG_REALTIME_ONLY
+    } else {
+      ENCODE_FRAME_RESULT encode_frame_result;
+      vp9_init_encode_frame_result(&encode_frame_result);
+      while (cx_data_sz >= ctx->cx_data_sz / 2 &&
+             -1 != vp9_get_compressed_data(cpi, &lib_flags, &size, cx_data,
+                                           &dst_time_stamp, &dst_end_time_stamp,
+                                           !img, &encode_frame_result)) {
+        // Pack psnr pkt
+        if (size > 0 && !cpi->use_svc) {
+          // TODO(angiebird): Figure out why we don't need psnr pkt when
+          // use_svc is on
+          PSNR_STATS psnr;
+          if (vp9_get_psnr(cpi, &psnr)) {
+            vpx_codec_cx_pkt_t psnr_pkt = get_psnr_pkt(&psnr);
+            vpx_codec_pkt_list_add(&ctx->pkt_list.head, &psnr_pkt);
+          }
+        }
 
-#if CONFIG_SPATIAL_SVC
-        if (cpi->use_svc)
-          cpi->svc
-              .layer_context[cpi->svc.spatial_layer_id *
-                             cpi->svc.number_temporal_layers]
-              .layer_size += size;
-#endif
+        if (size || (cpi->use_svc && cpi->svc.skip_enhancement_layer)) {
+          // Pack invisible frames with the next visible frame
+          if (!cpi->common.show_frame ||
+              (cpi->use_svc && cpi->svc.spatial_layer_id <
+                                   cpi->svc.number_spatial_layers - 1)) {
+            if (ctx->pending_cx_data == 0) ctx->pending_cx_data = cx_data;
+            ctx->pending_cx_data_sz += size;
+            if (size)
+              ctx->pending_frame_sizes[ctx->pending_frame_count++] = size;
+            ctx->pending_frame_magnitude |= size;
+            cx_data += size;
+            cx_data_sz -= size;
+            pkt.data.frame.width[cpi->svc.spatial_layer_id] = cpi->common.width;
+            pkt.data.frame.height[cpi->svc.spatial_layer_id] =
+                cpi->common.height;
+            pkt.data.frame.spatial_layer_encoded[cpi->svc.spatial_layer_id] =
+                1 - cpi->svc.drop_spatial_layer[cpi->svc.spatial_layer_id];
 
-        // Pack invisible frames with the next visible frame
-        if (!cpi->common.show_frame ||
-            (cpi->use_svc &&
-             cpi->svc.spatial_layer_id < cpi->svc.number_spatial_layers - 1)) {
-          if (ctx->pending_cx_data == 0) ctx->pending_cx_data = cx_data;
-          ctx->pending_cx_data_sz += size;
-          ctx->pending_frame_sizes[ctx->pending_frame_count++] = size;
-          ctx->pending_frame_magnitude |= size;
-          cx_data += size;
-          cx_data_sz -= size;
+            if (ctx->output_cx_pkt_cb.output_cx_pkt) {
+              pkt.kind = VPX_CODEC_CX_FRAME_PKT;
+              pkt.data.frame.pts =
+                  ticks_to_timebase_units(timestamp_ratio, dst_time_stamp) +
+                  ctx->pts_offset;
+              pkt.data.frame.duration = (unsigned long)ticks_to_timebase_units(
+                  timestamp_ratio, dst_end_time_stamp - dst_time_stamp);
+              pkt.data.frame.flags = get_frame_pkt_flags(cpi, lib_flags);
+              pkt.data.frame.buf = ctx->pending_cx_data;
+              pkt.data.frame.sz = size;
+              ctx->pending_cx_data = NULL;
+              ctx->pending_cx_data_sz = 0;
+              ctx->pending_frame_count = 0;
+              ctx->pending_frame_magnitude = 0;
+              ctx->output_cx_pkt_cb.output_cx_pkt(
+                  &pkt, ctx->output_cx_pkt_cb.user_priv);
+            }
+            continue;
+          }
+
+          // Add the frame packet to the list of returned packets.
+          pkt.kind = VPX_CODEC_CX_FRAME_PKT;
+          pkt.data.frame.pts =
+              ticks_to_timebase_units(timestamp_ratio, dst_time_stamp) +
+              ctx->pts_offset;
+          pkt.data.frame.duration = (unsigned long)ticks_to_timebase_units(
+              timestamp_ratio, dst_end_time_stamp - dst_time_stamp);
+          pkt.data.frame.flags = get_frame_pkt_flags(cpi, lib_flags);
           pkt.data.frame.width[cpi->svc.spatial_layer_id] = cpi->common.width;
           pkt.data.frame.height[cpi->svc.spatial_layer_id] = cpi->common.height;
+          pkt.data.frame.spatial_layer_encoded[cpi->svc.spatial_layer_id] =
+              1 - cpi->svc.drop_spatial_layer[cpi->svc.spatial_layer_id];
 
-          if (ctx->output_cx_pkt_cb.output_cx_pkt) {
-            pkt.kind = VPX_CODEC_CX_FRAME_PKT;
-            pkt.data.frame.pts =
-                ticks_to_timebase_units(timebase, dst_time_stamp);
-            pkt.data.frame.duration = (unsigned long)ticks_to_timebase_units(
-                timebase, dst_end_time_stamp - dst_time_stamp);
-            pkt.data.frame.flags = get_frame_pkt_flags(cpi, lib_flags);
+          if (ctx->pending_cx_data) {
+            if (size)
+              ctx->pending_frame_sizes[ctx->pending_frame_count++] = size;
+            ctx->pending_frame_magnitude |= size;
+            ctx->pending_cx_data_sz += size;
+            // write the superframe index only when no output callback is set
+            if (!ctx->output_cx_pkt_cb.output_cx_pkt)
+              size += write_superframe_index(ctx);
             pkt.data.frame.buf = ctx->pending_cx_data;
-            pkt.data.frame.sz = size;
+            pkt.data.frame.sz = ctx->pending_cx_data_sz;
             ctx->pending_cx_data = NULL;
             ctx->pending_cx_data_sz = 0;
             ctx->pending_frame_count = 0;
             ctx->pending_frame_magnitude = 0;
+          } else {
+            pkt.data.frame.buf = cx_data;
+            pkt.data.frame.sz = size;
+          }
+          pkt.data.frame.partition_id = -1;
+
+          if (ctx->output_cx_pkt_cb.output_cx_pkt)
             ctx->output_cx_pkt_cb.output_cx_pkt(
                 &pkt, ctx->output_cx_pkt_cb.user_priv);
+          else
+            vpx_codec_pkt_list_add(&ctx->pkt_list.head, &pkt);
+
+          cx_data += size;
+          cx_data_sz -= size;
+          if (is_one_pass_cbr_svc(cpi) &&
+              (cpi->svc.spatial_layer_id ==
+               cpi->svc.number_spatial_layers - 1)) {
+            // Encoded all spatial layers; exit loop.
+            break;
           }
-          continue;
-        }
-
-        // Add the frame packet to the list of returned packets.
-        pkt.kind = VPX_CODEC_CX_FRAME_PKT;
-        pkt.data.frame.pts = ticks_to_timebase_units(timebase, dst_time_stamp);
-        pkt.data.frame.duration = (unsigned long)ticks_to_timebase_units(
-            timebase, dst_end_time_stamp - dst_time_stamp);
-        pkt.data.frame.flags = get_frame_pkt_flags(cpi, lib_flags);
-        pkt.data.frame.width[cpi->svc.spatial_layer_id] = cpi->common.width;
-        pkt.data.frame.height[cpi->svc.spatial_layer_id] = cpi->common.height;
-
-        if (ctx->pending_cx_data) {
-          if (size) ctx->pending_frame_sizes[ctx->pending_frame_count++] = size;
-          ctx->pending_frame_magnitude |= size;
-          ctx->pending_cx_data_sz += size;
-          // write the superframe only for the case when
-          if (!ctx->output_cx_pkt_cb.output_cx_pkt)
-            size += write_superframe_index(ctx);
-          pkt.data.frame.buf = ctx->pending_cx_data;
-          pkt.data.frame.sz = ctx->pending_cx_data_sz;
-          ctx->pending_cx_data = NULL;
-          ctx->pending_cx_data_sz = 0;
-          ctx->pending_frame_count = 0;
-          ctx->pending_frame_magnitude = 0;
-        } else {
-          pkt.data.frame.buf = cx_data;
-          pkt.data.frame.sz = size;
-        }
-        pkt.data.frame.partition_id = -1;
-
-        if (ctx->output_cx_pkt_cb.output_cx_pkt)
-          ctx->output_cx_pkt_cb.output_cx_pkt(&pkt,
-                                              ctx->output_cx_pkt_cb.user_priv);
-        else
-          vpx_codec_pkt_list_add(&ctx->pkt_list.head, &pkt);
-
-        cx_data += size;
-        cx_data_sz -= size;
-#if CONFIG_SPATIAL_SVC && defined(VPX_TEST_SPATIAL_SVC)
-        if (cpi->use_svc && !ctx->output_cx_pkt_cb.output_cx_pkt) {
-          vpx_codec_cx_pkt_t pkt_sizes, pkt_psnr;
-          int sl;
-          vp9_zero(pkt_sizes);
-          vp9_zero(pkt_psnr);
-          pkt_sizes.kind = VPX_CODEC_SPATIAL_SVC_LAYER_SIZES;
-          pkt_psnr.kind = VPX_CODEC_SPATIAL_SVC_LAYER_PSNR;
-          for (sl = 0; sl < cpi->svc.number_spatial_layers; ++sl) {
-            LAYER_CONTEXT *lc =
-                &cpi->svc.layer_context[sl * cpi->svc.number_temporal_layers];
-            pkt_sizes.data.layer_sizes[sl] = lc->layer_size;
-            pkt_psnr.data.layer_psnr[sl] = lc->psnr_pkt;
-            lc->layer_size = 0;
-          }
-
-          vpx_codec_pkt_list_add(&ctx->pkt_list.head, &pkt_sizes);
-
-          vpx_codec_pkt_list_add(&ctx->pkt_list.head, &pkt_psnr);
-        }
-#endif
-        if (is_one_pass_cbr_svc(cpi) &&
-            (cpi->svc.spatial_layer_id == cpi->svc.number_spatial_layers - 1)) {
-          // Encoded all spatial layers; exit loop.
-          break;
         }
       }
     }
@@ -1365,9 +1380,9 @@
   vp9_ref_frame_t *const frame = va_arg(args, vp9_ref_frame_t *);
 
   if (frame != NULL) {
-    YV12_BUFFER_CONFIG *fb = get_ref_frame(&ctx->cpi->common, frame->idx);
+    const int fb_idx = ctx->cpi->common.cur_show_frame_fb_idx;
+    YV12_BUFFER_CONFIG *fb = get_buf_frame(&ctx->cpi->common, fb_idx);
     if (fb == NULL) return VPX_CODEC_ERROR;
-
     yuvconfig2image(&frame->img, fb, NULL);
     return VPX_CODEC_OK;
   }
@@ -1494,22 +1509,23 @@
   vpx_svc_layer_id_t *const data = va_arg(args, vpx_svc_layer_id_t *);
   VP9_COMP *const cpi = (VP9_COMP *)ctx->cpi;
   SVC *const svc = &cpi->svc;
+  int sl;
 
-  svc->first_spatial_layer_to_encode = data->spatial_layer_id;
   svc->spatial_layer_to_encode = data->spatial_layer_id;
+  svc->first_spatial_layer_to_encode = data->spatial_layer_id;
+  // TODO(jianj): Deprecated to be removed.
   svc->temporal_layer_id = data->temporal_layer_id;
+  // Allow for setting temporal layer per spatial layer for superframe.
+  for (sl = 0; sl < cpi->svc.number_spatial_layers; ++sl) {
+    svc->temporal_layer_id_per_spatial[sl] =
+        data->temporal_layer_id_per_spatial[sl];
+  }
   // Checks on valid layer_id input.
   if (svc->temporal_layer_id < 0 ||
       svc->temporal_layer_id >= (int)ctx->cfg.ts_number_layers) {
     return VPX_CODEC_INVALID_PARAM;
   }
-  if (svc->first_spatial_layer_to_encode < 0 ||
-      svc->first_spatial_layer_to_encode >= (int)ctx->cfg.ss_number_layers) {
-    return VPX_CODEC_INVALID_PARAM;
-  }
-  // First spatial layer to encode not implemented for two-pass.
-  if (is_two_pass_svc(cpi) && svc->first_spatial_layer_to_encode > 0)
-    return VPX_CODEC_INVALID_PARAM;
+
   return VPX_CODEC_OK;
 }
 
@@ -1549,20 +1565,96 @@
   return VPX_CODEC_OK;
 }
 
+static vpx_codec_err_t ctrl_get_svc_ref_frame_config(vpx_codec_alg_priv_t *ctx,
+                                                     va_list args) {
+  VP9_COMP *const cpi = ctx->cpi;
+  vpx_svc_ref_frame_config_t *data = va_arg(args, vpx_svc_ref_frame_config_t *);
+  int sl;
+  for (sl = 0; sl <= cpi->svc.spatial_layer_id; sl++) {
+    data->update_buffer_slot[sl] = cpi->svc.update_buffer_slot[sl];
+    data->reference_last[sl] = cpi->svc.reference_last[sl];
+    data->reference_golden[sl] = cpi->svc.reference_golden[sl];
+    data->reference_alt_ref[sl] = cpi->svc.reference_altref[sl];
+    data->lst_fb_idx[sl] = cpi->svc.lst_fb_idx[sl];
+    data->gld_fb_idx[sl] = cpi->svc.gld_fb_idx[sl];
+    data->alt_fb_idx[sl] = cpi->svc.alt_fb_idx[sl];
+    // TODO(jianj): Remove these 3, deprecated.
+    data->update_last[sl] = cpi->svc.update_last[sl];
+    data->update_golden[sl] = cpi->svc.update_golden[sl];
+    data->update_alt_ref[sl] = cpi->svc.update_altref[sl];
+  }
+  return VPX_CODEC_OK;
+}
+
 static vpx_codec_err_t ctrl_set_svc_ref_frame_config(vpx_codec_alg_priv_t *ctx,
                                                      va_list args) {
   VP9_COMP *const cpi = ctx->cpi;
   vpx_svc_ref_frame_config_t *data = va_arg(args, vpx_svc_ref_frame_config_t *);
   int sl;
+  cpi->svc.use_set_ref_frame_config = 1;
   for (sl = 0; sl < cpi->svc.number_spatial_layers; ++sl) {
-    cpi->svc.ext_frame_flags[sl] = data->frame_flags[sl];
-    cpi->svc.ext_lst_fb_idx[sl] = data->lst_fb_idx[sl];
-    cpi->svc.ext_gld_fb_idx[sl] = data->gld_fb_idx[sl];
-    cpi->svc.ext_alt_fb_idx[sl] = data->alt_fb_idx[sl];
+    cpi->svc.update_buffer_slot[sl] = data->update_buffer_slot[sl];
+    cpi->svc.reference_last[sl] = data->reference_last[sl];
+    cpi->svc.reference_golden[sl] = data->reference_golden[sl];
+    cpi->svc.reference_altref[sl] = data->reference_alt_ref[sl];
+    cpi->svc.lst_fb_idx[sl] = data->lst_fb_idx[sl];
+    cpi->svc.gld_fb_idx[sl] = data->gld_fb_idx[sl];
+    cpi->svc.alt_fb_idx[sl] = data->alt_fb_idx[sl];
+    cpi->svc.duration[sl] = data->duration[sl];
   }
   return VPX_CODEC_OK;
 }
 
+static vpx_codec_err_t ctrl_set_svc_inter_layer_pred(vpx_codec_alg_priv_t *ctx,
+                                                     va_list args) {
+  const int data = va_arg(args, int);
+  VP9_COMP *const cpi = ctx->cpi;
+  cpi->svc.disable_inter_layer_pred = data;
+  return VPX_CODEC_OK;
+}
+
+static vpx_codec_err_t ctrl_set_svc_frame_drop_layer(vpx_codec_alg_priv_t *ctx,
+                                                     va_list args) {
+  VP9_COMP *const cpi = ctx->cpi;
+  vpx_svc_frame_drop_t *data = va_arg(args, vpx_svc_frame_drop_t *);
+  int sl;
+  cpi->svc.framedrop_mode = data->framedrop_mode;
+  for (sl = 0; sl < cpi->svc.number_spatial_layers; ++sl)
+    cpi->svc.framedrop_thresh[sl] = data->framedrop_thresh[sl];
+  // Don't allow max_consec_drop values below 1.
+  cpi->svc.max_consec_drop = VPXMAX(1, data->max_consec_drop);
+  return VPX_CODEC_OK;
+}
+
+static vpx_codec_err_t ctrl_set_svc_gf_temporal_ref(vpx_codec_alg_priv_t *ctx,
+                                                    va_list args) {
+  VP9_COMP *const cpi = ctx->cpi;
+  const unsigned int data = va_arg(args, unsigned int);
+  cpi->svc.use_gf_temporal_ref = data;
+  return VPX_CODEC_OK;
+}
+
+static vpx_codec_err_t ctrl_set_svc_spatial_layer_sync(
+    vpx_codec_alg_priv_t *ctx, va_list args) {
+  VP9_COMP *const cpi = ctx->cpi;
+  vpx_svc_spatial_layer_sync_t *data =
+      va_arg(args, vpx_svc_spatial_layer_sync_t *);
+  int sl;
+  for (sl = 0; sl < cpi->svc.number_spatial_layers; ++sl)
+    cpi->svc.spatial_layer_sync[sl] = data->spatial_layer_sync[sl];
+  cpi->svc.set_intra_only_frame = data->base_layer_intra_only;
+  return VPX_CODEC_OK;
+}
+
+static vpx_codec_err_t ctrl_set_delta_q_uv(vpx_codec_alg_priv_t *ctx,
+                                           va_list args) {
+  struct vp9_extracfg extra_cfg = ctx->extra_cfg;
+  int data = va_arg(args, int);
+  data = VPXMIN(VPXMAX(data, -15), 15);
+  extra_cfg.delta_q_uv = data;
+  return update_extra_cfg(ctx, &extra_cfg);
+}
+
 static vpx_codec_err_t ctrl_register_cx_callback(vpx_codec_alg_priv_t *ctx,
                                                  va_list args) {
   vpx_codec_priv_output_cx_pkt_cb_pair_t *cbp =
@@ -1603,6 +1695,14 @@
   return update_extra_cfg(ctx, &extra_cfg);
 }
 
+static vpx_codec_err_t ctrl_set_postencode_drop(vpx_codec_alg_priv_t *ctx,
+                                                va_list args) {
+  VP9_COMP *const cpi = ctx->cpi;
+  const unsigned int data = va_arg(args, unsigned int);
+  cpi->rc.ext_use_post_encode_drop = data;
+  return VPX_CODEC_OK;
+}
+
 static vpx_codec_ctrl_fn_map_t encoder_ctrl_maps[] = {
   { VP8_COPY_REFERENCE, ctrl_copy_reference },
 
@@ -1618,6 +1718,7 @@
   { VP8E_SET_STATIC_THRESHOLD, ctrl_set_static_thresh },
   { VP9E_SET_TILE_COLUMNS, ctrl_set_tile_columns },
   { VP9E_SET_TILE_ROWS, ctrl_set_tile_rows },
+  { VP9E_SET_TPL, ctrl_set_tpl_model },
   { VP8E_SET_ARNR_MAXFRAMES, ctrl_set_arnr_max_frames },
   { VP8E_SET_ARNR_STRENGTH, ctrl_set_arnr_strength },
   { VP8E_SET_ARNR_TYPE, ctrl_set_arnr_type },
@@ -1645,7 +1746,13 @@
   { VP9E_SET_RENDER_SIZE, ctrl_set_render_size },
   { VP9E_SET_TARGET_LEVEL, ctrl_set_target_level },
   { VP9E_SET_ROW_MT, ctrl_set_row_mt },
+  { VP9E_SET_POSTENCODE_DROP, ctrl_set_postencode_drop },
   { VP9E_ENABLE_MOTION_VECTOR_UNIT_TEST, ctrl_enable_motion_vector_unit_test },
+  { VP9E_SET_SVC_INTER_LAYER_PRED, ctrl_set_svc_inter_layer_pred },
+  { VP9E_SET_SVC_FRAME_DROP_LAYER, ctrl_set_svc_frame_drop_layer },
+  { VP9E_SET_SVC_GF_TEMPORAL_REF, ctrl_set_svc_gf_temporal_ref },
+  { VP9E_SET_SVC_SPATIAL_LAYER_SYNC, ctrl_set_svc_spatial_layer_sync },
+  { VP9E_SET_DELTA_Q_UV, ctrl_set_delta_q_uv },
 
   // Getters
   { VP8E_GET_LAST_QUANTIZER, ctrl_get_quantizer },
@@ -1654,6 +1761,7 @@
   { VP9E_GET_SVC_LAYER_ID, ctrl_get_svc_layer_id },
   { VP9E_GET_ACTIVEMAP, ctrl_get_active_map },
   { VP9E_GET_LEVEL, ctrl_get_level },
+  { VP9E_GET_SVC_REF_FRAME_CONFIG, ctrl_get_svc_ref_frame_config },
 
   { -1, NULL },
 };
@@ -1662,7 +1770,7 @@
   { 0,
     {
         // NOLINT
-        0,  // g_usage
+        0,  // g_usage (unused)
         8,  // g_threads
         0,  // g_profile
 
@@ -1689,7 +1797,7 @@
         VPX_VBR,      // rc_end_usage
         { NULL, 0 },  // rc_twopass_stats_in
         { NULL, 0 },  // rc_firstpass_mb_stats_in
-        256,          // rc_target_bandwidth
+        256,          // rc_target_bitrate
         0,            // rc_min_quantizer
         63,           // rc_max_quantizer
         25,           // rc_undershoot_pct
@@ -1755,3 +1863,222 @@
       NULL                    // vpx_codec_enc_mr_get_mem_loc_fn_t
   }
 };
+
+static vpx_codec_enc_cfg_t get_enc_cfg(int frame_width, int frame_height,
+                                       vpx_rational_t frame_rate,
+                                       int target_bitrate,
+                                       vpx_enc_pass enc_pass) {
+  vpx_codec_enc_cfg_t enc_cfg = encoder_usage_cfg_map[0].cfg;
+  enc_cfg.g_w = frame_width;
+  enc_cfg.g_h = frame_height;
+  enc_cfg.rc_target_bitrate = target_bitrate;
+  enc_cfg.g_pass = enc_pass;
+  // g_timebase is the inverse of frame_rate
+  enc_cfg.g_timebase.num = frame_rate.den;
+  enc_cfg.g_timebase.den = frame_rate.num;
+  return enc_cfg;
+}
+
+static vp9_extracfg get_extra_cfg() {
+  vp9_extracfg extra_cfg = default_extra_cfg;
+  return extra_cfg;
+}
+
+VP9EncoderConfig vp9_get_encoder_config(int frame_width, int frame_height,
+                                        vpx_rational_t frame_rate,
+                                        int target_bitrate,
+                                        vpx_enc_pass enc_pass) {
+  /* This function will generate the same VP9EncoderConfig used by the
+   * vpxenc command given below.
+   * The configs in the vpxenc command correspond to the parameters of
+   * vp9_get_encoder_config() as follows.
+   *
+   * WIDTH:   frame_width
+   * HEIGHT:  frame_height
+   * FPS:     frame_rate
+   * BITRATE: target_bitrate
+   *
+   * INPUT, OUTPUT, LIMIT will not affect VP9EncoderConfig
+   *
+   * vpxenc command:
+   * INPUT=bus_cif.y4m
+   * OUTPUT=output.webm
+   * WIDTH=352
+   * HEIGHT=288
+   * BITRATE=600
+   * FPS=30/1
+   * LIMIT=150
+   * ./vpxenc --limit=$LIMIT --width=$WIDTH --height=$HEIGHT --fps=$FPS
+   * --lag-in-frames=25 \
+   *  --codec=vp9 --good --cpu-used=0 --threads=0 --profile=0 \
+   *  --min-q=0 --max-q=63 --auto-alt-ref=1 --passes=2 --kf-max-dist=150 \
+   *  --kf-min-dist=0 --drop-frame=0 --static-thresh=0 --bias-pct=50 \
+   *  --minsection-pct=0 --maxsection-pct=150 --arnr-maxframes=7 --psnr \
+   *  --arnr-strength=5 --sharpness=0 --undershoot-pct=100 --overshoot-pct=100 \
+   *  --frame-parallel=0 --tile-columns=0 --cpu-used=0 --end-usage=vbr \
+   *  --target-bitrate=$BITRATE -o $OUTPUT $INPUT
+   */
+
+  VP9EncoderConfig oxcf;
+  vp9_extracfg extra_cfg = get_extra_cfg();
+  vpx_codec_enc_cfg_t enc_cfg = get_enc_cfg(
+      frame_width, frame_height, frame_rate, target_bitrate, enc_pass);
+  set_encoder_config(&oxcf, &enc_cfg, &extra_cfg);
+
+  // These settings are made to match the settings of the vpxenc command.
+  oxcf.key_freq = 150;
+  oxcf.under_shoot_pct = 100;
+  oxcf.over_shoot_pct = 100;
+  oxcf.max_threads = 0;
+  oxcf.tile_columns = 0;
+  oxcf.frame_parallel_decoding_mode = 0;
+  oxcf.two_pass_vbrmax_section = 150;
+  return oxcf;
+}
+
+#define DUMP_STRUCT_VALUE(struct, value) \
+  printf(#value " %" PRId64 "\n", (int64_t)(struct)->value)
+
+void vp9_dump_encoder_config(const VP9EncoderConfig *oxcf) {
+  DUMP_STRUCT_VALUE(oxcf, profile);
+  DUMP_STRUCT_VALUE(oxcf, bit_depth);
+  DUMP_STRUCT_VALUE(oxcf, width);
+  DUMP_STRUCT_VALUE(oxcf, height);
+  DUMP_STRUCT_VALUE(oxcf, input_bit_depth);
+  DUMP_STRUCT_VALUE(oxcf, init_framerate);
+  // TODO(angiebird): dump g_timebase
+  // TODO(angiebird): dump g_timebase_in_ts
+
+  DUMP_STRUCT_VALUE(oxcf, target_bandwidth);
+
+  DUMP_STRUCT_VALUE(oxcf, noise_sensitivity);
+  DUMP_STRUCT_VALUE(oxcf, sharpness);
+  DUMP_STRUCT_VALUE(oxcf, speed);
+  DUMP_STRUCT_VALUE(oxcf, rc_max_intra_bitrate_pct);
+  DUMP_STRUCT_VALUE(oxcf, rc_max_inter_bitrate_pct);
+  DUMP_STRUCT_VALUE(oxcf, gf_cbr_boost_pct);
+
+  DUMP_STRUCT_VALUE(oxcf, mode);
+  DUMP_STRUCT_VALUE(oxcf, pass);
+
+  // Key Framing Operations
+  DUMP_STRUCT_VALUE(oxcf, auto_key);
+  DUMP_STRUCT_VALUE(oxcf, key_freq);
+
+  DUMP_STRUCT_VALUE(oxcf, lag_in_frames);
+
+  // ----------------------------------------------------------------
+  // DATARATE CONTROL OPTIONS
+
+  // vbr, cbr, constrained quality or constant quality
+  DUMP_STRUCT_VALUE(oxcf, rc_mode);
+
+  // buffer targeting aggressiveness
+  DUMP_STRUCT_VALUE(oxcf, under_shoot_pct);
+  DUMP_STRUCT_VALUE(oxcf, over_shoot_pct);
+
+  // buffering parameters
+  // TODO(angiebird): dump starting_buffer_level_ms
+  // TODO(angiebird): dump optimal_buffer_level_ms
+  // TODO(angiebird): dump maximum_buffer_size_ms
+
+  // Frame drop threshold.
+  DUMP_STRUCT_VALUE(oxcf, drop_frames_water_mark);
+
+  // controlling quality
+  DUMP_STRUCT_VALUE(oxcf, fixed_q);
+  DUMP_STRUCT_VALUE(oxcf, worst_allowed_q);
+  DUMP_STRUCT_VALUE(oxcf, best_allowed_q);
+  DUMP_STRUCT_VALUE(oxcf, cq_level);
+  DUMP_STRUCT_VALUE(oxcf, aq_mode);
+
+  // Special handling of Adaptive Quantization for AltRef frames
+  DUMP_STRUCT_VALUE(oxcf, alt_ref_aq);
+
+  // Internal frame size scaling.
+  DUMP_STRUCT_VALUE(oxcf, resize_mode);
+  DUMP_STRUCT_VALUE(oxcf, scaled_frame_width);
+  DUMP_STRUCT_VALUE(oxcf, scaled_frame_height);
+
+  // Enable feature to reduce the frame quantization every x frames.
+  DUMP_STRUCT_VALUE(oxcf, frame_periodic_boost);
+
+  // two pass datarate control
+  DUMP_STRUCT_VALUE(oxcf, two_pass_vbrbias);
+  DUMP_STRUCT_VALUE(oxcf, two_pass_vbrmin_section);
+  DUMP_STRUCT_VALUE(oxcf, two_pass_vbrmax_section);
+  DUMP_STRUCT_VALUE(oxcf, vbr_corpus_complexity);
+  // END DATARATE CONTROL OPTIONS
+  // ----------------------------------------------------------------
+
+  // Spatial and temporal scalability.
+  DUMP_STRUCT_VALUE(oxcf, ss_number_layers);
+  DUMP_STRUCT_VALUE(oxcf, ts_number_layers);
+
+  // Bitrate allocation for spatial layers.
+  // TODO(angiebird): dump layer_target_bitrate[VPX_MAX_LAYERS]
+  // TODO(angiebird): dump ss_target_bitrate[VPX_SS_MAX_LAYERS]
+  // TODO(angiebird): dump ss_enable_auto_arf[VPX_SS_MAX_LAYERS]
+  // TODO(angiebird): dump ts_rate_decimator[VPX_TS_MAX_LAYERS]
+
+  DUMP_STRUCT_VALUE(oxcf, enable_auto_arf);
+  DUMP_STRUCT_VALUE(oxcf, encode_breakout);
+  DUMP_STRUCT_VALUE(oxcf, error_resilient_mode);
+  DUMP_STRUCT_VALUE(oxcf, frame_parallel_decoding_mode);
+
+  DUMP_STRUCT_VALUE(oxcf, arnr_max_frames);
+  DUMP_STRUCT_VALUE(oxcf, arnr_strength);
+
+  DUMP_STRUCT_VALUE(oxcf, min_gf_interval);
+  DUMP_STRUCT_VALUE(oxcf, max_gf_interval);
+
+  DUMP_STRUCT_VALUE(oxcf, tile_columns);
+  DUMP_STRUCT_VALUE(oxcf, tile_rows);
+
+  DUMP_STRUCT_VALUE(oxcf, enable_tpl_model);
+
+  DUMP_STRUCT_VALUE(oxcf, max_threads);
+
+  DUMP_STRUCT_VALUE(oxcf, target_level);
+
+  // TODO(angiebird): dump two_pass_stats_in
+
+#if CONFIG_FP_MB_STATS
+  // TODO(angiebird): dump firstpass_mb_stats_in
+#endif
+
+  DUMP_STRUCT_VALUE(oxcf, tuning);
+  DUMP_STRUCT_VALUE(oxcf, content);
+#if CONFIG_VP9_HIGHBITDEPTH
+  DUMP_STRUCT_VALUE(oxcf, use_highbitdepth);
+#endif
+  DUMP_STRUCT_VALUE(oxcf, color_space);
+  DUMP_STRUCT_VALUE(oxcf, color_range);
+  DUMP_STRUCT_VALUE(oxcf, render_width);
+  DUMP_STRUCT_VALUE(oxcf, render_height);
+  DUMP_STRUCT_VALUE(oxcf, temporal_layering_mode);
+
+  DUMP_STRUCT_VALUE(oxcf, row_mt);
+  DUMP_STRUCT_VALUE(oxcf, motion_vector_unit_test);
+}
+
+FRAME_INFO vp9_get_frame_info(const VP9EncoderConfig *oxcf) {
+  FRAME_INFO frame_info;
+  int dummy;
+  frame_info.frame_width = oxcf->width;
+  frame_info.frame_height = oxcf->height;
+  frame_info.render_frame_width = oxcf->width;
+  frame_info.render_frame_height = oxcf->height;
+  frame_info.bit_depth = oxcf->bit_depth;
+  vp9_set_mi_size(&frame_info.mi_rows, &frame_info.mi_cols, &dummy,
+                  frame_info.frame_width, frame_info.frame_height);
+  vp9_set_mb_size(&frame_info.mb_rows, &frame_info.mb_cols, &frame_info.num_mbs,
+                  frame_info.mi_rows, frame_info.mi_cols);
+  // TODO(angiebird): Figure out how to get subsampling_x/y here
+  return frame_info;
+}
+
+void vp9_set_first_pass_stats(VP9EncoderConfig *oxcf,
+                              const vpx_fixed_buf_t *stats) {
+  oxcf->two_pass_stats_in = *stats;
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.h
new file mode 100644
index 0000000..08569fc
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_cx_iface.h
@@ -0,0 +1,48 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VP9_VP9_CX_IFACE_H_
+#define VPX_VP9_VP9_CX_IFACE_H_
+#include "vp9/encoder/vp9_encoder.h"
+#include "vp9/common/vp9_onyxc_int.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+VP9EncoderConfig vp9_get_encoder_config(int frame_width, int frame_height,
+                                        vpx_rational_t frame_rate,
+                                        int target_bitrate,
+                                        vpx_enc_pass enc_pass);
+
+void vp9_dump_encoder_config(const VP9EncoderConfig *oxcf);
+
+FRAME_INFO vp9_get_frame_info(const VP9EncoderConfig *oxcf);
+
+static INLINE int64_t
+timebase_units_to_ticks(const vpx_rational64_t *timestamp_ratio, int64_t n) {
+  return n * timestamp_ratio->num / timestamp_ratio->den;
+}
+
+static INLINE int64_t
+ticks_to_timebase_units(const vpx_rational64_t *timestamp_ratio, int64_t n) {
+  int64_t round = timestamp_ratio->num / 2;
+  if (round > 0) --round;
+  return (n * timestamp_ratio->den + round) / timestamp_ratio->num;
+}
+
+void vp9_set_first_pass_stats(VP9EncoderConfig *oxcf,
+                              const vpx_fixed_buf_t *stats);
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif  // VPX_VP9_VP9_CX_IFACE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.c
index 657490f..35ecbaf 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.c
@@ -97,7 +97,7 @@
     const uint8_t *data, unsigned int data_sz, vpx_codec_stream_info_t *si,
     int *is_intra_only, vpx_decrypt_cb decrypt_cb, void *decrypt_state) {
   int intra_only_flag = 0;
-  uint8_t clear_buffer[10];
+  uint8_t clear_buffer[11];
 
   if (data + data_sz <= data) return VPX_CODEC_INVALID_PARAM;
 
@@ -158,6 +158,9 @@
         if (profile > PROFILE_0) {
           if (!parse_bitdepth_colorspace_sampling(profile, &rb))
             return VPX_CODEC_UNSUP_BITSTREAM;
+          // The colorspace info may cause vp9_read_frame_size() to need 11
+          // bytes.
+          if (data_sz < 11) return VPX_CODEC_UNSUP_BITSTREAM;
         }
         rb.bit_offset += REF_FRAMES;  // refresh_frame_flags
         vp9_read_frame_size(&rb, (int *)&si->w, (int *)&si->h);
@@ -235,6 +238,19 @@
   flags->noise_level = ctx->postproc_cfg.noise_level;
 }
 
+#undef ERROR
+#define ERROR(str)                  \
+  do {                              \
+    ctx->base.err_detail = str;     \
+    return VPX_CODEC_INVALID_PARAM; \
+  } while (0)
+
+#define RANGE_CHECK(p, memb, lo, hi)                                     \
+  do {                                                                   \
+    if (!(((p)->memb == (lo) || (p)->memb > (lo)) && (p)->memb <= (hi))) \
+      ERROR(#memb " out of range [" #lo ".." #hi "]");                   \
+  } while (0)
+
 static vpx_codec_err_t init_decoder(vpx_codec_alg_priv_t *ctx) {
   ctx->last_show_frame = -1;
   ctx->need_resync = 1;
@@ -251,6 +267,12 @@
   ctx->pbi->max_threads = ctx->cfg.threads;
   ctx->pbi->inv_tile_order = ctx->invert_tile_order;
 
+  RANGE_CHECK(ctx, row_mt, 0, 1);
+  ctx->pbi->row_mt = ctx->row_mt;
+
+  RANGE_CHECK(ctx, lpf_opt, 0, 1);
+  ctx->pbi->lpf_mt_opt = ctx->lpf_opt;
+
   // If postprocessing was enabled by the application and a
   // configuration has not been provided, default it.
   if (!ctx->postproc_cfg_set && (ctx->base.init_flags & VPX_CODEC_USE_POSTPROC))
@@ -452,11 +474,15 @@
   vp9_ref_frame_t *data = va_arg(args, vp9_ref_frame_t *);
 
   if (data) {
-    YV12_BUFFER_CONFIG *fb;
-    fb = get_ref_frame(&ctx->pbi->common, data->idx);
-    if (fb == NULL) return VPX_CODEC_ERROR;
-    yuvconfig2image(&data->img, fb, NULL);
-    return VPX_CODEC_OK;
+    if (ctx->pbi) {
+      const int fb_idx = ctx->pbi->common.cur_show_frame_fb_idx;
+      YV12_BUFFER_CONFIG *fb = get_buf_frame(&ctx->pbi->common, fb_idx);
+      if (fb == NULL) return VPX_CODEC_ERROR;
+      yuvconfig2image(&data->img, fb, NULL);
+      return VPX_CODEC_OK;
+    } else {
+      return VPX_CODEC_ERROR;
+    }
   } else {
     return VPX_CODEC_INVALID_PARAM;
   }
@@ -632,6 +658,20 @@
     return VPX_CODEC_OK;
 }
 
+static vpx_codec_err_t ctrl_set_row_mt(vpx_codec_alg_priv_t *ctx,
+                                       va_list args) {
+  ctx->row_mt = va_arg(args, int);
+
+  return VPX_CODEC_OK;
+}
+
+static vpx_codec_err_t ctrl_enable_lpf_opt(vpx_codec_alg_priv_t *ctx,
+                                           va_list args) {
+  ctx->lpf_opt = va_arg(args, int);
+
+  return VPX_CODEC_OK;
+}
+
 static vpx_codec_ctrl_fn_map_t decoder_ctrl_maps[] = {
   { VP8_COPY_REFERENCE, ctrl_copy_reference },
 
@@ -643,6 +683,8 @@
   { VP9_SET_BYTE_ALIGNMENT, ctrl_set_byte_alignment },
   { VP9_SET_SKIP_LOOP_FILTER, ctrl_set_skip_loop_filter },
   { VP9_DECODE_SVC_SPATIAL_LAYER, ctrl_set_spatial_layer_svc },
+  { VP9D_SET_ROW_MT, ctrl_set_row_mt },
+  { VP9D_SET_LOOP_FILTER_OPT, ctrl_enable_lpf_opt },
 
   // Getters
   { VPXD_GET_LAST_QUANTIZER, ctrl_get_quantizer },
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.h
index 18bc7ab..f60688c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_dx_iface.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_VP9_DX_IFACE_H_
-#define VP9_VP9_DX_IFACE_H_
+#ifndef VPX_VP9_VP9_DX_IFACE_H_
+#define VPX_VP9_VP9_DX_IFACE_H_
 
 #include "vp9/decoder/vp9_decoder.h"
 
@@ -45,6 +45,8 @@
   // Allow for decoding up to a given spatial layer for SVC stream.
   int svc_decoding;
   int svc_spatial_layer;
+  int row_mt;
+  int lpf_opt;
 };
 
-#endif  // VP9_VP9_DX_IFACE_H_
+#endif  // VPX_VP9_VP9_DX_IFACE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_iface_common.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_iface_common.c
new file mode 100644
index 0000000..74d08a5
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_iface_common.c
@@ -0,0 +1,131 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed  by a BSD-style license that can be
+ *  found in the LICENSE file in the root of the source tree. An additional
+ *  intellectual property  rights grant can  be found in the  file PATENTS.
+ *  All contributing  project authors may be  found in the AUTHORS  file in
+ *  the root of the source tree.
+ */
+
+#include "vp9/vp9_iface_common.h"
+void yuvconfig2image(vpx_image_t *img, const YV12_BUFFER_CONFIG *yv12,
+                     void *user_priv) {
+  /** vpx_img_wrap() doesn't allow specifying independent strides for
+   * the Y, U, and V planes, nor other alignment adjustments that
+   * might be representable by a YV12_BUFFER_CONFIG, so we just
+   * initialize all the fields.*/
+  int bps;
+  if (!yv12->subsampling_y) {
+    if (!yv12->subsampling_x) {
+      img->fmt = VPX_IMG_FMT_I444;
+      bps = 24;
+    } else {
+      img->fmt = VPX_IMG_FMT_I422;
+      bps = 16;
+    }
+  } else {
+    if (!yv12->subsampling_x) {
+      img->fmt = VPX_IMG_FMT_I440;
+      bps = 16;
+    } else {
+      img->fmt = VPX_IMG_FMT_I420;
+      bps = 12;
+    }
+  }
+  img->cs = yv12->color_space;
+  img->range = yv12->color_range;
+  img->bit_depth = 8;
+  img->w = yv12->y_stride;
+  img->h = ALIGN_POWER_OF_TWO(yv12->y_height + 2 * VP9_ENC_BORDER_IN_PIXELS, 3);
+  img->d_w = yv12->y_crop_width;
+  img->d_h = yv12->y_crop_height;
+  img->r_w = yv12->render_width;
+  img->r_h = yv12->render_height;
+  img->x_chroma_shift = yv12->subsampling_x;
+  img->y_chroma_shift = yv12->subsampling_y;
+  img->planes[VPX_PLANE_Y] = yv12->y_buffer;
+  img->planes[VPX_PLANE_U] = yv12->u_buffer;
+  img->planes[VPX_PLANE_V] = yv12->v_buffer;
+  img->planes[VPX_PLANE_ALPHA] = NULL;
+  img->stride[VPX_PLANE_Y] = yv12->y_stride;
+  img->stride[VPX_PLANE_U] = yv12->uv_stride;
+  img->stride[VPX_PLANE_V] = yv12->uv_stride;
+  img->stride[VPX_PLANE_ALPHA] = yv12->y_stride;
+#if CONFIG_VP9_HIGHBITDEPTH
+  if (yv12->flags & YV12_FLAG_HIGHBITDEPTH) {
+    // vpx_image_t uses byte strides and a pointer to the first byte
+    // of the image.
+    img->fmt = (vpx_img_fmt_t)(img->fmt | VPX_IMG_FMT_HIGHBITDEPTH);
+    img->bit_depth = yv12->bit_depth;
+    img->planes[VPX_PLANE_Y] = (uint8_t *)CONVERT_TO_SHORTPTR(yv12->y_buffer);
+    img->planes[VPX_PLANE_U] = (uint8_t *)CONVERT_TO_SHORTPTR(yv12->u_buffer);
+    img->planes[VPX_PLANE_V] = (uint8_t *)CONVERT_TO_SHORTPTR(yv12->v_buffer);
+    img->planes[VPX_PLANE_ALPHA] = NULL;
+    img->stride[VPX_PLANE_Y] = 2 * yv12->y_stride;
+    img->stride[VPX_PLANE_U] = 2 * yv12->uv_stride;
+    img->stride[VPX_PLANE_V] = 2 * yv12->uv_stride;
+    img->stride[VPX_PLANE_ALPHA] = 2 * yv12->y_stride;
+  }
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+  img->bps = bps;
+  img->user_priv = user_priv;
+  img->img_data = yv12->buffer_alloc;
+  img->img_data_owner = 0;
+  img->self_allocd = 0;
+}
+
+vpx_codec_err_t image2yuvconfig(const vpx_image_t *img,
+                                YV12_BUFFER_CONFIG *yv12) {
+  yv12->y_buffer = img->planes[VPX_PLANE_Y];
+  yv12->u_buffer = img->planes[VPX_PLANE_U];
+  yv12->v_buffer = img->planes[VPX_PLANE_V];
+
+  yv12->y_crop_width = img->d_w;
+  yv12->y_crop_height = img->d_h;
+  yv12->render_width = img->r_w;
+  yv12->render_height = img->r_h;
+  yv12->y_width = img->d_w;
+  yv12->y_height = img->d_h;
+
+  yv12->uv_width =
+      img->x_chroma_shift == 1 ? (1 + yv12->y_width) / 2 : yv12->y_width;
+  yv12->uv_height =
+      img->y_chroma_shift == 1 ? (1 + yv12->y_height) / 2 : yv12->y_height;
+  yv12->uv_crop_width = yv12->uv_width;
+  yv12->uv_crop_height = yv12->uv_height;
+
+  yv12->y_stride = img->stride[VPX_PLANE_Y];
+  yv12->uv_stride = img->stride[VPX_PLANE_U];
+  yv12->color_space = img->cs;
+  yv12->color_range = img->range;
+
+#if CONFIG_VP9_HIGHBITDEPTH
+  if (img->fmt & VPX_IMG_FMT_HIGHBITDEPTH) {
+    // In vpx_image_t
+    //     planes point to uint8 address of start of data
+    //     stride counts uint8s to reach next row
+    // In YV12_BUFFER_CONFIG
+    //     y_buffer, u_buffer, v_buffer point to uint16 address of data
+    //     stride and border counts in uint16s
+    // This means that all the address calculations in the main body of code
+    // should work correctly.
+    // However, before we do any pixel operations we need to cast the address
+   // to a uint16 pointer and double its value.
+    yv12->y_buffer = CONVERT_TO_BYTEPTR(yv12->y_buffer);
+    yv12->u_buffer = CONVERT_TO_BYTEPTR(yv12->u_buffer);
+    yv12->v_buffer = CONVERT_TO_BYTEPTR(yv12->v_buffer);
+    yv12->y_stride >>= 1;
+    yv12->uv_stride >>= 1;
+    yv12->flags = YV12_FLAG_HIGHBITDEPTH;
+  } else {
+    yv12->flags = 0;
+  }
+  yv12->border = (yv12->y_stride - img->w) / 2;
+#else
+  yv12->border = (img->stride[VPX_PLANE_Y] - img->w) / 2;
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+  yv12->subsampling_x = img->x_chroma_shift;
+  yv12->subsampling_y = img->y_chroma_shift;
+  return VPX_CODEC_OK;
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_iface_common.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_iface_common.h
index 9f27abe..e646917 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_iface_common.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9_iface_common.h
@@ -7,133 +7,27 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef VP9_VP9_IFACE_COMMON_H_
-#define VP9_VP9_IFACE_COMMON_H_
+#ifndef VPX_VP9_VP9_IFACE_COMMON_H_
+#define VPX_VP9_VP9_IFACE_COMMON_H_
 
+#include <assert.h>
 #include "vpx_ports/mem.h"
+#include "vpx/vp8.h"
+#include "vpx_scale/yv12config.h"
+#include "common/vp9_enums.h"
 
-static void yuvconfig2image(vpx_image_t *img, const YV12_BUFFER_CONFIG *yv12,
-                            void *user_priv) {
-  /** vpx_img_wrap() doesn't allow specifying independent strides for
-   * the Y, U, and V planes, nor other alignment adjustments that
-   * might be representable by a YV12_BUFFER_CONFIG, so we just
-   * initialize all the fields.*/
-  int bps;
-  if (!yv12->subsampling_y) {
-    if (!yv12->subsampling_x) {
-      img->fmt = VPX_IMG_FMT_I444;
-      bps = 24;
-    } else {
-      img->fmt = VPX_IMG_FMT_I422;
-      bps = 16;
-    }
-  } else {
-    if (!yv12->subsampling_x) {
-      img->fmt = VPX_IMG_FMT_I440;
-      bps = 16;
-    } else {
-      img->fmt = VPX_IMG_FMT_I420;
-      bps = 12;
-    }
-  }
-  img->cs = yv12->color_space;
-  img->range = yv12->color_range;
-  img->bit_depth = 8;
-  img->w = yv12->y_stride;
-  img->h = ALIGN_POWER_OF_TWO(yv12->y_height + 2 * VP9_ENC_BORDER_IN_PIXELS, 3);
-  img->d_w = yv12->y_crop_width;
-  img->d_h = yv12->y_crop_height;
-  img->r_w = yv12->render_width;
-  img->r_h = yv12->render_height;
-  img->x_chroma_shift = yv12->subsampling_x;
-  img->y_chroma_shift = yv12->subsampling_y;
-  img->planes[VPX_PLANE_Y] = yv12->y_buffer;
-  img->planes[VPX_PLANE_U] = yv12->u_buffer;
-  img->planes[VPX_PLANE_V] = yv12->v_buffer;
-  img->planes[VPX_PLANE_ALPHA] = NULL;
-  img->stride[VPX_PLANE_Y] = yv12->y_stride;
-  img->stride[VPX_PLANE_U] = yv12->uv_stride;
-  img->stride[VPX_PLANE_V] = yv12->uv_stride;
-  img->stride[VPX_PLANE_ALPHA] = yv12->y_stride;
-#if CONFIG_VP9_HIGHBITDEPTH
-  if (yv12->flags & YV12_FLAG_HIGHBITDEPTH) {
-    // vpx_image_t uses byte strides and a pointer to the first byte
-    // of the image.
-    img->fmt = (vpx_img_fmt_t)(img->fmt | VPX_IMG_FMT_HIGHBITDEPTH);
-    img->bit_depth = yv12->bit_depth;
-    img->planes[VPX_PLANE_Y] = (uint8_t *)CONVERT_TO_SHORTPTR(yv12->y_buffer);
-    img->planes[VPX_PLANE_U] = (uint8_t *)CONVERT_TO_SHORTPTR(yv12->u_buffer);
-    img->planes[VPX_PLANE_V] = (uint8_t *)CONVERT_TO_SHORTPTR(yv12->v_buffer);
-    img->planes[VPX_PLANE_ALPHA] = NULL;
-    img->stride[VPX_PLANE_Y] = 2 * yv12->y_stride;
-    img->stride[VPX_PLANE_U] = 2 * yv12->uv_stride;
-    img->stride[VPX_PLANE_V] = 2 * yv12->uv_stride;
-    img->stride[VPX_PLANE_ALPHA] = 2 * yv12->y_stride;
-  }
-#endif  // CONFIG_VP9_HIGHBITDEPTH
-  img->bps = bps;
-  img->user_priv = user_priv;
-  img->img_data = yv12->buffer_alloc;
-  img->img_data_owner = 0;
-  img->self_allocd = 0;
-}
+#ifdef __cplusplus
+extern "C" {
+#endif
 
-static vpx_codec_err_t image2yuvconfig(const vpx_image_t *img,
-                                       YV12_BUFFER_CONFIG *yv12) {
-  yv12->y_buffer = img->planes[VPX_PLANE_Y];
-  yv12->u_buffer = img->planes[VPX_PLANE_U];
-  yv12->v_buffer = img->planes[VPX_PLANE_V];
+void yuvconfig2image(vpx_image_t *img, const YV12_BUFFER_CONFIG *yv12,
+                     void *user_priv);
 
-  yv12->y_crop_width = img->d_w;
-  yv12->y_crop_height = img->d_h;
-  yv12->render_width = img->r_w;
-  yv12->render_height = img->r_h;
-  yv12->y_width = img->d_w;
-  yv12->y_height = img->d_h;
+vpx_codec_err_t image2yuvconfig(const vpx_image_t *img,
+                                YV12_BUFFER_CONFIG *yv12);
 
-  yv12->uv_width =
-      img->x_chroma_shift == 1 ? (1 + yv12->y_width) / 2 : yv12->y_width;
-  yv12->uv_height =
-      img->y_chroma_shift == 1 ? (1 + yv12->y_height) / 2 : yv12->y_height;
-  yv12->uv_crop_width = yv12->uv_width;
-  yv12->uv_crop_height = yv12->uv_height;
-
-  yv12->y_stride = img->stride[VPX_PLANE_Y];
-  yv12->uv_stride = img->stride[VPX_PLANE_U];
-  yv12->color_space = img->cs;
-  yv12->color_range = img->range;
-
-#if CONFIG_VP9_HIGHBITDEPTH
-  if (img->fmt & VPX_IMG_FMT_HIGHBITDEPTH) {
-    // In vpx_image_t
-    //     planes point to uint8 address of start of data
-    //     stride counts uint8s to reach next row
-    // In YV12_BUFFER_CONFIG
-    //     y_buffer, u_buffer, v_buffer point to uint16 address of data
-    //     stride and border counts in uint16s
-    // This means that all the address calculations in the main body of code
-    // should work correctly.
-    // However, before we do any pixel operations we need to cast the address
-    // to a uint16 ponter and double its value.
-    yv12->y_buffer = CONVERT_TO_BYTEPTR(yv12->y_buffer);
-    yv12->u_buffer = CONVERT_TO_BYTEPTR(yv12->u_buffer);
-    yv12->v_buffer = CONVERT_TO_BYTEPTR(yv12->v_buffer);
-    yv12->y_stride >>= 1;
-    yv12->uv_stride >>= 1;
-    yv12->flags = YV12_FLAG_HIGHBITDEPTH;
-  } else {
-    yv12->flags = 0;
-  }
-  yv12->border = (yv12->y_stride - img->w) / 2;
-#else
-  yv12->border = (img->stride[VPX_PLANE_Y] - img->w) / 2;
-#endif  // CONFIG_VP9_HIGHBITDEPTH
-  yv12->subsampling_x = img->x_chroma_shift;
-  yv12->subsampling_y = img->y_chroma_shift;
-  return VPX_CODEC_OK;
-}
-
-static VP9_REFFRAME ref_frame_to_vp9_reframe(vpx_ref_frame_type_t frame) {
+static INLINE VP9_REFFRAME
+ref_frame_to_vp9_reframe(vpx_ref_frame_type_t frame) {
   switch (frame) {
     case VP8_LAST_FRAME: return VP9_LAST_FLAG;
     case VP8_GOLD_FRAME: return VP9_GOLD_FLAG;
@@ -142,4 +36,9 @@
   assert(0 && "Invalid Reference Frame");
   return VP9_LAST_FLAG;
 }
-#endif  // VP9_VP9_IFACE_COMMON_H_
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif  // VPX_VP9_VP9_IFACE_COMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9cx.mk b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9cx.mk
index 6186d46..ad77450 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9cx.mk
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9cx.mk
@@ -16,6 +16,10 @@
 VP9_CX_SRCS_REMOVE-no  += $(VP9_COMMON_SRCS_REMOVE-no)
 
 VP9_CX_SRCS-yes += vp9_cx_iface.c
+VP9_CX_SRCS-yes += vp9_cx_iface.h
+
+VP9_CX_SRCS-$(CONFIG_RATE_CTRL) += simple_encode.cc
+VP9_CX_SRCS-$(CONFIG_RATE_CTRL) += simple_encode.h
 
 VP9_CX_SRCS-yes += encoder/vp9_bitstream.c
 VP9_CX_SRCS-yes += encoder/vp9_context_tree.c
@@ -64,6 +68,7 @@
 VP9_CX_SRCS-yes += encoder/vp9_rd.c
 VP9_CX_SRCS-yes += encoder/vp9_rdopt.c
 VP9_CX_SRCS-yes += encoder/vp9_pickmode.c
+VP9_CX_SRCS-yes += encoder/vp9_partition_models.h
 VP9_CX_SRCS-yes += encoder/vp9_segmentation.c
 VP9_CX_SRCS-yes += encoder/vp9_segmentation.h
 VP9_CX_SRCS-yes += encoder/vp9_speed_features.c
@@ -74,6 +79,9 @@
 VP9_CX_SRCS-yes += encoder/vp9_resize.c
 VP9_CX_SRCS-yes += encoder/vp9_resize.h
 VP9_CX_SRCS-$(CONFIG_INTERNAL_STATS) += encoder/vp9_blockiness.c
+VP9_CX_SRCS-$(CONFIG_INTERNAL_STATS) += encoder/vp9_blockiness.h
+VP9_CX_SRCS-$(CONFIG_NON_GREEDY_MV) += encoder/vp9_non_greedy_mv.c
+VP9_CX_SRCS-$(CONFIG_NON_GREEDY_MV) += encoder/vp9_non_greedy_mv.h
 
 VP9_CX_SRCS-yes += encoder/vp9_tokenize.c
 VP9_CX_SRCS-yes += encoder/vp9_treewriter.c
@@ -101,23 +109,24 @@
 VP9_CX_SRCS-yes += encoder/vp9_mbgraph.h
 
 VP9_CX_SRCS-$(HAVE_SSE4_1) += encoder/x86/temporal_filter_sse4.c
+VP9_CX_SRCS-$(HAVE_SSE4_1) += encoder/x86/temporal_filter_constants.h
 
 VP9_CX_SRCS-$(HAVE_SSE2) += encoder/x86/vp9_quantize_sse2.c
 VP9_CX_SRCS-$(HAVE_AVX2) += encoder/x86/vp9_quantize_avx2.c
 VP9_CX_SRCS-$(HAVE_AVX) += encoder/x86/vp9_diamond_search_sad_avx.c
 ifeq ($(CONFIG_VP9_HIGHBITDEPTH),yes)
 VP9_CX_SRCS-$(HAVE_SSE2) += encoder/x86/vp9_highbd_block_error_intrin_sse2.c
+VP9_CX_SRCS-$(HAVE_SSE4_1) += encoder/x86/highbd_temporal_filter_sse4.c
 endif
 
 VP9_CX_SRCS-$(HAVE_SSE2) += encoder/x86/vp9_dct_sse2.asm
 VP9_CX_SRCS-$(HAVE_SSE2) += encoder/x86/vp9_error_sse2.asm
 
-ifeq ($(ARCH_X86_64),yes)
+ifeq ($(VPX_ARCH_X86_64),yes)
 VP9_CX_SRCS-$(HAVE_SSSE3) += encoder/x86/vp9_quantize_ssse3_x86_64.asm
 endif
 
 VP9_CX_SRCS-$(HAVE_SSE2) += encoder/x86/vp9_dct_intrin_sse2.c
-VP9_CX_SRCS-$(HAVE_SSSE3) += encoder/x86/vp9_dct_ssse3.c
 VP9_CX_SRCS-$(HAVE_SSSE3) += encoder/x86/vp9_frame_scale_ssse3.c
 
 ifeq ($(CONFIG_VP9_TEMPORAL_DENOISING),yes)
@@ -130,20 +139,34 @@
 ifneq ($(CONFIG_VP9_HIGHBITDEPTH),yes)
 VP9_CX_SRCS-$(HAVE_NEON) += encoder/arm/neon/vp9_error_neon.c
 endif
-VP9_CX_SRCS-$(HAVE_NEON) += encoder/arm/neon/vp9_dct_neon.c
 VP9_CX_SRCS-$(HAVE_NEON) += encoder/arm/neon/vp9_frame_scale_neon.c
 VP9_CX_SRCS-$(HAVE_NEON) += encoder/arm/neon/vp9_quantize_neon.c
 
 VP9_CX_SRCS-$(HAVE_MSA) += encoder/mips/msa/vp9_error_msa.c
+
+ifneq ($(CONFIG_VP9_HIGHBITDEPTH),yes)
 VP9_CX_SRCS-$(HAVE_MSA) += encoder/mips/msa/vp9_fdct4x4_msa.c
 VP9_CX_SRCS-$(HAVE_MSA) += encoder/mips/msa/vp9_fdct8x8_msa.c
 VP9_CX_SRCS-$(HAVE_MSA) += encoder/mips/msa/vp9_fdct16x16_msa.c
 VP9_CX_SRCS-$(HAVE_MSA) += encoder/mips/msa/vp9_fdct_msa.h
+endif  # !CONFIG_VP9_HIGHBITDEPTH
+
+VP9_CX_SRCS-$(HAVE_VSX) += encoder/ppc/vp9_quantize_vsx.c
 
 # Strip unnecessary files with CONFIG_REALTIME_ONLY
 VP9_CX_SRCS_REMOVE-$(CONFIG_REALTIME_ONLY) += encoder/vp9_firstpass.c
 VP9_CX_SRCS_REMOVE-$(CONFIG_REALTIME_ONLY) += encoder/vp9_mbgraph.c
 VP9_CX_SRCS_REMOVE-$(CONFIG_REALTIME_ONLY) += encoder/vp9_temporal_filter.c
 VP9_CX_SRCS_REMOVE-$(CONFIG_REALTIME_ONLY) += encoder/x86/temporal_filter_sse4.c
+VP9_CX_SRCS_REMOVE-$(CONFIG_REALTIME_ONLY) += encoder/x86/temporal_filter_constants.h
+VP9_CX_SRCS_REMOVE-$(CONFIG_REALTIME_ONLY) += encoder/x86/highbd_temporal_filter_sse4.c
+VP9_CX_SRCS_REMOVE-$(CONFIG_REALTIME_ONLY) += encoder/vp9_alt_ref_aq.h
+VP9_CX_SRCS_REMOVE-$(CONFIG_REALTIME_ONLY) += encoder/vp9_alt_ref_aq.c
+VP9_CX_SRCS_REMOVE-$(CONFIG_REALTIME_ONLY) += encoder/vp9_aq_variance.c
+VP9_CX_SRCS_REMOVE-$(CONFIG_REALTIME_ONLY) += encoder/vp9_aq_variance.h
+VP9_CX_SRCS_REMOVE-$(CONFIG_REALTIME_ONLY) += encoder/vp9_aq_360.c
+VP9_CX_SRCS_REMOVE-$(CONFIG_REALTIME_ONLY) += encoder/vp9_aq_360.h
+VP9_CX_SRCS_REMOVE-$(CONFIG_REALTIME_ONLY) += encoder/vp9_aq_complexity.c
+VP9_CX_SRCS_REMOVE-$(CONFIG_REALTIME_ONLY) += encoder/vp9_aq_complexity.h
 
 VP9_CX_SRCS-yes := $(filter-out $(VP9_CX_SRCS_REMOVE-yes),$(VP9_CX_SRCS-yes))
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9dx.mk b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9dx.mk
index 59f612b..93a5f36 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9dx.mk
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vp9/vp9dx.mk
@@ -28,5 +28,7 @@
 VP9_DX_SRCS-yes += decoder/vp9_decoder.h
 VP9_DX_SRCS-yes += decoder/vp9_dsubexp.c
 VP9_DX_SRCS-yes += decoder/vp9_dsubexp.h
+VP9_DX_SRCS-yes += decoder/vp9_job_queue.c
+VP9_DX_SRCS-yes += decoder/vp9_job_queue.h
 
 VP9_DX_SRCS-yes := $(filter-out $(VP9_DX_SRCS_REMOVE-yes),$(VP9_DX_SRCS-yes))
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/exports_spatial_svc b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/exports_spatial_svc
deleted file mode 100644
index d258a1d..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/exports_spatial_svc
+++ /dev/null
@@ -1,6 +0,0 @@
-text vpx_svc_dump_statistics
-text vpx_svc_encode
-text vpx_svc_get_message
-text vpx_svc_init
-text vpx_svc_release
-text vpx_svc_set_options
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/internal/vpx_codec_internal.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/internal/vpx_codec_internal.h
index 522e5c1..9eed85e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/internal/vpx_codec_internal.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/internal/vpx_codec_internal.h
@@ -40,8 +40,8 @@
  * Once initialized, the instance is manged using other functions from
  * the vpx_codec_* family.
  */
-#ifndef VPX_INTERNAL_VPX_CODEC_INTERNAL_H_
-#define VPX_INTERNAL_VPX_CODEC_INTERNAL_H_
+#ifndef VPX_VPX_INTERNAL_VPX_CODEC_INTERNAL_H_
+#define VPX_VPX_INTERNAL_VPX_CODEC_INTERNAL_H_
 #include "../vpx_decoder.h"
 #include "../vpx_encoder.h"
 #include <stdarg.h>
@@ -442,4 +442,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_INTERNAL_VPX_CODEC_INTERNAL_H_
+#endif  // VPX_VPX_INTERNAL_VPX_CODEC_INTERNAL_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/src/vpx_encoder.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/src/vpx_encoder.c
index 1cf2dca..6272502 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/src/vpx_encoder.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/src/vpx_encoder.c
@@ -20,7 +20,7 @@
 #include "vpx_config.h"
 #include "vpx/internal/vpx_codec_internal.h"
 
-#define SAVE_STATUS(ctx, var) (ctx ? (ctx->err = var) : var)
+#define SAVE_STATUS(ctx, var) ((ctx) ? ((ctx)->err = (var)) : (var))
 
 static vpx_codec_alg_priv_t *get_alg_priv(vpx_codec_ctx_t *ctx) {
   return (vpx_codec_alg_priv_t *)ctx->priv;
@@ -82,6 +82,9 @@
     res = VPX_CODEC_INCAPABLE;
   else {
     int i;
+#if CONFIG_MULTI_RES_ENCODING
+    int mem_loc_owned = 0;
+#endif
     void *mem_loc = NULL;
 
     if (iface->enc.mr_get_mem_loc == NULL) return VPX_CODEC_INCAPABLE;
@@ -101,12 +104,6 @@
           mr_cfg.mr_down_sampling_factor.num = dsf->num;
           mr_cfg.mr_down_sampling_factor.den = dsf->den;
 
-          /* Force Key-frame synchronization. Namely, encoder at higher
-           * resolution always use the same frame_type chosen by the
-           * lowest-resolution encoder.
-           */
-          if (mr_cfg.mr_encoder_id) cfg->kf_mode = VPX_KF_DISABLED;
-
           ctx->iface = iface;
           ctx->name = iface->name;
           ctx->priv = NULL;
@@ -129,13 +126,17 @@
             i--;
           }
 #if CONFIG_MULTI_RES_ENCODING
-          assert(mem_loc);
-          free(((LOWER_RES_FRAME_INFO *)mem_loc)->mb_info);
-          free(mem_loc);
+          if (!mem_loc_owned) {
+            assert(mem_loc);
+            free(((LOWER_RES_FRAME_INFO *)mem_loc)->mb_info);
+            free(mem_loc);
+          }
 #endif
           return SAVE_STATUS(ctx, res);
         }
-
+#if CONFIG_MULTI_RES_ENCODING
+        mem_loc_owned = 1;
+#endif
         ctx++;
         cfg++;
         dsf++;
@@ -154,7 +155,7 @@
   vpx_codec_enc_cfg_map_t *map;
   int i;
 
-  if (!iface || !cfg || usage > INT_MAX)
+  if (!iface || !cfg || usage != 0)
     res = VPX_CODEC_INVALID_PARAM;
   else if (!(iface->caps & VPX_CODEC_CAP_ENCODER))
     res = VPX_CODEC_INCAPABLE;
@@ -163,19 +164,16 @@
 
     for (i = 0; i < iface->enc.cfg_map_count; ++i) {
       map = iface->enc.cfg_maps + i;
-      if (map->usage == (int)usage) {
-        *cfg = map->cfg;
-        cfg->g_usage = usage;
-        res = VPX_CODEC_OK;
-        break;
-      }
+      *cfg = map->cfg;
+      res = VPX_CODEC_OK;
+      break;
     }
   }
 
   return res;
 }
 
-#if ARCH_X86 || ARCH_X86_64
+#if VPX_ARCH_X86 || VPX_ARCH_X86_64
 /* On X86, disable the x87 unit's internal 80 bit precision for better
  * consistency with the SSE unit's 64 bit precision.
  */
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/src/vpx_image.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/src/vpx_image.c
index af7c529..a7c6ec0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/src/vpx_image.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/src/vpx_image.c
@@ -38,23 +38,8 @@
 
   /* Get sample size for this format */
   switch (fmt) {
-    case VPX_IMG_FMT_RGB32:
-    case VPX_IMG_FMT_RGB32_LE:
-    case VPX_IMG_FMT_ARGB:
-    case VPX_IMG_FMT_ARGB_LE: bps = 32; break;
-    case VPX_IMG_FMT_RGB24:
-    case VPX_IMG_FMT_BGR24: bps = 24; break;
-    case VPX_IMG_FMT_RGB565:
-    case VPX_IMG_FMT_RGB565_LE:
-    case VPX_IMG_FMT_RGB555:
-    case VPX_IMG_FMT_RGB555_LE:
-    case VPX_IMG_FMT_UYVY:
-    case VPX_IMG_FMT_YUY2:
-    case VPX_IMG_FMT_YVYU: bps = 16; break;
     case VPX_IMG_FMT_I420:
-    case VPX_IMG_FMT_YV12:
-    case VPX_IMG_FMT_VPXI420:
-    case VPX_IMG_FMT_VPXYV12: bps = 12; break;
+    case VPX_IMG_FMT_YV12: bps = 12; break;
     case VPX_IMG_FMT_I422:
     case VPX_IMG_FMT_I440: bps = 16; break;
     case VPX_IMG_FMT_I444: bps = 24; break;
@@ -69,8 +54,6 @@
   switch (fmt) {
     case VPX_IMG_FMT_I420:
     case VPX_IMG_FMT_YV12:
-    case VPX_IMG_FMT_VPXI420:
-    case VPX_IMG_FMT_VPXYV12:
     case VPX_IMG_FMT_I422:
     case VPX_IMG_FMT_I42016:
     case VPX_IMG_FMT_I42216: xcs = 1; break;
@@ -81,8 +64,6 @@
     case VPX_IMG_FMT_I420:
     case VPX_IMG_FMT_I440:
     case VPX_IMG_FMT_YV12:
-    case VPX_IMG_FMT_VPXI420:
-    case VPX_IMG_FMT_VPXYV12:
     case VPX_IMG_FMT_I42016:
     case VPX_IMG_FMT_I44016: ycs = 1; break;
     default: ycs = 0; break;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vp8.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vp8.h
index 059c9d0..f30dafe 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vp8.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vp8.h
@@ -10,7 +10,7 @@
 
 /*!\defgroup vp8 VP8
  * \ingroup codecs
- * VP8 is vpx's newest video compression algorithm that uses motion
+ * VP8 is a video compression algorithm that uses motion
  * compensated prediction, Discrete Cosine Transform (DCT) coding of the
  * prediction error signal and context dependent entropy coding techniques
  * based on arithmetic principles. It features:
@@ -27,8 +27,8 @@
 /*!\file
  * \brief Provides controls common to both the VP8 encoder and decoder.
  */
-#ifndef VPX_VP8_H_
-#define VPX_VP8_H_
+#ifndef VPX_VPX_VP8_H_
+#define VPX_VPX_VP8_H_
 
 #include "./vpx_codec.h"
 #include "./vpx_image.h"
@@ -47,10 +47,6 @@
   VP8_SET_REFERENCE = 1,
   VP8_COPY_REFERENCE = 2, /**< get a copy of reference frame from the decoder */
   VP8_SET_POSTPROC = 3,   /**< set the decoder's post processing settings  */
-  VP8_SET_DBG_COLOR_REF_FRAME = 4, /**< \deprecated */
-  VP8_SET_DBG_COLOR_MB_MODES = 5,  /**< \deprecated */
-  VP8_SET_DBG_COLOR_B_MODES = 6,   /**< \deprecated */
-  VP8_SET_DBG_DISPLAY_MV = 7,      /**< \deprecated */
 
   /* TODO(jkoleszar): The encoder incorrectly reuses some of these values (5+)
    * for its control ids. These should be migrated to something like the
@@ -70,12 +66,7 @@
   VP8_DEBLOCK = 1 << 0,
   VP8_DEMACROBLOCK = 1 << 1,
   VP8_ADDNOISE = 1 << 2,
-  VP8_DEBUG_TXT_FRAME_INFO = 1 << 3, /**< print frame information */
-  VP8_DEBUG_TXT_MBLK_MODES =
-      1 << 4, /**< print macro block modes over each macro block */
-  VP8_DEBUG_TXT_DC_DIFF = 1 << 5,   /**< print dc diff for each macro block */
-  VP8_DEBUG_TXT_RATE_INFO = 1 << 6, /**< print video rate info (encoder only) */
-  VP8_MFQE = 1 << 10
+  VP8_MFQE = 1 << 3
 };
 
 /*!\brief post process flags
@@ -132,14 +123,6 @@
 #define VPX_CTRL_VP8_COPY_REFERENCE
 VPX_CTRL_USE_TYPE(VP8_SET_POSTPROC, vp8_postproc_cfg_t *)
 #define VPX_CTRL_VP8_SET_POSTPROC
-VPX_CTRL_USE_TYPE_DEPRECATED(VP8_SET_DBG_COLOR_REF_FRAME, int)
-#define VPX_CTRL_VP8_SET_DBG_COLOR_REF_FRAME
-VPX_CTRL_USE_TYPE_DEPRECATED(VP8_SET_DBG_COLOR_MB_MODES, int)
-#define VPX_CTRL_VP8_SET_DBG_COLOR_MB_MODES
-VPX_CTRL_USE_TYPE_DEPRECATED(VP8_SET_DBG_COLOR_B_MODES, int)
-#define VPX_CTRL_VP8_SET_DBG_COLOR_B_MODES
-VPX_CTRL_USE_TYPE_DEPRECATED(VP8_SET_DBG_DISPLAY_MV, int)
-#define VPX_CTRL_VP8_SET_DBG_DISPLAY_MV
 VPX_CTRL_USE_TYPE(VP9_GET_REFERENCE, vp9_ref_frame_t *)
 #define VPX_CTRL_VP9_GET_REFERENCE
 
@@ -150,4 +133,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_VP8_H_
+#endif  // VPX_VPX_VP8_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vp8cx.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vp8cx.h
index 261ed86..dcdd710 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vp8cx.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vp8cx.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef VPX_VP8CX_H_
-#define VPX_VP8CX_H_
+#ifndef VPX_VPX_VP8CX_H_
+#define VPX_VPX_VP8CX_H_
 
 /*!\defgroup vp8_encoder WebM VP8/VP9 Encoder
  * \ingroup vp8
@@ -148,13 +148,16 @@
    * speed at the expense of quality.
    *
    * \note Valid range for VP8: -16..16
-   * \note Valid range for VP9: -8..8
+   * \note Valid range for VP9: -9..9
    *
    * Supported in codecs: VP8, VP9
    */
   VP8E_SET_CPUUSED = 13,
 
-  /*!\brief Codec control function to enable automatic set and use alf frames.
+  /*!\brief Codec control function to enable automatic use of arf frames.
+   *
+   * \note Valid range for VP8: 0..1
+   * \note Valid range for VP9: 0..6
    *
    * Supported in codecs: VP8, VP9
    */
@@ -169,7 +172,10 @@
    */
   VP8E_SET_NOISE_SENSITIVITY,
 
-  /*!\brief Codec control function to set sharpness.
+  /*!\brief Codec control function to set higher sharpness at the expense
+   * of a lower PSNR.
+   *
+   * \note Valid range: 0..7
    *
    * Supported in codecs: VP8, VP9
    */
@@ -225,10 +231,10 @@
    */
   VP8E_SET_TUNING,
 
-  /*!\brief Codec control function to set constrained quality level.
+  /*!\brief Codec control function to set constrained / constant quality level.
    *
-   * \attention For this value to be used vpx_codec_enc_cfg_t::g_usage must be
-   *            set to #VPX_CQ.
+   * \attention For this value to be used vpx_codec_enc_cfg_t::rc_end_usage must
+   *            be set to #VPX_CQ or #VPX_Q
    * \note Valid range: 0..63
    *
    * Supported in codecs: VP8, VP9
@@ -602,6 +608,82 @@
    * Supported in codecs: VP9
    */
   VP9E_ENABLE_MOTION_VECTOR_UNIT_TEST,
+
+  /*!\brief Codec control function to constrain the inter-layer prediction
+   * (prediction of lower spatial resolution) in VP9 SVC.
+   *
+   * 0 : inter-layer prediction on, 1 : off, 2 : off only on non-key frames
+   *
+   * Supported in codecs: VP9
+   */
+  VP9E_SET_SVC_INTER_LAYER_PRED,
+
+  /*!\brief Codec control function to set mode and thresholds for frame
+   *  dropping in SVC. Drop frame thresholds are set per-layer. Mode is set as:
+   * 0 : layer-dependent dropping, 1 : constrained dropping, current layer drop
+   * forces drop on all upper layers. Default mode is 0.
+   *
+   * Supported in codecs: VP9
+   */
+  VP9E_SET_SVC_FRAME_DROP_LAYER,
+
+  /*!\brief Codec control function to get the refresh and reference flags and
+   * the buffer indices, up to the last encoded spatial layer.
+   *
+   * Supported in codecs: VP9
+   */
+  VP9E_GET_SVC_REF_FRAME_CONFIG,
+
+  /*!\brief Codec control function to enable/disable use of golden reference as
+   * a second temporal reference for SVC. Only used when inter-layer prediction
+   * is disabled on INTER frames.
+   *
+   * 0: Off, 1: Enabled (default)
+   *
+   * Supported in codecs: VP9
+   */
+  VP9E_SET_SVC_GF_TEMPORAL_REF,
+
+  /*!\brief Codec control function to enable spatial layer sync frame, for any
+   * spatial layer. Enabling it for layer k means spatial layer k will disable
+   * all temporal prediction, but keep the inter-layer prediction. It will
+   * refresh any temporal reference buffer for that layer, and reset the
+   * temporal layer for the superframe to 0. Setting the layer sync for base
+   * spatial layer forces a key frame. Default is off (0) for all spatial
+   * layers. Spatial layer sync flag is reset to 0 after each encoded layer,
+   * so when control is invoked it is only used for the current superframe.
+   *
+   * 0: Off (default), 1: Enabled
+   *
+   * Supported in codecs: VP9
+   */
+  VP9E_SET_SVC_SPATIAL_LAYER_SYNC,
+
+  /*!\brief Codec control function to enable temporal dependency model.
+   *
+   * Vp9 allows the encoder to run temporal dependency model and use it to
+   * improve the compression performance. To enable, set this parameter to be
+   * 1. The default value is set to be 1.
+   */
+  VP9E_SET_TPL,
+
+  /*!\brief Codec control function to enable postencode frame drop.
+   *
+   * This will allow encoder to drop frame after it's encoded.
+   *
+   * 0: Off (default), 1: Enabled
+   *
+   * Supported in codecs: VP9
+   */
+  VP9E_SET_POSTENCODE_DROP,
+
+  /*!\brief Codec control function to set delta q for uv.
+   *
+   * Cap it at +/-15.
+   *
+   * Supported in codecs: VP9
+   */
+  VP9E_SET_DELTA_Q_UV,
 };
 
 /*!\brief vpx 1-D scaling mode
@@ -726,11 +808,13 @@
  *
  */
 typedef struct vpx_svc_layer_id {
-  int spatial_layer_id;  /**< Spatial layer id number. */
+  int spatial_layer_id; /**< First spatial layer to start encoding. */
+  // TODO(jianj): Deprecated, to be removed.
   int temporal_layer_id; /**< Temporal layer id number. */
+  int temporal_layer_id_per_spatial[VPX_SS_MAX_LAYERS]; /**< Temp layer id. */
 } vpx_svc_layer_id_t;
 
-/*!\brief  vp9 svc frame flag parameters.
+/*!\brief vp9 svc frame flag parameters.
  *
  * This defines the frame flags and buffer indices for each spatial layer for
  * svc encoding.
@@ -739,12 +823,58 @@
  *
  */
 typedef struct vpx_svc_ref_frame_config {
-  int frame_flags[VPX_TS_MAX_LAYERS]; /**< Frame flags. */
-  int lst_fb_idx[VPX_TS_MAX_LAYERS];  /**< Last buffer index. */
-  int gld_fb_idx[VPX_TS_MAX_LAYERS];  /**< Golden buffer index. */
-  int alt_fb_idx[VPX_TS_MAX_LAYERS];  /**< Altref buffer index. */
+  int lst_fb_idx[VPX_SS_MAX_LAYERS];         /**< Last buffer index. */
+  int gld_fb_idx[VPX_SS_MAX_LAYERS];         /**< Golden buffer index. */
+  int alt_fb_idx[VPX_SS_MAX_LAYERS];         /**< Altref buffer index. */
+  int update_buffer_slot[VPX_SS_MAX_LAYERS]; /**< Update reference frames. */
+  // TODO(jianj): Remove update_last/golden/alt_ref, these are deprecated.
+  int update_last[VPX_SS_MAX_LAYERS];       /**< Update last. */
+  int update_golden[VPX_SS_MAX_LAYERS];     /**< Update golden. */
+  int update_alt_ref[VPX_SS_MAX_LAYERS];    /**< Update altref. */
+  int reference_last[VPX_SS_MAX_LAYERS];    /**< Last as reference. */
+  int reference_golden[VPX_SS_MAX_LAYERS];  /**< Golden as reference. */
+  int reference_alt_ref[VPX_SS_MAX_LAYERS]; /**< Altref as reference. */
+  int64_t duration[VPX_SS_MAX_LAYERS];      /**< Duration per spatial layer. */
 } vpx_svc_ref_frame_config_t;
 
+/*!\brief VP9 svc frame dropping mode.
+ *
+ * This defines the frame drop mode for SVC.
+ *
+ */
+typedef enum {
+  CONSTRAINED_LAYER_DROP,
+  /**< Upper layers are constrained to drop if current layer drops. */
+  LAYER_DROP,           /**< Any spatial layer can drop. */
+  FULL_SUPERFRAME_DROP, /**< Only full superframe can drop. */
+  CONSTRAINED_FROM_ABOVE_DROP,
+  /**< Lower layers are constrained to drop if current layer drops. */
+} SVC_LAYER_DROP_MODE;
+
+/*!\brief vp9 svc frame dropping parameters.
+ *
+ * This defines the frame drop thresholds for each spatial layer, and
+ * the frame dropping mode: 0 = layer based frame dropping (default),
+ * 1 = constrained dropping where current layer drop forces all upper
+ * spatial layers to drop.
+ */
+typedef struct vpx_svc_frame_drop {
+  int framedrop_thresh[VPX_SS_MAX_LAYERS]; /**< Frame drop thresholds */
+  SVC_LAYER_DROP_MODE
+  framedrop_mode;      /**< Layer-based or constrained dropping. */
+  int max_consec_drop; /**< Maximum consecutive drops, for any layer. */
+} vpx_svc_frame_drop_t;
+
+/*!\brief vp9 svc spatial layer sync parameters.
+ *
+ * This defines the spatial layer sync flag, defined per spatial layer.
+ *
+ */
+typedef struct vpx_svc_spatial_layer_sync {
+  int spatial_layer_sync[VPX_SS_MAX_LAYERS]; /**< Sync layer flags */
+  int base_layer_intra_only; /**< Flag for setting Intra-only frame on base */
+} vpx_svc_spatial_layer_sync_t;
+
 /*!\cond */
 /*!\brief VP8 encoder control function parameter type
  *
@@ -804,6 +934,9 @@
 VPX_CTRL_USE_TYPE(VP9E_SET_TILE_ROWS, int)
 #define VPX_CTRL_VP9E_SET_TILE_ROWS
 
+VPX_CTRL_USE_TYPE(VP9E_SET_TPL, int)
+#define VPX_CTRL_VP9E_SET_TPL
+
 VPX_CTRL_USE_TYPE(VP8E_GET_LAST_QUANTIZER, int *)
 #define VPX_CTRL_VP8E_GET_LAST_QUANTIZER
 VPX_CTRL_USE_TYPE(VP8E_GET_LAST_QUANTIZER_64, int *)
@@ -813,8 +946,8 @@
 
 VPX_CTRL_USE_TYPE(VP8E_SET_MAX_INTRA_BITRATE_PCT, unsigned int)
 #define VPX_CTRL_VP8E_SET_MAX_INTRA_BITRATE_PCT
-VPX_CTRL_USE_TYPE(VP8E_SET_MAX_INTER_BITRATE_PCT, unsigned int)
-#define VPX_CTRL_VP8E_SET_MAX_INTER_BITRATE_PCT
+VPX_CTRL_USE_TYPE(VP9E_SET_MAX_INTER_BITRATE_PCT, unsigned int)
+#define VPX_CTRL_VP9E_SET_MAX_INTER_BITRATE_PCT
 
 VPX_CTRL_USE_TYPE(VP8E_SET_GF_CBR_BOOST_PCT, unsigned int)
 #define VPX_CTRL_VP8E_SET_GF_CBR_BOOST_PCT
@@ -879,10 +1012,32 @@
 VPX_CTRL_USE_TYPE(VP9E_ENABLE_MOTION_VECTOR_UNIT_TEST, unsigned int)
 #define VPX_CTRL_VP9E_ENABLE_MOTION_VECTOR_UNIT_TEST
 
+VPX_CTRL_USE_TYPE(VP9E_SET_SVC_INTER_LAYER_PRED, unsigned int)
+#define VPX_CTRL_VP9E_SET_SVC_INTER_LAYER_PRED
+
+VPX_CTRL_USE_TYPE(VP9E_SET_SVC_FRAME_DROP_LAYER, vpx_svc_frame_drop_t *)
+#define VPX_CTRL_VP9E_SET_SVC_FRAME_DROP_LAYER
+
+VPX_CTRL_USE_TYPE(VP9E_GET_SVC_REF_FRAME_CONFIG, vpx_svc_ref_frame_config_t *)
+#define VPX_CTRL_VP9E_GET_SVC_REF_FRAME_CONFIG
+
+VPX_CTRL_USE_TYPE(VP9E_SET_SVC_GF_TEMPORAL_REF, unsigned int)
+#define VPX_CTRL_VP9E_SET_SVC_GF_TEMPORAL_REF
+
+VPX_CTRL_USE_TYPE(VP9E_SET_SVC_SPATIAL_LAYER_SYNC,
+                  vpx_svc_spatial_layer_sync_t *)
+#define VPX_CTRL_VP9E_SET_SVC_SPATIAL_LAYER_SYNC
+
+VPX_CTRL_USE_TYPE(VP9E_SET_POSTENCODE_DROP, unsigned int)
+#define VPX_CTRL_VP9E_SET_POSTENCODE_DROP
+
+VPX_CTRL_USE_TYPE(VP9E_SET_DELTA_Q_UV, int)
+#define VPX_CTRL_VP9E_SET_DELTA_Q_UV
+
 /*!\endcond */
 /*! @} - end defgroup vp8_encoder */
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VPX_VP8CX_H_
+#endif  // VPX_VPX_VP8CX_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vp8dx.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vp8dx.h
index 398c670..af92f21 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vp8dx.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vp8dx.h
@@ -17,8 +17,8 @@
  * \brief Provides definitions for using VP8 or VP9 within the vpx Decoder
  *        interface.
  */
-#ifndef VPX_VP8DX_H_
-#define VPX_VP8DX_H_
+#ifndef VPX_VPX_VP8DX_H_
+#define VPX_VPX_VP8DX_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -124,6 +124,24 @@
    */
   VPXD_GET_LAST_QUANTIZER,
 
+  /*!\brief Codec control function to set row level multi-threading.
+   *
+   * 0 : off, 1 : on
+   *
+   * Supported in codecs: VP9
+   */
+  VP9D_SET_ROW_MT,
+
+  /*!\brief Codec control function to set loopfilter optimization.
+   *
+   * 0 : off, Loop filter is done after all tiles have been decoded
+   * 1 : on, Loop filter is done immediately after decode without
+   *     waiting for all threads to sync.
+   *
+   * Supported in codecs: VP9
+   */
+  VP9D_SET_LOOP_FILTER_OPT,
+
   VP8_DECODER_CTRL_ID_MAX
 };
 
@@ -145,10 +163,6 @@
   void *decrypt_state;
 } vpx_decrypt_init;
 
-/*!\brief A deprecated alias for vpx_decrypt_init.
- */
-typedef vpx_decrypt_init vp8_decrypt_init;
-
 /*!\cond */
 /*!\brief VP8 decoder control function parameter type
  *
@@ -181,6 +195,10 @@
 VPX_CTRL_USE_TYPE(VP9_DECODE_SVC_SPATIAL_LAYER, int)
 #define VPX_CTRL_VP9_SET_SKIP_LOOP_FILTER
 VPX_CTRL_USE_TYPE(VP9_SET_SKIP_LOOP_FILTER, int)
+#define VPX_CTRL_VP9_DECODE_SET_ROW_MT
+VPX_CTRL_USE_TYPE(VP9D_SET_ROW_MT, int)
+#define VPX_CTRL_VP9_SET_LOOP_FILTER_OPT
+VPX_CTRL_USE_TYPE(VP9D_SET_LOOP_FILTER_OPT, int)
 
 /*!\endcond */
 /*! @} - end defgroup vp8_decoder */
@@ -189,4 +207,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_VP8DX_H_
+#endif  // VPX_VPX_VP8DX_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_codec.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_codec.h
index ad05f4c..6371a6c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_codec.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_codec.h
@@ -35,8 +35,8 @@
  * Once initialized, the instance is manged using other functions from
  * the vpx_codec_* family.
  */
-#ifndef VPX_VPX_CODEC_H_
-#define VPX_VPX_CODEC_H_
+#ifndef VPX_VPX_VPX_CODEC_H_
+#define VPX_VPX_VPX_CODEC_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -241,11 +241,11 @@
  */
 int vpx_codec_version(void);
 #define VPX_VERSION_MAJOR(v) \
-  ((v >> 16) & 0xff) /**< extract major from packed version */
+  (((v) >> 16) & 0xff) /**< extract major from packed version */
 #define VPX_VERSION_MINOR(v) \
-  ((v >> 8) & 0xff) /**< extract minor from packed version */
+  (((v) >> 8) & 0xff) /**< extract minor from packed version */
 #define VPX_VERSION_PATCH(v) \
-  ((v >> 0) & 0xff) /**< extract patch from packed version */
+  (((v) >> 0) & 0xff) /**< extract patch from packed version */
 
 /*!\brief Return the version major number */
 #define vpx_codec_version_major() ((vpx_codec_version() >> 16) & 0xff)
@@ -465,4 +465,4 @@
 #ifdef __cplusplus
 }
 #endif
-#endif  // VPX_VPX_CODEC_H_
+#endif  // VPX_VPX_VPX_CODEC_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_codec.mk b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_codec.mk
index b77f458..4ed77ad 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_codec.mk
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_codec.mk
@@ -15,10 +15,6 @@
 API_SRCS-$(CONFIG_VP8_ENCODER) += vp8cx.h
 API_DOC_SRCS-$(CONFIG_VP8_ENCODER) += vp8.h
 API_DOC_SRCS-$(CONFIG_VP8_ENCODER) += vp8cx.h
-ifeq ($(CONFIG_VP9_ENCODER),yes)
-  API_SRCS-$(CONFIG_SPATIAL_SVC) += src/svc_encodeframe.c
-  API_SRCS-$(CONFIG_SPATIAL_SVC) += svc_context.h
-endif
 
 API_SRCS-$(CONFIG_VP8_DECODER) += vp8.h
 API_SRCS-$(CONFIG_VP8_DECODER) += vp8dx.h
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_decoder.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_decoder.h
index 2ff1211..f113f71 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_decoder.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_decoder.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef VPX_VPX_DECODER_H_
-#define VPX_VPX_DECODER_H_
+#ifndef VPX_VPX_VPX_DECODER_H_
+#define VPX_VPX_VPX_DECODER_H_
 
 /*!\defgroup decoder Decoder Algorithm Interface
  * \ingroup codec
@@ -362,4 +362,4 @@
 #ifdef __cplusplus
 }
 #endif
-#endif  // VPX_VPX_DECODER_H_
+#endif  // VPX_VPX_VPX_DECODER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_encoder.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_encoder.h
index f9a2ed3..c84d40f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_encoder.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_encoder.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef VPX_VPX_ENCODER_H_
-#define VPX_VPX_ENCODER_H_
+#ifndef VPX_VPX_VPX_ENCODER_H_
+#define VPX_VPX_VPX_ENCODER_H_
 
 /*!\defgroup encoder Encoder Algorithm Interface
  * \ingroup codec
@@ -39,15 +39,9 @@
 /*! Temporal Scalability: Maximum number of coding layers */
 #define VPX_TS_MAX_LAYERS 5
 
-/*!\deprecated Use #VPX_TS_MAX_PERIODICITY instead. */
-#define MAX_PERIODICITY VPX_TS_MAX_PERIODICITY
-
 /*! Temporal+Spatial Scalability: Maximum number of coding layers */
 #define VPX_MAX_LAYERS 12  // 3 temporal + 4 spatial layers are allowed.
 
-/*!\deprecated Use #VPX_MAX_LAYERS instead. */
-#define MAX_LAYERS VPX_MAX_LAYERS  // 3 temporal + 4 spatial layers allowed.
-
 /*! Spatial Scalability: Maximum number of coding layers */
 #define VPX_SS_MAX_LAYERS 5
 
@@ -63,7 +57,7 @@
  * fields to structures
  */
 #define VPX_ENCODER_ABI_VERSION \
-  (8 + VPX_CODEC_ABI_VERSION) /**<\hideinitializer*/
+  (14 + VPX_CODEC_ABI_VERSION) /**<\hideinitializer*/
 
 /*! \brief Encoder capabilities bitfield
  *
@@ -150,15 +144,10 @@
  * extend this list to provide additional functionality.
  */
 enum vpx_codec_cx_pkt_kind {
-  VPX_CODEC_CX_FRAME_PKT,   /**< Compressed video frame */
-  VPX_CODEC_STATS_PKT,      /**< Two-pass statistics for this frame */
-  VPX_CODEC_FPMB_STATS_PKT, /**< first pass mb statistics for this frame */
-  VPX_CODEC_PSNR_PKT,       /**< PSNR statistics for this frame */
-// Spatial SVC is still experimental and may be removed.
-#if defined(VPX_TEST_SPATIAL_SVC)
-  VPX_CODEC_SPATIAL_SVC_LAYER_SIZES, /**< Sizes for each layer in this frame*/
-  VPX_CODEC_SPATIAL_SVC_LAYER_PSNR,  /**< PSNR for each layer in this frame*/
-#endif
+  VPX_CODEC_CX_FRAME_PKT,    /**< Compressed video frame */
+  VPX_CODEC_STATS_PKT,       /**< Two-pass statistics for this frame */
+  VPX_CODEC_FPMB_STATS_PKT,  /**< first pass mb statistics for this frame */
+  VPX_CODEC_PSNR_PKT,        /**< PSNR statistics for this frame */
   VPX_CODEC_CUSTOM_PKT = 256 /**< Algorithm extensions  */
 };
 
@@ -186,6 +175,9 @@
        * first one.*/
       unsigned int width[VPX_SS_MAX_LAYERS];  /**< frame width */
       unsigned int height[VPX_SS_MAX_LAYERS]; /**< frame height */
+      /*!\brief Flag to indicate if spatial layer frame in this packet is
+       * encoded or dropped. VP8 will always be set to 1.*/
+      uint8_t spatial_layer_encoded[VPX_SS_MAX_LAYERS];
     } frame;                            /**< data for compressed frame packet */
     vpx_fixed_buf_t twopass_stats;      /**< data for two-pass packet */
     vpx_fixed_buf_t firstpass_mb_stats; /**< first pass mb packet */
@@ -195,11 +187,6 @@
       double psnr[4];          /**< PSNR, total/y/u/v */
     } psnr;                    /**< data for PSNR packet */
     vpx_fixed_buf_t raw;       /**< data for arbitrary packets */
-// Spatial SVC is still experimental and may be removed.
-#if defined(VPX_TEST_SPATIAL_SVC)
-    size_t layer_sizes[VPX_SS_MAX_LAYERS];
-    struct vpx_psnr_pkt layer_psnr[VPX_SS_MAX_LAYERS];
-#endif
 
     /* This packet size is fixed to allow codecs to extend this
      * interface without having to manage storage for raw packets,
@@ -215,8 +202,6 @@
  * This callback function, when registered, returns with packets when each
  * spatial layer is encoded.
  */
-// putting the definitions here for now. (agrange: find if there
-// is a better place for this)
 typedef void (*vpx_codec_enc_output_cx_pkt_cb_fn_t)(vpx_codec_cx_pkt_t *pkt,
                                                     void *user_data);
 
@@ -236,11 +221,11 @@
 } vpx_rational_t; /**< alias for struct vpx_rational */
 
 /*!\brief Multi-pass Encoding Pass */
-enum vpx_enc_pass {
+typedef enum vpx_enc_pass {
   VPX_RC_ONE_PASS,   /**< Single pass mode */
   VPX_RC_FIRST_PASS, /**< First pass of multi-pass mode */
   VPX_RC_LAST_PASS   /**< Final pass of multi-pass mode */
-};
+} vpx_enc_pass;
 
 /*!\brief Rate control mode */
 enum vpx_rc_mode {
@@ -285,12 +270,9 @@
    * generic settings (g)
    */
 
-  /*!\brief Algorithm specific "usage" value
+  /*!\brief Deprecated: Algorithm specific "usage" value
    *
-   * Algorithms may define multiple values for usage, which may convey the
-   * intent of how the application intends to use the stream. If this value
-   * is non-zero, consult the documentation for the codec to determine its
-   * meaning.
+   * This value must be zero.
    */
   unsigned int g_usage;
 
@@ -401,9 +383,6 @@
    * trade-off is often acceptable, but for many applications is not. It can
    * be disabled in these cases.
    *
-   * Note that not all codecs support this feature. All vpx VPx codecs do.
-   * For other codecs, consult the documentation for that algorithm.
-   *
    * This threshold is described as a percentage of the target data buffer.
    * When the data buffer falls below this percentage of fullness, a
    * dropped frame is indicated. Set the threshold to zero (0) to disable
@@ -489,8 +468,7 @@
    * The quantizer is the most direct control over the quality of the
    * encoded image. The range of valid values for the quantizer is codec
    * specific. Consult the documentation for the codec to determine the
-   * values to use. To determine the range programmatically, call
-   * vpx_codec_enc_config_default() with a usage value of 0.
+   * values to use.
    */
   unsigned int rc_min_quantizer;
 
@@ -499,8 +477,7 @@
    * The quantizer is the most direct control over the quality of the
    * encoded image. The range of valid values for the quantizer is codec
    * specific. Consult the documentation for the codec to determine the
-   * values to use. To determine the range programmatically, call
-   * vpx_codec_enc_config_default() with a usage value of 0.
+   * values to use.
    */
   unsigned int rc_max_quantizer;
 
@@ -516,7 +493,7 @@
    * be subtracted from the target bitrate in order to compensate
    * for prior overshoot.
    * VP9: Expressed as a percentage of the target bitrate, a threshold
-   * undershoot level (current rate vs target) beyond which more agressive
+   * undershoot level (current rate vs target) beyond which more aggressive
    * corrective measures are taken.
    *   *
    * Valid values in the range VP8:0-1000 VP9: 0-100.
@@ -531,7 +508,7 @@
    * be added to the target bitrate in order to compensate for
    * prior undershoot.
    * VP9: Expressed as a percentage of the target bitrate, a threshold
-   * overshoot level (current rate vs target) beyond which more agressive
+   * overshoot level (current rate vs target) beyond which more aggressive
    * corrective measures are taken.
    *
    * Valid values in the range VP8:0-1000 VP9: 0-100.
@@ -656,7 +633,7 @@
   /*!\brief Target bitrate for each spatial layer.
    *
    * These values specify the target coding bitrate to be used for each
-   * spatial layer.
+   * spatial layer. (in kbps)
    */
   unsigned int ss_target_bitrate[VPX_SS_MAX_LAYERS];
 
@@ -669,7 +646,7 @@
   /*!\brief Target bitrate for each temporal layer.
    *
    * These values specify the target coding bitrate to be used for each
-   * temporal layer.
+   * temporal layer. (in kbps)
    */
   unsigned int ts_target_bitrate[VPX_TS_MAX_LAYERS];
 
@@ -701,7 +678,7 @@
   /*!\brief Target bitrate for each spatial/temporal layer.
    *
    * These values specify the target coding bitrate to be used for each
-   * spatial/temporal layer.
+   * spatial/temporal layer. (in kbps)
    *
    */
   unsigned int layer_target_bitrate[VPX_MAX_LAYERS];
@@ -806,7 +783,7 @@
  *
  * \param[in]    iface     Pointer to the algorithm interface to use.
  * \param[out]   cfg       Configuration buffer to populate.
- * \param[in]    reserved  Must set to 0 for VP8 and VP9.
+ * \param[in]    usage     Must be set to 0.
  *
  * \retval #VPX_CODEC_OK
  *     The configuration was populated.
@@ -817,7 +794,7 @@
  */
 vpx_codec_err_t vpx_codec_enc_config_default(vpx_codec_iface_t *iface,
                                              vpx_codec_enc_cfg_t *cfg,
-                                             unsigned int reserved);
+                                             unsigned int usage);
 
 /*!\brief Set or change configuration
  *
@@ -866,7 +843,7 @@
  * implicit that limiting the available time to encode will degrade the
  * output quality. The encoder can be given an unlimited time to produce the
  * best possible frame by specifying a deadline of '0'. This deadline
- * supercedes the VPx notion of "best quality, good quality, realtime".
+ * supersedes the VPx notion of "best quality, good quality, realtime".
  * Applications that wish to map these former settings to the new deadline
  * based system can use the symbols #VPX_DL_REALTIME, #VPX_DL_GOOD_QUALITY,
  * and #VPX_DL_BEST_QUALITY.
@@ -988,4 +965,4 @@
 #ifdef __cplusplus
 }
 #endif
-#endif  // VPX_VPX_ENCODER_H_
+#endif  // VPX_VPX_VPX_ENCODER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_frame_buffer.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_frame_buffer.h
index ac17c52..fc83200 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_frame_buffer.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_frame_buffer.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_VPX_FRAME_BUFFER_H_
-#define VPX_VPX_FRAME_BUFFER_H_
+#ifndef VPX_VPX_VPX_FRAME_BUFFER_H_
+#define VPX_VPX_VPX_FRAME_BUFFER_H_
 
 /*!\file
  * \brief Describes the decoder external frame buffer interface.
@@ -52,9 +52,9 @@
  * data. The callback is triggered when the decoder needs a frame buffer to
  * decode a compressed image into. This function may be called more than once
  * for every call to vpx_codec_decode. The application may set fb->priv to
- * some data which will be passed back in the ximage and the release function
- * call. |fb| is guaranteed to not be NULL. On success the callback must
- * return 0. Any failure the callback must return a value less than 0.
+ * some data which will be passed back in the vpx_image_t and the release
+ * function call. |fb| is guaranteed to not be NULL. On success the callback
+ * must return 0. Any failure the callback must return a value less than 0.
  *
  * \param[in] priv         Callback's private data
  * \param[in] min_size     Size in bytes needed by the buffer
@@ -80,4 +80,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_VPX_FRAME_BUFFER_H_
+#endif  // VPX_VPX_VPX_FRAME_BUFFER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_image.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_image.h
index d6d3166..98be596 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_image.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_image.h
@@ -12,8 +12,8 @@
  * \brief Describes the vpx image descriptor and associated operations
  *
  */
-#ifndef VPX_VPX_IMAGE_H_
-#define VPX_VPX_IMAGE_H_
+#ifndef VPX_VPX_VPX_IMAGE_H_
+#define VPX_VPX_VPX_IMAGE_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -27,7 +27,7 @@
  * types, removing or reassigning enums, adding/removing/rearranging
  * fields to structures
  */
-#define VPX_IMAGE_ABI_VERSION (4) /**<\hideinitializer*/
+#define VPX_IMAGE_ABI_VERSION (5) /**<\hideinitializer*/
 
 #define VPX_IMG_FMT_PLANAR 0x100       /**< Image is a planar format. */
 #define VPX_IMG_FMT_UV_FLIP 0x200      /**< V plane precedes U in memory. */
@@ -37,29 +37,12 @@
 /*!\brief List of supported image formats */
 typedef enum vpx_img_fmt {
   VPX_IMG_FMT_NONE,
-  VPX_IMG_FMT_RGB24,     /**< 24 bit per pixel packed RGB */
-  VPX_IMG_FMT_RGB32,     /**< 32 bit per pixel packed 0RGB */
-  VPX_IMG_FMT_RGB565,    /**< 16 bit per pixel, 565 */
-  VPX_IMG_FMT_RGB555,    /**< 16 bit per pixel, 555 */
-  VPX_IMG_FMT_UYVY,      /**< UYVY packed YUV */
-  VPX_IMG_FMT_YUY2,      /**< YUYV packed YUV */
-  VPX_IMG_FMT_YVYU,      /**< YVYU packed YUV */
-  VPX_IMG_FMT_BGR24,     /**< 24 bit per pixel packed BGR */
-  VPX_IMG_FMT_RGB32_LE,  /**< 32 bit packed BGR0 */
-  VPX_IMG_FMT_ARGB,      /**< 32 bit packed ARGB, alpha=255 */
-  VPX_IMG_FMT_ARGB_LE,   /**< 32 bit packed BGRA, alpha=255 */
-  VPX_IMG_FMT_RGB565_LE, /**< 16 bit per pixel, gggbbbbb rrrrrggg */
-  VPX_IMG_FMT_RGB555_LE, /**< 16 bit per pixel, gggbbbbb 0rrrrrgg */
   VPX_IMG_FMT_YV12 =
       VPX_IMG_FMT_PLANAR | VPX_IMG_FMT_UV_FLIP | 1, /**< planar YVU */
   VPX_IMG_FMT_I420 = VPX_IMG_FMT_PLANAR | 2,
-  VPX_IMG_FMT_VPXYV12 = VPX_IMG_FMT_PLANAR | VPX_IMG_FMT_UV_FLIP |
-                        3, /** < planar 4:2:0 format with vpx color space */
-  VPX_IMG_FMT_VPXI420 = VPX_IMG_FMT_PLANAR | 4,
   VPX_IMG_FMT_I422 = VPX_IMG_FMT_PLANAR | 5,
   VPX_IMG_FMT_I444 = VPX_IMG_FMT_PLANAR | 6,
   VPX_IMG_FMT_I440 = VPX_IMG_FMT_PLANAR | 7,
-  VPX_IMG_FMT_444A = VPX_IMG_FMT_PLANAR | VPX_IMG_FMT_HAS_ALPHA | 6,
   VPX_IMG_FMT_I42016 = VPX_IMG_FMT_I420 | VPX_IMG_FMT_HIGHBITDEPTH,
   VPX_IMG_FMT_I42216 = VPX_IMG_FMT_I422 | VPX_IMG_FMT_HIGHBITDEPTH,
   VPX_IMG_FMT_I44416 = VPX_IMG_FMT_I444 | VPX_IMG_FMT_HIGHBITDEPTH,
@@ -167,21 +150,21 @@
  * storage for descriptor has been allocated elsewhere, and a descriptor is
  * desired to "wrap" that storage.
  *
- * \param[in]    img       Pointer to storage for descriptor. If this parameter
- *                         is NULL, the storage for the descriptor will be
- *                         allocated on the heap.
- * \param[in]    fmt       Format for the image
- * \param[in]    d_w       Width of the image
- * \param[in]    d_h       Height of the image
- * \param[in]    align     Alignment, in bytes, of each row in the image.
- * \param[in]    img_data  Storage to use for the image
+ * \param[in]    img           Pointer to storage for descriptor. If this
+ *                             parameter is NULL, the storage for the descriptor
+ *                             will be allocated on the heap.
+ * \param[in]    fmt           Format for the image
+ * \param[in]    d_w           Width of the image
+ * \param[in]    d_h           Height of the image
+ * \param[in]    stride_align  Alignment, in bytes, of each row in the image.
+ * \param[in]    img_data      Storage to use for the image
  *
  * \return Returns a pointer to the initialized image descriptor. If the img
  *         parameter is non-null, the value of the img parameter will be
  *         returned.
  */
 vpx_image_t *vpx_img_wrap(vpx_image_t *img, vpx_img_fmt_t fmt, unsigned int d_w,
-                          unsigned int d_h, unsigned int align,
+                          unsigned int d_h, unsigned int stride_align,
                           unsigned char *img_data);
 
 /*!\brief Set the rectangle identifying the displayed portion of the image
@@ -221,4 +204,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_VPX_IMAGE_H_
+#endif  // VPX_VPX_VPX_IMAGE_H_
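The rename above (`align` to `stride_align`) clarifies that the parameter governs row alignment, not buffer alignment. A sketch of the kind of stride computation a power-of-two `stride_align` implies — this helper is illustrative, not libvpx's internal code:

```c
/* Round |width| up to the next multiple of the power-of-two |stride_align|,
 * yielding a row stride of the kind vpx_img_wrap's stride_align parameter
 * describes. Illustrative only; assumes stride_align is a power of two. */
static unsigned int align_stride(unsigned int width, unsigned int stride_align) {
  return (width + stride_align - 1) & ~(stride_align - 1);
}
```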
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_integer.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_integer.h
index 09bad92..4129d15 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_integer.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx/vpx_integer.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_VPX_INTEGER_H_
-#define VPX_VPX_INTEGER_H_
+#ifndef VPX_VPX_VPX_INTEGER_H_
+#define VPX_VPX_VPX_INTEGER_H_
 
 /* get ptrdiff_t, size_t, wchar_t, NULL */
 #include <stddef.h>
@@ -18,27 +18,12 @@
 #define VPX_FORCE_INLINE __forceinline
 #define VPX_INLINE __inline
 #else
-#define VPX_FORCE_INLINE __inline__ __attribute__(always_inline)
+#define VPX_FORCE_INLINE __inline__ __attribute__((always_inline))
 // TODO(jbb): Allow a way to force inline off for older compilers.
 #define VPX_INLINE inline
 #endif
 
-#if defined(VPX_EMULATE_INTTYPES)
-typedef signed char int8_t;
-typedef signed short int16_t;
-typedef signed int int32_t;
-
-typedef unsigned char uint8_t;
-typedef unsigned short uint16_t;
-typedef unsigned int uint32_t;
-
-#ifndef _UINTPTR_T_DEFINED
-typedef size_t uintptr_t;
-#endif
-
-#else
-
-/* Most platforms have the C99 standard integer types. */
+/* Assume platforms have the C99 standard integer types. */
 
 #if defined(__cplusplus)
 #if !defined(__STDC_FORMAT_MACROS)
@@ -49,15 +34,7 @@
 #endif
 #endif  // __cplusplus
 
+#include <inttypes.h>
 #include <stdint.h>
 
-#endif
-
-/* VS2010 defines stdint.h, but not inttypes.h */
-#if defined(_MSC_VER) && _MSC_VER < 1800
-#define PRId64 "I64d"
-#else
-#include <inttypes.h>
-#endif
-
-#endif  // VPX_VPX_INTEGER_H_
+#endif  // VPX_VPX_VPX_INTEGER_H_
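The hunk above drops the pre-C99 fallbacks (the `VPX_EMULATE_INTTYPES` typedefs and the old MSVC `PRId64` shim) and unconditionally includes `<inttypes.h>`. Under that C99 baseline, 64-bit values print portably via the format macros, as this small sketch assumes:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Format a 64-bit value using the C99 PRId64 macro, which the diff above now
 * assumes is available on every supported platform. */
static int format_i64(char *buf, size_t size, int64_t v) {
  return snprintf(buf, size, "%" PRId64, v);
}
```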
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/add_noise.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/add_noise.c
index cda6ae8..6839e97 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/add_noise.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/add_noise.c
@@ -52,6 +52,7 @@
     const int a_i = (int)(0.5 + 256 * gaussian(sigma, 0, i));
     if (a_i) {
       for (j = 0; j < a_i; ++j) {
+        if (next + j >= 256) goto set_noise;
         char_dist[next + j] = (int8_t)i;
       }
       next = next + j;
@@ -63,6 +64,7 @@
     char_dist[next] = 0;
   }
 
+set_noise:
   for (i = 0; i < size; ++i) {
     noise[i] = char_dist[rand() & 0xff];  // NOLINT
   }
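The guard added above stops filling the 256-entry `char_dist` table as soon as it is full, instead of writing past its end when the per-value counts overshoot. A self-contained scalar model of that fill-with-clamp logic — `fill_char_dist` is an illustrative helper, not the libvpx function, and the Gaussian weights are replaced by an explicit counts array:

```c
#define DIST_SIZE 256

/* Fill |char_dist| with counts[i] copies of value i, stopping as soon as the
 * table is full (the bounds guard the diff above introduces). Returns the
 * number of entries written. */
static int fill_char_dist(signed char char_dist[DIST_SIZE],
                          const int *counts, int num_values) {
  int i, j, next = 0;
  for (i = 0; i < num_values; ++i) {
    for (j = 0; j < counts[i]; ++j) {
      if (next + j >= DIST_SIZE) return next + j; /* table full: stop */
      char_dist[next + j] = (signed char)i;
    }
    next += j;
  }
  return next;
}
```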
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/avg_pred_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/avg_pred_neon.c
index 1370ec2..5afdece 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/avg_pred_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/avg_pred_neon.c
@@ -17,8 +17,8 @@
 void vpx_comp_avg_pred_neon(uint8_t *comp, const uint8_t *pred, int width,
                             int height, const uint8_t *ref, int ref_stride) {
   if (width > 8) {
-    int x, y;
-    for (y = 0; y < height; ++y) {
+    int x, y = height;
+    do {
       for (x = 0; x < width; x += 16) {
         const uint8x16_t p = vld1q_u8(pred + x);
         const uint8x16_t r = vld1q_u8(ref + x);
@@ -28,28 +28,38 @@
       comp += width;
       pred += width;
       ref += ref_stride;
-    }
-  } else {
-    int i;
-    for (i = 0; i < width * height; i += 16) {
+    } while (--y);
+  } else if (width == 8) {
+    int i = width * height;
+    do {
       const uint8x16_t p = vld1q_u8(pred);
       uint8x16_t r;
-
-      if (width == 4) {
-        r = load_unaligned_u8q(ref, ref_stride);
-        ref += 4 * ref_stride;
-      } else {
-        const uint8x8_t r_0 = vld1_u8(ref);
-        const uint8x8_t r_1 = vld1_u8(ref + ref_stride);
-        assert(width == 8);
-        r = vcombine_u8(r_0, r_1);
-        ref += 2 * ref_stride;
-      }
+      const uint8x8_t r_0 = vld1_u8(ref);
+      const uint8x8_t r_1 = vld1_u8(ref + ref_stride);
+      r = vcombine_u8(r_0, r_1);
+      ref += 2 * ref_stride;
       r = vrhaddq_u8(r, p);
       vst1q_u8(comp, r);
 
       pred += 16;
       comp += 16;
-    }
+      i -= 16;
+    } while (i);
+  } else {
+    int i = width * height;
+    assert(width == 4);
+    do {
+      const uint8x16_t p = vld1q_u8(pred);
+      uint8x16_t r;
+
+      r = load_unaligned_u8q(ref, ref_stride);
+      ref += 4 * ref_stride;
+      r = vrhaddq_u8(r, p);
+      vst1q_u8(comp, r);
+
+      pred += 16;
+      comp += 16;
+      i -= 16;
+    } while (i);
   }
 }
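The restructured NEON kernel above splits the width > 8, width == 8, and width == 4 cases into separate loops, but each still computes the same per-byte rounding average: `vrhaddq_u8` is a rounding halving add, i.e. `(p + r + 1) >> 1`. A scalar model of that computation, assuming the usual comp/pred/ref layout:

```c
/* Scalar model of the NEON kernel: average pred and ref with rounding,
 * (p + r + 1) >> 1 per byte, which is what vrhaddq_u8 computes per lane. */
static void comp_avg_pred_c(unsigned char *comp, const unsigned char *pred,
                            int width, int height, const unsigned char *ref,
                            int ref_stride) {
  int x, y;
  for (y = 0; y < height; ++y) {
    for (x = 0; x < width; ++x)
      comp[x] = (unsigned char)((pred[x] + ref[x] + 1) >> 1);
    comp += width;
    pred += width;
    ref += ref_stride;
  }
}
```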
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/deblock_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/deblock_neon.c
index 1fb41d2..7efce32 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/deblock_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/deblock_neon.c
@@ -91,11 +91,6 @@
   int row;
   int col;
 
-  // Process a stripe of macroblocks. The stripe will be a multiple of 16 (for
-  // Y) or 8 (for U/V) wide (cols) and the height (size) will be 16 (for Y) or 8
-  // (for U/V).
-  assert((size == 8 || size == 16) && cols % 8 == 0);
-
   // While columns of length 16 can be processed, load them.
   for (col = 0; col < cols - 8; col += 16) {
     uint8x16_t a0, a1, a2, a3, a4, a5, a6, a7;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/fdct_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/fdct_neon.c
index 04646ed..3708cbb 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/fdct_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/fdct_neon.c
@@ -11,6 +11,7 @@
 #include <arm_neon.h>
 
 #include "./vpx_config.h"
+#include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/txfm_common.h"
 #include "vpx_dsp/vpx_dsp_common.h"
 #include "vpx_dsp/arm/idct_neon.h"
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/fwd_txfm_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/fwd_txfm_neon.c
index 8049277..374a262 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/fwd_txfm_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/fwd_txfm_neon.c
@@ -11,6 +11,7 @@
 #include <arm_neon.h>
 
 #include "./vpx_config.h"
+#include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/txfm_common.h"
 #include "vpx_dsp/vpx_dsp_common.h"
 #include "vpx_dsp/arm/idct_neon.h"
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct16x16_add_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct16x16_add_neon.c
index 3fa2f9e..654ab42 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct16x16_add_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct16x16_add_neon.c
@@ -11,6 +11,7 @@
 #include <arm_neon.h>
 
 #include "./vpx_dsp_rtcd.h"
+#include "vpx_dsp/arm/highbd_idct_neon.h"
 #include "vpx_dsp/arm/idct_neon.h"
 #include "vpx_dsp/inv_txfm.h"
 
@@ -515,62 +516,9 @@
   out[15] = vsubq_s32(step2[0], step2[15]);
 }
 
-static INLINE void highbd_idct16x16_store_pass1(const int32x4x2_t *const out,
-                                                int32_t *output) {
-  // Save the result into output
-  vst1q_s32(output + 0, out[0].val[0]);
-  vst1q_s32(output + 4, out[0].val[1]);
-  output += 16;
-  vst1q_s32(output + 0, out[1].val[0]);
-  vst1q_s32(output + 4, out[1].val[1]);
-  output += 16;
-  vst1q_s32(output + 0, out[2].val[0]);
-  vst1q_s32(output + 4, out[2].val[1]);
-  output += 16;
-  vst1q_s32(output + 0, out[3].val[0]);
-  vst1q_s32(output + 4, out[3].val[1]);
-  output += 16;
-  vst1q_s32(output + 0, out[4].val[0]);
-  vst1q_s32(output + 4, out[4].val[1]);
-  output += 16;
-  vst1q_s32(output + 0, out[5].val[0]);
-  vst1q_s32(output + 4, out[5].val[1]);
-  output += 16;
-  vst1q_s32(output + 0, out[6].val[0]);
-  vst1q_s32(output + 4, out[6].val[1]);
-  output += 16;
-  vst1q_s32(output + 0, out[7].val[0]);
-  vst1q_s32(output + 4, out[7].val[1]);
-  output += 16;
-  vst1q_s32(output + 0, out[8].val[0]);
-  vst1q_s32(output + 4, out[8].val[1]);
-  output += 16;
-  vst1q_s32(output + 0, out[9].val[0]);
-  vst1q_s32(output + 4, out[9].val[1]);
-  output += 16;
-  vst1q_s32(output + 0, out[10].val[0]);
-  vst1q_s32(output + 4, out[10].val[1]);
-  output += 16;
-  vst1q_s32(output + 0, out[11].val[0]);
-  vst1q_s32(output + 4, out[11].val[1]);
-  output += 16;
-  vst1q_s32(output + 0, out[12].val[0]);
-  vst1q_s32(output + 4, out[12].val[1]);
-  output += 16;
-  vst1q_s32(output + 0, out[13].val[0]);
-  vst1q_s32(output + 4, out[13].val[1]);
-  output += 16;
-  vst1q_s32(output + 0, out[14].val[0]);
-  vst1q_s32(output + 4, out[14].val[1]);
-  output += 16;
-  vst1q_s32(output + 0, out[15].val[0]);
-  vst1q_s32(output + 4, out[15].val[1]);
-}
-
-static void vpx_highbd_idct16x16_256_add_half1d(const int32_t *input,
-                                                int32_t *output, uint16_t *dest,
-                                                const int stride,
-                                                const int bd) {
+void vpx_highbd_idct16x16_256_add_half1d(const int32_t *input, int32_t *output,
+                                         uint16_t *dest, const int stride,
+                                         const int bd) {
   const int32x4_t cospi_0_8_16_24 = vld1q_s32(kCospi32 + 0);
   const int32x4_t cospi_4_12_20N_28 = vld1q_s32(kCospi32 + 4);
   const int32x4_t cospi_2_30_10_22 = vld1q_s32(kCospi32 + 8);
@@ -978,8 +926,8 @@
   }
 }
 
-void vpx_highbd_idct16x16_10_add_half1d_pass1(const tran_low_t *input,
-                                              int32_t *output) {
+static void highbd_idct16x16_10_add_half1d_pass1(const tran_low_t *input,
+                                                 int32_t *output) {
   const int32x4_t cospi_0_8_16_24 = vld1q_s32(kCospi32 + 0);
   const int32x4_t cospi_4_12_20N_28 = vld1q_s32(kCospi32 + 4);
   const int32x4_t cospi_2_30_10_22 = vld1q_s32(kCospi32 + 8);
@@ -1117,10 +1065,11 @@
   vst1q_s32(output, out[15]);
 }
 
-void vpx_highbd_idct16x16_10_add_half1d_pass2(const int32_t *input,
-                                              int32_t *const output,
-                                              uint16_t *const dest,
-                                              const int stride, const int bd) {
+static void highbd_idct16x16_10_add_half1d_pass2(const int32_t *input,
+                                                 int32_t *const output,
+                                                 uint16_t *const dest,
+                                                 const int stride,
+                                                 const int bd) {
   const int32x4_t cospi_0_8_16_24 = vld1q_s32(kCospi32 + 0);
   const int32x4_t cospi_4_12_20N_28 = vld1q_s32(kCospi32 + 4);
   const int32x4_t cospi_2_30_10_22 = vld1q_s32(kCospi32 + 8);
@@ -1341,16 +1290,16 @@
 
     // pass 1
     // Parallel idct on the upper 8 rows
-    vpx_highbd_idct16x16_10_add_half1d_pass1(input, row_idct_output);
+    highbd_idct16x16_10_add_half1d_pass1(input, row_idct_output);
 
     // pass 2
     // Parallel idct to get the left 8 columns
-    vpx_highbd_idct16x16_10_add_half1d_pass2(row_idct_output, NULL, dest,
-                                             stride, bd);
+    highbd_idct16x16_10_add_half1d_pass2(row_idct_output, NULL, dest, stride,
+                                         bd);
 
     // Parallel idct to get the right 8 columns
-    vpx_highbd_idct16x16_10_add_half1d_pass2(row_idct_output + 4 * 8, NULL,
-                                             dest + 8, stride, bd);
+    highbd_idct16x16_10_add_half1d_pass2(row_idct_output + 4 * 8, NULL,
+                                         dest + 8, stride, bd);
   }
 }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_135_add_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_135_add_neon.c
index 3970a5a..6750c1a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_135_add_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_135_add_neon.c
@@ -12,6 +12,7 @@
 
 #include "./vpx_config.h"
 #include "./vpx_dsp_rtcd.h"
+#include "vpx_dsp/arm/highbd_idct_neon.h"
 #include "vpx_dsp/arm/idct_neon.h"
 #include "vpx_dsp/arm/transpose_neon.h"
 #include "vpx_dsp/txfm_common.h"
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_34_add_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_34_add_neon.c
index 5d9063b..f05932c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_34_add_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct32x32_34_add_neon.c
@@ -12,6 +12,7 @@
 
 #include "./vpx_config.h"
 #include "./vpx_dsp_rtcd.h"
+#include "vpx_dsp/arm/highbd_idct_neon.h"
 #include "vpx_dsp/arm/idct_neon.h"
 #include "vpx_dsp/arm/transpose_neon.h"
 #include "vpx_dsp/txfm_common.h"
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct_neon.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct_neon.h
index 612bcf5..518ef43 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct_neon.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/highbd_idct_neon.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_ARM_HIGHBD_IDCT_NEON_H_
-#define VPX_DSP_ARM_HIGHBD_IDCT_NEON_H_
+#ifndef VPX_VPX_DSP_ARM_HIGHBD_IDCT_NEON_H_
+#define VPX_VPX_DSP_ARM_HIGHBD_IDCT_NEON_H_
 
 #include <arm_neon.h>
 
@@ -359,4 +359,116 @@
   *io7 = vsubq_s32(step1[0], step2[7]);
 }
 
-#endif  // VPX_DSP_ARM_HIGHBD_IDCT_NEON_H_
+static INLINE void highbd_idct16x16_store_pass1(const int32x4x2_t *const out,
+                                                int32_t *output) {
+  // Save the result into output
+  vst1q_s32(output + 0, out[0].val[0]);
+  vst1q_s32(output + 4, out[0].val[1]);
+  output += 16;
+  vst1q_s32(output + 0, out[1].val[0]);
+  vst1q_s32(output + 4, out[1].val[1]);
+  output += 16;
+  vst1q_s32(output + 0, out[2].val[0]);
+  vst1q_s32(output + 4, out[2].val[1]);
+  output += 16;
+  vst1q_s32(output + 0, out[3].val[0]);
+  vst1q_s32(output + 4, out[3].val[1]);
+  output += 16;
+  vst1q_s32(output + 0, out[4].val[0]);
+  vst1q_s32(output + 4, out[4].val[1]);
+  output += 16;
+  vst1q_s32(output + 0, out[5].val[0]);
+  vst1q_s32(output + 4, out[5].val[1]);
+  output += 16;
+  vst1q_s32(output + 0, out[6].val[0]);
+  vst1q_s32(output + 4, out[6].val[1]);
+  output += 16;
+  vst1q_s32(output + 0, out[7].val[0]);
+  vst1q_s32(output + 4, out[7].val[1]);
+  output += 16;
+  vst1q_s32(output + 0, out[8].val[0]);
+  vst1q_s32(output + 4, out[8].val[1]);
+  output += 16;
+  vst1q_s32(output + 0, out[9].val[0]);
+  vst1q_s32(output + 4, out[9].val[1]);
+  output += 16;
+  vst1q_s32(output + 0, out[10].val[0]);
+  vst1q_s32(output + 4, out[10].val[1]);
+  output += 16;
+  vst1q_s32(output + 0, out[11].val[0]);
+  vst1q_s32(output + 4, out[11].val[1]);
+  output += 16;
+  vst1q_s32(output + 0, out[12].val[0]);
+  vst1q_s32(output + 4, out[12].val[1]);
+  output += 16;
+  vst1q_s32(output + 0, out[13].val[0]);
+  vst1q_s32(output + 4, out[13].val[1]);
+  output += 16;
+  vst1q_s32(output + 0, out[14].val[0]);
+  vst1q_s32(output + 4, out[14].val[1]);
+  output += 16;
+  vst1q_s32(output + 0, out[15].val[0]);
+  vst1q_s32(output + 4, out[15].val[1]);
+}
+
+static INLINE void highbd_idct16x16_add_store(const int32x4x2_t *const out,
+                                              uint16_t *dest, const int stride,
+                                              const int bd) {
+  // Add the result to dest
+  const int16x8_t max = vdupq_n_s16((1 << bd) - 1);
+  int16x8_t o[16];
+  o[0] = vcombine_s16(vrshrn_n_s32(out[0].val[0], 6),
+                      vrshrn_n_s32(out[0].val[1], 6));
+  o[1] = vcombine_s16(vrshrn_n_s32(out[1].val[0], 6),
+                      vrshrn_n_s32(out[1].val[1], 6));
+  o[2] = vcombine_s16(vrshrn_n_s32(out[2].val[0], 6),
+                      vrshrn_n_s32(out[2].val[1], 6));
+  o[3] = vcombine_s16(vrshrn_n_s32(out[3].val[0], 6),
+                      vrshrn_n_s32(out[3].val[1], 6));
+  o[4] = vcombine_s16(vrshrn_n_s32(out[4].val[0], 6),
+                      vrshrn_n_s32(out[4].val[1], 6));
+  o[5] = vcombine_s16(vrshrn_n_s32(out[5].val[0], 6),
+                      vrshrn_n_s32(out[5].val[1], 6));
+  o[6] = vcombine_s16(vrshrn_n_s32(out[6].val[0], 6),
+                      vrshrn_n_s32(out[6].val[1], 6));
+  o[7] = vcombine_s16(vrshrn_n_s32(out[7].val[0], 6),
+                      vrshrn_n_s32(out[7].val[1], 6));
+  o[8] = vcombine_s16(vrshrn_n_s32(out[8].val[0], 6),
+                      vrshrn_n_s32(out[8].val[1], 6));
+  o[9] = vcombine_s16(vrshrn_n_s32(out[9].val[0], 6),
+                      vrshrn_n_s32(out[9].val[1], 6));
+  o[10] = vcombine_s16(vrshrn_n_s32(out[10].val[0], 6),
+                       vrshrn_n_s32(out[10].val[1], 6));
+  o[11] = vcombine_s16(vrshrn_n_s32(out[11].val[0], 6),
+                       vrshrn_n_s32(out[11].val[1], 6));
+  o[12] = vcombine_s16(vrshrn_n_s32(out[12].val[0], 6),
+                       vrshrn_n_s32(out[12].val[1], 6));
+  o[13] = vcombine_s16(vrshrn_n_s32(out[13].val[0], 6),
+                       vrshrn_n_s32(out[13].val[1], 6));
+  o[14] = vcombine_s16(vrshrn_n_s32(out[14].val[0], 6),
+                       vrshrn_n_s32(out[14].val[1], 6));
+  o[15] = vcombine_s16(vrshrn_n_s32(out[15].val[0], 6),
+                       vrshrn_n_s32(out[15].val[1], 6));
+  highbd_idct16x16_add8x1(o[0], max, &dest, stride);
+  highbd_idct16x16_add8x1(o[1], max, &dest, stride);
+  highbd_idct16x16_add8x1(o[2], max, &dest, stride);
+  highbd_idct16x16_add8x1(o[3], max, &dest, stride);
+  highbd_idct16x16_add8x1(o[4], max, &dest, stride);
+  highbd_idct16x16_add8x1(o[5], max, &dest, stride);
+  highbd_idct16x16_add8x1(o[6], max, &dest, stride);
+  highbd_idct16x16_add8x1(o[7], max, &dest, stride);
+  highbd_idct16x16_add8x1(o[8], max, &dest, stride);
+  highbd_idct16x16_add8x1(o[9], max, &dest, stride);
+  highbd_idct16x16_add8x1(o[10], max, &dest, stride);
+  highbd_idct16x16_add8x1(o[11], max, &dest, stride);
+  highbd_idct16x16_add8x1(o[12], max, &dest, stride);
+  highbd_idct16x16_add8x1(o[13], max, &dest, stride);
+  highbd_idct16x16_add8x1(o[14], max, &dest, stride);
+  highbd_idct16x16_add8x1(o[15], max, &dest, stride);
+}
+
+void vpx_highbd_idct16x16_256_add_half1d(const int32_t *input, int32_t *output,
+                                         uint16_t *dest, const int stride,
+                                         const int bd);
+
+#endif  // VPX_VPX_DSP_ARM_HIGHBD_IDCT_NEON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/idct_neon.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/idct_neon.h
index 73dc2a4..c023113 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/idct_neon.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/idct_neon.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_ARM_IDCT_NEON_H_
-#define VPX_DSP_ARM_IDCT_NEON_H_
+#ifndef VPX_VPX_DSP_ARM_IDCT_NEON_H_
+#define VPX_VPX_DSP_ARM_IDCT_NEON_H_
 
 #include <arm_neon.h>
 
@@ -890,62 +890,6 @@
   highbd_idct16x16_add8x1_bd8(a[31], &out, stride);
 }
 
-static INLINE void highbd_idct16x16_add_store(const int32x4x2_t *const out,
-                                              uint16_t *dest, const int stride,
-                                              const int bd) {
-  // Add the result to dest
-  const int16x8_t max = vdupq_n_s16((1 << bd) - 1);
-  int16x8_t o[16];
-  o[0] = vcombine_s16(vrshrn_n_s32(out[0].val[0], 6),
-                      vrshrn_n_s32(out[0].val[1], 6));
-  o[1] = vcombine_s16(vrshrn_n_s32(out[1].val[0], 6),
-                      vrshrn_n_s32(out[1].val[1], 6));
-  o[2] = vcombine_s16(vrshrn_n_s32(out[2].val[0], 6),
-                      vrshrn_n_s32(out[2].val[1], 6));
-  o[3] = vcombine_s16(vrshrn_n_s32(out[3].val[0], 6),
-                      vrshrn_n_s32(out[3].val[1], 6));
-  o[4] = vcombine_s16(vrshrn_n_s32(out[4].val[0], 6),
-                      vrshrn_n_s32(out[4].val[1], 6));
-  o[5] = vcombine_s16(vrshrn_n_s32(out[5].val[0], 6),
-                      vrshrn_n_s32(out[5].val[1], 6));
-  o[6] = vcombine_s16(vrshrn_n_s32(out[6].val[0], 6),
-                      vrshrn_n_s32(out[6].val[1], 6));
-  o[7] = vcombine_s16(vrshrn_n_s32(out[7].val[0], 6),
-                      vrshrn_n_s32(out[7].val[1], 6));
-  o[8] = vcombine_s16(vrshrn_n_s32(out[8].val[0], 6),
-                      vrshrn_n_s32(out[8].val[1], 6));
-  o[9] = vcombine_s16(vrshrn_n_s32(out[9].val[0], 6),
-                      vrshrn_n_s32(out[9].val[1], 6));
-  o[10] = vcombine_s16(vrshrn_n_s32(out[10].val[0], 6),
-                       vrshrn_n_s32(out[10].val[1], 6));
-  o[11] = vcombine_s16(vrshrn_n_s32(out[11].val[0], 6),
-                       vrshrn_n_s32(out[11].val[1], 6));
-  o[12] = vcombine_s16(vrshrn_n_s32(out[12].val[0], 6),
-                       vrshrn_n_s32(out[12].val[1], 6));
-  o[13] = vcombine_s16(vrshrn_n_s32(out[13].val[0], 6),
-                       vrshrn_n_s32(out[13].val[1], 6));
-  o[14] = vcombine_s16(vrshrn_n_s32(out[14].val[0], 6),
-                       vrshrn_n_s32(out[14].val[1], 6));
-  o[15] = vcombine_s16(vrshrn_n_s32(out[15].val[0], 6),
-                       vrshrn_n_s32(out[15].val[1], 6));
-  highbd_idct16x16_add8x1(o[0], max, &dest, stride);
-  highbd_idct16x16_add8x1(o[1], max, &dest, stride);
-  highbd_idct16x16_add8x1(o[2], max, &dest, stride);
-  highbd_idct16x16_add8x1(o[3], max, &dest, stride);
-  highbd_idct16x16_add8x1(o[4], max, &dest, stride);
-  highbd_idct16x16_add8x1(o[5], max, &dest, stride);
-  highbd_idct16x16_add8x1(o[6], max, &dest, stride);
-  highbd_idct16x16_add8x1(o[7], max, &dest, stride);
-  highbd_idct16x16_add8x1(o[8], max, &dest, stride);
-  highbd_idct16x16_add8x1(o[9], max, &dest, stride);
-  highbd_idct16x16_add8x1(o[10], max, &dest, stride);
-  highbd_idct16x16_add8x1(o[11], max, &dest, stride);
-  highbd_idct16x16_add8x1(o[12], max, &dest, stride);
-  highbd_idct16x16_add8x1(o[13], max, &dest, stride);
-  highbd_idct16x16_add8x1(o[14], max, &dest, stride);
-  highbd_idct16x16_add8x1(o[15], max, &dest, stride);
-}
-
 void vpx_idct16x16_256_add_half1d(const void *const input, int16_t *output,
                                   void *const dest, const int stride,
                                   const int highbd_flag);
@@ -972,4 +916,4 @@
 void vpx_idct32_8_neon(const int16_t *input, void *const output, int stride,
                        const int highbd_flag);
 
-#endif  // VPX_DSP_ARM_IDCT_NEON_H_
+#endif  // VPX_VPX_DSP_ARM_IDCT_NEON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/loopfilter_8_neon.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/loopfilter_8_neon.asm
index a042d40..a81a9d1 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/loopfilter_8_neon.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/loopfilter_8_neon.asm
@@ -201,7 +201,7 @@
     str         lr, [sp, #16]              ; thresh1
     add         sp, #4
     pop         {r0-r1, lr}
-    add         r0, r1, lsl #3             ; s + 8 * pitch
+    add         r0, r0, r1, lsl #3         ; s + 8 * pitch
     b           vpx_lpf_vertical_8_neon
     ENDP        ; |vpx_lpf_vertical_8_dual_neon|
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/mem_neon.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/mem_neon.h
index 12c0a54..943865b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/mem_neon.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/mem_neon.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_ARM_MEM_NEON_H_
-#define VPX_DSP_ARM_MEM_NEON_H_
+#ifndef VPX_VPX_DSP_ARM_MEM_NEON_H_
+#define VPX_VPX_DSP_ARM_MEM_NEON_H_
 
 #include <arm_neon.h>
 #include <assert.h>
@@ -101,9 +101,9 @@
   if (stride == 4) return vld1_u8(buf);
   memcpy(&a, buf, 4);
   buf += stride;
-  a_u32 = vld1_lane_u32(&a, a_u32, 0);
+  a_u32 = vset_lane_u32(a, a_u32, 0);
   memcpy(&a, buf, 4);
-  a_u32 = vld1_lane_u32(&a, a_u32, 1);
+  a_u32 = vset_lane_u32(a, a_u32, 1);
   return vreinterpret_u8_u32(a_u32);
 }
 
@@ -127,16 +127,16 @@
   if (stride == 4) return vld1q_u8(buf);
   memcpy(&a, buf, 4);
   buf += stride;
-  a_u32 = vld1q_lane_u32(&a, a_u32, 0);
+  a_u32 = vsetq_lane_u32(a, a_u32, 0);
   memcpy(&a, buf, 4);
   buf += stride;
-  a_u32 = vld1q_lane_u32(&a, a_u32, 1);
+  a_u32 = vsetq_lane_u32(a, a_u32, 1);
   memcpy(&a, buf, 4);
   buf += stride;
-  a_u32 = vld1q_lane_u32(&a, a_u32, 2);
+  a_u32 = vsetq_lane_u32(a, a_u32, 2);
   memcpy(&a, buf, 4);
   buf += stride;
-  a_u32 = vld1q_lane_u32(&a, a_u32, 3);
+  a_u32 = vsetq_lane_u32(a, a_u32, 3);
   return vreinterpretq_u8_u32(a_u32);
 }
 
@@ -181,4 +181,4 @@
   buf += stride;
   vst1_lane_u32((uint32_t *)buf, a_u32, 1);
 }
-#endif  // VPX_DSP_ARM_MEM_NEON_H_
+#endif  // VPX_VPX_DSP_ARM_MEM_NEON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/quantize_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/quantize_neon.c
index a0a1e6d..adef5f6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/quantize_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/quantize_neon.c
@@ -15,17 +15,33 @@
 #include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/arm/mem_neon.h"
 
+static INLINE void calculate_dqcoeff_and_store(const int16x8_t qcoeff,
+                                               const int16x8_t dequant,
+                                               tran_low_t *dqcoeff) {
+  const int32x4_t dqcoeff_0 =
+      vmull_s16(vget_low_s16(qcoeff), vget_low_s16(dequant));
+  const int32x4_t dqcoeff_1 =
+      vmull_s16(vget_high_s16(qcoeff), vget_high_s16(dequant));
+
+#if CONFIG_VP9_HIGHBITDEPTH
+  vst1q_s32(dqcoeff, dqcoeff_0);
+  vst1q_s32(dqcoeff + 4, dqcoeff_1);
+#else
+  vst1q_s16(dqcoeff, vcombine_s16(vmovn_s32(dqcoeff_0), vmovn_s32(dqcoeff_1)));
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+}
+
 void vpx_quantize_b_neon(const tran_low_t *coeff_ptr, intptr_t n_coeffs,
                          int skip_block, const int16_t *zbin_ptr,
                          const int16_t *round_ptr, const int16_t *quant_ptr,
                          const int16_t *quant_shift_ptr, tran_low_t *qcoeff_ptr,
                          tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr,
-                         uint16_t *eob_ptr, const int16_t *scan_ptr,
-                         const int16_t *iscan_ptr) {
+                         uint16_t *eob_ptr, const int16_t *scan,
+                         const int16_t *iscan) {
   const int16x8_t one = vdupq_n_s16(1);
   const int16x8_t neg_one = vdupq_n_s16(-1);
   uint16x8_t eob_max;
-  (void)scan_ptr;
+  (void)scan;
   (void)skip_block;
   assert(!skip_block);
 
@@ -38,8 +54,8 @@
     const int16x8_t quant_shift = vld1q_s16(quant_shift_ptr);
     const int16x8_t dequant = vld1q_s16(dequant_ptr);
     // Add one because the eob does not index from 0.
-    const uint16x8_t iscan =
-        vreinterpretq_u16_s16(vaddq_s16(vld1q_s16(iscan_ptr), one));
+    const uint16x8_t v_iscan =
+        vreinterpretq_u16_s16(vaddq_s16(vld1q_s16(iscan), one));
 
     const int16x8_t coeff = load_tran_low_to_s16q(coeff_ptr);
     const int16x8_t coeff_sign = vshrq_n_s16(coeff, 15);
@@ -65,17 +81,15 @@
     qcoeff = vandq_s16(qcoeff, zbin_mask);
 
     // Set non-zero elements to -1 and use that to extract values for eob.
-    eob_max = vandq_u16(vtstq_s16(qcoeff, neg_one), iscan);
+    eob_max = vandq_u16(vtstq_s16(qcoeff, neg_one), v_iscan);
 
     coeff_ptr += 8;
-    iscan_ptr += 8;
+    iscan += 8;
 
     store_s16q_to_tran_low(qcoeff_ptr, qcoeff);
     qcoeff_ptr += 8;
 
-    qcoeff = vmulq_s16(qcoeff, dequant);
-
-    store_s16q_to_tran_low(dqcoeff_ptr, qcoeff);
+    calculate_dqcoeff_and_store(qcoeff, dequant, dqcoeff_ptr);
     dqcoeff_ptr += 8;
   }
 
@@ -90,8 +104,8 @@
 
     do {
       // Add one because the eob is not its index.
-      const uint16x8_t iscan =
-          vreinterpretq_u16_s16(vaddq_s16(vld1q_s16(iscan_ptr), one));
+      const uint16x8_t v_iscan =
+          vreinterpretq_u16_s16(vaddq_s16(vld1q_s16(iscan), one));
 
       const int16x8_t coeff = load_tran_low_to_s16q(coeff_ptr);
       const int16x8_t coeff_sign = vshrq_n_s16(coeff, 15);
@@ -118,23 +132,24 @@
 
       // Set non-zero elements to -1 and use that to extract values for eob.
       eob_max =
-          vmaxq_u16(eob_max, vandq_u16(vtstq_s16(qcoeff, neg_one), iscan));
+          vmaxq_u16(eob_max, vandq_u16(vtstq_s16(qcoeff, neg_one), v_iscan));
 
       coeff_ptr += 8;
-      iscan_ptr += 8;
+      iscan += 8;
 
       store_s16q_to_tran_low(qcoeff_ptr, qcoeff);
       qcoeff_ptr += 8;
 
-      qcoeff = vmulq_s16(qcoeff, dequant);
-
-      store_s16q_to_tran_low(dqcoeff_ptr, qcoeff);
+      calculate_dqcoeff_and_store(qcoeff, dequant, dqcoeff_ptr);
       dqcoeff_ptr += 8;
 
       n_coeffs -= 8;
     } while (n_coeffs > 0);
   }
 
+#ifdef __aarch64__
+  *eob_ptr = vmaxvq_u16(eob_max);
+#else
   {
     const uint16x4_t eob_max_0 =
         vmax_u16(vget_low_u16(eob_max), vget_high_u16(eob_max));
@@ -142,25 +157,50 @@
     const uint16x4_t eob_max_2 = vpmax_u16(eob_max_1, eob_max_1);
     vst1_lane_u16(eob_ptr, eob_max_2, 0);
   }
+#endif  // __aarch64__
 }
 
 static INLINE int32x4_t extract_sign_bit(int32x4_t a) {
   return vreinterpretq_s32_u32(vshrq_n_u32(vreinterpretq_u32_s32(a), 31));
 }
 
+static INLINE void calculate_dqcoeff_and_store_32x32(const int16x8_t qcoeff,
+                                                     const int16x8_t dequant,
+                                                     tran_low_t *dqcoeff) {
+  int32x4_t dqcoeff_0 = vmull_s16(vget_low_s16(qcoeff), vget_low_s16(dequant));
+  int32x4_t dqcoeff_1 =
+      vmull_s16(vget_high_s16(qcoeff), vget_high_s16(dequant));
+
+  // Add 1 if negative to round towards zero because the C uses division.
+  dqcoeff_0 = vaddq_s32(dqcoeff_0, extract_sign_bit(dqcoeff_0));
+  dqcoeff_1 = vaddq_s32(dqcoeff_1, extract_sign_bit(dqcoeff_1));
+
+#if CONFIG_VP9_HIGHBITDEPTH
+  dqcoeff_0 = vshrq_n_s32(dqcoeff_0, 1);
+  dqcoeff_1 = vshrq_n_s32(dqcoeff_1, 1);
+  vst1q_s32(dqcoeff, dqcoeff_0);
+  vst1q_s32(dqcoeff + 4, dqcoeff_1);
+#else
+  vst1q_s16(dqcoeff,
+            vcombine_s16(vshrn_n_s32(dqcoeff_0, 1), vshrn_n_s32(dqcoeff_1, 1)));
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+}
+
 // Main difference is that zbin values are halved before comparison and dqcoeff
 // values are divided by 2. zbin is rounded but dqcoeff is not.
-void vpx_quantize_b_32x32_neon(
-    const tran_low_t *coeff_ptr, intptr_t n_coeffs, int skip_block,
-    const int16_t *zbin_ptr, const int16_t *round_ptr, const int16_t *quant_ptr,
-    const int16_t *quant_shift_ptr, tran_low_t *qcoeff_ptr,
-    tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr, uint16_t *eob_ptr,
-    const int16_t *scan_ptr, const int16_t *iscan_ptr) {
+void vpx_quantize_b_32x32_neon(const tran_low_t *coeff_ptr, intptr_t n_coeffs,
+                               int skip_block, const int16_t *zbin_ptr,
+                               const int16_t *round_ptr,
+                               const int16_t *quant_ptr,
+                               const int16_t *quant_shift_ptr,
+                               tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr,
+                               const int16_t *dequant_ptr, uint16_t *eob_ptr,
+                               const int16_t *scan, const int16_t *iscan) {
   const int16x8_t one = vdupq_n_s16(1);
   const int16x8_t neg_one = vdupq_n_s16(-1);
   uint16x8_t eob_max;
   int i;
-  (void)scan_ptr;
+  (void)scan;
   (void)n_coeffs;  // Because we will always calculate 32*32.
   (void)skip_block;
   assert(!skip_block);
@@ -174,8 +214,8 @@
     const int16x8_t quant_shift = vld1q_s16(quant_shift_ptr);
     const int16x8_t dequant = vld1q_s16(dequant_ptr);
     // Add one because the eob does not index from 0.
-    const uint16x8_t iscan =
-        vreinterpretq_u16_s16(vaddq_s16(vld1q_s16(iscan_ptr), one));
+    const uint16x8_t v_iscan =
+        vreinterpretq_u16_s16(vaddq_s16(vld1q_s16(iscan), one));
 
     const int16x8_t coeff = load_tran_low_to_s16q(coeff_ptr);
     const int16x8_t coeff_sign = vshrq_n_s16(coeff, 15);
@@ -188,8 +228,6 @@
 
     // (round * quant * 2) >> 16 >> 1 == (round * quant) >> 16
     int16x8_t qcoeff = vshrq_n_s16(vqdmulhq_s16(rounded, quant), 1);
-    int16x8_t dqcoeff;
-    int32x4_t dqcoeff_0, dqcoeff_1;
 
     qcoeff = vaddq_s16(qcoeff, rounded);
 
@@ -203,25 +241,15 @@
     qcoeff = vandq_s16(qcoeff, zbin_mask);
 
     // Set non-zero elements to -1 and use that to extract values for eob.
-    eob_max = vandq_u16(vtstq_s16(qcoeff, neg_one), iscan);
+    eob_max = vandq_u16(vtstq_s16(qcoeff, neg_one), v_iscan);
 
     coeff_ptr += 8;
-    iscan_ptr += 8;
+    iscan += 8;
 
     store_s16q_to_tran_low(qcoeff_ptr, qcoeff);
     qcoeff_ptr += 8;
 
-    dqcoeff_0 = vmull_s16(vget_low_s16(qcoeff), vget_low_s16(dequant));
-    dqcoeff_1 = vmull_s16(vget_high_s16(qcoeff), vget_high_s16(dequant));
-
-    // Add 1 if negative to round towards zero because the C uses division.
-    dqcoeff_0 = vaddq_s32(dqcoeff_0, extract_sign_bit(dqcoeff_0));
-    dqcoeff_1 = vaddq_s32(dqcoeff_1, extract_sign_bit(dqcoeff_1));
-
-    dqcoeff =
-        vcombine_s16(vshrn_n_s32(dqcoeff_0, 1), vshrn_n_s32(dqcoeff_1, 1));
-
-    store_s16q_to_tran_low(dqcoeff_ptr, dqcoeff);
+    calculate_dqcoeff_and_store_32x32(qcoeff, dequant, dqcoeff_ptr);
     dqcoeff_ptr += 8;
   }
 
@@ -234,8 +262,8 @@
 
     for (i = 1; i < 32 * 32 / 8; ++i) {
       // Add one because the eob is not its index.
-      const uint16x8_t iscan =
-          vreinterpretq_u16_s16(vaddq_s16(vld1q_s16(iscan_ptr), one));
+      const uint16x8_t v_iscan =
+          vreinterpretq_u16_s16(vaddq_s16(vld1q_s16(iscan), one));
 
       const int16x8_t coeff = load_tran_low_to_s16q(coeff_ptr);
       const int16x8_t coeff_sign = vshrq_n_s16(coeff, 15);
@@ -248,8 +276,6 @@
 
       // (round * quant * 2) >> 16 >> 1 == (round * quant) >> 16
       int16x8_t qcoeff = vshrq_n_s16(vqdmulhq_s16(rounded, quant), 1);
-      int16x8_t dqcoeff;
-      int32x4_t dqcoeff_0, dqcoeff_1;
 
       qcoeff = vaddq_s16(qcoeff, rounded);
 
@@ -264,28 +290,22 @@
 
       // Set non-zero elements to -1 and use that to extract values for eob.
       eob_max =
-          vmaxq_u16(eob_max, vandq_u16(vtstq_s16(qcoeff, neg_one), iscan));
+          vmaxq_u16(eob_max, vandq_u16(vtstq_s16(qcoeff, neg_one), v_iscan));
 
       coeff_ptr += 8;
-      iscan_ptr += 8;
+      iscan += 8;
 
       store_s16q_to_tran_low(qcoeff_ptr, qcoeff);
       qcoeff_ptr += 8;
 
-      dqcoeff_0 = vmull_s16(vget_low_s16(qcoeff), vget_low_s16(dequant));
-      dqcoeff_1 = vmull_s16(vget_high_s16(qcoeff), vget_high_s16(dequant));
-
-      dqcoeff_0 = vaddq_s32(dqcoeff_0, extract_sign_bit(dqcoeff_0));
-      dqcoeff_1 = vaddq_s32(dqcoeff_1, extract_sign_bit(dqcoeff_1));
-
-      dqcoeff =
-          vcombine_s16(vshrn_n_s32(dqcoeff_0, 1), vshrn_n_s32(dqcoeff_1, 1));
-
-      store_s16q_to_tran_low(dqcoeff_ptr, dqcoeff);
+      calculate_dqcoeff_and_store_32x32(qcoeff, dequant, dqcoeff_ptr);
       dqcoeff_ptr += 8;
     }
   }
 
+#ifdef __aarch64__
+  *eob_ptr = vmaxvq_u16(eob_max);
+#else
   {
     const uint16x4_t eob_max_0 =
         vmax_u16(vget_low_u16(eob_max), vget_high_u16(eob_max));
@@ -293,4 +313,5 @@
     const uint16x4_t eob_max_2 = vpmax_u16(eob_max_1, eob_max_1);
     vst1_lane_u16(eob_ptr, eob_max_2, 0);
   }
+#endif  // __aarch64__
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/sad4d_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/sad4d_neon.c
index b04de3a..06443c6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/sad4d_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/sad4d_neon.c
@@ -10,233 +10,371 @@
 
 #include <arm_neon.h>
 
+#include <assert.h>
 #include "./vpx_config.h"
 #include "./vpx_dsp_rtcd.h"
 #include "vpx/vpx_integer.h"
 #include "vpx_dsp/arm/mem_neon.h"
 #include "vpx_dsp/arm/sum_neon.h"
 
-void vpx_sad4x4x4d_neon(const uint8_t *src, int src_stride,
-                        const uint8_t *const ref[4], int ref_stride,
-                        uint32_t *res) {
-  int i;
-  const uint8x16_t src_u8 = load_unaligned_u8q(src, src_stride);
-  for (i = 0; i < 4; ++i) {
-    const uint8x16_t ref_u8 = load_unaligned_u8q(ref[i], ref_stride);
-    uint16x8_t abs = vabdl_u8(vget_low_u8(src_u8), vget_low_u8(ref_u8));
-    abs = vabal_u8(abs, vget_high_u8(src_u8), vget_high_u8(ref_u8));
-    res[i] = vget_lane_u32(horizontal_add_uint16x8(abs), 0);
-  }
+static INLINE uint8x8_t load_unaligned_2_buffers(const void *const buf0,
+                                                 const void *const buf1) {
+  uint32_t a;
+  uint32x2_t aa = vdup_n_u32(0);
+  memcpy(&a, buf0, 4);
+  aa = vset_lane_u32(a, aa, 0);
+  memcpy(&a, buf1, 4);
+  aa = vset_lane_u32(a, aa, 1);
+  return vreinterpret_u8_u32(aa);
 }
 
-void vpx_sad4x8x4d_neon(const uint8_t *src, int src_stride,
-                        const uint8_t *const ref[4], int ref_stride,
-                        uint32_t *res) {
+static INLINE void sad4x_4d(const uint8_t *const src_ptr, const int src_stride,
+                            const uint8_t *const ref_array[4],
+                            const int ref_stride, const int height,
+                            uint32_t *const res) {
   int i;
-  const uint8x16_t src_0 = load_unaligned_u8q(src, src_stride);
-  const uint8x16_t src_1 = load_unaligned_u8q(src + 4 * src_stride, src_stride);
-  for (i = 0; i < 4; ++i) {
-    const uint8x16_t ref_0 = load_unaligned_u8q(ref[i], ref_stride);
-    const uint8x16_t ref_1 =
-        load_unaligned_u8q(ref[i] + 4 * ref_stride, ref_stride);
-    uint16x8_t abs = vabdl_u8(vget_low_u8(src_0), vget_low_u8(ref_0));
-    abs = vabal_u8(abs, vget_high_u8(src_0), vget_high_u8(ref_0));
-    abs = vabal_u8(abs, vget_low_u8(src_1), vget_low_u8(ref_1));
-    abs = vabal_u8(abs, vget_high_u8(src_1), vget_high_u8(ref_1));
-    res[i] = vget_lane_u32(horizontal_add_uint16x8(abs), 0);
-  }
-}
+  uint16x8_t abs[2] = { vdupq_n_u16(0), vdupq_n_u16(0) };
+  uint16x4_t a[2];
+  uint32x4_t r;
 
-static INLINE void sad8x_4d(const uint8_t *a, int a_stride,
-                            const uint8_t *const b[4], int b_stride,
-                            uint32_t *result, const int height) {
-  int i, j;
-  uint16x8_t sum[4] = { vdupq_n_u16(0), vdupq_n_u16(0), vdupq_n_u16(0),
-                        vdupq_n_u16(0) };
-  const uint8_t *b_loop[4] = { b[0], b[1], b[2], b[3] };
+  assert(!((intptr_t)src_ptr % sizeof(uint32_t)));
+  assert(!(src_stride % sizeof(uint32_t)));
 
   for (i = 0; i < height; ++i) {
-    const uint8x8_t a_u8 = vld1_u8(a);
-    a += a_stride;
+    const uint8x8_t s = vreinterpret_u8_u32(
+        vld1_dup_u32((const uint32_t *)(src_ptr + i * src_stride)));
+    const uint8x8_t ref01 = load_unaligned_2_buffers(
+        ref_array[0] + i * ref_stride, ref_array[1] + i * ref_stride);
+    const uint8x8_t ref23 = load_unaligned_2_buffers(
+        ref_array[2] + i * ref_stride, ref_array[3] + i * ref_stride);
+    abs[0] = vabal_u8(abs[0], s, ref01);
+    abs[1] = vabal_u8(abs[1], s, ref23);
+  }
+
+  a[0] = vpadd_u16(vget_low_u16(abs[0]), vget_high_u16(abs[0]));
+  a[1] = vpadd_u16(vget_low_u16(abs[1]), vget_high_u16(abs[1]));
+  r = vpaddlq_u16(vcombine_u16(a[0], a[1]));
+  vst1q_u32(res, r);
+}
+
+void vpx_sad4x4x4d_neon(const uint8_t *src_ptr, int src_stride,
+                        const uint8_t *const ref_array[4], int ref_stride,
+                        uint32_t *res) {
+  sad4x_4d(src_ptr, src_stride, ref_array, ref_stride, 4, res);
+}
+
+void vpx_sad4x8x4d_neon(const uint8_t *src_ptr, int src_stride,
+                        const uint8_t *const ref_array[4], int ref_stride,
+                        uint32_t *res) {
+  sad4x_4d(src_ptr, src_stride, ref_array, ref_stride, 8, res);
+}
+
+////////////////////////////////////////////////////////////////////////////////
+
+// Can handle 512 pixels' sad sum (such as 16x32 or 32x16)
+static INLINE void sad_512_pel_final_neon(const uint16x8_t *sum /*[4]*/,
+                                          uint32_t *const res) {
+  const uint16x4_t a0 = vadd_u16(vget_low_u16(sum[0]), vget_high_u16(sum[0]));
+  const uint16x4_t a1 = vadd_u16(vget_low_u16(sum[1]), vget_high_u16(sum[1]));
+  const uint16x4_t a2 = vadd_u16(vget_low_u16(sum[2]), vget_high_u16(sum[2]));
+  const uint16x4_t a3 = vadd_u16(vget_low_u16(sum[3]), vget_high_u16(sum[3]));
+  const uint16x4_t b0 = vpadd_u16(a0, a1);
+  const uint16x4_t b1 = vpadd_u16(a2, a3);
+  const uint32x4_t r = vpaddlq_u16(vcombine_u16(b0, b1));
+  vst1q_u32(res, r);
+}
+
+// Can handle 1024 pixels' sad sum (such as 32x32)
+static INLINE void sad_1024_pel_final_neon(const uint16x8_t *sum /*[4]*/,
+                                           uint32_t *const res) {
+  const uint16x4_t a0 = vpadd_u16(vget_low_u16(sum[0]), vget_high_u16(sum[0]));
+  const uint16x4_t a1 = vpadd_u16(vget_low_u16(sum[1]), vget_high_u16(sum[1]));
+  const uint16x4_t a2 = vpadd_u16(vget_low_u16(sum[2]), vget_high_u16(sum[2]));
+  const uint16x4_t a3 = vpadd_u16(vget_low_u16(sum[3]), vget_high_u16(sum[3]));
+  const uint32x4_t b0 = vpaddlq_u16(vcombine_u16(a0, a1));
+  const uint32x4_t b1 = vpaddlq_u16(vcombine_u16(a2, a3));
+  const uint32x2_t c0 = vpadd_u32(vget_low_u32(b0), vget_high_u32(b0));
+  const uint32x2_t c1 = vpadd_u32(vget_low_u32(b1), vget_high_u32(b1));
+  vst1q_u32(res, vcombine_u32(c0, c1));
+}
+
+// Can handle 2048 pixels' sad sum (such as 32x64 or 64x32)
+static INLINE void sad_2048_pel_final_neon(const uint16x8_t *sum /*[4]*/,
+                                           uint32_t *const res) {
+  const uint32x4_t a0 = vpaddlq_u16(sum[0]);
+  const uint32x4_t a1 = vpaddlq_u16(sum[1]);
+  const uint32x4_t a2 = vpaddlq_u16(sum[2]);
+  const uint32x4_t a3 = vpaddlq_u16(sum[3]);
+  const uint32x2_t b0 = vadd_u32(vget_low_u32(a0), vget_high_u32(a0));
+  const uint32x2_t b1 = vadd_u32(vget_low_u32(a1), vget_high_u32(a1));
+  const uint32x2_t b2 = vadd_u32(vget_low_u32(a2), vget_high_u32(a2));
+  const uint32x2_t b3 = vadd_u32(vget_low_u32(a3), vget_high_u32(a3));
+  const uint32x2_t c0 = vpadd_u32(b0, b1);
+  const uint32x2_t c1 = vpadd_u32(b2, b3);
+  vst1q_u32(res, vcombine_u32(c0, c1));
+}
+
+// Can handle 4096 pixels' sad sum (such as 64x64)
+static INLINE void sad_4096_pel_final_neon(const uint16x8_t *sum /*[8]*/,
+                                           uint32_t *const res) {
+  const uint32x4_t a0 = vpaddlq_u16(sum[0]);
+  const uint32x4_t a1 = vpaddlq_u16(sum[1]);
+  const uint32x4_t a2 = vpaddlq_u16(sum[2]);
+  const uint32x4_t a3 = vpaddlq_u16(sum[3]);
+  const uint32x4_t a4 = vpaddlq_u16(sum[4]);
+  const uint32x4_t a5 = vpaddlq_u16(sum[5]);
+  const uint32x4_t a6 = vpaddlq_u16(sum[6]);
+  const uint32x4_t a7 = vpaddlq_u16(sum[7]);
+  const uint32x4_t b0 = vaddq_u32(a0, a1);
+  const uint32x4_t b1 = vaddq_u32(a2, a3);
+  const uint32x4_t b2 = vaddq_u32(a4, a5);
+  const uint32x4_t b3 = vaddq_u32(a6, a7);
+  const uint32x2_t c0 = vadd_u32(vget_low_u32(b0), vget_high_u32(b0));
+  const uint32x2_t c1 = vadd_u32(vget_low_u32(b1), vget_high_u32(b1));
+  const uint32x2_t c2 = vadd_u32(vget_low_u32(b2), vget_high_u32(b2));
+  const uint32x2_t c3 = vadd_u32(vget_low_u32(b3), vget_high_u32(b3));
+  const uint32x2_t d0 = vpadd_u32(c0, c1);
+  const uint32x2_t d1 = vpadd_u32(c2, c3);
+  vst1q_u32(res, vcombine_u32(d0, d1));
+}
+
+static INLINE void sad8x_4d(const uint8_t *src_ptr, int src_stride,
+                            const uint8_t *const ref_array[4], int ref_stride,
+                            uint32_t *res, const int height) {
+  int i, j;
+  const uint8_t *ref_loop[4] = { ref_array[0], ref_array[1], ref_array[2],
+                                 ref_array[3] };
+  uint16x8_t sum[4] = { vdupq_n_u16(0), vdupq_n_u16(0), vdupq_n_u16(0),
+                        vdupq_n_u16(0) };
+
+  for (i = 0; i < height; ++i) {
+    const uint8x8_t s = vld1_u8(src_ptr);
+    src_ptr += src_stride;
     for (j = 0; j < 4; ++j) {
-      const uint8x8_t b_u8 = vld1_u8(b_loop[j]);
-      b_loop[j] += b_stride;
-      sum[j] = vabal_u8(sum[j], a_u8, b_u8);
+      const uint8x8_t b_u8 = vld1_u8(ref_loop[j]);
+      ref_loop[j] += ref_stride;
+      sum[j] = vabal_u8(sum[j], s, b_u8);
     }
   }
 
-  for (j = 0; j < 4; ++j) {
-    result[j] = vget_lane_u32(horizontal_add_uint16x8(sum[j]), 0);
-  }
+  sad_512_pel_final_neon(sum, res);
 }
 
-void vpx_sad8x4x4d_neon(const uint8_t *src, int src_stride,
-                        const uint8_t *const ref[4], int ref_stride,
+void vpx_sad8x4x4d_neon(const uint8_t *src_ptr, int src_stride,
+                        const uint8_t *const ref_array[4], int ref_stride,
                         uint32_t *res) {
-  sad8x_4d(src, src_stride, ref, ref_stride, res, 4);
+  sad8x_4d(src_ptr, src_stride, ref_array, ref_stride, res, 4);
 }
 
-void vpx_sad8x8x4d_neon(const uint8_t *src, int src_stride,
-                        const uint8_t *const ref[4], int ref_stride,
+void vpx_sad8x8x4d_neon(const uint8_t *src_ptr, int src_stride,
+                        const uint8_t *const ref_array[4], int ref_stride,
                         uint32_t *res) {
-  sad8x_4d(src, src_stride, ref, ref_stride, res, 8);
+  sad8x_4d(src_ptr, src_stride, ref_array, ref_stride, res, 8);
 }
 
-void vpx_sad8x16x4d_neon(const uint8_t *src, int src_stride,
-                         const uint8_t *const ref[4], int ref_stride,
+void vpx_sad8x16x4d_neon(const uint8_t *src_ptr, int src_stride,
+                         const uint8_t *const ref_array[4], int ref_stride,
                          uint32_t *res) {
-  sad8x_4d(src, src_stride, ref, ref_stride, res, 16);
+  sad8x_4d(src_ptr, src_stride, ref_array, ref_stride, res, 16);
 }
 
-static INLINE void sad16x_4d(const uint8_t *a, int a_stride,
-                             const uint8_t *const b[4], int b_stride,
-                             uint32_t *result, const int height) {
+////////////////////////////////////////////////////////////////////////////////
+
+static INLINE void sad16_neon(const uint8_t *ref_ptr, const uint8x16_t src_ptr,
+                              uint16x8_t *const sum) {
+  const uint8x16_t r = vld1q_u8(ref_ptr);
+  *sum = vabal_u8(*sum, vget_low_u8(src_ptr), vget_low_u8(r));
+  *sum = vabal_u8(*sum, vget_high_u8(src_ptr), vget_high_u8(r));
+}
+
+static INLINE void sad16x_4d(const uint8_t *src_ptr, int src_stride,
+                             const uint8_t *const ref_array[4], int ref_stride,
+                             uint32_t *res, const int height) {
   int i, j;
+  const uint8_t *ref_loop[4] = { ref_array[0], ref_array[1], ref_array[2],
+                                 ref_array[3] };
   uint16x8_t sum[4] = { vdupq_n_u16(0), vdupq_n_u16(0), vdupq_n_u16(0),
                         vdupq_n_u16(0) };
-  const uint8_t *b_loop[4] = { b[0], b[1], b[2], b[3] };
 
   for (i = 0; i < height; ++i) {
-    const uint8x16_t a_u8 = vld1q_u8(a);
-    a += a_stride;
+    const uint8x16_t s = vld1q_u8(src_ptr);
+    src_ptr += src_stride;
     for (j = 0; j < 4; ++j) {
-      const uint8x16_t b_u8 = vld1q_u8(b_loop[j]);
-      b_loop[j] += b_stride;
-      sum[j] = vabal_u8(sum[j], vget_low_u8(a_u8), vget_low_u8(b_u8));
-      sum[j] = vabal_u8(sum[j], vget_high_u8(a_u8), vget_high_u8(b_u8));
+      sad16_neon(ref_loop[j], s, &sum[j]);
+      ref_loop[j] += ref_stride;
     }
   }
 
-  for (j = 0; j < 4; ++j) {
-    result[j] = vget_lane_u32(horizontal_add_uint16x8(sum[j]), 0);
-  }
+  sad_512_pel_final_neon(sum, res);
 }
 
-void vpx_sad16x8x4d_neon(const uint8_t *src, int src_stride,
-                         const uint8_t *const ref[4], int ref_stride,
+void vpx_sad16x8x4d_neon(const uint8_t *src_ptr, int src_stride,
+                         const uint8_t *const ref_array[4], int ref_stride,
                          uint32_t *res) {
-  sad16x_4d(src, src_stride, ref, ref_stride, res, 8);
+  sad16x_4d(src_ptr, src_stride, ref_array, ref_stride, res, 8);
 }
 
-void vpx_sad16x16x4d_neon(const uint8_t *src, int src_stride,
-                          const uint8_t *const ref[4], int ref_stride,
+void vpx_sad16x16x4d_neon(const uint8_t *src_ptr, int src_stride,
+                          const uint8_t *const ref_array[4], int ref_stride,
                           uint32_t *res) {
-  sad16x_4d(src, src_stride, ref, ref_stride, res, 16);
+  sad16x_4d(src_ptr, src_stride, ref_array, ref_stride, res, 16);
 }
 
-void vpx_sad16x32x4d_neon(const uint8_t *src, int src_stride,
-                          const uint8_t *const ref[4], int ref_stride,
+void vpx_sad16x32x4d_neon(const uint8_t *src_ptr, int src_stride,
+                          const uint8_t *const ref_array[4], int ref_stride,
                           uint32_t *res) {
-  sad16x_4d(src, src_stride, ref, ref_stride, res, 32);
+  sad16x_4d(src_ptr, src_stride, ref_array, ref_stride, res, 32);
 }
 
-static INLINE void sad32x_4d(const uint8_t *a, int a_stride,
-                             const uint8_t *const b[4], int b_stride,
-                             uint32_t *result, const int height) {
-  int i, j;
+////////////////////////////////////////////////////////////////////////////////
+
+static INLINE void sad32x_4d(const uint8_t *src_ptr, int src_stride,
+                             const uint8_t *const ref_array[4], int ref_stride,
+                             const int height, uint16x8_t *const sum) {
+  int i;
+  const uint8_t *ref_loop[4] = { ref_array[0], ref_array[1], ref_array[2],
+                                 ref_array[3] };
+
+  sum[0] = sum[1] = sum[2] = sum[3] = vdupq_n_u16(0);
+
+  for (i = 0; i < height; ++i) {
+    uint8x16_t s;
+
+    s = vld1q_u8(src_ptr + 0 * 16);
+    sad16_neon(ref_loop[0] + 0 * 16, s, &sum[0]);
+    sad16_neon(ref_loop[1] + 0 * 16, s, &sum[1]);
+    sad16_neon(ref_loop[2] + 0 * 16, s, &sum[2]);
+    sad16_neon(ref_loop[3] + 0 * 16, s, &sum[3]);
+
+    s = vld1q_u8(src_ptr + 1 * 16);
+    sad16_neon(ref_loop[0] + 1 * 16, s, &sum[0]);
+    sad16_neon(ref_loop[1] + 1 * 16, s, &sum[1]);
+    sad16_neon(ref_loop[2] + 1 * 16, s, &sum[2]);
+    sad16_neon(ref_loop[3] + 1 * 16, s, &sum[3]);
+
+    src_ptr += src_stride;
+    ref_loop[0] += ref_stride;
+    ref_loop[1] += ref_stride;
+    ref_loop[2] += ref_stride;
+    ref_loop[3] += ref_stride;
+  }
+}
+
+void vpx_sad32x16x4d_neon(const uint8_t *src_ptr, int src_stride,
+                          const uint8_t *const ref_array[4], int ref_stride,
+                          uint32_t *res) {
+  uint16x8_t sum[4];
+  sad32x_4d(src_ptr, src_stride, ref_array, ref_stride, 16, sum);
+  sad_512_pel_final_neon(sum, res);
+}
+
+void vpx_sad32x32x4d_neon(const uint8_t *src_ptr, int src_stride,
+                          const uint8_t *const ref_array[4], int ref_stride,
+                          uint32_t *res) {
+  uint16x8_t sum[4];
+  sad32x_4d(src_ptr, src_stride, ref_array, ref_stride, 32, sum);
+  sad_1024_pel_final_neon(sum, res);
+}
+
+void vpx_sad32x64x4d_neon(const uint8_t *src_ptr, int src_stride,
+                          const uint8_t *const ref_array[4], int ref_stride,
+                          uint32_t *res) {
+  uint16x8_t sum[4];
+  sad32x_4d(src_ptr, src_stride, ref_array, ref_stride, 64, sum);
+  sad_2048_pel_final_neon(sum, res);
+}
+
+////////////////////////////////////////////////////////////////////////////////
+
+void vpx_sad64x32x4d_neon(const uint8_t *src_ptr, int src_stride,
+                          const uint8_t *const ref_array[4], int ref_stride,
+                          uint32_t *res) {
+  int i;
+  const uint8_t *ref_loop[4] = { ref_array[0], ref_array[1], ref_array[2],
+                                 ref_array[3] };
   uint16x8_t sum[4] = { vdupq_n_u16(0), vdupq_n_u16(0), vdupq_n_u16(0),
                         vdupq_n_u16(0) };
-  const uint8_t *b_loop[4] = { b[0], b[1], b[2], b[3] };
 
-  for (i = 0; i < height; ++i) {
-    const uint8x16_t a_0 = vld1q_u8(a);
-    const uint8x16_t a_1 = vld1q_u8(a + 16);
-    a += a_stride;
-    for (j = 0; j < 4; ++j) {
-      const uint8x16_t b_0 = vld1q_u8(b_loop[j]);
-      const uint8x16_t b_1 = vld1q_u8(b_loop[j] + 16);
-      b_loop[j] += b_stride;
-      sum[j] = vabal_u8(sum[j], vget_low_u8(a_0), vget_low_u8(b_0));
-      sum[j] = vabal_u8(sum[j], vget_high_u8(a_0), vget_high_u8(b_0));
-      sum[j] = vabal_u8(sum[j], vget_low_u8(a_1), vget_low_u8(b_1));
-      sum[j] = vabal_u8(sum[j], vget_high_u8(a_1), vget_high_u8(b_1));
-    }
+  for (i = 0; i < 32; ++i) {
+    uint8x16_t s;
+
+    s = vld1q_u8(src_ptr + 0 * 16);
+    sad16_neon(ref_loop[0] + 0 * 16, s, &sum[0]);
+    sad16_neon(ref_loop[1] + 0 * 16, s, &sum[1]);
+    sad16_neon(ref_loop[2] + 0 * 16, s, &sum[2]);
+    sad16_neon(ref_loop[3] + 0 * 16, s, &sum[3]);
+
+    s = vld1q_u8(src_ptr + 1 * 16);
+    sad16_neon(ref_loop[0] + 1 * 16, s, &sum[0]);
+    sad16_neon(ref_loop[1] + 1 * 16, s, &sum[1]);
+    sad16_neon(ref_loop[2] + 1 * 16, s, &sum[2]);
+    sad16_neon(ref_loop[3] + 1 * 16, s, &sum[3]);
+
+    s = vld1q_u8(src_ptr + 2 * 16);
+    sad16_neon(ref_loop[0] + 2 * 16, s, &sum[0]);
+    sad16_neon(ref_loop[1] + 2 * 16, s, &sum[1]);
+    sad16_neon(ref_loop[2] + 2 * 16, s, &sum[2]);
+    sad16_neon(ref_loop[3] + 2 * 16, s, &sum[3]);
+
+    s = vld1q_u8(src_ptr + 3 * 16);
+    sad16_neon(ref_loop[0] + 3 * 16, s, &sum[0]);
+    sad16_neon(ref_loop[1] + 3 * 16, s, &sum[1]);
+    sad16_neon(ref_loop[2] + 3 * 16, s, &sum[2]);
+    sad16_neon(ref_loop[3] + 3 * 16, s, &sum[3]);
+
+    src_ptr += src_stride;
+    ref_loop[0] += ref_stride;
+    ref_loop[1] += ref_stride;
+    ref_loop[2] += ref_stride;
+    ref_loop[3] += ref_stride;
   }
 
-  for (j = 0; j < 4; ++j) {
-    result[j] = vget_lane_u32(horizontal_add_uint16x8(sum[j]), 0);
-  }
+  sad_2048_pel_final_neon(sum, res);
 }
 
-void vpx_sad32x16x4d_neon(const uint8_t *src, int src_stride,
-                          const uint8_t *const ref[4], int ref_stride,
+void vpx_sad64x64x4d_neon(const uint8_t *src_ptr, int src_stride,
+                          const uint8_t *const ref_array[4], int ref_stride,
                           uint32_t *res) {
-  sad32x_4d(src, src_stride, ref, ref_stride, res, 16);
-}
-
-void vpx_sad32x32x4d_neon(const uint8_t *src, int src_stride,
-                          const uint8_t *const ref[4], int ref_stride,
-                          uint32_t *res) {
-  sad32x_4d(src, src_stride, ref, ref_stride, res, 32);
-}
-
-void vpx_sad32x64x4d_neon(const uint8_t *src, int src_stride,
-                          const uint8_t *const ref[4], int ref_stride,
-                          uint32_t *res) {
-  sad32x_4d(src, src_stride, ref, ref_stride, res, 64);
-}
-
-static INLINE void sum64x(const uint8x16_t a_0, const uint8x16_t a_1,
-                          const uint8x16_t b_0, const uint8x16_t b_1,
-                          uint16x8_t *sum) {
-  *sum = vabal_u8(*sum, vget_low_u8(a_0), vget_low_u8(b_0));
-  *sum = vabal_u8(*sum, vget_high_u8(a_0), vget_high_u8(b_0));
-  *sum = vabal_u8(*sum, vget_low_u8(a_1), vget_low_u8(b_1));
-  *sum = vabal_u8(*sum, vget_high_u8(a_1), vget_high_u8(b_1));
-}
-
-static INLINE void sad64x_4d(const uint8_t *a, int a_stride,
-                             const uint8_t *const b[4], int b_stride,
-                             uint32_t *result, const int height) {
   int i;
-  uint16x8_t sum_0 = vdupq_n_u16(0);
-  uint16x8_t sum_1 = vdupq_n_u16(0);
-  uint16x8_t sum_2 = vdupq_n_u16(0);
-  uint16x8_t sum_3 = vdupq_n_u16(0);
-  uint16x8_t sum_4 = vdupq_n_u16(0);
-  uint16x8_t sum_5 = vdupq_n_u16(0);
-  uint16x8_t sum_6 = vdupq_n_u16(0);
-  uint16x8_t sum_7 = vdupq_n_u16(0);
-  const uint8_t *b_loop[4] = { b[0], b[1], b[2], b[3] };
+  const uint8_t *ref_loop[4] = { ref_array[0], ref_array[1], ref_array[2],
+                                 ref_array[3] };
+  uint16x8_t sum[8] = { vdupq_n_u16(0), vdupq_n_u16(0), vdupq_n_u16(0),
+                        vdupq_n_u16(0), vdupq_n_u16(0), vdupq_n_u16(0),
+                        vdupq_n_u16(0), vdupq_n_u16(0) };
 
-  for (i = 0; i < height; ++i) {
-    const uint8x16_t a_0 = vld1q_u8(a);
-    const uint8x16_t a_1 = vld1q_u8(a + 16);
-    const uint8x16_t a_2 = vld1q_u8(a + 32);
-    const uint8x16_t a_3 = vld1q_u8(a + 48);
-    a += a_stride;
-    sum64x(a_0, a_1, vld1q_u8(b_loop[0]), vld1q_u8(b_loop[0] + 16), &sum_0);
-    sum64x(a_2, a_3, vld1q_u8(b_loop[0] + 32), vld1q_u8(b_loop[0] + 48),
-           &sum_1);
-    b_loop[0] += b_stride;
-    sum64x(a_0, a_1, vld1q_u8(b_loop[1]), vld1q_u8(b_loop[1] + 16), &sum_2);
-    sum64x(a_2, a_3, vld1q_u8(b_loop[1] + 32), vld1q_u8(b_loop[1] + 48),
-           &sum_3);
-    b_loop[1] += b_stride;
-    sum64x(a_0, a_1, vld1q_u8(b_loop[2]), vld1q_u8(b_loop[2] + 16), &sum_4);
-    sum64x(a_2, a_3, vld1q_u8(b_loop[2] + 32), vld1q_u8(b_loop[2] + 48),
-           &sum_5);
-    b_loop[2] += b_stride;
-    sum64x(a_0, a_1, vld1q_u8(b_loop[3]), vld1q_u8(b_loop[3] + 16), &sum_6);
-    sum64x(a_2, a_3, vld1q_u8(b_loop[3] + 32), vld1q_u8(b_loop[3] + 48),
-           &sum_7);
-    b_loop[3] += b_stride;
+  for (i = 0; i < 64; ++i) {
+    uint8x16_t s;
+
+    s = vld1q_u8(src_ptr + 0 * 16);
+    sad16_neon(ref_loop[0] + 0 * 16, s, &sum[0]);
+    sad16_neon(ref_loop[1] + 0 * 16, s, &sum[2]);
+    sad16_neon(ref_loop[2] + 0 * 16, s, &sum[4]);
+    sad16_neon(ref_loop[3] + 0 * 16, s, &sum[6]);
+
+    s = vld1q_u8(src_ptr + 1 * 16);
+    sad16_neon(ref_loop[0] + 1 * 16, s, &sum[0]);
+    sad16_neon(ref_loop[1] + 1 * 16, s, &sum[2]);
+    sad16_neon(ref_loop[2] + 1 * 16, s, &sum[4]);
+    sad16_neon(ref_loop[3] + 1 * 16, s, &sum[6]);
+
+    s = vld1q_u8(src_ptr + 2 * 16);
+    sad16_neon(ref_loop[0] + 2 * 16, s, &sum[1]);
+    sad16_neon(ref_loop[1] + 2 * 16, s, &sum[3]);
+    sad16_neon(ref_loop[2] + 2 * 16, s, &sum[5]);
+    sad16_neon(ref_loop[3] + 2 * 16, s, &sum[7]);
+
+    s = vld1q_u8(src_ptr + 3 * 16);
+    sad16_neon(ref_loop[0] + 3 * 16, s, &sum[1]);
+    sad16_neon(ref_loop[1] + 3 * 16, s, &sum[3]);
+    sad16_neon(ref_loop[2] + 3 * 16, s, &sum[5]);
+    sad16_neon(ref_loop[3] + 3 * 16, s, &sum[7]);
+
+    src_ptr += src_stride;
+    ref_loop[0] += ref_stride;
+    ref_loop[1] += ref_stride;
+    ref_loop[2] += ref_stride;
+    ref_loop[3] += ref_stride;
   }
 
-  result[0] = vget_lane_u32(horizontal_add_long_uint16x8(sum_0, sum_1), 0);
-  result[1] = vget_lane_u32(horizontal_add_long_uint16x8(sum_2, sum_3), 0);
-  result[2] = vget_lane_u32(horizontal_add_long_uint16x8(sum_4, sum_5), 0);
-  result[3] = vget_lane_u32(horizontal_add_long_uint16x8(sum_6, sum_7), 0);
-}
-
-void vpx_sad64x32x4d_neon(const uint8_t *src, int src_stride,
-                          const uint8_t *const ref[4], int ref_stride,
-                          uint32_t *res) {
-  sad64x_4d(src, src_stride, ref, ref_stride, res, 32);
-}
-
-void vpx_sad64x64x4d_neon(const uint8_t *src, int src_stride,
-                          const uint8_t *const ref[4], int ref_stride,
-                          uint32_t *res) {
-  sad64x_4d(src, src_stride, ref, ref_stride, res, 64);
+  sad_4096_pel_final_neon(sum, res);
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/sad_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/sad_neon.c
index 9518a16..c4a49e3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/sad_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/sad_neon.c
@@ -11,6 +11,7 @@
 #include <arm_neon.h>
 
 #include "./vpx_config.h"
+#include "./vpx_dsp_rtcd.h"
 
 #include "vpx/vpx_integer.h"
 #include "vpx_dsp/arm/mem_neon.h"
@@ -73,128 +74,132 @@
   return vget_lane_u32(horizontal_add_uint16x8(abs), 0);
 }
 
-static INLINE uint16x8_t sad8x(const uint8_t *a, int a_stride, const uint8_t *b,
-                               int b_stride, const int height) {
+static INLINE uint16x8_t sad8x(const uint8_t *src_ptr, int src_stride,
+                               const uint8_t *ref_ptr, int ref_stride,
+                               const int height) {
   int i;
   uint16x8_t abs = vdupq_n_u16(0);
 
   for (i = 0; i < height; ++i) {
-    const uint8x8_t a_u8 = vld1_u8(a);
-    const uint8x8_t b_u8 = vld1_u8(b);
-    a += a_stride;
-    b += b_stride;
+    const uint8x8_t a_u8 = vld1_u8(src_ptr);
+    const uint8x8_t b_u8 = vld1_u8(ref_ptr);
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
     abs = vabal_u8(abs, a_u8, b_u8);
   }
   return abs;
 }
 
-static INLINE uint16x8_t sad8x_avg(const uint8_t *a, int a_stride,
-                                   const uint8_t *b, int b_stride,
-                                   const uint8_t *c, const int height) {
+static INLINE uint16x8_t sad8x_avg(const uint8_t *src_ptr, int src_stride,
+                                   const uint8_t *ref_ptr, int ref_stride,
+                                   const uint8_t *second_pred,
+                                   const int height) {
   int i;
   uint16x8_t abs = vdupq_n_u16(0);
 
   for (i = 0; i < height; ++i) {
-    const uint8x8_t a_u8 = vld1_u8(a);
-    const uint8x8_t b_u8 = vld1_u8(b);
-    const uint8x8_t c_u8 = vld1_u8(c);
+    const uint8x8_t a_u8 = vld1_u8(src_ptr);
+    const uint8x8_t b_u8 = vld1_u8(ref_ptr);
+    const uint8x8_t c_u8 = vld1_u8(second_pred);
     const uint8x8_t avg = vrhadd_u8(b_u8, c_u8);
-    a += a_stride;
-    b += b_stride;
-    c += 8;
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
+    second_pred += 8;
     abs = vabal_u8(abs, a_u8, avg);
   }
   return abs;
 }
 
-#define sad8xN(n)                                                      \
-  uint32_t vpx_sad8x##n##_neon(const uint8_t *src, int src_stride,     \
-                               const uint8_t *ref, int ref_stride) {   \
-    const uint16x8_t abs = sad8x(src, src_stride, ref, ref_stride, n); \
-    return vget_lane_u32(horizontal_add_uint16x8(abs), 0);             \
-  }                                                                    \
-                                                                       \
-  uint32_t vpx_sad8x##n##_avg_neon(const uint8_t *src, int src_stride, \
-                                   const uint8_t *ref, int ref_stride, \
-                                   const uint8_t *second_pred) {       \
-    const uint16x8_t abs =                                             \
-        sad8x_avg(src, src_stride, ref, ref_stride, second_pred, n);   \
-    return vget_lane_u32(horizontal_add_uint16x8(abs), 0);             \
+#define sad8xN(n)                                                              \
+  uint32_t vpx_sad8x##n##_neon(const uint8_t *src_ptr, int src_stride,         \
+                               const uint8_t *ref_ptr, int ref_stride) {       \
+    const uint16x8_t abs = sad8x(src_ptr, src_stride, ref_ptr, ref_stride, n); \
+    return vget_lane_u32(horizontal_add_uint16x8(abs), 0);                     \
+  }                                                                            \
+                                                                               \
+  uint32_t vpx_sad8x##n##_avg_neon(const uint8_t *src_ptr, int src_stride,     \
+                                   const uint8_t *ref_ptr, int ref_stride,     \
+                                   const uint8_t *second_pred) {               \
+    const uint16x8_t abs =                                                     \
+        sad8x_avg(src_ptr, src_stride, ref_ptr, ref_stride, second_pred, n);   \
+    return vget_lane_u32(horizontal_add_uint16x8(abs), 0);                     \
   }
 
 sad8xN(4);
 sad8xN(8);
 sad8xN(16);
 
-static INLINE uint16x8_t sad16x(const uint8_t *a, int a_stride,
-                                const uint8_t *b, int b_stride,
+static INLINE uint16x8_t sad16x(const uint8_t *src_ptr, int src_stride,
+                                const uint8_t *ref_ptr, int ref_stride,
                                 const int height) {
   int i;
   uint16x8_t abs = vdupq_n_u16(0);
 
   for (i = 0; i < height; ++i) {
-    const uint8x16_t a_u8 = vld1q_u8(a);
-    const uint8x16_t b_u8 = vld1q_u8(b);
-    a += a_stride;
-    b += b_stride;
+    const uint8x16_t a_u8 = vld1q_u8(src_ptr);
+    const uint8x16_t b_u8 = vld1q_u8(ref_ptr);
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
     abs = vabal_u8(abs, vget_low_u8(a_u8), vget_low_u8(b_u8));
     abs = vabal_u8(abs, vget_high_u8(a_u8), vget_high_u8(b_u8));
   }
   return abs;
 }
 
-static INLINE uint16x8_t sad16x_avg(const uint8_t *a, int a_stride,
-                                    const uint8_t *b, int b_stride,
-                                    const uint8_t *c, const int height) {
+static INLINE uint16x8_t sad16x_avg(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
+                                    const uint8_t *second_pred,
+                                    const int height) {
   int i;
   uint16x8_t abs = vdupq_n_u16(0);
 
   for (i = 0; i < height; ++i) {
-    const uint8x16_t a_u8 = vld1q_u8(a);
-    const uint8x16_t b_u8 = vld1q_u8(b);
-    const uint8x16_t c_u8 = vld1q_u8(c);
+    const uint8x16_t a_u8 = vld1q_u8(src_ptr);
+    const uint8x16_t b_u8 = vld1q_u8(ref_ptr);
+    const uint8x16_t c_u8 = vld1q_u8(second_pred);
     const uint8x16_t avg = vrhaddq_u8(b_u8, c_u8);
-    a += a_stride;
-    b += b_stride;
-    c += 16;
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
+    second_pred += 16;
     abs = vabal_u8(abs, vget_low_u8(a_u8), vget_low_u8(avg));
     abs = vabal_u8(abs, vget_high_u8(a_u8), vget_high_u8(avg));
   }
   return abs;
 }
 
-#define sad16xN(n)                                                      \
-  uint32_t vpx_sad16x##n##_neon(const uint8_t *src, int src_stride,     \
-                                const uint8_t *ref, int ref_stride) {   \
-    const uint16x8_t abs = sad16x(src, src_stride, ref, ref_stride, n); \
-    return vget_lane_u32(horizontal_add_uint16x8(abs), 0);              \
-  }                                                                     \
-                                                                        \
-  uint32_t vpx_sad16x##n##_avg_neon(const uint8_t *src, int src_stride, \
-                                    const uint8_t *ref, int ref_stride, \
-                                    const uint8_t *second_pred) {       \
-    const uint16x8_t abs =                                              \
-        sad16x_avg(src, src_stride, ref, ref_stride, second_pred, n);   \
-    return vget_lane_u32(horizontal_add_uint16x8(abs), 0);              \
+#define sad16xN(n)                                                            \
+  uint32_t vpx_sad16x##n##_neon(const uint8_t *src_ptr, int src_stride,       \
+                                const uint8_t *ref_ptr, int ref_stride) {     \
+    const uint16x8_t abs =                                                    \
+        sad16x(src_ptr, src_stride, ref_ptr, ref_stride, n);                  \
+    return vget_lane_u32(horizontal_add_uint16x8(abs), 0);                    \
+  }                                                                           \
+                                                                              \
+  uint32_t vpx_sad16x##n##_avg_neon(const uint8_t *src_ptr, int src_stride,   \
+                                    const uint8_t *ref_ptr, int ref_stride,   \
+                                    const uint8_t *second_pred) {             \
+    const uint16x8_t abs =                                                    \
+        sad16x_avg(src_ptr, src_stride, ref_ptr, ref_stride, second_pred, n); \
+    return vget_lane_u32(horizontal_add_uint16x8(abs), 0);                    \
   }
 
 sad16xN(8);
 sad16xN(16);
 sad16xN(32);
 
-static INLINE uint16x8_t sad32x(const uint8_t *a, int a_stride,
-                                const uint8_t *b, int b_stride,
+static INLINE uint16x8_t sad32x(const uint8_t *src_ptr, int src_stride,
+                                const uint8_t *ref_ptr, int ref_stride,
                                 const int height) {
   int i;
   uint16x8_t abs = vdupq_n_u16(0);
 
   for (i = 0; i < height; ++i) {
-    const uint8x16_t a_lo = vld1q_u8(a);
-    const uint8x16_t a_hi = vld1q_u8(a + 16);
-    const uint8x16_t b_lo = vld1q_u8(b);
-    const uint8x16_t b_hi = vld1q_u8(b + 16);
-    a += a_stride;
-    b += b_stride;
+    const uint8x16_t a_lo = vld1q_u8(src_ptr);
+    const uint8x16_t a_hi = vld1q_u8(src_ptr + 16);
+    const uint8x16_t b_lo = vld1q_u8(ref_ptr);
+    const uint8x16_t b_hi = vld1q_u8(ref_ptr + 16);
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
     abs = vabal_u8(abs, vget_low_u8(a_lo), vget_low_u8(b_lo));
     abs = vabal_u8(abs, vget_high_u8(a_lo), vget_high_u8(b_lo));
     abs = vabal_u8(abs, vget_low_u8(a_hi), vget_low_u8(b_hi));
@@ -203,24 +208,25 @@
   return abs;
 }
 
-static INLINE uint16x8_t sad32x_avg(const uint8_t *a, int a_stride,
-                                    const uint8_t *b, int b_stride,
-                                    const uint8_t *c, const int height) {
+static INLINE uint16x8_t sad32x_avg(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
+                                    const uint8_t *second_pred,
+                                    const int height) {
   int i;
   uint16x8_t abs = vdupq_n_u16(0);
 
   for (i = 0; i < height; ++i) {
-    const uint8x16_t a_lo = vld1q_u8(a);
-    const uint8x16_t a_hi = vld1q_u8(a + 16);
-    const uint8x16_t b_lo = vld1q_u8(b);
-    const uint8x16_t b_hi = vld1q_u8(b + 16);
-    const uint8x16_t c_lo = vld1q_u8(c);
-    const uint8x16_t c_hi = vld1q_u8(c + 16);
+    const uint8x16_t a_lo = vld1q_u8(src_ptr);
+    const uint8x16_t a_hi = vld1q_u8(src_ptr + 16);
+    const uint8x16_t b_lo = vld1q_u8(ref_ptr);
+    const uint8x16_t b_hi = vld1q_u8(ref_ptr + 16);
+    const uint8x16_t c_lo = vld1q_u8(second_pred);
+    const uint8x16_t c_hi = vld1q_u8(second_pred + 16);
     const uint8x16_t avg_lo = vrhaddq_u8(b_lo, c_lo);
     const uint8x16_t avg_hi = vrhaddq_u8(b_hi, c_hi);
-    a += a_stride;
-    b += b_stride;
-    c += 32;
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
+    second_pred += 32;
     abs = vabal_u8(abs, vget_low_u8(a_lo), vget_low_u8(avg_lo));
     abs = vabal_u8(abs, vget_high_u8(a_lo), vget_high_u8(avg_lo));
     abs = vabal_u8(abs, vget_low_u8(a_hi), vget_low_u8(avg_hi));
@@ -229,43 +235,44 @@
   return abs;
 }
 
-#define sad32xN(n)                                                      \
-  uint32_t vpx_sad32x##n##_neon(const uint8_t *src, int src_stride,     \
-                                const uint8_t *ref, int ref_stride) {   \
-    const uint16x8_t abs = sad32x(src, src_stride, ref, ref_stride, n); \
-    return vget_lane_u32(horizontal_add_uint16x8(abs), 0);              \
-  }                                                                     \
-                                                                        \
-  uint32_t vpx_sad32x##n##_avg_neon(const uint8_t *src, int src_stride, \
-                                    const uint8_t *ref, int ref_stride, \
-                                    const uint8_t *second_pred) {       \
-    const uint16x8_t abs =                                              \
-        sad32x_avg(src, src_stride, ref, ref_stride, second_pred, n);   \
-    return vget_lane_u32(horizontal_add_uint16x8(abs), 0);              \
+#define sad32xN(n)                                                            \
+  uint32_t vpx_sad32x##n##_neon(const uint8_t *src_ptr, int src_stride,       \
+                                const uint8_t *ref_ptr, int ref_stride) {     \
+    const uint16x8_t abs =                                                    \
+        sad32x(src_ptr, src_stride, ref_ptr, ref_stride, n);                  \
+    return vget_lane_u32(horizontal_add_uint16x8(abs), 0);                    \
+  }                                                                           \
+                                                                              \
+  uint32_t vpx_sad32x##n##_avg_neon(const uint8_t *src_ptr, int src_stride,   \
+                                    const uint8_t *ref_ptr, int ref_stride,   \
+                                    const uint8_t *second_pred) {             \
+    const uint16x8_t abs =                                                    \
+        sad32x_avg(src_ptr, src_stride, ref_ptr, ref_stride, second_pred, n); \
+    return vget_lane_u32(horizontal_add_uint16x8(abs), 0);                    \
   }
 
 sad32xN(16);
 sad32xN(32);
 sad32xN(64);
 
-static INLINE uint32x4_t sad64x(const uint8_t *a, int a_stride,
-                                const uint8_t *b, int b_stride,
+static INLINE uint32x4_t sad64x(const uint8_t *src_ptr, int src_stride,
+                                const uint8_t *ref_ptr, int ref_stride,
                                 const int height) {
   int i;
   uint16x8_t abs_0 = vdupq_n_u16(0);
   uint16x8_t abs_1 = vdupq_n_u16(0);
 
   for (i = 0; i < height; ++i) {
-    const uint8x16_t a_0 = vld1q_u8(a);
-    const uint8x16_t a_1 = vld1q_u8(a + 16);
-    const uint8x16_t a_2 = vld1q_u8(a + 32);
-    const uint8x16_t a_3 = vld1q_u8(a + 48);
-    const uint8x16_t b_0 = vld1q_u8(b);
-    const uint8x16_t b_1 = vld1q_u8(b + 16);
-    const uint8x16_t b_2 = vld1q_u8(b + 32);
-    const uint8x16_t b_3 = vld1q_u8(b + 48);
-    a += a_stride;
-    b += b_stride;
+    const uint8x16_t a_0 = vld1q_u8(src_ptr);
+    const uint8x16_t a_1 = vld1q_u8(src_ptr + 16);
+    const uint8x16_t a_2 = vld1q_u8(src_ptr + 32);
+    const uint8x16_t a_3 = vld1q_u8(src_ptr + 48);
+    const uint8x16_t b_0 = vld1q_u8(ref_ptr);
+    const uint8x16_t b_1 = vld1q_u8(ref_ptr + 16);
+    const uint8x16_t b_2 = vld1q_u8(ref_ptr + 32);
+    const uint8x16_t b_3 = vld1q_u8(ref_ptr + 48);
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
     abs_0 = vabal_u8(abs_0, vget_low_u8(a_0), vget_low_u8(b_0));
     abs_0 = vabal_u8(abs_0, vget_high_u8(a_0), vget_high_u8(b_0));
     abs_0 = vabal_u8(abs_0, vget_low_u8(a_1), vget_low_u8(b_1));
@@ -282,33 +289,34 @@
   }
 }
 
-static INLINE uint32x4_t sad64x_avg(const uint8_t *a, int a_stride,
-                                    const uint8_t *b, int b_stride,
-                                    const uint8_t *c, const int height) {
+static INLINE uint32x4_t sad64x_avg(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
+                                    const uint8_t *second_pred,
+                                    const int height) {
   int i;
   uint16x8_t abs_0 = vdupq_n_u16(0);
   uint16x8_t abs_1 = vdupq_n_u16(0);
 
   for (i = 0; i < height; ++i) {
-    const uint8x16_t a_0 = vld1q_u8(a);
-    const uint8x16_t a_1 = vld1q_u8(a + 16);
-    const uint8x16_t a_2 = vld1q_u8(a + 32);
-    const uint8x16_t a_3 = vld1q_u8(a + 48);
-    const uint8x16_t b_0 = vld1q_u8(b);
-    const uint8x16_t b_1 = vld1q_u8(b + 16);
-    const uint8x16_t b_2 = vld1q_u8(b + 32);
-    const uint8x16_t b_3 = vld1q_u8(b + 48);
-    const uint8x16_t c_0 = vld1q_u8(c);
-    const uint8x16_t c_1 = vld1q_u8(c + 16);
-    const uint8x16_t c_2 = vld1q_u8(c + 32);
-    const uint8x16_t c_3 = vld1q_u8(c + 48);
+    const uint8x16_t a_0 = vld1q_u8(src_ptr);
+    const uint8x16_t a_1 = vld1q_u8(src_ptr + 16);
+    const uint8x16_t a_2 = vld1q_u8(src_ptr + 32);
+    const uint8x16_t a_3 = vld1q_u8(src_ptr + 48);
+    const uint8x16_t b_0 = vld1q_u8(ref_ptr);
+    const uint8x16_t b_1 = vld1q_u8(ref_ptr + 16);
+    const uint8x16_t b_2 = vld1q_u8(ref_ptr + 32);
+    const uint8x16_t b_3 = vld1q_u8(ref_ptr + 48);
+    const uint8x16_t c_0 = vld1q_u8(second_pred);
+    const uint8x16_t c_1 = vld1q_u8(second_pred + 16);
+    const uint8x16_t c_2 = vld1q_u8(second_pred + 32);
+    const uint8x16_t c_3 = vld1q_u8(second_pred + 48);
     const uint8x16_t avg_0 = vrhaddq_u8(b_0, c_0);
     const uint8x16_t avg_1 = vrhaddq_u8(b_1, c_1);
     const uint8x16_t avg_2 = vrhaddq_u8(b_2, c_2);
     const uint8x16_t avg_3 = vrhaddq_u8(b_3, c_3);
-    a += a_stride;
-    b += b_stride;
-    c += 64;
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
+    second_pred += 64;
     abs_0 = vabal_u8(abs_0, vget_low_u8(a_0), vget_low_u8(avg_0));
     abs_0 = vabal_u8(abs_0, vget_high_u8(a_0), vget_high_u8(avg_0));
     abs_0 = vabal_u8(abs_0, vget_low_u8(a_1), vget_low_u8(avg_1));
@@ -325,19 +333,20 @@
   }
 }
 
-#define sad64xN(n)                                                      \
-  uint32_t vpx_sad64x##n##_neon(const uint8_t *src, int src_stride,     \
-                                const uint8_t *ref, int ref_stride) {   \
-    const uint32x4_t abs = sad64x(src, src_stride, ref, ref_stride, n); \
-    return vget_lane_u32(horizontal_add_uint32x4(abs), 0);              \
-  }                                                                     \
-                                                                        \
-  uint32_t vpx_sad64x##n##_avg_neon(const uint8_t *src, int src_stride, \
-                                    const uint8_t *ref, int ref_stride, \
-                                    const uint8_t *second_pred) {       \
-    const uint32x4_t abs =                                              \
-        sad64x_avg(src, src_stride, ref, ref_stride, second_pred, n);   \
-    return vget_lane_u32(horizontal_add_uint32x4(abs), 0);              \
+#define sad64xN(n)                                                            \
+  uint32_t vpx_sad64x##n##_neon(const uint8_t *src_ptr, int src_stride,       \
+                                const uint8_t *ref_ptr, int ref_stride) {     \
+    const uint32x4_t abs =                                                    \
+        sad64x(src_ptr, src_stride, ref_ptr, ref_stride, n);                  \
+    return vget_lane_u32(horizontal_add_uint32x4(abs), 0);                    \
+  }                                                                           \
+                                                                              \
+  uint32_t vpx_sad64x##n##_avg_neon(const uint8_t *src_ptr, int src_stride,   \
+                                    const uint8_t *ref_ptr, int ref_stride,   \
+                                    const uint8_t *second_pred) {             \
+    const uint32x4_t abs =                                                    \
+        sad64x_avg(src_ptr, src_stride, ref_ptr, ref_stride, second_pred, n); \
+    return vget_lane_u32(horizontal_add_uint32x4(abs), 0);                    \
   }
 
 sad64xN(32);
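
The `vpx_sadNxM_neon` family above computes the sum of absolute differences between an N-wide, M-tall source block and a reference block, accumulating with `vabal_u8` and then horizontally adding the lanes. A scalar reference of the same computation (hypothetical helper name, not part of libvpx) is:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Scalar model of what the NEON SAD kernels compute: the sum of absolute
 * differences between a width x height source block and a reference block,
 * each walked with its own row stride. */
static uint32_t sad_scalar(const uint8_t *src, int src_stride,
                           const uint8_t *ref, int ref_stride,
                           int width, int height) {
  uint32_t sad = 0;
  int r, c;
  for (r = 0; r < height; ++r) {
    for (c = 0; c < width; ++c)
      sad += (uint32_t)abs(src[c] - ref[c]);
    src += src_stride;
    ref += ref_stride;
  }
  return sad;
}
```

The NEON versions differ only in how the per-pixel absolute differences are accumulated (eight or sixteen lanes at a time), not in the result.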
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/subpel_variance_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/subpel_variance_neon.c
index 4f58a78..37bfd1c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/subpel_variance_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/subpel_variance_neon.c
@@ -97,30 +97,30 @@
 
 // 4xM filter writes an extra row to fdata because it processes two rows at a
 // time.
-#define sub_pixel_varianceNxM(n, m)                                 \
-  uint32_t vpx_sub_pixel_variance##n##x##m##_neon(                  \
-      const uint8_t *a, int a_stride, int xoffset, int yoffset,     \
-      const uint8_t *b, int b_stride, uint32_t *sse) {              \
-    uint8_t temp0[n * (m + (n == 4 ? 2 : 1))];                      \
-    uint8_t temp1[n * m];                                           \
-                                                                    \
-    if (n == 4) {                                                   \
-      var_filter_block2d_bil_w4(a, temp0, a_stride, 1, (m + 2),     \
-                                bilinear_filters[xoffset]);         \
-      var_filter_block2d_bil_w4(temp0, temp1, n, n, m,              \
-                                bilinear_filters[yoffset]);         \
-    } else if (n == 8) {                                            \
-      var_filter_block2d_bil_w8(a, temp0, a_stride, 1, (m + 1),     \
-                                bilinear_filters[xoffset]);         \
-      var_filter_block2d_bil_w8(temp0, temp1, n, n, m,              \
-                                bilinear_filters[yoffset]);         \
-    } else {                                                        \
-      var_filter_block2d_bil_w16(a, temp0, a_stride, 1, (m + 1), n, \
-                                 bilinear_filters[xoffset]);        \
-      var_filter_block2d_bil_w16(temp0, temp1, n, n, m, n,          \
-                                 bilinear_filters[yoffset]);        \
-    }                                                               \
-    return vpx_variance##n##x##m(temp1, n, b, b_stride, sse);       \
+#define sub_pixel_varianceNxM(n, m)                                         \
+  uint32_t vpx_sub_pixel_variance##n##x##m##_neon(                          \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,   \
+      const uint8_t *ref_ptr, int ref_stride, uint32_t *sse) {              \
+    uint8_t temp0[n * (m + (n == 4 ? 2 : 1))];                              \
+    uint8_t temp1[n * m];                                                   \
+                                                                            \
+    if (n == 4) {                                                           \
+      var_filter_block2d_bil_w4(src_ptr, temp0, src_stride, 1, (m + 2),     \
+                                bilinear_filters[x_offset]);                \
+      var_filter_block2d_bil_w4(temp0, temp1, n, n, m,                      \
+                                bilinear_filters[y_offset]);                \
+    } else if (n == 8) {                                                    \
+      var_filter_block2d_bil_w8(src_ptr, temp0, src_stride, 1, (m + 1),     \
+                                bilinear_filters[x_offset]);                \
+      var_filter_block2d_bil_w8(temp0, temp1, n, n, m,                      \
+                                bilinear_filters[y_offset]);                \
+    } else {                                                                \
+      var_filter_block2d_bil_w16(src_ptr, temp0, src_stride, 1, (m + 1), n, \
+                                 bilinear_filters[x_offset]);               \
+      var_filter_block2d_bil_w16(temp0, temp1, n, n, m, n,                  \
+                                 bilinear_filters[y_offset]);               \
+    }                                                                       \
+    return vpx_variance##n##x##m(temp1, n, ref_ptr, ref_stride, sse);       \
   }
 
 sub_pixel_varianceNxM(4, 4);
@@ -139,34 +139,34 @@
 
 // 4xM filter writes an extra row to fdata because it processes two rows at a
 // time.
-#define sub_pixel_avg_varianceNxM(n, m)                             \
-  uint32_t vpx_sub_pixel_avg_variance##n##x##m##_neon(              \
-      const uint8_t *a, int a_stride, int xoffset, int yoffset,     \
-      const uint8_t *b, int b_stride, uint32_t *sse,                \
-      const uint8_t *second_pred) {                                 \
-    uint8_t temp0[n * (m + (n == 4 ? 2 : 1))];                      \
-    uint8_t temp1[n * m];                                           \
-                                                                    \
-    if (n == 4) {                                                   \
-      var_filter_block2d_bil_w4(a, temp0, a_stride, 1, (m + 2),     \
-                                bilinear_filters[xoffset]);         \
-      var_filter_block2d_bil_w4(temp0, temp1, n, n, m,              \
-                                bilinear_filters[yoffset]);         \
-    } else if (n == 8) {                                            \
-      var_filter_block2d_bil_w8(a, temp0, a_stride, 1, (m + 1),     \
-                                bilinear_filters[xoffset]);         \
-      var_filter_block2d_bil_w8(temp0, temp1, n, n, m,              \
-                                bilinear_filters[yoffset]);         \
-    } else {                                                        \
-      var_filter_block2d_bil_w16(a, temp0, a_stride, 1, (m + 1), n, \
-                                 bilinear_filters[xoffset]);        \
-      var_filter_block2d_bil_w16(temp0, temp1, n, n, m, n,          \
-                                 bilinear_filters[yoffset]);        \
-    }                                                               \
-                                                                    \
-    vpx_comp_avg_pred(temp0, second_pred, n, m, temp1, n);          \
-                                                                    \
-    return vpx_variance##n##x##m(temp0, n, b, b_stride, sse);       \
+#define sub_pixel_avg_varianceNxM(n, m)                                     \
+  uint32_t vpx_sub_pixel_avg_variance##n##x##m##_neon(                      \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,   \
+      const uint8_t *ref_ptr, int ref_stride, uint32_t *sse,                \
+      const uint8_t *second_pred) {                                         \
+    uint8_t temp0[n * (m + (n == 4 ? 2 : 1))];                              \
+    uint8_t temp1[n * m];                                                   \
+                                                                            \
+    if (n == 4) {                                                           \
+      var_filter_block2d_bil_w4(src_ptr, temp0, src_stride, 1, (m + 2),     \
+                                bilinear_filters[x_offset]);                \
+      var_filter_block2d_bil_w4(temp0, temp1, n, n, m,                      \
+                                bilinear_filters[y_offset]);                \
+    } else if (n == 8) {                                                    \
+      var_filter_block2d_bil_w8(src_ptr, temp0, src_stride, 1, (m + 1),     \
+                                bilinear_filters[x_offset]);                \
+      var_filter_block2d_bil_w8(temp0, temp1, n, n, m,                      \
+                                bilinear_filters[y_offset]);                \
+    } else {                                                                \
+      var_filter_block2d_bil_w16(src_ptr, temp0, src_stride, 1, (m + 1), n, \
+                                 bilinear_filters[x_offset]);               \
+      var_filter_block2d_bil_w16(temp0, temp1, n, n, m, n,                  \
+                                 bilinear_filters[y_offset]);               \
+    }                                                                       \
+                                                                            \
+    vpx_comp_avg_pred(temp0, second_pred, n, m, temp1, n);                  \
+                                                                            \
+    return vpx_variance##n##x##m(temp0, n, ref_ptr, ref_stride, sse);       \
   }
 
 sub_pixel_avg_varianceNxM(4, 4);
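
The `*_avg` variants above (and the `vpx_comp_avg_pred` call in the macro) average the prediction with `second_pred` using a rounding halving add, `vrhadd_u8` on NEON, before taking the variance. A scalar sketch of that per-byte operation:

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of NEON vrhadd_u8: (a + b + 1) >> 1, i.e. the average of two
 * bytes with halfway cases rounded up, computed without overflow. */
static uint8_t rounding_halving_add(uint8_t a, uint8_t b) {
  return (uint8_t)(((unsigned)a + (unsigned)b + 1) >> 1);
}
```

Rounding up (the `+ 1`) matters for bit-exactness: the C reference code in libvpx rounds the same way, so the NEON and C paths produce identical predictions.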
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/subtract_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/subtract_neon.c
index ce81fb6..612897e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/subtract_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/subtract_neon.c
@@ -9,71 +9,73 @@
  */
 
 #include <arm_neon.h>
+#include <assert.h>
 
 #include "./vpx_config.h"
+#include "./vpx_dsp_rtcd.h"
 #include "vpx/vpx_integer.h"
+#include "vpx_dsp/arm/mem_neon.h"
 
 void vpx_subtract_block_neon(int rows, int cols, int16_t *diff,
                              ptrdiff_t diff_stride, const uint8_t *src,
                              ptrdiff_t src_stride, const uint8_t *pred,
                              ptrdiff_t pred_stride) {
-  int r, c;
+  int r = rows, c;
 
   if (cols > 16) {
-    for (r = 0; r < rows; ++r) {
+    do {
       for (c = 0; c < cols; c += 32) {
-        const uint8x16_t v_src_00 = vld1q_u8(&src[c + 0]);
-        const uint8x16_t v_src_16 = vld1q_u8(&src[c + 16]);
-        const uint8x16_t v_pred_00 = vld1q_u8(&pred[c + 0]);
-        const uint8x16_t v_pred_16 = vld1q_u8(&pred[c + 16]);
-        const uint16x8_t v_diff_lo_00 =
-            vsubl_u8(vget_low_u8(v_src_00), vget_low_u8(v_pred_00));
-        const uint16x8_t v_diff_hi_00 =
-            vsubl_u8(vget_high_u8(v_src_00), vget_high_u8(v_pred_00));
-        const uint16x8_t v_diff_lo_16 =
-            vsubl_u8(vget_low_u8(v_src_16), vget_low_u8(v_pred_16));
-        const uint16x8_t v_diff_hi_16 =
-            vsubl_u8(vget_high_u8(v_src_16), vget_high_u8(v_pred_16));
-        vst1q_s16(&diff[c + 0], vreinterpretq_s16_u16(v_diff_lo_00));
-        vst1q_s16(&diff[c + 8], vreinterpretq_s16_u16(v_diff_hi_00));
-        vst1q_s16(&diff[c + 16], vreinterpretq_s16_u16(v_diff_lo_16));
-        vst1q_s16(&diff[c + 24], vreinterpretq_s16_u16(v_diff_hi_16));
+        const uint8x16_t s0 = vld1q_u8(&src[c + 0]);
+        const uint8x16_t s1 = vld1q_u8(&src[c + 16]);
+        const uint8x16_t p0 = vld1q_u8(&pred[c + 0]);
+        const uint8x16_t p1 = vld1q_u8(&pred[c + 16]);
+        const uint16x8_t d0 = vsubl_u8(vget_low_u8(s0), vget_low_u8(p0));
+        const uint16x8_t d1 = vsubl_u8(vget_high_u8(s0), vget_high_u8(p0));
+        const uint16x8_t d2 = vsubl_u8(vget_low_u8(s1), vget_low_u8(p1));
+        const uint16x8_t d3 = vsubl_u8(vget_high_u8(s1), vget_high_u8(p1));
+        vst1q_s16(&diff[c + 0], vreinterpretq_s16_u16(d0));
+        vst1q_s16(&diff[c + 8], vreinterpretq_s16_u16(d1));
+        vst1q_s16(&diff[c + 16], vreinterpretq_s16_u16(d2));
+        vst1q_s16(&diff[c + 24], vreinterpretq_s16_u16(d3));
       }
       diff += diff_stride;
       pred += pred_stride;
       src += src_stride;
-    }
+    } while (--r);
   } else if (cols > 8) {
-    for (r = 0; r < rows; ++r) {
-      const uint8x16_t v_src = vld1q_u8(&src[0]);
-      const uint8x16_t v_pred = vld1q_u8(&pred[0]);
-      const uint16x8_t v_diff_lo =
-          vsubl_u8(vget_low_u8(v_src), vget_low_u8(v_pred));
-      const uint16x8_t v_diff_hi =
-          vsubl_u8(vget_high_u8(v_src), vget_high_u8(v_pred));
-      vst1q_s16(&diff[0], vreinterpretq_s16_u16(v_diff_lo));
-      vst1q_s16(&diff[8], vreinterpretq_s16_u16(v_diff_hi));
+    do {
+      const uint8x16_t s = vld1q_u8(&src[0]);
+      const uint8x16_t p = vld1q_u8(&pred[0]);
+      const uint16x8_t d0 = vsubl_u8(vget_low_u8(s), vget_low_u8(p));
+      const uint16x8_t d1 = vsubl_u8(vget_high_u8(s), vget_high_u8(p));
+      vst1q_s16(&diff[0], vreinterpretq_s16_u16(d0));
+      vst1q_s16(&diff[8], vreinterpretq_s16_u16(d1));
       diff += diff_stride;
       pred += pred_stride;
       src += src_stride;
-    }
+    } while (--r);
   } else if (cols > 4) {
-    for (r = 0; r < rows; ++r) {
-      const uint8x8_t v_src = vld1_u8(&src[0]);
-      const uint8x8_t v_pred = vld1_u8(&pred[0]);
-      const uint16x8_t v_diff = vsubl_u8(v_src, v_pred);
+    do {
+      const uint8x8_t s = vld1_u8(&src[0]);
+      const uint8x8_t p = vld1_u8(&pred[0]);
+      const uint16x8_t v_diff = vsubl_u8(s, p);
       vst1q_s16(&diff[0], vreinterpretq_s16_u16(v_diff));
       diff += diff_stride;
       pred += pred_stride;
       src += src_stride;
-    }
+    } while (--r);
   } else {
-    for (r = 0; r < rows; ++r) {
-      for (c = 0; c < cols; ++c) diff[c] = src[c] - pred[c];
-
-      diff += diff_stride;
-      pred += pred_stride;
-      src += src_stride;
-    }
+    assert(cols == 4);
+    do {
+      const uint8x8_t s = load_unaligned_u8(src, (int)src_stride);
+      const uint8x8_t p = load_unaligned_u8(pred, (int)pred_stride);
+      const uint16x8_t d = vsubl_u8(s, p);
+      vst1_s16(diff + 0 * diff_stride, vreinterpret_s16_u16(vget_low_u16(d)));
+      vst1_s16(diff + 1 * diff_stride, vreinterpret_s16_u16(vget_high_u16(d)));
+      diff += 2 * diff_stride;
+      pred += 2 * pred_stride;
+      src += 2 * src_stride;
+      r -= 2;
+    } while (r);
   }
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/sum_neon.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/sum_neon.h
index d74fe0c..9e6833a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/sum_neon.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/sum_neon.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_ARM_SUM_NEON_H_
-#define VPX_DSP_ARM_SUM_NEON_H_
+#ifndef VPX_VPX_DSP_ARM_SUM_NEON_H_
+#define VPX_VPX_DSP_ARM_SUM_NEON_H_
 
 #include <arm_neon.h>
 
@@ -30,18 +30,9 @@
                   vreinterpret_u32_u64(vget_high_u64(c)));
 }
 
-static INLINE uint32x2_t horizontal_add_long_uint16x8(const uint16x8_t a,
-                                                      const uint16x8_t b) {
-  const uint32x4_t c = vpaddlq_u16(a);
-  const uint32x4_t d = vpadalq_u16(c, b);
-  const uint64x2_t e = vpaddlq_u32(d);
-  return vadd_u32(vreinterpret_u32_u64(vget_low_u64(e)),
-                  vreinterpret_u32_u64(vget_high_u64(e)));
-}
-
 static INLINE uint32x2_t horizontal_add_uint32x4(const uint32x4_t a) {
   const uint64x2_t b = vpaddlq_u32(a);
   return vadd_u32(vreinterpret_u32_u64(vget_low_u64(b)),
                   vreinterpret_u32_u64(vget_high_u64(b)));
 }
-#endif  // VPX_DSP_ARM_SUM_NEON_H_
+#endif  // VPX_VPX_DSP_ARM_SUM_NEON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/sum_squares_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/sum_squares_neon.c
new file mode 100644
index 0000000..cfefad9
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/sum_squares_neon.c
@@ -0,0 +1,85 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <arm_neon.h>
+
+#include <assert.h>
+#include "./vpx_dsp_rtcd.h"
+
+uint64_t vpx_sum_squares_2d_i16_neon(const int16_t *src, int stride, int size) {
+  uint64x1_t s2;
+
+  if (size == 4) {
+    int16x4_t s[4];
+    int32x4_t s0;
+    uint32x2_t s1;
+
+    s[0] = vld1_s16(src + 0 * stride);
+    s[1] = vld1_s16(src + 1 * stride);
+    s[2] = vld1_s16(src + 2 * stride);
+    s[3] = vld1_s16(src + 3 * stride);
+    s0 = vmull_s16(s[0], s[0]);
+    s0 = vmlal_s16(s0, s[1], s[1]);
+    s0 = vmlal_s16(s0, s[2], s[2]);
+    s0 = vmlal_s16(s0, s[3], s[3]);
+    s1 = vpadd_u32(vget_low_u32(vreinterpretq_u32_s32(s0)),
+                   vget_high_u32(vreinterpretq_u32_s32(s0)));
+    s2 = vpaddl_u32(s1);
+  } else {
+    int r = size;
+    uint64x2_t s1 = vdupq_n_u64(0);
+
+    do {
+      int c = size;
+      int32x4_t s0 = vdupq_n_s32(0);
+      const int16_t *src_t = src;
+
+      do {
+        int16x8_t s[8];
+
+        s[0] = vld1q_s16(src_t + 0 * stride);
+        s[1] = vld1q_s16(src_t + 1 * stride);
+        s[2] = vld1q_s16(src_t + 2 * stride);
+        s[3] = vld1q_s16(src_t + 3 * stride);
+        s[4] = vld1q_s16(src_t + 4 * stride);
+        s[5] = vld1q_s16(src_t + 5 * stride);
+        s[6] = vld1q_s16(src_t + 6 * stride);
+        s[7] = vld1q_s16(src_t + 7 * stride);
+        s0 = vmlal_s16(s0, vget_low_s16(s[0]), vget_low_s16(s[0]));
+        s0 = vmlal_s16(s0, vget_low_s16(s[1]), vget_low_s16(s[1]));
+        s0 = vmlal_s16(s0, vget_low_s16(s[2]), vget_low_s16(s[2]));
+        s0 = vmlal_s16(s0, vget_low_s16(s[3]), vget_low_s16(s[3]));
+        s0 = vmlal_s16(s0, vget_low_s16(s[4]), vget_low_s16(s[4]));
+        s0 = vmlal_s16(s0, vget_low_s16(s[5]), vget_low_s16(s[5]));
+        s0 = vmlal_s16(s0, vget_low_s16(s[6]), vget_low_s16(s[6]));
+        s0 = vmlal_s16(s0, vget_low_s16(s[7]), vget_low_s16(s[7]));
+        s0 = vmlal_s16(s0, vget_high_s16(s[0]), vget_high_s16(s[0]));
+        s0 = vmlal_s16(s0, vget_high_s16(s[1]), vget_high_s16(s[1]));
+        s0 = vmlal_s16(s0, vget_high_s16(s[2]), vget_high_s16(s[2]));
+        s0 = vmlal_s16(s0, vget_high_s16(s[3]), vget_high_s16(s[3]));
+        s0 = vmlal_s16(s0, vget_high_s16(s[4]), vget_high_s16(s[4]));
+        s0 = vmlal_s16(s0, vget_high_s16(s[5]), vget_high_s16(s[5]));
+        s0 = vmlal_s16(s0, vget_high_s16(s[6]), vget_high_s16(s[6]));
+        s0 = vmlal_s16(s0, vget_high_s16(s[7]), vget_high_s16(s[7]));
+        src_t += 8;
+        c -= 8;
+      } while (c);
+
+      s1 = vaddw_u32(s1, vget_low_u32(vreinterpretq_u32_s32(s0)));
+      s1 = vaddw_u32(s1, vget_high_u32(vreinterpretq_u32_s32(s0)));
+      src += 8 * stride;
+      r -= 8;
+    } while (r);
+
+    s2 = vadd_u64(vget_low_u64(s1), vget_high_u64(s1));
+  }
+
+  return vget_lane_u64(s2, 0);
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/transpose_neon.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/transpose_neon.h
index d85cbce..7523081 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/transpose_neon.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/transpose_neon.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_ARM_TRANSPOSE_NEON_H_
-#define VPX_DSP_ARM_TRANSPOSE_NEON_H_
+#ifndef VPX_VPX_DSP_ARM_TRANSPOSE_NEON_H_
+#define VPX_VPX_DSP_ARM_TRANSPOSE_NEON_H_
 
 #include <arm_neon.h>
 
@@ -138,8 +138,8 @@
       vtrnq_s32(vreinterpretq_s32_s16(*a0), vreinterpretq_s32_s16(*a1));
 
   // Swap 64 bit elements resulting in:
-  // c0.val[0]: 00 01 20 21  02 03 22 23
-  // c0.val[1]: 10 11 30 31  12 13 32 33
+  // c0: 00 01 20 21  02 03 22 23
+  // c1: 10 11 30 31  12 13 32 33
 
   const int32x4_t c0 =
       vcombine_s32(vget_low_s32(b0.val[0]), vget_low_s32(b0.val[1]));
@@ -169,8 +169,8 @@
       vtrnq_u32(vreinterpretq_u32_u16(*a0), vreinterpretq_u32_u16(*a1));
 
   // Swap 64 bit elements resulting in:
-  // c0.val[0]: 00 01 20 21  02 03 22 23
-  // c0.val[1]: 10 11 30 31  12 13 32 33
+  // c0: 00 01 20 21  02 03 22 23
+  // c1: 10 11 30 31  12 13 32 33
 
   const uint32x4_t c0 =
       vcombine_u32(vget_low_u32(b0.val[0]), vget_low_u32(b0.val[1]));
@@ -1313,4 +1313,4 @@
 
   transpose_s32_8x8(a0, a1, a2, a3, a4, a5, a6, a7);
 }
-#endif  // VPX_DSP_ARM_TRANSPOSE_NEON_H_
+#endif  // VPX_VPX_DSP_ARM_TRANSPOSE_NEON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/variance_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/variance_neon.c
index 61c2c16..77b1015 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/variance_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/variance_neon.c
@@ -27,8 +27,9 @@
 // this limit.
 
 // Process a block of width 4 four rows at a time.
-static void variance_neon_w4x4(const uint8_t *a, int a_stride, const uint8_t *b,
-                               int b_stride, int h, uint32_t *sse, int *sum) {
+static void variance_neon_w4x4(const uint8_t *src_ptr, int src_stride,
+                               const uint8_t *ref_ptr, int ref_stride, int h,
+                               uint32_t *sse, int *sum) {
   int i;
   int16x8_t sum_s16 = vdupq_n_s16(0);
   int32x4_t sse_lo_s32 = vdupq_n_s32(0);
@@ -38,8 +39,8 @@
   assert(h <= 256);
 
   for (i = 0; i < h; i += 4) {
-    const uint8x16_t a_u8 = load_unaligned_u8q(a, a_stride);
-    const uint8x16_t b_u8 = load_unaligned_u8q(b, b_stride);
+    const uint8x16_t a_u8 = load_unaligned_u8q(src_ptr, src_stride);
+    const uint8x16_t b_u8 = load_unaligned_u8q(ref_ptr, ref_stride);
     const uint16x8_t diff_lo_u16 =
         vsubl_u8(vget_low_u8(a_u8), vget_low_u8(b_u8));
     const uint16x8_t diff_hi_u16 =
@@ -61,8 +62,8 @@
     sse_hi_s32 = vmlal_s16(sse_hi_s32, vget_high_s16(diff_hi_s16),
                            vget_high_s16(diff_hi_s16));
 
-    a += 4 * a_stride;
-    b += 4 * b_stride;
+    src_ptr += 4 * src_stride;
+    ref_ptr += 4 * ref_stride;
   }
 
   *sum = vget_lane_s32(horizontal_add_int16x8(sum_s16), 0);
@@ -72,9 +73,9 @@
 }
 
 // Process a block of any size where the width is divisible by 16.
-static void variance_neon_w16(const uint8_t *a, int a_stride, const uint8_t *b,
-                              int b_stride, int w, int h, uint32_t *sse,
-                              int *sum) {
+static void variance_neon_w16(const uint8_t *src_ptr, int src_stride,
+                              const uint8_t *ref_ptr, int ref_stride, int w,
+                              int h, uint32_t *sse, int *sum) {
   int i, j;
   int16x8_t sum_s16 = vdupq_n_s16(0);
   int32x4_t sse_lo_s32 = vdupq_n_s32(0);
@@ -86,8 +87,8 @@
 
   for (i = 0; i < h; ++i) {
     for (j = 0; j < w; j += 16) {
-      const uint8x16_t a_u8 = vld1q_u8(a + j);
-      const uint8x16_t b_u8 = vld1q_u8(b + j);
+      const uint8x16_t a_u8 = vld1q_u8(src_ptr + j);
+      const uint8x16_t b_u8 = vld1q_u8(ref_ptr + j);
 
       const uint16x8_t diff_lo_u16 =
           vsubl_u8(vget_low_u8(a_u8), vget_low_u8(b_u8));
@@ -110,8 +111,8 @@
       sse_hi_s32 = vmlal_s16(sse_hi_s32, vget_high_s16(diff_hi_s16),
                              vget_high_s16(diff_hi_s16));
     }
-    a += a_stride;
-    b += b_stride;
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
   }
 
   *sum = vget_lane_s32(horizontal_add_int16x8(sum_s16), 0);
@@ -121,8 +122,9 @@
 }
 
 // Process a block of width 8 two rows at a time.
-static void variance_neon_w8x2(const uint8_t *a, int a_stride, const uint8_t *b,
-                               int b_stride, int h, uint32_t *sse, int *sum) {
+static void variance_neon_w8x2(const uint8_t *src_ptr, int src_stride,
+                               const uint8_t *ref_ptr, int ref_stride, int h,
+                               uint32_t *sse, int *sum) {
   int i = 0;
   int16x8_t sum_s16 = vdupq_n_s16(0);
   int32x4_t sse_lo_s32 = vdupq_n_s32(0);
@@ -132,10 +134,10 @@
   assert(h <= 128);
 
   do {
-    const uint8x8_t a_0_u8 = vld1_u8(a);
-    const uint8x8_t a_1_u8 = vld1_u8(a + a_stride);
-    const uint8x8_t b_0_u8 = vld1_u8(b);
-    const uint8x8_t b_1_u8 = vld1_u8(b + b_stride);
+    const uint8x8_t a_0_u8 = vld1_u8(src_ptr);
+    const uint8x8_t a_1_u8 = vld1_u8(src_ptr + src_stride);
+    const uint8x8_t b_0_u8 = vld1_u8(ref_ptr);
+    const uint8x8_t b_1_u8 = vld1_u8(ref_ptr + ref_stride);
     const uint16x8_t diff_0_u16 = vsubl_u8(a_0_u8, b_0_u8);
     const uint16x8_t diff_1_u16 = vsubl_u8(a_1_u8, b_1_u8);
     const int16x8_t diff_0_s16 = vreinterpretq_s16_u16(diff_0_u16);
@@ -150,8 +152,8 @@
                            vget_high_s16(diff_0_s16));
     sse_hi_s32 = vmlal_s16(sse_hi_s32, vget_high_s16(diff_1_s16),
                            vget_high_s16(diff_1_s16));
-    a += a_stride + a_stride;
-    b += b_stride + b_stride;
+    src_ptr += src_stride + src_stride;
+    ref_ptr += ref_stride + ref_stride;
     i += 2;
   } while (i < h);
 
@@ -161,31 +163,36 @@
                        0);
 }
 
-void vpx_get8x8var_neon(const uint8_t *a, int a_stride, const uint8_t *b,
-                        int b_stride, unsigned int *sse, int *sum) {
-  variance_neon_w8x2(a, a_stride, b, b_stride, 8, sse, sum);
+void vpx_get8x8var_neon(const uint8_t *src_ptr, int src_stride,
+                        const uint8_t *ref_ptr, int ref_stride,
+                        unsigned int *sse, int *sum) {
+  variance_neon_w8x2(src_ptr, src_stride, ref_ptr, ref_stride, 8, sse, sum);
 }
 
-void vpx_get16x16var_neon(const uint8_t *a, int a_stride, const uint8_t *b,
-                          int b_stride, unsigned int *sse, int *sum) {
-  variance_neon_w16(a, a_stride, b, b_stride, 16, 16, sse, sum);
+void vpx_get16x16var_neon(const uint8_t *src_ptr, int src_stride,
+                          const uint8_t *ref_ptr, int ref_stride,
+                          unsigned int *sse, int *sum) {
+  variance_neon_w16(src_ptr, src_stride, ref_ptr, ref_stride, 16, 16, sse, sum);
 }
 
-#define varianceNxM(n, m, shift)                                            \
-  unsigned int vpx_variance##n##x##m##_neon(const uint8_t *a, int a_stride, \
-                                            const uint8_t *b, int b_stride, \
-                                            unsigned int *sse) {            \
-    int sum;                                                                \
-    if (n == 4)                                                             \
-      variance_neon_w4x4(a, a_stride, b, b_stride, m, sse, &sum);           \
-    else if (n == 8)                                                        \
-      variance_neon_w8x2(a, a_stride, b, b_stride, m, sse, &sum);           \
-    else                                                                    \
-      variance_neon_w16(a, a_stride, b, b_stride, n, m, sse, &sum);         \
-    if (n * m < 16 * 16)                                                    \
-      return *sse - ((sum * sum) >> shift);                                 \
-    else                                                                    \
-      return *sse - (uint32_t)(((int64_t)sum * sum) >> shift);              \
+#define varianceNxM(n, m, shift)                                             \
+  unsigned int vpx_variance##n##x##m##_neon(                                 \
+      const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr,        \
+      int ref_stride, unsigned int *sse) {                                   \
+    int sum;                                                                 \
+    if (n == 4)                                                              \
+      variance_neon_w4x4(src_ptr, src_stride, ref_ptr, ref_stride, m, sse,   \
+                         &sum);                                              \
+    else if (n == 8)                                                         \
+      variance_neon_w8x2(src_ptr, src_stride, ref_ptr, ref_stride, m, sse,   \
+                         &sum);                                              \
+    else                                                                     \
+      variance_neon_w16(src_ptr, src_stride, ref_ptr, ref_stride, n, m, sse, \
+                        &sum);                                               \
+    if (n * m < 16 * 16)                                                     \
+      return *sse - ((sum * sum) >> shift);                                  \
+    else                                                                     \
+      return *sse - (uint32_t)(((int64_t)sum * sum) >> shift);               \
   }
 
 varianceNxM(4, 4, 4);
@@ -199,58 +206,66 @@
 varianceNxM(32, 16, 9);
 varianceNxM(32, 32, 10);
 
-unsigned int vpx_variance32x64_neon(const uint8_t *a, int a_stride,
-                                    const uint8_t *b, int b_stride,
+unsigned int vpx_variance32x64_neon(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
                                     unsigned int *sse) {
   int sum1, sum2;
   uint32_t sse1, sse2;
-  variance_neon_w16(a, a_stride, b, b_stride, 32, 32, &sse1, &sum1);
-  variance_neon_w16(a + (32 * a_stride), a_stride, b + (32 * b_stride),
-                    b_stride, 32, 32, &sse2, &sum2);
+  variance_neon_w16(src_ptr, src_stride, ref_ptr, ref_stride, 32, 32, &sse1,
+                    &sum1);
+  variance_neon_w16(src_ptr + (32 * src_stride), src_stride,
+                    ref_ptr + (32 * ref_stride), ref_stride, 32, 32, &sse2,
+                    &sum2);
   *sse = sse1 + sse2;
   sum1 += sum2;
   return *sse - (unsigned int)(((int64_t)sum1 * sum1) >> 11);
 }
 
-unsigned int vpx_variance64x32_neon(const uint8_t *a, int a_stride,
-                                    const uint8_t *b, int b_stride,
+unsigned int vpx_variance64x32_neon(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
                                     unsigned int *sse) {
   int sum1, sum2;
   uint32_t sse1, sse2;
-  variance_neon_w16(a, a_stride, b, b_stride, 64, 16, &sse1, &sum1);
-  variance_neon_w16(a + (16 * a_stride), a_stride, b + (16 * b_stride),
-                    b_stride, 64, 16, &sse2, &sum2);
+  variance_neon_w16(src_ptr, src_stride, ref_ptr, ref_stride, 64, 16, &sse1,
+                    &sum1);
+  variance_neon_w16(src_ptr + (16 * src_stride), src_stride,
+                    ref_ptr + (16 * ref_stride), ref_stride, 64, 16, &sse2,
+                    &sum2);
   *sse = sse1 + sse2;
   sum1 += sum2;
   return *sse - (unsigned int)(((int64_t)sum1 * sum1) >> 11);
 }
 
-unsigned int vpx_variance64x64_neon(const uint8_t *a, int a_stride,
-                                    const uint8_t *b, int b_stride,
+unsigned int vpx_variance64x64_neon(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
                                     unsigned int *sse) {
   int sum1, sum2;
   uint32_t sse1, sse2;
 
-  variance_neon_w16(a, a_stride, b, b_stride, 64, 16, &sse1, &sum1);
-  variance_neon_w16(a + (16 * a_stride), a_stride, b + (16 * b_stride),
-                    b_stride, 64, 16, &sse2, &sum2);
+  variance_neon_w16(src_ptr, src_stride, ref_ptr, ref_stride, 64, 16, &sse1,
+                    &sum1);
+  variance_neon_w16(src_ptr + (16 * src_stride), src_stride,
+                    ref_ptr + (16 * ref_stride), ref_stride, 64, 16, &sse2,
+                    &sum2);
   sse1 += sse2;
   sum1 += sum2;
 
-  variance_neon_w16(a + (16 * 2 * a_stride), a_stride, b + (16 * 2 * b_stride),
-                    b_stride, 64, 16, &sse2, &sum2);
+  variance_neon_w16(src_ptr + (16 * 2 * src_stride), src_stride,
+                    ref_ptr + (16 * 2 * ref_stride), ref_stride, 64, 16, &sse2,
+                    &sum2);
   sse1 += sse2;
   sum1 += sum2;
 
-  variance_neon_w16(a + (16 * 3 * a_stride), a_stride, b + (16 * 3 * b_stride),
-                    b_stride, 64, 16, &sse2, &sum2);
+  variance_neon_w16(src_ptr + (16 * 3 * src_stride), src_stride,
+                    ref_ptr + (16 * 3 * ref_stride), ref_stride, 64, 16, &sse2,
+                    &sum2);
   *sse = sse1 + sse2;
   sum1 += sum2;
   return *sse - (unsigned int)(((int64_t)sum1 * sum1) >> 12);
 }
 
-unsigned int vpx_mse16x16_neon(const unsigned char *src_ptr, int source_stride,
-                               const unsigned char *ref_ptr, int recon_stride,
+unsigned int vpx_mse16x16_neon(const unsigned char *src_ptr, int src_stride,
+                               const unsigned char *ref_ptr, int ref_stride,
                                unsigned int *sse) {
   int i;
   int16x4_t d22s16, d23s16, d24s16, d25s16, d26s16, d27s16, d28s16, d29s16;
@@ -267,13 +282,13 @@
 
   for (i = 0; i < 8; i++) {  // mse16x16_neon_loop
     q0u8 = vld1q_u8(src_ptr);
-    src_ptr += source_stride;
+    src_ptr += src_stride;
     q1u8 = vld1q_u8(src_ptr);
-    src_ptr += source_stride;
+    src_ptr += src_stride;
     q2u8 = vld1q_u8(ref_ptr);
-    ref_ptr += recon_stride;
+    ref_ptr += ref_stride;
     q3u8 = vld1q_u8(ref_ptr);
-    ref_ptr += recon_stride;
+    ref_ptr += ref_stride;
 
     q11u16 = vsubl_u8(vget_low_u8(q0u8), vget_low_u8(q2u8));
     q12u16 = vsubl_u8(vget_high_u8(q0u8), vget_high_u8(q2u8));
@@ -312,10 +327,9 @@
   return vget_lane_u32(vreinterpret_u32_s64(d0s64), 0);
 }
 
-unsigned int vpx_get4x4sse_cs_neon(const unsigned char *src_ptr,
-                                   int source_stride,
+unsigned int vpx_get4x4sse_cs_neon(const unsigned char *src_ptr, int src_stride,
                                    const unsigned char *ref_ptr,
-                                   int recon_stride) {
+                                   int ref_stride) {
   int16x4_t d22s16, d24s16, d26s16, d28s16;
   int64x1_t d0s64;
   uint8x8_t d0u8, d1u8, d2u8, d3u8, d4u8, d5u8, d6u8, d7u8;
@@ -324,21 +338,21 @@
   int64x2_t q1s64;
 
   d0u8 = vld1_u8(src_ptr);
-  src_ptr += source_stride;
+  src_ptr += src_stride;
   d4u8 = vld1_u8(ref_ptr);
-  ref_ptr += recon_stride;
+  ref_ptr += ref_stride;
   d1u8 = vld1_u8(src_ptr);
-  src_ptr += source_stride;
+  src_ptr += src_stride;
   d5u8 = vld1_u8(ref_ptr);
-  ref_ptr += recon_stride;
+  ref_ptr += ref_stride;
   d2u8 = vld1_u8(src_ptr);
-  src_ptr += source_stride;
+  src_ptr += src_stride;
   d6u8 = vld1_u8(ref_ptr);
-  ref_ptr += recon_stride;
+  ref_ptr += ref_stride;
   d3u8 = vld1_u8(src_ptr);
-  src_ptr += source_stride;
+  src_ptr += src_stride;
   d7u8 = vld1_u8(ref_ptr);
-  ref_ptr += recon_stride;
+  ref_ptr += ref_stride;
 
   q11u16 = vsubl_u8(d0u8, d4u8);
   q12u16 = vsubl_u8(d1u8, d5u8);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_horiz_filter_type1_neon.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_horiz_filter_type1_neon.asm
new file mode 100644
index 0000000..d8e4bcc
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_horiz_filter_type1_neon.asm
@@ -0,0 +1,438 @@
+;
+;  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+;
+;  Use of this source code is governed by a BSD-style license
+;  that can be found in the LICENSE file in the root of the source
+;  tree. An additional intellectual property rights grant can be found
+;  in the file PATENTS.  All contributing project authors may
+;  be found in the AUTHORS file in the root of the source tree.
+;
+;**************Variables Vs Registers*****************************************
+;    r0 => src
+;    r1 => dst
+;    r2 =>  src_stride
+;    r3 =>  dst_stride
+;    r4 => filter_x0
+;    r8 =>  ht
+;    r10 =>  wd
+
+    EXPORT          |vpx_convolve8_avg_horiz_filter_type1_neon|
+    ARM
+    REQUIRE8
+    PRESERVE8
+
+    AREA  ||.text||, CODE, READONLY, ALIGN=2
+
+|vpx_convolve8_avg_horiz_filter_type1_neon| PROC
+
+    stmfd           sp!,    {r4  -  r12,    r14} ;stack stores the values of
+                                                 ; the arguments
+    vpush           {d8  -  d15}                 ; stack offset by 64
+    mov             r4,     r1
+    mov             r1,     r2
+    mov             r2,     r4
+
+start_loop_count
+    ldr             r4,     [sp,    #104]   ;loads pi1_coeff
+    ldr             r8,     [sp,    #108]   ;loads x0_q4
+    add             r4,     r4,     r8,     lsl #4 ;r4 = filter[x0_q4]
+    ldr             r8,     [sp,    #128]   ;loads ht
+    ldr             r10,    [sp,    #124]   ;loads wd
+    vld2.8          {d0,    d1},    [r4]    ;coeff = vld1_s8(pi1_coeff)
+    mov             r11,    #1
+    subs            r14,    r8,     #0      ;checks for ht == 0
+    vabs.s8         d2,     d0              ;vabs_s8(coeff)
+    vdup.8          d24,    d2[0]           ;coeffabs_0 = vdup_lane_u8(coeffabs,
+                                            ; 0)
+    sub             r12,    r0,     #3      ;pu1_src - 3
+    vdup.8          d25,    d2[1]           ;coeffabs_1 = vdup_lane_u8(coeffabs,
+                                            ; 1)
+    add             r4,     r12,    r2      ;pu1_src_tmp2_8 = pu1_src + src_strd
+    vdup.8          d26,    d2[2]           ;coeffabs_2 = vdup_lane_u8(coeffabs,
+                                            ; 2)
+    rsb             r9,     r10,    r2,     lsl #1 ;2*src_strd - wd
+    vdup.8          d27,    d2[3]           ;coeffabs_3 = vdup_lane_u8(coeffabs,
+                                            ; 3)
+    rsb             r8,     r10,    r3,     lsl #1 ;2*dst_strd - wd
+    vdup.8          d28,    d2[4]           ;coeffabs_4 = vdup_lane_u8(coeffabs,
+                                            ; 4)
+    vdup.8          d29,    d2[5]           ;coeffabs_5 = vdup_lane_u8(coeffabs,
+                                            ; 5)
+    vdup.8          d30,    d2[6]           ;coeffabs_6 = vdup_lane_u8(coeffabs,
+                                            ; 6)
+    vdup.8          d31,    d2[7]           ;coeffabs_7 = vdup_lane_u8(coeffabs,
+                                            ; 7)
+    mov             r7,     r1
+    cmp             r10,    #4
+    ble             outer_loop_4
+
+    cmp             r10,    #24
+    moveq           r10,    #16
+    addeq           r8,     #8
+    addeq           r9,     #8
+    cmp             r10,    #16
+    bge             outer_loop_16
+
+    cmp             r10,    #12
+    addeq           r8,     #4
+    addeq           r9,     #4
+    b               outer_loop_8
+
+outer_loop8_residual
+    sub             r12,    r0,     #3      ;pu1_src - 3
+    mov             r1,     r7
+    mov             r14,    #32
+    add             r1,     #16
+    add             r12,    #16
+    mov             r10,    #8
+    add             r8,     #8
+    add             r9,     #8
+
+outer_loop_8
+    add             r6,     r1,     r3      ;pu1_dst + dst_strd
+    add             r4,     r12,    r2      ;pu1_src + src_strd
+    subs            r5,     r10,    #0      ;checks wd
+    ble             end_inner_loop_8
+
+inner_loop_8
+    mov             r7,     #0xc000
+    vld1.u32        {d0},   [r12],  r11     ;vector load pu1_src
+    vdup.16         q4,     r7
+    vld1.u32        {d1},   [r12],  r11
+    vdup.16         q5,     r7
+    vld1.u32        {d2},   [r12],  r11
+    vld1.u32        {d3},   [r12],  r11
+    mov             r7,     #0x4000
+    vld1.u32        {d4},   [r12],  r11
+    vmlsl.u8        q4,     d1,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u32        {d5},   [r12],  r11
+    vmlal.u8        q4,     d3,     d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    vld1.u32        {d6},   [r12],  r11
+    vmlsl.u8        q4,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vld1.u32        {d7},   [r12],  r11
+    vmlal.u8        q4,     d2,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vld1.u32        {d12},  [r4],   r11     ;vector load pu1_src + src_strd
+    vmlal.u8        q4,     d4,     d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vld1.u32        {d13},  [r4],   r11
+    vmlal.u8        q4,     d5,     d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vld1.u32        {d14},  [r4],   r11
+    vmlsl.u8        q4,     d6,     d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vld1.u32        {d15},  [r4],   r11
+    vmlsl.u8        q4,     d7,     d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    vld1.u32        {d16},  [r4],   r11     ;vector load pu1_src + src_strd
+    vdup.16         q11,    r7
+    vmlal.u8        q5,     d15,    d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    vld1.u32        {d17},  [r4],   r11
+    vmlal.u8        q5,     d14,    d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vhadd.s16       q4,     q4,     q11
+    vld1.u32        {d18},  [r4],   r11
+    vmlal.u8        q5,     d16,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vld1.u32        {d19},  [r4],   r11     ;vector load pu1_src + src_strd
+    vmlal.u8        q5,     d17,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vld1.u8         {d6},   [r1]
+    vqrshrun.s16    d20,    q4,     #6      ;right shift and saturating narrow
+                                            ; result 1
+    vmlsl.u8        q5,     d18,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vmlsl.u8        q5,     d19,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    vld1.u8         {d7},   [r6]
+    vrhadd.u8       d20,    d20,    d6
+    vmlsl.u8        q5,     d12,    d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vmlsl.u8        q5,     d13,    d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vst1.8          {d20},  [r1]!           ;store the result pu1_dst
+    vhadd.s16       q5,     q5,     q11
+    subs            r5,     r5,     #8      ;decrement the wd loop
+    vqrshrun.s16    d8,     q5,     #6      ;right shift and saturating narrow
+                                            ; result 2
+    vrhadd.u8       d8,     d8,     d7
+    vst1.8          {d8},   [r6]!           ;store the result pu1_dst
+    cmp             r5,     #4
+    bgt             inner_loop_8
+
+end_inner_loop_8
+    subs            r14,    r14,    #2      ;decrement the ht loop
+    add             r12,    r12,    r9      ;increment the src pointer by
+                                            ; 2*src_strd-wd
+    add             r1,     r1,     r8      ;increment the dst pointer by
+                                            ; 2*dst_strd-wd
+    bgt             outer_loop_8
+
+    ldr             r10,    [sp,    #120]   ;loads wd
+    cmp             r10,    #12
+    beq             outer_loop4_residual
+
+end_loops
+    b               end_func
+
+outer_loop_16
+    str             r0,     [sp,  #-4]!
+    str             r7,     [sp,  #-4]!
+    add             r6,     r1,     r3      ;pu1_dst + dst_strd
+    add             r4,     r12,    r2      ;pu1_src + src_strd
+    and             r0,     r12,    #31
+    mov             r7,     #0xc000
+    sub             r5,     r10,    #0      ;checks wd
+    pld             [r4,    r2,     lsl #1]
+    pld             [r12,   r2,     lsl #1]
+    vld1.u32        {q0},   [r12],  r11     ;vector load pu1_src
+    vdup.16         q4,     r7
+    vld1.u32        {q1},   [r12],  r11
+    vld1.u32        {q2},   [r12],  r11
+    vld1.u32        {q3},   [r12],  r11
+    vmlsl.u8        q4,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vld1.u32        {q6},   [r12],  r11
+    vmlsl.u8        q4,     d2,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u32        {q7},   [r12],  r11
+    vmlal.u8        q4,     d4,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vld1.u32        {q8},   [r12],  r11
+    vmlal.u8        q4,     d6,     d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    vld1.u32        {q9},   [r12],  r11
+    vmlal.u8        q4,     d12,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vmlal.u8        q4,     d14,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vdup.16         q10,    r7
+    vmlsl.u8        q4,     d16,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vmlsl.u8        q4,     d18,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+
+inner_loop_16
+    vmlsl.u8        q10,    d1,     d24
+    vdup.16         q5,     r7
+    vmlsl.u8        q10,    d3,     d25
+    mov             r7,     #0x4000
+    vdup.16         q11,    r7
+    vmlal.u8        q10,    d5,     d26
+    vld1.u32        {q0},   [r4],   r11     ;vector load pu1_src
+    vhadd.s16       q4,     q4,     q11
+    vld1.u32        {q1},   [r4],   r11
+    vmlal.u8        q10,    d7,     d27
+    add             r12,    #8
+    subs            r5,     r5,     #16
+    vmlal.u8        q10,    d13,    d28
+    vld1.u32        {q2},   [r4],   r11
+    vmlal.u8        q10,    d15,    d29
+    vld1.u32        {q3},   [r4],   r11
+    vqrshrun.s16    d8,     q4,     #6      ;right shift and saturating narrow
+                                            ; result 1
+    vmlsl.u8        q10,    d17,    d30
+    vld1.u32        {q6},   [r4],   r11
+    vmlsl.u8        q10,    d19,    d31
+    vld1.u32        {q7},   [r4],   r11
+    add             r7,     r1,     #8
+    vmlsl.u8        q5,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vmlsl.u8        q5,     d2,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u32        {q8},   [r4],   r11
+    vhadd.s16       q10,    q10,    q11
+    vld1.u32        {q9},   [r4],   r11
+    vld1.u8         {d0},   [r1]
+    vmlal.u8        q5,     d4,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vld1.u8         {d2},   [r7]
+    vmlal.u8        q5,     d6,     d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    add             r4,     #8
+    mov             r7,     #0xc000
+    vmlal.u8        q5,     d12,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vmlal.u8        q5,     d14,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vqrshrun.s16    d9,     q10,    #6
+    vdup.16         q11,    r7
+    vmlsl.u8        q5,     d16,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vmlsl.u8        q5,     d18,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    mov             r7,     #0x4000
+    vrhadd.u8       d8,     d8,     d0
+    vrhadd.u8       d9,     d9,     d2
+    vmlsl.u8        q11,    d1,     d24
+    vmlsl.u8        q11,    d3,     d25
+    vdup.16         q10,    r7
+    vmlal.u8        q11,    d5,     d26
+    pld             [r12,   r2,     lsl #2]
+    pld             [r4,    r2,     lsl #2]
+    addeq           r12,    r12,    r9      ;increment the src pointer by
+                                            ; 2*src_strd-wd
+    addeq           r4,     r12,    r2      ;pu1_src + src_strd
+    vmlal.u8        q11,    d7,     d27
+    vmlal.u8        q11,    d13,    d28
+    vst1.8          {q4},   [r1]!           ;store the result pu1_dst
+    subeq           r14,    r14,    #2
+    vhadd.s16       q5,     q5,     q10
+    vmlal.u8        q11,    d15,    d29
+    addeq           r1,     r1,     r8
+    vmlsl.u8        q11,    d17,    d30
+    cmp             r14,    #0
+    vmlsl.u8        q11,    d19,    d31
+    vqrshrun.s16    d10,    q5,     #6      ;right shift and saturating narrow
+                                            ; result 2
+    beq             epilog_16
+
+    vld1.u32        {q0},   [r12],  r11     ;vector load pu1_src
+    mov             r7,     #0xc000
+    cmp             r5,     #0
+    vld1.u32        {q1},   [r12],  r11
+    vhadd.s16       q11,    q11,    q10
+    vld1.u32        {q2},   [r12],  r11
+    vdup.16         q4,     r7
+    vmlsl.u8        q4,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vdup.16         q10,    r7
+    vld1.u32        {q3},   [r12],  r11
+    add             r7,     r6,     #8
+    moveq           r5,     r10
+    vld1.u8         {d0},   [r6]
+    vmlsl.u8        q4,     d2,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u8         {d2},   [r7]
+    vqrshrun.s16    d11,    q11,    #6
+    vmlal.u8        q4,     d4,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vld1.u32        {q6},   [r12],  r11
+    vrhadd.u8       d10,    d10,    d0
+    vld1.u32        {q7},   [r12],  r11
+    vrhadd.u8       d11,    d11,    d2
+    vld1.u32        {q8},   [r12],  r11
+    vmlal.u8        q4,     d6,     d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    vld1.u32        {q9},   [r12],  r11
+    vmlal.u8        q4,     d12,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vmlal.u8        q4,     d14,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    mov             r7,     #0xc000
+    vmlsl.u8        q4,     d16,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vst1.8          {q5},   [r6]!           ;store the result pu1_dst
+    vmlsl.u8        q4,     d18,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    addeq           r6,     r1,     r3      ;pu1_dst + dst_strd
+    b               inner_loop_16
+
+epilog_16
+    mov             r7,     #0x4000
+    ldr             r0,     [sp],   #4
+    ldr             r10,    [sp,    #120]
+    vdup.16         q10,    r7
+    vhadd.s16       q11,    q11,    q10
+    vqrshrun.s16    d11,    q11,    #6
+    add             r7,     r6,     #8
+    vld1.u8         {d20},  [r6]
+    vld1.u8         {d21},  [r7]
+    vrhadd.u8       d10,    d10,    d20
+    vrhadd.u8       d11,    d11,    d21
+    vst1.8          {q5},   [r6]!           ;store the result pu1_dst
+    ldr             r7,     [sp],   #4
+    cmp             r10,    #24
+    beq             outer_loop8_residual
+
+end_loops1
+    b               end_func
+
+outer_loop4_residual
+    sub             r12,    r0,     #3      ;pu1_src - 3
+    mov             r1,     r7
+    add             r1,     #8
+    mov             r10,    #4
+    add             r12,    #8
+    mov             r14,    #16
+    add             r8,     #4
+    add             r9,     #4
+
+outer_loop_4
+    add             r6,     r1,     r3      ;pu1_dst + dst_strd
+    add             r4,     r12,    r2      ;pu1_src + src_strd
+    subs            r5,     r10,    #0      ;checks wd
+    ble             end_inner_loop_4
+
+inner_loop_4
+    vld1.u32        {d0},   [r12],  r11     ;vector load pu1_src
+    vld1.u32        {d1},   [r12],  r11
+    vld1.u32        {d2},   [r12],  r11
+    vld1.u32        {d3},   [r12],  r11
+    vld1.u32        {d4},   [r12],  r11
+    vld1.u32        {d5},   [r12],  r11
+    vld1.u32        {d6},   [r12],  r11
+    vld1.u32        {d7},   [r12],  r11
+    sub             r12,    r12,    #4
+    vld1.u32        {d12},  [r4],   r11     ;vector load pu1_src + src_strd
+    vld1.u32        {d13},  [r4],   r11
+    vzip.32         d0,     d12             ;vector zip the i iteration and ii
+                                            ; iteration in single register
+    vld1.u32        {d14},  [r4],   r11
+    vzip.32         d1,     d13
+    vld1.u32        {d15},  [r4],   r11
+    vzip.32         d2,     d14
+    vld1.u32        {d16},  [r4],   r11
+    vzip.32         d3,     d15
+    vld1.u32        {d17},  [r4],   r11
+    vzip.32         d4,     d16
+    vld1.u32        {d18},  [r4],   r11
+    vzip.32         d5,     d17
+    vld1.u32        {d19},  [r4],   r11
+    mov             r7,     #0xc000
+    vdup.16         q4,     r7
+    sub             r4,     r4,     #4
+    vzip.32         d6,     d18
+    vzip.32         d7,     d19
+    vmlsl.u8        q4,     d1,     d25     ;arithmetic operations for ii
+                                            ; iteration in the same time
+    vmlsl.u8        q4,     d0,     d24
+    vmlal.u8        q4,     d2,     d26
+    vmlal.u8        q4,     d3,     d27
+    vmlal.u8        q4,     d4,     d28
+    vmlal.u8        q4,     d5,     d29
+    vmlsl.u8        q4,     d6,     d30
+    vmlsl.u8        q4,     d7,     d31
+    mov             r7,     #0x4000
+    vdup.16         q10,    r7
+    vhadd.s16       q4,     q4,     q10
+    vqrshrun.s16    d8,     q4,     #6
+    vld1.u32        {d10[0]},       [r1]
+    vld1.u32        {d10[1]},       [r6]
+    vrhadd.u8       d8,     d8,     d10
+    vst1.32         {d8[0]},[r1]!           ;store the i iteration result which
+                                            ; is in upper part of the register
+    vst1.32         {d8[1]},[r6]!           ;store the ii iteration result which
+                                            ; is in lower part of the register
+    subs            r5,     r5,     #4      ;decrement the wd by 4
+    bgt             inner_loop_4
+
+end_inner_loop_4
+    subs            r14,    r14,    #2      ;decrement the ht by 4
+    add             r12,    r12,    r9      ;increment the input pointer
+                                            ; 2*src_strd-wd
+    add             r1,     r1,     r8      ;increment the output pointer
+                                            ; 2*dst_strd-wd
+    bgt             outer_loop_4
+
+end_func
+    vpop            {d8  -  d15}
+    ldmfd           sp!,    {r4  -  r12,    r15} ;reload the registers from sp
+
+    ENDP
+
+    END
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_horiz_filter_type2_neon.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_horiz_filter_type2_neon.asm
new file mode 100644
index 0000000..7a77747
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_horiz_filter_type2_neon.asm
@@ -0,0 +1,439 @@
+;
+;  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+;
+;  Use of this source code is governed by a BSD-style license
+;  that can be found in the LICENSE file in the root of the source
+;  tree. An additional intellectual property rights grant can be found
+;  in the file PATENTS.  All contributing project authors may
+;  be found in the AUTHORS file in the root of the source tree.
+;
+;**************Variables Vs Registers***********************************
+;    r0 => src
+;    r1 => dst
+;    r2 =>  src_stride
+;    r3 =>  dst_stride
+;    r4 => filter_x0
+;    r8 =>  ht
+;    r10 =>  wd
+
+    EXPORT          |vpx_convolve8_avg_horiz_filter_type2_neon|
+    ARM
+    REQUIRE8
+    PRESERVE8
+
+    AREA  ||.text||, CODE, READONLY, ALIGN=2
+
+|vpx_convolve8_avg_horiz_filter_type2_neon| PROC
+
+    stmfd           sp!,    {r4  -  r12,    r14} ;stack stores the values of
+                                                 ; the arguments
+    vpush           {d8  -  d15}                 ; stack offset by 64
+    mov             r4,     r1
+    mov             r1,     r2
+    mov             r2,     r4
+
+start_loop_count
+    ldr             r4,     [sp,    #104]   ;loads pi1_coeff
+    ldr             r8,     [sp,    #108]   ;loads x0_q4
+    add             r4,     r4,     r8,     lsl #4 ;r4 = filter[x0_q4]
+    ldr             r8,     [sp,    #128]   ;loads ht
+    ldr             r10,    [sp,    #124]   ;loads wd
+    vld2.8          {d0,    d1},    [r4]    ;coeff = vld1_s8(pi1_coeff)
+    mov             r11,    #1
+    subs            r14,    r8,     #0      ;checks for ht == 0
+    vabs.s8         d2,     d0              ;vabs_s8(coeff)
+    vdup.8          d24,    d2[0]           ;coeffabs_0 = vdup_lane_u8(coeffabs,
+                                            ; 0)
+    sub             r12,    r0,     #3      ;pu1_src - 3
+    vdup.8          d25,    d2[1]           ;coeffabs_1 = vdup_lane_u8(coeffabs,
+                                            ; 1)
+    add             r4,     r12,    r2      ;pu1_src_tmp2_8 = pu1_src + src_strd
+    vdup.8          d26,    d2[2]           ;coeffabs_2 = vdup_lane_u8(coeffabs,
+                                            ; 2)
+    rsb             r9,     r10,    r2,     lsl #1 ;2*src_strd - wd
+    vdup.8          d27,    d2[3]           ;coeffabs_3 = vdup_lane_u8(coeffabs,
+                                            ; 3)
+    rsb             r8,     r10,    r3,     lsl #1 ;2*dst_strd - wd
+    vdup.8          d28,    d2[4]           ;coeffabs_4 = vdup_lane_u8(coeffabs,
+                                            ; 4)
+    vdup.8          d29,    d2[5]           ;coeffabs_5 = vdup_lane_u8(coeffabs,
+                                            ; 5)
+    vdup.8          d30,    d2[6]           ;coeffabs_6 = vdup_lane_u8(coeffabs,
+                                            ; 6)
+    vdup.8          d31,    d2[7]           ;coeffabs_7 = vdup_lane_u8(coeffabs,
+                                            ; 7)
+    mov             r7,     r1
+    cmp             r10,    #4
+    ble             outer_loop_4
+
+    cmp             r10,    #24
+    moveq           r10,    #16
+    addeq           r8,     #8
+    addeq           r9,     #8
+    cmp             r10,    #16
+    bge             outer_loop_16
+
+    cmp             r10,    #12
+    addeq           r8,     #4
+    addeq           r9,     #4
+    b               outer_loop_8
+
+outer_loop8_residual
+    sub             r12,    r0,     #3      ;pu1_src - 3
+    mov             r1,     r7
+    mov             r14,    #32
+    add             r1,     #16
+    add             r12,    #16
+    mov             r10,    #8
+    add             r8,     #8
+    add             r9,     #8
+
+outer_loop_8
+
+    add             r6,     r1,     r3      ;pu1_dst + dst_strd
+    add             r4,     r12,    r2      ;pu1_src + src_strd
+    subs            r5,     r10,    #0      ;checks wd
+    ble             end_inner_loop_8
+
+inner_loop_8
+    mov             r7,     #0xc000
+    vld1.u32        {d0},   [r12],  r11     ;vector load pu1_src
+    vdup.16         q4,     r7
+    vld1.u32        {d1},   [r12],  r11
+    vdup.16         q5,     r7
+    vld1.u32        {d2},   [r12],  r11
+    vld1.u32        {d3},   [r12],  r11
+    mov             r7,     #0x4000
+    vld1.u32        {d4},   [r12],  r11
+    vmlal.u8        q4,     d1,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u32        {d5},   [r12],  r11
+    vmlal.u8        q4,     d3,     d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    vld1.u32        {d6},   [r12],  r11
+    vmlsl.u8        q4,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vld1.u32        {d7},   [r12],  r11
+    vmlsl.u8        q4,     d2,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vld1.u32        {d12},  [r4],   r11     ;vector load pu1_src + src_strd
+    vmlal.u8        q4,     d4,     d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vld1.u32        {d13},  [r4],   r11
+    vmlsl.u8        q4,     d5,     d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vld1.u32        {d14},  [r4],   r11
+    vmlal.u8        q4,     d6,     d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vld1.u32        {d15},  [r4],   r11
+    vmlsl.u8        q4,     d7,     d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    vld1.u32        {d16},  [r4],   r11     ;vector load pu1_src + src_strd
+    vdup.16         q11,    r7
+    vmlal.u8        q5,     d15,    d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    vld1.u32        {d17},  [r4],   r11
+    vmlsl.u8        q5,     d14,    d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vhadd.s16       q4,     q4,     q11
+    vld1.u32        {d18},  [r4],   r11
+    vmlal.u8        q5,     d16,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vld1.u32        {d19},  [r4],   r11     ;vector load pu1_src + src_strd
+    vmlsl.u8        q5,     d17,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vld1.u8         {d6},   [r1]
+    vqrshrun.s16    d20,    q4,     #6      ;right shift and saturating narrow
+                                            ; result 1
+    vmlal.u8        q5,     d18,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vmlsl.u8        q5,     d19,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    vld1.u8         {d7},   [r6]
+    vrhadd.u8       d20,    d20,    d6
+    vmlsl.u8        q5,     d12,    d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vmlal.u8        q5,     d13,    d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vst1.8          {d20},  [r1]!           ;store the result pu1_dst
+    vhadd.s16       q5,     q5,     q11
+    subs            r5,     r5,     #8      ;decrement the wd loop
+    vqrshrun.s16    d8,     q5,     #6      ;right shift and saturating narrow
+                                            ; result 2
+    vrhadd.u8       d8,     d8,     d7
+    vst1.8          {d8},   [r6]!           ;store the result pu1_dst
+    cmp             r5,     #4
+    bgt             inner_loop_8
+
+end_inner_loop_8
+    subs            r14,    r14,    #2      ;decrement the ht loop
+    add             r12,    r12,    r9      ;increment the src pointer by
+                                            ; 2*src_strd-wd
+    add             r1,     r1,     r8      ;increment the dst pointer by
+                                            ; 2*dst_strd-wd
+    bgt             outer_loop_8
+
+    ldr             r10,    [sp,    #120]   ;loads wd
+    cmp             r10,    #12
+    beq             outer_loop4_residual
+
+end_loops
+    b               end_func
+
+outer_loop_16
+    str             r0,     [sp,  #-4]!
+    str             r7,     [sp,  #-4]!
+    add             r6,     r1,     r3      ;pu1_dst + dst_strd
+    add             r4,     r12,    r2      ;pu1_src + src_strd
+    and             r0,     r12,    #31
+    mov             r7,     #0xc000
+    sub             r5,     r10,    #0      ;checks wd
+    pld             [r4,    r2,     lsl #1]
+    pld             [r12,   r2,     lsl #1]
+    vld1.u32        {q0},   [r12],  r11     ;vector load pu1_src
+    vdup.16         q4,     r7
+    vld1.u32        {q1},   [r12],  r11
+    vld1.u32        {q2},   [r12],  r11
+    vld1.u32        {q3},   [r12],  r11
+    vmlsl.u8        q4,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vld1.u32        {q6},   [r12],  r11
+    vmlal.u8        q4,     d2,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u32        {q7},   [r12],  r11
+    vmlsl.u8        q4,     d4,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vld1.u32        {q8},   [r12],  r11
+    vmlal.u8        q4,     d6,     d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    vld1.u32        {q9},   [r12],  r11
+    vmlal.u8        q4,     d12,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vmlsl.u8        q4,     d14,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vdup.16         q10,    r7
+    vmlal.u8        q4,     d16,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vmlsl.u8        q4,     d18,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+
+inner_loop_16
+    vmlsl.u8        q10,    d1,     d24
+    vdup.16         q5,     r7
+    vmlal.u8        q10,    d3,     d25
+    mov             r7,     #0x4000
+    vdup.16         q11,    r7
+    vmlsl.u8        q10,    d5,     d26
+    vld1.u32        {q0},   [r4],   r11     ;vector load pu1_src
+    vhadd.s16       q4,     q4,     q11
+    vld1.u32        {q1},   [r4],   r11
+    vmlal.u8        q10,    d7,     d27
+    add             r12,    #8
+    subs            r5,     r5,     #16
+    vmlal.u8        q10,    d13,    d28
+    vld1.u32        {q2},   [r4],   r11
+    vmlsl.u8        q10,    d15,    d29
+    vld1.u32        {q3},   [r4],   r11
+    vqrshrun.s16    d8,     q4,     #6      ;right shift and saturating narrow
+                                            ; result 1
+    vmlal.u8        q10,    d17,    d30
+    vld1.u32        {q6},   [r4],   r11
+    vmlsl.u8        q10,    d19,    d31
+    vld1.u32        {q7},   [r4],   r11
+    add             r7,     r1,     #8
+    vmlsl.u8        q5,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vmlal.u8        q5,     d2,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u32        {q8},   [r4],   r11
+    vhadd.s16       q10,    q10,    q11
+    vld1.u32        {q9},   [r4],   r11
+    vld1.u8         {d0},   [r1]
+    vmlsl.u8        q5,     d4,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vld1.u8         {d2},   [r7]
+    vmlal.u8        q5,     d6,     d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    add             r4,     #8
+    mov             r7,     #0xc000
+    vmlal.u8        q5,     d12,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vmlsl.u8        q5,     d14,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vqrshrun.s16    d9,     q10,    #6
+    vdup.16         q11,    r7
+    vmlal.u8        q5,     d16,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vmlsl.u8        q5,     d18,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    mov             r7,     #0x4000
+    vrhadd.u8       d8,     d8,     d0
+    vrhadd.u8       d9,     d9,     d2
+    vmlsl.u8        q11,    d1,     d24
+    vmlal.u8        q11,    d3,     d25
+    vdup.16         q10,    r7
+    vmlsl.u8        q11,    d5,     d26
+    pld             [r12,   r2,     lsl #2]
+    pld             [r4,    r2,     lsl #2]
+    addeq           r12,    r12,    r9      ;increment the src pointer by
+                                            ; 2*src_strd-wd
+    addeq           r4,     r12,    r2      ;pu1_src + src_strd
+    vmlal.u8        q11,    d7,     d27
+    vmlal.u8        q11,    d13,    d28
+    vst1.8          {q4},   [r1]!           ;store the result pu1_dst
+    subeq           r14,    r14,    #2
+    vhadd.s16       q5,     q5,     q10
+    vmlsl.u8        q11,    d15,    d29
+    addeq           r1,     r1,     r8
+    vmlal.u8        q11,    d17,    d30
+    cmp             r14,    #0
+    vmlsl.u8        q11,    d19,    d31
+    vqrshrun.s16    d10,    q5,     #6      ;right shift and saturating narrow
+                                            ; result 2
+    beq             epilog_16
+
+    vld1.u32        {q0},   [r12],  r11     ;vector load pu1_src
+    mov             r7,     #0xc000
+    cmp             r5,     #0
+    vld1.u32        {q1},   [r12],  r11
+    vhadd.s16       q11,    q11,    q10
+    vld1.u32        {q2},   [r12],  r11
+    vdup.16         q4,     r7
+    vmlsl.u8        q4,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vdup.16         q10,    r7
+    vld1.u32        {q3},   [r12],  r11
+    add             r7,     r6,     #8
+    moveq           r5,     r10
+    vld1.u8         {d0},   [r6]
+    vmlal.u8        q4,     d2,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u8         {d2},   [r7]
+    vqrshrun.s16    d11,    q11,    #6
+    vmlsl.u8        q4,     d4,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vld1.u32        {q6},   [r12],  r11
+    vrhadd.u8       d10,    d10,    d0
+    vld1.u32        {q7},   [r12],  r11
+    vrhadd.u8       d11,    d11,    d2
+    vld1.u32        {q8},   [r12],  r11
+    vmlal.u8        q4,     d6,     d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    vld1.u32        {q9},   [r12],  r11
+    vmlal.u8        q4,     d12,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vmlsl.u8        q4,     d14,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    mov             r7,     #0xc000
+    vmlal.u8        q4,     d16,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vst1.8          {q5},   [r6]!           ;store the result pu1_dst
+    vmlsl.u8        q4,     d18,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    addeq           r6,     r1,     r3      ;pu1_dst + dst_strd
+    b               inner_loop_16
+
+epilog_16
+    mov             r7,     #0x4000
+    ldr             r0,     [sp],   #4
+    ldr             r10,    [sp,    #120]
+    vdup.16         q10,    r7
+    vhadd.s16       q11,    q11,    q10
+    vqrshrun.s16    d11,    q11,    #6
+    add             r7,     r6,     #8
+    vld1.u8         {d20},  [r6]
+    vld1.u8         {d21},  [r7]
+    vrhadd.u8       d10,    d10,    d20
+    vrhadd.u8       d11,    d11,    d21
+    vst1.8          {q5},   [r6]!           ;store the result pu1_dst
+    ldr             r7,     [sp],   #4
+    cmp             r10,    #24
+    beq             outer_loop8_residual
+
+end_loops1
+    b               end_func
+
+outer_loop4_residual
+    sub             r12,    r0,     #3      ;pu1_src - 3
+    mov             r1,     r7
+    add             r1,     #8
+    mov             r10,    #4
+    add             r12,    #8
+    mov             r14,    #16
+    add             r8,     #4
+    add             r9,     #4
+
+outer_loop_4
+    add             r6,     r1,     r3      ;pu1_dst + dst_strd
+    add             r4,     r12,    r2      ;pu1_src + src_strd
+    subs            r5,     r10,    #0      ;checks wd
+    ble             end_inner_loop_4
+
+inner_loop_4
+    vld1.u32        {d0},   [r12],  r11     ;vector load pu1_src
+    vld1.u32        {d1},   [r12],  r11
+    vld1.u32        {d2},   [r12],  r11
+    vld1.u32        {d3},   [r12],  r11
+    vld1.u32        {d4},   [r12],  r11
+    vld1.u32        {d5},   [r12],  r11
+    vld1.u32        {d6},   [r12],  r11
+    vld1.u32        {d7},   [r12],  r11
+    sub             r12,    r12,    #4
+    vld1.u32        {d12},  [r4],   r11     ;vector load pu1_src + src_strd
+    vld1.u32        {d13},  [r4],   r11
+    vzip.32         d0,     d12             ;vector zip the i iteration and ii
+                                            ; iteration in single register
+    vld1.u32        {d14},  [r4],   r11
+    vzip.32         d1,     d13
+    vld1.u32        {d15},  [r4],   r11
+    vzip.32         d2,     d14
+    vld1.u32        {d16},  [r4],   r11
+    vzip.32         d3,     d15
+    vld1.u32        {d17},  [r4],   r11
+    vzip.32         d4,     d16
+    vld1.u32        {d18},  [r4],   r11
+    vzip.32         d5,     d17
+    vld1.u32        {d19},  [r4],   r11
+    mov             r7,     #0xc000
+    vdup.16         q4,     r7
+    sub             r4,     r4,     #4
+    vzip.32         d6,     d18
+    vzip.32         d7,     d19
+    vmlal.u8        q4,     d1,     d25     ;arithmetic operations for ii
+                                            ; iteration in the same time
+    vmlsl.u8        q4,     d0,     d24
+    vmlsl.u8        q4,     d2,     d26
+    vmlal.u8        q4,     d3,     d27
+    vmlal.u8        q4,     d4,     d28
+    vmlsl.u8        q4,     d5,     d29
+    vmlal.u8        q4,     d6,     d30
+    vmlsl.u8        q4,     d7,     d31
+    mov             r7,     #0x4000
+    vdup.16         q10,    r7
+    vhadd.s16       q4,     q4,     q10
+    vqrshrun.s16    d8,     q4,     #6
+    vld1.u32        {d10[0]},       [r1]
+    vld1.u32        {d10[1]},       [r6]
+    vrhadd.u8       d8,     d8,     d10
+    vst1.32         {d8[0]},[r1]!           ;store the i iteration result which
+                                            ; is in upper part of the register
+    vst1.32         {d8[1]},[r6]!           ;store the ii iteration result which
+                                            ; is in lower part of the register
+    subs            r5,     r5,     #4      ;decrement the wd by 4
+    bgt             inner_loop_4
+
+end_inner_loop_4
+    subs            r14,    r14,    #2      ;decrement the ht by 4
+    add             r12,    r12,    r9      ;increment the input pointer
+                                            ; 2*src_strd-wd
+    add             r1,     r1,     r8      ;increment the output pointer
+                                            ; 2*dst_strd-wd
+    bgt             outer_loop_4
+
+end_func
+    vpop            {d8  -  d15}
+    ldmfd           sp!,    {r4  -  r12,    r15} ;reload the registers from sp
+
+    ENDP
+
+    END
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_neon_asm.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_neon_asm.asm
deleted file mode 100644
index 1c2ee50..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_neon_asm.asm
+++ /dev/null
@@ -1,295 +0,0 @@
-;
-;  Copyright (c) 2013 The WebM project authors. All Rights Reserved.
-;
-;  Use of this source code is governed by a BSD-style license
-;  that can be found in the LICENSE file in the root of the source
-;  tree. An additional intellectual property rights grant can be found
-;  in the file PATENTS.  All contributing project authors may
-;  be found in the AUTHORS file in the root of the source tree.
-;
-
-
-    ; These functions are only valid when:
-    ; x_step_q4 == 16
-    ; w%4 == 0
-    ; h%4 == 0
-    ; taps == 8
-    ; VP9_FILTER_WEIGHT == 128
-    ; VP9_FILTER_SHIFT == 7
-
-    EXPORT  |vpx_convolve8_avg_horiz_neon|
-    EXPORT  |vpx_convolve8_avg_vert_neon|
-    ARM
-    REQUIRE8
-    PRESERVE8
-
-    AREA ||.text||, CODE, READONLY, ALIGN=2
-
-    ; Multiply and accumulate by q0
-    MACRO
-    MULTIPLY_BY_Q0 $dst, $src0, $src1, $src2, $src3, $src4, $src5, $src6, $src7
-    vmull.s16 $dst, $src0, d0[0]
-    vmlal.s16 $dst, $src1, d0[1]
-    vmlal.s16 $dst, $src2, d0[2]
-    vmlal.s16 $dst, $src3, d0[3]
-    vmlal.s16 $dst, $src4, d1[0]
-    vmlal.s16 $dst, $src5, d1[1]
-    vmlal.s16 $dst, $src6, d1[2]
-    vmlal.s16 $dst, $src7, d1[3]
-    MEND
-
-; r0    const uint8_t *src
-; r1    int src_stride
-; r2    uint8_t *dst
-; r3    int dst_stride
-; sp[]const int16_t *filter
-; sp[]int x0_q4
-; sp[]int x_step_q4 ; unused
-; sp[]int y0_q4
-; sp[]int y_step_q4 ; unused
-; sp[]int w
-; sp[]int h
-
-|vpx_convolve8_avg_horiz_neon| PROC
-    push            {r4-r10, lr}
-
-    sub             r0, r0, #3              ; adjust for taps
-
-    ldrd            r4, r5, [sp, #32]       ; filter, x0_q4
-    add             r4, r5, lsl #4
-    ldrd            r6, r7, [sp, #52]       ; w, h
-
-    vld1.s16        {q0}, [r4]              ; filter
-
-    sub             r8, r1, r1, lsl #2      ; -src_stride * 3
-    add             r8, r8, #4              ; -src_stride * 3 + 4
-
-    sub             r4, r3, r3, lsl #2      ; -dst_stride * 3
-    add             r4, r4, #4              ; -dst_stride * 3 + 4
-
-    rsb             r9, r6, r1, lsl #2      ; reset src for outer loop
-    sub             r9, r9, #7
-    rsb             r12, r6, r3, lsl #2     ; reset dst for outer loop
-
-    mov             r10, r6                 ; w loop counter
-
-vpx_convolve8_avg_loop_horiz_v
-    vld1.8          {d24}, [r0], r1
-    vld1.8          {d25}, [r0], r1
-    vld1.8          {d26}, [r0], r1
-    vld1.8          {d27}, [r0], r8
-
-    vtrn.16         q12, q13
-    vtrn.8          d24, d25
-    vtrn.8          d26, d27
-
-    pld             [r0, r1, lsl #2]
-
-    vmovl.u8        q8, d24
-    vmovl.u8        q9, d25
-    vmovl.u8        q10, d26
-    vmovl.u8        q11, d27
-
-    ; save a few instructions in the inner loop
-    vswp            d17, d18
-    vmov            d23, d21
-
-    add             r0, r0, #3
-
-vpx_convolve8_avg_loop_horiz
-    add             r5, r0, #64
-
-    vld1.32         {d28[]}, [r0], r1
-    vld1.32         {d29[]}, [r0], r1
-    vld1.32         {d31[]}, [r0], r1
-    vld1.32         {d30[]}, [r0], r8
-
-    pld             [r5]
-
-    vtrn.16         d28, d31
-    vtrn.16         d29, d30
-    vtrn.8          d28, d29
-    vtrn.8          d31, d30
-
-    pld             [r5, r1]
-
-    ; extract to s16
-    vtrn.32         q14, q15
-    vmovl.u8        q12, d28
-    vmovl.u8        q13, d29
-
-    pld             [r5, r1, lsl #1]
-
-    ; slightly out of order load to match the existing data
-    vld1.u32        {d6[0]}, [r2], r3
-    vld1.u32        {d7[0]}, [r2], r3
-    vld1.u32        {d6[1]}, [r2], r3
-    vld1.u32        {d7[1]}, [r2], r3
-
-    sub             r2, r2, r3, lsl #2      ; reset for store
-
-    ; src[] * filter
-    MULTIPLY_BY_Q0  q1,  d16, d17, d20, d22, d18, d19, d23, d24
-    MULTIPLY_BY_Q0  q2,  d17, d20, d22, d18, d19, d23, d24, d26
-    MULTIPLY_BY_Q0  q14, d20, d22, d18, d19, d23, d24, d26, d27
-    MULTIPLY_BY_Q0  q15, d22, d18, d19, d23, d24, d26, d27, d25
-
-    pld             [r5, -r8]
-
-    ; += 64 >> 7
-    vqrshrun.s32    d2, q1, #7
-    vqrshrun.s32    d3, q2, #7
-    vqrshrun.s32    d4, q14, #7
-    vqrshrun.s32    d5, q15, #7
-
-    ; saturate
-    vqmovn.u16      d2, q1
-    vqmovn.u16      d3, q2
-
-    ; transpose
-    vtrn.16         d2, d3
-    vtrn.32         d2, d3
-    vtrn.8          d2, d3
-
-    ; average the new value and the dst value
-    vrhadd.u8       q1, q1, q3
-
-    vst1.u32        {d2[0]}, [r2@32], r3
-    vst1.u32        {d3[0]}, [r2@32], r3
-    vst1.u32        {d2[1]}, [r2@32], r3
-    vst1.u32        {d3[1]}, [r2@32], r4
-
-    vmov            q8,  q9
-    vmov            d20, d23
-    vmov            q11, q12
-    vmov            q9,  q13
-
-    subs            r6, r6, #4              ; w -= 4
-    bgt             vpx_convolve8_avg_loop_horiz
-
-    ; outer loop
-    mov             r6, r10                 ; restore w counter
-    add             r0, r0, r9              ; src += src_stride * 4 - w
-    add             r2, r2, r12             ; dst += dst_stride * 4 - w
-    subs            r7, r7, #4              ; h -= 4
-    bgt vpx_convolve8_avg_loop_horiz_v
-
-    pop             {r4-r10, pc}
-
-    ENDP
-
-|vpx_convolve8_avg_vert_neon| PROC
-    push            {r4-r8, lr}
-
-    ; adjust for taps
-    sub             r0, r0, r1
-    sub             r0, r0, r1, lsl #1
-
-    ldr             r4, [sp, #24]           ; filter
-    ldr             r5, [sp, #36]           ; y0_q4
-    add             r4, r5, lsl #4
-    ldr             r6, [sp, #44]           ; w
-    ldr             lr, [sp, #48]           ; h
-
-    vld1.s16        {q0}, [r4]              ; filter
-
-    lsl             r1, r1, #1
-    lsl             r3, r3, #1
-
-vpx_convolve8_avg_loop_vert_h
-    mov             r4, r0
-    add             r7, r0, r1, asr #1
-    mov             r5, r2
-    add             r8, r2, r3, asr #1
-    mov             r12, lr                 ; h loop counter
-
-    vld1.u32        {d16[0]}, [r4], r1
-    vld1.u32        {d16[1]}, [r7], r1
-    vld1.u32        {d18[0]}, [r4], r1
-    vld1.u32        {d18[1]}, [r7], r1
-    vld1.u32        {d20[0]}, [r4], r1
-    vld1.u32        {d20[1]}, [r7], r1
-    vld1.u32        {d22[0]}, [r4], r1
-
-    vmovl.u8        q8, d16
-    vmovl.u8        q9, d18
-    vmovl.u8        q10, d20
-    vmovl.u8        q11, d22
-
-vpx_convolve8_avg_loop_vert
-    ; always process a 4x4 block at a time
-    vld1.u32        {d24[0]}, [r7], r1
-    vld1.u32        {d26[0]}, [r4], r1
-    vld1.u32        {d26[1]}, [r7], r1
-    vld1.u32        {d24[1]}, [r4], r1
-
-    ; extract to s16
-    vmovl.u8        q12, d24
-    vmovl.u8        q13, d26
-
-    vld1.u32        {d6[0]}, [r5@32], r3
-    vld1.u32        {d6[1]}, [r8@32], r3
-    vld1.u32        {d7[0]}, [r5@32], r3
-    vld1.u32        {d7[1]}, [r8@32], r3
-
-    pld             [r7]
-    pld             [r4]
-
-    ; src[] * filter
-    MULTIPLY_BY_Q0  q1,  d16, d17, d18, d19, d20, d21, d22, d24
-
-    pld             [r7, r1]
-    pld             [r4, r1]
-
-    MULTIPLY_BY_Q0  q2,  d17, d18, d19, d20, d21, d22, d24, d26
-
-    pld             [r5]
-    pld             [r8]
-
-    MULTIPLY_BY_Q0  q14, d18, d19, d20, d21, d22, d24, d26, d27
-
-    pld             [r5, r3]
-    pld             [r8, r3]
-
-    MULTIPLY_BY_Q0  q15, d19, d20, d21, d22, d24, d26, d27, d25
-
-    ; += 64 >> 7
-    vqrshrun.s32    d2, q1, #7
-    vqrshrun.s32    d3, q2, #7
-    vqrshrun.s32    d4, q14, #7
-    vqrshrun.s32    d5, q15, #7
-
-    ; saturate
-    vqmovn.u16      d2, q1
-    vqmovn.u16      d3, q2
-
-    ; average the new value and the dst value
-    vrhadd.u8       q1, q1, q3
-
-    sub             r5, r5, r3, lsl #1      ; reset for store
-    sub             r8, r8, r3, lsl #1
-
-    vst1.u32        {d2[0]}, [r5@32], r3
-    vst1.u32        {d2[1]}, [r8@32], r3
-    vst1.u32        {d3[0]}, [r5@32], r3
-    vst1.u32        {d3[1]}, [r8@32], r3
-
-    vmov            q8, q10
-    vmov            d18, d22
-    vmov            d19, d24
-    vmov            q10, q13
-    vmov            d22, d25
-
-    subs            r12, r12, #4            ; h -= 4
-    bgt             vpx_convolve8_avg_loop_vert
-
-    ; outer loop
-    add             r0, r0, #4
-    add             r2, r2, #4
-    subs            r6, r6, #4              ; w -= 4
-    bgt             vpx_convolve8_avg_loop_vert_h
-
-    pop             {r4-r8, pc}
-
-    ENDP
-    END
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_vert_filter_type1_neon.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_vert_filter_type1_neon.asm
new file mode 100644
index 0000000..d310a83
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_vert_filter_type1_neon.asm
@@ -0,0 +1,486 @@
+;
+;  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+;
+;  Use of this source code is governed by a BSD-style license
+;  that can be found in the LICENSE file in the root of the source
+;  tree. An additional intellectual property rights grant can be found
+;  in the file PATENTS.  All contributing project authors may
+;  be found in the AUTHORS file in the root of the source tree.
+;
+;**************Variables Vs Registers***********************************
+;    r0 => src
+;    r1 => dst
+;    r2 =>  src_stride
+;    r6 =>  dst_stride
+;    r12 => filter_y0
+;    r5 =>  ht
+;    r3 =>  wd
+
+    EXPORT          |vpx_convolve8_avg_vert_filter_type1_neon|
+    ARM
+    REQUIRE8
+    PRESERVE8
+
+    AREA  ||.text||, CODE, READONLY, ALIGN=2
+
+|vpx_convolve8_avg_vert_filter_type1_neon| PROC
+
+    stmfd           sp!,    {r4  -  r12,    r14} ;stack stores the values of
+                                                 ; the arguments
+    vpush           {d8  -  d15}                 ; stack offset by 64
+    mov             r4,     r1
+    mov             r1,     r2
+    mov             r2,     r4
+    vmov.i16        q15,    #0x4000
+    mov             r11,    #0xc000
+    ldr             r12,    [sp,    #104]   ;load filter
+    ldr             r6,     [sp,    #116]   ;load y0_q4
+    add             r12,    r12,    r6,     lsl #4 ;r12 = filter[y0_q4]
+    mov             r6,     r3
+    ldr             r5,     [sp,    #124]   ;load wd
+    vld2.8          {d0,    d1},    [r12]   ;coeff = vld1_s8(pi1_coeff)
+    sub             r12,    r2,     r2,     lsl #2 ;src_strd & pi1_coeff
+    vabs.s8         d0,     d0              ;vabs_s8(coeff)
+    add             r0,     r0,     r12     ;r0->pu1_src    r12->pi1_coeff
+    ldr             r3,     [sp,    #128]   ;load ht
+    subs            r7,     r3,     #0      ;r3->ht
+    vdup.u8         d22,    d0[0]           ;coeffabs_0 = vdup_lane_u8(coeffabs,
+                                            ; 0);
+    cmp             r5,     #8
+    vdup.u8         d23,    d0[1]           ;coeffabs_1 = vdup_lane_u8(coeffabs,
+                                            ; 1);
+    vdup.u8         d24,    d0[2]           ;coeffabs_2 = vdup_lane_u8(coeffabs,
+                                            ; 2);
+    vdup.u8         d25,    d0[3]           ;coeffabs_3 = vdup_lane_u8(coeffabs,
+                                            ; 3);
+    vdup.u8         d26,    d0[4]           ;coeffabs_4 = vdup_lane_u8(coeffabs,
+                                            ; 4);
+    vdup.u8         d27,    d0[5]           ;coeffabs_5 = vdup_lane_u8(coeffabs,
+                                            ; 5);
+    vdup.u8         d28,    d0[6]           ;coeffabs_6 = vdup_lane_u8(coeffabs,
+                                            ; 6);
+    vdup.u8         d29,    d0[7]           ;coeffabs_7 = vdup_lane_u8(coeffabs,
+                                            ; 7);
+    blt             core_loop_wd_4          ;core loop wd 4 jump
+    str             r0,     [sp,  #-4]!
+    str             r1,     [sp,  #-4]!
+    bic             r4,     r5,     #7      ;r5 ->wd
+    rsb             r9,     r4,     r6,     lsl #2 ;r6->dst_strd    r5    ->wd
+    rsb             r8,     r4,     r2,     lsl #2 ;r2->src_strd
+    mov             r3,     r5,     lsr #3  ;divide by 8
+    mul             r7,     r3              ;multiply height by width
+    sub             r7,     #4              ;subtract by one for epilog
+
+prolog
+    and             r10,    r0,     #31
+    add             r3,     r0,     r2      ;pu1_src_tmp += src_strd;
+    vdup.16         q4,     r11
+    vld1.u8         {d1},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vld1.u8         {d0},   [r0]!           ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    subs            r4,     r4,     #8
+    vld1.u8         {d2},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d1,     d23     ;mul_res1 = vmull_u8(src_tmp2,
+                                            ; coeffabs_1);
+    vld1.u8         {d3},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d0,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_0);
+    vld1.u8         {d4},   [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d2,     d24     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_2);
+    vld1.u8         {d5},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d3,     d25     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_3);
+    vld1.u8         {d6},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_4);
+    vld1.u8         {d7},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d5,     d27     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp2, coeffabs_5);
+    vld1.u8         {d16},  [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d6,     d28     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_6);
+    vld1.u8         {d17},  [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d7,     d29     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_7);
+    vdup.16         q5,     r11
+    vld1.u8         {d18},  [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q5,     d2,     d23     ;mul_res2 = vmull_u8(src_tmp3,
+                                            ; coeffabs_1);
+    addle           r0,     r0,     r8
+    vmlsl.u8        q5,     d1,     d22     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_0);
+    bicle           r4,     r5,     #7      ;r5 ->wd
+    vmlal.u8        q5,     d3,     d24     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_2);
+    pld             [r3]
+    vmlal.u8        q5,     d4,     d25     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_3);
+    vhadd.s16       q4,     q4,     q15
+    vdup.16         q6,     r11
+    pld             [r3,    r2]
+    pld             [r3,    r2,     lsl #1]
+    vmlal.u8        q5,     d5,     d26     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_4);
+    add             r3,     r3,     r2
+    vmlal.u8        q5,     d6,     d27     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp3, coeffabs_5);
+    pld             [r3,    r2,     lsl #1]
+    vmlsl.u8        q5,     d7,     d28     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_6);
+    add             r3,     r0,     r2      ;pu1_src_tmp += src_strd;
+    vmlsl.u8        q5,     d16,    d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_7);
+    vld1.u8         {d20},  [r1]
+    vqrshrun.s16    d8,     q4,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vld1.u8         {d1},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q6,     d3,     d23
+    vld1.u8         {d0},   [r0]!           ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q6,     d2,     d22
+    vrhadd.u8       d8,     d8,     d20
+    vld1.u8         {d2},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q6,     d4,     d24
+    vhadd.s16       q5,     q5,     q15
+    vdup.16         q7,     r11
+    vmlal.u8        q6,     d5,     d25
+    vmlal.u8        q6,     d6,     d26
+    add             r14,    r1,     r6
+    vmlal.u8        q6,     d7,     d27
+    vmlsl.u8        q6,     d16,    d28
+    vst1.8          {d8},   [r1]!           ;vst1_u8(pu1_dst,sto_res);
+    vmlsl.u8        q6,     d17,    d29
+    vld1.u8         {d20},  [r14]
+    vqrshrun.s16    d10,    q5,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    addle           r1,     r1,     r9
+    vmlsl.u8        q7,     d4,     d23
+    subs            r7,     r7,     #4
+    vmlsl.u8        q7,     d3,     d22
+    vmlal.u8        q7,     d5,     d24
+    vld1.u8         {d3},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q7,     d6,     d25
+    vrhadd.u8       d10,    d10,    d20
+    vhadd.s16       q6,     q6,     q15
+    vdup.16         q4,     r11
+    vmlal.u8        q7,     d7,     d26
+    vld1.u8         {d4},   [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q7,     d16,    d27
+    vld1.u8         {d5},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d17,    d28
+    vld1.u8         {d6},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d18,    d29
+    vld1.u8         {d7},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vst1.8          {d10},  [r14],  r6      ;vst1_u8(pu1_dst_tmp,sto_res);
+    vqrshrun.s16    d12,    q6,     #6
+    blt             epilog_end              ;jumps to epilog_end
+
+    beq             epilog                  ;jumps to epilog
+
+main_loop_8
+    subs            r4,     r4,     #8
+    vmlsl.u8        q4,     d1,     d23     ;mul_res1 = vmull_u8(src_tmp2,
+                                            ; coeffabs_1);
+    vld1.u8         {d20},  [r14]
+    vmlsl.u8        q4,     d0,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_0);
+    addle           r0,     r0,     r8
+    bicle           r4,     r5,     #7      ;r5 ->wd
+    vmlal.u8        q4,     d2,     d24     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_2);
+    vld1.u8         {d16},  [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d3,     d25     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_3);
+    vrhadd.u8       d12,    d12,    d20
+    vhadd.s16       q7,     q7,     q15
+    vdup.16         q5,     r11
+    vld1.u8         {d17},  [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_4);
+    vld1.u8         {d18},  [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d5,     d27     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp2, coeffabs_5);
+    vmlsl.u8        q4,     d6,     d28     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_6);
+    vmlsl.u8        q4,     d7,     d29     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_7);
+    vst1.8          {d12},  [r14],  r6
+    vld1.u8         {d20},  [r14]
+    vqrshrun.s16    d14,    q7,     #6
+    add             r3,     r0,     r2      ;pu1_src_tmp += src_strd;
+    vmlsl.u8        q5,     d2,     d23     ;mul_res2 = vmull_u8(src_tmp3,
+                                            ; coeffabs_1);
+    vld1.u8         {d0},   [r0]!           ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q5,     d1,     d22     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_0);
+    vrhadd.u8       d14,    d14,    d20
+    vmlal.u8        q5,     d3,     d24     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_2);
+    vld1.u8         {d1},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q5,     d4,     d25     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_3);
+    vhadd.s16       q4,     q4,     q15
+    vdup.16         q6,     r11
+    vst1.8          {d14},  [r14],  r6
+    vmlal.u8        q5,     d5,     d26     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_4);
+    add             r14,    r1,     #0
+    vmlal.u8        q5,     d6,     d27     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp3, coeffabs_5);
+    add             r1,     r1,     #8
+    vmlsl.u8        q5,     d7,     d28     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_6);
+    addle           r1,     r1,     r9
+    vmlsl.u8        q5,     d16,    d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_7);
+    vld1.u8         {d20},  [r14]
+    vqrshrun.s16    d8,     q4,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vmlsl.u8        q6,     d3,     d23
+    add             r10,    r3,     r2,     lsl #3 ; 10*strd - 8+2
+    vmlsl.u8        q6,     d2,     d22
+    vrhadd.u8       d8,     d8,     d20
+    add             r10,    r10,    r2      ; 11*strd
+    vmlal.u8        q6,     d4,     d24
+    vld1.u8         {d2},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q6,     d5,     d25
+    vhadd.s16       q5,     q5,     q15
+    vdup.16         q7,     r11
+    vmlal.u8        q6,     d6,     d26
+    vst1.8          {d8},   [r14],  r6      ;vst1_u8(pu1_dst,sto_res);
+    pld             [r10]                   ;11+ 0
+    vmlal.u8        q6,     d7,     d27
+    pld             [r10,   r2]             ;11+ 1*strd
+    pld             [r10,   r2,     lsl #1] ;11+ 2*strd
+    vmlsl.u8        q6,     d16,    d28
+    add             r10,    r10,    r2      ;12*strd
+    vmlsl.u8        q6,     d17,    d29
+    vld1.u8         {d20},  [r14]
+    vqrshrun.s16    d10,    q5,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+
+    pld             [r10,   r2,     lsl #1] ;11+ 3*strd
+    vmlsl.u8        q7,     d4,     d23
+    vmlsl.u8        q7,     d3,     d22
+    vrhadd.u8       d10,    d10,    d20
+    subs            r7,     r7,     #4
+    vmlal.u8        q7,     d5,     d24
+    vmlal.u8        q7,     d6,     d25
+    vld1.u8         {d3},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vhadd.s16       q6,     q6,     q15
+    vdup.16         q4,     r11
+    vmlal.u8        q7,     d7,     d26
+    vld1.u8         {d4},   [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q7,     d16,    d27
+    vld1.u8         {d5},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d17,    d28
+    vld1.u8         {d6},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d18,    d29
+    vld1.u8         {d7},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vqrshrun.s16    d12,    q6,     #6
+    vst1.8          {d10},  [r14],  r6      ;vst1_u8(pu1_dst_tmp,sto_res);
+    bgt             main_loop_8             ;jumps to main_loop_8
+
+epilog
+    vld1.u8         {d20},  [r14]
+    vmlsl.u8        q4,     d1,     d23     ;mul_res1 = vmull_u8(src_tmp2,
+                                            ; coeffabs_1);
+    vmlsl.u8        q4,     d0,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_0);
+    vmlal.u8        q4,     d2,     d24     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_2);
+    vrhadd.u8       d12,    d12,    d20
+    vmlal.u8        q4,     d3,     d25     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_3);
+    vhadd.s16       q7,     q7,     q15
+    vdup.16         q5,     r11
+    vmlal.u8        q4,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_4);
+    vmlal.u8        q4,     d5,     d27     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp2, coeffabs_5);
+    vmlsl.u8        q4,     d6,     d28     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_6);
+    vst1.8          {d12},  [r14],  r6
+    vmlsl.u8        q4,     d7,     d29     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_7);
+    vld1.u8         {d20},  [r14]
+    vqrshrun.s16    d14,    q7,     #6
+    vld1.u8         {d16},  [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q5,     d2,     d23     ;mul_res2 = vmull_u8(src_tmp3,
+                                            ; coeffabs_1);
+    vmlsl.u8        q5,     d1,     d22     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_0);
+    vrhadd.u8       d14,    d14,    d20
+    vmlal.u8        q5,     d3,     d24     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_2);
+    vmlal.u8        q5,     d4,     d25     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_3);
+    vhadd.s16       q4,     q4,     q15
+    vdup.16         q6,     r11
+    vmlal.u8        q5,     d5,     d26     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_4);
+    vmlal.u8        q5,     d6,     d27     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp3, coeffabs_5);
+    vmlsl.u8        q5,     d7,     d28     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_6);
+    vst1.8          {d14},  [r14],  r6
+    vmlsl.u8        q5,     d16,    d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_7);
+    vld1.u8         {d20},  [r1]
+    vqrshrun.s16    d8,     q4,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vld1.u8         {d17},  [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q6,     d3,     d23
+    vmlsl.u8        q6,     d2,     d22
+    vrhadd.u8       d8,     d8,     d20
+    vmlal.u8        q6,     d4,     d24
+    vmlal.u8        q6,     d5,     d25
+    vhadd.s16       q5,     q5,     q15
+    vdup.16         q7,     r11
+    vmlal.u8        q6,     d6,     d26
+    vmlal.u8        q6,     d7,     d27
+    add             r14,    r1,     r6
+    vmlsl.u8        q6,     d16,    d28
+    vst1.8          {d8},   [r1]!           ;vst1_u8(pu1_dst,sto_res);
+    vmlsl.u8        q6,     d17,    d29
+    vld1.u8         {d20},  [r14]
+    vqrshrun.s16    d10,    q5,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vld1.u8         {d18},  [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d4,     d23
+    vmlsl.u8        q7,     d3,     d22
+    vrhadd.u8       d10,    d10,    d20
+    vmlal.u8        q7,     d5,     d24
+    vmlal.u8        q7,     d6,     d25
+    vhadd.s16       q6,     q6,     q15
+    vmlal.u8        q7,     d7,     d26
+    vmlal.u8        q7,     d16,    d27
+    vmlsl.u8        q7,     d17,    d28
+    vmlsl.u8        q7,     d18,    d29
+    vst1.8          {d10},  [r14],  r6      ;vst1_u8(pu1_dst_tmp,sto_res);
+    vqrshrun.s16    d12,    q6,     #6
+
+epilog_end
+    vld1.u8         {d20},  [r14]
+    vrhadd.u8       d12,    d12,    d20
+    vst1.8          {d12},  [r14],  r6
+    vhadd.s16       q7,     q7,     q15
+    vqrshrun.s16    d14,    q7,     #6
+    vld1.u8         {d20},  [r14]
+    vrhadd.u8       d14,    d14,    d20
+    vst1.8          {d14},  [r14],  r6
+
+end_loops
+    tst             r5,     #7
+    ldr             r1,     [sp],   #4
+    ldr             r0,     [sp],   #4
+    vpopeq          {d8  -  d15}
+    ldmfdeq         sp!,    {r4  -  r12,    r15} ;reload the registers from sp
+    mov             r5,     #4
+    add             r0,     r0,     #8
+    add             r1,     r1,     #8
+    mov             r7,     #16
+
+core_loop_wd_4
+    rsb             r9,     r5,     r6,     lsl #2 ;r6->dst_strd    r5    ->wd
+    rsb             r8,     r5,     r2,     lsl #2 ;r2->src_strd
+    vmov.i8         d4,     #0
+
+outer_loop_wd_4
+    subs            r12,    r5,     #0
+    ble             end_inner_loop_wd_4     ;outer loop jump
+
+inner_loop_wd_4
+    add             r3,     r0,     r2
+    vld1.u32        {d4[1]},[r3],   r2      ;src_tmp1 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp1, 1);
+    subs            r12,    r12,    #4
+    vdup.u32        d5,     d4[1]           ;src_tmp2 = vdup_lane_u32(src_tmp1,
+                                            ; 1);
+    vld1.u32        {d5[1]},[r3],   r2      ;src_tmp2 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp2, 1);
+    vld1.u32        {d4[0]},[r0]            ;src_tmp1 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp1, 0);
+    vdup.16         q0,     r11
+    vmlsl.u8        q0,     d5,     d23     ;mul_res1 =
+                                            ; vmull_u8(vreinterpret_u8_u32(src_tmp2), coeffabs_1);
+    vdup.u32        d6,     d5[1]           ;src_tmp3 = vdup_lane_u32(src_tmp2,
+                                            ; 1);
+    add             r0,     r0,     #4
+    vld1.u32        {d6[1]},[r3],   r2      ;src_tmp3 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp3, 1);
+    vmlsl.u8        q0,     d4,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp1), coeffabs_0);
+    vdup.u32        d7,     d6[1]           ;src_tmp4 = vdup_lane_u32(src_tmp3,
+                                            ; 1);
+    vld1.u32        {d7[1]},[r3],   r2      ;src_tmp4 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp4, 1);
+    vmlal.u8        q0,     d6,     d24     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp3), coeffabs_2);
+    vdup.16         q4,     r11
+    vmlsl.u8        q4,     d7,     d23
+    vdup.u32        d4,     d7[1]           ;src_tmp1 = vdup_lane_u32(src_tmp4,
+                                            ; 1);
+    vmull.u8        q1,     d7,     d25     ;mul_res2 =
+                                            ; vmull_u8(vreinterpret_u8_u32(src_tmp4), coeffabs_3);
+    vld1.u32        {d4[1]},[r3],   r2      ;src_tmp1 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp1, 1);
+    vmlsl.u8        q4,     d6,     d22
+    vmlal.u8        q0,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp1), coeffabs_4);
+    vdup.u32        d5,     d4[1]           ;src_tmp2 = vdup_lane_u32(src_tmp1,
+                                            ; 1);
+    vmlal.u8        q4,     d4,     d24
+    vld1.u32        {d5[1]},[r3],   r2      ;src_tmp2 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp2, 1);
+    vmlal.u8        q1,     d5,     d27     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; vreinterpret_u8_u32(src_tmp2), coeffabs_5);
+    vdup.u32        d6,     d5[1]           ;src_tmp3 = vdup_lane_u32(src_tmp2,
+                                            ; 1);
+    vmlal.u8        q4,     d5,     d25
+    vld1.u32        {d6[1]},[r3],   r2      ;src_tmp3 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp3, 1);
+    vmlsl.u8        q0,     d6,     d28     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp3), coeffabs_6);
+    vdup.u32        d7,     d6[1]           ;src_tmp4 = vdup_lane_u32(src_tmp3,
+                                            ; 1);
+    vmlal.u8        q4,     d6,     d26
+    vld1.u32        {d7[1]},[r3],   r2      ;src_tmp4 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp4, 1);
+    vmlsl.u8        q1,     d7,     d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; vreinterpret_u8_u32(src_tmp4), coeffabs_7);
+    vdup.u32        d4,     d7[1]
+    vadd.i16        q0,     q0,     q1      ;mul_res1 = vaddq_u16(mul_res1,
+                                            ; mul_res2);
+    vmlal.u8        q4,     d7,     d27
+    vld1.u32        {d4[1]},[r3],   r2
+    vmlsl.u8        q4,     d4,     d28
+    vdup.u32        d5,     d4[1]
+    vhadd.s16       q0,     q0,     q15
+    vqrshrun.s16    d0,     q0,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vld1.u32        {d5[1]},[r3]
+    add             r3,     r1,     r6
+    vld1.u32        {d20[0]},       [r1]
+    vld1.u32        {d20[1]},       [r3]
+    vrhadd.u8       d0,     d0,     d20
+    vst1.32         {d0[0]},[r1]            ;vst1_lane_u32((uint32_t *)pu1_dst,
+                                            ; vreinterpret_u32_u8(sto_res), 0);
+    vmlsl.u8        q4,     d5,     d29
+    vst1.32         {d0[1]},[r3],   r6      ;vst1_lane_u32((uint32_t
+                                            ; *)pu1_dst_tmp, vreinterpret_u32_u8(sto_res), 1);
+    vhadd.s16       q4,     q4,     q15
+    vqrshrun.s16    d8,     q4,     #6
+    mov             r4,     r3
+    vld1.u32        {d20[0]},       [r4],   r6
+    vld1.u32        {d20[1]},       [r4]
+    vrhadd.u8       d8,     d8,     d20
+    vst1.32         {d8[0]},[r3],   r6
+    add             r1,     r1,     #4
+    vst1.32         {d8[1]},[r3]
+    bgt             inner_loop_wd_4
+
+end_inner_loop_wd_4
+    subs            r7,     r7,     #4
+    add             r1,     r1,     r9
+    add             r0,     r0,     r8
+    bgt             outer_loop_wd_4
+
+    vpop            {d8  -  d15}
+    ldmfd           sp!,    {r4  -  r12,    r15} ;reload the registers from sp
+
+    ENDP
+
+    END
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_vert_filter_type2_neon.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_vert_filter_type2_neon.asm
new file mode 100644
index 0000000..c5695fb
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_avg_vert_filter_type2_neon.asm
@@ -0,0 +1,487 @@
+;
+;  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+;
+;  Use of this source code is governed by a BSD-style license
+;  that can be found in the LICENSE file in the root of the source
+;  tree. An additional intellectual property rights grant can be found
+;  in the file PATENTS.  All contributing project authors may
+;  be found in the AUTHORS file in the root of the source tree.
+;
+;**************Variables Vs Registers***********************************
+;    r0 => src
+;    r1 => dst
+;    r2 =>  src_stride
+;    r6 =>  dst_stride
+;    r12 => filter_y0
+;    r5 =>  ht
+;    r3 =>  wd
+
+    EXPORT          |vpx_convolve8_avg_vert_filter_type2_neon|
+    ARM
+    REQUIRE8
+    PRESERVE8
+
+    AREA  ||.text||, CODE, READONLY, ALIGN=2
+
+|vpx_convolve8_avg_vert_filter_type2_neon| PROC
+
+    stmfd           sp!,    {r4  -  r12,    r14} ;stack stores the values of
+                                                 ; the arguments
+    vpush           {d8  -  d15}                 ; stack offset by 64
+    mov             r4,     r1
+    mov             r1,     r2
+    mov             r2,     r4
+    vmov.i16        q15,    #0x4000
+    mov             r11,    #0xc000
+    ldr             r12,    [sp,    #104]   ;load filter
+    ldr             r6,     [sp,    #116]   ;load y0_q4
+    add             r12,    r12,    r6,     lsl #4 ;r12 = filter[y0_q4]
+    mov             r6,     r3
+    ldr             r5,     [sp,    #124]   ;load wd
+    vld2.8          {d0,    d1},    [r12]   ;coeff = vld1_s8(pi1_coeff)
+    sub             r12,    r2,     r2,     lsl #2 ;src_strd & pi1_coeff
+    vabs.s8         d0,     d0              ;vabs_s8(coeff)
+    add             r0,     r0,     r12     ;r0->pu1_src    r12->pi1_coeff
+    ldr             r3,     [sp,    #128]   ;load ht
+    subs            r7,     r3,     #0      ;r3->ht
+    vdup.u8         d22,    d0[0]           ;coeffabs_0 = vdup_lane_u8(coeffabs,
+                                            ; 0);
+    cmp             r5,     #8
+    vdup.u8         d23,    d0[1]           ;coeffabs_1 = vdup_lane_u8(coeffabs,
+                                            ; 1);
+    vdup.u8         d24,    d0[2]           ;coeffabs_2 = vdup_lane_u8(coeffabs,
+                                            ; 2);
+    vdup.u8         d25,    d0[3]           ;coeffabs_3 = vdup_lane_u8(coeffabs,
+                                            ; 3);
+    vdup.u8         d26,    d0[4]           ;coeffabs_4 = vdup_lane_u8(coeffabs,
+                                            ; 4);
+    vdup.u8         d27,    d0[5]           ;coeffabs_5 = vdup_lane_u8(coeffabs,
+                                            ; 5);
+    vdup.u8         d28,    d0[6]           ;coeffabs_6 = vdup_lane_u8(coeffabs,
+                                            ; 6);
+    vdup.u8         d29,    d0[7]           ;coeffabs_7 = vdup_lane_u8(coeffabs,
+                                            ; 7);
+    blt             core_loop_wd_4          ;core loop wd 4 jump
+
+    str             r0,     [sp,  #-4]!
+    str             r1,     [sp,  #-4]!
+    bic             r4,     r5,     #7      ;r5 ->wd
+    rsb             r9,     r4,     r6,     lsl #2 ;r6->dst_strd    r5    ->wd
+    rsb             r8,     r4,     r2,     lsl #2 ;r2->src_strd
+    mov             r3,     r5,     lsr #3  ;divide by 8
+    mul             r7,     r3              ;multiply height by width
+    sub             r7,     #4              ;subtract by one for epilog
+
+prolog
+    and             r10,    r0,     #31
+    add             r3,     r0,     r2      ;pu1_src_tmp += src_strd;
+    vdup.16         q4,     r11
+    vld1.u8         {d1},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vld1.u8         {d0},   [r0]!           ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    subs            r4,     r4,     #8
+    vld1.u8         {d2},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d1,     d23     ;mul_res1 = vmull_u8(src_tmp2,
+                                            ; coeffabs_1);
+    vld1.u8         {d3},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d0,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_0);
+    vld1.u8         {d4},   [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d2,     d24     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_2);
+    vld1.u8         {d5},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d3,     d25     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_3);
+    vld1.u8         {d6},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_4);
+    vld1.u8         {d7},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d5,     d27     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp2, coeffabs_5);
+    vld1.u8         {d16},  [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d6,     d28     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_6);
+    vld1.u8         {d17},  [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d7,     d29     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_7);
+    vdup.16         q5,     r11
+    vld1.u8         {d18},  [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q5,     d2,     d23     ;mul_res2 = vmull_u8(src_tmp3,
+                                            ; coeffabs_1);
+    addle           r0,     r0,     r8
+    vmlsl.u8        q5,     d1,     d22     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_0);
+    bicle           r4,     r5,     #7      ;r5 ->wd
+    vmlsl.u8        q5,     d3,     d24     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_2);
+    pld             [r3]
+    vmlal.u8        q5,     d4,     d25     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_3);
+    vhadd.s16       q4,     q4,     q15
+    vdup.16         q6,     r11
+    pld             [r3,    r2]
+    pld             [r3,    r2,     lsl #1]
+    vmlal.u8        q5,     d5,     d26     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_4);
+    add             r3,     r3,     r2
+    vmlsl.u8        q5,     d6,     d27     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp3, coeffabs_5);
+    pld             [r3,    r2,     lsl #1]
+    vmlal.u8        q5,     d7,     d28     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_6);
+    add             r3,     r0,     r2      ;pu1_src_tmp += src_strd;
+    vmlsl.u8        q5,     d16,    d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_7);
+    vld1.u8         {d20},  [r1]
+    vqrshrun.s16    d8,     q4,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vld1.u8         {d1},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q6,     d3,     d23
+    vld1.u8         {d0},   [r0]!           ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q6,     d2,     d22
+    vrhadd.u8       d8,     d8,     d20
+    vld1.u8         {d2},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q6,     d4,     d24
+    vhadd.s16       q5,     q5,     q15
+    vdup.16         q7,     r11
+    vmlal.u8        q6,     d5,     d25
+    vmlal.u8        q6,     d6,     d26
+    add             r14,    r1,     r6
+    vmlsl.u8        q6,     d7,     d27
+    vmlal.u8        q6,     d16,    d28
+    vst1.8          {d8},   [r1]!           ;vst1_u8(pu1_dst,sto_res);
+    vmlsl.u8        q6,     d17,    d29
+    vld1.u8         {d20},  [r14]
+    vqrshrun.s16    d10,    q5,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    addle           r1,     r1,     r9
+    vmlal.u8        q7,     d4,     d23
+    subs            r7,     r7,     #4
+    vmlsl.u8        q7,     d3,     d22
+    vmlsl.u8        q7,     d5,     d24
+    vld1.u8         {d3},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q7,     d6,     d25
+    vrhadd.u8       d10,    d10,    d20
+    vhadd.s16       q6,     q6,     q15
+    vdup.16         q4,     r11
+    vmlal.u8        q7,     d7,     d26
+    vld1.u8         {d4},   [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d16,    d27
+    vld1.u8         {d5},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q7,     d17,    d28
+    vld1.u8         {d6},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d18,    d29
+    vld1.u8         {d7},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vst1.8          {d10},  [r14],  r6      ;vst1_u8(pu1_dst_tmp,sto_res);
+    vqrshrun.s16    d12,    q6,     #6
+    blt             epilog_end              ;jumps to epilog_end
+
+    beq             epilog                  ;jumps to epilog
+
+main_loop_8
+    subs            r4,     r4,     #8
+    vmlal.u8        q4,     d1,     d23     ;mul_res1 = vmull_u8(src_tmp2,
+                                            ; coeffabs_1);
+    vld1.u8         {d20},  [r14]
+    vmlsl.u8        q4,     d0,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_0);
+    addle           r0,     r0,     r8
+    bicle           r4,     r5,     #7      ;r5 ->wd
+    vmlsl.u8        q4,     d2,     d24     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_2);
+    vld1.u8         {d16},  [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d3,     d25     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_3);
+    vrhadd.u8       d12,    d12,    d20
+    vhadd.s16       q7,     q7,     q15
+    vdup.16         q5,     r11
+    vld1.u8         {d17},  [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_4);
+    vld1.u8         {d18},  [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d5,     d27     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp2, coeffabs_5);
+    vmlal.u8        q4,     d6,     d28     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_6);
+    vmlsl.u8        q4,     d7,     d29     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_7);
+    vst1.8          {d12},  [r14],  r6
+    vld1.u8         {d20},  [r14]
+    vqrshrun.s16    d14,    q7,     #6
+    add             r3,     r0,     r2      ;pu1_src_tmp += src_strd;
+    vmlal.u8        q5,     d2,     d23     ;mul_res2 = vmull_u8(src_tmp3,
+                                            ; coeffabs_1);
+    vld1.u8         {d0},   [r0]!           ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q5,     d1,     d22     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_0);
+    vrhadd.u8       d14,    d14,    d20
+    vmlsl.u8        q5,     d3,     d24     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_2);
+    vld1.u8         {d1},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q5,     d4,     d25     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_3);
+    vhadd.s16       q4,     q4,     q15
+    vdup.16         q6,     r11
+    vst1.8          {d14},  [r14],  r6
+    vmlal.u8        q5,     d5,     d26     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_4);
+    add             r14,    r1,     #0
+    vmlsl.u8        q5,     d6,     d27     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp3, coeffabs_5);
+    add             r1,     r1,     #8
+    vmlal.u8        q5,     d7,     d28     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_6);
+    addle           r1,     r1,     r9
+    vmlsl.u8        q5,     d16,    d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_7);
+    vld1.u8         {d20},  [r14]
+    vqrshrun.s16    d8,     q4,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vmlal.u8        q6,     d3,     d23
+    add             r10,    r3,     r2,     lsl #3 ; 10*strd - 8+2
+    vmlsl.u8        q6,     d2,     d22
+    vrhadd.u8       d8,     d8,     d20
+    add             r10,    r10,    r2      ; 11*strd
+    vmlsl.u8        q6,     d4,     d24
+    vld1.u8         {d2},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q6,     d5,     d25
+    vhadd.s16       q5,     q5,     q15
+    vdup.16         q7,     r11
+    vmlal.u8        q6,     d6,     d26
+    vst1.8          {d8},   [r14],  r6      ;vst1_u8(pu1_dst,sto_res);
+    pld             [r10]                   ;11+ 0
+    vmlsl.u8        q6,     d7,     d27
+    pld             [r10,   r2]             ;11+ 1*strd
+    pld             [r10,   r2,     lsl #1] ;11+ 2*strd
+    vmlal.u8        q6,     d16,    d28
+    add             r10,    r10,    r2      ;12*strd
+    vmlsl.u8        q6,     d17,    d29
+    vld1.u8         {d20},  [r14]
+    vqrshrun.s16    d10,    q5,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    pld             [r10,   r2,     lsl #1] ;11+ 3*strd
+    vmlal.u8        q7,     d4,     d23
+    vmlsl.u8        q7,     d3,     d22
+    vrhadd.u8       d10,    d10,    d20
+    subs            r7,     r7,     #4
+    vmlsl.u8        q7,     d5,     d24
+    vmlal.u8        q7,     d6,     d25
+    vld1.u8         {d3},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vhadd.s16       q6,     q6,     q15
+    vdup.16         q4,     r11
+    vmlal.u8        q7,     d7,     d26
+    vld1.u8         {d4},   [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d16,    d27
+    vld1.u8         {d5},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q7,     d17,    d28
+    vld1.u8         {d6},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d18,    d29
+    vld1.u8         {d7},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vqrshrun.s16    d12,    q6,     #6
+    vst1.8          {d10},  [r14],  r6      ;vst1_u8(pu1_dst_tmp,sto_res);
+    bgt             main_loop_8             ;jumps to main_loop_8
+
+epilog
+    vld1.u8         {d20},  [r14]
+    vmlal.u8        q4,     d1,     d23     ;mul_res1 = vmull_u8(src_tmp2,
+                                            ; coeffabs_1);
+    vmlsl.u8        q4,     d0,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_0);
+    vmlsl.u8        q4,     d2,     d24     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_2);
+    vrhadd.u8       d12,    d12,    d20
+    vmlal.u8        q4,     d3,     d25     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_3);
+    vhadd.s16       q7,     q7,     q15
+    vdup.16         q5,     r11
+    vmlal.u8        q4,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_4);
+    vmlsl.u8        q4,     d5,     d27     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp2, coeffabs_5);
+    vmlal.u8        q4,     d6,     d28     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_6);
+    vst1.8          {d12},  [r14],  r6
+    vmlsl.u8        q4,     d7,     d29     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_7);
+    vld1.u8         {d20},  [r14]
+    vqrshrun.s16    d14,    q7,     #6
+    vld1.u8         {d16},  [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q5,     d2,     d23     ;mul_res2 = vmull_u8(src_tmp3,
+                                            ; coeffabs_1);
+    vmlsl.u8        q5,     d1,     d22     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_0);
+    vrhadd.u8       d14,    d14,    d20
+    vmlsl.u8        q5,     d3,     d24     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_2);
+    vmlal.u8        q5,     d4,     d25     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_3);
+    vhadd.s16       q4,     q4,     q15
+    vdup.16         q6,     r11
+    vmlal.u8        q5,     d5,     d26     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_4);
+    vmlsl.u8        q5,     d6,     d27     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp3, coeffabs_5);
+    vmlal.u8        q5,     d7,     d28     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_6);
+    vst1.8          {d14},  [r14],  r6
+    vmlsl.u8        q5,     d16,    d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_7);
+    vld1.u8         {d20},  [r1]
+    vqrshrun.s16    d8,     q4,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vld1.u8         {d17},  [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q6,     d3,     d23
+    vmlsl.u8        q6,     d2,     d22
+    vrhadd.u8       d8,     d8,     d20
+    vmlsl.u8        q6,     d4,     d24
+    vmlal.u8        q6,     d5,     d25
+    vhadd.s16       q5,     q5,     q15
+    vdup.16         q7,     r11
+    vmlal.u8        q6,     d6,     d26
+    vmlsl.u8        q6,     d7,     d27
+    add             r14,    r1,     r6
+    vmlal.u8        q6,     d16,    d28
+    vst1.8          {d8},   [r1]!           ;vst1_u8(pu1_dst,sto_res);
+    vmlsl.u8        q6,     d17,    d29
+    vld1.u8         {d20},  [r14]
+    vqrshrun.s16    d10,    q5,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vld1.u8         {d18},  [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q7,     d4,     d23
+    vmlsl.u8        q7,     d3,     d22
+    vrhadd.u8       d10,    d10,    d20
+    vmlsl.u8        q7,     d5,     d24
+    vmlal.u8        q7,     d6,     d25
+    vhadd.s16       q6,     q6,     q15
+    vmlal.u8        q7,     d7,     d26
+    vmlsl.u8        q7,     d16,    d27
+    vmlal.u8        q7,     d17,    d28
+    vmlsl.u8        q7,     d18,    d29
+    vst1.8          {d10},  [r14],  r6      ;vst1_u8(pu1_dst_tmp,sto_res);
+    vqrshrun.s16    d12,    q6,     #6
+
+epilog_end
+    vld1.u8         {d20},  [r14]
+    vrhadd.u8       d12,    d12,    d20
+    vst1.8          {d12},  [r14],  r6
+    vhadd.s16       q7,     q7,     q15
+    vqrshrun.s16    d14,    q7,     #6
+    vld1.u8         {d20},  [r14]
+    vrhadd.u8       d14,    d14,    d20
+    vst1.8          {d14},  [r14],  r6
+
+end_loops
+    tst             r5,     #7
+    ldr             r1,     [sp],   #4
+    ldr             r0,     [sp],   #4
+    vpopeq          {d8  -  d15}
+    ldmfdeq         sp!,    {r4  -  r12,    r15} ;reload the registers from sp
+
+    mov             r5,     #4
+    add             r0,     r0,     #8
+    add             r1,     r1,     #8
+    mov             r7,     #16
+
+core_loop_wd_4
+    rsb             r9,     r5,     r6,     lsl #2 ;r6->dst_strd    r5    ->wd
+    rsb             r8,     r5,     r2,     lsl #2 ;r2->src_strd
+    vmov.i8         d4,     #0
+
+outer_loop_wd_4
+    subs            r12,    r5,     #0
+    ble             end_inner_loop_wd_4     ;outer loop jump
+
+inner_loop_wd_4
+    add             r3,     r0,     r2
+    vld1.u32        {d4[1]},[r3],   r2      ;src_tmp1 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp1, 1);
+    subs            r12,    r12,    #4
+    vdup.u32        d5,     d4[1]           ;src_tmp2 = vdup_lane_u32(src_tmp1,
+                                            ; 1);
+    vld1.u32        {d5[1]},[r3],   r2      ;src_tmp2 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp2, 1);
+    vld1.u32        {d4[0]},[r0]            ;src_tmp1 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp1, 0);
+    vdup.16         q0,     r11
+    vmlal.u8        q0,     d5,     d23     ;mul_res1 =
+                                            ; vmull_u8(vreinterpret_u8_u32(src_tmp2), coeffabs_1);
+    vdup.u32        d6,     d5[1]           ;src_tmp3 = vdup_lane_u32(src_tmp2,
+                                            ; 1);
+    add             r0,     r0,     #4
+    vld1.u32        {d6[1]},[r3],   r2      ;src_tmp3 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp3, 1);
+    vmlsl.u8        q0,     d4,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp1), coeffabs_0);
+    vdup.u32        d7,     d6[1]           ;src_tmp4 = vdup_lane_u32(src_tmp3,
+                                            ; 1);
+    vld1.u32        {d7[1]},[r3],   r2      ;src_tmp4 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp4, 1);
+    vmlsl.u8        q0,     d6,     d24     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp3), coeffabs_2);
+    vdup.16         q4,     r11
+    vmlal.u8        q4,     d7,     d23
+    vdup.u32        d4,     d7[1]           ;src_tmp1 = vdup_lane_u32(src_tmp4,
+                                            ; 1);
+    vmull.u8        q1,     d7,     d25     ;mul_res2 =
+                                            ; vmull_u8(vreinterpret_u8_u32(src_tmp4), coeffabs_3);
+    vld1.u32        {d4[1]},[r3],   r2      ;src_tmp1 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp1, 1);
+    vmlsl.u8        q4,     d6,     d22
+    vmlal.u8        q0,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp1), coeffabs_4);
+    vdup.u32        d5,     d4[1]           ;src_tmp2 = vdup_lane_u32(src_tmp1,
+                                            ; 1);
+    vmlsl.u8        q4,     d4,     d24
+    vld1.u32        {d5[1]},[r3],   r2      ;src_tmp2 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp2, 1);
+    vmlsl.u8        q1,     d5,     d27     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; vreinterpret_u8_u32(src_tmp2), coeffabs_5);
+    vdup.u32        d6,     d5[1]           ;src_tmp3 = vdup_lane_u32(src_tmp2,
+                                            ; 1);
+    vmlal.u8        q4,     d5,     d25
+    vld1.u32        {d6[1]},[r3],   r2      ;src_tmp3 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp3, 1);
+    vmlal.u8        q0,     d6,     d28     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp3), coeffabs_6);
+    vdup.u32        d7,     d6[1]           ;src_tmp4 = vdup_lane_u32(src_tmp3,
+                                            ; 1);
+    vmlal.u8        q4,     d6,     d26
+    vld1.u32        {d7[1]},[r3],   r2      ;src_tmp4 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp4, 1);
+    vmlsl.u8        q1,     d7,     d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; vreinterpret_u8_u32(src_tmp4), coeffabs_7);
+    vdup.u32        d4,     d7[1]
+    vadd.i16        q0,     q0,     q1      ;mul_res1 = vaddq_u16(mul_res1,
+                                            ; mul_res2);
+    vmlsl.u8        q4,     d7,     d27
+    vld1.u32        {d4[1]},[r3],   r2
+    vmlal.u8        q4,     d4,     d28
+    vdup.u32        d5,     d4[1]
+    vhadd.s16       q0,     q0,     q15
+    vqrshrun.s16    d0,     q0,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vld1.u32        {d5[1]},[r3]
+    add             r3,     r1,     r6
+    vld1.u32        {d20[0]},       [r1]
+    vld1.u32        {d20[1]},       [r3]
+    vrhadd.u8       d0,     d0,     d20
+    vst1.32         {d0[0]},[r1]            ;vst1_lane_u32((uint32_t *)pu1_dst,
+                                            ; vreinterpret_u32_u8(sto_res), 0);
+    vmlsl.u8        q4,     d5,     d29
+    vst1.32         {d0[1]},[r3],   r6      ;vst1_lane_u32((uint32_t
+                                            ; *)pu1_dst_tmp, vreinterpret_u32_u8(sto_res), 1);
+    vhadd.s16       q4,     q4,     q15
+    vqrshrun.s16    d8,     q4,     #6
+    mov             r4,     r3
+    vld1.u32        {d20[0]},       [r4],   r6
+    vld1.u32        {d20[1]},       [r4]
+    vrhadd.u8       d8,     d8,     d20
+    vst1.32         {d8[0]},[r3],   r6
+    add             r1,     r1,     #4
+    vst1.32         {d8[1]},[r3]
+    bgt             inner_loop_wd_4
+
+end_inner_loop_wd_4
+    subs            r7,     r7,     #4
+    add             r1,     r1,     r9
+    add             r0,     r0,     r8
+    bgt             outer_loop_wd_4
+
+    vpop            {d8  -  d15}
+    ldmfd           sp!,    {r4  -  r12,    r15} ;reload the registers from sp
+
+    ENDP
+
+    END
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_horiz_filter_type1_neon.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_horiz_filter_type1_neon.asm
new file mode 100644
index 0000000..fa1b732
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_horiz_filter_type1_neon.asm
@@ -0,0 +1,415 @@
+;
+;  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+;
+;  Use of this source code is governed by a BSD-style license
+;  that can be found in the LICENSE file in the root of the source
+;  tree. An additional intellectual property rights grant can be found
+;  in the file PATENTS.  All contributing project authors may
+;  be found in the AUTHORS file in the root of the source tree.
+;
+;**************Variables Vs Registers***********************************
+;    r0 => src
+;    r1 => dst
+;    r2 =>  src_stride
+;    r3 =>  dst_stride
+;    r4 => filter_x0
+;    r8 =>  ht
+;    r10 =>  wd
+
+    EXPORT          |vpx_convolve8_horiz_filter_type1_neon|
+    ARM
+    REQUIRE8
+    PRESERVE8
+
+    AREA  ||.text||, CODE, READONLY, ALIGN=2
+
+|vpx_convolve8_horiz_filter_type1_neon| PROC
+
+    stmfd           sp!,    {r4  -  r12,    r14} ;stack stores the values of
+                                                 ; the arguments
+    vpush           {d8  -  d15}                 ; stack offset by 64
+    mov             r4,     r1
+    mov             r1,     r2
+    mov             r2,     r4
+start_loop_count
+    ldr             r4,     [sp,    #104]   ;loads pi1_coeff
+    ldr             r8,     [sp,    #108]   ;loads x0_q4
+    add             r4,     r4,     r8,     lsl #4 ;r4 = filter[x0_q4]
+    ldr             r8,     [sp,    #128]   ;loads ht
+    ldr             r10,    [sp,    #124]   ;loads wd
+    vld2.8          {d0,    d1},    [r4]    ;coeff = vld1_s8(pi1_coeff)
+    mov             r11,    #1
+    subs            r14,    r8,     #0      ;checks for ht == 0
+    vabs.s8         d2,     d0              ;vabs_s8(coeff)
+    vdup.8          d24,    d2[0]           ;coeffabs_0 = vdup_lane_u8(coeffabs,
+                                            ; 0)
+    sub             r12,    r0,     #3      ;pu1_src - 3
+    vdup.8          d25,    d2[1]           ;coeffabs_1 = vdup_lane_u8(coeffabs,
+                                            ; 1)
+    add             r4,     r12,    r2      ;pu1_src_tmp2_8 = pu1_src + src_strd
+    vdup.8          d26,    d2[2]           ;coeffabs_2 = vdup_lane_u8(coeffabs,
+                                            ; 2)
+    rsb             r9,     r10,    r2,     lsl #1 ;2*src_strd - wd
+    vdup.8          d27,    d2[3]           ;coeffabs_3 = vdup_lane_u8(coeffabs,
+                                            ; 3)
+    rsb             r8,     r10,    r3,     lsl #1 ;2*dst_strd - wd
+    vdup.8          d28,    d2[4]           ;coeffabs_4 = vdup_lane_u8(coeffabs,
+                                            ; 4)
+    vdup.8          d29,    d2[5]           ;coeffabs_5 = vdup_lane_u8(coeffabs,
+                                            ; 5)
+    vdup.8          d30,    d2[6]           ;coeffabs_6 = vdup_lane_u8(coeffabs,
+                                            ; 6)
+    vdup.8          d31,    d2[7]           ;coeffabs_7 = vdup_lane_u8(coeffabs,
+                                            ; 7)
+    mov             r7,     r1
+    cmp             r10,    #4
+    ble             outer_loop_4
+
+    cmp             r10,    #24
+    moveq           r10,    #16
+    addeq           r8,     #8
+    addeq           r9,     #8
+    cmp             r10,    #16
+    bge             outer_loop_16
+
+    cmp             r10,    #12
+    addeq           r8,     #4
+    addeq           r9,     #4
+    b               outer_loop_8
+
+outer_loop8_residual
+    sub             r12,    r0,     #3      ;pu1_src - 3
+    mov             r1,     r7
+    mov             r14,    #32
+    add             r1,     #16
+    add             r12,    #16
+    mov             r10,    #8
+    add             r8,     #8
+    add             r9,     #8
+
+outer_loop_8
+
+    add             r6,     r1,     r3      ;pu1_dst + dst_strd
+    add             r4,     r12,    r2      ;pu1_src + src_strd
+    subs            r5,     r10,    #0      ;checks wd
+    ble             end_inner_loop_8
+
+inner_loop_8
+    mov             r7,     #0xc000
+    vld1.u32        {d0},   [r12],  r11     ;vector load pu1_src
+    vdup.16         q4,     r7
+    vld1.u32        {d1},   [r12],  r11
+    vdup.16         q5,     r7
+    vld1.u32        {d2},   [r12],  r11
+    vld1.u32        {d3},   [r12],  r11
+    mov             r7,     #0x4000
+    vld1.u32        {d4},   [r12],  r11
+    vmlsl.u8        q4,     d1,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u32        {d5},   [r12],  r11
+    vmlal.u8        q4,     d3,     d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    vld1.u32        {d6},   [r12],  r11
+    vmlsl.u8        q4,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vld1.u32        {d7},   [r12],  r11
+    vmlal.u8        q4,     d2,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vld1.u32        {d12},  [r4],   r11     ;vector load pu1_src + src_strd
+    vmlal.u8        q4,     d4,     d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vld1.u32        {d13},  [r4],   r11
+    vmlal.u8        q4,     d5,     d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vld1.u32        {d14},  [r4],   r11
+    vmlsl.u8        q4,     d6,     d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vld1.u32        {d15},  [r4],   r11
+    vmlsl.u8        q4,     d7,     d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    vld1.u32        {d16},  [r4],   r11     ;vector load pu1_src + src_strd
+    vdup.16         q11,    r7
+    vmlal.u8        q5,     d15,    d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    vld1.u32        {d17},  [r4],   r11
+    vmlal.u8        q5,     d14,    d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vhadd.s16       q4,     q4,     q11
+    vld1.u32        {d18},  [r4],   r11
+    vmlal.u8        q5,     d16,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vld1.u32        {d19},  [r4],   r11     ;vector load pu1_src + src_strd
+    vmlal.u8        q5,     d17,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vmlsl.u8        q5,     d18,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vmlsl.u8        q5,     d19,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    vqrshrun.s16    d20,    q4,     #6      ;right shift and saturating narrow
+                                            ; result 1
+    vmlsl.u8        q5,     d12,    d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vmlsl.u8        q5,     d13,    d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vst1.8          {d20},  [r1]!           ;store the result pu1_dst
+    vhadd.s16       q5,     q5,     q11
+    subs            r5,     r5,     #8      ;decrement the wd loop
+    vqrshrun.s16    d8,     q5,     #6      ;right shift and saturating narrow
+                                            ; result 2
+    vst1.8          {d8},   [r6]!           ;store the result pu1_dst
+    cmp             r5,     #4
+    bgt             inner_loop_8
+
+end_inner_loop_8
+    subs            r14,    r14,    #2      ;decrement the ht loop
+    add             r12,    r12,    r9      ;increment the src pointer by
+                                            ; 2*src_strd-wd
+    add             r1,     r1,     r8      ;increment the dst pointer by
+                                            ; 2*dst_strd-wd
+    bgt             outer_loop_8
+
+    ldr             r10,    [sp,    #120]   ;loads wd
+    cmp             r10,    #12
+    beq             outer_loop4_residual
+
+end_loops
+    b               end_func
+
+outer_loop_16
+    str             r0,     [sp,  #-4]!
+    str             r7,     [sp,  #-4]!
+    add             r6,     r1,     r3      ;pu1_dst + dst_strd
+    add             r4,     r12,    r2      ;pu1_src + src_strd
+    and             r0,     r12,    #31
+    mov             r7,     #0xc000
+    sub             r5,     r10,    #0      ;checks wd
+    pld             [r4,    r2,     lsl #1]
+    pld             [r12,   r2,     lsl #1]
+    vld1.u32        {q0},   [r12],  r11     ;vector load pu1_src
+    vdup.16         q4,     r7
+    vld1.u32        {q1},   [r12],  r11
+    vld1.u32        {q2},   [r12],  r11
+    vld1.u32        {q3},   [r12],  r11
+    vmlsl.u8        q4,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vld1.u32        {q6},   [r12],  r11
+    vmlsl.u8        q4,     d2,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u32        {q7},   [r12],  r11
+    vmlal.u8        q4,     d4,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vld1.u32        {q8},   [r12],  r11
+    vmlal.u8        q4,     d6,     d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    vld1.u32        {q9},   [r12],  r11
+    vmlal.u8        q4,     d12,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vmlal.u8        q4,     d14,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vdup.16         q10,    r7
+    vmlsl.u8        q4,     d16,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vmlsl.u8        q4,     d18,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+
+inner_loop_16
+    vmlsl.u8        q10,    d1,     d24
+    vdup.16         q5,     r7
+    vmlsl.u8        q10,    d3,     d25
+    mov             r7,     #0x4000
+    vdup.16         q11,    r7
+    vmlal.u8        q10,    d5,     d26
+    vld1.u32        {q0},   [r4],   r11     ;vector load pu1_src
+    vhadd.s16       q4,     q4,     q11
+    vld1.u32        {q1},   [r4],   r11
+    vmlal.u8        q10,    d7,     d27
+    add             r12,    #8
+    subs            r5,     r5,     #16
+    vmlal.u8        q10,    d13,    d28
+    vld1.u32        {q2},   [r4],   r11
+    vmlal.u8        q10,    d15,    d29
+    vld1.u32        {q3},   [r4],   r11
+    vqrshrun.s16    d8,     q4,     #6      ;right shift and saturating narrow
+                                            ; result 1
+    vmlsl.u8        q10,    d17,    d30
+    vld1.u32        {q6},   [r4],   r11
+    vmlsl.u8        q10,    d19,    d31
+    vld1.u32        {q7},   [r4],   r11
+    vmlsl.u8        q5,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vmlsl.u8        q5,     d2,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u32        {q8},   [r4],   r11
+    vhadd.s16       q10,    q10,    q11
+    vld1.u32        {q9},   [r4],   r11
+    vmlal.u8        q5,     d4,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vmlal.u8        q5,     d6,     d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    add             r4,     #8
+    mov             r7,     #0xc000
+    vmlal.u8        q5,     d12,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vmlal.u8        q5,     d14,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vqrshrun.s16    d9,     q10,    #6
+    vdup.16         q11,    r7
+    vmlsl.u8        q5,     d16,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vmlsl.u8        q5,     d18,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    mov             r7,     #0x4000
+    vmlsl.u8        q11,    d1,     d24
+    vst1.8          {q4},   [r1]!           ;store the result pu1_dst
+    vmlsl.u8        q11,    d3,     d25
+    vdup.16         q10,    r7
+    vmlal.u8        q11,    d5,     d26
+    pld             [r12,   r2,     lsl #2]
+    pld             [r4,    r2,     lsl #2]
+    addeq           r12,    r12,    r9      ;increment the src pointer by
+                                            ; 2*src_strd-wd
+    addeq           r4,     r12,    r2      ;pu1_src + src_strd
+    vmlal.u8        q11,    d7,     d27
+    addeq           r1,     r1,     r8
+    subeq           r14,    r14,    #2
+    vmlal.u8        q11,    d13,    d28
+    vhadd.s16       q5,     q5,     q10
+    vmlal.u8        q11,    d15,    d29
+    vmlsl.u8        q11,    d17,    d30
+    cmp             r14,    #0
+    vmlsl.u8        q11,    d19,    d31
+    vqrshrun.s16    d10,    q5,     #6      ;right shift and saturating narrow
+                                            ; result 2
+    beq             epilog_16
+
+    vld1.u32        {q0},   [r12],  r11     ;vector load pu1_src
+    mov             r7,     #0xc000
+    cmp             r5,     #0
+    vld1.u32        {q1},   [r12],  r11
+    vhadd.s16       q11,    q11,    q10
+    vld1.u32        {q2},   [r12],  r11
+    vdup.16         q4,     r7
+    vld1.u32        {q3},   [r12],  r11
+    vmlsl.u8        q4,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vld1.u32        {q6},   [r12],  r11
+    vld1.u32        {q7},   [r12],  r11
+    vmlsl.u8        q4,     d2,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u32        {q8},   [r12],  r11
+    vmlal.u8        q4,     d4,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vld1.u32        {q9},   [r12],  r11
+    vqrshrun.s16    d11,    q11,    #6
+    vmlal.u8        q4,     d6,     d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    moveq           r5,     r10
+    vmlal.u8        q4,     d12,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vdup.16         q10,    r7
+    vmlal.u8        q4,     d14,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vst1.8          {q5},   [r6]!           ;store the result pu1_dst
+    vmlsl.u8        q4,     d16,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vmlsl.u8        q4,     d18,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    addeq           r6,     r1,     r3      ;pu1_dst + dst_strd
+    b               inner_loop_16
+
+epilog_16
+    mov             r7,     #0x4000
+    ldr             r0,     [sp],   #4
+    ldr             r10,    [sp,    #120]
+    vdup.16         q10,    r7
+    vhadd.s16       q11,    q11,    q10
+    vqrshrun.s16    d11,    q11,    #6
+    vst1.8          {q5},   [r6]!           ;store the result pu1_dst
+    ldr             r7,     [sp],   #4
+    cmp             r10,    #24
+    beq             outer_loop8_residual
+
+end_loops1
+    b               end_func
+
+outer_loop4_residual
+    sub             r12,    r0,     #3      ;pu1_src - 3
+    mov             r1,     r7
+    add             r1,     #8
+    mov             r10,    #4
+    add             r12,    #8
+    mov             r14,    #16
+    add             r8,     #4
+    add             r9,     #4
+
+outer_loop_4
+    add             r6,     r1,     r3      ;pu1_dst + dst_strd
+    add             r4,     r12,    r2      ;pu1_src + src_strd
+    subs            r5,     r10,    #0      ;checks wd
+    ble             end_inner_loop_4
+
+inner_loop_4
+    vld1.u32        {d0},   [r12],  r11     ;vector load pu1_src
+    vld1.u32        {d1},   [r12],  r11
+    vld1.u32        {d2},   [r12],  r11
+    vld1.u32        {d3},   [r12],  r11
+    vld1.u32        {d4},   [r12],  r11
+    vld1.u32        {d5},   [r12],  r11
+    vld1.u32        {d6},   [r12],  r11
+    vld1.u32        {d7},   [r12],  r11
+    sub             r12,    r12,    #4
+    vld1.u32        {d12},  [r4],   r11     ;vector load pu1_src + src_strd
+    vld1.u32        {d13},  [r4],   r11
+    vzip.32         d0,     d12             ;vector zip the i iteration and ii
+                                            ; interation in single register
+    vld1.u32        {d14},  [r4],   r11
+    vzip.32         d1,     d13
+    vld1.u32        {d15},  [r4],   r11
+    vzip.32         d2,     d14
+    vld1.u32        {d16},  [r4],   r11
+    vzip.32         d3,     d15
+    vld1.u32        {d17},  [r4],   r11
+    vzip.32         d4,     d16
+    vld1.u32        {d18},  [r4],   r11
+    vzip.32         d5,     d17
+    vld1.u32        {d19},  [r4],   r11
+    mov             r7,     #0xc000
+    vdup.16         q4,     r7
+    sub             r4,     r4,     #4
+    vzip.32         d6,     d18
+    vzip.32         d7,     d19
+    vmlsl.u8        q4,     d1,     d25     ;arithmetic operations for ii
+                                            ; iteration in the same time
+    vmlsl.u8        q4,     d0,     d24
+    vmlal.u8        q4,     d2,     d26
+    vmlal.u8        q4,     d3,     d27
+    vmlal.u8        q4,     d4,     d28
+    vmlal.u8        q4,     d5,     d29
+    vmlsl.u8        q4,     d6,     d30
+    vmlsl.u8        q4,     d7,     d31
+    mov             r7,     #0x4000
+    vdup.16         q10,    r7
+    vhadd.s16       q4,     q4,     q10
+    vqrshrun.s16    d8,     q4,     #6
+    vst1.32         {d8[0]},[r1]!           ;store the i iteration result which
+                                            ; is in upper part of the register
+    vst1.32         {d8[1]},[r6]!           ;store the ii iteration result which
+                                            ; is in lower part of the register
+    subs            r5,     r5,     #4      ;decrement the wd by 4
+    bgt             inner_loop_4
+
+end_inner_loop_4
+    subs            r14,    r14,    #2      ;decrement the ht by 4
+    add             r12,    r12,    r9      ;increment the input pointer
+                                            ; 2*src_strd-wd
+    add             r1,     r1,     r8      ;increment the output pointer
+                                            ; 2*dst_strd-wd
+    bgt             outer_loop_4
+
+end_func
+    vpop            {d8  -  d15}
+    ldmfd           sp!,    {r4  -  r12,    r15} ;reload the registers from sp
+
+    ENDP
+
+    END
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_horiz_filter_type2_neon.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_horiz_filter_type2_neon.asm
new file mode 100644
index 0000000..90b2c8f
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_horiz_filter_type2_neon.asm
@@ -0,0 +1,415 @@
+;
+;  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+;
+;  Use of this source code is governed by a BSD-style license
+;  that can be found in the LICENSE file in the root of the source
+;  tree. An additional intellectual property rights grant can be found
+;  in the file PATENTS.  All contributing project authors may
+;  be found in the AUTHORS file in the root of the source tree.
+;
+;**************Variables Vs Registers***********************************
+;    r0 => src
+;    r1 => dst
+;    r2 =>  src_stride
+;    r3 =>  dst_stride
+;    r4 => filter_x0
+;    r8 =>  ht
+;    r10 =>  wd
+
+    EXPORT          |vpx_convolve8_horiz_filter_type2_neon|
+    ARM
+    REQUIRE8
+    PRESERVE8
+
+    AREA  ||.text||, CODE, READONLY, ALIGN=2
+
+|vpx_convolve8_horiz_filter_type2_neon| PROC
+
+    stmfd           sp!,    {r4  -  r12,    r14} ;stack stores the values of
+                                                 ; the arguments
+    vpush           {d8  -  d15}                 ; stack offset by 64
+    mov             r4,     r1
+    mov             r1,     r2
+    mov             r2,     r4
+
+start_loop_count
+    ldr             r4,     [sp,    #104]   ;loads pi1_coeff
+    ldr             r8,     [sp,    #108]   ;loads x0_q4
+    add             r4,     r4,     r8,     lsl #4 ;r4 = filter[x0_q4]
+    ldr             r8,     [sp,    #128]   ;loads ht
+    ldr             r10,    [sp,    #124]   ;loads wd
+    vld2.8          {d0,    d1},    [r4]    ;coeff = vld1_s8(pi1_coeff)
+    mov             r11,    #1
+    subs            r14,    r8,     #0      ;checks for ht == 0
+    vabs.s8         d2,     d0              ;vabs_s8(coeff)
+    vdup.8          d24,    d2[0]           ;coeffabs_0 = vdup_lane_u8(coeffabs,
+                                            ; 0)
+    sub             r12,    r0,     #3      ;pu1_src - 3
+    vdup.8          d25,    d2[1]           ;coeffabs_1 = vdup_lane_u8(coeffabs,
+                                            ; 1)
+    add             r4,     r12,    r2      ;pu1_src_tmp2_8 = pu1_src + src_strd
+    vdup.8          d26,    d2[2]           ;coeffabs_2 = vdup_lane_u8(coeffabs,
+                                            ; 2)
+    rsb             r9,     r10,    r2,     lsl #1 ;2*src_strd - wd
+    vdup.8          d27,    d2[3]           ;coeffabs_3 = vdup_lane_u8(coeffabs,
+                                            ; 3)
+    rsb             r8,     r10,    r3,     lsl #1 ;2*dst_strd - wd
+    vdup.8          d28,    d2[4]           ;coeffabs_4 = vdup_lane_u8(coeffabs,
+                                            ; 4)
+    vdup.8          d29,    d2[5]           ;coeffabs_5 = vdup_lane_u8(coeffabs,
+                                            ; 5)
+    vdup.8          d30,    d2[6]           ;coeffabs_6 = vdup_lane_u8(coeffabs,
+                                            ; 6)
+    vdup.8          d31,    d2[7]           ;coeffabs_7 = vdup_lane_u8(coeffabs,
+                                            ; 7)
+    mov             r7,     r1
+    cmp             r10,    #4
+    ble             outer_loop_4
+
+    cmp             r10,    #24
+    moveq           r10,    #16
+    addeq           r8,     #8
+    addeq           r9,     #8
+    cmp             r10,    #16
+    bge             outer_loop_16
+
+    cmp             r10,    #12
+    addeq           r8,     #4
+    addeq           r9,     #4
+    b               outer_loop_8
+
+outer_loop8_residual
+    sub             r12,    r0,     #3      ;pu1_src - 3
+    mov             r1,     r7
+    mov             r14,    #32
+    add             r1,     #16
+    add             r12,    #16
+    mov             r10,    #8
+    add             r8,     #8
+    add             r9,     #8
+
+outer_loop_8
+    add             r6,     r1,     r3      ;pu1_dst + dst_strd
+    add             r4,     r12,    r2      ;pu1_src + src_strd
+    subs            r5,     r10,    #0      ;checks wd
+    ble             end_inner_loop_8
+
+inner_loop_8
+    mov             r7,     #0xc000
+    vld1.u32        {d0},   [r12],  r11     ;vector load pu1_src
+    vdup.16         q4,     r7
+    vld1.u32        {d1},   [r12],  r11
+    vdup.16         q5,     r7
+    vld1.u32        {d2},   [r12],  r11
+    vld1.u32        {d3},   [r12],  r11
+    mov             r7,     #0x4000
+    vld1.u32        {d4},   [r12],  r11
+    vmlal.u8        q4,     d1,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u32        {d5},   [r12],  r11
+    vmlal.u8        q4,     d3,     d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    vld1.u32        {d6},   [r12],  r11
+    vmlsl.u8        q4,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vld1.u32        {d7},   [r12],  r11
+    vmlsl.u8        q4,     d2,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vld1.u32        {d12},  [r4],   r11     ;vector load pu1_src + src_strd
+    vmlal.u8        q4,     d4,     d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vld1.u32        {d13},  [r4],   r11
+    vmlsl.u8        q4,     d5,     d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vld1.u32        {d14},  [r4],   r11
+    vmlal.u8        q4,     d6,     d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vld1.u32        {d15},  [r4],   r11
+    vmlsl.u8        q4,     d7,     d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    vld1.u32        {d16},  [r4],   r11     ;vector load pu1_src + src_strd
+    vdup.16         q11,    r7
+    vmlal.u8        q5,     d15,    d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    vld1.u32        {d17},  [r4],   r11
+    vmlsl.u8        q5,     d14,    d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vhadd.s16       q4,     q4,     q11
+    vld1.u32        {d18},  [r4],   r11
+    vmlal.u8        q5,     d16,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vld1.u32        {d19},  [r4],   r11     ;vector load pu1_src + src_strd
+    vmlsl.u8        q5,     d17,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vmlal.u8        q5,     d18,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vmlsl.u8        q5,     d19,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    vqrshrun.s16    d20,    q4,     #6      ;right shift and saturating narrow
+                                            ; result 1
+    vmlsl.u8        q5,     d12,    d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vmlal.u8        q5,     d13,    d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vst1.8          {d20},  [r1]!           ;store the result pu1_dst
+    vhadd.s16       q5,     q5,     q11
+    subs            r5,     r5,     #8      ;decrement the wd loop
+    vqrshrun.s16    d8,     q5,     #6      ;right shift and saturating narrow
+                                            ; result 2
+    vst1.8          {d8},   [r6]!           ;store the result pu1_dst
+    cmp             r5,     #4
+    bgt             inner_loop_8
+
+end_inner_loop_8
+    subs            r14,    r14,    #2      ;decrement the ht loop
+    add             r12,    r12,    r9      ;increment the src pointer by
+                                            ; 2*src_strd-wd
+    add             r1,     r1,     r8      ;increment the dst pointer by
+                                            ; 2*dst_strd-wd
+    bgt             outer_loop_8
+
+    ldr             r10,    [sp,    #120]   ;loads wd
+    cmp             r10,    #12
+    beq             outer_loop4_residual
+
+end_loops
+    b               end_func
+
+outer_loop_16
+    str             r0,     [sp,  #-4]!
+    str             r7,     [sp,  #-4]!
+    add             r6,     r1,     r3      ;pu1_dst + dst_strd
+    add             r4,     r12,    r2      ;pu1_src + src_strd
+    and             r0,     r12,    #31
+    mov             r7,     #0xc000
+    sub             r5,     r10,    #0      ;checks wd
+    pld             [r4,    r2,     lsl #1]
+    pld             [r12,   r2,     lsl #1]
+    vld1.u32        {q0},   [r12],  r11     ;vector load pu1_src
+    vdup.16         q4,     r7
+    vld1.u32        {q1},   [r12],  r11
+    vld1.u32        {q2},   [r12],  r11
+    vld1.u32        {q3},   [r12],  r11
+    vmlsl.u8        q4,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vld1.u32        {q6},   [r12],  r11
+    vmlal.u8        q4,     d2,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u32        {q7},   [r12],  r11
+    vmlsl.u8        q4,     d4,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vld1.u32        {q8},   [r12],  r11
+    vmlal.u8        q4,     d6,     d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    vld1.u32        {q9},   [r12],  r11
+    vmlal.u8        q4,     d12,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vmlsl.u8        q4,     d14,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vdup.16         q10,    r7
+    vmlal.u8        q4,     d16,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vmlsl.u8        q4,     d18,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+
+inner_loop_16
+    vmlsl.u8        q10,    d1,     d24
+    vdup.16         q5,     r7
+    vmlal.u8        q10,    d3,     d25
+    mov             r7,     #0x4000
+    vdup.16         q11,    r7
+    vmlsl.u8        q10,    d5,     d26
+    vld1.u32        {q0},   [r4],   r11     ;vector load pu1_src
+    vhadd.s16       q4,     q4,     q11
+    vld1.u32        {q1},   [r4],   r11
+    vmlal.u8        q10,    d7,     d27
+    add             r12,    #8
+    subs            r5,     r5,     #16
+    vmlal.u8        q10,    d13,    d28
+    vld1.u32        {q2},   [r4],   r11
+    vmlsl.u8        q10,    d15,    d29
+    vld1.u32        {q3},   [r4],   r11
+    vqrshrun.s16    d8,     q4,     #6      ;right shift and saturating narrow
+                                            ; result 1
+    vmlal.u8        q10,    d17,    d30
+    vld1.u32        {q6},   [r4],   r11
+    vmlsl.u8        q10,    d19,    d31
+    vld1.u32        {q7},   [r4],   r11
+    vmlsl.u8        q5,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vmlal.u8        q5,     d2,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u32        {q8},   [r4],   r11
+    vhadd.s16       q10,    q10,    q11
+    vld1.u32        {q9},   [r4],   r11
+    vmlsl.u8        q5,     d4,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vmlal.u8        q5,     d6,     d27     ;mul_res = vmull_u8(src[0_3],
+                                            ; coeffabs_3);
+    add             r4,     #8
+    mov             r7,     #0xc000
+    vmlal.u8        q5,     d12,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vmlsl.u8        q5,     d14,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vqrshrun.s16    d9,     q10,    #6
+    vdup.16         q11,    r7
+    vmlal.u8        q5,     d16,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vmlsl.u8        q5,     d18,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    mov             r7,     #0x4000
+    vmlsl.u8        q11,    d1,     d24
+    vst1.8          {q4},   [r1]!           ;store the result pu1_dst
+    vmlal.u8        q11,    d3,     d25
+    vdup.16         q10,    r7
+    vmlsl.u8        q11,    d5,     d26
+    pld             [r12,   r2,     lsl #2]
+    pld             [r4,    r2,     lsl #2]
+    addeq           r12,    r12,    r9      ;increment the src pointer by
+                                            ; 2*src_strd-wd
+    addeq           r4,     r12,    r2      ;pu1_src + src_strd
+    vmlal.u8        q11,    d7,     d27
+    addeq           r1,     r1,     r8
+    subeq           r14,    r14,    #2
+    vmlal.u8        q11,    d13,    d28
+    vhadd.s16       q5,     q5,     q10
+    vmlsl.u8        q11,    d15,    d29
+    vmlal.u8        q11,    d17,    d30
+    cmp             r14,    #0
+    vmlsl.u8        q11,    d19,    d31
+    vqrshrun.s16    d10,    q5,     #6      ;right shift and saturating narrow
+                                            ; result 2
+    beq             epilog_16
+
+    vld1.u32        {q0},   [r12],  r11     ;vector load pu1_src
+    mov             r7,     #0xc000
+    cmp             r5,     #0
+    vld1.u32        {q1},   [r12],  r11
+    vhadd.s16       q11,    q11,    q10
+    vld1.u32        {q2},   [r12],  r11
+    vdup.16         q4,     r7
+    vld1.u32        {q3},   [r12],  r11
+    vmlsl.u8        q4,     d0,     d24     ;mul_res = vmlsl_u8(src[0_0],
+                                            ; coeffabs_0);
+    vld1.u32        {q6},   [r12],  r11
+    vld1.u32        {q7},   [r12],  r11
+    vmlal.u8        q4,     d2,     d25     ;mul_res = vmlal_u8(src[0_1],
+                                            ; coeffabs_1);
+    vld1.u32        {q8},   [r12],  r11
+    vmlsl.u8        q4,     d4,     d26     ;mul_res = vmlsl_u8(src[0_2],
+                                            ; coeffabs_2);
+    vld1.u32        {q9},   [r12],  r11
+    vqrshrun.s16    d11,    q11,    #6
+    vmlal.u8        q4,     d6,     d27     ;mul_res = vmlal_u8(src[0_3],
+                                            ; coeffabs_3);
+    moveq           r5,     r10
+    vmlal.u8        q4,     d12,    d28     ;mul_res = vmlal_u8(src[0_4],
+                                            ; coeffabs_4);
+    vdup.16         q10,    r7
+    vmlsl.u8        q4,     d14,    d29     ;mul_res = vmlsl_u8(src[0_5],
+                                            ; coeffabs_5);
+    vst1.8          {q5},   [r6]!           ;store the result pu1_dst
+    vmlal.u8        q4,     d16,    d30     ;mul_res = vmlal_u8(src[0_6],
+                                            ; coeffabs_6);
+    vmlsl.u8        q4,     d18,    d31     ;mul_res = vmlsl_u8(src[0_7],
+                                            ; coeffabs_7);
+    addeq           r6,     r1,     r3      ;pu1_dst + dst_strd
+    b               inner_loop_16
+
+epilog_16
+    mov             r7,     #0x4000
+    ldr             r0,     [sp],   #4
+    ldr             r10,    [sp,    #120]
+    vdup.16         q10,    r7
+    vhadd.s16       q11,    q11,    q10
+    vqrshrun.s16    d11,    q11,    #6
+    vst1.8          {q5},   [r6]!           ;store the result pu1_dst
+    ldr             r7,     [sp],   #4
+    cmp             r10,    #24
+    beq             outer_loop8_residual
+
+end_loops1
+    b               end_func
+
+outer_loop4_residual
+    sub             r12,    r0,     #3      ;pu1_src - 3
+    mov             r1,     r7
+    add             r1,     #8
+    mov             r10,    #4
+    add             r12,    #8
+    mov             r14,    #16
+    add             r8,     #4
+    add             r9,     #4
+
+outer_loop_4
+    add             r6,     r1,     r3      ;pu1_dst + dst_strd
+    add             r4,     r12,    r2      ;pu1_src + src_strd
+    subs            r5,     r10,    #0      ;checks wd
+    ble             end_inner_loop_4
+
+inner_loop_4
+    vld1.u32        {d0},   [r12],  r11     ;vector load pu1_src
+    vld1.u32        {d1},   [r12],  r11
+    vld1.u32        {d2},   [r12],  r11
+    vld1.u32        {d3},   [r12],  r11
+    vld1.u32        {d4},   [r12],  r11
+    vld1.u32        {d5},   [r12],  r11
+    vld1.u32        {d6},   [r12],  r11
+    vld1.u32        {d7},   [r12],  r11
+    sub             r12,    r12,    #4
+    vld1.u32        {d12},  [r4],   r11     ;vector load pu1_src + src_strd
+    vld1.u32        {d13},  [r4],   r11
+    vzip.32         d0,     d12             ;vector zip the i iteration and ii
+                                            ; iteration in single register
+    vld1.u32        {d14},  [r4],   r11
+    vzip.32         d1,     d13
+    vld1.u32        {d15},  [r4],   r11
+    vzip.32         d2,     d14
+    vld1.u32        {d16},  [r4],   r11
+    vzip.32         d3,     d15
+    vld1.u32        {d17},  [r4],   r11
+    vzip.32         d4,     d16
+    vld1.u32        {d18},  [r4],   r11
+    vzip.32         d5,     d17
+    vld1.u32        {d19},  [r4],   r11
+    mov             r7,     #0xc000
+    vdup.16         q4,     r7
+    sub             r4,     r4,     #4
+    vzip.32         d6,     d18
+    vzip.32         d7,     d19
+    vmlal.u8        q4,     d1,     d25     ;arithmetic operations for ii
+                                            ; iteration at the same time
+    vmlsl.u8        q4,     d0,     d24
+    vmlsl.u8        q4,     d2,     d26
+    vmlal.u8        q4,     d3,     d27
+    vmlal.u8        q4,     d4,     d28
+    vmlsl.u8        q4,     d5,     d29
+    vmlal.u8        q4,     d6,     d30
+    vmlsl.u8        q4,     d7,     d31
+    mov             r7,     #0x4000
+    vdup.16         q10,    r7
+    vhadd.s16       q4,     q4,     q10
+    vqrshrun.s16    d8,     q4,     #6
+    vst1.32         {d8[0]},[r1]!           ;store the i iteration result which
+                                            ; is in lower part of the register
+    vst1.32         {d8[1]},[r6]!           ;store the ii iteration result which
+                                            ; is in upper part of the register
+    subs            r5,     r5,     #4      ;decrement the wd by 4
+    bgt             inner_loop_4
+
+end_inner_loop_4
+    subs            r14,    r14,    #2      ;decrement the ht by 2
+    add             r12,    r12,    r9      ;increment the input pointer
+                                            ; 2*src_strd-wd
+    add             r1,     r1,     r8      ;increment the output pointer
+                                            ; 2*dst_strd-wd
+    bgt             outer_loop_4
+
+end_func
+    vpop            {d8  -  d15}
+    ldmfd           sp!,    {r4  -  r12,    r15} ;reload the registers from sp
+
+    ENDP
+
+    END
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon.h
index c1634ed..4f27da9 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon.h
@@ -8,6 +8,9 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#ifndef VPX_VPX_DSP_ARM_VPX_CONVOLVE8_NEON_H_
+#define VPX_VPX_DSP_ARM_VPX_CONVOLVE8_NEON_H_
+
 #include <arm_neon.h>
 
 #include "./vpx_config.h"
@@ -131,3 +134,5 @@
   return convolve8_8(ss[0], ss[1], ss[2], ss[3], ss[4], ss[5], ss[6], ss[7],
                      filters, filter3, filter4);
 }
+
+#endif  // VPX_VPX_DSP_ARM_VPX_CONVOLVE8_NEON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon_asm.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon_asm.asm
deleted file mode 100644
index 5eee156..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon_asm.asm
+++ /dev/null
@@ -1,273 +0,0 @@
-;
-;  Copyright (c) 2013 The WebM project authors. All Rights Reserved.
-;
-;  Use of this source code is governed by a BSD-style license
-;  that can be found in the LICENSE file in the root of the source
-;  tree. An additional intellectual property rights grant can be found
-;  in the file PATENTS.  All contributing project authors may
-;  be found in the AUTHORS file in the root of the source tree.
-;
-
-
-    ; These functions are only valid when:
-    ; x_step_q4 == 16
-    ; w%4 == 0
-    ; h%4 == 0
-    ; taps == 8
-    ; VP9_FILTER_WEIGHT == 128
-    ; VP9_FILTER_SHIFT == 7
-
-    EXPORT  |vpx_convolve8_horiz_neon|
-    EXPORT  |vpx_convolve8_vert_neon|
-    ARM
-    REQUIRE8
-    PRESERVE8
-
-    AREA ||.text||, CODE, READONLY, ALIGN=2
-
-    ; Multiply and accumulate by q0
-    MACRO
-    MULTIPLY_BY_Q0 $dst, $src0, $src1, $src2, $src3, $src4, $src5, $src6, $src7
-    vmull.s16 $dst, $src0, d0[0]
-    vmlal.s16 $dst, $src1, d0[1]
-    vmlal.s16 $dst, $src2, d0[2]
-    vmlal.s16 $dst, $src3, d0[3]
-    vmlal.s16 $dst, $src4, d1[0]
-    vmlal.s16 $dst, $src5, d1[1]
-    vmlal.s16 $dst, $src6, d1[2]
-    vmlal.s16 $dst, $src7, d1[3]
-    MEND
-
-; r0    const uint8_t *src
-; r1    int src_stride
-; r2    uint8_t *dst
-; r3    int dst_stride
-; sp[]const int16_t *filter
-; sp[]int x0_q4
-; sp[]int x_step_q4 ; unused
-; sp[]int y0_q4
-; sp[]int y_step_q4 ; unused
-; sp[]int w
-; sp[]int h
-
-|vpx_convolve8_horiz_neon| PROC
-    push            {r4-r10, lr}
-
-    sub             r0, r0, #3              ; adjust for taps
-
-    ldrd            r4, r5, [sp, #32]       ; filter, x0_q4
-    add             r4, r5, lsl #4
-    ldrd            r6, r7, [sp, #52]       ; w, h
-
-    vld1.s16        {q0}, [r4]              ; filter
-
-    sub             r8, r1, r1, lsl #2      ; -src_stride * 3
-    add             r8, r8, #4              ; -src_stride * 3 + 4
-
-    sub             r4, r3, r3, lsl #2      ; -dst_stride * 3
-    add             r4, r4, #4              ; -dst_stride * 3 + 4
-
-    rsb             r9, r6, r1, lsl #2      ; reset src for outer loop
-    sub             r9, r9, #7
-    rsb             r12, r6, r3, lsl #2     ; reset dst for outer loop
-
-    mov             r10, r6                 ; w loop counter
-
-vpx_convolve8_loop_horiz_v
-    vld1.8          {d24}, [r0], r1
-    vld1.8          {d25}, [r0], r1
-    vld1.8          {d26}, [r0], r1
-    vld1.8          {d27}, [r0], r8
-
-    vtrn.16         q12, q13
-    vtrn.8          d24, d25
-    vtrn.8          d26, d27
-
-    pld             [r0, r1, lsl #2]
-
-    vmovl.u8        q8, d24
-    vmovl.u8        q9, d25
-    vmovl.u8        q10, d26
-    vmovl.u8        q11, d27
-
-    ; save a few instructions in the inner loop
-    vswp            d17, d18
-    vmov            d23, d21
-
-    add             r0, r0, #3
-
-vpx_convolve8_loop_horiz
-    add             r5, r0, #64
-
-    vld1.32         {d28[]}, [r0], r1
-    vld1.32         {d29[]}, [r0], r1
-    vld1.32         {d31[]}, [r0], r1
-    vld1.32         {d30[]}, [r0], r8
-
-    pld             [r5]
-
-    vtrn.16         d28, d31
-    vtrn.16         d29, d30
-    vtrn.8          d28, d29
-    vtrn.8          d31, d30
-
-    pld             [r5, r1]
-
-    ; extract to s16
-    vtrn.32         q14, q15
-    vmovl.u8        q12, d28
-    vmovl.u8        q13, d29
-
-    pld             [r5, r1, lsl #1]
-
-    ; src[] * filter
-    MULTIPLY_BY_Q0  q1,  d16, d17, d20, d22, d18, d19, d23, d24
-    MULTIPLY_BY_Q0  q2,  d17, d20, d22, d18, d19, d23, d24, d26
-    MULTIPLY_BY_Q0  q14, d20, d22, d18, d19, d23, d24, d26, d27
-    MULTIPLY_BY_Q0  q15, d22, d18, d19, d23, d24, d26, d27, d25
-
-    pld             [r5, -r8]
-
-    ; += 64 >> 7
-    vqrshrun.s32    d2, q1, #7
-    vqrshrun.s32    d3, q2, #7
-    vqrshrun.s32    d4, q14, #7
-    vqrshrun.s32    d5, q15, #7
-
-    ; saturate
-    vqmovn.u16      d2, q1
-    vqmovn.u16      d3, q2
-
-    ; transpose
-    vtrn.16         d2, d3
-    vtrn.32         d2, d3
-    vtrn.8          d2, d3
-
-    vst1.u32        {d2[0]}, [r2@32], r3
-    vst1.u32        {d3[0]}, [r2@32], r3
-    vst1.u32        {d2[1]}, [r2@32], r3
-    vst1.u32        {d3[1]}, [r2@32], r4
-
-    vmov            q8,  q9
-    vmov            d20, d23
-    vmov            q11, q12
-    vmov            q9,  q13
-
-    subs            r6, r6, #4              ; w -= 4
-    bgt             vpx_convolve8_loop_horiz
-
-    ; outer loop
-    mov             r6, r10                 ; restore w counter
-    add             r0, r0, r9              ; src += src_stride * 4 - w
-    add             r2, r2, r12             ; dst += dst_stride * 4 - w
-    subs            r7, r7, #4              ; h -= 4
-    bgt vpx_convolve8_loop_horiz_v
-
-    pop             {r4-r10, pc}
-
-    ENDP
-
-|vpx_convolve8_vert_neon| PROC
-    push            {r4-r8, lr}
-
-    ; adjust for taps
-    sub             r0, r0, r1
-    sub             r0, r0, r1, lsl #1
-
-    ldr             r4, [sp, #24]           ; filter
-    ldr             r5, [sp, #36]           ; y0_q4
-    add             r4, r5, lsl #4
-    ldr             r6, [sp, #44]           ; w
-    ldr             lr, [sp, #48]           ; h
-
-    vld1.s16        {q0}, [r4]              ; filter
-
-    lsl             r1, r1, #1
-    lsl             r3, r3, #1
-
-vpx_convolve8_loop_vert_h
-    mov             r4, r0
-    add             r7, r0, r1, asr #1
-    mov             r5, r2
-    add             r8, r2, r3, asr #1
-    mov             r12, lr                 ; h loop counter
-
-    vld1.u32        {d16[0]}, [r4], r1
-    vld1.u32        {d16[1]}, [r7], r1
-    vld1.u32        {d18[0]}, [r4], r1
-    vld1.u32        {d18[1]}, [r7], r1
-    vld1.u32        {d20[0]}, [r4], r1
-    vld1.u32        {d20[1]}, [r7], r1
-    vld1.u32        {d22[0]}, [r4], r1
-
-    vmovl.u8        q8, d16
-    vmovl.u8        q9, d18
-    vmovl.u8        q10, d20
-    vmovl.u8        q11, d22
-
-vpx_convolve8_loop_vert
-    ; always process a 4x4 block at a time
-    vld1.u32        {d24[0]}, [r7], r1
-    vld1.u32        {d26[0]}, [r4], r1
-    vld1.u32        {d26[1]}, [r7], r1
-    vld1.u32        {d24[1]}, [r4], r1
-
-    ; extract to s16
-    vmovl.u8        q12, d24
-    vmovl.u8        q13, d26
-
-    pld             [r5]
-    pld             [r8]
-
-    ; src[] * filter
-    MULTIPLY_BY_Q0  q1,  d16, d17, d18, d19, d20, d21, d22, d24
-
-    pld             [r5, r3]
-    pld             [r8, r3]
-
-    MULTIPLY_BY_Q0  q2,  d17, d18, d19, d20, d21, d22, d24, d26
-
-    pld             [r7]
-    pld             [r4]
-
-    MULTIPLY_BY_Q0  q14, d18, d19, d20, d21, d22, d24, d26, d27
-
-    pld             [r7, r1]
-    pld             [r4, r1]
-
-    MULTIPLY_BY_Q0  q15, d19, d20, d21, d22, d24, d26, d27, d25
-
-    ; += 64 >> 7
-    vqrshrun.s32    d2, q1, #7
-    vqrshrun.s32    d3, q2, #7
-    vqrshrun.s32    d4, q14, #7
-    vqrshrun.s32    d5, q15, #7
-
-    ; saturate
-    vqmovn.u16      d2, q1
-    vqmovn.u16      d3, q2
-
-    vst1.u32        {d2[0]}, [r5@32], r3
-    vst1.u32        {d2[1]}, [r8@32], r3
-    vst1.u32        {d3[0]}, [r5@32], r3
-    vst1.u32        {d3[1]}, [r8@32], r3
-
-    vmov            q8, q10
-    vmov            d18, d22
-    vmov            d19, d24
-    vmov            q10, q13
-    vmov            d22, d25
-
-    subs            r12, r12, #4            ; h -= 4
-    bgt             vpx_convolve8_loop_vert
-
-    ; outer loop
-    add             r0, r0, #4
-    add             r2, r2, #4
-    subs            r6, r6, #4              ; w -= 4
-    bgt             vpx_convolve8_loop_vert_h
-
-    pop             {r4-r8, pc}
-
-    ENDP
-    END
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon_asm.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon_asm.c
new file mode 100644
index 0000000..4470b28
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon_asm.c
@@ -0,0 +1,41 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include "./vpx_dsp_rtcd.h"
+#include "vp9/common/vp9_filter.h"
+#include "vpx_dsp/arm/vpx_convolve8_neon_asm.h"
+
+/* Type1 and Type2 functions are called depending on the position of the
+ * negative and positive coefficients in the filter. In type1, the filter kernel
+ * used is sub_pel_filters_8lp, in which only the first two and the last two
+ * coefficients are negative. In type2, the negative coefficients are 0, 2, 5 &
+ * 7.
+ */
+
+#define DEFINE_FILTER(dir)                                                   \
+  void vpx_convolve8_##dir##_neon(                                           \
+      const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst,                \
+      ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4,           \
+      int x_step_q4, int y0_q4, int y_step_q4, int w, int h) {               \
+    if (filter == vp9_filter_kernels[1]) {                                   \
+      vpx_convolve8_##dir##_filter_type1_neon(                               \
+          src, src_stride, dst, dst_stride, filter, x0_q4, x_step_q4, y0_q4, \
+          y_step_q4, w, h);                                                  \
+    } else {                                                                 \
+      vpx_convolve8_##dir##_filter_type2_neon(                               \
+          src, src_stride, dst, dst_stride, filter, x0_q4, x_step_q4, y0_q4, \
+          y_step_q4, w, h);                                                  \
+    }                                                                        \
+  }
+
+DEFINE_FILTER(horiz);
+DEFINE_FILTER(avg_horiz);
+DEFINE_FILTER(vert);
+DEFINE_FILTER(avg_vert);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon_asm.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon_asm.h
new file mode 100644
index 0000000..b123d1c
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_neon_asm.h
@@ -0,0 +1,29 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VPX_DSP_ARM_VPX_CONVOLVE8_NEON_ASM_H_
+#define VPX_VPX_DSP_ARM_VPX_CONVOLVE8_NEON_ASM_H_
+
+#define DECLARE_FILTER(dir, type)                                  \
+  void vpx_convolve8_##dir##_filter_##type##_neon(                 \
+      const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst,      \
+      ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, \
+      int x_step_q4, int y0_q4, int y_step_q4, int w, int h);
+
+DECLARE_FILTER(horiz, type1);
+DECLARE_FILTER(avg_horiz, type1);
+DECLARE_FILTER(horiz, type2);
+DECLARE_FILTER(avg_horiz, type2);
+DECLARE_FILTER(vert, type1);
+DECLARE_FILTER(avg_vert, type1);
+DECLARE_FILTER(vert, type2);
+DECLARE_FILTER(avg_vert, type2);
+
+#endif  // VPX_VPX_DSP_ARM_VPX_CONVOLVE8_NEON_ASM_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_vert_filter_type1_neon.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_vert_filter_type1_neon.asm
new file mode 100644
index 0000000..2666d42
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_vert_filter_type1_neon.asm
@@ -0,0 +1,457 @@
+;
+;  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+;
+;  Use of this source code is governed by a BSD-style license
+;  that can be found in the LICENSE file in the root of the source
+;  tree. An additional intellectual property rights grant can be found
+;  in the file PATENTS.  All contributing project authors may
+;  be found in the AUTHORS file in the root of the source tree.
+;
+;**************Variables Vs Registers***********************************
+;    r0 => src
+;    r1 => dst
+;    r2 =>  src_stride
+;    r6 =>  dst_stride
+;    r12 => filter_y0
+;    r5 =>  ht
+;    r3 =>  wd
+
+    EXPORT          |vpx_convolve8_vert_filter_type1_neon|
+    ARM
+    REQUIRE8
+    PRESERVE8
+
+    AREA  ||.text||, CODE, READONLY, ALIGN=2
+
+|vpx_convolve8_vert_filter_type1_neon| PROC
+
+    stmfd           sp!,    {r4  -  r12,    r14} ;stack stores the values of
+                                                 ; the arguments
+    vpush           {d8  -  d15}                 ; stack offset by 64
+    mov             r4,     r1
+    mov             r1,     r2
+    mov             r2,     r4
+    vmov.i16        q15,    #0x4000
+    mov             r11,    #0xc000
+    ldr             r12,    [sp,    #104]   ;load filter
+    ldr             r6,     [sp,    #116]   ;load y0_q4
+    add             r12,    r12,    r6,     lsl #4 ;r12 = filter[y0_q4]
+    mov             r6,     r3
+    ldr             r5,     [sp,    #124]   ;load wd
+    vld2.8          {d0,    d1},    [r12]   ;coeff = vld1_s8(pi1_coeff)
+    sub             r12,    r2,     r2,     lsl #2 ;src_strd & pi1_coeff
+    vabs.s8         d0,     d0              ;vabs_s8(coeff)
+    add             r0,     r0,     r12     ;r0->pu1_src    r12->pi1_coeff
+    ldr             r3,     [sp,    #128]   ;load ht
+    subs            r7,     r3,     #0      ;r3->ht
+    vdup.u8         d22,    d0[0]           ;coeffabs_0 = vdup_lane_u8(coeffabs,
+                                            ; 0);
+    cmp             r5,     #8
+    vdup.u8         d23,    d0[1]           ;coeffabs_1 = vdup_lane_u8(coeffabs,
+                                            ; 1);
+    vdup.u8         d24,    d0[2]           ;coeffabs_2 = vdup_lane_u8(coeffabs,
+                                            ; 2);
+    vdup.u8         d25,    d0[3]           ;coeffabs_3 = vdup_lane_u8(coeffabs,
+                                            ; 3);
+    vdup.u8         d26,    d0[4]           ;coeffabs_4 = vdup_lane_u8(coeffabs,
+                                            ; 4);
+    vdup.u8         d27,    d0[5]           ;coeffabs_5 = vdup_lane_u8(coeffabs,
+                                            ; 5);
+    vdup.u8         d28,    d0[6]           ;coeffabs_6 = vdup_lane_u8(coeffabs,
+                                            ; 6);
+    vdup.u8         d29,    d0[7]           ;coeffabs_7 = vdup_lane_u8(coeffabs,
+                                            ; 7);
+    blt             core_loop_wd_4          ;core loop wd 4 jump
+
+    str             r0,     [sp,  #-4]!
+    str             r1,     [sp,  #-4]!
+    bic             r4,     r5,     #7      ;r5 ->wd
+    rsb             r9,     r4,     r6,     lsl #2 ;r6->dst_strd    r5    ->wd
+    rsb             r8,     r4,     r2,     lsl #2 ;r2->src_strd
+    mov             r3,     r5,     lsr #3  ;divide by 8
+    mul             r7,     r3              ;multiply height by width
+    sub             r7,     #4              ;subtract by one for epilog
+
+prolog
+    and             r10,    r0,     #31
+    add             r3,     r0,     r2      ;pu1_src_tmp += src_strd;
+    vdup.16         q4,     r11
+    vld1.u8         {d1},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vld1.u8         {d0},   [r0]!           ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    subs            r4,     r4,     #8
+    vld1.u8         {d2},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d1,     d23     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp2, coeffabs_1);
+    vld1.u8         {d3},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d0,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_0);
+    vld1.u8         {d4},   [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d2,     d24     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_2);
+    vld1.u8         {d5},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d3,     d25     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_3);
+    vld1.u8         {d6},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_4);
+    vld1.u8         {d7},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d5,     d27     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp2, coeffabs_5);
+    vld1.u8         {d16},  [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d6,     d28     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_6);
+    vld1.u8         {d17},  [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d7,     d29     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_7);
+    vdup.16         q5,     r11
+    vld1.u8         {d18},  [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q5,     d2,     d23     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp3, coeffabs_1);
+    addle           r0,     r0,     r8
+    vmlsl.u8        q5,     d1,     d22     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_0);
+    bicle           r4,     r5,     #7      ;r5 ->wd
+    vmlal.u8        q5,     d3,     d24     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_2);
+    pld             [r3]
+    vmlal.u8        q5,     d4,     d25     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_3);
+    vhadd.s16       q4,     q4,     q15
+    vdup.16         q6,     r11
+    pld             [r3,    r2]
+    vmlal.u8        q5,     d5,     d26     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_4);
+    pld             [r3,    r2,     lsl #1]
+    vmlal.u8        q5,     d6,     d27     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp3, coeffabs_5);
+    add             r3,     r3,     r2
+    vmlsl.u8        q5,     d7,     d28     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_6);
+    pld             [r3,    r2,     lsl #1]
+    vmlsl.u8        q5,     d16,    d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_7);
+    add             r3,     r0,     r2      ;pu1_src_tmp += src_strd;
+    vqrshrun.s16    d8,     q4,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vld1.u8         {d1},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q6,     d3,     d23
+    vld1.u8         {d0},   [r0]!           ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q6,     d2,     d22
+    vld1.u8         {d2},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q6,     d4,     d24
+    vhadd.s16       q5,     q5,     q15
+    vdup.16         q7,     r11
+    vmlal.u8        q6,     d5,     d25
+    vmlal.u8        q6,     d6,     d26
+    vmlal.u8        q6,     d7,     d27
+    vmlsl.u8        q6,     d16,    d28
+    vmlsl.u8        q6,     d17,    d29
+    add             r14,    r1,     r6
+    vst1.8          {d8},   [r1]!           ;vst1_u8(pu1_dst,sto_res);
+    vqrshrun.s16    d10,    q5,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    addle           r1,     r1,     r9
+    vmlsl.u8        q7,     d4,     d23
+    subs            r7,     r7,     #4
+    vmlsl.u8        q7,     d3,     d22
+    vmlal.u8        q7,     d5,     d24
+    vmlal.u8        q7,     d6,     d25
+    vld1.u8         {d3},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vhadd.s16       q6,     q6,     q15
+    vdup.16         q4,     r11
+    vmlal.u8        q7,     d7,     d26
+    vld1.u8         {d4},   [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q7,     d16,    d27
+    vld1.u8         {d5},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d17,    d28
+    vld1.u8         {d6},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d18,    d29
+    vld1.u8         {d7},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vst1.8          {d10},  [r14],  r6      ;vst1_u8(pu1_dst_tmp,sto_res);
+    vqrshrun.s16    d12,    q6,     #6
+    blt             epilog_end              ;jumps to epilog_end
+
+    beq             epilog                  ;jumps to epilog
+
+main_loop_8
+    subs            r4,     r4,     #8
+    vmlsl.u8        q4,     d1,     d23     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp2, coeffabs_1);
+    addle           r0,     r0,     r8
+    vmlsl.u8        q4,     d0,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_0);
+    bicle           r4,     r5,     #7      ;r5 ->wd
+    vmlal.u8        q4,     d2,     d24     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_2);
+    vld1.u8         {d16},  [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d3,     d25     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_3);
+    vhadd.s16       q7,     q7,     q15
+    vdup.16         q5,     r11
+    vld1.u8         {d17},  [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_4);
+    vld1.u8         {d18},  [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d5,     d27     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp2, coeffabs_5);
+    vmlsl.u8        q4,     d6,     d28     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_6);
+    vmlsl.u8        q4,     d7,     d29     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_7);
+    vst1.8          {d12},  [r14],  r6
+    vqrshrun.s16    d14,    q7,     #6
+    add             r3,     r0,     r2      ;pu1_src_tmp += src_strd;
+    vmlsl.u8        q5,     d2,     d23     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp3, coeffabs_1);
+    vld1.u8         {d0},   [r0]!           ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q5,     d1,     d22     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_0);
+    vmlal.u8        q5,     d3,     d24     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_2);
+    vld1.u8         {d1},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q5,     d4,     d25     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_3);
+    vhadd.s16       q4,     q4,     q15
+    vdup.16         q6,     r11
+    vst1.8          {d14},  [r14],  r6
+    vmlal.u8        q5,     d5,     d26     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_4);
+    add             r14,    r1,     #0
+    vmlal.u8        q5,     d6,     d27     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp3, coeffabs_5);
+    add             r1,     r1,     #8
+    vmlsl.u8        q5,     d7,     d28     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_6);
+    vmlsl.u8        q5,     d16,    d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_7);
+    addle           r1,     r1,     r9
+    vqrshrun.s16    d8,     q4,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vmlsl.u8        q6,     d3,     d23
+    add             r10,    r3,     r2,     lsl #3 ; 10*strd - 8+2
+    vmlsl.u8        q6,     d2,     d22
+    add             r10,    r10,    r2      ; 11*strd
+    vmlal.u8        q6,     d4,     d24
+    vld1.u8         {d2},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q6,     d5,     d25
+    vhadd.s16       q5,     q5,     q15
+    vdup.16         q7,     r11
+    vmlal.u8        q6,     d6,     d26
+    vst1.8          {d8},   [r14],  r6      ;vst1_u8(pu1_dst,sto_res);
+    pld             [r10]                   ;11+ 0
+    vmlal.u8        q6,     d7,     d27
+    pld             [r10,   r2]             ;11+ 1*strd
+    vmlsl.u8        q6,     d16,    d28
+    pld             [r10,   r2,     lsl #1] ;11+ 2*strd
+    vmlsl.u8        q6,     d17,    d29
+    add             r10,    r10,    r2      ;12*strd
+    vqrshrun.s16    d10,    q5,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    pld             [r10,   r2,     lsl #1] ;11+ 3*strd
+    vmlsl.u8        q7,     d4,     d23
+    vmlsl.u8        q7,     d3,     d22
+    subs            r7,     r7,     #4
+    vmlal.u8        q7,     d5,     d24
+    vmlal.u8        q7,     d6,     d25
+    vld1.u8         {d3},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vhadd.s16       q6,     q6,     q15
+    vdup.16         q4,     r11
+    vmlal.u8        q7,     d7,     d26
+    vld1.u8         {d4},   [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q7,     d16,    d27
+    vld1.u8         {d5},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d17,    d28
+    vld1.u8         {d6},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d18,    d29
+    vld1.u8         {d7},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vqrshrun.s16    d12,    q6,     #6
+    vst1.8          {d10},  [r14],  r6      ;vst1_u8(pu1_dst_tmp,sto_res);
+    bgt             main_loop_8             ;jumps to main_loop_8
+
+epilog
+    vmlsl.u8        q4,     d1,     d23     ;mul_res1 = vmull_u8(src_tmp2,
+                                            ; coeffabs_1);
+    vmlsl.u8        q4,     d0,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_0);
+    vmlal.u8        q4,     d2,     d24     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_2);
+    vmlal.u8        q4,     d3,     d25     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_3);
+    vhadd.s16       q7,     q7,     q15
+    vdup.16         q5,     r11
+    vmlal.u8        q4,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_4);
+    vmlal.u8        q4,     d5,     d27     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp2, coeffabs_5);
+    vmlsl.u8        q4,     d6,     d28     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_6);
+    vmlsl.u8        q4,     d7,     d29     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_7);
+    vst1.8          {d12},  [r14],  r6
+    vqrshrun.s16    d14,    q7,     #6
+    vld1.u8         {d16},  [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q5,     d2,     d23     ;mul_res2 = vmull_u8(src_tmp3,
+                                            ; coeffabs_1);
+    vmlsl.u8        q5,     d1,     d22     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_0);
+    vmlal.u8        q5,     d3,     d24     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_2);
+    vmlal.u8        q5,     d4,     d25     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_3);
+    vhadd.s16       q4,     q4,     q15
+    vdup.16         q6,     r11
+    vmlal.u8        q5,     d5,     d26     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_4);
+    vmlal.u8        q5,     d6,     d27     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp3, coeffabs_5);
+    vmlsl.u8        q5,     d7,     d28     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_6);
+    vmlsl.u8        q5,     d16,    d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_7);
+    vst1.8          {d14},  [r14],  r6
+    vqrshrun.s16    d8,     q4,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vld1.u8         {d17},  [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q6,     d3,     d23
+    vmlsl.u8        q6,     d2,     d22
+    vmlal.u8        q6,     d4,     d24
+    vmlal.u8        q6,     d5,     d25
+    vhadd.s16       q5,     q5,     q15
+    vdup.16         q7,     r11
+    vmlal.u8        q6,     d6,     d26
+    vmlal.u8        q6,     d7,     d27
+    vmlsl.u8        q6,     d16,    d28
+    vmlsl.u8        q6,     d17,    d29
+    add             r14,    r1,     r6
+    vst1.8          {d8},   [r1]!           ;vst1_u8(pu1_dst,sto_res);
+    vqrshrun.s16    d10,    q5,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vld1.u8         {d18},  [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d4,     d23
+    vmlsl.u8        q7,     d3,     d22
+    vmlal.u8        q7,     d5,     d24
+    vmlal.u8        q7,     d6,     d25
+    vhadd.s16       q6,     q6,     q15
+    vmlal.u8        q7,     d7,     d26
+    vmlal.u8        q7,     d16,    d27
+    vmlsl.u8        q7,     d17,    d28
+    vmlsl.u8        q7,     d18,    d29
+    vst1.8          {d10},  [r14],  r6      ;vst1_u8(pu1_dst_tmp,sto_res);
+    vqrshrun.s16    d12,    q6,     #6
+
+epilog_end
+    vst1.8          {d12},  [r14],  r6
+    vhadd.s16       q7,     q7,     q15
+    vqrshrun.s16    d14,    q7,     #6
+    vst1.8          {d14},  [r14],  r6
+
+end_loops
+    tst             r5,     #7
+    ldr             r1,     [sp],   #4
+    ldr             r0,     [sp],   #4
+    vpopeq          {d8  -  d15}
+    ldmfdeq         sp!,    {r4  -  r12,    r15} ;reload the registers from
+                                            ; sp
+    mov             r5,     #4
+    add             r0,     r0,     #8
+    add             r1,     r1,     #8
+    mov             r7,     #16
+
+core_loop_wd_4
+    rsb             r9,     r5,     r6,     lsl #2 ;r6->dst_strd    r5    ->wd
+    rsb             r8,     r5,     r2,     lsl #2 ;r2->src_strd
+    vmov.i8         d4,     #0
+
+outer_loop_wd_4
+    subs            r12,    r5,     #0
+    ble             end_inner_loop_wd_4     ;outer loop jump
+
+inner_loop_wd_4
+    add             r3,     r0,     r2
+    vld1.u32        {d4[1]},[r3],   r2      ;src_tmp1 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp1, 1);
+    subs            r12,    r12,    #4
+    vdup.u32        d5,     d4[1]           ;src_tmp2 = vdup_lane_u32(src_tmp1,
+                                            ; 1);
+    vld1.u32        {d5[1]},[r3],   r2      ;src_tmp2 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp2, 1);
+    vld1.u32        {d4[0]},[r0]            ;src_tmp1 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp1, 0);
+    vdup.16         q0,     r11
+    vmlsl.u8        q0,     d5,     d23     ;mul_res1 =
+                                            ; vmull_u8(vreinterpret_u8_u32(src_tmp2), coeffabs_1);
+
+    vdup.u32        d6,     d5[1]           ;src_tmp3 = vdup_lane_u32(src_tmp2,
+                                            ; 1);
+    add             r0,     r0,     #4
+    vld1.u32        {d6[1]},[r3],   r2      ;src_tmp3 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp3, 1);
+    vmlsl.u8        q0,     d4,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp1), coeffabs_0);
+    vdup.u32        d7,     d6[1]           ;src_tmp4 = vdup_lane_u32(src_tmp3,
+                                            ; 1);
+    vld1.u32        {d7[1]},[r3],   r2      ;src_tmp4 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp4, 1);
+    vmlal.u8        q0,     d6,     d24     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp3), coeffabs_2);
+    vdup.16         q4,     r11
+    vmlsl.u8        q4,     d7,     d23
+    vdup.u32        d4,     d7[1]           ;src_tmp1 = vdup_lane_u32(src_tmp4,
+                                            ; 1);
+    vmull.u8        q1,     d7,     d25     ;mul_res2 =
+                                            ; vmull_u8(vreinterpret_u8_u32(src_tmp4), coeffabs_3);
+    vld1.u32        {d4[1]},[r3],   r2      ;src_tmp1 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp1, 1);
+    vmlsl.u8        q4,     d6,     d22
+    vmlal.u8        q0,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp1), coeffabs_4);
+    vdup.u32        d5,     d4[1]           ;src_tmp2 = vdup_lane_u32(src_tmp1,
+                                            ; 1);
+    vmlal.u8        q4,     d4,     d24
+    vld1.u32        {d5[1]},[r3],   r2      ;src_tmp2 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp2, 1);
+    vmlal.u8        q1,     d5,     d27     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; vreinterpret_u8_u32(src_tmp2), coeffabs_5);
+    vdup.u32        d6,     d5[1]           ;src_tmp3 = vdup_lane_u32(src_tmp2,
+                                            ; 1);
+    vmlal.u8        q4,     d5,     d25
+    vld1.u32        {d6[1]},[r3],   r2      ;src_tmp3 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp3, 1);
+    vmlsl.u8        q0,     d6,     d28     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp3), coeffabs_6);
+    vdup.u32        d7,     d6[1]           ;src_tmp4 = vdup_lane_u32(src_tmp3,
+                                            ; 1);
+    vmlal.u8        q4,     d6,     d26
+    vld1.u32        {d7[1]},[r3],   r2      ;src_tmp4 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp4, 1);
+    vmlsl.u8        q1,     d7,     d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; vreinterpret_u8_u32(src_tmp4), coeffabs_7);
+    vdup.u32        d4,     d7[1]
+    vadd.i16        q0,     q0,     q1      ;mul_res1 = vaddq_u16(mul_res1,
+                                            ; mul_res2);
+    vmlal.u8        q4,     d7,     d27
+    vld1.u32        {d4[1]},[r3],   r2
+    vmlsl.u8        q4,     d4,     d28
+    vdup.u32        d5,     d4[1]
+    vhadd.s16       q0,     q0,     q15
+    vqrshrun.s16    d0,     q0,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vld1.u32        {d5[1]},[r3]
+    add             r3,     r1,     r6
+    vst1.32         {d0[0]},[r1]            ;vst1_lane_u32((uint32_t *)pu1_dst,
+                                            ; vreinterpret_u32_u8(sto_res), 0);
+    vmlsl.u8        q4,     d5,     d29
+    vst1.32         {d0[1]},[r3],   r6      ;vst1_lane_u32((uint32_t
+                                            ; *)pu1_dst_tmp, vreinterpret_u32_u8(sto_res), 1);
+    vhadd.s16       q4,     q4,     q15
+    vqrshrun.s16    d8,     q4,     #6
+    vst1.32         {d8[0]},[r3],   r6
+    add             r1,     r1,     #4
+    vst1.32         {d8[1]},[r3]
+    bgt             inner_loop_wd_4
+
+end_inner_loop_wd_4
+    subs            r7,     r7,     #4
+    add             r1,     r1,     r9
+    add             r0,     r0,     r8
+    bgt             outer_loop_wd_4
+
+    vpop            {d8  -  d15}
+    ldmfd           sp!,    {r4  -  r12,    r15} ;reload the registers from sp
+
+    ENDP
+
+    END
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_vert_filter_type2_neon.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_vert_filter_type2_neon.asm
new file mode 100644
index 0000000..cb5d6d3
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve8_vert_filter_type2_neon.asm
@@ -0,0 +1,455 @@
+;
+;  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+;
+;  Use of this source code is governed by a BSD-style license
+;  that can be found in the LICENSE file in the root of the source
+;  tree. An additional intellectual property rights grant can be found
+;  in the file PATENTS.  All contributing project authors may
+;  be found in the AUTHORS file in the root of the source tree.
+;
+;**************Variables Vs Registers***********************************
+;    r0 => src
+;    r1 => dst
+;    r2 =>  src_stride
+;    r6 =>  dst_stride
+;    r12 => filter_y0
+;    r5 =>  ht
+;    r3 =>  wd
+
+    EXPORT          |vpx_convolve8_vert_filter_type2_neon|
+    ARM
+    REQUIRE8
+    PRESERVE8
+
+    AREA  ||.text||, CODE, READONLY, ALIGN=2
+
+|vpx_convolve8_vert_filter_type2_neon| PROC
+
+    stmfd           sp!,    {r4  -  r12,    r14} ;stack stores the values of
+                                                 ; the arguments
+    vpush           {d8  -  d15}                 ; stack offset by 64
+    mov             r4,     r1
+    mov             r1,     r2
+    mov             r2,     r4
+    vmov.i16        q15,    #0x4000
+    mov             r11,    #0xc000
+    ldr             r12,    [sp,    #104]   ;load filter
+    ldr             r6,     [sp,    #116]   ;load y0_q4
+    add             r12,    r12,    r6,     lsl #4 ;r12 = filter[y0_q4]
+    mov             r6,     r3
+    ldr             r5,     [sp,    #124]   ;load wd
+    vld2.8          {d0,    d1},    [r12]   ;coeff = vld1_s8(pi1_coeff)
+    sub             r12,    r2,     r2,     lsl #2 ;src_ctrd & pi1_coeff
+    vabs.s8         d0,     d0              ;vabs_s8(coeff)
+    add             r0,     r0,     r12     ;r0->pu1_src    r12->pi1_coeff
+    ldr             r3,     [sp,    #128]   ;load ht
+    subs            r7,     r3,     #0      ;r3->ht
+    vdup.u8         d22,    d0[0]           ;coeffabs_0 = vdup_lane_u8(coeffabs,
+                                            ; 0);
+    cmp             r5,     #8
+    vdup.u8         d23,    d0[1]           ;coeffabs_1 = vdup_lane_u8(coeffabs,
+                                            ; 1);
+    vdup.u8         d24,    d0[2]           ;coeffabs_2 = vdup_lane_u8(coeffabs,
+                                            ; 2);
+    vdup.u8         d25,    d0[3]           ;coeffabs_3 = vdup_lane_u8(coeffabs,
+                                            ; 3);
+    vdup.u8         d26,    d0[4]           ;coeffabs_4 = vdup_lane_u8(coeffabs,
+                                            ; 4);
+    vdup.u8         d27,    d0[5]           ;coeffabs_5 = vdup_lane_u8(coeffabs,
+                                            ; 5);
+    vdup.u8         d28,    d0[6]           ;coeffabs_6 = vdup_lane_u8(coeffabs,
+                                            ; 6);
+    vdup.u8         d29,    d0[7]           ;coeffabs_7 = vdup_lane_u8(coeffabs,
+                                            ; 7);
+    blt             core_loop_wd_4          ;core loop wd 4 jump
+
+    str             r0,     [sp,  #-4]!
+    str             r1,     [sp,  #-4]!
+    bic             r4,     r5,     #7      ;r5 ->wd
+    rsb             r9,     r4,     r6,     lsl #2 ;r6->dst_strd    r5    ->wd
+    rsb             r8,     r4,     r2,     lsl #2 ;r2->src_strd
+    mov             r3,     r5,     lsr #3  ;divide by 8
+    mul             r7,     r3              ;multiply height by width
+    sub             r7,     #4              ;subtract by one for epilog
+
+prolog
+    and             r10,    r0,     #31
+    add             r3,     r0,     r2      ;pu1_src_tmp += src_strd;
+    vdup.16         q4,     r11
+    vld1.u8         {d1},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vld1.u8         {d0},   [r0]!           ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    subs            r4,     r4,     #8
+    vld1.u8         {d2},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d1,     d23     ;mul_res1 = vmull_u8(src_tmp2,
+                                            ; coeffabs_1);
+    vld1.u8         {d3},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d0,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_0);
+    vld1.u8         {d4},   [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d2,     d24     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_2);
+    vld1.u8         {d5},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d3,     d25     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_3);
+    vld1.u8         {d6},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_4);
+    vld1.u8         {d7},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d5,     d27     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp2, coeffabs_5);
+    vld1.u8         {d16},  [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d6,     d28     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_6);
+    vld1.u8         {d17},  [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d7,     d29     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_7);
+    vdup.16         q5,     r11
+    vld1.u8         {d18},  [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q5,     d2,     d23     ;mul_res2 = vmull_u8(src_tmp3,
+                                            ; coeffabs_1);
+    addle           r0,     r0,     r8
+    vmlsl.u8        q5,     d1,     d22     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_0);
+    bicle           r4,     r5,     #7      ;r5 ->wd
+    vmlsl.u8        q5,     d3,     d24     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_2);
+    pld             [r3]
+    vmlal.u8        q5,     d4,     d25     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_3);
+    vhadd.s16       q4,     q4,     q15
+    vdup.16         q6,     r11
+    pld             [r3,    r2]
+    vmlal.u8        q5,     d5,     d26     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_4);
+    pld             [r3,    r2,     lsl #1]
+    vmlsl.u8        q5,     d6,     d27     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp3, coeffabs_5);
+    add             r3,     r3,     r2
+    vmlal.u8        q5,     d7,     d28     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_6);
+    pld             [r3,    r2,     lsl #1]
+    vmlsl.u8        q5,     d16,    d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_7);
+    add             r3,     r0,     r2      ;pu1_src_tmp += src_strd;
+    vqrshrun.s16    d8,     q4,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+
+    vld1.u8         {d1},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q6,     d3,     d23
+    vld1.u8         {d0},   [r0]!           ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q6,     d2,     d22
+    vld1.u8         {d2},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q6,     d4,     d24
+    vhadd.s16       q5,     q5,     q15
+    vdup.16         q7,     r11
+    vmlal.u8        q6,     d5,     d25
+    vmlal.u8        q6,     d6,     d26
+    vmlsl.u8        q6,     d7,     d27
+    vmlal.u8        q6,     d16,    d28
+    vmlsl.u8        q6,     d17,    d29
+    add             r14,    r1,     r6
+    vst1.8          {d8},   [r1]!           ;vst1_u8(pu1_dst,sto_res);
+    vqrshrun.s16    d10,    q5,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    addle           r1,     r1,     r9
+    vmlal.u8        q7,     d4,     d23
+    subs            r7,     r7,     #4
+    vmlsl.u8        q7,     d3,     d22
+    vmlsl.u8        q7,     d5,     d24
+    vmlal.u8        q7,     d6,     d25
+    vld1.u8         {d3},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vhadd.s16       q6,     q6,     q15
+    vdup.16         q4,     r11
+    vmlal.u8        q7,     d7,     d26
+    vld1.u8         {d4},   [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d16,    d27
+    vld1.u8         {d5},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q7,     d17,    d28
+    vld1.u8         {d6},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d18,    d29
+    vld1.u8         {d7},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vst1.8          {d10},  [r14],  r6      ;vst1_u8(pu1_dst_tmp,sto_res);
+    vqrshrun.s16    d12,    q6,     #6
+    blt             epilog_end              ;jumps to epilog_end
+
+    beq             epilog                  ;jumps to epilog
+
+main_loop_8
+    subs            r4,     r4,     #8
+    vmlal.u8        q4,     d1,     d23     ;mul_res1 = vmull_u8(src_tmp2,
+                                            ; coeffabs_1);
+    addle           r0,     r0,     r8
+    vmlsl.u8        q4,     d0,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_0);
+    bicle           r4,     r5,     #7      ;r5 ->wd
+    vmlsl.u8        q4,     d2,     d24     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_2);
+    vld1.u8         {d16},  [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d3,     d25     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_3);
+    vhadd.s16       q7,     q7,     q15
+    vdup.16         q5,     r11
+    vld1.u8         {d17},  [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q4,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_4);
+    vld1.u8         {d18},  [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q4,     d5,     d27     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp2, coeffabs_5);
+    vmlal.u8        q4,     d6,     d28     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_6);
+    vmlsl.u8        q4,     d7,     d29     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_7);
+    vst1.8          {d12},  [r14],  r6
+    vqrshrun.s16    d14,    q7,     #6
+    add             r3,     r0,     r2      ;pu1_src_tmp += src_strd;
+    vmlal.u8        q5,     d2,     d23     ;mul_res2 = vmull_u8(src_tmp3,
+                                            ; coeffabs_1);
+    vld1.u8         {d0},   [r0]!           ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q5,     d1,     d22     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_0);
+    vmlsl.u8        q5,     d3,     d24     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_2);
+    vld1.u8         {d1},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q5,     d4,     d25     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_3);
+    vhadd.s16       q4,     q4,     q15
+    vdup.16         q6,     r11
+    vst1.8          {d14},  [r14],  r6
+    vmlal.u8        q5,     d5,     d26     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_4);
+    add             r14,    r1,     #0
+    vmlsl.u8        q5,     d6,     d27     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp3, coeffabs_5);
+    add             r1,     r1,     #8
+    vmlal.u8        q5,     d7,     d28     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_6);
+    vmlsl.u8        q5,     d16,    d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_7);
+    addle           r1,     r1,     r9
+    vqrshrun.s16    d8,     q4,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vmlal.u8        q6,     d3,     d23
+    add             r10,    r3,     r2,     lsl #3 ; 10*strd - 8+2
+    vmlsl.u8        q6,     d2,     d22
+    add             r10,    r10,    r2      ; 11*strd
+    vmlsl.u8        q6,     d4,     d24
+    vld1.u8         {d2},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q6,     d5,     d25
+    vhadd.s16       q5,     q5,     q15
+    vdup.16         q7,     r11
+    vmlal.u8        q6,     d6,     d26
+    vst1.8          {d8},   [r14],  r6      ;vst1_u8(pu1_dst,sto_res);
+    pld             [r10]                   ;11+ 0
+    vmlsl.u8        q6,     d7,     d27
+    pld             [r10,   r2]             ;11+ 1*strd
+    vmlal.u8        q6,     d16,    d28
+    pld             [r10,   r2,     lsl #1] ;11+ 2*strd
+    vmlsl.u8        q6,     d17,    d29
+    add             r10,    r10,    r2      ;12*strd
+    vqrshrun.s16    d10,    q5,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    pld             [r10,   r2,     lsl #1] ;11+ 3*strd
+    vmlal.u8        q7,     d4,     d23
+    vmlsl.u8        q7,     d3,     d22
+    subs            r7,     r7,     #4
+    vmlsl.u8        q7,     d5,     d24
+    vmlal.u8        q7,     d6,     d25
+    vld1.u8         {d3},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vhadd.s16       q6,     q6,     q15
+    vdup.16         q4,     r11
+    vmlal.u8        q7,     d7,     d26
+    vld1.u8         {d4},   [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d16,    d27
+    vld1.u8         {d5},   [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q7,     d17,    d28
+    vld1.u8         {d6},   [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlsl.u8        q7,     d18,    d29
+    vld1.u8         {d7},   [r3],   r2      ;src_tmp4 = vld1_u8(pu1_src_tmp);
+    vqrshrun.s16    d12,    q6,     #6
+    vst1.8          {d10},  [r14],  r6      ;vst1_u8(pu1_dst_tmp,sto_res);
+    bgt             main_loop_8             ;jumps to main_loop_8
+
+epilog
+    vmlal.u8        q4,     d1,     d23     ;mul_res1 = vmull_u8(src_tmp2,
+                                            ; coeffabs_1);
+    vmlsl.u8        q4,     d0,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_0);
+    vmlsl.u8        q4,     d2,     d24     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_2);
+    vmlal.u8        q4,     d3,     d25     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_3);
+    vhadd.s16       q7,     q7,     q15
+    vdup.16         q5,     r11
+    vmlal.u8        q4,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp1, coeffabs_4);
+    vmlsl.u8        q4,     d5,     d27     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp2, coeffabs_5);
+    vmlal.u8        q4,     d6,     d28     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; src_tmp3, coeffabs_6);
+    vmlsl.u8        q4,     d7,     d29     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; src_tmp4, coeffabs_7);
+    vst1.8          {d12},  [r14],  r6
+    vqrshrun.s16    d14,    q7,     #6
+    vld1.u8         {d16},  [r3],   r2      ;src_tmp1 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q5,     d2,     d23     ;mul_res2 = vmull_u8(src_tmp3,
+                                            ; coeffabs_1);
+    vmlsl.u8        q5,     d1,     d22     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_0);
+    vmlsl.u8        q5,     d3,     d24     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_2);
+    vmlal.u8        q5,     d4,     d25     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_3);
+    vhadd.s16       q4,     q4,     q15
+    vdup.16         q6,     r11
+    vmlal.u8        q5,     d5,     d26     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp2, coeffabs_4);
+    vmlsl.u8        q5,     d6,     d27     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp3, coeffabs_5);
+    vmlal.u8        q5,     d7,     d28     ;mul_res2 = vmlal_u8(mul_res2,
+                                            ; src_tmp4, coeffabs_6);
+    vmlsl.u8        q5,     d16,    d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; src_tmp1, coeffabs_7);
+    vst1.8          {d14},  [r14],  r6
+    vqrshrun.s16    d8,     q4,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vld1.u8         {d17},  [r3],   r2      ;src_tmp2 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q6,     d3,     d23
+    vmlsl.u8        q6,     d2,     d22
+    vmlsl.u8        q6,     d4,     d24
+    vmlal.u8        q6,     d5,     d25
+    vhadd.s16       q5,     q5,     q15
+    vdup.16         q7,     r11
+    vmlal.u8        q6,     d6,     d26
+    vmlsl.u8        q6,     d7,     d27
+    vmlal.u8        q6,     d16,    d28
+    vmlsl.u8        q6,     d17,    d29
+    add             r14,    r1,     r6
+    vst1.8          {d8},   [r1]!           ;vst1_u8(pu1_dst,sto_res);
+    vqrshrun.s16    d10,    q5,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vld1.u8         {d18},  [r3],   r2      ;src_tmp3 = vld1_u8(pu1_src_tmp);
+    vmlal.u8        q7,     d4,     d23
+    vmlsl.u8        q7,     d3,     d22
+    vmlsl.u8        q7,     d5,     d24
+    vmlal.u8        q7,     d6,     d25
+    vhadd.s16       q6,     q6,     q15
+    vmlal.u8        q7,     d7,     d26
+    vmlsl.u8        q7,     d16,    d27
+    vmlal.u8        q7,     d17,    d28
+    vmlsl.u8        q7,     d18,    d29
+    vst1.8          {d10},  [r14],  r6      ;vst1_u8(pu1_dst_tmp,sto_res);
+    vqrshrun.s16    d12,    q6,     #6
+
+epilog_end
+    vst1.8          {d12},  [r14],  r6
+    vhadd.s16       q7,     q7,     q15
+    vqrshrun.s16    d14,    q7,     #6
+    vst1.8          {d14},  [r14],  r6
+
+end_loops
+    tst             r5,     #7
+    ldr             r1,     [sp],   #4
+    ldr             r0,     [sp],   #4
+    vpopeq          {d8  -  d15}
+    ldmfdeq         sp!,    {r4  -  r12,    r15} ;reload the registers from sp
+    mov             r5,     #4
+    add             r0,     r0,     #8
+    add             r1,     r1,     #8
+    mov             r7,     #16
+
+core_loop_wd_4
+    rsb             r9,     r5,     r6,     lsl #2 ;r6->dst_strd    r5    ->wd
+    rsb             r8,     r5,     r2,     lsl #2 ;r2->src_strd
+    vmov.i8         d4,     #0
+
+outer_loop_wd_4
+    subs            r12,    r5,     #0
+    ble             end_inner_loop_wd_4     ;outer loop jump
+
+inner_loop_wd_4
+    add             r3,     r0,     r2
+    vld1.u32        {d4[1]},[r3],   r2      ;src_tmp1 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp1, 1);
+    subs            r12,    r12,    #4
+    vdup.u32        d5,     d4[1]           ;src_tmp2 = vdup_lane_u32(src_tmp1,
+                                            ; 1);
+    vld1.u32        {d5[1]},[r3],   r2      ;src_tmp2 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp2, 1);
+    vld1.u32        {d4[0]},[r0]            ;src_tmp1 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp1, 0);
+    vdup.16         q0,     r11
+    vmlal.u8        q0,     d5,     d23     ;mul_res1 =
+                                            ; vmull_u8(vreinterpret_u8_u32(src_tmp2), coeffabs_1);
+    vdup.u32        d6,     d5[1]           ;src_tmp3 = vdup_lane_u32(src_tmp2,
+                                            ; 1);
+    add             r0,     r0,     #4
+    vld1.u32        {d6[1]},[r3],   r2      ;src_tmp3 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp3, 1);
+    vmlsl.u8        q0,     d4,     d22     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp1), coeffabs_0);
+    vdup.u32        d7,     d6[1]           ;src_tmp4 = vdup_lane_u32(src_tmp3,
+                                            ; 1);
+    vld1.u32        {d7[1]},[r3],   r2      ;src_tmp4 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp4, 1);
+    vmlsl.u8        q0,     d6,     d24     ;mul_res1 = vmlsl_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp3), coeffabs_2);
+    vdup.16         q4,     r11
+    vmlal.u8        q4,     d7,     d23
+    vdup.u32        d4,     d7[1]           ;src_tmp1 = vdup_lane_u32(src_tmp4,
+                                            ; 1);
+    vmull.u8        q1,     d7,     d25     ;mul_res2 =
+                                            ; vmull_u8(vreinterpret_u8_u32(src_tmp4), coeffabs_3);
+    vld1.u32        {d4[1]},[r3],   r2      ;src_tmp1 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp1, 1);
+    vmlsl.u8        q4,     d6,     d22
+    vmlal.u8        q0,     d4,     d26     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp1), coeffabs_4);
+    vdup.u32        d5,     d4[1]           ;src_tmp2 = vdup_lane_u32(src_tmp1,
+                                            ; 1);
+    vmlsl.u8        q4,     d4,     d24
+    vld1.u32        {d5[1]},[r3],   r2      ;src_tmp2 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp2, 1);
+    vmlsl.u8        q1,     d5,     d27     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; vreinterpret_u8_u32(src_tmp2), coeffabs_5);
+    vdup.u32        d6,     d5[1]           ;src_tmp3 = vdup_lane_u32(src_tmp2,
+                                            ; 1);
+    vmlal.u8        q4,     d5,     d25
+    vld1.u32        {d6[1]},[r3],   r2      ;src_tmp3 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp3, 1);
+    vmlal.u8        q0,     d6,     d28     ;mul_res1 = vmlal_u8(mul_res1,
+                                            ; vreinterpret_u8_u32(src_tmp3), coeffabs_6);
+    vdup.u32        d7,     d6[1]           ;src_tmp4 = vdup_lane_u32(src_tmp3,
+                                            ; 1);
+    vmlal.u8        q4,     d6,     d26
+    vld1.u32        {d7[1]},[r3],   r2      ;src_tmp4 = vld1_lane_u32((uint32_t
+                                            ; *)pu1_src_tmp, src_tmp4, 1);
+    vmlsl.u8        q1,     d7,     d29     ;mul_res2 = vmlsl_u8(mul_res2,
+                                            ; vreinterpret_u8_u32(src_tmp4), coeffabs_7);
+    vdup.u32        d4,     d7[1]
+    vadd.i16        q0,     q0,     q1      ;mul_res1 = vaddq_u16(mul_res1,
+                                            ; mul_res2);
+    vmlsl.u8        q4,     d7,     d27
+    vld1.u32        {d4[1]},[r3],   r2
+    vmlal.u8        q4,     d4,     d28
+    vdup.u32        d5,     d4[1]
+    vhadd.s16       q0,     q0,     q15
+    vqrshrun.s16    d0,     q0,     #6      ;sto_res = vqmovun_s16(sto_res_tmp);
+    vld1.u32        {d5[1]},[r3]
+    add             r3,     r1,     r6
+    vst1.32         {d0[0]},[r1]            ;vst1_lane_u32((uint32_t *)pu1_dst,
+                                            ; vreinterpret_u32_u8(sto_res), 0);
+    vmlsl.u8        q4,     d5,     d29
+    vst1.32         {d0[1]},[r3],   r6      ;vst1_lane_u32((uint32_t
+                                            ; *)pu1_dst_tmp, vreinterpret_u32_u8(sto_res), 1);
+    vhadd.s16       q4,     q4,     q15
+    vqrshrun.s16    d8,     q4,     #6
+    vst1.32         {d8[0]},[r3],   r6
+    add             r1,     r1,     #4
+    vst1.32         {d8[1]},[r3]
+    bgt             inner_loop_wd_4
+
+end_inner_loop_wd_4
+    subs            r7,     r7,     #4
+    add             r1,     r1,     r9
+    add             r0,     r0,     r8
+    bgt             outer_loop_wd_4
+
+    vpop            {d8  -  d15}
+    ldmfd           sp!,    {r4  -  r12,    r15} ;reload the registers from sp
+
+    ENDP
+
+    END
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve_neon.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve_neon.c
index 2bf2d89..830f317 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve_neon.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/arm/vpx_convolve_neon.c
@@ -24,7 +24,8 @@
   uint8_t temp[64 * 72];
 
   // Account for the vertical phase needing 3 lines prior and 4 lines post
-  const int intermediate_height = h + 7;
+  // (+ 1 to make it divisible by 4).
+  const int intermediate_height = h + 8;
 
   assert(y_step_q4 == 16);
   assert(x_step_q4 == 16);
@@ -48,7 +49,7 @@
                             int x_step_q4, int y0_q4, int y_step_q4, int w,
                             int h) {
   uint8_t temp[64 * 72];
-  const int intermediate_height = h + 7;
+  const int intermediate_height = h + 8;
 
   assert(y_step_q4 == 16);
   assert(x_step_q4 == 16);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/avg.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/avg.c
index a7ac6d9..1c45e8a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/avg.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/avg.c
@@ -32,6 +32,166 @@
   return (sum + 8) >> 4;
 }
 
+#if CONFIG_VP9_HIGHBITDEPTH
+// src_diff: 13 bit, dynamic range [-4095, 4095]
+// coeff: 16 bit
+static void hadamard_highbd_col8_first_pass(const int16_t *src_diff,
+                                            ptrdiff_t src_stride,
+                                            int16_t *coeff) {
+  int16_t b0 = src_diff[0 * src_stride] + src_diff[1 * src_stride];
+  int16_t b1 = src_diff[0 * src_stride] - src_diff[1 * src_stride];
+  int16_t b2 = src_diff[2 * src_stride] + src_diff[3 * src_stride];
+  int16_t b3 = src_diff[2 * src_stride] - src_diff[3 * src_stride];
+  int16_t b4 = src_diff[4 * src_stride] + src_diff[5 * src_stride];
+  int16_t b5 = src_diff[4 * src_stride] - src_diff[5 * src_stride];
+  int16_t b6 = src_diff[6 * src_stride] + src_diff[7 * src_stride];
+  int16_t b7 = src_diff[6 * src_stride] - src_diff[7 * src_stride];
+
+  int16_t c0 = b0 + b2;
+  int16_t c1 = b1 + b3;
+  int16_t c2 = b0 - b2;
+  int16_t c3 = b1 - b3;
+  int16_t c4 = b4 + b6;
+  int16_t c5 = b5 + b7;
+  int16_t c6 = b4 - b6;
+  int16_t c7 = b5 - b7;
+
+  coeff[0] = c0 + c4;
+  coeff[7] = c1 + c5;
+  coeff[3] = c2 + c6;
+  coeff[4] = c3 + c7;
+  coeff[2] = c0 - c4;
+  coeff[6] = c1 - c5;
+  coeff[1] = c2 - c6;
+  coeff[5] = c3 - c7;
+}
+
+// src_diff: 16 bit, dynamic range [-32760, 32760]
+// coeff: 19 bit
+static void hadamard_highbd_col8_second_pass(const int16_t *src_diff,
+                                             ptrdiff_t src_stride,
+                                             int32_t *coeff) {
+  int32_t b0 = src_diff[0 * src_stride] + src_diff[1 * src_stride];
+  int32_t b1 = src_diff[0 * src_stride] - src_diff[1 * src_stride];
+  int32_t b2 = src_diff[2 * src_stride] + src_diff[3 * src_stride];
+  int32_t b3 = src_diff[2 * src_stride] - src_diff[3 * src_stride];
+  int32_t b4 = src_diff[4 * src_stride] + src_diff[5 * src_stride];
+  int32_t b5 = src_diff[4 * src_stride] - src_diff[5 * src_stride];
+  int32_t b6 = src_diff[6 * src_stride] + src_diff[7 * src_stride];
+  int32_t b7 = src_diff[6 * src_stride] - src_diff[7 * src_stride];
+
+  int32_t c0 = b0 + b2;
+  int32_t c1 = b1 + b3;
+  int32_t c2 = b0 - b2;
+  int32_t c3 = b1 - b3;
+  int32_t c4 = b4 + b6;
+  int32_t c5 = b5 + b7;
+  int32_t c6 = b4 - b6;
+  int32_t c7 = b5 - b7;
+
+  coeff[0] = c0 + c4;
+  coeff[7] = c1 + c5;
+  coeff[3] = c2 + c6;
+  coeff[4] = c3 + c7;
+  coeff[2] = c0 - c4;
+  coeff[6] = c1 - c5;
+  coeff[1] = c2 - c6;
+  coeff[5] = c3 - c7;
+}
+
+// The order of the output coeff of the hadamard is not important. For
+// optimization purposes the final transpose may be skipped.
+void vpx_highbd_hadamard_8x8_c(const int16_t *src_diff, ptrdiff_t src_stride,
+                               tran_low_t *coeff) {
+  int idx;
+  int16_t buffer[64];
+  int32_t buffer2[64];
+  int16_t *tmp_buf = &buffer[0];
+  for (idx = 0; idx < 8; ++idx) {
+    // src_diff: 13 bit
+    // buffer: 16 bit, dynamic range [-32760, 32760]
+    hadamard_highbd_col8_first_pass(src_diff, src_stride, tmp_buf);
+    tmp_buf += 8;
+    ++src_diff;
+  }
+
+  tmp_buf = &buffer[0];
+  for (idx = 0; idx < 8; ++idx) {
+    // buffer: 16 bit
+    // buffer2: 19 bit, dynamic range [-262080, 262080]
+    hadamard_highbd_col8_second_pass(tmp_buf, 8, buffer2 + 8 * idx);
+    ++tmp_buf;
+  }
+
+  for (idx = 0; idx < 64; ++idx) coeff[idx] = (tran_low_t)buffer2[idx];
+}
+
+// In place 16x16 2D Hadamard transform
+void vpx_highbd_hadamard_16x16_c(const int16_t *src_diff, ptrdiff_t src_stride,
+                                 tran_low_t *coeff) {
+  int idx;
+  for (idx = 0; idx < 4; ++idx) {
+    // src_diff: 13 bit, dynamic range [-4095, 4095]
+    const int16_t *src_ptr =
+        src_diff + (idx >> 1) * 8 * src_stride + (idx & 0x01) * 8;
+    vpx_highbd_hadamard_8x8_c(src_ptr, src_stride, coeff + idx * 64);
+  }
+
+  // coeff: 19 bit, dynamic range [-262080, 262080]
+  for (idx = 0; idx < 64; ++idx) {
+    tran_low_t a0 = coeff[0];
+    tran_low_t a1 = coeff[64];
+    tran_low_t a2 = coeff[128];
+    tran_low_t a3 = coeff[192];
+
+    tran_low_t b0 = (a0 + a1) >> 1;
+    tran_low_t b1 = (a0 - a1) >> 1;
+    tran_low_t b2 = (a2 + a3) >> 1;
+    tran_low_t b3 = (a2 - a3) >> 1;
+
+    // new coeff dynamic range: 20 bit
+    coeff[0] = b0 + b2;
+    coeff[64] = b1 + b3;
+    coeff[128] = b0 - b2;
+    coeff[192] = b1 - b3;
+
+    ++coeff;
+  }
+}
+
+void vpx_highbd_hadamard_32x32_c(const int16_t *src_diff, ptrdiff_t src_stride,
+                                 tran_low_t *coeff) {
+  int idx;
+  for (idx = 0; idx < 4; ++idx) {
+    // src_diff: 13 bit, dynamic range [-4095, 4095]
+    const int16_t *src_ptr =
+        src_diff + (idx >> 1) * 16 * src_stride + (idx & 0x01) * 16;
+    vpx_highbd_hadamard_16x16_c(src_ptr, src_stride, coeff + idx * 256);
+  }
+
+  // coeff: 20 bit
+  for (idx = 0; idx < 256; ++idx) {
+    tran_low_t a0 = coeff[0];
+    tran_low_t a1 = coeff[256];
+    tran_low_t a2 = coeff[512];
+    tran_low_t a3 = coeff[768];
+
+    tran_low_t b0 = (a0 + a1) >> 2;
+    tran_low_t b1 = (a0 - a1) >> 2;
+    tran_low_t b2 = (a2 + a3) >> 2;
+    tran_low_t b3 = (a2 - a3) >> 2;
+
+    // new coeff dynamic range: 20 bit
+    coeff[0] = b0 + b2;
+    coeff[256] = b1 + b3;
+    coeff[512] = b0 - b2;
+    coeff[768] = b1 - b3;
+
+    ++coeff;
+  }
+}
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
 // src_diff: first pass, 9 bit, dynamic range [-255, 255]
 //           second pass, 12 bit, dynamic range [-2040, 2040]
 static void hadamard_col8(const int16_t *src_diff, ptrdiff_t src_stride,
@@ -123,6 +283,50 @@
   }
 }
 
+void vpx_hadamard_32x32_c(const int16_t *src_diff, ptrdiff_t src_stride,
+                          tran_low_t *coeff) {
+  int idx;
+  for (idx = 0; idx < 4; ++idx) {
+    // src_diff: 9 bit, dynamic range [-255, 255]
+    const int16_t *src_ptr =
+        src_diff + (idx >> 1) * 16 * src_stride + (idx & 0x01) * 16;
+    vpx_hadamard_16x16_c(src_ptr, src_stride, coeff + idx * 256);
+  }
+
+  // coeff: 15 bit, dynamic range [-16320, 16320]
+  for (idx = 0; idx < 256; ++idx) {
+    tran_low_t a0 = coeff[0];
+    tran_low_t a1 = coeff[256];
+    tran_low_t a2 = coeff[512];
+    tran_low_t a3 = coeff[768];
+
+    tran_low_t b0 = (a0 + a1) >> 2;  // (a0 + a1): 16 bit, [-32640, 32640]
+    tran_low_t b1 = (a0 - a1) >> 2;  // b0-b3: 15 bit, dynamic range
+    tran_low_t b2 = (a2 + a3) >> 2;  // [-16320, 16320]
+    tran_low_t b3 = (a2 - a3) >> 2;
+
+    coeff[0] = b0 + b2;  // 16 bit, [-32640, 32640]
+    coeff[256] = b1 + b3;
+    coeff[512] = b0 - b2;
+    coeff[768] = b1 - b3;
+
+    ++coeff;
+  }
+}
+
+#if CONFIG_VP9_HIGHBITDEPTH
+// coeff: dynamic range 20 bit.
+// length: value range {16, 64, 256, 1024}.
+int vpx_highbd_satd_c(const tran_low_t *coeff, int length) {
+  int i;
+  int satd = 0;
+  for (i = 0; i < length; ++i) satd += abs(coeff[i]);
+
+  // satd: 30 bits
+  return satd;
+}
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
 // coeff: 16 bits, dynamic range [-32640, 32640].
 // length: value range {16, 64, 256, 1024}.
 int vpx_satd_c(const tran_low_t *coeff, int length) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitreader.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitreader.h
index 16272ae..a5927ea 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitreader.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitreader.h
@@ -8,10 +8,11 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_BITREADER_H_
-#define VPX_DSP_BITREADER_H_
+#ifndef VPX_VPX_DSP_BITREADER_H_
+#define VPX_VPX_DSP_BITREADER_H_
 
 #include <stddef.h>
+#include <stdio.h>
 #include <limits.h>
 
 #include "./vpx_config.h"
@@ -19,6 +20,9 @@
 #include "vpx/vp8dx.h"
 #include "vpx/vpx_integer.h"
 #include "vpx_dsp/prob.h"
+#if CONFIG_BITSTREAM_DEBUG
+#include "vpx_util/vpx_debug_util.h"
+#endif  // CONFIG_BITSTREAM_DEBUG
 
 #ifdef __cplusplus
 extern "C" {
@@ -94,7 +98,7 @@
   }
 
   {
-    const int shift = vpx_norm[range];
+    const unsigned char shift = vpx_norm[(unsigned char)range];
     range <<= shift;
     value <<= shift;
     count -= shift;
@@ -103,6 +107,31 @@
   r->count = count;
   r->range = range;
 
+#if CONFIG_BITSTREAM_DEBUG
+  {
+    const int queue_r = bitstream_queue_get_read();
+    const int frame_idx = bitstream_queue_get_frame_read();
+    int ref_result, ref_prob;
+    bitstream_queue_pop(&ref_result, &ref_prob);
+    if ((int)bit != ref_result) {
+      fprintf(stderr,
+              "\n *** [bit] result error, frame_idx_r %d bit %d ref_result %d "
+              "queue_r %d\n",
+              frame_idx, bit, ref_result, queue_r);
+
+      assert(0);
+    }
+    if (prob != ref_prob) {
+      fprintf(stderr,
+              "\n *** [bit] prob error, frame_idx_r %d prob %d ref_prob %d "
+              "queue_r %d\n",
+              frame_idx, prob, ref_prob, queue_r);
+
+      assert(0);
+    }
+  }
+#endif
+
   return bit;
 }
 
@@ -131,4 +160,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_BITREADER_H_
+#endif  // VPX_VPX_DSP_BITREADER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitreader_buffer.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitreader_buffer.c
index 3e16bfa..f59f1f7 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitreader_buffer.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitreader_buffer.c
@@ -23,7 +23,7 @@
     rb->bit_offset = off + 1;
     return bit;
   } else {
-    rb->error_handler(rb->error_handler_data);
+    if (rb->error_handler != NULL) rb->error_handler(rb->error_handler_data);
     return 0;
   }
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitreader_buffer.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitreader_buffer.h
index 8a48a95..b27703a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitreader_buffer.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitreader_buffer.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_BITREADER_BUFFER_H_
-#define VPX_DSP_BITREADER_BUFFER_H_
+#ifndef VPX_VPX_DSP_BITREADER_BUFFER_H_
+#define VPX_VPX_DSP_BITREADER_BUFFER_H_
 
 #include <limits.h>
 
@@ -44,4 +44,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_BITREADER_BUFFER_H_
+#endif  // VPX_VPX_DSP_BITREADER_BUFFER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitwriter.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitwriter.c
index 81e28b3..5b41aa54 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitwriter.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitwriter.c
@@ -12,6 +12,10 @@
 
 #include "./bitwriter.h"
 
+#if CONFIG_BITSTREAM_DEBUG
+#include "vpx_util/vpx_debug_util.h"
+#endif
+
 void vpx_start_encode(vpx_writer *br, uint8_t *source) {
   br->lowvalue = 0;
   br->range = 255;
@@ -24,8 +28,15 @@
 void vpx_stop_encode(vpx_writer *br) {
   int i;
 
+#if CONFIG_BITSTREAM_DEBUG
+  bitstream_queue_set_skip_write(1);
+#endif
   for (i = 0; i < 32; i++) vpx_write_bit(br, 0);
 
   // Ensure there's no ambigous collision with any index marker bytes
   if ((br->buffer[br->pos - 1] & 0xe0) == 0xc0) br->buffer[br->pos++] = 0;
+
+#if CONFIG_BITSTREAM_DEBUG
+  bitstream_queue_set_skip_write(0);
+#endif
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitwriter.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitwriter.h
index e63092a..04084af 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitwriter.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitwriter.h
@@ -8,12 +8,17 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_BITWRITER_H_
-#define VPX_DSP_BITWRITER_H_
+#ifndef VPX_VPX_DSP_BITWRITER_H_
+#define VPX_VPX_DSP_BITWRITER_H_
+
+#include <stdio.h>
 
 #include "vpx_ports/mem.h"
 
 #include "vpx_dsp/prob.h"
+#if CONFIG_BITSTREAM_DEBUG
+#include "vpx_util/vpx_debug_util.h"
+#endif  // CONFIG_BITSTREAM_DEBUG
 
 #ifdef __cplusplus
 extern "C" {
@@ -27,8 +32,8 @@
   uint8_t *buffer;
 } vpx_writer;
 
-void vpx_start_encode(vpx_writer *bc, uint8_t *buffer);
-void vpx_stop_encode(vpx_writer *bc);
+void vpx_start_encode(vpx_writer *br, uint8_t *source);
+void vpx_stop_encode(vpx_writer *br);
 
 static INLINE void vpx_write(vpx_writer *br, int bit, int probability) {
   unsigned int split;
@@ -37,6 +42,21 @@
   unsigned int lowvalue = br->lowvalue;
   int shift;
 
+#if CONFIG_BITSTREAM_DEBUG
+  /*
+  int queue_r = 0;
+  int frame_idx_r = 0;
+  int queue_w = bitstream_queue_get_write();
+  int frame_idx_w = bitstream_queue_get_frame_write();
+  if (frame_idx_w == frame_idx_r && queue_w == queue_r) {
+    fprintf(stderr, "\n *** bitstream queue at frame_idx_w %d queue_w %d\n",
+            frame_idx_w, queue_w);
+    assert(0);
+  }
+  */
+  bitstream_queue_push(bit, probability);
+#endif
+
   split = 1 + (((range - 1) * probability) >> 8);
 
   range = split;
@@ -65,7 +85,7 @@
       br->buffer[x] += 1;
     }
 
-    br->buffer[br->pos++] = (lowvalue >> (24 - offset));
+    br->buffer[br->pos++] = (lowvalue >> (24 - offset)) & 0xff;
     lowvalue <<= offset;
     shift = count;
     lowvalue &= 0xffffff;
@@ -94,4 +114,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_BITWRITER_H_
+#endif  // VPX_VPX_DSP_BITWRITER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitwriter_buffer.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitwriter_buffer.h
index a123a2f..3662cb6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitwriter_buffer.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/bitwriter_buffer.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_BITWRITER_BUFFER_H_
-#define VPX_DSP_BITWRITER_BUFFER_H_
+#ifndef VPX_VPX_DSP_BITWRITER_BUFFER_H_
+#define VPX_VPX_DSP_BITWRITER_BUFFER_H_
 
 #include "vpx/vpx_integer.h"
 
@@ -35,4 +35,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_BITWRITER_BUFFER_H_
+#endif  // VPX_VPX_DSP_BITWRITER_BUFFER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/deblock.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/deblock.c
index 94acbb3..455b73b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/deblock.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/deblock.c
@@ -39,11 +39,10 @@
   9,  10, 13,
 };
 
-void vpx_post_proc_down_and_across_mb_row_c(unsigned char *src_ptr,
-                                            unsigned char *dst_ptr,
-                                            int src_pixels_per_line,
-                                            int dst_pixels_per_line, int cols,
-                                            unsigned char *f, int size) {
+void vpx_post_proc_down_and_across_mb_row_c(unsigned char *src,
+                                            unsigned char *dst, int src_pitch,
+                                            int dst_pitch, int cols,
+                                            unsigned char *flimits, int size) {
   unsigned char *p_src, *p_dst;
   int row;
   int col;
@@ -55,19 +54,21 @@
 
   for (row = 0; row < size; row++) {
     /* post_proc_down for one row */
-    p_src = src_ptr;
-    p_dst = dst_ptr;
+    p_src = src;
+    p_dst = dst;
 
     for (col = 0; col < cols; col++) {
-      unsigned char p_above2 = p_src[col - 2 * src_pixels_per_line];
-      unsigned char p_above1 = p_src[col - src_pixels_per_line];
-      unsigned char p_below1 = p_src[col + src_pixels_per_line];
-      unsigned char p_below2 = p_src[col + 2 * src_pixels_per_line];
+      unsigned char p_above2 = p_src[col - 2 * src_pitch];
+      unsigned char p_above1 = p_src[col - src_pitch];
+      unsigned char p_below1 = p_src[col + src_pitch];
+      unsigned char p_below2 = p_src[col + 2 * src_pitch];
 
       v = p_src[col];
 
-      if ((abs(v - p_above2) < f[col]) && (abs(v - p_above1) < f[col]) &&
-          (abs(v - p_below1) < f[col]) && (abs(v - p_below2) < f[col])) {
+      if ((abs(v - p_above2) < flimits[col]) &&
+          (abs(v - p_above1) < flimits[col]) &&
+          (abs(v - p_below1) < flimits[col]) &&
+          (abs(v - p_below2) < flimits[col])) {
         unsigned char k1, k2, k3;
         k1 = (p_above2 + p_above1 + 1) >> 1;
         k2 = (p_below2 + p_below1 + 1) >> 1;
@@ -79,8 +80,8 @@
     }
 
     /* now post_proc_across */
-    p_src = dst_ptr;
-    p_dst = dst_ptr;
+    p_src = dst;
+    p_dst = dst;
 
     p_src[-2] = p_src[-1] = p_src[0];
     p_src[cols] = p_src[cols + 1] = p_src[cols - 1];
@@ -88,10 +89,10 @@
     for (col = 0; col < cols; col++) {
       v = p_src[col];
 
-      if ((abs(v - p_src[col - 2]) < f[col]) &&
-          (abs(v - p_src[col - 1]) < f[col]) &&
-          (abs(v - p_src[col + 1]) < f[col]) &&
-          (abs(v - p_src[col + 2]) < f[col])) {
+      if ((abs(v - p_src[col - 2]) < flimits[col]) &&
+          (abs(v - p_src[col - 1]) < flimits[col]) &&
+          (abs(v - p_src[col + 1]) < flimits[col]) &&
+          (abs(v - p_src[col + 2]) < flimits[col])) {
         unsigned char k1, k2, k3;
         k1 = (p_src[col - 2] + p_src[col - 1] + 1) >> 1;
         k2 = (p_src[col + 2] + p_src[col + 1] + 1) >> 1;
@@ -109,8 +110,8 @@
     p_dst[col - 1] = d[(col - 1) & 3];
 
     /* next row */
-    src_ptr += src_pixels_per_line;
-    dst_ptr += dst_pixels_per_line;
+    src += src_pitch;
+    dst += dst_pitch;
   }
 }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/fastssim.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/fastssim.c
index 0469071..6ab6f55 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/fastssim.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/fastssim.c
@@ -128,10 +128,12 @@
       int i1;
       i0 = 2 * i;
       i1 = FS_MINI(i0 + 1, w2);
-      dst1[j * w + i] = src1[j0offs + i0] + src1[j0offs + i1] +
-                        src1[j1offs + i0] + src1[j1offs + i1];
-      dst2[j * w + i] = src2[j0offs + i0] + src2[j0offs + i1] +
-                        src2[j1offs + i0] + src2[j1offs + i1];
+      dst1[j * w + i] =
+          (uint32_t)((int64_t)src1[j0offs + i0] + src1[j0offs + i1] +
+                     src1[j1offs + i0] + src1[j1offs + i1]);
+      dst2[j * w + i] =
+          (uint32_t)((int64_t)src2[j0offs + i0] + src2[j0offs + i1] +
+                     src2[j1offs + i0] + src2[j1offs + i1]);
     }
   }
 }
@@ -220,12 +222,12 @@
   ssim = _ctx->level[_l].ssim;
   c1 = (double)(ssim_c1 * 4096 * (1 << 4 * _l));
   for (j = 0; j < h; j++) {
-    unsigned mux;
-    unsigned muy;
+    int64_t mux;
+    int64_t muy;
     int i0;
     int i1;
-    mux = 5 * col_sums_x[0];
-    muy = 5 * col_sums_y[0];
+    mux = (int64_t)5 * col_sums_x[0];
+    muy = (int64_t)5 * col_sums_y[0];
     for (i = 1; i < 4; i++) {
       i1 = FS_MINI(i, w - 1);
       mux += col_sums_x[i1];
@@ -237,8 +239,8 @@
       if (i + 1 < w) {
         i0 = FS_MAXI(0, i - 4);
         i1 = FS_MINI(i + 4, w - 1);
-        mux += col_sums_x[i1] - col_sums_x[i0];
-        muy += col_sums_x[i1] - col_sums_x[i0];
+        mux += (int)col_sums_x[i1] - (int)col_sums_x[i0];
+        muy += (int)col_sums_x[i1] - (int)col_sums_x[i0];
       }
     }
     if (j + 1 < h) {
@@ -246,8 +248,10 @@
       for (i = 0; i < w; i++) col_sums_x[i] -= im1[j0offs + i];
       for (i = 0; i < w; i++) col_sums_y[i] -= im2[j0offs + i];
       j1offs = FS_MINI(j + 4, h - 1) * w;
-      for (i = 0; i < w; i++) col_sums_x[i] += im1[j1offs + i];
-      for (i = 0; i < w; i++) col_sums_y[i] += im2[j1offs + i];
+      for (i = 0; i < w; i++)
+        col_sums_x[i] = (uint32_t)((int64_t)col_sums_x[i] + im1[j1offs + i]);
+      for (i = 0; i < w; i++)
+        col_sums_y[i] = (uint32_t)((int64_t)col_sums_y[i] + im2[j1offs + i]);
     }
   }
 }
@@ -343,18 +347,18 @@
   for (j = 0; j < h + 4; j++) {
     if (j < h - 1) {
       for (i = 0; i < w - 1; i++) {
-        unsigned g1;
-        unsigned g2;
-        unsigned gx;
-        unsigned gy;
-        g1 = abs((int)im1[(j + 1) * w + i + 1] - (int)im1[j * w + i]);
-        g2 = abs((int)im1[(j + 1) * w + i] - (int)im1[j * w + i + 1]);
+        int64_t g1;
+        int64_t g2;
+        int64_t gx;
+        int64_t gy;
+        g1 = labs((int64_t)im1[(j + 1) * w + i + 1] - (int64_t)im1[j * w + i]);
+        g2 = labs((int64_t)im1[(j + 1) * w + i] - (int64_t)im1[j * w + i + 1]);
         gx = 4 * FS_MAXI(g1, g2) + FS_MINI(g1, g2);
-        g1 = abs((int)im2[(j + 1) * w + i + 1] - (int)im2[j * w + i]);
-        g2 = abs((int)im2[(j + 1) * w + i] - (int)im2[j * w + i + 1]);
-        gy = 4 * FS_MAXI(g1, g2) + FS_MINI(g1, g2);
-        gx_buf[(j & 7) * stride + i + 4] = gx;
-        gy_buf[(j & 7) * stride + i + 4] = gy;
+        g1 = labs((int64_t)im2[(j + 1) * w + i + 1] - (int64_t)im2[j * w + i]);
+        g2 = labs((int64_t)im2[(j + 1) * w + i] - (int64_t)im2[j * w + i + 1]);
+        gy = ((int64_t)4 * FS_MAXI(g1, g2) + FS_MINI(g1, g2));
+        gx_buf[(j & 7) * stride + i + 4] = (uint32_t)gx;
+        gy_buf[(j & 7) * stride + i + 4] = (uint32_t)gy;
       }
     } else {
       memset(gx_buf + (j & 7) * stride, 0, stride * sizeof(*gx_buf));
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/fwd_txfm.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/fwd_txfm.c
index 6dcb3ba..ef66de0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/fwd_txfm.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/fwd_txfm.c
@@ -87,11 +87,11 @@
   output[0] = sum * 2;
 }
 
-void vpx_fdct8x8_c(const int16_t *input, tran_low_t *final_output, int stride) {
+void vpx_fdct8x8_c(const int16_t *input, tran_low_t *output, int stride) {
   int i, j;
   tran_low_t intermediate[64];
   int pass;
-  tran_low_t *output = intermediate;
+  tran_low_t *out = intermediate;
   const tran_low_t *in = NULL;
 
   // Transform columns
@@ -133,10 +133,10 @@
       t1 = (x0 - x1) * cospi_16_64;
       t2 = x2 * cospi_24_64 + x3 * cospi_8_64;
       t3 = -x2 * cospi_8_64 + x3 * cospi_24_64;
-      output[0] = (tran_low_t)fdct_round_shift(t0);
-      output[2] = (tran_low_t)fdct_round_shift(t2);
-      output[4] = (tran_low_t)fdct_round_shift(t1);
-      output[6] = (tran_low_t)fdct_round_shift(t3);
+      out[0] = (tran_low_t)fdct_round_shift(t0);
+      out[2] = (tran_low_t)fdct_round_shift(t2);
+      out[4] = (tran_low_t)fdct_round_shift(t1);
+      out[6] = (tran_low_t)fdct_round_shift(t3);
 
       // Stage 2
       t0 = (s6 - s5) * cospi_16_64;
@@ -155,19 +155,19 @@
       t1 = x1 * cospi_12_64 + x2 * cospi_20_64;
       t2 = x2 * cospi_12_64 + x1 * -cospi_20_64;
       t3 = x3 * cospi_28_64 + x0 * -cospi_4_64;
-      output[1] = (tran_low_t)fdct_round_shift(t0);
-      output[3] = (tran_low_t)fdct_round_shift(t2);
-      output[5] = (tran_low_t)fdct_round_shift(t1);
-      output[7] = (tran_low_t)fdct_round_shift(t3);
-      output += 8;
+      out[1] = (tran_low_t)fdct_round_shift(t0);
+      out[3] = (tran_low_t)fdct_round_shift(t2);
+      out[5] = (tran_low_t)fdct_round_shift(t1);
+      out[7] = (tran_low_t)fdct_round_shift(t3);
+      out += 8;
     }
     in = intermediate;
-    output = final_output;
+    out = output;
   }
 
   // Rows
   for (i = 0; i < 8; ++i) {
-    for (j = 0; j < 8; ++j) final_output[j + i * 8] /= 2;
+    for (j = 0; j < 8; ++j) output[j + i * 8] /= 2;
   }
 }
 
@@ -705,9 +705,9 @@
   output[31] = dct_32_round(step[31] * cospi_31_64 + step[16] * -cospi_1_64);
 }
 
-void vpx_fdct32x32_c(const int16_t *input, tran_low_t *out, int stride) {
+void vpx_fdct32x32_c(const int16_t *input, tran_low_t *output, int stride) {
   int i, j;
-  tran_high_t output[32 * 32];
+  tran_high_t out[32 * 32];
 
   // Columns
   for (i = 0; i < 32; ++i) {
@@ -715,16 +715,16 @@
     for (j = 0; j < 32; ++j) temp_in[j] = input[j * stride + i] * 4;
     vpx_fdct32(temp_in, temp_out, 0);
     for (j = 0; j < 32; ++j)
-      output[j * 32 + i] = (temp_out[j] + 1 + (temp_out[j] > 0)) >> 2;
+      out[j * 32 + i] = (temp_out[j] + 1 + (temp_out[j] > 0)) >> 2;
   }
 
   // Rows
   for (i = 0; i < 32; ++i) {
     tran_high_t temp_in[32], temp_out[32];
-    for (j = 0; j < 32; ++j) temp_in[j] = output[j + i * 32];
+    for (j = 0; j < 32; ++j) temp_in[j] = out[j + i * 32];
     vpx_fdct32(temp_in, temp_out, 0);
     for (j = 0; j < 32; ++j)
-      out[j + i * 32] =
+      output[j + i * 32] =
           (tran_low_t)((temp_out[j] + 1 + (temp_out[j] < 0)) >> 2);
   }
 }
@@ -732,9 +732,9 @@
 // Note that although we use dct_32_round in dct32 computation flow,
 // this 2d fdct32x32 for rate-distortion optimization loop is operating
 // within 16 bits precision.
-void vpx_fdct32x32_rd_c(const int16_t *input, tran_low_t *out, int stride) {
+void vpx_fdct32x32_rd_c(const int16_t *input, tran_low_t *output, int stride) {
   int i, j;
-  tran_high_t output[32 * 32];
+  tran_high_t out[32 * 32];
 
   // Columns
   for (i = 0; i < 32; ++i) {
@@ -745,15 +745,15 @@
       // TODO(cd): see quality impact of only doing
       //           output[j * 32 + i] = (temp_out[j] + 1) >> 2;
       //           PS: also change code in vpx_dsp/x86/vpx_dct_sse2.c
-      output[j * 32 + i] = (temp_out[j] + 1 + (temp_out[j] > 0)) >> 2;
+      out[j * 32 + i] = (temp_out[j] + 1 + (temp_out[j] > 0)) >> 2;
   }
 
   // Rows
   for (i = 0; i < 32; ++i) {
     tran_high_t temp_in[32], temp_out[32];
-    for (j = 0; j < 32; ++j) temp_in[j] = output[j + i * 32];
+    for (j = 0; j < 32; ++j) temp_in[j] = out[j + i * 32];
     vpx_fdct32(temp_in, temp_out, 1);
-    for (j = 0; j < 32; ++j) out[j + i * 32] = (tran_low_t)temp_out[j];
+    for (j = 0; j < 32; ++j) output[j + i * 32] = (tran_low_t)temp_out[j];
   }
 }
 
@@ -772,14 +772,14 @@
   vpx_fdct4x4_c(input, output, stride);
 }
 
-void vpx_highbd_fdct8x8_c(const int16_t *input, tran_low_t *final_output,
+void vpx_highbd_fdct8x8_c(const int16_t *input, tran_low_t *output,
                           int stride) {
-  vpx_fdct8x8_c(input, final_output, stride);
+  vpx_fdct8x8_c(input, output, stride);
 }
 
-void vpx_highbd_fdct8x8_1_c(const int16_t *input, tran_low_t *final_output,
+void vpx_highbd_fdct8x8_1_c(const int16_t *input, tran_low_t *output,
                             int stride) {
-  vpx_fdct8x8_1_c(input, final_output, stride);
+  vpx_fdct8x8_1_c(input, output, stride);
 }
 
 void vpx_highbd_fdct16x16_c(const int16_t *input, tran_low_t *output,
@@ -792,17 +792,18 @@
   vpx_fdct16x16_1_c(input, output, stride);
 }
 
-void vpx_highbd_fdct32x32_c(const int16_t *input, tran_low_t *out, int stride) {
-  vpx_fdct32x32_c(input, out, stride);
+void vpx_highbd_fdct32x32_c(const int16_t *input, tran_low_t *output,
+                            int stride) {
+  vpx_fdct32x32_c(input, output, stride);
 }
 
-void vpx_highbd_fdct32x32_rd_c(const int16_t *input, tran_low_t *out,
+void vpx_highbd_fdct32x32_rd_c(const int16_t *input, tran_low_t *output,
                                int stride) {
-  vpx_fdct32x32_rd_c(input, out, stride);
+  vpx_fdct32x32_rd_c(input, output, stride);
 }
 
-void vpx_highbd_fdct32x32_1_c(const int16_t *input, tran_low_t *out,
+void vpx_highbd_fdct32x32_1_c(const int16_t *input, tran_low_t *output,
                               int stride) {
-  vpx_fdct32x32_1_c(input, out, stride);
+  vpx_fdct32x32_1_c(input, output, stride);
 }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/fwd_txfm.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/fwd_txfm.h
index 29e139c..a43c8ea 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/fwd_txfm.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/fwd_txfm.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_FWD_TXFM_H_
-#define VPX_DSP_FWD_TXFM_H_
+#ifndef VPX_VPX_DSP_FWD_TXFM_H_
+#define VPX_VPX_DSP_FWD_TXFM_H_
 
 #include "vpx_dsp/txfm_common.h"
 
@@ -22,4 +22,4 @@
 }
 
 void vpx_fdct32(const tran_high_t *input, tran_high_t *output, int round);
-#endif  // VPX_DSP_FWD_TXFM_H_
+#endif  // VPX_VPX_DSP_FWD_TXFM_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/inv_txfm.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/inv_txfm.c
index 0194aa1..97655b3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/inv_txfm.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/inv_txfm.c
@@ -67,11 +67,11 @@
   }
 }
 
-void vpx_iwht4x4_1_add_c(const tran_low_t *in, uint8_t *dest, int stride) {
+void vpx_iwht4x4_1_add_c(const tran_low_t *input, uint8_t *dest, int stride) {
   int i;
   tran_high_t a1, e1;
   tran_low_t tmp[4];
-  const tran_low_t *ip = in;
+  const tran_low_t *ip = input;
   tran_low_t *op = tmp;
 
   a1 = ip[0] >> UNIT_QUANT_SHIFT;
@@ -701,22 +701,22 @@
   step2[15] = step1[15];
 
   // stage 7
-  output[0] = WRAPLOW(step2[0] + step2[15]);
-  output[1] = WRAPLOW(step2[1] + step2[14]);
-  output[2] = WRAPLOW(step2[2] + step2[13]);
-  output[3] = WRAPLOW(step2[3] + step2[12]);
-  output[4] = WRAPLOW(step2[4] + step2[11]);
-  output[5] = WRAPLOW(step2[5] + step2[10]);
-  output[6] = WRAPLOW(step2[6] + step2[9]);
-  output[7] = WRAPLOW(step2[7] + step2[8]);
-  output[8] = WRAPLOW(step2[7] - step2[8]);
-  output[9] = WRAPLOW(step2[6] - step2[9]);
-  output[10] = WRAPLOW(step2[5] - step2[10]);
-  output[11] = WRAPLOW(step2[4] - step2[11]);
-  output[12] = WRAPLOW(step2[3] - step2[12]);
-  output[13] = WRAPLOW(step2[2] - step2[13]);
-  output[14] = WRAPLOW(step2[1] - step2[14]);
-  output[15] = WRAPLOW(step2[0] - step2[15]);
+  output[0] = (tran_low_t)WRAPLOW(step2[0] + step2[15]);
+  output[1] = (tran_low_t)WRAPLOW(step2[1] + step2[14]);
+  output[2] = (tran_low_t)WRAPLOW(step2[2] + step2[13]);
+  output[3] = (tran_low_t)WRAPLOW(step2[3] + step2[12]);
+  output[4] = (tran_low_t)WRAPLOW(step2[4] + step2[11]);
+  output[5] = (tran_low_t)WRAPLOW(step2[5] + step2[10]);
+  output[6] = (tran_low_t)WRAPLOW(step2[6] + step2[9]);
+  output[7] = (tran_low_t)WRAPLOW(step2[7] + step2[8]);
+  output[8] = (tran_low_t)WRAPLOW(step2[7] - step2[8]);
+  output[9] = (tran_low_t)WRAPLOW(step2[6] - step2[9]);
+  output[10] = (tran_low_t)WRAPLOW(step2[5] - step2[10]);
+  output[11] = (tran_low_t)WRAPLOW(step2[4] - step2[11]);
+  output[12] = (tran_low_t)WRAPLOW(step2[3] - step2[12]);
+  output[13] = (tran_low_t)WRAPLOW(step2[2] - step2[13]);
+  output[14] = (tran_low_t)WRAPLOW(step2[1] - step2[14]);
+  output[15] = (tran_low_t)WRAPLOW(step2[0] - step2[15]);
 }
 
 void vpx_idct16x16_256_add_c(const tran_low_t *input, uint8_t *dest,
@@ -1346,12 +1346,12 @@
   }
 }
 
-void vpx_highbd_iwht4x4_1_add_c(const tran_low_t *in, uint16_t *dest,
+void vpx_highbd_iwht4x4_1_add_c(const tran_low_t *input, uint16_t *dest,
                                 int stride, int bd) {
   int i;
   tran_high_t a1, e1;
   tran_low_t tmp[4];
-  const tran_low_t *ip = in;
+  const tran_low_t *ip = input;
   tran_low_t *op = tmp;
   (void)bd;
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/inv_txfm.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/inv_txfm.h
index e86fd39..6eedbea 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/inv_txfm.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/inv_txfm.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_INV_TXFM_H_
-#define VPX_DSP_INV_TXFM_H_
+#ifndef VPX_VPX_DSP_INV_TXFM_H_
+#define VPX_VPX_DSP_INV_TXFM_H_
 
 #include <assert.h>
 
@@ -122,4 +122,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_INV_TXFM_H_
+#endif  // VPX_VPX_DSP_INV_TXFM_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/loopfilter.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/loopfilter.c
index 9866ea3..9956028 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/loopfilter.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/loopfilter.c
@@ -81,11 +81,11 @@
                            uint8_t *op0, uint8_t *oq0, uint8_t *oq1) {
   int8_t filter1, filter2;
 
-  const int8_t ps1 = (int8_t)*op1 ^ 0x80;
-  const int8_t ps0 = (int8_t)*op0 ^ 0x80;
-  const int8_t qs0 = (int8_t)*oq0 ^ 0x80;
-  const int8_t qs1 = (int8_t)*oq1 ^ 0x80;
-  const uint8_t hev = hev_mask(thresh, *op1, *op0, *oq0, *oq1);
+  const int8_t ps1 = (int8_t)(*op1 ^ 0x80);
+  const int8_t ps0 = (int8_t)(*op0 ^ 0x80);
+  const int8_t qs0 = (int8_t)(*oq0 ^ 0x80);
+  const int8_t qs1 = (int8_t)(*oq1 ^ 0x80);
+  const int8_t hev = hev_mask(thresh, *op1, *op0, *oq0, *oq1);
 
   // add outer taps if we have high edge variance
   int8_t filter = signed_char_clamp(ps1 - qs1) & hev;
@@ -99,39 +99,40 @@
   filter1 = signed_char_clamp(filter + 4) >> 3;
   filter2 = signed_char_clamp(filter + 3) >> 3;
 
-  *oq0 = signed_char_clamp(qs0 - filter1) ^ 0x80;
-  *op0 = signed_char_clamp(ps0 + filter2) ^ 0x80;
+  *oq0 = (uint8_t)(signed_char_clamp(qs0 - filter1) ^ 0x80);
+  *op0 = (uint8_t)(signed_char_clamp(ps0 + filter2) ^ 0x80);
 
   // outer tap adjustments
   filter = ROUND_POWER_OF_TWO(filter1, 1) & ~hev;
 
-  *oq1 = signed_char_clamp(qs1 - filter) ^ 0x80;
-  *op1 = signed_char_clamp(ps1 + filter) ^ 0x80;
+  *oq1 = (uint8_t)(signed_char_clamp(qs1 - filter) ^ 0x80);
+  *op1 = (uint8_t)(signed_char_clamp(ps1 + filter) ^ 0x80);
 }
 
-void vpx_lpf_horizontal_4_c(uint8_t *s, int p /* pitch */,
-                            const uint8_t *blimit, const uint8_t *limit,
-                            const uint8_t *thresh) {
+void vpx_lpf_horizontal_4_c(uint8_t *s, int pitch, const uint8_t *blimit,
+                            const uint8_t *limit, const uint8_t *thresh) {
   int i;
 
   // loop filter designed to work using chars so that we can make maximum use
   // of 8 bit simd instructions.
   for (i = 0; i < 8; ++i) {
-    const uint8_t p3 = s[-4 * p], p2 = s[-3 * p], p1 = s[-2 * p], p0 = s[-p];
-    const uint8_t q0 = s[0 * p], q1 = s[1 * p], q2 = s[2 * p], q3 = s[3 * p];
+    const uint8_t p3 = s[-4 * pitch], p2 = s[-3 * pitch], p1 = s[-2 * pitch],
+                  p0 = s[-pitch];
+    const uint8_t q0 = s[0 * pitch], q1 = s[1 * pitch], q2 = s[2 * pitch],
+                  q3 = s[3 * pitch];
     const int8_t mask =
         filter_mask(*limit, *blimit, p3, p2, p1, p0, q0, q1, q2, q3);
-    filter4(mask, *thresh, s - 2 * p, s - 1 * p, s, s + 1 * p);
+    filter4(mask, *thresh, s - 2 * pitch, s - 1 * pitch, s, s + 1 * pitch);
     ++s;
   }
 }
 
-void vpx_lpf_horizontal_4_dual_c(uint8_t *s, int p, const uint8_t *blimit0,
+void vpx_lpf_horizontal_4_dual_c(uint8_t *s, int pitch, const uint8_t *blimit0,
                                  const uint8_t *limit0, const uint8_t *thresh0,
                                  const uint8_t *blimit1, const uint8_t *limit1,
                                  const uint8_t *thresh1) {
-  vpx_lpf_horizontal_4_c(s, p, blimit0, limit0, thresh0);
-  vpx_lpf_horizontal_4_c(s + 8, p, blimit1, limit1, thresh1);
+  vpx_lpf_horizontal_4_c(s, pitch, blimit0, limit0, thresh0);
+  vpx_lpf_horizontal_4_c(s + 8, pitch, blimit1, limit1, thresh1);
 }
 
 void vpx_lpf_vertical_4_c(uint8_t *s, int pitch, const uint8_t *blimit,
@@ -178,31 +179,33 @@
   }
 }
 
-void vpx_lpf_horizontal_8_c(uint8_t *s, int p, const uint8_t *blimit,
+void vpx_lpf_horizontal_8_c(uint8_t *s, int pitch, const uint8_t *blimit,
                             const uint8_t *limit, const uint8_t *thresh) {
   int i;
 
   // loop filter designed to work using chars so that we can make maximum use
   // of 8 bit simd instructions.
   for (i = 0; i < 8; ++i) {
-    const uint8_t p3 = s[-4 * p], p2 = s[-3 * p], p1 = s[-2 * p], p0 = s[-p];
-    const uint8_t q0 = s[0 * p], q1 = s[1 * p], q2 = s[2 * p], q3 = s[3 * p];
+    const uint8_t p3 = s[-4 * pitch], p2 = s[-3 * pitch], p1 = s[-2 * pitch],
+                  p0 = s[-pitch];
+    const uint8_t q0 = s[0 * pitch], q1 = s[1 * pitch], q2 = s[2 * pitch],
+                  q3 = s[3 * pitch];
 
     const int8_t mask =
         filter_mask(*limit, *blimit, p3, p2, p1, p0, q0, q1, q2, q3);
     const int8_t flat = flat_mask4(1, p3, p2, p1, p0, q0, q1, q2, q3);
-    filter8(mask, *thresh, flat, s - 4 * p, s - 3 * p, s - 2 * p, s - 1 * p, s,
-            s + 1 * p, s + 2 * p, s + 3 * p);
+    filter8(mask, *thresh, flat, s - 4 * pitch, s - 3 * pitch, s - 2 * pitch,
+            s - 1 * pitch, s, s + 1 * pitch, s + 2 * pitch, s + 3 * pitch);
     ++s;
   }
 }
 
-void vpx_lpf_horizontal_8_dual_c(uint8_t *s, int p, const uint8_t *blimit0,
+void vpx_lpf_horizontal_8_dual_c(uint8_t *s, int pitch, const uint8_t *blimit0,
                                  const uint8_t *limit0, const uint8_t *thresh0,
                                  const uint8_t *blimit1, const uint8_t *limit1,
                                  const uint8_t *thresh1) {
-  vpx_lpf_horizontal_8_c(s, p, blimit0, limit0, thresh0);
-  vpx_lpf_horizontal_8_c(s + 8, p, blimit1, limit1, thresh1);
+  vpx_lpf_horizontal_8_c(s, pitch, blimit0, limit0, thresh0);
+  vpx_lpf_horizontal_8_c(s + 8, pitch, blimit1, limit1, thresh1);
 }
 
 void vpx_lpf_vertical_8_c(uint8_t *s, int pitch, const uint8_t *blimit,
@@ -283,7 +286,8 @@
   }
 }
 
-static void mb_lpf_horizontal_edge_w(uint8_t *s, int p, const uint8_t *blimit,
+static void mb_lpf_horizontal_edge_w(uint8_t *s, int pitch,
+                                     const uint8_t *blimit,
                                      const uint8_t *limit,
                                      const uint8_t *thresh, int count) {
   int i;
@@ -291,34 +295,37 @@
   // loop filter designed to work using chars so that we can make maximum use
   // of 8 bit simd instructions.
   for (i = 0; i < 8 * count; ++i) {
-    const uint8_t p3 = s[-4 * p], p2 = s[-3 * p], p1 = s[-2 * p], p0 = s[-p];
-    const uint8_t q0 = s[0 * p], q1 = s[1 * p], q2 = s[2 * p], q3 = s[3 * p];
+    const uint8_t p3 = s[-4 * pitch], p2 = s[-3 * pitch], p1 = s[-2 * pitch],
+                  p0 = s[-pitch];
+    const uint8_t q0 = s[0 * pitch], q1 = s[1 * pitch], q2 = s[2 * pitch],
+                  q3 = s[3 * pitch];
     const int8_t mask =
         filter_mask(*limit, *blimit, p3, p2, p1, p0, q0, q1, q2, q3);
     const int8_t flat = flat_mask4(1, p3, p2, p1, p0, q0, q1, q2, q3);
-    const int8_t flat2 =
-        flat_mask5(1, s[-8 * p], s[-7 * p], s[-6 * p], s[-5 * p], p0, q0,
-                   s[4 * p], s[5 * p], s[6 * p], s[7 * p]);
+    const int8_t flat2 = flat_mask5(
+        1, s[-8 * pitch], s[-7 * pitch], s[-6 * pitch], s[-5 * pitch], p0, q0,
+        s[4 * pitch], s[5 * pitch], s[6 * pitch], s[7 * pitch]);
 
-    filter16(mask, *thresh, flat, flat2, s - 8 * p, s - 7 * p, s - 6 * p,
-             s - 5 * p, s - 4 * p, s - 3 * p, s - 2 * p, s - 1 * p, s,
-             s + 1 * p, s + 2 * p, s + 3 * p, s + 4 * p, s + 5 * p, s + 6 * p,
-             s + 7 * p);
+    filter16(mask, *thresh, flat, flat2, s - 8 * pitch, s - 7 * pitch,
+             s - 6 * pitch, s - 5 * pitch, s - 4 * pitch, s - 3 * pitch,
+             s - 2 * pitch, s - 1 * pitch, s, s + 1 * pitch, s + 2 * pitch,
+             s + 3 * pitch, s + 4 * pitch, s + 5 * pitch, s + 6 * pitch,
+             s + 7 * pitch);
     ++s;
   }
 }
 
-void vpx_lpf_horizontal_16_c(uint8_t *s, int p, const uint8_t *blimit,
+void vpx_lpf_horizontal_16_c(uint8_t *s, int pitch, const uint8_t *blimit,
                              const uint8_t *limit, const uint8_t *thresh) {
-  mb_lpf_horizontal_edge_w(s, p, blimit, limit, thresh, 1);
+  mb_lpf_horizontal_edge_w(s, pitch, blimit, limit, thresh, 1);
 }
 
-void vpx_lpf_horizontal_16_dual_c(uint8_t *s, int p, const uint8_t *blimit,
+void vpx_lpf_horizontal_16_dual_c(uint8_t *s, int pitch, const uint8_t *blimit,
                                   const uint8_t *limit, const uint8_t *thresh) {
-  mb_lpf_horizontal_edge_w(s, p, blimit, limit, thresh, 2);
+  mb_lpf_horizontal_edge_w(s, pitch, blimit, limit, thresh, 2);
 }
 
-static void mb_lpf_vertical_edge_w(uint8_t *s, int p, const uint8_t *blimit,
+static void mb_lpf_vertical_edge_w(uint8_t *s, int pitch, const uint8_t *blimit,
                                    const uint8_t *limit, const uint8_t *thresh,
                                    int count) {
   int i;
@@ -335,18 +342,18 @@
     filter16(mask, *thresh, flat, flat2, s - 8, s - 7, s - 6, s - 5, s - 4,
              s - 3, s - 2, s - 1, s, s + 1, s + 2, s + 3, s + 4, s + 5, s + 6,
              s + 7);
-    s += p;
+    s += pitch;
   }
 }
 
-void vpx_lpf_vertical_16_c(uint8_t *s, int p, const uint8_t *blimit,
+void vpx_lpf_vertical_16_c(uint8_t *s, int pitch, const uint8_t *blimit,
                            const uint8_t *limit, const uint8_t *thresh) {
-  mb_lpf_vertical_edge_w(s, p, blimit, limit, thresh, 8);
+  mb_lpf_vertical_edge_w(s, pitch, blimit, limit, thresh, 8);
 }
 
-void vpx_lpf_vertical_16_dual_c(uint8_t *s, int p, const uint8_t *blimit,
+void vpx_lpf_vertical_16_dual_c(uint8_t *s, int pitch, const uint8_t *blimit,
                                 const uint8_t *limit, const uint8_t *thresh) {
-  mb_lpf_vertical_edge_w(s, p, blimit, limit, thresh, 16);
+  mb_lpf_vertical_edge_w(s, pitch, blimit, limit, thresh, 16);
 }
 
 #if CONFIG_VP9_HIGHBITDEPTH
@@ -416,7 +423,7 @@
   const int16_t ps0 = (int16_t)*op0 - (0x80 << shift);
   const int16_t qs0 = (int16_t)*oq0 - (0x80 << shift);
   const int16_t qs1 = (int16_t)*oq1 - (0x80 << shift);
-  const uint16_t hev = highbd_hev_mask(thresh, *op1, *op0, *oq0, *oq1, bd);
+  const int16_t hev = highbd_hev_mask(thresh, *op1, *op0, *oq0, *oq1, bd);
 
   // Add outer taps if we have high edge variance.
   int16_t filter = signed_char_clamp_high(ps1 - qs1, bd) & hev;
@@ -440,7 +447,7 @@
   *op1 = signed_char_clamp_high(ps1 + filter, bd) + (0x80 << shift);
 }
 
-void vpx_highbd_lpf_horizontal_4_c(uint16_t *s, int p /* pitch */,
+void vpx_highbd_lpf_horizontal_4_c(uint16_t *s, int pitch,
                                    const uint8_t *blimit, const uint8_t *limit,
                                    const uint8_t *thresh, int bd) {
   int i;
@@ -448,27 +455,28 @@
   // loop filter designed to work using chars so that we can make maximum use
   // of 8 bit simd instructions.
   for (i = 0; i < 8; ++i) {
-    const uint16_t p3 = s[-4 * p];
-    const uint16_t p2 = s[-3 * p];
-    const uint16_t p1 = s[-2 * p];
-    const uint16_t p0 = s[-p];
-    const uint16_t q0 = s[0 * p];
-    const uint16_t q1 = s[1 * p];
-    const uint16_t q2 = s[2 * p];
-    const uint16_t q3 = s[3 * p];
+    const uint16_t p3 = s[-4 * pitch];
+    const uint16_t p2 = s[-3 * pitch];
+    const uint16_t p1 = s[-2 * pitch];
+    const uint16_t p0 = s[-pitch];
+    const uint16_t q0 = s[0 * pitch];
+    const uint16_t q1 = s[1 * pitch];
+    const uint16_t q2 = s[2 * pitch];
+    const uint16_t q3 = s[3 * pitch];
     const int8_t mask =
         highbd_filter_mask(*limit, *blimit, p3, p2, p1, p0, q0, q1, q2, q3, bd);
-    highbd_filter4(mask, *thresh, s - 2 * p, s - 1 * p, s, s + 1 * p, bd);
+    highbd_filter4(mask, *thresh, s - 2 * pitch, s - 1 * pitch, s,
+                   s + 1 * pitch, bd);
     ++s;
   }
 }
 
 void vpx_highbd_lpf_horizontal_4_dual_c(
-    uint16_t *s, int p, const uint8_t *blimit0, const uint8_t *limit0,
+    uint16_t *s, int pitch, const uint8_t *blimit0, const uint8_t *limit0,
     const uint8_t *thresh0, const uint8_t *blimit1, const uint8_t *limit1,
     const uint8_t *thresh1, int bd) {
-  vpx_highbd_lpf_horizontal_4_c(s, p, blimit0, limit0, thresh0, bd);
-  vpx_highbd_lpf_horizontal_4_c(s + 8, p, blimit1, limit1, thresh1, bd);
+  vpx_highbd_lpf_horizontal_4_c(s, pitch, blimit0, limit0, thresh0, bd);
+  vpx_highbd_lpf_horizontal_4_c(s + 8, pitch, blimit1, limit1, thresh1, bd);
 }
 
 void vpx_highbd_lpf_vertical_4_c(uint16_t *s, int pitch, const uint8_t *blimit,
@@ -517,33 +525,36 @@
   }
 }
 
-void vpx_highbd_lpf_horizontal_8_c(uint16_t *s, int p, const uint8_t *blimit,
-                                   const uint8_t *limit, const uint8_t *thresh,
-                                   int bd) {
+void vpx_highbd_lpf_horizontal_8_c(uint16_t *s, int pitch,
+                                   const uint8_t *blimit, const uint8_t *limit,
+                                   const uint8_t *thresh, int bd) {
   int i;
 
   // loop filter designed to work using chars so that we can make maximum use
   // of 8 bit simd instructions.
   for (i = 0; i < 8; ++i) {
-    const uint16_t p3 = s[-4 * p], p2 = s[-3 * p], p1 = s[-2 * p], p0 = s[-p];
-    const uint16_t q0 = s[0 * p], q1 = s[1 * p], q2 = s[2 * p], q3 = s[3 * p];
+    const uint16_t p3 = s[-4 * pitch], p2 = s[-3 * pitch], p1 = s[-2 * pitch],
+                   p0 = s[-pitch];
+    const uint16_t q0 = s[0 * pitch], q1 = s[1 * pitch], q2 = s[2 * pitch],
+                   q3 = s[3 * pitch];
 
     const int8_t mask =
         highbd_filter_mask(*limit, *blimit, p3, p2, p1, p0, q0, q1, q2, q3, bd);
     const int8_t flat =
         highbd_flat_mask4(1, p3, p2, p1, p0, q0, q1, q2, q3, bd);
-    highbd_filter8(mask, *thresh, flat, s - 4 * p, s - 3 * p, s - 2 * p,
-                   s - 1 * p, s, s + 1 * p, s + 2 * p, s + 3 * p, bd);
+    highbd_filter8(mask, *thresh, flat, s - 4 * pitch, s - 3 * pitch,
+                   s - 2 * pitch, s - 1 * pitch, s, s + 1 * pitch,
+                   s + 2 * pitch, s + 3 * pitch, bd);
     ++s;
   }
 }
 
 void vpx_highbd_lpf_horizontal_8_dual_c(
-    uint16_t *s, int p, const uint8_t *blimit0, const uint8_t *limit0,
+    uint16_t *s, int pitch, const uint8_t *blimit0, const uint8_t *limit0,
     const uint8_t *thresh0, const uint8_t *blimit1, const uint8_t *limit1,
     const uint8_t *thresh1, int bd) {
-  vpx_highbd_lpf_horizontal_8_c(s, p, blimit0, limit0, thresh0, bd);
-  vpx_highbd_lpf_horizontal_8_c(s + 8, p, blimit1, limit1, thresh1, bd);
+  vpx_highbd_lpf_horizontal_8_c(s, pitch, blimit0, limit0, thresh0, bd);
+  vpx_highbd_lpf_horizontal_8_c(s + 8, pitch, blimit1, limit1, thresh1, bd);
 }
 
 void vpx_highbd_lpf_vertical_8_c(uint16_t *s, int pitch, const uint8_t *blimit,
@@ -639,7 +650,7 @@
   }
 }
 
-static void highbd_mb_lpf_horizontal_edge_w(uint16_t *s, int p,
+static void highbd_mb_lpf_horizontal_edge_w(uint16_t *s, int pitch,
                                             const uint8_t *blimit,
                                             const uint8_t *limit,
                                             const uint8_t *thresh, int count,
@@ -649,44 +660,45 @@
   // loop filter designed to work using chars so that we can make maximum use
   // of 8 bit simd instructions.
   for (i = 0; i < 8 * count; ++i) {
-    const uint16_t p3 = s[-4 * p];
-    const uint16_t p2 = s[-3 * p];
-    const uint16_t p1 = s[-2 * p];
-    const uint16_t p0 = s[-p];
-    const uint16_t q0 = s[0 * p];
-    const uint16_t q1 = s[1 * p];
-    const uint16_t q2 = s[2 * p];
-    const uint16_t q3 = s[3 * p];
+    const uint16_t p3 = s[-4 * pitch];
+    const uint16_t p2 = s[-3 * pitch];
+    const uint16_t p1 = s[-2 * pitch];
+    const uint16_t p0 = s[-pitch];
+    const uint16_t q0 = s[0 * pitch];
+    const uint16_t q1 = s[1 * pitch];
+    const uint16_t q2 = s[2 * pitch];
+    const uint16_t q3 = s[3 * pitch];
     const int8_t mask =
         highbd_filter_mask(*limit, *blimit, p3, p2, p1, p0, q0, q1, q2, q3, bd);
     const int8_t flat =
         highbd_flat_mask4(1, p3, p2, p1, p0, q0, q1, q2, q3, bd);
-    const int8_t flat2 =
-        highbd_flat_mask5(1, s[-8 * p], s[-7 * p], s[-6 * p], s[-5 * p], p0, q0,
-                          s[4 * p], s[5 * p], s[6 * p], s[7 * p], bd);
+    const int8_t flat2 = highbd_flat_mask5(
+        1, s[-8 * pitch], s[-7 * pitch], s[-6 * pitch], s[-5 * pitch], p0, q0,
+        s[4 * pitch], s[5 * pitch], s[6 * pitch], s[7 * pitch], bd);
 
-    highbd_filter16(mask, *thresh, flat, flat2, s - 8 * p, s - 7 * p, s - 6 * p,
-                    s - 5 * p, s - 4 * p, s - 3 * p, s - 2 * p, s - 1 * p, s,
-                    s + 1 * p, s + 2 * p, s + 3 * p, s + 4 * p, s + 5 * p,
-                    s + 6 * p, s + 7 * p, bd);
+    highbd_filter16(mask, *thresh, flat, flat2, s - 8 * pitch, s - 7 * pitch,
+                    s - 6 * pitch, s - 5 * pitch, s - 4 * pitch, s - 3 * pitch,
+                    s - 2 * pitch, s - 1 * pitch, s, s + 1 * pitch,
+                    s + 2 * pitch, s + 3 * pitch, s + 4 * pitch, s + 5 * pitch,
+                    s + 6 * pitch, s + 7 * pitch, bd);
     ++s;
   }
 }
 
-void vpx_highbd_lpf_horizontal_16_c(uint16_t *s, int p, const uint8_t *blimit,
-                                    const uint8_t *limit, const uint8_t *thresh,
-                                    int bd) {
-  highbd_mb_lpf_horizontal_edge_w(s, p, blimit, limit, thresh, 1, bd);
+void vpx_highbd_lpf_horizontal_16_c(uint16_t *s, int pitch,
+                                    const uint8_t *blimit, const uint8_t *limit,
+                                    const uint8_t *thresh, int bd) {
+  highbd_mb_lpf_horizontal_edge_w(s, pitch, blimit, limit, thresh, 1, bd);
 }
 
-void vpx_highbd_lpf_horizontal_16_dual_c(uint16_t *s, int p,
+void vpx_highbd_lpf_horizontal_16_dual_c(uint16_t *s, int pitch,
                                          const uint8_t *blimit,
                                          const uint8_t *limit,
                                          const uint8_t *thresh, int bd) {
-  highbd_mb_lpf_horizontal_edge_w(s, p, blimit, limit, thresh, 2, bd);
+  highbd_mb_lpf_horizontal_edge_w(s, pitch, blimit, limit, thresh, 2, bd);
 }
 
-static void highbd_mb_lpf_vertical_edge_w(uint16_t *s, int p,
+static void highbd_mb_lpf_vertical_edge_w(uint16_t *s, int pitch,
                                           const uint8_t *blimit,
                                           const uint8_t *limit,
                                           const uint8_t *thresh, int count,
@@ -712,20 +724,20 @@
     highbd_filter16(mask, *thresh, flat, flat2, s - 8, s - 7, s - 6, s - 5,
                     s - 4, s - 3, s - 2, s - 1, s, s + 1, s + 2, s + 3, s + 4,
                     s + 5, s + 6, s + 7, bd);
-    s += p;
+    s += pitch;
   }
 }
 
-void vpx_highbd_lpf_vertical_16_c(uint16_t *s, int p, const uint8_t *blimit,
+void vpx_highbd_lpf_vertical_16_c(uint16_t *s, int pitch, const uint8_t *blimit,
                                   const uint8_t *limit, const uint8_t *thresh,
                                   int bd) {
-  highbd_mb_lpf_vertical_edge_w(s, p, blimit, limit, thresh, 8, bd);
+  highbd_mb_lpf_vertical_edge_w(s, pitch, blimit, limit, thresh, 8, bd);
 }
 
-void vpx_highbd_lpf_vertical_16_dual_c(uint16_t *s, int p,
+void vpx_highbd_lpf_vertical_16_dual_c(uint16_t *s, int pitch,
                                        const uint8_t *blimit,
                                        const uint8_t *limit,
                                        const uint8_t *thresh, int bd) {
-  highbd_mb_lpf_vertical_edge_w(s, p, blimit, limit, thresh, 16, bd);
+  highbd_mb_lpf_vertical_edge_w(s, pitch, blimit, limit, thresh, 16, bd);
 }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/add_noise_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/add_noise_msa.c
index 43d2c11..9754141 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/add_noise_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/add_noise_msa.c
@@ -9,7 +9,9 @@
  */
 
 #include <stdlib.h>
-#include "./macros_msa.h"
+
+#include "./vpx_dsp_rtcd.h"
+#include "vpx_dsp/mips/macros_msa.h"
 
 void vpx_plane_add_noise_msa(uint8_t *start_ptr, const int8_t *noise,
                              int blackclamp, int whiteclamp, int width,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/avg_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/avg_msa.c
index d0ac7b8..3fd18de 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/avg_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/avg_msa.c
@@ -9,6 +9,7 @@
  */
 #include <stdlib.h>
 
+#include "./vpx_config.h"
 #include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/mips/macros_msa.h"
 
@@ -56,6 +57,7 @@
   return sum_out;
 }
 
+#if !CONFIG_VP9_HIGHBITDEPTH
 void vpx_hadamard_8x8_msa(const int16_t *src, ptrdiff_t src_stride,
                           int16_t *dst) {
   v8i16 src0, src1, src2, src3, src4, src5, src6, src7;
@@ -391,6 +393,7 @@
 
   return satd;
 }
+#endif  // !CONFIG_VP9_HIGHBITDEPTH
 
 void vpx_int_pro_row_msa(int16_t hbuf[16], const uint8_t *ref,
                          const int ref_stride, const int height) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/common_dspr2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/common_dspr2.h
index 0a42f5c..87a5bba 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/common_dspr2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/common_dspr2.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_COMMON_MIPS_DSPR2_H_
-#define VPX_COMMON_MIPS_DSPR2_H_
+#ifndef VPX_VPX_DSP_MIPS_COMMON_DSPR2_H_
+#define VPX_VPX_DSP_MIPS_COMMON_DSPR2_H_
 
 #include <assert.h>
 #include "./vpx_config.h"
@@ -45,4 +45,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_COMMON_MIPS_DSPR2_H_
+#endif  // VPX_VPX_DSP_MIPS_COMMON_DSPR2_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_avg_dspr2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_avg_dspr2.c
index d9c2bef..cc458c8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_avg_dspr2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_avg_dspr2.c
@@ -15,6 +15,7 @@
 #include "vpx_dsp/mips/convolve_common_dspr2.h"
 #include "vpx_dsp/vpx_convolve.h"
 #include "vpx_dsp/vpx_dsp_common.h"
+#include "vpx_dsp/vpx_filter.h"
 #include "vpx_ports/mem.h"
 
 #if HAVE_DSPR2
@@ -341,7 +342,7 @@
   assert(y_step_q4 == 16);
   assert(((const int32_t *)filter_y)[1] != 0x800000);
 
-  if (((const int32_t *)filter_y)[0] == 0) {
+  if (vpx_get_filter_taps(filter_y) == 2) {
     vpx_convolve2_avg_vert_dspr2(src, src_stride, dst, dst_stride, filter,
                                  x0_q4, x_step_q4, y0_q4, y_step_q4, w, h);
   } else {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_avg_horiz_dspr2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_avg_horiz_dspr2.c
index fb68ad8..7a9aa49 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_avg_horiz_dspr2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_avg_horiz_dspr2.c
@@ -15,6 +15,7 @@
 #include "vpx_dsp/mips/convolve_common_dspr2.h"
 #include "vpx_dsp/vpx_convolve.h"
 #include "vpx_dsp/vpx_dsp_common.h"
+#include "vpx_dsp/vpx_filter.h"
 #include "vpx_ports/mem.h"
 
 #if HAVE_DSPR2
@@ -945,7 +946,7 @@
   assert(x_step_q4 == 16);
   assert(((const int32_t *)filter_x)[1] != 0x800000);
 
-  if (((const int32_t *)filter_x)[0] == 0) {
+  if (vpx_get_filter_taps(filter_x) == 2) {
     vpx_convolve2_avg_horiz_dspr2(src, src_stride, dst, dst_stride, filter,
                                   x0_q4, x_step_q4, y0_q4, y_step_q4, w, h);
   } else {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_dspr2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_dspr2.c
index 89f0f41..1e7052f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_dspr2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_dspr2.c
@@ -1322,7 +1322,7 @@
   if (filter_x[3] == 0x80) {
     copy_horiz_transposed(src - src_stride * 3, src_stride, temp,
                           intermediate_height, w, intermediate_height);
-  } else if (((const int32_t *)filter_x)[0] == 0) {
+  } else if (vpx_get_filter_taps(filter_x) == 2) {
     vpx_convolve2_dspr2(src - src_stride * 3, src_stride, temp,
                         intermediate_height, filter_x, w, intermediate_height);
   } else {
@@ -1365,7 +1365,7 @@
   /* copy the src to dst */
   if (filter_y[3] == 0x80) {
     copy_horiz_transposed(temp + 3, intermediate_height, dst, dst_stride, h, w);
-  } else if (((const int32_t *)filter_y)[0] == 0) {
+  } else if (vpx_get_filter_taps(filter_y) == 2) {
     vpx_convolve2_dspr2(temp + 3, intermediate_height, dst, dst_stride,
                         filter_y, h, w);
   } else {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_horiz_dspr2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_horiz_dspr2.c
index 77e95c8..09d6f36 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_horiz_dspr2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_horiz_dspr2.c
@@ -825,7 +825,7 @@
   assert(x_step_q4 == 16);
   assert(((const int32_t *)filter_x)[1] != 0x800000);
 
-  if (((const int32_t *)filter_x)[0] == 0) {
+  if (vpx_get_filter_taps(filter_x) == 2) {
     vpx_convolve2_horiz_dspr2(src, src_stride, dst, dst_stride, filter, x0_q4,
                               x_step_q4, y0_q4, y_step_q4, w, h);
   } else {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_vert_dspr2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_vert_dspr2.c
index c329f71..fd977b5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_vert_dspr2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve8_vert_dspr2.c
@@ -325,7 +325,7 @@
   assert(y_step_q4 == 16);
   assert(((const int32_t *)filter_y)[1] != 0x800000);
 
-  if (((const int32_t *)filter_y)[0] == 0) {
+  if (vpx_get_filter_taps(filter_y) == 2) {
     vpx_convolve2_vert_dspr2(src, src_stride, dst, dst_stride, filter, x0_q4,
                              x_step_q4, y0_q4, y_step_q4, w, h);
   } else {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve_common_dspr2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve_common_dspr2.h
index 48e440d..14b65bc 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve_common_dspr2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/convolve_common_dspr2.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_MIPS_VPX_COMMON_DSPR2_H_
-#define VPX_DSP_MIPS_VPX_COMMON_DSPR2_H_
+#ifndef VPX_VPX_DSP_MIPS_CONVOLVE_COMMON_DSPR2_H_
+#define VPX_VPX_DSP_MIPS_CONVOLVE_COMMON_DSPR2_H_
 
 #include <assert.h>
 
@@ -55,4 +55,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_MIPS_VPX_COMMON_DSPR2_H_
+#endif  // VPX_VPX_DSP_MIPS_CONVOLVE_COMMON_DSPR2_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/deblock_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/deblock_msa.c
index 9ef0483..4e93ff5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/deblock_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/deblock_msa.c
@@ -10,7 +10,8 @@
 
 #include <stdlib.h>
 
-#include "./macros_msa.h"
+#include "./vpx_dsp_rtcd.h"
+#include "vpx_dsp/mips/macros_msa.h"
 
 extern const int16_t vpx_rv[];
 
@@ -508,11 +509,11 @@
   }
 }
 
-void vpx_mbpost_proc_across_ip_msa(uint8_t *src_ptr, int32_t pitch,
-                                   int32_t rows, int32_t cols, int32_t flimit) {
+void vpx_mbpost_proc_across_ip_msa(uint8_t *src, int32_t pitch, int32_t rows,
+                                   int32_t cols, int32_t flimit) {
   int32_t row, col, cnt;
-  uint8_t *src_dup = src_ptr;
-  v16u8 src0, src, tmp_orig;
+  uint8_t *src_dup = src;
+  v16u8 src0, src1, tmp_orig;
   v16u8 tmp = { 0 };
   v16i8 zero = { 0 };
   v8u16 sum_h, src_r_h, src_l_h;
@@ -531,13 +532,13 @@
     src_dup[cols + 16] = src_dup[cols - 1];
     tmp_orig = (v16u8)__msa_ldi_b(0);
     tmp_orig[15] = tmp[15];
-    src = LD_UB(src_dup - 8);
-    src[15] = 0;
-    ILVRL_B2_UH(zero, src, src_r_h, src_l_h);
+    src1 = LD_UB(src_dup - 8);
+    src1[15] = 0;
+    ILVRL_B2_UH(zero, src1, src_r_h, src_l_h);
     src_r_w = __msa_dotp_u_w(src_r_h, src_r_h);
     src_r_w += __msa_dotp_u_w(src_l_h, src_l_h);
     sum_sq = HADD_SW_S32(src_r_w) + 16;
-    sum_h = __msa_hadd_u_h(src, src);
+    sum_h = __msa_hadd_u_h(src1, src1);
     sum = HADD_UH_U32(sum_h);
     {
       v16u8 src7, src8, src_r, src_l;
@@ -566,8 +567,8 @@
           sum_l[cnt + 1] = sum_l[cnt] + sub_l[cnt + 1];
         }
         sum = sum_l[7];
-        src = LD_UB(src_dup + 16 * col);
-        ILVRL_B2_UH(zero, src, src_r_h, src_l_h);
+        src1 = LD_UB(src_dup + 16 * col);
+        ILVRL_B2_UH(zero, src1, src_r_h, src_l_h);
         src7 = (v16u8)((const8 + sum_r + (v8i16)src_r_h) >> 4);
         src8 = (v16u8)((const8 + sum_l + (v8i16)src_l_h) >> 4);
         tmp = (v16u8)__msa_pckev_b((v16i8)src8, (v16i8)src7);
@@ -613,7 +614,7 @@
         total3 = (total3 < flimit_vec);
         PCKEV_H2_SH(total1, total0, total3, total2, mask0, mask1);
         mask = __msa_pckev_b((v16i8)mask1, (v16i8)mask0);
-        tmp = __msa_bmz_v(tmp, src, (v16u8)mask);
+        tmp = __msa_bmz_v(tmp, src1, (v16u8)mask);
 
         if (col == 0) {
           uint64_t src_d;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/fwd_dct32x32_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/fwd_dct32x32_msa.c
index 06fdc95..36583e2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/fwd_dct32x32_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/fwd_dct32x32_msa.c
@@ -8,6 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/mips/fwd_txfm_msa.h"
 
 static void fdct8x32_1d_column_load_butterfly(const int16_t *input,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/fwd_txfm_msa.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/fwd_txfm_msa.h
index fd58922..c0be56b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/fwd_txfm_msa.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/fwd_txfm_msa.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_MIPS_FWD_TXFM_MSA_H_
-#define VPX_DSP_MIPS_FWD_TXFM_MSA_H_
+#ifndef VPX_VPX_DSP_MIPS_FWD_TXFM_MSA_H_
+#define VPX_VPX_DSP_MIPS_FWD_TXFM_MSA_H_
 
 #include "vpx_dsp/mips/txfm_macros_msa.h"
 #include "vpx_dsp/txfm_common.h"
@@ -361,4 +361,4 @@
 void fdct8x16_1d_column(const int16_t *input, int16_t *tmp_ptr,
                         int32_t src_stride);
 void fdct16x8_1d_row(int16_t *input, int16_t *output);
-#endif  // VPX_DSP_MIPS_FWD_TXFM_MSA_H_
+#endif  // VPX_VPX_DSP_MIPS_FWD_TXFM_MSA_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct16x16_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct16x16_msa.c
index 2a211c5..7ca61a2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct16x16_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct16x16_msa.c
@@ -8,6 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/mips/inv_txfm_msa.h"
 
 void vpx_idct16_1d_rows_msa(const int16_t *input, int16_t *output) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct32x32_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct32x32_msa.c
index 2ea6136..0539481 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct32x32_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct32x32_msa.c
@@ -8,6 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/mips/inv_txfm_msa.h"
 
 static void idct32x8_row_transpose_store(const int16_t *input,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct4x4_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct4x4_msa.c
index 0a85742..56ffec3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct4x4_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct4x4_msa.c
@@ -8,6 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/mips/inv_txfm_msa.h"
 
 void vpx_iwht4x4_16_add_msa(const int16_t *input, uint8_t *dst,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct8x8_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct8x8_msa.c
index 7f77d20..a383ff2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct8x8_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/idct8x8_msa.c
@@ -8,6 +8,7 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/mips/inv_txfm_msa.h"
 
 void vpx_idct8x8_64_add_msa(const int16_t *input, uint8_t *dst,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/inv_txfm_dspr2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/inv_txfm_dspr2.h
index 27881f0..cbea22f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/inv_txfm_dspr2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/inv_txfm_dspr2.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_MIPS_INV_TXFM_DSPR2_H_
-#define VPX_DSP_MIPS_INV_TXFM_DSPR2_H_
+#ifndef VPX_VPX_DSP_MIPS_INV_TXFM_DSPR2_H_
+#define VPX_VPX_DSP_MIPS_INV_TXFM_DSPR2_H_
 
 #include <assert.h>
 
@@ -25,7 +25,6 @@
 #if HAVE_DSPR2
 #define DCT_CONST_ROUND_SHIFT_TWICE_COSPI_16_64(input)                         \
   ({                                                                           \
-                                                                               \
     int32_t tmp, out;                                                          \
     int dct_cost_rounding = DCT_CONST_ROUNDING;                                \
     int in = input;                                                            \
@@ -73,4 +72,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_MIPS_INV_TXFM_DSPR2_H_
+#endif  // VPX_VPX_DSP_MIPS_INV_TXFM_DSPR2_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/inv_txfm_msa.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/inv_txfm_msa.h
index 1fe9b28..3b66249 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/inv_txfm_msa.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/inv_txfm_msa.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_MIPS_INV_TXFM_MSA_H_
-#define VPX_DSP_MIPS_INV_TXFM_MSA_H_
+#ifndef VPX_VPX_DSP_MIPS_INV_TXFM_MSA_H_
+#define VPX_VPX_DSP_MIPS_INV_TXFM_MSA_H_
 
 #include "vpx_dsp/mips/macros_msa.h"
 #include "vpx_dsp/mips/txfm_macros_msa.h"
@@ -408,4 +408,4 @@
 void vpx_iadst16_1d_columns_addblk_msa(int16_t *input, uint8_t *dst,
                                        int32_t dst_stride);
 void vpx_iadst16_1d_rows_msa(const int16_t *input, int16_t *output);
-#endif  // VPX_DSP_MIPS_INV_TXFM_MSA_H_
+#endif  // VPX_VPX_DSP_MIPS_INV_TXFM_MSA_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_filters_dspr2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_filters_dspr2.h
index 5b0c733..ec339be 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_filters_dspr2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_filters_dspr2.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_MIPS_DSPR2_VP9_LOOPFILTER_FILTERS_DSPR2_H_
-#define VP9_COMMON_MIPS_DSPR2_VP9_LOOPFILTER_FILTERS_DSPR2_H_
+#ifndef VPX_VPX_DSP_MIPS_LOOPFILTER_FILTERS_DSPR2_H_
+#define VPX_VPX_DSP_MIPS_LOOPFILTER_FILTERS_DSPR2_H_
 
 #include <stdlib.h>
 
@@ -731,4 +731,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_MIPS_DSPR2_VP9_LOOPFILTER_FILTERS_DSPR2_H_
+#endif  // VPX_VPX_DSP_MIPS_LOOPFILTER_FILTERS_DSPR2_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_macros_dspr2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_macros_dspr2.h
index 38ed0b2..9af0b423 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_macros_dspr2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_macros_dspr2.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_MIPS_DSPR2_VP9_LOOPFILTER_MACROS_DSPR2_H_
-#define VP9_COMMON_MIPS_DSPR2_VP9_LOOPFILTER_MACROS_DSPR2_H_
+#ifndef VPX_VPX_DSP_MIPS_LOOPFILTER_MACROS_DSPR2_H_
+#define VPX_VPX_DSP_MIPS_LOOPFILTER_MACROS_DSPR2_H_
 
 #include <stdlib.h>
 
@@ -432,4 +432,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_MIPS_DSPR2_VP9_LOOPFILTER_MACROS_DSPR2_H_
+#endif  // VPX_VPX_DSP_MIPS_LOOPFILTER_MACROS_DSPR2_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_masks_dspr2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_masks_dspr2.h
index ee11142..24c492b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_masks_dspr2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_masks_dspr2.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VP9_COMMON_MIPS_DSPR2_VP9_LOOPFILTER_MASKS_DSPR2_H_
-#define VP9_COMMON_MIPS_DSPR2_VP9_LOOPFILTER_MASKS_DSPR2_H_
+#ifndef VPX_VPX_DSP_MIPS_LOOPFILTER_MASKS_DSPR2_H_
+#define VPX_VPX_DSP_MIPS_LOOPFILTER_MASKS_DSPR2_H_
 
 #include <stdlib.h>
 
@@ -352,4 +352,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VP9_COMMON_MIPS_DSPR2_VP9_LOOPFILTER_MASKS_DSPR2_H_
+#endif  // VPX_VPX_DSP_MIPS_LOOPFILTER_MASKS_DSPR2_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_msa.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_msa.h
index 49fd74c..1ea05e0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_msa.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/loopfilter_msa.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_LOOPFILTER_MSA_H_
-#define VPX_DSP_LOOPFILTER_MSA_H_
+#ifndef VPX_VPX_DSP_MIPS_LOOPFILTER_MSA_H_
+#define VPX_VPX_DSP_MIPS_LOOPFILTER_MSA_H_
 
 #include "vpx_dsp/mips/macros_msa.h"
 
@@ -174,4 +174,4 @@
     mask_out = limit_in < (v16u8)mask_out;                                   \
     mask_out = __msa_xori_b(mask_out, 0xff);                                 \
   }
-#endif /* VPX_DSP_LOOPFILTER_MSA_H_ */
+#endif  // VPX_VPX_DSP_MIPS_LOOPFILTER_MSA_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/macros_msa.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/macros_msa.h
index f9a446e..a3a5a4d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/macros_msa.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/macros_msa.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_MIPS_MACROS_MSA_H_
-#define VPX_DSP_MIPS_MACROS_MSA_H_
+#ifndef VPX_VPX_DSP_MIPS_MACROS_MSA_H_
+#define VPX_VPX_DSP_MIPS_MACROS_MSA_H_
 
 #include <msa.h>
 
@@ -1966,4 +1966,4 @@
                                                                 \
     tmp1_m;                                                     \
   })
-#endif /* VPX_DSP_MIPS_MACROS_MSA_H_ */
+#endif  // VPX_VPX_DSP_MIPS_MACROS_MSA_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/sad_mmi.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/sad_mmi.c
index 33bd3fe..4368db5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/sad_mmi.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/sad_mmi.c
@@ -341,7 +341,7 @@
                                     const uint8_t *ref_array, int ref_stride, \
                                     uint32_t *sad_array) {                    \
     int i;                                                                    \
-    for (i = 0; i < k; ++i)                                                   \
+    for (i = 0; i < (k); ++i)                                                 \
       sad_array[i] =                                                          \
           vpx_sad##m##x##n##_mmi(src, src_stride, &ref_array[i], ref_stride); \
   }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/sub_pixel_variance_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/sub_pixel_variance_msa.c
index 313e06f..572fcab 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/sub_pixel_variance_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/sub_pixel_variance_msa.c
@@ -27,13 +27,14 @@
     HSUB_UB2_SH(src_l0_m, src_l1_m, res_l0_m, res_l1_m);            \
     DPADD_SH2_SW(res_l0_m, res_l1_m, res_l0_m, res_l1_m, var, var); \
                                                                     \
-    sub += res_l0_m + res_l1_m;                                     \
+    (sub) += res_l0_m + res_l1_m;                                   \
   }
 
-#define VARIANCE_WxH(sse, diff, shift) sse - (((uint32_t)diff * diff) >> shift)
+#define VARIANCE_WxH(sse, diff, shift) \
+  (sse) - (((uint32_t)(diff) * (diff)) >> (shift))
 
 #define VARIANCE_LARGE_WxH(sse, diff, shift) \
-  sse - (((int64_t)diff * diff) >> shift)
+  (sse) - (((int64_t)(diff) * (diff)) >> (shift))
 
 static uint32_t avg_sse_diff_4width_msa(const uint8_t *src_ptr,
                                         int32_t src_stride,
@@ -1619,16 +1620,16 @@
 
 #define VPX_SUB_PIXEL_VARIANCE_WDXHT_MSA(wd, ht)                              \
   uint32_t vpx_sub_pixel_variance##wd##x##ht##_msa(                           \
-      const uint8_t *src, int32_t src_stride, int32_t xoffset,                \
-      int32_t yoffset, const uint8_t *ref, int32_t ref_stride,                \
+      const uint8_t *src, int32_t src_stride, int32_t x_offset,               \
+      int32_t y_offset, const uint8_t *ref, int32_t ref_stride,               \
       uint32_t *sse) {                                                        \
     int32_t diff;                                                             \
     uint32_t var;                                                             \
-    const uint8_t *h_filter = bilinear_filters_msa[xoffset];                  \
-    const uint8_t *v_filter = bilinear_filters_msa[yoffset];                  \
+    const uint8_t *h_filter = bilinear_filters_msa[x_offset];                 \
+    const uint8_t *v_filter = bilinear_filters_msa[y_offset];                 \
                                                                               \
-    if (yoffset) {                                                            \
-      if (xoffset) {                                                          \
+    if (y_offset) {                                                           \
+      if (x_offset) {                                                         \
         *sse = sub_pixel_sse_diff_##wd##width_hv_msa(                         \
             src, src_stride, ref, ref_stride, h_filter, v_filter, ht, &diff); \
       } else {                                                                \
@@ -1638,7 +1639,7 @@
                                                                               \
       var = VARIANCE_##wd##Wx##ht##H(*sse, diff);                             \
     } else {                                                                  \
-      if (xoffset) {                                                          \
+      if (x_offset) {                                                         \
         *sse = sub_pixel_sse_diff_##wd##width_h_msa(                          \
             src, src_stride, ref, ref_stride, h_filter, ht, &diff);           \
                                                                               \
@@ -1672,15 +1673,15 @@
 
 #define VPX_SUB_PIXEL_AVG_VARIANCE_WDXHT_MSA(wd, ht)                          \
   uint32_t vpx_sub_pixel_avg_variance##wd##x##ht##_msa(                       \
-      const uint8_t *src_ptr, int32_t src_stride, int32_t xoffset,            \
-      int32_t yoffset, const uint8_t *ref_ptr, int32_t ref_stride,            \
+      const uint8_t *src_ptr, int32_t src_stride, int32_t x_offset,           \
+      int32_t y_offset, const uint8_t *ref_ptr, int32_t ref_stride,           \
       uint32_t *sse, const uint8_t *sec_pred) {                               \
     int32_t diff;                                                             \
-    const uint8_t *h_filter = bilinear_filters_msa[xoffset];                  \
-    const uint8_t *v_filter = bilinear_filters_msa[yoffset];                  \
+    const uint8_t *h_filter = bilinear_filters_msa[x_offset];                 \
+    const uint8_t *v_filter = bilinear_filters_msa[y_offset];                 \
                                                                               \
-    if (yoffset) {                                                            \
-      if (xoffset) {                                                          \
+    if (y_offset) {                                                           \
+      if (x_offset) {                                                         \
         *sse = sub_pixel_avg_sse_diff_##wd##width_hv_msa(                     \
             src_ptr, src_stride, ref_ptr, ref_stride, sec_pred, h_filter,     \
             v_filter, ht, &diff);                                             \
@@ -1690,7 +1691,7 @@
             &diff);                                                           \
       }                                                                       \
     } else {                                                                  \
-      if (xoffset) {                                                          \
+      if (x_offset) {                                                         \
         *sse = sub_pixel_avg_sse_diff_##wd##width_h_msa(                      \
             src_ptr, src_stride, ref_ptr, ref_stride, sec_pred, h_filter, ht, \
             &diff);                                                           \
@@ -1719,16 +1720,16 @@
 
 uint32_t vpx_sub_pixel_avg_variance32x64_msa(const uint8_t *src_ptr,
                                              int32_t src_stride,
-                                             int32_t xoffset, int32_t yoffset,
+                                             int32_t x_offset, int32_t y_offset,
                                              const uint8_t *ref_ptr,
                                              int32_t ref_stride, uint32_t *sse,
                                              const uint8_t *sec_pred) {
   int32_t diff;
-  const uint8_t *h_filter = bilinear_filters_msa[xoffset];
-  const uint8_t *v_filter = bilinear_filters_msa[yoffset];
+  const uint8_t *h_filter = bilinear_filters_msa[x_offset];
+  const uint8_t *v_filter = bilinear_filters_msa[y_offset];
 
-  if (yoffset) {
-    if (xoffset) {
+  if (y_offset) {
+    if (x_offset) {
       *sse = sub_pixel_avg_sse_diff_32width_hv_msa(
           src_ptr, src_stride, ref_ptr, ref_stride, sec_pred, h_filter,
           v_filter, 64, &diff);
@@ -1738,7 +1739,7 @@
                                                   v_filter, 64, &diff);
     }
   } else {
-    if (xoffset) {
+    if (x_offset) {
       *sse = sub_pixel_avg_sse_diff_32width_h_msa(src_ptr, src_stride, ref_ptr,
                                                   ref_stride, sec_pred,
                                                   h_filter, 64, &diff);
@@ -1753,15 +1754,15 @@
 
 #define VPX_SUB_PIXEL_AVG_VARIANCE64XHEIGHT_MSA(ht)                           \
   uint32_t vpx_sub_pixel_avg_variance64x##ht##_msa(                           \
-      const uint8_t *src_ptr, int32_t src_stride, int32_t xoffset,            \
-      int32_t yoffset, const uint8_t *ref_ptr, int32_t ref_stride,            \
+      const uint8_t *src_ptr, int32_t src_stride, int32_t x_offset,           \
+      int32_t y_offset, const uint8_t *ref_ptr, int32_t ref_stride,           \
       uint32_t *sse, const uint8_t *sec_pred) {                               \
     int32_t diff;                                                             \
-    const uint8_t *h_filter = bilinear_filters_msa[xoffset];                  \
-    const uint8_t *v_filter = bilinear_filters_msa[yoffset];                  \
+    const uint8_t *h_filter = bilinear_filters_msa[x_offset];                 \
+    const uint8_t *v_filter = bilinear_filters_msa[y_offset];                 \
                                                                               \
-    if (yoffset) {                                                            \
-      if (xoffset) {                                                          \
+    if (y_offset) {                                                           \
+      if (x_offset) {                                                         \
         *sse = sub_pixel_avg_sse_diff_64width_hv_msa(                         \
             src_ptr, src_stride, ref_ptr, ref_stride, sec_pred, h_filter,     \
             v_filter, ht, &diff);                                             \
@@ -1771,7 +1772,7 @@
             &diff);                                                           \
       }                                                                       \
     } else {                                                                  \
-      if (xoffset) {                                                          \
+      if (x_offset) {                                                         \
         *sse = sub_pixel_avg_sse_diff_64width_h_msa(                          \
             src_ptr, src_stride, ref_ptr, ref_stride, sec_pred, h_filter, ht, \
             &diff);                                                           \
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/txfm_macros_msa.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/txfm_macros_msa.h
index f077fa4..f27504a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/txfm_macros_msa.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/txfm_macros_msa.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_MIPS_TXFM_MACROS_MIPS_MSA_H_
-#define VPX_DSP_MIPS_TXFM_MACROS_MIPS_MSA_H_
+#ifndef VPX_VPX_DSP_MIPS_TXFM_MACROS_MSA_H_
+#define VPX_VPX_DSP_MIPS_TXFM_MACROS_MSA_H_
 
 #include "vpx_dsp/mips/macros_msa.h"
 
@@ -98,4 +98,4 @@
     SRARI_W4_SW(m4_m, m5_m, tmp2_m, tmp3_m, DCT_CONST_BITS);                  \
     PCKEV_H2_SH(m5_m, m4_m, tmp3_m, tmp2_m, out2, out3);                      \
   }
-#endif  // VPX_DSP_MIPS_TXFM_MACROS_MIPS_MSA_H_
+#endif  // VPX_VPX_DSP_MIPS_TXFM_MACROS_MSA_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/variance_mmi.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/variance_mmi.c
index 4af60d36..c1780c3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/variance_mmi.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/variance_mmi.c
@@ -87,10 +87,10 @@
   "paddh      %[ftmp12],  %[ftmp12],      %[ftmp6]            \n\t"
 
 #define VARIANCE_SSE_8                                              \
-  "gsldlc1    %[ftmp1],   0x07(%[a])                          \n\t" \
-  "gsldrc1    %[ftmp1],   0x00(%[a])                          \n\t" \
-  "gsldlc1    %[ftmp2],   0x07(%[b])                          \n\t" \
-  "gsldrc1    %[ftmp2],   0x00(%[b])                          \n\t" \
+  "gsldlc1    %[ftmp1],   0x07(%[src_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp1],   0x00(%[src_ptr])                    \n\t" \
+  "gsldlc1    %[ftmp2],   0x07(%[ref_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp2],   0x00(%[ref_ptr])                    \n\t" \
   "pasubub    %[ftmp3],   %[ftmp1],       %[ftmp2]            \n\t" \
   "punpcklbh  %[ftmp4],   %[ftmp3],       %[ftmp0]            \n\t" \
   "punpckhbh  %[ftmp5],   %[ftmp3],       %[ftmp0]            \n\t" \
@@ -101,10 +101,10 @@
 
 #define VARIANCE_SSE_16                                             \
   VARIANCE_SSE_8                                                    \
-  "gsldlc1    %[ftmp1],   0x0f(%[a])                          \n\t" \
-  "gsldrc1    %[ftmp1],   0x08(%[a])                          \n\t" \
-  "gsldlc1    %[ftmp2],   0x0f(%[b])                          \n\t" \
-  "gsldrc1    %[ftmp2],   0x08(%[b])                          \n\t" \
+  "gsldlc1    %[ftmp1],   0x0f(%[src_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp1],   0x08(%[src_ptr])                    \n\t" \
+  "gsldlc1    %[ftmp2],   0x0f(%[ref_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp2],   0x08(%[ref_ptr])                    \n\t" \
   "pasubub    %[ftmp3],   %[ftmp1],       %[ftmp2]            \n\t" \
   "punpcklbh  %[ftmp4],   %[ftmp3],       %[ftmp0]            \n\t" \
   "punpckhbh  %[ftmp5],   %[ftmp3],       %[ftmp0]            \n\t" \
@@ -115,11 +115,11 @@
 
 #define VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_4_A                       \
   /* calculate fdata3[0]~fdata3[3], store at ftmp2*/                \
-  "gsldlc1    %[ftmp1],   0x07(%[a])                          \n\t" \
-  "gsldrc1    %[ftmp1],   0x00(%[a])                          \n\t" \
+  "gsldlc1    %[ftmp1],   0x07(%[src_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp1],   0x00(%[src_ptr])                    \n\t" \
   "punpcklbh  %[ftmp2],   %[ftmp1],       %[ftmp0]            \n\t" \
-  "gsldlc1    %[ftmp1],   0x08(%[a])                          \n\t" \
-  "gsldrc1    %[ftmp1],   0x01(%[a])                          \n\t" \
+  "gsldlc1    %[ftmp1],   0x08(%[src_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp1],   0x01(%[src_ptr])                    \n\t" \
   "punpcklbh  %[ftmp3],   %[ftmp1],       %[ftmp0]            \n\t" \
   "pmullh     %[ftmp2],   %[ftmp2],       %[filter_x0]        \n\t" \
   "paddh      %[ftmp2],   %[ftmp2],       %[ff_ph_40]         \n\t" \
@@ -129,11 +129,11 @@
 
 #define VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_4_B                       \
   /* calculate fdata3[0]~fdata3[3], store at ftmp4*/                \
-  "gsldlc1    %[ftmp1],   0x07(%[a])                          \n\t" \
-  "gsldrc1    %[ftmp1],   0x00(%[a])                          \n\t" \
+  "gsldlc1    %[ftmp1],   0x07(%[src_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp1],   0x00(%[src_ptr])                    \n\t" \
   "punpcklbh  %[ftmp4],   %[ftmp1],       %[ftmp0]            \n\t" \
-  "gsldlc1    %[ftmp1],   0x08(%[a])                          \n\t" \
-  "gsldrc1    %[ftmp1],   0x01(%[a])                          \n\t" \
+  "gsldlc1    %[ftmp1],   0x08(%[src_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp1],   0x01(%[src_ptr])                    \n\t" \
   "punpcklbh  %[ftmp5],   %[ftmp1],       %[ftmp0]            \n\t" \
   "pmullh     %[ftmp4],   %[ftmp4],       %[filter_x0]        \n\t" \
   "paddh      %[ftmp4],   %[ftmp4],       %[ff_ph_40]         \n\t" \
@@ -169,12 +169,12 @@
 
 #define VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_8_A                       \
   /* calculate fdata3[0]~fdata3[7], store at ftmp2 and ftmp3*/      \
-  "gsldlc1    %[ftmp1],   0x07(%[a])                          \n\t" \
-  "gsldrc1    %[ftmp1],   0x00(%[a])                          \n\t" \
+  "gsldlc1    %[ftmp1],   0x07(%[src_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp1],   0x00(%[src_ptr])                    \n\t" \
   "punpcklbh  %[ftmp2],   %[ftmp1],       %[ftmp0]            \n\t" \
   "punpckhbh  %[ftmp3],   %[ftmp1],       %[ftmp0]            \n\t" \
-  "gsldlc1    %[ftmp1],   0x08(%[a])                          \n\t" \
-  "gsldrc1    %[ftmp1],   0x01(%[a])                          \n\t" \
+  "gsldlc1    %[ftmp1],   0x08(%[src_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp1],   0x01(%[src_ptr])                    \n\t" \
   "punpcklbh  %[ftmp4],   %[ftmp1],       %[ftmp0]            \n\t" \
   "punpckhbh  %[ftmp5],   %[ftmp1],       %[ftmp0]            \n\t" \
   "pmullh     %[ftmp2],   %[ftmp2],       %[filter_x0]        \n\t" \
@@ -190,12 +190,12 @@
 
 #define VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_8_B                       \
   /* calculate fdata3[0]~fdata3[7], store at ftmp8 and ftmp9*/      \
-  "gsldlc1    %[ftmp1],   0x07(%[a])                          \n\t" \
-  "gsldrc1    %[ftmp1],   0x00(%[a])                          \n\t" \
+  "gsldlc1    %[ftmp1],   0x07(%[src_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp1],   0x00(%[src_ptr])                    \n\t" \
   "punpcklbh  %[ftmp8],   %[ftmp1],       %[ftmp0]            \n\t" \
   "punpckhbh  %[ftmp9],   %[ftmp1],       %[ftmp0]            \n\t" \
-  "gsldlc1    %[ftmp1],   0x08(%[a])                          \n\t" \
-  "gsldrc1    %[ftmp1],   0x01(%[a])                          \n\t" \
+  "gsldlc1    %[ftmp1],   0x08(%[src_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp1],   0x01(%[src_ptr])                    \n\t" \
   "punpcklbh  %[ftmp10],  %[ftmp1],       %[ftmp0]            \n\t" \
   "punpckhbh  %[ftmp11],  %[ftmp1],       %[ftmp0]            \n\t" \
   "pmullh     %[ftmp8],   %[ftmp8],       %[filter_x0]        \n\t" \
@@ -258,12 +258,12 @@
   VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_8_A                             \
                                                                     \
   /* calculate fdata3[8]~fdata3[15], store at ftmp4 and ftmp5*/     \
-  "gsldlc1    %[ftmp1],   0x0f(%[a])                          \n\t" \
-  "gsldrc1    %[ftmp1],   0x08(%[a])                          \n\t" \
+  "gsldlc1    %[ftmp1],   0x0f(%[src_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp1],   0x08(%[src_ptr])                    \n\t" \
   "punpcklbh  %[ftmp4],   %[ftmp1],       %[ftmp0]            \n\t" \
   "punpckhbh  %[ftmp5],   %[ftmp1],       %[ftmp0]            \n\t" \
-  "gsldlc1    %[ftmp1],   0x10(%[a])                          \n\t" \
-  "gsldrc1    %[ftmp1],   0x09(%[a])                          \n\t" \
+  "gsldlc1    %[ftmp1],   0x10(%[src_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp1],   0x09(%[src_ptr])                    \n\t" \
   "punpcklbh  %[ftmp6],   %[ftmp1],       %[ftmp0]            \n\t" \
   "punpckhbh  %[ftmp7],   %[ftmp1],       %[ftmp0]            \n\t" \
   "pmullh     %[ftmp4],   %[ftmp4],       %[filter_x0]        \n\t" \
@@ -282,12 +282,12 @@
   VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_8_B                             \
                                                                     \
   /* calculate fdata3[8]~fdata3[15], store at ftmp10 and ftmp11*/   \
-  "gsldlc1    %[ftmp1],   0x0f(%[a])                          \n\t" \
-  "gsldrc1    %[ftmp1],   0x08(%[a])                          \n\t" \
+  "gsldlc1    %[ftmp1],   0x0f(%[src_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp1],   0x08(%[src_ptr])                    \n\t" \
   "punpcklbh  %[ftmp10],  %[ftmp1],       %[ftmp0]            \n\t" \
   "punpckhbh  %[ftmp11],  %[ftmp1],       %[ftmp0]            \n\t" \
-  "gsldlc1    %[ftmp1],   0x10(%[a])                          \n\t" \
-  "gsldrc1    %[ftmp1],   0x09(%[a])                          \n\t" \
+  "gsldlc1    %[ftmp1],   0x10(%[src_ptr])                    \n\t" \
+  "gsldrc1    %[ftmp1],   0x09(%[src_ptr])                    \n\t" \
   "punpcklbh  %[ftmp12],  %[ftmp1],       %[ftmp0]            \n\t" \
   "punpckhbh  %[ftmp13],  %[ftmp1],       %[ftmp0]            \n\t" \
   "pmullh     %[ftmp10],  %[ftmp10],      %[filter_x0]        \n\t" \
@@ -357,24 +357,23 @@
 // taps should sum to FILTER_WEIGHT. pixel_step defines whether the filter is
 // applied horizontally (pixel_step = 1) or vertically (pixel_step = stride).
 // It defines the offset required to move from one input to the next.
-static void var_filter_block2d_bil_first_pass(const uint8_t *a, uint16_t *b,
-                                              unsigned int src_pixels_per_line,
-                                              int pixel_step,
-                                              unsigned int output_height,
-                                              unsigned int output_width,
-                                              const uint8_t *filter) {
+static void var_filter_block2d_bil_first_pass(
+    const uint8_t *src_ptr, uint16_t *ref_ptr, unsigned int src_pixels_per_line,
+    int pixel_step, unsigned int output_height, unsigned int output_width,
+    const uint8_t *filter) {
   unsigned int i, j;
 
   for (i = 0; i < output_height; ++i) {
     for (j = 0; j < output_width; ++j) {
-      b[j] = ROUND_POWER_OF_TWO(
-          (int)a[0] * filter[0] + (int)a[pixel_step] * filter[1], FILTER_BITS);
+      ref_ptr[j] = ROUND_POWER_OF_TWO(
+          (int)src_ptr[0] * filter[0] + (int)src_ptr[pixel_step] * filter[1],
+          FILTER_BITS);
 
-      ++a;
+      ++src_ptr;
     }
 
-    a += src_pixels_per_line - output_width;
-    b += output_width;
+    src_ptr += src_pixels_per_line - output_width;
+    ref_ptr += output_width;
   }
 }
 
@@ -387,28 +386,27 @@
 // filter is applied horizontally (pixel_step = 1) or vertically
 // (pixel_step = stride). It defines the offset required to move from one input
 // to the next. Output is 8-bit.
-static void var_filter_block2d_bil_second_pass(const uint16_t *a, uint8_t *b,
-                                               unsigned int src_pixels_per_line,
-                                               unsigned int pixel_step,
-                                               unsigned int output_height,
-                                               unsigned int output_width,
-                                               const uint8_t *filter) {
+static void var_filter_block2d_bil_second_pass(
+    const uint16_t *src_ptr, uint8_t *ref_ptr, unsigned int src_pixels_per_line,
+    unsigned int pixel_step, unsigned int output_height,
+    unsigned int output_width, const uint8_t *filter) {
   unsigned int i, j;
 
   for (i = 0; i < output_height; ++i) {
     for (j = 0; j < output_width; ++j) {
-      b[j] = ROUND_POWER_OF_TWO(
-          (int)a[0] * filter[0] + (int)a[pixel_step] * filter[1], FILTER_BITS);
-      ++a;
+      ref_ptr[j] = ROUND_POWER_OF_TWO(
+          (int)src_ptr[0] * filter[0] + (int)src_ptr[pixel_step] * filter[1],
+          FILTER_BITS);
+      ++src_ptr;
     }
 
-    a += src_pixels_per_line - output_width;
-    b += output_width;
+    src_ptr += src_pixels_per_line - output_width;
+    ref_ptr += output_width;
   }
 }
 
-static inline uint32_t vpx_variance64x(const uint8_t *a, int a_stride,
-                                       const uint8_t *b, int b_stride,
+static inline uint32_t vpx_variance64x(const uint8_t *src_ptr, int src_stride,
+                                       const uint8_t *ref_ptr, int ref_stride,
                                        uint32_t *sse, int high) {
   int sum;
   double ftmp[12];
@@ -424,57 +422,57 @@
     "xor        %[ftmp9],   %[ftmp9],       %[ftmp9]            \n\t"
     "xor        %[ftmp10],  %[ftmp10],      %[ftmp10]           \n\t"
     "1:                                                         \n\t"
-    "gsldlc1    %[ftmp1],   0x07(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x00(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x07(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x00(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x07(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x00(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x07(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x00(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8_FOR_W64
 
-    "gsldlc1    %[ftmp1],   0x0f(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x08(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x0f(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x08(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x0f(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x08(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x0f(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x08(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8_FOR_W64
 
-    "gsldlc1    %[ftmp1],   0x17(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x10(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x17(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x10(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x17(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x10(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x17(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x10(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8_FOR_W64
 
-    "gsldlc1    %[ftmp1],   0x1f(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x18(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x1f(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x18(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x1f(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x18(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x1f(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x18(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8_FOR_W64
 
-    "gsldlc1    %[ftmp1],   0x27(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x20(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x27(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x20(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x27(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x20(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x27(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x20(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8_FOR_W64
 
-    "gsldlc1    %[ftmp1],   0x2f(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x28(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x2f(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x28(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x2f(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x28(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x2f(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x28(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8_FOR_W64
 
-    "gsldlc1    %[ftmp1],   0x37(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x30(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x37(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x30(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x37(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x30(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x37(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x30(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8_FOR_W64
 
-    "gsldlc1    %[ftmp1],   0x3f(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x38(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x3f(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x38(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x3f(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x38(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x3f(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x38(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8_FOR_W64
 
     "addiu      %[tmp0],    %[tmp0],        -0x01               \n\t"
-    MMI_ADDU(%[a], %[a], %[a_stride])
-    MMI_ADDU(%[b], %[b], %[b_stride])
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
+    MMI_ADDU(%[ref_ptr], %[ref_ptr], %[ref_stride])
     "bnez       %[tmp0],    1b                                  \n\t"
 
     "mfc1       %[tmp1],    %[ftmp9]                            \n\t"
@@ -491,9 +489,10 @@
       [ftmp10]"=&f"(ftmp[10]),          [ftmp11]"=&f"(ftmp[11]),
       [tmp0]"=&r"(tmp[0]),              [tmp1]"=&r"(tmp[1]),
       [tmp2]"=&r"(tmp[2]),
-      [a]"+&r"(a),                      [b]"+&r"(b),
+      [src_ptr]"+&r"(src_ptr),          [ref_ptr]"+&r"(ref_ptr),
       [sum]"=&r"(sum)
-    : [a_stride]"r"((mips_reg)a_stride),[b_stride]"r"((mips_reg)b_stride),
+    : [src_stride]"r"((mips_reg)src_stride),
+      [ref_stride]"r"((mips_reg)ref_stride),
       [high]"r"(&high), [sse]"r"(sse)
     : "memory"
   );
@@ -501,18 +500,19 @@
   return *sse - (((int64_t)sum * sum) / (64 * high));
 }
 
-#define VPX_VARIANCE64XN(n)                                         \
-  uint32_t vpx_variance64x##n##_mmi(const uint8_t *a, int a_stride, \
-                                    const uint8_t *b, int b_stride, \
-                                    uint32_t *sse) {                \
-    return vpx_variance64x(a, a_stride, b, b_stride, sse, n);       \
+#define VPX_VARIANCE64XN(n)                                                   \
+  uint32_t vpx_variance64x##n##_mmi(const uint8_t *src_ptr, int src_stride,   \
+                                    const uint8_t *ref_ptr, int ref_stride,   \
+                                    uint32_t *sse) {                          \
+    return vpx_variance64x(src_ptr, src_stride, ref_ptr, ref_stride, sse, n); \
   }
 
 VPX_VARIANCE64XN(64)
 VPX_VARIANCE64XN(32)
 
-uint32_t vpx_variance32x64_mmi(const uint8_t *a, int a_stride, const uint8_t *b,
-                               int b_stride, uint32_t *sse) {
+uint32_t vpx_variance32x64_mmi(const uint8_t *src_ptr, int src_stride,
+                               const uint8_t *ref_ptr, int ref_stride,
+                               uint32_t *sse) {
   int sum;
   double ftmp[12];
   uint32_t tmp[3];
@@ -527,33 +527,33 @@
     "xor        %[ftmp9],   %[ftmp9],       %[ftmp9]            \n\t"
     "xor        %[ftmp10],  %[ftmp10],      %[ftmp10]           \n\t"
     "1:                                                         \n\t"
-    "gsldlc1    %[ftmp1],   0x07(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x00(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x07(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x00(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x07(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x00(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x07(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x00(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8_FOR_W64
 
-    "gsldlc1    %[ftmp1],   0x0f(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x08(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x0f(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x08(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x0f(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x08(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x0f(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x08(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8_FOR_W64
 
-    "gsldlc1    %[ftmp1],   0x17(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x10(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x17(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x10(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x17(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x10(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x17(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x10(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8_FOR_W64
 
-    "gsldlc1    %[ftmp1],   0x1f(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x18(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x1f(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x18(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x1f(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x18(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x1f(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x18(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8_FOR_W64
 
     "addiu      %[tmp0],    %[tmp0],        -0x01               \n\t"
-    MMI_ADDU(%[a], %[a], %[a_stride])
-    MMI_ADDU(%[b], %[b], %[b_stride])
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
+    MMI_ADDU(%[ref_ptr], %[ref_ptr], %[ref_stride])
     "bnez       %[tmp0],    1b                                  \n\t"
 
     "mfc1       %[tmp1],    %[ftmp9]                            \n\t"
@@ -570,9 +570,10 @@
       [ftmp10]"=&f"(ftmp[10]),          [ftmp11]"=&f"(ftmp[11]),
       [tmp0]"=&r"(tmp[0]),              [tmp1]"=&r"(tmp[1]),
       [tmp2]"=&r"(tmp[2]),
-      [a]"+&r"(a),                      [b]"+&r"(b),
+      [src_ptr]"+&r"(src_ptr),          [ref_ptr]"+&r"(ref_ptr),
       [sum]"=&r"(sum)
-    : [a_stride]"r"((mips_reg)a_stride),[b_stride]"r"((mips_reg)b_stride),
+    : [src_stride]"r"((mips_reg)src_stride),
+      [ref_stride]"r"((mips_reg)ref_stride),
       [sse]"r"(sse)
     : "memory"
   );
@@ -580,8 +581,8 @@
   return *sse - (((int64_t)sum * sum) / 2048);
 }
 
-static inline uint32_t vpx_variance32x(const uint8_t *a, int a_stride,
-                                       const uint8_t *b, int b_stride,
+static inline uint32_t vpx_variance32x(const uint8_t *src_ptr, int src_stride,
+                                       const uint8_t *ref_ptr, int ref_stride,
                                        uint32_t *sse, int high) {
   int sum;
   double ftmp[13];
@@ -598,30 +599,30 @@
     "xor        %[ftmp10],  %[ftmp10],      %[ftmp10]           \n\t"
     "xor        %[ftmp12],  %[ftmp12],      %[ftmp12]           \n\t"
     "1:                                                         \n\t"
-    "gsldlc1    %[ftmp1],   0x07(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x00(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x07(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x00(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x07(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x00(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x07(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x00(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8
-    "gsldlc1    %[ftmp1],   0x0f(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x08(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x0f(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x08(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x0f(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x08(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x0f(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x08(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8
-    "gsldlc1    %[ftmp1],   0x17(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x10(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x17(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x10(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x17(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x10(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x17(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x10(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8
-    "gsldlc1    %[ftmp1],   0x1f(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x18(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x1f(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x18(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x1f(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x18(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x1f(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x18(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8
 
     "addiu      %[tmp0],    %[tmp0],        -0x01               \n\t"
-    MMI_ADDU(%[a], %[a], %[a_stride])
-    MMI_ADDU(%[b], %[b], %[b_stride])
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
+    MMI_ADDU(%[ref_ptr], %[ref_ptr], %[ref_stride])
     "bnez       %[tmp0],    1b                                  \n\t"
 
     "dsrl       %[ftmp9],   %[ftmp8],       %[ftmp11]           \n\t"
@@ -646,8 +647,9 @@
       [ftmp8]"=&f"(ftmp[8]),            [ftmp9]"=&f"(ftmp[9]),
       [ftmp10]"=&f"(ftmp[10]),          [ftmp11]"=&f"(ftmp[11]),
       [ftmp12]"=&f"(ftmp[12]),          [tmp0]"=&r"(tmp[0]),
-      [a]"+&r"(a),                      [b]"+&r"(b)
-    : [a_stride]"r"((mips_reg)a_stride),[b_stride]"r"((mips_reg)b_stride),
+      [src_ptr]"+&r"(src_ptr),          [ref_ptr]"+&r"(ref_ptr)
+    : [src_stride]"r"((mips_reg)src_stride),
+      [ref_stride]"r"((mips_reg)ref_stride),
       [high]"r"(&high), [sse]"r"(sse), [sum]"r"(&sum)
     : "memory"
   );
@@ -655,18 +657,18 @@
   return *sse - (((int64_t)sum * sum) / (32 * high));
 }
 
-#define VPX_VARIANCE32XN(n)                                         \
-  uint32_t vpx_variance32x##n##_mmi(const uint8_t *a, int a_stride, \
-                                    const uint8_t *b, int b_stride, \
-                                    uint32_t *sse) {                \
-    return vpx_variance32x(a, a_stride, b, b_stride, sse, n);       \
+#define VPX_VARIANCE32XN(n)                                                   \
+  uint32_t vpx_variance32x##n##_mmi(const uint8_t *src_ptr, int src_stride,   \
+                                    const uint8_t *ref_ptr, int ref_stride,   \
+                                    uint32_t *sse) {                          \
+    return vpx_variance32x(src_ptr, src_stride, ref_ptr, ref_stride, sse, n); \
   }
 
 VPX_VARIANCE32XN(32)
 VPX_VARIANCE32XN(16)
 
-static inline uint32_t vpx_variance16x(const uint8_t *a, int a_stride,
-                                       const uint8_t *b, int b_stride,
+static inline uint32_t vpx_variance16x(const uint8_t *src_ptr, int src_stride,
+                                       const uint8_t *ref_ptr, int ref_stride,
                                        uint32_t *sse, int high) {
   int sum;
   double ftmp[13];
@@ -683,20 +685,20 @@
     "xor        %[ftmp10],  %[ftmp10],      %[ftmp10]           \n\t"
     "xor        %[ftmp12],  %[ftmp12],      %[ftmp12]           \n\t"
     "1:                                                         \n\t"
-    "gsldlc1    %[ftmp1],   0x07(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x00(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x07(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x00(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x07(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x00(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x07(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x00(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8
-    "gsldlc1    %[ftmp1],   0x0f(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x08(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x0f(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x08(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x0f(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x08(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x0f(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x08(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8
 
     "addiu      %[tmp0],    %[tmp0],        -0x01               \n\t"
-    MMI_ADDU(%[a], %[a], %[a_stride])
-    MMI_ADDU(%[b], %[b], %[b_stride])
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
+    MMI_ADDU(%[ref_ptr], %[ref_ptr], %[ref_stride])
     "bnez       %[tmp0],    1b                                  \n\t"
 
     "dsrl       %[ftmp9],   %[ftmp8],       %[ftmp11]           \n\t"
@@ -721,8 +723,9 @@
       [ftmp8]"=&f"(ftmp[8]),            [ftmp9]"=&f"(ftmp[9]),
       [ftmp10]"=&f"(ftmp[10]),          [ftmp11]"=&f"(ftmp[11]),
       [ftmp12]"=&f"(ftmp[12]),          [tmp0]"=&r"(tmp[0]),
-      [a]"+&r"(a),                      [b]"+&r"(b)
-    : [a_stride]"r"((mips_reg)a_stride),[b_stride]"r"((mips_reg)b_stride),
+      [src_ptr]"+&r"(src_ptr),          [ref_ptr]"+&r"(ref_ptr)
+    : [src_stride]"r"((mips_reg)src_stride),
+      [ref_stride]"r"((mips_reg)ref_stride),
       [high]"r"(&high), [sse]"r"(sse), [sum]"r"(&sum)
     : "memory"
   );
@@ -730,19 +733,19 @@
   return *sse - (((int64_t)sum * sum) / (16 * high));
 }
 
-#define VPX_VARIANCE16XN(n)                                         \
-  uint32_t vpx_variance16x##n##_mmi(const uint8_t *a, int a_stride, \
-                                    const uint8_t *b, int b_stride, \
-                                    uint32_t *sse) {                \
-    return vpx_variance16x(a, a_stride, b, b_stride, sse, n);       \
+#define VPX_VARIANCE16XN(n)                                                   \
+  uint32_t vpx_variance16x##n##_mmi(const uint8_t *src_ptr, int src_stride,   \
+                                    const uint8_t *ref_ptr, int ref_stride,   \
+                                    uint32_t *sse) {                          \
+    return vpx_variance16x(src_ptr, src_stride, ref_ptr, ref_stride, sse, n); \
   }
 
 VPX_VARIANCE16XN(32)
 VPX_VARIANCE16XN(16)
 VPX_VARIANCE16XN(8)
 
-static inline uint32_t vpx_variance8x(const uint8_t *a, int a_stride,
-                                      const uint8_t *b, int b_stride,
+static inline uint32_t vpx_variance8x(const uint8_t *src_ptr, int src_stride,
+                                      const uint8_t *ref_ptr, int ref_stride,
                                       uint32_t *sse, int high) {
   int sum;
   double ftmp[13];
@@ -759,15 +762,15 @@
     "xor        %[ftmp10],  %[ftmp10],      %[ftmp10]           \n\t"
     "xor        %[ftmp12],  %[ftmp12],      %[ftmp12]           \n\t"
     "1:                                                         \n\t"
-    "gsldlc1    %[ftmp1],   0x07(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x00(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x07(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x00(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x07(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x00(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x07(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x00(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_8
 
     "addiu      %[tmp0],    %[tmp0],        -0x01               \n\t"
-    MMI_ADDU(%[a], %[a], %[a_stride])
-    MMI_ADDU(%[b], %[b], %[b_stride])
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
+    MMI_ADDU(%[ref_ptr], %[ref_ptr], %[ref_stride])
     "bnez       %[tmp0],    1b                                  \n\t"
 
     "dsrl       %[ftmp9],   %[ftmp8],       %[ftmp11]           \n\t"
@@ -792,8 +795,9 @@
       [ftmp8]"=&f"(ftmp[8]),            [ftmp9]"=&f"(ftmp[9]),
       [ftmp10]"=&f"(ftmp[10]),          [ftmp11]"=&f"(ftmp[11]),
       [ftmp12]"=&f"(ftmp[12]),          [tmp0]"=&r"(tmp[0]),
-      [a]"+&r"(a),                      [b]"+&r"(b)
-    : [a_stride]"r"((mips_reg)a_stride),[b_stride]"r"((mips_reg)b_stride),
+      [src_ptr]"+&r"(src_ptr),          [ref_ptr]"+&r"(ref_ptr)
+    : [src_stride]"r"((mips_reg)src_stride),
+      [ref_stride]"r"((mips_reg)ref_stride),
       [high]"r"(&high), [sse]"r"(sse), [sum]"r"(&sum)
     : "memory"
   );
@@ -801,19 +805,19 @@
   return *sse - (((int64_t)sum * sum) / (8 * high));
 }
 
-#define VPX_VARIANCE8XN(n)                                         \
-  uint32_t vpx_variance8x##n##_mmi(const uint8_t *a, int a_stride, \
-                                   const uint8_t *b, int b_stride, \
-                                   uint32_t *sse) {                \
-    return vpx_variance8x(a, a_stride, b, b_stride, sse, n);       \
+#define VPX_VARIANCE8XN(n)                                                   \
+  uint32_t vpx_variance8x##n##_mmi(const uint8_t *src_ptr, int src_stride,   \
+                                   const uint8_t *ref_ptr, int ref_stride,   \
+                                   uint32_t *sse) {                          \
+    return vpx_variance8x(src_ptr, src_stride, ref_ptr, ref_stride, sse, n); \
   }
 
 VPX_VARIANCE8XN(16)
 VPX_VARIANCE8XN(8)
 VPX_VARIANCE8XN(4)
 
-static inline uint32_t vpx_variance4x(const uint8_t *a, int a_stride,
-                                      const uint8_t *b, int b_stride,
+static inline uint32_t vpx_variance4x(const uint8_t *src_ptr, int src_stride,
+                                      const uint8_t *ref_ptr, int ref_stride,
                                       uint32_t *sse, int high) {
   int sum;
   double ftmp[12];
@@ -830,15 +834,15 @@
     "xor        %[ftmp7],   %[ftmp7],       %[ftmp7]            \n\t"
     "xor        %[ftmp8],   %[ftmp8],       %[ftmp8]            \n\t"
     "1:                                                         \n\t"
-    "gsldlc1    %[ftmp1],   0x07(%[a])                          \n\t"
-    "gsldrc1    %[ftmp1],   0x00(%[a])                          \n\t"
-    "gsldlc1    %[ftmp2],   0x07(%[b])                          \n\t"
-    "gsldrc1    %[ftmp2],   0x00(%[b])                          \n\t"
+    "gsldlc1    %[ftmp1],   0x07(%[src_ptr])                    \n\t"
+    "gsldrc1    %[ftmp1],   0x00(%[src_ptr])                    \n\t"
+    "gsldlc1    %[ftmp2],   0x07(%[ref_ptr])                    \n\t"
+    "gsldrc1    %[ftmp2],   0x00(%[ref_ptr])                    \n\t"
     VARIANCE_SSE_SUM_4
 
     "addiu      %[tmp0],    %[tmp0],        -0x01               \n\t"
-    MMI_ADDU(%[a], %[a], %[a_stride])
-    MMI_ADDU(%[b], %[b], %[b_stride])
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
+    MMI_ADDU(%[ref_ptr], %[ref_ptr], %[ref_stride])
     "bnez       %[tmp0],    1b                                  \n\t"
 
     "dsrl       %[ftmp9],   %[ftmp6],       %[ftmp10]           \n\t"
@@ -862,8 +866,9 @@
       [ftmp8]"=&f"(ftmp[8]),            [ftmp9]"=&f"(ftmp[9]),
       [ftmp10]"=&f"(ftmp[10]),
       [tmp0]"=&r"(tmp[0]),
-      [a]"+&r"(a),                      [b]"+&r"(b)
-    : [a_stride]"r"((mips_reg)a_stride),[b_stride]"r"((mips_reg)b_stride),
+      [src_ptr]"+&r"(src_ptr),          [ref_ptr]"+&r"(ref_ptr)
+    : [src_stride]"r"((mips_reg)src_stride),
+      [ref_stride]"r"((mips_reg)ref_stride),
       [high]"r"(&high), [sse]"r"(sse), [sum]"r"(&sum)
     : "memory"
   );
@@ -871,19 +876,19 @@
   return *sse - (((int64_t)sum * sum) / (4 * high));
 }
 
-#define VPX_VARIANCE4XN(n)                                         \
-  uint32_t vpx_variance4x##n##_mmi(const uint8_t *a, int a_stride, \
-                                   const uint8_t *b, int b_stride, \
-                                   uint32_t *sse) {                \
-    return vpx_variance4x(a, a_stride, b, b_stride, sse, n);       \
+#define VPX_VARIANCE4XN(n)                                                   \
+  uint32_t vpx_variance4x##n##_mmi(const uint8_t *src_ptr, int src_stride,   \
+                                   const uint8_t *ref_ptr, int ref_stride,   \
+                                   uint32_t *sse) {                          \
+    return vpx_variance4x(src_ptr, src_stride, ref_ptr, ref_stride, sse, n); \
   }
 
 VPX_VARIANCE4XN(8)
 VPX_VARIANCE4XN(4)
 
-static inline uint32_t vpx_mse16x(const uint8_t *a, int a_stride,
-                                  const uint8_t *b, int b_stride, uint32_t *sse,
-                                  uint64_t high) {
+static inline uint32_t vpx_mse16x(const uint8_t *src_ptr, int src_stride,
+                                  const uint8_t *ref_ptr, int ref_stride,
+                                  uint32_t *sse, uint64_t high) {
   double ftmp[12];
   uint32_t tmp[1];
 
@@ -900,8 +905,8 @@
     VARIANCE_SSE_16
 
     "addiu      %[tmp0],    %[tmp0],        -0x01               \n\t"
-    MMI_ADDU(%[a], %[a], %[a_stride])
-    MMI_ADDU(%[b], %[b], %[b_stride])
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
+    MMI_ADDU(%[ref_ptr], %[ref_ptr], %[ref_stride])
     "bnez       %[tmp0],    1b                                  \n\t"
 
     "dsrl       %[ftmp9],   %[ftmp8],       %[ftmp11]           \n\t"
@@ -914,8 +919,9 @@
       [ftmp8]"=&f"(ftmp[8]),            [ftmp9]"=&f"(ftmp[9]),
       [ftmp10]"=&f"(ftmp[10]),          [ftmp11]"=&f"(ftmp[11]),
       [tmp0]"=&r"(tmp[0]),
-      [a]"+&r"(a),                      [b]"+&r"(b)
-    : [a_stride]"r"((mips_reg)a_stride),[b_stride]"r"((mips_reg)b_stride),
+      [src_ptr]"+&r"(src_ptr),          [ref_ptr]"+&r"(ref_ptr)
+    : [src_stride]"r"((mips_reg)src_stride),
+      [ref_stride]"r"((mips_reg)ref_stride),
       [high]"r"(&high), [sse]"r"(sse)
     : "memory"
   );
@@ -923,19 +929,19 @@
   return *sse;
 }
 
-#define vpx_mse16xN(n)                                         \
-  uint32_t vpx_mse16x##n##_mmi(const uint8_t *a, int a_stride, \
-                               const uint8_t *b, int b_stride, \
-                               uint32_t *sse) {                \
-    return vpx_mse16x(a, a_stride, b, b_stride, sse, n);       \
+#define vpx_mse16xN(n)                                                   \
+  uint32_t vpx_mse16x##n##_mmi(const uint8_t *src_ptr, int src_stride,   \
+                               const uint8_t *ref_ptr, int ref_stride,   \
+                               uint32_t *sse) {                          \
+    return vpx_mse16x(src_ptr, src_stride, ref_ptr, ref_stride, sse, n); \
   }
 
 vpx_mse16xN(16);
 vpx_mse16xN(8);
 
-static inline uint32_t vpx_mse8x(const uint8_t *a, int a_stride,
-                                 const uint8_t *b, int b_stride, uint32_t *sse,
-                                 uint64_t high) {
+static inline uint32_t vpx_mse8x(const uint8_t *src_ptr, int src_stride,
+                                 const uint8_t *ref_ptr, int ref_stride,
+                                 uint32_t *sse, uint64_t high) {
   double ftmp[12];
   uint32_t tmp[1];
 
@@ -952,8 +958,8 @@
     VARIANCE_SSE_8
 
     "addiu      %[tmp0],    %[tmp0],        -0x01               \n\t"
-    MMI_ADDU(%[a], %[a], %[a_stride])
-    MMI_ADDU(%[b], %[b], %[b_stride])
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
+    MMI_ADDU(%[ref_ptr], %[ref_ptr], %[ref_stride])
     "bnez       %[tmp0],    1b                                  \n\t"
 
     "dsrl       %[ftmp9],   %[ftmp8],       %[ftmp11]           \n\t"
@@ -966,8 +972,9 @@
       [ftmp8]"=&f"(ftmp[8]),            [ftmp9]"=&f"(ftmp[9]),
       [ftmp10]"=&f"(ftmp[10]),          [ftmp11]"=&f"(ftmp[11]),
       [tmp0]"=&r"(tmp[0]),
-      [a]"+&r"(a),                      [b]"+&r"(b)
-    : [a_stride]"r"((mips_reg)a_stride),[b_stride]"r"((mips_reg)b_stride),
+      [src_ptr]"+&r"(src_ptr),          [ref_ptr]"+&r"(ref_ptr)
+    : [src_stride]"r"((mips_reg)src_stride),
+      [ref_stride]"r"((mips_reg)ref_stride),
       [high]"r"(&high), [sse]"r"(sse)
     : "memory"
   );
@@ -975,28 +982,29 @@
   return *sse;
 }
 
-#define vpx_mse8xN(n)                                                          \
-  uint32_t vpx_mse8x##n##_mmi(const uint8_t *a, int a_stride,                  \
-                              const uint8_t *b, int b_stride, uint32_t *sse) { \
-    return vpx_mse8x(a, a_stride, b, b_stride, sse, n);                        \
+#define vpx_mse8xN(n)                                                   \
+  uint32_t vpx_mse8x##n##_mmi(const uint8_t *src_ptr, int src_stride,   \
+                              const uint8_t *ref_ptr, int ref_stride,   \
+                              uint32_t *sse) {                          \
+    return vpx_mse8x(src_ptr, src_stride, ref_ptr, ref_stride, sse, n); \
   }
 
 vpx_mse8xN(16);
 vpx_mse8xN(8);
 
-#define SUBPIX_VAR(W, H)                                                \
-  uint32_t vpx_sub_pixel_variance##W##x##H##_mmi(                       \
-      const uint8_t *a, int a_stride, int xoffset, int yoffset,         \
-      const uint8_t *b, int b_stride, uint32_t *sse) {                  \
-    uint16_t fdata3[(H + 1) * W];                                       \
-    uint8_t temp2[H * W];                                               \
-                                                                        \
-    var_filter_block2d_bil_first_pass(a, fdata3, a_stride, 1, H + 1, W, \
-                                      bilinear_filters[xoffset]);       \
-    var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,       \
-                                       bilinear_filters[yoffset]);      \
-                                                                        \
-    return vpx_variance##W##x##H##_mmi(temp2, W, b, b_stride, sse);     \
+#define SUBPIX_VAR(W, H)                                                       \
+  uint32_t vpx_sub_pixel_variance##W##x##H##_mmi(                              \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,      \
+      const uint8_t *ref_ptr, int ref_stride, uint32_t *sse) {                 \
+    uint16_t fdata3[((H) + 1) * (W)];                                          \
+    uint8_t temp2[(H) * (W)];                                                  \
+                                                                               \
+    var_filter_block2d_bil_first_pass(src_ptr, fdata3, src_stride, 1, (H) + 1, \
+                                      W, bilinear_filters[x_offset]);          \
+    var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,              \
+                                       bilinear_filters[y_offset]);            \
+                                                                               \
+    return vpx_variance##W##x##H##_mmi(temp2, W, ref_ptr, ref_stride, sse);    \
   }
 
 SUBPIX_VAR(64, 64)
@@ -1006,9 +1014,10 @@
 SUBPIX_VAR(32, 16)
 SUBPIX_VAR(16, 32)
 
-static inline void var_filter_block2d_bil_16x(const uint8_t *a, int a_stride,
-                                              int xoffset, int yoffset,
-                                              uint8_t *temp2, int counter) {
+static inline void var_filter_block2d_bil_16x(const uint8_t *src_ptr,
+                                              int src_stride, int x_offset,
+                                              int y_offset, uint8_t *temp2,
+                                              int counter) {
   uint8_t *temp2_ptr = temp2;
   mips_reg l_counter = counter;
   double ftmp[15];
@@ -1016,8 +1025,8 @@
   DECLARE_ALIGNED(8, const uint64_t, ff_ph_40) = { 0x0040004000400040ULL };
   DECLARE_ALIGNED(8, const uint64_t, mask) = { 0x00ff00ff00ff00ffULL };
 
-  const uint8_t *filter_x = bilinear_filters[xoffset];
-  const uint8_t *filter_y = bilinear_filters[yoffset];
+  const uint8_t *filter_x = bilinear_filters[x_offset];
+  const uint8_t *filter_y = bilinear_filters[y_offset];
 
   __asm__ volatile (
     "xor        %[ftmp0],   %[ftmp0],       %[ftmp0]            \n\t"
@@ -1031,26 +1040,26 @@
     // fdata3: fdata3[0] ~ fdata3[15]
     VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_16_A
 
-    // fdata3 +a_stride*1: fdata3[0] ~ fdata3[15]
-    MMI_ADDU(%[a], %[a], %[a_stride])
+    // fdata3 +src_stride*1: fdata3[0] ~ fdata3[15]
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
     VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_16_B
     // temp2: temp2[0] ~ temp2[15]
     VAR_FILTER_BLOCK2D_BIL_SECOND_PASS_16_A
 
-    // fdata3 +a_stride*2: fdata3[0] ~ fdata3[15]
-    MMI_ADDU(%[a], %[a], %[a_stride])
+    // fdata3 +src_stride*2: fdata3[0] ~ fdata3[15]
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
     VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_16_A
     // temp2+16*1: temp2[0] ~ temp2[15]
     MMI_ADDIU(%[temp2_ptr], %[temp2_ptr], 0x10)
     VAR_FILTER_BLOCK2D_BIL_SECOND_PASS_16_B
 
     "1:                                                         \n\t"
-    MMI_ADDU(%[a], %[a], %[a_stride])
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
     VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_16_B
     MMI_ADDIU(%[temp2_ptr], %[temp2_ptr], 0x10)
     VAR_FILTER_BLOCK2D_BIL_SECOND_PASS_16_A
 
-    MMI_ADDU(%[a], %[a], %[a_stride])
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
     VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_16_A
     MMI_ADDIU(%[temp2_ptr], %[temp2_ptr], 0x10)
     VAR_FILTER_BLOCK2D_BIL_SECOND_PASS_16_B
@@ -1062,43 +1071,44 @@
       [ftmp9] "=&f"(ftmp[9]), [ftmp10] "=&f"(ftmp[10]),
       [ftmp11] "=&f"(ftmp[11]), [ftmp12] "=&f"(ftmp[12]),
       [ftmp13] "=&f"(ftmp[13]), [ftmp14] "=&f"(ftmp[14]),
-      [tmp0] "=&r"(tmp[0]), [a] "+&r"(a), [temp2_ptr] "+&r"(temp2_ptr),
+      [tmp0] "=&r"(tmp[0]), [src_ptr] "+&r"(src_ptr), [temp2_ptr] "+&r"(temp2_ptr),
       [counter]"+&r"(l_counter)
     : [filter_x0] "f"((uint64_t)filter_x[0]),
       [filter_x1] "f"((uint64_t)filter_x[1]),
       [filter_y0] "f"((uint64_t)filter_y[0]),
       [filter_y1] "f"((uint64_t)filter_y[1]),
-      [a_stride] "r"((mips_reg)a_stride), [ff_ph_40] "f"(ff_ph_40),
+      [src_stride] "r"((mips_reg)src_stride), [ff_ph_40] "f"(ff_ph_40),
       [mask] "f"(mask)
     : "memory"
   );
 }
 
-#define SUBPIX_VAR16XN(H)                                            \
-  uint32_t vpx_sub_pixel_variance16x##H##_mmi(                       \
-      const uint8_t *a, int a_stride, int xoffset, int yoffset,      \
-      const uint8_t *b, int b_stride, uint32_t *sse) {               \
-    uint8_t temp2[16 * H];                                           \
-    var_filter_block2d_bil_16x(a, a_stride, xoffset, yoffset, temp2, \
-                               (H - 2) / 2);                         \
-                                                                     \
-    return vpx_variance16x##H##_mmi(temp2, 16, b, b_stride, sse);    \
+#define SUBPIX_VAR16XN(H)                                                      \
+  uint32_t vpx_sub_pixel_variance16x##H##_mmi(                                 \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,      \
+      const uint8_t *ref_ptr, int ref_stride, uint32_t *sse) {                 \
+    uint8_t temp2[16 * (H)];                                                   \
+    var_filter_block2d_bil_16x(src_ptr, src_stride, x_offset, y_offset, temp2, \
+                               ((H)-2) / 2);                                   \
+                                                                               \
+    return vpx_variance16x##H##_mmi(temp2, 16, ref_ptr, ref_stride, sse);      \
   }
 
 SUBPIX_VAR16XN(16)
 SUBPIX_VAR16XN(8)
 
-static inline void var_filter_block2d_bil_8x(const uint8_t *a, int a_stride,
-                                             int xoffset, int yoffset,
-                                             uint8_t *temp2, int counter) {
+static inline void var_filter_block2d_bil_8x(const uint8_t *src_ptr,
+                                             int src_stride, int x_offset,
+                                             int y_offset, uint8_t *temp2,
+                                             int counter) {
   uint8_t *temp2_ptr = temp2;
   mips_reg l_counter = counter;
   double ftmp[15];
   mips_reg tmp[2];
   DECLARE_ALIGNED(8, const uint64_t, ff_ph_40) = { 0x0040004000400040ULL };
   DECLARE_ALIGNED(8, const uint64_t, mask) = { 0x00ff00ff00ff00ffULL };
-  const uint8_t *filter_x = bilinear_filters[xoffset];
-  const uint8_t *filter_y = bilinear_filters[yoffset];
+  const uint8_t *filter_x = bilinear_filters[x_offset];
+  const uint8_t *filter_y = bilinear_filters[y_offset];
 
   __asm__ volatile (
     "xor        %[ftmp0],   %[ftmp0],       %[ftmp0]            \n\t"
@@ -1112,26 +1122,26 @@
     // fdata3: fdata3[0] ~ fdata3[7]
     VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_8_A
 
-    // fdata3 +a_stride*1: fdata3[0] ~ fdata3[7]
-    MMI_ADDU(%[a], %[a], %[a_stride])
+    // fdata3 +src_stride*1: fdata3[0] ~ fdata3[7]
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
     VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_8_B
     // temp2: temp2[0] ~ temp2[7]
     VAR_FILTER_BLOCK2D_BIL_SECOND_PASS_8_A
 
-    // fdata3 +a_stride*2: fdata3[0] ~ fdata3[7]
-    MMI_ADDU(%[a], %[a], %[a_stride])
+    // fdata3 +src_stride*2: fdata3[0] ~ fdata3[7]
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
     VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_8_A
     // temp2+8*1: temp2[0] ~ temp2[7]
     MMI_ADDIU(%[temp2_ptr], %[temp2_ptr], 0x08)
     VAR_FILTER_BLOCK2D_BIL_SECOND_PASS_8_B
 
     "1:                                                         \n\t"
-    MMI_ADDU(%[a], %[a], %[a_stride])
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
     VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_8_B
     MMI_ADDIU(%[temp2_ptr], %[temp2_ptr], 0x08)
     VAR_FILTER_BLOCK2D_BIL_SECOND_PASS_8_A
 
-    MMI_ADDU(%[a], %[a], %[a_stride])
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
     VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_8_A
     MMI_ADDIU(%[temp2_ptr], %[temp2_ptr], 0x08)
     VAR_FILTER_BLOCK2D_BIL_SECOND_PASS_8_B
@@ -1143,44 +1153,45 @@
       [ftmp9] "=&f"(ftmp[9]), [ftmp10] "=&f"(ftmp[10]),
       [ftmp11] "=&f"(ftmp[11]), [ftmp12] "=&f"(ftmp[12]),
       [ftmp13] "=&f"(ftmp[13]), [ftmp14] "=&f"(ftmp[14]),
-      [tmp0] "=&r"(tmp[0]), [a] "+&r"(a), [temp2_ptr] "+&r"(temp2_ptr),
+      [tmp0] "=&r"(tmp[0]), [src_ptr] "+&r"(src_ptr), [temp2_ptr] "+&r"(temp2_ptr),
       [counter]"+&r"(l_counter)
     : [filter_x0] "f"((uint64_t)filter_x[0]),
       [filter_x1] "f"((uint64_t)filter_x[1]),
       [filter_y0] "f"((uint64_t)filter_y[0]),
       [filter_y1] "f"((uint64_t)filter_y[1]),
-      [a_stride] "r"((mips_reg)a_stride), [ff_ph_40] "f"(ff_ph_40),
+      [src_stride] "r"((mips_reg)src_stride), [ff_ph_40] "f"(ff_ph_40),
       [mask] "f"(mask)
     : "memory"
   );
 }
 
-#define SUBPIX_VAR8XN(H)                                            \
-  uint32_t vpx_sub_pixel_variance8x##H##_mmi(                       \
-      const uint8_t *a, int a_stride, int xoffset, int yoffset,     \
-      const uint8_t *b, int b_stride, uint32_t *sse) {              \
-    uint8_t temp2[8 * H];                                           \
-    var_filter_block2d_bil_8x(a, a_stride, xoffset, yoffset, temp2, \
-                              (H - 2) / 2);                         \
-                                                                    \
-    return vpx_variance8x##H##_mmi(temp2, 8, b, b_stride, sse);     \
+#define SUBPIX_VAR8XN(H)                                                      \
+  uint32_t vpx_sub_pixel_variance8x##H##_mmi(                                 \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,     \
+      const uint8_t *ref_ptr, int ref_stride, uint32_t *sse) {                \
+    uint8_t temp2[8 * (H)];                                                   \
+    var_filter_block2d_bil_8x(src_ptr, src_stride, x_offset, y_offset, temp2, \
+                              ((H)-2) / 2);                                   \
+                                                                              \
+    return vpx_variance8x##H##_mmi(temp2, 8, ref_ptr, ref_stride, sse);       \
   }
 
 SUBPIX_VAR8XN(16)
 SUBPIX_VAR8XN(8)
 SUBPIX_VAR8XN(4)
 
-static inline void var_filter_block2d_bil_4x(const uint8_t *a, int a_stride,
-                                             int xoffset, int yoffset,
-                                             uint8_t *temp2, int counter) {
+static inline void var_filter_block2d_bil_4x(const uint8_t *src_ptr,
+                                             int src_stride, int x_offset,
+                                             int y_offset, uint8_t *temp2,
+                                             int counter) {
   uint8_t *temp2_ptr = temp2;
   mips_reg l_counter = counter;
   double ftmp[7];
   mips_reg tmp[2];
   DECLARE_ALIGNED(8, const uint64_t, ff_ph_40) = { 0x0040004000400040ULL };
   DECLARE_ALIGNED(8, const uint64_t, mask) = { 0x00ff00ff00ff00ffULL };
-  const uint8_t *filter_x = bilinear_filters[xoffset];
-  const uint8_t *filter_y = bilinear_filters[yoffset];
+  const uint8_t *filter_x = bilinear_filters[x_offset];
+  const uint8_t *filter_y = bilinear_filters[y_offset];
 
   __asm__ volatile (
     "xor        %[ftmp0],   %[ftmp0],       %[ftmp0]            \n\t"
@@ -1193,26 +1204,26 @@
     // fdata3: fdata3[0] ~ fdata3[3]
     VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_4_A
 
-    // fdata3 +a_stride*1: fdata3[0] ~ fdata3[3]
-    MMI_ADDU(%[a], %[a], %[a_stride])
+    // fdata3 +src_stride*1: fdata3[0] ~ fdata3[3]
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
     VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_4_B
     // temp2: temp2[0] ~ temp2[7]
     VAR_FILTER_BLOCK2D_BIL_SECOND_PASS_4_A
 
-    // fdata3 +a_stride*2: fdata3[0] ~ fdata3[3]
-    MMI_ADDU(%[a], %[a], %[a_stride])
+    // fdata3 +src_stride*2: fdata3[0] ~ fdata3[3]
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
     VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_4_A
     // temp2+4*1: temp2[0] ~ temp2[7]
     MMI_ADDIU(%[temp2_ptr], %[temp2_ptr], 0x04)
     VAR_FILTER_BLOCK2D_BIL_SECOND_PASS_4_B
 
     "1:                                                         \n\t"
-    MMI_ADDU(%[a], %[a], %[a_stride])
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
     VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_4_B
     MMI_ADDIU(%[temp2_ptr], %[temp2_ptr], 0x04)
     VAR_FILTER_BLOCK2D_BIL_SECOND_PASS_4_A
 
-    MMI_ADDU(%[a], %[a], %[a_stride])
+    MMI_ADDU(%[src_ptr], %[src_ptr], %[src_stride])
     VAR_FILTER_BLOCK2D_BIL_FIRST_PASS_4_A
     MMI_ADDIU(%[temp2_ptr], %[temp2_ptr], 0x04)
     VAR_FILTER_BLOCK2D_BIL_SECOND_PASS_4_B
@@ -1220,49 +1231,49 @@
     "bnez       %[counter], 1b                                  \n\t"
     : [ftmp0] "=&f"(ftmp[0]), [ftmp1] "=&f"(ftmp[1]), [ftmp2] "=&f"(ftmp[2]),
       [ftmp3] "=&f"(ftmp[3]), [ftmp4] "=&f"(ftmp[4]), [ftmp5] "=&f"(ftmp[5]),
-      [ftmp6] "=&f"(ftmp[6]), [tmp0] "=&r"(tmp[0]), [a] "+&r"(a),
+      [ftmp6] "=&f"(ftmp[6]), [tmp0] "=&r"(tmp[0]), [src_ptr] "+&r"(src_ptr),
       [temp2_ptr] "+&r"(temp2_ptr), [counter]"+&r"(l_counter)
     : [filter_x0] "f"((uint64_t)filter_x[0]),
       [filter_x1] "f"((uint64_t)filter_x[1]),
       [filter_y0] "f"((uint64_t)filter_y[0]),
       [filter_y1] "f"((uint64_t)filter_y[1]),
-      [a_stride] "r"((mips_reg)a_stride), [ff_ph_40] "f"(ff_ph_40),
+      [src_stride] "r"((mips_reg)src_stride), [ff_ph_40] "f"(ff_ph_40),
       [mask] "f"(mask)
     : "memory"
   );
 }
 
-#define SUBPIX_VAR4XN(H)                                            \
-  uint32_t vpx_sub_pixel_variance4x##H##_mmi(                       \
-      const uint8_t *a, int a_stride, int xoffset, int yoffset,     \
-      const uint8_t *b, int b_stride, uint32_t *sse) {              \
-    uint8_t temp2[4 * H];                                           \
-    var_filter_block2d_bil_4x(a, a_stride, xoffset, yoffset, temp2, \
-                              (H - 2) / 2);                         \
-                                                                    \
-    return vpx_variance4x##H##_mmi(temp2, 4, b, b_stride, sse);     \
+#define SUBPIX_VAR4XN(H)                                                      \
+  uint32_t vpx_sub_pixel_variance4x##H##_mmi(                                 \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,     \
+      const uint8_t *ref_ptr, int ref_stride, uint32_t *sse) {                \
+    uint8_t temp2[4 * (H)];                                                   \
+    var_filter_block2d_bil_4x(src_ptr, src_stride, x_offset, y_offset, temp2, \
+                              ((H)-2) / 2);                                   \
+                                                                              \
+    return vpx_variance4x##H##_mmi(temp2, 4, ref_ptr, ref_stride, sse);       \
   }
 
 SUBPIX_VAR4XN(8)
 SUBPIX_VAR4XN(4)
 
-#define SUBPIX_AVG_VAR(W, H)                                            \
-  uint32_t vpx_sub_pixel_avg_variance##W##x##H##_mmi(                   \
-      const uint8_t *a, int a_stride, int xoffset, int yoffset,         \
-      const uint8_t *b, int b_stride, uint32_t *sse,                    \
-      const uint8_t *second_pred) {                                     \
-    uint16_t fdata3[(H + 1) * W];                                       \
-    uint8_t temp2[H * W];                                               \
-    DECLARE_ALIGNED(16, uint8_t, temp3[H * W]);                         \
-                                                                        \
-    var_filter_block2d_bil_first_pass(a, fdata3, a_stride, 1, H + 1, W, \
-                                      bilinear_filters[xoffset]);       \
-    var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,       \
-                                       bilinear_filters[yoffset]);      \
-                                                                        \
-    vpx_comp_avg_pred_c(temp3, second_pred, W, H, temp2, W);            \
-                                                                        \
-    return vpx_variance##W##x##H##_mmi(temp3, W, b, b_stride, sse);     \
+#define SUBPIX_AVG_VAR(W, H)                                                   \
+  uint32_t vpx_sub_pixel_avg_variance##W##x##H##_mmi(                          \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,      \
+      const uint8_t *ref_ptr, int ref_stride, uint32_t *sse,                   \
+      const uint8_t *second_pred) {                                            \
+    uint16_t fdata3[((H) + 1) * (W)];                                          \
+    uint8_t temp2[(H) * (W)];                                                  \
+    DECLARE_ALIGNED(16, uint8_t, temp3[(H) * (W)]);                            \
+                                                                               \
+    var_filter_block2d_bil_first_pass(src_ptr, fdata3, src_stride, 1, (H) + 1, \
+                                      W, bilinear_filters[x_offset]);          \
+    var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,              \
+                                       bilinear_filters[y_offset]);            \
+                                                                               \
+    vpx_comp_avg_pred_c(temp3, second_pred, W, H, temp2, W);                   \
+                                                                               \
+    return vpx_variance##W##x##H##_mmi(temp3, W, ref_ptr, ref_stride, sse);    \
   }
 
 SUBPIX_AVG_VAR(64, 64)
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/variance_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/variance_msa.c
index 49b2f99..444b086 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/variance_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/variance_msa.c
@@ -33,10 +33,11 @@
     sub += res_l0_m + res_l1_m;                                     \
   }
 
-#define VARIANCE_WxH(sse, diff, shift) sse - (((uint32_t)diff * diff) >> shift)
+#define VARIANCE_WxH(sse, diff, shift) \
+  (sse) - (((uint32_t)(diff) * (diff)) >> (shift))
 
 #define VARIANCE_LARGE_WxH(sse, diff, shift) \
-  sse - (((int64_t)diff * diff) >> shift)
+  (sse) - (((int64_t)(diff) * (diff)) >> (shift))
 
 static uint32_t sse_diff_4width_msa(const uint8_t *src_ptr, int32_t src_stride,
                                     const uint8_t *ref_ptr, int32_t ref_stride,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_avg_horiz_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_avg_horiz_msa.c
index 187a013..5b5a1cb 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_avg_horiz_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_avg_horiz_msa.c
@@ -658,7 +658,7 @@
     filt_hor[cnt] = filter_x[cnt];
   }
 
-  if (((const int32_t *)filter_x)[0] == 0) {
+  if (vpx_get_filter_taps(filter_x) == 2) {
     switch (w) {
       case 4:
         common_hz_2t_and_aver_dst_4w_msa(src, (int32_t)src_stride, dst,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_avg_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_avg_msa.c
index 5187cea..ba81619 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_avg_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_avg_msa.c
@@ -538,8 +538,8 @@
     filt_ver[cnt] = filter_y[cnt];
   }
 
-  if (((const int32_t *)filter_x)[0] == 0 &&
-      ((const int32_t *)filter_y)[0] == 0) {
+  if (vpx_get_filter_taps(filter_x) == 2 &&
+      vpx_get_filter_taps(filter_y) == 2) {
     switch (w) {
       case 4:
         common_hv_2ht_2vt_and_aver_dst_4w_msa(src, (int32_t)src_stride, dst,
@@ -571,8 +571,8 @@
                             x_step_q4, y0_q4, y_step_q4, w, h);
         break;
     }
-  } else if (((const int32_t *)filter_x)[0] == 0 ||
-             ((const int32_t *)filter_y)[0] == 0) {
+  } else if (vpx_get_filter_taps(filter_x) == 2 ||
+             vpx_get_filter_taps(filter_y) == 2) {
     vpx_convolve8_avg_c(src, src_stride, dst, dst_stride, filter, x0_q4,
                         x_step_q4, y0_q4, y_step_q4, w, h);
   } else {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_avg_vert_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_avg_vert_msa.c
index ef8c901..e6a790d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_avg_vert_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_avg_vert_msa.c
@@ -625,7 +625,7 @@
     filt_ver[cnt] = filter_y[cnt];
   }
 
-  if (((const int32_t *)filter_y)[0] == 0) {
+  if (vpx_get_filter_taps(filter_y) == 2) {
     switch (w) {
       case 4:
         common_vt_2t_and_aver_dst_4w_msa(src, (int32_t)src_stride, dst,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_horiz_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_horiz_msa.c
index 152dc26..792c0f7 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_horiz_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_horiz_msa.c
@@ -634,7 +634,7 @@
     filt_hor[cnt] = filter_x[cnt];
   }
 
-  if (((const int32_t *)filter_x)[0] == 0) {
+  if (vpx_get_filter_taps(filter_x) == 2) {
     switch (w) {
       case 4:
         common_hz_2t_4w_msa(src, (int32_t)src_stride, dst, (int32_t)dst_stride,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_mmi.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_mmi.c
new file mode 100644
index 0000000..ba9ceb8
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_mmi.c
@@ -0,0 +1,716 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <assert.h>
+#include <string.h>
+
+#include "./vpx_config.h"
+#include "./vpx_dsp_rtcd.h"
+#include "vpx/vpx_integer.h"
+#include "vpx_dsp/vpx_dsp_common.h"
+#include "vpx_dsp/vpx_filter.h"
+#include "vpx_ports/asmdefs_mmi.h"
+#include "vpx_ports/mem.h"
+
+#define GET_DATA_H_MMI                                     \
+  "pmaddhw    %[ftmp4],    %[ftmp4],   %[filter1]    \n\t" \
+  "pmaddhw    %[ftmp5],    %[ftmp5],   %[filter2]    \n\t" \
+  "paddw      %[ftmp4],    %[ftmp4],   %[ftmp5]      \n\t" \
+  "punpckhwd  %[ftmp5],    %[ftmp4],   %[ftmp0]      \n\t" \
+  "paddw      %[ftmp4],    %[ftmp4],   %[ftmp5]      \n\t" \
+  "pmaddhw    %[ftmp6],    %[ftmp6],   %[filter1]    \n\t" \
+  "pmaddhw    %[ftmp7],    %[ftmp7],   %[filter2]    \n\t" \
+  "paddw      %[ftmp6],    %[ftmp6],   %[ftmp7]      \n\t" \
+  "punpckhwd  %[ftmp7],    %[ftmp6],   %[ftmp0]      \n\t" \
+  "paddw      %[ftmp6],    %[ftmp6],   %[ftmp7]      \n\t" \
+  "punpcklwd  %[srcl],     %[ftmp4],   %[ftmp6]      \n\t" \
+  "pmaddhw    %[ftmp8],    %[ftmp8],   %[filter1]    \n\t" \
+  "pmaddhw    %[ftmp9],    %[ftmp9],   %[filter2]    \n\t" \
+  "paddw      %[ftmp8],    %[ftmp8],   %[ftmp9]      \n\t" \
+  "punpckhwd  %[ftmp9],    %[ftmp8],   %[ftmp0]      \n\t" \
+  "paddw      %[ftmp8],    %[ftmp8],   %[ftmp9]      \n\t" \
+  "pmaddhw    %[ftmp10],   %[ftmp10],  %[filter1]    \n\t" \
+  "pmaddhw    %[ftmp11],   %[ftmp11],  %[filter2]    \n\t" \
+  "paddw      %[ftmp10],   %[ftmp10],  %[ftmp11]     \n\t" \
+  "punpckhwd  %[ftmp11],   %[ftmp10],  %[ftmp0]      \n\t" \
+  "paddw      %[ftmp10],   %[ftmp10],  %[ftmp11]     \n\t" \
+  "punpcklwd  %[srch],     %[ftmp8],   %[ftmp10]     \n\t"
+
+#define GET_DATA_V_MMI                                     \
+  "punpcklhw  %[srcl],     %[ftmp4],   %[ftmp5]      \n\t" \
+  "pmaddhw    %[srcl],     %[srcl],    %[filter10]   \n\t" \
+  "punpcklhw  %[ftmp12],   %[ftmp6],   %[ftmp7]      \n\t" \
+  "pmaddhw    %[ftmp12],   %[ftmp12],  %[filter32]   \n\t" \
+  "paddw      %[srcl],     %[srcl],    %[ftmp12]     \n\t" \
+  "punpcklhw  %[ftmp12],   %[ftmp8],   %[ftmp9]      \n\t" \
+  "pmaddhw    %[ftmp12],   %[ftmp12],  %[filter54]   \n\t" \
+  "paddw      %[srcl],     %[srcl],    %[ftmp12]     \n\t" \
+  "punpcklhw  %[ftmp12],   %[ftmp10],  %[ftmp11]     \n\t" \
+  "pmaddhw    %[ftmp12],   %[ftmp12],  %[filter76]   \n\t" \
+  "paddw      %[srcl],     %[srcl],    %[ftmp12]     \n\t" \
+  "punpckhhw  %[srch],     %[ftmp4],   %[ftmp5]      \n\t" \
+  "pmaddhw    %[srch],     %[srch],    %[filter10]   \n\t" \
+  "punpckhhw  %[ftmp12],   %[ftmp6],   %[ftmp7]      \n\t" \
+  "pmaddhw    %[ftmp12],   %[ftmp12],  %[filter32]   \n\t" \
+  "paddw      %[srch],     %[srch],    %[ftmp12]     \n\t" \
+  "punpckhhw  %[ftmp12],   %[ftmp8],   %[ftmp9]      \n\t" \
+  "pmaddhw    %[ftmp12],   %[ftmp12],  %[filter54]   \n\t" \
+  "paddw      %[srch],     %[srch],    %[ftmp12]     \n\t" \
+  "punpckhhw  %[ftmp12],   %[ftmp10],  %[ftmp11]     \n\t" \
+  "pmaddhw    %[ftmp12],   %[ftmp12],  %[filter76]   \n\t" \
+  "paddw      %[srch],     %[srch],    %[ftmp12]     \n\t"
+
+/* clang-format off */
+#define ROUND_POWER_OF_TWO_MMI                             \
+  /* Add para[0] */                                        \
+  "lw         %[tmp0],     0x00(%[para])             \n\t" \
+  MMI_MTC1(%[tmp0],     %[ftmp6])                          \
+  "punpcklwd  %[ftmp6],    %[ftmp6],    %[ftmp6]     \n\t" \
+  "paddw      %[srcl],     %[srcl],     %[ftmp6]     \n\t" \
+  "paddw      %[srch],     %[srch],     %[ftmp6]     \n\t" \
+  /* Arithmetic right shift para[1] bits */                \
+  "lw         %[tmp0],     0x04(%[para])             \n\t" \
+  MMI_MTC1(%[tmp0],     %[ftmp5])                          \
+  "psraw      %[srcl],     %[srcl],     %[ftmp5]     \n\t" \
+  "psraw      %[srch],     %[srch],     %[ftmp5]     \n\t"
+/* clang-format on */
+
+#define CLIP_PIXEL_MMI                                     \
+  /* Saturated operation */                                \
+  "packsswh   %[srcl],     %[srcl],     %[srch]      \n\t" \
+  "packushb   %[ftmp12],   %[srcl],     %[ftmp0]     \n\t"
+
+static void convolve_horiz_mmi(const uint8_t *src, ptrdiff_t src_stride,
+                               uint8_t *dst, ptrdiff_t dst_stride,
+                               const InterpKernel *filter, int x0_q4,
+                               int x_step_q4, int32_t w, int32_t h) {
+  const int16_t *filter_x = filter[x0_q4];
+  double ftmp[14];
+  uint32_t tmp[2];
+  uint32_t para[5];
+  para[0] = (1 << ((FILTER_BITS)-1));
+  para[1] = FILTER_BITS;
+  src -= SUBPEL_TAPS / 2 - 1;
+  src_stride -= w;
+  dst_stride -= w;
+  (void)x_step_q4;
+
+  /* clang-format off */
+  __asm__ volatile(
+    "move       %[tmp1],    %[width]                   \n\t"
+    "xor        %[ftmp0],   %[ftmp0],    %[ftmp0]      \n\t"
+    "gsldlc1    %[filter1], 0x03(%[filter])            \n\t"
+    "gsldrc1    %[filter1], 0x00(%[filter])            \n\t"
+    "gsldlc1    %[filter2], 0x0b(%[filter])            \n\t"
+    "gsldrc1    %[filter2], 0x08(%[filter])            \n\t"
+    "1:                                                \n\t"
+    /* Load 8 bytes per row */
+    "gsldlc1    %[ftmp5],   0x07(%[src])               \n\t"
+    "gsldrc1    %[ftmp5],   0x00(%[src])               \n\t"
+    "gsldlc1    %[ftmp7],   0x08(%[src])               \n\t"
+    "gsldrc1    %[ftmp7],   0x01(%[src])               \n\t"
+    "gsldlc1    %[ftmp9],   0x09(%[src])               \n\t"
+    "gsldrc1    %[ftmp9],   0x02(%[src])               \n\t"
+    "gsldlc1    %[ftmp11],  0x0A(%[src])               \n\t"
+    "gsldrc1    %[ftmp11],  0x03(%[src])               \n\t"
+    "punpcklbh  %[ftmp4],   %[ftmp5],    %[ftmp0]      \n\t"
+    "punpckhbh  %[ftmp5],   %[ftmp5],    %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp6],   %[ftmp7],    %[ftmp0]      \n\t"
+    "punpckhbh  %[ftmp7],   %[ftmp7],    %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp8],   %[ftmp9],    %[ftmp0]      \n\t"
+    "punpckhbh  %[ftmp9],   %[ftmp9],    %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp10],  %[ftmp11],   %[ftmp0]      \n\t"
+    "punpckhbh  %[ftmp11],  %[ftmp11],   %[ftmp0]      \n\t"
+    MMI_ADDIU(%[width],   %[width],    -0x04)
+    /* Get raw data */
+    GET_DATA_H_MMI
+    ROUND_POWER_OF_TWO_MMI
+    CLIP_PIXEL_MMI
+    "swc1       %[ftmp12],  0x00(%[dst])               \n\t"
+    MMI_ADDIU(%[dst],     %[dst],      0x04)
+    MMI_ADDIU(%[src],     %[src],      0x04)
+    /* Loop count */
+    "bnez       %[width],   1b                         \n\t"
+    "move       %[width],   %[tmp1]                    \n\t"
+    MMI_ADDU(%[src],      %[src],      %[src_stride])
+    MMI_ADDU(%[dst],      %[dst],      %[dst_stride])
+    MMI_ADDIU(%[height],  %[height],   -0x01)
+    "bnez       %[height],  1b                         \n\t"
+    : [srcl]"=&f"(ftmp[0]),     [srch]"=&f"(ftmp[1]),
+      [filter1]"=&f"(ftmp[2]),  [filter2]"=&f"(ftmp[3]),
+      [ftmp0]"=&f"(ftmp[4]),    [ftmp4]"=&f"(ftmp[5]),
+      [ftmp5]"=&f"(ftmp[6]),    [ftmp6]"=&f"(ftmp[7]),
+      [ftmp7]"=&f"(ftmp[8]),    [ftmp8]"=&f"(ftmp[9]),
+      [ftmp9]"=&f"(ftmp[10]),   [ftmp10]"=&f"(ftmp[11]),
+      [ftmp11]"=&f"(ftmp[12]),  [ftmp12]"=&f"(ftmp[13]),
+      [tmp0]"=&r"(tmp[0]),      [tmp1]"=&r"(tmp[1]),
+      [src]"+&r"(src),          [width]"+&r"(w),
+      [dst]"+&r"(dst),          [height]"+&r"(h)
+    : [filter]"r"(filter_x),    [para]"r"(para),
+      [src_stride]"r"((mips_reg)src_stride),
+      [dst_stride]"r"((mips_reg)dst_stride)
+    : "memory"
+  );
+  /* clang-format on */
+}
+
+static void convolve_vert_mmi(const uint8_t *src, ptrdiff_t src_stride,
+                              uint8_t *dst, ptrdiff_t dst_stride,
+                              const InterpKernel *filter, int y0_q4,
+                              int y_step_q4, int32_t w, int32_t h) {
+  const int16_t *filter_y = filter[y0_q4];
+  double ftmp[16];
+  uint32_t tmp[1];
+  uint32_t para[2];
+  ptrdiff_t addr = src_stride;
+  para[0] = (1 << ((FILTER_BITS)-1));
+  para[1] = FILTER_BITS;
+  src -= src_stride * (SUBPEL_TAPS / 2 - 1);
+  src_stride -= w;
+  dst_stride -= w;
+  (void)y_step_q4;
+
+  __asm__ volatile(
+    "xor        %[ftmp0],    %[ftmp0],   %[ftmp0]      \n\t"
+    "gsldlc1    %[ftmp4],    0x03(%[filter])           \n\t"
+    "gsldrc1    %[ftmp4],    0x00(%[filter])           \n\t"
+    "gsldlc1    %[ftmp5],    0x0b(%[filter])           \n\t"
+    "gsldrc1    %[ftmp5],    0x08(%[filter])           \n\t"
+    "punpcklwd  %[filter10], %[ftmp4],   %[ftmp4]      \n\t"
+    "punpckhwd  %[filter32], %[ftmp4],   %[ftmp4]      \n\t"
+    "punpcklwd  %[filter54], %[ftmp5],   %[ftmp5]      \n\t"
+    "punpckhwd  %[filter76], %[ftmp5],   %[ftmp5]      \n\t"
+    "1:                                                \n\t"
+    /* Load 8 bytes from each of 8 successive rows */
+    "gsldlc1    %[ftmp4],    0x07(%[src])              \n\t"
+    "gsldrc1    %[ftmp4],    0x00(%[src])              \n\t"
+    MMI_ADDU(%[tmp0],     %[src],     %[addr])
+    "gsldlc1    %[ftmp5],    0x07(%[tmp0])             \n\t"
+    "gsldrc1    %[ftmp5],    0x00(%[tmp0])             \n\t"
+    MMI_ADDU(%[tmp0],     %[tmp0],    %[addr])
+    "gsldlc1    %[ftmp6],    0x07(%[tmp0])             \n\t"
+    "gsldrc1    %[ftmp6],    0x00(%[tmp0])             \n\t"
+    MMI_ADDU(%[tmp0],     %[tmp0],    %[addr])
+    "gsldlc1    %[ftmp7],    0x07(%[tmp0])             \n\t"
+    "gsldrc1    %[ftmp7],    0x00(%[tmp0])             \n\t"
+    MMI_ADDU(%[tmp0],     %[tmp0],    %[addr])
+    "gsldlc1    %[ftmp8],    0x07(%[tmp0])             \n\t"
+    "gsldrc1    %[ftmp8],    0x00(%[tmp0])             \n\t"
+    MMI_ADDU(%[tmp0],     %[tmp0],    %[addr])
+    "gsldlc1    %[ftmp9],    0x07(%[tmp0])             \n\t"
+    "gsldrc1    %[ftmp9],    0x00(%[tmp0])             \n\t"
+    MMI_ADDU(%[tmp0],     %[tmp0],    %[addr])
+    "gsldlc1    %[ftmp10],   0x07(%[tmp0])             \n\t"
+    "gsldrc1    %[ftmp10],   0x00(%[tmp0])             \n\t"
+    MMI_ADDU(%[tmp0],     %[tmp0],    %[addr])
+    "gsldlc1    %[ftmp11],   0x07(%[tmp0])             \n\t"
+    "gsldrc1    %[ftmp11],   0x00(%[tmp0])             \n\t"
+    "punpcklbh  %[ftmp4],    %[ftmp4],   %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp5],    %[ftmp5],   %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp6],    %[ftmp6],   %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp7],    %[ftmp7],   %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp8],    %[ftmp8],   %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp9],    %[ftmp9],   %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp10],   %[ftmp10],  %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp11],   %[ftmp11],  %[ftmp0]      \n\t"
+    MMI_ADDIU(%[width],   %[width],   -0x04)
+    /* Get raw data */
+    GET_DATA_V_MMI
+    ROUND_POWER_OF_TWO_MMI
+    CLIP_PIXEL_MMI
+    "swc1       %[ftmp12],   0x00(%[dst])              \n\t"
+    MMI_ADDIU(%[dst],     %[dst],      0x04)
+    MMI_ADDIU(%[src],     %[src],      0x04)
+    /* Loop count */
+    "bnez       %[width],    1b                        \n\t"
+    MMI_SUBU(%[width],    %[addr],     %[src_stride])
+    MMI_ADDU(%[src],      %[src],      %[src_stride])
+    MMI_ADDU(%[dst],      %[dst],      %[dst_stride])
+    MMI_ADDIU(%[height],  %[height],   -0x01)
+    "bnez       %[height],   1b                        \n\t"
+    : [srcl]"=&f"(ftmp[0]),     [srch]"=&f"(ftmp[1]),
+      [filter10]"=&f"(ftmp[2]), [filter32]"=&f"(ftmp[3]),
+      [filter54]"=&f"(ftmp[4]), [filter76]"=&f"(ftmp[5]),
+      [ftmp0]"=&f"(ftmp[6]),    [ftmp4]"=&f"(ftmp[7]),
+      [ftmp5]"=&f"(ftmp[8]),    [ftmp6]"=&f"(ftmp[9]),
+      [ftmp7]"=&f"(ftmp[10]),   [ftmp8]"=&f"(ftmp[11]),
+      [ftmp9]"=&f"(ftmp[12]),   [ftmp10]"=&f"(ftmp[13]),
+      [ftmp11]"=&f"(ftmp[14]),  [ftmp12]"=&f"(ftmp[15]),
+      [src]"+&r"(src),          [dst]"+&r"(dst),
+      [width]"+&r"(w),          [height]"+&r"(h),
+      [tmp0]"=&r"(tmp[0])
+    : [filter]"r"(filter_y),    [para]"r"(para),
+      [src_stride]"r"((mips_reg)src_stride),
+      [dst_stride]"r"((mips_reg)dst_stride),
+      [addr]"r"((mips_reg)addr)
+    : "memory"
+  );
+}
+
+static void convolve_avg_horiz_mmi(const uint8_t *src, ptrdiff_t src_stride,
+                                   uint8_t *dst, ptrdiff_t dst_stride,
+                                   const InterpKernel *filter, int x0_q4,
+                                   int x_step_q4, int32_t w, int32_t h) {
+  const int16_t *filter_x = filter[x0_q4];
+  double ftmp[14];
+  uint32_t tmp[2];
+  uint32_t para[2];
+  para[0] = (1 << ((FILTER_BITS)-1));
+  para[1] = FILTER_BITS;
+  src -= SUBPEL_TAPS / 2 - 1;
+  src_stride -= w;
+  dst_stride -= w;
+  (void)x_step_q4;
+
+  __asm__ volatile(
+    "move       %[tmp1],    %[width]                   \n\t"
+    "xor        %[ftmp0],   %[ftmp0],    %[ftmp0]      \n\t"
+    "gsldlc1    %[filter1], 0x03(%[filter])            \n\t"
+    "gsldrc1    %[filter1], 0x00(%[filter])            \n\t"
+    "gsldlc1    %[filter2], 0x0b(%[filter])            \n\t"
+    "gsldrc1    %[filter2], 0x08(%[filter])            \n\t"
+    "1:                                                \n\t"
+    /* Load 8 bytes per row */
+    "gsldlc1    %[ftmp5],   0x07(%[src])               \n\t"
+    "gsldrc1    %[ftmp5],   0x00(%[src])               \n\t"
+    "gsldlc1    %[ftmp7],   0x08(%[src])               \n\t"
+    "gsldrc1    %[ftmp7],   0x01(%[src])               \n\t"
+    "gsldlc1    %[ftmp9],   0x09(%[src])               \n\t"
+    "gsldrc1    %[ftmp9],   0x02(%[src])               \n\t"
+    "gsldlc1    %[ftmp11],  0x0A(%[src])               \n\t"
+    "gsldrc1    %[ftmp11],  0x03(%[src])               \n\t"
+    "punpcklbh  %[ftmp4],   %[ftmp5],    %[ftmp0]      \n\t"
+    "punpckhbh  %[ftmp5],   %[ftmp5],    %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp6],   %[ftmp7],    %[ftmp0]      \n\t"
+    "punpckhbh  %[ftmp7],   %[ftmp7],    %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp8],   %[ftmp9],    %[ftmp0]      \n\t"
+    "punpckhbh  %[ftmp9],   %[ftmp9],    %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp10],  %[ftmp11],   %[ftmp0]      \n\t"
+    "punpckhbh  %[ftmp11],  %[ftmp11],   %[ftmp0]      \n\t"
+    MMI_ADDIU(%[width],   %[width],    -0x04)
+    /* Get raw data */
+    GET_DATA_H_MMI
+    ROUND_POWER_OF_TWO_MMI
+    CLIP_PIXEL_MMI
+    "punpcklbh  %[ftmp12],  %[ftmp12],   %[ftmp0]      \n\t"
+    "gsldlc1    %[ftmp4],   0x07(%[dst])               \n\t"
+    "gsldrc1    %[ftmp4],   0x00(%[dst])               \n\t"
+    "punpcklbh  %[ftmp4],   %[ftmp4],    %[ftmp0]      \n\t"
+    "paddh      %[ftmp12],  %[ftmp12],   %[ftmp4]      \n\t"
+    "li         %[tmp0],    0x10001                    \n\t"
+    MMI_MTC1(%[tmp0],     %[ftmp5])
+    "punpcklhw  %[ftmp5],   %[ftmp5],    %[ftmp5]      \n\t"
+    "paddh      %[ftmp12],  %[ftmp12],   %[ftmp5]      \n\t"
+    "psrah      %[ftmp12],  %[ftmp12],   %[ftmp5]      \n\t"
+    "packushb   %[ftmp12],  %[ftmp12],   %[ftmp0]      \n\t"
+    "swc1       %[ftmp12],  0x00(%[dst])               \n\t"
+    MMI_ADDIU(%[dst],     %[dst],      0x04)
+    MMI_ADDIU(%[src],     %[src],      0x04)
+    /* Loop count */
+    "bnez       %[width],   1b                         \n\t"
+    "move       %[width],   %[tmp1]                    \n\t"
+    MMI_ADDU(%[src],      %[src],      %[src_stride])
+    MMI_ADDU(%[dst],      %[dst],      %[dst_stride])
+    MMI_ADDIU(%[height],  %[height],   -0x01)
+    "bnez       %[height],  1b                         \n\t"
+    : [srcl]"=&f"(ftmp[0]),     [srch]"=&f"(ftmp[1]),
+      [filter1]"=&f"(ftmp[2]),  [filter2]"=&f"(ftmp[3]),
+      [ftmp0]"=&f"(ftmp[4]),    [ftmp4]"=&f"(ftmp[5]),
+      [ftmp5]"=&f"(ftmp[6]),    [ftmp6]"=&f"(ftmp[7]),
+      [ftmp7]"=&f"(ftmp[8]),    [ftmp8]"=&f"(ftmp[9]),
+      [ftmp9]"=&f"(ftmp[10]),   [ftmp10]"=&f"(ftmp[11]),
+      [ftmp11]"=&f"(ftmp[12]),  [ftmp12]"=&f"(ftmp[13]),
+      [tmp0]"=&r"(tmp[0]),      [tmp1]"=&r"(tmp[1]),
+      [src]"+&r"(src),          [width]"+&r"(w),
+      [dst]"+&r"(dst),          [height]"+&r"(h)
+    : [filter]"r"(filter_x),    [para]"r"(para),
+      [src_stride]"r"((mips_reg)src_stride),
+      [dst_stride]"r"((mips_reg)dst_stride)
+    : "memory"
+  );
+}
+
+static void convolve_avg_vert_mmi(const uint8_t *src, ptrdiff_t src_stride,
+                                  uint8_t *dst, ptrdiff_t dst_stride,
+                                  const InterpKernel *filter, int y0_q4,
+                                  int y_step_q4, int32_t w, int32_t h) {
+  const int16_t *filter_y = filter[y0_q4];
+  double ftmp[16];
+  uint32_t tmp[1];
+  uint32_t para[2];
+  ptrdiff_t addr = src_stride;
+  para[0] = (1 << ((FILTER_BITS)-1));
+  para[1] = FILTER_BITS;
+  src -= src_stride * (SUBPEL_TAPS / 2 - 1);
+  src_stride -= w;
+  dst_stride -= w;
+  (void)y_step_q4;
+
+  __asm__ volatile(
+    "xor        %[ftmp0],    %[ftmp0],   %[ftmp0]      \n\t"
+    "gsldlc1    %[ftmp4],    0x03(%[filter])           \n\t"
+    "gsldrc1    %[ftmp4],    0x00(%[filter])           \n\t"
+    "gsldlc1    %[ftmp5],    0x0b(%[filter])           \n\t"
+    "gsldrc1    %[ftmp5],    0x08(%[filter])           \n\t"
+    "punpcklwd  %[filter10], %[ftmp4],   %[ftmp4]      \n\t"
+    "punpckhwd  %[filter32], %[ftmp4],   %[ftmp4]      \n\t"
+    "punpcklwd  %[filter54], %[ftmp5],   %[ftmp5]      \n\t"
+    "punpckhwd  %[filter76], %[ftmp5],   %[ftmp5]      \n\t"
+    "1:                                                \n\t"
+    /* Load 8 bytes from each of 8 successive rows */
+    "gsldlc1    %[ftmp4],    0x07(%[src])              \n\t"
+    "gsldrc1    %[ftmp4],    0x00(%[src])              \n\t"
+    MMI_ADDU(%[tmp0],     %[src],     %[addr])
+    "gsldlc1    %[ftmp5],    0x07(%[tmp0])             \n\t"
+    "gsldrc1    %[ftmp5],    0x00(%[tmp0])             \n\t"
+    MMI_ADDU(%[tmp0],     %[tmp0],    %[addr])
+    "gsldlc1    %[ftmp6],    0x07(%[tmp0])             \n\t"
+    "gsldrc1    %[ftmp6],    0x00(%[tmp0])             \n\t"
+    MMI_ADDU(%[tmp0],     %[tmp0],    %[addr])
+    "gsldlc1    %[ftmp7],    0x07(%[tmp0])             \n\t"
+    "gsldrc1    %[ftmp7],    0x00(%[tmp0])             \n\t"
+    MMI_ADDU(%[tmp0],     %[tmp0],    %[addr])
+    "gsldlc1    %[ftmp8],    0x07(%[tmp0])             \n\t"
+    "gsldrc1    %[ftmp8],    0x00(%[tmp0])             \n\t"
+    MMI_ADDU(%[tmp0],     %[tmp0],    %[addr])
+    "gsldlc1    %[ftmp9],    0x07(%[tmp0])             \n\t"
+    "gsldrc1    %[ftmp9],    0x00(%[tmp0])             \n\t"
+    MMI_ADDU(%[tmp0],     %[tmp0],    %[addr])
+    "gsldlc1    %[ftmp10],   0x07(%[tmp0])             \n\t"
+    "gsldrc1    %[ftmp10],   0x00(%[tmp0])             \n\t"
+    MMI_ADDU(%[tmp0],     %[tmp0],    %[addr])
+    "gsldlc1    %[ftmp11],   0x07(%[tmp0])             \n\t"
+    "gsldrc1    %[ftmp11],   0x00(%[tmp0])             \n\t"
+    "punpcklbh  %[ftmp4],    %[ftmp4],   %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp5],    %[ftmp5],   %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp6],    %[ftmp6],   %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp7],    %[ftmp7],   %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp8],    %[ftmp8],   %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp9],    %[ftmp9],   %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp10],   %[ftmp10],  %[ftmp0]      \n\t"
+    "punpcklbh  %[ftmp11],   %[ftmp11],  %[ftmp0]      \n\t"
+    MMI_ADDIU(%[width],   %[width],   -0x04)
+    /* Get raw data */
+    GET_DATA_V_MMI
+    ROUND_POWER_OF_TWO_MMI
+    CLIP_PIXEL_MMI
+    "punpcklbh  %[ftmp12],   %[ftmp12],  %[ftmp0]      \n\t"
+    "gsldlc1    %[ftmp4],    0x07(%[dst])              \n\t"
+    "gsldrc1    %[ftmp4],    0x00(%[dst])              \n\t"
+    "punpcklbh  %[ftmp4],    %[ftmp4],   %[ftmp0]      \n\t"
+    "paddh      %[ftmp12],   %[ftmp12],  %[ftmp4]      \n\t"
+    "li         %[tmp0],     0x10001                   \n\t"
+    MMI_MTC1(%[tmp0],     %[ftmp5])
+    "punpcklhw  %[ftmp5],    %[ftmp5],   %[ftmp5]      \n\t"
+    "paddh      %[ftmp12],   %[ftmp12],  %[ftmp5]      \n\t"
+    "psrah      %[ftmp12],   %[ftmp12],  %[ftmp5]      \n\t"
+    "packushb   %[ftmp12],   %[ftmp12],  %[ftmp0]      \n\t"
+    "swc1       %[ftmp12],   0x00(%[dst])              \n\t"
+    MMI_ADDIU(%[dst],     %[dst],      0x04)
+    MMI_ADDIU(%[src],     %[src],      0x04)
+    /* Loop count */
+    "bnez       %[width],    1b                        \n\t"
+    MMI_SUBU(%[width],    %[addr],     %[src_stride])
+    MMI_ADDU(%[src],      %[src],      %[src_stride])
+    MMI_ADDU(%[dst],      %[dst],      %[dst_stride])
+    MMI_ADDIU(%[height],  %[height],   -0x01)
+    "bnez       %[height],   1b                        \n\t"
+    : [srcl]"=&f"(ftmp[0]),     [srch]"=&f"(ftmp[1]),
+      [filter10]"=&f"(ftmp[2]), [filter32]"=&f"(ftmp[3]),
+      [filter54]"=&f"(ftmp[4]), [filter76]"=&f"(ftmp[5]),
+      [ftmp0]"=&f"(ftmp[6]),    [ftmp4]"=&f"(ftmp[7]),
+      [ftmp5]"=&f"(ftmp[8]),    [ftmp6]"=&f"(ftmp[9]),
+      [ftmp7]"=&f"(ftmp[10]),   [ftmp8]"=&f"(ftmp[11]),
+      [ftmp9]"=&f"(ftmp[12]),   [ftmp10]"=&f"(ftmp[13]),
+      [ftmp11]"=&f"(ftmp[14]),  [ftmp12]"=&f"(ftmp[15]),
+      [src]"+&r"(src),          [dst]"+&r"(dst),
+      [width]"+&r"(w),          [height]"+&r"(h),
+      [tmp0]"=&r"(tmp[0])
+    : [filter]"r"(filter_y),    [para]"r"(para),
+      [src_stride]"r"((mips_reg)src_stride),
+      [dst_stride]"r"((mips_reg)dst_stride),
+      [addr]"r"((mips_reg)addr)
+    : "memory"
+  );
+}
+
+void vpx_convolve_avg_mmi(const uint8_t *src, ptrdiff_t src_stride,
+                          uint8_t *dst, ptrdiff_t dst_stride,
+                          const InterpKernel *filter, int x0_q4, int x_step_q4,
+                          int y0_q4, int y_step_q4, int w, int h) {
+  int x, y;
+
+  (void)filter;
+  (void)x0_q4;
+  (void)x_step_q4;
+  (void)y0_q4;
+  (void)y_step_q4;
+
+  if (w & 0x03) {
+    for (y = 0; y < h; ++y) {
+      for (x = 0; x < w; ++x) dst[x] = ROUND_POWER_OF_TWO(dst[x] + src[x], 1);
+      src += src_stride;
+      dst += dst_stride;
+    }
+  } else {
+    double ftmp[4];
+    uint32_t tmp[2];
+    src_stride -= w;
+    dst_stride -= w;
+
+    __asm__ volatile(
+      "move       %[tmp1],    %[width]                  \n\t"
+      "xor        %[ftmp0],   %[ftmp0],   %[ftmp0]      \n\t"
+      "li         %[tmp0],    0x10001                   \n\t"
+      MMI_MTC1(%[tmp0],    %[ftmp3])
+      "punpcklhw  %[ftmp3],   %[ftmp3],   %[ftmp3]      \n\t"
+      "1:                                               \n\t"
+      "gsldlc1    %[ftmp1],   0x07(%[src])              \n\t"
+      "gsldrc1    %[ftmp1],   0x00(%[src])              \n\t"
+      "gsldlc1    %[ftmp2],   0x07(%[dst])              \n\t"
+      "gsldrc1    %[ftmp2],   0x00(%[dst])              \n\t"
+      "punpcklbh  %[ftmp1],   %[ftmp1],   %[ftmp0]      \n\t"
+      "punpcklbh  %[ftmp2],   %[ftmp2],   %[ftmp0]      \n\t"
+      "paddh      %[ftmp1],   %[ftmp1],   %[ftmp2]      \n\t"
+      "paddh      %[ftmp1],   %[ftmp1],   %[ftmp3]      \n\t"
+      "psrah      %[ftmp1],   %[ftmp1],   %[ftmp3]      \n\t"
+      "packushb   %[ftmp1],   %[ftmp1],   %[ftmp0]      \n\t"
+      "swc1       %[ftmp1],   0x00(%[dst])              \n\t"
+      MMI_ADDIU(%[width],  %[width],   -0x04)
+      MMI_ADDIU(%[dst],    %[dst],     0x04)
+      MMI_ADDIU(%[src],    %[src],     0x04)
+      "bnez       %[width],   1b                        \n\t"
+      "move       %[width],   %[tmp1]                   \n\t"
+      MMI_ADDU(%[dst],     %[dst],     %[dst_stride])
+      MMI_ADDU(%[src],     %[src],     %[src_stride])
+      MMI_ADDIU(%[height], %[height],  -0x01)
+      "bnez       %[height],  1b                        \n\t"
+      : [ftmp0]"=&f"(ftmp[0]),  [ftmp1]"=&f"(ftmp[1]),
+        [ftmp2]"=&f"(ftmp[2]),  [ftmp3]"=&f"(ftmp[3]),
+        [tmp0]"=&r"(tmp[0]),    [tmp1]"=&r"(tmp[1]),
+        [src]"+&r"(src),        [dst]"+&r"(dst),
+        [width]"+&r"(w),        [height]"+&r"(h)
+      : [src_stride]"r"((mips_reg)src_stride),
+        [dst_stride]"r"((mips_reg)dst_stride)
+      : "memory"
+    );
+  }
+}
+
+static void convolve_horiz(const uint8_t *src, ptrdiff_t src_stride,
+                           uint8_t *dst, ptrdiff_t dst_stride,
+                           const InterpKernel *x_filters, int x0_q4,
+                           int x_step_q4, int w, int h) {
+  int x, y;
+  src -= SUBPEL_TAPS / 2 - 1;
+
+  for (y = 0; y < h; ++y) {
+    int x_q4 = x0_q4;
+    for (x = 0; x < w; ++x) {
+      const uint8_t *const src_x = &src[x_q4 >> SUBPEL_BITS];
+      const int16_t *const x_filter = x_filters[x_q4 & SUBPEL_MASK];
+      int k, sum = 0;
+      for (k = 0; k < SUBPEL_TAPS; ++k) sum += src_x[k] * x_filter[k];
+      dst[x] = clip_pixel(ROUND_POWER_OF_TWO(sum, FILTER_BITS));
+      x_q4 += x_step_q4;
+    }
+    src += src_stride;
+    dst += dst_stride;
+  }
+}
+
+static void convolve_vert(const uint8_t *src, ptrdiff_t src_stride,
+                          uint8_t *dst, ptrdiff_t dst_stride,
+                          const InterpKernel *y_filters, int y0_q4,
+                          int y_step_q4, int w, int h) {
+  int x, y;
+  src -= src_stride * (SUBPEL_TAPS / 2 - 1);
+
+  for (x = 0; x < w; ++x) {
+    int y_q4 = y0_q4;
+    for (y = 0; y < h; ++y) {
+      const uint8_t *src_y = &src[(y_q4 >> SUBPEL_BITS) * src_stride];
+      const int16_t *const y_filter = y_filters[y_q4 & SUBPEL_MASK];
+      int k, sum = 0;
+      for (k = 0; k < SUBPEL_TAPS; ++k)
+        sum += src_y[k * src_stride] * y_filter[k];
+      dst[y * dst_stride] = clip_pixel(ROUND_POWER_OF_TWO(sum, FILTER_BITS));
+      y_q4 += y_step_q4;
+    }
+    ++src;
+    ++dst;
+  }
+}
+
+static void convolve_avg_vert(const uint8_t *src, ptrdiff_t src_stride,
+                              uint8_t *dst, ptrdiff_t dst_stride,
+                              const InterpKernel *y_filters, int y0_q4,
+                              int y_step_q4, int w, int h) {
+  int x, y;
+  src -= src_stride * (SUBPEL_TAPS / 2 - 1);
+
+  for (x = 0; x < w; ++x) {
+    int y_q4 = y0_q4;
+    for (y = 0; y < h; ++y) {
+      const uint8_t *src_y = &src[(y_q4 >> SUBPEL_BITS) * src_stride];
+      const int16_t *const y_filter = y_filters[y_q4 & SUBPEL_MASK];
+      int k, sum = 0;
+      for (k = 0; k < SUBPEL_TAPS; ++k)
+        sum += src_y[k * src_stride] * y_filter[k];
+      dst[y * dst_stride] = ROUND_POWER_OF_TWO(
+          dst[y * dst_stride] +
+              clip_pixel(ROUND_POWER_OF_TWO(sum, FILTER_BITS)),
+          1);
+      y_q4 += y_step_q4;
+    }
+    ++src;
+    ++dst;
+  }
+}
+
+static void convolve_avg_horiz(const uint8_t *src, ptrdiff_t src_stride,
+                               uint8_t *dst, ptrdiff_t dst_stride,
+                               const InterpKernel *x_filters, int x0_q4,
+                               int x_step_q4, int w, int h) {
+  int x, y;
+  src -= SUBPEL_TAPS / 2 - 1;
+
+  for (y = 0; y < h; ++y) {
+    int x_q4 = x0_q4;
+    for (x = 0; x < w; ++x) {
+      const uint8_t *const src_x = &src[x_q4 >> SUBPEL_BITS];
+      const int16_t *const x_filter = x_filters[x_q4 & SUBPEL_MASK];
+      int k, sum = 0;
+      for (k = 0; k < SUBPEL_TAPS; ++k) sum += src_x[k] * x_filter[k];
+      dst[x] = ROUND_POWER_OF_TWO(
+          dst[x] + clip_pixel(ROUND_POWER_OF_TWO(sum, FILTER_BITS)), 1);
+      x_q4 += x_step_q4;
+    }
+    src += src_stride;
+    dst += dst_stride;
+  }
+}
+
+void vpx_convolve8_mmi(const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst,
+                       ptrdiff_t dst_stride, const InterpKernel *filter,
+                       int x0_q4, int32_t x_step_q4, int y0_q4,
+                       int32_t y_step_q4, int32_t w, int32_t h) {
+  // Note: Fixed size intermediate buffer, temp, places limits on parameters.
+  // 2d filtering proceeds in 2 steps:
+  //   (1) Interpolate horizontally into an intermediate buffer, temp.
+  //   (2) Interpolate temp vertically to derive the sub-pixel result.
+  // Deriving the maximum number of rows in the temp buffer (135):
+  // --Smallest scaling factor is x1/2 ==> y_step_q4 = 32 (Normative).
+  // --Largest block size is 64x64 pixels.
+  // --64 rows in the downscaled frame span a distance of (64 - 1) * 32 in the
+  //   original frame (in 1/16th pixel units).
+  // --Must round-up because block may be located at sub-pixel position.
+  // --Require an additional SUBPEL_TAPS rows for the 8-tap filter tails.
+  // --((64 - 1) * 32 + 15) >> 4 + 8 = 135.
+  // When calling in frame scaling function, the smallest scaling factor is x1/4
+  // ==> y_step_q4 = 64. Since w and h are at most 16, the temp buffer is still
+  // big enough.
+  uint8_t temp[64 * 135];
+  const int intermediate_height =
+      (((h - 1) * y_step_q4 + y0_q4) >> SUBPEL_BITS) + SUBPEL_TAPS;
+
+  assert(w <= 64);
+  assert(h <= 64);
+  assert(y_step_q4 <= 32 || (y_step_q4 <= 64 && h <= 32));
+  assert(x_step_q4 <= 64);
+
+  if (w & 0x03) {
+    convolve_horiz(src - src_stride * (SUBPEL_TAPS / 2 - 1), src_stride, temp,
+                   64, filter, x0_q4, x_step_q4, w, intermediate_height);
+    convolve_vert(temp + 64 * (SUBPEL_TAPS / 2 - 1), 64, dst, dst_stride,
+                  filter, y0_q4, y_step_q4, w, h);
+  } else {
+    convolve_horiz_mmi(src - src_stride * (SUBPEL_TAPS / 2 - 1), src_stride,
+                       temp, 64, filter, x0_q4, x_step_q4, w,
+                       intermediate_height);
+    convolve_vert_mmi(temp + 64 * (SUBPEL_TAPS / 2 - 1), 64, dst, dst_stride,
+                      filter, y0_q4, y_step_q4, w, h);
+  }
+}
+
+void vpx_convolve8_horiz_mmi(const uint8_t *src, ptrdiff_t src_stride,
+                             uint8_t *dst, ptrdiff_t dst_stride,
+                             const InterpKernel *filter, int x0_q4,
+                             int32_t x_step_q4, int y0_q4, int32_t y_step_q4,
+                             int32_t w, int32_t h) {
+  (void)y0_q4;
+  (void)y_step_q4;
+  if (w & 0x03)
+    convolve_horiz(src, src_stride, dst, dst_stride, filter, x0_q4, x_step_q4,
+                   w, h);
+  else
+    convolve_horiz_mmi(src, src_stride, dst, dst_stride, filter, x0_q4,
+                       x_step_q4, w, h);
+}
+
+void vpx_convolve8_vert_mmi(const uint8_t *src, ptrdiff_t src_stride,
+                            uint8_t *dst, ptrdiff_t dst_stride,
+                            const InterpKernel *filter, int x0_q4,
+                            int32_t x_step_q4, int y0_q4, int y_step_q4, int w,
+                            int h) {
+  (void)x0_q4;
+  (void)x_step_q4;
+  if (w & 0x03)
+    convolve_vert(src, src_stride, dst, dst_stride, filter, y0_q4, y_step_q4, w,
+                  h);
+  else
+    convolve_vert_mmi(src, src_stride, dst, dst_stride, filter, y0_q4,
+                      y_step_q4, w, h);
+}
+
+void vpx_convolve8_avg_horiz_mmi(const uint8_t *src, ptrdiff_t src_stride,
+                                 uint8_t *dst, ptrdiff_t dst_stride,
+                                 const InterpKernel *filter, int x0_q4,
+                                 int32_t x_step_q4, int y0_q4, int y_step_q4,
+                                 int w, int h) {
+  (void)y0_q4;
+  (void)y_step_q4;
+  if (w & 0x03)
+    convolve_avg_horiz(src, src_stride, dst, dst_stride, filter, x0_q4,
+                       x_step_q4, w, h);
+  else
+    convolve_avg_horiz_mmi(src, src_stride, dst, dst_stride, filter, x0_q4,
+                           x_step_q4, w, h);
+}
+
+void vpx_convolve8_avg_vert_mmi(const uint8_t *src, ptrdiff_t src_stride,
+                                uint8_t *dst, ptrdiff_t dst_stride,
+                                const InterpKernel *filter, int x0_q4,
+                                int32_t x_step_q4, int y0_q4, int y_step_q4,
+                                int w, int h) {
+  (void)x0_q4;
+  (void)x_step_q4;
+  if (w & 0x03)
+    convolve_avg_vert(src, src_stride, dst, dst_stride, filter, y0_q4,
+                      y_step_q4, w, h);
+  else
+    convolve_avg_vert_mmi(src, src_stride, dst, dst_stride, filter, y0_q4,
+                          y_step_q4, w, h);
+}
+
+void vpx_convolve8_avg_mmi(const uint8_t *src, ptrdiff_t src_stride,
+                           uint8_t *dst, ptrdiff_t dst_stride,
+                           const InterpKernel *filter, int x0_q4,
+                           int32_t x_step_q4, int y0_q4, int32_t y_step_q4,
+                           int32_t w, int32_t h) {
+  // Fixed size intermediate buffer places limits on parameters.
+  DECLARE_ALIGNED(16, uint8_t, temp[64 * 64]);
+  assert(w <= 64);
+  assert(h <= 64);
+
+  vpx_convolve8_mmi(src, src_stride, temp, 64, filter, x0_q4, x_step_q4, y0_q4,
+                    y_step_q4, w, h);
+  vpx_convolve_avg_mmi(temp, 64, dst, dst_stride, NULL, 0, 0, 0, 0, w, h);
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_msa.c
index d35a5a7..c942167 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_msa.c
@@ -558,8 +558,8 @@
     filt_ver[cnt] = filter_y[cnt];
   }
 
-  if (((const int32_t *)filter_x)[0] == 0 &&
-      ((const int32_t *)filter_y)[0] == 0) {
+  if (vpx_get_filter_taps(filter_x) == 2 &&
+      vpx_get_filter_taps(filter_y) == 2) {
     switch (w) {
       case 4:
         common_hv_2ht_2vt_4w_msa(src, (int32_t)src_stride, dst,
@@ -591,8 +591,8 @@
                         x_step_q4, y0_q4, y_step_q4, w, h);
         break;
     }
-  } else if (((const int32_t *)filter_x)[0] == 0 ||
-             ((const int32_t *)filter_y)[0] == 0) {
+  } else if (vpx_get_filter_taps(filter_x) == 2 ||
+             vpx_get_filter_taps(filter_y) == 2) {
     vpx_convolve8_c(src, src_stride, dst, dst_stride, filter, x0_q4, x_step_q4,
                     y0_q4, y_step_q4, w, h);
   } else {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_vert_msa.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_vert_msa.c
index 13fce00..1952286 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_vert_msa.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve8_vert_msa.c
@@ -641,7 +641,7 @@
     filt_ver[cnt] = filter_y[cnt];
   }
 
-  if (((const int32_t *)filter_y)[0] == 0) {
+  if (vpx_get_filter_taps(filter_y) == 2) {
     switch (w) {
       case 4:
         common_vt_2t_4w_msa(src, (int32_t)src_stride, dst, (int32_t)dst_stride,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve_msa.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve_msa.h
index d532445..a0280c5 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve_msa.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/mips/vpx_convolve_msa.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_MIPS_VPX_CONVOLVE_MSA_H_
-#define VPX_DSP_MIPS_VPX_CONVOLVE_MSA_H_
+#ifndef VPX_VPX_DSP_MIPS_VPX_CONVOLVE_MSA_H_
+#define VPX_VPX_DSP_MIPS_VPX_CONVOLVE_MSA_H_
 
 #include "vpx_dsp/mips/macros_msa.h"
 #include "vpx_dsp/vpx_filter.h"
@@ -119,4 +119,4 @@
     AVER_UB2_UB(tmp0_m, dst0, tmp1_m, dst1, tmp0_m, tmp1_m);             \
     ST8x4_UB(tmp0_m, tmp1_m, pdst_m, stride);                            \
   }
-#endif /* VPX_DSP_MIPS_VPX_CONVOLVE_MSA_H_ */
+#endif  // VPX_VPX_DSP_MIPS_VPX_CONVOLVE_MSA_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/postproc.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/postproc.h
index 43cb5c8..37f993f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/postproc.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/postproc.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_POSTPROC_H_
-#define VPX_DSP_POSTPROC_H_
+#ifndef VPX_VPX_DSP_POSTPROC_H_
+#define VPX_VPX_DSP_POSTPROC_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -22,4 +22,4 @@
 }
 #endif
 
-#endif  // VPX_DSP_POSTPROC_H_
+#endif  // VPX_VPX_DSP_POSTPROC_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/bitdepth_conversion_vsx.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/bitdepth_conversion_vsx.h
index 2c5d9a4..7ac873f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/bitdepth_conversion_vsx.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/bitdepth_conversion_vsx.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_PPC_BITDEPTH_CONVERSION_VSX_H_
-#define VPX_DSP_PPC_BITDEPTH_CONVERSION_VSX_H_
+#ifndef VPX_VPX_DSP_PPC_BITDEPTH_CONVERSION_VSX_H_
+#define VPX_VPX_DSP_PPC_BITDEPTH_CONVERSION_VSX_H_
 
 #include "./vpx_config.h"
 #include "vpx/vpx_integer.h"
@@ -44,4 +44,4 @@
 #endif
 }
 
-#endif  // VPX_DSP_PPC_BITDEPTH_CONVERSION_VSX_H_
+#endif  // VPX_VPX_DSP_PPC_BITDEPTH_CONVERSION_VSX_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/deblock_vsx.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/deblock_vsx.c
new file mode 100644
index 0000000..2129911
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/deblock_vsx.c
@@ -0,0 +1,374 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <assert.h>
+
+#include "./vpx_dsp_rtcd.h"
+#include "vpx_dsp/ppc/types_vsx.h"
+
+extern const int16_t vpx_rv[];
+
+static const uint8x16_t load_merge = { 0x00, 0x02, 0x04, 0x06, 0x08, 0x0A,
+                                       0x0C, 0x0E, 0x18, 0x19, 0x1A, 0x1B,
+                                       0x1C, 0x1D, 0x1E, 0x1F };
+
+static const uint8x16_t st8_perm = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05,
+                                     0x06, 0x07, 0x18, 0x19, 0x1A, 0x1B,
+                                     0x1C, 0x1D, 0x1E, 0x1F };
+
+static INLINE uint8x16_t apply_filter(uint8x16_t ctx[4], uint8x16_t v,
+                                      uint8x16_t filter) {
+  const uint8x16_t k1 = vec_avg(ctx[0], ctx[1]);
+  const uint8x16_t k2 = vec_avg(ctx[3], ctx[2]);
+  const uint8x16_t k3 = vec_avg(k1, k2);
+  const uint8x16_t f_a = vec_max(vec_absd(v, ctx[0]), vec_absd(v, ctx[1]));
+  const uint8x16_t f_b = vec_max(vec_absd(v, ctx[2]), vec_absd(v, ctx[3]));
+  const bool8x16_t mask = vec_cmplt(vec_max(f_a, f_b), filter);
+  return vec_sel(v, vec_avg(k3, v), mask);
+}
+
+static INLINE void vert_ctx(uint8x16_t ctx[4], int col, uint8_t *src,
+                            int stride) {
+  ctx[0] = vec_vsx_ld(col - 2 * stride, src);
+  ctx[1] = vec_vsx_ld(col - stride, src);
+  ctx[2] = vec_vsx_ld(col + stride, src);
+  ctx[3] = vec_vsx_ld(col + 2 * stride, src);
+}
+
+static INLINE void horz_ctx(uint8x16_t ctx[4], uint8x16_t left_ctx,
+                            uint8x16_t v, uint8x16_t right_ctx) {
+  static const uint8x16_t l2_perm = { 0x0E, 0x0F, 0x10, 0x11, 0x12, 0x13,
+                                      0x14, 0x15, 0x16, 0x17, 0x18, 0x19,
+                                      0x1A, 0x1B, 0x1C, 0x1D };
+
+  static const uint8x16_t l1_perm = { 0x0F, 0x10, 0x11, 0x12, 0x13, 0x14,
+                                      0x15, 0x16, 0x17, 0x18, 0x19, 0x1A,
+                                      0x1B, 0x1C, 0x1D, 0x1E };
+
+  static const uint8x16_t r1_perm = { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06,
+                                      0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C,
+                                      0x0D, 0x0E, 0x0F, 0x10 };
+
+  static const uint8x16_t r2_perm = { 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+                                      0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D,
+                                      0x0E, 0x0F, 0x10, 0x11 };
+  ctx[0] = vec_perm(left_ctx, v, l2_perm);
+  ctx[1] = vec_perm(left_ctx, v, l1_perm);
+  ctx[2] = vec_perm(v, right_ctx, r1_perm);
+  ctx[3] = vec_perm(v, right_ctx, r2_perm);
+}
+void vpx_post_proc_down_and_across_mb_row_vsx(unsigned char *src_ptr,
+                                              unsigned char *dst_ptr,
+                                              int src_pixels_per_line,
+                                              int dst_pixels_per_line, int cols,
+                                              unsigned char *f, int size) {
+  int row, col;
+  uint8x16_t ctx[4], out, v, left_ctx;
+
+  for (row = 0; row < size; row++) {
+    for (col = 0; col < cols - 8; col += 16) {
+      const uint8x16_t filter = vec_vsx_ld(col, f);
+      v = vec_vsx_ld(col, src_ptr);
+      vert_ctx(ctx, col, src_ptr, src_pixels_per_line);
+      vec_vsx_st(apply_filter(ctx, v, filter), col, dst_ptr);
+    }
+
+    if (col != cols) {
+      const uint8x16_t filter = vec_vsx_ld(col, f);
+      v = vec_vsx_ld(col, src_ptr);
+      vert_ctx(ctx, col, src_ptr, src_pixels_per_line);
+      out = apply_filter(ctx, v, filter);
+      vec_vsx_st(vec_perm(out, v, st8_perm), col, dst_ptr);
+    }
+
+    /* now post_proc_across */
+    left_ctx = vec_splats(dst_ptr[0]);
+    v = vec_vsx_ld(0, dst_ptr);
+    for (col = 0; col < cols - 8; col += 16) {
+      const uint8x16_t filter = vec_vsx_ld(col, f);
+      const uint8x16_t right_ctx = (col + 16 == cols)
+                                       ? vec_splats(dst_ptr[cols - 1])
+                                       : vec_vsx_ld(col, dst_ptr + 16);
+      horz_ctx(ctx, left_ctx, v, right_ctx);
+      vec_vsx_st(apply_filter(ctx, v, filter), col, dst_ptr);
+      left_ctx = v;
+      v = right_ctx;
+    }
+
+    if (col != cols) {
+      const uint8x16_t filter = vec_vsx_ld(col, f);
+      const uint8x16_t right_ctx = vec_splats(dst_ptr[cols - 1]);
+      horz_ctx(ctx, left_ctx, v, right_ctx);
+      out = apply_filter(ctx, v, filter);
+      vec_vsx_st(vec_perm(out, v, st8_perm), col, dst_ptr);
+    }
+
+    src_ptr += src_pixels_per_line;
+    dst_ptr += dst_pixels_per_line;
+  }
+}
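As a hedged scalar reference for `apply_filter` above (illustrative, not in the patch; `apply_filter_ref` and its helpers are my own names): a pixel is replaced by a rounded average of itself and the average of its four context pixels, but only when every context pixel is within the per-column filter limit. `vec_avg` rounds up, i.e. avg(a, b) = (a + b + 1) >> 1:

```c
#include <stdint.h>

/* Rounded average, matching vec_avg's (a + b + 1) >> 1 behavior. */
static uint8_t avg_u8(uint8_t a, uint8_t b) {
  return (uint8_t)((a + b + 1) >> 1);
}

/* Absolute difference, matching vec_absd. */
static uint8_t absd_u8(uint8_t a, uint8_t b) {
  return (uint8_t)(a > b ? a - b : b - a);
}

/* Scalar model of apply_filter: filter only when the largest deviation
 * from the four context pixels is below the limit f. */
static uint8_t apply_filter_ref(const uint8_t ctx[4], uint8_t v, uint8_t f) {
  const uint8_t k3 = avg_u8(avg_u8(ctx[0], ctx[1]), avg_u8(ctx[3], ctx[2]));
  uint8_t max_diff = 0;
  int i;
  for (i = 0; i < 4; i++) {
    const uint8_t d = absd_u8(v, ctx[i]);
    if (d > max_diff) max_diff = d;
  }
  return (max_diff < f) ? avg_u8(k3, v) : v;
}
```

Smooth areas get averaged toward their neighborhood, while pixels next to a strong edge (deviation at or above the limit) are left untouched.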
+
+// C: s[c + 7]
+static INLINE int16x8_t next7l_s16(uint8x16_t c) {
+  static const uint8x16_t next7_perm = {
+    0x07, 0x10, 0x08, 0x11, 0x09, 0x12, 0x0A, 0x13,
+    0x0B, 0x14, 0x0C, 0x15, 0x0D, 0x16, 0x0E, 0x17,
+  };
+  return (int16x8_t)vec_perm(c, vec_zeros_u8, next7_perm);
+}
+
+// Slide across window and add.
+static INLINE int16x8_t slide_sum_s16(int16x8_t x) {
+  // x = A B C D E F G H
+  //
+  // 0 A B C D E F G
+  const int16x8_t sum1 = vec_add(x, vec_slo(x, vec_splats((int8_t)(2 << 3))));
+  // 0 0 A B C D E F
+  const int16x8_t sum2 = vec_add(vec_slo(x, vec_splats((int8_t)(4 << 3))),
+                                 // 0 0 0 A B C D E
+                                 vec_slo(x, vec_splats((int8_t)(6 << 3))));
+  // 0 0 0 0 A B C D
+  const int16x8_t sum3 = vec_add(vec_slo(x, vec_splats((int8_t)(8 << 3))),
+                                 // 0 0 0 0 0 A B C
+                                 vec_slo(x, vec_splats((int8_t)(10 << 3))));
+  // 0 0 0 0 0 0 A B
+  const int16x8_t sum4 = vec_add(vec_slo(x, vec_splats((int8_t)(12 << 3))),
+                                 // 0 0 0 0 0 0 0 A
+                                 vec_slo(x, vec_splats((int8_t)(14 << 3))));
+  return vec_add(vec_add(sum1, sum2), vec_add(sum3, sum4));
+}
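Per the lane comments above, the shifted adds in `slide_sum_s16` build an inclusive prefix sum: lane i ends up holding x[0] + ... + x[i]. A scalar sketch of that equivalence (illustrative only; `slide_sum_ref` is my own name):

```c
#include <stdint.h>

/* Scalar model of slide_sum_s16: out[i] = x[0] + x[1] + ... + x[i].
 * The VSX version builds the same inclusive prefix sum by adding the
 * input to copies of itself slid right by 1..7 lanes. */
static void slide_sum_ref(const int16_t x[8], int16_t out[8]) {
  int16_t acc = 0;
  int i;
  for (i = 0; i < 8; i++) {
    acc = (int16_t)(acc + x[i]);
    out[i] = acc;
  }
}
```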
+
+// Slide across window and add.
+static INLINE int32x4_t slide_sumsq_s32(int32x4_t xsq_even, int32x4_t xsq_odd) {
+  //   0 A C E
+  // + 0 B D F
+  int32x4_t sumsq_1 = vec_add(vec_slo(xsq_even, vec_splats((int8_t)(4 << 3))),
+                              vec_slo(xsq_odd, vec_splats((int8_t)(4 << 3))));
+  //   0 0 A C
+  // + 0 0 B D
+  int32x4_t sumsq_2 = vec_add(vec_slo(xsq_even, vec_splats((int8_t)(8 << 3))),
+                              vec_slo(xsq_odd, vec_splats((int8_t)(8 << 3))));
+  //   0 0 0 A
+  // + 0 0 0 B
+  int32x4_t sumsq_3 = vec_add(vec_slo(xsq_even, vec_splats((int8_t)(12 << 3))),
+                              vec_slo(xsq_odd, vec_splats((int8_t)(12 << 3))));
+  sumsq_1 = vec_add(sumsq_1, xsq_even);
+  sumsq_2 = vec_add(sumsq_2, sumsq_3);
+  return vec_add(sumsq_1, sumsq_2);
+}
+
+// C: (b + sum + val) >> 4
+static INLINE int16x8_t filter_s16(int16x8_t b, int16x8_t sum, int16x8_t val) {
+  return vec_sra(vec_add(vec_add(b, sum), val), vec_splats((uint16_t)4));
+}
+
+// C: sumsq * 15 - sum * sum
+static INLINE bool16x8_t mask_s16(int32x4_t sumsq_even, int32x4_t sumsq_odd,
+                                  int16x8_t sum, int32x4_t lim) {
+  static const uint8x16_t mask_merge = { 0x00, 0x01, 0x10, 0x11, 0x04, 0x05,
+                                         0x14, 0x15, 0x08, 0x09, 0x18, 0x19,
+                                         0x0C, 0x0D, 0x1C, 0x1D };
+  const int32x4_t sumsq_odd_scaled =
+      vec_mul(sumsq_odd, vec_splats((int32_t)15));
+  const int32x4_t sumsq_even_scaled =
+      vec_mul(sumsq_even, vec_splats((int32_t)15));
+  const int32x4_t thres_odd = vec_sub(sumsq_odd_scaled, vec_mulo(sum, sum));
+  const int32x4_t thres_even = vec_sub(sumsq_even_scaled, vec_mule(sum, sum));
+
+  const bool32x4_t mask_odd = vec_cmplt(thres_odd, lim);
+  const bool32x4_t mask_even = vec_cmplt(thres_even, lim);
+  return vec_perm((bool16x8_t)mask_even, (bool16x8_t)mask_odd, mask_merge);
+}
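The threshold computed in `mask_s16` is a scaled variance of the 15-pixel window: the pixel is filtered only when 15 * sumsq - sum * sum falls below the limit. A scalar sketch (illustrative, not part of the patch; `pass_filter` is my own name):

```c
#include <stdint.h>

/* Scalar model of the mask in mask_s16: true when the window's scaled
 * variance (15 * sum-of-squares minus the squared sum) is under lim. */
static int pass_filter(int32_t sumsq, int16_t sum, int32_t lim) {
  return (15 * sumsq - (int32_t)sum * sum) < lim;
}
```

For a perfectly flat window of 15 pixels with value 10 (sum = 150, sumsq = 1500), the variance term is 22500 - 22500 = 0, so any positive limit lets the filter run; a high-variance window is left alone.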
+
+void vpx_mbpost_proc_across_ip_vsx(unsigned char *src, int pitch, int rows,
+                                   int cols, int flimit) {
+  int row, col;
+  const int32x4_t lim = vec_splats(flimit);
+
+  // 8 columns are processed at a time.
+  assert(cols % 8 == 0);
+
+  for (row = 0; row < rows; row++) {
+    // The sum is signed and requires at most 13 bits.
+    // (8 bits + sign) * 15 (4 bits)
+    int16x8_t sum;
+    // The sum of squares requires at most 20 bits.
+    // (16 bits + sign) * 15 (4 bits)
+    int32x4_t sumsq_even, sumsq_odd;
+
+    // Fill left context with first col.
+    int16x8_t left_ctx = vec_splats((int16_t)src[0]);
+    int16_t s = src[0] * 9;
+    int32_t ssq = src[0] * src[0] * 9 + 16;
+
+    // Fill the next 6 columns of the sliding window with cols 2 to 7.
+    for (col = 1; col <= 6; ++col) {
+      s += src[col];
+      ssq += src[col] * src[col];
+    }
+    // Set this sum to every element in the window.
+    sum = vec_splats(s);
+    sumsq_even = vec_splats(ssq);
+    sumsq_odd = vec_splats(ssq);
+
+    for (col = 0; col < cols; col += 8) {
+      bool16x8_t mask;
+      int16x8_t filtered, masked;
+      uint8x16_t out;
+
+      const uint8x16_t val = vec_vsx_ld(0, src + col);
+      const int16x8_t val_high = unpack_to_s16_h(val);
+
+      // C: s[c + 7]
+      const int16x8_t right_ctx = (col + 8 == cols)
+                                      ? vec_splats((int16_t)src[col + 7])
+                                      : next7l_s16(val);
+
+      // C: x = s[c + 7] - s[c - 8];
+      const int16x8_t x = vec_sub(right_ctx, left_ctx);
+      const int32x4_t xsq_even =
+          vec_sub(vec_mule(right_ctx, right_ctx), vec_mule(left_ctx, left_ctx));
+      const int32x4_t xsq_odd =
+          vec_sub(vec_mulo(right_ctx, right_ctx), vec_mulo(left_ctx, left_ctx));
+
+      const int32x4_t sumsq_tmp = slide_sumsq_s32(xsq_even, xsq_odd);
+      // A C E G
+      // 0 B D F
+      // 0 A C E
+      // 0 0 B D
+      // 0 0 A C
+      // 0 0 0 B
+      // 0 0 0 A
+      sumsq_even = vec_add(sumsq_even, sumsq_tmp);
+      // B D F G
+      // A C E G
+      // 0 B D F
+      // 0 A C E
+      // 0 0 B D
+      // 0 0 A C
+      // 0 0 0 B
+      // 0 0 0 A
+      sumsq_odd = vec_add(sumsq_odd, vec_add(sumsq_tmp, xsq_odd));
+
+      sum = vec_add(sum, slide_sum_s16(x));
+
+      // C: (8 + sum + s[c]) >> 4
+      filtered = filter_s16(vec_splats((int16_t)8), sum, val_high);
+      // C: sumsq * 15 - sum * sum
+      mask = mask_s16(sumsq_even, sumsq_odd, sum, lim);
+      masked = vec_sel(val_high, filtered, mask);
+
+      out = vec_perm((uint8x16_t)masked, vec_vsx_ld(0, src + col), load_merge);
+      vec_vsx_st(out, 0, src + col);
+
+      // Update window sum and square sum
+      sum = vec_splat(sum, 7);
+      sumsq_even = vec_splat(sumsq_odd, 3);
+      sumsq_odd = vec_splat(sumsq_odd, 3);
+
+      // C: s[c - 8] (for next iteration)
+      left_ctx = val_high;
+    }
+    src += pitch;
+  }
+}
+
+void vpx_mbpost_proc_down_vsx(uint8_t *dst, int pitch, int rows, int cols,
+                              int flimit) {
+  int col, row, i;
+  int16x8_t window[16];
+  const int32x4_t lim = vec_splats(flimit);
+
+  // 8 columns are processed at a time.
+  assert(cols % 8 == 0);
+  // If rows is less than 8 the bottom border extension fails.
+  assert(rows >= 8);
+
+  for (col = 0; col < cols; col += 8) {
+    // The sum is signed and requires at most 13 bits.
+    // (8 bits + sign) * 15 (4 bits)
+    int16x8_t r1, sum;
+    // The sum of squares requires at most 20 bits.
+    // (16 bits + sign) * 15 (4 bits)
+    int32x4_t sumsq_even, sumsq_odd;
+
+    r1 = unpack_to_s16_h(vec_vsx_ld(0, dst));
+    // Fill sliding window with first row.
+    for (i = 0; i <= 8; i++) {
+      window[i] = r1;
+    }
+    // First 9 rows of the sliding window are the same.
+    // sum = r1 * 9
+    sum = vec_mladd(r1, vec_splats((int16_t)9), vec_zeros_s16);
+
+    // sumsq = r1 * r1 * 9
+    sumsq_even = vec_mule(sum, r1);
+    sumsq_odd = vec_mulo(sum, r1);
+
+    // Fill the next 6 rows of the sliding window with rows 2 to 7.
+    for (i = 1; i <= 6; ++i) {
+      const int16x8_t next_row = unpack_to_s16_h(vec_vsx_ld(i * pitch, dst));
+      window[i + 8] = next_row;
+      sum = vec_add(sum, next_row);
+      sumsq_odd = vec_add(sumsq_odd, vec_mulo(next_row, next_row));
+      sumsq_even = vec_add(sumsq_even, vec_mule(next_row, next_row));
+    }
+
+    for (row = 0; row < rows; row++) {
+      int32x4_t d15_even, d15_odd, d0_even, d0_odd;
+      bool16x8_t mask;
+      int16x8_t filtered, masked;
+      uint8x16_t out;
+
+      const int16x8_t rv = vec_vsx_ld(0, vpx_rv + (row & 127));
+
+      // Move the sliding window
+      if (row + 7 < rows) {
+        window[15] = unpack_to_s16_h(vec_vsx_ld((row + 7) * pitch, dst));
+      } else {
+        window[15] = window[14];
+      }
+
+      // C: sum += s[7 * pitch] - s[-8 * pitch];
+      sum = vec_add(sum, vec_sub(window[15], window[0]));
+
+      // C: sumsq += s[7 * pitch] * s[7 * pitch] - s[-8 * pitch] * s[-8 *
+      // pitch];
+      // Optimization Note: Caching a squared-window for odd and even is
+      // slower than just repeating the multiplies.
+      d15_odd = vec_mulo(window[15], window[15]);
+      d15_even = vec_mule(window[15], window[15]);
+      d0_odd = vec_mulo(window[0], window[0]);
+      d0_even = vec_mule(window[0], window[0]);
+      sumsq_odd = vec_add(sumsq_odd, vec_sub(d15_odd, d0_odd));
+      sumsq_even = vec_add(sumsq_even, vec_sub(d15_even, d0_even));
+
+      // C: (vpx_rv[(r & 127) + (c & 7)] + sum + s[0]) >> 4
+      filtered = filter_s16(rv, sum, window[8]);
+
+      // C: sumsq * 15 - sum * sum
+      mask = mask_s16(sumsq_even, sumsq_odd, sum, lim);
+      masked = vec_sel(window[8], filtered, mask);
+
+      // TODO(ltrudeau) If cols % 16 == 0, we could just process 16 per
+      // iteration
+      out = vec_perm((uint8x16_t)masked, vec_vsx_ld(0, dst + row * pitch),
+                     load_merge);
+      vec_vsx_st(out, 0, dst + row * pitch);
+
+      // Optimization Note: Turns out that the following loop is faster than
+      // using pointers to manage the sliding window.
+      for (i = 1; i < 16; i++) {
+        window[i - 1] = window[i];
+      }
+    }
+    dst += 8;
+  }
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/fdct32x32_vsx.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/fdct32x32_vsx.c
new file mode 100644
index 0000000..328b0e3
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/fdct32x32_vsx.c
@@ -0,0 +1,553 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include "./vpx_config.h"
+#include "./vpx_dsp_rtcd.h"
+
+#include "vpx_dsp/ppc/transpose_vsx.h"
+#include "vpx_dsp/ppc/txfm_common_vsx.h"
+#include "vpx_dsp/ppc/types_vsx.h"
+
+// Returns ((a +/- b) * cospi16 + (2 << 13)) >> 14.
+static INLINE void single_butterfly(int16x8_t a, int16x8_t b, int16x8_t *add,
+                                    int16x8_t *sub) {
+  // Since a + b can overflow 16 bits, the multiplication is distributed
+  // (a * c +/- b * c).
+  const int32x4_t ac_e = vec_mule(a, cospi16_v);
+  const int32x4_t ac_o = vec_mulo(a, cospi16_v);
+  const int32x4_t bc_e = vec_mule(b, cospi16_v);
+  const int32x4_t bc_o = vec_mulo(b, cospi16_v);
+
+  // Reuse the same multiplies for sum and difference.
+  const int32x4_t sum_e = vec_add(ac_e, bc_e);
+  const int32x4_t sum_o = vec_add(ac_o, bc_o);
+  const int32x4_t diff_e = vec_sub(ac_e, bc_e);
+  const int32x4_t diff_o = vec_sub(ac_o, bc_o);
+
+  // Add rounding offset
+  const int32x4_t rsum_o = vec_add(sum_o, vec_dct_const_rounding);
+  const int32x4_t rsum_e = vec_add(sum_e, vec_dct_const_rounding);
+  const int32x4_t rdiff_o = vec_add(diff_o, vec_dct_const_rounding);
+  const int32x4_t rdiff_e = vec_add(diff_e, vec_dct_const_rounding);
+
+  const int32x4_t ssum_o = vec_sra(rsum_o, vec_dct_const_bits);
+  const int32x4_t ssum_e = vec_sra(rsum_e, vec_dct_const_bits);
+  const int32x4_t sdiff_o = vec_sra(rdiff_o, vec_dct_const_bits);
+  const int32x4_t sdiff_e = vec_sra(rdiff_e, vec_dct_const_bits);
+
+  // There's no pack operation for even and odd, so we need to permute.
+  *add = (int16x8_t)vec_perm(ssum_e, ssum_o, vec_perm_odd_even_pack);
+  *sub = (int16x8_t)vec_perm(sdiff_e, sdiff_o, vec_perm_odd_even_pack);
+}
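A scalar model of `single_butterfly` may help here (illustrative only, not part of the patch). It assumes libvpx's usual constants, cospi_16_64 = 11585 (cos(pi/4) in Q14) and DCT_CONST_BITS = 14, with the rounding offset 1 << 13; `single_butterfly_ref` and `round_shift` are my own names:

```c
#include <stdint.h>

#define COSPI_16_64 11585
#define DCT_CONST_BITS 14
#define DCT_CONST_ROUNDING (1 << (DCT_CONST_BITS - 1))

static int32_t round_shift(int32_t x) {
  return (x + DCT_CONST_ROUNDING) >> DCT_CONST_BITS;
}

/* Scalar model of single_butterfly. The multiplication is distributed as
 * a*c +/- b*c so that a + b never has to be formed in 16 bits before
 * widening, matching the vector code's use of vec_mule/vec_mulo. */
static void single_butterfly_ref(int16_t a, int16_t b, int32_t *add,
                                 int32_t *sub) {
  const int32_t ac = (int32_t)a * COSPI_16_64;
  const int32_t bc = (int32_t)b * COSPI_16_64;
  *add = round_shift(ac + bc);
  *sub = round_shift(ac - bc);
}
```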
+
+// Returns (a * c1 +/- b * c2 + (2 << 13)) >> 14
+static INLINE void double_butterfly(int16x8_t a, int16x8_t c1, int16x8_t b,
+                                    int16x8_t c2, int16x8_t *add,
+                                    int16x8_t *sub) {
+  const int32x4_t ac1_o = vec_mulo(a, c1);
+  const int32x4_t ac1_e = vec_mule(a, c1);
+  const int32x4_t ac2_o = vec_mulo(a, c2);
+  const int32x4_t ac2_e = vec_mule(a, c2);
+
+  const int32x4_t bc1_o = vec_mulo(b, c1);
+  const int32x4_t bc1_e = vec_mule(b, c1);
+  const int32x4_t bc2_o = vec_mulo(b, c2);
+  const int32x4_t bc2_e = vec_mule(b, c2);
+
+  const int32x4_t sum_o = vec_add(ac1_o, bc2_o);
+  const int32x4_t sum_e = vec_add(ac1_e, bc2_e);
+  const int32x4_t diff_o = vec_sub(ac2_o, bc1_o);
+  const int32x4_t diff_e = vec_sub(ac2_e, bc1_e);
+
+  // Add rounding offset
+  const int32x4_t rsum_o = vec_add(sum_o, vec_dct_const_rounding);
+  const int32x4_t rsum_e = vec_add(sum_e, vec_dct_const_rounding);
+  const int32x4_t rdiff_o = vec_add(diff_o, vec_dct_const_rounding);
+  const int32x4_t rdiff_e = vec_add(diff_e, vec_dct_const_rounding);
+
+  const int32x4_t ssum_o = vec_sra(rsum_o, vec_dct_const_bits);
+  const int32x4_t ssum_e = vec_sra(rsum_e, vec_dct_const_bits);
+  const int32x4_t sdiff_o = vec_sra(rdiff_o, vec_dct_const_bits);
+  const int32x4_t sdiff_e = vec_sra(rdiff_e, vec_dct_const_bits);
+
+  // There's no pack operation for even and odd, so we need to permute.
+  *add = (int16x8_t)vec_perm(ssum_e, ssum_o, vec_perm_odd_even_pack);
+  *sub = (int16x8_t)vec_perm(sdiff_e, sdiff_o, vec_perm_odd_even_pack);
+}
+
+// While other architectures combine the load and the stage 1 operations,
+// Power9 benchmarking shows no benefit to such an approach.
+static INLINE void load(const int16_t *a, int stride, int16x8_t *b) {
+  // Several combinations of load and shift instructions were tried; this is
+  // the fastest one.
+  {
+    const int16x8_t l0 = vec_vsx_ld(0, a);
+    const int16x8_t l1 = vec_vsx_ld(0, a + stride);
+    const int16x8_t l2 = vec_vsx_ld(0, a + 2 * stride);
+    const int16x8_t l3 = vec_vsx_ld(0, a + 3 * stride);
+    const int16x8_t l4 = vec_vsx_ld(0, a + 4 * stride);
+    const int16x8_t l5 = vec_vsx_ld(0, a + 5 * stride);
+    const int16x8_t l6 = vec_vsx_ld(0, a + 6 * stride);
+    const int16x8_t l7 = vec_vsx_ld(0, a + 7 * stride);
+
+    const int16x8_t l8 = vec_vsx_ld(0, a + 8 * stride);
+    const int16x8_t l9 = vec_vsx_ld(0, a + 9 * stride);
+    const int16x8_t l10 = vec_vsx_ld(0, a + 10 * stride);
+    const int16x8_t l11 = vec_vsx_ld(0, a + 11 * stride);
+    const int16x8_t l12 = vec_vsx_ld(0, a + 12 * stride);
+    const int16x8_t l13 = vec_vsx_ld(0, a + 13 * stride);
+    const int16x8_t l14 = vec_vsx_ld(0, a + 14 * stride);
+    const int16x8_t l15 = vec_vsx_ld(0, a + 15 * stride);
+
+    b[0] = vec_sl(l0, vec_dct_scale_log2);
+    b[1] = vec_sl(l1, vec_dct_scale_log2);
+    b[2] = vec_sl(l2, vec_dct_scale_log2);
+    b[3] = vec_sl(l3, vec_dct_scale_log2);
+    b[4] = vec_sl(l4, vec_dct_scale_log2);
+    b[5] = vec_sl(l5, vec_dct_scale_log2);
+    b[6] = vec_sl(l6, vec_dct_scale_log2);
+    b[7] = vec_sl(l7, vec_dct_scale_log2);
+
+    b[8] = vec_sl(l8, vec_dct_scale_log2);
+    b[9] = vec_sl(l9, vec_dct_scale_log2);
+    b[10] = vec_sl(l10, vec_dct_scale_log2);
+    b[11] = vec_sl(l11, vec_dct_scale_log2);
+    b[12] = vec_sl(l12, vec_dct_scale_log2);
+    b[13] = vec_sl(l13, vec_dct_scale_log2);
+    b[14] = vec_sl(l14, vec_dct_scale_log2);
+    b[15] = vec_sl(l15, vec_dct_scale_log2);
+  }
+  {
+    const int16x8_t l16 = vec_vsx_ld(0, a + 16 * stride);
+    const int16x8_t l17 = vec_vsx_ld(0, a + 17 * stride);
+    const int16x8_t l18 = vec_vsx_ld(0, a + 18 * stride);
+    const int16x8_t l19 = vec_vsx_ld(0, a + 19 * stride);
+    const int16x8_t l20 = vec_vsx_ld(0, a + 20 * stride);
+    const int16x8_t l21 = vec_vsx_ld(0, a + 21 * stride);
+    const int16x8_t l22 = vec_vsx_ld(0, a + 22 * stride);
+    const int16x8_t l23 = vec_vsx_ld(0, a + 23 * stride);
+
+    const int16x8_t l24 = vec_vsx_ld(0, a + 24 * stride);
+    const int16x8_t l25 = vec_vsx_ld(0, a + 25 * stride);
+    const int16x8_t l26 = vec_vsx_ld(0, a + 26 * stride);
+    const int16x8_t l27 = vec_vsx_ld(0, a + 27 * stride);
+    const int16x8_t l28 = vec_vsx_ld(0, a + 28 * stride);
+    const int16x8_t l29 = vec_vsx_ld(0, a + 29 * stride);
+    const int16x8_t l30 = vec_vsx_ld(0, a + 30 * stride);
+    const int16x8_t l31 = vec_vsx_ld(0, a + 31 * stride);
+
+    b[16] = vec_sl(l16, vec_dct_scale_log2);
+    b[17] = vec_sl(l17, vec_dct_scale_log2);
+    b[18] = vec_sl(l18, vec_dct_scale_log2);
+    b[19] = vec_sl(l19, vec_dct_scale_log2);
+    b[20] = vec_sl(l20, vec_dct_scale_log2);
+    b[21] = vec_sl(l21, vec_dct_scale_log2);
+    b[22] = vec_sl(l22, vec_dct_scale_log2);
+    b[23] = vec_sl(l23, vec_dct_scale_log2);
+
+    b[24] = vec_sl(l24, vec_dct_scale_log2);
+    b[25] = vec_sl(l25, vec_dct_scale_log2);
+    b[26] = vec_sl(l26, vec_dct_scale_log2);
+    b[27] = vec_sl(l27, vec_dct_scale_log2);
+    b[28] = vec_sl(l28, vec_dct_scale_log2);
+    b[29] = vec_sl(l29, vec_dct_scale_log2);
+    b[30] = vec_sl(l30, vec_dct_scale_log2);
+    b[31] = vec_sl(l31, vec_dct_scale_log2);
+  }
+}
+
+static INLINE void store(tran_low_t *a, const int16x8_t *b) {
+  vec_vsx_st(b[0], 0, a);
+  vec_vsx_st(b[8], 0, a + 8);
+  vec_vsx_st(b[16], 0, a + 16);
+  vec_vsx_st(b[24], 0, a + 24);
+
+  vec_vsx_st(b[1], 0, a + 32);
+  vec_vsx_st(b[9], 0, a + 40);
+  vec_vsx_st(b[17], 0, a + 48);
+  vec_vsx_st(b[25], 0, a + 56);
+
+  vec_vsx_st(b[2], 0, a + 64);
+  vec_vsx_st(b[10], 0, a + 72);
+  vec_vsx_st(b[18], 0, a + 80);
+  vec_vsx_st(b[26], 0, a + 88);
+
+  vec_vsx_st(b[3], 0, a + 96);
+  vec_vsx_st(b[11], 0, a + 104);
+  vec_vsx_st(b[19], 0, a + 112);
+  vec_vsx_st(b[27], 0, a + 120);
+
+  vec_vsx_st(b[4], 0, a + 128);
+  vec_vsx_st(b[12], 0, a + 136);
+  vec_vsx_st(b[20], 0, a + 144);
+  vec_vsx_st(b[28], 0, a + 152);
+
+  vec_vsx_st(b[5], 0, a + 160);
+  vec_vsx_st(b[13], 0, a + 168);
+  vec_vsx_st(b[21], 0, a + 176);
+  vec_vsx_st(b[29], 0, a + 184);
+
+  vec_vsx_st(b[6], 0, a + 192);
+  vec_vsx_st(b[14], 0, a + 200);
+  vec_vsx_st(b[22], 0, a + 208);
+  vec_vsx_st(b[30], 0, a + 216);
+
+  vec_vsx_st(b[7], 0, a + 224);
+  vec_vsx_st(b[15], 0, a + 232);
+  vec_vsx_st(b[23], 0, a + 240);
+  vec_vsx_st(b[31], 0, a + 248);
+}
+
+// Returns 1 if negative, 0 if positive.
+static INLINE int16x8_t vec_sign_s16(int16x8_t a) {
+  return vec_sr(a, vec_shift_sign_s16);
+}
+
+// Add 2 if positive, 1 if negative, and shift by 2.
+static INLINE int16x8_t sub_round_shift(const int16x8_t a) {
+  const int16x8_t sign = vec_sign_s16(a);
+  return vec_sra(vec_sub(vec_add(a, vec_twos_s16), sign), vec_dct_scale_log2);
+}
+
+// Add 1 if positive, 2 if negative, and shift by 2.
+// In practice, add 1, then add the sign bit, then shift without rounding.
+static INLINE int16x8_t add_round_shift_s16(const int16x8_t a) {
+  const int16x8_t sign = vec_sign_s16(a);
+  return vec_sra(vec_add(vec_add(a, vec_ones_s16), sign), vec_dct_scale_log2);
+}
+
+static void fdct32_vsx(const int16x8_t *in, int16x8_t *out, int pass) {
+  int16x8_t temp0[32];  // Hold stages: 1, 4, 7
+  int16x8_t temp1[32];  // Hold stages: 2, 5
+  int16x8_t temp2[32];  // Hold stages: 3, 6
+  int i;
+
+  // Stage 1
+  // Unrolling this loop actually slows down Power9 benchmarks.
+  for (i = 0; i < 16; i++) {
+    temp0[i] = vec_add(in[i], in[31 - i]);
+    // pass through to stage 3.
+    temp1[i + 16] = vec_sub(in[15 - i], in[i + 16]);
+  }
+
+  // Stage 2
+  // Unrolling this loop actually slows down Power9 benchmarks.
+  for (i = 0; i < 8; i++) {
+    temp1[i] = vec_add(temp0[i], temp0[15 - i]);
+    temp1[i + 8] = vec_sub(temp0[7 - i], temp0[i + 8]);
+  }
+
+  // Apply butterflies (in place) on pass through to stage 3.
+  single_butterfly(temp1[27], temp1[20], &temp1[27], &temp1[20]);
+  single_butterfly(temp1[26], temp1[21], &temp1[26], &temp1[21]);
+  single_butterfly(temp1[25], temp1[22], &temp1[25], &temp1[22]);
+  single_butterfly(temp1[24], temp1[23], &temp1[24], &temp1[23]);
+
+  // Damp the magnitude by 4 so that the intermediate values stay within
+  // the range of 16 bits.
+  if (pass) {
+    temp1[0] = add_round_shift_s16(temp1[0]);
+    temp1[1] = add_round_shift_s16(temp1[1]);
+    temp1[2] = add_round_shift_s16(temp1[2]);
+    temp1[3] = add_round_shift_s16(temp1[3]);
+    temp1[4] = add_round_shift_s16(temp1[4]);
+    temp1[5] = add_round_shift_s16(temp1[5]);
+    temp1[6] = add_round_shift_s16(temp1[6]);
+    temp1[7] = add_round_shift_s16(temp1[7]);
+    temp1[8] = add_round_shift_s16(temp1[8]);
+    temp1[9] = add_round_shift_s16(temp1[9]);
+    temp1[10] = add_round_shift_s16(temp1[10]);
+    temp1[11] = add_round_shift_s16(temp1[11]);
+    temp1[12] = add_round_shift_s16(temp1[12]);
+    temp1[13] = add_round_shift_s16(temp1[13]);
+    temp1[14] = add_round_shift_s16(temp1[14]);
+    temp1[15] = add_round_shift_s16(temp1[15]);
+
+    temp1[16] = add_round_shift_s16(temp1[16]);
+    temp1[17] = add_round_shift_s16(temp1[17]);
+    temp1[18] = add_round_shift_s16(temp1[18]);
+    temp1[19] = add_round_shift_s16(temp1[19]);
+    temp1[20] = add_round_shift_s16(temp1[20]);
+    temp1[21] = add_round_shift_s16(temp1[21]);
+    temp1[22] = add_round_shift_s16(temp1[22]);
+    temp1[23] = add_round_shift_s16(temp1[23]);
+    temp1[24] = add_round_shift_s16(temp1[24]);
+    temp1[25] = add_round_shift_s16(temp1[25]);
+    temp1[26] = add_round_shift_s16(temp1[26]);
+    temp1[27] = add_round_shift_s16(temp1[27]);
+    temp1[28] = add_round_shift_s16(temp1[28]);
+    temp1[29] = add_round_shift_s16(temp1[29]);
+    temp1[30] = add_round_shift_s16(temp1[30]);
+    temp1[31] = add_round_shift_s16(temp1[31]);
+  }
+
+  // Stage 3
+  temp2[0] = vec_add(temp1[0], temp1[7]);
+  temp2[1] = vec_add(temp1[1], temp1[6]);
+  temp2[2] = vec_add(temp1[2], temp1[5]);
+  temp2[3] = vec_add(temp1[3], temp1[4]);
+  temp2[5] = vec_sub(temp1[2], temp1[5]);
+  temp2[6] = vec_sub(temp1[1], temp1[6]);
+  temp2[8] = temp1[8];
+  temp2[9] = temp1[9];
+
+  single_butterfly(temp1[13], temp1[10], &temp2[13], &temp2[10]);
+  single_butterfly(temp1[12], temp1[11], &temp2[12], &temp2[11]);
+  temp2[14] = temp1[14];
+  temp2[15] = temp1[15];
+
+  temp2[18] = vec_add(temp1[18], temp1[21]);
+  temp2[19] = vec_add(temp1[19], temp1[20]);
+
+  temp2[20] = vec_sub(temp1[19], temp1[20]);
+  temp2[21] = vec_sub(temp1[18], temp1[21]);
+
+  temp2[26] = vec_sub(temp1[29], temp1[26]);
+  temp2[27] = vec_sub(temp1[28], temp1[27]);
+
+  temp2[28] = vec_add(temp1[28], temp1[27]);
+  temp2[29] = vec_add(temp1[29], temp1[26]);
+
+  // Pass through Stage 4
+  temp0[7] = vec_sub(temp1[0], temp1[7]);
+  temp0[4] = vec_sub(temp1[3], temp1[4]);
+  temp0[16] = vec_add(temp1[16], temp1[23]);
+  temp0[17] = vec_add(temp1[17], temp1[22]);
+  temp0[22] = vec_sub(temp1[17], temp1[22]);
+  temp0[23] = vec_sub(temp1[16], temp1[23]);
+  temp0[24] = vec_sub(temp1[31], temp1[24]);
+  temp0[25] = vec_sub(temp1[30], temp1[25]);
+  temp0[30] = vec_add(temp1[30], temp1[25]);
+  temp0[31] = vec_add(temp1[31], temp1[24]);
+
+  // Stage 4
+  temp0[0] = vec_add(temp2[0], temp2[3]);
+  temp0[1] = vec_add(temp2[1], temp2[2]);
+  temp0[2] = vec_sub(temp2[1], temp2[2]);
+  temp0[3] = vec_sub(temp2[0], temp2[3]);
+  single_butterfly(temp2[6], temp2[5], &temp0[6], &temp0[5]);
+
+  temp0[9] = vec_add(temp2[9], temp2[10]);
+  temp0[10] = vec_sub(temp2[9], temp2[10]);
+  temp0[13] = vec_sub(temp2[14], temp2[13]);
+  temp0[14] = vec_add(temp2[14], temp2[13]);
+
+  double_butterfly(temp2[29], cospi8_v, temp2[18], cospi24_v, &temp0[29],
+                   &temp0[18]);
+  double_butterfly(temp2[28], cospi8_v, temp2[19], cospi24_v, &temp0[28],
+                   &temp0[19]);
+  double_butterfly(temp2[27], cospi24_v, temp2[20], cospi8m_v, &temp0[27],
+                   &temp0[20]);
+  double_butterfly(temp2[26], cospi24_v, temp2[21], cospi8m_v, &temp0[26],
+                   &temp0[21]);
+
+  // Pass through Stage 5
+  temp1[8] = vec_add(temp2[8], temp2[11]);
+  temp1[11] = vec_sub(temp2[8], temp2[11]);
+  temp1[12] = vec_sub(temp2[15], temp2[12]);
+  temp1[15] = vec_add(temp2[15], temp2[12]);
+
+  // Stage 5
+  // 0 and 1 pass through to 0 and 16 at the end
+  single_butterfly(temp0[0], temp0[1], &out[0], &out[16]);
+
+  // 2 and 3 pass through to 8 and 24 at the end
+  double_butterfly(temp0[3], cospi8_v, temp0[2], cospi24_v, &out[8], &out[24]);
+
+  temp1[4] = vec_add(temp0[4], temp0[5]);
+  temp1[5] = vec_sub(temp0[4], temp0[5]);
+  temp1[6] = vec_sub(temp0[7], temp0[6]);
+  temp1[7] = vec_add(temp0[7], temp0[6]);
+
+  double_butterfly(temp0[14], cospi8_v, temp0[9], cospi24_v, &temp1[14],
+                   &temp1[9]);
+  double_butterfly(temp0[13], cospi24_v, temp0[10], cospi8m_v, &temp1[13],
+                   &temp1[10]);
+
+  temp1[17] = vec_add(temp0[17], temp0[18]);
+  temp1[18] = vec_sub(temp0[17], temp0[18]);
+
+  temp1[21] = vec_sub(temp0[22], temp0[21]);
+  temp1[22] = vec_add(temp0[22], temp0[21]);
+
+  temp1[25] = vec_add(temp0[25], temp0[26]);
+  temp1[26] = vec_sub(temp0[25], temp0[26]);
+
+  temp1[29] = vec_sub(temp0[30], temp0[29]);
+  temp1[30] = vec_add(temp0[30], temp0[29]);
+
+  // Pass through Stage 6
+  temp2[16] = vec_add(temp0[16], temp0[19]);
+  temp2[19] = vec_sub(temp0[16], temp0[19]);
+  temp2[20] = vec_sub(temp0[23], temp0[20]);
+  temp2[23] = vec_add(temp0[23], temp0[20]);
+  temp2[24] = vec_add(temp0[24], temp0[27]);
+  temp2[27] = vec_sub(temp0[24], temp0[27]);
+  temp2[28] = vec_sub(temp0[31], temp0[28]);
+  temp2[31] = vec_add(temp0[31], temp0[28]);
+
+  // Stage 6
+  // 4 and 7 pass through to 4 and 28 at the end
+  double_butterfly(temp1[7], cospi4_v, temp1[4], cospi28_v, &out[4], &out[28]);
+  // 5 and 6 pass through to 20 and 12 at the end
+  double_butterfly(temp1[6], cospi20_v, temp1[5], cospi12_v, &out[20],
+                   &out[12]);
+  temp2[8] = vec_add(temp1[8], temp1[9]);
+  temp2[9] = vec_sub(temp1[8], temp1[9]);
+  temp2[10] = vec_sub(temp1[11], temp1[10]);
+  temp2[11] = vec_add(temp1[11], temp1[10]);
+  temp2[12] = vec_add(temp1[12], temp1[13]);
+  temp2[13] = vec_sub(temp1[12], temp1[13]);
+  temp2[14] = vec_sub(temp1[15], temp1[14]);
+  temp2[15] = vec_add(temp1[15], temp1[14]);
+
+  double_butterfly(temp1[30], cospi4_v, temp1[17], cospi28_v, &temp2[30],
+                   &temp2[17]);
+  double_butterfly(temp1[29], cospi28_v, temp1[18], cospi4m_v, &temp2[29],
+                   &temp2[18]);
+  double_butterfly(temp1[26], cospi20_v, temp1[21], cospi12_v, &temp2[26],
+                   &temp2[21]);
+  double_butterfly(temp1[25], cospi12_v, temp1[22], cospi20m_v, &temp2[25],
+                   &temp2[22]);
+
+  // Stage 7
+  double_butterfly(temp2[15], cospi2_v, temp2[8], cospi30_v, &out[2], &out[30]);
+  double_butterfly(temp2[14], cospi18_v, temp2[9], cospi14_v, &out[18],
+                   &out[14]);
+  double_butterfly(temp2[13], cospi10_v, temp2[10], cospi22_v, &out[10],
+                   &out[22]);
+  double_butterfly(temp2[12], cospi26_v, temp2[11], cospi6_v, &out[26],
+                   &out[6]);
+
+  temp0[16] = vec_add(temp2[16], temp2[17]);
+  temp0[17] = vec_sub(temp2[16], temp2[17]);
+  temp0[18] = vec_sub(temp2[19], temp2[18]);
+  temp0[19] = vec_add(temp2[19], temp2[18]);
+  temp0[20] = vec_add(temp2[20], temp2[21]);
+  temp0[21] = vec_sub(temp2[20], temp2[21]);
+  temp0[22] = vec_sub(temp2[23], temp2[22]);
+  temp0[23] = vec_add(temp2[23], temp2[22]);
+  temp0[24] = vec_add(temp2[24], temp2[25]);
+  temp0[25] = vec_sub(temp2[24], temp2[25]);
+  temp0[26] = vec_sub(temp2[27], temp2[26]);
+  temp0[27] = vec_add(temp2[27], temp2[26]);
+  temp0[28] = vec_add(temp2[28], temp2[29]);
+  temp0[29] = vec_sub(temp2[28], temp2[29]);
+  temp0[30] = vec_sub(temp2[31], temp2[30]);
+  temp0[31] = vec_add(temp2[31], temp2[30]);
+
+  // Final stage --- output indices are bit-reversed.
+  double_butterfly(temp0[31], cospi1_v, temp0[16], cospi31_v, &out[1],
+                   &out[31]);
+  double_butterfly(temp0[30], cospi17_v, temp0[17], cospi15_v, &out[17],
+                   &out[15]);
+  double_butterfly(temp0[29], cospi9_v, temp0[18], cospi23_v, &out[9],
+                   &out[23]);
+  double_butterfly(temp0[28], cospi25_v, temp0[19], cospi7_v, &out[25],
+                   &out[7]);
+  double_butterfly(temp0[27], cospi5_v, temp0[20], cospi27_v, &out[5],
+                   &out[27]);
+  double_butterfly(temp0[26], cospi21_v, temp0[21], cospi11_v, &out[21],
+                   &out[11]);
+  double_butterfly(temp0[25], cospi13_v, temp0[22], cospi19_v, &out[13],
+                   &out[19]);
+  double_butterfly(temp0[24], cospi29_v, temp0[23], cospi3_v, &out[29],
+                   &out[3]);
+
+  if (pass == 0) {
+    for (i = 0; i < 32; i++) {
+      out[i] = sub_round_shift(out[i]);
+    }
+  }
+}
+
+void vpx_fdct32x32_rd_vsx(const int16_t *input, tran_low_t *out, int stride) {
+  int16x8_t temp0[32];
+  int16x8_t temp1[32];
+  int16x8_t temp2[32];
+  int16x8_t temp3[32];
+  int16x8_t temp4[32];
+  int16x8_t temp5[32];
+  int16x8_t temp6[32];
+
+  // Process in 8x32 columns.
+  load(input, stride, temp0);
+  fdct32_vsx(temp0, temp1, 0);
+
+  load(input + 8, stride, temp0);
+  fdct32_vsx(temp0, temp2, 0);
+
+  load(input + 16, stride, temp0);
+  fdct32_vsx(temp0, temp3, 0);
+
+  load(input + 24, stride, temp0);
+  fdct32_vsx(temp0, temp4, 0);
+
+  // Generate the top row by munging the first set of 8 from each one
+  // together.
+  transpose_8x8(&temp1[0], &temp0[0]);
+  transpose_8x8(&temp2[0], &temp0[8]);
+  transpose_8x8(&temp3[0], &temp0[16]);
+  transpose_8x8(&temp4[0], &temp0[24]);
+
+  fdct32_vsx(temp0, temp5, 1);
+
+  transpose_8x8(&temp5[0], &temp6[0]);
+  transpose_8x8(&temp5[8], &temp6[8]);
+  transpose_8x8(&temp5[16], &temp6[16]);
+  transpose_8x8(&temp5[24], &temp6[24]);
+
+  store(out, temp6);
+
+  // Second row of 8x32.
+  transpose_8x8(&temp1[8], &temp0[0]);
+  transpose_8x8(&temp2[8], &temp0[8]);
+  transpose_8x8(&temp3[8], &temp0[16]);
+  transpose_8x8(&temp4[8], &temp0[24]);
+
+  fdct32_vsx(temp0, temp5, 1);
+
+  transpose_8x8(&temp5[0], &temp6[0]);
+  transpose_8x8(&temp5[8], &temp6[8]);
+  transpose_8x8(&temp5[16], &temp6[16]);
+  transpose_8x8(&temp5[24], &temp6[24]);
+
+  store(out + 8 * 32, temp6);
+
+  // Third row of 8x32
+  transpose_8x8(&temp1[16], &temp0[0]);
+  transpose_8x8(&temp2[16], &temp0[8]);
+  transpose_8x8(&temp3[16], &temp0[16]);
+  transpose_8x8(&temp4[16], &temp0[24]);
+
+  fdct32_vsx(temp0, temp5, 1);
+
+  transpose_8x8(&temp5[0], &temp6[0]);
+  transpose_8x8(&temp5[8], &temp6[8]);
+  transpose_8x8(&temp5[16], &temp6[16]);
+  transpose_8x8(&temp5[24], &temp6[24]);
+
+  store(out + 16 * 32, temp6);
+
+  // Final row of 8x32.
+  transpose_8x8(&temp1[24], &temp0[0]);
+  transpose_8x8(&temp2[24], &temp0[8]);
+  transpose_8x8(&temp3[24], &temp0[16]);
+  transpose_8x8(&temp4[24], &temp0[24]);
+
+  fdct32_vsx(temp0, temp5, 1);
+
+  transpose_8x8(&temp5[0], &temp6[0]);
+  transpose_8x8(&temp5[8], &temp6[8]);
+  transpose_8x8(&temp5[16], &temp6[16]);
+  transpose_8x8(&temp5[24], &temp6[24]);
+
+  store(out + 24 * 32, temp6);
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/intrapred_vsx.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/intrapred_vsx.c
index 6273460..a4c8322 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/intrapred_vsx.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/intrapred_vsx.c
@@ -35,6 +35,8 @@
   }
 }
 
+// TODO(crbug.com/webm/1522): Fix test failures.
+#if 0
 static const uint32x4_t mask4 = { 0, 0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF };
 
 void vpx_h_predictor_4x4_vsx(uint8_t *dst, ptrdiff_t stride,
@@ -87,6 +89,7 @@
   dst += stride;
   vec_vsx_st(xxpermdi(v7, vec_vsx_ld(0, dst), 1), 0, dst);
 }
+#endif
 
 void vpx_h_predictor_16x16_vsx(uint8_t *dst, ptrdiff_t stride,
                                const uint8_t *above, const uint8_t *left) {
@@ -233,6 +236,8 @@
   H_PREDICTOR_32(v15_1);
 }
 
+// TODO(crbug.com/webm/1522): Fix test failures.
+#if 0
 void vpx_tm_predictor_4x4_vsx(uint8_t *dst, ptrdiff_t stride,
                               const uint8_t *above, const uint8_t *left) {
   const int16x8_t tl = unpack_to_s16_h(vec_splat(vec_vsx_ld(-1, above), 0));
@@ -311,6 +316,7 @@
   val = vec_sub(vec_add(vec_splat(l, 7), a), tl);
   vec_vsx_st(vec_packsu(val, tmp), 0, dst);
 }
+#endif
 
 static void tm_predictor_16x8(uint8_t *dst, const ptrdiff_t stride, int16x8_t l,
                               int16x8_t ah, int16x8_t al, int16x8_t tl) {
@@ -547,6 +553,8 @@
   dc_fill_predictor_32x32(dst, stride, avg32(above));
 }
 
+// TODO(crbug.com/webm/1522): Fix test failures.
+#if 0
 static uint8x16_t dc_avg8(const uint8_t *above, const uint8_t *left) {
   const uint8x16_t a0 = vec_vsx_ld(0, above);
   const uint8x16_t l0 = vec_vsx_ld(0, left);
@@ -559,6 +567,7 @@
   return vec_splat(vec_pack(vec_pack(avg, vec_splat_u32(0)), vec_splat_u16(0)),
                    3);
 }
+#endif
 
 static uint8x16_t dc_avg16(const uint8_t *above, const uint8_t *left) {
   const uint8x16_t a0 = vec_vsx_ld(0, above);
@@ -573,10 +582,13 @@
                    3);
 }
 
+// TODO(crbug.com/webm/1522): Fix test failures.
+#if 0
 void vpx_dc_predictor_8x8_vsx(uint8_t *dst, ptrdiff_t stride,
                               const uint8_t *above, const uint8_t *left) {
   dc_fill_predictor_8x8(dst, stride, dc_avg8(above, left));
 }
+#endif
 
 void vpx_dc_predictor_16x16_vsx(uint8_t *dst, ptrdiff_t stride,
                                 const uint8_t *above, const uint8_t *left) {
@@ -615,6 +627,8 @@
 static const uint8x16_t sl1 = { 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8,
                                 0x9, 0xA, 0xB, 0xC, 0xD, 0xE, 0xF, 0x10 };
 
+// TODO(crbug.com/webm/1522): Fix test failures.
+#if 0
 void vpx_d45_predictor_8x8_vsx(uint8_t *dst, ptrdiff_t stride,
                                const uint8_t *above, const uint8_t *left) {
   const uint8x16_t af = vec_vsx_ld(0, above);
@@ -633,6 +647,7 @@
     row = vec_perm(row, above_right, sl1);
   }
 }
+#endif
 
 void vpx_d45_predictor_16x16_vsx(uint8_t *dst, ptrdiff_t stride,
                                  const uint8_t *above, const uint8_t *left) {
@@ -674,6 +689,8 @@
   }
 }
 
+// TODO(crbug.com/webm/1522): Fix test failures.
+#if 0
 void vpx_d63_predictor_8x8_vsx(uint8_t *dst, ptrdiff_t stride,
                                const uint8_t *above, const uint8_t *left) {
   const uint8x16_t af = vec_vsx_ld(0, above);
@@ -696,6 +713,7 @@
     row1 = vec_perm(row1, above_right, sl1);
   }
 }
+#endif
 
 void vpx_d63_predictor_16x16_vsx(uint8_t *dst, ptrdiff_t stride,
                                  const uint8_t *above, const uint8_t *left) {
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/inv_txfm_vsx.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/inv_txfm_vsx.c
index f095cb0..e99412e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/inv_txfm_vsx.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/inv_txfm_vsx.c
@@ -14,67 +14,129 @@
 
 #include "vpx_dsp/ppc/bitdepth_conversion_vsx.h"
 #include "vpx_dsp/ppc/types_vsx.h"
+#include "vpx_dsp/ppc/inv_txfm_vsx.h"
 
 #include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/inv_txfm.h"
 
-static int16x8_t cospi1_v = { 16364, 16364, 16364, 16364,
-                              16364, 16364, 16364, 16364 };
-static int16x8_t cospi2_v = { 16305, 16305, 16305, 16305,
-                              16305, 16305, 16305, 16305 };
-static int16x8_t cospi3_v = { 16207, 16207, 16207, 16207,
-                              16207, 16207, 16207, 16207 };
-static int16x8_t cospi4_v = { 16069, 16069, 16069, 16069,
-                              16069, 16069, 16069, 16069 };
-static int16x8_t cospi4m_v = { -16069, -16069, -16069, -16069,
-                               -16069, -16069, -16069, -16069 };
-static int16x8_t cospi5_v = { 15893, 15893, 15893, 15893,
-                              15893, 15893, 15893, 15893 };
-static int16x8_t cospi6_v = { 15679, 15679, 15679, 15679,
-                              15679, 15679, 15679, 15679 };
-static int16x8_t cospi7_v = { 15426, 15426, 15426, 15426,
-                              15426, 15426, 15426, 15426 };
-static int16x8_t cospi8_v = { 15137, 15137, 15137, 15137,
-                              15137, 15137, 15137, 15137 };
-static int16x8_t cospi8m_v = { -15137, -15137, -15137, -15137,
-                               -15137, -15137, -15137, -15137 };
-static int16x8_t cospi9_v = { 14811, 14811, 14811, 14811,
-                              14811, 14811, 14811, 14811 };
-static int16x8_t cospi10_v = { 14449, 14449, 14449, 14449,
-                               14449, 14449, 14449, 14449 };
-static int16x8_t cospi11_v = { 14053, 14053, 14053, 14053,
-                               14053, 14053, 14053, 14053 };
-static int16x8_t cospi12_v = { 13623, 13623, 13623, 13623,
-                               13623, 13623, 13623, 13623 };
-static int16x8_t cospi13_v = { 13160, 13160, 13160, 13160,
-                               13160, 13160, 13160, 13160 };
-static int16x8_t cospi14_v = { 12665, 12665, 12665, 12665,
-                               12665, 12665, 12665, 12665 };
-static int16x8_t cospi15_v = { 12140, 12140, 12140, 12140,
-                               12140, 12140, 12140, 12140 };
-static int16x8_t cospi16_v = { 11585, 11585, 11585, 11585,
-                               11585, 11585, 11585, 11585 };
-static int16x8_t cospi17_v = { 11003, 11003, 11003, 11003,
-                               11003, 11003, 11003, 11003 };
-static int16x8_t cospi18_v = { 10394, 10394, 10394, 10394,
-                               10394, 10394, 10394, 10394 };
-static int16x8_t cospi19_v = { 9760, 9760, 9760, 9760, 9760, 9760, 9760, 9760 };
-static int16x8_t cospi20_v = { 9102, 9102, 9102, 9102, 9102, 9102, 9102, 9102 };
-static int16x8_t cospi20m_v = { -9102, -9102, -9102, -9102,
-                                -9102, -9102, -9102, -9102 };
-static int16x8_t cospi21_v = { 8423, 8423, 8423, 8423, 8423, 8423, 8423, 8423 };
-static int16x8_t cospi22_v = { 7723, 7723, 7723, 7723, 7723, 7723, 7723, 7723 };
-static int16x8_t cospi23_v = { 7005, 7005, 7005, 7005, 7005, 7005, 7005, 7005 };
-static int16x8_t cospi24_v = { 6270, 6270, 6270, 6270, 6270, 6270, 6270, 6270 };
-static int16x8_t cospi24_mv = { -6270, -6270, -6270, -6270,
-                                -6270, -6270, -6270, -6270 };
-static int16x8_t cospi25_v = { 5520, 5520, 5520, 5520, 5520, 5520, 5520, 5520 };
-static int16x8_t cospi26_v = { 4756, 4756, 4756, 4756, 4756, 4756, 4756, 4756 };
-static int16x8_t cospi27_v = { 3981, 3981, 3981, 3981, 3981, 3981, 3981, 3981 };
-static int16x8_t cospi28_v = { 3196, 3196, 3196, 3196, 3196, 3196, 3196, 3196 };
-static int16x8_t cospi29_v = { 2404, 2404, 2404, 2404, 2404, 2404, 2404, 2404 };
-static int16x8_t cospi30_v = { 1606, 1606, 1606, 1606, 1606, 1606, 1606, 1606 };
-static int16x8_t cospi31_v = { 804, 804, 804, 804, 804, 804, 804, 804 };
+static const int16x8_t cospi1_v = { 16364, 16364, 16364, 16364,
+                                    16364, 16364, 16364, 16364 };
+static const int16x8_t cospi1m_v = { -16364, -16364, -16364, -16364,
+                                     -16364, -16364, -16364, -16364 };
+static const int16x8_t cospi2_v = { 16305, 16305, 16305, 16305,
+                                    16305, 16305, 16305, 16305 };
+static const int16x8_t cospi2m_v = { -16305, -16305, -16305, -16305,
+                                     -16305, -16305, -16305, -16305 };
+static const int16x8_t cospi3_v = { 16207, 16207, 16207, 16207,
+                                    16207, 16207, 16207, 16207 };
+static const int16x8_t cospi4_v = { 16069, 16069, 16069, 16069,
+                                    16069, 16069, 16069, 16069 };
+static const int16x8_t cospi4m_v = { -16069, -16069, -16069, -16069,
+                                     -16069, -16069, -16069, -16069 };
+static const int16x8_t cospi5_v = { 15893, 15893, 15893, 15893,
+                                    15893, 15893, 15893, 15893 };
+static const int16x8_t cospi5m_v = { -15893, -15893, -15893, -15893,
+                                     -15893, -15893, -15893, -15893 };
+static const int16x8_t cospi6_v = { 15679, 15679, 15679, 15679,
+                                    15679, 15679, 15679, 15679 };
+static const int16x8_t cospi7_v = { 15426, 15426, 15426, 15426,
+                                    15426, 15426, 15426, 15426 };
+static const int16x8_t cospi8_v = { 15137, 15137, 15137, 15137,
+                                    15137, 15137, 15137, 15137 };
+static const int16x8_t cospi8m_v = { -15137, -15137, -15137, -15137,
+                                     -15137, -15137, -15137, -15137 };
+static const int16x8_t cospi9_v = { 14811, 14811, 14811, 14811,
+                                    14811, 14811, 14811, 14811 };
+static const int16x8_t cospi9m_v = { -14811, -14811, -14811, -14811,
+                                     -14811, -14811, -14811, -14811 };
+static const int16x8_t cospi10_v = { 14449, 14449, 14449, 14449,
+                                     14449, 14449, 14449, 14449 };
+static const int16x8_t cospi10m_v = { -14449, -14449, -14449, -14449,
+                                      -14449, -14449, -14449, -14449 };
+static const int16x8_t cospi11_v = { 14053, 14053, 14053, 14053,
+                                     14053, 14053, 14053, 14053 };
+static const int16x8_t cospi12_v = { 13623, 13623, 13623, 13623,
+                                     13623, 13623, 13623, 13623 };
+static const int16x8_t cospi12m_v = { -13623, -13623, -13623, -13623,
+                                      -13623, -13623, -13623, -13623 };
+static const int16x8_t cospi13_v = { 13160, 13160, 13160, 13160,
+                                     13160, 13160, 13160, 13160 };
+static const int16x8_t cospi13m_v = { -13160, -13160, -13160, -13160,
+                                      -13160, -13160, -13160, -13160 };
+static const int16x8_t cospi14_v = { 12665, 12665, 12665, 12665,
+                                     12665, 12665, 12665, 12665 };
+static const int16x8_t cospi15_v = { 12140, 12140, 12140, 12140,
+                                     12140, 12140, 12140, 12140 };
+static const int16x8_t cospi16_v = { 11585, 11585, 11585, 11585,
+                                     11585, 11585, 11585, 11585 };
+static const int16x8_t cospi16m_v = { -11585, -11585, -11585, -11585,
+                                      -11585, -11585, -11585, -11585 };
+static const int16x8_t cospi17_v = { 11003, 11003, 11003, 11003,
+                                     11003, 11003, 11003, 11003 };
+static const int16x8_t cospi17m_v = { -11003, -11003, -11003, -11003,
+                                      -11003, -11003, -11003, -11003 };
+static const int16x8_t cospi18_v = { 10394, 10394, 10394, 10394,
+                                     10394, 10394, 10394, 10394 };
+static const int16x8_t cospi18m_v = { -10394, -10394, -10394, -10394,
+                                      -10394, -10394, -10394, -10394 };
+static const int16x8_t cospi19_v = { 9760, 9760, 9760, 9760,
+                                     9760, 9760, 9760, 9760 };
+static const int16x8_t cospi20_v = { 9102, 9102, 9102, 9102,
+                                     9102, 9102, 9102, 9102 };
+static const int16x8_t cospi20m_v = { -9102, -9102, -9102, -9102,
+                                      -9102, -9102, -9102, -9102 };
+static const int16x8_t cospi21_v = { 8423, 8423, 8423, 8423,
+                                     8423, 8423, 8423, 8423 };
+static const int16x8_t cospi21m_v = { -8423, -8423, -8423, -8423,
+                                      -8423, -8423, -8423, -8423 };
+static const int16x8_t cospi22_v = { 7723, 7723, 7723, 7723,
+                                     7723, 7723, 7723, 7723 };
+static const int16x8_t cospi23_v = { 7005, 7005, 7005, 7005,
+                                     7005, 7005, 7005, 7005 };
+static const int16x8_t cospi24_v = { 6270, 6270, 6270, 6270,
+                                     6270, 6270, 6270, 6270 };
+static const int16x8_t cospi24m_v = { -6270, -6270, -6270, -6270,
+                                      -6270, -6270, -6270, -6270 };
+static const int16x8_t cospi25_v = { 5520, 5520, 5520, 5520,
+                                     5520, 5520, 5520, 5520 };
+static const int16x8_t cospi25m_v = { -5520, -5520, -5520, -5520,
+                                      -5520, -5520, -5520, -5520 };
+static const int16x8_t cospi26_v = { 4756, 4756, 4756, 4756,
+                                     4756, 4756, 4756, 4756 };
+static const int16x8_t cospi26m_v = { -4756, -4756, -4756, -4756,
+                                      -4756, -4756, -4756, -4756 };
+static const int16x8_t cospi27_v = { 3981, 3981, 3981, 3981,
+                                     3981, 3981, 3981, 3981 };
+static const int16x8_t cospi28_v = { 3196, 3196, 3196, 3196,
+                                     3196, 3196, 3196, 3196 };
+static const int16x8_t cospi28m_v = { -3196, -3196, -3196, -3196,
+                                      -3196, -3196, -3196, -3196 };
+static const int16x8_t cospi29_v = { 2404, 2404, 2404, 2404,
+                                     2404, 2404, 2404, 2404 };
+static const int16x8_t cospi29m_v = { -2404, -2404, -2404, -2404,
+                                      -2404, -2404, -2404, -2404 };
+static const int16x8_t cospi30_v = { 1606, 1606, 1606, 1606,
+                                     1606, 1606, 1606, 1606 };
+static const int16x8_t cospi31_v = { 804, 804, 804, 804, 804, 804, 804, 804 };
+
+static const int16x8_t sinpi_1_9_v = { 5283, 5283, 5283, 5283,
+                                       5283, 5283, 5283, 5283 };
+static const int16x8_t sinpi_2_9_v = { 9929, 9929, 9929, 9929,
+                                       9929, 9929, 9929, 9929 };
+static const int16x8_t sinpi_3_9_v = { 13377, 13377, 13377, 13377,
+                                       13377, 13377, 13377, 13377 };
+static const int16x8_t sinpi_4_9_v = { 15212, 15212, 15212, 15212,
+                                       15212, 15212, 15212, 15212 };
+
+static uint8x16_t tr8_mask0 = {
+  0x0,  0x1,  0x2,  0x3,  0x4,  0x5,  0x6,  0x7,
+  0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17
+};
+
+static uint8x16_t tr8_mask1 = {
+  0x8,  0x9,  0xA,  0xB,  0xC,  0xD,  0xE,  0xF,
+  0x18, 0x19, 0x1A, 0x1B, 0x1C, 0x1D, 0x1E, 0x1F
+};
 
 #define ROUND_SHIFT_INIT                                               \
   const int32x4_t shift = vec_sl(vec_splat_s32(1), vec_splat_u32(13)); \
@@ -107,20 +169,18 @@
   out1 = vec_sub(step0, step1);                                               \
   out1 = vec_perm(out1, out1, mask0);
 
-void vpx_idct4x4_16_add_vsx(const tran_low_t *input, uint8_t *dest,
+#define PACK_STORE(v0, v1)                                \
+  tmp16_0 = vec_add(vec_perm(d_u0, d_u1, tr8_mask0), v0); \
+  tmp16_1 = vec_add(vec_perm(d_u2, d_u3, tr8_mask0), v1); \
+  output_v = vec_packsu(tmp16_0, tmp16_1);                \
+                                                          \
+  vec_vsx_st(output_v, 0, tmp_dest);                      \
+  for (i = 0; i < 4; i++)                                 \
+    for (j = 0; j < 4; j++) dest[j * stride + i] = tmp_dest[j * 4 + i];
+
+void vpx_round_store4x4_vsx(int16x8_t *in, int16x8_t *out, uint8_t *dest,
                             int stride) {
   int i, j;
-  int32x4_t temp1, temp2, temp3, temp4;
-  int16x8_t step0, step1, tmp16_0, tmp16_1, t_out0, t_out1;
-  uint8x16_t mask0 = { 0x8, 0x9, 0xA, 0xB, 0xC, 0xD, 0xE, 0xF,
-                       0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7 };
-  uint8x16_t mask1 = { 0x0,  0x1,  0x2,  0x3,  0x4,  0x5,  0x6,  0x7,
-                       0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17 };
-  int16x8_t v0 = load_tran_low(0, input);
-  int16x8_t v1 = load_tran_low(8 * sizeof(*input), input);
-  int16x8_t t0 = vec_mergeh(v0, v1);
-  int16x8_t t1 = vec_mergel(v0, v1);
-
   uint8x16_t dest0 = vec_vsx_ld(0, dest);
   uint8x16_t dest1 = vec_vsx_ld(stride, dest);
   uint8x16_t dest2 = vec_vsx_ld(2 * stride, dest);
@@ -130,31 +190,45 @@
   int16x8_t d_u1 = (int16x8_t)vec_mergeh(dest1, zerov);
   int16x8_t d_u2 = (int16x8_t)vec_mergeh(dest2, zerov);
   int16x8_t d_u3 = (int16x8_t)vec_mergeh(dest3, zerov);
+  int16x8_t tmp16_0, tmp16_1;
   uint8x16_t output_v;
   uint8_t tmp_dest[16];
-  ROUND_SHIFT_INIT
   PIXEL_ADD_INIT;
 
-  v0 = vec_mergeh(t0, t1);
-  v1 = vec_mergel(t0, t1);
+  PIXEL_ADD4(out[0], in[0]);
+  PIXEL_ADD4(out[1], in[1]);
 
-  IDCT4(v0, v1, t_out0, t_out1);
-  // transpose
-  t0 = vec_mergeh(t_out0, t_out1);
-  t1 = vec_mergel(t_out0, t_out1);
-  v0 = vec_mergeh(t0, t1);
-  v1 = vec_mergel(t0, t1);
-  IDCT4(v0, v1, t_out0, t_out1);
+  PACK_STORE(out[0], out[1]);
+}
 
-  PIXEL_ADD4(v0, t_out0);
-  PIXEL_ADD4(v1, t_out1);
-  tmp16_0 = vec_add(vec_perm(d_u0, d_u1, mask1), v0);
-  tmp16_1 = vec_add(vec_perm(d_u2, d_u3, mask1), v1);
-  output_v = vec_packsu(tmp16_0, tmp16_1);
+void vpx_idct4_vsx(int16x8_t *in, int16x8_t *out) {
+  int32x4_t temp1, temp2, temp3, temp4;
+  int16x8_t step0, step1, tmp16_0;
+  uint8x16_t mask0 = { 0x8, 0x9, 0xA, 0xB, 0xC, 0xD, 0xE, 0xF,
+                       0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7 };
+  int16x8_t t0 = vec_mergeh(in[0], in[1]);
+  int16x8_t t1 = vec_mergel(in[0], in[1]);
+  ROUND_SHIFT_INIT
 
-  vec_vsx_st(output_v, 0, tmp_dest);
-  for (i = 0; i < 4; i++)
-    for (j = 0; j < 4; j++) dest[j * stride + i] = tmp_dest[j * 4 + i];
+  in[0] = vec_mergeh(t0, t1);
+  in[1] = vec_mergel(t0, t1);
+
+  IDCT4(in[0], in[1], out[0], out[1]);
+}
+
+void vpx_idct4x4_16_add_vsx(const tran_low_t *input, uint8_t *dest,
+                            int stride) {
+  int16x8_t in[2], out[2];
+
+  in[0] = load_tran_low(0, input);
+  in[1] = load_tran_low(8 * sizeof(*input), input);
+  // Rows
+  vpx_idct4_vsx(in, out);
+
+  // Columns
+  vpx_idct4_vsx(out, in);
+
+  vpx_round_store4x4_vsx(in, out, dest, stride);
 }
 
 #define TRANSPOSE8x8(in0, in1, in2, in3, in4, in5, in6, in7, out0, out1, out2, \
@@ -256,28 +330,20 @@
 #define PIXEL_ADD(in, out, add, shiftx) \
   out = vec_add(vec_sra(vec_add(in, add), shiftx), out);
 
-static uint8x16_t tr8_mask0 = {
-  0x0,  0x1,  0x2,  0x3,  0x4,  0x5,  0x6,  0x7,
-  0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17
-};
-static uint8x16_t tr8_mask1 = {
-  0x8,  0x9,  0xA,  0xB,  0xC,  0xD,  0xE,  0xF,
-  0x18, 0x19, 0x1A, 0x1B, 0x1C, 0x1D, 0x1E, 0x1F
-};
-void vpx_idct8x8_64_add_vsx(const tran_low_t *input, uint8_t *dest,
-                            int stride) {
-  int32x4_t temp10, temp11;
+void vpx_idct8_vsx(int16x8_t *in, int16x8_t *out) {
   int16x8_t step0, step1, step2, step3, step4, step5, step6, step7;
-  int16x8_t tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7, tmp16_0, tmp16_1,
-      tmp16_2, tmp16_3;
-  int16x8_t src0 = load_tran_low(0, input);
-  int16x8_t src1 = load_tran_low(8 * sizeof(*input), input);
-  int16x8_t src2 = load_tran_low(16 * sizeof(*input), input);
-  int16x8_t src3 = load_tran_low(24 * sizeof(*input), input);
-  int16x8_t src4 = load_tran_low(32 * sizeof(*input), input);
-  int16x8_t src5 = load_tran_low(40 * sizeof(*input), input);
-  int16x8_t src6 = load_tran_low(48 * sizeof(*input), input);
-  int16x8_t src7 = load_tran_low(56 * sizeof(*input), input);
+  int16x8_t tmp16_0, tmp16_1, tmp16_2, tmp16_3;
+  int32x4_t temp10, temp11;
+  ROUND_SHIFT_INIT;
+
+  TRANSPOSE8x8(in[0], in[1], in[2], in[3], in[4], in[5], in[6], in[7], out[0],
+               out[1], out[2], out[3], out[4], out[5], out[6], out[7]);
+
+  IDCT8(out[0], out[1], out[2], out[3], out[4], out[5], out[6], out[7]);
+}
+
+void vpx_round_store8x8_vsx(int16x8_t *in, uint8_t *dest, int stride) {
+  uint8x16_t zerov = vec_splat_u8(0);
   uint8x16_t dest0 = vec_vsx_ld(0, dest);
   uint8x16_t dest1 = vec_vsx_ld(stride, dest);
   uint8x16_t dest2 = vec_vsx_ld(2 * stride, dest);
@@ -286,7 +352,6 @@
   uint8x16_t dest5 = vec_vsx_ld(5 * stride, dest);
   uint8x16_t dest6 = vec_vsx_ld(6 * stride, dest);
   uint8x16_t dest7 = vec_vsx_ld(7 * stride, dest);
-  uint8x16_t zerov = vec_splat_u8(0);
   int16x8_t d_u0 = (int16x8_t)vec_mergeh(dest0, zerov);
   int16x8_t d_u1 = (int16x8_t)vec_mergeh(dest1, zerov);
   int16x8_t d_u2 = (int16x8_t)vec_mergeh(dest2, zerov);
@@ -298,23 +363,15 @@
   int16x8_t add = vec_sl(vec_splat_s16(8), vec_splat_u16(1));
   uint16x8_t shift5 = vec_splat_u16(5);
   uint8x16_t output0, output1, output2, output3;
-  ROUND_SHIFT_INIT;
 
-  TRANSPOSE8x8(src0, src1, src2, src3, src4, src5, src6, src7, tmp0, tmp1, tmp2,
-               tmp3, tmp4, tmp5, tmp6, tmp7);
-
-  IDCT8(tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7);
-  TRANSPOSE8x8(tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7, src0, src1, src2,
-               src3, src4, src5, src6, src7);
-  IDCT8(src0, src1, src2, src3, src4, src5, src6, src7);
-  PIXEL_ADD(src0, d_u0, add, shift5);
-  PIXEL_ADD(src1, d_u1, add, shift5);
-  PIXEL_ADD(src2, d_u2, add, shift5);
-  PIXEL_ADD(src3, d_u3, add, shift5);
-  PIXEL_ADD(src4, d_u4, add, shift5);
-  PIXEL_ADD(src5, d_u5, add, shift5);
-  PIXEL_ADD(src6, d_u6, add, shift5);
-  PIXEL_ADD(src7, d_u7, add, shift5);
+  PIXEL_ADD(in[0], d_u0, add, shift5);
+  PIXEL_ADD(in[1], d_u1, add, shift5);
+  PIXEL_ADD(in[2], d_u2, add, shift5);
+  PIXEL_ADD(in[3], d_u3, add, shift5);
+  PIXEL_ADD(in[4], d_u4, add, shift5);
+  PIXEL_ADD(in[5], d_u5, add, shift5);
+  PIXEL_ADD(in[6], d_u6, add, shift5);
+  PIXEL_ADD(in[7], d_u7, add, shift5);
   output0 = vec_packsu(d_u0, d_u1);
   output1 = vec_packsu(d_u2, d_u3);
   output2 = vec_packsu(d_u4, d_u5);
@@ -330,24 +387,24 @@
   vec_vsx_st(xxpermdi(output3, dest7, 3), 7 * stride, dest);
 }
 
-#define LOAD_INPUT16(load, source, offset, step, in0, in1, in2, in3, in4, in5, \
-                     in6, in7, in8, in9, inA, inB, inC, inD, inE, inF)         \
-  in0 = load(offset, source);                                                  \
-  in1 = load((step) + (offset), source);                                       \
-  in2 = load(2 * (step) + (offset), source);                                   \
-  in3 = load(3 * (step) + (offset), source);                                   \
-  in4 = load(4 * (step) + (offset), source);                                   \
-  in5 = load(5 * (step) + (offset), source);                                   \
-  in6 = load(6 * (step) + (offset), source);                                   \
-  in7 = load(7 * (step) + (offset), source);                                   \
-  in8 = load(8 * (step) + (offset), source);                                   \
-  in9 = load(9 * (step) + (offset), source);                                   \
-  inA = load(10 * (step) + (offset), source);                                  \
-  inB = load(11 * (step) + (offset), source);                                  \
-  inC = load(12 * (step) + (offset), source);                                  \
-  inD = load(13 * (step) + (offset), source);                                  \
-  inE = load(14 * (step) + (offset), source);                                  \
-  inF = load(15 * (step) + (offset), source);
+void vpx_idct8x8_64_add_vsx(const tran_low_t *input, uint8_t *dest,
+                            int stride) {
+  int16x8_t src[8], tmp[8];
+
+  src[0] = load_tran_low(0, input);
+  src[1] = load_tran_low(8 * sizeof(*input), input);
+  src[2] = load_tran_low(16 * sizeof(*input), input);
+  src[3] = load_tran_low(24 * sizeof(*input), input);
+  src[4] = load_tran_low(32 * sizeof(*input), input);
+  src[5] = load_tran_low(40 * sizeof(*input), input);
+  src[6] = load_tran_low(48 * sizeof(*input), input);
+  src[7] = load_tran_low(56 * sizeof(*input), input);
+
+  vpx_idct8_vsx(src, tmp);
+  vpx_idct8_vsx(tmp, src);
+
+  vpx_round_store8x8_vsx(src, dest, stride);
+}
 
 #define STEP16_1(inpt0, inpt1, outpt0, outpt1, cospi) \
   tmp16_0 = vec_mergeh(inpt0, inpt1);                 \
@@ -447,9 +504,9 @@
   tmp16_0 = vec_mergeh(outA, outD);                                            \
   tmp16_1 = vec_mergel(outA, outD);                                            \
   temp10 =                                                                     \
-      vec_sub(vec_mule(tmp16_0, cospi24_mv), vec_mulo(tmp16_0, cospi8_v));     \
+      vec_sub(vec_mule(tmp16_0, cospi24m_v), vec_mulo(tmp16_0, cospi8_v));     \
   temp11 =                                                                     \
-      vec_sub(vec_mule(tmp16_1, cospi24_mv), vec_mulo(tmp16_1, cospi8_v));     \
+      vec_sub(vec_mule(tmp16_1, cospi24m_v), vec_mulo(tmp16_1, cospi8_v));     \
   DCT_CONST_ROUND_SHIFT(temp10);                                               \
   DCT_CONST_ROUND_SHIFT(temp11);                                               \
   inA = vec_packs(temp10, temp11);                                             \
@@ -521,95 +578,131 @@
   PIXEL_ADD(in1, d_ul, add, shift6);             \
   vec_vsx_st(vec_packsu(d_uh, d_ul), offset, dest);
 
-void vpx_idct16x16_256_add_vsx(const tran_low_t *input, uint8_t *dest,
-                               int stride) {
+static void half_idct16x8_vsx(int16x8_t *src) {
+  int16x8_t tmp0[8], tmp1[8];
   int32x4_t temp10, temp11, temp20, temp21, temp30;
-  int16x8_t src00, src01, src02, src03, src04, src05, src06, src07, src10,
-      src11, src12, src13, src14, src15, src16, src17;
-  int16x8_t src20, src21, src22, src23, src24, src25, src26, src27, src30,
-      src31, src32, src33, src34, src35, src36, src37;
-  int16x8_t tmp00, tmp01, tmp02, tmp03, tmp04, tmp05, tmp06, tmp07, tmp10,
-      tmp11, tmp12, tmp13, tmp14, tmp15, tmp16, tmp17, tmp16_0, tmp16_1;
-  int16x8_t tmp20, tmp21, tmp22, tmp23, tmp24, tmp25, tmp26, tmp27, tmp30,
-      tmp31, tmp32, tmp33, tmp34, tmp35, tmp36, tmp37;
-  uint8x16_t dest0, dest1, dest2, dest3, dest4, dest5, dest6, dest7, dest8,
-      dest9, destA, destB, destC, destD, destE, destF;
-  int16x8_t d_uh, d_ul;
-  int16x8_t add = vec_sl(vec_splat_s16(8), vec_splat_u16(2));
-  uint16x8_t shift6 = vec_splat_u16(6);
-  uint8x16_t zerov = vec_splat_u8(0);
+  int16x8_t tmp16_0, tmp16_1;
   ROUND_SHIFT_INIT;
 
-  // transform rows
-  // load and transform the upper half of 16x16 matrix
-  LOAD_INPUT16(load_tran_low, input, 0, 8 * sizeof(*input), src00, src10, src01,
-               src11, src02, src12, src03, src13, src04, src14, src05, src15,
-               src06, src16, src07, src17);
-  TRANSPOSE8x8(src00, src01, src02, src03, src04, src05, src06, src07, tmp00,
-               tmp01, tmp02, tmp03, tmp04, tmp05, tmp06, tmp07);
-  TRANSPOSE8x8(src10, src11, src12, src13, src14, src15, src16, src17, tmp10,
-               tmp11, tmp12, tmp13, tmp14, tmp15, tmp16, tmp17);
-  IDCT16(tmp00, tmp01, tmp02, tmp03, tmp04, tmp05, tmp06, tmp07, tmp10, tmp11,
-         tmp12, tmp13, tmp14, tmp15, tmp16, tmp17, src00, src01, src02, src03,
-         src04, src05, src06, src07, src10, src11, src12, src13, src14, src15,
-         src16, src17);
-  TRANSPOSE8x8(src00, src01, src02, src03, src04, src05, src06, src07, tmp00,
-               tmp01, tmp02, tmp03, tmp04, tmp05, tmp06, tmp07);
-  TRANSPOSE8x8(src10, src11, src12, src13, src14, src15, src16, src17, tmp10,
-               tmp11, tmp12, tmp13, tmp14, tmp15, tmp16, tmp17);
+  TRANSPOSE8x8(src[0], src[2], src[4], src[6], src[8], src[10], src[12],
+               src[14], tmp0[0], tmp0[1], tmp0[2], tmp0[3], tmp0[4], tmp0[5],
+               tmp0[6], tmp0[7]);
+  TRANSPOSE8x8(src[1], src[3], src[5], src[7], src[9], src[11], src[13],
+               src[15], tmp1[0], tmp1[1], tmp1[2], tmp1[3], tmp1[4], tmp1[5],
+               tmp1[6], tmp1[7]);
+  IDCT16(tmp0[0], tmp0[1], tmp0[2], tmp0[3], tmp0[4], tmp0[5], tmp0[6], tmp0[7],
+         tmp1[0], tmp1[1], tmp1[2], tmp1[3], tmp1[4], tmp1[5], tmp1[6], tmp1[7],
+         src[0], src[2], src[4], src[6], src[8], src[10], src[12], src[14],
+         src[1], src[3], src[5], src[7], src[9], src[11], src[13], src[15]);
+}
 
-  // load and transform the lower half of 16x16 matrix
+void vpx_idct16_vsx(int16x8_t *src0, int16x8_t *src1) {
+  int16x8_t tmp0[8], tmp1[8], tmp2[8], tmp3[8];
+  int32x4_t temp10, temp11, temp20, temp21, temp30;
+  int16x8_t tmp16_0, tmp16_1;
+  ROUND_SHIFT_INIT;
+
+  TRANSPOSE8x8(src0[0], src0[2], src0[4], src0[6], src0[8], src0[10], src0[12],
+               src0[14], tmp0[0], tmp0[1], tmp0[2], tmp0[3], tmp0[4], tmp0[5],
+               tmp0[6], tmp0[7]);
+  TRANSPOSE8x8(src0[1], src0[3], src0[5], src0[7], src0[9], src0[11], src0[13],
+               src0[15], tmp1[0], tmp1[1], tmp1[2], tmp1[3], tmp1[4], tmp1[5],
+               tmp1[6], tmp1[7]);
+  TRANSPOSE8x8(src1[0], src1[2], src1[4], src1[6], src1[8], src1[10], src1[12],
+               src1[14], tmp2[0], tmp2[1], tmp2[2], tmp2[3], tmp2[4], tmp2[5],
+               tmp2[6], tmp2[7]);
+  TRANSPOSE8x8(src1[1], src1[3], src1[5], src1[7], src1[9], src1[11], src1[13],
+               src1[15], tmp3[0], tmp3[1], tmp3[2], tmp3[3], tmp3[4], tmp3[5],
+               tmp3[6], tmp3[7]);
+
+  IDCT16(tmp0[0], tmp0[1], tmp0[2], tmp0[3], tmp0[4], tmp0[5], tmp0[6], tmp0[7],
+         tmp1[0], tmp1[1], tmp1[2], tmp1[3], tmp1[4], tmp1[5], tmp1[6], tmp1[7],
+         src0[0], src0[2], src0[4], src0[6], src0[8], src0[10], src0[12],
+         src0[14], src1[0], src1[2], src1[4], src1[6], src1[8], src1[10],
+         src1[12], src1[14]);
+
+  IDCT16(tmp2[0], tmp2[1], tmp2[2], tmp2[3], tmp2[4], tmp2[5], tmp2[6], tmp2[7],
+         tmp3[0], tmp3[1], tmp3[2], tmp3[3], tmp3[4], tmp3[5], tmp3[6], tmp3[7],
+         src0[1], src0[3], src0[5], src0[7], src0[9], src0[11], src0[13],
+         src0[15], src1[1], src1[3], src1[5], src1[7], src1[9], src1[11],
+         src1[13], src1[15]);
+}
+
+void vpx_round_store16x16_vsx(int16x8_t *src0, int16x8_t *src1, uint8_t *dest,
+                              int stride) {
+  uint8x16_t destv[16];
+  int16x8_t d_uh, d_ul;
+  uint8x16_t zerov = vec_splat_u8(0);
+  uint16x8_t shift6 = vec_splat_u16(6);
+  int16x8_t add = vec_sl(vec_splat_s16(8), vec_splat_u16(2));
+
+  // load dest
+  LOAD_INPUT16(vec_vsx_ld, dest, 0, stride, destv);
+
+  PIXEL_ADD_STORE16(src0[0], src0[1], destv[0], 0);
+  PIXEL_ADD_STORE16(src0[2], src0[3], destv[1], stride);
+  PIXEL_ADD_STORE16(src0[4], src0[5], destv[2], 2 * stride);
+  PIXEL_ADD_STORE16(src0[6], src0[7], destv[3], 3 * stride);
+  PIXEL_ADD_STORE16(src0[8], src0[9], destv[4], 4 * stride);
+  PIXEL_ADD_STORE16(src0[10], src0[11], destv[5], 5 * stride);
+  PIXEL_ADD_STORE16(src0[12], src0[13], destv[6], 6 * stride);
+  PIXEL_ADD_STORE16(src0[14], src0[15], destv[7], 7 * stride);
+
+  PIXEL_ADD_STORE16(src1[0], src1[1], destv[8], 8 * stride);
+  PIXEL_ADD_STORE16(src1[2], src1[3], destv[9], 9 * stride);
+  PIXEL_ADD_STORE16(src1[4], src1[5], destv[10], 10 * stride);
+  PIXEL_ADD_STORE16(src1[6], src1[7], destv[11], 11 * stride);
+  PIXEL_ADD_STORE16(src1[8], src1[9], destv[12], 12 * stride);
+  PIXEL_ADD_STORE16(src1[10], src1[11], destv[13], 13 * stride);
+  PIXEL_ADD_STORE16(src1[12], src1[13], destv[14], 14 * stride);
+  PIXEL_ADD_STORE16(src1[14], src1[15], destv[15], 15 * stride);
+}
+
+void vpx_idct16x16_256_add_vsx(const tran_low_t *input, uint8_t *dest,
+                               int stride) {
+  int16x8_t src0[16], src1[16];
+  int16x8_t tmp0[8], tmp1[8], tmp2[8], tmp3[8];
+  int32x4_t temp10, temp11, temp20, temp21, temp30;
+  int16x8_t tmp16_0, tmp16_1;
+  ROUND_SHIFT_INIT;
+
+  LOAD_INPUT16(load_tran_low, input, 0, 8 * sizeof(*input), src0);
   LOAD_INPUT16(load_tran_low, input, 8 * 8 * 2 * sizeof(*input),
-               8 * sizeof(*input), src20, src30, src21, src31, src22, src32,
-               src23, src33, src24, src34, src25, src35, src26, src36, src27,
-               src37);
-  TRANSPOSE8x8(src20, src21, src22, src23, src24, src25, src26, src27, tmp20,
-               tmp21, tmp22, tmp23, tmp24, tmp25, tmp26, tmp27);
-  TRANSPOSE8x8(src30, src31, src32, src33, src34, src35, src36, src37, tmp30,
-               tmp31, tmp32, tmp33, tmp34, tmp35, tmp36, tmp37);
-  IDCT16(tmp20, tmp21, tmp22, tmp23, tmp24, tmp25, tmp26, tmp27, tmp30, tmp31,
-         tmp32, tmp33, tmp34, tmp35, tmp36, tmp37, src20, src21, src22, src23,
-         src24, src25, src26, src27, src30, src31, src32, src33, src34, src35,
-         src36, src37);
-  TRANSPOSE8x8(src20, src21, src22, src23, src24, src25, src26, src27, tmp20,
-               tmp21, tmp22, tmp23, tmp24, tmp25, tmp26, tmp27);
-  TRANSPOSE8x8(src30, src31, src32, src33, src34, src35, src36, src37, tmp30,
-               tmp31, tmp32, tmp33, tmp34, tmp35, tmp36, tmp37);
+               8 * sizeof(*input), src1);
+
+  // transform rows
+  // transform the upper half of 16x16 matrix
+  half_idct16x8_vsx(src0);
+  TRANSPOSE8x8(src0[0], src0[2], src0[4], src0[6], src0[8], src0[10], src0[12],
+               src0[14], tmp0[0], tmp0[1], tmp0[2], tmp0[3], tmp0[4], tmp0[5],
+               tmp0[6], tmp0[7]);
+  TRANSPOSE8x8(src0[1], src0[3], src0[5], src0[7], src0[9], src0[11], src0[13],
+               src0[15], tmp1[0], tmp1[1], tmp1[2], tmp1[3], tmp1[4], tmp1[5],
+               tmp1[6], tmp1[7]);
+
+  // transform the lower half of 16x16 matrix
+  half_idct16x8_vsx(src1);
+  TRANSPOSE8x8(src1[0], src1[2], src1[4], src1[6], src1[8], src1[10], src1[12],
+               src1[14], tmp2[0], tmp2[1], tmp2[2], tmp2[3], tmp2[4], tmp2[5],
+               tmp2[6], tmp2[7]);
+  TRANSPOSE8x8(src1[1], src1[3], src1[5], src1[7], src1[9], src1[11], src1[13],
+               src1[15], tmp3[0], tmp3[1], tmp3[2], tmp3[3], tmp3[4], tmp3[5],
+               tmp3[6], tmp3[7]);
 
   // transform columns
   // left half first
-  IDCT16(tmp00, tmp01, tmp02, tmp03, tmp04, tmp05, tmp06, tmp07, tmp20, tmp21,
-         tmp22, tmp23, tmp24, tmp25, tmp26, tmp27, src00, src01, src02, src03,
-         src04, src05, src06, src07, src20, src21, src22, src23, src24, src25,
-         src26, src27);
+  IDCT16(tmp0[0], tmp0[1], tmp0[2], tmp0[3], tmp0[4], tmp0[5], tmp0[6], tmp0[7],
+         tmp2[0], tmp2[1], tmp2[2], tmp2[3], tmp2[4], tmp2[5], tmp2[6], tmp2[7],
+         src0[0], src0[2], src0[4], src0[6], src0[8], src0[10], src0[12],
+         src0[14], src1[0], src1[2], src1[4], src1[6], src1[8], src1[10],
+         src1[12], src1[14]);
   // right half
-  IDCT16(tmp10, tmp11, tmp12, tmp13, tmp14, tmp15, tmp16, tmp17, tmp30, tmp31,
-         tmp32, tmp33, tmp34, tmp35, tmp36, tmp37, src10, src11, src12, src13,
-         src14, src15, src16, src17, src30, src31, src32, src33, src34, src35,
-         src36, src37);
+  IDCT16(tmp1[0], tmp1[1], tmp1[2], tmp1[3], tmp1[4], tmp1[5], tmp1[6], tmp1[7],
+         tmp3[0], tmp3[1], tmp3[2], tmp3[3], tmp3[4], tmp3[5], tmp3[6], tmp3[7],
+         src0[1], src0[3], src0[5], src0[7], src0[9], src0[11], src0[13],
+         src0[15], src1[1], src1[3], src1[5], src1[7], src1[9], src1[11],
+         src1[13], src1[15]);
 
-  // load dest
-  LOAD_INPUT16(vec_vsx_ld, dest, 0, stride, dest0, dest1, dest2, dest3, dest4,
-               dest5, dest6, dest7, dest8, dest9, destA, destB, destC, destD,
-               destE, destF);
-
-  PIXEL_ADD_STORE16(src00, src10, dest0, 0);
-  PIXEL_ADD_STORE16(src01, src11, dest1, stride);
-  PIXEL_ADD_STORE16(src02, src12, dest2, 2 * stride);
-  PIXEL_ADD_STORE16(src03, src13, dest3, 3 * stride);
-  PIXEL_ADD_STORE16(src04, src14, dest4, 4 * stride);
-  PIXEL_ADD_STORE16(src05, src15, dest5, 5 * stride);
-  PIXEL_ADD_STORE16(src06, src16, dest6, 6 * stride);
-  PIXEL_ADD_STORE16(src07, src17, dest7, 7 * stride);
-
-  PIXEL_ADD_STORE16(src20, src30, dest8, 8 * stride);
-  PIXEL_ADD_STORE16(src21, src31, dest9, 9 * stride);
-  PIXEL_ADD_STORE16(src22, src32, destA, 10 * stride);
-  PIXEL_ADD_STORE16(src23, src33, destB, 11 * stride);
-  PIXEL_ADD_STORE16(src24, src34, destC, 12 * stride);
-  PIXEL_ADD_STORE16(src25, src35, destD, 13 * stride);
-  PIXEL_ADD_STORE16(src26, src36, destE, 14 * stride);
-  PIXEL_ADD_STORE16(src27, src37, destF, 15 * stride);
+  vpx_round_store16x16_vsx(src0, src1, dest, stride);
 }
 
 #define LOAD_8x32(load, in00, in01, in02, in03, in10, in11, in12, in13, in20, \
@@ -981,15 +1074,15 @@
   PIXEL_ADD(in3, d_ul, add, shift6);                       \
   vec_vsx_st(vec_packsu(d_uh, d_ul), (step)*stride + 16, dest);
 
-#define ADD_STORE_BLOCK(in, offset)                                      \
-  PIXEL_ADD_STORE32(in[0][0], in[1][0], in[2][0], in[3][0], offset + 0); \
-  PIXEL_ADD_STORE32(in[0][1], in[1][1], in[2][1], in[3][1], offset + 1); \
-  PIXEL_ADD_STORE32(in[0][2], in[1][2], in[2][2], in[3][2], offset + 2); \
-  PIXEL_ADD_STORE32(in[0][3], in[1][3], in[2][3], in[3][3], offset + 3); \
-  PIXEL_ADD_STORE32(in[0][4], in[1][4], in[2][4], in[3][4], offset + 4); \
-  PIXEL_ADD_STORE32(in[0][5], in[1][5], in[2][5], in[3][5], offset + 5); \
-  PIXEL_ADD_STORE32(in[0][6], in[1][6], in[2][6], in[3][6], offset + 6); \
-  PIXEL_ADD_STORE32(in[0][7], in[1][7], in[2][7], in[3][7], offset + 7);
+#define ADD_STORE_BLOCK(in, offset)                                        \
+  PIXEL_ADD_STORE32(in[0][0], in[1][0], in[2][0], in[3][0], (offset) + 0); \
+  PIXEL_ADD_STORE32(in[0][1], in[1][1], in[2][1], in[3][1], (offset) + 1); \
+  PIXEL_ADD_STORE32(in[0][2], in[1][2], in[2][2], in[3][2], (offset) + 2); \
+  PIXEL_ADD_STORE32(in[0][3], in[1][3], in[2][3], in[3][3], (offset) + 3); \
+  PIXEL_ADD_STORE32(in[0][4], in[1][4], in[2][4], in[3][4], (offset) + 4); \
+  PIXEL_ADD_STORE32(in[0][5], in[1][5], in[2][5], in[3][5], (offset) + 5); \
+  PIXEL_ADD_STORE32(in[0][6], in[1][6], in[2][6], in[3][6], (offset) + 6); \
+  PIXEL_ADD_STORE32(in[0][7], in[1][7], in[2][7], in[3][7], (offset) + 7);
 
 void vpx_idct32x32_1024_add_vsx(const tran_low_t *input, uint8_t *dest,
                                 int stride) {
@@ -1062,3 +1155,674 @@
   ADD_STORE_BLOCK(src2, 16);
   ADD_STORE_BLOCK(src3, 24);
 }
+
+#define TRANSFORM_COLS           \
+  v32_a = vec_add(v32_a, v32_c); \
+  v32_d = vec_sub(v32_d, v32_b); \
+  v32_e = vec_sub(v32_a, v32_d); \
+  v32_e = vec_sra(v32_e, one);   \
+  v32_b = vec_sub(v32_e, v32_b); \
+  v32_c = vec_sub(v32_e, v32_c); \
+  v32_a = vec_sub(v32_a, v32_b); \
+  v32_d = vec_add(v32_d, v32_c); \
+  v_a = vec_packs(v32_a, v32_b); \
+  v_c = vec_packs(v32_c, v32_d);
+
+#define TRANSPOSE_WHT             \
+  tmp_a = vec_mergeh(v_a, v_c);   \
+  tmp_c = vec_mergel(v_a, v_c);   \
+  v_a = vec_mergeh(tmp_a, tmp_c); \
+  v_c = vec_mergel(tmp_a, tmp_c);
+
+void vpx_iwht4x4_16_add_vsx(const tran_low_t *input, uint8_t *dest,
+                            int stride) {
+  int16x8_t v_a = load_tran_low(0, input);
+  int16x8_t v_c = load_tran_low(8 * sizeof(*input), input);
+  int16x8_t tmp_a, tmp_c;
+  uint16x8_t two = vec_splat_u16(2);
+  uint32x4_t one = vec_splat_u32(1);
+  int16x8_t tmp16_0, tmp16_1;
+  int32x4_t v32_a, v32_c, v32_d, v32_b, v32_e;
+  uint8x16_t dest0 = vec_vsx_ld(0, dest);
+  uint8x16_t dest1 = vec_vsx_ld(stride, dest);
+  uint8x16_t dest2 = vec_vsx_ld(2 * stride, dest);
+  uint8x16_t dest3 = vec_vsx_ld(3 * stride, dest);
+  int16x8_t d_u0 = (int16x8_t)unpack_to_u16_h(dest0);
+  int16x8_t d_u1 = (int16x8_t)unpack_to_u16_h(dest1);
+  int16x8_t d_u2 = (int16x8_t)unpack_to_u16_h(dest2);
+  int16x8_t d_u3 = (int16x8_t)unpack_to_u16_h(dest3);
+  uint8x16_t output_v;
+  uint8_t tmp_dest[16];
+  int i, j;
+
+  v_a = vec_sra(v_a, two);
+  v_c = vec_sra(v_c, two);
+
+  TRANSPOSE_WHT;
+
+  v32_a = vec_unpackh(v_a);
+  v32_c = vec_unpackl(v_a);
+
+  v32_d = vec_unpackh(v_c);
+  v32_b = vec_unpackl(v_c);
+
+  TRANSFORM_COLS;
+
+  TRANSPOSE_WHT;
+
+  v32_a = vec_unpackh(v_a);
+  v32_c = vec_unpackl(v_a);
+  v32_d = vec_unpackh(v_c);
+  v32_b = vec_unpackl(v_c);
+
+  TRANSFORM_COLS;
+
+  PACK_STORE(v_a, v_c);
+}
+
+void vp9_iadst4_vsx(int16x8_t *in, int16x8_t *out) {
+  int16x8_t sinpi_1_3_v, sinpi_4_2_v, sinpi_2_3_v, sinpi_1_4_v, sinpi_12_n3_v;
+  int32x4_t v_v[5], u_v[4];
+  int32x4_t zerov = vec_splat_s32(0);
+  int16x8_t tmp0, tmp1;
+  int16x8_t zero16v = vec_splat_s16(0);
+  uint32x4_t shift16 = vec_sl(vec_splat_u32(8), vec_splat_u32(1));
+  ROUND_SHIFT_INIT;
+
+  sinpi_1_3_v = vec_mergel(sinpi_1_9_v, sinpi_3_9_v);
+  sinpi_4_2_v = vec_mergel(sinpi_4_9_v, sinpi_2_9_v);
+  sinpi_2_3_v = vec_mergel(sinpi_2_9_v, sinpi_3_9_v);
+  sinpi_1_4_v = vec_mergel(sinpi_1_9_v, sinpi_4_9_v);
+  sinpi_12_n3_v = vec_mergel(vec_add(sinpi_1_9_v, sinpi_2_9_v),
+                             vec_sub(zero16v, sinpi_3_9_v));
+
+  tmp0 = (int16x8_t)vec_mergeh((int32x4_t)in[0], (int32x4_t)in[1]);
+  tmp1 = (int16x8_t)vec_mergel((int32x4_t)in[0], (int32x4_t)in[1]);
+  in[0] = (int16x8_t)vec_mergeh((int32x4_t)tmp0, (int32x4_t)tmp1);
+  in[1] = (int16x8_t)vec_mergel((int32x4_t)tmp0, (int32x4_t)tmp1);
+
+  v_v[0] = vec_msum(in[0], sinpi_1_3_v, zerov);
+  v_v[1] = vec_msum(in[1], sinpi_4_2_v, zerov);
+  v_v[2] = vec_msum(in[0], sinpi_2_3_v, zerov);
+  v_v[3] = vec_msum(in[1], sinpi_1_4_v, zerov);
+  v_v[4] = vec_msum(in[0], sinpi_12_n3_v, zerov);
+
+  in[0] = vec_sub(in[0], in[1]);
+  in[1] = (int16x8_t)vec_sra((int32x4_t)in[1], shift16);
+  in[0] = vec_add(in[0], in[1]);
+  in[0] = (int16x8_t)vec_sl((int32x4_t)in[0], shift16);
+
+  u_v[0] = vec_add(v_v[0], v_v[1]);
+  u_v[1] = vec_sub(v_v[2], v_v[3]);
+  u_v[2] = vec_msum(in[0], sinpi_1_3_v, zerov);
+  u_v[3] = vec_sub(v_v[1], v_v[3]);
+  u_v[3] = vec_add(u_v[3], v_v[4]);
+
+  DCT_CONST_ROUND_SHIFT(u_v[0]);
+  DCT_CONST_ROUND_SHIFT(u_v[1]);
+  DCT_CONST_ROUND_SHIFT(u_v[2]);
+  DCT_CONST_ROUND_SHIFT(u_v[3]);
+
+  out[0] = vec_packs(u_v[0], u_v[1]);
+  out[1] = vec_packs(u_v[2], u_v[3]);
+}
+
+#define MSUM_ROUND_SHIFT(a, b, cospi) \
+  b = vec_msums(a, cospi, zerov);     \
+  DCT_CONST_ROUND_SHIFT(b);
+
+#define IADST_WRAPLOW(in0, in1, tmp0, tmp1, out, cospi) \
+  MSUM_ROUND_SHIFT(in0, tmp0, cospi);                   \
+  MSUM_ROUND_SHIFT(in1, tmp1, cospi);                   \
+  out = vec_packs(tmp0, tmp1);
+
+void vp9_iadst8_vsx(int16x8_t *in, int16x8_t *out) {
+  int32x4_t tmp0[16], tmp1[16];
+
+  int32x4_t zerov = vec_splat_s32(0);
+  int16x8_t zero16v = vec_splat_s16(0);
+  int16x8_t cospi_p02_p30_v = vec_mergel(cospi2_v, cospi30_v);
+  int16x8_t cospi_p30_m02_v = vec_mergel(cospi30_v, cospi2m_v);
+  int16x8_t cospi_p10_p22_v = vec_mergel(cospi10_v, cospi22_v);
+  int16x8_t cospi_p22_m10_v = vec_mergel(cospi22_v, cospi10m_v);
+  int16x8_t cospi_p18_p14_v = vec_mergel(cospi18_v, cospi14_v);
+  int16x8_t cospi_p14_m18_v = vec_mergel(cospi14_v, cospi18m_v);
+  int16x8_t cospi_p26_p06_v = vec_mergel(cospi26_v, cospi6_v);
+  int16x8_t cospi_p06_m26_v = vec_mergel(cospi6_v, cospi26m_v);
+  int16x8_t cospi_p08_p24_v = vec_mergel(cospi8_v, cospi24_v);
+  int16x8_t cospi_p24_m08_v = vec_mergel(cospi24_v, cospi8m_v);
+  int16x8_t cospi_m24_p08_v = vec_mergel(cospi24m_v, cospi8_v);
+  int16x8_t cospi_p16_m16_v = vec_mergel(cospi16_v, cospi16m_v);
+  ROUND_SHIFT_INIT;
+
+  TRANSPOSE8x8(in[0], in[1], in[2], in[3], in[4], in[5], in[6], in[7], out[0],
+               out[1], out[2], out[3], out[4], out[5], out[6], out[7]);
+
+  // stage 1
+  // interleave and multiply/add into 32-bit integer
+  in[0] = vec_mergeh(out[7], out[0]);
+  in[1] = vec_mergel(out[7], out[0]);
+  in[2] = vec_mergeh(out[5], out[2]);
+  in[3] = vec_mergel(out[5], out[2]);
+  in[4] = vec_mergeh(out[3], out[4]);
+  in[5] = vec_mergel(out[3], out[4]);
+  in[6] = vec_mergeh(out[1], out[6]);
+  in[7] = vec_mergel(out[1], out[6]);
+
+  tmp1[0] = vec_msum(in[0], cospi_p02_p30_v, zerov);
+  tmp1[1] = vec_msum(in[1], cospi_p02_p30_v, zerov);
+  tmp1[2] = vec_msum(in[0], cospi_p30_m02_v, zerov);
+  tmp1[3] = vec_msum(in[1], cospi_p30_m02_v, zerov);
+  tmp1[4] = vec_msum(in[2], cospi_p10_p22_v, zerov);
+  tmp1[5] = vec_msum(in[3], cospi_p10_p22_v, zerov);
+  tmp1[6] = vec_msum(in[2], cospi_p22_m10_v, zerov);
+  tmp1[7] = vec_msum(in[3], cospi_p22_m10_v, zerov);
+  tmp1[8] = vec_msum(in[4], cospi_p18_p14_v, zerov);
+  tmp1[9] = vec_msum(in[5], cospi_p18_p14_v, zerov);
+  tmp1[10] = vec_msum(in[4], cospi_p14_m18_v, zerov);
+  tmp1[11] = vec_msum(in[5], cospi_p14_m18_v, zerov);
+  tmp1[12] = vec_msum(in[6], cospi_p26_p06_v, zerov);
+  tmp1[13] = vec_msum(in[7], cospi_p26_p06_v, zerov);
+  tmp1[14] = vec_msum(in[6], cospi_p06_m26_v, zerov);
+  tmp1[15] = vec_msum(in[7], cospi_p06_m26_v, zerov);
+
+  tmp0[0] = vec_add(tmp1[0], tmp1[8]);
+  tmp0[1] = vec_add(tmp1[1], tmp1[9]);
+  tmp0[2] = vec_add(tmp1[2], tmp1[10]);
+  tmp0[3] = vec_add(tmp1[3], tmp1[11]);
+  tmp0[4] = vec_add(tmp1[4], tmp1[12]);
+  tmp0[5] = vec_add(tmp1[5], tmp1[13]);
+  tmp0[6] = vec_add(tmp1[6], tmp1[14]);
+  tmp0[7] = vec_add(tmp1[7], tmp1[15]);
+  tmp0[8] = vec_sub(tmp1[0], tmp1[8]);
+  tmp0[9] = vec_sub(tmp1[1], tmp1[9]);
+  tmp0[10] = vec_sub(tmp1[2], tmp1[10]);
+  tmp0[11] = vec_sub(tmp1[3], tmp1[11]);
+  tmp0[12] = vec_sub(tmp1[4], tmp1[12]);
+  tmp0[13] = vec_sub(tmp1[5], tmp1[13]);
+  tmp0[14] = vec_sub(tmp1[6], tmp1[14]);
+  tmp0[15] = vec_sub(tmp1[7], tmp1[15]);
+
+  // shift and rounding
+  DCT_CONST_ROUND_SHIFT(tmp0[0]);
+  DCT_CONST_ROUND_SHIFT(tmp0[1]);
+  DCT_CONST_ROUND_SHIFT(tmp0[2]);
+  DCT_CONST_ROUND_SHIFT(tmp0[3]);
+  DCT_CONST_ROUND_SHIFT(tmp0[4]);
+  DCT_CONST_ROUND_SHIFT(tmp0[5]);
+  DCT_CONST_ROUND_SHIFT(tmp0[6]);
+  DCT_CONST_ROUND_SHIFT(tmp0[7]);
+  DCT_CONST_ROUND_SHIFT(tmp0[8]);
+  DCT_CONST_ROUND_SHIFT(tmp0[9]);
+  DCT_CONST_ROUND_SHIFT(tmp0[10]);
+  DCT_CONST_ROUND_SHIFT(tmp0[11]);
+  DCT_CONST_ROUND_SHIFT(tmp0[12]);
+  DCT_CONST_ROUND_SHIFT(tmp0[13]);
+  DCT_CONST_ROUND_SHIFT(tmp0[14]);
+  DCT_CONST_ROUND_SHIFT(tmp0[15]);
+
+  // back to 16-bit
+  out[0] = vec_packs(tmp0[0], tmp0[1]);
+  out[1] = vec_packs(tmp0[2], tmp0[3]);
+  out[2] = vec_packs(tmp0[4], tmp0[5]);
+  out[3] = vec_packs(tmp0[6], tmp0[7]);
+  out[4] = vec_packs(tmp0[8], tmp0[9]);
+  out[5] = vec_packs(tmp0[10], tmp0[11]);
+  out[6] = vec_packs(tmp0[12], tmp0[13]);
+  out[7] = vec_packs(tmp0[14], tmp0[15]);
+
+  // stage 2
+  in[0] = vec_add(out[0], out[2]);
+  in[1] = vec_add(out[1], out[3]);
+  in[2] = vec_sub(out[0], out[2]);
+  in[3] = vec_sub(out[1], out[3]);
+  in[4] = vec_mergeh(out[4], out[5]);
+  in[5] = vec_mergel(out[4], out[5]);
+  in[6] = vec_mergeh(out[6], out[7]);
+  in[7] = vec_mergel(out[6], out[7]);
+
+  tmp1[0] = vec_msum(in[4], cospi_p08_p24_v, zerov);
+  tmp1[1] = vec_msum(in[5], cospi_p08_p24_v, zerov);
+  tmp1[2] = vec_msum(in[4], cospi_p24_m08_v, zerov);
+  tmp1[3] = vec_msum(in[5], cospi_p24_m08_v, zerov);
+  tmp1[4] = vec_msum(in[6], cospi_m24_p08_v, zerov);
+  tmp1[5] = vec_msum(in[7], cospi_m24_p08_v, zerov);
+  tmp1[6] = vec_msum(in[6], cospi_p08_p24_v, zerov);
+  tmp1[7] = vec_msum(in[7], cospi_p08_p24_v, zerov);
+
+  tmp0[0] = vec_add(tmp1[0], tmp1[4]);
+  tmp0[1] = vec_add(tmp1[1], tmp1[5]);
+  tmp0[2] = vec_add(tmp1[2], tmp1[6]);
+  tmp0[3] = vec_add(tmp1[3], tmp1[7]);
+  tmp0[4] = vec_sub(tmp1[0], tmp1[4]);
+  tmp0[5] = vec_sub(tmp1[1], tmp1[5]);
+  tmp0[6] = vec_sub(tmp1[2], tmp1[6]);
+  tmp0[7] = vec_sub(tmp1[3], tmp1[7]);
+
+  DCT_CONST_ROUND_SHIFT(tmp0[0]);
+  DCT_CONST_ROUND_SHIFT(tmp0[1]);
+  DCT_CONST_ROUND_SHIFT(tmp0[2]);
+  DCT_CONST_ROUND_SHIFT(tmp0[3]);
+  DCT_CONST_ROUND_SHIFT(tmp0[4]);
+  DCT_CONST_ROUND_SHIFT(tmp0[5]);
+  DCT_CONST_ROUND_SHIFT(tmp0[6]);
+  DCT_CONST_ROUND_SHIFT(tmp0[7]);
+
+  in[4] = vec_packs(tmp0[0], tmp0[1]);
+  in[5] = vec_packs(tmp0[2], tmp0[3]);
+  in[6] = vec_packs(tmp0[4], tmp0[5]);
+  in[7] = vec_packs(tmp0[6], tmp0[7]);
+
+  // stage 3
+  out[0] = vec_mergeh(in[2], in[3]);
+  out[1] = vec_mergel(in[2], in[3]);
+  out[2] = vec_mergeh(in[6], in[7]);
+  out[3] = vec_mergel(in[6], in[7]);
+
+  IADST_WRAPLOW(out[0], out[1], tmp0[0], tmp0[1], in[2], cospi16_v);
+  IADST_WRAPLOW(out[0], out[1], tmp0[0], tmp0[1], in[3], cospi_p16_m16_v);
+  IADST_WRAPLOW(out[2], out[3], tmp0[0], tmp0[1], in[6], cospi16_v);
+  IADST_WRAPLOW(out[2], out[3], tmp0[0], tmp0[1], in[7], cospi_p16_m16_v);
+
+  out[0] = in[0];
+  out[2] = in[6];
+  out[4] = in[3];
+  out[6] = in[5];
+
+  out[1] = vec_sub(zero16v, in[4]);
+  out[3] = vec_sub(zero16v, in[2]);
+  out[5] = vec_sub(zero16v, in[7]);
+  out[7] = vec_sub(zero16v, in[1]);
+}
+
+static void iadst16x8_vsx(int16x8_t *in, int16x8_t *out) {
+  int32x4_t tmp0[32], tmp1[32];
+  int16x8_t tmp16_0[8];
+  int16x8_t cospi_p01_p31 = vec_mergel(cospi1_v, cospi31_v);
+  int16x8_t cospi_p31_m01 = vec_mergel(cospi31_v, cospi1m_v);
+  int16x8_t cospi_p05_p27 = vec_mergel(cospi5_v, cospi27_v);
+  int16x8_t cospi_p27_m05 = vec_mergel(cospi27_v, cospi5m_v);
+  int16x8_t cospi_p09_p23 = vec_mergel(cospi9_v, cospi23_v);
+  int16x8_t cospi_p23_m09 = vec_mergel(cospi23_v, cospi9m_v);
+  int16x8_t cospi_p13_p19 = vec_mergel(cospi13_v, cospi19_v);
+  int16x8_t cospi_p19_m13 = vec_mergel(cospi19_v, cospi13m_v);
+  int16x8_t cospi_p17_p15 = vec_mergel(cospi17_v, cospi15_v);
+  int16x8_t cospi_p15_m17 = vec_mergel(cospi15_v, cospi17m_v);
+  int16x8_t cospi_p21_p11 = vec_mergel(cospi21_v, cospi11_v);
+  int16x8_t cospi_p11_m21 = vec_mergel(cospi11_v, cospi21m_v);
+  int16x8_t cospi_p25_p07 = vec_mergel(cospi25_v, cospi7_v);
+  int16x8_t cospi_p07_m25 = vec_mergel(cospi7_v, cospi25m_v);
+  int16x8_t cospi_p29_p03 = vec_mergel(cospi29_v, cospi3_v);
+  int16x8_t cospi_p03_m29 = vec_mergel(cospi3_v, cospi29m_v);
+  int16x8_t cospi_p04_p28 = vec_mergel(cospi4_v, cospi28_v);
+  int16x8_t cospi_p28_m04 = vec_mergel(cospi28_v, cospi4m_v);
+  int16x8_t cospi_p20_p12 = vec_mergel(cospi20_v, cospi12_v);
+  int16x8_t cospi_p12_m20 = vec_mergel(cospi12_v, cospi20m_v);
+  int16x8_t cospi_m28_p04 = vec_mergel(cospi28m_v, cospi4_v);
+  int16x8_t cospi_m12_p20 = vec_mergel(cospi12m_v, cospi20_v);
+  int16x8_t cospi_p08_p24 = vec_mergel(cospi8_v, cospi24_v);
+  int16x8_t cospi_p24_m08 = vec_mergel(cospi24_v, cospi8m_v);
+  int16x8_t cospi_m24_p08 = vec_mergel(cospi24m_v, cospi8_v);
+  int32x4_t zerov = vec_splat_s32(0);
+  ROUND_SHIFT_INIT;
+
+  tmp16_0[0] = vec_mergeh(in[15], in[0]);
+  tmp16_0[1] = vec_mergel(in[15], in[0]);
+  tmp16_0[2] = vec_mergeh(in[13], in[2]);
+  tmp16_0[3] = vec_mergel(in[13], in[2]);
+  tmp16_0[4] = vec_mergeh(in[11], in[4]);
+  tmp16_0[5] = vec_mergel(in[11], in[4]);
+  tmp16_0[6] = vec_mergeh(in[9], in[6]);
+  tmp16_0[7] = vec_mergel(in[9], in[6]);
+  tmp16_0[8] = vec_mergeh(in[7], in[8]);
+  tmp16_0[9] = vec_mergel(in[7], in[8]);
+  tmp16_0[10] = vec_mergeh(in[5], in[10]);
+  tmp16_0[11] = vec_mergel(in[5], in[10]);
+  tmp16_0[12] = vec_mergeh(in[3], in[12]);
+  tmp16_0[13] = vec_mergel(in[3], in[12]);
+  tmp16_0[14] = vec_mergeh(in[1], in[14]);
+  tmp16_0[15] = vec_mergel(in[1], in[14]);
+
+  tmp0[0] = vec_msum(tmp16_0[0], cospi_p01_p31, zerov);
+  tmp0[1] = vec_msum(tmp16_0[1], cospi_p01_p31, zerov);
+  tmp0[2] = vec_msum(tmp16_0[0], cospi_p31_m01, zerov);
+  tmp0[3] = vec_msum(tmp16_0[1], cospi_p31_m01, zerov);
+  tmp0[4] = vec_msum(tmp16_0[2], cospi_p05_p27, zerov);
+  tmp0[5] = vec_msum(tmp16_0[3], cospi_p05_p27, zerov);
+  tmp0[6] = vec_msum(tmp16_0[2], cospi_p27_m05, zerov);
+  tmp0[7] = vec_msum(tmp16_0[3], cospi_p27_m05, zerov);
+  tmp0[8] = vec_msum(tmp16_0[4], cospi_p09_p23, zerov);
+  tmp0[9] = vec_msum(tmp16_0[5], cospi_p09_p23, zerov);
+  tmp0[10] = vec_msum(tmp16_0[4], cospi_p23_m09, zerov);
+  tmp0[11] = vec_msum(tmp16_0[5], cospi_p23_m09, zerov);
+  tmp0[12] = vec_msum(tmp16_0[6], cospi_p13_p19, zerov);
+  tmp0[13] = vec_msum(tmp16_0[7], cospi_p13_p19, zerov);
+  tmp0[14] = vec_msum(tmp16_0[6], cospi_p19_m13, zerov);
+  tmp0[15] = vec_msum(tmp16_0[7], cospi_p19_m13, zerov);
+  tmp0[16] = vec_msum(tmp16_0[8], cospi_p17_p15, zerov);
+  tmp0[17] = vec_msum(tmp16_0[9], cospi_p17_p15, zerov);
+  tmp0[18] = vec_msum(tmp16_0[8], cospi_p15_m17, zerov);
+  tmp0[19] = vec_msum(tmp16_0[9], cospi_p15_m17, zerov);
+  tmp0[20] = vec_msum(tmp16_0[10], cospi_p21_p11, zerov);
+  tmp0[21] = vec_msum(tmp16_0[11], cospi_p21_p11, zerov);
+  tmp0[22] = vec_msum(tmp16_0[10], cospi_p11_m21, zerov);
+  tmp0[23] = vec_msum(tmp16_0[11], cospi_p11_m21, zerov);
+  tmp0[24] = vec_msum(tmp16_0[12], cospi_p25_p07, zerov);
+  tmp0[25] = vec_msum(tmp16_0[13], cospi_p25_p07, zerov);
+  tmp0[26] = vec_msum(tmp16_0[12], cospi_p07_m25, zerov);
+  tmp0[27] = vec_msum(tmp16_0[13], cospi_p07_m25, zerov);
+  tmp0[28] = vec_msum(tmp16_0[14], cospi_p29_p03, zerov);
+  tmp0[29] = vec_msum(tmp16_0[15], cospi_p29_p03, zerov);
+  tmp0[30] = vec_msum(tmp16_0[14], cospi_p03_m29, zerov);
+  tmp0[31] = vec_msum(tmp16_0[15], cospi_p03_m29, zerov);
+
+  tmp1[0] = vec_add(tmp0[0], tmp0[16]);
+  tmp1[1] = vec_add(tmp0[1], tmp0[17]);
+  tmp1[2] = vec_add(tmp0[2], tmp0[18]);
+  tmp1[3] = vec_add(tmp0[3], tmp0[19]);
+  tmp1[4] = vec_add(tmp0[4], tmp0[20]);
+  tmp1[5] = vec_add(tmp0[5], tmp0[21]);
+  tmp1[6] = vec_add(tmp0[6], tmp0[22]);
+  tmp1[7] = vec_add(tmp0[7], tmp0[23]);
+  tmp1[8] = vec_add(tmp0[8], tmp0[24]);
+  tmp1[9] = vec_add(tmp0[9], tmp0[25]);
+  tmp1[10] = vec_add(tmp0[10], tmp0[26]);
+  tmp1[11] = vec_add(tmp0[11], tmp0[27]);
+  tmp1[12] = vec_add(tmp0[12], tmp0[28]);
+  tmp1[13] = vec_add(tmp0[13], tmp0[29]);
+  tmp1[14] = vec_add(tmp0[14], tmp0[30]);
+  tmp1[15] = vec_add(tmp0[15], tmp0[31]);
+  tmp1[16] = vec_sub(tmp0[0], tmp0[16]);
+  tmp1[17] = vec_sub(tmp0[1], tmp0[17]);
+  tmp1[18] = vec_sub(tmp0[2], tmp0[18]);
+  tmp1[19] = vec_sub(tmp0[3], tmp0[19]);
+  tmp1[20] = vec_sub(tmp0[4], tmp0[20]);
+  tmp1[21] = vec_sub(tmp0[5], tmp0[21]);
+  tmp1[22] = vec_sub(tmp0[6], tmp0[22]);
+  tmp1[23] = vec_sub(tmp0[7], tmp0[23]);
+  tmp1[24] = vec_sub(tmp0[8], tmp0[24]);
+  tmp1[25] = vec_sub(tmp0[9], tmp0[25]);
+  tmp1[26] = vec_sub(tmp0[10], tmp0[26]);
+  tmp1[27] = vec_sub(tmp0[11], tmp0[27]);
+  tmp1[28] = vec_sub(tmp0[12], tmp0[28]);
+  tmp1[29] = vec_sub(tmp0[13], tmp0[29]);
+  tmp1[30] = vec_sub(tmp0[14], tmp0[30]);
+  tmp1[31] = vec_sub(tmp0[15], tmp0[31]);
+
+  DCT_CONST_ROUND_SHIFT(tmp1[0]);
+  DCT_CONST_ROUND_SHIFT(tmp1[1]);
+  DCT_CONST_ROUND_SHIFT(tmp1[2]);
+  DCT_CONST_ROUND_SHIFT(tmp1[3]);
+  DCT_CONST_ROUND_SHIFT(tmp1[4]);
+  DCT_CONST_ROUND_SHIFT(tmp1[5]);
+  DCT_CONST_ROUND_SHIFT(tmp1[6]);
+  DCT_CONST_ROUND_SHIFT(tmp1[7]);
+  DCT_CONST_ROUND_SHIFT(tmp1[8]);
+  DCT_CONST_ROUND_SHIFT(tmp1[9]);
+  DCT_CONST_ROUND_SHIFT(tmp1[10]);
+  DCT_CONST_ROUND_SHIFT(tmp1[11]);
+  DCT_CONST_ROUND_SHIFT(tmp1[12]);
+  DCT_CONST_ROUND_SHIFT(tmp1[13]);
+  DCT_CONST_ROUND_SHIFT(tmp1[14]);
+  DCT_CONST_ROUND_SHIFT(tmp1[15]);
+  DCT_CONST_ROUND_SHIFT(tmp1[16]);
+  DCT_CONST_ROUND_SHIFT(tmp1[17]);
+  DCT_CONST_ROUND_SHIFT(tmp1[18]);
+  DCT_CONST_ROUND_SHIFT(tmp1[19]);
+  DCT_CONST_ROUND_SHIFT(tmp1[20]);
+  DCT_CONST_ROUND_SHIFT(tmp1[21]);
+  DCT_CONST_ROUND_SHIFT(tmp1[22]);
+  DCT_CONST_ROUND_SHIFT(tmp1[23]);
+  DCT_CONST_ROUND_SHIFT(tmp1[24]);
+  DCT_CONST_ROUND_SHIFT(tmp1[25]);
+  DCT_CONST_ROUND_SHIFT(tmp1[26]);
+  DCT_CONST_ROUND_SHIFT(tmp1[27]);
+  DCT_CONST_ROUND_SHIFT(tmp1[28]);
+  DCT_CONST_ROUND_SHIFT(tmp1[29]);
+  DCT_CONST_ROUND_SHIFT(tmp1[30]);
+  DCT_CONST_ROUND_SHIFT(tmp1[31]);
+
+  in[0] = vec_packs(tmp1[0], tmp1[1]);
+  in[1] = vec_packs(tmp1[2], tmp1[3]);
+  in[2] = vec_packs(tmp1[4], tmp1[5]);
+  in[3] = vec_packs(tmp1[6], tmp1[7]);
+  in[4] = vec_packs(tmp1[8], tmp1[9]);
+  in[5] = vec_packs(tmp1[10], tmp1[11]);
+  in[6] = vec_packs(tmp1[12], tmp1[13]);
+  in[7] = vec_packs(tmp1[14], tmp1[15]);
+  in[8] = vec_packs(tmp1[16], tmp1[17]);
+  in[9] = vec_packs(tmp1[18], tmp1[19]);
+  in[10] = vec_packs(tmp1[20], tmp1[21]);
+  in[11] = vec_packs(tmp1[22], tmp1[23]);
+  in[12] = vec_packs(tmp1[24], tmp1[25]);
+  in[13] = vec_packs(tmp1[26], tmp1[27]);
+  in[14] = vec_packs(tmp1[28], tmp1[29]);
+  in[15] = vec_packs(tmp1[30], tmp1[31]);
+
+  // stage 2
+  tmp16_0[0] = vec_mergeh(in[8], in[9]);
+  tmp16_0[1] = vec_mergel(in[8], in[9]);
+  tmp16_0[2] = vec_mergeh(in[10], in[11]);
+  tmp16_0[3] = vec_mergel(in[10], in[11]);
+  tmp16_0[4] = vec_mergeh(in[12], in[13]);
+  tmp16_0[5] = vec_mergel(in[12], in[13]);
+  tmp16_0[6] = vec_mergeh(in[14], in[15]);
+  tmp16_0[7] = vec_mergel(in[14], in[15]);
+
+  tmp0[0] = vec_msum(tmp16_0[0], cospi_p04_p28, zerov);
+  tmp0[1] = vec_msum(tmp16_0[1], cospi_p04_p28, zerov);
+  tmp0[2] = vec_msum(tmp16_0[0], cospi_p28_m04, zerov);
+  tmp0[3] = vec_msum(tmp16_0[1], cospi_p28_m04, zerov);
+  tmp0[4] = vec_msum(tmp16_0[2], cospi_p20_p12, zerov);
+  tmp0[5] = vec_msum(tmp16_0[3], cospi_p20_p12, zerov);
+  tmp0[6] = vec_msum(tmp16_0[2], cospi_p12_m20, zerov);
+  tmp0[7] = vec_msum(tmp16_0[3], cospi_p12_m20, zerov);
+  tmp0[8] = vec_msum(tmp16_0[4], cospi_m28_p04, zerov);
+  tmp0[9] = vec_msum(tmp16_0[5], cospi_m28_p04, zerov);
+  tmp0[10] = vec_msum(tmp16_0[4], cospi_p04_p28, zerov);
+  tmp0[11] = vec_msum(tmp16_0[5], cospi_p04_p28, zerov);
+  tmp0[12] = vec_msum(tmp16_0[6], cospi_m12_p20, zerov);
+  tmp0[13] = vec_msum(tmp16_0[7], cospi_m12_p20, zerov);
+  tmp0[14] = vec_msum(tmp16_0[6], cospi_p20_p12, zerov);
+  tmp0[15] = vec_msum(tmp16_0[7], cospi_p20_p12, zerov);
+
+  tmp1[0] = vec_add(tmp0[0], tmp0[8]);
+  tmp1[1] = vec_add(tmp0[1], tmp0[9]);
+  tmp1[2] = vec_add(tmp0[2], tmp0[10]);
+  tmp1[3] = vec_add(tmp0[3], tmp0[11]);
+  tmp1[4] = vec_add(tmp0[4], tmp0[12]);
+  tmp1[5] = vec_add(tmp0[5], tmp0[13]);
+  tmp1[6] = vec_add(tmp0[6], tmp0[14]);
+  tmp1[7] = vec_add(tmp0[7], tmp0[15]);
+  tmp1[8] = vec_sub(tmp0[0], tmp0[8]);
+  tmp1[9] = vec_sub(tmp0[1], tmp0[9]);
+  tmp1[10] = vec_sub(tmp0[2], tmp0[10]);
+  tmp1[11] = vec_sub(tmp0[3], tmp0[11]);
+  tmp1[12] = vec_sub(tmp0[4], tmp0[12]);
+  tmp1[13] = vec_sub(tmp0[5], tmp0[13]);
+  tmp1[14] = vec_sub(tmp0[6], tmp0[14]);
+  tmp1[15] = vec_sub(tmp0[7], tmp0[15]);
+
+  DCT_CONST_ROUND_SHIFT(tmp1[0]);
+  DCT_CONST_ROUND_SHIFT(tmp1[1]);
+  DCT_CONST_ROUND_SHIFT(tmp1[2]);
+  DCT_CONST_ROUND_SHIFT(tmp1[3]);
+  DCT_CONST_ROUND_SHIFT(tmp1[4]);
+  DCT_CONST_ROUND_SHIFT(tmp1[5]);
+  DCT_CONST_ROUND_SHIFT(tmp1[6]);
+  DCT_CONST_ROUND_SHIFT(tmp1[7]);
+  DCT_CONST_ROUND_SHIFT(tmp1[8]);
+  DCT_CONST_ROUND_SHIFT(tmp1[9]);
+  DCT_CONST_ROUND_SHIFT(tmp1[10]);
+  DCT_CONST_ROUND_SHIFT(tmp1[11]);
+  DCT_CONST_ROUND_SHIFT(tmp1[12]);
+  DCT_CONST_ROUND_SHIFT(tmp1[13]);
+  DCT_CONST_ROUND_SHIFT(tmp1[14]);
+  DCT_CONST_ROUND_SHIFT(tmp1[15]);
+
+  tmp16_0[0] = vec_add(in[0], in[4]);
+  tmp16_0[1] = vec_add(in[1], in[5]);
+  tmp16_0[2] = vec_add(in[2], in[6]);
+  tmp16_0[3] = vec_add(in[3], in[7]);
+  tmp16_0[4] = vec_sub(in[0], in[4]);
+  tmp16_0[5] = vec_sub(in[1], in[5]);
+  tmp16_0[6] = vec_sub(in[2], in[6]);
+  tmp16_0[7] = vec_sub(in[3], in[7]);
+  tmp16_0[8] = vec_packs(tmp1[0], tmp1[1]);
+  tmp16_0[9] = vec_packs(tmp1[2], tmp1[3]);
+  tmp16_0[10] = vec_packs(tmp1[4], tmp1[5]);
+  tmp16_0[11] = vec_packs(tmp1[6], tmp1[7]);
+  tmp16_0[12] = vec_packs(tmp1[8], tmp1[9]);
+  tmp16_0[13] = vec_packs(tmp1[10], tmp1[11]);
+  tmp16_0[14] = vec_packs(tmp1[12], tmp1[13]);
+  tmp16_0[15] = vec_packs(tmp1[14], tmp1[15]);
+
+  // stage 3
+  in[0] = vec_mergeh(tmp16_0[4], tmp16_0[5]);
+  in[1] = vec_mergel(tmp16_0[4], tmp16_0[5]);
+  in[2] = vec_mergeh(tmp16_0[6], tmp16_0[7]);
+  in[3] = vec_mergel(tmp16_0[6], tmp16_0[7]);
+  in[4] = vec_mergeh(tmp16_0[12], tmp16_0[13]);
+  in[5] = vec_mergel(tmp16_0[12], tmp16_0[13]);
+  in[6] = vec_mergeh(tmp16_0[14], tmp16_0[15]);
+  in[7] = vec_mergel(tmp16_0[14], tmp16_0[15]);
+
+  tmp0[0] = vec_msum(in[0], cospi_p08_p24, zerov);
+  tmp0[1] = vec_msum(in[1], cospi_p08_p24, zerov);
+  tmp0[2] = vec_msum(in[0], cospi_p24_m08, zerov);
+  tmp0[3] = vec_msum(in[1], cospi_p24_m08, zerov);
+  tmp0[4] = vec_msum(in[2], cospi_m24_p08, zerov);
+  tmp0[5] = vec_msum(in[3], cospi_m24_p08, zerov);
+  tmp0[6] = vec_msum(in[2], cospi_p08_p24, zerov);
+  tmp0[7] = vec_msum(in[3], cospi_p08_p24, zerov);
+  tmp0[8] = vec_msum(in[4], cospi_p08_p24, zerov);
+  tmp0[9] = vec_msum(in[5], cospi_p08_p24, zerov);
+  tmp0[10] = vec_msum(in[4], cospi_p24_m08, zerov);
+  tmp0[11] = vec_msum(in[5], cospi_p24_m08, zerov);
+  tmp0[12] = vec_msum(in[6], cospi_m24_p08, zerov);
+  tmp0[13] = vec_msum(in[7], cospi_m24_p08, zerov);
+  tmp0[14] = vec_msum(in[6], cospi_p08_p24, zerov);
+  tmp0[15] = vec_msum(in[7], cospi_p08_p24, zerov);
+
+  tmp1[0] = vec_add(tmp0[0], tmp0[4]);
+  tmp1[1] = vec_add(tmp0[1], tmp0[5]);
+  tmp1[2] = vec_add(tmp0[2], tmp0[6]);
+  tmp1[3] = vec_add(tmp0[3], tmp0[7]);
+  tmp1[4] = vec_sub(tmp0[0], tmp0[4]);
+  tmp1[5] = vec_sub(tmp0[1], tmp0[5]);
+  tmp1[6] = vec_sub(tmp0[2], tmp0[6]);
+  tmp1[7] = vec_sub(tmp0[3], tmp0[7]);
+  tmp1[8] = vec_add(tmp0[8], tmp0[12]);
+  tmp1[9] = vec_add(tmp0[9], tmp0[13]);
+  tmp1[10] = vec_add(tmp0[10], tmp0[14]);
+  tmp1[11] = vec_add(tmp0[11], tmp0[15]);
+  tmp1[12] = vec_sub(tmp0[8], tmp0[12]);
+  tmp1[13] = vec_sub(tmp0[9], tmp0[13]);
+  tmp1[14] = vec_sub(tmp0[10], tmp0[14]);
+  tmp1[15] = vec_sub(tmp0[11], tmp0[15]);
+
+  DCT_CONST_ROUND_SHIFT(tmp1[0]);
+  DCT_CONST_ROUND_SHIFT(tmp1[1]);
+  DCT_CONST_ROUND_SHIFT(tmp1[2]);
+  DCT_CONST_ROUND_SHIFT(tmp1[3]);
+  DCT_CONST_ROUND_SHIFT(tmp1[4]);
+  DCT_CONST_ROUND_SHIFT(tmp1[5]);
+  DCT_CONST_ROUND_SHIFT(tmp1[6]);
+  DCT_CONST_ROUND_SHIFT(tmp1[7]);
+  DCT_CONST_ROUND_SHIFT(tmp1[8]);
+  DCT_CONST_ROUND_SHIFT(tmp1[9]);
+  DCT_CONST_ROUND_SHIFT(tmp1[10]);
+  DCT_CONST_ROUND_SHIFT(tmp1[11]);
+  DCT_CONST_ROUND_SHIFT(tmp1[12]);
+  DCT_CONST_ROUND_SHIFT(tmp1[13]);
+  DCT_CONST_ROUND_SHIFT(tmp1[14]);
+  DCT_CONST_ROUND_SHIFT(tmp1[15]);
+
+  in[0] = vec_add(tmp16_0[0], tmp16_0[2]);
+  in[1] = vec_add(tmp16_0[1], tmp16_0[3]);
+  in[2] = vec_sub(tmp16_0[0], tmp16_0[2]);
+  in[3] = vec_sub(tmp16_0[1], tmp16_0[3]);
+  in[4] = vec_packs(tmp1[0], tmp1[1]);
+  in[5] = vec_packs(tmp1[2], tmp1[3]);
+  in[6] = vec_packs(tmp1[4], tmp1[5]);
+  in[7] = vec_packs(tmp1[6], tmp1[7]);
+  in[8] = vec_add(tmp16_0[8], tmp16_0[10]);
+  in[9] = vec_add(tmp16_0[9], tmp16_0[11]);
+  in[10] = vec_sub(tmp16_0[8], tmp16_0[10]);
+  in[11] = vec_sub(tmp16_0[9], tmp16_0[11]);
+  in[12] = vec_packs(tmp1[8], tmp1[9]);
+  in[13] = vec_packs(tmp1[10], tmp1[11]);
+  in[14] = vec_packs(tmp1[12], tmp1[13]);
+  in[15] = vec_packs(tmp1[14], tmp1[15]);
+
+  // stage 4
+  out[0] = vec_mergeh(in[2], in[3]);
+  out[1] = vec_mergel(in[2], in[3]);
+  out[2] = vec_mergeh(in[6], in[7]);
+  out[3] = vec_mergel(in[6], in[7]);
+  out[4] = vec_mergeh(in[10], in[11]);
+  out[5] = vec_mergel(in[10], in[11]);
+  out[6] = vec_mergeh(in[14], in[15]);
+  out[7] = vec_mergel(in[14], in[15]);
+}
+
+void vpx_iadst16_vsx(int16x8_t *src0, int16x8_t *src1) {
+  int16x8_t tmp0[16], tmp1[16], tmp2[8];
+  int32x4_t tmp3, tmp4;
+  int16x8_t zero16v = vec_splat_s16(0);
+  int32x4_t zerov = vec_splat_s32(0);
+  int16x8_t cospi_p16_m16 = vec_mergel(cospi16_v, cospi16m_v);
+  int16x8_t cospi_m16_p16 = vec_mergel(cospi16m_v, cospi16_v);
+  ROUND_SHIFT_INIT;
+
+  TRANSPOSE8x8(src0[0], src0[2], src0[4], src0[6], src0[8], src0[10], src0[12],
+               src0[14], tmp0[0], tmp0[1], tmp0[2], tmp0[3], tmp0[4], tmp0[5],
+               tmp0[6], tmp0[7]);
+  TRANSPOSE8x8(src1[0], src1[2], src1[4], src1[6], src1[8], src1[10], src1[12],
+               src1[14], tmp1[0], tmp1[1], tmp1[2], tmp1[3], tmp1[4], tmp1[5],
+               tmp1[6], tmp1[7]);
+  TRANSPOSE8x8(src0[1], src0[3], src0[5], src0[7], src0[9], src0[11], src0[13],
+               src0[15], tmp0[8], tmp0[9], tmp0[10], tmp0[11], tmp0[12],
+               tmp0[13], tmp0[14], tmp0[15]);
+  TRANSPOSE8x8(src1[1], src1[3], src1[5], src1[7], src1[9], src1[11], src1[13],
+               src1[15], tmp1[8], tmp1[9], tmp1[10], tmp1[11], tmp1[12],
+               tmp1[13], tmp1[14], tmp1[15]);
+
+  iadst16x8_vsx(tmp0, tmp2);
+  IADST_WRAPLOW(tmp2[0], tmp2[1], tmp3, tmp4, src0[14], cospi16m_v);
+  IADST_WRAPLOW(tmp2[0], tmp2[1], tmp3, tmp4, src1[0], cospi_p16_m16);
+  IADST_WRAPLOW(tmp2[2], tmp2[3], tmp3, tmp4, src0[8], cospi16_v);
+  IADST_WRAPLOW(tmp2[2], tmp2[3], tmp3, tmp4, src1[6], cospi_m16_p16);
+  IADST_WRAPLOW(tmp2[4], tmp2[5], tmp3, tmp4, src0[12], cospi16_v);
+  IADST_WRAPLOW(tmp2[4], tmp2[5], tmp3, tmp4, src1[2], cospi_m16_p16);
+  IADST_WRAPLOW(tmp2[6], tmp2[7], tmp3, tmp4, src0[10], cospi16m_v);
+  IADST_WRAPLOW(tmp2[6], tmp2[7], tmp3, tmp4, src1[4], cospi_p16_m16);
+
+  src0[0] = tmp0[0];
+  src0[2] = vec_sub(zero16v, tmp0[8]);
+  src0[4] = tmp0[12];
+  src0[6] = vec_sub(zero16v, tmp0[4]);
+  src1[8] = tmp0[5];
+  src1[10] = vec_sub(zero16v, tmp0[13]);
+  src1[12] = tmp0[9];
+  src1[14] = vec_sub(zero16v, tmp0[1]);
+
+  iadst16x8_vsx(tmp1, tmp2);
+  IADST_WRAPLOW(tmp2[0], tmp2[1], tmp3, tmp4, src0[15], cospi16m_v);
+  IADST_WRAPLOW(tmp2[0], tmp2[1], tmp3, tmp4, src1[1], cospi_p16_m16);
+  IADST_WRAPLOW(tmp2[2], tmp2[3], tmp3, tmp4, src0[9], cospi16_v);
+  IADST_WRAPLOW(tmp2[2], tmp2[3], tmp3, tmp4, src1[7], cospi_m16_p16);
+  IADST_WRAPLOW(tmp2[4], tmp2[5], tmp3, tmp4, src0[13], cospi16_v);
+  IADST_WRAPLOW(tmp2[4], tmp2[5], tmp3, tmp4, src1[3], cospi_m16_p16);
+  IADST_WRAPLOW(tmp2[6], tmp2[7], tmp3, tmp4, src0[11], cospi16m_v);
+  IADST_WRAPLOW(tmp2[6], tmp2[7], tmp3, tmp4, src1[5], cospi_p16_m16);
+
+  src0[1] = tmp1[0];
+  src0[3] = vec_sub(zero16v, tmp1[8]);
+  src0[5] = tmp1[12];
+  src0[7] = vec_sub(zero16v, tmp1[4]);
+  src1[9] = tmp1[5];
+  src1[11] = vec_sub(zero16v, tmp1[13]);
+  src1[13] = tmp1[9];
+  src1[15] = vec_sub(zero16v, tmp1[1]);
+}
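The `DCT_CONST_ROUND_SHIFT` step applied after each `vec_msum` butterfly above can be modeled in scalar C. In libvpx the trigonometric (`cospi`) constants are scaled by 2^14 (`DCT_CONST_BITS`), so each 32-bit product must be rounded to nearest and shifted back down. The following is a scalar sketch of that rounding, not the vector macro itself:

```c
#include <stdint.h>

/* Scalar sketch of the rounding shift applied after each vec_msum
 * butterfly above. DCT_CONST_BITS follows upstream libvpx, where the
 * cospi constants are scaled by 2^14. */
#define DCT_CONST_BITS 14

static int32_t dct_const_round_shift(int32_t input) {
  /* Add half of 2^14 before shifting so the result rounds to nearest
   * rather than truncating. */
  return (input + (1 << (DCT_CONST_BITS - 1))) >> DCT_CONST_BITS;
}
```

For example, a value of exactly `3 << 14` rounds back to 3, while anything at or above the half step rounds up.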
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/inv_txfm_vsx.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/inv_txfm_vsx.h
new file mode 100644
index 0000000..7031742
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/inv_txfm_vsx.h
@@ -0,0 +1,48 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VPX_DSP_PPC_INV_TXFM_VSX_H_
+#define VPX_VPX_DSP_PPC_INV_TXFM_VSX_H_
+
+#include "vpx_dsp/ppc/types_vsx.h"
+
+void vpx_round_store4x4_vsx(int16x8_t *in, int16x8_t *out, uint8_t *dest,
+                            int stride);
+void vpx_idct4_vsx(int16x8_t *in, int16x8_t *out);
+void vp9_iadst4_vsx(int16x8_t *in, int16x8_t *out);
+
+void vpx_round_store8x8_vsx(int16x8_t *in, uint8_t *dest, int stride);
+void vpx_idct8_vsx(int16x8_t *in, int16x8_t *out);
+void vp9_iadst8_vsx(int16x8_t *in, int16x8_t *out);
+
+#define LOAD_INPUT16(load, source, offset, step, in) \
+  in[0] = load(offset, source);                      \
+  in[1] = load((step) + (offset), source);           \
+  in[2] = load(2 * (step) + (offset), source);       \
+  in[3] = load(3 * (step) + (offset), source);       \
+  in[4] = load(4 * (step) + (offset), source);       \
+  in[5] = load(5 * (step) + (offset), source);       \
+  in[6] = load(6 * (step) + (offset), source);       \
+  in[7] = load(7 * (step) + (offset), source);       \
+  in[8] = load(8 * (step) + (offset), source);       \
+  in[9] = load(9 * (step) + (offset), source);       \
+  in[10] = load(10 * (step) + (offset), source);     \
+  in[11] = load(11 * (step) + (offset), source);     \
+  in[12] = load(12 * (step) + (offset), source);     \
+  in[13] = load(13 * (step) + (offset), source);     \
+  in[14] = load(14 * (step) + (offset), source);     \
+  in[15] = load(15 * (step) + (offset), source);
+
+void vpx_round_store16x16_vsx(int16x8_t *src0, int16x8_t *src1, uint8_t *dest,
+                              int stride);
+void vpx_idct16_vsx(int16x8_t *src0, int16x8_t *src1);
+void vpx_iadst16_vsx(int16x8_t *src0, int16x8_t *src1);
+
+#endif  // VPX_VPX_DSP_PPC_INV_TXFM_VSX_H_
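The `LOAD_INPUT16` macro above expands to sixteen strided vector loads at byte offsets `offset + k * step`. A hypothetical scalar model of the same addressing (loading single bytes in place of vectors; the function name is ours, not libvpx's):

```c
#include <stddef.h>
#include <stdint.h>

/* Scalar model of the LOAD_INPUT16 addressing above: element k is
 * fetched at byte offset (offset + k * step) from source. A uint8_t
 * load stands in for the 128-bit vector load; only the offset
 * arithmetic is modeled. */
static void load_input16(const uint8_t *source, size_t offset, size_t step,
                         uint8_t *in) {
  for (int k = 0; k < 16; k++) {
    in[k] = source[offset + k * step];
  }
}
```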
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/quantize_vsx.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/quantize_vsx.c
new file mode 100644
index 0000000..d85e63b
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/quantize_vsx.c
@@ -0,0 +1,305 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <assert.h>
+
+#include "./vpx_dsp_rtcd.h"
+#include "vpx_dsp/ppc/types_vsx.h"
+
+// Negate 16-bit integers in a when the corresponding signed 16-bit
+// integer in b is negative.
+static INLINE int16x8_t vec_sign(int16x8_t a, int16x8_t b) {
+  const int16x8_t mask = vec_sra(b, vec_shift_sign_s16);
+  return vec_xor(vec_add(a, mask), mask);
+}
+
+// Sets the value of each 32-bit integer to 1 when the corresponding value in
+// a is negative.
+static INLINE int32x4_t vec_is_neg(int32x4_t a) {
+  return vec_sr(a, vec_shift_sign_s32);
+}
+
+// Multiply the packed 16-bit integers in a and b, producing intermediate 32-bit
+// integers, and return the high 16 bits of the intermediate integers.
+// (a * b) >> 16
+static INLINE int16x8_t vec_mulhi(int16x8_t a, int16x8_t b) {
+  // madds does ((A * B) >> 15) + C; we need >> 16, so we perform an extra
+  // right shift.
+  return vec_sra(vec_madds(a, b, vec_zeros_s16), vec_ones_u16);
+}
+
+// Quantization function used for 4x4, 8x8 and 16x16 blocks.
+static INLINE int16x8_t quantize_coeff(int16x8_t coeff, int16x8_t coeff_abs,
+                                       int16x8_t round, int16x8_t quant,
+                                       int16x8_t quant_shift, bool16x8_t mask) {
+  const int16x8_t rounded = vec_vaddshs(coeff_abs, round);
+  int16x8_t qcoeff = vec_mulhi(rounded, quant);
+  qcoeff = vec_add(qcoeff, rounded);
+  qcoeff = vec_mulhi(qcoeff, quant_shift);
+  qcoeff = vec_sign(qcoeff, coeff);
+  return vec_and(qcoeff, mask);
+}
+
+// Quantization function used for 32x32 blocks.
+static INLINE int16x8_t quantize_coeff_32(int16x8_t coeff, int16x8_t coeff_abs,
+                                          int16x8_t round, int16x8_t quant,
+                                          int16x8_t quant_shift,
+                                          bool16x8_t mask) {
+  const int16x8_t rounded = vec_vaddshs(coeff_abs, round);
+  int16x8_t qcoeff = vec_mulhi(rounded, quant);
+  qcoeff = vec_add(qcoeff, rounded);
+  // 32x32 blocks require an extra multiplication by 2. This compensates for
+  // the extra right shift added in vec_mulhi, so vec_madds can be used
+  // directly instead of vec_mulhi: (((a * b) >> 15) >> 1) << 1 == (a * b) >> 15
+  qcoeff = vec_madds(qcoeff, quant_shift, vec_zeros_s16);
+  qcoeff = vec_sign(qcoeff, coeff);
+  return vec_and(qcoeff, mask);
+}
+
+// DeQuantization function used for 32x32 blocks. Quantized coeff of 32x32
+// blocks are twice as big as for other block sizes. As such, using
+// vec_mladd results in overflow.
+static INLINE int16x8_t dequantize_coeff_32(int16x8_t qcoeff,
+                                            int16x8_t dequant) {
+  int32x4_t dqcoeffe = vec_mule(qcoeff, dequant);
+  int32x4_t dqcoeffo = vec_mulo(qcoeff, dequant);
+  // Add 1 if negative to round towards zero, because the C code uses division.
+  dqcoeffe = vec_add(dqcoeffe, vec_is_neg(dqcoeffe));
+  dqcoeffo = vec_add(dqcoeffo, vec_is_neg(dqcoeffo));
+  dqcoeffe = vec_sra(dqcoeffe, vec_ones_u32);
+  dqcoeffo = vec_sra(dqcoeffo, vec_ones_u32);
+  return (int16x8_t)vec_perm(dqcoeffe, dqcoeffo, vec_perm_odd_even_pack);
+}
+
+static INLINE int16x8_t nonzero_scanindex(int16x8_t qcoeff, bool16x8_t mask,
+                                          const int16_t *iscan_ptr, int index) {
+  int16x8_t scan = vec_vsx_ld(index, iscan_ptr);
+  bool16x8_t zero_coeff = vec_cmpeq(qcoeff, vec_zeros_s16);
+  scan = vec_sub(scan, mask);
+  return vec_andc(scan, zero_coeff);
+}
+
+// Compare packed 16-bit integers across a and return a vector in which every
+// element holds the maximum value found across a.
+static INLINE int16x8_t vec_max_across(int16x8_t a) {
+  a = vec_max(a, vec_perm(a, a, vec_perm64));
+  a = vec_max(a, vec_perm(a, a, vec_perm32));
+  return vec_max(a, vec_perm(a, a, vec_perm16));
+}
+
+void vpx_quantize_b_vsx(const tran_low_t *coeff_ptr, intptr_t n_coeffs,
+                        int skip_block, const int16_t *zbin_ptr,
+                        const int16_t *round_ptr, const int16_t *quant_ptr,
+                        const int16_t *quant_shift_ptr, tran_low_t *qcoeff_ptr,
+                        tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr,
+                        uint16_t *eob_ptr, const int16_t *scan_ptr,
+                        const int16_t *iscan_ptr) {
+  int16x8_t qcoeff0, qcoeff1, dqcoeff0, dqcoeff1, eob;
+  bool16x8_t zero_mask0, zero_mask1;
+
+  // First set of 8 coeffs starts with DC + 7 AC
+  int16x8_t zbin = vec_vsx_ld(0, zbin_ptr);
+  int16x8_t round = vec_vsx_ld(0, round_ptr);
+  int16x8_t quant = vec_vsx_ld(0, quant_ptr);
+  int16x8_t dequant = vec_vsx_ld(0, dequant_ptr);
+  int16x8_t quant_shift = vec_vsx_ld(0, quant_shift_ptr);
+
+  int16x8_t coeff0 = vec_vsx_ld(0, coeff_ptr);
+  int16x8_t coeff1 = vec_vsx_ld(16, coeff_ptr);
+
+  int16x8_t coeff0_abs = vec_abs(coeff0);
+  int16x8_t coeff1_abs = vec_abs(coeff1);
+
+  zero_mask0 = vec_cmpge(coeff0_abs, zbin);
+  zbin = vec_splat(zbin, 1);
+  zero_mask1 = vec_cmpge(coeff1_abs, zbin);
+
+  (void)scan_ptr;
+  (void)skip_block;
+  assert(!skip_block);
+
+  qcoeff0 =
+      quantize_coeff(coeff0, coeff0_abs, round, quant, quant_shift, zero_mask0);
+  vec_vsx_st(qcoeff0, 0, qcoeff_ptr);
+  round = vec_splat(round, 1);
+  quant = vec_splat(quant, 1);
+  quant_shift = vec_splat(quant_shift, 1);
+  qcoeff1 =
+      quantize_coeff(coeff1, coeff1_abs, round, quant, quant_shift, zero_mask1);
+  vec_vsx_st(qcoeff1, 16, qcoeff_ptr);
+
+  dqcoeff0 = vec_mladd(qcoeff0, dequant, vec_zeros_s16);
+  vec_vsx_st(dqcoeff0, 0, dqcoeff_ptr);
+  dequant = vec_splat(dequant, 1);
+  dqcoeff1 = vec_mladd(qcoeff1, dequant, vec_zeros_s16);
+  vec_vsx_st(dqcoeff1, 16, dqcoeff_ptr);
+
+  eob = vec_max(nonzero_scanindex(qcoeff0, zero_mask0, iscan_ptr, 0),
+                nonzero_scanindex(qcoeff1, zero_mask1, iscan_ptr, 16));
+
+  if (n_coeffs > 16) {
+    int index = 16;
+    int off0 = 32;
+    int off1 = 48;
+    int off2 = 64;
+    do {
+      int16x8_t coeff2, coeff2_abs, qcoeff2, dqcoeff2, eob2;
+      bool16x8_t zero_mask2;
+      coeff0 = vec_vsx_ld(off0, coeff_ptr);
+      coeff1 = vec_vsx_ld(off1, coeff_ptr);
+      coeff2 = vec_vsx_ld(off2, coeff_ptr);
+      coeff0_abs = vec_abs(coeff0);
+      coeff1_abs = vec_abs(coeff1);
+      coeff2_abs = vec_abs(coeff2);
+      zero_mask0 = vec_cmpge(coeff0_abs, zbin);
+      zero_mask1 = vec_cmpge(coeff1_abs, zbin);
+      zero_mask2 = vec_cmpge(coeff2_abs, zbin);
+      qcoeff0 = quantize_coeff(coeff0, coeff0_abs, round, quant, quant_shift,
+                               zero_mask0);
+      qcoeff1 = quantize_coeff(coeff1, coeff1_abs, round, quant, quant_shift,
+                               zero_mask1);
+      qcoeff2 = quantize_coeff(coeff2, coeff2_abs, round, quant, quant_shift,
+                               zero_mask2);
+      vec_vsx_st(qcoeff0, off0, qcoeff_ptr);
+      vec_vsx_st(qcoeff1, off1, qcoeff_ptr);
+      vec_vsx_st(qcoeff2, off2, qcoeff_ptr);
+
+      dqcoeff0 = vec_mladd(qcoeff0, dequant, vec_zeros_s16);
+      dqcoeff1 = vec_mladd(qcoeff1, dequant, vec_zeros_s16);
+      dqcoeff2 = vec_mladd(qcoeff2, dequant, vec_zeros_s16);
+
+      vec_vsx_st(dqcoeff0, off0, dqcoeff_ptr);
+      vec_vsx_st(dqcoeff1, off1, dqcoeff_ptr);
+      vec_vsx_st(dqcoeff2, off2, dqcoeff_ptr);
+
+      eob =
+          vec_max(eob, nonzero_scanindex(qcoeff0, zero_mask0, iscan_ptr, off0));
+      eob2 = vec_max(nonzero_scanindex(qcoeff1, zero_mask1, iscan_ptr, off1),
+                     nonzero_scanindex(qcoeff2, zero_mask2, iscan_ptr, off2));
+      eob = vec_max(eob, eob2);
+
+      index += 24;
+      off0 += 48;
+      off1 += 48;
+      off2 += 48;
+    } while (index < n_coeffs);
+  }
+
+  eob = vec_max_across(eob);
+  *eob_ptr = eob[0];
+}
+
+void vpx_quantize_b_32x32_vsx(
+    const tran_low_t *coeff_ptr, intptr_t n_coeffs, int skip_block,
+    const int16_t *zbin_ptr, const int16_t *round_ptr, const int16_t *quant_ptr,
+    const int16_t *quant_shift_ptr, tran_low_t *qcoeff_ptr,
+    tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr, uint16_t *eob_ptr,
+    const int16_t *scan_ptr, const int16_t *iscan_ptr) {
+  // In stage 1, we quantize 16 coeffs (DC + 15 AC)
+  // In stage 2, we loop 42 times and quantize 24 coeffs per iteration
+  // (32 * 32 - 16) / 24 = 42
+  int num_itr = 42;
+  // Offsets are in bytes, 16 coeffs = 32 bytes
+  int off0 = 32;
+  int off1 = 48;
+  int off2 = 64;
+
+  int16x8_t qcoeff0, qcoeff1, eob;
+  bool16x8_t zero_mask0, zero_mask1;
+
+  int16x8_t zbin = vec_vsx_ld(0, zbin_ptr);
+  int16x8_t round = vec_vsx_ld(0, round_ptr);
+  int16x8_t quant = vec_vsx_ld(0, quant_ptr);
+  int16x8_t dequant = vec_vsx_ld(0, dequant_ptr);
+  int16x8_t quant_shift = vec_vsx_ld(0, quant_shift_ptr);
+
+  int16x8_t coeff0 = vec_vsx_ld(0, coeff_ptr);
+  int16x8_t coeff1 = vec_vsx_ld(16, coeff_ptr);
+
+  int16x8_t coeff0_abs = vec_abs(coeff0);
+  int16x8_t coeff1_abs = vec_abs(coeff1);
+
+  (void)scan_ptr;
+  (void)skip_block;
+  (void)n_coeffs;
+  assert(!skip_block);
+
+  // 32x32 quantization requires that zbin and round be divided by 2
+  zbin = vec_sra(vec_add(zbin, vec_ones_s16), vec_ones_u16);
+  round = vec_sra(vec_add(round, vec_ones_s16), vec_ones_u16);
+
+  zero_mask0 = vec_cmpge(coeff0_abs, zbin);
+  zbin = vec_splat(zbin, 1);  // remove DC from zbin
+  zero_mask1 = vec_cmpge(coeff1_abs, zbin);
+
+  qcoeff0 = quantize_coeff_32(coeff0, coeff0_abs, round, quant, quant_shift,
+                              zero_mask0);
+  round = vec_splat(round, 1);              // remove DC from round
+  quant = vec_splat(quant, 1);              // remove DC from quant
+  quant_shift = vec_splat(quant_shift, 1);  // remove DC from quant_shift
+  qcoeff1 = quantize_coeff_32(coeff1, coeff1_abs, round, quant, quant_shift,
+                              zero_mask1);
+
+  vec_vsx_st(qcoeff0, 0, qcoeff_ptr);
+  vec_vsx_st(qcoeff1, 16, qcoeff_ptr);
+
+  vec_vsx_st(dequantize_coeff_32(qcoeff0, dequant), 0, dqcoeff_ptr);
+  dequant = vec_splat(dequant, 1);  // remove DC from dequant
+  vec_vsx_st(dequantize_coeff_32(qcoeff1, dequant), 16, dqcoeff_ptr);
+
+  eob = vec_max(nonzero_scanindex(qcoeff0, zero_mask0, iscan_ptr, 0),
+                nonzero_scanindex(qcoeff1, zero_mask1, iscan_ptr, 16));
+
+  do {
+    int16x8_t coeff2, coeff2_abs, qcoeff2, eob2;
+    bool16x8_t zero_mask2;
+
+    coeff0 = vec_vsx_ld(off0, coeff_ptr);
+    coeff1 = vec_vsx_ld(off1, coeff_ptr);
+    coeff2 = vec_vsx_ld(off2, coeff_ptr);
+
+    coeff0_abs = vec_abs(coeff0);
+    coeff1_abs = vec_abs(coeff1);
+    coeff2_abs = vec_abs(coeff2);
+
+    zero_mask0 = vec_cmpge(coeff0_abs, zbin);
+    zero_mask1 = vec_cmpge(coeff1_abs, zbin);
+    zero_mask2 = vec_cmpge(coeff2_abs, zbin);
+
+    qcoeff0 = quantize_coeff_32(coeff0, coeff0_abs, round, quant, quant_shift,
+                                zero_mask0);
+    qcoeff1 = quantize_coeff_32(coeff1, coeff1_abs, round, quant, quant_shift,
+                                zero_mask1);
+    qcoeff2 = quantize_coeff_32(coeff2, coeff2_abs, round, quant, quant_shift,
+                                zero_mask2);
+
+    vec_vsx_st(qcoeff0, off0, qcoeff_ptr);
+    vec_vsx_st(qcoeff1, off1, qcoeff_ptr);
+    vec_vsx_st(qcoeff2, off2, qcoeff_ptr);
+
+    vec_vsx_st(dequantize_coeff_32(qcoeff0, dequant), off0, dqcoeff_ptr);
+    vec_vsx_st(dequantize_coeff_32(qcoeff1, dequant), off1, dqcoeff_ptr);
+    vec_vsx_st(dequantize_coeff_32(qcoeff2, dequant), off2, dqcoeff_ptr);
+
+    eob = vec_max(eob, nonzero_scanindex(qcoeff0, zero_mask0, iscan_ptr, off0));
+    eob2 = vec_max(nonzero_scanindex(qcoeff1, zero_mask1, iscan_ptr, off1),
+                   nonzero_scanindex(qcoeff2, zero_mask2, iscan_ptr, off2));
+    eob = vec_max(eob, eob2);
+
+    // 24 int16_t is 48 bytes
+    off0 += 48;
+    off1 += 48;
+    off2 += 48;
+    num_itr--;
+  } while (num_itr != 0);
+
+  eob = vec_max_across(eob);
+  *eob_ptr = eob[0];
+}
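The `vec_sign()` helper in the quantizer above uses a branch-free two's-complement identity: an arithmetic shift of `b` by 15 yields an all-ones mask exactly when `b` is negative, and `(a + mask) ^ mask` is then `a` or `-a`. A hypothetical scalar model (the name `sign_apply` is ours, not libvpx's):

```c
#include <stdint.h>

/* Scalar model of the branch-free sign propagation in vec_sign()
 * above: mask is 0 when b >= 0 and 0xFFFF (all ones) when b < 0, so
 * (a + mask) ^ mask equals a or -a respectively. */
static int16_t sign_apply(int16_t a, int16_t b) {
  const int16_t mask = (int16_t)(b >> 15);
  return (int16_t)((a + mask) ^ mask);
}
```

This lets the vector quantizer work on absolute values throughout and restore each coefficient's sign in one pass, with no per-lane branching.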
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/sad_vsx.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/sad_vsx.c
index bb49add..a08ae12 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/sad_vsx.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/sad_vsx.c
@@ -17,71 +17,75 @@
 #include "vpx/vpx_integer.h"
 #include "vpx_ports/mem.h"
 
-#define PROCESS16(offset)           \
-  v_a = vec_vsx_ld(offset, a);      \
-  v_b = vec_vsx_ld(offset, b);      \
-  v_ah = unpack_to_s16_h(v_a);      \
-  v_al = unpack_to_s16_l(v_a);      \
-  v_bh = unpack_to_s16_h(v_b);      \
-  v_bl = unpack_to_s16_l(v_b);      \
-  v_subh = vec_sub(v_ah, v_bh);     \
-  v_subl = vec_sub(v_al, v_bl);     \
-  v_absh = vec_abs(v_subh);         \
-  v_absl = vec_abs(v_subl);         \
-  v_sad = vec_sum4s(v_absh, v_sad); \
-  v_sad = vec_sum4s(v_absl, v_sad);
+#define PROCESS16(offset)      \
+  v_a = vec_vsx_ld(offset, a); \
+  v_b = vec_vsx_ld(offset, b); \
+  v_abs = vec_absd(v_a, v_b);  \
+  v_sad = vec_sum4s(v_abs, v_sad);
+
+#define SAD8(height)                                                     \
+  unsigned int vpx_sad8x##height##_vsx(const uint8_t *a, int a_stride,   \
+                                       const uint8_t *b, int b_stride) { \
+    int y = 0;                                                           \
+    uint8x16_t v_a, v_b, v_abs;                                          \
+    uint32x4_t v_sad = vec_zeros_u32;                                    \
+                                                                         \
+    do {                                                                 \
+      PROCESS16(0)                                                       \
+                                                                         \
+      a += a_stride;                                                     \
+      b += b_stride;                                                     \
+      y++;                                                               \
+    } while (y < height);                                                \
+                                                                         \
+    return v_sad[1] + v_sad[0];                                          \
+  }
 
 #define SAD16(height)                                                     \
   unsigned int vpx_sad16x##height##_vsx(const uint8_t *a, int a_stride,   \
                                         const uint8_t *b, int b_stride) { \
-    int y;                                                                \
-    unsigned int sad[4];                                                  \
-    uint8x16_t v_a, v_b;                                                  \
-    int16x8_t v_ah, v_al, v_bh, v_bl, v_absh, v_absl, v_subh, v_subl;     \
-    int32x4_t v_sad = vec_splat_s32(0);                                   \
+    int y = 0;                                                            \
+    uint8x16_t v_a, v_b, v_abs;                                           \
+    uint32x4_t v_sad = vec_zeros_u32;                                     \
                                                                           \
-    for (y = 0; y < height; y++) {                                        \
+    do {                                                                  \
       PROCESS16(0);                                                       \
                                                                           \
       a += a_stride;                                                      \
       b += b_stride;                                                      \
-    }                                                                     \
-    vec_vsx_st((uint32x4_t)v_sad, 0, sad);                                \
+      y++;                                                                \
+    } while (y < height);                                                 \
                                                                           \
-    return sad[3] + sad[2] + sad[1] + sad[0];                             \
+    return v_sad[3] + v_sad[2] + v_sad[1] + v_sad[0];                     \
   }
 
 #define SAD32(height)                                                     \
   unsigned int vpx_sad32x##height##_vsx(const uint8_t *a, int a_stride,   \
                                         const uint8_t *b, int b_stride) { \
-    int y;                                                                \
-    unsigned int sad[4];                                                  \
-    uint8x16_t v_a, v_b;                                                  \
-    int16x8_t v_ah, v_al, v_bh, v_bl, v_absh, v_absl, v_subh, v_subl;     \
-    int32x4_t v_sad = vec_splat_s32(0);                                   \
+    int y = 0;                                                            \
+    uint8x16_t v_a, v_b, v_abs;                                           \
+    uint32x4_t v_sad = vec_zeros_u32;                                     \
                                                                           \
-    for (y = 0; y < height; y++) {                                        \
+    do {                                                                  \
       PROCESS16(0);                                                       \
       PROCESS16(16);                                                      \
                                                                           \
       a += a_stride;                                                      \
       b += b_stride;                                                      \
-    }                                                                     \
-    vec_vsx_st((uint32x4_t)v_sad, 0, sad);                                \
+      y++;                                                                \
+    } while (y < height);                                                 \
                                                                           \
-    return sad[3] + sad[2] + sad[1] + sad[0];                             \
+    return v_sad[3] + v_sad[2] + v_sad[1] + v_sad[0];                     \
   }
 
 #define SAD64(height)                                                     \
   unsigned int vpx_sad64x##height##_vsx(const uint8_t *a, int a_stride,   \
                                         const uint8_t *b, int b_stride) { \
-    int y;                                                                \
-    unsigned int sad[4];                                                  \
-    uint8x16_t v_a, v_b;                                                  \
-    int16x8_t v_ah, v_al, v_bh, v_bl, v_absh, v_absl, v_subh, v_subl;     \
-    int32x4_t v_sad = vec_splat_s32(0);                                   \
+    int y = 0;                                                            \
+    uint8x16_t v_a, v_b, v_abs;                                           \
+    uint32x4_t v_sad = vec_zeros_u32;                                     \
                                                                           \
-    for (y = 0; y < height; y++) {                                        \
+    do {                                                                  \
       PROCESS16(0);                                                       \
       PROCESS16(16);                                                      \
       PROCESS16(32);                                                      \
@@ -89,12 +93,15 @@
                                                                           \
       a += a_stride;                                                      \
       b += b_stride;                                                      \
-    }                                                                     \
-    vec_vsx_st((uint32x4_t)v_sad, 0, sad);                                \
+      y++;                                                                \
+    } while (y < height);                                                 \
                                                                           \
-    return sad[3] + sad[2] + sad[1] + sad[0];                             \
+    return v_sad[3] + v_sad[2] + v_sad[1] + v_sad[0];                     \
   }
 
+SAD8(4);
+SAD8(8);
+SAD8(16);
 SAD16(8);
 SAD16(16);
 SAD16(32);
@@ -108,7 +115,7 @@
   unsigned int vpx_sad16x##height##_avg_vsx(                                  \
       const uint8_t *src, int src_stride, const uint8_t *ref, int ref_stride, \
       const uint8_t *second_pred) {                                           \
-    DECLARE_ALIGNED(16, uint8_t, comp_pred[16 * height]);                     \
+    DECLARE_ALIGNED(16, uint8_t, comp_pred[16 * (height)]);                   \
     vpx_comp_avg_pred_vsx(comp_pred, second_pred, 16, height, ref,            \
                           ref_stride);                                        \
                                                                               \
@@ -119,7 +126,7 @@
   unsigned int vpx_sad32x##height##_avg_vsx(                                  \
       const uint8_t *src, int src_stride, const uint8_t *ref, int ref_stride, \
       const uint8_t *second_pred) {                                           \
-    DECLARE_ALIGNED(32, uint8_t, comp_pred[32 * height]);                     \
+    DECLARE_ALIGNED(32, uint8_t, comp_pred[32 * (height)]);                   \
     vpx_comp_avg_pred_vsx(comp_pred, second_pred, 32, height, ref,            \
                           ref_stride);                                        \
                                                                               \
@@ -130,7 +137,7 @@
   unsigned int vpx_sad64x##height##_avg_vsx(                                  \
       const uint8_t *src, int src_stride, const uint8_t *ref, int ref_stride, \
       const uint8_t *second_pred) {                                           \
-    DECLARE_ALIGNED(64, uint8_t, comp_pred[64 * height]);                     \
+    DECLARE_ALIGNED(64, uint8_t, comp_pred[64 * (height)]);                   \
     vpx_comp_avg_pred_vsx(comp_pred, second_pred, 64, height, ref,            \
                           ref_stride);                                        \
     return vpx_sad64x##height##_vsx(src, src_stride, comp_pred, 64);          \
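The rewritten SAD kernels above replace the unpack/subtract/abs sequence with a single `vec_absd` per 16-byte chunk. Both forms compute the same sum of absolute differences; this scalar reference (an illustrative sketch, not part of the patch) shows the result the vector code must match:

```c
#include <stdint.h>
#include <stdlib.h>

/* Scalar reference for the SAD kernels: sum of absolute differences
 * over a width x height block. vec_absd(a, b) on each unsigned byte
 * lane is equivalent to abs((int)a - (int)b) here, and vec_sum4s
 * accumulates those per-byte distances into 32-bit lanes. */
static unsigned int scalar_sad(const uint8_t *a, int a_stride,
                               const uint8_t *b, int b_stride,
                               int width, int height) {
  unsigned int sad = 0;
  int x, y;
  for (y = 0; y < height; y++) {
    for (x = 0; x < width; x++) sad += (unsigned int)abs((int)a[x] - (int)b[x]);
    a += a_stride;
    b += b_stride;
  }
  return sad;
}
```

Note that the new `SAD8` variant sums only `v_sad[1] + v_sad[0]`: the 16-byte load covers 8 valid pixels, so only the first two 32-bit accumulator lanes carry real data.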
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/subtract_vsx.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/subtract_vsx.c
new file mode 100644
index 0000000..76ad302d
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/subtract_vsx.c
@@ -0,0 +1,117 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <assert.h>
+
+#include "./vpx_config.h"
+#include "./vpx_dsp_rtcd.h"
+#include "vpx/vpx_integer.h"
+#include "vpx_dsp/ppc/types_vsx.h"
+
+static VPX_FORCE_INLINE void subtract_block4x4(
+    int16_t *diff, ptrdiff_t diff_stride, const uint8_t *src,
+    ptrdiff_t src_stride, const uint8_t *pred, ptrdiff_t pred_stride) {
+  int16_t *diff1 = diff + 2 * diff_stride;
+  const uint8_t *src1 = src + 2 * src_stride;
+  const uint8_t *pred1 = pred + 2 * pred_stride;
+
+  const int16x8_t d0 = vec_vsx_ld(0, diff);
+  const int16x8_t d1 = vec_vsx_ld(0, diff + diff_stride);
+  const int16x8_t d2 = vec_vsx_ld(0, diff1);
+  const int16x8_t d3 = vec_vsx_ld(0, diff1 + diff_stride);
+
+  const uint8x16_t s0 = read4x2(src, (int)src_stride);
+  const uint8x16_t p0 = read4x2(pred, (int)pred_stride);
+  const uint8x16_t s1 = read4x2(src1, (int)src_stride);
+  const uint8x16_t p1 = read4x2(pred1, (int)pred_stride);
+
+  const int16x8_t da = vec_sub(unpack_to_s16_h(s0), unpack_to_s16_h(p0));
+  const int16x8_t db = vec_sub(unpack_to_s16_h(s1), unpack_to_s16_h(p1));
+
+  vec_vsx_st(xxpermdi(da, d0, 1), 0, diff);
+  vec_vsx_st(xxpermdi(da, d1, 3), 0, diff + diff_stride);
+  vec_vsx_st(xxpermdi(db, d2, 1), 0, diff1);
+  vec_vsx_st(xxpermdi(db, d3, 3), 0, diff1 + diff_stride);
+}
+
+void vpx_subtract_block_vsx(int rows, int cols, int16_t *diff,
+                            ptrdiff_t diff_stride, const uint8_t *src,
+                            ptrdiff_t src_stride, const uint8_t *pred,
+                            ptrdiff_t pred_stride) {
+  int r = rows, c;
+
+  switch (cols) {
+    case 64:
+    case 32:
+      do {
+        for (c = 0; c < cols; c += 32) {
+          const uint8x16_t s0 = vec_vsx_ld(0, src + c);
+          const uint8x16_t s1 = vec_vsx_ld(16, src + c);
+          const uint8x16_t p0 = vec_vsx_ld(0, pred + c);
+          const uint8x16_t p1 = vec_vsx_ld(16, pred + c);
+          const int16x8_t d0l =
+              vec_sub(unpack_to_s16_l(s0), unpack_to_s16_l(p0));
+          const int16x8_t d0h =
+              vec_sub(unpack_to_s16_h(s0), unpack_to_s16_h(p0));
+          const int16x8_t d1l =
+              vec_sub(unpack_to_s16_l(s1), unpack_to_s16_l(p1));
+          const int16x8_t d1h =
+              vec_sub(unpack_to_s16_h(s1), unpack_to_s16_h(p1));
+          vec_vsx_st(d0h, 0, diff + c);
+          vec_vsx_st(d0l, 16, diff + c);
+          vec_vsx_st(d1h, 0, diff + c + 16);
+          vec_vsx_st(d1l, 16, diff + c + 16);
+        }
+        diff += diff_stride;
+        pred += pred_stride;
+        src += src_stride;
+      } while (--r);
+      break;
+    case 16:
+      do {
+        const uint8x16_t s0 = vec_vsx_ld(0, src);
+        const uint8x16_t p0 = vec_vsx_ld(0, pred);
+        const int16x8_t d0l = vec_sub(unpack_to_s16_l(s0), unpack_to_s16_l(p0));
+        const int16x8_t d0h = vec_sub(unpack_to_s16_h(s0), unpack_to_s16_h(p0));
+        vec_vsx_st(d0h, 0, diff);
+        vec_vsx_st(d0l, 16, diff);
+        diff += diff_stride;
+        pred += pred_stride;
+        src += src_stride;
+      } while (--r);
+      break;
+    case 8:
+      do {
+        const uint8x16_t s0 = vec_vsx_ld(0, src);
+        const uint8x16_t p0 = vec_vsx_ld(0, pred);
+        const int16x8_t d0h = vec_sub(unpack_to_s16_h(s0), unpack_to_s16_h(p0));
+        vec_vsx_st(d0h, 0, diff);
+        diff += diff_stride;
+        pred += pred_stride;
+        src += src_stride;
+      } while (--r);
+      break;
+    case 4:
+      subtract_block4x4(diff, diff_stride, src, src_stride, pred, pred_stride);
+      if (r > 4) {
+        diff += 4 * diff_stride;
+        pred += 4 * pred_stride;
+        src += 4 * src_stride;
+
+        subtract_block4x4(diff, diff_stride, src, src_stride, pred,
+                          pred_stride);
+      }
+      break;
+    default: assert(0);  // unreachable
+  }
+}
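The new `vpx_subtract_block_vsx` specializes by column width, but every branch implements the same element-wise widened subtraction. A scalar sketch of the contract (illustrative only, not part of the patch):

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar reference for vpx_subtract_block_vsx: diff = src - pred,
 * element-wise, widened from uint8_t to int16_t so negative residuals
 * are representable. Each VSX branch above is just this loop with
 * 16/32/64 columns handled per vector iteration. */
static void scalar_subtract_block(int rows, int cols, int16_t *diff,
                                  ptrdiff_t diff_stride, const uint8_t *src,
                                  ptrdiff_t src_stride, const uint8_t *pred,
                                  ptrdiff_t pred_stride) {
  int r, c;
  for (r = 0; r < rows; r++) {
    for (c = 0; c < cols; c++)
      diff[c] = (int16_t)src[c] - (int16_t)pred[c];
    diff += diff_stride;
    src += src_stride;
    pred += pred_stride;
  }
}
```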
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/transpose_vsx.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/transpose_vsx.h
index f02556d..4883b73 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/transpose_vsx.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/transpose_vsx.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_PPC_TRANSPOSE_VSX_H_
-#define VPX_DSP_PPC_TRANSPOSE_VSX_H_
+#ifndef VPX_VPX_DSP_PPC_TRANSPOSE_VSX_H_
+#define VPX_VPX_DSP_PPC_TRANSPOSE_VSX_H_
 
 #include "./vpx_config.h"
 #include "vpx_dsp/ppc/types_vsx.h"
@@ -98,4 +98,36 @@
   // v[7]: 07 17 27 37 47 57 67 77
 }
 
-#endif  // VPX_DSP_PPC_TRANSPOSE_VSX_H_
+static INLINE void transpose_8x8(const int16x8_t *a, int16x8_t *b) {
+  // Stage 1
+  const int16x8_t s1_0 = vec_mergeh(a[0], a[4]);
+  const int16x8_t s1_1 = vec_mergel(a[0], a[4]);
+  const int16x8_t s1_2 = vec_mergeh(a[1], a[5]);
+  const int16x8_t s1_3 = vec_mergel(a[1], a[5]);
+  const int16x8_t s1_4 = vec_mergeh(a[2], a[6]);
+  const int16x8_t s1_5 = vec_mergel(a[2], a[6]);
+  const int16x8_t s1_6 = vec_mergeh(a[3], a[7]);
+  const int16x8_t s1_7 = vec_mergel(a[3], a[7]);
+
+  // Stage 2
+  const int16x8_t s2_0 = vec_mergeh(s1_0, s1_4);
+  const int16x8_t s2_1 = vec_mergel(s1_0, s1_4);
+  const int16x8_t s2_2 = vec_mergeh(s1_1, s1_5);
+  const int16x8_t s2_3 = vec_mergel(s1_1, s1_5);
+  const int16x8_t s2_4 = vec_mergeh(s1_2, s1_6);
+  const int16x8_t s2_5 = vec_mergel(s1_2, s1_6);
+  const int16x8_t s2_6 = vec_mergeh(s1_3, s1_7);
+  const int16x8_t s2_7 = vec_mergel(s1_3, s1_7);
+
+  // Stage 3
+  b[0] = vec_mergeh(s2_0, s2_4);
+  b[1] = vec_mergel(s2_0, s2_4);
+  b[2] = vec_mergeh(s2_1, s2_5);
+  b[3] = vec_mergel(s2_1, s2_5);
+  b[4] = vec_mergeh(s2_2, s2_6);
+  b[5] = vec_mergel(s2_2, s2_6);
+  b[6] = vec_mergeh(s2_3, s2_7);
+  b[7] = vec_mergel(s2_3, s2_7);
+}
+
+#endif  // VPX_VPX_DSP_PPC_TRANSPOSE_VSX_H_
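The three `vec_mergeh`/`vec_mergel` stages in the new `transpose_8x8` form a butterfly network that realizes a plain matrix transpose of the 8x8 block of 16-bit lanes. A scalar equivalent (a sketch for illustration, not libvpx code):

```c
#include <stdint.h>

/* Scalar equivalent of transpose_8x8: b[j][i] = a[i][j], where each
 * int16x8_t row in the vector version holds one 8-lane row of the
 * matrix. The merge stages interleave at stride 4, 2, then 1. */
static void scalar_transpose_8x8(const int16_t a[8][8], int16_t b[8][8]) {
  int i, j;
  for (i = 0; i < 8; i++)
    for (j = 0; j < 8; j++) b[j][i] = a[i][j];
}
```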
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/txfm_common_vsx.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/txfm_common_vsx.h
new file mode 100644
index 0000000..2907a1f
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/txfm_common_vsx.h
@@ -0,0 +1,90 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VPX_DSP_PPC_TXFM_COMMON_VSX_H_
+#define VPX_VPX_DSP_PPC_TXFM_COMMON_VSX_H_
+
+#include "vpx_dsp/ppc/types_vsx.h"
+
+static const int32x4_t vec_dct_const_rounding = { 8192, 8192, 8192, 8192 };
+
+static const uint32x4_t vec_dct_const_bits = { 14, 14, 14, 14 };
+
+static const uint16x8_t vec_dct_scale_log2 = { 2, 2, 2, 2, 2, 2, 2, 2 };
+
+static const int16x8_t cospi1_v = { 16364, 16364, 16364, 16364,
+                                    16364, 16364, 16364, 16364 };
+static const int16x8_t cospi2_v = { 16305, 16305, 16305, 16305,
+                                    16305, 16305, 16305, 16305 };
+static const int16x8_t cospi3_v = { 16207, 16207, 16207, 16207,
+                                    16207, 16207, 16207, 16207 };
+static const int16x8_t cospi4_v = { 16069, 16069, 16069, 16069,
+                                    16069, 16069, 16069, 16069 };
+static const int16x8_t cospi4m_v = { -16069, -16069, -16069, -16069,
+                                     -16069, -16069, -16069, -16069 };
+static const int16x8_t cospi5_v = { 15893, 15893, 15893, 15893,
+                                    15893, 15893, 15893, 15893 };
+static const int16x8_t cospi6_v = { 15679, 15679, 15679, 15679,
+                                    15679, 15679, 15679, 15679 };
+static const int16x8_t cospi7_v = { 15426, 15426, 15426, 15426,
+                                    15426, 15426, 15426, 15426 };
+static const int16x8_t cospi8_v = { 15137, 15137, 15137, 15137,
+                                    15137, 15137, 15137, 15137 };
+static const int16x8_t cospi8m_v = { -15137, -15137, -15137, -15137,
+                                     -15137, -15137, -15137, -15137 };
+static const int16x8_t cospi9_v = { 14811, 14811, 14811, 14811,
+                                    14811, 14811, 14811, 14811 };
+static const int16x8_t cospi10_v = { 14449, 14449, 14449, 14449,
+                                     14449, 14449, 14449, 14449 };
+static const int16x8_t cospi11_v = { 14053, 14053, 14053, 14053,
+                                     14053, 14053, 14053, 14053 };
+static const int16x8_t cospi12_v = { 13623, 13623, 13623, 13623,
+                                     13623, 13623, 13623, 13623 };
+static const int16x8_t cospi13_v = { 13160, 13160, 13160, 13160,
+                                     13160, 13160, 13160, 13160 };
+static const int16x8_t cospi14_v = { 12665, 12665, 12665, 12665,
+                                     12665, 12665, 12665, 12665 };
+static const int16x8_t cospi15_v = { 12140, 12140, 12140, 12140,
+                                     12140, 12140, 12140, 12140 };
+static const int16x8_t cospi16_v = { 11585, 11585, 11585, 11585,
+                                     11585, 11585, 11585, 11585 };
+static const int16x8_t cospi17_v = { 11003, 11003, 11003, 11003,
+                                     11003, 11003, 11003, 11003 };
+static const int16x8_t cospi18_v = { 10394, 10394, 10394, 10394,
+                                     10394, 10394, 10394, 10394 };
+static const int16x8_t cospi19_v = { 9760, 9760, 9760, 9760,
+                                     9760, 9760, 9760, 9760 };
+static const int16x8_t cospi20_v = { 9102, 9102, 9102, 9102,
+                                     9102, 9102, 9102, 9102 };
+static const int16x8_t cospi20m_v = { -9102, -9102, -9102, -9102,
+                                      -9102, -9102, -9102, -9102 };
+static const int16x8_t cospi21_v = { 8423, 8423, 8423, 8423,
+                                     8423, 8423, 8423, 8423 };
+static const int16x8_t cospi22_v = { 7723, 7723, 7723, 7723,
+                                     7723, 7723, 7723, 7723 };
+static const int16x8_t cospi23_v = { 7005, 7005, 7005, 7005,
+                                     7005, 7005, 7005, 7005 };
+static const int16x8_t cospi24_v = { 6270, 6270, 6270, 6270,
+                                     6270, 6270, 6270, 6270 };
+static const int16x8_t cospi25_v = { 5520, 5520, 5520, 5520,
+                                     5520, 5520, 5520, 5520 };
+static const int16x8_t cospi26_v = { 4756, 4756, 4756, 4756,
+                                     4756, 4756, 4756, 4756 };
+static const int16x8_t cospi27_v = { 3981, 3981, 3981, 3981,
+                                     3981, 3981, 3981, 3981 };
+static const int16x8_t cospi28_v = { 3196, 3196, 3196, 3196,
+                                     3196, 3196, 3196, 3196 };
+static const int16x8_t cospi29_v = { 2404, 2404, 2404, 2404,
+                                     2404, 2404, 2404, 2404 };
+static const int16x8_t cospi30_v = { 1606, 1606, 1606, 1606,
+                                     1606, 1606, 1606, 1606 };
+static const int16x8_t cospi31_v = { 804, 804, 804, 804, 804, 804, 804, 804 };
+
+#endif  // VPX_VPX_DSP_PPC_TXFM_COMMON_VSX_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/types_vsx.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/types_vsx.h
index f611d02..b891169 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/types_vsx.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/types_vsx.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_PPC_TYPES_VSX_H_
-#define VPX_DSP_PPC_TYPES_VSX_H_
+#ifndef VPX_VPX_DSP_PPC_TYPES_VSX_H_
+#define VPX_VPX_DSP_PPC_TYPES_VSX_H_
 
 #include <altivec.h>
 
@@ -19,8 +19,11 @@
 typedef vector unsigned short uint16x8_t;
 typedef vector signed int int32x4_t;
 typedef vector unsigned int uint32x4_t;
+typedef vector bool char bool8x16_t;
+typedef vector bool short bool16x8_t;
+typedef vector bool int bool32x4_t;
 
-#ifdef __clang__
+#if defined(__clang__) && __clang_major__ < 6
 static const uint8x16_t xxpermdi0_perm = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05,
                                            0x06, 0x07, 0x10, 0x11, 0x12, 0x13,
                                            0x14, 0x15, 0x16, 0x17 };
@@ -61,8 +64,45 @@
 #define unpack_to_s16_l(v) \
   (int16x8_t) vec_mergel((uint8x16_t)v, vec_splat_u8(0))
 #ifndef xxpermdi
-#define xxpermdi(a, b, c) vec_xxpermdi(b, a, ((c >> 1) | (c & 1) << 1) ^ 3)
+#define xxpermdi(a, b, c) vec_xxpermdi(b, a, (((c) >> 1) | ((c)&1) << 1) ^ 3)
 #endif
 #endif
 
-#endif  // VPX_DSP_PPC_TYPES_VSX_H_
+static INLINE uint8x16_t read4x2(const uint8_t *a, int stride) {
+  const uint32x4_t a0 = (uint32x4_t)vec_vsx_ld(0, a);
+  const uint32x4_t a1 = (uint32x4_t)vec_vsx_ld(0, a + stride);
+
+  return (uint8x16_t)vec_mergeh(a0, a1);
+}
+
+#ifndef __POWER9_VECTOR__
+#define vec_absd(a, b) vec_sub(vec_max(a, b), vec_min(a, b))
+#endif
+
+static const uint8x16_t vec_zeros_u8 = { 0, 0, 0, 0, 0, 0, 0, 0,
+                                         0, 0, 0, 0, 0, 0, 0, 0 };
+static const int16x8_t vec_zeros_s16 = { 0, 0, 0, 0, 0, 0, 0, 0 };
+static const int16x8_t vec_ones_s16 = { 1, 1, 1, 1, 1, 1, 1, 1 };
+static const int16x8_t vec_twos_s16 = { 2, 2, 2, 2, 2, 2, 2, 2 };
+static const uint16x8_t vec_ones_u16 = { 1, 1, 1, 1, 1, 1, 1, 1 };
+static const uint32x4_t vec_ones_u32 = { 1, 1, 1, 1 };
+static const int32x4_t vec_zeros_s32 = { 0, 0, 0, 0 };
+static const uint32x4_t vec_zeros_u32 = { 0, 0, 0, 0 };
+static const uint16x8_t vec_shift_sign_s16 = { 15, 15, 15, 15, 15, 15, 15, 15 };
+static const uint32x4_t vec_shift_sign_s32 = { 31, 31, 31, 31 };
+static const uint8x16_t vec_perm64 = { 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D,
+                                       0x0E, 0x0F, 0x00, 0x01, 0x02, 0x03,
+                                       0x04, 0x05, 0x06, 0x07 };
+static const uint8x16_t vec_perm32 = { 0x04, 0x05, 0x06, 0x07, 0x08, 0x09,
+                                       0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F,
+                                       0x00, 0x01, 0x02, 0x03 };
+static const uint8x16_t vec_perm16 = { 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+                                       0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D,
+                                       0x0E, 0x0F, 0x00, 0x01 };
+
+static const uint8x16_t vec_perm_odd_even_pack = { 0x00, 0x01, 0x10, 0x11,
+                                                   0x04, 0x05, 0x14, 0x15,
+                                                   0x08, 0x09, 0x18, 0x19,
+                                                   0x0C, 0x0D, 0x1C, 0x1D };
+
+#endif  // VPX_VPX_DSP_PPC_TYPES_VSX_H_
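The pre-POWER9 `vec_absd` fallback added above relies on the identity `|a - b| == max(a, b) - min(a, b)`, which is exact for unsigned lanes and avoids the wraparound that a naive `a - b` would suffer on `uint8_t`. Scalar sketch of the same trick (illustrative only):

```c
#include <stdint.h>

/* Per-lane model of the vec_absd fallback: absolute difference of two
 * unsigned bytes without widening, via max - min. Unlike (uint8_t)(a - b),
 * this never wraps modulo 256. */
static uint8_t absd_u8(uint8_t a, uint8_t b) {
  const uint8_t mx = a > b ? a : b;
  const uint8_t mn = a > b ? b : a;
  return (uint8_t)(mx - mn);
}
```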
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/variance_vsx.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/variance_vsx.c
index 1efe2f0..be9614a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/variance_vsx.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/variance_vsx.c
@@ -10,24 +10,20 @@
 
 #include <assert.h>
 
+#include "./vpx_config.h"
 #include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/ppc/types_vsx.h"
 
-static inline uint8x16_t read4x2(const uint8_t *a, int stride) {
-  const uint32x4_t a0 = (uint32x4_t)vec_vsx_ld(0, a);
-  const uint32x4_t a1 = (uint32x4_t)vec_vsx_ld(0, a + stride);
-
-  return (uint8x16_t)vec_mergeh(a0, a1);
-}
-
-uint32_t vpx_get4x4sse_cs_vsx(const uint8_t *a, int a_stride, const uint8_t *b,
-                              int b_stride) {
+uint32_t vpx_get4x4sse_cs_vsx(const uint8_t *src_ptr, int src_stride,
+                              const uint8_t *ref_ptr, int ref_stride) {
   int distortion;
 
-  const int16x8_t a0 = unpack_to_s16_h(read4x2(a, a_stride));
-  const int16x8_t a1 = unpack_to_s16_h(read4x2(a + a_stride * 2, a_stride));
-  const int16x8_t b0 = unpack_to_s16_h(read4x2(b, b_stride));
-  const int16x8_t b1 = unpack_to_s16_h(read4x2(b + b_stride * 2, b_stride));
+  const int16x8_t a0 = unpack_to_s16_h(read4x2(src_ptr, src_stride));
+  const int16x8_t a1 =
+      unpack_to_s16_h(read4x2(src_ptr + src_stride * 2, src_stride));
+  const int16x8_t b0 = unpack_to_s16_h(read4x2(ref_ptr, ref_stride));
+  const int16x8_t b1 =
+      unpack_to_s16_h(read4x2(ref_ptr + ref_stride * 2, ref_stride));
   const int16x8_t d0 = vec_sub(a0, b0);
   const int16x8_t d1 = vec_sub(a1, b1);
   const int32x4_t ds = vec_msum(d1, d1, vec_msum(d0, d0, vec_splat_s32(0)));
@@ -39,12 +35,12 @@
 }
 
 // TODO(lu_zero): Unroll
-uint32_t vpx_get_mb_ss_vsx(const int16_t *a) {
+uint32_t vpx_get_mb_ss_vsx(const int16_t *src_ptr) {
   unsigned int i, sum = 0;
   int32x4_t s = vec_splat_s32(0);
 
   for (i = 0; i < 256; i += 8) {
-    const int16x8_t v = vec_vsx_ld(0, a + i);
+    const int16x8_t v = vec_vsx_ld(0, src_ptr + i);
     s = vec_msum(v, v, s);
   }
 
@@ -101,3 +97,175 @@
     }
   }
 }
+
+static INLINE void variance_inner_32(const uint8_t *src_ptr,
+                                     const uint8_t *ref_ptr,
+                                     int32x4_t *sum_squared, int32x4_t *sum) {
+  int32x4_t s = *sum;
+  int32x4_t ss = *sum_squared;
+
+  const uint8x16_t va0 = vec_vsx_ld(0, src_ptr);
+  const uint8x16_t vb0 = vec_vsx_ld(0, ref_ptr);
+  const uint8x16_t va1 = vec_vsx_ld(16, src_ptr);
+  const uint8x16_t vb1 = vec_vsx_ld(16, ref_ptr);
+
+  const int16x8_t a0 = unpack_to_s16_h(va0);
+  const int16x8_t b0 = unpack_to_s16_h(vb0);
+  const int16x8_t a1 = unpack_to_s16_l(va0);
+  const int16x8_t b1 = unpack_to_s16_l(vb0);
+  const int16x8_t a2 = unpack_to_s16_h(va1);
+  const int16x8_t b2 = unpack_to_s16_h(vb1);
+  const int16x8_t a3 = unpack_to_s16_l(va1);
+  const int16x8_t b3 = unpack_to_s16_l(vb1);
+  const int16x8_t d0 = vec_sub(a0, b0);
+  const int16x8_t d1 = vec_sub(a1, b1);
+  const int16x8_t d2 = vec_sub(a2, b2);
+  const int16x8_t d3 = vec_sub(a3, b3);
+
+  s = vec_sum4s(d0, s);
+  ss = vec_msum(d0, d0, ss);
+  s = vec_sum4s(d1, s);
+  ss = vec_msum(d1, d1, ss);
+  s = vec_sum4s(d2, s);
+  ss = vec_msum(d2, d2, ss);
+  s = vec_sum4s(d3, s);
+  ss = vec_msum(d3, d3, ss);
+  *sum = s;
+  *sum_squared = ss;
+}
+
+static INLINE void variance(const uint8_t *src_ptr, int src_stride,
+                            const uint8_t *ref_ptr, int ref_stride, int w,
+                            int h, uint32_t *sse, int *sum) {
+  int i;
+
+  int32x4_t s = vec_splat_s32(0);
+  int32x4_t ss = vec_splat_s32(0);
+
+  switch (w) {
+    case 4:
+      for (i = 0; i < h / 2; ++i) {
+        const int16x8_t a0 = unpack_to_s16_h(read4x2(src_ptr, src_stride));
+        const int16x8_t b0 = unpack_to_s16_h(read4x2(ref_ptr, ref_stride));
+        const int16x8_t d = vec_sub(a0, b0);
+        s = vec_sum4s(d, s);
+        ss = vec_msum(d, d, ss);
+        src_ptr += src_stride * 2;
+        ref_ptr += ref_stride * 2;
+      }
+      break;
+    case 8:
+      for (i = 0; i < h; ++i) {
+        const int16x8_t a0 = unpack_to_s16_h(vec_vsx_ld(0, src_ptr));
+        const int16x8_t b0 = unpack_to_s16_h(vec_vsx_ld(0, ref_ptr));
+        const int16x8_t d = vec_sub(a0, b0);
+
+        s = vec_sum4s(d, s);
+        ss = vec_msum(d, d, ss);
+        src_ptr += src_stride;
+        ref_ptr += ref_stride;
+      }
+      break;
+    case 16:
+      for (i = 0; i < h; ++i) {
+        const uint8x16_t va = vec_vsx_ld(0, src_ptr);
+        const uint8x16_t vb = vec_vsx_ld(0, ref_ptr);
+        const int16x8_t a0 = unpack_to_s16_h(va);
+        const int16x8_t b0 = unpack_to_s16_h(vb);
+        const int16x8_t a1 = unpack_to_s16_l(va);
+        const int16x8_t b1 = unpack_to_s16_l(vb);
+        const int16x8_t d0 = vec_sub(a0, b0);
+        const int16x8_t d1 = vec_sub(a1, b1);
+
+        s = vec_sum4s(d0, s);
+        ss = vec_msum(d0, d0, ss);
+        s = vec_sum4s(d1, s);
+        ss = vec_msum(d1, d1, ss);
+
+        src_ptr += src_stride;
+        ref_ptr += ref_stride;
+      }
+      break;
+    case 32:
+      for (i = 0; i < h; ++i) {
+        variance_inner_32(src_ptr, ref_ptr, &ss, &s);
+        src_ptr += src_stride;
+        ref_ptr += ref_stride;
+      }
+      break;
+    case 64:
+      for (i = 0; i < h; ++i) {
+        variance_inner_32(src_ptr, ref_ptr, &ss, &s);
+        variance_inner_32(src_ptr + 32, ref_ptr + 32, &ss, &s);
+
+        src_ptr += src_stride;
+        ref_ptr += ref_stride;
+      }
+      break;
+  }
+
+  s = vec_splat(vec_sums(s, vec_splat_s32(0)), 3);
+
+  vec_ste(s, 0, sum);
+
+  ss = vec_splat(vec_sums(ss, vec_splat_s32(0)), 3);
+
+  vec_ste((uint32x4_t)ss, 0, sse);
+}
+
+/* Identical to the variance call except it takes an additional parameter, sum,
+ * and returns that value using pass-by-reference instead of returning
+ * sse - sum^2 / w*h
+ */
+#define GET_VAR(W, H)                                                    \
+  void vpx_get##W##x##H##var_vsx(const uint8_t *src_ptr, int src_stride, \
+                                 const uint8_t *ref_ptr, int ref_stride, \
+                                 uint32_t *sse, int *sum) {              \
+    variance(src_ptr, src_stride, ref_ptr, ref_stride, W, H, sse, sum);  \
+  }
+
+/* Identical to the variance call except it does not calculate the
+ * sse - sum^2 / w*h and returns sse in addition to modifying the passed-in
+ * variable.
+ */
+#define MSE(W, H)                                                         \
+  uint32_t vpx_mse##W##x##H##_vsx(const uint8_t *src_ptr, int src_stride, \
+                                  const uint8_t *ref_ptr, int ref_stride, \
+                                  uint32_t *sse) {                        \
+    int sum;                                                              \
+    variance(src_ptr, src_stride, ref_ptr, ref_stride, W, H, sse, &sum);  \
+    return *sse;                                                          \
+  }
+
+#define VAR(W, H)                                                              \
+  uint32_t vpx_variance##W##x##H##_vsx(const uint8_t *src_ptr, int src_stride, \
+                                       const uint8_t *ref_ptr, int ref_stride, \
+                                       uint32_t *sse) {                        \
+    int sum;                                                                   \
+    variance(src_ptr, src_stride, ref_ptr, ref_stride, W, H, sse, &sum);       \
+    return *sse - (uint32_t)(((int64_t)sum * sum) / ((W) * (H)));              \
+  }
+
+#define VARIANCES(W, H) VAR(W, H)
+
+VARIANCES(64, 64)
+VARIANCES(64, 32)
+VARIANCES(32, 64)
+VARIANCES(32, 32)
+VARIANCES(32, 16)
+VARIANCES(16, 32)
+VARIANCES(16, 16)
+VARIANCES(16, 8)
+VARIANCES(8, 16)
+VARIANCES(8, 8)
+VARIANCES(8, 4)
+VARIANCES(4, 8)
+VARIANCES(4, 4)
+
+GET_VAR(16, 16)
+GET_VAR(8, 8)
+
+MSE(16, 16)
+MSE(16, 8)
+MSE(8, 16)
+MSE(8, 8)
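For reference, the VSX variance kernels instantiated above all reduce to var = sse - sum^2 / (w*h). A minimal scalar sketch of that computation (hypothetical names, not part of the libvpx API) is:

```c
#include <assert.h>
#include <stdint.h>

/* Scalar reference for the variance kernels: accumulate the sum and the sum
 * of squared differences over a w x h block, then subtract sum^2 / (w*h).
 * Illustrative only; ref_variance is a hypothetical name. */
static uint32_t ref_variance(const uint8_t *src, int src_stride,
                             const uint8_t *ref, int ref_stride,
                             int w, int h, uint32_t *sse) {
  int64_t sum = 0;
  uint64_t sq = 0;
  for (int y = 0; y < h; ++y) {
    for (int x = 0; x < w; ++x) {
      const int d = src[x] - ref[x];
      sum += d;
      sq += (uint64_t)(d * d);
    }
    src += src_stride;
    ref += ref_stride;
  }
  *sse = (uint32_t)sq;
  return (uint32_t)(sq - (uint64_t)((sum * sum) / (w * h)));
}
```

A block that differs from its reference by a constant offset has nonzero sse but zero variance, which the VAR macro's final subtraction captures.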
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/vpx_convolve_vsx.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/vpx_convolve_vsx.c
index 5c3ba45..2dc6605 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/vpx_convolve_vsx.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ppc/vpx_convolve_vsx.c
@@ -9,13 +9,16 @@
  */
 #include <assert.h>
 #include <string.h>
+
 #include "./vpx_dsp_rtcd.h"
-#include "vpx_dsp/vpx_filter.h"
+#include "vpx/vpx_integer.h"
 #include "vpx_dsp/ppc/types_vsx.h"
+#include "vpx_dsp/vpx_filter.h"
 
 // TODO(lu_zero): unroll
-static inline void copy_w16(const uint8_t *src, ptrdiff_t src_stride,
-                            uint8_t *dst, ptrdiff_t dst_stride, int32_t h) {
+static VPX_FORCE_INLINE void copy_w16(const uint8_t *src, ptrdiff_t src_stride,
+                                      uint8_t *dst, ptrdiff_t dst_stride,
+                                      int32_t h) {
   int i;
 
   for (i = h; i--;) {
@@ -25,8 +28,9 @@
   }
 }
 
-static inline void copy_w32(const uint8_t *src, ptrdiff_t src_stride,
-                            uint8_t *dst, ptrdiff_t dst_stride, int32_t h) {
+static VPX_FORCE_INLINE void copy_w32(const uint8_t *src, ptrdiff_t src_stride,
+                                      uint8_t *dst, ptrdiff_t dst_stride,
+                                      int32_t h) {
   int i;
 
   for (i = h; i--;) {
@@ -37,8 +41,9 @@
   }
 }
 
-static inline void copy_w64(const uint8_t *src, ptrdiff_t src_stride,
-                            uint8_t *dst, ptrdiff_t dst_stride, int32_t h) {
+static VPX_FORCE_INLINE void copy_w64(const uint8_t *src, ptrdiff_t src_stride,
+                                      uint8_t *dst, ptrdiff_t dst_stride,
+                                      int32_t h) {
   int i;
 
   for (i = h; i--;) {
@@ -86,8 +91,9 @@
   }
 }
 
-static inline void avg_w16(const uint8_t *src, ptrdiff_t src_stride,
-                           uint8_t *dst, ptrdiff_t dst_stride, int32_t h) {
+static VPX_FORCE_INLINE void avg_w16(const uint8_t *src, ptrdiff_t src_stride,
+                                     uint8_t *dst, ptrdiff_t dst_stride,
+                                     int32_t h) {
   int i;
 
   for (i = h; i--;) {
@@ -98,8 +104,9 @@
   }
 }
 
-static inline void avg_w32(const uint8_t *src, ptrdiff_t src_stride,
-                           uint8_t *dst, ptrdiff_t dst_stride, int32_t h) {
+static VPX_FORCE_INLINE void avg_w32(const uint8_t *src, ptrdiff_t src_stride,
+                                     uint8_t *dst, ptrdiff_t dst_stride,
+                                     int32_t h) {
   int i;
 
   for (i = h; i--;) {
@@ -112,8 +119,9 @@
   }
 }
 
-static inline void avg_w64(const uint8_t *src, ptrdiff_t src_stride,
-                           uint8_t *dst, ptrdiff_t dst_stride, int32_t h) {
+static VPX_FORCE_INLINE void avg_w64(const uint8_t *src, ptrdiff_t src_stride,
+                                     uint8_t *dst, ptrdiff_t dst_stride,
+                                     int32_t h) {
   int i;
 
   for (i = h; i--;) {
@@ -155,8 +163,8 @@
   }
 }
 
-static inline void convolve_line(uint8_t *dst, const int16x8_t s,
-                                 const int16x8_t f) {
+static VPX_FORCE_INLINE void convolve_line(uint8_t *dst, const int16x8_t s,
+                                           const int16x8_t f) {
   const int32x4_t sum = vec_msum(s, f, vec_splat_s32(0));
   const int32x4_t bias =
       vec_sl(vec_splat_s32(1), vec_splat_u32(FILTER_BITS - 1));
@@ -166,8 +174,9 @@
   vec_ste(v, 0, dst);
 }
 
-static inline void convolve_line_h(uint8_t *dst, const uint8_t *const src_x,
-                                   const int16_t *const x_filter) {
+static VPX_FORCE_INLINE void convolve_line_h(uint8_t *dst,
+                                             const uint8_t *const src_x,
+                                             const int16_t *const x_filter) {
   const int16x8_t s = unpack_to_s16_h(vec_vsx_ld(0, src_x));
   const int16x8_t f = vec_vsx_ld(0, x_filter);
 
@@ -175,10 +184,12 @@
 }
 
 // TODO(lu_zero): Implement 8x8 and bigger block special cases
-static inline void convolve_horiz(const uint8_t *src, ptrdiff_t src_stride,
-                                  uint8_t *dst, ptrdiff_t dst_stride,
-                                  const InterpKernel *x_filters, int x0_q4,
-                                  int x_step_q4, int w, int h) {
+static VPX_FORCE_INLINE void convolve_horiz(const uint8_t *src,
+                                            ptrdiff_t src_stride, uint8_t *dst,
+                                            ptrdiff_t dst_stride,
+                                            const InterpKernel *x_filters,
+                                            int x0_q4, int x_step_q4, int w,
+                                            int h) {
   int x, y;
   src -= SUBPEL_TAPS / 2 - 1;
 
@@ -194,10 +205,10 @@
   }
 }
 
-static inline void convolve_avg_horiz(const uint8_t *src, ptrdiff_t src_stride,
-                                      uint8_t *dst, ptrdiff_t dst_stride,
-                                      const InterpKernel *x_filters, int x0_q4,
-                                      int x_step_q4, int w, int h) {
+static VPX_FORCE_INLINE void convolve_avg_horiz(
+    const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst,
+    ptrdiff_t dst_stride, const InterpKernel *x_filters, int x0_q4,
+    int x_step_q4, int w, int h) {
   int x, y;
   src -= SUBPEL_TAPS / 2 - 1;
 
@@ -230,9 +241,10 @@
   return (uint8x16_t)vec_mergeh(abcd, efgh);
 }
 
-static inline void convolve_line_v(uint8_t *dst, const uint8_t *const src_y,
-                                   ptrdiff_t src_stride,
-                                   const int16_t *const y_filter) {
+static VPX_FORCE_INLINE void convolve_line_v(uint8_t *dst,
+                                             const uint8_t *const src_y,
+                                             ptrdiff_t src_stride,
+                                             const int16_t *const y_filter) {
   uint8x16_t s0 = vec_vsx_ld(0, src_y + 0 * src_stride);
   uint8x16_t s1 = vec_vsx_ld(0, src_y + 1 * src_stride);
   uint8x16_t s2 = vec_vsx_ld(0, src_y + 2 * src_stride);
@@ -250,10 +262,12 @@
   convolve_line(dst, unpack_to_s16_h(s), f);
 }
 
-static inline void convolve_vert(const uint8_t *src, ptrdiff_t src_stride,
-                                 uint8_t *dst, ptrdiff_t dst_stride,
-                                 const InterpKernel *y_filters, int y0_q4,
-                                 int y_step_q4, int w, int h) {
+static VPX_FORCE_INLINE void convolve_vert(const uint8_t *src,
+                                           ptrdiff_t src_stride, uint8_t *dst,
+                                           ptrdiff_t dst_stride,
+                                           const InterpKernel *y_filters,
+                                           int y0_q4, int y_step_q4, int w,
+                                           int h) {
   int x, y;
   src -= src_stride * (SUBPEL_TAPS / 2 - 1);
 
@@ -270,10 +284,10 @@
   }
 }
 
-static inline void convolve_avg_vert(const uint8_t *src, ptrdiff_t src_stride,
-                                     uint8_t *dst, ptrdiff_t dst_stride,
-                                     const InterpKernel *y_filters, int y0_q4,
-                                     int y_step_q4, int w, int h) {
+static VPX_FORCE_INLINE void convolve_avg_vert(
+    const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst,
+    ptrdiff_t dst_stride, const InterpKernel *y_filters, int y0_q4,
+    int y_step_q4, int w, int h) {
   int x, y;
   src -= src_stride * (SUBPEL_TAPS / 2 - 1);
 
@@ -291,11 +305,11 @@
   }
 }
 
-static inline void convolve(const uint8_t *src, ptrdiff_t src_stride,
-                            uint8_t *dst, ptrdiff_t dst_stride,
-                            const InterpKernel *const filter, int x0_q4,
-                            int x_step_q4, int y0_q4, int y_step_q4, int w,
-                            int h) {
+static VPX_FORCE_INLINE void convolve(const uint8_t *src, ptrdiff_t src_stride,
+                                      uint8_t *dst, ptrdiff_t dst_stride,
+                                      const InterpKernel *const filter,
+                                      int x0_q4, int x_step_q4, int y0_q4,
+                                      int y_step_q4, int w, int h) {
   // Note: Fixed size intermediate buffer, temp, places limits on parameters.
   // 2d filtering proceeds in 2 steps:
   //   (1) Interpolate horizontally into an intermediate buffer, temp.
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/prob.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/prob.h
index f1cc0ea..7a71c00 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/prob.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/prob.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_PROB_H_
-#define VPX_DSP_PROB_H_
+#ifndef VPX_VPX_DSP_PROB_H_
+#define VPX_VPX_DSP_PROB_H_
 
 #include <assert.h>
 
@@ -32,7 +32,7 @@
 
 #define TREE_SIZE(leaf_count) (2 * (leaf_count)-2)
 
-#define vpx_complement(x) (255 - x)
+#define vpx_complement(x) (255 - (x))
 
 #define MODE_MV_COUNT_SAT 20
 
@@ -103,4 +103,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_PROB_H_
+#endif  // VPX_VPX_DSP_PROB_H_
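The extra parentheses added to `vpx_complement` above guard against operator-precedence surprises when the macro argument is an expression rather than a single token. A minimal illustration (hypothetical macro names, shown only to motivate the change):

```c
#include <assert.h>

/* Without parentheses, the macro body mis-expands for compound arguments:
 * COMPLEMENT_BAD(10 - 3) becomes (255 - 10 - 3), not (255 - 7). */
#define COMPLEMENT_BAD(x) (255 - x)
#define COMPLEMENT_GOOD(x) (255 - (x))
```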
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/psnr.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/psnr.h
index 43bcb1f..9ebb64d 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/psnr.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/psnr.h
@@ -8,10 +8,11 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_PSNR_H_
-#define VPX_DSP_PSNR_H_
+#ifndef VPX_VPX_DSP_PSNR_H_
+#define VPX_VPX_DSP_PSNR_H_
 
 #include "vpx_scale/yv12config.h"
+#include "vpx/vpx_encoder.h"
 
 #define MAX_PSNR 100.0
 
@@ -19,11 +20,7 @@
 extern "C" {
 #endif
 
-typedef struct {
-  double psnr[4];       // total/y/u/v
-  uint64_t sse[4];      // total/y/u/v
-  uint32_t samples[4];  // total/y/u/v
-} PSNR_STATS;
+typedef struct vpx_psnr_pkt PSNR_STATS;
 
 // TODO(dkovalev) change vpx_sse_to_psnr signature: double -> int64_t
 
@@ -54,4 +51,4 @@
 #ifdef __cplusplus
 }  // extern "C"
 #endif
-#endif  // VPX_DSP_PSNR_H_
+#endif  // VPX_VPX_DSP_PSNR_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/psnrhvs.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/psnrhvs.c
index b391015..d7ec1a4 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/psnrhvs.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/psnrhvs.c
@@ -126,8 +126,10 @@
   const uint8_t *_dst8 = dst;
   const uint16_t *_src16 = CONVERT_TO_SHORTPTR(src);
   const uint16_t *_dst16 = CONVERT_TO_SHORTPTR(dst);
-  int16_t dct_s[8 * 8], dct_d[8 * 8];
-  tran_low_t dct_s_coef[8 * 8], dct_d_coef[8 * 8];
+  DECLARE_ALIGNED(16, int16_t, dct_s[8 * 8]);
+  DECLARE_ALIGNED(16, int16_t, dct_d[8 * 8]);
+  DECLARE_ALIGNED(16, tran_low_t, dct_s_coef[8 * 8]);
+  DECLARE_ALIGNED(16, tran_low_t, dct_d_coef[8 * 8]);
   double mask[8][8];
   int pixels;
   int x;
@@ -142,7 +144,7 @@
    been normalized and then squared." Their CSF matrix (from PSNR-HVS)
    was also constructed from the JPEG matrices. I can not find any obvious
    scheme of normalizing to produce their table, but if I multiply their
-   CSF by 0.38857 and square the result I get their masking table.
+   CSF by 0.3885746225901003 and square the result I get their masking table.
    I have no idea where this constant comes from, but deviating from it
    too greatly hurts MOS agreement.
 
@@ -150,11 +152,15 @@
    Jaakko Astola, Vladimir Lukin, "On between-coefficient contrast masking
    of DCT basis functions", CD-ROM Proceedings of the Third
    International Workshop on Video Processing and Quality Metrics for Consumer
-   Electronics VPQM-07, Scottsdale, Arizona, USA, 25-26 January, 2007, 4 p.*/
+   Electronics VPQM-07, Scottsdale, Arizona, USA, 25-26 January, 2007, 4 p.
+
+   Suggested in aomedia issue #2363:
+   0.3885746225901003 is a reciprocal of the maximum coefficient (2.573509)
+   of the old JPEG based matrix from the paper. Since you are not using that,
+   divide by actual maximum coefficient. */
   for (x = 0; x < 8; x++)
     for (y = 0; y < 8; y++)
-      mask[x][y] =
-          (_csf[x][y] * 0.3885746225901003) * (_csf[x][y] * 0.3885746225901003);
+      mask[x][y] = (_csf[x][y] / _csf[1][0]) * (_csf[x][y] / _csf[1][0]);
   for (y = 0; y < _h - 7; y += _step) {
     for (x = 0; x < _w - 7; x += _step) {
       int i;
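The psnrhvs.c change above replaces the magic constant 0.3885746225901003 with normalization by the CSF matrix's own maximum entry before squaring. A sketch of the new per-entry masking computation (hypothetical names; values illustrative):

```c
#include <assert.h>
#include <math.h>

/* Each mask entry is the CSF value normalized by the matrix maximum, squared.
 * The old constant was (approximately) the reciprocal of the previous JPEG
 * based matrix's maximum coefficient, 2.573509. */
static double mask_entry(double csf, double csf_max) {
  const double n = csf / csf_max;
  return n * n;
}
```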
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/quantize.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/quantize.c
index e37ca92..61818f6 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/quantize.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/quantize.c
@@ -12,12 +12,13 @@
 
 #include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/quantize.h"
+#include "vpx_dsp/vpx_dsp_common.h"
 #include "vpx_mem/vpx_mem.h"
 
 void vpx_quantize_dc(const tran_low_t *coeff_ptr, int n_coeffs, int skip_block,
                      const int16_t *round_ptr, const int16_t quant,
                      tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr,
-                     const int16_t dequant_ptr, uint16_t *eob_ptr) {
+                     const int16_t dequant, uint16_t *eob_ptr) {
   const int rc = 0;
   const int coeff = coeff_ptr[rc];
   const int coeff_sign = (coeff >> 31);
@@ -31,7 +32,7 @@
     tmp = clamp(abs_coeff + round_ptr[rc != 0], INT16_MIN, INT16_MAX);
     tmp = (tmp * quant) >> 16;
     qcoeff_ptr[rc] = (tmp ^ coeff_sign) - coeff_sign;
-    dqcoeff_ptr[rc] = qcoeff_ptr[rc] * dequant_ptr;
+    dqcoeff_ptr[rc] = qcoeff_ptr[rc] * dequant;
     if (tmp) eob = 0;
   }
   *eob_ptr = eob + 1;
@@ -41,7 +42,7 @@
 void vpx_highbd_quantize_dc(const tran_low_t *coeff_ptr, int n_coeffs,
                             int skip_block, const int16_t *round_ptr,
                             const int16_t quant, tran_low_t *qcoeff_ptr,
-                            tran_low_t *dqcoeff_ptr, const int16_t dequant_ptr,
+                            tran_low_t *dqcoeff_ptr, const int16_t dequant,
                             uint16_t *eob_ptr) {
   int eob = -1;
 
@@ -55,7 +56,7 @@
     const int64_t tmp = abs_coeff + round_ptr[0];
     const int abs_qcoeff = (int)((tmp * quant) >> 16);
     qcoeff_ptr[0] = (tran_low_t)((abs_qcoeff ^ coeff_sign) - coeff_sign);
-    dqcoeff_ptr[0] = qcoeff_ptr[0] * dequant_ptr;
+    dqcoeff_ptr[0] = qcoeff_ptr[0] * dequant;
     if (abs_qcoeff) eob = 0;
   }
   *eob_ptr = eob + 1;
@@ -65,7 +66,7 @@
 void vpx_quantize_dc_32x32(const tran_low_t *coeff_ptr, int skip_block,
                            const int16_t *round_ptr, const int16_t quant,
                            tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr,
-                           const int16_t dequant_ptr, uint16_t *eob_ptr) {
+                           const int16_t dequant, uint16_t *eob_ptr) {
   const int n_coeffs = 1024;
   const int rc = 0;
   const int coeff = coeff_ptr[rc];
@@ -81,7 +82,7 @@
                 INT16_MIN, INT16_MAX);
     tmp = (tmp * quant) >> 15;
     qcoeff_ptr[rc] = (tmp ^ coeff_sign) - coeff_sign;
-    dqcoeff_ptr[rc] = qcoeff_ptr[rc] * dequant_ptr / 2;
+    dqcoeff_ptr[rc] = qcoeff_ptr[rc] * dequant / 2;
     if (tmp) eob = 0;
   }
   *eob_ptr = eob + 1;
@@ -92,8 +93,7 @@
                                   const int16_t *round_ptr, const int16_t quant,
                                   tran_low_t *qcoeff_ptr,
                                   tran_low_t *dqcoeff_ptr,
-                                  const int16_t dequant_ptr,
-                                  uint16_t *eob_ptr) {
+                                  const int16_t dequant, uint16_t *eob_ptr) {
   const int n_coeffs = 1024;
   int eob = -1;
 
@@ -107,7 +107,7 @@
     const int64_t tmp = abs_coeff + ROUND_POWER_OF_TWO(round_ptr[0], 1);
     const int abs_qcoeff = (int)((tmp * quant) >> 15);
     qcoeff_ptr[0] = (tran_low_t)((abs_qcoeff ^ coeff_sign) - coeff_sign);
-    dqcoeff_ptr[0] = qcoeff_ptr[0] * dequant_ptr / 2;
+    dqcoeff_ptr[0] = qcoeff_ptr[0] * dequant / 2;
     if (abs_qcoeff) eob = 0;
   }
   *eob_ptr = eob + 1;
@@ -156,7 +156,7 @@
              quant_shift_ptr[rc != 0]) >>
             16;  // quantization
       qcoeff_ptr[rc] = (tmp ^ coeff_sign) - coeff_sign;
-      dqcoeff_ptr[rc] = qcoeff_ptr[rc] * dequant_ptr[rc != 0];
+      dqcoeff_ptr[rc] = (tran_low_t)(qcoeff_ptr[rc] * dequant_ptr[rc != 0]);
 
       if (tmp) eob = i;
     }
@@ -260,7 +260,15 @@
           15;
 
     qcoeff_ptr[rc] = (tmp ^ coeff_sign) - coeff_sign;
+#if (VPX_ARCH_X86 || VPX_ARCH_X86_64) && !CONFIG_VP9_HIGHBITDEPTH
+    // When tran_low_t is only 16 bits dqcoeff can outrange it. Rather than
+    // truncating with a cast, saturate the value. This is easier to implement
+    // on x86 and preserves the sign of the value.
+    dqcoeff_ptr[rc] =
+        clamp(qcoeff_ptr[rc] * dequant_ptr[rc != 0] / 2, INT16_MIN, INT16_MAX);
+#else
     dqcoeff_ptr[rc] = qcoeff_ptr[rc] * dequant_ptr[rc != 0] / 2;
+#endif  // (VPX_ARCH_X86 || VPX_ARCH_X86_64) && !CONFIG_VP9_HIGHBITDEPTH
 
     if (tmp) eob = idx_arr[i];
   }
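The saturation added above for builds where `tran_low_t` is 16 bits can be sketched as follows (hypothetical helper name; the real code operates in place on `dqcoeff_ptr`):

```c
#include <assert.h>
#include <stdint.h>

/* When the product qcoeff * dequant / 2 can exceed the 16-bit range, clamp
 * to INT16_MIN..INT16_MAX instead of letting a narrowing cast truncate,
 * preserving the sign of the value. Illustrative sketch only. */
static int16_t dequant_32x32_sat(int16_t qcoeff, int16_t dequant) {
  const int32_t v = (int32_t)qcoeff * dequant / 2;
  if (v > INT16_MAX) return INT16_MAX;
  if (v < INT16_MIN) return INT16_MIN;
  return (int16_t)v;
}
```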
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/quantize.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/quantize.h
index e132845..7cac140 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/quantize.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/quantize.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_QUANTIZE_H_
-#define VPX_DSP_QUANTIZE_H_
+#ifndef VPX_VPX_DSP_QUANTIZE_H_
+#define VPX_VPX_DSP_QUANTIZE_H_
 
 #include "./vpx_config.h"
 #include "vpx_dsp/vpx_dsp_common.h"
@@ -19,30 +19,29 @@
 #endif
 
 void vpx_quantize_dc(const tran_low_t *coeff_ptr, int n_coeffs, int skip_block,
-                     const int16_t *round_ptr, const int16_t quant_ptr,
+                     const int16_t *round_ptr, const int16_t quant,
                      tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr,
-                     const int16_t dequant_ptr, uint16_t *eob_ptr);
+                     const int16_t dequant, uint16_t *eob_ptr);
 void vpx_quantize_dc_32x32(const tran_low_t *coeff_ptr, int skip_block,
-                           const int16_t *round_ptr, const int16_t quant_ptr,
+                           const int16_t *round_ptr, const int16_t quant,
                            tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr,
-                           const int16_t dequant_ptr, uint16_t *eob_ptr);
+                           const int16_t dequant, uint16_t *eob_ptr);
 
 #if CONFIG_VP9_HIGHBITDEPTH
 void vpx_highbd_quantize_dc(const tran_low_t *coeff_ptr, int n_coeffs,
                             int skip_block, const int16_t *round_ptr,
-                            const int16_t quant_ptr, tran_low_t *qcoeff_ptr,
-                            tran_low_t *dqcoeff_ptr, const int16_t dequant_ptr,
+                            const int16_t quant, tran_low_t *qcoeff_ptr,
+                            tran_low_t *dqcoeff_ptr, const int16_t dequant,
                             uint16_t *eob_ptr);
 void vpx_highbd_quantize_dc_32x32(const tran_low_t *coeff_ptr, int skip_block,
-                                  const int16_t *round_ptr,
-                                  const int16_t quant_ptr,
+                                  const int16_t *round_ptr, const int16_t quant,
                                   tran_low_t *qcoeff_ptr,
                                   tran_low_t *dqcoeff_ptr,
-                                  const int16_t dequant_ptr, uint16_t *eob_ptr);
+                                  const int16_t dequant, uint16_t *eob_ptr);
 #endif
 
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_QUANTIZE_H_
+#endif  // VPX_VPX_DSP_QUANTIZE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/sad.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/sad.c
index 18b6dc6..7693220 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/sad.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/sad.c
@@ -17,54 +17,55 @@
 #include "vpx_ports/mem.h"
 
 /* Sum the difference between every corresponding element of the buffers. */
-static INLINE unsigned int sad(const uint8_t *a, int a_stride, const uint8_t *b,
-                               int b_stride, int width, int height) {
+static INLINE unsigned int sad(const uint8_t *src_ptr, int src_stride,
+                               const uint8_t *ref_ptr, int ref_stride,
+                               int width, int height) {
   int y, x;
   unsigned int sad = 0;
 
   for (y = 0; y < height; y++) {
-    for (x = 0; x < width; x++) sad += abs(a[x] - b[x]);
+    for (x = 0; x < width; x++) sad += abs(src_ptr[x] - ref_ptr[x]);
 
-    a += a_stride;
-    b += b_stride;
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
   }
   return sad;
 }
 
-#define sadMxN(m, n)                                                        \
-  unsigned int vpx_sad##m##x##n##_c(const uint8_t *src, int src_stride,     \
-                                    const uint8_t *ref, int ref_stride) {   \
-    return sad(src, src_stride, ref, ref_stride, m, n);                     \
-  }                                                                         \
-  unsigned int vpx_sad##m##x##n##_avg_c(const uint8_t *src, int src_stride, \
-                                        const uint8_t *ref, int ref_stride, \
-                                        const uint8_t *second_pred) {       \
-    DECLARE_ALIGNED(16, uint8_t, comp_pred[m * n]);                         \
-    vpx_comp_avg_pred_c(comp_pred, second_pred, m, n, ref, ref_stride);     \
-    return sad(src, src_stride, comp_pred, m, m, n);                        \
+#define sadMxN(m, n)                                                          \
+  unsigned int vpx_sad##m##x##n##_c(const uint8_t *src_ptr, int src_stride,   \
+                                    const uint8_t *ref_ptr, int ref_stride) { \
+    return sad(src_ptr, src_stride, ref_ptr, ref_stride, m, n);               \
+  }                                                                           \
+  unsigned int vpx_sad##m##x##n##_avg_c(                                      \
+      const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr,         \
+      int ref_stride, const uint8_t *second_pred) {                           \
+    DECLARE_ALIGNED(16, uint8_t, comp_pred[m * n]);                           \
+    vpx_comp_avg_pred_c(comp_pred, second_pred, m, n, ref_ptr, ref_stride);   \
+    return sad(src_ptr, src_stride, comp_pred, m, m, n);                      \
   }
 
 // depending on call sites, pass **ref_array to avoid & in subsequent call and
 // de-dup with 4D below.
-#define sadMxNxK(m, n, k)                                                   \
-  void vpx_sad##m##x##n##x##k##_c(const uint8_t *src, int src_stride,       \
-                                  const uint8_t *ref_array, int ref_stride, \
-                                  uint32_t *sad_array) {                    \
-    int i;                                                                  \
-    for (i = 0; i < k; ++i)                                                 \
-      sad_array[i] =                                                        \
-          vpx_sad##m##x##n##_c(src, src_stride, &ref_array[i], ref_stride); \
+#define sadMxNxK(m, n, k)                                                     \
+  void vpx_sad##m##x##n##x##k##_c(const uint8_t *src_ptr, int src_stride,     \
+                                  const uint8_t *ref_ptr, int ref_stride,     \
+                                  uint32_t *sad_array) {                      \
+    int i;                                                                    \
+    for (i = 0; i < k; ++i)                                                   \
+      sad_array[i] =                                                          \
+          vpx_sad##m##x##n##_c(src_ptr, src_stride, &ref_ptr[i], ref_stride); \
   }
 
 // This appears to be equivalent to the above when k == 4 and refs is const
-#define sadMxNx4D(m, n)                                                    \
-  void vpx_sad##m##x##n##x4d_c(const uint8_t *src, int src_stride,         \
-                               const uint8_t *const ref_array[],           \
-                               int ref_stride, uint32_t *sad_array) {      \
-    int i;                                                                 \
-    for (i = 0; i < 4; ++i)                                                \
-      sad_array[i] =                                                       \
-          vpx_sad##m##x##n##_c(src, src_stride, ref_array[i], ref_stride); \
+#define sadMxNx4D(m, n)                                                        \
+  void vpx_sad##m##x##n##x4d_c(const uint8_t *src_ptr, int src_stride,         \
+                               const uint8_t *const ref_array[],               \
+                               int ref_stride, uint32_t *sad_array) {          \
+    int i;                                                                     \
+    for (i = 0; i < 4; ++i)                                                    \
+      sad_array[i] =                                                           \
+          vpx_sad##m##x##n##_c(src_ptr, src_stride, ref_array[i], ref_stride); \
   }
 
 /* clang-format off */
@@ -82,6 +83,7 @@
 
 // 32x32
 sadMxN(32, 32)
+sadMxNxK(32, 32, 8)
 sadMxNx4D(32, 32)
 
 // 32x16
@@ -133,59 +135,61 @@
 
 #if CONFIG_VP9_HIGHBITDEPTH
         static INLINE
-    unsigned int highbd_sad(const uint8_t *a8, int a_stride, const uint8_t *b8,
-                            int b_stride, int width, int height) {
+    unsigned int highbd_sad(const uint8_t *src8_ptr, int src_stride,
+                            const uint8_t *ref8_ptr, int ref_stride, int width,
+                            int height) {
   int y, x;
   unsigned int sad = 0;
-  const uint16_t *a = CONVERT_TO_SHORTPTR(a8);
-  const uint16_t *b = CONVERT_TO_SHORTPTR(b8);
+  const uint16_t *src = CONVERT_TO_SHORTPTR(src8_ptr);
+  const uint16_t *ref_ptr = CONVERT_TO_SHORTPTR(ref8_ptr);
   for (y = 0; y < height; y++) {
-    for (x = 0; x < width; x++) sad += abs(a[x] - b[x]);
+    for (x = 0; x < width; x++) sad += abs(src[x] - ref_ptr[x]);
 
-    a += a_stride;
-    b += b_stride;
+    src += src_stride;
+    ref_ptr += ref_stride;
   }
   return sad;
 }
 
-static INLINE unsigned int highbd_sadb(const uint8_t *a8, int a_stride,
-                                       const uint16_t *b, int b_stride,
+static INLINE unsigned int highbd_sadb(const uint8_t *src8_ptr, int src_stride,
+                                       const uint16_t *ref_ptr, int ref_stride,
                                        int width, int height) {
   int y, x;
   unsigned int sad = 0;
-  const uint16_t *a = CONVERT_TO_SHORTPTR(a8);
+  const uint16_t *src = CONVERT_TO_SHORTPTR(src8_ptr);
   for (y = 0; y < height; y++) {
-    for (x = 0; x < width; x++) sad += abs(a[x] - b[x]);
+    for (x = 0; x < width; x++) sad += abs(src[x] - ref_ptr[x]);
 
-    a += a_stride;
-    b += b_stride;
+    src += src_stride;
+    ref_ptr += ref_stride;
   }
   return sad;
 }
 
 #define highbd_sadMxN(m, n)                                                    \
-  unsigned int vpx_highbd_sad##m##x##n##_c(const uint8_t *src, int src_stride, \
-                                           const uint8_t *ref,                 \
-                                           int ref_stride) {                   \
-    return highbd_sad(src, src_stride, ref, ref_stride, m, n);                 \
+  unsigned int vpx_highbd_sad##m##x##n##_c(                                    \
+      const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr,          \
+      int ref_stride) {                                                        \
+    return highbd_sad(src_ptr, src_stride, ref_ptr, ref_stride, m, n);         \
   }                                                                            \
   unsigned int vpx_highbd_sad##m##x##n##_avg_c(                                \
-      const uint8_t *src, int src_stride, const uint8_t *ref, int ref_stride,  \
-      const uint8_t *second_pred) {                                            \
+      const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr,          \
+      int ref_stride, const uint8_t *second_pred) {                            \
     DECLARE_ALIGNED(16, uint16_t, comp_pred[m * n]);                           \
-    vpx_highbd_comp_avg_pred_c(comp_pred, second_pred, m, n, ref, ref_stride); \
-    return highbd_sadb(src, src_stride, comp_pred, m, m, n);                   \
+    vpx_highbd_comp_avg_pred_c(comp_pred, CONVERT_TO_SHORTPTR(second_pred), m, \
+                               n, CONVERT_TO_SHORTPTR(ref_ptr), ref_stride);   \
+    return highbd_sadb(src_ptr, src_stride, comp_pred, m, m, n);               \
   }
 
-#define highbd_sadMxNx4D(m, n)                                               \
-  void vpx_highbd_sad##m##x##n##x4d_c(const uint8_t *src, int src_stride,    \
-                                      const uint8_t *const ref_array[],      \
-                                      int ref_stride, uint32_t *sad_array) { \
-    int i;                                                                   \
-    for (i = 0; i < 4; ++i) {                                                \
-      sad_array[i] = vpx_highbd_sad##m##x##n##_c(src, src_stride,            \
-                                                 ref_array[i], ref_stride);  \
-    }                                                                        \
+#define highbd_sadMxNx4D(m, n)                                                \
+  void vpx_highbd_sad##m##x##n##x4d_c(const uint8_t *src_ptr, int src_stride, \
+                                      const uint8_t *const ref_array[],       \
+                                      int ref_stride, uint32_t *sad_array) {  \
+    int i;                                                                    \
+    for (i = 0; i < 4; ++i) {                                                 \
+      sad_array[i] = vpx_highbd_sad##m##x##n##_c(src_ptr, src_stride,         \
+                                                 ref_array[i], ref_stride);   \
+    }                                                                         \
   }
 
 /* clang-format off */
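The renamed SAD kernels above all wrap the same scalar core: a sum of absolute differences over a w x h block. A minimal self-contained sketch (hypothetical name, not the libvpx entry point):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Scalar reference for the sadMxN family: walk the block row by row and
 * accumulate |src - ref| per pixel. Illustrative only. */
static unsigned int sad_ref(const uint8_t *src, int src_stride,
                            const uint8_t *ref, int ref_stride,
                            int w, int h) {
  unsigned int acc = 0;
  for (int y = 0; y < h; ++y) {
    for (int x = 0; x < w; ++x) acc += (unsigned int)abs(src[x] - ref[x]);
    src += src_stride;
    ref += ref_stride;
  }
  return acc;
}
```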
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/skin_detection.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/skin_detection.h
index a2e99ba..91640c3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/skin_detection.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/skin_detection.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_SKIN_DETECTION_H_
-#define VPX_DSP_SKIN_DETECTION_H_
+#ifndef VPX_VPX_DSP_SKIN_DETECTION_H_
+#define VPX_VPX_DSP_SKIN_DETECTION_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -21,4 +21,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_SKIN_DETECTION_H_
+#endif  // VPX_VPX_DSP_SKIN_DETECTION_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ssim.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ssim.c
index 7a29bd29..7c3c31b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ssim.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ssim.c
@@ -73,7 +73,7 @@
 static double similarity(uint32_t sum_s, uint32_t sum_r, uint32_t sum_sq_s,
                          uint32_t sum_sq_r, uint32_t sum_sxr, int count,
                          uint32_t bd) {
-  int64_t ssim_n, ssim_d;
+  double ssim_n, ssim_d;
   int64_t c1, c2;
   if (bd == 8) {
     // scale the constants by number of pixels
@@ -90,14 +90,14 @@
     assert(0);
   }
 
-  ssim_n = (2 * sum_s * sum_r + c1) *
-           ((int64_t)2 * count * sum_sxr - (int64_t)2 * sum_s * sum_r + c2);
+  ssim_n = (2.0 * sum_s * sum_r + c1) *
+           (2.0 * count * sum_sxr - 2.0 * sum_s * sum_r + c2);
 
-  ssim_d = (sum_s * sum_s + sum_r * sum_r + c1) *
-           ((int64_t)count * sum_sq_s - (int64_t)sum_s * sum_s +
-            (int64_t)count * sum_sq_r - (int64_t)sum_r * sum_r + c2);
+  ssim_d = ((double)sum_s * sum_s + (double)sum_r * sum_r + c1) *
+           ((double)count * sum_sq_s - (double)sum_s * sum_s +
+            (double)count * sum_sq_r - (double)sum_r * sum_r + c2);
 
-  return ssim_n * 1.0 / ssim_d;
+  return ssim_n / ssim_d;
 }
 
 static double ssim_8x8(const uint8_t *s, int sp, const uint8_t *r, int rp) {
@@ -284,7 +284,7 @@
   for (i = 0; i < height;
        i += 4, img1 += img1_pitch * 4, img2 += img2_pitch * 4) {
     for (j = 0; j < width; j += 4, ++c) {
-      Ssimv sv = { 0 };
+      Ssimv sv = { 0, 0, 0, 0, 0, 0 };
       double ssim;
       double ssim2;
       double dssim;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ssim.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ssim.h
index 4f2bb1d..c382237 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ssim.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/ssim.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_SSIM_H_
-#define VPX_DSP_SSIM_H_
+#ifndef VPX_VPX_DSP_SSIM_H_
+#define VPX_VPX_DSP_SSIM_H_
 
 #define MAX_SSIM_DB 100.0;
 
@@ -84,4 +84,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_SSIM_H_
+#endif  // VPX_VPX_DSP_SSIM_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/subtract.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/subtract.c
index 95e7071..45c819e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/subtract.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/subtract.c
@@ -16,37 +16,37 @@
 #include "vpx/vpx_integer.h"
 #include "vpx_ports/mem.h"
 
-void vpx_subtract_block_c(int rows, int cols, int16_t *diff,
-                          ptrdiff_t diff_stride, const uint8_t *src,
-                          ptrdiff_t src_stride, const uint8_t *pred,
+void vpx_subtract_block_c(int rows, int cols, int16_t *diff_ptr,
+                          ptrdiff_t diff_stride, const uint8_t *src_ptr,
+                          ptrdiff_t src_stride, const uint8_t *pred_ptr,
                           ptrdiff_t pred_stride) {
   int r, c;
 
   for (r = 0; r < rows; r++) {
-    for (c = 0; c < cols; c++) diff[c] = src[c] - pred[c];
+    for (c = 0; c < cols; c++) diff_ptr[c] = src_ptr[c] - pred_ptr[c];
 
-    diff += diff_stride;
-    pred += pred_stride;
-    src += src_stride;
+    diff_ptr += diff_stride;
+    pred_ptr += pred_stride;
+    src_ptr += src_stride;
   }
 }
 
 #if CONFIG_VP9_HIGHBITDEPTH
-void vpx_highbd_subtract_block_c(int rows, int cols, int16_t *diff,
-                                 ptrdiff_t diff_stride, const uint8_t *src8,
-                                 ptrdiff_t src_stride, const uint8_t *pred8,
+void vpx_highbd_subtract_block_c(int rows, int cols, int16_t *diff_ptr,
+                                 ptrdiff_t diff_stride, const uint8_t *src8_ptr,
+                                 ptrdiff_t src_stride, const uint8_t *pred8_ptr,
                                  ptrdiff_t pred_stride, int bd) {
   int r, c;
-  uint16_t *src = CONVERT_TO_SHORTPTR(src8);
-  uint16_t *pred = CONVERT_TO_SHORTPTR(pred8);
+  uint16_t *src = CONVERT_TO_SHORTPTR(src8_ptr);
+  uint16_t *pred = CONVERT_TO_SHORTPTR(pred8_ptr);
   (void)bd;
 
   for (r = 0; r < rows; r++) {
     for (c = 0; c < cols; c++) {
-      diff[c] = src[c] - pred[c];
+      diff_ptr[c] = src[c] - pred[c];
     }
 
-    diff += diff_stride;
+    diff_ptr += diff_stride;
     pred += pred_stride;
     src += src_stride;
   }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/sum_squares.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/sum_squares.c
index 7c535ac..b80cd58 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/sum_squares.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/sum_squares.c
@@ -10,8 +10,7 @@
 
 #include "./vpx_dsp_rtcd.h"
 
-uint64_t vpx_sum_squares_2d_i16_c(const int16_t *src, int src_stride,
-                                  int size) {
+uint64_t vpx_sum_squares_2d_i16_c(const int16_t *src, int stride, int size) {
   int r, c;
   uint64_t ss = 0;
 
@@ -20,7 +19,7 @@
       const int16_t v = src[c];
       ss += v * v;
     }
-    src += src_stride;
+    src += stride;
   }
 
   return ss;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/txfm_common.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/txfm_common.h
index d01d708..25f4fdb 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/txfm_common.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/txfm_common.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_TXFM_COMMON_H_
-#define VPX_DSP_TXFM_COMMON_H_
+#ifndef VPX_VPX_DSP_TXFM_COMMON_H_
+#define VPX_VPX_DSP_TXFM_COMMON_H_
 
 #include "vpx_dsp/vpx_dsp_common.h"
 
@@ -63,4 +63,4 @@
 static const tran_coef_t sinpi_3_9 = 13377;
 static const tran_coef_t sinpi_4_9 = 15212;
 
-#endif  // VPX_DSP_TXFM_COMMON_H_
+#endif  // VPX_VPX_DSP_TXFM_COMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/variance.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/variance.c
index fefac42..30b55dc 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/variance.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/variance.c
@@ -21,36 +21,37 @@
   { 64, 64 }, { 48, 80 },  { 32, 96 }, { 16, 112 },
 };
 
-uint32_t vpx_get4x4sse_cs_c(const uint8_t *a, int a_stride, const uint8_t *b,
-                            int b_stride) {
+uint32_t vpx_get4x4sse_cs_c(const uint8_t *src_ptr, int src_stride,
+                            const uint8_t *ref_ptr, int ref_stride) {
   int distortion = 0;
   int r, c;
 
   for (r = 0; r < 4; ++r) {
     for (c = 0; c < 4; ++c) {
-      int diff = a[c] - b[c];
+      int diff = src_ptr[c] - ref_ptr[c];
       distortion += diff * diff;
     }
 
-    a += a_stride;
-    b += b_stride;
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
   }
 
   return distortion;
 }
 
-uint32_t vpx_get_mb_ss_c(const int16_t *a) {
+uint32_t vpx_get_mb_ss_c(const int16_t *src_ptr) {
   unsigned int i, sum = 0;
 
   for (i = 0; i < 256; ++i) {
-    sum += a[i] * a[i];
+    sum += src_ptr[i] * src_ptr[i];
   }
 
   return sum;
 }
 
-static void variance(const uint8_t *a, int a_stride, const uint8_t *b,
-                     int b_stride, int w, int h, uint32_t *sse, int *sum) {
+static void variance(const uint8_t *src_ptr, int src_stride,
+                     const uint8_t *ref_ptr, int ref_stride, int w, int h,
+                     uint32_t *sse, int *sum) {
   int i, j;
 
   *sum = 0;
@@ -58,13 +59,13 @@
 
   for (i = 0; i < h; ++i) {
     for (j = 0; j < w; ++j) {
-      const int diff = a[j] - b[j];
+      const int diff = src_ptr[j] - ref_ptr[j];
       *sum += diff;
       *sse += diff * diff;
     }
 
-    a += a_stride;
-    b += b_stride;
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
   }
 }
 
@@ -76,24 +77,23 @@
 // taps should sum to FILTER_WEIGHT. pixel_step defines whether the filter is
 // applied horizontally (pixel_step = 1) or vertically (pixel_step = stride).
 // It defines the offset required to move from one input to the next.
-static void var_filter_block2d_bil_first_pass(const uint8_t *a, uint16_t *b,
-                                              unsigned int src_pixels_per_line,
-                                              int pixel_step,
-                                              unsigned int output_height,
-                                              unsigned int output_width,
-                                              const uint8_t *filter) {
+static void var_filter_block2d_bil_first_pass(
+    const uint8_t *src_ptr, uint16_t *ref_ptr, unsigned int src_pixels_per_line,
+    int pixel_step, unsigned int output_height, unsigned int output_width,
+    const uint8_t *filter) {
   unsigned int i, j;
 
   for (i = 0; i < output_height; ++i) {
     for (j = 0; j < output_width; ++j) {
-      b[j] = ROUND_POWER_OF_TWO(
-          (int)a[0] * filter[0] + (int)a[pixel_step] * filter[1], FILTER_BITS);
+      ref_ptr[j] = ROUND_POWER_OF_TWO(
+          (int)src_ptr[0] * filter[0] + (int)src_ptr[pixel_step] * filter[1],
+          FILTER_BITS);
 
-      ++a;
+      ++src_ptr;
     }
 
-    a += src_pixels_per_line - output_width;
-    b += output_width;
+    src_ptr += src_pixels_per_line - output_width;
+    ref_ptr += output_width;
   }
 }
 
@@ -106,91 +106,90 @@
 // filter is applied horizontally (pixel_step = 1) or vertically
 // (pixel_step = stride). It defines the offset required to move from one input
 // to the next. Output is 8-bit.
-static void var_filter_block2d_bil_second_pass(const uint16_t *a, uint8_t *b,
-                                               unsigned int src_pixels_per_line,
-                                               unsigned int pixel_step,
-                                               unsigned int output_height,
-                                               unsigned int output_width,
-                                               const uint8_t *filter) {
+static void var_filter_block2d_bil_second_pass(
+    const uint16_t *src_ptr, uint8_t *ref_ptr, unsigned int src_pixels_per_line,
+    unsigned int pixel_step, unsigned int output_height,
+    unsigned int output_width, const uint8_t *filter) {
   unsigned int i, j;
 
   for (i = 0; i < output_height; ++i) {
     for (j = 0; j < output_width; ++j) {
-      b[j] = ROUND_POWER_OF_TWO(
-          (int)a[0] * filter[0] + (int)a[pixel_step] * filter[1], FILTER_BITS);
-      ++a;
+      ref_ptr[j] = ROUND_POWER_OF_TWO(
+          (int)src_ptr[0] * filter[0] + (int)src_ptr[pixel_step] * filter[1],
+          FILTER_BITS);
+      ++src_ptr;
     }
 
-    a += src_pixels_per_line - output_width;
-    b += output_width;
+    src_ptr += src_pixels_per_line - output_width;
+    ref_ptr += output_width;
   }
 }
 
-#define VAR(W, H)                                                    \
-  uint32_t vpx_variance##W##x##H##_c(const uint8_t *a, int a_stride, \
-                                     const uint8_t *b, int b_stride, \
-                                     uint32_t *sse) {                \
-    int sum;                                                         \
-    variance(a, a_stride, b, b_stride, W, H, sse, &sum);             \
-    return *sse - (uint32_t)(((int64_t)sum * sum) / (W * H));        \
+#define VAR(W, H)                                                            \
+  uint32_t vpx_variance##W##x##H##_c(const uint8_t *src_ptr, int src_stride, \
+                                     const uint8_t *ref_ptr, int ref_stride, \
+                                     uint32_t *sse) {                        \
+    int sum;                                                                 \
+    variance(src_ptr, src_stride, ref_ptr, ref_stride, W, H, sse, &sum);     \
+    return *sse - (uint32_t)(((int64_t)sum * sum) / (W * H));                \
   }
 
-#define SUBPIX_VAR(W, H)                                                \
-  uint32_t vpx_sub_pixel_variance##W##x##H##_c(                         \
-      const uint8_t *a, int a_stride, int xoffset, int yoffset,         \
-      const uint8_t *b, int b_stride, uint32_t *sse) {                  \
-    uint16_t fdata3[(H + 1) * W];                                       \
-    uint8_t temp2[H * W];                                               \
-                                                                        \
-    var_filter_block2d_bil_first_pass(a, fdata3, a_stride, 1, H + 1, W, \
-                                      bilinear_filters[xoffset]);       \
-    var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,       \
-                                       bilinear_filters[yoffset]);      \
-                                                                        \
-    return vpx_variance##W##x##H##_c(temp2, W, b, b_stride, sse);       \
+#define SUBPIX_VAR(W, H)                                                     \
+  uint32_t vpx_sub_pixel_variance##W##x##H##_c(                              \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,    \
+      const uint8_t *ref_ptr, int ref_stride, uint32_t *sse) {               \
+    uint16_t fdata3[(H + 1) * W];                                            \
+    uint8_t temp2[H * W];                                                    \
+                                                                             \
+    var_filter_block2d_bil_first_pass(src_ptr, fdata3, src_stride, 1, H + 1, \
+                                      W, bilinear_filters[x_offset]);        \
+    var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,            \
+                                       bilinear_filters[y_offset]);          \
+                                                                             \
+    return vpx_variance##W##x##H##_c(temp2, W, ref_ptr, ref_stride, sse);    \
   }
 
-#define SUBPIX_AVG_VAR(W, H)                                            \
-  uint32_t vpx_sub_pixel_avg_variance##W##x##H##_c(                     \
-      const uint8_t *a, int a_stride, int xoffset, int yoffset,         \
-      const uint8_t *b, int b_stride, uint32_t *sse,                    \
-      const uint8_t *second_pred) {                                     \
-    uint16_t fdata3[(H + 1) * W];                                       \
-    uint8_t temp2[H * W];                                               \
-    DECLARE_ALIGNED(16, uint8_t, temp3[H * W]);                         \
-                                                                        \
-    var_filter_block2d_bil_first_pass(a, fdata3, a_stride, 1, H + 1, W, \
-                                      bilinear_filters[xoffset]);       \
-    var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,       \
-                                       bilinear_filters[yoffset]);      \
-                                                                        \
-    vpx_comp_avg_pred_c(temp3, second_pred, W, H, temp2, W);            \
-                                                                        \
-    return vpx_variance##W##x##H##_c(temp3, W, b, b_stride, sse);       \
+#define SUBPIX_AVG_VAR(W, H)                                                 \
+  uint32_t vpx_sub_pixel_avg_variance##W##x##H##_c(                          \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,    \
+      const uint8_t *ref_ptr, int ref_stride, uint32_t *sse,                 \
+      const uint8_t *second_pred) {                                          \
+    uint16_t fdata3[(H + 1) * W];                                            \
+    uint8_t temp2[H * W];                                                    \
+    DECLARE_ALIGNED(16, uint8_t, temp3[H * W]);                              \
+                                                                             \
+    var_filter_block2d_bil_first_pass(src_ptr, fdata3, src_stride, 1, H + 1, \
+                                      W, bilinear_filters[x_offset]);        \
+    var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,            \
+                                       bilinear_filters[y_offset]);          \
+                                                                             \
+    vpx_comp_avg_pred_c(temp3, second_pred, W, H, temp2, W);                 \
+                                                                             \
+    return vpx_variance##W##x##H##_c(temp3, W, ref_ptr, ref_stride, sse);    \
   }
 
 /* Identical to the variance call except it takes an additional parameter, sum,
  * and returns that value using pass-by-reference instead of returning
  * sse - sum^2 / w*h
  */
-#define GET_VAR(W, H)                                                         \
-  void vpx_get##W##x##H##var_c(const uint8_t *a, int a_stride,                \
-                               const uint8_t *b, int b_stride, uint32_t *sse, \
-                               int *sum) {                                    \
-    variance(a, a_stride, b, b_stride, W, H, sse, sum);                       \
+#define GET_VAR(W, H)                                                   \
+  void vpx_get##W##x##H##var_c(const uint8_t *src_ptr, int src_stride,  \
+                               const uint8_t *ref_ptr, int ref_stride,  \
+                               uint32_t *sse, int *sum) {               \
+    variance(src_ptr, src_stride, ref_ptr, ref_stride, W, H, sse, sum); \
   }
 
 /* Identical to the variance call except it does not calculate the
  * sse - sum^2 / w*h and returns sse in addtion to modifying the passed in
  * variable.
  */
-#define MSE(W, H)                                               \
-  uint32_t vpx_mse##W##x##H##_c(const uint8_t *a, int a_stride, \
-                                const uint8_t *b, int b_stride, \
-                                uint32_t *sse) {                \
-    int sum;                                                    \
-    variance(a, a_stride, b, b_stride, W, H, sse, &sum);        \
-    return *sse;                                                \
+#define MSE(W, H)                                                        \
+  uint32_t vpx_mse##W##x##H##_c(const uint8_t *src_ptr, int src_stride,  \
+                                const uint8_t *ref_ptr, int ref_stride,  \
+                                uint32_t *sse) {                         \
+    int sum;                                                             \
+    variance(src_ptr, src_stride, ref_ptr, ref_stride, W, H, sse, &sum); \
+    return *sse;                                                         \
   }
 
 /* All three forms of the variance are available in the same sizes. */
@@ -236,130 +235,141 @@
   }
 }
 
-#if CONFIG_VP9
 #if CONFIG_VP9_HIGHBITDEPTH
-static void highbd_variance64(const uint8_t *a8, int a_stride,
-                              const uint8_t *b8, int b_stride, int w, int h,
-                              uint64_t *sse, int64_t *sum) {
+static void highbd_variance64(const uint8_t *src8_ptr, int src_stride,
+                              const uint8_t *ref8_ptr, int ref_stride, int w,
+                              int h, uint64_t *sse, int64_t *sum) {
   int i, j;
 
-  uint16_t *a = CONVERT_TO_SHORTPTR(a8);
-  uint16_t *b = CONVERT_TO_SHORTPTR(b8);
+  uint16_t *src_ptr = CONVERT_TO_SHORTPTR(src8_ptr);
+  uint16_t *ref_ptr = CONVERT_TO_SHORTPTR(ref8_ptr);
   *sum = 0;
   *sse = 0;
 
   for (i = 0; i < h; ++i) {
     for (j = 0; j < w; ++j) {
-      const int diff = a[j] - b[j];
+      const int diff = src_ptr[j] - ref_ptr[j];
       *sum += diff;
       *sse += diff * diff;
     }
-    a += a_stride;
-    b += b_stride;
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
   }
 }
 
-static void highbd_8_variance(const uint8_t *a8, int a_stride,
-                              const uint8_t *b8, int b_stride, int w, int h,
-                              uint32_t *sse, int *sum) {
+static void highbd_8_variance(const uint8_t *src8_ptr, int src_stride,
+                              const uint8_t *ref8_ptr, int ref_stride, int w,
+                              int h, uint32_t *sse, int *sum) {
   uint64_t sse_long = 0;
   int64_t sum_long = 0;
-  highbd_variance64(a8, a_stride, b8, b_stride, w, h, &sse_long, &sum_long);
+  highbd_variance64(src8_ptr, src_stride, ref8_ptr, ref_stride, w, h, &sse_long,
+                    &sum_long);
   *sse = (uint32_t)sse_long;
   *sum = (int)sum_long;
 }
 
-static void highbd_10_variance(const uint8_t *a8, int a_stride,
-                               const uint8_t *b8, int b_stride, int w, int h,
-                               uint32_t *sse, int *sum) {
+static void highbd_10_variance(const uint8_t *src8_ptr, int src_stride,
+                               const uint8_t *ref8_ptr, int ref_stride, int w,
+                               int h, uint32_t *sse, int *sum) {
   uint64_t sse_long = 0;
   int64_t sum_long = 0;
-  highbd_variance64(a8, a_stride, b8, b_stride, w, h, &sse_long, &sum_long);
+  highbd_variance64(src8_ptr, src_stride, ref8_ptr, ref_stride, w, h, &sse_long,
+                    &sum_long);
   *sse = (uint32_t)ROUND_POWER_OF_TWO(sse_long, 4);
   *sum = (int)ROUND_POWER_OF_TWO(sum_long, 2);
 }
 
-static void highbd_12_variance(const uint8_t *a8, int a_stride,
-                               const uint8_t *b8, int b_stride, int w, int h,
-                               uint32_t *sse, int *sum) {
+static void highbd_12_variance(const uint8_t *src8_ptr, int src_stride,
+                               const uint8_t *ref8_ptr, int ref_stride, int w,
+                               int h, uint32_t *sse, int *sum) {
   uint64_t sse_long = 0;
   int64_t sum_long = 0;
-  highbd_variance64(a8, a_stride, b8, b_stride, w, h, &sse_long, &sum_long);
+  highbd_variance64(src8_ptr, src_stride, ref8_ptr, ref_stride, w, h, &sse_long,
+                    &sum_long);
   *sse = (uint32_t)ROUND_POWER_OF_TWO(sse_long, 8);
   *sum = (int)ROUND_POWER_OF_TWO(sum_long, 4);
 }
 
-#define HIGHBD_VAR(W, H)                                                       \
-  uint32_t vpx_highbd_8_variance##W##x##H##_c(const uint8_t *a, int a_stride,  \
-                                              const uint8_t *b, int b_stride,  \
-                                              uint32_t *sse) {                 \
-    int sum;                                                                   \
-    highbd_8_variance(a, a_stride, b, b_stride, W, H, sse, &sum);              \
-    return *sse - (uint32_t)(((int64_t)sum * sum) / (W * H));                  \
-  }                                                                            \
-                                                                               \
-  uint32_t vpx_highbd_10_variance##W##x##H##_c(const uint8_t *a, int a_stride, \
-                                               const uint8_t *b, int b_stride, \
-                                               uint32_t *sse) {                \
-    int sum;                                                                   \
-    int64_t var;                                                               \
-    highbd_10_variance(a, a_stride, b, b_stride, W, H, sse, &sum);             \
-    var = (int64_t)(*sse) - (((int64_t)sum * sum) / (W * H));                  \
-    return (var >= 0) ? (uint32_t)var : 0;                                     \
-  }                                                                            \
-                                                                               \
-  uint32_t vpx_highbd_12_variance##W##x##H##_c(const uint8_t *a, int a_stride, \
-                                               const uint8_t *b, int b_stride, \
-                                               uint32_t *sse) {                \
-    int sum;                                                                   \
-    int64_t var;                                                               \
-    highbd_12_variance(a, a_stride, b, b_stride, W, H, sse, &sum);             \
-    var = (int64_t)(*sse) - (((int64_t)sum * sum) / (W * H));                  \
-    return (var >= 0) ? (uint32_t)var : 0;                                     \
+#define HIGHBD_VAR(W, H)                                                    \
+  uint32_t vpx_highbd_8_variance##W##x##H##_c(                              \
+      const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr,       \
+      int ref_stride, uint32_t *sse) {                                      \
+    int sum;                                                                \
+    highbd_8_variance(src_ptr, src_stride, ref_ptr, ref_stride, W, H, sse,  \
+                      &sum);                                                \
+    return *sse - (uint32_t)(((int64_t)sum * sum) / (W * H));               \
+  }                                                                         \
+                                                                            \
+  uint32_t vpx_highbd_10_variance##W##x##H##_c(                             \
+      const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr,       \
+      int ref_stride, uint32_t *sse) {                                      \
+    int sum;                                                                \
+    int64_t var;                                                            \
+    highbd_10_variance(src_ptr, src_stride, ref_ptr, ref_stride, W, H, sse, \
+                       &sum);                                               \
+    var = (int64_t)(*sse) - (((int64_t)sum * sum) / (W * H));               \
+    return (var >= 0) ? (uint32_t)var : 0;                                  \
+  }                                                                         \
+                                                                            \
+  uint32_t vpx_highbd_12_variance##W##x##H##_c(                             \
+      const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr,       \
+      int ref_stride, uint32_t *sse) {                                      \
+    int sum;                                                                \
+    int64_t var;                                                            \
+    highbd_12_variance(src_ptr, src_stride, ref_ptr, ref_stride, W, H, sse, \
+                       &sum);                                               \
+    var = (int64_t)(*sse) - (((int64_t)sum * sum) / (W * H));               \
+    return (var >= 0) ? (uint32_t)var : 0;                                  \
   }
 
-#define HIGHBD_GET_VAR(S)                                                    \
-  void vpx_highbd_8_get##S##x##S##var_c(const uint8_t *src, int src_stride,  \
-                                        const uint8_t *ref, int ref_stride,  \
-                                        uint32_t *sse, int *sum) {           \
-    highbd_8_variance(src, src_stride, ref, ref_stride, S, S, sse, sum);     \
-  }                                                                          \
-                                                                             \
-  void vpx_highbd_10_get##S##x##S##var_c(const uint8_t *src, int src_stride, \
-                                         const uint8_t *ref, int ref_stride, \
-                                         uint32_t *sse, int *sum) {          \
-    highbd_10_variance(src, src_stride, ref, ref_stride, S, S, sse, sum);    \
-  }                                                                          \
-                                                                             \
-  void vpx_highbd_12_get##S##x##S##var_c(const uint8_t *src, int src_stride, \
-                                         const uint8_t *ref, int ref_stride, \
-                                         uint32_t *sse, int *sum) {          \
-    highbd_12_variance(src, src_stride, ref, ref_stride, S, S, sse, sum);    \
+#define HIGHBD_GET_VAR(S)                                                   \
+  void vpx_highbd_8_get##S##x##S##var_c(                                    \
+      const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr,       \
+      int ref_stride, uint32_t *sse, int *sum) {                            \
+    highbd_8_variance(src_ptr, src_stride, ref_ptr, ref_stride, S, S, sse,  \
+                      sum);                                                 \
+  }                                                                         \
+                                                                            \
+  void vpx_highbd_10_get##S##x##S##var_c(                                   \
+      const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr,       \
+      int ref_stride, uint32_t *sse, int *sum) {                            \
+    highbd_10_variance(src_ptr, src_stride, ref_ptr, ref_stride, S, S, sse, \
+                       sum);                                                \
+  }                                                                         \
+                                                                            \
+  void vpx_highbd_12_get##S##x##S##var_c(                                   \
+      const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr,       \
+      int ref_stride, uint32_t *sse, int *sum) {                            \
+    highbd_12_variance(src_ptr, src_stride, ref_ptr, ref_stride, S, S, sse, \
+                       sum);                                                \
   }
 
-#define HIGHBD_MSE(W, H)                                                      \
-  uint32_t vpx_highbd_8_mse##W##x##H##_c(const uint8_t *src, int src_stride,  \
-                                         const uint8_t *ref, int ref_stride,  \
-                                         uint32_t *sse) {                     \
-    int sum;                                                                  \
-    highbd_8_variance(src, src_stride, ref, ref_stride, W, H, sse, &sum);     \
-    return *sse;                                                              \
-  }                                                                           \
-                                                                              \
-  uint32_t vpx_highbd_10_mse##W##x##H##_c(const uint8_t *src, int src_stride, \
-                                          const uint8_t *ref, int ref_stride, \
-                                          uint32_t *sse) {                    \
-    int sum;                                                                  \
-    highbd_10_variance(src, src_stride, ref, ref_stride, W, H, sse, &sum);    \
-    return *sse;                                                              \
-  }                                                                           \
-                                                                              \
-  uint32_t vpx_highbd_12_mse##W##x##H##_c(const uint8_t *src, int src_stride, \
-                                          const uint8_t *ref, int ref_stride, \
-                                          uint32_t *sse) {                    \
-    int sum;                                                                  \
-    highbd_12_variance(src, src_stride, ref, ref_stride, W, H, sse, &sum);    \
-    return *sse;                                                              \
+#define HIGHBD_MSE(W, H)                                                    \
+  uint32_t vpx_highbd_8_mse##W##x##H##_c(                                   \
+      const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr,       \
+      int ref_stride, uint32_t *sse) {                                      \
+    int sum;                                                                \
+    highbd_8_variance(src_ptr, src_stride, ref_ptr, ref_stride, W, H, sse,  \
+                      &sum);                                                \
+    return *sse;                                                            \
+  }                                                                         \
+                                                                            \
+  uint32_t vpx_highbd_10_mse##W##x##H##_c(                                  \
+      const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr,       \
+      int ref_stride, uint32_t *sse) {                                      \
+    int sum;                                                                \
+    highbd_10_variance(src_ptr, src_stride, ref_ptr, ref_stride, W, H, sse, \
+                       &sum);                                               \
+    return *sse;                                                            \
+  }                                                                         \
+                                                                            \
+  uint32_t vpx_highbd_12_mse##W##x##H##_c(                                  \
+      const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr,       \
+      int ref_stride, uint32_t *sse) {                                      \
+    int sum;                                                                \
+    highbd_12_variance(src_ptr, src_stride, ref_ptr, ref_stride, W, H, sse, \
+                       &sum);                                               \
+    return *sse;                                                            \
   }
 
 static void highbd_var_filter_block2d_bil_first_pass(
@@ -404,111 +414,111 @@
   }
 }
 
-#define HIGHBD_SUBPIX_VAR(W, H)                                              \
-  uint32_t vpx_highbd_8_sub_pixel_variance##W##x##H##_c(                     \
-      const uint8_t *src, int src_stride, int xoffset, int yoffset,          \
-      const uint8_t *dst, int dst_stride, uint32_t *sse) {                   \
-    uint16_t fdata3[(H + 1) * W];                                            \
-    uint16_t temp2[H * W];                                                   \
-                                                                             \
-    highbd_var_filter_block2d_bil_first_pass(                                \
-        src, fdata3, src_stride, 1, H + 1, W, bilinear_filters[xoffset]);    \
-    highbd_var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,     \
-                                              bilinear_filters[yoffset]);    \
-                                                                             \
-    return vpx_highbd_8_variance##W##x##H##_c(CONVERT_TO_BYTEPTR(temp2), W,  \
-                                              dst, dst_stride, sse);         \
-  }                                                                          \
-                                                                             \
-  uint32_t vpx_highbd_10_sub_pixel_variance##W##x##H##_c(                    \
-      const uint8_t *src, int src_stride, int xoffset, int yoffset,          \
-      const uint8_t *dst, int dst_stride, uint32_t *sse) {                   \
-    uint16_t fdata3[(H + 1) * W];                                            \
-    uint16_t temp2[H * W];                                                   \
-                                                                             \
-    highbd_var_filter_block2d_bil_first_pass(                                \
-        src, fdata3, src_stride, 1, H + 1, W, bilinear_filters[xoffset]);    \
-    highbd_var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,     \
-                                              bilinear_filters[yoffset]);    \
-                                                                             \
-    return vpx_highbd_10_variance##W##x##H##_c(CONVERT_TO_BYTEPTR(temp2), W, \
-                                               dst, dst_stride, sse);        \
-  }                                                                          \
-                                                                             \
-  uint32_t vpx_highbd_12_sub_pixel_variance##W##x##H##_c(                    \
-      const uint8_t *src, int src_stride, int xoffset, int yoffset,          \
-      const uint8_t *dst, int dst_stride, uint32_t *sse) {                   \
-    uint16_t fdata3[(H + 1) * W];                                            \
-    uint16_t temp2[H * W];                                                   \
-                                                                             \
-    highbd_var_filter_block2d_bil_first_pass(                                \
-        src, fdata3, src_stride, 1, H + 1, W, bilinear_filters[xoffset]);    \
-    highbd_var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,     \
-                                              bilinear_filters[yoffset]);    \
-                                                                             \
-    return vpx_highbd_12_variance##W##x##H##_c(CONVERT_TO_BYTEPTR(temp2), W, \
-                                               dst, dst_stride, sse);        \
+#define HIGHBD_SUBPIX_VAR(W, H)                                                \
+  uint32_t vpx_highbd_8_sub_pixel_variance##W##x##H##_c(                       \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,      \
+      const uint8_t *ref_ptr, int ref_stride, uint32_t *sse) {                 \
+    uint16_t fdata3[(H + 1) * W];                                              \
+    uint16_t temp2[H * W];                                                     \
+                                                                               \
+    highbd_var_filter_block2d_bil_first_pass(                                  \
+        src_ptr, fdata3, src_stride, 1, H + 1, W, bilinear_filters[x_offset]); \
+    highbd_var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,       \
+                                              bilinear_filters[y_offset]);     \
+                                                                               \
+    return vpx_highbd_8_variance##W##x##H##_c(CONVERT_TO_BYTEPTR(temp2), W,    \
+                                              ref_ptr, ref_stride, sse);       \
+  }                                                                            \
+                                                                               \
+  uint32_t vpx_highbd_10_sub_pixel_variance##W##x##H##_c(                      \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,      \
+      const uint8_t *ref_ptr, int ref_stride, uint32_t *sse) {                 \
+    uint16_t fdata3[(H + 1) * W];                                              \
+    uint16_t temp2[H * W];                                                     \
+                                                                               \
+    highbd_var_filter_block2d_bil_first_pass(                                  \
+        src_ptr, fdata3, src_stride, 1, H + 1, W, bilinear_filters[x_offset]); \
+    highbd_var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,       \
+                                              bilinear_filters[y_offset]);     \
+                                                                               \
+    return vpx_highbd_10_variance##W##x##H##_c(CONVERT_TO_BYTEPTR(temp2), W,   \
+                                               ref_ptr, ref_stride, sse);      \
+  }                                                                            \
+                                                                               \
+  uint32_t vpx_highbd_12_sub_pixel_variance##W##x##H##_c(                      \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,      \
+      const uint8_t *ref_ptr, int ref_stride, uint32_t *sse) {                 \
+    uint16_t fdata3[(H + 1) * W];                                              \
+    uint16_t temp2[H * W];                                                     \
+                                                                               \
+    highbd_var_filter_block2d_bil_first_pass(                                  \
+        src_ptr, fdata3, src_stride, 1, H + 1, W, bilinear_filters[x_offset]); \
+    highbd_var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,       \
+                                              bilinear_filters[y_offset]);     \
+                                                                               \
+    return vpx_highbd_12_variance##W##x##H##_c(CONVERT_TO_BYTEPTR(temp2), W,   \
+                                               ref_ptr, ref_stride, sse);      \
   }
 
-#define HIGHBD_SUBPIX_AVG_VAR(W, H)                                          \
-  uint32_t vpx_highbd_8_sub_pixel_avg_variance##W##x##H##_c(                 \
-      const uint8_t *src, int src_stride, int xoffset, int yoffset,          \
-      const uint8_t *dst, int dst_stride, uint32_t *sse,                     \
-      const uint8_t *second_pred) {                                          \
-    uint16_t fdata3[(H + 1) * W];                                            \
-    uint16_t temp2[H * W];                                                   \
-    DECLARE_ALIGNED(16, uint16_t, temp3[H * W]);                             \
-                                                                             \
-    highbd_var_filter_block2d_bil_first_pass(                                \
-        src, fdata3, src_stride, 1, H + 1, W, bilinear_filters[xoffset]);    \
-    highbd_var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,     \
-                                              bilinear_filters[yoffset]);    \
-                                                                             \
-    vpx_highbd_comp_avg_pred_c(temp3, second_pred, W, H,                     \
-                               CONVERT_TO_BYTEPTR(temp2), W);                \
-                                                                             \
-    return vpx_highbd_8_variance##W##x##H##_c(CONVERT_TO_BYTEPTR(temp3), W,  \
-                                              dst, dst_stride, sse);         \
-  }                                                                          \
-                                                                             \
-  uint32_t vpx_highbd_10_sub_pixel_avg_variance##W##x##H##_c(                \
-      const uint8_t *src, int src_stride, int xoffset, int yoffset,          \
-      const uint8_t *dst, int dst_stride, uint32_t *sse,                     \
-      const uint8_t *second_pred) {                                          \
-    uint16_t fdata3[(H + 1) * W];                                            \
-    uint16_t temp2[H * W];                                                   \
-    DECLARE_ALIGNED(16, uint16_t, temp3[H * W]);                             \
-                                                                             \
-    highbd_var_filter_block2d_bil_first_pass(                                \
-        src, fdata3, src_stride, 1, H + 1, W, bilinear_filters[xoffset]);    \
-    highbd_var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,     \
-                                              bilinear_filters[yoffset]);    \
-                                                                             \
-    vpx_highbd_comp_avg_pred_c(temp3, second_pred, W, H,                     \
-                               CONVERT_TO_BYTEPTR(temp2), W);                \
-                                                                             \
-    return vpx_highbd_10_variance##W##x##H##_c(CONVERT_TO_BYTEPTR(temp3), W, \
-                                               dst, dst_stride, sse);        \
-  }                                                                          \
-                                                                             \
-  uint32_t vpx_highbd_12_sub_pixel_avg_variance##W##x##H##_c(                \
-      const uint8_t *src, int src_stride, int xoffset, int yoffset,          \
-      const uint8_t *dst, int dst_stride, uint32_t *sse,                     \
-      const uint8_t *second_pred) {                                          \
-    uint16_t fdata3[(H + 1) * W];                                            \
-    uint16_t temp2[H * W];                                                   \
-    DECLARE_ALIGNED(16, uint16_t, temp3[H * W]);                             \
-                                                                             \
-    highbd_var_filter_block2d_bil_first_pass(                                \
-        src, fdata3, src_stride, 1, H + 1, W, bilinear_filters[xoffset]);    \
-    highbd_var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,     \
-                                              bilinear_filters[yoffset]);    \
-                                                                             \
-    vpx_highbd_comp_avg_pred_c(temp3, second_pred, W, H,                     \
-                               CONVERT_TO_BYTEPTR(temp2), W);                \
-                                                                             \
-    return vpx_highbd_12_variance##W##x##H##_c(CONVERT_TO_BYTEPTR(temp3), W, \
-                                               dst, dst_stride, sse);        \
+#define HIGHBD_SUBPIX_AVG_VAR(W, H)                                            \
+  uint32_t vpx_highbd_8_sub_pixel_avg_variance##W##x##H##_c(                   \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,      \
+      const uint8_t *ref_ptr, int ref_stride, uint32_t *sse,                   \
+      const uint8_t *second_pred) {                                            \
+    uint16_t fdata3[(H + 1) * W];                                              \
+    uint16_t temp2[H * W];                                                     \
+    DECLARE_ALIGNED(16, uint16_t, temp3[H * W]);                               \
+                                                                               \
+    highbd_var_filter_block2d_bil_first_pass(                                  \
+        src_ptr, fdata3, src_stride, 1, H + 1, W, bilinear_filters[x_offset]); \
+    highbd_var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,       \
+                                              bilinear_filters[y_offset]);     \
+                                                                               \
+    vpx_highbd_comp_avg_pred_c(temp3, CONVERT_TO_SHORTPTR(second_pred), W, H,  \
+                               temp2, W);                                      \
+                                                                               \
+    return vpx_highbd_8_variance##W##x##H##_c(CONVERT_TO_BYTEPTR(temp3), W,    \
+                                              ref_ptr, ref_stride, sse);       \
+  }                                                                            \
+                                                                               \
+  uint32_t vpx_highbd_10_sub_pixel_avg_variance##W##x##H##_c(                  \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,      \
+      const uint8_t *ref_ptr, int ref_stride, uint32_t *sse,                   \
+      const uint8_t *second_pred) {                                            \
+    uint16_t fdata3[(H + 1) * W];                                              \
+    uint16_t temp2[H * W];                                                     \
+    DECLARE_ALIGNED(16, uint16_t, temp3[H * W]);                               \
+                                                                               \
+    highbd_var_filter_block2d_bil_first_pass(                                  \
+        src_ptr, fdata3, src_stride, 1, H + 1, W, bilinear_filters[x_offset]); \
+    highbd_var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,       \
+                                              bilinear_filters[y_offset]);     \
+                                                                               \
+    vpx_highbd_comp_avg_pred_c(temp3, CONVERT_TO_SHORTPTR(second_pred), W, H,  \
+                               temp2, W);                                      \
+                                                                               \
+    return vpx_highbd_10_variance##W##x##H##_c(CONVERT_TO_BYTEPTR(temp3), W,   \
+                                               ref_ptr, ref_stride, sse);      \
+  }                                                                            \
+                                                                               \
+  uint32_t vpx_highbd_12_sub_pixel_avg_variance##W##x##H##_c(                  \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,      \
+      const uint8_t *ref_ptr, int ref_stride, uint32_t *sse,                   \
+      const uint8_t *second_pred) {                                            \
+    uint16_t fdata3[(H + 1) * W];                                              \
+    uint16_t temp2[H * W];                                                     \
+    DECLARE_ALIGNED(16, uint16_t, temp3[H * W]);                               \
+                                                                               \
+    highbd_var_filter_block2d_bil_first_pass(                                  \
+        src_ptr, fdata3, src_stride, 1, H + 1, W, bilinear_filters[x_offset]); \
+    highbd_var_filter_block2d_bil_second_pass(fdata3, temp2, W, W, H, W,       \
+                                              bilinear_filters[y_offset]);     \
+                                                                               \
+    vpx_highbd_comp_avg_pred_c(temp3, CONVERT_TO_SHORTPTR(second_pred), W, H,  \
+                               temp2, W);                                      \
+                                                                               \
+    return vpx_highbd_12_variance##W##x##H##_c(CONVERT_TO_BYTEPTR(temp3), W,   \
+                                               ref_ptr, ref_stride, sse);      \
   }
 
 /* All three forms of the variance are available in the same sizes. */
@@ -539,12 +549,10 @@
 HIGHBD_MSE(8, 16)
 HIGHBD_MSE(8, 8)
 
-void vpx_highbd_comp_avg_pred(uint16_t *comp_pred, const uint8_t *pred8,
-                              int width, int height, const uint8_t *ref8,
+void vpx_highbd_comp_avg_pred(uint16_t *comp_pred, const uint16_t *pred,
+                              int width, int height, const uint16_t *ref,
                               int ref_stride) {
   int i, j;
-  uint16_t *pred = CONVERT_TO_SHORTPTR(pred8);
-  uint16_t *ref = CONVERT_TO_SHORTPTR(ref8);
   for (i = 0; i < height; ++i) {
     for (j = 0; j < width; ++j) {
       const int tmp = pred[j] + ref[j];
@@ -556,4 +564,3 @@
   }
 }
 #endif  // CONFIG_VP9_HIGHBITDEPTH
-#endif
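For context on the `vpx_highbd_comp_avg_pred` signature change above (it now takes `uint16_t` pointers directly, so the `CONVERT_TO_SHORTPTR` unwrapping moves to call sites such as the `HIGHBD_SUBPIX_AVG_VAR` macro), the rounding average it computes can be sketched as follows. This is a toy version with illustrative names, not the libvpx implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Rounding average of a prediction block and a reference block:
 * comp_pred[j] = ROUND_POWER_OF_TWO(pred[j] + ref[j], 1).
 * The prediction buffer is contiguous (stride == width); only the
 * reference has an independent stride, matching the function above. */
static void toy_comp_avg_pred(uint16_t *comp_pred, const uint16_t *pred,
                              int width, int height, const uint16_t *ref,
                              int ref_stride) {
  for (int i = 0; i < height; ++i) {
    for (int j = 0; j < width; ++j) {
      /* +1 before the shift rounds to nearest instead of truncating */
      comp_pred[j] = (uint16_t)((pred[j] + ref[j] + 1) >> 1);
    }
    comp_pred += width;
    pred += width;
    ref += ref_stride;
  }
}
```

Passing 16-bit pointers directly avoids re-deriving the short pointer inside the hot loop and makes the high-bit-depth contract explicit in the prototype.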
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/variance.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/variance.h
index 1005732..f8b44f0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/variance.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/variance.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_VARIANCE_H_
-#define VPX_DSP_VARIANCE_H_
+#ifndef VPX_VPX_DSP_VARIANCE_H_
+#define VPX_VPX_DSP_VARIANCE_H_
 
 #include "./vpx_config.h"
 
@@ -22,37 +22,38 @@
 #define FILTER_BITS 7
 #define FILTER_WEIGHT 128
 
-typedef unsigned int (*vpx_sad_fn_t)(const uint8_t *a, int a_stride,
-                                     const uint8_t *b_ptr, int b_stride);
+typedef unsigned int (*vpx_sad_fn_t)(const uint8_t *src_ptr, int src_stride,
+                                     const uint8_t *ref_ptr, int ref_stride);
 
-typedef unsigned int (*vpx_sad_avg_fn_t)(const uint8_t *a_ptr, int a_stride,
-                                         const uint8_t *b_ptr, int b_stride,
+typedef unsigned int (*vpx_sad_avg_fn_t)(const uint8_t *src_ptr, int src_stride,
+                                         const uint8_t *ref_ptr, int ref_stride,
                                          const uint8_t *second_pred);
 
-typedef void (*vp8_copy32xn_fn_t)(const uint8_t *a, int a_stride, uint8_t *b,
-                                  int b_stride, int n);
+typedef void (*vp8_copy32xn_fn_t)(const uint8_t *src_ptr, int src_stride,
+                                  uint8_t *ref_ptr, int ref_stride, int n);
 
-typedef void (*vpx_sad_multi_fn_t)(const uint8_t *a, int a_stride,
-                                   const uint8_t *b, int b_stride,
+typedef void (*vpx_sad_multi_fn_t)(const uint8_t *src_ptr, int src_stride,
+                                   const uint8_t *ref_ptr, int ref_stride,
                                    unsigned int *sad_array);
 
-typedef void (*vpx_sad_multi_d_fn_t)(const uint8_t *a, int a_stride,
+typedef void (*vpx_sad_multi_d_fn_t)(const uint8_t *src_ptr, int src_stride,
                                      const uint8_t *const b_array[],
-                                     int b_stride, unsigned int *sad_array);
+                                     int ref_stride, unsigned int *sad_array);
 
-typedef unsigned int (*vpx_variance_fn_t)(const uint8_t *a, int a_stride,
-                                          const uint8_t *b, int b_stride,
-                                          unsigned int *sse);
+typedef unsigned int (*vpx_variance_fn_t)(const uint8_t *src_ptr,
+                                          int src_stride,
+                                          const uint8_t *ref_ptr,
+                                          int ref_stride, unsigned int *sse);
 
-typedef unsigned int (*vpx_subpixvariance_fn_t)(const uint8_t *a, int a_stride,
-                                                int xoffset, int yoffset,
-                                                const uint8_t *b, int b_stride,
-                                                unsigned int *sse);
+typedef unsigned int (*vpx_subpixvariance_fn_t)(
+    const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,
+    const uint8_t *ref_ptr, int ref_stride, unsigned int *sse);
 
 typedef unsigned int (*vpx_subp_avg_variance_fn_t)(
-    const uint8_t *a_ptr, int a_stride, int xoffset, int yoffset,
-    const uint8_t *b_ptr, int b_stride, unsigned int *sse,
+    const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,
+    const uint8_t *ref_ptr, int ref_stride, unsigned int *sse,
     const uint8_t *second_pred);
+
 #if CONFIG_VP8
 typedef struct variance_vtable {
   vpx_sad_fn_t sdf;
@@ -61,7 +62,7 @@
   vpx_sad_multi_fn_t sdx3f;
   vpx_sad_multi_fn_t sdx8f;
   vpx_sad_multi_d_fn_t sdx4df;
-#if ARCH_X86 || ARCH_X86_64
+#if VPX_ARCH_X86 || VPX_ARCH_X86_64
   vp8_copy32xn_fn_t copymem;
 #endif
 } vp8_variance_fn_ptr_t;
@@ -75,6 +76,7 @@
   vpx_subpixvariance_fn_t svf;
   vpx_subp_avg_variance_fn_t svaf;
   vpx_sad_multi_d_fn_t sdx4df;
+  vpx_sad_multi_fn_t sdx8f;
 } vp9_variance_fn_ptr_t;
 #endif  // CONFIG_VP9
 
@@ -82,4 +84,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_VARIANCE_H_
+#endif  // VPX_VPX_DSP_VARIANCE_H_
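The `vpx_variance_fn_t` typedef renamed above (`a`/`b` to `src_ptr`/`ref_ptr`) describes functions that return the block variance and write the sum of squared errors through `*sse`. A minimal 8x8 sketch of that contract, with illustrative names rather than libvpx's optimized kernels:

```c
#include <assert.h>
#include <stdint.h>

/* Matches the vpx_variance_fn_t shape for an 8x8 block:
 * *sse  = sum of squared pixel differences,
 * return = *sse - sum^2 / (8*8), i.e. SSE with the mean removed. */
static unsigned int toy_variance8x8(const uint8_t *src_ptr, int src_stride,
                                    const uint8_t *ref_ptr, int ref_stride,
                                    unsigned int *sse) {
  int sum = 0;
  *sse = 0;
  for (int i = 0; i < 8; ++i) {
    for (int j = 0; j < 8; ++j) {
      const int diff = src_ptr[j] - ref_ptr[j];
      sum += diff;
      *sse += (unsigned int)(diff * diff);
    }
    src_ptr += src_stride;
    ref_ptr += ref_stride;
  }
  /* 8 * 8 = 64 = 2^6, so the division by the block area is a shift */
  return *sse - (unsigned int)(((int64_t)sum * sum) >> 6);
}
```

A constant brightness offset between `src` and `ref` yields a nonzero SSE but zero variance, which is why motion search uses variance rather than raw SSE to tolerate DC shifts.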
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_convolve.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_convolve.h
index 7979268..d5793e1 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_convolve.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_convolve.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef VPX_DSP_VPX_CONVOLVE_H_
-#define VPX_DSP_VPX_CONVOLVE_H_
+#ifndef VPX_VPX_DSP_VPX_CONVOLVE_H_
+#define VPX_VPX_DSP_VPX_CONVOLVE_H_
 
 #include "./vpx_config.h"
 #include "vpx/vpx_integer.h"
@@ -35,4 +35,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_VPX_CONVOLVE_H_
+#endif  // VPX_VPX_DSP_VPX_CONVOLVE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp.mk b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp.mk
index a4a6fa0..0165310 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp.mk
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp.mk
@@ -47,13 +47,11 @@
 # intra predictions
 DSP_SRCS-yes += intrapred.c
 
-DSP_SRCS-$(HAVE_SSE) += x86/intrapred_sse2.asm
 DSP_SRCS-$(HAVE_SSE2) += x86/intrapred_sse2.asm
 DSP_SRCS-$(HAVE_SSSE3) += x86/intrapred_ssse3.asm
 DSP_SRCS-$(HAVE_VSX) += ppc/intrapred_vsx.c
 
 ifeq ($(CONFIG_VP9_HIGHBITDEPTH),yes)
-DSP_SRCS-$(HAVE_SSE)  += x86/highbd_intrapred_sse2.asm
 DSP_SRCS-$(HAVE_SSE2) += x86/highbd_intrapred_sse2.asm
 DSP_SRCS-$(HAVE_SSE2) += x86/highbd_intrapred_intrin_sse2.c
 DSP_SRCS-$(HAVE_SSSE3) += x86/highbd_intrapred_intrin_ssse3.c
@@ -69,6 +67,8 @@
 DSP_SRCS-$(HAVE_NEON) += arm/deblock_neon.c
 DSP_SRCS-$(HAVE_SSE2) += x86/add_noise_sse2.asm
 DSP_SRCS-$(HAVE_SSE2) += x86/deblock_sse2.asm
+DSP_SRCS-$(HAVE_SSE2) += x86/post_proc_sse2.c
+DSP_SRCS-$(HAVE_VSX) += ppc/deblock_vsx.c
 endif # CONFIG_POSTPROC
 
 DSP_SRCS-$(HAVE_NEON_ASM) += arm/intrapred_neon_asm$(ASM)
@@ -81,16 +81,19 @@
 DSP_SRCS-$(HAVE_DSPR2)  += mips/common_dspr2.h
 DSP_SRCS-$(HAVE_DSPR2)  += mips/common_dspr2.c
 
+DSP_SRCS-yes += vpx_filter.h
+ifeq ($(CONFIG_VP9),yes)
 # interpolation filters
 DSP_SRCS-yes += vpx_convolve.c
 DSP_SRCS-yes += vpx_convolve.h
-DSP_SRCS-yes += vpx_filter.h
 
-DSP_SRCS-$(ARCH_X86)$(ARCH_X86_64) += x86/convolve.h
-DSP_SRCS-$(ARCH_X86)$(ARCH_X86_64) += x86/vpx_asm_stubs.c
+DSP_SRCS-$(VPX_ARCH_X86)$(VPX_ARCH_X86_64) += x86/convolve.h
+
+DSP_SRCS-$(HAVE_SSE2) += x86/convolve_sse2.h
 DSP_SRCS-$(HAVE_SSSE3) += x86/convolve_ssse3.h
 DSP_SRCS-$(HAVE_AVX2) += x86/convolve_avx2.h
 DSP_SRCS-$(HAVE_SSE2)  += x86/vpx_subpixel_8t_sse2.asm
+DSP_SRCS-$(HAVE_SSE2)  += x86/vpx_subpixel_4t_intrin_sse2.c
 DSP_SRCS-$(HAVE_SSE2)  += x86/vpx_subpixel_bilinear_sse2.asm
 DSP_SRCS-$(HAVE_SSSE3) += x86/vpx_subpixel_8t_ssse3.asm
 DSP_SRCS-$(HAVE_SSSE3) += x86/vpx_subpixel_bilinear_ssse3.asm
@@ -111,9 +114,17 @@
 
 ifeq ($(HAVE_NEON_ASM),yes)
 DSP_SRCS-yes += arm/vpx_convolve_copy_neon_asm$(ASM)
-DSP_SRCS-yes += arm/vpx_convolve8_avg_neon_asm$(ASM)
-DSP_SRCS-yes += arm/vpx_convolve8_neon_asm$(ASM)
+DSP_SRCS-yes += arm/vpx_convolve8_horiz_filter_type2_neon$(ASM)
+DSP_SRCS-yes += arm/vpx_convolve8_vert_filter_type2_neon$(ASM)
+DSP_SRCS-yes += arm/vpx_convolve8_horiz_filter_type1_neon$(ASM)
+DSP_SRCS-yes += arm/vpx_convolve8_vert_filter_type1_neon$(ASM)
+DSP_SRCS-yes += arm/vpx_convolve8_avg_horiz_filter_type2_neon$(ASM)
+DSP_SRCS-yes += arm/vpx_convolve8_avg_vert_filter_type2_neon$(ASM)
+DSP_SRCS-yes += arm/vpx_convolve8_avg_horiz_filter_type1_neon$(ASM)
+DSP_SRCS-yes += arm/vpx_convolve8_avg_vert_filter_type1_neon$(ASM)
 DSP_SRCS-yes += arm/vpx_convolve_avg_neon_asm$(ASM)
+DSP_SRCS-yes += arm/vpx_convolve8_neon_asm.c
+DSP_SRCS-yes += arm/vpx_convolve8_neon_asm.h
 DSP_SRCS-yes += arm/vpx_convolve_neon.c
 else
 ifeq ($(HAVE_NEON),yes)
@@ -134,6 +145,7 @@
 DSP_SRCS-$(HAVE_MSA) += mips/vpx_convolve_avg_msa.c
 DSP_SRCS-$(HAVE_MSA) += mips/vpx_convolve_copy_msa.c
 DSP_SRCS-$(HAVE_MSA) += mips/vpx_convolve_msa.h
+DSP_SRCS-$(HAVE_MMI) += mips/vpx_convolve8_mmi.c
 
 # common (dspr2)
 DSP_SRCS-$(HAVE_DSPR2)  += mips/convolve_common_dspr2.h
@@ -153,8 +165,8 @@
 # loop filters
 DSP_SRCS-yes += loopfilter.c
 
-DSP_SRCS-$(ARCH_X86)$(ARCH_X86_64)   += x86/loopfilter_sse2.c
-DSP_SRCS-$(HAVE_AVX2)                += x86/loopfilter_avx2.c
+DSP_SRCS-$(HAVE_SSE2)  += x86/loopfilter_sse2.c
+DSP_SRCS-$(HAVE_AVX2)  += x86/loopfilter_avx2.c
 
 ifeq ($(HAVE_NEON_ASM),yes)
 DSP_SRCS-yes  += arm/loopfilter_16_neon$(ASM)
@@ -180,6 +192,7 @@
 DSP_SRCS-$(HAVE_NEON)   += arm/highbd_loopfilter_neon.c
 DSP_SRCS-$(HAVE_SSE2)   += x86/highbd_loopfilter_sse2.c
 endif  # CONFIG_VP9_HIGHBITDEPTH
+endif # CONFIG_VP9
 
 DSP_SRCS-yes            += txfm_common.h
 DSP_SRCS-$(HAVE_SSE2)   += x86/txfm_common_sse2.h
@@ -192,7 +205,7 @@
 DSP_SRCS-$(HAVE_SSE2)   += x86/fwd_txfm_sse2.c
 DSP_SRCS-$(HAVE_SSE2)   += x86/fwd_txfm_impl_sse2.h
 DSP_SRCS-$(HAVE_SSE2)   += x86/fwd_dct32x32_impl_sse2.h
-ifeq ($(ARCH_X86_64),yes)
+ifeq ($(VPX_ARCH_X86_64),yes)
 DSP_SRCS-$(HAVE_SSSE3)  += x86/fwd_txfm_ssse3_x86_64.asm
 endif
 DSP_SRCS-$(HAVE_AVX2)   += x86/fwd_txfm_avx2.c
@@ -204,7 +217,12 @@
 DSP_SRCS-$(HAVE_NEON)   += arm/fwd_txfm_neon.c
 DSP_SRCS-$(HAVE_MSA)    += mips/fwd_txfm_msa.h
 DSP_SRCS-$(HAVE_MSA)    += mips/fwd_txfm_msa.c
+
+ifneq ($(CONFIG_VP9_HIGHBITDEPTH),yes)
 DSP_SRCS-$(HAVE_MSA)    += mips/fwd_dct32x32_msa.c
+endif  # !CONFIG_VP9_HIGHBITDEPTH
+
+DSP_SRCS-$(HAVE_VSX)    += ppc/fdct32x32_vsx.c
 endif  # CONFIG_VP9_ENCODER
 
 # inverse transform
@@ -280,11 +298,13 @@
 DSP_SRCS-yes            += quantize.c
 DSP_SRCS-yes            += quantize.h
 
-DSP_SRCS-$(HAVE_SSE2)   += x86/quantize_x86.h
 DSP_SRCS-$(HAVE_SSE2)   += x86/quantize_sse2.c
+DSP_SRCS-$(HAVE_SSE2)   += x86/quantize_sse2.h
 DSP_SRCS-$(HAVE_SSSE3)  += x86/quantize_ssse3.c
+DSP_SRCS-$(HAVE_SSSE3)  += x86/quantize_ssse3.h
 DSP_SRCS-$(HAVE_AVX)    += x86/quantize_avx.c
 DSP_SRCS-$(HAVE_NEON)   += arm/quantize_neon.c
+DSP_SRCS-$(HAVE_VSX)    += ppc/quantize_vsx.c
 ifeq ($(CONFIG_VP9_HIGHBITDEPTH),yes)
 DSP_SRCS-$(HAVE_SSE2)   += x86/highbd_quantize_intrin_sse2.c
 endif
@@ -296,7 +316,7 @@
 DSP_SRCS-$(HAVE_NEON)  += arm/avg_neon.c
 DSP_SRCS-$(HAVE_NEON)  += arm/hadamard_neon.c
 DSP_SRCS-$(HAVE_MSA)   += mips/avg_msa.c
-ifeq ($(ARCH_X86_64),yes)
+ifeq ($(VPX_ARCH_X86_64),yes)
 DSP_SRCS-$(HAVE_SSSE3) += x86/avg_ssse3_x86_64.asm
 endif
 DSP_SRCS-$(HAVE_VSX)   += ppc/hadamard_vsx.c
@@ -311,6 +331,7 @@
 DSP_SRCS-yes            += sad.c
 DSP_SRCS-yes            += subtract.c
 DSP_SRCS-yes            += sum_squares.c
+DSP_SRCS-$(HAVE_NEON)   += arm/sum_squares_neon.c
 DSP_SRCS-$(HAVE_SSE2)   += x86/sum_squares_sse2.c
 DSP_SRCS-$(HAVE_MSA)    += mips/sum_squares_msa.c
 
@@ -331,13 +352,12 @@
 DSP_SRCS-$(HAVE_AVX2)   += x86/sad_avx2.c
 DSP_SRCS-$(HAVE_AVX512) += x86/sad4d_avx512.c
 
-DSP_SRCS-$(HAVE_SSE)    += x86/sad4d_sse2.asm
-DSP_SRCS-$(HAVE_SSE)    += x86/sad_sse2.asm
 DSP_SRCS-$(HAVE_SSE2)   += x86/sad4d_sse2.asm
 DSP_SRCS-$(HAVE_SSE2)   += x86/sad_sse2.asm
 DSP_SRCS-$(HAVE_SSE2)   += x86/subtract_sse2.asm
 
 DSP_SRCS-$(HAVE_VSX) += ppc/sad_vsx.c
+DSP_SRCS-$(HAVE_VSX) += ppc/subtract_vsx.c
 
 ifeq ($(CONFIG_VP9_HIGHBITDEPTH),yes)
 DSP_SRCS-$(HAVE_SSE2) += x86/highbd_sad4d_sse2.asm
@@ -359,17 +379,15 @@
 
 DSP_SRCS-$(HAVE_MMI)    += mips/variance_mmi.c
 
-DSP_SRCS-$(HAVE_SSE)    += x86/variance_sse2.c
 DSP_SRCS-$(HAVE_SSE2)   += x86/avg_pred_sse2.c
 DSP_SRCS-$(HAVE_SSE2)   += x86/variance_sse2.c  # Contains SSE2 and SSSE3
 DSP_SRCS-$(HAVE_AVX2)   += x86/variance_avx2.c
 DSP_SRCS-$(HAVE_VSX)    += ppc/variance_vsx.c
 
-ifeq ($(ARCH_X86_64),yes)
+ifeq ($(VPX_ARCH_X86_64),yes)
 DSP_SRCS-$(HAVE_SSE2)   += x86/ssim_opt_x86_64.asm
-endif  # ARCH_X86_64
+endif  # VPX_ARCH_X86_64
 
-DSP_SRCS-$(HAVE_SSE)    += x86/subpel_variance_sse2.asm
 DSP_SRCS-$(HAVE_SSE2)   += x86/subpel_variance_sse2.asm  # Contains SSE2 and SSSE3
 
 ifeq ($(CONFIG_VP9_HIGHBITDEPTH),yes)
@@ -387,6 +405,7 @@
 
 # PPC VSX utilities
 DSP_SRCS-$(HAVE_VSX)  += ppc/types_vsx.h
+DSP_SRCS-$(HAVE_VSX)  += ppc/txfm_common_vsx.h
 DSP_SRCS-$(HAVE_VSX)  += ppc/transpose_vsx.h
 DSP_SRCS-$(HAVE_VSX)  += ppc/bitdepth_conversion_vsx.h
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp_common.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp_common.h
index c8c8523..2de4495 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp_common.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp_common.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_VPX_DSP_COMMON_H_
-#define VPX_DSP_VPX_DSP_COMMON_H_
+#ifndef VPX_VPX_DSP_VPX_DSP_COMMON_H_
+#define VPX_VPX_DSP_VPX_DSP_COMMON_H_
 
 #include "./vpx_config.h"
 #include "vpx/vpx_integer.h"
@@ -25,8 +25,8 @@
 #define VPX_SWAP(type, a, b) \
   do {                       \
     type c = (b);            \
-    b = a;                   \
-    a = c;                   \
+    (b) = a;                 \
+    (a) = c;                 \
   } while (0)
 
 #if CONFIG_VP9_HIGHBITDEPTH
@@ -57,6 +57,10 @@
   return value < low ? low : (value > high ? high : value);
 }
 
+static INLINE int64_t lclamp(int64_t value, int64_t low, int64_t high) {
+  return value < low ? low : (value > high ? high : value);
+}
+
 static INLINE uint16_t clip_pixel_highbd(int val, int bd) {
   switch (bd) {
     case 8:
@@ -70,4 +74,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_VPX_DSP_COMMON_H_
+#endif  // VPX_VPX_DSP_VPX_DSP_COMMON_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp_rtcd_defs.pl b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp_rtcd_defs.pl
index 41ba20f..fd7eefd 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp_rtcd_defs.pl
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_dsp_rtcd_defs.pl
@@ -37,325 +37,333 @@
 # Intra prediction
 #
 
-add_proto qw/void vpx_d207_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d207_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d207_predictor_4x4 sse2/;
 
-add_proto qw/void vpx_d45_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d45_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d45_predictor_4x4 neon sse2/;
 
-add_proto qw/void vpx_d45e_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d45e_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 
-add_proto qw/void vpx_d63_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d63_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d63_predictor_4x4 ssse3/;
 
-add_proto qw/void vpx_d63e_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d63e_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 
-add_proto qw/void vpx_h_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
-specialize qw/vpx_h_predictor_4x4 neon dspr2 msa sse2 vsx/;
+add_proto qw/void vpx_h_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
+# TODO(crbug.com/webm/1522): Re-enable vsx implementation.
+specialize qw/vpx_h_predictor_4x4 neon dspr2 msa sse2/;
 
-add_proto qw/void vpx_he_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_he_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 
-add_proto qw/void vpx_d117_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d117_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 
-add_proto qw/void vpx_d135_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d135_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d135_predictor_4x4 neon/;
 
-add_proto qw/void vpx_d153_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d153_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d153_predictor_4x4 ssse3/;
 
-add_proto qw/void vpx_v_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_v_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_v_predictor_4x4 neon msa sse2/;
 
-add_proto qw/void vpx_ve_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_ve_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 
-add_proto qw/void vpx_tm_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
-specialize qw/vpx_tm_predictor_4x4 neon dspr2 msa sse2 vsx/;
+add_proto qw/void vpx_tm_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
+# TODO(crbug.com/webm/1522): Re-enable vsx implementation.
+specialize qw/vpx_tm_predictor_4x4 neon dspr2 msa sse2/;
 
-add_proto qw/void vpx_dc_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_dc_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_dc_predictor_4x4 dspr2 msa neon sse2/;
 
-add_proto qw/void vpx_dc_top_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_dc_top_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_dc_top_predictor_4x4 msa neon sse2/;
 
-add_proto qw/void vpx_dc_left_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_dc_left_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_dc_left_predictor_4x4 msa neon sse2/;
 
-add_proto qw/void vpx_dc_128_predictor_4x4/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_dc_128_predictor_4x4/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_dc_128_predictor_4x4 msa neon sse2/;
 
-add_proto qw/void vpx_d207_predictor_8x8/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d207_predictor_8x8/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d207_predictor_8x8 ssse3/;
 
-add_proto qw/void vpx_d45_predictor_8x8/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
-specialize qw/vpx_d45_predictor_8x8 neon sse2 vsx/;
+add_proto qw/void vpx_d45_predictor_8x8/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
+# TODO(crbug.com/webm/1522): Re-enable vsx implementation.
+specialize qw/vpx_d45_predictor_8x8 neon sse2/;
 
-add_proto qw/void vpx_d63_predictor_8x8/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
-specialize qw/vpx_d63_predictor_8x8 ssse3 vsx/;
+add_proto qw/void vpx_d63_predictor_8x8/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
+# TODO(crbug.com/webm/1522): Re-enable vsx implementation.
+specialize qw/vpx_d63_predictor_8x8 ssse3/;
 
-add_proto qw/void vpx_h_predictor_8x8/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
-specialize qw/vpx_h_predictor_8x8 neon dspr2 msa sse2 vsx/;
+add_proto qw/void vpx_h_predictor_8x8/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
+# TODO(crbug.com/webm/1522): Re-enable vsx implementation.
+specialize qw/vpx_h_predictor_8x8 neon dspr2 msa sse2/;
 
-add_proto qw/void vpx_d117_predictor_8x8/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d117_predictor_8x8/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 
-add_proto qw/void vpx_d135_predictor_8x8/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d135_predictor_8x8/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d135_predictor_8x8 neon/;
 
-add_proto qw/void vpx_d153_predictor_8x8/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d153_predictor_8x8/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d153_predictor_8x8 ssse3/;
 
-add_proto qw/void vpx_v_predictor_8x8/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_v_predictor_8x8/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_v_predictor_8x8 neon msa sse2/;
 
-add_proto qw/void vpx_tm_predictor_8x8/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
-specialize qw/vpx_tm_predictor_8x8 neon dspr2 msa sse2 vsx/;
+add_proto qw/void vpx_tm_predictor_8x8/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
+# TODO(crbug.com/webm/1522): Re-enable vsx implementation.
+specialize qw/vpx_tm_predictor_8x8 neon dspr2 msa sse2/;
 
-add_proto qw/void vpx_dc_predictor_8x8/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
-specialize qw/vpx_dc_predictor_8x8 dspr2 neon msa sse2 vsx/;
+add_proto qw/void vpx_dc_predictor_8x8/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
+# TODO(crbug.com/webm/1522): Re-enable vsx implementation.
+specialize qw/vpx_dc_predictor_8x8 dspr2 neon msa sse2/;
 
-add_proto qw/void vpx_dc_top_predictor_8x8/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_dc_top_predictor_8x8/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_dc_top_predictor_8x8 neon msa sse2/;
 
-add_proto qw/void vpx_dc_left_predictor_8x8/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_dc_left_predictor_8x8/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_dc_left_predictor_8x8 neon msa sse2/;
 
-add_proto qw/void vpx_dc_128_predictor_8x8/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_dc_128_predictor_8x8/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_dc_128_predictor_8x8 neon msa sse2/;
 
-add_proto qw/void vpx_d207_predictor_16x16/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d207_predictor_16x16/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d207_predictor_16x16 ssse3/;
 
-add_proto qw/void vpx_d45_predictor_16x16/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d45_predictor_16x16/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d45_predictor_16x16 neon ssse3 vsx/;
 
-add_proto qw/void vpx_d63_predictor_16x16/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d63_predictor_16x16/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d63_predictor_16x16 ssse3 vsx/;
 
-add_proto qw/void vpx_h_predictor_16x16/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_h_predictor_16x16/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_h_predictor_16x16 neon dspr2 msa sse2 vsx/;
 
-add_proto qw/void vpx_d117_predictor_16x16/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d117_predictor_16x16/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 
-add_proto qw/void vpx_d135_predictor_16x16/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d135_predictor_16x16/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d135_predictor_16x16 neon/;
 
-add_proto qw/void vpx_d153_predictor_16x16/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d153_predictor_16x16/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d153_predictor_16x16 ssse3/;
 
-add_proto qw/void vpx_v_predictor_16x16/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_v_predictor_16x16/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_v_predictor_16x16 neon msa sse2 vsx/;
 
-add_proto qw/void vpx_tm_predictor_16x16/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_tm_predictor_16x16/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_tm_predictor_16x16 neon msa sse2 vsx/;
 
-add_proto qw/void vpx_dc_predictor_16x16/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_dc_predictor_16x16/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_dc_predictor_16x16 dspr2 neon msa sse2 vsx/;
 
-add_proto qw/void vpx_dc_top_predictor_16x16/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_dc_top_predictor_16x16/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_dc_top_predictor_16x16 neon msa sse2 vsx/;
 
-add_proto qw/void vpx_dc_left_predictor_16x16/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_dc_left_predictor_16x16/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_dc_left_predictor_16x16 neon msa sse2 vsx/;
 
-add_proto qw/void vpx_dc_128_predictor_16x16/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_dc_128_predictor_16x16/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_dc_128_predictor_16x16 neon msa sse2 vsx/;
 
-add_proto qw/void vpx_d207_predictor_32x32/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d207_predictor_32x32/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d207_predictor_32x32 ssse3/;
 
-add_proto qw/void vpx_d45_predictor_32x32/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d45_predictor_32x32/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d45_predictor_32x32 neon ssse3 vsx/;
 
-add_proto qw/void vpx_d63_predictor_32x32/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d63_predictor_32x32/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d63_predictor_32x32 ssse3 vsx/;
 
-add_proto qw/void vpx_h_predictor_32x32/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_h_predictor_32x32/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_h_predictor_32x32 neon msa sse2 vsx/;
 
-add_proto qw/void vpx_d117_predictor_32x32/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d117_predictor_32x32/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 
-add_proto qw/void vpx_d135_predictor_32x32/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d135_predictor_32x32/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d135_predictor_32x32 neon/;
 
-add_proto qw/void vpx_d153_predictor_32x32/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_d153_predictor_32x32/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_d153_predictor_32x32 ssse3/;
 
-add_proto qw/void vpx_v_predictor_32x32/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_v_predictor_32x32/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_v_predictor_32x32 neon msa sse2 vsx/;
 
-add_proto qw/void vpx_tm_predictor_32x32/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_tm_predictor_32x32/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_tm_predictor_32x32 neon msa sse2 vsx/;
 
-add_proto qw/void vpx_dc_predictor_32x32/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_dc_predictor_32x32/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_dc_predictor_32x32 msa neon sse2 vsx/;
 
-add_proto qw/void vpx_dc_top_predictor_32x32/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_dc_top_predictor_32x32/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_dc_top_predictor_32x32 msa neon sse2 vsx/;
 
-add_proto qw/void vpx_dc_left_predictor_32x32/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_dc_left_predictor_32x32/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_dc_left_predictor_32x32 msa neon sse2 vsx/;
 
-add_proto qw/void vpx_dc_128_predictor_32x32/, "uint8_t *dst, ptrdiff_t y_stride, const uint8_t *above, const uint8_t *left";
+add_proto qw/void vpx_dc_128_predictor_32x32/, "uint8_t *dst, ptrdiff_t stride, const uint8_t *above, const uint8_t *left";
 specialize qw/vpx_dc_128_predictor_32x32 msa neon sse2 vsx/;
 
 # High bitdepth functions
 if (vpx_config("CONFIG_VP9_HIGHBITDEPTH") eq "yes") {
-  add_proto qw/void vpx_highbd_d207_predictor_4x4/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d207_predictor_4x4/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d207_predictor_4x4 sse2/;
 
-  add_proto qw/void vpx_highbd_d45_predictor_4x4/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d45_predictor_4x4/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d45_predictor_4x4 neon ssse3/;
 
-  add_proto qw/void vpx_highbd_d63_predictor_4x4/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d63_predictor_4x4/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d63_predictor_4x4 sse2/;
 
-  add_proto qw/void vpx_highbd_h_predictor_4x4/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_h_predictor_4x4/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_h_predictor_4x4 neon sse2/;
 
-  add_proto qw/void vpx_highbd_d117_predictor_4x4/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d117_predictor_4x4/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d117_predictor_4x4 sse2/;
 
-  add_proto qw/void vpx_highbd_d135_predictor_4x4/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d135_predictor_4x4/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d135_predictor_4x4 neon sse2/;
 
-  add_proto qw/void vpx_highbd_d153_predictor_4x4/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d153_predictor_4x4/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d153_predictor_4x4 sse2/;
 
-  add_proto qw/void vpx_highbd_v_predictor_4x4/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_v_predictor_4x4/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_v_predictor_4x4 neon sse2/;
 
-  add_proto qw/void vpx_highbd_tm_predictor_4x4/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_tm_predictor_4x4/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_tm_predictor_4x4 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_predictor_4x4/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_predictor_4x4/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_predictor_4x4 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_top_predictor_4x4/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_top_predictor_4x4/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_top_predictor_4x4 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_left_predictor_4x4/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_left_predictor_4x4/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_left_predictor_4x4 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_128_predictor_4x4/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_128_predictor_4x4/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_128_predictor_4x4 neon sse2/;
 
-  add_proto qw/void vpx_highbd_d207_predictor_8x8/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d207_predictor_8x8/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d207_predictor_8x8 ssse3/;
 
-  add_proto qw/void vpx_highbd_d45_predictor_8x8/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d45_predictor_8x8/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d45_predictor_8x8 neon ssse3/;
 
-  add_proto qw/void vpx_highbd_d63_predictor_8x8/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d63_predictor_8x8/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d63_predictor_8x8 ssse3/;
 
-  add_proto qw/void vpx_highbd_h_predictor_8x8/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_h_predictor_8x8/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_h_predictor_8x8 neon sse2/;
 
-  add_proto qw/void vpx_highbd_d117_predictor_8x8/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d117_predictor_8x8/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d117_predictor_8x8 ssse3/;
 
-  add_proto qw/void vpx_highbd_d135_predictor_8x8/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d135_predictor_8x8/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d135_predictor_8x8 neon ssse3/;
 
-  add_proto qw/void vpx_highbd_d153_predictor_8x8/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d153_predictor_8x8/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d153_predictor_8x8 ssse3/;
 
-  add_proto qw/void vpx_highbd_v_predictor_8x8/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_v_predictor_8x8/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_v_predictor_8x8 neon sse2/;
 
-  add_proto qw/void vpx_highbd_tm_predictor_8x8/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_tm_predictor_8x8/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_tm_predictor_8x8 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_predictor_8x8/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_predictor_8x8/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_predictor_8x8 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_top_predictor_8x8/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_top_predictor_8x8/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_top_predictor_8x8 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_left_predictor_8x8/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_left_predictor_8x8/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_left_predictor_8x8 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_128_predictor_8x8/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_128_predictor_8x8/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_128_predictor_8x8 neon sse2/;
 
-  add_proto qw/void vpx_highbd_d207_predictor_16x16/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d207_predictor_16x16/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d207_predictor_16x16 ssse3/;
 
-  add_proto qw/void vpx_highbd_d45_predictor_16x16/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d45_predictor_16x16/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d45_predictor_16x16 neon ssse3/;
 
-  add_proto qw/void vpx_highbd_d63_predictor_16x16/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d63_predictor_16x16/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d63_predictor_16x16 ssse3/;
 
-  add_proto qw/void vpx_highbd_h_predictor_16x16/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_h_predictor_16x16/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_h_predictor_16x16 neon sse2/;
 
-  add_proto qw/void vpx_highbd_d117_predictor_16x16/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d117_predictor_16x16/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d117_predictor_16x16 ssse3/;
 
-  add_proto qw/void vpx_highbd_d135_predictor_16x16/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d135_predictor_16x16/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d135_predictor_16x16 neon ssse3/;
 
-  add_proto qw/void vpx_highbd_d153_predictor_16x16/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d153_predictor_16x16/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d153_predictor_16x16 ssse3/;
 
-  add_proto qw/void vpx_highbd_v_predictor_16x16/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_v_predictor_16x16/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_v_predictor_16x16 neon sse2/;
 
-  add_proto qw/void vpx_highbd_tm_predictor_16x16/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_tm_predictor_16x16/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_tm_predictor_16x16 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_predictor_16x16/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_predictor_16x16/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_predictor_16x16 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_top_predictor_16x16/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_top_predictor_16x16/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_top_predictor_16x16 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_left_predictor_16x16/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_left_predictor_16x16/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_left_predictor_16x16 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_128_predictor_16x16/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_128_predictor_16x16/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_128_predictor_16x16 neon sse2/;
 
-  add_proto qw/void vpx_highbd_d207_predictor_32x32/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d207_predictor_32x32/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d207_predictor_32x32 ssse3/;
 
-  add_proto qw/void vpx_highbd_d45_predictor_32x32/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d45_predictor_32x32/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d45_predictor_32x32 neon ssse3/;
 
-  add_proto qw/void vpx_highbd_d63_predictor_32x32/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d63_predictor_32x32/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d63_predictor_32x32 ssse3/;
 
-  add_proto qw/void vpx_highbd_h_predictor_32x32/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_h_predictor_32x32/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_h_predictor_32x32 neon sse2/;
 
-  add_proto qw/void vpx_highbd_d117_predictor_32x32/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d117_predictor_32x32/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d117_predictor_32x32 ssse3/;
 
-  add_proto qw/void vpx_highbd_d135_predictor_32x32/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d135_predictor_32x32/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d135_predictor_32x32 neon ssse3/;
 
-  add_proto qw/void vpx_highbd_d153_predictor_32x32/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_d153_predictor_32x32/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_d153_predictor_32x32 ssse3/;
 
-  add_proto qw/void vpx_highbd_v_predictor_32x32/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_v_predictor_32x32/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_v_predictor_32x32 neon sse2/;
 
-  add_proto qw/void vpx_highbd_tm_predictor_32x32/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_tm_predictor_32x32/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_tm_predictor_32x32 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_predictor_32x32/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_predictor_32x32/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_predictor_32x32 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_top_predictor_32x32/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_top_predictor_32x32/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_top_predictor_32x32 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_left_predictor_32x32/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_left_predictor_32x32/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_left_predictor_32x32 neon sse2/;
 
-  add_proto qw/void vpx_highbd_dc_128_predictor_32x32/, "uint16_t *dst, ptrdiff_t y_stride, const uint16_t *above, const uint16_t *left, int bd";
+  add_proto qw/void vpx_highbd_dc_128_predictor_32x32/, "uint16_t *dst, ptrdiff_t stride, const uint16_t *above, const uint16_t *left, int bd";
   specialize qw/vpx_highbd_dc_128_predictor_32x32 neon sse2/;
 }  # CONFIG_VP9_HIGHBITDEPTH
 
+if (vpx_config("CONFIG_VP9") eq "yes") {
 #
 # Sub Pixel Filters
 #
@@ -363,25 +371,25 @@
 specialize qw/vpx_convolve_copy neon dspr2 msa sse2 vsx/;
 
 add_proto qw/void vpx_convolve_avg/, "const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h";
-specialize qw/vpx_convolve_avg neon dspr2 msa sse2 vsx/;
+specialize qw/vpx_convolve_avg neon dspr2 msa sse2 vsx mmi/;
 
 add_proto qw/void vpx_convolve8/, "const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h";
-specialize qw/vpx_convolve8 sse2 ssse3 avx2 neon dspr2 msa vsx/;
+specialize qw/vpx_convolve8 sse2 ssse3 avx2 neon dspr2 msa vsx mmi/;
 
 add_proto qw/void vpx_convolve8_horiz/, "const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h";
-specialize qw/vpx_convolve8_horiz sse2 ssse3 avx2 neon dspr2 msa vsx/;
+specialize qw/vpx_convolve8_horiz sse2 ssse3 avx2 neon dspr2 msa vsx mmi/;
 
 add_proto qw/void vpx_convolve8_vert/, "const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h";
-specialize qw/vpx_convolve8_vert sse2 ssse3 avx2 neon dspr2 msa vsx/;
+specialize qw/vpx_convolve8_vert sse2 ssse3 avx2 neon dspr2 msa vsx mmi/;
 
 add_proto qw/void vpx_convolve8_avg/, "const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h";
-specialize qw/vpx_convolve8_avg sse2 ssse3 avx2 neon dspr2 msa vsx/;
+specialize qw/vpx_convolve8_avg sse2 ssse3 avx2 neon dspr2 msa vsx mmi/;
 
 add_proto qw/void vpx_convolve8_avg_horiz/, "const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h";
-specialize qw/vpx_convolve8_avg_horiz sse2 ssse3 avx2 neon dspr2 msa vsx/;
+specialize qw/vpx_convolve8_avg_horiz sse2 ssse3 avx2 neon dspr2 msa vsx mmi/;
 
 add_proto qw/void vpx_convolve8_avg_vert/, "const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h";
-specialize qw/vpx_convolve8_avg_vert sse2 ssse3 avx2 neon dspr2 msa vsx/;
+specialize qw/vpx_convolve8_avg_vert sse2 ssse3 avx2 neon dspr2 msa vsx mmi/;
 
 add_proto qw/void vpx_scaled_2d/, "const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h";
 specialize qw/vpx_scaled_2d ssse3 neon msa/;
@@ -395,36 +403,38 @@
 add_proto qw/void vpx_scaled_avg_horiz/, "const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h";
 
 add_proto qw/void vpx_scaled_avg_vert/, "const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h";
+} #CONFIG_VP9
 
 if (vpx_config("CONFIG_VP9_HIGHBITDEPTH") eq "yes") {
   #
   # Sub Pixel Filters
   #
-  add_proto qw/void vpx_highbd_convolve_copy/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bps";
+  add_proto qw/void vpx_highbd_convolve_copy/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bd";
   specialize qw/vpx_highbd_convolve_copy sse2 avx2 neon/;
 
-  add_proto qw/void vpx_highbd_convolve_avg/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bps";
+  add_proto qw/void vpx_highbd_convolve_avg/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bd";
   specialize qw/vpx_highbd_convolve_avg sse2 avx2 neon/;
 
-  add_proto qw/void vpx_highbd_convolve8/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bps";
+  add_proto qw/void vpx_highbd_convolve8/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bd";
   specialize qw/vpx_highbd_convolve8 avx2 neon/, "$sse2_x86_64";
 
-  add_proto qw/void vpx_highbd_convolve8_horiz/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bps";
+  add_proto qw/void vpx_highbd_convolve8_horiz/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bd";
   specialize qw/vpx_highbd_convolve8_horiz avx2 neon/, "$sse2_x86_64";
 
-  add_proto qw/void vpx_highbd_convolve8_vert/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bps";
+  add_proto qw/void vpx_highbd_convolve8_vert/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bd";
   specialize qw/vpx_highbd_convolve8_vert avx2 neon/, "$sse2_x86_64";
 
-  add_proto qw/void vpx_highbd_convolve8_avg/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bps";
+  add_proto qw/void vpx_highbd_convolve8_avg/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bd";
   specialize qw/vpx_highbd_convolve8_avg avx2 neon/, "$sse2_x86_64";
 
-  add_proto qw/void vpx_highbd_convolve8_avg_horiz/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bps";
+  add_proto qw/void vpx_highbd_convolve8_avg_horiz/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bd";
   specialize qw/vpx_highbd_convolve8_avg_horiz avx2 neon/, "$sse2_x86_64";
 
-  add_proto qw/void vpx_highbd_convolve8_avg_vert/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bps";
+  add_proto qw/void vpx_highbd_convolve8_avg_vert/, "const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst, ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4, int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bd";
   specialize qw/vpx_highbd_convolve8_avg_vert avx2 neon/, "$sse2_x86_64";
 }  # CONFIG_VP9_HIGHBITDEPTH
 
+if (vpx_config("CONFIG_VP9") eq "yes") {
 #
 # Loopfilter
 #
@@ -463,6 +473,7 @@
 
 add_proto qw/void vpx_lpf_horizontal_4_dual/, "uint8_t *s, int pitch, const uint8_t *blimit0, const uint8_t *limit0, const uint8_t *thresh0, const uint8_t *blimit1, const uint8_t *limit1, const uint8_t *thresh1";
 specialize qw/vpx_lpf_horizontal_4_dual sse2 neon dspr2 msa/;
+} #CONFIG_VP9
 
 if (vpx_config("CONFIG_VP9_HIGHBITDEPTH") eq "yes") {
   add_proto qw/void vpx_highbd_lpf_vertical_16/, "uint16_t *s, int pitch, const uint8_t *blimit, const uint8_t *limit, const uint8_t *thresh, int bd";
@@ -583,7 +594,7 @@
   specialize qw/vpx_fdct32x32 neon sse2 avx2 msa/;
 
   add_proto qw/void vpx_fdct32x32_rd/, "const int16_t *input, tran_low_t *output, int stride";
-  specialize qw/vpx_fdct32x32_rd sse2 avx2 neon msa/;
+  specialize qw/vpx_fdct32x32_rd sse2 avx2 neon msa vsx/;
 
   add_proto qw/void vpx_fdct32x32_1/, "const int16_t *input, tran_low_t *output, int stride";
   specialize qw/vpx_fdct32x32_1 sse2 neon msa/;
@@ -626,7 +637,7 @@
   specialize qw/vpx_idct32x32_135_add neon sse2 ssse3/;
   specialize qw/vpx_idct32x32_34_add neon sse2 ssse3/;
   specialize qw/vpx_idct32x32_1_add neon sse2/;
-  specialize qw/vpx_iwht4x4_16_add sse2/;
+  specialize qw/vpx_iwht4x4_16_add sse2 vsx/;
 
   if (vpx_config("CONFIG_VP9_HIGHBITDEPTH") ne "yes") {
     # Note that these specializations are appended to the above ones.
@@ -699,10 +710,10 @@
 #
 if (vpx_config("CONFIG_VP9_ENCODER") eq "yes") {
   add_proto qw/void vpx_quantize_b/, "const tran_low_t *coeff_ptr, intptr_t n_coeffs, int skip_block, const int16_t *zbin_ptr, const int16_t *round_ptr, const int16_t *quant_ptr, const int16_t *quant_shift_ptr, tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr, uint16_t *eob_ptr, const int16_t *scan, const int16_t *iscan";
-  specialize qw/vpx_quantize_b neon sse2 ssse3 avx/;
+  specialize qw/vpx_quantize_b neon sse2 ssse3 avx vsx/;
 
   add_proto qw/void vpx_quantize_b_32x32/, "const tran_low_t *coeff_ptr, intptr_t n_coeffs, int skip_block, const int16_t *zbin_ptr, const int16_t *round_ptr, const int16_t *quant_ptr, const int16_t *quant_shift_ptr, tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr, uint16_t *eob_ptr, const int16_t *scan, const int16_t *iscan";
-  specialize qw/vpx_quantize_b_32x32 neon ssse3 avx/;
+  specialize qw/vpx_quantize_b_32x32 neon ssse3 avx vsx/;
 
   if (vpx_config("CONFIG_VP9_HIGHBITDEPTH") eq "yes") {
     add_proto qw/void vpx_highbd_quantize_b/, "const tran_low_t *coeff_ptr, intptr_t n_coeffs, int skip_block, const int16_t *zbin_ptr, const int16_t *round_ptr, const int16_t *quant_ptr, const int16_t *quant_shift_ptr, tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr, uint16_t *eob_ptr, const int16_t *scan, const int16_t *iscan";
@@ -718,7 +729,7 @@
 # Block subtraction
 #
 add_proto qw/void vpx_subtract_block/, "int rows, int cols, int16_t *diff_ptr, ptrdiff_t diff_stride, const uint8_t *src_ptr, ptrdiff_t src_stride, const uint8_t *pred_ptr, ptrdiff_t pred_stride";
-specialize qw/vpx_subtract_block neon msa mmi sse2/;
+specialize qw/vpx_subtract_block neon msa mmi sse2 vsx/;
 
 #
 # Single block SAD
@@ -748,13 +759,13 @@
 specialize qw/vpx_sad16x8 neon msa sse2 vsx mmi/;
 
 add_proto qw/unsigned int vpx_sad8x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride";
-specialize qw/vpx_sad8x16 neon msa sse2 mmi/;
+specialize qw/vpx_sad8x16 neon msa sse2 vsx mmi/;
 
 add_proto qw/unsigned int vpx_sad8x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride";
-specialize qw/vpx_sad8x8 neon msa sse2 mmi/;
+specialize qw/vpx_sad8x8 neon msa sse2 vsx mmi/;
 
 add_proto qw/unsigned int vpx_sad8x4/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride";
-specialize qw/vpx_sad8x4 neon msa sse2 mmi/;
+specialize qw/vpx_sad8x4 neon msa sse2 vsx mmi/;
 
 add_proto qw/unsigned int vpx_sad4x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride";
 specialize qw/vpx_sad4x8 neon msa sse2 mmi/;
@@ -782,8 +793,23 @@
     add_proto qw/void vpx_hadamard_16x16/, "const int16_t *src_diff, ptrdiff_t src_stride, tran_low_t *coeff";
     specialize qw/vpx_hadamard_16x16 avx2 sse2 neon vsx/;
 
+    add_proto qw/void vpx_hadamard_32x32/, "const int16_t *src_diff, ptrdiff_t src_stride, tran_low_t *coeff";
+    specialize qw/vpx_hadamard_32x32 sse2 avx2/;
+
+    add_proto qw/void vpx_highbd_hadamard_8x8/, "const int16_t *src_diff, ptrdiff_t src_stride, tran_low_t *coeff";
+    specialize qw/vpx_highbd_hadamard_8x8 avx2/;
+
+    add_proto qw/void vpx_highbd_hadamard_16x16/, "const int16_t *src_diff, ptrdiff_t src_stride, tran_low_t *coeff";
+    specialize qw/vpx_highbd_hadamard_16x16 avx2/;
+
+    add_proto qw/void vpx_highbd_hadamard_32x32/, "const int16_t *src_diff, ptrdiff_t src_stride, tran_low_t *coeff";
+    specialize qw/vpx_highbd_hadamard_32x32 avx2/;
+
     add_proto qw/int vpx_satd/, "const tran_low_t *coeff, int length";
     specialize qw/vpx_satd avx2 sse2 neon/;
+
+    add_proto qw/int vpx_highbd_satd/, "const tran_low_t *coeff, int length";
+    specialize qw/vpx_highbd_satd avx2/;
   } else {
     add_proto qw/void vpx_hadamard_8x8/, "const int16_t *src_diff, ptrdiff_t src_stride, int16_t *coeff";
     specialize qw/vpx_hadamard_8x8 sse2 neon msa vsx/, "$ssse3_x86_64";
@@ -791,6 +817,9 @@
     add_proto qw/void vpx_hadamard_16x16/, "const int16_t *src_diff, ptrdiff_t src_stride, int16_t *coeff";
     specialize qw/vpx_hadamard_16x16 avx2 sse2 neon msa vsx/;
 
+    add_proto qw/void vpx_hadamard_32x32/, "const int16_t *src_diff, ptrdiff_t src_stride, int16_t *coeff";
+    specialize qw/vpx_hadamard_32x32 sse2 avx2/;
+
     add_proto qw/int vpx_satd/, "const int16_t *coeff, int length";
     specialize qw/vpx_satd avx2 sse2 neon msa/;
   }
@@ -864,6 +893,9 @@
 specialize qw/vpx_sad4x4x3 sse3 msa mmi/;
 
 # Blocks of 8
+add_proto qw/void vpx_sad32x32x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, uint32_t *sad_array";
+specialize qw/vpx_sad32x32x8 avx2/;
+
 add_proto qw/void vpx_sad16x16x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, uint32_t *sad_array";
 specialize qw/vpx_sad16x16x8 sse4_1 msa mmi/;
 
@@ -882,47 +914,47 @@
 #
 # Multi-block SAD, comparing a reference to N independent blocks
 #
-add_proto qw/void vpx_sad64x64x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_ptr[], int ref_stride, uint32_t *sad_array";
+add_proto qw/void vpx_sad64x64x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_array[], int ref_stride, uint32_t *sad_array";
 specialize qw/vpx_sad64x64x4d avx512 avx2 neon msa sse2 vsx mmi/;
 
-add_proto qw/void vpx_sad64x32x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_ptr[], int ref_stride, uint32_t *sad_array";
+add_proto qw/void vpx_sad64x32x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_array[], int ref_stride, uint32_t *sad_array";
 specialize qw/vpx_sad64x32x4d neon msa sse2 vsx mmi/;
 
-add_proto qw/void vpx_sad32x64x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_ptr[], int ref_stride, uint32_t *sad_array";
+add_proto qw/void vpx_sad32x64x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_array[], int ref_stride, uint32_t *sad_array";
 specialize qw/vpx_sad32x64x4d neon msa sse2 vsx mmi/;
 
-add_proto qw/void vpx_sad32x32x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_ptr[], int ref_stride, uint32_t *sad_array";
+add_proto qw/void vpx_sad32x32x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_array[], int ref_stride, uint32_t *sad_array";
 specialize qw/vpx_sad32x32x4d avx2 neon msa sse2 vsx mmi/;
 
-add_proto qw/void vpx_sad32x16x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_ptr[], int ref_stride, uint32_t *sad_array";
+add_proto qw/void vpx_sad32x16x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_array[], int ref_stride, uint32_t *sad_array";
 specialize qw/vpx_sad32x16x4d neon msa sse2 vsx mmi/;
 
-add_proto qw/void vpx_sad16x32x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_ptr[], int ref_stride, uint32_t *sad_array";
+add_proto qw/void vpx_sad16x32x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_array[], int ref_stride, uint32_t *sad_array";
 specialize qw/vpx_sad16x32x4d neon msa sse2 vsx mmi/;
 
-add_proto qw/void vpx_sad16x16x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_ptr[], int ref_stride, uint32_t *sad_array";
+add_proto qw/void vpx_sad16x16x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_array[], int ref_stride, uint32_t *sad_array";
 specialize qw/vpx_sad16x16x4d neon msa sse2 vsx mmi/;
 
-add_proto qw/void vpx_sad16x8x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_ptr[], int ref_stride, uint32_t *sad_array";
+add_proto qw/void vpx_sad16x8x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_array[], int ref_stride, uint32_t *sad_array";
 specialize qw/vpx_sad16x8x4d neon msa sse2 vsx mmi/;
 
-add_proto qw/void vpx_sad8x16x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_ptr[], int ref_stride, uint32_t *sad_array";
+add_proto qw/void vpx_sad8x16x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_array[], int ref_stride, uint32_t *sad_array";
 specialize qw/vpx_sad8x16x4d neon msa sse2 mmi/;
 
-add_proto qw/void vpx_sad8x8x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_ptr[], int ref_stride, uint32_t *sad_array";
+add_proto qw/void vpx_sad8x8x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_array[], int ref_stride, uint32_t *sad_array";
 specialize qw/vpx_sad8x8x4d neon msa sse2 mmi/;
 
-add_proto qw/void vpx_sad8x4x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_ptr[], int ref_stride, uint32_t *sad_array";
+add_proto qw/void vpx_sad8x4x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_array[], int ref_stride, uint32_t *sad_array";
 specialize qw/vpx_sad8x4x4d neon msa sse2 mmi/;
 
-add_proto qw/void vpx_sad4x8x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_ptr[], int ref_stride, uint32_t *sad_array";
+add_proto qw/void vpx_sad4x8x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_array[], int ref_stride, uint32_t *sad_array";
 specialize qw/vpx_sad4x8x4d neon msa sse2 mmi/;
 
-add_proto qw/void vpx_sad4x4x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_ptr[], int ref_stride, uint32_t *sad_array";
+add_proto qw/void vpx_sad4x4x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t * const ref_array[], int ref_stride, uint32_t *sad_array";
 specialize qw/vpx_sad4x4x4d neon msa sse2 mmi/;
 
 add_proto qw/uint64_t vpx_sum_squares_2d_i16/, "const int16_t *src, int stride, int size";
-specialize qw/vpx_sum_squares_2d_i16 sse2 msa/;
+specialize qw/vpx_sum_squares_2d_i16 neon sse2 msa/;
 
 #
 # Structured Similarity (SSIM)
@@ -939,7 +971,7 @@
   #
   # Block subtraction
   #
-  add_proto qw/void vpx_highbd_subtract_block/, "int rows, int cols, int16_t *diff_ptr, ptrdiff_t diff_stride, const uint8_t *src_ptr, ptrdiff_t src_stride, const uint8_t *pred_ptr, ptrdiff_t pred_stride, int bd";
+  add_proto qw/void vpx_highbd_subtract_block/, "int rows, int cols, int16_t *diff_ptr, ptrdiff_t diff_stride, const uint8_t *src8_ptr, ptrdiff_t src_stride, const uint8_t *pred8_ptr, ptrdiff_t pred_stride, int bd";
 
   #
   # Single block SAD
@@ -984,9 +1016,13 @@
   #
   # Avg
   #
-  add_proto qw/unsigned int vpx_highbd_avg_8x8/, "const uint8_t *, int p";
-  add_proto qw/unsigned int vpx_highbd_avg_4x4/, "const uint8_t *, int p";
-  add_proto qw/void vpx_highbd_minmax_8x8/, "const uint8_t *s, int p, const uint8_t *d, int dp, int *min, int *max";
+  add_proto qw/unsigned int vpx_highbd_avg_8x8/, "const uint8_t *s8, int p";
+  specialize qw/vpx_highbd_avg_8x8 sse2/;
+
+  add_proto qw/unsigned int vpx_highbd_avg_4x4/, "const uint8_t *s8, int p";
+  specialize qw/vpx_highbd_avg_4x4 sse2/;
+
+  add_proto qw/void vpx_highbd_minmax_8x8/, "const uint8_t *s8, int p, const uint8_t *d8, int dp, int *min, int *max";
 
   add_proto qw/unsigned int vpx_highbd_sad64x64_avg/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, const uint8_t *second_pred";
   specialize qw/vpx_highbd_sad64x64_avg sse2/;
@@ -1028,43 +1064,43 @@
   #
   # Multi-block SAD, comparing a reference to N independent blocks
   #
-  add_proto qw/void vpx_highbd_sad64x64x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_ptr[], int ref_stride, uint32_t *sad_array";
+  add_proto qw/void vpx_highbd_sad64x64x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_array[], int ref_stride, uint32_t *sad_array";
   specialize qw/vpx_highbd_sad64x64x4d sse2/;
 
-  add_proto qw/void vpx_highbd_sad64x32x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_ptr[], int ref_stride, uint32_t *sad_array";
+  add_proto qw/void vpx_highbd_sad64x32x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_array[], int ref_stride, uint32_t *sad_array";
   specialize qw/vpx_highbd_sad64x32x4d sse2/;
 
-  add_proto qw/void vpx_highbd_sad32x64x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_ptr[], int ref_stride, uint32_t *sad_array";
+  add_proto qw/void vpx_highbd_sad32x64x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_array[], int ref_stride, uint32_t *sad_array";
   specialize qw/vpx_highbd_sad32x64x4d sse2/;
 
-  add_proto qw/void vpx_highbd_sad32x32x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_ptr[], int ref_stride, uint32_t *sad_array";
+  add_proto qw/void vpx_highbd_sad32x32x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_array[], int ref_stride, uint32_t *sad_array";
   specialize qw/vpx_highbd_sad32x32x4d sse2/;
 
-  add_proto qw/void vpx_highbd_sad32x16x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_ptr[], int ref_stride, uint32_t *sad_array";
+  add_proto qw/void vpx_highbd_sad32x16x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_array[], int ref_stride, uint32_t *sad_array";
   specialize qw/vpx_highbd_sad32x16x4d sse2/;
 
-  add_proto qw/void vpx_highbd_sad16x32x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_ptr[], int ref_stride, uint32_t *sad_array";
+  add_proto qw/void vpx_highbd_sad16x32x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_array[], int ref_stride, uint32_t *sad_array";
   specialize qw/vpx_highbd_sad16x32x4d sse2/;
 
-  add_proto qw/void vpx_highbd_sad16x16x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_ptr[], int ref_stride, uint32_t *sad_array";
+  add_proto qw/void vpx_highbd_sad16x16x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_array[], int ref_stride, uint32_t *sad_array";
   specialize qw/vpx_highbd_sad16x16x4d sse2/;
 
-  add_proto qw/void vpx_highbd_sad16x8x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_ptr[], int ref_stride, uint32_t *sad_array";
+  add_proto qw/void vpx_highbd_sad16x8x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_array[], int ref_stride, uint32_t *sad_array";
   specialize qw/vpx_highbd_sad16x8x4d sse2/;
 
-  add_proto qw/void vpx_highbd_sad8x16x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_ptr[], int ref_stride, uint32_t *sad_array";
+  add_proto qw/void vpx_highbd_sad8x16x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_array[], int ref_stride, uint32_t *sad_array";
   specialize qw/vpx_highbd_sad8x16x4d sse2/;
 
-  add_proto qw/void vpx_highbd_sad8x8x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_ptr[], int ref_stride, uint32_t *sad_array";
+  add_proto qw/void vpx_highbd_sad8x8x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_array[], int ref_stride, uint32_t *sad_array";
   specialize qw/vpx_highbd_sad8x8x4d sse2/;
 
-  add_proto qw/void vpx_highbd_sad8x4x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_ptr[], int ref_stride, uint32_t *sad_array";
+  add_proto qw/void vpx_highbd_sad8x4x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_array[], int ref_stride, uint32_t *sad_array";
   specialize qw/vpx_highbd_sad8x4x4d sse2/;
 
-  add_proto qw/void vpx_highbd_sad4x8x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_ptr[], int ref_stride, uint32_t *sad_array";
+  add_proto qw/void vpx_highbd_sad4x8x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_array[], int ref_stride, uint32_t *sad_array";
   specialize qw/vpx_highbd_sad4x8x4d sse2/;
 
-  add_proto qw/void vpx_highbd_sad4x4x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_ptr[], int ref_stride, uint32_t *sad_array";
+  add_proto qw/void vpx_highbd_sad4x4x4d/, "const uint8_t *src_ptr, int src_stride, const uint8_t* const ref_array[], int ref_stride, uint32_t *sad_array";
   specialize qw/vpx_highbd_sad4x4x4d sse2/;
 
   #
@@ -1081,70 +1117,70 @@
 #
 # Variance
 #
-add_proto qw/unsigned int vpx_variance64x64/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  specialize qw/vpx_variance64x64 sse2 avx2 neon msa mmi/;
+add_proto qw/unsigned int vpx_variance64x64/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_variance64x64 sse2 avx2 neon msa mmi vsx/;
 
-add_proto qw/unsigned int vpx_variance64x32/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  specialize qw/vpx_variance64x32 sse2 avx2 neon msa mmi/;
+add_proto qw/unsigned int vpx_variance64x32/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_variance64x32 sse2 avx2 neon msa mmi vsx/;
 
-add_proto qw/unsigned int vpx_variance32x64/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  specialize qw/vpx_variance32x64 sse2 neon msa mmi/;
+add_proto qw/unsigned int vpx_variance32x64/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_variance32x64 sse2 avx2 neon msa mmi vsx/;
 
-add_proto qw/unsigned int vpx_variance32x32/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  specialize qw/vpx_variance32x32 sse2 avx2 neon msa mmi/;
+add_proto qw/unsigned int vpx_variance32x32/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_variance32x32 sse2 avx2 neon msa mmi vsx/;
 
-add_proto qw/unsigned int vpx_variance32x16/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  specialize qw/vpx_variance32x16 sse2 avx2 neon msa mmi/;
+add_proto qw/unsigned int vpx_variance32x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_variance32x16 sse2 avx2 neon msa mmi vsx/;
 
-add_proto qw/unsigned int vpx_variance16x32/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  specialize qw/vpx_variance16x32 sse2 neon msa mmi/;
+add_proto qw/unsigned int vpx_variance16x32/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_variance16x32 sse2 avx2 neon msa mmi vsx/;
 
-add_proto qw/unsigned int vpx_variance16x16/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  specialize qw/vpx_variance16x16 sse2 avx2 neon msa mmi/;
+add_proto qw/unsigned int vpx_variance16x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_variance16x16 sse2 avx2 neon msa mmi vsx/;
 
-add_proto qw/unsigned int vpx_variance16x8/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  specialize qw/vpx_variance16x8 sse2 neon msa mmi/;
+add_proto qw/unsigned int vpx_variance16x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_variance16x8 sse2 avx2 neon msa mmi vsx/;
 
-add_proto qw/unsigned int vpx_variance8x16/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  specialize qw/vpx_variance8x16 sse2 neon msa mmi/;
+add_proto qw/unsigned int vpx_variance8x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_variance8x16 sse2 neon msa mmi vsx/;
 
-add_proto qw/unsigned int vpx_variance8x8/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  specialize qw/vpx_variance8x8 sse2 neon msa mmi/;
+add_proto qw/unsigned int vpx_variance8x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_variance8x8 sse2 neon msa mmi vsx/;
 
-add_proto qw/unsigned int vpx_variance8x4/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  specialize qw/vpx_variance8x4 sse2 neon msa mmi/;
+add_proto qw/unsigned int vpx_variance8x4/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_variance8x4 sse2 neon msa mmi vsx/;
 
-add_proto qw/unsigned int vpx_variance4x8/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  specialize qw/vpx_variance4x8 sse2 neon msa mmi/;
+add_proto qw/unsigned int vpx_variance4x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_variance4x8 sse2 neon msa mmi vsx/;
 
-add_proto qw/unsigned int vpx_variance4x4/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  specialize qw/vpx_variance4x4 sse2 neon msa mmi/;
+add_proto qw/unsigned int vpx_variance4x4/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_variance4x4 sse2 neon msa mmi vsx/;
 
 #
 # Specialty Variance
 #
-add_proto qw/void vpx_get16x16var/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
-  specialize qw/vpx_get16x16var sse2 avx2 neon msa/;
+add_proto qw/void vpx_get16x16var/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
+  specialize qw/vpx_get16x16var sse2 avx2 neon msa vsx/;
 
-add_proto qw/void vpx_get8x8var/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
-  specialize qw/vpx_get8x8var sse2 neon msa/;
+add_proto qw/void vpx_get8x8var/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
+  specialize qw/vpx_get8x8var sse2 neon msa vsx/;
 
-add_proto qw/unsigned int vpx_mse16x16/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
-  specialize qw/vpx_mse16x16 sse2 avx2 neon msa mmi/;
+add_proto qw/unsigned int vpx_mse16x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_mse16x16 sse2 avx2 neon msa mmi vsx/;
 
-add_proto qw/unsigned int vpx_mse16x8/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
-  specialize qw/vpx_mse16x8 sse2 msa mmi/;
+add_proto qw/unsigned int vpx_mse16x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_mse16x8 sse2 avx2 msa mmi vsx/;
 
-add_proto qw/unsigned int vpx_mse8x16/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
-  specialize qw/vpx_mse8x16 sse2 msa mmi/;
+add_proto qw/unsigned int vpx_mse8x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_mse8x16 sse2 msa mmi vsx/;
 
-add_proto qw/unsigned int vpx_mse8x8/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
-  specialize qw/vpx_mse8x8 sse2 msa mmi/;
+add_proto qw/unsigned int vpx_mse8x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  specialize qw/vpx_mse8x8 sse2 msa mmi vsx/;
 
 add_proto qw/unsigned int vpx_get_mb_ss/, "const int16_t *";
   specialize qw/vpx_get_mb_ss sse2 msa vsx/;
 
-add_proto qw/unsigned int vpx_get4x4sse_cs/, "const unsigned char *src_ptr, int source_stride, const unsigned char *ref_ptr, int  ref_stride";
+add_proto qw/unsigned int vpx_get4x4sse_cs/, "const unsigned char *src_ptr, int src_stride, const unsigned char *ref_ptr, int ref_stride";
   specialize qw/vpx_get4x4sse_cs neon msa vsx/;
 
 add_proto qw/void vpx_comp_avg_pred/, "uint8_t *comp_pred, const uint8_t *pred, int width, int height, const uint8_t *ref, int ref_stride";
@@ -1153,440 +1189,449 @@
 #
 # Subpixel Variance
 #
-add_proto qw/uint32_t vpx_sub_pixel_variance64x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+add_proto qw/uint32_t vpx_sub_pixel_variance64x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_sub_pixel_variance64x64 avx2 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_variance64x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+add_proto qw/uint32_t vpx_sub_pixel_variance64x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_sub_pixel_variance64x32 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_variance32x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+add_proto qw/uint32_t vpx_sub_pixel_variance32x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_sub_pixel_variance32x64 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_variance32x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+add_proto qw/uint32_t vpx_sub_pixel_variance32x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_sub_pixel_variance32x32 avx2 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_variance32x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+add_proto qw/uint32_t vpx_sub_pixel_variance32x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_sub_pixel_variance32x16 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_variance16x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+add_proto qw/uint32_t vpx_sub_pixel_variance16x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_sub_pixel_variance16x32 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_variance16x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+add_proto qw/uint32_t vpx_sub_pixel_variance16x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_sub_pixel_variance16x16 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_variance16x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+add_proto qw/uint32_t vpx_sub_pixel_variance16x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_sub_pixel_variance16x8 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_variance8x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+add_proto qw/uint32_t vpx_sub_pixel_variance8x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_sub_pixel_variance8x16 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_variance8x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+add_proto qw/uint32_t vpx_sub_pixel_variance8x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_sub_pixel_variance8x8 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_variance8x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+add_proto qw/uint32_t vpx_sub_pixel_variance8x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_sub_pixel_variance8x4 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_variance4x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+add_proto qw/uint32_t vpx_sub_pixel_variance4x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_sub_pixel_variance4x8 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_variance4x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+add_proto qw/uint32_t vpx_sub_pixel_variance4x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_sub_pixel_variance4x4 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_avg_variance64x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+add_proto qw/uint32_t vpx_sub_pixel_avg_variance64x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_sub_pixel_avg_variance64x64 neon avx2 msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_avg_variance64x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+add_proto qw/uint32_t vpx_sub_pixel_avg_variance64x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_sub_pixel_avg_variance64x32 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_avg_variance32x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+add_proto qw/uint32_t vpx_sub_pixel_avg_variance32x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_sub_pixel_avg_variance32x64 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_avg_variance32x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+add_proto qw/uint32_t vpx_sub_pixel_avg_variance32x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_sub_pixel_avg_variance32x32 neon avx2 msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_avg_variance32x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+add_proto qw/uint32_t vpx_sub_pixel_avg_variance32x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_sub_pixel_avg_variance32x16 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_avg_variance16x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+add_proto qw/uint32_t vpx_sub_pixel_avg_variance16x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_sub_pixel_avg_variance16x32 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_avg_variance16x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+add_proto qw/uint32_t vpx_sub_pixel_avg_variance16x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_sub_pixel_avg_variance16x16 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_avg_variance16x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+add_proto qw/uint32_t vpx_sub_pixel_avg_variance16x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_sub_pixel_avg_variance16x8 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_avg_variance8x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+add_proto qw/uint32_t vpx_sub_pixel_avg_variance8x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_sub_pixel_avg_variance8x16 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_avg_variance8x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+add_proto qw/uint32_t vpx_sub_pixel_avg_variance8x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_sub_pixel_avg_variance8x8 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_avg_variance8x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+add_proto qw/uint32_t vpx_sub_pixel_avg_variance8x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_sub_pixel_avg_variance8x4 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_avg_variance4x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+add_proto qw/uint32_t vpx_sub_pixel_avg_variance4x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_sub_pixel_avg_variance4x8 neon msa mmi sse2 ssse3/;
 
-add_proto qw/uint32_t vpx_sub_pixel_avg_variance4x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+add_proto qw/uint32_t vpx_sub_pixel_avg_variance4x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_sub_pixel_avg_variance4x4 neon msa mmi sse2 ssse3/;
 
 if (vpx_config("CONFIG_VP9_HIGHBITDEPTH") eq "yes") {
-  add_proto qw/unsigned int vpx_highbd_12_variance64x64/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_variance64x64/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_12_variance64x64 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_12_variance64x32/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_variance64x32/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_12_variance64x32 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_12_variance32x64/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_variance32x64/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_12_variance32x64 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_12_variance32x32/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_variance32x32/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_12_variance32x32 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_12_variance32x16/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_variance32x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_12_variance32x16 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_12_variance16x32/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_variance16x32/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_12_variance16x32 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_12_variance16x16/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_variance16x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_12_variance16x16 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_12_variance16x8/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_variance16x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_12_variance16x8 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_12_variance8x16/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_variance8x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_12_variance8x16 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_12_variance8x8/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_variance8x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_12_variance8x8 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_12_variance8x4/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  add_proto qw/unsigned int vpx_highbd_12_variance4x8/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  add_proto qw/unsigned int vpx_highbd_12_variance4x4/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_variance8x4/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_variance4x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_variance4x4/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
 
-  add_proto qw/unsigned int vpx_highbd_10_variance64x64/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_variance64x64/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_10_variance64x64 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_10_variance64x32/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_variance64x32/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_10_variance64x32 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_10_variance32x64/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_variance32x64/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_10_variance32x64 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_10_variance32x32/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_variance32x32/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_10_variance32x32 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_10_variance32x16/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_variance32x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_10_variance32x16 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_10_variance16x32/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_variance16x32/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_10_variance16x32 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_10_variance16x16/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_variance16x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_10_variance16x16 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_10_variance16x8/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_variance16x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_10_variance16x8 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_10_variance8x16/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_variance8x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_10_variance8x16 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_10_variance8x8/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_variance8x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_10_variance8x8 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_10_variance8x4/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  add_proto qw/unsigned int vpx_highbd_10_variance4x8/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  add_proto qw/unsigned int vpx_highbd_10_variance4x4/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_variance8x4/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_variance4x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_variance4x4/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
 
-  add_proto qw/unsigned int vpx_highbd_8_variance64x64/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_variance64x64/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_8_variance64x64 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_8_variance64x32/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_variance64x32/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_8_variance64x32 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_8_variance32x64/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_variance32x64/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_8_variance32x64 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_8_variance32x32/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_variance32x32/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_8_variance32x32 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_8_variance32x16/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_variance32x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_8_variance32x16 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_8_variance16x32/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_variance16x32/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_8_variance16x32 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_8_variance16x16/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_variance16x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_8_variance16x16 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_8_variance16x8/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_variance16x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_8_variance16x8 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_8_variance8x16/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_variance8x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_8_variance8x16 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_8_variance8x8/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_variance8x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_8_variance8x8 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_8_variance8x4/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  add_proto qw/unsigned int vpx_highbd_8_variance4x8/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
-  add_proto qw/unsigned int vpx_highbd_8_variance4x4/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_variance8x4/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_variance4x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_variance4x4/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
 
-  add_proto qw/void vpx_highbd_8_get16x16var/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
-  add_proto qw/void vpx_highbd_8_get8x8var/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
+  add_proto qw/void vpx_highbd_8_get16x16var/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
+  specialize qw/vpx_highbd_8_get16x16var sse2/;
 
-  add_proto qw/void vpx_highbd_10_get16x16var/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
-  add_proto qw/void vpx_highbd_10_get8x8var/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
+  add_proto qw/void vpx_highbd_8_get8x8var/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
+  specialize qw/vpx_highbd_8_get8x8var sse2/;
 
-  add_proto qw/void vpx_highbd_12_get16x16var/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
-  add_proto qw/void vpx_highbd_12_get8x8var/, "const uint8_t *src_ptr, int source_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
+  add_proto qw/void vpx_highbd_10_get16x16var/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
+  specialize qw/vpx_highbd_10_get16x16var sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_8_mse16x16/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
+  add_proto qw/void vpx_highbd_10_get8x8var/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
+  specialize qw/vpx_highbd_10_get8x8var sse2/;
+
+  add_proto qw/void vpx_highbd_12_get16x16var/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
+  specialize qw/vpx_highbd_12_get16x16var sse2/;
+
+  add_proto qw/void vpx_highbd_12_get8x8var/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse, int *sum";
+  specialize qw/vpx_highbd_12_get8x8var sse2/;
+
+  add_proto qw/unsigned int vpx_highbd_8_mse16x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_8_mse16x16 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_8_mse16x8/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
-  add_proto qw/unsigned int vpx_highbd_8_mse8x16/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
-  add_proto qw/unsigned int vpx_highbd_8_mse8x8/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_mse16x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_mse8x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_8_mse8x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_8_mse8x8 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_10_mse16x16/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_mse16x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_10_mse16x16 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_10_mse16x8/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
-  add_proto qw/unsigned int vpx_highbd_10_mse8x16/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
-  add_proto qw/unsigned int vpx_highbd_10_mse8x8/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_mse16x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_mse8x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_10_mse8x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_10_mse8x8 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_12_mse16x16/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_mse16x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_12_mse16x16 sse2/;
 
-  add_proto qw/unsigned int vpx_highbd_12_mse16x8/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
-  add_proto qw/unsigned int vpx_highbd_12_mse8x16/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
-  add_proto qw/unsigned int vpx_highbd_12_mse8x8/, "const uint8_t *src_ptr, int  source_stride, const uint8_t *ref_ptr, int  recon_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_mse16x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_mse8x16/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
+  add_proto qw/unsigned int vpx_highbd_12_mse8x8/, "const uint8_t *src_ptr, int src_stride, const uint8_t *ref_ptr, int ref_stride, unsigned int *sse";
   specialize qw/vpx_highbd_12_mse8x8 sse2/;
 
-  add_proto qw/void vpx_highbd_comp_avg_pred/, "uint16_t *comp_pred, const uint8_t *pred8, int width, int height, const uint8_t *ref8, int ref_stride";
+  add_proto qw/void vpx_highbd_comp_avg_pred/, "uint16_t *comp_pred, const uint16_t *pred, int width, int height, const uint16_t *ref, int ref_stride";
 
   #
   # Subpixel Variance
   #
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance64x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance64x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_12_sub_pixel_variance64x64 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance64x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance64x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_12_sub_pixel_variance64x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance32x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance32x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_12_sub_pixel_variance32x64 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance32x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance32x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_12_sub_pixel_variance32x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance32x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance32x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_12_sub_pixel_variance32x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance16x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance16x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_12_sub_pixel_variance16x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance16x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance16x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_12_sub_pixel_variance16x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance16x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance16x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_12_sub_pixel_variance16x8 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance8x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance8x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_12_sub_pixel_variance8x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance8x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance8x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_12_sub_pixel_variance8x8 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance8x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance8x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_12_sub_pixel_variance8x4 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance4x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance4x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance4x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_variance4x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance64x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance64x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_10_sub_pixel_variance64x64 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance64x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance64x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_10_sub_pixel_variance64x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance32x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance32x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_10_sub_pixel_variance32x64 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance32x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance32x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_10_sub_pixel_variance32x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance32x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance32x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_10_sub_pixel_variance32x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance16x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance16x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_10_sub_pixel_variance16x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance16x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance16x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_10_sub_pixel_variance16x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance16x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance16x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_10_sub_pixel_variance16x8 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance8x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance8x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_10_sub_pixel_variance8x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance8x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance8x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_10_sub_pixel_variance8x8 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance8x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance8x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_10_sub_pixel_variance8x4 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance4x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance4x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance4x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_variance4x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance64x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance64x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_8_sub_pixel_variance64x64 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance64x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance64x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_8_sub_pixel_variance64x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance32x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance32x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_8_sub_pixel_variance32x64 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance32x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance32x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_8_sub_pixel_variance32x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance32x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance32x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_8_sub_pixel_variance32x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance16x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance16x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_8_sub_pixel_variance16x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance16x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance16x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_8_sub_pixel_variance16x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance16x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance16x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_8_sub_pixel_variance16x8 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance8x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance8x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_8_sub_pixel_variance8x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance8x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance8x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_8_sub_pixel_variance8x8 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance8x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance8x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
   specialize qw/vpx_highbd_8_sub_pixel_variance8x4 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance4x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance4x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance4x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_variance4x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse";
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance64x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_12_sub_pixel_avg_variance64x64 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance64x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_12_sub_pixel_avg_variance64x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance32x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_12_sub_pixel_avg_variance32x64 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance32x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_12_sub_pixel_avg_variance32x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance32x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_12_sub_pixel_avg_variance32x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance16x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_12_sub_pixel_avg_variance16x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance16x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_12_sub_pixel_avg_variance16x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance16x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_12_sub_pixel_avg_variance16x8 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance8x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_12_sub_pixel_avg_variance8x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance8x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_12_sub_pixel_avg_variance8x8 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance8x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_12_sub_pixel_avg_variance8x4 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance4x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
-  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance4x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance4x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_12_sub_pixel_avg_variance4x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance64x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_10_sub_pixel_avg_variance64x64 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance64x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_10_sub_pixel_avg_variance64x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance32x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_10_sub_pixel_avg_variance32x64 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance32x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_10_sub_pixel_avg_variance32x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance32x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_10_sub_pixel_avg_variance32x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance16x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_10_sub_pixel_avg_variance16x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance16x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_10_sub_pixel_avg_variance16x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance16x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_10_sub_pixel_avg_variance16x8 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance8x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_10_sub_pixel_avg_variance8x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance8x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_10_sub_pixel_avg_variance8x8 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance8x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_10_sub_pixel_avg_variance8x4 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance4x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
-  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance4x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance4x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_10_sub_pixel_avg_variance4x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance64x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_8_sub_pixel_avg_variance64x64 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance64x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_8_sub_pixel_avg_variance64x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance32x64/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_8_sub_pixel_avg_variance32x64 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance32x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_8_sub_pixel_avg_variance32x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance32x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_8_sub_pixel_avg_variance32x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance16x32/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_8_sub_pixel_avg_variance16x32 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance16x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_8_sub_pixel_avg_variance16x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance16x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_8_sub_pixel_avg_variance16x8 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance8x16/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_8_sub_pixel_avg_variance8x16 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance8x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_8_sub_pixel_avg_variance8x8 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance8x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
   specialize qw/vpx_highbd_8_sub_pixel_avg_variance8x4 sse2/;
 
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance4x8/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
-  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance4x4/, "const uint8_t *src_ptr, int source_stride, int xoffset, int  yoffset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance4x8/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
+  add_proto qw/uint32_t vpx_highbd_8_sub_pixel_avg_variance4x4/, "const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, const uint8_t *ref_ptr, int ref_stride, uint32_t *sse, const uint8_t *second_pred";
 
 }  # CONFIG_VP9_HIGHBITDEPTH
 
@@ -1598,13 +1643,13 @@
     specialize qw/vpx_plane_add_noise sse2 msa/;
 
     add_proto qw/void vpx_mbpost_proc_down/, "unsigned char *dst, int pitch, int rows, int cols,int flimit";
-    specialize qw/vpx_mbpost_proc_down sse2 neon msa/;
+    specialize qw/vpx_mbpost_proc_down sse2 neon msa vsx/;
 
-    add_proto qw/void vpx_mbpost_proc_across_ip/, "unsigned char *dst, int pitch, int rows, int cols,int flimit";
-    specialize qw/vpx_mbpost_proc_across_ip sse2 neon msa/;
+    add_proto qw/void vpx_mbpost_proc_across_ip/, "unsigned char *src, int pitch, int rows, int cols,int flimit";
+    specialize qw/vpx_mbpost_proc_across_ip sse2 neon msa vsx/;
 
     add_proto qw/void vpx_post_proc_down_and_across_mb_row/, "unsigned char *src, unsigned char *dst, int src_pitch, int dst_pitch, int cols, unsigned char *flimits, int size";
-    specialize qw/vpx_post_proc_down_and_across_mb_row sse2 neon msa/;
+    specialize qw/vpx_post_proc_down_and_across_mb_row sse2 neon msa vsx/;
 
 }
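The `add_proto`/`specialize` pairs above feed libvpx's runtime-CPU-detection (rtcd) generator: `add_proto` declares the C prototype, and `specialize` lists the SIMD flavors (here gaining `vsx`) that may replace the plain-C version at startup. As a rough sketch of the dispatch pattern the generated rtcd header expands to (names and the `setup_rtcd`/`has_sse2` parameters are illustrative, not the actual generated libvpx symbols):

```c
#include <assert.h>

/* Function-pointer type matching the vpx_mbpost_proc_down prototype above. */
typedef void (*mbpost_proc_fn)(unsigned char *dst, int pitch, int rows,
                               int cols, int flimit);

/* Plain-C fallback; always available. */
static void mbpost_proc_down_c(unsigned char *dst, int pitch, int rows,
                               int cols, int flimit) {
  (void)dst; (void)pitch; (void)rows; (void)cols; (void)flimit;
}

/* Stub standing in for the SSE2 specialization listed by `specialize`. */
static void mbpost_proc_down_sse2(unsigned char *dst, int pitch, int rows,
                                  int cols, int flimit) {
  (void)dst; (void)pitch; (void)rows; (void)cols; (void)flimit;
}

/* The generated header exposes one pointer per proto, defaulting to C. */
static mbpost_proc_fn vpx_mbpost_proc_down_ptr = mbpost_proc_down_c;

/* Run once at init: upgrade each pointer to the best variant the CPU has. */
static void setup_rtcd(int has_sse2) {
  vpx_mbpost_proc_down_ptr = mbpost_proc_down_c;
  if (has_sse2) vpx_mbpost_proc_down_ptr = mbpost_proc_down_sse2;
}
```

Callers always go through the pointer, so adding a flavor like `vsx` only touches the `specialize` line and the new kernel, not call sites.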
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_filter.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_filter.h
index 6cea251..54357ee 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_filter.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/vpx_filter.h
@@ -8,9 +8,10 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_VPX_FILTER_H_
-#define VPX_DSP_VPX_FILTER_H_
+#ifndef VPX_VPX_DSP_VPX_FILTER_H_
+#define VPX_VPX_DSP_VPX_FILTER_H_
 
+#include <assert.h>
 #include "vpx/vpx_integer.h"
 
 #ifdef __cplusplus
@@ -26,8 +27,16 @@
 
 typedef int16_t InterpKernel[SUBPEL_TAPS];
 
+static INLINE int vpx_get_filter_taps(const int16_t *const filter) {
+  assert(filter[3] != 128);
+  if (!filter[0] && !filter[1] && !filter[2])
+    return 2;
+  else
+    return 8;
+}
+
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_VPX_FILTER_H_
+#endif  // VPX_VPX_DSP_VPX_FILTER_H_
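The new `vpx_get_filter_taps()` helper above classifies an interpolation kernel by inspecting its outer taps: if the first three are zero, symmetry implies the last three are too, leaving a 2-tap bilinear filter; otherwise the full 8-tap path is needed (the `assert(filter[3] != 128)` rules out the identity kernel). A minimal standalone mirror of that logic, using a hypothetical local name:

```c
#include <assert.h>
#include <stdint.h>

#define SUBPEL_TAPS 8
typedef int16_t InterpKernel[SUBPEL_TAPS];

/* Mirror of vpx_get_filter_taps(): a kernel whose leading taps are all
 * zero is a 2-tap bilinear filter; anything else uses all 8 taps. */
static int get_filter_taps(const int16_t *filter) {
  assert(filter[3] != 128); /* identity kernel is handled elsewhere */
  if (!filter[0] && !filter[1] && !filter[2]) return 2;
  return 8;
}
```

Fast convolution paths can branch on this result to skip the 8-tap loop when only two taps contribute.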
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_intrin_avx2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_intrin_avx2.c
index ff19ea6..3f4f577 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_intrin_avx2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_intrin_avx2.c
@@ -15,6 +15,209 @@
 #include "vpx_dsp/x86/bitdepth_conversion_avx2.h"
 #include "vpx_ports/mem.h"
 
+#if CONFIG_VP9_HIGHBITDEPTH
+static void highbd_hadamard_col8_avx2(__m256i *in, int iter) {
+  __m256i a0 = in[0];
+  __m256i a1 = in[1];
+  __m256i a2 = in[2];
+  __m256i a3 = in[3];
+  __m256i a4 = in[4];
+  __m256i a5 = in[5];
+  __m256i a6 = in[6];
+  __m256i a7 = in[7];
+
+  __m256i b0 = _mm256_add_epi32(a0, a1);
+  __m256i b1 = _mm256_sub_epi32(a0, a1);
+  __m256i b2 = _mm256_add_epi32(a2, a3);
+  __m256i b3 = _mm256_sub_epi32(a2, a3);
+  __m256i b4 = _mm256_add_epi32(a4, a5);
+  __m256i b5 = _mm256_sub_epi32(a4, a5);
+  __m256i b6 = _mm256_add_epi32(a6, a7);
+  __m256i b7 = _mm256_sub_epi32(a6, a7);
+
+  a0 = _mm256_add_epi32(b0, b2);
+  a1 = _mm256_add_epi32(b1, b3);
+  a2 = _mm256_sub_epi32(b0, b2);
+  a3 = _mm256_sub_epi32(b1, b3);
+  a4 = _mm256_add_epi32(b4, b6);
+  a5 = _mm256_add_epi32(b5, b7);
+  a6 = _mm256_sub_epi32(b4, b6);
+  a7 = _mm256_sub_epi32(b5, b7);
+
+  if (iter == 0) {
+    b0 = _mm256_add_epi32(a0, a4);
+    b7 = _mm256_add_epi32(a1, a5);
+    b3 = _mm256_add_epi32(a2, a6);
+    b4 = _mm256_add_epi32(a3, a7);
+    b2 = _mm256_sub_epi32(a0, a4);
+    b6 = _mm256_sub_epi32(a1, a5);
+    b1 = _mm256_sub_epi32(a2, a6);
+    b5 = _mm256_sub_epi32(a3, a7);
+
+    a0 = _mm256_unpacklo_epi32(b0, b1);
+    a1 = _mm256_unpacklo_epi32(b2, b3);
+    a2 = _mm256_unpackhi_epi32(b0, b1);
+    a3 = _mm256_unpackhi_epi32(b2, b3);
+    a4 = _mm256_unpacklo_epi32(b4, b5);
+    a5 = _mm256_unpacklo_epi32(b6, b7);
+    a6 = _mm256_unpackhi_epi32(b4, b5);
+    a7 = _mm256_unpackhi_epi32(b6, b7);
+
+    b0 = _mm256_unpacklo_epi64(a0, a1);
+    b1 = _mm256_unpacklo_epi64(a4, a5);
+    b2 = _mm256_unpackhi_epi64(a0, a1);
+    b3 = _mm256_unpackhi_epi64(a4, a5);
+    b4 = _mm256_unpacklo_epi64(a2, a3);
+    b5 = _mm256_unpacklo_epi64(a6, a7);
+    b6 = _mm256_unpackhi_epi64(a2, a3);
+    b7 = _mm256_unpackhi_epi64(a6, a7);
+
+    in[0] = _mm256_permute2x128_si256(b0, b1, 0x20);
+    in[1] = _mm256_permute2x128_si256(b0, b1, 0x31);
+    in[2] = _mm256_permute2x128_si256(b2, b3, 0x20);
+    in[3] = _mm256_permute2x128_si256(b2, b3, 0x31);
+    in[4] = _mm256_permute2x128_si256(b4, b5, 0x20);
+    in[5] = _mm256_permute2x128_si256(b4, b5, 0x31);
+    in[6] = _mm256_permute2x128_si256(b6, b7, 0x20);
+    in[7] = _mm256_permute2x128_si256(b6, b7, 0x31);
+  } else {
+    in[0] = _mm256_add_epi32(a0, a4);
+    in[7] = _mm256_add_epi32(a1, a5);
+    in[3] = _mm256_add_epi32(a2, a6);
+    in[4] = _mm256_add_epi32(a3, a7);
+    in[2] = _mm256_sub_epi32(a0, a4);
+    in[6] = _mm256_sub_epi32(a1, a5);
+    in[1] = _mm256_sub_epi32(a2, a6);
+    in[5] = _mm256_sub_epi32(a3, a7);
+  }
+}
+
+void vpx_highbd_hadamard_8x8_avx2(const int16_t *src_diff, ptrdiff_t src_stride,
+                                  tran_low_t *coeff) {
+  __m128i src16[8];
+  __m256i src32[8];
+
+  src16[0] = _mm_loadu_si128((const __m128i *)src_diff);
+  src16[1] = _mm_loadu_si128((const __m128i *)(src_diff += src_stride));
+  src16[2] = _mm_loadu_si128((const __m128i *)(src_diff += src_stride));
+  src16[3] = _mm_loadu_si128((const __m128i *)(src_diff += src_stride));
+  src16[4] = _mm_loadu_si128((const __m128i *)(src_diff += src_stride));
+  src16[5] = _mm_loadu_si128((const __m128i *)(src_diff += src_stride));
+  src16[6] = _mm_loadu_si128((const __m128i *)(src_diff += src_stride));
+  src16[7] = _mm_loadu_si128((const __m128i *)(src_diff += src_stride));
+
+  src32[0] = _mm256_cvtepi16_epi32(src16[0]);
+  src32[1] = _mm256_cvtepi16_epi32(src16[1]);
+  src32[2] = _mm256_cvtepi16_epi32(src16[2]);
+  src32[3] = _mm256_cvtepi16_epi32(src16[3]);
+  src32[4] = _mm256_cvtepi16_epi32(src16[4]);
+  src32[5] = _mm256_cvtepi16_epi32(src16[5]);
+  src32[6] = _mm256_cvtepi16_epi32(src16[6]);
+  src32[7] = _mm256_cvtepi16_epi32(src16[7]);
+
+  highbd_hadamard_col8_avx2(src32, 0);
+  highbd_hadamard_col8_avx2(src32, 1);
+
+  _mm256_storeu_si256((__m256i *)coeff, src32[0]);
+  coeff += 8;
+  _mm256_storeu_si256((__m256i *)coeff, src32[1]);
+  coeff += 8;
+  _mm256_storeu_si256((__m256i *)coeff, src32[2]);
+  coeff += 8;
+  _mm256_storeu_si256((__m256i *)coeff, src32[3]);
+  coeff += 8;
+  _mm256_storeu_si256((__m256i *)coeff, src32[4]);
+  coeff += 8;
+  _mm256_storeu_si256((__m256i *)coeff, src32[5]);
+  coeff += 8;
+  _mm256_storeu_si256((__m256i *)coeff, src32[6]);
+  coeff += 8;
+  _mm256_storeu_si256((__m256i *)coeff, src32[7]);
+}
+
+void vpx_highbd_hadamard_16x16_avx2(const int16_t *src_diff,
+                                    ptrdiff_t src_stride, tran_low_t *coeff) {
+  int idx;
+  tran_low_t *t_coeff = coeff;
+  for (idx = 0; idx < 4; ++idx) {
+    const int16_t *src_ptr =
+        src_diff + (idx >> 1) * 8 * src_stride + (idx & 0x01) * 8;
+    vpx_highbd_hadamard_8x8_avx2(src_ptr, src_stride, t_coeff + idx * 64);
+  }
+
+  for (idx = 0; idx < 64; idx += 8) {
+    __m256i coeff0 = _mm256_loadu_si256((const __m256i *)t_coeff);
+    __m256i coeff1 = _mm256_loadu_si256((const __m256i *)(t_coeff + 64));
+    __m256i coeff2 = _mm256_loadu_si256((const __m256i *)(t_coeff + 128));
+    __m256i coeff3 = _mm256_loadu_si256((const __m256i *)(t_coeff + 192));
+
+    __m256i b0 = _mm256_add_epi32(coeff0, coeff1);
+    __m256i b1 = _mm256_sub_epi32(coeff0, coeff1);
+    __m256i b2 = _mm256_add_epi32(coeff2, coeff3);
+    __m256i b3 = _mm256_sub_epi32(coeff2, coeff3);
+
+    b0 = _mm256_srai_epi32(b0, 1);
+    b1 = _mm256_srai_epi32(b1, 1);
+    b2 = _mm256_srai_epi32(b2, 1);
+    b3 = _mm256_srai_epi32(b3, 1);
+
+    coeff0 = _mm256_add_epi32(b0, b2);
+    coeff1 = _mm256_add_epi32(b1, b3);
+    coeff2 = _mm256_sub_epi32(b0, b2);
+    coeff3 = _mm256_sub_epi32(b1, b3);
+
+    _mm256_storeu_si256((__m256i *)coeff, coeff0);
+    _mm256_storeu_si256((__m256i *)(coeff + 64), coeff1);
+    _mm256_storeu_si256((__m256i *)(coeff + 128), coeff2);
+    _mm256_storeu_si256((__m256i *)(coeff + 192), coeff3);
+
+    coeff += 8;
+    t_coeff += 8;
+  }
+}
+
+void vpx_highbd_hadamard_32x32_avx2(const int16_t *src_diff,
+                                    ptrdiff_t src_stride, tran_low_t *coeff) {
+  int idx;
+  tran_low_t *t_coeff = coeff;
+  for (idx = 0; idx < 4; ++idx) {
+    const int16_t *src_ptr =
+        src_diff + (idx >> 1) * 16 * src_stride + (idx & 0x01) * 16;
+    vpx_highbd_hadamard_16x16_avx2(src_ptr, src_stride, t_coeff + idx * 256);
+  }
+
+  for (idx = 0; idx < 256; idx += 8) {
+    __m256i coeff0 = _mm256_loadu_si256((const __m256i *)t_coeff);
+    __m256i coeff1 = _mm256_loadu_si256((const __m256i *)(t_coeff + 256));
+    __m256i coeff2 = _mm256_loadu_si256((const __m256i *)(t_coeff + 512));
+    __m256i coeff3 = _mm256_loadu_si256((const __m256i *)(t_coeff + 768));
+
+    __m256i b0 = _mm256_add_epi32(coeff0, coeff1);
+    __m256i b1 = _mm256_sub_epi32(coeff0, coeff1);
+    __m256i b2 = _mm256_add_epi32(coeff2, coeff3);
+    __m256i b3 = _mm256_sub_epi32(coeff2, coeff3);
+
+    b0 = _mm256_srai_epi32(b0, 2);
+    b1 = _mm256_srai_epi32(b1, 2);
+    b2 = _mm256_srai_epi32(b2, 2);
+    b3 = _mm256_srai_epi32(b3, 2);
+
+    coeff0 = _mm256_add_epi32(b0, b2);
+    coeff1 = _mm256_add_epi32(b1, b3);
+    coeff2 = _mm256_sub_epi32(b0, b2);
+    coeff3 = _mm256_sub_epi32(b1, b3);
+
+    _mm256_storeu_si256((__m256i *)coeff, coeff0);
+    _mm256_storeu_si256((__m256i *)(coeff + 256), coeff1);
+    _mm256_storeu_si256((__m256i *)(coeff + 512), coeff2);
+    _mm256_storeu_si256((__m256i *)(coeff + 768), coeff3);
+
+    coeff += 8;
+    t_coeff += 8;
+  }
+}
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
 static void hadamard_col8x2_avx2(__m256i *in, int iter) {
   __m256i a0 = in[0];
   __m256i a1 = in[1];
@@ -91,7 +294,7 @@
   }
 }
 
-static void hadamard_8x8x2_avx2(int16_t const *src_diff, ptrdiff_t src_stride,
+static void hadamard_8x8x2_avx2(const int16_t *src_diff, ptrdiff_t src_stride,
                                 int16_t *coeff) {
   __m256i src[8];
   src[0] = _mm256_loadu_si256((const __m256i *)src_diff);
@@ -131,18 +334,19 @@
                       _mm256_permute2x128_si256(src[6], src[7], 0x31));
 }
 
-void vpx_hadamard_16x16_avx2(int16_t const *src_diff, ptrdiff_t src_stride,
-                             tran_low_t *coeff) {
-  int idx;
+static INLINE void hadamard_16x16_avx2(const int16_t *src_diff,
+                                       ptrdiff_t src_stride, tran_low_t *coeff,
+                                       int is_final) {
 #if CONFIG_VP9_HIGHBITDEPTH
   DECLARE_ALIGNED(32, int16_t, temp_coeff[16 * 16]);
   int16_t *t_coeff = temp_coeff;
 #else
   int16_t *t_coeff = coeff;
 #endif
-
+  int16_t *coeff16 = (int16_t *)coeff;
+  int idx;
   for (idx = 0; idx < 2; ++idx) {
-    int16_t const *src_ptr = src_diff + idx * 8 * src_stride;
+    const int16_t *src_ptr = src_diff + idx * 8 * src_stride;
     hadamard_8x8x2_avx2(src_ptr, src_stride, t_coeff + (idx * 64 * 2));
   }
 
@@ -161,11 +365,69 @@
     b1 = _mm256_srai_epi16(b1, 1);
     b2 = _mm256_srai_epi16(b2, 1);
     b3 = _mm256_srai_epi16(b3, 1);
+    if (is_final) {
+      store_tran_low(_mm256_add_epi16(b0, b2), coeff);
+      store_tran_low(_mm256_add_epi16(b1, b3), coeff + 64);
+      store_tran_low(_mm256_sub_epi16(b0, b2), coeff + 128);
+      store_tran_low(_mm256_sub_epi16(b1, b3), coeff + 192);
+      coeff += 16;
+    } else {
+      _mm256_storeu_si256((__m256i *)coeff16, _mm256_add_epi16(b0, b2));
+      _mm256_storeu_si256((__m256i *)(coeff16 + 64), _mm256_add_epi16(b1, b3));
+      _mm256_storeu_si256((__m256i *)(coeff16 + 128), _mm256_sub_epi16(b0, b2));
+      _mm256_storeu_si256((__m256i *)(coeff16 + 192), _mm256_sub_epi16(b1, b3));
+      coeff16 += 16;
+    }
+    t_coeff += 16;
+  }
+}
+
+void vpx_hadamard_16x16_avx2(const int16_t *src_diff, ptrdiff_t src_stride,
+                             tran_low_t *coeff) {
+  hadamard_16x16_avx2(src_diff, src_stride, coeff, 1);
+}
+
+void vpx_hadamard_32x32_avx2(const int16_t *src_diff, ptrdiff_t src_stride,
+                             tran_low_t *coeff) {
+#if CONFIG_VP9_HIGHBITDEPTH
+  // For high bitdepths, it is unnecessary to store_tran_low
+  // (mult/unpack/store), then load_tran_low (load/pack) the same memory in the
+  // next stage.  Output to an intermediate buffer first, then store_tran_low()
+  // in the final stage.
+  DECLARE_ALIGNED(32, int16_t, temp_coeff[32 * 32]);
+  int16_t *t_coeff = temp_coeff;
+#else
+  int16_t *t_coeff = coeff;
+#endif
+  int idx;
+  for (idx = 0; idx < 4; ++idx) {
+    // src_diff: 9 bit, dynamic range [-255, 255]
+    const int16_t *src_ptr =
+        src_diff + (idx >> 1) * 16 * src_stride + (idx & 0x01) * 16;
+    hadamard_16x16_avx2(src_ptr, src_stride,
+                        (tran_low_t *)(t_coeff + idx * 256), 0);
+  }
+
+  for (idx = 0; idx < 256; idx += 16) {
+    const __m256i coeff0 = _mm256_loadu_si256((const __m256i *)t_coeff);
+    const __m256i coeff1 = _mm256_loadu_si256((const __m256i *)(t_coeff + 256));
+    const __m256i coeff2 = _mm256_loadu_si256((const __m256i *)(t_coeff + 512));
+    const __m256i coeff3 = _mm256_loadu_si256((const __m256i *)(t_coeff + 768));
+
+    __m256i b0 = _mm256_add_epi16(coeff0, coeff1);
+    __m256i b1 = _mm256_sub_epi16(coeff0, coeff1);
+    __m256i b2 = _mm256_add_epi16(coeff2, coeff3);
+    __m256i b3 = _mm256_sub_epi16(coeff2, coeff3);
+
+    b0 = _mm256_srai_epi16(b0, 2);
+    b1 = _mm256_srai_epi16(b1, 2);
+    b2 = _mm256_srai_epi16(b2, 2);
+    b3 = _mm256_srai_epi16(b3, 2);
 
     store_tran_low(_mm256_add_epi16(b0, b2), coeff);
-    store_tran_low(_mm256_add_epi16(b1, b3), coeff + 64);
-    store_tran_low(_mm256_sub_epi16(b0, b2), coeff + 128);
-    store_tran_low(_mm256_sub_epi16(b1, b3), coeff + 192);
+    store_tran_low(_mm256_add_epi16(b1, b3), coeff + 256);
+    store_tran_low(_mm256_sub_epi16(b0, b2), coeff + 512);
+    store_tran_low(_mm256_sub_epi16(b1, b3), coeff + 768);
 
     coeff += 16;
     t_coeff += 16;
@@ -195,3 +457,26 @@
     return _mm_cvtsi128_si32(accum_128);
   }
 }
+
+#if CONFIG_VP9_HIGHBITDEPTH
+int vpx_highbd_satd_avx2(const tran_low_t *coeff, int length) {
+  __m256i accum = _mm256_setzero_si256();
+  int i;
+
+  for (i = 0; i < length; i += 8, coeff += 8) {
+    const __m256i src_line = _mm256_loadu_si256((const __m256i *)coeff);
+    const __m256i abs = _mm256_abs_epi32(src_line);
+    accum = _mm256_add_epi32(accum, abs);
+  }
+
+  {  // 32 bit horizontal add
+    const __m256i a = _mm256_srli_si256(accum, 8);
+    const __m256i b = _mm256_add_epi32(accum, a);
+    const __m256i c = _mm256_srli_epi64(b, 32);
+    const __m256i d = _mm256_add_epi32(b, c);
+    const __m128i accum_128 = _mm_add_epi32(_mm256_castsi256_si128(d),
+                                            _mm256_extractf128_si256(d, 1));
+    return _mm_cvtsi128_si32(accum_128);
+  }
+}
+#endif  // CONFIG_VP9_HIGHBITDEPTH
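For reference, `vpx_highbd_satd_avx2()` added above vectorizes a very simple reduction, the sum of absolute transform coefficients, eight 32-bit lanes at a time before the horizontal add. A scalar sketch of the same computation (the `_c` name follows libvpx convention but is written here from the loop structure, not copied from the tree):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* libvpx highbitdepth builds use 32-bit transform coefficients. */
typedef int32_t tran_low_t;

/* Scalar equivalent of the AVX2 SATD kernel: sum of |coeff[i]|.
 * length is a multiple of 8 in the vectorized caller. */
static int highbd_satd_c(const tran_low_t *coeff, int length) {
  int i, satd = 0;
  for (i = 0; i < length; ++i) satd += abs(coeff[i]);
  return satd;
}
```

The AVX2 version computes the same value with `_mm256_abs_epi32` plus `_mm256_add_epi32` accumulation, then folds the eight lanes down with shifts and a 128-bit extract.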
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_intrin_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_intrin_sse2.c
index a235ba4..3cba258 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_intrin_sse2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_intrin_sse2.c
@@ -138,6 +138,56 @@
   return (avg + 8) >> 4;
 }
 
+#if CONFIG_VP9_HIGHBITDEPTH
+unsigned int vpx_highbd_avg_8x8_sse2(const uint8_t *s8, int p) {
+  __m128i s0, s1;
+  unsigned int avg;
+  const uint16_t *s = CONVERT_TO_SHORTPTR(s8);
+  const __m128i zero = _mm_setzero_si128();
+  s0 = _mm_loadu_si128((const __m128i *)(s));
+  s1 = _mm_loadu_si128((const __m128i *)(s + p));
+  s0 = _mm_adds_epu16(s0, s1);
+  s1 = _mm_loadu_si128((const __m128i *)(s + 2 * p));
+  s0 = _mm_adds_epu16(s0, s1);
+  s1 = _mm_loadu_si128((const __m128i *)(s + 3 * p));
+  s0 = _mm_adds_epu16(s0, s1);
+  s1 = _mm_loadu_si128((const __m128i *)(s + 4 * p));
+  s0 = _mm_adds_epu16(s0, s1);
+  s1 = _mm_loadu_si128((const __m128i *)(s + 5 * p));
+  s0 = _mm_adds_epu16(s0, s1);
+  s1 = _mm_loadu_si128((const __m128i *)(s + 6 * p));
+  s0 = _mm_adds_epu16(s0, s1);
+  s1 = _mm_loadu_si128((const __m128i *)(s + 7 * p));
+  s0 = _mm_adds_epu16(s0, s1);
+  s1 = _mm_unpackhi_epi16(s0, zero);
+  s0 = _mm_unpacklo_epi16(s0, zero);
+  s0 = _mm_add_epi32(s0, s1);
+  s0 = _mm_add_epi32(s0, _mm_srli_si128(s0, 8));
+  s0 = _mm_add_epi32(s0, _mm_srli_si128(s0, 4));
+  avg = _mm_cvtsi128_si32(s0);
+
+  return (avg + 32) >> 6;
+}
+
+unsigned int vpx_highbd_avg_4x4_sse2(const uint8_t *s8, int p) {
+  __m128i s0, s1;
+  unsigned int avg;
+  const uint16_t *s = CONVERT_TO_SHORTPTR(s8);
+  s0 = _mm_loadl_epi64((const __m128i *)(s));
+  s1 = _mm_loadl_epi64((const __m128i *)(s + p));
+  s0 = _mm_adds_epu16(s0, s1);
+  s1 = _mm_loadl_epi64((const __m128i *)(s + 2 * p));
+  s0 = _mm_adds_epu16(s0, s1);
+  s1 = _mm_loadl_epi64((const __m128i *)(s + 3 * p));
+  s0 = _mm_adds_epu16(s0, s1);
+  s0 = _mm_add_epi16(s0, _mm_srli_si128(s0, 4));
+  s0 = _mm_add_epi16(s0, _mm_srli_si128(s0, 2));
+  avg = _mm_extract_epi16(s0, 0);
+
+  return (avg + 8) >> 4;
+}
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+
 static void hadamard_col8_sse2(__m128i *in, int iter) {
   __m128i a0 = in[0];
   __m128i a1 = in[1];
@@ -214,8 +264,9 @@
   }
 }
 
-void vpx_hadamard_8x8_sse2(int16_t const *src_diff, ptrdiff_t src_stride,
-                           tran_low_t *coeff) {
+static INLINE void hadamard_8x8_sse2(const int16_t *src_diff,
+                                     ptrdiff_t src_stride, tran_low_t *coeff,
+                                     int is_final) {
   __m128i src[8];
   src[0] = _mm_load_si128((const __m128i *)src_diff);
   src[1] = _mm_load_si128((const __m128i *)(src_diff += src_stride));
@@ -229,37 +280,74 @@
   hadamard_col8_sse2(src, 0);
   hadamard_col8_sse2(src, 1);
 
-  store_tran_low(src[0], coeff);
-  coeff += 8;
-  store_tran_low(src[1], coeff);
-  coeff += 8;
-  store_tran_low(src[2], coeff);
-  coeff += 8;
-  store_tran_low(src[3], coeff);
-  coeff += 8;
-  store_tran_low(src[4], coeff);
-  coeff += 8;
-  store_tran_low(src[5], coeff);
-  coeff += 8;
-  store_tran_low(src[6], coeff);
-  coeff += 8;
-  store_tran_low(src[7], coeff);
+  if (is_final) {
+    store_tran_low(src[0], coeff);
+    coeff += 8;
+    store_tran_low(src[1], coeff);
+    coeff += 8;
+    store_tran_low(src[2], coeff);
+    coeff += 8;
+    store_tran_low(src[3], coeff);
+    coeff += 8;
+    store_tran_low(src[4], coeff);
+    coeff += 8;
+    store_tran_low(src[5], coeff);
+    coeff += 8;
+    store_tran_low(src[6], coeff);
+    coeff += 8;
+    store_tran_low(src[7], coeff);
+  } else {
+    int16_t *coeff16 = (int16_t *)coeff;
+    _mm_store_si128((__m128i *)coeff16, src[0]);
+    coeff16 += 8;
+    _mm_store_si128((__m128i *)coeff16, src[1]);
+    coeff16 += 8;
+    _mm_store_si128((__m128i *)coeff16, src[2]);
+    coeff16 += 8;
+    _mm_store_si128((__m128i *)coeff16, src[3]);
+    coeff16 += 8;
+    _mm_store_si128((__m128i *)coeff16, src[4]);
+    coeff16 += 8;
+    _mm_store_si128((__m128i *)coeff16, src[5]);
+    coeff16 += 8;
+    _mm_store_si128((__m128i *)coeff16, src[6]);
+    coeff16 += 8;
+    _mm_store_si128((__m128i *)coeff16, src[7]);
+  }
 }
 
-void vpx_hadamard_16x16_sse2(int16_t const *src_diff, ptrdiff_t src_stride,
-                             tran_low_t *coeff) {
+void vpx_hadamard_8x8_sse2(const int16_t *src_diff, ptrdiff_t src_stride,
+                           tran_low_t *coeff) {
+  hadamard_8x8_sse2(src_diff, src_stride, coeff, 1);
+}
+
+static INLINE void hadamard_16x16_sse2(const int16_t *src_diff,
+                                       ptrdiff_t src_stride, tran_low_t *coeff,
+                                       int is_final) {
+#if CONFIG_VP9_HIGHBITDEPTH
+  // For high bitdepths, it is unnecessary to store_tran_low
+  // (mult/unpack/store), then load_tran_low (load/pack) the same memory in the
+  // next stage.  Output to an intermediate buffer first, then store_tran_low()
+  // in the final stage.
+  DECLARE_ALIGNED(32, int16_t, temp_coeff[16 * 16]);
+  int16_t *t_coeff = temp_coeff;
+#else
+  int16_t *t_coeff = coeff;
+#endif
+  int16_t *coeff16 = (int16_t *)coeff;
   int idx;
   for (idx = 0; idx < 4; ++idx) {
-    int16_t const *src_ptr =
+    const int16_t *src_ptr =
         src_diff + (idx >> 1) * 8 * src_stride + (idx & 0x01) * 8;
-    vpx_hadamard_8x8_sse2(src_ptr, src_stride, coeff + idx * 64);
+    hadamard_8x8_sse2(src_ptr, src_stride, (tran_low_t *)(t_coeff + idx * 64),
+                      0);
   }
 
   for (idx = 0; idx < 64; idx += 8) {
-    __m128i coeff0 = load_tran_low(coeff);
-    __m128i coeff1 = load_tran_low(coeff + 64);
-    __m128i coeff2 = load_tran_low(coeff + 128);
-    __m128i coeff3 = load_tran_low(coeff + 192);
+    __m128i coeff0 = _mm_load_si128((const __m128i *)t_coeff);
+    __m128i coeff1 = _mm_load_si128((const __m128i *)(t_coeff + 64));
+    __m128i coeff2 = _mm_load_si128((const __m128i *)(t_coeff + 128));
+    __m128i coeff3 = _mm_load_si128((const __m128i *)(t_coeff + 192));
 
     __m128i b0 = _mm_add_epi16(coeff0, coeff1);
     __m128i b1 = _mm_sub_epi16(coeff0, coeff1);
@@ -273,15 +361,80 @@
 
     coeff0 = _mm_add_epi16(b0, b2);
     coeff1 = _mm_add_epi16(b1, b3);
+    coeff2 = _mm_sub_epi16(b0, b2);
+    coeff3 = _mm_sub_epi16(b1, b3);
+
+    if (is_final) {
+      store_tran_low(coeff0, coeff);
+      store_tran_low(coeff1, coeff + 64);
+      store_tran_low(coeff2, coeff + 128);
+      store_tran_low(coeff3, coeff + 192);
+      coeff += 8;
+    } else {
+      _mm_store_si128((__m128i *)coeff16, coeff0);
+      _mm_store_si128((__m128i *)(coeff16 + 64), coeff1);
+      _mm_store_si128((__m128i *)(coeff16 + 128), coeff2);
+      _mm_store_si128((__m128i *)(coeff16 + 192), coeff3);
+      coeff16 += 8;
+    }
+
+    t_coeff += 8;
+  }
+}
+
+void vpx_hadamard_16x16_sse2(const int16_t *src_diff, ptrdiff_t src_stride,
+                             tran_low_t *coeff) {
+  hadamard_16x16_sse2(src_diff, src_stride, coeff, 1);
+}
+
+void vpx_hadamard_32x32_sse2(const int16_t *src_diff, ptrdiff_t src_stride,
+                             tran_low_t *coeff) {
+#if CONFIG_VP9_HIGHBITDEPTH
+  // For high bitdepths, it is unnecessary to store_tran_low
+  // (mult/unpack/store), then load_tran_low (load/pack) the same memory in the
+  // next stage.  Output to an intermediate buffer first, then store_tran_low()
+  // in the final stage.
+  DECLARE_ALIGNED(32, int16_t, temp_coeff[32 * 32]);
+  int16_t *t_coeff = temp_coeff;
+#else
+  int16_t *t_coeff = coeff;
+#endif
+  int idx;
+  for (idx = 0; idx < 4; ++idx) {
+    const int16_t *src_ptr =
+        src_diff + (idx >> 1) * 16 * src_stride + (idx & 0x01) * 16;
+    hadamard_16x16_sse2(src_ptr, src_stride,
+                        (tran_low_t *)(t_coeff + idx * 256), 0);
+  }
+
+  for (idx = 0; idx < 256; idx += 8) {
+    __m128i coeff0 = _mm_load_si128((const __m128i *)t_coeff);
+    __m128i coeff1 = _mm_load_si128((const __m128i *)(t_coeff + 256));
+    __m128i coeff2 = _mm_load_si128((const __m128i *)(t_coeff + 512));
+    __m128i coeff3 = _mm_load_si128((const __m128i *)(t_coeff + 768));
+
+    __m128i b0 = _mm_add_epi16(coeff0, coeff1);
+    __m128i b1 = _mm_sub_epi16(coeff0, coeff1);
+    __m128i b2 = _mm_add_epi16(coeff2, coeff3);
+    __m128i b3 = _mm_sub_epi16(coeff2, coeff3);
+
+    b0 = _mm_srai_epi16(b0, 2);
+    b1 = _mm_srai_epi16(b1, 2);
+    b2 = _mm_srai_epi16(b2, 2);
+    b3 = _mm_srai_epi16(b3, 2);
+
+    coeff0 = _mm_add_epi16(b0, b2);
+    coeff1 = _mm_add_epi16(b1, b3);
     store_tran_low(coeff0, coeff);
-    store_tran_low(coeff1, coeff + 64);
+    store_tran_low(coeff1, coeff + 256);
 
     coeff2 = _mm_sub_epi16(b0, b2);
     coeff3 = _mm_sub_epi16(b1, b3);
-    store_tran_low(coeff2, coeff + 128);
-    store_tran_low(coeff3, coeff + 192);
+    store_tran_low(coeff2, coeff + 512);
+    store_tran_low(coeff3, coeff + 768);
 
     coeff += 8;
+    t_coeff += 8;
   }
 }
 
@@ -311,7 +464,7 @@
   return _mm_cvtsi128_si32(accum);
 }
 
-void vpx_int_pro_row_sse2(int16_t *hbuf, uint8_t const *ref,
+void vpx_int_pro_row_sse2(int16_t *hbuf, const uint8_t *ref,
                           const int ref_stride, const int height) {
   int idx;
   __m128i zero = _mm_setzero_si128();
@@ -360,16 +513,16 @@
   _mm_storeu_si128((__m128i *)hbuf, s1);
 }
 
-int16_t vpx_int_pro_col_sse2(uint8_t const *ref, const int width) {
+int16_t vpx_int_pro_col_sse2(const uint8_t *ref, const int width) {
   __m128i zero = _mm_setzero_si128();
-  __m128i src_line = _mm_load_si128((const __m128i *)ref);
+  __m128i src_line = _mm_loadu_si128((const __m128i *)ref);
   __m128i s0 = _mm_sad_epu8(src_line, zero);
   __m128i s1;
   int i;
 
   for (i = 16; i < width; i += 16) {
     ref += 16;
-    src_line = _mm_load_si128((const __m128i *)ref);
+    src_line = _mm_loadu_si128((const __m128i *)ref);
     s1 = _mm_sad_epu8(src_line, zero);
     s0 = _mm_adds_epu16(s0, s1);
   }
@@ -380,7 +533,7 @@
   return _mm_extract_epi16(s0, 0);
 }
 
-int vpx_vector_var_sse2(int16_t const *ref, int16_t const *src, const int bwl) {
+int vpx_vector_var_sse2(const int16_t *ref, const int16_t *src, const int bwl) {
   int idx;
   int width = 4 << bwl;
   int16_t mean;
@@ -418,7 +571,7 @@
   v1 = _mm_srli_epi64(sse, 32);
   sse = _mm_add_epi32(sse, v1);
 
-  mean = _mm_extract_epi16(sum, 0);
+  mean = (int16_t)_mm_extract_epi16(sum, 0);
 
   return _mm_cvtsi128_si32(sse) - ((mean * mean) >> (bwl + 2));
 }
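The `vpx_hadamard_16x16_sse2` and `vpx_hadamard_32x32_sse2` changes above assemble a larger Hadamard transform from four sub-block transforms: each group of output coefficients is a butterfly of the four inputs, with a normalizing right shift applied first (the 32x32 path in the diff uses `_mm_srai_epi16(b, 2)`; the 16x16 path uses a shift of 1). A scalar sketch of that combine step, with illustrative names and under the assumption that the shift precedes the butterfly as in the 32x32 hunk:

```c
#include <stdint.h>

/* Scalar sketch (not the SSE2 code): combine four n-coefficient sub-block
 * Hadamard outputs c0..c3 into 4*n output coefficients. `shift` is the
 * normalization (1 for 16x16, 2 for 32x32 in the diff above). */
static void combine_hadamard_quads(const int16_t *c0, const int16_t *c1,
                                   const int16_t *c2, const int16_t *c3,
                                   int16_t *out, int n, int shift) {
  int i;
  for (i = 0; i < n; ++i) {
    /* First-stage butterflies, pre-shifted to keep values in 16 bits. */
    const int16_t b0 = (int16_t)((c0[i] + c1[i]) >> shift);
    const int16_t b1 = (int16_t)((c0[i] - c1[i]) >> shift);
    const int16_t b2 = (int16_t)((c2[i] + c3[i]) >> shift);
    const int16_t b3 = (int16_t)((c2[i] - c3[i]) >> shift);
    /* Second-stage butterflies, matching coeff0..coeff3 in the diff. */
    out[i] = b0 + b2;
    out[n + i] = b1 + b3;
    out[2 * n + i] = b0 - b2;
    out[3 * n + i] = b1 - b3;
  }
}
```

The `is_final` plumbing in the diff exists so intermediate stages can stay in a packed `int16_t` buffer and only the last stage pays the `store_tran_low` widening cost under `CONFIG_VP9_HIGHBITDEPTH`.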
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_pred_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_pred_sse2.c
index f83b264..e4e1e0e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_pred_sse2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_pred_sse2.c
@@ -13,11 +13,12 @@
 
 #include "./vpx_dsp_rtcd.h"
 #include "vpx/vpx_integer.h"
+#include "vpx_dsp/x86/mem_sse2.h"
 
-void vpx_comp_avg_pred_sse2(uint8_t *comp, const uint8_t *pred, int width,
+void vpx_comp_avg_pred_sse2(uint8_t *comp_pred, const uint8_t *pred, int width,
                             int height, const uint8_t *ref, int ref_stride) {
-  /* comp and pred must be 16 byte aligned. */
-  assert(((intptr_t)comp & 0xf) == 0);
+  /* comp_pred and pred must be 16 byte aligned. */
+  assert(((intptr_t)comp_pred & 0xf) == 0);
   assert(((intptr_t)pred & 0xf) == 0);
   if (width > 8) {
     int x, y;
@@ -26,17 +27,17 @@
         const __m128i p = _mm_load_si128((const __m128i *)(pred + x));
         const __m128i r = _mm_loadu_si128((const __m128i *)(ref + x));
         const __m128i avg = _mm_avg_epu8(p, r);
-        _mm_store_si128((__m128i *)(comp + x), avg);
+        _mm_store_si128((__m128i *)(comp_pred + x), avg);
       }
-      comp += width;
+      comp_pred += width;
       pred += width;
       ref += ref_stride;
     }
   } else {  // width must be 4 or 8.
     int i;
-    // Process 16 elements at a time. comp and pred have width == stride and
-    // therefore live in contigious memory. 4*4, 4*8, 8*4, 8*8, and 8*16 are all
-    // divisible by 16 so just ref needs to be massaged when loading.
+    // Process 16 elements at a time. comp_pred and pred have width == stride
+    // and therefore live in contiguous memory. 4*4, 4*8, 8*4, 8*8, and 8*16 are
+    // all divisible by 16 so just ref needs to be massaged when loading.
     for (i = 0; i < width * height; i += 16) {
       const __m128i p = _mm_load_si128((const __m128i *)pred);
       __m128i r;
@@ -45,10 +46,9 @@
         r = _mm_loadu_si128((const __m128i *)ref);
         ref += 16;
       } else if (width == 4) {
-        r = _mm_set_epi32(*(const uint32_t *)(ref + 3 * ref_stride),
-                          *(const uint32_t *)(ref + 2 * ref_stride),
-                          *(const uint32_t *)(ref + ref_stride),
-                          *(const uint32_t *)(ref));
+        r = _mm_set_epi32(loadu_uint32(ref + 3 * ref_stride),
+                          loadu_uint32(ref + 2 * ref_stride),
+                          loadu_uint32(ref + ref_stride), loadu_uint32(ref));
 
         ref += 4 * ref_stride;
       } else {
@@ -60,10 +60,10 @@
         ref += 2 * ref_stride;
       }
       avg = _mm_avg_epu8(p, r);
-      _mm_store_si128((__m128i *)comp, avg);
+      _mm_store_si128((__m128i *)comp_pred, avg);
 
       pred += 16;
-      comp += 16;
+      comp_pred += 16;
     }
   }
 }
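Per byte, `vpx_comp_avg_pred_sse2` above computes the `_mm_avg_epu8` result, which is the rounded average `(a + b + 1) >> 1`. A scalar reference under the same layout assumption the diff documents (`comp_pred` and `pred` have stride equal to `width`); the function name here is illustrative:

```c
#include <stdint.h>

/* Scalar reference for the averaging done by vpx_comp_avg_pred_sse2.
 * comp_pred/pred are contiguous (stride == width); ref uses ref_stride. */
static void comp_avg_pred_c(uint8_t *comp_pred, const uint8_t *pred, int width,
                            int height, const uint8_t *ref, int ref_stride) {
  int x, y;
  for (y = 0; y < height; ++y) {
    for (x = 0; x < width; ++x) {
      /* Rounded average, matching the _mm_avg_epu8 semantics. */
      comp_pred[x] = (uint8_t)((pred[x] + ref[x] + 1) >> 1);
    }
    comp_pred += width;
    pred += width;
    ref += ref_stride;
  }
}
```

The width 4/8 branch in the SSE2 code only changes how 16 `ref` bytes are gathered per iteration; the arithmetic is identical.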
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_ssse3_x86_64.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_ssse3_x86_64.asm
index 22e0a08..9122b5a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_ssse3_x86_64.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/avg_ssse3_x86_64.asm
@@ -13,7 +13,7 @@
 
 SECTION .text
 
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
 ; matrix transpose
 %macro TRANSPOSE8X8 10
   ; stage 1
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/bitdepth_conversion_avx2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/bitdepth_conversion_avx2.h
index 3552c07..c02b47a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/bitdepth_conversion_avx2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/bitdepth_conversion_avx2.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef VPX_DSP_X86_BITDEPTH_CONVERSION_AVX2_H_
-#define VPX_DSP_X86_BITDEPTH_CONVERSION_AVX2_H_
+#ifndef VPX_VPX_DSP_X86_BITDEPTH_CONVERSION_AVX2_H_
+#define VPX_VPX_DSP_X86_BITDEPTH_CONVERSION_AVX2_H_
 
 #include <immintrin.h>
 
@@ -41,4 +41,4 @@
   _mm256_storeu_si256((__m256i *)b, a);
 #endif
 }
-#endif  // VPX_DSP_X86_BITDEPTH_CONVERSION_AVX2_H_
+#endif  // VPX_VPX_DSP_X86_BITDEPTH_CONVERSION_AVX2_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/bitdepth_conversion_sse2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/bitdepth_conversion_sse2.h
index 5d1d779..74dde65 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/bitdepth_conversion_sse2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/bitdepth_conversion_sse2.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef VPX_DSP_X86_BITDEPTH_CONVERSION_SSE2_H_
-#define VPX_DSP_X86_BITDEPTH_CONVERSION_SSE2_H_
+#ifndef VPX_VPX_DSP_X86_BITDEPTH_CONVERSION_SSE2_H_
+#define VPX_VPX_DSP_X86_BITDEPTH_CONVERSION_SSE2_H_
 
 #include <xmmintrin.h>
 
@@ -53,4 +53,4 @@
   _mm_store_si128((__m128i *)(a), zero);
 #endif
 }
-#endif  // VPX_DSP_X86_BITDEPTH_CONVERSION_SSE2_H_
+#endif  // VPX_VPX_DSP_X86_BITDEPTH_CONVERSION_SSE2_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve.h
index 68d7589..c339600 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve.h
@@ -7,65 +7,92 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef VPX_DSP_X86_CONVOLVE_H_
-#define VPX_DSP_X86_CONVOLVE_H_
+#ifndef VPX_VPX_DSP_X86_CONVOLVE_H_
+#define VPX_VPX_DSP_X86_CONVOLVE_H_
 
 #include <assert.h>
 
 #include "./vpx_config.h"
 #include "vpx/vpx_integer.h"
-#include "vpx_ports/mem.h"
+#include "vpx_ports/compiler_attributes.h"
 
+// TODO(chiyotsai@google.com): Refactor the code here. Currently this is pretty
+// hacky and awful to read. Note that there is a filter_x[3] == 128 check in
+// HIGHBD_FUN_CONV_2D to avoid seg fault due to the fact that the c function
+// assumes the filter is always 8 tap.
 typedef void filter8_1dfunction(const uint8_t *src_ptr, ptrdiff_t src_pitch,
                                 uint8_t *output_ptr, ptrdiff_t out_pitch,
                                 uint32_t output_height, const int16_t *filter);
 
-#define FUN_CONV_1D(name, offset, step_q4, dir, src_start, avg, opt)         \
+// TODO(chiyotsai@google.com): Remove the is_avg argument to the MACROS once we
+// have 4-tap vert avg filter.
+#define FUN_CONV_1D(name, offset, step_q4, dir, src_start, avg, opt, is_avg) \
   void vpx_convolve8_##name##_##opt(                                         \
       const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst,                \
-      ptrdiff_t dst_stride, const InterpKernel *filter_kernel, int x0_q4,    \
+      ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4,           \
       int x_step_q4, int y0_q4, int y_step_q4, int w, int h) {               \
-    const int16_t *filter = filter_kernel[offset];                           \
+    const int16_t *filter_row = filter[offset];                              \
     (void)x0_q4;                                                             \
     (void)x_step_q4;                                                         \
     (void)y0_q4;                                                             \
     (void)y_step_q4;                                                         \
-    assert(filter[3] != 128);                                                \
+    assert(filter_row[3] != 128);                                            \
     assert(step_q4 == 16);                                                   \
-    if (filter[0] | filter[1] | filter[2]) {                                 \
+    if (filter_row[0] | filter_row[1] | filter_row[6] | filter_row[7]) {     \
+      const int num_taps = 8;                                                \
       while (w >= 16) {                                                      \
         vpx_filter_block1d16_##dir##8_##avg##opt(src_start, src_stride, dst, \
-                                                 dst_stride, h, filter);     \
+                                                 dst_stride, h, filter_row); \
         src += 16;                                                           \
         dst += 16;                                                           \
         w -= 16;                                                             \
       }                                                                      \
       if (w == 8) {                                                          \
         vpx_filter_block1d8_##dir##8_##avg##opt(src_start, src_stride, dst,  \
-                                                dst_stride, h, filter);      \
+                                                dst_stride, h, filter_row);  \
       } else if (w == 4) {                                                   \
         vpx_filter_block1d4_##dir##8_##avg##opt(src_start, src_stride, dst,  \
-                                                dst_stride, h, filter);      \
+                                                dst_stride, h, filter_row);  \
       }                                                                      \
-    } else {                                                                 \
+      (void)num_taps;                                                        \
+    } else if (filter_row[2] | filter_row[5]) {                              \
+      const int num_taps = is_avg ? 8 : 4;                                   \
       while (w >= 16) {                                                      \
-        vpx_filter_block1d16_##dir##2_##avg##opt(src, src_stride, dst,       \
-                                                 dst_stride, h, filter);     \
+        vpx_filter_block1d16_##dir##4_##avg##opt(src_start, src_stride, dst, \
+                                                 dst_stride, h, filter_row); \
         src += 16;                                                           \
         dst += 16;                                                           \
         w -= 16;                                                             \
       }                                                                      \
       if (w == 8) {                                                          \
-        vpx_filter_block1d8_##dir##2_##avg##opt(src, src_stride, dst,        \
-                                                dst_stride, h, filter);      \
+        vpx_filter_block1d8_##dir##4_##avg##opt(src_start, src_stride, dst,  \
+                                                dst_stride, h, filter_row);  \
       } else if (w == 4) {                                                   \
-        vpx_filter_block1d4_##dir##2_##avg##opt(src, src_stride, dst,        \
-                                                dst_stride, h, filter);      \
+        vpx_filter_block1d4_##dir##4_##avg##opt(src_start, src_stride, dst,  \
+                                                dst_stride, h, filter_row);  \
       }                                                                      \
+      (void)num_taps;                                                        \
+    } else {                                                                 \
+      const int num_taps = 2;                                                \
+      while (w >= 16) {                                                      \
+        vpx_filter_block1d16_##dir##2_##avg##opt(src_start, src_stride, dst, \
+                                                 dst_stride, h, filter_row); \
+        src += 16;                                                           \
+        dst += 16;                                                           \
+        w -= 16;                                                             \
+      }                                                                      \
+      if (w == 8) {                                                          \
+        vpx_filter_block1d8_##dir##2_##avg##opt(src_start, src_stride, dst,  \
+                                                dst_stride, h, filter_row);  \
+      } else if (w == 4) {                                                   \
+        vpx_filter_block1d4_##dir##2_##avg##opt(src_start, src_stride, dst,  \
+                                                dst_stride, h, filter_row);  \
+      }                                                                      \
+      (void)num_taps;                                                        \
     }                                                                        \
   }
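The reworked `FUN_CONV_1D` above dispatches on which coefficients of the 8-tap kernel row are live: nonzero outer taps (0, 1, 6, 7) require the full 8-tap path, nonzero taps 2/5 alone permit the new 4-tap path, and otherwise only taps 3/4 remain, so the 2-tap (bilinear) path suffices. A minimal sketch of that classification, with an illustrative function name:

```c
#include <stdint.h>

/* Sketch of the tap-count dispatch in FUN_CONV_1D: classify an 8-tap
 * kernel row by its nonzero coefficients, mirroring the diff's branches. */
static int convolve_num_taps(const int16_t filter_row[8]) {
  if (filter_row[0] | filter_row[1] | filter_row[6] | filter_row[7])
    return 8;  /* outer taps live: full 8-tap filter */
  if (filter_row[2] | filter_row[5])
    return 4;  /* only taps 2..5 live: 4-tap filter */
  return 2;    /* only taps 3/4 live: bilinear */
}
```

This is also why all three branches now pass `src_start` instead of `src`: the 4-tap and 2-tap kernels still index into the row buffer at the same alignment the caller established, and `is_avg` temporarily forces 8 taps where no 4-tap averaging kernel exists yet, per the TODO in the diff.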
 
-#define FUN_CONV_2D(avg, opt)                                                  \
+#define FUN_CONV_2D(avg, opt, is_avg)                                          \
   void vpx_convolve8_##avg##opt(                                               \
       const uint8_t *src, ptrdiff_t src_stride, uint8_t *dst,                  \
       ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4,             \
@@ -79,16 +106,25 @@
     assert(h <= 64);                                                           \
     assert(x_step_q4 == 16);                                                   \
     assert(y_step_q4 == 16);                                                   \
-    if (filter_x[0] | filter_x[1] | filter_x[2]) {                             \
-      DECLARE_ALIGNED(16, uint8_t, fdata2[64 * 71]);                           \
+    if (filter_x[0] | filter_x[1] | filter_x[6] | filter_x[7]) {               \
+      DECLARE_ALIGNED(16, uint8_t, fdata2[64 * 71] VPX_UNINITIALIZED);         \
       vpx_convolve8_horiz_##opt(src - 3 * src_stride, src_stride, fdata2, 64,  \
                                 filter, x0_q4, x_step_q4, y0_q4, y_step_q4, w, \
                                 h + 7);                                        \
       vpx_convolve8_##avg##vert_##opt(fdata2 + 3 * 64, 64, dst, dst_stride,    \
                                       filter, x0_q4, x_step_q4, y0_q4,         \
                                       y_step_q4, w, h);                        \
+    } else if (filter_x[2] | filter_x[5]) {                                    \
+      const int num_taps = is_avg ? 8 : 4;                                     \
+      DECLARE_ALIGNED(16, uint8_t, fdata2[64 * 71] VPX_UNINITIALIZED);         \
+      vpx_convolve8_horiz_##opt(                                               \
+          src - (num_taps / 2 - 1) * src_stride, src_stride, fdata2, 64,       \
+          filter, x0_q4, x_step_q4, y0_q4, y_step_q4, w, h + num_taps - 1);    \
+      vpx_convolve8_##avg##vert_##opt(fdata2 + 64 * (num_taps / 2 - 1), 64,    \
+                                      dst, dst_stride, filter, x0_q4,          \
+                                      x_step_q4, y0_q4, y_step_q4, w, h);      \
     } else {                                                                   \
-      DECLARE_ALIGNED(16, uint8_t, fdata2[64 * 65]);                           \
+      DECLARE_ALIGNED(16, uint8_t, fdata2[64 * 65] VPX_UNINITIALIZED);         \
       vpx_convolve8_horiz_##opt(src, src_stride, fdata2, 64, filter, x0_q4,    \
                                 x_step_q4, y0_q4, y_step_q4, w, h + 1);        \
       vpx_convolve8_##avg##vert_##opt(fdata2, 64, dst, dst_stride, filter,     \
@@ -106,57 +142,86 @@
                                        unsigned int output_height,
                                        const int16_t *filter, int bd);
 
-#define HIGH_FUN_CONV_1D(name, offset, step_q4, dir, src_start, avg, opt)     \
+#define HIGH_FUN_CONV_1D(name, offset, step_q4, dir, src_start, avg, opt,     \
+                         is_avg)                                              \
   void vpx_highbd_convolve8_##name##_##opt(                                   \
       const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst,               \
       ptrdiff_t dst_stride, const InterpKernel *filter_kernel, int x0_q4,     \
       int x_step_q4, int y0_q4, int y_step_q4, int w, int h, int bd) {        \
-    const int16_t *filter = filter_kernel[offset];                            \
-    if (step_q4 == 16 && filter[3] != 128) {                                  \
-      if (filter[0] | filter[1] | filter[2]) {                                \
+    const int16_t *filter_row = filter_kernel[offset];                        \
+    if (step_q4 == 16 && filter_row[3] != 128) {                              \
+      if (filter_row[0] | filter_row[1] | filter_row[6] | filter_row[7]) {    \
+        const int num_taps = 8;                                               \
         while (w >= 16) {                                                     \
           vpx_highbd_filter_block1d16_##dir##8_##avg##opt(                    \
-              src_start, src_stride, dst, dst_stride, h, filter, bd);         \
+              src_start, src_stride, dst, dst_stride, h, filter_row, bd);     \
           src += 16;                                                          \
           dst += 16;                                                          \
           w -= 16;                                                            \
         }                                                                     \
         while (w >= 8) {                                                      \
           vpx_highbd_filter_block1d8_##dir##8_##avg##opt(                     \
-              src_start, src_stride, dst, dst_stride, h, filter, bd);         \
+              src_start, src_stride, dst, dst_stride, h, filter_row, bd);     \
           src += 8;                                                           \
           dst += 8;                                                           \
           w -= 8;                                                             \
         }                                                                     \
         while (w >= 4) {                                                      \
           vpx_highbd_filter_block1d4_##dir##8_##avg##opt(                     \
-              src_start, src_stride, dst, dst_stride, h, filter, bd);         \
+              src_start, src_stride, dst, dst_stride, h, filter_row, bd);     \
           src += 4;                                                           \
           dst += 4;                                                           \
           w -= 4;                                                             \
         }                                                                     \
+        (void)num_taps;                                                       \
+      } else if (filter_row[2] | filter_row[5]) {                             \
+        const int num_taps = is_avg ? 8 : 4;                                  \
+        while (w >= 16) {                                                     \
+          vpx_highbd_filter_block1d16_##dir##4_##avg##opt(                    \
+              src_start, src_stride, dst, dst_stride, h, filter_row, bd);     \
+          src += 16;                                                          \
+          dst += 16;                                                          \
+          w -= 16;                                                            \
+        }                                                                     \
+        while (w >= 8) {                                                      \
+          vpx_highbd_filter_block1d8_##dir##4_##avg##opt(                     \
+              src_start, src_stride, dst, dst_stride, h, filter_row, bd);     \
+          src += 8;                                                           \
+          dst += 8;                                                           \
+          w -= 8;                                                             \
+        }                                                                     \
+        while (w >= 4) {                                                      \
+          vpx_highbd_filter_block1d4_##dir##4_##avg##opt(                     \
+              src_start, src_stride, dst, dst_stride, h, filter_row, bd);     \
+          src += 4;                                                           \
+          dst += 4;                                                           \
+          w -= 4;                                                             \
+        }                                                                     \
+        (void)num_taps;                                                       \
       } else {                                                                \
+        const int num_taps = 2;                                               \
         while (w >= 16) {                                                     \
           vpx_highbd_filter_block1d16_##dir##2_##avg##opt(                    \
-              src, src_stride, dst, dst_stride, h, filter, bd);               \
+              src_start, src_stride, dst, dst_stride, h, filter_row, bd);     \
           src += 16;                                                          \
           dst += 16;                                                          \
           w -= 16;                                                            \
         }                                                                     \
         while (w >= 8) {                                                      \
           vpx_highbd_filter_block1d8_##dir##2_##avg##opt(                     \
-              src, src_stride, dst, dst_stride, h, filter, bd);               \
+              src_start, src_stride, dst, dst_stride, h, filter_row, bd);     \
           src += 8;                                                           \
           dst += 8;                                                           \
           w -= 8;                                                             \
         }                                                                     \
         while (w >= 4) {                                                      \
           vpx_highbd_filter_block1d4_##dir##2_##avg##opt(                     \
-              src, src_stride, dst, dst_stride, h, filter, bd);               \
+              src_start, src_stride, dst, dst_stride, h, filter_row, bd);     \
           src += 4;                                                           \
           dst += 4;                                                           \
           w -= 4;                                                             \
         }                                                                     \
+        (void)num_taps;                                                       \
       }                                                                       \
     }                                                                         \
     if (w) {                                                                  \
@@ -166,7 +231,7 @@
     }                                                                         \
   }
 
-#define HIGH_FUN_CONV_2D(avg, opt)                                             \
+#define HIGH_FUN_CONV_2D(avg, opt, is_avg)                                     \
   void vpx_highbd_convolve8_##avg##opt(                                        \
       const uint16_t *src, ptrdiff_t src_stride, uint16_t *dst,                \
       ptrdiff_t dst_stride, const InterpKernel *filter, int x0_q4,             \
@@ -175,16 +240,27 @@
     assert(w <= 64);                                                           \
     assert(h <= 64);                                                           \
     if (x_step_q4 == 16 && y_step_q4 == 16) {                                  \
-      if ((filter_x[0] | filter_x[1] | filter_x[2]) || filter_x[3] == 128) {   \
-        DECLARE_ALIGNED(16, uint16_t, fdata2[64 * 71]);                        \
+      if ((filter_x[0] | filter_x[1] | filter_x[6] | filter_x[7]) ||           \
+          filter_x[3] == 128) {                                                \
+        DECLARE_ALIGNED(16, uint16_t, fdata2[64 * 71] VPX_UNINITIALIZED);      \
         vpx_highbd_convolve8_horiz_##opt(src - 3 * src_stride, src_stride,     \
                                          fdata2, 64, filter, x0_q4, x_step_q4, \
                                          y0_q4, y_step_q4, w, h + 7, bd);      \
         vpx_highbd_convolve8_##avg##vert_##opt(                                \
             fdata2 + 192, 64, dst, dst_stride, filter, x0_q4, x_step_q4,       \
             y0_q4, y_step_q4, w, h, bd);                                       \
+      } else if (filter_x[2] | filter_x[5]) {                                  \
+        const int num_taps = is_avg ? 8 : 4;                                   \
+        DECLARE_ALIGNED(16, uint16_t, fdata2[64 * 71] VPX_UNINITIALIZED);      \
+        vpx_highbd_convolve8_horiz_##opt(                                      \
+            src - (num_taps / 2 - 1) * src_stride, src_stride, fdata2, 64,     \
+            filter, x0_q4, x_step_q4, y0_q4, y_step_q4, w, h + num_taps - 1,   \
+            bd);                                                               \
+        vpx_highbd_convolve8_##avg##vert_##opt(                                \
+            fdata2 + 64 * (num_taps / 2 - 1), 64, dst, dst_stride, filter,     \
+            x0_q4, x_step_q4, y0_q4, y_step_q4, w, h, bd);                     \
       } else {                                                                 \
-        DECLARE_ALIGNED(16, uint16_t, fdata2[64 * 65]);                        \
+        DECLARE_ALIGNED(16, uint16_t, fdata2[64 * 65] VPX_UNINITIALIZED);      \
         vpx_highbd_convolve8_horiz_##opt(src, src_stride, fdata2, 64, filter,  \
                                          x0_q4, x_step_q4, y0_q4, y_step_q4,   \
                                          w, h + 1, bd);                        \
@@ -198,6 +274,6 @@
                                     bd);                                       \
     }                                                                          \
   }
-#endif  // CONFIG_VP9_HIGHBITDEPTH
 
-#endif  // VPX_DSP_X86_CONVOLVE_H_
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+#endif  // VPX_VPX_DSP_X86_CONVOLVE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_avx2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_avx2.h
index bc96b73..99bc963 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_avx2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_avx2.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_X86_CONVOLVE_AVX2_H_
-#define VPX_DSP_X86_CONVOLVE_AVX2_H_
+#ifndef VPX_VPX_DSP_X86_CONVOLVE_AVX2_H_
+#define VPX_VPX_DSP_X86_CONVOLVE_AVX2_H_
 
 #include <immintrin.h>  // AVX2
 
@@ -100,6 +100,63 @@
   return sum1;
 }
 
+static INLINE __m256i mm256_loadu2_si128(const void *lo, const void *hi) {
+  const __m256i tmp =
+      _mm256_castsi128_si256(_mm_loadu_si128((const __m128i *)lo));
+  return _mm256_inserti128_si256(tmp, _mm_loadu_si128((const __m128i *)hi), 1);
+}
+
+static INLINE __m256i mm256_loadu2_epi64(const void *lo, const void *hi) {
+  const __m256i tmp =
+      _mm256_castsi128_si256(_mm_loadl_epi64((const __m128i *)lo));
+  return _mm256_inserti128_si256(tmp, _mm_loadl_epi64((const __m128i *)hi), 1);
+}
+
+static INLINE void mm256_store2_si128(__m128i *const dst_ptr_1,
+                                      __m128i *const dst_ptr_2,
+                                      const __m256i *const src) {
+  _mm_store_si128(dst_ptr_1, _mm256_castsi256_si128(*src));
+  _mm_store_si128(dst_ptr_2, _mm256_extractf128_si256(*src, 1));
+}
+
+static INLINE void mm256_storeu2_epi64(__m128i *const dst_ptr_1,
+                                       __m128i *const dst_ptr_2,
+                                       const __m256i *const src) {
+  _mm_storel_epi64(dst_ptr_1, _mm256_castsi256_si128(*src));
+  _mm_storel_epi64(dst_ptr_2, _mm256_extractf128_si256(*src, 1));
+}
+
+static INLINE void mm256_storeu2_epi32(__m128i *const dst_ptr_1,
+                                       __m128i *const dst_ptr_2,
+                                       const __m256i *const src) {
+  *((uint32_t *)(dst_ptr_1)) = _mm_cvtsi128_si32(_mm256_castsi256_si128(*src));
+  *((uint32_t *)(dst_ptr_2)) =
+      _mm_cvtsi128_si32(_mm256_extractf128_si256(*src, 1));
+}
+
+static INLINE __m256i mm256_round_epi32(const __m256i *const src,
+                                        const __m256i *const half_depth,
+                                        const int depth) {
+  const __m256i nearest_src = _mm256_add_epi32(*src, *half_depth);
+  return _mm256_srai_epi32(nearest_src, depth);
+}
+
+static INLINE __m256i mm256_round_epi16(const __m256i *const src,
+                                        const __m256i *const half_depth,
+                                        const int depth) {
+  const __m256i nearest_src = _mm256_adds_epi16(*src, *half_depth);
+  return _mm256_srai_epi16(nearest_src, depth);
+}
+
+static INLINE __m256i mm256_madd_add_epi32(const __m256i *const src_0,
+                                           const __m256i *const src_1,
+                                           const __m256i *const ker_0,
+                                           const __m256i *const ker_1) {
+  const __m256i tmp_0 = _mm256_madd_epi16(*src_0, *ker_0);
+  const __m256i tmp_1 = _mm256_madd_epi16(*src_1, *ker_1);
+  return _mm256_add_epi32(tmp_0, tmp_1);
+}
+
 #undef MM256_BROADCASTSI128_SI256
 
-#endif  // VPX_DSP_X86_CONVOLVE_AVX2_H_
+#endif  // VPX_VPX_DSP_X86_CONVOLVE_AVX2_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_sse2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_sse2.h
new file mode 100644
index 0000000..8443546
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_sse2.h
@@ -0,0 +1,88 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VPX_DSP_X86_CONVOLVE_SSE2_H_
+#define VPX_VPX_DSP_X86_CONVOLVE_SSE2_H_
+
+#include <emmintrin.h>  // SSE2
+
+#include "./vpx_config.h"
+
+// Interprets the input register as 16-bit words 7 6 5 4 3 2 1 0, then
+// duplicates words 2 and 3 to return 3 2 3 2 3 2 3 2 as 16-bit words.
+static INLINE __m128i extract_quarter_2_epi16_sse2(const __m128i *const reg) {
+  __m128i tmp = _mm_unpacklo_epi32(*reg, *reg);
+  return _mm_unpackhi_epi64(tmp, tmp);
+}
+
+// Interprets the input register as 16-bit words 7 6 5 4 3 2 1 0, then
+// duplicates words 4 and 5 to return 5 4 5 4 5 4 5 4 as 16-bit words.
+static INLINE __m128i extract_quarter_3_epi16_sse2(const __m128i *const reg) {
+  __m128i tmp = _mm_unpackhi_epi32(*reg, *reg);
+  return _mm_unpacklo_epi64(tmp, tmp);
+}
+
+// Interprets src as 8-bit words, zero-extends to form 16-bit words, then
+// multiplies with the kernel and adds adjacent results to form 32-bit words.
+// Finally adds the results from src_1 and src_2 together.
+static INLINE __m128i mm_madd_add_epi8_sse2(const __m128i *const src_1,
+                                            const __m128i *const src_2,
+                                            const __m128i *const ker_1,
+                                            const __m128i *const ker_2) {
+  const __m128i src_1_half = _mm_unpacklo_epi8(*src_1, _mm_setzero_si128());
+  const __m128i src_2_half = _mm_unpacklo_epi8(*src_2, _mm_setzero_si128());
+  const __m128i madd_1 = _mm_madd_epi16(src_1_half, *ker_1);
+  const __m128i madd_2 = _mm_madd_epi16(src_2_half, *ker_2);
+  return _mm_add_epi32(madd_1, madd_2);
+}
+
+// Interprets src as 16-bit words, then multiplies with the kernel and adds
+// adjacent results to form 32-bit words. Finally adds the results from src_1
+// and src_2 together.
+static INLINE __m128i mm_madd_add_epi16_sse2(const __m128i *const src_1,
+                                             const __m128i *const src_2,
+                                             const __m128i *const ker_1,
+                                             const __m128i *const ker_2) {
+  const __m128i madd_1 = _mm_madd_epi16(*src_1, *ker_1);
+  const __m128i madd_2 = _mm_madd_epi16(*src_2, *ker_2);
+  return _mm_add_epi32(madd_1, madd_2);
+}
+
+static INLINE __m128i mm_madd_packs_epi16_sse2(const __m128i *const src_0,
+                                               const __m128i *const src_1,
+                                               const __m128i *const ker) {
+  const __m128i madd_1 = _mm_madd_epi16(*src_0, *ker);
+  const __m128i madd_2 = _mm_madd_epi16(*src_1, *ker);
+  return _mm_packs_epi32(madd_1, madd_2);
+}
+
+// Interleaves the 32-bit lanes of src_1 and src_2, packed to 16-bit words.
+static INLINE __m128i mm_zip_epi32_sse2(const __m128i *const src_1,
+                                        const __m128i *const src_2) {
+  const __m128i tmp_1 = _mm_unpacklo_epi32(*src_1, *src_2);
+  const __m128i tmp_2 = _mm_unpackhi_epi32(*src_1, *src_2);
+  return _mm_packs_epi32(tmp_1, tmp_2);
+}
+
+static INLINE __m128i mm_round_epi32_sse2(const __m128i *const src,
+                                          const __m128i *const half_depth,
+                                          const int depth) {
+  const __m128i nearest_src = _mm_add_epi32(*src, *half_depth);
+  return _mm_srai_epi32(nearest_src, depth);
+}
+
+static INLINE __m128i mm_round_epi16_sse2(const __m128i *const src,
+                                          const __m128i *const half_depth,
+                                          const int depth) {
+  const __m128i nearest_src = _mm_adds_epi16(*src, *half_depth);
+  return _mm_srai_epi16(nearest_src, depth);
+}
+
+#endif  // VPX_VPX_DSP_X86_CONVOLVE_SSE2_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_ssse3.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_ssse3.h
index e5d452f..8a4b165 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_ssse3.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/convolve_ssse3.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_X86_CONVOLVE_SSSE3_H_
-#define VPX_DSP_X86_CONVOLVE_SSSE3_H_
+#ifndef VPX_VPX_DSP_X86_CONVOLVE_SSSE3_H_
+#define VPX_VPX_DSP_X86_CONVOLVE_SSSE3_H_
 
 #include <assert.h>
 #include <tmmintrin.h>  // SSSE3
@@ -109,4 +109,4 @@
   return temp;
 }
 
-#endif  // VPX_DSP_X86_CONVOLVE_SSSE3_H_
+#endif  // VPX_VPX_DSP_X86_CONVOLVE_SSSE3_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/deblock_sse2.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/deblock_sse2.asm
index 97cb43b..9d8e5e3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/deblock_sse2.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/deblock_sse2.asm
@@ -232,237 +232,6 @@
     ret
 %undef flimit
 
-;void vpx_mbpost_proc_down_sse2(unsigned char *dst,
-;                               int pitch, int rows, int cols,int flimit)
-extern sym(vpx_rv)
-global sym(vpx_mbpost_proc_down_sse2) PRIVATE
-sym(vpx_mbpost_proc_down_sse2):
-    push        rbp
-    mov         rbp, rsp
-    SHADOW_ARGS_TO_STACK 5
-    SAVE_XMM 7
-    GET_GOT     rbx
-    push        rsi
-    push        rdi
-    ; end prolog
-
-    ALIGN_STACK 16, rax
-    sub         rsp, 128+16
-
-    ; unsigned char d[16][8] at [rsp]
-    ; create flimit2 at [rsp+128]
-    mov         eax, dword ptr arg(4) ;flimit
-    mov         [rsp+128], eax
-    mov         [rsp+128+4], eax
-    mov         [rsp+128+8], eax
-    mov         [rsp+128+12], eax
-%define flimit4 [rsp+128]
-
-%if ABI_IS_32BIT=0
-    lea         r8,       [GLOBAL(sym(vpx_rv))]
-%endif
-
-    ;rows +=8;
-    add         dword arg(2), 8
-
-    ;for(c=0; c<cols; c+=8)
-.loop_col:
-            mov         rsi,        arg(0) ; s
-            pxor        xmm0,       xmm0        ;
-
-            movsxd      rax,        dword ptr arg(1) ;pitch       ;
-
-            ; this copies the last row down into the border 8 rows
-            mov         rdi,        rsi
-            mov         rdx,        arg(2)
-            sub         rdx,        9
-            imul        rdx,        rax
-            lea         rdi,        [rdi+rdx]
-            movq        xmm1,       QWORD ptr[rdi]              ; first row
-            mov         rcx,        8
-.init_borderd:                                                  ; initialize borders
-            lea         rdi,        [rdi + rax]
-            movq        [rdi],      xmm1
-
-            dec         rcx
-            jne         .init_borderd
-
-            neg         rax                                     ; rax = -pitch
-
-            ; this copies the first row up into the border 8 rows
-            mov         rdi,        rsi
-            movq        xmm1,       QWORD ptr[rdi]              ; first row
-            mov         rcx,        8
-.init_border:                                                   ; initialize borders
-            lea         rdi,        [rdi + rax]
-            movq        [rdi],      xmm1
-
-            dec         rcx
-            jne         .init_border
-
-
-
-            lea         rsi,        [rsi + rax*8];              ; rdi = s[-pitch*8]
-            neg         rax
-
-            pxor        xmm5,       xmm5
-            pxor        xmm6,       xmm6        ;
-
-            pxor        xmm7,       xmm7        ;
-            mov         rdi,        rsi
-
-            mov         rcx,        15          ;
-
-.loop_initvar:
-            movq        xmm1,       QWORD PTR [rdi];
-            punpcklbw   xmm1,       xmm0        ;
-
-            paddw       xmm5,       xmm1        ;
-            pmullw      xmm1,       xmm1        ;
-
-            movdqa      xmm2,       xmm1        ;
-            punpcklwd   xmm1,       xmm0        ;
-
-            punpckhwd   xmm2,       xmm0        ;
-            paddd       xmm6,       xmm1        ;
-
-            paddd       xmm7,       xmm2        ;
-            lea         rdi,        [rdi+rax]   ;
-
-            dec         rcx
-            jne         .loop_initvar
-            ;save the var and sum
-            xor         rdx,        rdx
-.loop_row:
-            movq        xmm1,       QWORD PTR [rsi]     ; [s-pitch*8]
-            movq        xmm2,       QWORD PTR [rdi]     ; [s+pitch*7]
-
-            punpcklbw   xmm1,       xmm0
-            punpcklbw   xmm2,       xmm0
-
-            paddw       xmm5,       xmm2
-            psubw       xmm5,       xmm1
-
-            pmullw      xmm2,       xmm2
-            movdqa      xmm4,       xmm2
-
-            punpcklwd   xmm2,       xmm0
-            punpckhwd   xmm4,       xmm0
-
-            paddd       xmm6,       xmm2
-            paddd       xmm7,       xmm4
-
-            pmullw      xmm1,       xmm1
-            movdqa      xmm2,       xmm1
-
-            punpcklwd   xmm1,       xmm0
-            psubd       xmm6,       xmm1
-
-            punpckhwd   xmm2,       xmm0
-            psubd       xmm7,       xmm2
-
-
-            movdqa      xmm3,       xmm6
-            pslld       xmm3,       4
-
-            psubd       xmm3,       xmm6
-            movdqa      xmm1,       xmm5
-
-            movdqa      xmm4,       xmm5
-            pmullw      xmm1,       xmm1
-
-            pmulhw      xmm4,       xmm4
-            movdqa      xmm2,       xmm1
-
-            punpcklwd   xmm1,       xmm4
-            punpckhwd   xmm2,       xmm4
-
-            movdqa      xmm4,       xmm7
-            pslld       xmm4,       4
-
-            psubd       xmm4,       xmm7
-
-            psubd       xmm3,       xmm1
-            psubd       xmm4,       xmm2
-
-            psubd       xmm3,       flimit4
-            psubd       xmm4,       flimit4
-
-            psrad       xmm3,       31
-            psrad       xmm4,       31
-
-            packssdw    xmm3,       xmm4
-            packsswb    xmm3,       xmm0
-
-            movq        xmm1,       QWORD PTR [rsi+rax*8]
-
-            movq        xmm2,       xmm1
-            punpcklbw   xmm1,       xmm0
-
-            paddw       xmm1,       xmm5
-            mov         rcx,        rdx
-
-            and         rcx,        127
-%if ABI_IS_32BIT=1 && CONFIG_PIC=1
-            push        rax
-            lea         rax,        [GLOBAL(sym(vpx_rv))]
-            movdqu      xmm4,       [rax + rcx*2] ;vpx_rv[rcx*2]
-            pop         rax
-%elif ABI_IS_32BIT=0
-            movdqu      xmm4,       [r8 + rcx*2] ;vpx_rv[rcx*2]
-%else
-            movdqu      xmm4,       [sym(vpx_rv) + rcx*2]
-%endif
-
-            paddw       xmm1,       xmm4
-            ;paddw     xmm1,       eight8s
-            psraw       xmm1,       4
-
-            packuswb    xmm1,       xmm0
-            pand        xmm1,       xmm3
-
-            pandn       xmm3,       xmm2
-            por         xmm1,       xmm3
-
-            and         rcx,        15
-            movq        QWORD PTR   [rsp + rcx*8], xmm1 ;d[rcx*8]
-
-            cmp         edx,        8
-            jl          .skip_assignment
-
-            mov         rcx,        rdx
-            sub         rcx,        8
-            and         rcx,        15
-            movq        mm0,        [rsp + rcx*8] ;d[rcx*8]
-            movq        [rsi],      mm0
-
-.skip_assignment:
-            lea         rsi,        [rsi+rax]
-
-            lea         rdi,        [rdi+rax]
-            add         rdx,        1
-
-            cmp         edx,        dword arg(2) ;rows
-            jl          .loop_row
-
-        add         dword arg(0), 8 ; s += 8
-        sub         dword arg(3), 8 ; cols -= 8
-        cmp         dword arg(3), 0
-        jg          .loop_col
-
-    add         rsp, 128+16
-    pop         rsp
-
-    ; begin epilog
-    pop rdi
-    pop rsi
-    RESTORE_GOT
-    RESTORE_XMM
-    UNSHADOW_ARGS
-    pop         rbp
-    ret
-%undef flimit4
-
 
 ;void vpx_mbpost_proc_across_ip_sse2(unsigned char *src,
 ;                                    int pitch, int rows, int cols,int flimit)
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_dct32x32_impl_sse2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_dct32x32_impl_sse2.h
index 2dc1462..ac1246f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_dct32x32_impl_sse2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_dct32x32_impl_sse2.h
@@ -21,7 +21,7 @@
 #define ADD_EPI16 _mm_adds_epi16
 #define SUB_EPI16 _mm_subs_epi16
 #if FDCT32x32_HIGH_PRECISION
-void vpx_fdct32x32_rows_c(const int16_t *intermediate, tran_low_t *out) {
+static void vpx_fdct32x32_rows_c(const int16_t *intermediate, tran_low_t *out) {
   int i, j;
   for (i = 0; i < 32; ++i) {
     tran_high_t temp_in[32], temp_out[32];
@@ -35,7 +35,8 @@
 #define HIGH_FDCT32x32_2D_C vpx_highbd_fdct32x32_c
 #define HIGH_FDCT32x32_2D_ROWS_C vpx_fdct32x32_rows_c
 #else
-void vpx_fdct32x32_rd_rows_c(const int16_t *intermediate, tran_low_t *out) {
+static void vpx_fdct32x32_rd_rows_c(const int16_t *intermediate,
+                                    tran_low_t *out) {
   int i, j;
   for (i = 0; i < 32; ++i) {
     tran_high_t temp_in[32], temp_out[32];
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_avx2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_avx2.c
index 21f11f0..a2ed420 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_avx2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_avx2.c
@@ -9,7 +9,9 @@
  */
 
 #include "./vpx_config.h"
+#include "./vpx_dsp_rtcd.h"
 
+#if !CONFIG_VP9_HIGHBITDEPTH
 #define FDCT32x32_2D_AVX2 vpx_fdct32x32_rd_avx2
 #define FDCT32x32_HIGH_PRECISION 0
 #include "vpx_dsp/x86/fwd_dct32x32_impl_avx2.h"
@@ -21,3 +23,4 @@
 #include "vpx_dsp/x86/fwd_dct32x32_impl_avx2.h"  // NOLINT
 #undef FDCT32x32_2D_AVX2
 #undef FDCT32x32_HIGH_PRECISION
+#endif  // !CONFIG_VP9_HIGHBITDEPTH
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_impl_sse2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_impl_sse2.h
index fd28d0d..d546f02 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_impl_sse2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_impl_sse2.h
@@ -93,9 +93,9 @@
 #if DCT_HIGH_BIT_DEPTH
   // Check inputs small enough to use optimised code
   cmp0 = _mm_xor_si128(_mm_cmpgt_epi16(in0, _mm_set1_epi16(0x3ff)),
-                       _mm_cmplt_epi16(in0, _mm_set1_epi16(0xfc00)));
+                       _mm_cmplt_epi16(in0, _mm_set1_epi16((int16_t)0xfc00)));
   cmp1 = _mm_xor_si128(_mm_cmpgt_epi16(in1, _mm_set1_epi16(0x3ff)),
-                       _mm_cmplt_epi16(in1, _mm_set1_epi16(0xfc00)));
+                       _mm_cmplt_epi16(in1, _mm_set1_epi16((int16_t)0xfc00)));
   test = _mm_movemask_epi8(_mm_or_si128(cmp0, cmp1));
   if (test) {
     vpx_highbd_fdct4x4_c(input, output, stride);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_sse2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_sse2.h
index 5201e76..5aa2779 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_sse2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_sse2.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_X86_FWD_TXFM_SSE2_H_
-#define VPX_DSP_X86_FWD_TXFM_SSE2_H_
+#ifndef VPX_VPX_DSP_X86_FWD_TXFM_SSE2_H_
+#define VPX_VPX_DSP_X86_FWD_TXFM_SSE2_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -36,7 +36,7 @@
 static INLINE int check_epi16_overflow_x2(const __m128i *preg0,
                                           const __m128i *preg1) {
   const __m128i max_overflow = _mm_set1_epi16(0x7fff);
-  const __m128i min_overflow = _mm_set1_epi16(0x8000);
+  const __m128i min_overflow = _mm_set1_epi16((short)0x8000);
   __m128i cmp0 = _mm_or_si128(_mm_cmpeq_epi16(*preg0, max_overflow),
                               _mm_cmpeq_epi16(*preg0, min_overflow));
   __m128i cmp1 = _mm_or_si128(_mm_cmpeq_epi16(*preg1, max_overflow),
@@ -50,7 +50,7 @@
                                           const __m128i *preg2,
                                           const __m128i *preg3) {
   const __m128i max_overflow = _mm_set1_epi16(0x7fff);
-  const __m128i min_overflow = _mm_set1_epi16(0x8000);
+  const __m128i min_overflow = _mm_set1_epi16((short)0x8000);
   __m128i cmp0 = _mm_or_si128(_mm_cmpeq_epi16(*preg0, max_overflow),
                               _mm_cmpeq_epi16(*preg0, min_overflow));
   __m128i cmp1 = _mm_or_si128(_mm_cmpeq_epi16(*preg1, max_overflow),
@@ -368,4 +368,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_DSP_X86_FWD_TXFM_SSE2_H_
+#endif  // VPX_VPX_DSP_X86_FWD_TXFM_SSE2_H_

diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_ssse3_x86_64.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_ssse3_x86_64.asm
index 32824a0..2c338fb 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_ssse3_x86_64.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/fwd_txfm_ssse3_x86_64.asm
@@ -27,7 +27,7 @@
 
 SECTION .text
 
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
 INIT_XMM ssse3
 cglobal fdct8x8, 3, 5, 13, input, output, stride
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_convolve_avx2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_convolve_avx2.c
index ef94522..3209625 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_convolve_avx2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_convolve_avx2.c
@@ -9,9 +9,9 @@
  */
 
 #include <immintrin.h>
-
 #include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/x86/convolve.h"
+#include "vpx_dsp/x86/convolve_avx2.h"
 
 // -----------------------------------------------------------------------------
 // Copy and average
@@ -20,7 +20,7 @@
                                    uint16_t *dst, ptrdiff_t dst_stride,
                                    const InterpKernel *filter, int x0_q4,
                                    int x_step_q4, int y0_q4, int y_step_q4,
-                                   int width, int h, int bd) {
+                                   int w, int h, int bd) {
   (void)filter;
   (void)x0_q4;
   (void)x_step_q4;
@@ -28,8 +28,8 @@
   (void)y_step_q4;
   (void)bd;
 
-  assert(width % 4 == 0);
-  if (width > 32) {  // width = 64
+  assert(w % 4 == 0);
+  if (w > 32) {  // w = 64
     do {
       const __m256i p0 = _mm256_loadu_si256((const __m256i *)src);
       const __m256i p1 = _mm256_loadu_si256((const __m256i *)(src + 16));
@@ -43,7 +43,7 @@
       dst += dst_stride;
       h--;
     } while (h > 0);
-  } else if (width > 16) {  // width = 32
+  } else if (w > 16) {  // w = 32
     do {
       const __m256i p0 = _mm256_loadu_si256((const __m256i *)src);
       const __m256i p1 = _mm256_loadu_si256((const __m256i *)(src + 16));
@@ -53,7 +53,7 @@
       dst += dst_stride;
       h--;
     } while (h > 0);
-  } else if (width > 8) {  // width = 16
+  } else if (w > 8) {  // w = 16
     __m256i p0, p1;
     do {
       p0 = _mm256_loadu_si256((const __m256i *)src);
@@ -67,7 +67,7 @@
       dst += dst_stride;
       h -= 2;
     } while (h > 0);
-  } else if (width > 4) {  // width = 8
+  } else if (w > 4) {  // w = 8
     __m128i p0, p1;
     do {
       p0 = _mm_loadu_si128((const __m128i *)src);
@@ -81,7 +81,7 @@
       dst += dst_stride;
       h -= 2;
     } while (h > 0);
-  } else {  // width = 4
+  } else {  // w = 4
     __m128i p0, p1;
     do {
       p0 = _mm_loadl_epi64((const __m128i *)src);
@@ -102,7 +102,7 @@
                                   uint16_t *dst, ptrdiff_t dst_stride,
                                   const InterpKernel *filter, int x0_q4,
                                   int x_step_q4, int y0_q4, int y_step_q4,
-                                  int width, int h, int bd) {
+                                  int w, int h, int bd) {
   (void)filter;
   (void)x0_q4;
   (void)x_step_q4;
@@ -110,8 +110,8 @@
   (void)y_step_q4;
   (void)bd;
 
-  assert(width % 4 == 0);
-  if (width > 32) {  // width = 64
+  assert(w % 4 == 0);
+  if (w > 32) {  // w = 64
     __m256i p0, p1, p2, p3, u0, u1, u2, u3;
     do {
       p0 = _mm256_loadu_si256((const __m256i *)src);
@@ -130,7 +130,7 @@
       dst += dst_stride;
       h--;
     } while (h > 0);
-  } else if (width > 16) {  // width = 32
+  } else if (w > 16) {  // w = 32
     __m256i p0, p1, u0, u1;
     do {
       p0 = _mm256_loadu_si256((const __m256i *)src);
@@ -143,7 +143,7 @@
       dst += dst_stride;
       h--;
     } while (h > 0);
-  } else if (width > 8) {  // width = 16
+  } else if (w > 8) {  // w = 16
     __m256i p0, p1, u0, u1;
     do {
       p0 = _mm256_loadu_si256((const __m256i *)src);
@@ -158,7 +158,7 @@
       dst += dst_stride << 1;
       h -= 2;
     } while (h > 0);
-  } else if (width > 4) {  // width = 8
+  } else if (w > 4) {  // w = 8
     __m128i p0, p1, u0, u1;
     do {
       p0 = _mm_loadu_si128((const __m128i *)src);
@@ -172,7 +172,7 @@
       dst += dst_stride << 1;
       h -= 2;
     } while (h > 0);
-  } else {  // width = 4
+  } else {  // w = 4
     __m128i p0, p1, u0, u1;
     do {
       p0 = _mm_loadl_epi64((const __m128i *)src);
@@ -209,6 +209,7 @@
 static const uint32_t signal_index[8] = { 2, 3, 4, 5, 2, 3, 4, 5 };
 
 #define CONV8_ROUNDING_BITS (7)
+#define CONV8_ROUNDING_NUM (1 << (CONV8_ROUNDING_BITS - 1))
 
 // -----------------------------------------------------------------------------
 // Horizontal Filtering
@@ -923,6 +924,196 @@
   } while (height > 0);
 }
 
+static void vpx_highbd_filter_block1d4_h4_avx2(
+    const uint16_t *src_ptr, ptrdiff_t src_stride, uint16_t *dst_ptr,
+    ptrdiff_t dst_stride, uint32_t height, const int16_t *kernel, int bd) {
+  // We extract the middle four elements of the kernel into two registers in
+  // the form
+  // ... k[3] k[2] k[3] k[2]
+  // ... k[5] k[4] k[5] k[4]
+  // Then we shuffle the source into
+  // ... s[1] s[0] s[0] s[-1]
+  // ... s[3] s[2] s[2] s[1]
+  // Calling multiply and add gives us half of the sum. Calling add on the
+  // two halves gives us the output. Since AVX2 provides 256-bit registers,
+  // we can do this two rows at a time.
+
+  __m256i src_reg, src_reg_shift_0, src_reg_shift_2;
+  __m256i res_reg;
+  __m256i idx_shift_0 =
+      _mm256_setr_epi8(0, 1, 2, 3, 2, 3, 4, 5, 4, 5, 6, 7, 6, 7, 8, 9, 0, 1, 2,
+                       3, 2, 3, 4, 5, 4, 5, 6, 7, 6, 7, 8, 9);
+  __m256i idx_shift_2 =
+      _mm256_setr_epi8(4, 5, 6, 7, 6, 7, 8, 9, 8, 9, 10, 11, 10, 11, 12, 13, 4,
+                       5, 6, 7, 6, 7, 8, 9, 8, 9, 10, 11, 10, 11, 12, 13);
+
+  __m128i kernel_reg_128;  // Kernel
+  __m256i kernel_reg, kernel_reg_23,
+      kernel_reg_45;  // Segments of the kernel used
+  const __m256i reg_round =
+      _mm256_set1_epi32(CONV8_ROUNDING_NUM);  // Used for rounding
+  const __m256i reg_max = _mm256_set1_epi16((1 << bd) - 1);
+  const ptrdiff_t unrolled_src_stride = src_stride << 1;
+  const ptrdiff_t unrolled_dst_stride = dst_stride << 1;
+  int h;
+
+  // Start one pixel before as we need tap/2 - 1 = 1 sample from the past
+  src_ptr -= 1;
+
+  // Load Kernel
+  kernel_reg_128 = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm256_broadcastsi128_si256(kernel_reg_128);
+  kernel_reg_23 = _mm256_shuffle_epi32(kernel_reg, 0x55);
+  kernel_reg_45 = _mm256_shuffle_epi32(kernel_reg, 0xaa);
+
+  for (h = height; h >= 2; h -= 2) {
+    // Load the source
+    src_reg = mm256_loadu2_si128(src_ptr, src_ptr + src_stride);
+    src_reg_shift_0 = _mm256_shuffle_epi8(src_reg, idx_shift_0);
+    src_reg_shift_2 = _mm256_shuffle_epi8(src_reg, idx_shift_2);
+
+    // Get the output
+    res_reg = mm256_madd_add_epi32(&src_reg_shift_0, &src_reg_shift_2,
+                                   &kernel_reg_23, &kernel_reg_45);
+
+    // Round the result
+    res_reg = mm256_round_epi32(&res_reg, &reg_round, CONV8_ROUNDING_BITS);
+
+    // Finally combine to get the final dst
+    res_reg = _mm256_packus_epi32(res_reg, res_reg);
+    res_reg = _mm256_min_epi16(res_reg, reg_max);
+    mm256_storeu2_epi64((__m128i *)dst_ptr, (__m128i *)(dst_ptr + dst_stride),
+                        &res_reg);
+
+    src_ptr += unrolled_src_stride;
+    dst_ptr += unrolled_dst_stride;
+  }
+
+  // Repeat for the last row if needed
+  if (h > 0) {
+    // Load the source
+    src_reg = mm256_loadu2_si128(src_ptr, src_ptr + 4);
+    src_reg_shift_0 = _mm256_shuffle_epi8(src_reg, idx_shift_0);
+    src_reg_shift_2 = _mm256_shuffle_epi8(src_reg, idx_shift_2);
+
+    // Get the output
+    res_reg = mm256_madd_add_epi32(&src_reg_shift_0, &src_reg_shift_2,
+                                   &kernel_reg_23, &kernel_reg_45);
+
+    // Round the result
+    res_reg = mm256_round_epi32(&res_reg, &reg_round, CONV8_ROUNDING_BITS);
+
+    // Finally combine to get the final dst
+    res_reg = _mm256_packus_epi32(res_reg, res_reg);
+    res_reg = _mm256_min_epi16(res_reg, reg_max);
+    _mm_storel_epi64((__m128i *)dst_ptr, _mm256_castsi256_si128(res_reg));
+  }
+}
+
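A scalar sketch (not part of the patch) of what the AVX2 horizontal path above computes per output pixel: a 4-tap convolution over the middle kernel taps k[2..5], rounded by CONV8_ROUNDING_BITS and clamped to the high-bit-depth range, matching the `src_ptr -= 1` (tap/2 - 1 = 1 sample of history). The tap layout and `ROUNDING_BITS = 7` follow libvpx's FILTER_BITS convention; names here are illustrative.

```c
#include <assert.h>
#include <stdint.h>

enum { ROUNDING_BITS = 7, ROUNDING_NUM = 1 << (ROUNDING_BITS - 1) };

static uint16_t highbd_4tap_pixel(const uint16_t *s, const int16_t *k, int bd) {
  /* s points at the output position; taps cover s[-1..2]. */
  int32_t sum = s[-1] * k[2] + s[0] * k[3] + s[1] * k[4] + s[2] * k[5];
  sum = (sum + ROUNDING_NUM) >> ROUNDING_BITS; /* mm256_round_epi32 */
  if (sum < 0) sum = 0;                        /* _mm256_packus_epi32 clamp */
  const int32_t max = (1 << bd) - 1;
  if (sum > max) sum = max;                    /* _mm256_min_epi16 vs reg_max */
  return (uint16_t)sum;
}
```

With an identity tap (k[3] = 128 = 1 << 7) the pixel passes through; the final clamp is what keeps 10/12-bit intermediate values inside `(1 << bd) - 1`.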
+static void vpx_highbd_filter_block1d8_h4_avx2(
+    const uint16_t *src_ptr, ptrdiff_t src_stride, uint16_t *dst_ptr,
+    ptrdiff_t dst_stride, uint32_t height, const int16_t *kernel, int bd) {
+  // We will extract the middle four elements of the kernel into two registers
+  // in the form
+  // ... k[3] k[2] k[3] k[2]
+  // ... k[5] k[4] k[5] k[4]
+  // Then we shuffle the source into
+  // ... s[1] s[0] s[0] s[-1]
+  // ... s[3] s[2] s[2] s[1]
+  // Calling multiply and add gives us half of the sum for the first half.
+  // Calling add then gives us the first half of the output. Repeat to get the
+  // whole output. Since AVX2 gives us 256-bit registers, we can do this two
+  // rows at a time.
+
+  __m256i src_reg, src_reg_shift_0, src_reg_shift_2;
+  __m256i res_reg, res_first, res_last;
+  __m256i idx_shift_0 =
+      _mm256_setr_epi8(0, 1, 2, 3, 2, 3, 4, 5, 4, 5, 6, 7, 6, 7, 8, 9, 0, 1, 2,
+                       3, 2, 3, 4, 5, 4, 5, 6, 7, 6, 7, 8, 9);
+  __m256i idx_shift_2 =
+      _mm256_setr_epi8(4, 5, 6, 7, 6, 7, 8, 9, 8, 9, 10, 11, 10, 11, 12, 13, 4,
+                       5, 6, 7, 6, 7, 8, 9, 8, 9, 10, 11, 10, 11, 12, 13);
+
+  __m128i kernel_reg_128;  // Kernel
+  __m256i kernel_reg, kernel_reg_23,
+      kernel_reg_45;  // Segments of the kernel used
+  const __m256i reg_round =
+      _mm256_set1_epi32(CONV8_ROUNDING_NUM);  // Used for rounding
+  const __m256i reg_max = _mm256_set1_epi16((1 << bd) - 1);
+  const ptrdiff_t unrolled_src_stride = src_stride << 1;
+  const ptrdiff_t unrolled_dst_stride = dst_stride << 1;
+  int h;
+
+  // Start one pixel before as we need tap/2 - 1 = 1 sample from the past
+  src_ptr -= 1;
+
+  // Load Kernel
+  kernel_reg_128 = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm256_broadcastsi128_si256(kernel_reg_128);
+  kernel_reg_23 = _mm256_shuffle_epi32(kernel_reg, 0x55);
+  kernel_reg_45 = _mm256_shuffle_epi32(kernel_reg, 0xaa);
+
+  for (h = height; h >= 2; h -= 2) {
+    // Load the source
+    src_reg = mm256_loadu2_si128(src_ptr, src_ptr + src_stride);
+    src_reg_shift_0 = _mm256_shuffle_epi8(src_reg, idx_shift_0);
+    src_reg_shift_2 = _mm256_shuffle_epi8(src_reg, idx_shift_2);
+
+    // Result for first half
+    res_first = mm256_madd_add_epi32(&src_reg_shift_0, &src_reg_shift_2,
+                                     &kernel_reg_23, &kernel_reg_45);
+
+    // Do again to get the second half of dst
+    // Load the source
+    src_reg = mm256_loadu2_si128(src_ptr + 4, src_ptr + src_stride + 4);
+    src_reg_shift_0 = _mm256_shuffle_epi8(src_reg, idx_shift_0);
+    src_reg_shift_2 = _mm256_shuffle_epi8(src_reg, idx_shift_2);
+
+    // Result for second half
+    res_last = mm256_madd_add_epi32(&src_reg_shift_0, &src_reg_shift_2,
+                                    &kernel_reg_23, &kernel_reg_45);
+
+    // Round each result
+    res_first = mm256_round_epi32(&res_first, &reg_round, CONV8_ROUNDING_BITS);
+    res_last = mm256_round_epi32(&res_last, &reg_round, CONV8_ROUNDING_BITS);
+
+    // Finally combine to get the final dst
+    res_reg = _mm256_packus_epi32(res_first, res_last);
+    res_reg = _mm256_min_epi16(res_reg, reg_max);
+    mm256_store2_si128((__m128i *)dst_ptr, (__m128i *)(dst_ptr + dst_stride),
+                       &res_reg);
+
+    src_ptr += unrolled_src_stride;
+    dst_ptr += unrolled_dst_stride;
+  }
+
+  // Repeat for the last row if needed
+  if (h > 0) {
+    src_reg = mm256_loadu2_si128(src_ptr, src_ptr + 4);
+    src_reg_shift_0 = _mm256_shuffle_epi8(src_reg, idx_shift_0);
+    src_reg_shift_2 = _mm256_shuffle_epi8(src_reg, idx_shift_2);
+
+    res_reg = mm256_madd_add_epi32(&src_reg_shift_0, &src_reg_shift_2,
+                                   &kernel_reg_23, &kernel_reg_45);
+
+    res_reg = mm256_round_epi32(&res_reg, &reg_round, CONV8_ROUNDING_BITS);
+
+    res_reg = _mm256_packus_epi32(res_reg, res_reg);
+    res_reg = _mm256_min_epi16(res_reg, reg_max);
+
+    mm256_storeu2_epi64((__m128i *)dst_ptr, (__m128i *)(dst_ptr + 4), &res_reg);
+  }
+}
+
+static void vpx_highbd_filter_block1d16_h4_avx2(
+    const uint16_t *src_ptr, ptrdiff_t src_stride, uint16_t *dst_ptr,
+    ptrdiff_t dst_stride, uint32_t height, const int16_t *kernel, int bd) {
+  vpx_highbd_filter_block1d8_h4_avx2(src_ptr, src_stride, dst_ptr, dst_stride,
+                                     height, kernel, bd);
+  vpx_highbd_filter_block1d8_h4_avx2(src_ptr + 8, src_stride, dst_ptr + 8,
+                                     dst_stride, height, kernel, bd);
+}
+
 static void vpx_highbd_filter_block1d8_v8_avg_avx2(
     const uint16_t *src_ptr, ptrdiff_t src_pitch, uint16_t *dst_ptr,
     ptrdiff_t dst_pitch, uint32_t height, const int16_t *filter, int bd) {
@@ -1058,39 +1249,235 @@
   } while (height > 0);
 }
 
-void vpx_highbd_filter_block1d4_h8_sse2(const uint16_t *, ptrdiff_t, uint16_t *,
-                                        ptrdiff_t, uint32_t, const int16_t *,
-                                        int);
-void vpx_highbd_filter_block1d4_h2_sse2(const uint16_t *, ptrdiff_t, uint16_t *,
-                                        ptrdiff_t, uint32_t, const int16_t *,
-                                        int);
-void vpx_highbd_filter_block1d4_v8_sse2(const uint16_t *, ptrdiff_t, uint16_t *,
-                                        ptrdiff_t, uint32_t, const int16_t *,
-                                        int);
-void vpx_highbd_filter_block1d4_v2_sse2(const uint16_t *, ptrdiff_t, uint16_t *,
-                                        ptrdiff_t, uint32_t, const int16_t *,
-                                        int);
+static void vpx_highbd_filter_block1d4_v4_avx2(
+    const uint16_t *src_ptr, ptrdiff_t src_stride, uint16_t *dst_ptr,
+    ptrdiff_t dst_stride, uint32_t height, const int16_t *kernel, int bd) {
+  // We will load two rows of pixels and rearrange them into the form
+  // ... s[1,0] s[0,0] s[0,0] s[-1,0]
+  // so that we can call multiply and add with the kernel to get a partial
+  // output. Then we can call add with another row to get the output.
+
+  // Register for source s[-1:3, :]
+  __m256i src_reg_1, src_reg_2, src_reg_3;
+  // Interleaved rows of the source. lo is first half, hi second
+  __m256i src_reg_m10, src_reg_01, src_reg_12, src_reg_23;
+  __m256i src_reg_m1001, src_reg_1223;
+
+  // Result after multiply and add
+  __m256i res_reg;
+
+  __m128i kernel_reg_128;                            // Kernel
+  __m256i kernel_reg, kernel_reg_23, kernel_reg_45;  // Segments of kernel used
+
+  const __m256i reg_round =
+      _mm256_set1_epi32(CONV8_ROUNDING_NUM);  // Used for rounding
+  const __m256i reg_max = _mm256_set1_epi16((1 << bd) - 1);
+  const ptrdiff_t src_stride_unrolled = src_stride << 1;
+  const ptrdiff_t dst_stride_unrolled = dst_stride << 1;
+  int h;
+
+  // Load Kernel
+  kernel_reg_128 = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm256_broadcastsi128_si256(kernel_reg_128);
+  kernel_reg_23 = _mm256_shuffle_epi32(kernel_reg, 0x55);
+  kernel_reg_45 = _mm256_shuffle_epi32(kernel_reg, 0xaa);
+
+  // Row -1 to row 0
+  src_reg_m10 = mm256_loadu2_epi64((const __m128i *)src_ptr,
+                                   (const __m128i *)(src_ptr + src_stride));
+
+  // Row 0 to row 1
+  src_reg_1 = _mm256_castsi128_si256(
+      _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 2)));
+  src_reg_01 = _mm256_permute2x128_si256(src_reg_m10, src_reg_1, 0x21);
+
+  // First three rows
+  src_reg_m1001 = _mm256_unpacklo_epi16(src_reg_m10, src_reg_01);
+
+  for (h = height; h > 1; h -= 2) {
+    src_reg_2 = _mm256_castsi128_si256(
+        _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride * 3)));
+
+    src_reg_12 = _mm256_inserti128_si256(src_reg_1,
+                                         _mm256_castsi256_si128(src_reg_2), 1);
+
+    src_reg_3 = _mm256_castsi128_si256(
+        _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride * 4)));
+
+    src_reg_23 = _mm256_inserti128_si256(src_reg_2,
+                                         _mm256_castsi256_si128(src_reg_3), 1);
+
+    // Last three rows
+    src_reg_1223 = _mm256_unpacklo_epi16(src_reg_12, src_reg_23);
+
+    // Output
+    res_reg = mm256_madd_add_epi32(&src_reg_m1001, &src_reg_1223,
+                                   &kernel_reg_23, &kernel_reg_45);
+
+    // Round the words
+    res_reg = mm256_round_epi32(&res_reg, &reg_round, CONV8_ROUNDING_BITS);
+
+    // Combine to get the result
+    res_reg = _mm256_packus_epi32(res_reg, res_reg);
+    res_reg = _mm256_min_epi16(res_reg, reg_max);
+
+    // Save the result
+    mm256_storeu2_epi64((__m128i *)dst_ptr, (__m128i *)(dst_ptr + dst_stride),
+                        &res_reg);
+
+    // Update the source by two rows
+    src_ptr += src_stride_unrolled;
+    dst_ptr += dst_stride_unrolled;
+
+    src_reg_m1001 = src_reg_1223;
+    src_reg_1 = src_reg_3;
+  }
+}
+
+static void vpx_highbd_filter_block1d8_v4_avx2(
+    const uint16_t *src_ptr, ptrdiff_t src_stride, uint16_t *dst_ptr,
+    ptrdiff_t dst_stride, uint32_t height, const int16_t *kernel, int bd) {
+  // We will load two rows of pixels and rearrange them into the form
+  // ... s[1,0] s[0,0] s[0,0] s[-1,0]
+  // so that we can call multiply and add with the kernel to get a partial
+  // output. Then we can call add with another row to get the output.
+
+  // Register for source s[-1:3, :]
+  __m256i src_reg_1, src_reg_2, src_reg_3;
+  // Interleaved rows of the source. lo is first half, hi second
+  __m256i src_reg_m10, src_reg_01, src_reg_12, src_reg_23;
+  __m256i src_reg_m1001_lo, src_reg_m1001_hi, src_reg_1223_lo, src_reg_1223_hi;
+
+  __m128i kernel_reg_128;                            // Kernel
+  __m256i kernel_reg, kernel_reg_23, kernel_reg_45;  // Segments of kernel
+
+  // Result after multiply and add
+  __m256i res_reg, res_reg_lo, res_reg_hi;
+
+  const __m256i reg_round =
+      _mm256_set1_epi32(CONV8_ROUNDING_NUM);  // Used for rounding
+  const __m256i reg_max = _mm256_set1_epi16((1 << bd) - 1);
+  const ptrdiff_t src_stride_unrolled = src_stride << 1;
+  const ptrdiff_t dst_stride_unrolled = dst_stride << 1;
+  int h;
+
+  // Load Kernel
+  kernel_reg_128 = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm256_broadcastsi128_si256(kernel_reg_128);
+  kernel_reg_23 = _mm256_shuffle_epi32(kernel_reg, 0x55);
+  kernel_reg_45 = _mm256_shuffle_epi32(kernel_reg, 0xaa);
+
+  // Row -1 to row 0
+  src_reg_m10 = mm256_loadu2_si128((const __m128i *)src_ptr,
+                                   (const __m128i *)(src_ptr + src_stride));
+
+  // Row 0 to row 1
+  src_reg_1 = _mm256_castsi128_si256(
+      _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 2)));
+  src_reg_01 = _mm256_permute2x128_si256(src_reg_m10, src_reg_1, 0x21);
+
+  // First three rows
+  src_reg_m1001_lo = _mm256_unpacklo_epi16(src_reg_m10, src_reg_01);
+  src_reg_m1001_hi = _mm256_unpackhi_epi16(src_reg_m10, src_reg_01);
+
+  for (h = height; h > 1; h -= 2) {
+    src_reg_2 = _mm256_castsi128_si256(
+        _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 3)));
+
+    src_reg_12 = _mm256_inserti128_si256(src_reg_1,
+                                         _mm256_castsi256_si128(src_reg_2), 1);
+
+    src_reg_3 = _mm256_castsi128_si256(
+        _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 4)));
+
+    src_reg_23 = _mm256_inserti128_si256(src_reg_2,
+                                         _mm256_castsi256_si128(src_reg_3), 1);
+
+    // Last three rows
+    src_reg_1223_lo = _mm256_unpacklo_epi16(src_reg_12, src_reg_23);
+    src_reg_1223_hi = _mm256_unpackhi_epi16(src_reg_12, src_reg_23);
+
+    // Output from first half
+    res_reg_lo = mm256_madd_add_epi32(&src_reg_m1001_lo, &src_reg_1223_lo,
+                                      &kernel_reg_23, &kernel_reg_45);
+
+    // Output from second half
+    res_reg_hi = mm256_madd_add_epi32(&src_reg_m1001_hi, &src_reg_1223_hi,
+                                      &kernel_reg_23, &kernel_reg_45);
+
+    // Round the words
+    res_reg_lo =
+        mm256_round_epi32(&res_reg_lo, &reg_round, CONV8_ROUNDING_BITS);
+    res_reg_hi =
+        mm256_round_epi32(&res_reg_hi, &reg_round, CONV8_ROUNDING_BITS);
+
+    // Combine to get the result
+    res_reg = _mm256_packus_epi32(res_reg_lo, res_reg_hi);
+    res_reg = _mm256_min_epi16(res_reg, reg_max);
+
+    // Save the result
+    mm256_store2_si128((__m128i *)dst_ptr, (__m128i *)(dst_ptr + dst_stride),
+                       &res_reg);
+
+    // Update the source by two rows
+    src_ptr += src_stride_unrolled;
+    dst_ptr += dst_stride_unrolled;
+
+    src_reg_m1001_lo = src_reg_1223_lo;
+    src_reg_m1001_hi = src_reg_1223_hi;
+    src_reg_1 = src_reg_3;
+  }
+}
+
+static void vpx_highbd_filter_block1d16_v4_avx2(
+    const uint16_t *src_ptr, ptrdiff_t src_stride, uint16_t *dst_ptr,
+    ptrdiff_t dst_stride, uint32_t height, const int16_t *kernel, int bd) {
+  vpx_highbd_filter_block1d8_v4_avx2(src_ptr, src_stride, dst_ptr, dst_stride,
+                                     height, kernel, bd);
+  vpx_highbd_filter_block1d8_v4_avx2(src_ptr + 8, src_stride, dst_ptr + 8,
+                                     dst_stride, height, kernel, bd);
+}
+
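The vertical loops above advance two rows per iteration and carry the already-interleaved rows forward (`src_reg_m1001 = src_reg_1223`, `src_reg_1 = src_reg_3`) instead of reloading them. A scalar sketch of that sliding-window structure, one column at a time (rounding and clamping omitted; names are illustrative, not from the patch):

```c
#include <assert.h>

/* 4-tap vertical filter over one column: output row r uses source rows
   r-1..r+2. Rows are kept live across iterations rather than reloaded,
   mirroring src_reg_m1001 = src_reg_1223 in the AVX2 code. Caller must
   provide one row of history above src and two rows below the last output. */
static void vert_4tap_col(const int *src, int stride, int *dst, int height,
                          const int k[4]) {
  int rm1 = src[-1 * stride], r0 = src[0], r1 = src[1 * stride];
  for (int r = 0; r < height; ++r) {
    int r2 = src[(r + 2) * stride]; /* only one new row loaded per output */
    dst[r] = rm1 * k[0] + r0 * k[1] + r1 * k[2] + r2 * k[3];
    rm1 = r0;
    r0 = r1;
    r1 = r2; /* carry rows forward */
  }
}
```

The payoff is the same as in the SIMD version: each new output row costs one load plus the multiply-adds, not four loads.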
+// From vpx_dsp/x86/vpx_high_subpixel_8t_sse2.asm.
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_h8_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_v8_sse2;
+
+// From vpx_dsp/x86/vpx_high_subpixel_bilinear_sse2.asm.
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_h2_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_v2_sse2;
+
 #define vpx_highbd_filter_block1d4_h8_avx2 vpx_highbd_filter_block1d4_h8_sse2
 #define vpx_highbd_filter_block1d4_h2_avx2 vpx_highbd_filter_block1d4_h2_sse2
 #define vpx_highbd_filter_block1d4_v8_avx2 vpx_highbd_filter_block1d4_v8_sse2
 #define vpx_highbd_filter_block1d4_v2_avx2 vpx_highbd_filter_block1d4_v2_sse2
 
-HIGH_FUN_CONV_1D(horiz, x0_q4, x_step_q4, h, src, , avx2);
-HIGH_FUN_CONV_1D(vert, y0_q4, y_step_q4, v, src - src_stride * 3, , avx2);
-HIGH_FUN_CONV_2D(, avx2);
+// Use the [vh]8 version because there is no [vh]4 implementation.
+#define vpx_highbd_filter_block1d16_v4_avg_avx2 \
+  vpx_highbd_filter_block1d16_v8_avg_avx2
+#define vpx_highbd_filter_block1d16_h4_avg_avx2 \
+  vpx_highbd_filter_block1d16_h8_avg_avx2
+#define vpx_highbd_filter_block1d8_v4_avg_avx2 \
+  vpx_highbd_filter_block1d8_v8_avg_avx2
+#define vpx_highbd_filter_block1d8_h4_avg_avx2 \
+  vpx_highbd_filter_block1d8_h8_avg_avx2
+#define vpx_highbd_filter_block1d4_v4_avg_avx2 \
+  vpx_highbd_filter_block1d4_v8_avg_avx2
+#define vpx_highbd_filter_block1d4_h4_avg_avx2 \
+  vpx_highbd_filter_block1d4_h8_avg_avx2
 
-void vpx_highbd_filter_block1d4_h8_avg_sse2(const uint16_t *, ptrdiff_t,
-                                            uint16_t *, ptrdiff_t, uint32_t,
-                                            const int16_t *, int);
-void vpx_highbd_filter_block1d4_h2_avg_sse2(const uint16_t *, ptrdiff_t,
-                                            uint16_t *, ptrdiff_t, uint32_t,
-                                            const int16_t *, int);
-void vpx_highbd_filter_block1d4_v8_avg_sse2(const uint16_t *, ptrdiff_t,
-                                            uint16_t *, ptrdiff_t, uint32_t,
-                                            const int16_t *, int);
-void vpx_highbd_filter_block1d4_v2_avg_sse2(const uint16_t *, ptrdiff_t,
-                                            uint16_t *, ptrdiff_t, uint32_t,
-                                            const int16_t *, int);
+HIGH_FUN_CONV_1D(horiz, x0_q4, x_step_q4, h, src, , avx2, 0);
+HIGH_FUN_CONV_1D(vert, y0_q4, y_step_q4, v,
+                 src - src_stride * (num_taps / 2 - 1), , avx2, 0);
+HIGH_FUN_CONV_2D(, avx2, 0);
+
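The HIGH_FUN_CONV_1D change above replaces the hard-coded `src - src_stride * 3` (correct only for the 8-tap filter) with `src - src_stride * (num_taps / 2 - 1)`, which also covers the new 4-tap h4/v4 paths. A minimal sketch of that offset computation (the helper name is hypothetical, not from the patch):

```c
#include <assert.h>
#include <stddef.h>

/* An N-tap vertical filter needs N/2 - 1 rows of history above the first
   output row, so the source pointer is backed up by that many strides. */
static ptrdiff_t vert_src_offset(int num_taps, ptrdiff_t src_stride) {
  return -(ptrdiff_t)(num_taps / 2 - 1) * src_stride;
}
```

For 8 taps this reproduces the old `-3 * src_stride`; for 4 taps it is `-1 * src_stride`, matching the `src_ptr -= 1` in the horizontal 4-tap functions; for the 2-tap bilinear case it is zero.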
+// From vpx_dsp/x86/vpx_high_subpixel_8t_sse2.asm.
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_h8_avg_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_v8_avg_sse2;
+
+// From vpx_dsp/x86/vpx_high_subpixel_bilinear_sse2.asm.
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_h2_avg_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_v2_avg_sse2;
+
 #define vpx_highbd_filter_block1d4_h8_avg_avx2 \
   vpx_highbd_filter_block1d4_h8_avg_sse2
 #define vpx_highbd_filter_block1d4_h2_avg_avx2 \
@@ -1100,9 +1487,9 @@
 #define vpx_highbd_filter_block1d4_v2_avg_avx2 \
   vpx_highbd_filter_block1d4_v2_avg_sse2
 
-HIGH_FUN_CONV_1D(avg_horiz, x0_q4, x_step_q4, h, src, avg_, avx2);
-HIGH_FUN_CONV_1D(avg_vert, y0_q4, y_step_q4, v, src - src_stride * 3, avg_,
-                 avx2);
-HIGH_FUN_CONV_2D(avg_, avx2);
+HIGH_FUN_CONV_1D(avg_horiz, x0_q4, x_step_q4, h, src, avg_, avx2, 1);
+HIGH_FUN_CONV_1D(avg_vert, y0_q4, y_step_q4, v,
+                 src - src_stride * (num_taps / 2 - 1), avg_, avx2, 1);
+HIGH_FUN_CONV_2D(avg_, avx2, 1);
 
 #undef HIGHBD_FUNC
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_idct4x4_add_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_idct4x4_add_sse2.c
index 2e54d24..b9c8884 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_idct4x4_add_sse2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_idct4x4_add_sse2.c
@@ -112,8 +112,8 @@
     min_input = _mm_min_epi16(min_input, _mm_srli_si128(min_input, 4));
     max_input = _mm_max_epi16(max_input, _mm_srli_si128(max_input, 2));
     min_input = _mm_min_epi16(min_input, _mm_srli_si128(min_input, 2));
-    max = _mm_extract_epi16(max_input, 0);
-    min = _mm_extract_epi16(min_input, 0);
+    max = (int16_t)_mm_extract_epi16(max_input, 0);
+    min = (int16_t)_mm_extract_epi16(min_input, 0);
   }
 
   if (bd == 8 || (max < 4096 && min >= -4096)) {
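The `(int16_t)` casts added above matter because `_mm_extract_epi16` returns the 16-bit lane zero-extended to `int`, so a negative lane never compares below zero without the cast. A tiny intrinsics-free sketch of the reinterpretation the cast performs (assumed here to match the extract semantics, which return an unsigned 16-bit value):

```c
#include <assert.h>
#include <stdint.h>

/* Reinterpret the low 16 bits of a zero-extended lane as signed, which the
   "min >= -4096" range check in the surrounding code requires. */
static int16_t as_signed_lane(int extracted) {
  return (int16_t)extracted;
}
```

Without the cast, a lane holding -4097 comes back as 61439 (0xEFFF) and would wrongly pass a `min >= -4096` test.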
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_intrapred_intrin_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_intrapred_intrin_sse2.c
index 2051381..43634ae 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_intrapred_intrin_sse2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_intrapred_intrin_sse2.c
@@ -460,7 +460,8 @@
   const int J = left[1];
   const int K = left[2];
   const int L = left[3];
-  const __m128i XXXXXABC = _mm_loadu_si128((const __m128i *)(above - 5));
+  const __m128i XXXXXABC = _mm_castps_si128(
+      _mm_loadh_pi(_mm_setzero_ps(), (const __m64 *)(above - 1)));
   const __m128i LXXXXABC = _mm_insert_epi16(XXXXXABC, L, 0);
   const __m128i LKXXXABC = _mm_insert_epi16(LXXXXABC, K, 1);
   const __m128i LKJXXABC = _mm_insert_epi16(LKXXXABC, J, 2);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_intrapred_intrin_ssse3.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_intrapred_intrin_ssse3.c
index b9dcef2..d673fac 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_intrapred_intrin_ssse3.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_intrapred_intrin_ssse3.c
@@ -170,9 +170,9 @@
   }
 }
 
-DECLARE_ALIGNED(16, static const uint8_t, rotate_right_epu16[16]) = {
-  2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, 1
-};
+DECLARE_ALIGNED(16, static const uint8_t,
+                rotate_right_epu16[16]) = { 2,  3,  4,  5,  6,  7,  8, 9,
+                                            10, 11, 12, 13, 14, 15, 0, 1 };
 
 static INLINE __m128i rotr_epu16(__m128i *a, const __m128i *rotrw) {
   *a = _mm_shuffle_epi8(*a, *rotrw);
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_intrapred_sse2.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_intrapred_sse2.asm
index c61b621..caf506a 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_intrapred_sse2.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_intrapred_sse2.asm
@@ -256,7 +256,7 @@
   REP_RET
 
 INIT_XMM sse2
-cglobal highbd_tm_predictor_4x4, 5, 5, 6, dst, stride, above, left, bps
+cglobal highbd_tm_predictor_4x4, 5, 5, 6, dst, stride, above, left, bd
   movd                  m1, [aboveq-2]
   movq                  m0, [aboveq]
   pshuflw               m1, m1, 0x0
@@ -264,7 +264,7 @@
   movlhps               m1, m1         ; tl tl tl tl tl tl tl tl
   ; Get the values to compute the maximum value at this bit depth
   pcmpeqw               m3, m3
-  movd                  m4, bpsd
+  movd                  m4, bdd
   psubw                 m0, m1         ; t1-tl t2-tl t3-tl t4-tl
   psllw                 m3, m4
   pcmpeqw               m2, m2
@@ -295,7 +295,7 @@
   RET
 
 INIT_XMM sse2
-cglobal highbd_tm_predictor_8x8, 5, 6, 5, dst, stride, above, left, bps, one
+cglobal highbd_tm_predictor_8x8, 5, 6, 5, dst, stride, above, left, bd, one
   movd                  m1, [aboveq-2]
   mova                  m0, [aboveq]
   pshuflw               m1, m1, 0x0
@@ -304,7 +304,7 @@
   pxor                  m3, m3
   pxor                  m4, m4
   pinsrw                m3, oned, 0
-  pinsrw                m4, bpsd, 0
+  pinsrw                m4, bdd, 0
   pshuflw               m3, m3, 0x0
   DEFINE_ARGS dst, stride, line, left
   punpcklqdq            m3, m3
@@ -339,14 +339,14 @@
   REP_RET
 
 INIT_XMM sse2
-cglobal highbd_tm_predictor_16x16, 5, 5, 8, dst, stride, above, left, bps
+cglobal highbd_tm_predictor_16x16, 5, 5, 8, dst, stride, above, left, bd
   movd                  m2, [aboveq-2]
   mova                  m0, [aboveq]
   mova                  m1, [aboveq+16]
   pshuflw               m2, m2, 0x0
   ; Get the values to compute the maximum value at this bit depth
   pcmpeqw               m3, m3
-  movd                  m4, bpsd
+  movd                  m4, bdd
   punpcklqdq            m2, m2
   psllw                 m3, m4
   pcmpeqw               m5, m5
@@ -386,7 +386,7 @@
   REP_RET
 
 INIT_XMM sse2
-cglobal highbd_tm_predictor_32x32, 5, 5, 8, dst, stride, above, left, bps
+cglobal highbd_tm_predictor_32x32, 5, 5, 8, dst, stride, above, left, bd
   movd                  m0, [aboveq-2]
   mova                  m1, [aboveq]
   mova                  m2, [aboveq+16]
@@ -395,7 +395,7 @@
   pshuflw               m0, m0, 0x0
   ; Get the values to compute the maximum value at this bit depth
   pcmpeqw               m5, m5
-  movd                  m6, bpsd
+  movd                  m6, bdd
   psllw                 m5, m6
   pcmpeqw               m7, m7
   pxor                  m6, m6         ; min possible value
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_inv_txfm_sse2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_inv_txfm_sse2.h
index c89666b..78cf911 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_inv_txfm_sse2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_inv_txfm_sse2.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_X86_HIGHBD_INV_TXFM_SSE2_H_
-#define VPX_DSP_X86_HIGHBD_INV_TXFM_SSE2_H_
+#ifndef VPX_VPX_DSP_X86_HIGHBD_INV_TXFM_SSE2_H_
+#define VPX_VPX_DSP_X86_HIGHBD_INV_TXFM_SSE2_H_
 
 #include <emmintrin.h>  // SSE2
 
@@ -401,4 +401,4 @@
   recon_and_store_4(out, dest, bd);
 }
 
-#endif  // VPX_DSP_X86_HIGHBD_INV_TXFM_SSE2_H_
+#endif  // VPX_VPX_DSP_X86_HIGHBD_INV_TXFM_SSE2_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_inv_txfm_sse4.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_inv_txfm_sse4.h
index 5a7fd1d..f446bb1 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_inv_txfm_sse4.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_inv_txfm_sse4.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_X86_HIGHBD_INV_TXFM_SSE4_H_
-#define VPX_DSP_X86_HIGHBD_INV_TXFM_SSE4_H_
+#ifndef VPX_VPX_DSP_X86_HIGHBD_INV_TXFM_SSE4_H_
+#define VPX_VPX_DSP_X86_HIGHBD_INV_TXFM_SSE4_H_
 
 #include <smmintrin.h>  // SSE4.1
 
@@ -109,4 +109,4 @@
 void vpx_highbd_idct8x8_half1d_sse4_1(__m128i *const io);
 void vpx_highbd_idct16_4col_sse4_1(__m128i *const io /*io[16]*/);
 
-#endif  // VPX_DSP_X86_HIGHBD_INV_TXFM_SSE4_H_
+#endif  // VPX_VPX_DSP_X86_HIGHBD_INV_TXFM_SSE4_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_loopfilter_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_loopfilter_sse2.c
index ec22db9..d265fc1 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_loopfilter_sse2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_loopfilter_sse2.c
@@ -47,13 +47,13 @@
 
 // TODO(debargha, peter): Break up large functions into smaller ones
 // in this file.
-void vpx_highbd_lpf_horizontal_16_sse2(uint16_t *s, int p,
-                                       const uint8_t *_blimit,
-                                       const uint8_t *_limit,
-                                       const uint8_t *_thresh, int bd) {
+void vpx_highbd_lpf_horizontal_16_sse2(uint16_t *s, int pitch,
+                                       const uint8_t *blimit,
+                                       const uint8_t *limit,
+                                       const uint8_t *thresh, int bd) {
   const __m128i zero = _mm_set1_epi16(0);
   const __m128i one = _mm_set1_epi16(1);
-  __m128i blimit, limit, thresh;
+  __m128i blimit_v, limit_v, thresh_v;
   __m128i q7, p7, q6, p6, q5, p5, q4, p4, q3, p3, q2, p2, q1, p1, q0, p0;
   __m128i mask, hev, flat, flat2, abs_p1p0, abs_q1q0;
   __m128i ps1, qs1, ps0, qs0;
@@ -70,35 +70,35 @@
   __m128i eight, four;
 
   if (bd == 8) {
-    blimit = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_blimit), zero);
-    limit = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_limit), zero);
-    thresh = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_thresh), zero);
+    blimit_v = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)blimit), zero);
+    limit_v = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)limit), zero);
+    thresh_v = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)thresh), zero);
   } else if (bd == 10) {
-    blimit = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_blimit), zero), 2);
-    limit = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_limit), zero), 2);
-    thresh = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_thresh), zero), 2);
+    blimit_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)blimit), zero), 2);
+    limit_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)limit), zero), 2);
+    thresh_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)thresh), zero), 2);
   } else {  // bd == 12
-    blimit = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_blimit), zero), 4);
-    limit = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_limit), zero), 4);
-    thresh = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_thresh), zero), 4);
+    blimit_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)blimit), zero), 4);
+    limit_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)limit), zero), 4);
+    thresh_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)thresh), zero), 4);
   }
 
-  q4 = _mm_load_si128((__m128i *)(s + 4 * p));
-  p4 = _mm_load_si128((__m128i *)(s - 5 * p));
-  q3 = _mm_load_si128((__m128i *)(s + 3 * p));
-  p3 = _mm_load_si128((__m128i *)(s - 4 * p));
-  q2 = _mm_load_si128((__m128i *)(s + 2 * p));
-  p2 = _mm_load_si128((__m128i *)(s - 3 * p));
-  q1 = _mm_load_si128((__m128i *)(s + 1 * p));
-  p1 = _mm_load_si128((__m128i *)(s - 2 * p));
-  q0 = _mm_load_si128((__m128i *)(s + 0 * p));
-  p0 = _mm_load_si128((__m128i *)(s - 1 * p));
+  q4 = _mm_load_si128((__m128i *)(s + 4 * pitch));
+  p4 = _mm_load_si128((__m128i *)(s - 5 * pitch));
+  q3 = _mm_load_si128((__m128i *)(s + 3 * pitch));
+  p3 = _mm_load_si128((__m128i *)(s - 4 * pitch));
+  q2 = _mm_load_si128((__m128i *)(s + 2 * pitch));
+  p2 = _mm_load_si128((__m128i *)(s - 3 * pitch));
+  q1 = _mm_load_si128((__m128i *)(s + 1 * pitch));
+  p1 = _mm_load_si128((__m128i *)(s - 2 * pitch));
+  q0 = _mm_load_si128((__m128i *)(s + 0 * pitch));
+  p0 = _mm_load_si128((__m128i *)(s - 1 * pitch));
 
   //  highbd_filter_mask
   abs_p1p0 = _mm_or_si128(_mm_subs_epu16(p1, p0), _mm_subs_epu16(p0, p1));
@@ -111,14 +111,14 @@
 
   //  highbd_hev_mask (in C code this is actually called from highbd_filter4)
   flat = _mm_max_epi16(abs_p1p0, abs_q1q0);
-  hev = _mm_subs_epu16(flat, thresh);
+  hev = _mm_subs_epu16(flat, thresh_v);
   hev = _mm_xor_si128(_mm_cmpeq_epi16(hev, zero), ffff);
 
   abs_p0q0 = _mm_adds_epu16(abs_p0q0, abs_p0q0);  // abs(p0 - q0) * 2
   abs_p1q1 = _mm_srli_epi16(abs_p1q1, 1);         // abs(p1 - q1) / 2
-  mask = _mm_subs_epu16(_mm_adds_epu16(abs_p0q0, abs_p1q1), blimit);
+  mask = _mm_subs_epu16(_mm_adds_epu16(abs_p0q0, abs_p1q1), blimit_v);
   mask = _mm_xor_si128(_mm_cmpeq_epi16(mask, zero), ffff);
-  mask = _mm_and_si128(mask, _mm_adds_epu16(limit, one));
+  mask = _mm_and_si128(mask, _mm_adds_epu16(limit_v, one));
   work = _mm_max_epi16(
       _mm_or_si128(_mm_subs_epu16(p1, p0), _mm_subs_epu16(p0, p1)),
       _mm_or_si128(_mm_subs_epu16(q1, q0), _mm_subs_epu16(q0, q1)));
@@ -132,7 +132,7 @@
       _mm_or_si128(_mm_subs_epu16(q3, q2), _mm_subs_epu16(q2, q3)));
   mask = _mm_max_epi16(work, mask);
 
-  mask = _mm_subs_epu16(mask, limit);
+  mask = _mm_subs_epu16(mask, limit_v);
   mask = _mm_cmpeq_epi16(mask, zero);  // return ~mask
 
   // lp filter
@@ -207,12 +207,12 @@
   // (because, in both vars, each block of 16 either all 1s or all 0s)
   flat = _mm_and_si128(flat, mask);
 
-  p5 = _mm_load_si128((__m128i *)(s - 6 * p));
-  q5 = _mm_load_si128((__m128i *)(s + 5 * p));
-  p6 = _mm_load_si128((__m128i *)(s - 7 * p));
-  q6 = _mm_load_si128((__m128i *)(s + 6 * p));
-  p7 = _mm_load_si128((__m128i *)(s - 8 * p));
-  q7 = _mm_load_si128((__m128i *)(s + 7 * p));
+  p5 = _mm_load_si128((__m128i *)(s - 6 * pitch));
+  q5 = _mm_load_si128((__m128i *)(s + 5 * pitch));
+  p6 = _mm_load_si128((__m128i *)(s - 7 * pitch));
+  q6 = _mm_load_si128((__m128i *)(s + 6 * pitch));
+  p7 = _mm_load_si128((__m128i *)(s - 8 * pitch));
+  q7 = _mm_load_si128((__m128i *)(s + 7 * pitch));
 
   // highbd_flat_mask5 (arguments passed in are p0, q0, p4-p7, q4-q7
   // but referred to as p0-p4 & q0-q4 in fn)
@@ -389,8 +389,8 @@
   flat2_q6 = _mm_and_si128(flat2, flat2_q6);
   //  get values for when (flat2 && flat && mask)
   q6 = _mm_or_si128(q6, flat2_q6);  // full list of q6 values
-  _mm_store_si128((__m128i *)(s - 7 * p), p6);
-  _mm_store_si128((__m128i *)(s + 6 * p), q6);
+  _mm_store_si128((__m128i *)(s - 7 * pitch), p6);
+  _mm_store_si128((__m128i *)(s + 6 * pitch), q6);
 
   p5 = _mm_andnot_si128(flat2, p5);
   //  p5 remains unchanged if !(flat2 && flat && mask)
@@ -404,8 +404,8 @@
   //  get values for when (flat2 && flat && mask)
   q5 = _mm_or_si128(q5, flat2_q5);
   //  full list of q5 values
-  _mm_store_si128((__m128i *)(s - 6 * p), p5);
-  _mm_store_si128((__m128i *)(s + 5 * p), q5);
+  _mm_store_si128((__m128i *)(s - 6 * pitch), p5);
+  _mm_store_si128((__m128i *)(s + 5 * pitch), q5);
 
   p4 = _mm_andnot_si128(flat2, p4);
   //  p4 remains unchanged if !(flat2 && flat && mask)
@@ -417,8 +417,8 @@
   flat2_q4 = _mm_and_si128(flat2, flat2_q4);
   //  get values for when (flat2 && flat && mask)
   q4 = _mm_or_si128(q4, flat2_q4);  // full list of q4 values
-  _mm_store_si128((__m128i *)(s - 5 * p), p4);
-  _mm_store_si128((__m128i *)(s + 4 * p), q4);
+  _mm_store_si128((__m128i *)(s - 5 * pitch), p4);
+  _mm_store_si128((__m128i *)(s + 4 * pitch), q4);
 
   p3 = _mm_andnot_si128(flat2, p3);
   //  p3 takes value from highbd_filter8 if !(flat2 && flat && mask)
@@ -430,8 +430,8 @@
   flat2_q3 = _mm_and_si128(flat2, flat2_q3);
   //  get values for when (flat2 && flat && mask)
   q3 = _mm_or_si128(q3, flat2_q3);  // full list of q3 values
-  _mm_store_si128((__m128i *)(s - 4 * p), p3);
-  _mm_store_si128((__m128i *)(s + 3 * p), q3);
+  _mm_store_si128((__m128i *)(s - 4 * pitch), p3);
+  _mm_store_si128((__m128i *)(s + 3 * pitch), q3);
 
   p2 = _mm_andnot_si128(flat2, p2);
   //  p2 takes value from highbd_filter8 if !(flat2 && flat && mask)
@@ -444,8 +444,8 @@
   flat2_q2 = _mm_and_si128(flat2, flat2_q2);
   //  get values for when (flat2 && flat && mask)
   q2 = _mm_or_si128(q2, flat2_q2);  // full list of q2 values
-  _mm_store_si128((__m128i *)(s - 3 * p), p2);
-  _mm_store_si128((__m128i *)(s + 2 * p), q2);
+  _mm_store_si128((__m128i *)(s - 3 * pitch), p2);
+  _mm_store_si128((__m128i *)(s + 2 * pitch), q2);
 
   p1 = _mm_andnot_si128(flat2, p1);
   //  p1 takes value from highbd_filter8 if !(flat2 && flat && mask)
@@ -457,8 +457,8 @@
   flat2_q1 = _mm_and_si128(flat2, flat2_q1);
   //  get values for when (flat2 && flat && mask)
   q1 = _mm_or_si128(q1, flat2_q1);  // full list of q1 values
-  _mm_store_si128((__m128i *)(s - 2 * p), p1);
-  _mm_store_si128((__m128i *)(s + 1 * p), q1);
+  _mm_store_si128((__m128i *)(s - 2 * pitch), p1);
+  _mm_store_si128((__m128i *)(s + 1 * pitch), q1);
 
   p0 = _mm_andnot_si128(flat2, p0);
   //  p0 takes value from highbd_filter8 if !(flat2 && flat && mask)
@@ -470,22 +470,22 @@
   flat2_q0 = _mm_and_si128(flat2, flat2_q0);
   //  get values for when (flat2 && flat && mask)
   q0 = _mm_or_si128(q0, flat2_q0);  // full list of q0 values
-  _mm_store_si128((__m128i *)(s - 1 * p), p0);
-  _mm_store_si128((__m128i *)(s - 0 * p), q0);
+  _mm_store_si128((__m128i *)(s - 1 * pitch), p0);
+  _mm_store_si128((__m128i *)(s - 0 * pitch), q0);
 }
 
-void vpx_highbd_lpf_horizontal_16_dual_sse2(uint16_t *s, int p,
-                                            const uint8_t *_blimit,
-                                            const uint8_t *_limit,
-                                            const uint8_t *_thresh, int bd) {
-  vpx_highbd_lpf_horizontal_16_sse2(s, p, _blimit, _limit, _thresh, bd);
-  vpx_highbd_lpf_horizontal_16_sse2(s + 8, p, _blimit, _limit, _thresh, bd);
+void vpx_highbd_lpf_horizontal_16_dual_sse2(uint16_t *s, int pitch,
+                                            const uint8_t *blimit,
+                                            const uint8_t *limit,
+                                            const uint8_t *thresh, int bd) {
+  vpx_highbd_lpf_horizontal_16_sse2(s, pitch, blimit, limit, thresh, bd);
+  vpx_highbd_lpf_horizontal_16_sse2(s + 8, pitch, blimit, limit, thresh, bd);
 }
 
-void vpx_highbd_lpf_horizontal_8_sse2(uint16_t *s, int p,
-                                      const uint8_t *_blimit,
-                                      const uint8_t *_limit,
-                                      const uint8_t *_thresh, int bd) {
+void vpx_highbd_lpf_horizontal_8_sse2(uint16_t *s, int pitch,
+                                      const uint8_t *blimit,
+                                      const uint8_t *limit,
+                                      const uint8_t *thresh, int bd) {
   DECLARE_ALIGNED(16, uint16_t, flat_op2[16]);
   DECLARE_ALIGNED(16, uint16_t, flat_op1[16]);
   DECLARE_ALIGNED(16, uint16_t, flat_op0[16]);
@@ -493,16 +493,16 @@
   DECLARE_ALIGNED(16, uint16_t, flat_oq1[16]);
   DECLARE_ALIGNED(16, uint16_t, flat_oq0[16]);
   const __m128i zero = _mm_set1_epi16(0);
-  __m128i blimit, limit, thresh;
+  __m128i blimit_v, limit_v, thresh_v;
   __m128i mask, hev, flat;
-  __m128i p3 = _mm_load_si128((__m128i *)(s - 4 * p));
-  __m128i q3 = _mm_load_si128((__m128i *)(s + 3 * p));
-  __m128i p2 = _mm_load_si128((__m128i *)(s - 3 * p));
-  __m128i q2 = _mm_load_si128((__m128i *)(s + 2 * p));
-  __m128i p1 = _mm_load_si128((__m128i *)(s - 2 * p));
-  __m128i q1 = _mm_load_si128((__m128i *)(s + 1 * p));
-  __m128i p0 = _mm_load_si128((__m128i *)(s - 1 * p));
-  __m128i q0 = _mm_load_si128((__m128i *)(s + 0 * p));
+  __m128i p3 = _mm_load_si128((__m128i *)(s - 4 * pitch));
+  __m128i q3 = _mm_load_si128((__m128i *)(s + 3 * pitch));
+  __m128i p2 = _mm_load_si128((__m128i *)(s - 3 * pitch));
+  __m128i q2 = _mm_load_si128((__m128i *)(s + 2 * pitch));
+  __m128i p1 = _mm_load_si128((__m128i *)(s - 2 * pitch));
+  __m128i q1 = _mm_load_si128((__m128i *)(s + 1 * pitch));
+  __m128i p0 = _mm_load_si128((__m128i *)(s - 1 * pitch));
+  __m128i q0 = _mm_load_si128((__m128i *)(s + 0 * pitch));
   const __m128i one = _mm_set1_epi16(1);
   const __m128i ffff = _mm_cmpeq_epi16(one, one);
   __m128i abs_p1q1, abs_p0q0, abs_q1q0, abs_p1p0, work;
@@ -519,25 +519,25 @@
   __m128i filter1, filter2;
 
   if (bd == 8) {
-    blimit = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_blimit), zero);
-    limit = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_limit), zero);
-    thresh = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_thresh), zero);
+    blimit_v = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)blimit), zero);
+    limit_v = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)limit), zero);
+    thresh_v = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)thresh), zero);
     t80 = _mm_set1_epi16(0x80);
   } else if (bd == 10) {
-    blimit = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_blimit), zero), 2);
-    limit = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_limit), zero), 2);
-    thresh = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_thresh), zero), 2);
+    blimit_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)blimit), zero), 2);
+    limit_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)limit), zero), 2);
+    thresh_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)thresh), zero), 2);
     t80 = _mm_set1_epi16(0x200);
   } else {  // bd == 12
-    blimit = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_blimit), zero), 4);
-    limit = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_limit), zero), 4);
-    thresh = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_thresh), zero), 4);
+    blimit_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)blimit), zero), 4);
+    limit_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)limit), zero), 4);
+    thresh_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)thresh), zero), 4);
     t80 = _mm_set1_epi16(0x800);
   }
 
@@ -553,16 +553,16 @@
   abs_p0q0 = _mm_or_si128(_mm_subs_epu16(p0, q0), _mm_subs_epu16(q0, p0));
   abs_p1q1 = _mm_or_si128(_mm_subs_epu16(p1, q1), _mm_subs_epu16(q1, p1));
   flat = _mm_max_epi16(abs_p1p0, abs_q1q0);
-  hev = _mm_subs_epu16(flat, thresh);
+  hev = _mm_subs_epu16(flat, thresh_v);
   hev = _mm_xor_si128(_mm_cmpeq_epi16(hev, zero), ffff);
 
   abs_p0q0 = _mm_adds_epu16(abs_p0q0, abs_p0q0);
   abs_p1q1 = _mm_srli_epi16(abs_p1q1, 1);
-  mask = _mm_subs_epu16(_mm_adds_epu16(abs_p0q0, abs_p1q1), blimit);
+  mask = _mm_subs_epu16(_mm_adds_epu16(abs_p0q0, abs_p1q1), blimit_v);
   mask = _mm_xor_si128(_mm_cmpeq_epi16(mask, zero), ffff);
   // mask |= (abs(p0 - q0) * 2 + abs(p1 - q1) / 2  > blimit) * -1;
   // So taking maximums continues to work:
-  mask = _mm_and_si128(mask, _mm_adds_epu16(limit, one));
+  mask = _mm_and_si128(mask, _mm_adds_epu16(limit_v, one));
   mask = _mm_max_epi16(abs_p1p0, mask);
   // mask |= (abs(p1 - p0) > limit) * -1;
   mask = _mm_max_epi16(abs_q1q0, mask);
@@ -576,7 +576,7 @@
       _mm_or_si128(_mm_subs_epu16(p3, p2), _mm_subs_epu16(p2, p3)),
       _mm_or_si128(_mm_subs_epu16(q3, q2), _mm_subs_epu16(q2, q3)));
   mask = _mm_max_epi16(work, mask);
-  mask = _mm_subs_epu16(mask, limit);
+  mask = _mm_subs_epu16(mask, limit_v);
   mask = _mm_cmpeq_epi16(mask, zero);
 
   // flat_mask4
@@ -674,7 +674,7 @@
   q1 = _mm_and_si128(flat, q1);
   q1 = _mm_or_si128(work_a, q1);
 
-  work_a = _mm_loadu_si128((__m128i *)(s + 2 * p));
+  work_a = _mm_loadu_si128((__m128i *)(s + 2 * pitch));
   q2 = _mm_load_si128((__m128i *)flat_oq2);
   work_a = _mm_andnot_si128(flat, work_a);
   q2 = _mm_and_si128(flat, q2);
@@ -694,43 +694,43 @@
   p1 = _mm_and_si128(flat, p1);
   p1 = _mm_or_si128(work_a, p1);
 
-  work_a = _mm_loadu_si128((__m128i *)(s - 3 * p));
+  work_a = _mm_loadu_si128((__m128i *)(s - 3 * pitch));
   p2 = _mm_load_si128((__m128i *)flat_op2);
   work_a = _mm_andnot_si128(flat, work_a);
   p2 = _mm_and_si128(flat, p2);
   p2 = _mm_or_si128(work_a, p2);
 
-  _mm_store_si128((__m128i *)(s - 3 * p), p2);
-  _mm_store_si128((__m128i *)(s - 2 * p), p1);
-  _mm_store_si128((__m128i *)(s - 1 * p), p0);
-  _mm_store_si128((__m128i *)(s + 0 * p), q0);
-  _mm_store_si128((__m128i *)(s + 1 * p), q1);
-  _mm_store_si128((__m128i *)(s + 2 * p), q2);
+  _mm_store_si128((__m128i *)(s - 3 * pitch), p2);
+  _mm_store_si128((__m128i *)(s - 2 * pitch), p1);
+  _mm_store_si128((__m128i *)(s - 1 * pitch), p0);
+  _mm_store_si128((__m128i *)(s + 0 * pitch), q0);
+  _mm_store_si128((__m128i *)(s + 1 * pitch), q1);
+  _mm_store_si128((__m128i *)(s + 2 * pitch), q2);
 }
 
 void vpx_highbd_lpf_horizontal_8_dual_sse2(
-    uint16_t *s, int p, const uint8_t *_blimit0, const uint8_t *_limit0,
-    const uint8_t *_thresh0, const uint8_t *_blimit1, const uint8_t *_limit1,
-    const uint8_t *_thresh1, int bd) {
-  vpx_highbd_lpf_horizontal_8_sse2(s, p, _blimit0, _limit0, _thresh0, bd);
-  vpx_highbd_lpf_horizontal_8_sse2(s + 8, p, _blimit1, _limit1, _thresh1, bd);
+    uint16_t *s, int pitch, const uint8_t *blimit0, const uint8_t *limit0,
+    const uint8_t *thresh0, const uint8_t *blimit1, const uint8_t *limit1,
+    const uint8_t *thresh1, int bd) {
+  vpx_highbd_lpf_horizontal_8_sse2(s, pitch, blimit0, limit0, thresh0, bd);
+  vpx_highbd_lpf_horizontal_8_sse2(s + 8, pitch, blimit1, limit1, thresh1, bd);
 }
 
-void vpx_highbd_lpf_horizontal_4_sse2(uint16_t *s, int p,
-                                      const uint8_t *_blimit,
-                                      const uint8_t *_limit,
-                                      const uint8_t *_thresh, int bd) {
+void vpx_highbd_lpf_horizontal_4_sse2(uint16_t *s, int pitch,
+                                      const uint8_t *blimit,
+                                      const uint8_t *limit,
+                                      const uint8_t *thresh, int bd) {
   const __m128i zero = _mm_set1_epi16(0);
-  __m128i blimit, limit, thresh;
+  __m128i blimit_v, limit_v, thresh_v;
   __m128i mask, hev, flat;
-  __m128i p3 = _mm_loadu_si128((__m128i *)(s - 4 * p));
-  __m128i p2 = _mm_loadu_si128((__m128i *)(s - 3 * p));
-  __m128i p1 = _mm_loadu_si128((__m128i *)(s - 2 * p));
-  __m128i p0 = _mm_loadu_si128((__m128i *)(s - 1 * p));
-  __m128i q0 = _mm_loadu_si128((__m128i *)(s - 0 * p));
-  __m128i q1 = _mm_loadu_si128((__m128i *)(s + 1 * p));
-  __m128i q2 = _mm_loadu_si128((__m128i *)(s + 2 * p));
-  __m128i q3 = _mm_loadu_si128((__m128i *)(s + 3 * p));
+  __m128i p3 = _mm_loadu_si128((__m128i *)(s - 4 * pitch));
+  __m128i p2 = _mm_loadu_si128((__m128i *)(s - 3 * pitch));
+  __m128i p1 = _mm_loadu_si128((__m128i *)(s - 2 * pitch));
+  __m128i p0 = _mm_loadu_si128((__m128i *)(s - 1 * pitch));
+  __m128i q0 = _mm_loadu_si128((__m128i *)(s - 0 * pitch));
+  __m128i q1 = _mm_loadu_si128((__m128i *)(s + 1 * pitch));
+  __m128i q2 = _mm_loadu_si128((__m128i *)(s + 2 * pitch));
+  __m128i q3 = _mm_loadu_si128((__m128i *)(s + 3 * pitch));
   const __m128i abs_p1p0 =
       _mm_or_si128(_mm_subs_epu16(p1, p0), _mm_subs_epu16(p0, p1));
   const __m128i abs_q1q0 =
@@ -760,57 +760,57 @@
   __m128i filter1, filter2;
 
   if (bd == 8) {
-    blimit = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_blimit), zero);
-    limit = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_limit), zero);
-    thresh = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_thresh), zero);
+    blimit_v = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)blimit), zero);
+    limit_v = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)limit), zero);
+    thresh_v = _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)thresh), zero);
     t80 = _mm_set1_epi16(0x80);
-    tff80 = _mm_set1_epi16(0xff80);
-    tffe0 = _mm_set1_epi16(0xffe0);
+    tff80 = _mm_set1_epi16((int16_t)0xff80);
+    tffe0 = _mm_set1_epi16((int16_t)0xffe0);
     t1f = _mm_srli_epi16(_mm_set1_epi16(0x1fff), 8);
     t7f = _mm_srli_epi16(_mm_set1_epi16(0x7fff), 8);
   } else if (bd == 10) {
-    blimit = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_blimit), zero), 2);
-    limit = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_limit), zero), 2);
-    thresh = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_thresh), zero), 2);
+    blimit_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)blimit), zero), 2);
+    limit_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)limit), zero), 2);
+    thresh_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)thresh), zero), 2);
     t80 = _mm_slli_epi16(_mm_set1_epi16(0x80), 2);
-    tff80 = _mm_slli_epi16(_mm_set1_epi16(0xff80), 2);
-    tffe0 = _mm_slli_epi16(_mm_set1_epi16(0xffe0), 2);
+    tff80 = _mm_slli_epi16(_mm_set1_epi16((int16_t)0xff80), 2);
+    tffe0 = _mm_slli_epi16(_mm_set1_epi16((int16_t)0xffe0), 2);
     t1f = _mm_srli_epi16(_mm_set1_epi16(0x1fff), 6);
     t7f = _mm_srli_epi16(_mm_set1_epi16(0x7fff), 6);
   } else {  // bd == 12
-    blimit = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_blimit), zero), 4);
-    limit = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_limit), zero), 4);
-    thresh = _mm_slli_epi16(
-        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)_thresh), zero), 4);
+    blimit_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)blimit), zero), 4);
+    limit_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)limit), zero), 4);
+    thresh_v = _mm_slli_epi16(
+        _mm_unpacklo_epi8(_mm_load_si128((const __m128i *)thresh), zero), 4);
     t80 = _mm_slli_epi16(_mm_set1_epi16(0x80), 4);
-    tff80 = _mm_slli_epi16(_mm_set1_epi16(0xff80), 4);
-    tffe0 = _mm_slli_epi16(_mm_set1_epi16(0xffe0), 4);
+    tff80 = _mm_slli_epi16(_mm_set1_epi16((int16_t)0xff80), 4);
+    tffe0 = _mm_slli_epi16(_mm_set1_epi16((int16_t)0xffe0), 4);
     t1f = _mm_srli_epi16(_mm_set1_epi16(0x1fff), 4);
     t7f = _mm_srli_epi16(_mm_set1_epi16(0x7fff), 4);
   }
 
-  ps1 = _mm_subs_epi16(_mm_loadu_si128((__m128i *)(s - 2 * p)), t80);
-  ps0 = _mm_subs_epi16(_mm_loadu_si128((__m128i *)(s - 1 * p)), t80);
-  qs0 = _mm_subs_epi16(_mm_loadu_si128((__m128i *)(s + 0 * p)), t80);
-  qs1 = _mm_subs_epi16(_mm_loadu_si128((__m128i *)(s + 1 * p)), t80);
+  ps1 = _mm_subs_epi16(_mm_loadu_si128((__m128i *)(s - 2 * pitch)), t80);
+  ps0 = _mm_subs_epi16(_mm_loadu_si128((__m128i *)(s - 1 * pitch)), t80);
+  qs0 = _mm_subs_epi16(_mm_loadu_si128((__m128i *)(s + 0 * pitch)), t80);
+  qs1 = _mm_subs_epi16(_mm_loadu_si128((__m128i *)(s + 1 * pitch)), t80);
 
   // filter_mask and hev_mask
   flat = _mm_max_epi16(abs_p1p0, abs_q1q0);
-  hev = _mm_subs_epu16(flat, thresh);
+  hev = _mm_subs_epu16(flat, thresh_v);
   hev = _mm_xor_si128(_mm_cmpeq_epi16(hev, zero), ffff);
 
   abs_p0q0 = _mm_adds_epu16(abs_p0q0, abs_p0q0);
   abs_p1q1 = _mm_srli_epi16(abs_p1q1, 1);
-  mask = _mm_subs_epu16(_mm_adds_epu16(abs_p0q0, abs_p1q1), blimit);
+  mask = _mm_subs_epu16(_mm_adds_epu16(abs_p0q0, abs_p1q1), blimit_v);
   mask = _mm_xor_si128(_mm_cmpeq_epi16(mask, zero), ffff);
   // mask |= (abs(p0 - q0) * 2 + abs(p1 - q1) / 2  > blimit) * -1;
   // So taking maximums continues to work:
-  mask = _mm_and_si128(mask, _mm_adds_epu16(limit, one));
+  mask = _mm_and_si128(mask, _mm_adds_epu16(limit_v, one));
   mask = _mm_max_epi16(flat, mask);
   // mask |= (abs(p1 - p0) > limit) * -1;
   // mask |= (abs(q1 - q0) > limit) * -1;
@@ -822,7 +822,7 @@
       _mm_or_si128(_mm_subs_epu16(q2, q1), _mm_subs_epu16(q1, q2)),
       _mm_or_si128(_mm_subs_epu16(q3, q2), _mm_subs_epu16(q2, q3)));
   mask = _mm_max_epi16(work, mask);
-  mask = _mm_subs_epu16(mask, limit);
+  mask = _mm_subs_epu16(mask, limit_v);
   mask = _mm_cmpeq_epi16(mask, zero);
 
   // filter4
@@ -872,18 +872,18 @@
   p1 = _mm_adds_epi16(signed_char_clamp_bd_sse2(_mm_adds_epi16(ps1, filt), bd),
                       t80);
 
-  _mm_storeu_si128((__m128i *)(s - 2 * p), p1);
-  _mm_storeu_si128((__m128i *)(s - 1 * p), p0);
-  _mm_storeu_si128((__m128i *)(s + 0 * p), q0);
-  _mm_storeu_si128((__m128i *)(s + 1 * p), q1);
+  _mm_storeu_si128((__m128i *)(s - 2 * pitch), p1);
+  _mm_storeu_si128((__m128i *)(s - 1 * pitch), p0);
+  _mm_storeu_si128((__m128i *)(s + 0 * pitch), q0);
+  _mm_storeu_si128((__m128i *)(s + 1 * pitch), q1);
 }
 
 void vpx_highbd_lpf_horizontal_4_dual_sse2(
-    uint16_t *s, int p, const uint8_t *_blimit0, const uint8_t *_limit0,
-    const uint8_t *_thresh0, const uint8_t *_blimit1, const uint8_t *_limit1,
-    const uint8_t *_thresh1, int bd) {
-  vpx_highbd_lpf_horizontal_4_sse2(s, p, _blimit0, _limit0, _thresh0, bd);
-  vpx_highbd_lpf_horizontal_4_sse2(s + 8, p, _blimit1, _limit1, _thresh1, bd);
+    uint16_t *s, int pitch, const uint8_t *blimit0, const uint8_t *limit0,
+    const uint8_t *thresh0, const uint8_t *blimit1, const uint8_t *limit1,
+    const uint8_t *thresh1, int bd) {
+  vpx_highbd_lpf_horizontal_4_sse2(s, pitch, blimit0, limit0, thresh0, bd);
+  vpx_highbd_lpf_horizontal_4_sse2(s + 8, pitch, blimit1, limit1, thresh1, bd);
 }
 
 static INLINE void highbd_transpose(uint16_t *src[], int in_p, uint16_t *dst[],
@@ -998,9 +998,9 @@
   highbd_transpose(src1, in_p, dest1, out_p, 1);
 }
 
-void vpx_highbd_lpf_vertical_4_sse2(uint16_t *s, int p, const uint8_t *blimit,
-                                    const uint8_t *limit, const uint8_t *thresh,
-                                    int bd) {
+void vpx_highbd_lpf_vertical_4_sse2(uint16_t *s, int pitch,
+                                    const uint8_t *blimit, const uint8_t *limit,
+                                    const uint8_t *thresh, int bd) {
   DECLARE_ALIGNED(16, uint16_t, t_dst[8 * 8]);
   uint16_t *src[1];
   uint16_t *dst[1];
@@ -1009,7 +1009,7 @@
   src[0] = s - 4;
   dst[0] = t_dst;
 
-  highbd_transpose(src, p, dst, 8, 1);
+  highbd_transpose(src, pitch, dst, 8, 1);
 
   // Loop filtering
   vpx_highbd_lpf_horizontal_4_sse2(t_dst + 4 * 8, 8, blimit, limit, thresh, bd);
@@ -1018,11 +1018,11 @@
   dst[0] = s - 4;
 
   // Transpose back
-  highbd_transpose(src, 8, dst, p, 1);
+  highbd_transpose(src, 8, dst, pitch, 1);
 }
 
 void vpx_highbd_lpf_vertical_4_dual_sse2(
-    uint16_t *s, int p, const uint8_t *blimit0, const uint8_t *limit0,
+    uint16_t *s, int pitch, const uint8_t *blimit0, const uint8_t *limit0,
     const uint8_t *thresh0, const uint8_t *blimit1, const uint8_t *limit1,
     const uint8_t *thresh1, int bd) {
   DECLARE_ALIGNED(16, uint16_t, t_dst[16 * 8]);
@@ -1030,7 +1030,7 @@
   uint16_t *dst[2];
 
   // Transpose 8x16
-  highbd_transpose8x16(s - 4, s - 4 + p * 8, p, t_dst, 16);
+  highbd_transpose8x16(s - 4, s - 4 + pitch * 8, pitch, t_dst, 16);
 
   // Loop filtering
   vpx_highbd_lpf_horizontal_4_dual_sse2(t_dst + 4 * 16, 16, blimit0, limit0,
@@ -1038,15 +1038,15 @@
   src[0] = t_dst;
   src[1] = t_dst + 8;
   dst[0] = s - 4;
-  dst[1] = s - 4 + p * 8;
+  dst[1] = s - 4 + pitch * 8;
 
   // Transpose back
-  highbd_transpose(src, 16, dst, p, 2);
+  highbd_transpose(src, 16, dst, pitch, 2);
 }
 
-void vpx_highbd_lpf_vertical_8_sse2(uint16_t *s, int p, const uint8_t *blimit,
-                                    const uint8_t *limit, const uint8_t *thresh,
-                                    int bd) {
+void vpx_highbd_lpf_vertical_8_sse2(uint16_t *s, int pitch,
+                                    const uint8_t *blimit, const uint8_t *limit,
+                                    const uint8_t *thresh, int bd) {
   DECLARE_ALIGNED(16, uint16_t, t_dst[8 * 8]);
   uint16_t *src[1];
   uint16_t *dst[1];
@@ -1055,7 +1055,7 @@
   src[0] = s - 4;
   dst[0] = t_dst;
 
-  highbd_transpose(src, p, dst, 8, 1);
+  highbd_transpose(src, pitch, dst, 8, 1);
 
   // Loop filtering
   vpx_highbd_lpf_horizontal_8_sse2(t_dst + 4 * 8, 8, blimit, limit, thresh, bd);
@@ -1064,11 +1064,11 @@
   dst[0] = s - 4;
 
   // Transpose back
-  highbd_transpose(src, 8, dst, p, 1);
+  highbd_transpose(src, 8, dst, pitch, 1);
 }
 
 void vpx_highbd_lpf_vertical_8_dual_sse2(
-    uint16_t *s, int p, const uint8_t *blimit0, const uint8_t *limit0,
+    uint16_t *s, int pitch, const uint8_t *blimit0, const uint8_t *limit0,
     const uint8_t *thresh0, const uint8_t *blimit1, const uint8_t *limit1,
     const uint8_t *thresh1, int bd) {
   DECLARE_ALIGNED(16, uint16_t, t_dst[16 * 8]);
@@ -1076,7 +1076,7 @@
   uint16_t *dst[2];
 
   // Transpose 8x16
-  highbd_transpose8x16(s - 4, s - 4 + p * 8, p, t_dst, 16);
+  highbd_transpose8x16(s - 4, s - 4 + pitch * 8, pitch, t_dst, 16);
 
   // Loop filtering
   vpx_highbd_lpf_horizontal_8_dual_sse2(t_dst + 4 * 16, 16, blimit0, limit0,
@@ -1085,13 +1085,14 @@
   src[1] = t_dst + 8;
 
   dst[0] = s - 4;
-  dst[1] = s - 4 + p * 8;
+  dst[1] = s - 4 + pitch * 8;
 
   // Transpose back
-  highbd_transpose(src, 16, dst, p, 2);
+  highbd_transpose(src, 16, dst, pitch, 2);
 }
 
-void vpx_highbd_lpf_vertical_16_sse2(uint16_t *s, int p, const uint8_t *blimit,
+void vpx_highbd_lpf_vertical_16_sse2(uint16_t *s, int pitch,
+                                     const uint8_t *blimit,
                                      const uint8_t *limit,
                                      const uint8_t *thresh, int bd) {
   DECLARE_ALIGNED(16, uint16_t, t_dst[8 * 16]);
@@ -1104,7 +1105,7 @@
   dst[1] = t_dst + 8 * 8;
 
   // Transpose 16x8
-  highbd_transpose(src, p, dst, 8, 2);
+  highbd_transpose(src, pitch, dst, 8, 2);
 
   // Loop filtering
   vpx_highbd_lpf_horizontal_16_sse2(t_dst + 8 * 8, 8, blimit, limit, thresh,
@@ -1115,24 +1116,25 @@
   dst[1] = s;
 
   // Transpose back
-  highbd_transpose(src, 8, dst, p, 2);
+  highbd_transpose(src, 8, dst, pitch, 2);
 }
 
-void vpx_highbd_lpf_vertical_16_dual_sse2(uint16_t *s, int p,
+void vpx_highbd_lpf_vertical_16_dual_sse2(uint16_t *s, int pitch,
                                           const uint8_t *blimit,
                                           const uint8_t *limit,
                                           const uint8_t *thresh, int bd) {
   DECLARE_ALIGNED(16, uint16_t, t_dst[256]);
 
   //  Transpose 16x16
-  highbd_transpose8x16(s - 8, s - 8 + 8 * p, p, t_dst, 16);
-  highbd_transpose8x16(s, s + 8 * p, p, t_dst + 8 * 16, 16);
+  highbd_transpose8x16(s - 8, s - 8 + 8 * pitch, pitch, t_dst, 16);
+  highbd_transpose8x16(s, s + 8 * pitch, pitch, t_dst + 8 * 16, 16);
 
   //  Loop filtering
   vpx_highbd_lpf_horizontal_16_dual_sse2(t_dst + 8 * 16, 16, blimit, limit,
                                          thresh, bd);
 
   //  Transpose back
-  highbd_transpose8x16(t_dst, t_dst + 8 * 16, 16, s - 8, p);
-  highbd_transpose8x16(t_dst + 8, t_dst + 8 + 8 * 16, 16, s - 8 + 8 * p, p);
+  highbd_transpose8x16(t_dst, t_dst + 8 * 16, 16, s - 8, pitch);
+  highbd_transpose8x16(t_dst + 8, t_dst + 8 + 8 * 16, 16, s - 8 + 8 * pitch,
+                       pitch);
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_quantize_intrin_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_quantize_intrin_sse2.c
index cedf98a..7149e4f 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_quantize_intrin_sse2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_quantize_intrin_sse2.c
@@ -11,6 +11,7 @@
 #include <assert.h>
 #include <emmintrin.h>
 
+#include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/vpx_dsp_common.h"
 #include "vpx_mem/vpx_mem.h"
 #include "vpx_ports/mem.h"
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_sad_sse2.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_sad_sse2.asm
index bc4b28d..6a1a6f3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_sad_sse2.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_sad_sse2.asm
@@ -25,11 +25,11 @@
 cglobal highbd_sad%1x%2_avg, 5, 1 + %3, 7, src, src_stride, ref, ref_stride, \
                                     second_pred, n_rows
 %else ; %3 == 7
-cglobal highbd_sad%1x%2_avg, 5, ARCH_X86_64 + %3, 7, src, src_stride, \
+cglobal highbd_sad%1x%2_avg, 5, VPX_ARCH_X86_64 + %3, 7, src, src_stride, \
                                               ref, ref_stride, \
                                               second_pred, \
                                               src_stride3, ref_stride3
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
 %define n_rowsd r7d
 %else ; x86-32
 %define n_rowsd dword r0m
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_subpel_variance_impl_sse2.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_subpel_variance_impl_sse2.asm
index e1f9657..5a3a281 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_subpel_variance_impl_sse2.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_subpel_variance_impl_sse2.asm
@@ -32,12 +32,12 @@
 
 ; int vpx_sub_pixel_varianceNxh(const uint8_t *src, ptrdiff_t src_stride,
 ;                               int x_offset, int y_offset,
-;                               const uint8_t *dst, ptrdiff_t dst_stride,
+;                               const uint8_t *ref, ptrdiff_t ref_stride,
 ;                               int height, unsigned int *sse);
 ;
 ; This function returns the SE and stores SSE in the given pointer.
 
-%macro SUM_SSE 6 ; src1, dst1, src2, dst2, sum, sse
+%macro SUM_SSE 6 ; src1, ref1, src2, ref2, sum, sse
   psubw                %3, %4
   psubw                %1, %2
   mova                 %4, %3       ; make copies to manipulate to calc sum
@@ -78,7 +78,7 @@
 %endmacro
 
 %macro INC_SRC_BY_SRC_STRIDE  0
-%if ARCH_X86=1 && CONFIG_PIC=1
+%if VPX_ARCH_X86=1 && CONFIG_PIC=1
   add                srcq, src_stridemp
   add                srcq, src_stridemp
 %else
@@ -91,17 +91,17 @@
 %define filter_idx_shift 5
 
 
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
   %if %2 == 1 ; avg
     cglobal highbd_sub_pixel_avg_variance%1xh, 9, 10, 13, src, src_stride, \
                                       x_offset, y_offset, \
-                                      dst, dst_stride, \
-                                      sec, sec_stride, height, sse
-    %define sec_str sec_strideq
+                                      ref, ref_stride, \
+                                      second_pred, second_stride, height, sse
+    %define second_str second_strideq
   %else
     cglobal highbd_sub_pixel_variance%1xh, 7, 8, 13, src, src_stride, \
                                   x_offset, y_offset, \
-                                  dst, dst_stride, height, sse
+                                  ref, ref_stride, height, sse
   %endif
   %define block_height heightd
   %define bilin_filter sseq
@@ -110,58 +110,46 @@
     %if %2 == 1 ; avg
       cglobal highbd_sub_pixel_avg_variance%1xh, 7, 7, 13, src, src_stride, \
                                         x_offset, y_offset, \
-                                        dst, dst_stride, \
-                                        sec, sec_stride, height, sse, \
-                                        g_bilin_filter, g_pw_8
+                                        ref, ref_stride, \
+                                        second_pred, second_stride, height, sse
       %define block_height dword heightm
-      %define sec_str sec_stridemp
-
-      ; Store bilin_filter and pw_8 location in stack
-      %if GET_GOT_DEFINED == 1
-        GET_GOT eax
-        add esp, 4                ; restore esp
-      %endif
-
-      lea ecx, [GLOBAL(bilin_filter_m)]
-      mov g_bilin_filterm, ecx
-
-      lea ecx, [GLOBAL(pw_8)]
-      mov g_pw_8m, ecx
-
-      LOAD_IF_USED 0, 1         ; load eax, ecx back
+      %define second_str second_stridemp
     %else
       cglobal highbd_sub_pixel_variance%1xh, 7, 7, 13, src, src_stride, \
                                     x_offset, y_offset, \
-                                    dst, dst_stride, height, sse, \
-                                    g_bilin_filter, g_pw_8
+                                    ref, ref_stride, height, sse
       %define block_height heightd
-
-      ; Store bilin_filter and pw_8 location in stack
-      %if GET_GOT_DEFINED == 1
-        GET_GOT eax
-        add esp, 4                ; restore esp
-      %endif
-
-      lea ecx, [GLOBAL(bilin_filter_m)]
-      mov g_bilin_filterm, ecx
-
-      lea ecx, [GLOBAL(pw_8)]
-      mov g_pw_8m, ecx
-
-      LOAD_IF_USED 0, 1         ; load eax, ecx back
     %endif
+
+    ; reuse argument stack space
+    %define g_bilin_filterm x_offsetm
+    %define g_pw_8m y_offsetm
+
+    ; Store bilin_filter and pw_8 location in stack
+    %if GET_GOT_DEFINED == 1
+      GET_GOT eax
+      add esp, 4                ; restore esp
+    %endif
+
+    lea ecx, [GLOBAL(bilin_filter_m)]
+    mov g_bilin_filterm, ecx
+
+    lea ecx, [GLOBAL(pw_8)]
+    mov g_pw_8m, ecx
+
+    LOAD_IF_USED 0, 1         ; load eax, ecx back
   %else
     %if %2 == 1 ; avg
       cglobal highbd_sub_pixel_avg_variance%1xh, 7, 7, 13, src, src_stride, \
                                         x_offset, y_offset, \
-                                        dst, dst_stride, \
-                                        sec, sec_stride, height, sse
+                                        ref, ref_stride, \
+                                        second_pred, second_stride, height, sse
       %define block_height dword heightm
-      %define sec_str sec_stridemp
+      %define second_str second_stridemp
     %else
       cglobal highbd_sub_pixel_variance%1xh, 7, 7, 13, src, src_stride, \
                                     x_offset, y_offset, \
-                                    dst, dst_stride, height, sse
+                                    ref, ref_stride, height, sse
       %define block_height heightd
     %endif
 
@@ -177,7 +165,7 @@
   sar                   block_height, 1
 %endif
 %if %2 == 1 ; avg
-  shl             sec_str, 1
+  shl             second_str, 1
 %endif
 
   ; FIXME(rbultje) replace by jumptable?
@@ -192,35 +180,35 @@
 %if %1 == 16
   movu                 m0, [srcq]
   movu                 m2, [srcq + 16]
-  mova                 m1, [dstq]
-  mova                 m3, [dstq + 16]
+  mova                 m1, [refq]
+  mova                 m3, [refq + 16]
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  pavgw                m2, [secq+16]
+  pavgw                m0, [second_predq]
+  pavgw                m2, [second_predq+16]
 %endif
   SUM_SSE              m0, m1, m2, m3, m6, m7
 
   lea                srcq, [srcq + src_strideq*2]
-  lea                dstq, [dstq + dst_strideq*2]
+  lea                refq, [refq + ref_strideq*2]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %else ; %1 < 16
   movu                 m0, [srcq]
   movu                 m2, [srcq + src_strideq*2]
-  mova                 m1, [dstq]
-  mova                 m3, [dstq + dst_strideq*2]
+  mova                 m1, [refq]
+  mova                 m3, [refq + ref_strideq*2]
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  add                secq, sec_str
-  pavgw                m2, [secq]
+  pavgw                m0, [second_predq]
+  add                second_predq, second_str
+  pavgw                m2, [second_predq]
 %endif
   SUM_SSE              m0, m1, m2, m3, m6, m7
 
   lea                srcq, [srcq + src_strideq*4]
-  lea                dstq, [dstq + dst_strideq*4]
+  lea                refq, [refq + ref_strideq*4]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %endif
   dec                   block_height
@@ -238,40 +226,40 @@
   movu                 m1, [srcq+16]
   movu                 m4, [srcq+src_strideq*2]
   movu                 m5, [srcq+src_strideq*2+16]
-  mova                 m2, [dstq]
-  mova                 m3, [dstq+16]
+  mova                 m2, [refq]
+  mova                 m3, [refq+16]
   pavgw                m0, m4
   pavgw                m1, m5
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  pavgw                m1, [secq+16]
+  pavgw                m0, [second_predq]
+  pavgw                m1, [second_predq+16]
 %endif
   SUM_SSE              m0, m2, m1, m3, m6, m7
 
   lea                srcq, [srcq + src_strideq*2]
-  lea                dstq, [dstq + dst_strideq*2]
+  lea                refq, [refq + ref_strideq*2]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %else ; %1 < 16
   movu                 m0, [srcq]
   movu                 m1, [srcq+src_strideq*2]
   movu                 m5, [srcq+src_strideq*4]
-  mova                 m2, [dstq]
-  mova                 m3, [dstq+dst_strideq*2]
+  mova                 m2, [refq]
+  mova                 m3, [refq+ref_strideq*2]
   pavgw                m0, m1
   pavgw                m1, m5
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  add                secq, sec_str
-  pavgw                m1, [secq]
+  pavgw                m0, [second_predq]
+  add                second_predq, second_str
+  pavgw                m1, [second_predq]
 %endif
   SUM_SSE              m0, m2, m1, m3, m6, m7
 
   lea                srcq, [srcq + src_strideq*4]
-  lea                dstq, [dstq + dst_strideq*4]
+  lea                refq, [refq + ref_strideq*4]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %endif
   dec                   block_height
@@ -280,11 +268,11 @@
 
 .x_zero_y_nonhalf:
   ; x_offset == 0 && y_offset == bilin interpolation
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
   lea        bilin_filter, [GLOBAL(bilin_filter_m)]
 %endif
   shl           y_offsetd, filter_idx_shift
-%if ARCH_X86_64 && mmsize == 16
+%if VPX_ARCH_X86_64 && mmsize == 16
   mova                 m8, [bilin_filter+y_offsetq]
   mova                 m9, [bilin_filter+y_offsetq+16]
   mova                m10, [GLOBAL(pw_8)]
@@ -292,7 +280,7 @@
 %define filter_y_b m9
 %define filter_rnd m10
 %else ; x86-32 or mmx
-%if ARCH_X86=1 && CONFIG_PIC=1
+%if VPX_ARCH_X86=1 && CONFIG_PIC=1
 ; x_offset == 0, reuse x_offset reg
 %define tempq x_offsetq
   add y_offsetq, g_bilin_filterm
@@ -314,8 +302,8 @@
   movu                 m1, [srcq + 16]
   movu                 m4, [srcq+src_strideq*2]
   movu                 m5, [srcq+src_strideq*2+16]
-  mova                 m2, [dstq]
-  mova                 m3, [dstq+16]
+  mova                 m2, [refq]
+  mova                 m3, [refq+16]
   ; FIXME(rbultje) instead of out=((num-x)*in1+x*in2+rnd)>>log2(num), we can
   ; also do out=in1+(((num-x)*(in2-in1)+rnd)>>log2(num)). Total number of
   ; instructions is the same (5), but it is 1 mul instead of 2, so might be
@@ -332,23 +320,23 @@
   psrlw                m1, 4
   psrlw                m0, 4
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  pavgw                m1, [secq+16]
+  pavgw                m0, [second_predq]
+  pavgw                m1, [second_predq+16]
 %endif
   SUM_SSE              m0, m2, m1, m3, m6, m7
 
   lea                srcq, [srcq + src_strideq*2]
-  lea                dstq, [dstq + dst_strideq*2]
+  lea                refq, [refq + ref_strideq*2]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %else ; %1 < 16
   movu                 m0, [srcq]
   movu                 m1, [srcq+src_strideq*2]
   movu                 m5, [srcq+src_strideq*4]
   mova                 m4, m1
-  mova                 m2, [dstq]
-  mova                 m3, [dstq+dst_strideq*2]
+  mova                 m2, [refq]
+  mova                 m3, [refq+ref_strideq*2]
   pmullw               m1, filter_y_a
   pmullw               m5, filter_y_b
   paddw                m1, filter_rnd
@@ -360,16 +348,16 @@
   psrlw                m1, 4
   psrlw                m0, 4
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  add                secq, sec_str
-  pavgw                m1, [secq]
+  pavgw                m0, [second_predq]
+  add                second_predq, second_str
+  pavgw                m1, [second_predq]
 %endif
   SUM_SSE              m0, m2, m1, m3, m6, m7
 
   lea                srcq, [srcq + src_strideq*4]
-  lea                dstq, [dstq + dst_strideq*4]
+  lea                refq, [refq + ref_strideq*4]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %endif
   dec                   block_height
@@ -393,41 +381,41 @@
   movu                 m1, [srcq + 16]
   movu                 m4, [srcq + 2]
   movu                 m5, [srcq + 18]
-  mova                 m2, [dstq]
-  mova                 m3, [dstq + 16]
+  mova                 m2, [refq]
+  mova                 m3, [refq + 16]
   pavgw                m0, m4
   pavgw                m1, m5
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  pavgw                m1, [secq+16]
+  pavgw                m0, [second_predq]
+  pavgw                m1, [second_predq+16]
 %endif
   SUM_SSE              m0, m2, m1, m3, m6, m7
 
   lea                srcq, [srcq + src_strideq*2]
-  lea                dstq, [dstq + dst_strideq*2]
+  lea                refq, [refq + ref_strideq*2]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %else ; %1 < 16
   movu                 m0, [srcq]
   movu                 m1, [srcq + src_strideq*2]
   movu                 m4, [srcq + 2]
   movu                 m5, [srcq + src_strideq*2 + 2]
-  mova                 m2, [dstq]
-  mova                 m3, [dstq + dst_strideq*2]
+  mova                 m2, [refq]
+  mova                 m3, [refq + ref_strideq*2]
   pavgw                m0, m4
   pavgw                m1, m5
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  add                secq, sec_str
-  pavgw                m1, [secq]
+  pavgw                m0, [second_predq]
+  add                second_predq, second_str
+  pavgw                m1, [second_predq]
 %endif
   SUM_SSE              m0, m2, m1, m3, m6, m7
 
   lea                srcq, [srcq + src_strideq*4]
-  lea                dstq, [dstq + dst_strideq*4]
+  lea                refq, [refq + ref_strideq*4]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %endif
   dec                   block_height
@@ -456,20 +444,20 @@
   pavgw                m3, m5
   pavgw                m0, m2
   pavgw                m1, m3
-  mova                 m4, [dstq]
-  mova                 m5, [dstq + 16]
+  mova                 m4, [refq]
+  mova                 m5, [refq + 16]
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  pavgw                m1, [secq+16]
+  pavgw                m0, [second_predq]
+  pavgw                m1, [second_predq+16]
 %endif
   SUM_SSE              m0, m4, m1, m5, m6, m7
   mova                 m0, m2
   mova                 m1, m3
 
   lea                srcq, [srcq + src_strideq*2]
-  lea                dstq, [dstq + dst_strideq*2]
+  lea                refq, [refq + ref_strideq*2]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %else ; %1 < 16
   movu                 m0, [srcq]
@@ -485,20 +473,20 @@
   pavgw                m3, m5
   pavgw                m0, m2
   pavgw                m2, m3
-  mova                 m4, [dstq]
-  mova                 m5, [dstq + dst_strideq*2]
+  mova                 m4, [refq]
+  mova                 m5, [refq + ref_strideq*2]
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  add                secq, sec_str
-  pavgw                m2, [secq]
+  pavgw                m0, [second_predq]
+  add                second_predq, second_str
+  pavgw                m2, [second_predq]
 %endif
   SUM_SSE              m0, m4, m2, m5, m6, m7
   mova                 m0, m3
 
   lea                srcq, [srcq + src_strideq*4]
-  lea                dstq, [dstq + dst_strideq*4]
+  lea                refq, [refq + ref_strideq*4]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %endif
   dec                   block_height
@@ -507,11 +495,11 @@
 
 .x_half_y_nonhalf:
   ; x_offset == 0.5 && y_offset == bilin interpolation
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
   lea        bilin_filter, [GLOBAL(bilin_filter_m)]
 %endif
   shl           y_offsetd, filter_idx_shift
-%if ARCH_X86_64 && mmsize == 16
+%if VPX_ARCH_X86_64 && mmsize == 16
   mova                 m8, [bilin_filter+y_offsetq]
   mova                 m9, [bilin_filter+y_offsetq+16]
   mova                m10, [GLOBAL(pw_8)]
@@ -519,7 +507,7 @@
 %define filter_y_b m9
 %define filter_rnd m10
 %else  ; x86_32
-%if ARCH_X86=1 && CONFIG_PIC=1
+%if VPX_ARCH_X86=1 && CONFIG_PIC=1
 ; x_offset == 0.5. We can reuse x_offset reg
 %define tempq x_offsetq
   add y_offsetq, g_bilin_filterm
@@ -561,21 +549,21 @@
   paddw                m0, filter_rnd
   psrlw                m1, 4
   paddw                m0, m2
-  mova                 m2, [dstq]
+  mova                 m2, [refq]
   psrlw                m0, 4
-  mova                 m3, [dstq+16]
+  mova                 m3, [refq+16]
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  pavgw                m1, [secq+16]
+  pavgw                m0, [second_predq]
+  pavgw                m1, [second_predq+16]
 %endif
   SUM_SSE              m0, m2, m1, m3, m6, m7
   mova                 m0, m4
   mova                 m1, m5
 
   lea                srcq, [srcq + src_strideq*2]
-  lea                dstq, [dstq + dst_strideq*2]
+  lea                refq, [refq + ref_strideq*2]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %else ; %1 < 16
   movu                 m0, [srcq]
@@ -600,21 +588,21 @@
   paddw                m0, filter_rnd
   psrlw                m4, 4
   paddw                m0, m2
-  mova                 m2, [dstq]
+  mova                 m2, [refq]
   psrlw                m0, 4
-  mova                 m3, [dstq+dst_strideq*2]
+  mova                 m3, [refq+ref_strideq*2]
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  add                secq, sec_str
-  pavgw                m4, [secq]
+  pavgw                m0, [second_predq]
+  add                second_predq, second_str
+  pavgw                m4, [second_predq]
 %endif
   SUM_SSE              m0, m2, m4, m3, m6, m7
   mova                 m0, m5
 
   lea                srcq, [srcq + src_strideq*4]
-  lea                dstq, [dstq + dst_strideq*4]
+  lea                refq, [refq + ref_strideq*4]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %endif
   dec                   block_height
@@ -629,11 +617,11 @@
   jnz .x_nonhalf_y_nonzero
 
   ; x_offset == bilin interpolation && y_offset == 0
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
   lea        bilin_filter, [GLOBAL(bilin_filter_m)]
 %endif
   shl           x_offsetd, filter_idx_shift
-%if ARCH_X86_64 && mmsize == 16
+%if VPX_ARCH_X86_64 && mmsize == 16
   mova                 m8, [bilin_filter+x_offsetq]
   mova                 m9, [bilin_filter+x_offsetq+16]
   mova                m10, [GLOBAL(pw_8)]
@@ -641,7 +629,7 @@
 %define filter_x_b m9
 %define filter_rnd m10
 %else    ; x86-32
-%if ARCH_X86=1 && CONFIG_PIC=1
+%if VPX_ARCH_X86=1 && CONFIG_PIC=1
 ; y_offset == 0. We can reuse y_offset reg.
 %define tempq y_offsetq
   add x_offsetq, g_bilin_filterm
@@ -663,8 +651,8 @@
   movu                 m1, [srcq+16]
   movu                 m2, [srcq+2]
   movu                 m3, [srcq+18]
-  mova                 m4, [dstq]
-  mova                 m5, [dstq+16]
+  mova                 m4, [refq]
+  mova                 m5, [refq+16]
   pmullw               m1, filter_x_a
   pmullw               m3, filter_x_b
   paddw                m1, filter_rnd
@@ -676,23 +664,23 @@
   psrlw                m1, 4
   psrlw                m0, 4
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  pavgw                m1, [secq+16]
+  pavgw                m0, [second_predq]
+  pavgw                m1, [second_predq+16]
 %endif
   SUM_SSE              m0, m4, m1, m5, m6, m7
 
   lea                srcq, [srcq+src_strideq*2]
-  lea                dstq, [dstq+dst_strideq*2]
+  lea                refq, [refq+ref_strideq*2]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %else ; %1 < 16
   movu                 m0, [srcq]
   movu                 m1, [srcq+src_strideq*2]
   movu                 m2, [srcq+2]
   movu                 m3, [srcq+src_strideq*2+2]
-  mova                 m4, [dstq]
-  mova                 m5, [dstq+dst_strideq*2]
+  mova                 m4, [refq]
+  mova                 m5, [refq+ref_strideq*2]
   pmullw               m1, filter_x_a
   pmullw               m3, filter_x_b
   paddw                m1, filter_rnd
@@ -704,16 +692,16 @@
   psrlw                m1, 4
   psrlw                m0, 4
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  add                secq, sec_str
-  pavgw                m1, [secq]
+  pavgw                m0, [second_predq]
+  add                second_predq, second_str
+  pavgw                m1, [second_predq]
 %endif
   SUM_SSE              m0, m4, m1, m5, m6, m7
 
   lea                srcq, [srcq+src_strideq*4]
-  lea                dstq, [dstq+dst_strideq*4]
+  lea                refq, [refq+ref_strideq*4]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %endif
   dec                   block_height
@@ -728,11 +716,11 @@
   jne .x_nonhalf_y_nonhalf
 
   ; x_offset == bilin interpolation && y_offset == 0.5
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
   lea        bilin_filter, [GLOBAL(bilin_filter_m)]
 %endif
   shl           x_offsetd, filter_idx_shift
-%if ARCH_X86_64 && mmsize == 16
+%if VPX_ARCH_X86_64 && mmsize == 16
   mova                 m8, [bilin_filter+x_offsetq]
   mova                 m9, [bilin_filter+x_offsetq+16]
   mova                m10, [GLOBAL(pw_8)]
@@ -740,7 +728,7 @@
 %define filter_x_b m9
 %define filter_rnd m10
 %else    ; x86-32
-%if ARCH_X86=1 && CONFIG_PIC=1
+%if VPX_ARCH_X86=1 && CONFIG_PIC=1
 ; y_offset == 0.5. We can reuse y_offset reg.
 %define tempq y_offsetq
   add x_offsetq, g_bilin_filterm
@@ -785,24 +773,24 @@
   paddw                m3, filter_rnd
   paddw                m2, m4
   paddw                m3, m5
-  mova                 m4, [dstq]
-  mova                 m5, [dstq+16]
+  mova                 m4, [refq]
+  mova                 m5, [refq+16]
   psrlw                m2, 4
   psrlw                m3, 4
   pavgw                m0, m2
   pavgw                m1, m3
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  pavgw                m1, [secq+16]
+  pavgw                m0, [second_predq]
+  pavgw                m1, [second_predq+16]
 %endif
   SUM_SSE              m0, m4, m1, m5, m6, m7
   mova                 m0, m2
   mova                 m1, m3
 
   lea                srcq, [srcq+src_strideq*2]
-  lea                dstq, [dstq+dst_strideq*2]
+  lea                refq, [refq+ref_strideq*2]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %else ; %1 < 16
   movu                 m0, [srcq]
@@ -826,24 +814,24 @@
   paddw                m3, filter_rnd
   paddw                m2, m4
   paddw                m3, m5
-  mova                 m4, [dstq]
-  mova                 m5, [dstq+dst_strideq*2]
+  mova                 m4, [refq]
+  mova                 m5, [refq+ref_strideq*2]
   psrlw                m2, 4
   psrlw                m3, 4
   pavgw                m0, m2
   pavgw                m2, m3
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  add                secq, sec_str
-  pavgw                m2, [secq]
+  pavgw                m0, [second_predq]
+  add                second_predq, second_str
+  pavgw                m2, [second_predq]
 %endif
   SUM_SSE              m0, m4, m2, m5, m6, m7
   mova                 m0, m3
 
   lea                srcq, [srcq+src_strideq*4]
-  lea                dstq, [dstq+dst_strideq*4]
+  lea                refq, [refq+ref_strideq*4]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %endif
   dec                   block_height
@@ -855,12 +843,12 @@
 
 .x_nonhalf_y_nonhalf:
 ; loading filter - this is same as in 8-bit depth
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
   lea        bilin_filter, [GLOBAL(bilin_filter_m)]
 %endif
   shl           x_offsetd, filter_idx_shift ; filter_idx_shift = 5
   shl           y_offsetd, filter_idx_shift
-%if ARCH_X86_64 && mmsize == 16
+%if VPX_ARCH_X86_64 && mmsize == 16
   mova                 m8, [bilin_filter+x_offsetq]
   mova                 m9, [bilin_filter+x_offsetq+16]
   mova                m10, [bilin_filter+y_offsetq]
@@ -872,7 +860,7 @@
 %define filter_y_b m11
 %define filter_rnd m12
 %else   ; x86-32
-%if ARCH_X86=1 && CONFIG_PIC=1
+%if VPX_ARCH_X86=1 && CONFIG_PIC=1
 ; In this case, there is NO unused register. Used src_stride register. Later,
 ; src_stride has to be loaded from stack when it is needed.
 %define tempq src_strideq
@@ -941,23 +929,23 @@
   pmullw               m3, filter_y_b
   paddw                m0, m2
   paddw                m1, filter_rnd
-  mova                 m2, [dstq]
+  mova                 m2, [refq]
   paddw                m1, m3
   psrlw                m0, 4
   psrlw                m1, 4
-  mova                 m3, [dstq+16]
+  mova                 m3, [refq+16]
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  pavgw                m1, [secq+16]
+  pavgw                m0, [second_predq]
+  pavgw                m1, [second_predq+16]
 %endif
   SUM_SSE              m0, m2, m1, m3, m6, m7
   mova                 m0, m4
   mova                 m1, m5
 
   INC_SRC_BY_SRC_STRIDE
-  lea                dstq, [dstq + dst_strideq * 2]
+  lea                refq, [refq + ref_strideq * 2]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %else ; %1 < 16
   movu                 m0, [srcq]
@@ -995,23 +983,23 @@
   pmullw               m3, filter_y_b
   paddw                m0, m2
   paddw                m4, filter_rnd
-  mova                 m2, [dstq]
+  mova                 m2, [refq]
   paddw                m4, m3
   psrlw                m0, 4
   psrlw                m4, 4
-  mova                 m3, [dstq+dst_strideq*2]
+  mova                 m3, [refq+ref_strideq*2]
 %if %2 == 1 ; avg
-  pavgw                m0, [secq]
-  add                secq, sec_str
-  pavgw                m4, [secq]
+  pavgw                m0, [second_predq]
+  add                second_predq, second_str
+  pavgw                m4, [second_predq]
 %endif
   SUM_SSE              m0, m2, m4, m3, m6, m7
   mova                 m0, m5
 
   INC_SRC_BY_SRC_STRIDE
-  lea                dstq, [dstq + dst_strideq * 4]
+  lea                refq, [refq + ref_strideq * 4]
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
 %endif
   dec                   block_height
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_variance_impl_sse2.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_variance_impl_sse2.asm
index e646767..a256a59 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_variance_impl_sse2.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_variance_impl_sse2.asm
@@ -16,9 +16,9 @@
 ;unsigned int vpx_highbd_calc16x16var_sse2
 ;(
 ;    unsigned char   *  src_ptr,
-;    int             source_stride,
+;    int             src_stride,
 ;    unsigned char   *  ref_ptr,
-;    int             recon_stride,
+;    int             ref_stride,
 ;    unsigned int    *  SSE,
 ;    int             *  Sum
 ;)
@@ -36,8 +36,8 @@
         mov         rsi,            arg(0) ;[src_ptr]
         mov         rdi,            arg(2) ;[ref_ptr]
 
-        movsxd      rax,            DWORD PTR arg(1) ;[source_stride]
-        movsxd      rdx,            DWORD PTR arg(3) ;[recon_stride]
+        movsxd      rax,            DWORD PTR arg(1) ;[src_stride]
+        movsxd      rdx,            DWORD PTR arg(3) ;[ref_stride]
         add         rax,            rax ; source stride in bytes
         add         rdx,            rdx ; recon stride in bytes
 
@@ -169,9 +169,9 @@
 ;unsigned int vpx_highbd_calc8x8var_sse2
 ;(
 ;    unsigned char   *  src_ptr,
-;    int             source_stride,
+;    int             src_stride,
 ;    unsigned char   *  ref_ptr,
-;    int             recon_stride,
+;    int             ref_stride,
 ;    unsigned int    *  SSE,
 ;    int             *  Sum
 ;)
@@ -189,8 +189,8 @@
         mov         rsi,            arg(0) ;[src_ptr]
         mov         rdi,            arg(2) ;[ref_ptr]
 
-        movsxd      rax,            DWORD PTR arg(1) ;[source_stride]
-        movsxd      rdx,            DWORD PTR arg(3) ;[recon_stride]
+        movsxd      rax,            DWORD PTR arg(1) ;[src_stride]
+        movsxd      rdx,            DWORD PTR arg(3) ;[ref_stride]
         add         rax,            rax ; source stride in bytes
         add         rdx,            rdx ; recon stride in bytes
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_variance_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_variance_sse2.c
index a6f7c3d..dd6cfbb 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_variance_sse2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/highbd_variance_sse2.c
@@ -7,8 +7,9 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#include "./vpx_config.h"
 
+#include "./vpx_config.h"
+#include "./vpx_dsp_rtcd.h"
 #include "vpx_ports/mem.h"
 
 typedef uint32_t (*high_variance_fn_t)(const uint16_t *src, int src_stride,
@@ -89,9 +90,9 @@
 }
 
 #define HIGH_GET_VAR(S)                                                       \
-  void vpx_highbd_get##S##x##S##var_sse2(const uint8_t *src8, int src_stride, \
-                                         const uint8_t *ref8, int ref_stride, \
-                                         uint32_t *sse, int *sum) {           \
+  void vpx_highbd_8_get##S##x##S##var_sse2(                                   \
+      const uint8_t *src8, int src_stride, const uint8_t *ref8,               \
+      int ref_stride, uint32_t *sse, int *sum) {                              \
     uint16_t *src = CONVERT_TO_SHORTPTR(src8);                                \
     uint16_t *ref = CONVERT_TO_SHORTPTR(ref8);                                \
     vpx_highbd_calc##S##x##S##var_sse2(src, src_stride, ref, ref_stride, sse, \
@@ -135,7 +136,7 @@
     highbd_8_variance_sse2(                                                \
         src, src_stride, ref, ref_stride, w, h, sse, &sum,                 \
         vpx_highbd_calc##block_size##x##block_size##var_sse2, block_size); \
-    return *sse - (uint32_t)(((int64_t)sum * sum) >> shift);               \
+    return *sse - (uint32_t)(((int64_t)sum * sum) >> (shift));             \
   }                                                                        \
                                                                            \
   uint32_t vpx_highbd_10_variance##w##x##h##_sse2(                         \
@@ -148,7 +149,7 @@
     highbd_10_variance_sse2(                                               \
         src, src_stride, ref, ref_stride, w, h, sse, &sum,                 \
         vpx_highbd_calc##block_size##x##block_size##var_sse2, block_size); \
-    var = (int64_t)(*sse) - (((int64_t)sum * sum) >> shift);               \
+    var = (int64_t)(*sse) - (((int64_t)sum * sum) >> (shift));             \
     return (var >= 0) ? (uint32_t)var : 0;                                 \
   }                                                                        \
                                                                            \
@@ -162,7 +163,7 @@
     highbd_12_variance_sse2(                                               \
         src, src_stride, ref, ref_stride, w, h, sse, &sum,                 \
         vpx_highbd_calc##block_size##x##block_size##var_sse2, block_size); \
-    var = (int64_t)(*sse) - (((int64_t)sum * sum) >> shift);               \
+    var = (int64_t)(*sse) - (((int64_t)sum * sum) >> (shift));             \
     return (var >= 0) ? (uint32_t)var : 0;                                 \
   }
 
@@ -251,7 +252,7 @@
 #define DECL(w, opt)                                                         \
   int vpx_highbd_sub_pixel_variance##w##xh_##opt(                            \
       const uint16_t *src, ptrdiff_t src_stride, int x_offset, int y_offset, \
-      const uint16_t *dst, ptrdiff_t dst_stride, int height,                 \
+      const uint16_t *ref, ptrdiff_t ref_stride, int height,                 \
       unsigned int *sse, void *unused0, void *unused);
 #define DECLS(opt) \
   DECL(8, opt);    \
@@ -265,28 +266,28 @@
 #define FN(w, h, wf, wlog2, hlog2, opt, cast)                                  \
   uint32_t vpx_highbd_8_sub_pixel_variance##w##x##h##_##opt(                   \
       const uint8_t *src8, int src_stride, int x_offset, int y_offset,         \
-      const uint8_t *dst8, int dst_stride, uint32_t *sse_ptr) {                \
+      const uint8_t *ref8, int ref_stride, uint32_t *sse_ptr) {                \
     uint32_t sse;                                                              \
     uint16_t *src = CONVERT_TO_SHORTPTR(src8);                                 \
-    uint16_t *dst = CONVERT_TO_SHORTPTR(dst8);                                 \
+    uint16_t *ref = CONVERT_TO_SHORTPTR(ref8);                                 \
     int se = vpx_highbd_sub_pixel_variance##wf##xh_##opt(                      \
-        src, src_stride, x_offset, y_offset, dst, dst_stride, h, &sse, NULL,   \
+        src, src_stride, x_offset, y_offset, ref, ref_stride, h, &sse, NULL,   \
         NULL);                                                                 \
     if (w > wf) {                                                              \
       unsigned int sse2;                                                       \
       int se2 = vpx_highbd_sub_pixel_variance##wf##xh_##opt(                   \
-          src + 16, src_stride, x_offset, y_offset, dst + 16, dst_stride, h,   \
+          src + 16, src_stride, x_offset, y_offset, ref + 16, ref_stride, h,   \
           &sse2, NULL, NULL);                                                  \
       se += se2;                                                               \
       sse += sse2;                                                             \
       if (w > wf * 2) {                                                        \
         se2 = vpx_highbd_sub_pixel_variance##wf##xh_##opt(                     \
-            src + 32, src_stride, x_offset, y_offset, dst + 32, dst_stride, h, \
+            src + 32, src_stride, x_offset, y_offset, ref + 32, ref_stride, h, \
             &sse2, NULL, NULL);                                                \
         se += se2;                                                             \
         sse += sse2;                                                           \
         se2 = vpx_highbd_sub_pixel_variance##wf##xh_##opt(                     \
-            src + 48, src_stride, x_offset, y_offset, dst + 48, dst_stride, h, \
+            src + 48, src_stride, x_offset, y_offset, ref + 48, ref_stride, h, \
             &sse2, NULL, NULL);                                                \
         se += se2;                                                             \
         sse += sse2;                                                           \
@@ -298,29 +299,29 @@
                                                                                \
   uint32_t vpx_highbd_10_sub_pixel_variance##w##x##h##_##opt(                  \
       const uint8_t *src8, int src_stride, int x_offset, int y_offset,         \
-      const uint8_t *dst8, int dst_stride, uint32_t *sse_ptr) {                \
+      const uint8_t *ref8, int ref_stride, uint32_t *sse_ptr) {                \
     int64_t var;                                                               \
     uint32_t sse;                                                              \
     uint16_t *src = CONVERT_TO_SHORTPTR(src8);                                 \
-    uint16_t *dst = CONVERT_TO_SHORTPTR(dst8);                                 \
+    uint16_t *ref = CONVERT_TO_SHORTPTR(ref8);                                 \
     int se = vpx_highbd_sub_pixel_variance##wf##xh_##opt(                      \
-        src, src_stride, x_offset, y_offset, dst, dst_stride, h, &sse, NULL,   \
+        src, src_stride, x_offset, y_offset, ref, ref_stride, h, &sse, NULL,   \
         NULL);                                                                 \
     if (w > wf) {                                                              \
       uint32_t sse2;                                                           \
       int se2 = vpx_highbd_sub_pixel_variance##wf##xh_##opt(                   \
-          src + 16, src_stride, x_offset, y_offset, dst + 16, dst_stride, h,   \
+          src + 16, src_stride, x_offset, y_offset, ref + 16, ref_stride, h,   \
           &sse2, NULL, NULL);                                                  \
       se += se2;                                                               \
       sse += sse2;                                                             \
       if (w > wf * 2) {                                                        \
         se2 = vpx_highbd_sub_pixel_variance##wf##xh_##opt(                     \
-            src + 32, src_stride, x_offset, y_offset, dst + 32, dst_stride, h, \
+            src + 32, src_stride, x_offset, y_offset, ref + 32, ref_stride, h, \
             &sse2, NULL, NULL);                                                \
         se += se2;                                                             \
         sse += sse2;                                                           \
         se2 = vpx_highbd_sub_pixel_variance##wf##xh_##opt(                     \
-            src + 48, src_stride, x_offset, y_offset, dst + 48, dst_stride, h, \
+            src + 48, src_stride, x_offset, y_offset, ref + 48, ref_stride, h, \
             &sse2, NULL, NULL);                                                \
         se += se2;                                                             \
         sse += sse2;                                                           \
@@ -335,40 +336,40 @@
                                                                                \
   uint32_t vpx_highbd_12_sub_pixel_variance##w##x##h##_##opt(                  \
       const uint8_t *src8, int src_stride, int x_offset, int y_offset,         \
-      const uint8_t *dst8, int dst_stride, uint32_t *sse_ptr) {                \
+      const uint8_t *ref8, int ref_stride, uint32_t *sse_ptr) {                \
     int start_row;                                                             \
     uint32_t sse;                                                              \
     int se = 0;                                                                \
     int64_t var;                                                               \
     uint64_t long_sse = 0;                                                     \
     uint16_t *src = CONVERT_TO_SHORTPTR(src8);                                 \
-    uint16_t *dst = CONVERT_TO_SHORTPTR(dst8);                                 \
+    uint16_t *ref = CONVERT_TO_SHORTPTR(ref8);                                 \
     for (start_row = 0; start_row < h; start_row += 16) {                      \
       uint32_t sse2;                                                           \
       int height = h - start_row < 16 ? h - start_row : 16;                    \
       int se2 = vpx_highbd_sub_pixel_variance##wf##xh_##opt(                   \
           src + (start_row * src_stride), src_stride, x_offset, y_offset,      \
-          dst + (start_row * dst_stride), dst_stride, height, &sse2, NULL,     \
+          ref + (start_row * ref_stride), ref_stride, height, &sse2, NULL,     \
           NULL);                                                               \
       se += se2;                                                               \
       long_sse += sse2;                                                        \
       if (w > wf) {                                                            \
         se2 = vpx_highbd_sub_pixel_variance##wf##xh_##opt(                     \
             src + 16 + (start_row * src_stride), src_stride, x_offset,         \
-            y_offset, dst + 16 + (start_row * dst_stride), dst_stride, height, \
+            y_offset, ref + 16 + (start_row * ref_stride), ref_stride, height, \
             &sse2, NULL, NULL);                                                \
         se += se2;                                                             \
         long_sse += sse2;                                                      \
         if (w > wf * 2) {                                                      \
           se2 = vpx_highbd_sub_pixel_variance##wf##xh_##opt(                   \
               src + 32 + (start_row * src_stride), src_stride, x_offset,       \
-              y_offset, dst + 32 + (start_row * dst_stride), dst_stride,       \
+              y_offset, ref + 32 + (start_row * ref_stride), ref_stride,       \
               height, &sse2, NULL, NULL);                                      \
           se += se2;                                                           \
           long_sse += sse2;                                                    \
           se2 = vpx_highbd_sub_pixel_variance##wf##xh_##opt(                   \
               src + 48 + (start_row * src_stride), src_stride, x_offset,       \
-              y_offset, dst + 48 + (start_row * dst_stride), dst_stride,       \
+              y_offset, ref + 48 + (start_row * ref_stride), ref_stride,       \
               height, &sse2, NULL, NULL);                                      \
           se += se2;                                                           \
           long_sse += sse2;                                                    \
@@ -404,8 +405,8 @@
 #define DECL(w, opt)                                                         \
   int vpx_highbd_sub_pixel_avg_variance##w##xh_##opt(                        \
       const uint16_t *src, ptrdiff_t src_stride, int x_offset, int y_offset, \
-      const uint16_t *dst, ptrdiff_t dst_stride, const uint16_t *sec,        \
-      ptrdiff_t sec_stride, int height, unsigned int *sse, void *unused0,    \
+      const uint16_t *ref, ptrdiff_t ref_stride, const uint16_t *second,     \
+      ptrdiff_t second_stride, int height, unsigned int *sse, void *unused0, \
       void *unused);
 #define DECLS(opt1) \
   DECL(16, opt1)    \
@@ -418,30 +419,30 @@
 #define FN(w, h, wf, wlog2, hlog2, opt, cast)                                  \
   uint32_t vpx_highbd_8_sub_pixel_avg_variance##w##x##h##_##opt(               \
       const uint8_t *src8, int src_stride, int x_offset, int y_offset,         \
-      const uint8_t *dst8, int dst_stride, uint32_t *sse_ptr,                  \
+      const uint8_t *ref8, int ref_stride, uint32_t *sse_ptr,                  \
       const uint8_t *sec8) {                                                   \
     uint32_t sse;                                                              \
     uint16_t *src = CONVERT_TO_SHORTPTR(src8);                                 \
-    uint16_t *dst = CONVERT_TO_SHORTPTR(dst8);                                 \
+    uint16_t *ref = CONVERT_TO_SHORTPTR(ref8);                                 \
     uint16_t *sec = CONVERT_TO_SHORTPTR(sec8);                                 \
     int se = vpx_highbd_sub_pixel_avg_variance##wf##xh_##opt(                  \
-        src, src_stride, x_offset, y_offset, dst, dst_stride, sec, w, h, &sse, \
+        src, src_stride, x_offset, y_offset, ref, ref_stride, sec, w, h, &sse, \
         NULL, NULL);                                                           \
     if (w > wf) {                                                              \
       uint32_t sse2;                                                           \
       int se2 = vpx_highbd_sub_pixel_avg_variance##wf##xh_##opt(               \
-          src + 16, src_stride, x_offset, y_offset, dst + 16, dst_stride,      \
+          src + 16, src_stride, x_offset, y_offset, ref + 16, ref_stride,      \
           sec + 16, w, h, &sse2, NULL, NULL);                                  \
       se += se2;                                                               \
       sse += sse2;                                                             \
       if (w > wf * 2) {                                                        \
         se2 = vpx_highbd_sub_pixel_avg_variance##wf##xh_##opt(                 \
-            src + 32, src_stride, x_offset, y_offset, dst + 32, dst_stride,    \
+            src + 32, src_stride, x_offset, y_offset, ref + 32, ref_stride,    \
             sec + 32, w, h, &sse2, NULL, NULL);                                \
         se += se2;                                                             \
         sse += sse2;                                                           \
         se2 = vpx_highbd_sub_pixel_avg_variance##wf##xh_##opt(                 \
-            src + 48, src_stride, x_offset, y_offset, dst + 48, dst_stride,    \
+            src + 48, src_stride, x_offset, y_offset, ref + 48, ref_stride,    \
             sec + 48, w, h, &sse2, NULL, NULL);                                \
         se += se2;                                                             \
         sse += sse2;                                                           \
@@ -453,31 +454,31 @@
                                                                                \
   uint32_t vpx_highbd_10_sub_pixel_avg_variance##w##x##h##_##opt(              \
       const uint8_t *src8, int src_stride, int x_offset, int y_offset,         \
-      const uint8_t *dst8, int dst_stride, uint32_t *sse_ptr,                  \
+      const uint8_t *ref8, int ref_stride, uint32_t *sse_ptr,                  \
       const uint8_t *sec8) {                                                   \
     int64_t var;                                                               \
     uint32_t sse;                                                              \
     uint16_t *src = CONVERT_TO_SHORTPTR(src8);                                 \
-    uint16_t *dst = CONVERT_TO_SHORTPTR(dst8);                                 \
+    uint16_t *ref = CONVERT_TO_SHORTPTR(ref8);                                 \
     uint16_t *sec = CONVERT_TO_SHORTPTR(sec8);                                 \
     int se = vpx_highbd_sub_pixel_avg_variance##wf##xh_##opt(                  \
-        src, src_stride, x_offset, y_offset, dst, dst_stride, sec, w, h, &sse, \
+        src, src_stride, x_offset, y_offset, ref, ref_stride, sec, w, h, &sse, \
         NULL, NULL);                                                           \
     if (w > wf) {                                                              \
       uint32_t sse2;                                                           \
       int se2 = vpx_highbd_sub_pixel_avg_variance##wf##xh_##opt(               \
-          src + 16, src_stride, x_offset, y_offset, dst + 16, dst_stride,      \
+          src + 16, src_stride, x_offset, y_offset, ref + 16, ref_stride,      \
           sec + 16, w, h, &sse2, NULL, NULL);                                  \
       se += se2;                                                               \
       sse += sse2;                                                             \
       if (w > wf * 2) {                                                        \
         se2 = vpx_highbd_sub_pixel_avg_variance##wf##xh_##opt(                 \
-            src + 32, src_stride, x_offset, y_offset, dst + 32, dst_stride,    \
+            src + 32, src_stride, x_offset, y_offset, ref + 32, ref_stride,    \
             sec + 32, w, h, &sse2, NULL, NULL);                                \
         se += se2;                                                             \
         sse += sse2;                                                           \
         se2 = vpx_highbd_sub_pixel_avg_variance##wf##xh_##opt(                 \
-            src + 48, src_stride, x_offset, y_offset, dst + 48, dst_stride,    \
+            src + 48, src_stride, x_offset, y_offset, ref + 48, ref_stride,    \
             sec + 48, w, h, &sse2, NULL, NULL);                                \
         se += se2;                                                             \
         sse += sse2;                                                           \
@@ -492,7 +493,7 @@
                                                                                \
   uint32_t vpx_highbd_12_sub_pixel_avg_variance##w##x##h##_##opt(              \
       const uint8_t *src8, int src_stride, int x_offset, int y_offset,         \
-      const uint8_t *dst8, int dst_stride, uint32_t *sse_ptr,                  \
+      const uint8_t *ref8, int ref_stride, uint32_t *sse_ptr,                  \
       const uint8_t *sec8) {                                                   \
     int start_row;                                                             \
     int64_t var;                                                               \
@@ -500,34 +501,34 @@
     int se = 0;                                                                \
     uint64_t long_sse = 0;                                                     \
     uint16_t *src = CONVERT_TO_SHORTPTR(src8);                                 \
-    uint16_t *dst = CONVERT_TO_SHORTPTR(dst8);                                 \
+    uint16_t *ref = CONVERT_TO_SHORTPTR(ref8);                                 \
     uint16_t *sec = CONVERT_TO_SHORTPTR(sec8);                                 \
     for (start_row = 0; start_row < h; start_row += 16) {                      \
       uint32_t sse2;                                                           \
       int height = h - start_row < 16 ? h - start_row : 16;                    \
       int se2 = vpx_highbd_sub_pixel_avg_variance##wf##xh_##opt(               \
           src + (start_row * src_stride), src_stride, x_offset, y_offset,      \
-          dst + (start_row * dst_stride), dst_stride, sec + (start_row * w),   \
+          ref + (start_row * ref_stride), ref_stride, sec + (start_row * w),   \
           w, height, &sse2, NULL, NULL);                                       \
       se += se2;                                                               \
       long_sse += sse2;                                                        \
       if (w > wf) {                                                            \
         se2 = vpx_highbd_sub_pixel_avg_variance##wf##xh_##opt(                 \
             src + 16 + (start_row * src_stride), src_stride, x_offset,         \
-            y_offset, dst + 16 + (start_row * dst_stride), dst_stride,         \
+            y_offset, ref + 16 + (start_row * ref_stride), ref_stride,         \
             sec + 16 + (start_row * w), w, height, &sse2, NULL, NULL);         \
         se += se2;                                                             \
         long_sse += sse2;                                                      \
         if (w > wf * 2) {                                                      \
           se2 = vpx_highbd_sub_pixel_avg_variance##wf##xh_##opt(               \
               src + 32 + (start_row * src_stride), src_stride, x_offset,       \
-              y_offset, dst + 32 + (start_row * dst_stride), dst_stride,       \
+              y_offset, ref + 32 + (start_row * ref_stride), ref_stride,       \
               sec + 32 + (start_row * w), w, height, &sse2, NULL, NULL);       \
           se += se2;                                                           \
           long_sse += sse2;                                                    \
           se2 = vpx_highbd_sub_pixel_avg_variance##wf##xh_##opt(               \
               src + 48 + (start_row * src_stride), src_stride, x_offset,       \
-              y_offset, dst + 48 + (start_row * dst_stride), dst_stride,       \
+              y_offset, ref + 48 + (start_row * ref_stride), ref_stride,       \
               sec + 48 + (start_row * w), w, height, &sse2, NULL, NULL);       \
           se += se2;                                                           \
           long_sse += sse2;                                                    \
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_sse2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_sse2.h
index d573f66..b4bbd18 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_sse2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_sse2.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_X86_INV_TXFM_SSE2_H_
-#define VPX_DSP_X86_INV_TXFM_SSE2_H_
+#ifndef VPX_VPX_DSP_X86_INV_TXFM_SSE2_H_
+#define VPX_VPX_DSP_X86_INV_TXFM_SSE2_H_
 
 #include <emmintrin.h>  // SSE2
 
@@ -707,4 +707,4 @@
 void idct32_34_8x32_sse2(const __m128i *const in, __m128i *const out);
 void idct32_34_8x32_ssse3(const __m128i *const in, __m128i *const out);
 
-#endif  // VPX_DSP_X86_INV_TXFM_SSE2_H_
+#endif  // VPX_VPX_DSP_X86_INV_TXFM_SSE2_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_ssse3.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_ssse3.h
index e785c8e..e9f0f69 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_ssse3.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/inv_txfm_ssse3.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_X86_INV_TXFM_SSSE3_H_
-#define VPX_DSP_X86_INV_TXFM_SSSE3_H_
+#ifndef VPX_VPX_DSP_X86_INV_TXFM_SSSE3_H_
+#define VPX_VPX_DSP_X86_INV_TXFM_SSSE3_H_
 
 #include <tmmintrin.h>
 
@@ -107,4 +107,4 @@
 
 void idct32_135_8x32_ssse3(const __m128i *const in, __m128i *const out);
 
-#endif  // VPX_DSP_X86_INV_TXFM_SSSE3_H_
+#endif  // VPX_VPX_DSP_X86_INV_TXFM_SSSE3_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/loopfilter_avx2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/loopfilter_avx2.c
index 6652a62..be39199 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/loopfilter_avx2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/loopfilter_avx2.c
@@ -13,38 +13,38 @@
 #include "./vpx_dsp_rtcd.h"
 #include "vpx_ports/mem.h"
 
-void vpx_lpf_horizontal_16_avx2(unsigned char *s, int p,
-                                const unsigned char *_blimit,
-                                const unsigned char *_limit,
-                                const unsigned char *_thresh) {
+void vpx_lpf_horizontal_16_avx2(unsigned char *s, int pitch,
+                                const unsigned char *blimit,
+                                const unsigned char *limit,
+                                const unsigned char *thresh) {
   __m128i mask, hev, flat, flat2;
   const __m128i zero = _mm_set1_epi16(0);
   const __m128i one = _mm_set1_epi8(1);
   __m128i q7p7, q6p6, q5p5, q4p4, q3p3, q2p2, q1p1, q0p0, p0q0, p1q1;
   __m128i abs_p1p0;
 
-  const __m128i thresh =
-      _mm_broadcastb_epi8(_mm_cvtsi32_si128((int)_thresh[0]));
-  const __m128i limit = _mm_broadcastb_epi8(_mm_cvtsi32_si128((int)_limit[0]));
-  const __m128i blimit =
-      _mm_broadcastb_epi8(_mm_cvtsi32_si128((int)_blimit[0]));
+  const __m128i thresh_v =
+      _mm_broadcastb_epi8(_mm_cvtsi32_si128((int)thresh[0]));
+  const __m128i limit_v = _mm_broadcastb_epi8(_mm_cvtsi32_si128((int)limit[0]));
+  const __m128i blimit_v =
+      _mm_broadcastb_epi8(_mm_cvtsi32_si128((int)blimit[0]));
 
-  q4p4 = _mm_loadl_epi64((__m128i *)(s - 5 * p));
+  q4p4 = _mm_loadl_epi64((__m128i *)(s - 5 * pitch));
   q4p4 = _mm_castps_si128(
-      _mm_loadh_pi(_mm_castsi128_ps(q4p4), (__m64 *)(s + 4 * p)));
-  q3p3 = _mm_loadl_epi64((__m128i *)(s - 4 * p));
+      _mm_loadh_pi(_mm_castsi128_ps(q4p4), (__m64 *)(s + 4 * pitch)));
+  q3p3 = _mm_loadl_epi64((__m128i *)(s - 4 * pitch));
   q3p3 = _mm_castps_si128(
-      _mm_loadh_pi(_mm_castsi128_ps(q3p3), (__m64 *)(s + 3 * p)));
-  q2p2 = _mm_loadl_epi64((__m128i *)(s - 3 * p));
+      _mm_loadh_pi(_mm_castsi128_ps(q3p3), (__m64 *)(s + 3 * pitch)));
+  q2p2 = _mm_loadl_epi64((__m128i *)(s - 3 * pitch));
   q2p2 = _mm_castps_si128(
-      _mm_loadh_pi(_mm_castsi128_ps(q2p2), (__m64 *)(s + 2 * p)));
-  q1p1 = _mm_loadl_epi64((__m128i *)(s - 2 * p));
+      _mm_loadh_pi(_mm_castsi128_ps(q2p2), (__m64 *)(s + 2 * pitch)));
+  q1p1 = _mm_loadl_epi64((__m128i *)(s - 2 * pitch));
   q1p1 = _mm_castps_si128(
-      _mm_loadh_pi(_mm_castsi128_ps(q1p1), (__m64 *)(s + 1 * p)));
+      _mm_loadh_pi(_mm_castsi128_ps(q1p1), (__m64 *)(s + 1 * pitch)));
   p1q1 = _mm_shuffle_epi32(q1p1, 78);
-  q0p0 = _mm_loadl_epi64((__m128i *)(s - 1 * p));
+  q0p0 = _mm_loadl_epi64((__m128i *)(s - 1 * pitch));
   q0p0 = _mm_castps_si128(
-      _mm_loadh_pi(_mm_castsi128_ps(q0p0), (__m64 *)(s - 0 * p)));
+      _mm_loadh_pi(_mm_castsi128_ps(q0p0), (__m64 *)(s - 0 * pitch)));
   p0q0 = _mm_shuffle_epi32(q0p0, 78);
 
   {
@@ -52,19 +52,19 @@
     abs_p1p0 =
         _mm_or_si128(_mm_subs_epu8(q1p1, q0p0), _mm_subs_epu8(q0p0, q1p1));
     abs_q1q0 = _mm_srli_si128(abs_p1p0, 8);
-    fe = _mm_set1_epi8(0xfe);
+    fe = _mm_set1_epi8((int8_t)0xfe);
     ff = _mm_cmpeq_epi8(abs_p1p0, abs_p1p0);
     abs_p0q0 =
         _mm_or_si128(_mm_subs_epu8(q0p0, p0q0), _mm_subs_epu8(p0q0, q0p0));
     abs_p1q1 =
         _mm_or_si128(_mm_subs_epu8(q1p1, p1q1), _mm_subs_epu8(p1q1, q1p1));
     flat = _mm_max_epu8(abs_p1p0, abs_q1q0);
-    hev = _mm_subs_epu8(flat, thresh);
+    hev = _mm_subs_epu8(flat, thresh_v);
     hev = _mm_xor_si128(_mm_cmpeq_epi8(hev, zero), ff);
 
     abs_p0q0 = _mm_adds_epu8(abs_p0q0, abs_p0q0);
     abs_p1q1 = _mm_srli_epi16(_mm_and_si128(abs_p1q1, fe), 1);
-    mask = _mm_subs_epu8(_mm_adds_epu8(abs_p0q0, abs_p1q1), blimit);
+    mask = _mm_subs_epu8(_mm_adds_epu8(abs_p0q0, abs_p1q1), blimit_v);
     mask = _mm_xor_si128(_mm_cmpeq_epi8(mask, zero), ff);
     // mask |= (abs(p0 - q0) * 2 + abs(p1 - q1) / 2  > blimit) * -1;
     mask = _mm_max_epu8(abs_p1p0, mask);
@@ -76,7 +76,7 @@
         _mm_or_si128(_mm_subs_epu8(q3p3, q2p2), _mm_subs_epu8(q2p2, q3p3)));
     mask = _mm_max_epu8(work, mask);
     mask = _mm_max_epu8(mask, _mm_srli_si128(mask, 8));
-    mask = _mm_subs_epu8(mask, limit);
+    mask = _mm_subs_epu8(mask, limit_v);
     mask = _mm_cmpeq_epi8(mask, zero);
   }
 
@@ -84,7 +84,7 @@
   {
     const __m128i t4 = _mm_set1_epi8(4);
     const __m128i t3 = _mm_set1_epi8(3);
-    const __m128i t80 = _mm_set1_epi8(0x80);
+    const __m128i t80 = _mm_set1_epi8((int8_t)0x80);
     const __m128i t1 = _mm_set1_epi16(0x1);
     __m128i qs1ps1 = _mm_xor_si128(q1p1, t80);
     __m128i qs0ps0 = _mm_xor_si128(q0p0, t80);
@@ -136,21 +136,21 @@
       flat = _mm_cmpeq_epi8(flat, zero);
       flat = _mm_and_si128(flat, mask);
 
-      q5p5 = _mm_loadl_epi64((__m128i *)(s - 6 * p));
+      q5p5 = _mm_loadl_epi64((__m128i *)(s - 6 * pitch));
       q5p5 = _mm_castps_si128(
-          _mm_loadh_pi(_mm_castsi128_ps(q5p5), (__m64 *)(s + 5 * p)));
+          _mm_loadh_pi(_mm_castsi128_ps(q5p5), (__m64 *)(s + 5 * pitch)));
 
-      q6p6 = _mm_loadl_epi64((__m128i *)(s - 7 * p));
+      q6p6 = _mm_loadl_epi64((__m128i *)(s - 7 * pitch));
       q6p6 = _mm_castps_si128(
-          _mm_loadh_pi(_mm_castsi128_ps(q6p6), (__m64 *)(s + 6 * p)));
+          _mm_loadh_pi(_mm_castsi128_ps(q6p6), (__m64 *)(s + 6 * pitch)));
 
       flat2 = _mm_max_epu8(
           _mm_or_si128(_mm_subs_epu8(q4p4, q0p0), _mm_subs_epu8(q0p0, q4p4)),
           _mm_or_si128(_mm_subs_epu8(q5p5, q0p0), _mm_subs_epu8(q0p0, q5p5)));
 
-      q7p7 = _mm_loadl_epi64((__m128i *)(s - 8 * p));
+      q7p7 = _mm_loadl_epi64((__m128i *)(s - 8 * pitch));
       q7p7 = _mm_castps_si128(
-          _mm_loadh_pi(_mm_castsi128_ps(q7p7), (__m64 *)(s + 7 * p)));
+          _mm_loadh_pi(_mm_castsi128_ps(q7p7), (__m64 *)(s + 7 * pitch)));
 
       work = _mm_max_epu8(
           _mm_or_si128(_mm_subs_epu8(q6p6, q0p0), _mm_subs_epu8(q0p0, q6p6)),
@@ -321,44 +321,44 @@
     q6p6 = _mm_andnot_si128(flat2, q6p6);
     flat2_q6p6 = _mm_and_si128(flat2, flat2_q6p6);
     q6p6 = _mm_or_si128(q6p6, flat2_q6p6);
-    _mm_storel_epi64((__m128i *)(s - 7 * p), q6p6);
-    _mm_storeh_pi((__m64 *)(s + 6 * p), _mm_castsi128_ps(q6p6));
+    _mm_storel_epi64((__m128i *)(s - 7 * pitch), q6p6);
+    _mm_storeh_pi((__m64 *)(s + 6 * pitch), _mm_castsi128_ps(q6p6));
 
     q5p5 = _mm_andnot_si128(flat2, q5p5);
     flat2_q5p5 = _mm_and_si128(flat2, flat2_q5p5);
     q5p5 = _mm_or_si128(q5p5, flat2_q5p5);
-    _mm_storel_epi64((__m128i *)(s - 6 * p), q5p5);
-    _mm_storeh_pi((__m64 *)(s + 5 * p), _mm_castsi128_ps(q5p5));
+    _mm_storel_epi64((__m128i *)(s - 6 * pitch), q5p5);
+    _mm_storeh_pi((__m64 *)(s + 5 * pitch), _mm_castsi128_ps(q5p5));
 
     q4p4 = _mm_andnot_si128(flat2, q4p4);
     flat2_q4p4 = _mm_and_si128(flat2, flat2_q4p4);
     q4p4 = _mm_or_si128(q4p4, flat2_q4p4);
-    _mm_storel_epi64((__m128i *)(s - 5 * p), q4p4);
-    _mm_storeh_pi((__m64 *)(s + 4 * p), _mm_castsi128_ps(q4p4));
+    _mm_storel_epi64((__m128i *)(s - 5 * pitch), q4p4);
+    _mm_storeh_pi((__m64 *)(s + 4 * pitch), _mm_castsi128_ps(q4p4));
 
     q3p3 = _mm_andnot_si128(flat2, q3p3);
     flat2_q3p3 = _mm_and_si128(flat2, flat2_q3p3);
     q3p3 = _mm_or_si128(q3p3, flat2_q3p3);
-    _mm_storel_epi64((__m128i *)(s - 4 * p), q3p3);
-    _mm_storeh_pi((__m64 *)(s + 3 * p), _mm_castsi128_ps(q3p3));
+    _mm_storel_epi64((__m128i *)(s - 4 * pitch), q3p3);
+    _mm_storeh_pi((__m64 *)(s + 3 * pitch), _mm_castsi128_ps(q3p3));
 
     q2p2 = _mm_andnot_si128(flat2, q2p2);
     flat2_q2p2 = _mm_and_si128(flat2, flat2_q2p2);
     q2p2 = _mm_or_si128(q2p2, flat2_q2p2);
-    _mm_storel_epi64((__m128i *)(s - 3 * p), q2p2);
-    _mm_storeh_pi((__m64 *)(s + 2 * p), _mm_castsi128_ps(q2p2));
+    _mm_storel_epi64((__m128i *)(s - 3 * pitch), q2p2);
+    _mm_storeh_pi((__m64 *)(s + 2 * pitch), _mm_castsi128_ps(q2p2));
 
     q1p1 = _mm_andnot_si128(flat2, q1p1);
     flat2_q1p1 = _mm_and_si128(flat2, flat2_q1p1);
     q1p1 = _mm_or_si128(q1p1, flat2_q1p1);
-    _mm_storel_epi64((__m128i *)(s - 2 * p), q1p1);
-    _mm_storeh_pi((__m64 *)(s + 1 * p), _mm_castsi128_ps(q1p1));
+    _mm_storel_epi64((__m128i *)(s - 2 * pitch), q1p1);
+    _mm_storeh_pi((__m64 *)(s + 1 * pitch), _mm_castsi128_ps(q1p1));
 
     q0p0 = _mm_andnot_si128(flat2, q0p0);
     flat2_q0p0 = _mm_and_si128(flat2, flat2_q0p0);
     q0p0 = _mm_or_si128(q0p0, flat2_q0p0);
-    _mm_storel_epi64((__m128i *)(s - 1 * p), q0p0);
-    _mm_storeh_pi((__m64 *)(s - 0 * p), _mm_castsi128_ps(q0p0));
+    _mm_storel_epi64((__m128i *)(s - 1 * pitch), q0p0);
+    _mm_storeh_pi((__m64 *)(s - 0 * pitch), _mm_castsi128_ps(q0p0));
   }
 }
 
@@ -367,10 +367,10 @@
   8, 128, 9, 128, 10, 128, 11, 128, 12, 128, 13, 128, 14, 128, 15, 128
 };
 
-void vpx_lpf_horizontal_16_dual_avx2(unsigned char *s, int p,
-                                     const unsigned char *_blimit,
-                                     const unsigned char *_limit,
-                                     const unsigned char *_thresh) {
+void vpx_lpf_horizontal_16_dual_avx2(unsigned char *s, int pitch,
+                                     const unsigned char *blimit,
+                                     const unsigned char *limit,
+                                     const unsigned char *thresh) {
   __m128i mask, hev, flat, flat2;
   const __m128i zero = _mm_set1_epi16(0);
   const __m128i one = _mm_set1_epi8(1);
@@ -380,32 +380,32 @@
   __m256i p256_7, q256_7, p256_6, q256_6, p256_5, q256_5, p256_4, q256_4,
       p256_3, q256_3, p256_2, q256_2, p256_1, q256_1, p256_0, q256_0;
 
-  const __m128i thresh =
-      _mm_broadcastb_epi8(_mm_cvtsi32_si128((int)_thresh[0]));
-  const __m128i limit = _mm_broadcastb_epi8(_mm_cvtsi32_si128((int)_limit[0]));
-  const __m128i blimit =
-      _mm_broadcastb_epi8(_mm_cvtsi32_si128((int)_blimit[0]));
+  const __m128i thresh_v =
+      _mm_broadcastb_epi8(_mm_cvtsi32_si128((int)thresh[0]));
+  const __m128i limit_v = _mm_broadcastb_epi8(_mm_cvtsi32_si128((int)limit[0]));
+  const __m128i blimit_v =
+      _mm_broadcastb_epi8(_mm_cvtsi32_si128((int)blimit[0]));
 
-  p256_4 =
-      _mm256_castpd_si256(_mm256_broadcast_pd((__m128d const *)(s - 5 * p)));
-  p256_3 =
-      _mm256_castpd_si256(_mm256_broadcast_pd((__m128d const *)(s - 4 * p)));
-  p256_2 =
-      _mm256_castpd_si256(_mm256_broadcast_pd((__m128d const *)(s - 3 * p)));
-  p256_1 =
-      _mm256_castpd_si256(_mm256_broadcast_pd((__m128d const *)(s - 2 * p)));
-  p256_0 =
-      _mm256_castpd_si256(_mm256_broadcast_pd((__m128d const *)(s - 1 * p)));
-  q256_0 =
-      _mm256_castpd_si256(_mm256_broadcast_pd((__m128d const *)(s - 0 * p)));
-  q256_1 =
-      _mm256_castpd_si256(_mm256_broadcast_pd((__m128d const *)(s + 1 * p)));
-  q256_2 =
-      _mm256_castpd_si256(_mm256_broadcast_pd((__m128d const *)(s + 2 * p)));
-  q256_3 =
-      _mm256_castpd_si256(_mm256_broadcast_pd((__m128d const *)(s + 3 * p)));
-  q256_4 =
-      _mm256_castpd_si256(_mm256_broadcast_pd((__m128d const *)(s + 4 * p)));
+  p256_4 = _mm256_castpd_si256(
+      _mm256_broadcast_pd((__m128d const *)(s - 5 * pitch)));
+  p256_3 = _mm256_castpd_si256(
+      _mm256_broadcast_pd((__m128d const *)(s - 4 * pitch)));
+  p256_2 = _mm256_castpd_si256(
+      _mm256_broadcast_pd((__m128d const *)(s - 3 * pitch)));
+  p256_1 = _mm256_castpd_si256(
+      _mm256_broadcast_pd((__m128d const *)(s - 2 * pitch)));
+  p256_0 = _mm256_castpd_si256(
+      _mm256_broadcast_pd((__m128d const *)(s - 1 * pitch)));
+  q256_0 = _mm256_castpd_si256(
+      _mm256_broadcast_pd((__m128d const *)(s - 0 * pitch)));
+  q256_1 = _mm256_castpd_si256(
+      _mm256_broadcast_pd((__m128d const *)(s + 1 * pitch)));
+  q256_2 = _mm256_castpd_si256(
+      _mm256_broadcast_pd((__m128d const *)(s + 2 * pitch)));
+  q256_3 = _mm256_castpd_si256(
+      _mm256_broadcast_pd((__m128d const *)(s + 3 * pitch)));
+  q256_4 = _mm256_castpd_si256(
+      _mm256_broadcast_pd((__m128d const *)(s + 4 * pitch)));
 
   p4 = _mm256_castsi256_si128(p256_4);
   p3 = _mm256_castsi256_si128(p256_3);
@@ -423,7 +423,7 @@
         _mm_or_si128(_mm_subs_epu8(p1, p0), _mm_subs_epu8(p0, p1));
     const __m128i abs_q1q0 =
         _mm_or_si128(_mm_subs_epu8(q1, q0), _mm_subs_epu8(q0, q1));
-    const __m128i fe = _mm_set1_epi8(0xfe);
+    const __m128i fe = _mm_set1_epi8((int8_t)0xfe);
     const __m128i ff = _mm_cmpeq_epi8(abs_p1p0, abs_p1p0);
     __m128i abs_p0q0 =
         _mm_or_si128(_mm_subs_epu8(p0, q0), _mm_subs_epu8(q0, p0));
@@ -431,12 +431,12 @@
         _mm_or_si128(_mm_subs_epu8(p1, q1), _mm_subs_epu8(q1, p1));
     __m128i work;
     flat = _mm_max_epu8(abs_p1p0, abs_q1q0);
-    hev = _mm_subs_epu8(flat, thresh);
+    hev = _mm_subs_epu8(flat, thresh_v);
     hev = _mm_xor_si128(_mm_cmpeq_epi8(hev, zero), ff);
 
     abs_p0q0 = _mm_adds_epu8(abs_p0q0, abs_p0q0);
     abs_p1q1 = _mm_srli_epi16(_mm_and_si128(abs_p1q1, fe), 1);
-    mask = _mm_subs_epu8(_mm_adds_epu8(abs_p0q0, abs_p1q1), blimit);
+    mask = _mm_subs_epu8(_mm_adds_epu8(abs_p0q0, abs_p1q1), blimit_v);
     mask = _mm_xor_si128(_mm_cmpeq_epi8(mask, zero), ff);
     // mask |= (abs(p0 - q0) * 2 + abs(p1 - q1) / 2  > blimit) * -1;
     mask = _mm_max_epu8(flat, mask);
@@ -450,7 +450,7 @@
         _mm_or_si128(_mm_subs_epu8(q2, q1), _mm_subs_epu8(q1, q2)),
         _mm_or_si128(_mm_subs_epu8(q3, q2), _mm_subs_epu8(q2, q3)));
     mask = _mm_max_epu8(work, mask);
-    mask = _mm_subs_epu8(mask, limit);
+    mask = _mm_subs_epu8(mask, limit_v);
     mask = _mm_cmpeq_epi8(mask, zero);
   }
 
@@ -458,8 +458,8 @@
   {
     const __m128i t4 = _mm_set1_epi8(4);
     const __m128i t3 = _mm_set1_epi8(3);
-    const __m128i t80 = _mm_set1_epi8(0x80);
-    const __m128i te0 = _mm_set1_epi8(0xe0);
+    const __m128i t80 = _mm_set1_epi8((int8_t)0x80);
+    const __m128i te0 = _mm_set1_epi8((int8_t)0xe0);
     const __m128i t1f = _mm_set1_epi8(0x1f);
     const __m128i t1 = _mm_set1_epi8(0x1);
     const __m128i t7f = _mm_set1_epi8(0x7f);
@@ -532,9 +532,9 @@
       flat = _mm_and_si128(flat, mask);
 
       p256_5 = _mm256_castpd_si256(
-          _mm256_broadcast_pd((__m128d const *)(s - 6 * p)));
+          _mm256_broadcast_pd((__m128d const *)(s - 6 * pitch)));
       q256_5 = _mm256_castpd_si256(
-          _mm256_broadcast_pd((__m128d const *)(s + 5 * p)));
+          _mm256_broadcast_pd((__m128d const *)(s + 5 * pitch)));
       p5 = _mm256_castsi256_si128(p256_5);
       q5 = _mm256_castsi256_si128(q256_5);
       flat2 = _mm_max_epu8(
@@ -543,9 +543,9 @@
 
       flat2 = _mm_max_epu8(work, flat2);
       p256_6 = _mm256_castpd_si256(
-          _mm256_broadcast_pd((__m128d const *)(s - 7 * p)));
+          _mm256_broadcast_pd((__m128d const *)(s - 7 * pitch)));
       q256_6 = _mm256_castpd_si256(
-          _mm256_broadcast_pd((__m128d const *)(s + 6 * p)));
+          _mm256_broadcast_pd((__m128d const *)(s + 6 * pitch)));
       p6 = _mm256_castsi256_si128(p256_6);
       q6 = _mm256_castsi256_si128(q256_6);
       work = _mm_max_epu8(
@@ -555,9 +555,9 @@
       flat2 = _mm_max_epu8(work, flat2);
 
       p256_7 = _mm256_castpd_si256(
-          _mm256_broadcast_pd((__m128d const *)(s - 8 * p)));
+          _mm256_broadcast_pd((__m128d const *)(s - 8 * pitch)));
       q256_7 = _mm256_castpd_si256(
-          _mm256_broadcast_pd((__m128d const *)(s + 7 * p)));
+          _mm256_broadcast_pd((__m128d const *)(s + 7 * pitch)));
       p7 = _mm256_castsi256_si128(p256_7);
       q7 = _mm256_castsi256_si128(q256_7);
       work = _mm_max_epu8(
@@ -843,71 +843,71 @@
     p6 = _mm_andnot_si128(flat2, p6);
     flat2_p6 = _mm_and_si128(flat2, flat2_p6);
     p6 = _mm_or_si128(flat2_p6, p6);
-    _mm_storeu_si128((__m128i *)(s - 7 * p), p6);
+    _mm_storeu_si128((__m128i *)(s - 7 * pitch), p6);
 
     p5 = _mm_andnot_si128(flat2, p5);
     flat2_p5 = _mm_and_si128(flat2, flat2_p5);
     p5 = _mm_or_si128(flat2_p5, p5);
-    _mm_storeu_si128((__m128i *)(s - 6 * p), p5);
+    _mm_storeu_si128((__m128i *)(s - 6 * pitch), p5);
 
     p4 = _mm_andnot_si128(flat2, p4);
     flat2_p4 = _mm_and_si128(flat2, flat2_p4);
     p4 = _mm_or_si128(flat2_p4, p4);
-    _mm_storeu_si128((__m128i *)(s - 5 * p), p4);
+    _mm_storeu_si128((__m128i *)(s - 5 * pitch), p4);
 
     p3 = _mm_andnot_si128(flat2, p3);
     flat2_p3 = _mm_and_si128(flat2, flat2_p3);
     p3 = _mm_or_si128(flat2_p3, p3);
-    _mm_storeu_si128((__m128i *)(s - 4 * p), p3);
+    _mm_storeu_si128((__m128i *)(s - 4 * pitch), p3);
 
     p2 = _mm_andnot_si128(flat2, p2);
     flat2_p2 = _mm_and_si128(flat2, flat2_p2);
     p2 = _mm_or_si128(flat2_p2, p2);
-    _mm_storeu_si128((__m128i *)(s - 3 * p), p2);
+    _mm_storeu_si128((__m128i *)(s - 3 * pitch), p2);
 
     p1 = _mm_andnot_si128(flat2, p1);
     flat2_p1 = _mm_and_si128(flat2, flat2_p1);
     p1 = _mm_or_si128(flat2_p1, p1);
-    _mm_storeu_si128((__m128i *)(s - 2 * p), p1);
+    _mm_storeu_si128((__m128i *)(s - 2 * pitch), p1);
 
     p0 = _mm_andnot_si128(flat2, p0);
     flat2_p0 = _mm_and_si128(flat2, flat2_p0);
     p0 = _mm_or_si128(flat2_p0, p0);
-    _mm_storeu_si128((__m128i *)(s - 1 * p), p0);
+    _mm_storeu_si128((__m128i *)(s - 1 * pitch), p0);
 
     q0 = _mm_andnot_si128(flat2, q0);
     flat2_q0 = _mm_and_si128(flat2, flat2_q0);
     q0 = _mm_or_si128(flat2_q0, q0);
-    _mm_storeu_si128((__m128i *)(s - 0 * p), q0);
+    _mm_storeu_si128((__m128i *)(s - 0 * pitch), q0);
 
     q1 = _mm_andnot_si128(flat2, q1);
     flat2_q1 = _mm_and_si128(flat2, flat2_q1);
     q1 = _mm_or_si128(flat2_q1, q1);
-    _mm_storeu_si128((__m128i *)(s + 1 * p), q1);
+    _mm_storeu_si128((__m128i *)(s + 1 * pitch), q1);
 
     q2 = _mm_andnot_si128(flat2, q2);
     flat2_q2 = _mm_and_si128(flat2, flat2_q2);
     q2 = _mm_or_si128(flat2_q2, q2);
-    _mm_storeu_si128((__m128i *)(s + 2 * p), q2);
+    _mm_storeu_si128((__m128i *)(s + 2 * pitch), q2);
 
     q3 = _mm_andnot_si128(flat2, q3);
     flat2_q3 = _mm_and_si128(flat2, flat2_q3);
     q3 = _mm_or_si128(flat2_q3, q3);
-    _mm_storeu_si128((__m128i *)(s + 3 * p), q3);
+    _mm_storeu_si128((__m128i *)(s + 3 * pitch), q3);
 
     q4 = _mm_andnot_si128(flat2, q4);
     flat2_q4 = _mm_and_si128(flat2, flat2_q4);
     q4 = _mm_or_si128(flat2_q4, q4);
-    _mm_storeu_si128((__m128i *)(s + 4 * p), q4);
+    _mm_storeu_si128((__m128i *)(s + 4 * pitch), q4);
 
     q5 = _mm_andnot_si128(flat2, q5);
     flat2_q5 = _mm_and_si128(flat2, flat2_q5);
     q5 = _mm_or_si128(flat2_q5, q5);
-    _mm_storeu_si128((__m128i *)(s + 5 * p), q5);
+    _mm_storeu_si128((__m128i *)(s + 5 * pitch), q5);
 
     q6 = _mm_andnot_si128(flat2, q6);
     flat2_q6 = _mm_and_si128(flat2, flat2_q6);
     q6 = _mm_or_si128(flat2_q6, q6);
-    _mm_storeu_si128((__m128i *)(s + 6 * p), q6);
+    _mm_storeu_si128((__m128i *)(s + 6 * pitch), q6);
   }
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/loopfilter_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/loopfilter_sse2.c
index 28e6fd6..b6ff248 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/loopfilter_sse2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/loopfilter_sse2.c
@@ -13,6 +13,7 @@
 #include "./vpx_dsp_rtcd.h"
 #include "vpx_ports/mem.h"
 #include "vpx_ports/emmintrin_compat.h"
+#include "vpx_dsp/x86/mem_sse2.h"
 
 static INLINE __m128i abs_diff(__m128i a, __m128i b) {
   return _mm_or_si128(_mm_subs_epu8(a, b), _mm_subs_epu8(b, a));
@@ -30,7 +31,7 @@
     /* const uint8_t hev = hev_mask(thresh, *op1, *op0, *oq0, *oq1); */       \
     hev =                                                                     \
         _mm_unpacklo_epi8(_mm_max_epu8(flat, _mm_srli_si128(flat, 8)), zero); \
-    hev = _mm_cmpgt_epi16(hev, thresh);                                       \
+    hev = _mm_cmpgt_epi16(hev, thresh_v);                                     \
     hev = _mm_packs_epi16(hev, hev);                                          \
                                                                               \
     /* const int8_t mask = filter_mask(*limit, *blimit, */                    \
@@ -51,7 +52,7 @@
     flat = _mm_max_epu8(work, flat);                                          \
     flat = _mm_max_epu8(flat, _mm_srli_si128(flat, 8));                       \
     mask = _mm_unpacklo_epi64(mask, flat);                                    \
-    mask = _mm_subs_epu8(mask, limit);                                        \
+    mask = _mm_subs_epu8(mask, limit_v);                                      \
     mask = _mm_cmpeq_epi8(mask, zero);                                        \
     mask = _mm_and_si128(mask, _mm_srli_si128(mask, 8));                      \
   } while (0)
@@ -60,7 +61,7 @@
   do {                                                                      \
     const __m128i t3t4 =                                                    \
         _mm_set_epi8(3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4);       \
-    const __m128i t80 = _mm_set1_epi8(0x80);                                \
+    const __m128i t80 = _mm_set1_epi8((int8_t)0x80);                        \
     __m128i filter, filter2filter1, work;                                   \
                                                                             \
     ps1ps0 = _mm_xor_si128(p1p0, t80); /* ^ 0x80 */                         \
@@ -103,27 +104,26 @@
     ps1ps0 = _mm_xor_si128(ps1ps0, t80); /* ^ 0x80 */                       \
   } while (0)
 
-void vpx_lpf_horizontal_4_sse2(uint8_t *s, int p /* pitch */,
-                               const uint8_t *_blimit, const uint8_t *_limit,
-                               const uint8_t *_thresh) {
+void vpx_lpf_horizontal_4_sse2(uint8_t *s, int pitch, const uint8_t *blimit,
+                               const uint8_t *limit, const uint8_t *thresh) {
   const __m128i zero = _mm_set1_epi16(0);
-  const __m128i limit =
-      _mm_unpacklo_epi64(_mm_loadl_epi64((const __m128i *)_blimit),
-                         _mm_loadl_epi64((const __m128i *)_limit));
-  const __m128i thresh =
-      _mm_unpacklo_epi8(_mm_loadl_epi64((const __m128i *)_thresh), zero);
+  const __m128i limit_v =
+      _mm_unpacklo_epi64(_mm_loadl_epi64((const __m128i *)blimit),
+                         _mm_loadl_epi64((const __m128i *)limit));
+  const __m128i thresh_v =
+      _mm_unpacklo_epi8(_mm_loadl_epi64((const __m128i *)thresh), zero);
   const __m128i ff = _mm_cmpeq_epi8(zero, zero);
   __m128i q1p1, q0p0, p3p2, p2p1, p1p0, q3q2, q2q1, q1q0, ps1ps0, qs1qs0;
   __m128i mask, hev;
 
-  p3p2 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s - 3 * p)),
-                            _mm_loadl_epi64((__m128i *)(s - 4 * p)));
-  q1p1 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s - 2 * p)),
-                            _mm_loadl_epi64((__m128i *)(s + 1 * p)));
-  q0p0 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s - 1 * p)),
-                            _mm_loadl_epi64((__m128i *)(s + 0 * p)));
-  q3q2 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s + 2 * p)),
-                            _mm_loadl_epi64((__m128i *)(s + 3 * p)));
+  p3p2 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s - 3 * pitch)),
+                            _mm_loadl_epi64((__m128i *)(s - 4 * pitch)));
+  q1p1 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s - 2 * pitch)),
+                            _mm_loadl_epi64((__m128i *)(s + 1 * pitch)));
+  q0p0 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s - 1 * pitch)),
+                            _mm_loadl_epi64((__m128i *)(s + 0 * pitch)));
+  q3q2 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s + 2 * pitch)),
+                            _mm_loadl_epi64((__m128i *)(s + 3 * pitch)));
   p1p0 = _mm_unpacklo_epi64(q0p0, q1p1);
   p2p1 = _mm_unpacklo_epi64(q1p1, p3p2);
   q1q0 = _mm_unpackhi_epi64(q0p0, q1p1);
@@ -132,41 +132,40 @@
   FILTER_HEV_MASK;
   FILTER4;
 
-  _mm_storeh_pi((__m64 *)(s - 2 * p), _mm_castsi128_ps(ps1ps0));  // *op1
-  _mm_storel_epi64((__m128i *)(s - 1 * p), ps1ps0);               // *op0
-  _mm_storel_epi64((__m128i *)(s + 0 * p), qs1qs0);               // *oq0
-  _mm_storeh_pi((__m64 *)(s + 1 * p), _mm_castsi128_ps(qs1qs0));  // *oq1
+  _mm_storeh_pi((__m64 *)(s - 2 * pitch), _mm_castsi128_ps(ps1ps0));  // *op1
+  _mm_storel_epi64((__m128i *)(s - 1 * pitch), ps1ps0);               // *op0
+  _mm_storel_epi64((__m128i *)(s + 0 * pitch), qs1qs0);               // *oq0
+  _mm_storeh_pi((__m64 *)(s + 1 * pitch), _mm_castsi128_ps(qs1qs0));  // *oq1
 }
 
-void vpx_lpf_vertical_4_sse2(uint8_t *s, int p /* pitch */,
-                             const uint8_t *_blimit, const uint8_t *_limit,
-                             const uint8_t *_thresh) {
+void vpx_lpf_vertical_4_sse2(uint8_t *s, int pitch, const uint8_t *blimit,
+                             const uint8_t *limit, const uint8_t *thresh) {
   const __m128i zero = _mm_set1_epi16(0);
-  const __m128i limit =
-      _mm_unpacklo_epi64(_mm_loadl_epi64((const __m128i *)_blimit),
-                         _mm_loadl_epi64((const __m128i *)_limit));
-  const __m128i thresh =
-      _mm_unpacklo_epi8(_mm_loadl_epi64((const __m128i *)_thresh), zero);
+  const __m128i limit_v =
+      _mm_unpacklo_epi64(_mm_loadl_epi64((const __m128i *)blimit),
+                         _mm_loadl_epi64((const __m128i *)limit));
+  const __m128i thresh_v =
+      _mm_unpacklo_epi8(_mm_loadl_epi64((const __m128i *)thresh), zero);
   const __m128i ff = _mm_cmpeq_epi8(zero, zero);
   __m128i x0, x1, x2, x3;
   __m128i q1p1, q0p0, p3p2, p2p1, p1p0, q3q2, q2q1, q1q0, ps1ps0, qs1qs0;
   __m128i mask, hev;
 
   // 00 10 01 11 02 12 03 13 04 14 05 15 06 16 07 17
-  q1q0 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(s + 0 * p - 4)),
-                           _mm_loadl_epi64((__m128i *)(s + 1 * p - 4)));
+  q1q0 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(s + 0 * pitch - 4)),
+                           _mm_loadl_epi64((__m128i *)(s + 1 * pitch - 4)));
 
   // 20 30 21 31 22 32 23 33 24 34 25 35 26 36 27 37
-  x1 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(s + 2 * p - 4)),
-                         _mm_loadl_epi64((__m128i *)(s + 3 * p - 4)));
+  x1 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(s + 2 * pitch - 4)),
+                         _mm_loadl_epi64((__m128i *)(s + 3 * pitch - 4)));
 
   // 40 50 41 51 42 52 43 53 44 54 45 55 46 56 47 57
-  x2 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(s + 4 * p - 4)),
-                         _mm_loadl_epi64((__m128i *)(s + 5 * p - 4)));
+  x2 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(s + 4 * pitch - 4)),
+                         _mm_loadl_epi64((__m128i *)(s + 5 * pitch - 4)));
 
   // 60 70 61 71 62 72 63 73 64 74 65 75 66 76 67 77
-  x3 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(s + 6 * p - 4)),
-                         _mm_loadl_epi64((__m128i *)(s + 7 * p - 4)));
+  x3 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(s + 6 * pitch - 4)),
+                         _mm_loadl_epi64((__m128i *)(s + 7 * pitch - 4)));
 
   // Transpose 8x8
   // 00 10 20 30 01 11 21 31  02 12 22 32 03 13 23 33
@@ -212,69 +211,69 @@
   // 00 10 20 30 01 11 21 31  02 12 22 32 03 13 23 33
   ps1ps0 = _mm_unpacklo_epi8(ps1ps0, x0);
 
-  *(int *)(s + 0 * p - 2) = _mm_cvtsi128_si32(ps1ps0);
+  storeu_uint32(s + 0 * pitch - 2, _mm_cvtsi128_si32(ps1ps0));
   ps1ps0 = _mm_srli_si128(ps1ps0, 4);
-  *(int *)(s + 1 * p - 2) = _mm_cvtsi128_si32(ps1ps0);
+  storeu_uint32(s + 1 * pitch - 2, _mm_cvtsi128_si32(ps1ps0));
   ps1ps0 = _mm_srli_si128(ps1ps0, 4);
-  *(int *)(s + 2 * p - 2) = _mm_cvtsi128_si32(ps1ps0);
+  storeu_uint32(s + 2 * pitch - 2, _mm_cvtsi128_si32(ps1ps0));
   ps1ps0 = _mm_srli_si128(ps1ps0, 4);
-  *(int *)(s + 3 * p - 2) = _mm_cvtsi128_si32(ps1ps0);
+  storeu_uint32(s + 3 * pitch - 2, _mm_cvtsi128_si32(ps1ps0));
 
-  *(int *)(s + 4 * p - 2) = _mm_cvtsi128_si32(qs1qs0);
+  storeu_uint32(s + 4 * pitch - 2, _mm_cvtsi128_si32(qs1qs0));
   qs1qs0 = _mm_srli_si128(qs1qs0, 4);
-  *(int *)(s + 5 * p - 2) = _mm_cvtsi128_si32(qs1qs0);
+  storeu_uint32(s + 5 * pitch - 2, _mm_cvtsi128_si32(qs1qs0));
   qs1qs0 = _mm_srli_si128(qs1qs0, 4);
-  *(int *)(s + 6 * p - 2) = _mm_cvtsi128_si32(qs1qs0);
+  storeu_uint32(s + 6 * pitch - 2, _mm_cvtsi128_si32(qs1qs0));
   qs1qs0 = _mm_srli_si128(qs1qs0, 4);
-  *(int *)(s + 7 * p - 2) = _mm_cvtsi128_si32(qs1qs0);
+  storeu_uint32(s + 7 * pitch - 2, _mm_cvtsi128_si32(qs1qs0));
 }
 
-void vpx_lpf_horizontal_16_sse2(unsigned char *s, int p,
-                                const unsigned char *_blimit,
-                                const unsigned char *_limit,
-                                const unsigned char *_thresh) {
+void vpx_lpf_horizontal_16_sse2(unsigned char *s, int pitch,
+                                const unsigned char *blimit,
+                                const unsigned char *limit,
+                                const unsigned char *thresh) {
   const __m128i zero = _mm_set1_epi16(0);
   const __m128i one = _mm_set1_epi8(1);
-  const __m128i blimit = _mm_load_si128((const __m128i *)_blimit);
-  const __m128i limit = _mm_load_si128((const __m128i *)_limit);
-  const __m128i thresh = _mm_load_si128((const __m128i *)_thresh);
+  const __m128i blimit_v = _mm_load_si128((const __m128i *)blimit);
+  const __m128i limit_v = _mm_load_si128((const __m128i *)limit);
+  const __m128i thresh_v = _mm_load_si128((const __m128i *)thresh);
   __m128i mask, hev, flat, flat2;
   __m128i q7p7, q6p6, q5p5, q4p4, q3p3, q2p2, q1p1, q0p0, p0q0, p1q1;
   __m128i abs_p1p0;
 
-  q4p4 = _mm_loadl_epi64((__m128i *)(s - 5 * p));
+  q4p4 = _mm_loadl_epi64((__m128i *)(s - 5 * pitch));
   q4p4 = _mm_castps_si128(
-      _mm_loadh_pi(_mm_castsi128_ps(q4p4), (__m64 *)(s + 4 * p)));
-  q3p3 = _mm_loadl_epi64((__m128i *)(s - 4 * p));
+      _mm_loadh_pi(_mm_castsi128_ps(q4p4), (__m64 *)(s + 4 * pitch)));
+  q3p3 = _mm_loadl_epi64((__m128i *)(s - 4 * pitch));
   q3p3 = _mm_castps_si128(
-      _mm_loadh_pi(_mm_castsi128_ps(q3p3), (__m64 *)(s + 3 * p)));
-  q2p2 = _mm_loadl_epi64((__m128i *)(s - 3 * p));
+      _mm_loadh_pi(_mm_castsi128_ps(q3p3), (__m64 *)(s + 3 * pitch)));
+  q2p2 = _mm_loadl_epi64((__m128i *)(s - 3 * pitch));
   q2p2 = _mm_castps_si128(
-      _mm_loadh_pi(_mm_castsi128_ps(q2p2), (__m64 *)(s + 2 * p)));
-  q1p1 = _mm_loadl_epi64((__m128i *)(s - 2 * p));
+      _mm_loadh_pi(_mm_castsi128_ps(q2p2), (__m64 *)(s + 2 * pitch)));
+  q1p1 = _mm_loadl_epi64((__m128i *)(s - 2 * pitch));
   q1p1 = _mm_castps_si128(
-      _mm_loadh_pi(_mm_castsi128_ps(q1p1), (__m64 *)(s + 1 * p)));
+      _mm_loadh_pi(_mm_castsi128_ps(q1p1), (__m64 *)(s + 1 * pitch)));
   p1q1 = _mm_shuffle_epi32(q1p1, 78);
-  q0p0 = _mm_loadl_epi64((__m128i *)(s - 1 * p));
+  q0p0 = _mm_loadl_epi64((__m128i *)(s - 1 * pitch));
   q0p0 = _mm_castps_si128(
-      _mm_loadh_pi(_mm_castsi128_ps(q0p0), (__m64 *)(s - 0 * p)));
+      _mm_loadh_pi(_mm_castsi128_ps(q0p0), (__m64 *)(s - 0 * pitch)));
   p0q0 = _mm_shuffle_epi32(q0p0, 78);
 
   {
     __m128i abs_p1q1, abs_p0q0, abs_q1q0, fe, ff, work;
     abs_p1p0 = abs_diff(q1p1, q0p0);
     abs_q1q0 = _mm_srli_si128(abs_p1p0, 8);
-    fe = _mm_set1_epi8(0xfe);
+    fe = _mm_set1_epi8((int8_t)0xfe);
     ff = _mm_cmpeq_epi8(abs_p1p0, abs_p1p0);
     abs_p0q0 = abs_diff(q0p0, p0q0);
     abs_p1q1 = abs_diff(q1p1, p1q1);
     flat = _mm_max_epu8(abs_p1p0, abs_q1q0);
-    hev = _mm_subs_epu8(flat, thresh);
+    hev = _mm_subs_epu8(flat, thresh_v);
     hev = _mm_xor_si128(_mm_cmpeq_epi8(hev, zero), ff);
 
     abs_p0q0 = _mm_adds_epu8(abs_p0q0, abs_p0q0);
     abs_p1q1 = _mm_srli_epi16(_mm_and_si128(abs_p1q1, fe), 1);
-    mask = _mm_subs_epu8(_mm_adds_epu8(abs_p0q0, abs_p1q1), blimit);
+    mask = _mm_subs_epu8(_mm_adds_epu8(abs_p0q0, abs_p1q1), blimit_v);
     mask = _mm_xor_si128(_mm_cmpeq_epi8(mask, zero), ff);
     // mask |= (abs(p0 - q0) * 2 + abs(p1 - q1) / 2  > blimit) * -1;
     mask = _mm_max_epu8(abs_p1p0, mask);
@@ -284,7 +283,7 @@
     work = _mm_max_epu8(abs_diff(q2p2, q1p1), abs_diff(q3p3, q2p2));
     mask = _mm_max_epu8(work, mask);
     mask = _mm_max_epu8(mask, _mm_srli_si128(mask, 8));
-    mask = _mm_subs_epu8(mask, limit);
+    mask = _mm_subs_epu8(mask, limit_v);
     mask = _mm_cmpeq_epi8(mask, zero);
   }
 
@@ -292,7 +291,7 @@
   {
     const __m128i t4 = _mm_set1_epi8(4);
     const __m128i t3 = _mm_set1_epi8(3);
-    const __m128i t80 = _mm_set1_epi8(0x80);
+    const __m128i t80 = _mm_set1_epi8((int8_t)0x80);
     const __m128i t1 = _mm_set1_epi16(0x1);
     __m128i qs1ps1 = _mm_xor_si128(q1p1, t80);
     __m128i qs0ps0 = _mm_xor_si128(q0p0, t80);
@@ -342,18 +341,18 @@
       flat = _mm_cmpeq_epi8(flat, zero);
       flat = _mm_and_si128(flat, mask);
 
-      q5p5 = _mm_loadl_epi64((__m128i *)(s - 6 * p));
+      q5p5 = _mm_loadl_epi64((__m128i *)(s - 6 * pitch));
       q5p5 = _mm_castps_si128(
-          _mm_loadh_pi(_mm_castsi128_ps(q5p5), (__m64 *)(s + 5 * p)));
+          _mm_loadh_pi(_mm_castsi128_ps(q5p5), (__m64 *)(s + 5 * pitch)));
 
-      q6p6 = _mm_loadl_epi64((__m128i *)(s - 7 * p));
+      q6p6 = _mm_loadl_epi64((__m128i *)(s - 7 * pitch));
       q6p6 = _mm_castps_si128(
-          _mm_loadh_pi(_mm_castsi128_ps(q6p6), (__m64 *)(s + 6 * p)));
+          _mm_loadh_pi(_mm_castsi128_ps(q6p6), (__m64 *)(s + 6 * pitch)));
       flat2 = _mm_max_epu8(abs_diff(q4p4, q0p0), abs_diff(q5p5, q0p0));
 
-      q7p7 = _mm_loadl_epi64((__m128i *)(s - 8 * p));
+      q7p7 = _mm_loadl_epi64((__m128i *)(s - 8 * pitch));
       q7p7 = _mm_castps_si128(
-          _mm_loadh_pi(_mm_castsi128_ps(q7p7), (__m64 *)(s + 7 * p)));
+          _mm_loadh_pi(_mm_castsi128_ps(q7p7), (__m64 *)(s + 7 * pitch)));
       work = _mm_max_epu8(abs_diff(q6p6, q0p0), abs_diff(q7p7, q0p0));
       flat2 = _mm_max_epu8(work, flat2);
       flat2 = _mm_max_epu8(flat2, _mm_srli_si128(flat2, 8));
@@ -520,44 +519,44 @@
     q6p6 = _mm_andnot_si128(flat2, q6p6);
     flat2_q6p6 = _mm_and_si128(flat2, flat2_q6p6);
     q6p6 = _mm_or_si128(q6p6, flat2_q6p6);
-    _mm_storel_epi64((__m128i *)(s - 7 * p), q6p6);
-    _mm_storeh_pi((__m64 *)(s + 6 * p), _mm_castsi128_ps(q6p6));
+    _mm_storel_epi64((__m128i *)(s - 7 * pitch), q6p6);
+    _mm_storeh_pi((__m64 *)(s + 6 * pitch), _mm_castsi128_ps(q6p6));
 
     q5p5 = _mm_andnot_si128(flat2, q5p5);
     flat2_q5p5 = _mm_and_si128(flat2, flat2_q5p5);
     q5p5 = _mm_or_si128(q5p5, flat2_q5p5);
-    _mm_storel_epi64((__m128i *)(s - 6 * p), q5p5);
-    _mm_storeh_pi((__m64 *)(s + 5 * p), _mm_castsi128_ps(q5p5));
+    _mm_storel_epi64((__m128i *)(s - 6 * pitch), q5p5);
+    _mm_storeh_pi((__m64 *)(s + 5 * pitch), _mm_castsi128_ps(q5p5));
 
     q4p4 = _mm_andnot_si128(flat2, q4p4);
     flat2_q4p4 = _mm_and_si128(flat2, flat2_q4p4);
     q4p4 = _mm_or_si128(q4p4, flat2_q4p4);
-    _mm_storel_epi64((__m128i *)(s - 5 * p), q4p4);
-    _mm_storeh_pi((__m64 *)(s + 4 * p), _mm_castsi128_ps(q4p4));
+    _mm_storel_epi64((__m128i *)(s - 5 * pitch), q4p4);
+    _mm_storeh_pi((__m64 *)(s + 4 * pitch), _mm_castsi128_ps(q4p4));
 
     q3p3 = _mm_andnot_si128(flat2, q3p3);
     flat2_q3p3 = _mm_and_si128(flat2, flat2_q3p3);
     q3p3 = _mm_or_si128(q3p3, flat2_q3p3);
-    _mm_storel_epi64((__m128i *)(s - 4 * p), q3p3);
-    _mm_storeh_pi((__m64 *)(s + 3 * p), _mm_castsi128_ps(q3p3));
+    _mm_storel_epi64((__m128i *)(s - 4 * pitch), q3p3);
+    _mm_storeh_pi((__m64 *)(s + 3 * pitch), _mm_castsi128_ps(q3p3));
 
     q2p2 = _mm_andnot_si128(flat2, q2p2);
     flat2_q2p2 = _mm_and_si128(flat2, flat2_q2p2);
     q2p2 = _mm_or_si128(q2p2, flat2_q2p2);
-    _mm_storel_epi64((__m128i *)(s - 3 * p), q2p2);
-    _mm_storeh_pi((__m64 *)(s + 2 * p), _mm_castsi128_ps(q2p2));
+    _mm_storel_epi64((__m128i *)(s - 3 * pitch), q2p2);
+    _mm_storeh_pi((__m64 *)(s + 2 * pitch), _mm_castsi128_ps(q2p2));
 
     q1p1 = _mm_andnot_si128(flat2, q1p1);
     flat2_q1p1 = _mm_and_si128(flat2, flat2_q1p1);
     q1p1 = _mm_or_si128(q1p1, flat2_q1p1);
-    _mm_storel_epi64((__m128i *)(s - 2 * p), q1p1);
-    _mm_storeh_pi((__m64 *)(s + 1 * p), _mm_castsi128_ps(q1p1));
+    _mm_storel_epi64((__m128i *)(s - 2 * pitch), q1p1);
+    _mm_storeh_pi((__m64 *)(s + 1 * pitch), _mm_castsi128_ps(q1p1));
 
     q0p0 = _mm_andnot_si128(flat2, q0p0);
     flat2_q0p0 = _mm_and_si128(flat2, flat2_q0p0);
     q0p0 = _mm_or_si128(q0p0, flat2_q0p0);
-    _mm_storel_epi64((__m128i *)(s - 1 * p), q0p0);
-    _mm_storeh_pi((__m64 *)(s - 0 * p), _mm_castsi128_ps(q0p0));
+    _mm_storel_epi64((__m128i *)(s - 1 * pitch), q0p0);
+    _mm_storeh_pi((__m64 *)(s - 0 * pitch), _mm_castsi128_ps(q0p0));
   }
 }
 
@@ -591,15 +590,15 @@
   return _mm_or_si128(_mm_andnot_si128(*flat, *other_filt), result);
 }
 
-void vpx_lpf_horizontal_16_dual_sse2(unsigned char *s, int p,
-                                     const unsigned char *_blimit,
-                                     const unsigned char *_limit,
-                                     const unsigned char *_thresh) {
+void vpx_lpf_horizontal_16_dual_sse2(unsigned char *s, int pitch,
+                                     const unsigned char *blimit,
+                                     const unsigned char *limit,
+                                     const unsigned char *thresh) {
   const __m128i zero = _mm_set1_epi16(0);
   const __m128i one = _mm_set1_epi8(1);
-  const __m128i blimit = _mm_load_si128((const __m128i *)_blimit);
-  const __m128i limit = _mm_load_si128((const __m128i *)_limit);
-  const __m128i thresh = _mm_load_si128((const __m128i *)_thresh);
+  const __m128i blimit_v = _mm_load_si128((const __m128i *)blimit);
+  const __m128i limit_v = _mm_load_si128((const __m128i *)limit);
+  const __m128i thresh_v = _mm_load_si128((const __m128i *)thresh);
   __m128i mask, hev, flat, flat2;
   __m128i p7, p6, p5;
   __m128i p4, p3, p2, p1, p0, q0, q1, q2, q3, q4;
@@ -609,27 +608,27 @@
 
   __m128i max_abs_p1p0q1q0;
 
-  p7 = _mm_loadu_si128((__m128i *)(s - 8 * p));
-  p6 = _mm_loadu_si128((__m128i *)(s - 7 * p));
-  p5 = _mm_loadu_si128((__m128i *)(s - 6 * p));
-  p4 = _mm_loadu_si128((__m128i *)(s - 5 * p));
-  p3 = _mm_loadu_si128((__m128i *)(s - 4 * p));
-  p2 = _mm_loadu_si128((__m128i *)(s - 3 * p));
-  p1 = _mm_loadu_si128((__m128i *)(s - 2 * p));
-  p0 = _mm_loadu_si128((__m128i *)(s - 1 * p));
-  q0 = _mm_loadu_si128((__m128i *)(s - 0 * p));
-  q1 = _mm_loadu_si128((__m128i *)(s + 1 * p));
-  q2 = _mm_loadu_si128((__m128i *)(s + 2 * p));
-  q3 = _mm_loadu_si128((__m128i *)(s + 3 * p));
-  q4 = _mm_loadu_si128((__m128i *)(s + 4 * p));
-  q5 = _mm_loadu_si128((__m128i *)(s + 5 * p));
-  q6 = _mm_loadu_si128((__m128i *)(s + 6 * p));
-  q7 = _mm_loadu_si128((__m128i *)(s + 7 * p));
+  p7 = _mm_loadu_si128((__m128i *)(s - 8 * pitch));
+  p6 = _mm_loadu_si128((__m128i *)(s - 7 * pitch));
+  p5 = _mm_loadu_si128((__m128i *)(s - 6 * pitch));
+  p4 = _mm_loadu_si128((__m128i *)(s - 5 * pitch));
+  p3 = _mm_loadu_si128((__m128i *)(s - 4 * pitch));
+  p2 = _mm_loadu_si128((__m128i *)(s - 3 * pitch));
+  p1 = _mm_loadu_si128((__m128i *)(s - 2 * pitch));
+  p0 = _mm_loadu_si128((__m128i *)(s - 1 * pitch));
+  q0 = _mm_loadu_si128((__m128i *)(s - 0 * pitch));
+  q1 = _mm_loadu_si128((__m128i *)(s + 1 * pitch));
+  q2 = _mm_loadu_si128((__m128i *)(s + 2 * pitch));
+  q3 = _mm_loadu_si128((__m128i *)(s + 3 * pitch));
+  q4 = _mm_loadu_si128((__m128i *)(s + 4 * pitch));
+  q5 = _mm_loadu_si128((__m128i *)(s + 5 * pitch));
+  q6 = _mm_loadu_si128((__m128i *)(s + 6 * pitch));
+  q7 = _mm_loadu_si128((__m128i *)(s + 7 * pitch));
 
   {
     const __m128i abs_p1p0 = abs_diff(p1, p0);
     const __m128i abs_q1q0 = abs_diff(q1, q0);
-    const __m128i fe = _mm_set1_epi8(0xfe);
+    const __m128i fe = _mm_set1_epi8((int8_t)0xfe);
     const __m128i ff = _mm_cmpeq_epi8(zero, zero);
     __m128i abs_p0q0 = abs_diff(p0, q0);
     __m128i abs_p1q1 = abs_diff(p1, q1);
@@ -638,7 +637,7 @@
 
     abs_p0q0 = _mm_adds_epu8(abs_p0q0, abs_p0q0);
     abs_p1q1 = _mm_srli_epi16(_mm_and_si128(abs_p1q1, fe), 1);
-    mask = _mm_subs_epu8(_mm_adds_epu8(abs_p0q0, abs_p1q1), blimit);
+    mask = _mm_subs_epu8(_mm_adds_epu8(abs_p0q0, abs_p1q1), blimit_v);
     mask = _mm_xor_si128(_mm_cmpeq_epi8(mask, zero), ff);
     // mask |= (abs(p0 - q0) * 2 + abs(p1 - q1) / 2  > blimit) * -1;
     mask = _mm_max_epu8(max_abs_p1p0q1q0, mask);
@@ -648,7 +647,7 @@
     mask = _mm_max_epu8(work, mask);
     work = _mm_max_epu8(abs_diff(q2, q1), abs_diff(q3, q2));
     mask = _mm_max_epu8(work, mask);
-    mask = _mm_subs_epu8(mask, limit);
+    mask = _mm_subs_epu8(mask, limit_v);
     mask = _mm_cmpeq_epi8(mask, zero);
   }
 
@@ -678,8 +677,8 @@
   {
     const __m128i t4 = _mm_set1_epi8(4);
     const __m128i t3 = _mm_set1_epi8(3);
-    const __m128i t80 = _mm_set1_epi8(0x80);
-    const __m128i te0 = _mm_set1_epi8(0xe0);
+    const __m128i t80 = _mm_set1_epi8((int8_t)0x80);
+    const __m128i te0 = _mm_set1_epi8((int8_t)0xe0);
     const __m128i t1f = _mm_set1_epi8(0x1f);
     const __m128i t1 = _mm_set1_epi8(0x1);
     const __m128i t7f = _mm_set1_epi8(0x7f);
@@ -694,7 +693,7 @@
     oq0 = _mm_xor_si128(q0, t80);
     oq1 = _mm_xor_si128(q1, t80);
 
-    hev = _mm_subs_epu8(max_abs_p1p0q1q0, thresh);
+    hev = _mm_subs_epu8(max_abs_p1p0q1q0, thresh_v);
     hev = _mm_xor_si128(_mm_cmpeq_epi8(hev, zero), ff);
     filt = _mm_and_si128(_mm_subs_epi8(op1, oq1), hev);
 
@@ -851,82 +850,82 @@
       f_hi = _mm_add_epi16(_mm_add_epi16(p5_hi, eight), f_hi);
 
       p6 = filter16_mask(&flat2, &p6, &f_lo, &f_hi);
-      _mm_storeu_si128((__m128i *)(s - 7 * p), p6);
+      _mm_storeu_si128((__m128i *)(s - 7 * pitch), p6);
 
       f_lo = filter_add2_sub2(&f_lo, &q1_lo, &p5_lo, &p6_lo, &p7_lo);
       f_hi = filter_add2_sub2(&f_hi, &q1_hi, &p5_hi, &p6_hi, &p7_hi);
       p5 = filter16_mask(&flat2, &p5, &f_lo, &f_hi);
-      _mm_storeu_si128((__m128i *)(s - 6 * p), p5);
+      _mm_storeu_si128((__m128i *)(s - 6 * pitch), p5);
 
       f_lo = filter_add2_sub2(&f_lo, &q2_lo, &p4_lo, &p5_lo, &p7_lo);
       f_hi = filter_add2_sub2(&f_hi, &q2_hi, &p4_hi, &p5_hi, &p7_hi);
       p4 = filter16_mask(&flat2, &p4, &f_lo, &f_hi);
-      _mm_storeu_si128((__m128i *)(s - 5 * p), p4);
+      _mm_storeu_si128((__m128i *)(s - 5 * pitch), p4);
 
       f_lo = filter_add2_sub2(&f_lo, &q3_lo, &p3_lo, &p4_lo, &p7_lo);
       f_hi = filter_add2_sub2(&f_hi, &q3_hi, &p3_hi, &p4_hi, &p7_hi);
       p3 = filter16_mask(&flat2, &p3, &f_lo, &f_hi);
-      _mm_storeu_si128((__m128i *)(s - 4 * p), p3);
+      _mm_storeu_si128((__m128i *)(s - 4 * pitch), p3);
 
       f_lo = filter_add2_sub2(&f_lo, &q4_lo, &p2_lo, &p3_lo, &p7_lo);
       f_hi = filter_add2_sub2(&f_hi, &q4_hi, &p2_hi, &p3_hi, &p7_hi);
       op2 = filter16_mask(&flat2, &op2, &f_lo, &f_hi);
-      _mm_storeu_si128((__m128i *)(s - 3 * p), op2);
+      _mm_storeu_si128((__m128i *)(s - 3 * pitch), op2);
 
       f_lo = filter_add2_sub2(&f_lo, &q5_lo, &p1_lo, &p2_lo, &p7_lo);
       f_hi = filter_add2_sub2(&f_hi, &q5_hi, &p1_hi, &p2_hi, &p7_hi);
       op1 = filter16_mask(&flat2, &op1, &f_lo, &f_hi);
-      _mm_storeu_si128((__m128i *)(s - 2 * p), op1);
+      _mm_storeu_si128((__m128i *)(s - 2 * pitch), op1);
 
       f_lo = filter_add2_sub2(&f_lo, &q6_lo, &p0_lo, &p1_lo, &p7_lo);
       f_hi = filter_add2_sub2(&f_hi, &q6_hi, &p0_hi, &p1_hi, &p7_hi);
       op0 = filter16_mask(&flat2, &op0, &f_lo, &f_hi);
-      _mm_storeu_si128((__m128i *)(s - 1 * p), op0);
+      _mm_storeu_si128((__m128i *)(s - 1 * pitch), op0);
 
       f_lo = filter_add2_sub2(&f_lo, &q7_lo, &q0_lo, &p0_lo, &p7_lo);
       f_hi = filter_add2_sub2(&f_hi, &q7_hi, &q0_hi, &p0_hi, &p7_hi);
       oq0 = filter16_mask(&flat2, &oq0, &f_lo, &f_hi);
-      _mm_storeu_si128((__m128i *)(s - 0 * p), oq0);
+      _mm_storeu_si128((__m128i *)(s - 0 * pitch), oq0);
 
       f_lo = filter_add2_sub2(&f_lo, &q7_lo, &q1_lo, &p6_lo, &q0_lo);
       f_hi = filter_add2_sub2(&f_hi, &q7_hi, &q1_hi, &p6_hi, &q0_hi);
       oq1 = filter16_mask(&flat2, &oq1, &f_lo, &f_hi);
-      _mm_storeu_si128((__m128i *)(s + 1 * p), oq1);
+      _mm_storeu_si128((__m128i *)(s + 1 * pitch), oq1);
 
       f_lo = filter_add2_sub2(&f_lo, &q7_lo, &q2_lo, &p5_lo, &q1_lo);
       f_hi = filter_add2_sub2(&f_hi, &q7_hi, &q2_hi, &p5_hi, &q1_hi);
       oq2 = filter16_mask(&flat2, &oq2, &f_lo, &f_hi);
-      _mm_storeu_si128((__m128i *)(s + 2 * p), oq2);
+      _mm_storeu_si128((__m128i *)(s + 2 * pitch), oq2);
 
       f_lo = filter_add2_sub2(&f_lo, &q7_lo, &q3_lo, &p4_lo, &q2_lo);
       f_hi = filter_add2_sub2(&f_hi, &q7_hi, &q3_hi, &p4_hi, &q2_hi);
       q3 = filter16_mask(&flat2, &q3, &f_lo, &f_hi);
-      _mm_storeu_si128((__m128i *)(s + 3 * p), q3);
+      _mm_storeu_si128((__m128i *)(s + 3 * pitch), q3);
 
       f_lo = filter_add2_sub2(&f_lo, &q7_lo, &q4_lo, &p3_lo, &q3_lo);
       f_hi = filter_add2_sub2(&f_hi, &q7_hi, &q4_hi, &p3_hi, &q3_hi);
       q4 = filter16_mask(&flat2, &q4, &f_lo, &f_hi);
-      _mm_storeu_si128((__m128i *)(s + 4 * p), q4);
+      _mm_storeu_si128((__m128i *)(s + 4 * pitch), q4);
 
       f_lo = filter_add2_sub2(&f_lo, &q7_lo, &q5_lo, &p2_lo, &q4_lo);
       f_hi = filter_add2_sub2(&f_hi, &q7_hi, &q5_hi, &p2_hi, &q4_hi);
       q5 = filter16_mask(&flat2, &q5, &f_lo, &f_hi);
-      _mm_storeu_si128((__m128i *)(s + 5 * p), q5);
+      _mm_storeu_si128((__m128i *)(s + 5 * pitch), q5);
 
       f_lo = filter_add2_sub2(&f_lo, &q7_lo, &q6_lo, &p1_lo, &q5_lo);
       f_hi = filter_add2_sub2(&f_hi, &q7_hi, &q6_hi, &p1_hi, &q5_hi);
       q6 = filter16_mask(&flat2, &q6, &f_lo, &f_hi);
-      _mm_storeu_si128((__m128i *)(s + 6 * p), q6);
+      _mm_storeu_si128((__m128i *)(s + 6 * pitch), q6);
     }
     // wide flat
     // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   }
 }
 
-void vpx_lpf_horizontal_8_sse2(unsigned char *s, int p,
-                               const unsigned char *_blimit,
-                               const unsigned char *_limit,
-                               const unsigned char *_thresh) {
+void vpx_lpf_horizontal_8_sse2(unsigned char *s, int pitch,
+                               const unsigned char *blimit,
+                               const unsigned char *limit,
+                               const unsigned char *thresh) {
   DECLARE_ALIGNED(16, unsigned char, flat_op2[16]);
   DECLARE_ALIGNED(16, unsigned char, flat_op1[16]);
   DECLARE_ALIGNED(16, unsigned char, flat_op0[16]);
@@ -934,28 +933,28 @@
   DECLARE_ALIGNED(16, unsigned char, flat_oq1[16]);
   DECLARE_ALIGNED(16, unsigned char, flat_oq0[16]);
   const __m128i zero = _mm_set1_epi16(0);
-  const __m128i blimit = _mm_load_si128((const __m128i *)_blimit);
-  const __m128i limit = _mm_load_si128((const __m128i *)_limit);
-  const __m128i thresh = _mm_load_si128((const __m128i *)_thresh);
+  const __m128i blimit_v = _mm_load_si128((const __m128i *)blimit);
+  const __m128i limit_v = _mm_load_si128((const __m128i *)limit);
+  const __m128i thresh_v = _mm_load_si128((const __m128i *)thresh);
   __m128i mask, hev, flat;
   __m128i p3, p2, p1, p0, q0, q1, q2, q3;
   __m128i q3p3, q2p2, q1p1, q0p0, p1q1, p0q0;
 
-  q3p3 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s - 4 * p)),
-                            _mm_loadl_epi64((__m128i *)(s + 3 * p)));
-  q2p2 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s - 3 * p)),
-                            _mm_loadl_epi64((__m128i *)(s + 2 * p)));
-  q1p1 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s - 2 * p)),
-                            _mm_loadl_epi64((__m128i *)(s + 1 * p)));
-  q0p0 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s - 1 * p)),
-                            _mm_loadl_epi64((__m128i *)(s - 0 * p)));
+  q3p3 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s - 4 * pitch)),
+                            _mm_loadl_epi64((__m128i *)(s + 3 * pitch)));
+  q2p2 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s - 3 * pitch)),
+                            _mm_loadl_epi64((__m128i *)(s + 2 * pitch)));
+  q1p1 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s - 2 * pitch)),
+                            _mm_loadl_epi64((__m128i *)(s + 1 * pitch)));
+  q0p0 = _mm_unpacklo_epi64(_mm_loadl_epi64((__m128i *)(s - 1 * pitch)),
+                            _mm_loadl_epi64((__m128i *)(s - 0 * pitch)));
   p1q1 = _mm_shuffle_epi32(q1p1, 78);
   p0q0 = _mm_shuffle_epi32(q0p0, 78);
 
   {
     // filter_mask and hev_mask
     const __m128i one = _mm_set1_epi8(1);
-    const __m128i fe = _mm_set1_epi8(0xfe);
+    const __m128i fe = _mm_set1_epi8((int8_t)0xfe);
     const __m128i ff = _mm_cmpeq_epi8(fe, fe);
     __m128i abs_p1q1, abs_p0q0, abs_q1q0, abs_p1p0, work;
     abs_p1p0 = abs_diff(q1p1, q0p0);
@@ -964,12 +963,12 @@
     abs_p0q0 = abs_diff(q0p0, p0q0);
     abs_p1q1 = abs_diff(q1p1, p1q1);
     flat = _mm_max_epu8(abs_p1p0, abs_q1q0);
-    hev = _mm_subs_epu8(flat, thresh);
+    hev = _mm_subs_epu8(flat, thresh_v);
     hev = _mm_xor_si128(_mm_cmpeq_epi8(hev, zero), ff);
 
     abs_p0q0 = _mm_adds_epu8(abs_p0q0, abs_p0q0);
     abs_p1q1 = _mm_srli_epi16(_mm_and_si128(abs_p1q1, fe), 1);
-    mask = _mm_subs_epu8(_mm_adds_epu8(abs_p0q0, abs_p1q1), blimit);
+    mask = _mm_subs_epu8(_mm_adds_epu8(abs_p0q0, abs_p1q1), blimit_v);
     mask = _mm_xor_si128(_mm_cmpeq_epi8(mask, zero), ff);
     // mask |= (abs(p0 - q0) * 2 + abs(p1 - q1) / 2  > blimit) * -1;
     mask = _mm_max_epu8(abs_p1p0, mask);
@@ -979,7 +978,7 @@
     work = _mm_max_epu8(abs_diff(q2p2, q1p1), abs_diff(q3p3, q2p2));
     mask = _mm_max_epu8(work, mask);
     mask = _mm_max_epu8(mask, _mm_srli_si128(mask, 8));
-    mask = _mm_subs_epu8(mask, limit);
+    mask = _mm_subs_epu8(mask, limit_v);
     mask = _mm_cmpeq_epi8(mask, zero);
 
     // flat_mask4
@@ -997,14 +996,22 @@
     unsigned char *src = s;
     {
       __m128i workp_a, workp_b, workp_shft;
-      p3 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 4 * p)), zero);
-      p2 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 3 * p)), zero);
-      p1 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 2 * p)), zero);
-      p0 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 1 * p)), zero);
-      q0 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 0 * p)), zero);
-      q1 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src + 1 * p)), zero);
-      q2 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src + 2 * p)), zero);
-      q3 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src + 3 * p)), zero);
+      p3 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 4 * pitch)),
+                             zero);
+      p2 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 3 * pitch)),
+                             zero);
+      p1 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 2 * pitch)),
+                             zero);
+      p0 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 1 * pitch)),
+                             zero);
+      q0 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 0 * pitch)),
+                             zero);
+      q1 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src + 1 * pitch)),
+                             zero);
+      q2 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src + 2 * pitch)),
+                             zero);
+      q3 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src + 3 * pitch)),
+                             zero);
 
       workp_a = _mm_add_epi16(_mm_add_epi16(p3, p3), _mm_add_epi16(p2, p1));
       workp_a = _mm_add_epi16(_mm_add_epi16(workp_a, four), p0);
@@ -1047,16 +1054,16 @@
   {
     const __m128i t4 = _mm_set1_epi8(4);
     const __m128i t3 = _mm_set1_epi8(3);
-    const __m128i t80 = _mm_set1_epi8(0x80);
+    const __m128i t80 = _mm_set1_epi8((int8_t)0x80);
     const __m128i t1 = _mm_set1_epi8(0x1);
     const __m128i ps1 =
-        _mm_xor_si128(_mm_loadl_epi64((__m128i *)(s - 2 * p)), t80);
+        _mm_xor_si128(_mm_loadl_epi64((__m128i *)(s - 2 * pitch)), t80);
     const __m128i ps0 =
-        _mm_xor_si128(_mm_loadl_epi64((__m128i *)(s - 1 * p)), t80);
+        _mm_xor_si128(_mm_loadl_epi64((__m128i *)(s - 1 * pitch)), t80);
     const __m128i qs0 =
-        _mm_xor_si128(_mm_loadl_epi64((__m128i *)(s + 0 * p)), t80);
+        _mm_xor_si128(_mm_loadl_epi64((__m128i *)(s + 0 * pitch)), t80);
     const __m128i qs1 =
-        _mm_xor_si128(_mm_loadl_epi64((__m128i *)(s + 1 * p)), t80);
+        _mm_xor_si128(_mm_loadl_epi64((__m128i *)(s + 1 * pitch)), t80);
     __m128i filt;
     __m128i work_a;
     __m128i filter1, filter2;
@@ -1102,7 +1109,7 @@
     q1 = _mm_and_si128(flat, q1);
     q1 = _mm_or_si128(work_a, q1);
 
-    work_a = _mm_loadu_si128((__m128i *)(s + 2 * p));
+    work_a = _mm_loadu_si128((__m128i *)(s + 2 * pitch));
     q2 = _mm_loadl_epi64((__m128i *)flat_oq2);
     work_a = _mm_andnot_si128(flat, work_a);
     q2 = _mm_and_si128(flat, q2);
@@ -1120,27 +1127,25 @@
     p1 = _mm_and_si128(flat, p1);
     p1 = _mm_or_si128(work_a, p1);
 
-    work_a = _mm_loadu_si128((__m128i *)(s - 3 * p));
+    work_a = _mm_loadu_si128((__m128i *)(s - 3 * pitch));
     p2 = _mm_loadl_epi64((__m128i *)flat_op2);
     work_a = _mm_andnot_si128(flat, work_a);
     p2 = _mm_and_si128(flat, p2);
     p2 = _mm_or_si128(work_a, p2);
 
-    _mm_storel_epi64((__m128i *)(s - 3 * p), p2);
-    _mm_storel_epi64((__m128i *)(s - 2 * p), p1);
-    _mm_storel_epi64((__m128i *)(s - 1 * p), p0);
-    _mm_storel_epi64((__m128i *)(s + 0 * p), q0);
-    _mm_storel_epi64((__m128i *)(s + 1 * p), q1);
-    _mm_storel_epi64((__m128i *)(s + 2 * p), q2);
+    _mm_storel_epi64((__m128i *)(s - 3 * pitch), p2);
+    _mm_storel_epi64((__m128i *)(s - 2 * pitch), p1);
+    _mm_storel_epi64((__m128i *)(s - 1 * pitch), p0);
+    _mm_storel_epi64((__m128i *)(s + 0 * pitch), q0);
+    _mm_storel_epi64((__m128i *)(s + 1 * pitch), q1);
+    _mm_storel_epi64((__m128i *)(s + 2 * pitch), q2);
   }
 }
 
-void vpx_lpf_horizontal_8_dual_sse2(uint8_t *s, int p, const uint8_t *_blimit0,
-                                    const uint8_t *_limit0,
-                                    const uint8_t *_thresh0,
-                                    const uint8_t *_blimit1,
-                                    const uint8_t *_limit1,
-                                    const uint8_t *_thresh1) {
+void vpx_lpf_horizontal_8_dual_sse2(
+    uint8_t *s, int pitch, const uint8_t *blimit0, const uint8_t *limit0,
+    const uint8_t *thresh0, const uint8_t *blimit1, const uint8_t *limit1,
+    const uint8_t *thresh1) {
   DECLARE_ALIGNED(16, unsigned char, flat_op2[16]);
   DECLARE_ALIGNED(16, unsigned char, flat_op1[16]);
   DECLARE_ALIGNED(16, unsigned char, flat_op0[16]);
@@ -1149,33 +1154,33 @@
   DECLARE_ALIGNED(16, unsigned char, flat_oq0[16]);
   const __m128i zero = _mm_set1_epi16(0);
   const __m128i blimit =
-      _mm_unpacklo_epi64(_mm_load_si128((const __m128i *)_blimit0),
-                         _mm_load_si128((const __m128i *)_blimit1));
+      _mm_unpacklo_epi64(_mm_load_si128((const __m128i *)blimit0),
+                         _mm_load_si128((const __m128i *)blimit1));
   const __m128i limit =
-      _mm_unpacklo_epi64(_mm_load_si128((const __m128i *)_limit0),
-                         _mm_load_si128((const __m128i *)_limit1));
+      _mm_unpacklo_epi64(_mm_load_si128((const __m128i *)limit0),
+                         _mm_load_si128((const __m128i *)limit1));
   const __m128i thresh =
-      _mm_unpacklo_epi64(_mm_load_si128((const __m128i *)_thresh0),
-                         _mm_load_si128((const __m128i *)_thresh1));
+      _mm_unpacklo_epi64(_mm_load_si128((const __m128i *)thresh0),
+                         _mm_load_si128((const __m128i *)thresh1));
 
   __m128i mask, hev, flat;
   __m128i p3, p2, p1, p0, q0, q1, q2, q3;
 
-  p3 = _mm_loadu_si128((__m128i *)(s - 4 * p));
-  p2 = _mm_loadu_si128((__m128i *)(s - 3 * p));
-  p1 = _mm_loadu_si128((__m128i *)(s - 2 * p));
-  p0 = _mm_loadu_si128((__m128i *)(s - 1 * p));
-  q0 = _mm_loadu_si128((__m128i *)(s - 0 * p));
-  q1 = _mm_loadu_si128((__m128i *)(s + 1 * p));
-  q2 = _mm_loadu_si128((__m128i *)(s + 2 * p));
-  q3 = _mm_loadu_si128((__m128i *)(s + 3 * p));
+  p3 = _mm_loadu_si128((__m128i *)(s - 4 * pitch));
+  p2 = _mm_loadu_si128((__m128i *)(s - 3 * pitch));
+  p1 = _mm_loadu_si128((__m128i *)(s - 2 * pitch));
+  p0 = _mm_loadu_si128((__m128i *)(s - 1 * pitch));
+  q0 = _mm_loadu_si128((__m128i *)(s - 0 * pitch));
+  q1 = _mm_loadu_si128((__m128i *)(s + 1 * pitch));
+  q2 = _mm_loadu_si128((__m128i *)(s + 2 * pitch));
+  q3 = _mm_loadu_si128((__m128i *)(s + 3 * pitch));
   {
     const __m128i abs_p1p0 =
         _mm_or_si128(_mm_subs_epu8(p1, p0), _mm_subs_epu8(p0, p1));
     const __m128i abs_q1q0 =
         _mm_or_si128(_mm_subs_epu8(q1, q0), _mm_subs_epu8(q0, q1));
     const __m128i one = _mm_set1_epi8(1);
-    const __m128i fe = _mm_set1_epi8(0xfe);
+    const __m128i fe = _mm_set1_epi8((int8_t)0xfe);
     const __m128i ff = _mm_cmpeq_epi8(abs_p1p0, abs_p1p0);
     __m128i abs_p0q0 =
         _mm_or_si128(_mm_subs_epu8(p0, q0), _mm_subs_epu8(q0, p0));
@@ -1227,14 +1232,22 @@
 
     do {
       __m128i workp_a, workp_b, workp_shft;
-      p3 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 4 * p)), zero);
-      p2 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 3 * p)), zero);
-      p1 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 2 * p)), zero);
-      p0 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 1 * p)), zero);
-      q0 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 0 * p)), zero);
-      q1 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src + 1 * p)), zero);
-      q2 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src + 2 * p)), zero);
-      q3 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src + 3 * p)), zero);
+      p3 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 4 * pitch)),
+                             zero);
+      p2 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 3 * pitch)),
+                             zero);
+      p1 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 2 * pitch)),
+                             zero);
+      p0 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 1 * pitch)),
+                             zero);
+      q0 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src - 0 * pitch)),
+                             zero);
+      q1 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src + 1 * pitch)),
+                             zero);
+      q2 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src + 2 * pitch)),
+                             zero);
+      q3 = _mm_unpacklo_epi8(_mm_loadl_epi64((__m128i *)(src + 3 * pitch)),
+                             zero);
 
       workp_a = _mm_add_epi16(_mm_add_epi16(p3, p3), _mm_add_epi16(p2, p1));
       workp_a = _mm_add_epi16(_mm_add_epi16(workp_a, four), p0);
@@ -1279,20 +1292,20 @@
   {
     const __m128i t4 = _mm_set1_epi8(4);
     const __m128i t3 = _mm_set1_epi8(3);
-    const __m128i t80 = _mm_set1_epi8(0x80);
-    const __m128i te0 = _mm_set1_epi8(0xe0);
+    const __m128i t80 = _mm_set1_epi8((int8_t)0x80);
+    const __m128i te0 = _mm_set1_epi8((int8_t)0xe0);
     const __m128i t1f = _mm_set1_epi8(0x1f);
     const __m128i t1 = _mm_set1_epi8(0x1);
     const __m128i t7f = _mm_set1_epi8(0x7f);
 
     const __m128i ps1 =
-        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s - 2 * p)), t80);
+        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s - 2 * pitch)), t80);
     const __m128i ps0 =
-        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s - 1 * p)), t80);
+        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s - 1 * pitch)), t80);
     const __m128i qs0 =
-        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s + 0 * p)), t80);
+        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s + 0 * pitch)), t80);
     const __m128i qs1 =
-        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s + 1 * p)), t80);
+        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s + 1 * pitch)), t80);
     __m128i filt;
     __m128i work_a;
     __m128i filter1, filter2;
@@ -1344,7 +1357,7 @@
     q1 = _mm_and_si128(flat, q1);
     q1 = _mm_or_si128(work_a, q1);
 
-    work_a = _mm_loadu_si128((__m128i *)(s + 2 * p));
+    work_a = _mm_loadu_si128((__m128i *)(s + 2 * pitch));
     q2 = _mm_load_si128((__m128i *)flat_oq2);
     work_a = _mm_andnot_si128(flat, work_a);
     q2 = _mm_and_si128(flat, q2);
@@ -1362,49 +1375,49 @@
     p1 = _mm_and_si128(flat, p1);
     p1 = _mm_or_si128(work_a, p1);
 
-    work_a = _mm_loadu_si128((__m128i *)(s - 3 * p));
+    work_a = _mm_loadu_si128((__m128i *)(s - 3 * pitch));
     p2 = _mm_load_si128((__m128i *)flat_op2);
     work_a = _mm_andnot_si128(flat, work_a);
     p2 = _mm_and_si128(flat, p2);
     p2 = _mm_or_si128(work_a, p2);
 
-    _mm_storeu_si128((__m128i *)(s - 3 * p), p2);
-    _mm_storeu_si128((__m128i *)(s - 2 * p), p1);
-    _mm_storeu_si128((__m128i *)(s - 1 * p), p0);
-    _mm_storeu_si128((__m128i *)(s + 0 * p), q0);
-    _mm_storeu_si128((__m128i *)(s + 1 * p), q1);
-    _mm_storeu_si128((__m128i *)(s + 2 * p), q2);
+    _mm_storeu_si128((__m128i *)(s - 3 * pitch), p2);
+    _mm_storeu_si128((__m128i *)(s - 2 * pitch), p1);
+    _mm_storeu_si128((__m128i *)(s - 1 * pitch), p0);
+    _mm_storeu_si128((__m128i *)(s + 0 * pitch), q0);
+    _mm_storeu_si128((__m128i *)(s + 1 * pitch), q1);
+    _mm_storeu_si128((__m128i *)(s + 2 * pitch), q2);
   }
 }
 
-void vpx_lpf_horizontal_4_dual_sse2(unsigned char *s, int p,
-                                    const unsigned char *_blimit0,
-                                    const unsigned char *_limit0,
-                                    const unsigned char *_thresh0,
-                                    const unsigned char *_blimit1,
-                                    const unsigned char *_limit1,
-                                    const unsigned char *_thresh1) {
+void vpx_lpf_horizontal_4_dual_sse2(unsigned char *s, int pitch,
+                                    const unsigned char *blimit0,
+                                    const unsigned char *limit0,
+                                    const unsigned char *thresh0,
+                                    const unsigned char *blimit1,
+                                    const unsigned char *limit1,
+                                    const unsigned char *thresh1) {
   const __m128i blimit =
-      _mm_unpacklo_epi64(_mm_load_si128((const __m128i *)_blimit0),
-                         _mm_load_si128((const __m128i *)_blimit1));
+      _mm_unpacklo_epi64(_mm_load_si128((const __m128i *)blimit0),
+                         _mm_load_si128((const __m128i *)blimit1));
   const __m128i limit =
-      _mm_unpacklo_epi64(_mm_load_si128((const __m128i *)_limit0),
-                         _mm_load_si128((const __m128i *)_limit1));
+      _mm_unpacklo_epi64(_mm_load_si128((const __m128i *)limit0),
+                         _mm_load_si128((const __m128i *)limit1));
   const __m128i thresh =
-      _mm_unpacklo_epi64(_mm_load_si128((const __m128i *)_thresh0),
-                         _mm_load_si128((const __m128i *)_thresh1));
+      _mm_unpacklo_epi64(_mm_load_si128((const __m128i *)thresh0),
+                         _mm_load_si128((const __m128i *)thresh1));
   const __m128i zero = _mm_set1_epi16(0);
   __m128i p3, p2, p1, p0, q0, q1, q2, q3;
   __m128i mask, hev, flat;
 
-  p3 = _mm_loadu_si128((__m128i *)(s - 4 * p));
-  p2 = _mm_loadu_si128((__m128i *)(s - 3 * p));
-  p1 = _mm_loadu_si128((__m128i *)(s - 2 * p));
-  p0 = _mm_loadu_si128((__m128i *)(s - 1 * p));
-  q0 = _mm_loadu_si128((__m128i *)(s - 0 * p));
-  q1 = _mm_loadu_si128((__m128i *)(s + 1 * p));
-  q2 = _mm_loadu_si128((__m128i *)(s + 2 * p));
-  q3 = _mm_loadu_si128((__m128i *)(s + 3 * p));
+  p3 = _mm_loadu_si128((__m128i *)(s - 4 * pitch));
+  p2 = _mm_loadu_si128((__m128i *)(s - 3 * pitch));
+  p1 = _mm_loadu_si128((__m128i *)(s - 2 * pitch));
+  p0 = _mm_loadu_si128((__m128i *)(s - 1 * pitch));
+  q0 = _mm_loadu_si128((__m128i *)(s - 0 * pitch));
+  q1 = _mm_loadu_si128((__m128i *)(s + 1 * pitch));
+  q2 = _mm_loadu_si128((__m128i *)(s + 2 * pitch));
+  q3 = _mm_loadu_si128((__m128i *)(s + 3 * pitch));
 
   // filter_mask and hev_mask
   {
@@ -1412,7 +1425,7 @@
         _mm_or_si128(_mm_subs_epu8(p1, p0), _mm_subs_epu8(p0, p1));
     const __m128i abs_q1q0 =
         _mm_or_si128(_mm_subs_epu8(q1, q0), _mm_subs_epu8(q0, q1));
-    const __m128i fe = _mm_set1_epi8(0xfe);
+    const __m128i fe = _mm_set1_epi8((int8_t)0xfe);
     const __m128i ff = _mm_cmpeq_epi8(abs_p1p0, abs_p1p0);
     __m128i abs_p0q0 =
         _mm_or_si128(_mm_subs_epu8(p0, q0), _mm_subs_epu8(q0, p0));
@@ -1448,20 +1461,20 @@
   {
     const __m128i t4 = _mm_set1_epi8(4);
     const __m128i t3 = _mm_set1_epi8(3);
-    const __m128i t80 = _mm_set1_epi8(0x80);
-    const __m128i te0 = _mm_set1_epi8(0xe0);
+    const __m128i t80 = _mm_set1_epi8((int8_t)0x80);
+    const __m128i te0 = _mm_set1_epi8((int8_t)0xe0);
     const __m128i t1f = _mm_set1_epi8(0x1f);
     const __m128i t1 = _mm_set1_epi8(0x1);
     const __m128i t7f = _mm_set1_epi8(0x7f);
 
     const __m128i ps1 =
-        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s - 2 * p)), t80);
+        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s - 2 * pitch)), t80);
     const __m128i ps0 =
-        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s - 1 * p)), t80);
+        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s - 1 * pitch)), t80);
     const __m128i qs0 =
-        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s + 0 * p)), t80);
+        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s + 0 * pitch)), t80);
     const __m128i qs1 =
-        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s + 1 * p)), t80);
+        _mm_xor_si128(_mm_loadu_si128((__m128i *)(s + 1 * pitch)), t80);
     __m128i filt;
     __m128i work_a;
     __m128i filter1, filter2;
@@ -1506,10 +1519,10 @@
     p0 = _mm_xor_si128(_mm_adds_epi8(ps0, filter2), t80);
     p1 = _mm_xor_si128(_mm_adds_epi8(ps1, filt), t80);
 
-    _mm_storeu_si128((__m128i *)(s - 2 * p), p1);
-    _mm_storeu_si128((__m128i *)(s - 1 * p), p0);
-    _mm_storeu_si128((__m128i *)(s + 0 * p), q0);
-    _mm_storeu_si128((__m128i *)(s + 1 * p), q1);
+    _mm_storeu_si128((__m128i *)(s - 2 * pitch), p1);
+    _mm_storeu_si128((__m128i *)(s - 1 * pitch), p0);
+    _mm_storeu_si128((__m128i *)(s + 0 * pitch), q0);
+    _mm_storeu_si128((__m128i *)(s + 1 * pitch), q1);
   }
 }
 
@@ -1626,16 +1639,12 @@
     x5 = _mm_unpacklo_epi16(x2, x3);
     // 00 10 20 30 40 50 60 70 01 11 21 31 41 51 61 71
     x6 = _mm_unpacklo_epi32(x4, x5);
-    _mm_storel_pd((double *)(out + 0 * out_p),
-                  _mm_castsi128_pd(x6));  // 00 10 20 30 40 50 60 70
-    _mm_storeh_pd((double *)(out + 1 * out_p),
-                  _mm_castsi128_pd(x6));  // 01 11 21 31 41 51 61 71
+    mm_storelu(out + 0 * out_p, x6);  // 00 10 20 30 40 50 60 70
+    mm_storehu(out + 1 * out_p, x6);  // 01 11 21 31 41 51 61 71
     // 02 12 22 32 42 52 62 72 03 13 23 33 43 53 63 73
     x7 = _mm_unpackhi_epi32(x4, x5);
-    _mm_storel_pd((double *)(out + 2 * out_p),
-                  _mm_castsi128_pd(x7));  // 02 12 22 32 42 52 62 72
-    _mm_storeh_pd((double *)(out + 3 * out_p),
-                  _mm_castsi128_pd(x7));  // 03 13 23 33 43 53 63 73
+    mm_storelu(out + 2 * out_p, x7);  // 02 12 22 32 42 52 62 72
+    mm_storehu(out + 3 * out_p, x7);  // 03 13 23 33 43 53 63 73
 
     // 04 14 24 34 05 15 25 35 06 16 26 36 07 17 27 37
     x4 = _mm_unpackhi_epi16(x0, x1);
@@ -1643,21 +1652,17 @@
     x5 = _mm_unpackhi_epi16(x2, x3);
     // 04 14 24 34 44 54 64 74 05 15 25 35 45 55 65 75
     x6 = _mm_unpacklo_epi32(x4, x5);
-    _mm_storel_pd((double *)(out + 4 * out_p),
-                  _mm_castsi128_pd(x6));  // 04 14 24 34 44 54 64 74
-    _mm_storeh_pd((double *)(out + 5 * out_p),
-                  _mm_castsi128_pd(x6));  // 05 15 25 35 45 55 65 75
+    mm_storelu(out + 4 * out_p, x6);  // 04 14 24 34 44 54 64 74
+    mm_storehu(out + 5 * out_p, x6);  // 05 15 25 35 45 55 65 75
     // 06 16 26 36 46 56 66 76 07 17 27 37 47 57 67 77
     x7 = _mm_unpackhi_epi32(x4, x5);
 
-    _mm_storel_pd((double *)(out + 6 * out_p),
-                  _mm_castsi128_pd(x7));  // 06 16 26 36 46 56 66 76
-    _mm_storeh_pd((double *)(out + 7 * out_p),
-                  _mm_castsi128_pd(x7));  // 07 17 27 37 47 57 67 77
+    mm_storelu(out + 6 * out_p, x7);  // 06 16 26 36 46 56 66 76
+    mm_storehu(out + 7 * out_p, x7);  // 07 17 27 37 47 57 67 77
   } while (++idx8x8 < num_8x8_to_transpose);
 }
 
-void vpx_lpf_vertical_4_dual_sse2(uint8_t *s, int p, const uint8_t *blimit0,
+void vpx_lpf_vertical_4_dual_sse2(uint8_t *s, int pitch, const uint8_t *blimit0,
                                   const uint8_t *limit0, const uint8_t *thresh0,
                                   const uint8_t *blimit1, const uint8_t *limit1,
                                   const uint8_t *thresh1) {
@@ -1666,21 +1671,21 @@
   unsigned char *dst[2];
 
   // Transpose 8x16
-  transpose8x16(s - 4, s - 4 + p * 8, p, t_dst, 16);
+  transpose8x16(s - 4, s - 4 + pitch * 8, pitch, t_dst, 16);
 
   // Loop filtering
-  vpx_lpf_horizontal_4_dual_sse2(t_dst + 4 * 16, 16, blimit0, limit0, thresh0,
-                                 blimit1, limit1, thresh1);
+  vpx_lpf_horizontal_4_dual(t_dst + 4 * 16, 16, blimit0, limit0, thresh0,
+                            blimit1, limit1, thresh1);
   src[0] = t_dst;
   src[1] = t_dst + 8;
   dst[0] = s - 4;
-  dst[1] = s - 4 + p * 8;
+  dst[1] = s - 4 + pitch * 8;
 
   // Transpose back
-  transpose(src, 16, dst, p, 2);
+  transpose(src, 16, dst, pitch, 2);
 }
 
-void vpx_lpf_vertical_8_sse2(unsigned char *s, int p,
+void vpx_lpf_vertical_8_sse2(unsigned char *s, int pitch,
                              const unsigned char *blimit,
                              const unsigned char *limit,
                              const unsigned char *thresh) {
@@ -1692,19 +1697,19 @@
   src[0] = s - 4;
   dst[0] = t_dst;
 
-  transpose(src, p, dst, 8, 1);
+  transpose(src, pitch, dst, 8, 1);
 
   // Loop filtering
-  vpx_lpf_horizontal_8_sse2(t_dst + 4 * 8, 8, blimit, limit, thresh);
+  vpx_lpf_horizontal_8(t_dst + 4 * 8, 8, blimit, limit, thresh);
 
   src[0] = t_dst;
   dst[0] = s - 4;
 
   // Transpose back
-  transpose(src, 8, dst, p, 1);
+  transpose(src, 8, dst, pitch, 1);
 }
 
-void vpx_lpf_vertical_8_dual_sse2(uint8_t *s, int p, const uint8_t *blimit0,
+void vpx_lpf_vertical_8_dual_sse2(uint8_t *s, int pitch, const uint8_t *blimit0,
                                   const uint8_t *limit0, const uint8_t *thresh0,
                                   const uint8_t *blimit1, const uint8_t *limit1,
                                   const uint8_t *thresh1) {
@@ -1713,22 +1718,22 @@
   unsigned char *dst[2];
 
   // Transpose 8x16
-  transpose8x16(s - 4, s - 4 + p * 8, p, t_dst, 16);
+  transpose8x16(s - 4, s - 4 + pitch * 8, pitch, t_dst, 16);
 
   // Loop filtering
-  vpx_lpf_horizontal_8_dual_sse2(t_dst + 4 * 16, 16, blimit0, limit0, thresh0,
-                                 blimit1, limit1, thresh1);
+  vpx_lpf_horizontal_8_dual(t_dst + 4 * 16, 16, blimit0, limit0, thresh0,
+                            blimit1, limit1, thresh1);
   src[0] = t_dst;
   src[1] = t_dst + 8;
 
   dst[0] = s - 4;
-  dst[1] = s - 4 + p * 8;
+  dst[1] = s - 4 + pitch * 8;
 
   // Transpose back
-  transpose(src, 16, dst, p, 2);
+  transpose(src, 16, dst, pitch, 2);
 }
 
-void vpx_lpf_vertical_16_sse2(unsigned char *s, int p,
+void vpx_lpf_vertical_16_sse2(unsigned char *s, int pitch,
                               const unsigned char *blimit,
                               const unsigned char *limit,
                               const unsigned char *thresh) {
@@ -1742,10 +1747,10 @@
   dst[1] = t_dst + 8 * 8;
 
   // Transpose 16x8
-  transpose(src, p, dst, 8, 2);
+  transpose(src, pitch, dst, 8, 2);
 
   // Loop filtering
-  vpx_lpf_horizontal_16_sse2(t_dst + 8 * 8, 8, blimit, limit, thresh);
+  vpx_lpf_horizontal_16(t_dst + 8 * 8, 8, blimit, limit, thresh);
 
   src[0] = t_dst;
   src[1] = t_dst + 8 * 8;
@@ -1753,22 +1758,22 @@
   dst[1] = s;
 
   // Transpose back
-  transpose(src, 8, dst, p, 2);
+  transpose(src, 8, dst, pitch, 2);
 }
 
-void vpx_lpf_vertical_16_dual_sse2(unsigned char *s, int p,
+void vpx_lpf_vertical_16_dual_sse2(unsigned char *s, int pitch,
                                    const uint8_t *blimit, const uint8_t *limit,
                                    const uint8_t *thresh) {
   DECLARE_ALIGNED(16, unsigned char, t_dst[256]);
 
   // Transpose 16x16
-  transpose8x16(s - 8, s - 8 + 8 * p, p, t_dst, 16);
-  transpose8x16(s, s + 8 * p, p, t_dst + 8 * 16, 16);
+  transpose8x16(s - 8, s - 8 + 8 * pitch, pitch, t_dst, 16);
+  transpose8x16(s, s + 8 * pitch, pitch, t_dst + 8 * 16, 16);
 
   // Loop filtering
-  vpx_lpf_horizontal_16_dual_sse2(t_dst + 8 * 16, 16, blimit, limit, thresh);
+  vpx_lpf_horizontal_16_dual(t_dst + 8 * 16, 16, blimit, limit, thresh);
 
   // Transpose back
-  transpose8x16(t_dst, t_dst + 8 * 16, 16, s - 8, p);
-  transpose8x16(t_dst + 8, t_dst + 8 + 8 * 16, 16, s - 8 + 8 * p, p);
+  transpose8x16(t_dst, t_dst + 8 * 16, 16, s - 8, pitch);
+  transpose8x16(t_dst + 8, t_dst + 8 + 8 * 16, 16, s - 8 + 8 * pitch, pitch);
 }
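
A note on the vertical loop filters changed above: rather than maintaining separate column-wise SSE2 kernels, they transpose the pixels into a small contiguous scratch buffer, run the corresponding *horizontal* filter on it, and transpose back. The scalar sketch below illustrates that round-trip pattern only; the function name and the 8x8 size are illustrative, not part of the patch.

```c
#include <stdint.h>

/* Illustrative scalar 8x8 transpose (the patch uses SSE2 unpack
 * sequences instead). Copies src (row stride src_pitch) into dst
 * (row stride dst_pitch) with rows and columns swapped; applying
 * it twice restores the original block, which is why the vertical
 * filters can reuse the horizontal kernels on the transposed copy. */
static void transpose_8x8(const uint8_t *src, int src_pitch,
                          uint8_t *dst, int dst_pitch) {
  int r, c;
  for (r = 0; r < 8; ++r)
    for (c = 0; c < 8; ++c)
      dst[c * dst_pitch + r] = src[r * src_pitch + c];
}
```

This is why, e.g., `vpx_lpf_vertical_8_sse2` above is little more than `transpose`, a call to `vpx_lpf_horizontal_8` on `t_dst` with pitch 8, and a second transpose back out.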
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/mem_sse2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/mem_sse2.h
index 2ce738f..258ab38 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/mem_sse2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/mem_sse2.h
@@ -8,13 +8,43 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_X86_MEM_SSE2_H_
-#define VPX_DSP_X86_MEM_SSE2_H_
+#ifndef VPX_VPX_DSP_X86_MEM_SSE2_H_
+#define VPX_VPX_DSP_X86_MEM_SSE2_H_
 
 #include <emmintrin.h>  // SSE2
+#include <string.h>
 
 #include "./vpx_config.h"
 
+static INLINE void storeu_uint32(void *dst, uint32_t v) {
+  memcpy(dst, &v, sizeof(v));
+}
+
+static INLINE uint32_t loadu_uint32(const void *src) {
+  uint32_t v;
+  memcpy(&v, src, sizeof(v));
+  return v;
+}
+
+static INLINE __m128i load_unaligned_u32(const void *a) {
+  uint32_t val;
+  memcpy(&val, a, sizeof(val));
+  return _mm_cvtsi32_si128(val);
+}
+
+static INLINE void store_unaligned_u32(void *const a, const __m128i v) {
+  const uint32_t val = _mm_cvtsi128_si32(v);
+  memcpy(a, &val, sizeof(val));
+}
+
+#define mm_storelu(dst, v) memcpy((dst), (const char *)&(v), 8)
+#define mm_storehu(dst, v) memcpy((dst), (const char *)&(v) + 8, 8)
+
+static INLINE __m128i loadh_epi64(const __m128i s, const void *const src) {
+  return _mm_castps_si128(
+      _mm_loadh_pi(_mm_castsi128_ps(s), (const __m64 *)src));
+}
+
 static INLINE void load_8bit_4x4(const uint8_t *const s, const ptrdiff_t stride,
                                  __m128i *const d) {
   d[0] = _mm_cvtsi32_si128(*(const int *)(s + 0 * stride));
@@ -121,4 +151,4 @@
   _mm_storeu_si128((__m128i *)(d + 3 * stride), s[3]);
 }
 
-#endif  // VPX_DSP_X86_MEM_SSE2_H_
+#endif  // VPX_VPX_DSP_X86_MEM_SSE2_H_
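
The helpers added to `mem_sse2.h` above (`storeu_uint32`, `loadu_uint32`, `load_unaligned_u32`, `mm_storelu`/`mm_storehu`) all rely on the same idiom: a fixed-size `memcpy` instead of casting a possibly misaligned `uint8_t *` to a wider pointer. Modern compilers lower such a `memcpy` to a single unaligned move, and it avoids the alignment and strict-aliasing undefined behavior of the cast. A minimal standalone sketch of the idiom (hypothetical names, not the header's own functions):

```c
#include <stdint.h>
#include <string.h>

/* Read a 32-bit value from a possibly misaligned address.
 * The memcpy is the well-defined alternative to *(const uint32_t *)p. */
static uint32_t load_u32_unaligned(const void *p) {
  uint32_t v;
  memcpy(&v, p, sizeof(v));
  return v;
}

/* Write a 32-bit value to a possibly misaligned address. */
static void store_u32_unaligned(void *p, uint32_t v) {
  memcpy(p, &v, sizeof(v));
}
```

The `mm_storelu`/`mm_storehu` macros in the diff apply the same trick to the low and high 8 bytes of an `__m128i`, replacing the earlier `_mm_storel_pd`/`_mm_storeh_pd` casts through `double *`.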
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/post_proc_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/post_proc_sse2.c
new file mode 100644
index 0000000..d1029af
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/post_proc_sse2.c
@@ -0,0 +1,141 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <assert.h>
+#include <emmintrin.h>
+
+#include <stdio.h>
+
+#include "./vpx_dsp_rtcd.h"
+#include "vpx/vpx_integer.h"
+#include "vpx_dsp/x86/mem_sse2.h"
+
+extern const int16_t vpx_rv[];
+
+void vpx_mbpost_proc_down_sse2(unsigned char *dst, int pitch, int rows,
+                               int cols, int flimit) {
+  int col;
+  const __m128i zero = _mm_setzero_si128();
+  const __m128i f = _mm_set1_epi32(flimit);
+  DECLARE_ALIGNED(16, int16_t, above_context[8 * 8]);
+
+  // 8 columns are processed at a time.
+  // If rows is less than 8 the bottom border extension fails.
+  assert(cols % 8 == 0);
+  assert(rows >= 8);
+
+  for (col = 0; col < cols; col += 8) {
+    int row, i;
+    __m128i s = _mm_loadl_epi64((__m128i *)dst);
+    __m128i sum, sumsq_0, sumsq_1;
+    __m128i tmp_0, tmp_1;
+    __m128i below_context;
+
+    s = _mm_unpacklo_epi8(s, zero);
+
+    for (i = 0; i < 8; ++i) {
+      _mm_store_si128((__m128i *)above_context + i, s);
+    }
+
+    // sum *= 9
+    sum = _mm_slli_epi16(s, 3);
+    sum = _mm_add_epi16(s, sum);
+
+    // sum^2 * 9 == (sum * 9) * sum
+    tmp_0 = _mm_mullo_epi16(sum, s);
+    tmp_1 = _mm_mulhi_epi16(sum, s);
+
+    sumsq_0 = _mm_unpacklo_epi16(tmp_0, tmp_1);
+    sumsq_1 = _mm_unpackhi_epi16(tmp_0, tmp_1);
+
+    // Prime sum/sumsq
+    for (i = 1; i <= 6; ++i) {
+      __m128i a = _mm_loadl_epi64((__m128i *)(dst + i * pitch));
+      a = _mm_unpacklo_epi8(a, zero);
+      sum = _mm_add_epi16(sum, a);
+      a = _mm_mullo_epi16(a, a);
+      sumsq_0 = _mm_add_epi32(sumsq_0, _mm_unpacklo_epi16(a, zero));
+      sumsq_1 = _mm_add_epi32(sumsq_1, _mm_unpackhi_epi16(a, zero));
+    }
+
+    for (row = 0; row < rows + 8; row++) {
+      const __m128i above =
+          _mm_load_si128((__m128i *)above_context + (row & 7));
+      __m128i this_row = _mm_loadl_epi64((__m128i *)(dst + row * pitch));
+      __m128i above_sq, below_sq;
+      __m128i mask_0, mask_1;
+      __m128i multmp_0, multmp_1;
+      __m128i rv;
+      __m128i out;
+
+      this_row = _mm_unpacklo_epi8(this_row, zero);
+
+      if (row + 7 < rows) {
+        // Instead of copying the end context we just stop loading when we get
+        // to the last one.
+        below_context = _mm_loadl_epi64((__m128i *)(dst + (row + 7) * pitch));
+        below_context = _mm_unpacklo_epi8(below_context, zero);
+      }
+
+      sum = _mm_sub_epi16(sum, above);
+      sum = _mm_add_epi16(sum, below_context);
+
+      // context^2 fits in 16 bits. Don't need to mulhi and combine. Just zero
+      // extend. Unfortunately we can't do below_sq - above_sq in 16 bits
+      // because x86 does not have unpack with sign extension.
+      above_sq = _mm_mullo_epi16(above, above);
+      sumsq_0 = _mm_sub_epi32(sumsq_0, _mm_unpacklo_epi16(above_sq, zero));
+      sumsq_1 = _mm_sub_epi32(sumsq_1, _mm_unpackhi_epi16(above_sq, zero));
+
+      below_sq = _mm_mullo_epi16(below_context, below_context);
+      sumsq_0 = _mm_add_epi32(sumsq_0, _mm_unpacklo_epi16(below_sq, zero));
+      sumsq_1 = _mm_add_epi32(sumsq_1, _mm_unpackhi_epi16(below_sq, zero));
+
+      // sumsq * 16 - sumsq == sumsq * 15
+      mask_0 = _mm_slli_epi32(sumsq_0, 4);
+      mask_0 = _mm_sub_epi32(mask_0, sumsq_0);
+      mask_1 = _mm_slli_epi32(sumsq_1, 4);
+      mask_1 = _mm_sub_epi32(mask_1, sumsq_1);
+
+      multmp_0 = _mm_mullo_epi16(sum, sum);
+      multmp_1 = _mm_mulhi_epi16(sum, sum);
+
+      mask_0 = _mm_sub_epi32(mask_0, _mm_unpacklo_epi16(multmp_0, multmp_1));
+      mask_1 = _mm_sub_epi32(mask_1, _mm_unpackhi_epi16(multmp_0, multmp_1));
+
+      // mask - f gives a negative value when mask < f
+      mask_0 = _mm_sub_epi32(mask_0, f);
+      mask_1 = _mm_sub_epi32(mask_1, f);
+
+      // Shift the sign bit down to create a mask
+      mask_0 = _mm_srai_epi32(mask_0, 31);
+      mask_1 = _mm_srai_epi32(mask_1, 31);
+
+      mask_0 = _mm_packs_epi32(mask_0, mask_1);
+
+      rv = _mm_loadu_si128((__m128i const *)(vpx_rv + (row & 127)));
+
+      mask_1 = _mm_add_epi16(rv, sum);
+      mask_1 = _mm_add_epi16(mask_1, this_row);
+      mask_1 = _mm_srai_epi16(mask_1, 4);
+
+      mask_1 = _mm_and_si128(mask_0, mask_1);
+      mask_0 = _mm_andnot_si128(mask_0, this_row);
+      out = _mm_or_si128(mask_1, mask_0);
+
+      _mm_storel_epi64((__m128i *)(dst + row * pitch),
+                       _mm_packus_epi16(out, zero));
+
+      _mm_store_si128((__m128i *)above_context + ((row + 8) & 7), this_row);
+    }
+
+    dst += 8;
+  }
+}
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_avx.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_avx.c
index 6f44890..0a91d36 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_avx.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_avx.c
@@ -17,15 +17,16 @@
 #include "./vpx_dsp_rtcd.h"
 #include "vpx/vpx_integer.h"
 #include "vpx_dsp/x86/bitdepth_conversion_sse2.h"
-#include "vpx_dsp/x86/quantize_x86.h"
+#include "vpx_dsp/x86/quantize_sse2.h"
+#include "vpx_dsp/x86/quantize_ssse3.h"
 
 void vpx_quantize_b_avx(const tran_low_t *coeff_ptr, intptr_t n_coeffs,
                         int skip_block, const int16_t *zbin_ptr,
                         const int16_t *round_ptr, const int16_t *quant_ptr,
                         const int16_t *quant_shift_ptr, tran_low_t *qcoeff_ptr,
                         tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr,
-                        uint16_t *eob_ptr, const int16_t *scan_ptr,
-                        const int16_t *iscan_ptr) {
+                        uint16_t *eob_ptr, const int16_t *scan,
+                        const int16_t *iscan) {
   const __m128i zero = _mm_setzero_si128();
   const __m256i big_zero = _mm256_setzero_si256();
   int index;
@@ -37,7 +38,7 @@
   __m128i all_zero;
   __m128i eob = zero, eob0;
 
-  (void)scan_ptr;
+  (void)scan;
   (void)skip_block;
   assert(!skip_block);
 
@@ -90,15 +91,12 @@
     store_tran_low(qcoeff0, qcoeff_ptr);
     store_tran_low(qcoeff1, qcoeff_ptr + 8);
 
-    coeff0 = calculate_dqcoeff(qcoeff0, dequant);
+    calculate_dqcoeff_and_store(qcoeff0, dequant, dqcoeff_ptr);
     dequant = _mm_unpackhi_epi64(dequant, dequant);
-    coeff1 = calculate_dqcoeff(qcoeff1, dequant);
+    calculate_dqcoeff_and_store(qcoeff1, dequant, dqcoeff_ptr + 8);
 
-    store_tran_low(coeff0, dqcoeff_ptr);
-    store_tran_low(coeff1, dqcoeff_ptr + 8);
-
-    eob = scan_for_eob(&coeff0, &coeff1, cmp_mask0, cmp_mask1, iscan_ptr, 0,
-                       zero);
+    eob =
+        scan_for_eob(&qcoeff0, &qcoeff1, cmp_mask0, cmp_mask1, iscan, 0, zero);
   }
 
   // AC only loop.
@@ -135,26 +133,25 @@
     store_tran_low(qcoeff0, qcoeff_ptr + index);
     store_tran_low(qcoeff1, qcoeff_ptr + index + 8);
 
-    coeff0 = calculate_dqcoeff(qcoeff0, dequant);
-    coeff1 = calculate_dqcoeff(qcoeff1, dequant);
+    calculate_dqcoeff_and_store(qcoeff0, dequant, dqcoeff_ptr + index);
+    calculate_dqcoeff_and_store(qcoeff1, dequant, dqcoeff_ptr + index + 8);
 
-    store_tran_low(coeff0, dqcoeff_ptr + index);
-    store_tran_low(coeff1, dqcoeff_ptr + index + 8);
-
-    eob0 = scan_for_eob(&coeff0, &coeff1, cmp_mask0, cmp_mask1, iscan_ptr,
-                        index, zero);
+    eob0 = scan_for_eob(&qcoeff0, &qcoeff1, cmp_mask0, cmp_mask1, iscan, index,
+                        zero);
     eob = _mm_max_epi16(eob, eob0);
   }
 
   *eob_ptr = accumulate_eob(eob);
 }
 
-void vpx_quantize_b_32x32_avx(
-    const tran_low_t *coeff_ptr, intptr_t n_coeffs, int skip_block,
-    const int16_t *zbin_ptr, const int16_t *round_ptr, const int16_t *quant_ptr,
-    const int16_t *quant_shift_ptr, tran_low_t *qcoeff_ptr,
-    tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr, uint16_t *eob_ptr,
-    const int16_t *scan_ptr, const int16_t *iscan_ptr) {
+void vpx_quantize_b_32x32_avx(const tran_low_t *coeff_ptr, intptr_t n_coeffs,
+                              int skip_block, const int16_t *zbin_ptr,
+                              const int16_t *round_ptr,
+                              const int16_t *quant_ptr,
+                              const int16_t *quant_shift_ptr,
+                              tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr,
+                              const int16_t *dequant_ptr, uint16_t *eob_ptr,
+                              const int16_t *scan, const int16_t *iscan) {
   const __m128i zero = _mm_setzero_si128();
   const __m128i one = _mm_set1_epi16(1);
   const __m256i big_zero = _mm256_setzero_si256();
@@ -167,7 +164,7 @@
   __m128i all_zero;
   __m128i eob = zero, eob0;
 
-  (void)scan_ptr;
+  (void)scan;
   (void)n_coeffs;
   (void)skip_block;
   assert(!skip_block);
@@ -233,28 +230,12 @@
     store_tran_low(qcoeff0, qcoeff_ptr);
     store_tran_low(qcoeff1, qcoeff_ptr + 8);
 
-    // Un-sign to bias rounding like C.
-    // dequant is almost always negative, so this is probably the backwards way
-    // to handle the sign. However, it matches the previous assembly.
-    coeff0 = _mm_abs_epi16(qcoeff0);
-    coeff1 = _mm_abs_epi16(qcoeff1);
-
-    coeff0 = calculate_dqcoeff(coeff0, dequant);
+    calculate_dqcoeff_and_store_32x32(qcoeff0, dequant, zero, dqcoeff_ptr);
     dequant = _mm_unpackhi_epi64(dequant, dequant);
-    coeff1 = calculate_dqcoeff(coeff1, dequant);
+    calculate_dqcoeff_and_store_32x32(qcoeff1, dequant, zero, dqcoeff_ptr + 8);
 
-    // "Divide" by 2.
-    coeff0 = _mm_srli_epi16(coeff0, 1);
-    coeff1 = _mm_srli_epi16(coeff1, 1);
-
-    coeff0 = _mm_sign_epi16(coeff0, qcoeff0);
-    coeff1 = _mm_sign_epi16(coeff1, qcoeff1);
-
-    store_tran_low(coeff0, dqcoeff_ptr);
-    store_tran_low(coeff1, dqcoeff_ptr + 8);
-
-    eob = scan_for_eob(&coeff0, &coeff1, cmp_mask0, cmp_mask1, iscan_ptr, 0,
-                       zero);
+    eob =
+        scan_for_eob(&qcoeff0, &qcoeff1, cmp_mask0, cmp_mask1, iscan, 0, zero);
   }
 
   // AC only loop.
@@ -291,23 +272,13 @@
     store_tran_low(qcoeff0, qcoeff_ptr + index);
     store_tran_low(qcoeff1, qcoeff_ptr + index + 8);
 
-    coeff0 = _mm_abs_epi16(qcoeff0);
-    coeff1 = _mm_abs_epi16(qcoeff1);
+    calculate_dqcoeff_and_store_32x32(qcoeff0, dequant, zero,
+                                      dqcoeff_ptr + index);
+    calculate_dqcoeff_and_store_32x32(qcoeff1, dequant, zero,
+                                      dqcoeff_ptr + index + 8);
 
-    coeff0 = calculate_dqcoeff(coeff0, dequant);
-    coeff1 = calculate_dqcoeff(coeff1, dequant);
-
-    coeff0 = _mm_srli_epi16(coeff0, 1);
-    coeff1 = _mm_srli_epi16(coeff1, 1);
-
-    coeff0 = _mm_sign_epi16(coeff0, qcoeff0);
-    coeff1 = _mm_sign_epi16(coeff1, qcoeff1);
-
-    store_tran_low(coeff0, dqcoeff_ptr + index);
-    store_tran_low(coeff1, dqcoeff_ptr + index + 8);
-
-    eob0 = scan_for_eob(&coeff0, &coeff1, cmp_mask0, cmp_mask1, iscan_ptr,
-                        index, zero);
+    eob0 = scan_for_eob(&qcoeff0, &qcoeff1, cmp_mask0, cmp_mask1, iscan, index,
+                        zero);
     eob = _mm_max_epi16(eob, eob0);
   }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_sse2.c
index c020b39..e38a405 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_sse2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_sse2.c
@@ -15,15 +15,15 @@
 #include "./vpx_dsp_rtcd.h"
 #include "vpx/vpx_integer.h"
 #include "vpx_dsp/x86/bitdepth_conversion_sse2.h"
-#include "vpx_dsp/x86/quantize_x86.h"
+#include "vpx_dsp/x86/quantize_sse2.h"
 
 void vpx_quantize_b_sse2(const tran_low_t *coeff_ptr, intptr_t n_coeffs,
                          int skip_block, const int16_t *zbin_ptr,
                          const int16_t *round_ptr, const int16_t *quant_ptr,
                          const int16_t *quant_shift_ptr, tran_low_t *qcoeff_ptr,
                          tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr,
-                         uint16_t *eob_ptr, const int16_t *scan_ptr,
-                         const int16_t *iscan_ptr) {
+                         uint16_t *eob_ptr, const int16_t *scan,
+                         const int16_t *iscan) {
   const __m128i zero = _mm_setzero_si128();
   int index = 16;
 
@@ -33,7 +33,7 @@
   __m128i cmp_mask0, cmp_mask1;
   __m128i eob, eob0;
 
-  (void)scan_ptr;
+  (void)scan;
   (void)skip_block;
   assert(!skip_block);
 
@@ -74,15 +74,11 @@
   store_tran_low(qcoeff0, qcoeff_ptr);
   store_tran_low(qcoeff1, qcoeff_ptr + 8);
 
-  coeff0 = calculate_dqcoeff(qcoeff0, dequant);
+  calculate_dqcoeff_and_store(qcoeff0, dequant, dqcoeff_ptr);
   dequant = _mm_unpackhi_epi64(dequant, dequant);
-  coeff1 = calculate_dqcoeff(qcoeff1, dequant);
+  calculate_dqcoeff_and_store(qcoeff1, dequant, dqcoeff_ptr + 8);
 
-  store_tran_low(coeff0, dqcoeff_ptr);
-  store_tran_low(coeff1, dqcoeff_ptr + 8);
-
-  eob =
-      scan_for_eob(&coeff0, &coeff1, cmp_mask0, cmp_mask1, iscan_ptr, 0, zero);
+  eob = scan_for_eob(&qcoeff0, &qcoeff1, cmp_mask0, cmp_mask1, iscan, 0, zero);
 
   // AC only loop.
   while (index < n_coeffs) {
@@ -109,14 +105,11 @@
     store_tran_low(qcoeff0, qcoeff_ptr + index);
     store_tran_low(qcoeff1, qcoeff_ptr + index + 8);
 
-    coeff0 = calculate_dqcoeff(qcoeff0, dequant);
-    coeff1 = calculate_dqcoeff(qcoeff1, dequant);
+    calculate_dqcoeff_and_store(qcoeff0, dequant, dqcoeff_ptr + index);
+    calculate_dqcoeff_and_store(qcoeff1, dequant, dqcoeff_ptr + index + 8);
 
-    store_tran_low(coeff0, dqcoeff_ptr + index);
-    store_tran_low(coeff1, dqcoeff_ptr + index + 8);
-
-    eob0 = scan_for_eob(&coeff0, &coeff1, cmp_mask0, cmp_mask1, iscan_ptr,
-                        index, zero);
+    eob0 = scan_for_eob(&qcoeff0, &qcoeff1, cmp_mask0, cmp_mask1, iscan, index,
+                        zero);
     eob = _mm_max_epi16(eob, eob0);
 
     index += 16;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_x86.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_sse2.h
similarity index 70%
rename from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_x86.h
rename to Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_sse2.h
index 0e07a2a..afe2f92 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_x86.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_sse2.h
@@ -8,6 +8,9 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#ifndef VPX_VPX_DSP_X86_QUANTIZE_SSE2_H_
+#define VPX_VPX_DSP_X86_QUANTIZE_SSE2_H_
+
 #include <emmintrin.h>
 
 #include "./vpx_config.h"
@@ -41,21 +44,35 @@
   *coeff = _mm_mulhi_epi16(qcoeff, shift);
 }
 
-static INLINE __m128i calculate_dqcoeff(__m128i qcoeff, __m128i dequant) {
-  return _mm_mullo_epi16(qcoeff, dequant);
+static INLINE void calculate_dqcoeff_and_store(__m128i qcoeff, __m128i dequant,
+                                               tran_low_t *dqcoeff) {
+#if CONFIG_VP9_HIGHBITDEPTH
+  const __m128i low = _mm_mullo_epi16(qcoeff, dequant);
+  const __m128i high = _mm_mulhi_epi16(qcoeff, dequant);
+
+  const __m128i dqcoeff32_0 = _mm_unpacklo_epi16(low, high);
+  const __m128i dqcoeff32_1 = _mm_unpackhi_epi16(low, high);
+
+  _mm_store_si128((__m128i *)(dqcoeff), dqcoeff32_0);
+  _mm_store_si128((__m128i *)(dqcoeff + 4), dqcoeff32_1);
+#else
+  const __m128i dqcoeff16 = _mm_mullo_epi16(qcoeff, dequant);
+
+  _mm_store_si128((__m128i *)(dqcoeff), dqcoeff16);
+#endif  // CONFIG_VP9_HIGHBITDEPTH
 }
 
-// Scan 16 values for eob reference in scan_ptr. Use masks (-1) from comparing
-// to zbin to add 1 to the index in 'scan'.
+// Scan 16 values for eob reference in scan. Use masks (-1) from comparing to
+// zbin to add 1 to the index in 'scan'.
 static INLINE __m128i scan_for_eob(__m128i *coeff0, __m128i *coeff1,
                                    const __m128i zbin_mask0,
                                    const __m128i zbin_mask1,
-                                   const int16_t *scan_ptr, const int index,
+                                   const int16_t *scan, const int index,
                                    const __m128i zero) {
   const __m128i zero_coeff0 = _mm_cmpeq_epi16(*coeff0, zero);
   const __m128i zero_coeff1 = _mm_cmpeq_epi16(*coeff1, zero);
-  __m128i scan0 = _mm_load_si128((const __m128i *)(scan_ptr + index));
-  __m128i scan1 = _mm_load_si128((const __m128i *)(scan_ptr + index + 8));
+  __m128i scan0 = _mm_load_si128((const __m128i *)(scan + index));
+  __m128i scan1 = _mm_load_si128((const __m128i *)(scan + index + 8));
   __m128i eob0, eob1;
   // Add one to convert from indices to counts
   scan0 = _mm_sub_epi16(scan0, zbin_mask0);
@@ -75,3 +92,5 @@
   eob = _mm_max_epi16(eob, eob_shuffled);
   return _mm_extract_epi16(eob, 1);
 }
+
+#endif  // VPX_VPX_DSP_X86_QUANTIZE_SSE2_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_ssse3.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_ssse3.c
index 3f528e1..fc1d919 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_ssse3.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_ssse3.c
@@ -14,7 +14,8 @@
 #include "./vpx_dsp_rtcd.h"
 #include "vpx/vpx_integer.h"
 #include "vpx_dsp/x86/bitdepth_conversion_sse2.h"
-#include "vpx_dsp/x86/quantize_x86.h"
+#include "vpx_dsp/x86/quantize_sse2.h"
+#include "vpx_dsp/x86/quantize_ssse3.h"
 
 void vpx_quantize_b_ssse3(const tran_low_t *coeff_ptr, intptr_t n_coeffs,
                           int skip_block, const int16_t *zbin_ptr,
@@ -22,7 +23,7 @@
                           const int16_t *quant_shift_ptr,
                           tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr,
                           const int16_t *dequant_ptr, uint16_t *eob_ptr,
-                          const int16_t *scan_ptr, const int16_t *iscan_ptr) {
+                          const int16_t *scan, const int16_t *iscan) {
   const __m128i zero = _mm_setzero_si128();
   int index = 16;
 
@@ -32,7 +33,7 @@
   __m128i cmp_mask0, cmp_mask1;
   __m128i eob, eob0;
 
-  (void)scan_ptr;
+  (void)scan;
   (void)skip_block;
   assert(!skip_block);
 
@@ -67,15 +68,11 @@
   store_tran_low(qcoeff0, qcoeff_ptr);
   store_tran_low(qcoeff1, qcoeff_ptr + 8);
 
-  coeff0 = calculate_dqcoeff(qcoeff0, dequant);
+  calculate_dqcoeff_and_store(qcoeff0, dequant, dqcoeff_ptr);
   dequant = _mm_unpackhi_epi64(dequant, dequant);
-  coeff1 = calculate_dqcoeff(qcoeff1, dequant);
+  calculate_dqcoeff_and_store(qcoeff1, dequant, dqcoeff_ptr + 8);
 
-  store_tran_low(coeff0, dqcoeff_ptr);
-  store_tran_low(coeff1, dqcoeff_ptr + 8);
-
-  eob =
-      scan_for_eob(&coeff0, &coeff1, cmp_mask0, cmp_mask1, iscan_ptr, 0, zero);
+  eob = scan_for_eob(&qcoeff0, &qcoeff1, cmp_mask0, cmp_mask1, iscan, 0, zero);
 
   // AC only loop.
   while (index < n_coeffs) {
@@ -100,14 +97,11 @@
     store_tran_low(qcoeff0, qcoeff_ptr + index);
     store_tran_low(qcoeff1, qcoeff_ptr + index + 8);
 
-    coeff0 = calculate_dqcoeff(qcoeff0, dequant);
-    coeff1 = calculate_dqcoeff(qcoeff1, dequant);
+    calculate_dqcoeff_and_store(qcoeff0, dequant, dqcoeff_ptr + index);
+    calculate_dqcoeff_and_store(qcoeff1, dequant, dqcoeff_ptr + index + 8);
 
-    store_tran_low(coeff0, dqcoeff_ptr + index);
-    store_tran_low(coeff1, dqcoeff_ptr + index + 8);
-
-    eob0 = scan_for_eob(&coeff0, &coeff1, cmp_mask0, cmp_mask1, iscan_ptr,
-                        index, zero);
+    eob0 = scan_for_eob(&qcoeff0, &qcoeff1, cmp_mask0, cmp_mask1, iscan, index,
+                        zero);
     eob = _mm_max_epi16(eob, eob0);
 
     index += 16;
@@ -116,12 +110,14 @@
   *eob_ptr = accumulate_eob(eob);
 }
 
-void vpx_quantize_b_32x32_ssse3(
-    const tran_low_t *coeff_ptr, intptr_t n_coeffs, int skip_block,
-    const int16_t *zbin_ptr, const int16_t *round_ptr, const int16_t *quant_ptr,
-    const int16_t *quant_shift_ptr, tran_low_t *qcoeff_ptr,
-    tran_low_t *dqcoeff_ptr, const int16_t *dequant_ptr, uint16_t *eob_ptr,
-    const int16_t *scan_ptr, const int16_t *iscan_ptr) {
+void vpx_quantize_b_32x32_ssse3(const tran_low_t *coeff_ptr, intptr_t n_coeffs,
+                                int skip_block, const int16_t *zbin_ptr,
+                                const int16_t *round_ptr,
+                                const int16_t *quant_ptr,
+                                const int16_t *quant_shift_ptr,
+                                tran_low_t *qcoeff_ptr, tran_low_t *dqcoeff_ptr,
+                                const int16_t *dequant_ptr, uint16_t *eob_ptr,
+                                const int16_t *scan, const int16_t *iscan) {
   const __m128i zero = _mm_setzero_si128();
   const __m128i one = _mm_set1_epi16(1);
   int index;
@@ -133,7 +129,7 @@
   __m128i all_zero;
   __m128i eob = zero, eob0;
 
-  (void)scan_ptr;
+  (void)scan;
   (void)n_coeffs;
   (void)skip_block;
   assert(!skip_block);
@@ -206,28 +202,12 @@
     store_tran_low(qcoeff0, qcoeff_ptr);
     store_tran_low(qcoeff1, qcoeff_ptr + 8);
 
-    // Un-sign to bias rounding like C.
-    // dequant is almost always negative, so this is probably the backwards way
-    // to handle the sign. However, it matches the previous assembly.
-    coeff0 = _mm_abs_epi16(qcoeff0);
-    coeff1 = _mm_abs_epi16(qcoeff1);
-
-    coeff0 = calculate_dqcoeff(coeff0, dequant);
+    calculate_dqcoeff_and_store_32x32(qcoeff0, dequant, zero, dqcoeff_ptr);
     dequant = _mm_unpackhi_epi64(dequant, dequant);
-    coeff1 = calculate_dqcoeff(coeff1, dequant);
+    calculate_dqcoeff_and_store_32x32(qcoeff1, dequant, zero, dqcoeff_ptr + 8);
 
-    // "Divide" by 2.
-    coeff0 = _mm_srli_epi16(coeff0, 1);
-    coeff1 = _mm_srli_epi16(coeff1, 1);
-
-    coeff0 = _mm_sign_epi16(coeff0, qcoeff0);
-    coeff1 = _mm_sign_epi16(coeff1, qcoeff1);
-
-    store_tran_low(coeff0, dqcoeff_ptr);
-    store_tran_low(coeff1, dqcoeff_ptr + 8);
-
-    eob = scan_for_eob(&coeff0, &coeff1, cmp_mask0, cmp_mask1, iscan_ptr, 0,
-                       zero);
+    eob =
+        scan_for_eob(&qcoeff0, &qcoeff1, cmp_mask0, cmp_mask1, iscan, 0, zero);
   }
 
   // AC only loop.
@@ -268,23 +248,13 @@
     store_tran_low(qcoeff0, qcoeff_ptr + index);
     store_tran_low(qcoeff1, qcoeff_ptr + index + 8);
 
-    coeff0 = _mm_abs_epi16(qcoeff0);
-    coeff1 = _mm_abs_epi16(qcoeff1);
+    calculate_dqcoeff_and_store_32x32(qcoeff0, dequant, zero,
+                                      dqcoeff_ptr + index);
+    calculate_dqcoeff_and_store_32x32(qcoeff1, dequant, zero,
+                                      dqcoeff_ptr + 8 + index);
 
-    coeff0 = calculate_dqcoeff(coeff0, dequant);
-    coeff1 = calculate_dqcoeff(coeff1, dequant);
-
-    coeff0 = _mm_srli_epi16(coeff0, 1);
-    coeff1 = _mm_srli_epi16(coeff1, 1);
-
-    coeff0 = _mm_sign_epi16(coeff0, qcoeff0);
-    coeff1 = _mm_sign_epi16(coeff1, qcoeff1);
-
-    store_tran_low(coeff0, dqcoeff_ptr + index);
-    store_tran_low(coeff1, dqcoeff_ptr + index + 8);
-
-    eob0 = scan_for_eob(&coeff0, &coeff1, cmp_mask0, cmp_mask1, iscan_ptr,
-                        index, zero);
+    eob0 = scan_for_eob(&qcoeff0, &qcoeff1, cmp_mask0, cmp_mask1, iscan, index,
+                        zero);
     eob = _mm_max_epi16(eob, eob0);
   }
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_ssse3.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_ssse3.h
new file mode 100644
index 0000000..e8d2a05
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/quantize_ssse3.h
@@ -0,0 +1,51 @@
+/*
+ *  Copyright (c) 2017 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VPX_DSP_X86_QUANTIZE_SSSE3_H_
+#define VPX_VPX_DSP_X86_QUANTIZE_SSSE3_H_
+
+#include <emmintrin.h>
+
+#include "./vpx_config.h"
+#include "vpx/vpx_integer.h"
+#include "vpx_dsp/x86/quantize_sse2.h"
+
+static INLINE void calculate_dqcoeff_and_store_32x32(const __m128i qcoeff,
+                                                     const __m128i dequant,
+                                                     const __m128i zero,
+                                                     tran_low_t *dqcoeff) {
+  // Un-sign to bias rounding like C.
+  const __m128i coeff = _mm_abs_epi16(qcoeff);
+
+  const __m128i sign_0 = _mm_unpacklo_epi16(zero, qcoeff);
+  const __m128i sign_1 = _mm_unpackhi_epi16(zero, qcoeff);
+
+  const __m128i low = _mm_mullo_epi16(coeff, dequant);
+  const __m128i high = _mm_mulhi_epi16(coeff, dequant);
+  __m128i dqcoeff32_0 = _mm_unpacklo_epi16(low, high);
+  __m128i dqcoeff32_1 = _mm_unpackhi_epi16(low, high);
+
+  // "Divide" by 2.
+  dqcoeff32_0 = _mm_srli_epi32(dqcoeff32_0, 1);
+  dqcoeff32_1 = _mm_srli_epi32(dqcoeff32_1, 1);
+
+  dqcoeff32_0 = _mm_sign_epi32(dqcoeff32_0, sign_0);
+  dqcoeff32_1 = _mm_sign_epi32(dqcoeff32_1, sign_1);
+
+#if CONFIG_VP9_HIGHBITDEPTH
+  _mm_store_si128((__m128i *)(dqcoeff), dqcoeff32_0);
+  _mm_store_si128((__m128i *)(dqcoeff + 4), dqcoeff32_1);
+#else
+  _mm_store_si128((__m128i *)(dqcoeff),
+                  _mm_packs_epi32(dqcoeff32_0, dqcoeff32_1));
+#endif  // CONFIG_VP9_HIGHBITDEPTH
+}
+
+#endif  // VPX_VPX_DSP_X86_QUANTIZE_SSSE3_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sad4d_avx2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sad4d_avx2.c
index 962b8fb..a5c4f8c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sad4d_avx2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sad4d_avx2.c
@@ -11,154 +11,177 @@
 #include "./vpx_dsp_rtcd.h"
 #include "vpx/vpx_integer.h"
 
-void vpx_sad32x32x4d_avx2(const uint8_t *src, int src_stride,
-                          const uint8_t *const ref[4], int ref_stride,
-                          uint32_t res[4]) {
-  __m256i src_reg, ref0_reg, ref1_reg, ref2_reg, ref3_reg;
-  __m256i sum_ref0, sum_ref1, sum_ref2, sum_ref3;
-  __m256i sum_mlow, sum_mhigh;
-  int i;
-  const uint8_t *ref0, *ref1, *ref2, *ref3;
-
-  ref0 = ref[0];
-  ref1 = ref[1];
-  ref2 = ref[2];
-  ref3 = ref[3];
-  sum_ref0 = _mm256_set1_epi16(0);
-  sum_ref1 = _mm256_set1_epi16(0);
-  sum_ref2 = _mm256_set1_epi16(0);
-  sum_ref3 = _mm256_set1_epi16(0);
-  for (i = 0; i < 32; i++) {
-    // load src and all refs
-    src_reg = _mm256_loadu_si256((const __m256i *)src);
-    ref0_reg = _mm256_loadu_si256((const __m256i *)ref0);
-    ref1_reg = _mm256_loadu_si256((const __m256i *)ref1);
-    ref2_reg = _mm256_loadu_si256((const __m256i *)ref2);
-    ref3_reg = _mm256_loadu_si256((const __m256i *)ref3);
-    // sum of the absolute differences between every ref-i to src
-    ref0_reg = _mm256_sad_epu8(ref0_reg, src_reg);
-    ref1_reg = _mm256_sad_epu8(ref1_reg, src_reg);
-    ref2_reg = _mm256_sad_epu8(ref2_reg, src_reg);
-    ref3_reg = _mm256_sad_epu8(ref3_reg, src_reg);
-    // sum every ref-i
-    sum_ref0 = _mm256_add_epi32(sum_ref0, ref0_reg);
-    sum_ref1 = _mm256_add_epi32(sum_ref1, ref1_reg);
-    sum_ref2 = _mm256_add_epi32(sum_ref2, ref2_reg);
-    sum_ref3 = _mm256_add_epi32(sum_ref3, ref3_reg);
-
-    src += src_stride;
-    ref0 += ref_stride;
-    ref1 += ref_stride;
-    ref2 += ref_stride;
-    ref3 += ref_stride;
-  }
-  {
-    __m128i sum;
-    // in sum_ref-i the result is saved in the first 4 bytes
-    // the other 4 bytes are zeroed.
-    // sum_ref1 and sum_ref3 are shifted left by 4 bytes
-    sum_ref1 = _mm256_slli_si256(sum_ref1, 4);
-    sum_ref3 = _mm256_slli_si256(sum_ref3, 4);
-
-    // merge sum_ref0 and sum_ref1 also sum_ref2 and sum_ref3
-    sum_ref0 = _mm256_or_si256(sum_ref0, sum_ref1);
-    sum_ref2 = _mm256_or_si256(sum_ref2, sum_ref3);
-
-    // merge every 64 bit from each sum_ref-i
-    sum_mlow = _mm256_unpacklo_epi64(sum_ref0, sum_ref2);
-    sum_mhigh = _mm256_unpackhi_epi64(sum_ref0, sum_ref2);
-
-    // add the low 64 bit to the high 64 bit
-    sum_mlow = _mm256_add_epi32(sum_mlow, sum_mhigh);
-
-    // add the low 128 bit to the high 128 bit
-    sum = _mm_add_epi32(_mm256_castsi256_si128(sum_mlow),
-                        _mm256_extractf128_si256(sum_mlow, 1));
-
-    _mm_storeu_si128((__m128i *)(res), sum);
-  }
+static INLINE void calc_final_4(const __m256i *const sums /*[4]*/,
+                                uint32_t *sad_array) {
+  const __m256i t0 = _mm256_hadd_epi32(sums[0], sums[1]);
+  const __m256i t1 = _mm256_hadd_epi32(sums[2], sums[3]);
+  const __m256i t2 = _mm256_hadd_epi32(t0, t1);
+  const __m128i sum = _mm_add_epi32(_mm256_castsi256_si128(t2),
+                                    _mm256_extractf128_si256(t2, 1));
+  _mm_storeu_si128((__m128i *)sad_array, sum);
 }
 
-void vpx_sad64x64x4d_avx2(const uint8_t *src, int src_stride,
-                          const uint8_t *const ref[4], int ref_stride,
-                          uint32_t res[4]) {
-  __m256i src_reg, srcnext_reg, ref0_reg, ref0next_reg;
-  __m256i ref1_reg, ref1next_reg, ref2_reg, ref2next_reg;
-  __m256i ref3_reg, ref3next_reg;
-  __m256i sum_ref0, sum_ref1, sum_ref2, sum_ref3;
-  __m256i sum_mlow, sum_mhigh;
+void vpx_sad32x32x4d_avx2(const uint8_t *src_ptr, int src_stride,
+                          const uint8_t *const ref_array[4], int ref_stride,
+                          uint32_t sad_array[4]) {
   int i;
-  const uint8_t *ref0, *ref1, *ref2, *ref3;
+  const uint8_t *refs[4];
+  __m256i sums[4];
 
-  ref0 = ref[0];
-  ref1 = ref[1];
-  ref2 = ref[2];
-  ref3 = ref[3];
-  sum_ref0 = _mm256_set1_epi16(0);
-  sum_ref1 = _mm256_set1_epi16(0);
-  sum_ref2 = _mm256_set1_epi16(0);
-  sum_ref3 = _mm256_set1_epi16(0);
+  refs[0] = ref_array[0];
+  refs[1] = ref_array[1];
+  refs[2] = ref_array[2];
+  refs[3] = ref_array[3];
+  sums[0] = _mm256_setzero_si256();
+  sums[1] = _mm256_setzero_si256();
+  sums[2] = _mm256_setzero_si256();
+  sums[3] = _mm256_setzero_si256();
+
+  for (i = 0; i < 32; i++) {
+    __m256i r[4];
+
+    // load src and all ref[]
+    const __m256i s = _mm256_load_si256((const __m256i *)src_ptr);
+    r[0] = _mm256_loadu_si256((const __m256i *)refs[0]);
+    r[1] = _mm256_loadu_si256((const __m256i *)refs[1]);
+    r[2] = _mm256_loadu_si256((const __m256i *)refs[2]);
+    r[3] = _mm256_loadu_si256((const __m256i *)refs[3]);
+
+    // sum of the absolute differences between every ref[] to src
+    r[0] = _mm256_sad_epu8(r[0], s);
+    r[1] = _mm256_sad_epu8(r[1], s);
+    r[2] = _mm256_sad_epu8(r[2], s);
+    r[3] = _mm256_sad_epu8(r[3], s);
+
+    // sum every ref[]
+    sums[0] = _mm256_add_epi32(sums[0], r[0]);
+    sums[1] = _mm256_add_epi32(sums[1], r[1]);
+    sums[2] = _mm256_add_epi32(sums[2], r[2]);
+    sums[3] = _mm256_add_epi32(sums[3], r[3]);
+
+    src_ptr += src_stride;
+    refs[0] += ref_stride;
+    refs[1] += ref_stride;
+    refs[2] += ref_stride;
+    refs[3] += ref_stride;
+  }
+
+  calc_final_4(sums, sad_array);
+}
+
+void vpx_sad32x32x8_avx2(const uint8_t *src_ptr, int src_stride,
+                         const uint8_t *ref_ptr, int ref_stride,
+                         uint32_t *sad_array) {
+  int i;
+  __m256i sums[8];
+
+  sums[0] = _mm256_setzero_si256();
+  sums[1] = _mm256_setzero_si256();
+  sums[2] = _mm256_setzero_si256();
+  sums[3] = _mm256_setzero_si256();
+  sums[4] = _mm256_setzero_si256();
+  sums[5] = _mm256_setzero_si256();
+  sums[6] = _mm256_setzero_si256();
+  sums[7] = _mm256_setzero_si256();
+
+  for (i = 0; i < 32; i++) {
+    __m256i r[8];
+
+    // load src and all ref[]
+    const __m256i s = _mm256_load_si256((const __m256i *)src_ptr);
+    r[0] = _mm256_loadu_si256((const __m256i *)&ref_ptr[0]);
+    r[1] = _mm256_loadu_si256((const __m256i *)&ref_ptr[1]);
+    r[2] = _mm256_loadu_si256((const __m256i *)&ref_ptr[2]);
+    r[3] = _mm256_loadu_si256((const __m256i *)&ref_ptr[3]);
+    r[4] = _mm256_loadu_si256((const __m256i *)&ref_ptr[4]);
+    r[5] = _mm256_loadu_si256((const __m256i *)&ref_ptr[5]);
+    r[6] = _mm256_loadu_si256((const __m256i *)&ref_ptr[6]);
+    r[7] = _mm256_loadu_si256((const __m256i *)&ref_ptr[7]);
+
+    // sum of the absolute differences between every ref[] to src
+    r[0] = _mm256_sad_epu8(r[0], s);
+    r[1] = _mm256_sad_epu8(r[1], s);
+    r[2] = _mm256_sad_epu8(r[2], s);
+    r[3] = _mm256_sad_epu8(r[3], s);
+    r[4] = _mm256_sad_epu8(r[4], s);
+    r[5] = _mm256_sad_epu8(r[5], s);
+    r[6] = _mm256_sad_epu8(r[6], s);
+    r[7] = _mm256_sad_epu8(r[7], s);
+
+    // sum every ref[]
+    sums[0] = _mm256_add_epi32(sums[0], r[0]);
+    sums[1] = _mm256_add_epi32(sums[1], r[1]);
+    sums[2] = _mm256_add_epi32(sums[2], r[2]);
+    sums[3] = _mm256_add_epi32(sums[3], r[3]);
+    sums[4] = _mm256_add_epi32(sums[4], r[4]);
+    sums[5] = _mm256_add_epi32(sums[5], r[5]);
+    sums[6] = _mm256_add_epi32(sums[6], r[6]);
+    sums[7] = _mm256_add_epi32(sums[7], r[7]);
+
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
+  }
+
+  calc_final_4(sums, sad_array);
+  calc_final_4(sums + 4, sad_array + 4);
+}
+
+void vpx_sad64x64x4d_avx2(const uint8_t *src_ptr, int src_stride,
+                          const uint8_t *const ref_array[4], int ref_stride,
+                          uint32_t sad_array[4]) {
+  __m256i sums[4];
+  int i;
+  const uint8_t *refs[4];
+
+  refs[0] = ref_array[0];
+  refs[1] = ref_array[1];
+  refs[2] = ref_array[2];
+  refs[3] = ref_array[3];
+  sums[0] = _mm256_setzero_si256();
+  sums[1] = _mm256_setzero_si256();
+  sums[2] = _mm256_setzero_si256();
+  sums[3] = _mm256_setzero_si256();
+
   for (i = 0; i < 64; i++) {
-    // load 64 bytes from src and all refs
-    src_reg = _mm256_loadu_si256((const __m256i *)src);
-    srcnext_reg = _mm256_loadu_si256((const __m256i *)(src + 32));
-    ref0_reg = _mm256_loadu_si256((const __m256i *)ref0);
-    ref0next_reg = _mm256_loadu_si256((const __m256i *)(ref0 + 32));
-    ref1_reg = _mm256_loadu_si256((const __m256i *)ref1);
-    ref1next_reg = _mm256_loadu_si256((const __m256i *)(ref1 + 32));
-    ref2_reg = _mm256_loadu_si256((const __m256i *)ref2);
-    ref2next_reg = _mm256_loadu_si256((const __m256i *)(ref2 + 32));
-    ref3_reg = _mm256_loadu_si256((const __m256i *)ref3);
-    ref3next_reg = _mm256_loadu_si256((const __m256i *)(ref3 + 32));
-    // sum of the absolute differences between every ref-i to src
-    ref0_reg = _mm256_sad_epu8(ref0_reg, src_reg);
-    ref1_reg = _mm256_sad_epu8(ref1_reg, src_reg);
-    ref2_reg = _mm256_sad_epu8(ref2_reg, src_reg);
-    ref3_reg = _mm256_sad_epu8(ref3_reg, src_reg);
-    ref0next_reg = _mm256_sad_epu8(ref0next_reg, srcnext_reg);
-    ref1next_reg = _mm256_sad_epu8(ref1next_reg, srcnext_reg);
-    ref2next_reg = _mm256_sad_epu8(ref2next_reg, srcnext_reg);
-    ref3next_reg = _mm256_sad_epu8(ref3next_reg, srcnext_reg);
+    __m256i r_lo[4], r_hi[4];
+    // load 64 bytes from src and all ref[]
+    const __m256i s_lo = _mm256_load_si256((const __m256i *)src_ptr);
+    const __m256i s_hi = _mm256_load_si256((const __m256i *)(src_ptr + 32));
+    r_lo[0] = _mm256_loadu_si256((const __m256i *)refs[0]);
+    r_hi[0] = _mm256_loadu_si256((const __m256i *)(refs[0] + 32));
+    r_lo[1] = _mm256_loadu_si256((const __m256i *)refs[1]);
+    r_hi[1] = _mm256_loadu_si256((const __m256i *)(refs[1] + 32));
+    r_lo[2] = _mm256_loadu_si256((const __m256i *)refs[2]);
+    r_hi[2] = _mm256_loadu_si256((const __m256i *)(refs[2] + 32));
+    r_lo[3] = _mm256_loadu_si256((const __m256i *)refs[3]);
+    r_hi[3] = _mm256_loadu_si256((const __m256i *)(refs[3] + 32));
 
-    // sum every ref-i
-    sum_ref0 = _mm256_add_epi32(sum_ref0, ref0_reg);
-    sum_ref1 = _mm256_add_epi32(sum_ref1, ref1_reg);
-    sum_ref2 = _mm256_add_epi32(sum_ref2, ref2_reg);
-    sum_ref3 = _mm256_add_epi32(sum_ref3, ref3_reg);
-    sum_ref0 = _mm256_add_epi32(sum_ref0, ref0next_reg);
-    sum_ref1 = _mm256_add_epi32(sum_ref1, ref1next_reg);
-    sum_ref2 = _mm256_add_epi32(sum_ref2, ref2next_reg);
-    sum_ref3 = _mm256_add_epi32(sum_ref3, ref3next_reg);
-    src += src_stride;
-    ref0 += ref_stride;
-    ref1 += ref_stride;
-    ref2 += ref_stride;
-    ref3 += ref_stride;
+    // sum of the absolute differences between each ref[] and src
+    r_lo[0] = _mm256_sad_epu8(r_lo[0], s_lo);
+    r_lo[1] = _mm256_sad_epu8(r_lo[1], s_lo);
+    r_lo[2] = _mm256_sad_epu8(r_lo[2], s_lo);
+    r_lo[3] = _mm256_sad_epu8(r_lo[3], s_lo);
+    r_hi[0] = _mm256_sad_epu8(r_hi[0], s_hi);
+    r_hi[1] = _mm256_sad_epu8(r_hi[1], s_hi);
+    r_hi[2] = _mm256_sad_epu8(r_hi[2], s_hi);
+    r_hi[3] = _mm256_sad_epu8(r_hi[3], s_hi);
+
+    // sum every ref[]
+    sums[0] = _mm256_add_epi32(sums[0], r_lo[0]);
+    sums[1] = _mm256_add_epi32(sums[1], r_lo[1]);
+    sums[2] = _mm256_add_epi32(sums[2], r_lo[2]);
+    sums[3] = _mm256_add_epi32(sums[3], r_lo[3]);
+    sums[0] = _mm256_add_epi32(sums[0], r_hi[0]);
+    sums[1] = _mm256_add_epi32(sums[1], r_hi[1]);
+    sums[2] = _mm256_add_epi32(sums[2], r_hi[2]);
+    sums[3] = _mm256_add_epi32(sums[3], r_hi[3]);
+
+    src_ptr += src_stride;
+    refs[0] += ref_stride;
+    refs[1] += ref_stride;
+    refs[2] += ref_stride;
+    refs[3] += ref_stride;
   }
-  {
-    __m128i sum;
 
-    // in sum_ref-i the result is saved in the first 4 bytes
-    // the other 4 bytes are zeroed.
-    // sum_ref1 and sum_ref3 are shifted left by 4 bytes
-    sum_ref1 = _mm256_slli_si256(sum_ref1, 4);
-    sum_ref3 = _mm256_slli_si256(sum_ref3, 4);
-
-    // merge sum_ref0 and sum_ref1 also sum_ref2 and sum_ref3
-    sum_ref0 = _mm256_or_si256(sum_ref0, sum_ref1);
-    sum_ref2 = _mm256_or_si256(sum_ref2, sum_ref3);
-
-    // merge every 64 bit from each sum_ref-i
-    sum_mlow = _mm256_unpacklo_epi64(sum_ref0, sum_ref2);
-    sum_mhigh = _mm256_unpackhi_epi64(sum_ref0, sum_ref2);
-
-    // add the low 64 bit to the high 64 bit
-    sum_mlow = _mm256_add_epi32(sum_mlow, sum_mhigh);
-
-    // add the low 128 bit to the high 128 bit
-    sum = _mm_add_epi32(_mm256_castsi256_si128(sum_mlow),
-                        _mm256_extractf128_si256(sum_mlow, 1));
-
-    _mm_storeu_si128((__m128i *)(res), sum);
-  }
+  calc_final_4(sums, sad_array);
 }
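For reference, the x4d kernels in the hunk above all compute the same thing: one SAD per candidate reference block against a shared source block. A plain-C model of that contract (the function name is hypothetical, not part of the patch; the AVX2 code keeps one vector accumulator per ref and folds them with calc_final_4() at the end):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Plain-C model of the 32x32 x4d kernel: for each of four candidate
 * reference blocks, accumulate the sum of absolute byte differences
 * against the same 32x32 source block. */
void sad32x32x4d_ref(const uint8_t *src, int src_stride,
                     const uint8_t *const ref[4], int ref_stride,
                     uint32_t sad[4]) {
  int k, r, c;
  for (k = 0; k < 4; k++) {
    uint32_t sum = 0;
    const uint8_t *s = src;
    const uint8_t *p = ref[k];
    for (r = 0; r < 32; r++) {
      for (c = 0; c < 32; c++)
        sum += (uint32_t)(s[c] > p[c] ? s[c] - p[c] : p[c] - s[c]);
      s += src_stride;
      p += ref_stride;
    }
    sad[k] = sum;
  }
}
```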
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sad4d_avx512.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sad4d_avx512.c
index 5f2ab6e..4c5d704 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sad4d_avx512.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sad4d_avx512.c
@@ -11,8 +11,8 @@
 #include "./vpx_dsp_rtcd.h"
 #include "vpx/vpx_integer.h"
 
-void vpx_sad64x64x4d_avx512(const uint8_t *src, int src_stride,
-                            const uint8_t *const ref[4], int ref_stride,
+void vpx_sad64x64x4d_avx512(const uint8_t *src_ptr, int src_stride,
+                            const uint8_t *const ref_array[4], int ref_stride,
                             uint32_t res[4]) {
   __m512i src_reg, ref0_reg, ref1_reg, ref2_reg, ref3_reg;
   __m512i sum_ref0, sum_ref1, sum_ref2, sum_ref3;
@@ -20,33 +20,33 @@
   int i;
   const uint8_t *ref0, *ref1, *ref2, *ref3;
 
-  ref0 = ref[0];
-  ref1 = ref[1];
-  ref2 = ref[2];
-  ref3 = ref[3];
+  ref0 = ref_array[0];
+  ref1 = ref_array[1];
+  ref2 = ref_array[2];
+  ref3 = ref_array[3];
   sum_ref0 = _mm512_set1_epi16(0);
   sum_ref1 = _mm512_set1_epi16(0);
   sum_ref2 = _mm512_set1_epi16(0);
   sum_ref3 = _mm512_set1_epi16(0);
   for (i = 0; i < 64; i++) {
-    // load src and all refs
-    src_reg = _mm512_loadu_si512((const __m512i *)src);
+    // load src and all ref[]
+    src_reg = _mm512_loadu_si512((const __m512i *)src_ptr);
     ref0_reg = _mm512_loadu_si512((const __m512i *)ref0);
     ref1_reg = _mm512_loadu_si512((const __m512i *)ref1);
     ref2_reg = _mm512_loadu_si512((const __m512i *)ref2);
     ref3_reg = _mm512_loadu_si512((const __m512i *)ref3);
-    // sum of the absolute differences between every ref-i to src
+    // sum of the absolute differences between each ref[] and src
     ref0_reg = _mm512_sad_epu8(ref0_reg, src_reg);
     ref1_reg = _mm512_sad_epu8(ref1_reg, src_reg);
     ref2_reg = _mm512_sad_epu8(ref2_reg, src_reg);
     ref3_reg = _mm512_sad_epu8(ref3_reg, src_reg);
-    // sum every ref-i
+    // sum every ref[]
     sum_ref0 = _mm512_add_epi32(sum_ref0, ref0_reg);
     sum_ref1 = _mm512_add_epi32(sum_ref1, ref1_reg);
     sum_ref2 = _mm512_add_epi32(sum_ref2, ref2_reg);
     sum_ref3 = _mm512_add_epi32(sum_ref3, ref3_reg);
 
-    src += src_stride;
+    src_ptr += src_stride;
     ref0 += ref_stride;
     ref1 += ref_stride;
     ref2 += ref_stride;
@@ -55,7 +55,7 @@
   {
     __m256i sum256;
     __m128i sum128;
-    // in sum_ref-i the result is saved in the first 4 bytes
+    // in sum_ref[] the result is saved in the first 4 bytes
     // the other 4 bytes are zeroed.
     // sum_ref1 and sum_ref3 are shifted left by 4 bytes
     sum_ref1 = _mm512_bslli_epi128(sum_ref1, 4);
@@ -65,7 +65,7 @@
     sum_ref0 = _mm512_or_si512(sum_ref0, sum_ref1);
     sum_ref2 = _mm512_or_si512(sum_ref2, sum_ref3);
 
-    // merge every 64 bit from each sum_ref-i
+    // merge every 64 bit from each sum_ref[]
     sum_mlow = _mm512_unpacklo_epi64(sum_ref0, sum_ref2);
     sum_mhigh = _mm512_unpackhi_epi64(sum_ref0, sum_ref2);
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sad_sse2.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sad_sse2.asm
index 1ec906c..e4e1bc3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sad_sse2.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sad_sse2.asm
@@ -25,11 +25,11 @@
 cglobal sad%1x%2_avg, 5, 1 + %3, 5, src, src_stride, ref, ref_stride, \
                                     second_pred, n_rows
 %else ; %3 == 7
-cglobal sad%1x%2_avg, 5, ARCH_X86_64 + %3, 6, src, src_stride, \
+cglobal sad%1x%2_avg, 5, VPX_ARCH_X86_64 + %3, 6, src, src_stride, \
                                               ref, ref_stride, \
                                               second_pred, \
                                               src_stride3, ref_stride3
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
 %define n_rowsd r7d
 %else ; x86-32
 %define n_rowsd dword r0m
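The sad%1x%2_avg entry points touched above (only for the ARCH_X86_64 to VPX_ARCH_X86_64 rename) first average ref with second_pred using pavgb's round-half-up rule, then take the SAD against src. A hedged scalar model, assuming the usual libvpx convention that second_pred is a contiguous w-by-h block (the function name is illustrative, not part of the patch):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Scalar model of sadWxH_avg: ref is averaged with second_pred using
 * pavgb semantics ((a + b + 1) >> 1), then the SAD is taken between
 * src and the averaged block. */
uint32_t sad_avg_model(int w, int h, const uint8_t *src, int src_stride,
                       const uint8_t *ref, int ref_stride,
                       const uint8_t *second_pred) {
  uint32_t sad = 0;
  int r, c;
  for (r = 0; r < h; r++) {
    for (c = 0; c < w; c++) {
      const int avg = (ref[c] + second_pred[c] + 1) >> 1;
      const int diff = src[c] - avg;
      sad += (uint32_t)(diff < 0 ? -diff : diff);
    }
    src += src_stride;
    ref += ref_stride;
    second_pred += w; /* contiguous w x h prediction block */
  }
  return sad;
}
```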
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/subpel_variance_sse2.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/subpel_variance_sse2.asm
index d938c1d..d1d8d34 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/subpel_variance_sse2.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/subpel_variance_sse2.asm
@@ -41,12 +41,12 @@
 
 ; int vpx_sub_pixel_varianceNxh(const uint8_t *src, ptrdiff_t src_stride,
 ;                               int x_offset, int y_offset,
-;                               const uint8_t *dst, ptrdiff_t dst_stride,
+;                               const uint8_t *ref, ptrdiff_t ref_stride,
 ;                               int height, unsigned int *sse);
 ;
 ; This function returns the SE and stores SSE in the given pointer.
 
-%macro SUM_SSE 6 ; src1, dst1, src2, dst2, sum, sse
+%macro SUM_SSE 6 ; src1, ref1, src2, ref2, sum, sse
   psubw                %3, %4
   psubw                %1, %2
   paddw                %5, %3
@@ -95,7 +95,7 @@
 %endmacro
 
 %macro INC_SRC_BY_SRC_STRIDE  0
-%if ARCH_X86=1 && CONFIG_PIC=1
+%if VPX_ARCH_X86=1 && CONFIG_PIC=1
   add                srcq, src_stridemp
 %else
   add                srcq, src_strideq
@@ -114,15 +114,15 @@
 ; 11, not 13, if the registers are ordered correctly. May make a minor speed
 ; difference on Win64
 
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
   %if %2 == 1 ; avg
     cglobal sub_pixel_avg_variance%1xh, 9, 10, 13, src, src_stride, \
-                                        x_offset, y_offset, dst, dst_stride, \
-                                        sec, sec_stride, height, sse
-    %define sec_str sec_strideq
+                                        x_offset, y_offset, ref, ref_stride, \
+                                        second_pred, second_stride, height, sse
+    %define second_str second_strideq
   %else
     cglobal sub_pixel_variance%1xh, 7, 8, 13, src, src_stride, \
-                                    x_offset, y_offset, dst, dst_stride, \
+                                    x_offset, y_offset, ref, ref_stride, \
                                     height, sse
   %endif
   %define block_height heightd
@@ -131,56 +131,45 @@
   %if CONFIG_PIC=1
     %if %2 == 1 ; avg
       cglobal sub_pixel_avg_variance%1xh, 7, 7, 13, src, src_stride, \
-                                          x_offset, y_offset, dst, dst_stride, \
-                                          sec, sec_stride, height, sse, \
-                                          g_bilin_filter, g_pw_8
+                                          x_offset, y_offset, ref, ref_stride, \
+                                          second_pred, second_stride, height, sse
       %define block_height dword heightm
-      %define sec_str sec_stridemp
-
-      ;Store bilin_filter and pw_8 location in stack
-      %if GET_GOT_DEFINED == 1
-        GET_GOT eax
-        add esp, 4                ; restore esp
-      %endif
-
-      lea ecx, [GLOBAL(bilin_filter_m)]
-      mov g_bilin_filterm, ecx
-
-      lea ecx, [GLOBAL(pw_8)]
-      mov g_pw_8m, ecx
-
-      LOAD_IF_USED 0, 1         ; load eax, ecx back
+      %define second_str second_stridemp
     %else
       cglobal sub_pixel_variance%1xh, 7, 7, 13, src, src_stride, \
-                                      x_offset, y_offset, dst, dst_stride, \
-                                      height, sse, g_bilin_filter, g_pw_8
+                                      x_offset, y_offset, ref, ref_stride, \
+                                      height, sse
       %define block_height heightd
-
-      ;Store bilin_filter and pw_8 location in stack
-      %if GET_GOT_DEFINED == 1
-        GET_GOT eax
-        add esp, 4                ; restore esp
-      %endif
-
-      lea ecx, [GLOBAL(bilin_filter_m)]
-      mov g_bilin_filterm, ecx
-
-      lea ecx, [GLOBAL(pw_8)]
-      mov g_pw_8m, ecx
-
-      LOAD_IF_USED 0, 1         ; load eax, ecx back
     %endif
+
+    ; reuse argument stack space
+    %define g_bilin_filterm x_offsetm
+    %define g_pw_8m y_offsetm
+
+    ;Store bilin_filter and pw_8 location in stack
+    %if GET_GOT_DEFINED == 1
+      GET_GOT eax
+      add esp, 4                ; restore esp
+    %endif
+
+    lea ecx, [GLOBAL(bilin_filter_m)]
+    mov g_bilin_filterm, ecx
+
+    lea ecx, [GLOBAL(pw_8)]
+    mov g_pw_8m, ecx
+
+    LOAD_IF_USED 0, 1         ; load eax, ecx back
   %else
     %if %2 == 1 ; avg
       cglobal sub_pixel_avg_variance%1xh, 7, 7, 13, src, src_stride, \
                                           x_offset, y_offset, \
-                                          dst, dst_stride, sec, sec_stride, \
+                                          ref, ref_stride, second_pred, second_stride, \
                                           height, sse
       %define block_height dword heightm
-      %define sec_str sec_stridemp
+      %define second_str second_stridemp
     %else
       cglobal sub_pixel_variance%1xh, 7, 7, 13, src, src_stride, \
-                                      x_offset, y_offset, dst, dst_stride, \
+                                      x_offset, y_offset, ref, ref_stride, \
                                       height, sse
       %define block_height heightd
     %endif
@@ -203,7 +192,7 @@
 %if %1 < 16
   sar                   block_height, 1
 %if %2 == 1 ; avg
-  shl             sec_str, 1
+  shl             second_str, 1
 %endif
 %endif
 
@@ -218,9 +207,9 @@
 .x_zero_y_zero_loop:
 %if %1 == 16
   movu                 m0, [srcq]
-  mova                 m1, [dstq]
+  mova                 m1, [refq]
 %if %2 == 1 ; avg
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
   punpckhbw            m3, m1, m5
   punpcklbw            m1, m5
 %endif
@@ -234,7 +223,7 @@
   SUM_SSE              m0, m1, m2, m3, m6, m7
 
   add                srcq, src_strideq
-  add                dstq, dst_strideq
+  add                refq, ref_strideq
 %else ; %1 < 16
   movx                 m0, [srcq]
 %if %2 == 1 ; avg
@@ -248,14 +237,14 @@
   movx                 m2, [srcq+src_strideq]
 %endif
 
-  movx                 m1, [dstq]
-  movx                 m3, [dstq+dst_strideq]
+  movx                 m1, [refq]
+  movx                 m3, [refq+ref_strideq]
 
 %if %2 == 1 ; avg
 %if %1 > 4
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
 %else
-  movh                 m2, [secq]
+  movh                 m2, [second_predq]
   pavgb                m0, m2
 %endif
   punpcklbw            m3, m5
@@ -276,10 +265,10 @@
   SUM_SSE              m0, m1, m2, m3, m6, m7
 
   lea                srcq, [srcq+src_strideq*2]
-  lea                dstq, [dstq+dst_strideq*2]
+  lea                refq, [refq+ref_strideq*2]
 %endif
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
   dec                   block_height
   jg .x_zero_y_zero_loop
@@ -294,11 +283,11 @@
 %if %1 == 16
   movu                 m0, [srcq]
   movu                 m4, [srcq+src_strideq]
-  mova                 m1, [dstq]
+  mova                 m1, [refq]
   pavgb                m0, m4
   punpckhbw            m3, m1, m5
 %if %2 == 1 ; avg
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
 %endif
   punpcklbw            m1, m5
   punpckhbw            m2, m0, m5
@@ -306,7 +295,7 @@
   SUM_SSE              m0, m1, m2, m3, m6, m7
 
   add                srcq, src_strideq
-  add                dstq, dst_strideq
+  add                refq, ref_strideq
 %else ; %1 < 16
   movx                 m0, [srcq]
   movx                 m2, [srcq+src_strideq]
@@ -317,22 +306,22 @@
   movx                 m1, [srcq+src_strideq*2]
   punpckldq            m2, m1
 %endif
-  movx                 m1, [dstq]
+  movx                 m1, [refq]
 %if %1 > 4
   movlhps              m0, m2
 %else ; 4xh
   punpckldq            m0, m2
 %endif
-  movx                 m3, [dstq+dst_strideq]
+  movx                 m3, [refq+ref_strideq]
   pavgb                m0, m2
   punpcklbw            m1, m5
 %if %1 > 4
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
   punpcklbw            m3, m5
   punpckhbw            m2, m0, m5
   punpcklbw            m0, m5
 %else ; 4xh
-  movh                 m4, [secq]
+  movh                 m4, [second_predq]
   pavgb                m0, m4
   punpcklbw            m3, m5
   punpcklbw            m0, m5
@@ -340,9 +329,9 @@
 %endif
 %else ; !avg
   movx                 m4, [srcq+src_strideq*2]
-  movx                 m1, [dstq]
+  movx                 m1, [refq]
   pavgb                m0, m2
-  movx                 m3, [dstq+dst_strideq]
+  movx                 m3, [refq+ref_strideq]
   pavgb                m2, m4
   punpcklbw            m0, m5
   punpcklbw            m2, m5
@@ -352,10 +341,10 @@
   SUM_SSE              m0, m1, m2, m3, m6, m7
 
   lea                srcq, [srcq+src_strideq*2]
-  lea                dstq, [dstq+dst_strideq*2]
+  lea                refq, [refq+ref_strideq*2]
 %endif
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
   dec                   block_height
   jg .x_zero_y_half_loop
@@ -363,11 +352,11 @@
 
 .x_zero_y_nonhalf:
   ; x_offset == 0 && y_offset == bilin interpolation
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
   lea        bilin_filter, [GLOBAL(bilin_filter_m)]
 %endif
   shl           y_offsetd, filter_idx_shift
-%if ARCH_X86_64 && %1 > 4
+%if VPX_ARCH_X86_64 && %1 > 4
   mova                 m8, [bilin_filter+y_offsetq]
 %if notcpuflag(ssse3) ; FIXME(rbultje) don't scatter registers on x86-64
   mova                 m9, [bilin_filter+y_offsetq+16]
@@ -377,7 +366,7 @@
 %define filter_y_b m9
 %define filter_rnd m10
 %else ; x86-32 or mmx
-%if ARCH_X86=1 && CONFIG_PIC=1
+%if VPX_ARCH_X86=1 && CONFIG_PIC=1
 ; x_offset == 0, reuse x_offset reg
 %define tempq x_offsetq
   add y_offsetq, g_bilin_filterm
@@ -397,7 +386,7 @@
 %if %1 == 16
   movu                 m0, [srcq]
   movu                 m4, [srcq+src_strideq]
-  mova                 m1, [dstq]
+  mova                 m1, [refq]
 %if cpuflag(ssse3)
   punpckhbw            m2, m0, m4
   punpcklbw            m0, m4
@@ -429,7 +418,7 @@
 %if %2 == 1 ; avg
   ; FIXME(rbultje) pipeline
   packuswb             m0, m2
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
   punpckhbw            m2, m0, m5
   punpcklbw            m0, m5
 %endif
@@ -438,14 +427,14 @@
   SUM_SSE              m0, m1, m2, m3, m6, m7
 
   add                srcq, src_strideq
-  add                dstq, dst_strideq
+  add                refq, ref_strideq
 %else ; %1 < 16
   movx                 m0, [srcq]
   movx                 m2, [srcq+src_strideq]
   movx                 m4, [srcq+src_strideq*2]
-  movx                 m3, [dstq+dst_strideq]
+  movx                 m3, [refq+ref_strideq]
 %if cpuflag(ssse3)
-  movx                 m1, [dstq]
+  movx                 m1, [refq]
   punpcklbw            m0, m2
   punpcklbw            m2, m4
   pmaddubsw            m0, filter_y_a
@@ -465,7 +454,7 @@
   pmullw               m4, filter_y_b
   paddw                m0, m1
   paddw                m2, filter_rnd
-  movx                 m1, [dstq]
+  movx                 m1, [refq]
   paddw                m2, m4
 %endif
   psraw                m0, 4
@@ -477,11 +466,11 @@
 %endif
   packuswb             m0, m2
 %if %1 > 4
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
   punpckhbw            m2, m0, m5
   punpcklbw            m0, m5
 %else ; 4xh
-  movh                 m2, [secq]
+  movh                 m2, [second_predq]
   pavgb                m0, m2
   punpcklbw            m0, m5
   movhlps              m2, m0
@@ -491,10 +480,10 @@
   SUM_SSE              m0, m1, m2, m3, m6, m7
 
   lea                srcq, [srcq+src_strideq*2]
-  lea                dstq, [dstq+dst_strideq*2]
+  lea                refq, [refq+ref_strideq*2]
 %endif
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
   dec                   block_height
   jg .x_zero_y_other_loop
@@ -515,11 +504,11 @@
 %if %1 == 16
   movu                 m0, [srcq]
   movu                 m4, [srcq+1]
-  mova                 m1, [dstq]
+  mova                 m1, [refq]
   pavgb                m0, m4
   punpckhbw            m3, m1, m5
 %if %2 == 1 ; avg
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
 %endif
   punpcklbw            m1, m5
   punpckhbw            m2, m0, m5
@@ -527,7 +516,7 @@
   SUM_SSE              m0, m1, m2, m3, m6, m7
 
   add                srcq, src_strideq
-  add                dstq, dst_strideq
+  add                refq, ref_strideq
 %else ; %1 < 16
   movx                 m0, [srcq]
   movx                 m4, [srcq+1]
@@ -541,17 +530,17 @@
   movx                 m2, [srcq+src_strideq+1]
   punpckldq            m4, m2
 %endif
-  movx                 m1, [dstq]
-  movx                 m3, [dstq+dst_strideq]
+  movx                 m1, [refq]
+  movx                 m3, [refq+ref_strideq]
   pavgb                m0, m4
   punpcklbw            m3, m5
 %if %1 > 4
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
   punpcklbw            m1, m5
   punpckhbw            m2, m0, m5
   punpcklbw            m0, m5
 %else ; 4xh
-  movh                 m2, [secq]
+  movh                 m2, [second_predq]
   pavgb                m0, m2
   punpcklbw            m1, m5
   punpcklbw            m0, m5
@@ -559,10 +548,10 @@
 %endif
 %else ; !avg
   movx                 m2, [srcq+src_strideq]
-  movx                 m1, [dstq]
+  movx                 m1, [refq]
   pavgb                m0, m4
   movx                 m4, [srcq+src_strideq+1]
-  movx                 m3, [dstq+dst_strideq]
+  movx                 m3, [refq+ref_strideq]
   pavgb                m2, m4
   punpcklbw            m0, m5
   punpcklbw            m2, m5
@@ -572,10 +561,10 @@
   SUM_SSE              m0, m1, m2, m3, m6, m7
 
   lea                srcq, [srcq+src_strideq*2]
-  lea                dstq, [dstq+dst_strideq*2]
+  lea                refq, [refq+ref_strideq*2]
 %endif
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
   dec                   block_height
   jg .x_half_y_zero_loop
@@ -594,13 +583,13 @@
 .x_half_y_half_loop:
   movu                 m4, [srcq]
   movu                 m3, [srcq+1]
-  mova                 m1, [dstq]
+  mova                 m1, [refq]
   pavgb                m4, m3
   punpckhbw            m3, m1, m5
   pavgb                m0, m4
 %if %2 == 1 ; avg
   punpcklbw            m1, m5
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
   punpckhbw            m2, m0, m5
   punpcklbw            m0, m5
 %else
@@ -612,7 +601,7 @@
   mova                 m0, m4
 
   add                srcq, src_strideq
-  add                dstq, dst_strideq
+  add                refq, ref_strideq
 %else ; %1 < 16
   movx                 m0, [srcq]
   movx                 m3, [srcq+1]
@@ -639,13 +628,13 @@
   punpckldq            m0, m2
   pshuflw              m4, m2, 0xe
 %endif
-  movx                 m1, [dstq]
+  movx                 m1, [refq]
   pavgb                m0, m2
-  movx                 m3, [dstq+dst_strideq]
+  movx                 m3, [refq+ref_strideq]
 %if %1 > 4
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
 %else
-  movh                 m2, [secq]
+  movh                 m2, [second_predq]
   pavgb                m0, m2
 %endif
   punpcklbw            m3, m5
@@ -664,8 +653,8 @@
   pavgb                m4, m1
   pavgb                m0, m2
   pavgb                m2, m4
-  movx                 m1, [dstq]
-  movx                 m3, [dstq+dst_strideq]
+  movx                 m1, [refq]
+  movx                 m3, [refq+ref_strideq]
   punpcklbw            m0, m5
   punpcklbw            m2, m5
   punpcklbw            m3, m5
@@ -675,10 +664,10 @@
   mova                 m0, m4
 
   lea                srcq, [srcq+src_strideq*2]
-  lea                dstq, [dstq+dst_strideq*2]
+  lea                refq, [refq+ref_strideq*2]
 %endif
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
   dec                   block_height
   jg .x_half_y_half_loop
@@ -686,11 +675,11 @@
 
 .x_half_y_nonhalf:
   ; x_offset == 0.5 && y_offset == bilin interpolation
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
   lea        bilin_filter, [GLOBAL(bilin_filter_m)]
 %endif
   shl           y_offsetd, filter_idx_shift
-%if ARCH_X86_64 && %1 > 4
+%if VPX_ARCH_X86_64 && %1 > 4
   mova                 m8, [bilin_filter+y_offsetq]
 %if notcpuflag(ssse3) ; FIXME(rbultje) don't scatter registers on x86-64
   mova                 m9, [bilin_filter+y_offsetq+16]
@@ -700,7 +689,7 @@
 %define filter_y_b m9
 %define filter_rnd m10
 %else  ;x86_32
-%if ARCH_X86=1 && CONFIG_PIC=1
+%if VPX_ARCH_X86=1 && CONFIG_PIC=1
 ; x_offset == 0.5. We can reuse x_offset reg
 %define tempq x_offsetq
   add y_offsetq, g_bilin_filterm
@@ -724,7 +713,7 @@
 .x_half_y_other_loop:
   movu                 m4, [srcq]
   movu                 m2, [srcq+1]
-  mova                 m1, [dstq]
+  mova                 m1, [refq]
   pavgb                m4, m2
 %if cpuflag(ssse3)
   punpckhbw            m2, m0, m4
@@ -754,7 +743,7 @@
 %if %2 == 1 ; avg
   ; FIXME(rbultje) pipeline
   packuswb             m0, m2
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
   punpckhbw            m2, m0, m5
   punpcklbw            m0, m5
 %endif
@@ -763,7 +752,7 @@
   mova                 m0, m4
 
   add                srcq, src_strideq
-  add                dstq, dst_strideq
+  add                refq, ref_strideq
 %else ; %1 < 16
   movx                 m0, [srcq]
   movx                 m3, [srcq+1]
@@ -779,9 +768,9 @@
   movx                 m3, [srcq+src_strideq+1]
   pavgb                m2, m1
   pavgb                m4, m3
-  movx                 m3, [dstq+dst_strideq]
+  movx                 m3, [refq+ref_strideq]
 %if cpuflag(ssse3)
-  movx                 m1, [dstq]
+  movx                 m1, [refq]
   punpcklbw            m0, m2
   punpcklbw            m2, m4
   pmaddubsw            m0, filter_y_a
@@ -801,7 +790,7 @@
   pmullw               m1, m4, filter_y_b
   paddw                m2, filter_rnd
   paddw                m2, m1
-  movx                 m1, [dstq]
+  movx                 m1, [refq]
 %endif
   psraw                m0, 4
   psraw                m2, 4
@@ -812,11 +801,11 @@
 %endif
   packuswb             m0, m2
 %if %1 > 4
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
   punpckhbw            m2, m0, m5
   punpcklbw            m0, m5
 %else
-  movh                 m2, [secq]
+  movh                 m2, [second_predq]
   pavgb                m0, m2
   punpcklbw            m0, m5
   movhlps              m2, m0
@@ -827,10 +816,10 @@
   mova                 m0, m4
 
   lea                srcq, [srcq+src_strideq*2]
-  lea                dstq, [dstq+dst_strideq*2]
+  lea                refq, [refq+ref_strideq*2]
 %endif
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
   dec                   block_height
   jg .x_half_y_other_loop
@@ -844,11 +833,11 @@
   jnz .x_nonhalf_y_nonzero
 
   ; x_offset == bilin interpolation && y_offset == 0
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
   lea        bilin_filter, [GLOBAL(bilin_filter_m)]
 %endif
   shl           x_offsetd, filter_idx_shift
-%if ARCH_X86_64 && %1 > 4
+%if VPX_ARCH_X86_64 && %1 > 4
   mova                 m8, [bilin_filter+x_offsetq]
 %if notcpuflag(ssse3) ; FIXME(rbultje) don't scatter registers on x86-64
   mova                 m9, [bilin_filter+x_offsetq+16]
@@ -858,7 +847,7 @@
 %define filter_x_b m9
 %define filter_rnd m10
 %else    ; x86-32
-%if ARCH_X86=1 && CONFIG_PIC=1
+%if VPX_ARCH_X86=1 && CONFIG_PIC=1
 ;y_offset == 0. We can reuse y_offset reg.
 %define tempq y_offsetq
   add x_offsetq, g_bilin_filterm
@@ -878,7 +867,7 @@
 %if %1 == 16
   movu                 m0, [srcq]
   movu                 m4, [srcq+1]
-  mova                 m1, [dstq]
+  mova                 m1, [refq]
 %if cpuflag(ssse3)
   punpckhbw            m2, m0, m4
   punpcklbw            m0, m4
@@ -905,7 +894,7 @@
 %if %2 == 1 ; avg
   ; FIXME(rbultje) pipeline
   packuswb             m0, m2
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
   punpckhbw            m2, m0, m5
   punpcklbw            m0, m5
 %endif
@@ -914,16 +903,16 @@
   SUM_SSE              m0, m1, m2, m3, m6, m7
 
   add                srcq, src_strideq
-  add                dstq, dst_strideq
+  add                refq, ref_strideq
 %else ; %1 < 16
   movx                 m0, [srcq]
   movx                 m1, [srcq+1]
   movx                 m2, [srcq+src_strideq]
   movx                 m4, [srcq+src_strideq+1]
-  movx                 m3, [dstq+dst_strideq]
+  movx                 m3, [refq+ref_strideq]
 %if cpuflag(ssse3)
   punpcklbw            m0, m1
-  movx                 m1, [dstq]
+  movx                 m1, [refq]
   punpcklbw            m2, m4
   pmaddubsw            m0, filter_x_a
   pmaddubsw            m2, filter_x_a
@@ -943,7 +932,7 @@
   pmullw               m4, filter_x_b
   paddw                m0, m1
   paddw                m2, filter_rnd
-  movx                 m1, [dstq]
+  movx                 m1, [refq]
   paddw                m2, m4
 %endif
   psraw                m0, 4
@@ -955,11 +944,11 @@
 %endif
   packuswb             m0, m2
 %if %1 > 4
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
   punpckhbw            m2, m0, m5
   punpcklbw            m0, m5
 %else
-  movh                 m2, [secq]
+  movh                 m2, [second_predq]
   pavgb                m0, m2
   punpcklbw            m0, m5
   movhlps              m2, m0
@@ -969,10 +958,10 @@
   SUM_SSE              m0, m1, m2, m3, m6, m7
 
   lea                srcq, [srcq+src_strideq*2]
-  lea                dstq, [dstq+dst_strideq*2]
+  lea                refq, [refq+ref_strideq*2]
 %endif
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
   dec                   block_height
   jg .x_other_y_zero_loop
@@ -986,11 +975,11 @@
   jne .x_nonhalf_y_nonhalf
 
   ; x_offset == bilin interpolation && y_offset == 0.5
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
   lea        bilin_filter, [GLOBAL(bilin_filter_m)]
 %endif
   shl           x_offsetd, filter_idx_shift
-%if ARCH_X86_64 && %1 > 4
+%if VPX_ARCH_X86_64 && %1 > 4
   mova                 m8, [bilin_filter+x_offsetq]
 %if notcpuflag(ssse3) ; FIXME(rbultje) don't scatter registers on x86-64
   mova                 m9, [bilin_filter+x_offsetq+16]
@@ -1000,7 +989,7 @@
 %define filter_x_b m9
 %define filter_rnd m10
 %else    ; x86-32
-%if ARCH_X86=1 && CONFIG_PIC=1
+%if VPX_ARCH_X86=1 && CONFIG_PIC=1
 ; y_offset == 0.5. We can reuse y_offset reg.
 %define tempq y_offsetq
   add x_offsetq, g_bilin_filterm
@@ -1048,7 +1037,7 @@
   movu                 m4, [srcq]
   movu                 m3, [srcq+1]
 %if cpuflag(ssse3)
-  mova                 m1, [dstq]
+  mova                 m1, [refq]
   punpckhbw            m2, m4, m3
   punpcklbw            m4, m3
   pmaddubsw            m2, filter_x_a
@@ -1074,7 +1063,7 @@
   paddw                m2, filter_rnd
   paddw                m4, m3
   paddw                m2, m1
-  mova                 m1, [dstq]
+  mova                 m1, [refq]
   psraw                m4, 4
   psraw                m2, 4
   punpckhbw            m3, m1, m5
@@ -1088,7 +1077,7 @@
 %endif
 %if %2 == 1 ; avg
   ; FIXME(rbultje) pipeline
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
 %endif
   punpckhbw            m2, m0, m5
   punpcklbw            m0, m5
@@ -1096,7 +1085,7 @@
   mova                 m0, m4
 
   add                srcq, src_strideq
-  add                dstq, dst_strideq
+  add                refq, ref_strideq
 %else ; %1 < 16
   movx                 m0, [srcq]
   movx                 m1, [srcq+1]
@@ -1124,8 +1113,8 @@
   punpcklbw            m4, m3
   pmaddubsw            m2, filter_x_a
   pmaddubsw            m4, filter_x_a
-  movx                 m1, [dstq]
-  movx                 m3, [dstq+dst_strideq]
+  movx                 m1, [refq]
+  movx                 m3, [refq+ref_strideq]
   paddw                m2, filter_rnd
   paddw                m4, filter_rnd
 %else
@@ -1140,9 +1129,9 @@
   pmullw               m3, filter_x_b
   paddw                m4, filter_rnd
   paddw                m2, m1
-  movx                 m1, [dstq]
+  movx                 m1, [refq]
   paddw                m4, m3
-  movx                 m3, [dstq+dst_strideq]
+  movx                 m3, [refq+ref_strideq]
 %endif
   psraw                m2, 4
   psraw                m4, 4
@@ -1155,11 +1144,11 @@
 %endif
   packuswb             m0, m2
 %if %1 > 4
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
   punpckhbw            m2, m0, m5
   punpcklbw            m0, m5
 %else
-  movh                 m2, [secq]
+  movh                 m2, [second_predq]
   pavgb                m0, m2
   punpcklbw            m0, m5
   movhlps              m2, m0
@@ -1171,10 +1160,10 @@
   mova                 m0, m4
 
   lea                srcq, [srcq+src_strideq*2]
-  lea                dstq, [dstq+dst_strideq*2]
+  lea                refq, [refq+ref_strideq*2]
 %endif
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
   dec                   block_height
   jg .x_other_y_half_loop
@@ -1184,12 +1173,12 @@
   STORE_AND_RET %1
 
 .x_nonhalf_y_nonhalf:
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
   lea        bilin_filter, [GLOBAL(bilin_filter_m)]
 %endif
   shl           x_offsetd, filter_idx_shift
   shl           y_offsetd, filter_idx_shift
-%if ARCH_X86_64 && %1 > 4
+%if VPX_ARCH_X86_64 && %1 > 4
   mova                 m8, [bilin_filter+x_offsetq]
 %if notcpuflag(ssse3) ; FIXME(rbultje) don't scatter registers on x86-64
   mova                 m9, [bilin_filter+x_offsetq+16]
@@ -1205,7 +1194,7 @@
 %define filter_y_b m11
 %define filter_rnd m12
 %else   ; x86-32
-%if ARCH_X86=1 && CONFIG_PIC=1
+%if VPX_ARCH_X86=1 && CONFIG_PIC=1
 ; In this case, there is NO unused register. Used src_stride register. Later,
 ; src_stride has to be loaded from stack when it is needed.
 %define tempq src_strideq
@@ -1265,7 +1254,7 @@
 %if cpuflag(ssse3)
   movu                 m4, [srcq]
   movu                 m3, [srcq+1]
-  mova                 m1, [dstq]
+  mova                 m1, [refq]
   punpckhbw            m2, m4, m3
   punpcklbw            m4, m3
   pmaddubsw            m2, filter_x_a
@@ -1311,7 +1300,7 @@
   pmullw               m0, filter_y_a
   pmullw               m3, filter_y_b
   paddw                m2, m1
-  mova                 m1, [dstq]
+  mova                 m1, [refq]
   paddw                m0, filter_rnd
   psraw                m2, 4
   paddw                m0, m3
@@ -1322,7 +1311,7 @@
 %if %2 == 1 ; avg
   ; FIXME(rbultje) pipeline
   packuswb             m0, m2
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
   punpckhbw            m2, m0, m5
   punpcklbw            m0, m5
 %endif
@@ -1330,7 +1319,7 @@
   mova                 m0, m4
 
   INC_SRC_BY_SRC_STRIDE
-  add                dstq, dst_strideq
+  add                refq, ref_strideq
 %else ; %1 < 16
   movx                 m0, [srcq]
   movx                 m1, [srcq+1]
@@ -1366,8 +1355,8 @@
   punpcklbw            m4, m3
   pmaddubsw            m2, filter_x_a
   pmaddubsw            m4, filter_x_a
-  movx                 m3, [dstq+dst_strideq]
-  movx                 m1, [dstq]
+  movx                 m3, [refq+ref_strideq]
+  movx                 m1, [refq]
   paddw                m2, filter_rnd
   paddw                m4, filter_rnd
   psraw                m2, 4
@@ -1406,9 +1395,9 @@
   pmullw               m1, m4, filter_y_b
   paddw                m2, filter_rnd
   paddw                m0, m3
-  movx                 m3, [dstq+dst_strideq]
+  movx                 m3, [refq+ref_strideq]
   paddw                m2, m1
-  movx                 m1, [dstq]
+  movx                 m1, [refq]
   psraw                m0, 4
   psraw                m2, 4
   punpcklbw            m3, m5
@@ -1421,11 +1410,11 @@
 %endif
   packuswb             m0, m2
 %if %1 > 4
-  pavgb                m0, [secq]
+  pavgb                m0, [second_predq]
   punpckhbw            m2, m0, m5
   punpcklbw            m0, m5
 %else
-  movh                 m2, [secq]
+  movh                 m2, [second_predq]
   pavgb                m0, m2
   punpcklbw            m0, m5
   movhlps              m2, m0
@@ -1435,10 +1424,10 @@
   mova                 m0, m4
 
   INC_SRC_BY_SRC_STRIDE
-  lea                dstq, [dstq+dst_strideq*2]
+  lea                refq, [refq+ref_strideq*2]
 %endif
 %if %2 == 1 ; avg
-  add                secq, sec_str
+  add                second_predq, second_str
 %endif
   dec                   block_height
   jg .x_other_y_other_loop
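The assembly above implements sub-pixel variance with a two-tap bilinear filter: for a fractional offset, each output pixel is a weighted average of two neighbors, rounded and shifted by 4 (matching the `paddw filter_rnd` / `psraw m0, 4` sequence, so the two weights sum to 16). A minimal scalar sketch of the horizontal filtering step, assuming the `(16 - 2*offset, 2*offset)` weight pairs used by the table lookups above (function and variable names here are illustrative, not libvpx API):

```c
#include <assert.h>
#include <stdint.h>

/* Scalar sketch of the bilinear x-filter step from the asm above:
 * out[i] = (src[i]*f_a + src[i+1]*f_b + 8) >> 4, with f_a = 16 - 2*offset
 * and f_b = 2*offset for offset in [0, 8). Illustrative names only. */
static void bilin_x_filter_row(const uint8_t *src, uint8_t *out, int w,
                               int offset) {
  const int f_b = offset * 2; /* weight of the right-hand tap */
  const int f_a = 16 - f_b;   /* weight of the left-hand tap  */
  int i;
  for (i = 0; i < w; i++)
    out[i] = (uint8_t)((src[i] * f_a + src[i + 1] * f_b + 8) >> 4);
}
```

At `offset == 0` the filter degenerates to a copy, and at `offset == 4` to a rounded average — exactly the `x_offset == 0` and `x_offset == 0.5` special cases the assembly branches on before falling through to the generic `.x_nonhalf` paths.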
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sum_squares_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sum_squares_sse2.c
index 026d0ca..14f3b35 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sum_squares_sse2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/sum_squares_sse2.c
@@ -10,120 +10,96 @@
 
 #include <assert.h>
 #include <emmintrin.h>
-#include <stdio.h>
 
 #include "./vpx_dsp_rtcd.h"
-
-static uint64_t vpx_sum_squares_2d_i16_4x4_sse2(const int16_t *src,
-                                                int stride) {
-  const __m128i v_val_0_w =
-      _mm_loadl_epi64((const __m128i *)(src + 0 * stride));
-  const __m128i v_val_1_w =
-      _mm_loadl_epi64((const __m128i *)(src + 1 * stride));
-  const __m128i v_val_2_w =
-      _mm_loadl_epi64((const __m128i *)(src + 2 * stride));
-  const __m128i v_val_3_w =
-      _mm_loadl_epi64((const __m128i *)(src + 3 * stride));
-
-  const __m128i v_sq_0_d = _mm_madd_epi16(v_val_0_w, v_val_0_w);
-  const __m128i v_sq_1_d = _mm_madd_epi16(v_val_1_w, v_val_1_w);
-  const __m128i v_sq_2_d = _mm_madd_epi16(v_val_2_w, v_val_2_w);
-  const __m128i v_sq_3_d = _mm_madd_epi16(v_val_3_w, v_val_3_w);
-
-  const __m128i v_sum_01_d = _mm_add_epi32(v_sq_0_d, v_sq_1_d);
-  const __m128i v_sum_23_d = _mm_add_epi32(v_sq_2_d, v_sq_3_d);
-  const __m128i v_sum_0123_d = _mm_add_epi32(v_sum_01_d, v_sum_23_d);
-
-  const __m128i v_sum_d =
-      _mm_add_epi32(v_sum_0123_d, _mm_srli_epi64(v_sum_0123_d, 32));
-
-  return (uint64_t)_mm_cvtsi128_si32(v_sum_d);
-}
-
-// TODO(jingning): Evaluate the performance impact here.
-#ifdef __GNUC__
-// This prevents GCC/Clang from inlining this function into
-// vpx_sum_squares_2d_i16_sse2, which in turn saves some stack
-// maintenance instructions in the common case of 4x4.
-__attribute__((noinline))
-#endif
-static uint64_t
-vpx_sum_squares_2d_i16_nxn_sse2(const int16_t *src, int stride, int size) {
-  int r, c;
-  const __m128i v_zext_mask_q = _mm_set_epi32(0, 0xffffffff, 0, 0xffffffff);
-  __m128i v_acc_q = _mm_setzero_si128();
-
-  for (r = 0; r < size; r += 8) {
-    __m128i v_acc_d = _mm_setzero_si128();
-
-    for (c = 0; c < size; c += 8) {
-      const int16_t *b = src + c;
-      const __m128i v_val_0_w =
-          _mm_load_si128((const __m128i *)(b + 0 * stride));
-      const __m128i v_val_1_w =
-          _mm_load_si128((const __m128i *)(b + 1 * stride));
-      const __m128i v_val_2_w =
-          _mm_load_si128((const __m128i *)(b + 2 * stride));
-      const __m128i v_val_3_w =
-          _mm_load_si128((const __m128i *)(b + 3 * stride));
-      const __m128i v_val_4_w =
-          _mm_load_si128((const __m128i *)(b + 4 * stride));
-      const __m128i v_val_5_w =
-          _mm_load_si128((const __m128i *)(b + 5 * stride));
-      const __m128i v_val_6_w =
-          _mm_load_si128((const __m128i *)(b + 6 * stride));
-      const __m128i v_val_7_w =
-          _mm_load_si128((const __m128i *)(b + 7 * stride));
-
-      const __m128i v_sq_0_d = _mm_madd_epi16(v_val_0_w, v_val_0_w);
-      const __m128i v_sq_1_d = _mm_madd_epi16(v_val_1_w, v_val_1_w);
-      const __m128i v_sq_2_d = _mm_madd_epi16(v_val_2_w, v_val_2_w);
-      const __m128i v_sq_3_d = _mm_madd_epi16(v_val_3_w, v_val_3_w);
-      const __m128i v_sq_4_d = _mm_madd_epi16(v_val_4_w, v_val_4_w);
-      const __m128i v_sq_5_d = _mm_madd_epi16(v_val_5_w, v_val_5_w);
-      const __m128i v_sq_6_d = _mm_madd_epi16(v_val_6_w, v_val_6_w);
-      const __m128i v_sq_7_d = _mm_madd_epi16(v_val_7_w, v_val_7_w);
-
-      const __m128i v_sum_01_d = _mm_add_epi32(v_sq_0_d, v_sq_1_d);
-      const __m128i v_sum_23_d = _mm_add_epi32(v_sq_2_d, v_sq_3_d);
-      const __m128i v_sum_45_d = _mm_add_epi32(v_sq_4_d, v_sq_5_d);
-      const __m128i v_sum_67_d = _mm_add_epi32(v_sq_6_d, v_sq_7_d);
-
-      const __m128i v_sum_0123_d = _mm_add_epi32(v_sum_01_d, v_sum_23_d);
-      const __m128i v_sum_4567_d = _mm_add_epi32(v_sum_45_d, v_sum_67_d);
-
-      v_acc_d = _mm_add_epi32(v_acc_d, v_sum_0123_d);
-      v_acc_d = _mm_add_epi32(v_acc_d, v_sum_4567_d);
-    }
-
-    v_acc_q = _mm_add_epi64(v_acc_q, _mm_and_si128(v_acc_d, v_zext_mask_q));
-    v_acc_q = _mm_add_epi64(v_acc_q, _mm_srli_epi64(v_acc_d, 32));
-
-    src += 8 * stride;
-  }
-
-  v_acc_q = _mm_add_epi64(v_acc_q, _mm_srli_si128(v_acc_q, 8));
-
-#if ARCH_X86_64
-  return (uint64_t)_mm_cvtsi128_si64(v_acc_q);
-#else
-  {
-    uint64_t tmp;
-    _mm_storel_epi64((__m128i *)&tmp, v_acc_q);
-    return tmp;
-  }
-#endif
-}
+#include "vpx_dsp/x86/mem_sse2.h"
 
 uint64_t vpx_sum_squares_2d_i16_sse2(const int16_t *src, int stride, int size) {
-  // 4 elements per row only requires half an XMM register, so this
-  // must be a special case, but also note that over 75% of all calls
-  // are with size == 4, so it is also the common case.
+  // Over 75% of all calls are with size == 4.
   if (size == 4) {
-    return vpx_sum_squares_2d_i16_4x4_sse2(src, stride);
+    __m128i s[2], sq[2], ss;
+
+    s[0] = _mm_loadl_epi64((const __m128i *)(src + 0 * stride));
+    s[0] = loadh_epi64(s[0], src + 1 * stride);
+    s[1] = _mm_loadl_epi64((const __m128i *)(src + 2 * stride));
+    s[1] = loadh_epi64(s[1], src + 3 * stride);
+    sq[0] = _mm_madd_epi16(s[0], s[0]);
+    sq[1] = _mm_madd_epi16(s[1], s[1]);
+    sq[0] = _mm_add_epi32(sq[0], sq[1]);
+    ss = _mm_add_epi32(sq[0], _mm_srli_si128(sq[0], 8));
+    ss = _mm_add_epi32(ss, _mm_srli_epi64(ss, 32));
+
+    return (uint64_t)_mm_cvtsi128_si32(ss);
   } else {
     // Generic case
+    int r = size;
+    const __m128i v_zext_mask_q = _mm_set_epi32(0, 0xffffffff, 0, 0xffffffff);
+    __m128i v_acc_q = _mm_setzero_si128();
+
     assert(size % 8 == 0);
-    return vpx_sum_squares_2d_i16_nxn_sse2(src, stride, size);
+
+    do {
+      int c = 0;
+      __m128i v_acc_d = _mm_setzero_si128();
+
+      do {
+        const int16_t *const b = src + c;
+        const __m128i v_val_0_w =
+            _mm_load_si128((const __m128i *)(b + 0 * stride));
+        const __m128i v_val_1_w =
+            _mm_load_si128((const __m128i *)(b + 1 * stride));
+        const __m128i v_val_2_w =
+            _mm_load_si128((const __m128i *)(b + 2 * stride));
+        const __m128i v_val_3_w =
+            _mm_load_si128((const __m128i *)(b + 3 * stride));
+        const __m128i v_val_4_w =
+            _mm_load_si128((const __m128i *)(b + 4 * stride));
+        const __m128i v_val_5_w =
+            _mm_load_si128((const __m128i *)(b + 5 * stride));
+        const __m128i v_val_6_w =
+            _mm_load_si128((const __m128i *)(b + 6 * stride));
+        const __m128i v_val_7_w =
+            _mm_load_si128((const __m128i *)(b + 7 * stride));
+
+        const __m128i v_sq_0_d = _mm_madd_epi16(v_val_0_w, v_val_0_w);
+        const __m128i v_sq_1_d = _mm_madd_epi16(v_val_1_w, v_val_1_w);
+        const __m128i v_sq_2_d = _mm_madd_epi16(v_val_2_w, v_val_2_w);
+        const __m128i v_sq_3_d = _mm_madd_epi16(v_val_3_w, v_val_3_w);
+        const __m128i v_sq_4_d = _mm_madd_epi16(v_val_4_w, v_val_4_w);
+        const __m128i v_sq_5_d = _mm_madd_epi16(v_val_5_w, v_val_5_w);
+        const __m128i v_sq_6_d = _mm_madd_epi16(v_val_6_w, v_val_6_w);
+        const __m128i v_sq_7_d = _mm_madd_epi16(v_val_7_w, v_val_7_w);
+
+        const __m128i v_sum_01_d = _mm_add_epi32(v_sq_0_d, v_sq_1_d);
+        const __m128i v_sum_23_d = _mm_add_epi32(v_sq_2_d, v_sq_3_d);
+        const __m128i v_sum_45_d = _mm_add_epi32(v_sq_4_d, v_sq_5_d);
+        const __m128i v_sum_67_d = _mm_add_epi32(v_sq_6_d, v_sq_7_d);
+
+        const __m128i v_sum_0123_d = _mm_add_epi32(v_sum_01_d, v_sum_23_d);
+        const __m128i v_sum_4567_d = _mm_add_epi32(v_sum_45_d, v_sum_67_d);
+
+        v_acc_d = _mm_add_epi32(v_acc_d, v_sum_0123_d);
+        v_acc_d = _mm_add_epi32(v_acc_d, v_sum_4567_d);
+        c += 8;
+      } while (c < size);
+
+      v_acc_q = _mm_add_epi64(v_acc_q, _mm_and_si128(v_acc_d, v_zext_mask_q));
+      v_acc_q = _mm_add_epi64(v_acc_q, _mm_srli_epi64(v_acc_d, 32));
+
+      src += 8 * stride;
+      r -= 8;
+    } while (r);
+
+    v_acc_q = _mm_add_epi64(v_acc_q, _mm_srli_si128(v_acc_q, 8));
+
+#if VPX_ARCH_X86_64
+    return (uint64_t)_mm_cvtsi128_si64(v_acc_q);
+#else
+    {
+      uint64_t tmp;
+      _mm_storel_epi64((__m128i *)&tmp, v_acc_q);
+      return tmp;
+    }
+#endif
   }
 }
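The rewritten `vpx_sum_squares_2d_i16_sse2` above keeps the fast 4x4 path inline and converts the generic path to `do`/`while` loops, but the computed quantity is unchanged: the sum of squared `int16_t` values over a size-by-size block. A plain scalar reference of that contract (a sketch for illustration, not the libvpx C fallback verbatim):

```c
#include <assert.h>
#include <stdint.h>

/* Scalar reference for the SSE2 kernel above: returns the sum of
 * src[r*stride + c]^2 over a size x size block of int16 samples.
 * The SIMD version uses _mm_madd_epi16 to square-and-pair-sum,
 * then widens the 32-bit partials to 64 bits to avoid overflow. */
static uint64_t sum_squares_2d_i16_ref(const int16_t *src, int stride,
                                       int size) {
  uint64_t ss = 0;
  int r, c;
  for (r = 0; r < size; r++)
    for (c = 0; c < size; c++) {
      const int32_t v = src[r * stride + c];
      ss += (uint64_t)((int64_t)v * v);
    }
  return ss;
}
```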
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/transpose_sse2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/transpose_sse2.h
index 8a0119c..6e07871 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/transpose_sse2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/transpose_sse2.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_X86_TRANSPOSE_SSE2_H_
-#define VPX_DSP_X86_TRANSPOSE_SSE2_H_
+#ifndef VPX_VPX_DSP_X86_TRANSPOSE_SSE2_H_
+#define VPX_VPX_DSP_X86_TRANSPOSE_SSE2_H_
 
 #include <emmintrin.h>  // SSE2
 
@@ -364,4 +364,4 @@
   out[7] = _mm_unpackhi_epi64(a6, a7);
 }
 
-#endif  // VPX_DSP_X86_TRANSPOSE_SSE2_H_
+#endif  // VPX_VPX_DSP_X86_TRANSPOSE_SSE2_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/txfm_common_sse2.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/txfm_common_sse2.h
index 0a9542c..de5ce43 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/txfm_common_sse2.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/txfm_common_sse2.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_DSP_X86_TXFM_COMMON_SSE2_H_
-#define VPX_DSP_X86_TXFM_COMMON_SSE2_H_
+#ifndef VPX_VPX_DSP_X86_TXFM_COMMON_SSE2_H_
+#define VPX_VPX_DSP_X86_TXFM_COMMON_SSE2_H_
 
 #include <emmintrin.h>
 #include "vpx/vpx_integer.h"
@@ -29,4 +29,4 @@
   _mm_setr_epi16((int16_t)(a), (int16_t)(b), (int16_t)(c), (int16_t)(d), \
                  (int16_t)(e), (int16_t)(f), (int16_t)(g), (int16_t)(h))
 
-#endif  // VPX_DSP_X86_TXFM_COMMON_SSE2_H_
+#endif  // VPX_VPX_DSP_X86_TXFM_COMMON_SSE2_H_
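The AVX2 variance refactoring in the next file splits the work into small kernels that accumulate two running totals, `sum = Σ(src - ref)` and `sse = Σ(src - ref)²`, which the caller combines into a variance. A scalar sketch of that contract, assuming the usual `sse - sum²/N` identity (libvpx uses a shift for the power-of-two block sizes; names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Scalar sketch of what the AVX2 variance kernels below compute:
 * accumulate sum and sse of (src - ref) over a w x h block, then
 * derive variance = sse - sum^2 / N. Illustrative, not libvpx API. */
static uint32_t variance_ref(const uint8_t *src, int src_stride,
                             const uint8_t *ref, int ref_stride,
                             int w, int h, uint32_t *sse_out) {
  int64_t sum = 0;
  uint32_t sse = 0;
  int r, c;
  for (r = 0; r < h; r++)
    for (c = 0; c < w; c++) {
      const int d = src[r * src_stride + c] - ref[r * ref_stride + c];
      sum += d;
      sse += (uint32_t)(d * d);
    }
  *sse_out = sse;
  return (uint32_t)(sse - (uint64_t)(sum * sum) / (uint64_t)(w * h));
}
```

Keeping `sum` 16-bit in-register as long as possible (then widening via `sum_to_32bit_avx2` / `variance_final_from_16bit_sum_avx2`) is what the new helper structure below is organized around.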
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/variance_avx2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/variance_avx2.c
index d15a89c..9232acb 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/variance_avx2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/variance_avx2.c
@@ -38,130 +38,140 @@
 };
 /* clang-format on */
 
-void vpx_get16x16var_avx2(const unsigned char *src_ptr, int source_stride,
-                          const unsigned char *ref_ptr, int recon_stride,
-                          unsigned int *sse, int *sum) {
-  unsigned int i, src_2strides, ref_2strides;
-  __m256i sum_reg = _mm256_setzero_si256();
-  __m256i sse_reg = _mm256_setzero_si256();
-  // process two 16 byte locations in a 256 bit register
-  src_2strides = source_stride << 1;
-  ref_2strides = recon_stride << 1;
-  for (i = 0; i < 8; ++i) {
-    // convert up values in 128 bit registers across lanes
-    const __m256i src0 =
-        _mm256_cvtepu8_epi16(_mm_loadu_si128((__m128i const *)(src_ptr)));
-    const __m256i src1 = _mm256_cvtepu8_epi16(
-        _mm_loadu_si128((__m128i const *)(src_ptr + source_stride)));
-    const __m256i ref0 =
-        _mm256_cvtepu8_epi16(_mm_loadu_si128((__m128i const *)(ref_ptr)));
-    const __m256i ref1 = _mm256_cvtepu8_epi16(
-        _mm_loadu_si128((__m128i const *)(ref_ptr + recon_stride)));
-    const __m256i diff0 = _mm256_sub_epi16(src0, ref0);
-    const __m256i diff1 = _mm256_sub_epi16(src1, ref1);
-    const __m256i madd0 = _mm256_madd_epi16(diff0, diff0);
-    const __m256i madd1 = _mm256_madd_epi16(diff1, diff1);
+static INLINE void variance_kernel_avx2(const __m256i src, const __m256i ref,
+                                        __m256i *const sse,
+                                        __m256i *const sum) {
+  const __m256i adj_sub = _mm256_load_si256((__m256i const *)adjacent_sub_avx2);
 
-    // add to the running totals
-    sum_reg = _mm256_add_epi16(sum_reg, _mm256_add_epi16(diff0, diff1));
-    sse_reg = _mm256_add_epi32(sse_reg, _mm256_add_epi32(madd0, madd1));
+  // unpack into pairs of source and reference values
+  const __m256i src_ref0 = _mm256_unpacklo_epi8(src, ref);
+  const __m256i src_ref1 = _mm256_unpackhi_epi8(src, ref);
 
-    src_ptr += src_2strides;
-    ref_ptr += ref_2strides;
-  }
-  {
-    // extract the low lane and add it to the high lane
-    const __m128i sum_reg_128 = _mm_add_epi16(
-        _mm256_castsi256_si128(sum_reg), _mm256_extractf128_si256(sum_reg, 1));
-    const __m128i sse_reg_128 = _mm_add_epi32(
-        _mm256_castsi256_si128(sse_reg), _mm256_extractf128_si256(sse_reg, 1));
+  // subtract adjacent elements using src*1 + ref*-1
+  const __m256i diff0 = _mm256_maddubs_epi16(src_ref0, adj_sub);
+  const __m256i diff1 = _mm256_maddubs_epi16(src_ref1, adj_sub);
+  const __m256i madd0 = _mm256_madd_epi16(diff0, diff0);
+  const __m256i madd1 = _mm256_madd_epi16(diff1, diff1);
 
-    // sum upper and lower 64 bits together and convert up to 32 bit values
-    const __m128i sum_reg_64 =
-        _mm_add_epi16(sum_reg_128, _mm_srli_si128(sum_reg_128, 8));
-    const __m128i sum_int32 = _mm_cvtepi16_epi32(sum_reg_64);
+  // add to the running totals
+  *sum = _mm256_add_epi16(*sum, _mm256_add_epi16(diff0, diff1));
+  *sse = _mm256_add_epi32(*sse, _mm256_add_epi32(madd0, madd1));
+}
 
-    // unpack sse and sum registers and add
-    const __m128i sse_sum_lo = _mm_unpacklo_epi32(sse_reg_128, sum_int32);
-    const __m128i sse_sum_hi = _mm_unpackhi_epi32(sse_reg_128, sum_int32);
-    const __m128i sse_sum = _mm_add_epi32(sse_sum_lo, sse_sum_hi);
+static INLINE void variance_final_from_32bit_sum_avx2(__m256i vsse,
+                                                      __m128i vsum,
+                                                      unsigned int *const sse,
+                                                      int *const sum) {
+  // extract the low lane and add it to the high lane
+  const __m128i sse_reg_128 = _mm_add_epi32(_mm256_castsi256_si128(vsse),
+                                            _mm256_extractf128_si256(vsse, 1));
 
-    // perform the final summation and extract the results
-    const __m128i res = _mm_add_epi32(sse_sum, _mm_srli_si128(sse_sum, 8));
-    *((int *)sse) = _mm_cvtsi128_si32(res);
-    *((int *)sum) = _mm_extract_epi32(res, 1);
+  // unpack sse and sum registers and add
+  const __m128i sse_sum_lo = _mm_unpacklo_epi32(sse_reg_128, vsum);
+  const __m128i sse_sum_hi = _mm_unpackhi_epi32(sse_reg_128, vsum);
+  const __m128i sse_sum = _mm_add_epi32(sse_sum_lo, sse_sum_hi);
+
+  // perform the final summation and extract the results
+  const __m128i res = _mm_add_epi32(sse_sum, _mm_srli_si128(sse_sum, 8));
+  *((int *)sse) = _mm_cvtsi128_si32(res);
+  *((int *)sum) = _mm_extract_epi32(res, 1);
+}
+
+static INLINE void variance_final_from_16bit_sum_avx2(__m256i vsse,
+                                                      __m256i vsum,
+                                                      unsigned int *const sse,
+                                                      int *const sum) {
+  // extract the low lane and add it to the high lane
+  const __m128i sum_reg_128 = _mm_add_epi16(_mm256_castsi256_si128(vsum),
+                                            _mm256_extractf128_si256(vsum, 1));
+  const __m128i sum_reg_64 =
+      _mm_add_epi16(sum_reg_128, _mm_srli_si128(sum_reg_128, 8));
+  const __m128i sum_int32 = _mm_cvtepi16_epi32(sum_reg_64);
+
+  variance_final_from_32bit_sum_avx2(vsse, sum_int32, sse, sum);
+}
+
+static INLINE __m256i sum_to_32bit_avx2(const __m256i sum) {
+  const __m256i sum_lo = _mm256_cvtepi16_epi32(_mm256_castsi256_si128(sum));
+  const __m256i sum_hi =
+      _mm256_cvtepi16_epi32(_mm256_extractf128_si256(sum, 1));
+  return _mm256_add_epi32(sum_lo, sum_hi);
+}
+
+static INLINE void variance16_kernel_avx2(
+    const uint8_t *const src, const int src_stride, const uint8_t *const ref,
+    const int ref_stride, __m256i *const sse, __m256i *const sum) {
+  const __m128i s0 = _mm_loadu_si128((__m128i const *)(src + 0 * src_stride));
+  const __m128i s1 = _mm_loadu_si128((__m128i const *)(src + 1 * src_stride));
+  const __m128i r0 = _mm_loadu_si128((__m128i const *)(ref + 0 * ref_stride));
+  const __m128i r1 = _mm_loadu_si128((__m128i const *)(ref + 1 * ref_stride));
+  const __m256i s = _mm256_inserti128_si256(_mm256_castsi128_si256(s0), s1, 1);
+  const __m256i r = _mm256_inserti128_si256(_mm256_castsi128_si256(r0), r1, 1);
+  variance_kernel_avx2(s, r, sse, sum);
+}
+
+static INLINE void variance32_kernel_avx2(const uint8_t *const src,
+                                          const uint8_t *const ref,
+                                          __m256i *const sse,
+                                          __m256i *const sum) {
+  const __m256i s = _mm256_loadu_si256((__m256i const *)(src));
+  const __m256i r = _mm256_loadu_si256((__m256i const *)(ref));
+  variance_kernel_avx2(s, r, sse, sum);
+}
+
+static INLINE void variance16_avx2(const uint8_t *src, const int src_stride,
+                                   const uint8_t *ref, const int ref_stride,
+                                   const int h, __m256i *const vsse,
+                                   __m256i *const vsum) {
+  int i;
+  *vsum = _mm256_setzero_si256();
+  *vsse = _mm256_setzero_si256();
+
+  for (i = 0; i < h; i += 2) {
+    variance16_kernel_avx2(src, src_stride, ref, ref_stride, vsse, vsum);
+    src += 2 * src_stride;
+    ref += 2 * ref_stride;
   }
 }
 
-static void get32x16var_avx2(const unsigned char *src_ptr, int source_stride,
-                             const unsigned char *ref_ptr, int recon_stride,
-                             unsigned int *sse, int *sum) {
-  unsigned int i, src_2strides, ref_2strides;
-  const __m256i adj_sub = _mm256_load_si256((__m256i const *)adjacent_sub_avx2);
-  __m256i sum_reg = _mm256_setzero_si256();
-  __m256i sse_reg = _mm256_setzero_si256();
+static INLINE void variance32_avx2(const uint8_t *src, const int src_stride,
+                                   const uint8_t *ref, const int ref_stride,
+                                   const int h, __m256i *const vsse,
+                                   __m256i *const vsum) {
+  int i;
+  *vsum = _mm256_setzero_si256();
+  *vsse = _mm256_setzero_si256();
 
-  // process 64 elements in an iteration
-  src_2strides = source_stride << 1;
-  ref_2strides = recon_stride << 1;
-  for (i = 0; i < 8; i++) {
-    const __m256i src0 = _mm256_loadu_si256((__m256i const *)(src_ptr));
-    const __m256i src1 =
-        _mm256_loadu_si256((__m256i const *)(src_ptr + source_stride));
-    const __m256i ref0 = _mm256_loadu_si256((__m256i const *)(ref_ptr));
-    const __m256i ref1 =
-        _mm256_loadu_si256((__m256i const *)(ref_ptr + recon_stride));
-
-    // unpack into pairs of source and reference values
-    const __m256i src_ref0 = _mm256_unpacklo_epi8(src0, ref0);
-    const __m256i src_ref1 = _mm256_unpackhi_epi8(src0, ref0);
-    const __m256i src_ref2 = _mm256_unpacklo_epi8(src1, ref1);
-    const __m256i src_ref3 = _mm256_unpackhi_epi8(src1, ref1);
-
-    // subtract adjacent elements using src*1 + ref*-1
-    const __m256i diff0 = _mm256_maddubs_epi16(src_ref0, adj_sub);
-    const __m256i diff1 = _mm256_maddubs_epi16(src_ref1, adj_sub);
-    const __m256i diff2 = _mm256_maddubs_epi16(src_ref2, adj_sub);
-    const __m256i diff3 = _mm256_maddubs_epi16(src_ref3, adj_sub);
-    const __m256i madd0 = _mm256_madd_epi16(diff0, diff0);
-    const __m256i madd1 = _mm256_madd_epi16(diff1, diff1);
-    const __m256i madd2 = _mm256_madd_epi16(diff2, diff2);
-    const __m256i madd3 = _mm256_madd_epi16(diff3, diff3);
-
-    // add to the running totals
-    sum_reg = _mm256_add_epi16(sum_reg, _mm256_add_epi16(diff0, diff1));
-    sum_reg = _mm256_add_epi16(sum_reg, _mm256_add_epi16(diff2, diff3));
-    sse_reg = _mm256_add_epi32(sse_reg, _mm256_add_epi32(madd0, madd1));
-    sse_reg = _mm256_add_epi32(sse_reg, _mm256_add_epi32(madd2, madd3));
-
-    src_ptr += src_2strides;
-    ref_ptr += ref_2strides;
+  for (i = 0; i < h; i++) {
+    variance32_kernel_avx2(src, ref, vsse, vsum);
+    src += src_stride;
+    ref += ref_stride;
   }
+}
 
-  {
-    // extract the low lane and add it to the high lane
-    const __m128i sum_reg_128 = _mm_add_epi16(
-        _mm256_castsi256_si128(sum_reg), _mm256_extractf128_si256(sum_reg, 1));
-    const __m128i sse_reg_128 = _mm_add_epi32(
-        _mm256_castsi256_si128(sse_reg), _mm256_extractf128_si256(sse_reg, 1));
+static INLINE void variance64_avx2(const uint8_t *src, const int src_stride,
+                                   const uint8_t *ref, const int ref_stride,
+                                   const int h, __m256i *const vsse,
+                                   __m256i *const vsum) {
+  int i;
+  *vsum = _mm256_setzero_si256();
 
-    // sum upper and lower 64 bits together and convert up to 32 bit values
-    const __m128i sum_reg_64 =
-        _mm_add_epi16(sum_reg_128, _mm_srli_si128(sum_reg_128, 8));
-    const __m128i sum_int32 = _mm_cvtepi16_epi32(sum_reg_64);
-
-    // unpack sse and sum registers and add
-    const __m128i sse_sum_lo = _mm_unpacklo_epi32(sse_reg_128, sum_int32);
-    const __m128i sse_sum_hi = _mm_unpackhi_epi32(sse_reg_128, sum_int32);
-    const __m128i sse_sum = _mm_add_epi32(sse_sum_lo, sse_sum_hi);
-
-    // perform the final summation and extract the results
-    const __m128i res = _mm_add_epi32(sse_sum, _mm_srli_si128(sse_sum, 8));
-    *((int *)sse) = _mm_cvtsi128_si32(res);
-    *((int *)sum) = _mm_extract_epi32(res, 1);
+  for (i = 0; i < h; i++) {
+    variance32_kernel_avx2(src + 0, ref + 0, vsse, vsum);
+    variance32_kernel_avx2(src + 32, ref + 32, vsse, vsum);
+    src += src_stride;
+    ref += ref_stride;
   }
 }
 
+void vpx_get16x16var_avx2(const uint8_t *src_ptr, int src_stride,
+                          const uint8_t *ref_ptr, int ref_stride,
+                          unsigned int *sse, int *sum) {
+  __m256i vsse, vsum;
+  variance16_avx2(src_ptr, src_stride, ref_ptr, ref_stride, 16, &vsse, &vsum);
+  variance_final_from_16bit_sum_avx2(vsse, vsum, sse, sum);
+}
+
 #define FILTER_SRC(filter)                               \
   /* filter the source */                                \
   exp_src_lo = _mm256_maddubs_epi16(exp_src_lo, filter); \
@@ -214,8 +224,9 @@
 
 static INLINE void spv32_x0_y0(const uint8_t *src, int src_stride,
                                const uint8_t *dst, int dst_stride,
-                               const uint8_t *sec, int sec_stride, int do_sec,
-                               int height, __m256i *sum_reg, __m256i *sse_reg) {
+                               const uint8_t *second_pred, int second_stride,
+                               int do_sec, int height, __m256i *sum_reg,
+                               __m256i *sse_reg) {
   const __m256i zero_reg = _mm256_setzero_si256();
   __m256i exp_src_lo, exp_src_hi, exp_dst_lo, exp_dst_hi;
   int i;
@@ -223,11 +234,11 @@
     const __m256i dst_reg = _mm256_loadu_si256((__m256i const *)dst);
     const __m256i src_reg = _mm256_loadu_si256((__m256i const *)src);
     if (do_sec) {
-      const __m256i sec_reg = _mm256_loadu_si256((__m256i const *)sec);
+      const __m256i sec_reg = _mm256_loadu_si256((__m256i const *)second_pred);
       const __m256i avg_reg = _mm256_avg_epu8(src_reg, sec_reg);
       exp_src_lo = _mm256_unpacklo_epi8(avg_reg, zero_reg);
       exp_src_hi = _mm256_unpackhi_epi8(avg_reg, zero_reg);
-      sec += sec_stride;
+      second_pred += second_stride;
     } else {
       exp_src_lo = _mm256_unpacklo_epi8(src_reg, zero_reg);
       exp_src_hi = _mm256_unpackhi_epi8(src_reg, zero_reg);
@@ -241,9 +252,10 @@
 // (x == 0, y == 4) or (x == 4, y == 0).  sstep determines the direction.
 static INLINE void spv32_half_zero(const uint8_t *src, int src_stride,
                                    const uint8_t *dst, int dst_stride,
-                                   const uint8_t *sec, int sec_stride,
-                                   int do_sec, int height, __m256i *sum_reg,
-                                   __m256i *sse_reg, int sstep) {
+                                   const uint8_t *second_pred,
+                                   int second_stride, int do_sec, int height,
+                                   __m256i *sum_reg, __m256i *sse_reg,
+                                   int sstep) {
   const __m256i zero_reg = _mm256_setzero_si256();
   __m256i exp_src_lo, exp_src_hi, exp_dst_lo, exp_dst_hi;
   int i;
@@ -253,11 +265,11 @@
     const __m256i src_1 = _mm256_loadu_si256((__m256i const *)(src + sstep));
     const __m256i src_avg = _mm256_avg_epu8(src_0, src_1);
     if (do_sec) {
-      const __m256i sec_reg = _mm256_loadu_si256((__m256i const *)sec);
+      const __m256i sec_reg = _mm256_loadu_si256((__m256i const *)second_pred);
       const __m256i avg_reg = _mm256_avg_epu8(src_avg, sec_reg);
       exp_src_lo = _mm256_unpacklo_epi8(avg_reg, zero_reg);
       exp_src_hi = _mm256_unpackhi_epi8(avg_reg, zero_reg);
-      sec += sec_stride;
+      second_pred += second_stride;
     } else {
       exp_src_lo = _mm256_unpacklo_epi8(src_avg, zero_reg);
       exp_src_hi = _mm256_unpackhi_epi8(src_avg, zero_reg);
@@ -270,24 +282,27 @@
 
 static INLINE void spv32_x0_y4(const uint8_t *src, int src_stride,
                                const uint8_t *dst, int dst_stride,
-                               const uint8_t *sec, int sec_stride, int do_sec,
-                               int height, __m256i *sum_reg, __m256i *sse_reg) {
-  spv32_half_zero(src, src_stride, dst, dst_stride, sec, sec_stride, do_sec,
-                  height, sum_reg, sse_reg, src_stride);
+                               const uint8_t *second_pred, int second_stride,
+                               int do_sec, int height, __m256i *sum_reg,
+                               __m256i *sse_reg) {
+  spv32_half_zero(src, src_stride, dst, dst_stride, second_pred, second_stride,
+                  do_sec, height, sum_reg, sse_reg, src_stride);
 }
 
 static INLINE void spv32_x4_y0(const uint8_t *src, int src_stride,
                                const uint8_t *dst, int dst_stride,
-                               const uint8_t *sec, int sec_stride, int do_sec,
-                               int height, __m256i *sum_reg, __m256i *sse_reg) {
-  spv32_half_zero(src, src_stride, dst, dst_stride, sec, sec_stride, do_sec,
-                  height, sum_reg, sse_reg, 1);
+                               const uint8_t *second_pred, int second_stride,
+                               int do_sec, int height, __m256i *sum_reg,
+                               __m256i *sse_reg) {
+  spv32_half_zero(src, src_stride, dst, dst_stride, second_pred, second_stride,
+                  do_sec, height, sum_reg, sse_reg, 1);
 }
 
 static INLINE void spv32_x4_y4(const uint8_t *src, int src_stride,
                                const uint8_t *dst, int dst_stride,
-                               const uint8_t *sec, int sec_stride, int do_sec,
-                               int height, __m256i *sum_reg, __m256i *sse_reg) {
+                               const uint8_t *second_pred, int second_stride,
+                               int do_sec, int height, __m256i *sum_reg,
+                               __m256i *sse_reg) {
   const __m256i zero_reg = _mm256_setzero_si256();
   const __m256i src_a = _mm256_loadu_si256((__m256i const *)src);
   const __m256i src_b = _mm256_loadu_si256((__m256i const *)(src + 1));
@@ -304,11 +319,11 @@
     prev_src_avg = src_avg;
 
     if (do_sec) {
-      const __m256i sec_reg = _mm256_loadu_si256((__m256i const *)sec);
+      const __m256i sec_reg = _mm256_loadu_si256((__m256i const *)second_pred);
       const __m256i avg_reg = _mm256_avg_epu8(current_avg, sec_reg);
       exp_src_lo = _mm256_unpacklo_epi8(avg_reg, zero_reg);
       exp_src_hi = _mm256_unpackhi_epi8(avg_reg, zero_reg);
-      sec += sec_stride;
+      second_pred += second_stride;
     } else {
       exp_src_lo = _mm256_unpacklo_epi8(current_avg, zero_reg);
       exp_src_hi = _mm256_unpackhi_epi8(current_avg, zero_reg);
@@ -323,9 +338,10 @@
 // (x == 0, y == bil) or (x == 4, y == bil).  sstep determines the direction.
 static INLINE void spv32_bilin_zero(const uint8_t *src, int src_stride,
                                     const uint8_t *dst, int dst_stride,
-                                    const uint8_t *sec, int sec_stride,
-                                    int do_sec, int height, __m256i *sum_reg,
-                                    __m256i *sse_reg, int offset, int sstep) {
+                                    const uint8_t *second_pred,
+                                    int second_stride, int do_sec, int height,
+                                    __m256i *sum_reg, __m256i *sse_reg,
+                                    int offset, int sstep) {
   const __m256i zero_reg = _mm256_setzero_si256();
   const __m256i pw8 = _mm256_set1_epi16(8);
   const __m256i filter = _mm256_load_si256(
@@ -341,10 +357,10 @@
 
     FILTER_SRC(filter)
     if (do_sec) {
-      const __m256i sec_reg = _mm256_loadu_si256((__m256i const *)sec);
+      const __m256i sec_reg = _mm256_loadu_si256((__m256i const *)second_pred);
       const __m256i exp_src = _mm256_packus_epi16(exp_src_lo, exp_src_hi);
       const __m256i avg_reg = _mm256_avg_epu8(exp_src, sec_reg);
-      sec += sec_stride;
+      second_pred += second_stride;
       exp_src_lo = _mm256_unpacklo_epi8(avg_reg, zero_reg);
       exp_src_hi = _mm256_unpackhi_epi8(avg_reg, zero_reg);
     }
@@ -356,27 +372,27 @@
 
 static INLINE void spv32_x0_yb(const uint8_t *src, int src_stride,
                                const uint8_t *dst, int dst_stride,
-                               const uint8_t *sec, int sec_stride, int do_sec,
-                               int height, __m256i *sum_reg, __m256i *sse_reg,
-                               int y_offset) {
-  spv32_bilin_zero(src, src_stride, dst, dst_stride, sec, sec_stride, do_sec,
-                   height, sum_reg, sse_reg, y_offset, src_stride);
+                               const uint8_t *second_pred, int second_stride,
+                               int do_sec, int height, __m256i *sum_reg,
+                               __m256i *sse_reg, int y_offset) {
+  spv32_bilin_zero(src, src_stride, dst, dst_stride, second_pred, second_stride,
+                   do_sec, height, sum_reg, sse_reg, y_offset, src_stride);
 }
 
 static INLINE void spv32_xb_y0(const uint8_t *src, int src_stride,
                                const uint8_t *dst, int dst_stride,
-                               const uint8_t *sec, int sec_stride, int do_sec,
-                               int height, __m256i *sum_reg, __m256i *sse_reg,
-                               int x_offset) {
-  spv32_bilin_zero(src, src_stride, dst, dst_stride, sec, sec_stride, do_sec,
-                   height, sum_reg, sse_reg, x_offset, 1);
+                               const uint8_t *second_pred, int second_stride,
+                               int do_sec, int height, __m256i *sum_reg,
+                               __m256i *sse_reg, int x_offset) {
+  spv32_bilin_zero(src, src_stride, dst, dst_stride, second_pred, second_stride,
+                   do_sec, height, sum_reg, sse_reg, x_offset, 1);
 }
 
 static INLINE void spv32_x4_yb(const uint8_t *src, int src_stride,
                                const uint8_t *dst, int dst_stride,
-                               const uint8_t *sec, int sec_stride, int do_sec,
-                               int height, __m256i *sum_reg, __m256i *sse_reg,
-                               int y_offset) {
+                               const uint8_t *second_pred, int second_stride,
+                               int do_sec, int height, __m256i *sum_reg,
+                               __m256i *sse_reg, int y_offset) {
   const __m256i zero_reg = _mm256_setzero_si256();
   const __m256i pw8 = _mm256_set1_epi16(8);
   const __m256i filter = _mm256_load_si256(
@@ -398,12 +414,12 @@
 
     FILTER_SRC(filter)
     if (do_sec) {
-      const __m256i sec_reg = _mm256_loadu_si256((__m256i const *)sec);
+      const __m256i sec_reg = _mm256_loadu_si256((__m256i const *)second_pred);
       const __m256i exp_src_avg = _mm256_packus_epi16(exp_src_lo, exp_src_hi);
       const __m256i avg_reg = _mm256_avg_epu8(exp_src_avg, sec_reg);
       exp_src_lo = _mm256_unpacklo_epi8(avg_reg, zero_reg);
       exp_src_hi = _mm256_unpackhi_epi8(avg_reg, zero_reg);
-      sec += sec_stride;
+      second_pred += second_stride;
     }
     CALC_SUM_SSE_INSIDE_LOOP
     dst += dst_stride;
@@ -413,9 +429,9 @@
 
 static INLINE void spv32_xb_y4(const uint8_t *src, int src_stride,
                                const uint8_t *dst, int dst_stride,
-                               const uint8_t *sec, int sec_stride, int do_sec,
-                               int height, __m256i *sum_reg, __m256i *sse_reg,
-                               int x_offset) {
+                               const uint8_t *second_pred, int second_stride,
+                               int do_sec, int height, __m256i *sum_reg,
+                               __m256i *sse_reg, int x_offset) {
   const __m256i zero_reg = _mm256_setzero_si256();
   const __m256i pw8 = _mm256_set1_epi16(8);
   const __m256i filter = _mm256_load_si256(
@@ -446,11 +462,11 @@
     src_pack = _mm256_avg_epu8(src_pack, src_reg);
 
     if (do_sec) {
-      const __m256i sec_reg = _mm256_loadu_si256((__m256i const *)sec);
+      const __m256i sec_reg = _mm256_loadu_si256((__m256i const *)second_pred);
       const __m256i avg_pack = _mm256_avg_epu8(src_pack, sec_reg);
       exp_src_lo = _mm256_unpacklo_epi8(avg_pack, zero_reg);
       exp_src_hi = _mm256_unpackhi_epi8(avg_pack, zero_reg);
-      sec += sec_stride;
+      second_pred += second_stride;
     } else {
       exp_src_lo = _mm256_unpacklo_epi8(src_pack, zero_reg);
       exp_src_hi = _mm256_unpackhi_epi8(src_pack, zero_reg);
@@ -464,9 +480,9 @@
 
 static INLINE void spv32_xb_yb(const uint8_t *src, int src_stride,
                                const uint8_t *dst, int dst_stride,
-                               const uint8_t *sec, int sec_stride, int do_sec,
-                               int height, __m256i *sum_reg, __m256i *sse_reg,
-                               int x_offset, int y_offset) {
+                               const uint8_t *second_pred, int second_stride,
+                               int do_sec, int height, __m256i *sum_reg,
+                               __m256i *sse_reg, int x_offset, int y_offset) {
   const __m256i zero_reg = _mm256_setzero_si256();
   const __m256i pw8 = _mm256_set1_epi16(8);
   const __m256i xfilter = _mm256_load_si256(
@@ -501,12 +517,12 @@
 
     FILTER_SRC(yfilter)
     if (do_sec) {
-      const __m256i sec_reg = _mm256_loadu_si256((__m256i const *)sec);
+      const __m256i sec_reg = _mm256_loadu_si256((__m256i const *)second_pred);
       const __m256i exp_src = _mm256_packus_epi16(exp_src_lo, exp_src_hi);
       const __m256i avg_reg = _mm256_avg_epu8(exp_src, sec_reg);
       exp_src_lo = _mm256_unpacklo_epi8(avg_reg, zero_reg);
       exp_src_hi = _mm256_unpackhi_epi8(avg_reg, zero_reg);
-      sec += sec_stride;
+      second_pred += second_stride;
     }
 
     prev_src_pack = src_pack;
@@ -520,7 +536,7 @@
 static INLINE int sub_pix_var32xh(const uint8_t *src, int src_stride,
                                   int x_offset, int y_offset,
                                   const uint8_t *dst, int dst_stride,
-                                  const uint8_t *sec, int sec_stride,
+                                  const uint8_t *second_pred, int second_stride,
                                   int do_sec, int height, unsigned int *sse) {
   const __m256i zero_reg = _mm256_setzero_si256();
   __m256i sum_reg = _mm256_setzero_si256();
@@ -530,44 +546,44 @@
   // x_offset = 0 and y_offset = 0
   if (x_offset == 0) {
     if (y_offset == 0) {
-      spv32_x0_y0(src, src_stride, dst, dst_stride, sec, sec_stride, do_sec,
-                  height, &sum_reg, &sse_reg);
+      spv32_x0_y0(src, src_stride, dst, dst_stride, second_pred, second_stride,
+                  do_sec, height, &sum_reg, &sse_reg);
       // x_offset = 0 and y_offset = 4
     } else if (y_offset == 4) {
-      spv32_x0_y4(src, src_stride, dst, dst_stride, sec, sec_stride, do_sec,
-                  height, &sum_reg, &sse_reg);
+      spv32_x0_y4(src, src_stride, dst, dst_stride, second_pred, second_stride,
+                  do_sec, height, &sum_reg, &sse_reg);
       // x_offset = 0 and y_offset = bilin interpolation
     } else {
-      spv32_x0_yb(src, src_stride, dst, dst_stride, sec, sec_stride, do_sec,
-                  height, &sum_reg, &sse_reg, y_offset);
+      spv32_x0_yb(src, src_stride, dst, dst_stride, second_pred, second_stride,
+                  do_sec, height, &sum_reg, &sse_reg, y_offset);
     }
     // x_offset = 4  and y_offset = 0
   } else if (x_offset == 4) {
     if (y_offset == 0) {
-      spv32_x4_y0(src, src_stride, dst, dst_stride, sec, sec_stride, do_sec,
-                  height, &sum_reg, &sse_reg);
+      spv32_x4_y0(src, src_stride, dst, dst_stride, second_pred, second_stride,
+                  do_sec, height, &sum_reg, &sse_reg);
       // x_offset = 4  and y_offset = 4
     } else if (y_offset == 4) {
-      spv32_x4_y4(src, src_stride, dst, dst_stride, sec, sec_stride, do_sec,
-                  height, &sum_reg, &sse_reg);
+      spv32_x4_y4(src, src_stride, dst, dst_stride, second_pred, second_stride,
+                  do_sec, height, &sum_reg, &sse_reg);
       // x_offset = 4  and y_offset = bilin interpolation
     } else {
-      spv32_x4_yb(src, src_stride, dst, dst_stride, sec, sec_stride, do_sec,
-                  height, &sum_reg, &sse_reg, y_offset);
+      spv32_x4_yb(src, src_stride, dst, dst_stride, second_pred, second_stride,
+                  do_sec, height, &sum_reg, &sse_reg, y_offset);
     }
     // x_offset = bilin interpolation and y_offset = 0
   } else {
     if (y_offset == 0) {
-      spv32_xb_y0(src, src_stride, dst, dst_stride, sec, sec_stride, do_sec,
-                  height, &sum_reg, &sse_reg, x_offset);
+      spv32_xb_y0(src, src_stride, dst, dst_stride, second_pred, second_stride,
+                  do_sec, height, &sum_reg, &sse_reg, x_offset);
       // x_offset = bilin interpolation and y_offset = 4
     } else if (y_offset == 4) {
-      spv32_xb_y4(src, src_stride, dst, dst_stride, sec, sec_stride, do_sec,
-                  height, &sum_reg, &sse_reg, x_offset);
+      spv32_xb_y4(src, src_stride, dst, dst_stride, second_pred, second_stride,
+                  do_sec, height, &sum_reg, &sse_reg, x_offset);
       // x_offset = bilin interpolation and y_offset = bilin interpolation
     } else {
-      spv32_xb_yb(src, src_stride, dst, dst_stride, sec, sec_stride, do_sec,
-                  height, &sum_reg, &sse_reg, x_offset, y_offset);
+      spv32_xb_yb(src, src_stride, dst, dst_stride, second_pred, second_stride,
+                  do_sec, height, &sum_reg, &sse_reg, x_offset, y_offset);
     }
   }
   CALC_SUM_AND_SSE
@@ -583,127 +599,177 @@
 
 static unsigned int sub_pixel_avg_variance32xh_avx2(
     const uint8_t *src, int src_stride, int x_offset, int y_offset,
-    const uint8_t *dst, int dst_stride, const uint8_t *sec, int sec_stride,
-    int height, unsigned int *sse) {
+    const uint8_t *dst, int dst_stride, const uint8_t *second_pred,
+    int second_stride, int height, unsigned int *sse) {
   return sub_pix_var32xh(src, src_stride, x_offset, y_offset, dst, dst_stride,
-                         sec, sec_stride, 1, height, sse);
+                         second_pred, second_stride, 1, height, sse);
 }
 
-typedef void (*get_var_avx2)(const uint8_t *src, int src_stride,
-                             const uint8_t *ref, int ref_stride,
+typedef void (*get_var_avx2)(const uint8_t *src_ptr, int src_stride,
+                             const uint8_t *ref_ptr, int ref_stride,
                              unsigned int *sse, int *sum);
 
-static void variance_avx2(const uint8_t *src, int src_stride,
-                          const uint8_t *ref, int ref_stride, int w, int h,
-                          unsigned int *sse, int *sum, get_var_avx2 var_fn,
-                          int block_size) {
-  int i, j;
-
-  *sse = 0;
-  *sum = 0;
-
-  for (i = 0; i < h; i += 16) {
-    for (j = 0; j < w; j += block_size) {
-      unsigned int sse0;
-      int sum0;
-      var_fn(&src[src_stride * i + j], src_stride, &ref[ref_stride * i + j],
-             ref_stride, &sse0, &sum0);
-      *sse += sse0;
-      *sum += sum0;
-    }
-  }
+unsigned int vpx_variance16x8_avx2(const uint8_t *src_ptr, int src_stride,
+                                   const uint8_t *ref_ptr, int ref_stride,
+                                   unsigned int *sse) {
+  int sum;
+  __m256i vsse, vsum;
+  variance16_avx2(src_ptr, src_stride, ref_ptr, ref_stride, 8, &vsse, &vsum);
+  variance_final_from_16bit_sum_avx2(vsse, vsum, sse, &sum);
+  return *sse - (uint32_t)(((int64_t)sum * sum) >> 7);
 }
 
-unsigned int vpx_variance16x16_avx2(const uint8_t *src, int src_stride,
-                                    const uint8_t *ref, int ref_stride,
+unsigned int vpx_variance16x16_avx2(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
                                     unsigned int *sse) {
   int sum;
-  variance_avx2(src, src_stride, ref, ref_stride, 16, 16, sse, &sum,
-                vpx_get16x16var_avx2, 16);
+  __m256i vsse, vsum;
+  variance16_avx2(src_ptr, src_stride, ref_ptr, ref_stride, 16, &vsse, &vsum);
+  variance_final_from_16bit_sum_avx2(vsse, vsum, sse, &sum);
   return *sse - (uint32_t)(((int64_t)sum * sum) >> 8);
 }
 
-unsigned int vpx_mse16x16_avx2(const uint8_t *src, int src_stride,
-                               const uint8_t *ref, int ref_stride,
-                               unsigned int *sse) {
-  int sum;
-  vpx_get16x16var_avx2(src, src_stride, ref, ref_stride, sse, &sum);
-  return *sse;
-}
-
-unsigned int vpx_variance32x16_avx2(const uint8_t *src, int src_stride,
-                                    const uint8_t *ref, int ref_stride,
+unsigned int vpx_variance16x32_avx2(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
                                     unsigned int *sse) {
   int sum;
-  variance_avx2(src, src_stride, ref, ref_stride, 32, 16, sse, &sum,
-                get32x16var_avx2, 32);
+  __m256i vsse, vsum;
+  variance16_avx2(src_ptr, src_stride, ref_ptr, ref_stride, 32, &vsse, &vsum);
+  variance_final_from_16bit_sum_avx2(vsse, vsum, sse, &sum);
   return *sse - (uint32_t)(((int64_t)sum * sum) >> 9);
 }
 
-unsigned int vpx_variance32x32_avx2(const uint8_t *src, int src_stride,
-                                    const uint8_t *ref, int ref_stride,
+unsigned int vpx_variance32x16_avx2(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
                                     unsigned int *sse) {
   int sum;
-  variance_avx2(src, src_stride, ref, ref_stride, 32, 32, sse, &sum,
-                get32x16var_avx2, 32);
+  __m256i vsse, vsum;
+  variance32_avx2(src_ptr, src_stride, ref_ptr, ref_stride, 16, &vsse, &vsum);
+  variance_final_from_16bit_sum_avx2(vsse, vsum, sse, &sum);
+  return *sse - (uint32_t)(((int64_t)sum * sum) >> 9);
+}
+
+unsigned int vpx_variance32x32_avx2(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
+                                    unsigned int *sse) {
+  int sum;
+  __m256i vsse, vsum;
+  __m128i vsum_128;
+  variance32_avx2(src_ptr, src_stride, ref_ptr, ref_stride, 32, &vsse, &vsum);
+  vsum_128 = _mm_add_epi16(_mm256_castsi256_si128(vsum),
+                           _mm256_extractf128_si256(vsum, 1));
+  vsum_128 = _mm_add_epi32(_mm_cvtepi16_epi32(vsum_128),
+                           _mm_cvtepi16_epi32(_mm_srli_si128(vsum_128, 8)));
+  variance_final_from_32bit_sum_avx2(vsse, vsum_128, sse, &sum);
   return *sse - (uint32_t)(((int64_t)sum * sum) >> 10);
 }
 
-unsigned int vpx_variance64x64_avx2(const uint8_t *src, int src_stride,
-                                    const uint8_t *ref, int ref_stride,
+unsigned int vpx_variance32x64_avx2(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
                                     unsigned int *sse) {
   int sum;
-  variance_avx2(src, src_stride, ref, ref_stride, 64, 64, sse, &sum,
-                get32x16var_avx2, 32);
-  return *sse - (uint32_t)(((int64_t)sum * sum) >> 12);
-}
-
-unsigned int vpx_variance64x32_avx2(const uint8_t *src, int src_stride,
-                                    const uint8_t *ref, int ref_stride,
-                                    unsigned int *sse) {
-  int sum;
-  variance_avx2(src, src_stride, ref, ref_stride, 64, 32, sse, &sum,
-                get32x16var_avx2, 32);
+  __m256i vsse, vsum;
+  __m128i vsum_128;
+  variance32_avx2(src_ptr, src_stride, ref_ptr, ref_stride, 64, &vsse, &vsum);
+  vsum = sum_to_32bit_avx2(vsum);
+  vsum_128 = _mm_add_epi32(_mm256_castsi256_si128(vsum),
+                           _mm256_extractf128_si256(vsum, 1));
+  variance_final_from_32bit_sum_avx2(vsse, vsum_128, sse, &sum);
   return *sse - (uint32_t)(((int64_t)sum * sum) >> 11);
 }
 
-unsigned int vpx_sub_pixel_variance64x64_avx2(const uint8_t *src,
-                                              int src_stride, int x_offset,
-                                              int y_offset, const uint8_t *dst,
-                                              int dst_stride,
-                                              unsigned int *sse) {
+unsigned int vpx_variance64x32_avx2(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
+                                    unsigned int *sse) {
+  __m256i vsse = _mm256_setzero_si256();
+  __m256i vsum = _mm256_setzero_si256();
+  __m128i vsum_128;
+  int sum;
+  variance64_avx2(src_ptr, src_stride, ref_ptr, ref_stride, 32, &vsse, &vsum);
+  vsum = sum_to_32bit_avx2(vsum);
+  vsum_128 = _mm_add_epi32(_mm256_castsi256_si128(vsum),
+                           _mm256_extractf128_si256(vsum, 1));
+  variance_final_from_32bit_sum_avx2(vsse, vsum_128, sse, &sum);
+  return *sse - (uint32_t)(((int64_t)sum * sum) >> 11);
+}
+
+unsigned int vpx_variance64x64_avx2(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
+                                    unsigned int *sse) {
+  __m256i vsse = _mm256_setzero_si256();
+  __m256i vsum = _mm256_setzero_si256();
+  __m128i vsum_128;
+  int sum;
+  int i = 0;
+
+  for (i = 0; i < 2; i++) {
+    __m256i vsum16;
+    variance64_avx2(src_ptr + 32 * i * src_stride, src_stride,
+                    ref_ptr + 32 * i * ref_stride, ref_stride, 32, &vsse,
+                    &vsum16);
+    vsum = _mm256_add_epi32(vsum, sum_to_32bit_avx2(vsum16));
+  }
+  vsum_128 = _mm_add_epi32(_mm256_castsi256_si128(vsum),
+                           _mm256_extractf128_si256(vsum, 1));
+  variance_final_from_32bit_sum_avx2(vsse, vsum_128, sse, &sum);
+  return *sse - (unsigned int)(((int64_t)sum * sum) >> 12);
+}
+
+unsigned int vpx_mse16x8_avx2(const uint8_t *src_ptr, int src_stride,
+                              const uint8_t *ref_ptr, int ref_stride,
+                              unsigned int *sse) {
+  int sum;
+  __m256i vsse, vsum;
+  variance16_avx2(src_ptr, src_stride, ref_ptr, ref_stride, 8, &vsse, &vsum);
+  variance_final_from_16bit_sum_avx2(vsse, vsum, sse, &sum);
+  return *sse;
+}
+
+unsigned int vpx_mse16x16_avx2(const uint8_t *src_ptr, int src_stride,
+                               const uint8_t *ref_ptr, int ref_stride,
+                               unsigned int *sse) {
+  int sum;
+  __m256i vsse, vsum;
+  variance16_avx2(src_ptr, src_stride, ref_ptr, ref_stride, 16, &vsse, &vsum);
+  variance_final_from_16bit_sum_avx2(vsse, vsum, sse, &sum);
+  return *sse;
+}
+
+unsigned int vpx_sub_pixel_variance64x64_avx2(
+    const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,
+    const uint8_t *ref_ptr, int ref_stride, unsigned int *sse) {
   unsigned int sse1;
   const int se1 = sub_pixel_variance32xh_avx2(
-      src, src_stride, x_offset, y_offset, dst, dst_stride, 64, &sse1);
+      src_ptr, src_stride, x_offset, y_offset, ref_ptr, ref_stride, 64, &sse1);
   unsigned int sse2;
   const int se2 =
-      sub_pixel_variance32xh_avx2(src + 32, src_stride, x_offset, y_offset,
-                                  dst + 32, dst_stride, 64, &sse2);
+      sub_pixel_variance32xh_avx2(src_ptr + 32, src_stride, x_offset, y_offset,
+                                  ref_ptr + 32, ref_stride, 64, &sse2);
   const int se = se1 + se2;
   *sse = sse1 + sse2;
   return *sse - (uint32_t)(((int64_t)se * se) >> 12);
 }
 
-unsigned int vpx_sub_pixel_variance32x32_avx2(const uint8_t *src,
-                                              int src_stride, int x_offset,
-                                              int y_offset, const uint8_t *dst,
-                                              int dst_stride,
-                                              unsigned int *sse) {
+unsigned int vpx_sub_pixel_variance32x32_avx2(
+    const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,
+    const uint8_t *ref_ptr, int ref_stride, unsigned int *sse) {
   const int se = sub_pixel_variance32xh_avx2(
-      src, src_stride, x_offset, y_offset, dst, dst_stride, 32, sse);
+      src_ptr, src_stride, x_offset, y_offset, ref_ptr, ref_stride, 32, sse);
   return *sse - (uint32_t)(((int64_t)se * se) >> 10);
 }
 
 unsigned int vpx_sub_pixel_avg_variance64x64_avx2(
-    const uint8_t *src, int src_stride, int x_offset, int y_offset,
-    const uint8_t *dst, int dst_stride, unsigned int *sse, const uint8_t *sec) {
+    const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,
+    const uint8_t *ref_ptr, int ref_stride, unsigned int *sse,
+    const uint8_t *second_pred) {
   unsigned int sse1;
-  const int se1 = sub_pixel_avg_variance32xh_avx2(
-      src, src_stride, x_offset, y_offset, dst, dst_stride, sec, 64, 64, &sse1);
+  const int se1 = sub_pixel_avg_variance32xh_avx2(src_ptr, src_stride, x_offset,
+                                                  y_offset, ref_ptr, ref_stride,
+                                                  second_pred, 64, 64, &sse1);
   unsigned int sse2;
   const int se2 = sub_pixel_avg_variance32xh_avx2(
-      src + 32, src_stride, x_offset, y_offset, dst + 32, dst_stride, sec + 32,
-      64, 64, &sse2);
+      src_ptr + 32, src_stride, x_offset, y_offset, ref_ptr + 32, ref_stride,
+      second_pred + 32, 64, 64, &sse2);
   const int se = se1 + se2;
 
   *sse = sse1 + sse2;
@@ -712,10 +778,12 @@
 }
 
 unsigned int vpx_sub_pixel_avg_variance32x32_avx2(
-    const uint8_t *src, int src_stride, int x_offset, int y_offset,
-    const uint8_t *dst, int dst_stride, unsigned int *sse, const uint8_t *sec) {
+    const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset,
+    const uint8_t *ref_ptr, int ref_stride, unsigned int *sse,
+    const uint8_t *second_pred) {
   // Process 32 elements in parallel.
-  const int se = sub_pixel_avg_variance32xh_avx2(
-      src, src_stride, x_offset, y_offset, dst, dst_stride, sec, 32, 32, sse);
+  const int se = sub_pixel_avg_variance32xh_avx2(src_ptr, src_stride, x_offset,
+                                                 y_offset, ref_ptr, ref_stride,
+                                                 second_pred, 32, 32, sse);
   return *sse - (uint32_t)(((int64_t)se * se) >> 10);
 }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/variance_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/variance_sse2.c
index 8d8bf18..37ef64e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/variance_sse2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/variance_sse2.c
@@ -8,312 +8,426 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
+#include <assert.h>
 #include <emmintrin.h>  // SSE2
 
 #include "./vpx_config.h"
 #include "./vpx_dsp_rtcd.h"
-
 #include "vpx_ports/mem.h"
+#include "vpx_dsp/x86/mem_sse2.h"
 
-typedef void (*getNxMvar_fn_t)(const unsigned char *src, int src_stride,
-                               const unsigned char *ref, int ref_stride,
-                               unsigned int *sse, int *sum);
+static INLINE unsigned int add32x4_sse2(__m128i val) {
+  val = _mm_add_epi32(val, _mm_srli_si128(val, 8));
+  val = _mm_add_epi32(val, _mm_srli_si128(val, 4));
+  return _mm_cvtsi128_si32(val);
+}
 
-unsigned int vpx_get_mb_ss_sse2(const int16_t *src) {
+unsigned int vpx_get_mb_ss_sse2(const int16_t *src_ptr) {
   __m128i vsum = _mm_setzero_si128();
   int i;
 
   for (i = 0; i < 32; ++i) {
-    const __m128i v = _mm_loadu_si128((const __m128i *)src);
+    const __m128i v = _mm_loadu_si128((const __m128i *)src_ptr);
     vsum = _mm_add_epi32(vsum, _mm_madd_epi16(v, v));
-    src += 8;
+    src_ptr += 8;
   }
 
-  vsum = _mm_add_epi32(vsum, _mm_srli_si128(vsum, 8));
-  vsum = _mm_add_epi32(vsum, _mm_srli_si128(vsum, 4));
-  return _mm_cvtsi128_si32(vsum);
+  return add32x4_sse2(vsum);
 }
 
-#define READ64(p, stride, i)                                  \
-  _mm_unpacklo_epi8(                                          \
-      _mm_cvtsi32_si128(*(const uint32_t *)(p + i * stride)), \
-      _mm_cvtsi32_si128(*(const uint32_t *)(p + (i + 1) * stride)))
+static INLINE __m128i load4x2_sse2(const uint8_t *const p, const int stride) {
+  const __m128i p0 = _mm_cvtsi32_si128(loadu_uint32(p + 0 * stride));
+  const __m128i p1 = _mm_cvtsi32_si128(loadu_uint32(p + 1 * stride));
+  const __m128i p01 = _mm_unpacklo_epi32(p0, p1);
+  return _mm_unpacklo_epi8(p01, _mm_setzero_si128());
+}
 
-static void get4x4var_sse2(const uint8_t *src, int src_stride,
-                           const uint8_t *ref, int ref_stride,
-                           unsigned int *sse, int *sum) {
-  const __m128i zero = _mm_setzero_si128();
-  const __m128i src0 = _mm_unpacklo_epi8(READ64(src, src_stride, 0), zero);
-  const __m128i src1 = _mm_unpacklo_epi8(READ64(src, src_stride, 2), zero);
-  const __m128i ref0 = _mm_unpacklo_epi8(READ64(ref, ref_stride, 0), zero);
-  const __m128i ref1 = _mm_unpacklo_epi8(READ64(ref, ref_stride, 2), zero);
-  const __m128i diff0 = _mm_sub_epi16(src0, ref0);
-  const __m128i diff1 = _mm_sub_epi16(src1, ref1);
+static INLINE void variance_kernel_sse2(const __m128i src_ptr,
+                                        const __m128i ref_ptr,
+                                        __m128i *const sse,
+                                        __m128i *const sum) {
+  const __m128i diff = _mm_sub_epi16(src_ptr, ref_ptr);
+  *sse = _mm_add_epi32(*sse, _mm_madd_epi16(diff, diff));
+  *sum = _mm_add_epi16(*sum, diff);
+}
 
-  // sum
-  __m128i vsum = _mm_add_epi16(diff0, diff1);
+// Can handle 128 pixels' diff sum (such as 8x16 or 16x8)
+// Slightly faster than variance_final_256_pel_sse2()
+static INLINE void variance_final_128_pel_sse2(__m128i vsse, __m128i vsum,
+                                               unsigned int *const sse,
+                                               int *const sum) {
+  *sse = add32x4_sse2(vsse);
+
   vsum = _mm_add_epi16(vsum, _mm_srli_si128(vsum, 8));
   vsum = _mm_add_epi16(vsum, _mm_srli_si128(vsum, 4));
   vsum = _mm_add_epi16(vsum, _mm_srli_si128(vsum, 2));
   *sum = (int16_t)_mm_extract_epi16(vsum, 0);
-
-  // sse
-  vsum =
-      _mm_add_epi32(_mm_madd_epi16(diff0, diff0), _mm_madd_epi16(diff1, diff1));
-  vsum = _mm_add_epi32(vsum, _mm_srli_si128(vsum, 8));
-  vsum = _mm_add_epi32(vsum, _mm_srli_si128(vsum, 4));
-  *sse = _mm_cvtsi128_si32(vsum);
 }
 
-void vpx_get8x8var_sse2(const uint8_t *src, int src_stride, const uint8_t *ref,
-                        int ref_stride, unsigned int *sse, int *sum) {
-  const __m128i zero = _mm_setzero_si128();
-  __m128i vsum = _mm_setzero_si128();
-  __m128i vsse = _mm_setzero_si128();
-  int i;
+// Can handle 256 pixels' diff sum (such as 16x16)
+static INLINE void variance_final_256_pel_sse2(__m128i vsse, __m128i vsum,
+                                               unsigned int *const sse,
+                                               int *const sum) {
+  *sse = add32x4_sse2(vsse);
 
-  for (i = 0; i < 8; i += 2) {
-    const __m128i src0 = _mm_unpacklo_epi8(
-        _mm_loadl_epi64((const __m128i *)(src + i * src_stride)), zero);
-    const __m128i ref0 = _mm_unpacklo_epi8(
-        _mm_loadl_epi64((const __m128i *)(ref + i * ref_stride)), zero);
-    const __m128i diff0 = _mm_sub_epi16(src0, ref0);
-
-    const __m128i src1 = _mm_unpacklo_epi8(
-        _mm_loadl_epi64((const __m128i *)(src + (i + 1) * src_stride)), zero);
-    const __m128i ref1 = _mm_unpacklo_epi8(
-        _mm_loadl_epi64((const __m128i *)(ref + (i + 1) * ref_stride)), zero);
-    const __m128i diff1 = _mm_sub_epi16(src1, ref1);
-
-    vsum = _mm_add_epi16(vsum, diff0);
-    vsum = _mm_add_epi16(vsum, diff1);
-    vsse = _mm_add_epi32(vsse, _mm_madd_epi16(diff0, diff0));
-    vsse = _mm_add_epi32(vsse, _mm_madd_epi16(diff1, diff1));
-  }
-
-  // sum
   vsum = _mm_add_epi16(vsum, _mm_srli_si128(vsum, 8));
   vsum = _mm_add_epi16(vsum, _mm_srli_si128(vsum, 4));
-  vsum = _mm_add_epi16(vsum, _mm_srli_si128(vsum, 2));
   *sum = (int16_t)_mm_extract_epi16(vsum, 0);
-
-  // sse
-  vsse = _mm_add_epi32(vsse, _mm_srli_si128(vsse, 8));
-  vsse = _mm_add_epi32(vsse, _mm_srli_si128(vsse, 4));
-  *sse = _mm_cvtsi128_si32(vsse);
+  *sum += (int16_t)_mm_extract_epi16(vsum, 1);
 }
 
-void vpx_get16x16var_sse2(const uint8_t *src, int src_stride,
-                          const uint8_t *ref, int ref_stride, unsigned int *sse,
-                          int *sum) {
-  const __m128i zero = _mm_setzero_si128();
-  __m128i vsum = _mm_setzero_si128();
-  __m128i vsse = _mm_setzero_si128();
+// Can handle 512 pixels' diff sum (such as 16x32 or 32x16)
+static INLINE void variance_final_512_pel_sse2(__m128i vsse, __m128i vsum,
+                                               unsigned int *const sse,
+                                               int *const sum) {
+  *sse = add32x4_sse2(vsse);
+
+  vsum = _mm_add_epi16(vsum, _mm_srli_si128(vsum, 8));
+  vsum = _mm_unpacklo_epi16(vsum, vsum);
+  vsum = _mm_srai_epi32(vsum, 16);
+  *sum = add32x4_sse2(vsum);
+}
+
+static INLINE __m128i sum_to_32bit_sse2(const __m128i sum) {
+  const __m128i sum_lo = _mm_srai_epi32(_mm_unpacklo_epi16(sum, sum), 16);
+  const __m128i sum_hi = _mm_srai_epi32(_mm_unpackhi_epi16(sum, sum), 16);
+  return _mm_add_epi32(sum_lo, sum_hi);
+}
+
+// Can handle 1024 pixels' diff sum (such as 32x32)
+static INLINE int sum_final_sse2(const __m128i sum) {
+  const __m128i t = sum_to_32bit_sse2(sum);
+  return add32x4_sse2(t);
+}
+
+static INLINE void variance4_sse2(const uint8_t *src_ptr, const int src_stride,
+                                  const uint8_t *ref_ptr, const int ref_stride,
+                                  const int h, __m128i *const sse,
+                                  __m128i *const sum) {
   int i;
 
-  for (i = 0; i < 16; ++i) {
-    const __m128i s = _mm_loadu_si128((const __m128i *)src);
-    const __m128i r = _mm_loadu_si128((const __m128i *)ref);
+  assert(h <= 256);  // May overflow for larger height.
+  *sse = _mm_setzero_si128();
+  *sum = _mm_setzero_si128();
 
-    const __m128i src0 = _mm_unpacklo_epi8(s, zero);
-    const __m128i ref0 = _mm_unpacklo_epi8(r, zero);
-    const __m128i diff0 = _mm_sub_epi16(src0, ref0);
+  for (i = 0; i < h; i += 2) {
+    const __m128i s = load4x2_sse2(src_ptr, src_stride);
+    const __m128i r = load4x2_sse2(ref_ptr, ref_stride);
 
-    const __m128i src1 = _mm_unpackhi_epi8(s, zero);
-    const __m128i ref1 = _mm_unpackhi_epi8(r, zero);
-    const __m128i diff1 = _mm_sub_epi16(src1, ref1);
-
-    vsum = _mm_add_epi16(vsum, diff0);
-    vsum = _mm_add_epi16(vsum, diff1);
-    vsse = _mm_add_epi32(vsse, _mm_madd_epi16(diff0, diff0));
-    vsse = _mm_add_epi32(vsse, _mm_madd_epi16(diff1, diff1));
-
-    src += src_stride;
-    ref += ref_stride;
-  }
-
-  // sum
-  vsum = _mm_add_epi16(vsum, _mm_srli_si128(vsum, 8));
-  vsum = _mm_add_epi16(vsum, _mm_srli_si128(vsum, 4));
-  *sum =
-      (int16_t)_mm_extract_epi16(vsum, 0) + (int16_t)_mm_extract_epi16(vsum, 1);
-
-  // sse
-  vsse = _mm_add_epi32(vsse, _mm_srli_si128(vsse, 8));
-  vsse = _mm_add_epi32(vsse, _mm_srli_si128(vsse, 4));
-  *sse = _mm_cvtsi128_si32(vsse);
-}
-
-static void variance_sse2(const unsigned char *src, int src_stride,
-                          const unsigned char *ref, int ref_stride, int w,
-                          int h, unsigned int *sse, int *sum,
-                          getNxMvar_fn_t var_fn, int block_size) {
-  int i, j;
-
-  *sse = 0;
-  *sum = 0;
-
-  for (i = 0; i < h; i += block_size) {
-    for (j = 0; j < w; j += block_size) {
-      unsigned int sse0;
-      int sum0;
-      var_fn(src + src_stride * i + j, src_stride, ref + ref_stride * i + j,
-             ref_stride, &sse0, &sum0);
-      *sse += sse0;
-      *sum += sum0;
-    }
+    variance_kernel_sse2(s, r, sse, sum);
+    src_ptr += 2 * src_stride;
+    ref_ptr += 2 * ref_stride;
   }
 }
 
-unsigned int vpx_variance4x4_sse2(const unsigned char *src, int src_stride,
-                                  const unsigned char *ref, int ref_stride,
+static INLINE void variance8_sse2(const uint8_t *src_ptr, const int src_stride,
+                                  const uint8_t *ref_ptr, const int ref_stride,
+                                  const int h, __m128i *const sse,
+                                  __m128i *const sum) {
+  const __m128i zero = _mm_setzero_si128();
+  int i;
+
+  assert(h <= 128);  // May overflow for larger height.
+  *sse = _mm_setzero_si128();
+  *sum = _mm_setzero_si128();
+
+  for (i = 0; i < h; i++) {
+    const __m128i s =
+        _mm_unpacklo_epi8(_mm_loadl_epi64((const __m128i *)src_ptr), zero);
+    const __m128i r =
+        _mm_unpacklo_epi8(_mm_loadl_epi64((const __m128i *)ref_ptr), zero);
+
+    variance_kernel_sse2(s, r, sse, sum);
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
+  }
+}
+
+static INLINE void variance16_kernel_sse2(const uint8_t *const src_ptr,
+                                          const uint8_t *const ref_ptr,
+                                          __m128i *const sse,
+                                          __m128i *const sum) {
+  const __m128i zero = _mm_setzero_si128();
+  const __m128i s = _mm_loadu_si128((const __m128i *)src_ptr);
+  const __m128i r = _mm_loadu_si128((const __m128i *)ref_ptr);
+  const __m128i src0 = _mm_unpacklo_epi8(s, zero);
+  const __m128i ref0 = _mm_unpacklo_epi8(r, zero);
+  const __m128i src1 = _mm_unpackhi_epi8(s, zero);
+  const __m128i ref1 = _mm_unpackhi_epi8(r, zero);
+
+  variance_kernel_sse2(src0, ref0, sse, sum);
+  variance_kernel_sse2(src1, ref1, sse, sum);
+}
+
+static INLINE void variance16_sse2(const uint8_t *src_ptr, const int src_stride,
+                                   const uint8_t *ref_ptr, const int ref_stride,
+                                   const int h, __m128i *const sse,
+                                   __m128i *const sum) {
+  int i;
+
+  assert(h <= 64);  // May overflow for larger height.
+  *sse = _mm_setzero_si128();
+  *sum = _mm_setzero_si128();
+
+  for (i = 0; i < h; ++i) {
+    variance16_kernel_sse2(src_ptr, ref_ptr, sse, sum);
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
+  }
+}
+
+static INLINE void variance32_sse2(const uint8_t *src_ptr, const int src_stride,
+                                   const uint8_t *ref_ptr, const int ref_stride,
+                                   const int h, __m128i *const sse,
+                                   __m128i *const sum) {
+  int i;
+
+  assert(h <= 32);  // May overflow for larger height.
+  // Don't initialize sse here since it's an accumulation.
+  *sum = _mm_setzero_si128();
+
+  for (i = 0; i < h; ++i) {
+    variance16_kernel_sse2(src_ptr + 0, ref_ptr + 0, sse, sum);
+    variance16_kernel_sse2(src_ptr + 16, ref_ptr + 16, sse, sum);
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
+  }
+}
+
+static INLINE void variance64_sse2(const uint8_t *src_ptr, const int src_stride,
+                                   const uint8_t *ref_ptr, const int ref_stride,
+                                   const int h, __m128i *const sse,
+                                   __m128i *const sum) {
+  int i;
+
+  assert(h <= 16);  // May overflow for larger height.
+  // Don't initialize sse here since it's an accumulation.
+  *sum = _mm_setzero_si128();
+
+  for (i = 0; i < h; ++i) {
+    variance16_kernel_sse2(src_ptr + 0, ref_ptr + 0, sse, sum);
+    variance16_kernel_sse2(src_ptr + 16, ref_ptr + 16, sse, sum);
+    variance16_kernel_sse2(src_ptr + 32, ref_ptr + 32, sse, sum);
+    variance16_kernel_sse2(src_ptr + 48, ref_ptr + 48, sse, sum);
+    src_ptr += src_stride;
+    ref_ptr += ref_stride;
+  }
+}
+
+void vpx_get8x8var_sse2(const uint8_t *src_ptr, int src_stride,
+                        const uint8_t *ref_ptr, int ref_stride,
+                        unsigned int *sse, int *sum) {
+  __m128i vsse, vsum;
+  variance8_sse2(src_ptr, src_stride, ref_ptr, ref_stride, 8, &vsse, &vsum);
+  variance_final_128_pel_sse2(vsse, vsum, sse, sum);
+}
+
+void vpx_get16x16var_sse2(const uint8_t *src_ptr, int src_stride,
+                          const uint8_t *ref_ptr, int ref_stride,
+                          unsigned int *sse, int *sum) {
+  __m128i vsse, vsum;
+  variance16_sse2(src_ptr, src_stride, ref_ptr, ref_stride, 16, &vsse, &vsum);
+  variance_final_256_pel_sse2(vsse, vsum, sse, sum);
+}
+
+unsigned int vpx_variance4x4_sse2(const uint8_t *src_ptr, int src_stride,
+                                  const uint8_t *ref_ptr, int ref_stride,
                                   unsigned int *sse) {
+  __m128i vsse, vsum;
   int sum;
-  get4x4var_sse2(src, src_stride, ref, ref_stride, sse, &sum);
+  variance4_sse2(src_ptr, src_stride, ref_ptr, ref_stride, 4, &vsse, &vsum);
+  variance_final_128_pel_sse2(vsse, vsum, sse, &sum);
   return *sse - ((sum * sum) >> 4);
 }
 
-unsigned int vpx_variance8x4_sse2(const uint8_t *src, int src_stride,
-                                  const uint8_t *ref, int ref_stride,
+unsigned int vpx_variance4x8_sse2(const uint8_t *src_ptr, int src_stride,
+                                  const uint8_t *ref_ptr, int ref_stride,
                                   unsigned int *sse) {
+  __m128i vsse, vsum;
   int sum;
-  variance_sse2(src, src_stride, ref, ref_stride, 8, 4, sse, &sum,
-                get4x4var_sse2, 4);
+  variance4_sse2(src_ptr, src_stride, ref_ptr, ref_stride, 8, &vsse, &vsum);
+  variance_final_128_pel_sse2(vsse, vsum, sse, &sum);
   return *sse - ((sum * sum) >> 5);
 }
 
-unsigned int vpx_variance4x8_sse2(const uint8_t *src, int src_stride,
-                                  const uint8_t *ref, int ref_stride,
+unsigned int vpx_variance8x4_sse2(const uint8_t *src_ptr, int src_stride,
+                                  const uint8_t *ref_ptr, int ref_stride,
                                   unsigned int *sse) {
+  __m128i vsse, vsum;
   int sum;
-  variance_sse2(src, src_stride, ref, ref_stride, 4, 8, sse, &sum,
-                get4x4var_sse2, 4);
+  variance8_sse2(src_ptr, src_stride, ref_ptr, ref_stride, 4, &vsse, &vsum);
+  variance_final_128_pel_sse2(vsse, vsum, sse, &sum);
   return *sse - ((sum * sum) >> 5);
 }
 
-unsigned int vpx_variance8x8_sse2(const unsigned char *src, int src_stride,
-                                  const unsigned char *ref, int ref_stride,
+unsigned int vpx_variance8x8_sse2(const uint8_t *src_ptr, int src_stride,
+                                  const uint8_t *ref_ptr, int ref_stride,
                                   unsigned int *sse) {
+  __m128i vsse, vsum;
   int sum;
-  vpx_get8x8var_sse2(src, src_stride, ref, ref_stride, sse, &sum);
+  variance8_sse2(src_ptr, src_stride, ref_ptr, ref_stride, 8, &vsse, &vsum);
+  variance_final_128_pel_sse2(vsse, vsum, sse, &sum);
   return *sse - ((sum * sum) >> 6);
 }
 
-unsigned int vpx_variance16x8_sse2(const unsigned char *src, int src_stride,
-                                   const unsigned char *ref, int ref_stride,
+unsigned int vpx_variance8x16_sse2(const uint8_t *src_ptr, int src_stride,
+                                   const uint8_t *ref_ptr, int ref_stride,
                                    unsigned int *sse) {
+  __m128i vsse, vsum;
   int sum;
-  variance_sse2(src, src_stride, ref, ref_stride, 16, 8, sse, &sum,
-                vpx_get8x8var_sse2, 8);
+  variance8_sse2(src_ptr, src_stride, ref_ptr, ref_stride, 16, &vsse, &vsum);
+  variance_final_128_pel_sse2(vsse, vsum, sse, &sum);
   return *sse - ((sum * sum) >> 7);
 }
 
-unsigned int vpx_variance8x16_sse2(const unsigned char *src, int src_stride,
-                                   const unsigned char *ref, int ref_stride,
+unsigned int vpx_variance16x8_sse2(const uint8_t *src_ptr, int src_stride,
+                                   const uint8_t *ref_ptr, int ref_stride,
                                    unsigned int *sse) {
+  __m128i vsse, vsum;
   int sum;
-  variance_sse2(src, src_stride, ref, ref_stride, 8, 16, sse, &sum,
-                vpx_get8x8var_sse2, 8);
+  variance16_sse2(src_ptr, src_stride, ref_ptr, ref_stride, 8, &vsse, &vsum);
+  variance_final_128_pel_sse2(vsse, vsum, sse, &sum);
   return *sse - ((sum * sum) >> 7);
 }
 
-unsigned int vpx_variance16x16_sse2(const unsigned char *src, int src_stride,
-                                    const unsigned char *ref, int ref_stride,
+unsigned int vpx_variance16x16_sse2(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
                                     unsigned int *sse) {
+  __m128i vsse, vsum;
   int sum;
-  vpx_get16x16var_sse2(src, src_stride, ref, ref_stride, sse, &sum);
+  variance16_sse2(src_ptr, src_stride, ref_ptr, ref_stride, 16, &vsse, &vsum);
+  variance_final_256_pel_sse2(vsse, vsum, sse, &sum);
   return *sse - (uint32_t)(((int64_t)sum * sum) >> 8);
 }
 
-unsigned int vpx_variance32x32_sse2(const uint8_t *src, int src_stride,
-                                    const uint8_t *ref, int ref_stride,
+unsigned int vpx_variance16x32_sse2(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
                                     unsigned int *sse) {
+  __m128i vsse, vsum;
   int sum;
-  variance_sse2(src, src_stride, ref, ref_stride, 32, 32, sse, &sum,
-                vpx_get16x16var_sse2, 16);
+  variance16_sse2(src_ptr, src_stride, ref_ptr, ref_stride, 32, &vsse, &vsum);
+  variance_final_512_pel_sse2(vsse, vsum, sse, &sum);
+  return *sse - (unsigned int)(((int64_t)sum * sum) >> 9);
+}
+
+unsigned int vpx_variance32x16_sse2(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
+                                    unsigned int *sse) {
+  __m128i vsse = _mm_setzero_si128();
+  __m128i vsum;
+  int sum;
+  variance32_sse2(src_ptr, src_stride, ref_ptr, ref_stride, 16, &vsse, &vsum);
+  variance_final_512_pel_sse2(vsse, vsum, sse, &sum);
+  return *sse - (unsigned int)(((int64_t)sum * sum) >> 9);
+}
+
+unsigned int vpx_variance32x32_sse2(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
+                                    unsigned int *sse) {
+  __m128i vsse = _mm_setzero_si128();
+  __m128i vsum;
+  int sum;
+  variance32_sse2(src_ptr, src_stride, ref_ptr, ref_stride, 32, &vsse, &vsum);
+  *sse = add32x4_sse2(vsse);
+  sum = sum_final_sse2(vsum);
   return *sse - (unsigned int)(((int64_t)sum * sum) >> 10);
 }
 
-unsigned int vpx_variance32x16_sse2(const uint8_t *src, int src_stride,
-                                    const uint8_t *ref, int ref_stride,
+unsigned int vpx_variance32x64_sse2(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
                                     unsigned int *sse) {
+  __m128i vsse = _mm_setzero_si128();
+  __m128i vsum = _mm_setzero_si128();
   int sum;
-  variance_sse2(src, src_stride, ref, ref_stride, 32, 16, sse, &sum,
-                vpx_get16x16var_sse2, 16);
-  return *sse - (unsigned int)(((int64_t)sum * sum) >> 9);
+  int i = 0;
+
+  for (i = 0; i < 2; i++) {
+    __m128i vsum16;
+    variance32_sse2(src_ptr + 32 * i * src_stride, src_stride,
+                    ref_ptr + 32 * i * ref_stride, ref_stride, 32, &vsse,
+                    &vsum16);
+    vsum = _mm_add_epi32(vsum, sum_to_32bit_sse2(vsum16));
+  }
+  *sse = add32x4_sse2(vsse);
+  sum = add32x4_sse2(vsum);
+  return *sse - (unsigned int)(((int64_t)sum * sum) >> 11);
 }
 
-unsigned int vpx_variance16x32_sse2(const uint8_t *src, int src_stride,
-                                    const uint8_t *ref, int ref_stride,
+unsigned int vpx_variance64x32_sse2(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
                                     unsigned int *sse) {
+  __m128i vsse = _mm_setzero_si128();
+  __m128i vsum = _mm_setzero_si128();
   int sum;
-  variance_sse2(src, src_stride, ref, ref_stride, 16, 32, sse, &sum,
-                vpx_get16x16var_sse2, 16);
-  return *sse - (unsigned int)(((int64_t)sum * sum) >> 9);
+  int i = 0;
+
+  for (i = 0; i < 2; i++) {
+    __m128i vsum16;
+    variance64_sse2(src_ptr + 16 * i * src_stride, src_stride,
+                    ref_ptr + 16 * i * ref_stride, ref_stride, 16, &vsse,
+                    &vsum16);
+    vsum = _mm_add_epi32(vsum, sum_to_32bit_sse2(vsum16));
+  }
+  *sse = add32x4_sse2(vsse);
+  sum = add32x4_sse2(vsum);
+  return *sse - (unsigned int)(((int64_t)sum * sum) >> 11);
 }
 
-unsigned int vpx_variance64x64_sse2(const uint8_t *src, int src_stride,
-                                    const uint8_t *ref, int ref_stride,
+unsigned int vpx_variance64x64_sse2(const uint8_t *src_ptr, int src_stride,
+                                    const uint8_t *ref_ptr, int ref_stride,
                                     unsigned int *sse) {
+  __m128i vsse = _mm_setzero_si128();
+  __m128i vsum = _mm_setzero_si128();
   int sum;
-  variance_sse2(src, src_stride, ref, ref_stride, 64, 64, sse, &sum,
-                vpx_get16x16var_sse2, 16);
+  int i = 0;
+
+  for (i = 0; i < 4; i++) {
+    __m128i vsum16;
+    variance64_sse2(src_ptr + 16 * i * src_stride, src_stride,
+                    ref_ptr + 16 * i * ref_stride, ref_stride, 16, &vsse,
+                    &vsum16);
+    vsum = _mm_add_epi32(vsum, sum_to_32bit_sse2(vsum16));
+  }
+  *sse = add32x4_sse2(vsse);
+  sum = add32x4_sse2(vsum);
   return *sse - (unsigned int)(((int64_t)sum * sum) >> 12);
 }
 
-unsigned int vpx_variance64x32_sse2(const uint8_t *src, int src_stride,
-                                    const uint8_t *ref, int ref_stride,
-                                    unsigned int *sse) {
-  int sum;
-  variance_sse2(src, src_stride, ref, ref_stride, 64, 32, sse, &sum,
-                vpx_get16x16var_sse2, 16);
-  return *sse - (unsigned int)(((int64_t)sum * sum) >> 11);
-}
-
-unsigned int vpx_variance32x64_sse2(const uint8_t *src, int src_stride,
-                                    const uint8_t *ref, int ref_stride,
-                                    unsigned int *sse) {
-  int sum;
-  variance_sse2(src, src_stride, ref, ref_stride, 32, 64, sse, &sum,
-                vpx_get16x16var_sse2, 16);
-  return *sse - (unsigned int)(((int64_t)sum * sum) >> 11);
-}
-
-unsigned int vpx_mse8x8_sse2(const uint8_t *src, int src_stride,
-                             const uint8_t *ref, int ref_stride,
+unsigned int vpx_mse8x8_sse2(const uint8_t *src_ptr, int src_stride,
+                             const uint8_t *ref_ptr, int ref_stride,
                              unsigned int *sse) {
-  vpx_variance8x8_sse2(src, src_stride, ref, ref_stride, sse);
+  vpx_variance8x8_sse2(src_ptr, src_stride, ref_ptr, ref_stride, sse);
   return *sse;
 }
 
-unsigned int vpx_mse8x16_sse2(const uint8_t *src, int src_stride,
-                              const uint8_t *ref, int ref_stride,
+unsigned int vpx_mse8x16_sse2(const uint8_t *src_ptr, int src_stride,
+                              const uint8_t *ref_ptr, int ref_stride,
                               unsigned int *sse) {
-  vpx_variance8x16_sse2(src, src_stride, ref, ref_stride, sse);
+  vpx_variance8x16_sse2(src_ptr, src_stride, ref_ptr, ref_stride, sse);
   return *sse;
 }
 
-unsigned int vpx_mse16x8_sse2(const uint8_t *src, int src_stride,
-                              const uint8_t *ref, int ref_stride,
+unsigned int vpx_mse16x8_sse2(const uint8_t *src_ptr, int src_stride,
+                              const uint8_t *ref_ptr, int ref_stride,
                               unsigned int *sse) {
-  vpx_variance16x8_sse2(src, src_stride, ref, ref_stride, sse);
+  vpx_variance16x8_sse2(src_ptr, src_stride, ref_ptr, ref_stride, sse);
   return *sse;
 }
 
-unsigned int vpx_mse16x16_sse2(const uint8_t *src, int src_stride,
-                               const uint8_t *ref, int ref_stride,
+unsigned int vpx_mse16x16_sse2(const uint8_t *src_ptr, int src_stride,
+                               const uint8_t *ref_ptr, int ref_stride,
                                unsigned int *sse) {
-  vpx_variance16x16_sse2(src, src_stride, ref, ref_stride, sse);
+  vpx_variance16x16_sse2(src_ptr, src_stride, ref_ptr, ref_stride, sse);
   return *sse;
 }
 
 // The 2 unused parameters are placeholders for the PIC enabled build.
 // These definitions are for functions defined in subpel_variance.asm
-#define DECL(w, opt)                                                           \
-  int vpx_sub_pixel_variance##w##xh_##opt(                                     \
-      const uint8_t *src, ptrdiff_t src_stride, int x_offset, int y_offset,    \
-      const uint8_t *dst, ptrdiff_t dst_stride, int height, unsigned int *sse, \
-      void *unused0, void *unused)
+#define DECL(w, opt)                                                          \
+  int vpx_sub_pixel_variance##w##xh_##opt(                                    \
+      const uint8_t *src_ptr, ptrdiff_t src_stride, int x_offset,             \
+      int y_offset, const uint8_t *ref_ptr, ptrdiff_t ref_stride, int height, \
+      unsigned int *sse, void *unused0, void *unused)
 #define DECLS(opt1, opt2) \
   DECL(4, opt1);          \
   DECL(8, opt1);          \
@@ -324,36 +438,37 @@
 #undef DECLS
 #undef DECL
 
-#define FN(w, h, wf, wlog2, hlog2, opt, cast_prod, cast)                       \
-  unsigned int vpx_sub_pixel_variance##w##x##h##_##opt(                        \
-      const uint8_t *src, int src_stride, int x_offset, int y_offset,          \
-      const uint8_t *dst, int dst_stride, unsigned int *sse_ptr) {             \
-    unsigned int sse;                                                          \
-    int se = vpx_sub_pixel_variance##wf##xh_##opt(src, src_stride, x_offset,   \
-                                                  y_offset, dst, dst_stride,   \
-                                                  h, &sse, NULL, NULL);        \
-    if (w > wf) {                                                              \
-      unsigned int sse2;                                                       \
-      int se2 = vpx_sub_pixel_variance##wf##xh_##opt(                          \
-          src + 16, src_stride, x_offset, y_offset, dst + 16, dst_stride, h,   \
-          &sse2, NULL, NULL);                                                  \
-      se += se2;                                                               \
-      sse += sse2;                                                             \
-      if (w > wf * 2) {                                                        \
-        se2 = vpx_sub_pixel_variance##wf##xh_##opt(                            \
-            src + 32, src_stride, x_offset, y_offset, dst + 32, dst_stride, h, \
-            &sse2, NULL, NULL);                                                \
-        se += se2;                                                             \
-        sse += sse2;                                                           \
-        se2 = vpx_sub_pixel_variance##wf##xh_##opt(                            \
-            src + 48, src_stride, x_offset, y_offset, dst + 48, dst_stride, h, \
-            &sse2, NULL, NULL);                                                \
-        se += se2;                                                             \
-        sse += sse2;                                                           \
-      }                                                                        \
-    }                                                                          \
-    *sse_ptr = sse;                                                            \
-    return sse - (unsigned int)(cast_prod(cast se * se) >> (wlog2 + hlog2));   \
+#define FN(w, h, wf, wlog2, hlog2, opt, cast_prod, cast)                  \
+  unsigned int vpx_sub_pixel_variance##w##x##h##_##opt(                   \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, \
+      const uint8_t *ref_ptr, int ref_stride, unsigned int *sse) {        \
+    unsigned int sse_tmp;                                                 \
+    int se = vpx_sub_pixel_variance##wf##xh_##opt(                        \
+        src_ptr, src_stride, x_offset, y_offset, ref_ptr, ref_stride, h,  \
+        &sse_tmp, NULL, NULL);                                            \
+    if (w > wf) {                                                         \
+      unsigned int sse2;                                                  \
+      int se2 = vpx_sub_pixel_variance##wf##xh_##opt(                     \
+          src_ptr + 16, src_stride, x_offset, y_offset, ref_ptr + 16,     \
+          ref_stride, h, &sse2, NULL, NULL);                              \
+      se += se2;                                                          \
+      sse_tmp += sse2;                                                    \
+      if (w > wf * 2) {                                                   \
+        se2 = vpx_sub_pixel_variance##wf##xh_##opt(                       \
+            src_ptr + 32, src_stride, x_offset, y_offset, ref_ptr + 32,   \
+            ref_stride, h, &sse2, NULL, NULL);                            \
+        se += se2;                                                        \
+        sse_tmp += sse2;                                                  \
+        se2 = vpx_sub_pixel_variance##wf##xh_##opt(                       \
+            src_ptr + 48, src_stride, x_offset, y_offset, ref_ptr + 48,   \
+            ref_stride, h, &sse2, NULL, NULL);                            \
+        se += se2;                                                        \
+        sse_tmp += sse2;                                                  \
+      }                                                                   \
+    }                                                                     \
+    *sse = sse_tmp;                                                       \
+    return sse_tmp -                                                      \
+           (unsigned int)(cast_prod(cast se * se) >> (wlog2 + hlog2));    \
   }
 
 #define FNS(opt1, opt2)                              \
@@ -378,12 +493,12 @@
 #undef FN
 
 // The 2 unused parameters are placeholders for the PIC enabled build.
-#define DECL(w, opt)                                                        \
-  int vpx_sub_pixel_avg_variance##w##xh_##opt(                              \
-      const uint8_t *src, ptrdiff_t src_stride, int x_offset, int y_offset, \
-      const uint8_t *dst, ptrdiff_t dst_stride, const uint8_t *sec,         \
-      ptrdiff_t sec_stride, int height, unsigned int *sse, void *unused0,   \
-      void *unused)
+#define DECL(w, opt)                                                   \
+  int vpx_sub_pixel_avg_variance##w##xh_##opt(                         \
+      const uint8_t *src_ptr, ptrdiff_t src_stride, int x_offset,      \
+      int y_offset, const uint8_t *ref_ptr, ptrdiff_t ref_stride,      \
+      const uint8_t *second_pred, ptrdiff_t second_stride, int height, \
+      unsigned int *sse, void *unused0, void *unused)
 #define DECLS(opt1, opt2) \
   DECL(4, opt1);          \
   DECL(8, opt1);          \
@@ -394,37 +509,38 @@
 #undef DECL
 #undef DECLS
 
-#define FN(w, h, wf, wlog2, hlog2, opt, cast_prod, cast)                       \
-  unsigned int vpx_sub_pixel_avg_variance##w##x##h##_##opt(                    \
-      const uint8_t *src, int src_stride, int x_offset, int y_offset,          \
-      const uint8_t *dst, int dst_stride, unsigned int *sseptr,                \
-      const uint8_t *sec) {                                                    \
-    unsigned int sse;                                                          \
-    int se = vpx_sub_pixel_avg_variance##wf##xh_##opt(                         \
-        src, src_stride, x_offset, y_offset, dst, dst_stride, sec, w, h, &sse, \
-        NULL, NULL);                                                           \
-    if (w > wf) {                                                              \
-      unsigned int sse2;                                                       \
-      int se2 = vpx_sub_pixel_avg_variance##wf##xh_##opt(                      \
-          src + 16, src_stride, x_offset, y_offset, dst + 16, dst_stride,      \
-          sec + 16, w, h, &sse2, NULL, NULL);                                  \
-      se += se2;                                                               \
-      sse += sse2;                                                             \
-      if (w > wf * 2) {                                                        \
-        se2 = vpx_sub_pixel_avg_variance##wf##xh_##opt(                        \
-            src + 32, src_stride, x_offset, y_offset, dst + 32, dst_stride,    \
-            sec + 32, w, h, &sse2, NULL, NULL);                                \
-        se += se2;                                                             \
-        sse += sse2;                                                           \
-        se2 = vpx_sub_pixel_avg_variance##wf##xh_##opt(                        \
-            src + 48, src_stride, x_offset, y_offset, dst + 48, dst_stride,    \
-            sec + 48, w, h, &sse2, NULL, NULL);                                \
-        se += se2;                                                             \
-        sse += sse2;                                                           \
-      }                                                                        \
-    }                                                                          \
-    *sseptr = sse;                                                             \
-    return sse - (unsigned int)(cast_prod(cast se * se) >> (wlog2 + hlog2));   \
+#define FN(w, h, wf, wlog2, hlog2, opt, cast_prod, cast)                  \
+  unsigned int vpx_sub_pixel_avg_variance##w##x##h##_##opt(               \
+      const uint8_t *src_ptr, int src_stride, int x_offset, int y_offset, \
+      const uint8_t *ref_ptr, int ref_stride, unsigned int *sse,          \
+      const uint8_t *second_pred) {                                       \
+    unsigned int sse_tmp;                                                 \
+    int se = vpx_sub_pixel_avg_variance##wf##xh_##opt(                    \
+        src_ptr, src_stride, x_offset, y_offset, ref_ptr, ref_stride,     \
+        second_pred, w, h, &sse_tmp, NULL, NULL);                         \
+    if (w > wf) {                                                         \
+      unsigned int sse2;                                                  \
+      int se2 = vpx_sub_pixel_avg_variance##wf##xh_##opt(                 \
+          src_ptr + 16, src_stride, x_offset, y_offset, ref_ptr + 16,     \
+          ref_stride, second_pred + 16, w, h, &sse2, NULL, NULL);         \
+      se += se2;                                                          \
+      sse_tmp += sse2;                                                    \
+      if (w > wf * 2) {                                                   \
+        se2 = vpx_sub_pixel_avg_variance##wf##xh_##opt(                   \
+            src_ptr + 32, src_stride, x_offset, y_offset, ref_ptr + 32,   \
+            ref_stride, second_pred + 32, w, h, &sse2, NULL, NULL);       \
+        se += se2;                                                        \
+        sse_tmp += sse2;                                                  \
+        se2 = vpx_sub_pixel_avg_variance##wf##xh_##opt(                   \
+            src_ptr + 48, src_stride, x_offset, y_offset, ref_ptr + 48,   \
+            ref_stride, second_pred + 48, w, h, &sse2, NULL, NULL);       \
+        se += se2;                                                        \
+        sse_tmp += sse2;                                                  \
+      }                                                                   \
+    }                                                                     \
+    *sse = sse_tmp;                                                       \
+    return sse_tmp -                                                      \
+           (unsigned int)(cast_prod(cast se * se) >> (wlog2 + hlog2));    \
   }
 
 #define FNS(opt1, opt2)                              \
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_asm_stubs.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_asm_stubs.c
deleted file mode 100644
index 4f164af..0000000
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_asm_stubs.c
+++ /dev/null
@@ -1,162 +0,0 @@
-/*
- *  Copyright (c) 2014 The WebM project authors. All Rights Reserved.
- *
- *  Use of this source code is governed by a BSD-style license
- *  that can be found in the LICENSE file in the root of the source
- *  tree. An additional intellectual property rights grant can be found
- *  in the file PATENTS.  All contributing project authors may
- *  be found in the AUTHORS file in the root of the source tree.
- */
-
-#include "./vpx_config.h"
-#include "./vpx_dsp_rtcd.h"
-#include "vpx_dsp/x86/convolve.h"
-
-#if HAVE_SSE2
-filter8_1dfunction vpx_filter_block1d16_v8_sse2;
-filter8_1dfunction vpx_filter_block1d16_h8_sse2;
-filter8_1dfunction vpx_filter_block1d8_v8_sse2;
-filter8_1dfunction vpx_filter_block1d8_h8_sse2;
-filter8_1dfunction vpx_filter_block1d4_v8_sse2;
-filter8_1dfunction vpx_filter_block1d4_h8_sse2;
-filter8_1dfunction vpx_filter_block1d16_v8_avg_sse2;
-filter8_1dfunction vpx_filter_block1d16_h8_avg_sse2;
-filter8_1dfunction vpx_filter_block1d8_v8_avg_sse2;
-filter8_1dfunction vpx_filter_block1d8_h8_avg_sse2;
-filter8_1dfunction vpx_filter_block1d4_v8_avg_sse2;
-filter8_1dfunction vpx_filter_block1d4_h8_avg_sse2;
-
-filter8_1dfunction vpx_filter_block1d16_v2_sse2;
-filter8_1dfunction vpx_filter_block1d16_h2_sse2;
-filter8_1dfunction vpx_filter_block1d8_v2_sse2;
-filter8_1dfunction vpx_filter_block1d8_h2_sse2;
-filter8_1dfunction vpx_filter_block1d4_v2_sse2;
-filter8_1dfunction vpx_filter_block1d4_h2_sse2;
-filter8_1dfunction vpx_filter_block1d16_v2_avg_sse2;
-filter8_1dfunction vpx_filter_block1d16_h2_avg_sse2;
-filter8_1dfunction vpx_filter_block1d8_v2_avg_sse2;
-filter8_1dfunction vpx_filter_block1d8_h2_avg_sse2;
-filter8_1dfunction vpx_filter_block1d4_v2_avg_sse2;
-filter8_1dfunction vpx_filter_block1d4_h2_avg_sse2;
-
-// void vpx_convolve8_horiz_sse2(const uint8_t *src, ptrdiff_t src_stride,
-//                               uint8_t *dst, ptrdiff_t dst_stride,
-//                               const InterpKernel *filter, int x0_q4,
-//                               int32_t x_step_q4, int y0_q4, int y_step_q4,
-//                               int w, int h);
-// void vpx_convolve8_vert_sse2(const uint8_t *src, ptrdiff_t src_stride,
-//                              uint8_t *dst, ptrdiff_t dst_stride,
-//                              const InterpKernel *filter, int x0_q4,
-//                              int32_t x_step_q4, int y0_q4, int y_step_q4,
-//                              int w, int h);
-// void vpx_convolve8_avg_horiz_sse2(const uint8_t *src, ptrdiff_t src_stride,
-//                                   uint8_t *dst, ptrdiff_t dst_stride,
-//                                   const InterpKernel *filter, int x0_q4,
-//                                   int32_t x_step_q4, int y0_q4,
-//                                   int y_step_q4, int w, int h);
-// void vpx_convolve8_avg_vert_sse2(const uint8_t *src, ptrdiff_t src_stride,
-//                                  uint8_t *dst, ptrdiff_t dst_stride,
-//                                  const InterpKernel *filter, int x0_q4,
-//                                  int32_t x_step_q4, int y0_q4, int y_step_q4,
-//                                  int w, int h);
-FUN_CONV_1D(horiz, x0_q4, x_step_q4, h, src, , sse2);
-FUN_CONV_1D(vert, y0_q4, y_step_q4, v, src - src_stride * 3, , sse2);
-FUN_CONV_1D(avg_horiz, x0_q4, x_step_q4, h, src, avg_, sse2);
-FUN_CONV_1D(avg_vert, y0_q4, y_step_q4, v, src - src_stride * 3, avg_, sse2);
-
-// void vpx_convolve8_sse2(const uint8_t *src, ptrdiff_t src_stride,
-//                         uint8_t *dst, ptrdiff_t dst_stride,
-//                         const InterpKernel *filter, int x0_q4,
-//                         int32_t x_step_q4, int y0_q4, int y_step_q4,
-//                         int w, int h);
-// void vpx_convolve8_avg_sse2(const uint8_t *src, ptrdiff_t src_stride,
-//                             uint8_t *dst, ptrdiff_t dst_stride,
-//                             const InterpKernel *filter, int x0_q4,
-//                             int32_t x_step_q4, int y0_q4, int y_step_q4,
-//                             int w, int h);
-FUN_CONV_2D(, sse2);
-FUN_CONV_2D(avg_, sse2);
-
-#if CONFIG_VP9_HIGHBITDEPTH && ARCH_X86_64
-highbd_filter8_1dfunction vpx_highbd_filter_block1d16_v8_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d16_h8_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d8_v8_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d8_h8_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d4_v8_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d4_h8_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d16_v8_avg_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d16_h8_avg_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d8_v8_avg_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d8_h8_avg_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d4_v8_avg_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d4_h8_avg_sse2;
-
-highbd_filter8_1dfunction vpx_highbd_filter_block1d16_v2_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d16_h2_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d8_v2_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d8_h2_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d4_v2_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d4_h2_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d16_v2_avg_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d16_h2_avg_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d8_v2_avg_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d8_h2_avg_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d4_v2_avg_sse2;
-highbd_filter8_1dfunction vpx_highbd_filter_block1d4_h2_avg_sse2;
-
-// void vpx_highbd_convolve8_horiz_sse2(const uint8_t *src,
-//                                      ptrdiff_t src_stride,
-//                                      uint8_t *dst,
-//                                      ptrdiff_t dst_stride,
-//                                      const int16_t *filter_x,
-//                                      int x_step_q4,
-//                                      const int16_t *filter_y,
-//                                      int y_step_q4,
-//                                      int w, int h, int bd);
-// void vpx_highbd_convolve8_vert_sse2(const uint8_t *src,
-//                                     ptrdiff_t src_stride,
-//                                     uint8_t *dst,
-//                                     ptrdiff_t dst_stride,
-//                                     const int16_t *filter_x,
-//                                     int x_step_q4,
-//                                     const int16_t *filter_y,
-//                                     int y_step_q4,
-//                                     int w, int h, int bd);
-// void vpx_highbd_convolve8_avg_horiz_sse2(const uint8_t *src,
-//                                          ptrdiff_t src_stride,
-//                                          uint8_t *dst,
-//                                          ptrdiff_t dst_stride,
-//                                          const int16_t *filter_x,
-//                                          int x_step_q4,
-//                                          const int16_t *filter_y,
-//                                          int y_step_q4,
-//                                          int w, int h, int bd);
-// void vpx_highbd_convolve8_avg_vert_sse2(const uint8_t *src,
-//                                         ptrdiff_t src_stride,
-//                                         uint8_t *dst,
-//                                         ptrdiff_t dst_stride,
-//                                         const int16_t *filter_x,
-//                                         int x_step_q4,
-//                                         const int16_t *filter_y,
-//                                         int y_step_q4,
-//                                         int w, int h, int bd);
-HIGH_FUN_CONV_1D(horiz, x0_q4, x_step_q4, h, src, , sse2);
-HIGH_FUN_CONV_1D(vert, y0_q4, y_step_q4, v, src - src_stride * 3, , sse2);
-HIGH_FUN_CONV_1D(avg_horiz, x0_q4, x_step_q4, h, src, avg_, sse2);
-HIGH_FUN_CONV_1D(avg_vert, y0_q4, y_step_q4, v, src - src_stride * 3, avg_,
-                 sse2);
-
-// void vpx_highbd_convolve8_sse2(const uint8_t *src, ptrdiff_t src_stride,
-//                                uint8_t *dst, ptrdiff_t dst_stride,
-//                                const InterpKernel *filter, int x0_q4,
-//                                int32_t x_step_q4, int y0_q4, int y_step_q4,
-//                                int w, int h, int bd);
-// void vpx_highbd_convolve8_avg_sse2(const uint8_t *src, ptrdiff_t src_stride,
-//                                    uint8_t *dst, ptrdiff_t dst_stride,
-//                                    const InterpKernel *filter, int x0_q4,
-//                                    int32_t x_step_q4, int y0_q4,
-//                                    int y_step_q4, int w, int h, int bd);
-HIGH_FUN_CONV_2D(, sse2);
-HIGH_FUN_CONV_2D(avg_, sse2);
-#endif  // CONFIG_VP9_HIGHBITDEPTH && ARCH_X86_64
-#endif  // HAVE_SSE2
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_high_subpixel_8t_sse2.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_high_subpixel_8t_sse2.asm
index d83507d..c571496 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_high_subpixel_8t_sse2.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_high_subpixel_8t_sse2.asm
@@ -45,7 +45,7 @@
 
     ;Compute max and min values of a pixel
     mov         rdx, 0x00010001
-    movsxd      rcx, DWORD PTR arg(6)      ;bps
+    movsxd      rcx, DWORD PTR arg(6)      ;bd
     movq        xmm0, rdx
     movq        xmm1, rcx
     pshufd      xmm0, xmm0, 0b
@@ -121,7 +121,7 @@
 
     ;Compute max and min values of a pixel
     mov         rdx, 0x00010001
-    movsxd      rcx, DWORD PTR arg(6)       ;bps
+    movsxd      rcx, DWORD PTR arg(6)       ;bd
     movq        xmm0, rdx
     movq        xmm1, rcx
     pshufd      xmm0, xmm0, 0b
@@ -199,7 +199,7 @@
 
 SECTION .text
 
-;void vpx_filter_block1d4_v8_sse2
+;void vpx_highbd_filter_block1d4_v8_sse2
 ;(
 ;    unsigned char *src_ptr,
 ;    unsigned int   src_pitch,
@@ -269,7 +269,7 @@
     pop         rbp
     ret
 
-;void vpx_filter_block1d8_v8_sse2
+;void vpx_highbd_filter_block1d8_v8_sse2
 ;(
 ;    unsigned char *src_ptr,
 ;    unsigned int   src_pitch,
@@ -328,7 +328,7 @@
     pop         rbp
     ret
 
-;void vpx_filter_block1d16_v8_sse2
+;void vpx_highbd_filter_block1d16_v8_sse2
 ;(
 ;    unsigned char *src_ptr,
 ;    unsigned int   src_pitch,
@@ -554,7 +554,7 @@
     pop         rbp
     ret
 
-;void vpx_filter_block1d4_h8_sse2
+;void vpx_highbd_filter_block1d4_h8_sse2
 ;(
 ;    unsigned char  *src_ptr,
 ;    unsigned int    src_pixels_per_line,
@@ -629,7 +629,7 @@
     pop         rbp
     ret
 
-;void vpx_filter_block1d8_h8_sse2
+;void vpx_highbd_filter_block1d8_h8_sse2
 ;(
 ;    unsigned char  *src_ptr,
 ;    unsigned int    src_pixels_per_line,
@@ -695,7 +695,7 @@
     pop         rbp
     ret
 
-;void vpx_filter_block1d16_h8_sse2
+;void vpx_highbd_filter_block1d16_h8_sse2
 ;(
 ;    unsigned char  *src_ptr,
 ;    unsigned int    src_pixels_per_line,
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_high_subpixel_bilinear_sse2.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_high_subpixel_bilinear_sse2.asm
index 9bffe50..ec18d37 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_high_subpixel_bilinear_sse2.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_high_subpixel_bilinear_sse2.asm
@@ -26,7 +26,7 @@
     pshufd      xmm3, xmm3, 0
 
     mov         rdx, 0x00010001
-    movsxd      rcx, DWORD PTR arg(6)       ;bps
+    movsxd      rcx, DWORD PTR arg(6)       ;bd
     movq        xmm5, rdx
     movq        xmm2, rcx
     pshufd      xmm5, xmm5, 0b
@@ -64,7 +64,7 @@
     dec         rcx
 %endm
 
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
 %macro HIGH_GET_PARAM 0
     mov         rdx, arg(5)                 ;filter ptr
     mov         rsi, arg(0)                 ;src_ptr
@@ -82,7 +82,7 @@
     pshufd      xmm4, xmm4, 0
 
     mov         rdx, 0x00010001
-    movsxd      rcx, DWORD PTR arg(6)       ;bps
+    movsxd      rcx, DWORD PTR arg(6)       ;bd
     movq        xmm8, rdx
     movq        xmm5, rcx
     pshufd      xmm8, xmm8, 0b
@@ -197,7 +197,7 @@
     pop         rbp
     ret
 
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
 global sym(vpx_highbd_filter_block1d8_v2_sse2) PRIVATE
 sym(vpx_highbd_filter_block1d8_v2_sse2):
     push        rbp
@@ -277,7 +277,7 @@
     pop         rbp
     ret
 
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
 global sym(vpx_highbd_filter_block1d8_v2_avg_sse2) PRIVATE
 sym(vpx_highbd_filter_block1d8_v2_avg_sse2):
     push        rbp
@@ -358,7 +358,7 @@
     pop         rbp
     ret
 
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
 global sym(vpx_highbd_filter_block1d8_h2_sse2) PRIVATE
 sym(vpx_highbd_filter_block1d8_h2_sse2):
     push        rbp
@@ -439,7 +439,7 @@
     pop         rbp
     ret
 
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
 global sym(vpx_highbd_filter_block1d8_h2_avg_sse2) PRIVATE
 sym(vpx_highbd_filter_block1d8_h2_avg_sse2):
     push        rbp
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_4t_intrin_sse2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_4t_intrin_sse2.c
new file mode 100644
index 0000000..2391790
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_4t_intrin_sse2.c
@@ -0,0 +1,1161 @@
+/*
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <emmintrin.h>
+
+#include "./vpx_dsp_rtcd.h"
+#include "vpx/vpx_integer.h"
+#include "vpx_dsp/x86/convolve.h"
+#include "vpx_dsp/x86/convolve_sse2.h"
+#include "vpx_ports/mem.h"
+
+#define CONV8_ROUNDING_BITS (7)
+#define CONV8_ROUNDING_NUM (1 << (CONV8_ROUNDING_BITS - 1))
+
+static void vpx_filter_block1d16_h4_sse2(const uint8_t *src_ptr,
+                                         ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                         ptrdiff_t dst_stride, uint32_t height,
+                                         const int16_t *kernel) {
+  __m128i kernel_reg;                         // Kernel
+  __m128i kernel_reg_23, kernel_reg_45;       // Segments of the kernel used
+  const __m128i reg_32 = _mm_set1_epi16(32);  // Used for rounding
+  int h;
+
+  __m128i src_reg, src_reg_shift_1, src_reg_shift_2, src_reg_shift_3;
+  __m128i dst_first, dst_second;
+  __m128i even, odd;
+
+  // Start one pixel before as we need tap/2 - 1 = 1 sample from the past
+  src_ptr -= 1;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm_srai_epi16(kernel_reg, 1);
+  kernel_reg_23 = extract_quarter_2_epi16_sse2(&kernel_reg);
+  kernel_reg_45 = extract_quarter_3_epi16_sse2(&kernel_reg);
+
+  for (h = height; h > 0; --h) {
+    // We will load multiple shifted versions of the row and shuffle them into
+    // 16-bit words of the form
+    // ... s[2] s[1] s[0] s[-1]
+    // ... s[4] s[3] s[2] s[1]
+    // Then we call multiply and add to get partial results
+    // s[2]k[3]+s[1]k[2] s[0]k[3]+s[-1]k[2]
+    // s[4]k[5]+s[3]k[4] s[2]k[5]+s[1]k[4]
+    // The two results are then added together for the first half of even
+    // output.
+    // Repeat multiple times to get the whole output
+    src_reg = _mm_loadu_si128((const __m128i *)src_ptr);
+    src_reg_shift_1 = _mm_srli_si128(src_reg, 1);
+    src_reg_shift_2 = _mm_srli_si128(src_reg, 2);
+    src_reg_shift_3 = _mm_srli_si128(src_reg, 3);
+
+    // Output 6 4 2 0
+    even = mm_madd_add_epi8_sse2(&src_reg, &src_reg_shift_2, &kernel_reg_23,
+                                 &kernel_reg_45);
+
+    // Output 7 5 3 1
+    odd = mm_madd_add_epi8_sse2(&src_reg_shift_1, &src_reg_shift_3,
+                                &kernel_reg_23, &kernel_reg_45);
+
+    // Combine to get the first half of the dst
+    dst_first = mm_zip_epi32_sse2(&even, &odd);
+
+    // Do again to get the second half of dst
+    src_reg = _mm_loadu_si128((const __m128i *)(src_ptr + 8));
+    src_reg_shift_1 = _mm_srli_si128(src_reg, 1);
+    src_reg_shift_2 = _mm_srli_si128(src_reg, 2);
+    src_reg_shift_3 = _mm_srli_si128(src_reg, 3);
+
+    // Output 14 12 10 8
+    even = mm_madd_add_epi8_sse2(&src_reg, &src_reg_shift_2, &kernel_reg_23,
+                                 &kernel_reg_45);
+
+    // Output 15 13 11 9
+    odd = mm_madd_add_epi8_sse2(&src_reg_shift_1, &src_reg_shift_3,
+                                &kernel_reg_23, &kernel_reg_45);
+
+    // Combine to get the second half of the dst
+    dst_second = mm_zip_epi32_sse2(&even, &odd);
+
+    // Round each result
+    dst_first = mm_round_epi16_sse2(&dst_first, &reg_32, 6);
+    dst_second = mm_round_epi16_sse2(&dst_second, &reg_32, 6);
+
+    // Finally combine to get the final dst
+    dst_first = _mm_packus_epi16(dst_first, dst_second);
+    _mm_store_si128((__m128i *)dst_ptr, dst_first);
+
+    src_ptr += src_stride;
+    dst_ptr += dst_stride;
+  }
+}
+
+/* The macro used to generate functions shifts the src_ptr up by 3 rows
+   already. */
+
+static void vpx_filter_block1d16_v4_sse2(const uint8_t *src_ptr,
+                                         ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                         ptrdiff_t dst_stride, uint32_t height,
+                                         const int16_t *kernel) {
+  // Register for source s[-1:3, :]
+  __m128i src_reg_m1, src_reg_0, src_reg_1, src_reg_2, src_reg_3;
+  // Interleaved rows of the source. lo is first half, hi second
+  __m128i src_reg_m10_lo, src_reg_m10_hi, src_reg_01_lo, src_reg_01_hi;
+  __m128i src_reg_12_lo, src_reg_12_hi, src_reg_23_lo, src_reg_23_hi;
+  // Half of half of the interleaved rows
+  __m128i src_reg_m10_lo_1, src_reg_m10_lo_2, src_reg_m10_hi_1,
+      src_reg_m10_hi_2;
+  __m128i src_reg_01_lo_1, src_reg_01_lo_2, src_reg_01_hi_1, src_reg_01_hi_2;
+  __m128i src_reg_12_lo_1, src_reg_12_lo_2, src_reg_12_hi_1, src_reg_12_hi_2;
+  __m128i src_reg_23_lo_1, src_reg_23_lo_2, src_reg_23_hi_1, src_reg_23_hi_2;
+
+  __m128i kernel_reg;                    // Kernel
+  __m128i kernel_reg_23, kernel_reg_45;  // Segments of the kernel used
+
+  // Result after multiply and add
+  __m128i res_reg_m10_lo, res_reg_01_lo, res_reg_12_lo, res_reg_23_lo;
+  __m128i res_reg_m10_hi, res_reg_01_hi, res_reg_12_hi, res_reg_23_hi;
+  __m128i res_reg_m1012, res_reg_0123;
+  __m128i res_reg_m1012_lo, res_reg_0123_lo, res_reg_m1012_hi, res_reg_0123_hi;
+
+  const __m128i reg_32 = _mm_set1_epi16(32);  // Used for rounding
+
+  // We will compute the result two rows at a time
+  const ptrdiff_t src_stride_unrolled = src_stride << 1;
+  const ptrdiff_t dst_stride_unrolled = dst_stride << 1;
+  int h;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm_srai_epi16(kernel_reg, 1);
+  kernel_reg_23 = extract_quarter_2_epi16_sse2(&kernel_reg);
+  kernel_reg_45 = extract_quarter_3_epi16_sse2(&kernel_reg);
+
+  // We will load two rows of pixels as 8-bit words, rearrange them as 16-bit
+  // words,
+  // shuffle the data into the form
+  // ... s[0,1] s[-1,1] s[0,0] s[-1,0]
+  // ... s[0,7] s[-1,7] s[0,6] s[-1,6]
+  // ... s[0,9] s[-1,9] s[0,8] s[-1,8]
+  // ... s[0,13] s[-1,13] s[0,12] s[-1,12]
+  // so that we can call multiply and add with the kernel to get 32-bit words of
+  // the form
+  // ... s[0,1]k[3]+s[-1,1]k[2] s[0,0]k[3]+s[-1,0]k[2]
+  // Finally, we can add multiple rows together to get the desired output.
+
+  // First shuffle the data
+  src_reg_m1 = _mm_loadu_si128((const __m128i *)src_ptr);
+  src_reg_0 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride));
+  src_reg_m10_lo = _mm_unpacklo_epi8(src_reg_m1, src_reg_0);
+  src_reg_m10_hi = _mm_unpackhi_epi8(src_reg_m1, src_reg_0);
+  src_reg_m10_lo_1 = _mm_unpacklo_epi8(src_reg_m10_lo, _mm_setzero_si128());
+  src_reg_m10_lo_2 = _mm_unpackhi_epi8(src_reg_m10_lo, _mm_setzero_si128());
+  src_reg_m10_hi_1 = _mm_unpacklo_epi8(src_reg_m10_hi, _mm_setzero_si128());
+  src_reg_m10_hi_2 = _mm_unpackhi_epi8(src_reg_m10_hi, _mm_setzero_si128());
+
+  // More shuffling
+  src_reg_1 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 2));
+  src_reg_01_lo = _mm_unpacklo_epi8(src_reg_0, src_reg_1);
+  src_reg_01_hi = _mm_unpackhi_epi8(src_reg_0, src_reg_1);
+  src_reg_01_lo_1 = _mm_unpacklo_epi8(src_reg_01_lo, _mm_setzero_si128());
+  src_reg_01_lo_2 = _mm_unpackhi_epi8(src_reg_01_lo, _mm_setzero_si128());
+  src_reg_01_hi_1 = _mm_unpacklo_epi8(src_reg_01_hi, _mm_setzero_si128());
+  src_reg_01_hi_2 = _mm_unpackhi_epi8(src_reg_01_hi, _mm_setzero_si128());
+
+  for (h = height; h > 1; h -= 2) {
+    src_reg_2 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 3));
+
+    src_reg_12_lo = _mm_unpacklo_epi8(src_reg_1, src_reg_2);
+    src_reg_12_hi = _mm_unpackhi_epi8(src_reg_1, src_reg_2);
+
+    src_reg_3 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 4));
+
+    src_reg_23_lo = _mm_unpacklo_epi8(src_reg_2, src_reg_3);
+    src_reg_23_hi = _mm_unpackhi_epi8(src_reg_2, src_reg_3);
+
+    // Partial output from first half
+    res_reg_m10_lo = mm_madd_packs_epi16_sse2(
+        &src_reg_m10_lo_1, &src_reg_m10_lo_2, &kernel_reg_23);
+
+    res_reg_01_lo = mm_madd_packs_epi16_sse2(&src_reg_01_lo_1, &src_reg_01_lo_2,
+                                             &kernel_reg_23);
+
+    src_reg_12_lo_1 = _mm_unpacklo_epi8(src_reg_12_lo, _mm_setzero_si128());
+    src_reg_12_lo_2 = _mm_unpackhi_epi8(src_reg_12_lo, _mm_setzero_si128());
+    res_reg_12_lo = mm_madd_packs_epi16_sse2(&src_reg_12_lo_1, &src_reg_12_lo_2,
+                                             &kernel_reg_45);
+
+    src_reg_23_lo_1 = _mm_unpacklo_epi8(src_reg_23_lo, _mm_setzero_si128());
+    src_reg_23_lo_2 = _mm_unpackhi_epi8(src_reg_23_lo, _mm_setzero_si128());
+    res_reg_23_lo = mm_madd_packs_epi16_sse2(&src_reg_23_lo_1, &src_reg_23_lo_2,
+                                             &kernel_reg_45);
+
+    // Add to get first half of the results
+    res_reg_m1012_lo = _mm_adds_epi16(res_reg_m10_lo, res_reg_12_lo);
+    res_reg_0123_lo = _mm_adds_epi16(res_reg_01_lo, res_reg_23_lo);
+
+    // Now repeat everything again for the second half
+    // Partial output for second half
+    res_reg_m10_hi = mm_madd_packs_epi16_sse2(
+        &src_reg_m10_hi_1, &src_reg_m10_hi_2, &kernel_reg_23);
+
+    res_reg_01_hi = mm_madd_packs_epi16_sse2(&src_reg_01_hi_1, &src_reg_01_hi_2,
+                                             &kernel_reg_23);
+
+    src_reg_12_hi_1 = _mm_unpacklo_epi8(src_reg_12_hi, _mm_setzero_si128());
+    src_reg_12_hi_2 = _mm_unpackhi_epi8(src_reg_12_hi, _mm_setzero_si128());
+    res_reg_12_hi = mm_madd_packs_epi16_sse2(&src_reg_12_hi_1, &src_reg_12_hi_2,
+                                             &kernel_reg_45);
+
+    src_reg_23_hi_1 = _mm_unpacklo_epi8(src_reg_23_hi, _mm_setzero_si128());
+    src_reg_23_hi_2 = _mm_unpackhi_epi8(src_reg_23_hi, _mm_setzero_si128());
+    res_reg_23_hi = mm_madd_packs_epi16_sse2(&src_reg_23_hi_1, &src_reg_23_hi_2,
+                                             &kernel_reg_45);
+
+    // Second half of the results
+    res_reg_m1012_hi = _mm_adds_epi16(res_reg_m10_hi, res_reg_12_hi);
+    res_reg_0123_hi = _mm_adds_epi16(res_reg_01_hi, res_reg_23_hi);
+
+    // Round the words
+    res_reg_m1012_lo = mm_round_epi16_sse2(&res_reg_m1012_lo, &reg_32, 6);
+    res_reg_0123_lo = mm_round_epi16_sse2(&res_reg_0123_lo, &reg_32, 6);
+    res_reg_m1012_hi = mm_round_epi16_sse2(&res_reg_m1012_hi, &reg_32, 6);
+    res_reg_0123_hi = mm_round_epi16_sse2(&res_reg_0123_hi, &reg_32, 6);
+
+    // Combine to get the result
+    res_reg_m1012 = _mm_packus_epi16(res_reg_m1012_lo, res_reg_m1012_hi);
+    res_reg_0123 = _mm_packus_epi16(res_reg_0123_lo, res_reg_0123_hi);
+
+    _mm_store_si128((__m128i *)dst_ptr, res_reg_m1012);
+    _mm_store_si128((__m128i *)(dst_ptr + dst_stride), res_reg_0123);
+
+    // Update the source by two rows
+    src_ptr += src_stride_unrolled;
+    dst_ptr += dst_stride_unrolled;
+
+    src_reg_m10_lo_1 = src_reg_12_lo_1;
+    src_reg_m10_lo_2 = src_reg_12_lo_2;
+    src_reg_m10_hi_1 = src_reg_12_hi_1;
+    src_reg_m10_hi_2 = src_reg_12_hi_2;
+    src_reg_01_lo_1 = src_reg_23_lo_1;
+    src_reg_01_lo_2 = src_reg_23_lo_2;
+    src_reg_01_hi_1 = src_reg_23_hi_1;
+    src_reg_01_hi_2 = src_reg_23_hi_2;
+    src_reg_1 = src_reg_3;
+  }
+}
+
+static void vpx_filter_block1d8_h4_sse2(const uint8_t *src_ptr,
+                                        ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                        ptrdiff_t dst_stride, uint32_t height,
+                                        const int16_t *kernel) {
+  __m128i kernel_reg;                         // Kernel
+  __m128i kernel_reg_23, kernel_reg_45;       // Segments of the kernel used
+  const __m128i reg_32 = _mm_set1_epi16(32);  // Used for rounding
+  int h;
+
+  __m128i src_reg, src_reg_shift_1, src_reg_shift_2, src_reg_shift_3;
+  __m128i dst_first;
+  __m128i even, odd;
+
+  // Start one pixel before as we need tap/2 - 1 = 1 sample from the past
+  src_ptr -= 1;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm_srai_epi16(kernel_reg, 1);
+  kernel_reg_23 = extract_quarter_2_epi16_sse2(&kernel_reg);
+  kernel_reg_45 = extract_quarter_3_epi16_sse2(&kernel_reg);
+
+  for (h = height; h > 0; --h) {
+    // We will load multiple shifted versions of the row and shuffle them into
+    // 16-bit words of the form
+    // ... s[2] s[1] s[0] s[-1]
+    // ... s[4] s[3] s[2] s[1]
+    // Then we call multiply and add to get partial results
+    // s[2]k[3]+s[1]k[2] s[0]k[3]+s[-1]k[2]
+    // s[4]k[5]+s[3]k[4] s[2]k[5]+s[1]k[4]
+    // The two results are then added together to get the even output
+    src_reg = _mm_loadu_si128((const __m128i *)src_ptr);
+    src_reg_shift_1 = _mm_srli_si128(src_reg, 1);
+    src_reg_shift_2 = _mm_srli_si128(src_reg, 2);
+    src_reg_shift_3 = _mm_srli_si128(src_reg, 3);
+
+    // Output 6 4 2 0
+    even = mm_madd_add_epi8_sse2(&src_reg, &src_reg_shift_2, &kernel_reg_23,
+                                 &kernel_reg_45);
+
+    // Output 7 5 3 1
+    odd = mm_madd_add_epi8_sse2(&src_reg_shift_1, &src_reg_shift_3,
+                                &kernel_reg_23, &kernel_reg_45);
+
+    // Combine to get the first half of the dst
+    dst_first = mm_zip_epi32_sse2(&even, &odd);
+    dst_first = mm_round_epi16_sse2(&dst_first, &reg_32, 6);
+
+    // Saturate and convert to 8-bit words
+    dst_first = _mm_packus_epi16(dst_first, _mm_setzero_si128());
+
+    _mm_storel_epi64((__m128i *)dst_ptr, dst_first);
+
+    src_ptr += src_stride;
+    dst_ptr += dst_stride;
+  }
+}
+
+static void vpx_filter_block1d8_v4_sse2(const uint8_t *src_ptr,
+                                        ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                        ptrdiff_t dst_stride, uint32_t height,
+                                        const int16_t *kernel) {
+  // Register for source s[-1:3, :]
+  __m128i src_reg_m1, src_reg_0, src_reg_1, src_reg_2, src_reg_3;
+  // Interleaved rows of the source. lo is first half, hi second
+  __m128i src_reg_m10_lo, src_reg_01_lo;
+  __m128i src_reg_12_lo, src_reg_23_lo;
+  // Half of half of the interleaved rows
+  __m128i src_reg_m10_lo_1, src_reg_m10_lo_2;
+  __m128i src_reg_01_lo_1, src_reg_01_lo_2;
+  __m128i src_reg_12_lo_1, src_reg_12_lo_2;
+  __m128i src_reg_23_lo_1, src_reg_23_lo_2;
+
+  __m128i kernel_reg;                    // Kernel
+  __m128i kernel_reg_23, kernel_reg_45;  // Segments of the kernel used
+
+  // Result after multiply and add
+  __m128i res_reg_m10_lo, res_reg_01_lo, res_reg_12_lo, res_reg_23_lo;
+  __m128i res_reg_m1012, res_reg_0123;
+  __m128i res_reg_m1012_lo, res_reg_0123_lo;
+
+  const __m128i reg_32 = _mm_set1_epi16(32);  // Used for rounding
+
+  // We will compute the result two rows at a time
+  const ptrdiff_t src_stride_unrolled = src_stride << 1;
+  const ptrdiff_t dst_stride_unrolled = dst_stride << 1;
+  int h;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm_srai_epi16(kernel_reg, 1);
+  kernel_reg_23 = extract_quarter_2_epi16_sse2(&kernel_reg);
+  kernel_reg_45 = extract_quarter_3_epi16_sse2(&kernel_reg);
+
+  // We will load two rows of pixels as 8-bit words, rearrange them as 16-bit
+  // words,
+  // shuffle the data into the form
+  // ... s[0,1] s[-1,1] s[0,0] s[-1,0]
+  // ... s[0,7] s[-1,7] s[0,6] s[-1,6]
+  // ... s[0,9] s[-1,9] s[0,8] s[-1,8]
+  // ... s[0,13] s[-1,13] s[0,12] s[-1,12]
+  // so that we can call multiply and add with the kernel to get 32-bit words of
+  // the form
+  // ... s[0,1]k[3]+s[-1,1]k[2] s[0,0]k[3]+s[-1,0]k[2]
+  // Finally, we can add multiple rows together to get the desired output.
+
+  // First shuffle the data
+  src_reg_m1 = _mm_loadu_si128((const __m128i *)src_ptr);
+  src_reg_0 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride));
+  src_reg_m10_lo = _mm_unpacklo_epi8(src_reg_m1, src_reg_0);
+  src_reg_m10_lo_1 = _mm_unpacklo_epi8(src_reg_m10_lo, _mm_setzero_si128());
+  src_reg_m10_lo_2 = _mm_unpackhi_epi8(src_reg_m10_lo, _mm_setzero_si128());
+
+  // More shuffling
+  src_reg_1 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 2));
+  src_reg_01_lo = _mm_unpacklo_epi8(src_reg_0, src_reg_1);
+  src_reg_01_lo_1 = _mm_unpacklo_epi8(src_reg_01_lo, _mm_setzero_si128());
+  src_reg_01_lo_2 = _mm_unpackhi_epi8(src_reg_01_lo, _mm_setzero_si128());
+
+  for (h = height; h > 1; h -= 2) {
+    src_reg_2 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 3));
+
+    src_reg_12_lo = _mm_unpacklo_epi8(src_reg_1, src_reg_2);
+
+    src_reg_3 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 4));
+
+    src_reg_23_lo = _mm_unpacklo_epi8(src_reg_2, src_reg_3);
+
+    // Partial output
+    res_reg_m10_lo = mm_madd_packs_epi16_sse2(
+        &src_reg_m10_lo_1, &src_reg_m10_lo_2, &kernel_reg_23);
+
+    res_reg_01_lo = mm_madd_packs_epi16_sse2(&src_reg_01_lo_1, &src_reg_01_lo_2,
+                                             &kernel_reg_23);
+
+    src_reg_12_lo_1 = _mm_unpacklo_epi8(src_reg_12_lo, _mm_setzero_si128());
+    src_reg_12_lo_2 = _mm_unpackhi_epi8(src_reg_12_lo, _mm_setzero_si128());
+    res_reg_12_lo = mm_madd_packs_epi16_sse2(&src_reg_12_lo_1, &src_reg_12_lo_2,
+                                             &kernel_reg_45);
+
+    src_reg_23_lo_1 = _mm_unpacklo_epi8(src_reg_23_lo, _mm_setzero_si128());
+    src_reg_23_lo_2 = _mm_unpackhi_epi8(src_reg_23_lo, _mm_setzero_si128());
+    res_reg_23_lo = mm_madd_packs_epi16_sse2(&src_reg_23_lo_1, &src_reg_23_lo_2,
+                                             &kernel_reg_45);
+
+    // Add to get results
+    res_reg_m1012_lo = _mm_adds_epi16(res_reg_m10_lo, res_reg_12_lo);
+    res_reg_0123_lo = _mm_adds_epi16(res_reg_01_lo, res_reg_23_lo);
+
+    // Round the words
+    res_reg_m1012_lo = mm_round_epi16_sse2(&res_reg_m1012_lo, &reg_32, 6);
+    res_reg_0123_lo = mm_round_epi16_sse2(&res_reg_0123_lo, &reg_32, 6);
+
+    // Convert to 8-bit words
+    res_reg_m1012 = _mm_packus_epi16(res_reg_m1012_lo, _mm_setzero_si128());
+    res_reg_0123 = _mm_packus_epi16(res_reg_0123_lo, _mm_setzero_si128());
+
+    // Save only half of the register (8 bytes)
+    _mm_storel_epi64((__m128i *)dst_ptr, res_reg_m1012);
+    _mm_storel_epi64((__m128i *)(dst_ptr + dst_stride), res_reg_0123);
+
+    // Update the source by two rows
+    src_ptr += src_stride_unrolled;
+    dst_ptr += dst_stride_unrolled;
+
+    src_reg_m10_lo_1 = src_reg_12_lo_1;
+    src_reg_m10_lo_2 = src_reg_12_lo_2;
+    src_reg_01_lo_1 = src_reg_23_lo_1;
+    src_reg_01_lo_2 = src_reg_23_lo_2;
+    src_reg_1 = src_reg_3;
+  }
+}
+
+static void vpx_filter_block1d4_h4_sse2(const uint8_t *src_ptr,
+                                        ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                        ptrdiff_t dst_stride, uint32_t height,
+                                        const int16_t *kernel) {
+  __m128i kernel_reg;                         // Kernel
+  __m128i kernel_reg_23, kernel_reg_45;       // Segments of the kernel used
+  const __m128i reg_32 = _mm_set1_epi16(32);  // Used for rounding
+  int h;
+
+  __m128i src_reg, src_reg_shift_1, src_reg_shift_2, src_reg_shift_3;
+  __m128i dst_first;
+  __m128i tmp_0, tmp_1;
+
+  // Start one pixel before as we need tap/2 - 1 = 1 sample from the past
+  src_ptr -= 1;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm_srai_epi16(kernel_reg, 1);
+  kernel_reg_23 = extract_quarter_2_epi16_sse2(&kernel_reg);
+  kernel_reg_45 = extract_quarter_3_epi16_sse2(&kernel_reg);
+
+  for (h = height; h > 0; --h) {
+    // We will load multiple shifted versions of the row and shuffle them into
+    // 16-bit words of the form
+    // ... s[1] s[0] s[0] s[-1]
+    // ... s[3] s[2] s[2] s[1]
+    // Then we call multiply and add to get partial results
+    // s[1]k[3]+s[0]k[2] s[0]k[3]+s[-1]k[2]
+    // s[3]k[5]+s[2]k[4] s[2]k[5]+s[1]k[4]
+    // The two results are then added together to get the output
+    src_reg = _mm_loadu_si128((const __m128i *)src_ptr);
+    src_reg_shift_1 = _mm_srli_si128(src_reg, 1);
+    src_reg_shift_2 = _mm_srli_si128(src_reg, 2);
+    src_reg_shift_3 = _mm_srli_si128(src_reg, 3);
+
+    // Convert to 16-bit words
+    src_reg = _mm_unpacklo_epi8(src_reg, _mm_setzero_si128());
+    src_reg_shift_1 = _mm_unpacklo_epi8(src_reg_shift_1, _mm_setzero_si128());
+    src_reg_shift_2 = _mm_unpacklo_epi8(src_reg_shift_2, _mm_setzero_si128());
+    src_reg_shift_3 = _mm_unpacklo_epi8(src_reg_shift_3, _mm_setzero_si128());
+
+    // Shuffle into the right format
+    tmp_0 = _mm_unpacklo_epi32(src_reg, src_reg_shift_1);
+    tmp_1 = _mm_unpacklo_epi32(src_reg_shift_2, src_reg_shift_3);
+
+    // Partial output
+    tmp_0 = _mm_madd_epi16(tmp_0, kernel_reg_23);
+    tmp_1 = _mm_madd_epi16(tmp_1, kernel_reg_45);
+
+    // Output
+    dst_first = _mm_add_epi32(tmp_0, tmp_1);
+    dst_first = _mm_packs_epi32(dst_first, _mm_setzero_si128());
+
+    dst_first = mm_round_epi16_sse2(&dst_first, &reg_32, 6);
+
+    // Saturate and convert to 8-bit words
+    dst_first = _mm_packus_epi16(dst_first, _mm_setzero_si128());
+
+    *((uint32_t *)(dst_ptr)) = _mm_cvtsi128_si32(dst_first);
+
+    src_ptr += src_stride;
+    dst_ptr += dst_stride;
+  }
+}
+
+static void vpx_filter_block1d4_v4_sse2(const uint8_t *src_ptr,
+                                        ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                        ptrdiff_t dst_stride, uint32_t height,
+                                        const int16_t *kernel) {
+  // Register for source s[-1:3, :]
+  __m128i src_reg_m1, src_reg_0, src_reg_1, src_reg_2, src_reg_3;
+  // Interleaved rows of the source. lo is first half, hi second
+  __m128i src_reg_m10_lo, src_reg_01_lo;
+  __m128i src_reg_12_lo, src_reg_23_lo;
+  // Half of half of the interleaved rows
+  __m128i src_reg_m10_lo_1;
+  __m128i src_reg_01_lo_1;
+  __m128i src_reg_12_lo_1;
+  __m128i src_reg_23_lo_1;
+
+  __m128i kernel_reg;                    // Kernel
+  __m128i kernel_reg_23, kernel_reg_45;  // Segments of the kernel used
+
+  // Result after multiply and add
+  __m128i res_reg_m10_lo, res_reg_01_lo, res_reg_12_lo, res_reg_23_lo;
+  __m128i res_reg_m1012, res_reg_0123;
+  __m128i res_reg_m1012_lo, res_reg_0123_lo;
+
+  const __m128i reg_32 = _mm_set1_epi16(32);  // Used for rounding
+  const __m128i reg_zero = _mm_setzero_si128();
+
+  // We will compute the result two rows at a time
+  const ptrdiff_t src_stride_unrolled = src_stride << 1;
+  const ptrdiff_t dst_stride_unrolled = dst_stride << 1;
+  int h;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm_srai_epi16(kernel_reg, 1);
+  kernel_reg_23 = extract_quarter_2_epi16_sse2(&kernel_reg);
+  kernel_reg_45 = extract_quarter_3_epi16_sse2(&kernel_reg);
+
+  // We will load two rows of pixels as 8-bit words, rearrange them as 16-bit
+  // words,
+  // shuffle the data into the form
+  // ... s[0,1] s[-1,1] s[0,0] s[-1,0]
+  // ... s[0,7] s[-1,7] s[0,6] s[-1,6]
+  // ... s[0,9] s[-1,9] s[0,8] s[-1,8]
+  // ... s[0,13] s[-1,13] s[0,12] s[-1,12]
+  // so that we can call multiply and add with the kernel to get 32-bit words of
+  // the form
+  // ... s[0,1]k[3]+s[-1,1]k[2] s[0,0]k[3]+s[-1,0]k[2]
+  // Finally, we can add multiple rows together to get the desired output.
+
+  // First shuffle the data
+  src_reg_m1 = _mm_loadu_si128((const __m128i *)src_ptr);
+  src_reg_0 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride));
+  src_reg_m10_lo = _mm_unpacklo_epi8(src_reg_m1, src_reg_0);
+  src_reg_m10_lo_1 = _mm_unpacklo_epi8(src_reg_m10_lo, _mm_setzero_si128());
+
+  // More shuffling
+  src_reg_1 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 2));
+  src_reg_01_lo = _mm_unpacklo_epi8(src_reg_0, src_reg_1);
+  src_reg_01_lo_1 = _mm_unpacklo_epi8(src_reg_01_lo, _mm_setzero_si128());
+
+  for (h = height; h > 1; h -= 2) {
+    src_reg_2 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 3));
+
+    src_reg_12_lo = _mm_unpacklo_epi8(src_reg_1, src_reg_2);
+
+    src_reg_3 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 4));
+
+    src_reg_23_lo = _mm_unpacklo_epi8(src_reg_2, src_reg_3);
+
+    // Partial output
+    res_reg_m10_lo =
+        mm_madd_packs_epi16_sse2(&src_reg_m10_lo_1, &reg_zero, &kernel_reg_23);
+
+    res_reg_01_lo =
+        mm_madd_packs_epi16_sse2(&src_reg_01_lo_1, &reg_zero, &kernel_reg_23);
+
+    src_reg_12_lo_1 = _mm_unpacklo_epi8(src_reg_12_lo, _mm_setzero_si128());
+    res_reg_12_lo =
+        mm_madd_packs_epi16_sse2(&src_reg_12_lo_1, &reg_zero, &kernel_reg_45);
+
+    src_reg_23_lo_1 = _mm_unpacklo_epi8(src_reg_23_lo, _mm_setzero_si128());
+    res_reg_23_lo =
+        mm_madd_packs_epi16_sse2(&src_reg_23_lo_1, &reg_zero, &kernel_reg_45);
+
+    // Add to get results
+    res_reg_m1012_lo = _mm_adds_epi16(res_reg_m10_lo, res_reg_12_lo);
+    res_reg_0123_lo = _mm_adds_epi16(res_reg_01_lo, res_reg_23_lo);
+
+    // Round the words
+    res_reg_m1012_lo = mm_round_epi16_sse2(&res_reg_m1012_lo, &reg_32, 6);
+    res_reg_0123_lo = mm_round_epi16_sse2(&res_reg_0123_lo, &reg_32, 6);
+
+    // Convert to 8-bit words
+    res_reg_m1012 = _mm_packus_epi16(res_reg_m1012_lo, reg_zero);
+    res_reg_0123 = _mm_packus_epi16(res_reg_0123_lo, reg_zero);
+
+    // Save only a quarter of the register (4 bytes)
+    *((uint32_t *)(dst_ptr)) = _mm_cvtsi128_si32(res_reg_m1012);
+    *((uint32_t *)(dst_ptr + dst_stride)) = _mm_cvtsi128_si32(res_reg_0123);
+
+    // Update the source by two rows
+    src_ptr += src_stride_unrolled;
+    dst_ptr += dst_stride_unrolled;
+
+    src_reg_m10_lo_1 = src_reg_12_lo_1;
+    src_reg_01_lo_1 = src_reg_23_lo_1;
+    src_reg_1 = src_reg_3;
+  }
+}
+
+#if CONFIG_VP9_HIGHBITDEPTH && VPX_ARCH_X86_64
+static void vpx_highbd_filter_block1d4_h4_sse2(
+    const uint16_t *src_ptr, ptrdiff_t src_stride, uint16_t *dst_ptr,
+    ptrdiff_t dst_stride, uint32_t height, const int16_t *kernel, int bd) {
+  // We will load multiple shifted versions of the row and shuffle them into
+  // 16-bit words of the form
+  // ... s[2] s[1] s[0] s[-1]
+  // ... s[4] s[3] s[2] s[1]
+  // Then we call multiply and add to get partial results
+  // s[2]k[3]+s[1]k[2] s[0]k[3]+s[-1]k[2]
+  // s[4]k[5]+s[3]k[4] s[2]k[5]+s[1]k[4]
+  // The two results are then added together to get the even output
+
+  __m128i src_reg, src_reg_shift_1, src_reg_shift_2, src_reg_shift_3;
+  __m128i res_reg;
+  __m128i even, odd;
+
+  __m128i kernel_reg;                    // Kernel
+  __m128i kernel_reg_23, kernel_reg_45;  // Segments of the kernel used
+  const __m128i reg_round =
+      _mm_set1_epi32(CONV8_ROUNDING_NUM);  // Used for rounding
+  const __m128i reg_max = _mm_set1_epi16((1 << bd) - 1);
+  const __m128i reg_zero = _mm_setzero_si128();
+  int h;
+
+  // Start one pixel before as we need tap/2 - 1 = 1 sample from the past
+  src_ptr -= 1;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg_23 = extract_quarter_2_epi16_sse2(&kernel_reg);
+  kernel_reg_45 = extract_quarter_3_epi16_sse2(&kernel_reg);
+
+  for (h = height; h > 0; --h) {
+    src_reg = _mm_loadu_si128((const __m128i *)src_ptr);
+    src_reg_shift_1 = _mm_srli_si128(src_reg, 2);
+    src_reg_shift_2 = _mm_srli_si128(src_reg, 4);
+    src_reg_shift_3 = _mm_srli_si128(src_reg, 6);
+
+    // Output 2 0
+    even = mm_madd_add_epi16_sse2(&src_reg, &src_reg_shift_2, &kernel_reg_23,
+                                  &kernel_reg_45);
+
+    // Output 3 1
+    odd = mm_madd_add_epi16_sse2(&src_reg_shift_1, &src_reg_shift_3,
+                                 &kernel_reg_23, &kernel_reg_45);
+
+    // Combine to get the first half of the dst
+    res_reg = _mm_unpacklo_epi32(even, odd);
+    res_reg = mm_round_epi32_sse2(&res_reg, &reg_round, CONV8_ROUNDING_BITS);
+    res_reg = _mm_packs_epi32(res_reg, reg_zero);
+
+    // Saturate the result and save
+    res_reg = _mm_min_epi16(res_reg, reg_max);
+    res_reg = _mm_max_epi16(res_reg, reg_zero);
+    _mm_storel_epi64((__m128i *)dst_ptr, res_reg);
+
+    src_ptr += src_stride;
+    dst_ptr += dst_stride;
+  }
+}
+
+static void vpx_highbd_filter_block1d4_v4_sse2(
+    const uint16_t *src_ptr, ptrdiff_t src_stride, uint16_t *dst_ptr,
+    ptrdiff_t dst_stride, uint32_t height, const int16_t *kernel, int bd) {
+  // We will load two rows of pixels as 16-bit words, and shuffle them into the
+  // form
+  // ... s[0,1] s[-1,1] s[0,0] s[-1,0]
+  // ... s[0,7] s[-1,7] s[0,6] s[-1,6]
+  // ... s[0,9] s[-1,9] s[0,8] s[-1,8]
+  // ... s[0,13] s[-1,13] s[0,12] s[-1,12]
+  // so that we can call multiply and add with the kernel to get 32-bit words of
+  // the form
+  // ... s[0,1]k[3]+s[-1,1]k[2] s[0,0]k[3]+s[-1,0]k[2]
+  // Finally, we can add multiple rows together to get the desired output.
+
+  // Register for source s[-1:3, :]
+  __m128i src_reg_m1, src_reg_0, src_reg_1, src_reg_2, src_reg_3;
+  // Interleaved rows of the source. lo is first half, hi second
+  __m128i src_reg_m10, src_reg_01;
+  __m128i src_reg_12, src_reg_23;
+
+  __m128i kernel_reg;                    // Kernel
+  __m128i kernel_reg_23, kernel_reg_45;  // Segments of the kernel used
+
+  // Result after multiply and add
+  __m128i res_reg_m10, res_reg_01, res_reg_12, res_reg_23;
+  __m128i res_reg_m1012, res_reg_0123;
+
+  const __m128i reg_round =
+      _mm_set1_epi32(CONV8_ROUNDING_NUM);  // Used for rounding
+  const __m128i reg_max = _mm_set1_epi16((1 << bd) - 1);
+  const __m128i reg_zero = _mm_setzero_si128();
+
+  // We will compute the result two rows at a time
+  const ptrdiff_t src_stride_unrolled = src_stride << 1;
+  const ptrdiff_t dst_stride_unrolled = dst_stride << 1;
+  int h;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg_23 = extract_quarter_2_epi16_sse2(&kernel_reg);
+  kernel_reg_45 = extract_quarter_3_epi16_sse2(&kernel_reg);
+
+  // First shuffle the data
+  src_reg_m1 = _mm_loadl_epi64((const __m128i *)src_ptr);
+  src_reg_0 = _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride));
+  src_reg_m10 = _mm_unpacklo_epi16(src_reg_m1, src_reg_0);
+
+  // More shuffling
+  src_reg_1 = _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride * 2));
+  src_reg_01 = _mm_unpacklo_epi16(src_reg_0, src_reg_1);
+
+  for (h = height; h > 1; h -= 2) {
+    src_reg_2 = _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride * 3));
+
+    src_reg_12 = _mm_unpacklo_epi16(src_reg_1, src_reg_2);
+
+    src_reg_3 = _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride * 4));
+
+    src_reg_23 = _mm_unpacklo_epi16(src_reg_2, src_reg_3);
+
+    // Partial output
+    res_reg_m10 = _mm_madd_epi16(src_reg_m10, kernel_reg_23);
+    res_reg_01 = _mm_madd_epi16(src_reg_01, kernel_reg_23);
+    res_reg_12 = _mm_madd_epi16(src_reg_12, kernel_reg_45);
+    res_reg_23 = _mm_madd_epi16(src_reg_23, kernel_reg_45);
+
+    // Add to get results
+    res_reg_m1012 = _mm_add_epi32(res_reg_m10, res_reg_12);
+    res_reg_0123 = _mm_add_epi32(res_reg_01, res_reg_23);
+
+    // Round the words
+    res_reg_m1012 =
+        mm_round_epi32_sse2(&res_reg_m1012, &reg_round, CONV8_ROUNDING_BITS);
+    res_reg_0123 =
+        mm_round_epi32_sse2(&res_reg_0123, &reg_round, CONV8_ROUNDING_BITS);
+
+    res_reg_m1012 = _mm_packs_epi32(res_reg_m1012, reg_zero);
+    res_reg_0123 = _mm_packs_epi32(res_reg_0123, reg_zero);
+
+    // Saturate according to bit depth
+    res_reg_m1012 = _mm_min_epi16(res_reg_m1012, reg_max);
+    res_reg_0123 = _mm_min_epi16(res_reg_0123, reg_max);
+    res_reg_m1012 = _mm_max_epi16(res_reg_m1012, reg_zero);
+    res_reg_0123 = _mm_max_epi16(res_reg_0123, reg_zero);
+
+    // Save only half of the register (4 words)
+    _mm_storel_epi64((__m128i *)dst_ptr, res_reg_m1012);
+    _mm_storel_epi64((__m128i *)(dst_ptr + dst_stride), res_reg_0123);
+
+    // Update the source by two rows
+    src_ptr += src_stride_unrolled;
+    dst_ptr += dst_stride_unrolled;
+
+    src_reg_m10 = src_reg_12;
+    src_reg_01 = src_reg_23;
+    src_reg_1 = src_reg_3;
+  }
+}
+
+static void vpx_highbd_filter_block1d8_h4_sse2(
+    const uint16_t *src_ptr, ptrdiff_t src_stride, uint16_t *dst_ptr,
+    ptrdiff_t dst_stride, uint32_t height, const int16_t *kernel, int bd) {
+  // We will load multiple shifted versions of the row and shuffle them into
+  // 16-bit words of the form
+  // ... s[2] s[1] s[0] s[-1]
+  // ... s[4] s[3] s[2] s[1]
+  // Then we call multiply and add to get partial results
+  // s[2]k[3]+s[1]k[2] s[0]k[3]+s[-1]k[2]
+  // s[4]k[5]+s[3]k[4] s[2]k[5]+s[1]k[4]
+  // The two results are then added together for the first half of the even
+  // output.
+  // Repeat multiple times to get the whole output
+
+  __m128i src_reg, src_reg_next, src_reg_shift_1, src_reg_shift_2,
+      src_reg_shift_3;
+  __m128i res_reg;
+  __m128i even, odd;
+  __m128i tmp_0, tmp_1;
+
+  __m128i kernel_reg;                    // Kernel
+  __m128i kernel_reg_23, kernel_reg_45;  // Segments of the kernel used
+  const __m128i reg_round =
+      _mm_set1_epi32(CONV8_ROUNDING_NUM);  // Used for rounding
+  const __m128i reg_max = _mm_set1_epi16((1 << bd) - 1);
+  const __m128i reg_zero = _mm_setzero_si128();
+  int h;
+
+  // Start one pixel before as we need tap/2 - 1 = 1 sample from the past
+  src_ptr -= 1;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg_23 = extract_quarter_2_epi16_sse2(&kernel_reg);
+  kernel_reg_45 = extract_quarter_3_epi16_sse2(&kernel_reg);
+
+  for (h = height; h > 0; --h) {
+    // We will put first half in the first half of the reg, and second half in
+    // second half
+    src_reg = _mm_loadu_si128((const __m128i *)src_ptr);
+    src_reg_next = _mm_loadu_si128((const __m128i *)(src_ptr + 5));
+
+    // Output 6 4 2 0
+    tmp_0 = _mm_srli_si128(src_reg, 4);
+    tmp_1 = _mm_srli_si128(src_reg_next, 2);
+    src_reg_shift_2 = _mm_unpacklo_epi64(tmp_0, tmp_1);
+    even = mm_madd_add_epi16_sse2(&src_reg, &src_reg_shift_2, &kernel_reg_23,
+                                  &kernel_reg_45);
+
+    // Output 7 5 3 1
+    tmp_0 = _mm_srli_si128(src_reg, 2);
+    tmp_1 = src_reg_next;
+    src_reg_shift_1 = _mm_unpacklo_epi64(tmp_0, tmp_1);
+
+    tmp_0 = _mm_srli_si128(src_reg, 6);
+    tmp_1 = _mm_srli_si128(src_reg_next, 4);
+    src_reg_shift_3 = _mm_unpacklo_epi64(tmp_0, tmp_1);
+
+    odd = mm_madd_add_epi16_sse2(&src_reg_shift_1, &src_reg_shift_3,
+                                 &kernel_reg_23, &kernel_reg_45);
+
+    // Combine to get the first half of the dst
+    even = mm_round_epi32_sse2(&even, &reg_round, CONV8_ROUNDING_BITS);
+    odd = mm_round_epi32_sse2(&odd, &reg_round, CONV8_ROUNDING_BITS);
+    res_reg = mm_zip_epi32_sse2(&even, &odd);
+
+    // Saturate the result and save
+    res_reg = _mm_min_epi16(res_reg, reg_max);
+    res_reg = _mm_max_epi16(res_reg, reg_zero);
+
+    _mm_store_si128((__m128i *)dst_ptr, res_reg);
+
+    src_ptr += src_stride;
+    dst_ptr += dst_stride;
+  }
+}
+
+static void vpx_highbd_filter_block1d8_v4_sse2(
+    const uint16_t *src_ptr, ptrdiff_t src_stride, uint16_t *dst_ptr,
+    ptrdiff_t dst_stride, uint32_t height, const int16_t *kernel, int bd) {
+  // We will load two rows of pixels as 16-bit words, and shuffle them into the
+  // form
+  // ... s[0,1] s[-1,1] s[0,0] s[-1,0]
+  // ... s[0,7] s[-1,7] s[0,6] s[-1,6]
+  // ... s[0,9] s[-1,9] s[0,8] s[-1,8]
+  // ... s[0,13] s[-1,13] s[0,12] s[-1,12]
+  // so that we can call multiply and add with the kernel to get 32-bit words of
+  // the form
+  // ... s[0,1]k[3]+s[-1,1]k[2] s[0,0]k[3]+s[-1,0]k[2]
+  // Finally, we can add multiple rows together to get the desired output.
+
+  // Register for source s[-1:3, :]
+  __m128i src_reg_m1, src_reg_0, src_reg_1, src_reg_2, src_reg_3;
+  // Interleaved rows of the source. lo is first half, hi second
+  __m128i src_reg_m10_lo, src_reg_01_lo, src_reg_m10_hi, src_reg_01_hi;
+  __m128i src_reg_12_lo, src_reg_23_lo, src_reg_12_hi, src_reg_23_hi;
+
+  // Result after multiply and add
+  __m128i res_reg_m10_lo, res_reg_01_lo, res_reg_12_lo, res_reg_23_lo;
+  __m128i res_reg_m10_hi, res_reg_01_hi, res_reg_12_hi, res_reg_23_hi;
+  __m128i res_reg_m1012, res_reg_0123;
+  __m128i res_reg_m1012_lo, res_reg_0123_lo;
+  __m128i res_reg_m1012_hi, res_reg_0123_hi;
+
+  __m128i kernel_reg;                    // Kernel
+  __m128i kernel_reg_23, kernel_reg_45;  // Segments of the kernel used
+
+  const __m128i reg_round =
+      _mm_set1_epi32(CONV8_ROUNDING_NUM);  // Used for rounding
+  const __m128i reg_max = _mm_set1_epi16((1 << bd) - 1);
+  const __m128i reg_zero = _mm_setzero_si128();
+
+  // We will compute the result two rows at a time
+  const ptrdiff_t src_stride_unrolled = src_stride << 1;
+  const ptrdiff_t dst_stride_unrolled = dst_stride << 1;
+  int h;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg_23 = extract_quarter_2_epi16_sse2(&kernel_reg);
+  kernel_reg_45 = extract_quarter_3_epi16_sse2(&kernel_reg);
+
+  // First shuffle the data
+  src_reg_m1 = _mm_loadu_si128((const __m128i *)src_ptr);
+  src_reg_0 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride));
+  src_reg_m10_lo = _mm_unpacklo_epi16(src_reg_m1, src_reg_0);
+  src_reg_m10_hi = _mm_unpackhi_epi16(src_reg_m1, src_reg_0);
+
+  // More shuffling
+  src_reg_1 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 2));
+  src_reg_01_lo = _mm_unpacklo_epi16(src_reg_0, src_reg_1);
+  src_reg_01_hi = _mm_unpackhi_epi16(src_reg_0, src_reg_1);
+
+  for (h = height; h > 1; h -= 2) {
+    src_reg_2 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 3));
+
+    src_reg_12_lo = _mm_unpacklo_epi16(src_reg_1, src_reg_2);
+    src_reg_12_hi = _mm_unpackhi_epi16(src_reg_1, src_reg_2);
+
+    src_reg_3 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 4));
+
+    src_reg_23_lo = _mm_unpacklo_epi16(src_reg_2, src_reg_3);
+    src_reg_23_hi = _mm_unpackhi_epi16(src_reg_2, src_reg_3);
+
+    // Partial output for first half
+    res_reg_m10_lo = _mm_madd_epi16(src_reg_m10_lo, kernel_reg_23);
+    res_reg_01_lo = _mm_madd_epi16(src_reg_01_lo, kernel_reg_23);
+    res_reg_12_lo = _mm_madd_epi16(src_reg_12_lo, kernel_reg_45);
+    res_reg_23_lo = _mm_madd_epi16(src_reg_23_lo, kernel_reg_45);
+
+    // Add to get results
+    res_reg_m1012_lo = _mm_add_epi32(res_reg_m10_lo, res_reg_12_lo);
+    res_reg_0123_lo = _mm_add_epi32(res_reg_01_lo, res_reg_23_lo);
+
+    // Round the words
+    res_reg_m1012_lo =
+        mm_round_epi32_sse2(&res_reg_m1012_lo, &reg_round, CONV8_ROUNDING_BITS);
+    res_reg_0123_lo =
+        mm_round_epi32_sse2(&res_reg_0123_lo, &reg_round, CONV8_ROUNDING_BITS);
+
+    // Partial output for second half
+    res_reg_m10_hi = _mm_madd_epi16(src_reg_m10_hi, kernel_reg_23);
+    res_reg_01_hi = _mm_madd_epi16(src_reg_01_hi, kernel_reg_23);
+    res_reg_12_hi = _mm_madd_epi16(src_reg_12_hi, kernel_reg_45);
+    res_reg_23_hi = _mm_madd_epi16(src_reg_23_hi, kernel_reg_45);
+
+    // Add to get results
+    res_reg_m1012_hi = _mm_add_epi32(res_reg_m10_hi, res_reg_12_hi);
+    res_reg_0123_hi = _mm_add_epi32(res_reg_01_hi, res_reg_23_hi);
+
+    // Round the words
+    res_reg_m1012_hi =
+        mm_round_epi32_sse2(&res_reg_m1012_hi, &reg_round, CONV8_ROUNDING_BITS);
+    res_reg_0123_hi =
+        mm_round_epi32_sse2(&res_reg_0123_hi, &reg_round, CONV8_ROUNDING_BITS);
+
+    // Combine the two halves
+    res_reg_m1012 = _mm_packs_epi32(res_reg_m1012_lo, res_reg_m1012_hi);
+    res_reg_0123 = _mm_packs_epi32(res_reg_0123_lo, res_reg_0123_hi);
+
+    // Saturate according to bit depth
+    res_reg_m1012 = _mm_min_epi16(res_reg_m1012, reg_max);
+    res_reg_0123 = _mm_min_epi16(res_reg_0123, reg_max);
+    res_reg_m1012 = _mm_max_epi16(res_reg_m1012, reg_zero);
+    res_reg_0123 = _mm_max_epi16(res_reg_0123, reg_zero);
+
+    // Save the whole register (8 words)
+    _mm_store_si128((__m128i *)dst_ptr, res_reg_m1012);
+    _mm_store_si128((__m128i *)(dst_ptr + dst_stride), res_reg_0123);
+
+    // Update the source by two rows
+    src_ptr += src_stride_unrolled;
+    dst_ptr += dst_stride_unrolled;
+
+    src_reg_m10_lo = src_reg_12_lo;
+    src_reg_m10_hi = src_reg_12_hi;
+    src_reg_01_lo = src_reg_23_lo;
+    src_reg_01_hi = src_reg_23_hi;
+    src_reg_1 = src_reg_3;
+  }
+}
+
+static void vpx_highbd_filter_block1d16_h4_sse2(
+    const uint16_t *src_ptr, ptrdiff_t src_stride, uint16_t *dst_ptr,
+    ptrdiff_t dst_stride, uint32_t height, const int16_t *kernel, int bd) {
+  vpx_highbd_filter_block1d8_h4_sse2(src_ptr, src_stride, dst_ptr, dst_stride,
+                                     height, kernel, bd);
+  vpx_highbd_filter_block1d8_h4_sse2(src_ptr + 8, src_stride, dst_ptr + 8,
+                                     dst_stride, height, kernel, bd);
+}
+
+static void vpx_highbd_filter_block1d16_v4_sse2(
+    const uint16_t *src_ptr, ptrdiff_t src_stride, uint16_t *dst_ptr,
+    ptrdiff_t dst_stride, uint32_t height, const int16_t *kernel, int bd) {
+  vpx_highbd_filter_block1d8_v4_sse2(src_ptr, src_stride, dst_ptr, dst_stride,
+                                     height, kernel, bd);
+  vpx_highbd_filter_block1d8_v4_sse2(src_ptr + 8, src_stride, dst_ptr + 8,
+                                     dst_stride, height, kernel, bd);
+}
+#endif  // CONFIG_VP9_HIGHBITDEPTH && VPX_ARCH_X86_64
+
+// From vpx_subpixel_8t_sse2.asm.
+filter8_1dfunction vpx_filter_block1d16_v8_sse2;
+filter8_1dfunction vpx_filter_block1d16_h8_sse2;
+filter8_1dfunction vpx_filter_block1d8_v8_sse2;
+filter8_1dfunction vpx_filter_block1d8_h8_sse2;
+filter8_1dfunction vpx_filter_block1d4_v8_sse2;
+filter8_1dfunction vpx_filter_block1d4_h8_sse2;
+filter8_1dfunction vpx_filter_block1d16_v8_avg_sse2;
+filter8_1dfunction vpx_filter_block1d16_h8_avg_sse2;
+filter8_1dfunction vpx_filter_block1d8_v8_avg_sse2;
+filter8_1dfunction vpx_filter_block1d8_h8_avg_sse2;
+filter8_1dfunction vpx_filter_block1d4_v8_avg_sse2;
+filter8_1dfunction vpx_filter_block1d4_h8_avg_sse2;
+
+// Use the [vh]8 version because there is no [vh]4 implementation.
+#define vpx_filter_block1d16_v4_avg_sse2 vpx_filter_block1d16_v8_avg_sse2
+#define vpx_filter_block1d16_h4_avg_sse2 vpx_filter_block1d16_h8_avg_sse2
+#define vpx_filter_block1d8_v4_avg_sse2 vpx_filter_block1d8_v8_avg_sse2
+#define vpx_filter_block1d8_h4_avg_sse2 vpx_filter_block1d8_h8_avg_sse2
+#define vpx_filter_block1d4_v4_avg_sse2 vpx_filter_block1d4_v8_avg_sse2
+#define vpx_filter_block1d4_h4_avg_sse2 vpx_filter_block1d4_h8_avg_sse2
+
+// From vpx_dsp/x86/vpx_subpixel_bilinear_sse2.asm.
+filter8_1dfunction vpx_filter_block1d16_v2_sse2;
+filter8_1dfunction vpx_filter_block1d16_h2_sse2;
+filter8_1dfunction vpx_filter_block1d8_v2_sse2;
+filter8_1dfunction vpx_filter_block1d8_h2_sse2;
+filter8_1dfunction vpx_filter_block1d4_v2_sse2;
+filter8_1dfunction vpx_filter_block1d4_h2_sse2;
+filter8_1dfunction vpx_filter_block1d16_v2_avg_sse2;
+filter8_1dfunction vpx_filter_block1d16_h2_avg_sse2;
+filter8_1dfunction vpx_filter_block1d8_v2_avg_sse2;
+filter8_1dfunction vpx_filter_block1d8_h2_avg_sse2;
+filter8_1dfunction vpx_filter_block1d4_v2_avg_sse2;
+filter8_1dfunction vpx_filter_block1d4_h2_avg_sse2;
+
+// void vpx_convolve8_horiz_sse2(const uint8_t *src, ptrdiff_t src_stride,
+//                               uint8_t *dst, ptrdiff_t dst_stride,
+//                               const InterpKernel *filter, int x0_q4,
+//                               int32_t x_step_q4, int y0_q4, int y_step_q4,
+//                               int w, int h);
+// void vpx_convolve8_vert_sse2(const uint8_t *src, ptrdiff_t src_stride,
+//                              uint8_t *dst, ptrdiff_t dst_stride,
+//                              const InterpKernel *filter, int x0_q4,
+//                              int32_t x_step_q4, int y0_q4, int y_step_q4,
+//                              int w, int h);
+// void vpx_convolve8_avg_horiz_sse2(const uint8_t *src, ptrdiff_t src_stride,
+//                                   uint8_t *dst, ptrdiff_t dst_stride,
+//                                   const InterpKernel *filter, int x0_q4,
+//                                   int32_t x_step_q4, int y0_q4,
+//                                   int y_step_q4, int w, int h);
+// void vpx_convolve8_avg_vert_sse2(const uint8_t *src, ptrdiff_t src_stride,
+//                                  uint8_t *dst, ptrdiff_t dst_stride,
+//                                  const InterpKernel *filter, int x0_q4,
+//                                  int32_t x_step_q4, int y0_q4, int y_step_q4,
+//                                  int w, int h);
+FUN_CONV_1D(horiz, x0_q4, x_step_q4, h, src, , sse2, 0);
+FUN_CONV_1D(vert, y0_q4, y_step_q4, v, src - (num_taps / 2 - 1) * src_stride, ,
+            sse2, 0);
+FUN_CONV_1D(avg_horiz, x0_q4, x_step_q4, h, src, avg_, sse2, 1);
+FUN_CONV_1D(avg_vert, y0_q4, y_step_q4, v,
+            src - (num_taps / 2 - 1) * src_stride, avg_, sse2, 1);
+
+// void vpx_convolve8_sse2(const uint8_t *src, ptrdiff_t src_stride,
+//                         uint8_t *dst, ptrdiff_t dst_stride,
+//                         const InterpKernel *filter, int x0_q4,
+//                         int32_t x_step_q4, int y0_q4, int y_step_q4,
+//                         int w, int h);
+// void vpx_convolve8_avg_sse2(const uint8_t *src, ptrdiff_t src_stride,
+//                             uint8_t *dst, ptrdiff_t dst_stride,
+//                             const InterpKernel *filter, int x0_q4,
+//                             int32_t x_step_q4, int y0_q4, int y_step_q4,
+//                             int w, int h);
+FUN_CONV_2D(, sse2, 0);
+FUN_CONV_2D(avg_, sse2, 1);
+
+#if CONFIG_VP9_HIGHBITDEPTH && VPX_ARCH_X86_64
+// From vpx_dsp/x86/vpx_high_subpixel_8t_sse2.asm.
+highbd_filter8_1dfunction vpx_highbd_filter_block1d16_v8_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d16_h8_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d8_v8_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d8_h8_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_v8_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_h8_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d16_v8_avg_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d16_h8_avg_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d8_v8_avg_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d8_h8_avg_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_v8_avg_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_h8_avg_sse2;
+
+// Use the [vh]8 version because there is no [vh]4 implementation.
+#define vpx_highbd_filter_block1d16_v4_avg_sse2 \
+  vpx_highbd_filter_block1d16_v8_avg_sse2
+#define vpx_highbd_filter_block1d16_h4_avg_sse2 \
+  vpx_highbd_filter_block1d16_h8_avg_sse2
+#define vpx_highbd_filter_block1d8_v4_avg_sse2 \
+  vpx_highbd_filter_block1d8_v8_avg_sse2
+#define vpx_highbd_filter_block1d8_h4_avg_sse2 \
+  vpx_highbd_filter_block1d8_h8_avg_sse2
+#define vpx_highbd_filter_block1d4_v4_avg_sse2 \
+  vpx_highbd_filter_block1d4_v8_avg_sse2
+#define vpx_highbd_filter_block1d4_h4_avg_sse2 \
+  vpx_highbd_filter_block1d4_h8_avg_sse2
+
+// From vpx_dsp/x86/vpx_high_subpixel_bilinear_sse2.asm.
+highbd_filter8_1dfunction vpx_highbd_filter_block1d16_v2_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d16_h2_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d8_v2_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d8_h2_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_v2_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_h2_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d16_v2_avg_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d16_h2_avg_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d8_v2_avg_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d8_h2_avg_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_v2_avg_sse2;
+highbd_filter8_1dfunction vpx_highbd_filter_block1d4_h2_avg_sse2;
+
+// void vpx_highbd_convolve8_horiz_sse2(const uint8_t *src,
+//                                      ptrdiff_t src_stride,
+//                                      uint8_t *dst,
+//                                      ptrdiff_t dst_stride,
+//                                      const int16_t *filter_x,
+//                                      int x_step_q4,
+//                                      const int16_t *filter_y,
+//                                      int y_step_q4,
+//                                      int w, int h, int bd);
+// void vpx_highbd_convolve8_vert_sse2(const uint8_t *src,
+//                                     ptrdiff_t src_stride,
+//                                     uint8_t *dst,
+//                                     ptrdiff_t dst_stride,
+//                                     const int16_t *filter_x,
+//                                     int x_step_q4,
+//                                     const int16_t *filter_y,
+//                                     int y_step_q4,
+//                                     int w, int h, int bd);
+// void vpx_highbd_convolve8_avg_horiz_sse2(const uint8_t *src,
+//                                          ptrdiff_t src_stride,
+//                                          uint8_t *dst,
+//                                          ptrdiff_t dst_stride,
+//                                          const int16_t *filter_x,
+//                                          int x_step_q4,
+//                                          const int16_t *filter_y,
+//                                          int y_step_q4,
+//                                          int w, int h, int bd);
+// void vpx_highbd_convolve8_avg_vert_sse2(const uint8_t *src,
+//                                         ptrdiff_t src_stride,
+//                                         uint8_t *dst,
+//                                         ptrdiff_t dst_stride,
+//                                         const int16_t *filter_x,
+//                                         int x_step_q4,
+//                                         const int16_t *filter_y,
+//                                         int y_step_q4,
+//                                         int w, int h, int bd);
+HIGH_FUN_CONV_1D(horiz, x0_q4, x_step_q4, h, src, , sse2, 0);
+HIGH_FUN_CONV_1D(vert, y0_q4, y_step_q4, v,
+                 src - src_stride * (num_taps / 2 - 1), , sse2, 0);
+HIGH_FUN_CONV_1D(avg_horiz, x0_q4, x_step_q4, h, src, avg_, sse2, 1);
+HIGH_FUN_CONV_1D(avg_vert, y0_q4, y_step_q4, v,
+                 src - src_stride * (num_taps / 2 - 1), avg_, sse2, 1);
+
+// void vpx_highbd_convolve8_sse2(const uint8_t *src, ptrdiff_t src_stride,
+//                                uint8_t *dst, ptrdiff_t dst_stride,
+//                                const InterpKernel *filter, int x0_q4,
+//                                int32_t x_step_q4, int y0_q4, int y_step_q4,
+//                                int w, int h, int bd);
+// void vpx_highbd_convolve8_avg_sse2(const uint8_t *src, ptrdiff_t src_stride,
+//                                    uint8_t *dst, ptrdiff_t dst_stride,
+//                                    const InterpKernel *filter, int x0_q4,
+//                                    int32_t x_step_q4, int y0_q4,
+//                                    int y_step_q4, int w, int h, int bd);
+HIGH_FUN_CONV_2D(, sse2, 0);
+HIGH_FUN_CONV_2D(avg_, sse2, 1);
+#endif  // CONFIG_VP9_HIGHBITDEPTH && VPX_ARCH_X86_64
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_8t_intrin_avx2.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_8t_intrin_avx2.c
index d091969..1eaa19b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_8t_intrin_avx2.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_8t_intrin_avx2.c
@@ -9,22 +9,24 @@
  */
 
 #include <immintrin.h>
+#include <stdio.h>
 
 #include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/x86/convolve.h"
 #include "vpx_dsp/x86/convolve_avx2.h"
+#include "vpx_dsp/x86/convolve_sse2.h"
 #include "vpx_ports/mem.h"
 
 // filters for 16_h8
-DECLARE_ALIGNED(32, static const uint8_t, filt1_global_avx2[32]) = {
-  0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8,
-  0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8
-};
+DECLARE_ALIGNED(32, static const uint8_t,
+                filt1_global_avx2[32]) = { 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5,
+                                           6, 6, 7, 7, 8, 0, 1, 1, 2, 2, 3,
+                                           3, 4, 4, 5, 5, 6, 6, 7, 7, 8 };
 
-DECLARE_ALIGNED(32, static const uint8_t, filt2_global_avx2[32]) = {
-  2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10,
-  2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10
-};
+DECLARE_ALIGNED(32, static const uint8_t,
+                filt2_global_avx2[32]) = { 2, 3, 3, 4, 4,  5, 5, 6, 6, 7, 7,
+                                           8, 8, 9, 9, 10, 2, 3, 3, 4, 4, 5,
+                                           5, 6, 6, 7, 7,  8, 8, 9, 9, 10 };
 
 DECLARE_ALIGNED(32, static const uint8_t, filt3_global_avx2[32]) = {
   4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12,
@@ -326,23 +328,587 @@
                                  height, filter, 1);
 }
 
+static void vpx_filter_block1d16_h4_avx2(const uint8_t *src_ptr,
+                                         ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                         ptrdiff_t dst_stride, uint32_t height,
+                                         const int16_t *kernel) {
+  // We will cast the kernel from 16-bit words to 8-bit words, and then extract
+  // the middle four elements of the kernel into two registers in the form
+  // ... k[3] k[2] k[3] k[2]
+  // ... k[5] k[4] k[5] k[4]
+  // Then we shuffle the source into
+  // ... s[1] s[0] s[0] s[-1]
+  // ... s[3] s[2] s[2] s[1]
+  // Calling multiply and add gives us half of the sum. Calling add gives us
+  // the first half of the output. Repeat again to get the second half of the
+  // output. Finally we shuffle again to combine the two outputs.
+  // Since avx2 allows us to use a 256-bit buffer, we can do this two rows at
+  // a time.
+
+  __m128i kernel_reg;  // Kernel
+  __m256i kernel_reg_256, kernel_reg_23,
+      kernel_reg_45;                             // Segments of the kernel used
+  const __m256i reg_32 = _mm256_set1_epi16(32);  // Used for rounding
+  const ptrdiff_t unrolled_src_stride = src_stride << 1;
+  const ptrdiff_t unrolled_dst_stride = dst_stride << 1;
+  int h;
+
+  __m256i src_reg, src_reg_shift_0, src_reg_shift_2;
+  __m256i dst_first, dst_second;
+  __m256i tmp_0, tmp_1;
+  __m256i idx_shift_0 =
+      _mm256_setr_epi8(0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 0, 1, 1,
+                       2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8);
+  __m256i idx_shift_2 =
+      _mm256_setr_epi8(2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 2, 3, 3,
+                       4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10);
+
+  // Start one pixel before as we need tap/2 - 1 = 1 sample from the past
+  src_ptr -= 1;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm_srai_epi16(kernel_reg, 1);
+  kernel_reg = _mm_packs_epi16(kernel_reg, kernel_reg);
+  kernel_reg_256 = _mm256_broadcastsi128_si256(kernel_reg);
+  kernel_reg_23 =
+      _mm256_shuffle_epi8(kernel_reg_256, _mm256_set1_epi16(0x0302u));
+  kernel_reg_45 =
+      _mm256_shuffle_epi8(kernel_reg_256, _mm256_set1_epi16(0x0504u));
+
+  for (h = height; h >= 2; h -= 2) {
+    // Load the source
+    src_reg = mm256_loadu2_si128(src_ptr, src_ptr + src_stride);
+    src_reg_shift_0 = _mm256_shuffle_epi8(src_reg, idx_shift_0);
+    src_reg_shift_2 = _mm256_shuffle_epi8(src_reg, idx_shift_2);
+
+    // Partial result for first half
+    tmp_0 = _mm256_maddubs_epi16(src_reg_shift_0, kernel_reg_23);
+    tmp_1 = _mm256_maddubs_epi16(src_reg_shift_2, kernel_reg_45);
+    dst_first = _mm256_adds_epi16(tmp_0, tmp_1);
+
+    // Do again to get the second half of dst
+    // Load the source
+    src_reg = mm256_loadu2_si128(src_ptr + 8, src_ptr + src_stride + 8);
+    src_reg_shift_0 = _mm256_shuffle_epi8(src_reg, idx_shift_0);
+    src_reg_shift_2 = _mm256_shuffle_epi8(src_reg, idx_shift_2);
+
+    // Partial result for second half
+    tmp_0 = _mm256_maddubs_epi16(src_reg_shift_0, kernel_reg_23);
+    tmp_1 = _mm256_maddubs_epi16(src_reg_shift_2, kernel_reg_45);
+    dst_second = _mm256_adds_epi16(tmp_0, tmp_1);
+
+    // Round each result
+    dst_first = mm256_round_epi16(&dst_first, &reg_32, 6);
+    dst_second = mm256_round_epi16(&dst_second, &reg_32, 6);
+
+    // Finally combine to get the final dst
+    dst_first = _mm256_packus_epi16(dst_first, dst_second);
+    mm256_store2_si128((__m128i *)dst_ptr, (__m128i *)(dst_ptr + dst_stride),
+                       &dst_first);
+
+    src_ptr += unrolled_src_stride;
+    dst_ptr += unrolled_dst_stride;
+  }
+
+  // Repeat for the last row if needed
+  if (h > 0) {
+    src_reg = _mm256_loadu_si256((const __m256i *)src_ptr);
+    // Reorder into 2 1 1 2
+    src_reg = _mm256_permute4x64_epi64(src_reg, 0x94);
+
+    src_reg_shift_0 = _mm256_shuffle_epi8(src_reg, idx_shift_0);
+    src_reg_shift_2 = _mm256_shuffle_epi8(src_reg, idx_shift_2);
+
+    tmp_0 = _mm256_maddubs_epi16(src_reg_shift_0, kernel_reg_23);
+    tmp_1 = _mm256_maddubs_epi16(src_reg_shift_2, kernel_reg_45);
+    dst_first = _mm256_adds_epi16(tmp_0, tmp_1);
+
+    dst_first = mm256_round_epi16(&dst_first, &reg_32, 6);
+
+    dst_first = _mm256_packus_epi16(dst_first, dst_first);
+    dst_first = _mm256_permute4x64_epi64(dst_first, 0x8);
+
+    _mm_store_si128((__m128i *)dst_ptr, _mm256_castsi256_si128(dst_first));
+  }
+}
+
+static void vpx_filter_block1d16_v4_avx2(const uint8_t *src_ptr,
+                                         ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                         ptrdiff_t dst_stride, uint32_t height,
+                                         const int16_t *kernel) {
+  // We will load two rows of pixels as 8-bit words, rearrange them into the
+  // form
+  // ... s[1,0] s[0,0] s[0,0] s[-1,0]
+  // so that we can call multiply and add with the kernel to get a partial
+  // output. Then we can call add with another row to get the final output.
+
+  // Register for source s[-1:3, :]
+  __m256i src_reg_1, src_reg_2, src_reg_3;
+  // Interleaved rows of the source. lo is first half, hi second
+  __m256i src_reg_m10, src_reg_01, src_reg_12, src_reg_23;
+  __m256i src_reg_m1001_lo, src_reg_m1001_hi, src_reg_1223_lo, src_reg_1223_hi;
+
+  __m128i kernel_reg;  // Kernel
+  __m256i kernel_reg_256, kernel_reg_23,
+      kernel_reg_45;  // Segments of the kernel used
+
+  // Result after multiply and add
+  __m256i res_reg_m1001_lo, res_reg_1223_lo, res_reg_m1001_hi, res_reg_1223_hi;
+  __m256i res_reg, res_reg_lo, res_reg_hi;
+
+  const __m256i reg_32 = _mm256_set1_epi16(32);  // Used for rounding
+
+  // We will compute the result two rows at a time
+  const ptrdiff_t src_stride_unrolled = src_stride << 1;
+  const ptrdiff_t dst_stride_unrolled = dst_stride << 1;
+  int h;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm_srai_epi16(kernel_reg, 1);
+  kernel_reg = _mm_packs_epi16(kernel_reg, kernel_reg);
+  kernel_reg_256 = _mm256_broadcastsi128_si256(kernel_reg);
+  kernel_reg_23 =
+      _mm256_shuffle_epi8(kernel_reg_256, _mm256_set1_epi16(0x0302u));
+  kernel_reg_45 =
+      _mm256_shuffle_epi8(kernel_reg_256, _mm256_set1_epi16(0x0504u));
+
+  // Row -1 to row 0
+  src_reg_m10 = mm256_loadu2_si128((const __m128i *)src_ptr,
+                                   (const __m128i *)(src_ptr + src_stride));
+
+  // Row 0 to row 1
+  src_reg_1 = _mm256_castsi128_si256(
+      _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 2)));
+  src_reg_01 = _mm256_permute2x128_si256(src_reg_m10, src_reg_1, 0x21);
+
+  // First three rows
+  src_reg_m1001_lo = _mm256_unpacklo_epi8(src_reg_m10, src_reg_01);
+  src_reg_m1001_hi = _mm256_unpackhi_epi8(src_reg_m10, src_reg_01);
+
+  for (h = height; h > 1; h -= 2) {
+    src_reg_2 = _mm256_castsi128_si256(
+        _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 3)));
+
+    src_reg_12 = _mm256_inserti128_si256(src_reg_1,
+                                         _mm256_castsi256_si128(src_reg_2), 1);
+
+    src_reg_3 = _mm256_castsi128_si256(
+        _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 4)));
+
+    src_reg_23 = _mm256_inserti128_si256(src_reg_2,
+                                         _mm256_castsi256_si128(src_reg_3), 1);
+
+    // Last three rows
+    src_reg_1223_lo = _mm256_unpacklo_epi8(src_reg_12, src_reg_23);
+    src_reg_1223_hi = _mm256_unpackhi_epi8(src_reg_12, src_reg_23);
+
+    // Output from first half
+    res_reg_m1001_lo = _mm256_maddubs_epi16(src_reg_m1001_lo, kernel_reg_23);
+    res_reg_1223_lo = _mm256_maddubs_epi16(src_reg_1223_lo, kernel_reg_45);
+    res_reg_lo = _mm256_adds_epi16(res_reg_m1001_lo, res_reg_1223_lo);
+
+    // Output from second half
+    res_reg_m1001_hi = _mm256_maddubs_epi16(src_reg_m1001_hi, kernel_reg_23);
+    res_reg_1223_hi = _mm256_maddubs_epi16(src_reg_1223_hi, kernel_reg_45);
+    res_reg_hi = _mm256_adds_epi16(res_reg_m1001_hi, res_reg_1223_hi);
+
+    // Round the words
+    res_reg_lo = mm256_round_epi16(&res_reg_lo, &reg_32, 6);
+    res_reg_hi = mm256_round_epi16(&res_reg_hi, &reg_32, 6);
+
+    // Combine to get the result
+    res_reg = _mm256_packus_epi16(res_reg_lo, res_reg_hi);
+
+    // Save the result
+    mm256_store2_si128((__m128i *)dst_ptr, (__m128i *)(dst_ptr + dst_stride),
+                       &res_reg);
+
+    // Update the source by two rows
+    src_ptr += src_stride_unrolled;
+    dst_ptr += dst_stride_unrolled;
+
+    src_reg_m1001_lo = src_reg_1223_lo;
+    src_reg_m1001_hi = src_reg_1223_hi;
+    src_reg_1 = src_reg_3;
+  }
+}
+
+static void vpx_filter_block1d8_h4_avx2(const uint8_t *src_ptr,
+                                        ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                        ptrdiff_t dst_stride, uint32_t height,
+                                        const int16_t *kernel) {
+  // We will cast the kernel from 16-bit words to 8-bit words, and then extract
+  // the middle four elements of the kernel into two registers in the form
+  // ... k[3] k[2] k[3] k[2]
+  // ... k[5] k[4] k[5] k[4]
+  // Then we shuffle the source into
+  // ... s[1] s[0] s[0] s[-1]
+  // ... s[3] s[2] s[2] s[1]
+  // Calling multiply and add gives us half of the sum. Calling add gives us
+  // the first half of the output. Repeat again to get the second half of the
+  // output. Finally we shuffle again to combine the two outputs.
+  // Since avx2 allows us to use a 256-bit buffer, we can do this two rows at
+  // a time.
+
+  __m128i kernel_reg_128;  // Kernel
+  __m256i kernel_reg, kernel_reg_23,
+      kernel_reg_45;                             // Segments of the kernel used
+  const __m256i reg_32 = _mm256_set1_epi16(32);  // Used for rounding
+  const ptrdiff_t unrolled_src_stride = src_stride << 1;
+  const ptrdiff_t unrolled_dst_stride = dst_stride << 1;
+  int h;
+
+  __m256i src_reg, src_reg_shift_0, src_reg_shift_2;
+  __m256i dst_reg;
+  __m256i tmp_0, tmp_1;
+  __m256i idx_shift_0 =
+      _mm256_setr_epi8(0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 0, 1, 1,
+                       2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8);
+  __m256i idx_shift_2 =
+      _mm256_setr_epi8(2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 2, 3, 3,
+                       4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10);
+
+  // Start one pixel before as we need tap/2 - 1 = 1 sample from the past
+  src_ptr -= 1;
+
+  // Load Kernel
+  kernel_reg_128 = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg_128 = _mm_srai_epi16(kernel_reg_128, 1);
+  kernel_reg_128 = _mm_packs_epi16(kernel_reg_128, kernel_reg_128);
+  kernel_reg = _mm256_broadcastsi128_si256(kernel_reg_128);
+  kernel_reg_23 = _mm256_shuffle_epi8(kernel_reg, _mm256_set1_epi16(0x0302u));
+  kernel_reg_45 = _mm256_shuffle_epi8(kernel_reg, _mm256_set1_epi16(0x0504u));
+
+  for (h = height; h >= 2; h -= 2) {
+    // Load the source
+    src_reg = mm256_loadu2_si128(src_ptr, src_ptr + src_stride);
+    src_reg_shift_0 = _mm256_shuffle_epi8(src_reg, idx_shift_0);
+    src_reg_shift_2 = _mm256_shuffle_epi8(src_reg, idx_shift_2);
+
+    // Get the output
+    tmp_0 = _mm256_maddubs_epi16(src_reg_shift_0, kernel_reg_23);
+    tmp_1 = _mm256_maddubs_epi16(src_reg_shift_2, kernel_reg_45);
+    dst_reg = _mm256_adds_epi16(tmp_0, tmp_1);
+
+    // Round the result
+    dst_reg = mm256_round_epi16(&dst_reg, &reg_32, 6);
+
+    // Finally combine to get the final dst
+    dst_reg = _mm256_packus_epi16(dst_reg, dst_reg);
+    mm256_storeu2_epi64((__m128i *)dst_ptr, (__m128i *)(dst_ptr + dst_stride),
+                        &dst_reg);
+
+    src_ptr += unrolled_src_stride;
+    dst_ptr += unrolled_dst_stride;
+  }
+
+  // Repeat for the last row if needed
+  if (h > 0) {
+    __m128i src_reg = _mm_loadu_si128((const __m128i *)src_ptr);
+    __m128i dst_reg;
+    const __m128i reg_32 = _mm_set1_epi16(32);  // Used for rounding
+    __m128i tmp_0, tmp_1;
+
+    __m128i src_reg_shift_0 =
+        _mm_shuffle_epi8(src_reg, _mm256_castsi256_si128(idx_shift_0));
+    __m128i src_reg_shift_2 =
+        _mm_shuffle_epi8(src_reg, _mm256_castsi256_si128(idx_shift_2));
+
+    tmp_0 = _mm_maddubs_epi16(src_reg_shift_0,
+                              _mm256_castsi256_si128(kernel_reg_23));
+    tmp_1 = _mm_maddubs_epi16(src_reg_shift_2,
+                              _mm256_castsi256_si128(kernel_reg_45));
+    dst_reg = _mm_adds_epi16(tmp_0, tmp_1);
+
+    dst_reg = mm_round_epi16_sse2(&dst_reg, &reg_32, 6);
+
+    dst_reg = _mm_packus_epi16(dst_reg, _mm_setzero_si128());
+
+    _mm_storel_epi64((__m128i *)dst_ptr, dst_reg);
+  }
+}
+
+static void vpx_filter_block1d8_v4_avx2(const uint8_t *src_ptr,
+                                        ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                        ptrdiff_t dst_stride, uint32_t height,
+                                        const int16_t *kernel) {
+  // We will load two rows of pixels as 8-bit words, rearrange them into the
+  // form
+  // ... s[1,0] s[0,0] s[0,0] s[-1,0]
+  // so that we can call multiply and add with the kernel to get a partial
+  // output. Then we can call add with another row to get the final output.
+
+  // Register for source s[-1:3, :]
+  __m256i src_reg_1, src_reg_2, src_reg_3;
+  // Interleaved rows of the source. lo is first half, hi second
+  __m256i src_reg_m10, src_reg_01, src_reg_12, src_reg_23;
+  __m256i src_reg_m1001, src_reg_1223;
+
+  __m128i kernel_reg_128;  // Kernel
+  __m256i kernel_reg, kernel_reg_23,
+      kernel_reg_45;  // Segments of the kernel used
+
+  // Result after multiply and add
+  __m256i res_reg_m1001, res_reg_1223;
+  __m256i res_reg;
+
+  const __m256i reg_32 = _mm256_set1_epi16(32);  // Used for rounding
+
+  // We will compute the result two rows at a time
+  const ptrdiff_t src_stride_unrolled = src_stride << 1;
+  const ptrdiff_t dst_stride_unrolled = dst_stride << 1;
+  int h;
+
+  // Load Kernel
+  kernel_reg_128 = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg_128 = _mm_srai_epi16(kernel_reg_128, 1);
+  kernel_reg_128 = _mm_packs_epi16(kernel_reg_128, kernel_reg_128);
+  kernel_reg = _mm256_broadcastsi128_si256(kernel_reg_128);
+  kernel_reg_23 = _mm256_shuffle_epi8(kernel_reg, _mm256_set1_epi16(0x0302u));
+  kernel_reg_45 = _mm256_shuffle_epi8(kernel_reg, _mm256_set1_epi16(0x0504u));
+
+  // Row -1 to row 0
+  src_reg_m10 = mm256_loadu2_epi64((const __m128i *)src_ptr,
+                                   (const __m128i *)(src_ptr + src_stride));
+
+  // Row 0 to row 1
+  src_reg_1 = _mm256_castsi128_si256(
+      _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 2)));
+  src_reg_01 = _mm256_permute2x128_si256(src_reg_m10, src_reg_1, 0x21);
+
+  // First three rows
+  src_reg_m1001 = _mm256_unpacklo_epi8(src_reg_m10, src_reg_01);
+
+  for (h = height; h > 1; h -= 2) {
+    src_reg_2 = _mm256_castsi128_si256(
+        _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride * 3)));
+
+    src_reg_12 = _mm256_inserti128_si256(src_reg_1,
+                                         _mm256_castsi256_si128(src_reg_2), 1);
+
+    src_reg_3 = _mm256_castsi128_si256(
+        _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride * 4)));
+
+    src_reg_23 = _mm256_inserti128_si256(src_reg_2,
+                                         _mm256_castsi256_si128(src_reg_3), 1);
+
+    // Last three rows
+    src_reg_1223 = _mm256_unpacklo_epi8(src_reg_12, src_reg_23);
+
+    // Output
+    res_reg_m1001 = _mm256_maddubs_epi16(src_reg_m1001, kernel_reg_23);
+    res_reg_1223 = _mm256_maddubs_epi16(src_reg_1223, kernel_reg_45);
+    res_reg = _mm256_adds_epi16(res_reg_m1001, res_reg_1223);
+
+    // Round the words
+    res_reg = mm256_round_epi16(&res_reg, &reg_32, 6);
+
+    // Combine to get the result
+    res_reg = _mm256_packus_epi16(res_reg, res_reg);
+
+    // Save the result
+    mm256_storeu2_epi64((__m128i *)dst_ptr, (__m128i *)(dst_ptr + dst_stride),
+                        &res_reg);
+
+    // Update the source by two rows
+    src_ptr += src_stride_unrolled;
+    dst_ptr += dst_stride_unrolled;
+
+    src_reg_m1001 = src_reg_1223;
+    src_reg_1 = src_reg_3;
+  }
+}
+
+static void vpx_filter_block1d4_h4_avx2(const uint8_t *src_ptr,
+                                        ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                        ptrdiff_t dst_stride, uint32_t height,
+                                        const int16_t *kernel) {
+  // We will cast the kernel from 16-bit words to 8-bit words, and then extract
+  // the middle four elements of the kernel into a single register in the form
+  // k[5:2] k[5:2] k[5:2] k[5:2]
+  // Then we shuffle the source into
+  // s[5:2] s[4:1] s[3:0] s[2:-1]
+  // Calling multiply and add gives us the partial sums next to each other.
+  // Calling horizontal add then gives us the output.
+  // Since avx2 has 256-bit registers, we can do two rows at a time.
+
+  __m128i kernel_reg_128;  // Kernel
+  __m256i kernel_reg;
+  const __m256i reg_32 = _mm256_set1_epi16(32);  // Used for rounding
+  int h;
+  const ptrdiff_t unrolled_src_stride = src_stride << 1;
+  const ptrdiff_t unrolled_dst_stride = dst_stride << 1;
+
+  __m256i src_reg, src_reg_shuf;
+  __m256i dst;
+  __m256i shuf_idx =
+      _mm256_setr_epi8(0, 1, 2, 3, 1, 2, 3, 4, 2, 3, 4, 5, 3, 4, 5, 6, 0, 1, 2,
+                       3, 1, 2, 3, 4, 2, 3, 4, 5, 3, 4, 5, 6);
+
+  // Start one pixel before as we need tap/2 - 1 = 1 sample from the past
+  src_ptr -= 1;
+
+  // Load Kernel
+  kernel_reg_128 = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg_128 = _mm_srai_epi16(kernel_reg_128, 1);
+  kernel_reg_128 = _mm_packs_epi16(kernel_reg_128, kernel_reg_128);
+  kernel_reg = _mm256_broadcastsi128_si256(kernel_reg_128);
+  kernel_reg = _mm256_shuffle_epi8(kernel_reg, _mm256_set1_epi32(0x05040302u));
+
+  for (h = height; h > 1; h -= 2) {
+    // Load the source
+    src_reg = mm256_loadu2_epi64((const __m128i *)src_ptr,
+                                 (const __m128i *)(src_ptr + src_stride));
+    src_reg_shuf = _mm256_shuffle_epi8(src_reg, shuf_idx);
+
+    // Get the result
+    dst = _mm256_maddubs_epi16(src_reg_shuf, kernel_reg);
+    dst = _mm256_hadds_epi16(dst, _mm256_setzero_si256());
+
+    // Round result
+    dst = mm256_round_epi16(&dst, &reg_32, 6);
+
+    // Pack to 8-bits
+    dst = _mm256_packus_epi16(dst, _mm256_setzero_si256());
+
+    // Save
+    mm256_storeu2_epi32((__m128i *const)dst_ptr,
+                        (__m128i *const)(dst_ptr + dst_stride), &dst);
+
+    src_ptr += unrolled_src_stride;
+    dst_ptr += unrolled_dst_stride;
+  }
+
+  if (h > 0) {
+    // Load the source
+    const __m128i reg_32 = _mm_set1_epi16(32);  // Used for rounding
+    __m128i src_reg = _mm_loadl_epi64((const __m128i *)src_ptr);
+    __m128i src_reg_shuf =
+        _mm_shuffle_epi8(src_reg, _mm256_castsi256_si128(shuf_idx));
+
+    // Get the result
+    __m128i dst =
+        _mm_maddubs_epi16(src_reg_shuf, _mm256_castsi256_si128(kernel_reg));
+    dst = _mm_hadds_epi16(dst, _mm_setzero_si128());
+
+    // Round result
+    dst = mm_round_epi16_sse2(&dst, &reg_32, 6);
+
+    // Pack to 8-bits
+    dst = _mm_packus_epi16(dst, _mm_setzero_si128());
+    *((uint32_t *)(dst_ptr)) = _mm_cvtsi128_si32(dst);
+  }
+}
+
+static void vpx_filter_block1d4_v4_avx2(const uint8_t *src_ptr,
+                                        ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                        ptrdiff_t dst_stride, uint32_t height,
+                                        const int16_t *kernel) {
+  // We will load two rows of pixels as 8-bit words, rearrange them into the
+  // form
+  // ... s[3,0] s[2,0] s[1,0] s[0,0] s[2,0] s[1,0] s[0,0] s[-1,0]
+  // so that we can call multiply and add with the kernel to get a partial
+  // output. Calling horizontal add then gives us the complete output.
+
+  // Register for source s[-1:3, :]
+  __m256i src_reg_1, src_reg_2, src_reg_3;
+  // Interleaved rows of the source. lo is first half, hi second
+  __m256i src_reg_m10, src_reg_01, src_reg_12, src_reg_23;
+  __m256i src_reg_m1001, src_reg_1223, src_reg_m1012_1023;
+
+  __m128i kernel_reg_128;  // Kernel
+  __m256i kernel_reg;
+
+  // Result after multiply and add
+  __m256i res_reg;
+
+  const __m256i reg_32 = _mm256_set1_epi16(32);  // Used for rounding
+
+  // We will compute the result two rows at a time
+  const ptrdiff_t src_stride_unrolled = src_stride << 1;
+  const ptrdiff_t dst_stride_unrolled = dst_stride << 1;
+  int h;
+
+  // Load Kernel
+  kernel_reg_128 = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg_128 = _mm_srai_epi16(kernel_reg_128, 1);
+  kernel_reg_128 = _mm_packs_epi16(kernel_reg_128, kernel_reg_128);
+  kernel_reg = _mm256_broadcastsi128_si256(kernel_reg_128);
+  kernel_reg = _mm256_shuffle_epi8(kernel_reg, _mm256_set1_epi32(0x05040302u));
+
+  // Row -1 to row 0
+  src_reg_m10 = mm256_loadu2_si128((const __m128i *)src_ptr,
+                                   (const __m128i *)(src_ptr + src_stride));
+
+  // Row 0 to row 1
+  src_reg_1 = _mm256_castsi128_si256(
+      _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 2)));
+  src_reg_01 = _mm256_permute2x128_si256(src_reg_m10, src_reg_1, 0x21);
+
+  // First three rows
+  src_reg_m1001 = _mm256_unpacklo_epi8(src_reg_m10, src_reg_01);
+
+  for (h = height; h > 1; h -= 2) {
+    src_reg_2 = _mm256_castsi128_si256(
+        _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride * 3)));
+
+    src_reg_12 = _mm256_inserti128_si256(src_reg_1,
+                                         _mm256_castsi256_si128(src_reg_2), 1);
+
+    src_reg_3 = _mm256_castsi128_si256(
+        _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride * 4)));
+
+    src_reg_23 = _mm256_inserti128_si256(src_reg_2,
+                                         _mm256_castsi256_si128(src_reg_3), 1);
+
+    // Last three rows
+    src_reg_1223 = _mm256_unpacklo_epi8(src_reg_12, src_reg_23);
+
+    // Combine all the rows
+    src_reg_m1012_1023 = _mm256_unpacklo_epi16(src_reg_m1001, src_reg_1223);
+
+    // Output
+    res_reg = _mm256_maddubs_epi16(src_reg_m1012_1023, kernel_reg);
+    res_reg = _mm256_hadds_epi16(res_reg, _mm256_setzero_si256());
+
+    // Round the words
+    res_reg = mm256_round_epi16(&res_reg, &reg_32, 6);
+
+    // Combine to get the result
+    res_reg = _mm256_packus_epi16(res_reg, res_reg);
+
+    // Save the result
+    mm256_storeu2_epi32((__m128i *)dst_ptr, (__m128i *)(dst_ptr + dst_stride),
+                        &res_reg);
+
+    // Update the source by two rows
+    src_ptr += src_stride_unrolled;
+    dst_ptr += dst_stride_unrolled;
+
+    src_reg_m1001 = src_reg_1223;
+    src_reg_1 = src_reg_3;
+  }
+}
+
 #if HAVE_AVX2 && HAVE_SSSE3
 filter8_1dfunction vpx_filter_block1d4_v8_ssse3;
-#if ARCH_X86_64
+#if VPX_ARCH_X86_64
 filter8_1dfunction vpx_filter_block1d8_v8_intrin_ssse3;
 filter8_1dfunction vpx_filter_block1d8_h8_intrin_ssse3;
 filter8_1dfunction vpx_filter_block1d4_h8_intrin_ssse3;
 #define vpx_filter_block1d8_v8_avx2 vpx_filter_block1d8_v8_intrin_ssse3
 #define vpx_filter_block1d8_h8_avx2 vpx_filter_block1d8_h8_intrin_ssse3
 #define vpx_filter_block1d4_h8_avx2 vpx_filter_block1d4_h8_intrin_ssse3
-#else  // ARCH_X86
+#else  // VPX_ARCH_X86
 filter8_1dfunction vpx_filter_block1d8_v8_ssse3;
 filter8_1dfunction vpx_filter_block1d8_h8_ssse3;
 filter8_1dfunction vpx_filter_block1d4_h8_ssse3;
 #define vpx_filter_block1d8_v8_avx2 vpx_filter_block1d8_v8_ssse3
 #define vpx_filter_block1d8_h8_avx2 vpx_filter_block1d8_h8_ssse3
 #define vpx_filter_block1d4_h8_avx2 vpx_filter_block1d4_h8_ssse3
-#endif  // ARCH_X86_64
+#endif  // VPX_ARCH_X86_64
 filter8_1dfunction vpx_filter_block1d8_v8_avg_ssse3;
 filter8_1dfunction vpx_filter_block1d8_h8_avg_ssse3;
 filter8_1dfunction vpx_filter_block1d4_v8_avg_ssse3;
@@ -376,6 +942,13 @@
 #define vpx_filter_block1d8_h2_avg_avx2 vpx_filter_block1d8_h2_avg_ssse3
 #define vpx_filter_block1d4_v2_avg_avx2 vpx_filter_block1d4_v2_avg_ssse3
 #define vpx_filter_block1d4_h2_avg_avx2 vpx_filter_block1d4_h2_avg_ssse3
+
+#define vpx_filter_block1d16_v4_avg_avx2 vpx_filter_block1d16_v8_avg_avx2
+#define vpx_filter_block1d16_h4_avg_avx2 vpx_filter_block1d16_h8_avg_avx2
+#define vpx_filter_block1d8_v4_avg_avx2 vpx_filter_block1d8_v8_avg_avx2
+#define vpx_filter_block1d8_h4_avg_avx2 vpx_filter_block1d8_h8_avg_avx2
+#define vpx_filter_block1d4_v4_avg_avx2 vpx_filter_block1d4_v8_avg_avx2
+#define vpx_filter_block1d4_h4_avg_avx2 vpx_filter_block1d4_h8_avg_avx2
 // void vpx_convolve8_horiz_avx2(const uint8_t *src, ptrdiff_t src_stride,
 //                                uint8_t *dst, ptrdiff_t dst_stride,
 //                                const InterpKernel *filter, int x0_q4,
@@ -396,10 +969,12 @@
 //                                   const InterpKernel *filter, int x0_q4,
 //                                   int32_t x_step_q4, int y0_q4,
 //                                   int y_step_q4, int w, int h);
-FUN_CONV_1D(horiz, x0_q4, x_step_q4, h, src, , avx2);
-FUN_CONV_1D(vert, y0_q4, y_step_q4, v, src - src_stride * 3, , avx2);
-FUN_CONV_1D(avg_horiz, x0_q4, x_step_q4, h, src, avg_, avx2);
-FUN_CONV_1D(avg_vert, y0_q4, y_step_q4, v, src - src_stride * 3, avg_, avx2);
+FUN_CONV_1D(horiz, x0_q4, x_step_q4, h, src, , avx2, 0);
+FUN_CONV_1D(vert, y0_q4, y_step_q4, v, src - src_stride * (num_taps / 2 - 1), ,
+            avx2, 0);
+FUN_CONV_1D(avg_horiz, x0_q4, x_step_q4, h, src, avg_, avx2, 1);
+FUN_CONV_1D(avg_vert, y0_q4, y_step_q4, v,
+            src - src_stride * (num_taps / 2 - 1), avg_, avx2, 1);
 
 // void vpx_convolve8_avx2(const uint8_t *src, ptrdiff_t src_stride,
 //                          uint8_t *dst, ptrdiff_t dst_stride,
@@ -411,6 +986,6 @@
 //                              const InterpKernel *filter, int x0_q4,
 //                              int32_t x_step_q4, int y0_q4, int y_step_q4,
 //                              int w, int h);
-FUN_CONV_2D(, avx2);
-FUN_CONV_2D(avg_, avx2);
+FUN_CONV_2D(, avx2, 0);
+FUN_CONV_2D(avg_, avx2, 1);
 #endif  // HAVE_AVX2 && HAVE_SSSE3
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_8t_intrin_ssse3.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_8t_intrin_ssse3.c
index e4f9927..77355a2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_8t_intrin_ssse3.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_8t_intrin_ssse3.c
@@ -12,20 +12,17 @@
 
 #include <string.h>
 
+#include "./vpx_config.h"
 #include "./vpx_dsp_rtcd.h"
 #include "vpx_dsp/vpx_filter.h"
 #include "vpx_dsp/x86/convolve.h"
+#include "vpx_dsp/x86/convolve_sse2.h"
 #include "vpx_dsp/x86/convolve_ssse3.h"
 #include "vpx_dsp/x86/mem_sse2.h"
 #include "vpx_dsp/x86/transpose_sse2.h"
 #include "vpx_mem/vpx_mem.h"
 #include "vpx_ports/mem.h"
 
-// These are reused by the avx2 intrinsics.
-// vpx_filter_block1d8_v8_intrin_ssse3()
-// vpx_filter_block1d8_h8_intrin_ssse3()
-// vpx_filter_block1d4_h8_intrin_ssse3()
-
 static INLINE __m128i shuffle_filter_convolve8_8_ssse3(
     const __m128i *const s, const int16_t *const filter) {
   __m128i f[4];
@@ -33,6 +30,23 @@
   return convolve8_8_ssse3(s, f);
 }
 
+// Used by the avx2 implementation.
+#if VPX_ARCH_X86_64
+// Use the intrinsics below
+filter8_1dfunction vpx_filter_block1d4_h8_intrin_ssse3;
+filter8_1dfunction vpx_filter_block1d8_h8_intrin_ssse3;
+filter8_1dfunction vpx_filter_block1d8_v8_intrin_ssse3;
+#define vpx_filter_block1d4_h8_ssse3 vpx_filter_block1d4_h8_intrin_ssse3
+#define vpx_filter_block1d8_h8_ssse3 vpx_filter_block1d8_h8_intrin_ssse3
+#define vpx_filter_block1d8_v8_ssse3 vpx_filter_block1d8_v8_intrin_ssse3
+#else  // VPX_ARCH_X86
+// Use the assembly in vpx_dsp/x86/vpx_subpixel_8t_ssse3.asm.
+filter8_1dfunction vpx_filter_block1d4_h8_ssse3;
+filter8_1dfunction vpx_filter_block1d8_h8_ssse3;
+filter8_1dfunction vpx_filter_block1d8_v8_ssse3;
+#endif
+
+#if VPX_ARCH_X86_64
 void vpx_filter_block1d4_h8_intrin_ssse3(
     const uint8_t *src_ptr, ptrdiff_t src_pitch, uint8_t *output_ptr,
     ptrdiff_t output_pitch, uint32_t output_height, const int16_t *filter) {
@@ -184,13 +198,490 @@
     output_ptr += out_pitch;
   }
 }
+#endif  // VPX_ARCH_X86_64
 
+static void vpx_filter_block1d16_h4_ssse3(const uint8_t *src_ptr,
+                                          ptrdiff_t src_stride,
+                                          uint8_t *dst_ptr,
+                                          ptrdiff_t dst_stride, uint32_t height,
+                                          const int16_t *kernel) {
+  // We will cast the kernel from 16-bit words to 8-bit words, and then extract
+  // the middle four elements of the kernel into two registers in the form
+  // ... k[3] k[2] k[3] k[2]
+  // ... k[5] k[4] k[5] k[4]
+  // Then we shuffle the source into
+  // ... s[1] s[0] s[0] s[-1]
+  // ... s[3] s[2] s[2] s[1]
+  // Calling multiply and add gives us half of the sum. Calling add gives us
+  // first half of the output. Repeat again to get the second half of the
+  // output. Finally we shuffle again to combine the two outputs.
+
+  __m128i kernel_reg;                         // Kernel
+  __m128i kernel_reg_23, kernel_reg_45;       // Segments of the kernel used
+  const __m128i reg_32 = _mm_set1_epi16(32);  // Used for rounding
+  int h;
+
+  __m128i src_reg, src_reg_shift_0, src_reg_shift_2;
+  __m128i dst_first, dst_second;
+  __m128i tmp_0, tmp_1;
+  __m128i idx_shift_0 =
+      _mm_setr_epi8(0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8);
+  __m128i idx_shift_2 =
+      _mm_setr_epi8(2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10);
+
+  // Start one pixel before as we need tap/2 - 1 = 1 sample from the past
+  src_ptr -= 1;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm_srai_epi16(kernel_reg, 1);
+  kernel_reg = _mm_packs_epi16(kernel_reg, kernel_reg);
+  kernel_reg_23 = _mm_shuffle_epi8(kernel_reg, _mm_set1_epi16(0x0302u));
+  kernel_reg_45 = _mm_shuffle_epi8(kernel_reg, _mm_set1_epi16(0x0504u));
+
+  for (h = height; h > 0; --h) {
+    // Load the source
+    src_reg = _mm_loadu_si128((const __m128i *)src_ptr);
+    src_reg_shift_0 = _mm_shuffle_epi8(src_reg, idx_shift_0);
+    src_reg_shift_2 = _mm_shuffle_epi8(src_reg, idx_shift_2);
+
+    // Partial result for first half
+    tmp_0 = _mm_maddubs_epi16(src_reg_shift_0, kernel_reg_23);
+    tmp_1 = _mm_maddubs_epi16(src_reg_shift_2, kernel_reg_45);
+    dst_first = _mm_adds_epi16(tmp_0, tmp_1);
+
+    // Do again to get the second half of dst
+    // Load the source
+    src_reg = _mm_loadu_si128((const __m128i *)(src_ptr + 8));
+    src_reg_shift_0 = _mm_shuffle_epi8(src_reg, idx_shift_0);
+    src_reg_shift_2 = _mm_shuffle_epi8(src_reg, idx_shift_2);
+
+    // Partial result for second half
+    tmp_0 = _mm_maddubs_epi16(src_reg_shift_0, kernel_reg_23);
+    tmp_1 = _mm_maddubs_epi16(src_reg_shift_2, kernel_reg_45);
+    dst_second = _mm_adds_epi16(tmp_0, tmp_1);
+
+    // Round each result
+    dst_first = mm_round_epi16_sse2(&dst_first, &reg_32, 6);
+    dst_second = mm_round_epi16_sse2(&dst_second, &reg_32, 6);
+
+    // Finally combine to get the final dst
+    dst_first = _mm_packus_epi16(dst_first, dst_second);
+    _mm_store_si128((__m128i *)dst_ptr, dst_first);
+
+    src_ptr += src_stride;
+    dst_ptr += dst_stride;
+  }
+}
+
+static void vpx_filter_block1d16_v4_ssse3(const uint8_t *src_ptr,
+                                          ptrdiff_t src_stride,
+                                          uint8_t *dst_ptr,
+                                          ptrdiff_t dst_stride, uint32_t height,
+                                          const int16_t *kernel) {
+  // We will load two rows of pixels as 8-bit words, rearrange them into the
+  // form
+  // ... s[0,1] s[-1,1] s[0,0] s[-1,0]
+  // ... s[0,9] s[-1,9] s[0,8] s[-1,8]
+  // so that we can call multiply and add with the kernel to get 16-bit words of
+  // the form
+  // ... s[0,1]k[3]+s[-1,1]k[2] s[0,0]k[3]+s[-1,0]k[2]
+  // Finally, we can add multiple rows together to get the desired output.
+
+  // Register for source s[-1:3, :]
+  __m128i src_reg_m1, src_reg_0, src_reg_1, src_reg_2, src_reg_3;
+  // Interleaved rows of the source. lo is first half, hi second
+  __m128i src_reg_m10_lo, src_reg_m10_hi, src_reg_01_lo, src_reg_01_hi;
+  __m128i src_reg_12_lo, src_reg_12_hi, src_reg_23_lo, src_reg_23_hi;
+
+  __m128i kernel_reg;                    // Kernel
+  __m128i kernel_reg_23, kernel_reg_45;  // Segments of the kernel used
+
+  // Result after multiply and add
+  __m128i res_reg_m10_lo, res_reg_01_lo, res_reg_12_lo, res_reg_23_lo;
+  __m128i res_reg_m10_hi, res_reg_01_hi, res_reg_12_hi, res_reg_23_hi;
+  __m128i res_reg_m1012, res_reg_0123;
+  __m128i res_reg_m1012_lo, res_reg_0123_lo, res_reg_m1012_hi, res_reg_0123_hi;
+
+  const __m128i reg_32 = _mm_set1_epi16(32);  // Used for rounding
+
+  // We will compute the result two rows at a time
+  const ptrdiff_t src_stride_unrolled = src_stride << 1;
+  const ptrdiff_t dst_stride_unrolled = dst_stride << 1;
+  int h;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm_srai_epi16(kernel_reg, 1);
+  kernel_reg = _mm_packs_epi16(kernel_reg, kernel_reg);
+  kernel_reg_23 = _mm_shuffle_epi8(kernel_reg, _mm_set1_epi16(0x0302u));
+  kernel_reg_45 = _mm_shuffle_epi8(kernel_reg, _mm_set1_epi16(0x0504u));
+
+  // First shuffle the data
+  src_reg_m1 = _mm_loadu_si128((const __m128i *)src_ptr);
+  src_reg_0 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride));
+  src_reg_m10_lo = _mm_unpacklo_epi8(src_reg_m1, src_reg_0);
+  src_reg_m10_hi = _mm_unpackhi_epi8(src_reg_m1, src_reg_0);
+
+  // More shuffling
+  src_reg_1 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 2));
+  src_reg_01_lo = _mm_unpacklo_epi8(src_reg_0, src_reg_1);
+  src_reg_01_hi = _mm_unpackhi_epi8(src_reg_0, src_reg_1);
+
+  for (h = height; h > 1; h -= 2) {
+    src_reg_2 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 3));
+
+    src_reg_12_lo = _mm_unpacklo_epi8(src_reg_1, src_reg_2);
+    src_reg_12_hi = _mm_unpackhi_epi8(src_reg_1, src_reg_2);
+
+    src_reg_3 = _mm_loadu_si128((const __m128i *)(src_ptr + src_stride * 4));
+
+    src_reg_23_lo = _mm_unpacklo_epi8(src_reg_2, src_reg_3);
+    src_reg_23_hi = _mm_unpackhi_epi8(src_reg_2, src_reg_3);
+
+    // Partial output from first half
+    res_reg_m10_lo = _mm_maddubs_epi16(src_reg_m10_lo, kernel_reg_23);
+    res_reg_01_lo = _mm_maddubs_epi16(src_reg_01_lo, kernel_reg_23);
+
+    res_reg_12_lo = _mm_maddubs_epi16(src_reg_12_lo, kernel_reg_45);
+    res_reg_23_lo = _mm_maddubs_epi16(src_reg_23_lo, kernel_reg_45);
+
+    // Add to get first half of the results
+    res_reg_m1012_lo = _mm_adds_epi16(res_reg_m10_lo, res_reg_12_lo);
+    res_reg_0123_lo = _mm_adds_epi16(res_reg_01_lo, res_reg_23_lo);
+
+    // Partial output for second half
+    res_reg_m10_hi = _mm_maddubs_epi16(src_reg_m10_hi, kernel_reg_23);
+    res_reg_01_hi = _mm_maddubs_epi16(src_reg_01_hi, kernel_reg_23);
+
+    res_reg_12_hi = _mm_maddubs_epi16(src_reg_12_hi, kernel_reg_45);
+    res_reg_23_hi = _mm_maddubs_epi16(src_reg_23_hi, kernel_reg_45);
+
+    // Second half of the results
+    res_reg_m1012_hi = _mm_adds_epi16(res_reg_m10_hi, res_reg_12_hi);
+    res_reg_0123_hi = _mm_adds_epi16(res_reg_01_hi, res_reg_23_hi);
+
+    // Round the words
+    res_reg_m1012_lo = mm_round_epi16_sse2(&res_reg_m1012_lo, &reg_32, 6);
+    res_reg_0123_lo = mm_round_epi16_sse2(&res_reg_0123_lo, &reg_32, 6);
+    res_reg_m1012_hi = mm_round_epi16_sse2(&res_reg_m1012_hi, &reg_32, 6);
+    res_reg_0123_hi = mm_round_epi16_sse2(&res_reg_0123_hi, &reg_32, 6);
+
+    // Combine to get the result
+    res_reg_m1012 = _mm_packus_epi16(res_reg_m1012_lo, res_reg_m1012_hi);
+    res_reg_0123 = _mm_packus_epi16(res_reg_0123_lo, res_reg_0123_hi);
+
+    _mm_store_si128((__m128i *)dst_ptr, res_reg_m1012);
+    _mm_store_si128((__m128i *)(dst_ptr + dst_stride), res_reg_0123);
+
+    // Update the source by two rows
+    src_ptr += src_stride_unrolled;
+    dst_ptr += dst_stride_unrolled;
+
+    src_reg_m10_lo = src_reg_12_lo;
+    src_reg_m10_hi = src_reg_12_hi;
+    src_reg_01_lo = src_reg_23_lo;
+    src_reg_01_hi = src_reg_23_hi;
+    src_reg_1 = src_reg_3;
+  }
+}
+
+static void vpx_filter_block1d8_h4_ssse3(const uint8_t *src_ptr,
+                                         ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                         ptrdiff_t dst_stride, uint32_t height,
+                                         const int16_t *kernel) {
+  // We will cast the kernel from 16-bit words to 8-bit words, and then extract
+  // the middle four elements of the kernel into two registers in the form
+  // ... k[3] k[2] k[3] k[2]
+  // ... k[5] k[4] k[5] k[4]
+  // Then we shuffle the source into
+  // ... s[1] s[0] s[0] s[-1]
+  // ... s[3] s[2] s[2] s[1]
+  // Calling multiply and add gives us half of the sum. Calling add gives us
+  // first half of the output. Repeat again to get the second half of the
+  // output. Finally we shuffle again to combine the two outputs.
+
+  __m128i kernel_reg;                         // Kernel
+  __m128i kernel_reg_23, kernel_reg_45;       // Segments of the kernel used
+  const __m128i reg_32 = _mm_set1_epi16(32);  // Used for rounding
+  int h;
+
+  __m128i src_reg, src_reg_shift_0, src_reg_shift_2;
+  __m128i dst_first;
+  __m128i tmp_0, tmp_1;
+  __m128i idx_shift_0 =
+      _mm_setr_epi8(0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8);
+  __m128i idx_shift_2 =
+      _mm_setr_epi8(2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10);
+
+  // Start one pixel before as we need tap/2 - 1 = 1 sample from the past
+  src_ptr -= 1;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm_srai_epi16(kernel_reg, 1);
+  kernel_reg = _mm_packs_epi16(kernel_reg, kernel_reg);
+  kernel_reg_23 = _mm_shuffle_epi8(kernel_reg, _mm_set1_epi16(0x0302u));
+  kernel_reg_45 = _mm_shuffle_epi8(kernel_reg, _mm_set1_epi16(0x0504u));
+
+  for (h = height; h > 0; --h) {
+    // Load the source
+    src_reg = _mm_loadu_si128((const __m128i *)src_ptr);
+    src_reg_shift_0 = _mm_shuffle_epi8(src_reg, idx_shift_0);
+    src_reg_shift_2 = _mm_shuffle_epi8(src_reg, idx_shift_2);
+
+    // Get the result
+    tmp_0 = _mm_maddubs_epi16(src_reg_shift_0, kernel_reg_23);
+    tmp_1 = _mm_maddubs_epi16(src_reg_shift_2, kernel_reg_45);
+    dst_first = _mm_adds_epi16(tmp_0, tmp_1);
+
+    // Round the result
+    dst_first = mm_round_epi16_sse2(&dst_first, &reg_32, 6);
+
+    // Pack to 8-bits
+    dst_first = _mm_packus_epi16(dst_first, _mm_setzero_si128());
+    _mm_storel_epi64((__m128i *)dst_ptr, dst_first);
+
+    src_ptr += src_stride;
+    dst_ptr += dst_stride;
+  }
+}
+
+static void vpx_filter_block1d8_v4_ssse3(const uint8_t *src_ptr,
+                                         ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                         ptrdiff_t dst_stride, uint32_t height,
+                                         const int16_t *kernel) {
+  // We will load two rows of pixels as 8-bit words, rearrange them into the
+  // form
+  // ... s[0,1] s[-1,1] s[0,0] s[-1,0]
+  // so that we can call multiply and add with the kernel to get 16-bit words of
+  // the form
+  // ... s[0,1]k[3]+s[-1,1]k[2] s[0,0]k[3]+s[-1,0]k[2]
+  // Finally, we can add multiple rows together to get the desired output.
+
+  // Register for source s[-1:3, :]
+  __m128i src_reg_m1, src_reg_0, src_reg_1, src_reg_2, src_reg_3;
+  // Interleaved rows of the source.
+  __m128i src_reg_m10, src_reg_01;
+  __m128i src_reg_12, src_reg_23;
+
+  __m128i kernel_reg;                    // Kernel
+  __m128i kernel_reg_23, kernel_reg_45;  // Segments of the kernel used
+
+  // Result after multiply and add
+  __m128i res_reg_m10, res_reg_01, res_reg_12, res_reg_23;
+  __m128i res_reg_m1012, res_reg_0123;
+
+  const __m128i reg_32 = _mm_set1_epi16(32);  // Used for rounding
+
+  // We will compute the result two rows at a time
+  const ptrdiff_t src_stride_unrolled = src_stride << 1;
+  const ptrdiff_t dst_stride_unrolled = dst_stride << 1;
+  int h;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm_srai_epi16(kernel_reg, 1);
+  kernel_reg = _mm_packs_epi16(kernel_reg, kernel_reg);
+  kernel_reg_23 = _mm_shuffle_epi8(kernel_reg, _mm_set1_epi16(0x0302u));
+  kernel_reg_45 = _mm_shuffle_epi8(kernel_reg, _mm_set1_epi16(0x0504u));
+
+  // First shuffle the data
+  src_reg_m1 = _mm_loadl_epi64((const __m128i *)src_ptr);
+  src_reg_0 = _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride));
+  src_reg_m10 = _mm_unpacklo_epi8(src_reg_m1, src_reg_0);
+
+  // More shuffling
+  src_reg_1 = _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride * 2));
+  src_reg_01 = _mm_unpacklo_epi8(src_reg_0, src_reg_1);
+
+  for (h = height; h > 1; h -= 2) {
+    src_reg_2 = _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride * 3));
+
+    src_reg_12 = _mm_unpacklo_epi8(src_reg_1, src_reg_2);
+
+    src_reg_3 = _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride * 4));
+
+    src_reg_23 = _mm_unpacklo_epi8(src_reg_2, src_reg_3);
+
+    // Partial output
+    res_reg_m10 = _mm_maddubs_epi16(src_reg_m10, kernel_reg_23);
+    res_reg_01 = _mm_maddubs_epi16(src_reg_01, kernel_reg_23);
+
+    res_reg_12 = _mm_maddubs_epi16(src_reg_12, kernel_reg_45);
+    res_reg_23 = _mm_maddubs_epi16(src_reg_23, kernel_reg_45);
+
+    // Add to get entire output
+    res_reg_m1012 = _mm_adds_epi16(res_reg_m10, res_reg_12);
+    res_reg_0123 = _mm_adds_epi16(res_reg_01, res_reg_23);
+
+    // Round the words
+    res_reg_m1012 = mm_round_epi16_sse2(&res_reg_m1012, &reg_32, 6);
+    res_reg_0123 = mm_round_epi16_sse2(&res_reg_0123, &reg_32, 6);
+
+    // Pack from 16-bit to 8-bit
+    res_reg_m1012 = _mm_packus_epi16(res_reg_m1012, _mm_setzero_si128());
+    res_reg_0123 = _mm_packus_epi16(res_reg_0123, _mm_setzero_si128());
+
+    _mm_storel_epi64((__m128i *)dst_ptr, res_reg_m1012);
+    _mm_storel_epi64((__m128i *)(dst_ptr + dst_stride), res_reg_0123);
+
+    // Update the source by two rows
+    src_ptr += src_stride_unrolled;
+    dst_ptr += dst_stride_unrolled;
+
+    src_reg_m10 = src_reg_12;
+    src_reg_01 = src_reg_23;
+    src_reg_1 = src_reg_3;
+  }
+}
+
+static void vpx_filter_block1d4_h4_ssse3(const uint8_t *src_ptr,
+                                         ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                         ptrdiff_t dst_stride, uint32_t height,
+                                         const int16_t *kernel) {
+  // We will cast the kernel from 16-bit words to 8-bit words, and then extract
+  // the middle four elements of the kernel into a single register in the form
+  // k[5:2] k[5:2] k[5:2] k[5:2]
+  // Then we shuffle the source into
+  // s[5:2] s[4:1] s[3:0] s[2:-1]
+  // Calling multiply and add gives us half of the sum next to each other.
+  // Calling horizontal add then gives us the output.
+
+  __m128i kernel_reg;                         // Kernel
+  const __m128i reg_32 = _mm_set1_epi16(32);  // Used for rounding
+  int h;
+
+  __m128i src_reg, src_reg_shuf;
+  __m128i dst_first;
+  __m128i shuf_idx =
+      _mm_setr_epi8(0, 1, 2, 3, 1, 2, 3, 4, 2, 3, 4, 5, 3, 4, 5, 6);
+
+  // Start one pixel before as we need tap/2 - 1 = 1 sample from the past
+  src_ptr -= 1;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm_srai_epi16(kernel_reg, 1);
+  kernel_reg = _mm_packs_epi16(kernel_reg, kernel_reg);
+  kernel_reg = _mm_shuffle_epi8(kernel_reg, _mm_set1_epi32(0x05040302u));
+
+  for (h = height; h > 0; --h) {
+    // Load the source
+    src_reg = _mm_loadu_si128((const __m128i *)src_ptr);
+    src_reg_shuf = _mm_shuffle_epi8(src_reg, shuf_idx);
+
+    // Get the result
+    dst_first = _mm_maddubs_epi16(src_reg_shuf, kernel_reg);
+    dst_first = _mm_hadds_epi16(dst_first, _mm_setzero_si128());
+
+    // Round result
+    dst_first = mm_round_epi16_sse2(&dst_first, &reg_32, 6);
+
+    // Pack to 8-bits
+    dst_first = _mm_packus_epi16(dst_first, _mm_setzero_si128());
+    *((uint32_t *)(dst_ptr)) = _mm_cvtsi128_si32(dst_first);
+
+    src_ptr += src_stride;
+    dst_ptr += dst_stride;
+  }
+}
+
+static void vpx_filter_block1d4_v4_ssse3(const uint8_t *src_ptr,
+                                         ptrdiff_t src_stride, uint8_t *dst_ptr,
+                                         ptrdiff_t dst_stride, uint32_t height,
+                                         const int16_t *kernel) {
+  // We will load two rows of pixels as 8-bit words, rearrange them into the
+  // form
+  // ... s[2,0] s[1,0] s[0,0] s[-1,0]
+  // so that we can call multiply and add with the kernel to get partial output. Then
+  // we can call horizontal add to get the output.
+  // Finally, we can add multiple rows together to get the desired output.
+  // This is done two rows at a time
+
+  // Register for source s[-1:3, :]
+  __m128i src_reg_m1, src_reg_0, src_reg_1, src_reg_2, src_reg_3;
+  // Interleaved rows of the source.
+  __m128i src_reg_m10, src_reg_01;
+  __m128i src_reg_12, src_reg_23;
+  __m128i src_reg_m1001, src_reg_1223;
+  __m128i src_reg_m1012_1023_lo, src_reg_m1012_1023_hi;
+
+  __m128i kernel_reg;  // Kernel
+
+  // Result after multiply and add
+  __m128i reg_0, reg_1;
+
+  const __m128i reg_32 = _mm_set1_epi16(32);  // Used for rounding
+
+  // We will compute the result two rows at a time
+  const ptrdiff_t src_stride_unrolled = src_stride << 1;
+  const ptrdiff_t dst_stride_unrolled = dst_stride << 1;
+  int h;
+
+  // Load Kernel
+  kernel_reg = _mm_loadu_si128((const __m128i *)kernel);
+  kernel_reg = _mm_srai_epi16(kernel_reg, 1);
+  kernel_reg = _mm_packs_epi16(kernel_reg, kernel_reg);
+  kernel_reg = _mm_shuffle_epi8(kernel_reg, _mm_set1_epi32(0x05040302u));
+
+  // First shuffle the data
+  src_reg_m1 = _mm_loadl_epi64((const __m128i *)src_ptr);
+  src_reg_0 = _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride));
+  src_reg_m10 = _mm_unpacklo_epi32(src_reg_m1, src_reg_0);
+
+  // More shuffling
+  src_reg_1 = _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride * 2));
+  src_reg_01 = _mm_unpacklo_epi32(src_reg_0, src_reg_1);
+
+  // Put three rows next to each other
+  src_reg_m1001 = _mm_unpacklo_epi8(src_reg_m10, src_reg_01);
+
+  for (h = height; h > 1; h -= 2) {
+    src_reg_2 = _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride * 3));
+    src_reg_12 = _mm_unpacklo_epi32(src_reg_1, src_reg_2);
+
+    src_reg_3 = _mm_loadl_epi64((const __m128i *)(src_ptr + src_stride * 4));
+    src_reg_23 = _mm_unpacklo_epi32(src_reg_2, src_reg_3);
+
+    // Put three rows next to each other
+    src_reg_1223 = _mm_unpacklo_epi8(src_reg_12, src_reg_23);
+
+    // Put all four rows next to each other
+    src_reg_m1012_1023_lo = _mm_unpacklo_epi16(src_reg_m1001, src_reg_1223);
+    src_reg_m1012_1023_hi = _mm_unpackhi_epi16(src_reg_m1001, src_reg_1223);
+
+    // Get the results
+    reg_0 = _mm_maddubs_epi16(src_reg_m1012_1023_lo, kernel_reg);
+    reg_1 = _mm_maddubs_epi16(src_reg_m1012_1023_hi, kernel_reg);
+    reg_0 = _mm_hadds_epi16(reg_0, _mm_setzero_si128());
+    reg_1 = _mm_hadds_epi16(reg_1, _mm_setzero_si128());
+
+    // Round the words
+    reg_0 = mm_round_epi16_sse2(&reg_0, &reg_32, 6);
+    reg_1 = mm_round_epi16_sse2(&reg_1, &reg_32, 6);
+
+    // Pack from 16-bit to 8-bit and put them in the right order
+    reg_0 = _mm_packus_epi16(reg_0, reg_0);
+    reg_1 = _mm_packus_epi16(reg_1, reg_1);
+
+    // Save the result
+    *((uint32_t *)(dst_ptr)) = _mm_cvtsi128_si32(reg_0);
+    *((uint32_t *)(dst_ptr + dst_stride)) = _mm_cvtsi128_si32(reg_1);
+
+    // Update the source by two rows
+    src_ptr += src_stride_unrolled;
+    dst_ptr += dst_stride_unrolled;
+
+    src_reg_m1001 = src_reg_1223;
+    src_reg_1 = src_reg_3;
+  }
+}
+
+// From vpx_dsp/x86/vpx_subpixel_8t_ssse3.asm
 filter8_1dfunction vpx_filter_block1d16_v8_ssse3;
 filter8_1dfunction vpx_filter_block1d16_h8_ssse3;
-filter8_1dfunction vpx_filter_block1d8_v8_ssse3;
-filter8_1dfunction vpx_filter_block1d8_h8_ssse3;
 filter8_1dfunction vpx_filter_block1d4_v8_ssse3;
-filter8_1dfunction vpx_filter_block1d4_h8_ssse3;
 filter8_1dfunction vpx_filter_block1d16_v8_avg_ssse3;
 filter8_1dfunction vpx_filter_block1d16_h8_avg_ssse3;
 filter8_1dfunction vpx_filter_block1d8_v8_avg_ssse3;
@@ -198,6 +689,15 @@
 filter8_1dfunction vpx_filter_block1d4_v8_avg_ssse3;
 filter8_1dfunction vpx_filter_block1d4_h8_avg_ssse3;
 
+// Use the [vh]8 version because there is no [vh]4 implementation.
+#define vpx_filter_block1d16_v4_avg_ssse3 vpx_filter_block1d16_v8_avg_ssse3
+#define vpx_filter_block1d16_h4_avg_ssse3 vpx_filter_block1d16_h8_avg_ssse3
+#define vpx_filter_block1d8_v4_avg_ssse3 vpx_filter_block1d8_v8_avg_ssse3
+#define vpx_filter_block1d8_h4_avg_ssse3 vpx_filter_block1d8_h8_avg_ssse3
+#define vpx_filter_block1d4_v4_avg_ssse3 vpx_filter_block1d4_v8_avg_ssse3
+#define vpx_filter_block1d4_h4_avg_ssse3 vpx_filter_block1d4_h8_avg_ssse3
+
+// From vpx_dsp/x86/vpx_subpixel_bilinear_ssse3.asm
 filter8_1dfunction vpx_filter_block1d16_v2_ssse3;
 filter8_1dfunction vpx_filter_block1d16_h2_ssse3;
 filter8_1dfunction vpx_filter_block1d8_v2_ssse3;
@@ -231,10 +731,12 @@
 //                                   const InterpKernel *filter, int x0_q4,
 //                                   int32_t x_step_q4, int y0_q4,
 //                                   int y_step_q4, int w, int h);
-FUN_CONV_1D(horiz, x0_q4, x_step_q4, h, src, , ssse3);
-FUN_CONV_1D(vert, y0_q4, y_step_q4, v, src - src_stride * 3, , ssse3);
-FUN_CONV_1D(avg_horiz, x0_q4, x_step_q4, h, src, avg_, ssse3);
-FUN_CONV_1D(avg_vert, y0_q4, y_step_q4, v, src - src_stride * 3, avg_, ssse3);
+FUN_CONV_1D(horiz, x0_q4, x_step_q4, h, src, , ssse3, 0);
+FUN_CONV_1D(vert, y0_q4, y_step_q4, v, src - src_stride * (num_taps / 2 - 1), ,
+            ssse3, 0);
+FUN_CONV_1D(avg_horiz, x0_q4, x_step_q4, h, src, avg_, ssse3, 1);
+FUN_CONV_1D(avg_vert, y0_q4, y_step_q4, v,
+            src - src_stride * (num_taps / 2 - 1), avg_, ssse3, 1);
 
 static void filter_horiz_w8_ssse3(const uint8_t *const src,
                                   const ptrdiff_t src_stride,
@@ -571,7 +1073,7 @@
   }
 }
 
-// void vp9_convolve8_ssse3(const uint8_t *src, ptrdiff_t src_stride,
+// void vpx_convolve8_ssse3(const uint8_t *src, ptrdiff_t src_stride,
 //                          uint8_t *dst, ptrdiff_t dst_stride,
 //                          const InterpKernel *filter, int x0_q4,
 //                          int32_t x_step_q4, int y0_q4, int y_step_q4,
@@ -581,5 +1083,5 @@
 //                              const InterpKernel *filter, int x0_q4,
 //                              int32_t x_step_q4, int y0_q4, int y_step_q4,
 //                              int w, int h);
-FUN_CONV_2D(, ssse3);
-FUN_CONV_2D(avg_, ssse3);
+FUN_CONV_2D(, ssse3, 0);
+FUN_CONV_2D(avg_, ssse3, 1);
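The SSSE3 4-tap paths added above all share one arithmetic scheme: the 8-tap kernel is halved (`_mm_srai_epi16(kernel_reg, 1)`), only the middle taps k[2..5] are applied, and the 16-bit sums are rounded via `mm_round_epi16_sse2(&x, &reg_32, 6)`, i.e. `(x + 32) >> 6`. A scalar sketch of that scheme (illustrative only, not libvpx code; `filter4_h_scalar` and `clamp_u8` are made-up names):

```c
#include <stdint.h>

/* Scalar model of the 4-tap path: halve the middle kernel taps (matching
 * _mm_srai_epi16(k, 1)), apply them to src[-1..2], then round the sum with
 * (sum + 32) >> 6 -- the same rounding mm_round_epi16_sse2 performs. */
static uint8_t clamp_u8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

static void filter4_h_scalar(const uint8_t *src, uint8_t *dst, int w,
                             const int16_t *kernel) {
  const int k2 = kernel[2] >> 1, k3 = kernel[3] >> 1;
  const int k4 = kernel[4] >> 1, k5 = kernel[5] >> 1;
  for (int x = 0; x < w; ++x) {
    const int sum = src[x - 1] * k2 + src[x] * k3 +
                    src[x + 1] * k4 + src[x + 2] * k5;
    dst[x] = clamp_u8((sum + 32) >> 6);
  }
}
```

VP9 kernels sum to 128, so halving plus the shift by 6 scales correctly: with the identity kernel {0, 0, 0, 128, 0, 0, 0, 0} the halved tap is 64 and (64*v + 32) >> 6 == v, leaving pixels unchanged.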
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_8t_ssse3.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_8t_ssse3.asm
index 952d930..fe617f1 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_8t_ssse3.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_dsp/x86/vpx_subpixel_8t_ssse3.asm
@@ -26,7 +26,7 @@
 %define LOCAL_VARS_SIZE 16*6
 
 %macro SETUP_LOCAL_VARS 0
-    ; TODO(slavarnway): using xmm registers for these on ARCH_X86_64 +
+    ; TODO(slavarnway): using xmm registers for these on VPX_ARCH_X86_64 +
     ; pmaddubsw has a higher latency on some platforms, this might be eased by
     ; interleaving the instructions.
     %define    k0k1  [rsp + 16*0]
@@ -48,7 +48,7 @@
     mova       k2k3, m1
     mova       k4k5, m2
     mova       k6k7, m3
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
     %define     krd  m12
     %define    tmp0  [rsp + 16*4]
     %define    tmp1  [rsp + 16*5]
@@ -68,7 +68,7 @@
 %endm
 
 ;-------------------------------------------------------------------------------
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
   %define LOCAL_VARS_SIZE_H4 0
 %else
   %define LOCAL_VARS_SIZE_H4 16*4
@@ -79,7 +79,7 @@
                             src, sstride, dst, dstride, height, filter
     mova                m4, [filterq]
     packsswb            m4, m4
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
     %define       k0k1k4k5  m8
     %define       k2k3k6k7  m9
     %define            krd  m10
@@ -339,7 +339,7 @@
 ; TODO(Linfeng): Detect cpu type and choose the code with better performance.
 %define X86_SUBPIX_VFILTER_PREFER_SLOW_CELERON 1
 
-%if ARCH_X86_64 && X86_SUBPIX_VFILTER_PREFER_SLOW_CELERON
+%if VPX_ARCH_X86_64 && X86_SUBPIX_VFILTER_PREFER_SLOW_CELERON
     %define NUM_GENERAL_REG_USED 9
 %else
     %define NUM_GENERAL_REG_USED 6
@@ -359,9 +359,9 @@
 
     dec                 heightd
 
-%if ARCH_X86 || X86_SUBPIX_VFILTER_PREFER_SLOW_CELERON
+%if VPX_ARCH_X86 || X86_SUBPIX_VFILTER_PREFER_SLOW_CELERON
 
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
     %define               src1q  r7
     %define           sstride6q  r8
     %define          dst_stride  dstrideq
@@ -467,7 +467,7 @@
     movx                 [dstq], m0
 
 %else
-    ; ARCH_X86_64
+    ; VPX_ARCH_X86_64
 
     movx                     m0, [srcq                ]     ;A
     movx                     m1, [srcq + sstrideq     ]     ;B
@@ -567,7 +567,7 @@
 %endif
     movx                 [dstq], m0
 
-%endif ; ARCH_X86_64
+%endif ; VPX_ARCH_X86_64
 
 .done:
     REP_RET
@@ -581,9 +581,9 @@
     mova                     m4, [filterq]
     SETUP_LOCAL_VARS
 
-%if ARCH_X86 || X86_SUBPIX_VFILTER_PREFER_SLOW_CELERON
+%if VPX_ARCH_X86 || X86_SUBPIX_VFILTER_PREFER_SLOW_CELERON
 
-%if ARCH_X86_64
+%if VPX_ARCH_X86_64
     %define               src1q  r7
     %define           sstride6q  r8
     %define          dst_stride  dstrideq
@@ -654,7 +654,7 @@
     REP_RET
 
 %else
-    ; ARCH_X86_64
+    ; VPX_ARCH_X86_64
     dec                 heightd
 
     movu                     m1, [srcq                ]     ;A
@@ -790,7 +790,7 @@
 .done:
     REP_RET
 
-%endif ; ARCH_X86_64
+%endif ; VPX_ARCH_X86_64
 
 %endm
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_mem/include/vpx_mem_intrnl.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_mem/include/vpx_mem_intrnl.h
index 2c259d3..5631130 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_mem/include/vpx_mem_intrnl.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_mem/include/vpx_mem_intrnl.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_MEM_INCLUDE_VPX_MEM_INTRNL_H_
-#define VPX_MEM_INCLUDE_VPX_MEM_INTRNL_H_
+#ifndef VPX_VPX_MEM_INCLUDE_VPX_MEM_INTRNL_H_
+#define VPX_VPX_MEM_INCLUDE_VPX_MEM_INTRNL_H_
 #include "./vpx_config.h"
 
 #define ADDRESS_STORAGE_SIZE sizeof(size_t)
@@ -28,4 +28,4 @@
 #define align_addr(addr, align) \
   (void *)(((size_t)(addr) + ((align)-1)) & ~(size_t)((align)-1))
 
-#endif  // VPX_MEM_INCLUDE_VPX_MEM_INTRNL_H_
+#endif  // VPX_VPX_MEM_INCLUDE_VPX_MEM_INTRNL_H_
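The `align_addr` macro kept intact above rounds an address up to the next `align`-byte boundary, which only works when `align` is a power of two. A quick check of the arithmetic (the macro body is copied from the header; the standalone setting is illustrative):

```c
#include <stddef.h>

/* Same bit trick as vpx_mem_intrnl.h: add (align - 1), then clear the low
 * bits. Valid only for power-of-two align values. */
#define align_addr(addr, align) \
  (void *)(((size_t)(addr) + ((align)-1)) & ~(size_t)((align)-1))
```

For example, aligning address 13 to 16 bytes yields 16, while an already-aligned address is returned unchanged.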
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.c
index eeba34c..18abf11 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.c
@@ -16,12 +16,14 @@
 #include "include/vpx_mem_intrnl.h"
 #include "vpx/vpx_integer.h"
 
+#if !defined(VPX_MAX_ALLOCABLE_MEMORY)
 #if SIZE_MAX > (1ULL << 40)
 #define VPX_MAX_ALLOCABLE_MEMORY (1ULL << 40)
 #else
 // For 32-bit targets keep this below INT_MAX to avoid valgrind warnings.
 #define VPX_MAX_ALLOCABLE_MEMORY ((1ULL << 31) - (1 << 16))
 #endif
+#endif
 
 // Returns 0 in case of overflow of nmemb * size.
 static int check_size_argument_overflow(uint64_t nmemb, uint64_t size) {
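The new `!defined(VPX_MAX_ALLOCABLE_MEMORY)` guard lets embedders override the allocation cap at build time. A sketch of how such a cap feeds the calloc-style overflow check whose signature appears above (the function body here is an assumption, not the libvpx implementation; the cap value assumes a 64-bit target):

```c
#include <stdint.h>

#define VPX_MAX_ALLOCABLE_MEMORY (1ULL << 40)

/* Hypothetical sketch: reject nmemb * size when the product would overflow
 * or exceed the cap. Dividing the cap by nmemb avoids computing a 64-bit
 * product that could wrap before being compared. */
static int check_size_argument_overflow(uint64_t nmemb, uint64_t size) {
  if (nmemb == 0) return 1; /* zero elements: nothing to overflow */
  if (size > VPX_MAX_ALLOCABLE_MEMORY / nmemb) return 0;
  return 1;
}
```

The division-based test is the standard way to guard a `nmemb * size` allocation request without relying on wrap-around behavior.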
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.h
index a4274b8..7689a05 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_mem/vpx_mem.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_MEM_VPX_MEM_H_
-#define VPX_MEM_VPX_MEM_H_
+#ifndef VPX_VPX_MEM_VPX_MEM_H_
+#define VPX_VPX_MEM_VPX_MEM_H_
 
 #include "vpx_config.h"
 #if defined(__uClinux__)
@@ -49,4 +49,4 @@
 }
 #endif
 
-#endif  // VPX_MEM_VPX_MEM_H_
+#endif  // VPX_VPX_MEM_VPX_MEM_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/arm.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/arm.h
index 7be6104..6458a2c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/arm.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/arm.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_PORTS_ARM_H_
-#define VPX_PORTS_ARM_H_
+#ifndef VPX_VPX_PORTS_ARM_H_
+#define VPX_VPX_PORTS_ARM_H_
 #include <stdlib.h>
 #include "vpx_config.h"
 
@@ -36,4 +36,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_PORTS_ARM_H_
+#endif  // VPX_VPX_PORTS_ARM_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/asmdefs_mmi.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/asmdefs_mmi.h
index a9a4974..28355bf 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/asmdefs_mmi.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/asmdefs_mmi.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_PORTS_ASMDEFS_MMI_H_
-#define VPX_PORTS_ASMDEFS_MMI_H_
+#ifndef VPX_VPX_PORTS_ASMDEFS_MMI_H_
+#define VPX_VPX_PORTS_ASMDEFS_MMI_H_
 
 #include "./vpx_config.h"
 #include "vpx/vpx_integer.h"
@@ -78,4 +78,4 @@
 
 #endif /* HAVE_MMI */
 
-#endif /* VPX_PORTS_ASMDEFS_MMI_H_ */
+#endif  // VPX_VPX_PORTS_ASMDEFS_MMI_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/bitops.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/bitops.h
index 0ed7189..5b2f31c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/bitops.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/bitops.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_PORTS_BITOPS_H_
-#define VPX_PORTS_BITOPS_H_
+#ifndef VPX_VPX_PORTS_BITOPS_H_
+#define VPX_VPX_PORTS_BITOPS_H_
 
 #include <assert.h>
 
@@ -72,4 +72,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_PORTS_BITOPS_H_
+#endif  // VPX_VPX_PORTS_BITOPS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/compiler_attributes.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/compiler_attributes.h
new file mode 100644
index 0000000..3543520
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/compiler_attributes.h
@@ -0,0 +1,59 @@
+/*
+ *  Copyright (c) 2020 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VPX_PORTS_COMPILER_ATTRIBUTES_H_
+#define VPX_VPX_PORTS_COMPILER_ATTRIBUTES_H_
+
+#if !defined(__has_feature)
+#define __has_feature(x) 0
+#endif  // !defined(__has_feature)
+
+#if !defined(__has_attribute)
+#define __has_attribute(x) 0
+#endif  // !defined(__has_attribute)
+
+//------------------------------------------------------------------------------
+// Sanitizer attributes.
+
+#if __has_feature(address_sanitizer) || defined(__SANITIZE_ADDRESS__)
+#define VPX_WITH_ASAN 1
+#else
+#define VPX_WITH_ASAN 0
+#endif  // __has_feature(address_sanitizer) || defined(__SANITIZE_ADDRESS__)
+
+#if defined(__clang__) && __has_attribute(no_sanitize)
+#define VPX_NO_UNSIGNED_OVERFLOW_CHECK \
+  __attribute__((no_sanitize("unsigned-integer-overflow")))
+#endif
+
+#ifndef VPX_NO_UNSIGNED_OVERFLOW_CHECK
+#define VPX_NO_UNSIGNED_OVERFLOW_CHECK
+#endif
+
+//------------------------------------------------------------------------------
+// Variable attributes.
+
+#if __has_attribute(uninitialized)
+// Attribute "uninitialized" disables -ftrivial-auto-var-init=pattern for
+// the specified variable.
+//
+// -ftrivial-auto-var-init is security risk mitigation feature, so attribute
+// should not be used "just in case", but only to fix real performance
+// bottlenecks when other approaches do not work. In general the compiler is
+// quite effective at eliminating unneeded initializations introduced by the
+// flag, e.g. when they are followed by actual initialization by a program.
+// However if compiler optimization fails and code refactoring is hard, the
+// attribute can be used as a workaround.
+#define VPX_UNINITIALIZED __attribute__((uninitialized))
+#else
+#define VPX_UNINITIALIZED
+#endif  // __has_attribute(uninitialized)
+
+#endif  // VPX_VPX_PORTS_COMPILER_ATTRIBUTES_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/emmintrin_compat.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/emmintrin_compat.h
index 903534e..d6cc68e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/emmintrin_compat.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/emmintrin_compat.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_PORTS_EMMINTRIN_COMPAT_H_
-#define VPX_PORTS_EMMINTRIN_COMPAT_H_
+#ifndef VPX_VPX_PORTS_EMMINTRIN_COMPAT_H_
+#define VPX_VPX_PORTS_EMMINTRIN_COMPAT_H_
 
 #if defined(__GNUC__) && __GNUC__ < 4
 /* From emmintrin.h (gcc 4.5.3) */
@@ -52,4 +52,4 @@
 }
 #endif
 
-#endif  // VPX_PORTS_EMMINTRIN_COMPAT_H_
+#endif  // VPX_VPX_PORTS_EMMINTRIN_COMPAT_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/emms_mmx.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/emms_mmx.asm
new file mode 100644
index 0000000..9f33590
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/emms_mmx.asm
@@ -0,0 +1,18 @@
+;
+;  Copyright (c) 2010 The WebM project authors. All Rights Reserved.
+;
+;  Use of this source code is governed by a BSD-style license
+;  that can be found in the LICENSE file in the root of the source
+;  tree. An additional intellectual property rights grant can be found
+;  in the file PATENTS.  All contributing project authors may
+;  be found in the AUTHORS file in the root of the source tree.
+;
+
+
+%include "vpx_ports/x86_abi_support.asm"
+
+section .text
+global sym(vpx_clear_system_state) PRIVATE
+sym(vpx_clear_system_state):
+    emms
+    ret
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/emms_mmx.c
similarity index 66%
rename from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/config.h
rename to Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/emms_mmx.c
index 3c1ab99..f1036b9 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/emms_mmx.c
@@ -1,5 +1,5 @@
 /*
- *  Copyright (c) 2010 The WebM project authors. All Rights Reserved.
+ *  Copyright (c) 2018 The WebM project authors. All Rights Reserved.
  *
  *  Use of this source code is governed by a BSD-style license
  *  that can be found in the LICENSE file in the root of the source
@@ -8,9 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_PORTS_CONFIG_H_
-#define VPX_PORTS_CONFIG_H_
+#include <mmintrin.h>
 
-#include "vpx_config.h"
+#include "vpx_ports/system_state.h"
 
-#endif  // VPX_PORTS_CONFIG_H_
+void vpx_clear_system_state() { _mm_empty(); }
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/emms.asm b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/float_control_word.asm
similarity index 90%
rename from Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/emms.asm
rename to Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/float_control_word.asm
index db8da28..256dae0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/emms.asm
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/float_control_word.asm
@@ -12,11 +12,6 @@
 %include "vpx_ports/x86_abi_support.asm"
 
 section .text
-global sym(vpx_reset_mmx_state) PRIVATE
-sym(vpx_reset_mmx_state):
-    emms
-    ret
-
 
 %if LIBVPX_YASM_WIN64
 global sym(vpx_winx64_fldcw) PRIVATE
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/mem.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/mem.h
index bfef783..5eccfe8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/mem.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/mem.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_PORTS_MEM_H_
-#define VPX_PORTS_MEM_H_
+#ifndef VPX_VPX_PORTS_MEM_H_
+#define VPX_VPX_PORTS_MEM_H_
 
 #include "vpx_config.h"
 #include "vpx/vpx_integer.h"
@@ -41,14 +41,4 @@
 #define CAST_TO_BYTEPTR(x) ((uint8_t *)((uintptr_t)(x)))
 #endif  // CONFIG_VP9_HIGHBITDEPTH
 
-#if !defined(__has_feature)
-#define __has_feature(x) 0
-#endif  // !defined(__has_feature)
-
-#if __has_feature(address_sanitizer) || defined(__SANITIZE_ADDRESS__)
-#define VPX_WITH_ASAN 1
-#else
-#define VPX_WITH_ASAN 0
-#endif  // __has_feature(address_sanitizer) || defined(__SANITIZE_ADDRESS__)
-
-#endif  // VPX_PORTS_MEM_H_
+#endif  // VPX_VPX_PORTS_MEM_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/mem_ops.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/mem_ops.h
index b060859..b17015e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/mem_ops.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/mem_ops.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_PORTS_MEM_OPS_H_
-#define VPX_PORTS_MEM_OPS_H_
+#ifndef VPX_VPX_PORTS_MEM_OPS_H_
+#define VPX_VPX_PORTS_MEM_OPS_H_
 
 /* \file
  * \brief Provides portable memory access primitives
@@ -224,4 +224,4 @@
   mem[3] = (MAU_T)((val >> 24) & 0xff);
 }
 /* clang-format on */
-#endif  // VPX_PORTS_MEM_OPS_H_
+#endif  // VPX_VPX_PORTS_MEM_OPS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/mem_ops_aligned.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/mem_ops_aligned.h
index ccac391..8649b87 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/mem_ops_aligned.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/mem_ops_aligned.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_PORTS_MEM_OPS_ALIGNED_H_
-#define VPX_PORTS_MEM_OPS_ALIGNED_H_
+#ifndef VPX_VPX_PORTS_MEM_OPS_ALIGNED_H_
+#define VPX_VPX_PORTS_MEM_OPS_ALIGNED_H_
 
 #include "vpx/vpx_integer.h"
 
@@ -168,4 +168,4 @@
 #undef swap_endian_32_se
 /* clang-format on */
 
-#endif  // VPX_PORTS_MEM_OPS_ALIGNED_H_
+#endif  // VPX_VPX_PORTS_MEM_OPS_ALIGNED_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/msvc.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/msvc.h
index 3ff7147..d58de35 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/msvc.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/msvc.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_PORTS_MSVC_H_
-#define VPX_PORTS_MSVC_H_
+#ifndef VPX_VPX_PORTS_MSVC_H_
+#define VPX_VPX_PORTS_MSVC_H_
 #ifdef _MSC_VER
 
 #include "./vpx_config.h"
@@ -29,4 +29,4 @@
 #endif  // _MSC_VER < 1800
 
 #endif  // _MSC_VER
-#endif  // VPX_PORTS_MSVC_H_
+#endif  // VPX_VPX_PORTS_MSVC_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/ppc.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/ppc.h
index ed29ef2..a11f4e8 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/ppc.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/ppc.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_PORTS_PPC_H_
-#define VPX_PORTS_PPC_H_
+#ifndef VPX_VPX_PORTS_PPC_H_
+#define VPX_VPX_PORTS_PPC_H_
 #include <stdlib.h>
 
 #include "./vpx_config.h"
@@ -26,4 +26,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_PORTS_PPC_H_
+#endif  // VPX_VPX_PORTS_PPC_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/static_assert.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/static_assert.h
new file mode 100644
index 0000000..f632d9f
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/static_assert.h
@@ -0,0 +1,30 @@
+/*
+ *  Copyright (c) 2020 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VPX_PORTS_STATIC_ASSERT_H_
+#define VPX_VPX_PORTS_STATIC_ASSERT_H_
+
+#if defined(_MSC_VER)
+#define VPX_STATIC_ASSERT(boolexp)              \
+  do {                                          \
+    char vpx_static_assert[(boolexp) ? 1 : -1]; \
+    (void)vpx_static_assert;                    \
+  } while (0)
+#else  // !_MSC_VER
+#define VPX_STATIC_ASSERT(boolexp)                         \
+  do {                                                     \
+    struct {                                               \
+      unsigned int vpx_static_assert : (boolexp) ? 1 : -1; \
+    } vpx_static_assert;                                   \
+    (void)vpx_static_assert;                               \
+  } while (0)
+#endif  // _MSC_VER
+
+#endif  // VPX_VPX_PORTS_STATIC_ASSERT_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/system_state.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/system_state.h
index 7bf1634..32ebd0e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/system_state.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/system_state.h
@@ -8,22 +8,23 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_PORTS_SYSTEM_STATE_H_
-#define VPX_PORTS_SYSTEM_STATE_H_
+#ifndef VPX_VPX_PORTS_SYSTEM_STATE_H_
+#define VPX_VPX_PORTS_SYSTEM_STATE_H_
 
 #include "./vpx_config.h"
 
-#if ARCH_X86 || ARCH_X86_64
-
-void vpx_reset_mmx_state(void);
-
-#if HAVE_MMX
-#define vpx_clear_system_state() vpx_reset_mmx_state()
-#else
-#define vpx_clear_system_state() { }
+#ifdef __cplusplus
+extern "C" {
 #endif
 
+#if (VPX_ARCH_X86 || VPX_ARCH_X86_64) && HAVE_MMX
+extern void vpx_clear_system_state(void);
 #else
 #define vpx_clear_system_state()
-#endif  // ARCH_X86 || ARCH_X86_64
-#endif  // VPX_PORTS_SYSTEM_STATE_H_
+#endif  // (VPX_ARCH_X86 || VPX_ARCH_X86_64) && HAVE_MMX
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif  // VPX_VPX_PORTS_SYSTEM_STATE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/vpx_once.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/vpx_once.h
index 7d9fc3b..4eb592b 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/vpx_once.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/vpx_once.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_PORTS_VPX_ONCE_H_
-#define VPX_PORTS_VPX_ONCE_H_
+#ifndef VPX_VPX_PORTS_VPX_ONCE_H_
+#define VPX_VPX_PORTS_VPX_ONCE_H_
 
 #include "vpx_config.h"
 
@@ -137,4 +137,4 @@
 }
 #endif
 
-#endif  // VPX_PORTS_VPX_ONCE_H_
+#endif  // VPX_VPX_PORTS_VPX_ONCE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/vpx_ports.mk b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/vpx_ports.mk
index e17145e..2331773 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/vpx_ports.mk
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/vpx_ports.mk
@@ -12,23 +12,36 @@
 PORTS_SRCS-yes += vpx_ports.mk
 
 PORTS_SRCS-yes += bitops.h
+PORTS_SRCS-yes += compiler_attributes.h
 PORTS_SRCS-yes += mem.h
 PORTS_SRCS-yes += msvc.h
+PORTS_SRCS-yes += static_assert.h
 PORTS_SRCS-yes += system_state.h
 PORTS_SRCS-yes += vpx_timer.h
 
-ifeq ($(ARCH_X86)$(ARCH_X86_64),yes)
-PORTS_SRCS-yes += emms.asm
+ifeq ($(VPX_ARCH_X86),yes)
+PORTS_SRCS-$(HAVE_MMX) += emms_mmx.c
+endif
+ifeq ($(VPX_ARCH_X86_64),yes)
+# Visual Studio x64 does not support the _mm_empty() intrinsic.
+PORTS_SRCS-$(HAVE_MMX) += emms_mmx.asm
+endif
+
+ifeq ($(VPX_ARCH_X86_64),yes)
+PORTS_SRCS-$(CONFIG_MSVS) += float_control_word.asm
+endif
+
+ifeq ($(VPX_ARCH_X86)$(VPX_ARCH_X86_64),yes)
 PORTS_SRCS-yes += x86.h
 PORTS_SRCS-yes += x86_abi_support.asm
 endif
 
-PORTS_SRCS-$(ARCH_ARM) += arm_cpudetect.c
-PORTS_SRCS-$(ARCH_ARM) += arm.h
+PORTS_SRCS-$(VPX_ARCH_ARM) += arm_cpudetect.c
+PORTS_SRCS-$(VPX_ARCH_ARM) += arm.h
 
-PORTS_SRCS-$(ARCH_PPC) += ppc_cpudetect.c
-PORTS_SRCS-$(ARCH_PPC) += ppc.h
+PORTS_SRCS-$(VPX_ARCH_PPC) += ppc_cpudetect.c
+PORTS_SRCS-$(VPX_ARCH_PPC) += ppc.h
 
-ifeq ($(ARCH_MIPS), yes)
+ifeq ($(VPX_ARCH_MIPS), yes)
 PORTS_SRCS-yes += asmdefs_mmi.h
 endif
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/vpx_timer.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/vpx_timer.h
index 2083b4e..4934d52 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/vpx_timer.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/vpx_timer.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_PORTS_VPX_TIMER_H_
-#define VPX_PORTS_VPX_TIMER_H_
+#ifndef VPX_VPX_PORTS_VPX_TIMER_H_
+#define VPX_VPX_PORTS_VPX_TIMER_H_
 
 #include "./vpx_config.h"
 
@@ -106,4 +106,4 @@
 
 #endif /* CONFIG_OS_SUPPORT */
 
-#endif  // VPX_PORTS_VPX_TIMER_H_
+#endif  // VPX_VPX_PORTS_VPX_TIMER_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/x86.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/x86.h
index ced65ac..14f43444 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/x86.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_ports/x86.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_PORTS_X86_H_
-#define VPX_PORTS_X86_H_
+#ifndef VPX_VPX_PORTS_X86_H_
+#define VPX_VPX_PORTS_X86_H_
 #include <stdlib.h>
 
 #if defined(_MSC_VER)
@@ -43,7 +43,7 @@
 } vpx_cpu_t;
 
 #if defined(__GNUC__) && __GNUC__ || defined(__ANDROID__)
-#if ARCH_X86_64
+#if VPX_ARCH_X86_64
 #define cpuid(func, func2, ax, bx, cx, dx)                      \
   __asm__ __volatile__("cpuid           \n\t"                   \
                        : "=a"(ax), "=b"(bx), "=c"(cx), "=d"(dx) \
@@ -59,7 +59,7 @@
 #endif
 #elif defined(__SUNPRO_C) || \
     defined(__SUNPRO_CC) /* end __GNUC__ or __ANDROID__*/
-#if ARCH_X86_64
+#if VPX_ARCH_X86_64
 #define cpuid(func, func2, ax, bx, cx, dx)     \
   asm volatile(                                \
       "xchg %rsi, %rbx \n\t"                   \
@@ -79,7 +79,7 @@
       : "a"(func), "c"(func2));
 #endif
 #else /* end __SUNPRO__ */
-#if ARCH_X86_64
+#if VPX_ARCH_X86_64
 #if defined(_MSC_VER) && _MSC_VER > 1500
 #define cpuid(func, func2, a, b, c, d) \
   do {                                 \
@@ -161,12 +161,12 @@
 #define HAS_AVX2 0x080
 #define HAS_AVX512 0x100
 #ifndef BIT
-#define BIT(n) (1u << n)
+#define BIT(n) (1u << (n))
 #endif
 
 static INLINE int x86_simd_caps(void) {
   unsigned int flags = 0;
-  unsigned int mask = ~0;
+  unsigned int mask = ~0u;
   unsigned int max_cpuid_val, reg_eax, reg_ebx, reg_ecx, reg_edx;
   char *env;
   (void)reg_ebx;
@@ -202,6 +202,7 @@
 
   // bits 27 (OSXSAVE) & 28 (256-bit AVX)
   if ((reg_ecx & (BIT(27) | BIT(28))) == (BIT(27) | BIT(28))) {
+    // Check for OS-support of YMM state. Necessary for AVX and AVX2.
     if ((xgetbv() & 0x6) == 0x6) {
       flags |= HAS_AVX;
 
@@ -214,8 +215,10 @@
         // bits 16 (AVX-512F) & 17 (AVX-512DQ) & 28 (AVX-512CD) &
         // 30 (AVX-512BW) & 32 (AVX-512VL)
         if ((reg_ebx & (BIT(16) | BIT(17) | BIT(28) | BIT(30) | BIT(31))) ==
-            (BIT(16) | BIT(17) | BIT(28) | BIT(30) | BIT(31)))
-          flags |= HAS_AVX512;
+            (BIT(16) | BIT(17) | BIT(28) | BIT(30) | BIT(31))) {
+          // Check for OS-support of ZMM and YMM state. Necessary for AVX-512.
+          if ((xgetbv() & 0xe6) == 0xe6) flags |= HAS_AVX512;
+        }
       }
     }
   }
@@ -223,11 +226,26 @@
   return flags & mask;
 }
 
-// Note:
-//  32-bit CPU cycle counter is light-weighted for most function performance
-//  measurement. For large function (CPU time > a couple of seconds), 64-bit
-//  counter should be used.
-// 32-bit CPU cycle counter
+// Fine-Grain Measurement Functions
+//
+// If you are timing a small region of code, access the timestamp counter
+// (TSC) via:
+//
+// unsigned int start = x86_tsc_start();
+//   ...
+// unsigned int end = x86_tsc_end();
+// unsigned int diff = end - start;
+//
+// The start/end functions introduce a few more instructions than using
+// x86_readtsc directly, but prevent the CPU's out-of-order execution from
+// affecting the measurement (by having earlier/later instructions be evaluated
+// in the time interval). See the white paper, "How to Benchmark Code
+// Execution Times on Intel® IA-32 and IA-64 Instruction Set Architectures" by
+// Gabriele Paoloni for more information.
+//
+// If you are timing a large function (CPU time > a couple of seconds), use
+// x86_readtsc64 to read the timestamp counter in a 64-bit integer. The
+// out-of-order leakage that can occur is minimal compared to total runtime.
 static INLINE unsigned int x86_readtsc(void) {
 #if defined(__GNUC__) && __GNUC__
   unsigned int tsc;
@@ -238,7 +256,7 @@
   asm volatile("rdtsc\n\t" : "=a"(tsc) :);
   return tsc;
 #else
-#if ARCH_X86_64
+#if VPX_ARCH_X86_64
   return (unsigned int)__rdtsc();
 #else
   __asm rdtsc;
@@ -256,7 +274,7 @@
   asm volatile("rdtsc\n\t" : "=a"(lo), "=d"(hi));
   return ((uint64_t)hi << 32) | lo;
 #else
-#if ARCH_X86_64
+#if VPX_ARCH_X86_64
   return (uint64_t)__rdtsc();
 #else
   __asm rdtsc;
@@ -264,12 +282,47 @@
 #endif
 }
 
+// 32-bit CPU cycle counter with a partial fence against out-of-order execution.
+static INLINE unsigned int x86_readtscp(void) {
+#if defined(__GNUC__) && __GNUC__
+  unsigned int tscp;
+  __asm__ __volatile__("rdtscp\n\t" : "=a"(tscp) :);
+  return tscp;
+#elif defined(__SUNPRO_C) || defined(__SUNPRO_CC)
+  unsigned int tscp;
+  asm volatile("rdtscp\n\t" : "=a"(tscp) :);
+  return tscp;
+#elif defined(_MSC_VER)
+  unsigned int ui;
+  return (unsigned int)__rdtscp(&ui);
+#else
+#if VPX_ARCH_X86_64
+  return (unsigned int)__rdtscp();
+#else
+  __asm rdtscp;
+#endif
+#endif
+}
+
+static INLINE unsigned int x86_tsc_start(void) {
+  unsigned int reg_eax, reg_ebx, reg_ecx, reg_edx;
+  cpuid(0, 0, reg_eax, reg_ebx, reg_ecx, reg_edx);
+  return x86_readtsc();
+}
+
+static INLINE unsigned int x86_tsc_end(void) {
+  uint32_t v = x86_readtscp();
+  unsigned int reg_eax, reg_ebx, reg_ecx, reg_edx;
+  cpuid(0, 0, reg_eax, reg_ebx, reg_ecx, reg_edx);
+  return v;
+}
+
 #if defined(__GNUC__) && __GNUC__
 #define x86_pause_hint() __asm__ __volatile__("pause \n\t")
 #elif defined(__SUNPRO_C) || defined(__SUNPRO_CC)
 #define x86_pause_hint() asm volatile("pause \n\t")
 #else
-#if ARCH_X86_64
+#if VPX_ARCH_X86_64
 #define x86_pause_hint() _mm_pause();
 #else
 #define x86_pause_hint() __asm pause
@@ -294,7 +347,7 @@
   asm volatile("fstcw %0\n\t" : "=m"(*&mode) :);
   return mode;
 }
-#elif ARCH_X86_64
+#elif VPX_ARCH_X86_64
 /* No fldcw intrinsics on Windows x64, punt to external asm */
 extern void vpx_winx64_fldcw(unsigned short mode);
 extern unsigned short vpx_winx64_fstcw(void);
@@ -313,14 +366,23 @@
 
 static INLINE unsigned int x87_set_double_precision(void) {
   unsigned int mode = x87_get_control_word();
+  // Intel 64 and IA-32 Architectures Developer's Manual: Vol. 1
+  // https://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-vol-1-manual.pdf
+  // 8.1.5.2 Precision Control Field
+  // Bits 8 and 9 (0x300) of the x87 FPU Control Word ("Precision Control")
+  // determine the number of bits used in floating point calculations. To match
+  // later SSE instructions restrict x87 operations to Double Precision (0x200).
+  // Precision                     PC Field
+  // Single Precision (24-Bits)    00B
+  // Reserved                      01B
+  // Double Precision (53-Bits)    10B
+  // Extended Precision (64-Bits)  11B
   x87_set_control_word((mode & ~0x300) | 0x200);
   return mode;
 }
 
-extern void vpx_reset_mmx_state(void);
-
 #ifdef __cplusplus
 }  // extern "C"
 #endif
 
-#endif  // VPX_PORTS_X86_H_
+#endif  // VPX_VPX_PORTS_X86_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/generic/yv12config.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/generic/yv12config.c
index 220b8be..eee291c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/generic/yv12config.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/generic/yv12config.c
@@ -15,6 +15,9 @@
 #include "vpx_mem/vpx_mem.h"
 #include "vpx_ports/mem.h"
 
+#if defined(VPX_MAX_ALLOCABLE_MEMORY)
+#include "vp9/common/vp9_onyxc_int.h"
+#endif  // VPX_MAX_ALLOCABLE_MEMORY
 /****************************************************************************
  *  Exports
  ****************************************************************************/
@@ -57,10 +60,18 @@
      *  uv_stride == y_stride/2, so enforce this here. */
     int uv_stride = y_stride >> 1;
     int uvplane_size = (uv_height + border) * uv_stride;
-    const int frame_size = yplane_size + 2 * uvplane_size;
+    const size_t frame_size = yplane_size + 2 * uvplane_size;
 
     if (!ybf->buffer_alloc) {
       ybf->buffer_alloc = (uint8_t *)vpx_memalign(32, frame_size);
+#if defined(__has_feature)
+#if __has_feature(memory_sanitizer)
+      // This memset is needed for fixing the issue of using uninitialized
+      // value in msan test. It will cause a perf loss, so only do this for
+      // msan test.
+      memset(ybf->buffer_alloc, 0, frame_size);
+#endif
+#endif
       ybf->buffer_alloc_sz = frame_size;
     }
 
@@ -142,6 +153,17 @@
                              int border, int byte_alignment,
                              vpx_codec_frame_buffer_t *fb,
                              vpx_get_frame_buffer_cb_fn_t cb, void *cb_priv) {
+#if CONFIG_SIZE_LIMIT
+  if (width > DECODE_WIDTH_LIMIT || height > DECODE_HEIGHT_LIMIT) return -1;
+#endif
+
+  /* Only support allocating buffers that have a border that's a multiple
+   * of 32. The border restriction is required to get 16-byte alignment of
+   * the start of the chroma rows without introducing an arbitrary gap
+   * between planes, which would break the semantics of things like
+   * vpx_img_set_rect(). */
+  if (border & 0x1f) return -3;
+
   if (ybf) {
     const int vp9_byte_align = (byte_alignment == 0) ? 1 : byte_alignment;
     const int aligned_width = (width + 7) & ~7;
@@ -166,9 +188,16 @@
 
     uint8_t *buf = NULL;
 
-    // frame_size is stored in buffer_alloc_sz, which is an int. If it won't
+#if defined(VPX_MAX_ALLOCABLE_MEMORY)
+    // The decoder may allocate REF_FRAMES frame buffers in the frame buffer
+    // pool. Bound the total amount of allocated memory as if these REF_FRAMES
+    // frame buffers were allocated in a single allocation.
+    if (frame_size > VPX_MAX_ALLOCABLE_MEMORY / REF_FRAMES) return -1;
+#endif  // VPX_MAX_ALLOCABLE_MEMORY
+
+    // frame_size is stored in buffer_alloc_sz, which is a size_t. If it won't
     // fit, fail early.
-    if (frame_size > INT_MAX) {
+    if (frame_size > SIZE_MAX) {
       return -1;
     }
 
@@ -192,18 +221,19 @@
       // This memset is needed for fixing the issue of using uninitialized
       // value in msan test. It will cause a perf loss, so only do this for
       // msan test.
-      memset(ybf->buffer_alloc, 0, (int)frame_size);
+      memset(ybf->buffer_alloc, 0, (size_t)frame_size);
 #endif
 #endif
-    } else if (frame_size > (size_t)ybf->buffer_alloc_sz) {
+    } else if (frame_size > ybf->buffer_alloc_sz) {
       // Allocation to hold larger frame, or first allocation.
       vpx_free(ybf->buffer_alloc);
       ybf->buffer_alloc = NULL;
+      ybf->buffer_alloc_sz = 0;
 
       ybf->buffer_alloc = (uint8_t *)vpx_memalign(32, (size_t)frame_size);
       if (!ybf->buffer_alloc) return -1;
 
-      ybf->buffer_alloc_sz = (int)frame_size;
+      ybf->buffer_alloc_sz = (size_t)frame_size;
 
       // This memset is needed for fixing valgrind error from C loop filter
       // due to access uninitialized memory in frame border. It could be
@@ -211,13 +241,6 @@
       memset(ybf->buffer_alloc, 0, ybf->buffer_alloc_sz);
     }
 
-    /* Only support allocating buffers that have a border that's a multiple
-     * of 32. The border restriction is required to get 16-byte alignment of
-     * the start of the chroma rows without introducing an arbitrary gap
-     * between planes, which would break the semantics of things like
-     * vpx_img_set_rect(). */
-    if (border & 0x1f) return -3;
-
     ybf->y_crop_width = width;
     ybf->y_crop_height = height;
     ybf->y_width = aligned_width;
@@ -231,7 +254,7 @@
     ybf->uv_stride = uv_stride;
 
     ybf->border = border;
-    ybf->frame_size = (int)frame_size;
+    ybf->frame_size = (size_t)frame_size;
     ybf->subsampling_x = ss_x;
     ybf->subsampling_y = ss_y;
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/generic/yv12extend.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/generic/yv12extend.c
index f088d28..e231806 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/generic/yv12extend.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/generic/yv12extend.c
@@ -58,7 +58,7 @@
     dst_ptr2 += src_stride;
   }
 }
-#if CONFIG_VP9
+
 #if CONFIG_VP9_HIGHBITDEPTH
 static void extend_plane_high(uint8_t *const src8, int src_stride, int width,
                               int height, int extend_top, int extend_left,
@@ -101,7 +101,6 @@
   }
 }
 #endif
-#endif
 
 void vp8_yv12_extend_frame_borders_c(YV12_BUFFER_CONFIG *ybf) {
   const int uv_border = ybf->border / 2;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/vpx_scale.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/vpx_scale.h
index 478a483..fd5ba7c 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/vpx_scale.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/vpx_scale.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_SCALE_VPX_SCALE_H_
-#define VPX_SCALE_VPX_SCALE_H_
+#ifndef VPX_VPX_SCALE_VPX_SCALE_H_
+#define VPX_VPX_SCALE_VPX_SCALE_H_
 
 #include "vpx_scale/yv12config.h"
 
@@ -19,4 +19,4 @@
                             unsigned int vscale, unsigned int vratio,
                             unsigned int interlaced);
 
-#endif  // VPX_SCALE_VPX_SCALE_H_
+#endif  // VPX_VPX_SCALE_VPX_SCALE_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/yv12config.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/yv12config.h
index b9b3362..2cf1821 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/yv12config.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_scale/yv12config.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_SCALE_YV12CONFIG_H_
-#define VPX_SCALE_YV12CONFIG_H_
+#ifndef VPX_VPX_SCALE_YV12CONFIG_H_
+#define VPX_VPX_SCALE_YV12CONFIG_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -49,9 +49,9 @@
   uint8_t *alpha_buffer;
 
   uint8_t *buffer_alloc;
-  int buffer_alloc_sz;
+  size_t buffer_alloc_sz;
   int border;
-  int frame_size;
+  size_t frame_size;
   int subsampling_x;
   int subsampling_y;
   unsigned int bit_depth;
@@ -100,4 +100,4 @@
 }
 #endif
 
-#endif  // VPX_SCALE_YV12CONFIG_H_
+#endif  // VPX_VPX_SCALE_YV12CONFIG_H_
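The hunk above widens `buffer_alloc_sz` and `frame_size` from `int` to `size_t`. A minimal sketch of why that matters (dimensions below are illustrative, not taken from the patch): once a high-bit-depth frame's byte count passes `INT_MAX`, the arithmetic has to happen in `size_t` from the start.

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* Hypothetical helper showing the overflow the int -> size_t change avoids:
 * all operands are size_t, so the multiply never wraps through signed int. */
static size_t frame_bytes(size_t stride, size_t height, size_t bytes_per_px) {
  return stride * height * bytes_per_px;
}
```

With `int` arithmetic, a 32768x32768 frame at 2 bytes per pixel (2^31 bytes) would already be one past `INT_MAX`.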
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/endian_inl.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/endian_inl.h
index dc38774..1b6ef56 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/endian_inl.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/endian_inl.h
@@ -9,8 +9,8 @@
 //
 // Endian related functions.
 
-#ifndef VPX_UTIL_ENDIAN_INL_H_
-#define VPX_UTIL_ENDIAN_INL_H_
+#ifndef VPX_VPX_UTIL_ENDIAN_INL_H_
+#define VPX_VPX_UTIL_ENDIAN_INL_H_
 
 #include <stdlib.h>
 #include "./vpx_config.h"
@@ -115,4 +115,4 @@
 #endif  // HAVE_BUILTIN_BSWAP64
 }
 
-#endif  // VPX_UTIL_ENDIAN_INL_H_
+#endif  // VPX_VPX_UTIL_ENDIAN_INL_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_atomics.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_atomics.h
index 5046910..23ad566 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_atomics.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_atomics.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_UTIL_VPX_ATOMICS_H_
-#define VPX_UTIL_VPX_ATOMICS_H_
+#ifndef VPX_VPX_UTIL_VPX_ATOMICS_H_
+#define VPX_VPX_UTIL_VPX_ATOMICS_H_
 
 #include "./vpx_config.h"
 
@@ -51,16 +51,16 @@
   do {                              \
   } while (0)
 #else
-#if ARCH_X86 || ARCH_X86_64
+#if VPX_ARCH_X86 || VPX_ARCH_X86_64
 // Use a compiler barrier on x86, no runtime penalty.
 #define vpx_atomic_memory_barrier() __asm__ __volatile__("" ::: "memory")
-#elif ARCH_ARM
+#elif VPX_ARCH_ARM
 #define vpx_atomic_memory_barrier() __asm__ __volatile__("dmb ish" ::: "memory")
-#elif ARCH_MIPS
+#elif VPX_ARCH_MIPS
 #define vpx_atomic_memory_barrier() __asm__ __volatile__("sync" ::: "memory")
 #else
 #error Unsupported architecture!
-#endif  // ARCH_X86 || ARCH_X86_64
+#endif  // VPX_ARCH_X86 || VPX_ARCH_X86_64
 #endif  // defined(_MSC_VER)
 #endif  // atomic builtin availability check
 
@@ -108,4 +108,4 @@
 }  // extern "C"
 #endif  // __cplusplus
 
-#endif  // VPX_UTIL_VPX_ATOMICS_H_
+#endif  // VPX_VPX_UTIL_VPX_ATOMICS_H_
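The `vpx_atomics.h` hunk renames the arch guards to `VPX_ARCH_*` around the per-architecture memory barrier. A self-contained sketch of the same dispatch, assuming compiler-defined macros in place of the generated `vpx_config.h` values, and a no-op fallback instead of `#error` so it builds anywhere:

```c
/* Illustrative re-creation of the barrier dispatch; in libvpx the
 * VPX_ARCH_* macros come from vpx_config.h, here we approximate them
 * with compiler-defined macros. */
#if defined(__x86_64__) || defined(__i386__)
/* x86 is strongly ordered: a compiler-only barrier is enough. */
#define vpx_atomic_memory_barrier() __asm__ __volatile__("" ::: "memory")
#elif defined(__aarch64__) || defined(__arm__)
#define vpx_atomic_memory_barrier() __asm__ __volatile__("dmb ish" ::: "memory")
#else
#define vpx_atomic_memory_barrier() \
  do {                              \
  } while (0)
#endif

static volatile int ready = 0;
static int payload = 0;

/* Producer: write the data, then publish; the barrier keeps the compiler
 * from sinking the payload store below the flag store. */
void publish(int value) {
  payload = value;
  vpx_atomic_memory_barrier();
  ready = 1;
}

int consume(void) { return ready ? payload : -1; }
```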
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_debug_util.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_debug_util.c
new file mode 100644
index 0000000..3ce4065
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_debug_util.c
@@ -0,0 +1,282 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <string.h>
+#include "vpx_util/vpx_debug_util.h"
+
+#if CONFIG_BITSTREAM_DEBUG || CONFIG_MISMATCH_DEBUG
+static int frame_idx_w = 0;
+static int frame_idx_r = 0;
+
+void bitstream_queue_set_frame_write(int frame_idx) { frame_idx_w = frame_idx; }
+
+int bitstream_queue_get_frame_write(void) { return frame_idx_w; }
+
+void bitstream_queue_set_frame_read(int frame_idx) { frame_idx_r = frame_idx; }
+
+int bitstream_queue_get_frame_read(void) { return frame_idx_r; }
+#endif
+
+#if CONFIG_BITSTREAM_DEBUG
+#define QUEUE_MAX_SIZE 2000000
+static int result_queue[QUEUE_MAX_SIZE];
+static int prob_queue[QUEUE_MAX_SIZE];
+
+static int queue_r = 0;
+static int queue_w = 0;
+static int queue_prev_w = -1;
+static int skip_r = 0;
+static int skip_w = 0;
+void bitstream_queue_set_skip_write(int skip) { skip_w = skip; }
+
+void bitstream_queue_set_skip_read(int skip) { skip_r = skip; }
+
+void bitstream_queue_record_write(void) { queue_prev_w = queue_w; }
+
+void bitstream_queue_reset_write(void) { queue_w = queue_prev_w; }
+
+int bitstream_queue_get_write(void) { return queue_w; }
+
+int bitstream_queue_get_read(void) { return queue_r; }
+
+void bitstream_queue_pop(int *result, int *prob) {
+  if (!skip_r) {
+    if (queue_w == queue_r) {
+      printf("buffer underflow queue_w %d queue_r %d\n", queue_w, queue_r);
+      assert(0);
+    }
+    *result = result_queue[queue_r];
+    *prob = prob_queue[queue_r];
+    queue_r = (queue_r + 1) % QUEUE_MAX_SIZE;
+  }
+}
+
+void bitstream_queue_push(int result, const int prob) {
+  if (!skip_w) {
+    result_queue[queue_w] = result;
+    prob_queue[queue_w] = prob;
+    queue_w = (queue_w + 1) % QUEUE_MAX_SIZE;
+    if (queue_w == queue_r) {
+      printf("buffer overflow queue_w %d queue_r %d\n", queue_w, queue_r);
+      assert(0);
+    }
+  }
+}
+#endif  // CONFIG_BITSTREAM_DEBUG
+
+#if CONFIG_MISMATCH_DEBUG
+static int frame_buf_idx_r = 0;
+static int frame_buf_idx_w = 0;
+#define MAX_FRAME_BUF_NUM 20
+#define MAX_FRAME_STRIDE 1920
+#define MAX_FRAME_HEIGHT 1080
+static uint16_t
+    frame_pre[MAX_FRAME_BUF_NUM][3]
+             [MAX_FRAME_STRIDE * MAX_FRAME_HEIGHT];  // prediction only
+static uint16_t
+    frame_tx[MAX_FRAME_BUF_NUM][3]
+            [MAX_FRAME_STRIDE * MAX_FRAME_HEIGHT];  // prediction + txfm
+static int frame_stride = MAX_FRAME_STRIDE;
+static int frame_height = MAX_FRAME_HEIGHT;
+static int frame_size = MAX_FRAME_STRIDE * MAX_FRAME_HEIGHT;
+void mismatch_move_frame_idx_w(void) {
+  frame_buf_idx_w = (frame_buf_idx_w + 1) % MAX_FRAME_BUF_NUM;
+  if (frame_buf_idx_w == frame_buf_idx_r) {
+    printf("frame_buf overflow\n");
+    assert(0);
+  }
+}
+
+void mismatch_reset_frame(int num_planes) {
+  int plane;
+  for (plane = 0; plane < num_planes; ++plane) {
+    memset(frame_pre[frame_buf_idx_w][plane], 0,
+           sizeof(frame_pre[frame_buf_idx_w][plane][0]) * frame_size);
+    memset(frame_tx[frame_buf_idx_w][plane], 0,
+           sizeof(frame_tx[frame_buf_idx_w][plane][0]) * frame_size);
+  }
+}
+
+void mismatch_move_frame_idx_r(void) {
+  if (frame_buf_idx_w == frame_buf_idx_r) {
+    printf("frame_buf underflow\n");
+    assert(0);
+  }
+  frame_buf_idx_r = (frame_buf_idx_r + 1) % MAX_FRAME_BUF_NUM;
+}
+
+void mismatch_record_block_pre(const uint8_t *src, int src_stride, int plane,
+                               int pixel_c, int pixel_r, int blk_w, int blk_h,
+                               int highbd) {
+  const uint16_t *src16 = highbd ? CONVERT_TO_SHORTPTR(src) : NULL;
+  int r, c;
+
+  if (pixel_c + blk_w >= frame_stride || pixel_r + blk_h >= frame_height) {
+    printf("frame_buf undersized\n");
+    assert(0);
+  }
+
+  for (r = 0; r < blk_h; ++r) {
+    for (c = 0; c < blk_w; ++c) {
+      frame_pre[frame_buf_idx_w][plane]
+               [(r + pixel_r) * frame_stride + c + pixel_c] =
+                   src16 ? src16[r * src_stride + c] : src[r * src_stride + c];
+    }
+  }
+#if 0
+  {
+    int ref_frame_idx = 3;
+    int ref_plane = 1;
+    int ref_pixel_c = 162;
+    int ref_pixel_r = 16;
+    if (frame_idx_w == ref_frame_idx && plane == ref_plane &&
+        ref_pixel_c >= pixel_c && ref_pixel_c < pixel_c + blk_w &&
+        ref_pixel_r >= pixel_r && ref_pixel_r < pixel_r + blk_h) {
+      printf(
+          "\nrecord_block_pre frame_idx %d plane %d pixel_c %d pixel_r %d blk_w"
+          " %d blk_h %d\n",
+          frame_idx_w, plane, pixel_c, pixel_r, blk_w, blk_h);
+    }
+  }
+#endif
+}
+void mismatch_record_block_tx(const uint8_t *src, int src_stride, int plane,
+                              int pixel_c, int pixel_r, int blk_w, int blk_h,
+                              int highbd) {
+  const uint16_t *src16 = highbd ? CONVERT_TO_SHORTPTR(src) : NULL;
+  int r, c;
+  if (pixel_c + blk_w >= frame_stride || pixel_r + blk_h >= frame_height) {
+    printf("frame_buf undersized\n");
+    assert(0);
+  }
+
+  for (r = 0; r < blk_h; ++r) {
+    for (c = 0; c < blk_w; ++c) {
+      frame_tx[frame_buf_idx_w][plane]
+              [(r + pixel_r) * frame_stride + c + pixel_c] =
+                  src16 ? src16[r * src_stride + c] : src[r * src_stride + c];
+    }
+  }
+#if 0
+  {
+    int ref_frame_idx = 3;
+    int ref_plane = 1;
+    int ref_pixel_c = 162;
+    int ref_pixel_r = 16;
+    if (frame_idx_w == ref_frame_idx && plane == ref_plane &&
+        ref_pixel_c >= pixel_c && ref_pixel_c < pixel_c + blk_w &&
+        ref_pixel_r >= pixel_r && ref_pixel_r < pixel_r + blk_h) {
+      printf(
+          "\nrecord_block_tx frame_idx %d plane %d pixel_c %d pixel_r %d blk_w "
+          "%d blk_h %d\n",
+          frame_idx_w, plane, pixel_c, pixel_r, blk_w, blk_h);
+    }
+  }
+#endif
+}
+void mismatch_check_block_pre(const uint8_t *src, int src_stride, int plane,
+                              int pixel_c, int pixel_r, int blk_w, int blk_h,
+                              int highbd) {
+  const uint16_t *src16 = highbd ? CONVERT_TO_SHORTPTR(src) : NULL;
+  int mismatch = 0;
+  int r, c;
+  if (pixel_c + blk_w >= frame_stride || pixel_r + blk_h >= frame_height) {
+    printf("frame_buf undersized\n");
+    assert(0);
+  }
+
+  for (r = 0; r < blk_h; ++r) {
+    for (c = 0; c < blk_w; ++c) {
+      if (frame_pre[frame_buf_idx_r][plane]
+                   [(r + pixel_r) * frame_stride + c + pixel_c] !=
+          (uint16_t)(src16 ? src16[r * src_stride + c]
+                           : src[r * src_stride + c])) {
+        mismatch = 1;
+      }
+    }
+  }
+  if (mismatch) {
+    int rr, cc;
+    printf(
+        "\ncheck_block_pre failed frame_idx %d plane %d "
+        "pixel_c %d pixel_r "
+        "%d blk_w %d blk_h %d\n",
+        frame_idx_r, plane, pixel_c, pixel_r, blk_w, blk_h);
+    printf("enc\n");
+    for (rr = 0; rr < blk_h; ++rr) {
+      for (cc = 0; cc < blk_w; ++cc) {
+        printf("%d ", frame_pre[frame_buf_idx_r][plane]
+                               [(rr + pixel_r) * frame_stride + cc + pixel_c]);
+      }
+      printf("\n");
+    }
+
+    printf("dec\n");
+    for (rr = 0; rr < blk_h; ++rr) {
+      for (cc = 0; cc < blk_w; ++cc) {
+        printf("%d ",
+               src16 ? src16[rr * src_stride + cc] : src[rr * src_stride + cc]);
+      }
+      printf("\n");
+    }
+    assert(0);
+  }
+}
+void mismatch_check_block_tx(const uint8_t *src, int src_stride, int plane,
+                             int pixel_c, int pixel_r, int blk_w, int blk_h,
+                             int highbd) {
+  const uint16_t *src16 = highbd ? CONVERT_TO_SHORTPTR(src) : NULL;
+  int mismatch = 0;
+  int r, c;
+  if (pixel_c + blk_w >= frame_stride || pixel_r + blk_h >= frame_height) {
+    printf("frame_buf undersized\n");
+    assert(0);
+  }
+
+  for (r = 0; r < blk_h; ++r) {
+    for (c = 0; c < blk_w; ++c) {
+      if (frame_tx[frame_buf_idx_r][plane]
+                  [(r + pixel_r) * frame_stride + c + pixel_c] !=
+          (uint16_t)(src16 ? src16[r * src_stride + c]
+                           : src[r * src_stride + c])) {
+        mismatch = 1;
+      }
+    }
+  }
+  if (mismatch) {
+    int rr, cc;
+    printf(
+        "\ncheck_block_tx failed frame_idx %d plane %d pixel_c "
+        "%d pixel_r "
+        "%d blk_w %d blk_h %d\n",
+        frame_idx_r, plane, pixel_c, pixel_r, blk_w, blk_h);
+    printf("enc\n");
+    for (rr = 0; rr < blk_h; ++rr) {
+      for (cc = 0; cc < blk_w; ++cc) {
+        printf("%d ", frame_tx[frame_buf_idx_r][plane]
+                              [(rr + pixel_r) * frame_stride + cc + pixel_c]);
+      }
+      printf("\n");
+    }
+
+    printf("dec\n");
+    for (rr = 0; rr < blk_h; ++rr) {
+      for (cc = 0; cc < blk_w; ++cc) {
+        printf("%d ",
+               src16 ? src16[rr * src_stride + cc] : src[rr * src_stride + cc]);
+      }
+      printf("\n");
+    }
+    assert(0);
+  }
+}
+#endif  // CONFIG_MISMATCH_DEBUG
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_debug_util.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_debug_util.h
new file mode 100644
index 0000000..df1a1aa
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_debug_util.h
@@ -0,0 +1,70 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VPX_UTIL_VPX_DEBUG_UTIL_H_
+#define VPX_VPX_UTIL_VPX_DEBUG_UTIL_H_
+
+#include "./vpx_config.h"
+
+#include "vpx_dsp/prob.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#if CONFIG_BITSTREAM_DEBUG || CONFIG_MISMATCH_DEBUG
+void bitstream_queue_set_frame_write(int frame_idx);
+int bitstream_queue_get_frame_write(void);
+void bitstream_queue_set_frame_read(int frame_idx);
+int bitstream_queue_get_frame_read(void);
+#endif
+
+#if CONFIG_BITSTREAM_DEBUG
+/* This is a debug tool used to detect bitstream error. On encoder side, it
+ * pushes each bit and probability into a queue before the bit is written into
+ * the Arithmetic coder. On decoder side, whenever a bit is read out from the
+ * Arithmetic coder, it pops out the reference bit and probability from the
+ * queue as well. If the two results do not match, this debug tool will report
+ * an error.  This tool can be used to pin down the bitstream error precisely.
+ * By combining gdb's backtrace method, we can detect which module causes the
+ * bitstream error. */
+int bitstream_queue_get_write(void);
+int bitstream_queue_get_read(void);
+void bitstream_queue_record_write(void);
+void bitstream_queue_reset_write(void);
+void bitstream_queue_pop(int *result, int *prob);
+void bitstream_queue_push(int result, const int prob);
+void bitstream_queue_set_skip_write(int skip);
+void bitstream_queue_set_skip_read(int skip);
+#endif  // CONFIG_BITSTREAM_DEBUG
+
+#if CONFIG_MISMATCH_DEBUG
+void mismatch_move_frame_idx_w(void);
+void mismatch_move_frame_idx_r(void);
+void mismatch_reset_frame(int num_planes);
+void mismatch_record_block_pre(const uint8_t *src, int src_stride, int plane,
+                               int pixel_c, int pixel_r, int blk_w, int blk_h,
+                               int highbd);
+void mismatch_record_block_tx(const uint8_t *src, int src_stride, int plane,
+                              int pixel_c, int pixel_r, int blk_w, int blk_h,
+                              int highbd);
+void mismatch_check_block_pre(const uint8_t *src, int src_stride, int plane,
+                              int pixel_c, int pixel_r, int blk_w, int blk_h,
+                              int highbd);
+void mismatch_check_block_tx(const uint8_t *src, int src_stride, int plane,
+                             int pixel_c, int pixel_r, int blk_w, int blk_h,
+                             int highbd);
+#endif  // CONFIG_MISMATCH_DEBUG
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif
+
+#endif  // VPX_VPX_UTIL_VPX_DEBUG_UTIL_H_
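The new `vpx_debug_util` files implement the mirrored-queue technique the header comment describes: the encoder pushes each (bit, probability) pair before handing it to the arithmetic coder, and the decoder pops the pair back and compares. A minimal sketch of that idea (queue size and names are illustrative, not the real `QUEUE_MAX_SIZE`):

```c
#include <assert.h>

/* Tiny ring buffer mirroring the encoder's bit stream for the decoder. */
#define QUEUE_SIZE 16
static int result_queue[QUEUE_SIZE];
static int prob_queue[QUEUE_SIZE];
static int queue_r = 0, queue_w = 0;

/* Encoder side: record each bit and its coding probability. */
void queue_push(int result, int prob) {
  result_queue[queue_w] = result;
  prob_queue[queue_w] = prob;
  queue_w = (queue_w + 1) % QUEUE_SIZE;
  assert(queue_w != queue_r); /* overflow */
}

void queue_pop(int *result, int *prob) {
  assert(queue_w != queue_r); /* underflow */
  *result = result_queue[queue_r];
  *prob = prob_queue[queue_r];
  queue_r = (queue_r + 1) % QUEUE_SIZE;
}

/* Decoder side: 1 when the decoded (bit, prob) matches the recorded pair.
 * In the real tool a mismatch asserts, pinpointing the divergent module. */
int check_bit(int decoded_bit, int decoded_prob) {
  int ref_bit, ref_prob;
  queue_pop(&ref_bit, &ref_prob);
  return ref_bit == decoded_bit && ref_prob == decoded_prob;
}
```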
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_thread.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_thread.h
index 53a5f49..6d308e9 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_thread.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_thread.h
@@ -12,8 +12,8 @@
 // Original source:
 //  https://chromium.googlesource.com/webm/libwebp
 
-#ifndef VPX_THREAD_H_
-#define VPX_THREAD_H_
+#ifndef VPX_VPX_UTIL_VPX_THREAD_H_
+#define VPX_VPX_UTIL_VPX_THREAD_H_
 
 #include "./vpx_config.h"
 
@@ -159,6 +159,23 @@
   return 0;
 }
 
+static INLINE int pthread_cond_broadcast(pthread_cond_t *const condition) {
+  int ok = 1;
+#ifdef USE_WINDOWS_CONDITION_VARIABLE
+  WakeAllConditionVariable(condition);
+#else
+  while (WaitForSingleObject(condition->waiting_sem_, 0) == WAIT_OBJECT_0) {
+    // a thread is waiting in pthread_cond_wait: allow it to be notified
+    ok &= SetEvent(condition->signal_event_);
+    // wait until the event is consumed so the signaler cannot consume
+    // the event via its own pthread_cond_wait.
+    ok &= (WaitForSingleObject(condition->received_sem_, INFINITE) !=
+           WAIT_OBJECT_0);
+  }
+#endif
+  return !ok;
+}
+
 static INLINE int pthread_cond_signal(pthread_cond_t *const condition) {
   int ok = 1;
 #ifdef USE_WINDOWS_CONDITION_VARIABLE
@@ -194,6 +211,7 @@
 #endif
   return !ok;
 }
+
 #elif defined(__OS2__)
 #define INCL_DOS
 #include <os2.h>  // NOLINT
@@ -202,6 +220,11 @@
 #include <stdlib.h>       // NOLINT
 #include <sys/builtin.h>  // NOLINT
 
+#if defined(__STRICT_ANSI__)
+// _beginthread() is not declared on __STRICT_ANSI__ mode. Declare here.
+int _beginthread(void (*)(void *), void *, unsigned, void *);
+#endif
+
 #define pthread_t TID
 #define pthread_mutex_t HMTX
 
@@ -412,4 +435,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_THREAD_H_
+#endif  // VPX_VPX_UTIL_VPX_THREAD_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_timestamp.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_timestamp.h
new file mode 100644
index 0000000..5296458
--- /dev/null
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_timestamp.h
@@ -0,0 +1,49 @@
+/*
+ *  Copyright (c) 2019 The WebM project authors. All Rights Reserved.
+ *
+ *  Use of this source code is governed by a BSD-style license
+ *  that can be found in the LICENSE file in the root of the source
+ *  tree. An additional intellectual property rights grant can be found
+ *  in the file PATENTS.  All contributing project authors may
+ *  be found in the AUTHORS file in the root of the source tree.
+ */
+
+#ifndef VPX_VPX_UTIL_VPX_TIMESTAMP_H_
+#define VPX_VPX_UTIL_VPX_TIMESTAMP_H_
+
+#include <assert.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif  // __cplusplus
+
+// Rational Number with an int64 numerator
+typedef struct vpx_rational64 {
+  int64_t num;       // fraction numerator
+  int den;           // fraction denominator
+} vpx_rational64_t;  // alias for struct vpx_rational64_t
+
+static INLINE int gcd(int64_t a, int b) {
+  int r;  // remainder
+  assert(a >= 0);
+  assert(b > 0);
+  while (b != 0) {
+    r = (int)(a % b);
+    a = b;
+    b = r;
+  }
+
+  return (int)a;
+}
+
+static INLINE void reduce_ratio(vpx_rational64_t *ratio) {
+  const int denom = gcd(ratio->num, ratio->den);
+  ratio->num /= denom;
+  ratio->den /= denom;
+}
+
+#ifdef __cplusplus
+}  // extern "C"
+#endif  // __cplusplus
+
+#endif  // VPX_VPX_UTIL_VPX_TIMESTAMP_H_
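The new `vpx_timestamp.h` pairs Euclid's algorithm with a `reduce_ratio()` helper to keep timebase fractions small. The same shape, standalone (names differ slightly from the header to avoid clashing with it):

```c
#include <assert.h>
#include <stdint.h>

/* Rational with an int64 numerator, as in the diff's vpx_rational64_t. */
typedef struct rational64 {
  int64_t num; /* fraction numerator */
  int den;     /* fraction denominator */
} rational64_t;

/* Euclid's algorithm; a must be non-negative, b positive, matching the
 * asserts in the header. */
static int gcd64(int64_t a, int b) {
  assert(a >= 0);
  assert(b > 0);
  while (b != 0) {
    int r = (int)(a % b);
    a = b;
    b = r;
  }
  return (int)a;
}

/* Divide both terms by their gcd so e.g. 90000/30 becomes 3000/1. */
static void reduce_ratio(rational64_t *ratio) {
  const int denom = gcd64(ratio->num, ratio->den);
  ratio->num /= denom;
  ratio->den /= denom;
}
```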
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_util.mk b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_util.mk
index 86d3ece..1162714 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_util.mk
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_util.mk
@@ -15,3 +15,6 @@
 UTIL_SRCS-yes += endian_inl.h
 UTIL_SRCS-yes += vpx_write_yuv_frame.h
 UTIL_SRCS-yes += vpx_write_yuv_frame.c
+UTIL_SRCS-yes += vpx_timestamp.h
+UTIL_SRCS-$(or $(CONFIG_BITSTREAM_DEBUG),$(CONFIG_MISMATCH_DEBUG)) += vpx_debug_util.h
+UTIL_SRCS-$(or $(CONFIG_BITSTREAM_DEBUG),$(CONFIG_MISMATCH_DEBUG)) += vpx_debug_util.c
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.c
index ab68558..4ef57a2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.c
@@ -13,7 +13,7 @@
 
 void vpx_write_yuv_frame(FILE *yuv_file, YV12_BUFFER_CONFIG *s) {
 #if defined(OUTPUT_YUV_SRC) || defined(OUTPUT_YUV_DENOISED) || \
-    defined(OUTPUT_YUV_SKINMAP)
+    defined(OUTPUT_YUV_SKINMAP) || defined(OUTPUT_YUV_SVC_SRC)
 
   unsigned char *src = s->y_buffer;
   int h = s->y_crop_height;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.h
index 1cb7029..ce11024 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpx_util/vpx_write_yuv_frame.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPX_UTIL_VPX_WRITE_YUV_FRAME_H_
-#define VPX_UTIL_VPX_WRITE_YUV_FRAME_H_
+#ifndef VPX_VPX_UTIL_VPX_WRITE_YUV_FRAME_H_
+#define VPX_VPX_UTIL_VPX_WRITE_YUV_FRAME_H_
 
 #include <stdio.h>
 #include "vpx_scale/yv12config.h"
@@ -24,4 +24,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPX_UTIL_VPX_WRITE_YUV_FRAME_H_
+#endif  // VPX_VPX_UTIL_VPX_WRITE_YUV_FRAME_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxdec.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxdec.c
index ff20e6a..ad368a2 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxdec.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxdec.c
@@ -98,20 +98,41 @@
     NULL, "svc-decode-layer", 1, "Decode SVC stream up to given spatial layer");
 static const arg_def_t framestatsarg =
     ARG_DEF(NULL, "framestats", 1, "Output per-frame stats (.csv format)");
+static const arg_def_t rowmtarg =
+    ARG_DEF(NULL, "row-mt", 1, "Enable multi-threading to run row-wise in VP9");
+static const arg_def_t lpfoptarg =
+    ARG_DEF(NULL, "lpf-opt", 1,
+            "Do loopfilter without waiting for all threads to sync.");
 
-static const arg_def_t *all_args[] = {
-  &help,           &codecarg,          &use_yv12,
-  &use_i420,       &flipuvarg,         &rawvideo,
-  &noblitarg,      &progressarg,       &limitarg,
-  &skiparg,        &postprocarg,       &summaryarg,
-  &outputfile,     &threadsarg,        &frameparallelarg,
-  &verbosearg,     &scalearg,          &fb_arg,
-  &md5arg,         &error_concealment, &continuearg,
+static const arg_def_t *all_args[] = { &help,
+                                       &codecarg,
+                                       &use_yv12,
+                                       &use_i420,
+                                       &flipuvarg,
+                                       &rawvideo,
+                                       &noblitarg,
+                                       &progressarg,
+                                       &limitarg,
+                                       &skiparg,
+                                       &postprocarg,
+                                       &summaryarg,
+                                       &outputfile,
+                                       &threadsarg,
+                                       &frameparallelarg,
+                                       &verbosearg,
+                                       &scalearg,
+                                       &fb_arg,
+                                       &md5arg,
+                                       &error_concealment,
+                                       &continuearg,
 #if CONFIG_VP9_HIGHBITDEPTH
-  &outbitdeptharg,
+                                       &outbitdeptharg,
 #endif
-  &svcdecodingarg, &framestatsarg,     NULL
-};
+                                       &svcdecodingarg,
+                                       &framestatsarg,
+                                       &rowmtarg,
+                                       &lpfoptarg,
+                                       NULL };
 
 #if CONFIG_VP8_DECODER
 static const arg_def_t addnoise_level =
@@ -154,7 +175,7 @@
                    dst->d_h, mode);
 }
 #endif
-void show_help(FILE *fout, int shorthelp) {
+static void show_help(FILE *fout, int shorthelp) {
   int i;
 
   fprintf(fout, "Usage: %s <options> filename\n\n", exec_name);
@@ -238,13 +259,14 @@
       return 1;
     }
     *bytes_read = frame_size;
+    return 0;
   }
 
-  return 0;
+  return 1;
 }
 
-static int read_frame(struct VpxDecInputContext *input, uint8_t **buf,
-                      size_t *bytes_in_buffer, size_t *buffer_size) {
+static int dec_read_frame(struct VpxDecInputContext *input, uint8_t **buf,
+                          size_t *bytes_in_buffer, size_t *buffer_size) {
   switch (input->vpx_input_ctx->file_type) {
 #if CONFIG_WEBM_IO
     case FILE_TYPE_WEBM:
@@ -506,6 +528,8 @@
   int arg_skip = 0;
   int ec_enabled = 0;
   int keep_going = 0;
+  int enable_row_mt = 0;
+  int enable_lpf_opt = 0;
   const VpxInterface *interface = NULL;
   const VpxInterface *fourcc_interface = NULL;
   uint64_t dx_time = 0;
@@ -628,6 +652,10 @@
         die("Error: Could not open --framestats file (%s) for writing.\n",
             arg.val);
       }
+    } else if (arg_match(&arg, &rowmtarg, argi)) {
+      enable_row_mt = arg_parse_uint(&arg);
+    } else if (arg_match(&arg, &lpfoptarg, argi)) {
+      enable_lpf_opt = arg_parse_uint(&arg);
     }
 #if CONFIG_VP8_DECODER
     else if (arg_match(&arg, &addnoise_level, argi)) {
@@ -694,6 +722,7 @@
 #if !CONFIG_WEBM_IO
     fprintf(stderr, "vpxdec was built without WebM container support.\n");
 #endif
+    free(argv);
     return EXIT_FAILURE;
   }
 
@@ -753,6 +782,18 @@
       goto fail;
     }
   }
+  if (interface->fourcc == VP9_FOURCC &&
+      vpx_codec_control(&decoder, VP9D_SET_ROW_MT, enable_row_mt)) {
+    fprintf(stderr, "Failed to set decoder in row multi-thread mode: %s\n",
+            vpx_codec_error(&decoder));
+    goto fail;
+  }
+  if (interface->fourcc == VP9_FOURCC &&
+      vpx_codec_control(&decoder, VP9D_SET_LOOP_FILTER_OPT, enable_lpf_opt)) {
+    fprintf(stderr, "Failed to set decoder in optimized loopfilter mode: %s\n",
+            vpx_codec_error(&decoder));
+    goto fail;
+  }
   if (!quiet) fprintf(stderr, "%s\n", decoder.name);
 
 #if CONFIG_VP8_DECODER
@@ -766,7 +807,7 @@
 
   if (arg_skip) fprintf(stderr, "Skipping first %d frames.\n", arg_skip);
   while (arg_skip) {
-    if (read_frame(&input, &buf, &bytes_in_buffer, &buffer_size)) break;
+    if (dec_read_frame(&input, &buf, &bytes_in_buffer, &buffer_size)) break;
     arg_skip--;
   }
 
@@ -797,7 +838,7 @@
 
     frame_avail = 0;
     if (!stop_after || frame_in < stop_after) {
-      if (!read_frame(&input, &buf, &bytes_in_buffer, &buffer_size)) {
+      if (!dec_read_frame(&input, &buf, &bytes_in_buffer, &buffer_size)) {
         frame_avail = 1;
         frame_in++;
 
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxenc.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxenc.c
index 144e60c..50c36be 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxenc.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxenc.c
@@ -50,12 +50,6 @@
 #endif
 #include "./y4minput.h"
 
-/* Swallow warnings about unused results of fread/fwrite */
-static size_t wrap_fread(void *ptr, size_t size, size_t nmemb, FILE *stream) {
-  return fread(ptr, size, nmemb, stream);
-}
-#define fread wrap_fread
-
 static size_t wrap_fwrite(const void *ptr, size_t size, size_t nmemb,
                           FILE *stream) {
   return fwrite(ptr, size, nmemb, stream);
@@ -95,34 +89,6 @@
   va_end(ap);
 }
 
-static int read_frame(struct VpxInputContext *input_ctx, vpx_image_t *img) {
-  FILE *f = input_ctx->file;
-  y4m_input *y4m = &input_ctx->y4m;
-  int shortread = 0;
-
-  if (input_ctx->file_type == FILE_TYPE_Y4M) {
-    if (y4m_input_fetch_frame(y4m, f, img) < 1) return 0;
-  } else {
-    shortread = read_yuv_frame(input_ctx, img);
-  }
-
-  return !shortread;
-}
-
-static int file_is_y4m(const char detect[4]) {
-  if (memcmp(detect, "YUV4", 4) == 0) {
-    return 1;
-  }
-  return 0;
-}
-
-static int fourcc_is_ivf(const char detect[4]) {
-  if (memcmp(detect, "DKIF", 4) == 0) {
-    return 1;
-  }
-  return 0;
-}
-
 static const arg_def_t help =
     ARG_DEF(NULL, "help", 0, "Show usage options and exit");
 static const arg_def_t debugmode =
@@ -326,9 +292,9 @@
     ARG_DEF(NULL, "maxsection-pct", 1, "GOP max bitrate (% of target)");
 static const arg_def_t corpus_complexity =
     ARG_DEF(NULL, "corpus-complexity", 1, "corpus vbr complexity midpoint");
-static const arg_def_t *rc_twopass_args[] = {
-  &bias_pct, &minsection_pct, &maxsection_pct, &corpus_complexity, NULL
-};
+static const arg_def_t *rc_twopass_args[] = { &bias_pct, &minsection_pct,
+                                              &maxsection_pct,
+                                              &corpus_complexity, NULL };
 
 static const arg_def_t kf_min_dist =
     ARG_DEF(NULL, "kf-min-dist", 1, "Minimum keyframe interval (frames)");
@@ -342,19 +308,19 @@
 static const arg_def_t noise_sens =
     ARG_DEF(NULL, "noise-sensitivity", 1, "Noise sensitivity (frames to blur)");
 static const arg_def_t sharpness =
-    ARG_DEF(NULL, "sharpness", 1, "Loop filter sharpness (0..7)");
+    ARG_DEF(NULL, "sharpness", 1,
+            "Increase sharpness at the expense of lower PSNR. (0..7)");
 static const arg_def_t static_thresh =
     ARG_DEF(NULL, "static-thresh", 1, "Motion detection threshold");
-static const arg_def_t auto_altref =
-    ARG_DEF(NULL, "auto-alt-ref", 1, "Enable automatic alt reference frames");
 static const arg_def_t arnr_maxframes =
     ARG_DEF(NULL, "arnr-maxframes", 1, "AltRef max frames (0..15)");
 static const arg_def_t arnr_strength =
     ARG_DEF(NULL, "arnr-strength", 1, "AltRef filter strength (0..6)");
-static const arg_def_t arnr_type = ARG_DEF(NULL, "arnr-type", 1, "AltRef type");
-static const struct arg_enum_list tuning_enum[] = {
-  { "psnr", VP8_TUNE_PSNR }, { "ssim", VP8_TUNE_SSIM }, { NULL, 0 }
-};
+static const arg_def_t arnr_type =
+    ARG_DEF(NULL, "arnr-type", 1, "AltRef filter type (1..3)");
+static const struct arg_enum_list tuning_enum[] = { { "psnr", VP8_TUNE_PSNR },
+                                                    { "ssim", VP8_TUNE_SSIM },
+                                                    { NULL, 0 } };
 static const arg_def_t tune_ssim =
     ARG_DEF_ENUM(NULL, "tune", 1, "Material to favor", tuning_enum);
 static const arg_def_t cq_level =
@@ -367,12 +333,14 @@
 #if CONFIG_VP8_ENCODER
 static const arg_def_t cpu_used_vp8 =
     ARG_DEF(NULL, "cpu-used", 1, "CPU Used (-16..16)");
+static const arg_def_t auto_altref_vp8 = ARG_DEF(
+    NULL, "auto-alt-ref", 1, "Enable automatic alt reference frames. (0..1)");
 static const arg_def_t token_parts =
     ARG_DEF(NULL, "token-parts", 1, "Number of token partitions to use, log2");
 static const arg_def_t screen_content_mode =
     ARG_DEF(NULL, "screen-content-mode", 1, "Screen content mode");
 static const arg_def_t *vp8_args[] = { &cpu_used_vp8,
-                                       &auto_altref,
+                                       &auto_altref_vp8,
                                        &noise_sens,
                                        &sharpness,
                                        &static_thresh,
@@ -405,12 +373,19 @@
 
 #if CONFIG_VP9_ENCODER
 static const arg_def_t cpu_used_vp9 =
-    ARG_DEF(NULL, "cpu-used", 1, "CPU Used (-8..8)");
+    ARG_DEF(NULL, "cpu-used", 1, "CPU Used (-9..9)");
+static const arg_def_t auto_altref_vp9 = ARG_DEF(
+    NULL, "auto-alt-ref", 1,
+    "Enable automatic alt reference frames, 2+ enables multi-layer. (0..6)");
 static const arg_def_t tile_cols =
     ARG_DEF(NULL, "tile-columns", 1, "Number of tile columns to use, log2");
 static const arg_def_t tile_rows =
     ARG_DEF(NULL, "tile-rows", 1,
             "Number of tile rows to use, log2 (set to 0 while threads > 1)");
+
+static const arg_def_t enable_tpl_model =
+    ARG_DEF(NULL, "enable-tpl", 1, "Enable temporal dependency model");
+
 static const arg_def_t lossless =
     ARG_DEF(NULL, "lossless", 1, "Lossless mode (0: false (default), 1: true)");
 static const arg_def_t frame_parallel_decoding = ARG_DEF(
@@ -491,11 +466,12 @@
 
 #if CONFIG_VP9_ENCODER
 static const arg_def_t *vp9_args[] = { &cpu_used_vp9,
-                                       &auto_altref,
+                                       &auto_altref_vp9,
                                        &sharpness,
                                        &static_thresh,
                                        &tile_cols,
                                        &tile_rows,
+                                       &enable_tpl_model,
                                        &arnr_maxframes,
                                        &arnr_strength,
                                        &arnr_type,
@@ -527,6 +503,7 @@
                                         VP8E_SET_STATIC_THRESHOLD,
                                         VP9E_SET_TILE_COLUMNS,
                                         VP9E_SET_TILE_ROWS,
+                                        VP9E_SET_TPL,
                                         VP8E_SET_ARNR_MAXFRAMES,
                                         VP8E_SET_ARNR_STRENGTH,
                                         VP8E_SET_ARNR_TYPE,
@@ -552,7 +529,7 @@
 
 static const arg_def_t *no_args[] = { NULL };
 
-void show_help(FILE *fout, int shorthelp) {
+static void show_help(FILE *fout, int shorthelp) {
   int i;
   const int num_encoder = get_vpx_encoder_count();
 
@@ -603,230 +580,6 @@
   exit(EXIT_FAILURE);
 }
 
-#define mmin(a, b) ((a) < (b) ? (a) : (b))
-
-#if CONFIG_VP9_HIGHBITDEPTH
-static void find_mismatch_high(const vpx_image_t *const img1,
-                               const vpx_image_t *const img2, int yloc[4],
-                               int uloc[4], int vloc[4]) {
-  uint16_t *plane1, *plane2;
-  uint32_t stride1, stride2;
-  const uint32_t bsize = 64;
-  const uint32_t bsizey = bsize >> img1->y_chroma_shift;
-  const uint32_t bsizex = bsize >> img1->x_chroma_shift;
-  const uint32_t c_w =
-      (img1->d_w + img1->x_chroma_shift) >> img1->x_chroma_shift;
-  const uint32_t c_h =
-      (img1->d_h + img1->y_chroma_shift) >> img1->y_chroma_shift;
-  int match = 1;
-  uint32_t i, j;
-  yloc[0] = yloc[1] = yloc[2] = yloc[3] = -1;
-  plane1 = (uint16_t *)img1->planes[VPX_PLANE_Y];
-  plane2 = (uint16_t *)img2->planes[VPX_PLANE_Y];
-  stride1 = img1->stride[VPX_PLANE_Y] / 2;
-  stride2 = img2->stride[VPX_PLANE_Y] / 2;
-  for (i = 0, match = 1; match && i < img1->d_h; i += bsize) {
-    for (j = 0; match && j < img1->d_w; j += bsize) {
-      int k, l;
-      const int si = mmin(i + bsize, img1->d_h) - i;
-      const int sj = mmin(j + bsize, img1->d_w) - j;
-      for (k = 0; match && k < si; ++k) {
-        for (l = 0; match && l < sj; ++l) {
-          if (*(plane1 + (i + k) * stride1 + j + l) !=
-              *(plane2 + (i + k) * stride2 + j + l)) {
-            yloc[0] = i + k;
-            yloc[1] = j + l;
-            yloc[2] = *(plane1 + (i + k) * stride1 + j + l);
-            yloc[3] = *(plane2 + (i + k) * stride2 + j + l);
-            match = 0;
-            break;
-          }
-        }
-      }
-    }
-  }
-
-  uloc[0] = uloc[1] = uloc[2] = uloc[3] = -1;
-  plane1 = (uint16_t *)img1->planes[VPX_PLANE_U];
-  plane2 = (uint16_t *)img2->planes[VPX_PLANE_U];
-  stride1 = img1->stride[VPX_PLANE_U] / 2;
-  stride2 = img2->stride[VPX_PLANE_U] / 2;
-  for (i = 0, match = 1; match && i < c_h; i += bsizey) {
-    for (j = 0; match && j < c_w; j += bsizex) {
-      int k, l;
-      const int si = mmin(i + bsizey, c_h - i);
-      const int sj = mmin(j + bsizex, c_w - j);
-      for (k = 0; match && k < si; ++k) {
-        for (l = 0; match && l < sj; ++l) {
-          if (*(plane1 + (i + k) * stride1 + j + l) !=
-              *(plane2 + (i + k) * stride2 + j + l)) {
-            uloc[0] = i + k;
-            uloc[1] = j + l;
-            uloc[2] = *(plane1 + (i + k) * stride1 + j + l);
-            uloc[3] = *(plane2 + (i + k) * stride2 + j + l);
-            match = 0;
-            break;
-          }
-        }
-      }
-    }
-  }
-
-  vloc[0] = vloc[1] = vloc[2] = vloc[3] = -1;
-  plane1 = (uint16_t *)img1->planes[VPX_PLANE_V];
-  plane2 = (uint16_t *)img2->planes[VPX_PLANE_V];
-  stride1 = img1->stride[VPX_PLANE_V] / 2;
-  stride2 = img2->stride[VPX_PLANE_V] / 2;
-  for (i = 0, match = 1; match && i < c_h; i += bsizey) {
-    for (j = 0; match && j < c_w; j += bsizex) {
-      int k, l;
-      const int si = mmin(i + bsizey, c_h - i);
-      const int sj = mmin(j + bsizex, c_w - j);
-      for (k = 0; match && k < si; ++k) {
-        for (l = 0; match && l < sj; ++l) {
-          if (*(plane1 + (i + k) * stride1 + j + l) !=
-              *(plane2 + (i + k) * stride2 + j + l)) {
-            vloc[0] = i + k;
-            vloc[1] = j + l;
-            vloc[2] = *(plane1 + (i + k) * stride1 + j + l);
-            vloc[3] = *(plane2 + (i + k) * stride2 + j + l);
-            match = 0;
-            break;
-          }
-        }
-      }
-    }
-  }
-}
-#endif
-
-static void find_mismatch(const vpx_image_t *const img1,
-                          const vpx_image_t *const img2, int yloc[4],
-                          int uloc[4], int vloc[4]) {
-  const uint32_t bsize = 64;
-  const uint32_t bsizey = bsize >> img1->y_chroma_shift;
-  const uint32_t bsizex = bsize >> img1->x_chroma_shift;
-  const uint32_t c_w =
-      (img1->d_w + img1->x_chroma_shift) >> img1->x_chroma_shift;
-  const uint32_t c_h =
-      (img1->d_h + img1->y_chroma_shift) >> img1->y_chroma_shift;
-  int match = 1;
-  uint32_t i, j;
-  yloc[0] = yloc[1] = yloc[2] = yloc[3] = -1;
-  for (i = 0, match = 1; match && i < img1->d_h; i += bsize) {
-    for (j = 0; match && j < img1->d_w; j += bsize) {
-      int k, l;
-      const int si = mmin(i + bsize, img1->d_h) - i;
-      const int sj = mmin(j + bsize, img1->d_w) - j;
-      for (k = 0; match && k < si; ++k) {
-        for (l = 0; match && l < sj; ++l) {
-          if (*(img1->planes[VPX_PLANE_Y] +
-                (i + k) * img1->stride[VPX_PLANE_Y] + j + l) !=
-              *(img2->planes[VPX_PLANE_Y] +
-                (i + k) * img2->stride[VPX_PLANE_Y] + j + l)) {
-            yloc[0] = i + k;
-            yloc[1] = j + l;
-            yloc[2] = *(img1->planes[VPX_PLANE_Y] +
-                        (i + k) * img1->stride[VPX_PLANE_Y] + j + l);
-            yloc[3] = *(img2->planes[VPX_PLANE_Y] +
-                        (i + k) * img2->stride[VPX_PLANE_Y] + j + l);
-            match = 0;
-            break;
-          }
-        }
-      }
-    }
-  }
-
-  uloc[0] = uloc[1] = uloc[2] = uloc[3] = -1;
-  for (i = 0, match = 1; match && i < c_h; i += bsizey) {
-    for (j = 0; match && j < c_w; j += bsizex) {
-      int k, l;
-      const int si = mmin(i + bsizey, c_h - i);
-      const int sj = mmin(j + bsizex, c_w - j);
-      for (k = 0; match && k < si; ++k) {
-        for (l = 0; match && l < sj; ++l) {
-          if (*(img1->planes[VPX_PLANE_U] +
-                (i + k) * img1->stride[VPX_PLANE_U] + j + l) !=
-              *(img2->planes[VPX_PLANE_U] +
-                (i + k) * img2->stride[VPX_PLANE_U] + j + l)) {
-            uloc[0] = i + k;
-            uloc[1] = j + l;
-            uloc[2] = *(img1->planes[VPX_PLANE_U] +
-                        (i + k) * img1->stride[VPX_PLANE_U] + j + l);
-            uloc[3] = *(img2->planes[VPX_PLANE_U] +
-                        (i + k) * img2->stride[VPX_PLANE_U] + j + l);
-            match = 0;
-            break;
-          }
-        }
-      }
-    }
-  }
-  vloc[0] = vloc[1] = vloc[2] = vloc[3] = -1;
-  for (i = 0, match = 1; match && i < c_h; i += bsizey) {
-    for (j = 0; match && j < c_w; j += bsizex) {
-      int k, l;
-      const int si = mmin(i + bsizey, c_h - i);
-      const int sj = mmin(j + bsizex, c_w - j);
-      for (k = 0; match && k < si; ++k) {
-        for (l = 0; match && l < sj; ++l) {
-          if (*(img1->planes[VPX_PLANE_V] +
-                (i + k) * img1->stride[VPX_PLANE_V] + j + l) !=
-              *(img2->planes[VPX_PLANE_V] +
-                (i + k) * img2->stride[VPX_PLANE_V] + j + l)) {
-            vloc[0] = i + k;
-            vloc[1] = j + l;
-            vloc[2] = *(img1->planes[VPX_PLANE_V] +
-                        (i + k) * img1->stride[VPX_PLANE_V] + j + l);
-            vloc[3] = *(img2->planes[VPX_PLANE_V] +
-                        (i + k) * img2->stride[VPX_PLANE_V] + j + l);
-            match = 0;
-            break;
-          }
-        }
-      }
-    }
-  }
-}
-
-static int compare_img(const vpx_image_t *const img1,
-                       const vpx_image_t *const img2) {
-  uint32_t l_w = img1->d_w;
-  uint32_t c_w = (img1->d_w + img1->x_chroma_shift) >> img1->x_chroma_shift;
-  const uint32_t c_h =
-      (img1->d_h + img1->y_chroma_shift) >> img1->y_chroma_shift;
-  uint32_t i;
-  int match = 1;
-
-  match &= (img1->fmt == img2->fmt);
-  match &= (img1->d_w == img2->d_w);
-  match &= (img1->d_h == img2->d_h);
-#if CONFIG_VP9_HIGHBITDEPTH
-  if (img1->fmt & VPX_IMG_FMT_HIGHBITDEPTH) {
-    l_w *= 2;
-    c_w *= 2;
-  }
-#endif
-
-  for (i = 0; i < img1->d_h; ++i)
-    match &= (memcmp(img1->planes[VPX_PLANE_Y] + i * img1->stride[VPX_PLANE_Y],
-                     img2->planes[VPX_PLANE_Y] + i * img2->stride[VPX_PLANE_Y],
-                     l_w) == 0);
-
-  for (i = 0; i < c_h; ++i)
-    match &= (memcmp(img1->planes[VPX_PLANE_U] + i * img1->stride[VPX_PLANE_U],
-                     img2->planes[VPX_PLANE_U] + i * img2->stride[VPX_PLANE_U],
-                     c_w) == 0);
-
-  for (i = 0; i < c_h; ++i)
-    match &= (memcmp(img1->planes[VPX_PLANE_V] + i * img1->stride[VPX_PLANE_V],
-                     img2->planes[VPX_PLANE_V] + i * img2->stride[VPX_PLANE_V],
-                     c_w) == 0);
-
-  return match;
-}
-
 #define NELEMENTS(x) (sizeof(x) / sizeof(x[0]))
 #if CONFIG_VP9_ENCODER
 #define ARG_CTRL_CNT_MAX NELEMENTS(vp9_arg_ctrl_map)
@@ -1012,57 +765,6 @@
   }
 }
 
-static void open_input_file(struct VpxInputContext *input) {
-  /* Parse certain options from the input file, if possible */
-  input->file = strcmp(input->filename, "-") ? fopen(input->filename, "rb")
-                                             : set_binary_mode(stdin);
-
-  if (!input->file) fatal("Failed to open input file");
-
-  if (!fseeko(input->file, 0, SEEK_END)) {
-    /* Input file is seekable. Figure out how long it is, so we can get
-     * progress info.
-     */
-    input->length = ftello(input->file);
-    rewind(input->file);
-  }
-
-  /* Default to 1:1 pixel aspect ratio. */
-  input->pixel_aspect_ratio.numerator = 1;
-  input->pixel_aspect_ratio.denominator = 1;
-
-  /* For RAW input sources, these bytes will applied on the first frame
-   *  in read_frame().
-   */
-  input->detect.buf_read = fread(input->detect.buf, 1, 4, input->file);
-  input->detect.position = 0;
-
-  if (input->detect.buf_read == 4 && file_is_y4m(input->detect.buf)) {
-    if (y4m_input_open(&input->y4m, input->file, input->detect.buf, 4,
-                       input->only_i420) >= 0) {
-      input->file_type = FILE_TYPE_Y4M;
-      input->width = input->y4m.pic_w;
-      input->height = input->y4m.pic_h;
-      input->pixel_aspect_ratio.numerator = input->y4m.par_n;
-      input->pixel_aspect_ratio.denominator = input->y4m.par_d;
-      input->framerate.numerator = input->y4m.fps_n;
-      input->framerate.denominator = input->y4m.fps_d;
-      input->fmt = input->y4m.vpx_fmt;
-      input->bit_depth = input->y4m.bit_depth;
-    } else
-      fatal("Unsupported Y4M stream.");
-  } else if (input->detect.buf_read == 4 && fourcc_is_ivf(input->detect.buf)) {
-    fatal("IVF is not supported as input.");
-  } else {
-    input->file_type = FILE_TYPE_RAW;
-  }
-}
-
-static void close_input_file(struct VpxInputContext *input) {
-  fclose(input->file);
-  if (input->file_type == FILE_TYPE_Y4M) y4m_input_close(&input->y4m);
-}
-
 static struct stream_state *new_stream(struct VpxEncoderConfig *global,
                                        struct stream_state *prev) {
   struct stream_state *stream;
@@ -1614,14 +1316,14 @@
             vpx_img_alloc(NULL, VPX_IMG_FMT_I42016, cfg->g_w, cfg->g_h, 16);
       }
       I420Scale_16(
-          (uint16 *)img->planes[VPX_PLANE_Y], img->stride[VPX_PLANE_Y] / 2,
-          (uint16 *)img->planes[VPX_PLANE_U], img->stride[VPX_PLANE_U] / 2,
-          (uint16 *)img->planes[VPX_PLANE_V], img->stride[VPX_PLANE_V] / 2,
-          img->d_w, img->d_h, (uint16 *)stream->img->planes[VPX_PLANE_Y],
+          (uint16_t *)img->planes[VPX_PLANE_Y], img->stride[VPX_PLANE_Y] / 2,
+          (uint16_t *)img->planes[VPX_PLANE_U], img->stride[VPX_PLANE_U] / 2,
+          (uint16_t *)img->planes[VPX_PLANE_V], img->stride[VPX_PLANE_V] / 2,
+          img->d_w, img->d_h, (uint16_t *)stream->img->planes[VPX_PLANE_Y],
           stream->img->stride[VPX_PLANE_Y] / 2,
-          (uint16 *)stream->img->planes[VPX_PLANE_U],
+          (uint16_t *)stream->img->planes[VPX_PLANE_U],
           stream->img->stride[VPX_PLANE_U] / 2,
-          (uint16 *)stream->img->planes[VPX_PLANE_V],
+          (uint16_t *)stream->img->planes[VPX_PLANE_V],
           stream->img->stride[VPX_PLANE_V] / 2, stream->img->d_w,
           stream->img->d_h, kFilterBox);
       img = stream->img;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxenc.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxenc.h
index d867e9d..b780aed 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxenc.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxenc.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef VPXENC_H_
-#define VPXENC_H_
+#ifndef VPX_VPXENC_H_
+#define VPX_VPXENC_H_
 
 #include "vpx/vpx_encoder.h"
 
@@ -61,4 +61,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPXENC_H_
+#endif  // VPX_VPXENC_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxstats.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxstats.h
index 5c9ea34..3625ee3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxstats.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/vpxstats.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef VPXSTATS_H_
-#define VPXSTATS_H_
+#ifndef VPX_VPXSTATS_H_
+#define VPX_VPXSTATS_H_
 
 #include <stdio.h>
 
@@ -40,4 +40,4 @@
 }  // extern "C"
 #endif
 
-#endif  // VPXSTATS_H_
+#endif  // VPX_VPXSTATS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/warnings.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/warnings.h
index 6b8ae67..15558c64 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/warnings.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/warnings.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef WARNINGS_H_
-#define WARNINGS_H_
+#ifndef VPX_WARNINGS_H_
+#define VPX_WARNINGS_H_
 
 #ifdef __cplusplus
 extern "C" {
@@ -30,4 +30,4 @@
 }  // extern "C"
 #endif
 
-#endif  // WARNINGS_H_
+#endif  // VPX_WARNINGS_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/webmdec.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/webmdec.h
index 7dcb170..d8618b0 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/webmdec.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/webmdec.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef WEBMDEC_H_
-#define WEBMDEC_H_
+#ifndef VPX_WEBMDEC_H_
+#define VPX_WEBMDEC_H_
 
 #include "./tools_common.h"
 
@@ -66,4 +66,4 @@
 }  // extern "C"
 #endif
 
-#endif  // WEBMDEC_H_
+#endif  // VPX_WEBMDEC_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/webmenc.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/webmenc.h
index b4a9e35..4176e82 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/webmenc.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/webmenc.h
@@ -7,8 +7,8 @@
  *  in the file PATENTS.  All contributing project authors may
  *  be found in the AUTHORS file in the root of the source tree.
  */
-#ifndef WEBMENC_H_
-#define WEBMENC_H_
+#ifndef VPX_WEBMENC_H_
+#define VPX_WEBMENC_H_
 
 #include <stdio.h>
 #include <stdlib.h>
@@ -52,4 +52,4 @@
 }  // extern "C"
 #endif
 
-#endif  // WEBMENC_H_
+#endif  // VPX_WEBMENC_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4menc.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4menc.c
index 05018db..02b729e 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4menc.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4menc.c
@@ -17,11 +17,9 @@
   const char *color;
   switch (bit_depth) {
     case 8:
-      color = fmt == VPX_IMG_FMT_444A
-                  ? "C444alpha\n"
-                  : fmt == VPX_IMG_FMT_I444
-                        ? "C444\n"
-                        : fmt == VPX_IMG_FMT_I422 ? "C422\n" : "C420jpeg\n";
+      color = fmt == VPX_IMG_FMT_I444
+                  ? "C444\n"
+                  : fmt == VPX_IMG_FMT_I422 ? "C422\n" : "C420jpeg\n";
       break;
     case 9:
       color = fmt == VPX_IMG_FMT_I44416
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4menc.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4menc.h
index 69d5904..9a367e3 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4menc.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4menc.h
@@ -8,8 +8,8 @@
  *  be found in the AUTHORS file in the root of the source tree.
  */
 
-#ifndef Y4MENC_H_
-#define Y4MENC_H_
+#ifndef VPX_Y4MENC_H_
+#define VPX_Y4MENC_H_
 
 #include "./tools_common.h"
 
@@ -30,4 +30,4 @@
 }  // extern "C"
 #endif
 
-#endif  // Y4MENC_H_
+#endif  // VPX_Y4MENC_H_
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4minput.c b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4minput.c
index bf349c0..007bd99 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4minput.c
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4minput.c
@@ -975,6 +975,8 @@
     _y4m->aux_buf_sz =
         _y4m->aux_buf_read_sz + ((_y4m->pic_w + 1) / 2) * _y4m->pic_h;
     _y4m->convert = y4m_convert_411_420jpeg;
+    fprintf(stderr, "Unsupported conversion from yuv 411\n");
+    return -1;
   } else if (strcmp(_y4m->chroma_type, "444") == 0) {
     _y4m->src_c_dec_h = 1;
     _y4m->src_c_dec_v = 1;
@@ -1029,30 +1031,6 @@
       fprintf(stderr, "Unsupported conversion from 444p12 to 420jpeg\n");
       return -1;
     }
-  } else if (strcmp(_y4m->chroma_type, "444alpha") == 0) {
-    _y4m->src_c_dec_h = 1;
-    _y4m->src_c_dec_v = 1;
-    if (only_420) {
-      _y4m->dst_c_dec_h = 2;
-      _y4m->dst_c_dec_v = 2;
-      _y4m->dst_buf_read_sz = _y4m->pic_w * _y4m->pic_h;
-      /*Chroma filter required: read into the aux buf first.
-        We need to make two filter passes, so we need some extra space in the
-         aux buffer.
-        The extra plane also gets read into the aux buf.
-        It will be discarded.*/
-      _y4m->aux_buf_sz = _y4m->aux_buf_read_sz = 3 * _y4m->pic_w * _y4m->pic_h;
-      _y4m->convert = y4m_convert_444_420jpeg;
-    } else {
-      _y4m->vpx_fmt = VPX_IMG_FMT_444A;
-      _y4m->bps = 32;
-      _y4m->dst_c_dec_h = _y4m->src_c_dec_h;
-      _y4m->dst_c_dec_v = _y4m->src_c_dec_v;
-      _y4m->dst_buf_read_sz = 4 * _y4m->pic_w * _y4m->pic_h;
-      /*Natively supported: no conversion required.*/
-      _y4m->aux_buf_sz = _y4m->aux_buf_read_sz = 0;
-      _y4m->convert = y4m_convert_null;
-    }
   } else if (strcmp(_y4m->chroma_type, "mono") == 0) {
     _y4m->src_c_dec_h = _y4m->src_c_dec_v = 0;
     _y4m->dst_c_dec_h = _y4m->dst_c_dec_v = 2;
diff --git a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4minput.h b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4minput.h
index ff068230..a4a8b18 100644
--- a/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4minput.h
+++ b/Source/ThirdParty/libwebrtc/Source/third_party/libvpx/source/libvpx/y4minput.h
@@ -1,4 +1,3 @@
-
 /*
  *  Copyright (c) 2010 The WebM project authors. All Rights Reserved.
  *
@@ -12,8 +11,8 @@
  *  Copyright (C) 2002-2010 The Xiph.Org Foundation and contributors.
  */
 
-#ifndef Y4MINPUT_H_
-#define Y4MINPUT_H_
+#ifndef VPX_Y4MINPUT_H_
+#define VPX_Y4MINPUT_H_
 
 #include <stdio.h>
 #include "vpx/vpx_image.h"
@@ -66,4 +65,4 @@
 }  // extern "C"
 #endif
 
-#endif  // Y4MINPUT_H_
+#endif  // VPX_Y4MINPUT_H_
diff --git a/Source/ThirdParty/libwebrtc/libwebrtc.xcodeproj/project.pbxproj b/Source/ThirdParty/libwebrtc/libwebrtc.xcodeproj/project.pbxproj
index 56d650a..63a24c4 100644
--- a/Source/ThirdParty/libwebrtc/libwebrtc.xcodeproj/project.pbxproj
+++ b/Source/ThirdParty/libwebrtc/libwebrtc.xcodeproj/project.pbxproj
@@ -984,6 +984,219 @@
 		414035EF24AA0EBC00BCE9B2 /* RTCVideoDecoderVP9.mm in Sources */ = {isa = PBXBuildFile; fileRef = 414035EB24AA0EBB00BCE9B2 /* RTCVideoDecoderVP9.mm */; };
 		414035F224AA0F5400BCE9B2 /* video_rtp_depacketizer_vp9.cc in Sources */ = {isa = PBXBuildFile; fileRef = 414035F024AA0F5300BCE9B2 /* video_rtp_depacketizer_vp9.cc */; };
 		414035F324AA0F5400BCE9B2 /* video_rtp_depacketizer_vp9.h in Headers */ = {isa = PBXBuildFile; fileRef = 414035F124AA0F5300BCE9B2 /* video_rtp_depacketizer_vp9.h */; };
+		414035FA24AA1F5500BCE9B2 /* vp9_frame_buffer_pool.h in Headers */ = {isa = PBXBuildFile; fileRef = 414035F524AA1F5400BCE9B2 /* vp9_frame_buffer_pool.h */; };
+		414035FC24AA1F5500BCE9B2 /* vp9_impl.h in Headers */ = {isa = PBXBuildFile; fileRef = 414035F724AA1F5400BCE9B2 /* vp9_impl.h */; };
+		414035FD24AA1F5500BCE9B2 /* vp9_frame_buffer_pool.cc in Sources */ = {isa = PBXBuildFile; fileRef = 414035F824AA1F5500BCE9B2 /* vp9_frame_buffer_pool.cc */; };
+		4140360424AA24BD00BCE9B2 /* block_error_sse2.asm in Sources */ = {isa = PBXBuildFile; fileRef = 414035FF24AA248700BCE9B2 /* block_error_sse2.asm */; };
+		4140360524AA24C000BCE9B2 /* copy_sse2.asm in Sources */ = {isa = PBXBuildFile; fileRef = 414035FE24AA248700BCE9B2 /* copy_sse2.asm */; };
+		4140360624AA24C300BCE9B2 /* copy_sse3.asm in Sources */ = {isa = PBXBuildFile; fileRef = 4140360024AA248700BCE9B2 /* copy_sse3.asm */; };
+		4140360924AA24FE00BCE9B2 /* bilinear_filter_sse2.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140360724AA24FB00BCE9B2 /* bilinear_filter_sse2.c */; };
+		4140360E24AA253A00BCE9B2 /* vpx_subpixel_4t_intrin_sse2.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140360A24AA253800BCE9B2 /* vpx_subpixel_4t_intrin_sse2.c */; };
+		4140360F24AA253A00BCE9B2 /* quantize_sse2.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140360B24AA253900BCE9B2 /* quantize_sse2.h */; };
+		4140361024AA253A00BCE9B2 /* convolve_sse2.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140360C24AA253900BCE9B2 /* convolve_sse2.h */; };
+		4140361124AA253A00BCE9B2 /* post_proc_sse2.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140360D24AA253900BCE9B2 /* post_proc_sse2.c */; };
+		4140361724AA294000BCE9B2 /* x86.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140361624AA293F00BCE9B2 /* x86.h */; };
+		4140361824AA2B9700BCE9B2 /* highbd_idct16x16_add_sse4.c in Sources */ = {isa = PBXBuildFile; fileRef = 41BAE3D6212E2D9300E22482 /* highbd_idct16x16_add_sse4.c */; };
+		4140361924AA2BB100BCE9B2 /* highbd_idct8x8_add_sse4.c in Sources */ = {isa = PBXBuildFile; fileRef = 41BAE3CD212E2D9100E22482 /* highbd_idct8x8_add_sse4.c */; };
+		4140361A24AA2BBC00BCE9B2 /* quantize_sse4.c in Sources */ = {isa = PBXBuildFile; fileRef = 416731DA212E045D001280EB /* quantize_sse4.c */; };
+		4140361B24AA2D9100BCE9B2 /* sad.c in Sources */ = {isa = PBXBuildFile; fileRef = 413309F0212E2BD600280939 /* sad.c */; };
+		4140361E24AA2E6800BCE9B2 /* emms_mmx.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140361D24AA2E6700BCE9B2 /* emms_mmx.c */; };
+		4140361F24AA2E9000BCE9B2 /* highbd_idct4x4_add_sse4.c in Sources */ = {isa = PBXBuildFile; fileRef = 41BAE3C7212E2D9000E22482 /* highbd_idct4x4_add_sse4.c */; };
+		4140362024AA2EB500BCE9B2 /* highbd_idct32x32_add_sse4.c in Sources */ = {isa = PBXBuildFile; fileRef = 41C62918212E2DE7002313D4 /* highbd_idct32x32_add_sse4.c */; };
+		4140362224AA300300BCE9B2 /* vp9.cc in Sources */ = {isa = PBXBuildFile; fileRef = 414035F424AA1F5400BCE9B2 /* vp9.cc */; };
+		4140362324AA300700BCE9B2 /* vp9_impl.cc in Sources */ = {isa = PBXBuildFile; fileRef = 414035F624AA1F5400BCE9B2 /* vp9_impl.cc */; };
+		4140362D24AA306600BCE9B2 /* vp9_iface_common.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140362524AA306400BCE9B2 /* vp9_iface_common.c */; };
+		4140362E24AA306600BCE9B2 /* simple_encode.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140362624AA306400BCE9B2 /* simple_encode.h */; };
+		4140362F24AA306600BCE9B2 /* vp9_cx_iface.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140362724AA306400BCE9B2 /* vp9_cx_iface.c */; };
+		4140363024AA306600BCE9B2 /* vp9_dx_iface.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140362824AA306500BCE9B2 /* vp9_dx_iface.h */; };
+		4140363124AA306600BCE9B2 /* vp9_cx_iface.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140362924AA306500BCE9B2 /* vp9_cx_iface.h */; };
+		4140363224AA306600BCE9B2 /* vp9_dx_iface.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140362A24AA306500BCE9B2 /* vp9_dx_iface.c */; };
+		4140363324AA306600BCE9B2 /* vp9_iface_common.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140362B24AA306500BCE9B2 /* vp9_iface_common.h */; };
+		4140368924AA30B600BCE9B2 /* vp9_aq_complexity.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140363824AA309F00BCE9B2 /* vp9_aq_complexity.h */; };
+		4140368A24AA30B600BCE9B2 /* vp9_cost.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140363924AA309F00BCE9B2 /* vp9_cost.c */; };
+		4140368B24AA30B600BCE9B2 /* vp9_pickmode.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140363A24AA30A000BCE9B2 /* vp9_pickmode.h */; };
+		4140368C24AA30B600BCE9B2 /* vp9_ratectrl.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140363B24AA30A000BCE9B2 /* vp9_ratectrl.c */; };
+		4140368D24AA30B600BCE9B2 /* vp9_temporal_filter.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140363C24AA30A000BCE9B2 /* vp9_temporal_filter.c */; };
+		4140368E24AA30B600BCE9B2 /* vp9_lookahead.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140363D24AA30A000BCE9B2 /* vp9_lookahead.h */; };
+		4140368F24AA30B600BCE9B2 /* vp9_temporal_filter.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140363E24AA30A100BCE9B2 /* vp9_temporal_filter.h */; };
+		4140369024AA30B600BCE9B2 /* vp9_multi_thread.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140363F24AA30A100BCE9B2 /* vp9_multi_thread.h */; };
+		4140369124AA30B600BCE9B2 /* vp9_resize.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140364024AA30A100BCE9B2 /* vp9_resize.h */; };
+		4140369224AA30B600BCE9B2 /* vp9_pickmode.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140364124AA30A100BCE9B2 /* vp9_pickmode.c */; };
+		4140369324AA30B600BCE9B2 /* vp9_mbgraph.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140364224AA30A200BCE9B2 /* vp9_mbgraph.h */; };
+		4140369424AA30B600BCE9B2 /* vp9_alt_ref_aq.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140364324AA30A200BCE9B2 /* vp9_alt_ref_aq.h */; };
+		4140369524AA30B600BCE9B2 /* vp9_dct.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140364424AA30A200BCE9B2 /* vp9_dct.c */; };
+		4140369624AA30B600BCE9B2 /* vp9_speed_features.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140364524AA30A200BCE9B2 /* vp9_speed_features.c */; };
+		4140369724AA30B600BCE9B2 /* vp9_encodemb.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140364624AA30A300BCE9B2 /* vp9_encodemb.c */; };
+		4140369824AA30B600BCE9B2 /* vp9_bitstream.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140364724AA30A300BCE9B2 /* vp9_bitstream.h */; };
+		4140369924AA30B600BCE9B2 /* vp9_speed_features.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140364824AA30A300BCE9B2 /* vp9_speed_features.h */; };
+		4140369A24AA30B600BCE9B2 /* vp9_segmentation.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140364924AA30A300BCE9B2 /* vp9_segmentation.c */; };
+		4140369B24AA30B600BCE9B2 /* vp9_subexp.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140364A24AA30A400BCE9B2 /* vp9_subexp.h */; };
+		4140369C24AA30B600BCE9B2 /* vp9_aq_complexity.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140364B24AA30A400BCE9B2 /* vp9_aq_complexity.c */; };
+		4140369D24AA30B600BCE9B2 /* vp9_subexp.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140364C24AA30A400BCE9B2 /* vp9_subexp.c */; };
+		4140369E24AA30B600BCE9B2 /* vp9_blockiness.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140364D24AA30A500BCE9B2 /* vp9_blockiness.h */; };
+		4140369F24AA30B600BCE9B2 /* vp9_extend.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140364E24AA30A500BCE9B2 /* vp9_extend.h */; };
+		414036A024AA30B600BCE9B2 /* vp9_aq_cyclicrefresh.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140364F24AA30A500BCE9B2 /* vp9_aq_cyclicrefresh.c */; };
+		414036A124AA30B600BCE9B2 /* vp9_ethread.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140365024AA30A500BCE9B2 /* vp9_ethread.h */; };
+		414036A224AA30B600BCE9B2 /* vp9_denoiser.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140365124AA30A600BCE9B2 /* vp9_denoiser.h */; };
+		414036A324AA30B600BCE9B2 /* vp9_quantize.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140365224AA30A600BCE9B2 /* vp9_quantize.c */; };
+		414036A424AA30B600BCE9B2 /* vp9_aq_360.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140365324AA30A600BCE9B2 /* vp9_aq_360.h */; };
+		414036A524AA30B700BCE9B2 /* vp9_ratectrl.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140365424AA30A600BCE9B2 /* vp9_ratectrl.h */; };
+		414036A624AA30B700BCE9B2 /* vp9_tokenize.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140365524AA30A700BCE9B2 /* vp9_tokenize.h */; };
+		414036A724AA30B700BCE9B2 /* vp9_block.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140365624AA30A700BCE9B2 /* vp9_block.h */; };
+		414036A824AA30B700BCE9B2 /* vp9_aq_360.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140365724AA30A700BCE9B2 /* vp9_aq_360.c */; };
+		414036A924AA30B700BCE9B2 /* vp9_context_tree.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140365824AA30A700BCE9B2 /* vp9_context_tree.c */; };
+		414036AA24AA30B700BCE9B2 /* vp9_context_tree.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140365924AA30A700BCE9B2 /* vp9_context_tree.h */; };
+		414036AB24AA30B700BCE9B2 /* vp9_ethread.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140365A24AA30A800BCE9B2 /* vp9_ethread.c */; };
+		414036AC24AA30B700BCE9B2 /* vp9_non_greedy_mv.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140365B24AA30A800BCE9B2 /* vp9_non_greedy_mv.c */; };
+		414036AD24AA30B700BCE9B2 /* vp9_skin_detection.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140365C24AA30A800BCE9B2 /* vp9_skin_detection.h */; };
+		414036AE24AA30B700BCE9B2 /* vp9_multi_thread.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140365D24AA30A900BCE9B2 /* vp9_multi_thread.c */; };
+		414036AF24AA30B700BCE9B2 /* vp9_rd.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140365E24AA30A900BCE9B2 /* vp9_rd.h */; };
+		414036B024AA30B700BCE9B2 /* vp9_picklpf.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140365F24AA30A900BCE9B2 /* vp9_picklpf.c */; };
+		414036B124AA30B700BCE9B2 /* vp9_treewriter.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140366024AA30A900BCE9B2 /* vp9_treewriter.c */; };
+		414036B224AA30B700BCE9B2 /* vp9_rdopt.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140366124AA30AA00BCE9B2 /* vp9_rdopt.c */; };
+		414036B324AA30B700BCE9B2 /* vp9_encodemv.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140366224AA30AA00BCE9B2 /* vp9_encodemv.h */; };
+		414036B424AA30B700BCE9B2 /* vp9_job_queue.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140366324AA30AA00BCE9B2 /* vp9_job_queue.h */; };
+		414036B524AA30B700BCE9B2 /* vp9_resize.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140366424AA30AA00BCE9B2 /* vp9_resize.c */; };
+		414036B624AA30B700BCE9B2 /* vp9_encodeframe.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140366524AA30AA00BCE9B2 /* vp9_encodeframe.h */; };
+		414036B724AA30B700BCE9B2 /* vp9_partition_models.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140366624AA30AB00BCE9B2 /* vp9_partition_models.h */; };
+		414036B824AA30B700BCE9B2 /* vp9_noise_estimate.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140366724AA30AB00BCE9B2 /* vp9_noise_estimate.h */; };
+		414036B924AA30B700BCE9B2 /* vp9_non_greedy_mv.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140366824AA30AB00BCE9B2 /* vp9_non_greedy_mv.h */; };
+		414036BA24AA30B700BCE9B2 /* vp9_picklpf.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140366924AA30AB00BCE9B2 /* vp9_picklpf.h */; };
+		414036BB24AA30B700BCE9B2 /* vp9_rdopt.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140366A24AA30AC00BCE9B2 /* vp9_rdopt.h */; };
+		414036BC24AA30B700BCE9B2 /* vp9_encoder.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140366B24AA30AC00BCE9B2 /* vp9_encoder.h */; };
+		414036BD24AA30B700BCE9B2 /* vp9_quantize.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140366C24AA30AC00BCE9B2 /* vp9_quantize.h */; };
+		414036BE24AA30B700BCE9B2 /* vp9_lookahead.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140366D24AA30AC00BCE9B2 /* vp9_lookahead.c */; };
+		414036BF24AA30B700BCE9B2 /* vp9_aq_variance.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140366E24AA30AC00BCE9B2 /* vp9_aq_variance.h */; };
+		414036C024AA30B700BCE9B2 /* vp9_mbgraph.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140366F24AA30AD00BCE9B2 /* vp9_mbgraph.c */; };
+		414036C124AA30B700BCE9B2 /* vp9_extend.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140367024AA30AD00BCE9B2 /* vp9_extend.c */; };
+		414036C224AA30B700BCE9B2 /* vp9_encoder.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140367124AA30AD00BCE9B2 /* vp9_encoder.c */; };
+		414036C324AA30B700BCE9B2 /* vp9_denoiser.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140367224AA30AE00BCE9B2 /* vp9_denoiser.c */; };
+		414036C424AA30B700BCE9B2 /* vp9_encodemv.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140367324AA30AE00BCE9B2 /* vp9_encodemv.c */; };
+		414036C524AA30B700BCE9B2 /* vp9_mcomp.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140367424AA30AE00BCE9B2 /* vp9_mcomp.h */; };
+		414036C624AA30B700BCE9B2 /* vp9_encodeframe.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140367524AA30AE00BCE9B2 /* vp9_encodeframe.c */; };
+		414036C724AA30B700BCE9B2 /* vp9_alt_ref_aq.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140367624AA30AF00BCE9B2 /* vp9_alt_ref_aq.c */; };
+		414036C824AA30B700BCE9B2 /* vp9_aq_variance.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140367724AA30AF00BCE9B2 /* vp9_aq_variance.c */; };
+		414036C924AA30B700BCE9B2 /* vp9_svc_layercontext.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140367824AA30AF00BCE9B2 /* vp9_svc_layercontext.h */; };
+		414036CA24AA30B700BCE9B2 /* vp9_tokenize.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140367924AA30AF00BCE9B2 /* vp9_tokenize.c */; };
+		414036CB24AA30B700BCE9B2 /* vp9_treewriter.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140367A24AA30AF00BCE9B2 /* vp9_treewriter.h */; };
+		414036CC24AA30B700BCE9B2 /* vp9_segmentation.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140367B24AA30B300BCE9B2 /* vp9_segmentation.h */; };
+		414036CD24AA30B700BCE9B2 /* vp9_svc_layercontext.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140367C24AA30B300BCE9B2 /* vp9_svc_layercontext.c */; };
+		414036CE24AA30B700BCE9B2 /* vp9_mcomp.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140367D24AA30B300BCE9B2 /* vp9_mcomp.c */; };
+		414036CF24AA30B700BCE9B2 /* vp9_blockiness.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140367E24AA30B400BCE9B2 /* vp9_blockiness.c */; };
+		414036D024AA30B700BCE9B2 /* vp9_noise_estimate.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140367F24AA30B400BCE9B2 /* vp9_noise_estimate.c */; };
+		414036D124AA30B700BCE9B2 /* vp9_skin_detection.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140368024AA30B400BCE9B2 /* vp9_skin_detection.c */; };
+		414036D224AA30B700BCE9B2 /* vp9_firstpass.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140368124AA30B400BCE9B2 /* vp9_firstpass.c */; };
+		414036D324AA30B700BCE9B2 /* vp9_rd.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140368224AA30B500BCE9B2 /* vp9_rd.c */; };
+		414036D424AA30B700BCE9B2 /* vp9_firstpass.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140368324AA30B500BCE9B2 /* vp9_firstpass.h */; };
+		414036D524AA30B700BCE9B2 /* vp9_bitstream.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140368424AA30B500BCE9B2 /* vp9_bitstream.c */; };
+		414036D624AA30B700BCE9B2 /* vp9_aq_cyclicrefresh.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140368524AA30B600BCE9B2 /* vp9_aq_cyclicrefresh.h */; };
+		414036D724AA30B700BCE9B2 /* vp9_cost.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140368624AA30B600BCE9B2 /* vp9_cost.h */; };
+		414036D824AA30B700BCE9B2 /* vp9_encodemb.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140368724AA30B600BCE9B2 /* vp9_encodemb.h */; };
+		414036D924AA30B700BCE9B2 /* vp9_frame_scale.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140368824AA30B600BCE9B2 /* vp9_frame_scale.c */; };
+		414036E624AA30BC00BCE9B2 /* vp9_job_queue.h in Headers */ = {isa = PBXBuildFile; fileRef = 414036DA24AA30B900BCE9B2 /* vp9_job_queue.h */; };
+		414036E724AA30BC00BCE9B2 /* vp9_job_queue.c in Sources */ = {isa = PBXBuildFile; fileRef = 414036DB24AA30B900BCE9B2 /* vp9_job_queue.c */; };
+		414036E824AA30BC00BCE9B2 /* vp9_decoder.c in Sources */ = {isa = PBXBuildFile; fileRef = 414036DC24AA30BA00BCE9B2 /* vp9_decoder.c */; };
+		414036E924AA30BC00BCE9B2 /* vp9_dsubexp.h in Headers */ = {isa = PBXBuildFile; fileRef = 414036DD24AA30BA00BCE9B2 /* vp9_dsubexp.h */; };
+		414036EA24AA30BC00BCE9B2 /* vp9_decodeframe.c in Sources */ = {isa = PBXBuildFile; fileRef = 414036DE24AA30BA00BCE9B2 /* vp9_decodeframe.c */; };
+		414036EB24AA30BC00BCE9B2 /* vp9_dsubexp.c in Sources */ = {isa = PBXBuildFile; fileRef = 414036DF24AA30BA00BCE9B2 /* vp9_dsubexp.c */; };
+		414036EC24AA30BC00BCE9B2 /* vp9_detokenize.h in Headers */ = {isa = PBXBuildFile; fileRef = 414036E024AA30BB00BCE9B2 /* vp9_detokenize.h */; };
+		414036ED24AA30BC00BCE9B2 /* vp9_detokenize.c in Sources */ = {isa = PBXBuildFile; fileRef = 414036E124AA30BB00BCE9B2 /* vp9_detokenize.c */; };
+		414036EE24AA30BC00BCE9B2 /* vp9_decodemv.c in Sources */ = {isa = PBXBuildFile; fileRef = 414036E224AA30BB00BCE9B2 /* vp9_decodemv.c */; };
+		414036EF24AA30BC00BCE9B2 /* vp9_decodemv.h in Headers */ = {isa = PBXBuildFile; fileRef = 414036E324AA30BB00BCE9B2 /* vp9_decodemv.h */; };
+		414036F024AA30BC00BCE9B2 /* vp9_decodeframe.h in Headers */ = {isa = PBXBuildFile; fileRef = 414036E424AA30BC00BCE9B2 /* vp9_decodeframe.h */; };
+		414036F124AA30BC00BCE9B2 /* vp9_decoder.h in Headers */ = {isa = PBXBuildFile; fileRef = 414036E524AA30BC00BCE9B2 /* vp9_decoder.h */; };
+		4140372724AA30D900BCE9B2 /* vp9_enums.h in Headers */ = {isa = PBXBuildFile; fileRef = 414036F224AA30C800BCE9B2 /* vp9_enums.h */; };
+		4140372824AA30D900BCE9B2 /* vp9_blockd.c in Sources */ = {isa = PBXBuildFile; fileRef = 414036F324AA30C900BCE9B2 /* vp9_blockd.c */; };
+		4140372924AA30D900BCE9B2 /* vp9_entropy.h in Headers */ = {isa = PBXBuildFile; fileRef = 414036F424AA30C900BCE9B2 /* vp9_entropy.h */; };
+		4140372A24AA30D900BCE9B2 /* vp9_loopfilter.h in Headers */ = {isa = PBXBuildFile; fileRef = 414036F524AA30C900BCE9B2 /* vp9_loopfilter.h */; };
+		4140372B24AA30D900BCE9B2 /* vp9_mvref_common.c in Sources */ = {isa = PBXBuildFile; fileRef = 414036F624AA30CA00BCE9B2 /* vp9_mvref_common.c */; };
+		4140372C24AA30D900BCE9B2 /* vp9_scale.h in Headers */ = {isa = PBXBuildFile; fileRef = 414036F724AA30CA00BCE9B2 /* vp9_scale.h */; };
+		4140372D24AA30D900BCE9B2 /* vp9_pred_common.c in Sources */ = {isa = PBXBuildFile; fileRef = 414036F824AA30CA00BCE9B2 /* vp9_pred_common.c */; };
+		4140372E24AA30D900BCE9B2 /* vp9_mfqe.c in Sources */ = {isa = PBXBuildFile; fileRef = 414036F924AA30CA00BCE9B2 /* vp9_mfqe.c */; };
+		4140372F24AA30D900BCE9B2 /* vp9_tile_common.h in Headers */ = {isa = PBXBuildFile; fileRef = 414036FA24AA30CB00BCE9B2 /* vp9_tile_common.h */; };
+		4140373024AA30D900BCE9B2 /* vp9_alloccommon.c in Sources */ = {isa = PBXBuildFile; fileRef = 414036FB24AA30CB00BCE9B2 /* vp9_alloccommon.c */; };
+		4140373124AA30D900BCE9B2 /* vp9_entropymv.c in Sources */ = {isa = PBXBuildFile; fileRef = 414036FC24AA30CB00BCE9B2 /* vp9_entropymv.c */; };
+		4140373224AA30D900BCE9B2 /* vp9_postproc.c in Sources */ = {isa = PBXBuildFile; fileRef = 414036FD24AA30CB00BCE9B2 /* vp9_postproc.c */; };
+		4140373324AA30D900BCE9B2 /* vp9_frame_buffers.c in Sources */ = {isa = PBXBuildFile; fileRef = 414036FE24AA30CC00BCE9B2 /* vp9_frame_buffers.c */; };
+		4140373424AA30D900BCE9B2 /* vp9_common_data.c in Sources */ = {isa = PBXBuildFile; fileRef = 414036FF24AA30CC00BCE9B2 /* vp9_common_data.c */; };
+		4140373524AA30D900BCE9B2 /* vp9_idct.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140370024AA30CC00BCE9B2 /* vp9_idct.h */; };
+		4140373624AA30D900BCE9B2 /* vp9_blockd.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140370124AA30CC00BCE9B2 /* vp9_blockd.h */; };
+		4140373724AA30D900BCE9B2 /* vp9_common_data.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140370224AA30CD00BCE9B2 /* vp9_common_data.h */; };
+		4140373824AA30D900BCE9B2 /* vp9_seg_common.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140370324AA30CD00BCE9B2 /* vp9_seg_common.c */; };
+		4140373924AA30D900BCE9B2 /* vp9_rtcd.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140370424AA30CD00BCE9B2 /* vp9_rtcd.c */; };
+		4140373A24AA30D900BCE9B2 /* vp9_scan.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140370524AA30CE00BCE9B2 /* vp9_scan.h */; };
+		4140373B24AA30DA00BCE9B2 /* vp9_onyxc_int.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140370624AA30CE00BCE9B2 /* vp9_onyxc_int.h */; };
+		4140373C24AA30DA00BCE9B2 /* vp9_thread_common.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140370724AA30CE00BCE9B2 /* vp9_thread_common.c */; };
+		4140373D24AA30DA00BCE9B2 /* vp9_idct.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140370824AA30CE00BCE9B2 /* vp9_idct.c */; };
+		4140373E24AA30DA00BCE9B2 /* vp9_quant_common.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140370924AA30CE00BCE9B2 /* vp9_quant_common.c */; };
+		4140373F24AA30DA00BCE9B2 /* vp9_reconinter.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140370A24AA30CF00BCE9B2 /* vp9_reconinter.c */; };
+		4140374024AA30DA00BCE9B2 /* vp9_debugmodes.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140370B24AA30CF00BCE9B2 /* vp9_debugmodes.c */; };
+		4140374124AA30DA00BCE9B2 /* vp9_thread_common.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140370C24AA30CF00BCE9B2 /* vp9_thread_common.h */; };
+		4140374224AA30DA00BCE9B2 /* vp9_reconintra.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140370D24AA30CF00BCE9B2 /* vp9_reconintra.h */; };
+		4140374324AA30DA00BCE9B2 /* vp9_scale.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140370E24AA30D000BCE9B2 /* vp9_scale.c */; };
+		4140374424AA30DA00BCE9B2 /* vp9_loopfilter.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140370F24AA30D000BCE9B2 /* vp9_loopfilter.c */; };
+		4140374524AA30DA00BCE9B2 /* vp9_mv.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140371024AA30D000BCE9B2 /* vp9_mv.h */; };
+		4140374624AA30DA00BCE9B2 /* vp9_filter.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140371124AA30D000BCE9B2 /* vp9_filter.h */; };
+		4140374724AA30DA00BCE9B2 /* vp9_filter.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140371224AA30D100BCE9B2 /* vp9_filter.c */; };
+		4140374824AA30DA00BCE9B2 /* vp9_scan.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140371324AA30D100BCE9B2 /* vp9_scan.c */; };
+		4140374924AA30DA00BCE9B2 /* vp9_pred_common.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140371524AA30D300BCE9B2 /* vp9_pred_common.h */; };
+		4140374A24AA30DA00BCE9B2 /* vp9_reconintra.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140371624AA30D400BCE9B2 /* vp9_reconintra.c */; };
+		4140374B24AA30DA00BCE9B2 /* vp9_postproc.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140371824AA30D500BCE9B2 /* vp9_postproc.h */; };
+		4140374C24AA30DA00BCE9B2 /* vp9_tile_common.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140371924AA30D500BCE9B2 /* vp9_tile_common.c */; };
+		4140374D24AA30DA00BCE9B2 /* vp9_reconinter.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140371A24AA30D600BCE9B2 /* vp9_reconinter.h */; };
+		4140374E24AA30DA00BCE9B2 /* vp9_seg_common.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140371B24AA30D600BCE9B2 /* vp9_seg_common.h */; };
+		4140374F24AA30DA00BCE9B2 /* vp9_common.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140371C24AA30D600BCE9B2 /* vp9_common.h */; };
+		4140375024AA30DA00BCE9B2 /* vp9_entropy.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140371D24AA30D600BCE9B2 /* vp9_entropy.c */; };
+		4140375124AA30DA00BCE9B2 /* vp9_entropymode.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140371E24AA30D700BCE9B2 /* vp9_entropymode.c */; };
+		4140375224AA30DA00BCE9B2 /* vp9_mfqe.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140371F24AA30D700BCE9B2 /* vp9_mfqe.h */; };
+		4140375324AA30DA00BCE9B2 /* vp9_frame_buffers.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140372024AA30D700BCE9B2 /* vp9_frame_buffers.h */; };
+		4140375424AA30DA00BCE9B2 /* vp9_entropymv.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140372124AA30D700BCE9B2 /* vp9_entropymv.h */; };
+		4140375524AA30DA00BCE9B2 /* vp9_mvref_common.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140372224AA30D800BCE9B2 /* vp9_mvref_common.h */; };
+		4140375624AA30DA00BCE9B2 /* vp9_entropymode.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140372324AA30D800BCE9B2 /* vp9_entropymode.h */; };
+		4140375724AA30DA00BCE9B2 /* vp9_alloccommon.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140372424AA30D800BCE9B2 /* vp9_alloccommon.h */; };
+		4140375824AA30DA00BCE9B2 /* vp9_ppflags.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140372524AA30D900BCE9B2 /* vp9_ppflags.h */; };
+		4140375924AA30DA00BCE9B2 /* vp9_quant_common.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140372624AA30D900BCE9B2 /* vp9_quant_common.h */; };
+		4140376024AA311500BCE9B2 /* vp9_highbd_iht16x16_add_sse4.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140375B24AA311300BCE9B2 /* vp9_highbd_iht16x16_add_sse4.c */; };
+		4140376124AA311500BCE9B2 /* vp9_highbd_iht4x4_add_sse4.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140375C24AA311400BCE9B2 /* vp9_highbd_iht4x4_add_sse4.c */; };
+		4140376224AA311500BCE9B2 /* vp9_idct_intrin_sse2.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140375D24AA311400BCE9B2 /* vp9_idct_intrin_sse2.c */; };
+		4140376324AA311500BCE9B2 /* vp9_mfqe_sse2.asm in Sources */ = {isa = PBXBuildFile; fileRef = 4140375E24AA311500BCE9B2 /* vp9_mfqe_sse2.asm */; };
+		4140376424AA311500BCE9B2 /* vp9_highbd_iht8x8_add_sse4.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140375F24AA311500BCE9B2 /* vp9_highbd_iht8x8_add_sse4.c */; };
+		4140376D24AA312100BCE9B2 /* vp9_iht4x4_add_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140376624AA311F00BCE9B2 /* vp9_iht4x4_add_neon.c */; };
+		4140376F24AA312100BCE9B2 /* vp9_iht_neon.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140376824AA312000BCE9B2 /* vp9_iht_neon.h */; };
+		4140377024AA312100BCE9B2 /* vp9_iht8x8_add_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140376924AA312000BCE9B2 /* vp9_iht8x8_add_neon.c */; };
+		4140377224AA312100BCE9B2 /* vp9_iht16x16_add_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140376B24AA312100BCE9B2 /* vp9_iht16x16_add_neon.c */; };
+		4140378424AA32DC00BCE9B2 /* vp9_quantize_ssse3_x86_64.asm in Sources */ = {isa = PBXBuildFile; fileRef = 4140377624AA32D900BCE9B2 /* vp9_quantize_ssse3_x86_64.asm */; };
+		4140378524AA32DC00BCE9B2 /* temporal_filter_sse4.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140377724AA32D900BCE9B2 /* temporal_filter_sse4.c */; };
+		4140378624AA32DC00BCE9B2 /* vp9_frame_scale_ssse3.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140377824AA32D900BCE9B2 /* vp9_frame_scale_ssse3.c */; };
+		4140378724AA32DC00BCE9B2 /* vp9_highbd_block_error_intrin_sse2.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140377924AA32DA00BCE9B2 /* vp9_highbd_block_error_intrin_sse2.c */; };
+		4140378824AA32DC00BCE9B2 /* highbd_temporal_filter_sse4.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140377A24AA32DA00BCE9B2 /* highbd_temporal_filter_sse4.c */; };
+		4140378924AA32DC00BCE9B2 /* vp9_denoiser_sse2.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140377B24AA32DA00BCE9B2 /* vp9_denoiser_sse2.c */; };
+		4140378A24AA32DC00BCE9B2 /* temporal_filter_constants.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140377C24AA32DB00BCE9B2 /* temporal_filter_constants.h */; };
+		4140378B24AA32DC00BCE9B2 /* vp9_error_sse2.asm in Sources */ = {isa = PBXBuildFile; fileRef = 4140377D24AA32DB00BCE9B2 /* vp9_error_sse2.asm */; };
+		4140378C24AA32DC00BCE9B2 /* vp9_dct_intrin_sse2.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140377E24AA32DB00BCE9B2 /* vp9_dct_intrin_sse2.c */; };
+		4140378D24AA32DC00BCE9B2 /* vp9_quantize_sse2.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140377F24AA32DB00BCE9B2 /* vp9_quantize_sse2.c */; };
+		4140378E24AA32DC00BCE9B2 /* vp9_dct_sse2.asm in Sources */ = {isa = PBXBuildFile; fileRef = 4140378024AA32DC00BCE9B2 /* vp9_dct_sse2.asm */; };
+		4140378F24AA32DC00BCE9B2 /* vp9_diamond_search_sad_avx.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140378124AA32DC00BCE9B2 /* vp9_diamond_search_sad_avx.c */; };
+		4140379F24AB2FB500BCE9B2 /* ethreading.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140379024AB2FB200BCE9B2 /* ethreading.h */; };
+		414037A024AB2FB500BCE9B2 /* quantize.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140379124AB2FB200BCE9B2 /* quantize.h */; };
+		414037A124AB2FB500BCE9B2 /* segmentation.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140379224AB2FB200BCE9B2 /* segmentation.h */; };
+		414037A224AB2FB500BCE9B2 /* ratectrl.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140379324AB2FB300BCE9B2 /* ratectrl.h */; };
+		414037A324AB2FB500BCE9B2 /* dct_value_cost.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140379424AB2FB300BCE9B2 /* dct_value_cost.h */; };
+		414037A424AB2FB500BCE9B2 /* boolhuff.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140379524AB2FB300BCE9B2 /* boolhuff.h */; };
+		414037A524AB2FB500BCE9B2 /* pickinter.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140379624AB2FB300BCE9B2 /* pickinter.h */; };
+		414037A624AB2FB500BCE9B2 /* encodemb.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140379724AB2FB300BCE9B2 /* encodemb.h */; };
+		414037A724AB2FB500BCE9B2 /* rdopt.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140379824AB2FB400BCE9B2 /* rdopt.h */; };
+		414037A824AB2FB500BCE9B2 /* lookahead.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140379924AB2FB400BCE9B2 /* lookahead.h */; };
+		414037A924AB2FB500BCE9B2 /* temporal_filter.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140379A24AB2FB400BCE9B2 /* temporal_filter.h */; };
+		414037AA24AB2FB500BCE9B2 /* copy_c.c in Sources */ = {isa = PBXBuildFile; fileRef = 4140379B24AB2FB400BCE9B2 /* copy_c.c */; };
+		414037AB24AB2FB500BCE9B2 /* modecosts.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140379C24AB2FB500BCE9B2 /* modecosts.h */; };
+		414037AC24AB2FB500BCE9B2 /* treewriter.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140379D24AB2FB500BCE9B2 /* treewriter.h */; };
+		414037AD24AB2FB500BCE9B2 /* mr_dissim.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140379E24AB2FB500BCE9B2 /* mr_dissim.h */; };
+		414037B324AB359700BCE9B2 /* vp9_denoiser_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 414037AF24AB359600BCE9B2 /* vp9_denoiser_neon.c */; };
+		414037B424AB359700BCE9B2 /* vp9_error_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 414037B024AB359600BCE9B2 /* vp9_error_neon.c */; };
+		414037B524AB359700BCE9B2 /* vp9_frame_scale_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 414037B124AB359700BCE9B2 /* vp9_frame_scale_neon.c */; };
+		414037B624AB359700BCE9B2 /* vp9_quantize_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 414037B224AB359700BCE9B2 /* vp9_quantize_neon.c */; };
+		414037B824AB35E200BCE9B2 /* sum_squares_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 414037B724AB35E100BCE9B2 /* sum_squares_neon.c */; };
 		4140B8201E4E3383007409E6 /* audio_encoder_pcm.cc in Sources */ = {isa = PBXBuildFile; fileRef = 4140B8181E4E3383007409E6 /* audio_encoder_pcm.cc */; };
 		4140B8211E4E3383007409E6 /* audio_encoder_pcm.h in Headers */ = {isa = PBXBuildFile; fileRef = 4140B8191E4E3383007409E6 /* audio_encoder_pcm.h */; };
 		4140B8221E4E3383007409E6 /* audio_decoder_pcm.cc in Sources */ = {isa = PBXBuildFile; fileRef = 4140B81A1E4E3383007409E6 /* audio_decoder_pcm.cc */; };
@@ -1135,8 +1348,6 @@
 		416225E92169819500A91C9B /* objc_video_frame.h in Headers */ = {isa = PBXBuildFile; fileRef = 416225E12169819400A91C9B /* objc_video_frame.h */; };
 		4162BA37216596160044F344 /* vp8_cx_iface.c in Sources */ = {isa = PBXBuildFile; fileRef = 411ED031212E04CC004320BA /* vp8_cx_iface.c */; };
 		4162BA38216596160044F344 /* vp8_dx_iface.c in Sources */ = {isa = PBXBuildFile; fileRef = 411ED032212E04CC004320BA /* vp8_dx_iface.c */; };
-		41659C082165975700CCBDC2 /* filter_x86.c in Sources */ = {isa = PBXBuildFile; fileRef = 41239AE6214756C700396F81 /* filter_x86.c */; };
-		41659C092165975700CCBDC2 /* filter_x86.h in Headers */ = {isa = PBXBuildFile; fileRef = 41239AE9214756C700396F81 /* filter_x86.h */; };
 		41659C0A2165975700CCBDC2 /* idct_blk_mmx.c in Sources */ = {isa = PBXBuildFile; fileRef = 41239AE7214756C700396F81 /* idct_blk_mmx.c */; };
 		41659C0B2165975700CCBDC2 /* idct_blk_sse2.c in Sources */ = {isa = PBXBuildFile; fileRef = 41239AEB214756C800396F81 /* idct_blk_sse2.c */; };
 		41659C0C2165975700CCBDC2 /* loopfilter_x86.c in Sources */ = {isa = PBXBuildFile; fileRef = 41239AEA214756C800396F81 /* loopfilter_x86.c */; };
@@ -1450,7 +1661,6 @@
 		41893A6B242A7826007FDC41 /* task_queue_paced_sender.cc in Sources */ = {isa = PBXBuildFile; fileRef = 41893A68242A7825007FDC41 /* task_queue_paced_sender.cc */; };
 		41893A6C242A7826007FDC41 /* task_queue_paced_sender.h in Headers */ = {isa = PBXBuildFile; fileRef = 41893A69242A7826007FDC41 /* task_queue_paced_sender.h */; };
 		41893A6E242A78DF007FDC41 /* common.cc in Sources */ = {isa = PBXBuildFile; fileRef = 41893A6D242A78DF007FDC41 /* common.cc */; };
-		418B14DF2165959F0046E03F /* svc_encodeframe.c in Sources */ = {isa = PBXBuildFile; fileRef = 41EAF1BD212E2AAD009F73EC /* svc_encodeframe.c */; };
 		418B14E02165959F0046E03F /* vpx_codec.c in Sources */ = {isa = PBXBuildFile; fileRef = 41EAF1B9212E2AAD009F73EC /* vpx_codec.c */; };
 		418B14E12165959F0046E03F /* vpx_decoder.c in Sources */ = {isa = PBXBuildFile; fileRef = 41EAF1BB212E2AAD009F73EC /* vpx_decoder.c */; };
 		418B14E22165959F0046E03F /* vpx_encoder.c in Sources */ = {isa = PBXBuildFile; fileRef = 41EAF1BA212E2AAD009F73EC /* vpx_encoder.c */; };
@@ -1466,8 +1676,6 @@
 		419100DA2152ECE700A6F17B /* dequant_idct_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 419100BB2152ECDC00A6F17B /* dequant_idct_neon.c */; };
 		419100DB2152ECE700A6F17B /* dequantizeb_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 419100C62152ECDE00A6F17B /* dequantizeb_neon.c */; };
 		419100DC2152ECE700A6F17B /* idct_blk_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 419100BE2152ECDC00A6F17B /* idct_blk_neon.c */; };
-		419100DD2152ECE700A6F17B /* idct_dequant_0_2x_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 419100BA2152ECDB00A6F17B /* idct_dequant_0_2x_neon.c */; };
-		419100DE2152ECE700A6F17B /* idct_dequant_full_2x_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 419100C22152ECDD00A6F17B /* idct_dequant_full_2x_neon.c */; };
 		419100DF2152ECE700A6F17B /* iwalsh_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 419100C12152ECDD00A6F17B /* iwalsh_neon.c */; };
 		419100E02152ECE700A6F17B /* loopfiltersimplehorizontaledge_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 419100C02152ECDD00A6F17B /* loopfiltersimplehorizontaledge_neon.c */; };
 		419100E12152ECE700A6F17B /* loopfiltersimpleverticaledge_neon.c in Sources */ = {isa = PBXBuildFile; fileRef = 419100C42152ECDD00A6F17B /* loopfiltersimpleverticaledge_neon.c */; };
@@ -1807,7 +2015,6 @@
 		41C628FA212E2DB0002313D4 /* avg_intrin_sse2.c in Sources */ = {isa = PBXBuildFile; fileRef = 41BAE3CC212E2D9100E22482 /* avg_intrin_sse2.c */; };
 		41C628FC212E2DB0002313D4 /* fwd_txfm_sse2.c in Sources */ = {isa = PBXBuildFile; fileRef = 41BAE3D0212E2D9200E22482 /* fwd_txfm_sse2.c */; };
 		41C62907212E2DB0002313D4 /* loopfilter_sse2.c in Sources */ = {isa = PBXBuildFile; fileRef = 41BAE3CA212E2D9000E22482 /* loopfilter_sse2.c */; };
-		41C6290B212E2DB0002313D4 /* vpx_asm_stubs.c in Sources */ = {isa = PBXBuildFile; fileRef = 41BAE3CE212E2D9100E22482 /* vpx_asm_stubs.c */; };
 		41C6290D212E2DB0002313D4 /* vpx_subpixel_8t_intrin_ssse3.c in Sources */ = {isa = PBXBuildFile; fileRef = 41BAE3C9212E2D9000E22482 /* vpx_subpixel_8t_intrin_ssse3.c */; };
 		41C6291D212E2DE9002313D4 /* quantize_sse2.c in Sources */ = {isa = PBXBuildFile; fileRef = 41C6290F212E2DE3002313D4 /* quantize_sse2.c */; };
 		41C6291F212E2DE9002313D4 /* avg_pred_sse2.c in Sources */ = {isa = PBXBuildFile; fileRef = 41C62911212E2DE4002313D4 /* avg_pred_sse2.c */; };
@@ -1820,7 +2027,6 @@
 		41C62937212E2F1E002313D4 /* alloccommon.c in Sources */ = {isa = PBXBuildFile; fileRef = 41673201212E0490001280EB /* alloccommon.c */; };
 		41C62938212E2F1E002313D4 /* blockd.c in Sources */ = {isa = PBXBuildFile; fileRef = 416731EC212E048B001280EB /* blockd.c */; };
 		41C62939212E2F1E002313D4 /* context.c in Sources */ = {isa = PBXBuildFile; fileRef = 416731F3212E048D001280EB /* context.c */; };
-		41C6293A212E2F1E002313D4 /* copy_c.c in Sources */ = {isa = PBXBuildFile; fileRef = 416731FF212E048F001280EB /* copy_c.c */; };
 		41C6293B212E2F1E002313D4 /* debugmodes.c in Sources */ = {isa = PBXBuildFile; fileRef = 416731F7212E048E001280EB /* debugmodes.c */; };
 		41C6293C212E2F1E002313D4 /* dequantize.c in Sources */ = {isa = PBXBuildFile; fileRef = 41673200212E048F001280EB /* dequantize.c */; };
 		41C6293D212E2F1E002313D4 /* entropy.c in Sources */ = {isa = PBXBuildFile; fileRef = 416731FC212E048F001280EB /* entropy.c */; };
@@ -1852,11 +2058,9 @@
 		41CB0A07215C8D760097B8AA /* idctllm_mmx.asm in Sources */ = {isa = PBXBuildFile; fileRef = 41CB09FA215C8D740097B8AA /* idctllm_mmx.asm */; };
 		41CB0A08215C8D760097B8AA /* idctllm_sse2.asm in Sources */ = {isa = PBXBuildFile; fileRef = 41CB09FB215C8D740097B8AA /* idctllm_sse2.asm */; };
 		41CB0A09215C8D760097B8AA /* mfqe_sse2.asm in Sources */ = {isa = PBXBuildFile; fileRef = 41CB09FC215C8D740097B8AA /* mfqe_sse2.asm */; };
-		41CB0A0A215C8D760097B8AA /* copy_sse3.asm in Sources */ = {isa = PBXBuildFile; fileRef = 41CB09FD215C8D750097B8AA /* copy_sse3.asm */; };
 		41CB0A0B215C8D760097B8AA /* recon_sse2.asm in Sources */ = {isa = PBXBuildFile; fileRef = 41CB09FE215C8D750097B8AA /* recon_sse2.asm */; };
 		41CB0A0C215C8D760097B8AA /* dequantize_mmx.asm in Sources */ = {isa = PBXBuildFile; fileRef = 41CB09FF215C8D750097B8AA /* dequantize_mmx.asm */; };
 		41CB0A0D215C8D760097B8AA /* recon_mmx.asm in Sources */ = {isa = PBXBuildFile; fileRef = 41CB0A00215C8D750097B8AA /* recon_mmx.asm */; };
-		41CB0A0E215C8D760097B8AA /* copy_sse2.asm in Sources */ = {isa = PBXBuildFile; fileRef = 41CB0A01215C8D750097B8AA /* copy_sse2.asm */; };
 		41CB0A0F215C8D760097B8AA /* subpixel_sse2.asm in Sources */ = {isa = PBXBuildFile; fileRef = 41CB0A02215C8D760097B8AA /* subpixel_sse2.asm */; };
 		41CB0A10215C8D760097B8AA /* iwalsh_sse2.asm in Sources */ = {isa = PBXBuildFile; fileRef = 41CB0A03215C8D760097B8AA /* iwalsh_sse2.asm */; };
 		41CB0A11215C8D940097B8AA /* vpx_config.asm in Sources */ = {isa = PBXBuildFile; fileRef = 4105EB9F212E02CC008C0C20 /* vpx_config.asm */; };
@@ -1902,10 +2106,8 @@
 		41CB0A52215C8DC90097B8AA /* vpx_subpixel_8t_ssse3.asm in Sources */ = {isa = PBXBuildFile; fileRef = 41C6296B212E3654002313D4 /* vpx_subpixel_8t_ssse3.asm */; };
 		41CB0A53215C8DC90097B8AA /* vpx_subpixel_bilinear_sse2.asm in Sources */ = {isa = PBXBuildFile; fileRef = 41C6296F212E3655002313D4 /* vpx_subpixel_bilinear_sse2.asm */; };
 		41CB0A54215C8DC90097B8AA /* vpx_subpixel_bilinear_ssse3.asm in Sources */ = {isa = PBXBuildFile; fileRef = 41C62974212E3656002313D4 /* vpx_subpixel_bilinear_ssse3.asm */; };
-		41CB0A56215C8DDA0097B8AA /* emms.asm in Sources */ = {isa = PBXBuildFile; fileRef = 41CB0A55215C8DDA0097B8AA /* emms.asm */; };
 		41CB0A59215C90500097B8AA /* subpixel_ssse3.asm in Sources */ = {isa = PBXBuildFile; fileRef = 41CB0A58215C90500097B8AA /* subpixel_ssse3.asm */; };
 		41CB0A5A215C90750097B8AA /* dct_sse2.asm in Sources */ = {isa = PBXBuildFile; fileRef = 416731DC212E045D001280EB /* dct_sse2.asm */; };
-		41CB0A5B215C90750097B8AA /* encodeopt.asm in Sources */ = {isa = PBXBuildFile; fileRef = 416731D9212E045D001280EB /* encodeopt.asm */; };
 		41CB0A5C215C90750097B8AA /* fwalsh_sse2.asm in Sources */ = {isa = PBXBuildFile; fileRef = 416731E1212E045E001280EB /* fwalsh_sse2.asm */; };
 		41CB0A5D215C90750097B8AA /* temporal_filter_apply_sse2.asm in Sources */ = {isa = PBXBuildFile; fileRef = 416731E0212E045E001280EB /* temporal_filter_apply_sse2.asm */; };
 		41CBAF94212E039300DE1E1D /* dboolhuff.c in Sources */ = {isa = PBXBuildFile; fileRef = 4105EBB5212E035D008C0C20 /* dboolhuff.c */; };
@@ -2075,7 +2277,6 @@
 		41EED7BD2152EEC9000F2A16 /* arm.h in Headers */ = {isa = PBXBuildFile; fileRef = 41EED7BB2152EEC8000F2A16 /* arm.h */; };
 		41EED7BE2152EEC9000F2A16 /* arm_cpudetect.c in Sources */ = {isa = PBXBuildFile; fileRef = 41EED7BC2152EEC8000F2A16 /* arm_cpudetect.c */; };
 		41EED7BF2152F1FB000F2A16 /* onyx_if.c in Sources */ = {isa = PBXBuildFile; fileRef = 416731A8212E0426001280EB /* onyx_if.c */; };
-		41EED7DC21531E5F000F2A16 /* sad.c in Sources */ = {isa = PBXBuildFile; fileRef = 413309F0212E2BD600280939 /* sad.c */; settings = {COMPILER_FLAGS = "-Wno-incompatible-pointer-types "; }; };
 		41F2635F21267ADF00274F59 /* builtin_video_decoder_factory.cc in Sources */ = {isa = PBXBuildFile; fileRef = 41DDB27321267AC000296D47 /* builtin_video_decoder_factory.cc */; };
 		41F2636021267ADF00274F59 /* builtin_video_decoder_factory.h in Headers */ = {isa = PBXBuildFile; fileRef = 41DDB27621267AC100296D47 /* builtin_video_decoder_factory.h */; };
 		41F2636121267ADF00274F59 /* builtin_video_encoder_factory.h in Headers */ = {isa = PBXBuildFile; fileRef = 41DDB27721267AC100296D47 /* builtin_video_encoder_factory.h */; };
@@ -3324,7 +3525,6 @@
 		5CDD8C0B1E43C34600621E92 /* audio_encoder_isac_t_impl.h in Headers */ = {isa = PBXBuildFile; fileRef = 5CDD8BFD1E43C34600621E92 /* audio_encoder_isac_t_impl.h */; };
 		5CDD8C0C1E43C34600621E92 /* audio_encoder_isac_t.h in Headers */ = {isa = PBXBuildFile; fileRef = 5CDD8BFE1E43C34600621E92 /* audio_encoder_isac_t.h */; };
 		5CDD8C0D1E43C34600621E92 /* bandwidth_info.h in Headers */ = {isa = PBXBuildFile; fileRef = 5CDD8BFF1E43C34600621E92 /* bandwidth_info.h */; };
-		5CDD8C141E43C3B400621E92 /* vp9_noop.cc in Sources */ = {isa = PBXBuildFile; fileRef = 5CDD8C131E43C3B400621E92 /* vp9_noop.cc */; };
 		5CDD8C4F1E43C58E00621E92 /* audio_sink.h in Headers */ = {isa = PBXBuildFile; fileRef = 5CDD8C491E43C58E00621E92 /* audio_sink.h */; };
 		5CDD8C601E43C60900621E92 /* audio_decoder_opus.cc in Sources */ = {isa = PBXBuildFile; fileRef = 5CDD8C531E43C60900621E92 /* audio_decoder_opus.cc */; };
 		5CDD8C611E43C60900621E92 /* audio_decoder_opus.h in Headers */ = {isa = PBXBuildFile; fileRef = 5CDD8C541E43C60900621E92 /* audio_decoder_opus.h */; };
@@ -4044,10 +4244,8 @@
 		411ED031212E04CC004320BA /* vp8_cx_iface.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp8_cx_iface.c; sourceTree = "<group>"; };
 		411ED032212E04CC004320BA /* vp8_dx_iface.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp8_dx_iface.c; sourceTree = "<group>"; };
 		411ED035212E05DE004320BA /* libvpx.xcconfig */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.xcconfig; path = libvpx.xcconfig; sourceTree = "<group>"; };
-		41239AE6214756C700396F81 /* filter_x86.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = filter_x86.c; sourceTree = "<group>"; };
 		41239AE7214756C700396F81 /* idct_blk_mmx.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = idct_blk_mmx.c; sourceTree = "<group>"; };
 		41239AE8214756C700396F81 /* vp8_asm_stubs.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp8_asm_stubs.c; sourceTree = "<group>"; };
-		41239AE9214756C700396F81 /* filter_x86.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = filter_x86.h; sourceTree = "<group>"; };
 		41239AEA214756C800396F81 /* loopfilter_x86.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = loopfilter_x86.c; sourceTree = "<group>"; };
 		41239AEB214756C800396F81 /* idct_blk_sse2.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = idct_blk_sse2.c; sourceTree = "<group>"; };
 		41239B08214757AD00396F81 /* vpx_scale.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vpx_scale.c; sourceTree = "<group>"; };
@@ -5003,6 +5201,220 @@
 		414035EB24AA0EBB00BCE9B2 /* RTCVideoDecoderVP9.mm */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.objcpp; path = RTCVideoDecoderVP9.mm; sourceTree = "<group>"; };
 		414035F024AA0F5300BCE9B2 /* video_rtp_depacketizer_vp9.cc */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = video_rtp_depacketizer_vp9.cc; sourceTree = "<group>"; };
 		414035F124AA0F5300BCE9B2 /* video_rtp_depacketizer_vp9.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = video_rtp_depacketizer_vp9.h; sourceTree = "<group>"; };
+		414035F424AA1F5400BCE9B2 /* vp9.cc */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = vp9.cc; path = codecs/vp9/vp9.cc; sourceTree = "<group>"; };
+		414035F524AA1F5400BCE9B2 /* vp9_frame_buffer_pool.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = vp9_frame_buffer_pool.h; path = codecs/vp9/vp9_frame_buffer_pool.h; sourceTree = "<group>"; };
+		414035F624AA1F5400BCE9B2 /* vp9_impl.cc */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = vp9_impl.cc; path = codecs/vp9/vp9_impl.cc; sourceTree = "<group>"; };
+		414035F724AA1F5400BCE9B2 /* vp9_impl.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = vp9_impl.h; path = codecs/vp9/vp9_impl.h; sourceTree = "<group>"; };
+		414035F824AA1F5500BCE9B2 /* vp9_frame_buffer_pool.cc */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = vp9_frame_buffer_pool.cc; path = codecs/vp9/vp9_frame_buffer_pool.cc; sourceTree = "<group>"; };
+		414035FE24AA248700BCE9B2 /* copy_sse2.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = copy_sse2.asm; sourceTree = "<group>"; };
+		414035FF24AA248700BCE9B2 /* block_error_sse2.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = block_error_sse2.asm; sourceTree = "<group>"; };
+		4140360024AA248700BCE9B2 /* copy_sse3.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = copy_sse3.asm; sourceTree = "<group>"; };
+		4140360724AA24FB00BCE9B2 /* bilinear_filter_sse2.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = bilinear_filter_sse2.c; sourceTree = "<group>"; };
+		4140360A24AA253800BCE9B2 /* vpx_subpixel_4t_intrin_sse2.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vpx_subpixel_4t_intrin_sse2.c; sourceTree = "<group>"; };
+		4140360B24AA253900BCE9B2 /* quantize_sse2.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = quantize_sse2.h; sourceTree = "<group>"; };
+		4140360C24AA253900BCE9B2 /* convolve_sse2.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = convolve_sse2.h; sourceTree = "<group>"; };
+		4140360D24AA253900BCE9B2 /* post_proc_sse2.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = post_proc_sse2.c; sourceTree = "<group>"; };
+		4140361624AA293F00BCE9B2 /* x86.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = x86.h; sourceTree = "<group>"; };
+		4140361D24AA2E6700BCE9B2 /* emms_mmx.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = emms_mmx.c; sourceTree = "<group>"; };
+		4140362524AA306400BCE9B2 /* vp9_iface_common.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_iface_common.c; sourceTree = "<group>"; };
+		4140362624AA306400BCE9B2 /* simple_encode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = simple_encode.h; sourceTree = "<group>"; };
+		4140362724AA306400BCE9B2 /* vp9_cx_iface.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_cx_iface.c; sourceTree = "<group>"; };
+		4140362824AA306500BCE9B2 /* vp9_dx_iface.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_dx_iface.h; sourceTree = "<group>"; };
+		4140362924AA306500BCE9B2 /* vp9_cx_iface.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_cx_iface.h; sourceTree = "<group>"; };
+		4140362A24AA306500BCE9B2 /* vp9_dx_iface.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_dx_iface.c; sourceTree = "<group>"; };
+		4140362B24AA306500BCE9B2 /* vp9_iface_common.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_iface_common.h; sourceTree = "<group>"; };
+		4140362C24AA306600BCE9B2 /* simple_encode.cc */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = simple_encode.cc; sourceTree = "<group>"; };
+		4140363824AA309F00BCE9B2 /* vp9_aq_complexity.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_aq_complexity.h; sourceTree = "<group>"; };
+		4140363924AA309F00BCE9B2 /* vp9_cost.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_cost.c; sourceTree = "<group>"; };
+		4140363A24AA30A000BCE9B2 /* vp9_pickmode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_pickmode.h; sourceTree = "<group>"; };
+		4140363B24AA30A000BCE9B2 /* vp9_ratectrl.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_ratectrl.c; sourceTree = "<group>"; };
+		4140363C24AA30A000BCE9B2 /* vp9_temporal_filter.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_temporal_filter.c; sourceTree = "<group>"; };
+		4140363D24AA30A000BCE9B2 /* vp9_lookahead.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_lookahead.h; sourceTree = "<group>"; };
+		4140363E24AA30A100BCE9B2 /* vp9_temporal_filter.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_temporal_filter.h; sourceTree = "<group>"; };
+		4140363F24AA30A100BCE9B2 /* vp9_multi_thread.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_multi_thread.h; sourceTree = "<group>"; };
+		4140364024AA30A100BCE9B2 /* vp9_resize.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_resize.h; sourceTree = "<group>"; };
+		4140364124AA30A100BCE9B2 /* vp9_pickmode.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_pickmode.c; sourceTree = "<group>"; };
+		4140364224AA30A200BCE9B2 /* vp9_mbgraph.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_mbgraph.h; sourceTree = "<group>"; };
+		4140364324AA30A200BCE9B2 /* vp9_alt_ref_aq.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_alt_ref_aq.h; sourceTree = "<group>"; };
+		4140364424AA30A200BCE9B2 /* vp9_dct.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_dct.c; sourceTree = "<group>"; };
+		4140364524AA30A200BCE9B2 /* vp9_speed_features.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_speed_features.c; sourceTree = "<group>"; };
+		4140364624AA30A300BCE9B2 /* vp9_encodemb.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_encodemb.c; sourceTree = "<group>"; };
+		4140364724AA30A300BCE9B2 /* vp9_bitstream.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_bitstream.h; sourceTree = "<group>"; };
+		4140364824AA30A300BCE9B2 /* vp9_speed_features.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_speed_features.h; sourceTree = "<group>"; };
+		4140364924AA30A300BCE9B2 /* vp9_segmentation.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_segmentation.c; sourceTree = "<group>"; };
+		4140364A24AA30A400BCE9B2 /* vp9_subexp.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_subexp.h; sourceTree = "<group>"; };
+		4140364B24AA30A400BCE9B2 /* vp9_aq_complexity.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_aq_complexity.c; sourceTree = "<group>"; };
+		4140364C24AA30A400BCE9B2 /* vp9_subexp.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_subexp.c; sourceTree = "<group>"; };
+		4140364D24AA30A500BCE9B2 /* vp9_blockiness.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_blockiness.h; sourceTree = "<group>"; };
+		4140364E24AA30A500BCE9B2 /* vp9_extend.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_extend.h; sourceTree = "<group>"; };
+		4140364F24AA30A500BCE9B2 /* vp9_aq_cyclicrefresh.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_aq_cyclicrefresh.c; sourceTree = "<group>"; };
+		4140365024AA30A500BCE9B2 /* vp9_ethread.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_ethread.h; sourceTree = "<group>"; };
+		4140365124AA30A600BCE9B2 /* vp9_denoiser.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_denoiser.h; sourceTree = "<group>"; };
+		4140365224AA30A600BCE9B2 /* vp9_quantize.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_quantize.c; sourceTree = "<group>"; };
+		4140365324AA30A600BCE9B2 /* vp9_aq_360.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_aq_360.h; sourceTree = "<group>"; };
+		4140365424AA30A600BCE9B2 /* vp9_ratectrl.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_ratectrl.h; sourceTree = "<group>"; };
+		4140365524AA30A700BCE9B2 /* vp9_tokenize.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_tokenize.h; sourceTree = "<group>"; };
+		4140365624AA30A700BCE9B2 /* vp9_block.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_block.h; sourceTree = "<group>"; };
+		4140365724AA30A700BCE9B2 /* vp9_aq_360.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_aq_360.c; sourceTree = "<group>"; };
+		4140365824AA30A700BCE9B2 /* vp9_context_tree.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_context_tree.c; sourceTree = "<group>"; };
+		4140365924AA30A700BCE9B2 /* vp9_context_tree.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_context_tree.h; sourceTree = "<group>"; };
+		4140365A24AA30A800BCE9B2 /* vp9_ethread.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_ethread.c; sourceTree = "<group>"; };
+		4140365B24AA30A800BCE9B2 /* vp9_non_greedy_mv.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_non_greedy_mv.c; sourceTree = "<group>"; };
+		4140365C24AA30A800BCE9B2 /* vp9_skin_detection.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_skin_detection.h; sourceTree = "<group>"; };
+		4140365D24AA30A900BCE9B2 /* vp9_multi_thread.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_multi_thread.c; sourceTree = "<group>"; };
+		4140365E24AA30A900BCE9B2 /* vp9_rd.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_rd.h; sourceTree = "<group>"; };
+		4140365F24AA30A900BCE9B2 /* vp9_picklpf.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_picklpf.c; sourceTree = "<group>"; };
+		4140366024AA30A900BCE9B2 /* vp9_treewriter.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_treewriter.c; sourceTree = "<group>"; };
+		4140366124AA30AA00BCE9B2 /* vp9_rdopt.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_rdopt.c; sourceTree = "<group>"; };
+		4140366224AA30AA00BCE9B2 /* vp9_encodemv.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_encodemv.h; sourceTree = "<group>"; };
+		4140366324AA30AA00BCE9B2 /* vp9_job_queue.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_job_queue.h; sourceTree = "<group>"; };
+		4140366424AA30AA00BCE9B2 /* vp9_resize.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_resize.c; sourceTree = "<group>"; };
+		4140366524AA30AA00BCE9B2 /* vp9_encodeframe.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_encodeframe.h; sourceTree = "<group>"; };
+		4140366624AA30AB00BCE9B2 /* vp9_partition_models.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_partition_models.h; sourceTree = "<group>"; };
+		4140366724AA30AB00BCE9B2 /* vp9_noise_estimate.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_noise_estimate.h; sourceTree = "<group>"; };
+		4140366824AA30AB00BCE9B2 /* vp9_non_greedy_mv.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_non_greedy_mv.h; sourceTree = "<group>"; };
+		4140366924AA30AB00BCE9B2 /* vp9_picklpf.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_picklpf.h; sourceTree = "<group>"; };
+		4140366A24AA30AC00BCE9B2 /* vp9_rdopt.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_rdopt.h; sourceTree = "<group>"; };
+		4140366B24AA30AC00BCE9B2 /* vp9_encoder.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_encoder.h; sourceTree = "<group>"; };
+		4140366C24AA30AC00BCE9B2 /* vp9_quantize.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_quantize.h; sourceTree = "<group>"; };
+		4140366D24AA30AC00BCE9B2 /* vp9_lookahead.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_lookahead.c; sourceTree = "<group>"; };
+		4140366E24AA30AC00BCE9B2 /* vp9_aq_variance.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_aq_variance.h; sourceTree = "<group>"; };
+		4140366F24AA30AD00BCE9B2 /* vp9_mbgraph.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_mbgraph.c; sourceTree = "<group>"; };
+		4140367024AA30AD00BCE9B2 /* vp9_extend.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_extend.c; sourceTree = "<group>"; };
+		4140367124AA30AD00BCE9B2 /* vp9_encoder.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_encoder.c; sourceTree = "<group>"; };
+		4140367224AA30AE00BCE9B2 /* vp9_denoiser.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_denoiser.c; sourceTree = "<group>"; };
+		4140367324AA30AE00BCE9B2 /* vp9_encodemv.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_encodemv.c; sourceTree = "<group>"; };
+		4140367424AA30AE00BCE9B2 /* vp9_mcomp.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_mcomp.h; sourceTree = "<group>"; };
+		4140367524AA30AE00BCE9B2 /* vp9_encodeframe.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_encodeframe.c; sourceTree = "<group>"; };
+		4140367624AA30AF00BCE9B2 /* vp9_alt_ref_aq.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_alt_ref_aq.c; sourceTree = "<group>"; };
+		4140367724AA30AF00BCE9B2 /* vp9_aq_variance.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_aq_variance.c; sourceTree = "<group>"; };
+		4140367824AA30AF00BCE9B2 /* vp9_svc_layercontext.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_svc_layercontext.h; sourceTree = "<group>"; };
+		4140367924AA30AF00BCE9B2 /* vp9_tokenize.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_tokenize.c; sourceTree = "<group>"; };
+		4140367A24AA30AF00BCE9B2 /* vp9_treewriter.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_treewriter.h; sourceTree = "<group>"; };
+		4140367B24AA30B300BCE9B2 /* vp9_segmentation.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_segmentation.h; sourceTree = "<group>"; };
+		4140367C24AA30B300BCE9B2 /* vp9_svc_layercontext.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_svc_layercontext.c; sourceTree = "<group>"; };
+		4140367D24AA30B300BCE9B2 /* vp9_mcomp.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_mcomp.c; sourceTree = "<group>"; };
+		4140367E24AA30B400BCE9B2 /* vp9_blockiness.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_blockiness.c; sourceTree = "<group>"; };
+		4140367F24AA30B400BCE9B2 /* vp9_noise_estimate.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_noise_estimate.c; sourceTree = "<group>"; };
+		4140368024AA30B400BCE9B2 /* vp9_skin_detection.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_skin_detection.c; sourceTree = "<group>"; };
+		4140368124AA30B400BCE9B2 /* vp9_firstpass.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_firstpass.c; sourceTree = "<group>"; };
+		4140368224AA30B500BCE9B2 /* vp9_rd.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_rd.c; sourceTree = "<group>"; };
+		4140368324AA30B500BCE9B2 /* vp9_firstpass.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_firstpass.h; sourceTree = "<group>"; };
+		4140368424AA30B500BCE9B2 /* vp9_bitstream.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_bitstream.c; sourceTree = "<group>"; };
+		4140368524AA30B600BCE9B2 /* vp9_aq_cyclicrefresh.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_aq_cyclicrefresh.h; sourceTree = "<group>"; };
+		4140368624AA30B600BCE9B2 /* vp9_cost.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_cost.h; sourceTree = "<group>"; };
+		4140368724AA30B600BCE9B2 /* vp9_encodemb.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_encodemb.h; sourceTree = "<group>"; };
+		4140368824AA30B600BCE9B2 /* vp9_frame_scale.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_frame_scale.c; sourceTree = "<group>"; };
+		414036DA24AA30B900BCE9B2 /* vp9_job_queue.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_job_queue.h; sourceTree = "<group>"; };
+		414036DB24AA30B900BCE9B2 /* vp9_job_queue.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_job_queue.c; sourceTree = "<group>"; };
+		414036DC24AA30BA00BCE9B2 /* vp9_decoder.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_decoder.c; sourceTree = "<group>"; };
+		414036DD24AA30BA00BCE9B2 /* vp9_dsubexp.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_dsubexp.h; sourceTree = "<group>"; };
+		414036DE24AA30BA00BCE9B2 /* vp9_decodeframe.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_decodeframe.c; sourceTree = "<group>"; };
+		414036DF24AA30BA00BCE9B2 /* vp9_dsubexp.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_dsubexp.c; sourceTree = "<group>"; };
+		414036E024AA30BB00BCE9B2 /* vp9_detokenize.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_detokenize.h; sourceTree = "<group>"; };
+		414036E124AA30BB00BCE9B2 /* vp9_detokenize.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_detokenize.c; sourceTree = "<group>"; };
+		414036E224AA30BB00BCE9B2 /* vp9_decodemv.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_decodemv.c; sourceTree = "<group>"; };
+		414036E324AA30BB00BCE9B2 /* vp9_decodemv.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_decodemv.h; sourceTree = "<group>"; };
+		414036E424AA30BC00BCE9B2 /* vp9_decodeframe.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_decodeframe.h; sourceTree = "<group>"; };
+		414036E524AA30BC00BCE9B2 /* vp9_decoder.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_decoder.h; sourceTree = "<group>"; };
+		414036F224AA30C800BCE9B2 /* vp9_enums.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_enums.h; sourceTree = "<group>"; };
+		414036F324AA30C900BCE9B2 /* vp9_blockd.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_blockd.c; sourceTree = "<group>"; };
+		414036F424AA30C900BCE9B2 /* vp9_entropy.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_entropy.h; sourceTree = "<group>"; };
+		414036F524AA30C900BCE9B2 /* vp9_loopfilter.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_loopfilter.h; sourceTree = "<group>"; };
+		414036F624AA30CA00BCE9B2 /* vp9_mvref_common.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_mvref_common.c; sourceTree = "<group>"; };
+		414036F724AA30CA00BCE9B2 /* vp9_scale.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_scale.h; sourceTree = "<group>"; };
+		414036F824AA30CA00BCE9B2 /* vp9_pred_common.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_pred_common.c; sourceTree = "<group>"; };
+		414036F924AA30CA00BCE9B2 /* vp9_mfqe.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_mfqe.c; sourceTree = "<group>"; };
+		414036FA24AA30CB00BCE9B2 /* vp9_tile_common.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_tile_common.h; sourceTree = "<group>"; };
+		414036FB24AA30CB00BCE9B2 /* vp9_alloccommon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_alloccommon.c; sourceTree = "<group>"; };
+		414036FC24AA30CB00BCE9B2 /* vp9_entropymv.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_entropymv.c; sourceTree = "<group>"; };
+		414036FD24AA30CB00BCE9B2 /* vp9_postproc.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_postproc.c; sourceTree = "<group>"; };
+		414036FE24AA30CC00BCE9B2 /* vp9_frame_buffers.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_frame_buffers.c; sourceTree = "<group>"; };
+		414036FF24AA30CC00BCE9B2 /* vp9_common_data.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_common_data.c; sourceTree = "<group>"; };
+		4140370024AA30CC00BCE9B2 /* vp9_idct.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_idct.h; sourceTree = "<group>"; };
+		4140370124AA30CC00BCE9B2 /* vp9_blockd.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_blockd.h; sourceTree = "<group>"; };
+		4140370224AA30CD00BCE9B2 /* vp9_common_data.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_common_data.h; sourceTree = "<group>"; };
+		4140370324AA30CD00BCE9B2 /* vp9_seg_common.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_seg_common.c; sourceTree = "<group>"; };
+		4140370424AA30CD00BCE9B2 /* vp9_rtcd.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_rtcd.c; sourceTree = "<group>"; };
+		4140370524AA30CE00BCE9B2 /* vp9_scan.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_scan.h; sourceTree = "<group>"; };
+		4140370624AA30CE00BCE9B2 /* vp9_onyxc_int.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_onyxc_int.h; sourceTree = "<group>"; };
+		4140370724AA30CE00BCE9B2 /* vp9_thread_common.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_thread_common.c; sourceTree = "<group>"; };
+		4140370824AA30CE00BCE9B2 /* vp9_idct.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_idct.c; sourceTree = "<group>"; };
+		4140370924AA30CE00BCE9B2 /* vp9_quant_common.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_quant_common.c; sourceTree = "<group>"; };
+		4140370A24AA30CF00BCE9B2 /* vp9_reconinter.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_reconinter.c; sourceTree = "<group>"; };
+		4140370B24AA30CF00BCE9B2 /* vp9_debugmodes.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_debugmodes.c; sourceTree = "<group>"; };
+		4140370C24AA30CF00BCE9B2 /* vp9_thread_common.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_thread_common.h; sourceTree = "<group>"; };
+		4140370D24AA30CF00BCE9B2 /* vp9_reconintra.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_reconintra.h; sourceTree = "<group>"; };
+		4140370E24AA30D000BCE9B2 /* vp9_scale.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_scale.c; sourceTree = "<group>"; };
+		4140370F24AA30D000BCE9B2 /* vp9_loopfilter.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_loopfilter.c; sourceTree = "<group>"; };
+		4140371024AA30D000BCE9B2 /* vp9_mv.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_mv.h; sourceTree = "<group>"; };
+		4140371124AA30D000BCE9B2 /* vp9_filter.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_filter.h; sourceTree = "<group>"; };
+		4140371224AA30D100BCE9B2 /* vp9_filter.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_filter.c; sourceTree = "<group>"; };
+		4140371324AA30D100BCE9B2 /* vp9_scan.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_scan.c; sourceTree = "<group>"; };
+		4140371524AA30D300BCE9B2 /* vp9_pred_common.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_pred_common.h; sourceTree = "<group>"; };
+		4140371624AA30D400BCE9B2 /* vp9_reconintra.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_reconintra.c; sourceTree = "<group>"; };
+		4140371724AA30D400BCE9B2 /* vp9_rtcd_defs.pl */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.script.perl; path = vp9_rtcd_defs.pl; sourceTree = "<group>"; };
+		4140371824AA30D500BCE9B2 /* vp9_postproc.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_postproc.h; sourceTree = "<group>"; };
+		4140371924AA30D500BCE9B2 /* vp9_tile_common.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_tile_common.c; sourceTree = "<group>"; };
+		4140371A24AA30D600BCE9B2 /* vp9_reconinter.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_reconinter.h; sourceTree = "<group>"; };
+		4140371B24AA30D600BCE9B2 /* vp9_seg_common.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_seg_common.h; sourceTree = "<group>"; };
+		4140371C24AA30D600BCE9B2 /* vp9_common.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_common.h; sourceTree = "<group>"; };
+		4140371D24AA30D600BCE9B2 /* vp9_entropy.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_entropy.c; sourceTree = "<group>"; };
+		4140371E24AA30D700BCE9B2 /* vp9_entropymode.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_entropymode.c; sourceTree = "<group>"; };
+		4140371F24AA30D700BCE9B2 /* vp9_mfqe.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_mfqe.h; sourceTree = "<group>"; };
+		4140372024AA30D700BCE9B2 /* vp9_frame_buffers.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_frame_buffers.h; sourceTree = "<group>"; };
+		4140372124AA30D700BCE9B2 /* vp9_entropymv.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_entropymv.h; sourceTree = "<group>"; };
+		4140372224AA30D800BCE9B2 /* vp9_mvref_common.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_mvref_common.h; sourceTree = "<group>"; };
+		4140372324AA30D800BCE9B2 /* vp9_entropymode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_entropymode.h; sourceTree = "<group>"; };
+		4140372424AA30D800BCE9B2 /* vp9_alloccommon.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_alloccommon.h; sourceTree = "<group>"; };
+		4140372524AA30D900BCE9B2 /* vp9_ppflags.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_ppflags.h; sourceTree = "<group>"; };
+		4140372624AA30D900BCE9B2 /* vp9_quant_common.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = vp9_quant_common.h; sourceTree = "<group>"; };
+		4140375B24AA311300BCE9B2 /* vp9_highbd_iht16x16_add_sse4.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_highbd_iht16x16_add_sse4.c; sourceTree = "<group>"; };
+		4140375C24AA311400BCE9B2 /* vp9_highbd_iht4x4_add_sse4.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_highbd_iht4x4_add_sse4.c; sourceTree = "<group>"; };
+		4140375D24AA311400BCE9B2 /* vp9_idct_intrin_sse2.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_idct_intrin_sse2.c; sourceTree = "<group>"; };
+		4140375E24AA311500BCE9B2 /* vp9_mfqe_sse2.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = vp9_mfqe_sse2.asm; sourceTree = "<group>"; };
+		4140375F24AA311500BCE9B2 /* vp9_highbd_iht8x8_add_sse4.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_highbd_iht8x8_add_sse4.c; sourceTree = "<group>"; };
+		4140376524AA311F00BCE9B2 /* vp9_highbd_iht16x16_add_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = vp9_highbd_iht16x16_add_neon.c; path = neon/vp9_highbd_iht16x16_add_neon.c; sourceTree = "<group>"; };
+		4140376624AA311F00BCE9B2 /* vp9_iht4x4_add_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = vp9_iht4x4_add_neon.c; path = neon/vp9_iht4x4_add_neon.c; sourceTree = "<group>"; };
+		4140376724AA312000BCE9B2 /* vp9_highbd_iht4x4_add_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = vp9_highbd_iht4x4_add_neon.c; path = neon/vp9_highbd_iht4x4_add_neon.c; sourceTree = "<group>"; };
+		4140376824AA312000BCE9B2 /* vp9_iht_neon.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = vp9_iht_neon.h; path = neon/vp9_iht_neon.h; sourceTree = "<group>"; };
+		4140376924AA312000BCE9B2 /* vp9_iht8x8_add_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = vp9_iht8x8_add_neon.c; path = neon/vp9_iht8x8_add_neon.c; sourceTree = "<group>"; };
+		4140376A24AA312100BCE9B2 /* vp9_highbd_iht8x8_add_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = vp9_highbd_iht8x8_add_neon.c; path = neon/vp9_highbd_iht8x8_add_neon.c; sourceTree = "<group>"; };
+		4140376B24AA312100BCE9B2 /* vp9_iht16x16_add_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = vp9_iht16x16_add_neon.c; path = neon/vp9_iht16x16_add_neon.c; sourceTree = "<group>"; };
+		4140377424AA32D800BCE9B2 /* vp9_quantize_avx2.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_quantize_avx2.c; sourceTree = "<group>"; };
+		4140377524AA32D900BCE9B2 /* vp9_error_avx2.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_error_avx2.c; sourceTree = "<group>"; };
+		4140377624AA32D900BCE9B2 /* vp9_quantize_ssse3_x86_64.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = vp9_quantize_ssse3_x86_64.asm; sourceTree = "<group>"; };
+		4140377724AA32D900BCE9B2 /* temporal_filter_sse4.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = temporal_filter_sse4.c; sourceTree = "<group>"; };
+		4140377824AA32D900BCE9B2 /* vp9_frame_scale_ssse3.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_frame_scale_ssse3.c; sourceTree = "<group>"; };
+		4140377924AA32DA00BCE9B2 /* vp9_highbd_block_error_intrin_sse2.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_highbd_block_error_intrin_sse2.c; sourceTree = "<group>"; };
+		4140377A24AA32DA00BCE9B2 /* highbd_temporal_filter_sse4.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = highbd_temporal_filter_sse4.c; sourceTree = "<group>"; };
+		4140377B24AA32DA00BCE9B2 /* vp9_denoiser_sse2.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_denoiser_sse2.c; sourceTree = "<group>"; };
+		4140377C24AA32DB00BCE9B2 /* temporal_filter_constants.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = temporal_filter_constants.h; sourceTree = "<group>"; };
+		4140377D24AA32DB00BCE9B2 /* vp9_error_sse2.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = vp9_error_sse2.asm; sourceTree = "<group>"; };
+		4140377E24AA32DB00BCE9B2 /* vp9_dct_intrin_sse2.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_dct_intrin_sse2.c; sourceTree = "<group>"; };
+		4140377F24AA32DB00BCE9B2 /* vp9_quantize_sse2.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_quantize_sse2.c; sourceTree = "<group>"; };
+		4140378024AA32DC00BCE9B2 /* vp9_dct_sse2.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = vp9_dct_sse2.asm; sourceTree = "<group>"; };
+		4140378124AA32DC00BCE9B2 /* vp9_diamond_search_sad_avx.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_diamond_search_sad_avx.c; sourceTree = "<group>"; };
+		4140379024AB2FB200BCE9B2 /* ethreading.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = ethreading.h; path = encoder/ethreading.h; sourceTree = "<group>"; };
+		4140379124AB2FB200BCE9B2 /* quantize.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = quantize.h; path = encoder/quantize.h; sourceTree = "<group>"; };
+		4140379224AB2FB200BCE9B2 /* segmentation.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = segmentation.h; path = encoder/segmentation.h; sourceTree = "<group>"; };
+		4140379324AB2FB300BCE9B2 /* ratectrl.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = ratectrl.h; path = encoder/ratectrl.h; sourceTree = "<group>"; };
+		4140379424AB2FB300BCE9B2 /* dct_value_cost.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = dct_value_cost.h; path = encoder/dct_value_cost.h; sourceTree = "<group>"; };
+		4140379524AB2FB300BCE9B2 /* boolhuff.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = boolhuff.h; path = encoder/boolhuff.h; sourceTree = "<group>"; };
+		4140379624AB2FB300BCE9B2 /* pickinter.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = pickinter.h; path = encoder/pickinter.h; sourceTree = "<group>"; };
+		4140379724AB2FB300BCE9B2 /* encodemb.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = encodemb.h; path = encoder/encodemb.h; sourceTree = "<group>"; };
+		4140379824AB2FB400BCE9B2 /* rdopt.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = rdopt.h; path = encoder/rdopt.h; sourceTree = "<group>"; };
+		4140379924AB2FB400BCE9B2 /* lookahead.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = lookahead.h; path = encoder/lookahead.h; sourceTree = "<group>"; };
+		4140379A24AB2FB400BCE9B2 /* temporal_filter.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = temporal_filter.h; path = encoder/temporal_filter.h; sourceTree = "<group>"; };
+		4140379B24AB2FB400BCE9B2 /* copy_c.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = copy_c.c; path = encoder/copy_c.c; sourceTree = "<group>"; };
+		4140379C24AB2FB500BCE9B2 /* modecosts.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = modecosts.h; path = encoder/modecosts.h; sourceTree = "<group>"; };
+		4140379D24AB2FB500BCE9B2 /* treewriter.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = treewriter.h; path = encoder/treewriter.h; sourceTree = "<group>"; };
+		4140379E24AB2FB500BCE9B2 /* mr_dissim.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = mr_dissim.h; path = encoder/mr_dissim.h; sourceTree = "<group>"; };
+		414037AF24AB359600BCE9B2 /* vp9_denoiser_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_denoiser_neon.c; sourceTree = "<group>"; };
+		414037B024AB359600BCE9B2 /* vp9_error_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_error_neon.c; sourceTree = "<group>"; };
+		414037B124AB359700BCE9B2 /* vp9_frame_scale_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_frame_scale_neon.c; sourceTree = "<group>"; };
+		414037B224AB359700BCE9B2 /* vp9_quantize_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp9_quantize_neon.c; sourceTree = "<group>"; };
+		414037B724AB35E100BCE9B2 /* sum_squares_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = sum_squares_neon.c; sourceTree = "<group>"; };
 		4140B8181E4E3383007409E6 /* audio_encoder_pcm.cc */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = audio_encoder_pcm.cc; path = g711/audio_encoder_pcm.cc; sourceTree = "<group>"; };
 		4140B8191E4E3383007409E6 /* audio_encoder_pcm.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = audio_encoder_pcm.h; path = g711/audio_encoder_pcm.h; sourceTree = "<group>"; };
 		4140B81A1E4E3383007409E6 /* audio_decoder_pcm.cc */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = audio_decoder_pcm.cc; path = g711/audio_decoder_pcm.cc; sourceTree = "<group>"; };
@@ -5209,7 +5621,6 @@
 		416731B0212E0428001280EB /* ethreading.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; name = ethreading.c; path = encoder/ethreading.c; sourceTree = "<group>"; };
 		416731B1212E0429001280EB /* tokenize.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; name = tokenize.c; path = encoder/tokenize.c; sourceTree = "<group>"; };
 		416731B2212E0429001280EB /* treewriter.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; name = treewriter.c; path = encoder/treewriter.c; sourceTree = "<group>"; };
-		416731D9212E045D001280EB /* encodeopt.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = encodeopt.asm; sourceTree = "<group>"; };
 		416731DA212E045D001280EB /* quantize_sse4.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = quantize_sse4.c; sourceTree = "<group>"; };
 		416731DB212E045D001280EB /* vp8_quantize_sse2.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp8_quantize_sse2.c; sourceTree = "<group>"; };
 		416731DC212E045D001280EB /* dct_sse2.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = dct_sse2.asm; sourceTree = "<group>"; };
@@ -5238,7 +5649,6 @@
 		416731FC212E048F001280EB /* entropy.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = entropy.c; sourceTree = "<group>"; };
 		416731FD212E048F001280EB /* mbpitch.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = mbpitch.c; sourceTree = "<group>"; };
 		416731FE212E048F001280EB /* findnearmv.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = findnearmv.c; sourceTree = "<group>"; };
-		416731FF212E048F001280EB /* copy_c.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = copy_c.c; sourceTree = "<group>"; };
 		41673200212E048F001280EB /* dequantize.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = dequantize.c; sourceTree = "<group>"; };
 		41673201212E0490001280EB /* alloccommon.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = alloccommon.c; sourceTree = "<group>"; };
 		41673202212E0490001280EB /* loopfilter_filters.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = loopfilter_filters.c; sourceTree = "<group>"; };
@@ -5496,7 +5906,6 @@
 		419100B62152ECD300A6F17B /* loopfilter_arm.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = loopfilter_arm.c; sourceTree = "<group>"; };
 		419100B82152ECDB00A6F17B /* dc_only_idct_add_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = dc_only_idct_add_neon.c; sourceTree = "<group>"; };
 		419100B92152ECDB00A6F17B /* vp8_loopfilter_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vp8_loopfilter_neon.c; sourceTree = "<group>"; };
-		419100BA2152ECDB00A6F17B /* idct_dequant_0_2x_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = idct_dequant_0_2x_neon.c; sourceTree = "<group>"; };
 		419100BB2152ECDC00A6F17B /* dequant_idct_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = dequant_idct_neon.c; sourceTree = "<group>"; };
 		419100BC2152ECDC00A6F17B /* copymem_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = copymem_neon.c; sourceTree = "<group>"; };
 		419100BD2152ECDC00A6F17B /* bilinearpredict_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = bilinearpredict_neon.c; sourceTree = "<group>"; };
@@ -5504,7 +5913,6 @@
 		419100BF2152ECDC00A6F17B /* shortidct4x4llm_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = shortidct4x4llm_neon.c; sourceTree = "<group>"; };
 		419100C02152ECDD00A6F17B /* loopfiltersimplehorizontaledge_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = loopfiltersimplehorizontaledge_neon.c; sourceTree = "<group>"; };
 		419100C12152ECDD00A6F17B /* iwalsh_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = iwalsh_neon.c; sourceTree = "<group>"; };
-		419100C22152ECDD00A6F17B /* idct_dequant_full_2x_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = idct_dequant_full_2x_neon.c; sourceTree = "<group>"; };
 		419100C32152ECDD00A6F17B /* mbloopfilter_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = mbloopfilter_neon.c; sourceTree = "<group>"; };
 		419100C42152ECDD00A6F17B /* loopfiltersimpleverticaledge_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = loopfiltersimpleverticaledge_neon.c; sourceTree = "<group>"; };
 		419100C52152ECDE00A6F17B /* sixtappredict_neon.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = sixtappredict_neon.c; sourceTree = "<group>"; };
@@ -5932,7 +6340,6 @@
 		41BAE3CB212E2D9000E22482 /* sad_avx2.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = sad_avx2.c; sourceTree = "<group>"; };
 		41BAE3CC212E2D9100E22482 /* avg_intrin_sse2.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = avg_intrin_sse2.c; sourceTree = "<group>"; };
 		41BAE3CD212E2D9100E22482 /* highbd_idct8x8_add_sse4.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = highbd_idct8x8_add_sse4.c; sourceTree = "<group>"; };
-		41BAE3CE212E2D9100E22482 /* vpx_asm_stubs.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = vpx_asm_stubs.c; sourceTree = "<group>"; };
 		41BAE3CF212E2D9100E22482 /* highbd_idct16x16_add_sse2.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = highbd_idct16x16_add_sse2.c; sourceTree = "<group>"; };
 		41BAE3D0212E2D9200E22482 /* fwd_txfm_sse2.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = fwd_txfm_sse2.c; sourceTree = "<group>"; };
 		41BAE3D1212E2D9200E22482 /* highbd_idct8x8_add_sse2.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = highbd_idct8x8_add_sse2.c; sourceTree = "<group>"; };
@@ -6025,11 +6432,9 @@
 		41CB09FA215C8D740097B8AA /* idctllm_mmx.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = idctllm_mmx.asm; sourceTree = "<group>"; };
 		41CB09FB215C8D740097B8AA /* idctllm_sse2.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = idctllm_sse2.asm; sourceTree = "<group>"; };
 		41CB09FC215C8D740097B8AA /* mfqe_sse2.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = mfqe_sse2.asm; sourceTree = "<group>"; };
-		41CB09FD215C8D750097B8AA /* copy_sse3.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = copy_sse3.asm; sourceTree = "<group>"; };
 		41CB09FE215C8D750097B8AA /* recon_sse2.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = recon_sse2.asm; sourceTree = "<group>"; };
 		41CB09FF215C8D750097B8AA /* dequantize_mmx.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = dequantize_mmx.asm; sourceTree = "<group>"; };
 		41CB0A00215C8D750097B8AA /* recon_mmx.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = recon_mmx.asm; sourceTree = "<group>"; };
-		41CB0A01215C8D750097B8AA /* copy_sse2.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = copy_sse2.asm; sourceTree = "<group>"; };
 		41CB0A02215C8D760097B8AA /* subpixel_sse2.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = subpixel_sse2.asm; sourceTree = "<group>"; };
 		41CB0A03215C8D760097B8AA /* iwalsh_sse2.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = iwalsh_sse2.asm; sourceTree = "<group>"; };
 		41CB0A12215C8DA40097B8AA /* fwd_txfm_impl_sse2.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = fwd_txfm_impl_sse2.h; sourceTree = "<group>"; };
@@ -6054,7 +6459,6 @@
 		41CB0A25215C8DAA0097B8AA /* convolve.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = convolve.h; sourceTree = "<group>"; };
 		41CB0A26215C8DAA0097B8AA /* highbd_inv_txfm_sse2.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = highbd_inv_txfm_sse2.h; sourceTree = "<group>"; };
 		41CB0A27215C8DAB0097B8AA /* bitdepth_conversion_avx2.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = bitdepth_conversion_avx2.h; sourceTree = "<group>"; };
-		41CB0A55215C8DDA0097B8AA /* emms.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = emms.asm; sourceTree = "<group>"; };
 		41CB0A58215C90500097B8AA /* subpixel_ssse3.asm */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.asm.asm; path = subpixel_ssse3.asm; sourceTree = "<group>"; };
 		41CBAF90212E037E00DE1E1D /* decodemv.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = decodemv.c; path = decoder/decodemv.c; sourceTree = "<group>"; };
 		41CBAF91212E037F00DE1E1D /* error_concealment.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = error_concealment.c; path = decoder/error_concealment.c; sourceTree = "<group>"; };
@@ -6205,7 +6609,6 @@
 		41EAF1BA212E2AAD009F73EC /* vpx_encoder.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vpx_encoder.c; sourceTree = "<group>"; };
 		41EAF1BB212E2AAD009F73EC /* vpx_decoder.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vpx_decoder.c; sourceTree = "<group>"; };
 		41EAF1BC212E2AAD009F73EC /* vpx_image.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = vpx_image.c; sourceTree = "<group>"; };
-		41EAF1BD212E2AAD009F73EC /* svc_encodeframe.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = svc_encodeframe.c; sourceTree = "<group>"; };
 		41EAF1CA212E2B69009F73EC /* psnr.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = psnr.c; sourceTree = "<group>"; };
 		41EAF1CB212E2B69009F73EC /* bitreader_buffer.c */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.c; path = bitreader_buffer.c; sourceTree = "<group>"; };
 		41EAF1CC212E2B69009F73EC /* fwd_txfm.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = fwd_txfm.h; sourceTree = "<group>"; };
@@ -7964,11 +8367,12 @@
 			isa = PBXGroup;
 			children = (
 				4105EBA7212E02D8008C0C20 /* vp8 */,
+				4140362424AA303E00BCE9B2 /* vp9 */,
 				4105EBA8212E02E9008C0C20 /* vpx */,
+				4105EBAC212E0319008C0C20 /* vpx_scale */,
 				4105EBA9212E02F3008C0C20 /* vpx_dsp */,
 				4105EBAA212E02FE008C0C20 /* vpx_mem */,
 				4105EBAB212E030B008C0C20 /* vpx_ports */,
-				4105EBAC212E0319008C0C20 /* vpx_scale */,
 				4105EBAD212E0327008C0C20 /* vpx_util */,
 			);
 			path = libvpx;
@@ -8035,6 +8439,7 @@
 		4105EBA8212E02E9008C0C20 /* vpx */ = {
 			isa = PBXGroup;
 			children = (
+				4140361524AA281700BCE9B2 /* vpx_scale */,
 				41EAF1B8212E2A93009F73EC /* src */,
 			);
 			path = vpx;
@@ -8086,10 +8491,11 @@
 		4105EBAB212E030B008C0C20 /* vpx_ports */ = {
 			isa = PBXGroup;
 			children = (
+				4140361D24AA2E6700BCE9B2 /* emms_mmx.c */,
 				41EED7BB2152EEC8000F2A16 /* arm.h */,
 				41EED7BC2152EEC8000F2A16 /* arm_cpudetect.c */,
-				41CB0A55215C8DDA0097B8AA /* emms.asm */,
 				41E59E9323ACB6520095A94B /* system_state.h */,
+				4140361624AA293F00BCE9B2 /* x86.h */,
 			);
 			path = vpx_ports;
 			sourceTree = "<group>";
@@ -8125,7 +8531,6 @@
 				416731EC212E048B001280EB /* blockd.c */,
 				414502072152E01B0033B4D3 /* blockd.h */,
 				416731F3212E048D001280EB /* context.c */,
-				416731FF212E048F001280EB /* copy_c.c */,
 				416731F7212E048E001280EB /* debugmodes.c */,
 				414502092152E01B0033B4D3 /* default_coef_probs.h */,
 				41673200212E048F001280EB /* dequantize.c */,
@@ -8169,8 +8574,11 @@
 				41CBAFA6212E03AD00DE1E1D /* bitstream.h */,
 				41EEFDA0212E03EC00E54E93 /* block.h */,
 				416731A6212E0425001280EB /* boolhuff.c */,
-				41EEFDA5212E03EE00E54E93 /* dct.c */,
+				4140379524AB2FB300BCE9B2 /* boolhuff.h */,
+				4140379B24AB2FB400BCE9B2 /* copy_c.c */,
+				4140379424AB2FB300BCE9B2 /* dct_value_cost.h */,
 				41EEFDAA212E03F000E54E93 /* dct_value_tokens.h */,
+				41EEFDA5212E03EE00E54E93 /* dct.c */,
 				41EEFDAC212E03F000E54E93 /* defaultcoefcounts.h */,
 				416731AF212E0428001280EB /* denoising.c */,
 				41CBAFA4212E03AC00DE1E1D /* denoising.h */,
@@ -8179,28 +8587,40 @@
 				41EEFDA6212E03EF00E54E93 /* encodeintra.c */,
 				41EEFDA9212E03F000E54E93 /* encodeintra.h */,
 				41EEFDA1212E03ED00E54E93 /* encodemb.c */,
+				4140379724AB2FB300BCE9B2 /* encodemb.h */,
 				41EEFD9E212E03EB00E54E93 /* encodemv.c */,
 				41EEFDAD212E03F100E54E93 /* encodemv.h */,
 				416731B0212E0428001280EB /* ethreading.c */,
+				4140379024AB2FB200BCE9B2 /* ethreading.h */,
 				41CBAFA9212E03AD00DE1E1D /* firstpass.c */,
 				41EEFDA4212E03EE00E54E93 /* firstpass.h */,
 				416731AC212E0427001280EB /* lookahead.c */,
+				4140379924AB2FB400BCE9B2 /* lookahead.h */,
 				416731AE212E0428001280EB /* mcomp.c */,
 				41CBAFA5212E03AC00DE1E1D /* mcomp.h */,
 				416731AB212E0427001280EB /* modecosts.c */,
+				4140379C24AB2FB500BCE9B2 /* modecosts.h */,
 				416731AA212E0426001280EB /* mr_dissim.c */,
+				4140379E24AB2FB500BCE9B2 /* mr_dissim.h */,
 				416731A8212E0426001280EB /* onyx_if.c */,
 				41EEFDA7212E03EF00E54E93 /* onyx_int.h */,
 				41CBAFAA212E03AD00DE1E1D /* pickinter.c */,
+				4140379624AB2FB300BCE9B2 /* pickinter.h */,
 				416731AD212E0427001280EB /* picklpf.c */,
 				41EEFD9F212E03EB00E54E93 /* picklpf.h */,
+				4140379124AB2FB200BCE9B2 /* quantize.h */,
 				416731A7212E0425001280EB /* ratectrl.c */,
+				4140379324AB2FB300BCE9B2 /* ratectrl.h */,
 				416731A5212E0425001280EB /* rdopt.c */,
+				4140379824AB2FB400BCE9B2 /* rdopt.h */,
 				416731A9212E0426001280EB /* segmentation.c */,
+				4140379224AB2FB200BCE9B2 /* segmentation.h */,
 				41EEFDAB212E03F000E54E93 /* temporal_filter.c */,
+				4140379A24AB2FB400BCE9B2 /* temporal_filter.h */,
 				416731B1212E0429001280EB /* tokenize.c */,
 				41EEFDA2212E03ED00E54E93 /* tokenize.h */,
 				416731B2212E0429001280EB /* treewriter.c */,
+				4140379D24AB2FB500BCE9B2 /* treewriter.h */,
 				41CBAFA8212E03AD00DE1E1D /* vp8_quantize.c */,
 			);
 			name = encoder;
@@ -8239,11 +8659,8 @@
 		410994F92147565500347814 /* x86 */ = {
 			isa = PBXGroup;
 			children = (
-				41CB0A01215C8D750097B8AA /* copy_sse2.asm */,
-				41CB09FD215C8D750097B8AA /* copy_sse3.asm */,
+				4140360724AA24FB00BCE9B2 /* bilinear_filter_sse2.c */,
 				41CB09FF215C8D750097B8AA /* dequantize_mmx.asm */,
-				41239AE6214756C700396F81 /* filter_x86.c */,
-				41239AE9214756C700396F81 /* filter_x86.h */,
 				41239AE7214756C700396F81 /* idct_blk_mmx.c */,
 				41239AEB214756C800396F81 /* idct_blk_sse2.c */,
 				41CB09FA215C8D740097B8AA /* idctllm_mmx.asm */,
@@ -8923,6 +9340,261 @@
 			path = video_frame_buffer;
 			sourceTree = "<group>";
 		};
+		4140361524AA281700BCE9B2 /* vpx_scale */ = {
+			isa = PBXGroup;
+			children = (
+			);
+			name = vpx_scale;
+			path = ..;
+			sourceTree = "<group>";
+		};
+		4140362424AA303E00BCE9B2 /* vp9 */ = {
+			isa = PBXGroup;
+			children = (
+				4140363724AA307700BCE9B2 /* common */,
+				4140363624AA307100BCE9B2 /* decoder */,
+				4140363524AA306900BCE9B2 /* encoder */,
+				4140362C24AA306600BCE9B2 /* simple_encode.cc */,
+				4140362624AA306400BCE9B2 /* simple_encode.h */,
+				4140362724AA306400BCE9B2 /* vp9_cx_iface.c */,
+				4140362924AA306500BCE9B2 /* vp9_cx_iface.h */,
+				4140362A24AA306500BCE9B2 /* vp9_dx_iface.c */,
+				4140362824AA306500BCE9B2 /* vp9_dx_iface.h */,
+				4140362524AA306400BCE9B2 /* vp9_iface_common.c */,
+				4140362B24AA306500BCE9B2 /* vp9_iface_common.h */,
+			);
+			path = vp9;
+			sourceTree = "<group>";
+		};
+		4140363524AA306900BCE9B2 /* encoder */ = {
+			isa = PBXGroup;
+			children = (
+				414037AE24AB358100BCE9B2 /* arm */,
+				4140377324AA32C400BCE9B2 /* x86 */,
+				4140367624AA30AF00BCE9B2 /* vp9_alt_ref_aq.c */,
+				4140364324AA30A200BCE9B2 /* vp9_alt_ref_aq.h */,
+				4140365724AA30A700BCE9B2 /* vp9_aq_360.c */,
+				4140365324AA30A600BCE9B2 /* vp9_aq_360.h */,
+				4140364B24AA30A400BCE9B2 /* vp9_aq_complexity.c */,
+				4140363824AA309F00BCE9B2 /* vp9_aq_complexity.h */,
+				4140364F24AA30A500BCE9B2 /* vp9_aq_cyclicrefresh.c */,
+				4140368524AA30B600BCE9B2 /* vp9_aq_cyclicrefresh.h */,
+				4140367724AA30AF00BCE9B2 /* vp9_aq_variance.c */,
+				4140366E24AA30AC00BCE9B2 /* vp9_aq_variance.h */,
+				4140368424AA30B500BCE9B2 /* vp9_bitstream.c */,
+				4140364724AA30A300BCE9B2 /* vp9_bitstream.h */,
+				4140365624AA30A700BCE9B2 /* vp9_block.h */,
+				4140367E24AA30B400BCE9B2 /* vp9_blockiness.c */,
+				4140364D24AA30A500BCE9B2 /* vp9_blockiness.h */,
+				4140365824AA30A700BCE9B2 /* vp9_context_tree.c */,
+				4140365924AA30A700BCE9B2 /* vp9_context_tree.h */,
+				4140363924AA309F00BCE9B2 /* vp9_cost.c */,
+				4140368624AA30B600BCE9B2 /* vp9_cost.h */,
+				4140364424AA30A200BCE9B2 /* vp9_dct.c */,
+				4140367224AA30AE00BCE9B2 /* vp9_denoiser.c */,
+				4140365124AA30A600BCE9B2 /* vp9_denoiser.h */,
+				4140367524AA30AE00BCE9B2 /* vp9_encodeframe.c */,
+				4140366524AA30AA00BCE9B2 /* vp9_encodeframe.h */,
+				4140364624AA30A300BCE9B2 /* vp9_encodemb.c */,
+				4140368724AA30B600BCE9B2 /* vp9_encodemb.h */,
+				4140367324AA30AE00BCE9B2 /* vp9_encodemv.c */,
+				4140366224AA30AA00BCE9B2 /* vp9_encodemv.h */,
+				4140367124AA30AD00BCE9B2 /* vp9_encoder.c */,
+				4140366B24AA30AC00BCE9B2 /* vp9_encoder.h */,
+				4140365A24AA30A800BCE9B2 /* vp9_ethread.c */,
+				4140365024AA30A500BCE9B2 /* vp9_ethread.h */,
+				4140367024AA30AD00BCE9B2 /* vp9_extend.c */,
+				4140364E24AA30A500BCE9B2 /* vp9_extend.h */,
+				4140368124AA30B400BCE9B2 /* vp9_firstpass.c */,
+				4140368324AA30B500BCE9B2 /* vp9_firstpass.h */,
+				4140368824AA30B600BCE9B2 /* vp9_frame_scale.c */,
+				4140366324AA30AA00BCE9B2 /* vp9_job_queue.h */,
+				4140366D24AA30AC00BCE9B2 /* vp9_lookahead.c */,
+				4140363D24AA30A000BCE9B2 /* vp9_lookahead.h */,
+				4140366F24AA30AD00BCE9B2 /* vp9_mbgraph.c */,
+				4140364224AA30A200BCE9B2 /* vp9_mbgraph.h */,
+				4140367D24AA30B300BCE9B2 /* vp9_mcomp.c */,
+				4140367424AA30AE00BCE9B2 /* vp9_mcomp.h */,
+				4140365D24AA30A900BCE9B2 /* vp9_multi_thread.c */,
+				4140363F24AA30A100BCE9B2 /* vp9_multi_thread.h */,
+				4140367F24AA30B400BCE9B2 /* vp9_noise_estimate.c */,
+				4140366724AA30AB00BCE9B2 /* vp9_noise_estimate.h */,
+				4140365B24AA30A800BCE9B2 /* vp9_non_greedy_mv.c */,
+				4140366824AA30AB00BCE9B2 /* vp9_non_greedy_mv.h */,
+				4140366624AA30AB00BCE9B2 /* vp9_partition_models.h */,
+				4140365F24AA30A900BCE9B2 /* vp9_picklpf.c */,
+				4140366924AA30AB00BCE9B2 /* vp9_picklpf.h */,
+				4140364124AA30A100BCE9B2 /* vp9_pickmode.c */,
+				4140363A24AA30A000BCE9B2 /* vp9_pickmode.h */,
+				4140365224AA30A600BCE9B2 /* vp9_quantize.c */,
+				4140366C24AA30AC00BCE9B2 /* vp9_quantize.h */,
+				4140363B24AA30A000BCE9B2 /* vp9_ratectrl.c */,
+				4140365424AA30A600BCE9B2 /* vp9_ratectrl.h */,
+				4140368224AA30B500BCE9B2 /* vp9_rd.c */,
+				4140365E24AA30A900BCE9B2 /* vp9_rd.h */,
+				4140366124AA30AA00BCE9B2 /* vp9_rdopt.c */,
+				4140366A24AA30AC00BCE9B2 /* vp9_rdopt.h */,
+				4140366424AA30AA00BCE9B2 /* vp9_resize.c */,
+				4140364024AA30A100BCE9B2 /* vp9_resize.h */,
+				4140364924AA30A300BCE9B2 /* vp9_segmentation.c */,
+				4140367B24AA30B300BCE9B2 /* vp9_segmentation.h */,
+				4140368024AA30B400BCE9B2 /* vp9_skin_detection.c */,
+				4140365C24AA30A800BCE9B2 /* vp9_skin_detection.h */,
+				4140364524AA30A200BCE9B2 /* vp9_speed_features.c */,
+				4140364824AA30A300BCE9B2 /* vp9_speed_features.h */,
+				4140364C24AA30A400BCE9B2 /* vp9_subexp.c */,
+				4140364A24AA30A400BCE9B2 /* vp9_subexp.h */,
+				4140367C24AA30B300BCE9B2 /* vp9_svc_layercontext.c */,
+				4140367824AA30AF00BCE9B2 /* vp9_svc_layercontext.h */,
+				4140363C24AA30A000BCE9B2 /* vp9_temporal_filter.c */,
+				4140363E24AA30A100BCE9B2 /* vp9_temporal_filter.h */,
+				4140367924AA30AF00BCE9B2 /* vp9_tokenize.c */,
+				4140365524AA30A700BCE9B2 /* vp9_tokenize.h */,
+				4140366024AA30A900BCE9B2 /* vp9_treewriter.c */,
+				4140367A24AA30AF00BCE9B2 /* vp9_treewriter.h */,
+			);
+			path = encoder;
+			sourceTree = "<group>";
+		};
+		4140363624AA307100BCE9B2 /* decoder */ = {
+			isa = PBXGroup;
+			children = (
+				414036DE24AA30BA00BCE9B2 /* vp9_decodeframe.c */,
+				414036E424AA30BC00BCE9B2 /* vp9_decodeframe.h */,
+				414036E224AA30BB00BCE9B2 /* vp9_decodemv.c */,
+				414036E324AA30BB00BCE9B2 /* vp9_decodemv.h */,
+				414036DC24AA30BA00BCE9B2 /* vp9_decoder.c */,
+				414036E524AA30BC00BCE9B2 /* vp9_decoder.h */,
+				414036E124AA30BB00BCE9B2 /* vp9_detokenize.c */,
+				414036E024AA30BB00BCE9B2 /* vp9_detokenize.h */,
+				414036DF24AA30BA00BCE9B2 /* vp9_dsubexp.c */,
+				414036DD24AA30BA00BCE9B2 /* vp9_dsubexp.h */,
+				414036DB24AA30B900BCE9B2 /* vp9_job_queue.c */,
+				414036DA24AA30B900BCE9B2 /* vp9_job_queue.h */,
+			);
+			path = decoder;
+			sourceTree = "<group>";
+		};
+		4140363724AA307700BCE9B2 /* common */ = {
+			isa = PBXGroup;
+			children = (
+				4140375A24AA30F300BCE9B2 /* x86 */,
+				4140371424AA30D300BCE9B2 /* arm */,
+				414036FB24AA30CB00BCE9B2 /* vp9_alloccommon.c */,
+				4140372424AA30D800BCE9B2 /* vp9_alloccommon.h */,
+				414036F324AA30C900BCE9B2 /* vp9_blockd.c */,
+				4140370124AA30CC00BCE9B2 /* vp9_blockd.h */,
+				414036FF24AA30CC00BCE9B2 /* vp9_common_data.c */,
+				4140370224AA30CD00BCE9B2 /* vp9_common_data.h */,
+				4140371C24AA30D600BCE9B2 /* vp9_common.h */,
+				4140370B24AA30CF00BCE9B2 /* vp9_debugmodes.c */,
+				4140371D24AA30D600BCE9B2 /* vp9_entropy.c */,
+				414036F424AA30C900BCE9B2 /* vp9_entropy.h */,
+				4140371E24AA30D700BCE9B2 /* vp9_entropymode.c */,
+				4140372324AA30D800BCE9B2 /* vp9_entropymode.h */,
+				414036FC24AA30CB00BCE9B2 /* vp9_entropymv.c */,
+				4140372124AA30D700BCE9B2 /* vp9_entropymv.h */,
+				414036F224AA30C800BCE9B2 /* vp9_enums.h */,
+				4140371224AA30D100BCE9B2 /* vp9_filter.c */,
+				4140371124AA30D000BCE9B2 /* vp9_filter.h */,
+				414036FE24AA30CC00BCE9B2 /* vp9_frame_buffers.c */,
+				4140372024AA30D700BCE9B2 /* vp9_frame_buffers.h */,
+				4140370824AA30CE00BCE9B2 /* vp9_idct.c */,
+				4140370024AA30CC00BCE9B2 /* vp9_idct.h */,
+				4140370F24AA30D000BCE9B2 /* vp9_loopfilter.c */,
+				414036F524AA30C900BCE9B2 /* vp9_loopfilter.h */,
+				414036F924AA30CA00BCE9B2 /* vp9_mfqe.c */,
+				4140371F24AA30D700BCE9B2 /* vp9_mfqe.h */,
+				4140371024AA30D000BCE9B2 /* vp9_mv.h */,
+				414036F624AA30CA00BCE9B2 /* vp9_mvref_common.c */,
+				4140372224AA30D800BCE9B2 /* vp9_mvref_common.h */,
+				4140370624AA30CE00BCE9B2 /* vp9_onyxc_int.h */,
+				414036FD24AA30CB00BCE9B2 /* vp9_postproc.c */,
+				4140371824AA30D500BCE9B2 /* vp9_postproc.h */,
+				4140372524AA30D900BCE9B2 /* vp9_ppflags.h */,
+				414036F824AA30CA00BCE9B2 /* vp9_pred_common.c */,
+				4140371524AA30D300BCE9B2 /* vp9_pred_common.h */,
+				4140370924AA30CE00BCE9B2 /* vp9_quant_common.c */,
+				4140372624AA30D900BCE9B2 /* vp9_quant_common.h */,
+				4140370A24AA30CF00BCE9B2 /* vp9_reconinter.c */,
+				4140371A24AA30D600BCE9B2 /* vp9_reconinter.h */,
+				4140371624AA30D400BCE9B2 /* vp9_reconintra.c */,
+				4140370D24AA30CF00BCE9B2 /* vp9_reconintra.h */,
+				4140371724AA30D400BCE9B2 /* vp9_rtcd_defs.pl */,
+				4140370424AA30CD00BCE9B2 /* vp9_rtcd.c */,
+				4140370E24AA30D000BCE9B2 /* vp9_scale.c */,
+				414036F724AA30CA00BCE9B2 /* vp9_scale.h */,
+				4140371324AA30D100BCE9B2 /* vp9_scan.c */,
+				4140370524AA30CE00BCE9B2 /* vp9_scan.h */,
+				4140370324AA30CD00BCE9B2 /* vp9_seg_common.c */,
+				4140371B24AA30D600BCE9B2 /* vp9_seg_common.h */,
+				4140370724AA30CE00BCE9B2 /* vp9_thread_common.c */,
+				4140370C24AA30CF00BCE9B2 /* vp9_thread_common.h */,
+				4140371924AA30D500BCE9B2 /* vp9_tile_common.c */,
+				414036FA24AA30CB00BCE9B2 /* vp9_tile_common.h */,
+			);
+			path = common;
+			sourceTree = "<group>";
+		};
+		4140371424AA30D300BCE9B2 /* arm */ = {
+			isa = PBXGroup;
+			children = (
+				4140376724AA312000BCE9B2 /* vp9_highbd_iht4x4_add_neon.c */,
+				4140376A24AA312100BCE9B2 /* vp9_highbd_iht8x8_add_neon.c */,
+				4140376524AA311F00BCE9B2 /* vp9_highbd_iht16x16_add_neon.c */,
+				4140376824AA312000BCE9B2 /* vp9_iht_neon.h */,
+				4140376624AA311F00BCE9B2 /* vp9_iht4x4_add_neon.c */,
+				4140376924AA312000BCE9B2 /* vp9_iht8x8_add_neon.c */,
+				4140376B24AA312100BCE9B2 /* vp9_iht16x16_add_neon.c */,
+			);
+			path = arm;
+			sourceTree = "<group>";
+		};
+		4140375A24AA30F300BCE9B2 /* x86 */ = {
+			isa = PBXGroup;
+			children = (
+				4140375C24AA311400BCE9B2 /* vp9_highbd_iht4x4_add_sse4.c */,
+				4140375F24AA311500BCE9B2 /* vp9_highbd_iht8x8_add_sse4.c */,
+				4140375B24AA311300BCE9B2 /* vp9_highbd_iht16x16_add_sse4.c */,
+				4140375D24AA311400BCE9B2 /* vp9_idct_intrin_sse2.c */,
+				4140375E24AA311500BCE9B2 /* vp9_mfqe_sse2.asm */,
+			);
+			path = x86;
+			sourceTree = "<group>";
+		};
+		4140377324AA32C400BCE9B2 /* x86 */ = {
+			isa = PBXGroup;
+			children = (
+				4140377A24AA32DA00BCE9B2 /* highbd_temporal_filter_sse4.c */,
+				4140377C24AA32DB00BCE9B2 /* temporal_filter_constants.h */,
+				4140377724AA32D900BCE9B2 /* temporal_filter_sse4.c */,
+				4140377E24AA32DB00BCE9B2 /* vp9_dct_intrin_sse2.c */,
+				4140378024AA32DC00BCE9B2 /* vp9_dct_sse2.asm */,
+				4140377B24AA32DA00BCE9B2 /* vp9_denoiser_sse2.c */,
+				4140378124AA32DC00BCE9B2 /* vp9_diamond_search_sad_avx.c */,
+				4140377524AA32D900BCE9B2 /* vp9_error_avx2.c */,
+				4140377D24AA32DB00BCE9B2 /* vp9_error_sse2.asm */,
+				4140377824AA32D900BCE9B2 /* vp9_frame_scale_ssse3.c */,
+				4140377924AA32DA00BCE9B2 /* vp9_highbd_block_error_intrin_sse2.c */,
+				4140377424AA32D800BCE9B2 /* vp9_quantize_avx2.c */,
+				4140377F24AA32DB00BCE9B2 /* vp9_quantize_sse2.c */,
+				4140377624AA32D900BCE9B2 /* vp9_quantize_ssse3_x86_64.asm */,
+			);
+			path = x86;
+			sourceTree = "<group>";
+		};
+		414037AE24AB358100BCE9B2 /* arm */ = {
+			isa = PBXGroup;
+			children = (
+				414037AF24AB359600BCE9B2 /* vp9_denoiser_neon.c */,
+				414037B024AB359600BCE9B2 /* vp9_error_neon.c */,
+				414037B124AB359700BCE9B2 /* vp9_frame_scale_neon.c */,
+				414037B224AB359700BCE9B2 /* vp9_quantize_neon.c */,
+			);
+			name = arm;
+			path = arm/neon;
+			sourceTree = "<group>";
+		};
 		4140B8161E4E335E007409E6 /* g711 */ = {
 			isa = PBXGroup;
 			children = (
@@ -9221,9 +9893,11 @@
 		416731D8212E044B001280EB /* x86 */ = {
 			isa = PBXGroup;
 			children = (
+				414035FF24AA248700BCE9B2 /* block_error_sse2.asm */,
+				414035FE24AA248700BCE9B2 /* copy_sse2.asm */,
+				4140360024AA248700BCE9B2 /* copy_sse3.asm */,
 				416731DC212E045D001280EB /* dct_sse2.asm */,
 				416731DE212E045E001280EB /* denoising_sse2.c */,
-				416731D9212E045D001280EB /* encodeopt.asm */,
 				416731E1212E045E001280EB /* fwalsh_sse2.asm */,
 				416731DA212E045D001280EB /* quantize_sse4.c */,
 				416731E0212E045E001280EB /* temporal_filter_apply_sse2.asm */,
@@ -9453,8 +10127,6 @@
 				419100BB2152ECDC00A6F17B /* dequant_idct_neon.c */,
 				419100C62152ECDE00A6F17B /* dequantizeb_neon.c */,
 				419100BE2152ECDC00A6F17B /* idct_blk_neon.c */,
-				419100BA2152ECDB00A6F17B /* idct_dequant_0_2x_neon.c */,
-				419100C22152ECDD00A6F17B /* idct_dequant_full_2x_neon.c */,
 				419100C12152ECDD00A6F17B /* iwalsh_neon.c */,
 				419100C02152ECDD00A6F17B /* loopfiltersimplehorizontaledge_neon.c */,
 				419100C42152ECDD00A6F17B /* loopfiltersimpleverticaledge_neon.c */,
@@ -9520,6 +10192,7 @@
 				419478712152ED3E00275257 /* subpel_variance_neon.c */,
 				4194787B2152ED4200275257 /* subtract_neon.c */,
 				419478842152ED4400275257 /* sum_neon.h */,
+				414037B724AB35E100BCE9B2 /* sum_squares_neon.c */,
 				419100EF2152ED1800A6F17B /* transpose_neon.h */,
 				4194787D2152ED4200275257 /* variance_neon.c */,
 				41953C052152ED6300136625 /* vpx_convolve8_avg_neon_asm.asm */,
@@ -10328,9 +11001,10 @@
 				41CB0A27215C8DAB0097B8AA /* bitdepth_conversion_avx2.h */,
 				41CB0A15215C8DA40097B8AA /* bitdepth_conversion_sse2.asm */,
 				41CB0A20215C8DA90097B8AA /* bitdepth_conversion_sse2.h */,
-				41CB0A25215C8DAA0097B8AA /* convolve.h */,
 				41CB0A22215C8DA90097B8AA /* convolve_avx2.h */,
+				4140360C24AA253900BCE9B2 /* convolve_sse2.h */,
 				41CB0A19215C8DA50097B8AA /* convolve_ssse3.h */,
+				41CB0A25215C8DAA0097B8AA /* convolve.h */,
 				41C62967212E3653002313D4 /* deblock_sse2.asm */,
 				41CB0A1C215C8DA60097B8AA /* fwd_dct32x32_impl_avx2.h */,
 				41CB0A21215C8DA90097B8AA /* fwd_dct32x32_impl_sse2.h */,
@@ -10340,14 +11014,14 @@
 				41CB0A1D215C8DA60097B8AA /* fwd_txfm_sse2.h */,
 				41C62972212E3656002313D4 /* fwd_txfm_ssse3_x86_64.asm */,
 				41BAE3D9212E2D9300E22482 /* highbd_convolve_avx2.c */,
-				41BAE3CF212E2D9100E22482 /* highbd_idct16x16_add_sse2.c */,
-				41BAE3D6212E2D9300E22482 /* highbd_idct16x16_add_sse4.c */,
-				41C6290E212E2DE2002313D4 /* highbd_idct32x32_add_sse2.c */,
-				41C62918212E2DE7002313D4 /* highbd_idct32x32_add_sse4.c */,
 				41C62910212E2DE3002313D4 /* highbd_idct4x4_add_sse2.c */,
 				41BAE3C7212E2D9000E22482 /* highbd_idct4x4_add_sse4.c */,
 				41BAE3D1212E2D9200E22482 /* highbd_idct8x8_add_sse2.c */,
 				41BAE3CD212E2D9100E22482 /* highbd_idct8x8_add_sse4.c */,
+				41BAE3CF212E2D9100E22482 /* highbd_idct16x16_add_sse2.c */,
+				41BAE3D6212E2D9300E22482 /* highbd_idct16x16_add_sse4.c */,
+				41C6290E212E2DE2002313D4 /* highbd_idct32x32_add_sse2.c */,
+				41C62918212E2DE7002313D4 /* highbd_idct32x32_add_sse4.c */,
 				41BAE3C6212E2D8F00E22482 /* highbd_intrapred_intrin_sse2.c */,
 				41BAE3D8212E2D9300E22482 /* highbd_intrapred_intrin_ssse3.c */,
 				41CB0A1A215C8DA60097B8AA /* highbd_intrapred_sse2.asm */,
@@ -10355,8 +11029,8 @@
 				41CB0A1E215C8DA60097B8AA /* highbd_inv_txfm_sse4.h */,
 				41BAE3C5212E2D8F00E22482 /* highbd_loopfilter_sse2.c */,
 				41C62912212E2DE4002313D4 /* highbd_quantize_intrin_sse2.c */,
-				41C6296C212E3654002313D4 /* highbd_sad4d_sse2.asm */,
 				41C62961212E3652002313D4 /* highbd_sad_sse2.asm */,
+				41C6296C212E3654002313D4 /* highbd_sad4d_sse2.asm */,
 				41C62964212E3653002313D4 /* highbd_subpel_variance_impl_sse2.asm */,
 				41C62966212E3653002313D4 /* highbd_variance_impl_sse2.asm */,
 				41C62916212E2DE6002313D4 /* highbd_variance_sse2.c */,
@@ -10370,18 +11044,20 @@
 				41BAE3D7212E2D9300E22482 /* loopfilter_avx2.c */,
 				41BAE3CA212E2D9000E22482 /* loopfilter_sse2.c */,
 				41CB0A13215C8DA40097B8AA /* mem_sse2.h */,
+				4140360D24AA253900BCE9B2 /* post_proc_sse2.c */,
 				41BAE3D2212E2D9200E22482 /* quantize_avx.c */,
 				41C6290F212E2DE3002313D4 /* quantize_sse2.c */,
+				4140360B24AA253900BCE9B2 /* quantize_sse2.h */,
 				41C62915212E2DE5002313D4 /* quantize_ssse3.c */,
 				41CB0A1B215C8DA60097B8AA /* quantize_x86.h */,
-				41C62913212E2DE4002313D4 /* sad4d_avx2.c */,
-				41BAE3D3212E2D9200E22482 /* sad4d_avx512.c */,
-				41C62969212E3654002313D4 /* sad4d_sse2.asm */,
 				41BAE3CB212E2D9000E22482 /* sad_avx2.c */,
 				41C6296E212E3655002313D4 /* sad_sse2.asm */,
 				41CB0A14215C8DA40097B8AA /* sad_sse3.asm */,
 				41C62968212E3654002313D4 /* sad_sse4.asm */,
 				41C6295E212E3652002313D4 /* sad_ssse3.asm */,
+				41C62913212E2DE4002313D4 /* sad4d_avx2.c */,
+				41BAE3D3212E2D9200E22482 /* sad4d_avx512.c */,
+				41C62969212E3654002313D4 /* sad4d_sse2.asm */,
 				41C6295F212E3652002313D4 /* ssim_opt_x86_64.asm */,
 				41C62973212E3656002313D4 /* subpel_variance_sse2.asm */,
 				41C62963212E3653002313D4 /* subtract_sse2.asm */,
@@ -10390,10 +11066,10 @@
 				41CB0A16215C8DA50097B8AA /* txfm_common_sse2.h */,
 				41C6291B212E2DE8002313D4 /* variance_avx2.c */,
 				41C62914212E2DE5002313D4 /* variance_sse2.c */,
-				41BAE3CE212E2D9100E22482 /* vpx_asm_stubs.c */,
 				41C62962212E3652002313D4 /* vpx_convolve_copy_sse2.asm */,
 				41C62965212E3653002313D4 /* vpx_high_subpixel_8t_sse2.asm */,
 				41CB0A24215C8DAA0097B8AA /* vpx_high_subpixel_bilinear_sse2.asm */,
+				4140360A24AA253800BCE9B2 /* vpx_subpixel_4t_intrin_sse2.c */,
 				41BAE3C8212E2D9000E22482 /* vpx_subpixel_8t_intrin_avx2.c */,
 				41BAE3C9212E2D9000E22482 /* vpx_subpixel_8t_intrin_ssse3.c */,
 				41C62971212E3656002313D4 /* vpx_subpixel_8t_sse2.asm */,
@@ -10718,7 +11394,6 @@
 		41EAF1B8212E2A93009F73EC /* src */ = {
 			isa = PBXGroup;
 			children = (
-				41EAF1BD212E2AAD009F73EC /* svc_encodeframe.c */,
 				41EAF1B9212E2AAD009F73EC /* vpx_codec.c */,
 				41EAF1BB212E2AAD009F73EC /* vpx_decoder.c */,
 				41EAF1BA212E2AAD009F73EC /* vpx_encoder.c */,
@@ -13833,6 +14508,11 @@
 				413E67922169881900EF37ED /* svc_rate_allocator.cc */,
 				413E67932169881900EF37ED /* svc_rate_allocator.h */,
 				5CDD8C131E43C3B400621E92 /* vp9_noop.cc */,
+				414035F824AA1F5500BCE9B2 /* vp9_frame_buffer_pool.cc */,
+				414035F524AA1F5400BCE9B2 /* vp9_frame_buffer_pool.h */,
+				414035F624AA1F5400BCE9B2 /* vp9_impl.cc */,
+				414035F724AA1F5400BCE9B2 /* vp9_impl.h */,
+				414035F424AA1F5400BCE9B2 /* vp9.cc */,
 			);
 			name = vp9;
 			sourceTree = "<group>";
@@ -14460,54 +15140,150 @@
 			isa = PBXHeadersBuildPhase;
 			buildActionMask = 2147483647;
 			files = (
+				414036F124AA30BC00BCE9B2 /* vp9_decoder.h in Headers */,
+				4140372924AA30D900BCE9B2 /* vp9_entropy.h in Headers */,
+				414036F024AA30BC00BCE9B2 /* vp9_decodeframe.h in Headers */,
 				41EED7BD2152EEC9000F2A16 /* arm.h in Headers */,
 				41CB0A36215C8DAB0097B8AA /* bitdepth_conversion_sse2.h in Headers */,
+				414036E624AA30BC00BCE9B2 /* vp9_job_queue.h in Headers */,
+				4140373B24AA30DA00BCE9B2 /* vp9_onyxc_int.h in Headers */,
 				416731D2212E0430001280EB /* bitstream.h in Headers */,
+				414036EC24AA30BC00BCE9B2 /* vp9_detokenize.h in Headers */,
+				414036A624AA30B700BCE9B2 /* vp9_tokenize.h in Headers */,
+				414037AB24AB2FB500BCE9B2 /* modecosts.h in Headers */,
+				4140362E24AA306600BCE9B2 /* simple_encode.h in Headers */,
+				4140369024AA30B600BCE9B2 /* vp9_multi_thread.h in Headers */,
 				416731C1212E0430001280EB /* block.h in Headers */,
+				414037A424AB2FB500BCE9B2 /* boolhuff.h in Headers */,
+				4140376F24AA312100BCE9B2 /* vp9_iht_neon.h in Headers */,
 				41BA04E22165992800B527E0 /* blockd.h in Headers */,
+				414036BC24AA30B700BCE9B2 /* vp9_encoder.h in Headers */,
+				4140368B24AA30B600BCE9B2 /* vp9_pickmode.h in Headers */,
 				41CB0A3B215C8DAB0097B8AA /* convolve.h in Headers */,
+				414036D624AA30B700BCE9B2 /* vp9_aq_cyclicrefresh.h in Headers */,
 				41CB0A2F215C8DAB0097B8AA /* convolve_ssse3.h in Headers */,
+				4140372724AA30D900BCE9B2 /* vp9_enums.h in Headers */,
 				416731C2212E0430001280EB /* dct_value_tokens.h in Headers */,
+				414037AC24AB2FB500BCE9B2 /* treewriter.h in Headers */,
+				4140363024AA306600BCE9B2 /* vp9_dx_iface.h in Headers */,
+				414037A924AB2FB500BCE9B2 /* temporal_filter.h in Headers */,
 				41CBAF97212E039300DE1E1D /* decoderthreading.h in Headers */,
+				414036D824AA30B700BCE9B2 /* vp9_encodemb.h in Headers */,
 				41BA04E32165992800B527E0 /* default_coef_probs.h in Headers */,
+				4140369424AA30B600BCE9B2 /* vp9_alt_ref_aq.h in Headers */,
+				4140363324AA306600BCE9B2 /* vp9_iface_common.h in Headers */,
+				414036C924AA30B700BCE9B2 /* vp9_svc_layercontext.h in Headers */,
+				4140369B24AA30B600BCE9B2 /* vp9_subexp.h in Headers */,
+				4140375724AA30DA00BCE9B2 /* vp9_alloccommon.h in Headers */,
+				414036BD24AA30B700BCE9B2 /* vp9_quantize.h in Headers */,
 				416731C4212E0430001280EB /* defaultcoefcounts.h in Headers */,
 				416731D3212E0430001280EB /* denoising.h in Headers */,
+				4140373A24AA30D900BCE9B2 /* vp9_scan.h in Headers */,
+				414036BB24AA30B700BCE9B2 /* vp9_rdopt.h in Headers */,
+				4140378A24AA32DC00BCE9B2 /* temporal_filter_constants.h in Headers */,
 				416731C6212E0430001280EB /* encodeframe.h in Headers */,
+				4140373724AA30D900BCE9B2 /* vp9_common_data.h in Headers */,
+				4140369824AA30B600BCE9B2 /* vp9_bitstream.h in Headers */,
+				414036E924AA30BC00BCE9B2 /* vp9_dsubexp.h in Headers */,
+				4140375524AA30DA00BCE9B2 /* vp9_mvref_common.h in Headers */,
 				416731C8212E0430001280EB /* encodeintra.h in Headers */,
+				4140374124AA30DA00BCE9B2 /* vp9_thread_common.h in Headers */,
+				414036B724AA30B700BCE9B2 /* vp9_partition_models.h in Headers */,
+				4140374624AA30DA00BCE9B2 /* vp9_filter.h in Headers */,
+				414036A524AA30B700BCE9B2 /* vp9_ratectrl.h in Headers */,
 				416731CB212E0430001280EB /* encodemv.h in Headers */,
-				41659C092165975700CCBDC2 /* filter_x86.h in Headers */,
+				414036A224AA30B600BCE9B2 /* vp9_denoiser.h in Headers */,
+				4140368E24AA30B600BCE9B2 /* vp9_lookahead.h in Headers */,
 				416731CC212E0430001280EB /* firstpass.h in Headers */,
+				414036B624AA30B700BCE9B2 /* vp9_encodeframe.h in Headers */,
+				414036D424AA30B700BCE9B2 /* vp9_firstpass.h in Headers */,
 				41CB0A37215C8DAB0097B8AA /* fwd_dct32x32_impl_sse2.h in Headers */,
+				414036CC24AA30B700BCE9B2 /* vp9_segmentation.h in Headers */,
+				4140374D24AA30DA00BCE9B2 /* vp9_reconinter.h in Headers */,
+				414036B324AA30B700BCE9B2 /* vp9_encodemv.h in Headers */,
+				414037A024AB2FB500BCE9B2 /* quantize.h in Headers */,
+				4140375924AA30DA00BCE9B2 /* vp9_quant_common.h in Headers */,
+				414036D724AA30B700BCE9B2 /* vp9_cost.h in Headers */,
 				41330A2A212E2BDF00280939 /* fwd_txfm.h in Headers */,
+				414036A424AA30B600BCE9B2 /* vp9_aq_360.h in Headers */,
+				4140361024AA253A00BCE9B2 /* convolve_sse2.h in Headers */,
+				4140360F24AA253A00BCE9B2 /* quantize_sse2.h in Headers */,
 				41CB0A28215C8DAB0097B8AA /* fwd_txfm_impl_sse2.h in Headers */,
+				4140374524AA30DA00BCE9B2 /* vp9_mv.h in Headers */,
 				41CB0A33215C8DAB0097B8AA /* fwd_txfm_sse2.h in Headers */,
 				41CB0A3C215C8DAB0097B8AA /* highbd_inv_txfm_sse2.h in Headers */,
+				414037A624AB2FB500BCE9B2 /* encodemb.h in Headers */,
 				41CB0A34215C8DAB0097B8AA /* highbd_inv_txfm_sse4.h in Headers */,
+				414036BA24AA30B700BCE9B2 /* vp9_picklpf.h in Headers */,
+				4140369124AA30B600BCE9B2 /* vp9_resize.h in Headers */,
+				4140374F24AA30DA00BCE9B2 /* vp9_common.h in Headers */,
+				414036BF24AA30B700BCE9B2 /* vp9_aq_variance.h in Headers */,
 				41EED7942152ED8E000F2A16 /* idct_neon.h in Headers */,
+				4140372A24AA30D900BCE9B2 /* vp9_loopfilter.h in Headers */,
+				4140379F24AB2FB500BCE9B2 /* ethreading.h in Headers */,
+				4140363124AA306600BCE9B2 /* vp9_cx_iface.h in Headers */,
+				4140375624AA30DA00BCE9B2 /* vp9_entropymode.h in Headers */,
+				4140361724AA294000BCE9B2 /* x86.h in Headers */,
 				41CB0A39215C8DAB0097B8AA /* inv_txfm_sse2.h in Headers */,
+				414036EF24AA30BC00BCE9B2 /* vp9_decodemv.h in Headers */,
+				414036CB24AA30B700BCE9B2 /* vp9_treewriter.h in Headers */,
+				414036AF24AA30B700BCE9B2 /* vp9_rd.h in Headers */,
+				414036AA24AA30B700BCE9B2 /* vp9_context_tree.h in Headers */,
+				414036A124AA30B600BCE9B2 /* vp9_ethread.h in Headers */,
 				41CB0A35215C8DAB0097B8AA /* inv_txfm_ssse3.h in Headers */,
+				414037A124AB2FB500BCE9B2 /* segmentation.h in Headers */,
+				414037A824AB2FB500BCE9B2 /* lookahead.h in Headers */,
+				414037A224AB2FB500BCE9B2 /* ratectrl.h in Headers */,
+				414036B924AA30B700BCE9B2 /* vp9_non_greedy_mv.h in Headers */,
+				4140374E24AA30DA00BCE9B2 /* vp9_seg_common.h in Headers */,
 				416731D5212E0430001280EB /* mcomp.h in Headers */,
 				41EED7A72152ED8E000F2A16 /* mem_neon.h in Headers */,
 				41CB0A29215C8DAB0097B8AA /* mem_sse2.h in Headers */,
 				416731CD212E0430001280EB /* onyx_int.h in Headers */,
+				4140375824AA30DA00BCE9B2 /* vp9_ppflags.h in Headers */,
+				4140375324AA30DA00BCE9B2 /* vp9_frame_buffers.h in Headers */,
 				41BA04E42165992800B527E0 /* onyxd.h in Headers */,
 				416731CE212E0430001280EB /* picklpf.h in Headers */,
+				4140369E24AA30B600BCE9B2 /* vp9_blockiness.h in Headers */,
 				41CB0A31215C8DAB0097B8AA /* quantize_x86.h in Headers */,
 				41EED7AE2152ED8E000F2A16 /* sum_neon.h in Headers */,
+				4140372C24AA30D900BCE9B2 /* vp9_scale.h in Headers */,
 				41BA04E62165992800B527E0 /* threading.h in Headers */,
+				4140374924AA30DA00BCE9B2 /* vp9_pred_common.h in Headers */,
+				4140375224AA30DA00BCE9B2 /* vp9_mfqe.h in Headers */,
 				416731D0212E0430001280EB /* tokenize.h in Headers */,
+				4140372F24AA30D900BCE9B2 /* vp9_tile_common.h in Headers */,
+				4140369324AA30B600BCE9B2 /* vp9_mbgraph.h in Headers */,
+				414036B824AA30B700BCE9B2 /* vp9_noise_estimate.h in Headers */,
 				41EED7AF2152ED8E000F2A16 /* transpose_neon.h in Headers */,
 				41CB0A2E215C8DAB0097B8AA /* transpose_sse2.h in Headers */,
+				4140375424AA30DA00BCE9B2 /* vp9_entropymv.h in Headers */,
 				41CBAF9C212E039300DE1E1D /* treereader.h in Headers */,
+				4140374224AA30DA00BCE9B2 /* vp9_reconintra.h in Headers */,
+				414037A524AB2FB500BCE9B2 /* pickinter.h in Headers */,
+				414037A724AB2FB500BCE9B2 /* rdopt.h in Headers */,
 				41CB0A2C215C8DAB0097B8AA /* txfm_common_sse2.h in Headers */,
+				414036A724AA30B700BCE9B2 /* vp9_block.h in Headers */,
 				41BA04E72165992800B527E0 /* vp8_entropymodedata.h in Headers */,
+				4140374B24AA30DA00BCE9B2 /* vp9_postproc.h in Headers */,
 				41C62953212E2FA6002313D4 /* vp8_rtcd.h in Headers */,
+				414037A324AB2FB500BCE9B2 /* dct_value_cost.h in Headers */,
+				414036AD24AA30B700BCE9B2 /* vp9_skin_detection.h in Headers */,
+				4140369924AA30B600BCE9B2 /* vp9_speed_features.h in Headers */,
+				414036B424AA30B700BCE9B2 /* vp9_job_queue.h in Headers */,
 				41BA04E82165992800B527E0 /* vp8_skin_detection.h in Headers */,
+				4140369F24AA30B600BCE9B2 /* vp9_extend.h in Headers */,
 				41EED7B92152ED8E000F2A16 /* vpx_convolve8_neon.h in Headers */,
 				41330A31212E2BF500280939 /* vpx_mem.h in Headers */,
 				41330A35212E2C1F00280939 /* vpx_scale.h in Headers */,
+				414037AD24AB2FB500BCE9B2 /* mr_dissim.h in Headers */,
+				414036C524AA30B700BCE9B2 /* vp9_mcomp.h in Headers */,
 				41BAE3C1212E2C5B00E22482 /* vpx_thread.h in Headers */,
 				41BAE3C3212E2C5B00E22482 /* vpx_write_yuv_frame.h in Headers */,
+				4140373524AA30D900BCE9B2 /* vp9_idct.h in Headers */,
+				4140368F24AA30B600BCE9B2 /* vp9_temporal_filter.h in Headers */,
+				4140368924AA30B600BCE9B2 /* vp9_aq_complexity.h in Headers */,
+				4140373624AA30D900BCE9B2 /* vp9_blockd.h in Headers */,
 			);
 			runOnlyForDeploymentPostprocessing = 0;
 		};
@@ -14895,6 +15671,7 @@
 				41BE71AE215C463700A7B196 /* bytecode.h in Headers */,
 				5CD284B81E6A5F9F0094FDC8 /* call.h in Headers */,
 				41D6B45421273159008F9353 /* call_config.h in Headers */,
+				414035FC24AA1F5500BCE9B2 /* vp9_impl.h in Headers */,
 				4131C478234C81EA0028A615 /* call_factory.h in Headers */,
 				5CDD85131E43B1EA00621E92 /* call_statistics.h in Headers */,
 				5CDD859D1E43B5C000621E92 /* call_stats.h in Headers */,
@@ -15308,6 +16085,7 @@
 				4131BFA9234B88A60028A615 /* media_config.h in Headers */,
 				4131BF8D234B88A60028A615 /* media_constants.h in Headers */,
 				4131BF9D234B88A60028A615 /* media_engine.h in Headers */,
+				414035FA24AA1F5500BCE9B2 /* vp9_frame_buffer_pool.h in Headers */,
 				5CDD83E61E439A6F00621E92 /* media_opt_util.h in Headers */,
 				4131BF45234B88200028A615 /* media_protocol_names.h in Headers */,
 				4131BF21234B88200028A615 /* media_session.h in Headers */,
@@ -16419,17 +17197,25 @@
 			isa = PBXSourcesBuildPhase;
 			buildActionMask = 2147483647;
 			files = (
+				4140361B24AA2D9100BCE9B2 /* sad.c in Sources */,
 				41330A13212E2BDF00280939 /* add_noise.c in Sources */,
+				4140375124AA30DA00BCE9B2 /* vp9_entropymode.c in Sources */,
 				41CB0A3E215C8DC90097B8AA /* add_noise_sse2.asm in Sources */,
 				41C62937212E2F1E002313D4 /* alloccommon.c in Sources */,
+				414036A324AA30B600BCE9B2 /* vp9_quantize.c in Sources */,
+				414036B024AA30B700BCE9B2 /* vp9_picklpf.c in Sources */,
 				41EED7BE2152EEC9000F2A16 /* arm_cpudetect.c in Sources */,
+				4140378624AA32DC00BCE9B2 /* vp9_frame_scale_ssse3.c in Sources */,
 				41330A14212E2BDF00280939 /* avg.c in Sources */,
 				41C628FA212E2DB0002313D4 /* avg_intrin_sse2.c in Sources */,
 				41EED77C2152ED8E000F2A16 /* avg_neon.c in Sources */,
 				41EED77D2152ED8E000F2A16 /* avg_pred_neon.c in Sources */,
 				41C6291F212E2DE9002313D4 /* avg_pred_sse2.c in Sources */,
 				41CB0A2D215C8DAB0097B8AA /* avg_ssse3_x86_64.asm in Sources */,
+				4140374424AA30DA00BCE9B2 /* vp9_loopfilter.c in Sources */,
+				414036D324AA30B700BCE9B2 /* vp9_rd.c in Sources */,
 				419100D72152ECE700A6F17B /* bilinearpredict_neon.c in Sources */,
+				4140372B24AA30D900BCE9B2 /* vp9_mvref_common.c in Sources */,
 				41CB0A2B215C8DAB0097B8AA /* bitdepth_conversion_sse2.asm in Sources */,
 				41330A15212E2BDF00280939 /* bitreader.c in Sources */,
 				41330A28212E2BDF00280939 /* bitreader_buffer.c in Sources */,
@@ -16437,88 +17223,132 @@
 				41330A16212E2BDF00280939 /* bitwriter.c in Sources */,
 				41330A29212E2BDF00280939 /* bitwriter_buffer.c in Sources */,
 				41C62938212E2F1E002313D4 /* blockd.c in Sources */,
+				414036ED24AA30BC00BCE9B2 /* vp9_detokenize.c in Sources */,
 				416731B3212E0430001280EB /* boolhuff.c in Sources */,
 				41C62939212E2F1E002313D4 /* context.c in Sources */,
-				41C6293A212E2F1E002313D4 /* copy_c.c in Sources */,
-				41CB0A0E215C8D760097B8AA /* copy_sse2.asm in Sources */,
-				41CB0A0A215C8D760097B8AA /* copy_sse3.asm in Sources */,
 				419100D82152ECE700A6F17B /* copymem_neon.c in Sources */,
+				414036B524AA30B700BCE9B2 /* vp9_resize.c in Sources */,
 				41CBAF94212E039300DE1E1D /* dboolhuff.c in Sources */,
+				414036C824AA30B700BCE9B2 /* vp9_aq_variance.c in Sources */,
 				419100D92152ECE700A6F17B /* dc_only_idct_add_neon.c in Sources */,
 				416731C3212E0430001280EB /* dct.c in Sources */,
+				4140362D24AA306600BCE9B2 /* vp9_iface_common.c in Sources */,
 				41CB0A5A215C90750097B8AA /* dct_sse2.asm in Sources */,
 				41330A17212E2BDF00280939 /* deblock.c in Sources */,
 				41EED77E2152ED8E000F2A16 /* deblock_neon.c in Sources */,
 				41CB0A3F215C8DC90097B8AA /* deblock_sse2.asm in Sources */,
 				41C6293B212E2F1E002313D4 /* debugmodes.c in Sources */,
+				4140362F24AA306600BCE9B2 /* vp9_cx_iface.c in Sources */,
+				4140378424AA32DC00BCE9B2 /* vp9_quantize_ssse3_x86_64.asm in Sources */,
+				4140373024AA30D900BCE9B2 /* vp9_alloccommon.c in Sources */,
 				41CBAF95212E039300DE1E1D /* decodeframe.c in Sources */,
 				41CBAF96212E039300DE1E1D /* decodemv.c in Sources */,
 				416731B4212E0430001280EB /* denoising.c in Sources */,
+				4140369C24AA30B600BCE9B2 /* vp9_aq_complexity.c in Sources */,
+				4140369A24AA30B600BCE9B2 /* vp9_segmentation.c in Sources */,
+				4140374C24AA30DA00BCE9B2 /* vp9_tile_common.c in Sources */,
 				419100B02152EC9000A6F17B /* denoising_neon.c in Sources */,
 				41BA04C3216598C700B527E0 /* denoising_sse2.c in Sources */,
+				4140377024AA312100BCE9B2 /* vp9_iht8x8_add_neon.c in Sources */,
 				419100DA2152ECE700A6F17B /* dequant_idct_neon.c in Sources */,
+				4140374A24AA30DA00BCE9B2 /* vp9_reconintra.c in Sources */,
+				4140374324AA30DA00BCE9B2 /* vp9_scale.c in Sources */,
+				414036A924AA30B700BCE9B2 /* vp9_context_tree.c in Sources */,
+				414037B424AB359700BCE9B2 /* vp9_error_neon.c in Sources */,
 				41C6293C212E2F1E002313D4 /* dequantize.c in Sources */,
 				41CB0A0C215C8D760097B8AA /* dequantize_mmx.asm in Sources */,
 				419100DB2152ECE700A6F17B /* dequantizeb_neon.c in Sources */,
 				41CBAF98212E039300DE1E1D /* detokenize.c in Sources */,
-				41CB0A56215C8DDA0097B8AA /* emms.asm in Sources */,
+				4140378F24AA32DC00BCE9B2 /* vp9_diamond_search_sad_avx.c in Sources */,
+				4140378B24AA32DC00BCE9B2 /* vp9_error_sse2.asm in Sources */,
+				4140378524AA32DC00BCE9B2 /* temporal_filter_sse4.c in Sources */,
+				414036B224AA30B700BCE9B2 /* vp9_rdopt.c in Sources */,
+				4140373124AA30D900BCE9B2 /* vp9_entropymv.c in Sources */,
+				414036B124AA30B700BCE9B2 /* vp9_treewriter.c in Sources */,
 				416731C5212E0430001280EB /* encodeframe.c in Sources */,
+				4140373224AA30D900BCE9B2 /* vp9_postproc.c in Sources */,
+				414036E724AA30BC00BCE9B2 /* vp9_job_queue.c in Sources */,
 				416731C7212E0430001280EB /* encodeintra.c in Sources */,
 				416731C9212E0430001280EB /* encodemb.c in Sources */,
+				414037B324AB359700BCE9B2 /* vp9_denoiser_neon.c in Sources */,
 				416731CA212E0430001280EB /* encodemv.c in Sources */,
-				41CB0A5B215C90750097B8AA /* encodeopt.asm in Sources */,
 				41C6293D212E2F1E002313D4 /* entropy.c in Sources */,
 				41C6293E212E2F1E002313D4 /* entropymode.c in Sources */,
 				41C6293F212E2F1E002313D4 /* entropymv.c in Sources */,
 				416731B5212E0430001280EB /* ethreading.c in Sources */,
 				41C62940212E2F1E002313D4 /* extend.c in Sources */,
+				4140369224AA30B600BCE9B2 /* vp9_pickmode.c in Sources */,
 				419100B12152EC9000A6F17B /* fastquantizeb_neon.c in Sources */,
+				414036EA24AA30BC00BCE9B2 /* vp9_decodeframe.c in Sources */,
 				41330A18212E2BDF00280939 /* fastssim.c in Sources */,
 				41EED7812152ED8E000F2A16 /* fdct16x16_neon.c in Sources */,
 				41EED7822152ED8E000F2A16 /* fdct32x32_neon.c in Sources */,
+				414036A024AA30B600BCE9B2 /* vp9_aq_cyclicrefresh.c in Sources */,
 				41EED77F2152ED8E000F2A16 /* fdct_neon.c in Sources */,
 				41EED7802152ED8E000F2A16 /* fdct_partial_neon.c in Sources */,
 				41C62941212E2F1E002313D4 /* filter.c in Sources */,
-				41659C082165975700CCBDC2 /* filter_x86.c in Sources */,
 				41C62942212E2F1E002313D4 /* findnearmv.c in Sources */,
 				416731D4212E0430001280EB /* firstpass.c in Sources */,
+				414036D224AA30B700BCE9B2 /* vp9_firstpass.c in Sources */,
+				4140376124AA311500BCE9B2 /* vp9_highbd_iht4x4_add_sse4.c in Sources */,
+				4140372824AA30D900BCE9B2 /* vp9_blockd.c in Sources */,
+				414036E824AA30BC00BCE9B2 /* vp9_decoder.c in Sources */,
 				41CB0A5C215C90750097B8AA /* fwalsh_sse2.asm in Sources */,
+				4140378924AA32DC00BCE9B2 /* vp9_denoiser_sse2.c in Sources */,
 				41330A19212E2BDF00280939 /* fwd_txfm.c in Sources */,
 				41EED7832152ED8E000F2A16 /* fwd_txfm_neon.c in Sources */,
+				4140378C24AA32DC00BCE9B2 /* vp9_dct_intrin_sse2.c in Sources */,
+				4140372D24AA30D900BCE9B2 /* vp9_pred_common.c in Sources */,
 				41C628FC212E2DB0002313D4 /* fwd_txfm_sse2.c in Sources */,
 				41CB0A40215C8DC90097B8AA /* fwd_txfm_ssse3_x86_64.asm in Sources */,
+				4140360924AA24FE00BCE9B2 /* bilinear_filter_sse2.c in Sources */,
 				4175EA0B216596DD00B46390 /* gen_scalers.c in Sources */,
 				41EED7842152ED8E000F2A16 /* hadamard_neon.c in Sources */,
 				41B675B7216599A80040A75D /* highbd_idct16x16_add_sse2.c in Sources */,
 				41B675B9216599A80040A75D /* highbd_idct32x32_add_sse2.c in Sources */,
+				4140378E24AA32DC00BCE9B2 /* vp9_dct_sse2.asm in Sources */,
 				41B675BB216599A80040A75D /* highbd_idct4x4_add_sse2.c in Sources */,
 				41B675BD216599A80040A75D /* highbd_idct8x8_add_sse2.c in Sources */,
+				4140361E24AA2E6800BCE9B2 /* emms_mmx.c in Sources */,
 				41B675BF216599A80040A75D /* highbd_intrapred_intrin_sse2.c in Sources */,
 				41B675C0216599A80040A75D /* highbd_intrapred_intrin_ssse3.c in Sources */,
 				41CB0A30215C8DAB0097B8AA /* highbd_intrapred_sse2.asm in Sources */,
 				41B675C1216599A80040A75D /* highbd_loopfilter_sse2.c in Sources */,
 				41B675C2216599A80040A75D /* highbd_quantize_intrin_sse2.c in Sources */,
+				4140374724AA30DA00BCE9B2 /* vp9_filter.c in Sources */,
+				4140372E24AA30D900BCE9B2 /* vp9_mfqe.c in Sources */,
+				4140368C24AA30B600BCE9B2 /* vp9_ratectrl.c in Sources */,
 				41CB0A42215C8DC90097B8AA /* highbd_sad4d_sse2.asm in Sources */,
+				4140373424AA30D900BCE9B2 /* vp9_common_data.c in Sources */,
 				41CB0A41215C8DC90097B8AA /* highbd_sad_sse2.asm in Sources */,
 				41CB0A43215C8DC90097B8AA /* highbd_subpel_variance_impl_sse2.asm in Sources */,
 				41CB0A44215C8DC90097B8AA /* highbd_variance_impl_sse2.asm in Sources */,
+				414037B824AB35E200BCE9B2 /* sum_squares_neon.c in Sources */,
 				41B675C3216599A80040A75D /* highbd_variance_sse2.c in Sources */,
 				41EED79B2152ED8E000F2A16 /* idct16x16_1_add_neon.c in Sources */,
 				41EED79C2152ED8E000F2A16 /* idct16x16_add_neon.c in Sources */,
+				4140377224AA312100BCE9B2 /* vp9_iht16x16_add_neon.c in Sources */,
+				4140378724AA32DC00BCE9B2 /* vp9_highbd_block_error_intrin_sse2.c in Sources */,
 				41EED79F2152ED8E000F2A16 /* idct32x32_135_add_neon.c in Sources */,
+				4140360624AA24C300BCE9B2 /* copy_sse3.asm in Sources */,
 				41EED79D2152ED8E000F2A16 /* idct32x32_1_add_neon.c in Sources */,
 				41EED79E2152ED8E000F2A16 /* idct32x32_34_add_neon.c in Sources */,
+				4140360524AA24C000BCE9B2 /* copy_sse2.asm in Sources */,
 				41EED7A02152ED8E000F2A16 /* idct32x32_add_neon.c in Sources */,
+				4140361924AA2BB100BCE9B2 /* highbd_idct8x8_add_sse4.c in Sources */,
 				41EED7962152ED8E000F2A16 /* idct4x4_1_add_neon.c in Sources */,
 				41EED7982152ED8E000F2A16 /* idct4x4_add_neon.c in Sources */,
 				41EED7992152ED8E000F2A16 /* idct8x8_1_add_neon.c in Sources */,
 				41EED79A2152ED8E000F2A16 /* idct8x8_add_neon.c in Sources */,
+				414036A824AA30B700BCE9B2 /* vp9_aq_360.c in Sources */,
 				41C62943212E2F1E002313D4 /* idct_blk.c in Sources */,
 				41659C0A2165975700CCBDC2 /* idct_blk_mmx.c in Sources */,
 				419100DC2152ECE700A6F17B /* idct_blk_neon.c in Sources */,
+				4140373824AA30D900BCE9B2 /* vp9_seg_common.c in Sources */,
 				41659C0B2165975700CCBDC2 /* idct_blk_sse2.c in Sources */,
-				419100DD2152ECE700A6F17B /* idct_dequant_0_2x_neon.c in Sources */,
-				419100DE2152ECE700A6F17B /* idct_dequant_full_2x_neon.c in Sources */,
+				414036D024AA30B700BCE9B2 /* vp9_noise_estimate.c in Sources */,
+				4140376224AA311500BCE9B2 /* vp9_idct_intrin_sse2.c in Sources */,
+				4140373C24AA30DA00BCE9B2 /* vp9_thread_common.c in Sources */,
 				41C62944212E2F1E002313D4 /* idctllm.c in Sources */,
 				41CB0A07215C8D760097B8AA /* idctllm_mmx.asm in Sources */,
 				41CB0A08215C8D760097B8AA /* idctllm_sse2.asm in Sources */,
@@ -16534,74 +17364,106 @@
 				41CB0A10215C8D760097B8AA /* iwalsh_sse2.asm in Sources */,
 				416731B6212E0430001280EB /* lookahead.c in Sources */,
 				41330A1C212E2BDF00280939 /* loopfilter.c in Sources */,
+				4140373E24AA30DA00BCE9B2 /* vp9_quant_common.c in Sources */,
+				4140360424AA24BD00BCE9B2 /* block_error_sse2.asm in Sources */,
 				419100D62152ECE200A6F17B /* loopfilter_arm.c in Sources */,
+				414036AB24AA30B700BCE9B2 /* vp9_ethread.c in Sources */,
 				41CB0A05215C8D760097B8AA /* loopfilter_block_sse2_x86_64.asm in Sources */,
 				41C62945212E2F1E002313D4 /* loopfilter_filters.c in Sources */,
 				41EED7A62152ED8E000F2A16 /* loopfilter_neon.c in Sources */,
 				41CB0A06215C8D760097B8AA /* loopfilter_sse2.asm in Sources */,
+				4140361124AA253A00BCE9B2 /* post_proc_sse2.c in Sources */,
 				41C62907212E2DB0002313D4 /* loopfilter_sse2.c in Sources */,
+				4140376D24AA312100BCE9B2 /* vp9_iht4x4_add_neon.c in Sources */,
 				41659C0C2165975700CCBDC2 /* loopfilter_x86.c in Sources */,
 				419100E02152ECE700A6F17B /* loopfiltersimplehorizontaledge_neon.c in Sources */,
 				419100E12152ECE700A6F17B /* loopfiltersimpleverticaledge_neon.c in Sources */,
 				419100E22152ECE700A6F17B /* mbloopfilter_neon.c in Sources */,
+				4140374024AA30DA00BCE9B2 /* vp9_debugmodes.c in Sources */,
 				41C62946212E2F1E002313D4 /* mbpitch.c in Sources */,
+				414036EE24AA30BC00BCE9B2 /* vp9_decodemv.c in Sources */,
 				416731B7212E0430001280EB /* mcomp.c in Sources */,
+				4140369624AA30B600BCE9B2 /* vp9_speed_features.c in Sources */,
 				41C62947212E2F1E002313D4 /* mfqe.c in Sources */,
 				41CB0A09215C8D760097B8AA /* mfqe_sse2.asm in Sources */,
 				41C62948212E2F1E002313D4 /* modecont.c in Sources */,
 				416731B8212E0430001280EB /* modecosts.c in Sources */,
+				414036CA24AA30B700BCE9B2 /* vp9_tokenize.c in Sources */,
 				416731B9212E0430001280EB /* mr_dissim.c in Sources */,
 				41EED7BF2152F1FB000F2A16 /* onyx_if.c in Sources */,
 				41CBAF9A212E039300DE1E1D /* onyxd_if.c in Sources */,
 				416731D6212E0430001280EB /* pickinter.c in Sources */,
 				416731BB212E0430001280EB /* picklpf.c in Sources */,
+				414036C124AA30B700BCE9B2 /* vp9_extend.c in Sources */,
+				414036C424AA30B700BCE9B2 /* vp9_encodemv.c in Sources */,
+				4140376324AA311500BCE9B2 /* vp9_mfqe_sse2.asm in Sources */,
 				41C62949212E2F1E002313D4 /* postproc.c in Sources */,
 				41330A1D212E2BDF00280939 /* prob.c in Sources */,
+				4140361824AA2B9700BCE9B2 /* highbd_idct16x16_add_sse4.c in Sources */,
 				41330A2B212E2BDF00280939 /* psnr.c in Sources */,
 				41330A1E212E2BDF00280939 /* psnrhvs.c in Sources */,
+				4140373924AA30D900BCE9B2 /* vp9_rtcd.c in Sources */,
 				41BA04E52165992800B527E0 /* quant_common.c in Sources */,
 				41330A1F212E2BDF00280939 /* quantize.c in Sources */,
 				41EED7A82152ED8E000F2A16 /* quantize_neon.c in Sources */,
+				4140369D24AA30B600BCE9B2 /* vp9_subexp.c in Sources */,
 				41C6291D212E2DE9002313D4 /* quantize_sse2.c in Sources */,
+				414036C324AA30B700BCE9B2 /* vp9_denoiser.c in Sources */,
 				41C62923212E2DE9002313D4 /* quantize_ssse3.c in Sources */,
 				416731BC212E0430001280EB /* ratectrl.c in Sources */,
+				414036CF24AA30B700BCE9B2 /* vp9_blockiness.c in Sources */,
 				416731BD212E0430001280EB /* rdopt.c in Sources */,
 				41CB0A0D215C8D760097B8AA /* recon_mmx.asm in Sources */,
+				414036D524AA30B700BCE9B2 /* vp9_bitstream.c in Sources */,
 				41CB0A0B215C8D760097B8AA /* recon_sse2.asm in Sources */,
 				41C6294A212E2F1E002313D4 /* reconinter.c in Sources */,
+				414036C024AA30B700BCE9B2 /* vp9_mbgraph.c in Sources */,
 				41C6294B212E2F1E002313D4 /* reconintra.c in Sources */,
 				41C6294C212E2F1E002313D4 /* reconintra4x4.c in Sources */,
 				41C6294D212E2F1E002313D4 /* rtcd.c in Sources */,
-				41EED7DC21531E5F000F2A16 /* sad.c in Sources */,
+				4140373324AA30D900BCE9B2 /* vp9_frame_buffers.c in Sources */,
+				4140373D24AA30DA00BCE9B2 /* vp9_idct.c in Sources */,
+				414036BE24AA30B700BCE9B2 /* vp9_lookahead.c in Sources */,
 				41EED7AA2152ED8E000F2A16 /* sad4d_neon.c in Sources */,
+				4140361F24AA2E9000BCE9B2 /* highbd_idct4x4_add_sse4.c in Sources */,
 				41CB0A4B215C8DC90097B8AA /* sad4d_sse2.asm in Sources */,
+				414036D124AA30B700BCE9B2 /* vp9_skin_detection.c in Sources */,
+				414037AA24AB2FB500BCE9B2 /* copy_c.c in Sources */,
 				41EED7A92152ED8E000F2A16 /* sad_neon.c in Sources */,
 				41CB0A48215C8DC90097B8AA /* sad_sse2.asm in Sources */,
 				41CB0A2A215C8DAB0097B8AA /* sad_sse3.asm in Sources */,
+				4140378D24AA32DC00BCE9B2 /* vp9_quantize_sse2.c in Sources */,
 				41CB0A49215C8DC90097B8AA /* sad_sse4.asm in Sources */,
 				41CB0A4A215C8DC90097B8AA /* sad_ssse3.asm in Sources */,
 				416731BE212E0430001280EB /* segmentation.c in Sources */,
 				41C6294E212E2F1E002313D4 /* setupintrarecon.c in Sources */,
+				4140373F24AA30DA00BCE9B2 /* vp9_reconinter.c in Sources */,
 				419100B22152EC9000A6F17B /* shortfdct_neon.c in Sources */,
+				414036CE24AA30B700BCE9B2 /* vp9_mcomp.c in Sources */,
 				419100E32152ECE700A6F17B /* shortidct4x4llm_neon.c in Sources */,
 				419100E42152ECE700A6F17B /* sixtappredict_neon.c in Sources */,
+				4140362024AA2EB500BCE9B2 /* highbd_idct32x32_add_sse4.c in Sources */,
 				41330A21212E2BDF00280939 /* skin_detection.c in Sources */,
 				41CB0A4C215C8DC90097B8AA /* ssim_opt_x86_64.asm in Sources */,
 				41EED7AC2152ED8E000F2A16 /* subpel_variance_neon.c in Sources */,
+				414036CD24AA30B700BCE9B2 /* vp9_svc_layercontext.c in Sources */,
 				41CB0A4D215C8DC90097B8AA /* subpel_variance_sse2.asm in Sources */,
 				41CB0A04215C8D760097B8AA /* subpixel_mmx.asm in Sources */,
 				41CB0A0F215C8D760097B8AA /* subpixel_sse2.asm in Sources */,
+				4140376424AA311500BCE9B2 /* vp9_highbd_iht8x8_add_sse4.c in Sources */,
 				41CB0A59215C90500097B8AA /* subpixel_ssse3.asm in Sources */,
 				41330A23212E2BDF00280939 /* subtract.c in Sources */,
 				41EED7AD2152ED8E000F2A16 /* subtract_neon.c in Sources */,
 				41CB0A4E215C8DC90097B8AA /* subtract_sse2.asm in Sources */,
 				41330A24212E2BDF00280939 /* sum_squares.c in Sources */,
+				414036EB24AA30BC00BCE9B2 /* vp9_dsubexp.c in Sources */,
 				41C62928212E2DE9002313D4 /* sum_squares_sse2.c in Sources */,
-				418B14DF2165959F0046E03F /* svc_encodeframe.c in Sources */,
 				41C6294F212E2F1E002313D4 /* swapyv12buffer.c in Sources */,
 				41659C0E2165977500CCBDC2 /* systemdependent.c in Sources */,
 				416731CF212E0430001280EB /* temporal_filter.c in Sources */,
 				41CB0A5D215C90750097B8AA /* temporal_filter_apply_sse2.asm in Sources */,
+				414036C624AA30B700BCE9B2 /* vp9_encodeframe.c in Sources */,
+				4140361A24AA2BBC00BCE9B2 /* quantize_sse4.c in Sources */,
 				41CBAF9B212E039300DE1E1D /* threading.c in Sources */,
 				416731BF212E0430001280EB /* tokenize.c in Sources */,
 				41C62950212E2F1E002313D4 /* treecoder.c in Sources */,
@@ -16610,25 +17472,36 @@
 				41EED7B02152ED8E000F2A16 /* variance_neon.c in Sources */,
 				41C62922212E2DE9002313D4 /* variance_sse2.c in Sources */,
 				41659C0D2165975700CCBDC2 /* vp8_asm_stubs.c in Sources */,
+				4140374824AA30DA00BCE9B2 /* vp9_scan.c in Sources */,
 				4162BA37216596160044F344 /* vp8_cx_iface.c in Sources */,
 				4162BA38216596160044F344 /* vp8_dx_iface.c in Sources */,
 				41659C10216597A100CCBDC2 /* vp8_enc_stubs_sse2.c in Sources */,
+				414037B524AB359700BCE9B2 /* vp9_frame_scale_neon.c in Sources */,
 				41C62951212E2F1E002313D4 /* vp8_loopfilter.c in Sources */,
 				419100E52152ECE700A6F17B /* vp8_loopfilter_neon.c in Sources */,
+				414036C224AA30B700BCE9B2 /* vp9_encoder.c in Sources */,
+				414036AE24AA30B700BCE9B2 /* vp9_multi_thread.c in Sources */,
+				414037B624AB359700BCE9B2 /* vp9_quantize_neon.c in Sources */,
+				4140369724AA30B600BCE9B2 /* vp9_encodemb.c in Sources */,
+				414036C724AA30B700BCE9B2 /* vp9_alt_ref_aq.c in Sources */,
 				416731D7212E0430001280EB /* vp8_quantize.c in Sources */,
 				41659C11216597A100CCBDC2 /* vp8_quantize_sse2.c in Sources */,
 				41659C12216597A100CCBDC2 /* vp8_quantize_ssse3.c in Sources */,
 				419100B32152EC9000A6F17B /* vp8_shortwalsh4x4_neon.c in Sources */,
 				41C62952212E2F1E002313D4 /* vp8_skin_detection.c in Sources */,
-				41C6290B212E2DB0002313D4 /* vpx_asm_stubs.c in Sources */,
+				4140368A24AA30B600BCE9B2 /* vp9_cost.c in Sources */,
 				418B14E02165959F0046E03F /* vpx_codec.c in Sources */,
+				4140368D24AA30B600BCE9B2 /* vp9_temporal_filter.c in Sources */,
 				41CB0A11215C8D940097B8AA /* vpx_config.asm in Sources */,
+				4140363224AA306600BCE9B2 /* vp9_dx_iface.c in Sources */,
 				4129408A212E0CC400AD95E7 /* vpx_config.c in Sources */,
 				41330A26212E2BDF00280939 /* vpx_convolve.c in Sources */,
 				41EED7B82152ED8E000F2A16 /* vpx_convolve8_neon.c in Sources */,
+				4140360E24AA253A00BCE9B2 /* vpx_subpixel_4t_intrin_sse2.c in Sources */,
 				41EED7B22152ED8E000F2A16 /* vpx_convolve_avg_neon.c in Sources */,
 				41EED7B42152ED8E000F2A16 /* vpx_convolve_copy_neon.c in Sources */,
 				41CB0A4F215C8DC90097B8AA /* vpx_convolve_copy_sse2.asm in Sources */,
+				4140378824AA32DC00BCE9B2 /* highbd_temporal_filter_sse4.c in Sources */,
 				41EED7B52152ED8E000F2A16 /* vpx_convolve_neon.c in Sources */,
 				418B14E12165959F0046E03F /* vpx_decoder.c in Sources */,
 				41330A27212E2BDF00280939 /* vpx_dsp_rtcd.c in Sources */,
@@ -16638,14 +17511,19 @@
 				418B14E32165959F0046E03F /* vpx_image.c in Sources */,
 				41330A30212E2BF500280939 /* vpx_mem.c in Sources */,
 				4175EA0C216596DD00B46390 /* vpx_scale.c in Sources */,
+				414036D924AA30B700BCE9B2 /* vp9_frame_scale.c in Sources */,
+				4140369524AA30B600BCE9B2 /* vp9_dct.c in Sources */,
+				4140376024AA311500BCE9B2 /* vp9_highbd_iht16x16_add_sse4.c in Sources */,
 				41330A34212E2C1F00280939 /* vpx_scale_rtcd.c in Sources */,
 				41EED7BA2152ED8E000F2A16 /* vpx_scaled_convolve8_neon.c in Sources */,
 				41C6290D212E2DB0002313D4 /* vpx_subpixel_8t_intrin_ssse3.c in Sources */,
+				4140375024AA30DA00BCE9B2 /* vp9_entropy.c in Sources */,
 				41CB0A51215C8DC90097B8AA /* vpx_subpixel_8t_sse2.asm in Sources */,
 				41CB0A52215C8DC90097B8AA /* vpx_subpixel_8t_ssse3.asm in Sources */,
 				41CB0A53215C8DC90097B8AA /* vpx_subpixel_bilinear_sse2.asm in Sources */,
 				41CB0A54215C8DC90097B8AA /* vpx_subpixel_bilinear_ssse3.asm in Sources */,
 				41BAE3C0212E2C5B00E22482 /* vpx_thread.c in Sources */,
+				414036AC24AA30B700BCE9B2 /* vp9_non_greedy_mv.c in Sources */,
 				41BAE3C2212E2C5B00E22482 /* vpx_write_yuv_frame.c in Sources */,
 				4175EA0D216596DD00B46390 /* yv12config.c in Sources */,
 				4175EA0E216596DD00B46390 /* yv12extend.c in Sources */,
@@ -17944,6 +18822,7 @@
 				5CDD83EC1E439A6F00621E92 /* packet_buffer.cc in Sources */,
 				5CDD888C1E43BE3C00621E92 /* packet_loss_stats.cc in Sources */,
 				5CDD8FA41E43CCBE00621E92 /* packet_router.cc in Sources */,
+				4140362324AA300700BCE9B2 /* vp9_impl.cc in Sources */,
 				5CDD8AAC1E43C00F00621E92 /* packet_source.cc in Sources */,
 				4131C1EA234B8A4B0028A615 /* packet_transport_internal.cc in Sources */,
 				4131C289234B8CC40028A615 /* payload_type_mapper.cc in Sources */,
@@ -18090,6 +18969,7 @@
 				41A08BBD21269553001D5D7B /* rtc_event_ice_candidate_pair_config.cc in Sources */,
 				4131C475234C81B60028A615 /* rtc_event_log.cc in Sources */,
 				4131C474234C81B60028A615 /* rtc_event_log_factory.cc in Sources */,
+				414035FD24AA1F5500BCE9B2 /* vp9_frame_buffer_pool.cc in Sources */,
 				41924179212738FB00634FCF /* rtc_event_log_impl.cc in Sources */,
 				4131C2F9234B8DC20028A615 /* rtc_event_log_output_file.cc in Sources */,
 				4131C533234C8B190028A615 /* rtc_event_probe_cluster_created.cc in Sources */,
@@ -18378,6 +19258,7 @@
 				5CDD88E21E43BE3D00621E92 /* ulpfec_header_reader_writer.cc in Sources */,
 				5CDD88E41E43BE3D00621E92 /* ulpfec_receiver_impl.cc in Sources */,
 				4131C280234B8CC40028A615 /* unhandled_packets_buffer.cc in Sources */,
+				4140362224AA300300BCE9B2 /* vp9.cc in Sources */,
 				4131C144234B89E20028A615 /* unique_id_generator.cc in Sources */,
 				41893A12242A757A007FDC41 /* unique_timestamp_counter.cc in Sources */,
 				5CDD8BF11E43C2B500621E92 /* unpack_bits.c in Sources */,
@@ -18453,7 +19334,6 @@
 				5CDD83851E439A3500621E92 /* vp8_header_parser.cc in Sources */,
 				4131C3D9234C79D10028A615 /* vp8_temporal_layers.cc in Sources */,
 				4131C3D7234C79D10028A615 /* vp8_temporal_layers_factory.cc in Sources */,
-				5CDD8C141E43C3B400621E92 /* vp9_noop.cc in Sources */,
 				4131BF9A234B88A60028A615 /* vp9_profile.cc in Sources */,
 				5CDD8BF31E43C2B500621E92 /* vq3.c in Sources */,
 				5CDD8BF51E43C2B500621E92 /* vq4.c in Sources */,